In the past, I have written and taught quite a bit about image classification with Keras (e.g. here). Text classification isn’t too different in terms of using the Keras principles to train a sequential or functional model. You can even use Convolutional Neural Nets (CNNs) for text classification. What is very different, however, is how to prepare raw text data for modeling. When you look at the IMDB example from the Deep Learning with R book, you get a great explanation of how to train the model.
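To give a rough idea of what that preparation step involves, here is a minimal plain-Python sketch of turning raw text into padded integer sequences — a hypothetical, simplified stand-in for what tools like Keras’ `Tokenizer` and `pad_sequences` do, not the book’s code:

```python
def build_vocab(texts, num_words=10):
    """Map the most frequent words to small integer indices (0 is reserved for padding)."""
    counts = {}
    for t in texts:
        for w in t.lower().split():
            counts[w] = counts.get(w, 0) + 1
    # most frequent words get the lowest indices; ties broken alphabetically
    ranked = sorted(counts, key=lambda w: (-counts[w], w))
    return {w: i + 1 for i, w in enumerate(ranked[:num_words])}

def texts_to_padded_sequences(texts, vocab, maxlen=5):
    """Replace words with their indices, truncate, and pad with zeros to a fixed length."""
    seqs = []
    for t in texts:
        seq = [vocab[w] for w in t.lower().split() if w in vocab][:maxlen]
        seq += [0] * (maxlen - len(seq))  # pad so every sequence has the same length
        seqs.append(seq)
    return seqs

texts = ["the movie was great", "the movie was terrible"]
vocab = build_vocab(texts)
print(texts_to_padded_sequences(texts, vocab))
```

The fixed-length integer matrices this produces are what you can then feed into an embedding layer and the rest of the model.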
I am very happy to announce that (after many months) my interactive course on Hyperparameter Tuning in R has now been officially launched on DataCamp! Course Description For many machine learning problems, simply running a model out-of-the-box and getting a prediction is not enough; you want the best model with the most accurate prediction. One way to perfect your model is with hyperparameter tuning, which means optimizing the settings for that specific model.
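As a toy illustration of what “optimizing the settings” can mean in practice (this is not the course material), a simple grid search tries every combination of candidate values and keeps the one with the best validation score; `validation_error` below is a hypothetical stand-in for training and evaluating a real model:

```python
import itertools

def validation_error(learning_rate, n_trees):
    # stand-in for "train the model with these settings, score it on a validation set";
    # in a real workflow this would fit e.g. a gradient boosting model
    return (learning_rate - 0.1) ** 2 + (n_trees - 100) ** 2 * 1e-6

grid = {
    "learning_rate": [0.01, 0.1, 0.3],
    "n_trees": [50, 100, 200],
}

# try all combinations and keep the one with the lowest validation error
best = min(
    itertools.product(*grid.values()),
    key=lambda params: validation_error(*params),
)
print(dict(zip(grid.keys(), best)))
```

Grid search is the most basic strategy; random search and more adaptive methods cover the same idea with fewer model fits.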
These are slides from a lecture I gave at the School of Applied Sciences in Münster. In this lecture, I talked about Real-World Data Science and showed examples of Fraud Detection, Customer Churn & Predictive Maintenance. Real-World Data Science (Fraud Detection, Customer Churn & Predictive Maintenance) by Shirin Glander The slides were created with xaringan.
As with the other videos from our codecentric.ai Bootcamp (Random Forests, Neural Nets & Gradient Boosting), I am again sharing an English version of the script (plus R code) for this most recent addition on How Convolutional Neural Nets work. In this lesson, I am going to explain how computers learn to see; meaning, how do they learn to recognize images or objects in images? One of the most commonly used approaches to teach computers “vision” is Convolutional Neural Nets.
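The core operation behind that “vision” — sliding a small filter over an image and summing elementwise products — can be sketched in a few lines of plain Python (a simplified illustration, not the bootcamp code):

```python
def convolve2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image,
    multiply elementwise and sum (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter responds where pixel values change from left to right
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1], [-1, 1]]
print(convolve2d(image, edge_kernel))  # large values mark the edge between 0s and 1s
```

A CNN stacks many such filters — and, crucially, learns their weights from data instead of hand-crafting them like the edge filter above.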
This is code that accompanies a book chapter on customer churn that I have written for the German dpunkt Verlag. The book is in German and will probably appear in February: https://www.dpunkt.de/buecher/13208/9783864906107-data-science.html. The code you find below can be used to recreate all figures and analyses from this book chapter. Because the content is exclusive to the book, my descriptions around the code had to be minimal. But I’m sure you can get the gist, even without the book.
Update: There is now a recording of the meetup up on YouTube. Here you find my slides from the TWiML & AI EMEA Meetup about Trust in ML models, where I presented the Anchors paper by Carlos Guestrin et al. I have also just written two articles for the German IT magazine iX about the same topic of Explaining Black-Box Machine Learning Models: a short article in the iX 12/2018
In recent videos, I covered Random Forests and Neural Nets as part of the codecentric.ai Bootcamp. In the most recent video, I covered Gradient Boosting and XGBoost. You can find the video on YouTube and the slides on slides.com. Both are again in German with code examples in Python. But below, you find the English version of the content, plus code examples in R for caret, xgboost and h2o. :-)
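For a rough intuition of the gradient boosting idea itself (a toy sketch, not the caret/xgboost/h2o code from the post): for squared error, each round fits a weak learner to the current residuals and adds a shrunken version of its prediction. Here the “weak learner” is the simplest possible one — a single constant:

```python
def boost_constants(y, n_rounds=50, learning_rate=0.3):
    """Toy gradient boosting for squared error: each round fits a constant
    (the mean residual) and adds a learning-rate-scaled step to the predictions."""
    pred = [0.0] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        # weak learner: predict the mean residual everywhere
        step = sum(residuals) / len(residuals)
        pred = [pi + learning_rate * step for pi in pred]
    return pred

y = [3.0, 5.0, 7.0]
print(boost_constants(y))  # predictions converge toward the mean of y
```

Real gradient boosting replaces the constant with a small decision tree per round, so different regions of the feature space get different corrections — that is what xgboost and h2o implement efficiently.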