This is code that accompanies a book chapter on customer churn that I have written for the German dpunkt Verlag. The book is in German and will probably appear in February: https://www.dpunkt.de/buecher/13208/9783864906107-data-science.html. The code below can be used to recreate all figures and analyses from this book chapter. Because the content is exclusive to the book, my descriptions around the code had to be minimal. But I'm sure you can get the gist, even without the book.

Continue reading

Update: There is now a recording of the meetup available on YouTube. Here you will find my slides from the TWiML & AI EMEA Meetup about Trust in ML models, where I presented the Anchors paper by Carlos Guestrin et al. I have also just written two articles for the German IT magazine iX about the same topic of Explaining Black-Box Machine Learning Models: a short article in iX 12/2018

Continue reading

In recent videos, I covered Random Forests and Neural Nets as part of the codecentric.ai Bootcamp. In the most recent video, I covered Gradient Boosting and XGBoost. You can find the video on YouTube and the slides on slides.com. Both are again in German, with code examples in Python. Below, however, you will find the English version of the content, plus code examples in R for caret, xgboost and h2o. :-)
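
The post itself walks through the full caret, xgboost and h2o code. As a quick taste, here is a minimal sketch of gradient boosting with the xgboost R package, using the agaricus demo data that ships with the package (not the data from the post):

```r
# Minimal gradient boosting sketch with xgboost in R (illustrative only)
library(xgboost)

# example data shipped with the xgboost package
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)

params <- list(
  objective = "binary:logistic",  # binary classification
  eta = 0.1,                      # learning rate (shrinkage)
  max_depth = 3                   # depth of each boosted tree
)

bst <- xgb.train(params = params, data = dtrain, nrounds = 50)

# predicted probabilities for the training data
head(predict(bst, dtrain))
```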

Continue reading

In my last blog post about Random Forests, I introduced the codecentric.ai Bootcamp. The next part I published was about Neural Networks and Deep Learning. Every video of our bootcamp will have example code and tasks to promote hands-on learning. While the practical parts of the bootcamp will be using Python, below you will find the English R version of this Neural Nets Practical Example, where I explain how neural nets learn and how the concepts and techniques translate to training neural nets in R with the H2O Deep Learning function.
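
As a preview of that practical example, here is a minimal sketch of h2o.deeplearning() in R; it assumes the built-in iris data stands in for the bootcamp's example data set:

```r
# Minimal H2O Deep Learning sketch (illustrative only, not the bootcamp code)
library(h2o)
h2o.init()

iris_hf <- as.h2o(iris)
splits <- h2o.splitFrame(iris_hf, ratios = 0.8, seed = 42)
train <- splits[[1]]
test  <- splits[[2]]

dl_model <- h2o.deeplearning(
  x = 1:4,                 # predictor columns
  y = "Species",           # response column
  training_frame = train,
  hidden = c(16, 16),      # two hidden layers with 16 units each
  epochs = 20,
  seed = 42
)

# evaluate on the hold-out split
h2o.performance(dl_model, newdata = test)
```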

Continue reading

A few colleagues from codecentric.ai and I are currently developing a free online course about machine learning and deep learning. As part of this course, I am creating a series of videos about machine learning basics - the first video in this series was about Random Forests. You can find the video on YouTube, but as of now it is only available in German. The same goes for the slides, which are also currently German-only.

Continue reading

During my stay in London for the m3 conference, I also gave a talk at the R-Ladies London Meetup on Tuesday, October 16th, about one of my favorite topics: Interpretable Deep Learning with R, Keras and LIME. Keras is a high-level, open-source deep learning framework that by default runs on top of TensorFlow. It is minimalistic, efficient and highly flexible because it uses a modular layer system to define, compile and fit neural networks.
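
To illustrate that modular define/compile/fit workflow, here is a minimal sketch with the keras R package on made-up dummy data (not the model from the talk):

```r
# Minimal define/compile/fit sketch with keras in R (illustrative only)
library(keras)

# define: stack layers in a sequential model
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# compile: choose optimizer, loss and metrics
model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)

# fit: train on dummy data just to show the call
x <- matrix(rnorm(1000), ncol = 10)
y <- sample(0:1, 100, replace = TRUE)

history <- model %>% fit(x, y, epochs = 5, batch_size = 32, verbose = 0)
```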

Continue reading

These are my sketchnotes for Sam Charrington's podcast This Week in Machine Learning and AI about Evaluating Model Explainability Methods with Sara Hooker.

Sketchnotes from TWiMLAI talk: Evaluating Model Explainability Methods with Sara Hooker

You can listen to the podcast here. From the podcast's description: "In this, the first episode of the Deep Learning Indaba series, we're joined by Sara Hooker, AI Resident at Google Brain. I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks."

Continue reading

Dr. Shirin Glander

Biologist turned Bioinformatician turned Data Scientist

Münster, Germany