🌎 Community-curated list of tech conference talks, videos, slides and the like — from all around the world

🎤

As algorithms become more complex (e.g., ensemble models such as XGBoost and Random Forests, or neural networks), it becomes harder to explain the predictions they make. These “black box” models may produce more accurate results, but they can be hard to operationalize in the real world because it is difficult to explain to business decision makers how a model arrived at a prediction. In certain cases, such as credit scoring, model interpretability is crucial, particularly for regulatory compliance. This talk will highlight Python tools and libraries such as LIME, ELI5 and Skater that allow data scientists to explain how their models arrive at their predictions.
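As a rough illustration of the kind of workflow the talk covers, here is a minimal sketch of explaining one prediction of a Random Forest with LIME. The dataset, model, and parameter choices are illustrative assumptions, not taken from the talk itself; it assumes the `lime` and `scikit-learn` packages are installed.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a "black box" ensemble model (illustrative dataset and settings).
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME fits a simple, interpretable surrogate model locally around one instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain the model's prediction for a single row: which features pushed
# the predicted probability up or down, and by roughly how much.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a per-feature weight for the chosen instance, which is the kind of human-readable evidence a data scientist can show to a business decision maker or a regulator.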