🌎 Community-curated list of tech conference talks, videos, slides and the like — from all around the world


During a fictional match between a human and an AI algorithm, Virginie reviews three traps to avoid when working with Artificial Intelligence and how to tackle them: the accuracy paradox (when you have imbalanced classes), the choice of the evaluation function in behavioral tasks, and spotting overfitting in behavioral tasks.

More about those traps:

- The accuracy paradox: with imbalanced classes, the algorithm can get stuck classifying everything into the majority class. Several ways to tackle the problem are presented: gather more data, resample your data, change the metric, add a penalty, change the algorithm, or use data augmentation.
- The choice of the evaluation function: many metrics exist for classification and regression tasks, but in behavioral tasks it can be difficult to find the right one. One solution consists in defining the function as if you were explaining it to a child.
- Spotting overfitting: in behavioral tasks, since you have no dataset and no validation error, it can be difficult to tell whether you are overfitting. One way to avoid it is to add randomness to the simulations or to test the models on real robots as soon as possible.
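The accuracy paradox described above can be shown in a few lines. This is a minimal, illustrative sketch (the class counts and helper functions are made up for the example, not taken from the talk): a classifier that predicts only the majority class scores high accuracy while missing every minority example.

```python
# Illustrative sketch of the accuracy paradox on an imbalanced dataset.
# The 95/5 class split and the helper functions are assumptions for this example.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive):
    """Fraction of actual positives that were correctly predicted."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    actual = sum(t == positive for t in y_true)
    return tp / actual if actual else 0.0

# 95 negatives, 5 positives -- a "stuck" classifier predicts the majority class only.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))   # 0.95 -- looks great
print(recall(y_true, y_pred, 1))  # 0.0  -- yet every positive case is missed
```

Switching the metric (e.g. to per-class recall, as here, or F1) is one of the remedies the talk lists, alongside resampling and data augmentation.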