Jose Alberto Arcos Sánchez
05 Apr 2019
Artificial Intelligence
The danger of failing to interpret the predictions of your machine learning models
In recent months, BBVA Next Technologies has devoted some effort to researching tools and techniques for interpreting machine learning models. These techniques are very useful for understanding a model's predictions (or explaining them to others), for extracting business insights from a model that has captured underlying patterns of customer interest, and for debugging models to make sure they take the right decisions for the right reasons.
In this article we explain how we applied these techniques to avoid deploying to production flawed models that, according to standard validation methods, seemed entirely correct.
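As a minimal flavour of what "interpreting a model" can mean in practice (this is a generic sketch, not the specific tooling or data used in the work described later), one simple technique is permutation importance: shuffle each feature on held-out data and see how much the model's score degrades. The dataset and model below are placeholders for illustration only.

```python
# Generic illustration of model interpretation via permutation importance.
# The dataset and model are placeholders, not those of the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on the held-out set and measure the drop in score:
# the features whose permutation hurts performance most are the ones
# actually driving the model's predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

If the features that turn out to drive the predictions are not ones that make sense for the problem, that is exactly the kind of warning sign this article is about.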