Nowadays, solutions based on data science, machine learning and artificial intelligence are integrated into industry, the economy and healthcare, embedded in the devices around us, and have become part of our lives. And since the decisions they make are increasingly important to us, the question of their interpretability and of the explainability of their behavior becomes crucial.

There are two main types of approaches to achieving explainability of algorithms:

  1. Approaches that explain the decisions of existing models.
  2. Approaches that modify the model and/or its training process by incorporating the ability to explain.

Methods from the first group apply when the AI algorithm is fixed; they give insights into why a certain output has been produced for a given input.

Approaches of the second group change the design of the algorithm so that it produces explanations together with its predictions, or force the algorithm to produce explainable solutions.
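To make the first kind of approach concrete, here is a minimal sketch of a post-hoc explanation, using permutation importance as one illustrative technique (the source does not single out any particular method). The model only needs to be queried for predictions, so the explanation works on a fixed "black box"; the toy data and model below are invented for the example.

```python
import random

# Toy dataset: the target depends strongly on feature 0, weakly on feature 1.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [3 * x0 + 0.1 * x1 for x0, x1 in X]

# A fixed "black-box" model: we only use it through its predictions,
# exactly as a post-hoc explanation method would.
def model(row):
    return 3 * row[0] + 0.1 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(targets)

baseline = mse(X, y)

# Permutation importance: shuffle one feature column at a time; the
# resulting increase in error measures how much the model relies on it.
importances = []
for j in range(2):
    col = [r[j] for r in X]
    random.shuffle(col)
    X_perm = [r[:j] + [v] + r[j + 1:] for r, v in zip(X, col)]
    importances.append(mse(X_perm, y) - baseline)

print(importances)  # feature 0 matters far more than feature 1
```

This is exactly the "fixed algorithm" setting described above: the explanation is computed after the fact, without touching the model or its training.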

In the video below, Pavlo Mozharovskyi, professor at Télécom Paris, Institut Polytechnique de Paris, explains the two general approaches to the explainability of artificial intelligence.

_____________________________________________________________________

By Pavlo Mozharovskyi, Télécom Paris – Institut Polytechnique de Paris

_____________________________________________________________________

Illustration: Designed by starline / Freepik