Algorithms are increasingly present in our daily lives, whether as decision-support algorithms (recommendation or scoring algorithms) or as autonomous algorithms embedded in intelligent machines (autonomous vehicles). Deployed across many sectors and industries for their efficiency, their results are increasingly discussed and contested. In particular, they are accused of being black boxes and of leading to discriminatory practices based on gender or ethnic origin.

Researchers Patrice Bertail (Université Paris Nanterre), David Bounie, Stephan Clémençon and Patrick Waelbroeck (Télécom ParisTech) describe the biases linked to algorithms and outline ways to remedy them. They focus in particular on the results of algorithms in relation to equity objectives and their consequences in terms of discrimination.

Three main questions motivate this article: by what mechanisms can algorithmic biases occur? Can they be avoided? And finally, can they be corrected or limited?

In the first part, the authors describe how a statistical learning algorithm works. They then examine the origins of these biases, which can be cognitive, statistical or economic in nature, before presenting some promising statistical or algorithmic approaches to correcting them. The paper concludes with a discussion of the main societal issues raised, such as interpretability, explainability, transparency and accountability.

This work was carried out with the financial support of the Abeona Foundation. Under the aegis of the Fondation de France, the Abeona Foundation aims to support multidisciplinary research projects using data science and to catalyze reflection on the subject of equity in artificial intelligence.

The full text of this article is available for download in French.