Teaching engineers to understand ethical and regulatory constraints, and propose solutions

The Operational AI Ethics team teaches courses for second- and third-year engineering students on how high-risk AI use cases can raise legal, ethical, and societal concerns.

Our "Law and Ethics of AI" courses are offered in the traditional second- and third-year curriculum, and in the Data Science and AI master's programs at IP Paris.

The main subjects covered in these courses include:

AI governance and accountability

  • Students will learn how design choices give rise to trade-offs, for example between fairness and accuracy. Each trade-off needs to be discussed with internal stakeholders, and each decision documented so that it can be justified to internal and external stakeholders. Responsibility must be clearly assigned for each design choice with potential consequences for safety or for human rights. Students will learn how accountability and risk management frameworks operate, using the AI lifecycle approach.

AI and fundamental rights

  • AI systems, particularly those deployed by governments, often create tensions with fundamental rights. Given different use cases, students will be able to identify which public-interest objectives and fundamental rights are potentially in tension, and how those tensions can be resolved through a proportionality test. Every government system must satisfy this test, to ensure that interferences with fundamental rights are not excessive. In many cases, systems deployed by private-sector entities must also be proportionate: the impact on fundamental rights must be assessed and measures put in place to ensure that interferences are not excessive.

GDPR and AI Act

  • Computer engineers need to know the basics of the GDPR and the European AI Act. Even if they work outside Europe, engineers need to understand these European texts on data privacy and AI safety because they represent international best practice, and many other countries adopt legislation inspired by them.

Fairness and bias

  • Students of data science will have learned about bias from a statistical point of view. Our classes put fairness into different operational contexts, such as granting loans, predicting recidivism, or facial recognition. In addition to the impossibility theorem, we study how different definitions of fairness may be justified in different circumstances. When studying bias and random errors, we also discuss whether it is better in a given situation to reduce false positives or false negatives, knowing that you often cannot reduce both at the same time. Human cognitive bias is also studied, to frame discussions around how fair an algorithm needs to be, compared to a human, in order to be acceptable. Is there an acceptable level of residual bias, and if so, how do we determine what that level should be?
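
The false positive/false negative trade-off mentioned above can be made concrete with a minimal sketch. The scores, labels, and thresholds below are invented toy data, not material from the course: raising the decision threshold of a classifier lowers the false positive rate but raises the false negative rate, so reducing one kind of error comes at the cost of the other.

```python
# Toy illustration of the FPR/FNR trade-off (all data invented).
scores = [0.1, 0.3, 0.35, 0.5, 0.55, 0.7, 0.8, 0.9]  # model scores
labels = [0,   0,   1,    0,   1,    1,   0,   1]     # 1 = actual positive

def error_rates(threshold):
    """Return (false positive rate, false negative rate) at a threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp / labels.count(0), fn / labels.count(1)

for t in (0.3, 0.5, 0.7):
    fpr, fnr = error_rates(t)
    print(f"threshold={t}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# As the threshold rises, FPR falls while FNR rises.
```

Which error matters more depends on the operational context the courses discuss: a false positive in recidivism prediction and a false negative in loan granting harm very different people.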

Explainability

  • What is explainability for? Who should receive explanations, and are we sure those explanations are effective? Students will learn the differences between white-box, inherently explainable models and black-box models with post-hoc explainability, and the limitations of post-hoc explainability will be explored. New developments in hybrid models, with explainability built into the learning process, will be mentioned. Global and local explanations, and the requirements of the AI Act, will be discussed.
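
To illustrate the white-box case and the idea of a local explanation, here is a minimal sketch. The linear model, feature names, and weights are all hypothetical, chosen only for illustration: in a linear scoring model, a local explanation for one decision is simply each feature's contribution to that particular score.

```python
# Hypothetical white-box model: a linear credit score (weights invented).
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

def score(x):
    """Overall score: weighted sum of the features."""
    return sum(weights[f] * x[f] for f in weights)

def local_explanation(x):
    """Per-feature contributions to this particular decision."""
    return {f: weights[f] * x[f] for f in weights}

applicant = {"income": 1.2, "debt": 0.5, "years_employed": 2.0}
print(score(applicant))              # the decision
print(local_explanation(applicant))  # why: income and tenure help, debt hurts
```

For a black-box model no such direct decomposition exists, which is why post-hoc methods approximate explanations from the outside, with the limitations discussed in the course.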

Sociology of AI

  • Who talks about AI? Who, and what, carries weight in the social space defined around AI? How is knowledge constructed in AI and data science? We explore both the cartography of actors and their tenets, and the power dynamics they install and challenge. We will also briefly discuss how users react to the use of algorithms, i.e. the ethnography of algorithms.

Current courses

BGDIA 702, MAP670P, MODS 212, DataAI 951: Law & ethics of artificial intelligence

The following subjects are taught:

  • Introduction to fundamental rights, ethics, and the balancing of conflicting objectives (the proportionality test) in a high-risk AI project, using debates around actual use cases.
  • Review of different kinds of AI (symbolic, machine learning, foundation models, GPT-4) and the risks associated with each kind of model.
  • Fairness, algorithmic bias, and human bias. Algorithms are biased, but often less so than humans. Perfect fairness is an unattainable objective, whether for a human or for a machine. How do we define an imperfect, but acceptable, level of fairness?
  • Human control of algorithmic systems: what is "effective" human control, and what purpose(s) does it serve?
  • Explainability of AI: do we necessarily need to understand the functioning of a model in order to deploy it?
  • Mapping social actors in the AI ecosystem, their influences, and the surrounding controversies.
  • What is "ethical" AI, and what purpose does an ethical debate serve?
  • The AI Act and GDPR for a high-risk AI use case, such as facial recognition.
  • Standards, and the evaluation of trustworthy AI.
  • Accountability and risk management for high-risk AI systems.