Our objective

Through the publications of our professors and PhD students, we explore the social, ethical and legal issues associated with AI

  • Explore the social context of AI systems
  • Examine how regulatory principles from the AI Act and the GDPR can be implemented in complex use cases
  • Translate vague concepts such as “human-centric” and “trustworthy” AI into operational characteristics
  • Contribute to discussions on standards for trustworthy AI
  • Promote human-centred design methodology for explainable AI (XAI)
  • Explore innovative data sharing and data protection approaches
  • Propose improvements to regulatory frameworks for AI

Our partners

Our research is made possible thanks to collaboration with private and public sector actors

AI and society

Tiphaine Viard, Associate Professor

Artificial intelligence brings together actors, both individuals and institutions, with a heterogeneous set of positions and principles. These include researchers in both the exact and social sciences, regulatory institutions, companies developing or using AI systems, but also news outlets (specialised or not), civil society groups criticizing their implementations, and so on. It is a construct with social and technical dimensions, which we aim to study as such.

The recent AI Act emphasizes the need “to clarify the roles of actors who can contribute to the conceptions of AI systems”. It advocates for the co-regulation of AI based on impact assessments; assessing the impact of AI systems requires a fine-grained understanding of the actors and their respective positions. In this context, we are interested in identifying the actors, their tenets and how they evolve over time, in order to map the social world of artificial intelligence.

I am particularly interested in mixed-methods approaches that reduce the (perceived) schism between quantitative and qualitative methods, as well as in action-research methods.

  • Understanding the social world of AI: using interviews and computational tools, I want to understand how the discourses around AI can be broken down into their core elements, and how critically challenging these core elements can lead to power shifts;
  • Understanding semantic shifts: how do words and concepts such as explainability, fairness, and even artificial intelligence, shift meaning over time?
  • Understanding power dynamics: who holds power, and how is knowledge constructed? How can we critically assess and challenge the reinforcement of existing power structures?
  • Graph modelling: How can we finely model relations between actors, institutions and concepts? How can we extract knowledge from these models? (A minimal sketch follows the publication list below.)
  • Some publications
    • Viard, T., Gornet, M., & Maxwell, W. (2023, December). Reading the drafts of the AI Act with a technical lens. In NeurIPS 2023 Workshop on Regulatable ML.
    • Delarue, S., Viard, T., & Dessalles, J. L. (2023, November). Unexpected Attributed Subgraphs: a Mining Algorithm. In IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).
    • Bertrand, A., Viard, T., Belloum, R., Eagan, J. R., & Maxwell, W. (2023, April). On Selective, Mutable and Dialogic XAI: a Review of What Users Say about Different Types of Interactive Explanations. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-21).
    • Mapping AI Ethics: a meso-scale analysis of charters and manifestos
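To make the graph-modelling question above concrete, here is a minimal sketch of how actors, institutions and concepts can be represented as an attributed graph and queried with standard network measures. The nodes, edges and attributes are purely illustrative assumptions, not data from our studies.

```python
# A minimal sketch with entirely hypothetical nodes and edges: actors,
# institutions and concepts as an attributed graph, queried with a
# standard centrality measure.
import networkx as nx

G = nx.Graph()

# Nodes carry a "kind" attribute distinguishing actors, institutions and concepts.
G.add_node("regulator_A", kind="institution")
G.add_node("lab_B", kind="actor")
G.add_node("company_C", kind="actor")
G.add_node("explainability", kind="concept")
G.add_node("fairness", kind="concept")

# Edges record which actor or institution invokes which concept, and when.
G.add_edge("regulator_A", "explainability", year=2021)
G.add_edge("lab_B", "explainability", year=2019)
G.add_edge("lab_B", "fairness", year=2020)
G.add_edge("company_C", "fairness", year=2022)

# Which concepts sit "between" otherwise weakly connected actors?
# Betweenness centrality gives a first, rough signal.
centrality = nx.betweenness_centrality(G)
concepts = {n: c for n, c in centrality.items() if G.nodes[n]["kind"] == "concept"}
print(sorted(concepts.items(), key=lambda kv: -kv[1]))
```

In practice the same structure can carry richer typed relations and temporal attributes, which is what makes it possible to track how positions and concepts evolve over time.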

AI and the environment

Thomas Le Goff, Associate Professor

The “AI and Environment” research axis investigates the impact of AI technologies on the environment, examining how AI systems can be strategically employed to enhance environmental preservation efforts, optimize resource management, and contribute to sustainable development. The pressing environmental issues of our time require innovative, efficient, and scalable solutions. AI presents a unique set of tools and methodologies that can be used to tackle complex environmental problems. By leveraging machine learning, data analytics, and predictive modelling, we can contribute to protecting biodiversity and ecosystems, reducing humanity’s impact on the environment, and building robust strategies for mitigating the impacts of climate change.

For example, AI is used in the energy sector to optimize the production of low-carbon electricity (see Metroscope’s predictive maintenance software for power plants and data centres), to better predict renewable energy production, or to increase energy efficiency (see Enerbrain).

At the same time, we need to analyze the potential environmental footprint of AI technologies, addressing challenges associated with energy consumption and waste generation so that AI systems do not become an ecological problem of their own. Recently, Hugging Face published a paper, co-written with researcher Emma Strubell, which highlights, once again, the considerable environmental cost of new generative AI models, given the amount of energy these systems require and the amount of carbon they emit.
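To give a sense of the orders of magnitude involved, here is a back-of-the-envelope sketch of the kind of estimate found in the references listed below: emissions approximated as GPU-hours × average power draw × datacentre overhead (PUE) × grid carbon intensity. All figures and default parameters are illustrative assumptions, not measurements of any particular model or facility.

```python
# A back-of-the-envelope estimate in the spirit of the references below:
# emissions ≈ GPU-hours × average power draw × datacentre overhead (PUE)
# × grid carbon intensity. All default values are illustrative assumptions.
def training_emissions_kg(gpu_hours: float,
                          gpu_power_kw: float = 0.4,
                          pue: float = 1.2,
                          grid_kg_co2e_per_kwh: float = 0.3) -> float:
    """Rough CO2-equivalent emissions (kg) of a training or inference workload."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Example: 10,000 GPU-hours under the assumed parameters (~1.4 tonnes CO2e).
print(f"{training_emissions_kg(10_000):,.0f} kg CO2e")
```

Studies such as those cited below replace these placeholder figures with measured energy use and location-specific carbon intensity, which is where the real methodological work lies.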

In our rapidly evolving world, the intersection of AI and environmental sustainability stands as a promising frontier for transformative research.

Research Priorities:

  1. AI environmental footprint: This axis is dedicated to the environmental consequences of AI technologies, including their energy consumption, carbon emissions, and overall environmental footprint. Our research aims to develop frameworks for assessing and mitigating the environmental impact of AI applications, fostering the development of more sustainable and eco-friendly AI solutions.
  2. Sustainable AI regulation and public policy: Exploring regulatory frameworks and governance models to guide the responsible, ethical and sustainable deployment of AI is a key priority. Our research investigates legal mechanisms and public policy actions to ensure that AI technologies align with conservation goals, adhere to sustainable practices, and comply with ethical standards. Emphasis is placed on developing guidelines that promote the integration of sustainable practices into AI development and deployment processes.
  3. AI for climate change mitigation and environmental preservation: This axis explores the synergy between AI and the imperative goals of climate change mitigation and environmental preservation. It aims to foster cutting-edge research on AI-driven technologies that can serve these ecological goals and seeks to pioneer solutions that contribute significantly to a sustainable and resilient future. From a legal point of view, it is also essential to study the legal implications of applying AI to climate change mitigation and broader environmental preservation efforts: unnecessary or disproportionate legal barriers should not restrict efforts to leverage AI’s potential for environmental solutions.
  • Key references
    • Dhar P. (2020). The Carbon Impact of Artificial Intelligence. Nature Machine Intelligence, 2(8), 423-425.
    • Luccioni A.S., Jernite Y., Strubell E. (2023). Power Hungry Processing: Watts Driving the Cost of AI Deployment? arXiv:2311.16863.
    • Pagallo U., Ciani Sciolla J., Durante M. (2022). The Environmental Challenge of AI in EU Law: Lessons Learned from the Artificial Intelligence Act (AIA) with its Drawbacks. Transforming Government: People, Process and Policy, 16(3), 359-376.
    • Rolnick D., Donti P.L., Kaack L.H. et al. (2022). Tackling Climate Change with Machine Learning. ACM Computing Surveys, 55(2), 42.
    • Stein A.L. (2020). Artificial Intelligence and Climate Change. Yale Journal on Regulation, 37, 890.
    • Strubell E., Ganesh A., McCallum A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645-3650.
    • Vinuesa R., Azizpour H., Leite I. et al. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11, 233.

AI Ethics and Regulation

Mélanie Gornet, PhD Student

In her research, PhD student Mélanie Gornet focuses on the regulation of artificial intelligence, encompassing social, legal and technical aspects. Notably, she has been studying the relationship between technical standards, the CE marking that will be required for high-risk AI systems under the AI Act, and respect for fundamental rights. Her papers and conference presentations look at the multiple standardization initiatives around “trustworthy AI”, including standards that attempt to address bias and non-discrimination. She asks whether it is ever possible to certify that an AI system complies with fundamental rights. The challenge stems from the fact that fundamental rights are bound up with the specifics and context of each case. Only a judge can decide, for example, whether a certain level of residual bias in a facial recognition system is acceptable, or whether, on the contrary, it creates illegal discrimination. While technical standards cannot set the exact level of tolerance with regard to fundamental rights, they can play a role in establishing common terminology or act as a toolbox by defining design methods and means of measurement. Mélanie Gornet’s other projects involve the study of AI ethics charters and manifestos, fairness in facial recognition systems, metrics of explainability, and AI audit methodology.

  • AI Standards
  • Ethics of AI
  • AI Act
    • [presentation] Reading the AI Act with a technical lens, Seminar at the Council of the European Union, Jul 2023.
  • Other presentations
    • [presentation] Les limitations techniques du machine learning, Colloque Décision humaine, Décision de l’IA, Université d’Artois, Nov 2023
    • [paper][presentation] L’IA explicable appliquée à la détection de ceintures et de téléphones au volant, Conférence Nationale sur les Applications Pratiques de l’Intelligence Artificielle APIA@PFIA2023, Jul 2023. https://hal.science/hal-04158889
    • [online article] Operational Fairness for Facial Authentication Systems, ERCIM News 131 Special theme: Ethical Software Engineering and Ethically Aligned Design, Oct 2022 https://ercim-news.ercim.eu/en131/special/operational-fairness-for-facial-authentication-systems
  • Teaching
    • Ecole Polytechnique, Seminar Ethical issues, Law & Novel applications of AI, Master of Science and Technology in Artificial Intelligence and advanced Visual Computing (organisation of the seminar, 2022-2024)
    • ISAE-SUPAERO, Introduction au droit des données personnelles, final year engineering students specialising in Data and Decision Sciences (course, 2022-2024)
    • Télécom Paris, Exploration de grands volumes de données / Mining and exploring large datasets, Master and Mastère Spécialisé (intervention, 2022)
    • Télécom Paris, Law and ethics of artificial intelligence, Master 2 (intervention, 2022)

AI standards and fundamental rights

The European AI Act takes a product-safety approach, relying heavily on technical standards. My work explores the intersection between technical standards and AI principles such as fairness. Many AI systems will require trade-offs, for example between fairness and performance, or between individual fairness and group fairness. Technical standards are useful, but they may be of little help in defining what counts as an ‘acceptable’ trade-off.
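As a minimal illustration of why standards can define measurements without settling trade-offs, the sketch below computes accuracy and a simple group-fairness metric (the demographic parity difference) on hypothetical predictions; the data and the choice of metric are assumptions made for illustration, and what gap counts as ‘acceptable’ remains a normative, case-specific question the metric itself cannot answer.

```python
# Hypothetical predictions for two groups "a" and "b": compute accuracy and
# the demographic parity difference (gap in positive-decision rates). The
# metric is measurable; the acceptable threshold is not something a standard
# can fix in the abstract.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                 # hypothetical ground truth
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])                 # hypothetical model decisions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected attribute

accuracy = (y_true == y_pred).mean()

rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
dp_diff = abs(rate_a - rate_b)

print(f"accuracy = {accuracy:.2f}, demographic parity difference = {dp_diff:.2f}")
```

A standard can specify how to compute such a number and how to report it; deciding whether 0.25 is tolerable in a given use case is exactly the kind of contextual judgment the text above describes.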

Taking a human-centric approach to explainable AI

Astrid Bertrand, PhD Student

PhD student Astrid Bertrand asks whether explanations of AI decisions are really helpful, particularly in the context of financial services. Bertrand developed an experiment to see whether explanations of algorithmic recommendations really help consumers of life insurance products understand and decide which product is best suited to their needs. The experiment found that explanations can make consumers less vigilant and too trusting of the algorithm. Bertrand also studied the needs of banking regulators to receive explanations of AI systems used to detect money laundering and terrorism financing. She found that explanations play a very different role for supervisors and auditors than they do for consumers. Supervisors and auditors need to understand, for example, why an AI system missed certain signals of criminal activity, and whether the omission reveals a systemic flaw that merits a sanction. Bertrand concludes that explanations of AI decisions rarely work for everybody. They need to be tailored to the specific audience, and even then, certain kinds of explanations will miss the mark for certain people. Bertrand recommends using human-centered design, a familiar methodology in human-computer interaction (HCI), to develop explanation methods that work fairly well for most people most of the time. The idea of a “perfect” explanation method is unrealistic and should be abandoned in favor of more modest goals.

AI meets applied ethics

Joshua Brand, PhD Student

Philosophy and explainable AI?

Why do we need moral philosophy in our research on explainable AI (XAI)? XAI is widely accepted as a requirement for trustworthy AI systems, for example when deploying AI in the financial sector for anti-money laundering efforts. In technical, policy, or legal work, however, we often simply accept the need for explainability and start from that assumption to decide its technical limitations and how best to proceed with its development and implementation. Yet, amid all this excellent work discussing new technologies, legal clarifications, and implementation insights, we eventually need to answer a simple but foundational normative question: why ought we do this? Why do we need XAI? Before we implement and discuss the practical uses of XAI techniques, we need to provide robust justification for their use beyond merely appealing to some document or the arbitrary reason that “someone said so”.

Answering this question through the critical lens of moral philosophy is important because XAI methods are not easy to implement. They can provide beneficial explanations that support accountability and auditing measures, yet they are less efficient and stand in contrast to fast, relatively autonomous, “black-box” machine learning models. Robust justification is necessary to show those who implement AI that, even though it may slow down their decision-making processes, it is morally right, or even necessary, to implement XAI.

This is something I recently tackled in my paper on XAI and public financial institutions (see link below), where I argued that employing only explainable AI is embedded in the identity of public financial institutions: for these banks, explainability is not a mere preference but essential to who they are.

With this justificatory work that grounds and clarifies the use of explainability, we better understand its limitations and where and how its development and implementation should be directed.

If we look into the philosophical foundations of AI ethics, we see deep disagreements about the nature and future of humanity, science, and modernity. Questioning AI opens up an abyss of critical questions about human knowledge, human society, and the nature of human morality. 

  • Mark Coeckelbergh, AI Ethics, 2020, p. 61

…philosophy is not a subject. It’s a discipline, designed to address the various forms of philosophical perplexity to which any reflective human being is subject.

  • Christine Korsgaard, “Thinking in Good Company”, 2022, p. 25-26
  • Journal articles
    • Brand, Joshua L.M. “The Duty to Implement Explainable Artificial Intelligence: A Case Study of Public Service Postal Banks.” Canadian Journal of Practical Philosophy 9, no. 1 (2023): 1-16. https://ojs.uwindsor.ca/index.php/cjpp/article/view/8146 
    • Brand, Joshua L.M. “Why reciprocity prohibits autonomous weapons systems in war.” AI and Ethics 3 (2023): 619-624. https://doi.org/10.1007/s43681-022-00193-1 
    • Brand, Joshua L.M. “The misdirected approach of open source algorithms.” AI & Society (2022). https://doi.org/10.1007/s00146-022-01547-3 
    • Brand, Joshua L.M. “En lisant le Télémaque : L’émancipation des femmes dans Une fille d’Ève de H. de Balzac.” USURJ 6, no. 3 (2020). https://doi.org/10.32396/usurj.v6i3.498. 
  • Conferences
    • Bertrand, Astrid, James R. Eagan, Winston Maxwell, and Joshua Brand. “AI is Entering Regulated Territory: Understanding the Supervisors’ Perspective for Model Justifiability in Financial Crime Detection.” In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, May 2024, Honolulu, Hawai’i, USA. Accepted.
    • Brand, Joshua and Luca Nannini. “Does Explainable AI Have Moral Value?” 37th Conference on Neural Information Processing Systems, Workshop on AI meets Moral Philosophy and Moral Psychology, December 2023, New Orleans, USA (NeurIPS 2023). https://arxiv.org/abs/2311.14687
    • Brand, Joshua L.M. “Exploring the Moral Value of Explainable Artificial Intelligence Through Public Service Postal Banks.” In Proceedings of AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, August 2023, Montréal, Canada (AIES ’23): 990-992. https://doi.org/10.1145/3600211.3604741. 
    • Brand, Joshua L.M. “Exploring the Moral Value of Explainable Artificial Intelligence Through Public Service Postal Banks.” Conference paper for Canadian Society for the Study of Practical Ethics, Congress of the Humanities and Social Sciences, May 2023, Toronto, Canada. http://csspe.ca/conference/ 
  • Miscellaneous (web articles and blog posts)
    • Brand, Joshua. Research Summary of Why reciprocity prohibits autonomous weapons systems in war, by Joshua L.M. Brand. Montreal AI Ethics Institute (May 28, 2023). https://montrealethics.ai/why-reciprocity-prohibits-autonomous-weapons-systems-in-war/
    • Brand, Joshua. Research Summary of How Cognitive Biases Affect XAI-assisted Decision-making: A Systematic Review, by Astrid Bertrand et al. Montreal AI Ethics Institute (May 27, 2023). https://montrealethics.ai/how-cognitive-biases-affect-xai-assisted-decision-making-a-systematic-review/
    • Brand, Joshua L.M. “Clarifying the Moral Foundation of Explainable AI.” The Digital Constitutionalist (2022). https://digi-con.org/clarifying-the-moral-foundation-of-explainable-ai/
    • Brand, Joshua and Mélanie Gornet. “AI and anti-discrimination law: Remarks on Prof. Sandra Wachter’s presentation.” (November 30, 2022). https://www.telecom-paris.fr/ai-anti-discrimination-law-remarks-sandra-wachter 
    • Brand, Joshua and Dilia Carolina Olivo. “Thoughts on the Inauguration of the Trustworthy and Responsible AI Lab by Axa and Sorbonne University.” (May 3, 2022). https://www.telecom-paris.fr/trustworthy-responsible-ai-lab-axa-sorbonne 

AI “explainability” is an ethical imperative

Explaining decisions shows respect for the recipient’s humanity and vulnerability. Where AI supports human decision-making, the human decision-maker must be able to explain and justify her decision to the person affected. Explanations are a key characteristic of human-to-human interactions, and AI systems must ensure that this human characteristic is preserved, even where black box algorithms are used.

ICMS research chair

The Intelligent Cybersecurity for Mobility Systems (ICMS) chair will focus on how on-board vehicle systems can resist cyber attacks with the help of AI.

Winston Maxwell and Thomas Le Goff of the operational AI ethics team will be leading the research theme devoted to regulatory, data protection and data sharing aspects.

Human control over AI

Winston Maxwell, Professor of Law

How do we ensure that human control over AI systems is effective? Human control means different things in different contexts: human-in-the-loop, human-on-the-loop and human-out-of-the-loop are some of the terms used to designate different degrees of human control. Human control is one of the requirements of the AI Act for high-risk systems, and the European Court of Justice has also required human control for algorithmic systems that can have significant impacts on fundamental rights. For medical diagnosis, effective control by the doctor is essential. International Humanitarian Law applicable to armed conflict requires human control over lethal weapon systems. We study human control in operational contexts, attempting to gauge its effectiveness in light of different operational constraints. Explainability is of course one of the enablers of effective human control. But humans are not always good decision-makers, and sometimes human control can do more harm than good. Even if the human does not always increase the objective quality of the final decision, having a human decision-maker is sometimes essential for human dignity and for what Professor Robert Summers calls “process values”.
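Purely as an illustration of the distinction between degrees of control (not a description of any deployed system or of our own framework), the sketch below contrasts a human-in-the-loop regime, where no algorithmic decision takes effect without human approval, with a human-on-the-loop regime, where decisions execute automatically but a human monitor can override them afterwards. The data, function names and decisions are hypothetical.

```python
# Illustrative sketch: two degrees of human control over an algorithmic decision.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject: str
    model_output: str           # the algorithm's proposed decision
    final: Optional[str] = None

def human_in_the_loop(d: Decision, approve: Callable[[Decision], bool]) -> Decision:
    # Nothing takes effect until the human reviewer approves; otherwise the
    # case is referred back to a human decision-maker.
    d.final = d.model_output if approve(d) else "referred to human decision-maker"
    return d

def human_on_the_loop(d: Decision, override: Callable[[Decision], Optional[str]]) -> Decision:
    # The algorithmic decision applies immediately; the human monitor may
    # later substitute a different outcome.
    d.final = d.model_output
    correction = override(d)
    if correction is not None:
        d.final = correction
    return d

# Hypothetical usage: the same kind of recommendation handled under each regime.
d1 = human_in_the_loop(Decision("loan_123", "deny"), approve=lambda d: False)
d2 = human_on_the_loop(Decision("loan_456", "deny"), override=lambda d: "grant")
print(d1.final, "|", d2.final)
```

The operational question studied here is whether the human step in either regime is effective in practice, or merely formal.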



  • Selected publications on human control
    • Winston Maxwell. Le contrôle humain des systèmes algorithmiques. Les Lundis de l’IA et de la Finance, ACPR-Banque de France, Mar 2023, Paris, France. ⟨hal-04039650⟩
    • Winston Maxwell. Comment assurer l’efficacité du contrôle humain dans les systèmes de décision algorithmiques?. Commission nationale consultative des droits de l’homme (CNCDH) 2022, Jan 2022, Paris, France. ⟨hal-03544203⟩
    • Valérie Beaudouin, Isabelle Bloch, David Bounie, Stéphan Clémençon, Florence d’Alché-Buc, et al. Identifying the “Right” Level of Explanation in a Given Situation. First International Workshop on New Foundations for Human-Centered AI (NeHuAI), Sep 2020, Santiago de Compostela, Spain. pp. 63. ⟨hal-02507316⟩
    • Winston Maxwell. Le contrôle humain pour détecter des erreurs algorithmiques. Céline Castets-Renard; Jessica Eynard. Droit de l’intelligence artificielle : entre règles sectorielles et régime général – Perspectives de droit comparé, Larcier, 2023, 9782802772088. ⟨hal-04026934⟩
    • Winston Maxwell. Meaningful Human Control to Detect Algorithmic Errors. Céline Castets-Renard; Jessica Eynard. Artificial Intelligence Law: Between Sectoral Rules and Comprehensive Regime – Comparative Law Perspectives, Bruylant, In press. ⟨hal-04026883⟩
    • Winston Maxwell, Bruno Dumas. Meaningful XAI Based on User-Centric Design Methodology: Combining legal and human-computer interaction (HCI) approaches to achieve meaningful algorithmic explainability. CERRE – Centre on Regulation in Europe. 2023. ⟨hal-04187446⟩
    • Winston Maxwell. Le contrôle humain des systèmes algorithmiques – un regard critique sur l’exigence d’un « humain dans la boucle ». Droit. Université Paris 1 Panthéon-Sorbonne, 2022. ⟨tel-04010389⟩

6 research areas

Our research and teaching focus on the most pressing problems raised by high-risk AI use cases, regardless of the kind of technology deployed (generative AI, deep learning, symbolic AI).

Team & Partners