These discussions took place over four panels: Ethics and Regulation; Standards and Values; AI Human Decision Loops; and Humanity and Risk Management. While each panel and the subsequent discussions offered diverse perspectives on a wide range of topics, some common threads emerged: which legal regulations and approaches best serve ethical principles; insight into the viability of explainable AI; questions about the necessity of Big Data; and observations on the efficacy and potential of the recent EU AI Act.
Two observations stand out to me in particular, both concerning how we should approach interdisciplinarity in AI ethics. The first is Dr Carina Prunkl’s reminder about the role of ethical theory. Ethics is not merely a list of standards around which we should formulate our laws. It is a discipline that guides all the other fields through a logical process of analysing and clarifying concepts in new and unclear contexts. For example, if we aim both to uphold individual autonomy and to implement biometric data collection, the role of ethics is to help define the concept of autonomy in the context of biometrics so that legal regulators can regulate it more effectively.

The other observation was made by Dr Joanna Bryson, who reminded us that AI is an artefact, not an agent (unless, perhaps controversially, we include social robots or “implicit moral agency”). Therefore, the relationship we have to consider when ethically implementing AI is not human-machine but human-human. AI is a tool we use to interact with other humans, and its use can either improve or impair this relationship. Every discipline must first ask how humans can and should sustainably live together, and only then ask how AI fits within this narrative.