Articles we’ve been reading and using for teaching
Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C. D., & Ho, D. E. (2024). Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools. arXiv preprint arXiv:2405.20362. https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf
Todd Feathers, Google’s AI will help decide whether unemployed workers get benefits, Gizmodo, Sept. 10, 2024, https://gizmodo.com/googles-ai-will-help-decide-whether-unemployed-workers-get-benefits-2000496215
Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, and Iryna Gurevych. 2024. Are Emergent Abilities in Large Language Models just In-Context Learning?. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5098–5139, Bangkok, Thailand. Association for Computational Linguistics.
Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models (2024) https://arxiv.org/abs/2401.01301v1
Harvard Data Science Review May 2024 https://hdsr.mitpress.mit.edu/specialissue5
A Critical Analysis of the Largest Source for Generative AI Training Data: Common Crawl 2024 https://facctconference.org/static/papers24/facct24-148.pdf
Yann LeCun lecture at Collège de France, Feb. 2024
Model collapse: Shumailov et al., The Curse of Recursion: Training on Generated Data Makes Models Forget, May 2023, https://arxiv.org/pdf/2305.17493.pdf
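The dynamic Shumailov et al. describe can be seen in a toy setting: repeatedly refit a simple model on samples drawn from the previous generation’s model, and estimation error compounds until the original distribution is forgotten. A minimal sketch (our illustration, not the paper’s experimental setup):

```python
import numpy as np

# Toy illustration of model collapse: each "generation" fits a Gaussian to
# samples drawn from the previous generation's model rather than to real data.
rng = np.random.default_rng(42)
real_data = rng.normal(loc=0.0, scale=1.0, size=500)

mu, sigma = real_data.mean(), real_data.std()
for generation in range(1, 11):
    synthetic = rng.normal(loc=mu, scale=sigma, size=500)  # "generated data"
    mu, sigma = synthetic.mean(), synthetic.std()          # refit on it
    print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
# In expectation sigma shrinks a little each generation (np.std is biased low),
# and the random walk in (mu, sigma) drifts the model away from the real
# distribution: the tails are progressively forgotten.
```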
Detecting AI-generated content:
Ada Lovelace Institute proposes regulation of foundation models based on the regulatory model for medical devices in the US
An explanation of how “transformers” work: https://ig.ft.com/generative-ai/
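For readers who prefer code to diagrams: the core operation inside a transformer is scaled dot-product attention. A minimal NumPy sketch (illustrative only; real transformers add learned projections, multiple heads, and masking):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted average of
    the rows of V, weighted by how strongly the query matches each key."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V))  # 3 context-mixed token representations
```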
Interview of Geoffrey Hinton on GPT4 in MIT Technology Review https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/
Stanford study on foundation models:
https://crfm.stanford.edu/report.html
Melanie Mitchell articles on GenAI and LLMs
https://www.science.org/doi/10.1126/science.adj5957
https://arxiv.org/abs/2210.13966
OECD report on Gen AI (Sept 2023)
https://www.oecd.org/publications/initial-policy-considerations-for-generative-artificial-intelligence-fae2d1e6-en.htm
Homogenization or monoculture risk: a single model ends up embedded in many downstream applications, creating correlated vulnerabilities (Kleinberg & Raghavan 2022; Bommasani et al. 2022)
Emergence risk: unpredictable capabilities emerge as the model gets larger. “More is different” (Wei et al., 2022; Kolt, 2023).
Power-seeking, reward-hacking: the model finds unforeseen, and potentially harmful, ways of achieving a goal by exploiting the reward signal (Skalse, Howe and Krueger, 2022)
Increased agency: Chan, A. et al. (2023), “Harms from Increasingly Agentic Algorithmic Systems”, https://arxiv.org/abs/2302.10329
Park, J. et al. (2023), “Generative Agents: Interactive Simulacra of Human Behavior”, http://arxiv.org/abs/2304.03442 (Generative AI agents can plan and plot.)
Hinduja article (Harvard Berkman Klein) Nov 2023 on Thinking Through Generative AI Harms Among Users on Online Platforms
NYT article (Nov. 2023): Chatbots may hallucinate more often than many realize
Writers’ Guild settlement on use of AI for scriptwriting
https://www.wired.co.uk/article/us-writers-strike-ai-provisions-precedents
Felten, Edward W. and Raj, Manav and Seamans, Robert, How will Language Modelers like ChatGPT Affect Occupations and Industries? (March 1, 2023). Available at SSRN: https://ssrn.com/abstract=4375268 or http://dx.doi.org/10.2139/ssrn.4375268
ChatGPT and the environment: https://www.theatlantic.com/technology/archive/2023/08/ai-carbon-emissions-data-centers/675094/
https://www.wired.co.uk/article/the-generative-ai-search-race-has-a-dirty-secret
Draft Chinese Regulation on Generative AI
Interactive and compositional deepfakes, Eric Horvitz, https://arxiv.org/abs/2209.01714
Anthropic and Constitutional AI https://arxiv.org/abs/2212.08073
(Kolt, 2023) Kolt, Noam, Algorithmic Black Swans (October 14, 2023). Washington University Law Review, Vol. 101, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4370566
El-Mhamdi et al., On the Impossible Safety of Large AI Models, https://arxiv.org/abs/2209.15259
Nasr et al., Scalable Extraction of Training Data from (Production) Language Models, Nov. 28, 2023, arXiv:2311.17035
UK Competition and Markets Authority, AI Foundation Models, Initial Review, May 4, 2023
How public AI can strengthen democracy
https://operational-ai-ethics.telecom-paris.fr/wp-content/uploads/2024/03/Tech-authoritarianism-The-Atlantic-March-2024.pdf
https://www.theatlantic.com/magazine/archive/2024/03/facebook-meta-silicon-valley-politics/677168/
Maini, P., Jia, H., Papernot, N., & Dziedzic, A. (2024). LLM Dataset Inference: Did you train on my dataset?. arXiv preprint arXiv:2406.06443.
Das et al., 2024, Security and Privacy Challenges of Large Language Models: A Survey https://arxiv.org/html/2402.00888v1
Solove, Daniel J. and Hartzog, Woodrow, The Great Scrape: The Clash Between Scraping and Privacy (July 03, 2024). Available at SSRN: https://ssrn.com/abstract=4884485 or http://dx.doi.org/10.2139/ssrn.4884485
D. Rosenthal (VISCHER), Language models with and without personal data, 17 July 2024, https://www.vischer.com/en/knowledge/blog/part-19-language-models-with-and-without-personal-data/
Hamburg Data Protection Authority discussion paper on LLMs, concluding that LLMs do not store personal data from the training dataset (July 2024) https://datenschutz-hamburg.de/fileadmin/user_upload/HmbBfDI/Datenschutz/Informationen/240715_Discussion_Paper_Hamburg_DPA_KI_Models.pdf
Woodrow Hartzog on regulating AI “Two AI Truths and a Lie” (June 2024) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4840383
EDPB analysis of ChatGPT (May 23, 2024) https://www.edpb.europa.eu/system/files/2024-05/edpb_20240523_report_chatgpt_taskforce_en.pdf
The unfair side of Privacy Enhancing Technologies: addressing the trade-offs between PETs and fairness 2024 https://facctconference.org/static/papers24/facct24-139.pdf
Data collection for AI: https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html
Privacy enhancing technology (a minimal differential-privacy sketch follows this group of references):
Prokhorenkov, D., & Cao, Y. (2023, November). Towards Benchmarking Privacy Risk for Differential Privacy: A Survey. In Proceedings of the 10th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (pp. 322-327). https://dl.acm.org/doi/pdf/10.1145/3600100.3625373
Near J, Darais D (2023) Guidelines for Evaluating Differential Privacy Guarantees. (National Institute of Standards and Technology, Gaithersburg, MD), NIST SP 800-226 ipd. https://doi.org/10.6028/NIST.SP.800-226.ipd
Drechsler, J. (2023). Differential Privacy for Government Agencies—Are We There Yet?. Journal of the American Statistical Association, 118(541), 761-773. https://www.tandfonline.com/doi/pdf/10.1080/01621459.2022.2161385
Das, S., Drechsler, J., Merrill, K., & Merrill, S. (2022). Imputation under differential privacy. arXiv preprint arXiv:2206.15063. https://arxiv.org/pdf/2206.15063.pdf
Google (2022), Applying Differential Privacy to Large Scale Image Classification, https://blog.research.google/2022/02/applying-differential-privacy-to-large.html
Royal Society. (2019). Protecting privacy in practice: The current use, development and limits of Privacy Enhancing Technologies in data analysis. https://royalsociety.org/-/media/policy/projects/privacy-enhancing-technologies/Protecting-privacy-in-practice.pdf
OECD (2023), “Emerging privacy-enhancing technologies: Current regulatory and policy approaches”, OECD Digital Economy Papers, No. 351, OECD Publishing, Paris, https://doi.org/10.1787/bf121be4-en.
Tang, J., Korolova, A., Bai, X., Wang, X., & Wang, X. (2017). Privacy loss in Apple’s implementation of differential privacy on macOS 10.12. arXiv preprint arXiv:1709.02753. https://arxiv.org/abs/1709.02753
Prokhorenkov, D. (2022). Anonymization level and compliance for differential privacy: A systematic literature review. 2022 International Wireless Communications and Mobile Computing (IWCMC), 1119-1124. https://ieeexplore.ieee.org/document/9824899
Fischer-Hübner, S., Hansen, M., Hoepman, J. H., & Jensen, M. (2022). Privacy-Enhancing Technologies and Anonymisation in Light of GDPR and Machine Learning. In IFIP International Summer School on Privacy and Identity Management (pp. 11-20). Cham: Springer Nature Switzerland. https://pure.rug.nl/ws/portalfiles/portal/915269066/978-3-031-31971-6_2.pdf
Vassilev, A., Oprea, A., Fordyce, A., and Anderson, H. (2024). Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (National Institute of Standards and Technology). https://doi.org/10.6028/NIST.AI.100-2e2023
Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083.
Wood, A., Altman, M., Bembenek, A., Bun, M., Gaboardi, M., Honaker, J., … & Vadhan, S. (2018). Differential privacy: A primer for a non-technical audience. Vand. J. Ent. & Tech. L., 21, 209.
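Background for the differential-privacy references above: the canonical building block is the Laplace mechanism, which adds noise scaled to the query’s sensitivity divided by the privacy budget epsilon. A minimal sketch with made-up data:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """Release true_answer with Laplace noise of scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for this query."""
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(7)
ages = np.array([34, 45, 23, 67, 29, 51])  # hypothetical records
true_count = int(np.sum(ages > 40))        # counting query
# Adding or removing one person changes a count by at most 1, so sensitivity = 1.
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps, rng=rng)
    print(f"epsilon={eps:>4}: noisy count = {noisy:.2f} (true = {true_count})")
# Smaller epsilon means stronger privacy but noisier answers -- the trade-off
# the NIST and OECD guidance above grapples with.
```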
Purtova, N. (2018). The law of everything. Broad concept of personal data and future of EU data protection law. Law, Innovation and Technology, 10(1), 40-81.
Valentin Rupp, Max von Grafenstein, Clarifying “personal data” and the role of anonymisation in data protection law: Including and excluding data from the scope of the GDPR (more clearly) through refining the concept of data protection, Computer Law & Security Review, Volume 52, 2024, 105932, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2023.105932 (https://www.sciencedirect.com/science/article/pii/S0267364923001425)
Andreas Häuselmann, Bart Custers, Substantive fairness in the GDPR: Fairness Elements for Article 5.1a GDPR, Computer Law & Security Review, Volume 52, 2024, 105942, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2024.105942 (https://www.sciencedirect.com/science/article/pii/S0267364924000098)
United Nations B-Tech report on Gen AI and Human Rights, May 2024
EU Fundamental Rights Agency report on AI and fundamental rights (2020) https://fra.europa.eu/sites/default/files/fra_uploads/fra-2020-artificial-intelligence_en.pdf
Thomas H. Costello et al., Durably reducing conspiracy beliefs through dialogues with AI. Science 385, eadq1814 (2024). DOI: 10.1126/science.adq1814. https://www.science.org/doi/10.1126/science.adq1814
The Role of Explainability in Collaborative Human-AI Disinformation Detection 2024 https://facctconference.org/static/papers24/facct24-146.pdf
Bontcheva et al., Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities, 2024 https://edmo.eu/wp-content/uploads/2023/12/Generative-AI-and-Disinformation_-White-Paper-v8.pdf
Real Risks of Fake Data: Synthetic Data, Diversity-Washing and Consent Circumvention 2024 https://facctconference.org/static/papers24/facct24-117.pdf
Attitudes Toward Facial Analysis AI: A Cross-National Study Comparing Argentina, Kenya, Japan, and the USA 2024 https://facctconference.org/static/papers24/facct24-153.pdf
How facial recognition app poses threat to privacy, civil liberties (in Harvard Gazette Oct 2023): https://news.harvard.edu/gazette/story/2023/10/how-facial-recognition-app-poses-threat-to-privacy-civil-liberties/
https://www.theguardian.com/us-news/2023/feb/08/us-immigration-cbp-one-app-facial-recognition-bias
Sénat, Commission des lois, Rapport d’information n° 627 sur la reconnaissance biométrique dans l’espace public, 10 mai 2022.
CNIL, Caméras dites « intelligentes » ou « augmentées » dans les espaces publics, juill. 2022
Council of Europe, Guidelines on Facial Recognition, June 2021
Leslie, David. “Understanding bias in facial recognition technologies.” arXiv preprint arXiv:2010.07023 (2020).
S. Lohr, Facial Recognition Is Accurate, if You’re a White Guy, NY Times.
Khalil, Ashraf, et al. “Investigating bias in facial analysis systems: A systematic review.” IEEE Access 8 (2020): 130751-130761.
Singh, Richa, et al. “Anatomizing bias in facial analysis.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 11. 2022.
CNIL: Reconnaissance faciale : pour un débat à la hauteur des enjeux
FRA (EU Fundamental Rights Agency): Facial recognition technology: fundamental rights considerations in the context of law enforcement
Clémençon and Maxwell, Why Facial Recognition Algorithms Can’t be Perfectly Fair (French version here)
Felten, Edward W. and Raj, Manav and Seamans, Robert, How will Language Modelers like ChatGPT Affect Occupations and Industries? (March 1, 2023). Available at SSRN: https://ssrn.com/abstract=4375268 or http://dx.doi.org/10.2139/ssrn.4375268
Mateescu, Challenging worker datafication (Data & Society Nov 2023) https://datasociety.net/library/challenging-worker-datafication/
Teachout, Nov 2023, Surveillance Wages: a Taxonomy
https://lpeproject.org/blog/surveillance-wages-a-taxonomy/
OECD and Unesco study on AI and effect on women at work https://read.oecd-ilibrary.org/science-and-technology/the-effects-of-ai-on-the-working-lives-of-women_14e9b92c-en#page1
Garg, S., Sinha, S., Kar, A.K. and Mani, M. (2021), “A review of machine learning applications in human resource management”, International Journal of Productivity and Performance Management, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/IJPPM-08-2020-0427
K. Simbeck, “HR analytics and ethics,” in IBM Journal of Research and Development, vol. 63, no. 4/5, pp. 9:1-9:12, 1 July-Sept. 2019, doi: 10.1147/JRD.2019.2915067.
Tambe P, Cappelli P, Yakubovich V. Artificial Intelligence in Human Resources Management: Challenges and a Path Forward. California Management Review. 2019;61(4):15-42. doi:10.1177/0008125619867910
Köchling, A., Riazy, S., Wehner, M.C. et al. Highly Accurate, But Still Discriminatory. Bus Inf Syst Eng 63, 39–54 (2021). https://doi.org/10.1007/s12599-020-00673-w
Köchling, A., Wehner, M.C. Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Bus Res 13, 795–848 (2020). https://doi.org/10.1007/s40685-020-00134-w
Faiyaz Md. Iqbal, Can Artificial Intelligence Change the Way in Which Companies Recruit, Train, Develop and Manage Human Resources in Workplace?, Asian Journal of Social Sciences and Management Studies, Vol. 5, No. 3, 102-104, 2018
https://www.bostonreview.net/articles/the-new-workplace-surveillance/
https://www.college-de-france.fr/agenda/cours/le-travail-au-xxie-siecle-droit-techniques-ecoumene/le-travail-au-xxie-siecle-droit-techniques-ecoumene-5
https://www.college-de-france.fr/la-gouvernance-par-les-nombres-introduction
https://op.europa.eu/fr/publication-detail/-/publication/b4ce8f90-2b1b-43ec-a1ac-f857b393906e/language-fr
https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2020)656305
Markus Kattnig, Alessa Angerschmid, Thomas Reichel, Roman Kern, Assessing trustworthy AI: Technical and legal perspectives of fairness in AI, Computer Law & Security Review, Volume 55, 2024, 106053, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2024.106053.
Racial/Ethnic Categories in AI and Algorithmic Fairness: Why They Matter and What They Represent 2024 https://facctconference.org/static/papers24/facct24-165.pdf
Algorithmic Pluralism: A Structural Approach To Equal Opportunity 2024 https://facctconference.org/static/papers24/facct24-14.pdf
The Conflict Between Algorithmic Fairness and Non-Discrimination: An Analysis of Fair Automated Hiring 2024 https://facctconference.org/static/papers24/facct24-130.pdf
Balancing Act: Evaluating People’s Perceptions of Fair Ranking Metrics 2024 https://facctconference.org/static/papers24/facct24-133.pdf
Recommend Me? Designing Fairness Metrics with Providers 2024 https://facctconference.org/static/papers24/facct24-159.pdf
Fundamental Rights Agency (FRA) Dec 2022 report on bias
Chouldechova A. (2017), “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments”, Big Data, vol. 5, n° 2, pp. 153-163, https://arxiv.org/abs/1610.07524 (a short numerical illustration of this impossibility result follows this group of references)
“Machine Bias. There is software that is used across the country to predict future criminals. And it is biased against blacks.”, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Kleinberg, Sunstein et al. Discrimination in the Age of Algorithms https://www.nber.org/papers/w25548
Leslie D, Mazumder A, Peppin A, Wolters M K, Hagerty A. Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ 2021; 372 :n304 doi:10.1136/bmj.n304
Wachter, Sandra, The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law (February 15, 2022). Tulane Law Review, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4099100 or http://dx.doi.org/10.2139/ssrn.4099100
Mécanismes d’une justice algorithmisée https://www.jean-jaures.org/publication/mecanisme-dune-justice-algorithmisee/
Sarah Brayne, Angèle Christin, Technologies of Crime Prediction: The Reception of Algorithms in Policing and Criminal Courts, Social Problems, Volume 68, Issue 3, August 2021, Pages 608–624, https://doi.org/10.1093/socpro/spaa004
Flores et al., False Positives, False Negatives, and False Analyses
Kleinberg et al., Human decisions and machine predictions
Sunstein, Governing by Algorithm? No Noise and (Potentially) Less Bias
Barraud, Un algorithme capable de prédire les décisions des juges
https://bailproject.org/wp-content/uploads/2022/07/RAT_policy_brief_v3.pdf
https://www.wired.com/story/algorithms-supposed-fix-bail-system-they-havent/
https://pretrialrisk.com/national-landscape/state-laws-on-rats/
https://advancingpretrial.org/psa/about/
Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447-453 (2019). DOI: 10.1126/science.aax2342
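Several of the references above turn on the same impossibility result (Chouldechova 2017; Clémençon and Maxwell): when two groups have different base rates, a classifier cannot simultaneously be calibrated (equal PPV) and have equal error rates. A short numerical check of Chouldechova’s identity, using hypothetical numbers:

```python
# Chouldechova (2017): for any classifier, FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
# where p is the group's base rate (prevalence). Hold PPV and FNR equal across
# groups with different base rates, and the FPRs are forced apart.
def implied_fpr(prevalence, ppv, fnr):
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.2  # equal calibration and miss rate (assumed values)
for group, p in (("A", 0.3), ("B", 0.5)):  # different base rates (assumed)
    print(f"group {group}: base rate {p:.0%} -> FPR = {implied_fpr(p, ppv, fnr):.1%}")
# group A: ~14.7%, group B: ~34.3% -- equalizing one fairness metric
# necessarily unbalances another when base rates differ.
```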
Marta Cantero Gamito, Christopher T Marsden, Artificial intelligence co-regulation? The role of standards in the EU AI Act, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae011, https://doi.org/10.1093/ijlit/eaae011
Gornet, M. & Maxwell, W. (2024). The European approach to regulating AI through technical standards. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1784, https://policyreview.info/pdf/policyreview-2024-3-1784.pdf
Mélanie Gornet. The European approach to regulating AI through technical standards. 2023. ⟨hal-04254949⟩
Laux, J., Wachter, S., & Mittelstadt, B. (2024). Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act. Computer Law & Security Review, 53, 105957.
Kostina Prifti, Eduard Fosch-Villaronga, Towards experimental standardization for AI governance in the EU, Computer Law & Security Review, Volume 52, 2024, 105959, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2024.105959 (https://www.sciencedirect.com/science/article/pii/S0267364924000268)
http://eulawanalysis.blogspot.com/2021/09/
https://www.senat.fr/rap/r19-506/r19-506.html (see section on algorithms)
2018 annual report of the CNCTR, beginning on page 96
GCHQ’s ethical approach to AI: an initial human rights-based response
Thaler, Richard H. and Sunstein, Cass R. and Balz, John P., Choice Architecture (April 2, 2010). Available at SSRN: https://ssrn.com/abstract=1583509 or http://dx.doi.org/10.2139/ssrn.1583509
Capasso, M., Umbrello, S. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants. Med Health Care and Philos 25, 11–22 (2022). https://doi.org/10.1007/s11019-021-10062-z
Bad Nudge Bad Robot project (DATAIA)
https://www.gov.uk/government/organisations/behavioural-insights-team
Suh, Can AI Nudge Us to Make Better Choices?
Mele et al., Smart nudging: How cognitive technologies enable choice architectures for value co-creation
Kevin Werbach, Orwell’s that ends well (Nudge, social credit system) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3589804#
Escalation Risks from Language Models in Military and Diplomatic Decision-Making 2024 https://facctconference.org/static/papers24/facct24-57.pdf
Anderson, Kenneth and Waxman, Matthew C., Debating Autonomous Weapon Systems, Their Ethics, and Their Regulation Under International Law (February 28, 2017). Roger Brownsword, Eloise Scotford, Karen Yeung, eds., The Oxford Handbook of Law, Regulation, and Technology (Oxford University Press, July 2017), Chapter 45, Columbia Public Law Research Paper No. 14-553, American University, WCL Research Paper No. 2017-21, Available at SSRN: https://ssrn.com/abstract=2978359
International Committee of the Red Cross, Autonomy, artificial intelligence and robotics: Technical aspects of human control, Geneva, August 2019, https://www.icrc.org/en/document/autonomy-artificial-intelligence-and-robotics-technical-aspects-human-control
https://www.europarl.europa.eu/cmsdata/194143/SEDE_presentation_Verbruggen_3December2019-original.pdf
https://www.economist.com/science-and-technology/2019/09/07/artificial-intelligence-is-changing-every-aspect-of-war
https://www.youtube.com/watch?v=8GwBTFRFlzA
Autonomous lethal weapons: ICRC report, May 2021
https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
Sharkey, Noel, “Autonomous warfare. Ensuring meaningful human control over killer machines is vital to global security”, Scientific American, n° 2, February 2020, pp. 52-57.
DE GANAY, Claude. GOUTTEFARDE, Fabien. « Rapport d’information de la Commission de la défense et des forces armées de l’Assemblée nationale sur les systèmes d’armes létaux autonomes », n° 3248, 22 juillet 2020.
https://theconversation.com/systemes-darmes-letales-autonomes-y-aura-t-il-un-terminator-tricolore-146425
https://www.dems.defense.gouv.fr/cdem/productions/biblioveilles/systemes-darmes-letales-autonomes-sala
https://www.defense.gouv.fr/salle-de-presse/discours/discours-de-florence-parly/discours-de-florence-parly-ministre-des-armees_intelligence-artificielle-et-defense
Algorithmic Arbitrariness in Content Moderation 2024 https://facctconference.org/static/papers24/facct24-151.pdf
Ovadya and Thorburn (Columbia University) Article (Oct 2023) on “bridging systems” to reduce divisive social media content. Bridging Systems: Open Problems for Countering Destructive Divisiveness Across Ranking, Recommenders, and Governance
https://santaclaraprinciples.org/images/SantaClara_Report.pdf
https://newrepublic.com/article/113045/free-speech-internet-silicon-valley-making-rules
https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/online-content-moderation
European Parliament, The impact of algorithms for online content filtering or moderation, “Upload filters”, Sept. 2020.
Gongane VU, Munot MV, Anuse AD. Detection and moderation of detrimental content on social media platforms: current status and future directions. Soc Netw Anal Min. 2022;12(1):129. doi: 10.1007/s13278-022-00951-3.
Maxwell, W. Applying Net neutrality rules to social media content moderation systems, Enjeux numériques, Annales des Mines, N° 18, juin 2022.
Maxwell, W. and Donnat, F., Le DSA impose aux plates-formes d’identifier les maux et d’inventer des remèdes, sous l’oeil de la Commission européenne, Le Monde, 26 Nov. 2022
Reuben Binns, Michael Veale, Max Van Kleek, Nigel Shadbolt, Like trainer, like bot? Inheritance of bias in algorithmic content moderation, arXiv:1707.01477, 2017
De Gregorio, Giovanni, Democratising Online Content Moderation: A Constitutional Framework (2019). Computer Law and Security Review, 2019 Forthcoming, Available at SSRN: https://ssrn.com/abstract=3469443
Gorwa R, Binns R, Katzenbach C. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society. January 2020. doi:10.1177/2053951719897945
Geiger, Christophe and Jütte, Bernd Justin, Platform liability under Article 17 of the Copyright in the Digital Single Market Directive, Automated Filtering and Fundamental Rights: An Impossible Match (January 30, 2021). published in: GRUR International 2021, Vol 70(6), p. 517., Available at SSRN: https://ssrn.com/abstract=3776267
Social media, fake news and democracy: Rapport Bronner
https://www.elysee.fr/admin/upload/default/0001/12/0f50f46f0941569e780ffc456e62faac59a9e3b7.pdf
Regulating Explainability in Machine Learning Applications – Observations from a Policy Design Experiment 2024 https://facctconference.org/static/papers24/facct24-143.pdf
https://www.thelancet.com/journals/landig/article/PIIS2589-7500(21)00208-9/fulltext
Valérie Beaudouin, Isabelle Bloch, David Bounie, Stéphan Clémençon, Florence d’Alché-Buc, James Eagan, Winston Maxwell, Pavlo Mozharovskyi, Jayneel Parekh, Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach, https://arxiv.org/abs/2003.07703
Clément Henin, Daniel Le Métayer, Beyond explainability: justifiability and contestability of algorithmic decision systems, AI & SOCIETY, July 2021 https://doi.org/10.1007/s00146-021-01251-8
Bin-Nun, A.Y., Derler, P., Mehdipour, N. et al. How should autonomous vehicles drive? Policy, methodological, and social considerations for designing a driver. Humanit Soc Sci Commun 9, 299 (2022). https://doi.org/10.1057/s41599-022-01286-2
Joe Collenette, Louise A. Dennis, Michael Fisher (The University of Manchester), Advising Autonomous Cars about the Rules of the Road
Baldini, G., Testing and certification of automated vehicles including cybersecurity and artificial intelligence aspects, EUR 30472 EN, Publications Office of the European Union, Luxembourg, 2020, ISBN 978-92-76-26818-5, doi:10.2760/86907, JRC121631.
Giannaros, A.; Karras, A.; Theodorakopoulos, L.; Karras, C.; Kranias, P.; Schizas, N.; Kalogeratos, G.; Tsolis, D. Autonomous Vehicles: Sophisticated Attacks, Safety Issues, Challenges, Open Topics, Blockchain, and Future Directions. J. Cybersecur. Priv. 2023, 3, 493-543. https://doi.org/10.3390/jcp3030025
Robertson, Cassandra Burke, Litigating Partial Autonomy (March 2023). 109 Iowa Law Review, 2023 Forthcoming, Case Legal Studies Research Paper No. 23-4, Available at SSRN: https://ssrn.com/abstract=4392073
Othman, K. Exploring the implications of autonomous vehicles: a comprehensive review. Innov. Infrastruct. Solut. 7, 165 (2022). https://doi.org/10.1007/s41062-022-00763-6
Shah, M.U., Rehman, U., Iqbal, F. et al. Exploring the human factors in moral dilemmas of autonomous vehicles. Pers Ubiquit Comput 26, 1321–1331 (2022). https://doi.org/10.1007/s00779-022-01685-x
S Mo Jones-Jang, Yong Jin Park, How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability, Journal of Computer-Mediated Communication, Volume 28, Issue 1, January 2023, zmac029, https://doi.org/10.1093/jcmc/zmac029
Yu, F., Moehring, A., Banerjee, O. et al. Heterogeneity and predictors of the effects of AI assistance on radiologists. Nat Med 30, 837–849 (2024). https://doi.org/10.1038/s41591-024-02850-w
Algorithm to allocate kidneys for transplant patients: https://slate.com/technology/2022/08/kidney-allocation-algorithm-ai-ethics.html
https://partage.imt.fr/index.php/s/d7PmBdqt9jiYPfy
Articles in “Polytechnique Insights”
Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447-453 (2019). DOI: 10.1126/science.aax2342
Leslie D, Mazumder A, Peppin A, Wolters M K, Hagerty A. Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ 2021; 372 :n304 doi:10.1136/bmj.n304
Digital twins in healthcare:
Katsoulakis, E., Wang, Q., Wu, H. et al. Digital twins for health: a scoping review. npj Digit. Med. 7, 77 (2024). https://doi.org/10.1038/s41746-024-01073-0 https://www.nature.com/articles/s41746-024-01073-0
Maxwell, Winston, The GDPR and Private Sector Measures to Detect Criminal Activity (March 2021). Revue des Affaires Européennes – Law and European Affairs, Available at SSRN: https://ssrn.com/abstract=3964066
European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (2020/2016(INI))
https://huggingface.co/blog/sasha/ai-environment-primer
Data for good, Livre blanc IA générative, 2023, https://dataforgood.fr/iagenerative/.
AFNOR, Spec IA frugale, https://www.ecologie.gouv.fr/presse/publication-du-referentiel-general-lia-frugale-sattaquer-limpact-environnemental-lia.
Power Hungry Processing: Watts Driving the Cost of AI Deployment? 2024 https://facctconference.org/static/papers24/facct24-6.pdf
Brevini, B. (2020). Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720935141; https://journals.sagepub.com/doi/abs/10.1177/2053951720935141
Special edition “annales des mines” https://annales.org/re/2023/re_110_avril_2023.pdf
Dhiman, R.; Miteff, S.; Wang, Y. ; Ma, S.-C.; Amirikas, R.; Fabian, B. (2024). “Artificial Intelligence and Sustainability—A Review”, Analytics, 2024, 3, 140–164. https://doi.org/10.3390/analytics3010008.
Goh H.-H., Vinuesa R. (2021). “Regulating artificial‑intelligence applications to achieve the sustainable development goals”, Discover Sustainability, 2, 52. https://doi.org/10.1007/s43621-021-00064-5.
Hacker, P. (2024). “Sustainable AI Regulation”, Common Market Law Review, 61, 2, 345-386.
Hao K. (2024). “AI is taking water from the desert”, The Atlantic, March 1st 2024, online: https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/.
Jones N. (2018). “How to stop data centres from gobbling up the world’s electricity”, Nature, September 12, 2018, https://www.nature.com/articles/d41586-018-06610-y.
McDonald J., Li B., Frey N. et al. (2022). “Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models”, Findings of the Association for Computational Linguistics: NAACL, 1962–1970.
Pagallo, U., Ciani Sciolla, J., Durante, M. (2022). “The environmental challenges of AI in EU law: lessons learned from the Artificial Intelligence Act (AIA) with its drawbacks”, Transforming Government: People, Process and Policy, 16, 3, 359-376. https://doi.org/10.1108/TG-07-2021-0121.
Patterson D., Gonzalez J., Le Q., et al. (2021). “Carbon Emissions and Large Neural Network Training”, arxiv. https://arxiv.org/pdf/2104.10350.pdf.
Ren S. (2023). “How much water does AI consume? The public deserves to know”, OECD.AI, 30th November 2023. Online: https://oecd.ai/en/wonk/how-much-water-does-ai-consume.
Stein A.L. (2020). “Artificial Intelligence and Climate Change”, Yale Journal on Regulation, 37, 890.
Strubell E., Ganesh A., McCallum A. (2019). “Energy and Policy Considerations for Deep Learning in NLP”, 57th Annual Meeting of the Association for Computational Linguistics, June 5, 2019, https://arxiv.org/pdf/1906.02243.pdf (see the back-of-the-envelope energy sketch after this group of references).
https://www.lecercledeladonnee.org/wp-content/uploads/2024/04/Empreinte-de-la-donnee-sur-le-vivant_FB.pdf
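A back-of-the-envelope calculation shows how training-energy estimates like those in Strubell et al. and Patterson et al. are assembled; every number below is an illustrative assumption, not a figure from those papers:

```python
# Illustrative training-emissions estimate: energy = accelerators x power x
# hours x PUE; emissions = energy x grid carbon intensity. Assumed values only.
gpus = 512                   # accelerators used for training (assumption)
gpu_power_kw = 0.4           # average draw per accelerator in kW (assumption)
hours = 14 * 24              # two weeks of training (assumption)
pue = 1.2                    # data-centre overhead factor (assumption)
kgco2_per_kwh = 0.4          # grid carbon intensity, kgCO2e/kWh (assumption)

energy_kwh = gpus * gpu_power_kw * hours * pue
emissions_tonnes = energy_kwh * kgco2_per_kwh / 1000
print(f"{energy_kwh:,.0f} kWh -> about {emissions_tonnes:.0f} tCO2e")
```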
AI Art is Theft: Labour, Extraction, and Exploitation 2024 https://facctconference.org/static/papers24/facct24-13.pdf
https://www.newyorker.com/magazine/2023/11/20/holly-herndons-infinite-art
https://www.newyorker.com/magazine/2024/01/22/who-owns-this-sentence-a-history-of-copyrights-and-wrongs-david-bellos-alexandre-montagu-book-review
Adrienne LaFrance, The Despots of Silicon Valley, The Atlantic, March 2024
Nielsen, A. Can cities shape future tech regulation?. Nat Cities 1, 10–11 (2024). https://doi.org/10.1038/s44284-023-00003-7
Solow-Niederman, Alicia, Do Cases Generate Bad AI Law? (December 31, 2023). Columbia Science and Technology Law Review, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4680641 or http://dx.doi.org/10.2139/ssrn.4680641
Ada Lovelace Institute Jan 2024 arguing for “FDA” model of regulation https://www.adalovelaceinstitute.org/report/safe-before-sale/#executive-summary-1
A. Tutt, An FDA for Algorithms (2017)
Gianclaudio Malgieri, Frank Pasquale, Licensing high-risk artificial intelligence: Toward ex ante justification for a disruptive technology, Computer Law & Security Review, Volume 52, 2024, 105899, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2023.105899 (https://www.sciencedirect.com/science/article/pii/S0267364923001097)
https://rm.coe.int/-1493-10-1b-committee-on-artificial-intelligence-cai-b-draft-framework/1680aee411
AI Nationalism(s): Global Industrial Policy Approaches to AI
https://www.economie.gouv.fr/files/files/directions_services/cge/commission-IA.pdf?v=1710339902
Sienna Project (EU) publications
Döbler, N.A., Carbon, CC. Vaccination against SARS-CoV-2: a human enhancement story. Transl Med Commun 6, 27 (2021). https://doi.org/10.1186/s41231-021-00104-2
Neurotechnology, law, and the legal profession, 2022
Arges, K., Assimes, T., Bajaj, V. et al. The Project Baseline Health Study: a step towards a broader mission to map human health. npj Digit. Med. 3, 84 (2020). https://doi.org/10.1038/s41746-020-0290-y
Bruynseels K, Santoni de Sio F and van den Hoven J (2018) Digital Twins in Health Care: Ethical Implications of an Emerging Engineering Paradigm. Front. Genet. 9:31. doi: 10.3389/fgene.2018.00031
https://www.mdpi.com/2075-4426/12/8/1255
https://lsspjournal.biomedcentral.com/articles/10.1186/s40504-021-00113-x
French competition authority decision on neighbouring rights: https://www.legipresse.com/011-52492-ia-et-droit-voisin-des-editeurs-de-presse-breves-observations-sur-la-decision-de-lautorite-de-la-concurrence-n-24-d-03-du-15-mars-2024-concernant-google.html
https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
This New Yorker article (Jan 2024) discusses some of the copyright and image rights issues in LLMs: https://www.newyorker.com/magazine/2024/01/22/who-owns-this-sentence-a-history-of-copyrights-and-wrongs-david-bellos-alexandre-montagu-book-review
Writers’ Guild settlement on use of AI for scriptwriting
https://www.wired.co.uk/article/us-writers-strike-ai-provisions-precedents
Crootof, R., Kaminski, M. E., Price, W., & Nicholson, I. I. (2023). Humans in the Loop. Vand. L. Rev., 76, 429.
Lyons, H., Wijenayake, S., Miller, T., & Velloso, E. (2022, April). What’s the appeal? Perceptions of review processes for algorithmic decisions. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-15).
Lyons, H., Miller, T., & Velloso, E. (2023, June). Algorithmic decisions, desire for control, and the preference for human review over algorithmic review. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 764-774).
Cheng, M. (2024). The right to human decision: analyzing policies, ethics, and implementation. In First International Conference on Addressing Socioethical Effects of Artificial Intelligence. https://openreview.net/pdf?id=VrxtvBw0ly
Colonna, Liane, Exploring the Relationship between Article 22 of the General Data Protection Regulation and Article 14 of the Proposed AI Act (February 16, 2024). Faculty of Law, Stockholm University Research Paper No. 124, Available at SSRN: https://ssrn.com/abstract=4729206 or http://dx.doi.org/10.2139/ssrn.4729206
Goodman, C. C. (2021). AI, can you hear me? Promoting procedural due process in government use of artificial intelligence technologies. Rich. JL & Tech., 28, 700. https://jolt.richmond.edu/files/2022/08/Goodman-Final-for-Publication.pdf
https://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=1542&context=glr
Mazur, J., & Bernatt, M. (2023). Can the Automated State Be Trusted? The Role of Rule of Law Safeguards for Governing Automated Decision-Making and Artificial Intelligence. Ga. L. Rev., 58, 1089.
Veluwenkamp, H. (2022). Reasons for meaningful human control. Ethics and Information Technology, 24(4), 51. https://link.springer.com/article/10.1007/s10676-022-09673-8
Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, 87 GEO. WASH. L. REV. 1 (2019)
Richard H. Fallon, Jr., “The Rule of Law” as a Concept in Constitutional Discourse, 97 COLUM. L. REV. 1, 5 (1997)
Davidovic, J. (2023). On the purpose of meaningful human control of AI. Frontiers in big data, 5, 1017677.
Brennan-Marquez, K., Levy, K., & Susser, D. (2019). Strange Loops. Berkeley Technology Law Journal, 34(3), 745-772.
Meg Leta Jones, The Right to A Human in the Loop: Political Constructions of Computer Automation and Personhood, 47 SOC. STUD. SCI. 216 (2017)
On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives 2024 https://facctconference.org/static/papers24/facct24-166.pdf
Buiten, M.C. Product liability for defective AI. Eur J Law Econ 57, 239–273 (2024). https://doi.org/10.1007/s10657-024-09794-z
Buiten, M., De Streel, A., & Peitz, M. (2023). The law and economics of AI liability. Computer Law & Security Review, 48, 105794.
Philosophy is crucial in the age of AI, The Conversation, Aug. 1 2024 https://theconversation.com/philosophy-is-crucial-in-the-age-of-ai-235907