Please note: some events may only be accessible by invitation, but do not hesitate to get in touch for further information by writing to [email protected]
All times are Paris time (currently CET; CEST from 31-03-2024).
Organized by the Law & Technology Research Group at the Sciences Po Law School
📆 6 and 7 June 2024, all day
📍 Sciences Po Law School
➡️ Webpage here (includes link to register)
📎 Complete program:
13 September 2023, 1 PM - 3 PM, Sciences Po Law School - in person
with Marta Arisi (Sciences Po Law School)
Internal session
25 September 2023, 10 AM - 5 PM, Sciences Po Law School - in person
Organized by Prof. Raphaële Xenidis in collaboration with the FernUniversität in Hagen & Paris 1 Panthéon-Sorbonne
12 October 2023, 5 PM, Sciences Po Law School - in person
Law School event: info and registration here.
In this session, Prof. Sylvain will reflect on potential structural dimensions of data law reforms, with particular focus on emergent non-rights regulatory interventions.
Please note that the views expressed here are those of Prof. Sylvain in his personal capacity as a scholar, and they do not reflect the views of any FTC Commissioners or the Commission.
19 October 2023, 5-7 PM, Sciences Po Law School - in person
Law School event: info and registration here.
📆 9 November 2023, 7-9 PM
📍 Salle du conseil (5th floor), Sciences Po Law School (13 rue de l'Université, 75007 Paris) - in person
➡️ Link and registration here.
📆 15 November 2023, 1-2.30 PM
📍 Room J208, Sciences Po Law School (13 Rue de l’Université, 75007 Paris) - in person & online
➡️ Register through the following form.
Description
There is extensive research highlighting the environmental impacts of emerging technologies like AI, and of the technology industry generally. However, in the field of law (with limited recent exceptions) there has been little research exploring how these impacts should be factored into platform regulation. This is a serious oversight, because the EU (like other jurisdictions around the world) is currently passing a series of legislative reforms that will shape the business practices and environmental impacts of the platform economy for years to come.
Against this background, our work-in-progress paper aims to interrogate the role of dominant platforms in contributing to – or mitigating – environmental risks and to situate European debates around platform regulation in the context of the climate emergency. We highlight concrete links between climate policy and technology regulation through a legal analysis of the EU’s 2022 Digital Services Act (DSA). In particular, we focus on three sets of provisions from Chapter III Section 5 DSA: those on systemic risk assessment and mitigation, research data access, and the crisis response mechanism. These provisions apply to platforms and search engines with over 45 million EU users – those which exercise the most power over the broader economy and information environment, and which therefore represent particularly important targets for regulatory intervention. We highlight the DSA's potential as a regulatory lever to reduce the environmental impacts of the platform economy, but also raise concerns that it could be used to suppress vital political debates and climate activism.
About the speakers
Ilaria Buri is a research fellow at the “DSA Observatory” at the Institute for Information Law (IViR), University of Amsterdam. Her current research focuses on questions of trade secrets and research data access, and on the implementation and enforcement of the DSA systemic risk provisions. She is one of the organisers of the “EU Platform Regulation” summer course - also addressed to regulators and practitioners - which was held for the first time in July 2023.
Rachel Griffin is a PhD candidate and lecturer at Sciences Po Law School. Her research focuses on how EU platform regulation (in particular the DSA) addresses social inequalities and discrimination in the context of social media.
📆 21 November 2023, 1-2.30 PM
📍 Room K031 (building M), Sciences Po (1 Place St. Thomas d'Aquin, 75007 Paris) - in person
➡️ Event description here.
➡️ Register through the following form.
Description
Although proponents of online dispute resolution (ODR) systems proclaim that their innovations will expand access to justice for so-called “simple cases,” evidence of how the technology actually operates and who is benefitting from it demonstrates just the opposite. Resolution of some disputes may be more expeditious and user interfaces more intuitive. But to achieve this, parties generally do not receive meaningful information about their rights and defenses. The opacity of the technology (ODR code is not public and, unlike court appearances, its proceedings are private) means that due process defects and systemic biases are difficult to identify and address. Worse still, the “simple cases” argument for ODR assumes that the dollar value of a dispute is a reasonable proxy for its complexity and significance to the parties. This assumption is contradicted by well-established research on procedural justice. Moreover, recent empirical studies show that low-value cases, which dominate state court dockets, are for the most part debt collection proceedings brought by well-represented private creditors or public creditors (including courts themselves, which increasingly depend on fines and fees for their operating budgets). Defendants in these proceedings are overwhelmingly unrepresented individuals. What ODR offers in these settings is not access to justice for ordinary people, but rather a powerful accelerated collection and compliance technology for private creditors and the state.
This chapter examines the design features of ODR and connects them to the ideology of tech evangelism that drives deregulation and market capture, the aspirations of the alternative dispute resolution movement, and hostility to the adversary system that has made strange bedfellows of traditional proponents of access to justice and tech profiteers. The chapter closes with an analysis of front-end standards for courts and bar regulators to consider to ensure that technology marketed in the name of access to justice actually serves the legal needs of ordinary people.
About the speaker
Norman W. Spaulding is Sweitzer Professor of Law at Stanford Law School. His scholarship examines the history and ethics of the adversary system, procedural justice, and the effects of artificial intelligence and other technologies on the administration of justice. See Is Human Judgement Necessary? Artificial Intelligence, Algorithmic Governance, and the Law, in Dubber et al., The Oxford Handbook of Ethics and AI (2020).
📆 30 November 2023, 2.45-4.45 PM
Invite-only event - in person
Description
International inter-governmental treaty-based organizations (IOs) increasingly shape legal and regulatory practices affecting the production, processing and use of data by and for public and private actors across States. In turn, emerging global data laws and standards affect the practices and operations of IOs. To what extent can IOs contribute to solving (or mitigating) the distributional problems associated with unevenly distributed infrastructural control over data? What concerns might arise in the context of IOs' data-producing and data-driven activities? IOs' constitutive treaties are often silent on key data law issues, because their drafting predates current thinking and imagination around data. This enables IOs to chart their own path, but it also raises critical questions about what IO data governance should look like. What laws, regulations, norms, etc. should govern IOs' data practices?
About the speaker
Angelina Fisher is Director for Practice and Policy of Guarini Global Law & Tech and Adjunct Professor of Law at NYU Law. She holds an LLB from Osgoode Hall Law School and an LLM in International Legal Studies from New York University School of Law. She is the founder and co-teacher of the International Organizations Clinic. Her research interests include law and development, international organizations, technologies of governance (particularly uses of data and quantitative information), and global governance of education.
📆 30 November 2023, 5-6.30 PM
📍 Room 511, Sciences Po (199 Boulevard Saint Germain, 75007 Paris) - in person
➡️ Link to the event here - register through the following form.
Description
In recent years, AI has witnessed remarkable strides in text processing, predictive modelling, and data analysis. AI systems have become commonplace across various industries, changing the way we work, communicate, and make informed decisions. However, the legal domain presents unique challenges for AI systems due to its highly specialised, often open-ended language and argumentation rules. This presentation explores the recent developments in natural language processing and machine learning techniques that are transforming the legal landscape. We’ll delve into how these advancements can improve the analysis of online contracts, legal knowledge retrieval, and the identification of key arguments within legal documents. Methods from the various ALMA-AI projects will be presented to illustrate the creation of a legal dataset, large language model prompting techniques, and legal data analysis. Moreover, we will explore the potential of AI-driven analysis in assisting regulatory compliance. The EU rules on the digital economy are continuously evolving, and AI holds the potential to provide proactive insights into compliance and monitoring issues, helping organisations navigate complex regulatory frameworks and protect consumers. These advancements represent not only an evolution in technology but also a significant milestone in the intersection of law and artificial intelligence.
About the speaker
Rūta Liepiņa is a research fellow at the ALMA-AI Bologna interdisciplinary research centre for AI and Law (since 2022). She obtained her PhD from the European University Institute (2020) with work on logical models of causality and evidence in law. She was an Assistant Professor in Digital Legal Studies at Maastricht University (2019-2022). Her work focuses on the application of AI methods for legal analytics and formal models of legal reasoning. She is currently part of CLAUDETTE (Machine Learning Powered Analysis of Consumer Contracts and Privacy Policies) and the ERC project ‘CompuLaw’.
📆 11 December 2023, 9 AM - 6 PM
📍Room N207, Innovation Pavillon floor 2, Sciences Po (1 place Saint-Thomas d'Aquin, 75007 Paris) - in person
➡️ Register through the following form.
Description here.
Detailed program here.
📆 30 January 2024
🕓 16 - 17.30
In person and online (please write us for the link) - Invite-only
📍Room CS16, Sciences Po (1 Place St. Thomas d'Aquin, 75007 Paris)
Please go through the following materials before joining the session:
📆 2 February 2024
🕓 10.30 - 12.00
➡️ In person. Register here.
📍Room B.001, Building M, Sciences Po (1 Place St. Thomas d'Aquin, 75007 Paris)
📖 Materials here. Further materials will be sent by e-mail.
📆 13 February 2024
🕓 12.30 - 14.30
➡️ In person & online. Register here.
📍Room J208, Sciences Po (13 rue de l'Université, 75007 Paris)
About the book
The book argues that privacy law is implicitly grounded in concepts from contract law, which sets the rules for voluntary agreements, and that we should instead ground it in concepts from tort law, which sets the rules for harms caused to others. Departing from existing regulations and proposals, it proposes a plan to build accountability into the information economy for individual and group harms. The publisher's official page is available here.
About the speaker
Ignacio Cofone is an Associate Professor and Canada Research Chair in Artificial Intelligence Law & Data Governance at McGill University's Faculty of Law, where he teaches Privacy Law, Artificial Intelligence Regulation, and Advanced Obligations. His research focuses on law reform for privacy and AI, exploring how the law should adapt to technological and economic change. His current projects examine liability for privacy harm and AI discrimination.
Full bio available here.
📆 Tuesday 27 February
🕓 12.30-14 / bring your own lunch
➡️ Invite-only.
📍 Law School Meeting room, 410T, 13 Rue de l’Université (Paris 75007)
📖 Materials here.
📆 Thursday 7 (9.00-18.00) and Friday 8 March (9.00-13.00) 2024
📍 Sciences Po Law School, Pavillons Scientifiques (Building M), 1 Place Saint Thomas d'Aquin, Paris 75007
➡️ Law School Webpage here
🗂️ Description and program here
📆 Friday 8 March 2024
🕓 14.45-16.45
📍Sciences Po, Amphithéâtre Simone Veil, 28 rue des Saints-Pères (Paris 75007)
➡️ In person. The event is open to the public upon registration; register here.
This lecture delves into the challenges to the attribution of criminal responsibility arising from the use of AI in the military domain.
About the speaker: Dr. Marta Bo is a senior researcher at the University of Amsterdam-Asser Institute in The Hague and an Associate Senior Researcher at SIPRI’s Armament and Disarmament programme. Her research focuses on state responsibility and individual criminal responsibility for unlawful conduct in the use and development of military AI; AI and criminal responsibility; automation biases and mens rea; and disarmament and criminalisation. Marta leads, designs and implements capacity-building training projects for judiciaries in international and transnational criminal law, international humanitarian law, and human rights law. She has published on international and transnational criminal law (especially piracy and migrant smuggling), the International Criminal Court, complementarity, the law of the sea and human rights, artificial intelligence and criminal responsibility, autonomous weapons, self-driving cars, and state responsibility. Marta is a member of the Steering Committee of the Antonio Cassese Initiative for Justice, Peace and Humanity and editor of the international criminal law section of the Leiden Journal of International Law.
📆 Tuesday 12 March 2024
🕓 12.30 - 14.00
📍 Room 106, Building B, Sciences Po - 56 rue des Saints-Pères (Paris 75007)
➡️ In person and online; subscribe here (you will receive a link via e-mail)
📖 More info about the talk here
📆 Friday 15 March 2024
🕓 14.45-16.45
📍Sciences Po, Amphithéâtre Simone Veil, 28 rue des Saints-Pères (Paris 75007)
➡️ In person. This event is by invitation only.
Description: This presentation discusses the concept of fairness in predictive AI-based anti-corruption tools (AI-ACTs) by identifying possible risks at different levels – individual, institutional, and infrastructural – as well as their respective main sources and possible ways to mitigate them. It does so using empirical evidence from tools developed in Brazil to critically map challenges in three types of AI-ACTs: risk estimation of corrupt behaviour in public procurement, among civil servants, and of female straw candidacies in local electoral competition. The paper draws on 12 interviews with law enforcement officials directly involved in the development of anti-corruption technologies, as well as academic and grey literature, including official reports and dissertations on the tools used as examples. Findings suggest not only that AI-ACT developers have not been reflecting on potential risks of unfairness when creating their tools, but also that the existing models are based on findings from past anti-corruption procedures and practices that may reinforce unfairness against individuals from historically disadvantaged groups: in the case of the risk-scoring tools, against straw candidates and owners of supplier companies in public contracts, and against civil servants working in specific units with higher punishment records or affiliated with political parties. Although the tools under analysis do not make any automated decisions without human supervision, it is worth noting that their algorithms are not open to external auditing.
About the speaker: Dr. Margaret Monyani is a Senior Migration Researcher at the Institute for Security Studies in Pretoria, South Africa. With a rich academic background, she has served as a faculty member at the University of Witwatersrand, teaching in the Department of International Relations where she also earned her PhD. Dr. Monyani's research expertise lies at the intersection of gender, migration, and security. Her work, noted for its depth and insight, has significantly contributed to understanding the complex dynamics of migration, particularly in the context of gender and security issues. Dr. Monyani's academic contributions continue to shape policy and scholarly discourse in these critical areas.
📆 Monday 25 March 2024
🕓 17-19
📍Room 931, 9 rue de la Chaise (75007 Paris)
➡️ More info here
📆 Thursday 28 March 2024
🕓 17-19
📍 Room K031, Sciences Po, 1 Place Saint-Thomas d'Aquin (Paris 75007)
➡️ Info here
📆 29 March 2024
🕓 14.45-16.45
📍Sciences Po, Amphithéâtre Simone Veil, 28 rue des Saints-Pères (Paris 75007)
➡️ Info here
📆 Wednesday 3 April 2024
🕓 12.30 - 14.00
📍 Room J211, Sciences Po, 13 rue de l'Université (Paris 75007)
➡️ In person; register here (additional materials will be sent via e-mail)
📖 More info here.
📆 Thursday 11 April 2024
🕓 14.45 - 16.45
📍 Room B.010, Building M, Sciences Po, 1 Place St. Thomas d’Aquin (Paris 75007)
➡️ In person; register here
📖 Adam Zimmerman is a professor of law at the University of Southern California, specializing in tort law. Kate Klonick is a visiting professor at Sciences Po, a Fulbright Innovation Scholar 2023-2024, and a professor at St. John's University Law School. Their talk will draw on their new joint project, which combines Klonick's ongoing work on Article 21 of the Digital Services Act, mandating out-of-court dispute resolution bodies for content moderation on platforms, with Zimmerman's past work on corporate dispute resolution systems required by federal law and meant to address mass harms.
📆 Monday 15 April 2024
🕓 17-19
📍 Room K011, Building M, Sciences Po, 1 Place Saint Thomas d'Aquin (Paris 75007)
➡️ In person event, please register here.
📖 More info here.
📆 Thursday 23 May 2024
🕓 16.30-17.30 (Paris time)
📍Online (link to be shared with registered participants)
➡️ Register here
Let’s keep an eye on AI!
For more than a year, the discourse about generative AI has been explosive. Many generative AI tools, from OpenAI’s well-known ChatGPT to French startup Mistral to systems from companies like Baidu and ByteDance, have become part of our daily lives. Meanwhile, regulations have started to flourish all over the globe.
Since September 2023, the students of the DIGILaw Clinic at Sciences Po in collaboration with Open Terms Archive have endeavored to look into what we all agree to when using generative AI tools: their terms and conditions.
What do users consent to when clicking on the terms and conditions of generative AI tools? What are the regulatory responses from the United States of America, China and the European Union?
If you have ever wondered what is actually going on in the terms and conditions of generative AI tools, and what current and upcoming law says about generative AI, join our online event on May 23 to discover the Generative AI Watch project!