European Parliament adopts INL to create a new AI liability regime

After intense negotiations with the other political groups, our legislative own-initiative proposal - drafted in the JURI committee - was adopted by an overwhelming majority in plenary.


Yesterday's vote demonstrated the overwhelming support (626 votes to 25) that our proposal enjoys in the European Parliament. In a nutshell, we argue that there is no need for overarching new AI liability provisions and that the well-functioning Product Liability Directive should continue to apply. The only legal gap we identified (backed up by the findings of several AI expert groups) is the liability of the operator/deployer of an AI system. Our report therefore sets out additional liability rules only for these actors, to guarantee that the victim - often an innocent bystander who does not even know about the operation of the harmful AI system - is always compensated. To avoid hampering European innovation in AI, the new rules concentrate mostly on high-risk systems.


The political negotiations


The path towards this clear vote result was quite complicated. To be fair, after reading all tabled AMs, we did not expect these complications: there was a 90% match with Renew's and ECR's AMs and a 75% match with the Greens, but not much common ground with S&D. Since the European Parliament favours consensus between the major political groups, there was a strong will to find a compromise with the S&D Shadow. At the end of September, all political groups, including ID and GUE, were nevertheless able to agree on a single text that was later adopted unanimously in JURI.


What did we secure?

  • regulation - harmonize liability rules to avoid fragmentation in the DSM

  • PLD - remains the legal base for liability claims against the producer

  • risk-based approach - to differentiate between AI systems

  • strict liability - only for a few high-risk AI systems

  • high-risk definition - based on the severity of harm / likelihood / manner of use

  • mandatory insurance for high-risk AI systems

  • delegated act - to depoliticize the high-risk classification and to let experts decide

  • standing committee - to include stakeholders (experts from NGOs, industry, ...)

  • harmonized rules on the amount / extent of compensation & limitation periods

  • but national rules continue to apply in case of fault-based liability (non-high-risk AI)

  • presumption of fault towards the operator - for all non-high-risk AI systems

  • joint and several liability of all operators

  • possibility of recourse for compensation among operators / the producer


What did we need to give up?

  • deployer - majority of political groups preferred the term 'operator'

  • backend operator - now included if not already liable under PLD

  • immaterial damages - our biggest defeat. The other groups wanted to include them even though strict liability regimes in Europe do not cover this type of harm. We fear that such an inclusion will lead to legal overlaps with already existing laws (such as the GDPR, anti-discrimination directives etc.)

  • ANNEX - our examples of potential high-risk AI systems were deleted. It is now up to the European Commission to bring forward a first draft ANNEX.


What happens next?


The European Commission now needs to integrate our balanced blueprint into its upcoming AI legislation. Our proposal will NOT be part of the horizontal AI framework in Q1 2021 but can be expected later that year (probably in Q3) as a stand-alone legislative proposal within a liability package (which also includes a review of the Product Liability Directive). Regardless of the exact timing or form of the proposal, the European Parliament stands ready and united.
