AI Law - International Review of Artificial Intelligence Law
G. Giappichelli Editore

13/03/2024 - The AI Act has been approved by the EU Parliament

Topic: Regulations - European Union Law

The European Parliament has officially approved the Artificial Intelligence Act, aiming to ensure AI safety and compliance with fundamental rights while fostering innovation. This landmark regulation, supported by a majority vote, seeks to mitigate risks associated with high-risk AI applications by setting strict obligations based on their potential impact.


Written by Marco Perilli

The European Union's Artificial Intelligence Act ("AI Act" or "the Regulation") aims to establish a uniform legal framework for the development, placing on the market, and use of artificial intelligence systems ("AI systems") within the European Union. The key objectives are:

  • Improving the functioning of the EU internal market through harmonized rules on AI
  • Promoting the uptake of trustworthy and human-centric AI
  • Ensuring a high level of protection of health, safety, fundamental rights, democracy, rule of law, and the environment against potential harms from AI systems
  • Supporting innovation in the field of AI

The Regulation seeks to facilitate the free movement of AI-based goods and services across the EU by preventing Member States from imposing restrictions unless explicitly authorized. It aims to apply the EU's values as enshrined in the EU Charter of Fundamental Rights.

The Regulation applies across sectors, to all AI systems placed on the market, put into service, or used in the EU. It provides for uniform obligations for operators and protections of public interests and individual rights across the internal market. It builds on and complements, but does not seek to affect, existing EU legislation on data protection, privacy, consumer protection, fundamental rights, labor, and product safety. Data subjects retain all rights and remedies under data protection law regarding automated decision-making.

Scope and Key Definitions

Key definitions in the Regulation include:

  • "AI system": Defined based on capabilities of inference, learning, reasoning, or modeling to influence virtual or real environments to achieve objectives. Aligns with international organizations' work on AI.

  • "Deployer": Any natural or legal person, including public authorities, using an AI system under their authority, excluding personal non-professional use. The use of a system may affect persons other than the deployer.

  • "Biometric data": Data enabling authentication, identification, or categorization of natural persons or emotion recognition. Aligns with definitions in GDPR, EU data protection regulation for EU institutions, and Law Enforcement Directive.

  • "Biometric identification": Automated recognition of human features to establish identity by comparison against a reference database, regardless of consent. Excludes verification/authentication systems.

  • "Biometric categorization": Assigning people to categories such as gender, age, hair/eye color, tattoos, behavior, personality, language, religion, minority status, or sexual/political orientation on the basis of biometric data. Excludes purely ancillary characteristics.

  • "Remote biometric identification system": AI system to identify persons without active involvement, remotely, by comparing against a reference database. Typically used to perceive multiple persons simultaneously. Excludes verification/authentication.


Harmonized Rules for High-Risk AI Systems

A central aspect of the AI Act is establishing common rules for "high-risk" AI systems to ensure a high and consistent level of protection of public interests concerning health, safety, and fundamental rights. These harmonized rules are intended to be applied across sectors in line with the EU's "New Legislative Framework" for products.

High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. The Regulation provides for two ways an AI system can be considered high-risk:

  1. The system is intended to be used as a safety component of a product covered by EU product safety legislation listed in Annex II. This includes machinery, medical devices, toys, lifts, gas appliances, pressure equipment, radio equipment, and more.

  2. The system falls within one of the areas listed in Annex III. These cover a range of applications with fundamental-rights and safety implications, such as biometric identification and categorization systems, AI systems used for law enforcement and criminal justice, systems determining access to educational institutions, recruitment tools, creditworthiness assessments, and more.

For high-risk AI systems, the Regulation sets out mandatory requirements that must be met before placing on the market or putting into service, as well as obligations on providers and deployers to ensure ongoing compliance. Core requirements for high-risk systems include:

  • Establishing and implementing a risk management system to identify and analyze known and foreseeable risks, estimate and evaluate such risks, and adopt suitable measures to eliminate or mitigate them
  • Using high-quality, relevant, representative, error-free, and complete training, validation and testing data
  • Drawing up and keeping up-to-date technical documentation demonstrating conformity with the Regulation's requirements
  • Achieving an appropriate level of transparency by providing information to deployers
  • Ensuring human oversight in the design and development to prevent or minimize risks
  • Achieving and maintaining a high level of accuracy, robustness, and security

Obligations on providers of high-risk AI systems include ensuring conformity with the above requirements, undergoing a conformity assessment, registering the system in an EU database, and cooperating with authorities. Deployers of high-risk systems have obligations regarding input data, monitoring operation for risks, informing the provider or distributor of any serious incidents or malfunctions, and keeping the logs generated by the system.

The Regulation also places restrictions on certain AI systems deemed to pose unacceptable risks to fundamental rights and safety, such as systems that manipulate human behavior to deprive people of free will, exploit vulnerabilities of specific groups, conduct social scoring by public authorities, or use real-time remote biometric identification in publicly accessible spaces for law enforcement (with exceptions).

 

Supporting Innovation and Regulatory Sandboxes

The AI Act includes measures to support innovation in AI, with a focus on helping small and medium-sized enterprises (SMEs) and startups. This includes establishing regulatory sandboxes to facilitate the development and testing of innovative AI systems under regulatory guidance and oversight. The sandboxes aim to help companies gain experience with applying the Regulation's rules and requirements.

The Regulation tasks the European Artificial Intelligence Board with supporting the establishment of, and cooperation among, AI regulatory sandboxes. Participation in sandboxes is subject to safeguards, and SMEs and startups are given priority access. Participants can obtain guidance from competent authorities on applying the Regulation.

 

Governance and Enforcement

The AI Act establishes a governance framework and enforcement mechanisms to oversee and ensure compliance with the new AI rules.

At EU level, the Regulation establishes a European Artificial Intelligence Board to facilitate harmonized implementation, advise the Commission on various aspects, and support cooperation among national supervisory authorities.

Member States must designate one or more national competent authorities to supervise the application and implementation of the Regulation. They must provide such authorities with adequate powers and resources.

Failure to comply with the rules set out in the Regulation can lead to significant penalties imposed by national authorities. For the most serious infringements, namely engaging in prohibited AI practices, fines of up to 7% of total worldwide annual turnover or 35 million euros (whichever is higher) can be imposed; non-compliance with most other obligations, including those applying to high-risk AI systems, carries lower maximums.
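
The "whichever is higher" mechanism is simple arithmetic and can be sketched as below. The percentage and fixed amounts are taken as parameters, since the Regulation sets different maximums for different tiers of infringement; the figures in the usage line are purely hypothetical.

```python
def fine_ceiling(worldwide_turnover_eur: float,
                 pct_cap: float, fixed_cap_eur: float) -> float:
    """Maximum administrative fine for a given tier: the higher of a
    percentage of total worldwide annual turnover and a fixed amount
    ('whichever is higher')."""
    return max(worldwide_turnover_eur * pct_cap, fixed_cap_eur)

# Hypothetical company and tier for illustration: with EUR 300 million
# turnover under a 4% / EUR 20 million tier, the percentage route gives
# EUR 12 million, so the fixed amount sets the ceiling.
assert fine_ceiling(300_000_000, 0.04, 20_000_000) == 20_000_000
```

The rule penalizes large undertakings through the turnover percentage while keeping a meaningful floor for smaller ones through the fixed amount.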

The approach to governance and enforcement largely follows that of the EU's data protection regime under the GDPR, with national supervisory authorities playing a central role and an EU-level body promoting consistency. However, the AI Act does not give the European Artificial Intelligence Board some of the powers its GDPR counterpart holds, such as investigative powers.