The European Union's landmark law on artificial intelligence officially comes into force today, Thursday, August 1st. The regulation aims to govern how companies develop, deploy, and use AI.
First proposed by the European Commission in 2021, the AI Act aims to address the negative impacts of artificial intelligence. It received final approval from EU member states, MEPs, and the European Commission last May.
The law will primarily affect large American tech companies, which are currently the main developers of advanced AI systems. However, many other businesses, including non-tech companies, will also fall within its scope.
The regulation establishes a comprehensive, harmonized framework for AI across the EU, applying a risk-based approach: the greater the potential risk posed by an AI system, the stricter the rules that govern it.
New Legislation Aims to Protect Fundamental Rights and Promote Innovation
According to the European Parliament, the new legislation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI systems, while also fostering innovation and positioning Europe as a leading force in the field. The regulation specifies different obligations for AI systems based on the potential risks and impacts of their use.
Prohibition of Certain Practices
The new rules ban AI applications that threaten citizens’ rights, such as biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases. The regulation also prohibits emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling a person or assessing their characteristics, and AI that manipulates human behavior or exploits people’s vulnerabilities.
Exceptions for Law Enforcement
The use of biometric identification systems by law enforcement is generally prohibited, except in narrowly defined situations exhaustively listed in the new regulation. Real-time biometric identification may be deployed only under strict safeguards, such as limits on time and geographical scope and prior judicial or administrative authorization; permitted uses include the targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote” biometric identification) is considered high-risk and requires judicial authorization linked to a criminal offense.
Obligations for High-Risk Systems
The regulation also sets clear obligations for other high-risk AI systems, given their potential to harm health, safety, fundamental rights, the environment, democracy, and the rule of law. High-risk uses include critical infrastructure, education and vocational training, employment, essential private and public services (such as healthcare and banking), law enforcement, migration and border management, and judicial and democratic processes. These systems must assess and mitigate risks, keep usage logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to file complaints about AI systems and to receive explanations of decisions made by high-risk AI that affect their rights.
Transparency Requirements
General-purpose AI systems, and the models on which they are based, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used to train them. More powerful models that could pose systemic risks face additional obligations, such as model evaluations, systemic-risk assessment and mitigation, and the reporting of serious incidents and malfunctions. In addition, artificial or manipulated images, audio, and video (known as “deepfakes”) must be clearly labeled as such.
Measures to Support Innovation and SMEs
National authorities in each member state must establish “regulatory sandboxes” and real-world testing environments for AI systems. These facilities must be accessible to SMEs and startups so they can develop and train innovative AI models before bringing them to market.