Summary of the AI Act (AIA)
The EU Artificial Intelligence Act (AIA) is a European Union regulation that establishes a legal framework for the development and use of artificial intelligence (AI) in the EU. The AIA is based on the premise that AI can deliver significant benefits for society, but that it also poses risks to certain fundamental rights, such as the right to privacy and the right to non-discrimination. The AIA aims to strike a balance between promoting the benefits of AI and mitigating these risks.
The AIA takes a risk-based approach, grouping AI systems by the level of risk they pose:
- Unacceptable-risk AI: AI practices that pose an unacceptable risk to safety, fundamental rights, or democratic values are prohibited outright. Banned practices include social scoring by public authorities, manipulative techniques that exploit the vulnerabilities of specific groups such as children, and, subject to narrow law-enforcement exceptions, real-time remote biometric identification in publicly accessible spaces. (AI developed exclusively for military purposes falls outside the Act's scope.)
- High-risk AI: This category covers AI systems that pose a high risk to fundamental rights, such as systems used for automated decision-making in areas like recruitment, credit scoring, education, and access to essential services. These systems are subject to strict regulatory requirements, including a mandatory prior conformity assessment, data-governance and transparency obligations, and requirements for human oversight, accuracy, and robustness.
- Non-high-risk AI: This category covers AI systems that pose only limited or minimal risk to fundamental rights. Limited-risk systems, such as chatbots and AI-generated or manipulated content (including deepfakes), are subject to transparency obligations: users must be informed that they are interacting with an AI system or viewing artificial content. Minimal-risk systems face no additional obligations under the AIA, although providers are encouraged to adopt voluntary codes of conduct.
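As an illustration only, the tiered structure above can be sketched as a simple lookup. This is a hypothetical simplification for readers, not a legal classification tool; the tier names and obligation lists are assumptions condensed from this summary, not from the regulation's text.

```python
# Hypothetical sketch of the AIA's risk tiers as summarized above.
# Tier names and obligation lists are illustrative assumptions,
# not legal categories or exhaustive duties under the regulation.
OBLIGATIONS = {
    "unacceptable": ["prohibited from the EU market"],
    "high": [
        "prior conformity assessment",
        "data governance and transparency",
        "human oversight",
    ],
    "non-high": ["transparency duties for limited-risk uses (e.g. chatbots)"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the summary-level obligations for a given risk tier."""
    if tier not in OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {tier}")
    return OBLIGATIONS[tier]
```

The point of the sketch is the ordering: obligations scale with risk, from an outright ban down to light transparency duties.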
The AIA also sets out requirements for specific AI applications, such as remote biometric identification, emotion recognition, and general-purpose AI models, which carry their own transparency and documentation obligations. These requirements aim to address the specific risks associated with these applications.
The AIA entered into force in August 2024, and its obligations apply in phases over the following years. It represents a significant step towards creating a comprehensive regulatory framework for AI in the EU.