Article updated on September 4, 2024

What regulatory requirements apply to high-risk AI systems but not to non-high-risk AI systems?

These requirements include the following:

  • Mandatory prior conformity assessment: High-risk AI systems must undergo a conformity assessment, in certain cases carried out by an independent notified body, before they can be placed on the market or put into use;
  • Data protection and transparency: High-risk AI systems must comply with EU data protection law and must be transparent about how they operate;
  • Human oversight: High-risk AI systems must be designed so that they can be effectively overseen, and if necessary overridden, by humans;
  • Post-market monitoring: Providers of high-risk AI systems must monitor the performance of their systems after deployment and take corrective action where necessary.

By imposing stricter regulatory requirements on high-risk AI systems, the EU Artificial Intelligence Act (AIA) aims to mitigate the risks these systems pose while still promoting the benefits of AI for society.