Article updated on September 4, 2024

What will the process be like to get a high-risk application approved by the EU commission?

To oversee the approval of new high-risk AI applications, the European Commission has created a new EU-level regulator called the European AI Office, which operates under the Directorate-General for Communication Networks, Content and Technology (DG CNECT).

The AI Office’s role is to monitor, supervise, and enforce the requirements of the AI Act for general-purpose AI (GPAI) models and systems across the 27 EU Member States. This involves assessing emerging and unforeseen systemic risks related to GPAI development and deployment, evaluating capabilities, conducting model assessments, and investigating potential cases of non-compliance or infringement. To support GPAI model providers in meeting compliance standards and to incorporate their viewpoints, the AI Office will develop voluntary codes of practice. Adherence to these codes will provide a presumption of conformity.

The EU intends to provide testing and assessment tools for verifying compliance with the regulations. One initiative currently under development is capAI, described in this paper (downloadable from this link), which contains detailed process guidelines and expected outputs for the assessment.

The EU also provides a Compliance Checker that lets organisations verify how the EU AI Act will affect their systems.