Article updated on
September 4, 2024

Glossary of terms used in the EU AI Act

Here is a glossary of key terms used in the EU AI Act (Regulation (EU) 2024/1689):

  • AI system refers to a machine-based system that operates with varying levels of autonomy and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, without being explicitly programmed for each specific input-output mapping.
  • A General Purpose AI System (GPAI) is an AI system that displays significant generality and can serve a variety of purposes. GPAI systems are characterized by:
    • Versatility across tasks: unlike specialized AI systems designed for a single function (such as medical diagnosis or credit scoring), GPAI systems can handle many different tasks. Examples include large language models (LLMs) like GPT, image recognition models, and other foundation models that can be used across different sectors.
    • A wide range of applications: GPAI systems can be applied in numerous contexts, such as customer service, content creation, and decision-making processes. Their adaptability makes them useful in fields like healthcare, finance, and education.
    • Specific regulatory attention: the EU AI Act identifies and regulates GPAI systems precisely because of this broad application potential and the risks that come with such versatility. Because these systems can be deployed in high-risk or critical sectors (such as healthcare or transport), the regulation emphasizes transparency, accountability, and safety measures for GPAI.
  • High-risk AI system refers to an AI system that poses a significant risk of harm to people's health, safety, or fundamental rights, typically because it is used in sensitive areas such as critical infrastructure, education, employment, access to essential services, law enforcement, or the administration of justice. Such systems are subject to the Act's strictest requirements before they can be placed on the market.
  • Unacceptable AI system refers to an AI system whose use is prohibited because it poses an unacceptable risk to human safety or fundamental rights and values, such as social scoring by public authorities or the manipulation of people through subliminal techniques.
  • Prior conformity assessment refers to the process of verifying, before an AI system is placed on the market, that it complies with the requirements of the EU AI Act. Depending on the type of system, this assessment may be carried out by the provider itself or by an independent notified body.
  • Data protection and transparency refers to the obligations on providers and users of AI systems to protect personal data and to be transparent about the operation of their systems.
  • Human oversight refers to the requirement for high-risk AI systems to have mechanisms in place that allow humans to monitor the systems' operation and to intervene in or override their decisions.
  • Post-market monitoring refers to the obligation on providers of AI systems to monitor the performance of their systems and to take corrective action if necessary.
  • Notified Body refers to an independent conformity assessment body that has been designated by an EU Member State to assess the conformity of AI systems under the Act.
  • EU Declaration of Conformity refers to a document that states that an AI system has been assessed and found to meet the requirements of the EU AI Act.
  • Fundamental rights and values refers to the rights and values enshrined in the Charter of Fundamental Rights of the European Union.
  • Harm refers to any negative impact on individuals, society, or the environment.
  • Safety refers to the state of being protected from harm.
  • Transparency refers to the obligation to make an AI system's operation and outputs understandable, for example by informing people that they are interacting with an AI system.
  • Accountability refers to the ability to explain and justify decisions made by or with an AI system.
  • Non-high-risk AI system refers to any AI system that does not fall into the high-risk category and is therefore subject to lighter obligations, mainly concerning transparency.

This glossary is not exhaustive; the EU AI Act uses other terms that are not defined here. It nevertheless provides a good starting point for understanding the key terminology used in the regulation.