The EU AI Act[1] imposes different obligations depending on your role and the risk level of your AI systems. This checklist outlines what needs to be done; the legislative timeline explains when each obligation takes effect.
Step 1: Identify your role
The Regulation distinguishes between providers (those who develop or place AI systems on the market) and deployers (those who use them under their authority) (Art 3(3) and (4)). Many businesses are both. Obligations depend on the role you occupy for each system, and substantially modifying a system may make you its provider (Art 25).
Step 2: Confirm you meet the obligations already in force
Since 2 February 2025, two requirements apply to every business using AI. First, ensure that staff working with AI systems have sufficient AI literacy (Art 4).[2] Second, verify that none of your AI practices fall within the prohibited categories in Article 5: manipulative or deceptive techniques that materially distort behaviour, workplace emotion inference, social scoring, and others.
Step 3: Map your AI systems and classify their risk level
List every AI system your business provides or deploys and determine whether each is high-risk. A system is high-risk if it serves one of the purposes in Annex III,[3] unless it meets all the conditions of the derogation in Article 6(3). Systems that profile individuals are always high-risk regardless of that derogation.
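For businesses tracking a larger inventory, the classification logic of this step can be captured in a short routine. Here is a minimal sketch in Python, assuming a hypothetical AISystem record; the boolean fields compress legal questions that need case-by-case analysis, so treat this as an inventory-tracking aid, not a legal test.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    annex_iii_purpose: bool         # serves a purpose listed in Annex III
    meets_art_6_3_derogation: bool  # all conditions of the Art 6(3) derogation are met
    profiles_individuals: bool      # performs profiling of natural persons

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the Annex III classification route in Article 6."""
    if not system.annex_iii_purpose:
        return False  # the Annex I product-safety route is not modelled here
    if system.profiles_individuals:
        return True   # profiling overrides the Art 6(3) derogation
    return not system.meets_art_6_3_derogation

# Example: a CV-screening tool (employment is an Annex III area) that profiles candidates
cv_screener = AISystem("cv-screener", annex_iii_purpose=True,
                       meets_art_6_3_derogation=False, profiles_individuals=True)
assert is_high_risk(cv_screener)
```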
Step 4: Meet the provider obligations for high-risk AI
Providers of high-risk AI systems must: establish a risk management system (Art 9); meet data governance standards (Art 10); prepare technical documentation (Art 11); enable automatic event logging (Art 12); supply instructions to deployers (Art 13); design for human oversight (Art 14); meet accuracy, robustness, and cybersecurity standards (Art 15); implement a quality management system (Art 17); carry out a conformity assessment (Art 43); issue an EU declaration of conformity (Art 47); affix the CE marking (Art 48); and register the system in the EU database (Art 49).
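Because the provider duties form a fixed list, some teams track them per system as a simple checklist structure. A minimal sketch follows; the article numbers come from the list above, while the tracker itself and its names are hypothetical.

```python
# Provider obligations for one high-risk system, keyed by article number.
PROVIDER_OBLIGATIONS = {
    "Art 9":  "risk management system",
    "Art 10": "data governance",
    "Art 11": "technical documentation",
    "Art 12": "automatic event logging",
    "Art 13": "instructions for deployers",
    "Art 14": "human oversight by design",
    "Art 15": "accuracy, robustness, cybersecurity",
    "Art 17": "quality management system",
    "Art 43": "conformity assessment",
    "Art 47": "EU declaration of conformity",
    "Art 48": "CE marking",
    "Art 49": "EU database registration",
}

def outstanding(completed: set[str]) -> list[str]:
    """Obligations not yet marked complete for this system."""
    return [f"{art}: {duty}" for art, duty in PROVIDER_OBLIGATIONS.items()
            if art not in completed]

# After finishing risk management and data governance, ten items remain:
print(outstanding({"Art 9", "Art 10"}))
```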
Step 5: Meet the deployer obligations for high-risk AI
Deployers of high-risk AI systems must: assign human oversight to competent persons (Art 26(2)); ensure input data is relevant and representative (Art 26(4)); monitor the system in line with the instructions for use (Art 26(5)); retain logs for at least six months (Art 26(6)); inform workers before workplace deployment (Art 26(7)); and inform affected individuals that AI is involved in decisions about them (Art 26(11)). Public bodies and providers of public services must also carry out a fundamental rights impact assessment (Art 27).
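The log retention duty is one of the few obligations with a hard numeric parameter, so it translates naturally into a retention policy. A minimal sketch, assuming a 183-day constant as an approximation of six months; note that Art 26(6) allows other Union or national law to require longer, so the floor here is a minimum, not a target.

```python
from datetime import datetime, timedelta, timezone

# ~six months, the Art 26(6) minimum; other EU or national law may require longer
MIN_RETENTION = timedelta(days=183)

def may_delete(log_written_at: datetime, now: datetime | None = None) -> bool:
    """A log record may only be deleted once the minimum retention period has passed."""
    now = now or datetime.now(timezone.utc)
    return now - log_written_at > MIN_RETENTION

# A record written today must be kept:
assert not may_delete(datetime.now(timezone.utc))
```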
Step 6: Meet the transparency obligations
Regardless of risk level, if your AI system interacts with people, they must know it is AI (Art 50(1)). If it generates synthetic content, that content must be marked in a machine-readable format (Art 50(2)). Deployers of emotion recognition or biometric categorisation systems must inform exposed persons (Art 50(3)), and deep fakes must be disclosed as artificially generated (Art 50(4)).
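How to implement the machine-readable marking is left to technical standards, and content-credential schemes such as C2PA are the likely long-term route. As a simpler illustration, here is a sketch that writes a JSON provenance sidecar next to a generated file; the schema and function name are invented for this example and are not taken from the Act or any standard.

```python
import json
from pathlib import Path

def write_ai_disclosure(output_file: str, generator: str) -> None:
    """Attach a machine-readable 'artificially generated' record as a JSON sidecar."""
    record = {
        "artificially_generated": True,
        "generator": generator,
        "disclosure_basis": "Regulation (EU) 2024/1689, Art 50(2)",
    }
    Path(output_file + ".provenance.json").write_text(json.dumps(record, indent=2))

write_ai_disclosure("summary.pdf", generator="example-model-v1")
```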
What this means
This checklist shows what compliance generally requires. The approach for each step depends on your systems, sector, and risk profile. The work may take months. Starting now means you will be ready regardless of whether the Digital Omnibus[4] postpones the high-risk deadlines.
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, OJ L, 12.7.2024 (EUR-Lex).
2. European Commission, ‘AI Literacy – Questions & Answers’ (May 2025), digital-strategy.ec.europa.eu.
3. Annex III lists eight areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.
4. A proposed measure that would simplify some technical parts of the AI Act so that certain rules can be applied on time in a fair and practical way.