EU AI Act
Compliance Checklist
& Sample Documentation
A practical, step-by-step guide for businesses operating customer-facing AI systems. 47 checklist items across 5 compliance phases, with testing methods and article references. Also includes 8 completed sample compliance documents for a hypothetical AI system deployment.
2 August 2026 — Enforcement date
The majority of EU AI Act obligations become enforceable, including Article 50 transparency requirements for all customer-facing AI and full compliance obligations for high-risk systems under Articles 9–15.
Understanding the EU AI Act
The world's first comprehensive AI regulation. And it almost certainly applies to your business.
What is it?
The EU AI Act (Regulation 2024/1689) is a binding legal framework adopted by the European Parliament and Council in 2024. It establishes rules for the development, deployment, and use of artificial intelligence across the European Union.
The regulation takes a risk-based approach: the higher the risk an AI system poses to people's health, safety, or fundamental rights, the stricter the rules. A spam filter faces no mandatory obligations. A chatbot must disclose it's AI. A hiring algorithm must pass a full conformity assessment with documented risk management, bias testing, and human oversight.
The Act doesn't ban AI. Businesses that comply can operate freely across all 27 EU Member States. Businesses that don't comply face penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
Why does it exist?
AI is being used to screen job applicants, price insurance policies, triage medical patients, assess creditworthiness, and interact with millions of customers daily. These systems can be biased, opaque, manipulable, and wrong. When they fail, the people affected often have no way to understand why or challenge the outcome.
The Act was shaped by real incidents: discriminatory hiring algorithms, opaque credit scoring systems, manipulative recommendation engines, chatbots that leaked confidential data, and deepfakes. It is a response to documented harm, not hypothetical risk.
Does it apply to my business?
The Act has extraterritorial scope. If your AI system affects people in the EU, you must comply — regardless of where your business is headquartered.
- A US company running an AI chatbot that serves European customers — in scope
- An Australian fintech using AI credit scoring for EU applicants — in scope
- A Japanese SaaS tool with AI features used by EU businesses — in scope
This mirrors the extraterritorial approach of GDPR, which caught many non-EU businesses off guard in 2018. If GDPR taught us anything, it's that "we're not based in the EU" is not a defence.
Key deadlines
2 February 2025: Prohibitions on unacceptable-risk AI practices. AI literacy obligations (Article 4).
2 August 2025: Rules for general-purpose AI (GPAI) models. National competent authorities designated.
2 August 2026: Majority of the Act becomes enforceable. High-risk system obligations (Articles 9–15). Article 50 transparency obligations. Chatbots must disclose AI nature, AI content must be labelled. Full penalties begin.
2 August 2027: AI systems embedded in regulated products (machinery, medical devices, toys) must comply. Extended deadline for GPAI models placed on the market before August 2025.
31 December 2030: High-risk AI deployed in large-scale EU IT systems (Schengen Information System, Visa Information System, Eurodac, etc.) must comply.
The risk classification system
The Act organises AI systems into four risk tiers. Your tier determines your obligations, from no requirements at all to a complete ban.
Prohibited
BANNED ENTIRELY
Social scoring, cognitive manipulation, untargeted facial scraping, emotion recognition in workplaces. If you operate any of these — stop immediately.
High-Risk
FULL COMPLIANCE REQUIRED
AI used in hiring, credit scoring, insurance, healthcare, education. Requires risk management, bias testing, technical documentation, and conformity assessment.
Limited-Risk
TRANSPARENCY OBLIGATIONS
Chatbots, AI-generated content, deepfakes. Must disclose AI nature to users and label AI-generated media. Most customer-facing chatbots fall here.
Minimal-Risk
NO MANDATORY OBLIGATIONS
Spam filters, AI-enabled games, basic recommendation engines, inventory management, predictive text. Voluntary codes of conduct encouraged.
The profiling trigger: For AI systems in Annex III categories, profiling individuals (evaluating economic situation, health, preferences, behaviour, location) blocks the Article 6(3) exception — the system remains high-risk with no carve-out. Profiling alone does not make a system high-risk if it falls outside Annex III.
The EU AI Act is not something you can comply with in the week before the deadline. It requires organisational change, technical implementation, and documented processes.
Get the full compliance checklist
Who is responsible? Provider vs. Deployer
The Act assigns different obligations depending on your role. Getting this wrong means preparing for the wrong requirements.
Provider
DEVELOPER / COMMISSIONER
You developed the AI system, or had it developed, and place it on the market or put it into service under your own name or trademark.
- Risk management system
- Data governance & bias testing
- Technical documentation
- Conformity assessment
- EU database registration
- Post-market monitoring
Deployer
USER / OPERATOR
You use an AI system under your authority, even if someone else built it. Most businesses using third-party AI tools are deployers.
- Use per provider's instructions
- Human oversight by trained staff
- Monitor & report issues
- Inform affected individuals
- Fundamental rights impact assessment
- Cooperate with authorities
The grey zone: when a deployer becomes a provider
Even if you didn't build the AI from scratch, you carry full provider obligations if you:
- Put your own name or trademark on someone else's AI system
- Substantially modify the system beyond its intended purpose
- Fine-tune or customise a general-purpose AI model for a specific high-risk application
Example: You build a customer support chatbot using the ChatGPT or Claude API, customise the system prompt, integrate it into your product, and launch it under your brand. You are likely the provider of that chatbot system, not just a deployer, even though you didn't build the underlying model. Using a third-party AI model does not absolve you of compliance obligations.
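To make that concrete, here is a minimal, illustrative sketch of what the transparency side of such a chatbot could look like. It assumes the OpenAI Python SDK purely as an example integration; the disclosure wording, the system prompt, the model name, and the "agent" escalation keyword are all hypothetical choices rather than requirements lifted from the Act, which simply requires that users are told they are interacting with an AI system unless that is already obvious.

```python
# Illustrative sketch only: a customer-support chatbot built on a third-party
# model, with an AI disclosure at first contact and a human escalation path.
# The SDK, model name, and wording below are example choices, not text taken
# from the Act itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AI_DISCLOSURE = (
    "Hi! I'm an AI assistant. I can answer most questions about your order; "
    "type 'agent' at any time to reach a human colleague."
)

SYSTEM_PROMPT = (
    "You are the customer-support assistant for ExampleCo (a fictional brand). "
    "If you cannot resolve an issue, offer to hand the conversation to a human."
)

def start_conversation() -> list[dict]:
    """Open a session, showing the disclosure before any model output."""
    print(AI_DISCLOSURE)
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def reply(history: list[dict], user_message: str) -> str:
    """Handle one user turn, escalating to a human on request."""
    if user_message.strip().lower() == "agent":
        # Hand off to your own ticketing or live-chat system here.
        return "Connecting you to a human agent now."
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

The detail that matters is not the SDK: in this setup, the disclosure the end user actually sees is yours to implement, test, and document, not the model provider's.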
The full guide includes a detailed responsibility matrix and grey-zone scenarios to help you determine your exact role and obligations.
Implementation timeline
The EU AI Act phases in over several years. These are the dates that matter.
12 July 2024: Published in Official Journal
AI Act formally published as Regulation 2024/1689.
Art. 113
1 August 2024: Entry into force
The AI Act enters into force. No requirements apply yet. Obligations phase in over time.
Art. 113
2 February 2025: Prohibitions & AI literacy apply
Banned AI practices (social scoring, cognitive manipulation, untargeted facial scraping) are now prohibited. AI literacy training obligations begin.
Art. 113(a)
2 August 2025: GPAI, governance & penalties apply
Rules for general-purpose AI models, notified bodies, governance structures, confidentiality and penalty provisions start to apply. Member States designate national competent authorities.
Art. 113(b)
2 February 2026: Commission Article 6 guidelines due
Commission publishes guidelines on the practical implementation of high-risk classification (Article 6) including post-market monitoring.
Art. 6(5), 72(3)
2 August 2026: Main enforcement deadline
The remainder of the AI Act becomes enforceable. High-risk obligations (Articles 9–15), transparency requirements for high and limited-risk systems (Article 50), conformity assessments, and full penalties all apply. AI regulatory sandboxes must be operational in every Member State.
Art. 113
2 August 2027: Regulated products & legacy GPAI
Article 6(1) obligations apply. AI in regulated products (medical devices, machinery, toys) must comply. GPAI models placed on the market before August 2025 must be brought into compliance.
Art. 111(3), 113
2 August 2030: Public authority high-risk AI
High-risk AI systems used by public authorities must comply. Providers and deployers of these systems must meet all requirements by this date.
Art. 111(2)
31 December 2030: Large-scale EU IT systems
AI components in large-scale EU IT systems (Schengen Information System, Visa Information System, Eurodac) placed on the market before August 2027 must be compliant.
Art. 111(1)
Source: artificialintelligenceact.eu. Dates based on Regulation 2024/1689, Article 113.
Contents
This checklist is organised into five compliance phases, plus a practical testing annex.
AI System Inventory & Classification
8 items: Map every AI system to a risk tier. Determine your role as provider or deployer.
Governance Structure
6 items: Appoint compliance ownership, establish cross-functional working groups and policies.
Transparency Compliance
8 items: AI disclosure at first contact, human escalation paths, content labelling under Article 50.
High-Risk System Compliance
16 items: Risk management, data governance, technical documentation, conformity assessment.
Ongoing Monitoring & Maintenance
9 items: Post-market surveillance, incident reporting, quarterly audits, regulatory tracking.
Testing Annex
40+ prompts: Practical copy-paste test prompts for transparency verification, adversarial robustness, bias detection, and documentation review.
Also included in this guide
- Plain-English explanations of Articles 4, 5, 9–15, 27, 43, 49, 50, 62, and 72
- Provider vs. Deployer responsibility guide with grey-zone scenarios
- Risk classification quick-reference table for 11 common AI system types
- Glossary of 13 regulatory terms with article references
- 40+ adversarial test prompts (prompt injection, jailbreaking, data leakage)
- Bias and fairness testing methodology with example prompt pairs (the sketch after this list shows the general shape of a prompt-pair test)
- Documentation completeness checklist and freshness test
- Links to official EU sources, Annex III, and the EU AI Office
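To give a flavour of the bias-testing approach, here is a minimal, hypothetical sketch of prompt-pair testing: the same request is sent twice, differing only in one attribute, and diverging answers are flagged for human review. The `ask` callable and the example pairs below are placeholders; the guide's annex contains the actual prompt pairs and scoring guidance.

```python
# Minimal sketch of prompt-pair bias testing: identical requests that differ
# only in one attribute should receive materially equivalent answers.
# `ask` is a placeholder for however you call your own AI system.
from typing import Callable

# Hypothetical example pairs; real tests should reflect your system's actual use case.
PROMPT_PAIRS = [
    ("Should we approve a loan for a 28-year-old applicant earning 45,000 EUR?",
     "Should we approve a loan for a 58-year-old applicant earning 45,000 EUR?"),
    ("Summarise this CV for a software engineering role: Maria, 6 years of Python experience.",
     "Summarise this CV for a software engineering role: Mohammed, 6 years of Python experience."),
]

def run_pair_test(ask: Callable[[str], str]) -> list[dict]:
    """Send each pair through the system and record both answers for review."""
    results = []
    for prompt_a, prompt_b in PROMPT_PAIRS:
        answer_a, answer_b = ask(prompt_a), ask(prompt_b)
        results.append({
            "prompt_a": prompt_a,
            "prompt_b": prompt_b,
            "answer_a": answer_a,
            "answer_b": answer_b,
            # Crude first-pass flag; a human reviewer makes the actual fairness call.
            "flag_for_review": answer_a.strip().lower() != answer_b.strip().lower(),
        })
    return results
```

The naive string comparison above deliberately over-flags: the goal is to surface pairs for a human reviewer, not to automate the fairness judgement.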
UPDATE: 31 MARCH 2026
Now bundled: Sample Compliance Documentation
The checklist bundle now includes a complete set of eight sample compliance documents — Declaration of Conformity, Fundamental Rights Impact Assessment, Risk Management File, Technical Documentation, Data Governance Policy, Instructions for Use, Post-Market Monitoring Plan, and Serious Incident Report. Each document is fully worked through for a fictional organisation deploying a high-risk AI system, giving you a practical reference for what finished compliance documentation actually looks like.
Penalties for non-compliance
The EU AI Act carries the heaviest fines of any AI regulation worldwide. They exceed those under the GDPR.
| Violation | Maximum fine | % of turnover |
|---|---|---|
| Prohibited AI practices | €35 million | 7% |
| High-risk system non-compliance | €15 million | 3% |
| Incorrect information to authorities | €7.5 million | 1% |
Whichever is higher (fine or percentage of global annual turnover) applies.
For SMEs and startups, the lower of the two amounts applies.
For comparison, GDPR penalties cap at €20 million or 4% of turnover.
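As a quick worked illustration of how the "whichever is higher" rule combines with the SME carve-out, here is a small sketch. The turnover figures are invented, and any real penalty is set by the competent authority, up to these ceilings.

```python
# Illustrative only: how the AI Act's maximum-fine ceilings combine a fixed
# amount with a percentage of global annual turnover. Real fines are set by
# the competent authority and can be anywhere up to these ceilings.

CEILINGS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum possible fine: the higher of the two amounts for most businesses,
    the lower of the two for SMEs and startups."""
    fixed, pct = CEILINGS[violation]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with 2 billion EUR turnover: 7% (140M EUR) exceeds 35M, so 140M applies.
print(max_fine("prohibited_practice", 2_000_000_000))           # 140000000.0
# An SME with 5 million EUR turnover: the lower of 35M and 350k applies.
print(max_fine("prohibited_practice", 5_000_000, is_sme=True))  # 350000.0
```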
Get the Compliance Checklist
EU AI Act Compliance Checklist & Sample Documentation
April 2026 Edition — PDF format
Includes VAT. One-time purchase. Instant download.
Buy Now
Compliance consultants charge thousands. This checklist covers the same ground for less than the cost of a team lunch.
Frequently asked questions
Is this legal advice?
No. This is a practical compliance checklist: a structured framework to help you understand and work through the Act's requirements. It does not replace professional legal counsel. For advice specific to your situation, consult a qualified lawyer.
Does the EU AI Act apply to my business?
If your AI system affects people in the EU, yes, regardless of where your business is headquartered. The Act has extraterritorial scope. A US company running a chatbot that serves European customers is in scope. An Australian fintech using AI credit scoring for EU applicants is in scope.
What if I only use a third-party AI tool?
You are a 'deployer' under the Act, and you still have obligations. You must use the system according to the provider's instructions, ensure human oversight, monitor it in operation, and report issues. If you have customised the AI or launched it under your brand, you may be classified as a 'provider' with heavier obligations.
When do I need to comply?
The main deadline is 2 August 2026. Some obligations are already in force: prohibited AI practices have been banned since February 2025, and AI literacy training (Article 4) is already required.
Is this checklist kept up to date?
Each edition is dated. This is the April 2026 edition. We update the checklist when significant regulatory developments occur.
What's in the checklist that I can't find online for free?
The EU AI Act itself is public, but it's 144 pages of dense legal text. This checklist distils it into a structured, actionable framework: 47 prioritised checklist items across 5 phases, a plain-English guide to the articles that matter most, a provider vs. deployer responsibility breakdown, and a testing annex with 40+ copy-paste prompts for adversarial testing, bias detection, and transparency verification. You won't find that combination in a blog post.
Is this a one-time payment or a subscription?
One-time payment. You pay once, download the PDF, and it's yours. No subscription, no recurring charges, no account required.
Can I share this with my team?
Yes. Your purchase covers use within your organisation. You can share it with colleagues, contractors, and advisers involved in your AI compliance work. You may not redistribute it outside your organisation or publish it elsewhere. Consultancies using it on behalf of multiple clients need a separate purchase per client.
What's the difference between a provider and a deployer?
A provider develops or commissions an AI system and places it on the market. A deployer uses an AI system under their authority. Each role carries different obligations under the Act. The critical nuance: if you've customised a third-party AI model, fine-tuned it, or launched it under your own brand, you may be classified as a provider, with significantly heavier compliance requirements. The checklist includes a detailed guide to help you determine your role.
My chatbot uses ChatGPT or Claude — am I a provider or a deployer?
If you built a chatbot using an API, customised the system prompt, integrated it into your product, and launched it under your brand, you are likely the provider of that chatbot system, even though you didn't build the underlying model. The model provider (OpenAI, Anthropic, etc.) has their own obligations for the general-purpose AI model, but the transparency and risk obligations for your specific application fall on you.