EU AI Act: Everything You Need to Know

If your business builds, buys, or uses AI in Europe, the EU AI Act is now essential reading.
It applies not only to European companies but also to any organization whose AI touches the EU market.
In this guide, you’ll learn what the EU AI Act is, how its legislation is structured, the concrete impact on your business, and how to integrate its requirements into your workflows.
What is the EU AI Act?
The EU AI Act is the world’s first comprehensive law for artificial intelligence. It sets harmonized rules for how AI is built, deployed, and supervised across the European Union.
The law officially entered into force on 1 August 2024 after publication in the EU’s Official Journal on 12 July 2024. From there, its obligations phase in over time rather than all at once.
You’ll see the Act referenced as Regulation (EU) 2024/1689. Because it’s a regulation (not a directive), the rules apply directly in every Member State without national transposition.
If you build or integrate general-purpose AI (GPAI) models, the law adds extra transparency and governance duties. An EU AI Office inside the European Commission coordinates these rules, especially for GPAI, and works with national authorities on enforcement.
The Act’s requirements roll out in stages. It’s already in force: bans on prohibited practices applied first, then GPAI duties, with most high-risk obligations landing later. The official timeline highlights the OJ publication (12 July 2024) and entry into force (1 August 2024), with staged applicability following. The Commission has also reaffirmed that deadlines stay in place despite industry calls for delays.
EU AI Act Legislation & Regulatory Structure

To understand how the EU AI Act shapes your obligations, let’s break down its legislative architecture, key obligations by risk level, and how enforcement is built in.
Prohibited AI practices (the “red line”)
Some AI uses are outright banned. The Act prohibits AI systems that:
- Use subliminal or manipulative techniques that distort behavior without conscious awareness.
- Exploit vulnerable groups (due to age, disability, social status) in a way that causes harm.
- Perform social scoring or generalized behavioral classification in a way that results in discriminatory or unjust treatment.
- Use biometric categorization to infer sensitive traits like race, religion, or sexual orientation (except in tightly regulated law enforcement uses).
If your system falls into one of these categories, it’s disallowed in the EU.
The risk-based approach (what applies to you)
The Act tiers obligations by risk, so you don’t over-engineer compliance for simple tools and you don’t under-govern critical ones. Here’s how to read the levels and what you should do.
Minimal or negligible risk.
Think spam filters or innocuous automation. You have no heavy legal duties under the Act. Keep going, but follow good engineering practices.
Limited risk.
If users interact directly with your AI—like a chatbot or recommender—you must be transparent that they’re engaging with AI. Make that disclosure clear and timely in your UX.
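For instance, here is a minimal sketch of what that disclosure could look like in a chat flow. The wording, class names, and session handling are illustrative assumptions, not a prescribed compliance pattern:

```python
# Hypothetical sketch: show an AI disclosure the first time a user interacts
# with a chatbot in a session. Wording and names are illustrative only.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

class ChatSession:
    def __init__(self):
        self.disclosure_shown = False

    def reply(self, model_answer: str) -> str:
        # Prepend the disclosure once per session, before any AI-generated text.
        if not self.disclosure_shown:
            self.disclosure_shown = True
            return f"{AI_DISCLOSURE}\n\n{model_answer}"
        return model_answer

session = ChatSession()
print(session.reply("Hi! How can I help with your order?"))
```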
High risk.
This covers AI used in high-stakes areas such as hiring, education, healthcare, critical infrastructure, and access to essential services. Here you need robust governance: risk management, strong data quality controls, human oversight, technical documentation, accuracy and robustness, audit logs, and pre-market conformity assessment. If you’re a provider, expect meaningful prep work before launch and ongoing monitoring after.
General-purpose AI (GPAI).
If you develop or provide a broad model others will adapt downstream, you carry extra transparency and documentation duties. You may need to publish a summary of copyrighted training data sources and maintain model cards and transparency reports. For models with “systemic risk,” anticipate deeper testing and adversarial robustness checks.
You can also opt into voluntary codes of conduct when you’re outside high-risk or GPAI scope, which is a smart way to demonstrate accountability to customers and regulators.
Obligations for high-risk AI systems
If your AI qualifies as high-risk, several responsibilities kick in — especially when you are a provider. You’ll need to:
- Establish and maintain risk management systems to detect, evaluate, and mitigate risks.
- Ensure data quality, manage bias, and validate and test training and input datasets.
- Produce and maintain technical documentation (design, architecture, control logic, limitations).
- Implement human oversight, so that humans can intervene or override decisions.
- Satisfy accuracy, robustness, cybersecurity, reliability requirements.
- Maintain post-market monitoring and report serious incidents or malfunctions.
- Complete conformity assessment and affix CE marking, often via notified bodies in the EU.
As for deployers (those who use AI systems), their obligations are lighter but still meaningful: follow usage guidelines, monitor operation in context, keep logs for at least six months, inform workers and users of risks.
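To make the deployer logging duty concrete, here is a minimal sketch of structured usage logs with a retention window of at least six months. The field names and file layout are assumptions for illustration, not a prescribed format:

```python
# Hypothetical sketch: record each AI-assisted decision as a structured log
# entry and only purge entries older than the retention window (>= 6 months).
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # keep at least six months of logs

def log_event(path: str, system: str, user: str, outcome: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,    # which AI system produced the output
        "user": user,        # operator or account involved
        "outcome": outcome,  # decision or recommendation recorded
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def purge_expired(entries: list[dict]) -> list[dict]:
    # Keep everything inside the retention window; never purge early.
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [e for e in entries if datetime.fromisoformat(e["timestamp"]) >= cutoff]
```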
Particular rules for general-purpose AI (GPAI)
GPAI models are more than just tools — they’re infrastructure. That’s why the Act treats them specially:
- Providers must publish summaries of the copyrighted data used for training.
- They must maintain technical documentation, model cards, and transparency reports.
- The model needs to be designed to prevent generation of illegal content (e.g. hate speech, disinformation).
- For GPAI models deemed “systemic risk,” additional testing, adversarial robustness checks, and reporting obligations may apply.
Finally, systems that are neither high-risk nor GPAI can still opt into voluntary codes of conduct under the Act.
Governance, enforcement & penalties
A good law is only as strong as its enforcement mechanism. EU AI Act governance is layered.
European AI Office & advisory bodies
The AI Act establishes an AI Office inside the European Commission to supervise cross-EU compliance, especially for GPAI, and to coordinate enforcement across national authorities. Advisory bodies include the European Artificial Intelligence Board, a Scientific Panel, and an Advisory Forum.
National authorities & market surveillance
Each EU Member State must designate national competent authorities (market surveillance bodies) to oversee compliance, conduct inspections, and enforce penalties. These authorities also designate notified bodies that handle conformity assessments for high-risk systems.
Enforcement powers & limits
National authorities wield investigatory powers (document requests, audits) under Regulation (EU) 2019/1020 on market surveillance. For GPAI models, the EU AI Office can exercise oversight directly. There are procedural safeguards, however: documents exchanged with legal counsel are protected, and firms have the right to be heard before decisions are taken.
Penalties for non-compliance
Violations are tiered by severity. Key fines include:
- Up to €35 million or 7% of global annual turnover for prohibited practices.
- Up to €15 million or 3% of turnover for breaches of most other obligations, including transparency and data governance duties.
- Up to €7.5 million or 1% of turnover for supplying incorrect, incomplete, or misleading information to authorities.
Any person or organization may lodge a complaint with a national authority. Regulators can impose corrective orders, require suspension of AI systems, or levy fines.
What is the concrete impact of the EU AI Act on businesses?

The EU AI Act isn’t an abstract regulation. It changes your roadmap, your governance, and how you sell into Europe. Here’s what you need to know.
3.1 Who is affected
The Act applies broadly. It binds:
- Providers – companies developing or placing AI on the EU market
- Deployers – companies using AI within the EU
- Importers and distributors – firms bringing AI systems into the EU or reselling them
It also applies extraterritorially: if your system is used in the EU, you’re in scope, regardless of where you built or host it.
3.2 Timelines you must track
The law entered into force on 1 August 2024, but obligations roll out in phases:
- February 2025 → bans on prohibited practices and AI literacy duties apply
- August 2025 → general-purpose AI (GPAI) rules kick in
- August 2026 → bulk of obligations for most systems apply
- August 2027 → high-risk AI linked to existing EU product-safety rules must comply
These deadlines are fixed. The Commission has already confirmed there will be no blanket delays.
3.3 New obligations you must plan for
For high-risk AI systems, you’ll need to:
- Run a risk management system
- Use quality, bias-checked datasets
- Produce and maintain detailed technical documentation
- Embed human oversight
- Meet accuracy, robustness, and cybersecurity requirements
- Keep post-market monitoring and incident reporting processes
- Pass conformity assessment and affix CE marking
Providers shoulder most of these obligations, but deployers must also follow usage guidelines, keep logs, monitor systems in their real context, and inform affected people when required.
For GPAI models, providers must:
- Publish summaries of copyrighted training data
- Produce transparency reports and technical documentation
- Design safeguards against illegal content
- Carry out adversarial testing if models are considered “systemic risk”
If you integrate GPAI into your products, you’ll need to flow these duties downstream and confirm your vendors comply.
3.4 Business functions most impacted
- Engineering & product → CI/CD pipelines now include dataset validation, bias testing, robustness checks, documentation updates, and logs for oversight.
- Legal & compliance → CE marking files, conformity assessments, vendor contract clauses, and regulator-response playbooks.
- Procurement & vendor management → new diligence on suppliers: training data lineage, transparency reports, incident pathways.
- Commercial & sales → compliance becomes a trust signal in RFPs. CE marking and GPAI transparency reports can shorten security reviews.
3.5 Costs and risks
Compliance isn’t free. Expect budget impacts from audits, documentation, new staff training, and potential system redesigns. For smaller firms, templates and codes of practice help lighten the load, but you’ll still need a governance baseline.
Fines scale quickly:
- Up to €35 million or 7% of global annual turnover for prohibited practices
- Up to €15 million or 3% for breaches of other obligations, including transparency and governance duties
- Up to €7.5 million or 1% for supplying incorrect or misleading information to authorities
Even before fines, investigations and corrective orders can disrupt operations and damage reputation.
How to integrate the EU AI Act into your processes

You don’t need a legal thesis. You need a repeatable playbook you can run every quarter. Here’s a practical sequence you can adopt today.
4.1 Map your AI footprint and roles
List every system that uses or provides AI across your product and back office. Tag who you are for each system: provider, deployer, importer, or distributor—because duties change with the role. Deployers, for instance, must use systems as instructed, assign human oversight, inform workers, and keep logs for at least six months.
4.2 Classify risk early
Place each system into the Act’s risk tiers. Check whether it’s high-risk under Article 6/Annex III (e.g., hiring, education, critical services). If yes, plan for strict controls; if no, note any limited-risk transparency duties. Keep this inventory live and recheck when features change.
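To make steps 4.1 and 4.2 concrete, here is a minimal inventory sketch. The high-risk list below is a simplified stand-in for Annex III categories and the tiering logic is an assumption for illustration, not a legal classification:

```python
# Hypothetical sketch of an AI system inventory with role tags and a rough
# risk tier. HIGH_RISK_USE_CASES is a simplified stand-in for Annex III;
# always confirm classification against the Act itself.
from dataclasses import dataclass

HIGH_RISK_USE_CASES = {"hiring", "education", "credit_scoring",
                       "critical_infrastructure", "essential_services"}

@dataclass
class AISystem:
    name: str
    role: str        # "provider", "deployer", "importer", or "distributor"
    use_case: str    # e.g. "hiring", "spam_filtering", "chatbot"
    user_facing: bool

def risk_tier(system: AISystem) -> str:
    if system.use_case in HIGH_RISK_USE_CASES:
        return "high"
    if system.user_facing:
        return "limited"   # transparency duties (e.g. AI disclosure)
    return "minimal"

inventory = [
    AISystem("CV screener", role="deployer", use_case="hiring", user_facing=False),
    AISystem("Support chatbot", role="provider", use_case="chatbot", user_facing=True),
]
for s in inventory:
    print(s.name, s.role, risk_tier(s))
```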
4.3 Build a lightweight Quality Management System (QMS)
For high-risk systems, a QMS is mandatory for providers. Bake in procedures for risk management, dataset governance, testing, documentation, human oversight, incident handling, and CE marking. Treat it like a product discipline, not a binder.
4.4 Design human oversight into the UX
Human oversight isn’t a checkbox. Define who can intervene, when they see alerts, and how they override the system. Document foreseeable misuse and the guardrails that reduce it. Put these controls where users actually work.
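Here is one way that gate might look in code, as a sketch only: the confidence threshold and the always-review categories are illustrative assumptions, not values taken from the Act.

```python
# Hypothetical sketch: gate AI recommendations behind human review when the
# model is unsure or the decision is high-impact. Threshold and categories
# are illustrative assumptions.
REVIEW_THRESHOLD = 0.85
ALWAYS_REVIEW = {"reject_candidate", "deny_service"}

review_queue: list[dict] = []

def decide(recommendation: str, confidence: float, context: dict) -> str:
    if confidence < REVIEW_THRESHOLD or recommendation in ALWAYS_REVIEW:
        # Park the item for a human reviewer instead of acting automatically.
        review_queue.append({"recommendation": recommendation,
                             "confidence": confidence,
                             "context": context})
        return "pending_human_review"
    return recommendation

print(decide("shortlist_candidate", 0.93, {"req_id": 42}))
print(decide("reject_candidate", 0.97, {"req_id": 42}))   # always reviewed
```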
4.5 Document like you’ll be audited
Create living technical docs: purpose, limitations, architecture, data sources, evaluation plans, and known risks. Keep versioned logs and change history—you’ll need them for post-market monitoring and, if high-risk, for conformity assessment. Automate as much evidence capture as possible.
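A small sketch of automated evidence capture, assuming a simple append-only log file; the fields and layout are illustrative, not a mandated format:

```python
# Hypothetical sketch: append one versioned evidence record per evaluation run.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_evaluation(log_path: str, model_version: str,
                      dataset_path: str, metrics: dict) -> dict:
    dataset_hash = hashlib.sha256(Path(dataset_path).read_bytes()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "dataset_sha256": dataset_hash,   # proves which data the results refer to
        "metrics": metrics,               # e.g. accuracy, error rate by group
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```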
4.6 Prepare for CE conformity (when applicable)
If your AI is high-risk, line up the conformity assessment path early. Some categories use internal control; others involve a notified body. Don’t leave evidence collection to release week—tie it to your CI/CD. Affix the CE mark when you meet requirements.
4.7 Align to harmonized standards
Track CEN-CENELEC work on AI standards and adopt relevant ones as they publish. Harmonized standards provide a presumption of conformity, shrinking ambiguity and audit friction. Assign an owner to watch standards updates and translate them into checklists.
4.8 Set up post-market monitoring
Decide how you’ll detect issues after launch: telemetry, error reports, user feedback, and periodic bias/accuracy checks. Define when you must notify authorities of a serious incident and how fast you can compile facts. Build a dry-run now; don’t invent this during a crisis.
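As a sketch of that kind of check, assuming you track a baseline accuracy figure, the monitor below opens an incident record when live performance degrades; whether something is a reportable serious incident remains a legal judgment:

```python
# Hypothetical sketch: flag performance drift after launch and open an
# incident record for triage. Baseline and thresholds are illustrative.
from datetime import datetime, timezone

BASELINE_ACCURACY = 0.92
MAX_DROP = 0.05   # tolerated degradation before escalation

def check_live_accuracy(predictions: list[int], labels: list[int]) -> dict | None:
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    if BASELINE_ACCURACY - accuracy > MAX_DROP:
        return {
            "opened_at": datetime.now(timezone.utc).isoformat(),
            "live_accuracy": round(accuracy, 3),
            "baseline": BASELINE_ACCURACY,
            "status": "needs_triage",   # escalate to compliance for assessment
        }
    return None

incident = check_live_accuracy([1, 0, 1, 1, 0, 0], [1, 1, 0, 1, 1, 0])
print(incident)
```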
4.9 Governance for GPAI (if you build on or provide it)
If you provide general-purpose AI, prepare summaries of copyrighted training content and transparency reports. Track the Commission’s GPAI Code of Practice and the official template for training-data summaries—signing and using them can lower your compliance burden.
4.10 Create a regulator-ready posture
Know your regulators: your national authority plus the EU AI Office (especially for GPAI). Keep a single source of truth for documents and designate a response team. Practice “table-top” drills so you can provide evidence quickly and coherently.
4.11 Vendor and contract hygiene
When you buy AI, request risk classification, intended use, technical docs, model cards, data-source summaries (for GPAI), and incident SLAs. Reflect provider vs deployer responsibilities in contracts so gaps don’t land on you. Re-assess vendors with each major release.
4.12 Tie it to your roadmap and deadlines
Lock your plan to the EU timeline: the Act is in force, with broad applicability from 2 August 2026 and additional dates for GPAI and other chapters before that. The Commission has confirmed there’s no pause, so sequence your gap-closing work now.
Noota: EU-Compliant AI Note-Taker

Noota encrypts your data end-to-end and stores it in EU data centers. It aligns with GDPR and enterprise security standards like SOC 2 and ISO 27001, so your legal team can breathe easier.
The product is purpose-built for professional meetings. It records, transcribes, and summarizes across video calls, phone interviews, and in-person conversations—so you capture every detail without juggling apps.
- Security first: EU-hosted, encrypted storage with recognized certifications helps you meet data-residency and protection requirements.
- Transparent workflow: Meeting capture, transcripts, and summaries create an auditable trail you can surface during internal reviews.
- Integrations that respect privacy: Sync notes and outcomes to tools like HubSpot while keeping data in the EU and under encryption.
- Languages at scale: Transcribe in 80+ languages and translate into 30+—useful for EU teams operating across borders.
Want to make the most of your meetings with a compliant AI tool? Try Noota for free now.