AI & Compliance
Deep Dive

The EU AI Act: What Dutch Businesses Need to Know in 2026

The EU AI Act is now in force. From risk classifications to mandatory conformity assessments, here is what every Dutch organization deploying AI systems must do to stay compliant — and avoid fines up to 35 million euro.

Mar 28, 2026
10 min read

The EU AI Act — formally Regulation (EU) 2024/1689 — is the world's first comprehensive legal framework for artificial intelligence. After entering into force on 1 August 2024, its provisions are now rolling out in phases. As of February 2025 the prohibitions on unacceptable-risk AI practices are enforceable, and by August 2026 the obligations for high-risk AI systems will apply in full.

For Dutch organisations, this is not a distant concern. The Netherlands is home to over 3,600 AI companies and ranks among the top-five European AI ecosystems according to Dealroom data. Whether you build AI products or simply embed third-party models into customer-facing workflows, understanding your obligations is essential.

How the AI Act Classifies Risk

The regulation uses a four-tier, risk-based approach:

1. Unacceptable Risk — Prohibited

These AI practices have been banned outright since 2 February 2025:

  • Social scoring by public authorities
  • Real-time biometric identification in public spaces for law enforcement (with narrow exceptions)
  • Subliminal manipulation that causes harm
  • Exploitation of vulnerabilities of specific groups (age, disability)
  • Emotion recognition in workplace and education settings
  • Untargeted scraping to build facial-recognition databases

Dutch employers, take note: using AI to infer employee emotions through webcam analysis — a practice some remote-monitoring tools offered — is now illegal.

2. High Risk — Heavy Obligations

AI systems in these domains face mandatory conformity assessments, human oversight, and incident reporting:

  • Recruitment and HR: CV screening tools, automated interview scoring, workforce management
  • Credit scoring and insurance: Automated lending decisions, risk profiling
  • Education: Exam proctoring, admission scoring
  • Critical infrastructure: Energy grid management, water treatment AI
  • Law enforcement: Predictive policing, evidence evaluation
  • Migration and border control: Visa-application assessment

For high-risk systems, organisations must:

  • Register the system in the [EU AI database](https://ec.europa.eu/digital-building-blocks/sites/pages/viewpage.action?pageId=712377447)
  • Conduct a Fundamental Rights Impact Assessment (FRIA) before deployment
  • Maintain technical documentation covering training data, model architecture, and performance metrics
  • Implement human oversight — a qualified person must be able to override or shut down the system
  • Guarantee data governance — training data must be relevant, representative, and free from known biases
  • Log all system decisions for traceability and auditing
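The logging obligation, for instance, amounts to keeping an auditable record of every decision the system influences. A minimal sketch of an append-only decision log follows; the schema and names are illustrative, not prescribed by the Act:

```python
import json
import time
from pathlib import Path

def log_decision(log_path: Path, system_id: str, input_summary: str,
                 output: str, operator: str) -> None:
    """Append one AI decision record for later audit (illustrative schema)."""
    record = {
        "timestamp": time.time(),        # when the decision was made
        "system_id": system_id,          # which registered AI system produced it
        "input_summary": input_summary,  # what the system saw (avoid raw personal data)
        "output": output,                # what it decided or recommended
        "operator": operator,            # who held human oversight at the time
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(Path("decisions.jsonl"), "cv-screener-01",
             "candidate profile #4521", "advance to interview",
             "hr.reviewer@example.com")
```

In practice the log would live in tamper-evident storage with retention aligned to the Act's record-keeping periods, but the principle is the same: one structured, timestamped record per decision.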

3. Limited Risk — Transparency Obligations

Systems that interact with people, generate synthetic content, or make automated decisions must disclose that fact clearly. This includes:

  • Chatbots: Must inform users they are interacting with AI
  • Deepfakes and AI-generated media: Must be labelled as artificially generated
  • Emotion-recognition systems (where permitted): Users must be informed
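For a chatbot, the transparency obligation can be as simple as showing a disclosure before any interaction starts. A sketch, with example wording (the Act mandates the disclosure, not this exact text):

```python
def ai_disclosure_banner(language: str = "en") -> str:
    """Return the transparency notice a chatbot shows users (example wording)."""
    notices = {
        "en": "You are chatting with an AI system, not a human.",
        "nl": "U chat met een AI-systeem, niet met een mens.",
    }
    # Fall back to English for unsupported languages.
    return notices.get(language, notices["en"])

def start_chat_session(language: str) -> list[str]:
    # The disclosure is the very first message, before any user input.
    return [ai_disclosure_banner(language)]

assert start_chat_session("nl")[0].startswith("U chat met een AI-systeem")
```

The key design point: the notice precedes the conversation, rather than being buried in terms of service.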

4. Minimal Risk — No Specific Obligations

AI-powered spam filters, recommendation engines on e-commerce sites, or internal analytics dashboards generally fall here. No regulatory burden, but voluntary codes of conduct are encouraged.

Timeline That Matters

| Date | Milestone |
|------|-----------|
| 2 Feb 2025 | Prohibitions on unacceptable-risk AI enforced |
| 2 Aug 2025 | Rules for general-purpose AI (GPAI) models apply |
| 2 Aug 2026 | Full obligations for high-risk AI systems |
| 2 Aug 2027 | Rules for high-risk AI embedded in regulated products (medical devices, machinery) |

The Dutch Angle: National Implementation

The Dutch government designated the Autoriteit Persoonsgegevens (AP) — already the data-protection authority — as the national supervisory body for the AI Act. The AP published its AI supervision strategy in late 2025, signalling that it will prioritise:

  • Public-sector AI (welfare algorithms, fraud detection at municipalities)
  • HR and recruitment AI (automated candidate screening)
  • AI in financial services (credit-risk models, insurance underwriting)

The Netherlands has a painful history here. The SyRI ruling in 2020 — where a court struck down a government fraud-detection algorithm for violating human rights — and the childcare benefits scandal (toeslagenaffaire) are still fresh. Regulators will not be lenient.

Fines

  • Up to 35 million euro or 7% of global annual turnover for prohibited-practice violations
  • Up to 15 million euro or 3% for high-risk non-compliance
  • Up to 7.5 million euro or 1% for providing incorrect information to authorities
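Under Article 99, for undertakings the applicable cap is whichever is higher: the fixed amount or the turnover percentage. A quick arithmetic sketch:

```python
def fine_cap(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed amount and pct of worldwide turnover."""
    return max(fixed_eur, pct * global_turnover_eur)

# A company with EUR 1 billion global turnover, prohibited-practice violation:
# 7% of EUR 1bn = EUR 70m, which exceeds EUR 35m, so the cap is EUR 70m.
cap = fine_cap(35_000_000, 0.07, 1_000_000_000)
```

For smaller firms the fixed amount dominates: at EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million ceiling applies instead.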

Practical Steps for Dutch Organisations

Step 1: Inventory Your AI Systems

Create a register of every AI system you develop, deploy, or procure. For each system, document:

  • What decisions it influences
  • What data it processes
  • Who is affected by its outputs
  • Whether it falls into a high-risk category

The IAMA (Impact Assessment Mensenrechten en Algoritmes) framework, developed by the Dutch government, is a practical tool for this exercise.
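The inventory itself can be a simple structured register. A sketch of one possible record shape, mirroring the checklist above (the field names and the example system are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (fields mirror the checklist above)."""
    name: str
    decisions_influenced: str   # what decisions it influences
    data_processed: list[str]   # what data it processes
    affected_groups: list[str]  # who is affected by its outputs
    high_risk: bool             # does it fall into an Annex III category?

register: list[AISystemRecord] = [
    AISystemRecord(
        name="cv-screener",
        decisions_influenced="shortlisting of job applicants",
        data_processed=["CVs", "interview scores"],
        affected_groups=["job applicants"],
        high_risk=True,  # recruitment is an Annex III category
    ),
]
```

A spreadsheet works just as well to start; what matters is that every system, including procured third-party tools, has an entry.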

Step 2: Classify Risk Levels

Map each system against the AI Act's Annex III (high-risk categories). When in doubt, engage legal counsel — the cost of misclassification is steep.
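As a first triage pass, the mapping can be sketched in code. The categories below are a heavily simplified paraphrase of the Act's structure; Annex III is far more granular, so treat this as a sorting aid, never as legal advice:

```python
# Illustrative, simplified mapping of use-case domains to AI Act risk tiers.
HIGH_RISK_DOMAINS = {
    "recruitment", "credit_scoring", "education",
    "critical_infrastructure", "law_enforcement", "migration",
}
PROHIBITED_PRACTICES = {
    "social_scoring", "workplace_emotion_recognition",
    "untargeted_facial_scraping",
}

def classify(domain: str) -> str:
    """Rough first-pass risk-tier triage for an AI use case."""
    if domain in PROHIBITED_PRACTICES:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    # Everything else still needs a transparency check (limited vs minimal).
    return "limited_or_minimal"

assert classify("recruitment") == "high"
```

Systems landing in "high" are the ones that need the full governance work in Step 3; anything near "unacceptable" needs to be switched off, not documented.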

Step 3: Close Governance Gaps

For high-risk systems, ensure you have:

  • A designated AI officer or governance team
  • Documented data-governance procedures for training datasets
  • Bias-testing and fairness-evaluation protocols
  • Incident-response plans for AI failures
  • Human-oversight procedures with clearly assigned responsibilities

Step 4: Update Contracts

Review vendor agreements for third-party AI tools. Under the AI Act, deployers (not just providers) carry obligations. If your recruitment platform uses an AI scoring engine, you share responsibility for compliance.

Step 5: Train Your Teams

The AI Act explicitly requires AI literacy (Article 4). All staff who operate or oversee AI systems must have sufficient understanding of the technology, its limitations, and the regulatory requirements.

General-Purpose AI Models (GPAI)

Since August 2025, providers of GPAI models — including large language models like GPT-4, Claude, Gemini, and open-source models like Llama — must:

  • Publish a sufficiently detailed model summary
  • Maintain a copyright compliance policy (relevant for training-data disputes)
  • Comply with transparency obligations toward downstream deployers

Models classified as posing systemic risk (currently those trained with more than 10^25 FLOPs) face additional obligations: adversarial testing, incident monitoring, and cybersecurity assessments.
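The systemic-risk presumption is a straightforward compute threshold. A trivial sketch of the check (the 10^25 figure comes from the Act; the function name is illustrative):

```python
def systemic_risk(training_flops: float, threshold: float = 1e25) -> bool:
    """GPAI models trained above the threshold are presumed to pose systemic risk."""
    return training_flops >= threshold

# A model trained with 3e25 FLOPs crosses the presumption threshold:
assert systemic_risk(3e25) is True
assert systemic_risk(5e24) is False
```

Note that the Commission can also designate models as systemic-risk on other grounds, so crossing the FLOPs line is a sufficient but not a necessary condition.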

For Dutch businesses building on top of these models via APIs, the key question is: does your application turn a general-purpose model into a high-risk system? If you use an LLM to screen job applications or assess creditworthiness, the answer is likely yes — and the high-risk obligations apply to you as the deployer.

What This Means for AI Innovation in the Netherlands

The AI Act is not designed to stifle innovation. The regulation includes provisions for AI regulatory sandboxes — controlled environments where companies can test AI systems under regulatory supervision before full deployment. The Netherlands has committed to establishing at least one sandbox, coordinated by the AP.

Additionally, SMEs benefit from reduced documentation requirements and fee waivers for conformity assessments.

The message is clear: innovate, but do so responsibly. Dutch organisations that get ahead of compliance now will have a competitive advantage as the full high-risk obligations take effect in August 2026.

Resources

  • [EU AI Act full text](https://artificialintelligenceact.eu/the-act/)
  • [Autoriteit Persoonsgegevens AI guidance](https://www.autoriteitpersoonsgegevens.nl/themas/algoritmes-en-ai)
  • [IAMA framework](https://www.rijksoverheid.nl/documenten/rapporten/2021/02/25/impact-assessment-mensenrechten-en-algoritmes)
  • [Netherlands AI Coalition (NL AIC)](https://nlaic.com/)

For help assessing your AI systems and building a compliance roadmap, explore our automation and DevOps services, or read our articles on DevOps and AI trends and on data management.

AI
EU AI Act
Compliance
Regulation
