Deadline: August 2, 2026

EU AI Act Compliance for AI Agents

The EU AI Act (Regulation 2024/1689) requires AI systems to disclose their AI nature and maintain audit trails for human oversight. Enforcement begins August 2, 2026.

If your AI agents serve EU users — chatbots, data collection agents, workflow automation — you need transparency mechanisms and oversight infrastructure in place. Regardless of where your company is headquartered.

  • EUR 15M: maximum fine for transparency violations (or 3% of global turnover)
  • Aug 2026: enforcement deadline for Article 50 transparency obligations
  • GDPR-like: extraterritorial scope, applies to any company serving EU users
  • EUR 35M: maximum fine for prohibited AI practices (or 7% of global turnover)

What Is the EU AI Act?

The world's first comprehensive legal framework for AI. Directly applicable in all EU member states — no national transposition needed.

Article 50: Transparency

The core obligation for AI agents

AI systems that interact directly with people must disclose their AI nature. Users must know they are interacting with an AI system — not a human — before or at the start of the interaction.

  • Clear, persistent disclosure at first interaction
  • AI-generated content must be marked in a machine-readable format
  • Accessible format meeting EU accessibility standards
  • Exception only if AI nature is "obvious" to a reasonable person

Article 14: Human Oversight

Required for high-risk AI systems

High-risk AI systems must be designed so humans can effectively oversee them during operation. Oversight capabilities must match the system's risk level, autonomy, and context.

  • Humans must understand the system's capabilities and limitations
  • Ability to monitor, interpret, and override AI decisions
  • Interrupt capability via stop mechanism
  • Complete audit trail of AI actions and decisions

Extraterritorial Scope — Like GDPR

The EU AI Act applies to any company whose AI system is used by people in the EU, or whose AI output is consumed in the EU — regardless of where the company is headquartered. A SaaS product accessible to EU customers through a website, API, or distribution channel has placed its system on the EU market.

Risk Classification Framework

Your obligations depend on how your AI agent is used, not how it is built. The same technology can be limited-risk or high-risk depending on the use case.

Prohibited

Unacceptable Risk

Social scoring, manipulative AI, exploitation of vulnerabilities, real-time biometric identification in public spaces.

Banned since February 2025

Heavy Compliance

High Risk

Conformity assessments, human oversight, technical documentation, post-market monitoring, EU database registration.

Enforced August 2, 2026

Most AI Agents

Limited Risk

Transparency obligations: disclose AI nature, mark AI-generated content in a machine-readable format. This is where most chatbots and conversational agents fall.

Enforced August 2, 2026

No Obligations

Minimal Risk

Internal automation, spam filters, AI-powered search — systems that do not interact directly with end-users or make decisions about people.

No mandatory requirements

Use Case Determines Risk Level

| AI Agent Use Case | Risk Level | Key Obligation |
| --- | --- | --- |
| Customer support chatbot | Limited | Transparency: disclose AI nature |
| Conversational data collection agent | Limited | Transparency: disclose AI nature + mark AI content |
| Internal workflow automation | Minimal | No mandatory obligations |
| Legal client intake agent | High | Full compliance: oversight, conformity assessment, audit trail |
| Medical patient intake agent | High | Full compliance: oversight, conformity assessment, audit trail |
| HR screening or recruitment agent | High | Full compliance: oversight, conformity assessment, audit trail |
| Insurance/credit assessment agent | High | Full compliance: oversight, conformity assessment, audit trail |
| Education assessment agent | High | Full compliance: oversight, conformity assessment, audit trail |
| General feedback collection | Limited | Transparency: disclose AI nature |

Based on Annex III of the EU AI Act. Classification depends on the specific deployment context.

What You Need to Do

The practical requirements depend on your risk classification. Most AI agents need transparency. Sensitive use cases need oversight infrastructure.

Limited Risk: Transparency

Required for all customer-facing AI agents

AI Disclosure at First Interaction

Users must be informed they are interacting with an AI system before or at the start of the conversation. The disclosure must be clear, persistent, and meet accessibility standards.
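As a minimal sketch of this idea (all names here are hypothetical, not a prescribed implementation), a chat backend can make disclosure structural by injecting a non-removable notice when the session is created, before any user turn:

```python
from dataclasses import dataclass, field

# Hypothetical disclosure text; production copy should meet EU accessibility standards.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

@dataclass
class ChatSession:
    """Chat session that always opens with an AI disclosure notice."""
    messages: list = field(default_factory=list)

    def __post_init__(self):
        # Disclosure is added at session creation, so no later code path
        # (or deployer customization) can produce a session without it.
        self.messages.append({"role": "system_notice", "text": AI_DISCLOSURE})

    def add_user_message(self, text: str) -> None:
        self.messages.append({"role": "user", "text": text})

session = ChatSession()
assert session.messages[0]["role"] == "system_notice"  # disclosure is always first
```

The design choice mirrors the regulation's intent: disclosure happens by construction, not by convention.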

Machine-Readable Content Marking

AI-generated text, audio, image, or video content must be marked in a machine-readable format so it can be detected as artificially generated.
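One illustrative way to satisfy this for text is a provenance envelope attached to each generated response. This is an ad-hoc sketch with hypothetical field names; real deployments would follow an interoperable marking standard (such as C2PA for media content) rather than inventing their own:

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(text: str, model_id: str) -> str:
    """Wrap AI-generated text in a machine-readable provenance envelope.

    Illustrative sketch only: the schema here is hypothetical, not a
    recognized marking standard.
    """
    envelope = {
        "content": text,
        "provenance": {
            "ai_generated": True,            # machine-detectable flag
            "generator": model_id,           # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

marked = mark_ai_generated("Thanks for your feedback!", "example-model-v1")
assert json.loads(marked)["provenance"]["ai_generated"] is True
```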

Deployer Instructions

Providers must give deployers (businesses using the AI) clear instructions on maintaining compliance, including what they can and cannot customize.

AI Literacy (Already Required)

Staff involved in AI operations must have sufficient understanding of AI systems. This obligation has been in effect since February 2, 2025.

High Risk: Oversight

Additional requirements for sensitive use cases

Human Monitoring Capability

Designated overseers must be able to monitor the AI system in real time, detect anomalies, and understand system outputs using available tools.

Override and Intervention

Humans must be able to override AI decisions, reject AI outputs, and interrupt system operation via a stop mechanism at any point.

Complete Audit Trail

Every AI action must be logged with sufficient detail for post-hoc review. Audit records must be portable and available for regulatory inspection.
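A minimal sketch of such a record, assuming an append-only JSON-lines log (hypothetical schema, chosen because JSON lines are portable and trivially exportable for inspection):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_agent_action(log_path: Path, actor: str, action: str,
                     before: dict, after: dict) -> None:
    """Append one audit record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,      # agent or human that performed the action
        "action": action,    # what was done
        "before": before,    # state prior to the action
        "after": after,      # state afterwards, enabling before/after diffs
    }
    # Append-only writes preserve the full history for post-hoc review.
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each line is a self-contained JSON object, the log can be filtered, diffed, and converted to regulator-friendly formats without custom tooling.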

Competent Oversight Personnel

Oversight must be assigned to people with the necessary competence, training, authority to override, and organizational support to perform their role.

How Neomanex Products Address Compliance

We build AI products that operate within regulatory frameworks by design — not as an afterthought. Two products map directly to the EU AI Act's core obligations.

Article 50 Transparency

Gnosari

AI data collection agents with built-in compliance

Gnosari deploys conversational AI agents that collect structured data from users. Every agent is designed to meet Article 50 transparency requirements at the platform level.

Compliance Features

  • Mandatory AI disclosure — Non-removable notification informing users they are interacting with an AI system, visible from first interaction
  • Machine-readable content marking — AI-generated responses flagged in machine-readable format per EU Code of Practice
  • Platform-level enforcement — Deployers can customize disclosure text but cannot disable it. Compliance is structural, not optional
  • Deployer compliance documentation — Clear guidance for deployers on their obligations and how to maintain compliance

Gnosari agents are designed for Article 50 compliance. Deployers using Gnosari for Annex III use cases (legal intake, medical intake) should assess high-risk classification independently.

Article 14 Human Oversight

Kleosari

The Agent Operations Platform

Kleosari is a workflow definition and enforcement platform for AI agents. You define workflows in YAML. Agents follow them. Gates enforce quality. Humans decide at checkpoints. The oversight Article 14 requires is a structural consequence of how the system works.

Preventive Governance, Not Reactive Monitoring

  • Structural approval gates — Agents call workflow_advance(). The system blocks until a human approves. There is no API to bypass this
  • Audit trail as structural consequence — Every agent action logged with actor, before/after diffs, and timestamps. Not bolted on. Built in
  • YAML workflow definitions — 27 templates, 7 composable fragments. Non-developers can read, review, and approve agent processes
  • 45+ MCP tools — The workflow engine IS the MCP interface. Claude Code, Cursor, any MCP client is a native Kleosari client
  • Exportable audit records — CSV, PDF, and JSON export for regulatory evidence and compliance reviews

Observability watches agents fail. Kleosari prevents it. The platform does not certify compliance — it enforces the workflow discipline and audit infrastructure organizations need to demonstrate effective human oversight to regulators.

What We Provide

  • Infrastructure designed to support EU AI Act compliance
  • Built-in transparency mechanisms for AI agents
  • Audit trail and oversight tools for demonstrating human oversight
  • Documentation and guidance for deployer obligations

What We Do Not Provide

  • Legal advice or compliance certification
  • Conformity assessment for high-risk systems
  • Guaranteed regulatory compliance (context-dependent)
  • Model governance or bias detection (different layer)

Enforcement Timeline

The EU AI Act entered into force August 1, 2024. Obligations are phasing in through August 2027.

Aug 1, 2024

AI Act Enters Into Force

The regulation takes legal effect and begins its phased implementation across all EU member states.

Feb 2, 2025

Prohibited Practices Banned + AI Literacy

Unacceptable-risk AI systems are prohibited. AI literacy obligations begin for all organizations using AI.

Aug 2, 2025

GPAI Model Obligations + AI Office Operational

General-purpose AI model obligations take effect. The EU AI Office becomes fully operational. First fines enforceable for prohibited practices.

Aug 2, 2026

Article 50 Transparency + High-Risk Compliance

The critical deadline: all chatbots and AI agents must disclose their AI nature. High-risk AI systems must comply with full framework. Market surveillance authorities can enforce compliance and levy fines.

Aug 2, 2027

Regulated Product AI Systems

High-risk AI systems embedded in regulated products (medical devices, vehicles, machinery) must comply. Extended transition for systems already on market.

Penalties for Non-Compliance

The EU AI Act uses a tiered penalty structure proportional to violation severity. Fines are calculated as the higher of the absolute amount or percentage of global turnover.

  • Prohibited practices: EUR 35M or 7% of global annual turnover
  • Transparency violations: EUR 15M or 3% of global annual turnover
  • Misleading information: EUR 7.5M or 1% of global annual turnover

Beyond Financial Penalties

  • Orders to withdraw AI systems from the EU market
  • Orders to recall or modify deployed AI systems
  • Public disclosure of non-compliance
  • SME/startup protections: fines capped at the lower of percentage or absolute amount

Build Compliant AI Operations

The August 2, 2026 deadline is approaching. Neomanex helps companies implement AI Operating Models with compliance built into the foundation — transparency, oversight, and governance by design.

See Our Services

Free Discovery Session. No commitment. Plans start at EUR 2,500/month.