The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, with a phased application schedule that places the insurance sector under significant compliance pressure in 2026. Unlike most AI governance frameworks, which are principles-based, the AI Act is a hard-law regulation, with fines of up to €35 million or 7% of global annual turnover for the most serious violations.
This guide focuses on the obligations that apply to insurance firms — not AI developers in general — and explains what you must have in place before the key August 2026 deadline.
The AI Act Application Timeline
The AI Act's obligations do not all apply at once:
- 2 February 2025: Prohibited AI practices (Chapter II) became unlawful.
- 2 August 2025: General-purpose AI model obligations (Chapter V) became applicable; notified body designation requirements began.
- 2 August 2026: High-risk AI system obligations (Chapter III) become fully applicable — this is the critical deadline for insurance firms.
- 2 August 2027: Obligations for high-risk AI systems embedded in products regulated by existing EU product safety law (Annex I sectors).
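If you track these dates in compliance tooling, a minimal lookup sketch follows. The dates come from the Regulation itself; the structure, names, and summary wording are our own illustrative choices.

```python
from datetime import date

# Key application dates under Regulation (EU) 2024/1689.
# Milestone descriptions are summaries, not legal text.
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Prohibited AI practices (Chapter II) unlawful",
    date(2025, 8, 2): "GPAI model obligations (Chapter V) apply",
    date(2026, 8, 2): "High-risk AI obligations (Chapter III / Annex III) apply",
    date(2027, 8, 2): "High-risk obligations for Annex I embedded products apply",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= as_of]

if __name__ == "__main__":
    for milestone in obligations_in_force(date(2026, 9, 1)):
        print(milestone)  # everything up to and including the August 2026 wave
```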
Which Insurance AI Systems Are High-Risk?
Annex III of the AI Act lists the categories of high-risk AI systems. For insurance firms, the most relevant categories are:
Annex III(5): Access to and Enjoyment of Essential Services
- AI systems used to evaluate the creditworthiness of natural persons — directly applicable to insurance firms using AI credit scores as underwriting inputs.
- AI systems used in life and health insurance to assess and price risk for natural persons — EIOPA has indicated this covers mortality prediction models, morbidity scoring, and longevity risk tools.
Annex III(6): Law Enforcement (Adjacent)
- AI systems used for polygraph-equivalent testing — not directly relevant to most insurance firms, but relevant for counter-fraud applications that assess the "truthfulness" of claimants.
Annex III(8): Administration of Justice
- AI systems assisting judicial authorities in researching and interpreting facts and the law — relevant to firms using AI in claims dispute resolution.
Beyond the express Annex III listings, expect national supervisors to read these categories broadly: an insurance AI system that makes or substantially influences decisions with legal or similarly significant effects on natural persons is a strong candidate for high-risk treatment, subject to the Article 6(3) derogation for systems that only perform narrow procedural or preparatory tasks. EIOPA's 2024 report on AI governance in insurance explicitly cites automated claims handling decisions, policy cancellation recommendations, and fraud flagging as systems requiring human oversight mechanisms consistent with the high-risk framework.
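One way to operationalise this triage in an internal system register is sketched below. The category tags are paraphrases of the Annex III provisions discussed above, and the `InsuranceAISystem` record is a hypothetical structure of our own; a hit means escalate to legal review, not a final classification, since the Article 6(3) derogation still needs human assessment.

```python
from dataclasses import dataclass, field

# Paraphrased Annex III triggers most relevant to insurance (illustrative tags).
ANNEX_III_TRIGGERS = {
    "creditworthiness_scoring": "Annex III(5): creditworthiness evaluation of natural persons",
    "life_health_risk_pricing": "Annex III(5): risk assessment and pricing in life and health insurance",
    "claimant_truthfulness": "Annex III(6): polygraph-equivalent truthfulness assessment",
    "dispute_resolution_support": "Annex III(8): assisting authorities with facts and law",
}

@dataclass
class InsuranceAISystem:
    name: str
    vendor: str
    functions: set[str] = field(default_factory=set)  # tags from ANNEX_III_TRIGGERS

def high_risk_grounds(system: InsuranceAISystem) -> list[str]:
    """Return the Annex III grounds this system appears to trigger."""
    return [ANNEX_III_TRIGGERS[f] for f in sorted(system.functions) if f in ANNEX_III_TRIGGERS]

pricing_model = InsuranceAISystem(
    name="mortality-pricing-v3",
    vendor="in-house",
    functions={"life_health_risk_pricing"},
)
print(high_risk_grounds(pricing_model))
```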
Prohibited AI Practices: What Was Banned From February 2025
Article 5 of the AI Act prohibits certain AI practices outright, with no grace period. Insurance firms must have already reviewed their systems for:
- Social scoring: AI systems that evaluate or classify natural persons based on their social behaviour or personal characteristics in a way that leads to detrimental or unfavourable treatment — a practice that could be implicated in certain telematics-based motor insurance scoring systems that extend beyond driving behaviour to infer personal characteristics.
- Real-time remote biometric identification in publicly accessible spaces: the Article 5 prohibition applies only to use for law enforcement purposes, so it is rarely a direct insurance issue. Remote biometric identification outside that context is instead high-risk under Annex III(1), which matters for firms using facial recognition in claims inspection or fraud detection at physical premises.
- Emotion recognition in the workplace: Article 5(1)(f) prohibits AI systems that infer the emotions of natural persons in workplace and education settings. Claims management tools that analyse video calls with claimants for deception indicators sit outside the workplace prohibition when directed at claimants rather than staff, but they may still qualify as emotion recognition systems, which are high-risk under Annex III(1).
- Subliminal manipulation and exploitation of vulnerabilities: Article 5(1)(a) prohibits subliminal techniques beyond a person's consciousness that materially distort behaviour; Article 5(1)(b) prohibits exploiting vulnerabilities linked to age, disability, or social or economic situation. Both are relevant to AI-driven upselling tools in insurance distribution.
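A practical way to run this review across an inventory is a yes/no screen per system, answered by each system owner. The sketch below is a hypothetical checklist of our own; the questions paraphrase Article 5 and are no substitute for the legal text.

```python
# Paraphrased Article 5 screening questions (illustrative, not legal text).
ARTICLE_5_SCREEN = [
    ("social_scoring", "Scores people on social behaviour or traits, leading to unfavourable treatment?"),
    ("realtime_biometric_id", "Performs real-time remote biometric identification in public spaces?"),
    ("workplace_emotion", "Infers emotions of people in workplace or education settings?"),
    ("subliminal_or_exploitative", "Uses subliminal techniques, or exploits age, disability, or socio-economic vulnerability?"),
]

def screen_for_prohibited(answers: dict[str, bool]) -> list[str]:
    """Return the screening keys answered 'yes'. Each hit is a
    stop-and-escalate finding: Article 5 had no grace period."""
    return [key for key, _question in ARTICLE_5_SCREEN if answers.get(key)]

findings = screen_for_prohibited({
    "social_scoring": False,
    "workplace_emotion": True,  # e.g. sentiment analysis of staff video calls
})
print(findings)  # ['workplace_emotion']
```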
Article 10: Data Governance Obligations
For high-risk AI systems, Article 10 requires that training, validation, and testing data sets meet specific quality standards:
- Data sets must be subject to appropriate data governance practices — data collection, preparation, and labelling must be documented.
- Data sets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose; governance measures must include examination for possible biases. This is particularly important for underwriting models where historical data may embed discriminatory patterns by postcode, age, or gender.
- Article 10(5) permits (under strict conditions) the processing of special category data, including health data, for the purposes of ensuring bias monitoring, detection, and correction in high-risk AI systems.
- Data governance records form part of the technical documentation that providers must keep at the disposal of national competent authorities for ten years after the AI system is placed on the market or put into service (Article 18).
These data governance obligations interact closely with GDPR Article 9 obligations for health data processing. Any high-risk insurance AI system using health data must comply with both frameworks simultaneously. See our GDPR compliance guide for the Article 9 baseline.
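As one illustration of what Article 10 bias monitoring can look like in practice, the sketch below computes an acceptance-rate gap across groups in a tabular underwriting extract. The metric, field names, and sample data are our own assumptions: the Act requires examination for possible biases, not this particular statistic, and any special category data used in such checks still needs a GDPR Article 9 basis alongside Article 10(5).

```python
from collections import defaultdict

def favourable_rate_gap(records: list[dict], group_key: str, outcome_key: str) -> float:
    """Demographic-parity gap: difference between the highest and lowest
    favourable-outcome rate across groups (e.g. postcode or age bands).
    Metric choice and threshold-setting are design decisions for your
    bias-monitoring plan, not prescriptions of the AI Act."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        favourable[record[group_key]] += int(record[outcome_key])
    rates = [favourable[group] / totals[group] for group in totals]
    return max(rates) - min(rates)

# Hypothetical underwriting decisions grouped by postcode band.
sample = [
    {"postcode_band": "A", "accepted": 1}, {"postcode_band": "A", "accepted": 1},
    {"postcode_band": "B", "accepted": 1}, {"postcode_band": "B", "accepted": 0},
]
print(f"acceptance-rate gap: {favourable_rate_gap(sample, 'postcode_band', 'accepted'):.2f}")  # 0.50
```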
Article 13: Transparency and Instructions for Use
High-risk AI systems must be designed and developed with sufficient transparency to enable deployers to interpret the system's output and use it appropriately (Article 13(1)). Concretely:
- The AI system must be accompanied by instructions for use (Article 13(3)) detailing: identity of the provider, the system's intended purpose, the level of accuracy and performance limitations, known risks and misuse scenarios, and human oversight measures.
- Where the system produces decisions or recommendations, the output must be interpretable to the human responsible for the decision.
- For consumer-facing AI (e.g., chatbots providing advice on policy selection), Article 50 requires disclosure that the consumer is interacting with an AI system.
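One way to keep the Article 13(3) content auditable is to treat the instructions for use as a structured record and flag gaps before release. The schema below is a hypothetical sketch of our own; the Article prescribes the substance of the information, not any particular format.

```python
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    """Fields loosely mirroring the Article 13(3) content requirements.
    The field names are our own labels, not terms from the Regulation."""
    provider_identity: str
    intended_purpose: str
    accuracy_and_limitations: str
    known_risks_and_misuse: str
    human_oversight_measures: str

    def missing_fields(self) -> list[str]:
        """Flag empty fields before the document ships with the system."""
        return [name for name, value in vars(self).items() if not value.strip()]

ifu = InstructionsForUse(
    provider_identity="Example Insurer AI GmbH",  # hypothetical provider
    intended_purpose="Triage of motor claims for straight-through processing",
    accuracy_and_limitations="",  # not yet drafted
    known_risks_and_misuse="Not validated for commercial fleet claims",
    human_oversight_measures="Claims handler reviews all declines",
)
print(ifu.missing_fields())  # ['accuracy_and_limitations']
```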
Conformity Assessment Requirements
Before a high-risk AI system is placed on the market or put into service, the provider must complete a conformity assessment under Article 43. Deployers do not normally carry this obligation, although under Article 25 a deployer takes on provider obligations if it markets the system under its own name or trademark, substantially modifies it, or changes its intended purpose. For most Annex III systems (including insurance scoring and claims handling AI), conformity assessment is conducted via internal control under Annex VI; there is no mandatory third-party audit requirement for these categories.
However, the conformity assessment must produce:
- Technical documentation as specified in Annex IV.
- A completed EU declaration of conformity.
- Registration in the EU database for high-risk AI systems (Articles 49 and 71), which became operational in August 2025.
- CE marking affixed to the system or its documentation.
If you are purchasing AI tools from a vendor rather than developing in-house, confirm with the vendor in writing which party bears the provider obligations under the AI Act. Purchasing a pre-built claims automation tool does not automatically leave the conformity assessment obligations with the vendor: as noted above, materially modifying the system or deploying it in a substantially different context can shift the provider role, and its obligations, to you.
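The Article 25 role-shift logic is simple enough to encode as a prompt in a vendor onboarding questionnaire. The sketch below paraphrases the triggers; borderline cases, especially what counts as a substantial modification, belong with counsel.

```python
def deployer_becomes_provider(own_branding: bool,
                              substantial_modification: bool,
                              changed_intended_purpose: bool) -> bool:
    """Paraphrase of the Article 25 triggers under which a deployer
    takes on provider obligations for a high-risk AI system. True means
    conformity assessment, technical documentation, and registration
    land on you rather than the vendor."""
    return own_branding or substantial_modification or changed_intended_purpose

# Retraining a purchased claims tool on your own book of business is a
# classic 'substantial modification?' question to put to legal review.
print(deployer_becomes_provider(False, True, False))  # True
```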
The August 2026 Deadline: What Must Be in Place
By 2 August 2026, the following must be complete for any high-risk AI system you are deploying in the insurance context:
- Complete an AI system inventory — identify every AI tool that makes or influences decisions affecting policyholders, including third-party tools embedded in your workflow.
- Classify each system as high-risk, limited-risk, or minimal-risk using the Annex III criteria and EIOPA guidance.
- For each high-risk system: complete data governance documentation (Article 10), prepare technical documentation (Annex IV), implement human oversight mechanisms (Article 14), and produce instructions for use (Article 13).
- Complete internal conformity assessment and register high-risk systems in the EU AI Act database.
- Establish ongoing monitoring procedures — Article 72 requires post-market monitoring plans for high-risk systems.
- Update your ICT third-party risk register under DORA to include AI vendors.
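Pulling the checklist together, a per-system gap report is a useful artifact for the programme board ahead of the deadline. The sketch below assumes a simple internal register of our own design; the artifact labels track the checklist above and are not terms from the Regulation.

```python
from dataclasses import dataclass, field

# Artifact labels track the checklist above (our own naming).
REQUIRED_ARTIFACTS = {
    "data_governance_doc",       # Article 10
    "technical_documentation",   # Annex IV
    "human_oversight_design",    # Article 14
    "instructions_for_use",      # Article 13
    "conformity_assessment",     # Article 43 / Annex VI
    "database_registration",     # Articles 49 and 71
    "post_market_monitoring",    # Article 72
}

@dataclass
class SystemRecord:
    name: str
    high_risk: bool
    artifacts_complete: set[str] = field(default_factory=set)

def compliance_gaps(systems: list[SystemRecord]) -> dict[str, set[str]]:
    """Return the artifacts still missing for each high-risk system."""
    return {
        s.name: missing
        for s in systems
        if s.high_risk and (missing := REQUIRED_ARTIFACTS - s.artifacts_complete)
    }

inventory = [
    SystemRecord("claims-triage", True, {"data_governance_doc", "instructions_for_use"}),
    SystemRecord("marketing-segmentation", False),
]
for name, missing in compliance_gaps(inventory).items():
    print(name, "->", sorted(missing))
```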
EIOPA's Position and National Supervisor Expectations
EIOPA published its Opinion on Artificial Intelligence Governance and Risk Management in insurance in June 2024, setting out supervisory expectations that go beyond the AI Act minimum. EIOPA expects insurers and distributors to embed AI governance into their broader risk management frameworks under Solvency II's Own Risk and Solvency Assessment (ORSA), not treat it as a separate compliance track.
PrizMova Europa's platform is designed from the ground up for AI-assisted insurance workflows that comply with both the EU AI Act and GDPR simultaneously — including explainability outputs for underwriting recommendations, audit trails for AI-assisted claims decisions, and data governance documentation that satisfies Article 10 requirements. Our infrastructure never transfers data outside the EU, addressing the data residency concerns raised in our EU data residency guide.