What the EU AI Act means for financial institutions in 2025: a practical guide
The EU AI Act entered into force in August 2024, with the first obligations applying from February 2025 and requirements for high-risk AI systems in financial services phasing in from 2026. We break down what your compliance team needs to know and prioritise right now.
The Regulatory Landscape Has Changed
The EU AI Act represents the world's first comprehensive legal framework for artificial intelligence. Unlike sector-specific rules, it applies horizontally across industries, and financial services, with its intensive use of AI in credit decisions, fraud detection, customer scoring, and trading, sits squarely in the Act's highest-risk categories.
For compliance officers and CROs, the challenge is not understanding the regulation in the abstract. It is knowing which systems are affected, what documentation is required now, and where the organisation is most exposed. This analysis addresses each of those questions in turn.
Key obligation timeline: Prohibited AI practices were banned from February 2025. Requirements for high-risk AI systems — including those used in credit, insurance underwriting, and employment — begin applying from August 2026, but preparatory obligations are active now.
What Counts as High-Risk in Financial Services
Annex III of the AI Act identifies eight categories of high-risk AI systems. Two are particularly relevant to financial institutions:
- AI systems used in credit scoring and creditworthiness assessment — including models used in retail lending, SME credit, and mortgage origination.
- AI systems used in employment and worker management — relevant for firms using algorithmic tools in hiring, performance assessment, or workforce planning.
Beyond Annex III, the Act's transparency obligations apply broadly to any AI system that interacts with natural persons — covering chatbots, virtual assistants, and certain advisory tools used in client-facing financial services.
The Core Compliance Obligations
For high-risk AI systems, deployers, not just providers, carry significant obligations. Financial institutions deploying third-party AI models are not exempt. The principal requirements include:
- Technical documentation and conformity assessment — providers must produce detailed documentation on the system's design, purpose, and risk management. Deployers must verify this documentation exists and is adequate.
- Risk management system — a continuous, iterative risk assessment process covering intended and foreseeable misuse, with documented mitigation measures.
- Data governance — training data must meet quality standards. Institutions must demonstrate data representativeness and document data provenance.
- Human oversight — effective mechanisms for human monitoring and intervention must be maintained. Purely automated high-stakes decisions are not compliant without meaningful review procedures.
- Transparency and explainability — affected individuals have rights to explanation. Systems must generate interpretable outputs for the decisions they influence.
Where Institutions Are Most Exposed
1. Legacy scoring models with opaque architectures
Many institutions continue to operate credit scoring models built before explainability was a regulatory concern. These systems frequently cannot generate individual-level explanations and may fail data governance requirements. Remediation typically requires either model replacement or the development of a surrogate explanation layer — both of which take significant lead time.
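To make the surrogate-explanation approach concrete, the sketch below fits a shallow, interpretable model to a black-box scorer's own outputs so that individual decisions can be traced to a small set of readable rules. The `legacy_score` function, the feature names, the synthetic data, and the use of scikit-learn are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a surrogate explanation layer: an interpretable model
# (a shallow decision tree) is fitted to reproduce the legacy scorer's
# outputs, so individual decisions can be traced to a few readable rules.
# `legacy_score`, the feature names, and the synthetic data are hypothetical
# stand-ins for an institution's own opaque model and portfolio data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "prior_defaults"]
X = rng.normal(size=(5_000, len(feature_names)))

def legacy_score(X: np.ndarray) -> np.ndarray:
    """Placeholder for the opaque legacy credit-scoring model."""
    return 600 + 40 * X[:, 0] - 60 * X[:, 1] + 10 * X[:, 2] - 80 * X[:, 3]

# Fit the surrogate on the legacy model's outputs, not on ground-truth labels:
# the aim is to explain what the deployed model actually does.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, legacy_score(X))

# Human-readable rules approximating the legacy scorer, usable as a starting
# point for individual-level explanations and human-oversight reviews.
print(export_text(surrogate, feature_names=feature_names))
```

In practice the surrogate's fidelity to the original model should be measured and documented; where fidelity is too low, per-decision attribution methods or outright model replacement become the realistic options.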
2. Third-party AI procurement without AI Act due diligence
At many institutions, procurement teams are not yet systematically reviewing AI Act conformity as part of vendor onboarding. This creates compliance gaps that the institution, as deployer, is legally responsible for. Contracts signed today should include AI Act representations and warranties.
3. Undocumented human oversight procedures
Most institutions have informal review processes around AI-generated recommendations. The AI Act requires these to be documented, tested, and maintained. Gap analyses consistently reveal that human-in-the-loop claims are not substantiated by formal procedures.
4. Incomplete AI system inventories
You cannot govern what you cannot see. A significant number of institutions do not maintain a comprehensive inventory of AI systems in production. Without this foundation, risk tiering under the Act is impossible.
Immediate Actions for Compliance Teams
- Conduct an AI system inventory across all business lines, capturing system purpose, data inputs, decision influence, and current documentation status (a sketch of what one inventory record might capture follows this list).
- Apply the Annex III risk tiering criteria to each system. Seek external review where classification is ambiguous.
- Review existing vendor contracts for AI Act obligations and initiate renegotiation where gaps exist.
- Establish or formalise an AI risk management function with cross-functional representation from compliance, technology, legal, and business.
- Brief the board and audit committee on the institution's current exposure and remediation programme.
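As a rough illustration of the inventory and tiering steps above, the sketch below shows what a single inventory record and a first-pass Annex III screen might look like. The field names, the `RiskTier` values, and the keyword-based screen are simplifying assumptions for illustration; classification decisions themselves belong with legal and compliance, not a script.

```python
# Minimal sketch of one entry in an AI system inventory, covering the fields
# discussed in the list above, plus a crude first-pass screen for Annex III
# relevance. All names and the screening rule are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"

@dataclass
class AISystemRecord:
    name: str
    business_line: str
    purpose: str                      # what decision or process it supports
    data_inputs: list[str]            # categories of data consumed
    decision_influence: str           # e.g. "advisory", "fully automated"
    vendor: str | None                # None if developed in-house
    documentation_complete: bool      # provider documentation verified?
    human_oversight_documented: bool  # formal review procedure on file?
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED

def flag_for_legal_review(record: AISystemRecord) -> bool:
    """Crude keyword screen for the Annex III areas most relevant to
    financial services; flagged systems go to proper legal classification."""
    keywords = ("credit", "creditworthiness", "hiring", "employment",
                "worker management")
    return any(k in record.purpose.lower() for k in keywords)

record = AISystemRecord(
    name="retail-credit-scorer-v2",
    business_line="Retail Lending",
    purpose="Creditworthiness assessment for retail loan applications",
    data_inputs=["bureau data", "transaction history", "application form"],
    decision_influence="advisory with manual override",
    vendor="Acme Analytics",          # hypothetical vendor name
    documentation_complete=False,
    human_oversight_documented=False,
)
if flag_for_legal_review(record):
    record.risk_tier = RiskTier.HIGH  # pending confirmation by legal review
print(record.name, record.risk_tier.value)
```

Even a simple structured record like this makes the subsequent steps, risk tiering, vendor follow-up, and board reporting, considerably easier to run and evidence.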
A note for Swiss institutions: The EU AI Act does not directly apply in Switzerland. However, FINMA is actively developing guidance aligned with EU approaches, and Swiss-headquartered institutions with EU operations are subject to the regulation. We advise treating EU AI Act compliance as a prudent baseline regardless of formal applicability.
The Compliance Dividend
Institutions that treat the EU AI Act as an opportunity to build genuine AI governance capability — rather than a compliance checklist to minimise — will emerge with durable competitive advantages: stronger regulatory relationships, lower model risk, and the institutional trust required to deploy AI more ambitiously over time.
The governance burden is real. But the cost of reactive compliance — following a regulatory enforcement action or a model failure — is materially higher. The investment made now compounds.