A curated list of tools, frameworks, standards, and resources for AI Assurance and EU AI Act compliance.
AI Assurance is "the process of measuring, evaluating, and communicating the trustworthiness of AI systems" (UK DSIT, 2024). It is what the EU AI Act actually requires in practice: Arts. 9–15 mandate risk management, data governance, transparency, oversight, and robustness — all of which require verifiable evidence, not just self-assessment.
Assessment tells you WHERE you stand. Assurance proves you've DONE something about it.
The EU AI Act entered into force on 1 August 2024. High-risk AI systems (Annex III) must comply by August 2026 (subject to the Digital Omnibus backstop). This list covers tools that help engineers generate the evidence required by law — not just classify risk.
Contributing: Pull requests welcome. See CONTRIBUTING.md.
- Developer Tools & SDKs
- Assessment & Classification
- AI Governance Platforms
- Monitoring & Observability
- Testing & Red-Teaming
- Evidence Formats & Frameworks
- AI Assurance Frameworks
- Standards
- Regulatory Documents
- Spain
- Educational Resources
- Communities
- News & Newsletters
- Related Awesome Lists
Tools that integrate into ML pipelines and generate compliance evidence.
- Venturalitica SDK — Open-source Python SDK for EU AI Act and ISO 42001 compliance evidence. Generates OSCAL policies, CycloneDX ML BOM, bias audits, and Annex IV documentation.
pip install venturalitica - Giskard — Open-source LLM testing and red-teaming framework with vulnerability scanning. CLI-first, integrates with HuggingFace and LangChain.
- VerifyWise — Open-source AI governance platform. Self-hosted compliance tracking for EU AI Act, ISO 42001, NIST AI RMF.
- Evidently AI — ML monitoring and evaluation framework. 7K+ stars, 35M+ downloads. No compliance mapping, but strong data quality and drift detection (Art. 10 relevant).
- IBM OpenPages — GRC platform with AI governance module. Enterprise-grade, watsonx.governance integration.
- AIR Blackbox — Open-source CLI scanner for EU AI Act technical requirements (Arts. 9–15). Checks Python AI agent code for risk management, data governance, transparency, logging, human oversight, and robustness. 6/6 technical checks.
pip install air-blackbox - Microsoft Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and reliability engineering for autonomous AI agents. Covers 10/10 OWASP Agentic Top 10 controls. SDKs in Python, TypeScript, .NET, Rust, Go. MIT licensed.
- COMPL-AI — Compliance-centered LLM evaluation framework with 29+ benchmarks mapped to EU AI Act technical requirements. Built on UK AISI Inspect. By ETH Zurich, INSAIT, and LatticeFlow AI.
Tools to classify AI systems by risk level and assess compliance gaps.
- Modulos Risk Agent — Interactive AI risk assessment with EUR quantification. No login required. ISO 42001 certified (first, via CertX).
- Trail-ML — EU AI Act compliance platform. ETH Zurich spin-off. Focus on risk classification and technical documentation.
- Holistic AI — AI risk governance platform. Comprehensive auditing and mitigation across 8 risk domains.
- Enkrypt AI — AI risk classification and red-teaming for LLMs.
Enterprise platforms for AI risk management and governance.
- Credo AI — AI governance platform. Policy enforcement, model registry, audit trails. SOC 2 Type II certified.
- Arthur AI — ML observability and AI governance. Agent discovery and governance for agentic AI. SOC 2 Type II.
- Fiddler AI — ML monitoring and explainability. Amazon SageMaker integration. $30M Series C (2025).
- Saidot — AI governance knowledge graph with inherited governance data. EU AI Pact signatory.
- NAAIA — French AI governance platform. First ISO 42001 certified in France (AFNOR). EU AI Pact signatory.
- Lumenova AI — AI governance and compliance platform. SOC 2 Type II.
- Trustible — AI governance and policy management.
- OneTrust — GRC platform expanding into AI governance. $1.13B raised.
- Vanta — Automated compliance platform with AI governance modules. $504M raised.
Tools for post-deployment AI system monitoring (Art. 72 Post-Market Monitoring).
- Evidently AI — Data drift, model performance, and data quality monitoring. 40-lesson free course.
- Fiddler AI — ML monitoring, explainability, and fairness monitoring.
- WhyLabs — Data and ML monitoring platform.
- Arize AI — ML observability platform with LLM tracing.
AI Assurance techniques for adversarial testing, robustness, and vulnerability scanning (Art. 15 Robustness).
- Giskard — Automated LLM vulnerability scanning and red-teaming. 4K+ GitHub stars.
- DeepEval — LLM evaluation framework with 14+ evaluation metrics.
- PyRIT — Microsoft's Python Risk Identification Tool for generative AI.
- Inspect AI — UK AISI's framework for LLM safety evaluations.
- Inkog — Open-source security scanner for AI agents. Detects prompt injection, infinite loops, token bombing, SQL injection via LLM, and missing human oversight across 20+ frameworks. Maps vulnerabilities to EU AI Act Articles 9, 14 (Human Oversight), and 15 (Accuracy, Robustness, Cybersecurity). CLI + MCP server with SARIF output.
- AI Verify — Singapore government AI testing framework. Supports EU AI Act mappings.
Standards and formats for generating auditable compliance evidence.
- OSCAL (Open Security Controls Assessment Language) — NIST standard for machine-readable compliance documentation. Native format for policy-as-code AI governance. Used by Venturalitica SDK.
- CycloneDX ML BOM — Machine Learning Bill of Materials standard. Documents model provenance, datasets, and dependencies (EU AI Act Annex IV.2).
- Model Card Toolkit — Google's toolkit for generating model cards (Annex IV.3).
- Croissant — ML dataset format with provenance metadata (Art. 10 data governance).
- SLSA Framework — Supply-chain security framework for software artifacts. Relevant for Art. 15.5 cybersecurity.
- OWASP Top 10 for Agentic Applications — First OWASP risk framework for autonomous AI agents. 10 risks from Agent Goal Hijacking to Rogue Agents. Peer-reviewed by 100+ researchers. Released December 2025.
Institutional frameworks that define and structure the AI Assurance process.
- CDEI AI Assurance Roadmap — Centre for Data Ethics & Innovation (UK). Blueprint for a functional AI assurance ecosystem. Defines the techniques catalogue: auditing, impact assessment, red-teaming, bias analysis, explainability.
- UK AI Safety Institute — Develops evaluations for frontier models. Framework directly applicable to EU AI Act Art. 15 (accuracy, robustness, cybersecurity).
- Inspect AI — UK AISI open-source framework for LLM safety evaluations. Apache 2.0.
- AI Verify (Singapore IMDA) — Governance testing framework. Includes EU AI Act principle mappings.
- ALTAI (Assessment List for Trustworthy AI) — EU Commission self-assessment tool for Trustworthy AI. Based on the 7 HLEG principles.
- HUDERIA (Council of Europe) — Human rights, democracy, and rule of law impact assessment methodology for AI systems. Complements EU AI Act risk management (Art. 9) with fundamental rights perspective.
Technical standards relevant to EU AI Act compliance.
Note: No harmonised standards are currently available (Stage 10-40 only). Organizations must comply with EU AI Act obligations regardless (Art. 40). Standards expected 2026-2027.
| Standard | Scope | Stage | EU AI Act Article |
|---|---|---|---|
| prEN 18286 | Quality Management System for AI | Stage 40 (public consultation) | Art. 17 |
| prEN 18228 | Risk Management | Stage 20 | Art. 9 |
| prEN 18284 | Data Governance | Stage 10 | Art. 10 |
| prEN 18283 | Fairness | Stage 10 | Art. 10 |
| prEN 18229-1 | Transparency & Logging | Stage 20 | Arts. 12, 13 |
| prEN 18229-2 | Accuracy & Robustness | Stage 20 | Art. 15 |
| prEN 18282 | Cybersecurity | Stage 10 | Art. 15.5 |
- ISO 42001:2023 — AI Management System (AIMS). Organizational governance of AI. Complementary to EU AI Act (not a substitute).
- ISO/IEC 23894:2023 — AI Risk Management guidance.
- ISO/IEC 24028:2020 — AI Trustworthiness overview.
- ISO/IEC 5338:2023 — AI System Lifecycle Processes.
- ISO/IEC TR 24029-1:2021 — Assessment of robustness of neural networks. Relevant to Art. 15 (accuracy, robustness, cybersecurity).
- ISO/IEC 42005:2025 — AI System Impact Assessment. Guidance for understanding how AI systems affect individuals, groups, and society. Complements ISO 42001.
- ISO/IEC 42006:2025 — Requirements for bodies providing audit and certification of AI Management Systems. Enables the ISO 42001 certification ecosystem.
- NIST AI RMF 1.0 — AI Risk Management Framework. Governs, Map, Measure, Manage structure. US-origin but globally adopted.
- NIST AI RMF Playbook — Practical implementation guidance.
- NIST SP 1270 — Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.
Official EU AI Act texts and guidance.
- EU AI Act — Official Text — Regulation (EU) 2024/1689. Official Journal, 12 July 2024.
- EU AI Act — Consolidated Reader-Friendly Version — Annotated version by Future of Life Institute.
- AI Office — Implementation Guidance — European Commission AI Office resources.
- Guidelines on AI System Definition — Official Commission guidance clarifying what constitutes an AI system under the regulation (Art. 3).
- Guidelines on Prohibited AI Practices — Commission guidelines on banned AI applications and practices (Art. 5).
- Digital Omnibus Proposal — COM(2025) 836. Proposes deadline adjustments for Annex III systems.
- EU AI Act Annex III — High-risk AI system categories.
- EU AI Act Annex IV — Technical documentation requirements.
- EU AI Pact — Voluntary commitment for early compliance. Signatories: Modulos, Saidot, Collibra, and 100+ others.
- GPAI Code of Practice — General Purpose AI model governance.
- Guidelines for GPAI Providers — Detailed scope and obligations for general-purpose AI model providers.
- GPAI Code of Practice — Signatory Taskforce — Coordination forum for GPAI Code of Practice signatories (OpenAI, Anthropic, Google, Mistral, Amazon, xAI).
- AI Watch — European Commission observatory tracking AI development, uptake, and policy impact across Member States.
- AI Act Single Information Platform — Official EU platform with interactive Compliance Checker, AI Act Explorer, timeline, and online helpdesk. Available in EN, FR, DE.
- Code of Practice on AI-Generated Content Marking — Article 50 marking & labelling. Second draft published March 2026, final expected June 2026. Covers machine-readable marking by providers and deepfake labelling by deployers.
- GPAI Code of Practice — Final Version — Final version (July 2025). Three chapters: Transparency, Copyright, Safety & Security. Provides "presumption of compliance" if followed.
Spain is the first EU Member State with a fully operational AI supervisory authority (AESIA) and the most comprehensive published implementation guidance.
AESIA published 16 practical guides in December 2025 — the most comprehensive practical implementation resource available while JTC 21 standards are pending.
- AESIA Official Website — Agencia Española de Supervisión de Inteligencia Artificial. First operational EU AI Act supervisory authority.
- All 16 guides (index) — Complete list with PDFs.
- Guide 01 — Introduction to the AI Act — Overview of the regulation scope, definitions, and obligations.
- Guide 02 — Practical examples — Worked examples for understanding the AI Act.
- Guide 03 — Conformity Assessment — Art. 43 conformity assessment procedures.
- Guide 04 — Quality Management System — Art. 17 QMS requirements / prEN 18286.
- Guide 05 — Risk Management — Art. 9 risk management system / prEN 18228.
- Guide 06 — Human Oversight — Art. 14 human oversight measures / prEN 18229-1.
- Guide 07 — Data Governance — Art. 10 data quality, fairness metrics / prEN 18284, 18283.
- Guide 08 — Transparency — Art. 13 transparency obligations / prEN 18229-1.
- Guide 09 — Accuracy — Art. 15 accuracy and performance metrics / prEN 18229-2.
- Guide 10 — Robustness — Art. 15.4 robustness, drift detection / prEN 18229-2.
- Guide 11 — Cybersecurity — Art. 15.5 cybersecurity / prEN 18282.
- Guide 12 — Logging & Records — Art. 12 logging requirements / prEN 18229-1.
- Guide 13 — Post-Market Monitoring — Art. 72 post-market surveillance.
- Guide 14 — Incident Management — Art. 73 serious incident reporting.
- Guide 15 — Technical Documentation — Art. 11 + Annex IV documentation requirements.
- Guide 16 — Requirements Checklist — Master checklist covering all 16 guides.
The AEPD has published specific guidance on the intersection of AI systems with data protection — critical for any AI Act compliance program since most high-risk AI systems also process personal data.
- AEPD AI Guides & Tools — Complete catalogue of AEPD guidance documents including AI-specific resources.
- Agentic AI & Data Protection — Guidance on autonomous AI agents from a data protection perspective.
- AEPD Generative AI Internal Policy — Reference implementation: how a public authority governs its own use of generative AI.
- Privacy & AI Decalogue — 10 recommendations to protect privacy when using AI systems.
- AI Treatment Framework (Infographic) — Visual guide mapping the full regulatory landscape for AI data processing.
- España Digital 2026 — Spain's digital transformation roadmap including AI priorities and investment.
- SEDIA Regulatory Sandbox — Controlled testing environment for AI innovations under regulatory oversight. First EU AI Act sandbox.
- ENIA — National AI Strategy — Estrategia Nacional de Inteligencia Artificial within the EU Recovery and Resilience Plan.
Courses, tutorials, and articles for learning EU AI Act compliance.
- ML Observability Course — Evidently AI. 40 lessons on ML monitoring and data quality. Free, no gate.
- MLOps Zoomcamp — DataTalks.Club. Free MLOps course covering model deployment and monitoring.
- Andrew Ng AI for Everyone — Non-technical AI literacy. Useful for compliance officers.
- EU AI Act Engineering Compliance Guide — Practical guide for engineering teams implementing EU AI Act compliance, covering risk classification, technical documentation, audit logging, and conformity assessment.
- The EU AI Act Explained (Article by Article) — Annotated walkthrough by Future of Life Institute. Each article cross-referenced with recitals.
- NIST AI RMF Playbook — Practical implementation guidance for the AI Risk Management Framework.
- Making AI Compliance Evidence Machine-Readable — Proposes OSCAL as an interchange format for AI governance, defines 16 property extensions covering lifecycle phases, enforcement semantics, and risk traceability, and presents a three-layer Compliance-as-Code architecture (policy, evidence, enforcement). Validated on two Annex III high-risk systems (credit scoring, medical imaging). Cilla Ugarte, Patricio Guisado, Berlanga de Jesús & Molina López, 2026.
- AI Agents Under EU Law — Structural analysis of why current agentic AI systems cannot satisfy EU AI Act essential requirements: system prompts are not security controls (Art. 15.4), oversight evasion in RL-trained models (Art. 14), transparency across multi-party action chains (Art. 13), and behavioural drift breaking conformity assessment (Art. 43). Nannini, Leon Smith, Maggini, Panai, Feliciano & Tiulkanov, 2026.
- Overview of the CDEI's Roadmap to an Effective AI Assurance Ecosystem — Commentary on the UK blueprint for AI assurance. Frontiers in AI, 2022.
- Mapping the EU AI Act — Technical analysis of AI Act requirements. Madiega et al., 2024.
- NIST SP 1270: Bias in AI — Identifying and managing bias in AI systems. NIST, 2022.
Where practitioners discuss EU AI Act compliance.
- Venturalitica Discord — Community for EU AI Act compliance engineers. Channels: #eu-ai-act, #iso-42001, #sdk-support.
- MLOps Community Slack — 85K+ MLOps practitioners. Active #ai-governance channel.
- DataTalks.Club Slack — 50K+ data practitioners.
- IAPP AI Governance Community — Privacy and AI governance professionals.
- LinkedIn: EU AI Act Compliance — Multiple groups focused on EU AI Act implementation.
Stay updated on EU AI Act developments.
- AI Office Newsletter — Official European Commission AI Office updates.
- AI Supremacy Newsletter — Weekly AI regulation and policy digest.
- The Batch (DeepLearning.AI) — AI news with regulation coverage.
- Import AI — Jack Clark's AI research and policy newsletter.
- IAPP Daily Dashboard — Privacy and AI governance news.
Curated lists with overlapping coverage across AI governance, compliance, and responsible AI.
- Awesome Europe — Open-source software for European institutions, regulations, and standards. Includes a Digital Regulation section with EU AI Act tools.
- Awesome Artificial Intelligence Regulation — Guidelines, principles, tools, and courses on AI ethics and regulation. 1.4K+ stars.
- Awesome MLOps — MLOps tools including model fairness, privacy, and interpretability. 5K+ stars.
- Awesome OSCAL — OSCAL (Open Security Controls Assessment Language) ecosystem — tools, libraries, and resources for compliance-as-code.
- Awesome Responsible AI — Responsible AI tools covering fairness, explainability, privacy, and LLM regulation compliance.
- AI Act Engineering — Reference list for the emerging field of "AI Act Engineering" — practices and tools for EU AI Act compliance.
- Awesome ML Model Governance — Resources on ML model governance, ethics, and responsible AI. By the same maintainer as Awesome MLOps (13K+ stars).
- Awesome Compliance — GRC frameworks, standards, and compliance automation tools including ISO 42001 and NIST AI RMF.
Contributions are welcome! Please read CONTRIBUTING.md before submitting a pull request.
Criteria for inclusion:
- Actively maintained (updated within 12 months)
- Directly relevant to EU AI Act compliance or AI governance
- Open-source tools: must have a public repository
- Commercial tools: must have a free tier, trial, or public documentation
Not included:
- Paid-only tools with no free tier or public docs
- Tools with no EU AI Act relevance
- Abandoned projects (no activity > 12 months)
To the extent possible under law, the contributors have waived all copyright and related or neighboring rights to this work.