Awesome EU AI Act

A curated list of tools, frameworks, standards, and resources for AI Assurance and EU AI Act compliance.

AI Assurance is "the process of measuring, evaluating, and communicating the trustworthiness of AI systems" (UK DSIT, 2024). It is what the EU AI Act actually requires in practice: Arts. 9–15 mandate risk management, data governance, transparency, oversight, and robustness — all of which require verifiable evidence, not just self-assessment.

Assessment tells you WHERE you stand. Assurance proves you've DONE something about it.

The EU AI Act entered into force on 1 August 2024. High-risk AI systems (Annex III) must comply by August 2026 (subject to the Digital Omnibus backstop). This list covers tools that help engineers generate the evidence required by law — not just classify risk.

Contributing: Pull requests welcome. See CONTRIBUTING.md.


Contents

  • Developer Tools & SDKs
  • Assessment & Classification
  • AI Governance Platforms
  • Monitoring & Observability
  • Testing & Red-Teaming
  • Evidence Formats & Frameworks
  • AI Assurance Frameworks
  • Standards
  • Regulatory Documents
  • Educational Resources
  • Communities
  • News & Newsletters
  • Related Awesome Lists
  • Contributing
  • License

Developer Tools & SDKs

Tools that integrate into ML pipelines and generate compliance evidence.

  • Venturalitica SDK — Open-source Python SDK for EU AI Act and ISO 42001 compliance evidence. Generates OSCAL policies, CycloneDX ML BOM, bias audits, and Annex IV documentation. pip install venturalitica
  • Giskard — Open-source LLM testing and red-teaming framework with vulnerability scanning. CLI-first, integrates with HuggingFace and LangChain. A scan sketch follows this list.
  • VerifyWise — Open-source AI governance platform. Self-hosted compliance tracking for EU AI Act, ISO 42001, NIST AI RMF.
  • Evidently AI — ML monitoring and evaluation framework. 7K+ stars, 35M+ downloads. No compliance mapping, but strong data quality and drift detection (Art. 10 relevant).
  • IBM OpenPages — GRC platform with AI governance module. Enterprise-grade, watsonx.governance integration.
  • AIR Blackbox — Open-source CLI scanner for EU AI Act technical requirements (Arts. 9–15). Checks Python AI agent code against all six technical areas: risk management, data governance, transparency, logging, human oversight, and robustness. pip install air-blackbox
  • Microsoft Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and reliability engineering for autonomous AI agents. Covers 10/10 OWASP Agentic Top 10 controls. SDKs in Python, TypeScript, .NET, Rust, Go. MIT licensed.
  • COMPL-AI — Compliance-centered LLM evaluation framework with 29+ benchmarks mapped to EU AI Act technical requirements. Built on UK AISI Inspect. By ETH Zurich, INSAIT, and LatticeFlow AI.
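
For the Giskard entry above, a minimal scan sketch, assuming Giskard's Python wrapper API (`giskard.Model`, `giskard.Dataset`, `giskard.scan`); the data, feature names, and model are hypothetical placeholders:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
import giskard

# Toy tabular data; column names and values are placeholders.
df = pd.DataFrame({
    "age": [25, 40, 35, 50, 23, 61],
    "income": [30_000, 80_000, 52_000, 95_000, 28_000, 120_000],
    "approved": [0, 1, 1, 1, 0, 1],
})
clf = LogisticRegression().fit(df[["age", "income"]], df["approved"])

# Wrap the model and data so Giskard can run its automated vulnerability scan.
model = giskard.Model(
    model=lambda X: clf.predict_proba(X),   # prediction function returning class probabilities
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["age", "income"],
)
dataset = giskard.Dataset(df, target="approved")

report = giskard.scan(model, dataset)
report.to_html("giskard_scan.html")   # archive the report as part of the audit trail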

Assessment & Classification

Tools to classify AI systems by risk level and assess compliance gaps.

  • Modulos Risk Agent — Interactive AI risk assessment with EUR quantification. No login required. First company certified to ISO 42001 (via CertX).
  • Trail-ML — EU AI Act compliance platform. ETH Zurich spin-off. Focus on risk classification and technical documentation.
  • Holistic AI — AI risk governance platform. Comprehensive auditing and mitigation across 8 risk domains.
  • Enkrypt AI — AI risk classification and red-teaming for LLMs.

AI Governance Platforms

Enterprise platforms for AI risk management and governance.

  • Credo AI — AI governance platform. Policy enforcement, model registry, audit trails. SOC 2 Type II certified.
  • Arthur AI — ML observability and AI governance. Agent discovery and governance for agentic AI. SOC 2 Type II.
  • Fiddler AI — ML monitoring and explainability. Amazon SageMaker integration. $30M Series C (2025).
  • Saidot — AI governance knowledge graph with inherited governance data. EU AI Pact signatory.
  • NAAIA — French AI governance platform. First ISO 42001 certified in France (AFNOR). EU AI Pact signatory.
  • Lumenova AI — AI governance and compliance platform. SOC 2 Type II.
  • Trustible — AI governance and policy management.
  • OneTrust — GRC platform expanding into AI governance. $1.13B raised.
  • Vanta — Automated compliance platform with AI governance modules. $504M raised.

Monitoring & Observability

Tools for post-deployment AI system monitoring (Art. 72 Post-Market Monitoring).

  • Evidently AI — Data drift, model performance, and data quality monitoring. 40-lesson free course. A drift-report sketch follows this list.
  • Fiddler AI — ML monitoring, explainability, and fairness monitoring.
  • WhyLabs — Data and ML monitoring platform.
  • Arize AI — ML observability platform with LLM tracing.
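
For the Evidently AI entry above, a minimal drift-report sketch, assuming the `Report`/`DataDriftPreset` API of Evidently 0.4.x (newer releases have reorganised these imports); the synthetic data is purely illustrative:

```python
import numpy as np
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

rng = np.random.default_rng(0)
reference = pd.DataFrame({            # training-time snapshot
    "age": rng.normal(40, 10, 500),
    "income": rng.normal(60_000, 15_000, 500),
})
current = pd.DataFrame({              # recent production data with a shifted distribution
    "age": rng.normal(45, 10, 500),
    "income": rng.normal(55_000, 15_000, 500),
})

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")   # periodic evidence for post-market monitoring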

Testing & Red-Teaming

AI Assurance techniques for adversarial testing, robustness, and vulnerability scanning (Art. 15 Robustness).

  • Giskard — Automated LLM vulnerability scanning and red-teaming. 4K+ GitHub stars.
  • DeepEval — LLM evaluation framework with 14+ evaluation metrics. An evaluation sketch follows this list.
  • PyRIT — Microsoft's Python Risk Identification Tool for generative AI.
  • Inspect AI — UK AISI's framework for LLM safety evaluations.
  • Inkog — Open-source security scanner for AI agents. Detects prompt injection, infinite loops, token bombing, SQL injection via LLM, and missing human oversight across 20+ frameworks. Maps vulnerabilities to EU AI Act Articles 9, 14 (Human Oversight), and 15 (Accuracy, Robustness, Cybersecurity). CLI + MCP server with SARIF output.
  • AI Verify — Singapore government AI testing framework. Supports EU AI Act mappings.
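
For the DeepEval entry above, a minimal evaluation sketch, assuming DeepEval's `LLMTestCase`/`evaluate` API; the prompt and response are hypothetical, and the relevancy metric is judged by an LLM, so model credentials (OpenAI by default) must be configured:

```python
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

# Hypothetical prompt/response pair; in practice actual_output comes from your LLM application.
test_case = LLMTestCase(
    input="Which EU AI Act articles set technical requirements for high-risk AI systems?",
    actual_output=(
        "Articles 9 to 15 cover risk management, data governance, technical "
        "documentation, logging, transparency, human oversight, accuracy and robustness."
    ),
)

# Relevancy is scored by an LLM judge against the input question.
metric = AnswerRelevancyMetric(threshold=0.7)
evaluate(test_cases=[test_case], metrics=[metric])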

Evidence Formats & Frameworks

Standards and formats for generating auditable compliance evidence.

  • OSCAL (Open Security Controls Assessment Language) — NIST standard for machine-readable compliance documentation. Native format for policy-as-code AI governance. Used by Venturalitica SDK.
  • CycloneDX ML BOM — Machine Learning Bill of Materials standard. Documents model provenance, datasets, and dependencies (EU AI Act Annex IV.2). A BOM fragment sketch follows this list.
  • Model Card Toolkit — Google's toolkit for generating model cards (Annex IV.3).
  • Croissant — ML dataset format with provenance metadata (Art. 10 data governance).
  • SLSA Framework — Supply-chain security framework for software artifacts. Relevant for Art. 15.5 cybersecurity.
  • OWASP Top 10 for Agentic Applications — First OWASP risk framework for autonomous AI agents. 10 risks from Agent Goal Hijacking to Rogue Agents. Peer-reviewed by 100+ researchers. Released December 2025.
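
For the CycloneDX ML BOM entry above, a minimal sketch of what such an evidence artefact looks like, assuming the CycloneDX 1.5 `machine-learning-model` component type and `modelCard` extension; the model name and version are hypothetical, and the output should be validated against the official CycloneDX schema:

```python
import json

# Minimal CycloneDX-style ML BOM; component details below are placeholders.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "credit-scoring-classifier",
            "version": "2.3.0",
            "modelCard": {
                "modelParameters": {"task": "classification"},
            },
        }
    ],
}

with open("ml-bom.json", "w") as f:
    json.dump(ml_bom, f, indent=2)   # validate against the official CycloneDX schema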

AI Assurance Frameworks

Institutional frameworks that define and structure the AI Assurance process.

  • CDEI AI Assurance Roadmap — Centre for Data Ethics & Innovation (UK). Blueprint for a functional AI assurance ecosystem. Defines the techniques catalogue: auditing, impact assessment, red-teaming, bias analysis, explainability.
  • UK AI Safety Institute — Develops evaluations for frontier models. Framework directly applicable to EU AI Act Art. 15 (accuracy, robustness, cybersecurity).
  • Inspect AI — UK AISI open-source framework for LLM safety evaluations. Apache 2.0. A task sketch follows this list.
  • AI Verify (Singapore IMDA) — Governance testing framework. Includes EU AI Act principle mappings.
  • ALTAI (Assessment List for Trustworthy AI) — EU Commission self-assessment tool for Trustworthy AI. Based on the 7 HLEG principles.
  • HUDERIA (Council of Europe) — Human rights, democracy, and rule of law impact assessment methodology for AI systems. Complements EU AI Act risk management (Art. 9) with fundamental rights perspective.
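
For the Inspect AI entry above, a minimal evaluation-task sketch, assuming Inspect's `Task`/`Sample`/`eval` API; the single sample and the model identifier are illustrative only:

```python
from inspect_ai import Task, eval, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def ai_act_qa():
    # A single illustrative sample; real evaluations use much larger datasets.
    return Task(
        dataset=[Sample(input="In what year did the EU AI Act enter into force?",
                        target="2024")],
        solver=generate(),
        scorer=includes(),   # passes if the target string appears in the completion
    )

# Model identifier is an example; any provider supported by Inspect can be used.
eval(ai_act_qa(), model="openai/gpt-4o-mini")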

Standards

Technical standards relevant to EU AI Act compliance.

EU AI Act Harmonised Standards (JTC 21)

Note: No harmonised standards have been published yet; all JTC 21 drafts are still at Stages 10–40. Organizations must comply with EU AI Act obligations regardless (Art. 40). Harmonised standards are expected in 2026–2027.

| Standard | Scope | Stage | EU AI Act Article |
| --- | --- | --- | --- |
| prEN 18286 | Quality Management System for AI | Stage 40 (public consultation) | Art. 17 |
| prEN 18228 | Risk Management | Stage 20 | Art. 9 |
| prEN 18284 | Data Governance | Stage 10 | Art. 10 |
| prEN 18283 | Fairness | Stage 10 | Art. 10 |
| prEN 18229-1 | Transparency & Logging | Stage 20 | Arts. 12, 13 |
| prEN 18229-2 | Accuracy & Robustness | Stage 20 | Art. 15 |
| prEN 18282 | Cybersecurity | Stage 10 | Art. 15.5 |

ISO Standards

  • ISO 42001:2023 — AI Management System (AIMS). Organizational governance of AI. Complementary to EU AI Act (not a substitute).
  • ISO/IEC 23894:2023 — AI Risk Management guidance.
  • ISO/IEC 24028:2020 — AI Trustworthiness overview.
  • ISO/IEC 5338:2023 — AI System Lifecycle Processes.
  • ISO/IEC TR 24029-1:2021 — Assessment of robustness of neural networks. Relevant to Art. 15 (accuracy, robustness, cybersecurity).
  • ISO/IEC 42005:2025 — AI System Impact Assessment. Guidance for understanding how AI systems affect individuals, groups, and society. Complements ISO 42001.
  • ISO/IEC 42006:2025 — Requirements for bodies providing audit and certification of AI Management Systems. Enables the ISO 42001 certification ecosystem.

NIST Frameworks

  • NIST AI RMF 1.0 — AI Risk Management Framework. Govern, Map, Measure, Manage structure. US-origin but globally adopted.
  • NIST AI RMF Playbook — Practical implementation guidance.
  • NIST SP 1270 — Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

Regulatory Documents

Official EU AI Act texts and guidance.

Spain

Spain is the first EU Member State with a fully operational AI supervisory authority (AESIA) and the most comprehensive published implementation guidance.

AESIA (AI Supervisory Authority)

AESIA published 16 practical guides in December 2025 — the most comprehensive practical implementation resource available while JTC 21 standards are pending.

AEPD (Data Protection Authority)

The AEPD has published specific guidance on the intersection of AI systems with data protection — critical for any AI Act compliance program since most high-risk AI systems also process personal data.

National AI Strategy

Educational Resources

Courses, tutorials, and articles for learning EU AI Act compliance.

Courses

Articles & Tutorials

Key Papers

  • Making AI Compliance Evidence Machine-Readable — Proposes OSCAL as an interchange format for AI governance, defines 16 property extensions covering lifecycle phases, enforcement semantics, and risk traceability, and presents a three-layer Compliance-as-Code architecture (policy, evidence, enforcement). Validated on two Annex III high-risk systems (credit scoring, medical imaging). Cilla Ugarte, Patricio Guisado, Berlanga de Jesús & Molina López, 2026.
  • AI Agents Under EU Law — Structural analysis of why current agentic AI systems cannot satisfy EU AI Act essential requirements: system prompts are not security controls (Art. 15.4), oversight evasion in RL-trained models (Art. 14), transparency across multi-party action chains (Art. 13), and behavioural drift breaking conformity assessment (Art. 43). Nannini, Leon Smith, Maggini, Panai, Feliciano & Tiulkanov, 2026.
  • Overview of the CDEI's Roadmap to an Effective AI Assurance Ecosystem — Commentary on the UK blueprint for AI assurance. Frontiers in AI, 2022.
  • Mapping the EU AI Act — Technical analysis of AI Act requirements. Madiega et al., 2024.
  • NIST SP 1270: Bias in AI — Identifying and managing bias in AI systems. NIST, 2022.

Communities

Where practitioners discuss EU AI Act compliance.

News & Newsletters

Stay updated on EU AI Act developments.

Related Awesome Lists

Curated lists with overlapping coverage across AI governance, compliance, and responsible AI.

  • Awesome Europe — Open-source software for European institutions, regulations, and standards. Includes a Digital Regulation section with EU AI Act tools.
  • Awesome Artificial Intelligence Regulation — Guidelines, principles, tools, and courses on AI ethics and regulation. 1.4K+ stars.
  • Awesome MLOps — MLOps tools including model fairness, privacy, and interpretability. 5K+ stars.
  • Awesome OSCAL — OSCAL (Open Security Controls Assessment Language) ecosystem — tools, libraries, and resources for compliance-as-code.
  • Awesome Responsible AI — Responsible AI tools covering fairness, explainability, privacy, and LLM regulation compliance.
  • AI Act Engineering — Reference list for the emerging field of "AI Act Engineering" — practices and tools for EU AI Act compliance.
  • Awesome ML Model Governance — Resources on ML model governance, ethics, and responsible AI. By the same maintainer as Awesome MLOps (13K+ stars).
  • Awesome Compliance — GRC frameworks, standards, and compliance automation tools including ISO 42001 and NIST AI RMF.

Contributing

Contributions are welcome! Please read CONTRIBUTING.md before submitting a pull request.

Criteria for inclusion:

  • Actively maintained (updated within 12 months)
  • Directly relevant to EU AI Act compliance or AI governance
  • Open-source tools: must have a public repository
  • Commercial tools: must have a free tier, trial, or public documentation

Not included:

  • Paid-only tools with no free tier or public docs
  • Tools with no EU AI Act relevance
  • Abandoned projects (no activity > 12 months)

License

CC0

To the extent possible under law, the contributors have waived all copyright and related or neighboring rights to this work.
