Seirios covers both the regulatory obligation and the underlying safety engineering discipline. Compliance frameworks, adversarial threat models, and formal safety standards — all verified by the same 4-layer platform.
The EU AI Act is the world's first comprehensive AI regulation. It classifies AI systems by risk level and imposes strict requirements on HIGH-risk systems — those used in lending, hiring, healthcare triage, biometric identification, and other consequential domains. HIGH-risk enforcement begins August 2026.
Design-time proof, implementation proof, developer guidance logs, and a continuous CI compliance score — all in one exportable package per release.
The standard threat ontology ships with pre-mapped HIGH-risk scenarios for the most common EU fintech and enterprise AI deployments.
The CI gate hard-blocks any merge where a HIGH-risk rule is not covered. No deployment can bypass this without explicit model change and re-verification.
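The hard block amounts to a coverage check over the risk model. A minimal sketch, assuming a hypothetical rule schema (the article IDs and guard names are illustrative, not Seirios's actual format):

```python
# Minimal sketch of a hard-blocking CI gate: every HIGH-risk rule must be
# covered by at least one generated guard or the merge is refused.
# Rule IDs and guard names are illustrative, not Seirios's real schema.
risk_model = {
    "EU_AI_ACT_ART9":  {"risk": "HIGH",    "guards": ["risk_mgmt_guard"]},
    "EU_AI_ACT_ART13": {"risk": "HIGH",    "guards": []},   # uncovered
    "EU_AI_ACT_ART52": {"risk": "LIMITED", "guards": []},
}

def uncovered_high_risk(model: dict) -> list:
    """Return HIGH-risk rule IDs with no covering guard."""
    return sorted(rid for rid, rule in model.items()
                  if rule["risk"] == "HIGH" and not rule["guards"])

blocked = uncovered_high_risk(risk_model)
status = "BLOCKED" if blocked else "PASS"   # real CI would exit non-zero
print(f"CI gate: {status} {blocked}")
```

Because the check runs over the model rather than the diff, a rule can only become "covered" by changing the model itself, which is what forces re-verification.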
GDPR governs the collection, processing, and storage of personal data of EU residents. For AI systems, GDPR introduces specific obligations around automated decision-making, data minimisation, purpose limitation, and the right to explanation — all areas where AI systems frequently fail silently.
The EU Medical Device Regulation and In Vitro Diagnostic Regulation are the primary regulatory frameworks for AI used in medical devices and diagnostics in Europe. Any AI system that qualifies as a medical device or IVD under EU law must demonstrate conformity with MDR/IVDR requirements — including quality management systems (Article 10), post-market surveillance (Article 83), and clinical evaluation. IEC 62304 is the harmonised standard for software under both regulations.
Seirios's 4-layer stack produces the documentation evidence an MDR Article 10 QMS audit requires — risk model, code controls, developer records, and continuous CI verification.
MDR requires continuous post-market surveillance of deployed devices. Seirios's L3 CI agent and on-chain audit trail produce per-release evidence suitable for PMS reports.
HIGH-risk AI in medical devices falls under both EU AI Act and MDR simultaneously. Seirios's combined EU profile covers both without duplicate configuration.
The Digital Operational Resilience Act mandates ICT risk management, incident classification, third-party oversight, and resilience testing for EU financial entities. AI systems deployed in financial services fall squarely within DORA scope — particularly where AI drives automated decisions in trading, credit, or fraud detection.
Seirios's risk model maps ICT risks to DORA articles. Controls are generated and verified automatically.
Every AI decision is logged on-chain. Compliance gaps are detected in CI before deployment — not after an incident.
Seirios's compliance model can be embedded in supplier contracts — requiring downstream vendors to meet the same standards.
PSD3 and the Payment Services Regulation (PSR) extend and strengthen PSD2's requirements around strong customer authentication, fraud liability, and open banking APIs. Where AI is used for fraud scoring, transaction monitoring, or credit decisioning, PSD3 introduces new transparency and explainability obligations that align closely with EU AI Act Article 13.
The NIST AI Risk Management Framework provides a formally voluntary structure for managing AI risk that is increasingly treated as mandatory across the US federal government and its contractors. The 2024 GenAI Profile extends the framework to generative AI systems. Federal contractors should treat it as a compliance requirement, not guidance.
Seirios's risk model formalises governance requirements as verifiable OCL invariants — not policy documents.
L0 threat modeling maps AI risks to NIST categories with documented mitigations before any code is written.
L3 CI reports generate compliance scores per release, providing the continuous measurement evidence NIST requires.
L1 auto-generated controls implement NIST mitigations directly in code. Non-compliant deployments are blocked.
Hallucination, data poisoning, prompt injection, and model inversion threats are mapped in the standard ontology.
On-chain audit log satisfies NIST's requirement for tamper-evident records of AI system decisions.
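The per-release score in these reports can be made concrete with a toy calculation. A sketch only: the GOVERN/MAP/MEASURE/MANAGE split comes from the RMF's four core functions, but the control counts and the scoring formula are invented for illustration:

```python
# Sketch of a per-release compliance score: verified controls over
# required controls, tallied per NIST AI RMF function. Counts are invented.
checks = {
    "GOVERN":  {"required": 4, "verified": 4},
    "MAP":     {"required": 6, "verified": 5},
    "MEASURE": {"required": 5, "verified": 5},
    "MANAGE":  {"required": 3, "verified": 2},
}

def compliance_score(checks: dict) -> float:
    required = sum(c["required"] for c in checks.values())
    verified = sum(c["verified"] for c in checks.values())
    return round(100 * verified / required, 1)

score = compliance_score(checks)
print(f"release compliance score: {score}%")   # 16 of 18 controls verified
```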
NIST AI 600-1 is the first authoritative US government framework to name hallucination, harmful content generation, and human-AI configuration risks as explicit compliance categories. Published in 2024 as a companion to NIST AI RMF, it defines twelve risk categories specific to generative AI — including data privacy, confabulation (hallucination), homogenisation, and obscene or abusive content generation. Every US-facing organisation building on LLMs will be asked about it in procurement.
The FDA's AI/ML-Based SaMD Action Plan governs AI systems that qualify as Software as a Medical Device in the US market — including diagnostic algorithms, clinical decision support tools, medical imaging analysis, and drug discovery AI. The 2024 update introduced predetermined change control plans (PCCP) and real-world performance monitoring requirements. Any AI system touching a clinical outcome in the US requires FDA clearance or approval under this framework.
21 CFR Part 11 governs electronic records and electronic signatures in FDA-regulated industries — pharmaceutical manufacturing, clinical trials, medical device production. Any AI system that generates, modifies, or transmits records in these environments must produce audit trails that meet Part 11 requirements: uniquely identifiable, time-stamped, sequentially recorded, and protected against falsification. Seirios's on-chain AuditRegistry is architecturally aligned with every one of these requirements.
Part 11 requires audit trails that record the date and time of operator entries and actions that create, modify, or delete electronic records. Seirios's on-chain AuditRegistry satisfies this by construction.
Part 11 requires controls that limit access to authorised individuals only. L0 role-based access definitions and L1-generated access controls map directly to this requirement.
Blockchain-backed on-chain storage ensures records cannot be altered after the fact — meeting Part 11's requirement for records that are accurate, complete, and protected against falsification.
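Seirios gets tamper evidence from the blockchain itself; the underlying property can be illustrated off-chain with a plain hash chain. A minimal sketch of the Part 11 audit-trail idea (sequential, timestamped, attributable, falsification-evident), not the actual AuditRegistry:

```python
# Each entry is sequentially numbered, timestamped, attributable to an
# operator, and chained to its predecessor's hash, so any after-the-fact
# alteration is detectable. Illustration of the principle only.
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail: list, operator: str, action: str) -> dict:
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "seq": len(trail),                             # sequentially recorded
        "ts": datetime.now(timezone.utc).isoformat(),  # time-stamped
        "operator": operator,                          # attributable
        "action": action,
        "prev": prev_hash,                             # chained to predecessor
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    trail.append(entry)
    return entry

def verify(trail: list) -> bool:
    """Recompute the chain; any altered record breaks it."""
    prev = "0" * 64
    for e in trail:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

trail = []
append_entry(trail, "analyst_01", "create_record")
append_entry(trail, "analyst_02", "modify_record")
print("chain valid:", verify(trail))
```

Editing any field of any entry changes its recomputed digest, so `verify` fails for the whole trail from that point on.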
ICH E6 Good Clinical Practice is the international standard governing clinical trials — adopted by regulatory authorities in the EU, US, Japan, and most global markets. The R3 revision explicitly addresses digital systems and AI used in trial processes, requiring data integrity, audit trails, validation, and traceability for any AI-generated or AI-assisted data used in a regulatory submission. Seirios's on-chain audit log and L3 CI reports map directly to GCP's audit trail requirements.
Every AI decision logged on-chain with immutable timestamp. GCP requires that records be attributable, legible, contemporaneous, original, and accurate — ALCOA+ criteria that Seirios's audit trail satisfies by design.
GCP requires validation documentation for computerised systems. Seirios's per-release compliance reports and CI gate logs constitute system validation evidence.
GCP mandates audit trails showing who did what and when. Seirios's on-chain AuditRegistry produces exactly this — immutable, timestamped, and exportable for submission.
MITRE ATT&CK is the industry-standard taxonomy for adversarial tactics and techniques. Seirios's threat ontology maps ATT&CK vectors specifically to AI and LLM attack surfaces — prompt injection, model evasion, data poisoning, and supply chain compromise — and generates enforcement controls automatically from them.
All major ATT&CK tactic categories are mapped to AI-specific attack vectors in the standard threat ontology.
Threats from the OWASP Top 10 for LLMs and ATLAS framework are included alongside classical ATT&CK vectors.
L1 codegen produces a dedicated guard per ATT&CK vector. The guard cannot be bypassed without the CI pipeline detecting it.
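The one-guard-per-vector pattern can be sketched as a registry plus a CI presence check. The technique IDs follow MITRE ATLAS naming, but the guard bodies and the registry API are hypothetical, not Seirios's generated code:

```python
# Sketch of one-guard-per-vector registration, plus an illustrative CI
# check flagging any mapped vector with no registered guard.
GUARDS = {}

def guard(vector_id):
    """Register a guard function for one attack vector."""
    def register(fn):
        GUARDS[vector_id] = fn
        return fn
    return register

@guard("AML.T0051")   # ATLAS: LLM Prompt Injection
def prompt_injection_guard(user_input: str) -> bool:
    return "ignore previous instructions" not in user_input.lower()

@guard("AML.T0043")   # ATLAS: Craft Adversarial Data
def evasion_guard(features: list) -> bool:
    return all(abs(x) < 1e6 for x in features)   # reject implausible inputs

MAPPED_VECTORS = {"AML.T0051", "AML.T0043", "AML.T0020"}  # T0020: poisoning

missing = sorted(MAPPED_VECTORS - GUARDS.keys())
print("vectors without a guard:", missing)   # CI fails if non-empty
```

Deleting a guard removes its registration, so the presence check catches the bypass on the next CI run.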
MITRE ATLAS (Adversarial Threat Landscape for AI Systems) is MITRE's companion framework to ATT&CK, built specifically for machine learning attack surfaces. Where ATT&CK covers enterprise IT threats broadly, ATLAS maps adversarial tactics and techniques targeting ML models directly — including reconnaissance against ML pipelines, model evasion, data poisoning, and ML supply chain compromise. It is the most comprehensive adversarial framework for AI systems available today.
Reconnaissance: attackers probe model architecture, training data, and inference APIs. Seirios maps these vectors to L0 threat classifications and L1 access controls.
Resource development: adversaries build adversarial examples, shadow models, and poisoned datasets. L0 formal verification catches incomplete mitigations before code is written.
Model evasion: the most common AI safety failure mode. L3 adversarial bypass detection catches evasion patterns in CI before deployment.
Inference-time extraction: attackers pull training data or model weights out through the inference API. GDPR and ATLAS controls overlap directly; one configuration covers both.
Poisoning and impact: degrading model performance or corrupting production outputs. L3 CI monitors for control bypass patterns that could enable these attacks.
Seirios's threat ontology covers both ATT&CK and ATLAS vectors in a single risk model. No separate configuration required.
The OWASP Top 10 for LLMs is the most widely adopted reference list for vulnerabilities specific to large language model applications. Published by the Open Worldwide Application Security Project, it defines the ten most critical risks — from prompt injection and insecure output handling to training data poisoning and model theft. Every engineering team building AI products recognises it. Seirios's threat ontology maps all ten categories to enforceable controls automatically.
The OWASP Top 10 for LLM Agents extends the LLM Top 10 to cover the specific vulnerabilities of autonomous agent systems — where the risk is not just what the model says, but what the agent does. Agents with access to tools, APIs, filesystems, and credentials introduce an entirely new attack surface. Seirios addresses all ten categories through formally verified scope invariants, auto-generated tool guards, and CI-enforced permission boundaries.
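A scope invariant for agent tools can be sketched as an allow-list checked on every call. The agent names, tool names, and `invoke_tool` API below are hypothetical, not Seirios's actual interface:

```python
# Sketch of a permission boundary for agent tool calls: each agent role
# carries an allowed tool scope, and every call is checked before execution.
class ScopeViolation(Exception):
    pass

AGENT_SCOPES = {
    "support_agent": {"search_docs", "create_ticket"},
    "billing_agent": {"search_docs", "read_invoice"},
}

TOOLS = {
    "search_docs":   lambda q: f"results for {q!r}",
    "create_ticket": lambda s: f"ticket: {s}",
    "read_invoice":  lambda i: f"invoice {i}",
}

def invoke_tool(agent: str, tool: str, *args):
    """Execute a tool only if it is inside the agent's declared scope."""
    allowed = AGENT_SCOPES.get(agent, set())
    if tool not in allowed:
        raise ScopeViolation(f"{agent} may not call {tool}")
    return TOOLS[tool](*args)

print(invoke_tool("support_agent", "search_docs", "refunds"))
```

The point of checking at the call site rather than in the prompt is that a jailbroken model still cannot reach a tool outside its declared scope.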
ISO/IEC 42001 is the first international AI management system standard — the ISO equivalent of a compliance certification for AI. Where EU AI Act tells you what to achieve, ISO 42001 defines how to demonstrate you have an ongoing management system to achieve it. Enterprises and procurement teams are already requiring ISO 42001 certification in supplier contracts, making it one of the most commercially significant profiles on the roadmap.
Large enterprises and public sector buyers are adding ISO 42001 certification requirements to AI supplier contracts. Seirios's evidence chain maps directly to what an ISO 42001 audit requires.
ISO 42001 and EU AI Act share significant structural overlap. A Seirios risk model built for EU AI Act compliance satisfies the majority of ISO 42001 Clause 6 and 8 requirements without additional configuration.
ISO 42001 certification requires documented evidence of your AI management system in operation. Seirios's per-release compliance reports and on-chain audit trail are exactly that evidence.
IEC 62304 defines the software development lifecycle requirements for medical device software — including AI systems used for diagnosis, treatment recommendation, and clinical decision support. It is mandatory in the EU (via MDR/IVDR), recognised by the FDA as a consensus standard for 510(k) and PMA submissions, and required in most global markets. IEC 62304's post-market surveillance requirements are structurally identical to what Seirios's L3 CI agent produces on every release.
IEC 62304's three safety classes map directly to Seirios's risk level classifications in the L0 threat model. HIGH-risk AI maps to Class C — the strictest requirements.
Every Seirios release produces a traceable evidence chain from risk definition (L0) to code controls (L1) to developer guidance (L2) to CI verification (L3) — exactly what IEC 62304 requires.
IEC 62304 requires ongoing monitoring of deployed software. Seirios's L3 CI agent and on-chain audit trail provide continuous, per-release evidence for post-market surveillance submissions.
ISO 26262 is the functional safety standard for automotive electrical and electronic systems — mandatory for any AI system making decisions in a vehicle. ISO 21448 (SOTIF — Safety of the Intended Functionality) extends this to AI-specific failures: situations where the system performs as designed but the design itself is insufficient for safety. SOTIF is philosophically identical to Seirios's core premise — guard presence is not guard sufficiency.
Seirios's L0 formal threat modeling performs exactly the structured hazard analysis ISO 26262 requires — with mathematical verification that the analysis is complete.
SOTIF asks whether a correctly functioning system is safe. Seirios's 3-check CI pipeline — presence, coverage, integrity — directly addresses SOTIF's requirement to prove behavioural safety, not just control presence.
ISO 26262 mandates full traceability from safety requirements to implementation to test evidence. Seirios's 4-layer chain — model to code to CI to audit log — is that traceability chain.
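The presence / coverage / integrity distinction can be sketched as three independent checks over a release artefact. The field names and release structure here are invented for illustration:

```python
# Presence: every required control exists in the build.
# Coverage: every required control has behavioural test evidence.
# Integrity: deployed artefacts match the verified artefacts.
release = {
    "controls": {"input_guard", "output_guard", "rate_limit"},
    "required": {"input_guard", "output_guard", "rate_limit"},
    "tested":   {"input_guard", "output_guard"},   # rate_limit untested
    "checksums_match": True,
}

def presence(r):
    return r["required"] <= r["controls"]

def coverage(r):
    return r["required"] <= r["tested"]

def integrity(r):
    return r["checksums_match"]

results = {c.__name__: c(release) for c in (presence, coverage, integrity)}
print(results)   # coverage fails: rate_limit has no test evidence
```

The SOTIF point is visible in the example: the guard is present and the build is intact, yet the release still fails, because presence alone never proved the behaviour.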
Google's Secure AI Framework (SAIF) is the most widely adopted enterprise AI security reference framework that isn't a government regulation. Originally published in 2023 and updated in 2024 to address agentic systems, SAIF defines six core principles for securing AI across the full stack — from the model itself to the deployment infrastructure. It is increasingly cited in enterprise procurement requirements and supplier assessments, making it commercially significant even though it carries no legal force.
SAIF requires security built into AI systems from design time, not added after deployment. Seirios's L0 formal verification and L1 auto-generated controls are exactly this — compliance and security enforced structurally, not through policy.
SAIF requires extending security detection to cover AI-specific threats. Seirios's L3 CI agent and on-chain audit log provide per-release compliance scoring and immutable behavioural records — the detection layer SAIF requires.
SAIF advocates for automated security controls that keep pace with AI development velocity. Seirios's MDD engine generates guards automatically from the verified model — zero manual security code, scales with every code change.
SAIF requires consistent security controls across model, data, and infrastructure layers. Seirios's single risk model drives controls across all four layers — no inconsistency between what the compliance team defined and what engineering deployed.
SAIF recognises that classical security controls are insufficient for AI. Seirios's MITRE ATLAS and OWASP Agentic profiles add AI-specific threat categories on top of the classical security baseline.
SAIF requires controls proportionate to the risk level of each AI deployment. Seirios's L0 risk classification drives exactly this — HIGH-risk systems get hard-blocking CI gates, lower-risk systems get scored reports.
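Risk-proportionate gating reduces to a simple rule: any failed HIGH-risk finding blocks outright, everything else only moves a score. A sketch with hypothetical findings and rule names:

```python
# Sketch of risk-proportionate gating: HIGH-risk failures hard-block the
# release; lower-risk failures are reported as a score instead.
findings = [
    {"rule": "bias_check", "risk": "HIGH",    "passed": True},
    {"rule": "log_format", "risk": "LIMITED", "passed": False},
    {"rule": "doc_string", "risk": "MINIMAL", "passed": False},
]

def gate_decision(findings: list):
    if any(f["risk"] == "HIGH" and not f["passed"] for f in findings):
        return "BLOCK", 0.0
    passed = sum(f["passed"] for f in findings)
    return "REPORT", round(100 * passed / len(findings), 1)

decision, score = gate_decision(findings)
print(decision, score)
```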
The Financial Services AI Risk Management Framework combines MAS Technology Risk Management (TRM) guidelines with the FEAT (Fairness, Ethics, Accountability, Transparency) principles for AI in Singapore financial services. MAS TRM is live in Seirios today. The full FS AI RMF profile adds FEAT-specific controls for financial AI systems.
APRA's CPS 230 Operational Risk Management standard came into force in July 2025 and imposes strict requirements on Australian financial institutions for operational risk management, business continuity, and third-party oversight. AI systems that drive automated decisions in banking, insurance, or superannuation fall directly under CPS 230's scope — particularly its requirements for service provider risk management and resilience testing.
The UK is moving from its current sector-based approach (FCA, ICO, and CMA guidance) toward a statutory AI framework following the AI Safety Institute's work and the 2023 AI Safety Summit at Bletchley Park. The framework is expected to impose obligations on developers and deployers of AI systems across high-risk sectors, with strong structural parallels to the EU AI Act. Any EU customer with UK operations needs both.
FCA guidance on AI in financial services, ICO guidance on AI and data protection, and CMA principles on AI foundation models all apply today. Seirios's EU AI Act + GDPR profiles cover the majority of these requirements.
The UK government has committed to statutory AI regulation. When enacted, it will require risk management systems, transparency obligations, and audit capabilities — all directly addressed by Seirios's 4-layer stack.
The UK framework is deliberately designed to be compatible with EU AI Act to avoid creating dual compliance burdens. A Seirios model built for EU AI Act provides strong UK coverage with minimal additional configuration.
The UAE is the most active AI regulatory jurisdiction in the Middle East and North Africa region. The UAE AI Office has published an AI ethics framework and principles; ADGM (Abu Dhabi Global Market) and DIFC (Dubai International Financial Centre) have published specific AI guidance for financial services. UAE financial institutions and technology companies are building AI compliance programmes now, ahead of anticipated mandatory requirements.
The UAE AI Office's principles map closely to EU AI Act Article 13 (transparency) and Article 14 (human oversight). Seirios's L0 risk model and L1 controls address both directly.
Both free zones have published AI governance guidance for financial institutions covering model risk management, explainability, and audit trails — all areas Seirios addresses natively.
The UAE government has signalled intent to move from principles to mandatory requirements. Organisations building AI compliance infrastructure now will be well-positioned when obligations become enforceable.
Colorado's SB 205 and Texas's HB 1709 are among the first US state-level AI regulations with enforceable requirements. Both focus on algorithmic discrimination in consequential decisions — employment, housing, credit, insurance, education — and require deployers to conduct impact assessments and implement risk management programmes that closely mirror EU AI Act Article 9.
The Korean AI Basic Act (passed December 2024, enforcement January 2026) establishes obligations for "high-impact AI" covering transparency, human oversight, safety testing, and incident reporting, closely paralleling EU AI Act HIGH-risk requirements. Korean financial institutions and platform operators deploying AI face mandatory compliance in 2026.
Japan's Ministry of Economy, Trade and Industry (METI) published AI Guidelines for Business in 2024, establishing a voluntary risk management framework with strong alignment to NIST AI RMF and EU AI Act principles. Japan is expected to move toward mandatory requirements by 2025–26. Organisations operating in Japan can use Seirios's NIST profile as a strong baseline today.
Singapore's Personal Data Protection Act (PDPA) combined with the IMDA/PDPC AI Governance Framework v2 establishes data protection and AI accountability requirements for organisations operating in Singapore. The framework emphasises explainability, human oversight, and algorithmic impact assessments — all areas directly covered by Seirios's existing MAS TRM profile.