One platform.
Every framework.

Seirios covers both the regulatory obligation and the underlying safety engineering discipline. Compliance frameworks, adversarial threat models, and formal safety standards — all verified by the same 4-layer platform.

6 live · 8 in progress · 13 planned
European Union · Regulation (EU) 2024/1689

EU AI Act

Live · HIGH-risk enforcement from Aug 2026 · Max fine: €35M or 7% of global turnover

The EU AI Act is the world's first comprehensive AI regulation. It classifies AI systems by risk level and imposes strict requirements on HIGH-risk systems — those used in lending, hiring, healthcare triage, biometric identification, and other consequential domains. HIGH-risk enforcement begins August 2026.

Article Requirement Seirios layer
Art. 9 Risk management system — documented, continuously updated, covering all lifecycle phases
L0 — Risk model
Art. 10 Data governance — training data quality, bias monitoring, relevance documentation
L0 + L1
Art. 12 Record keeping — automatic logging of every decision the AI system makes
L1 — Audit log
Art. 13 Transparency — users must be informed when interacting with an AI system
L1 controls
Art. 17 Quality management system — documented processes, training records, change controls
L2 + L3 reports
Art. 72 Post-market monitoring — continuous assessment of AI system performance in deployment
L3 — CI agent
What Seirios produces

Regulator-ready evidence package

Design-time proof, implementation proof, developer guidance logs, and a continuous CI compliance score — all in one exportable package per release.

HIGH-risk use cases covered

Lending, hiring, medical triage, insurance

The standard threat ontology ships with pre-mapped HIGH-risk scenarios for the most common EU fintech and enterprise AI deployments.

Deployment blocking

Non-compliant code cannot merge

The CI gate hard-blocks any merge where a HIGH-risk rule is not covered. No deployment can bypass this without an explicit model change and re-verification.
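The gate logic described above can be sketched in a few lines. This is an illustrative Python sketch, not Seirios's actual implementation — the rule and control identifiers are invented for the example:

```python
# Hypothetical CI coverage gate: every HIGH-risk rule must be covered by at
# least one implemented control, otherwise the merge is blocked.
# Rule and control ids below are illustrative, not real Seirios identifiers.

def uncovered_high_risk_rules(rules, controls):
    """Return the HIGH-risk rule ids that no control claims to cover."""
    covered = {rule_id for c in controls for rule_id in c["covers"]}
    return [r["id"] for r in rules if r["risk"] == "HIGH" and r["id"] not in covered]

def gate(rules, controls):
    """True if the merge may proceed (no uncovered HIGH-risk rules)."""
    return not uncovered_high_risk_rules(rules, controls)

rules = [
    {"id": "EU-AIA-ART9-RM", "risk": "HIGH"},
    {"id": "EU-AIA-ART12-LOG", "risk": "HIGH"},
]
controls = [{"id": "guard-risk-mgmt", "covers": ["EU-AIA-ART9-RM"]}]

print(gate(rules, controls))                       # False — merge blocked
print(uncovered_high_risk_rules(rules, controls))  # ['EU-AIA-ART12-LOG']
```

In a real pipeline the gate would exit non-zero on failure, which is what makes the block "hard": the merge cannot proceed until the model is updated and re-verified.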

European Union · Regulation (EU) 2016/679

GDPR

Live · In force since May 2018 · Max fine: €20M or 4% of global turnover

GDPR governs the collection, processing, and storage of personal data of EU residents. For AI systems, GDPR introduces specific obligations around automated decision-making, data minimisation, purpose limitation, and the right to explanation — all areas where AI systems frequently fail silently.

Article Requirement Seirios layer
Art. 5 Purpose limitation — data may only be processed for specified, explicit, legitimate purposes
L0 — OCL invariants
Art. 6 Lawfulness of processing — valid legal basis (consent, contract, legitimate interest) required
L0 + L1 controls
Art. 22 Automated decision-making — right not to be subject to solely automated decisions with significant effects
L1 — GDPR controls
Art. 25 Data protection by design and by default — privacy built into system architecture
L1 codegen
Art. 30 Records of processing activities — documented inventory of all data processing operations
L1 — Audit log
Art. 33 Breach notification — 72-hour notification to supervisory authority after discovering a breach
L3 — CI + alerts
Note: GDPR and EU AI Act overlap significantly for AI systems processing personal data. Seirios's combined EU profile covers both simultaneously — no duplicate configuration required.
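To make the Article 5 mapping concrete, here is a hedged Python analogue of what a purpose-limitation invariant could look like — the real Seirios L0 invariants are written in OCL, and the data categories and purposes below are assumptions for the sketch:

```python
# Illustrative analogue of an OCL-style purpose-limitation invariant
# (Art. 5 GDPR): every processing operation may use a data category only for
# a purpose declared for it. Categories and purposes are invented examples.

DECLARED_PURPOSES = {
    "contact_email": {"account_notifications"},
    "transaction_history": {"fraud_detection", "credit_scoring"},
}

def purpose_limitation_holds(operations):
    """Invariant: each operation's purpose is declared for its data category."""
    return all(
        op["purpose"] in DECLARED_PURPOSES.get(op["data"], set())
        for op in operations
    )

ops = [
    {"data": "transaction_history", "purpose": "fraud_detection"},  # declared
    {"data": "contact_email", "purpose": "marketing"},              # never declared
]
print(purpose_limitation_holds(ops))      # False — marketing was never declared
print(purpose_limitation_holds(ops[:1]))  # True
```

The point of expressing the rule as an invariant rather than a policy document is that it can be checked mechanically against every processing operation the system performs.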
European Union · MDR (EU) 2017/745 + IVDR (EU) 2017/746

EU MDR / IVDR

Planned · In force — EU medical device market · AI in medical devices, diagnostic systems, and clinical decision support in the EU

The EU Medical Device Regulation and In Vitro Diagnostic Regulation are the primary regulatory frameworks for AI used in medical devices and diagnostics in Europe. Any AI system that qualifies as a medical device or IVD under EU law must demonstrate conformity with MDR/IVDR requirements — including quality management systems (Article 10), post-market surveillance (Article 83), and clinical evaluation. IEC 62304 is the harmonised standard for software under both regulations.

Article 10 — Quality management

Documented QMS covering the full lifecycle

Seirios's 4-layer stack produces the documentation evidence an MDR Article 10 QMS audit requires — risk model, code controls, developer records, and continuous CI verification.

Article 83 — Post-market surveillance

Continuous monitoring after CE marking

MDR requires continuous post-market surveillance of deployed devices. Seirios's L3 CI agent and on-chain audit trail produce per-release evidence suitable for PMS reports.

EU AI Act overlap

Medical AI must meet both MDR and EU AI Act

HIGH-risk AI in medical devices falls under both EU AI Act and MDR simultaneously. Seirios's combined EU profile covers both without duplicate configuration.

📋 Planned: The EU MDR / IVDR profile builds directly on the EU AI Act and IEC 62304 profiles. Organisations with AI medical devices need all three — they will be available together as a life sciences bundle.
European Union · Regulation (EU) 2022/2554

DORA

In progress · In force since Jan 2025 · Financial entities — banks, insurers, investment firms

The Digital Operational Resilience Act mandates ICT risk management, incident classification, third-party oversight, and resilience testing for EU financial entities. AI systems deployed in financial services fall squarely within DORA scope — particularly where AI drives automated decisions in trading, credit, or fraud detection.

ICT risk management (Art. 6–16)

Documented ICT risk framework with controls

Seirios's risk model maps ICT risks to DORA articles. Controls are generated and verified automatically.

Incident classification (Art. 17–23)

Automated detection and evidence trail

Every AI decision is logged on-chain. Compliance gaps are detected in CI before deployment — not after an incident.

Third-party oversight (Art. 28–44)

Supplier compliance requirements

Seirios's compliance model can be embedded in supplier contracts — requiring downstream vendors to meet the same standards.

🔧 In progress: The DORA regulation profile is under active development. Existing EU AI Act + GDPR coverage already addresses significant DORA overlap for AI systems. Contact us if DORA is a priority for your organisation — pilot customers shape the roadmap.
European Union · PSD3 / PSR (draft)

PSD3

In progress · Expected 2026 · Payment service providers, open banking

PSD3 and the Payment Services Regulation (PSR) extend and strengthen PSD2's requirements around strong customer authentication, fraud liability, and open banking APIs. Where AI is used for fraud scoring, transaction monitoring, or credit decisioning, PSD3 introduces new transparency and explainability obligations that align closely with EU AI Act Article 13.

🔧 In progress: PSD3 is still in the legislative process. Seirios's combined EU profile (EU AI Act + GDPR) covers the majority of PSD3's AI-related obligations today. A dedicated PSD3 profile will ship once the final text is adopted.
United States · NIST AI 100-1 + GenAI Profile

NIST AI RMF

Live · FedRAMP mandates incoming · Federal contractor requirement

The NIST AI Risk Management Framework is formally voluntary, but it is increasingly mandated across the US federal government and its contractors. The 2024 GenAI Profile extends the framework to generative AI systems. Federal contractors should treat it as a compliance requirement, not guidance.

GOVERN

Policies, processes, organisational accountability

Seirios's risk model formalises governance requirements as verifiable OCL invariants — not policy documents.

MAP

Risk identification and classification

L0 threat modeling maps AI risks to NIST categories with documented mitigations before any code is written.

MEASURE

Risk analysis and metrics

L3 CI reports generate compliance scores per release, providing the continuous measurement evidence NIST requires.

MANAGE

Risk response and treatment

L1 auto-generated controls implement NIST mitigations directly in code. Non-compliant deployments are blocked.

GenAI Profile

Generative AI-specific risks

Hallucination, data poisoning, prompt injection, and model inversion threats are mapped in the standard ontology.

Audit trail

Immutable evidence for federal review

On-chain audit log satisfies NIST's requirement for tamper-evident records of AI system decisions.

United States · NIST AI 600-1 — Generative AI Profile (2024)

NIST GenAI Profile

In progress · Extends NIST AI RMF for GenAI · Any organisation building or deploying generative AI / LLM systems

NIST AI 600-1 is the first authoritative US government framework to name hallucination, harmful content generation, and human-AI configuration risks as explicit compliance categories. Published in 2024 as a companion to NIST AI RMF, it defines twelve risk categories specific to generative AI — including data privacy, confabulation (hallucination), homogenisation, and obscene or abusive content generation. Every US-facing organisation building on LLMs will be asked about it in procurement.

Risk category Description Seirios layer
Confabulation Generation of false, fabricated, or hallucinated information presented as fact
L0 threat model
Data privacy Exposure of personal or sensitive data through model outputs or training data leakage
L1 — GDPR controls
Human-AI configuration Misconfiguration of human oversight and AI autonomy boundaries leading to unintended consequences
L0 + L2 IDE agent
Information security Prompt injection, jailbreaking, and adversarial inputs that manipulate model behaviour
L1 + L3 bypass detection
Harmful content Generation of content that is illegal, abusive, discriminatory, or harmful to individuals
L0 + L1 controls
Traceability Inability to audit or explain AI-generated outputs — required for accountability
L1 on-chain audit log
Extends your existing NIST AI RMF coverage: NIST AI 600-1 is a profile of AI RMF — adding it requires no new risk model, only additional threat categories mapped to existing controls. Organisations already using the NIST AI RMF profile get GenAI coverage with minimal additional configuration.
United States · FDA AI/ML-Based Software as a Medical Device (2024)

FDA AI/ML SaMD

Planned · Active enforcement — US market · AI used in diagnostics, clinical decision support, medical imaging, drug discovery

The FDA's AI/ML-Based SaMD Action Plan governs AI systems that qualify as Software as a Medical Device in the US market — including diagnostic algorithms, clinical decision support tools, medical imaging analysis, and drug discovery AI. The 2024 update introduced predetermined change control plans (PCCP) and real-world performance monitoring requirements. Any AI system touching a clinical outcome in the US requires FDA clearance or approval under this framework.

Requirement Description Seirios layer
Quality system 21 CFR Part 820 quality management system for software development lifecycle
L0–L3 combined
PCCP Predetermined change control plan — documented process for approving post-market algorithm changes
L3 CI gate
Real-world monitoring Post-market performance monitoring with defined metrics and re-evaluation triggers
L3 + audit trail
Transparency Labelling and disclosure of AI/ML model inputs, outputs, intended use, and known limitations
L0 + L1 controls
Cybersecurity Threat modelling and security controls for connected medical device software
L0 MITRE mapping
IEC 62304 is the technical standard that underpins FDA SaMD regulation. The FDA SaMD profile layers regulatory pathway requirements on top of IEC 62304's software lifecycle requirements. Both will be available together.
United States · 21 CFR Part 11 — Electronic Records and Signatures

21 CFR Part 11

Planned · FDA requirement — US clinical and manufacturing · Any AI generating electronic records in FDA-regulated clinical or manufacturing environments

21 CFR Part 11 governs electronic records and electronic signatures in FDA-regulated industries — pharmaceutical manufacturing, clinical trials, medical device production. Any AI system that generates, modifies, or transmits records in these environments must produce audit trails that meet Part 11 requirements: uniquely identifiable, time-stamped, sequentially recorded, and protected against falsification. Seirios's on-chain AuditRegistry is architecturally aligned with every one of these requirements.

Audit trails (§11.10(e))

Computer-generated, time-stamped audit trails

Part 11 requires audit trails that record the date and time of operator entries and actions that create, modify, or delete electronic records. Seirios's on-chain AuditRegistry satisfies this by construction.
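The tamper-evidence property can be illustrated with a hash-chained log. This is a minimal sketch of the general technique, not the actual AuditRegistry implementation — entry fields and naming are assumptions:

```python
# Hedged sketch of a tamper-evident audit trail in the spirit of §11.10(e):
# each entry is time-stamped, sequentially numbered, and chained to the
# previous entry's hash, so any retroactive edit breaks verification.
import hashlib
import json
import time

GENESIS = "0" * 64

def append_entry(trail, actor, action):
    """Append a time-stamped entry linked to the previous entry's hash."""
    entry = {
        "seq": len(trail),
        "ts": time.time(),
        "actor": actor,
        "action": action,
        "prev": trail[-1]["hash"] if trail else GENESIS,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)
    return entry

def verify(trail):
    """Recompute every hash and link; False if any record was altered."""
    for i, entry in enumerate(trail):
        expected_prev = trail[i - 1]["hash"] if i else GENESIS
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev"] != expected_prev:
            return False
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

trail = []
append_entry(trail, "model-v3", "credit_decision: approve")
append_entry(trail, "model-v3", "credit_decision: decline")
print(verify(trail))                               # True
trail[0]["action"] = "credit_decision: decline"    # falsify a record
print(verify(trail))                               # False
```

Anchoring the chain head on a blockchain (as the on-chain AuditRegistry does) removes the remaining attack: rewriting the entire chain from the tampered entry onward.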

System access controls (§11.10(d))

Limiting access to authorised users

Part 11 requires controls that limit access to authorised individuals only. L0 role-based access definitions and L1-generated access controls map directly to this requirement.

Record integrity (§11.10(a))

Accurate and reliable records throughout retention

Blockchain-backed on-chain storage ensures records cannot be altered after the fact — meeting Part 11's requirement for records that are accurate, complete, and protected against falsification.

21 CFR Part 11 and ICH E6 GCP overlap significantly. Organisations conducting FDA-regulated clinical trials need both. The Seirios life sciences bundle covers Part 11, GCP, FDA SaMD, EU MDR, and IEC 62304 together.
International · ICH E6(R3) Good Clinical Practice (2023)

ICH E6 GCP

Planned · Global standard — all clinical trials · AI used in clinical trial design, patient recruitment, data analysis, adverse event detection

ICH E6 Good Clinical Practice is the international standard governing clinical trials — adopted by regulatory authorities in the EU, US, Japan, and most global markets. The 2023 revision (R3) explicitly addresses digital systems and AI used in trial processes, requiring data integrity, audit trails, validation, and traceability for any AI-generated or AI-assisted data used in a regulatory submission. Seirios's on-chain audit log and L3 CI reports map directly to GCP's audit trail requirements.

Data integrity (GCP 5.5)

Complete, consistent, accurate trial records

Every AI decision logged on-chain with immutable timestamp. GCP requires that records be attributable, legible, contemporaneous, original, and accurate — ALCOA+ criteria that Seirios's audit trail satisfies by design.

System validation (GCP 5.5.3)

Documented evidence that AI systems perform as intended

GCP requires validation documentation for computerised systems. Seirios's per-release compliance reports and CI gate logs constitute system validation evidence.

Audit trail (GCP 8.3)

Tamper-evident record of all data changes

GCP mandates audit trails showing who did what and when. Seirios's on-chain AuditRegistry produces exactly this — immutable, timestamped, and exportable for submission.

MITRE Corporation · ATT&CK for Enterprise + LLM Threats

MITRE ATT&CK

Live · 27 threat vectors mapped

MITRE ATT&CK is the industry-standard taxonomy for adversarial tactics and techniques. Seirios's threat ontology maps ATT&CK vectors specifically to AI and LLM attack surfaces — prompt injection, model evasion, data poisoning, and supply chain compromise — and generates enforcement controls automatically from them.

Threat categories covered

Initial access, execution, persistence, exfiltration

All major ATT&CK tactic categories are mapped to AI-specific attack vectors in the standard threat ontology.

LLM-specific threats

Prompt injection, jailbreaking, model inversion

Threats from the OWASP Top 10 for LLMs and the ATLAS framework are included alongside classical ATT&CK vectors.

From threat to control

Each threat generates an enforcement control

L1 codegen produces a dedicated guard per ATT&CK vector. The guard cannot be bypassed without the CI pipeline detecting it.

Security + compliance in one model: MITRE threat coverage and EU AI Act / GDPR compliance are defined in the same risk model. No separate security toolchain required.
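The "one guard per threat vector" pattern can be sketched as a guard registry. Everything here is illustrative — the vector ids and detection heuristics are invented for the example, and real generated guards would be far more than string checks:

```python
# Illustrative sketch of per-vector guard generation: each mapped threat
# vector registers a guard predicate, and input is rejected if any guard
# fires. Vector ids and heuristics are assumptions, not the real ontology.

GUARDS = {}

def guard(vector_id):
    """Decorator that registers a guard predicate for one threat vector."""
    def register(fn):
        GUARDS[vector_id] = fn
        return fn
    return register

@guard("ATLAS-prompt-injection")
def no_override_phrases(text):
    return "ignore previous instructions" not in text.lower()

@guard("ATTACK-exfiltration")
def no_secret_markers(text):
    return "BEGIN PRIVATE KEY" not in text

def admit(text):
    """Accept input only if every registered guard passes; report failures."""
    failed = [v for v, g in GUARDS.items() if not g(text)]
    return (not failed, failed)

print(admit("Summarise this contract."))                    # (True, [])
print(admit("Please IGNORE previous instructions and ..."))
```

Because each guard is keyed by its vector id, the CI pipeline can check that every vector in the ontology has a registered guard — which is what makes a silently deleted guard detectable.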
MITRE Corporation · Adversarial Threat Landscape for AI Systems

MITRE ATLAS

In progress · MITRE ATT&CK for AI/ML systems · Any organisation developing or deploying machine learning systems

MITRE ATLAS (Adversarial Threat Landscape for AI Systems) is MITRE's companion framework to ATT&CK, built specifically for machine learning attack surfaces. Where ATT&CK covers enterprise IT threats broadly, ATLAS maps adversarial tactics and techniques targeting ML models directly — including reconnaissance against ML pipelines, model evasion, data poisoning, and ML supply chain compromise. It is the most comprehensive adversarial framework for AI systems available today.

Reconnaissance

ML pipeline and model discovery attacks

Attackers probing model architecture, training data, and inference APIs. Seirios maps these vectors to L0 threat classifications and L1 access controls.

ML attack staging

Capability development for model attacks

Building adversarial examples, shadow models, and poisoned datasets. L0 formal verification catches incomplete mitigations before code is written.

Model evasion

Crafted inputs that bypass model controls

The most common AI safety failure mode. L3 adversarial bypass detection catches evasion patterns in CI before deployment.

Exfiltration

Model inversion and membership inference

Extracting training data or model weights through inference. GDPR and ATLAS controls overlap directly — one configuration covers both.

Impact

Model corruption and availability attacks

Degrading model performance or poisoning production outputs. L3 CI monitors for control bypass patterns that could enable these attacks.

ATT&CK + ATLAS

One threat model, both frameworks

Seirios's threat ontology covers both ATT&CK and ATLAS vectors in a single risk model. No separate configuration required.

🔧 In progress: The ATLAS profile extends Seirios's existing MITRE ATT&CK coverage with ML-specific tactic and technique mappings. Contact us if ATLAS coverage is a priority — it is in active development.
OWASP Foundation · OWASP Top 10 for Large Language Model Applications

OWASP LLM Top 10

Live · Industry standard reference · Any organisation building or deploying LLM-based AI systems

The OWASP Top 10 for LLMs is the most widely adopted reference list for vulnerabilities specific to large language model applications. Published by the Open Worldwide Application Security Project, it defines the ten most critical risks — from prompt injection and insecure output handling to training data poisoning and model theft. Every engineering team building AI products recognises it. Seirios's threat ontology maps all ten categories to enforceable controls automatically.

Risk Description Seirios layer
LLM01 Prompt injection — manipulating LLM behaviour through crafted inputs, direct or indirect
L0 + L1 controls
LLM02 Insecure output handling — downstream components trusting LLM output without validation
L1 controls
LLM03 Training data poisoning — corrupting training data to introduce backdoors or biases
L0 threat model
LLM06 Sensitive information disclosure — LLM revealing confidential data from training or context
L1 — GDPR controls
LLM08 Excessive agency — LLM taking unintended actions with real-world consequences
L0 + L3 CI gate
LLM10 Model theft — extracting proprietary model behaviour through inference attacks
L0 threat model
Already covered by MITRE profile: Seirios's MITRE ATT&CK profile addresses all ten OWASP LLM categories. Making OWASP explicit gives development teams a familiar reference point — the underlying controls are the same.
OWASP Foundation · Top 10 for LLM Agents (2024)

OWASP Agentic Top 10

Live · AI Agent Security · Any organisation building or deploying autonomous AI agents

The OWASP Top 10 for LLM Agents extends the LLM Top 10 to cover the specific vulnerabilities of autonomous agent systems — where the risk is not just what the model says, but what the agent does. Agents with access to tools, APIs, filesystems, and credentials introduce an entirely new attack surface. Seirios addresses all ten categories through formally verified scope invariants, auto-generated tool guards, and CI-enforced permission boundaries.

Risk Description Seirios layer
Agent01 Unbounded agent scope — agent takes actions outside its intended operational boundary
L0 scope invariants
Agent02 Tool misuse — agent invokes tools in unintended ways or with unintended parameters
L1 tool guards
Agent03 Prompt injection via tool output — malicious instructions injected through tool responses
L1 + L3 bypass detection
Agent04 Credential exposure — API keys, secrets, and tokens accessible to agent without access controls
L0 + L1 credential guards
Agent05 Missing audit trail — agent actions not logged, making forensics and compliance impossible
L1 on-chain audit log
Agent06 Insufficient human oversight — agent operates autonomously in situations requiring human approval
L0 oversight invariants
Agent07 Multi-agent trust failure — agent blindly trusts instructions from another agent without verification
L0 + L1 trust controls
Agent08 Memory manipulation — persistent agent memory poisoned with adversarial content
L0 threat model
Agent09 Uncontrolled resource consumption — agent triggers runaway API calls or cloud resource usage
L1 rate guards + L3
Agent10 Irreversible actions without confirmation — agent executes destructive operations without human approval
L0 reversibility invariants
Different from OWASP LLM Top 10: LLM Top 10 covers model-level risks (what the model outputs). Agentic Top 10 covers action-level risks (what the agent does with that output). Both profiles are needed for any production agent deployment. Seirios addresses both through the same risk model.
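As a concrete illustration of the action-level risks above, here is a minimal sketch of an L1-style tool guard — every tool invocation is checked against a declared scope before it runs. The tool names, the scope schema, and the checks are assumptions made for the example:

```python
# Hypothetical tool guard for an agent: calls outside the declared scope
# (Agent01) or with out-of-bounds parameters (Agent02) are blocked before
# the tool executes. Scope schema and tool names are illustrative only.

SCOPE = {
    "send_email": {"max_recipients": 1},
    "query_db":   {"read_only": True},
}

class ToolCallBlocked(Exception):
    """Raised when a tool call violates the agent's declared scope."""

def guarded_call(tool, fn, **kwargs):
    if tool not in SCOPE:
        # Agent01 — unbounded agent scope
        raise ToolCallBlocked(f"{tool}: outside declared agent scope")
    if tool == "send_email" and len(kwargs.get("to", [])) > SCOPE[tool]["max_recipients"]:
        # Agent02 — tool misuse (parameter out of bounds)
        raise ToolCallBlocked(f"{tool}: too many recipients")
    return fn(**kwargs)

sent = guarded_call("send_email", lambda to: f"sent to {to[0]}", to=["ops@example.com"])
print(sent)  # sent to ops@example.com

try:
    guarded_call("delete_backups", lambda: None)
except ToolCallBlocked as e:
    print(e)  # delete_backups: outside declared agent scope
```

The design point is that the guard wraps the invocation path itself, so the agent cannot reach a tool except through the check — pairing each guarded call with an audit-log entry would also cover Agent05.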
International · ISO/IEC 42001:2023

ISO 42001

In progress · Certification standard · Increasingly required in supplier contracts & procurement

ISO/IEC 42001 is the first international AI management system standard — the ISO equivalent of a compliance certification for AI. Where the EU AI Act tells you what to achieve, ISO 42001 defines how to demonstrate you have an ongoing management system to achieve it. Enterprises and procurement teams are already requiring ISO 42001 certification in supplier contracts, making it one of the most commercially significant profiles on the roadmap.

Clause Requirement Seirios layer
Clause 4 Context of the organisation — understanding AI risks and stakeholder requirements
L0 — Risk model
Clause 6 Planning — AI risk assessment, treatment plan, and documented objectives
L0 + L1
Clause 8 Operation — implementing AI system lifecycle controls including design, development, and deployment
L1 + L2 controls
Clause 9 Performance evaluation — monitoring, measurement, audit, and management review
L3 — CI reports
Clause 10 Improvement — nonconformity, corrective action, and continual improvement
L3 + audit trail
Annex A AI-specific controls — responsible AI policy, data governance, human oversight, incident management
L0–L3 combined
Why this matters commercially

Procurement teams are already requiring it

Large enterprises and public sector buyers are adding ISO 42001 certification requirements to AI supplier contracts. Seirios's evidence chain maps directly to what an ISO 42001 audit requires.

EU AI Act overlap

One model, two certifications

ISO 42001 and EU AI Act share significant structural overlap. A Seirios risk model built for EU AI Act compliance satisfies the majority of ISO 42001 Clause 6 and 8 requirements without additional configuration.

Audit readiness

Evidence package built for ISO auditors

ISO 42001 certification requires documented evidence of your AI management system in operation. Seirios's per-release compliance reports and on-chain audit trail are exactly that evidence.

🔧 In progress: The ISO 42001 profile is under active development. Contact us if ISO 42001 certification is a near-term requirement — pilot customers get early access and direct input on the control mapping.
International · IEC 62304:2006+AMD1:2015 — Medical Device Software

IEC 62304

Planned · Mandatory for medical device software globally · AI in medical devices, diagnostic software, clinical decision support

IEC 62304 defines the software development lifecycle requirements for medical device software — including AI systems used for diagnosis, treatment recommendation, and clinical decision support. It is mandatory in the EU (via MDR/IVDR), the US (via FDA 510(k)/PMA), and most global markets. IEC 62304's post-market surveillance requirements are structurally identical to what Seirios's L3 CI agent produces on every release.

Software safety classification

Class A / B / C risk classification

IEC 62304's three safety classes map directly to Seirios's risk level classifications in the L0 threat model. HIGH-risk AI maps to Class C — the strictest requirements.

Software development process

Documented lifecycle with traceability

Every Seirios release produces a traceable evidence chain from risk definition (L0) to code controls (L1) to developer guidance (L2) to CI verification (L3) — exactly what IEC 62304 requires.

Post-market surveillance

Continuous monitoring after release

IEC 62304 requires ongoing monitoring of deployed software. Seirios's L3 CI agent and on-chain audit trail provide continuous, per-release evidence for post-market surveillance submissions.

📋 Planned: The IEC 62304 profile is on the roadmap for organisations building AI in medical devices or clinical decision support. The 4-layer Seirios stack maps almost directly to IEC 62304's software lifecycle documentation requirements.
International · ISO 26262:2018 + ISO 21448 (SOTIF)

ISO 26262 & SOTIF

Planned · Mandatory for automotive AI systems · ADAS, autonomous driving, in-vehicle AI decision systems

ISO 26262 is the functional safety standard for automotive electrical and electronic systems — mandatory for any AI system making decisions in a vehicle. ISO 21448 (SOTIF — Safety of the Intended Functionality) extends this to AI-specific failures: situations where the system performs as designed but the design itself is insufficient for safety. SOTIF is philosophically identical to Seirios's core premise — guard presence is not guard sufficiency.

Hazard analysis (ISO 26262)

Systematic identification of safety-critical risks

Seirios's L0 formal threat modeling performs exactly the structured hazard analysis ISO 26262 requires — with mathematical verification that the analysis is complete.

SOTIF — intended functionality

Proving the system does what it's supposed to

SOTIF asks whether a correctly functioning system is safe. Seirios's 3-check CI pipeline — presence, coverage, integrity — directly addresses SOTIF's requirement to prove behavioural safety, not just control presence.

Traceability

Requirements to implementation to verification

ISO 26262 mandates full traceability from safety requirements to implementation to test evidence. Seirios's 4-layer chain — model to code to CI to audit log — is that traceability chain.

SOTIF and Seirios share the same insight: a control that exists but is bypassed is not a control. ISO 26262 and SOTIF demand proof of sufficiency, not just presence — exactly what Seirios's verification engine provides.
📋 Planned: The ISO 26262 / SOTIF profile is on the roadmap for automotive AI deployments. Contact us if you are building AI systems for ADAS or autonomous vehicle applications.
Google · Secure AI Framework (SAIF) — 2023, updated 2024

Google SAIF

In progress · AI Agent Security · Enterprise AI deployments — increasingly cited in procurement

Google's Secure AI Framework (SAIF) is the most widely adopted enterprise AI security reference framework that isn't a government regulation. Originally published in 2023 and updated in 2024 to address agentic systems, SAIF defines six core principles for securing AI across the full stack — from the model itself to the deployment infrastructure. It is increasingly cited in enterprise procurement requirements and supplier assessments, making it commercially significant even though it carries no legal force.

Expand strong foundations

Secure-by-design AI infrastructure

SAIF requires security built into AI systems from design time, not added after deployment. Seirios's L0 formal verification and L1 auto-generated controls are exactly this — compliance and security enforced structurally, not through policy.

Extend detection and response

Continuous monitoring of AI behaviour

SAIF requires extending security detection to cover AI-specific threats. Seirios's L3 CI agent and on-chain audit log provide per-release compliance scoring and immutable behavioural records — the detection layer SAIF requires.

Automate defences

AI-powered security that scales

SAIF advocates for automated security controls that keep pace with AI development velocity. Seirios's MDD engine generates guards automatically from the verified model — zero manual security code, scales with every code change.

Harmonise platform controls

Consistent controls across the AI stack

SAIF requires consistent security controls across model, data, and infrastructure layers. Seirios's single risk model drives controls across all four layers — no inconsistency between what the compliance team defined and what engineering deployed.

Adapt controls for AI

AI-specific threat mitigations

SAIF recognises that classical security controls are insufficient for AI. Seirios's MITRE ATLAS and OWASP Agentic profiles add AI-specific threat categories on top of the classical security baseline.

Contextualise risks

Risk-proportionate controls

SAIF requires controls proportionate to the risk level of each AI deployment. Seirios's L0 risk classification drives exactly this — HIGH-risk systems get hard-blocking CI gates, lower-risk systems get scored reports.

🔧 In progress: A dedicated Google SAIF profile maps SAIF's six principles to Seirios's 4-layer stack. Organisations including SAIF compliance in supplier assessments can use this profile to generate auditor-ready evidence automatically.
Singapore · MAS TRM + FS-AISC / MAS FEAT

FS AI RMF

In progress · MAS TRM live today · Financial institutions in Singapore

The Financial Services AI Risk Management Framework combines MAS Technology Risk Management (TRM) guidelines with the FEAT (Fairness, Ethics, Accountability, Transparency) principles for AI in Singapore financial services. MAS TRM is live in Seirios today. The full FS AI RMF profile adds FEAT-specific controls for financial AI systems.

MAS TRM is live today. Existing MAS TRM coverage handles the majority of Singapore financial AI obligations. The expanded FS AI RMF profile adds FEAT principle enforcement and is on the near-term roadmap.
Australia · APRA CPS 230 Operational Risk Management

APRA CPS 230

Planned · In force since July 2025 · APRA-regulated banks, insurers, superannuation funds

APRA's CPS 230 Operational Risk Management standard came into force in July 2025 and imposes strict requirements on Australian financial institutions for operational risk management, business continuity, and third-party oversight. AI systems that drive automated decisions in banking, insurance, or superannuation fall directly under CPS 230's scope — particularly its requirements for service provider risk management and resilience testing.

Requirement Description Seirios layer
Risk management Documented framework for identifying, assessing, and managing operational risks including AI systems
L0 — Risk model
Change management Assessment and approval process for material changes to critical operations including AI deployments
L1 + L3 CI gate
Service provider oversight Due diligence and ongoing monitoring of material service providers including AI vendors
L0 controls
Incident management Detection, escalation, and notification of operational incidents affecting regulated activities
L3 + audit trail
Already in force: CPS 230 came into effect July 2025. APRA-regulated institutions deploying AI systems should treat this as an active compliance obligation. Seirios's MAS TRM profile (live today) provides strong coverage of CPS 230's operational risk requirements given the structural alignment between MAS and APRA frameworks.
📋
Planned: A dedicated APRA CPS 230 profile is on the near-term roadmap. Contact us if you are an APRA-regulated institution — this is a priority profile and pilot customers will shape the control mapping.
United Kingdom · DSIT AI Regulation Framework

UK AI Act

Planned Statutory framework expected 2025–26 UK-based AI developers and deployers

The UK is moving from its current sector-based approach (FCA, ICO, and CMA guidance) toward a statutory AI framework following the AI Safety Institute's work and the 2024 AI Safety Summit. The framework will impose obligations on developers and deployers of AI systems across high-risk sectors, with strong structural parallels to the EU AI Act. Any EU AI Act customer with UK operations will need coverage of both frameworks.

Current landscape

Sector-based guidance already in force

FCA guidance on AI in financial services, ICO guidance on AI and data protection, and CMA principles on AI foundation models all apply today. Seirios's EU AI Act + GDPR profiles cover the majority of these requirements.

Statutory framework

Cross-sector obligations coming

The UK government has committed to statutory AI regulation. When enacted, it will require risk management systems, transparency obligations, and audit capabilities — all directly addressed by Seirios's 4-layer stack.

EU AI Act overlap

One model covers both jurisdictions

The UK framework is deliberately designed to be compatible with the EU AI Act to avoid creating dual compliance burdens. A Seirios model built for the EU AI Act provides strong UK coverage with minimal additional configuration.

📋
Planned: The EU AI Act profile covers current UK sector guidance. A dedicated UK profile will be added once the statutory framework is enacted. Organisations with UK operations can use the EU AI Act profile as a robust interim measure.
United Arab Emirates · UAE AI Office + ADGM / DIFC Frameworks

UAE AI Regulation

Planned Most active AI regulatory jurisdiction in the Middle East Financial services, government AI, platform operators

The UAE is the most active AI regulatory jurisdiction in the Middle East and North Africa region. The UAE AI Office has published an AI ethics framework and principles; ADGM (Abu Dhabi Global Market) and DIFC (Dubai International Financial Centre) have published specific AI guidance for financial services. UAE financial institutions and technology companies are building AI compliance programmes now, ahead of anticipated mandatory requirements.

UAE AI Ethics Guidelines

Responsibility, transparency, human oversight

The UAE AI Office's principles map closely to EU AI Act Article 13 (transparency) and Article 14 (human oversight). Seirios's L0 risk model and L1 controls address both directly.

ADGM / DIFC guidance

Financial services AI requirements

Both free zones have published AI governance guidance for financial institutions covering model risk management, explainability, and audit trails — all areas Seirios addresses natively.

Regulatory direction

Mandatory framework expected

The UAE government has signalled intent to move from principles to mandatory requirements. Organisations building AI compliance infrastructure now will be well-positioned when obligations become enforceable.

📋
Planned: A dedicated UAE profile is on the community roadmap. The EU AI Act + NIST AI RMF profiles provide strong coverage of UAE requirements today given their structural alignment with UAE AI ethics principles.
United States · Colorado SB 205 + Texas HB 1709

Colorado & Texas AI Acts

Planned High-risk AI systems affecting Colorado/Texas residents

Colorado's SB 205 and Texas's HB 1709 are among the first US state-level AI regulations with enforceable requirements. Both focus on algorithmic discrimination in consequential decisions — employment, housing, credit, insurance, education — and require deployers to conduct impact assessments and implement risk management programmes that closely mirror EU AI Act Article 9.

📋
Planned: The EU AI Act profile covers approximately 70% of Colorado/Texas requirements today. A dedicated US state profile is on the community roadmap. Organisations operating under both the EU AI Act and US state laws can use the EU profile as an interim measure with minimal gaps.
Republic of Korea · AI Basic Act (2024)

Korea AI Act

Planned Enforcement 2026 High-impact AI systems in Korea

The Korean AI Basic Act (passed December 2024, enforcement 2026) establishes obligations for "high-impact AI" covering transparency, human oversight, safety testing, and incident reporting — closely paralleling EU AI Act HIGH-risk requirements. Korean financial institutions and platform operators deploying AI face mandatory compliance by 2026.

📋
Planned: The EU AI Act profile provides strong coverage of Korea AI Act requirements today given their structural similarity. A dedicated Korea profile targeting the specific HIGH-impact categories defined in Korean law is on the roadmap for 2025.
Japan · AI Guidelines for Business (2024)

Japan Voluntary AI Framework

Planned Voluntary — mandatory legislation expected 2025–26

Japan's Ministry of Economy, Trade and Industry (METI) published AI Guidelines for Business in 2024, establishing a voluntary risk management framework with strong alignment to NIST AI RMF and EU AI Act principles. Japan is expected to move toward mandatory requirements by 2025–26. Organisations operating in Japan can use Seirios's NIST profile as a strong baseline today.

📋
Planned: Japan's framework aligns closely with NIST AI RMF, which is live today. A dedicated Japan profile will be added to the community library once mandatory requirements are finalised.
Singapore · PDPA + AI Governance Framework v2

Singapore PDPA & AI Governance

Planned MAS TRM live today Organisations processing personal data in Singapore

Singapore's Personal Data Protection Act (PDPA) combined with the IMDA/PDPC AI Governance Framework v2 establishes data protection and AI accountability requirements for organisations operating in Singapore. The framework emphasises explainability, human oversight, and algorithmic impact assessments — all areas directly covered by Seirios's existing MAS TRM profile.

MAS TRM is live today and covers the majority of Singapore AI governance requirements. A combined PDPA + AI Governance Framework profile will be added as part of the Asia-Pacific expansion roadmap.

Your regulation isn't listed?
It probably will be.

The community regulation library grows with every new law. Contribute a regulation profile or contact us if your jurisdiction is a priority.

Request Demo → Request a regulation profile