## Executive Summary
AI governance frameworks tell organizations what controls should exist. They generally do not specify how to produce independent, tamper-evident proof that those controls actually executed — on a given interaction, under a given configuration, at a given time. That gap leaves regulators with documentation instead of evidence, auditors with narratives instead of cryptographic receipts, and incident responders reconstructing events from operator-controlled logs.
Where existing standards define objectives and management processes, OVERT operates one layer beneath: at the runtime boundary where AI systems actually process requests, execute tool calls, and produce outputs. It defines what a conformant runtime control system must prove, what an independent attestation provider must verify, and what a qualified assessor must examine when a conformance claim is made.
The standard applies to any AI system deployed in a setting where governance claims must be verifiable — healthcare, financial services, insurance, employment, federal procurement, and autonomous agentic systems where AI agents execute tool calls and make decisions without step-by-step human oversight.
## Design Principles
- **Attestation by construction** — Controls produce cryptographic proof as a byproduct of execution, not as a separate documentation exercise.
- **Privacy by architecture** — Protected content never leaves the operator’s environment. Only hashes and signed receipts cross trust boundaries.
- **Independence by structure** — The entity attesting to governance is structurally independent of the entity being governed. Self-attestation is not compliant.
- **Statistical rigor by default** — Safety claims carry confidence intervals, sample sizes, and auditor-reproducible methodologies. Unquantified assertions are not attestation artifacts.
- **Open by design** — Royalty-free patent covenant for all conformant implementations. Multiple protocol profiles are permitted.
- **Security-supporting evidence** — The attestation architecture occupies the same inline position that security detection requires, producing security-supporting evidence within the attested scope.
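A minimal sketch of the first two principles, under stated assumptions: the field names (`control_id`, `content_sha256`) are hypothetical, and an HMAC stands in for whatever signature scheme a real attestation provider would use. Protected content is hashed locally; only the digest and a signed receipt would cross the trust boundary.

```python
import hashlib
import hmac
import json
import time

# Stand-in for an attestation provider's signing key (illustrative only).
ATTESTOR_KEY = b"demo-key"

def make_receipt(protected_content: bytes, control_id: str) -> dict:
    # Hash the content locally; the raw bytes never leave this environment.
    digest = hashlib.sha256(protected_content).hexdigest()
    payload = {
        "control_id": control_id,
        "content_sha256": digest,
        "timestamp": int(time.time()),  # simplified temporal binding
    }
    body = json.dumps(payload, sort_keys=True).encode()
    # HMAC stands in for an asymmetric signature in this sketch.
    payload["signature"] = hmac.new(ATTESTOR_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_receipt(receipt: dict) -> bool:
    body = json.dumps(
        {k: v for k, v in receipt.items() if k != "signature"},
        sort_keys=True,
    ).encode()
    expected = hmac.new(ATTESTOR_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["signature"], expected)

receipt = make_receipt(b"patient record ...", "PROTECT-01")
assert verify_receipt(receipt)
# The receipt carries a hash and signature, never the content itself.
assert b"patient record" not in json.dumps(receipt).encode()
```

Tampering with any field (the hash, the timestamp, the control identifier) invalidates the signature, which is what makes the receipt tamper-evident rather than merely logged.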
## What OVERT Covers
| Part | Description |
|---|---|
| Foundations | Attestation assurance levels (AAL-1 through AAL-4), trust architecture, threat model, cross-boundary attestation protocol |
| Governance Domains | Six domains — Govern, Identify, Protect, Attest, Measure, Respond — each with normative requirements for evidence generation |
| Agentic AI Controls | Tool-call governance, MCP server trust, multi-agent system controls, capability-based access, human-in-the-loop attestation, persistent state governance, delegation chains, behavioral drift detection |
| Architecture | Non-egress attestation, temporal binding, statistical safety measurement, third-party auditability, legal preservation |
| Conformance | Maturity levels, scope designators, protocol profile registry, independent attestation providers (IAPs), qualified assessor program |
| Crosswalks | NIST AI RMF, ISO 42001, EU AI Act, OWASP, NIST SP 800-53, FedRAMP, OMB M-25-21/M-25-22, DASF v3.0 |
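The "statistical safety measurement" item above implies quantified, reproducible claims rather than bare assertions. As one illustration (not a method the standard mandates), a Wilson score interval gives an auditor-reproducible confidence interval for an observed violation rate:

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a violation rate: the kind of
    safety claim that states its sample size and confidence level."""
    if n == 0:
        raise ValueError("sample size must be positive")
    p = failures / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - margin), min(1.0, center + margin)

# e.g. 3 policy violations observed in 10,000 sampled tool calls
lo, hi = wilson_interval(3, 10_000)
```

An auditor given the same sample and method reproduces the same interval, which is what makes the claim an attestation artifact rather than a narrative.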
## Key Resources
- Standard (PDF) — OVERT 1.0 specification
- Standard (Markdown) — source text
- IPR Policy — patent covenant, disclosures & licensing
- Review Feedback — [email protected]
## Machine-Readable Feed
- latest.json — current version metadata
- feed.json — polling feed with all versions
- versions.json — complete version index
- latest.md — canonical Markdown for the latest release
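A consumer of the polling feed might check `latest.json` for a new release roughly as follows. This is a sketch only: the actual schema of `latest.json` is not specified in this document, so the `version` field and its dotted-number format are assumptions.

```python
import json

# Illustrative payload; real field names in latest.json may differ.
sample_latest = json.loads('{"version": "1.0", "markdown_url": "latest.md"}')

def is_newer(remote_version: str, local_version: str) -> bool:
    # Compare dotted version strings numerically, so "1.10" > "1.9".
    parse = lambda v: [int(part) for part in v.split(".")]
    return parse(remote_version) > parse(local_version)

if is_newer(sample_latest["version"], "0.9"):
    print(f"new release available: {sample_latest['version']}")
```

Numeric comparison of the split components avoids the classic pitfall of lexicographic string comparison, where `"1.10"` would sort below `"1.9"`.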