Model/System Card - Governed Triage
Scope, known limitations, risk boundaries, and approval requirements.
AI Governance
This page describes how governance is executed in operations: policy ownership, measurement plans, system cards, evaluation routines, and change controls.
| AIMS section | Operational posture | Evidence source |
|---|---|---|
| Leadership | Governance ownership and risk accountability are assigned before deployment. | /ai-charter and /governance-model |
| Planning | Risk and control plans are defined with measurable targets and review cadence. | Funding package measurement plans and risk register artifacts |
| Support | Operational runbooks, role scopes, and evidence repositories are versioned. | /artifacts and reviewer packs |
| Operation | Policy-gated execution with human approvals for high-risk actions. | /security/human-in-the-loop |
| Performance evaluation | Golden set checks, drift monitoring, and trace coverage reporting are continuous. | /security/logging-audit and package eval plans |
| Improvement | Control failures trigger corrective actions, drills, and policy updates. | Incident response templates and governance review records |
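The "Operation" row above describes policy-gated execution with human approvals for high-risk actions. A minimal sketch of such a gate is below; the action names, `Decision` type, and approval field are illustrative assumptions, not part of any specific product API.

```python
from dataclasses import dataclass

# Illustrative set of actions treated as high-risk (assumption for this sketch).
HIGH_RISK_ACTIONS = {"delete_records", "external_transfer", "policy_override"}

@dataclass
class Decision:
    allowed: bool
    needs_human_approval: bool
    reason: str

def gate(action: str, approved_by: str = "") -> Decision:
    """Policy gate: high-risk actions execute only with a recorded human approver."""
    if action not in HIGH_RISK_ACTIONS:
        return Decision(True, False, "low-risk: auto-approved")
    if approved_by:
        return Decision(True, True, f"high-risk: approved by {approved_by}")
    return Decision(False, True, "high-risk: pending human approval")

print(gate("read_summary").allowed)                   # True  (low-risk)
print(gate("external_transfer").allowed)              # False (awaiting approval)
print(gate("external_transfer", "j.doe").allowed)     # True  (approver recorded)
```

Recording the approver in the decision itself is what makes the approval outcome exportable as evidence later.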
- Scope, known limitations, risk boundaries, and approval requirements.
- Retrieval constraints, source-citation requirements, and drift controls.
- Residency assumptions, identity controls, and emergency revocation behavior.

Benchmarked prompts and scenarios validate task quality and policy compliance before rollout.
Runtime variance in quality and confidence is monitored with alert thresholds and weekly review.
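One way to realize the runtime monitoring described above is a rolling-window check against a golden-set baseline. This is a sketch only: the baseline, tolerance, and window size are assumed values, not prescribed thresholds.

```python
from collections import deque

class DriftMonitor:
    """Alerts when the rolling mean quality score drops below baseline - tolerance.

    Baseline, tolerance, and window size are illustrative assumptions.
    """
    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 50):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> bool:
        """Record one runtime quality score; return True if an alert should fire."""
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        return mean < self.baseline - self.tolerance

monitor = DriftMonitor(baseline=0.90)
for score in [0.91, 0.89, 0.92]:
    alert = monitor.record(score)
print(alert)  # False: scores stay within tolerance of the baseline
```

A weekly review would then inspect fired alerts alongside the trace coverage report rather than acting on single-sample noise.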
Reviewer exports include request IDs, agent identities, tool scopes, and approval outcomes.
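The exported fields listed above can be serialized as one JSON record per request. The field names and shape below are an assumption for illustration, not a fixed export schema.

```python
import json
from datetime import datetime, timezone

def export_record(request_id: str, agent_id: str,
                  tool_scopes: list, approval_outcome: str) -> str:
    """Serialize one reviewer-export row (hypothetical field names)."""
    return json.dumps({
        "request_id": request_id,
        "agent_identity": agent_id,
        "tool_scopes": tool_scopes,
        "approval_outcome": approval_outcome,
        "exported_at": datetime.now(timezone.utc).isoformat(),
    })

line = export_record("req-123", "triage-agent-1", ["tickets:read"], "approved")
print(line)
```

One self-contained record per line keeps exports greppable and easy to load into a reviewer pack.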
Failing controls trigger rollback, remediation owners, and a dated improvement plan.
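The corrective-action flow above — rollback, a named remediation owner, a dated plan — can be sketched as a single handler; the control ID, owner name, and 14-day default are hypothetical.

```python
from datetime import date, timedelta

def handle_control_failure(control_id: str, owner: str, days_to_fix: int = 14) -> dict:
    """On a failing control: record a rollback, assign an owner, and date the plan.

    All names and the remediation window are illustrative assumptions.
    """
    return {
        "control": control_id,
        "action": "rollback",
        "remediation_owner": owner,
        "plan_due": (date.today() + timedelta(days=days_to_fix)).isoformat(),
    }

plan = handle_control_failure("golden-set-accuracy", "ml-platform-team")
print(plan["action"], plan["plan_due"])
```

Dating the plan at creation time is what makes the improvement commitment auditable in later governance reviews.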