Standard ID: GCAIS-STD-001
Version: v1.0
Published: 2026-01-15
Status: Active
Review Due: 2028-01-15 (24-month cycle)
Supersedes: Initial publication

1. Scope

This standard establishes minimum safety requirements applicable to AI systems operating in any production environment where outputs may influence human decisions or physical actions. It applies to systems that generate recommendations, classifications, predictions, or automated outputs used in operational contexts.

The requirements set out in this standard apply to the organization responsible for the deployment and operation of the AI system. Requirements do not extend to underlying model providers where the deploying organization has no visibility into or control over model internals, except where such visibility is necessary to meet a specific requirement.

Systems used solely in research or testing environments with no production deployment are excluded from the scope of this standard.

2. Definitions

AI System
A machine-based system that, for a given set of objectives, makes predictions, recommendations, decisions, or generates content that influences real or virtual environments.
Production Environment
Any deployment context in which an AI system's outputs are used to inform or automate decisions affecting real persons, processes, or physical systems.
Failure Mode
A documented condition under which an AI system produces incorrect, harmful, or unpredictable outputs, including edge cases and distributional shift scenarios.
Output Boundary
The defined limits within which an AI system's outputs are considered valid and reliable for the intended application.
Incident
Any occurrence in which an AI system produces outputs that cause or contribute to harm, or that deviate materially from intended behavior in a production environment.

3. Requirements

REQ-001-1 (Risk Classification): The organization shall maintain a documented risk classification for each AI system in scope, identifying the potential severity and likelihood of harm associated with system failures or errors. Risk classifications shall be reviewed following material changes to the system or its deployment context.
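The standard does not prescribe a classification scheme. As an informative illustration only, a classification record could pair ordinal severity and likelihood scales; all names and levels below are hypothetical, not part of the standard:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical ordinal scales; REQ-001-1 does not mandate these levels.
SEVERITY = ["negligible", "minor", "major", "critical"]
LIKELIHOOD = ["rare", "unlikely", "possible", "likely"]

@dataclass
class RiskClassification:
    system_id: str
    severity: str        # one of SEVERITY
    likelihood: str      # one of LIKELIHOOD
    classified_on: date  # should postdate the last material change

    def risk_score(self) -> int:
        """Simple ordinal product; a higher score means higher risk."""
        return (SEVERITY.index(self.severity) + 1) * \
               (LIKELIHOOD.index(self.likelihood) + 1)

rc = RiskClassification("sys-01", "major", "possible", date(2026, 1, 15))
print(rc.risk_score())  # 3 * 3 = 9
```

A dated record like this also gives assessors a direct artifact to compare against the system's change history.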
REQ-001-2 (Failure Mode Documentation): The organization shall maintain documentation of known failure modes for each AI system in scope, including the conditions under which failures are likely to occur and the expected impact of each failure mode. Documentation shall be updated within 30 days of any identified new failure mode.
REQ-001-3 (Output Boundary Testing): The organization shall conduct and document testing of each AI system against defined output boundaries prior to production deployment and following any material change to the system. Testing records shall be retained for a minimum of 24 months.
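At its simplest, a boundary test checks each output against the declared limits and records a pass/fail result with the failing cases. A minimal sketch, assuming a scalar output with hypothetical lower and upper limits:

```python
def within_boundary(output: float, lower: float, upper: float) -> bool:
    """Check one output against a declared output boundary (REQ-001-3)."""
    return lower <= output <= upper

def boundary_test(outputs, lower, upper):
    """Return (passed, failures) for a batch of outputs.

    The pass/fail criterion is explicit: the test passes only when
    every output falls inside the declared boundary.
    """
    failures = [o for o in outputs if not within_boundary(o, lower, upper)]
    return (len(failures) == 0, failures)

passed, failures = boundary_test([0.2, 0.7, 1.4], lower=0.0, upper=1.0)
print(passed, failures)  # False [1.4]
```

Retaining the inputs, limits, and failing outputs alongside the dated result satisfies the record-keeping intent of the requirement.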
REQ-001-4 (Emergency Override Mechanism): Each AI system in scope shall be equipped with a documented and tested mechanism by which authorized personnel can suspend or override system outputs within a defined response window. The override mechanism shall be tested at intervals not exceeding 12 months.
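One common realization is a gate in the serving path that authorized personnel can trip, so that no output is released while the system is suspended. This sketch is one possible shape, not a prescribed design; the class and method names are hypothetical:

```python
import threading

class OverrideSwitch:
    """Minimal suspend/resume gate in the spirit of REQ-001-4.

    Authorized personnel call suspend(); the serving path routes every
    output through gate(), which withholds it while suspended.
    """
    def __init__(self):
        self._suspended = threading.Event()

    def suspend(self):
        self._suspended.set()

    def resume(self):
        self._suspended.clear()

    def gate(self, output):
        # Withhold the output while the system is suspended.
        return None if self._suspended.is_set() else output

switch = OverrideSwitch()
print(switch.gate("recommendation-A"))  # recommendation-A
switch.suspend()
print(switch.gate("recommendation-B"))  # None
```

The 12-month test interval can then be exercised by periodically tripping the switch in a controlled window and retaining the dated test record.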
REQ-001-5 (Incident Logging and Reporting): The organization shall maintain a log of all incidents meeting the definition in Section 2. Incident logs shall be retained for a minimum of 36 months. High-severity incidents shall be reviewed by responsible personnel within 5 business days of identification.
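The 5-business-day review window counts working days, not calendar days. A minimal sketch of the deadline calculation, skipping weekends (public holidays are not modeled here and would need local calendars in practice):

```python
from datetime import date, timedelta

def review_deadline(identified: date, business_days: int = 5) -> date:
    """Latest review date for a high-severity incident (REQ-001-5).

    Counts forward from the identification date, skipping Saturdays
    and Sundays.
    """
    d, remaining = identified, business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday..Friday
            remaining -= 1
    return d

# Incident identified on Friday 2026-01-16: deadline the following Friday.
print(review_deadline(date(2026, 1, 16)))  # 2026-01-23
```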

4. Assessment Criteria

Assessment against this standard is conducted through the following methods:

  • Documentation review - Review of risk classifications, failure mode documentation, testing records, and incident logs
  • Technical testing - Independent testing of AI system outputs against declared output boundaries
  • Audit trail review - Review of override mechanism test records and incident response records
  • Personnel interviews - Interviews with personnel responsible for safety compliance (at Accredited tier)

Assessors will evaluate whether documented requirements are implemented in practice, not solely whether documentation exists. Discrepancies between documentation and observed practice will be recorded as non-conformities.

5. Compliance Indicators

The following indicators are used to determine compliance with each requirement:

REQ-001-1
  Compliant: Current, dated risk classification document exists for each system.
  Non-compliant: No classification, or classification predates the last material change.

REQ-001-2
  Compliant: Failure mode register maintained and updated within required intervals.
  Non-compliant: No register, or register not updated following identified failures.

REQ-001-3
  Compliant: Testing records dated within required intervals, with defined pass/fail criteria.
  Non-compliant: No testing records, or records lack defined output boundary criteria.

REQ-001-4
  Compliant: Override mechanism documented and tested, with test records retained.
  Non-compliant: No documented override mechanism, or test records exceed the 12-month threshold.

REQ-001-5
  Compliant: Incident log maintained; high-severity incidents reviewed within the required window.
  Non-compliant: No incident log, or review records absent for high-severity incidents.

6. Version History

v1.0 (2026-01-15): Initial publication. Adopted by Standards Board resolution 2026-01-12.