Appendix A: Frameworks and Templates

This appendix provides practical templates for common AI governance artifacts. These templates should be adapted to organizational context; they represent starting points rather than final forms.

A.1 AI Impact Assessment Template

System Information

System name:
System owner:
Date of assessment:
Assessor(s):

Purpose and Use

Describe the purpose of the AI system. What problem does it solve? What decisions does it support or make?

Describe the intended users. Who will interact with the system? What training or expertise do they have?

Describe the affected individuals. Whose data is processed? Who receives outputs? Who is affected by decisions?

System Description

Describe how the system works at a level appropriate for governance review. What type of AI/ML approach is used? What are the inputs and outputs?

Describe the training data. What data was used? Where did it come from? How representative is it?

Describe the deployment environment. Where does the system run? How does it integrate with other systems?

Risk Assessment

Identify potential risks across categories:

Accuracy risks: What if the system makes errors? How severe would consequences be? How would errors be detected?

Fairness risks: Could the system disadvantage protected groups? Has fairness been tested? What disparities exist?

Transparency risks: Can affected individuals understand AI involvement and decisions? Can explanations be provided?

Privacy risks: What personal data is processed? What are the privacy implications? How is data protected?

Security risks: What security vulnerabilities exist? What would be the impact of security breaches?

Misuse risks: How could the system be misused? What safeguards prevent misuse?

For each identified risk, assess likelihood (low/medium/high) and severity (low/medium/high).
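Where teams want a consistent overall rating, the two ordinal scales can be combined mechanically. The sketch below, in Python, assumes a simple multiplicative mapping; the thresholds are illustrative rather than a standard and should be tuned to the organization's risk appetite.

    # Combine ordinal likelihood and severity into an overall rating.
    # The 3x3 mapping is illustrative; adjust thresholds to local policy.
    LEVELS = {"low": 1, "medium": 2, "high": 3}

    def risk_rating(likelihood: str, severity: str) -> str:
        score = LEVELS[likelihood] * LEVELS[severity]
        if score >= 6:
            return "high"    # e.g., medium/high or high/high
        if score >= 3:
            return "medium"  # e.g., low/high or medium/medium
        return "low"

    print(risk_rating("medium", "high"))  # -> high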

Mitigation Measures

For each significant risk, describe planned mitigation measures:

Risk: [Description]
Mitigation: [Description]
Residual risk after mitigation: [Low/Medium/High]
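If mitigation entries are tracked outside the document, a small structured record keeps the template fields queryable. A minimal sketch, with field names that simply mirror the template and carry no fixed schema:

    from dataclasses import dataclass

    @dataclass
    class MitigationEntry:
        risk: str            # description of the identified risk
        mitigation: str      # planned mitigation measure
        residual_risk: str   # "low" | "medium" | "high" after mitigation

    entry = MitigationEntry(
        risk="Higher error rate for non-native speakers",
        mitigation="Augment training data; route low-confidence cases to human review",
        residual_risk="medium",
    )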

Human Oversight

Describe the human oversight approach. What human review occurs? When can humans override the system? How is oversight meaningful rather than perfunctory?

Monitoring Plan

Describe how the system will be monitored after deployment. What metrics will be tracked? How frequently? Who reviews monitoring output?
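As a concrete illustration of a monitoring check, the sketch below flags drift of one tracked metric against its release baseline; the metric, baseline value, and tolerance are placeholders.

    # Flag when a tracked metric falls too far below its release baseline.
    BASELINE_ACCURACY = 0.91   # recorded at release approval
    TOLERANCE = 0.03           # acceptable degradation before review

    def check_drift(current_accuracy: float) -> None:
        if BASELINE_ACCURACY - current_accuracy > TOLERANCE:
            print(f"ALERT: accuracy {current_accuracy:.2f} is more than "
                  f"{TOLERANCE} below baseline {BASELINE_ACCURACY}")

    check_drift(0.85)  # triggers the alert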

Conclusion and Approval

Overall risk assessment: [Low/Medium/High]
Recommendation: [Proceed/Proceed with conditions/Do not proceed]
Conditions (if applicable):

Approval:
Approver name:
Approver title:
Date:

A.2 Model Card Template

Model Details

Model name:
Version:
Developer:
Date:
Model type:
Training approach:
Parameters/architecture:
License:

Intended Use

Primary intended uses:

Primary intended users:

Out-of-scope uses (uses for which the model is not appropriate):

Training Data

Description of training data:

Data sources:

Data collection process:

Known limitations of training data:

Evaluation Data

Description of evaluation data:

How evaluation data differs from training data:

Performance Metrics

Overall performance:
[Metric]: [Value]
[Metric]: [Value]

Disaggregated performance by relevant groups:
[Group]: [Metric]: [Value]
[Group]: [Metric]: [Value]
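Computing disaggregated metrics is mechanical once per-example predictions carry a group column. A minimal pandas sketch, with illustrative column names and accuracy standing in for whatever metric the card reports:

    import pandas as pd

    # Per-example evaluation results; "group" is whatever attribute
    # the fairness analysis disaggregates on.
    df = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B"],
        "label": [1, 0, 1, 1, 0],
        "pred":  [1, 0, 0, 1, 1],
    })

    correct = df["label"] == df["pred"]
    print(f"Overall accuracy: {correct.mean():.2f}")
    print(correct.groupby(df["group"]).mean())  # accuracy per group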

Fairness Considerations

Fairness criteria applied:

Fairness testing performed:

Known disparities:

Limitations and Biases

Known limitations:

Potential biases:

Situations where the model may perform poorly:

Ethical Considerations

Potential risks from model use:

Populations that may be affected:

Use cases that raise ethical concerns:

Recommendations

For users of this model:

For deployers of this model:
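Teams that version models often keep a machine-readable companion to the prose card so documentation ships with each release. A sketch with an assumed, not standardized, schema; every field name and value below is illustrative:

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ModelCard:
        model_name: str
        version: str
        intended_uses: list = field(default_factory=list)
        out_of_scope_uses: list = field(default_factory=list)
        metrics: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)

    card = ModelCard(
        model_name="claims-triage",  # hypothetical system
        version="1.3.0",
        intended_uses=["Prioritize incoming claims for reviewer attention"],
        out_of_scope_uses=["Final claim denial without human review"],
        metrics={"accuracy_overall": 0.91, "accuracy_group_b": 0.86},
        known_limitations=["Sparse training data for claims filed in Spanish"],
    )
    print(json.dumps(asdict(card), indent=2))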

A.3 Vendor AI Assessment Checklist

Vendor Information

Vendor name:
Contact:
System/service being evaluated:
Date of assessment:

Vendor Governance

Does the vendor have documented AI governance policies? [Yes/No/Unknown]
Has the vendor committed to responsible AI principles? [Yes/No/Unknown]
Does the vendor have relevant certifications (e.g., ISO 42001)? [Yes/No/Unknown]
What is the vendor’s track record with AI incidents? [Description]

System Information

Has the vendor provided an adequate description of system capabilities? [Yes/No]
Has the vendor provided information about training data? [Yes/No]
Has the vendor provided performance metrics? [Yes/No]
Has the vendor provided disaggregated performance by demographic groups? [Yes/No]
Has the vendor disclosed known limitations? [Yes/No]

Compliance

Has the vendor addressed EU AI Act requirements (if applicable)? [Yes/No/N/A]
Has the vendor addressed other applicable regulatory requirements? [Yes/No]
Will the vendor support the organization’s compliance obligations? [Yes/No]

Contractual Provisions

Does the contract provide adequate information rights? [Yes/No]
Does the contract provide audit rights? [Yes/No]
Does the contract address liability allocation? [Yes/No]
Does the contract address incident notification? [Yes/No]
Does the contract address data handling and privacy? [Yes/No]
Does the contract address updates and change management? [Yes/No]
Does the contract address termination and data return? [Yes/No]
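Checklist answers can be tallied to surface open items before the risk assessment below. The keys and the treatment of every non-"yes" answer in this sketch are assumptions; in practice some items, contractual and compliance gaps in particular, are blockers rather than points to score.

    # Surface every item not answered "yes"; illustrative keys only.
    answers = {
        "documented_governance_policies": "yes",
        "relevant_certifications": "unknown",
        "disaggregated_performance_provided": "no",
        "audit_rights_in_contract": "yes",
        "incident_notification_in_contract": "yes",
    }

    open_items = [item for item, a in answers.items() if a != "yes"]
    print(f"{len(open_items)} open item(s): {open_items}")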

Risk Assessment

Overall vendor risk: [Low/Medium/High]
Technical/performance risk: [Low/Medium/High]
Compliance risk: [Low/Medium/High]
Operational risk: [Low/Medium/High]
Reputational risk: [Low/Medium/High]

Recommendation

[Proceed/Proceed with conditions/Do not proceed]
Conditions (if applicable):

A.4 AI Incident Report Template

Incident Information

Incident ID:
Date/time detected:
Date/time of incident (if different):
Reporter:
System involved:

Incident Description

What happened:

How the incident was detected:

Who was affected:

Estimated impact:

Severity Assessment

Severity: [Critical/High/Medium/Low]

Criteria for severity:
- Critical: Significant harm to individuals, major regulatory implications, widespread impact
- High: Moderate harm to individuals, regulatory implications, significant impact
- Medium: Minor harm, limited impact
- Low: No harm, minimal impact
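The rubric can be made explicit as a lookup for triage tooling, though actual severity calls require judgment. A rough sketch, assuming two simplified inputs (harm level and impact scope) that compress the criteria above:

    def severity(harm: str, scope: str) -> str:
        # harm:  "none" | "minor" | "moderate" | "significant"
        # scope: "minimal" | "limited" | "significant" | "widespread"
        if harm == "significant" or scope == "widespread":
            return "critical"
        if harm == "moderate" or scope == "significant":
            return "high"
        if harm == "minor":
            return "medium"
        return "low"

    print(severity("moderate", "limited"))  # -> high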

Immediate Response

Actions taken:

System status (active/suspended/modified):

Stakeholders notified:

Investigation

Root cause (if determined):

Contributing factors:

Evidence collected:

Remediation

Short-term remediation:

Long-term remediation:

Timeline for remediation:

Regulatory Reporting

Reporting required? [Yes/No]
If yes, to which authorities:
Reporting deadline:
Reporting status:

Lessons Learned

What governance processes worked well:

What governance processes need improvement:

Recommendations for preventing recurrence:
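Capturing reports as structured records makes the lessons-learned step searchable across incidents. A sketch whose field names simply mirror the template and are not a fixed schema:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class IncidentRecord:
        incident_id: str
        system: str
        detected_at: str                  # ISO 8601 timestamp
        severity: str                     # critical | high | medium | low
        description: str
        root_cause: Optional[str] = None  # filled in after investigation
        regulatory_report_required: bool = False

    rec = IncidentRecord(
        incident_id="INC-2024-017",       # hypothetical example
        system="claims-triage",
        detected_at="2024-06-03T14:20:00Z",
        severity="high",
        description="Spike in false rejections after a model update",
        regulatory_report_required=True,
    )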

A.5 RACI Matrix for AI Governance

This matrix shows typical responsibility allocation for AI governance activities. R = Responsible (does the work), A = Accountable (makes decisions), C = Consulted, I = Informed. Adapt to your organization’s structure.

Use Case Assessment
- Executive: A
- AI Governance: R
- Legal: C
- Technical: C
- Business Unit: R

Impact Assessment
- Executive: I
- AI Governance: A/R
- Legal: C
- Technical: C
- Risk: C
- Business Unit: C

Legal Requirements Analysis
- Executive: I
- AI Governance: C
- Legal: A/R
- Technical: I
- Business Unit: I

System Design
- Executive: I
- AI Governance: C
- Legal: C
- Technical: A/R
- Business Unit: C

Data Governance
- Executive: I
- AI Governance: C
- Data Team: A/R
- Technical: C
- Legal: C

Testing and Validation
- Executive: I
- AI Governance: A
- Technical: R
- Legal: C
- Risk: C

Release Approval
- Executive: A
- AI Governance: R
- Legal: C
- Technical: C
- Risk: C

Operational Monitoring
- Executive: I
- AI Governance: A
- Technical: R
- Operations: R

Incident Response
- Executive: I/A (for serious incidents)
- AI Governance: A
- Technical: R
- Legal: C
- Risk: C
- Communications: C
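Encoding the matrix as data allows simple consistency checks, such as confirming that every activity names an Accountable role. A sketch that mirrors two rows of the matrix above:

    RACI = {
        "Use Case Assessment": {
            "Executive": "A", "AI Governance": "R",
            "Legal": "C", "Technical": "C", "Business Unit": "R",
        },
        "Release Approval": {
            "Executive": "A", "AI Governance": "R",
            "Legal": "C", "Technical": "C", "Risk": "C",
        },
        # remaining activities omitted for brevity
    }

    for activity, roles in RACI.items():
        accountable = [r for r, code in roles.items() if "A" in code]
        assert accountable, f"{activity}: no Accountable role assigned"
        print(f"{activity}: accountable = {accountable}")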