10 Practice Scenarios
These scenarios present realistic AI governance situations for discussion and analysis. They may be used for exam preparation, team training, or governance process exercises.
10.1 Scenario 1: The Rushed Deployment
Situation
A retail company has developed an AI recommendation system intended to personalize product suggestions. The development team completed technical testing and the system meets accuracy targets. Marketing wants to deploy the system in time for the holiday shopping season, which begins in three weeks.
The governance team has not completed its impact assessment. Initial review identified potential concerns about whether recommendations might systematically differ across demographic groups, but disaggregated fairness testing has not been performed. The legal team is still analyzing whether certain personalization approaches might raise consumer protection concerns.
Marketing argues that delay means missing the most important shopping season of the year. The technical team says the system can always be updated if issues emerge. The governance team is concerned about deploying without completed review.
Questions
- What are the key governance considerations in this situation?
- What additional information would help inform the decision?
- What options exist beyond simply approving or rejecting deployment?
- How should the organization balance business urgency against governance thoroughness?
- What governance process improvements might prevent this situation in future projects?
10.2 Scenario 2: The Fairness Tradeoff
Situation
A financial services company is testing an AI model for credit decisions. Overall accuracy is excellent, exceeding that of the existing manual process.
Fairness testing reveals that the model has significantly higher false negative rates for applicants over age 60, meaning creditworthy older applicants are more likely to be incorrectly declined. The technical team has explored mitigation approaches, but each degrades overall accuracy.
The team has identified three options:
- Option A: Deploy the current model with higher accuracy but known age-related disparities.
- Option B: Deploy a modified model with reduced disparities but lower overall accuracy.
- Option C: Do not deploy and continue using the existing manual process.
Questions
- What legal requirements might apply to this situation?
- How should the organization think about the tradeoff between accuracy and fairness?
- What stakeholders should be involved in this decision?
- What documentation should accompany whatever decision is made?
- If Option A is chosen, what monitoring and mitigation measures would be appropriate?
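The disaggregated fairness testing this scenario turns on can be sketched in a few lines. This is an illustrative sketch only: the record fields, the 60-year age threshold, and the disparity threshold are assumptions for the example, not a prescribed methodology.

```python
# Hypothetical sketch: computing false negative rates (FNR) per age group
# for a credit model. Field names ("creditworthy", "approved", "age") and
# the over/under-60 split are illustrative assumptions.

def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP): the share of truly creditworthy applicants
    who were incorrectly declined."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return fn / (fn + tp) if (fn + tp) else 0.0

def fnr_by_group(records, group_fn):
    """Partition records by group_fn and compute each group's FNR."""
    groups = {}
    for rec in records:
        groups.setdefault(group_fn(rec), []).append(rec)
    return {
        g: false_negative_rate([r["creditworthy"] for r in rs],
                               [r["approved"] for r in rs])
        for g, rs in groups.items()
    }

# Toy data: one creditworthy older applicant is declined.
records = [
    {"age": 65, "creditworthy": 1, "approved": 0},
    {"age": 65, "creditworthy": 1, "approved": 1},
    {"age": 35, "creditworthy": 1, "approved": 1},
    {"age": 35, "creditworthy": 1, "approved": 1},
]
rates = fnr_by_group(records, lambda r: "over_60" if r["age"] > 60 else "under_60")
disparity = max(rates.values()) - min(rates.values())
```

In practice this kind of gap metric would be computed on a properly sized validation set and compared against a threshold agreed with legal and governance stakeholders, rather than the toy numbers above.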
10.3 Scenario 3: The Vendor Limitation
Situation
An HR department wants to use a vendor’s AI tool for initial screening of job applications. The vendor provides a demonstration that impresses the HR team with its ability to quickly rank candidates.
During governance review, the legal team requests information about the model's training data, its accuracy across demographic groups, and its compliance with employment discrimination laws. The vendor responds that the training data composition is proprietary and that it cannot provide disaggregated accuracy metrics, but warrants that the tool complies with applicable laws.
The HR team argues that the vendor is well-established, the tool is widely used, and the organization’s competitors are already using similar tools.
Questions
- What governance concerns does the vendor’s limited disclosure create?
- How should the organization evaluate the vendor’s warranty of legal compliance?
- What contractual provisions might address the organization’s concerns?
- What testing could the organization perform itself to address information gaps?
- How should competitive considerations factor into the governance decision?
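One form of independent testing the organization could perform despite the vendor's limited disclosure is a matched-pair audit: submit otherwise-identical candidate profiles that differ only in a single demographic attribute and compare the tool's scores. The sketch below is hypothetical; `fake_rank` stands in for the vendor's opaque scoring API, and the divergence threshold is an assumption.

```python
# Illustrative matched-pair audit of a black-box screening tool.
# vendor_rank is any callable returning a score for a profile; here a toy
# stand-in is used because the real vendor API is unknown.

def matched_pair_audit(pairs, vendor_rank):
    """pairs: list of (profile_a, profile_b) tuples, identical except for
    one demographic attribute. Returns the fraction of pairs whose scores
    diverge by more than an (illustrative) 0.1 threshold."""
    divergent = 0
    for a, b in pairs:
        if abs(vendor_rank(a) - vendor_rank(b)) > 0.1:
            divergent += 1
    return divergent / len(pairs) if pairs else 0.0

# Toy stand-in for the vendor's scorer, for illustration only.
def fake_rank(profile):
    return 0.9 if profile.get("group") == "A" else 0.7

pairs = [({"group": "A", "skills": "python"},
          {"group": "B", "skills": "python"})]
divergence_rate = matched_pair_audit(pairs, fake_rank)
```

A high divergence rate on matched pairs would give the organization its own evidence about disparate treatment, independent of the vendor's warranty.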
10.4 Scenario 4: The Unexpected Behavior
Situation
A healthcare organization has deployed an AI system that assists with patient triage, helping staff prioritize patients based on reported symptoms. The system has been operating for six months with positive feedback from staff.
A nurse notices that the system seems to consistently assign lower priority to elderly patients reporting certain symptoms. She raises this concern with her supervisor, who asks the technical team to investigate.
Investigation reveals that the model learned patterns from historical data that reflected a genuine clinical pattern: certain symptoms are more concerning in younger patients. However, the nurse argues that some elderly patients who should have been seen urgently were deprioritized.
Questions
- How should the organization respond to this finding?
- Was the model behaving incorrectly, or reflecting legitimate clinical patterns?
- What governance mechanisms should have detected this issue earlier?
- How should the organization communicate with staff and patients about this issue?
- What changes to the system or its governance are appropriate?
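The kind of governance mechanism that might have surfaced this pattern before a nurse noticed it is routine subgroup monitoring of the deployed system's outputs. The sketch below compares assigned triage priorities across age bands per symptom; the field names, the 65-year cutoff, and the alert threshold are illustrative assumptions, not a validated clinical rule.

```python
# Hedged sketch of subgroup output monitoring for a triage model:
# flag symptoms where elderly patients receive systematically lower
# priority than younger patients. Higher priority number = more urgent.

from statistics import mean

def priority_gap_alerts(cases, threshold=0.5):
    """Group cases by symptom and, for each symptom with both elderly and
    younger patients, return (gap, alert) where gap is the difference in
    mean assigned priority (younger minus elderly)."""
    by_symptom = {}
    for c in cases:
        by_symptom.setdefault(c["symptom"], []).append(c)
    alerts = {}
    for symptom, cs in by_symptom.items():
        elderly = [c["priority"] for c in cs if c["age"] >= 65]
        younger = [c["priority"] for c in cs if c["age"] < 65]
        if elderly and younger:
            gap = mean(younger) - mean(elderly)
            alerts[symptom] = (gap, gap > threshold)
    return alerts

# Toy data: elderly chest-pain patients consistently get lower priority.
cases = [
    {"symptom": "chest_pain", "age": 70, "priority": 2},
    {"symptom": "chest_pain", "age": 72, "priority": 2},
    {"symptom": "chest_pain", "age": 40, "priority": 3},
    {"symptom": "chest_pain", "age": 45, "priority": 3},
]
alerts = priority_gap_alerts(cases)
```

An alert here does not by itself show the model is wrong (as the scenario notes, some age-related patterns are clinically legitimate) but it triggers the clinical review that, in the scenario, only happened by chance.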
10.5 Scenario 5: The Scope Expansion
Situation
A technology company developed an AI content moderation system for its social media platform. The system was designed and tested to identify spam and obvious terms-of-service violations, and the impact assessment and governance review addressed only this scope.
Over time, product managers have expanded the system’s use to also flag potential misinformation, hate speech, and self-harm content. Each expansion seemed incremental and did not trigger new governance review. The system is now making significant content decisions with potential impacts on free expression that the original governance review did not contemplate.
A content creator whose posts were removed files a complaint alleging the moderation system is biased. Investigation reveals the governance documentation does not reflect the system’s current scope.
Questions
- What governance failures allowed this scope expansion without review?
- How should the organization respond to the current situation?
- What governance mechanisms could prevent similar scope creep?
- How should change management apply to AI system evolution?
- What are the organization’s obligations regarding the original governance documentation?
10.6 Scenario 6: The Autonomous Agent
Situation
A company is piloting an AI agent that can autonomously handle customer service inquiries. Unlike a chatbot that provides information, this agent can take actions: process refunds, modify orders, apply discounts, and escalate to human agents.
During pilot testing, the agent handles most inquiries appropriately. However, reviewers note several concerning patterns: the agent sometimes offers discounts that exceed policy limits, occasionally processes refunds for orders that do not qualify, and makes inconsistent escalation decisions.
The technical team says these issues can be addressed through additional training. The customer service team loves the agent’s efficiency and wants to expand deployment. The finance team is concerned about unauthorized discounts and refunds.
Questions
- How do governance considerations for autonomous agents differ from advisory AI?
- What controls should exist before expanding deployment?
- How should human oversight work for an agent making many autonomous decisions?
- What accountability mechanisms are appropriate for agent actions?
- What monitoring would detect problematic patterns in agent behavior?
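One control that distinguishes an action-taking agent from an advisory system is a pre-action policy guardrail: every action the agent proposes is checked against hard business-rule limits before execution, with violations routed to a human. The sketch below is hypothetical; the limits, action types, and field names are invented for illustration, and a deny-by-default stance is one design choice among several.

```python
# Illustrative pre-action guardrail for an autonomous customer-service
# agent. Unknown action types are denied by default and escalated.

POLICY_LIMITS = {
    "discount_pct_max": 15,   # hypothetical: agent may not exceed 15% discount
    "refund_usd_max": 200,    # hypothetical: refunds above $200 need a human
}

def check_action(action):
    """Return (allowed, reason) for a proposed agent action."""
    kind = action.get("type")
    if kind == "discount":
        if action["percent"] > POLICY_LIMITS["discount_pct_max"]:
            return False, "discount exceeds policy limit; escalate to human"
        return True, "within discount policy"
    if kind == "refund":
        if action["amount_usd"] > POLICY_LIMITS["refund_usd_max"]:
            return False, "refund exceeds policy limit; escalate to human"
        if not action.get("order_qualifies", False):
            return False, "order does not qualify for refund; escalate to human"
        return True, "within refund policy"
    return False, "unknown action type; escalate to human"

# Example: the agent proposes a 25% discount, over the illustrative limit.
allowed, reason = check_action({"type": "discount", "percent": 25})
```

Logging every (action, decision, reason) triple from such a gate also yields exactly the audit trail the monitoring and accountability questions above call for.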
10.7 Scenario 7: The Cross-Border Deployment
Situation
A US-headquartered company wants to deploy an AI system across its global operations. The system will be used in the United States, European Union, United Kingdom, Canada, and Japan. It involves processing personal data and makes recommendations that affect individuals.
Legal analysis reveals different requirements across jurisdictions:
- EU AI Act high-risk requirements may apply for EU deployment
- GDPR Article 22 creates obligations for EU data subjects
- Various Canadian provincial laws apply
- Japan has guidelines but limited binding requirements
- US requirements vary by state
The product team wants a unified global deployment. Legal cautions that compliance requirements differ significantly across jurisdictions.
Questions
- How should the organization approach governance for a system with different requirements across jurisdictions?
- Should the organization apply the most stringent requirements globally, or comply differently in different jurisdictions?
- What documentation strategy supports multi-jurisdictional compliance?
- How should the organization handle jurisdictions where requirements are unclear or evolving?
- What organizational structure supports governance across jurisdictions?