4  Understanding How to Govern AI Deployment and Use

4.1 Introduction

The deployment of an AI system marks a transition from controlled development to real-world operation, where the system interacts with real users, affects real individuals, and operates in conditions that may differ from those anticipated during development. This transition introduces new governance challenges. Risks that were theoretical during development become concrete. Edge cases that were difficult to test may now be encountered. And the individuals affected by AI decisions now have a direct stake in how the system operates.

This chapter examines governance during AI deployment and operation, addressing how organizations evaluate AI systems before deployment, manage ongoing operations, handle third-party AI relationships, protect user rights, address workforce implications, and integrate AI governance with enterprise risk management. The material maps primarily to Domain IV of the AIGP Body of Knowledge, which addresses the knowledge and skills required to govern the deployment and use of AI systems.

Organizations may be deployers using AI systems developed by others, deployers using AI systems they developed themselves, or both. The governance practices described here apply regardless of where the AI system originated, though the available information and control mechanisms differ when systems come from external providers versus internal development teams.

Figure 4.1: AI Deployment Governance Framework — Five-phase deployment lifecycle with activities and stakeholder responsibilities.

4.2 Evaluating AI Deployment Decisions

Before deploying an AI system, organizations must evaluate whether deployment is appropriate. This evaluation considers whether the system is ready, whether the organization is ready, and whether the deployment context is appropriate.

System Readiness

System readiness assessment verifies that the AI system meets requirements and is fit for deployment. For internally developed systems, this assessment builds on development testing. For externally sourced systems, this assessment may require independent evaluation.

Technical readiness requires that the system functions correctly, achieves required performance levels, and integrates appropriately with organizational infrastructure. Does the system produce outputs in the expected format? Does it meet latency and throughput requirements? Does it handle errors gracefully? Does it integrate with authentication, logging, and monitoring systems?

Performance readiness requires that the system achieves required accuracy, fairness, and robustness. Performance should be validated on data representative of actual deployment conditions, not just development test data. If the deployment context differs significantly from development conditions, additional testing may be needed.
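Where a sample of the deployment population can be labeled in advance, part of this check can be automated. The sketch below is a minimal illustration only, assuming a scikit-learn-style model and hypothetical threshold values; actual acceptance criteria are application-specific and should come from the deployment decision process, not from code defaults.

```python
from sklearn.metrics import accuracy_score

# Hypothetical acceptance thresholds; real values are set per application.
MIN_ACCURACY = 0.90      # absolute floor on the deployment-representative sample
MAX_DEGRADATION = 0.05   # allowed drop relative to the development test set

def performance_ready(model, X_test, y_test, X_deploy, y_deploy) -> bool:
    """Gate deployment on performance measured against a deployment-representative sample."""
    test_acc = accuracy_score(y_test, model.predict(X_test))
    deploy_acc = accuracy_score(y_deploy, model.predict(X_deploy))
    return deploy_acc >= MIN_ACCURACY and (test_acc - deploy_acc) <= MAX_DEGRADATION
```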

Compliance readiness requires that legal and regulatory requirements have been addressed. Has required documentation been prepared? Are required disclosures ready to be made? Are human oversight mechanisms in place? Have required impact assessments been completed?

Organizational Readiness

Deploying an AI system requires organizational capabilities beyond the system itself. Organizational readiness assessment verifies these capabilities are in place.

Operational readiness requires that the organization can operate the AI system effectively. Are staff trained to use the system appropriately? Are procedures documented? Is support available when users encounter issues?

Oversight readiness requires that the organization can provide required human oversight. If human review is required for certain decisions, are reviewers identified, trained, and available? Do they have access to the information needed for meaningful review?

Monitoring readiness requires that the organization can monitor the deployed system. Are monitoring systems in place? Are metrics defined? Are alert thresholds set? Is someone responsible for reviewing monitoring outputs?

Incident readiness requires that the organization can respond when problems occur. Are incident response procedures established? Are roles and responsibilities clear? Are escalation paths defined? Are regulatory notification requirements understood?

Deployment Context Assessment

The same AI system may be appropriate for deployment in some contexts but not others. Deployment context assessment evaluates whether the specific deployment scenario is appropriate.

Use case alignment verifies that the deployment use case matches the system’s intended purpose and validated capabilities. An AI system validated for one application may not be appropriate for different applications, even if they seem similar. A credit risk model developed for consumer lending may not be appropriate for small business lending without additional validation.

Population alignment verifies that the deployment population matches the population on which the system was developed and tested. If the system will serve a different demographic, geographic, or other population than it was developed for, performance may differ.

Environmental alignment verifies that deployment conditions match development assumptions. If the system depends on particular data inputs, infrastructure, or integration points, those dependencies must be satisfied in the deployment environment.

Risk proportionality verifies that the deployment context does not present risks disproportionate to the validated capabilities. A system with modest accuracy might be appropriate for low-stakes applications but inappropriate for consequential decisions.

Deployment Decision

The deployment decision integrates system readiness, organizational readiness, and deployment context assessment into a determination of whether to proceed. This decision should be documented, with clear accountability for the decision-maker.

The decision may be to proceed with deployment as planned, proceed with modifications or conditions, delay deployment until readiness gaps are addressed, or not proceed because the deployment is inappropriate.

Conditions might include limiting initial deployment scope, implementing additional oversight measures, requiring enhanced monitoring, or setting triggers for deployment review. These conditions should be documented and tracked to ensure they are actually implemented.
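The decision record need not be elaborate; a structured form that names the outcome, the accountable decision-maker, the rationale, and any conditions makes conditions easier to track than free-text memos. The following sketch is one possible shape, using hypothetical field names and example values:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeploymentDecision:
    system: str
    decision: str                  # "proceed", "proceed_with_conditions", "delay", "reject"
    decision_maker: str            # accountable individual or role
    decision_date: date
    rationale: str
    conditions: list[str] = field(default_factory=list)  # e.g., limited scope, enhanced monitoring
    review_trigger: str = ""       # event or date that forces re-evaluation

decision = DeploymentDecision(
    system="customer-service-chatbot",
    decision="proceed_with_conditions",
    decision_maker="Head of AI Governance",
    decision_date=date(2025, 3, 1),
    rationale="System and context ready; staff escalation training incomplete.",
    conditions=["Limit rollout to 10% of traffic", "Weekly review of escalation rates"],
    review_trigger="Escalation training completed or 90 days elapsed",
)
```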

4.3 Assessing Third-Party AI Systems

Many organizations deploy AI systems obtained from external providers rather than developing systems internally. Third-party AI introduces governance challenges because the organization has less visibility into system development and less control over system characteristics.

Due Diligence

Before acquiring a third-party AI system, organizations should conduct due diligence to evaluate the system and the provider.

Provider assessment evaluates the provider’s capabilities, practices, and track record. Does the provider have appropriate expertise? What governance practices does the provider follow? Has the provider had incidents with other AI systems? What is the provider’s financial stability and likely continuity?

System assessment evaluates the AI system itself. What is the system designed to do? How was it developed? What data was used for training? What testing was performed? What are the known limitations? This assessment may be constrained by limited access to system details; providers may restrict information for competitive or security reasons.

Documentation assessment evaluates whether the provider supplies documentation sufficient to support the organization’s governance needs. Is there adequate description of system capabilities and limitations? Are performance metrics provided? Is the information sufficient for required impact assessments?

Compliance assessment evaluates whether the provider has addressed applicable regulatory requirements. For systems subject to the EU AI Act, has the provider completed conformity assessment? Does the provider supply required documentation? Will the provider support the organization’s compliance obligations?

Contractual Provisions

Contracts with AI providers should address governance needs that cannot be fully satisfied through pre-acquisition due diligence.

Information rights should ensure the organization receives information needed for ongoing governance. This might include performance metrics, information about system updates, notification of incidents or identified issues, and documentation needed for regulatory compliance.

Audit rights should enable the organization to verify provider claims and assess ongoing compliance. This might include rights to audit provider practices, review testing results, or conduct independent testing of the system.

Update provisions should address how system updates are handled. Will updates be automatic or subject to organization approval? Will the organization be notified in advance? What testing will the organization be able to conduct before updates take effect?

Incident provisions should address how incidents are handled. What notification will the provider give? What cooperation will the provider provide for incident investigation? What remediation will the provider undertake?

Liability provisions should appropriately allocate responsibility for AI-related harms. Who bears liability if the system causes harm to third parties? What indemnification does the provider offer? How are regulatory penalties allocated?

Termination provisions should address what happens if the relationship ends. Will the organization retain access to trained models? Will data be returned or deleted? What transition support will the provider offer?

Ongoing Vendor Management

Third-party AI relationships require ongoing management beyond initial due diligence and contracting.

Performance monitoring tracks whether the system continues to meet requirements. If performance degrades or unexpected issues arise, the organization should investigate and work with the provider to address problems.

Compliance monitoring tracks whether the provider continues to meet contractual and regulatory obligations. Regular review should verify that required documentation is current, required notifications are being made, and representations remain accurate.

Relationship management maintains communication with the provider about system issues, upcoming changes, and evolving requirements. Organizations should ensure they have appropriate contacts and escalation paths at the provider.

Risk assessment should be updated periodically and when circumstances change. A provider that was appropriate when selected may become inappropriate if their practices change, their financial condition weakens, or organizational requirements evolve.

Figure 4.2: Third-Party AI Governance Lifecycle — Cyclical governance from due diligence through monitoring and renewal or exit.

4.4 Deployment Options and Their Governance Implications

AI systems can be deployed through various technical architectures, each with different governance implications. Organizations should understand these options and their tradeoffs.

Cloud Deployment

Cloud deployment runs AI systems on infrastructure operated by cloud service providers. The organization accesses AI capabilities through APIs or managed services without operating the underlying infrastructure.

Cloud deployment offers advantages including reduced infrastructure burden, scalability, and access to advanced capabilities the organization might not be able to develop independently. Many AI services are available only or primarily through cloud deployment.

Cloud deployment introduces governance considerations. Data sent to cloud services leaves organizational control; privacy and security depend on provider practices and contractual protections. The organization may have limited visibility into how systems operate. Latency and availability depend on network connectivity and provider uptime. Regulatory requirements may restrict use of cloud services for certain data or applications.

A less obvious but increasingly important consideration is the competitive intelligence embedded in organizational data. When employees use cloud AI services for their work—analyzing documents, writing code, refining strategies—their interactions may train or improve the provider’s models. The organizational knowledge encoded in those interactions can then become part of a platform that also serves competitors: once AI becomes part of the team rather than a private assistant, organizational intelligence may not remain organizational. Governance should address what data and interactions flow to external AI services and whether competitive sensitivity warrants constraints.

On-Premise Deployment

On-premise deployment runs AI systems on infrastructure the organization owns and operates. The organization maintains control over the computing environment and data.

On-premise deployment offers advantages for sensitive applications. Data remains within organizational boundaries. The organization has full control over the computing environment. Operation does not depend on external connectivity or provider availability.

On-premise deployment requires organizational capabilities to operate and maintain AI infrastructure. This includes appropriate hardware, software expertise, and ongoing maintenance. On-premise deployment may limit access to capabilities available only through cloud services.

Edge Deployment

Edge deployment runs AI systems on devices at the edge of networks, close to data sources or end users. This might include AI on mobile devices, IoT devices, industrial equipment, or local computing appliances.

Edge deployment offers advantages for applications requiring low latency, offline operation, or local data processing. AI on a mobile device can operate without network connectivity. AI on manufacturing equipment can respond faster than a round-trip to a cloud server would allow.

Edge deployment introduces governance considerations. Models deployed to edge devices may be harder to update and monitor. Security of edge devices may be challenging. Distributed deployment may complicate version management and consistency.

Hybrid Deployment

Many organizations use hybrid approaches combining cloud, on-premise, and edge deployment for different systems or different aspects of the same system. For example, model training might occur in the cloud while inference runs on-premise, or a mobile app might use on-device AI for routine processing while calling cloud services for complex requests.

Hybrid deployment requires governance approaches that address each deployment context and the interactions between them. Data flows between environments must be understood and protected. Consistency must be maintained when the same model runs in different environments.

4.5 Governing AI During Operation

Once deployed, AI systems require ongoing governance throughout their operational life. This governance ensures systems continue to perform appropriately, risks are managed, and issues are identified and addressed.

Operational Monitoring

Continuous monitoring provides visibility into AI system operation and enables early detection of issues.

Technical monitoring tracks system health including availability, latency, throughput, and errors. Standard application monitoring practices apply, supplemented by AI-specific considerations.

Performance monitoring tracks model accuracy and related metrics over time. This requires defining appropriate metrics, collecting the data needed to compute them, and establishing processes to review results. For supervised learning systems, this typically requires obtaining ground truth labels for a sample of predictions, which may be available immediately, delayed, or require active collection.

Fairness monitoring tracks whether outcomes remain equitable across demographic groups. Disparities that were not present or were acceptable at deployment may emerge or worsen over time.
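In practice, fairness monitoring often amounts to recomputing key metrics per group on recent decisions and comparing them against the values observed at deployment. A minimal sketch, assuming labeled outcomes are available and using accuracy and selection rate as illustrative metrics (the column names and threshold are hypothetical):

```python
import pandas as pd

def group_metrics(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Per-group accuracy and selection rate for a batch of decisions.

    Expects columns: group_col, 'prediction', 'label' (hypothetical schema).
    """
    rows = {}
    for group, g in df.groupby(group_col):
        rows[group] = {
            "n": len(g),
            "accuracy": float((g["prediction"] == g["label"]).mean()),
            "selection_rate": float((g["prediction"] == 1).mean()),
        }
    return pd.DataFrame(rows).T

# Example usage: flag groups whose accuracy dropped more than 5 points from baseline.
# baseline = group_metrics(deployment_batch, "age_band")
# current = group_metrics(recent_batch, "age_band")
# alerts = (baseline["accuracy"] - current["accuracy"]) > 0.05
```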

Drift monitoring detects changes in input data distributions or in the relationship between inputs and appropriate outputs. Data drift occurs when the characteristics of incoming data change from the training distribution. Concept drift occurs when the underlying relationships the model learned change. Both can cause performance degradation.
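Data drift is commonly monitored with a simple distribution-comparison statistic, such as the population stability index (PSI), computed per feature between a training-time baseline and recent inputs. A minimal sketch follows; the 0.25 alert threshold is a rule of thumb sometimes used in practice, not a standard.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and recent inputs."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, avoiding division by zero in sparse bins.
    e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_frac = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# if psi(train_feature, recent_feature) > 0.25:
#     raise an alert for investigation (possible data drift)
```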

Usage monitoring tracks how the system is being used, which can identify misuse, unexpected use patterns, or opportunities for improvement.

Human Oversight in Operation

Human oversight requirements established during design must be implemented during operation. The nature and intensity of oversight depends on the application context and regulatory requirements.

For some applications, human oversight means humans review AI recommendations before taking action. The human reviewer should have appropriate expertise, access to relevant information, and genuine authority to override AI recommendations. Organizations should guard against automation bias, where humans routinely accept AI outputs without meaningful review.

For other applications, human oversight means humans monitor aggregate AI behavior without reviewing individual decisions. This might involve periodic review of performance metrics, audits of decision samples, or investigation of anomalies or complaints.

For applications requiring human intervention capabilities, humans must be able to understand system operation, intervene when necessary, and stop the system if required. These capabilities must be tested and maintained.

Documentation During Operation

Operational documentation captures information needed for ongoing governance, incident investigation, and compliance demonstration.

Logging should capture appropriate information about AI system inputs, outputs, and operation. What constitutes appropriate logging depends on the application; logs should be sufficient to investigate issues and demonstrate compliance without unnecessarily retaining sensitive information.

Audit trails should enable reconstruction of decisions for investigation or challenge. When an individual questions an AI decision affecting them, the organization should be able to retrieve relevant information and explain what happened.
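One common pattern is to append a structured record for each AI-assisted decision, so that individual decisions can later be reconstructed without retaining more sensitive data than necessary. The sketch below uses JSON Lines with hypothetical field names; it stores a hash of the inputs on the assumption that the raw values are retrievable from systems of record when an investigation requires them.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str, subject_id: str,
                 inputs: dict, output, reviewer: str | None = None) -> None:
    """Append one structured decision record to an append-only JSON Lines log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "subject_id": subject_id,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # populated when a human reviewed the recommendation
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```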

Performance records should document ongoing performance metrics, enabling trend analysis and compliance demonstration.

Incident records should document any incidents, investigations, and responses, supporting organizational learning and demonstrating diligent incident management.

Change Management

AI systems change over time through updates, retraining, configuration changes, and environmental changes. Change management ensures changes are controlled and do not introduce unintended consequences.

Update assessment should evaluate proposed changes before implementation. What is the purpose of the change? What are the expected effects? What testing has been performed? What risks might the change introduce?

Approval processes should ensure appropriate review of changes. Minor changes might be approved by operational staff; significant changes might require governance review. The level of review should be proportionate to the potential impact.

Rollback capabilities should enable reverting to previous versions if changes cause problems. This requires maintaining previous versions and having procedures to restore them.

Documentation should capture what changes were made, when, why, and by whom. This supports troubleshooting and provides an audit trail.
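Rollback and change documentation are easier when every deployed version is recorded alongside its rationale and approval, and when the rollback itself is treated as just another documented change. A minimal registry sketch, with hypothetical fields and in-memory storage standing in for whatever system of record the organization actually uses:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelChange:
    version: str
    change_reason: str
    approved_by: str
    deployed_at: datetime

class ModelRegistry:
    """Tracks deployed versions so changes are documented and rollback targets stay available."""

    def __init__(self) -> None:
        self.history: list[ModelChange] = []

    def deploy(self, version: str, change_reason: str, approved_by: str) -> None:
        self.history.append(
            ModelChange(version, change_reason, approved_by, datetime.now(timezone.utc)))

    def active_version(self) -> str:
        return self.history[-1].version

    def rollback(self, reason: str, approved_by: str) -> str:
        """Re-deploy the previous version; the rollback is recorded as a change."""
        previous = self.history[-2].version
        self.deploy(previous, f"rollback: {reason}", approved_by)
        return previous
```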

4.6 Managing Downstream Risks

AI systems can affect parties beyond the immediate users, and deployers must consider and manage these downstream risks.

Identifying Affected Parties

AI systems affect multiple categories of stakeholders whose interests governance should consider.

Users are individuals who interact directly with the AI system. They may be employees using AI tools in their work, customers using AI-powered products or services, or others who engage with the system.

Subjects are individuals about whom the AI system makes decisions or predictions, who may or may not be the same as users. A hiring AI affects job applicants who may never directly interact with the system. A credit AI affects loan applicants whose information the system processes.

Third parties are individuals or entities affected by AI system outputs or by actions taken based on those outputs. If an AI system recommends a price increase that affects customers, those customers are affected parties even if the AI’s role is invisible to them.

Society broadly may be affected by aggregate impacts of AI deployment, including effects on labor markets, information environments, or social dynamics.

Risk Communication

Organizations should communicate appropriately about AI risks to affected parties.

Users should understand what the AI system does, how to use it appropriately, and what limitations to be aware of. Documentation, training, and interface design should support appropriate use.

Subjects should be informed about AI involvement in decisions affecting them, as required by law and as appropriate for building trust. The EU AI Act and various national laws require disclosure of AI use in specified contexts.

Downstream deployers, when an organization provides AI systems for others to deploy, should receive information needed to deploy responsibly. This includes accurate descriptions of capabilities and limitations, documentation supporting impact assessments, and guidance on appropriate use.

Misuse Prevention

Organizations should take reasonable steps to prevent misuse of AI systems they deploy or provide.

Use restrictions should prohibit inappropriate applications. Terms of service, acceptable use policies, and technical controls can restrict use to appropriate contexts.

Access controls should limit who can use the system to authorized parties with legitimate needs.

Monitoring should detect potential misuse patterns that warrant investigation.

Response procedures should address identified misuse through warnings, access revocation, or other appropriate measures.

4.7 External Communication

Organizations must communicate externally about their AI systems to various audiences including regulators, affected individuals, and the public.

Regulatory Communication

Regulatory communication requirements vary by jurisdiction and system type.

The EU AI Act requires providers of high-risk AI systems to register them in an EU database before placing them on the market. Certain deployers of high-risk AI systems, notably public authorities and bodies, must also register their use. Serious incidents must be reported to the relevant authorities.

Sector-specific regulations may impose additional reporting requirements. Financial institutions may need to report to banking regulators about AI use in covered activities. Healthcare AI may require regulatory filings.

Organizations should identify applicable reporting requirements and establish processes to meet them.

Individual Communication

Communication to individuals affected by AI systems serves both compliance and trust-building purposes.

Disclosure requirements may mandate informing individuals about AI involvement in decisions. The GDPR requires informing individuals about automated decision-making, including meaningful information about the logic involved, and Article 22 restricts decisions based solely on automated processing. The EU AI Act requires disclosure when individuals interact with certain AI systems. Various national laws require AI disclosure in specific contexts.

Explanation requirements may mandate explaining AI decisions to affected individuals. When AI contributes to a decision that affects someone, they may have the right to understand the factors involved and how they might achieve a different outcome.

Recourse mechanisms should provide paths for individuals to question, contest, or seek review of AI decisions affecting them. This might include human review processes, complaint mechanisms, or formal appeal rights.

Public Communication

Many organizations make public statements about their AI use through AI principles, transparency reports, or other communications.

Public commitments create expectations that organizations should be prepared to meet. Stating a commitment to fairness or transparency creates accountability for actually achieving those goals.

Transparency reporting provides information about AI systems, their performance, and their impacts. Some organizations voluntarily publish detailed information about their AI systems; others may be required to do so by regulations or contractual obligations.

Crisis communication may be necessary if AI systems cause publicized harms. Organizations should be prepared to communicate about incidents, taking accountability while protecting legal interests.

4.8 Deactivation Capabilities

Organizations must maintain the ability to deactivate AI systems when necessary. This capability supports incident response, provides a safeguard against runaway systems, and satisfies regulatory requirements.

Technical Deactivation

Technical mechanisms should enable stopping AI systems quickly when needed.

Kill switches provide immediate shutdown capability. For critical systems, this might be a physical or software control that immediately halts operation.

Graceful shutdown procedures enable stopping systems in an orderly way that preserves data integrity and enables investigation.

Rollback capabilities enable reverting to previous versions or configurations if problems arise with updates.

Fallback modes enable continuing operations without AI, either by reverting to manual processes or by using simpler backup systems.
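Deactivation is easiest to exercise when every call to the AI system already passes through a control point that can divert to a fallback path. The sketch below is a minimal illustration, assuming a hypothetical flag store and a simpler rule-based or manual fallback function; the flag itself could live in a feature-flag service, a configuration file, or a database row.

```python
def handle_request(request, flags, model, fallback):
    """Route a request to the AI system unless a kill switch is set.

    `flags` is any store of operational switches; `fallback` is a simpler
    rule-based or manual path maintained for exactly this purpose.
    """
    if flags.get("ai_system_disabled", False):
        # Kill switch set by operations, governance, or executive leadership.
        return fallback(request)
    try:
        return model.predict(request)
    except Exception:
        # Graceful degradation: individual failures also divert to the fallback
        # rather than leaving the user without a response.
        return fallback(request)
```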

Organizational Authority

Clear authority for deactivation decisions ensures that necessary shutdowns can happen quickly.

Who has authority to order a shutdown? This should include operational personnel who can respond to immediate technical issues, governance personnel who can respond to compliance concerns, and executive leadership who can respond to strategic or reputational concerns.

What triggers a mandatory shutdown? Some conditions should require shutdown regardless of other considerations, such as serious safety incidents, regulatory orders, or evidence of severe discrimination.

What processes apply after shutdown? Investigation, remediation, and restart authorization processes should be established in advance.

Documentation and Testing

Deactivation capabilities should be documented and tested.

Documentation should describe shutdown procedures, authority, and decision criteria so that personnel can act quickly when needed.

Testing should verify that shutdown mechanisms work as intended. Organizations should periodically test their ability to deactivate AI systems, just as they test disaster recovery for other systems.

Incident exercises should include scenarios requiring AI deactivation, ensuring that personnel are prepared to execute shutdown procedures under pressure.

4.9 Protecting User Rights

Individuals affected by AI systems have various rights that organizations must respect. Some rights are established by law; others reflect ethical commitments or best practices.

Explanation and Transparency

Rights to explanation require organizations to provide meaningful information about AI decisions.

What was decided? The individual should understand the outcome of AI processing affecting them.

What factors were considered? The individual should understand what information influenced the decision.

Why was this outcome reached? The individual should understand, at least at a general level, why the AI reached this conclusion rather than another.

What can the individual do? The individual should understand options for contesting the decision or achieving a different outcome.

Explanations should be meaningful to the individuals receiving them, not technical descriptions comprehensible only to AI specialists. Plain language explanations, even if they sacrifice some precision, may be more valuable than technically accurate but incomprehensible descriptions.
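Where the model exposes feature contributions (for example, from SHAP values or regression coefficients), the plain-language layer can be a simple mapping from the most influential features to pre-written, reviewed reason statements. A minimal sketch with hypothetical feature names and reason text:

```python
# Hypothetical mapping from model features to reviewed, plain-language reasons.
REASON_TEXT = {
    "credit_utilization": "Your reported balances are high relative to your credit limits.",
    "payment_history": "Your record of on-time payments influenced this decision.",
    "income_to_debt": "Your income relative to existing debt influenced this decision.",
}

def top_reasons(contributions: dict[str, float], k: int = 3) -> list[str]:
    """Return plain-language statements for the k features that most influenced the decision."""
    ranked = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)
    return [REASON_TEXT.get(f, f"Factor considered: {f}") for f in ranked[:k]]

# Example usage:
# top_reasons({"credit_utilization": 0.42, "payment_history": -0.10, "income_to_debt": 0.31})
```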

Contestation and Recourse

Rights to contest AI decisions require mechanisms for individuals to challenge outcomes.

Human review enables individuals to have AI decisions reviewed by a human who can consider factors the AI may have missed and exercise judgment the AI cannot replicate.

Appeal processes enable individuals to escalate concerns through defined channels with increasing levels of review.

Correction mechanisms enable individuals to provide information that was missing or incorrect and have decisions reconsidered in light of complete or corrected information.

Remediation provides appropriate relief when AI decisions are found to be wrong, including reversing decisions, providing compensation, or taking other corrective action.

Rights Implementation

Implementing user rights requires organizational processes, not just policies.

Awareness ensures that staff understand rights requirements and their role in honoring them.

Intake mechanisms provide clear channels for individuals to exercise their rights.

Response processes ensure that rights requests are handled consistently, completely, and within required timeframes.

Documentation captures how rights were exercised and honored, supporting compliance demonstration.

Figure 4.3: User Rights in AI Systems — Rights request lifecycle and implementation requirements for opt-out, explanation, and contestation.

4.10 Workforce Considerations

AI deployment affects workforces, and governance should address these impacts. This includes impacts on workers who use AI systems, workers whose jobs are affected by AI automation, and workers who are subjects of AI decision-making.

AI Augmentation of Work

Many AI deployments augment worker capabilities rather than replacing workers entirely. Governance should address how these AI-augmented work arrangements operate.

Training should prepare workers to use AI tools effectively and appropriately. Workers should understand what AI systems can and cannot do, how to interpret AI outputs, when to rely on AI and when to exercise independent judgment, and how to identify potential AI errors.

Workflow design should integrate AI appropriately into work processes. AI should support workers rather than creating burdens. Human oversight should be meaningful rather than perfunctory.

Performance management should account for AI assistance. Expectations should reflect AI’s contributions, and workers should not be penalized for exercising appropriate judgment to override AI recommendations.

Well-being considerations should address potential negative effects of AI-augmented work, including deskilling, reduced autonomy, or surveillance effects.

Workforce Displacement

AI may reduce demand for certain types of work, potentially displacing workers. Organizations should consider these impacts and respond appropriately.

Impact assessment should analyze potential workforce effects of AI deployment. Which roles might be affected? How many workers? Over what timeframe?

Transition support might include retraining, reassignment to other roles, severance packages, or outplacement assistance.

Stakeholder communication should inform workers and their representatives about AI plans and their potential impacts.

Responsible automation practices might include gradual implementation, worker input into automation decisions, or commitments to redeploy rather than lay off affected workers.

AI in Employment Decisions

AI systems that make or influence decisions about workers raise particular governance concerns.

Hiring AI affects applicants who may have no relationship with the organization beyond their application. Discrimination in hiring AI can exclude qualified individuals from opportunities.

Performance management AI affects existing workers’ evaluations, compensation, and advancement. Errors or bias can harm careers.

Surveillance AI monitors worker behavior, raising privacy concerns and potentially affecting workplace culture and worker well-being.

Workforce AI should receive heightened governance attention given the power dynamics involved and the potential for significant individual impacts.

4.11 Integrating AI Governance with Enterprise Risk Management

AI governance does not exist in isolation but operates within organizations’ broader risk management frameworks. Effective AI governance integrates with enterprise risk management rather than creating parallel structures.

AI Risk Categories

AI risks relate to traditional enterprise risk categories while also presenting novel characteristics.

Operational risk includes risks of AI system failures, errors, or performance degradation that disrupt operations.

Compliance risk includes risks of violating laws, regulations, or contractual obligations related to AI.

Reputational risk includes risks of public criticism, loss of trust, or brand damage from AI incidents or practices.

Strategic risk includes risks that AI investments fail to achieve expected benefits or that competitors gain advantage through superior AI capabilities.

Financial risk includes potential costs from AI incidents, including regulatory penalties, litigation, remediation, and lost business.

Integration Approaches

Organizations can integrate AI governance with enterprise risk management in various ways.

Risk taxonomy integration incorporates AI risks into existing risk taxonomies, ensuring AI risks are identified and assessed through established processes.

Control framework integration maps AI governance controls to existing control frameworks, enabling consistent assessment and reporting.

Reporting integration incorporates AI risk information into enterprise risk reporting, ensuring appropriate visibility to risk committees and leadership.

Assurance integration includes AI governance in internal audit scope, providing independent verification of governance effectiveness.

Three Lines Model

The three lines model common in enterprise risk management can be applied to AI governance.

First line functions own and manage AI risks as part of their operational responsibilities. Development teams, business units deploying AI, and operational staff managing AI systems are first line.

Second line functions provide oversight, frameworks, and expertise. AI governance teams, privacy offices, compliance functions, and risk management functions are second line.

Third line functions provide independent assurance. Internal audit and external auditors are third line.

Clear delineation of responsibilities across lines helps ensure comprehensive coverage without gaps or redundant effort.

Figure 4.4: AI Governance in Enterprise Risk Management — Integration with three lines model and enterprise risk categories.

4.12 Chapter 4 Summary

This chapter examined governance during AI deployment and operation, addressing the transition from controlled development to real-world use and the ongoing governance required throughout an AI system’s operational life.

Deployment decisions require evaluating system readiness, organizational readiness, and deployment context. System readiness encompasses technical, performance, and compliance dimensions. Organizational readiness requires operational, oversight, monitoring, and incident response capabilities. Deployment context assessment verifies alignment between the specific deployment scenario and validated system capabilities.

Third-party AI requires due diligence assessing providers and systems, contractual provisions protecting organizational interests and governance needs, and ongoing vendor management maintaining appropriate oversight.

Deployment options including cloud, on-premise, edge, and hybrid architectures have different governance implications regarding control, visibility, latency, and regulatory considerations.

Operational governance requires continuous monitoring of technical health, model performance, fairness, and drift. Human oversight must be implemented as designed, with attention to automation bias. Documentation including logs, audit trails, and performance records supports governance and compliance. Change management controls modifications to deployed systems.

Downstream risk management considers affected parties beyond immediate users, communicates appropriately about risks, and implements measures to prevent misuse.

External communication addresses regulatory reporting requirements, individual disclosure and explanation obligations, and public communication about AI practices.

Deactivation capabilities provide technical mechanisms and organizational authority to stop AI systems when necessary, with documented procedures and periodic testing.

User rights including opt-out, explanation, and contestation require organizational processes to implement effectively, not just policies acknowledging rights exist.

Workforce considerations address AI augmentation of work, potential workforce displacement, and the particular sensitivities of AI in employment decisions.

Integration with enterprise risk management situates AI governance within broader organizational structures, mapping AI risks to risk categories and AI governance to control frameworks, reporting, and assurance functions.

4.13 Chapter 4 Review Questions

  1. An organization is preparing to deploy an AI customer service chatbot. System testing has been completed with positive results, but the organization has not yet trained customer service staff on when and how to escalate from the chatbot to human agents. How should the deployment decision address this situation?

  2. An organization is evaluating an AI vendor’s system for use in screening job applicants. The vendor provides accuracy metrics but declines to share detailed information about training data, citing competitive concerns. How should the organization’s governance process address this limitation?

  3. An AI credit decisioning system has been deployed for six months. Monitoring shows that overall accuracy remains within acceptable bounds, but accuracy for applicants over age 60 has declined significantly compared to initial deployment. What governance response is most appropriate?

  4. A retail company uses an AI system to personalize product recommendations. A customer requests an explanation of why they were recommended a particular product. What information should the company provide to satisfy explanation requirements?

  5. An organization is planning to deploy an AI system that will automate significant portions of work currently performed by 200 employees. What workforce-related governance considerations should inform the deployment decision?

4.14 References

IAPP. AIGP Body of Knowledge, Version 2.0.1. International Association of Privacy Professionals, 2025.

European Parliament and Council. Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, 2024.

European Parliament and Council. Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, 2016.

Institute of Internal Auditors. The IIA’s Three Lines Model. IIA Position Paper, 2020.

National Institute of Standards and Technology. AI Risk Management Framework 1.0. NIST AI 100-1, 2023.

Federal Reserve Board, Office of the Comptroller of the Currency. Supervisory Guidance on Model Risk Management. SR 11-7, 2011.

IAPP. AI Governance in Practice Report 2025. International Association of Privacy Professionals, 2025.

Ogunseye, S. “Stop Training Your Competitor’s AI.” Communications of the ACM Blog, 2025. https://cacm.acm.org/blogcacm/stop-training-your-competitors-ai/