11 Glossary and Resources

11.1 Glossary of Key Terms

Accountability: The principle that organizations and individuals responsible for AI systems should be identifiable and answerable for the systems’ impacts.

Adversarial attack: An attempt to cause an AI system to make errors through inputs specifically crafted to exploit system vulnerabilities.

AI governance: The policies, procedures, structures, and practices through which organizations manage the development and deployment of AI systems responsibly.

Algorithmic impact assessment: A systematic evaluation of an AI system’s potential impacts on individuals, groups, and society.

Bias: In the AI context, systematic errors that create unfair outcomes for certain groups or individuals. Bias can arise from training data, model design, or deployment context.

Concept drift: Changes over time in the underlying relationships between a model’s inputs and the outcomes it predicts, which can cause model performance to degrade even when the input data itself looks unchanged.

Conformity assessment: Under the EU AI Act, the process of verifying that a high-risk AI system meets applicable requirements before it can be placed on the market.

Data drift: Changes in the statistical properties of a system’s input data compared to the data used to train the model, even where the learned input–output relationship still holds.
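In practice, data drift is often monitored by comparing a statistic of live inputs against the training distribution. The following is a minimal, hypothetical sketch in pure Python (the function name and the 0.5 threshold are illustrative choices, not a standard); production systems typically use richer tests such as two-sample or population-stability statistics.

```python
import random
import statistics

def mean_shift_zscore(train_values, live_values):
    """Rough drift signal: how many training-set standard deviations
    the live-data mean has moved away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]
stable = [random.gauss(0.0, 1.0) for _ in range(1000)]   # same distribution
shifted = [random.gauss(0.8, 1.0) for _ in range(1000)]  # distribution moved

print(mean_shift_zscore(train, stable))   # small: no drift flagged
print(mean_shift_zscore(train, shifted))  # large: exceeds an alert threshold
```

A monitoring job might run such a check on each batch of inputs and raise an alert, triggering the human review that governance frameworks require, when the signal crosses a pre-agreed threshold.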

Deployer: Under the EU AI Act, a natural or legal person using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Explainability: The degree to which the internal mechanics of an AI system can be explained in human terms.

Fairness: The principle that AI systems should treat people equitably and should not discriminate based on protected characteristics.

Foundation model: A large AI model trained on broad data that can be adapted for various downstream tasks. Also called general-purpose AI model.

Generative AI: AI systems that create new content (text, images, audio, video) rather than analyzing or classifying existing content.

Hallucination: When a generative AI system produces confident but false outputs, such as fabricated facts or citations.

High-risk AI system: Under the EU AI Act, an AI system that poses significant risks to health, safety, or fundamental rights and is subject to extensive compliance requirements.

Human oversight: The capacity for humans to understand, supervise, and, when necessary, intervene in or stop an AI system’s operation.

Impact assessment: A systematic evaluation of potential consequences of an AI system, including privacy, fairness, and other impacts.

Machine learning: An approach to AI where systems learn patterns from data rather than being explicitly programmed with rules.

Model card: A documentation format that provides information about a trained model’s intended uses, performance, limitations, and ethical considerations.
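A model card is simply structured documentation, so it is often maintained as machine-readable metadata alongside the model. Below is a minimal, hypothetical sketch; the field names loosely follow the categories proposed by Mitchell et al. (2019), and all values (model name, metrics, limitations) are invented for illustration, not a standardized schema.

```python
import json

# Hypothetical model card for an imaginary content-moderation model.
model_card = {
    "model_details": {"name": "toxicity-classifier", "version": "1.2"},
    "intended_use": "Flagging potentially abusive comments for human review.",
    "out_of_scope_uses": ["Automated account suspension without human review"],
    "performance": {"accuracy": 0.91, "evaluated_on": "held-out test set"},
    "limitations": ["Trained on English-language text only"],
    "ethical_considerations": ["Error rates were not evaluated across dialects"],
}

# Serializing the card makes it easy to version-control and publish
# alongside the model artifact.
print(json.dumps(model_card, indent=2))
```

Keeping the card in a structured format like this lets governance tooling validate that required sections (intended use, limitations, and so on) are present before a model is released.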

Provider: Under the EU AI Act, a natural or legal person who develops an AI system or has an AI system developed and places it on the market or puts it into service under its own name or trademark.

Responsible AI: Development and deployment of AI systems that are ethical, transparent, fair, accountable, and aligned with human values.

Risk-based approach: Regulatory or governance strategy that applies requirements proportionate to the level of risk an AI system presents.

Robustness: The ability of an AI system to maintain performance when facing unexpected inputs, adversarial attacks, or changing conditions.

Transparency: The principle that AI development and deployment should be open about methods, limitations, and impacts.

11.2 Key Resources

Regulatory and Standards Bodies

European Commission AI Act resources: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework

OECD AI Policy Observatory: https://oecd.ai/en/

ISO AI standards: https://www.iso.org/committee/6794475.html

Professional Resources

IAPP AI Governance resources: https://iapp.org/resources/topics/artificial-intelligence/

Research and Academia

Partnership on AI: https://partnershiponai.org/

AI Now Institute: https://ainowinstitute.org/

Stanford Human-Centered AI: https://hai.stanford.edu/

Regulatory Guidance

US FTC AI guidance: https://www.ftc.gov/business-guidance/blog/tags/artificial-intelligence

UK ICO AI guidance: https://ico.org.uk/for-organisations/ai/

Technical Resources

Model Cards paper: Mitchell et al., “Model Cards for Model Reporting” (2019)

Datasheets paper: Gebru et al., “Datasheets for Datasets” (2021)

Fairness ML resources: https://fairmlbook.org/


12 Final Words: A Note to the AI Governance Professional

The field you are entering or advancing in did not exist in its current form just a few years ago. AI governance has emerged rapidly in response to AI capabilities that have advanced faster than our institutions could adapt. You are joining a profession that is still defining itself, working on problems that do not have settled answers, in a landscape that continues to shift.

This is both challenging and exciting. It is challenging because there is no established playbook to follow. Reasonable people disagree about how AI should be governed. Regulations are being written, interpreted, and revised. Best practices are emerging through trial and error. You will often face questions where the right answer is not clear, and you will need to exercise judgment under uncertainty.

It is exciting because you have the opportunity to shape a field that will significantly influence how AI affects humanity. The governance frameworks being built now will influence whether AI’s benefits are broadly shared, whether its risks are adequately managed, whether vulnerable people are protected, and whether human values are reflected in increasingly powerful systems. The work you do matters.

Several principles may serve you well as you navigate this uncertain landscape.

Stay grounded in reality. AI governance deals with real systems affecting real people. Keep your focus on actual impacts rather than getting lost in theoretical concerns or hype cycles. The practical question is always: how does this AI system affect the people it touches, and how can we make those effects better?

Embrace cross-disciplinary collaboration. No single perspective has all the answers. The legal view, the technical view, the ethical view, the business view, the user view, the affected community view: all have something to contribute. Your job is often to synthesize these perspectives rather than to provide the answer yourself.

Balance caution with enabling progress. AI governance is not about stopping AI; it is about enabling beneficial AI while managing risks. Governance that only prevents is incomplete. Effective governance helps organizations move forward confidently, knowing that appropriate safeguards are in place.

Keep learning. The technology, the regulation, and the practice are all evolving rapidly. What you know today will be incomplete tomorrow. Build habits of continuous learning that will keep you current throughout your career.

Act with integrity. You will sometimes face pressure to approve things that should not be approved, to minimize risks that should be taken seriously, or to tell stakeholders what they want to hear rather than what they need to hear. Your value as a governance professional depends on your integrity. Protect it.

Remember why governance matters. When governance feels like bureaucracy, remember that it exists because AI systems can harm real people and institutions. Every assessment you conduct, every review you participate in, every concern you raise has the potential to prevent harm or enable benefit that would not otherwise have occurred. That is meaningful work.

This book has aimed to provide you with knowledge and frameworks for AI governance, but knowledge is not enough. The field needs professionals who can apply that knowledge wisely, who can navigate ambiguity with judgment, who can collaborate across disciplines, and who can stand for what is right even when it is difficult.

Welcome to the profession. The work matters, and it needs you.


13 Changelog

Version 5.0 (January 2025)

Comprehensive revision incorporating:

- AIGP Body of Knowledge v2.0.1 (effective February 2025)
- EU AI Act implementation timelines and detailed requirements
- Expanded international coverage (China, Singapore, Japan, Canada)
- New chapters on Governance by Design and Ongoing Issues
- Enhanced multi-perspective approach with integrated worked example
- Revised chapter structure with prose-first presentation
- Updated references and resources
- Practice scenarios for exam preparation and team training


End of Document