AI Glossary: AI governance




AI contextual governance framework

Governance that adapts controls to context (risk tier, data sensitivity, role, region) instead of one blanket policy.


AI data governance

Policies, roles and processes that keep AI data accurate, secure and lawful (quality, lineage, retention, access).

ALSO:

AI data governance framework


AI governance

The full system of roles, policies, approvals, documentation and monitoring that makes AI safe, compliant and auditable.

ALSO:

artificial intelligence governance


AI governance framework

A blueprint (or set of blueprints) defining responsibilities, processes, reviews and controls across the AI lifecycle.

ALSO:

AI governance frameworks

AI governance framework example

A visual or case study showing how those components connect end-to-end.


AI governance framework components

Typical building blocks: roles/committees, risk tiers, policy library, approvals, testing/evals, monitoring, incident response, audit.

 

AI governance framework diagram

A visual or case study showing how those components connect end-to-end.


AI governance principles

Fairness, transparency, privacy, security, accountability, and human oversight—used to guide policy and reviews.


AI model governance

Controls specific to models: documentation, validation, approvals, change management, bias testing, monitoring, rollback.


Databricks governance

Data/AI governance capabilities on Databricks (e.g., Unity Catalog, permissions, lineage) used to control who can access what.


generative AI governance

Governance tailored to GenAI risks (hallucination, prompt injection, copyright, data leakage) with red-teaming and content safety.

ALSO:

generative AI governance framework


machine learning governance

Controls specific to models: documentation, validation, approvals, change management, bias testing, monitoring, rollback.


Microsoft AI governance framework

Microsoft’s vendor playbook for building and operating AI responsibly—derived from its internal Responsible AI Standard and productized in Azure guidance and tools.

How it works: Principles → policies → lifecycle controls (data use, evaluations, safety filters, incident response). Backed by tooling like Azure AI Content Safety, Responsible AI Dashboard/Evaluations, red-teaming guidance, and policy-as-code patterns.


NIST AI governance framework

A neutral, U.S. standards body framework (voluntary) focused on risk. Defines four core functions: Govern, Map, Measure, Manage, plus profiles and a Playbook.

How it works: Start by mapping context and harms, measuring risks (bias, robustness, privacy, security), then managing them with controls, all under an organizational governance function.


PDPC model AI governance framework

Singapore’s practical playbook (from the Personal Data Protection Commission) for responsible AI deployment—very usable for enterprises, especially where personal data is involved.

How it works: Four pillars: Internal governance, human involvement in AI-assisted decisions, operations management (data, models, testing, monitoring), and stakeholder communication. Supported by checklists and companion tools (e.g., AI Verify).