AI Governance Components

1.0 AI Accountability
  1.1 Executive Sponsor
  1.2 AI Strategy
  1.3 AI Governance Leader
  1.4 AI Oversight Board
  1.5 Definition of 'AI'
  1.6 AI Policy
2.0 Assess Regulatory Risks
  2.1 AI-Specific Regulations
  2.2 Data Privacy
  2.3 Intellectual Property
  2.4 Competition Law
  2.5 Value Realization
  2.6 Industry & Domain
3.0 Gather Use Cases
  3.1 Use Cases
  3.2 Business Cases
  3.3 AI Spend
4.0 Value of Data
  4.1 Value Data
  4.2 Data Rights
  4.3 Most Valuable Data
  4.4 Data Governance & Quality
  4.5 Classify Data & Access
5.0 Fairness & Accessibility
  5.1 Bias
  5.2 Accessibility
6.0 Reliability & Safety
  6.1 Model Quality
    6.1.1 Hallucinations
    6.1.2 Code Vulnerabilities
    6.1.3 Code Interpreter Abuse
  6.2 Malign Agent Influence
    6.2.1 Rational Persuasion
    6.2.2 Manipulation
    6.2.3 Deception
    6.2.4 Coercion
    6.2.5 Exploitation
  6.3 Red Teams
7.0 Transparency & Explainability
  7.1 Transparency
  7.2 Explainability
    7.2.1 Non-Causality
    7.2.2 Causality
  7.3 Intellectual Property Rights
  7.4 Third-Party Indemnifications
8.0 Human-in-the-Loop
  8.1 AI Stewards
  8.2 Regulatory Risk
  8.3 AI Agents
9.0 Privacy & Retention
  9.1 Data Anonymization
  9.2 Special Bias Categories
  9.3 Synthetic Data
  9.4 Data Retention
  9.5 Data Sovereignty
10.0 Security
  10.1 Direct Prompt Injection
  10.2 Indirect Prompt Injection
  10.3 Availability Poisoning
    10.3.1 Increased Computation
    10.3.2 Denial of Service
    10.3.3 Energy-Latency
    10.3.4 Data Spamming
    10.3.5 Model Integrity
    10.3.6 Cost Harvesting
  10.4 Data and Model Poisoning
    10.4.1 Data Poisoning
    10.4.2 Targeted Poisoning
    10.4.3 Backdoor Poisoning
    10.4.4 Model Poisoning
  10.5 Data and Model Privacy
    10.5.1 Data Reconstruction
    10.5.2 Membership Inference
    10.5.3 Data Extraction
    10.5.4 Model Extraction
    10.5.5 Property Inference
    10.5.6 Prompt Extraction
  10.6 Abuse Violations
    10.6.1 Violent Crimes
    10.6.2 Non-Violent Crimes
    10.6.3 Sex Crimes
    10.6.4 Child Exploitation
    10.6.5 Defamation
    10.6.6 Specialized Advice
    10.6.7 Privacy
    10.6.8 Intellectual Property
    10.6.9 Indiscriminate Weapons
    10.6.10 Hate
    10.6.11 Self-Harm
    10.6.12 Sexual Content
    10.6.13 Election Interference
    10.6.14 Harassment
    10.6.15 Competitor Checks
  10.7 Evasion Attacks
    10.7.1 White-Box Evasion
    10.7.2 Black-Box Evasion
    10.7.3 Transferability of Attacks
  10.8 AI Agent Misuse
    10.8.1 Spear-Phishing
    10.8.2 Software Vulnerabilities
    10.8.3 Malicious Code
    10.8.4 Harmful Content at Scale
    10.8.5 Non-Consensual Content
    10.8.6 Fraudulent Services
    10.8.7 Delegating Decisions
  10.9 Insecure Plugins
  10.10 Credential Access
  10.11 Unsafe Artifacts
  10.12 Scripting Interpreters
11.0 Model Lifecycle
  11.1 AI Lifecycle
  11.2 AI Inventory
  11.3 Pre-Release Controls
  11.4 Logs
12.0 Risk
  12.1 Impact Assessments
  12.2 Third-Party Risk
  12.3 Risk Ratings
  12.4 AI Control Tower
  12.5 AI Risk Taxonomy
  12.6 PRCI Inventory
  12.7 Framework Mapping
  12.8 Quality Management
  12.9 Conformity Assessment
  12.10 Registration
  12.11 ESG Risk
13.0 AI Value Realization
  13.1 Use Case Prioritization
  13.2 Pilot Use Cases
  13.3 Scale Pilots
  13.4 AI Center of Excellence (CoE)
  13.5 Business Benefits
  13.6 AI Literacy
  13.7 Post-Market Monitoring
  13.8 Serious Incidents