• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF
Vertical Line
  • Agentic AI Governance
  • 19 case studies
  • 11 Agentic AI platforms
  • Companion to AI Governance Comprehensive
Marquee Hyperlink Example
YDC_AIGOV Shadow AI Governance Agents to highlight risks associated with AI embedded within vendor applications

Webinar Recap – Scale AI with Confidence: Agentic AI Governance with Collibra

Sunil Soares, Founder & CEO, YDC December 16, 2024

Last week, Ashley Blake, Principal Product Manager, AI Governance at Collibra and I hosted a webinar on Agentic AI Governance

The webinar was very well attended and we covered the following topics:

  • Collibra AI Governance Module
    Ashley presented Collibra AI Governance that integrates seamlessly with the rest of the Collibra platform.

  • Regulatory Imperatives
    AI regulations are multiplying across jurisdictions. Article 5 of the EU AI Act on Prohibited AI Practices goes into effect in February 2025. The U.S. is adopting a sectoral approach to AI regulations including longstanding legislation like the Civil Right Act in addition to state regulations that have already gone into effect.

  • Human-out-of-the-Loop
    Challenges with Agentic AI Governance when agents significantly reduce (or remove) human involvement.

  • Collibra’s Newly-Released OOTB EU AI Act Assessment
    I showcased some recent YDC work on an AI Risk Assessment for LinkedIn Hiring Assistant using the Collibra AI Governance Module. We used the OOTB EU AI Act Assessment from Collibra to assess the use case as High-Risk with associated Deployer obligations.

  • Stakeholders and Committees
    We discussed the use of committees such as the AI Oversight Board and AI Center of Excellence. Ashley highlighted the importance of engaging Procurement and Third-Party Risk Management since the vast majority of AI use cases involve vendor technologies.

  • Conditional Collibra Workflows Based on AI Score
    I presented the use of Collibra workflows where the path is conditioned on the risk rating of the AI use case (e.g., High Risk AI Use Cases are routed to the Operational Risk Management Committee).

  • Fundamental Rights Impact Assessment
    I showcased how the security testing results including prompts and OWASP Top 10 LLM/MITRE ATLAS mappings can be cataloged in the AI use case in Collibra AI Governance.

I will promote the webinar recording when it becomes available. In the meantime, enjoy the holidays and see everyone in the New Year!

Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control
ID: 5.1

Ensure that AI systems are fair and manage harmful bias.
Component
Sub-Control
Regulation
 
Source
Address Fairness and Accessibility EU AI Act -Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors

Detect Data Poisoning Attacks
Control

ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems.

Component
Control
Regulation
Source
10. Improve Security10.4 Avoid Data and Model Poisoning AttacksEU AI Act: Article 15 – Accuracy, Robustness and Cybersecurity 

Vendors

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

ID : 1.1 

Appoint an executive who will be accountable for the overall success of the program.

ComponentRegulationVendors
1. Establish Accountability for AIEU AI Act 
We use cookies to ensure we give you the best experience on our website. If you continue to use this site, we will assume you consent to our privacy policy.