• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF

Agentic AI Governance with Collibra: Using Risk Scoring to Drive Differentiated Approval Workflows

Sunil Soares, Founder & CEO, YDC November 18, 2024

In a previous blog, I discussed Agentic AI Governance with Collibra using an example with LinkedIn Hiring Assistant.

In this blog, I will discuss the use of Collibra’s workflow functionality to implement approvals by different governing bodies based on the risk score of the AI use case.

The AI Governance process was overseen by three groups:

  1. Legal & Compliance – Responsible AI Principles
  2. Operational Risk Management Committee (ORMC) – Approval of High-Risk and Medium-Risk AI Use Cases
  3. AI Governance Center of Excellence (CoE) – Approval of Low-Risk Use Cases



The LinkedIn Hiring Assistant AI Use Case referenced earlier was classified as High-Risk based on Article 6 of the EU AI Act. As a result, this use case needed approval from the ORMC.

The Collibra workflow implemented the following business rules:

  1. Third-Party Risk Management (TPRM) – Approve all AI use cases, which generally involved the use of third-party technology.
  2. ORMC – Approve high-risk AI use cases.
  3. AI Governance Committee – Approve low-risk AI use cases.


Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

ID : 1.1 

Appoint an executive who will be accountable for the overall success of the program.

ComponentRegulationVendors
1. Establish Accountability for AIEU AI Act 
We use cookies to ensure we give you the best experience on our website. If you continue to use this site, we will assume you consent to our privacy policy.