
Agentic AI Governance with Collibra: Example with LinkedIn Hiring Assistant

Sunil Soares, Founder & CEO, YDC
November 8, 2024

I’ve been writing about Agentic AI Governance recently.

LinkedIn just announced their Hiring Assistant for LinkedIn Recruiter & Jobs to help recruiters spend time on their most impactful work. See my recent blog on an AI Risk Assessment for LinkedIn Hiring Assistant.

In this blog, I will discuss how the YDC team leveraged the Collibra AI Governance module to implement Agentic AI Governance for LinkedIn Hiring Assistant. We added several custom questions and two custom tabs to Collibra AI Governance.

Catalog of AI Use Cases
If a recruiter at any company, large or small, uses LinkedIn Hiring Assistant, that usage needs to be cataloged as an AI use case. We cataloged LinkedIn Recruiter & Hiring Assistant as an AI use case in Collibra AI Governance.
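As a rough illustration, registering such a use case programmatically could start with building an asset payload like the one below. The field names, domain ID, and type ID are assumptions for this sketch, not Collibra's documented schema; consult the Collibra REST API reference for the actual asset model.

```python
def build_use_case_payload(name: str, domain_id: str, type_id: str) -> dict:
    """Build an asset-creation payload for an AI use case.

    Field names here are hypothetical; check the governance platform's
    API reference for its real asset schema.
    """
    return {
        "name": name,
        "displayName": name,
        "domainId": domain_id,  # the domain that holds AI use cases
        "typeId": type_id,      # the "AI Use Case" asset type
    }

payload = build_use_case_payload(
    "LinkedIn Recruiter & Hiring Assistant",
    domain_id="<ai-use-case-domain-id>",    # placeholder, not a real ID
    type_id="<ai-use-case-asset-type-id>",  # placeholder, not a real ID
)
# The payload would then be POSTed to the platform's asset-creation endpoint.
```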

Individual Risk Rating Dimensions
We added custom questions to address specific risk rating dimensions (Bias, Reliability, Explainability, Accountability, Privacy, and Security). We mapped each question to specific articles within the EU AI Act as well as other regulations such as the GDPR (privacy) and New York City Local Law 144 (bias).
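Conceptually, each custom question ties a risk rating dimension to one or more regulatory citations. A minimal sketch of that mapping follows; the specific article numbers shown are illustrative examples, not the exact mapping configured in Collibra.

```python
# Illustrative mapping of risk rating dimensions to regulatory citations.
# Article numbers are examples for this sketch, not the exact mapping
# used in the Collibra configuration.
RISK_DIMENSION_CITATIONS = {
    "Bias":           ["EU AI Act Art. 10", "NYC Local Law 144"],
    "Reliability":    ["EU AI Act Art. 15"],
    "Explainability": ["EU AI Act Art. 13"],
    "Accountability": ["EU AI Act Art. 14"],
    "Privacy":        ["EU AI Act Art. 10", "GDPR"],
    "Security":       ["EU AI Act Art. 15"],
}

def citations_for(dimension: str) -> list:
    """Return the regulations a custom question on this dimension maps to."""
    return RISK_DIMENSION_CITATIONS.get(dimension, [])
```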

High-Risk Classification
LinkedIn Hiring Assistant should be classified as a high-risk AI system under Article 6 of the EU AI Act.
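The reasoning behind that classification can be sketched as a simple screening rule: Article 6(2) points to the Annex III use-case areas, which include employment and worker management (recruitment, candidate screening, and ranking). Hiring Assistant operates squarely in that area. The category names below are paraphrased from Annex III, not the authoritative legal text.

```python
# Paraphrased Annex III high-risk areas under the EU AI Act (a sketch,
# not exhaustive legal text; consult the Act for the authoritative list).
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education",
    "employment",  # recruitment, candidate screening, promotion decisions
    "essential services",
    "law enforcement",
    "migration and border control",
    "justice and democratic processes",
}

def article_6_screen(area: str) -> str:
    """Rough Article 6(2) screen: Annex III areas are presumptively high-risk."""
    return "high-risk" if area in ANNEX_III_AREAS else "requires further assessment"

# Hiring Assistant screens and ranks job candidates -> employment area.
article_6_screen("employment")  # -> "high-risk"
```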

Potential Mitigants
We added a question to address potential risk mitigation measures, including: 1) changing the settings in LinkedIn to NOT allow LinkedIn to use user data to train its AI models, and 2) conducting testing and red teaming.

AI Agents present unique AI Governance challenges at scale. As we demonstrate in this blog, Collibra AI Governance provides a great solution to address Agentic AI Governance.

Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Prevent Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors
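A nested control list like the one above is straightforward to hold as a parent-to-children mapping, which makes it easy to count controls or track implementation status per control. A minimal sketch using a subset of the controls listed (the mapping shape is an illustration, not Collibra's data model):

```python
# Subset of the "Improve Security" control hierarchy above, held as a
# parent -> sub-controls mapping (a sketch, not Collibra's data model).
SECURITY_CONTROLS = {
    "Prevent Direct Prompt Injection Including Jailbreak": [],
    "Avoid Availability Poisoning": [
        "Manage Increased Computation Attack",
        "Detect Denial of Service (DoS) Attacks",
        "Prevent Energy-Latency Attacks",
    ],
    "Support Data and Model Privacy": [
        "Prevent Data Reconstruction Attacks",
        "Prevent Membership Inference Attacks",
    ],
}

def total_controls(catalog: dict) -> int:
    """Count parent controls plus all sub-controls."""
    return len(catalog) + sum(len(subs) for subs in catalog.values())

total_controls(SECURITY_CONTROLS)  # 3 parents + 5 sub-controls = 8
```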

Identify Executive Sponsor

ID: 1.1

Appoint an executive who will be accountable for the overall success of the program.

Component | Regulation | Vendors
1. Establish Accountability for AI | EU AI Act |