I’ve been writing about Agentic AI Governance recently.
LinkedIn recently announced its Hiring Assistant for LinkedIn Recruiter & Jobs to help recruiters spend more time on their most impactful work. See my recent blog on an AI Risk Assessment for LinkedIn Hiring Assistant.
In this blog, I discuss how the YDC team leveraged the Collibra AI Governance module to implement Agentic AI Governance for LinkedIn Hiring Assistant. We extended the module with several custom questions and two custom tabs.
Catalog of AI Use Cases
If a recruiter at any company, large or small, is using LinkedIn Hiring Assistant, that usage needs to be cataloged as an AI use case. We cataloged LinkedIn Recruiter & Hiring Assistant as an AI use case in Collibra AI Governance.
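As a rough illustration, the sketch below shows how such a use case could be registered programmatically through Collibra's Core REST API. This is a minimal sketch, not our production workflow: the instance URL, credentials, domain ID, and asset type ID are placeholders you would replace with the UUIDs configured in your own Collibra environment.

```python
import requests

# Minimal sketch: register an AI use case as an asset via Collibra's Core REST API.
# The base URL, credentials, domainId, and typeId below are placeholders -- look up
# the real UUIDs for your "AI Use Case" asset type and target domain in Collibra.
COLLIBRA_URL = "https://your-instance.collibra.com/rest/2.0"
session = requests.Session()
session.auth = ("service-account", "password")  # or use a bearer token

payload = {
    "name": "LinkedIn Recruiter & Hiring Assistant",
    "displayName": "LinkedIn Recruiter & Hiring Assistant",
    "domainId": "00000000-0000-0000-0000-000000000000",  # placeholder domain UUID
    "typeId": "00000000-0000-0000-0000-000000000000",    # placeholder asset type UUID
}

response = session.post(f"{COLLIBRA_URL}/assets", json=payload)
response.raise_for_status()
use_case = response.json()
print(f"Cataloged AI use case '{use_case['name']}' with id {use_case['id']}")
```

In practice the registration is typically done through the Collibra AI Governance intake workflow in the UI; the API call simply illustrates that the catalog entry is an ordinary governed asset.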
Individual Risk Rating Dimensions
We added custom questions to address specific risk rating dimensions: Bias, Reliability, Explainability, Accountability, Privacy, and Security. We mapped each question to specific articles of the EU AI Act as well as to other regulations, such as the GDPR (privacy) and New York City Local Law 144 (bias).
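To give a flavor of that mapping, the snippet below sketches how the custom questions could be represented as a simple data structure linking each risk dimension to its question text and the regulations it traces to. The question wording is illustrative, not the exact text we configured in Collibra.

```python
# Illustrative mapping of risk rating dimensions to custom questions and the
# regulations each question traces to; wording is an example only.
RISK_QUESTIONS = {
    "Bias": {
        "question": "Has the hiring workflow been audited for disparate impact across protected groups?",
        "regulations": ["EU AI Act Art. 10 (data governance)", "NYC Local Law 144"],
    },
    "Reliability": {
        "question": "Has candidate-matching accuracy been validated against a benchmark set?",
        "regulations": ["EU AI Act Art. 15 (accuracy and robustness)"],
    },
    "Explainability": {
        "question": "Can recruiters explain why a candidate was surfaced or ranked?",
        "regulations": ["EU AI Act Art. 13 (transparency)"],
    },
    "Accountability": {
        "question": "Does a human recruiter review every shortlisting decision?",
        "regulations": ["EU AI Act Art. 14 (human oversight)"],
    },
    "Privacy": {
        "question": "Is candidate data used to train LinkedIn's AI models, and has that setting been reviewed?",
        "regulations": ["GDPR Art. 5 and 6", "EU AI Act Art. 10"],
    },
    "Security": {
        "question": "Are access controls and audit logs in place for the Hiring Assistant integration?",
        "regulations": ["EU AI Act Art. 15 (cybersecurity)"],
    },
}

for dimension, entry in RISK_QUESTIONS.items():
    print(f"{dimension}: {entry['question']} -> {', '.join(entry['regulations'])}")
```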
High-Risk Classification
LinkedIn Hiring Assistant should be classified as a high-risk AI system under Article 6 of the EU AI Act, because Annex III explicitly lists AI systems used for the recruitment or selection of natural persons, including filtering applications and evaluating candidates.
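A hedged sketch of that classification logic: Annex III point 4 covers employment and worker-management use cases, so a governance workflow could flag any use case in those categories for the high-risk obligations. The category names below are a simplification of the Annex III text, and real classification still requires legal review.

```python
# Simplified illustration of Article 6 / Annex III high-risk classification.
# Annex III point 4 covers employment: recruitment, selection, filtering
# applications, and evaluating candidates. Category names are simplified.
ANNEX_III_EMPLOYMENT_PURPOSES = {
    "recruitment or selection of candidates",
    "filtering or ranking job applications",
    "evaluating candidates in interviews or tests",
    "promotion or termination decisions",
    "task allocation or monitoring of workers",
}

def is_high_risk(intended_purpose: str) -> bool:
    """Flag a use case as high-risk if its purpose matches an Annex III employment category."""
    return intended_purpose.lower() in ANNEX_III_EMPLOYMENT_PURPOSES

print(is_high_risk("Recruitment or selection of candidates"))  # True -> high-risk AI system
```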
Potential Mitigants
We added a question to capture potential risk mitigation measures, including: 1) changing the settings in LinkedIn to NOT allow LinkedIn to use user data to train its AI models, and 2) conducting testing and red teaming.
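As an illustration only, these mitigants could be tracked as a simple checklist attached to the use case record; the field names and owners below are hypothetical, not actual Collibra AI Governance attributes.

```python
from dataclasses import dataclass, field

# Hypothetical mitigation checklist for the use case record; field names and
# owners are illustrative, not actual Collibra AI Governance attributes.
@dataclass
class Mitigant:
    description: str
    owner: str
    completed: bool = False
    evidence: list[str] = field(default_factory=list)

mitigants = [
    Mitigant(
        description="Disable the LinkedIn setting that allows user data to be used to train LinkedIn's AI models",
        owner="Recruiting Operations",
    ),
    Mitigant(
        description="Conduct testing and red teaming of the Hiring Assistant workflow before broad rollout",
        owner="AI Governance Team",
    ),
]

open_items = [m.description for m in mitigants if not m.completed]
print(f"{len(open_items)} mitigation measure(s) still open")
```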
AI Agents present unique AI Governance challenges at scale. As we demonstrate in this blog, Collibra AI Governance provides a great solution for addressing Agentic AI Governance.