• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF

Sample AI Risk Assessment for LinkedIn Hiring Assistant

Sunil Soares, Founder & CEO, YDC
November 4, 2024

I’ve been writing about Agentic AI Governance recently.

LinkedIn just announced their Hiring Assistant for LinkedIn Recruiter & Jobs to help recruiters spend time on their most impactful work. A few thoughts on how this announcement relates to a risk assessment for Agentic AI Governance:

  • Catalog of AI Use Cases
    If a recruiter at any company, large or small, is using LinkedIn Hiring Assistant, then it needs to be cataloged as an AI use case.

  • High-Risk Classification
    LinkedIn Hiring Assistant would likely be classified as a high-risk AI system under Article 6 of the EU AI Act, which points to the employment use cases listed in Annex III: “… in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates.” This means that Hiring Assistant would need to comply with the requirements for high-risk AI systems under the EU AI Act.

  • Transparency & Explainability
    Article 86 of the EU AI Act deals with the “right to explanation of individual decision-making.” Companies might need to explain why certain candidates were selected and others were not.

  • Bias
    Several regulations, such as Article 10 of the EU AI Act, New York City Local Law 144, and U.S. statutes such as Title VII of the Civil Rights Act, address bias in hiring practices. Automated hiring assistants may unintentionally discriminate against certain categories of applicants based on protected characteristics such as race, gender, sexual orientation, and national origin.

  • Accountability
    Article 14 of the EU AI Act addresses Human Oversight. Presumably, the recruiter would act as the human-in-the-loop (HITL) to review the results of the Hiring Assistant.

  • Privacy
    LinkedIn’s Privacy Policy explicitly states that they “may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”

LinkedIn addresses these risks with the following statement in their product announcement: “As we developed Hiring Assistant, we conducted rigorous evaluations to identify potential gaps and risks, such as hallucinations and low-quality content. Actions are audited, and reported in the same manner as human users. This ensures that activities maintain the same level of transparency and accountability.”

Obviously, companies need to be vigilant about these concerns as they deploy LinkedIn Hiring Assistant.

This is my quick take on the applicability of Agentic AI Governance to LinkedIn Hiring Assistant. More to come on this exciting topic.
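The cataloging and risk-assessment steps above can be sketched as a simple record. The schema below is purely illustrative — the field names, values, and `AIUseCase` class are assumptions for this sketch, not part of any published catalog format:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Illustrative catalog entry for an AI use case (hypothetical schema)."""
    name: str
    vendor: str
    description: str
    eu_ai_act_risk: str            # e.g. "high-risk" per Article 6 / Annex III
    human_in_the_loop: bool        # Article 14 human oversight in place?
    applicable_controls: list = field(default_factory=list)

# Example entry for the use case discussed above
hiring_assistant = AIUseCase(
    name="LinkedIn Hiring Assistant",
    vendor="LinkedIn",
    description="Agentic assistant that sources and evaluates job candidates",
    eu_ai_act_risk="high-risk",
    human_in_the_loop=True,        # the recruiter reviews the assistant's output
    applicable_controls=["Bias", "Transparency & Explainability", "Privacy"],
)

print(hiring_assistant.eu_ai_act_risk)  # high-risk
```

A real catalog would add fields for data sources, deployment geography, and review dates; the point is simply that each deployed use case becomes a record that can be classified and tracked.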

Fairness & Accessibility
Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Prevent Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors
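A control taxonomy like the one above is naturally a nested structure. The sketch below represents a small subset of it as a mapping and flattens it into a checklist; the subset chosen and the `flatten` helper are illustrative assumptions, not an official schema:

```python
# Illustrative subset of the "Improve Security" component's controls;
# parents map to their sub-controls, and childless entries are leaf controls.
security_controls = {
    "Avoid Availability Poisoning": [
        "Manage Increased Computation Attack",
        "Detect Denial of Service (DoS) Attacks",
        "Prevent Energy-Latency Attacks",
    ],
    "Support Data and Model Privacy": [
        "Prevent Data Reconstruction Attacks",
        "Prevent Membership Inference Attacks",
    ],
    "Prevent Direct Prompt Injection Including Jailbreak": [],  # leaf control
}

def flatten(controls: dict) -> list:
    """Return every assessable control, treating childless entries as leaves."""
    leaves = []
    for parent, children in controls.items():
        leaves.extend(children if children else [parent])
    return leaves

print(len(flatten(security_controls)))  # 6
```

Flattening a taxonomy this way is how a risk assessment turns a control hierarchy into a concrete checklist to score against each cataloged use case.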

Identify Executive Sponsor

ID: 1.1

Appoint an executive who will be accountable for the overall success of the program.

Component: 1. Establish Accountability for AI
Regulation: EU AI Act
Vendors: