I’ve been writing about Agentic AI Governance recently.
LinkedIn just announced their Hiring Assistant for LinkedIn Recruiter & Jobs to help recruiters spend time on their most impactful work. A few thoughts on how this announcement relates to a risk assessment for Agentic AI Governance:
- Catalog of AI Use Cases
If a recruiter at any company, large or small, uses LinkedIn Hiring Assistant, that usage needs to be cataloged as an AI use case.
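For illustration, here is a minimal sketch of what such a catalog entry might look like, assuming a simple Python schema of my own invention (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in an organization's AI use-case inventory (illustrative schema)."""
    name: str
    vendor: str
    business_function: str
    risk_tier: str                      # e.g. "high" after EU AI Act screening
    data_categories: list[str] = field(default_factory=list)
    human_oversight: str = ""

# Hypothetical catalog entry for a recruiter adopting Hiring Assistant
entry = AIUseCase(
    name="LinkedIn Hiring Assistant",
    vendor="LinkedIn",
    business_function="Talent acquisition / candidate sourcing",
    risk_tier="high",
    data_categories=["candidate profiles", "job applications"],
    human_oversight="Recruiter reviews all agent recommendations",
)
print(entry)
```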
- High-Risk Classification
LinkedIn Hiring Assistant would likely be classified as a high-risk AI system under Article 6 of the EU AI Act, whose Annex III covers employment systems intended “… in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates.” This means that Hiring Assistant would need to comply with the EU AI Act’s requirements for high-risk AI systems.
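As a thought experiment, a compliance team could screen new use cases against the Annex III areas with a simple keyword rule, as sketched below; the category list is abbreviated and purely illustrative, not the legal text:

```python
# Illustrative screening of a use-case description against (abbreviated)
# EU AI Act Annex III high-risk areas. Not legal advice.
ANNEX_III_KEYWORDS = {
    "employment": ["recruitment", "targeted job advertisement",
                   "filter job applications", "evaluate candidates"],
    "education": ["admission", "assessing students"],
    # ...other Annex III areas omitted for brevity
}

def screen_high_risk(description: str) -> list[str]:
    """Return the Annex III areas whose keywords appear in the description."""
    desc = description.lower()
    return [area for area, kws in ANNEX_III_KEYWORDS.items()
            if any(kw in desc for kw in kws)]

print(screen_high_risk("Agent that will filter job applications and evaluate candidates"))
# -> ['employment']: flag the use case for full high-risk compliance review
```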
- Transparency & Explainability
Article 86 of the EU AI Act deals with the “right to explanation of individual decision-making.” Companies might need to explain why certain candidates were selected and others were not.
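One way to prepare for such requests is to capture, at decision time, the main factors behind each recommendation. A hypothetical sketch of such a record (this is not LinkedIn’s actual mechanism):

```python
import json
from datetime import datetime, timezone

def explanation_record(candidate_id: str, outcome: str, factors: list[str]) -> str:
    """Store a human-readable rationale that can answer an Article 86-style request."""
    return json.dumps({
        "candidate_id": candidate_id,
        "outcome": outcome,
        "main_factors": factors,   # the drivers a recruiter could cite later
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

print(explanation_record(
    candidate_id="cand-0042",
    outcome="shortlisted",
    factors=["8 years of relevant experience", "required certification present"],
))
```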
- Bias
Several regulations address bias in hiring practices, including Article 10 of the EU AI Act, New York City Local Law 144, and U.S. statutes such as Title VII of the Civil Rights Act. Automated hiring assistants may unintentionally discriminate against applicants based on protected characteristics such as race, gender, sexual orientation, and national origin.
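NYC Local Law 144, for instance, requires annual bias audits of automated employment decision tools that report selection rates and impact ratios by demographic category. A minimal sketch of that calculation, with made-up numbers:

```python
def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Impact ratio per group: the group's selection rate divided by the
    highest group's selection rate (the metric NYC LL 144 audits report)."""
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items()}

# Hypothetical audit data: candidates advanced by an automated screen
selected = {"group_a": 120, "group_b": 75}
screened = {"group_a": 300, "group_b": 250}
print(impact_ratios(selected, screened))
# -> {'group_a': 1.0, 'group_b': 0.75}; ratios far below 1.0 warrant scrutiny
```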
- Accountability
Article 14 of the EU AI Act addresses human oversight. Presumably, the recruiter would act as the human-in-the-loop (HITL) to review the results of the Hiring Assistant.
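In practice, human oversight can be enforced as a gate in the agent’s workflow: nothing the assistant proposes takes effect until the recruiter approves it. A hypothetical sketch:

```python
from typing import Callable

def hitl_gate(proposals: list[str], approve: Callable[[str], bool]) -> list[str]:
    """Only agent proposals explicitly approved by the human reviewer proceed."""
    return [p for p in proposals if approve(p)]

# Hypothetical recruiter review: approve all proposals except one
proposals = ["advance cand-0042", "advance cand-0077", "reject cand-0103"]
actions = hitl_gate(proposals, approve=lambda p: p != "reject cand-0103")
print(actions)  # -> ['advance cand-0042', 'advance cand-0077']
```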
- Privacy
LinkedIn’s Privacy Policy explicitly states that they “may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”
LinkedIn addresses these risks with the following statement in their product announcement: “As we developed Hiring Assistant, we conducted rigorous evaluations to identify potential gaps and risks, such as hallucinations and low-quality content. Actions are audited, and reported in the same manner as human users. This ensures that activities maintain the same level of transparency and accountability.”
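Taken at face value, “audited, and reported in the same manner as human users” implies that agent actions and human actions share one audit schema. A sketch of that idea, with field names that are purely my assumption:

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, actor_type: str, action: str, target: str) -> str:
    """Emit one audit record; agent and human actions use the same schema."""
    return json.dumps({
        "actor": actor,
        "actor_type": actor_type,  # "human" or "agent"
        "action": action,
        "target": target,
        "at": datetime.now(timezone.utc).isoformat(),
    })

# Same trail whether the recruiter or the assistant performed the step
print(audit_event("recruiter_jane", "human", "message_candidate", "cand-0042"))
print(audit_event("hiring_assistant", "agent", "message_candidate", "cand-0077"))
```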
Obviously, companies need to be vigilant about these concerns as they deploy LinkedIn Hiring Assistant.
This is my quick take on the applicability of Agentic AI Governance to LinkedIn Hiring Assistant. More to come on this exciting topic.