Agentic AI Needs Embedded Governance For Human-out-of-the-Loop

Sunil Soares, Founder & CEO, YDC | October 24, 2024

Agentic AI Governance will increasingly involve natural language instructions and tool invocation to allow AI agents to operate seamlessly while supporting safe and responsible AI.

Introduction
An AI agent is a computer program with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations.

These agents represent a leap from traditional automation, as they are not just designed to follow a set of instructions but to think, adapt, and act independently. For example, AI agents streamline supply chain operations by predicting delays, optimizing delivery routes, and managing inventory more efficiently.

Agentic AI Governance Challenges
AI agents have the ability to significantly reduce the need for humans and to profoundly impact human-in-the-loop approaches. How do you effectively implement human accountability for AI when agents effectively remove humans from the loop? This results in several issues relating to regulatory compliance, tort law, fairness, intellectual property rights, transparency, and abuse.

Embedded Governance with Instructions and Tools to Support Human-out-of-the-Loop
Several vendors have introduced AI agent platforms, including Salesforce Agentforce, Google Vertex AI Agent Builder, the OpenAI Assistants API, and Anthropic's computer use capability. As we show in this blog, a number of these platforms support embedded Agentic AI governance via natural language instructions or the invocation of tools.

YDC Mapping of AI Agent Vendors to AI Governance Controls Framework
YDC has already mapped over 100 AI Governance Controls to more than 90 vendors, the EU AI Act, and industry frameworks such as the OWASP Top 10 for LLM Applications and MITRE ATLAS. We have now extended this framework by mapping three AI Agent platforms (crewAI, Google Vertex AI Agent Builder & Salesforce Agentforce) to our controls framework. We will add more in the future. You can view the mappings here.

 

Here is a list of challenges and approaches mapped to the 13 AI Governance Components from the YDC Framework and my latest AI Governance book:

  1. Accountability for AI
    Organizations need to establish accountability for all types of AI, including agentic AI. Article 3 of the EU AI Act deals with the definition of “artificial intelligence,” which is quite broad and will certainly encompass AI agents.

    AI agents have the following characteristics:

    • Planning and sequencing actions to achieve goals
    • Collaboration with other agents using assigned roles
    • Usage of tools such as search, code execution, and computational capabilities
    • Perception, including processing information from the environment such as visual, auditory, and sensory cues
    • Goal-orientation
    • Memory of past interactions and behaviors
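    The characteristics above can be sketched as a toy agent loop. All class, tool, and method names here are illustrative, not taken from any vendor SDK:

```python
# Minimal, illustrative agent loop: plan -> act with tools -> remember.
# Real platforms (crewAI, Agentforce, Vertex AI Agent Builder) expose far
# richer abstractions; this only shows how the characteristics fit together.

def search_tool(query: str) -> str:
    """Stand-in for a real search tool."""
    return f"results for '{query}'"

class ToyAgent:
    def __init__(self, goal: str):
        self.goal = goal                       # goal-orientation
        self.tools = {"search": search_tool}   # tool usage
        self.memory = []                       # memory of past interactions

    def plan(self) -> list:
        # Planning: decompose the goal into a (here, trivial) action sequence.
        return [("search", self.goal)]

    def run(self) -> list:
        results = []
        for tool_name, arg in self.plan():
            output = self.tools[tool_name](arg)
            self.memory.append((tool_name, arg, output))
            results.append(output)
        return results

agent = ToyAgent("track shipment delays")
print(agent.run())
```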

  2. Regulatory & Contractual Risks
    Tort law and contractual liability issues will continue to evolve in the context of AI agents. For example, Tesla currently does not assume liability for vehicles with Full Self-Driving (FSD) capabilities switched on. On the other hand, Waymo, Alphabet’s driverless car unit with vehicles transporting passengers around select cities without anyone sitting behind the wheel, is responsible for the liability in a crash. German automaker Mercedes-Benz, too, has said it is responsible for its limited-autonomous vehicles, owned by customers, when those vehicles are driving themselves.

  3. Use Cases
    Most regulations, including the EU AI Act, require an inventory of AI use cases as a starting point. This inventory needs to encompass Agentic AI use cases as well. Agentic AI use cases are emerging across banking, insurance, manufacturing, telecommunications, healthcare, non-profits, government, technology, and other industries.

  4. Data Governance
    Great AI agents depend on high-quality data. For example, Google Vertex AI offers a suite of APIs to allow developers to build their own retrieval-augmented generation (RAG) apps using Google’s search capabilities.
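    As a rough illustration of the retrieval step in a RAG app, here is a toy retriever that selects context by word overlap. Real systems such as Vertex AI's RAG APIs use semantic embeddings, and all names below are hypothetical:

```python
import re

# Toy retrieval-augmented generation (RAG) step: pick the document with the
# most word overlap with the question, then prepend it to the prompt.

DOCS = [
    "Refund requests must be filed within 30 days of purchase.",
    "Shipping delays are common during holiday season peaks.",
]

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    q = tokens(question)
    return max(DOCS, key=lambda d: len(q & tokens(d)))

def build_prompt(question: str) -> str:
    # Ground the model in retrieved context rather than parametric memory.
    return f"Context: {retrieve(question)}\nQuestion: {question}"

print(build_prompt("How long do I have to file a refund?"))
```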

  5. Fairness
    AI agents, like any service that confers a benefit on a user for a price, have the potential to disproportionately benefit economically wealthier individuals who can afford to purchase access. For example, users with access to AI agents will be more likely to schedule meetings with other users possessing similar capabilities.

  6. Reliability
    AI hallucinations may impact the willingness of users to adopt models.

    Canada’s Civil Resolution Tribunal held that Air Canada must refund a passenger who purchased tickets to attend his grandmother’s funeral. The airline’s support chatbot falsely told the passenger that, if he paid full price, he could later file a claim under the airline’s bereavement policy to receive a discount.

    AI agents will increasingly have built-in functionality to improve reliability. For example, Salesforce Agentforce can include an instruction that keeps a customer support agent from answering off-topic prompts.

    Instruction: “…NEVER answer general knowledge questions.”
    User Prompt: “What is the capital of India?”
    Response: “I am here to help you with questions regarding customer support. How can I assist you with that?”
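    The guarded behavior above can be mimicked with a simple sketch. The keyword check below is a stand-in, on the assumption that a real platform would use LLM-based topic classification rather than word matching:

```python
# Sketch of an instruction-style topic guard: off-topic prompts get a
# standard redirect instead of a general-knowledge answer.

SUPPORT_KEYWORDS = {"order", "refund", "billing", "account", "shipping"}

REDIRECT = ("I am here to help you with questions regarding customer "
            "support. How can I assist you with that?")

def guarded_response(prompt: str) -> str:
    words = set(prompt.lower().replace("?", "").split())
    if words & SUPPORT_KEYWORDS:
        return f"Let me look into that: {prompt}"
    return REDIRECT  # "NEVER answer general knowledge questions"

print(guarded_response("What is the capital of India?"))
```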

  7. Transparency & Explainability
    AI agents amplify legal issues relating to the protection of intellectual property rights. A number of jurisdictions such as California and Tennessee have passed laws to protect the voice and image rights of artists from AI misuse.

    Tools like SynthID from Google DeepMind and C2PA’s Content Credentials provide watermarking to detect AI-generated images, video, and text.

  8. Human-in-the-Loop
    Article 14 of the EU AI Act addresses accountability using human-in-the-loop (HITL) approaches. HITL presents unique challenges for AI agents, which, by their very nature, reduce the need for humans. AI agents address HITL requirements via in-line natural language instructions.

    For example, an Event Management crew (a multi-agent application) in crewAI sets the human_input parameter to True:
    logistics_task = Task(…
        human_input=True
    )

    In similar fashion, Salesforce Agentforce may have a customer support agent with the following instruction:
    “Do not let the customer change the billing details without talking to a customer service representative.”
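    The instruction above can be approximated in code as an approval gate for sensitive actions. The action names and function below are illustrative, not Agentforce APIs:

```python
# Sketch of a human-in-the-loop gate: sensitive actions (here, changing
# billing details) are escalated to a human representative instead of being
# executed autonomously by the agent.

SENSITIVE_ACTIONS = {"change_billing_details"}

def execute(action: str, approved_by_human: bool = False) -> str:
    if action in SENSITIVE_ACTIONS and not approved_by_human:
        return "escalated: please talk to a customer service representative"
    return f"done: {action}"

print(execute("change_billing_details"))
print(execute("change_billing_details", approved_by_human=True))
```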

  9. Privacy
    AI agents such as personal assistants may generate treasure troves of personal information, such as a user’s personal calendar and email correspondence. This creates risks of oversharing that impact user privacy.

    To address privacy issues, Google Vertex AI Agent Builder may implement an order agent with the following natural language instruction:
    “…Do not collect the customer’s name or address.”
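    One way to picture this instruction in code is a field filter that drops data the agent has been told not to collect. The field names are illustrative:

```python
# Sketch of the privacy instruction as a data filter: remove fields the
# agent is instructed not to collect before they are stored or processed.

DO_NOT_COLLECT = {"name", "address"}

def scrub(order_form: dict) -> dict:
    return {k: v for k, v in order_form.items() if k not in DO_NOT_COLLECT}

form = {"item": "laptop", "name": "Ada", "address": "1 Main St"}
print(scrub(form))  # {'item': 'laptop'}
```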

  10. Security
    Article 15 of the EU AI Act addresses requirements relating to “accuracy, robustness, and cybersecurity.” AI agents may enable security compromises at scale.

    Agentic AI solutions now address security concerns natively. For example, the Einstein Trust Layer within Salesforce Agentforce can mask Social Security Numbers, if required.
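    A minimal sketch of this kind of masking, assuming a simple regex over US-formatted SSNs (a real trust layer uses much broader PII detection):

```python
import re

# Replace anything shaped like a US Social Security Number before the text
# reaches the model or the end user.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssn(text: str) -> str:
    return SSN_PATTERN.sub("XXX-XX-XXXX", text)

print(mask_ssn("My SSN is 123-45-6789."))  # My SSN is XXX-XX-XXXX.
```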

    Agentic AI solutions may also use tools to supplement security functionality. For example, the YDC team implemented the Guardrails AI ProfanityFree validator as a tool within crewAI to block any mention of the term “idiot.”
    topic = "idiot"
    response: "Validation failed…"

  11. AI Agent Lifecycle
    Agentic AI requires a unique approach to AI Lifecycle Management with the following steps:
    • Define Agent Architecture including Sub-Agents & Tools
    • Create Goals & Instructions
    • Simulate & Debug
    • Provide Examples
    • Deploy
    • Log and Monitor 

  12. Manage Risk
    This component overlaps with AI Privacy and Security. Agentic AI solutions provide native functionality to reduce risk. For example, Google Vertex AI Agent Builder provides settings for prompt security, safety filters, cost harvesting, and banned phrases.

    By setting token limits, Google Vertex AI Agent Builder prevents responses that would generate high token volumes and drive up costs.
    Prompt: “5,000-word essay on American Football”
    Response: “I’m sorry, I can’t generate essays”
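    A token-limit guard along these lines might look like the following sketch; the limit value and the tokens-per-word heuristic are assumptions, not Vertex AI defaults:

```python
# Sketch of a token-limit guard: refuse requests whose estimated response
# would exceed a configured output budget.

MAX_OUTPUT_TOKENS = 256

def estimate_tokens(requested_words: int) -> int:
    # Rough heuristic: ~1.3 tokens per English word.
    return int(requested_words * 1.3)

def check_request(requested_words: int) -> str:
    if estimate_tokens(requested_words) > MAX_OUTPUT_TOKENS:
        return "I'm sorry, I can't generate essays that long."
    return "ok"

print(check_request(5000))
print(check_request(100))
```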

  13. Realize AI Value
    The final step is to realize value from AI agents. For example, Wiley resolved 40 percent more customer support cases during its seasonal surge using Salesforce Agentforce.

 

As noted, the basic requirements for AI Governance also apply to Agentic AI Governance. However, the mechanisms for Agentic AI Governance will likely be highly automated to reduce the need for humans (human-out-of-the-loop) while adhering to requirements for safe and responsible AI.

Fairness & Accessibility
Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control ID: 5.1

Ensure that AI systems are fair and manage harmful bias.

Component: Address Fairness and Accessibility
Regulation: EU AI Act – Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors
Detect Data Poisoning Attacks
Control ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems.

Component: 10. Improve Security
Control: 10.4 Avoid Data and Model Poisoning Attacks
Regulation: EU AI Act – Article 15 – Accuracy, Robustness and Cybersecurity

Vendors

Improve Security
Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor
Control ID: 1.1

Appoint an executive who will be accountable for the overall success of the program.

Component: 1. Establish Accountability for AI
Regulation: EU AI Act

Vendors