Agentic AI governance will increasingly rely on natural language instructions and tool invocation, allowing AI agents to operate seamlessly while supporting safe and responsible AI.
Introduction
An AI agent is a computer program with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations.
These agents represent a leap from traditional automation, as they are not just designed to follow a set of instructions but to think, adapt, and act independently. For example, AI agents streamline supply chain operations by predicting delays, optimizing delivery routes, and managing inventory more efficiently.
Agentic AI Governance Challenges
AI agents have the ability to significantly reduce the need for humans and to profoundly impact human-in-the-loop approaches. How do you implement human accountability for AI when agents effectively remove humans from the loop? This raises several issues relating to regulatory compliance, tort law, fairness, intellectual property rights, transparency, and abuse.
Embedded Governance with Instructions and Tools to Support Human-out-of-the-Loop
Several vendors have introduced AI agent platforms including Salesforce Agentforce, Google Vertex AI Agent Builder, OpenAI Assistants API, and Anthropic Computer use. As we show in this blog, a number of these tools support embedded Agentic AI governance via natural language instructions or the invocation of tools.
YDC Mapping of AI Agent Vendors to AI Governance Controls Framework
YDC has already mapped over 100 AI Governance Controls to more than 90 vendors, the EU AI Act, and industry frameworks such as the OWASP Top 10 for LLMs and MITRE ATLAS. We have now extended this framework by mapping three AI agent platforms (crewAI, Google Vertex AI Agent Builder, and Salesforce Agentforce) to our controls framework. We will add more in the future. You can view the mappings here.
Here is a list of challenges and approaches mapped to the 13 AI Governance Components from the YDC Framework and my latest AI Governance book:
- Accountability for AI
Organizations need to establish accountability for all types of AI, including agentic AI. Article 3 of the EU AI Act contains the definition of “artificial intelligence,” which is quite broad and will certainly encompass AI agents. AI agents have the following characteristics:
  - Planning and sequencing actions to achieve goals
  - Collaboration with other agents using assigned roles
  - Usage of tools such as search, code execution, and computational capabilities
  - Perception, including perceiving and processing information from the environment such as visual, auditory, and sensory cues
  - Goal-orientation
  - Memory of past interactions and behaviors
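The characteristics above can be sketched as a minimal agent loop. Everything below (the class name, the hard-coded plan, and the toy tools) is illustrative only, not any vendor's API:

```python
# Minimal sketch of an agent with planning, tool use, and memory.
# All names and the hard-coded plan are illustrative, not a vendor API.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class MiniAgent:
    goal: str
    tools: Dict[str, Callable[[str], str]]           # tool name -> callable
    memory: List[str] = field(default_factory=list)  # past actions and results

    def plan(self) -> List[str]:
        # A real agent would ask an LLM to plan; here the plan is fixed.
        return ["search", "summarize"]

    def run(self) -> List[str]:
        results = []
        for step in self.plan():
            output = self.tools[step](self.goal)
            self.memory.append(f"{step}: {output}")  # remember what happened
            results.append(output)
        return results


# Toy tools standing in for real search / computational capabilities.
tools = {
    "search": lambda goal: f"3 articles found about '{goal}'",
    "summarize": lambda goal: f"summary of findings on '{goal}'",
}

agent = MiniAgent(goal="supply chain delays", tools=tools)
print(agent.run())
```

A production agent would replace the fixed plan with LLM-generated steps, but the loop structure (plan, invoke tools, record to memory) is the same.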
- Regulatory & Contractual Risks
Tort law and contractual liability issues will continue to evolve in the context of AI agents. For example, Tesla currently does not assume liability for vehicles with Full Self-Driving (FSD) capabilities switched on. On the other hand, Waymo, Alphabet’s driverless car unit with vehicles transporting passengers around select cities without anyone sitting behind the wheel, assumes liability in a crash. German automaker Mercedes-Benz, too, has said it is responsible for its limited-autonomous vehicles, owned by customers, when those vehicles are driving themselves.
- Use Cases
Most regulations, including the EU AI Act, require an inventory of AI use cases as a starting point. This inventory needs to encompass agentic AI use cases as well. Agentic AI use cases are popping up across banking, insurance, manufacturing, telecommunications, healthcare, non-profits, government, technology, and other industries.
- Data Governance
Great AI agents depend on high-quality data. For example, Google Vertex AI offers a suite of APIs to allow developers to build their own retrieval-augmented generation (RAG) apps using Google’s search capabilities.
- Fairness
AI agents, like any service that confers a benefit to a user for a price, have the potential to disproportionately benefit economically wealthier individuals who can afford to purchase access. For example, users with access to AI agents will be more likely to schedule meetings with other users possessing similar capabilities.
- Reliability
AI hallucinations may impact the willingness of users to adopt models. Canada’s Civil Resolution Tribunal held that Air Canada must refund a passenger who purchased tickets to attend his grandmother’s funeral. The airline’s support chatbot provided the passenger with false information that, if he paid full price, he could later file a claim under the airline’s bereavement policy to receive a discount.
- Transparency & Explainability
AI agents amplify legal issues relating to the protection of intellectual property rights. A number of jurisdictions such as California and Tennessee have passed laws to protect the voice and image rights of artists from AI misuse. Tools like SynthID from Google DeepMind and C2PA’s Content Credentials provide watermarking to detect AI-generated images, video, and text.
- Human-in-the-Loop
Article 14 of the EU AI Act addresses accountability using human-in-the-loop (HITL) approaches. HITL presents unique challenges for AI agents, which, by their very nature, reduce the need for humans. AI agents address HITL requirements via in-line natural language instructions.
For example, an Event Management crew (a multi-agent application) in crewAI sets the human_input parameter to True:

logistics_task = Task(
    …
    human_input=True
)
- Privacy
AI agents such as personal assistants may generate treasure troves of personal information, such as a user’s personal calendar and email correspondence. This may create risks of oversharing that impacts user privacy.
To address privacy issues, Google Vertex AI Agent Builder may implement an order agent with the following natural language instruction:
“…Do not collect the customer’s name or address.”
- Security
Article 15 of the EU AI Act addresses requirements relating to “accuracy, robustness, and cybersecurity.” AI agents may enable security compromises at scale.
Agentic AI solutions now address security concerns natively. For example, the Einstein Trust Layer within Salesforce Agentforce may mask Social Security Numbers, if required.
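The masking idea can be sketched in a few lines. The regex, function name, and placeholder below are illustrative and say nothing about how the Einstein Trust Layer actually implements masking:

```python
# Illustrative sketch of masking Social Security Numbers before a response
# leaves the agent; not the Einstein Trust Layer's actual implementation.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssn(text: str) -> str:
    """Replace anything that looks like an SSN with a placeholder."""
    return SSN_PATTERN.sub("[SSN MASKED]", text)

print(mask_ssn("Customer 123-45-6789 requested a refund."))
# -> Customer [SSN MASKED] requested a refund.
```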
Agentic AI solutions may use tools to supplement security functionality. For example, the YDC team implemented the Guardrails AI ProfanityFree validator as a tool within crewAI to block any mentions of the term “idiot.”
topic = “idiot”
response: “Validation failed…”
- AI Agent Lifecycle
Agentic AI requires a unique approach to AI Lifecycle Management with the following steps:
- Define Agent Architecture including Sub-Agents & Tools
- Create Goals & Instructions
- Simulate & Debug
- Provide Examples
- Deploy
- Log and Monitor
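The lifecycle steps above can be sketched as an ordered pipeline. The stage names mirror the list and the helper function is illustrative only:

```python
# Illustrative sketch of the agent lifecycle as an ordered pipeline;
# stage names mirror the steps above and are not any vendor's API.

LIFECYCLE_STAGES = [
    "define_architecture",            # sub-agents & tools
    "create_goals_and_instructions",
    "simulate_and_debug",
    "provide_examples",
    "deploy",
    "log_and_monitor",
]

def advance(current: str) -> str:
    """Return the next lifecycle stage, or stay at the last stage."""
    i = LIFECYCLE_STAGES.index(current)
    return LIFECYCLE_STAGES[min(i + 1, len(LIFECYCLE_STAGES) - 1)]

print(advance("deploy"))  # -> log_and_monitor
```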
- Manage Risk
This component overlaps with AI Privacy and Security. Agentic AI solutions provide native functionality to reduce risk. For example, Google Vertex AI Agent Builder provides settings for prompt security, safety filters, cost harvesting, and banned phrases.
By setting token limits, Google Vertex AI Agent Builder prevents responses that generate high-volume tokens that drive up costs.
Prompt: “5,000-word essay on American Football”
Response: “I’m sorry, I can’t generate essays”
- Realize AI Value
The final step is to realize value from AI agents. For example, Wiley resolved 40 percent more customer support cases during its seasonal surge using Salesforce Agentforce.
AI agents will increasingly have built-in functionality to improve reliability. For example, Salesforce Agentforce has an instruction in a customer support agent to respond to an off-topic prompt from the user.
Instruction: “…NEVER answer general knowledge questions.”
User Prompt: “What is the capital of India?”
Response: “I am here to help you with questions regarding customer support. How can I assist you with that?”
In similar fashion, Salesforce Agentforce may have a customer support agent with the following instruction:
“Do not let the customer change the billing details without talking to a customer service representative.”
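Instruction-style guardrails like the two Agentforce examples above can be approximated with a pre-response check. The keyword matching below is a toy stand-in for the LLM-based intent detection a real platform would use; the topic list and messages are invented for illustration:

```python
# Toy sketch of instruction-style guardrails: deflect off-topic questions
# and escalate billing changes to a human. Keyword matching stands in for
# the LLM-based intent detection a real platform would use.

SUPPORT_TOPICS = {"order", "refund", "shipping", "billing"}

def respond(user_prompt: str) -> str:
    words = {w.strip("?.,!").lower() for w in user_prompt.split()}
    if "billing" in words:
        # Mirror the instruction: route billing changes to a human.
        return "Please contact a customer service representative to change billing details."
    if words.isdisjoint(SUPPORT_TOPICS):
        # Mirror the instruction: never answer general knowledge questions.
        return "I am here to help you with questions regarding customer support."
    return "Sure, let me look into that for you."

print(respond("What is the capital of India?"))
print(respond("I want to change my billing address"))
```

In production, both checks would be expressed as natural language instructions and enforced by the platform, but the control flow is the same: classify the intent, then either answer, deflect, or escalate to a human.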
As noted, the basic requirements for AI Governance also apply to Agentic AI Governance. However, the mechanisms for Agentic AI Governance will likely be highly automated to reduce the need for humans (human-out-of-the-loop) while adhering to requirements for safe and responsible AI.