Let’s talk about Agentic AI Governance.
AI Agents are becoming increasingly popular as the next step in Generative AI.
Google DeepMind defines an AI agent as a computer program with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations.
The entire paradigm of AI Governance is going to be turned on its head by AI Agents. For example:
- How do you introduce Human-in-the-Loop (HITL) with autonomous agents?
- How do you implement guardrails for toxic content?
- What about hallucinations?
I believe that Agentic AI Governance needs to be implemented inline via controls such as natural language instructions and tools.
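As a minimal sketch of what an inline control could look like, here is a human-approval gate that pauses the agent before high-risk actions. The action names, risk list, and approval callback are illustrative assumptions, not part of any specific agent framework:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical: which actions require human sign-off (assumption for this sketch)
HIGH_RISK_ACTIONS = {"send_email", "delete_record", "transfer_funds"}

@dataclass
class ApprovalGate:
    """Inline HITL control: high-risk actions run only if a human approves."""
    approve: Callable[[str], bool]        # human-facing approval callback
    audit_log: list = field(default_factory=list)

    def execute(self, action: str, run: Callable[[], str]) -> str:
        if action in HIGH_RISK_ACTIONS and not self.approve(action):
            self.audit_log.append((action, "blocked"))
            return f"Action '{action}' blocked pending human approval."
        self.audit_log.append((action, "executed"))
        return run()

# Demo: deny everything, so high-risk actions are blocked
gate = ApprovalGate(approve=lambda action: False)
print(gate.execute("send_email", lambda: "email sent"))
# Low-risk actions pass straight through without a human in the loop
print(gate.execute("summarize_text", lambda: "summary ready"))
```

The point of the sketch is that the governance check sits in the agent's execution path itself, rather than being bolted on after the fact.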
The YDC team did some hands-on work with a few AI agent systems. Here are a few observations:
- Salesforce Agentforce has out-of-the-box (OOTB) tasks to filter off-topic prompts
- crewAI supports human-input parameters for HITL
- Google Vertex AI Agent Builder has safety settings for toxic content
There is much more that is possible by integrating third-party tools. For example, an AI Agent can invoke a third-party tool to detect toxic content or model theft.
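To make that concrete, here is a toy sketch of a toxicity check exposed as a tool and used as an inline guardrail on the agent's draft reply. The blocklist and function names are illustrative assumptions; a real deployment would call a dedicated moderation service rather than a keyword list:

```python
import re

# Toy blocklist standing in for a real third-party moderation API (assumption)
BLOCKLIST = {"idiot", "stupid", "hate"}

def toxicity_tool(text: str) -> dict:
    """Return a toxicity verdict the agent can act on before responding."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    hits = sorted(words & BLOCKLIST)
    return {"toxic": bool(hits), "matched": hits}

def guarded_reply(draft: str) -> str:
    """Inline control: suppress the draft reply if the tool flags it."""
    verdict = toxicity_tool(draft)
    if verdict["toxic"]:
        return "Response withheld by toxicity guardrail."
    return draft

print(guarded_reply("Happy to help with your request."))
print(guarded_reply("You are an idiot."))
```

Swapping the keyword check for a hosted moderation endpoint would keep the same shape: the agent calls the tool, and the verdict gates what the user sees.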
Just some initial thoughts. More to come…