
AI Governance with Atlan: AI Use Cases, Risk Assessments, Workflows & Shadow AI Governance

Sunil Soares, Founder & CEO, YDC | February 20, 2025
The YDC team developed an AI Governance prototype in Atlan. We reused the existing operating model with assets and added custom attributes and relations.
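To give a sense of what this looks like under the hood, here is a minimal sketch of how custom AI Governance attributes might be attached to an asset through Atlan's REST API. The endpoint path, the "AI Governance" custom-metadata name, and the attribute names below are illustrative assumptions, not Atlan's documented schema.

```python
import os

import requests

# Illustrative sketch only: the endpoint path and the custom-metadata structure
# below are assumptions for this example, not Atlan's documented API schema.
BASE_URL = os.environ["ATLAN_BASE_URL"]          # e.g. https://your-tenant.atlan.com
HEADERS = {"Authorization": f"Bearer {os.environ['ATLAN_API_TOKEN']}"}

asset_guid = "00000000-0000-0000-0000-000000000000"   # GUID of the asset to annotate

# Hypothetical "AI Governance" custom metadata set attached to the asset.
ai_governance_attributes = {
    "AI Governance": {
        "AI Use Case": "Digital Twins for Clinical Trials",
        "Risk Tier": "High",
        "Business Owner": "Clinical Operations",
        "EU AI Act Category": "Article 6 - Medical Device",
    }
}

resp = requests.post(
    f"{BASE_URL}/api/meta/entity/guid/{asset_guid}/businessmetadata",  # hypothetical path
    headers=HEADERS,
    json=ai_governance_attributes,
    timeout=30,
)
resp.raise_for_status()
```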

AI Use Cases

As discussed in an earlier blog, a digital twin may be a virtual replica of a particular patient that reflects the patient's unique genetic makeup, or a simulated three-dimensional model that exhibits the characteristics of a patient's heart. Digital twins can be used to accelerate clinical trials and reduce costs in the life sciences industry. The YDC team implemented an overview of the Digital Twins for Clinical Trials AI Use Case in Atlan.



AI Risk Assessments

We conducted an AI Risk Assessment for the use case with Atlan. Digital twins have the potential to introduce bias risks based on the algorithms and the underlying data sets. We documented the bias risk assessment and a mapping to the associated regulations in Atlan.
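For readers who want to reproduce this outside of screenshots, a bias risk assessment entry of this kind can be represented as a simple structured record before it is pushed into the catalog. The field names and mitigation examples below are illustrative assumptions, not the exact custom attributes configured in Atlan.

```python
# Illustrative structure for a bias risk assessment entry; field names and
# mitigations are assumptions, not the exact attributes configured in Atlan.
bias_risk_assessment = {
    "ai_use_case": "Digital Twins for Clinical Trials",
    "risk_dimension": "Bias",
    "description": (
        "Algorithms and underlying data sets may under-represent patient "
        "sub-populations, skewing simulated trial outcomes."
    ),
    "inherent_risk": "High",
    "mitigations": ["Representative training data review", "Periodic fairness testing"],
    "regulatory_mappings": [
        "EU AI Act - Article 10(2)(f)(g) - Data and Data Governance",
        "NIST AI RMF - Map and Measure functions",
    ],
}
```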



We also documented the privacy risks in Atlan.


We documented other dimensions of AI risk including Reliability, Accountability, Explainability and Security in Atlan. For the sake of brevity, I have not included those screenshots here.

This use case would likely be classified as High Risk based on the Medical Device category of Article 6 of the EU AI Act. 
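One simple way to encode that determination is a small helper that maps an assessed use case category to an EU AI Act risk tier. The category list and mapping below are a simplified sketch of Article 6-style logic for illustration, not a complete legal determination.

```python
# Simplified sketch of an EU AI Act risk-tier lookup; the category list and
# mapping are illustrative and do not replace a legal assessment.
HIGH_RISK_CATEGORIES = {
    "Medical Device",     # regulated products covered by Article 6
    "Employment",
    "Credit Scoring",
}

def classify_eu_ai_act_risk(category: str, prohibited: bool = False) -> str:
    """Return a coarse EU AI Act risk tier for an AI use case."""
    if prohibited:
        return "Unacceptable Risk"
    if category in HIGH_RISK_CATEGORIES:
        return "High Risk"
    return "Limited / Minimal Risk"

print(classify_eu_ai_act_risk("Medical Device"))   # -> "High Risk"
```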


AI Risk Assessment Workflows

We configured an AI Risk Assessment workflow in Atlan to route the AI Risk Assessment to the appropriate parties for approval.



The screenshot below shows the AI Risk Assessment in Approved status based on approvals from the Operational Risk Management Committee (ORMC) and the AI Governance Council.
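The workflow itself is configured in Atlan's UI, but conceptually it behaves like a small approval state machine. The sketch below captures that routing in plain Python; the stage names come from the screenshot above, while the data structure, status values, and function name are illustrative assumptions.

```python
# Conceptual sketch of the approval routing; stage names follow the blog,
# while the data structure and status values are illustrative assumptions.
APPROVAL_STAGES = [
    "Operational Risk Management Committee (ORMC)",
    "AI Governance Council",
]

def assessment_status(approvals: dict[str, bool]) -> str:
    """Return the overall status of an AI Risk Assessment given stage approvals."""
    if all(approvals.get(stage) for stage in APPROVAL_STAGES):
        return "Approved"
    if any(approvals.get(stage) is False for stage in APPROVAL_STAGES):
        return "Rejected"
    return "Pending"

print(assessment_status({
    "Operational Risk Management Committee (ORMC)": True,
    "AI Governance Council": True,
}))  # -> "Approved"
```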


Shadow AI Governance to Ingest Metadata from ServiceNow CMDB and YDC_AIGOV Agents on Hugging Face to Highlight COTS Apps with Embedded AI

In an earlier blog, I discussed Shadow AI Governance and the YDC_AIGOV agents. As part of the current exercise, we ingested metadata about the Commercial-off-the-Shelf (COTS) apps into Atlan. This metadata includes attributes such as Application Name, Privacy Policy URL, Data Specifically Excluded from AI Training, Embedded AI, and Opt-Out Option.
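Each ingested COTS application can be thought of as one metadata record carrying exactly these fields as it moves from the ServiceNow CMDB / YDC_AIGOV extraction step into the Atlan load step. The sketch below shows what such a record might look like; the field values for Actimize Xceed are placeholders, not the actual ingested metadata.

```python
# Illustrative COTS application record produced by the Shadow AI ingestion;
# values shown are placeholders, not the actual metadata ingested for this app.
cots_app_record = {
    "application_name": "Actimize Xceed",
    "source_system": "ServiceNow CMDB",
    "privacy_policy_url": "https://example.com/privacy",   # placeholder URL
    "embedded_ai": "Yes",
    "data_specifically_excluded_from_ai_training": "No",
    "opt_out_option": "Yes",
}
```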


The screenshot below shows Atlan before running the integration with the YDC_AIGOV agents. The catalog only contains one AI Use Case (Digital Twins for Clinical Trials) and one application (Google Product Services).


After running the integration via the Atlan API, the catalog contains a broader list of applications, including Actimize Xceed, with its metadata displayed in the right panel.


Conditional Logic with Atlan API to Auto-Create AI Use Case and AI Risk Assessment Objects

We implemented conditional logic in our Atlan API integration to auto-create AI Use Cases only for applications with embedded AI. In this case, we created an AI Use Case object in Atlan for Actimize Xceed because Embedded AI = “Yes.”
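A minimal version of that conditional logic might look like the following. The `create_ai_use_case` function is a hypothetical stub standing in for the Atlan API call, and the record fields mirror the ingested metadata described above.

```python
# Illustrative conditional logic: auto-create an AI Use Case only when the
# ingested application has embedded AI. `create_ai_use_case` is a hypothetical
# helper wrapping the Atlan API call, shown here as a stub.
def create_ai_use_case(app: dict) -> None:
    print(f"Creating AI Use Case in Atlan for {app['application_name']}")

def process_application(app: dict) -> None:
    if app.get("embedded_ai") == "Yes":
        create_ai_use_case(app)

process_application({"application_name": "Actimize Xceed", "embedded_ai": "Yes"})
```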



We also implemented conditional logic in the integration to auto-create AI Risk Assessment objects where Data Specifically Excluded from AI Training = “No.” Naturally, this logic is configurable.
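The companion rule for risk assessments can be sketched the same way; `create_ai_risk_assessment` is again a hypothetical stub, and the triggering condition is meant to be configurable.

```python
# Illustrative companion rule: auto-create an AI Risk Assessment when the
# application's data is not specifically excluded from AI training.
# `create_ai_risk_assessment` is a hypothetical stub; the condition is configurable.
def create_ai_risk_assessment(app: dict) -> None:
    print(f"Creating AI Risk Assessment in Atlan for {app['application_name']}")

def process_risk(app: dict) -> None:
    if app.get("data_specifically_excluded_from_ai_training") == "No":
        create_ai_risk_assessment(app)

process_risk({
    "application_name": "Actimize Xceed",
    "data_specifically_excluded_from_ai_training": "No",
})
```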


This is a basic AI Governance configuration in Atlan, with more to come!

Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control
Control ID: 5.1

Ensure that AI systems are fair and manage harmful bias.
Component: Address Fairness and Accessibility
Regulation: EU AI Act – Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors

Detect Data Poisoning Attacks
Control

Control ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems (a minimal detection sketch appears after this entry).

Component: 10. Improve Security
Control: 10.4 Avoid Data and Model Poisoning Attacks
Regulation: EU AI Act – Article 15 – Accuracy, Robustness and Cybersecurity

Vendors
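As a rough illustration of what a basic detection check for this control might look like, the sketch below flags an abrupt shift in label distribution between a trusted baseline dataset and a newly ingested batch. This is a simplified heuristic for illustration only, not a complete data poisoning defense.

```python
from collections import Counter

# Simplified heuristic: flag a new training batch whose label distribution
# drifts sharply from a trusted baseline, one possible symptom of label-flipping
# data poisoning. Illustrative only; real defenses combine multiple signals.
def label_distribution(labels: list[str]) -> dict[str, float]:
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def poisoning_suspected(baseline: list[str], new_batch: list[str],
                        threshold: float = 0.2) -> bool:
    base, new = label_distribution(baseline), label_distribution(new_batch)
    drift = sum(abs(base.get(k, 0.0) - new.get(k, 0.0)) for k in set(base) | set(new))
    return drift / 2 > threshold   # total variation distance

baseline = ["legit"] * 95 + ["fraud"] * 5
new_batch = ["legit"] * 60 + ["fraud"] * 40
print(poisoning_suspected(baseline, new_batch))   # -> True
```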

Improve Security
Component

Component ID: 10

Address emerging attack vectors spanning availability, integrity, privacy, and abuse violations.

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

Control ID: 1.1

Appoint an executive who will be accountable for the overall success of the program.

Component: 1. Establish Accountability for AI
Regulation: EU AI Act

Vendors