• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF
Vertical Line
  • Agentic AI Governance
  • 19 case studies
  • 11 Agentic AI platforms
  • Companion to AI Governance Comprehensive
Marquee Hyperlink Example
YDC_AIGOV Shadow AI Governance Agents to highlight risks associated with AI embedded within vendor applications

Using Collibra AI Governance Module to Assess Digital Twins for Personalized Health Care Against Maryland AI Executive Order

Sunil Soares, Founder & CEO, YDC December 23, 2024

I wrote an earlier blog on AI Governance Considerations with Digital Twins for Personalized Health Care. A digital twin may be a virtual replica of a particular patient that reflects the unique genetic makeup of the patient or a simulated three-dimensional model that exhibits the characteristics of a patient’s heart. With predictive algorithms and real-time data, digital twins have the potential to detect anomalies and assess health risks before a disease develops or becomes symptomatic. 

However, these digital twins introduce several AI risks including quality issues with data collection, biased algorithms, biased training data sets, data privacy, data security, data reconstruction through attribute inference, and epistemic injustice by discounting  patient knowledge.

In another blog, Zhanara Amans and I mapped the Maryland AI Executive Order to the YDC AI Governance Framework that applies to the use of AI by state agencies in the U.S. State of Maryland.

The YDC team developed a Custom AI Risk Assessment in the Collibra AI Governance Module to cover the Maryland AI Executive Order. The information is hypothetical for illustrative purposes only and does not represent our current knowledge of AI use cases in Maryland.

We started with a taxonomy of departments and AI use cases for the State of Maryland. The Department of Education is an AI Program that contains the AI-Driven Individual Education Plans (IEPs) AI Use Case. The Department of Health contains the Digital Twins for Personalized Health Care AI use case, which we will showcase in more detail. Finally, the Department of Justice contains the AI-Driven Sentencing Recommendations AI Use Case.

The Business Context for the Digital Twins AI Use Case is shown below.

Finally, we assessed specific risks such as Fairness and Equity, Innovation, Privacy, Safety, Security & Resiliency, Validity & Reliability, and Transparency, Accountability & Explainability in the AI Risk Assessment. We mapped each risk dimension to the relevant section in the Maryland AI Executive Order (e.g., Section B.1 – Fairness and equity). 

As an added bonus, we configured the assessment to automatically return an AI Risk Rating based on the most conservative risk dimension. For example, the Digital Twins AI Use Case was classified as High-Risk because the Fairness & Equity and Privacy Risks were rated as High (see Figure above).

You can find more information on the Collibra AI Governance Module here.

Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control
ID: 5.1

Ensure that AI systems are fair and manage harmful bias.
Component
Sub-Control
Regulation
 
Source
Address Fairness and Accessibility EU AI Act -Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors

Detect Data Poisoning Attacks
Control

ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems.

Component
Control
Regulation
Source
10. Improve Security10.4 Avoid Data and Model Poisoning AttacksEU AI Act: Article 15 – Accuracy, Robustness and Cybersecurity 

Vendors

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

ID : 1.1 

Appoint an executive who will be accountable for the overall success of the program.

ComponentRegulationVendors
1. Establish Accountability for AIEU AI Act 
We use cookies to ensure we give you the best experience on our website. If you continue to use this site, we will assume you consent to our privacy policy.