I wrote an earlier blog on AI Governance Considerations with Digital Twins for Personalized Health Care. A digital twin may be a virtual replica of a particular patient that reflects that patient's unique genetic makeup, or a simulated three-dimensional model that exhibits the characteristics of a patient's heart. With predictive algorithms and real-time data, digital twins have the potential to detect anomalies and assess health risks before a disease develops or becomes symptomatic.
However, these digital twins introduce several AI risks, including quality issues with data collection, biased algorithms, biased training data sets, data privacy, data security, data reconstruction through attribute inference, and epistemic injustice that discounts patient knowledge.
In another blog, Zhanara Amans and I mapped the Maryland AI Executive Order, which governs the use of AI by state agencies in the U.S. State of Maryland, to the YDC AI Governance Framework.
The YDC team developed a Custom AI Risk Assessment in the Collibra AI Governance Module to cover the Maryland AI Executive Order. The information is hypothetical, presented for illustrative purposes only, and does not represent our current knowledge of AI use cases in Maryland.
We started with a taxonomy of departments and AI use cases for the State of Maryland. The Department of Education is an AI Program that contains the AI-Driven Individual Education Plans (IEPs) AI Use Case. The Department of Health contains the Digital Twins for Personalized Health Care AI Use Case, which we will showcase in more detail. Finally, the Department of Justice contains the AI-Driven Sentencing Recommendations AI Use Case.
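To make the hierarchy concrete, here is a minimal Python sketch of that taxonomy as a nested mapping of AI Programs to AI Use Cases. The dictionary shape is our own simplification; in Collibra, each entry is a governed asset with richer metadata.

```python
# Illustrative taxonomy of Maryland AI Programs and their AI Use Cases.
# This nested-dict representation is a simplification for the blog;
# in Collibra, each entry is a governed asset with its own attributes.
MARYLAND_AI_TAXONOMY = {
    "Department of Education": [
        "AI-Driven Individual Education Plans (IEPs)",
    ],
    "Department of Health": [
        "Digital Twins for Personalized Health Care",
    ],
    "Department of Justice": [
        "AI-Driven Sentencing Recommendations",
    ],
}

# Example: list every AI Use Case grouped by its AI Program.
for program, use_cases in MARYLAND_AI_TAXONOMY.items():
    for use_case in use_cases:
        print(f"{program} -> {use_case}")
```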
The Business Context for the Digital Twins AI Use Case is shown below.
Next, we assessed specific risks such as Fairness & Equity, Innovation, Privacy, Safety, Security & Resiliency, Validity & Reliability, and Transparency, Accountability & Explainability in the AI Risk Assessment. We mapped each risk dimension to the relevant section of the Maryland AI Executive Order (e.g., Section B.1 – Fairness and equity).
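Conceptually, this mapping is a simple lookup table from risk dimension to Executive Order section. A minimal Python sketch follows; note that only the Section B.1 mapping is quoted above, so the other section identifiers are hypothetical placeholders.

```python
# Lookup from risk dimension to the relevant section of the Maryland AI
# Executive Order. Only the Fairness & Equity entry (Section B.1) is
# confirmed above; the remaining identifiers are hypothetical placeholders.
EO_SECTION_BY_RISK_DIMENSION = {
    "Fairness & Equity": "Section B.1",
    "Innovation": "Section B.x (placeholder)",
    "Privacy": "Section B.x (placeholder)",
    "Safety, Security & Resiliency": "Section B.x (placeholder)",
    "Validity & Reliability": "Section B.x (placeholder)",
    "Transparency, Accountability & Explainability": "Section B.x (placeholder)",
}

# Example lookup for the dimension quoted in the text.
print(EO_SECTION_BY_RISK_DIMENSION["Fairness & Equity"])  # -> Section B.1
```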
As an added bonus, we configured the assessment to automatically return an AI Risk Rating based on the most conservative risk dimension. For example, the Digital Twins AI Use Case was classified as High-Risk because the Fairness & Equity and Privacy Risks were rated as High (see Figure above).
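Under the hood, the "most conservative risk dimension" rule is just a maximum over an ordered risk scale. A minimal sketch, assuming a three-level Low < Medium < High scale (the actual levels and field names in Collibra may differ):

```python
# Ordered risk scale, from least to most conservative. The three-level
# scale is an assumption; Collibra's configuration may use more levels.
RISK_LEVELS = ["Low", "Medium", "High"]

def overall_risk_rating(dimension_ratings: dict[str, str]) -> str:
    """Return the rating of the most conservative (highest) risk dimension."""
    return max(dimension_ratings.values(), key=RISK_LEVELS.index)

# The Digital Twins AI Use Case from the text: Fairness & Equity and
# Privacy are rated High, so the overall rating comes back as High.
digital_twins_ratings = {
    "Fairness & Equity": "High",
    "Innovation": "Low",                          # illustrative value
    "Privacy": "High",
    "Safety, Security & Resiliency": "Medium",    # illustrative value
    "Validity & Reliability": "Medium",           # illustrative value
    "Transparency, Accountability & Explainability": "Medium",  # illustrative
}
print(overall_risk_rating(digital_twins_ratings))  # -> "High"
```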
You can find more information on the Collibra AI Governance Module here.