• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF

AI Governance Considerations with Digital Twins for Personalized Health Care

Sunil Soares, Founder & CEO, YDC
November 18, 2024

I am publishing this piece based on my recent AI Governance Book.


According to the Digital Twin Consortium, a digital twin is a virtual representation of real-world entities and processes, synchronized at a specified frequency and fidelity. Digital twins use real-time and historical data to represent the past and present and simulate predicted futures.


Applying this concept to the health care sector, a digital twin may be a virtual replica of a particular patient that reflects the unique genetic makeup of the patient or a simulated three-dimensional model that exhibits the characteristics of a patient’s heart. With predictive algorithms and real-time data, digital twins have the potential to detect anomalies and assess health risks before a disease develops or becomes symptomatic.


The AI governance implications of digital twins for personalized health care are mapped to the YDC AI Governance Framework based on research from the National Library of Medicine.


Address Quality Issues with Data Collection
Wearables such as the Apple Watch now make it possible to collect a wide range of biosignals. However, the accuracy of the devices used for data collection varies. For example, a review of the Apple Watch's accuracy in measuring heart rate and energy expenditure found that, although the device provides clinically reliable heart rate measurements, it systematically overestimated energy expenditure in patients with cardiovascular disease.
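
As an illustration of the kind of quality gate this implies, the Python sketch below flags implausible or potentially device-biased wearable readings before they reach a digital twin. The field names, thresholds, and device label are illustrative assumptions, not part of any specific product or framework.

```python
# Minimal sketch (illustrative fields and thresholds): plausibility checks on
# wearable biosignal readings before they are ingested by a digital twin.
from dataclasses import dataclass


@dataclass
class BiosignalReading:
    heart_rate_bpm: float     # reported heart rate
    energy_kcal: float        # reported energy expenditure for the interval
    device: str               # e.g., "apple_watch" (illustrative label)
    patient_has_cvd: bool     # cardiovascular disease flag from the patient record


def flag_quality_issues(reading: BiosignalReading) -> list[str]:
    """Return flags for implausible or potentially device-biased values."""
    flags = []
    # Physiologically implausible heart rates go to manual review, not the twin.
    if not 25 <= reading.heart_rate_bpm <= 220:
        flags.append("heart_rate_out_of_range")
    # Devices known to overestimate energy expenditure in cardiovascular patients
    # are flagged for recalibration rather than ingested as ground truth.
    if reading.device == "apple_watch" and reading.patient_has_cvd:
        flags.append("energy_expenditure_calibration_review")
    return flags


if __name__ == "__main__":
    sample = BiosignalReading(heart_rate_bpm=300.0, energy_kcal=450.0,
                              device="apple_watch", patient_has_cvd=True)
    print(flag_quality_issues(sample))
    # ['heart_rate_out_of_range', 'energy_expenditure_calibration_review']
```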


Mitigate the Impact of Biased Algorithms
The algorithms in digital twins may yield unanticipated discriminatory results. One research study found that a widely adopted health care algorithm for identifying patients highly likely to need complex care systematically discriminated against Black patients. Because the algorithm used health care costs as a proxy for health need, it unintentionally assigned Black patients lower risk scores. It is generally true that the more complex the health needs, the higher the cost, but using cost as a proxy overlooks the fact that expenditure also depends on access to health care. The lower health expenditure observed for Black patients did not mean they were less ill than White patients; it was more likely the result of unequal access to care.
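
A simple group-wise audit can surface this kind of proxy bias. The sketch below uses illustrative toy records, not the study's data, and compares mean predicted risk across groups while holding a rough measure of health need (chronic condition count) constant.

```python
# Minimal sketch with illustrative toy records (not the study's data): compare
# mean predicted risk across groups at a similar level of underlying health need.
from statistics import mean

# Each record: (group, chronic_condition_count, predicted_risk_score)
records = [
    ("black", 4, 0.42), ("black", 4, 0.39), ("black", 5, 0.47),
    ("white", 4, 0.61), ("white", 4, 0.58), ("white", 5, 0.66),
]


def mean_risk_by_group(rows, condition_count):
    """Average predicted risk per group, holding health need roughly constant."""
    result = {}
    for group in {r[0] for r in rows}:
        scores = [r[2] for r in rows if r[0] == group and r[1] == condition_count]
        if scores:
            result[group] = round(mean(scores), 3)
    return result


print(mean_risk_by_group(records, condition_count=4))
# A persistent gap at equal condition counts suggests the proxy (cost) is
# encoding unequal access to care rather than true health need.
```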


Evaluate the Impact of Biased Training Data Sets
The reliability of AI models can be severely compromised if the data sets used to train these algorithms do not properly reflect the deployment environment. For example, IBM’s Watson for Oncology was less effective and reliable when applied to non-Western populations because the imagery data used for training Watson were primarily from the Western population.
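
One practical guard against this failure mode is to compare the demographic mix of the training set with the intended deployment population before sign-off. The sketch below uses assumed, illustrative proportions; the segment names and the acceptable gap are governance decisions, not fixed values.

```python
# Minimal sketch with assumed, illustrative proportions: compare training-set
# composition against the intended deployment population before sign-off.
def representation_gap(train_share: dict, deploy_share: dict) -> dict:
    """Population share difference (deployment minus training) per segment."""
    segments = set(train_share) | set(deploy_share)
    return {s: round(deploy_share.get(s, 0.0) - train_share.get(s, 0.0), 3)
            for s in segments}


train = {"western": 0.92, "non_western": 0.08}    # assumed training mix
deploy = {"western": 0.35, "non_western": 0.65}   # assumed target population

print(representation_gap(train, deploy))
# e.g., {'non_western': 0.57, 'western': -0.57} (key order may vary)
# Large positive gaps mark segments underrepresented in training relative to
# deployment, warranting additional data collection or local validation.
```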


Prevent Overdiagnosis
One of the general goals of digital twins for personalized health care services is to provide early warnings to users and assist in preventive health care. However, in practice, early action sometimes leads to overdiagnosis and overtreatment. This ethical dilemma has been highlighted in the personalized medicine literature on the use of biomarkers. For example, many bioethicists and clinicians are concerned that genetic testing used to detect BReast CAncer gene 1 (BRCA1) and BReast CAncer gene 2 (BRCA2) mutations might lead to overtreatment, harming a patient's bodily integrity. By way of background, people who inherit harmful variants in one of these genes have increased risks of several cancers, most notably breast and ovarian cancer, but also several additional types of cancer.


Prevent Decontextualization of Disease Formation by Overlooking Socioeconomic Determinants
A digital twin for personalized health care services might overly individualize health issues and overlook the fact that socioenvironmental determinants, such as air pollution, water pollution, and a lack of education, also contribute to health problems. Explainability and interpretability therefore remain paramount, so that these contextual drivers of disease are not obscured by the model's individualized outputs.


Avoid Epistemic Injustice by Discounting Patients' Personal Knowledge
Epistemic injustice is a wrong done to someone in their capacity as a knower. It manifests as the exclusion of marginalized and oppressed people from 1) being heard and understood by others in interpersonal communication, and 2) contributing to broader and deeper social understandings of the human experience. The growing reliance on health information produced by digital twins for personalized health care services could also lead to undervaluing patients' personal views and experiential knowledge. Some might assume that health information offered by the digital twin is more reliable than a patient's personal account because it appears to be derived from objective facts.


Prevent Surveillance Health Care Through Data Minimization and Anonymization
Combining non-health data such as social media, education, and occupation with health data in the digital twin may violate the patient’s right to privacy and autonomy in the absence of informed consent. Selling health care data to data brokers without informed consent also violates the patient’s right to privacy.
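
A minimal sketch of data minimization and pseudonymization before linkage is shown below. The field allow-list, the salted hash, and the record layout are illustrative assumptions; a production system would also need key management, consent checks, and a formal re-identification risk assessment.

```python
# Minimal sketch (illustrative allow-list and salt handling): keep only fields
# needed for the care purpose and replace the identifier with a salted hash.
import hashlib

ALLOWED_FIELDS = {"heart_rate_bpm", "blood_pressure", "consented_activity_level"}


def minimize_and_pseudonymize(record: dict, salt: str) -> dict:
    """Drop non-essential fields and pseudonymize the patient identifier."""
    pseudo_id = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    return {"pseudo_id": pseudo_id, **kept}


raw = {
    "patient_id": "12345",
    "heart_rate_bpm": 72,
    "social_media_handle": "@example",  # dropped: not needed for the care purpose
    "occupation": "teacher",            # dropped
}
print(minimize_and_pseudonymize(raw, salt="rotate-this-salt"))
```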


Improve AI Security
In 2023, 46 hospital systems in the United States, comprising 141 hospitals, were impacted by ransomware, according to a tally from cybersecurity firm Emsisoft. That number was up from 25 hospital systems hit by ransomware in 2022. The promise of digital twins for personalized health care services is built on extensive health-related data, and this concentration of sensitive data might attract even more cyberattacks than other health care services have experienced.


Prevent Data Reconstruction Through Attribute Inference Attacks
The extensive information held about a patient in a digital twin may be combined to infer attributes that the patient does not wish to share, seriously infringing their privacy.
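
A k-anonymity style screen on quasi-identifiers is one common way to reduce this risk before data leaves the digital twin environment. In the sketch below, the choice of quasi-identifiers and the threshold k are illustrative assumptions rather than prescriptions.

```python
# Minimal sketch (illustrative quasi-identifiers and threshold): a k-anonymity
# style check before any extract leaves the digital twin environment.
from collections import Counter

QUASI_IDENTIFIERS = ("age_band", "zip3", "sex")


def smallest_group_size(rows: list) -> int:
    """Size of the rarest quasi-identifier combination; small groups are risks."""
    combos = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return min(combos.values())


rows = [
    {"age_band": "40-49", "zip3": "100", "sex": "F", "diagnosis": "hypertension"},
    {"age_band": "40-49", "zip3": "100", "sex": "F", "diagnosis": "asthma"},
    {"age_band": "70-79", "zip3": "945", "sex": "M", "diagnosis": "diabetes"},
]

K = 5  # assumed minimum group size
if smallest_group_size(rows) < K:
    print("Release blocked: generalize or suppress quasi-identifiers first.")
```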


Prevent Black-Box Evasion Attacks Through Unorthodox Use
Some users might deliberately use a digital twin in unorthodox ways to trick the system. For instance, a digital twin for personalized health care services devised by insurance companies could be compromised when users are more interested in obtaining a lower premium than in tracking how a newly adopted healthy lifestyle improves their health.
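
One pragmatic countermeasure is a plausibility check on self-reported data streams. The sketch below flags suspiciously uniform activity histories that suggest gaming rather than a genuine lifestyle change; the variance heuristic and thresholds are illustrative assumptions.

```python
# Minimal sketch (illustrative heuristic and thresholds): flag activity streams
# that are too uniform to be genuine behavior, e.g., in an insurer's digital twin.
from statistics import mean, pstdev


def looks_gamed(daily_steps: list) -> bool:
    """Flag streams that are suspiciously uniform or pinned at a reward threshold."""
    if len(daily_steps) < 14:
        return False  # not enough history to judge
    spread = pstdev(daily_steps)
    return spread < 0.02 * mean(daily_steps)  # real behavior varies day to day


print(looks_gamed([10_000] * 30))                                      # True
print(looks_gamed([8200, 11500, 6400, 9800, 12100, 7300, 10050] * 4))  # False
```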

Fairness & Accessibility (Component)

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility

Improve Security (Component)

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

ID: 1.1

Appoint an executive who will be accountable for the overall success of the program.

Component: 1. Establish Accountability for AI
Regulation: EU AI Act
Vendors: