
Supporting Compliance with the EU AI Act and EU Digital Operational Resilience Act (DORA) for Shadow AI Governance with the Hyperproof GRC Platform

Shadow AI introduces real risks for Third-Party Risk Management, including bias arising from AI embedded in vendor applications. Regulators need to provide further guidance on the implications of Shadow AI.
Sunil Soares, Founder & CEO, YDC January 27, 2025
In this blog, I discuss the implications of Shadow AI for compliance with the EU AI Act and the EU Digital Operational Resilience Act (DORA) in the financial services sector, using the Hyperproof GRC Platform.

What is Shadow AI?
In a previous blog, I highlighted a recent YDC study at a mid-size bank that found a number of applications with “Shadow AI.” We define Shadow AI as applications where vendors have added artificial intelligence capabilities to their application suite without the full knowledge of the company. The bank had 800 commercial-off-the-shelf (COTS) applications, of which 256 (32 percent) had embedded AI with data not excluded from AI training.
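To make the arithmetic concrete, here is a minimal sketch of how a TPRM team might tally embedded-AI exposure from an application inventory. The inventory format, field names, and records below are hypothetical; they simply reproduce the 800-application, 32 percent finding from the study.

```python
from dataclasses import dataclass

@dataclass
class CotsApp:
    """One commercial-off-the-shelf (COTS) application in the inventory."""
    name: str
    has_embedded_ai: bool               # vendor has added AI features
    data_excluded_from_training: bool   # contractual opt-out from AI training

def shadow_ai_exposure(inventory: list[CotsApp]) -> float:
    """Share of apps with embedded AI whose data is NOT excluded
    from vendor AI training -- the Shadow AI exposure rate."""
    exposed = [
        app for app in inventory
        if app.has_embedded_ai and not app.data_excluded_from_training
    ]
    return len(exposed) / len(inventory)

# Hypothetical inventory mirroring the study: 256 exposed apps out of 800.
inventory = [CotsApp(f"app-{i}", True, False) for i in range(256)]
inventory += [CotsApp(f"app-{i}", False, True) for i in range(256, 800)]
print(f"Shadow AI exposure: {shadow_ai_exposure(inventory):.0%}")  # 32%
```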


Why Does Shadow AI Need to be Governed?
Shadow AI has several implications:

  • EU AI Act Article 9 addresses Risk Management, which encompasses the risks of AI embedded in applications (Shadow AI).

  • DORA has a major emphasis on Third-Party Risk Management (TPRM).

  • U.S. Sectoral Laws do not distinguish between Providers and Deployers of AI Systems, a distinction that is central to the EU AI Act. For example, the U.S. Equal Employment Opportunity Commission (EEOC) has provided guidance that employers using third-party AI tools may violate Title I of the Americans with Disabilities Act (ADA). This may happen if the employer does not provide a “reasonable accommodation” necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. In addition, the AI may intentionally or unintentionally “screen out” an individual with a disability.

  • Deployers of AI-embedded applications also carry litigation risk. For example, the U.S. Department of Justice sued six of the nation’s largest landlords, in addition to RealPage, for an algorithmic pricing scheme that allegedly harmed renters.

Hyperproof Overview & AI Literacy Example
Hyperproof supports compliance with multiple regulatory frameworks in a single platform. For example, the YDC-13.6 AI Literacy control is mapped to Article 4 of the EU AI Act in Hyperproof. Article 4 enters into force on February 2, 2025.


Hyperproof supports proof of compliance in the form of an AI Literacy Plan.


The AI Literacy Plan provides a tailored approach for different audiences, from the Board of Directors to everyone in the company.
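To illustrate the idea, here is a hypothetical representation of such a tiered plan as plain data. The audiences, topics, and cadences below are assumptions for illustration only; they are not Hyperproof’s schema or YDC’s published plan.

```python
# Hypothetical AI Literacy Plan tiers (EU AI Act Article 4) -- illustrative
# only, not Hyperproof's schema or YDC's published plan.
ai_literacy_plan = [
    {"audience": "Board of Directors",
     "focus": "AI risk oversight and EU AI Act obligations",
     "cadence": "annual briefing"},
    {"audience": "Executive leadership",
     "focus": "AI strategy, accountability, and risk appetite",
     "cadence": "semi-annual workshop"},
    {"audience": "AI developers and data teams",
     "focus": "bias, robustness, and data governance controls",
     "cadence": "quarterly, role-based training"},
    {"audience": "All employees",
     "focus": "responsible AI use and Shadow AI awareness",
     "cadence": "annual e-learning"},
]

for tier in ai_literacy_plan:
    print(f'{tier["audience"]}: {tier["focus"]} ({tier["cadence"]})')
```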


Third-Party Risk Management (TPRM)
Shadow AI introduces third-party risk that needs to be rigorously managed. The YDC-12.2 Third-Party Risk Management control is mapped to relevant articles in DORA and the EU AI Act in Hyperproof. DORA, in particular, has a major emphasis on TPRM, and we believe that regulators have not fully appreciated the implications of Shadow AI.
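To show what such a control-to-regulation mapping might look like, here is a hypothetical crosswalk record for YDC-12.2. This is a sketch, not Hyperproof’s data model; the EU AI Act article is the one named in this post, and the DORA chapter reference is illustrative and should be verified against the regulatory text.

```python
# Hypothetical crosswalk record for the YDC-12.2 TPRM control -- a sketch,
# not Hyperproof's data model. Verify article references against the
# regulatory texts before relying on them.
control_mapping = {
    "control_id": "YDC-12.2",
    "control_name": "Third-Party Risk Management",
    "mappings": [
        {"regulation": "EU AI Act",
         "reference": "Article 9 (Risk Management)"},       # named in this post
        {"regulation": "DORA",
         "reference": "Chapter V (ICT third-party risk)"},  # illustrative
    ],
    "evidence": ["vendor AI questionnaire", "MSA AI clauses"],
}

for m in control_mapping["mappings"]:
    print(f'{control_mapping["control_id"]} -> {m["regulation"]}: {m["reference"]}')
```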


At a minimum, vendor Master Services Agreements (MSAs) need to be updated to include AI clauses (the vendor names shown are illustrative only); a simple gap check is sketched below.
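The sketch checks a vendor register for the presence of baseline AI clauses. The clause names and vendor records are hypothetical placeholders; the actual clause list should come from counsel and the firm’s TPRM policy.

```python
# Hypothetical baseline AI clauses for vendor MSAs; the real list should
# come from counsel and the firm's TPRM policy.
REQUIRED_AI_CLAUSES = {
    "ai_feature_disclosure",     # vendor must disclose embedded AI features
    "training_data_opt_out",     # customer data excluded from AI training
    "ai_incident_notification",  # vendor must report AI-related incidents
}

def msa_gaps(vendor_clauses: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per vendor, the required AI clauses missing from its MSA."""
    return {
        vendor: REQUIRED_AI_CLAUSES - clauses
        for vendor, clauses in vendor_clauses.items()
        if REQUIRED_AI_CLAUSES - clauses
    }

# Illustrative vendor register (names are placeholders).
register = {
    "vendor-a": {"ai_feature_disclosure"},
    "vendor-b": set(REQUIRED_AI_CLAUSES),
}
print(msa_gaps(register))  # vendor-a is missing two clauses
```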


Regulators Need to Provide Further Guidance on Shadow AI
Shadow AI will become a bigger issue as more vendors embed AI into their applications. TPRM teams will likely need to consider additional steps, including updating their vendor questionnaires. Risk Management teams also need to assess their Risk Appetite for Shadow AI: will they accept this risk, mitigate it through mechanisms like MSA updates, or pursue some combination of the two?

Regulators also need to determine whether to issue guidance under the EU AI Act and DORA to account for the very real risks associated with Shadow AI.

Fairness & Accessibility
Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control ID: 5.1

Ensure that AI systems are fair and manage harmful bias.

Component: Address Fairness and Accessibility
Regulation: EU AI Act – Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors

Detect Data Poisoning Attacks
Control ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems.

Component: 10. Improve Security
Control: 10.4 Avoid Data and Model Poisoning Attacks
Regulation: EU AI Act – Article 15 – Accuracy, Robustness and Cybersecurity

Vendors
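To make the control concrete, here is a minimal sketch of one common heuristic for detecting label-flip poisoning: flag training points whose labels disagree with most of their nearest neighbors. This is an illustrative example under assumed data, not a YDC or vendor implementation; production defenses combine several such signals with data provenance checks.

```python
import numpy as np

def flag_label_outliers(X: np.ndarray, y: np.ndarray, k: int = 5,
                        threshold: float = 0.8) -> np.ndarray:
    """Flag points whose label disagrees with >= `threshold` of their k
    nearest neighbors -- a simple heuristic for label-flip poisoning."""
    flags = np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                    # exclude the point itself
        neighbors = np.argsort(dists)[:k]
        disagreement = np.mean(y[neighbors] != y[i])
        flags[i] = disagreement >= threshold
    return flags

# Illustrative: two clean clusters plus one flipped (poisoned) label.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5], [0.2, 0.5]])
y = np.array([0, 0, 0, 1, 1, 1, 1])          # last label is flipped
print(flag_label_outliers(X, y, k=3))        # flags only the last point
```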

Improve Security
Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor
Control ID: 1.1

Appoint an executive who will be accountable for the overall success of the program.

Component: 1. Establish Accountability for AI
Regulation: EU AI Act

Vendors