In this blog, I will discuss the implications of Shadow AI for compliance with the EU AI Act and the EU Digital Operational Resilience Act (DORA) in the financial services sector, using the Hyperproof GRC Platform.
What is Shadow AI?
In a previous blog, I highlighted a recent YDC study at a mid-size bank that surfaced a number of applications with “Shadow AI.” We define Shadow AI as applications where vendors have added artificial intelligence capabilities to their application suite without the full knowledge of the customer company. The bank had 800 commercial-off-the-shelf (COTS) applications, of which 256 (32 percent) had embedded AI and did not exclude the bank’s data from AI training.
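An inventory review like the one in the study can be sketched as a simple filter over an application register. This is a minimal illustration only; the field names (`has_embedded_ai`, `data_excluded_from_training`) and example apps are assumptions, not the actual YDC schema or data.

```python
# Hypothetical sketch: flagging "Shadow AI" in a COTS application inventory.
# Field names and example applications are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CotsApp:
    name: str
    vendor: str
    has_embedded_ai: bool
    data_excluded_from_training: bool

def find_shadow_ai(inventory: list[CotsApp]) -> list[CotsApp]:
    """Apps where the vendor embedded AI and data is NOT excluded from training."""
    return [a for a in inventory
            if a.has_embedded_ai and not a.data_excluded_from_training]

inventory = [
    CotsApp("CRM Suite", "Vendor A", has_embedded_ai=True,  data_excluded_from_training=False),
    CotsApp("HR Portal", "Vendor B", has_embedded_ai=True,  data_excluded_from_training=True),
    CotsApp("Ledger",    "Vendor C", has_embedded_ai=False, data_excluded_from_training=True),
]

shadow = find_shadow_ai(inventory)
print([a.name for a in shadow])  # → ['CRM Suite']
print(f"{len(shadow) / len(inventory):.0%} of apps carry Shadow AI exposure")
```

In the bank's case, this kind of filter over 800 applications returned 256 hits, the 32 percent figure cited above.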
Why Does Shadow AI Need to be Governed?
Shadow AI has several implications for compliance and governance.
Hyperproof supports proof of compliance in the form of an AI Literacy Plan.
The AI Literacy Plan provides a tailored approach for different audiences, from the Board of Directors to everyone else in the company.
Third-Party Risk Management (TPRM)
Shadow AI introduces third-party risk that needs to be rigorously managed. The YDC-12.2 Third-Party Risk Management control is mapped to relevant articles in DORA and the EU AI Act in Hyperproof. DORA, in particular, has a major emphasis on TPRM and we believe that regulators have not fully appreciated the implications of Shadow AI.
At a minimum, vendor Master Services Agreements (MSAs) need to be updated to include AI clauses (the vendor names are illustrative only).
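An MSA review of this kind amounts to a gap check between the AI clauses a firm requires and the clauses each vendor agreement actually contains. The sketch below is a hypothetical illustration; the clause names are shorthand assumptions, not legal language or a Hyperproof feature.

```python
# Hypothetical sketch: checking vendor MSAs for required AI clauses.
# Clause identifiers are illustrative assumptions, not contract language.
REQUIRED_AI_CLAUSES = {
    "ai_disclosure",             # vendor must disclose embedded AI features
    "no_training_on_our_data",   # customer data excluded from model training
    "ai_incident_notification",  # vendor must report AI-related incidents
}

def msa_gaps(msa_clauses: set[str]) -> set[str]:
    """Return the required AI clauses missing from a vendor's MSA."""
    return REQUIRED_AI_CLAUSES - msa_clauses

# A vendor whose MSA only covers AI disclosure:
print(sorted(msa_gaps({"ai_disclosure"})))
# → ['ai_incident_notification', 'no_training_on_our_data']
```

The same gap list can feed the vendor questionnaire updates discussed below, so the TPRM team works from one set of required clauses.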
Regulators Need to Provide Further Guidance on Shadow AI
Shadow AI will become a bigger issue as more vendors embed AI into their applications. There are likely additional steps that TPRM teams need to consider, including updating their vendor questionnaires. Risk Management teams also need to assess their Risk Appetite for Shadow AI: will they accept the risk, mitigate it through mechanisms like MSA updates, or pursue some combination of the two?
Regulators also need to determine if they need to issue guidance under the EU AI Act and DORA to account for the very real risks associated with Shadow AI.
Existing regulatory and enforcement activity already bears on Shadow AI:

- EU AI Act Article 9 addresses Risk Management, which encompasses the risks of AI embedded in applications (Shadow AI).
- DORA has a major emphasis on Third-Party Risk Management (TPRM).
- U.S. Sectoral Laws do not make a distinction between Providers and Deployers of AI Systems, something that is the focus of the EU AI Act. For example, the U.S. Equal Employment Opportunity Commission (EEOC) has provided guidance that employers using third-party AI tools may potentially violate Title I of the Americans with Disabilities Act (ADA). This may happen if the employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. In addition, the AI may intentionally or unintentionally “screen out” an individual with a disability.
- Deployers of AI Embedded Applications also carry litigation risk. For example, the U.S. Department of Justice sued six of the nation’s largest landlords in addition to RealPage for an algorithmic pricing scheme that allegedly harmed renters.