• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF
Vertical Line
  • Agentic AI Governance
  • 19 case studies
  • 11 Agentic AI platforms
  • Companion to AI Governance Comprehensive
Marquee Hyperlink Example
YDC_AIGOV Shadow AI Governance Agents to highlight risks associated with AI embedded within vendor applications

Shadow AI in Higher Education: Responsible AI Policies with an Irresponsible Lack of Detail

YDC_AIGOV Shadow AI Governance Agents discovered that almost half of the most popular COTS apps in higher education had embedded AI with opaque AI policies
Sunil Soares, Founder & CEO, YDC February 12, 2025
The YDC team conducted an analysis of the Top 50 Commercial-off-the-Shelf (COTS) applications within Higher Education. This analysis was to help the Chief Data Officer of a major public university gain alignment with their Chief Information Officer around AI risk management.

Shadow AI Governance and YDC_AIGOV Agents
We define Shadow AI as applications where vendors have added artificial intelligence capabilities into their application suite without the full understanding of the company as to the overall impact on AI risk. We used the YDC_AIGOV Agents to automate the process of researching the AI policies for these higher education vendors.

Almost Half the COTS Apps in Higher Education Had Embedded AI With Opaque AI Policies
As shown in the pie chart above, 24 apps (48%) had embedded AI with privacy policies that did not specifically prevent the vendors from using customer data to train their models. 21 apps (42%) did not have embedded AI. Only 5 apps (10%) had embedded AI with privacy policies that specifically prevented the vendors from using customer data to train their models.

Vendor Responsible AI Policies Had An Irresponsible Lack of Detail
As shown in the image above, the vendors’ Responsible AI policies were extremely high-level. We have anonymized vendor names to allow our analysis to be forthright. 

For example, a Top EdTech vendor published its Trustworthy AI Approach with the following text:
“Our Trustworthy AI program is aligned to the NIST AI Risk Management Framework and upcoming legislation such as the EU AI Act…. Fairness: Minimizing harmful bias in AI systems is a Trustworthy AI Principle”

The Lack of Detail in Responsible AI Policies Would Make It Extremely Difficult To Conduct AI Risk Assessments
This limited level of detail (at least what was available publicly) would make it very hard for the university to adequately assess the level of AI risk within these apps.

We have observed this trend across companies and industries. More to come on this topic.

Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control
ID: 5.1

Ensure that AI systems are fair and manage harmful bias.
Component
Sub-Control
Regulation
 
Source
Address Fairness and Accessibility EU AI Act -Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors

Detect Data Poisoning Attacks
Control

ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems.

Component
Control
Regulation
Source
10. Improve Security10.4 Avoid Data and Model Poisoning AttacksEU AI Act: Article 15 – Accuracy, Robustness and Cybersecurity 

Vendors

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

ID : 1.1 

Appoint an executive who will be accountable for the overall success of the program.

ComponentRegulationVendors
1. Establish Accountability for AIEU AI Act 
We use cookies to ensure we give you the best experience on our website. If you continue to use this site, we will assume you consent to our privacy policy.