The YDC team analyzed the Top 50 Commercial-off-the-Shelf (COTS) applications used within Higher Education. The goal of the analysis was to help the Chief Data Officer of a major public university align with their Chief Information Officer on AI risk management.
Shadow AI Governance and YDC_AIGOV Agents
We define Shadow AI as applications where vendors have added artificial intelligence capabilities to their application suite without the customer organization fully understanding the overall impact on its AI risk. We used the YDC_AIGOV Agents to automate the research of AI policies across these higher education vendors.
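The YDC_AIGOV Agents' internals are not described here, so the following is only a minimal illustrative sketch of how such policy research could be automated: fetch each vendor's published AI or privacy policy and do a rough keyword triage. The vendor names, URLs, and keyword phrases below are hypothetical placeholders, not the dataset or logic used in the actual analysis.

import requests

# Hypothetical vendor list and policy URLs -- illustrative only,
# not the actual vendors or sources used in the YDC analysis.
VENDOR_POLICIES = {
    "Vendor A (LMS)": "https://example.com/vendor-a/ai-policy",
    "Vendor B (SIS)": "https://example.com/vendor-b/privacy",
}

# Phrases suggesting a policy explicitly rules out training on customer data.
TRAINING_OPT_OUT_PHRASES = [
    "do not use customer data to train",
    "will not be used to train",
    "not used for model training",
]

def classify_policy(text: str) -> str:
    """Rough keyword-based triage of a policy document."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in TRAINING_OPT_OUT_PHRASES):
        return "embedded AI; policy prevents training on customer data"
    if "artificial intelligence" in lowered or " ai " in lowered:
        return "embedded AI; policy does not rule out training on customer data"
    return "no embedded AI disclosed"

for vendor, url in VENDOR_POLICIES.items():
    try:
        page = requests.get(url, timeout=10)
        page.raise_for_status()
    except requests.RequestException as exc:
        print(f"{vendor}: could not fetch policy ({exc})")
        continue
    print(f"{vendor}: {classify_policy(page.text)}")

In practice a human reviewer would still need to confirm each classification; the keyword pass only narrows down which policies deserve a close read.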
Almost Half the COTS Apps in Higher Education Had Embedded AI With Opaque AI Policies
As shown in the pie chart above, 24 apps (48%) had embedded AI with privacy policies that did not specifically prevent the vendors from using customer data to train their models. 21 apps (42%) did not have embedded AI. Only 5 apps (10%) had embedded AI with privacy policies that specifically prevented the vendors from using customer data to train their models.
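The arithmetic behind those percentages is straightforward; this short snippet simply reproduces the reported breakdown of the 50 apps.

from collections import Counter

# Categorization of the 50 COTS apps, as reported above.
categories = Counter({
    "embedded AI, policy does not prevent training on customer data": 24,
    "no embedded AI": 21,
    "embedded AI, policy prevents training on customer data": 5,
})

total = sum(categories.values())  # 50 apps
for label, count in categories.items():
    print(f"{label}: {count} apps ({count / total:.0%})")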
Vendor Responsible AI Policies Had An Irresponsible Lack of Detail
As shown in the image above, the vendors' Responsible AI policies were extremely high-level. We have anonymized vendor names so that our analysis can be forthright.
For example, a Top EdTech vendor published its Trustworthy AI Approach with the following text:
“Our Trustworthy AI program is aligned to the NIST AI Risk Management Framework and upcoming legislation such as the EU AI Act….
Fairness: Minimizing harmful bias in AI systems is a Trustworthy AI Principle”
The Lack of Detail in Responsible AI Policies Would Make It Extremely Difficult To Conduct AI Risk Assessments
This limited level of detail (at least in what was publicly available) would make it very hard for the university to adequately assess the level of AI risk within these apps.
We have observed this trend across companies and industries. More to come on this topic.