• AI Governance Overview
  • 358 pages and 90 vendors
  • 90 controls and 25 case studies
  • Mappings to EU AI Act and NIST AI RMF
Vertical Line
  • Agentic AI Governance
  • 19 case studies
  • 11 Agentic AI platforms
  • Companion to AI Governance Comprehensive

Given Recent Geopolitical Developments, I Believe there is a High Likelihood that the Full Implementation of the EU AI Act will be Significantly Delayed. I Used Perplexity AI’s Deep Research to Validate My Hypothesis. Here’s What I Found.

Sunil Soares, Founder & CEO, YDC March 6, 2025

EU AI Act Implementation Timeline

The EU AI Act entered into force on August 1, 2024, and will be fully applicable two years later on August 2, 2026, with some exceptions:

  • Prohibitions and AI literacy obligations entered into application on February 2, 2025
  • The governance rules and the obligations for general-purpose AI models become applicable on August 2, 2025
  • The rules for high-risk AI systems – embedded into regulated products – have an extended transition period until August 2, 2027

Perplexity AI Deep Research

Perplexity AI’s Deep Research is meant to save hours of time by conducting in-depth research and analysis on the user’s behalf. When you ask a Deep Research question, Perplexity performs dozens of searches, reads hundreds of sources, and reasons through the material to autonomously deliver a comprehensive report.



I used Deep Research to help me determine if there was a high likelihood that the full implementation of the EU AI Act would be significantly delayed.

This report is not meant to be a personal assessment of the acceptability of AI regulation. It is meant to provide an assessment of what is likely to happen versus what should happen.


Initial Prompt to Perplexity Included Geopolitical Considerations

My initial prompt asked Perplexity to determine the odds that the main implementation of the EU AI Act would be watered down. I asked Perplexity to consider public statements from politicians in the U.S. and the European Union including the balance of AI risks and innovation as well as threats from China.


Perplexity Determined that the EU AI Act Will Likely Face Delays

Perplexity AI’s Deep Research determined that the main implementation of the EU AI Act in August 2026 will face significant delays, modifications, or selective enforcement. The report cites significant industry lobbying, U.S. competition, and threats to global AI investment as reasons for its findings.



Political Pressures from the United States

Perplexity cited the speech by the U.S. Vice President as well as the Stargate investment to indicate that European leaders need to keep the region competitive for global AI investments.



European Leadership Perspectives

Perplexity cited several sources including the European Commission President, the French President, and the new European Commission to highlight a softening in the European position on AI regulation.


Industry Lobbying

Perplexity also highlighted industry lobbying and delays in the Code of Practice for General Purpose AI Models.



Previous Concessions and Modifications

Perplexity also listed the legislative history of the EU AI Act including potential vulnerabilities to lobbying pressures.


Overall Assessment of Perplexity’s Analysis

My overall assessment of Perplexity AI’s Deep Research is that the analysis is top notch.

  1. Time Savings
    Perplexity saved me several hours of research and analysis in substantiating my initial hypothesis.

  2. Extensive Citations
    Perplexity used 67 sources to support its findings.

  3. Accurate Sources
    I checked key (not all of) Perplexity sources and they appeared accurate.

  4. Chinese Risks Not Fully Highlighted
    Perplexity could have done a better job highlighting the risks to European AI development due to China’s DeepSeek. However, its overall findings would not have changed much.

Fairness & Accessibility

Component

Component ID: 5.0

Mitigate bias and manage AI accessibility.

List of Controls:

  • Bias
  • Accessibility
Mitigate Bias
Control
ID: 5.1

Ensure that AI systems are fair and manage harmful bias.
Component
Sub-Control
Regulation
 
Source
Address Fairness and Accessibility EU AI Act -Article 10(2)(f)(g) – Data and Data Governance (“Examination of Possible Biases”)

Vendors

Detect Data Poisoning Attacks
Control

ID: 10.4.1

Data poisoning involves the deliberate and malicious contamination of data to compromise the performance of AI and machine learning systems.

Component
Control
Regulation
Source
10. Improve Security10.4 Avoid Data and Model Poisoning AttacksEU AI Act: Article 15 – Accuracy, Robustness and Cybersecurity 

Vendors

Improve Security
Component

Component ID: 10

Address emerging attack vectors impacting availability, integrity, abuse, and privacy.  

List of Controls:

  • Prevent Direct Prompt Injection Including Jailbreak
  • Avoid Indirect Prompt Injection
  • Avoid Availability Poisoning
    • Manage Increased Computation Attack
    • Detect Denial of Service (DoS) Attacks
    • Prevent Energy-Latency Attacks
  • Avoid Data and Model Poisoning Attacks
    • Detect Data Poisoning Attacks
    • Avoid Targeted Poisoning Attacks
    • Avoid Backdoor Poisoning Attacks
    • Prevent Model Poisoning Attacks
  • Support Data and Model Privacy
    • Prevent Data Reconstruction Attacks
    • Prevent Membership Inference Attacks
    • Avoid Data Extraction Attacks
    • Avoid Model Extraction Attacks
    • Prevent Property Inference Attacks
    • Prevent Prompt Extraction Attacks
  • Manage Abuse Violations
    • Detect White-Box Evasion Attacks
    • Detect Black-Box Evasion Attacks
    • Mitigate Transferability of Attacks
  • Misuse of AI Agents
    • Prevent AI-Powered Spear-Phishing at Scale
    • Prevent AI-Assisted Software Vulnerability Discovery
    • Prevent Malicious Code Generation
    • Identify Harmful Content Generation at Scale
    • Detect Non-Consensual Content
    • Detect Fraudulent Services
    • Prevent Delegation of Decision-Making Authority to Malicious Actors

Identify Executive Sponsor

ID : 1.1 

Appoint an executive who will be accountable for the overall success of the program.

ComponentRegulationVendors
1. Establish Accountability for AIEU AI Act 
We use cookies to ensure we give you the best experience on our website. If you continue to use this site, we will assume you consent to our privacy policy.