In-depth reporting and analytical commentary on artificial intelligence regulation. No legal advice.

European Commission’s third draft of General-Purpose AI Code of Practice: finer KPIs but ‘still risks undermining the EU’s digital competitiveness’

Context: Last month was quite eventful for EU AI policy. Alongside several other proposal withdrawals, the European Commission (EC) dropped its AI Liability Directive, after two years of negotiation, due to the lack of agreement on its adoption by EU Member States (PDF). A few days later, at the AI Action Summit in Paris, EC President Ursula von der Leyen announced a brand new “EU InvestAI” initiative, which aims to mobilise €200 billion in AI investment, including a new European fund of €20 billion for four AI gigafactories (February 11, 2025 ai fray article). Meanwhile, since the EU AI Act (January 22, 2024 ai fray article) entered into force last August, four Working Groups have been putting together guidance for general-purpose AI model providers, including those whose models pose systemic risks, on complying with the new legislation: the General-Purpose AI Code of Practice (Code). The previous version, the second draft, was published (PDF) in December and, according to EU AI governance consultant Valéria Silva, showed “clear progress, with a better structure and a stronger focus on specific measures and performance indicators” compared to the first draft (PDF). Mrs. Silva told ai fray in an interview last month that the Code still divided stakeholders, however: while some argued it needed to better align with existing EU laws, others called for more flexibility (February 14, 2025 ai fray article).

What’s new: The EC today published the third draft of its Code (March 11, 2025 EC press release). The deadline for feedback is March 30. While this is not the final version (only minor tweaks are expected in the next iteration), there are several key differences between it and the second draft, including:

  • More specific guidelines on copyright compliance (around text and data mining, compliance with Union law, and the obligations of AI model providers);
  • A further refined taxonomy for systemic risks, first introduced in the second draft; and
  • Standardised protocols and more explicit key performance indicators (KPIs), building on the risk-mitigation commitments for AI providers introduced in the second draft.

Direct impact and wider ramifications: Once finalised, the Code will be presented in a Closing Plenary and is expected to be assessed by the AI Office and AI Board, approved by the EC via an Implementing Act, and published on May 2. CCIA Europe has already expressed its disappointment with this latest iteration. If the draft remains as it is, there are “serious issues”, including far-ranging obligations regarding copyright and transparency, which would “threaten trade secrets”, as well as burdensome external risk assessments, it said in a statement today. The Code still risks directly undermining the EU’s digital competitiveness, CCIA Europe added.

In her interview with ai fray last month, Mrs. Silva noted that there are three levels upon which the Code is being built:

  • Commitments: The broad promises that companies signing the Code agree to follow
  • Measures: The specific actions companies must take to fulfil these commitments
  • Performance Indicators: Practical ways to measure compliance (for example, whistleblower protection is measured by looking at what can be reported, available reporting channels, and safeguards against retaliation)

This latest draft is built around a concise list of high-level commitments, each accompanied by more detailed measures to implement it. There are two commitments related to transparency and copyright that apply to all providers of general-purpose AI models, and a further 16 related to safety and security that apply only to providers of models classified as “general-purpose AI models with systemic risk”.

Alongside today’s publication of the third draft, the Chairs and Vice-Chairs of the Working Groups published a dedicated executive summary and an interactive website, which aim to help stakeholders provide feedback on the draft in writing and in the upcoming discussions.

The key differences

As noted in the introduction, there are a number of key differences between the December draft and today’s updated iteration, including the following:

  • Introduction of a user-friendly Model Documentation Form for transparency requirements: the form standardises data recording and simplifies compliance.
  • Refined KPIs: KPIs are more measurable and standardised, more emphasis is placed on how AI providers must document and demonstrate compliance, and KPIs are grouped more effectively under the commitments for Transparency, Risk Assessment, and Risk Mitigation.
  • Simpler, reduced copyright compliance measures: more specific guidelines are provided for text and data mining (with a greater emphasis on how AI model providers should respect copyrighted content), compliance with Union law (more detail on tracking sources of training data), and the obligations of AI model providers (a clearer differentiation between open-source and proprietary models in terms of documentation).
  • Refined taxonomy for systemic risks: the expanded taxonomy clarifies what constitutes systemic risk under EU law, the methods for evaluating AI models to determine systemic risk, and the reporting obligations for companies developing high-impact AI systems.
  • Confirmation that the AI Office will be publishing additional guidance to clarify critical aspects of the AI Act, including:
    • Definitions of general-purpose AI models
    • Responsibilities along the value chain
    • Application to models placed on the market before August 2025
    • Exemptions for free and open-source licensed models

Initial reactions

In a statement received by ai fray today, CCIA Europe’s Senior Policy Manager, Boniface de Champris, said the third draft “continues to raise significant concerns”. Mr. de Champris noted that the new draft makes “limited progress” from its “highly problematic predecessor” and still risks directly undermining the EU’s digital competitiveness.

He believes that “serious issues remain”, including far-ranging obligations regarding copyright and transparency, which would “threaten trade secrets”, as well as burdensome external risk assessments that are still part of this latest iteration. The actual purpose of the Code, according to Mr. de Champris, is to help companies comply with the obligations set out in the AI Act, but “significantly more work” is needed to achieve this objective.

An array of voices across the market have also already taken to LinkedIn to share their thoughts on the third draft, including August Debouzy’s Eden Gall, who noted that AI providers will now benefit from a simplified structure with clearer, high-level commitments and practical measures. He said that a user-friendly model documentation form contained in the document makes transparency “straightforward”. Providers classified as posing systemic risks now also receive targeted guidance on risk assessment, cybersecurity, and incident reporting, Mr. Gall added.

Bruno Schneider, co-founder and chairman of the board at the European BlockTech Federation, pointed to what he believes are three significant points in the new draft: 

  • “Robots.txt and Machine-Readable Protocol are standard protocols used to instruct web crawlers (bots) on how to crawl and index web pages. This is a common method for website owners to control access to their content. There is also a requirement to make ‘best efforts’ to comply with other machine-readable protocols, such as asset-based or location-based metadata.
  • Effectiveness of the rights reservation mechanisms? While the draft provides guidelines, there may be concerns about their practical implementation and enforcement.
  • The draft includes a specific measure to exclude recognized ‘piracy domains’ from scraping. This is a positive step towards protecting copyrighted material by preventing automated systems from accessing and potentially distributing content from known infringing sites.”

Once the Code is published, website owners and content creators will need to ensure their sites comply with ‘robots.txt’ and other machine-readable protocols, and that they abide by these guidelines properly, if they want to protect their digital assets effectively, Mr. Schneider wrote.
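To illustrate the kind of machine-readable rights reservation Mr. Schneider describes, a website owner can signal an opt-out to AI training crawlers via robots.txt. This is a minimal sketch, not language from the Code itself: the user-agent tokens shown (GPTBot, Google-Extended) are examples published by individual crawler operators, and site owners should check each operator’s documentation for the tokens it actually honours.

```
# robots.txt – example rights reservation against AI training crawlers
# (illustrative tokens only; verify against each crawler operator's docs)

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers may continue to index the site normally
User-agent: *
Allow: /
```

Note that robots.txt is a voluntary convention; under the draft Code, providers commit to honouring such signals, but the file itself does not technically block access.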