
Paris AI Action Summit made divisions between major powers’ approaches “even clearer”: EU AI governance consultant Valéria Silva

Following the entry into force of the EU AI Act last August, participants in four different Working Groups have spent months putting together guidance for providers of general-purpose AI models, including models with systemic risk, on how to comply with the AI Act via the General-Purpose AI Code of Practice (Code).

The Code is due to be finalised and published on May 1, 2025, and has already gone through multiple rounds of review and deliberation. Ahead of the publication of the third draft on Monday, ai fray spoke with Valéria Silva, a senior independent consultant working on AI governance from a law and policy perspective.

As well as helping draft the Code as part of the independent multi-stakeholder expert group, Mrs. Silva contributes to the OECD AI Policy Observatory as an external expert and to the research think tank 4iP Council as a member of the Advisory Committee. She previously worked in the OECD Legal Department and served as Chief of Staff and Head of the International Department at Brazil’s competition authority, CADE.

In an exclusive interview with ai fray, she discussed the behind-the-scenes process of the Code’s formation, the likelihood of the Act having a Brussels effect, the chances of the new U.S. Administration adopting something like the Code, and whether the current Code is “future-proof” enough.

Thank you for taking the time to speak with us, Valéria. Could you give us an overview of how the Code is being put together and what each Working Group is in charge of?

The Code is built in three levels, each more specific than the last:

  1. Commitments: The broad promises that companies signing the Code agree to follow
  2. Measures: The specific actions companies must take to fulfil these commitments
  3. Performance Indicators: Practical ways to measure compliance (for example, whistleblower protection is measured by looking at what can be reported, available reporting channels, and safeguards against retaliation)

Four specialized Working Groups focus on different aspects:

  • Group 1: handles both transparency and copyright-related rules (this group has different chairs and co-chairs addressing each of these two topics separately)
  • Group 2: focuses on risk assessment for systemic risks
  • Group 3: deals with technical risk mitigation for systemic risks
  • Group 4: works on governance risk mitigation for systemic risks

Each group runs its own meetings to gather feedback on the latest draft. These meetings follow a structured format: stakeholders get three minutes each to present their views, and the most common questions get addressed. All participants can also submit written feedback by a certain deadline.

Thank you. And what happened in the most recent plenary meetings for the Working Groups last month?

The second draft of the Code drew varied feedback from different stakeholders. The new version shows clear progress, with a better structure and a stronger focus on specific measures and performance indicators. Some participants noted areas where the Code needs to better align with existing EU laws, like the taxonomy for systemic risks. Many felt they needed more time to review the draft, partly because the EU is working to a very tight schedule for publication of the Code.

Different groups brought different priorities to the discussion. Civil society representatives pushed for stronger compliance measures and better protection of fundamental rights under EU law. Industry groups argued for more flexibility.

These contrasting views also show up clearly in debates about risk assessment. Civil society wants mandatory external evaluators to check for systemic risks, arguing this would reduce bias compared to internal company assessments. Industry representatives counter that such requirements are too burdensome and should be voluntary.

What is the likelihood that the EU AI Act will have a Brussels effect? And what would be the conditions for that to happen?

Will the EU AI Act become a global standard for AI regulation? It’s too early to tell. For the Act to influence rules in other countries, several key elements need to work well first within Europe itself.

  1. The first challenge is getting all the regulatory pieces to fit together smoothly. The main law, detailed rules, guidelines, and technical standards must work in harmony – like pieces of a complex puzzle that need to align perfectly.
  2. The second hurdle is ensuring consistent implementation across different authorities. Each EU country can choose how to set up its regulatory bodies, and we’re already seeing varied approaches. Spain created one central AI agency, while Finland spread the responsibility across ten different authorities covering sectors like transport, energy, and communications. All these national bodies, plus the EU-level institutions like the AI Board and AI Office, need to interpret and apply the rules consistently.
  3. The third crucial element is making compliance practical, especially for smaller companies. The procedures need to be effective enough to protect people but not so complicated or expensive that they burden startups and small businesses. Getting this balance right will be essential for the Act’s success.

The EU is attempting something unprecedented – creating comprehensive AI regulation. Whether other countries follow this model will depend largely on how well these pieces come together in practice.

And what about beyond the EU?

The global AI regulatory landscape resembles an intricate patchwork, with jurisdictions pursuing divergent approaches across multiple dimensions.

China, the U.S., and the EU showcase how differently major powers approach AI governance. China maintains tight central control while intensively developing technology, prioritizing social stability and national security. The U.S. is taking a different path – while Biden’s AI Executive Order touched on various protections, the new Administration is increasingly focused on maintaining its tech leadership rather than controlling risks. The EU takes a third path with its comprehensive AI Act, creating mandatory rules that apply across all sectors and focusing on protecting fundamental rights.

These differences are causing difficulties in achieving alignment for international cooperation. Major initiatives such as the G7 Hiroshima AI Process and the UK’s Bletchley Park Summit are happening with little coordination between them.

The recent February 2025 Paris AI Action Summit made these divisions even clearer. While trying to steer coordinated AI development that respects human values, the summit suffered a setback when the U.S. and UK refused to sign a declaration supporting “open, inclusive, transparent, ethical, safe, secure and trustworthy AI.” Meanwhile, 60 other countries, including major players like France, Japan, Australia, Canada, China, and India, backed the document.

The European Commission’s simultaneous withdrawal of the AI Liability Directive after two years of negotiation, due to a lack of agreement among EU Member States on its adoption, is another example of the shifting ground we currently observe in AI governance.

So what can we expect moving forward on an international basis?

Countries are taking very different paths to regulate AI. The challenge is clear: we’re trying to control a technology that flows freely across borders using laws that stop at national boundaries.

Each major power has chosen its own way forward. The EU is creating comprehensive, mandatory rules that apply to all sectors of the economy. China is mixing broad regulations with specific rules for different industries. Japan is keeping AI governance flexible with voluntary guidelines. The U.S. is letting market forces drive innovation to stay ahead in the global race. These different approaches are already leaving gaps in how we handle AI risks.

Looking ahead, no one knows how global AI regulation will develop. Major economies fundamentally disagree on how much oversight is needed – some want strict controls while others prefer minimal rules. Even in relevant international forums such as the G7 and G20, countries cannot agree on basic questions about how strictly AI should be regulated. This makes it very hard to create the coordinated global approach we need to properly manage the technology.

The Biden Administration’s Executive Order has been revoked and, on January 23, replaced by an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence”. Where does this leave the U.S., and can you see a future in which it adopts something like the Code?

The U.S. approach to AI regulation has shifted significantly. The previous Biden Executive Order (EO 14110) aimed to balance innovation with protecting consumers, workers, minorities, and national security from AI risks.

The recent EO issued by the Trump Administration marks a clear change in direction by revoking the Biden EO. It drops the previous protections for individuals and keeps only the references to national security concerns. The main goal is to push U.S. leadership in global AI development. A new executive group will create an AI Action Plan, and existing OMB guidelines from the Biden era will be updated to match this new approach.

What does this mean for current programs? While the full picture is not yet clear, initiatives focused on trustworthy AI and on protecting vulnerable groups are likely to be scaled back or eliminated. We can see hints of this broader shift in recent tech industry changes – Meta’s decision to scale back content moderation, in alignment with the new administration, shows more than just a return to free-market principles. It suggests a move to strip away even voluntary safeguards, as shown by Meta replacing independent fact-checking with a looser community-driven system. Misinformation and disinformation are primary AI risks in this context.

What issues still remain open or unaddressed in the current draft of the EU AI Code?

The second draft of the Code still faces several key challenges.

  1. One of them is finding the right balance. The Code needs to set out specific, effective measures that clearly guide AI Act implementation while being broad enough to work across different sectors. These measures must also be proportionate, meaning they should work for companies of all sizes – from small startups to global corporations – which is itself a challenge. The Code addresses this by establishing exceptions for SMEs and startups.
  2. Furthermore, stakeholder views often reflect diverging standpoints. For example, on systemic risk assessments (Commitment 6), industry groups argue that the current requirement for assessments every six months is too burdensome, while civil society groups strongly support it.
  3. Several technical aspects also need refinement and additional clarity. The Code will have to specify what transparency measures are needed in practice and what documentation should be public. The final outcome needs to help copyright holders spot infringement while protecting companies’ trade secrets.
  4. Measurement is another challenge. The Code needs to create performance indicators in areas where metrics are still evolving, such as measuring energy consumption.
  5. The Code also needs better integration – since different experts wrote different sections, some concepts need to be harmonized for consistency.
  6. There is also the challenge of preventing loopholes. The Code must be specific enough to prevent gaming the system while remaining flexible enough to work across sectors.

But the ultimate test will be whether the Code helps providers clearly understand how to meet their AI Act obligations. Simply repeating what is in the Act is not enough, as companies need practical guidance on implementation.

One high-level principle that the European AI Office says it is following while drafting the AI Code of Practice is making it “future-proof”. How future-proof do you think the current draft Code is? How do you go about ensuring this, and what could be added to the current draft to ensure it remains this way?

Those involved in drafting the Code are taking steps to ensure the document stays relevant as AI technology evolves.

First, there is a focus on core principles and desired outcomes instead of specific technologies. When particular technologies are mentioned (such as robots.txt), this is done on an exceptional basis, only where seen as strictly necessary.
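For context, robots.txt is the plain-text file that website operators use to signal which automated crawlers may access their content, and it is one of the machine-readable opt-out mechanisms discussed in the copyright debate around AI training data. The sketch below is purely illustrative – the crawler name and URLs are hypothetical placeholders, not anything referenced in the Code – and simply shows how a crawler could honour such a signal using Python’s standard library.

```python
# Illustrative sketch only: checking a robots.txt opt-out before crawling.
# "ExampleAIBot" and the example.com URLs are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetches and parses the site's robots.txt

page = "https://example.com/articles/some-article"
if robots.can_fetch("ExampleAIBot", page):
    print("robots.txt permits crawling this page")
else:
    print("the site has opted out of crawling by this user-agent")
```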

Second, the aim is to strike a careful balance between being specific enough to be useful while staying general enough to work across different sectors and situations. This has worked better in some areas than others – for instance, copyright holders are currently not satisfied with measures 2.6 and 2.7, arguing that they are too abstract.

Third, one of the roles of the AI Office is to keep the Code effective. The Office will monitor how the Code works in practice and adapt it as needed, helping it stay current with technological changes.

These approaches aim to create a Code that can evolve alongside AI technology, rather than becoming outdated as soon as it is finalised. We will only be able to measure its level of success once the Code begins to be implemented. Again, the challenge is to keep it up to date in the face of a technology that is growing exponentially, with new advances now occurring only months apart.