Context: In December 2023, secretive negotiations between the European Commission, the EU Council and the EU Parliament resulted in a political agreement (EC press release). Last month, the text on which the chief negotiators of those EU institutions had agreed was leaked (ai fray article). Normally such political agreements are rubber-stamped by the Council and the Parliament, but in this case there were reports of efforts by certain EU member states, particularly France, to renegotiate and potentially block the deal.
What’s new: On Friday (February 2, 2024), diplomats representing the 27 EU member states unanimously approved the compromise text (Euractiv article). The remaining steps are a committee vote in the European Parliament (EP), the EP’s plenary vote, and finally the formal confirmation of Friday’s vote at a meeting of EU ministers. The last-minute efforts to change the proposal or at least obtain clarifications showed that there is still a fair amount of disagreement over whether the EU will benefit, especially economically, from having been “the first continent” to comprehensively regulate AI, prioritizing speed over substance and quality. Public statements made in recent days by politicians and lawyers shed light on some of those issues.
Direct impact: Despite valid criticism and skepticism, the EU’s legislative machinery has passed the point of no return with Friday’s vote. Only in the Council was there a realistic chance of voices of reason applying the brakes. A majority of the European Parliament is unreceptive to the concerns that have reasonably been raised, all the more so with the elections only a few months away.
Wider ramifications: It appears unlikely that other major economies will engage in a race with the EU, much less a race to the bottom. Global agreements are going to have limited scope in the near term. Through its regulatory zeal, driven by politicians’ personal ambitions and ideology, the EU may accelerate its economic decline relative to the United States and Asia.
To begin with, it’s misleading in two ways when EU leaders claim that the AI Act, the enactment of which is now practically certain, will make “Europe” the “first continent” in the world to comprehensively regulate AI. The EU is not a continent: there was Brexit, and there are also countries such as Switzerland that are members of neither the EU nor the European Economic Area (the EU plus Norway, Iceland and Liechtenstein). But even if one equated “EU” with “Europe”, China actually adopted comprehensive AI regulations in July 2023, well ahead even of the EU’s interinstitutional political agreement. And while China is not a continent either, it is large enough to matter in its own right. By the same logic that ignores China, one could also ignore anything the United States does unless it were a NAFTA-wide deal.
What’s more important to bear in mind is that a first-mover advantage exists with respect to innovation, not regulation.
In a different context (standard-essential patents) covered by ai fray’s sibling publication ip fray (January 29, 2024 article), EU leaders also claim that they should somehow pre-empt other jurisdictions. But older law isn’t stronger law on the global stage. In the end, the EU can regulate only its own market, and others are free to do what they want. Those in the EU who want to be first to regulate like to point to the General Data Protection Regulation (GDPR), which has indeed resulted in popups that annoy web users around the globe, but those popups don’t have to be shown outside the EU. The GDPR is actually a failure and deprives EU-based internet users of access to some content: for some websites it turned out more efficient simply not to serve the EU market, as demand from that region did not justify the technical effort of implementing special compliance mechanisms.
The EU’s desire to pass regulations into law as quickly as possible is symptomatic of the fact that the EU has been left behind by the U.S. and parts of Asia when it comes to innovation, so its focus is increasingly on regulation. It doesn’t have much of a tech industry left (compared to the U.S. and China), but it does have a market of approximately 450 million people that it can regulate.
For the avoidance of doubt, ai fray does believe that AI requires (new) regulatory measures. The AI Act is a complex, multifaceted measure. There are various angles from which to analyze it, and it will have a diversity of practical implications. This article, however, focuses on the flaws of the agreed-upon text, the process that produced it, and the “regulate first” attitude that led to this result. Moreover, the AI Act is not set in stone in perpetuity.
EU’s economic decline
The Financial Times, which has long been (and may still be) the most widely read publication among Brussels leaders, published an article in June 2023 entitled Europe has fallen behind America and the gap is growing. The summary just below the headline says it all: “From technology to energy to capital markets and universities, the EU cannot compete with the US.” The FT then points to an article by the European Council on Foreign Relations that says this:
“In 2008 the EU’s economy was somewhat larger than America’s: $16.2 trillion versus $14.7 trillion. By 2022, the US economy had grown to $25 trillion, whereas the EU and the UK together had only reached $19.8 trillion. America’s economy is now nearly one-third bigger. It is more than 50 per cent larger than the EU without the UK.”
That even understates the problem. The International Monetary Fund (IMF) projects a per-capita GDP of US$43K for the EU, just about half of the projected U.S. per-capita GDP of US$83K. There are various reasons for that discrepancy, but the EU clearly can’t afford anything unnecessary that adversely affects its workers’ productivity. AI is increasingly going to be a productivity factor.
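As a quick back-of-the-envelope check of that “about half” claim, using the rounded IMF figures quoted above:

```latex
\frac{\text{EU per-capita GDP}}{\text{U.S. per-capita GDP}} \approx \frac{43{,}000}{83{,}000} \approx 0.52
```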
The FT article also notes that America is going to be way ahead in the development of AI technologies.
Influential MEP suggests the EU has already decided to content itself with regulating, not innovating
If the EU wanted to use its regulatory powers for the good of its economy, it would have to take measures that level the playing field instead of doing things that only serve to widen the gap. One EU regulatory initiative that was meant, and theoretically had the potential, to protect smaller European players from the overwhelming market power of digital gatekeepers (particularly Apple, Google and Amazon) is the Digital Markets Act (DMA). It could have redressed the balance for app developers that depend on a duopoly of mobile app stores (Apple’s App Store and the Google Play Store). But Apple has announced a set of rules that reduces the DMA to absurdity, making it impossible even for such a deep-pocketed and sophisticated player as Meta to derive any value (games fray article).
With the AI Act, there was never much of a DMA-style ambition to begin with, and it’s also hard to see how any AI regulation could have helped AI innovators in the EU in their dealings with U.S. counterparts such as major cloud providers. But the minimum goal should have been (and, if the EU weren’t on the wrong track, would have been) to ensure that
- EU-based developers of AI technologies are not disadvantaged vis-à-vis their rivals in other jurisdictions and
- EU-based users (“deployers”) of AI technologies can reasonably leverage them for productivity gains.
The AI Act fails to achieve that, and therefore threatens to have a negative impact on EU competitiveness. In fact, it now even creates an incentive for AI startups to seek greener pastures elsewhere, and tech startups already leave the EU all the time to set up shop in the United States.
In a German-language op-ed, German center-right MEP Axel Voss cautiously welcomed, in principle, the fact that the AI Act will now be passed into law, while also expressing important concerns. The opinion piece reads as though he does not quite dare to say how bad the situation is, but he does express mixed feelings and wants to be on the record as someone who warned against certain more or less foreseeable consequences.
Mr. Voss was the EP’s rapporteur on the 2019 copyright law reform, which was also controversial, especially among many young citizens concerned about “upload filters” and other potential measures that would inflict collateral damage. Whether or not one agreed with that particular law, Mr. Voss clearly has a DMA-style mindset: he would like to use European legislation and regulation to strengthen the European economy, while others don’t appear to care at all about the widening gap.
His article focuses very much on Europe’s digital industry being “left behind, brutally.” Just like ai fray, Mr. Voss notes that the AI Act does not take account of that transatlantic economic gap and, as he puts it rather dramatically, the bill does not reflect “Europe’s will to survive in the digital world.”
He goes on to say that even if the EU “had no more ambition and wanted to content itself with merely being a global regulatory agency, many of the statutes are far too imprecise and flawed.” Key terms such as “substantial modification of an AI system” weren’t clearly defined.
Regarding EU competitiveness, Mr. Voss notes “the problem facing many European AI developers of insufficient access to high-quality data because data protection rules massively impede [LLM] training.”
All of those points are valid in ai fray’s view. With respect to legal flaws, a LinkedIn post by Brussels-based White & Case partner Assimakis Komninos (“Artificial Intelligence Act — the mess that is in the coming”) highlights an inconsistency that ai fray’s analysis of the leaked text also noted: the text sometimes refers to the EU Commission and sometimes to an “AI Office” with respect to who should fulfill certain functions. ai fray attributed (and still attributes) this inconsistency to the fact that the European Parliament apparently wanted a new EU agency and the compromise was that the Commission would just set up a department for that purpose, but it is a flaw nonetheless. And Mr. Komninos, who prior to joining White & Case actually led a regulatory agency (the competition authority of Greece), rightly points out that it is unclear whether that “AI Office” will ultimately have more of an advocacy role or actual regulatory powers.
Kai Zenner, the chief of staff of MEP Voss, played a more important role in this legislative process than MEPs’ aides usually do (which is not to say they’re not important). In a German-language interview with heise online, Mr. Zenner addresses a lot of questions including this lack of clarity regarding responsibilities. It looks like the objective was for the AI Office to focus on General Purpose AI systems, while a separate AI Board composed of representatives of the EU member states would have other tasks. Mr. Zenner, too, notes that the document wasn’t updated accordingly. And he then explains that if a European company intended to enforce any of its rights under the AI Act against a large U.S. player, the latter’s army of lawyers would know how to capitalize on any loopholes.
EU agreement against French and German economic interests: how was that possible?
The Euractiv article mentioned above explains that the governments of France and Germany in particular “did not want to clip the wings to promising European start-ups like Mistral AI [from France] and Aleph Alpha [from Germany] that might challenge American companies in this space.” But the European Parliament insisted on strict rules for “General Purpose AI” systems such as ChatGPT regardless of what those rules would mean for Mistral or Aleph Alpha.
Normally, when the governments of France and Germany have a common interest, the two are in a strong position to influence the outcome of an EU legislative process. That was already the case prior to Brexit. The “Paris-Berlin axis”, as it is sometimes called (case in point, a Malta Independent article), carries a lot of voting weight, and MEPs from those countries are powerful in their political groups. The two countries obviously have to avoid the impression of dominating the EU like a duopoly. But their combined resistance to a proposal is normally very hard to overcome.
In connection with the AI Act, there were two factors that made the difference:
- Both countries had internal divisions over the AI Act. For instance, some French politicians were more interested in protecting copyright holders (a key goal of French policy in many areas) than in anything else. Germany is governed by the least popular coalition in its history, the “Ampel” (“traffic light”) of social democrats (red), Greens and libertarians (yellow). Only the libertarian minister in charge of transport and digital infrastructure had concerns. The Green minister of economic affairs is a writer and children’s book author by background, and the Greens are generally rather ideological about tech regulation.
- The EU Council (where the member states cast their votes) has a rotating presidency. In the second half of 2023, the Spanish government chaired the Council meetings and (not single-handedly, but with a lot of extra weight) represented the Council in the interinstitutional negotiations. Spain is not the most advanced country in Europe in economic terms, and particularly not with respect to the digital economy. Its current government is also very ideological. Apparently the Spanish government, in its Council presidency role, effectively presented France and Germany with a take-it-or-leave-it proposition.
For a while, France and Germany also had some support from Italy, but Italy itself doesn’t have a company like Mistral or Aleph Alpha.
What made the French government’s concerns even more interesting is the fact that the French member of the EU Commission, Thierry Breton, fished for credit more aggressively than anyone else after the interinstitutional agreement in December. It is widely known in Brussels that he is auditioning for higher office. In theory, there is one commissioner from each member state (a fact that makes the Commission an unwieldy decision-making body), but commissioners are supposed to have only EU, not national, interests in mind. In practice, it doesn’t work that way, and here it’s not that Mr. Breton put EU interests over French ones, but simply that he put his personal ambitions over responsible policy-making.
People like Mr. Voss and his chief of staff, Mr. Zenner, are obviously aware that the workings of the EU’s legislative machinery are not conducive to the bloc’s reputation and popularity among citizens. At the moment, the EU does not have to fear the next exit of a major member state, and that may be part of the reason why policy makers are acting the way they are. But there is a lot of fear in Brussels that there will be an unprecedented number of EU-skeptical MEPs after this year’s elections.
With respect to how the EU operates and the “democratic deficit” that has often been criticized, it’s also interesting to see that the EU Commission decided to set up that AI Office even before the AI Act had been passed into law. They are in a rush, but not a gold rush for the EU economy: they’re in a rush to regulate. It is clearly problematic when the executive branch of government acts as if a law that is still in the making had already been enacted. In an acute crisis (like the recent pandemic), that could be excused. There is no excuse here. It just shows that the AI Act was pushed through by politicians who wanted to claim to have done something innovative in the regulatory field, even if it’s bad for EU innovation and competitiveness.