
Text of EU AI Act published: last-minute inclusion of General Purpose AI Models raises questions, as do open-source exceptions

Context: The European Union has been working for a couple of years on its “Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” (2021/0106 (COD)) (briefly referred to as the AI Act).

What’s new: Euractiv tech reporter Luca Bertuzzi (profile), who has been following this legislative process from the get-go and made a name for himself in this context, has taken the extraordinary step of publishing a four-column 892-page EU-internal document that juxtaposes the Commission, Parliament and Council positions as well as the compromise text agreed upon between leaders of the three institutions (Google Drive link to PDF).

Direct impact: Normally, the EU institutions stand by those political agreements, but there are rumors concerning a potential blocking minority in the Council (which poses a greater threat to adoption than the likely level of resistance by Members of the European Parliament). Assuming that the current proposal gets adopted, it threatens hefty fines on the basis of a voluminous regulation that would confer far-reaching regulatory powers on the European Commission. Various definitions will be unclear, at least initially. The document is more of a framework that needs to be fleshed out through implementing acts and other measures as well as potentially frequent (relative to other legislative projects) updates. Some definitions and exceptions raise questions.

Wider ramifications: There is nothing in the entire bill that will make the European Union economy more innovative. It will, however, create regulatory risks of the unpredictable kind. It would have been preferable to strike a balance between regulation and innovation. At least one of the political leaders supporting this approach, internal market commissioner Breton, is presently more interested in positioning himself as a mover and shaker with a view to whatever his next role may be than in taking the time to make solid and well-considered laws. Other EU leaders may be more concerned about the composition of the European Parliament after this year’s elections, with far-right and far-left parties, particularly EU-skeptical parties, likely to win an unprecedented number of seats.

After quickly going over all 892 pages of the document mentioned above, ai fray will now share some general observations on how the compromise that is currently on the table (and which would be adopted unless there is an unusual development in the EU Council) came into being, and on what the implications are.

AI Office synonymous with EU Commission

A typical example of a political compromise that is nonsensical, yet attributable to a bargaining process, is the use of the term “Artificial Intelligence Office” (AI Office) in the document. What the European Parliament wanted was to create, under the Commission, a whole new EU institution to work on AI regulation. The Council (where the governments of the EU member states cast their votes) was against. What they agreed upon was to keep using the term AI Office in the document, though the definition clarifies that it effectively means the Commission.

Without going into much detail here, ai fray has identified a number of politically motivated and not always very competent-looking proposals in the Parliament’s position that either failed to make it into the compromise text or were diluted. The amendment-by-amendment voting process in the Parliament has the effect that all sorts of proposals from different wishlists get majority support. And the Parliament’s ideas are quite often not even feasible. For instance, the Council apparently had to make sure that military applications were excluded, as anything else would amount to the EU overstepping its competencies.

The Parliament’s ideas are a mix of very reasonable concerns (e.g., over discrimination against people with disabilities or potential harm to the environment), ideologically charged topics such as refugee and asylum matters, political goals such as “democracy” (e.g., “democratic control” of AI), and special rules for small and medium-sized enterprises (SMEs) without proposing anything that is likely to make a difference. It’s just about saying “we did something for SMEs” (meaningful or, more likely, not).

The comparison of the different institutions’ positions shows that the Council had to fight for regulatory restraint and for a less fear-focused approach. The key purpose of that legislation is to establish guardrails and safeguards, though it would have been preferable to see the EU focus on innovation at least as much as on regulation. In other major economies, there will probably be more of an effort to attain or defend technology leadership and maximize companies’ competitiveness. Be that as it may, the Parliament overemphasized the potential negative effects of AI, and the compromise text, while incorporating many of the buzzwords, is far more balanced.

The Council also had to oppose some of the Parliament’s proposals in order to ensure that AI remains available, within reason, for law enforcement purposes including (but not limited to) tracking down criminals. It is, of course, key to ensure that AI systems will not result in the wrong people being arrested (which would be a failure of such systems to do their job).

Control over how AI makes decisions is an objective, not 100% achievable (yet)

It is indisputably reasonable to set out as a goal that AI algorithms should not (to use just one example) discriminate based on race, gender or other criteria. Otherwise they would end up perpetuating or even exacerbating certain issues. But the assumption that today’s AI systems can keep clear of doing something that some person might consider discriminatory (though reasonable people may draw different lines) is not realistic. If it’s Generative AI (GAI), the impact of statistical correlations is inevitable. Artificial General Intelligence (AGI) doesn’t exist yet, but it will most likely be extremely difficult to exert complete control over the reasoning of a system that can only do its job if it has the freedom to develop its own problem-solving and optimization strategies.

When the political agreement was struck, the European Commission announced (which was no surprise at the time) the potential sanctions (EC statement):

“Fines would range from €35 million or 7% of global annual turnover (whichever is higher) for violations of banned AI applications, €15 million or 3% for violations of other obligations and €7.5 million or 1.5% for supplying incorrect information.”

December 9, 2023 press release by the European Commission (“Commission welcomes political agreement on Artificial Intelligence Act”)
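
To illustrate what “whichever is higher” means in practice, here is a minimal sketch (the percentages and floors are taken from the Commission statement quoted above; the turnover figure is hypothetical) of how the maximum fine for a prohibited-AI violation would be determined:

```python
def max_fine_prohibited_ai(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for violations of banned AI applications,
    per the Commission's December 9, 2023 statement: EUR 35 million or
    7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Hypothetical example: a company with EUR 2 billion in global annual turnover
# would face a ceiling of EUR 140 million (7% of turnover exceeds the EUR 35 million floor).
print(max_fine_prohibited_ai(2_000_000_000))  # 140000000.0
```

The same “whichever is higher” logic would apply, with lower percentages and floors (3%/€15 million and 1.5%/€7.5 million), to the other two tiers mentioned in the statement.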

Maybe there won’t be any excessive fines in the end, but for now there’s a problematic discrepancy between the huge risk that the AI Act, based on the compromise text, would pose to innovators versus the extent to which one can reasonably hold them responsible. A commercially-reasonable-efforts standard should apply rather than what would amount to strict liability.

Open-source exceptions

One of the Parliament’s “achievements” (and arguably the most questionable of them) is that those offering free and open-source software should be privileged. Their responsibilities and obligations would be reduced. But the same Parliament engaged in what almost constitutes fearmongering. It doesn’t make sense to be very afraid of bad actors on the one hand only to then make it easier for them to obtain AI systems they can manipulate for their criminal and other purposes.

The inconsistency can be attributed to the Parliament’s limited ability to understand technical matters and/or the amendment-by-amendment bargaining process, where some politicians (particularly some Greens and far-left ones, but also others) believe that open-source software should be treated preferentially. There are contexts in which open-source exceptions, or initiatives to promote the use and support the development of open-source software, could make sense. But if the question is what could go wrong with AI and the intent is to exercise control, putting out powerful AI software on an open-source basis means two things:

  • Theoretically it’s possible that weak spots in open-source code will be identified by someone who wouldn’t have been able to find them without that kind of access to the inner workings of the software.
  • Practically it’s a given that bad actors, from individual fraudsters to entire rogue states, will be able to modify open-source software and use it for criminal purposes, even though doing so would have been too difficult and time-consuming for them without access to the source code.

General Purpose AI Models

Originally, the regulation was supposed to distinguish between AI systems of different risk levels. But during the multi-year legislative process, AI innovation accelerated, particularly in terms of Generative AI becoming useful for ever more purposes and to ever more people and organizations. Innovation will likely continue to progress very rapidly, necessitating relatively frequent updates to the EU AI Act.

It is attributable to the rapid pace of innovation that EU policy makers then decided to introduce a whole new concept into the AI Act at the “trilogue” (final negotiations between Commission, Council and Parliament) stage: General Purpose AI Models. The definition is not clear; and what’s even less clear is how those who make such models can actually restrict and control how others will put them to use.

Apart from the rules relating to General Purpose AI Models, the AI Act makes at least a certain effort to distinguish between “provider” and “deployer” responsibilities. But when it comes to General Purpose AI Models, responsibility is shifted further upstream. That poses a threat to innovation as those who make such models may view (and if so, will rightly view) the EU as a riskier market in which to sell licenses than other jurisdictions.

Overlaps with existing rules: a regulation thicket

Throughout the document one can see that the EU actually (and not just in the view of its critics) has become a regulation thicket. In such fields as data privacy, it already has more granular rules and regulations in place than any other (at least any other Western) jurisdiction. Now comes the AI Act, parts of which relate to the same issues, such as privacy. It’s not clear why any AI-specific rules are needed in specific contexts where citizens are already protected by existing laws, other than having a basis for imposing a fine of up to 7% of global annual turnover.

The AI Act consistently seeks to clarify that any other relevant rules remain intact. It may ultimately be up to the European Court of Justice to reconcile potential conflicts (such as the risk of dual liability for the same action).

Zero ambition to foster innovation

The EU’s regulatory zeal in this area (where regulation is objectively needed, but that doesn’t mean that any regulation is necessarily bottom-line positive), with the end of the current Commission and Parliament term approaching fast, has resulted in a bill that deals with a wide range of issues. Even the word “tattoos” appears in it (twice).

The personal ambitions of one or more politicians are not hard to see.

What is impossible to find in the entire compromise text, however, is anything that would make the EU more competitive or more innovative. U.S. and EU per-capita GDP were pretty much on a par in 2008, and now the U.S. is ahead by 60%. The EU struggles to compete with Asian nations. And some emerging economies are closing the gap fast. The EU still has a large market to regulate, and that gives its politicians certain power. But looking at the overall situation, the intuitive thing would be for the EU to place a lot more emphasis on growth and competitiveness, where AI can play a key role.

Regulatory leadership in a context like this should be part of a broader agenda to strengthen the economy.