
OpenAI under fire, looming deepfake dangers and a flurry of safety standard-setting: the top AI governance and litigation developments of 2024

The last 12 months were crucial for the global artificial intelligence regulatory space, with governments across the world forming taskforces, enacting legislation and issuing multi-million-dollar fines to the biggest actors in AI – all in an effort to keep up with, and support, what is now arguably the leading item on political agendas. Those same actors also fought to keep their shares of a rapidly booming market and formed coalitions to address the dangers that come with this still-nascent industry.

Here, then, is ai fray’s timeline of every key event in the AI space in 2024:

January

Governance and merger control
  • An 892-page EU-internal document on the AI Act, first published by tech reporter Luca Bertuzzi, juxtaposed the positions held by the Commission, Parliament and Council with the compromise text agreed upon between leaders of the three institutions. At the time, there were rumours concerning a potential blocking minority in the Council. The document itself contained nothing that would make the EU economy more innovative – instead, the bill threatened to create unpredictable regulatory risks.
  • The U.S. Federal Trade Commission announced discovery letters sent to Microsoft, Google and Amazon about Microsoft’s partnership with OpenAI and the other two companies’ relationships with Anthropic. The letters were issued under FTC Act § 6(b), which leaves them open to opposition by the recipients.
  • The U.S. National Institute of Standards and Technology (NIST) published its taxonomy on Adversarial Machine Learning, which covers methods for mitigating and managing the consequences of attacks, as well as the challenges to consider across the lifecycle of AI systems. The document is also intended to inform the creation of standards and the management of AI system security.
  • The World Health Organisation released new guidance on the ethics and governance of large multi-modal models (LMMs) – a fast-growing type of GenAI technology with applications across health care – containing 40 different recommendations for governments and LMM developers. These include a recommendation that governments ensure LMMs used in health care meet ethical obligations and human rights standards affecting, for example, a person’s dignity, autonomy or privacy, and one that developers engage indirect stakeholders – including medical providers, scientific researchers, health care professionals and patients – from the early stages of AI development in a structured, inclusive and transparent design process.
  • The UK’s Central Digital and Data Office published the Generative AI framework for HM Government, which defines ten common principles to guide the safe, responsible and effective use of generative AI in government organisations. The framework builds on the AI regulation White Paper released in 2023.
  • Italy’s privacy regulator, the Garante per la protezione dei dati personali (commonly referred to simply as “the Garante”), notified OpenAI of breaches of EU data privacy law on Italian territory. OpenAI had 30 days to file its defence. In December, after the company failed to prove to the authority that it had not breached EU data privacy rules, the Garante fined OpenAI €15 million.
  • In Saudi Arabia, the government’s Data & Artificial Intelligence Authority published Generative AI guidelines for both the public and government entities, highlighting the challenges and considerations associated with the use of GenAI, along with recommendations for addressing them.
  • Hungary also joined the global AI conversation when its antitrust agency, the Gazdasági Versenyhivatal (GVH), launched a market investigation into concerns such as whether the resource-intensive nature of AI poses a threat to fair competition in digital markets and whether certain data collection and advertising practices are “dangerous for consumers.”

February

Governance and merger control
  • Companies took charge of governing AI in February: 20 leading technology companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X, formed a coalition to “detect and counter harmful AI content” under the name of the AI Elections Accord. Google also joined Adobe, Microsoft, Sony, the BBC and Amazon as a steering committee member of the Coalition for Content Provenance and Authenticity.
  • China’s National Information Security Standardization Technical Committee (TC260) published its first document targeting GenAI, entitled Basic Safety Requirements for GenAI Services (in Chinese).
  • The UK government published its response to the AI Regulation White Paper Consultation, explaining its approach to the regulation of AI in the UK and emphasising its ambitions to “maintain its position as a global leader in AI” with investments of over £100 million in the industry. It also recruited a team dedicated to monitoring cross-sectoral AI risks, which will also evaluate the effectiveness of interventions by both the government and regulators.
  • Japan’s Ministry of Economy, Trade and Industry established an AI Safety Institute (AISI) tasked with conducting investigations into the evaluation of AI safety and the creation of standards. AISI will also consolidate the latest information from industry and academia, and promote collaboration among AI-related companies and organisations.
  • The French competition authority, the Autorité de la concurrence (AdlC), launched a public consultation on the role that ownership of key inputs and positions in adjacent markets play in opportunities for entry into, or expansion within, the Generative AI market.
  • The Association of Southeast Asian Nations published a guide on AI Governance and Ethics, encouraging the alignment and interoperability of AI frameworks across the region.

March

Governance and merger control
  • The European Parliament approved the AI Act, less than four months after member states agreed on a final draft, with only 46 MEPs (out of 618) voting against it. During a plenary debate the day before, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) also announced the creation of an AI Office.
  • The Organisation for Economic Co-operation and Development (OECD) published guidance on the updated definition of an AI system. The new definition reads:
    • “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
  • The United Nations General Assembly adopted its first-ever resolution to regulate AI, led by the U.S. and backed by more than 120 other Member States. The resolution highlights the respect, protection and promotion of human rights in the design, development, deployment and use of AI – as well as AI systems’ potential to accelerate and enable progress towards reaching the 17 Sustainable Development Goals.
  • In the U.S., the White House Office of Management and Budget (OMB) issued its first government-wide policy to “mitigate AI risks and harness its benefits”. As of December 1, 2024, federal agencies were required to implement safeguards when using AI in a way that could affect the rights or safety of U.S. citizens.
  • Also in the U.S., the state of Utah enacted SB 149, known as the Artificial Intelligence Policy Act, which aims to regulate the use of AI across a variety of sectors.
  • A French interministerial expert commission on AI published a report recommending policy measures relating to AI. It described AI as both a threat and an opportunity for France, and warned Europe not to fall further behind than it already has.
Litigation
  • Three book authors, Abdi Nazemian, Brian Keene, and Stewart O’Nan, sued Nvidia for copyright infringement over its NeMo model.
  • In response to OpenAI’s February motion to dismiss parts of its copyright infringement complaint, the NYT filed an opposition defending all of its claims as originally asserted and requesting the opportunity to amend the complaint if necessary.

April

Governance and merger control
  • Nine U.S. federal departments and agencies published a joint statement on the enforcement of civil rights, fair competition, consumer protection and equal opportunity laws with respect to automated systems (the term the statement uses instead of “AI”). “Many automated systems rely on vast amounts of data to find patterns or correlations…while these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination,” the statement warned.
  • Not long after that joint statement, the EU–U.S. Trade and Technology Council (TTC) also issued a joint statement on AI, stating that the two sides intend “to cooperate on interoperable and international standards”.
  • The UK Competition and Markets Authority had a busy month, releasing an “update paper” on AI Foundation Models and launching a triad of AI investigations into partnerships between Microsoft and Mistral, Microsoft and Inflection, and Amazon and Anthropic. In its paper, the CMA voiced its fears over “winner takes all” dynamics previously seen in tech markets.
Litigation
  • Eight U.S. newspapers sued Microsoft and OpenAI in the Southern District of New York, alleging they “purloin millions” of articles without permission. The plaintiffs included the Mercury News, Denver Post, Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, South Florida Sun Sentinel, and the New York Daily News. Their suit overlaps with the NYT’s action against OpenAI and Microsoft in several ways, including that it was filed in the same court.

May

Governance and merger control
  • The Council of the European Union greenlit the EU AI Act, making it the first law of its kind to be enacted.
  • The European Commission established the AI Office, which is tasked with monitoring the implementation of the rules by general-purpose AI (GPAI) model developers – requiring them to take corrective measures when non-compliant – as well as facilitating the uniform application of the AI Act across Member States, among other responsibilities.
  • Governments of the world’s largest economies signed the Seoul Declaration for Safe, Innovative and Inclusive AI at the AI Seoul Summit 2024. Signatories include Australia, Canada, the European Union, France, Germany, Italy, Japan, South Korea, Singapore, the UK, and the U.S.
  • After initiating a review of Microsoft’s partnership with Mistral AI, the UK’s CMA found the collaboration did not qualify for investigation under the country’s merger control regime. However, the decision laid out the standard under which a wide range of commercial partnerships could be subject to review under those rules.
  • NIST launched the Assessing Risks and Impacts of AI (ARIA) programme, which is dedicated to quantifying how a system functions within societal contexts once it is deployed.
  • Also in the U.S., the state of Colorado enacted SB 205, regulating the use of high-risk AI.
  • In Singapore, the AI Verify Foundation published the Model AI Governance Framework for Generative AI, which outlines nine dimensions forming a “basis for global conversation to address GenAI concerns while maximising space for continued innovation”. The report also reiterates the need for policymakers to work with industry, researchers and like-minded jurisdictions.
Litigation
  • In its ongoing lawsuit against OpenAI and Microsoft, the NYT filed a motion for leave to amend the original complaint to “correct errors in the identification of copyright registration numbers for previously asserted works” and to “add approximately 7 million additional works to the suit”. Later that week, OpenAI wrote a letter to the judge presiding over the case, raising two discovery issues – including that the NYT’s lawyers had so far declined to provide all documents and communications relating to the creation of Exhibit J of the complaint. The NYT responded to that letter a week later, noting that it would not be relying on those particular examples of regurgitation.
  • In the book authors’ action against OpenAI, a noteworthy admission by the defendant came to light in May: it did not deny creating and using two training datasets named books1 and books2. It admitted to having deleted them, but disputed their relevance.

June

Litigation
  • The NYT-OpenAI saga continued, with the NYT filing a motion to compel the defendant to produce certain categories of documents, raising the issue of OpenAI allegedly refusing to answer its request for information “concerning OpenAI’s transition to a for-profit company”. The letter read: “OpenAI admitted that while no individual entity has ‘transitioned’ from non-profit to for-profit status, OpenAI did create at least one for-profit entity.”

July

Governance and merger control
  • NATO released its revised AI strategy, which aims to accelerate the use of AI technologies within NATO in a safe and responsible way.
  • The OECD launched a public consultation on AI risk thresholds.
  • The four leading Western competition authorities – U.S. Department of Justice, U.S. Federal Trade Commission, European Commission and UK Competition & Markets Authority – issued a joint statement on Generative AI, agreeing to compare notes in light of the inherently cross-border nature of Generative AI (although they will continue to act and decide independently).
  • The Coalition for Secure AI, backed by Google, IBM, Intel, Microsoft, NVIDIA, PayPal, Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz, was announced at the Aspen Security Forum. The coalition presented itself as an “open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems” that is “[h]osted by the OASIS global standards body”. Its main objective is to “foster a collaborative ecosystem to share open-source methodologies, standardized frameworks, and tools”.
  • Shortly after Keir Starmer became the UK’s prime minister, his government introduced AI legislation plans in the King’s Speech, stating that it “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
  • China’s Ministry of Industry and Information Technology announced plans to launch over 50 AI standards by 2026, following TC260’s first GenAI-focused document published in February.
  • The United Arab Emirates published its Charter for the Development and Use of AI, which aims to, among other things, ensure the use of artificial intelligence in an ethical and responsible manner; protect privacy and data security; enhance transparency and accountability in its use; and improve the UAE’s global standing in technology and innovation.
  • The African Union published its Continental AI Strategy.
  • In New Zealand, the government published its own high-level approach to AI regulation.
Litigation
  • An open-source developer class action’s DMCA § 1202 claim was dismissed (order on motions to dismiss) – for the second time. The case was brought against GitHub in 2022 over its Copilot tool, which uses OpenAI technology to suggest code snippets of approximately 150 characters to developers without providing copyright management information such as the name of the original author.

August

Governance and merger control
  • The EU’s AI Act entered into force – soon after, organisations began signing up to the EU AI Pact to signal their commitment to responsible AI.
  • Australia enacted the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, criminalising the non-consensual sharing of sexually explicit deepfakes.
  • 17 Latin American and Caribbean countries signed the Cartagena de Indias Declaration for the governance, construction of AI ecosystems and promotion of AI education in an ethical and responsible manner in Latin America and the Caribbean. The declaration stated their commitment to promote AI governance frameworks and ecosystems for a safe, inclusive, ethical and responsible development of AI. The signatories included Argentina, Brazil, Chile, Colombia, Costa Rica, Panama, Paraguay, Peru, and Uruguay.
  • In the U.S., the state of Illinois enacted HB 3773, regulating the use of AI in employment.
Litigation
  • Another copyright infringement class action was filed by three book authors – Andrea Bartz, Charles Graeber and Kirk Wallace Johnson – in the Northern District of California against Anthropic, a startup valued in the tens of billions of dollars that has received backing from Amazon and Google.

December

Litigation
  • OpenAI was embroiled in more copyright infringement litigation filed by Canada’s top news organizations in the Ontario Superior Court of Justice. The publications – which include Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press, CBC and PNI Maritimes – asked the court to grant them an injunction, as well as CAD 20,000 in statutory damages for every article that OpenAI allegedly used unlawfully to train its ChatGPT software.
  • Indian advertising firm Mash Audio Visuals Pvt Ltd. filed a petition in the Delhi High Court, seeking the prohibition and punishment of the sale of AI-generated images created using artists’ original works without their permission. The action was filed as public interest litigation, seeking the amendment of India’s Copyright Act 1957 and its Information Technology Rules so that they cover cases of cheating by impersonation using AI or deepfakes.