In-depth reporting and analytical commentary on artificial intelligence regulation. No legal advice.

In the run-up to 2024 elections, tech companies form and expand coalitions to combat deepfakes

Context: AI technologies can now generate deepfake videos of real persons cost-efficiently and even in real time. Just this month, a fraud case in Asia became known in which a finance-department employee paid out $25 million after receiving instructions from a fake version of the company’s chief financial officer, who appeared alongside other fakes in a video conference (February 4, 2024 CNN article). Deepfakes often cannot be identified even by other (defensive) AI-based tools. With major elections scheduled this year, particularly (but not only) in the United States, there is increasing concern that foreign governments could use the technology to influence Western election outcomes.

What’s new: On Friday (February 16, 2024), “20 leading technology companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X [formerly known as Twitter] pledge[d] to work together to detect and counter harmful AI content.” The announcement of the AI Elections Accord (PDF) was made at the Munich Security Conference (MSC). The previous week, Google joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member (February 8, 2024 press release), thereby teaming up with such companies as Adobe, Microsoft, Sony, the BBC and Amazon.

Direct impact: The two initiatives are complementary but have different profiles. The AI Elections Accord has a broader membership and an agenda that goes beyond making content provenance identifiable, but it is merely a high-level political commitment, whereas the C2PA is a standard-setting organization (SSO) with an elaborate set of rules.

Wider ramifications: Legislative measures would have been too slow for the 2024 elections. Industry alliances and voluntary commitments were the only short-term option. But some politicians believe that industry action is not enough and that governmental rule-setting will be inevitable. In this context it’s worth noting that the European Union’s AI Act, which is about to be passed into law, reflects regulatory zeal but does nothing to combat the threat from deepfakes used to manipulate elections. By coincidence, there are also EU Parliament elections this year (June 6-9).

The industry is coming together step by step as companies realize that it’s in their common interest to promote responsible AI and, in that regard, security. The C2PA was founded in 2021 (February 22, 2021 press release) and effectively constituted a merger between two parallel projects that had started in 2019: the Adobe-led Content Authenticity Initiative (CAI) and a Microsoft-BBC project named Project Origin. Arguably, Google’s decision to join at this critical juncture is of comparable significance.

Google continues to pursue other AI-related initiatives that do not conflict with the C2PA’s policies, such as its SynthID technology, which watermarks AI-generated content (Google DeepMind webpage on SynthID). Security researchers doubt that current watermarking techniques can reliably identify content as having been generated with AI tools, as watermarks of that kind have already been neutralized or forged (October 3, 2023 Wired article). Another problem is that the use of AI tools doesn’t necessarily make content untrustworthy: AI may have been used for a perfectly legitimate purpose.
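SynthID’s actual watermarking method has not been published, so it cannot be reproduced here. Purely as an illustration of why simple watermarks are fragile, the following hypothetical Python sketch embeds a mark in the least significant bits of pixel values and shows how even minimal re-encoding noise degrades it; all names and numbers are invented for the example and have nothing to do with how SynthID works.

    # Toy illustration only: naive least-significant-bit (LSB) watermarking,
    # NOT SynthID's (undisclosed) method. It shows why simple watermarks are
    # easy to neutralize.
    import random

    def embed(pixels: list[int], mark: list[int]) -> list[int]:
        """Overwrite the least significant bit of each pixel with a watermark bit."""
        return [(p & ~1) | mark[i % len(mark)] for i, p in enumerate(pixels)]

    def detect(pixels: list[int], mark: list[int]) -> float:
        """Fraction of pixels whose LSB matches the expected watermark bit."""
        hits = sum((p & 1) == mark[i % len(mark)] for i, p in enumerate(pixels))
        return hits / len(pixels)

    random.seed(0)
    image = [random.randrange(256) for _ in range(10_000)]
    mark = [1, 0, 1, 1, 0, 0, 1, 0]

    marked = embed(image, mark)
    print(detect(marked, mark))   # 1.0: the watermark is clearly present

    # Mild re-encoding noise (+/-1 per pixel) is visually negligible but
    # destroys most of the embedded signal.
    noisy = [min(255, max(0, p + random.choice((-1, 0, 1)))) for p in marked]
    print(detect(noisy, mark))    # well below 1.0: the naive mark no longer reads reliably

Robust watermarking schemes are designed to survive such transformations, but the Wired reporting cited above suggests that even those have been circumvented in practice.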

Critics say that the kind of content provenance identifier developed and promoted by the C2PA (which enables warnings that, for instance, a video attributed to the BBC may actually have been modified somewhere) isn’t immune to manipulation either. But it is technically more feasible to show the source of a given piece of content, and to indicate that it has been edited along the way, especially if everyone in the content production and distribution chain participates, from the maker of a recording device (camera, smartphone) to a publisher to a social network. In the end, those who consume content will have to use their judgment: even if it’s guaranteed that certain content wasn’t manipulated with deepfake techniques, one still has to decide whether to trust the original source.
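To make the mechanism concrete, here is a minimal, hypothetical Python sketch of a hash-linked provenance chain in the spirit of the C2PA approach. It is not the actual C2PA manifest format (which relies on certificate-based digital signatures); HMAC with per-actor secrets merely stands in for real signatures, and the actors and steps are invented for the example.

    # Simplified, hypothetical provenance chain: each step is signed and linked
    # to the hash of the previous step. Not the real C2PA manifest format.
    import hashlib
    import hmac
    from dataclasses import dataclass

    @dataclass
    class ProvenanceEntry:
        actor: str          # e.g. "CameraVendor", "BBC"
        action: str         # e.g. "captured", "edited", "published"
        content_hash: str   # hash of the content after this step
        prev_hash: str      # hash of the previous entry ("" for the first step)
        signature: str      # stand-in for a real cryptographic signature

    def entry_hash(e: ProvenanceEntry) -> str:
        payload = f"{e.actor}|{e.action}|{e.content_hash}|{e.prev_hash}|{e.signature}"
        return hashlib.sha256(payload.encode()).hexdigest()

    def sign(secret: bytes, *fields: str) -> str:
        return hmac.new(secret, "|".join(fields).encode(), hashlib.sha256).hexdigest()

    def append_step(chain, actor, secret, action, content: bytes):
        """Record one step (capture, edit, publish) and link it to the previous one."""
        content_hash = hashlib.sha256(content).hexdigest()
        prev_hash = entry_hash(chain[-1]) if chain else ""
        sig = sign(secret, actor, action, content_hash, prev_hash)
        chain.append(ProvenanceEntry(actor, action, content_hash, prev_hash, sig))

    def verify(chain, keys, content: bytes) -> bool:
        """A broken link, an invalid signature, or a content mismatch marks the item as suspect."""
        prev = ""
        for e in chain:
            if e.prev_hash != prev:
                return False  # a step is missing or out of order
            expected = sign(keys[e.actor], e.actor, e.action, e.content_hash, e.prev_hash)
            if not hmac.compare_digest(expected, e.signature):
                return False  # not signed by the claimed actor
            prev = entry_hash(e)
        # the delivered content must match the hash recorded at the last step
        return bool(chain) and hashlib.sha256(content).hexdigest() == chain[-1].content_hash

    # Illustrative use: a capture and an edit step, then tampered material fails verification.
    keys = {"CameraVendor": b"secret-1", "BBC": b"secret-2"}
    chain: list[ProvenanceEntry] = []
    append_step(chain, "CameraVendor", keys["CameraVendor"], "captured", b"raw interview footage")
    append_step(chain, "BBC", keys["BBC"], "edited", b"broadcast cut of the interview")

    print(verify(chain, keys, b"broadcast cut of the interview"))  # True: chain intact
    print(verify(chain, keys, b"deepfaked segment"))               # False: content was altered
    print(verify(chain[:1], keys, b"deepfaked segment"))           # False: chain incomplete

The point of the sketch is the chain structure: any altered or missing step breaks the link back to the original recording, which is exactly the kind of gap described in the next paragraph.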

The C2PA approach will make it easier to disavow fake material. For instance, if someone faked a segment of a BBC interview with a presidential candidate, the campaign team would be able to point reporters to a missing link in the content provenance chain, which would be more meaningful than a denial alone. That is possible because the BBC is a C2PA member, as are other media organizations. But what if the fake involves a non-C2PA member, such as a TV station that decided not to participate? Then the situation is the same as before.

To enable universal adoption, the C2PA follows the World Wide Web Consortium’s (W3C) intellectual property policy. Simply put, the standard is free to use.

Some critics are concerned that content provenance identification could be used not only to ensure that media content can be trusted to come from a particular source, but also for the purpose of Digital Rights Management (DRM). For now, such criticism is not specific enough to say whether DRM will turn out to be a side benefit or an adverse effect of content provenance identification, or perhaps a non-issue in the end.

What is not an option is to do nothing. That’s what the following companies recognized by signing the AI Elections Accord at the MSC: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly known as Twitter).

What the press release calls “eight specific commitments” is set out in the three-page document publicly signed by the 20 companies (PDF). That document describes the AI Elections Accord as a “voluntary framework of principles and actions.” It is a political commitment to make commercially reasonable efforts. The vocabulary makes that clear, with wordings like “attempting to…”, “undertaking…efforts…to”, “engaging in…” and “supporting efforts to…” (which is not uncommon for a voluntary political commitment).

As Microsoft president Brad Smith explained in a corporate blog post on Friday, “[t]his deepfake challenge connects two parts of the tech sector”:

  • “companies that create AI models, applications, and services that can be used to create realistic video, audio, and image-based content” and
  • “companies that run consumer services where individuals can distribute deepfakes to the public.”

Some companies, such as Microsoft, do both.

The C2PA and the AI Elections Accord up the ante for bad actors. But there’s a lot of work left to be done. It’s a good sign that certain companies agree to collaborate in this context even if they disagree in other fields of policy making (or in competition enforcement in connection with cloud services, mobile platforms etc.). Not everyone is on board yet. A company that is notably absent from these initiatives for now is Apple. It remains to be seen whether it will join forces with its industry peers for the public good.

Some lawmakers believe that voluntary commitments by industry to “attempt” to do, “engage in” or “support” certain things may prove insufficient in the long run. A CNBC article on the AI Elections Accord quotes Silicon Valley’s Democratic state senator Josh Becker, who “do[es]n’t see enough specifics” and expects that “we will likely need legislation that sets clear standards.”

If everyone joined the C2PA, legislative intervention might be unnecessary.