In-depth reporting and analytical commentary on artificial intelligence regulation. No legal advice.

The mutually beneficial relationship between AI and standards: a bird’s-eye view

This article refers to some recent news, such as U.S. cooperation agreements with European partners and Google’s new membership in the International Press Telecommunications Council (IPTC), but it is primarily meant to provide a timeless overview that can be referenced in future standards-related contexts.

Policy makers around the globe have started to make rules for the creation, operation and use of AI technologies. Even if anyone wanted to stop that tide, it would be too late. Some of those rules can only be made (often because they can only be enforced) by governments, such as strict prohibitions of certain technologies, particular ways to use AI, or competition enforcement. Still, there are problems that can be solved – and improvements that can be achieved – through industry collaboration.

It is a pillar of ai fray’s editorial concept to look at public policy as well as industry-level collaboration (on top of policy debates and industry disputes). Two examples of voluntary multi-company initiatives that ai fray has previously commented on are the AI Elections Accord and the Coalition for Content Provenance and Authenticity (C2PA) (February 18, 2024 ai fray article).

The term “standards” has a broad meaning. It can mean legal standards in terms of requirements, or ethical standards in the form of principles that may be defined with greater or lesser specificity. In the technology industry, it most often relates to technical standards, which are conventions such as protocols that enable interoperability between products made by different companies.

On Friday (April 5, 2024), the Trade and Technology Council (TTC) of the U.S. federal government and the EU (represented by the European Commission) issued a joint statement (White House webpage) that also addressed AI policy. The two trade partners intend “to cooperate on interoperable and international standards.”

Earlier that week (on April 2, 2024), the United States signed a Memorandum of Understanding (MoU) with the UK concerning collaboration on AI safety. In that particular context, “standards” are not (or at least not primarily) about interoperability, but about “international standards for AI safety testing and other standards applicable to the development, deployment, and use of frontier AI models.”

The comparison between the United States’ two recent AI partnership announcements reflects the above-mentioned breadth of the term “standards.”

With a view to technical standards, there are different ways in which the AI revolution impacts standardization:

  • At this point, competing AI systems do not exchange data via interoperability standards that would connect systems built on different language models, and it may stay that way for a long time.
  • C2PA is an example of a standard necessitated by AI. It would also serve a very useful purpose in the absence of AI, but the fact that AI enables deep fakes to be produced quickly and cheaply made certain measures a must-have rather than just a nice-to-have.
  • On Thursday (April 4, 2024), the International Press Telecommunications Council (IPTC) announced that Google joined the organization (IPTC press release): “Google will take part in all decisions regarding IPTC standards and delegates will contribute to shaping the standards as they evolve.” Some IPTC standards define metadata for media content, while others facilitate the exchange of content such as news items. Metadata can also be used to identify content as AI-generated and to identify original sources.
  • Arguably, 5G (the latest cellular telecommunications standard currently in use) was the first major standard on which AI placed demands. With a view to autonomous driving (clearly an AI application), the reduction of latency was a major 5G design goal that entailed various changes to the way user equipment (here, the telecommunications system built into a car) communicates with base stations. Simply put, low latency helps to avoid accidents as a vehicle must be warned of an approaching danger in time to prevent a collision.
  • 6G (which may start to be deployed around the year 2030) will go further in that direction. It will be the first cellular telecommunications standard to be designed for the purpose of connecting smart devices. Some call it the “smart Internet of Things” (or should it be called the “Internet of Smart Things”?). It will be an AI-centric (r)evolution of an existing standard. The density of devices that can be served in a single network cell increased hugely from 4G to 5G, and will grow ten-fold with 6G. The applications that telecommunications engineers (working on 6G as we speak) have in mind include, but are not limited to, collaborative robots, fully autonomous driving (which admittedly was already envisioned for the 5G era), mixed reality, holographic displays, multi-sensory communication, and the possibility of sensor data being shared across devices so that, for instance, there will be no need for each car to scan the entire environment if others can already provide the results of their analysis.
  • 6G networks will be optimized by AI technologies, enabling faster, more efficient and more secure data transfers. AI will enable self-optimizing networks. In that regard, 6G will be an AI-powered standard (as well as a standard enabling AI applications).
  • Video compression standards may increasingly move away from the mathematical compression of bit sequences toward AI-based encoding and decoding. For the time being, but presumably not in perpetuity, bit compression still yields better results.
  • AI can be used to optimize even bit-based video compression for such purposes as increased energy efficiency.
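To make the IPTC metadata point above more concrete, here is a minimal sketch of how a “digital source type” field can flag content as AI-generated. The `trainedAlgorithmicMedia` and `digitalCapture` terms come from IPTC’s actual NewsCodes Digital Source Type vocabulary; the dictionary layout and the function name are simplifications for illustration, not an actual IPTC serialization format.

```python
# Hedged sketch: checking an IPTC-style "digital source type" field.
# The vocabulary URIs are real IPTC NewsCodes terms; the dict structure
# is a hypothetical simplification of embedded photo metadata.

TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)
DIGITAL_CAPTURE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture"
)

def is_ai_generated(metadata: dict) -> bool:
    """Return True if the metadata marks the content as AI-generated."""
    return metadata.get("digital_source_type") == TRAINED_ALGORITHMIC_MEDIA

# A synthetic image carrying both source attribution and a source-type flag:
synthetic_photo = {
    "creator": "Example News Agency",  # hypothetical original-source field
    "digital_source_type": TRAINED_ALGORITHMIC_MEDIA,
}

# A conventionally captured photo:
camera_photo = {
    "creator": "Example News Agency",
    "digital_source_type": DIGITAL_CAPTURE,
}

print(is_ai_generated(synthetic_photo))  # True
print(is_ai_generated(camera_photo))     # False
```

In real workflows such a flag would travel inside embedded metadata (e.g., XMP) and could be combined with provenance signatures of the kind C2PA standardizes.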

A single article cannot cover the wide range of interactions and interdependencies between AI and technical standards. Over time, ai fray will discuss specific examples of industry players working together under the umbrella of standard-setting organizations to enable AI and its safe and responsible use.