Context: Industry initiatives to promote responsible AI development and use reduce the extent to which lawmakers and regulators feel compelled to make and enforce rules. Such initiatives are voluntary, but they benefit from the depth of their members’ technical knowledge and commercial understanding. Thus far, AI-generated deepfakes have not been a major factor in the major 2024 elections, some of which have already been held. Earlier this year, tech companies announced the AI Elections Accord, and Google joined the Coalition for Content Provenance and Authenticity (C2PA) (February 18, 2024 ai fray article).
What’s new: On Thursday, the Coalition for Secure AI (CoSAI) was announced at the Aspen Security Forum (July 18, 2024 press release by OASIS). It is described as “an open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems” and “[h]osted by the OASIS global standards body.” The objective is to “foster a collaborative ecosystem to share open-source methodologies, standardized frameworks, and tools.” At this stage, this does not involve the creation of technical standards (such as interfaces between AI systems and security-testing software), but that could still become part of the agenda depending on what CoSAI’s contributors decide.
Direct impact: The organization is off to a good start with “founding Premier Sponsors” Google, IBM, Intel, Microsoft, NVIDIA, and PayPal, and “additional founding Sponsors” including Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz. It is an open-source community project, so it is open to participation by others.
Wider ramifications: It remains to be seen whether other AI providers will join this effort or whether some may elect to pursue an alternative approach (or not to collaborate with other industry players at all).
“To ensure trust in AI and drive responsible development, [CoSAI seeks] to develop and share methodologies that keep security at the forefront, identify and mitigate potential vulnerabilities in AI systems, and lead to the creation of systems that are Secure-by-Design.” That is more of a mission statement than a specific technical plan. CoSAI has identified “a patchwork of guidelines and standards which are often inconsistent and siloed” as a problem and intends to streamline and harmonize. But what exactly CoSAI will put in place remains to be decided over the course of the project.
It’s a reasonable assumption that the founding members exchanged ideas to the point of reaching a meeting of the minds. Still, they will have to flesh out the details in the months and years ahead.
These are CoSAI’s initial three workstreams:
- Software supply chain security for AI systems: enhancing composition and provenance tracking to secure AI applications (see the illustrative sketch after this list).
- Preparing defenders for a changing cybersecurity landscape: addressing investments and integration challenges in AI and classical systems.
- AI security governance: developing best practices and risk assessment frameworks for AI security.
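To make the first workstream a bit more concrete: composition and provenance tracking generally means recording what an AI artifact was built from (base model, training data, code) and attesting to the integrity of what ships. The following Python sketch is a minimal, hypothetical illustration of that idea; CoSAI has not published any format, and all field names here (loosely inspired by software-supply-chain attestations such as SLSA/in-toto) are assumptions for illustration only.

```python
# Minimal sketch of provenance tracking for an AI model artifact.
# Hypothetical record format, NOT a CoSAI specification.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_provenance_record(model_path: Path, base_model: str, code_commit: str) -> dict:
    """Assemble a simple provenance record for a model artifact."""
    return {
        "artifact": {
            "name": model_path.name,
            "sha256": sha256_of(model_path),
        },
        # "materials": what the artifact was built from
        "materials": {
            "base_model": base_model,
            "training_code_commit": code_commit,
        },
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_artifact(model_path: Path, record: dict) -> bool:
    """Re-hash the artifact and compare it against the recorded digest."""
    return sha256_of(model_path) == record["artifact"]["sha256"]


if __name__ == "__main__":
    model = Path("model.safetensors")       # hypothetical artifact name
    model.write_bytes(b"\x00" * 16)         # stand-in bytes so the sketch runs end-to-end
    record = build_provenance_record(model, "example-base-7b", "abc1234")
    Path("model.provenance.json").write_text(json.dumps(record, indent=2))
    assert verify_artifact(model, record)   # tampering with the file would fail this check
```

In practice, such a record would also be cryptographically signed so that consumers can verify not just the integrity of an artifact but the identity of its producer; agreeing on conventions of that kind is the sort of outcome this workstream could plausibly produce.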
Given that OASIS is well-known as a standards development organization (with an open-source approach to the relevant intellectual property), ai fray reached out to learn whether there are any plans at this stage to develop technical standards, such as interfaces between AI systems and security-testing software. A spokeswoman for OASIS replied:
“CoSAI’s initial focus is on best practices and recommended methodologies. They have not ruled out creating standards in the future—or setting up separate OASIS Technical Committees to do so—but they have no immediate plans.”
The project is open to participation by everyone, sponsor or not, and the door remains open to additional sponsors.
The biggest missing names are Apple (which generally does not appear to be interested in joining AI industry initiatives) and Meta, but Europe’s leading LLM developer Mistral would also fit the profile of the companies already on board.