Context: In September, the Irish Data Protection Authority, also known as the Irish supervisory authority (IE SA), asked the European Data Protection Board (EDPB) to issue an opinion on the applicability of the GDPR to the development and deployment of AI models (September 4, 2024 IE SA request). The IE SA’s request came after it had received several complaints from data subjects across Ireland and the wider European Economic Area, raising questions about the extent to which the processing of personal data associated with the development of AI models complies with the GDPR. It also noted that an increasing number of data controllers are incorporating AI models into their business operations, creating a need for supervisory authorities to regulate the processing of personal data associated with the development of AI models. While some European supervisory authorities had already taken stances on the issue, the IE SA highlighted that the EDPB had not yet established a common position.
What’s new: The EDPB yesterday published its opinion in response to the IE SA’s questions. Among other things, the opinion clarifies that for an AI model to be considered anonymous, both of the following must be insignificantly likely: the direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to develop the model, and the obtaining, intentionally or not, of such personal data from queries. It also reminds supervisory authorities that controllers must balance their interests with data subjects’ rights and always ensure that processing personal data for AI under “legitimate interest” satisfies a three-step test: determine a lawful, clearly defined, and genuine interest; ascertain that the processing is necessary to achieve that interest; and finally, verify that individuals’ rights and freedoms do not override the interest.
Direct impact and wider ramifications: The EDPB’s opinion has already received reactions from a wide variety of individuals and organisations across the EU, who have mostly welcomed its responses. It is too soon to say how supervisory authorities will apply the opinion in their regulatory activities, but it is one step closer to Europe-wide regulatory alignment on AI.
In summary, the IE SA’s questions asked:
- When and how is an AI model considered anonymous?
- How can controllers demonstrate the appropriateness of legitimate interest as a legal basis in both the development and deployment phases?
- What are the consequences of the unlawful processing of personal data in the development phase of an AI model on the subsequent processing or operation of the AI model?
This is the EDPB’s opinion:
To gather input for this opinion, the EDPB had an exchange with the EU AI Office and held a dedicated stakeholders’ event.
There are four key takeaways from its opinion:
- Whether an AI model is anonymous must be assessed on a case-by-case basis: AI models trained with personal data cannot, in all cases, be considered anonymous. In fact, the threshold for AI models to be considered anonymous is quite high. The EDPB opines that both the likelihood of direct extraction of personal data used for the model’s development and the likelihood of obtaining – intentionally or not – such personal data from queries must be “insignificant”. Controllers who carry out these assessments must therefore ensure there is minimal risk of extracting personal data during an AI model’s use.
- Legitimate interest: Controllers must meet strict criteria, balancing their interests with data subjects’ rights. Using personal data for AI under “legitimate interest” requires satisfying a strict three-step test. That test includes:
- Determining a lawful, clearly defined, and genuine interest.
- Ascertaining that processing is necessary to achieve that interest.
- Verifying that individuals’ rights and freedoms do not override the interest.
- Unlawful initial processing may be cured by anonymisation: AI models built on unlawfully processed personal data may face legal and operational risks unless fully anonymised after training. Where a model is fully anonymised, the lawfulness of the processing carried out in the deployment phase should not be impacted by the unlawfulness of the initial processing.
- Supervisory authority powers: If a supervisory authority finds that an AI model was developed unlawfully under the GDPR, it has the power to order the controller to delete the model or to allow data subjects to opt their data out.
Market reactions
The EDPB’s opinion has already received reactions from a wide variety of individuals and organisations across the EU, including Claudia Canelles Quaroni, senior policy manager at the Computer & Communications Industry Association, who has stated:
“The EDPB’s confirmation that legitimate interest is a lawful basis under GDPR for processing personal data in the context of AI model development and deployment marks an important step towards more legal certainty.”
Silvio Mario Cucciarrè at Rödl & Partner notes that this opinion sets a foundation for harmonised practices across the EU, paving the way for “responsible AI development”. “As AI adoption accelerates across industries, aligning innovation with GDPR principles ensures both compliance and public trust,” he says.
Meanwhile, Daniela Birnbauer at Schoenherr states:
“While the GDPR doesn’t explicitly mention AI, its provisions are highly relevant to AI systems that process personal data. Organizations must ensure they have a legal basis for every instance of data processing, whether for development, training, or deployment.”