In-depth reporting and analytical commentary on artificial intelligence regulation. No legal advice.

Italian authority Garante serves privacy charge sheet on OpenAI, may raise the case in EU data protection task force

Context: On March 31, 2023, Italy’s privacy regulator Garante per la protezione dei dati personali (GPDP, frequently referred to as “il (= the) Garante”) temporarily blocked ChatGPT in its country over alleged violations of EU privacy law as it applies in Italy (March 31, 2023 bilingual press release by the Garante). On April 12, 2023, the ban was temporarily suspended subject to OpenAI’s compliance with a variety of requirements, which primarily amounted to educating users about what ChatGPT does and their related rights, as well as an age verification mechanism (April 12, 2023 bilingual press release by the Garante).

What’s new: Yesterday (Monday, January 29, 2024), the Garante announced having notified OpenAI of breaches of EU data privacy law on Italian territory (January 29, 2024 bilingual press release by the Garante). OpenAI has 30 days to file its defense. The Garante also stated that it would “take account” of the developments in the Italian case with respect to the agency’s participation in an ad hoc task force on ChatGPT created last year by the European Data Protection Board (EDPB), triggered by the Italian initiative (April 13, 2023 press release by the EDPB).

Direct impact: OpenAI will likely present solid defenses, but there is a clear and present danger of ChatGPT going dark in Italy (again), given that the Garante is expressing so much concern after another ten months of investigating ChatGPT’s collection and processing of data. While it appears unlikely that the Garante would actually want to deprive Italian users of access to ChatGPT (which would put Italian knowledge workers at a productivity disadvantage), the agency’s aggressive course of action in 2023 makes it a possibility. What appears more probable, however, is that the Garante wants OpenAI to take further steps to comply with the agency’s interpretation of the law.

Wider ramifications:

  • Should the Garante declare itself unsatisfied with OpenAI’s arguments and measures, its position would still be unlikely to be adopted on an EU-wide basis, though there are investigations by data protection authorities (DPAs) in other EU member states.
  • Meanwhile, the EU is striving to pass into law its AI Act (January 22, 2024 ai fray article). It remains to be seen whether the European Commission will then effectively take charge of the matter. The AI Act also covers the privacy-related aspects of AI systems, though it clarifies that existing rules remain in place.
  • A fundamental question raised by the Garante’s stance is whether a sub-100% hit rate of AI systems when striving to comply with certain regulations may just have to be accepted in order to enable the development and early adoption of Generative AI technologies, which is in the legitimate public interest.

The Garante is apparently striving to spearhead EU GDPR enforcement with respect to ChatGPT. The EDPB’s ChatGPT task force was created last year “to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.” As noted above, the question is at what point the European Commission will take the lead, but the AI Act first needs to be enacted and enter into force.

Yesterday’s notice to OpenAI was also announced on X (formerly known as Twitter).

In a notice last year (Provvedimento del 30 marzo 2023 (in Italian)), the Garante listed the articles of the GDPR that it believes, or believed at the time, ChatGPT was violating:

  • Art. 5 (principles relating to processing of personal data),
  • Art. 6 (lawfulness of processing),
  • Art. 8 (conditions applicable to child’s consent in relation to information society services),
  • Art. 13 (information to be provided where personal data are collected from the data subject) and
  • Art. 25 (data protection by design and by default).

The most fundamental point made by the Garante in its March 31, 2023 press release was the assertion that “there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.” That is a broad statement with far-reaching implications: it sounds like fundamentally deeming ChatGPT illegal. The fact that the ban was suspended a little later, subject to certain measures being taken, suggests that the Garante is (presumably and hopefully) not pursuing the objective of forcing ChatGPT out of Italy.

Concerns about children’s consent can have two different aspects. If the requirement is merely that ChatGPT ask whether users are old enough, that is easy to do and has apparently been done. However, to the extent that ChatGPT is trained on data scraped from all over the internet, it is obviously not possible to obtain every affected child’s consent at that stage. The same applies to the data of adults: if it’s on the web, it’s public. The GDPR then has to be enforced where the data is first made public, as OpenAI’s software has no way of knowing whether someone else’s website obtained the prerequisite consent.

One of the Garante’s concerns relates to what is described as hallucination: in an effort to provide at least some answer, ChatGPT sometimes makes false claims. That is also an issue raised by the New York Times Company in its copyright action against OpenAI and Microsoft (December 29, 2023 ai fray article). In connection with personal data, the problem is sometimes easy to observe: if one asks ChatGPT a question about a person specified by first and last name, but ChatGPT has little (or no) information about that person, it may give an answer about someone who shares the first name but has a different last name. The same applies to queries that involve other characteristics. ChatGPT sometimes admits it has no answer, but there are situations where it will make its “best efforts” to find at least something. And as impressive as it is in some ways, one always has to remember it is GAI (Generative AI), not AGI (Artificial General Intelligence). It doesn’t have judgment.

OpenAI’s argument is that it wants to learn about the world, not about particular persons. That is true, though obviously the world includes, not least, the persons who live in it and, besides natural phenomena, shape it.

One obvious problem in this context is that the extent to which a given person’s information is protected varies. A U.S. president or famous actor is obviously far more transparent than a corporate executive, who may in turn be more of a public figure than an engineer or a teacher. It’s unclear to what extent, and how soon, GAI systems like ChatGPT can reasonably be expected to make and especially apply those distinctions. Again, those systems don’t have judgment.

One of the central terms of the GDPR is “legitimate interest” (by those obtaining, storing and processing personal data). It remains to be seen to what extent the Garante recognizes that as an excuse for certain imperfections, in light of good-faith efforts to address the issues identified.

Inquiries similar to the Italian one are underway in other parts of Europe. And if that wasn’t already enough fragmentation, in Germany each federal state has its own DPA, which is why different regional privacy watchdogs have asked OpenAI questions. That even includes, for instance, the northernmost and thinly populated German state of Schleswig-Holstein (< 3 million inhabitants) (German regional DPA’s questionnaire (PDF)).

Put differently, it’s a problem not only for OpenAI but for AI providers in general that the rules in Europe are strict and enforcement is, for now, extremely fragmented. A reasonable balance must be struck, however, between the requirements imposed on AI providers and Europe’s need to foster innovation and productivity. With a view to productivity, it’s worth noting that, according to the International Monetary Fund (IMF), per-capita GDP in the EU amounts to US$43.3K versus US$83.6K in the United States (almost twice as much). There was no gap like that 15 years ago, and it is widening. The situation will worsen if the EU doesn’t adopt better policies in various areas.