Published: May 16, 2023

Digital democracy | Digital privacy

Is ChatGPT Violating Your Privacy?

ChatGPT is surging in popularity. At the same time, concerns are growing over the AI giant's questionable data collection practices and safety. This worry resulted in Italy's temporary ban on ChatGPT¹ and the creation of ChatGPT task forces by the European Data Protection Board (EDPB) and the US Department of Homeland Security (DHS). Surfshark's privacy counsel analyzed ChatGPT's data collection practices and uncovered several potential flaws. Many of the issues discussed were cited by the Italian Data Protection Authority² in justifying the temporary ban.

ChatGPT’s potential data privacy issues:

  • Issue #1: ChatGPT collected a massive amount of personal data to train its models but may have had no legal basis for such data collection. According to the GDPR³, data collected without an appropriate legal basis (such as user consent) is unlawful. This was one of the main reasons for Italy’s temporary ChatGPT ban. In response, OpenAI recently provided a form for EU users⁴, allowing them to opt out of having their data used to train the AI model. However, the form is only available in the EU, and those who do not actively fill it out can expect their data to remain on the platform.
  • Issue #2: According to the GDPR³, data controllers (in this case, ChatGPT/OpenAI) must inform users whose data they collect and use. However, ChatGPT has not notified all of the people whose data was used to train the AI model², which is another reason the use of this data may be unlawful. Since many people do not know that their data has been collected and is being used by ChatGPT, they also do not know that they should fill out the opt-out form⁴, further amplifying the issue of potentially unlawful data collection.
  • Issue #3: ChatGPT is not in line with the principle of accuracy. The platform's provision of inaccurate information can significantly damage an individual's reputation; for instance, it may falsely assert a person's guilt or accuse them of crimes they have never been associated with⁵.
  • Issue #4: ChatGPT lacks effective age verification tools to prevent children under the age of 13 from using it. In response to Italy’s ban, OpenAI developed a tool to confirm that users are above the age of 13⁴. However, it cannot verify that the age entered is the user's true age, meaning that children can claim to be adults and use the platform freely.
  • Issue #5: ChatGPT lacks parental consent verification for children aged 13-18. ChatGPT’s terms of use⁶ require that children aged 13-18 receive parental consent before using the platform. However, there is no verification process to ensure that this consent has been given, meaning that children in this age range can also use the platform freely, even without consent.

Methodology and sources

Surfshark’s privacy counsel, Gabriele Kaveckyte, analyzed ChatGPT’s data collection practices and identified five potential privacy issues related to data collection and safety, based on the GDPR and other privacy-related regulations.

References:

¹ Luca Bertuzzi (2023). Italian data protection authority bans ChatGPT citing privacy violations.
² The Italian Data Protection Authority (2023). Provision of March 30, 2023 [9870832].
³ General Data Protection Regulation (2023).
⁴ Kelvin Chan (2023). OpenAI: ChatGPT back in Italy after meeting watchdog demands.
⁵ Pranshu Verma and Will Oremus (2023). ChatGPT invented a sexual harassment scandal and named a real law prof as the accused.
⁶ OpenAI (2023). Terms of Use.