Is ChatGPT Violating Your Privacy?
ChatGPT is surging in popularity. At the same time, concerns are growing over the AI giant's questionable data collection practices and safety. This worry resulted in Italy's temporary ban on ChatGPT¹ and the creation of ChatGPT task forces by the European Data Protection Board (EDPB) and the US Department of Homeland Security (DHS). Surfshark's privacy counsel analyzed ChatGPT's data collection practices and uncovered several potential flaws. Many of the issues discussed below were raised by the Italian Data Protection Authority² in justifying the temporary ban.
ChatGPT’s potential data privacy issues:
- Issue #1: ChatGPT collected a massive amount of personal data to train its models but may have had no legal basis for this collection. Under the GDPR³, data collected without an appropriate legal basis (such as user consent) is processed unlawfully. This was one of the main reasons for Italy’s temporary ChatGPT ban. In response, OpenAI recently provided a form for EU users⁴, allowing them to opt out of their data being used to train the AI model. However, the form is only available in the EU, and users who do not actively fill it out can expect their data to remain on the platform.
- Issue #2: Under the GDPR³, data controllers (in this case, ChatGPT/OpenAI) must inform the people whose data they collect and use. However, ChatGPT has not notified everyone whose data was used to train the AI model² - another reason why the use of this data may be unlawful. Since many people are unaware that their data has been collected and is being used by ChatGPT, they also do not know to fill out the opt-out form⁴, further amplifying the issue of potentially unlawful data collection.
- Issue #3: ChatGPT does not comply with the GDPR's principle of accuracy. The platform can produce inaccurate information that significantly damages an individual's reputation - for instance, it may falsely assert a person's guilt or accuse them of crimes with which they have never been associated⁵.
- Issue #4: ChatGPT lacks effective age verification tools to prevent children under the age of 13 from using it. In response to Italy’s ban, OpenAI added a tool asking users to confirm they are over 13⁴. However, ChatGPT cannot confirm that the age entered is the user's true age, meaning that children can claim to be adults and use the platform freely.
- Issue #5: ChatGPT lacks parental consent verification for children aged 13-18. ChatGPT’s terms of use⁶ require users aged 13-18 to obtain parental consent before using the platform. However, there is no process to verify that this consent has been given, meaning that children in this age range can also use the platform freely, even without consent.
Methodology and sources
Surfshark’s privacy counsel Gabriele Kaveckyte analyzed ChatGPT’s data collection practices and identified five potential privacy issues related to data collection and safety, based on the GDPR and other privacy regulations.