OpenAI Faces Complaint Over Fictional Outputs?

OpenAI is facing a complaint from the European privacy advocacy group noyb over its ChatGPT model producing inaccurate information about individuals. The complaint argues that ChatGPT often generates fictional or incorrect personal data, in breach of the European Union's General Data Protection Regulation (GDPR), which requires that personal data be accurate and gives individuals the right to have incorrect data corrected or deleted.

One specific issue raised in the complaint is an instance in which ChatGPT gave an incorrect date of birth for a public figure, and OpenAI was unable to rectify the error despite repeated requests. OpenAI has acknowledged that it cannot currently correct specific inaccuracies or disclose the exact sources of the training data used by ChatGPT, citing ongoing research challenges in ensuring factual accuracy in large language models.

The advocacy group is asking the Austrian Data Protection Authority to investigate OpenAI's data processing practices and to enforce compliance with the GDPR. This includes ensuring that OpenAI can fulfill access and rectification requests and that it maintains accurate records of training data sources. If these requirements are not met, the group suggests that fines should be imposed to ensure future compliance.


The noyb filing is not an isolated case. OpenAI faces multiple GDPR complaints related to the "fictional outputs" generated by ChatGPT, brought by privacy advocacy groups such as noyb and by individuals such as privacy researcher Lukasz Olejnik. The complaints allege that the chatbot violates several provisions of the GDPR, primarily because OpenAI cannot ensure the accuracy of personal data or rectify inaccuracies.


The key issues highlighted in these complaints include:


1. Inaccurate Data: ChatGPT has been known to produce "hallucinations," or confidently incorrect information. This can include false personal data about individuals. For instance, a public figure complained about ChatGPT providing an incorrect birth date, which OpenAI was unable to correct.

2. Right to Rectification and Access: GDPR guarantees individuals the right to access their personal data and request corrections if the data is inaccurate. OpenAI's current system does not allow for such corrections, as the AI cannot update or erase specific pieces of information once they are part of its training data.

3. Data Protection by Design and Default: The complaints argue that ChatGPT was not designed with GDPR compliance in mind, and allege that OpenAI did not carry out the necessary data protection impact assessments (DPIAs) before deploying the model, which would be a significant lapse in ensuring lawful data processing under the GDPR.

4. Lack of Transparency: OpenAI has been criticized for not being transparent about how it processes personal data, including the sources of the training data used for ChatGPT. This opacity complicates efforts to verify and correct data, further exacerbating compliance issues.

These complaints are being examined by European data protection authorities, including those in Austria and Poland. The outcomes could lead to fines and to mandatory changes in how OpenAI operates ChatGPT in the EU.

OpenAI has responded by acknowledging the challenges of ensuring factual accuracy in large language models and is working on solutions, although it admits this remains an area of active research.
