OpenAI is facing renewed scrutiny after a privacy complaint was filed against ChatGPT in Europe, alleging that the AI chatbot generated false and defamatory information about individuals.
Highlights
- The complaint, submitted by privacy advocacy group Noyb, focuses on a case in Norway, where ChatGPT allegedly fabricated details about a person, falsely claiming he had been convicted of serious crimes.
- The incident has raised concerns about AI-generated misinformation and OpenAI’s compliance with European data protection laws.
Concerns Over Accuracy and GDPR Compliance
Under the General Data Protection Regulation (GDPR), personal data must be accurate, and individuals have the right to have inaccurate data about them rectified.
Noyb argues that OpenAI’s system lacks a clear mechanism for users to rectify false claims, making it difficult to address potential reputational harm.
| Model | Transparency Score (FMTI) | Misinformation Handling (False Response Rate) | Safety Grade |
|---|---|---|---|
| ChatGPT (OpenAI) | 47% | 98/100 prompts | D+ |
| Google Bard | 41% | 80/100 prompts | D+ |
| Anthropic’s Claude | 39% | N/A | C |
| Meta’s Llama 2 | 54% | N/A | Not Evaluated |
While OpenAI includes disclaimers stating that ChatGPT may produce inaccurate responses, Noyb contends that a disclaimer does not absolve the company of its obligation under the GDPR to ensure the accuracy of the personal data it processes.
The group also highlights that OpenAI does not disclose the sources of data used by ChatGPT, further complicating efforts to verify and correct inaccuracies.
Previous Regulatory Actions Against OpenAI
This is not the first time OpenAI has faced regulatory scrutiny in Europe. In December 2024, Italy’s privacy watchdog imposed a €15 million fine on the company for processing personal data without a sufficient legal basis and failing to meet transparency requirements.
Earlier, in 2023, Italian regulators had temporarily banned ChatGPT, citing concerns over data privacy and transparency.
The restriction was lifted after OpenAI introduced user consent measures and adjusted its policies. More recently, Italy’s data protection authority warned media publisher GEDI against sharing personal data archives with OpenAI, further highlighting concerns over AI-driven data processing.
Potential Consequences and Regulatory Outlook
The complaint filed in Norway could have broader implications across Europe, where regulators in Poland, Austria, and other nations are also examining how AI models handle personal data.
If GDPR violations are confirmed, OpenAI could face penalties of up to 4% of its global annual turnover, or €20 million, whichever is higher.
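For a sense of how that ceiling is calculated, below is a minimal sketch of the GDPR Article 83(5) cap. The function name and the turnover figure are illustrative only and do not reflect OpenAI’s actual financials.

```python
def gdpr_max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR Article 83(5) fine: the higher of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical example: at EUR 5 billion in annual turnover,
# 4% (EUR 200 million) exceeds the EUR 20 million floor.
print(f"EUR {gdpr_max_fine_eur(5_000_000_000):,.0f}")  # EUR 200,000,000
```

In practice, regulators rarely impose the maximum; the €15 million Italian fine cited above fell well below the cap.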
Privacy experts argue that companies developing AI should implement stronger safeguards to prevent misinformation and comply with data protection regulations.
The growing regulatory focus on AI accuracy could influence future policies governing AI-generated content.
OpenAI’s Response
OpenAI acknowledges the challenges of ensuring factual accuracy in large language models, describing it as an area of ongoing research.
The company has stated that it aims to minimize the presence of personal data in training datasets and does not intentionally provide private or sensitive information about individuals.
Notably, following recent updates to ChatGPT’s model, Noyb observed that the chatbot no longer generates the specific false claims at the center of the complaint.
Concerns remain over whether other inaccuracies persist, raising broader questions about how AI systems manage and correct false information.
Implications for AI Regulation
As AI technology becomes more integrated into daily life, the regulatory landscape is evolving to address concerns around data privacy and misinformation. The case against OpenAI may set a precedent for how AI companies are held accountable for the content their systems produce.
The outcome of this complaint could shape future regulations, potentially leading to stricter enforcement of AI compliance with GDPR and other data protection laws.