
ChatGPT’s ‘hallucination’ problem led to a new privacy complaint in the EU




OpenAI is facing a new privacy complaint in the European Union. The complaint, filed by privacy rights nonprofit noyb on behalf of an individual complainant, focuses on the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.

The tendency of GenAI tools to produce information that is outright wrong is well documented. But it also puts the technology on a collision course with the bloc’s General Data Protection Regulation (GDPR), which regulates how regional users’ personal data can be processed.

Fines for non-compliance with the GDPR can amount to 4% of global annual turnover. Even more important for a resource-rich giant like OpenAI, data protection regulators can impose changes in the way information is processed, so GDPR enforcement could reshape how generative AI tools can function in the EU.

OpenAI was already forced to make some changes after an early intervention by the Italian data protection authority, which briefly forced a local shutdown of ChatGPT in 2023.

Now noyb is filing the latest GDPR complaint against ChatGPT with the Austrian Data Protection Authority on behalf of an unnamed complainant who discovered that the AI chatbot had produced an incorrect date of birth for them.

Under the GDPR, people in the EU have a range of rights associated with information about them, including the right to have inaccurate data corrected. noyb claims that OpenAI is not meeting this obligation with regard to the output of its chatbot. According to noyb, the company denied the complainant’s request to correct the incorrect date of birth, responding that it was technically impossible to do so.

Instead, it offered to filter or block the data based on certain clues, such as the complainant’s name.

OpenAI’s privacy policy states that users who notice that the AI chatbot has generated “factually incorrect information about you” can submit a “correction request”. However, the policy caveats this by warning: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in all cases.”

In that case, OpenAI suggests that users request the complete removal of their personal data from ChatGPT’s output by filling out a digital form.

The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have the right to request rectification. They also have the right to request deletion of their data. But as noyb notes, it’s not up to OpenAI to choose which of these rights is available.

Other elements of the complaint focus on concerns about GDPR transparency, with noyb claiming that OpenAI cannot tell where the data it generates about individuals comes from, nor what data the chatbot stores about people.

This is important because the regulation also gives individuals the right to request such information by submitting a so-called subject access request (SAR). Per noyb, OpenAI has not adequately responded to the complainant’s SAR and has not disclosed any information about the data processed, its sources, or its recipients.

Responding to the complaint, Maartje de Graaf, data protection lawyer at noyb, said: “Coming up with false information is in itself quite problematic. But when it comes to false information about individuals, there can be serious consequences. It is clear that companies are currently unable to make chatbots such as ChatGPT compliant with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology must meet the legal requirements, and not the other way around.”

noyb said it is asking Austria’s DPA to investigate its complaint about OpenAI’s data processing, and is urging the authority to impose a fine on the company to ensure future compliance. But it added that it was “likely” the matter would be handled through EU cooperation.

OpenAI is facing a very similar complaint in Poland. Last September, the local data protection authority opened an investigation into ChatGPT following the complaint of a privacy and security researcher who also found that he could not have incorrect information about him corrected by OpenAI. That complaint also accuses the AI giant of failing to comply with the regulation’s transparency requirements.

The Italian Data Protection Authority, meanwhile, still has an open investigation into ChatGPT. In January, it issued a draft decision stating that it believes OpenAI has violated the GDPR in a number of ways, including in relation to the chatbot’s tendency to produce misinformation about people. The findings also address other crucial issues, such as the lawfulness of the processing.

The Italian authority gave OpenAI a month to respond to its findings. A final decision remains pending.

Now that another GDPR complaint has been filed against the chatbot, the risk of OpenAI facing a series of GDPR enforcement actions in several member states has increased.

Last fall, the company opened a regional office in Dublin, in a move that appears designed to reduce regulatory risk by having privacy complaints referred to Ireland’s Data Protection Commission, thanks to a mechanism in the GDPR intended to streamline oversight of cross-border complaints by forwarding them to a single authority in the member state where the company is “mainly established”.