BRUSSELS (Reuters) - OpenAI’s efforts to reduce false output from ChatGPT are not enough to ensure full compliance with European Union data rules, a task force of the EU’s privacy watchdogs said.
“Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle,” the task force said in a report released on its website on Friday.
Europe’s national privacy watchdogs set up a task force on ChatGPT last year after national regulators, led by Italy’s authority, raised concerns about the AI service.
OpenAI did not immediately respond to a Reuters request for comment.
Investigations by national privacy watchdogs in some member states are still ongoing, and the report said it was too early to provide full results. The findings represent a “common denominator” among national authorities.
Data accuracy is one of the guiding principles of the EU’s data protection rules.
“As a matter of fact, due to the probabilistic nature of the system, the current training approach leads to a model which may also produce biased or made up outputs,” the report said.
“In addition, the outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy.”
Ultimately, ensuring ChatGPT data accuracy is not just a regulatory requirement but a fundamental step towards responsible and ethical AI use.