Italian data protection authority says the lack of age verification mechanisms poses risks for young users.
OpenAI has been fined more than $15 million for violations of the General Data Protection Regulation (GDPR).
Beyond failing to inform authorities of a data breach in March 2023, OpenAI processed ChatGPT users' personal data without a sufficient legal basis and did not provide age verification mechanisms, The Hacker News reports.
The Italian data protection authority, Garante, said the absence of age verification mechanisms poses risks for children using the AI chatbot. The authority has also required OpenAI to run a six-month communication campaign educating users about ChatGPT's data-gathering practices.
"Through this communication campaign, users and non-users of ChatGPT will have to be made aware of how to oppose generative artificial intelligence being trained with their personal data and thus be effectively enabled to exercise their rights under the GDPR," said Garante. OpenAI has said it intends to appeal the decision, which it considers unreasonable.
Written by
Dan Raywood is a B2B journalist with 25 years of experience, including 17 years covering cybersecurity. He has reported extensively on topics ranging from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Forum, BSides Scotland, Steelcon and the National Cyber Security Show, and served as editor of SC Media UK, Infosecurity Magazine and IT Security Guru. He was also an analyst with 451 Research and a product marketing lead at Tenable.