The Italian Data Protection Authority (Garante per la protezione dei dati personali, GPDP) has levied a €15 million fine against OpenAI, the developer of ChatGPT, concluding an investigation into the application's handling of personal data.

The Garante found that OpenAI processed users' personal data to train ChatGPT without a valid legal basis, in violation of transparency principles and the related obligations to inform users.

In its findings, the Authority noted that the violations date to 2023, when the US-based company began processing personal data without first identifying an appropriate legal basis for doing so.

Additionally, the Authority found that OpenAI had not implemented age verification mechanisms, raising concerns that children under 13 could be exposed to responses unsuited to their stage of development and self-awareness.

In addition to the €15 million penalty, the Authority ordered OpenAI to conduct a six-month institutional communication campaign across radio, television, print media, and online platforms to improve transparency in its processing of personal data.

The campaign's content, to be developed in collaboration with the Authority, should educate the public about how ChatGPT operates, particularly regarding the collection of user and non-user data for training generative AI, as well as the rights of data subjects, including the rights to object to processing and to have their data rectified or erased.

Through this initiative, both users and non-users of ChatGPT should come to understand how their personal data is used in training generative AI, empowering them to exercise their rights under the GDPR.

The Authority's decision to set the fine at €15 million also took into account the company's cooperation during the investigation.

Last year, the Italian regulatory authority temporarily prohibited the use of ChatGPT in Italy due to alleged violations of EU privacy regulations.

The service was reinstated after OpenAI, which is backed by Microsoft, addressed concerns over users' right to refuse consent to the use of their personal data for training its algorithms.

OpenAI's penalty marked the third significant action taken against major technology companies this week, with Meta, the parent company of Facebook, and Netflix also facing scrutiny from European regulators for data protection violations.

On Wednesday, the Dutch Data Protection Authority imposed a €4.75 million fine on Netflix for not providing customers with sufficient and clear information regarding its management of personal data from 2018 to 2020.

This penalty resulted from an investigation revealing violations of the General Data Protection Regulation (GDPR).

Earlier, on Tuesday, Meta was fined €251 million in Europe over a 2018 data breach that affected 29 million Facebook users worldwide.

In Meta's case, the penalty was imposed by the Irish Data Protection Commission (DPC), which cited the unauthorized exploitation of user access tokens by third parties on the Facebook platform as the cause of the breach.