The request for an opinion came from the Irish Data Protection Authority.
The European Data Protection Board (EDPB) has said it wants to support responsible AI innovation by ensuring personal data are protected in full respect of the General Data Protection Regulation (GDPR).
In adopting an opinion on the use of personal data for the development and deployment of AI models, the EDPB says it will consider:
- When and how AI models can be considered anonymous
- Whether and how legitimate interest can be used as a legal basis for developing or using AI models
- What happens if an AI model is developed using personal data that was processed unlawfully.
Regarding anonymity, the opinion says that whether an AI model is anonymous should be assessed by data protection authorities (DPAs) on a case-by-case basis.
The opinion also includes a number of criteria to help DPAs assess if individuals may reasonably expect certain uses of their personal data. These criteria include: whether or not the personal data was publicly available, the nature of the relationship between the individual and the controller, the nature of the service, the context in which the personal data was collected, the source from which the data was collected, the potential further uses of the model, and whether individuals are actually aware that their personal data is online.
Regulatory harmonisation
The opinion was requested by the Irish Data Protection Authority (DPA), with a view to seeking Europe-wide regulatory harmonisation. To gather input for this opinion, which deals with fast-moving technologies that have an important impact on society, the EDPB organised a stakeholders’ event and had an exchange with the EU AI Office.
The EDPB said that, considering the scope of the request from the Irish DPA, the vast diversity of AI models and their rapid evolution, the opinion aims to provide guidance on various elements that can be used when conducting a case-by-case analysis.
Benefits
In response, the Irish DPA welcomed the opinion, which it said will benefit Supervisory Authorities across the EU/EEA by providing a harmonised position for them to take account of when regulating the responsible development of AI products.
Des Hogan, chairperson and DPC commissioner, said: “As the Lead Supervisory Authority of many of the world’s largest tech companies, we have a deep awareness and understanding of the complexities associated with regulating the processing of personal data in an AI context. Equally, we recognise that the core questions concerning compliance with the GDPR in an AI context are EU-wide industry challenges and as such require a harmonised approach at EU level.
“In having made this request for an opinion, the DPC triggered a discussion, in which we participated, that led to this agreement at EDPB level, on some of the core issues that arise in the context of processing personal data for the development and deployment of AI models, thereby bringing some much needed clarity to this complex area.”
Written by
Dan Raywood
Senior Editor
SC Media UK
Dan Raywood is a B2B journalist with more than 20 years of experience, including covering cybersecurity for the past 16 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Conference, BSides Scotland, Steelcon and ESET Security Days.
Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.