🗝 Key Points:
- Pause on AI Training: Meta has decided to halt its plans to use data from EU and UK users to train its AI systems.
- Regulatory Pushback: This decision follows pressure from the Irish Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO), both expressing concerns over Meta’s plans.
- DPC Statement: The DPC welcomed Meta’s pause and emphasized continued cooperation with other EU data protection authorities to address the issue.
Unlike in the U.S., Meta faces significant obstacles in Europe, where stringent GDPR rules limit its ability to use user-generated content for AI training. Last month, Meta began informing users of changes to its privacy policy, under which public content on Facebook and Instagram would be used for AI training starting June 26. Privacy activist group NOYB filed 11 complaints arguing that Meta’s approach violated the GDPR, particularly because it relied on opt-out rather than opt-in consent.
Meta claimed that using user data for AI training is covered by “legitimate interests,” a legal basis under the GDPR, but it has faced legal challenges over that argument before. Users were notified of the changes through standard in-app notifications that were easy to miss, and the process to object involved multiple steps and required users to justify their objection rather than offering a straightforward opt-out.
In response to the DPC’s request, Meta stated:
“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March. This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
The ICO stressed the importance of maintaining public trust in privacy rights when using generative AI and committed to ongoing monitoring of major AI developers, including Meta. This pause on AI training using European user data is a significant move in response to regulatory scrutiny, highlighting the ongoing tension between innovation and data privacy.
Meta’s pause comes amid a broader push by companies eager to use vast amounts of data to train AI systems; Reddit and Google are navigating similar regulatory landscapes. Meta plans to continue discussions with the DPC and ICO to find a compliant approach to using user data for AI training in Europe.