Snapchat’s AI Data Grab: Why teens are at risk and regulators are silent

We are not talking enough about Snapchat, the race for AI hegemony and the lack of enforcement of tech regulation. The AI goldrush has started, but at what cost?

Snapchat’s quiet AI expansion

Over the summer, many companies have grabbed even more personal data to train their generative AI models, at users’ and society’s expense. Each company’s actions deserve separate posts, so let me focus here on Snap Inc, a company that rarely gets the critical scrutiny it deserves. Snap was the first to launch a genAI chatbot to young (and vulnerable) users without safeguards, and it recently sent direct marketing that was likely in breach of the law: just two examples of the company’s many questionable, unethical, and possibly illegal practices.

This week, some observant users alerted me that the Snapchat app has introduced a new setting that allows Snap to train generative AI on users’ public content, such as images, videos, audio and text from Spotlights, public stories and Snap Map Snaps. I recorded a video of this setting on my own Snap account, showing that it is turned on not through my explicit consent, but by default.

Vulnerable users, exploitative design

This is highly problematic, as Snapchat has over time pushed possibly unaware users into sharing their content publicly, often using deceptive design. Social pressure, FOMO, network effects and many other triggers are used to incentivise public sharing. Teens and young adults are particularly vulnerable.

We already have a legal system that lags heavily behind, where civil society may have to wait 5–7 years (or even more) for a complaint to work its way through the system. For example, together with noyb, we already have a pending complaint against Meta for many of the same practices Snapchat is now engaging in. Companies use stalling tactics as a strategy to avoid accountability, while they launch new products with impunity.

The US factor: weak enforcement, global impact

Another reason we might be seeing this increased pace of rights-infringing practices being rolled out is the situation in the US. Public Citizen just published a report stating that “The Trump administration has already dropped or halted one third of targeted enforcement actions against technology corporations in its first six months in office”.

This happens while the US is increasingly putting pressure on national governments in Europe and on EU officials to reduce or even halt enforcement against tech companies. We may see an increased pace of infringements as companies spot an opportunity to “flood the zone” in the race for personal data and ever-larger generative AI models to boost their profits. Protecting the rule of law is more important than ever.