
EU crackdown on ‘addictive’ social media reignites India’s child safety debate

Posted on 14th May 2026


The EU is tightening scrutiny of TikTok and Meta over addictive design features such as endless scrolling, autoplay and push notifications, which regulators believe fuel compulsive usage among young users.

by e4m

“Children are not commodities”, European Commission President Ursula von der Leyen declared on Tuesday, unveiling a sweeping crackdown on what she calls “addictive design” features — including endless scrolling, autoplay and push notifications — used by social media platforms to keep young users constantly engaged online.

Speaking at the European Summit on Artificial Intelligence and Children in Denmark, von der Leyen outlined the bloc’s concerns around compulsive digital engagement. “We are taking action against TikTok and its addictive design — endless scrolling, autoplay and push notifications. The same applies to Meta, because we believe Instagram and Facebook are failing to enforce their own minimum age of 13,” she said, adding that regulators are also investigating how platforms push children into “rabbit holes” of harmful content such as eating-disorder and self-harm videos.


The EU’s latest move has reignited a broader debate in India around whether children’s access to social media now requires stronger regulatory intervention beyond conventional content moderation. The conversation carries particular significance in the world’s second-largest smartphone market, with over 750 million devices and nearly a billion internet users. India is also Meta’s largest user market globally across Facebook, Instagram and WhatsApp, even as TikTok remains banned in the country since 2020.


Moreover, earlier this year, Karnataka and Goa said they were exploring restrictions on social media use for children under 16, reflecting growing concern among policymakers over rising screen dependency, online behavioural risks and the psychological impact of compulsive platform usage. 

The Economic Survey 2026 further amplified the debate by urging New Delhi to examine age-based access controls and policy interventions to tackle what it described as rising “digital addiction” among young users.


A parliamentary panel added momentum to the debate in March, when the Standing Committee on Communications and Information Technology recommended that the Centre examine age-based restrictions on social media platforms as part of a broader push to strengthen digital safety safeguards for children and teenagers.

The panel is also understood to have backed stronger age-verification and KYC-linked mechanisms for social media, gaming and dating platforms, arguing that existing safeguards remain inadequate to protect minors online. However, the recommendations have not progressed beyond discussion.

Several countries are already moving aggressively. In 2025, Australia became the first to bar under-16s from most social media platforms, while France, Greece, Spain and the United Kingdom are considering similar restrictions.

e4m reached out to Meta for comments. The copy will be updated when they respond. 

Design Under Fire

The EU’s intervention marks a major shift in the global child-safety debate. Regulators are no longer focusing only on harmful content, but increasingly questioning whether the architecture of social media platforms itself is engineered to maximise addiction-like engagement among younger users.

Infinite scroll, autoplay, streak mechanisms, algorithmic reinforcement and push notifications are now being viewed less as neutral interface features and more as behavioural systems designed to maximise retention, engagement and screen dependency.

That distinction is central to the debate, according to Carol Goyal, Founder, Aesthetic Intelligence Lab. “If the goal is protecting kids, design is where the harm happens, not just content,” she said. “Content moderation asks what children are seeing. Design regulation asks why they can’t stop seeing it.”

Goyal argues that the mechanism itself is addictive. “Infinite scroll removes natural stopping cues. Autoplay removes the decision to continue. Push notifications hijack attention without consent. These are behavioural engineering systems, not neutral UI,” she said, adding that such features become particularly harmful for developing brains that lack mature impulse-control systems.

The EU’s move reflects a growing global consensus that child safety online cannot be addressed solely through post-facto moderation when the very design of platforms incentivises compulsive behaviour. Von der Leyen argued that several online business models are built around commodifying children’s attention and exploiting mental vulnerabilities for profit.

Sonam Chandwani, Managing Partner, KS Legal & Associates, said regulators are increasingly justified in treating such features as part of a platform’s duty-of-care obligations toward minors. “The larger legal question is no longer whether platforms host harmful content, but whether they knowingly deploy systems that amplify dependency among children while commercially benefiting from their engagement,” she added.

Why India Can’t Simply Ban Social Media

One of the biggest challenges here is the age verification system. As part of its child-safety push, the EU has developed its own age-verification application, which von der Leyen describes as having the “highest privacy standards in the world.” Member states will soon be able to integrate the system into digital wallets to help platforms enforce age checks more effectively. 

Yet experts warn that India’s scale and digital realities make blanket bans difficult to implement. Advocate Abhishek Anthwal described age-verification enforcement as a “slippery slope,” cautioning that excessive regulation could create concerns around innovation, privacy and free expression while still proving technologically easy to bypass.

“Technology itself will now have to enforce age-based guidelines for its own creation,” Anthwal said, questioning whether such systems can be genuinely effective in a country marked by shared-device usage, uneven digital infrastructure and privacy concerns.

Chandwani similarly argued that a blanket prohibition model may neither be practical nor constitutionally sustainable in India. Instead, she advocates calibrated regulation involving child-safe design standards, default privacy protections, parental controls, time-use restrictions and greater transparency around recommendation algorithms.
