France Proposes Sweeping Social Media Ban for Children Under 15


France is moving forward with ambitious legislation that would ban children under 15 from accessing social media platforms, positioning itself at the forefront of a growing global movement to protect minors from digital harms. The proposal, championed by President Emmanuel Macron, aims to shield young people from what French authorities describe as the mounting risks of excessive screen time and harmful online content.

The Proposed Legislation

The French government has drafted a concise two-article bill set to be presented to Parliament in early January 2026, with potential implementation by September 2026. According to draft language obtained by Le Monde and France Info, the legislation would explicitly prohibit online platforms from providing social networking services to anyone under 15 years old.

"Many studies and reports now confirm the various risks caused by excessive use of digital screens by adolescents," the draft states, highlighting concerns about inappropriate content exposure, cyberbullying, sleep pattern disruptions, and broader mental health impacts.

The bill represents a significant escalation from France's current approach. While a 2023 law already requires parental consent for children under 15 to create social media accounts, technical challenges and EU legal constraints have severely limited its enforcement. The new proposal would impose an outright ban rather than relying on parental permission.


Political Context and Motivation

President Macron has positioned youth digital protection as a priority issue, particularly following a June 2024 school stabbing in eastern France that shocked the nation. The 14-year-old student's attack on a teaching assistant prompted Macron to call for immediate action, with the president pointing to social media as a contributing factor to rising youth violence.

In his New Year's Eve address, Macron pledged to "protect our children and teenagers from social media and screens," though he stopped short of announcing the specific legislative details that had leaked to media outlets.

The political calculus behind the proposal extends beyond child safety concerns. With Macron's domestic legacy challenged by a hung parliament following his 2024 election gamble—which triggered what observers describe as France's worst political crisis in decades—cracking down on social media access for minors appears to be a rare policy area with broad public support. A Harris Interactive survey conducted in 2024 found that 73% of French respondents supported banning social media access for children under 15.

Enforcement Challenges

Implementation responsibility will fall to Arcom (Autorité de régulation de la communication audiovisuelle et numérique), France's audiovisual and digital regulator. However, significant questions remain about how the ban would be enforced in practice.

The legislation must navigate complex technical and legal terrain. Previous French attempts at digital age restrictions have struggled with:

  • Age verification technology limitations: Robust systems capable of accurately verifying users' ages without creating privacy risks remain elusive. As privacy advocates have documented extensively, age verification systems create surveillance infrastructure that threatens fundamental rights while often failing to achieve their stated protective goals.
  • EU legal compatibility: France's 2023 parental consent law was partially blocked due to conflicts with EU regulations, particularly around data protection and free movement of services. The largest GDPR fines in history have included €405 million to Instagram and €345 million to TikTok specifically for child data protection failures.
  • Platform compliance: Social media companies have historically resisted age verification requirements, citing both technical feasibility and privacy concerns.

Digital Affairs Minister Anne Le Hénanff has emphasized that the proposed bill is "short and compatible with European law, principally the Digital Services Act," suggesting that lessons learned from the 2023 law's implementation challenges have informed the new approach.

Broader School Phone Ban

The social media legislation would be accompanied by an expansion of existing mobile phone restrictions in French schools. Since 2018, mobile phones have been banned in French primary and middle schools (serving children aged 6-15). The new measures would extend this prohibition to high schools, effectively banning personal device use during school hours for students aged 15-18.

These parallel restrictions reflect a comprehensive strategy to reduce screen time and digital distractions from primary school through the end of secondary education.


Following Australia's Lead

France's proposal explicitly follows Australia's groundbreaking approach. In December 2024, Australia became the first country to legislate a nationwide social media age limit, setting the bar at 16 years old. The Australian law, which came into full effect on December 10, 2025, places responsibility squarely on platforms—including TikTok, Instagram, Facebook, X, Snapchat, Reddit, YouTube, and Twitch—to prevent underage users from having accounts.

Major platforms have already begun compliance efforts. Meta preemptively removed over 500,000 accounts belonging to Australian users under 16 from Facebook, Instagram, and Threads beginning in early December. The company acknowledged that enforcement would be "an ongoing and multi-layered process" involving age verification technologies, behavioral signals, and active detection of circumvention attempts.

Australian platforms face substantial penalties for non-compliance: fines of up to AUD $49.5 million (approximately USD $32 million) for failing to take "reasonable steps" to prevent underage access. Notably, the Australian model includes no penalties for children or parents who circumvent the restrictions.

A European Movement Gains Momentum

France is far from alone in pursuing stricter social media age controls. A wave of similar initiatives is sweeping across Europe:

Denmark secured cross-party agreement in November 2025 to ban social media access for children under 15, with implementation expected in mid-2026. The Danish approach would allow parents to grant access to 13- and 14-year-olds following specific assessment, utilizing Denmark's national electronic ID system for verification. The Danish government has allocated 160 million kroner (approximately €21.4 million) for 14 child online safety initiatives.

Italy introduced legislation in May 2025 that would bar children under 15 from social media and impose restrictions on "kidfluencers" of the same age. The Italian bill requires platforms to verify user age using a "mini portafoglio nazionale" (national digital wallet) that will integrate with the upcoming EU age-verification system. Italy's education minister has explicitly called for following the Australian model.


Spain is actively drafting bills requiring parental authorization for users under 16. Spain, France, and Greece jointly presented a proposal to EU telecommunications ministers calling for mandatory age verification and parental control tools on all internet-enabled devices sold in the European market.

Germany has commissioned a committee to study potential ban implementation, with a final report expected in autumn 2026. Unlike some proposals, Germany is considering a ban with no parental consent exemptions.

Greece is developing its "Kids Wallet" parental control tool as an age verification mechanism, giving parents the ability to restrict or block access to applications and online services.

Eleven EU member states (Austria, Croatia, Cyprus, Denmark, France, Greece, Ireland, Italy, Slovakia, Slovenia, and Spain), along with Belgium's Wallonia-Brussels Federation, have formally requested that the European Commission make age verification mandatory for social media access under the Digital Services Act guidelines.

EU-Wide Coordination Efforts

While individual member states pursue their own restrictions, the European Commission is coordinating broader initiatives. The Commission is developing an age verification app prototype set to be piloted during summer 2025 in five countries: Denmark, France, Spain, Greece, and Italy.

This app will verify whether users are over 18 using photo ID and selfie verification, with features including parental controls and application restrictions for minors. The system is designed to work with the upcoming EU Digital Identity Wallet, scheduled for rollout by the end of 2026, which will provide a unified digital identity system across Europe.

However, the Commission has resisted calls for an EU-wide blanket ban on social media for minors. Commission spokesperson Thomas Regnier clarified in late 2024: "Let's be clear... [a] wide social media ban is not what the European Commission is doing. It's not where we are heading to. Why? Because this is the prerogative of our member states."

The European Parliament passed a non-binding resolution in November 2025 advocating for a minimum age of 16 without parental consent for social media access, along with stronger enforcement of the Digital Services Act to protect minors. The Parliament's November 26, 2025 vote—which passed 483 to 92—established a two-tiered framework with age 13 as an absolute minimum and age 16 for unrestricted access, backed by the EU Digital Identity Wallet system. However, final decisions on age limits remain with individual member states.

The Case for Restrictions

Proponents of age-based social media bans point to mounting evidence of harm to adolescent mental health and development:

Mental Health Impacts: Studies have documented correlations between heavy social media use and increased rates of depression, anxiety, and other mental health conditions among adolescents. Gen-Z suicide and self-harm rates in Australia and other developed nations have risen in recent years, prompting concerns about social media's role.

Developmental Concerns: Macron and other supporters argue that teenagers under 16 lack the emotional maturity and brain development to handle exposure to harmful content. Research suggests that adolescent brains are particularly vulnerable to addictive algorithms and social comparison mechanisms built into social media platforms.

Sleep Disruption: The French draft explicitly references sleep pattern alterations as a documented risk. Screen time before bed and nighttime social media use have been shown to significantly impact adolescent sleep quality and duration.

Cyberbullying: A 2023 e-Enfance report found that one in four families in France experiences cyberbullying. Among affected children, 51% face educational challenges while 52% experience sleep disorders and appetite loss.

Early Exposure: Despite platforms' stated minimum age requirements of 13, actual usage begins much younger. The French National Commission for Computing and Freedoms (CNIL) found that the average age for a child's first social media account is approximately 8.5 years old, with more than 50% of 10-14-year-olds maintaining online profiles.

The tragic case that galvanized French support illustrates these concerns. In December 2024, a French mother whose 15-year-old son died by suicide launched legal action against major social media platforms, alleging that their algorithms created a "downward spiral" of harmful content that contributed to his death.

Critics' Concerns

Despite broad public support, the proposals face significant criticism from digital rights advocates, technology experts, and some child safety organizations:

Privacy Risks: Age verification systems require collection and processing of sensitive identity information, creating potential for data breaches and surveillance. Critics point to recent high-profile breaches, including Australia's MediSecure (2024) and Medibank (2022) incidents, as evidence of these risks. The UK's Online Safety Act implementation in July 2025 has already demonstrated how age verification requirements can expand into broader surveillance infrastructure affecting all users, not just minors.

Ella Jakubowska, Head of Policy at European Digital Rights (EDRi), called France's proposal "misguided, impractical, and potentially more harmful than helpful," warning that current age verification systems are "technically infeasible, lacking vital protections for privacy, data protection and cybersecurity."

Exclusion Concerns: Verification requirements could exclude marginalized populations including undocumented individuals, those without formal identity documents, and people with limited digital literacy. This raises equity issues about differential access to online resources and communities.

Effectiveness Doubts: Early experiences with Australia's ban reveal mixed results. While many underage accounts were removed, reports indicate that some teenagers successfully bypassed facial age estimation tools using VPNs and other circumvention methods. Critics question whether bans will simply push youth usage underground rather than eliminating it.

Loss of Benefits: Social media provides legitimate educational resources, social connection opportunities, and platforms for youth expression and activism. UNICEF Australia, while supporting better protections, argues that blanket bans overlook these positive aspects and that making platforms safer would be more effective than attempting to keep children off them entirely.

Free Expression: Civil liberties organizations warn that age verification requirements threaten anonymous speech and could establish infrastructure for broader internet monitoring and censorship.

Global Implications

The success or failure of France's implementation will be closely watched worldwide. Beyond Europe, several countries are considering similar measures:

Malaysia announced in November 2025 that all social media platforms would be required to ban users under 16 starting January 1, 2026, implementing age verification via eKYC (electronic Know Your Customer) systems.

Brazil has enacted what may be the world's most comprehensive child online protection framework through its Digital Child and Adolescent Statute (Digital ECA), signed in September 2025. Unlike narrowly focused social media bans, Brazil's law requires age verification "at each access attempt" for virtually any digital service that might contain content inappropriate for minors.

New Zealand introduced the Social Media (Age-Restricted Users) Bill in May 2025, following Australia's model with a 16-year age limit but imposing lower maximum fines of NZD 2 million (approximately USD 1.2 million).

Pakistan temporarily proposed under-16 restrictions in July 2025 but withdrew the bill in August following controversy, particularly around provisions that would have imprisoned adults who created accounts for minors.

Kenya published guidelines requiring age verification for social media services, though implementation status remains unclear as of January 2026.

Singapore is banning smartphones and smartwatches in secondary schools, including during recess and after-school activities, though not pursuing comprehensive social media restrictions.

In the United States, various states have attempted to pass age verification and parental consent laws for social media, though most face constitutional challenges under the First Amendment. As detailed in ComplianceHub's analysis of the age verification compliance nightmare, the patchwork of 19+ state laws with conflicting requirements has created an unprecedented compliance crisis for digital platforms. Federal legislation has stalled despite bipartisan concern about children's online safety.

Technical and Business Challenges

Implementation will require solving complex technical problems:

Age Assurance Technologies: Platforms will need to deploy sophisticated systems combining multiple verification methods, potentially including ID document verification, facial age estimation, behavioral analysis, and device signals. Australia's experience suggests this requires "multi-layered" approaches rather than single verification points.
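The "multi-layered" logic described above can be sketched in a few lines. Everything here is a hypothetical illustration, not any platform's actual system: the signal names, the error margin, and the conservative default-deny rule are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    """Hypothetical signals a platform might combine; all names are illustrative."""
    id_document_age: Optional[int] = None       # age from a verified ID document, if provided
    facial_estimate_age: Optional[float] = None  # age estimated from a selfie model
    facial_estimate_margin: float = 3.0          # assumed error margin of the estimator (years)
    behavioral_flag_minor: bool = False          # e.g. a classifier suspects an underage user

def is_permitted(signals: AgeSignals, min_age: int = 15) -> bool:
    """Conservative multi-layered check: a verified ID is treated as
    authoritative; otherwise any strong under-age signal blocks access."""
    if signals.id_document_age is not None:
        return signals.id_document_age >= min_age
    if signals.behavioral_flag_minor:
        return False
    if signals.facial_estimate_age is not None:
        # Require the estimate minus its margin to clear the threshold,
        # so borderline faces are rejected rather than waved through.
        return signals.facial_estimate_age - signals.facial_estimate_margin >= min_age
    return False  # no usable signal: default deny

# A facial estimate of 16 is within the 3-year margin of a 15-year limit,
# so it is not sufficient on its own.
print(is_permitted(AgeSignals(facial_estimate_age=16.0)))  # False
```

The key design choice, consistent with Australia's "reasonable steps" standard, is that no single weak signal grants access: ambiguous cases fall through to denial or escalation rather than approval.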

Cross-Border Complexity: With social media operating globally, users can potentially circumvent national restrictions through VPNs and location-spoofing technologies. Australia's eSafety Commissioner has suggested that platforms monitor various signals including device identifiers, network providers, and account settings to determine user location.

Platform Business Models: Social media companies have resisted age restrictions partly due to concerns about user experience friction and potential loss of younger user bases. Requiring verification for all users—not just those under age limits—could significantly impact platform growth and engagement metrics.

Data Minimization: Regulations like GDPR require that personal data collection be minimized and proportionate. Storing identity documents or biometric data for age verification creates tension with these privacy principles. Australia's law requires platforms to delete identity information after verification, though maintaining age status records may still be necessary. The global wave of child safety legislation in summer 2025 has accelerated demand for privacy-preserving age verification technologies that balance effectiveness with data protection requirements.
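The delete-after-verification pattern required by Australia's law can be illustrated as follows. This is a minimal sketch under assumed names: the idea is that the identity document is checked and discarded, and only an opaque attestation of the outcome is retained.

```python
import hashlib
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeAttestation:
    """The only record retained after verification: no name, no birthdate,
    no document image -- just the outcome needed to enforce the age limit."""
    account_ref: str      # opaque reference derived from the account, not the raw ID
    over_threshold: bool  # e.g. verified as meeting the minimum age
    verified_at: float    # Unix timestamp of verification

def verify_and_minimize(account_id: str, document_birth_year: int,
                        current_year: int = 2026, min_age: int = 15) -> AgeAttestation:
    """Check the document, then keep only the minimal attestation; the
    document data itself never leaves this function's scope."""
    over = (current_year - document_birth_year) >= min_age
    # A one-way hash lets the attestation be matched to the account without
    # retaining the identifier in the clear (a real system would need a
    # salted or keyed construction to resist guessing attacks).
    ref = hashlib.sha256(account_id.encode()).hexdigest()[:16]
    return AgeAttestation(account_ref=ref, over_threshold=over,
                          verified_at=time.time())

attestation = verify_and_minimize("user-123", document_birth_year=2008)
print(attestation.over_threshold)  # True: 2026 - 2008 = 18, which is >= 15
```

The tension the GDPR creates is visible even in this toy version: the attestation must persist long enough to enforce the ban, yet every retained field is one more piece of data a breach could expose.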

The Path Forward

France's proposal faces a complex legislative journey. While public support appears strong, the deeply divided French Parliament means the bill will need backing from multiple opposition parties to pass. Digital Affairs Minister Le Hénanff has emphasized creating legislation that is both effective and compliant with EU law, learning from the complications that derailed the 2023 parental consent approach.

The European Commission's age verification app pilot, scheduled for summer 2025, may provide crucial technical infrastructure that could ease implementation concerns. If successful, this EU-backed system could become the standard verification mechanism across member states pursuing similar restrictions.

Critical questions remain unanswered:

  • Will platforms comply voluntarily or require aggressive enforcement and penalties?
  • Can age verification systems be developed that adequately balance effectiveness, privacy protection, and user experience?
  • Will restrictions successfully reduce harm, or will they simply shift youth social media use to unmonitored channels?
  • How will legitimate educational and beneficial uses of social media by minors be preserved?

Conclusion

France's proposed ban on social media for children under 15 represents a watershed moment in the ongoing debate about protecting minors in the digital age. With strong public support, high-level political backing, and coordination with other European nations, the initiative has significant momentum.

Yet the proposal also crystallizes fundamental tensions between child protection, privacy rights, free expression, and technological feasibility. The French experiment—along with similar efforts in Australia, Denmark, and other nations—will test whether governments can effectively regulate social media access by age without creating new problems that rival those they seek to solve.

As Parliament prepares to debate the legislation in January 2026, stakeholders across the technology industry, civil society, and child welfare organizations will be watching closely. The outcome could set precedents affecting not just France's 15 million young people, but children worldwide as other nations weigh following suit.

The fundamental question remains: In an increasingly digital world, can we protect children from online harms without cutting them off from the digital literacy, social connections, and educational opportunities that responsible technology use can provide? France's answer to that question, coming in the months ahead, will reverberate far beyond its borders.

For parents concerned about protecting their children's digital privacy and safety while navigating these evolving restrictions, comprehensive strategies for family digital protection remain essential regardless of regulatory outcomes.
