"Do As I Say, Not As I Do": How Denmark Is Accused of Manufacturing a Crisis to Impose Mass Surveillance on 450 Million Europeans—While Exempting Police and Spies

"Do As I Say, Not As I Do": How Denmark Is Accused of Manufacturing a Crisis to Impose Mass Surveillance on 450 Million Europeans—While Exempting Police and Spies
Photo by Rolands Varsbergs / Unsplash

The EU's Chat Control 2.0 would force AI to scan every private message, even encrypted ones. Critics say Denmark's Justice Minister is using false claims to blackmail governments into approval. Meanwhile, the proposal exempts law enforcement from the very surveillance they want to impose on citizens.

In a political maneuver that critics are calling "shameless disinformation," Denmark's Justice Minister Peter Hummelgaard stands accused of fabricating a crisis to ram through the European Union's most controversial surveillance legislation in decades—one that would subject every private message, photo, and video sent by 450 million Europeans to automated AI scanning, including communications protected by end-to-end encryption.

The stakes are extraordinary: If passed, the regulation officially known as the Child Sexual Abuse Regulation (CSAR) but colloquially called "Chat Control 2.0" would fundamentally reshape digital privacy across Europe, forcing platforms like WhatsApp, Signal, Telegram, and iMessage to scan user content before encryption ever takes place. The proposal has been in development since 2022, surviving multiple defeats and facing unprecedented opposition from privacy advocates, security experts, and civil society organizations. It would apply to email providers, cloud storage services, and any platform that facilitates communication.

But a leaked internal document and accusations from a former Member of the European Parliament suggest the Danish government—which currently holds the rotating EU Council presidency—may be resorting to outright lies to overcome fierce resistance from privacy advocates, tech experts, civil society groups, and several EU member states.

The controversy centers on classified minutes obtained by German digital rights outlet Netzpolitik from a September 15, 2025 Council meeting, where Hummelgaard allegedly told interior ministers that the European Parliament would refuse to extend the current voluntary scanning framework unless governments first agreed to adopt the far more invasive mandatory scanning regime of Chat Control 2.0.

According to Dr. Patrick Breyer, a jurist and former MEP for Germany's Pirate Party who co-negotiated the Parliament's position on the proposal, this claim is demonstrably false.

"This is a blatant lie designed to manufacture a crisis," Breyer stated in a press release. "There is no such decision by the European Parliament. There has not even been a discussion on this issue. The Commission has not yet proposed to extend the current legislation, and the European Parliament has not yet appointed Rapporteurs to discuss it. We are witnessing a shameless disinformation campaign to force an unprecedented mass scanning law upon 450 million Europeans."

The Technology: AI Scanning That Gets It Wrong Up to 75% of the Time

The technical implementation of Chat Control 2.0 relies on what's known as "client-side scanning"—a mechanism that would analyze messages, images, and videos on users' devices before encryption ever occurs, fundamentally undermining the security architecture that end-to-end encryption provides.

The proposal mandates scanning for three categories of content:

  1. Known child sexual abuse material (CSAM) using hash-matching databases (see the sketch below)
  2. Unknown CSAM using AI image recognition algorithms
  3. "Grooming" behavior through real-time text analysis of conversations

But the system's accuracy—or rather, its staggering inaccuracy—has become one of the strongest arguments against the proposal.

Germany's own data exposes the fatal flaw: According to the Federal Criminal Police Office (BKA), nearly half of all reports received through the existing voluntary scanning system in 2024 were false alarms. Of 205,728 reports forwarded by the US-based National Center for Missing and Exploited Children (NCMEC), 99,375 were "not criminally relevant"—an error rate of 48.3%. This represents an increase from 2023, when false positives already stood at 90,950.

And this is from the current voluntary system that only scans unencrypted platforms and uses relatively well-established hash-matching for known material. The proposed system would be far more expansive.

EU Commissioner for Home Affairs Ylva Johansson herself admitted in late 2023 that 75% of NCMEC reports are "not of a quality that the police can work with."

Other jurisdictions show similar or worse patterns. In Ireland, only 20.3% of reports received by police forces turned out to be actual exploitation material, with 11.2% marked as false positives. In Switzerland, false positive rates have reached up to 80%.

The implications are staggering. Even with a 99% accuracy rate, the 100 billion messages sent daily via WhatsApp alone would generate 1 billion false positives requiring verification. Innocent family vacation photos, beach snapshots of children, medical images sent to doctors, consensual private images between adults—all could be flagged, forwarded to authorities, and viewed by unknown staff and contractors.
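
The arithmetic behind that claim is straightforward to verify. The sketch below assumes, as the paragraph does, roughly 100 billion WhatsApp messages per day and a hypothetical 99% accuracy rate, and it reproduces the BKA figures quoted above.

```python
# Base-rate arithmetic for mass scanning (figures from the text above).
daily_messages = 100_000_000_000   # ~100 billion WhatsApp messages/day
false_positive_rate = 0.01         # a generous 99% accuracy assumption

print(f"{daily_messages * false_positive_rate:,.0f} false flags per day")
# -> 1,000,000,000 false flags per day

# Observed error rate in Germany's 2024 NCMEC reports (BKA figures):
total_reports = 205_728
not_criminally_relevant = 99_375
print(f"{not_criminally_relevant / total_reports:.1%} false alarms")
# -> 48.3% false alarms
```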

When AI is added to search for "unknown" CSAM using image recognition, the error rates explode further. An open letter signed by over 500 cryptographers and security researchers from 34 countries warns that "existing research confirms that state-of-the-art detectors would yield unacceptably high false positive and false negative rates, making them unsuitable for large-scale detection campaigns at the scale of hundreds of millions of users."

The Scandal: Police and Military Get a Pass

Perhaps the most damning evidence of the proposal's hypocrisy lies buried in its text. Article 7 of the Danish compromise proposal explicitly exempts the communications of police officers, soldiers, and intelligence agents from scanning.

The rationale? To protect "confidential information, including classified information."

Breyer views this exemption as a smoking gun. "This cynical exemption proves they know exactly how unreliable and dangerous the snooping algorithms are," he said. "If state communications deserve confidentiality, so do citizens', businesses', and survivors' who rely on secure channels for support and therapy."

The message is clear: We don't trust this system enough to use it on ourselves, but we're comfortable imposing it on everyone else.

The exemption creates what critics call a "two-tier system"—one where government officials enjoy the privacy and security of unmonitored encrypted communications, while ordinary citizens, journalists, lawyers, doctors, abuse survivors, and businesses must accept that every message they send could be flagged, extracted, and examined.

Consider the implications:

  • Journalists communicating with confidential sources
  • Lawyers discussing privileged client information
  • Doctors receiving medical images from patients
  • Abuse survivors seeking help from support organizations
  • Political dissidents organizing protests
  • Business executives discussing trade secrets
  • Parents sharing innocent photos of their children

All would be subject to scanning, with entire conversation threads potentially forwarded to a new EU-level agency if a single image is flagged—even erroneously.

Meanwhile, the police officers investigating them, the soldiers coordinating operations, and the intelligence agents conducting surveillance would enjoy complete exemption from the same system.

The Manufactured Crisis: How Denmark Is Allegedly Blackmailing EU Governments

The leaked minutes reveal Hummelgaard's alleged strategy: create a false deadline by claiming the European Parliament will block extension of the current temporary voluntary scanning regime (Chat Control 1.0, which is set to expire in April 2026) unless the Council agrees to the far more invasive Chat Control 2.0.

According to multiple sources, a leaked memo from a July 11 meeting states: "the European Parliament has only promised an extension of the interim regulation if an agreement is reached in the Council."

Breyer calls this "political blackmail" that "forces a bad choice and contradicts the Parliament's own stated position against mass scanning."

The European Parliament actually adopted a strong position in 2023 that favored targeted, judicially supervised scanning rather than indiscriminate mass surveillance. The Parliament's amendments included:

  • Independent audits for detection tools
  • Strict limitations on the scope of scanning
  • Creation of a Victims' Consultative Forum
  • Opposition to mandatory scanning of encrypted communications

To claim the Parliament would now refuse to extend voluntary scanning unless governments agree to mandatory universal scanning represents, according to Breyer and other experts, a complete fabrication designed to manufacture urgency and override democratic deliberation.

"The Commission has not yet proposed to extend the current legislation, and the European Parliament has not yet appointed Rapporteurs to discuss it," Breyer emphasizes, making Hummelgaard's claims about Parliament's position demonstrably false.

The Technical Reality: Breaking Encryption While Claiming Not To

Denmark and the European Commission have tried to frame Chat Control 2.0 as compatible with encryption, arguing that client-side scanning happens "before" encryption and therefore doesn't technically "break" it.

This is semantic sleight of hand that infuriates security experts.

As an open letter from cryptographers states: "Apart from analysing metadata, there are no secure methods that would allow partial or even delayed encryption of information or images between users while still maintaining the integrity of end-to-end encryption."

Client-side scanning requires installing monitoring software on users' devices that can read content before it's encrypted. This fundamentally undermines the security model of end-to-end encryption, which promises that only the sender and intended recipient can read messages—not the platform provider, not governments, and not hackers.
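
A toy sketch makes the objection concrete: the scanning hook necessarily reads the plaintext before encryption, so whoever controls, or compromises, that hook holds a pre-encryption tap. Everything below is illustrative, and the "encryption" is a deliberate stand-in, not a real cipher.

```python
from typing import Callable

def send_with_client_side_scanning(
    plaintext: str,
    scan: Callable[[str], bool],      # classifier mandated on the device
    report: Callable[[str], None],    # channel to a reporting authority
    encrypt: Callable[[str], bytes],  # the "end-to-end" encryption step
) -> bytes:
    # The scanner sees the PLAINTEXT. From this line onward, the E2EE
    # promise that only sender and recipient can read the message has
    # acquired a third reader.
    if scan(plaintext):
        report(plaintext)  # flagged content leaves the device unencrypted
    return encrypt(plaintext)

# Stand-ins so the sketch runs; none of this is real cryptography.
toy_scan = lambda text: "flagged-term" in text
toy_report = lambda text: print("forwarded to authority:", text)
toy_encrypt = lambda text: text.encode()[::-1]

ciphertext = send_with_client_side_scanning(
    "hello, flagged-term", toy_scan, toy_report, toy_encrypt
)
```

Note that the `encrypt` step itself is untouched; the confidentiality loss happens entirely in the `scan` and `report` hooks, which is why "scanning before encryption" and "breaking encryption" end up being a distinction without a difference.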

The cryptographers' letter warns that detection mechanisms would become "a high-value target for hackers and hostile nation states, which could reconfigure it to target other types of data, such as people's financial or political interests."

The US FBI recently recommended that Americans use end-to-end encrypted messaging apps specifically because of concerns about Chinese state-sponsored hackers infiltrating telecommunications systems. The EU's proposal would mandate the very vulnerabilities that make such infiltration possible.

Jacob Herbst, CTO of cybersecurity company Dubex and chair of the Danish Cyber Security Council, warns: "If you introduce this type of IT vulnerability, it could be exploited by foreign intelligence services, for example. As a business, you have to assume that your communications can be monitored. If I were to assess the communication in general, if Chat Control is introduced across a wide range of devices, I would rate its security as equivalent to that of a regular SMS."

In other words: the proposal would reduce all encrypted messaging to the security level of unencrypted text messages from the 1990s.

The Political Battle: A Council on a Knife's Edge

As of late September 2025, the EU Council remains deeply divided on Chat Control 2.0, though Denmark's push has shifted some previously opposed governments toward acceptance or uncertainty. A blocking minority successfully defeated an earlier version of the proposal in December 2024, when Germany and Luxembourg joined the opposition, but Denmark's renewed push during its Council presidency has put the regulation back on the table.

Countries supporting the proposal: Bulgaria, Croatia, Cyprus, Denmark, France, Hungary, Ireland, Lithuania, Malta, Portugal, Spain, and Sweden.

Countries opposing: Austria, Czech Republic, Estonia, Finland, Luxembourg, Netherlands, Poland, and Slovakia.

Uncertain/swing votes: Germany (split coalition), Belgium, Italy (new doubts about AI), Latvia, Romania, Greece, and Slovenia.

The outcome likely hinges on Germany, whose coalition government appears internally divided on the issue. Despite Germany previously joining Luxembourg in securing a blocking minority, reports suggest the new government since May 2025 may seek a weak "compromise" rather than maintaining its blocking position—a shift that has alarmed privacy advocates.

An internal Council working group met on October 9, 2025, with a final vote scheduled for October 14, 2025. Member states were supposed to finalize their positions by a September 12 deadline, yet opposition continued to grow even as that date passed. If the proposal passes the Council, it would still need to go through "trilogue" negotiations with the European Parliament and Commission before becoming law—but Council approval would represent a major victory for surveillance advocates.

Denmark has placed the regulation among its top priorities during its six-month Council presidency. "The Presidency will give the work on the Child Sexual Abuse (CSA) Regulation and Directive high priority," the Danish government announced, noting that "law enforcement authorities must have the necessary tools, including access to data, to investigate and prosecute crime effectively."

The Danish Domestic Backlash

Even within Denmark, the proposal has sparked fierce opposition. The Confederation of Danish Industry, the Danish Chamber of Commerce, and numerous IT experts and researchers have publicly voiced strong opposition.

During a July 23 press conference in Copenhagen, Hummelgaard struggled to defend the proposal. When questioned about whose privacy should be prioritized, he responded: "We need to ask ourselves, at the end of the day, whose privacy is it that we're mostly concerned with? Is it the privacy of the thousands of children being sexually abused? Or is it the privacy of ordinary people who may be or may not be [sic] if they share child sexual abuse content?"

The statement drew immediate criticism for creating a false dichotomy and suggesting that opposition to mass surveillance somehow means indifference to child abuse—a rhetorical tactic that critics say deliberately conflates child protection with surveillance powers.

In a particularly revealing statement, Hummelgaard said: "We must break with the totally erroneous perception that it is everyone's civil liberty to communicate on encrypted messaging services."

This framing—that encrypted communication is a "perception" to be "broken" rather than a fundamental right—has alarmed digital rights advocates worldwide.

The Lobbying Machine Behind Chat Control

Research published in September 2023 exposed extensive lobbying efforts behind Chat Control 2.0, revealing connections between law enforcement agencies, private AI companies, and well-funded advocacy organizations.

The investigation identified the WeProtect Global Alliance as a government-affiliated institution closely linked to ex-diplomat Douglas Griffiths and his Oak Foundation, which has invested more than $24 million in lobbying for Chat Control since 2019, funding organizations like the ECPAT network, the Brave Movement, and the PR agency Purpose.

Diego Naranjo, head of policy at European Digital Rights (EDRi), stated: "The most criticized European law on technology in the last decade is the product of lobbying by private companies and law enforcement."

Actor Ashton Kutcher's organization Thorn, which develops technology to combat child exploitation, has also been identified as a prominent advocate for the regulation.

Critics argue this creates perverse incentives: companies that would profit from building and operating scanning systems lobbying for mandates that would require their services, while law enforcement agencies seek expanded powers without adequate judicial oversight or technical feasibility analysis.

The Legal Assessment: Violating Fundamental Rights

Multiple legal analyses have concluded that Chat Control 2.0 violates the EU Charter of Fundamental Rights, specifically:

Article 7 (Respect for private and family life): "Everyone has the right to respect for his or her private and family life, home and communications."

Article 8 (Protection of personal data): "Everyone has the right to the protection of personal data concerning him or her."

The EU Council Legal Service itself has stated that the proposal "violates human rights" and noted that "the core problems of access to communication for potentially all users remained unchanged" through various compromise attempts.

The European Data Protection Supervisor issued a scathing assessment, noting "there is a very broad and almost unprecedented consensus between the different groups of stakeholders, including data protection bodies, legal experts, academia, industry and civil society, national legislators and law enforcement authorities that the proposal is not only ineffective, but also harmful."

Germany's Gesellschaft für Freiheitsrechte (Society for Civil Rights) emphasizes: "Indiscriminate mass surveillance is incompatible with the fundamental rights to privacy and data protection under the EU Charter, whether it involves encrypted or unencrypted communications."

The European Court of Human Rights has already ruled against measures that weaken encryption. Yet Denmark's proposal presses forward regardless, potentially setting up years of legal challenges even if passed.

What Happens If Chat Control 2.0 Passes?

If the regulation is adopted, the consequences would be far-reaching:

For users:

  • Every message, photo, and video sent through major platforms could be scanned
  • Private intimate images, family photos, and medical images could be flagged and viewed by strangers
  • Entire conversation threads would be forwarded to authorities based on single false positives
  • End-to-end encryption would be fundamentally compromised
  • Platforms might geofence EU users out of encrypted services

For platforms:

  • Companies would be forced to implement client-side scanning or leave the EU market
  • Compliance teams would face significant technical, legal, and operational challenges
  • Encrypted service providers like Tuta Mail have stated: "If Chat Control passes, we as an encrypted provider have two options: sue to fight for people's privacy, or leave the EU. A third possibility – undermining the end-to-end encryption – is not an option for us."
  • Signal has threatened to withdraw from the EU rather than compromise its security architecture
  • The Electronic Frontier Foundation and other groups have warned of similar responses

For security:

  • Scanning mechanisms would become "high-value targets for hackers and hostile nation states" who could "reconfigure them to target other types of data"
  • Business communications containing trade secrets would be vulnerable
  • Journalists' sources could be compromised
  • Political organizing would be subject to monitoring
  • The entire European digital infrastructure would be weakened

For law enforcement:

  • Police would be flooded with millions of false positive reports
  • Resources would be diverted from investigating actual crimes
  • Germany already received over 99,000 wrongly reported private chats and photos in 2024 alone—a 9% increase from the previous year
  • The signal-to-noise ratio would make the system nearly useless for its stated purpose

Globally:

  • The EU would set a dangerous precedent that authoritarian governments could cite
  • Activists warn it would "enable authoritarian governments, citing EU policy, to roll out intrusive surveillance at home, undermining privacy and free expression worldwide"
  • International trust in European digital services would collapse
  • The EU's position as a global leader in data protection and privacy would be demolished

The Missing Evidence: Does This Actually Protect Children?

One of the most striking aspects of the Chat Control 2.0 debate is the lack of evidence that mass surveillance of this type actually protects children from abuse.

Child protection experts and organizations, including the UN, warn that "mass surveillance fails to prevent abuse and actually makes children less safe—by weakening security for everyone and diverting resources from proven protective measures."

The proposal focuses heavily on detecting and reporting known imagery that has already been created—meaning the abuse has already occurred. As privacy advocates note: "Looking for reoccurrences of known material will not save children from ongoing abuse. Mass prosecution of known CSAM will divert resources needed to investigate contact abuse."

Traditional law enforcement methods—undercover operations, infiltrating criminal networks, following money trails, and investigating those who produce material rather than just those who possess it—have proven far more effective at stopping ongoing abuse and rescuing victims.

Morten von Seelen, a Danish IT expert, suggests: "It would be more appropriate to allow the police and private companies to carry out far more so-called 'agent provocateur operations' in order to catch criminals who share child sexual abuse material... There are many good Danish companies that have the capacity to carry out agent provocateur operations if the police themselves do not have the resources."

Yet the proposal doubles down on automated mass scanning—an approach that generates overwhelming numbers of false positives, criminalizes teenagers for consensual sexting, and creates a surveillance infrastructure that could be expanded to monitor any type of content governments deem problematic.

The Coming Days: A Decision That Will Shape Digital Europe

With the October 14, 2025 vote approaching, the fate of digital privacy for 450 million Europeans hangs in the balance.

The decision will answer fundamental questions about the kind of society Europe wants to be:

  • Does everyone have a right to private communication, or only government officials?
  • Is suspicionless mass surveillance compatible with democracy, or fundamentally opposed to it?
  • Can AI systems with 50-75% error rates be used to justify examining every citizen's private conversations?
  • Should companies be forced to break their own security systems to comply with surveillance mandates?
  • Is it acceptable for a government to manufacture false claims to pressure other governments into supporting legislation?

Privacy advocates are urging citizens across the EU to contact their governments immediately. Germany appears to be the critical swing vote—if it joins the blocking minority rather than seeking a compromise, Chat Control 2.0 could be stopped. If Germany capitulates, the regulation may pass.

Breyer's call to action is unambiguous: "I call on EU governments, and particularly the German government, not to fall for this blatant manipulation. To sacrifice the fundamental right to digital privacy and secure encryption based on a fabrication would be a catastrophic failure of political and moral leadership."

Conclusion: The Orwellian Precedent

What makes Chat Control 2.0 particularly alarming is not just what it would do, but what it would normalize. Once governments establish the precedent that they can mandate scanning of all private communications, the scope can easily expand.

Today it's child abuse imagery. Tomorrow it could be "terrorism" content. Then "misinformation." Then "hate speech." Then political organizing. Then criticisms of government policy.

The infrastructure, once built, will be permanent. The algorithms, once deployed, can be reprogrammed. The exemptions for police and military make clear that those in power understand the dangers—they simply believe ordinary citizens should bear risks that officials themselves refuse to accept.

This isn't the first time the proposal has been defeated—Europe's privacy movement successfully stopped an earlier version in December 2024—but Denmark's alleged use of manufactured crisis and false claims represents an escalation in tactics that suggests proponents are becoming desperate.

As one cryptography professor warned: "The EU's 'chat control' legislation is the most alarming proposal I've ever read. Taken in context, it is essentially a design for the most powerful text and image-based mass surveillance system the free world has ever seen."

Denmark's alleged use of false claims to manufacture a crisis and blackmail governments into supporting the proposal suggests a fundamental contempt for democratic process. If the allegations are true, it represents not just bad policy, but a breakdown of good faith governance at the highest levels of European decision-making.

The question is not whether child abuse is serious—everyone agrees it is. The question is whether the response to that serious problem should be to subject every European to automated AI surveillance of their most private communications, using error-prone systems that the proposal's own supporters won't apply to themselves.

In a society that claims to value privacy, democracy, and the rule of law, the answer should be obvious.

The October 14 vote will reveal whether European governments agree.


This article is based on leaked internal documents, public statements from government officials, legal analyses, technical assessments from cryptographers and security experts, and reporting by Netzpolitik, Reclaim The Net, Euronews, TechRadar, and other outlets. Research included examination of the Danish compromise text, Council Legal Service opinions, European Parliament positions, and statements from over 500 security researchers opposing the proposal.
