

Chat control evaluation report: EU Commission again fails to demonstrate effectiveness of mass surveillance of intimate personal photos and videos


The EU Council’s current push to make Chat Control 1.0 (Regulation (EU) 2021/1232) permanent is legally and ethically reckless. The Commission’s own 2025 evaluation report admits to a total failure in data collection, an inability to link mass surveillance to actual convictions, and significant error rates in the detection technologies. To permanently enshrine a derogation from fundamental rights on the basis of a report that explicitly states “available data are insufficient” to judge proportionality violates EU lawmaking principles.

Detailed Critique

1. The “Argument from Ignorance” on Proportionality


The Report: In Section 3 (Conclusions), the Commission states that “the available data are insufficient to provide a definitive answer” regarding the proportionality of the Regulation. Yet, in the very same paragraph, it concludes: “there are no indications that the derogation is not proportionate.”
The Critique: This is a textbook argument from ignorance. The Commission is arguing that because its data is too fragmented to prove the law is bad, the law must be good. You cannot permanently suspend the fundamental right to privacy (Article 7 of the Charter of Fundamental Rights) based on an absence of data. The burden of proof lies with the legislator to demonstrate necessity and efficacy, and this report does neither.

2. The Broken Link Between Surveillance and Convictions


The Report: Section 2.2.3 explicitly admits: “It is not currently possible… to establish a clear link between these convictions and the reports submitted by providers.” Furthermore, major Member States like Germany and Spain failed to provide usable data on convictions linked to this Regulation.
The Critique: As noted in my blogpost on the previous evaluation, there is no evidence that the mass scanning of private messages contributes significantly to convicting abusers. If millions of private messages are scanned and hundreds of thousands of reports are generated (708,894 in 2024), but the Commission cannot point to a specific number of resulting convictions, the system is a dragnet that violates privacy without any proven benefit to child safety. It generates “noise” for law enforcement rather than actionable intelligence.

3. High Error Rates and “Black Box” Algorithms


The Report:

  • Microsoft (Section 2.1.6) reported that its data was “insufficient to calculate an error rate.”
  • Yubo reported error rates for detecting new CSAM/grooming of 20% in 2023 and 13% in 2024.
  • The report notes that “human review is not factored into the statistics,” meaning the raw algorithmic flagging is even less accurate than the published figures suggest.

The Critique: Making this Regulation permanent endorses the use of technology that is admittedly flawed. A 13-20% error rate in flagging “grooming” or new CSAM means thousands of innocent users are flagged, their private communications viewed by corporate moderators, and their cases potentially reported to the police in error. That a giant like Microsoft cannot even calculate its error rate shows that Big Tech is operating without accountability or transparency. According to the German police and former EU Commissioner Johansson, the actual error rate is far higher (50-75%). The rough calculation below illustrates what these rates mean at scale.
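
A minimal back-of-the-envelope sketch in Python, applying the rates above to the 708,894 reports from 2024: the report count and the error rates come from the sources cited, while treating each report as a distinct flagged user and applying a single provider’s rate EU-wide are simplifying assumptions for illustration only.

    # Back-of-the-envelope estimate of erroneous reports at the error
    # rates discussed above. Illustrative only: assumes each report
    # concerns a distinct user and that one rate applies uniformly
    # across all providers.
    TOTAL_REPORTS_2024 = 708_894  # EU-related reports in 2024 (per the report)

    error_rates = {
        "Yubo 2024": 0.13,
        "Yubo 2023": 0.20,
        "German police / Johansson (low)": 0.50,
        "German police / Johansson (high)": 0.75,
    }

    for label, rate in error_rates.items():
        false_reports = round(TOTAL_REPORTS_2024 * rate)
        print(f"{label}: ~{false_reports:,} erroneous reports ({rate:.0%})")

Even at the most charitable published rate (13%), that is roughly 92,000 wrongly flagged communications in a single year; at the 50-75% estimates, the majority of the dragnet’s output would be noise.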

4. Chaos in Data and Lack of EU Control


The Report: The Commission admits that providers “did not use the standard form for reporting” (Section 1) and that Member States provided “fragmented and incomplete” data. The disparity between the number of reports NCMEC sent to Member States and the number Member States acknowledge processing is massive (e.g., France received 150,000 reports from NCMEC but has incomplete processing data).
The Critique: The EU cannot effectively oversee this surveillance. If, after three years, the Commission cannot force providers to use a standard reporting form or get Member States to track basic statistics, the Regulation is dysfunctional. Making a dysfunctional temporary fix permanent is poor governance. It cements a system where US tech giants (Google, Meta, Microsoft) act as private police forces with no standardized oversight.

5. Obsolescence via Encryption


The Report: The report notes a 30% drop in reports concerning the EU in 2024, attributed largely to interpersonal messaging services moving to end-to-end encryption (E2EE) (Section 2.2.1).
The Critique: The Regulation is already obsolete. As noted in the previous commentary, once platforms such as Meta move to E2EE, voluntary scanning becomes impossible without breaking encryption through client-side scanning. The drop in reports proves that voluntary scanning is a dying model. Making this Regulation permanent is a desperate attempt to cling to a failing approach rather than investing in targeted investigations and “safety by design” that respects encryption.

6. Failure to Assess Privacy Intrusion


The Report: The conclusion states: “No information was submitted by the providers on whether the technologies were deployed… in the least privacy-intrusive way.”
The Critique: The Regulation requires that the derogation be used only when necessary and in the least intrusive manner. The Commission admits it has no information on whether this legal requirement is being met. To extend a law permanently when the primary safeguard (minimization of privacy intrusion) is not being monitored is a dereliction of duty.

Conclusion


The Council seeks to permanently legalize a regime of mass surveillance in which:

  1. The technology has double-digit error rates (Yubo).
  2. The efficacy (convictions) is unproven.
  3. The oversight (data collection) is broken.
  4. The targets (encrypted chats) are increasingly immune to it.

This confirms the fears raised in my commentary on the previous evaluation: this is performative security that sacrifices the privacy of all citizens for a system the Commission admits it cannot properly measure or validate.

Read on: chatcontrol.eu


patrick-breyer.de/en/chat-cont…
