EU Commission reported that the false positive rate was 13-20%.
German police reported that 50% of all reports were wrong.
The system is rubbish, and the EU MEPs were quite open about wanting it to go away.
Of course, actually carrying out that experiment would be absurd, since I don't think anyone expects an appreciable percentage of clearnet material to be CSAM. The working assumption is that the goal is finding a needle in a haystack, so GP's objection about needing to know the false negative rate is misguided.
However, the "13-20%" you're quoting is itself a misleading number: it's the false positive rate that a single small company (Yubo) reported. The false positive rates reported by other companies range from 0.32% to 1.5%, which is still a high error rate in absolute numbers.
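To see why even a "low" percentage rate matters, here's a quick back-of-the-envelope sketch. The scan volume below is a made-up assumption purely for illustration (the report doesn't give a single comparable figure); only the rates come from the numbers quoted above.

```python
# Base-rate arithmetic: a small false positive *rate* still means a large
# absolute number of innocent messages flagged when scanning is done at scale.

def absolute_false_positives(messages_scanned: int, false_positive_rate: float) -> int:
    """Number of innocent messages flagged, given a scan volume and an FPR."""
    return round(messages_scanned * false_positive_rate)

scanned = 100_000_000  # HYPOTHETICAL volume: 100M scanned messages, not a report figure

for fpr in (0.0032, 0.015):  # the 0.32%-1.5% range reported by other companies
    flagged = absolute_false_positives(scanned, fpr)
    print(f"FPR {fpr:.2%}: {flagged:,} false flags")
```

At the hypothetical 100M-message scale, the "good" 0.32% rate already means 320,000 wrongly flagged messages, and 1.5% means 1.5 million, which is the sense in which these rates are "still high in absolute numbers".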
Just to be clear: the report itself is full of uncertainty, convenient half-truths and false causality. For example, it relies entirely on the Big Tech platforms themselves to count false positives, counting a report as false only when the platform reversed its own moderation decision. Microsoft apparently even claims that no user ever appealed a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, even though it is in direct conflict with fundamental rights and is required to be proportionate to its goals.
The section on "children identified" is also a complete mess: most countries can't even report the most basic data, and it isn't clear whether mass surveillance contributed anything to new cases at all. Yet somehow the report still concludes that "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems entirely baseless.
[1] https://www.europarl.europa.eu/RegData/docs_autres_instituti...
"We can now finally say with certainty that Chat Control 1.0 will end on April 3 without replacement. The European Parliament has sent a clear signal: it is time to put an end to this ineffective and disproportionate derogation from privacy rules. Under the pretext of protecting children, millions of private messages from innocent citizens were being scanned for years without delivering adequate results. This system simply did not work and had no place in a democratic society."
It doesn't have to be unanimous on HN. It wasn't even unanimous in the European Parliament.
But it was legal and democratic. And the discussion in parliament explicitly included the fact that the companies will either have to stop, or find a different legal basis.
The companies in this blog post are effectively admitting they are making a choice to go against the law.
As there should be.
The big tech companies have done that every time the EU passes some consumer protections, and have been spanked in court several times for the disingenuousness.
A) actually being paid in the end and
B) high enough to be of any concern to the company.