They did.

EU Commission reported that the false positive rate was 13-20%.

German police reported that 50% of all reports were wrong.

The system is rubbish and the EU MEPs were quite open about wanting it to go away.

reply
What are the false negative rate and the total case counts? Without those we are missing too much. If the false negative rate (saying it's fine when it isn't) is high, then the whole thing is useless. If the total cases number only a few hundred (either CSAM isn't a problem on these platforms, or those doing it use other platforms because they know they would be caught here), I don't care much that some reports are false positives - odds are it didn't affect me.
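To make the point concrete, here's a minimal sketch of why the false positive rate alone tells you little without prevalence and the false negative rate. All numbers below are made up for illustration, not taken from any report:

```python
# Hypothetical illustration. Every number here is an assumption.
def flag_precision(prevalence, fpr, fnr):
    """P(actually CSAM | flagged), by Bayes' rule."""
    tp = prevalence * (1 - fnr)   # fraction of all items that are true positives
    fp = (1 - prevalence) * fpr   # fraction of all items that are false positives
    return tp / (tp + fp)

# With an assumed 1-in-100,000 prevalence and a 1% FPR,
# almost every flag is wrong:
print(flag_precision(1e-5, 0.01, 0.2))  # ≈ 0.0008
```

The base rate dominates: at low prevalence, even a seemingly good FPR means most flags are false.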
reply
You cannot know the false negative rate without investigating 100% of all photos. You are asking for the impossible.
reply
Sure you can: random sampling would work. Don't just go making things up.

Of course actually carrying out that experiment would be absurd since I don't think anyone expects an appreciable percentage of clearnet material to be CSAM. The working assumption is that the goal is to find a needle in a haystack so GP's objection about needing to know the false negative rate is misguided.
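A back-of-the-envelope calculation shows why that experiment is absurd. To estimate a false negative rate you first need enough true positives in your random sample, and at needle-in-a-haystack prevalence that sample is enormous. The prevalence figure here is an assumption for illustration only:

```python
# Assumed (made-up) prevalence: 1 in a million scanned items is CSAM.
prevalence = 1e-6
# Rough minimum number of true positives you'd want in the sample
# before a rate estimate means anything at all.
positives_needed = 100
sample_size = positives_needed / prevalence
print(f"{sample_size:.0e} items must be manually reviewed")  # 1e+08
```

A hundred million items to review by hand, under a generous assumption, is why nobody runs this study on clearnet material.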

reply
I expect the equivalent of the FBI is investigating this using other sources and so has plenty of data without needing to randomly sample any non-suspect conversation. CSAM has been a problem since before computers.
reply
Only if you want perfection. The EU should be doing investigations it can use statistically to produce a good estimate.
reply
The report you're referring to by the European Commission [1] shows that the mass surveillance of Chat Control 1.0 is probably not very proportional. They even note themselves that "The available data are insufficient to provide a definitive answer to this question".

However, the "13-20%" that you're quoting is a dishonest propaganda number itself. It's the false positive rate that a single small company (Yubo) reported. The reported false positive rates of other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers.
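To see what "high error rate in absolute numbers" means, here's a quick sketch. The report volume below is a hypothetical assumption, not a figure from the Commission's report; only the 0.32%-1.5% rates come from the thread:

```python
# Hypothetical scale: the reports figure is an assumption for illustration.
reports = 10_000_000
for fpr in (0.0032, 0.015):
    print(f"FPR {fpr:.2%}: {int(reports * fpr):,} false reports")
```

Even at the low end, a small percentage applied to millions of reports means tens of thousands of innocent people flagged.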

Just to be clear: the report itself is full of uncertainty, convenient half-truths and false causality. For example, it relies entirely on the Big Tech platforms themselves to count false positives as reversed moderation decisions. Microsoft apparently even claims that no user ever appealed against a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, even though it is in direct conflict with fundamental rights and required to be proportionate to its goals.

The section about "children identified" is also a complete mess where most countries can't even report the most basic data, and it isn't clear if mass surveillance contributed anything to new cases at all. But somehow they still conclude "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems extremely baseless.

[1] https://www.europarl.europa.eu/RegData/docs_autres_instituti...

reply
I'm sure a lot of HN commenters would agree that a CSAM detection system with a 13-20% false positive rate should be terminated, but we're not EU regulators. And you've got a sibling comment saying this would be malicious compliance, so even on HN it's not unanimous. Is there an example of a specific EU official, MEP, etc. explicitly stating that tech companies should not perform hash-based CSAM detection or should not perform CSAM detection at all?
reply
Yes? The Pirate Party has MEPs, it’s not exactly difficult to find their quotes. 3 seconds of searching was enough to find the following quote from MEP Markéta Gregorová:

"We can now finally say with certainty that Chat Control 1.0 will end on April 3 without replacement. The European Parliament has sent a clear signal: it is time to put an end to this ineffective and disproportionate derogation from privacy rules. Under the pretext of protecting children, millions of private messages from innocent citizens were being scanned for years without delivering adequate results. This system simply did not work and had no place in a democratic society."

It doesn’t have to be unanimous on HN. It wasn’t even unanimous in the EUP.

But what it was is legal and democratic. And the discussion in parliament explicitly included the fact that the companies would either have to stop or find a different legal grounding.

The companies in this blog post are effectively admitting they are making a choice to go against the law.

reply
> I'm pretty confident there would be much more outrage about "malicious compliance".

As there should be.

The big tech companies have done that every time the EU passes some consumer protections, and have been spanked in court several times for the disingenuousness.

reply
Spanked? Hardly ever are the fines

A) actually paid in the end, and

B) high enough to be of any concern to the corporation.

reply