I had ChatGPT investigate and summarize the CCDH report it is based on. https://counterhate.com/research/grok-floods-x-with-sexualiz...
"CCDH did not prove that X is widely distributing child sexual abuse material. Their report extrapolates from a small, non-random sample of AI-generated images, many of which appear to be stylized or fictional anime content. While regulators are rightly investigating whether Grok’s safeguards were insufficient, CCDH’s public framing collapses “sexualized imagery” and “youthful-looking fictional characters” into CSAM-adjacent rhetoric that is not supported by verified prevalence data or legal findings."
Scale of sexual content: “~3 million sexualized images generated by Grok”
They sampled ~20,000 images, labeled some as sexualized, then extrapolated using an estimated total image volume. The total image count (~4.6M) is not independently verified, and the extrapolation assumes a uniform distribution across all prompts and users.
Images of children: “~23,000 sexualized images of children”
They label images as “likely depicting minors” based on visual inference, not age metadata. No verification that these are real minors, real people, or legally CSAM.
CSAM framing: Implies Grok/X is flooding the platform with child sexual abuse material.
The report explicitly avoids claiming confirmed CSAM, using phrases like “may amount to CSAM.”
Public-facing messaging collapses “sexualized anime / youthful-looking characters” into CSAM-adjacent rhetoric.
CCDH's bias: Ties to the UK Labour Party: Several of CCDH’s founders and leaders have deep ties to Britain's center-left Labour Party. Founder Imran Ahmed was an advisor to Labour MPs.
Target Selection: The organization’s "Stop Funding Fake News" campaign and other deplatforming efforts have frequently targeted right-leaning outlets like The Daily Wire, Breitbart, and Zero Hedge. Critics argue they rarely apply the same scrutiny to misinformation from left-leaning sources.
"Kill Musk's Twitter" Controversy: Leaked documents and reporting in late 2024 and 2025 alleged that CCDH had internal goals to "kill" Elon Musk’s X (Twitter) by targeting its advertising revenue.

For instance, in the US, I cannot hysterically scream FIRE while running toward the exit of a theater, nor can I express a desire to cause bodily harm to an individual.
Not that I would, per se, but if I did, I'd be liable to prosecution for the damages caused in either instance.
I'd need the tacit approval of those involved (shown by their not seeking legal recourse) to do either without consequence.
Under current First Amendment law, the government cannot punish inflammatory speech unless it is directed to inciting "imminent lawless action" and is "likely" to produce such action.
To illustrate how high this bar is: you can legally sell and wear a T-shirt that says "I heart killing [X group]". While many find that expression offensive or harmful, it is protected speech. This is because:
- It is not a true threat (it doesn’t target a specific individual with a credible intent to harm).
- It isn't incitement (it doesn't command a crowd to commit a crime immediately).
In the US, you don't need approval to express yourself. The default is that your speech is protected unless the government can prove it falls into a tiny handful of narrow, well-defined exceptions.
Anybody can run their mouths. Discussing ideas with others is what’s protected.
Obviously we should censor fascists and subversives!