I do find it more helpful when people specify why they think something was AI-generated. Especially since people are often wrong (from what I can tell).
For example, some people seem to be irritated by jokes and being able to ignore +5 funny comments might be something they want.
Strong agree.
If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.
If you can't, accept that you can't, and stop wasting everyone else's time with your unvalidated guesses about whether something is AI or not.
The least valuable, lowest-signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.
It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.
Also, I'm pretty sure most people can spot blogspam full of glaringly obvious, cliched AI patterns without being able to build a high-reliability AI detector. Setting that as the threshold for commenting on whether an article might have been generated is akin to arguing that people shouldn't question the accuracy of a claim unless they've built an oracle or cracked lie detection.
In one recent case (the slop article about adenosine signalling), a commenter linked to the original paper that the slop was engagement-farming off of. I found that comment very helpful.
GPT has ruined my enjoyment of using em dashes, for instance.