The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.

I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.

reply
It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by west coast liberals.
reply
A mere opinion is not mental illness.
reply
Was that written by an LLM? It isn't that it's a mere opinion; it's that it gets pathological when every word out there has to be scrutinized for the possibility that an AI produced it instead of a human intelligence. Am I an LLM with the right prompts set up to respond this way? I know I'm not, but everyone else out there will just have to trust me on that.
reply
I wasn't suggesting you have a mental illness for having an opinion.

Rather, I was commenting that just as bad as generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.

So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.

reply
The threads that have the top comment saying "this is AI slop" are nearly always about an article that is obvious AI slop.

Threads about articles that aren't slop - like this one - don't.

reply
If you need to tell yourself that in order to cope that's fine with me.
reply
Which part do you disagree with?
reply
I’m thinking that I may actually prefer undetectable AI slop to human comments like that. I do agree with your upthread comments.
reply