Which LLMisms are you seeing in their post? Their grammar, word choice, thought flow, and markings all denote a fully human authorship to me, so confidently that I would say they likely didn't even consult an LLM.
reply
Yeah I definitely misread their post.
reply
lol. I did use a lot of short sentences, that’s my bad. But please read through [1] and compare my text against it; it may enlighten you on how to actually spot LLM writing.

[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

reply
Oh no, I'm sorry to hear that.

For the future, try to avoid prevaricating when you actually have a clear sense of what you want to argue. Instead of convincing me that you've weighed both options and found luddism wanting, you just come off as dishonest. If you think stridently, write stridently.

reply
I’m not a native speaker, and you may find my writing simplistic if your standard vocabulary includes three expressions I’ve had to look up (I don’t mean this as an insult; I was just genuinely stumped that I could barely understand your comment).

I may think stridently (debatable) but I generally believe it is best to always try to meet in the middle if the goal is genuine discussion. This is my attempt at that.

reply
But meeting in the middle only works if you honestly believe the middle is a valuable place to be. I don't want to dissect your writing too much, but let's look at one example.

> The issue with most of these articles is that they seem to demonize the technology, and systematically use demeaning language about all of its facets.

This is very confident, strident language. You clearly believe that there is a faction of people demonizing technology, akin to luddites, who are not worthy of being taken seriously.

> This one raises a lot of important points about LLMs, but...

So here you go for the rhetorical device of weighing the opposing view. Except you don't weigh it at all. You are not at all specific about what those points are. It's just a way to signal that you're being thoughtful without having to actually engage with the opposing viewpoint.

> I do think that safety is important... But I think it's better not to be a luddite.

Again, the rhetoric of moderation but not at all moderate in content.

It was a clear mistake to think that this was LLM writing. But I suspect the reason I made this mistake is that AI writing influences people to mimic surface-level aspects of its style. AI writing tends to actually do the "You might say A is true, but B has some valid points, however A is ultimately correct." Your writing reads like that if you aren't looking closely, but underneath it is a very human self-assuredness with a thin veneer of charitability.

reply