> The bitter irony the author lands on: the only way to seem human is to pass your writing through an LLM.

(FWIW, some people consider this style of colon use an LLM-ism.)

I appreciate where you're coming from, though. As bland as LLM output can be, it seems to read more human to people because it's more average. (Although I can't really fathom seeing the neurodivergent as not human; neurodiversity is about the most human trait I can imagine. cf. https://quoteinvestigator.com/2022/11/05/think-alike/ .)

Long before the rise of ChatGPT, it seems a lot of people were immersed in a culture where "improving" your writing with tools like Grammarly was considered more or less mandatory. And it seems like people read less nowadays, certainly when it comes to attempts at good writing for writing's sake. Overall I fear the art of natural language communication is in decline.

reply
As this post has been (to my sensibilities) obviously composed by an LLM, I can tell you: this does not read "human."
reply
"AI use detection" is, like any test, not without cost. Meaning that, as a teacher, accusing a student of using an LLM, it may be prudent to consider the cost of a "false positive" accusation. I've seen a couple of examples now where students find sudden spurts of motivation and show unexpected talent on an assignment, to be accused of AI use after handing it in.

One should ask oneself: How many insults to the intelligence and creativity of unexpectedly excelling students (who haven't used AI) is catching one shortcut-taking, LLM-using student worth? Is it 1/10? 1/1000? How much "demotivation of an unexpectedly excelling student" is the "rightful punishment of the cheating, LLM-using student" worth? And what is the exact cost of a false negative (letting the LLM-using student off the hook)?

In other words, where on the Receiver Operating Characteristic (ROC) curve do you want to sit, as a teacher? I imagine it's quite the dilemma.
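
The trade-off above can be made concrete with a toy expected-cost calculation. This is a sketch with entirely hypothetical numbers (detection rates, base rate of cheating, and the relative cost of a false accusation versus a missed cheater are all assumptions, not data):

```python
def expected_cost(tpr, fpr, p_cheat, cost_fp, cost_fn):
    """Expected cost per student at one operating point on the ROC curve.

    tpr: true positive rate (cheaters correctly flagged)
    fpr: false positive rate (honest students wrongly flagged)
    p_cheat: assumed base rate of LLM-using students
    cost_fp: cost of falsely accusing an honest student
    cost_fn: cost of letting a cheating student off the hook
    """
    p_honest = 1 - p_cheat
    false_accusations = fpr * p_honest      # honest student accused
    missed_cheaters = (1 - tpr) * p_cheat   # cheater let off the hook
    return cost_fp * false_accusations + cost_fn * missed_cheaters

# Two hypothetical operating points for the same detector, with a false
# accusation assumed 10x as costly as a missed cheater:
strict = expected_cost(tpr=0.95, fpr=0.20, p_cheat=0.10, cost_fp=10, cost_fn=1)
lenient = expected_cost(tpr=0.60, fpr=0.01, p_cheat=0.10, cost_fp=10, cost_fn=1)

print(round(strict, 3))   # 1.805
print(round(lenient, 3))  # 0.13
```

Under these made-up weights, the "lenient" operating point dominates: even catching far fewer cheaters is cheaper once false accusations are priced in.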

reply
The non-LLM version of this happened to me in a high school English class, back in the 2010s. I was accused of turning in a story downloaded from the internet, but in reality, I had pushed past the incredible barrier I usually felt at the start of a task, gotten into the flow, and started enjoying it.

I'm not sure if it had any lasting effects. Maybe a burning hatred of Grammarly ads.

reply
~30 years ago I sat down with two students and accused them of copying each others’ work, because they both made the same amusing mistake: they called their C functions without passing arguments, but they declared their variables in such a way that the values would coincidentally be in the right place on the stack. I have to imagine debugging their own code was a mystery.

They indicated that although they worked closely together while learning the material, they weren't stealing from each other. I believed them then, and still believe them now, but I'm so glad I don't have to deal with today's AI world.

reply
deleted
reply
>To intentionally misspell a word makes me [sic], but it must be done.

LLM killed traditional poetry, what you are now seeing is post-LLM poetry.

Maybe you missed it, but this is clearly not an LLM; what prompt would even produce that?

reply
I can already see it playing out. Some day (maybe soon) LLMs will come up with such quirks; I (and perhaps you?) will continue to insist that this does not make them "conscious" or "AGI" or "persons" or what-have-you; and I will be accused of goalpost-shifting.
reply
But changing the way we communicate and present ourselves to prove we are not malicious (or disreputable) actors has always been a thing.
reply