In this particular case the linked article is definitely AI generated.
So if you write in a way that engages the reader, you’re going to struggle not to use em dashes and the occasional a/b contrast, because those challenge the reader to engage. But when overused, they not only lose their intended effect (breaking the reader out of passivity), they also constitute a new kind of sin.
So no, don’t “trust your gut”. Trust the math. Is it too much? Or is it just trying to jar you out of not engaging with the prose?
But yeah, I’d say this article was likely written primarily with AI. That doesn’t mean it isn’t guided with intention or potentially important; it just means the article was probably commissioned and edited by a human, not written by one.
They maintain such a consistent paragraph length that they're either a professional copyeditor or, as is clearly the case, an LLM.
Humans deviate a lot more than this: they use run-on sentences or lose the thread in their writing.
This blog, however, reads like every other post on LinkedIn: semi-professional tone, with a strong "You, Me" hook to most posts.
I encourage everyone to make an LLM-generated blog (don't post the articles anywhere, just generate one) to get a feeling for how these things write.
Because this is unmistakably LLM. I'd even go so far as to identify the model behind these particular posts as ChatGPT.
Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.
https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...
It feels like you're trying for a lazy gotcha, but the actual point here is something like "AI models often generate writing with specific noticeable characteristics that make it obviously AI output, and TFA is an instance of such writing, and this should be called out when possible"
The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.
There've been stylistic fads before LLMs, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.
Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.
Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.
Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.
> explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content
We might disagree here, but if we're strict, they did not say "either/or", and especially not explicitly. They raised two possibilities, but didn't exclude others.
> there's no reason to believe the style came from LLMs
They say "might" and "plausibly". I think there's no belief there until you assume it.
And even if: it's not unlikely that a contemporary author's mind is influenced by the prevalent LLM style. We are influenced by what we read. This has been happening to everyone for ages, without anyone questioning the agency of writers. There's nothing wrong with suggesting that could be the case here. It's entirely human.
I know it's easy for one's mind to jump to conclusions, but I am not a fan of taking that as far as accusing someone of "dehumanizing" others. Such an escalation should ideally cause a pause and a think, before pressing submit.
The whole corpus is in there, but the standard style is what the model is tuned for.
And the people I read had a better ability not to put in unnecessary, completely made-up facts or illogical implications.
As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.
For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)
As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.
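That linking-word audit can be automated rather than grepped by hand. A minimal sketch (the word list, function name, and repetition threshold here are my own assumptions, not anything described above):

```python
# Count how often common linking words appear in a piece of writing,
# to catch the "used the same connective five times in a row" problem.
import re
from collections import Counter

# Hypothetical starter list; extend with your own habitual connectives.
LINKING_WORDS = {
    "however", "moreover", "thus", "therefore",
    "furthermore", "indeed", "consequently", "nevertheless",
}

def linking_word_counts(text: str) -> Counter:
    """Return a Counter of linking-word occurrences in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in LINKING_WORDS)

sample = (
    "Thus we began. Thus it continued. Thus, however, it ended. "
    "Moreover, the style repeated itself."
)
print(linking_word_counts(sample).most_common())
```

Running this over a draft surfaces the worst offender at the top of `most_common()`; anything that shows up several times per paragraph is a candidate for rewording.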
Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.
(Not that I claim to be a particularly good writer.)
[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
The OP is a blog post, and you’re talking about blog-post writing. Maybe you just don’t like their style?
It’s also true that LLM second drafts are a thing.
And it’s true both can ‘record scratch’ you right out of attention.
As is the now-present trend among readers to be impatient and quickly bored.
And this criticism of writing style (for my part, this article is perfectly readable): what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don’t like the soup.
> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine
Which of course doesn't connect to the rest of the article's contents, because the AI doesn't have any intention in its writing.