I don’t mind that they let an LLM write the text, but they should at least have edited it.
Another one: "Two instructions are missing: [...] Four bytes."
One more: "The defensive coding hid the problem, but it didn’t eliminate it."
This insistence that certain stylistic patterns are "tell-tale" signs that an article was written by AI makes no sense, particularly when you consider that whatever stylistic tics an LLM may possess are a result of its being trained on human writing.
My hunch that this is substantially LLM-generated is based on more than that.
In my head it's like a Bayesian classifier: you look at all the sentences and judge whether each is more or less likely to be LLM- or human-generated. Then you add in prior information, like the fact that the author did the research using Claude, which increases the likelihood that they also used Claude for writing.
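That mental model can be written down directly. A minimal sketch (the per-sentence likelihood ratios here are made-up numbers for illustration, not a real detector): combine a prior about the author with per-sentence evidence, naive-Bayes style, in log-odds space.

```python
import math

def posterior_llm(prior_llm, sentence_lrs):
    """Combine a prior P(LLM-written) with per-sentence likelihood
    ratios P(sentence | LLM) / P(sentence | human), naive-Bayes style.

    Returns the posterior probability that the text is LLM-written.
    """
    # Work in log-odds so evidence simply adds up.
    log_odds = math.log(prior_llm / (1 - prior_llm))
    for lr in sentence_lrs:
        log_odds += math.log(lr)
    # Convert back to a probability via the logistic function.
    return 1 / (1 + math.exp(-log_odds))

# Neutral prior, two sentences that each look twice as likely
# under the LLM hypothesis ("it's not just X, it's Y"-style tells):
print(posterior_llm(0.5, [2.0, 2.0]))  # -> 0.8
# Same sentences, but the prior says the author uses Claude heavily:
print(posterior_llm(0.7, [2.0, 2.0]))  # higher still
```

The point of the sketch is only that weak per-sentence tells and a prior about the author compound multiplicatively, which is why neither alone feels conclusive.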
Maybe your detector just isn't as sensitive (yet), or maybe I'm wrong, but I have pretty high confidence that at least 10% of the sentences were LLM-generated.
Yes, the stylistic patterns exist in human writing, but RLHF has increased their frequency. Also, LLM writing has a certain monotonicity that human writing often lacks. Which is not surprising: the machine generates more or less the most likely text in an algorithmic manner. Humans don't. They write a few sentences, then get a coffee, sleep, write a few more. That creates more variety than an LLM can.
Fun exercise: https://en.wikipedia.org/wiki/Wikipedia:AI_or_not_quiz
Someone probably expended a lot of time and effort planning, thinking about, and writing an interesting article, and then you stroll by and casually accuse them of being a bone idle cheat, with no supporting evidence other than your "sensitive detector" and a bunch of hand-wavy nonsense that adds up to naught.
More importantly, it's an article about using Claude from a company about using Claude. I think on the balance it's very likely that they would use Claude to write their technical blog posts.
Your job doesn't require you to think or expend effort?
I also hate this style of plastic, pre-digested prose. It's soulless and uninteresting. Maybe I've just read too much AI slop. I associate this writing style with low-quality, uninteresting junk.
If there is constant vigilance on the part of the reader as to how it was created, meaning and value become secondary, a sure path to the death of reading as a joy.
For what it’s worth, Pangram reports that Marcus’ article is 100% LLM-written: https://www.pangram.com/history/640288b9-e16b-4f76-a730-8000...
73% judged GPT-4.5 (edit: I had incorrectly said 4o before) to be the human.
https://arxiv.org/abs/2503.23674
Not only are people bad at judging this, they are directionally wrong.
> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization.
Even though they are perfect for usage in writing down thoughts and notes.
“An em dash … they’re a witch!”… “it’s not just X, it’s Y… they’re a witch!”
that's a strawman alright; all the comments complaining about how they can't use their writing style without being ganged up on have positive karma from my angle, so I'm not sure the "positive social reactions" are really aligned with your imagination. Or does it only count when it aligns with your persecution complex?
In fact, the latter is the opposite of terseness. LLMs love to tell you what things are not way more than people do.
See https://www.blakestockton.com/dont-write-like-ai-1-101-negat...
(The irony that I started with "it's not just" isn't lost on me)
But an LLM wouldn't write "It's not just X, it's the Y and Z". No disrespect to your writing intended, but adding that extra clause adds just the slightest bit of natural slack to the flow of the sentence, whereas everything LLMs generate comes out like marketing copy that's trying to be as punchy and cloying as possible at all times.
It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors or ignore obvious signs of AI use because the detectors don’t trigger on it.
Not sure how I feel about the whole "LLMs learned from human texts, so now the people who helped write human texts are suddenly accused of plagiarizing LLMs" thing yet, but seems backwards so far and like a low quality criticism.
> The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.
> The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.
> *Tests verify the code as written; a behavioural specification asks what the code is for.*
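The quoted distinction can be sketched in miniature. This is an invented toy (the names LGYRO, BADEND-style cleanup, and the mode transitions are borrowed from the quote; none of this is the article's real code): an example-based test verifies one path as written, while a path-quantifying specification starts from the LGYRO flag and asks whether any path out of gyro mode fails to clear it.

```python
# Toy sketch, invented for illustration; not the article's actual code.
# One mode-switching function with correct cleanup, and one forgotten
# exit path that also leaves gyro mode.

def switch_mode(state, target):
    state = dict(state)
    if state["mode"] == "GYRO" and target != "GYRO":
        state["LGYRO"] = False  # cleanup on the path the author thought of
    state["mode"] = target
    return state

def emergency_stop(state):
    state = dict(state)
    state["mode"] = "IDLE"  # bug: leaves GYRO but forgets to clear LGYRO
    return state

# A test verifies the code as written, on one chosen path:
def test_gyro_to_idle_clears_flag():
    s = switch_mode({"mode": "GYRO", "LGYRO": True}, "IDLE")
    assert s["LGYRO"] is False  # passes; the bug is on another path

# The specification approaches from the other direction: start from
# LGYRO and check EVERY transition out of gyro mode.
def check_spec():
    transitions = [
        lambda s: switch_mode(s, "IDLE"),
        lambda s: switch_mode(s, "ACCEL"),
        emergency_stop,
    ]
    start = {"mode": "GYRO", "LGYRO": True}
    return [i for i, t in enumerate(transitions) if t(start)["LGYRO"]]

print(check_spec())  # -> [2], flagging emergency_stop
```

The example-based test passes and never exercises `emergency_stop`; the specification's exhaustive question over paths is what surfaces the missed cleanup.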
However this is a blog post about using Claude for XYZ, from an AI company whose tagline is
"AI-assisted engineering that unlocks your organization's potential"
Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.
Given that I've been familiar with Juxt since before LLMs were a thing, have used plenty of their Clojure libraries, and have hung out with people from Juxt, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.
Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade if not more.
Don't understand how these tools exist.
They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.
I personally think it’s an intractable problem, but I do feel pangram gives some useful signal, albeit not reliably.
What's making it even more difficult to tell now is people who use AI a lot seem to be actively picking up some of its vocab and writing style quirks.
It seems to look at sections of ~300 words. And for one section at least it has low confidence.
I tested it by getting ChatGPT to add a paragraph to one of my sister comments. Result is "100% human" when in fact it's only 75% human.
Pangram test result: https://www.pangram.com/history/1ee3ce96-6ae5-4de7-9d91-5846...
ChatGPT session where it added a paragraph that Pangram misses: https://chatgpt.com/share/69d4faff-1e18-8329-84fa-6c86fc8258...
A Note on the Process
To be clear about what happened here: Claude wrote this article.
https://www.juxt.pro/blog/what-we-learned-from-34-clojure-in...

I therefore decided not to use any LLM for blogging again, and even though it takes a lot more time without one (I'm not a very motivated writer), I prefer to release something that I did rather than some LLM stuff that I wouldn't read myself.
It is:
- sneering
- a shallow dismissal (please address the content)
- curmudgeonly
- a tangential annoyance
All things explicitly discouraged in the site guidelines. [1]
Downvoting is the tool for items that you think don't belong on the front page. We don't need the same comment on every single article.
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
The same principle applies to submissions. If you couldn't be bothered to write it, don't ask me to read it. HN is for humans.
You can’t downvote submissions. That’s literally not a feature of the site. You can only flag submissions, if you have more than 31 karma.
Optimistically, I guess I can call myself some sort of live-and-let-live person.
Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.
Note: the guidelines are a living document that contain references to current AI tools.
> Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.
This is something worth saying about pure slop content. But the "charge" against the current item is that a reader got the feeling that an LLM was involved in the production of interesting content.
With enough eyeballs, all prose contains LLM tells.
We don't need to be told every time someone's personal AI detection algorithm flags. It's a cookie-banner comment: no new information for the reader, but a frustratingly predictable obstacle to scroll through.
But they won't do that, because deep down they feel shameful about it (as they should).
It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.
Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.
For this article the accusations are not about slop (which would waste your time) but about tell-tale signs of AI tone. The content is interesting, but you know someone has been doing heavy AI polishing, which gives articles a laborious tone and a tendency to produce a lot of words around a smaller amount of content (in other words, you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in).
Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.
This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.
You're fighting an uphill battle against the inherent tendency to produce more and longer text. There's also the regression to the mean problem, so you get less information (and more generic) even though the text is shorter.
Basically, it doesn't work
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Speaking of the HN guidelines, they also say this:
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
>> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
>> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
They don't. people. tangential.
There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.
There might be a market for your alternative though. Should be easy enough to build with Claude Code.
By asking AI to write the article for you, you're asserting that the subject matter is not interesting enough to be worth your time to write, so why would it be worth my time to read?
Sure, let me have a look.
He wrote 8 similarly lengthy blog posts in just 2 months:

https://www.juxt.pro/blog/from-specification-to-stress-test/
https://www.juxt.pro/blog/three-paradoxes/
https://www.juxt.pro/blog/what-outlasts-the-code/
https://www.juxt.pro/blog/composition-at-a-distance/
https://www.juxt.pro/blog/new-vocabulary-for-an-old-problem/
https://www.juxt.pro/blog/softwares-second-heroic-age/
https://www.juxt.pro/blog/capability-hyperinflation/
They contain a lot of classic LLMisms:
"Implementation is the shrinking currency. Not because it’s worthless, but because supply is exploding."
His past writing was much, much less wordy: https://henrygarner.com/
The short sentence construction is the most suspicious, but I actually don't see anything glaring. It normally jumps out and hits me in the face.
1. Use Short Sentences
Who gives a crap if it was written by an LLM. Read it or don’t read it. Your choice.
If it conveys the idea and you learn something new, then it’s mission accomplished.