Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.
For this article the accusations are not about slop (which will waste your time) but about tell-tale signs of AI tone. The content is interesting, but you can tell someone has done heavy AI polishing, which gives articles a laborious tone and tends to produce a lot of words around a small amount of content (in other words, you're reading an AI expansion of someone's shorter prompt, which contained the original information you're interested in).
Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.
This got me thinking: what if LLMs were used to do the opposite? To condense a long prompt into a short article? That takes more work, but it might make the outcome more enjoyable, since it packs more information into fewer words.
You're fighting an uphill battle against the model's inherent tendency to produce more and longer text. There's also the regression-to-the-mean problem: you get less (and more generic) information even though the text is shorter.
Basically, it doesn't work.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Speaking of the HN guidelines, they also say this:
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
>> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
>> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
They don't. People post shallow dismissals and complain about tangential annoyances all the time.
There is some real content in the haystack, but we almost need some kind of curator to find and surface it, rather than a voting system where most people vote on the title alone.
There might be a market for your alternative though. Should be easy enough to build with Claude Code.
By asking AI to write the article for you, you're asserting that the subject matter is not interesting enough to be worth your time to write, so why would it be worth my time to read?
Sure, let me have a look.
He wrote 8 similarly lengthy blog posts in just 2 months:
https://www.juxt.pro/blog/from-specification-to-stress-test/
https://www.juxt.pro/blog/three-paradoxes/
https://www.juxt.pro/blog/what-outlasts-the-code/
https://www.juxt.pro/blog/composition-at-a-distance/
https://www.juxt.pro/blog/new-vocabulary-for-an-old-problem/
https://www.juxt.pro/blog/softwares-second-heroic-age/
https://www.juxt.pro/blog/capability-hyperinflation/
They contain a lot of classic LLMisms:
"Implementation is the shrinking currency. Not because it’s worthless, but because supply is exploding."
His past writing was much, much less wordy: https://henrygarner.com/