upvote
How do you know? Some of the text has a slightly LLM-ish flavour to it (e.g. the numbered lists), but other than that I don't see any solid evidence of that.

Edit: I looked into it a bit and things seem to check out: this person has scuba diving certifications on their LinkedIn and the site seems real and high-effort. While I also don't have solid proof that it's not AI-generated, making accusations like this based on no evidence doesn't seem good at all.

reply
Not them but the formatting screams LLM to me. Random "bolding" (rendered on this website as blue text) of phrases, the heading layout, the lists at the end (bullet point followed by bolded text), common repeats of LLM-isms like "A. Not B". None of these alone prove it but combined they provide strong evidence.

You can also see the format and pacing differs greatly from posts on their blog made before LLMs were mainstream, e.g. https://dixken.de/blog/monitoring-dremel-digilab-3d45

While I wouldn't go so far as to say the post is entirely made up (it's possible the underlying story is true) - I would say that it's very likely that OP used an LLM to edit/write the post.

reply
Hang on, they used a computer to help them create the post content?! Outrageous.
reply
The HN comment section's new favourite sport: trying to guess whether an article was generated by an LLM. It's completely pointless. Why not focus on what's being said instead?
reply
I thought the same thing. With the rate LLMs are improving, it's not going to be too much longer before no one can tell.

I also enjoy all the "vibes" people list out for why they can tell, as though there was any rhyme or reason to what they're saying. Models change and adapt daily so the "heading structure" or "numbered list" ideas become outdated as you're typing them.

reply
[flagged]
reply
> This is an LLM-generated article, for anyone who might wish to save the "15 min read" labelled at the top. Recounts an entirely plausible but possibly completely made up narrative of incompetent IT, and contains no real substance.

Nothing in the original message refers to it being clickbait; the core complaint is the LLM-like tone and the lack of substance, which, ironically, you also just threw out there without references.

> What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?

It's alright as long as it's not based on faith or guesswork.

reply
It is not based on guesswork. For whatever it's worth, I have gotten 7 LLM accounts banned from HN in the past week based on accurately detecting and reporting them to moderation[1]. Many of these accounts had anywhere from dozens to 100 upvotes, some with posts voted to the top of their threads that escaped detection by others. I have not once misidentified and reported an account that was genuinely human. I am aware that other people have poorly-tuned heuristics and make false accusations, but it is possible to build the skill to detect LLM output reliably, and I have done so. In the end, it is up to you whether you believe me, but I am simply trying to offer a warning for people who dislike reading generated material, nothing more.

[1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.

reply
Congrats, and thanks for your work, but you should be aware that HN comments are completely different from articles. What makes you think the skills/automations required to identify LLM generated HN comments will work seamlessly with submitted articles? You have to do a statistical analysis of this, otherwise it's just guesswork.

You also have to take into account that the medium is the message[1]. In a nutshell, the more people read LLM-generated posts and interact with chatbots, the higher the influence of LLM style on their own writing -- the whole "delve" thing comes to mind, as do double dashes. So even if you have a machine that correctly identifies LLM-generated posts today, you can't be sure it'll keep working.

[1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf

reply
Those are a lot of words to say you guessed. And the banning claim is nice, I guess, but pretty meaningless. Does moderation really always report back to you when you make such an accusation? Who's to say all the banned accounts were even LLMs? You know what would happen if I got banned because someone accused me of being an LLM? Nothing. I'd take it as a sign to do other things.

Let's say you are the LLM-detecting genius you paint yourself to be. Well, guess what? You're human and you're going to make mistakes, if you haven't made a bunch of them already. So if you have nothing better to add to a post than this guess, you probably shouldn't say anything at all. Like you said, it's not even against the rules.

reply
What is the evidence that the content is entirely LLM-generated, rather than just LLM-assisted writing of a genuine story?
reply
> contains no real substance.

The same could be said of the accusation being levied here.

reply
You know, I had a thoughtful comment written in response to this that wouldn't post, because your comment got flagged to death before I could submit it!

Your firebrand attitude is doing a disservice to everyone who takes the vibecraft of vibe hunting seriously!

The intended audience doesn’t even care that this is LLM-assisted writing. Whether the narrative is affected by AI is second to the technical details. This is technical documentation communicated through a narrative, not personal narrative about someone’s experience with a technical problem. There’s a difference!

What are you in this for?!

reply
Proof?
reply
Can you share how you confirmed this is LLM-generated? I review vulnerability reports submitted by the general public, and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), hence why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts.

I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.

(my experience is roughly a decade in cybersecurity and risk management, ymmv)

reply
The headers alone are a huge giveaway. It spams repetitive, sensational writing tropes like "No X. No Y. No Z." and "X. Not Y" numerous times. Incoherent usage of bold type throughout the article. Lack of any actually verifiable concrete details. The giant list of bullet points at the end that reads exactly like helpful LLM guidance. There are many signals throughout the entire piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed. Just giving a heads-up for people who prefer not to read generated articles.

Regarding your allergy, my best guess is that it is generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing that LLMs excel at. I do agree it is very plausible.

reply
I wonder if there's any probabilistic analyser that could confirm that the article is generated, or show which parts might have been generated
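A toy version of such an analyser is easy to sketch: count weighted occurrences of the surface tells people cite in this thread and normalise per word. To be clear, the patterns and weights below are invented for illustration only; this is nothing like a calibrated classifier, and the false-positive problem others mention here applies in full.

```python
import re

# Toy heuristic inspired by tells mentioned in this thread.
# The patterns and weights are made up for illustration, not validated.
TELLS = [
    (re.compile(r"\bdelve\b", re.IGNORECASE), 1.0),   # classic LLM vocabulary
    (re.compile(r"(?:\bNo \w+\.\s*){2,}"), 2.0),      # "No X. No Y. No Z."
    (re.compile(r"\*\*[^*\n]+\*\*"), 0.5),            # heavy markdown bolding
]

def tell_score(text: str) -> float:
    """Weighted tell count per 100 words. Higher = more LLM-ish (crudely)."""
    words = max(len(text.split()), 1)
    raw = sum(weight * len(pattern.findall(text)) for pattern, weight in TELLS)
    return 100.0 * raw / words
```

A sentence like "No logs. No alerts. No audits." scores nonzero while plain prose scores zero, but that is exactly the kind of brittle heuristic that stops working as both models and human writing styles drift.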
reply
Pangram[0] thinks the closing part is AI-generated but the opening paragraphs are human. Certainly the closing paragraphs have a bit of an LLM flavour (e.g. a header titled "The Pattern").

[0] https://www.pangram.com

reply
There are no automated AI detectors that work. False positives and false negatives are both common, and the false positives particularly render them incredibly dangerous to use. Just like LLMs have not actually replaced competent engineers working on real software despite all the hysteria about them doing so, they also can't automate detection, and it is possible to build up stronger heuristics as a human. I am fully confident and would place a large sum of money on this article being LLM-generated if we could verify the bet, but we can't, so you'll just have to take my word for it, or not.
reply
I'm very sensitive to this but disagree vehemently.

I saw one or two sigils (e.g. a little eager to jump to lists).

It certainly has real substance and detail.

It's not, like, generic LinkedIn post quality.

You could tl;dr it to "autoincrementing user ids and a default password set = vulnerability, and the company responded poorly." and react as "Jeez, what a waste of time, I've heard 1000 of these stories."

I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.

But, it fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby Dick is a book about some guy who wants revenge, Hamlet is about a king who dies.

Additionally, I don't think what people will interpret from what you wrote is necessarily what you meant. Note the other reply as of this writing: you're so confident and dismissive that they assume you're saying the article should be removed from HN.

reply
[flagged]
reply