Edit: I looked into it a bit and things seem to check out: this person has scuba diving certifications on their LinkedIn, and the site seems real and high-effort. While I also don't have solid proof that it's not AI-generated either, making accusations like this based on no evidence doesn't seem good at all.
You can also see the format and pacing differs greatly from posts on their blog made before LLMs were mainstream, e.g. https://dixken.de/blog/monitoring-dremel-digilab-3d45
While I wouldn't go so far as to say the post is entirely made up (it's possible the underlying story is true), I would say it's very likely that OP used an LLM to edit or write the post.
I also enjoy all the "vibes" people list out for why they can tell, as though there were any rhyme or reason to what they're saying. Models change and adapt daily, so the "heading structure" or "numbered list" ideas become outdated as you're typing them.
Nothing in the original message refers to it being clickbait; the core complaint is the LLM-like tone and the lack of substance, which you also just threw in there without references, ironically.
> What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?
It's alright as long as it's not based on faith or guesswork.
[1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.
You also have to take into account that the medium is the message[1]. In a nutshell, the more people read LLM-generated posts and interact with chatbots, the greater the influence of LLM style on their own writing -- the whole "delve" thing comes to mind, and double dashes. So even if you had a machine that correctly identified LLM-generated posts today, you couldn't be sure it would keep working.
[1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf
Let's say you are the LLM-detecting genius you paint yourself to be. Well, guess what? You're human, and you're going to make mistakes, if you haven't made a bunch of them already. So if you have nothing better to add to a post than this kind of guess, you probably shouldn't say anything at all. Like you said, it's not even against the rules.
The same could be said of the accusation being levied here.
Your firebrand attitude is doing a disservice to everyone who takes vibe hunting vibecraft seriously!
The intended audience doesn’t even care that this is LLM-assisted writing. Whether the narrative is affected by AI is second to the technical details. This is technical documentation communicated through a narrative, not personal narrative about someone’s experience with a technical problem. There’s a difference!
What are you in this for?!
I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.
(my experience is roughly a decade in cybersecurity and risk management, ymmv)
Regarding your allergy, my best guess is that it is generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing that LLMs excel at. I do agree it is very plausible.
I saw one or two sigils (e.g., a little eagerness to jump to lists).
It certainly has real substance and detail.
It's not, like, generic LinkedIn post quality.
You could tl;dr it to "autoincrementing user ids and a default password set = vulnerability, and the company responded poorly." and react as "Jeez, what a waste of time, I've heard 1000 of these stories."
I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.
But its fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby-Dick is a book about some guy who wants revenge; Hamlet is about a king who dies.
Additionally, I don't think what people will take from what you wrote is necessarily what you meant. Note the other reply at the time of writing: you're so confident and dismissive that they assume you're saying the article should be removed from HN.