EDIT to correct: most are not [flagged], but [dead] anyway, so probably manual moderator action or an automated anti-bot measure.
That's why. Boring, bland, etc. That account's M.O. is basically "write a paragraph that says nothing." Fwiw, I do think AI can be indistinguishable from dumb, boring people, but usually those kinds of people won't be on HN.
I agree it doesn't seem obviously AI. The early comments are all in the same writing style and smell human. Lots of strong opinions e.g.
"logged in after years away and had basically the same experience. the feed is just AI slop and engagement bait now, none of it from people I actually followed." [about Facebook]
HN has got a big problem with silently shadowbanning accounts for no obvious reason. Whether it's an attempt to fight bots gone wrong or something else isn't clear. By the very nature of shadowbanning, there is no feedback loop that can correct mistakes.
I don't think it's clear at all why people do this. I suspect a large amount of it, at least on a site like HN, is just hapless morons who think it's "cool".
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I'll actually post a comment or question and I'll get a reply with a bit of a paragraph of what feels like a very "off" (not 'wrong' but strangely vague) summary of the topic ... and then maybe an observation or pointed agenda to push, but almost strangely disconnected from what I said.
One of the challenges is that, yeah, regular users already misread each other's meaning: people don't read carefully, there are language barriers, and so on. Yet the volume of posts I see where one user REALLY isn't responding to the other person seems awfully high these days.
I wonder if neural networks are inherently biased, but in blind spots, and whether that applies to both natural and artificial ones. It may be that to approximate neutrality, we or our machines have to leave behind the form of intelligence that depends on intrinsically biased weights and instead logically derive all values from first principles. I have low confidence that AIs can accomplish that any time soon, and zero confidence that natural intelligence can. And it's difficult to see how first principles regarding human values can be neutral.
I'm also skeptical that succeeding at becoming unbiased is a solution: while neutrality may be an epistemic advance, it also degrades social cohesion. Neutrality looks like rationality, but bias may be Chesterton's Fence, and we should be very careful about tearing it down. Maybe it's a blessing that we can't.
https://news.ycombinator.com/item?id=45322362
> First impression: I need to dive into this hackernews reply mockup thing thoroughly without any fluff or self-promotion. My persona should be ..., energetic with health/tech insights but casual and relatable.
> Looking at the constraints: short, punchy between 50-80 characters total—probably multiple one-sentence paragraphs here to fit that brevity while keeping it engaging.
> User specified avoiding "Hey" or "absolutely."
Lots more in its other comments (you need [showdead] on).
It's not just clever—it's devious!
Is it ideological?
Is it product marketing in those relevant threads where someone is showcasing?
Or is it pure technical testing, playing around?
So far it hasn't happened here, but we'll see!
Incidentally, how much do they pay for a HN account that is a few years old and accumulated a few thousand Internet points?
Asking for a friend.
My relationship with writing, while improved, has been a difficult one. Part of me has always felt that there was a gap in my writing education. The choices other writers seem to make intuitively - sentence structure, word choice, and expression of ideas - do not come naturally to me. It feels like everyone else received the instructions and I missed that lesson.
The result was a sense of unequal skill. Not because my ideas are any less deserving, but because my ability to articulate them doesn't do them justice. The conceit is: "If I could write better, more people would agree with me." It's entirely based on ego and fear of rejection.
Eventually, I learned that no matter how polished my writing is, even restructured by LLMs, it would never give me what I craved. At that moment, the separation of writer and words widened to the point where it wasn't about me anymore and more about them, the readers. That distance made all the difference, and now I write in my own voice, however awkward that may be.
Because it looks completely adequate to me. Maybe you're not the bad writer you think you are.
Slashdot's system was superior because mod points were finite and randomly dispensed. This entropy discouraged abuse by design—as opposed to making it a key feature of the site.
It's the Achilles' heel of Reddit and every site that attempts to emulate it.
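For anyone who never used Slashdot, the mechanism was roughly: eligible users occasionally received a small, expiring allotment of mod points at random, so nobody could moderate at will. A toy sketch of that idea (the fraction and point counts here are illustrative, not Slashdot's actual numbers):

```python
import random

def dispense_mod_points(users, fraction=0.1, points=5, seed=None):
    """Hand a small, finite allotment of mod points to a random
    subset of eligible users (Slashdot-style; numbers illustrative).

    Because grants are scarce and unpredictable, no single user can
    count on having moderation power at any given moment.
    """
    rng = random.Random(seed)
    chosen = rng.sample(users, max(1, int(len(users) * fraction)))
    return {u: points for u in chosen}

# 100 eligible users, 10% get points this round
grants = dispense_mod_points([f"user{i}" for i in range(100)], seed=42)
print(len(grants))  # 10 users got points this round
```

The scarcity and randomness are the whole point: you can't farm moderation power, because you can't predict or influence when you'll receive it.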
I've been advocating for a while now that HN could use meta-moderation at least on flagging activity, so it can stop giving flagging powers to users who are using it for reasons other than flagging rulebreaking.
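A meta-moderation pass on flagging could be sketched like this (everything here is hypothetical, purely to illustrate the idea): score each user's past flags against whether independent reviewers agreed the flagged post actually broke the rules, and quietly revoke flagging powers from chronic outliers.

```python
from dataclasses import dataclass

@dataclass
class Flagger:
    user_id: str
    flags_cast: int = 0
    flags_upheld: int = 0  # reviewers agreed the flag was justified

    @property
    def accuracy(self) -> float:
        # With no history, assume good faith (accuracy 1.0).
        return self.flags_upheld / self.flags_cast if self.flags_cast else 1.0

def can_flag(f: Flagger, min_sample: int = 20, min_accuracy: float = 0.5) -> bool:
    """Keep flagging powers until a user has a real track record
    of flags that reviewers mostly disagree with."""
    if f.flags_cast < min_sample:
        return True  # not enough data yet
    return f.accuracy >= min_accuracy

good = Flagger("alice", flags_cast=30, flags_upheld=24)
bad = Flagger("bob", flags_cast=30, flags_upheld=6)
print(can_flag(good), can_flag(bad))  # True False
```

The thresholds would need tuning in practice, and since revocation could stay silent (like shadowbanning), abusive flaggers wouldn't even know to adapt.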
Sometimes there is no clear explanation for fake account registration. Perhaps the accounts were registered to be actively used in the future: most fraud-prevention techniques target new account registrations, so old, aged accounts won't raise suspicion.
Slightly off-topic, but there are relatively new "services" that offer native brand mentions in Reddit comments. Perhaps this will soon be available for HN as well, and warming up accounts might be needed for that purpose.
Other accounts might be aging in order to obscure eventual coordinated voting or commenting rings. It's harder to identify sockpuppet accounts when they've been dutifully commenting slop for months before they start astroturfing for the chosen topic.
They don't have anything worth saying but want people to think they do
To reverse the argument: it would be amateurish and plain stupid to ignore it. The barrier to entry is very low. Politics, ads, mildly swaying opinions about some recent clusterfuck by popular megacorp XYZ, just spying on people: you have it all here.
I don't know how dang and crew protect against this; I'd expect some level of success, but 100% seems unrealistic. Slow and steady mild infiltration, either by AI bots or by humans from the GRU and similar orgs who have this literally in their job description.
Oh, would you look at that?
I love how the bot forgot to read CLAUDE.md or whatever persona it set up (e.g., "make me text all lowercase, use -- instead of em dashes pleaseeee") for this single comment mixed in with the other ones:
https://news.ycombinator.com/item?id=47132431
Sadly, I think that bot comment without the 'snowhale' persona filter applied is what a lot of people here still think every bot is going to look and sound like. The number of people I've seen on here getting tricked by the subtler ones and interacting with them has been a bit worrisome.
This loss of trust is getting tiresome. Depending on context, we've likely all wondered whether something is astroturfed, but with the frequency increase from LLMs, it's never really possible to not have it somewhere in mind.
To date, I've never used an LLM directly. I find them deeply repellent, and I've yet to be convinced that there exists a sufficiently tuned prompt that will make me not hate their literally 'mid' output.
Loss of trust though, that's a societal issue of this gilded age of grifters and scammers. Until we have a system of accountability and consequences for serial lying, we're gonna drown in this shit. LLMs are jet fuel for our existing environment of impunity.