I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too fake to be believable.
However, it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters kept giving advice on the situation. Some commenters said they had seen the notice that it was fake but kept arguing about it anyway.
This makes a feature of Reddit very clear: The truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, be angry about.
In retrospect it's obvious, given that misinfo posts were the easiest way to karma-farm for years, even before AI.
https://news.ycombinator.com/item?id=47913650
It had 639 comments and 866 upvotes. And that's not a one-off.
If you like some authors or journalists or bloggers, go see who they read (trust me, they all say who they follow in their own niches) and build from there. You can develop quite a good RSS feed following this method in an hour, tops.
I once made a rather boisterously-argued comment on a political issue I'm passionate about, and I realised that I'd made a serious error of reading comprehension when it came to my opponent's argument. I apologised to them for being an abrasive arse over my own mistake, then edited my comment to say that I was mistaken.
My incorrect comment, which literally said at the bottom that it was incorrect, continued to be upvoted, while my opponent, who had made the stronger argument, continued to be downvoted.
That's 90% of current Facebook pages and groups.
After a while I had to wade through all sorts of nonsense to get to the posts I actually wanted to see, and even later Facebook stopped putting posts from people I follow in my feed. It was 100% garbage. I can't imagine why anyone uses Facebook for anything other than the marketplace.
That is hard work. I have a few friends in the trans world and occasionally interact with relevant groups on FB. The attention algorithm thinks that this means I might want to see random posts from pricks who literally want to see people like my friends herded up into concentration camps. Most of it is far less extreme than that, but the system is definitely optimised in favour of rage-bait because that ticks up the engagement metrics.
I'm active in a number of online communities that are doing just fine, but the difference is those all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too, but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, etc., and when much of the web of relationships extends back into real-life activity.
But I agree the golden age of easy anonymous connections online has ended.
I think the attestation approach works best if different offences carry different punishments. E.g. inviting someone who turns out to be a turd shouldn't get the inviter banned, but inviting someone who goes full AI spam should.
If you weren't a bellend on what.cd you got access to certain forums where there were even more and better private trackers. Once you built that trust there were social privileges, but if you abused that trust you got rightfully banned.
If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.
Are there successful newer designs, which avoid this problem?
The only one of these I've seen that really worked was the Debian developer version: you had to meet another Debian developer IRL, prove your identity, and only then could you get the key signed and join the club.
For Debian-style applications that are 100% about openness and 0% about secrecy, sure.
But if you want to secure communications between pro-democracy activists in China, or you're a Snowden-like whistleblower wanting to securely communicate with journalists - y'all probably don't want to be vouching for one another's keys.
It's probably better to call this something like vouching and leave "attestation" for the contemptible power grab by megacorps, delenda est. Using the same word for a useful thing and a completely unrelated vile thing only benefits the villain.
I want to create a community for immigrants. How would I make it welcoming to recent immigrants for whom no one can vouch?
A web of trust is a wonderful tool, but it's exclusive by design. This is a problem for some communities, even though it makes others much better.
Being welcoming to every random person is by definition not a community, it's a free-for-all mess.
A community means communal interests and values; it's in the name. And to guard those you can't just accept everyone without vetting them. That's how it turns to shit: spammers and trolls and people who want to hijack it and don't share the original cause/spirit. It has happened to forum after forum...
In the end, you need to filter people at the door. You need to keep unpleasant people out and shut down bad behaviour.
I figured that a paid, motivated moderator could be better than a web of trust for this demographic. Maybe enforce a stricter moderation standard on unvetted members. At my scale it might work.
Or have a two-stage process: run very public, very open events that anyone can sign up to and attend. And then invite specific people that you meet at those events that look like a good fit for your community to your private, community-only event.
The closest analog I can think of is community-run bike repair workshops. Some people are deeply involved in them, and others just have a flat tire.
The closest digital equivalent is the forums of old.
This preserves anonymity for the latter because they're only known to be "related" to the former, which is only a vague hint at their real identity (e.g. they could've met in another online community). And the former don't care; if they want, they can vouch for an anonymous alt.
Spot the fed
It still happens more informally today, of course, but it used to be a pretty standard (if unspoken) part of how a lot of WASPy organizations operated, to a greater or lesser degree.
Also, I do feel that GP's take is hyperbolic even in the twentieth century. My own background is mostly German immigrants, of various religions and non-religion, and the way I've been told the story none of them faced significant resistance as they moved upward in the various academic and corporate institutions of their choices. These included NASA executives, department heads, etc.
Note that in balancing GP's accusation against WASPs I'm not attempting to address the related, but not precisely complementary, phenomenon of perpetually marginalized groupings.
This seems self evident to me too.
It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
Leave them on the device, authorize the device to validate before age inappropriate content appears.
Website wants to know your age? Your face and fingerprint support your attestation signed by a trusted party.
Can it be tricked potentially? Sure, but then you’re probably a super genius kid and not the reason that these laws were created (as if).
Don’t let anyone tell you anonymity must die for safety to exist.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
The problem here is that the premise is the error. "Prove your ID" is the thing to be prevented. It's the privacy invasion. What people actually want is a disjoint set of only marginally related things:
1) They want a way to rate limit something. IDs do this poorly anyway; everyone has one, so criminal organizations with a botnet just compromise the IDs of innocent people -- and then the innocent are the ones who get banned. The best way to do this one would be to have an anonymous way for ordinary people to pay a nominal fee. A $5 one-time fee to create an account is nothing to most ordinary people but a major expense to spammers who have 10,000 of their accounts banned every day. The ugly hack for not having this is proof of work (see the sketch after this list), which kinda sorta works but not as well: with the fee, $50,000/day in losses is cash money to the attacker that in turn funds the service's anti-spam team, but with proof of work you're back to botnets being useful, because burning up some compromised victim's electricity costs the attacker at best the opportunity cost of not mining cryptocurrency or similar, which isn't nearly as much. It would be great to solve this one (properly anonymous, easy-to-use small payments), but the state of the law is a significant impediment, so you either need to get some reform through there or come up with a creative way to do it under the existing rules.
2) You want to know if someone is e.g. over 18. This is the one where people keep pointing back to government IDs, but you only need one piece of information for this. You don't need their name or their picture; you don't even need their exact birthdate. Since people get older over time rather than younger, all you need to know is whether they've ever been over 18, since in that case they always will be. Which means you can just issue an "over 18" digital signature -- the same signature for everyone, so it's provably impossible to tie it to a specific person -- and give a copy to anyone who is over 18 (a sketch of this also follows the list). Maybe you change the signature e.g. once a day and unconditionally (whether they require it that day or not) email all the adults a new copy, but again they all get the same indistinguishable current signature. Then there are no timing attacks, because the new signature comes to everyone as an unconditional push and is waiting in their inbox rather than arriving only when they want to use it for something, but kids only have it if an adult is giving it to them every day. The latter is true for basically any age verification system -- if an adult with an ID wants to lend it to you then you can get in.
3) You want to know if the person accessing some account is the same person who created it or is otherwise authorized to use it. This is the traditional use of IDs, e.g. you go to the bank and want to withdraw some cash so you need a bank card or government ID to prove you're the account holder. But this is the problem which is already long-solved on the internet. The user has a username and password, TOTP, etc. and then the service can tell if they're authorized to use the account. It's why you don't need government ID on the internet -- user accounts do the thing it used to do only they don't force you to tie all your accounts together against a single name, which is a feature. The only people who want to prevent this are the surveillance apparatchiks who are trying to take that feature away.
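For what it's worth, the proof-of-work hack mentioned in 1) is small enough to sketch. Here's a hashcash-style toy in Python; the difficulty constant and challenge format are illustrative assumptions, not any real service's scheme:

    import hashlib
    import itertools
    import os

    DIFFICULTY_BITS = 20  # illustrative; roughly a million hashes of client work on average

    def new_challenge() -> str:
        # Server side: a random challenge bound to the signup attempt.
        return os.urandom(16).hex()

    def solve(challenge: str) -> int:
        # Client side: burn CPU until the hash has DIFFICULTY_BITS leading zero bits.
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
                return nonce

    def verify(challenge: str, nonce: int) -> bool:
        # Server side: one hash to check what cost the client ~2^20 hashes to find.
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

The asymmetry is the whole point: verification is one hash, solving is about a million, and the attacker's only cost is electricity, which is exactly the weakness described above.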
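And a minimal sketch of the "same over-18 signature for everyone" idea from 2), using an Ed25519 key from the cryptography package purely for illustration; the issuer, the message format, and the daily email distribution are all assumptions taken from the comment above, not a real system:

    from datetime import date
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuer side (some trusted party); key generation shown inline for brevity.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def todays_message() -> bytes:
        # Identical message for every adult, so every copy of the token is the same bytes.
        return f"over-18:{date.today().isoformat()}".encode()

    def issue_daily_token() -> bytes:
        # Pushed unconditionally to every verified adult (e.g. by email) each day.
        return issuer_key.sign(todays_message())

    def is_adult(token: bytes) -> bool:
        # Relying site: checks the token against the issuer's public key, learns nothing else.
        try:
            issuer_pub.verify(token, todays_message())
            return True
        except InvalidSignature:
            return False

Since Ed25519 signatures are deterministic, every adult really does hold byte-identical tokens, so presenting one proves only "holds today's adult token" and nothing about who you are.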
I have a strong preference for remaining anonymous, or at least making it a reasonably high bar to tie my online identity to my personal identity.
I would love to be involved in helping to design a sort of "human verified" badge that doesn't make it possible, or at least not easy, for everyone to find your real identity.
I've been thinking about it a bunch and it seems like a really interesting problem. Difficult though.
I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, though.
I.e.: you use this network as your auth provider, and you get the user's real name, handle, and network ID, as well as the IDs (only IDs, no extra info) of first- through third-level connections.
The user is incentivized to connect (only) people that they know in person, and this forms a layer of trust. Downstream reports can break a branch or have a network effect upstream. By connecting an account to another account, you attest that "this is a real person, whom I have met in real life." Using a bot for anything associated with the account is forbidden, with the exception of explicit API access to downstream services defined by those services.
I think it could work, but you'd have to charge a modest, but not overbearing fee to use the auth provider... say $100/site/year for an app to use this for user authentication.
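To make the shape of that concrete, here's a rough sketch of what such an auth assertion and a naive trust heuristic could look like; every field name and weight is made up for illustration, not a spec:

    from dataclasses import dataclass, field

    @dataclass
    class AuthAssertion:
        network_id: str                  # stable ID inside the trust network
        real_name: str
        handle: str
        # Only IDs of connections, no profile data, keyed by degree of separation (1-3).
        connections: dict[int, list[str]] = field(default_factory=dict)

    def vouch_weight(assertion: AuthAssertion) -> float:
        # One possible heuristic a relying site could apply: closer connections count more.
        weights = {1: 1.0, 2: 0.25, 3: 0.05}
        return sum(len(ids) * weights[degree]
                   for degree, ids in assertion.connections.items()
                   if degree in weights)

A relying site would only ever see an assertion like this, never the graph itself; running and policing the graph is presumably where the fee would go.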
Personally I think it should be a government provided service, not something with a sign up fee. There's actually no point at all in building this if people have to pay to use it, because they won't
My point was to create something outside a specific government, with very limited information... that would require a fee or some kind of funding.
I don't think I'd trust the US/China or other bodies to trust each other for such a use case.
Ideally, yes
But you're right, this isn't likely to happen in real life and I'm just being wishful. Instead we're going to get the much shittier capitalist version of this where every company and government spies on us and we have no expectation of privacy online at all
I suspect it will be a long process: first there will be governments that force people to use ID, but that will be abused, hacked, and will considerably restrict freedom of speech, so after that phase people will start to create better IDs.
The problem is really pretty simple: You need an authoritative source to say "This person is real" - and a way for that source to actually verify you're a person - but that source can be corrupted and hacked. Some people will say "Crypto!" but money != people, so I don't see how that works. Perhaps the creation of some neutral non-government non-profit entity is the way, but I can see lots of problems there too, and it will probably cost money to verify someone is real - where does that come from?
Anyway, good luck on your work!
Does that even accomplish much? It may cut down on mass fake account creation. But real people can then create an authenticated account and use an LLM to post as an authenticated real person.
However, I might be not typical in that I don't look at vote scores very often.
They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web
What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.
So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?
I guess you could have an eyeball scanner at your computer that only sends out a binary "yes this person is human" to the system every time they log in. That sounds expensive and hackable and just janky though.
Honestly I think "this person is real" is the wrong goal. You'll never accomplish it without a centralized state or some biometric monstrosity like that thing Sam Altman created.
Just settle for stopping spam.
Also, what happens to someone whose credentials are compromised? Are you going to ban the credentials of the victim rather than the perpetrator?
I'm happy to verify my identity as an honest-to-god sack of meat if it's done in a privacy-protecting way.
That probably is where things are gonna go, in the long run. Too hard to stop bots otherwise.
And by small, I mean: This whole trusted group could fit into one quiet discord channel. This doesn't seem to be big enough to be useful.
However, if it extends beyond that, then things get dicier: Suppose Bill trusts me, as well as those that I myself trust. Bill does this in order to make his web of trust something big enough to be useful.
Now, suppose I start trusting bots -- maybe incidentally, or maybe maliciously. However I do that, this means that Bill now has bots in his web of trust as well.
And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.
---
It all seems kind of self-defeating, to me. The web is either too small to be useful, or it includes bots.
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.
If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new well-behaved bonafide human members do.
Except eventually it will also weigh down those users who supported <XYZ political stance>
I’m not sure if that would work for account deletions though.
Let's put aside whether it will be the end of all privacy as we know it (I'm not sure if I personally think it's a good idea), but isn't Sam Altman's World eye ID thing supposed to do that? (https://world.org).
How does it work (like OpenId)? Do I have an orb on my desk, or some sort of phone app? I still want to use my desktop to login to HN.
Would it stop this sort of thing: get a human ID, paste it into .env, so agents can use it?
even worse many of them are just plain vocal about their disdain for people in general.
at least from what i’m seeing, people are starting to walk away from online at an increasing rate so i definitely don’t see widespread adoption of his creepy eye thing.
I have no idea about the eye thing taking off. But I think your comment is very HN and a bit out-of-touch with regular people. What "you're seeing" is a bubble and not representative of the general population. The eye thing is a slow frog boil and it will be commonplace before you can blink.
https://github.com/Exocija/ZetaLib/blob/main/The%20Gay%20Jai...
How? I have an identity. A state driver's license, birth certificate, social security number. I've even considered getting a federal license before, never bit the bullet. If I wanted to run a bot, what stops me from giving it my identity? How do I prove I'm really me (a "me" exists, that's provable), and not something I'm letting pretend to be me? You can't even demand that I do that, because it's essentially impossible.
Is there even some totalitarian scheme that, if brutal and homicidal enough, could manage to prevent this from happening (even partially)?
I'm limited to a single identity only as a resource constraint. Others more wealthy than I (corporations or ad hoc criminal enterprises) could harvest thousands of real identities and use those, consensually or through identity theft. The only thing slowing it down at the moment are quickly eroding social norms (and, as you point out, maybe they're not doing that and it's not even slow at the moment).
FTFY.
There isn't a clear solution. And if there is, this ain't it.
China gets away with this shit because they've been conditioning their population for 60 years... everyone's eased into it. Elsewhere, not even slightly so.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
I personally can't wait for a mechanism to kill 99% of bot traffic.
Those sorts of places were always the only places with reliably good communities.
The better fix would be to make the support for multiple accounts in the reddit app not so incredibly-shitty, where you're basically logging out and logging back in. Instead, just tell it "posts to this sub use this account, posts to that sub use that account", etc.
People were finding each other online when they couldn’t in person.
This isn’t to say I disagree with you. Just expressing sorrow over the loss of such a grand moment in our shared history.
As to compromising material for bribery, that can be collected in so many different ways, and things like email or messaging or tiktok videos are probably far more interesting, reddit is not particularly useful for that.
---
One of the subs I was visiting had some drama happening in ~2020 around supposed negative community behavior: people were criticizing creative works that were uploaded, which, I personally agree, weren't the best. The mod team decided that was a big no-no and that this place had to be inclusive, welcoming and filled with positivity - so they started banning those who dared to criticize. Fast forward to now: there are only screenshots uploaded by bots, and comments made by bots who also include screenshots along with 2 sentences in every thread.
If these platforms had to listen to "their customers" (here comes the inevitable comment about how users aren't customers; yes, I know)? They'd all be fired. They'd have to find a new job. They all act in incredibly insulting ways with a too big to fail attitude
Mods were rightfully upset because they were losing control of their communities when reddit preferred only caring about their upcoming IPO.
I honestly don't think you could remake Reddit if you did everything exactly the same starting in 2016. Corporate social media has definitely ruined the individual aspect of social media, and that is unlikely to return.
No one wants to share in a place with a bunch of spammers.
The protest came after that so the timeline is not quite correct.
That's antiproductive, in that it promotes survival of only the worst bots.
I'd like to flip the switches that absolutely end poverty globally, absolutely eliminate guns from the US, and absolutely remove bots from Reddit.
If you can show me where these switches are located, I'll cheerfully go flip them and accept full responsibility for the results.
(Over here where things don't work in absolutes: Some of those bots that got killed were countermeasures to help keep the bad, well-funded bots at bay.)
It was a midpoint between Facebook and Geocities, it got people to build communities within its walled garden, but it was always going to betray them for cash.
Would be super fascinating to watch play out. I grew up before the internet so, historically, I know how to seek out external communities, but by early high school I was deeply entrenched in online life - so I'm very rusty with finding new IRL clubs, cliques, etc. Fortunately my life is full of many friends and I go out frequently, regardless. For those younger people that never had life without the internet, I wish them luck on their search but at the same time I'm very curious to witness their journey.
If Russia is willing to spend cash like that, then of course they're willing to run massive bot farms to pollute any forums they can. I'd be shocked if the US was not doing the same in any way they can. You have to ask why Trump killed Radio Free America as well when it was clearly not a big expense.
Not sure how this relates to the subject in a direct way. Radio Free America was an outlet explicitly created and utilized to spread US propaganda, but kinda sorta barely disguised as a journalistic enterprise (not really; if you were listening to RFA you knew what you were listening to). Shutting it down seems to be a counterpoint to all of the covert participation of US intelligence on the web, which has done nothing but escalate.
The obvious answer to that question is "because he's a Russian asset". But that doesn't mean the obvious answer is also the correct one.
IMHO, we're seeing another and much more concerning trend at play here... the utter and complete rejection of anything but violence by the far-right. Diplomacy? Development aid? Cultural exchange? All sorts of soft power have been under attack for decades now, and not just by the far-right but (especially when it comes to development aid) also by mainstream centrist parties across the Western world. And it's always pseudo-masculine / "strongman" BS backing the sentiment - Bernd Höcke, German AfD mastermind, comes to my mind with "we have to rediscover our masculinity" [1], so do Hungary's Viktor Orban and his denouncement of LGBT or Trump's entire Œuvre.
I'm not saying that violence or at least being prepared, ready and willing to use it is automatically bad. Far from it. But all the various forms of "soft power"? They have a lot of value, value that the far-right is all too willing to just burn for entertainment.
[1] https://blogs.taz.de/zeitlupe/2019/03/24/die-auferstehung-de...
No matter where you look, the far-right kills and maims substantially more people than the far-left does.
If AI is being used in these areas, it is less an attempt to manipulate than an attempt to just create noise and engender distrust in what they hear.
Not too dissimilar to people bot-leveling in MMOs to sell the accounts.
Account farmers: these can be people in 3rd-world countries, automated or not. They can be using hundreds of mobile phones to create accounts and do daily activity to make the accounts look legitimate. While they're building an activity history, they are also being paid to like/follow/interact with content.
Advertisers: these are bought accounts that are used to post inauthentic reviews of their service, inject them into discussion, and do PR.
Sloppers: people who build AI pipelines and then just pump the most dogshit content directly into a platform trying to make any amount of money.
Nation State propaganda arms: These accounts build a narrative character and then join discussion pushing a certain narrative, boost real content creators who share their message and bog down discussion.
That, and probably political astroturfing. Before every election my local subreddit sees a surge of crime stories. Go figure.
It's actively encouraged by some of the platforms too. In Gmail and Google Docs, you have incessant AI prompts along the lines of "help me write this". I think LinkedIn does the same.
They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.
Can't really say they are doing anything wrong; maybe I too would have? ... Just that, at large scale, it doesn't work.
Plain advertising, governments' propaganda, political propaganda for one group or another to shift public opinion (it's done on TV networks, why would they not do online campaigns?), astroturfing by corporations promoting acceptance or fighting negative news (e.g. rideshare, AI, whatever certain wealthy personalities are doing) ... the list goes on.
HN has always been relatively influential in the tech industry and therefore worth influencing, and now the cost is very cheap - you don't even need to hire many people, so less-resourced operators will find it worthwhile (and they will also attack lower-value forums).
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
(I'm normally posting in the context of my startup - although I try to keep the self promotion to a minimum and always contribute to the "conversation," if LLMs replying to one another can be called such).
For what it's worth, I created a community for paying users of Phrasing that has been going really well. I think free online communities may be going away, but there may be a future in exclusive/paid communities.
Set text size as preferred, underline links (or not), turn off display name styles (or not), set UI density to compact or default, set chat message display to compact, set space between message groups to 0px, and turn off all the animated emoji and GIF animation stuff if you want.
In the client, there's a button to hide the member list (or not).
You can definitely make Discord look like a slightly less dense IRC client (mainly because of the channel picker) if you want. And if you want to go really crazy, use it in a browser and customize it with a userscript, or use BetterDiscord.
I think a lot of the features like embeds and emoji reactions add a lot of value compared to IRC (which I think is also why the IRC world is trying to add those features).
Personally I'd love to find a decent online community these days, my social circle has shrunk considerably, but idk. It seems difficult to start fresh with new people nowadays
Which is all to say i agree about needing mostly irl, but there is also something of online community that irl could never replicate (for most people).
I think the problem is not keeping agents out of private real-people spaces, but how people who don't have any pre-existing or 'real world' connections to these communities can prove they are a real person over the internet alone and get an invite.
On a related note, I think this is going to be the biggest challenge for most folks when it comes to resisting using government ID online. It will be the apple offered to normal circles as easy proof you're not a bot.
Some would see those as negatives.
> IRC kinda sucks compared to modern chat and they refuse to implement features that are considered basic.
Just because a protocol doesn't change purposes as time goes on that doesn't mean it "sucks". Who is this "they" you're talking about? Do you think IRC is a centralized service like Discord?
Some communities are better than others, but the sheer volume of stinky trash is immense despite Discord and the poor volunteer moderators' efforts to prevent it. Most mods are neutral on it too.
There are chat communities that are still somewhat safe with zero user verification. But I will not mention them.
but yes the publicly accessible servers are going to face similar problems. the socially competent people tend not to run those servers, and have smaller private servers with people they know as they have no drive to try to create a space for strangers to gather.
Sure, if you want to chat while gaming, that's the whole point of Discord. Ganbatte.
But, for everything else, Discord is such a horrible misfit that I don't understand why it's the default.
but yes i also game and it gets a lot of use for that as well
i agree though that for collecting and organizing information longer term like forums do, it is not ideal
Mailing lists are old, boring, boomer tech. Ayup. They are. And they work.
However, Zoomer, if you must have Teh Sh1ny(tm), then explain to me why a Discourse isn't a better choice?
Discord is the anti-Pangloss; it is the "Worst of All Possible Worlds".
Because it equally well supports real-time communication.
And it looks shiny.
And some people use it to e.g. watch a video together, or other social purposes.
Alas, Reddit is basically dead to me because of this.
Is this based on the belief that an LLM can only represent an "average" human being?
This is sad, because Reddit remained one of the final bastions of human content on the internet. For several years, appending "site:reddit.com" to a google search was a valid way to get something usable out of a google search. Doing that is still an improvement over raw-dogging Google's ranking algorithms with an unfettered search, but AI slop increasingly is the result.
This is one of my great disappointments in the current rise of AI. LLMs can give good search results when dealing with a topic they've been specifically trained on by human experts, but they're not good at separating human-produced signal from AI slop noise. We've done nothing to prevent a sea of AI slop from being dumped on top of all the human signal that's out there. When AI companies enter their enshittification phase and stop investing in expert human trainers, the search results LLMs produce are going to fall off a cliff. Search is a bigger problem than ever.
_____
[1] https://9to5mac.com/2024/02/19/reddit-user-content-being-sol...
HN autokills comments it detects as LLM. I think maybe you're not giving HN enough credit. :)
It doesn't even show you the post is killed, it looks to you like it posted fine, and you have to logout to see it's actually dead. It's an approach that's extremely hostile to the user.
For giggles, here's how it would look for this comment. Rather meta, but in this case it removed the "It needs hellp" so here we are.
I often run my screed through an LLM before posting. I ask it to keep the writing at about a 10th grade reading level and to avoid em dashes.
No it doesn't. Unless you have proof.... ???
We may end up with things like that…
Same as it ever was.
I don't suppose you could show some examples? How convincing is the state of the art now?
You can have both IRL and online-free-of-bots. I already wrote about it, but one of the very best forums I'm a member of, where real people are posting, requires you to be vetted in, web-of-trust (but IRL) style. It's a forum about cars from one fancy brand, and you can only ever join by having a member (I think it may be two, I don't remember) who's already in confirm that he saw you driving a car of that brand. It's not 100% foolproof (someone could be renting the car for two hours and show up at a cars&coffee, or take a friend's car, etc.) but this place really feels like a forum of yore.
And people do eventually travel, so it's bound to happen that an owner will go to another country, meet someone there, vet him in, etc.
Now, sure, it may not be the "1 million users acquired in three days thanks to my vibe-coded app" scenario but that is the point.
You can imagine other domains where IRL communities have local groups, but where forums regroup different IRL communities all interested by the same hobby/topic/domain. And when people travel and meet, the vetted members do grow and connect.
Oh and on the forums a lot of the posts are pictures, where "Julian xxx" met "Black yyy Cyril" and you see both cars (and from more than two people): suddenly it becomes much harder to fake a persona... You now need to fake both Julian xxx and Black yyy Cyril and fake the pics. And explain why your car has never been posted by any carspotter on autogespot etc.
You can imagine the same for, say, model trains: "Met Jean at the zzz meetup, where he brought his wonderful 4-8-8-4 'big boy' locomotive, I confirm he's into the hobby, vet him in".
Naysayers and depressive people are going to say it cannot work, but I'm literally on one such forum and it just works.
P.S.: if I'm not mistaken, in the past in some nobility circles you had to be vetted by up to sixteen (!) other people from the nobility who'd confirm they knew you, your parents, etc., before you'd even meet the king/emperor/monarch, to make sure that someone from far away couldn't come to, say, Versailles or Schönbrunn pretending to be a baroness or count or whatever. Quite the extensive check if you ask me.
It's very obvious that these accounts were abandoned and then either bought from their original owners, or more likely bought from someone who compromised them, because of their history and karma.
And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)
Yeah it's become my default assumption that any user who does this is either a bot or a bad-faith troll.
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-06/Is_The_Inter...
Also just repeating something from the linked article, but often with different wording and in a tone that makes it seem like it was something that the article missed.
Yesterday I was watching people on the street and on the tram. Every other person was staring at their phone and scrolling through something.
That might scare me more than the fact that someone is chatting with an LLM bot online.
(I am pro-ai, use it every day for coding that I couldn’t achieve pre-2022 as I am lame coder.)
People using LLMs without being fed their own post history are still pretty easy to detect. There's just something very recognizable about the cadence and tone of LLMs.
What really stuns me is that if you call someone out for it, 9/10 times you get absolutely buried in downvotes. Even here on HN. It's like people are angry that you're lifting the curtain on the slop, that the writing they enjoyed is fake.
I'm not saying being a mod means it's bulletproof, but I do notice smaller communities tend to self-police better and know what's real.
That said, your experiment scares me as well.
My experiment was focused on niche subreddits as well due to the nature of the product I was trying to market.
It’s an unpopular opinion but I am looking forward to ID and age verified social media. If done right we can have real people around again.
BTW, ironically, the harsher communities like 4chan don't seem to suffer from the dead internet. I guess it's either because the advertising value is too low to justify AI use there, or maybe AI API providers refuse to work with such content, thus reducing opportunities to infest it with bots.
- I am trying to learn about the topic at hand and trust a human's comment more than an LLM's guess
- I am trying to connect with other humans to fulfill my social needs
- I am maybe spending time to help another human out with a response because I want to help someone else
- I am interested in the perspective of other humans
Those are just a few reasons. For each of those if it's actually an AI I feel I'm losing out on something.
Imagine an online community where you can only join and participate on the recommendation of two other members, whom you must have actually met in person. Meanwhile, you leave at least some of the activity visible to the general public so that interested parties can meet up IRL and join.
This could probably be implemented easily on top of existing online platforms like Discord, Reddit, etc. since it's really just a community building rule, not a community itself.
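A sketch of how little machinery that rule actually needs, e.g. as a bot's bookkeeping on top of one of those platforms; the two-vouch threshold comes from the idea above, everything else is made up for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Community:
        members: set[str] = field(default_factory=set)
        # applicant -> members who have vouched for having met them in person
        pending: dict[str, set[str]] = field(default_factory=dict)

        REQUIRED_VOUCHES = 2

        def vouch(self, member: str, applicant: str) -> bool:
            """Record a vouch; returns True once the applicant is admitted."""
            if member not in self.members or applicant in self.members:
                return False
            vouches = self.pending.setdefault(applicant, set())
            vouches.add(member)
            if len(vouches) >= self.REQUIRED_VOUCHES:
                self.members.add(applicant)
                del self.pending[applicant]
                return True
            return False

The in-person meeting is the real gate; the code just records it.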
What factual basis do you have for that?
Whatever allegiances (with people, or allegiances to ideas) Steve Huffman has, or people like him - it's not enough. It's a site seemingly killed by greed
(Yes, I know moderating this stuff at scale is hard)
- A human. Beep boop.
Frankly, online communities have been dying for many years now, ever since the censorship, anti-free-speech, tone-policing mods and mobs started dominating online and America really did not have the self-respect or confidence anymore to enforce the Constitution online.
“Mods are Unconstitutional” lmao
Was this a browser using agent? What did you use?
Using just a browser is way too token-intensive and slow. It would look for 401 errors, then run the browser automation to log in with the credentials and grab the token.
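If that description is accurate, the core of it is just a token refresh on 401. A hedged sketch of that pattern using the requests library, where the endpoint and the refresh step are placeholders rather than Reddit's actual API:

    import requests

    def refresh_token() -> str:
        # Placeholder: in the setup described above, this is where the slow
        # browser automation would log in and capture a fresh bearer token.
        raise NotImplementedError

    def api_get(url: str, token: str) -> tuple[requests.Response, str]:
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 401:
            # Token expired: do the expensive browser login once, then retry
            # the cheap direct API call.
            token = refresh_token()
            resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        return resp, token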
Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?
I'd imagine they'd be sniffing browser agents, plugins, cookies, etc. to fingerprint. Using JavaScript scroll position, browsing rate and patterns, etc.
Maybe their protections just aren't that sophisticated.
The application-layer stuff is harder. Each application can develop its own heuristics, and that's difficult to automate in a cross-cutting fashion.
Reddit doesn't do anything about that? That seems stupid.
Name and shame.
You're giving "let them eat cake" energy.
If you look at what people outside HN say about HN, it's not uncommon to see wannabe tech entrepreneurs talking about how to promote their products via Show HNs and how to stay on the HN front page. It's honestly a little sad, considering that HN has a tendency to rip these projects apart.
Show HN is for showing a cool project you've built. To warrant front page placement, it has to “gratify intellectual curiosity”, just like everything else. There needs to be some kind of novel breakthrough or something for others to learn from. Or, sure, some way it can help others with their work or life.
And yes, a byproduct of all this may be that some people buy a license or subscription. But submitters who are just trying to get attention and sales for a commercial product don't belong in Show HN.
I've seen some claim they do it to avoid stylometry or being fingerprinted, or because of social anxiety problems.
Some people just have a compulsive need to optimize everything, and HN's guidelines and tone policing are more easily followed by a bot than a human.
HN's guidelines aren't that strict and the mod hammer is a plushie. It's not difficult to get by here. It's also kind of useful for critical reflection/self-regulation to hear the occasional "you came in too hot" or "don't be boring" from a moderator.
Seems better to me to just try to be sort of reasonable and let the mods nudge you if they need to and let your comments be downvoted from time to time. What is the goal of these people, to never experience correction in their lives? To never write an unpopular comment?
Look at all the people who complain about cancel culture. There's a huge swath of people who don't ever want to hear "that was mean/bad/shitty".
Yes?
Most comments are just grammatically "correct". Not a high bar.
I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.
I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution and no amount of model tuning would make the second as popular as the first because the first would hit third-party botting as well as platform content biases (eg Twitter).
I've personally been accused of being a bot. This is particularly true in recent time as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something you disagreed with.
This happens on HN all the time. For a lot of downvoters and flaggers, there are two kinds of opinions: "Things I agree with" and "Too political for HN."
This just makes me wonder...so what?
Some of the oldest posters here with the most karma continue to post absolute garbage takes on topics ranging from US healthcare to the history of the USSR, takes that are trivially disproven by learning the very basics from a Wiki article (i.e. not a high bar).
To be fair, this opinion slop is also present for new users and LLM bots, but is one kind really worse than the other, if both of them contribute to killing the community?
We already know what kills communities. It's the eternal Septembers. Infighting within leadership also doesn't help, but time and time again it's the influx of too many new users that makes quality nosedive and drowns out quality contributions.
No? I’m imagining not at least. Because there would be no point to it.
If you would enjoy it, then I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
The reason I'm not simulating the experience with an LLM is because:
1. It costs more time to do so, because I have to prompt it to create a single comment. Multiply that by the typical number of comments in an HN thread.
2. I suppose in a way you need bad takes to form your own view of a topic or an issue. LLMs would also be unable to provide truly unique experiences, such as some of the veterans who sometimes post here who were part of the living computing history as we know it.
> I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
That's something you imagined that I claimed I want. If you read my comment again, you'll see there was no such thing.
Do you really not care one way or the other? Would you really rather just be talking to LLMs here? Or would you just script yourself as well and call it a day? Then what?
Maybe you are. I like getting to a reasonably correct model of a topic or issue. Bad human takes can still be useful here. I just get inevitably tired of the people crying about potential LLM comments all the time.
> Would you really rather just be talking to LLMs here?
Obviously we're not there yet, regardless of what I want. But there is a great number of HN threads posted here that touch on topics that have been discussed so many countless times, that an average LLM summary would do better than most comments.
LLMs aren’t lacking in the sort of writing skills that make for superficially good content. They know grammar, they know rhetoric, and they know their audience. You can’t tell them from a human on their writing skills. Where they tend to fall down is their logic and reasoning skills, and unfortunately it seems you can’t use that to distinguish them from the average online opinionator either.
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
You are reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
More to the point: just as bad as generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.
So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.
Threads that aren't - like this one - don't.