It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.
It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.
I fear losing the battle.
> fake AI accounts
First, how do you identify them? Is it strictly admins monitoring posts/server-side logs, or do users report odd behaviour? Second, what is the purpose of these accounts? Are they basically running submarine adverts, or are they just trolling (to harm the community)?
AI Deception: A Survey of Examples, Risks, and Potential Solutions - https://arxiv.org/abs/2308.14752
Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective - https://arxiv.org/abs/2406.05724
Online Deception in Social Media - https://cacm.acm.org/research/online-deception-in-social-med...
Or ... how small can a community be and still be drowned in AI slop?
Is it a community inside one of the major platforms, or does it have its own custom thing?
Encouraging a culture of not using AI works to an extent, but I also tire of threads claiming the parent post is AI. There isn't a sure-fire way to know one way or another.
We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.
No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.
Now I see it as the perfect tool for impostors.
People often confuse freedom of speech with freedom to access a specific platform for speech.
It's dead wrong; I don't know why people would want to be in a community where they aren't wanted.
This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.
It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.
Not including the cost of the letter itself, or the envelope, or the cost to write it if it's being farmed out to overseas labour, who then has to send it by international postage. And then you have evidence of where the letter originated, and that can be compared with how the user presents themselves online.
A little bit more than 2 hours' minimum wage, I think.
Also, I imagine it's not impossible to reliably distinguish between an autopen and genuine handwriting. The company whose site you linked says their machine can't perform complex pen movements, so calligraphy is impossible.
The real advantage of posting a letter is that you have to pay for postage, and the stamps on the envelope will indicate which country the letter is really coming from.
Was.
Maybe you are too young to remember the (pre-spam) days when it was polite to leave your SMTP server open for others to use?
Yep. Was.
This isn't the internet you grew up on. This is an internet scoped for bots and organizations.
Sorry, they did an interview about 20 years back where they kept correcting the host to 'Something is awful'. I have just called it that ever since.
It'll stop the ones doing it for the lols, but I imagine they're a minority anyway.
The people leaving LLM replies are paying minimum $20/month for LLM access, and probably more in practice.
A one time $10 fee is not a deterrent.
1) the cost becomes even higher for AI slop factories since they will probably get multiple accounts banned.
2) It prevents influence from accruing to any specific account. This diminishes the incentive for slop, since sufficient success means a ban.
3) It reduces the moderation effort since creating accounts is no longer a sustainable strategy.
Bots are indeed killing Twitter now. I noticed more and more people were leaving permanently. Musk evidently accelerated the decay here. There is something wrong with his mindset; it's almost as if it is pathological. His perception of things is genuinely distorted, and I am not even 100% certain he is completely aware of it; he must be partially aware, but it seems something is wrong beyond that too. No wonder he gets along with Trump - that one is now clearly in the final stage of narcissism, if not dementia.
You add a barrier here. You think that your solution means AI is reduced, but you also reduce real human participation. I noticed this with other things too, such as "you need to verify your identity before you can post to the ruby issue tracker". I can do so, but I need my tablet and it takes me more time than before, so I stopped using the ruby issue tracker altogether. (It's not the only reason, but adding barriers really makes me invest my time elsewhere - or at least makes that more likely.)
You always need to consider all the trade-offs. Charging money means you will also turn away real humans at the same time. And it's not solely about the cost; it is simply a hassle. For similar reasons I also rarely register at a phpBB forum - I need to store the password so I don't forget it, etc., so more hassle. Using a password manager is also more of a hassle.
On completely different scales. Even if it is not perfect, it is a strong enough filter to turn a bot infestation into a mild annoyance.
I "log in with Instagram", where "I log in with Facebook". Guess how well data recovery works when there is literally no password set. I'm surprised these systems work at all.
Both sites have survived and continue to work well for their users.
A small cost does definitely work for some sites.
Sure, it might stop 10% of the bad actors and lower the numbers, but it'll also stop 80% of the good users, who aren't experts at getting around the cost and don't have an income from using the service that would let them just pay it as a cost of business.
I was in a small niche creative-writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts, handwritten, word for word. Their main concern was that the models were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch was flipped, and almost all of them started talking about how AI and those who used it were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting Twitter content from extremely anti-AI people. Complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not even a single disagreement (and lots of encouragement).
I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were tracked vs AI contributions automatically in the editor, to satisfy someone's curiosity, clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy it or engage with it positively as if it were human-written.
A while later, most of their submission rules were updated with a new clause: if AI-written content was judged to have been discovered, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out for the witch hunt.
I also used to be quite engaged with blockchain, and it went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out that the power-use claims were largely a nothingburger, people gave up on the hate parade.
I don't think you will "Lose the battle" (at least in terms of keeping AI users out). And its always ok for small communities to be selective about their membership. I just don't think its possible to maintain such artificial rage for more than a few years. The AI Datacenter water/power claims are a clear London Horse Manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just wont care enough to ban anyone except low effort spammers (of which there are a huge amount, granted).
YMMV
What makes you think the rage is artificial?
Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.
AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.
Which is why I left before I was banned. I no longer felt comfortable, and they probably felt likewise. They wanted a safe space to hate on people involved in AI art, and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors, or honestly of any other group in that space, without being ostracised.
It's a bit like saying "A witch might have burned down their house, so their reaction against witches is understandable". Maybe in the abstract. But that doesn't mean the subsequent actions are acceptable.
> Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:
1. They (like many people) posted a lot of their midjourney creations for a few months. (21/22 was like that)
2. They saw an increase in low quality submissions.
So gripes about AI art and low quality submissions seem perfectly valid.
> Blockchain turned out to be an absolutely awful payment method
> AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Yeah, so I am not complaining about people having negative opinions. I was talking about the overall meme, the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time, basically used like a thought-terminating cliché. I have problems with crypto, and I like things about crypto. I can have a great conversation with most people, but for 12 months or so, you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and how it was going to use X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now arrived in roughly the same place: there's no longer really a discussion, the zeitgeist has this one single mode that's constantly repeated. To the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.
My point is, like crypto, this fixed idea will eventually erode and the hate train will move on. People with well-thought-out negative opinions will still exist past that time; they just won't have people screaming at fever pitch about it constantly.
Once again, I have to ask, why do you think that that is what they want? Maybe they want human generated content?
> the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time.
Understandable, though. Why discuss the pros and cons of $FOO when you're drowning in it? All you want is to stop the drowning.
So I downvoted.
The only thing I really took personally was the call for death, and that was me making a decision to leave in favor of my mental health.
The exceptions to the anti-AI sentiment are management and people with a vested interest.
The only solution is in person meetups, bringing back the 3rd places, joining a club. Maybe it's not such a bad outcome.
Perhaps it will even see a (small) resurgence when AI providers start charging for the actual costs.
That ship sailed a long time ago, with zealot admins and verbal harassment.
While there are certainly strong examples of this, a lot of people mistake enforcing the rules for zealotry. Part of the point of SO was that, if things don't change, there is a completed state for SO too - no need to ask duplicate questions, unlike on platforms where a post is less long-lived. Unfortunately, people take things like "this is a dup", "provide more information or we can't help", "this isn't a complete answer", and so forth, as deeply personal attacks…
One of the good things about LLMs is that they've drawn off all the simple already-answered questions! Unfortunately the more complex ones, or the ones for new solutions, are also going there, so SO and its family of sites is ceasing to grow even in the ways it wants to.
> and verbal harassment.
Again, that did/does happen, but a lot less often than some people report. The most abusive people I've seen on there are those who have been given one of the responses I listed above.
So I don't imagine AI is going to go away, especially given that now there are more open source models like Qwen that you can run locally. So even if those American behemoths go bankrupt it will persist.
Depends on how you're looking at it (using speculated numbers for easy math):
1. Having operating costs of $100m on revenue of $10m is very deep in the red, regardless of training costs.
2. Having $90m in training costs and $10m in operating costs on $100m of revenue means they're just breaking even.
Problem is, we don't know their financials or how they break down (they could, of course, clear up the confusion and release some numbers, but they aren't doing that now); all we know is when they need a new raise to continue operating.
From the raises we can estimate their operating costs (for example, raising $30m in 2024 and then $300m in 2025 implies a 10x increase in operating costs, because they aren't spending on capex; the training is done as opex).
From their subscriptions (which are all only estimated), we can sorta tell what the revenue is, but that's for subscriptions only which are almost guaranteed to be running at a loss (until recently, anyway). We don't even have estimates on revenue from the PAYG API users. Common sentiment is you'd be a fool to use the PAYG options for anything but trialing the service, but the world is filled with fools, so you never know!
What is interesting is comparing the PAYG prices from providers supplying open models vs the PAYG prices on the closed models - the suppliers of open models aren't spending on training, so their token prices are pretty close to the actual cost of running the models. This is partially confounded by the fact that many of these providers have VC money backing them (they are not bootstrapped), and so will also try to perform land-grabs via subsidised tokens, because their goal is an exit via buyout, and without an eventual acquisition they will simply fail.
I can't think of many open-source model suppliers providing subscriptions - not ones that subsidise the subscription, at any rate.
The first IPO of these SOTA providers is going to be the eye-opener; we'll finally see their financials and we'll see just how much the PAYG was subsidised, and how much the subscriptions were subsidised.
Until then, with a collective industry investment of $800b (last I checked) and a collective revenue of $20b (last I checked), they are most definitely operating in the red for the most common definitions of operating in the red.
At some point an Instagram/TikTok/etc. user could see nothing made by real people and not even know what is promoted vs ad vs post.
I loved maps and geography as a child and still do. I've never met anyone in real life who likes them as much as me. But on the internet there are places where I can discuss them, and other people share fascinating articles, pictures, etc.
Plenty of people have a reason why they can’t do it, but plenty do it and are happier for finding their community IRL.
No, it isn't anywhere near good. One doesn't throw out the baby to get rid of fouled-up bathwater. Online communities are just as valid as offline ones; it's just that many people a) don't want to be deceived, and b) don't want fakery (slop) and all that entails. Easy.
No, it evidently isn't. Online communities connect people, and other communities, in ways that are impossible or undesirable to realize in meatspace. Bizarre to treat this as a zero-sum game.
> "Nothing, nothing substitutes for real human contact in the real world."
It all depends on your smell™. Et cetera.
I don't want to be limited to only the friends I can make who live near me
"Popular" reddit posts and subreddits are a good example of this.
Yeah, the "blast radius" for social media AI slop is 80%-99% of humanity. There are many times when even I cannot make out whether something is slop.
Hell, AI slop is going to be even better than reality for a portion of humanity, so it's more likely they will stay online.
Maybe it's hard to get across what I mean, so here's a more concrete example: there will be SO MUCH clickbait out there that serious outfits, instead of being forced to do it, will be able to successfully differentiate themselves by NOT doing it (and many similar things in different arenas).
I'm trying to say that LLMs raising the noise floor will drown out a lot of the toxic noise that's been plaguing us.
I can hope.
I really want to believe this will be true. However, I also suspect there's some external driving force, that I cannot readily name, which is making people incapable of consuming anything except this low-effort content. I mean, obviously it's working to some extent. Perhaps AI will be the thing that accelerates its death, but part of me thinks something else needs to happen beyond just an increase in useless content.
It's the economy of everything being free but supported by advertising. That mechanic is what leads to the race-to-the-bottom, lowest-common-denominator, human-motivation-hacking attention toxicity (yes, that's a bit of a ramble).
If people weren't getting paid for the smallest increment of attention they could grab, it wouldn't be promoted the way it is. I don't have a high opinion of the things which grab my attention, but they still manage to do it sometimes. I think many people are in that boat. If there were other mechanisms with which we rewarded people for doing things, something different would be optimized.
And people just wouldn't reward the 10-second-gratification in anywhere near the same way if it weren't for the advertising.
Now there's more pressure to have a stronger signal and hopefully rewards to match.
I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too fake to be believable.
However it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters continue to give advice on the situation. Some comments would say they saw the notice that it was fake but continue arguing about it anyway.
This makes a feature of Reddit very clear: The truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, be angry about.
In retrospect it's obvious given that misinfo posts were the easiest way to karma farm for years even before AI.
https://news.ycombinator.com/item?id=47913650
It had 639 comments and 866 upvotes. And that's not a one-off.
If you like some authors or journalists or bloggers, go see who they read (trust me, they all say who they follow in their own niches) and build from there. You can develop quite a good RSS feed following this method in like an hour, tops.
I once made a rather boisterously-argued comment on a political issue I'm passionate about, and I realised that I'd made a serious error of reading comprehension when it came to my opponent's argument. I apologised to them for being an abrasive arse over my own mistake, then edited my comment to say that I was mistaken.
My incorrect comment which literally said at the bottom it was incorrect continued to be upvoted while my opponent who had made the stronger argument continued to be downvoted.
That's 90% of current Facebook pages and groups.
After a while I had to wade through all sorts of nonsense to get to the posts I actually wanted to see, and later Facebook stopped putting posts from people I follow in my feed at all. It was 100% garbage. I can't imagine why anyone uses Facebook for anything other than the marketplace.
I'm active in a number of online communities that are doing just fine, but the difference is they all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too, but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, etc., especially when much of the web of relationships extends back into real-life activity.
But I agree the golden age of easy anonymous connections online has ended.
I think the attestation approach works best if different offences carry different punishments. E.g. someone you invited turning out to be a turd shouldn't get you banned; someone you invited going full AI spam should.
If you weren't a bellend on what.cd, you got access to certain forums where there were even more and better private trackers. Once you built that trust there were social privileges, but if you abused that trust you got rightfully banned.
If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.
Are there successful newer designs, which avoid this problem?
The only one of these I've seen that really worked was the Debian developer version: you had to meet another Debian developer IRL, prove your identity, and only then could you get the key signed and join the club.
For Debian-style applications that are 100% about openness and 0% about secrecy, sure.
But if you want to secure communications between pro-democracy activists in China, or you're a Snowden-like whistleblower wanting to securely communicate with journalists - y'all probably don't want to be vouching for one another's keys.
It's probably better to call this something like vouching and leave "attestation" for the contemptible power grab by megacorps, delenda est. Using the same word for a useful thing and a completely unrelated vile thing only advantages the villain.
I want to create a community for immigrants. How would I make it welcoming to recent immigrants for whom no one can vouch?
A web of trust is a wonderful tool, but it's exclusive by design. This is a problem for some communities, even though it makes others much better.
Being welcoming to every random person is by definition not a community, it's a free-for-all mess.
A community means communal interests and values; it's in the name. And to guard those, you can't just accept everyone without vetting them. That's how a place turns to shit, full of spammers and trolls and people who want to hijack it and don't share the original cause/spirit. It has happened to forum after forum...
In the end, you need to filter people at the door. You need to keep unpleasant people out and shut down bad behaviour.
I figured that a paid, motivated moderator could be better than a web of trust for this demographic. Maybe enforce a stricter moderation standard on unvetted members. At my scale it might work.
This preserves anonymity for the latter, because they're only known to be "related" to the former, which is a vague hint at their real identity (e.g. they could've met in another online community). And the former don't care; if they want, they can vouch for an anonymous alt.
Or have a two-stage process: run very public, very open events that anyone can sign up for and attend, and then invite specific people you meet at those events who look like a good fit to your private, community-only event.
The closest analog I can think of is community-run bike repair workshops. Some people are deeply involved, and others just have a flat tire.
The closest digital equivalent is the forums of old.
Spot the fed
It still happens more informally today, of course, but it used to be a pretty big (if unspoken) part of how a lot of WASPy organizations operated, to a greater or lesser degree.
Also, I do feel that GP's take is hyperbolic even for the twentieth century. My own background is mostly German immigrants, of various religions and non-religions, and the way I've been told the story, none of them faced significant resistance as they moved upward in the academic and corporate institutions of their choice. These included NASA executives, department heads, etc.
Note that in balancing GP's accusation against WASPs I'm not attempting to address the related, but not precisely complementary, phenomenon of perpetually marginalized groupings.
This seems self evident to me too.
It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
Leave them on the device, and authorize the device to validate before age-inappropriate content appears.
Website wants to know your age? Your face and fingerprint support your attestation signed by a trusted party.
Can it be tricked potentially? Sure, but then you’re probably a super genius kid and not the reason that these laws were created (as if).
Don’t let anyone tell you anonymity must die for safety to exist.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
The problem here is that the premise is the error. "Prove your ID" is the thing to be prevented. It's the privacy invasion. What people actually want are a disjoint set of only marginally related things:
1) They want a way to rate-limit something. IDs do this poorly anyway; everyone has one, so criminal organizations with a botnet just compromise the IDs of innocent people -- and then the innocent are the ones who get banned. The best way to do this one would be an anonymous way for ordinary people to pay a nominal fee. A $5 one-time fee to create an account is nothing to most ordinary people but a major expense to spammers who have 10,000 of their accounts banned every day. The ugly hack for not having this is proof of work, which kinda sorta works but not as well: you're back to botnets being useful, because $50,000/day in losses is cash money out of the attacker's pocket that in turn funds the service's anti-spam team, whereas burning up some compromised victim's electricity is at best the opportunity cost of not mining cryptocurrency or similar, which isn't nearly as much. It would be great to solve this one (properly anonymous, easy-to-use small payments), but the state of the law is a significant impediment, so you either need to get some reform through or come up with a creative way to do it under the existing rules.
2) You want to know if someone is e.g. over 18. This is the one where people keep pointing back to government IDs, but you only need one bit of information for this. You don't need their name or their picture; you don't even need their exact birthdate. Since people get older over time rather than younger, all you need to know is whether they've ever been over 18, since in that case they always will be. Which means you can just issue an "over 18" digital signature -- the same signature for everyone, so it's provably impossible to tie it to a specific person -- and give a copy to anyone who is over 18. Maybe you change the signature once a day and unconditionally (whether they need it that day or not) email all the adults a new copy, but again, they all get the same indistinguishable current signature. Then there are no timing attacks, because the new signature comes to everyone as an unconditional push and is waiting in their inbox, rather than being requested at the moment they want to use it; kids only have it if an adult is giving it to them every day. The latter is true for basically any age verification system -- if an adult with an ID wants to lend it to you, you can get in. (There's a rough sketch of this after the list.)
3) You want to know if the person accessing some account is the same person who created it or is otherwise authorized to use it. This is the traditional use of IDs: e.g. you go to the bank and want to withdraw some cash, so you need a bank card or government ID to prove you're the account holder. But this is the problem that's already long solved on the internet. The user has a username and password, TOTP, etc., and the service can tell whether they're authorized to use the account. It's why you don't need government ID on the internet -- user accounts do the thing it used to do, only they don't force you to tie all your accounts together under a single name, which is a feature. The only people who want to take that feature away are the surveillance apparatchiks.
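To make (2) concrete, here's a minimal sketch of that shared-signature idea, assuming a single issuing authority and using ed25519 from Python's `cryptography` package (all names here are hypothetical; a real scheme would also need key publication and rotation):

    # Sketch of the shared "over 18" token described above. Every verified
    # adult receives the SAME signature each day, so presenting it proves
    # only "someone over 18 received today's token", nothing about who.
    import datetime
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    authority_key = ed25519.Ed25519PrivateKey.generate()  # issuer's keypair
    authority_pub = authority_key.public_key()            # published to all sites

    def todays_message() -> bytes:
        return b"over-18:" + datetime.date.today().isoformat().encode()

    def issue_daily_token() -> bytes:
        # Pushed unconditionally to every adult, used or not,
        # so the timing of issuance reveals nothing.
        return authority_key.sign(todays_message())

    def site_accepts(token: bytes) -> bool:
        # Any site verifies against the public key; all tokens are identical.
        try:
            authority_pub.verify(token, todays_message())
            return True
        except InvalidSignature:
            return False

The point being that the verifier learns exactly one bit - "someone holds today's adult token" - and nothing that distinguishes one adult from another.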
I have a strong preference for remaining anonymous or at least making it a reasonably high bar to tying my online identity to my personal identity
I would love to be involved in helping to design a sort of "human verified" badge that doesn't necessarily make it possible or at least not easy for everyone to find your real identity
I've been thinking about it a bunch and it seems like a really interesting problem. Difficult though.
I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, though
I.e.: you use this network as your auth provider, and you get the user's real name, handle, and network ID, as well as the IDs (only IDs, no extra info) of first- through third-level connections.
The user is incentivized to connect (only) people that they know in person, and this forms a layer of trust. Downstream reports can break a branch or have a network effect upstream. By connecting an account to another account, you attest that "this is a real person whom I have met in real life." Using a bot for anything associated with the account is forbidden, except for explicit API access to downstream services defined by those services. (A rough sketch of the idea follows.)
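As a sketch of how such a vouch graph and its upstream consequences might behave (all names and numbers hypothetical, not any real network's API):

    # Toy vouch graph: members attest "this is a real person I met IRL",
    # and a report against an account also casts decaying suspicion on
    # whoever vouched them in.
    from collections import defaultdict

    class VouchGraph:
        def __init__(self):
            self.voucher_of = {}             # account -> who vouched them in
            self.vouches = defaultdict(set)  # account -> accounts they vouched for

        def vouch(self, voucher, new_account):
            # `voucher` attests that new_account is a real person they met.
            self.voucher_of[new_account] = voucher
            self.vouches[voucher].add(new_account)

        def connections(self, account, depth=3):
            # IDs (and only IDs) of first- through third-level connections.
            seen, frontier = set(), {account}
            for _ in range(depth):
                nxt = set()
                for a in frontier:
                    nxt |= self.vouches[a]
                    if a in self.voucher_of:
                        nxt.add(self.voucher_of[a])
                frontier = nxt - seen - {account}
                seen |= frontier
            return seen

        def report(self, bad_account, penalty=1.0, decay=0.5):
            # Walk up the chain of vouchers, applying a shrinking penalty.
            scores = defaultdict(float)
            while bad_account is not None and penalty > 0.01:
                scores[bad_account] += penalty
                bad_account = self.voucher_of.get(bad_account)
                penalty *= decay
            return dict(scores)

    g = VouchGraph()
    g.vouch("alice", "bob")     # alice met bob in person
    g.vouch("bob", "mallory")   # bob vouches carelessly
    print(g.report("mallory"))  # {'mallory': 1.0, 'bob': 0.5, 'alice': 0.25}

The decaying penalty is what makes vouching for strangers risky without making a single bad invite fatal.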
I think it could work, but you'd have to charge a modest, but not overbearing fee to use the auth provider... say $100/site/year for an app to use this for user authentication.
Personally I think it should be a government provided service, not something with a sign up fee. There's actually no point at all in building this if people have to pay to use it, because they won't
My point was to create something outside a specific government, with very limited information... that would require a fee or some kind of funding.
I don't think I'd trust the US/China or other bodies to trust each other for such a use case.
Ideally, yes
But you're right, this isn't likely to happen in real life and I'm just being wishful. Instead we're going to get the much shittier capitalist version of this where every company and government spies on us and we have no expectation of privacy online at all
I suspect it will be a long process: first there will be governments that force people to use ID, but that will be abused and hacked and will considerably restrict freedom of speech, so after that phase people will start to create better IDs.
The problem is really pretty simple: you need an authoritative source to say "this person is real" - and a way for that source to actually verify you're a person - but that source can be corrupted and hacked. Some people will say "Crypto!", but money != people, so I don't see how that works. Perhaps the creation of some neutral, non-government, non-profit entity is the way, but I can see lots of problems there too, and it will probably cost money to verify someone is real - where does that come from?
Anyway, good luck on your work!
Does that even accomplish much? It may cut down on mass fake-account creation. But real people can then create an authenticated account and use an LLM to post as an authenticated real person.
However, I might not be typical, in that I don't look at vote scores very often.
They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web
What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.
So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?
I guess you could have an eyeball scanner at your computer that only sends out a binary "yes this person is human" to the system every time the log in. That sounds expensive and hackable and just janky though.
Honestly I think "this person is real" is the wrong goal. You'll never accomplish it without a centralized state or some biometric monstrosity like that thing Sam Altman created.
Just settle for stopping spam.
Also, what happens to someone whose credentials are compromised? Are you going to ban the credentials of the victim rather than the perpetrator?
I'm happy to verify my identity as an honest-to-god sack of meat if it's done in a privacy-protecting way.
That probably is where things are gonna go, in the long run. Too hard to stop bots otherwise.
And by small, I mean: This whole trusted group could fit into one quiet discord channel. This doesn't seem to be big enough to be useful.
However, if it extends beyond that, things get dicier: suppose Bill trusts me, as well as those whom I myself trust. Bill does this in order to make his web of trust big enough to be useful.
Now, suppose I start trusting bots -- maybe incidentally, or maybe maliciously. However I do that, this means that Bill now has bots in his web of trust as well.
And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.
---
It all seems kind of self-defeating to me. The web is either too small to be useful, or it includes bots.
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.
If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new well-behaved bonafide human members do.
Except eventually it will also weigh down those users who supported <XYZ political stance>
I’m not sure if that would work for account deletions though.
Let's put aside whether it will be the end of all privacy as we know it (I'm not sure I personally think it's a good idea), but isn't Sam Altman's World eye-ID thing supposed to do that? (https://world.org)
How does it work (like OpenID)? Do I have an orb on my desk, or some sort of phone app? I still want to use my desktop to log in to HN.
Would it stop this sort of thing: get a human ID, paste it into .env, so agents can use it?
Even worse, many of them are just plain vocal about their disdain for people in general.
At least from what I'm seeing, people are starting to walk away from online life at an increasing rate, so I definitely don't see widespread adoption of his creepy eye thing.
I have no idea about the eye thing taking off. But I think your comment is very HN and a bit out-of-touch with regular people. What "you're seeing" is a bubble and not representative of the general population. The eye thing is a slow frog boil and it will be commonplace before you can blink.
https://github.com/Exocija/ZetaLib/blob/main/The%20Gay%20Jai...
How? I have an identity. A state driver's license, birth certificate, social security number. I've even considered getting a federal license before, never bit the bullet. If I wanted to run a bot, what stops me from giving it my identity? How do I prove I'm really me (a "me" exists, that's provable), and not something I'm letting pretend to be me? You can't even demand that I do that, because it's essentially impossible.
Is there even some totalitarian scheme that, if brutal and homicidal enough, could manage to prevent this from happening (even partially)?
I'm limited to a single identity only as a resource constraint. Others wealthier than I (corporations or ad-hoc criminal enterprises) could harvest thousands of real identities and use those, consensually or through identity theft. The only thing slowing it down at the moment is quickly eroding social norms (and, as you point out, maybe they're not even doing that, and it's not slow at the moment).
FTFY.
There isn't a clear solution. And if there is, this ain't it.
China gets away with this shit because they've been conditioning their population for 60 years... everyone's eased into it. Elsewhere, not even slightly so.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
I personally can't wait for a mechanism to kill 99% of bot traffic.
Those sorts of places were always the only places with reliably good communities.
The better fix would be to make the support for multiple accounts in the Reddit app not so incredibly shitty, where you're basically logging out and logging back in. Instead, just tell it "posts to this sub use this account, posts to that sub use that account", etc.
People were finding each other online when they couldn’t in person.
This isn’t to say I disagree with you. Just expressing sorrow over the loss of such a grand moment in our shared history.
As to compromising material for bribery, that can be collected in so many different ways, and things like email or messaging or tiktok videos are probably far more interesting, reddit is not particularly useful for that.
---
One of the subs I was visiting had some drama happening around 2020 over supposed negative community behavior: people were criticizing uploaded creative works which, I personally agree, weren't the best. The mod team decided that was a big no-no and the place had to be inclusive, welcoming, and filled with positivity - so they started banning those who dared to criticize. Fast-forward to now: there are only screenshots uploaded by bots, and comments made by bots who also include screenshots along with two sentences in every thread.
Mods were rightfully upset because they were losing control of their communities while Reddit cared only about its upcoming IPO.
I honestly don't think you could remake Reddit even if you did everything exactly the same starting in 2016. Corporate social media has ruined the individual aspect of social media, and that is unlikely to return.
No one wants to share in a place with a bunch of spammers.
The protest came after that so the timeline is not quite correct.
If these platforms had to listen to "their customers" (here comes the inevitable comment about how users aren't customers; yes, I know), they'd all be fired. They'd have to find new jobs. They all act in incredibly insulting ways, with a too-big-to-fail attitude.
That's counterproductive, in that it promotes the survival of only the worst bots.
I'd like to flip the switches that absolutely end poverty globally, absolutely eliminate guns from the US, and absolutely remove bots from Reddit.
If you can show me where these switches are located, I'll cheerfully go flip them and accept full responsibility for the results.
(Over here where things don't work in absolutes: Some of those bots that got killed were countermeasures to help keep the bad, well-funded bots at bay.)
It was a midpoint between Facebook and Geocities, it got people to build communities within its walled garden, but it was always going to betray them for cash.
Would be super fascinating to watch play out. I grew up before the internet so, historically, I know how to seek out external communities, but by early high school I was deeply entrenched in online life - so I'm very rusty with finding new IRL clubs, cliques, etc. Fortunately my life is full of many friends and I go out frequently, regardless. For those younger people that never had life without the internet, I wish them luck on their search but at the same time I'm very curious to witness their journey.
If Russia is willing to spend cash like that, then of course they're willing to run massive bot farms to pollute any forums they can. I'd be shocked if the US was not doing the same in any way it can. You have to ask why Trump killed Radio Free America when it was clearly not a big expense.
Not sure how this relates to the subject in a direct way. Radio Free America was an outlet explicitly created and utilized to spread US propaganda, kinda sorta barely disguised as a journalistic enterprise (not really; if you were listening to RFA, you knew what you were listening to). Shutting it down seems to be a counterpoint to all the covert participation of US intelligence on the web, which has done nothing but escalate.
The obvious answer to that question is "because he's a Russian asset". But that doesn't mean the obvious answer is also the correct one.
IMHO, we're seeing another and much more concerning trend at play here... the utter and complete rejection of anything but violence by the far-right. Diplomacy? Development aid? Cultural exchange? All sorts of soft power have been under attack for decades now, and not just by the far-right but (especially when it comes to development aid) also by mainstream centrist parties across the Western world. And it's always pseudo-masculine / "strongman" BS backing the sentiment - Bernd Höcke, German AfD mastermind, comes to my mind with "we have to rediscover our masculinity" [1], so do Hungary's Viktor Orban and his denouncement of LGBT or Trump's entire Œuvre.
I'm not saying that violence or at least being prepared, ready and willing to use it is automatically bad. Far from it. But all the various forms of "soft power"? They have a lot of value, value that the far-right is all too willing to just burn for entertainment.
[1] https://blogs.taz.de/zeitlupe/2019/03/24/die-auferstehung-de...
No matter where you look, the far-right kills and maims substantially more people than the far-left does.
If AI is being used in these areas, it is less an attempt to manipulate than an effort to just create noise and engender distrust in what people hear.
Not too dissimilar to people bot-leveling in MMOs to then sell the accounts.
Account farmers: these can be people in third-world countries, automated or not, sometimes using hundreds of mobile phones to create accounts and perform daily activity to make the accounts look legitimate. While they're building an activity history, they are also being paid to like/follow/interact with content.
Advertisers: these are bought accounts that are used to post inauthentic reviews of their service, inject it into discussion, and do PR.
Sloppers: people who build AI pipelines and then just pump the most dogshit content directly into a platform, trying to make any amount of money.
Nation-state propaganda arms: these accounts build a narrative character, then join discussions pushing a certain narrative, boost real content creators who share their message, and bog down discussion.
That, and probably political astroturfing. Before every election my local subreddit sees a surge of crime stories. Go figure.
It's actively encouraged by some of the platforms too. In Gmail and Google Docs, you have incessant AI prompts along the lines of "help me write this". I think LinkedIn does the same.
They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.
Can't really say they are doing anything wrong; maybe I would have done the same? ... It's just that, at large scale, it doesn't work.
Plain advertising, governments' propaganda, political propaganda for one group or another to shift public opinion (it's done on TV networks, why would they not do online campaigns?), astroturfing by corporations promoting acceptance or fighting negative news (e.g. rideshare, AI, whatever certain wealthy personalities are doing) ... the list goes on.
HN has always been relatively influential in the tech industry and therefore worth influencing, and now the cost is very cheap - you don't even need to hire many people, so less-resourced operators will find it worthwhile (and they will also attack lower-value forums).
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
(I'm normally posting in the context of my startup - although I try to keep the self promotion to a minimum and always contribute to the "conversation," if LLMs replying to one another can be called such).
For what it's worth, I created a community for paying users of Phrasing that has been going really well. I think free online communities may be going away, but there may be a future in exclusive/paid communities.
Set text size as preferred, underline links (or not), turn off display-name styles (or not), set UI density to compact or default, set chat message display to compact, set space between message groups to 0px, and turn off all the animated emojis and GIF animation stuff if you want.
In client use, there's a button to hide member list (or not).
You can definitely make discord look like a slightly less dense IRC client (mainly because of the channel picker) if you want. And if you want to go really crazy use it in a browser and userscript customize it or use betterdiscord.
I think a lot of the features like embeds and emoji reactions add a lot of value compared to IRC (which I think is also why the IRC world is trying to add those features).
Personally I'd love to find a decent online community these days, my social circle has shrunk considerably, but idk. It seems difficult to start fresh with new people nowadays
Which is all to say I agree about needing mostly IRL, but there is also something about online community that IRL could never replicate (for most people).
I think the problem is not keeping agents out of private real-people spaces, but finding a way for people who don't have any pre-existing or real-world connections to these communities to prove they are a real person over the internet alone and get an invite.
On a related note, I think this is going to be the biggest challenge for most folks when it comes to resisting government ID online: it will be the apple offered to normal circles as easy proof you're not a bot.
Some would see those as negatives.
> IRC kinda sucks compared to modern chat and they refuse to implement features that are considered basic.
Just because a protocol doesn't change purposes as time goes on that doesn't mean it "sucks". Who is this "they" you're talking about? Do you think IRC is a centralized service like Discord?
Some communities are better than others, but the sheer volume of stinky trash is immense despite Discord's and the poor volunteer moderators' efforts to prevent it. Most mods are neutral on it, too.
There are chat communities that are still somewhat safe with zero user verification. But I will not mention them.
but yes the publicly accessible servers are going to face similar problems. the socially competent people tend not to run those servers, and have smaller private servers with people they know as they have no drive to try to create a space for strangers to gather.
Sure, if you want to chat while gaming, that's the whole point of Discord. Ganbatte.
But, for everything else, Discord is such a horrible misfit that I don't understand why it's the default.
but yes, I also game and it gets a lot of use for that as well
I agree though that for collecting and organizing information longer-term, like forums do, it is not ideal
Mailing lists are old, boring, boomer tech. Ayup. They are. And they work.
However, Zoomer, if you must have Teh Sh1ny(tm), then explain to me why a Discourse isn't a better choice?
Discord is the anti-Pangloss; it is the "Worst of All Possible Worlds".
Because it equally well supports real-time communication.
And it looks shiny.
And some people use it to e.g. watch a video together, or other social purposes.
Alas, Reddit is basically dead to me because of this.
Is this based on the belief that an LLM can only represent an "average" human being?
This is sad, because Reddit remained one of the final bastions of human content on the internet. For several years, appending "site:reddit.com" to a google search was a valid way to get something usable out of a google search. Doing that is still an improvement over raw-dogging Google's ranking algorithms with an unfettered search, but AI slop increasingly is the result.
This is one of my great disappointments in the current rise of AI. LLMs can give good search results when dealing with a topic they've been specifically trained on by human experts, but they're not good at separating human-produced signal from AI slop noise. We've done nothing to prevent a sea of AI slop from being dumped on top of all the human signal that's out there. When AI companies enter their enshittification phase and stop investing in expert human trainers, the search results LLMs produce are going to fall off a cliff. Search is a bigger problem than ever.
_____
[1] https://9to5mac.com/2024/02/19/reddit-user-content-being-sol...
HN autokills comments it detects as LLM. I think maybe you're not giving HN enough credit. :)
It doesn't even show you that the post is killed; it looks to you like it posted fine, and you have to log out to see that it's actually dead. It's an approach that's extremely hostile to the user.
For giggles, here's how it would look for this comment. Rather meta, but in this case it removed the "It needs hellp" so here we are.
I often run my screed through an LLM before posting. I ask it to keep the writing at about a 10th grade reading level and to avoid em dashes.
No it doesn't. Unless you have proof.... ???
We may end up with things like that…
Same as it ever was.
I don't suppose you could show some examples? How convincing is the state of the art now?
You can have both IRL and online-free-of-bots. I already wrote about it, but one of the very best forums I'm a member of, where real people are posting, requires you to be vetted in, web-of-trust style (but IRL). It's a forum about cars from one fancy brand, and you can only ever join by having a member (I think it may be two, I don't remember) who's already in confirm that he saw you driving a car of that brand. It's not 100% foolproof (someone could rent the car for two hours and show up at a cars&coffee, or take a friend's car, etc.), but this place really feels like a forum of yore.
And people do eventually travel, so it's bound to happen that an owner will go to another country, meet someone there, vet him in, etc.
Now, sure, it may not be the "1 million users acquired in three days thanks to my vibe-coded app" scenario but that is the point.
You can imagine other domains where IRL communities have local groups, but where forums regroup different IRL communities all interested by the same hobby/topic/domain. And when people travel and meet, the vetted members do grow and connect.
Oh, and on the forums a lot of the posts are pictures, where "Julian xxx" met "Black yyy Cyril" and you see both cars (and posts from more than two people): suddenly it becomes much harder to fake a persona. You now need to fake both Julian xxx and Black yyy Cyril, and fake the pics, and explain why your car has never been posted by any carspotter on autogespot, etc.
You can imagine the same for, say, model trains: "Met Jean at the zzz meetup, where he brought his wonderful 4-8-8-4 'big boy' locomotive, I confirm he's into the hobby, vet him in".
Naysayers and depressive people are going to say it cannot work, but I'm literally on one such forum and it just works.
P.S.: if I'm not mistaken, in the past in some nobility circles you had to be vouched for by up to sixteen (!) other nobles who would confirm they knew you, your parents, etc., before you'd even meet the king/emperor/monarch, to make sure that someone from far away couldn't come to, say, Versailles or Schönbrunn pretending to be a baroness or count or whatever. Quite the extensive check, if you ask me.
It's very obvious that these accounts were abandoned and then either bought from their original owners, or more likely bought from someone who compromised them, because of their history and karma.
And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)
Yeah it's become my default assumption that any user who does this is either a bot or a bad-faith troll.
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-06/Is_The_Inter...
Also just repeating something from the linked article, but often with different wording and in a tone that makes it seem like it was something that the article missed.
Yesterday I was watching people on the street and on the tram. Every other person was staring at their phone and scrolling through something.
That might scare me more than the fact that someone is chatting with an LLM bot online.
(I am pro-AI; I use it every day for coding that I couldn't have achieved pre-2022, as I am a lame coder.)
People using LLMs without being fed their own post history are still pretty easy to detect. There's just something very recognizable about the cadence and tone of LLMs.
What really stuns me is that if you call someone out for it, 9/10 times you get absolutely buried in downvotes. Even here on HN. It's like people are angry that you're lifting the curtain on the slop, that the writing they enjoyed is fake.
I'm not saying being a mod makes it bulletproof, but I do notice smaller communities tend to self-police better and know what's real.
That said, your experiment scares me as well.
My experiment was focused on niche subreddits as well due to the nature of the product I was trying to market.
It’s an unpopular opinion but I am looking forward to ID and age verified social media. If done right we can have real people around again.
BTW, ironically, harsher communities like 4chan don't seem to suffer from the dead internet. I guess it's either because the advertising value is too low to justify AI use there, or maybe AI API providers refuse to work with such content, thus reducing opportunities to infest it with bots.
- I am trying to learn about the topic at hand and trust a human's comment more than an LLM's guess
- I am trying to connect with other humans to fulfill my social needs
- I am maybe spending time to help another human out with a response because I want to help someone else
- I am interested in the perspective of other humans
Those are just a few reasons. For each of those if it's actually an AI I feel I'm losing out on something.
Imagine an online community where you can only join on the recommendation of two other members, who you must have actually met in person, to participate. Meanwhile, you leave at least some of the activity publicly available to the general public so that interested parties can meet up IRL and join.
This could probably be implemented easily on top of existing online platforms like Discord, Reddit, etc. since it's really just a community building rule, not a community itself.
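For what it's worth, the rule itself is almost trivially encodable. A minimal sketch, assuming a hypothetical membership store (all names here are made up):

    # Toy sketch of the two-vouch join rule; names are hypothetical.
    members = {"alice", "bob", "carol"}                    # current community
    met_in_person = {("alice", "dave"), ("bob", "dave")}   # confirmed offline meetings

    def can_join(applicant, vouchers):
        # A vouch only counts if the voucher is a member AND has
        # actually met the applicant in person.
        valid = {v for v in vouchers
                 if v in members and (v, applicant) in met_in_person}
        return len(valid) >= 2

    print(can_join("dave", {"alice", "bob"}))      # True
    print(can_join("dave", {"alice", "mallory"}))  # False

The hard part isn't the code, it's getting members to confirm the in-person meetings honestly.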
What factual basis do you have for that?
Whatever allegiances (to people, or to ideas) Steve Huffman has, or people like him have - it's not enough. It's a site seemingly killed by greed.
(Yes, I know moderating this stuff at scale is hard)
- A human. Beep boop.
Frankly, online communities have been doing this for many years now, ever since the censorship, anti-free-speech, tone-policing mods and mobs started dominating online and America really did not have the self-respect or confidence anymore to enforce the Constitution online.
“Mods are Unconstitutional” lmao
Was this a browser using agent? What did you use?
Using just a browser is way too token intensive and slow. It would look for 401 errors then run the browser automation to login with the credentials and grab the token.
Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?
I'd imagine they'd be sniffing browser agents, plugins, cookies, etc. to fingerprint. Using JavaScript scroll position, browsing rate and patterns, etc.
Maybe their protections just aren't that sophisticated.
The application-layer stuff is harder. Each application can develop its own heuristics, and that's difficult to automate in a cross-cutting fashion.
Reddit doesn't do anything about that? That seems stupid.
Name and shame.
You're giving "let them eat cake" energy.
If you look at what people outside HN say about HN, it's not uncommon to see wannabe tech entrepreneurs talking about how to promote their products via Show HNs and how to stay on the HN front page. It's honestly a little sad considering that HN has a tendency to rip these projects apart.
Show HN is for showing a cool project you've built. To warrant front page placement, it has to “gratify intellectual curiosity”, just like everything else. There needs to be some kind of novel breakthrough or something for others to learn from. Or, sure, some way it can help others with their work or life.
And yes, a byproduct of all this may be that some people buy a license or subscription. But submitters who are just trying to get attention and sales for a commercial product don't belong in Show HN.
I've seen some claim they do it to avoid stylometry or being fingerprinted, or because of social anxiety problems.
Some people just have a compulsive need to optimize everything, and HN's guidelines and tone policing are more easily followed by a bot than a human.
HN's guidelines aren't that strict and the mod hammer is a plushie. It's not difficult to get by here. It's also kind of useful for critical reflection/self-regulation to hear the occasional "you came in too hot" or "don't be boring" from a moderator.
Seems better to me to just try to be sort of reasonable and let the mods nudge you if they need to and let your comments be downvoted from time to time. What is the goal of these people, to never experience correction in their lives? To never write an unpopular comment?
Look at all the people who complain about cancel culture. There's a huge swath of people who don't ever want to hear "that was mean/bad/shitty".
Yes?
Most comments are just grammatically "correct". Not a high bar.
I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.
I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution and no amount of model tuning would make the second as popular as the first because the first would hit third-party botting as well as platform content biases (eg Twitter).
I've personally been accused of being a bot. This has been particularly true in recent times as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf, and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something you disagreed with.
This happens on HN all the time. For a lot of downvoters and flaggers, there are two kinds of opinions: "Things I agree with" and "Too political for HN."
This just makes me wonder...so what?
Some of the oldest posters here with the most karma continue to post absolute garbage takes on topics ranging from US healthcare to the history of the USSR, takes that are trivially disproven by learning the very basics from a Wiki article (i.e., not a high bar).
To be fair, this opinion slop is also present for new users and LLM bots, but is one kind really worse than the other, if both of them contribute to killing the community?
We already know what kills communities. It's the eternal Septembers. Infighting within leadership doesn't help either, but time and time again it's the influx of too many new users that drags quality down and drowns out good contributions.
No? I’m imagining not at least. Because there would be no point to it.
If you would enjoy it, then I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
The reason I'm not simulating the experience with an LLM is because:
1. It costs more time to do so, because I have to prompt it to create a single comment. Multiply that by the typical number of comments in an HN thread.
2. I suppose in a way you need bad takes to form your own view of a topic or an issue. LLMs would also be unable to provide truly unique experiences, such as some of the veterans who sometimes post here who were part of the living computing history as we know it.
> I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
That's something you imagined that I claimed I want. If you read my comment again, you'll see there was no such thing.
Do you really not care one way or the other? Would you really rather just be talking to LLMs here? Or would you just script yourself as well and call it a day? Then what?
Maybe you are. I like getting to a reasonably correct model of a topic or issue. Bad human takes can still be useful here. I just get inevitably tired of the people crying about potential LLM comments all the time.
> Would you really rather just be talking to LLMs here?
Obviously we're not there yet, regardless of what I want. But there is a great number of HN threads posted here that touch on topics that have been discussed so many countless times, that an average LLM summary would do better than most comments.
LLMs aren’t lacking in the sort of writing skills that make for superficially good content. They know grammar, they know rhetoric, and they know their audience. You can’t tell them from a human on their writing skills. Where they tend to fall down is their logic and reasoning skills, and unfortunately it seems you can’t use that to distinguish them from the average online opinionator either.
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
You are probably reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
What's more, just as bad as the generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.
So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.
Threads that aren't - like this one - don't.
I don’t think they crave it enough to make a difference. Even before AI slop, Reddit had made successive changes that led to much less of a feeling of interaction with real, authentic humans who could become your buddies. The UI de-emphasized usernames and hid the sidebars where subreddits could have their own distinct community atmosphere. I hear that now on comment threads, Reddit will even hide a decent number of posts from other users, so that a poster may well be talking into the void.
It is on old-school fora that one can get a sense of actual interaction: with avatars and other personalized touches it’s easy to gradually learn who is who, and there is a culture of longform text where you can actually get a sense of other people’s personalities. But how many people under the age of 35 or 40 are joining those fora that survive? Give people a choice, and it turns out they prefer the dopamine hits of engagement-maximizing commercial platforms, and the smartphone as the default (or sole) interface to the internet with all the death of nuance that spells.
The problem is, there is fundamentally no way to scale this.
The only way to give authentic human interaction with like-minded individuals is to connect real humans to other real humans who share interests. And as we've already seen over the first few ages of the Internet, once such a community scales past a certain size, it a) ceases to be a place where people can come to chat, discuss, and hang out with their interest-sharing friends, because there are just too many people for one person to know, and b) becomes a target for profit-minded interests who will cheerfully eviscerate any authenticity and connection the community brought if it will make them a small profit before the community crumbles and collapses.
So anyone trying to "give authentic online experiences" as a business model is going to have to accept that they are going to be, at best, a small, modestly profitable company. And given the state of things today, I very much doubt that this is in the cards.
Since the AI sloppification we have lost a considerable amount of traffic to bots. But worse than that, we lost users who tended to contribute back and engage with others.
We can leverage multiple ways of exposing community data to members, so it is not that we are at a loss because of that; it's more that we have 30 years or so of good feedback on how the community around the platform was good for people, and now everything is at risk...
Don't get me wrong, my work is work... There are premium features and else, but the amount of value one can get for free is what the platform is known for. And we know many people use it for free for years and when they need or can they subscribe and mostly stay for years and years.
The fact people are losing those connections is depressing to me
I use AI, okay? I think it's useful. But people who dove hard into this stuff treat all text on their screen like it's a chatbot and not a person.
"Rewrite this code using the new API" "excuse me?" "Can you do it I need it right now chatgpt won't compile!" "Show me your code please" provides the biggest pile of dookie ever "hey can I ask how you came to decide on any of this? Maybe we should rewrite what you have here because x y z is concerning" "the ai did it I am learning. There is no need to rewrite anything just write this section for me" " no thanks" someone else does . user leaves
AIs have changed the feedback loop here such that these approaches are rewarded and even lauded.
One may be quiet, but what if your friend/acquaintance/fellow got possessed by some AI slot machine and is sharing his "products" enthusiastically? I had such a case, and right from the very beginning I was dismissive and rude, and it doesn't work -- he keeps sharing various artifacts.
On a global level, yes, communities die out. I think global communication has reached the point where it's more a liability than a benefit. In the late '90s and early '00s, maybe until the early '10s, getting more connected could lead you to nice clients, getting hired, etc. Nowadays, even before ChatGPT 3 in '22, every such area became overcrowded, underbid, etc., and LLMs, surprisingly, added little new -- they just amplified this trend.
That highlights the problem: it's not AI, it's the oversharing that's the issue. Many people have moved from "sharing what's unusual/interesting/exciting to me" to "what can I share today?"
The constant stream of mediocrity drove me away from Facebook (years ago) and then Instagram.
Edit - I am not anti AI but it is slowly killing the digital human interaction.
Smaller communities are generally a lot healthier anyway, so tbh I don't think this is all that bad of a thing. I don't think it's possible to be open to millions and also be healthy, unless you spend a lot of money paying moderators (and regularly rotating them, to prevent burn-out or mental harm from too much exposure, which ~0 do in an even slightly ethical way).
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
I agree 100% with the novel contribution aspect. But there's some nuance there.
For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.
As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.
I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.
There are two separate things here that are getting silently conflated.
> A good use of AI is when it enables people to do something they couldn’t do before
This could be good on an individual level, if say, a doctor wants to vibe code an app of some sort for his individual practice.
>to contribute to a community when they couldn’t before.
This is where it goes off the rails. If they couldn't meaningfully contribute before, they aren't going to suddenly be able to discern that whatever slop they want to contribute is of value to the community. That's just another way of saying, if I wanted an AI opinion on something, why wouldn't I get it directly from the source, and write the prompt myself, instead of have some intermediate human prompt the AI for me?
Feudalism had norms.
AI is ending an era of large public communities which will likely never come again.
The alternative is having a community born that will be small, have early adopters who can be overly passionate or critical and gatekeep folks from discussion. That means high effort to curate initially.
No, I don't think I will.
https://www.androidauthority.com/google-recaptcha-play-servi...
They’re effective at annoying humans, driving traffic away from your site, and reducing conversion rates.
Stuff started moving to web site forums which I still don't think are as good as a Usenet newsreader. slrn was my favorite.
Then reddit came along and a lot of online forums started dying as people moved to reddit.
Just this morning on reddit I reported 4 separate posts as AI slop to the moderators. They need to add a category for it; for now I flag it as "disruptive use of bots".
For 2 of the posts the moderators agreed with me and about 5 hours later the posts were removed. For the other 2 the moderators haven't done anything.
It's a losing battle.
Some of the posts start by asking questions like "I was thinking about this and... [long rambling paragraphs] Your thoughts on this?"
I waste a minute reading then another minute skimming the rest of it and then realize I wasted 2 minutes of my life. Then another 30 seconds reporting it to the mods.
This has exploded in the last 6 months.
Then there are all the repost bots farming for karma. Some subs have a rule that you can't repost something from the last 30 days or 6 months. But it is really ridiculous when something gets 500 upvotes and then literally the next day a bot reposts the same thing and it still gets 300 upvotes. I think it is just a bot farm upvoting stuff.
The baseline level of trust in an online interaction has been eroded significantly by LLMs.
The question is, how can we reverse this trend and increase trust?
I have a sneaking suspicion that it would help enormously if the stock prices of the largest companies in the world were not tied to how effective they are at hijacking as much of humanity’s time and attention as possible.
Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.
Let’s empower people to effectively have more control over the content they interact with.
Social dynamics can make this difficult. We all want to be in the loop. The recent striking successes of the movement to ban phones in schools gives me hope.
The fediverse has been around for well over a decade in some form or another. It never caught on with society enough to make a difference. And unfortunately, the fediverse has now developed such a distinct culture of its own, Highly Online people with distinctive political and social shibboleths, that it even alienates many tech idealists around the world, let alone the general public.
As far as the "tech idealists," a lot of them seem to want every space to be 4chan where they can be racist trolling assholes without consequence. And those folks have Nostr.
Sites and apps don’t need your actual national ID, just to know that you have one. I think it could be possible to have 3rd party verification services that don’t know where the verification request is coming from, thus preserving privacy on both sides.
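A blind signature is the classic building block for exactly this: the verifier attests "this person passed the check" without ever learning where the token will be spent. Here is a toy RSA version, purely illustrative (absurdly small key, no padding; a real system would use a vetted library):

    import secrets

    # Toy RSA key; real keys are thousands of bits with proper padding.
    p, q = 1000003, 1000033
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    m = 123456789 % n                   # stand-in for a hashed attestation
    r = secrets.randbelow(n - 2) + 2    # blinding factor; assumes gcd(r, n) == 1

    blinded = (m * pow(r, e, n)) % n        # user -> verifier: m stays hidden
    blind_sig = pow(blinded, d, n)          # verifier signs without seeing m
    sig = (blind_sig * pow(r, -1, n)) % n   # user unblinds

    assert pow(sig, e, n) == m              # any site can verify the token

The verifier never sees the message it signed, and the site never sees your ID; each side learns only what it needs.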
Most people aren’t willing to go through an identity verification process, or pay to join a community, and invitation-only spaces would probably lose diversity of thought pretty quickly.
Even still, I guess one of the above is a lesser evil because the bot problem is only going to become more unbearable.
P.S. Props to the author. I really liked this writing style.
I think what we need is the equivalent of what was done for CORS: client/server cooperation.
That is, APIs should mark that they are human only, and harnesses should cooperate with such flags and prevent calling said APIs.
It's not perfect, as it's client-side enforcement, and one could still theoretically build their own harness without it, but that's the only way forward.
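As a sketch of what that cooperation could look like, assume a hypothetical X-Human-Only response header (not any real standard) and a harness that honors it:

    import urllib.request

    def cooperative_fetch(url, acting_as_agent=True):
        # First check the (hypothetical) flag with a HEAD request.
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            human_only = resp.headers.get("X-Human-Only", "").lower() == "true"
        if acting_as_agent and human_only:
            raise PermissionError(url + " is marked human-only; declining")
        with urllib.request.urlopen(url) as resp:
            return resp.read()

Like robots.txt, it only binds well-behaved clients, but it at least gives honest harness authors an unambiguous signal to comply with.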
It used to be because the comments lacked any critical thinking. This is probably because most people on Instagram are teenagers. That's fine, and for that reason I stopped reading comments.
But now it's pretty obvious that the comments are LLMs talking. Whether a human initiated it, no idea, but the big walls of text posted by bobbyfoo2012 seem highly unlikely to be human.
For instance, I really liked how Karpathy shared a high-level idea on the LLM-based wiki. It was sadly followed by a long tail of no-one-cares-about "Here is my LLM wiki product" posts pointing to the generic LLM-generated landing page.
Also people will get used to AI in online spaces as AI quality improves. If I'm online trying to get help for some task, I personally don't care who wrote what if it is correct; it's not like humans have great track records of accuracy or substantial contributions either on average. Correctness is expensive in general.
If I'm online trying to relate to other humans emotionally, well I get what I'm paying for. It's been true forever that the better the gate, the better the community. I've tried to push the boundaries of openness, but as I've written extensively on MeatballWiki, soft security depends on there being more good than bad apples in a community. With machine intelligence, the economics of that are silly.
Regardless, people love people, so we'll figure it out. I'm optimistic we can rise to this challenge.
2. Only human-generated input in the composer: no copy/paste, no file uploads, etc. Control the composer; control the camera sessions for photos and videos.
3. No algorithmic feed designed for ad spend and eyeballs.
4. Moderate.
How, at scale?
Yes, but how many decimal places did you optimistically give it, only to never use more than the "10s" place?
Upvotes are not a good mechanism for quality control in any way because they force good content to have the same metadata as the content that is technically well-constructed but is irrelevant, meaningless, just a platitude, too obvious to be obvious or pablum. Upvotes turn everything into a shock-value dominated 101 space.
No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw this video about medical misinformation by these 'creators' earlier, and it genuinely made me want to see them crack down on this junk:
https://www.youtube.com/watch?v=UEfCTCBDKIU
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
It is exhausting to see a single, sincere sentence based on genuine human experience buried under 1,000 pages of SEO-optimized, AI-generated "void" that Google deems "correct." Despite this, I will keep working on filtering through the noise today.
I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme', starring a particular political leader, whose unpopularity is hard to adequately convey to foreigners.
It was not just that this post had been so severely upvoted, but the comment section itself had a mantra more or less, with very little actual conversation, just echoing the same sentiment; and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't know if I'd ever even written the word 'astroturfing' before expressing my bafflement at a friend, so I don't think I'm very tinfoil hat about these things.
It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.
For me it was a wholesome response. It seemed genuinely kind/human.
Click on user profile...it's a bot just pumping out posts like that. Looked organic when seen in isolation, but when you see a wall of them you see that it's got to be an LLM (with a good prompt).
That was disheartening...I had kinda accepted that the sht-stirring rage posts might be bots but the kind comments too? Ouch
It was also so much easier to make a dating app profile back when I was single, like one click. Recently was watching a friend set one up, and now they not only want like 3FA but also proof that you're a human. Assuming the old accounts are grandfathered in.
It's implemented for plan9, but clients could be made for any OS:
We're all recalibrating.
I really do think this is just a brief period in time before most people realize that slop posting doesn't personally get them anything, most give up, and we go back to roughly the old ratio of cool things with real value, just on a bigger scale, because AI helps one person do more.
I think people who want to push a certain narrative might just set up a quick bot, tell it to start posting on Reddit or wherever, and let it run. Why not? It's little effort on their part and they might actually have influence. It's the same reason spammers apparently think it's worthwhile to send me 10 text messages per day about a loan I've been approved for. It probably does work 0.0001% of the time, but that's okay if it's all automated.
Especially, say, here on HN with Show HN and such, the forcing factor is "I get no votes or community recognition."
But I don't entirely disagree with you; I don't think things will totally go back. I do think it will settle down a lot compared to now, though, especially where things are a little more niche.
I'm gonna speak on behalf of language models' capability to make online communities better. In recent times, the frustrating forum phenomenon of "learned helplessness" is making me too annoyed to participate. Even in as fantastic a subreddit as /r/LocalLLaMA, there are people posting replies in the vein of
> user1: please help me understand this acronym the post title speaks of
> user2: (explains in detail what it means)
In the "good old days", a low effort, surface level question would result in someone either muting or banning the person to keep the discussion high quality.
There I am, browsing a forum dedicated to LLM enthusiasts, and an unbelievable number of people are asking LMGTFY/RTFM-level questions they could find an answer to even from a free Google Search AI summary, and people are rewarding them by actually responding with effort.
Thanks to models being quite intelligent at answering basics, the ban-hammer should be used more swiftly if people keep polluting forums with low-quality posts. There's no need to feel bad for them not having the time or capabilities to read through years of forum posts to feel qualified to answer.
Maybe the authors of these sloppy posts can even be outright muted or banned with a heavier hand for the sake of quality.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the Show HN projects that are mostly vibecoded don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a shell of their sources.
These posts also usually get all these glowing comments from users who clearly haven't checked the code. It's even worse when authors get busted and claim "Okay, Claude wrote it, but the design is mine" despite clearly not understanding the output themselves.
Unfortunately, that makes high-effort projects less visible. The SNR will probably keep getting worse until slop can be flagged on HN.
If platforms had a subscription model that you had to pay for in order to do more than just read comments, there’d be a lot less LLM content. There would also be a lot less of all content. But maybe that’s the price you pay (literally) to get rid of AI slop.
Oh hey, now that's an idea.
There are maybe 20 or so online handles I know, some of whom I've met in person, who I deeply trust. To the extent that I fully trust anyone they vouch for too.
Even with just one degree, that's a large enough international semi anonymous online community that can provide value to each other through online text based communication. Doesn't need iris scans or credit card checks, just "patio11 on hn Twitter and whatever his domain is is one of the good uns" and a network effect from there.
Already seeing some form of this reputation staking in eg Pi PRs, everyone is treated as clanker slop by default but the entry bar remains quite low to prove and build reputation.
I don't think online communities will stay the same in the face of AI, but I do think whatever comes next will strongly rhyme.
And Listen Notes is removing 4000 to 8000 ai slop podcasts per month - https://www.listennotes.com/podcast-stats/#growth
I'm not sure about that.
While the site has moved to using /showlim, the AI garbage just bypasses that and goes straight to the home page. Almost every project being shown is vibe-coded and looks exactly the same, generated by Claude or the like. This is an excellent test for the site: will it be able to adapt, or do we simply end up with a husk of what HN was, with AI posts driving the majority of engagement, the Overton window, and upvotes/downvotes?
I look forward to this, I think it is an exciting development.
Even if everything online is fake, events are not. So if people say they’re going to show up somewhere, there must eventually be a moment of truth. And then you can form high trust private group chats to keep talking together.
It may be hard for the current generation of chronically online people to adjust to that new reality, but the next generation of kids growing up can get used to this now, and eventually socializing in person will be natural again and the internet is for bots and weirdos LARPing as something they’re not.
The large group will have to endure the manipulations that we've come to know and hate from the internet, but they'll also be better coordinated than the small ones. They'll vote together, buy the same sorts of things, have an outsized influence on the global conversation... They'll define the de facto majority opinion, whether or not they actually are a majority and whether or not it's authentically their opinion.
I don't think that's a good outcome. We need ways to get on the same page en-masse, if only to counteract the harms caused by whichever highest-bidder is currently using an AI horde to control the other group. Besides, we should save them from this abuse for their sake, if not for ours.
The internet is worth fighting for, if we abandon it entirely we'll be forever at a disadvantage against those who would use it to manipulate.
Strict invitation trees? Small signup fees? No SEO incentives?
Since it creates a tree structure, you can wipe out entire armies of bot/spam/otherwise accounts by following the vouches up the tree.
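A sketch of that cascade, assuming each account records who vouched it in (field names hypothetical):

    from collections import defaultdict

    # account -> the member who vouched for it (None for founders)
    vouched_by = {"root": None, "spammer": "root",
                  "bot1": "spammer", "bot2": "spammer", "bot3": "bot1"}

    def ban_subtree(account):
        # Invert the parent pointers, then collect the account and
        # everyone whose vouch chain leads back to it.
        children = defaultdict(list)
        for child, parent in vouched_by.items():
            children[parent].append(child)
        banned, stack = set(), [account]
        while stack:
            a = stack.pop()
            banned.add(a)
            stack.extend(children[a])
        return banned

    print(ban_subtree("spammer"))  # {'spammer', 'bot1', 'bot2', 'bot3'}

Trace one bot back to whoever vouched it in, ban that subtree, and the whole farm goes with it.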
* dead online communities
* highly-invasive, government-mandated "prove you are a human" requirements in order to participate in online communities
The intriguing part is that I think it works against scaling. The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.
Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms.
I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money. Those tokens help show you're not a bot. Keeping that system honest and equitable would be extremely difficult though.
Maybe schools could give kids tokens for attendance. It sounds kind of dumb, but who knows.
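The token mechanics are the easy half; a toy sketch where the store HMAC-signs a spend receipt (the key handling and the equity problems are the genuinely hard part):

    import hashlib, hmac, json, time

    STORE_KEY = b"demo-secret"  # stand-in; real key management is the hard part

    def issue_receipt(amount_usd):
        payload = json.dumps({"amt": amount_usd, "ts": int(time.time())})
        tag = hmac.new(STORE_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return payload, tag

    def verify_receipt(payload, tag):
        want = hmac.new(STORE_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(want, tag)

    receipt = issue_receipt(42.50)
    assert verify_receipt(*receipt)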
I have turned to blunt instruments: blocking individuals on their first cliche banner-wave. It has substantially improved comment quality, but I still suffer from the problem that I can't block entire stories.
Really enjoyed https://hackernews-insight.vercel.app/user-analysis
That spike in users near the end is really something!
This synthetic participation (LLM or otherwise) has exposed weak spots in HN's high-trust environment. The weight we give to the average HN comment is orders of magnitude higher than the average Reddit (& co.) comment, and this relationship probably goes both ways (much higher ROI on ads/propaganda). Due to the low volume and high trust, it seems to be a much easier environment in which to achieve pervasive propaganda/advertising/etc. with a disproportionate impact.
I remember when some new LLM version came out (maybe from Meta?) and I saw something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model". Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed. How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero-effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weak spot too.
All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Clog/Gemini/etc. here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.
It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.
That people trust AI over an organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.
It's frustrating because we're bundling this shitty AI with our product so we're just making more work for ourselves. Then there's the push from leadership to use more AI...
I don't think it's making people antisocial though, people just like easy solutions to their problems. We're giving them what seems like an easy solution. But it's easy for them, not easy for the reviewers.
This is by design btw.
Thank you OP, this puts into words why I no longer look at Show HNs.
We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.
So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.
You already could have code for free and faster by using "git clone" without a company of thieves selling your own output back to you.
They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.
If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?