I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.
Also, honestly, with AI/LLMs now, do we even need human moderators anywhere anymore?
Google is famous for having almost solely automated support, and it absolutely sucks at doing almost anything. AI-only moderation would go the same way.
The comments above you are suggesting that global guidelines are unnecessary. Instead, they suggest you don't need moderation at all, since LLMs now give us the technology to filter out the stuff individual users don't want to see based on their own personal policies. I am sure you can come up with reasons to dispute that, but "you need moderators to do the thing you say is no longer necessary" doesn't add to the discussion.
It makes a great propaganda machine though, given humans have a tendency to measure their own opinions against social cues.
And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful never to use any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
> Wikipedia of all places already employed IP address matching to link sockpuppet accounts
That’s… well, that’s just not how TCP/IP works. Your phone number has nothing to do with your device’s IP…
My boss uses Reddit some. I'm banned. At the shop, we use the same IP address (and we do not use ipv6 there).
I tried to log in with a ~10-year-old account that I'd never commented with. A perfect Beetlejuicing moment had arrived and I just wanted to play the game with a short, snarky comment.
It logged in fine, and then: Insta-ban, just like that. (Maybe I should have used a new browser on a new network that I've never used before, but whatever -- nothing of value was lost here.)
Meanwhile, the boss man's access continued unimpeded; this suggests that it is a rather targeted contagion.
And it seems to follow the systems, not the networks.
(If anyone wants to get banned, just let me know. I seem to have a well-poisoned system to play with.)
Edit: to be clear, I'm more concerned about how Russia was basically banned from the site while worldnews itself seems like the primary fountain of Western astroturfing on the internet. No matter your opinion of Putin, that is extremely unhealthy for productive discourse. I don't care about American domestic politics.
Reddit is filled with calls to violence, and I would say it's gotten quite a bit worse since. What's changed is that it all comes from one side now.
When you curate the echo chamber, the calls start coming from inside the house.
A short version of this is, if you let a nazi come to your bar, you have a nazi bar.
When you claim that calls for violence are not freedom of speech, it's a slippery slope that leads you to absurdities like speech that could "lead" to calls of violence are not freedom of speech, or that secret codes that could be interpreted as speech that would lead to calls to violence are not freedom of speech, or that violent-sounding slang that is eventually recognized as being encoded speech that would lead to calls of violence isn't freedom of speech, or that people who own bars who host people who use violent-sounding slang that is related to secret codes for speech that could lead to calls for violence are nazis.
And since nazis deserve to be violently suppressed...
Even HN is only quasi-free speech; there are rules that will get one censored.
If you love freedom, there are mailing lists and other platforms, but they aren't as high on dopamine and the audience gets a little bit more sketchy.
Somehow we just gave business owners more freedoms than we gave everyone else....
No, it doesn't. The concept of "free speech" isn't limited to prior restraint; you're mistaking it for the dominant precedent in judicial interpretations of the 1st Amendment of the US constitution.
> It doesn't mean your fellow citizens need to stand there and listen to your shit,
Nobody asked you, or claimed this.
> nor does it mean you are entitled to any sort of platform or megaphone.
You should look up common carrier provisions. If we had to depend on your interpretation of law or morality, they'd be able to shut off your electricity for speech violations.
> It means you can scream on the side of the road into the ether and you won't be arrested for it.
If that's all it meant, it would be dumb and useless. What's more, it doesn't mean that, you can be arrested for screaming on the side of the road.
You're wrong in every way you could be wrong.
IMHO Reddit would be better if it had AI moderators that strictly follow a sub's policies. Users could read the policies upfront before deciding whether to join. New subs could start with some neutral default policy, and users could then propose changes to the policy and democratically vote on those changes.
Which, in fact, would open up the same rat race with determining which accounts are real and so forth.
Not disagreeing with you, just circling around this same problem. Feels like the world still isn't ready yet.
I was on a subreddit for a while that voted on rules and had a rotating dictator to facilitate them. It worked decently well, although it never got to the point where the sub was brigaded. This was also pre-LLM, so moderation was still a big time sink, and the sub eventually fizzled out.
It was just a copy of Reddit. How useful?
Most places can hide posts and block users at the user level, so why not select which mods can do that for you?
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
That which would make a vote valid; can (and will) be gamed.
Who said the election needs to take place on the internet?
A paper ballot-style election, while not perfect either, works well enough in practice.
In this setup, having users elect the moderator leads to cases where a small group creates its own special-interest sub and then some trolls challenge the moderator.
There may be some oversight on the large subforums, but not all. The vast majority of subforums, however, are more targeted and smaller to begin with.
For new sites, this meant that the bulk of moderation was done by employees, followed by employee-appointed temporary moderators. This dramatically reduced abuse, but also reduced the explosion of new sub-communities that sites like Reddit thrived on.
Does a subforum offer the same? Once the mod is elected, are you going to sit down with him each day to make sure he is doing the job to your wishes and expectations? I say (ish) in government because it often doesn't even work there, even where people have heavily invested life interests, with a lot (maybe even the vast majority!) of people never getting involved in democracy. A subforum? Who cares?
If there were to be elections, it is unlikely they could be anything other than authoritarian, with the chosen one becoming the ultimate power.
I am a big proponent of (direct) democracy in general.
You'd have to weight votes by some kind of participation metric to compensate for how little authentication of voters there is.
Are you sure? My understanding is that accounts were only allowed to create two communities.
That limit wouldn't stop you creating more communities with more accounts anyway.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for sake of promoting something or farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
This happens now on Onlyfans too. Content creators hire agencies which in the best case outsource chatting to "customers" to armies of cheap labour in Asia, and the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that a top HN poster within the next year is outed as using AI for posting on their behalf.
Perhaps not the worst thing in the world?
Bots get so good that they become indistinguishable from humans. If that’s true, then it shouldn't actually matter whether your community is all bots. But it does matter, because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in person.
Human simulacra will one day cause a repeat of this issue. Then we’ll have a whole Blade Runner 2049 debate about what exactly authenticity is.
People will prefer the bots that give them head pats and tell them they're so smart and that they love them
Especially considering that the bigger stop-gap seems to be what we already have:
In Asia (especially Japan) it's host(ess) clubs.
Globally for friends it's influencers exploiting loneliness.
Those are things I think have to go for people to embrace offline socialization or use their online time better.
Definitely not. “Terminally online” is as deleterious as it sounds.
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
If it were YouTube, "YouTuber" is a start, but you could also be a "YouTube science communicator" or something
But what do you call their output?
What do you call an illustrator's output? A photographer? What about when all of that shows up on your feed collectively?
Content is a gross word.
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
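To make the "persistent pseudonymous identifiers" idea concrete, here is a minimal sketch. Everything in it (the issuer, its secret, the HMAC construction, the names) is my own assumption for illustration, not the scheme from the linked post: an identity provider derives a stable per-service pseudonym, so a ban on one site sticks, while IDs on different sites can't be correlated.

```python
import hmac, hashlib

# Hypothetical issuer the user has proven a real-world identity to.
# It never reveals the identity; it only derives a stable per-service
# pseudonym, so the same person always maps to the same ID on a given
# site, but IDs cannot be linked across sites.
ISSUER_SECRET = b"issuer-private-key"  # placeholder secret

def pseudonym(identity: str, service: str) -> str:
    """Stable pseudonymous identifier for one (person, service) pair."""
    return hmac.new(ISSUER_SECRET, f"{identity}|{service}".encode(),
                    hashlib.sha256).hexdigest()[:16]

# Same person, same service -> same ID, so a ban is persistent:
assert pseudonym("alice", "example-forum") == pseudonym("alice", "example-forum")
# Same person, different services -> unlinkable IDs:
assert pseudonym("alice", "example-forum") != pseudonym("alice", "other-site")
```

A real deployment would use blinded or zero-knowledge credentials so even the issuer can't see which services a person uses; the HMAC here just shows the shape of the idea.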
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less stakes. I imagine that non native speakers will take their posts and go to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews / comments from people with credit scores over 650; they have less incentive to be astroturfing.
But yes, I think your conclusion is correct. This is the only way.
How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
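A toy sketch of that destructive-check property (the class and threshold names are mine, purely illustrative): the issuer stores only hashes of unspent tokens, and the only way to learn whether a token is valid is to spend it, so tokens can't be probed and then reused.

```python
import secrets, hashlib

class TokenIssuer:
    """Toy vending-machine issuer: mints anonymous one-time tokens and
    only answers 'is this valid?' by consuming the token."""
    def __init__(self):
        self._unspent = set()  # stores hashes, never the tokens themselves

    def mint(self) -> str:
        token = secrets.token_hex(16)
        self._unspent.add(hashlib.sha256(token.encode()).hexdigest())
        return token

    def redeem(self, token: str) -> bool:
        """Destructive check: a valid token is spent by the act of
        checking it, so it cannot be tested non-destructively."""
        h = hashlib.sha256(token.encode()).hexdigest()
        if h in self._unspent:
            self._unspent.remove(h)
            return True
        return False

issuer = TokenIssuer()
t = issuer.mint()
assert issuer.redeem(t) is True    # first redemption succeeds
assert issuer.redeem(t) is False   # probing it again fails
```

Because the issuer keeps only hashes and the token itself is random, a redeemed token reveals nothing about who bought it at the machine.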
"Am I making a post which is either funny, informative, or interesting on any level?
I hate how Reddit mods ban any post they don't like as 'low effort / shit / spam' when the standard is completely vague.
Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).
Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.
You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.
This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
These are one of the main culprits of unwanted emails... and a toll system would make them all the more valuable for the even worse actors to take advantage of.
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
Also, for me the problem is not the anonymity itself, but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
You can barely comment before you are rate limited.
You can’t upvote until you’ve been around a pretty long time.
New accounts are given a green badge of dishonor that makes users scrutinize their comments more.
I’m not saying these are bad things but they’re probably too restrictive for a social media network that’s just meant to be a good fun time.
You get the right to downvote, and if I promote my totally-not-a-scam product on HN, people will check my user account and see: oh wow, over 9000 karma? Gotta be trustworthy. When in truth it's just been karma farming.
HN does limit some of it, but it's not a panacea.
How do you do it?
And I'm trying to limit myself from saying unwanted things like criticizing ** or saying something nice about **. (Self censoring to avoid downvotes).
Maybe I should be more active.
Which would be totally fine with me TBH.
Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!
Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.
If that is true, you are saying far too much.
I got encouraged by another HN poster a few days ago, let me know if you have any suggestions.
I’m always open to criticism.
I would suggest you explain what it's about in one sentence, just like you explain in your HN profile. The About-page says not so much. You can add some explanation there, or even just one sentence at the top of the homepage (or other pages).
> Failed sending verification e-mail to XXX@XXmail.XXX, please contact administrator on stonky@stonkys.com
This means that only sites which verify identity will have any value in the future. And by verified, that means against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, users, and hacking of PII are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where e-commerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
Simply put, money is worth too much; at some point someone will want access to this human audience and offer too much to be resisted.
>It should be noted that falsifying ID is a crime
Lol, no one gives a shit on the internet. People will use stolen IDs to get accounts. If the network is lucrative enough, governments will provide fake IDs to spread propaganda.
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content, anyway. Listicles and affiliate marketing BS and SEO optimizations and making a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed from before AI. With AI I actually get less of this crap - either skip it or condense it.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted, get so popular on the front page, and people still debate with outdated arguments in them. People come in and fight imagined demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha not perfect means doa" type messages. That makes me want to participate less...
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words used, matters more than the words used themselves.
There are a myriad reasons why humans are practically poorer for these tools.
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple comments, etc... But if you want to build a troll factory, sure... show us the cash?
Twitter is full of blue checks that are just bots and automated reply guys.
I'm treating now all these bots as a stressor on our defense systems, and we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need some good Zero Knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
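The ban-escalation rule in that last bullet can be sketched as a tiny invite graph (the class, the threshold of two, and the method names are all hypothetical, just to show the mechanic): every account records its inviter, and once too many of your invitees are banned, you lose the ability to invite anyone else.

```python
class InviteTree:
    """Toy invite graph: every user has a known inviter, and repeated
    bans among your invitees cost you your invite privileges."""
    MAX_BANNED_INVITEES = 2  # hypothetical threshold

    def __init__(self, root: str):
        self.inviter = {root: None}
        self.banned = set()

    def invite(self, inviter: str, newcomer: str) -> bool:
        banned_children = sum(1 for user, inv in self.inviter.items()
                              if inv == inviter and user in self.banned)
        if inviter in self.banned or banned_children >= self.MAX_BANNED_INVITEES:
            return False  # inviter has lost invite rights
        self.inviter[newcomer] = inviter
        return True

    def ban(self, user: str):
        self.banned.add(user)

tree = InviteTree("root")
tree.invite("root", "alt1")
tree.invite("root", "alt2")
tree.ban("alt1")
tree.ban("alt2")
assert tree.invite("root", "alt3") is False  # bans propagate up the tree
```

So alts are possible, but churning through banned alts quickly exhausts the inviter's own standing, which is exactly why ban evasion doesn't scale here.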
Creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillion tokens per day into our heads, while we put our own tokens into their logs. An experience flywheel or extended cognition wheel of planetary size. LLMs can reflect and detect which of their responses compound better in downstream activities and derive RLHF-RLVR signalling from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement speciality forum.
They will try and OpenAI will sell favorable placement to manufacturers.
I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.
The internet archive is my safe haven these days, i can go back and remember the old internet.
What is HN doing differently then?
Issue is we are seeing a ton of AI stuff getting posted so it's a losing battle.
There was a lot in the new Digg that I was concerned about, or at least not optimistic about, but come on, are we even going to try anymore?
Two months, according to The Verge.
https://www.theverge.com/tech/894803/digg-beta-shutdown-layo...
This is particularly embarrassing since from what I recall they were all in on AI with the new website, so to shut it down so fast because of it…
Now it's gone, again. Without a heads-up or a way to get a backup out of it, it seems. Can't say I am a fan of that.
They could at least put it in read-only mode for a short time and allow downloading of extant community content prior to a scheduled "reset day".
This smacks of flailing leadership and zero respect for their target user demographic.
The only sustained business I'm aware of is Hodinkee.
From what I can tell Watchville was abandoned a few years ago.
Their plan is to make the internet what it was 22 years ago.
I'm sure it's impossible, but what if it's not?
Example: https://0x0.st/8RmU.png
I use mander.xyz, it's science focused, but they also have a policy of only de-federating instances that host CSAM.
Their /instances page also only shows a single blocked instance, whereas something like programming.dev shows lots of questionable instances blocked.
If you're telling me it's _worse_ than reddit in this regard, I can only imagine how terrible it is.
Next time try doing it in a way that you control it.
My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".
I can see why the team got overwhelmed. I wouldn't want to have to deal with that.
Digg.com Is Back - https://news.ycombinator.com/item?id=46671181 - Jan 2026 (10 comments)
Digg.com relaunch public beta is live - https://news.ycombinator.com/item?id=46623390 - Jan 2026 (18 comments)
Digg.com (Relaunch) - https://news.ycombinator.com/item?id=46524806 - Jan 2026 (3 comments)
Digg.com is back - https://news.ycombinator.com/item?id=44963430 - Aug 2025 (204 comments)
Digg is trying to come back from the dead with a reboot - https://news.ycombinator.com/item?id=43812384 - April 2025 (0 comments)
(context so people don't have to click links)
Damn, that didn't take long at all...
There are subreddits within Reddit such as https://www.reddit.com/r/neutralnews/ that have strict rules around sourcing, etc. However, I think that’s not what most users want, and may not be quite what you’re looking for either, apologies.
In the same way people want to be fit.
There are 3 horsemen of Internet forums; one of them is topics with a low barrier to entry.
At that point anyone can speak up, and their opinion takes up as much screen real estate as a truly informed take (and often less reading time).
By putting effort barriers in place, it forces a fitness test that most users (and bots) fail.
Another subreddit which has strong rules is r/badeconomics. I didn’t know about neutralnews, so thank you for giving me another example to add to the list.
I think communities like Reddit and Digg grow to a certain point and don’t grow anymore because everybody else absolutely hates what those communities have become. See the fight years ago where Digg thought it had to outgrow MrBabyMan. Problem is platforms don’t usually win those fights.
Sure, today’s redditors love sharing stupid image memes. For each of them there are 20 people who wouldn’t touch Reddit with a 10-foot pole.
The point being made is that communities maintain high signal to noise ratios by adding effort filters.
Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked.
- After lock time, users who posted the link get "paid" a % of the $ collected from comments/upvotes. Comments that are upvoted also earn $ proportionally to the upvotes.
Hashcash was conceived to fight automated email spam. Participating in a discussion must cost something; that's the only way bots and spam will be even partially stopped. Or, if they start optimizing to get "the most votes", then so be it: their content will increase in quality.
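Hashcash's core trick is easy to sketch (the difficulty of 12 bits and the function names below are my own choices for a fast demo; real Hashcash stamps use around 20 leading zero bits and carry a date and recipient): minting a stamp requires brute-force hashing, while checking one takes a single hash.

```python
import hashlib
from itertools import count

DIFFICULTY_BITS = 12  # assumption: low difficulty so the demo runs fast

def mint_stamp(resource: str) -> str:
    """Search for a counter whose SHA-256 over the resource has the
    required number of leading zero bits. Costs CPU work to find."""
    shift = 256 - DIFFICULTY_BITS
    for n in count():
        stamp = f"{resource}:{n}"
        digest = hashlib.sha256(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> shift == 0:
            return stamp

def check_stamp(stamp: str, resource: str) -> bool:
    """Verification is one hash: cheap for the forum, expensive to forge."""
    if not stamp.startswith(resource + ":"):
        return False
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

s = mint_stamp("comment-thread-42")
assert check_stamp(s, "comment-thread-42")       # paid-for post accepted
assert not check_stamp(s, "another-thread")      # stamps can't be reused elsewhere
```

Binding the stamp to the resource (here, a thread ID) is what stops one expensive stamp from being replayed across every discussion on the site.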
If this were to exist today, I know I would be incredibly critical of it.
https://aaltodoc.aalto.fi/server/api/core/bitstreams/4176474...
Every election, I see internet-connected gym machines' leaderboards spammed with right-wing messages because some people don't have to work and just spin all day.
The original Digg excepted, Kevin Rose's attention span is extremely limited. He will give something ~3-4 months of attention before (apparently) getting bored and wanting to move on to something else.
Up until that point, he will be an unrelenting hype man of whatever his attention is lasered on at that moment.
Then the hype posts start to drift. They show up once every few days, then once a week, then stop entirely. Any criticism or skepticism is considered a buzz kill in the cloud of good vibes only.
A few months later, a dramatic explainer post arrives (underestimating the cold start problem? Really??), outlining why the idea didn't work and why the next one will be better, for sure, for real.
This (AI generated) note from the current CEO paints an optimistic picture, but the most likely outcome will be that Digg simply doesn't launch. It's sustained on the nostalgic vapors of the old guard, not renewed by a replenished sense of purpose, or connection.
I'd say I'd love to be proven wrong, but I personally question the utility of a Web 2.0 social network phoenixing itself. We have endured a decade+ of originality being buffed out of web products, most now resembling variations of Bootstrap and shadcn in service of dev convenience and getting rich quicker.
Surely in the age of vibe coding, we can afford to take creative risks again, and think of something new.
Moonbirds
Digg
Too comfortable with money in the bank to give full attention to a new venture.
I'm done falling for the Kevin Rose hype train. Long time fan but this is just pathetic.
Am I completely off base or did they use AI to write the post complaining about AI?
Digg isn't just here again. It's gone again.
The LLM style is like nails down a blackboard, are people blind to it or do they just not even read the stuff they're posting?
I kind of expected this. The way some of these people work, if the site isn't an instant unicorn, it's trash. But if the goal is a good community, that is something that takes time to build and should grow slowly. The incentives are all backward.
It was fine: people talked about work, personal stuff, and travel, until one person posted about their disappointment that their state was limiting various services or rights for gay people. For them this meant their rights were in question, and they were understandably upset.
Immediately, some folks cried politics and said they shouldn’t post about that sort of thing.
To the user posting it, it was about their life…
I don’t think “no politics” rules really make much sense. For some people it’s more than politics, and IMO the mere fact that a topic is touched by politicians or government shouldn’t make it disallowed.
The vast majority of people get on a forum to escape their life, not to see even more, or worse, content about their daily lives.
You're right, there needs to be some outlet, but when people propose this it's because they are sick and tired of politics, and the injection of politics into everything is not helping those causes; it just makes things worse.
Tons of people aren't political creatures and want nothing to do with politicians. This notion that more politics will fix things isn't borne out by Reddit, X, the US Congress, Brexit, etc. It's too easy to divide and manipulate people.
No it wouldn't be. And if your definition of "politics" includes "literally every time a thing happens" then your definition is so broad as to be useless.
When people say that they want politics banned, they are talking about the extremely controversial arguments that are almost completely unrelated to whatever the community is about. I.e., if you run a group about cheese making, and someone comes in and starts arguing about whether an ICE shooting on the other side of the country was justified or not, that is... off topic. And everyone with a brain can understand that.
It really isn't that hard to figure out which topics are related to cheese making and which have almost nothing to do with it, even if someone could make a bad-faith argument that it is related (e.g., your response would probably go something like "Well, what if someone knows a cheese maker who is here illegally? That's why ICE enforcement on the other side of the country is relevant!". You could say that, but we would all know that you are being bad faith or have some sort of issue with determining what words mean to regular people).
Partial credit in this example could go to political issues that are very obviously and directly related to cheese making. A new tax on cheese that goes into effect in your local town is very directly related to the group topic. Stuff like that might be OK.
And your response to this example would go something like "Oh, so are you saying that politics should be allowed!?!? How do you tell the difference between a cheese tax and an ICE shooting on the other side of the country? Hypocrite!"
And the answer to that is that we can use our brain. We all know that a cheese tax is more related to the local cheese making group than national politics. And we don't have to argue with clearly bad faith arguments that pretend otherwise.
To summarize, when people say that they want to ban politics, what they actually mean is that they want to ban completely off-topic controversial issues that others are trying to shoehorn into a group that isn't related to that issue.
And people are saying that it is OK to compartmentalize things. Every group in the world doesn't have to talk about your pet issue. The cheese making group can just be mostly about cheese making and they don't have to argue every day about national immigration policies.
Basically incentivizing those who feel strongly about things to just pay up to talk about them in an exclusive area, which also keeps the site ad-free. Been apparently working for 25 years.
You thinking that astroturfing only happens for US politics is dangerously naive.
So people would go through one hurdle in life, to get this id, and reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
2/ Spammer can hire real people to farm accounts
I think this idea might work if we
- create reputation graph, where valuable contributors vote for others and spread reputation
- users can fine-tune their reputation graph, so instead of "one for all", user can have his personal customized graph (pick 30 authorities and we will rebuild graph from there)
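The two bullets above amount to something like personalized PageRank over the vote graph: reputation flows along upvote edges, and each user's chosen ~30 authorities act as the teleport set. A minimal sketch of that idea, with all function and variable names illustrative rather than from any real system:

```python
# Sketch: personalized reputation propagation over an upvote graph.
# Each user picks their own seed "authorities"; trust flows along
# upvote edges and, damped, teleports back to the seeds each round
# (the personalized-PageRank pattern). Names here are illustrative.

def personal_reputation(votes, seeds, damping=0.85, rounds=20):
    """votes: dict mapping voter -> set of users they upvoted.
    seeds: the authorities this particular user chose to trust."""
    users = set(votes) | set(seeds)
    users |= {u for voted in votes.values() for u in voted}
    # All initial trust sits on the user's chosen seeds.
    rank = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in users}
    for _ in range(rounds):
        nxt = {u: 0.0 for u in users}
        for voter, voted in votes.items():
            if not voted:
                continue
            share = rank[voter] / len(voted)
            for target in voted:
                nxt[target] += damping * share
        # Undistributed mass teleports back to the seed authorities.
        leak = 1.0 - sum(nxt.values())
        for s in seeds:
            nxt[s] += leak / len(seeds)
        rank = nxt
    return rank
```

Two users who pick different seed authorities get different rankings over the same vote data, which is the "personal customized graph" described above.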
The cost for this service is likely keeping up with ID systems for multiple countries, infra and support.
Potentially, if this is made into a protocol, it can be decentralized, kind of like the SSL system, so each country manages its own rules.
So we need a mechanism that makes this identity verifiable; maybe you get a unique identifier from the identity service, so you can block the account. There is no mechanism to report you to, say, the identity service ("this is a bot"), so you manage your own block list.
The risk here is fingerprinting, your id can be cross referenced across apps. Maybe here is where you implement a zk proof that you're who you say you are.
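One existing way to get the "blockable but not cross-referenceable" property, short of a full zero-knowledge proof, is the pairwise pseudonymous identifier idea from OpenID Connect: the identity service derives a different opaque ID per app with a keyed hash, so each app can block a user persistently but two apps cannot join their user tables on the ID. A minimal sketch, with the key and IDs purely illustrative:

```python
import hashlib
import hmac

# Illustrative secret held only by the identity service.
SERVICE_KEY = b"identity-service-signing-key"

def app_scoped_id(person_id: str, app_id: str) -> str:
    """Derive an opaque, per-app identifier for a verified person.

    Stable within one app (so the app can keep a block list), but
    unlinkable across apps by anyone without the service's key.
    """
    msg = f"{app_id}:{person_id}".encode()
    return hmac.new(SERVICE_KEY, msg, hashlib.sha256).hexdigest()
```

A zero-knowledge proof could push this further: the user proves possession of a valid credential without the identity service even learning which app they are signing into.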
You've identified a problem that unrelated systems also have. Like banks and identity theft. This solution isn't responsible for causing that problem.
"How will the AI be detected? By another AI?"
However a platform likes to. Let the best platform win.
I guess that in an ocean of upvote-based platforms, an island of hand-picked content was a welcome change -- at least for me.
The move (back) to a reddit-like site never made sense to me. Hopefully what comes next has real value to the users.
https://news.ycombinator.com/item?id=39046023
Apparently the reason why their articles were interesting was because... they copied all of their content from DamnInteresting. Once they were called out they stopped, and the quality went downhill.
I'm a bit surprised with Alexis' involvement they didn't anticipate the bot problem. Alexis left reddit several years ago but I'm sure he's still in touch with the folks who run the place. It would've been worth it to talk to them about the threats they currently face and how they deal with them.
I suppose bots could find forums that use the most popular software and still make accounts and spam, but it would be much more obvious and less fruitful for someone to spam deck builders in Vancouver (something I saw often on Digg) on a forum that is focused on aquarium owners in the midwest.
I'm on plenty of niche interest boards built on PHPbb, Xenforo and Discourse. Chronologically ordered discussions, RSS support, no algorithmic "For You" bullshit.
Build it and they will come.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
I know they claim to care about the bot problem, but they appear at absolute best incredibly complacent about it, if not complicit. All those OnlyFans spammers, AI spam bots, etc. are engagement. They are ruining the platform for people, but engagement figures don’t distinguish between fake engagement and real people. The outcome of their current behaviour is for engagement to steadily rise while the value to real people steadily falls. It’s like they want to be the poster child for Dead Internet Theory.
I'd also be really surprised if there wasn't coordination with Reddit employees/execs themselves for big advertisers.
https://www.businessinsider.com/reddit-ceo-platform-most-hum...
We really need some way to "verify as human" in the coming years.
I don't believe there is any practical way to do it.
Sure, there are ways to verify a human linked to a specific account exists in a one-off fashion, but for individual interactions you'll never know that it isn't an LLM reading and posting if they put even a small amount of effort to make it seem humanish.
Moderation was really hard. We didn't have AI posters, but there were persistent posters who were extremely annoying (mostly in their post volume and long-windedness) while still following the rules. I was really trying a hands-off approach with moderation, and it seemed to be working for the most part. It's all moot now though.
I was an avid Slashdot user way back in the day, but the site was basically the same throughout the day, and I wanted faster updates. Digg did this perfectly for a time, but eventually I migrated entirely to Reddit (even before whatever that drama was that caused a big exodus from Digg).
I think Reddit right now is the sweet spot: up to date information, longer-term articles to read, and easy to catch up on things I missed. I was recently pressured to sign up for X (or Twitter or whatever), and I had to turn off all of the notifications since I was constantly spammed with "BREAKING: X RESPONDS TO Y ABOUT Z!!!!"
Right now having Reddit for scrolling and Hackernews for articles+discussion feels like it works for me.
There are decent small communities I'm a part of but the trash feels like it is encroaching.
And the notifications you describe are exactly reddit's notifications? "your comment received 10/20/50/100 upvotes!" "x responds to y about z" "News is trending"
I don’t understand what kind of shenanigans transpired. But it seems there’s more to it than “bots”.
If it truly is bots, maybe a private invite only social network is the way to go.
This 1000x times
> We're not giving up. Digg isn't going away.
Post title is misleading.
Thanks for the fun this past year Digg.
Ironic, they use AI in their shutdown post that blames AI.
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
> We know how frustrating this is, and we hope you'll give us another look once we have something to show, we’ll save your usernames!
I think it's partly human. But ex:
> Network effects aren't just a moat, they're a wall.
isn't a natural sentence.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
Could it be generated? Sure. But there aren't the obvious tells you act like there are.
"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."
It's a mixed metaphor which doesn't make any sense. There are really very few ways in which this can be considered good writing - I guess the grammar is ok even if it is nonsense.
So let's break it down: "underestimated the gravitational pull" - OK, this is nice, I like where it's going, talking about these big competitors sucking in users, but then we have the metaphor extended to breaking point:
Network effects are a moat, but not just a moat, they're a wall (which is really not anything like a moat). So which of these 3 things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moat) and walls (walled gardens).
It's just all a bit nonsensical and the kind of fuzzy prose that seems superficially impressive without actually saying anything meaningful in which LLMs excel. Go try generating an article from just the heads in this article, and see how similarly it reads.
Compare to the canonical example from Cyrano de Bergerac: ''Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsular!'
Also, weren't all "moats" commonly paired with a wall in real life? As in a moat around a castle wall?
In business metaphors no they are used for different things and also when you create a metaphor you should stick with it, that’s what makes this jarring and weird.
I don't care so much about Digg, but the endless "haha, I caught you!" comments annoy me more than the rare actual AI-written content they label.
I have to strongly disagree with you on this. It behooves us (as a species) not to degrade our own manner of speaking and writing simply because of a (possibly temporary) technical anomaly.
In my view, it would be really, really sad to lose expressive punctuation or ways of constructing sentences simply because they're overused by AI.
I, for one, won't be a part of that, and I hope you won't, either.
If they wanted to keep it to a single sentence, they could have used a word like "rather" to act as a separator between moat and wall.
(Where do you think AI picked up its writing habits from?)
I think the HN title needs adjusting
No you can't visit.
The only website which became totally useless for me after the general availability of LLMs is OkCupid. It's indeed dead. The rest are fine.
What am I doing differently compared to everyone else?
I'm regularly using: telegram, whatsapp, wechat, hackernews, lobsters, reddit, opennet.ru, vk.com, pornhub, youtube, odysee, libera.chat, arxiv, gmail, github, gitlab, sourcehut, codeberg, thepiratebay, rutracker, Anna's archive, xda-developers.
facebook and twitter became broken for me, but not because of bots, rather because of the "smart feed" ("the algorithm"), which is hiding all posts of my friends and promotes incendiary garbage.
In other words, I am seeing enshittification full-scale, but not the bots.
YouTube comment sections are botted.
Hmm...
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall.
What does this even mean? How many metaphors can it mix up in one paragraph? Can't they write a blog post the old fashioned way, with feeling? Imagine reading a corporate blog post about being laid off which the founder couldn't even be bothered to write.
Amazing how close to corporate newspeak chatgpt can get (prompt was the headings of this blog post), it has the same sort of blank say-nothing feeling of this blog post: https://chatgpt.com/s/t_69b4890e54ac819193f221351ea900a7
100% that entire page was written by an LLM. So fucking obvious and I’m so tired of reading the same awful writing style with all these corporate spiel rants. If you don’t care enough to write something yourself, just don’t even bother.
i really enjoyed the new digg
Step 1: Copy Reddit
Step 2: ?
Step 3: Profit!
Step 1: speed-run it into the ground while loading it up with the debt of the purchase price and paying yourself management fees.
Step 2: close up shop, write down the loss and reduce tax liability for next year?
[0] https://techcrunch.com/2026/01/14/digg-launches-its-new-redd...
If they relaunch, I hope they develop something integrated with the fediverse. I believe the time to build walled gardens is over; plugging into the fediverse might give them a running start to build something together with the wider fediverse community, maybe something easier to use for non-techies and well moderated.
We will see I guess…
Dead internet theory confirmed, Digg the latest victim
What's an actual viable solution to this kind of thing?
CAPTCHAs aren't it. Maybe micro-fees to actually post things would discourage bot posting? I really don't know.
Seems like it's just dead internet all over the place these days.
This. So much This.
And I will continue to die on the hill that Reddit only survived/became "successful" because of the legendary Digg slip-up and exodus. Alexis Ohanian still doesn't seem to have any clue that it was right-place-right-time, and Kevin Rose seems to have not learned much either. Can we stop giving either any more credibility? As with any social site, it's the user base/community that helps pull through the darkness. And no one was really asking for this.
Let sleeping dogs lie.
I wasn't a digg user, but this was done to combat 'voting rings' (bots), and the reddit migration was memed partially because it was/is far more open to manipulation. So at least their principles have been somewhat consistent.