*You need an authoritative source to say "This person is real"*

Does that even accomplish much? It may cut down on mass fake account creation. But real people can then create an authenticated account and use an LLM to post as an authenticated real person.

reply
Yeah, that's a problem, you're right. There are some ways to mitigate it, but they introduce their own issues. Say you give someone only one ID for their lifetime; they start to spam AI crap, so you ban their ID. Sounds OK, except who is available to police all 8 billion IDs and determine if they're spamming? Who polices the police? What if these IDs become critical for conducting commerce and banning someone is massively detrimental to their finances? Etc. These problems aren't necessarily unsolvable, but they are super difficult.
reply
If there's only 1 or just a handful of verifiers, then a human can at most burn through a few of those credentials before they run out. The risk is, of course, getting hold of someone else's credential, but that isn't as big an issue, especially for smaller online communities.
reply
you underestimate the human population in certain countries, literally
reply
I just don't see a world where a small community ends up having to deal with a dedicated set of potentially spoofed identities. There are already tools like slow-downs and post limits for new members that can protect against this. HN is the biggest community I'm in by an order of magnitude and it's the only community I know that can't just use a slow mode type mechanic to halt this kind of attack.
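The slow-mode idea above is simple to implement. A minimal sketch, assuming a hypothetical community where we know when each member joined and when they last posted (all thresholds are made up for illustration):

```python
import time

# Hypothetical slow-mode check: new members get a much longer cooldown
# between posts than established ones, blunting mass-posting attacks.
NEW_MEMBER_AGE = 7 * 24 * 3600   # under a week old counts as "new"
NEW_COOLDOWN = 600               # 10 minutes between posts for new members
NORMAL_COOLDOWN = 30             # 30 seconds for everyone else

def may_post(joined_at, last_post_at, now=None):
    """Return True if the member is allowed to post right now.

    joined_at / last_post_at are Unix timestamps; last_post_at may be
    None if the member has never posted.
    """
    now = time.time() if now is None else now
    is_new = now - joined_at < NEW_MEMBER_AGE
    cooldown = NEW_COOLDOWN if is_new else NORMAL_COOLDOWN
    return last_post_at is None or now - last_post_at >= cooldown
```

Even this crude version forces a spammer with N fresh identities to wait out N long cooldowns, which is exactly the kind of friction small communities can afford.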
reply
Have you considered sock puppets? They're not out of the question to handle with human mods, but detecting them automatically works pretty badly if someone is supplying credentials to each one, and it sometimes takes months or years to notice that new user Y is banned user X.
reply
I think sockpuppets are only useful in a community with non-text signals like upvotes and downvotes or likes. These kinds of signals are not necessary and often plain corrosive to small communities. In a larger community they're a great feedback mechanism, but large communities are fundamentally different spaces than small ones and need a fundamentally different moderation approach IMO.
reply
I think sock puppets that reply with text are a lot more persuasive than just "likes".

However, I might not be typical in that I don't look at vote scores very often.

reply
I've seen them used to dogpile in arguments (harder to do since you need to keep writing styles distinct), game votes in forum games or quests, etc. And of course you don't need to use multiple at once if you just switch to a sock puppet every time you're suspended or banned.
reply
> But real people can then create an authenticated account and use an LLM to post as an authenticated real person.

They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web.
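The "no infinite accounts" property just means keying accounts to the verified credential instead of an email address. A minimal sketch, assuming a hypothetical registry that stores only a hash of the credential:

```python
import hashlib

# Hypothetical registry mapping credential hash -> account id.
# We store only a hash, never the raw credential itself.
_accounts = {}

def register(credential: str, account_id: str) -> bool:
    """Bind a new account to a verified credential; refuse duplicates."""
    key = hashlib.sha256(credential.encode()).hexdigest()
    if key in _accounts:
        return False  # this identity already has an account here
    _accounts[key] = account_id
    return True
```

A real system would need revocation and recovery paths, which is exactly where the "who polices the police" problems upthread come in.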

reply
There is actually a different problem with this: Suppose there is a major vulnerability in some popular device. 50 million people get compromised; the attacker can now impersonate any of them at will. They go around and create 50 million accounts on various services, or take over the user's existing account on that service.

What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.

So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?

reply
Yeah that's a big problem. Pretty sure you can see it in real life where lots of old dead accounts with weak passwords on facebook or twitter eventually get hacked. It must be pretty weird to see your dead grampa suddenly start trying to get people to buy some weird scammy crypto.

I guess you could have an eyeball scanner at your computer that only sends out a binary "yes, this person is human" signal to the system every time they log in. That sounds expensive, hackable, and just janky though.

reply
Maybe people would take Internet security seriously and hold companies accountable for data breaches if there were this sort of consequence for it.
reply
Crypto could be a part of it. Like, you need to sign with an address that has held some non-trivial amount for some minimum amount of time. As a component of such a system it could cut down on mass or low-effort impersonation.
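The eligibility rule here is checkable from an address's balance history alone. A rough sketch, assuming hypothetical data where each entry's balance holds until the next entry (thresholds and units are illustrative, and the actual signature verification is out of scope):

```python
# Hypothetical check: an address qualifies if its balance stayed at or
# above a threshold continuously for some minimum duration.
MIN_BALANCE = 0.1                  # some non-trivial amount (illustrative)
MIN_HOLD_SECONDS = 90 * 24 * 3600  # ~90 days

def qualifies(balance_history, now):
    """balance_history: list of (timestamp, balance), sorted by time,
    each balance holding until the next entry. Returns True if the
    balance has been >= MIN_BALANCE continuously for MIN_HOLD_SECONDS.
    """
    held_since = None
    for ts, balance in balance_history:
        if balance >= MIN_BALANCE:
            if held_since is None:
                held_since = ts  # start (or continue) the holding streak
        else:
            held_since = None    # dipping below resets the streak
    return held_since is not None and now - held_since >= MIN_HOLD_SECONDS
```

The hold-time requirement is what makes renting awkward: you can't spin up a fresh qualifying address on demand.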
reply
it can also be "rented", btw. Rented by LLMs? interesting
reply
Money is great at thwarting spam/Sybil attacks. You don't have to raise the price very much to make them fail.

Honestly I think "this person is real" is the wrong goal. You'll never accomplish it without a centralized state or some biometric monstrosity like that thing Sam Altman created.

Just settle for stopping spam.

reply
Yeah, I think "pay to enter" or maybe "pay to be able to post" is ultimately going to be the solution. Then we'll have the paid "gated" social networks, filled with mostly humans, and the free ones will all be bot-swarmed wastelands.
reply