upvote
I'm not sure that it would be too hard technically... basically auth + a social network. Facebook auth without the rest of Facebook, plus attestation.

I.e.: you use this network as your auth provider, and you get the user's real name, handle, and network ID, as well as the IDs (only IDs, no extra info) of their first- through third-level connections.

The user is incentivized to connect (only) people they know in person, and this forms a layer of trust. Downstream reports can break a branch or have a network effect upstream. By connecting an account to another account, you attest that "this is a real person whom I have met in real life." Using a bot for anything associated with the account is forbidden, with the exception of explicit API access to downstream services defined by those services.
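A minimal sketch of what that auth payload and attestation graph might look like. All field and function names here are invented for illustration; the real design would obviously need signatures, revocation records, and more.

```python
# Hypothetical sketch of the auth response and attestation graph described
# above. Names and fields are assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class AuthResponse:
    network_id: str          # stable ID within this network
    real_name: str
    handle: str
    # IDs only (no profile data) of 1st- through 3rd-degree connections
    connections_by_degree: dict[int, list[str]] = field(default_factory=dict)

@dataclass
class Attestation:
    attester_id: str         # "this is a real person I have met in real life"
    subject_id: str
    revoked: bool = False    # a downstream report can break this branch

def trusted(subject_id: str, attestations: list[Attestation]) -> bool:
    """A subject stays trusted while at least one attestation is unrevoked."""
    edges = [a for a in attestations if a.subject_id == subject_id]
    return bool(edges) and any(not a.revoked for a in edges)
```

The point of the graph shape is that revoking one attestation edge (a "branch") can drop trust for a subject without touching anyone else's standing.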

I think it could work, but you'd have to charge a modest, but not overbearing fee to use the auth provider... say $100/site/year for an app to use this for user authentication.

reply
I don't think the main challenge is building this system, the main challenge is getting enough people using it to make it worthwhile.

Personally, I think it should be a government-provided service, not something with a sign-up fee. There's actually no point at all in building this if people have to pay to use it, because they won't.

reply
Which government? Will they interoperate with foreign governments?

My point was to create something outside any specific government, with very limited information... and that would require a fee or some kind of funding.

I don't think I'd trust the US/China or other bodies to trust each other for such a use case.

reply
> Will they interoperate with foreign governments?

Ideally, yes

But you're right, this isn't likely to happen in real life and I'm just being wishful. Instead we're going to get the much shittier capitalist version of this where every company and government spies on us and we have no expectation of privacy online at all

reply
I agree it's a very, very interesting problem. Maybe one of the biggest problems of the coming decade.

I suspect it will be a long process: first there will be governments that force people to use ID, but that will be abused, hacked, and will considerably restrict freedom of speech, so after that phase people will start to create better IDs.

The problem is really pretty simple: you need an authoritative source to say "this person is real", and a way for that source to actually verify you're a person, but that source can be corrupted and hacked. Some people will say "Crypto!" but money != people, so I don't see how that works. Perhaps the creation of some neutral non-government, non-profit entity is the way, but I can see lots of problems there too, and it will probably cost money to verify someone is real. Where does that come from?

Anyway, good luck on your work!

reply
*You need an authoritative source to say "This person is real"*

Does that even accomplish much? It may cut down on mass fake-account creation. But real people can then create authenticated accounts and use an LLM to post as an authenticated real person.

reply
Yeah, that's a problem, you're right. There are some ways to mitigate it, but they introduce their own issues. Like, say you give someone only one ID for their lifetime; they start to spam AI crap, so you ban their ID. Sounds OK, except who is available to police all 8 billion IDs and determine if they're spamming? Who polices the police? What if these IDs become critical for conducting commerce and banning someone is massively detrimental to their finances? Etc. These problems aren't necessarily unsolvable, but they are super difficult.
reply
If there are only one or a handful of verifiers, then a human can at most go through a few of those credentials before they run out. The risk is of course getting someone else's credential, but that isn't as big an issue, especially for smaller online communities.
reply
You underestimate the human population in certain countries, literally.
reply
I just don't see a world where a small community ends up having to deal with a dedicated set of potentially spoofed identities. There are already tools like slow-downs and post limits for new members that can protect against this. HN is the biggest community I'm in by an order of magnitude and it's the only community I know that can't just use a slow mode type mechanic to halt this kind of attack.
reply
Have you considered sock puppets? It's not out of the question to handle them with human mods, but automatic detection does pretty badly if someone is supplying credentials to each one, and sometimes it takes months or years to notice that new user Y is banned user X.
reply
I think sockpuppets are only useful in a community with non-text signals like upvotes and downvotes or likes. These kinds of signals are not necessary and often plain corrosive to small communities. In a larger community they're a great feedback mechanism, but large communities are fundamentally different spaces than small ones and need a fundamentally different moderation approach IMO.
reply
I think sock puppets that reply with text are a lot more persuasive than just "likes".

However, I might not be typical in that I don't look at vote scores very often.

reply
I've seen them used to dogpile in arguments (harder to do since you need to keep writing styles distinct), game votes in forum games or quests, etc. And of course you don't need to use multiple at once if you just switch to a sock puppet every time you're suspended or banned.
reply
> But real people can then create authenticated accounts and use an LLM to post as an authenticated real person.

They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web.

reply
There is actually a different problem with this: Suppose there is a major vulnerability in some popular device. 50 million people get compromised; the attacker can now impersonate any of them at will. They go around and create 50 million accounts on various services, or take over the user's existing account on that service.

What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.

So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?

reply
Yeah, that's a big problem. I'm pretty sure you can see it in real life, where lots of old dead accounts with weak passwords on Facebook or Twitter eventually get hacked. It must be pretty weird to see your dead grandpa suddenly start trying to get people to buy some weird scammy crypto.

I guess you could have an eyeball scanner at your computer that only sends out a binary "yes, this person is human" to the system every time they log in. That sounds expensive, hackable, and just janky, though.

reply
Maybe it would result in people taking Internet security seriously and holding companies accountable for data breaches, if there were these sorts of consequences for them.
reply
Crypto could be part of it. Like, you need to sign with an address that has held some non-trivial amount for some minimum amount of time. As a component of such a system, it could cut down on mass or low-effort impersonation.
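The eligibility check could look something like this. The thresholds and the ledger-snapshot interface are invented for illustration; a real version would query the chain and verify the signature itself.

```python
# Sketch of the "address has held a non-trivial balance for a minimum time"
# gate. MIN_BALANCE, MIN_HOLD_DAYS, and the snapshot format are assumptions.
MIN_BALANCE = 0.1        # in whatever unit the chain uses
MIN_HOLD_DAYS = 90

def address_qualifies(history: list[tuple[int, float]], now_day: int) -> bool:
    """history: (day, balance) snapshots, oldest first.

    Qualifies if every snapshot within the last MIN_HOLD_DAYS shows a
    balance of at least MIN_BALANCE."""
    window = [bal for day, bal in history if day >= now_day - MIN_HOLD_DAYS]
    return bool(window) and all(bal >= MIN_BALANCE for bal in window)
```

An address that dipped below the threshold inside the window fails, which is what makes mass, freshly-funded throwaway addresses expensive.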
reply
It can also be "rented", btw. Rented by LLMs? Interesting.
reply
Money is great at thwarting spam/Sybil attacks. You don't have to raise the price very much to make them fail.

Honestly I think "this person is real" is the wrong goal. You'll never accomplish it without a centralized state or some biometric monstrosity like that thing Sam Altman created.

Just settle for stopping spam.
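The economics are easy to sketch. These numbers are made up purely for illustration, but the shape of the argument holds: spam only works because the marginal message is nearly free.

```python
# Back-of-envelope: a tiny per-post fee flips spam from profitable to not.
# All numbers below are invented for illustration.
posts = 1_000_000
fee_per_post = 0.01           # dollars charged per post
revenue_per_spam = 0.0002     # assumed expected spammer revenue per message

cost = posts * fee_per_post
expected_revenue = posts * revenue_per_spam
profitable = expected_revenue > cost
```

Even a one-cent fee makes a million-message campaign cost orders of magnitude more than it earns, while a legitimate user posting a few times a day barely notices.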

reply
Yeah, I think "pay to enter" or maybe "pay to be able to post" is ultimately going to be the solution. Then we'll have the paid "gated" social networks, filled with mostly humans, and the free ones will all be bot-swarmed wastelands.
reply
Verifiable credentials are all about this. You need some sort of credentialing body that generates the credential for you, but after that you just have an opaque identifier. Any caller that wants to verify whether you're human submits the ID to a verifier, and the verifier says yes or no. You can also do attestations like age, so you can gate a forum on 16+ or something. You never have to actually give away your name or any other details.
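The yes/no flow is simple enough to sketch. The in-memory registry below stands in for the credentialing body; in a real verifiable-credentials deployment the proof would be cryptographic rather than a lookup, and all names here are invented.

```python
# Minimal sketch of the verify flow described above: the relying party holds
# only an opaque identifier, and the verifier answers yes/no plus coarse
# attestations like "over 16". Not a real VC implementation.
from dataclasses import dataclass

@dataclass
class Credential:
    opaque_id: str           # no name or other personal details attached
    is_human: bool
    age_over_16: bool

class Verifier:
    def __init__(self) -> None:
        self._registry: dict[str, Credential] = {}

    def issue(self, cred: Credential) -> None:
        self._registry[cred.opaque_id] = cred

    def verify_human(self, opaque_id: str) -> bool:
        cred = self._registry.get(opaque_id)
        return cred is not None and cred.is_human

    def verify_age_over_16(self, opaque_id: str) -> bool:
        cred = self._registry.get(opaque_id)
        return cred is not None and cred.age_over_16
```

The key property is that the relying site only ever sees the opaque ID and a boolean, never the underlying identity documents.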
reply
What happens when someone agrees to sell or give away their ID? The credentialing body could catch the very worst abusers, who seem to be signing in to various sites and services multiple times an hour, but it would fail to catch anything else.
reply
I don't think you'll ever be fully free of spam, so you'll still need to filter bad content. If credentials get sold and used to spam, they'll get banned.
reply
How do you ban credentials if they're anonymous? Notice that if you can tell two requests are from the same person then you can do it across services by both of them pretending to be the same service.

Also, what happens to someone whose credentials are compromised? Are you going to ban the credentials of the victim rather than the perpetrator?

reply
world.org is doing exactly that, including the privacy aspect. The iris-scan aspect is scary, but the alternatives don't seem to solve the problem either.
reply