The issue is that it solves nothing if you can't distinguish between text written by AI and text that isn't, no matter how strong the authentication is.
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even lower stakes. I imagine that non-native speakers will take their posts to an AI to help clean them up, at the very least. At worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews / comments from people with credit scores over 650; they have less incentive to astroturf.
But yes, I think your conclusion is correct. This is the only way.
How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
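A minimal sketch of that destructive check, in Python, assuming a simple server-side token store (every name here is made up for illustration): the only way to find out whether a token is valid is to spend it, so probing tokens in bulk destroys their value.

    import secrets

    class TokenBank:
        """Toy model of the vending-machine tokens: issuing is anonymous,
        and the only way to check a token is to spend it."""

        def __init__(self):
            self._unspent = set()

        def issue(self):
            # What the vending machine hands out after payment.
            token = secrets.token_urlsafe(32)
            self._unspent.add(token)
            return token

        def redeem(self, token):
            # Deliberately no separate "is_valid()" probe: checking a token
            # also consumes it, so it can't be tested non-destructively.
            if token in self._unspent:
                self._unspent.discard(token)
                return True
            return False

    # A forum would call redeem() once, e.g. at account registration:
    bank = TokenBank()
    t = bank.issue()
    print(bank.redeem(t))  # True  -- account flagged "probably a person"
    print(bank.redeem(t))  # False -- the same token can't be reused or re-checked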
"Am I making a post which is either funny, informative, or interesting on any level?
I hate how Reddit mods ban any post they don't like as being 'low effort / shit / spam' when that standard is completely vague.
If a mod on one server doesn't like something I say and deletes my comment, all the other (well-behaved) federated instances will also delete it.
Of course this also creates problems in the other direction, like servers that ignore deletion requests.
Combine that with the large number of blocked instances across the board, and I feel like you get into this "which direction would you like to piss into the wind" situation where you have no idea how many people/instances will actually see your message, if any at all.
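To make the "both directions" problem concrete, here's a toy model in Python (not real ActivityPub code; every name is made up) of how one mod's delete fans out to federated instances, and what happens when one of them simply ignores it:

    class Instance:
        def __init__(self, name, honours_deletes=True):
            self.name = name
            self.honours_deletes = honours_deletes
            self.comments = {}   # comment_id -> text
            self.peers = []      # federated instances we push activities to

        def receive_comment(self, comment_id, text):
            self.comments[comment_id] = text
            for peer in self.peers:
                peer.comments[comment_id] = text  # naive federation: copy to peers

        def moderator_delete(self, comment_id):
            # One mod's decision propagates to every peer...
            self.comments.pop(comment_id, None)
            for peer in self.peers:
                peer.receive_delete(comment_id)

        def receive_delete(self, comment_id):
            # ...unless the peer simply ignores the request.
            if self.honours_deletes:
                self.comments.pop(comment_id, None)

    home = Instance("home.example")
    well_behaved = Instance("polite.example")
    rogue = Instance("ignores-deletes.example", honours_deletes=False)
    home.peers = [well_behaved, rogue]

    home.receive_comment("c1", "my comment")
    home.moderator_delete("c1")
    print("c1" in well_behaved.comments)  # False -- gone everywhere that behaves
    print("c1" in rogue.comments)         # True  -- still up on the rogue server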
Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).
Subsequent emails can then be sent for free; however, person X can “revoke” your access at any time, necessitating a further toll payment.
You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.
This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
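Roughly, in Python, what that allow-list plus toll logic might look like (the function names and the 50p constant are purely illustrative, not any real mail-server API):

    TOLL_PENCE = 50  # the "something like 50p" toll

    # allow_list maps recipient -> set of senders who have paid or been pre-authorised
    allow_list: dict[str, set[str]] = {}

    def pre_authorise(recipient: str, sender: str) -> None:
        # Friends, family, and services you signed up for skip the toll.
        allow_list.setdefault(recipient, set()).add(sender)

    def revoke(recipient: str, sender: str) -> None:
        # The recipient can revoke at any time; the next email needs a fresh toll.
        allow_list.get(recipient, set()).discard(sender)

    def accept_email(sender: str, recipient: str, toll_paid_pence: int = 0) -> bool:
        if sender in allow_list.get(recipient, set()):
            return True
        if toll_paid_pence >= TOLL_PENCE:
            # First (paid) contact; subsequent mail is free until revoked.
            pre_authorise(recipient, sender)
            return True
        return False  # unsolicited and unpaid -> rejected

    # Example: a stranger's first email is rejected unless the toll is paid.
    print(accept_email("stranger@example.com", "me@example.com"))      # False
    print(accept_email("stranger@example.com", "me@example.com", 50))  # True
    print(accept_email("stranger@example.com", "me@example.com"))      # True, now authorised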
Those are among the main culprits of unwanted email... and a toll system would make them all the more valuable for even worse actors to take advantage of.