And by small, I mean: this whole trusted group could fit into one quiet Discord channel. That doesn't seem big enough to be useful.
However, if it extends beyond that, things get dicier: suppose Bill trusts me, as well as everyone I trust. Bill does this to make his web of trust big enough to be useful.
Now, suppose I start trusting bots -- whether inadvertently or maliciously. Either way, Bill now has bots in his web of trust as well.
And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.
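The transitive step above can be sketched in a few lines of Python. This is only an illustration of the failure mode, not any real protocol: the names, the trust graph, and the fixed two-hop extension rule are all assumptions made up for the example.

```python
# Hypothetical direct-trust graph. "bill" trusts "me" plus whoever I trust;
# I have (unknowingly) trusted a bot.
direct_trust = {
    "bill": {"me"},
    "me": {"alice", "bot_1"},
    "alice": set(),
    "bot_1": {"bot_2"},
    "bot_2": set(),
}

def web_of_trust(person, depth=2):
    """Everyone `person` trusts, extended transitively up to `depth` hops."""
    trusted, frontier = set(), {person}
    for _ in range(depth):
        frontier = {t for p in frontier for t in direct_trust.get(p, set())}
        trusted |= frontier
    return trusted

# Bill only extended trust one hop past his direct contacts, yet a bot is
# now inside his web -- and nothing in the graph marks it as a bot.
print(sorted(web_of_trust("bill")))  # ['alice', 'bot_1', 'me']
```

The point of the sketch: Bill never made a bad trust decision himself, and no local check he runs on his own edges can reveal that `bot_1` arrived via mine.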
---
It all seems kind of self-defeating to me. The web is either too small to be useful, or it includes bots.
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.
If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new, well-behaved, bona fide human members do.
Except eventually it will also weigh down those users who supported <XYZ political stance>.
I’m not sure if that would work for account deletions though.