> fake AI accounts
First, how do you identify them? Is it strictly admins monitoring posts/server-side logs, or do users report odd behaviour? Second, what is the purpose of these accounts? Are they basically running submarine adverts, or are they just trolling (to harm the community)?
AI Deception: A Survey of Examples, Risks, and Potential Solutions - https://arxiv.org/abs/2308.14752
Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective - https://arxiv.org/abs/2406.05724
Online Deception in Social Media - https://cacm.acm.org/research/online-deception-in-social-med...
Or ... how small can a community be and still be drowned in AI slop?
Is it a community inside one of the major platforms, or it has its custom thing?
Encouraging a culture of not using AI works to an extent, but I also tire of threads claiming the parent post is AI. There isn't a sure-fire way to know one way or another.
We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.
No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.
Now I see it as the perfect tool for impostors.
People often confuse freedom of speech with freedom to access a specific platform for speech.
It's dead wrong; I don't know why people would want to be in a community where they aren't wanted.
This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.
It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.
Was.
Maybe you are too young to remember the (pre-spam) days when it was polite to leave your SMTP server open for others to use?
Yep. Was.
This isn't the internet you grew up on. This is an internet scoped for bots and organizations.
Also, I imagine it's not impossible to reliably distinguish between an autopen and genuine handwriting. The company whose site you linked says their machine can't perform complex pen movements, so calligraphy is impossible.
The real advantage of posting a letter is that you have to pay for postage, and the stamps on the envelope will indicate which country the letter is really coming from.
Not including the cost of the letter itself, or the envelope, or the cost to write it if it's being farmed out to overseas labour, who then have to send it by international postage. And then you have evidence of where the letter originated, which can be compared with how the user presents themselves online.
A little more than 2 hours' minimum wage, I think.
Sorry, they did an interview about 20 years back where they kept correcting the host to 'Something is awful'. I have just called it that ever since.
It'll stop the ones doing it for the lols, but I imagine they're a minority anyway.
The people leaving LLM replies are paying a minimum of $20/month for LLM access, and probably more in practice.
A one time $10 fee is not a deterrent.
1) The cost becomes even higher for AI slop factories, since they will probably get multiple accounts banned.
2) It prevents influence from accruing to any specific account. This diminishes the incentive for slop, since sufficient success means a ban.
3) It reduces the moderation effort, since creating accounts is no longer a sustainable strategy.
Bots are indeed killing Twitter now. I noticed more and more people were leaving permanently. Musk evidently accelerated the decay here. There is something wrong with his mindset here; it's almost as if it is pathological. His perception of things is genuinely distorted, and I am not even 100% certain he is completely aware of it; he must be partially aware, but it seems there is also something wrong with the brain. No wonder he gets along with Trump - that one now clearly has narcissism, with dementia, in its final stage.
You add a barrier here. You think your solution means AI is reduced, but you also drive away real humans. I noticed this with other things too, such as "you need to verify your identity before you can post to the ruby issue tracker". I can do so, but I need my tablet and it takes me more time than before, so I stopped using the ruby issue tracker altogether. (It's not the only reason, but adding barriers really makes me invest my time elsewhere - or at least makes that more likely.)
You always need to consider all the trade-offs. Charging money means you will also drive away real humans at the same time. And it's not solely about the cost; it is simply a hassle. For similar reasons I also rarely register at a phpBB forum - I need to store the password so I don't forget it, etc., so more hassle. Using a password manager is also more of a hassle.
I "log in with Instagram" or "log in with Facebook". Guess how well account recovery works when there is literally no password set. I'm surprised these systems work at all.
On completely different scales. Even if it's not perfect, it is a strong enough filter to turn a bot infestation into a mild annoyance.
Both sites have survived and continue to work well for their users.
A small cost does definitely work for some sites.
Sure, it might stop 10% of the bad actors and lower the numbers, but it'll also stop 80% of the good users, who aren't experts at getting around the cost and don't have an income from using the service that lets them pay it as a cost of doing business.
I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts, handwritten, word for word. Their main concern was that the tools were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch flipped, and almost all of them started talking about how AI, and those who used it, were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting twitter content from extremely anti-AI people. Complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not even a single disagreement (and lots of encouragement).
I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were tracked automatically in the editor versus the AI's, to satisfy someone's curiosity, clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy it or engage with it positively as if it were human-written.
A while later, most of their submission rules were updated with a new clause: if AI-written content was judged to have been discovered, that person would be blacklisted from all submissions across the entire community. Considering I had demo'd LLMs, and given the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out during a witch hunt.
I also used to be quite engaged with blockchain, and it went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out that the power-use claims were largely a nothingburger, people gave up on the hate parade.
I don't think you will "lose the battle" (at least in terms of keeping AI users out), and it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims are a clear London Horse Manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of which there are a huge number, granted).
YMMV
What makes you think the rage is artificial?
Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.
AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.
Which is why I left before I was banned. I no longer felt comfortable, and they probably felt likewise. They wanted a safe space to hate on people involved in AI art, and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors, or honestly of any other group, in that space without being ostracised.
It's a bit like saying "a witch might have burned down their house, so their reaction against witches is understandable" - maybe in the abstract. But that doesn't mean the subsequent actions are acceptable.
> Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:
1. They (like many people) posted a lot of their Midjourney creations for a few months. (2021/22 was like that.)
2. They saw an increase in low quality submissions.
So gripes about AI art and low quality submissions seem perfectly valid.
> Blockchain turned out to be an absolutely awful payment method
> AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Yeah, so I am not complaining about people having negative opinions. I was talking about the overall meme, the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard negative message that everyone absorbed in a short time - basically used like a thought-terminating cliché. I have problems with crypto, and I like things about crypto. I can have a great conversation with most people, but for 12 months or so, you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and that it was going to use X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now arrived in roughly the same place: there's no longer really a discussion, just this one single zeitgeist mode repeated constantly. To the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.
My point is that, like with crypto, this fixed idea will eventually erode and the hate train will move on. People with well-thought-out negative opinions will still exist past that point; they just won't have people screaming at fever pitch about it constantly.
Once again, I have to ask, why do you think that that is what they want? Maybe they want human generated content?
> the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time.
Understandable, though. Why discuss the pros and cons of $FOO when you're drowning in it? All you want is to stop the drowning.
So I downvoted.
The only thing I really took personally was the call for death, and that was me making a decision to leave in favor of my mental health.
The exceptions to the anti-AI sentiment are management and people with a vested interest.
The only solution is in-person meetups, bringing back third places, joining a club. Maybe it's not such a bad outcome.