It was a cheaters website and you could pay to send messages to other cheaters, I think that was the business model at least.
Anyways, since the userbase was like 99.99% male, there just were not the numbers to talk with others. So, they just side stepped it and had very crummy chatbots that you would pay like $1 per message to talk with (this was well before LLMs, think AOL bots from the aughts). Thing was, just like with the 'Nigerian Prince' scams, the worse the bot, the better the john.
It all got exposed a while back, but for me, that was the real Turing test - take people and see if they pay real actual money to talk with bots. Turns out, yes, if couched correctly (...like selling ice to Eskimos, just call it French ice).
So, I'm not sure that LLMs are going to unveil a wave of scams. Likely it will be a bit higher, of course, but the low hanging fruit is lucrative and there is enough of it to go around, and that's been true since really forever.
It's like outrunning a bear: you don't actually have to run faster than the bear, you just have to run faster than the poor sap next to you. Same goes for the bear, there is plenty of prey if you just do a little exercise.
Finally, a profit source!
The company I work for uses a contracted recruiter for hiring, and the other day he was telling me that they're seeing a huge number of scams, fake candidates, and "hands off" applications where people are trying to use AI to do basically the whole interview process — apparently even video interviews. We've mandated at least one on-site interview just so we can be sure we're getting actual people.
And most of these job candidates aren't even doing it maliciously, just "life hacking" the interview process. It's going to be a shit show if organized criminals start using AI.
Heck, I think it was in 23/24, after an Apple launch event, I saw a video of Tim Cook talking about a crypto coin. I had to look at it twice to reassure myself that it really was a scam. This was immediately after the event, and YouTube very helpfully suggested it to me.
Then there was the paper with Bruce Schneier as an author, about how LLMs give criminals significant targeting improvements and process efficiency gains. These enhancements mean that entire demographics that were too poor to be worth targeting are now profitable.
Plus this is all for people in the developed world, who still haven’t seen the worst of it.
In the majority world, shit was already fucked six ways to Sunday. For example, in India, things are so outrageously bad that people who deal with fraud are relieved when victims lose less than $100k.
Someone in another thread pointed out that people on HN seem to be very unaware of how bad things are online for some reason.
I think around that time there was a trend of phishing large YT channels and uploading deepfaked crypto ads. The channel's popularity ensured the recommendation algorithm showed it to many people.