You should look up the history of the Loebner Prize [1]. A surprising amount of engineering effort in some chatbots went toward simulating mistakes and human typing patterns to make them seem more lifelike.

In some of the later Loebner competitions, when text was streamed to the human judge character by character, a bot would even type a wrong character and then visibly backspace over it to look more realistic.
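The character-by-character trick is easy to sketch. Here is a minimal, hypothetical Python reconstruction — the function name, typo rate, and delay values are all my own invention, not taken from any actual competition entry:

```python
import random
import sys
import time

def humanlike_type(text, typo_rate=0.08, delay=(0.03, 0.15), seed=None):
    """Stream `text` to stdout character by character, occasionally
    typing a wrong letter and then 'backspacing' over it, in the
    style the later Loebner bots reportedly used.

    Hypothetical sketch; not code from any real entry."""
    rng = random.Random(seed)
    sent = []  # the characters the judge ultimately sees
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            wrong = rng.choice("abcdefghijklmnopqrstuvwxyz")
            sys.stdout.write(wrong)      # visible typo...
            sys.stdout.flush()
            time.sleep(rng.uniform(*delay))
            sys.stdout.write("\b \b")    # ...then erase it on screen
        sys.stdout.write(ch)
        sys.stdout.flush()
        sent.append(ch)
        time.sleep(rng.uniform(*delay))  # human-ish inter-key delay
    return "".join(sent)
```

Run in a terminal (e.g. `humanlike_type("hello there")`), the backspace escape makes the typo appear and disappear, while the final text the judge reads is unchanged.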

https://en.wikipedia.org/wiki/Loebner_Prize

reply
Wow, it feels like the Loebner Prize went away right at the dawn of the LLM era. Are the two related?
reply
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.

Participants spent more time polishing the natural-language parsing and pre-programming elaborate backstories for their chatbots, among other psychological tricks. In the end, the competition was more impressive as a social engineering exercise, since the real goal became: how do I trick people into thinking my chatbot is human?

That said, the chatbot transcripts from past competitions still make for fascinating reading.

reply
Goodhart's Law vs the Turing Test! Can our humans accurately evaluate intelligence, or will they be fooled by fakes? Live this Sunday!
reply
I think it would be great to see it revived with a different premise.
reply
>because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.

Isn't that really what all these AI companies are doing too? It sure seems like it is.

reply