Maybe they also take the queries that needed lots of fuzziness to reach an answer, and track what people click in order to relate the fuzzy searches to actual results. Keep in mind that what you might think is a super unique "tip of the tongue" question might not be that unique across billions of searches.
A search system built to find things can be optimized far more than an AI built to return an answer, especially with humans in the loop who can tweak things based on analytics data.
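A minimal sketch of the idea above, with entirely made-up log data: aggregate which result users clicked after each fuzzy query, so future identical searches can be steered toward whatever most people actually wanted.

```python
from collections import Counter, defaultdict

# Hypothetical click log: (fuzzy query, result the user clicked).
click_log = [
    ("futurama leela symmetrical violence", "The Farnsworth Parabox"),
    ("futurama symmetrical violence episode", "The Farnsworth Parabox"),
    ("futurama leela symmetrical violence", "Anthology of Interest I"),
]

# Count clicks per (query, result) pair.
clicks_by_query = defaultdict(Counter)
for query, result in click_log:
    clicks_by_query[query][result] += 1

def best_result(query):
    """Return the most-clicked result for this exact query, if any."""
    counts = clicks_by_query.get(query)
    return counts.most_common(1)[0][0] if counts else None

print(best_result("futurama leela symmetrical violence"))
# "The Farnsworth Parabox" wins 2 clicks to 1
```

A real system would of course normalize queries and score across near-duplicates rather than exact strings, but the core loop is just this kind of click aggregation.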
I've tried it for similar cases and have only had a 50% success rate. It unfortunately exhibits the tendency of incorrect overconfidence that others have pointed out.
===
Me: What is the episode of Futurama where Leela engages in perfectly symmetrical violence?
Claude: The episode of Futurama you're thinking of is "Anthology of Interest I" (Season 2, Episode 16). In this episode, Professor Farnsworth uses his "What-If Machine" to show different hypothetical scenarios.
During Bender's segment titled "What if Bender were human?", there's a scene where Leela confronts a group causing trouble. One of them challenges her, saying "What are you going to do?" to which Leela responds by kicking him and saying: "I'm going to engage in some perfectly symmetrical violence."
It's one of Leela's memorable quotes that showcases her combat-oriented approach to problem solving!
===
That's completely wrong. It's from S4E15, "The Farnsworth Parabox", and the line is "now, now, perfectly symmetrical violence never solved anything," said by the parallel-universe Farnsworth.
I'm sure that if I had enabled web access it would have looked this up and gotten it right, but that would be pointless here: I could just make the web search myself without burning extra energy on an LLM.
Anyway, the whole thing is a bit of a cheat, but I've used the same prompt for two years now, and it led me to the conclusion that LLMs in their raw form were never going to be "search", which feels very true at this point.
Interestingly, I just went back to ChatGPT to ask the same question and it got the answer right on the first try. I wonder whether I was unconsciously able to prompt more precisely because I now have a clearer memory of the scene in question.
I wonder if books are inherently easier because their content is purely written language, whereas movies and visual art rarely come with point-by-point descriptions of what they depict.
The training data for movies is probably dominated by subtitles, since the original scripts with blocking, scenery, etc. rarely make it out to the public, as far as I know.
Someone ask an AI where my handle comes from.
Why are some cars called a spider?
Small models just make something up that sounds plausible, but the larger models know the real answer.