I agree those results are handy, but I've had several occasions where they turned out to be completely wrong. A 95% correctness rate is not good enough.
reply
LLMs have a lot of issues with facts because they are probabilistic, and you typically get only one answer per query instead of multiple answers covering a larger space.

However, they are still useful in these cases if you keep that in mind and treat their output as a starting point for thinking and asking further questions.

reply
Funny how Gemini generally takes into account all the words you type, whereas Google search tends to ignore most of them, or directs you to results for thematically (or grammatically or semantically) similar words that are otherwise wholly irrelevant to what you searched.

Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?

reply
They're already crippling their AI to perform what look like sponsored searches.

The plural of anecdote is not data, but this does not feel like a one-off: I was trying to figure out where I could go for a reasonable holiday, and asked Gemini to list all the international airports in two named countries with direct flights from my preferred departure airport. The response came back with a single proposed destination and a "book here" link prominently displayed.

Only once I told it that my query was NOT impulse-purchase intent, and that I really did want to know all the possible destinations, did it actually come back with the list of airports that satisfied my search criteria.

Although, looking on the bright side, it did provide a valid and informative answer on the second try. I haven't had that kind of experience on SEO-infested Google search in quite a long time.

reply