Poster wants it to play Jeopardy, not process text.
reply
Care to enlighten me?
reply
Don't ask a small LLM for precise, fine-grained factual details.

Alternatively, ask yourself how plausible it sounds that all the facts in the world could be compressed into 8B parameters while remaining intact and fine-grained. If your answer is that it sounds pretty impossible... well, it is.
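A rough back-of-envelope sketch makes the capacity argument concrete. The ~2 bits of recallable knowledge per parameter figure is an assumption borrowed from knowledge-capacity scaling studies, not something stated in this thread, and the model size is taken to be 8B parameters (as with Llama 3 8B mentioned below):

```python
# Back-of-envelope estimate of how much factual knowledge an 8B model
# could plausibly store. All numbers here are illustrative assumptions.
params = 8e9           # assumed model size: 8 billion parameters
bits_per_param = 2.0   # assumed upper bound from capacity scaling studies

# Convert total bits to gigabytes: divide by 8 (bits -> bytes), then 1e9.
capacity_gb = params * bits_per_param / 8 / 1e9
print(f"~{capacity_gb:.0f} GB of recallable knowledge")  # ~2 GB
```

Two gigabytes is tiny compared to the corpus such a model is trained on, which is why fine-grained recall of obscure facts gets lossy.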

reply
Did you see the part in my original post where it said "Not unexpected for an 8k model"?
reply
I don't think he does. Larger models are definitely better at not hallucinating; enough so that they're good at answering questions about popular topics.

Smaller models, not so much.

reply
Not sure you're correct; the market is betting trillions of dollars on these LLMs, hoping they'll come close to what the OP expected to happen in this case.
reply
The market didn't throw trillions of dollars to develop Llama 3 8B.

What the GP expected to happen did happen around late 2024 to early 2025, when LLM frontends got web search features. It's old tech now.

reply
The GP’s point was about LLMs generally, regardless of the interface. I agree that this particular model is (relatively speaking) ancient in the AI world, but go back three or four years and this (pretty complex “reasoning” at almost instant speed) would have seemed straight out of a science-fiction book.
reply