upvote
Better than waiting 7.5 million years to have it tell you the answer is 42.
reply
Looked at a certain way it's incredible that a 40-odd year old comedy sci-fi series is so accurate about the expected quality of (at least some) AI output.

Which makes it even funnier.

It makes me a little sad that Douglas Adams didn't live to see it.

reply
Also check out "The Great Automatic Grammatizator" by Roald Dahl for another eerily accurate sci-fi description of LLMs, written in 1953:

https://gwern.net/doc/fiction/science-fiction/1953-dahl-theg...

reply
"Can write a prize-winning novel in fifteen minutes" - that's quite optimistic by modern standards!
reply
42 wasn't a low quality answer.

The joke revolves around the incongruity of "42" being precisely correct.

reply
Should have used a better platform. So long and thanks for all the fish.
reply
deleted
reply
Yes and then no one knows the prompt!
reply
Maybe you should have asked a better question. :P
reply
What do you get if you multiply six by nine?
reply
Tea
reply
For two
reply
Someone should let Douglas Adams know the calculation could have been so much faster if the machine had just lied.
reply
I think Adams was prescient, since in his story the all powerful computer reaches the answer '42' via incorrect arithmetic.
reply
The Bistromathics? That's not incorrect, it's simply too advanced for us to understand.
reply
“What do you get if you multiply six by nine?”

(One) source: https://www.reddit.com/r/Fedora/comments/1mjudsm/comment/n7d...

reply
You also have the problem that if both the ultimate answer to life, the universe, and everything, and the ultimate question to life, the universe, and everything, are known at the same time in the same universe, then the universe is spontaneously replaced by a slightly more absurd universe, ensuring that both the question and the answer become meaningless.

To quote the message from the universe's creators to its creation: "We apologise for the inconvenience." That does seem to sum up Douglas Adams's views on the absurdity of life.

reply
Ok, my Hitchhiker-fu was too weak, thanks!
reply
I don't think we are ever going to win this. The general population loves being glazed way too much.
reply
> The general population loves being glazed way too much.

This is 100% correct!

reply
Thanks for the short warm blast of dopamine, no one else ever seems to grasp how smart I truly am!
reply
That is an excellent observation.
reply
The other day, I got:

"You are absolutely right to be confused"

That was the closest AI has been to calling me "dumb meatbag".

reply
It would be much worse if it had said "You are absolutely wrong to be confused", haha.
reply
"Carrot: The Musical" in the Carrot weather app, all about the AI and her developer meatbag, is on point.
reply
That's an astute point, and you're right to point it out.
reply
You are thinking about this exactly the right way.
reply
You’re absolutely right!
reply
Poor “we”. “They” love looking at their own reflection too much.
reply
I thought you were being sarcastic until I watched the video and saw those words slowly appear.

Emphasis on slowly.

reply
I too thought you were joking

laughed when it slowly began to type that out

reply
2 years ago, LLMs failed at answering coherently. Last year, they failed at answering fast on optimized servers. Now, they're failing at answering fast on underpowered handheld devices... I can't wait to see what they'll be failing to do next year.
reply
Probably the one elephant-in-the-room thing that matters: failing to say they don't know or can't answer.
reply
With tool use, it's actually quite doable!
reply
Claude does it all the time, in my experience.
reply
Same here, it's even told me "I don't have much experience with this, you probably know better than me, want me to help with something else?".
reply
The speed on a constrained device isn't entirely the point. Two years ago, LLMs failed at answering coherently. Now...

You're absolutely right. Now, LLMs are too slow to be useful on handheld devices, and the future of LLMs is brighter than ever.

LLMs can be useful, but quite often the responses are about as painful as LinkedIn posts. Will they get better? Maybe. Will they get worse? Maybe.

reply
> Will they get better? Maybe. Will they get worse? Maybe.

I find it hard to understand your uncertainty; how could they not keep getting even better when we've been seeing qualitative improvements literally every second week for months on end? These improvements being eminently public and applied across multiple relevant dimensions: raw inference speed (https://github.com/ggml-org/llama.cpp/releases), external-facing capabilities (https://github.com/open-webui/open-webui/releases) and performance against established benchmarks (https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks)

reply
I mean, size says nothing; you could do it on a Pi Zero with sufficient storage attached.

So this post is like saying that, yes, an iPhone is Turing complete. Or at least not locked down so far that you're unable to do it.

reply
You need fast storage to make it worthwhile. PCIe 5.0 x4 is a reasonable minimum. Or multiple PCIe 4.0 x4 drives accessed in parallel, but this is challenging since the individual expert layers are usually small. Intel Optane drives are worth experimenting with for the latter (they are stuck on PCIe 4.0) purely for their good random-read properties (quite aside from their wear-out resistance, which opens up use for the KV cache and even activations).
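A quick back-of-envelope sketch of why storage bandwidth dominates here. All numbers are illustrative assumptions (a hypothetical MoE model that streams ~3 GB of active expert weights per token, and rough practical throughputs of ~14 GB/s for a PCIe 5.0 x4 drive and ~7 GB/s for a PCIe 4.0 x4 drive); if every token's weights must come off disk, the token rate is capped at bandwidth divided by bytes read per token:

```python
# Storage-bound token rate for weight-streaming inference (illustrative only).

def tokens_per_second(bytes_per_token: float, bandwidth_gbps: float) -> float:
    """Upper bound on tokens/sec when every token requires reading
    `bytes_per_token` of weights from storage at `bandwidth_gbps` GB/s."""
    return bandwidth_gbps * 1e9 / bytes_per_token

# One PCIe 5.0 x4 drive (~14 GB/s), ~3 GB of expert weights touched per token:
single_drive = tokens_per_second(3e9, 14)

# Two PCIe 4.0 x4 drives (~7 GB/s each) read in parallel, assuming ideal scaling:
dual_drive = tokens_per_second(3e9, 2 * 7)

print(f"PCIe 5.0 x4:        {single_drive:.2f} tok/s")
print(f"2x PCIe 4.0 x4:     {dual_drive:.2f} tok/s")
```

Either way the ceiling is only a handful of tokens per second, which is why parallel reads (and the small, scattered expert layers that make them hard to schedule) matter so much in this setup.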
reply