A calculator can do very complex sums very quickly, but we don't tend to call it "smart" because we don't think it's operating according to some internal model of the world. I think the "LLMs are AGI" crowd would say that LLMs are smart, but it's perfectly consistent to find the output of LLMs consistent/impressive/useful while still maintaining that they aren't "smart" in any meaningful way.
Okay, but you have to actually address why you think LLMs lack an "internal model of the world"
You can train one on 1930s text, and then teach it Python in-context.
They've produced multiple novel mathematical proofs now; Terence Tao is impressed with them as research assistants.
You can very clearly ask them questions about the world, and they'll produce answers that match what you'd get from a "model" of the world.
What are weights, if not a model of the world? It's got a very skewed perspective, certainly, since it's terminally online and has never touched grass, but it still very clearly has a model of the world.
I'd dare say it's probably a more accurate model than the average person has, too, thanks to having Wikipedia and such baked in.
Clearly there's a limit. For example, if an alien autocomplete implementation were to fall out of a wormhole that somehow manages to, say, accurately complete sentences like "S&P 500, <tomorrow's date>:" with tomorrow's actual closing value today, I'd call that something else.
> At what point does autocomplete stop being "just autocomplete"?
Every single discussion on the internet is a repeat of https://en.wikipedia.org/wiki/Loki%27s_wager it seems…
That's the sorcery mentioned in the GP. The issue comes when people believe it to be smart when, in reality, it is just next-word prediction. It gives the impression it's actually thinking, and this is by design. Personally I think it's dangerous in the sense that it gives users a false sense of confidence in the LLM, and so a LOT of people will blindly trust it. This isn't a good thing.
edit:
You cannot predict all the actions or words of someone smarter than you. If I could always predict Magnus Carlsen's next chess move, I'd be at least as good at chess as Magnus - and that would have to involve a deep understanding of chess, even if I can't explain my understanding.
I can't predict the next token in a novel mathematical proof unless I've already understood the solution.
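To make "next word prediction" concrete, here's a toy sketch of the mechanism in its crudest form: a bigram lookup table with greedy decoding. The corpus and function names are made up for illustration; real LLMs do this over subword tokens with a learned transformer, not a frequency table, which is exactly why the prediction-vs-understanding question is interesting.

```python
# Toy illustration of next-word prediction: pick the most likely
# continuation from counted word pairs. Real LLMs use a learned
# neural network over subword tokens, not a lookup table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Greedy decoding: return the most frequent follower of `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" most often (2 of 4 times)
```

The gap between this table and a frontier model is the whole debate: both "just" predict the next token, but one of them can only do so well on a novel proof by, in some sense, representing the underlying problem.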
If you can predict the words a bright person will say about X... isn't that a truly astounding tool? It could be used in myriad useful ways if one is a little creative with it.
Since it's also "alien", it can detect and explore paths that we simply haven't noticed, because its biases aren't quite the same as ours.
What would it take for you to concede a future model was smart?
For example, a model whose training set is purely engineering and code plus a general language dataset would be "aware" of what art is despite never having seen an artistic image, and aware of what colours are, yet able to create something it has never seen before.
Like a child with a paintbrush, there is an intuitive behavior that happens.
They can already create something they've never seen - you can prompt ChatGPT to generate images, and there are a few dedicated models for it: https://chatgpt.com/images/
Terence Tao thinks they've done innovative work on mathematics: https://www.scientificamerican.com/article/amateur-armed-wit...
They are useful, but a cul-de-sac on the road toward AGI.
A better model to use is this: LLMs possess a different type of intelligence than us, just like an intelligent alien species from another planet might.
A calculator has a very narrow sort of intelligence. It has near perfect capability in a subset of algebra with finite precision numbers, but that's it.
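The "finite precision" caveat is easy to demonstrate: the machine is flawless within its narrow domain, but the domain isn't real-number algebra. A minimal sketch, using Python's standard `float` and `decimal` types:

```python
# Finite-precision arithmetic: correct per the IEEE 754 binary spec,
# yet "wrong" by the rules of real-number algebra, because 0.1, 0.2
# and 0.3 have no exact binary representation.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# With an exact decimal representation, the algebra holds again.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

So even the calculator's "near perfect capability" is perfection relative to its own formal system, not ours - which is the same distinction being argued about for LLMs.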
An old-school expert system has its own kind of intelligence, albeit brittle and limited to the scope of its pre-programmed if-then-else statements.
By extension, an AI chat bot has a type of intelligence too. Not the same as ours, but in many ways superior, just as a calculator is superior to a human at basic numeric algebra. We make mistakes, the calculator does not. We make grammar and syntax errors all the time, the AI chat bots almost never do. We speak at most half a dozen languages fluently, the chat bots over a hundred. We're experts in at most a couple of fields of study, the chat bots have a very wide but shallow understanding. Etc.
Don't be so narrow-minded! Start viewing all machines (and creatures) as having some type of intelligence, rather than treating intelligence as a boolean "have" or "have not".
Have you ever heard anyone refer to a calculator as intelligent?
These companies have a vested interest in making the product appear more human/smart than it is. It's new tech smeared with the same ole marketing matter.
The LLM's task is to produce a string of words according to an internal model trained on texts written by humans (and now generated by other LLMs). This is not intelligence.
Where it fails is generally the first step. It’s kinda like the old saying “you have to ask the right question”. In all problem solving matters, the definition of problem is the first step. It may not be the hardest (we have problems that are well defined, but unresolved), but not being able to do it is often a clear indication of not being able to do the rest.
> What would convince you that you're wrong?
Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the problem) and they can either go solve it or provide a sound plan to make the issue disappear. "Issue" here refers to an unpleasant or frustrating situation.
Until then, I see them as tools. Often to speed up my writing pace (generic code and generic presentations), or as a weird database where what went in has a high probability of coming back out.