It's literally how they work. I think the magic that none of us really expected is that our languages, human and computer, are absurdly redundant. But I think it makes sense, in hindsight at least. When we say things it's usually not to add novel or unexpected information that comes out of nowhere, but to elaborate or illustrate a point that could often be summed up in 5 words. This response is a perfect example of that.
reply
> AI is a text autocomplete. This is the best AI definition I've heard, and I agree with it 100%. Thank you.

To believe that, you would first have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.

reply
> To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.

How?

It all still functions with text prediction

reply
> It all still functions with text prediction

Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If, after being pointed to those, you still choose to be stuck on "it's just text prediction", then that's a problem you are creating for yourself, and only you can get unstuck from a problem of your own making.

reply
> It all still functions with text prediction

>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If, after being pointed to those, you still choose to be stuck on "it's just text prediction", then that's a problem you are creating for yourself, and only you can get unstuck from a problem of your own making.

Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?

> I can point you to ReAct loops and tool-calling and agent-based systems.

Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.

Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.

"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.

If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like a idiot.

reply
> Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call.

No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute it (meaning they build command-line arguments, run the command-line app, analyze the output, and assess the outcome) as subtasks.

And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
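Roughly, that retry behaviour looks like the loop below. The model and tool here are fake stand-ins (a real assistant would call an actual LLM and actually run the command), but the structure is the point: act, observe, feed the error back, let the model adjust its next call.

```python
def complete(transcript: str) -> str:
    """Fake model: if the transcript shows a failed tool run, it
    'adapts' by emitting corrected arguments on the next step."""
    if "Observation: error" in transcript:
        return "Action: grep -r pattern ."   # corrected attempt
    return "Action: grep pattern"            # first, broken attempt

def run_tool(command: str) -> str:
    """Fake tool runner: the one-argument grep call fails."""
    if command == "grep pattern":
        return "error: missing file operand"
    return "match found in src/main.py"

def react_loop(goal: str, max_steps: int = 3) -> str:
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        action = complete(transcript)              # reason + act
        command = action.removeprefix("Action: ")
        observation = run_tool(command)            # execute the tool
        transcript += f"\n{action}\nObservation: {observation}"
        if not observation.startswith("error"):
            break                                  # goal reached
    return transcript

result = react_loop("find pattern")
```

The "troubleshooting" is just the failed observation landing back in the transcript, where the next round of text prediction picks it up.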

reply
> I can point you to ReAct loops and tool-calling and agent-based systems.

Those literally work with text prediction.

If you take the text prediction out of it, nothing happens.

You stick a harness around a text predictor, and the harness just keeps triggering the text predictor again.

If you think I am missing something then please do point it out.

reply
deleted
reply