By definition a summary is the best at nothing, though, and the mentality that the best way to rule is from a single summarized interpretation is both flawed and scary. It's not answering all questions; it's attempting to provide a single summation heavily shaped by its training. Go ahead and incorporate that into a balanced, multi-perspective decision-making process, but "one tool to rule them all" is not the same thing, and it's definitely not what we're getting.
Emphasis on "looks like" ;-)
Very much agree. This reminded me of Project Cybersyn [1], an attempt by socialist Chile to build a centralized, heavily computerized control room that would summarize the entire economy for a few men literally pushing the buttons. Complete with 70s aesthetics and a Star Trek TOS feel.
[1] https://thereader.mitpress.mit.edu/project-cybersyn-chiles-r...
It's best at summarizing/processing a modest amount of information quickly. Given more than that, its usefulness drops off sharply. This demands tooling that divides up the information and manages its flow.
The claim from Elon and the rest of the AI crew that LLMs can just keep growing forever is not realistic, nor is it borne out by real-world testing.
It can do "everything," but in practice it will still be fine-tuned, harnessed, and agentified, which undercuts the idea that the model itself can do everything.