You have to spend tens of thousands of dollars on hardware to approach the reasoning and tool-calling levels of SOTA models, so casually saying "just use a local LLM" puts it out of reach for the common man.
That's pretty much how it was with computer tech in the 90s. Ten years later we were watching cat videos on machines that dwarfed the computing power of what used to be servers.
> And you can use a local LLM

That ship sailed a long time ago. It's possible, of course, if you're willing to invest a few thousand dollars extra in a graphics-card rig and pay for the power.
