I am also interested in what the future will look like. So far, what I am seeing is:

(1) specialized AI agent -> (2) we should add 1790 agents to be competitive -> (3) pivot to agentic workforce platform

Now we have lots and lots of agentic workforce platforms and sandbox providers to run them. All have similar capabilities: create an agent for HR, create an agent for Sales, ...

I hope to see something interesting pop up. At least that was happening in the SaaS era, when people were inventing new ways of solving old problems: DocuSign, Salesforce, Zoho, ...

reply
I think both product and engineering are lacking. The only things that work great today are the LLMs themselves.

Everything is dependent on "agents", but there is either barely any scaffolding around them or it is full spaghetti; at least it's hard to find one that's well constructed.

For instance, humans zoom around in cars; these cars don't spontaneously combust (most of the time), have seatbelts and airbags, and don't need an engine oil change every mile. Humans are amazing, and the cars are also relatively solidly engineered (at least the ones we drive around today).

The agent products that we have today are decidedly NOT that. Maybe for a single week openclaw was it, and then it decided to add a trawler and a fishhook to the car along with 1000 other additions, because why not? And that has been true for almost every LLM/AI product I have seen.

reply
I think the winners here, such as they are, will be the companies that have an actual specialized service that actually does something, with any "agentic" functionality layered on top of that.
reply
The thing that AI is best at is summarizing vast quantities of information. That means the most natural thing for an AI to do is be "the one tool to rule them all".

The more information it has access to, the more useful the answer can be. But that also means that it can answer all the questions.

reply
>> The thing that AI is best at is summarizing vast quantities of information

By definition, though, a summary is the best at nothing, and the mentality that the best way to rule is from a single summarized interpretation is both flawed and scary. It's not answering all questions; it's attempting to provide a single summation dramatically influenced by training. Go ahead and incorporate this into your balanced and multi-perspective decision-making process, but "one tool to rule them all" is not the same thing and definitely not what we're getting.

reply
"If all you have is an LLM, every problem looks like summarizing information."

Emphasis on looks like ;-)

reply
> the mentality that the best way to rule is from a single summarized interpretation is both flawed and scary.

Very much agree. This reminded me of Project Cybersyn [1], an attempt by socialist Chile to build a central, heavily computerized room that would summarize their entire economy for a few men literally pushing the buttons. Complete with 70s aesthetics and a Star Trek TOS feel.

[1] https://thereader.mitpress.mit.edu/project-cybersyn-chiles-r...

reply
Not until it's context window and attention is infinite.

It's best at summarizing/processing a modest amount of information quickly. But given more, its usefulness drastically decreases. This demands tooling that divides up the information and the flow.
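The divide-and-conquer idea above is often done as a map-reduce over chunks: summarize each piece that fits the context window, then summarize the summaries. A minimal sketch, where `summarize` is a hypothetical stand-in for an actual LLM call and `max_chars` is a crude proxy for a token budget:

```python
# Hypothetical sketch of map-reduce summarization. `summarize` stands in
# for a real LLM call; here it just truncates so the sketch is runnable.

def chunk(text: str, max_chars: int) -> list[str]:
    """Split text into pieces no longer than max_chars."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(text: str) -> str:
    # Placeholder: a real implementation would call a model here.
    return text[:100]

def map_reduce_summary(text: str, max_chars: int = 4000) -> str:
    pieces = chunk(text, max_chars)
    partials = [summarize(p) for p in pieces]        # map: one summary per chunk
    combined = "\n".join(partials)
    if len(combined) <= max_chars:
        return summarize(combined)                   # reduce: final summary
    return map_reduce_summary(combined, max_chars)   # still too big: recurse
```

The point is that the splitting and recombining logic lives outside the model, which is exactly the kind of scaffolding people keep reinventing.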

reply
“Not until it is context window”???
reply
This has exceedingly obvious limits. The primary limit is the context pollution that happens when you give it too much context.

The claim by Elon and the rest of the AI crew that LLMs can just grow forever is not realistic, nor is it borne out by real-world testing.

It can do "everything", but by "everything" I mean it will still be fine-tuned, harnessed, and agentified, which isn't really the idea that the model alone can do everything.

reply
I'm tired of endless LLM spam submissions from people who only use their accounts here to advertise and self promote.
reply
LLM submissions are no different from the tech submissions of yesterday. But most people used to build tools that do one thing well instead of chasing whatever the current meta is.
reply
I have found that flagging and ignoring works better than complaining... the bots dislike those who point out the uselessness of LLMs.
reply
Developing and serving GenAI models is highly unprofitable, so, no, we're not going to get that in the AI world.

Either those model developers and providers package them into as many services as possible so that they can be somewhat profitable, or they die, and we don't have model developers and providers anymore.

reply
Well, the product here has nothing to do with serving GenAI models. It's now application territory.

And I prefer the Unix philosophy to the Copilot product approach.

reply