There is also the use case of delegating tasks programmatically to an LLM, for example transforming unstructured data into structured data. That task often can't be done reliably without either 1. lots of manual work or 2. intelligence, especially when the structure of the individual data pieces is unknown. Problems like these can be solved far more efficiently by LLMs, and if you imagine these programs processing very large datasets, then sub-millisecond inference is crucial.
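A minimal sketch of that kind of delegation, assuming an OpenAI-style chat-completions API; the model name and the fields in the extraction prompt are placeholders, not anything from the comment above:

    import json
    from openai import OpenAI  # assumes the OpenAI Python SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract(record: str) -> dict:
        """Ask the model to turn one unstructured record into JSON."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract name, date, and amount as a JSON object."},
                {"role": "user", "content": record},
            ],
        )
        return json.loads(resp.choices[0].message.content)

At millions of records, per-call latency dominates the wall-clock time of the whole pipeline, which is where fast inference pays off.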
reply
Aren't such tasks inherently parallelizable?
reply
Agents already bypass human inference time; if a model can go from input to output instantly, it can also run in a loop, completing long chains of tasks near-instantly.
reply
Agents also "read", so yes, there is a use case. Think about spinning up 10, 20, or 100 sub-agents for a small task and having them all return near-instantly. That's the use case, not the chatbot.
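A rough sketch of that fan-out pattern; run_subagent here is a hypothetical stand-in for one blocking call into whatever agent framework is in use:

    from concurrent.futures import ThreadPoolExecutor

    def run_subagent(task: str) -> str:
        """Placeholder for one blocking LLM-backed sub-agent call."""
        return f"result for {task}"

    tasks = [f"process shard {i}" for i in range(100)]

    # Fan out 100 sub-agents; with near-instant inference, the whole
    # batch returns in roughly the time of the slowest single call.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(run_subagent, tasks))

With human-speed inference the same fan-out is bottlenecked by each sub-agent's generation time, not by the orchestration.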
reply
Yes. You can allow multiple people to use a single chip. A slower solution will be able to service far fewer users.
reply
Right, but it's also possible that using 42 Google TPUs for a second is cheaper than one of these.
reply