> The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.

What's interesting to me is that as much as companies are pushing AI adoption, I have started to hear of AI token spend limits being enforced across a few companies, so it's not entirely clear that B2B can make them profitable yet either.

If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.

It’s not like “best” has won any other B2B arms race in the past.

reply
>If all the models reach good enough, then low cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross selling / etc.

Gemini is the best deal too. For $20 you get multiple quotas per day across the products (web, CLI, Antigravity, AI Studio), 2 TB of cloud storage, and you can family-share the plan.

reply
I don't know Gemini's pricing model in detail, but in general pricing doesn't generalize well between personal/hobbyist and enterprise use. Consumer pricing of variable costs is a balancing act, and most Gemini users aren't going to be anywhere near the quota; a company of 1000 can't always buy for $20,000 what 1000 random users with $20 personal plans are theoretically capped at.
reply
Ultimately, though, in the long run: they invented the tech, have a large cash-flow-generating business subsidizing R&D as well as sales, and enjoy the network effect of existing B2B relationships.

Further, they have their own TPUs, datacenters, etc. on which to run their models.

Plus the data they've squirreled away over the preceding 30 years from books, the web, etc.

Just seems like a lot of efficiencies if it's going to come down to cost.

reply
In large part because most companies have a set budget for IT spend. That's how “normal” profitable companies operate outside this cash-burning bonanza that's going on.

And in that reality one can't just magically spend a bunch more on some fancy new thing, especially when said fancy new thing isn't returning value. So “token limits” and cost controls on B2B are entirely expected here.

reply
> especially when said fancy new thing isn't returning value

I think this is the key element. Either they can't measure the value, or it's far far lower than anyone wants to believe, or both.

I think the problem is less that it makes some coding tasks XX% faster, and more that the end-to-end work of an SWE's role is only improved by some much smaller Y%.

If a CTO sets $10k/year spend limits on $500k SWEs, they must not believe any of the hype.

reply
The problem is that, AGI fantasy aside, CTOs are expected to deliver results today and tomorrow. Better to let somebody else hold the bag and train models; then, once it finally works as advertised, you can ease off the brakes.
reply
> The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.

Expert systems were amazing. They were not cost effective.

There might be another bitter lesson to be had here, and unless the accountants start talking we're not gonna know any time soon.

reply
LLM usage will largely replace traditional search, and that's stage one. To be specific, search will be consumed by the LLMs: it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results, etc. The age of the typical user handling any of that is about to end. Search will be more a feature of Gemini in the not very distant future, rather than Gemini being bolted onto/into search.

Fuller integration into the user's life will bring ever more ad opportunities (and it doesn't matter if the HN base hates that notion, it's going to happen regardless). That'll happen over the next decade gradually.

Shopping, home management, tasks (taxes, accounting, lifestyle, reminders, homework, work work, 800 other things), travel (obvious), advice & general conversation (already there), search (being consumed now), gaming (next 3-5 years to start), full at-work integration (gradual spread across all industries, with more narrow expertise), digital world building (10-15+ years out for mass user adoption). And on the list goes. It's pretty much anything the user can or does touch in life.

reply
> To be specific, search will be consumed by the LLMs, it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results of search, etc. The age of the typical user handling any of that is about to end.

We already have the tech for that, why hasn't it happened? People are revolted by the AI results in Google. AI isn't going to make people use their computers more. It's not opening up a new consumer market. This is just making each search infinitely more expensive.

reply
I find searching chatgpt.com and asking for sources, then visiting them, works much better than Google for finding niche topics.
reply
Every year I ask the latest version of ChatGPT a basic facts question about rugby results. It almost always gets it wrong, even when it does a web search and cites sources. Wrong scores, hallucinated matches, wrong locations: just gobsmacking amounts of wrongness.

The latest "Thinking" version gets it reliably right, but spent about 3 minutes coming up with the answer that 10 seconds of googling provides.

So I don't believe we are currently in a situation where LLMs are an effective replacement for search engines.

reply
Yep, Google AI results are old too.
reply
Who is revolted? I use the AI Google results every day when asking specific questions; I rarely visit the webpages anymore. Also, Google already injects ads into conversations in the form of Google Shopping affiliate links.
reply
>I rarely visit the webpages anymore.

And what do you think this'll do for future LLM models that need to train on new content if web page traffic collapses?

reply
I understand the concern, but it's frankly not my problem as a user; that is for the authors and corporations to figure out. No one would (or should) blame car buyers for putting the horse and buggy out of business: they're merely participating in the market as consumers, not producers.
reply
They won't figure it out. It's the tragedy of the commons.
reply
Then that is how it will be. It's a self-correcting problem: if they don't figure it out, their models won't continue improving.
reply
You see it already with how many people use LLMs for everything these days. Google Gemini can also integrate with your other Google apps to personalize further, and Gemini already has product placement ads.
reply
Google is already dumping LLMs into search and it works well and is free.
reply
It doesn't work well. The results are wrong and uninformative much of the time.
reply
Any examples of bad ones? I find them perfectly fine for my queries.
reply
Search for anything mechanically car related and the results are terrible or wrong.
reply
Do you have a concrete example I can reproduce? I searched for things like how to change the filter for a given make and model and it seems correct; not sure if that's what you meant.
reply
I'm not the person you replied to, but I'm wondering which Google AI product you use for search that is so excellent that you need someone to find you an example of it failing?

I think Google has several AI products with search features?

Which one in your experience "seems correct"?

I'm fascinated because I've never found any LLM to be particularly error free at search.

reply
Google.com with the AI Overview, or whatever they call it now. It seems to source web page information for grounding, so it's reasonably correct and doesn't hallucinate, recently at least.
reply
It works very poorly
reply