The point is to recognise that certain patterns carry a cost in the form of risk, and that cost can be massively outsized relative to the benefits. It's like the risk of giving a poorly vetted employee unfettered access to the company vault.
In the case of employees, businesses invest a tremendous amount of money in mitigating insider risk. Nobody is saying you should take no risks with AI, only that you should be aware of how serious the risks are and how to mitigate or otherwise manage them.
Exactly as we do with employees.
Who are you going to arrest and/or sue when you run a chat bot "at your own risk" and it shoots you in the foot?
This is the calculus that large companies use all the time when committing acts that are 'most likely' illegal. They may be fined millions of dollars, but they believe they'll make tens to hundreds of millions on the action.
Now, for you as an individual things are far more risky.
You don't have a nest of heathen lawyers to keep you out of trouble.
You can't bully nation states, government entities, or even other large companies.
You individually may be held civilly or criminally liable if things go badly enough.
As a former (bespoke) WP hosting provider, I'd counter that they usually did. I'm not sure I ever met a prospective "online" business customer whose build didn't: they'd put their entire business into WP installs, with plugins for everything.
Our step one was to turn WP into a static site generator and get WP itself behind a firewall and VPN, and even then single-tenant only, on isolated networks per tenant.
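To illustrate one piece of that "WP as static site gen" pipeline: after mirroring the firewalled WP instance (e.g. with wget), the mirrored HTML still points at the private origin. A minimal sketch of rewriting those absolute links to root-relative ones so the mirror can be served from any public static host (the hostname here is a placeholder, not a real deployment):

```python
INTERNAL_HOST = "wp.internal.example"  # hypothetical firewalled WP origin

def rewrite_internal_links(html: str, internal_host: str = INTERNAL_HOST) -> str:
    """Rewrite absolute links pointing at the private WP origin into
    root-relative paths, so the mirrored HTML works on the public host."""
    for scheme in ("https://", "http://"):
        html = html.replace(f"{scheme}{internal_host}", "")
    return html

page = '<a href="https://wp.internal.example/about/">About</a>'
print(rewrite_internal_links(page))  # <a href="/about/">About</a>
```

In practice wget's `--convert-links` handles much of this, but a post-processing pass like the above catches URLs emitted by plugins that wget misses.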
To be fair, that data wasn't ALL PII, at least not until around 2008, when the BuddyPress craze was hot. That was much more difficult to keep safe.
1. What if, "ChadGPT" style, ads are added to the answers (as OpenAI has said it would do, hence the "ChadGPT" nickname)?
2. What if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare the HP Z640 and HP Z840 workstations and list the features in a table" / "Find which Xeon CPUs they support, and list each CPU's launch date, launch price, and typical used price now".
How long before I get twelve ads along with paid vendor recommendations?
Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.
Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Usage on the plans is a bit more unknown.
Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record I'm not sure I'd just take his word for it.
As for Chinese models: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...
From the article:
> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.
> Anyway, I’m sure these numbers are great-oh my GOD!
> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.
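To put those quoted figures in perspective, a quick back-of-the-envelope calculation (assuming "burned" approximates net loss, so total spend ≈ revenue + burn; that reading of the article's wording is an assumption):

```python
def cost_per_revenue_dollar(revenue_m: float, net_loss_m: float) -> float:
    """Total spend divided by revenue, both in $ millions."""
    return (revenue_m + net_loss_m) / revenue_m

# Zhipu, first half of the year: $27M revenue, $334M net loss
print(round(cost_per_revenue_dollar(27, 334), 1))    # ~13.4 spent per $1 earned

# MiniMax, first nine months: $53.4M revenue, $211M burned
print(round(cost_per_revenue_dollar(53.4, 211), 1))  # ~5.0 spent per $1 earned
```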
We can see from inference costs at third-party providers that inference is profitable enough to sustain even third-party providers of proprietary models, which are undoubtedly paying licensing/usage fees, so these models won't go away.
They spend money on growth and new models. At some point that will slow and then they’ll start to spend less on R&D and training. Competition means some may lose, but models will continue to be served.
Furthermore, publicly traded companies show that, overall, these products are not economical. Meta and MSFT are great examples, though investors have recently appraised their results in opposite ways. Notably, OpenAI and MSFT are more closely linked than any other Mag7 company is with an AI startup.
https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spe...
> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)
Yeah, exactly. So how the hell do the bloggers you read know AI players are losing money? Are they whistleblowers, or are they pulling numbers out of their asses? Your choice.
Heck, we were spoiled by "memory is cheap," but here we are today wasting it at every turn as prices keep skyrocketing (PS: they ain't coming back down). If you can't see the shift to forced subscriptions via technologies guised as "security" (i.e. Secure Boot) and the monopolistic distribution channels (Apple, Google, Amazon, or the OEM), you're running with blinders on. Computing's future, as it's heading, will be closed ecosystems that are subscription-serviced and mobile-only. They'll nickel-and-dime users for every nuanced freedom of expression they can.
Is it crazy to correlate the price of memory with our ability to run LLMs locally?
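Not crazy at all: a local LLM's RAM/VRAM footprint is dominated by its weights, roughly parameter count times bytes per parameter (ignoring KV cache and runtime overhead). A rough sketch of the sizing, with illustrative model sizes:

```python
def weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone, in GB
    (1 GB = 1e9 bytes; ignores KV cache and runtime overhead)."""
    return params_billions * bits_per_param / 8

print(weights_gb(7, 16))  # 7B model at fp16    -> 14.0 GB
print(weights_gb(7, 4))   # 7B at 4-bit quant   -> 3.5 GB
print(weights_gb(70, 4))  # 70B at 4-bit quant  -> 35.0 GB
```

So a jump in RAM prices directly moves the cost floor for running larger models at home.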
None of these went 10x. Actually, the internet went 0.0001~0.001x for me in terms of bits/money; I lived through the dial-up era.
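That ratio checks out on rough numbers (the monthly prices here are assumptions for illustration, not the commenter's actual bills):

```python
def dollars_per_mbps(monthly_price: float, mbps: float) -> float:
    """Monthly price divided by bandwidth, in $ per Mbps."""
    return monthly_price / mbps

dialup = dollars_per_mbps(20, 0.056)  # 56k modem at an assumed $20/month
fiber = dollars_per_mbps(50, 1000)    # gigabit at an assumed $50/month
print(fiber / dialup)  # ~1.4e-4, within the claimed 0.0001~0.001x range
```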
What if a thermonuclear war breaks out? What's your backup plan for this scenario?
I genuinely can't tell which is more likely to happen in the next decade. If I have to guess I'll say war.