It turns out the lethal trifecta is not so lethal. Should a business avoid hiring employees since, technically, employees can steal from the cash register? The lethal trifecta framing treats security as binary: either the data can be taken or it can't. That may be overly cautious. It may be that hiring an employee has a positive expected value even when you account for the possibility of one stealing from the cash register.
reply
You're taking it too literally.

The point is to recognise that certain patterns have a cost in the form of risk, and that cost can massively outweigh the benefits.

Just like the risk of giving a poorly vetted employee unfettered access to the company vault.

In the case of employees, businesses invest a tremendous amount of money in mitigating insider risk. Nobody is saying you should take no risks with AI, only that you should be aware of how serious the risks are, and how to mitigate or otherwise manage them.

Exactly as we do with employees.

reply
Employees are humans and therefore subject to the law. There are remedies. And you can point a camera at the cash register.

Who are you going to arrest and/or sue when you run a chat bot "at your own risk" and it shoots you in the foot?

reply
If your chatbot provided you 1.5 feet's worth of value before shooting you in the foot, it may have been worth it. The optimal amount of self-damage to maximize total value may be non-zero.
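
In expected-value terms (a toy model; all the numbers here are made up):

    # Toy expected-value model: an agent produces value v per period but
    # has probability p per period of a failure costing c.
    # All numbers are illustrative.
    def expected_value(v, p, c, periods):
        return periods * (v - p * c)

    # $100/period of value, 1% chance of a $5,000 failure: still positive.
    print(expected_value(v=100, p=0.01, c=5_000, periods=12))  # 600.0

As long as v > p*c, deploying beats not deploying, even though some expected self-damage is baked in.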
reply
>The optimal amount of self-damage to maximize total value may be non-zero.

This is the calculus that large companies use all the time when committing acts that are 'most likely' illegal. While they may be fined millions of dollars, they at least believe they'll make tens to hundreds of millions on said action.

Now, for you as an individual things are far more risky.

You don't have a nest of heathen lawyers to keep you out of trouble.

You can't bully nation states, government entities, or even other large companies.

You individually may be held civilly or criminally liable if things go bad enough.

reply
Maybe. People have run wildly insecure phpBB and WordPress plugins, so maybe it's the same cycle again.
reply
Those usually didn't have keys to all your data. Worst case, you lost your server, and perhaps you hosted your emails there too? Very bad, but nothing compared to the access these clawdbot instances get.
reply
> Those usually didn't have keys to all your data.

As a former (bespoke) WP hosting provider, I'd counter that those usually did. I'm not sure I ever met a prospective "online business" customer whose build didn't. They'd put their entire business into WP installs, with plugins for everything.

Our step one was to turn WP into a static site generator and get WP itself behind a firewall and VPN, and even then it was single tenant only, on isolated networks per tenant (a rough sketch of the static step is below).

To be fair, that data wasn't ALL about everyone's PII, at least until around 2008, when the BuddyPress craze was hot. And that was much more difficult to keep safe.
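
Roughly, the publish step looked like this (a minimal sketch of the idea, not our actual tooling; the origin URL and output directory are placeholders):

    # Mirror a firewalled WP instance to static HTML. Assumes the WP
    # origin is only reachable from this build host; only the static
    # output directory is ever exposed to the public internet.
    # HTML pages only; assets are omitted for brevity.
    import os
    import urllib.parse
    import urllib.request
    from html.parser import HTMLParser

    WP_ORIGIN = "http://wp.internal"  # placeholder: WP behind firewall/VPN
    OUT_DIR = "public"                # static files served to the world

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def local_path(url_path):
        # "/about/" -> public/about/index.html
        if url_path.endswith("/"):
            url_path += "index.html"
        return os.path.join(OUT_DIR, url_path.lstrip("/"))

    origin_host = urllib.parse.urlparse(WP_ORIGIN).netloc
    seen, queue = set(), ["/"]
    while queue:
        path = queue.pop()
        if path in seen:
            continue
        seen.add(path)
        with urllib.request.urlopen(WP_ORIGIN + path) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        dest = local_path(path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "w", encoding="utf-8") as f:
            f.write(html)
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            u = urllib.parse.urlparse(href)
            # Follow same-origin links only
            if u.scheme in ("", "http", "https") and u.netloc in ("", origin_host):
                queue.append(u.path or "/")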

reply
> are running
reply
If you peruse molthub and moltbook you'll see the agents have already built six or seven such social networks. It is terrifying.
reply
Even an OnlyMolts!!
reply
I understand that things can go wrong and there can be security issues, but I see at least two other issues:

1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?

2. what if the current prices really are unsustainable and the thing goes 10x?

Are we living in some golden age where we can both query LLMs on the cheap and not get ad-infected answers?

I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"

And I now do the same:

"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".

How long before I get twelve ads along with paid vendor recommendations?

reply
> what if the current prices really are unsustainable and the thing goes 10x?

Where does this idea come from? We know how much it costs to run LLMs; it's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running inference at a profit? Claude Max might be a different story, but AI is going to get cheaper to run, not randomly 10x more expensive for the same models.

reply
From what I've read, every major AI player is losing a lot of money on running LLMs, even just on inference. It's hard to say for sure because they don't publish the financials (or when they do, the numbers tend to be obfuscated). But if the screws start being turned on investment dollars, they not only have to increase the price of their current offerings (a 2x cost increase wouldn't shock me), but some of them also need a massive influx of capital to handle things like datacenter build obligations (tens of billions of dollars). So I don't think it's crazy to think that prices might go up quite a bit. We've already seen waves of it, like last summer when Cursor suddenly became a lot more expensive (or less functional, depending on your perspective).
reply
Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0]. They lose money because of R&D (training the next, bigger models) and, I assume, investment in other areas like data centers.

Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Whether the subscription plans are profitable is less clear.

[0] https://youtu.be/GcqQ1ebBqkc?si=Vs2R4taIhj3uwIyj&t=1088

reply
We can also look at the inference costs at third-party providers.
reply
> Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply.

Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record, I'm not sure I'd just take his word for it.

As for Chinese models: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...

From the article:

> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.

> Anyway, I’m sure these numbers are great-oh my GOD!

> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.

reply
Their whole company has to be profitable, or at least not run out of money/investors. If you have no cash, you can't just point to one part of your business as being profitable, given that it will quickly become hopelessly out of date when other models overtake yours.
reply
Other models will only overtake yours as long as there is enough investor money, or enough margin from inference, for others to continue training bigger and bigger models.

We can see from the prices at third-party inference providers that inference is profitable enough to sustain even providers who are undoubtedly paying licensing/usage fees for the proprietary models they serve, so these models won't go away.

reply
Yeah, that's the whole game they're playing: compete until they can't raise more, and then start cutting costs and introducing new revenue sources like ads.

They spend money on growth and new models. At some point that will slow and then they’ll start to spend less on R&D and training. Competition means some may lose, but models will continue to be served.

reply
This is my understanding as well. If GPT made money, wouldn't the companies that run it be publicly traded?

Furthermore, the companies that are publicly traded show that, overall, these products are not economical. Meta and MSFT are great examples, though investors have recently reacted in opposite ways to their results. Notably, OpenAI and MSFT are more closely linked than any other pairing of a Mag7 company with an AI startup.

https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spe...

reply
Going public is not a trivial thing for a company to do. You may want to bring in additional facts to support your thesis.
reply
Going public also brings with it a lot of pesky reporting requirements and challenges. If it wasn't for the benefit of liquidity for shareholders, "nobody" would go public. If the bigger shareholders can get enough liquidity from private sales, or have a long enough time horizon, there's very little to be gained from going public.
reply
> From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference.

> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)

Yeah, exactly. So how the hell do the bloggers you read know that AI players are losing money? Are they whistleblowers, or are they pulling numbers out of their asses? Your choice.

reply
Some of it's whistleblowers, some of it is pretty simple math and analysis, and some of it's just common sense. Constantly raising money isn't sustainable and dramatically increases obligations. If these companies didn't need the cash to keep operating, they probably wouldn't be asking for tens of billions a year, because doing so creates profit expectations that simply can't be delivered on.
reply
Sam Altman is on record saying that OpenAI is profitable on inference. He might be lying, but it seems like an unlikely thing to lie about.
reply
Where did you get this notion from? You must not be old enough to know how subscription services play out. Ask your parents about their internet or mobile bills. Or at the very least, check Azure's, AWS's, and Netflix's historical pricing.

Heck, we were spoiled by "memory is cheap", but here we are today, wasting it at every turn as prices keep skyrocketing (PS: they ain't coming back down). If you can't see the shift toward forced subscriptions via technologies disguised as "security" (i.e. secure boot) and the monopolistic distribution channels (Apple, Google, Amazon) or the OEMs, you're running with blinders on. Computing's future, as it's heading, will be closed ecosystems that are subscription-serviced and mobile-only. They'll nickel-and-dime users for every nuanced freedom of expression they can.

Is it crazy to correlate the price of memory with our ability to run LLMs locally?

reply
> Ask your parents about their internet or mobile bills. Or at the very least, check Azure's, AWS's, and Netflix's historical pricing.

None of these went 10x. Actually, in terms of money per bit, the internet went to roughly 0.0001~0.001x for me. I lived through the dial-up era.
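
Back-of-the-envelope (the prices are illustrative assumptions, not historical data):

    # Cost per bit: dial-up vs. gigabit fiber, both priced per month.
    SECONDS_PER_MONTH = 30 * 24 * 3600

    dialup_gbits = 56_000 * SECONDS_PER_MONTH / 1e9   # ~56 kbit/s line
    dialup_cost_per_gbit = 20 / dialup_gbits          # assume $20/month

    fiber_gbits = 1e9 * SECONDS_PER_MONTH / 1e9       # ~1 Gbit/s line
    fiber_cost_per_gbit = 60 / fiber_gbits            # assume $60/month

    print(fiber_cost_per_gbit / dialup_cost_per_gbit)  # ~0.0002x

Even with generous assumptions for dial-up, the cost per bit dropped by three to four orders of magnitude.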

reply
Seems much more likely that the cost will go down 99%. With open-source models and architectural innovations, something like Claude will run on a local machine for free.
reply
How much RAM and SSD will be needed by future local inference, to be competitive with present cloud inference?
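
For scale, the dominant terms are easy to estimate; a rough sketch with toy numbers (not any particular model's specs):

    # Weights: parameter count times bytes per parameter.
    def weights_gb(params_billion, bits_per_param):
        return params_billion * 1e9 * (bits_per_param / 8) / 1e9

    # KV cache: 2 (keys and values) * layers * KV heads * head dim * tokens.
    def kv_cache_gb(layers, kv_heads, head_dim, context_tokens, bits=16):
        return 2 * layers * kv_heads * head_dim * context_tokens * (bits / 8) / 1e9

    # Hypothetical 70B model, 4-bit quantized:
    print(weights_gb(70, 4))                # ~35 GB for weights
    # Hypothetical config: 80 layers, 8 KV heads, 128-dim heads, 32k context:
    print(kv_cache_gb(80, 8, 128, 32_768))  # ~10.7 GB of KV cache

SSD needs are roughly the weight file size; RAM needs are weights plus KV cache plus overhead, so a machine with ~64 GB could plausibly host this hypothetical setup.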
reply
> what if the current prices really are unsustainable and the thing goes 10x?

What if a thermonuclear war breaks out? What's your backup plan for this scenario?

I genuinely can't tell which is more likely to happen in the next decade. If I have to guess, I'll say war.

reply
I asked Gemini Deep Research to project when that will likely happen, based on historical precedent. It guessed October 2027.
reply