I think Apple will become increasingly draconian about LLMs. Very soon people won't need to buy many of their apps. They can just make them. This threatens Apple's entire business model.
reply
It came out in the Epic trial that 90% of App Store revenue comes from in-app purchases of loot boxes and other pay-to-win mechanics.

Apple doesn’t care about revenue from a random TODO app.

reply
Truly a K-shaped economy we live in.
reply
But… why would I put the effort into getting an LLM to make me an app when there's an existing app that I don't have to maintain? I don't want to have to make every app I use.
reply
There's a huge difference between local apps that cost a one-time $3-10 and apps that ask for a subscription of $5-20 per month. The first category will remain and might become more popular as quality increases; the second will be obliterated, as the value isn't there, even if all the buyers are rich. The second group takes up a much larger part of the pie than the first, though, so Apple's revenue will decrease.
reply
All apps that don't have a tangible component, legal protection (like music, TV, movies), or a personality behind them will trend toward $0.
reply
Apple's business model isn't really affected by 2% of its users choosing not to spend $100/yr on the App Store. That isn't even a blip on the radar.

A kid playing Roblox can spend more than that in a good weekend.

reply
At WWDC in a couple of months, they are said to be introducing a framework that makes it easier to integrate modern LLMs into apps.

https://9to5mac.com/2026/03/01/apple-replacing-core-ml-with-...

reply
VibeOS. It’s just an LLM from which all other userspace is vibed.
reply
vibe-ls(1) - often list directory contents, but maybe do something else.

Where can I get this amazing technology?

reply
I guess I'm not seeing why I would want to abandon most (if not all) of my simple, small, purpose-built apps, which always do the exact thing I want, for a private company's ever-changing LLM that will approximate what I'm asking, approximate its response, and use far more resources doing it.

I’m sure there are things on my phone it could replace (though I struggle to think of them), but there are plenty it can’t. My Blackmagic camera app, web browsers, LocalSend, Libby/Hoopla…

I can’t really think of any apps I use every day, or even every week, that an LLM would replace. I’m not coding on my smartphone, and aside from that, an LLM is basically a more complex, somewhat inconsistent search engine experience for most people right now. Siri didn’t replace any of my apps, for instance. Why would ChatGPT?

TL;DR: what apps would an LLM replace on my iPhone?

reply
Though of course Apple's rules aren't always consistent, I have two separate apps currently on my phone that can run (and are running) this: Google's Edge Gallery and Locally AI.
reply
They've been slowly cutting them off from updates and/or taking them off the App Store entirely.

See Anywhere and Replit. Anywhere was the #1 or #2 app and was taken off the App Store entirely before being put back on and then taken off again.

Last I checked, Replit hasn't received an update on the iOS App Store in over two months because reviews keep denying it.

reply
It can't be just a SaaSpocalypse. LLMs with the right harness could obliterate many of the TODO-and-adjacent apps with a general assistant.

But it's more likely that walled garden + security theatre will keep them from allowing outside apps.

reply
I wouldn't trust AI to run a TODO list, especially weak models. They can hallucinate tasks, forget to remind you, etc.
reply
LLMs are stateless. But given an actual database of task-shaped items and some work, I could see the potential.

With a canonical source of truth, and set input/output expectations, the potential blast radius is quite small.
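A minimal sketch of what that could look like (hypothetical names, no actual model call shown): the LLM holds no task state itself; a database is the canonical source of truth, and the model is only allowed to invoke a couple of narrowly defined tools against it, which is what keeps the blast radius small.

```python
import sqlite3

# Canonical source of truth: a plain SQLite table, not the model's context.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, done INTEGER DEFAULT 0)"
)

def add_task(title: str) -> int:
    """Tool exposed to the model: insert one task, return its id."""
    cur = conn.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
    conn.commit()
    return cur.lastrowid

def list_tasks() -> list[tuple]:
    """Tool exposed to the model: read back the canonical state."""
    return conn.execute("SELECT id, title, done FROM tasks ORDER BY id").fetchall()

# In a real harness, the model's output would be parsed into exactly one of
# these calls and anything else rejected. Here we just call them directly:
add_task("buy milk")
print(list_tasks())  # [(1, 'buy milk', 0)]
```

Even if the model hallucinates, the worst it can do is insert a well-formed row; the set I/O expectations do the containment.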

reply
And the end result is…? What? A todo app that takes 16GB of RAM?
reply
Nothing that Mac and Windows users aren't already used to.
reply
It’s tempting to be flippant about macOS/Windows, but in all seriousness, the resources required for an LLM to do the job of a typical lighter-weight app are a serious consideration. No amount of bloat matches what an LLM needs.
reply
> No amount of bloat matches what an LLM needs.

I don't think that's necessarily true. For instance, LinkedIn uses more memory than Gemma E2B inference does.

reply
LinkedIn is an entirely different category, and an extreme case at that. We’re not talking about LLMs replacing LinkedIn either. It’s an entirely different comparison/discussion.
reply
Finally, we've fully documented that the Singularity is actually just bloated software.
reply
In case someone doesn't know, this is the full text of the rule:

> 2.5.2 Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app, including other apps. Educational apps designed to teach, develop, or allow students to test executable code may, in limited circumstances, download code provided that such code is not used for other purposes. Such apps must make the source code provided by the app completely viewable and editable by the user.

Why is this relevant to local LLMs in an app?

reply
A vibe coding app that generates new executable code and runs it would:

> execute code which introduces or changes features or functionality of the app,
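To make the conflict concrete, here is a tiny, hypothetical sketch (in Python for brevity, though an iOS app would be Swift) of exactly the pattern rule 2.5.2 forbids: taking model-generated source text and executing it, so the app gains behavior that was never in the reviewed bundle.

```python
# Text a model just generated at runtime -- this code never shipped
# with the app and was never seen by App Review.
generated = '''
def greet(name):
    return f"hello, {name}"
'''

namespace = {}
exec(generated, namespace)          # executes downloaded/generated code...
print(namespace["greet"]("world"))  # ...which adds a brand-new feature
```

Whether the code comes over the network or out of an on-device model makes no difference to the rule as written: either way, the app is executing code that "introduces or changes features or functionality."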

reply
Is this an issue with Cactus compute stuff as well?
reply
What is your app doing? Just LLM inference?
reply
It's a custom agent harness with on-device models and the ability to swap between models.

Basically, a "toy" app to showcase where we are with coding agents on-device.

reply
Use of the LLMs to do what?
reply
Seriously, how do people put up with being nannied by Apple?

Come on, folks: their hardware may be nice, but supporting them is not worth it.

reply