I'm sorry to spoil it for you, but a Perl script was able to do all of that like... 10 years ago? The out-of-the-box Shotwell manages photos quite well without any intelligence. The problem, as people mentioned above, is SOTA models' cognitive and tooling abilities. Also, have you noticed how top-end Mac Studios got downgraded recently? They don't want you to have access to frontier models. And you will not have it. See Mythos as Exhibit A.
reply
> They don't want you to have access to frontier models. And you will not have it. See Mythos as Exhibit A.

"They" fully well know that they current frontier model are maybe 6 month ahead of what people will have access to without their control. See Deepseek as Exibit B

The reason you can't run these locally has more to do with the fact that those Mythos-sized models require extreme amounts of memory and processing power to run at acceptable speeds, and neither you nor I can afford to pay for those resources to run those models locally. A big part of it is that "running locally" means running on your own hardware, and for almost everyone that means running on hardware that will spend a big portion of its time just sleeping. Because data centers and providers have higher utilization rates, they can easily outpace you. That, and the fact that when they place an order, it's usually for hundreds of thousands of units.

reply
> The out-of-the-box Shotwell manages photos quite well without any intelligence.

This piqued my interest in how it does that, and after briefly checking the project, it seems it only has two features for automatic photo categorization: 1) it can group photos by date, and 2) it has face detection and recognition that uses trained weights (so ML "intelligence").

reply
Immich (the server) also has a whole host of ML features for classification.

I got away from Google Photos and upload to my own Immich instance.

I also use an open-source camera app from F-Droid to de-Google that whole path.

reply
The Mac Studio's disappearance is related to the fact that people now want it for the purpose of running local models. Supply and demand. Add to that the fact that Apple doesn't shift prices on released products, and it essentially became underpriced when large quantities of RAM exploded in price. For the price of 512GB of RAM alone you could get an M3 Ultra with 512GB of unified memory in a nice, quiet, and power-efficient package. With the bare RAM you'd still need to spend a few thousand more on a CPU/GPU, power supply, storage, and case.

There's also the fact that an M5 version is coming, and they likely know it's going to sell out on day one (I expect we'll see a price correction from Apple for higher-end configs of M5 Studios; the base price will probably stay the same), so they need to build up stock reserves.

reply
Do we even have decent OCR nowadays? Any free solutions?
reply
The latest rounds of open-weights vision-language models are incredibly good. Like, massively good. Open-weights vision capabilities trade blows with frontier models. Over the last few months I'd roughly rank capabilities as Gemini -> {ChatGPT and SOTA open-weights models} -> Claude.

qwen3.5-2b and qwen3.5-4b are great at document parsing. They can run on a CPU.

qwen3.6-27b and gemma4-31b are borderline better than the human eye in some cases. Their OCR isn't perfect, but they're seriously good. They can still run on the CPU but you'll be waiting minutes per document.

You can demand JSON, YAML, MD, or freeform text just by varying the prompt. Even if you have a custom template, you can just put that in the prompt and they'll do an OK-ish job.
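
To give a concrete (made-up) example, a prompt along these lines is usually enough:

    Extract the vendor, date, and total from this receipt.
    Respond with only JSON, in the form:
    {"vendor": "...", "date": "YYYY-MM-DD", "total": 0.00}

and the 27b/31b-class models will usually return something that parses on the first try; the smaller ones may need a retry loop.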

There are also models that aren't in the r/LocalLLaMA zeitgeist. IBM released a new 4b-parameter model for structured text extraction last week, and there's a sea of recent Chinese OCR models too.

IMO the open-weights models are so good that in a lot of cases it's not worth paying frontier labs for OCR purposes. The only barrier to entry is the effort to set up a pipeline, and having the spare CPU/GPU capacity.

reply
Many of the open-weights LLMs accept either text or images as input.

Besides those, there are a few smaller open-weights models dedicated to OCR tasks, for instance DeepSeek-OCR-2 and IBM granite-vision-4.1-4b. (They can be found on huggingface.co.)
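
Trying one of these is often just a few lines with the transformers library. A rough sketch, assuming the checkpoint supports the standard image-to-text pipeline (check each model card, since some of these ship custom loading code, and the repo id here is a guess based on IBM's naming):

    # rough sketch; verify the repo id and loading code on huggingface.co
    from transformers import pipeline

    ocr = pipeline("image-to-text", model="ibm-granite/granite-vision-4.1-4b")
    print(ocr("scan.png")[0]["generated_text"])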

The dedicated vision models can run on much cheaper hardware than the big models that process images in addition to text, including smartphones.

Similarly, besides the bigger multimodal models that can accept audio, images, or text as input, there are smaller open-weights models dedicated to speech recognition, e.g. Xiaomi MiMo-V2.5-ASR and IBM granite-speech-4.1-2b.

reply
The Qwen models not only have good OCR; they will also describe pictures to you.
reply
Anyone wanna do a quick offline MVP of a general vision assistant for the blind? We've had things like Google Lens for a while, but those are a bit vision- and touchscreen-centric.
reply
> Also, have you noticed how top-end Mac Studios got downgraded recently? They don't want you to have access to frontier models. And you will not have it.

Isn't that a function of RAM supply not being available now?

reply
OpenAI did buy out the RAM supply to block competition. Arguably local models are one of its (smaller) competitors.

Even if that weren't the case, every corp _needs_ you to be on a subscription.

reply
Huh? Why would Apple not want you to be able to run local models? They have very deliberately stayed the hell away from this space.
reply
The conspiracy angle here is not really relevant. RAM is expensive and they're gearing up for M5 Studios. It's not the Illuminati keeping better LLMs out of your hands.
reply
You think Apple doesn't want you to use local models?

That's an interesting way to view the world. I mean, utterly stupid as it is, but interesting.

But the previous sentence is even stupider (a Perl script 10 years ago could write code like Qwen does now?), so I guess at least it's consistent.

reply
I built my own IDE and run my own model specifically to have private agentic coding. I can still access model APIs, but I can be purely local if I want to. It's amazing.
reply
Curious, why did Zed with ACP not work for you?
reply
I'm just guessing, but an IDE that needs 3D acceleration just for its UI to run "smoothly"? That's ridiculous.

Who runs an IDE with LLM agents accessing their local filesystem on bare metal?

Or am I the only one who runs everything LLM-related in a VM just for development work? Because of Zed's genius decision, you need to pass your GPU through to the VM, and then some important features stop working, like snapshots. So you need a workaround for that too, etc.

Too much hassle, Zed is not for me.

But I'm anti-Apple, so maybe that's the reason :)

Btw, even the "ImHex" devs realized this, and they provide a version without acceleration for VM use. They use ImGui. Using that for a local desktop app UI is also ridiculous, imho. Whatever.

reply
I would imagine running a local LLM for development isn’t as popular as using a hosted provider. I don’t personally host a local model, but I have shared GPUs and storage volumes with VMs and I didn’t see it as that much of a hassle. What kinds of problems are you running into?

Doesn’t ghostty also use graphics acceleration? I was under the impression that rendering text is a relatively challenging graphics compute task.

reply
Multiple gazillion-dollar companies each seem to be spending to ensure that they alone pretty much dominate all knowledge work, with customers eating up their tokens like Cookie Monster. I wonder if any of them could survive as LLM providers if they not only failed to do that, but the entire industry ended up selling what the current Cookie Monster would call a "sometimes snack," for very special occasions?
reply
In my experience, once you get to ~30 gigs of RAM for a model like Gemma4, the rest of the 128GB of memory is simply nice to have. The speed and costs are what make it tough, though, because it's slower and more expensive than the same model served on a big accelerator card, and it's going to be worse than a frontier model.
reply
I wonder if it really needs to be worse. I am playing with the idea of fine tuning a model on my exact stack and coding patterns. I suspect I could get better performance by training “taste” into a model rather than breadth.
reply
I also wonder about JS only, Python only, etc models.

Maybe the future is a selection of local, specific stack trained models?

reply
These models' ability to generalise at coding will likely get worse if you remove high-quality training data like all of Python.
reply
Fine-tuning these models (at least with PPO or equivalent) requires even more VRAM than inference does, potentially 2-3 times more.
reply
You could use PEFT? Operating on only a subset of weights is fairly standard practice nowadays …
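
For anyone curious, the Hugging Face peft library makes this a few lines. A minimal LoRA sketch, assuming a causal LM (the model name is a placeholder, and r/alpha/target_modules are the knobs you'd tune):

    # minimal LoRA sketch with Hugging Face peft; model name is made up
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("your-base-model")
    cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                     target_modules=["q_proj", "v_proj"],
                     task_type="CAUSAL_LM")
    model = get_peft_model(base, cfg)
    model.print_trainable_parameters()  # typically well under 1% of the base

Since only the adapter weights get gradients, the optimizer state (where much of that 2-3x overhead lives) shrinks accordingly.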
reply
> It's here, right now.

I mean, I've been forcing my good old 1080 Ti to run local models since shortly after LLaMA was first leaked.

But I wouldn't say "local models are here" in the same way as "year of the Linux desktop!111"

Until someone can just go out and buy some sort of "AI pod" that they can take home, plug in, and hit one button on a mobile app to select a model (or even just hide models behind various personas), I wouldn't say it's quite there yet.

It's important that the average consumer can do it. I think the limitations there are: things are changing too quickly; RAM and compute components are exceedingly expensive now; and we're still waiting on better controls/harnesses for this stuff to stop consumers not just from shooting themselves in the foot, but from blowing their foot clean off.

It would be interesting to see a Taalas-like chip in a product, although there's so much going on at the moment with diffusion-based models and Google's Turboquant (which, as someone who has almost always had to run quantized models, makes a lot of sense to me).

reply
What is the use case you see for non-technical users self-hosting? I think it’s important that tools remain available but I don’t expect it to be adopted by “average consumers.”

I’m interested in self-hosting for privacy and control. I already owned the hardware I’m testing with, so my spend is limited to time and electricity.

The “LLM pods” you describe will be loaded with spyware and adware (see: Smart TVs), and average consumers won’t max their compute around the clock so naturally data centers are able to make more efficient use of hardware by maximizing utilization.

reply
There are local AI pods. They're like $2k for a low-end one.
reply
Can you share how you use it to categorize trip photos?
reply
I'm not sure there's a one-stop shop for this at the moment. I think the process is:

* Have a box with sufficient spare (V)RAM -- probably 8G for simple categorization with qwen3.5-4b, and 24G or more for more intelligent categorization with qwen3.6-27b or gemma4-31b.

* Download or compile llama.cpp. Choose a model, then choose one of the "quantized" builds that will actually fit on your hardware. There are literally hundreds to thousands of these per model on Hugging Face.

* Spend half a day tuning command-line parameters until llama.cpp doesn't crash.

* Watch llama.cpp regularly OOM itself, then put it in a systemd service with a memory limit so it doesn't take the entire machine down when it dies (a minimal unit sketch follows this list).

* Download all your photos to a folder.

* Start vibing a Python script to categorize your images by repeatedly prompting the LLM with each image in turn (see the script sketch after this list).

* Spend days tweaking/refining the prompt to try to get the LLM to actually do what you want.
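
For the systemd step, a minimal unit sketch (the paths, name, and memory cap are all placeholders; MemoryMax assumes cgroup v2):

    # /etc/systemd/system/llama.service -- hypothetical example
    [Unit]
    Description=llama.cpp server

    [Service]
    ExecStart=/opt/llama.cpp/llama-server -m /models/some-model.gguf --port 8080
    MemoryMax=48G
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target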
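
And the categorization script is maybe 30 lines if your categories are simple. A rough sketch against llama-server's OpenAI-compatible endpoint, assuming you started it with a vision model and its mmproj file (the categories, port, and paths are placeholders):

    # hypothetical sketch: sort photos by asking a local vision model
    import base64, pathlib
    import requests

    URL = "http://localhost:8080/v1/chat/completions"
    CATEGORIES = ["beach", "city", "food", "people", "other"]  # your labels here

    def categorize(photo: pathlib.Path) -> str:
        b64 = base64.b64encode(photo.read_bytes()).decode()
        resp = requests.post(URL, json={
            "messages": [{"role": "user", "content": [
                {"type": "text", "text":
                    "Classify this photo as exactly one of: "
                    + ", ".join(CATEGORIES) + ". Reply with one word only."},
                {"type": "image_url",
                 "image_url": {"url": "data:image/jpeg;base64," + b64}},
            ]}],
            "temperature": 0,
        }, timeout=600)
        answer = resp.json()["choices"][0]["message"]["content"].strip().lower()
        return answer if answer in CATEGORIES else "other"  # junk output fallback

    for photo in pathlib.Path("photos").glob("*.jpg"):
        dest = pathlib.Path("sorted") / categorize(photo)
        dest.mkdir(parents=True, exist_ok=True)
        photo.rename(dest / photo.name)

Temperature 0 plus a one-word answer keeps the failure mode boring: worst case, things land in "other".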

The endgame is one of:

* The local model categorizes your images. Yay.

* The local model is too slow and you give up. Boo.

* The local model is too slow, so you spend $1k-$10k on hardware. Your image categorization task becomes a cover story for buying new gear. Yay.

* The local model can't understand your categorization metric, so you give up. Boo.

* You eagerly await news of the next open model being released. Yay?

* You consider replacing your local model with a frontier model, but then you realize you'd be spending $500 to categorize your photos. Boo.

* You refuse to allow Google/Gemini/Anthropic to train on your nudes. Boo.

reply
I'm also interested in how to do this.
reply
Perhaps I am the odd one out here, but a small part of me wants to see what happens when you run a proprietary SOTA model on a laptop.
reply
I'm currently testing something like this just to see what happens. I have an old laptop with 4GB of RAM. I attached a USB drive with the Gemma 4 31B model (which is 32.6 GB). The laptop is now running llama.cpp and trying to respond to a prompt by streaming the model from disk.

The USB drive light is flickering, showing something is happening. It's been about 8 hours since I entered the prompt and I've gotten about 10 tokens back so far. I'm going to leave it running overnight and see what happens.

reply
Nice.

What did you use to do this: something standard like llama.cpp, something else like vLLM, or your own contraption?

reply
You burn your lap?
reply
Nothing special?

I mean, the inference engine might need some tweaks to support whatever compute is available. But then, if you put in a few terabytes of disk as swap and replace the RAM with bigger sticks where possible, it should work? Slowly, of course, but there's no reason it shouldn't.

reply
The big difference will be measuring seconds per token instead of tokens per second.
reply
Seconds per token is just fractional tokens per second ;)
reply
> fractional

Reciprocal?

reply
You can if you have enough RAM slots?
reply
Not sure if this is exactly the scenario you envision, but I run ComfyUI on an Acer Helios 300 laptop from four years ago. It has 16GB RAM and an NVIDIA GeForce RTX 2060 with 6144MiB of VRAM, and I've generated a few images using the "NetaYumev35_pretrained_all_in_one.safetensors" checkpoint at 10.6GB (well beyond the 6GB capacity of the RTX 2060 card). That being said, it takes more than 10 minutes to complete the task. Of course, I have to turn off all other apps and browser tabs, or hibernate them. If I don't, the laptop's fans begin to spin up like an airplane propeller. It's worth mentioning that I've tried to do this with other IDEs and all seem to fail with some error or another, usually an out-of-VRAM issue. I've only gotten it to work with ComfyUI.

I use an Anaconda environment (though I would have preferred a "uv" environment) on Linux and automate the startup sequence using the following script (start_comfy.sh), run from the terminal rather than manually activating the environment in that same terminal:

    #!/bin/bash
    #
    # temporary shell version
    eval "$(conda shell.bash hook)"  # make 'conda activate' usable in a script
    conda activate comfy-env
    comfy launch -- --lowvram --cpu-vae

Here are some of the images: https://imgbox.com/nqjYhdx3 https://imgbox.com/93vSWFic https://imgbox.com/qs1898dz

I'm hesitant to increase the sizes of the renders as that will surely stress my laptop's components.

reply
I'm not running local for exactly the same reason: to not stress my components. It seems we're in for a long haul with this AI bubble (can't wait for it to pop), so I need to make sure my hardware survives this madness, since I certainly can't afford to replace anything right now.
reply
This is my exact setup as well, and dear lord, Gemma is absolutely batshit insane. I'm trying to get a self-reflection and confidence loop going now, but it does feel like it's not the local resources that are the limit, it's the training. Dedicated coding or dedicated real-world-task models would be a good optimisation.
reply
I need to see these proper harnesses

I tried oMLX and OpenCode a few weeks ago, and the 65k context window was useless: it tried to analyze a very small codebase before going full-on agentic and ran out of context immediately.

I don't have time to tweak 1,000 permutations of settings just to re-prove that it's not as smart as Opus 4.6.

I need out-of-the-box multimodal behavior as simple as typing claude on the command line, and it's so not there yet.

But I'm open to seeing what people's workflows are.

reply
I'm running opencode with qwen3.6-35b-a3b at a 3-bit quant. I also have qwen3.5-0.8b for context compaction. I run with 128k context.

It's usable. I set it loose on the Postgres codebase and told it to find or build a performance benchmark for the bloom filter index and then identify a performance improvement. It took a long time (overnight), but it eventually presented an alternate hashing algorithm with experimental data on false-positive rate, insertion speed, and lookup speed. There wasn't a clear winner, but it was a reasonable find with rigorous data.

reply
Do you encounter looping issues at such low quants? How do you deal with those?
reply
I'm playing with a tape drive for backups, so I asked a local model to rewrite LTFS ( https://github.com/LinearTapeFileSystem/ltfs ) in Go.

I gave it the reference C implementation, the LTFS spec from SNIA, and asked it to use the C implementation to verify the correctness of the Go code.

LTFS is a pretty straightforward spec, so it made a very reasonable port within about 2 days. It's now working on implementing the iSCSI initiator (client) to speak with my tape drive directly, without involving the kernel.

Edit: the model is Qwen3.6-35B

reply
Hey man, you can just say "I'm lazy, so I'm staying with the cloud. If I wanted to use my brain, I wouldn't be using AI, gosh" - it's much shorter.
reply
Personal attacks are against the rules, by the way.
reply