8GB is not enough for complex reasoning, but you can do very small, simple things. Models like Whisper, SmolVLM, Qwen2.5-0.5B, Phi-3-mini, Granite-4.0-micro, Mistral-7B, Gemma3, and Llama-3.2 all run in very little memory. Tiny models can do a lot if you tune/train them. They also need to be used differently: system prompt preloaded with information, few-shot examples, reasoning guidance, a single-task purpose, and strict output guidelines. See https://github.com/acon96/home-llm for an example. For each small model, check whether Unsloth has a tuned version of it; that reduces your memory footprint and makes inference faster.
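
To make "used differently" concrete, here's a rough sketch of that pattern against an OpenAI-compatible local endpoint (Ollama, llama.cpp's server, and LM Studio all expose one). The URL, model tag, and JSON schema below are placeholders for whatever you actually run, not anything taken from home-llm:

    # Sketch only: tiny model + preloaded system prompt + few-shot + strict output.
    # Assumes an OpenAI-compatible local server (Ollama, llama-server, LM Studio)
    # at this URL and some small model pulled under this tag; adjust to your setup.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

    messages = [
        # The system prompt carries all the context the tiny model needs.
        {"role": "system", "content": (
            "You control smart-home lights. Reply ONLY with JSON like "
            '{"device": "<name>", "action": "on"|"off"}. No other text.'
        )},
        # Few-shot examples pin down the exact output format.
        {"role": "user", "content": "turn off the kitchen light"},
        {"role": "assistant", "content": '{"device": "kitchen", "action": "off"}'},
        {"role": "user", "content": "lights on in the bedroom"},
    ]

    resp = client.chat.completions.create(
        model="qwen2.5:0.5b",  # example tag; use whatever small model you pulled
        messages=messages,
        temperature=0,         # deterministic output helps strict formats
    )
    print(resp.choices[0].message.content)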

For your Mac, you can use Ollama or MLX (Apple Silicon specific; it needs a different engine and a different on-disk model format, but it's faster). Ramalama may help smooth over bugs or ease the process with MLX. Use either Docker Desktop or Colima for the VM + Docker.
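
If you go the MLX route, the mlx-lm Python package is the usual entry point. A minimal sketch (the repo name below is just an example of an MLX-converted 4-bit model; pick whatever fits your RAM):

    # Minimal mlx-lm sketch, Apple Silicon only (pip install mlx-lm).
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

    messages = [{"role": "user", "content": "Give me three dinner ideas."}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    print(generate(model, tokenizer, prompt=prompt, max_tokens=200))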

For today's coding & reasoning models, you need a minimum of 32GB of memory combined (GPU + system), and the more of it on the GPU side the better. Copying data between CPU and GPU is too slow, so the model needs to "live" in GPU memory. If it can't all fit there, your CPU has to work hard and you get a space heater. That Mac M1 will do 5-10 tokens/s with 8GB (CPU on full blast), or ~50 tokens/s with 32GB RAM (CPU idling). And now you know why there's a RAM shortage.
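
Back-of-envelope math for those numbers: decoding is memory-bound, so each generated token streams roughly the whole (quantized) model through memory, and tokens/s tops out around bandwidth divided by model size. The bandwidth figures below are approximate published specs and the efficiency factor is a guess:

    # Rough decode-speed estimate for a memory-bound model.
    def tokens_per_sec(model_gb: float, bandwidth_gbps: float, efficiency: float = 0.6) -> float:
        """Upper-bound-ish tokens/s: stream the whole model once per token."""
        return bandwidth_gbps * efficiency / model_gb

    model_gb = 4.0  # ~7B params at 4-bit quantization

    print(tokens_per_sec(model_gb, 68))    # base M1 (~68 GB/s)   -> ~10 tok/s
    print(tokens_per_sec(model_gb, 400))   # M1 Max (~400 GB/s)   -> ~60 tok/s
    print(tokens_per_sec(model_gb, 1008))  # RTX 4090 (~1 TB/s)   -> ~150 tok/s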

reply
There's no way around needing a powerful-enough system to run the model. So you either choose a model that can fit on what you have (a small model, or a quantised slightly larger one), or you get access to more powerful hardware, either by buying it or renting it. (IME you don't need Docker. For an easy start, just install LM Studio and have a play.)

I picked up a second-hand 64GB M1 Max MacBook Pro a while back, for not too much money, for exactly this kind of experimentation. It's sufficiently fast at running any LLM model that fits in memory, but the gap between those models and Claude is considerable. Still, this might be a path for you? It can also run all manner of diffusion models, but there the performance suffers (vs. an older discrete GPU) and you're sometimes waiting many minutes for an edit or an image.

reply
I didn't have much success until I bit the bullet and threw a GPU at the problem. Found an actually reasonably priced 20GB A4000 Ada-generation GPU on eBay and never looked back. I still can't run the insanely large models, but 20GB should hold me over for a while, and I didn't have to upgrade my ten-year-old Ivy Bridge-vintage homelab.
reply
Are Mac kernels optimized compared to CUDA kernels? I know the unified-memory GPU approach is inherently slower, but I thought a ton of the optimization happened at the kernel level too (CUDA itself is a moat).
reply
It depends on what you're doing. For token generation, compute-dense kernel optimization matters less (it's memory-bound) than latency optimizations elsewhere (data transfers, kernel invocations, etc.). And for those, Macs actually have a leg up on CUDA: Metal shader pipelines are largely optimized for latency (a.k.a. games), while CUDA kernels weren't until CUDA Graphs were introduced (and of course there are other issues).
reply
Mac kernels are almost always compute shaders written in Metal. That's the bare minimum of acceleration, done in a non-portable, proprietary graphics API. It's "optimized" in the loosest sense of the word, but extremely far from optimal relative to CUDA (or hell, even Vulkan Compute).

Most people will not choose Metal if they're picking between the two moats. CUDA is far and away the better compute platform, not to mention better supported by the community.

reply
The general rule of thumb is that you should feel free to quantize even as low as ~2 bits per weight on average if that lets you run a model with more parameters. Quantized models are far from perfect, but they're usually preferable to a smaller model run at higher precision. With 8GB usable, you could run models up to roughly 32B parameters at heavy quantization.
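
The arithmetic behind that claim, as a sketch (real files add overhead for embeddings, KV cache, and the runtime, so leave headroom):

    # Approximate weight footprint: parameter count x bits per weight.
    def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    print(model_size_gb(32, 2))  # ~32B at ~2-bit -> ~8 GB (very tight on 8 GB)
    print(model_size_gb(14, 4))  # ~14B at 4-bit  -> ~7 GB
    print(model_size_gb(7, 8))   # ~7B at 8-bit   -> ~7 GB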
reply
I think a better bet is to ask on reddit.

https://www.reddit.com/r/LocalLLM/

Every time I ask the same thing here, people point me there.

reply
With only 8 GB of memory, you're going to be running a really small quant, and it's going to be slow and lower quality. But yes, it should be doable. In the worst case, find a tiny gguf and run it on CPU with llamafile.
reply
This is the easiest setup on a Mac. You need at least 16GB on a MacBook:

https://github.com/ggml-org/llama.cpp/discussions/15396

reply
Maybe check out Docker Model Runner -- it's built on llama.cpp (in a good way -- not like Ollama) and handles, I think, most of what you're looking for?

https://www.docker.com/blog/run-llms-locally/

As for finding good models to run locally, I came across this site recently and liked the data it provides:

https://localclaw.io/

reply
I tried to run some models on my M1 Max (32 GB) Mac Studio and it was a pretty miserable experience. Slow performance and awful results.
reply