The key difference is that MLX's array model assumes unified memory from the ground up. llama.cpp's Metal backend works fine but carries abstractions from the discrete GPU world — explicit buffer synchronization, command buffer boundaries — that are unnecessary when CPU and GPU share the same address space. You'll notice the gap most at large context lengths where KV cache pressure is highest.
Insightful comment, thanks!
How many tokens per second?
They initially messed up this launch and overwrote some of the GGUF models in their library, making them non-downloadable on platforms other than Apple Silicon. Hopefully that gets fixed.