This is why they don’t advertise which consumer hardware it can run on: the model they release directly, the one that delivers these results, cannot fit on your average consumer system.
Most consumers don’t run the model they release directly. They run a quantized model that uses a lower number of bits per weight.
The quantizations come with tradeoffs. You will not get the exact results they advertise using a quantized version, but you can fit it on smaller hardware.
The previous 27B Qwen3.5 model had reasonable performance down to Q5 or Q4, depending on your threshold for quality loss. This was usable on a unified memory system (Mac, Strix Halo) with 32GB of headroom, so generally a 64GB Mac. It could also run on an Nvidia 5090 with 32GB of VRAM, or on a pair of 16GB or 24GB GPUs, which would not run as fast due to the split.
Watch out for some of the claims about running these models on iPhones or smaller systems. You can use a lot of tricks and heavy quantization to run it on very small systems, but the quality of output will not be usable. There is a trend of posting “I ran this model on this small hardware” repos for social media bragging rights, but the output isn’t actually good.
Say you have a GPU with 20GB of VRAM. You're probably going to be able to run all the 3-bit quantizations with no problem, but which one do you choose? Unsloth offers[1] four of them: UD-IQ3_XXS, Q3_K_S, Q3_K_M, UD-Q3_K_XL. Will they differ significantly? What are each of them good at? The 4-bit quantizations will be a "tight squeeze" on your 20GB GPU. Again, Unsloth steps up to the plate with seven(!!) choices: IQ4_XS, Q4_K_S, IQ4_NL, Q4_0, Q4_1, Q4_K_M, UD-Q4_K_XL. Holy shit where do I even begin? You can try each of them to see what fits on your GPU, but that's a lot of downloading, and then...
Once you [guess and] commit to one of the quantizations and do a gigantic download, you're not done fiddling. You need to decide at the very least how big a context window you need, and this is going to be trial and error. Choose a value, try to load the model, if it fails, you chose too large. Rinse and repeat.
Then finally, you're still not done. Don't forget the parameters: temperature, top_p, top_k, and so on. It's bewildering!
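If you want to shortcut some of that trial and error, you can estimate the total before downloading: weights (roughly the GGUF file size) plus KV cache plus a buffer for compute and overhead. A minimal sketch of that arithmetic; the layer/head numbers below are illustrative placeholders, not values for any specific quant, so read the real ones off the model card:

```python
# Rough "will it fit?" estimate: GGUF size + KV cache + overhead vs. available VRAM.
# All numbers are illustrative placeholders; read the real ones from the model card.

def kv_cache_gib(ctx_tokens, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # K and V, per layer, per token: n_kv_heads * head_dim elements each (f16 = 2 bytes).
    per_token_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return ctx_tokens * per_token_bytes / 1024**3

def fits(vram_gib, gguf_gib, ctx_tokens, n_layers, n_kv_heads, head_dim, overhead_gib=1.5):
    need = gguf_gib + kv_cache_gib(ctx_tokens, n_layers, n_kv_heads, head_dim) + overhead_gib
    return need, need <= vram_gib

# Example: a ~15.6 GiB 4-bit quant, hypothetical 48 layers / 8 KV heads / 128 head dim,
# 32k context, on a 20 GiB card.
need, ok = fits(20, 15.6, 32_768, 48, 8, 128)
print(f"estimated {need:.1f} GiB -> {'fits' if ok else 'too big'}")
```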
1. Auto-sets the best official parameters for each model
2. Auto-picks the largest quant that can fit on your PC / Mac etc.
3. Auto-determines the max context length
4. Auto-heals tool calls, and provides python & bash + web search :)
There are actually two problems with this:
First, the 3-bit quants are where the quality loss really becomes obvious. You can get it to run, but you’re not getting the quality you expected. The errors compound over longer sessions.
Second, you need room for context. If you have become familiar with the long 200K contexts you get with SOTA models, you will not be happy with the minimal context you can fit on a card with 16-20GB of VRAM.
The challenge for newbies is learning to identify the difference between being able to get a model to run, and being able to run it with useful quality and context.
llama_kv_cache: size = 5120.00 MiB (262144 cells, 10 layers, 4/1 seqs), K (f16): 2560.00 MiB, V (f16): 2560.00 MiB
The MXFP4-quantized variant from Unsloth just fits my 5090 with 32GB VRAM at 256k context. Meanwhile, here's the same for Qwen 3.6 27B:
llama_kv_cache: size = 3072.00 MiB ( 49152 cells, 16 layers, 4/1 seqs), K (f16): 1536.00 MiB, V (f16): 1536.00 MiB
So 16 tokens per MiB for the 27B model vs about 51 tokens per MiB for the 35B MoE model. I went for the Q5 UD variant for the 27B, so I could just fit 48k context, though it seems if I went for the Q4 UD variant I could get 64k context.
That said I haven't tried the Qwen3.6 35B MoE to figure out if it can effectively use the full 256k context, that varies from model to model depending on the model training.
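For anyone wanting to sanity-check those figures, the "tokens per MiB" numbers fall straight out of the log lines quoted above:

```python
# Tokens of context per MiB of KV cache, taken from the llama_kv_cache lines above.
moe_cells, moe_mib = 262_144, 5120.0      # 35B MoE log line (10 layers in cache)
dense_cells, dense_mib = 49_152, 3072.0   # 27B dense log line (16 layers in cache)

print(moe_cells / moe_mib)      # ~51.2 tokens per MiB
print(dense_cells / dense_mib)  # 16.0 tokens per MiB
```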
My R9700 does seem to have an annoying firmware or driver bug[0] that causes the fan to usually be spinning at 100% regardless of temperature, which is very noisy and wastes like 20+ W, but I just moved my main desktop to my basement and use an almost silent N150 minipc as my daily driver now.
[0] Or manufacturing defect? I haven't seen anyone discussing it online, but I don't know how many owners are out there. It's a Sapphire fwiw. It does sometimes spin down, the reported temperatures are fine, and IIRC it reports the fan speed as maxed out, so I assume software bug where it's just not obeying the fan curve
It doesn't happen with Vulkan backends, so that is what I have been using for my two dual R9700 hosts.
EDIT: The bug is closed but there were mentions of the issue still occurring after closure, so who knows if it is really fixed yet.
typically those dense models are too slow on Strix Halo to be practical, expect 5-7 tps
you can get an idea by looking at other dense benchmarks here: https://strixhalo.zurkowski.net/experiments - i'd expect this model to be tested here soon, i don't think i will personally bother
EDIT: I'm running the Unsloth Qwen3.6-27B-Q6_K GGUF on a Corsair Strix Halo 128GB I bought summer 2025.
https://huggingface.co/unsloth/Qwen3.6-27B-GGUF/blob/main/Qw...
GTR 9 Pro, "performance" profile in BIOS, GTT instead of GART, Fedora 44
That said, it was my favorite model when I valued output quality above all else, at least up until the new Qwen 3.6 27B, which I'm currently playing with.
I suspect I will like Qwen 3.6 122B A10B a LOT, maybe even better than M2.7.
(Intel Core i7 4790K @ 4 Ghz, nVidia GTX Titan Black, 32 GB 2400 MHz DDR3 memory)
Edit: Just tested the new Qwen3.6-27B-Q5_K_M. Got 1.4 tokens per second on "Create an SVG of a pelican riding a bicycle." https://gist.github.com/Wowfunhappy/53a7fd64a855da492f65b4ca...
Making the right model pick is one of the key problems for a local user. Do you have any references where one can see a mapping of problem query to model response quality?
Otherwise no need for full fp16, int8 works 99% as well for half the mem, and the lower you go the more you start to pay for the quants. But int8 is super safe imo.
In that sense, how long you'd need to wait to get, say, ~20 tok/s... maybe never.
(save a significant firmware update / translation layer)
Speculative decoding/DFlash will help with it, but YMMV.
Edit: Missed the part that this is an A32B MoE, which drastically reduces the amount of reads needed. Seems 20 t/s should be doable with 1TB/s memory (like a 3090).
You absolutely do NOT need a $3000 Strix Halo rig or a $4000 Mac or a $9000 RTX 6000 or "multiple high memory consumer GPUs" to run this model at extremely high accuracy. I say this as a huge Strix Halo fanboy (Beelink GTR 9 Pro), mind you. Where Strix Halo is more necessary (and actually offers much better performance) are larger but sparse MoE models - think Qwen 3.5 122B A10B - which offers the total knowledge (and memory requirements) of a 122B model, with processing and generation speed more akin to a 10B dense model, which is a big deal with the limited MBW we get in the land of Strix Halo (256 GB/s theoretical, ~220 GB/s real-world) and DGX Spark (273 GB/s theoretical - not familiar with real-world numbers specifically off the top of my head).
I would make the argument, as a Strix Halo owner, that 27B dense models are actually not particularly pleasant or snappy to run on Strix Halo, and you're much better off with those larger but sparse MoE models with far fewer active parameters on such systems. I'd much rather have an RTX 5090, an Arc B70 Pro, or an AMD AI PRO R9700 (dGPUs with 32GB of GDDR6/7) for 27B dense models specifically.
That said, my Strix Halo rig only has PCIe 4.0 for my NVMe, and I'm using a 990 Evo that has poor sustained random read, being DRAM-less. My effective read speeds from disk were averaging around 1.6-2.0 GB/s. I tried unsloth's K2.5, even in IQ2_XXS at "just" 326 GB, with ~64 GB worth of layers in the iGPU and the rest free for KV cache + checkpoints. Even then, that left over 250 GB of weights streaming at ~2 GB/s, so I was getting 0.35 PP tok/s and 0.22 TG tok/s.
I could go a little faster with a better drive, or a little faster still by dropping in two of them in RAID0, but it would still be on the order of sub-1 tok/s for both PP (compute limited) and TG (bandwidth limited).
This is not a little faster, but 10 times faster than on your system. So a couple of tokens per second generation speed should be achievable.
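A rough sketch of that back-of-envelope, assuming generation is purely limited by how many bytes of weights must be read per token (the bytes-per-token figure is back-solved from the ~0.22 tok/s at ~2 GB/s reported above, not measured):

```python
# Rough TG estimate when expert weights stream from SSD:
# tokens/s ~= read bandwidth / bytes of weights read per token.
def tg_tok_s(read_gb_s, gb_read_per_token):
    return read_gb_s / gb_read_per_token

gb_per_token = 2.0 / 0.22   # ~9 GB/token, back-solved from the numbers reported above

print(tg_tok_s(2.0, gb_per_token))    # ~0.22 tok/s on a ~2 GB/s DRAM-less PCIe 4.0 drive
print(tg_tok_s(20.0, gb_per_token))   # ~2.2 tok/s at ~20 GB/s, i.e. a couple of tokens per second
```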
Nowadays even many NUCs or NUC-like mini-PCs have such SSD slots.
I have actually started working on optimizing such an inference system, so your data is helpful for comparison.
While many other NUCs may support them, what most of them lack compared to Strix Halo is a 128 GB pool of unified LPDDR5x-8000 on a 256 bit bus and the Radeon 8060S iGPU with 40 CU of RDNA 3.5, which is roughly equivalent in processing power to a laptop 4060 or desktop 3060.
The Radeon 780M and Radeon 890M integrated graphics that come on most AMD NUCs don't hold a candle to Strix Halo's 8060S, and what little you'd gain in this narrow use case with PCIe gen 5, you'd lose a lot in the more common use cases of models that can fit into a 128 GB pool of unified memory, and there are some really nice ones.
Also, the speeds you're suggesting seem rather optimistic. Gen 5 drives, as I understand, hit peak speeds of about 14-15 GB/s each (so about 28-30 GB/s with two in RAID0), but that's peak sequential reads, which is reflective of neither sustained reads nor the random-read workloads that dominate reading model weights.
Maybe there are some Intel NUCs that compete in this space that I'm less up to speed with which do support PCIe 5. I know Panther Lake costs about as much to manufacture as Strix Halo, and while it's much more power efficient and achieves a lot more compute per Xe3 graphics core than Strix Halo achieves per RDNA 3.5 CU, the Panther Lake parts that are actually shipping come with so many fewer Xe3 cores that it's still a weaker system overall.
Maybe DGX Spark supports PCIe 5.0; I don't own one and am admittedly not as familiar with that platform either, though it's worth mentioning that the price gap between Strix Halo and DGX Spark at launch ($2000 vs $4000) has closed a bit (many Strix Halo systems run $3000 now, vs $4700 for DGX Spark, and I think some non-Nvidia GB10 systems are a bit cheaper still).
If you use a bigger model and your performance becomes limited by the SSD throughput, then a slower CPU and GPU will not affect the performance in an optimized implementation, where weights are streamed continuously from the SSDs and all computations are overlapped with the transfers.
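To illustrate what "overlapped" means here, a minimal double-buffering sketch (purely illustrative, not how llama.cpp is structured; a real implementation would use async I/O and pinned GPU buffers):

```python
# Double-buffered weight streaming: read the next chunk from SSD while computing on the current one.
import threading, time

def stream_and_compute(chunks, read_chunk, compute_chunk):
    buf = read_chunk(chunks[0])                    # prefetch the first chunk
    for nxt in chunks[1:]:
        result = {}
        t = threading.Thread(target=lambda c=nxt: result.setdefault("buf", read_chunk(c)))
        t.start()                                  # I/O for the next chunk...
        compute_chunk(buf)                         # ...overlaps with compute on the current one
        t.join()
        buf = result["buf"]
    compute_chunk(buf)                             # last chunk

# Toy usage: "reads" are just sleeps, "compute" prints the chunk id.
stream_and_compute(list(range(4)),
                   lambda c: (time.sleep(0.01), c)[1],
                   lambda b: print("compute", b))
```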
I have an ASUS NUC with Arrow Lake H and 2 SSDs, one PCIe 5.0 and one PCIe 4.0. I also have a Zen 5 desktop, which like most such desktops also has 2 SSDs, one PCIe 5.0 and one PCIe 4.0. Many Ryzen motherboards, including mine, allow multiple PCIe 4.0 SSDs, but those do not increase the throughput, because they share the same link between the I/O bridge and the CPU.
So with most cheap computers you can have 1 PCIe 5.0 SSD + 1 PCIe 4.0 SSD. With PCIe 4.0, it is easy to find SSDs that reach the maximum throughput of the interface, i.e. between 7 and 7.5 GB/s. For PCIe 5.0, the throughput depends on how expensive the SSD is and on how much power it consumes, from only around 10 GB/s up to the interface limit, i.e. around 15 GB/s.
With SSDs of different speeds, RAID0 is not appropriate; instead, the interleaving between weights stored on one SSD and the other must be done in software, i.e. one third stored on the slower SSD and two thirds on the faster SSD.
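A tiny sketch of that split, assuming you size each drive's share by its measured throughput so both finish a layer's read at roughly the same time:

```python
# Split weight reads across two SSDs in proportion to their measured throughput.
def split_shares(fast_gb_s, slow_gb_s):
    total = fast_gb_s + slow_gb_s
    return fast_gb_s / total, slow_gb_s / total

fast_share, slow_share = split_shares(14.0, 7.0)   # e.g. a PCIe 5.0 drive + a PCIe 4.0 drive
print(f"fast drive: {fast_share:.0%}, slow drive: {slow_share:.0%}")   # 67% / 33%
```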
A Zen 5 desktop with a discrete GPU is faster than Strix Halo when not limited by the main memory interface, but in the case when the performance is limited by the SSDs throughput I bet that even the Intel NUC can reach that limit and a faster GPU/CPU combo would not make a difference.
If I really feel like I needed larger models locally (I don't, the 120/122B A10/12B models are awesome on my hardware), I think I'd rather just either pony up for a used M3 Ultra 512GB, wait for an M5 Ultra (hoping they bring back 512GB config on new setup), or do some old dual socket Xeon or Epyc 8/12-channel DDR4 setup where I can still get bandwidth speeds in the hundreds of GB/s.
What kinds of models are you running over 128GB, and what kind of speeds are you seeing, if you don't mind me asking?
I have an Epyc server with 128 GB of high-throughput DRAM, which also has 2 AMD GPUs with 16 GB of DRAM each.
Until now I have experimented only with models that can fit in this memory, e.g. various medium-size Qwen and Gemma models, or gpt-oss.
But I am curious about how bigger models behave, e.g. GLM-5.1, Qwen3.5-397B-A17B, Kimi-K2.6, DeepSeek-V3.2, MiniMax-M2.7. I am also curious about how the non-quantized versions of the models with around 120B parameters behave, e.g. such versions of Nemotron and Qwen. It is said that quantization to 8 bits or even to 4 bits has negligible effects, but I want to confirm this with my own tests.
There is no way to test big models or non-quantized medium models at a reasonable cost, other than with weights read from SSDs. For some tasks, it may be preferable to use a big model at a slow speed, if that means you need fewer attempts to obtain something useful. For a coding assistant, it may be possible to batch many tasks, which will progress simultaneously during a single pass over the SSD data.
For now I am studying llama.cpp in order to determine how it can be modified to achieve the maximum performance that could be reached with SSDs.
Because dense models degrade so severely, I rarely bench them past 32k-64k; however, I did find a Gemma4 31B bench I had done: down to 22 tok/s PP speed and 6 tok/s TG speed at 128k.
Nemotron models specifically, because of their Mamba2 hybrid SSM architecture, scale exceptionally well, and I have benchmarks for 200k, 300k, 400k, 500k, and 600k for Nemotron 3 Super. I will use depth: PP512/TG128 for simplicity.
| depth | pp512 (t/s) | tg128 (t/s) |
| ---: | ---: | ---: |
| 100k | 206 | 16 |
| 200k | 136 | 16 |
| 300k | 95 | 14 |
| 400k | 61 | 13 |
| 500k | 45 | 13 |
| 600k | 36 | 12 |
Seems like nobody wants to admit they exclude the working class from the ride.
llama-server \
-hf unsloth/Qwen3.6-27B-GGUF:Q4_K_M \
--no-mmproj \
--fit on \
-np 1 \
-c 65536 \
--cache-ram 4096 -ctxcp 2 \
--jinja \
--temp 0.6 \
--top-p 0.95 \
--top-k 20 \
--min-p 0.0 \
--presence-penalty 0.0 \
--repeat-penalty 1.0 \
--reasoning on \
--chat-template-kwargs '{"preserve_thinking": true}'
The 35B-A3B model is at ~25 t/s. For comparison, on an A100 (~RTX 3090 with more memory) they run at 41 t/s and 97 t/s respectively. I haven't tested the 27B model yet, but 35B-A3B often goes off the rails after 15k-20k tokens of context. You can have it do basic things reliably, but certainly not at the level of "frontier" models.
(Btw I believe the "--jinja" flag is by default true since sometime late 2025, so not needed anymore)
| model | size | params | backend | threads | test | t/s |
| ------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| qwen35 27B Q4_K_M | 15.65 GiB | 26.90 B | BLAS,MTL | 4 | pp512 | 61.31 ± 0.79 |
| qwen35 27B Q4_K_M | 15.65 GiB | 26.90 B | BLAS,MTL | 4 | tg128 | 5.52 ± 0.08 |
| qwen35moe 35B.A3B Q3_K_M | 15.45 GiB | 34.66 B | BLAS,MTL | 4 | pp512 | 385.54 ± 2.70 |
| qwen35moe 35B.A3B Q3_K_M | 15.45 GiB | 34.66 B | BLAS,MTL | 4 | tg128 | 26.75 ± 0.02 |
So ~60 for prefill and ~5 for output on 27B, and about 5x that on 35B-A3B. Sure, it's an order of magnitude faster (10x on Apple Metal?), but there's also an order of magnitude more tokens to process, especially for tasks involving summarization of some sort.
But point taken that the parent numbers are probably decode
* Specifically, Mac metal, which is what parent numbers are about
It's frustrating when trying to find benchmarks because almost everyone gives decode speed without mentioning prefill speed.
Storing an LRU KV cache of all your conversations, both in memory and on (plenty fast enough) SSD, especially including the fixed agent context every conversation starts with, means we go from "painfully slow" to "faster than using Claude" most of the time. It's kind of shocking this much perf was lying on the ground waiting to be picked up.
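A minimal sketch of the shape of that idea, keyed on the prompt prefix (illustrative only; servers like llama.cpp and vLLM implement prefix/prompt caching internally, this just shows the bookkeeping):

```python
# LRU cache of prefilled KV states keyed by prompt prefix, so the fixed agent preamble
# and earlier conversation turns never get re-prefilled. Illustrative sketch only.
from collections import OrderedDict
import hashlib

class KVCacheLRU:
    def __init__(self, max_entries=32):
        self.entries = OrderedDict()
        self.max_entries = max_entries

    @staticmethod
    def _key(prefix_text: str) -> str:
        return hashlib.sha256(prefix_text.encode()).hexdigest()

    def get(self, prefix_text):
        k = self._key(prefix_text)
        if k in self.entries:
            self.entries.move_to_end(k)          # mark as most recently used
            return self.entries[k]               # reuse the prefilled KV state
        return None

    def put(self, prefix_text, kv_state):
        k = self._key(prefix_text)
        self.entries[k] = kv_state
        self.entries.move_to_end(k)
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)     # evict LRU entry (or spill it to SSD)

cache = KVCacheLRU()
cache.put("SYSTEM PROMPT + tool definitions ...", kv_state={"tokens": 4096})
print(cache.get("SYSTEM PROMPT + tool definitions ...") is not None)  # True: no re-prefill needed
```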
Open models are still dumber than leading closed models, especially for editing existing code. But I use it as essentially free "analyze this code, look for problem <x|y|z>" which Claude is happy to do for an enormous amount of consumed tokens.
But speed is no longer a problem. It's pretty awesome over here in unified memory Mac land :)
I am wondering how to measure that anyway.
I tried the other qwen models and the reasoning stuff seems to do more harm than good.
For a more detailed analysis, there are several online VRAM calculators. Here's one: https://smcleod.net/vram-estimator/
If you have a huggingface account, you can set your system configuration and then you get little icons next to each quant in the sidebar. (Green: will likely fit, Yellow: Tight fit, Red: will not fit)
Further, t/s depends greatly on a lot of different factors, the best you might get is a guess based on context size.
One thing about running local LLMs right now, is that there are tradeoffs literally everywhere and you have to choose what to optimize for down to the individual task.
For example, the one you linked, when I provide a Qwen3.5 27B Q_4_M GGUF [0], says that it will require 338 GB of memory with 16-bit kv cache. That is wrong by over an order of magnitude.
[0] https://huggingface.co/bartowski/Qwen_Qwen3.5-27B-GGUF/resol...
It's a shame that search is so polluted these days that it's impossible to find good tools like yours.
"--tensor-parallel-size", "2" - spread the LLM weights over 2 GPU's available
"--max-model-len", "90000" - I've capped context window from ~256k to 90k. It allows us to have more concurrency and for our use cases it is enough.
"--kv-cache-dtype", "fp8_e4m3", - On an L4 cuts KV cache size in half without a noticeable drop in quality, does not work on a5000, as it has no support for native FP8. Use "auto" to see what works for your gpu or try "tq3" once vllm people merge into the nightly.
"--enable-prefix-caching" - Improves time to first output.
"--speculative-config", "{\"method\":\"qwen3_next_mtp\",\"num_speculative_tokens\":2}", - Speculative mutli-token prediction. Qwen3.5 specific feature. In some cases provides a speedup of up to 40%.
"--language-model-only" - does not load vision encoder. Since we are using just the LLM part of the model. Frees up some VRAM.
Regarding that last option: speculation helps max concurrency when it replaces many memory-expensive serial decode rounds with fewer verifier rounds, and the proposer is cheap enough. It hurts when you are already compute-saturated or the acceptance rate is too low. Good idea to benchmark a workload with and without speculative decoding.
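A quick way to do that comparison is to time an identical workload against the server with and without the speculative config. A rough sketch, assuming vLLM's OpenAI-compatible endpoint on localhost; the URL, model name, and prompts are placeholders:

```python
# Time completions against an OpenAI-compatible server (e.g. vLLM) to compare throughput
# with and without --speculative-config. URL, model name, and prompts are placeholders.
import time, requests

def bench(prompts, url="http://localhost:8000/v1/completions", model="qwen"):
    total_tokens, start = 0, time.time()
    for p in prompts:
        r = requests.post(url, json={"model": model, "prompt": p,
                                     "max_tokens": 256, "temperature": 0.0})
        total_tokens += r.json()["usage"]["completion_tokens"]
    return total_tokens / (time.time() - start)   # generated tokens per second over the workload

# Run once with speculative decoding enabled, restart the server without it, run again.
print(bench(["Summarize PCIe 5.0 in one paragraph."] * 8))
```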
I don't use any non-FLOSS dev tools; why would I suddenly pay for a subscription to a single SaaS provider with a proprietary client that acts in opaque and user hostile ways?
But further, as we've seen with Claude: your workflow, your backend, or both aren't going anywhere if you're building on local models. They don't suddenly become dumb, stop responding, claim censorship, etc. Things are non-deterministic enough already that exposing yourself to the business decisions of cloud providers is just a risk-reward nightmare.
So yeah, privacy, but also, knowing you don't have to constantly upgrade to another model forced by a provider when whatever you're doing is perfectly suitable, that's an untold amount of value. Imagine the early npm ecosystem, but driven now by AI model FOMO.
And the other thing is that I want people to be able to experiment and get familiar with LLMs without being concerned about security, price, or any other factor.
Is that with some kind of speculative decoding? Or total throughput for parallel requests?
https://huggingface.co/unsloth/Qwen3.6-27B-GGUF/discussions/...
The higher the quantization, the better the results, but the more memory is needed. Q8 is the best.
The 4-bit quants are far from lossless. The effects show up more on longer context problems.
> You can probably even go FP8 with 5090 (though there will be tradeoffs)
You cannot run these models at 8-bit on a 32GB card because you need space for context. Typically it would be Q5 on a 32GB card to fit context lengths needed for anything other than short answers.
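The rough arithmetic behind that (bits-per-weight values are approximate, since real quant formats carry per-block scales):

```python
# Approximate weight footprint on a 32 GiB card; whatever is left over has to hold
# the KV cache and compute buffers. Bits-per-weight figures are approximate.
params_b, vram_gib = 27, 32

for name, bits_per_weight in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8)]:
    weights_gib = params_b * 1e9 * bits_per_weight / 8 / 1024**3
    print(f"{name:7s} weights ~{weights_gib:4.1f} GiB, leaves ~{vram_gib - weights_gib:4.1f} GiB for context")
```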
build/bin/llama-server \
-m ~/models/llm/qwen3.6-27b/qwen3.6-27B-q8_0.gguf \
--no-mmap \
--n-gpu-layers all \
--ctx-size 131072 \
--flash-attn on \
--cache-type-k q8_0 \
--cache-type-v q8_0 \
--jinja \
--no-mmproj \
--parallel 1 \
--cache-ram 4096 -ctxcp 2 \
--reasoning on \
--chat-template-kwargs '{"preserve_thinking": true}'
Should fit nicely in a single 5090: self 30968 = model 25972 + context 4501 + compute 495
Even bumping up to 16-bit K cache should fit comfortably by dropping down to 64K context, which is still a pretty decent amount. I would try both. I'm not sure how tolerant the Qwen3.5 series is of dropping the K cache to 8 bits.

You probably can, actually. Not saying that it would be ideal, but it can fit entirely in VRAM (if you make sure to quantize the attention layers). KV cache quantization and not loading the vision tower would help quite a bit. Not ideal for long context, but it should be very much possible.
I addressed the lossless claim in another reply, but I guess it really depends on what the model is used for. For my use cases, it's nearly lossless, I'd say.
This isn't the first open-weight LLM to be released. People tend to get a feel for this stuff over time.
Let me give you some more baseless speculation: Based on the quality of the 3.5 27B and the 3.6 35B models, this model is going to absolutely crush it.
27B will fit onto a 24GB card at Q4 with decent context and a couple GB to spare for the operating system.
There isn't really a good way to eyeball tok/s.
TLDR: If you have 14GB of VRAM, you can try out this model with a 4-bit quant.
Tokens per second is an unreasonable ask since every card is different: are you using GGUF or not, CUDA or ROCm or Vulkan or MLX, what optimizations are in your version of the inference software, what flags are you running, etc.
Note that it's a dense model (the Qwen MoE models have an extra value at the end of their names, e.g. A3B), so it will not run very well in RAM, whereas with a MoE model you can spill over into RAM if you don't have enough VRAM and still have reasonable performance.
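A rough way to see why spilling a dense model into system RAM hurts so much more than spilling a sparse MoE: if every active weight byte is read once per token, generation speed is bounded by a bandwidth-weighted sum. A sketch under that (simplified) assumption; bandwidth and size numbers are illustrative:

```python
# Rough token-generation upper bound when weights are split between VRAM and system RAM.
# Assumes every active weight byte is read once per token; numbers are illustrative.
def tg_bound(gpu_gb, ram_gb, gpu_bw_gb_s, ram_bw_gb_s):
    seconds_per_token = gpu_gb / gpu_bw_gb_s + ram_gb / ram_bw_gb_s
    return 1.0 / seconds_per_token

# Dense ~15 GB of Q4 weights: all in a 1000 GB/s GPU vs. 5 GB spilled to 80 GB/s DDR5.
print(tg_bound(15, 0, 1000, 80))   # ~67 tok/s upper bound
print(tg_bound(10, 5, 1000, 80))   # ~14 tok/s: the RAM portion dominates
# MoE with only ~3 GB of active weights per token barely notices a spill.
print(tg_bound(2, 1, 1000, 80))    # ~69 tok/s upper bound
```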
Using these models requires some technical know-how, and there's no getting around that.
This will only run on server hardware, some workstation GPUs, or some 128GB unified memory systems.
It’s a situation where if you have to ask, you can’t run the exact model they released. You have to wait for quantizations to smaller sizes, which come in a lot of varieties and have quality tradeoffs.
Quantizations are already out: https://huggingface.co/unsloth/Qwen3.6-27B-GGUF
They almost certainly run these benchmarks on their own cloud infrastructure (Alibaba afaik), which is typically not hardware that even the most enthusiastic homelab hobbyist can afford.
An example of a query and a response is below. It was done with 9.09 tok/s, 893 tokens, 7.56 seconds to first token, thought for 1 minute and 9 seconds.
Question: I gave the model this truth table and asked to tell me what boolean function would produce the output given A, B as inputs.
| A | B | Output |
| :--- | --- | ---: |
| (T) | (F) | (T) |
| (T) | (T) | (F) |
| (F) | (F) | (F) |
| (F) | (T) | (F) |
Answer: (note this is part of a larger conversation, where I was getting it to help me with some boolean logic): The boolean condition for your table is simply:
```rust
A && !B
```
### Why?
Your table shows the output is `true` **only** when `A` is true and `B` is false. That's exactly what `A && !B` expresses.
| A | B | `A && !B` | Output |
|---|---|-----------|--------|
| (T) | (F) | `T && T = T` | (T) |
| (T) | (T) | `T && F = F` | (F) |
| (F) | (F) | `F && T = F` | (F) |
| (F) | (T) | `F && F = F` | (F) |
### In Rust:
```rust
if A && !B {
// output is true
} else {
// output is false
}
```
This is the most direct and idiomatic way to express that logic. Let me know if you need it adapted back to your `live_ticks` / `on_disconnect` context!

The 3.5 27B model was a strong and capable reasoner, so I have high hopes for this one. Thanks to the team at Qwen for keeping competition in this space alive.
It's also a section that, with hope, becomes obsolete sometime semi soon-ish.