Hacker News
submitted by reddit_clone, 14 hours ago
bigyabai, 14 hours ago:
A sparser model like Qwen3.6 35B A3B is probably your best choice:
https://qwen.ai/blog?id=qwen3.6-35b-a3b
hnfong, 7 hours ago (in reply):
The 35B MoE will run faster, but 48 GB of RAM is more than enough to run the 27B dense model as well. It's just that tokens/s will be on the lower side.
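A rough back-of-the-envelope sketch of why both models fit in 48 GB. This assumes 4-bit quantized weights and a loose ~20% overhead factor for KV cache and activations; the helper function and the overhead figure are illustrative assumptions, not measurements.

```python
def model_mem_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough memory estimate for an LLM's weights.

    params_b: parameter count in billions.
    bits_per_weight: quantization width (e.g. 4 for a Q4 quant).
    overhead: loose fudge factor (assumed ~1.2x) for KV cache and activations.
    """
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 27B dense vs 35B MoE, both at 4-bit: each well under 48 GB.
print(round(model_mem_gb(27, 4), 1))  # ~16.2 GB
print(round(model_mem_gb(35, 4), 1))  # ~21.0 GB
```

Note that the MoE's speed advantage comes from activating only a few billion parameters per token (the "A3B" in the name), while all 35B still have to sit in memory, so the memory math above applies to the full parameter count either way.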