Hacker News
by reactordev | 22 hours ago
by bottlepalm | 22 hours ago

You're spreading FUD. There's nothing you can run locally that's on par with the speed and intelligence of a SOTA model.
by 3836293648 | 22 hours ago

You may be correct about the level of models you can actually run on consumer hardware, but it's not FUD, and you're being needlessly aggressive here.
by CamperBob2 | 18 hours ago
Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT-5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.
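For readers unfamiliar with the quantization mentioned above: it means storing a model's weights at lower precision (e.g. int8 instead of float32) so the model fits in local memory, at a small cost in accuracy. A minimal toy sketch of symmetric per-tensor int8 quantization, purely illustrative and not the scheme any particular model uses:

```python
# Toy post-training weight quantization: round float32 weights to int8
# with a shared scale, then dequantize and measure reconstruction error.
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: returns (q, scale)."""
    scale = np.abs(w).max() / 127.0          # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in for a weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()                   # rounding error is at most scale/2
print(q.dtype, err < scale)
```

Each int8 weight is at most half a quantization step from the original value, which is why large models often remain usable after this kind of compression; real schemes refine this with per-channel or per-group scales.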