Hacker News
by danielhanchen | 12 hours ago | comments
by nnx | 11 hours ago
Can you describe what this slightly different approach is and why it should work on all models?
by hedora | 6 hours ago
Nice! Your stuff ran LLMs extremely well on < $500 boxes (24-32GB RAM) with iGPUs before this update.
I’m eager to try it out, especially if 16GB is viable now.