Hacker News
points | by s_dev 9 hours ago | comments
by dockerd 7 hours ago
You won't feel much difference in performance for the next 4 years, but my guess is that local LLM inference is going to be much better after 3-4 hardware generations.