Hacker News
by mfro | 13 hours ago | comments
steve-atx-7600 | 13 hours ago

Inference from an LLM is O(tokens^2).
halJordan | 10 hours ago | parent

Only in naive implementations of attention.
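The exchange above can be made concrete with a minimal sketch. With standard softmax attention and a KV cache, decoding step t still attends over all t cached tokens, so generating n tokens costs O(n^2) in sequence length, which is the parent's point. The reply presumably refers to sub-quadratic variants; one example is kernelized "linear attention" (in the style of Katharopoulos et al., 2020, an assumption about which non-naive variant is meant), which replaces the cache with a fixed-size running summary so each step is O(1) in sequence length. The names and feature map below are illustrative, not from the thread:

```python
import math

def softmax_attention_step(q, keys, values):
    # One decoding step of standard attention: the new query q attends
    # over all t cached keys -- O(t) work at step t, hence O(n^2) total
    # over an n-token generation. This is the quadratic term.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)                       # subtract max for numerical stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return [sum((wi / z) * v[j] for wi, v in zip(w, values))
            for j in range(len(values[0]))]

class LinearAttention:
    # Kernelized linear attention: a running d x d summary S and a
    # normalizer z replace the growing KV cache, so every step costs
    # O(d^2) regardless of how many tokens came before -- O(n) total.
    def __init__(self, d):
        self.S = [[0.0] * d for _ in range(d)]  # accumulates phi(k) v^T
        self.z = [0.0] * d                      # accumulates phi(k)

    @staticmethod
    def phi(x):
        # Illustrative positive feature map standing in for softmax.
        return [max(xi, 0.0) + 1e-6 for xi in x]

    def step(self, q, k, v):
        fk = self.phi(k)
        for i in range(len(fk)):
            self.z[i] += fk[i]
            for j in range(len(v)):
                self.S[i][j] += fk[i] * v[j]
        fq = self.phi(q)
        denom = sum(fi * zi for fi, zi in zip(fq, self.z))
        return [sum(fq[i] * self.S[i][j] for i in range(len(fq))) / denom
                for j in range(len(v))]
```

Note that FlashAttention-style kernels, another common reading of "non-naive", cut memory traffic but are still O(n^2) in compute; only variants like the linear-attention sketch above change the asymptotic cost.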