jampekka | 5 hours ago
LLM inference uses on the order of 1 Wh per query. That's under 10 meters of driving on an EV or running air conditioning for under 5 seconds.
https://hannahritchie.substack.com/p/ai-footprint-august-202...
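For scale, the comparisons can be sanity-checked with a few lines of arithmetic. The per-device power figures below are assumptions for illustration, not taken from the comment or the linked post: an EV consuming roughly 160 Wh/km, an air conditioner drawing about 1 kW, and a cyclist sustaining about 100 W.

```python
# Sanity-check of the "~1 Wh per LLM query" comparisons.
# All device figures are assumed ballpark values.
QUERY_WH = 1.0

EV_WH_PER_KM = 160        # assumed EV consumption
AC_WATTS = 1000           # assumed air conditioner draw
BIKE_WATTS = 100          # assumed sustained cyclist output

ev_meters = QUERY_WH / EV_WH_PER_KM * 1000          # 6.25 m of driving
ac_seconds = QUERY_WH / AC_WATTS * 3600             # 3.6 s of cooling
bike_seconds = QUERY_WH / BIKE_WATTS * 3600         # 36 s of pedaling

print(f"{ev_meters:.1f} m EV, {ac_seconds:.1f} s AC, {bike_seconds:.0f} s bike")
```

Under these assumptions the numbers line up with the comment: a bit over 6 meters of EV driving, under 5 seconds of air conditioning, and 36 seconds on the bike.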
bluefirebrand | 3 hours ago
One query is not going to be a useful benchmark when people are deploying AI swarms in loops to solve simple problems.
deadbabe | 4 hours ago
Or a human riding a stationary bike for 36 seconds.