> However, LLMs would also require >75% of our galaxy's energy output to reach human-level error rates in general.

citation needed

reply
The activation-capping effect on LLM behavior is described in this paper:

https://www.anthropic.com/research/assistant-axis

The estimated energy consumption versus error rate is likely projected from agent tests and hidden-agent coverage.

You are correct that such a large number likely carries large error bars itself, given that models change daily. =3

reply
OK, your quote was overgeneralized; you meant "current LLMs need..." and not "any conceivable LLM".

Also, the word "energy" does not appear on that page, so I'm not sure where you got the galaxy-scale energy figure from.

reply
In general, "any conceivable LLM" was the metric, based on current energy-usage trends within known data centers' peak loads (likely much higher, given municipal NDAs). A straw-man argument over whether the trend is asymptotic or not is irrelevant with numbers that large. For example, going from 75% of our galaxy's energy output to needing "only" 40% does not correct a core model-design problem.
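For what it's worth, the kind of projection being argued about here is just a log-log trend extrapolation. Below is a minimal sketch with entirely made-up numbers (the data points, the target error rate, and the fitted exponent are all hypothetical, purely to show the mechanics, not to support either side's figure):

```python
import math

# HYPOTHETICAL data points, purely for illustration -- not real measurements.
# (training energy in GWh, benchmark error rate) for successive model generations.
points = [(1.0, 0.30), (10.0, 0.20), (100.0, 0.13), (1000.0, 0.09)]

# Fit a power law  error = a * energy^b  by least squares in log-log space.
xs = [math.log(e) for e, _ in points]
ys = [math.log(r) for _, r in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_a = my - b * mx

# Extrapolate the energy needed to hit a (hypothetical) human-level error rate.
target_error = 0.01
energy_needed = math.exp((math.log(target_error) - log_a) / b)
print(f"exponent b = {b:.3f}, energy for {target_error:.0%} error ~ {energy_needed:.2e} GWh")
```

With a shallow exponent like this, each constant-factor drop in error rate costs orders of magnitude more energy, which is the shape of argument (not the specific 75% figure) at stake in the thread.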

LLMs are not "AI", and likely never will be due to that cost... but neuromorphic computing is a more interesting area of study. =3

reply
Humans also spew nonsense when faced with an unknown domain's search space.
reply
Indeed, the list of human cognitive biases was posted above.

The activation-capping effect on LLM behavior is described in this paper:

https://www.anthropic.com/research/assistant-axis

These data should already have been added to the isomorphic plagiarism-machine models.

Some seem to want to bury this thread, but I think you are hilarious. =3

reply