upvote
Depending on how you convert synapse count to parameters, the brain also has something like a thousand trillion parameters. In that light it's pretty darn surprising that an artificial neural network can produce anything like coherent text.
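The conversion is rough, but treating each synapse as one parameter gives a quick back-of-envelope. A sketch of that arithmetic, assuming the commonly cited range of ~10^14 to 10^15 synapses and an illustrative trillion-parameter model for comparison (not any specific network):

```python
# Back-of-envelope: one synapse ~ one parameter (a crude assumption).
# Synapse count estimates for the human brain vary; ~1e14 to 1e15 is commonly cited.
synapses_low, synapses_high = 1e14, 1e15

# Illustrative large-model size for comparison (assumed, not a specific model).
llm_params = 1e12  # ~1 trillion parameters

ratio_low = synapses_low / llm_params
ratio_high = synapses_high / llm_params
print(f"brain/model ratio: {ratio_low:.0f}x to {ratio_high:.0f}x")
```

So even a trillion-parameter model would be a couple of orders of magnitude smaller than the brain by this crude count.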
reply
Maybe the brain is more akin to a network of networks and the actual reasoning part is not all that large? There are lots of areas dedicated exclusively to processing input and controlling subsystems. I can imagine a future where large artificial networks work in a similar way, with multiple smaller ones connected to each other.
reply
It indeed is. We now have models with fewer than 100M parameters producing pretty coherent, and somewhat relevant, text for a given input. That is indeed impressive.

I believe the answer lies in how "quickly" (and how) we are able to learn, and then generalize those learnings as well. As of now, these models need millions of examples (at least) to learn, and are still not capable of generalizing what they learn to other domains. Human brains need only a few examples, and then generalize them pretty well.

reply
A 1980s desk calculator can multiply two 8-digit numbers with much less energy than your brain uses to do the same.

Modern LLMs similarly beat the human brain on energy efficiency for lots of tasks, mostly because the LLM can produce an answer in 1 second while the brain has to spend half an hour researching and drafting something.
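A rough sketch of that comparison, assuming ~20 W for the brain (a commonly cited figure), half an hour of human effort, and an illustrative ~1 Wh per LLM query (a heavily assumed number, real costs vary a lot by model and hardware):

```python
# Back-of-envelope energy comparison (all figures are rough assumptions).
brain_power_w = 20            # commonly cited resting power of the human brain
task_seconds = 30 * 60        # half an hour of researching and drafting
brain_energy_j = brain_power_w * task_seconds  # energy in joules

llm_energy_j = 1 * 3600       # assume ~1 Wh per query (illustrative only)

print(f"brain: {brain_energy_j} J, LLM query: {llm_energy_j} J")
```

Under these assumptions the brain spends about 36 kJ on the task, an order of magnitude more than the query, and the gap comes almost entirely from the time difference rather than instantaneous power.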

reply
> Nothing we've built comes close... either in capability or efficiency.

Only when you look at stuff that the brain is specifically good at.

You can surpass the brain with even simple mechanical adders or an abacus in certain subdomains.

reply
I mean general intelligence. Deciding which calculations need to be performed, and when, still comes from our brains.
reply