But sure, instantiating these capabilities in hardware and software is beyond our current abilities. It seems likely that it's possible, though, even if we don't know how to do it yet.
I will argue that the following capacities, namely (1) creating rules and (2) deciding to follow rules (or not), are themselves controlled by rules.
We've only had the technology to research this in real depth for a few decades (both the scale of computation and the genetics/imaging techniques).
Even skin cells exchange information in a neuron-like manner, including by using light, albeit thousands of times more slowly.
This reframes the complexity of the human brain as "86 billion quantum computers, each operating thousands of small neural networks and exchanging information over laser-based optical channels."
We don’t even know if they want to. But in general, it’s impossible to conclusively prove that something won’t ever happen in the future.
> unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.
For me this isn't an assumption; it's a corollary of the Church-Turing thesis.
If we're going to be reductionist we can just call humans "meat sacks" and flip the question around entirely.
I disagree with this premise. A computer approximates a Turing Machine, which puts it far above a brick.
The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."
The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."
Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.
Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.
> That's irrelevant.
My comment was relevant, if a bit tangential.
Edit: I also want to say that our attitude toward machine vs. human intelligence does matter today, because we're going to kneecap ourselves if we incorrectly believe there is something special about humans. That belief will stop us from closing the gap.