After thousands of years of research we still don’t fully understand how humans do it, so what reason (besides a sort of naked techno-optimism) is there to believe we will ever be able to replicate the behavior in machines?
reply
The Church-Turing thesis comes to mind. It would at least suggest that humans aren’t capable of doing anything computationally beyond what can be instantiated in software and hardware.

But sure, instantiating these capabilities in hardware and software is beyond our current abilities. It seems likely to be possible, though, even if we don't know how to do it yet.

reply
The Church-Turing thesis is about following well-defined rules. It is not about the system that creates such rules, or that decides whether or not to follow them. Such a system (the human mind) must exist for rules to be followed at all, yet it must stand outside mere rule-following, since it embodies a faculty that does not exist in rule-following itself: deciding which rules are to be followed.
reply
We can keep our discussion about Church-Turing here if you want.

I will argue that the following capacities: 1. creating rules and 2. deciding to follow rules (or not) are themselves controlled by rules.

reply
That humans come in varying degrees of competence at this, rather than an, ahem, boolean have/don't-have; plus the fact that we can already manage a rough approximation of it, in a field whose rapid improvements hint that there is still a lot of low-hanging fruit, is a reason for techno-optimism.
reply
Thousands of years?

We've only had the tech to be able to research this in some technical depth for a few decades (both scale of computation and genetics / imaging techniques).

reply
And then we discover that DNA in cells (not only brain cells) makes an ideal quantum computer, that DNA's reactions generate coherent light (as in lasers) used to communicate between cells, and that a single dendrite of a cerebral-cortex neuron can compute at the very least an XOR function, which requires at least 9 coefficients and one hidden layer. Neurons have anywhere from one or two to tens of thousands of dendrites.
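For context on the "9 coefficients and one hidden layer" figure: XOR is not linearly separable, so a single-layer perceptron cannot compute it, but a 2-2-1 network with exactly 9 parameters (4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias) can. A minimal sketch with hand-picked weights, purely illustrative rather than a trained model:

```python
# A 2-2-1 step-activation network computing XOR with exactly 9 coefficients:
# 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias.
# Weights are hand-picked for illustration, not learned.

def step(x):
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires on OR(x1, x2), h2 fires on AND(x1, x2)
    h1 = step(1 * x1 + 1 * x2 - 0.5)
    h2 = step(1 * x1 + 1 * x2 - 1.5)
    # Output: OR and not AND == XOR
    return step(1 * h1 - 1 * h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Dropping the hidden layer makes the function impossible: no single linear threshold separates {(0,1), (1,0)} from {(0,0), (1,1)}.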

Even skin cells exchange information in a neuron-like manner, including via light, albeit thousands of times slower.

This raises the complexity of the human brain to "86 billion quantum computers, each operating thousands of small neural networks and exchanging information over laser-based optical channels."

reply
> But do we have reason to believe that no AI system can synthesize novel technologies

We don’t even know if they want to. But in general, it’s impossible to conclusively prove that something won’t ever happen in the future.

reply
It's not an assumption; it is a fact about how computers function today. LLMs interpolate, they do not extrapolate. Nobody has shown a method to get them to extrapolate. The insistence to the contrary involves an unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.
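The interpolate-vs-extrapolate distinction can be made concrete with a toy model. This hypothetical piecewise-linear fit is only an analogy, not a claim about how LLMs work internally: it predicts well between its training points and fails badly outside their range.

```python
# Toy illustration of interpolation vs extrapolation on y = x**2.
# This is an analogy only; hypothetical data, not a model of an LLM.

def fit_piecewise_linear(xs, ys):
    """Return a predictor that linearly interpolates between training
    points and clamps to the edge values outside the training range."""
    pairs = sorted(zip(xs, ys))
    def predict(x):
        if x <= pairs[0][0]:
            return pairs[0][1]   # below range: repeat the leftmost value
        if x >= pairs[-1][0]:
            return pairs[-1][1]  # above range: repeat the rightmost value
        for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
    return predict

train_x = [0, 1, 2, 3]
model = fit_piecewise_linear(train_x, [x ** 2 for x in train_x])

print(model(1.5))  # inside the training range: 2.5, near the true 2.25
print(model(10))   # outside the range: stuck at 9, far from the true 100
```

Inside the convex hull of the data the model is a decent approximation; outside it, no amount of fitting accuracy on the training points helps.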
reply
As long as agnosticism is the attitude, that’s fine. But we shouldn’t let mythology about human intelligence/computational capacity stop us from making progress toward that end.

> unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.

For me this isn’t an assumption, it’s a corollary that follows from the Church-Turing thesis.

reply
In the grand scheme of things, a computer is not much more than a fancy brick. Certainly it is much closer to a brick than to a human. So the question is more: "why should this particularly fancy brick have abilities that so far we have only encountered in humans?"
reply
> fancy brick

If we're going to be reductionist we can just call humans "meat sacks" and flip the question around entirely.

reply
> Certainly it is much closer to a brick than to a human.

I disagree with this premise. A computer approximates a Turing Machine, which puts it far above a brick.
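To make the "far above a brick" point concrete: a computer can simulate an arbitrary Turing-machine transition table, which a brick cannot. A toy sketch (the example machine here, which flips every bit on its tape, is purely illustrative):

```python
# Minimal Turing machine simulator. The computer's ability to run ANY
# transition table of this form is what puts it "far above a brick".

def run_tm(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        sym = tape[pos] if pos < len(tape) else blank
        state, write, move = transitions[(state, sym)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example machine: (state, read) -> (new_state, write, move).
# It flips every bit, then halts on the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip_bits))  # -> 0100
```

Swap in a different transition table and the same loop computes a different function; that universality is the substance of the Church-Turing argument upthread.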

reply
That's irrelevant.

The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."

The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."

Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.

Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.

reply
Note that I prefaced my comment by saying the parent might be right about LLMs.

> That's irrelevant.

My comment was relevant, if a bit tangential.

Edit: I also want to say that our attitude toward machine vs. human intelligence does matter today because we’re going to kneecap ourselves if we incorrectly believe there is something special about humans. It will stop us from closing that gap.

reply