upvote
As I understand it, Rice's theorem does not apply because neural networks are not Turing-complete.
reply
I'm not sure I agree with that. Strictly speaking, my PC is not Turing-complete either, because its hard drive is finite. Yet there is an informal sense in which Rice's theorem is still relevant to the PC as an abstraction: it is why we are all taught that "a perfect virus checker is, strictly speaking, impossible". This is a subtle point that needs further clarification from CS theorists, of whom I am not one.

Neural networks in general can be treated as models of computation in the Turing sense. Human brains, in the abstract, are Turing-complete as well, to take a simple example. LLMs run iteratively in an unbounded loop may be "effectively Turing complete" for the same simple reason: the per-step computation is bounded, but the loop is not.
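The "bounded step, unbounded loop" point can be illustrated with a toy sketch (not an LLM, just an analogy): each call to `step` does a fixed amount of work, like one forward pass, yet iterating it with external state gives computation whose length is not bounded in advance. The Collatz rule here is an arbitrary stand-in for the step function.

```python
# Toy illustration: step() does bounded work per call (like one forward
# pass of a network), but the while-loop feeding its output back in is
# unbounded, so the overall computation is not depth-limited.

def step(state):
    """One bounded-time transition of a toy machine (Collatz rule)."""
    n = state
    if n == 1:
        return None  # halt signal
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run(state):
    trace = [state]
    while (state := step(state)) is not None:  # unbounded iteration
        trace.append(state)
    return trace

print(run(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

Nobody knows a bound on how long `run` takes as a function of its input, which is exactly the property a fixed-depth forward pass cannot have.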

Regardless, any theory purporting to be foundational ought to address this demarcation explicitly. Unless, that is, practitioners think computability and formal complexity are not scientific foundations for CS.

reply
But most "normal" neural networks are feed-forward, so they are guaranteed to terminate in a number of steps bounded by the architecture, not the input. That rules out Turing completeness right away. And even recurrent NNs, when run for a fixed number of steps, can be "unfolded" into feed-forward equivalents, so they are not TC either.
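A minimal sketch of why this is so: a feed-forward pass is a fixed, finite sequence of layer applications, and an RNN unrolled for a fixed T steps is just a feed-forward net with T copies of the cell. The layer and cell functions below are placeholders, not real network layers.

```python
# The step count of a feed-forward pass is fixed by the architecture:
# exactly len(layers) applications, regardless of the input value.

def feed_forward(layers, x):
    for layer in layers:  # always terminates after len(layers) steps
        x = layer(x)
    return x

# An RNN unrolled for a fixed number of steps is the same thing:
# len(xs) copies of the same cell applied in sequence.
def unrolled_rnn(cell, h, xs):
    for x in xs:  # always terminates after len(xs) steps
        h = cell(h, x)
    return h

double = lambda x: 2 * x
print(feed_forward([double, double, double], 1))        # 8
print(unrolled_rnn(lambda h, x: h + x, 0, [1, 2, 3]))   # 6
```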

You need a memory element the network can interact with, just as an ALU by itself is not TC while even a barebones stateful CPU (ALU + registers) is.
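The ALU-vs-CPU point can be sketched as a toy interpreter (the two-register machine and its instruction set are invented for illustration): the `alu` function is stateless and always terminates, but wrapping it in registers and a program counter under a fetch-execute loop yields data-dependent control flow.

```python
# Stateless combinational unit: no memory, one bounded step per call.
def alu(op, a, b):
    return {"add": a + b, "sub": a - b}[op]

# Adding registers and a program counter turns the same ALU into a
# machine whose running time depends on the data, not the architecture.
def cpu(program, r0=0, r1=0):
    pc = 0  # register state + program counter = the "memory element"
    while pc < len(program):
        op, *args = program[pc]
        if op == "jnz":  # jump if r0 != 0: loops require this state
            pc = args[0] if r0 != 0 else pc + 1
            continue
        if op == "set":
            r0 = args[0]
        else:
            r0 = alu(op, r0, r1)
        pc += 1
    return r0

# Count r0 down from 5 to 0 by repeated subtraction (r1 = 1):
print(cpu([("set", 5), ("sub",), ("jnz", 1)], r1=1))  # 0
```

The ALU alone can never loop; the jump instruction reading a register is what lifts the combined machine into a different computational class.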

reply