Neural networks are, in the abstract, models of computation. Human brains are arguably Turing complete in the same abstract sense, to take a simple example. LLMs run iteratively in an unbounded loop may be "effectively Turing complete" for much the same reason.
Regardless, any theory purporting to be foundational ought to address this demarcation explicitly. Unless, that is, practitioners think computability and formal complexity theory are not scientific foundations of CS.
You need a memory element the network can interact with, just as an ALU by itself is not TC but even a barebones stateful CPU (ALU + registers) is.
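The ALU analogy can be made concrete with a toy sketch (all names illustrative): a pure, stateless step function stands in for a network's fixed forward pass, and it only becomes a Turing machine once it is wrapped in an unbounded loop with an external, growable tape it can read and write.

```python
def step(state, symbol):
    """Transition table for a Turing machine that adds 1 to a binary
    number written least-significant-bit first on the tape.
    Returns (new_state, symbol_to_write, head_move).
    This is the 'ALU': pure and stateless on its own, not TC."""
    table = {
        ("carry", "0"): ("done", "1", 0),   # 0 + carry -> 1, halt
        ("carry", "1"): ("carry", "0", 1),  # 1 + carry -> 0, carry rightward
        ("carry", "_"): ("done", "1", 0),   # past the end: write the carry
    }
    return table[(state, symbol)]

def run(bits_lsb_first):
    tape = list(bits_lsb_first)       # the external memory element
    head, state = 0, "carry"
    while state != "done":            # the unbounded outer loop
        if head == len(tape):
            tape.append("_")          # blank cell: tape is growable
        state, write, move = step(state, tape[head])
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

# "11" LSB-first is 3; incrementing gives 4, i.e. "001" LSB-first.
print(run("11"))  # -> 001
```

The point is that all the computational "power" beyond a finite function table comes from the loop plus the memory it threads through, which is exactly what iterating an LLM over an ever-growing context supplies.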