I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, and post-synaptic responses are noisy. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms an intent and then implements it. In both, decisions emerge from network dynamics; a "decision" is a description of what the system did, not a separate cause (see Libet's experiments).
Now, clearly there's a lot that's different, and of course we don't understand brains well enough to say just how similar they are to LLMs. But that's the point: it's an interesting thought experiment, and shutting it down with a virtual eyeroll is sad.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a human over a bidirectional text-only medium.