My parent made a claim that humans have separate pathways for data and instructions and cannot mix them up like LLMs do. Showing that we don't has every effect on refuting their argument.
>>> The principal security problem of LLMs is that there is no architectural boundary between data and control paths.