At no point in my post did I mention artificial beings or LLMs. I made a counterclaim about the need for proof of the subjectivity of others.
But while I’m here: LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those produced when human beings are forced to use computers to communicate, but such outputs are only a tiny part of our way of being and of potentially expressing subjectivity. For LLMs, by contrast, textual output is the totality of how they could express any subjectivity at all.
One of the main failures of the Turing test (and why it is “old school” and invalid), and of Turing’s conception of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win. This fails to capture much of our subjectivity, which is intersubjectively attuned to others in ways more fundamental than textual outputs.