Sure I would. The human isn't what's being run through inference; the data is. LLM output in this circumstance is no more conscious than a book you read by flipping to random pages.
> You might just as well assume everyone and everything else is a philosophical zombie.
I don't assume anything about anyone's or anything's intelligence. I have a healthy distrust of all claims.
And sure, you can assume that nobody and nothing else is conscious (I think that's what we're discussing, rather than intelligence), and I won't try to stop you; I just don't think it's a very useful stance. If assuming consciousness or its absence changes nothing, then the assumption means nothing, which is more or less my point.