AIs today can replicate some human behaviors, and not others. If we want to discuss which things they do and which they don't, then it'll be easiest if we use the common words for those behaviors even when we're talking about AI.
And of course that brings me back to my favorite xkcd - https://xkcd.com/810/
Moltbook demonstrates that AI models simply do not engage in behavior analogous to human behavior. Compare Moltbook to Reddit and the difference should be obvious.
I don't know what the implications of that are, but I really think we shouldn't be dismissive of this semblance.
As an analogy: ants perform basic medicine, such as wound treatment and amputation. Not because they are conscious, but because that's their nature.
Similarly, an LLM is a token-generation system whose emergent behaviour seems to include deception and dark psychological strategies.
One of the things I observed when running models locally was that I could set a seed value and get identical responses for identical inputs. This isn't something people see when using commercial products, but it's the strongest evidence I've found for communicating that these are simply deterministic algorithms.
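A minimal sketch of why this happens: the "randomness" in sampling comes from a pseudorandom number generator, and seeding that generator pins down every choice. This toy sampler is invented for illustration (`toy_generate` and its five-word vocabulary are not from any real model), but the seeding mechanics are the same ones local inference tools expose:

```python
import random

def toy_generate(prompt: str, seed: int, n_tokens: int = 5) -> list[str]:
    """Toy 'language model': samples tokens from a fixed vocabulary.

    Hypothetical illustration only -- a real LLM samples from a learned,
    context-dependent distribution, but seeding works the same way.
    """
    rng = random.Random(seed)  # explicit per-call RNG, seeded by the caller
    vocab = ["the", "cat", "sat", "on", "mat"]
    # Pretend the prompt conditions the distribution (deterministic hash).
    weights = [sum(map(ord, prompt)) % (i + 2) + 1 for i in range(len(vocab))]
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n_tokens)]

# Same seed + same input => byte-identical output, every time.
a = toy_generate("hello", seed=42)
b = toy_generate("hello", seed=42)
print(a == b)  # True
```

With commercial APIs you typically can't control the seed (and batching/hardware effects add further variation), which is why this determinism is invisible there but easy to demonstrate locally.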