That said, a seemingly large portion of society is asking AI questions that carry some pretty large risks.
I was on a plane a few weeks ago, and while I typically ignore whatever the people beside me are doing, morbid curiosity got me when I noticed they were on ChatGPT the entire flight, asking the app all kinds of life/relationship questions. Questions like that can be fine if you understand what the AI is doing, but far too many people will follow the answers blindly.
For example, sometimes I hesitate for a fraction of a second before typing a prompt that may sound stupid. I have to immediately remind myself that it's just a chatbot and I don't care what it thinks of me. In fact, it's not even thinking of me at all.
Mayhaps, in the context of getting the AI to behave as you wish, such hesitations are valid: not because it is conscious, but because a sloppy or hostile prompt pollutes the context window, possibly mis-aligning the agent in the process.
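A minimal sketch of what that pollution looks like mechanically, assuming the OpenAI Python client (the model name is illustrative): every turn you type, polite or hostile, gets appended to the conversation history and re-sent with the next request, so it conditions everything the model says afterward.

    # Sketch: why tone "pollutes" the context window.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    history = []  # the whole conversation so far; all of it is re-sent each turn

    def ask(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model name
            messages=history,      # every prior turn conditions this reply
        )
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    # A throwaway insult doesn't vanish after it's answered: it stays in
    # `history` and shapes every subsequent completion.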
Santa Claus is not a being, but modeling him as if he were can be useful; an obviously pointed example is certain discussions about what it means to be 'real'.
My point is: if your instinct is to be kind, don't quash it just because you don't consider what you're talking to sentient. I don't yell at my rubber duck. Rubber ducky is just going to rubber ducky.
1. To the extent that a chatbot is trained on real human interaction, we should exhibit real human interaction for best results.
2. You are either a kind person or not. A kind person behaves kindly without asking whether kindness is warranted.
I think these are just the kind of people who fall for scams. It's not AI-related; it's just not knowing how to navigate the current world.