IMO we're a step before that: we don't even have a real fish involved; we have a character that is fictionally a fish.

In LLM discussions, obviously fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."

Or, in this case, that it really enjoys food-pellets.

reply
I'd strongly disagree with that. We're all living in the same shared universe, and underlying every intelligence must be precisely an understanding of events happening in this space-time.
reply
What does 'precisely' mean? Everyone has the same understanding of events - a precise one?
reply
No, I'm saying the basis of intelligence must be shared, not that we have the exact same mental model.

I might, for example, say a human entered a building; a bat, on the other hand, might think "some big block with two sticks moved through a hole." But both are experiencing a shared physical observation, and there is some mapping between the two.

It's like when people say that if there are aliens, they would find the same mathematical constants that we do.

reply
deleted
reply
That's a different argument.

I’m not going to argue, other than to say that you need to view the point from a third-party perspective evaluating “fish” vs. “more verbose thing,” such that the composition is the determinant of the complexity of the interaction (which has a unique qualia, per Nagel).

Hence why it’s an “unintentional nod,” not an instantiation.

reply
deleted
reply