I think a counterargument would be parallel evolution: there are various examples in nature where a certain feature evolved independently several times, without any genetic connection - from what I understand, because the evolutionary pressures were similar.

One obvious example would be wings, where you have several different strategies - feathers, insect wings, bat-like wings, etc - that have similar functionality and employ the same physical principles, but are "implemented" vastly differently.

You have similar examples in brains, where e.g. corvids are capable of various cognitive feats that would involve the neocortex in humans - only their brains don't have a neocortex. Instead they seem to use certain other brain regions for that, regions that have no direct equivalent in humans.

Nevertheless it's possible to communicate with corvids.

So this makes me wonder if a different "implementation" always necessarily means the results are incomparable.

In the interest of falsifiability: what behavior or internal structures in LLMs would be convincing evidence that they have "real" emotions?

reply
"Parallel" evolution is just different branches of the same evolutionary tree. The most distantly related naturally evolved lifeforms are more similar to each other than an LLM is to a human. The LLM did not evolve at all.
reply
Evolution is how the "mechanism" came to be, which is indeed very different. But the mechanisms themselves - spiking neurons and neurotransmitters on one hand, matrix multiplications and nonlinear functions on the other (the latter "inspired" by our understanding of neurons) - don't seem so different, at least not on a fundamental level.

What is different for sure is the time dimension: Biological brains are continuous and persistent, while LLMs only "think" in the space between two tokens, and the entire state that is persisted is the context window.
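For what it's worth, the artificial side of that comparison really is just a weighted sum followed by a nonlinearity. A minimal illustrative sketch (a generic layer, not any particular model's architecture):

```python
import numpy as np

# One "layer" of artificial neurons: a weighted sum of inputs, then a
# nonlinearity. Loosely inspired by biological neurons, which fire when
# their summed input crosses a threshold.
def layer(x, W, b):
    return np.maximum(0, W @ x + b)  # ReLU nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # input activations
W = rng.normal(size=(3, 4))     # synapse-like weights
b = np.zeros(3)
out = layer(x, W, b)
print(out.shape)  # (3,)
```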

reply
> The LLM did not evolve at all.

Evolution and Transformer training are 'just' different optimization algorithms. Different optimizers can obviously produce very comparable results given comparable constraints.
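A toy illustration of "different optimizers, comparable results" - nothing to do with actual Transformer training, just gradient descent and a naive mutate-and-select loop minimizing the same function:

```python
import random

# Two very different optimizers minimizing f(x) = (x - 3)^2:
# gradient descent vs. a naive evolutionary (mutate-and-select) loop.
def f(x):
    return (x - 3.0) ** 2

# Gradient descent: follow the analytic gradient 2*(x - 3).
x_gd = 0.0
for _ in range(200):
    x_gd -= 0.1 * 2 * (x_gd - 3.0)

# Evolutionary loop: propose a random mutation, keep it if fitter.
random.seed(0)
x_ev = 0.0
for _ in range(2000):
    candidate = x_ev + random.gauss(0, 0.1)
    if f(candidate) < f(x_ev):
        x_ev = candidate

print(round(x_gd, 3), round(x_ev, 3))  # both end up close to 3.0
```

Both converge to the same minimum by entirely different mechanisms, which is the point being made.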

reply
The training process shares a lot of high-level properties with biological evolution.
reply
"Minimize training loss while isolated from the environment" is not at all similar to "maximize replication of genes while physically interacting with the environment". Any human-like behavior observed from LLMs is built on such fundamentally alien foundations that it can only be unreliable mimicry.
reply
The environment for the model is its dataset and training algorithms. It's literally a model of it, in the same sense we are models of our physical (and social) environment. Human-like behavior is of course too specific, but the highest-level things, like staged learning (pretraining/posttraining/in-context learning) and evolutionary/algorithmic pressure, are similar enough to draw certain parallels, especially when the LLM's data is proxying our environment to an extent. In this sense the GP is right.
reply
I don't think anything you said here contradicts what they said. They take great pains throughout the blog post to explain that the model does not "experience" these "emotions"; that they're not emotions in the human sense but models of emotions (both the expected human emotional response to a prompt and the emotions another character in the prompt is experiencing) and functional emotions (in that they can influence behavior); and that any apparent emotions the model may show are it playing a character.
reply
Almost! They're merely making the claim of functional emotions and outright avoiding the thorny philosophical question of whether they're "real".

[I've actually tried exploiting functional emotions in a RAG system. The sentiment scoring and retrieval part was easy. Sentiment analysis is pretty much a settled thing, I'd say, even though the mechanisms are still being studied (see the paper we're discussing).

What I'd love is to be able to extract the vector(s) they're discussing directly, rather than having them output as text into the context.]
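For the curious, the scoring-and-retrieval part can be sketched like this; the word lexicon here is a made-up stand-in for whatever sentiment model you'd actually plug in:

```python
# Toy sketch of sentiment-aware retrieval: score each document's sentiment
# with a tiny word lexicon (stand-in for a real sentiment model), then rank
# candidates by closeness to the query's sentiment. Illustrative only.
POS = {"great", "love", "happy", "excellent"}
NEG = {"terrible", "hate", "sad", "awful"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

docs = [
    "I love this, it works great",
    "This is terrible and I hate it",
    "The manual describes the installation steps",
]

def retrieve_by_sentiment(query, docs):
    target = sentiment(query)
    return sorted(docs, key=lambda d: abs(sentiment(d) - target))

print(retrieve_by_sentiment("happy excellent product", docs)[0])
# The most sentiment-similar document ranks first.
```

In a real system the sentiment score would be one signal combined with semantic similarity, not the sole ranking key.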

reply
If you listen to Anthropic in their other works and interviews, they clearly do believe the equivalence-by-proxy between humans and LLMs to a large degree, and introduce things like model welfare (that is, caring about what the model feels). This is just another study in the series. I think they're adding these disclaimers so as not to sound like absolute cranks to an unprepared audience, because sometimes they really do.
reply
I like to call this Frieren's Demon. In that show, it is explained that demons evolved with no common ancestor to humans, yet they speak human language. They learned the language to hunt humans, and this leads to a fundamentally different understanding of words and language.

Now, I don't personally believe this is an intelligence at all, but it's possible I'm wrong. What we have with these machines is a different evolutionary reason for speaking our language (we evolved to speak it ourselves). Its understanding of our language and our images is completely alien. If it is an intelligence, I could believe that the way it makes mistakes in image generation, and the strange logical mistakes it makes that no human would make, are simply a result of that alien understanding.

After all, a human artist learning to draw hands makes mistakes, but those mistakes are rooted in a human understanding (e.g. the effects of perspective when translating a 3D object to 2D). The machine with a different understanding of what a hand is will instead render extra fingers (it does not conceptualize a hand as a 3D object at all).

Though, again, I still just think it's an incomprehensible amount of data going through a really impressive pattern matcher. The result is still language out of a machine, which is really interesting. The only reason I'm not super confident it is not an intelligence is that I can't really rule out that I am also an incomprehensible amount of data going through a really impressive pattern matcher, just built different. I do feel like I would know a real intelligence after interacting with it for long enough, though, and none of these models feel like one to me.

reply
>it does not conceptualize a hand as a 3D object at all

Oh but it does, it's an emergent property. The biggest finding in Sora was exactly that, an internal conceptualization of the 3D space and objects. Extra fingers in older models were the result of the insufficient fidelity of this conceptualization, and also architectural artifacts in small semantically dense details.

reply
> interpreting these emotions as human-like is a clear blunder. How do you tell the shoggoth likes or dislikes something, feels desperation or joy? Because it said so? How do you know these words mean the same for us?

I think you took it backwards.

Those vectors are exactly what the paper says they are - they affect the output, and we can measure that.

And they mean exactly what they mean for us, because that's what they're measured against.

The main problem isn't "is its emotion the same as ours", but "does it apply our emotion as emotion".
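A rough illustration of what "it affects the output and we can measure it" means mechanically; the vectors and weights below are random stand-ins, not anything extracted from a real model:

```python
import numpy as np

# Illustrative sketch of "steering": add an emotion-direction vector to a
# hidden activation and observe the shift in the output logits. The direction
# here is made up; in interpretability work it might be extracted from the
# model, e.g. as a difference of mean activations over contrasting prompts.
rng = np.random.default_rng(1)
hidden = rng.normal(size=8)            # some layer's activation
W_out = rng.normal(size=(4, 8))        # projection to 4 output "tokens"
emotion_dir = rng.normal(size=8)       # hypothetical "joy" direction
emotion_dir /= np.linalg.norm(emotion_dir)

logits_base = W_out @ hidden
logits_steered = W_out @ (hidden + 3.0 * emotion_dir)

# The measurable effect: steering shifts the output distribution.
print(logits_steered - logits_base)
```

The measurement is exactly that difference in output behavior, which is why the "what does it mean for the model" question can be set aside at this level.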

reply