Almost! They're merely making the claim of functional emotions and outright avoiding the thorny philosophical question of whether they're "real".

[ I've actually tried exploiting functional emotions in a RAG system. The sentiment-scoring and retrieval part was easy; sentiment analysis is pretty much a settled thing, I'd say, even though the mechanisms are still being studied (see the paper we're discussing).

What I'd love is to be able to extract the vector(s) they're discussing directly, rather than outputting sentiment as text into the context]
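[For anyone curious what the "sentiment scoring and retrieval" part looks like, here's a toy sketch. The lexicon-based scorer and all names are stand-ins of my own, not anything from the paper; a real system would use an actual sentiment model:

```python
# Toy sentiment-aware retrieval sketch. The lexicons and function
# names are illustrative stand-ins, not from the paper under discussion.
POS = {"great", "happy", "love", "excellent"}
NEG = {"bad", "sad", "hate", "terrible"}

def sentiment_score(text: str) -> float:
    """Crude lexicon score in [-1, 1]: (pos hits - neg hits) / total hits."""
    words = text.lower().split()
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def rerank_by_sentiment(docs: list[str], target: float) -> list[str]:
    """Order retrieved docs by closeness to a target sentiment."""
    return sorted(docs, key=lambda d: abs(sentiment_score(d) - target))

docs = ["terrible, I hate this", "I love it, excellent work"]
rerank_by_sentiment(docs, target=1.0)  # positive doc ranks first
```

The point of the aside above stands, though: this pipes sentiment through text, whereas pulling the internal vectors directly would skip the lossy round-trip.]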

If you listen to Anthropic in their other work and interviews, they clearly do believe, to a large degree, in an equivalence-by-proxy between humans and LLMs, and they introduce ideas like model welfare (that is, caring about what the model feels). This is just another study in that series. I think they add these disclaimers so as not to sound like absolute cranks to an unprepared audience, because sometimes they really do.