Nor anthropodeny it. But really both directions are anthropocentrism in a raincoat.

Sonnet is its own thing. Which is fine.

We've known that, e.g., animals have emotions (functional or not) for quite a long time.

Btw: don't go looking on YouTube for evidence of that. Animals having emotions and people outrageously anthropomorphizing their pets can both be true at the same time.

reply
What is there to anthropodeny?
reply
Completely agree here. Stop anthropomorphizing these tools. Just remove the extra language. Don't say please or thank you. Just ask for the desired outcome.
reply
The places where solutions get discussed in the way that yields the best long-term outcome may well exist in a language subspace marked by politeness, calmness, and thoughtfulness. Getting the model to those areas of linguistic space is useful, as is preserving my own habits of kind and thoughtful speech.
reply
Okay great, that's EASILY operationalizable. Set up, say, 100 replications of the same question sequence (say, to build a program) against some cheap model like Qwen. One half of the set can be with please and thank you, and the other half without. You can vibe code it even. I'd be curious to see your results!
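
Something like this would do it (a rough sketch only; the local endpoint, model name, and toy scoring metric below are placeholders I'm assuming, not a definitive setup — a real test would run the generated code against actual unit tests):

    # Sketch of the please/thank-you A/B test, assuming a local
    # OpenAI-compatible endpoint (e.g. Ollama serving a Qwen model).
    # Endpoint URL, model name, and scoring metric are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    MODEL = "qwen2.5:7b"  # hypothetical local model name

    PLAIN = "Write a Python function that parses an ISO 8601 date string."
    POLITE = ("Could you please write a Python function that parses an "
              "ISO 8601 date string? Thank you!")

    def run(prompt: str) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        return resp.choices[0].message.content

    def score(answer: str) -> int:
        # Stand-in metric: did the model emit a function definition?
        # A real test would execute the code against unit tests instead.
        return int("def " in answer)

    results = {"plain": [], "polite": []}
    for arm, prompt in (("plain", PLAIN), ("polite", POLITE)):
        for _ in range(50):  # 50 replications per arm, 100 total
            results[arm].append(score(run(prompt)))

    for arm, scores in results.items():
        print(arm, sum(scores) / len(scores))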
reply
You can even boost its effectiveness by roleplaying with it. I’m not joking. This is fully based on vibes; I haven’t done any testing. But it’s part of prompting imo.

IMO these things are like a reflection. Present what you want reflected back.

reply
Indeed. It reminds me of Lewis’ That Hideous Strength in a way. If we take the severed head post-brain-death and pump it with blood and oxygen and feed it impulses so that the mouth moves to form the words we tell it, is the person living again? No, it’s just a head, speaking the words it’s been given.
reply
I don’t see why you can’t use politeness. The thing is a mimic: you “treat” it badly and it mimics how a human might respond.

It’s fun to play with, as long as you’re fully cognizant that IT IS NOT A HUMAN

reply
I'd argue with you, but there's nothing strictly wrong with your statement. I'd like to point out that it's also not a cat nor a dog, nor a parrot (dead, stochastic, or otherwise). It's a Sonnet model.
reply
But, well, how does it do the human-like-text-outputting exactly?
reply
I’m guessing you aren’t just asking how an LLM works, but attempting to make the point that humans are also statistical next-token predictors or something?

Humans make predictions, that doesn’t mean that’s all we do.

reply
No, my point is that "statistical next-token predictor" is an empty phrase that doesn't really explain much. Markov chains are statistical next-token predictors as well, and nevertheless no one would confuse a Markov chain with a conscious being (or deem the generated texts in any way useful, for that matter).

The question is how the prediction works in detail, and those details are still being researched, as Anthropic does here, and the research can yield unexpected results.
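
For contrast, here's roughly what an order-1 Markov chain "next-token predictor" looks like (a toy sketch; the corpus and the whitespace tokenizer are arbitrary placeholders). The label fits it perfectly, which is exactly why the label alone tells you nothing interesting:

    # Toy order-1 Markov chain: also a "statistical next-token predictor".
    import random
    from collections import defaultdict, Counter

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count how often each token follows each other token.
    transitions = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        transitions[cur][nxt] += 1

    def predict_next(token):
        # Sample the next token in proportion to observed frequencies.
        counts = transitions[token]
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights)[0]

    # Generate a short continuation starting from "the".
    token, output = "the", ["the"]
    for _ in range(10):
        token = predict_next(token)
        output.append(token)
    print(" ".join(output))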

reply