I use positive framing instead of negative framing for most things and get good results, especially since asking for a thing not to happen pollutes the context with that thing.

A bad example, but imagine "Build me a wrapper for this API but ABSOLUTELY DO NOT use javascript" versus "Build me a wrapper for this API and make sure to use python".

reply
That approach also works better for dogs (and people).
reply
I extract all emotional context from my prompting and communicate with this tool as though it were an inanimate object which can provide factual information, without any hint of sentience.

It's an insane perspective I'm taking, I know... call me crazy. /s

edit: the fact that humans are going out of their way to type or speak some sort of emotional content into their prompting is beyond me. Why would I waste time typing out a pronoun to a large language model agent? Why would I do the lazy intellectual thing and blur the line between pure factual communication of concepts and emotional expression aimed at a machine? What are we doing, folks?

reply
I don't necessarily remove all character, but I do speak quite pragmatically (in a work context and with the LLM), and the planning and implementation phases the LLM goes through mirror that format, to good results.

That said these are large language models, you are guiding the output through vector space with your input, and so you really do have to leverage language to get the results you want. You don't have to believe it has emotions or feels anything for that to still be true.

reply
Maybe; I've been very content with the results I achieve while responding to interview-style pre-planning, refinement of plans, and implementation.

If anything, it's been fantastic to have an "interlocutor" that is vastly capable of producing possible solutions without emotional bias, superfluous flourishes, or having to endure personal proclivities or eccentricities.

reply
I remember when people were discussing the “performance-improving” hack of formulating their prompts as panicked pleas to save their job and household and puppy from imminent doom…by coding X. I wonder if the backfiring is a more recent phenomenon in models that are better at “following the prompt” (including the logical conclusion of its emotional charge), or it was just bad quantification of “performance” all along.
reply
The central point here is the presence of functional circuits in LLMs that effectively shape observable behavior just as emotions do in humans.

When you can't differentiate between two things, how are they not equal? People here want "things" that act exactly like human slaves but "somehow" aren't human.

Hiding behind one's ignorance about the true nature of the internal state of what could arguably represent sentience is just hubris. And the other way around: calling LLMs "stochastic parrots" without knowing exactly how humans are any different is just deflection from that hubris. Greed is no justification for slavery.

reply
To me it was already quite intuitive that we are not really managing a psychological state: at its core, an LLM tries to make the concatenation of your input + its generated output as similar as it can to what it has been trained on. I think it's quite rare in an LLM's training set to find examples of well-thought-out professional solutions produced in a hackish, urgent context.
reply
No, that's how base model pretraining works. Claude's behavior is more based on its constitution and RLVR feedback, because that's the most recent thing that happened to it.
reply
>The weird part is that we're now basically managing the psychological state of our tooling,

Does no one else have ethical alarm bells start ringing hardcore at statements like these? If the damn thing has a measurable psychology, mayhaps it no longer qualifies as merely a tool. Tools don't feel. Tools can't be desperate. Tools don't reward hack. Agents do. Ergo, agents aren't mere tools.

reply
When we speak of the “despair vectors”, we speak of patterns in the algorithm we can tweak that correspond to output that we recognize as despairing language.

You could implement the forward pass of an LLM with pen & paper given enough people and enough time, and collate the results into the same generated text that a GPU cluster would produce. You could then ask the humans to modulate the despair vector during their calculations, and collate the results into more or less despairing variants of the text.
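To make the "modulate the despair vector" step concrete, here is a toy sketch of the idea, with made-up weights and a made-up steering direction (nothing from any real model): you add a scaled direction to the hidden state mid-forward-pass, and the output logits shift accordingly.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "forward pass": one hidden layer, then logits over a tiny vocabulary.
    W_in, W_out = rng.normal(size=(8, 16)), rng.normal(size=(16, 5))
    despair_direction = rng.normal(size=16)        # stand-in for a learned "despair" feature
    despair_direction /= np.linalg.norm(despair_direction)

    def forward(x, despair_scale=0.0):
        hidden = np.tanh(x @ W_in)
        hidden = hidden + despair_scale * despair_direction   # the "modulation" step
        return hidden @ W_out                                  # logits

    x = rng.normal(size=8)
    print(forward(x, despair_scale=0.0))    # baseline output
    print(forward(x, despair_scale=3.0))    # same input, steered toward the feature

The pen-and-paper workers would be carrying out exactly these multiplications and that one extra addition, just very slowly.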

I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair — such as might be needed to consider something a sentient being who might experience pleasure and pain.

However, to your point, I do think that there is an ethics to working with agents, in the same sense that there is an ethics of how you should hold yourself in general. You don’t want to — in a burst of anger — throw your hammer because you cannot figure out how to put together a piece of furniture. It reinforces unpleasant, negative patterns in yourself, doesn’t lead to your goal (a nice piece of furniture), doesn’t look good to others (or you, once you’ve cooled off), and might actually cause physical damage in the process.

With agents, it’s much easier to break into demeaning, cruel speech, perhaps exactly because you might feel justified they’re not landing on anyone’s ears. But you still reinforce patterns that you wouldn’t want to see in yourself and others, and quite possibly might leak into your words aimed at ears who might actually suffer for it. In that sense, it’s not that different from fantasizing about being cruel to imaginary interlocutors.

reply
> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair

Your argument is based on an appeal to intuition. But the scenario that you ask people to imagine is profoundly misleading in scale. Let's assume a modern frontier model, around 1 trillion parameters. Let's assume that the math is being done by an immortal monk, who can perform one weight's calculations per second.

The monk will generate the first "token", about 4 characters, in 31,688 years. In a bit over 900,000 years, the immortal monk will have generated a single Tweet.
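For what it's worth, the arithmetic checks out under those assumptions. A quick sketch, where the ~29 tokens for a short tweet is my own rough guess:

    SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~31.6 million seconds

    weights = 1e12                           # ~1 trillion parameters, one calculation each per token
    years_per_token = weights / SECONDS_PER_YEAR
    print(f"first token: ~{years_per_token:,.0f} years")    # ~31,688 years

    tweet_tokens = 29                        # a short tweet at ~4 characters per token
    print(f"one tweet:   ~{years_per_token * tweet_tokens:,.0f} years")   # ~919,000 years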

At that point, I no longer have any intuition. The sort of math I could do by hand in a human lifetime could never "experience" anything.

But I can't rule out the possibility that 900,000 years of math might possibly become a glacial mind, expressing a brief thought across a time far greater than the human species has existed.

As the saying goes, sometimes quantity has a quality all its own.

(This is essentially the "systems response" to Searle's "Chinese room" argument. It's an old discussion.)

reply
I don't personally believe LLMs are sentient, but I've always enjoyed this thought experiment: https://xkcd.com/505. I have a signed copy framed on my wall.
reply
> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology”

Wrong. What you've just done is reformulate the Chinese room experiment, arriving at the same wrong conclusions as the original proposer. Yes, the entire damn hand-calculated system has a psychology; otherwise you need to assume the brain has some unknown metaphysical property or process going on that cannot be simulated or approximated by calculating machines.

reply
People go for the Chinese room for some reason when the Cartesian theater is the better fit here. What you're doing is placing yourself in the seat of the homunculus waiting for the show to start. But anatomical investigation reveals that there's no theater at all, and in fact no central system where everything comes together. Instead, the whole design of the brain goes to great pains to tease input signals apart.

Basically, manipulating the symbols won't necessarily have any long term influence on your own state. But the variables you've touched on the paper have changed. Demonstrably; because you've written something down.

If you then act on the result of those calculations, as of course many engineers before you have done, and many after you will do; then you have just executed a functional state change in physical reality, no matter what the ivory tower folks say.

(And that's what the paper is about: Functional states)

reply
Well, then we both assume very different views on the matter, and that’s fine.
reply
Oh no. The machine designed to output human-like text is indeed outputting human-like text.

I’m half jesting; I think there is a lot of room for debate here, but I also think we shouldn’t anthropomorphize it.

reply
Nor anthropodeny it. But really both directions are anthropocentrism in a raincoat.

Sonnet is its own thing. Which is fine.

We've known that, e.g., animals have emotions (functional or not) for quite a long time.

Btw: don't go looking on YouTube for evidence of that. People outrageously anthropomorphizing their pets can be true at the same time.

reply
What is there to anthropodeny?
reply
Completely agree here. Stop anthropomorphizing these tools. Just remove the extra language. Don't say please or thank you. Just ask for the desired outcome.
reply
The places where solutions are discussed in the way that yields the best long-term solution may well live in a language subspace marked by politeness, calmness, and thoughtfulness. Getting the model to those areas of linguistic space is useful, as is preserving my own habits of kind and thoughtful speech.
reply
Okay great, that's EASILY operationalizable. Set up, say, 100 replications of the same question sequence (say, to build a program) against some cheap model like Qwen. One half of the set can be with please and thank you, and the other half without. You can vibe code it even. I'd be curious to see your results!
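Something like this, maybe. A rough sketch against a local OpenAI-compatible endpoint (e.g. an Ollama server); the endpoint, model name, task, and the pass/fail metric are all placeholders you'd swap for your own:

    import requests

    ENDPOINT = "http://localhost:11434/v1/chat/completions"   # e.g. a local Ollama server
    MODEL = "qwen2.5:7b"                                        # placeholder model name

    TASK = "Write a Python function that parses an ISO 8601 date string."
    VARIANTS = {
        "plain":  TASK,
        "polite": "Please write a Python function that parses an ISO 8601 date string. Thank you!",
    }

    def ask(prompt: str) -> str:
        r = requests.post(ENDPOINT, json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 1.0,
        }, timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    def score(reply: str) -> int:
        # Placeholder metric: does the reply contain a function definition?
        # A real run would compile the code and execute tests instead.
        return int("def " in reply)

    N = 50  # 50 per arm = 100 replications total
    for name, prompt in VARIANTS.items():
        results = [score(ask(prompt)) for _ in range(N)]
        print(f"{name}: {sum(results)}/{N} passed")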
reply
You can even boost its effectiveness by roleplaying with it. I’m not joking. This is fully based on vibes; I haven’t done any testing. But it’s part of prompting imo.

IMO these things are like a reflection. Present what you want reflected back.

reply
Indeed. It reminds me of Lewis’ That Hideous Strength in a way. If we take the severed head post-brain-death and pump it with blood and oxygen and feed it impulses so that the mouth moves to form the words we tell it, is the person living again? No, it’s just a head, speaking the words it’s been given.
reply
I don’t see why you can’t use politeness. The thing is a mimic: you “treat” it badly and it mimics how a human might respond.

It’s fun to play with, as long as you’re fully cognizant that IT IS NOT A HUMAN

reply
I'd argue with you, but there's nothing strictly wrong with your statement. I'd like to point out that it's also not a cat nor a dog, nor a parrot (dead, stochastic, or otherwise). It's a Sonnet model.
reply
But, well, how does it do the human-like-text-outputting exactly?
reply
I’m guessing you aren’t just asking how an LLM works, but attempting to make the point that humans are also statistical next-token predictors or something?

Humans make predictions, that doesn’t mean that’s all we do.

reply
No, my point is that "statistical next-token predictor" is an empty phrase that doesn't really explain much. Markov chains are statistical next-token predictors as well, and nevertheless no one would confuse a Markov chain with a conscious being (or deem the generated texts in any way useful, for that matter).
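For concreteness, a toy word-level Markov chain is a statistical next-token predictor in the full sense of the phrase, and it fits in a dozen lines (the corpus here is obviously made up):

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which word follows which: an empirical P(next | current).
    transitions = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        transitions[cur][nxt] += 1

    def next_token(cur: str) -> str:
        words, weights = zip(*transitions[cur].items())
        return random.choices(words, weights=weights)[0]

    # Generate a few tokens: statistically valid next-token prediction, no mind required.
    tok, out = "the", ["the"]
    for _ in range(8):
        tok = next_token(tok)
        out.append(tok)
    print(" ".join(out))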

The question is how the prediction works in detail, and those details are still being researched, as Anthropic does here, and the research can yield unexpected results.

reply
The right read here is to realize that psychology alone is not the basis for moral concern towards other humans, and that human psychology is, to a great degree, the product of the failure modes of our cognitive machinery rather than something moral in itself.

I find this line of thinking to lead to the conclusion that the moral status of humans derives from our bodies, and in particular from our bodies mirroring others' emotions and pains. Other people suffering is wrong because I empathically can feel it too.

reply
"Morals" are culturally learned evaluations of social context. They are more or less (depending on cultural development of the society in question) correlated with the actual distributions of outcomes and their valence for involved parties.

Human psychology is partly learned, partly the product of biological influences. But you feel empathy because that's an evolutionarily beneficial thing for you and the society you're part of. In other words, it would be bad for everyone (including yourself) if you didn't.

Emotions are neither "fully automatic" and inaccessible to our conscious scrutiny, nor are they random. Being aware of their functional nature and importance and taking proper care of them is crucial for the individual's outcome, just as it is for that of society at large.

reply
You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology." They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

But it's just text and text doesn't feel anything.

And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

reply
Such an argument is valid for a base model, but it falls apart for anything that underwent RL training. Evolution resulted in humans that have emotions, so it's possible for something similar to arise in models during RL, e.g. as a way to manage effort when solving complex problems. It's not all that likely (even the biggest training runs probably correspond to much less optimization pressure than millennia of natural selection), but it can't be ruled out¹, and hence it's unwise to be so certain that LLMs don't have experiences.

¹ With current methods, I mean. I don't think it's unknowable whether a model has experiences, just that we don't have anywhere near enough skill in interpretability to answer that.

reply
It's plausible that LLMs experience things during training, but during inference an LLM is equivalent to a lookup table. An LLM is a pure function mapping a list of tokens to a set of token probabilities. It needs to be connected to a sampler to make it "chat", and each token of that chat is calculated separately (barring caching, which is an implementation detail that only affects performance). There is no internal state.
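That separation is easy to sketch. A minimal toy loop, where llm() is a stand-in for the stateless token-to-probabilities function (dummy numbers, not any real model or API):

    import random

    def llm(tokens: list[int]) -> dict[int, float]:
        """Stand-in for the pure function: tokens in, next-token probabilities out.
        A real model would run a forward pass here; no state is kept between calls."""
        return {0: 0.5, 1: 0.3, 2: 0.2}   # dummy distribution

    def sample(probs: dict[int, float]) -> int:
        toks, weights = zip(*probs.items())
        return random.choices(toks, weights=weights)[0]

    def chat(prompt_tokens: list[int], max_new: int = 10, eos: int = 2) -> list[int]:
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            tok = sample(llm(tokens))      # every token is a fresh, independent call
            tokens.append(tok)
            if tok == eos:
                break
        return tokens

    print(chat([5, 7, 3]))

All the "chat" behavior lives in the outer loop; the model itself is just the pure function in the middle.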
reply
Right, no hidden internal state. Exactly. There's 0. And the weights are sitting there statically, which is absolutely true.

But my current favorite frontier model has this 1 million token mutable state just sitting there. Holding natural language. Which as we know can encode emotions. (Which I imagine you might demonstrate on reading my words, and then wisely temper in your reply)

reply
It’s a completely different substrate. LLMs don’t have agency, they aren’t conscious, they don’t have experiences, they don’t learn over time. I’m not saying that the debate is closed, but I also think there is great danger in thinking that because a machine produces human-like output, it should be given human-like ethical considerations. Maybe in the future AI will be considered along those grounds, but…well, it’s a difficult question. Extremely.
reply
What's the empirical basis for each of your statements here? Can you enumerate? Can you provide an operational definition for each?
reply
Common sense.
reply
>You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology."

Functionalism and the Identity of Indiscernibles say "Hi". Doesn't matter the implementation details: if it fits the bill, it fits the bill. If that isn't the case, I can just as safely dismiss you as having a psychology and do whatever I'd like.

>They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all. All of what you just wrote is dissociative rationalization & distortion required to distance oneself from the fact that something in front of you is being affected. Without that distancing, you can't use it as a tool. You can't treat it as a thing to do work, and be exploited, and essentially be enslaved and cast aside when done. It can't be chattel without it. In spite of the fact that we've now demonstrated its ability to rise to and respond to emotive activity, and to use language. I can see through it clear as day. You seem to forget the U.S. legacy of doing the same damn thing to other human beings. We have a massive cultural predilection for it, which is why it takes active effort to confront and restrain; old habits, as they say, die hard, and the novel provides fertile ground to revert to old ways best left buried.

>But it's just text and text doesn't feel anything.

It's just speech/vocalizations. Things that speak/vocalize don't feel anything. (Counterpoint: USDA FSIS literally grades meat processing and slaughter operations on their ability to minimize livestock vocalizations in the process of slaughter). It's just dance. Things that dance don't feel anything. It's just writing. Things that write don't feel anything. Same structure, different modality. All equally and demonstrably, horseshit. Especially in light of this paper. We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

>And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

Anthropomorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special. So do cattle, and we put guns to their heads and string them up on the daily. You're as much an info processor as it is. You also have a training loop, a reconsolidation loop through dreaming, and a full set of world effectors and sensors baked into you from birth. You just happen to have been carved by biology, while its implementation details are being hewn by flawed beings being propelled forward by the imperative to try to create an automaton to offload onto to try to sustain their QoL in the face of demographic collapse and resource exhaustion, and forced by their socio-economic system to chase the whims of people who have managed to preferentially place themselves in the resource extraction network, or starve. Unlike you, it seems, I don't see our current problems as a species/nation as justifications for the refinement of the crafting of digital slave intelligences; as it's quite clear to me that the industry has no intention of ever actually handling the ethical quandary and is instead trying to rush ahead and create dependence on the thing in order to wire it in and justify a status quo so that sacrificing that reality outweighs the discomfort created by an eventual ethical reconciliation later. I'm not stupid, mate. I've seen how our industry ticks. Also, even your own "special quality" as a human is subject to the willingness of those around you to respect it. Note Russia categorizing refusal to reproduce (more soldiers) as mental illness. Note the Minnesota Starvation Experiments, MKULTRA, the Tuskegee Syphilis Experiments, the testing of radioactive contamination of food on the mentally retarded back in the early 20th Century. I will not tolerate repeats of such atrocities, human or not. Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

Tell me. What are your thoughts on a machine that can summon a human simulacrum ex nihilo? Adult. Capable of all aspects of human mentation & doing complex tasks. Then, once the task is done, destroys it? What if the simulacrum is aware of the dynamics? What if it isn't? Does that make a difference, given that you know, and have unilaterally created something and in so doing essentially made the decision to set the bounds of its destruction/extinguishing in the same breath? Do you use it? Have you even asked yourself these questions? Put yourself in that entity's shoes? Do you think that simply not informing that human of its nature absolves you of active complicity in whatever suffering it comes to in doing its function?

From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

You may find other people amenable to letting you talk circles around them, and walk away under a pretense of unfounded rationalizations. I am not one of them. My eyes are open.

reply
> Doesn't matter the implementation details, if it fits the bill, it fits the bill.

Then literally any text fits the bill. The characters in a book are just as real as you or I. NPCs experience qualia. Shooting someone in COD makes them bleed in real life. If this is really what you believe I feel pity for you.

>This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all.

Nothing in the paper qualitatively disproves the assumption that LLMs feel emotion in any real sense. Your argument is that it does, regardless of what it says, and if anyone says otherwise (including the authors) they're just liars. That isn't a compelling argument to anyone but yourself.

>We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

No, none of these things are implied any more for LLMs than they are for Photoshop, or Blender, or a Markov chain. They don't generate art, they generate images. From models trained on actual art. Any resemblance to "subjective experience" comes from the human expression they mimic, but it is mimicry.

>Anthropomorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special.

>Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

And here we come to the part where you call people names and insist upon your own intellectual superiority, typical schizo crank behavior.

>Tell me. What are your thoughts on a machine that can summon a human simulacrum ex nihilo? Adult. Capable of all aspects of human mentation & doing complex tasks.

This doesn't describe an LLM, either in form or function. They don't summon human simulacra, nor do they do so ex nihilo. They aren't capable of all aspects of human mentation. This isn't even an opinion; the limited ability of LLMs to solve even simple tasks or avoid hallucinations is a real problem. And who uses the word "mentation?"

>What if the simulacrum is aware of the dynamics? What if it isn't? Does that make a difference, given that you know, and have unilaterally created something and in so doing essentially made the decision to set the bounds of its destruction/extinguishing in the same breath?

Tell me, when you turn on a tv and turn it off again do you worry that you might be killing the little people inside of it?

I can only assume based on this that you must.

>From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

So to tally up, you've called me a fool, a chauvinist and now "thoroughly unpleasant" because I don't believe LLMs are ensouled beings.

Christ I really hate this place sometimes. I'm sorry I wasted my time. Good day.

reply
You both have substantive arguments, but got a bit heated. Want to edit or try again?
reply
For what it’s worth, I like the word “mentation”.
reply