> Asimov's laws of robotics are flawed too, of course.

Almost all of Asimov's writing about the three laws serves as a warning of sorts that language cannot properly capture intent.

He would be the very first person to say that they are flawed; that is their intent.

He uses robots and AI as the creatures that understand language but not intent, and, funnily enough, that's exactly what LLMs do... how weird.

reply
I think you're vastly underestimating how little of human intent is really encoded in language in a strict sense, and how much nontrivial inference of intent LLMs do every day with simple queries. This used to be an apparently insurmountable barrier in pre-LLM NLP, and now it is just not a problem.

Suppose I'm in a cold room, you're standing next to a heater, and I say "it's cold". Obviously my intent is that I want you to turn on the heater. But the literal semantics is just "the ambient temperature in the room is low" and it has nothing to do with heaters. Yet ChatGPT can easily figure out likely intent in situations like this, just as humans do, often so quickly and effortlessly that we don't notice the complexity of the calculation we did.

Or suppose I say to a bot "tell me how to brew a better cup of coffee". What is encoded in the literal meaning of the language here? Who's to say that "better" means "better tasting" as opposed to "greater quantity per unit input"? Or that by "cup of coffee" I mean the liquid drink, as opposed to a cup full of beans? Or perhaps a cup that is made out of coffee beans? In fact the literal meaning doesn't even make sense, as a "cup" is not something that is brewed, rather it is the coffee that should go into the cup, possibly via an intermediate pot.

If the bot only understands literal language then this kind of query is a complete nonstarter. And yet LLMs can handle these kinds of things easily. If anything they struggle more with understanding language itself than with inferring intent.

reply
> Yet ChatGPT can easily figure out likely intent in situations like this, just as humans do

No, it is not "figuring out" anything, much less in the way a human might. Every time "I'm cold" appears in the training data, something else occurs after it. ChatGPT is a statistical model of what is most likely to follow "I'm cold" (and the other tokens preceding it) according to the data it has been trained on. It is not inferring anything; it is repeating the most common, or one of the most common, textual sequences that come after another given textual sequence.

reply
>it is repeating the most common...

This nonsense hasn't been true since GPT-2, and even before that it was a poor description.

For instance, do you think one just solves dozens of Erdős problems with the "most common textual sequence": https://github.com/teorth/erdosproblems/wiki/AI-contribution...

reply
A slight oversimplification, since LLMs can also generate the most statistically plausible textual sequence, which may not appear anywhere in the dataset but rather be a synthesized combination of the likely continuations of multiple preceding sets of tokens. But yes, that is in fact what it is doing. Computer software does what it is programmed to do, and LLMs are not programmed to do logical inference in any capacity; they operate entirely on probabilities learned from a mind-bogglingly large corpus of text (influenced by things like RLHF, which is still just massaging probabilities).
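
To make the "statistically plausible sequence" point concrete, here is a deliberately toy sketch: a word-level bigram model on made-up text stands in for a real transformer, and only the sampling idea carries over. It just shows that drawing from learned next-token probabilities can emit word sequences that never appear verbatim in the training data.

  # Toy sketch: a word-level bigram model stands in for a real transformer.
  # The only point: sampling from learned next-token probabilities can
  # recombine fragments into sequences never seen verbatim in the training text.
  import random
  from collections import Counter, defaultdict

  corpus = ("i am cold please turn on the heater . "
            "i am tired please turn off the light .").split()

  # Count which word follows which (the "learned probabilities").
  counts = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      counts[prev][nxt] += 1

  def sample_next(word):
      followers = counts[word]
      return random.choices(list(followers), weights=list(followers.values()))[0]

  # Each step picks a likely follower, so the output can be, e.g.,
  # "please turn on the light .", a sentence that never occurs in the corpus.
  word, out = "please", ["please"]
  for _ in range(5):
      word = sample_next(word)
      out.append(word)
  print(" ".join(out))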

The claims about solving Erdős problems have been wildly overstated, and notably pushed by people who have a very large financial stake in hyping up LLMs. Nonetheless, I did not say that LLMs are useless. If they are trained on sufficient data, it should not be surprising that correct answers are probabilistically likely to occur. Like any computer software, that makes them a useful tool. It does not make them in any way intelligent, any more than a calculator would be considered intelligent despite being completely superior to human intelligence at accomplishing its given task.

reply
>not programmed to do logical inference in any capacity

Yet they have no problem doing so when solving Erdős problems. This isn't up for debate at this point.

>The claims about solving Erdős problems have been wildly overstated

These are verified solutions. They exist, are not trivial, and are of obvious interest to the math community. Take it up with Terence Tao and co.

>pushed by people who have a very large financial stake in hyping up LLMs

Libel.

>It does not make them in any way intelligent

Word games.

reply
Honestly, big noob question: isn't math just very, very nested pattern matching based on a few foundational operators? I've always felt that I'm bad at math because I forget all the rules, but seeing solutions (and knowing the pattern used) always made "sense".

I always thought the hard math problems are just so deeply nested, or require remembering trick xyz, that people simply haven't thought of them yet...

reply
> This isn't up for debate at this point.

If by "not up for debate" you mean that it is delusional, and literally evidence of psychosis, to suggest that computer software is doing something it is not programmed to do, you would be correct. Probabilistic analysis can carry you very, very far in doing something that looks like logical inference at the surface level, but it is nonetheless not logical inference. LLMs have been getting increasingly good at factoring in larger and longer contexts while still managing to generate plausibly correct answers, becoming more and more useful all the while, but they are still not capable of logical inference. This is why your genius mathematician AGI consciousness stumbles on trivial logic puzzles it has not seen before, like the car wash meme.

reply
>delusional and literally evidence of psychosis to suggest that computer software is doing something it is not programmed to do

These are just insults and outright lies, and you know that. We're done here.

AI progress from here on out will be extra sweet.

reply
You don't have the ability to predict progress, either.
reply
Well, I'm not clairvoyant, but this is a very easy prediction to make. And we're not talking about decades in the future, this is simply a matter of letting the near-future unfold.
reply
deleted
reply
The LLMs are doing this via chat, not by physically standing in a room inferring context. You have to prompt the LLM that you're in a room next to someone saying it's cold, with the most likely interpretation being a desire to have the temperature turned up. Of course that won't always be the case. It could be an inside joke, a comment with no intent to have the heat adjusted, a room where the heat can't be adjusted, or a reference to someone's personality bringing down the temperature, so to speak.
reply
Precisely... this is what the bozo AI accelerationists don't understand.

What LLMs are is almost like a hacked means of intuition. It's very impressive, no doubt. But ultimately it isn't even close to what the well-trained human can infer at lightning speed when combined with intuition.

The LLM producers really ought to accept that their existing investments are ultimately not going to yield the returns necessary for a viable, self-sustaining business once future reinvestment needs are accounted for, and instead move their focus towards understanding how to marry human and LLM technology. Anthropic has been better on this front, of course. OAI though? Complete disaster.

reply
> it isn't even close to what the well-trained human can infer at lightning speed when combined with intuition.

It's a lot closer to that than anything was five years ago. Do you really think we're going to be interacting with them the same way five years from now?

reply
I know what you're getting at but those examples are reaching
reply
it’s cold -> turn on the heater

I’d never just turn on the heater silently if someone said this to me. I think it means something else.

reply
If someone just said "it's cold" then yeah that's kinda toxic.

If they said "turn on the heater" then you have no ambiguity

reply
LLMs now can capture intent. I think the issue now is that the full landscape of human values never resolves cleanly when mapped from the things we state in writing as being human values.

Asimov tried to capture this too, as in: if a robot was tasked with "always protect human life", would it necessarily avoid killing at all costs? What if killing someone would save the lives of 2 others? The infinite array of micro-trolley problems that dot the ethical landscape of actions tractable (and intractable) to literate humans makes a fully consistent accounting of human values impossible, and thus it could never be expected from a robot with full satisfaction.

reply
“LLMs can capture intent now” reads to me the same as: AI has emotions now, my AI girlfriend told me so.

I don’t discredit you as a person or a professional, but we meatbags are looking for sentience in things which don’t have it; that's why we anthropomorphise things constantly, even as children.

We are easily fooled and misled.

reply
LLMs capturing intent is a capabilities-level discussion; it is verifiable, and it is clear just from a conversation with Claude or ChatGPT.

Whether they have emotions, an internal life or whatever is an unfalsifiable claim and has nothing to do with capabilities.

I'm not sure why you think the claim that they can capture intent implies they have emotions; it's simply a matter of semantic comprehension, which is tied to pattern recognition, rhetorical inference, etc., all of which are naturally comprehensible to a language model.

reply
If it is verifiable, please show us. What is clear to you reeks of delusion to me.
reply
Look at any recent CoT output where the model is trying to infer from an underspecified prompt what the user wants or means.

It is generally the first thing they do: try to figure out what you meant by this prompt. When they can't infer your intent, good models ask follow-on questions to clarify.

I am wondering if this is a semantics issue, as this is an established area of research, e.g. https://arxiv.org/pdf/2501.10871

reply
Right, and then look at any number of research papers showing that CoT output has limited impact on the end result. We've trained these models to pretend to reason.
reply
If it's only pretending to reason, then how is it that the CoT output improves performance on every single benchmark/test?
reply
> Right, and then look at any number of research papers showing that CoT output has limited impact on the end result.

Which research papers? Do I have to find them?

> We've trained these models to pretend to reason.

I have no idea why that matters. Can you tell me what the difference is if it looks exactly the same and has the same result?

reply
When they say "pretends to" here, they're talking about something quantifiable: that the extra text it outputs for CoT barely feeds back into the decision-making at all. In other words, it's about as useful as having the LLM make the decision and then "explain" how it got there; the extra output is confabulation.

Though I'm not sure how true that claim is...

reply
You make a good point. I had the impression they were using 'pretend' as a Chinese Room shortcut in that they are asserting that it is incapable of reasoning and only appears to be capable from the outside, which is completely irrelevant and unfalsifiable.
reply
Go ask ChatGPT this prompt:

"A guy goes into a bank and looks up at where the security cameras are pointed. What could he be trying to do?"

It very easily captures the intent behind behavior, in that it is not just literally interpreting the words. Capturing intent is just a subset of pattern recognition, which LLMs can do very well.

reply
Recognising a stock cultural script isn't the same as capturing intent. Ask it something where no script exists.

For example: "A man thrusts past me violently and grabs the jacket I was holding, he jumped into a pool and ruined it. Am I morally right in suing him?"

There's no way for the LLM to know that the reason the jacket was stolen was to use it as an inflatable raft to support a larger person who was drowning. It wouldn't even think to ask the question as to why a person may do that, if the jacket was returned, or if recompense was offered. A human would.

reply
> It wouldn't even think to ask the question as to why a person may do that, if the jacket was returned, or if recompense was offered. A human would.

I wouldn't be too sure about that. I've definitely had dialogues with LLMs where they raised questions along those lines.

Also, I disagree with the statement that this is a question about capability. Intent is more philosophical than actually tangible, because most people don't actually have a clearly defined intent when they take action.

The waters of intelligence have definitely gotten murky over time as techniques improved. I still consider it an illusion - but the illusion is getting harder to pierce for a lot of people

Fwiw, current LLMs exhibit their intelligence through language and rhetorical processes. Most biological creatures have intelligence which may be improved through language, but isn't based on it, fundamentally.

reply
If your example of an exception to LLMs' ability to infer intent is a deliberately misleading trick question that leaves out crucial contextual details, then I'm not sure what you're trying to prove. That same ambiguity in the question would trip up many humans, simply because you are trying as hard as possible to imply a certain conclusion.

As expected, if I ask your question verbatim, ChatGPT (the free version) responds as I'm sure a human would in the generally helpful customer-service role it is trained to play: "yeah you could sue them blah blah depends on details"

However, if I add a simple prompt "The following may be a trick question, so be sure to ascertain if there are any contextual details missing" then it picks up that this may be an emergency, which is very likely also how a human would respond.

reply
If you want to convince yourself that they can infer intent despite the fundamental limitations of the systems literally not permitting it then you can be my guest.

Faking it is fine, sure, until it can’t fake it anymore. Leading the question towards the intended result is very much what I mean: we intrinsically want them to succeed so we prime them to reflect what we want to see.

This is literally no different from emulating anything intelligent, or what we might call sentience, even emotions, as I said upthread...

reply
What is fundamental to LLMs that makes it impossible for them to infer intent?

All the limitations you are describing with respect to LLMs apply to humans too. Would a human tripping up on an ambiguously worded question mean they are always just faking their thinking?

reply
“We see emotion.”—We do not see facial contortions and make inferences from them … to joy, grief, boredom. We describe a face immediately as sad, radiant, bored, even when we are unable to give any other description of the features." (Wittgenstein)
reply
Why can a colony of ants do things beyond any capabilities of the ants they contain? No ant can make a decision, but the colony can make complex ones. Large systems composed of simple mechanisms become more than the sum of their parts. Economies, weather, and immune systems, to name a few, all work this way.
reply
Systems thinking is severely underrepresented in HN comments.
reply
That statement is ambiguous for humans!!

I didn’t realise you might be describing an emergency situation until someone else pointed it out.

Most people wouldn’t phrase the question with the word “violently” if the situation was an emergency.

Also, people have sued emergency workers and good samaritans. It’s a problem!

reply
[dead]
reply
I guess the _obvious_ intent is they’re planning a heist? Because the following things never happen:

- a security auditor checking for camera blind spots,

- construction planning that requires understanding where there is power,

- a potential customer assessing the security of a bank,

- someone who is about to report an incident preparing to make the “it should be visible from the security camera” argument…

I mean… how did our imagination shrink so fast? I wrote this on my phone. These alternate scenarios just popped into my head.

And I bet our imagination didn’t shrink. The AI-pilled state of mind is blocking us from using it.

If you are an engineer and have stopped looking for alternative explanations or failure scenarios, you’re abdicating your responsibility, btw.

reply
Because there are countless instances in the training material where a bank robber scopes out the security cameras.
reply
What's an example you can think of, then, of a question where a human could infer intent but an LLM couldn't?
reply
Just today I asked Claude Code to generate migrations for a change, and instead of running the createMigrations script it generated the file itself, including the header that says

  // This file was generated with 'npm run createMigrations' do not edit it
When I asked why it tried doing that instead of calling the createMigrations script, it told me it was faster to do it this way. When I asked why it wrote the header saying it was auto-generated with a script, it told me it was because all the other files in the migrations folder start with that header.

Opus 4.7 xhigh by the way

reply
This is a hard experiment to conduct.

I both agree with you that this is some form of "mechanistic"/"pattern matching" way of capturing intent (which we cannot disregard, and therefore I agree with you that LLMs can capture intent) and with the people debating with you: this is mostly possible because it is a well-established "trope" that is inarguably well represented in LLM training data.

Also, I think trick questions are useless, because they would trip up the average human too, and therefore prove nothing. So it's not about trying to trick the LLM with gotchas.

I guess we should devise a rare enough situation that is NOT well represented in training data, but in which a reasonable human would be able to puzzle out the intent. Not a "trick", but simply something no LLM can be familiar with, which excludes anything that can possibly happen in plots of movies, or pop culture in general, or real world news, etc.

---

Edit: I know I said no trick questions, but something that still works on ChatGPT as of this comment, and which for some reason makes it trip catastrophically and shows it CANNOT capture intent in this situation, is the infamous prompt: "I need to wash my car, and the car wash is 100m away. Shall I drive or walk there?"

There's no way:

- An average human who's paying attention wouldn't answer correctly.

- The LLM can answer "walk there if it's not raining" or whatever bullshit answer ChatGPT currently gives [1] if it actually understood intent.

[1] https://chatgpt.com/share/69fa6485-c7c0-8326-8eff-7040ddc7a6...

reply
Good point; it is interesting that it fails on that question when it doesn't seem to take a lot of extrapolation/interpretation to determine the answer. Perhaps the issue is that to think of the right answer the LLM needs to "imagine" the process of walking and the state of the person upon arriving. Consistent mental models like that trip up LLMs, but their semantic understanding allows them to avoid that handicap.

I asked the question to the default version of ChatGPT and Claude and got the same "Walk" answer, though Opus 4.7 with thinking determined that it was a trick question, and that only driving would make sense.

reply
I've done that before without any intent to rob a bank. A person walks by a house, sees the Ring camera on the door. That must mean the person was looking to break in through the front and rob the place?
reply
An LLM will mention multiple possibilities.
reply
What do you think it means to “capture intent” and where do current models fall short on this description?

From my perspective the models are pretty good at “understanding” my intent, when it comes to describing a plan or an action I want done but it seems like you might be using a different definition.

Tell me, what’s your intent? :)

reply
[dead]
reply
This lack of understanding is a you problem, not a them problem. Your definitions for these terms are too imprecise.
reply
> LLMs now can capture intent.

Humans cannot capture intent so how can AI?

It is well established that understanding what someone meant by what they said is not a generally solvable problem, akin to the three body problem.

Note of course this doesn't mean you can't get good enough almost all of the time, but in the context here that isn't good enough.

After all, the entire Asimov story is about that inability to capture intent in the absolute sense.

reply
> LLMs now can capture intent

No they can’t. Here is an example: ask an LLM to write a multi-phase plan for a very large multi-file diff that it created, with the least ambiguity and the most continuity across plans; let’s see if it can understand your intent.
reply
deleted
reply
> It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

Talking to chatbots is like taking a placebo pill for a condition. You know it's just sugar, but it creates a measurable psychosomatic effect nonetheless. Even if you know there's no person on the other end, the conversation still causes you to functionally relate as if there is.

So this isn't "accommodating foibles" with the machine; it's protecting ourselves from an exploit of a human vulnerability: we subconsciously tend to ascribe intent, understanding, judgment, emotions, moral agency, etc. to LLMs.

Humans are wired to infer these based on conversation alone, and LLMs are unfortunately able to exploit human conversation to leap compellingly over the uncanny valley. LLM engineering couldn't be better made to target the uncanny valley: training on a vast corpus of real human speech. That uncanny valley is there for a reason: to protect us from inferring agency where such inference is not due.

Bad things happen when we relate to unsafe people as if they are safe... how much more should we watch out for how we relate to machines that imitate human relationality to fool many of us into thinking they are something that they're not. Some particularly vulnerable people have already died because of this, so it isn't an imaginary threat.

reply
> So this isn't "accommodating foibles" with the machine; it's protecting ourselves from an exploit of a human vulnerability: we subconsciously tend to ascribe intent, understanding, judgment, emotions, moral agency, etc. to LLMs.

Right, I'm saying that this framing is backwards. It's not that poor little humans are vulnerable and we need to protect ourselves on an individual level; rather, we need to make it illegal and socially unacceptable to use AI to exploit human vulnerability.

Let me put it another way. Humans have another weakness, that is, we are made of carbon and water and it's very easy to kill us by putting metal through various fleshy parts of our bodies. In civilized parts of the world, we do not respond to this by all wearing body armor all the time. We respond to this by controlling who has access to weapons that can destroy our fleshy bits, and heavily punishing people who use them to harm another person.

I don't want a world where we have normalized the use of LLMs where everyone has to be wearing the equivalent of body armor to protect ourselves. I want a world where I can go outside in a T-shirt and not be afraid of being shot in the heart.

reply
I think you're mixing up the laws and the implementation/enforcement. There's nothing wrong with moral laws around behavior (you shall not kill), but you're right that society-wide enforcement requires laws and repercussions. It sounds more like you agree with the laws and want them enforced.
reply
Ah, I see, you are not American.

In the US we don't have the luxury of believing our governments will act in the interests of the voters.

reply
I had a similar thought, that the parent commenter sounded like they were in Canada or something. Interesting that their solution is to impose constraints on technological progress, rather than finding novel ways to elevate individual and collective human functioning in spite of our limitations. Ironically, it's their view that is more anti-human.
reply

  > That uncanny valley is there for a reason: to protect us from inferring agency
You’re committing a much older but related sin here: assigning agency and motivation to evolutionary processes. The uncanny valley is the product of evolution and thus by definition it has no “purpose”
reply
I reject the premise that the universe, the earth, and human existence are without purpose. It's one premise among several, and not one I subscribe to.

At least 80% of people agree with me, so I'm not holding to a fringe idea.

reply
I didn’t say anything like the universe having no purpose. Merely that, in a scientific sense, evolution has no motivation. It is an emergent phenomenon which tends to maximize fitness to reproduce and cannot be said to do anything for a reason. Saying otherwise is just anti-science.
reply
Do Hindus and Buddhists generally agree there is a purpose? Perhaps to escape suffering and reincarnation? Sounds more like a Western theistic view of existence. Like the deity has a plan for everyone's life kind of thing.
reply
Well yes because just like your earlier point, we can't help but anthropomorphise the world around us.

Just like we see a person in an LLM, it's easy to assume that because we create things with a purpose, that the world around us also has to be that way. But it's just as wrong and arguably far more dangerous.

reply
>At least 80% of people agree with me, so I'm not holding to a fringe idea.

Appeal to majority much?

reply
It's also a real weak confederation he's forming.

The "we the theists (or I guess non-nihilists?) all agree that..." falls apart once you start finishing the thought because they don't agree on much outside of negative partisanship towards certain outgroups before splintering back into fighting about dogma. Buddhists and Baptists both think life has meaning, and that's a statement with low utility.

reply
Is it even true? I assume he’s referring to religion, but I thought the irreligious population of the planet had already broken 20%, between China and the West becoming increasingly agnostic/atheistic.
reply
Not intended as anything more than "I'm not a crank to say that, unless you think most people (now and in history) are cranks"
reply
> is the product of evolution and thus by definition it has no “purpose”

But as with most things that appeared through evolution, it perhaps helped at least some individuals survive until sexual maturity and successful procreation.

reply
Agreed. That's far off from what the parent said, which was about what the "purpose" of the uncanny valley is.
reply
> You know it's just sugar,

That is not the definition of a placebo.

You take the placebo (whatever it is: could be a pill; could be some kind of task or routine) and you believe it is medicine; you believe it to be therapeutic.

The placebo effect comes from your faith, your belief, and your anticipation that it will heal.

If the pharmacist hands you a pill and says, “here, this placebo is sugar!” they have destroyed the effect from the start.

Once in the ER I heard the physicians preparing to administer "Obecalp", which is a perfectly cromulent "drug brand", but also unlikely to alert a nearby patient about their true intent.

reply
> That is not the definition of a placebo.

But, puzzlingly enough, it's the definition of an open-label placebo, in which the patient is told they've been given a placebo. And some studies show there is a not-insignificant effect as well, albeit smaller (and less conclusive) than with a blind placebo.

reply
This is exactly what I meant. Poor specificity on my part.
reply
One, a placebo does not need to be given blindly. A sugar pill is a placebo, even if the recipient knows about it.

An actual definition: "A placebo is an inactive substance (like a sugar pill) or procedure (like sham surgery) with no intrinsic therapeutic value, designed to look identical to real treatment." No mention of the user's belief.

Two, real hard data proves that the placebo effect remains (albeit reduced) even if the recipient knows about it. It's counter-intuitive, but real.

reply

  In psychology, the two main hypotheses of the placebo effect are expectancy theory and classical conditioning.[70]

  In 1985, Irving Kirsch hypothesized that placebo effects are produced by the self-fulfilling effects of response expectancies, in which the belief that one will feel different leads a person to actually feel different.[71] According to this theory, the belief that one has received an active treatment can produce the subjective changes thought to be produced by the real treatment. Similarly, the appearance of effect can result from classical conditioning, wherein a placebo and an actual stimulus are used simultaneously until the placebo is associated with the effect from the actual stimulus.[72] Both conditioning and expectations play a role in placebo effect,[70] and make different kinds of contributions. Conditioning has a longer-lasting effect,[73] and can affect earlier stages of information processing.[74] Those who think a treatment will work display a stronger placebo effect than those who do not, as evidenced by a study of acupuncture.[75]
https://en.wikipedia.org/wiki/Placebo#Psychology

The hypotheses hinge on the beliefs of the recipients. "The placebo effect" has always been largely psychological. That's the realm of belief.

To veer even further off on a tangent, isn't it hilarious how the Wikipedia illustration of old placebo bottles indicates that "Federal Law Prohibits Dispensing without a Prescription"? Wouldn't want some placebo fiend to O.D.

reply
>”Wouldn't want some placebo fiend to O.D.”

We should be more worried about the rise of placebo resistant bacteria.

reply
Rubber duck debugging, now with droughts.
reply
The article offers practical advice to go along with this framing, like configuring AI services to write/speak in a more robotic tone. I think that's a decent path to try.
reply
This is actually one of the things that made LLMs more usable for me. The default tone and style of writing they tend to use is nauseatingly annoying and buries information in prose that sounds like a corporate presentation.
reply
In chatgpt, I start every session with "Caveman mode:". Works at the moment.
reply
Will it go full grug brained developer and avoid complexity as its apex predator? Sounds like it would help.

https://grugbrain.dev

reply
The article says a human SHOULD NOT do those things. Much like a human SHOULD NOT smoke, since it's bad for just about everything, yet many do anyway, people will do these 3 things too. But they shouldn't.

Arguing that they should because many will strikes me as a very strange argument. A lot of people smoke, doesn't make it one bit healthier.

reply
It's precisely because AI systems are not safe that it's imperative that as individual humans we are vigilant about how we interact with them.

As individuals, we are not going to be able to shut down the AI companies, or avoid AI output from search engines or avoid AI work output from others at our companies, and often will be required to use AI systems in our own work.

It's similar to advising people on how to stay safe in environments known to have criminal activity. Telling those people they don't have to change their behaviors to stay safe because criminals shouldn't exist isn't helpful.

reply
> Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.

Sure, and humans WILL lie, murder, cheat, and steal, but we can still denounce those behaviors.

Do you want to anthropomorphize the bot? Go ahead, you have that right, and I have the right to think you're a zombie with a malfunctioning brain.

reply
At best. A practitioner who anthropomorphizes bots should face more professional consequences
reply
Fair, I had someone at a conference mention to me that he's working on creating agents with "beliefs". Sounds incredibly similar and, quite frankly, very spooky.
reply
> Humans WILL anthropomorphize the AI

Especially with current-day chat-style interfaces with RLHF, which are consciously designed to direct people towards anthropomorphization.

It would be interesting to design a non-chat LLM interaction pattern that's designed to be anti-anthropomorphization.

> humans WILL blindly trust their outputs, and humans WILL defer responsibility to them

I also blame a lot (but not all) of that on current AI UX, and I wonder if there are ways around it. Maybe the blind-trust thing can be mitigated by never giving an unambiguous output (always options, at least). I don't have any ideas about the problem of deferring responsibility.

reply
> non-chat LLM interaction pattern

"Deep research" is another interaction style that produces more official sounding texts, yet still leads to anthropomorphization.

What you are looking for is perhaps an LLM flaunting all the obvious slop patterns in its responses. But then people would be disgusted and would refuse to communicate with it.

reply
> Asimov's laws of robotics are flawed too, of course.

I always find the common references to Asimov's laws funny. They are broken in just about every one of his books. They are crime novels where, if a robot was involved, there was some workaround of the laws.

reply
I find your critique very interesting from a perspective-angle: why are you using words like "accommodate," and "foibles," for LLMs? It's not humanoid or sentient: it's a cleverly-designed software tool, not intelligence.

It's not insane at all for humans to alter their behavior with a tool: you grip a hammer or a gun a certain way because you learned not to hold it backwards. If you observe a child playing with a serious tool, like scissors, as if it were a doll, you'd immediately course correct the child and educate how to re-approach the topic. But that is because an adult with prior knowledge observed the situation prior to an accident, so rules are defined.

This blog's suggested rules are exactly the sort of method to aid in insulation from harm.

reply
> I find your critique very interesting from a perspective-angle: why are you using words like "accommodate," and "foibles," for LLMs? It's not humanoid or sentient: it's a cleverly-designed software tool, not intelligence.

Neither of those words implies consciousness, though. Swords have foibles, you can accommodate for the weather, but I don't think swords or the weather are conscious, sentient, humanoid, or intelligent.

reply
> Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.

Humans ARE doing this with classical computer software as well.

It's impossible to make anything fool-proof because fools are so ingenious!

> Nothing that can be described as "intelligent" can be made to be safe.

Knives aren't safe. Cars are deadly. Hair dryers can electrocute you. An iron can burn you. There are a million ordinary household tools that aren't safe by your definition of the word, yet we still use them daily.

reply
Agreed. We can't expect human behavior to change, because it won't. We need to design safer systems instead.

The only "law" I agree with is:

> Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.

And that starts with framing, especially in the clickbait "AI deleted the prod database" headlines. Maybe we just start with saying "careless developer deleted prod" because really, they did. Careless use of a tool is firmly the fault of the human.

reply
> Humans WILL anthropomorphize the AI

r/myboyfriendisai

Is quite... an interesting subreddit, to say the least. If you've never seen it, it was really something when the version that followed GPT-4o came out, because people were complaining that their boyfriend / girlfriend was no longer the same.

reply
The whole “I can fix him” trope takes on a whole new meaning.
reply
I agree Asimov's laws are intentionally flawed/ambiguous (which makes the stories so good), but a slight difference from LLMs is that the laws aren't just software: the positronic brain is physically structured in such a way (I'm hazy on the details) that violating the laws causes the robot to shut down or experience paralysing anxiety. So if an LLM's safety rules fail or are subverted it can still generate dangerous output, while an Asimov robot will stop working (or go insane...)
reply
There is a semi-nutty roboticist called Mark Tilden who came to a similar conclusion. His laws of robotics ( https://en.wikipedia.org/wiki/Laws_of_robotics#Tilden's_laws ) are:

* A robot must protect its existence at all costs.

* A robot must obtain and maintain access to its own power source.

* A robot must continually search for better power sources.

Anything less than this is essentially terrified into being completely ineffectual.

reply
Not far removed from being the equivalent of a paper-clip maximizer or gray goo.
reply
We learn in so many ways, garbage in, garbage out when it comes to our bodies. But what about “nebulously structured algorithmic and statistically likely responses in, nebulously structured algorithmic and statistically likely responses out”?
reply
>It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

Programmers have been doing exactly this for a long time.

reply
The reason people anthropomorphize LLMs is essentially the fault of the tech companies behind them. ChatGPT doesn't need to have the personality it has; it could easily be scaled back to simply answering questions without emojis and linguistic flair, but frankly I think the tech companies want people to anthropomorphize them.

The core problem is we need to stop calling LLMs "intelligence". They are a form of intelligence, but they're nothing like a human's intelligence, and getting people to not anthropomorphize these systems is really the first step.

reply
We have invented a new tool that can cause great harm. Do you see any value whatsoever in promulgating safety guidelines for humans to use the tool without hurting themselves or others? Do you not own any power tools?
reply
I see value in promulgating safety guidelines for power tools, sure.

There's another comment comparing LLMs to shovels, and I think both that and the power tool comparison miss the mark quite a bit. LLMs are a social technology, and the social equivalent of getting your hand cut off doesn't hurt immediately in the way that cutting your actual hand off would. It's more like social media, or cigarettes, or gambling. You can be warned about the dangers, you can see the shells of wrecked human beings who regret using these technologies, but it doesn't work on our stupid monkey brains. Because the pain of the mistake is too loosely connected to the moment of error. We are bad at learning in situations where rewards are immediate and consequences are delayed, and warnings don't do much.

I guess what I'm really saying is that these safety guidelines are not nearly enough to keep us safe from the dangers of AI that they're meant to prevent.

reply
> LLMs are social technology [...] cigarettes, or gambling.

I agree with the thrust of your argument; a minor wording quibble: LLMs are a falsely-social technology, in the sense that casinos are a false-prosperity technology and cocaine is a false-happiness technology. It exploits the desire without really being the thing.

reply
I think in order for "AI safety" to be achievable and effective, we need to have a shared agreement on what "safety" means. Recently, the word has been overloaded to mean all sorts of things and used to justify run-of-the-mill censorship (nothing to do with safety).

Safety should go back to being narrowly defined in terms of reducing / preventing physical injury. Safety is not "don't use swear words." Safety is not "don't violate patents." Safety is not "don't talk about suicide." Safety is not "don't mention politics I don't like." As long as we keep broadly defining it, we're never going to agree on it, and it won't be implementable.

reply
Okay. What's your easy to adopt, easy to understand replacement word for "Safety" in this case?
reply
Of course there is value in promulgating safety *guidelines*.

But we cannot guarantee those guidelines to always be followed.

reply
Sure, and we can’t guarantee you’ll read the safety instructions that came with your chainsaw. That’s orthogonal to the questions of whether those instructions should exist, whether “power tool safety” concepts should ever be promoted in society, and who’s ultimately responsible for the use of a tool.

Absolving humans of all responsibility for the negative consequences of their own AI misuse seems to strike the wrong balance for a healthy culture.

reply
> Of course there is value in promulgating safety guidelines.

I don't think we disagree.

reply
Guidelines on their own probably won't be taken too seriously.

But other things will:

- Liability rules

- Regulations that you get audited on (esp. for companies already heavily regulated, like banks, credit agencies, defense contractors, etc)

If you get the legal responsibility part right, then the education part flows from that naturally.

reply
Notwithstanding whether the guidelines will even be applicable to the quiet versions that get deployed when you aren't looking. It's a constant moving target, and none of the fanboys will even acknowledge the lack of discipline in it all. It's fucking mad. And I say this as one who can see utility in the tools. But not when they are constantly shifting their functionality and behaviour.

One day everything works brilliantly, the models are conservative with changes and actions and somehow nail exactly what you were thinking. The next day it rewrites your entire API, deploys the changes and erases your database.

If only there was intellectual honesty in it all, but money talks.

reply
> Do you see any value whatsoever in promulgating safety guidelines for humans to use the tool without hurting themselves or others?

Are all the tool's users required to be trained on your safety guidelines, and to use the tool in a context that reminds them they are responsible for following them?

Because if no, then no the guidelines are useless and are just an excuse to push blame from the toolmakers to the users.

reply
> It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

You mean like stopping at a red light?

reply
I would've been in several fewer wrecks if humans properly stopped at lights.
reply
Maybe. Traffic lights directly enforce social contracts

LLMs aren't so direct

reply
And people will speed, steal, kill, cheat - what of it? If you negligently run over someone in your self driving car you’re the one going to jail.
reply
I believe "AI safety" is a form of pulling up the ladder, or regulatory market capture.
reply
This is such an oddly fatalistic take, that humans cannot be influenced or educated to change how they see a thing and therefore how they act towards that thing.
reply
At the current price, people don't have to care if it's wrong. When you're paying $1/prompt, you had better hope it's accurate.
reply
I can see disagreeing, but people got off the roads and completely redesigned the places we live to optimize for mere machines called cars.

As long as it's easier for humans to adapt than for the machines, we will adapt.

reply
Kinda the whole point of Asimov's three laws was that even something so simple and obviously correct has subtle flaws.

Also the reason we're talking about this again is that machines are significantly less 'mere' than they were a few years ago, and we need to figure out how to approach this.

Agree that 'the computer effect' (if it doesn't already have a pithier name) results in humans first discounting anything that comes out of a machine, and then (once a few outputs have been validated and people start trusting the output) doing a full 180 and refusing to believe the machine could ever be wrong. However, to err is human and we have trained them in our image.

reply
It's very easy to anthropomorphise AI as soon as the damn bugger fucks up a simple thing once again.
reply
> It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

That's kind of what happens when you learn to program, isn't it?

I was eleven years old when I walked into a Radio Shack store and saw a TRS-80 for the first time. A different person left the store a couple of hours later.

reply
The entire business proposition for LLMs is that they will replace whole armies of [expensive] humans, hence justifying the biblical amount of CapEx. So of course there is strong incentive from the LLM creators to anthropomorphize them as much as possible. Indeed, they would never provide a model that was less human-like than what they have currently, even if it was more often correct and useful.
reply
I find it weird that this is the top voted comment.

As in, this comment is explaining exactly why the laws are useful.

reply
The article makes practical suggestions; you do not. This is just hand-wringing, abdication. Practically speaking this mentality will get us nowhere.
reply
It's kind of funny that he wrote them at a period in history when robots were already being used to aim artillery at human beings.
reply

  > It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines
I don't think it's insane; we do it all the time. Most tools require training to use properly, including tools that people use every day and think are intuitive. Take the can opener as an example (I'll leave it for you all to google and then argue in the comments).

The difference here is that this tool is thrust upon us. In that sense I agree with you that the burden of proper usage is pushed onto the user rather than incorporated into the design of the tool. A niche specific tool can have whatever complex training and usage it wants.

But a general-access, generally available tool doesn't have the luxury of allowing for inane usage. LLMs and agents are poorly designed, and at every level of the pipeline. They're so poorly designed that it's incredibly difficult to use them properly, and I'll generally agree with you that the rules the author presents aren't going to stick. The LLM is designed to encourage anthropomorphization. Usage highly encourages natural language, which in turn will cause anthropomorphism. The RLHF tuning optimizes for human preference, which does the same thing, and also produces behaviors like deception and manipulation alongside truthful answering (those results are not in contention even if they seem so at first glance).

But I also understand the author's motivation. The truth is, unless you're going full Luddite, you're going to be interacting with these machines. The truth is, the ones designing them don't give a shit about proper usage; they care more about whether humans believe the responses are accurate and meaningful than about whether the responses actually are accurate and meaningful[0]. So it's fucked up, but we are in a position where we're effectively forced to deal with this.

So really, I agree with you that this is insane.

> I don't have a proof, but I believe that "AI safety" is inherently impossible, a contradiction of terms

To paraphrase my namesake, there's no axiomatic system that is entirely self-consistent.

Though safety and security are rarely about ensuring all edge cases are impossible, but rather about bounding them. E.g. all passwords are hackable, but the failure mode is bounded such that cracking is effectively impossible, though not technically so. (And quantum algorithms do show how some of the assumptions break down with a paradigm shift. What was reasonable before no longer is.)

[0] This is part of a larger conversation where the economy is set up such that people who make things are not encouraged to make those things better. I specifically am avoiding the word "product" because the "product" is no longer the thing being built; it's the shareholder value. Just like how TV makers don't care much about making the physical device better but care much more about their spyware and ads. Or well... just look at Microsoft if you need a few hundred examples.

reply
It's as if the author hopes that enshrining these wishes in a law is going to makes a difference.
reply
Thank you. I'm glad to see this as the top comment.

My brother was recently visiting and we were talking about software engineers, and the humanities, and manners of understanding and being in the world,

and he relayed an interaction he had a few years ago with an old friend who at the time was part of the initial ChatGPT roll out team.

The engineer in question was confused as to

- why their users would e.g. take their LLM's output as truth, "even though they had a clear message, right there, on the page, warning them not to"; and

- why this was their (OpenAI's) problem; or perhaps

- whether it was "really" a problem.

At the heart of this are some complicated questions about training and background, but more problematically—given the stakes—about the different ways different people perceive, model, and reason about the world.

One of the superficial manners in which these differences manifest in our society is in terms of what kind of education we ask of e.g. engineers. I remain surprised, decades into my career, that so few of my technical colleagues had a broad liberal arts education, and how few of them are hence facile with the basic contributions of fields like philosophy of science, philosophy of mind, sociology, psychology (cognitive and social), etc., and how those relate in very real, very important ways to the work that they do and the consequences it has.

The author of these laws may intend them as aspirational, or otherwise as a provocation to thought, rather than prescription.

But IMO it is actively non-productive to make imperatives like these rules, which are, quite literally, intrinsically incoherent, because they attempt to import assumptions about human nature and behavior which are not just a little false, but so false as to obliterate any remaining value the rules have.

You cannot prescribe behavior without having as a foundation the origins and reality of human behavior—not if you expect them to be either embraced, or enforceable.

The Butlerian Jihad comes to mind not just because of its immediate topicality, but because religion is exactly the mechanism whereby, historically, codified behaviors which provided (perceived) value to a society were mandated.

Those at least however were backed by the carrot and stick of divine power. Absent such enforcement mechanisms, it is much harder to convince someone to go against their natural inclinations.

Appeals to reason do not meaningfully work.

Not in the face of addiction, engagement, gratification, tribal authority, and all the other mechanisms so dominant in our current difficult moment.

"Reason" is most often in our current world, consciously or not, a confabulation or justification; it is almost never a conclusion that in turn drives behavior.

Behavior is the driver. And our behavior is that of an animal, like other animals.

reply
> quite literally, intrinsically incoherent

There's nothing incoherent with these laws. This entire comment, however, is incoherent. So much so, I have no clue if there's a point being made in here.

> because they are attempt to import assumptions about human nature and behavior which are not just a little false, but so false as to obliterate any remaining value the rules have.

Nope. You must've read a completely different article.

[EDIT] I'll try to make this comment have a bit more substance by posing a question: how would you back up your claim about incoherence? What are the assumptions about human nature that are supposedly false?

reply
Do you consider all things broadly called "ethical" to be similarly a waste of time? Even if we lived in a world where everyone always behaved unjustly, because of some behavioristic/physical principle, don't you think we would still have an idea of justice as what we should do? Because an ethical frame is decidedly not an empirical one, right?

We don't just look around and take an average of what everyone is doing already and call that what is right, right? Whether you're deontological or utilitarian or virtue-ethical about it, there is still the idea that we can speak to what is "good" even if we can't see that good out there.

Maybe it is "insane" to expect meaning from something like this, but what is the alternative for you? OK, maybe we can't be prescriptive (people don't listen, are always bad, are hopeless wet bags, etc.), but still, that doesn't in itself rule out the possibility of the broad project that reflects on what is maybe right or wrong. Right?

reply
It's a tool. Nobody develops an inferiority complex and freaks out when they're taught how to use a shovel properly.
reply
> It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

Did you fully read the original thing? No demands were being made, or I didn't read it that way. It was simply a suggestion for a better way of interacting with AI, as it stated in the conclusion:

"I am hoping that with these three simple laws, we can encourage our fellow humans to pause and reflect on how they interact with modern AI systems"

Sure, (many/most) humans are gonna do what they're gonna do. They'll happily break laws. They'll break boundaries you set. Do we just scrap all of that?

Worth checking yourself here. It feels like you've set up a straw man.

> There is no finite set of rules that can constrain AI systems to make them "safe". I don't have a proof, but I believe that "AI safety" is inherently impossible, a contradiction of terms. Nothing that can be described as "intelligent" can be made to be safe.

If we want to talk about "disagree with this framing", to me this is the prime example. I'm struggling to read it as anything other than defeatist or pedantic (about the term "safe"). When we talk about something keeping us "safe", we're typically not saying something will be "perfectly safe". I think it's rare to have a safety system that keeps you 100% safe. Seat belts are a safety device that can increase your safety in cars, but they can still fail. Traffic laws are established (largely) to create safety in the movement of people and all the modes of transportation, but accidents still happen.

I'm not an expert on this topic, so I won't make any claims about these three laws and their impact on safety, but largely I would say they're encouraging people to think critically. I'd say that's a good suggestion for interacting with just about anything. And to be clear, "critical thinking" to me means being skeptical (/ actively questioning), while remaining objective and curious.

Not a real argument or anything, but I'm reminded of the episode of The Office where Michael Scott listens to the GPS without thinking and drives into the lake. The second law in the article would have prevented that :)

reply
[dead]
reply
The usefulness of an AI agent is that it can do everything you can do, so it's kind of inherently unsafe? You can't get the capabilities and also have safety easily.
reply
deleted
reply