Of course he was right! By a long shot. I asked Gemini the same thing, but as a very open-ended question, and it answered with basically what I was saying.
LLMs are pretty dangerous at confirming your own distorted view of the world.
This is why I've turned off Claude/ChatGPT's ability to use other conversations as context. I allow memories (which I have to check/prune regularly) but not reading other conversations; there's just too high a chance of poisoning or biasing the context.
Once I switched to a new chat to confirm an assumption and the LLM said "Yes, and your error confirms that..." but I hadn't sent the error to that chat. At that point I had to turn it off; I open a new chat specifically to get "clean" context. I wish these platforms gave more tools to toggle that, and offered "private" chats (no memories, no system prompt edits) as well (some do, I know).
Obviously, context poisoning from other chats is not what happened in your case, but it's in the same "class" of issue, "leading the witness". I think about "leading the witness" _constantly_ while using LLMs. I often will not give it all the context or all of what I'm thinking; I want to see if it independently gets to the same place. I _never_ say "I'm considering X" when presenting a problem because I've seen it latch onto my suggestion too hard, too often.
Or, more cynically, the goal is to give you the answer that makes you use the product more. Finetuning is really diverging the model from what's in the training set and towards what users "prefer".
I don't dispute that, but man, that is some shitty marriage. Even rather submissive guys are not happy in such a setup, not at all. Remember, it's supposed to be for life or divorce/breakup, nothing in between.
A lifelong situation like that... why don't folks do more due diligence on the most important aspect of long-term relationships - personality match? It's usually not rocket science: observe behavior in conflicts, don't desperately appease in situations where one is clearly not to blame. Masks fall off quickly in heated situations, when people are tired, and so on. It's not perfect, but it's pretty damn good and covers >95% of the scenarios.
Sycophantic agreement certainly is, as is lying, manipulation, abuse, gaslighting.
Those aren't the good parts of life.
Those aren't the parts I want the machine to do to people on a mass scale.
>You may even struggle to stay married if you don't learn to confirm your wife's perspectives.
Sorry what?
The important part is validating the way someone feels, not "confirming perspectives".
A feeling or a perspective can be valid ("I see where you're coming from, and it's entirely reasonable to feel that way"), even when the conclusion is incorrect ("however, here are the facts: ___. You might think ___ because ____, and that's reasonable. Still, this is how it is.")
You're doing nobody a favor by affirming they are correct in believing things that are verifiably, factually false.
There's a word for that.
It's lying.
When you're deliberately lying to keep someone in a relationship, that's manipulation.
When you're lying to affirm someone's false views, distorting their perception of reality - particularly when they have doubts, and you are affirming a falsehood, with intent to control their behavior (e.g. make them stay in a relationship when they'd otherwise leave) -
... - that, my friend, is gaslighting.
This is exactly what the machine was doing to the colleague who asked "which of us is right, me or the colleague that disagrees with me".
It doesn't provide any useful information, it reaffirms a falsehood, it distorts someone's reality and destroys trust in others, it destroys relationships with others, and encourages addiction — because it maximizes "engagement".
I.e., prevents someone from leaving.
That's abuse.
That, too, is a part of life.
>I agree with your conclusion, but that's by design
All I did was name the phenomena we're talking about (lying, gaslighting, manipulation, abuse).
Anyone can verify the correctness of the labeling in this context.
I agree with your assertion, as well as that of the parent comment. And putting them together we have this:
LLM chatbots today are abusive by design.
This shit needs to be regulated, that's all. FDA and CPSC should get involved.
I use LLMs every day - Claude, Gemini - and they're great. But they are very elaborate autocomplete engines, and I'm not really shaking off that impression of them despite daily use.
Maybe they can also be smart. I'm skeptical that the current LLM approach can lead to human-level intelligence, but I'm not ruling it out. If it did, then you'd have human-level intelligence in a very elaborate autocomplete. The two things aren't mutually exclusive.
I've talked with my family about LLMs, and I think I've conveyed the "it's a box of numbers" idea, but I might need to circle back. Just to set some baseline education, specifically to guard against this kind of "psychosis". Hopefully I would notice the signs well before it got to a dangerous point, but with LLMs you can go down that rabbit hole quickly, it seems.
I think it really helps to have them ask questions in a domain where they are an expert, and see what it says. Expose them to "The Plumber Problem" [0]. Honestly, I think seeing it be wrong so often in code, or about the project I'm using it for, is what keeps me "grounded": the constant reminders that you have to stay on top of it and can't blindly trust what it says. I'm also glad I used it in the earlier stages, when it was even "stupider"; it's better now, but the fundamental issues still lurk and surface regularly, if less regularly than a year or two ago.
Longer term, I dunno if statistics or "fits the shape of what a response might look like" is the right way of thinking about it either, because what's actually happening might change out from under you. It's possible that, given enough parameters, anything humans care about is separable. The process of discovering those numbers and the numbers themselves are different things.
"It's a collection of warehouses of computers where the system designers gave up on even making a system diagram, instead invoking the cloud clipart to represent amorphous interconnection."
My wife: So, like a doberge cake?
Me: Yes, exactly! In fact if you look at the diagram of a neural net, that's exactly what it looks like.
In our household, AI is officially "the Doberge Cake of Statistics". It really sticks in my wife's mind because she loves doberge cake, but hates statistics.
> Nontechnical people simply don't have any idea about what LLMs are.
We're on HN, a highly technical corner of the internet, yet we see the same stuff here. It's not just non-technical people... I think one of the big dangers is that people (including us) are quick to believe "I'm better than that". Yet this is a bias conmen have been exploiting for millennia.
The only real defense is not lulling yourself into a false sense of security. You're less vulnerable (not invincible) by knowing that you, too, can be fooled.
Honestly, it's just a good way to go about getting information. There's a famous Feynman quote about it too: "The first principle is that you must not fool yourself, and you are the easiest person to fool."
Precisely. Even for technical people, I doubt it's possible to totally prevent your own brain from ever, unconsciously, treating the entity you're speaking to like a sentient being. Most technical people still will have some emotion in their prompts, say please or thank you, give qualitative feedback for no reason, express anger towards the model, etc.
It's just impossible to separate our capacity for conversation from our sense that we're actually talking to "someone" (in the vaguest sense).
> Most technical people still will have some emotion in their prompts, say please or thank you, give qualitative feedback for no reason, express anger towards the model, etc.
Worse, models often perform better when you use that natural language, because that's the kind of language they were trained on. I say worse because by speaking that way to them you will also naturally humanize them.
(As an ML researcher) I think one of the biggest problems we have is that we're trying to make a duck by making an animatronic duck indistinguishable from a real duck. In some sense this makes a lot of sense, but it also only allows us to build a thing that's indistinguishable from a real duck to us, not indistinguishable from a real duck to something/someone else. It seems like a fine point, but the duck test only allows us to conclude that something is probably a duck, not that it is a duck.
We need to be very, very careful here. Just like advertisements work whether you think you're immune or not, so does AI. You might think you're spotting every red flag, but of course you think so. You can't see all the ones you missed.
Do not make the mistake of thinking that being techy somehow immunizes you from flattery. It works on you too.
That's some extreme punching down. Have you not observed the marketing these LLM companies are themselves producing? They're intentionally misleading the public as to the capabilities of these systems.
> if people were able to casually not anthropomorphize LLMs
Of course they can. You just need to train them appropriately. No one is doing that. Companies are busy running around talking about the "end of coding" or the "end of work" because some extremely chintzy LLM models are around that they want to _sell you_.