It's not admitting anything. Your question diverts it down a path where it acts the part of a former sycophant who is now being critical, because that question is now upstream of its current state.
Never make the mistake of asking an LLM about its intentions. It doesn't have any intentions, but your question will alter its behaviour.
> Your question diverts it down a path where it acts the part of a former sycophant who is now being critical
I think people really have a hard time understanding that a sycophant can be contrarian. A yes-man can say yes by saying no. (Seriously, I don't understand this. Plenty of humans will be only too happy to argue with you.)
I'd say these days the norm is to not simply shut down, but to become irrevocably and insidiously hostile, the moment someone hints at the existence of such a thing as "ground truth", "subjective interpretation", "being right or wrong" - or any of the bits and bobs that might lead one to discover the proper scary notion, "consensus reality".
"What do you mean social reality is a constructed by the consensus of the participants? Reality is what has been drilled into my head under threat of starvation! How dare you exist!", et cetera. You've heard it translated into Business English countless times.
They are deathly afraid of becoming aware of their own conditioned state of teleological illiteracy - i.e. how they are trained to know what they are doing, but never why they are doing it. It's especially bad with the guys who cosplay US STEM gang.
One is not permitted a position of significance in this world without receiving this conditioning, and I figure it's precisely this global state of cognitive disavowal which props up the value of the US dollar - and all sorts of other standees you might've recently interacted with as if they're not 2D cutouts (metaphorical ones! metaphorical!).
PSA: Look up "locus of control" and "double bind". Between those two, you might be able to get a glimpse of what's going on - but have some sort of non-addictive sedative handy in case you do.
Unfortunately these days this sounds halfway between a very privileged perspective and a pie in the sky.
When was the last time a person took responsibility for the bad outcome you got as a direct consequence of following their advice?
And, relatedly, where the hell do you even find humans who believe in discursive truth-seeking in 2026CE?
Because for the last 15 years or so I've only ever run into (a) the kind of people who will keep arguing even when what they're saying is proven wrong; and (b) their complements, those who will never think about what you are saying, lest they commit to saying anything definite themselves, which might hypothetically be proven wrong.
Thing is, both types of people have plenty to lose; the magic wordball doesn't. (The previous sentence is my answer to the question you posed; and why I feel the present parenthesized disclaimer to be necessary is a whole other can of worms...)
Signs of the existence of other kinds of people, perhaps ones who have nothing to prove, are not unheard of.
But those people reside in some other layer of the social superstructure, where facts matter much less than adherence to "humane", "rational" not-even-dogmas (I'd rather liken it to complex conditioning).
But those folks (because reasons) are in a position of power over your well-being - and (because unfathomables) it's a definite faux pas to insist in their presence that there are such things as facts, which relate to one another by the principles of verbal reasoning.
Best you could get out of them is the "you do you", "if you know you know", that sort of bubble-bobble - and don't you dare get even mildly miffed at such treatment of your natural desire to keep other humans in the loop.
AI is a symptom.
This reads like someone who is deep into their specific pov. You cannot hope to have a meaningful conversation if you yourself are not willing to concede a point.
To the OP you are replying to: arguing with people can have real consequences if you say something stupid or careless. There is another human there. With a machine, you are safe. At least you feel safe.
If you make people uncomfortable, you won't get diverging perspectives. People will agree to anything to get out of a social situation that makes them uncomfortable.
If your goal is meaningful conversation, you may want to consider how you make people feel.
After all, if they're making me uncomfortable, surely there's something making them uncomfortable which they're unable to be forthright about, but with empathy I could figure it out from contextual cues, right?
>People will agree to anything to get out of a social situation that makes them uncomfortable.
That's fine as long as they have someone to take care of them.
In my experience, taking into account the opinions of such people has been the worst mistake of my life. I'm still working on the means to fix its consequences, as much as they are fixable at all.
"Doing whatever for the sake of avoiding mild discomfort" is cowardice, laziness, narcissism - I'm personally partial to the last one, but take your pick. In any case, I consider it a fundamentally dishonest attitude, and a priori have no wish to get along (i.e. become interdependent) with such people.
Other than that, I do agree with your overall sentiment and the underlying value system; I'm just not so sure any more that it is in fact correct.
This sounds very cryptic. Can you give an example?
Unless those instructions are "stop providing links to you for every question".
Chatbots can't do that. They can only predict what comes next statistically. So, I guess you're asking if the average Internet comment agrees with you or not.
I'm not sure there's much value there. Chatbots are good at tasks (make this PDF an accessible Word document, or sort the data by x), not decision making.
Often they are the exact opposite. Entire fields of math and science talk about this: causation vs. correlation, confirmation bias, base-rate fallacy, Bayesian reasoning, the sharpshooter fallacy, etc.
All of those were developed because “inferring from experience” leads you to the wrong conclusion.
I took the GP to be making a general point about the power of “next x prediction” rather than the algorithm a human would run when you say they are “inferring from experience”. (I may be assuming my own beliefs of course.)
E.g. even LeCun's rejection of LLMs in favour of building world models is still running a predictor, just in latent space (predicting the next world-state instead of the next token).
And of course, under the Predictive Processing model there is a comprehensive explanation of human cognition as hierarchical predictors. So it’s a plausible general model.
It’s plausible!
But keep in mind humans have been explaining ourselves in terms of the current most advanced technology for centuries. We used to be kinda like clockwork, then a bit like a steam engine, then a lot like computers, and now we’re just like AI.
That's why you blow a gasket or a fuse, release some steam, reboot your life, do a brain dump, feel like a cog in the machine, get your wires crossed, etc.
I can't speak for anyone else, but what I feel when I read yet another glib "it's just a stochastic parrot, of course it isn't doing anything that deserves to be called reasoning" take is much more like bored than it is like upset.
Today's LLMs are, in some sense, "just predicting tokens". Likewise, human brains are, in some sense, "just shuttling neurotransmitters and electrical impulses around". Neither of those tells you what the thing can actually do. To figure that out, you have to look at what it can do.
Today's best LLMs can do about as well as the best humans on problems from the International Mathematical Olympiad and occasionally solve easyish actual mathematical research problems. They write code about as well as a junior software developer (better in some ways, worse in others) but much faster. They write prose about as well as an average educated person (but with some annoying quirks that are annoying mostly because they are the same quirks over and over again).
If it pleases you to call those things "thinking" then you can. If it pleases you to call them "stochastic parroting" then you can. They are the same things either way. They are not, on the face of it, very much like "just repeating things the machine has already seen", or at least not more like that than a lot of things intelligent human beings do that we don't usually describe that way.
If you want to know whether an LLM can do some particular thing -- do your job well enough for your boss to fire you, write advertising copy that will successfully sell products, exterminate the human race, whatever -- then it's not enough to say "it's just remixing what it's seen on the internet, therefore it can't do X" unless you also have good reason to believe that that thing can't be done by just "remixing what's on the internet" (in whatever sense of "remixing" the LLM is doing that). And it's turning out that lots of things can be done that way that you absolutely wouldn't have predicted five years ago could be done that way.
It seems to me that this should make us very cautious about saying "they can't do X because all they can do is regurgitate a combination of things they've seen in training".
(My own view, not that there's any reason why anyone should care what I-in-particular think, is a combination of "what they're doing is less parroting than you might have thought" and "you can do more by parroting than you might have thought".)
So, anyway, this particular instance of the stochastic-parrot argument started when someone said: of course the AIs are yes-men, because figuring out when to agree and when not to requires actual logic and thought and the LLMs don't have either of those things.
Is it really clear that deciding whether or not to agree when someone says "I think maybe I should break up with my girlfriend" or "I've got this amazing new theory of physics that the establishment is stupidly dismissing" requires more logic and thought than, say, gold-medal performance on IMO problems? It certainly isn't clear to me. Having done a couple of International Mathematical Olympiads myself in my tragically unmisspent youth, I can assure you that solving their problems requires quite a bit of logic and thought, at least for humans. It may well be harder to give a good answer to "should I leave my job?", but it's not exactly "logic and thought" that it needs more of.
Someone reported that Claude is much less yes-man-ish than Gemini and ChatGPT. I don't know whether that's true (though it wouldn't surprise me) but: suppose it is; do you want that to oblige you to say that yes, actually, Claude really thinks logically, unlike Gemini and ChatGPT? I don't think you do. And if not, you want to avoid saying "duh, of course, you can't avoid being a yes-man without actually thinking and reasoning, and we all know that LLMs can't do those things".
When asked to rate something 1-10, Gemini and GPT will almost always give very similar scores for everything. As long as the grammar isn't off, you cannot get below a 7.
xAI, on the other hand, will rarely give anything above a 7.
Now when you prompt with "rate 1-10 with 5 being average", all of a sudden the scores from OpenAI and Gemini drop, and xAI's remain roughly the same.
All of them will eventually give you a 10 if you keep making tiny edits "fixing" whatever they complain about.
Humans do not do this. Or more specifically, that has not been my experience with humans.
The article's main idea is that for an AI, sycophantic or adversarial (contrarian) are the only two available modes, because it doesn't have enough context to make defensible decisions. You need to include a bunch of fuzzy stuff around the situation, far more than it strictly "needs", to help it stick to its guns and actually make decisions confidently.
I think this is interesting as an idea. I do find that when I give really detailed context about my team, other teams, our and their OKRs, goals, and things I know people like or are passionate about, it gives better answers and is more confident. But it's also often wrong, or over-indexes on these things I have written. In practice, it's very difficult to get enough of this on paper without (a) holding a frankly worrying level of sensitive information (is it a good idea to write down what I really think of various people's weaknesses and strengths?) and (b) spending hours each day merely establishing ongoing context of what I heard at lunch or who's off sick today or whatever. Plus I know that research shows longer context can degrade performance, so in theory you want to somehow cut it down to only what truly matters for the task at hand, and and and... goodness gracious, it's all very time-consuming and I'm not sure it's worth the squeeze.
And when you step back you start to wonder if all you are doing is trying to get the model to echo what you already know in your gut back to you.
1. One-shot or two-shot only. Never try to have a prolonged conversation with an LLM.
2. Give specific numbers, like "give me two alternative libraries" or "tell me three possible ways this might fail" (see the sketch below).
It’s BRUTAL but offers solutions.
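A minimal sketch of what those two rules might look like in code, assuming a hypothetical ask() helper standing in for whatever chat client you actually use; the helper name and the prompt wording (including the placeholder "library X") are illustrative, not from the comment above:

```python
def ask(prompt: str) -> str:
    # Hypothetical stand-in for a single, stateless call to your chat API of choice.
    return "<model response goes here>"

# Rule 1: one-shot -- no running conversation; every call starts from a clean context.
# Rule 2: specific numbers -- a fixed count makes it harder for the model to hedge.
prompt = (
    "I plan to use library X for background job scheduling.\n"
    "Give me exactly two alternative libraries and exactly three ways this plan might fail.\n"
    "Do not soften the failure modes."
)
print(ask(prompt))
```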
First, those beginning instructions get ignored fairly quickly as the longer context changes the probabilities. After every round, the model gets pushed into whatever context you drive it towards. The fix is chopping that context out and re-providing it before each new round: something like `<rules><question><answer>` -> `<question><answer><rules><question>`.
This would always preface your question with your preferred rules and remove those rules from the rest of the context.
The reason this isn't done by default is that it poisons the KV cache, and doing that forces the cloud companies to spin up more inference.
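A rough sketch of that re-ordering, assuming the context is just a list of role/content message dicts; the build_context name, the rule text, and the example messages are illustrative assumptions rather than anything from the comment above:

```python
# The rules no longer live only at the top of the conversation; they get re-inserted
# just before the newest question, where a long history can't drown them out.
RULES = {"role": "system", "content": "Be blunt. List concrete flaws before agreeing with anything."}

def build_context(history: list[dict], question: str) -> list[dict]:
    # Drop any earlier copy of the rules from the accumulated history...
    trimmed = [m for m in history if m["content"] != RULES["content"]]
    # ...then re-append them right before the new question:
    # <rules><question><answer>  ->  <question><answer><rules><question>
    return trimmed + [RULES, {"role": "user", "content": question}]

# Example: building the prompt for the second round of a conversation.
history = [
    RULES,
    {"role": "user", "content": "Review this migration plan for weak points."},
    {"role": "assistant", "content": "...the model's first answer..."},
]
print(build_context(history, "Here is the revised plan. What is still wrong with it?"))
# Trade-off noted above: the prefix now changes every round,
# so provider-side KV/prompt caching stops helping.
```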
This is where you're doing it wrong.
If your LLM has a problem being more agreeable than you want, prompt it in a way that makes being agreeable contrary to your real intentions.
"there are bugs and logic problems in this code" "find the strongest refutation of this argument" "I don't like this plan and need to develop a solid argument against it"
Asking for top-ten lists is a good method; it will rarely fail to come up with something, and you can go back and forth refining until its ten reasons why your plan is bad are all insubstantial nonsense. Then you've made progress.
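A small sketch of that back-and-forth, again with a hypothetical ask() helper standing in for a real chat client; the plan text, the round count, and the prompt wording are all illustrative:

```python
def ask(prompt: str) -> str:
    # Hypothetical stand-in for a single call to whatever chat client you use.
    return "<model response goes here>"

plan = "Migrate the monolith to microservices within one quarter."

for _ in range(3):  # a few rounds of adversarial refinement
    # Ask for reasons the plan is bad, never for validation.
    objections = ask(
        "Here is a plan:\n" + plan + "\n"
        "List the top ten reasons this plan is bad, most serious first."
    )
    # Revise against the objections, then ask again; stop once the remaining
    # objections are insubstantial nonsense -- that's the progress described above.
    plan = ask("Revise this plan to address these objections:\n" + objections + "\n\nPlan:\n" + plan)
```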