For GPT at least, a lot of it is because "DO NOT ASK A CLARIFYING QUESTION OR ASK FOR CONFIRMATION" is in the system prompt. Twice.

https://github.com/Wyattwalls/system_prompts/blob/main/OpenA...

reply
So is this system prompt always there, whether I'm using ChatGPT or Azure OpenAI with my own provisioned GPT? This would explain why ChatGPT is a joke for professionals, where asking clarifying questions is the core of professional work.
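
For context, when I call Azure OpenAI with my own deployment, I supply the system message myself, roughly like this (endpoint, API version, and deployment name below are placeholders):

    from openai import AzureOpenAI

    # Placeholder credentials/deployment; with the raw API, the only
    # system message in the request is the one I send below.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="...",
        api_version="2024-06-01",
    )
    resp = client.chat.completions.create(
        model="my-gpt-deployment",
        messages=[
            {"role": "system",
             "content": "Ask a clarifying question whenever the request is ambiguous."},
            {"role": "user", "content": "Draft the usual report."},
        ],
    )
    print(resp.choices[0].message.content)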
reply
It's interesting how much focus there is on 'playing along' with any riddle or joke. This gives me some ideas for my personal context prompt to assure the LLM that I'm not trying to trick it or probe its ability to infer missing context.
reply
Are these actual (leaked?) system prompts, or just "I asked it what its system prompt is and here's what it made up"?
reply
Out of curiosity: when you add custom instructions client-side, does it change this behavior?
reply
It changes some behavior, but there are things that are frustratingly difficult to override. The GPT-5 version of ChatGPT really likes to add a bunch of suggestions for next steps at the end of every message (e.g. "if you'd like, I can recommend distances where it would be better to walk to the car wash and ones where it would be better to drive; let me know what kind of car you have and how far you're comfortable walking") and really loves bringing up resolved topics repeatedly (e.g. if you follow up the car wash question with a gas station question, every message will mention the car wash again, often confusing the two topics). Custom instructions haven't been able to correct these for me so far.
reply
For Claude at least, I have been getting more clarifying questions after adding some custom prompts. It still makes some assumptions, but asking questions makes me feel more in control of the process.

In terms of the behavior, technically your prompt doesn't override the system prompt; think of it as a nudge. Both the system prompt and your custom prompt participate in the attention process, so the output tokens get some influence from each, not equally, but to varying degrees.
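
As a toy illustration (made-up vectors, not a real model), both prompt segments sit in the same context, so attention gives every token some nonzero weight:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Rows 0-1 stand in for system-prompt tokens, rows 2-3 for
    # custom-prompt tokens (illustrative vectors only).
    context = np.array([[1.0, 0.0],
                        [0.9, 0.1],
                        [0.0, 1.0],
                        [0.1, 0.9]])
    query = np.array([0.6, 0.4])

    weights = softmax(context @ query)  # nonzero weight on every token
    output = weights @ context          # the output blends both prompts
    print(weights, output)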

reply
It does. Just put it in the custom instructions section.
reply
Not for me, at least with ChatGPT. I am slowly moving to Gemini due to ChatGPT uptime issues. I will try it with Gemini too.
reply
"If you're unsure, ask. Don't guess." in prompts makes a huge difference, imo.
reply
I have that in my system prompt for ChatGPT and it almost never makes a difference. I can count on one hand the number of times it's asked in the past year, unless you count the engagement-hacking questions at the end of a response.
reply
In general, spitting out a scrollbar of text in response to a simple question you've misunderstood is not, in any real sense, a "chat".
reply
I use models through OpenRouter, and I only have this problem with OpenAI models. That's why I don't use them.
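
(For what it's worth, OpenRouter exposes an OpenAI-compatible endpoint, so switching vendors is just a model string; the slug below is only an example:)

    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="...",
    )
    resp = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # example model slug
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)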
reply
The way I see it, the long game is to have agents in your life that memorize and understand your routine and your facts, more and more. Imagine having an agent that knows about cars, and more specifically your car: when the checkups are due, when you washed it last, etc. Another one knows more about your hobbies, another knows more about your XYZ, and so on.

The more specific they are, the more accurate they typically are.
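
A rough sketch of the idea, with a hypothetical "car agent" whose stored facts get prepended to every request (CAR_FACTS, ask_car_agent, and the facts themselves are all made up for illustration):

    from openai import OpenAI

    client = OpenAI()
    CAR_FACTS = [  # hypothetical persistent memory for this one agent
        "Car: 2019 Honda Civic",
        "Last wash: 2024-05-02",
        "Next checkup due: 2024-09-15",
    ]

    def ask_car_agent(question: str) -> str:
        memory = "Known facts about the user's car:\n" + "\n".join(CAR_FACTS)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": memory},
                      {"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    print(ask_car_agent("Am I due for a wash or a checkup?"))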

reply
To really understand deeply and at scale, I feel we would need models with changing weights, and everyone would have their own so it could truly adjust to the user. Right now we have a chunk of context that the model may or may not use properly if it gets too long. But then again, how do we prevent it from learning the wrong things if the weights are adjusting?
reply
In principle you're right, but these things can probably get 60-70% of the job done. The rest is up to "you". Never rely on it blindly, as we keep being told... :)
reply