Also for coding: I often use prompts like "follow the structure of this existing feature as closely as possible".

This works, and models generally follow it, but it has a noticeable side effect: with this in the prompt, both Codex and Claude stop suggesting any refactors of the existing code at all, even small ones that are sensible and necessary for the new code to work. Instead they start proposing messy hacks to make the new code conform exactly to the old one.
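
One thing that might mitigate this is an explicit carve-out; something along these lines (the wording is just an illustration, not a recipe I've verified):

    Follow the structure of the existing feature as closely as possible.
    Exception: if a small refactor of the existing code is genuinely needed
    for the new code to work, propose that refactor explicitly instead of
    working around it with a hack.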

reply
deleted
reply
So, did your Claude switch from "You're absolutely right!" to "You're absolutely right." or was it deeper than that?
reply
I'd say it was a little deeper than that; it stopped conveying any kind of enthusiasm.
reply
Personally I think that is a good thing. I have asked all AIs not to show enthusiasm, not to use superlatives (e.g. "massive" is a Gemini favourite), and to stop using words that I guess come from consuming too many Silicon Valley-style investor slide decks ("risk", "trap", ...).

The AI has no soul, no mind, no feelings, no genuine enthusiasm... I want it to be pleasant to deal with, but I don't want it to try to fake emotions. Don't manipulate me. Maybe it's a different use case than yours, but I think the best AI is more like an interactive and highly specific Wikipedia, manual, or calculator. A computer.
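
To make that concrete, the kind of line I mean in the custom instructions is roughly this (paraphrased, not my exact wording):

    Keep a neutral, matter-of-fact tone. No enthusiasm, no superlatives
    ("massive", "game-changing"), no simulated emotion. Answer like a
    reference manual: state facts, give steps, flag uncertainty plainly.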

reply
I can appreciate that. I don't mind when models channel some personality; it can make whatever we are working on more interesting. I don't perceive it as manipulation. But it is nice that they are pretty good at sticking to instructions that don't call for nuance. I imagine if you tell it "you are a Wikipedia article", that is exactly the output you would get.
reply
I had put an example like "decision locked" in my CLAUDE.md, and a few days later 20 of Claude's responses had phrases built around it. I thought it was a more general model tic until I had Claude look into it.
reply
It is funny how that works. I've been able to trace strangeness in model output back to my own instructions on a few different occasions. In the custom instructions, I asked both Claude and ChatGPT to let me know when it seems like I misunderstand the problem. Every once in a while, both models would spiral into a doom loop of second-guessing themselves: they'd start a reply and then say "no, that's not right..." several times within the same reply, like a person who has suddenly lost all confidence.

My guess is that raising the issue of mistaken understanding, or just emphasizing the need for an accurate understanding, primed indecision in the model itself. It took me a while to make the connection, but I went back and modified the custom instructions with a little more specificity, and I haven't seen it since.
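
To give a rough idea of the kind of change I mean (not my exact wording): going from a broad "let me know when it seems like I misunderstand the problem" to something more bounded, e.g.:

    If my description of the problem conflicts with the code or the error
    output, point out the specific mismatch once, then continue with your
    best interpretation. Don't second-guess yourself mid-reply.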

reply