> In the same way, using GPT-5 is now unbearable to me, as it almost always starts its responses with things like "Great question".
User preference data is toxic. Doing RLHF on it gives LLMs sycophancy brainrot. And by now, all major LLMs have it.
At least it's not 4o levels of bad - hope they learned that fucking lesson.
OpenAI are in a difficult position when it comes to global standards. It's probably easier to see from outside the United States, because the degree to which historical puritanism has influenced everything there is remarkable. I remember the release of the Watchmen film and being amazed at how pervasive the preoccupation with a penis was in the media coverage.
Imagine if we woke up tomorrow morning and grep refused to process a file because it contained "morally objectionable" content (objectionable as defined by the authors of grep). We would rightly call that a bug, and someone would have a patch ready by noon. Imagine if vi refused to save a file because you had written something political. Same thing. Yet, for some reason, we're OK with this behavior from "certain" software?
None of the templates included with, e.g., Word were for smut.
Word allowed you to type in smut, but it didn't produce smut that the user hadn't written. For earlier enterprise software, the question of whether the program itself should generate such content never really arose.
So… I don’t think it’s obvious that “Word lets you type in smut” implies “ChatGPT should produce smut if you ask it for smut.”
I guess precedent might imply “if you write some smut and ask it to fix the grammar, it shouldn’t refuse on the basis of what you wrote being smut”?
Photoshop, MS Word.