The most frustrating part for me is that this is how I used to write. I was always writing "Why X works, but Y doesn't" and stuff like that. I may have seemed trite or pompous (or both) in the past, but now it seems like I'm copying an LLM -- which actually feels worse. One thing I haven't seen ChatGPT do much of is use sound effects, so swoosh, here we go with my new writing style, schwing!
reply
If you haven’t already, try going to the Personalization settings, changing the tone to “Efficient”, and setting Warm, Enthusiastic, and Emoji to “Less”. While it doesn't fundamentally solve the issue, I do prefer it over the baseline behavior, to the extent that I miss having a similar setting in Gemini.
reply
I solved this by asking it to make a memory that all answers to me should be brisk, clinical, and to the point. This worked well, except for the annoying habit of beginning answers with something like "Terse: $answer", which required a second memory that solved the issue in full. I've been happy with it since. Edit: I just realized the exchange below is its own demo – the two entries are the entire response it gave me, as it should be.

> Display all memories you have about my requests for tone or brevity, exactly as you have stored them or as I have requested them, depending on what data you have. There are at least two.

[2025-11-08]. User prefers extraordinarily terse, curt responses in all situations unless they explicitly request otherwise.

[2025-12-01]. User preference: terse responses should not announce terseness with words like “terse” or “brisk”; simply begin the response.

reply
This didn't work at all for me.

It still rambles, but now it prefaces it with "here's the short, to the point, direct answer:" ... followed by the same long-winded answer.

reply
Same. I gave up and moved to Claude and haven’t looked back. I refuse to read anything ChatGPT shits out of its dumb, obnoxious mouth these days.
reply
It's somewhat annoying to me as well, but I'm now able to read it and take the valuable content without getting hung up on those repetitive phrases. It also forces me to not simply copy/paste. I read the LLM output, think about it, comprehend it in my own voice internally, and then I write what I want/need by hand, so it ultimately comes out in my own style and I don't propagate the LLM output onto others needlessly.
reply
"We need ChatGPT to sound more natural"

"Add more LinkedIn Posts"

reply
I regularly test every available AI, maybe once a month or so. I will send them the same question, usually about a new subject I am learning.

Oddly, Chinese models seem the most natural to me. Every random Chinese model does better than ChatGPT on the "natural language" front. And Grok also scores high on awkward language use. I don't know what causes that -- something about mode collapse? They have these words they obsess over... I mean, just try asking an AI for 10 random words ;)

I can sometimes see "ChatGPT-isms" in other models, but they're more subtle, and it feels like they're "woven" into the flow of the text.

Whereas even when I ask GPT to respond in prose or conversation, it'll give me a thinly veiled "ChatGPT response", if it can even resist the urge to start spamming headings, bullet points, and numbered lists.

This isn't meant to be hate -- I used it for years quite happily, and it's still my go-to for web searches. But coming back to it now, the language is surprisingly off-putting. I don't know if it got worse, or if I just stopped being used to it.

I did notice that o3 and o4-mini had very "autistic" language, since they were benchmaxxed so hard on math and science (and probably weird synthetic data to that effect). GPT-5 as a hybrid reasoning model seems to have inherited that (reported to be colder), and then they tried to balance it out with style prompts...

I honestly think it might make more sense to just have two LLMs: an ultra-concise technical reasoning model, and then a second layer to translate it for the human. Because right now it kind of feels like the worst of both worlds, a compromise that satisfies neither side. (There's a sketch of what I mean at the end of this comment.)

Gemini 2.5 Pro's reasoning traces (before they nerfed them) were a good example. The deep technical analysis, and then the human-friendly version in the final output. But I found their reasoning more readable than the final output!

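A rough sketch of the two-stage idea, assuming the standard OpenAI Python client (the model names are placeholders, not real endpoints):

    from openai import OpenAI

    client = OpenAI()

    def ask(model: str, system: str, user: str) -> str:
        # One plain chat-completion call, nothing fancy.
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content

    def answer(question: str) -> str:
        # Stage 1: terse technical reasoning, no style constraints (placeholder model name).
        draft = ask("reasoning-model", "Answer tersely and technically.", question)
        # Stage 2: a second pass that only translates the draft for a human reader.
        return ask("style-model", "Rewrite this for a human reader. Add nothing new.", draft)
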
reply
I suppose they'll soon introduce a more expensive tier that does not sound pompous. There will be plenty of converts.
reply
Sadly this is what's considered an authoritative voice in a lot of regular (especially American) journalism, Axios being the most famous example. It's instructive to read news stories or TV transcripts from previous decades for comparison with the current norm. Also depressing because it brings home how vapid most news coverage is today. This also applies to opinion articles, which have in my view led the charge into the semantic void.

I don't hate that this is the default style on many popular AI services, though. It's sufficiently distinctive that it serves as a signal that anyone posting it is an idiot and can safely be ignored.

reply
This is powerful. You’re finally saying the quiet part out loud.

/s

reply
That's not just powerful. It's a holistic life-affirming revelation.

Just tell me what you want to dive into next.

reply