- This is subjective, and the evidence seems to point to the opposite in my view. In reality, most people who think they communicate better with AI don't actually read what the AI has written for them; they just puke it out on the world, expecting their readers to do the work.
The AI almost always writes boring, repetitive garbage and very, very often includes redundant information. But saying it creates more efficient communication is a great excuse for being sloppy and lazy.
I have deep knowledge of the information and have done this process on two previous projects, but organizing all the stories would have been an absolute nightmare. I still spent half a day on this; I'd guess the fatigue from the boring parts would have stretched it to a week or maybe two. Instead, I did the parts I enjoy (knowing things and describing them) and offloaded the parts I'm not great at (using a lot of boilerplate language to organize the info I knew into scrum stories). Then I had a meeting, reviewed the stories with my coworkers, and we had a discussion: we deleted two or three stories we determined weren't necessary and fixed up one or two where I'd provided insufficient information about some context surrounding coloring of a page.
It burned through a ton of Opus 4.6 tokens and looked through a ton of code (mostly code I'd written, pre-LLM), but it has been amazing for helping me move into a lead position, where grooming stories and staying organized has always been my weakest point.
Also, when I wrote a postmortem for a deploy that had some issues, I wrote it all by hand. You have to know when the tools help and when they will hinder.
Can you please share what and how gets degraded? Sometimes I don't like a phrase it selects, but it's not common
Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.
I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author, you know what the text is supposed to say, and that makes it difficult to reread what you actually wrote and find those mistakes.
> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
This is not at all what is implied by having an AI act as an editor. It means identifying misplaced commas, incorrect subject-verb agreement (e.g., number mismatches), and incomplete ideas left in as sentence fragments.
You appear to be implying that the author is ceding the agency to create the content to the AI, rather than using it as a tool: a super-charged Grammarly.
Yes, and these people are good at it. What’s your point?
If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.
In the time before LLMs, for some of my occasional blog posts I'd first post it to whatever messaging system my colleagues used and ask them to read over it. Identifying that "this word is confusing in this context" or "you're using jargon here that I'm unfamiliar with" is helpful. There's also stylistic items of "this sentence goes on for far too many words and thoughts without making a single punctuation mark indicating where it is complete or delineating two or more different ideas leading the reader to have to keep back tracking the thought to try to keep it all in their mind which can be confusing and makes it more difficult to read."
Proofreading tools pick up some typos and punctuation errors in that previous bit. https://imgur.com/a/oqqoEGV None of them called out its structure.
Compare with https://chatgpt.com/share/69cb180e-2090-832f-838e-896a3cab4e... ... which did call it out.
The overly long example sentence introduces unintended humor or self-parody, which may dilute the seriousness of the point.
Now, one could argue that taking its advice on the structure and on the arguments I formulated incompletely would change the tone of my writing. However, any changes I make are changes I intend to make, not the result of the LLM rewriting my words. My thesis remains intact.
There are plenty of pre-LLM tools that can fix grammar issues.
> Can you please share what and how gets degraded?
I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:
https://www.artfido.com/this-is-what-the-average-person-look...
As definitionally average the results are not bad but they are also entirely unremarkable, bland, milquetoast. Whether or not this result is a degradation will vary, of course, as some people write a lot worse than bland.
(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)
This sounds like an ESL issue. LLMs are good at proofreading ESL-written English text. They are not as good at proofreading experienced English writers.
- spelling
- grammar, or weird grammar, as English is not my native language
- proofreading and finding things that do not make sense in terms of sentence structure
I do not use it for ideas, discussing the writing, or anything else, because that defeats the purpose of writing it myself (creative writing).