Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.
I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author, you know what the text is supposed to say, which makes it difficult to read what you actually wrote and spot those mistakes.
> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
This is not at all what is implied by having an AI act as an editor. It means identifying misplaced commas, incorrect subject-verb agreement (e.g., singular/plural mismatches), and incomplete ideas left in as sentence fragments.
You appear to be implying that the author is ceding the creation of the content to the AI, rather than using it as a tool that acts as a super-charged Grammarly.
Yes, and these people are good at it. What’s your point?
If you need grammar checking, there are thousands of tools, including word processors, web browsers, and even most mobile devices, that will check your input for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.
In the time before LLMs, for some of my occasional blog posts I'd first post the draft to whatever messaging system my colleagues used and ask them to read over it. Identifying that "this word is confusing in this context" or "you're using jargon here that I'm unfamiliar with" is helpful. There are also stylistic items of "this sentence goes on for far too many words and thoughts without making a single punctuation mark indicating where it is complete or delineating two or more different ideas leading the reader to have to keep back tracking the thought to try to keep it all in their mind which can be confusing and makes it more difficult to read."
Proofreading tools pick up some typos and punctuation errors in that previous bit. https://imgur.com/a/oqqoEGV None of them called out its structure.
Compare with https://chatgpt.com/share/69cb180e-2090-832f-838e-896a3cab4e... ... which did call it out.
> The overly long example sentence introduces unintended humor or self-parody, which may dilute the seriousness of the point.
Now, one could argue that taking its advice on the structure, and on the arguments I have incompletely formulated, would change the tone of my writing. However, any changes that I make are changes that I intend to make, not the result of the LLM rewriting my words. My thesis remains intact.
There are plenty of pre-LLM tools that can fix grammar issues.
> Can you please share what and how gets degraded?
I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:
https://www.artfido.com/this-is-what-the-average-person-look...
Being definitionally average, the results are not bad, but they are also entirely unremarkable, bland, milquetoast. Whether this result is a degradation will vary, of course, as some people write a lot worse than bland.
(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)
This sounds like an ESL issue. LLMs are good at proofreading English text written by ESL speakers. They are not as good at proofreading experienced English writers.