Well, for one example, it inhibits your desire to improve against those very blind spots. In exchange, your audience gets 3-4x the length in normalized bullshit to read instead.
reply
AI can take a rough draft, clean it up, and shorten it as much as you want. The suggestions very often expose ambiguities in the original text. If you think the LLM got it wrong, it's almost always the LLM over-reading some feature of the original that you failed to catch, which is precisely what you'd want out of a proofreader.

Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning that went into highly identifiable personal styles, and that often go missing from LLM-edited work.

reply
> Well, for one example, it inhibits your desire to improve against those very blind spots.

I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author, you know what the text should say, which can make it difficult to read what you actually wrote and spot those mistakes.

> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.

This is not at all what is implied by having an AI act as an editor. Acting as an editor means identifying misplaced commas, incorrect subject-verb agreement (e.g. singular/plural mismatches), and incomplete ideas left in as sentence fragments.

You appear to be implying that the author is giving the AI agency to create the content, rather than using it as a tool: a super-charged Grammarly.

reply
> Even professional authors go to an editor who identifies things that need to be fixed.

Yes, and these people are good at it. What’s your point?

If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.

reply
I believe you are confusing what an editor does with proofreading.

In the time before LLMs, for some of my occasional blog posts I'd first post the draft to whatever messaging system my colleagues used and ask them to read over it. Identifying that "this word is confusing in this context" or "you're using jargon here that I'm unfamiliar with" is helpful. There's also stylistic items of "this sentence goes on for far too many words and thoughts without making a single punctuation mark indicating where it is complete or delineating two or more different ideas leading the reader to have to keep back tracking the thought to try to keep it all in their mind which can be confusing and makes it more difficult to read."

Proofreading tools pick up some of the typos and punctuation errors in that previous bit. https://imgur.com/a/oqqoEGV None of them called out its structure.

Compare with https://chatgpt.com/share/69cb180e-2090-832f-838e-896a3cab4e... ... which did call it out.

    The overly long example sentence introduces unintended humor or self-parody, which may dilute the seriousness of the point.
Now, one could argue that taking its advice on the structure, and on the arguments I have incompletely formulated, would change the tone of my writing. However, any changes I make are changes that I intend to make, not the result of the LLM rewriting my words.

My thesis remains intact.

reply
> it takes care of my grammar blindspots (damn you commas and a/an/the articles!)

There are plenty of pre-LLM tools that can fix grammar issues.

> Can you please share what and how gets degraded?

I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:

https://www.artfido.com/this-is-what-the-average-person-look...

Being definitionally average, the results are not bad, but they are also entirely unremarkable, bland, milquetoast. Whether this result is a degradation will vary, of course, as some people write a lot worse than bland.

reply
In many kinds of writing, perhaps most, communicating your state of mind to the reader is a primary goal. Even a smart LLM fundamentally degrades this, because to whatever degree that it has a mind it isn't shaped like yours or mine. I've had a number of experiences this year where I get to the end of a grammatical, well-structured technical document, only to find that it was completely useless because it recited a bunch of facts and analyses but failed to convey what the author was thinking as they wrote it.

(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)

reply
>damn you commas and a/an/the articles

This sounds like an ESL issue. LLMs are good at proofreading English text written by ESL writers. They are not as good at proofreading experienced English writers.

reply