Not reading what you write smells more like laziness.
Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.
Instead of asking an AI tool to write your thoughts for you, you can write the text yourself and ask it to critique what you wrote: instruct it not to rewrite anything, only to give you an overall picture of the text's clarity, sentiment, etc.
But that of course would require more work. Asking ChatGPT to produce a text based on a lazily written, bullet-point list of brainfarts is probably easier.
Plus, "lazy" would actually be just using AI to edit the writing.
An LLM can't really do that. It can help you produce a correct sentence where you struggle to create your own, but it does not have the capability to do what you suggest.
LLMs definitely can do this. The output tends to be overly positive though, claiming that any sort of rough draft you give them is "great, almost ready for publishing!". But the feedback you can get on clarity, narrative flow, weak spots... _is_ usually pretty good.
Now, following that feedback to the letter is going to end up with a diluted message and boring voice, so it's up to you to do with the feedback whatever you think best.
I never ask the LLM to evaluate my text in terms of being good or bad. Instead I try something like this:
"In this section I tried to explain X, I intended to sound in Y and Z fashion, and I want a reader to come away with at least impression W. Is the text achieving these goals? Do I communicate my ideas clearly and concisely, or are they confused and meandering?"
I typically get useful feedback. I specifically ask it up front not to rewrite anything, only to point out the bits it finds faulty and explain why.
Of course the prompt is different if I am writing, for example, technical documentation, or if it is an attempt at creative writing.
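To make that concrete, here is a minimal sketch in Python of how such a critique-only prompt could be assembled. Everything here is illustrative: the function name, parameters, and sample values are hypothetical, and the resulting string would be pasted or sent to whatever LLM you use.

```python
# Build a critique-only prompt: state your goals for the text, ask
# whether they were achieved, and explicitly forbid rewriting.
# All names and sample values below are illustrative.

def build_feedback_prompt(text, topic, tone, takeaway):
    """Assemble the instructions that accompany the draft."""
    return (
        f"In this section I tried to explain {topic}. "
        f"I intended to sound {tone}, and I want a reader to come away "
        f"with at least the impression that {takeaway}. "
        "Is the text achieving these goals? Do I communicate my ideas "
        "clearly and concisely, or are they confused and meandering? "
        "Do not rewrite anything: only point out the parts you find "
        "faulty and explain why.\n"
        "---\n"
        f"{text}"
    )

prompt = build_feedback_prompt(
    text="My draft paragraph goes here.",
    topic="how to get critique instead of rewrites from an LLM",
    tone="practical and direct",
    takeaway="asking for critique beats asking for prose",
)
print(prompt)
```

The point of templating it is that the "do not rewrite" instruction and your stated goals are never forgotten between drafts; only the draft and the goal fields change.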
I have used it many times for exactly this, with good results. It points out ambiguous constructs, parts that are dissonant with the tone I intend, etc.
I have no idea why you think that LLMs can't do that lol
There's nothing magical about a long text you write yourself vs. a stream of Reddit comments in a thread. It's all sentiment analysis on text. It can surface ambiguity, show how ideas are connected in context, categorize and summarize, etc.
You should try it and see for yourself. Feed it a large text by a single author and ask it to do those things, then judge whether the results are satisfactory.
> you can choose to use all the tools and make something grammatically and stylistically as close to perfect, but who would want to read something as dry
If it is dry, then it is not stylistically perfect. By definition, dry writing is simply imperfect writing. Stylistically perfect writing does not have to be dry, and usually is not.
What happens here is that people say "stylistically perfect" when they mean "followed bad stylistic advice".
I do not mean this comment as a kick against AI. It is very good for some things and less good for others. What annoys me is someone calling the output superior while actually complaining about it being inferior.
Hey, maybe that LLM needs to be used differently to achieve actually good writing results.