> Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers
The heavy use of LLMs in writing makes people rightfully doubt whether it's worth putting in the time to read what's written there.
Using LLMs for coding is different in many ways from writing, because the proof is in the pudding: you can run it, you can test it, etc. But the writing _is_ the artifact, and the only way to know it's correct is to put in the work.
That doesn't mean you didn't put in the work! But I think it's why people are distrustful and have a bit of an allergic reaction to LLM-generated writing.
People put out AI text, primarily, to run hustles.
So its writing style is a kind of internet version of "talking like a used car salesman".
With some people that's fine, but anyone with a healthy epistemic immune system is not going to listen to you.
If you want to save a few minutes, you'll just have to accept that.
I mean, obviously you can't know your actual error rates, but it seems useful to estimate a number for this and to have a rough intuition for what your target rate is.
Did ChatGPT write this response?
Looks like this comment is embracing the tools too?
I'd take cheap snark over something somebody didn't bother to write but expects us to read.
Yes, it's fast, it's more efficient, it's cheap: the only things we as a society care about. But it doesn't convey any degree of care about what you put out, and care is probably desirable for a personal, emotionally charged piece of writing.