upvote
I had the same thought. TBH there's nothing in those individual sentences that reads like AI, but when you read them all together I can see it too. I dunno what it is; the only way I can describe it is that it doesn't sound like a normal human, but rather like a monologue from a character trying to sound impressive with each successive sentence.
reply
The author works at OpenAI, so it's no surprise that they've stopped noticing how grating this kind of structure is to read.
reply
I think it's likely there will soon be methods to fix this, some de-slop algorithms; or is there a deep reason it will always be detectable? Perhaps there are PhD linguists who have figured out how to quantify the "slop" effect and are writing their theses on it. Once that is done, it will be possible to smooth it away.

The book is definitely LLM-assisted authoring, yet it also has great content, so I'm not sure we can immediately jump to shaming it entirely for being slop.

reply
Thanks for the kind words, and checking out the book here.

I'd written this piecemeal over the last year or so (originally a series of blog posts), and was happy to release it all for free in a single edition, and under CC.

I'll release an Edition 1.1 soon with some errata and adjustments. There's already a free PDF for reading on the go -> https://gitperf.com/pdf.html

Regarding the cherry-picking of LLM fragments: of course an LLM (in fact several!) was used to stitch together those disparate blog posts into a more coherent whole. And they certainly left an imprint in places. Otherwise, as a solo writer with a full-time job putting together a 200-page book, I'd have to pay an editor or work with O'Reilly (did this in 2010 on a Redis book; never again!), and perhaps the book wouldn't be free!

LLMs will continue to leave imprints in our work. Some words will, over time, be edited and whittled away. Other words, when the LLM writes well enough to convey a useful point, will be kept.

reply
I think it's great and you should be doing it. I have no problem at all with LLM assistance in authoring; I think it's a good thing because, like you said, it enables solo writers with good ideas to produce valuable work that they otherwise wouldn't!

What I'm interested in is how to address the "grating" quality, or whatever characteristics readers detect that make them focus on the LLM aspect. I feel it's probably already removable, or soon will be, with some methods.

Ignore the haters; they're wrong to blanket-criticize. Their observations are helpful, though, for trying to improve the process. We want LLMs to assist in creating useful and effective content for humans.

reply
> The book is definitely LLM-assisted authoring, yet it also has great content, so I'm not sure we can immediately jump to shaming it entirely for being slop.

Personally I have an extremely hard time reading text like this and it makes me lose trust in the author. Publishing potentially useful Git knowledge this way is a shame.

reply
"Shame" is a strong word for a free ebook written for the general good. I'm happy to have a live conversation with you anytime to discuss Git and its internals and earn your trust; I have some experience with it.
reply
I'm sorry if I have offended you.

You probably have a great deal of understanding and knowledge about Git, and this book might be a good resource.

I'm not asking you to do anything differently, and yet I think it's important to realize that people have a deep aversion to text that appears to be LLM generated.

By "shame", I meant that, just from a skim of its contents, this book can be hard to distinguish from LLM-generated text by any other author who has no idea what they're talking about.

That makes people (like me) inclined to discount what it has to say, potentially losing out on good technical content.

reply
Yep, signals are signals, but I think it's quite complicated now. (In any case, this is still the embryonic era of LLMs).

An interesting point to consider: an author that goes out of their way to hide any LLM influence may actually be degrading the signal. Because in that case, you'll not see the LLM's etchings, and misattribute skill to the author under the belief an LLM was not involved. Complicated times.

reply
They wouldn't be able to publish this useful knowledge easily without it, though. And it's the author's guidance and vision that the LLM merely helps materialize. So I think we should be studying how to generate content with fewer "slop" features, making it more natural and satisfying for human readers, rather than discouraging it.
reply
Slop is content not written by a human. By definition, there can be no de-slop algorithms. There can only be algorithms that remove certain telltale signs, fraudulently attempting to present non-human-generated content as human-generated.
reply
Here we are, in a place and time where putting an em dash anywhere in your text gets you burned at the stake for witchcraft, like OP.

To those hunting witches, it doesn't matter whether you put in the effort and only used an LLM to fix grammar or do some research while the thoughts and experience were your own. Maybe you're just not that good at writing; still, they will take up pitchforks and torches, drag you out, and call you names.

reply
It's fairly easy to quite thoroughly "de-slop" writing: just feed it, chunk by chunk, to an agent that compares each chunk to a good piece of human writing and adjusts it to match. This won't address structural or content issues, but all the major models are perfectly capable of copying the tone of a particular style of writing, and in doing so they tend to remove most of the rough edges.

(The corollary is that the LLM writing you notice is mostly going to be from people who aren't actively trying to hide it from you.)
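The chunk-and-rewrite loop described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual tool: the paragraph-based chunking heuristic and the prompt wording are my own assumptions, and `rewrite` stands in for whatever LLM call you prefer.

```python
def chunk_paragraphs(text, max_chars=2000):
    """Split text into paragraph-aligned chunks, each under max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Hypothetical prompt template; real wording would need tuning.
STYLE_PROMPT = (
    "Here is a sample of the target writing style:\n\n{sample}\n\n"
    "Rewrite the following passage to match that tone and style, "
    "preserving all facts and technical content:\n\n{chunk}"
)

def deslop(text, style_sample, rewrite):
    """Rewrite each chunk via `rewrite` (a stand-in for an LLM call), rejoin."""
    return "\n\n".join(
        rewrite(STYLE_PROMPT.format(sample=style_sample, chunk=c))
        for c in chunk_paragraphs(text)
    )
```

Chunking on paragraph boundaries keeps each request small enough for the model to mimic style faithfully while leaving the overall structure (which this approach deliberately doesn't touch) intact.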

reply
Although these LLM-isms still stand out to me, I find them bearable as the glue in this kind of technical, white-paper-like content.

Maybe I'm already lost in the AI psychosis; maybe some of us are in a transition phase, trying to move from pure synthetic "unmanned slop" to "acceptable slop"; maybe someone could derive the same or more value by taking the prompts that carry the industry experience the author seems to hold and pointing them at the git codebase/docs herself...

In my case (not seriously engaged in git performance, since my git game is trivial), I find the explanations in the sections I have limited knowledge of to be very informative.

reply
I think people 'scan' for LLM tells so that they know to read the text with some skepticism instead of accepting it as authoritative; this is probably a healthy attitude to have. However, I'm sure that over time the 'tells' will go away entirely.

If the text is valuable and correct, then it probably won't matter much. It's not like I read technical documentation in detail to begin with (it's more scan reading).

reply