The book is definitely LLM-assisted, yet it also has great content, so I'm not sure we can immediately jump to shaming it entirely as slop.
I'd written this piecemeal over the last year or so (originally a series of blog posts), and was happy to release it all for free in a single edition, and under CC.
I'll release an Edition 1.1 soon with some errata and adjustments. There's already a free PDF for reading on the go -> https://gitperf.com/pdf.html
Regarding the cherry-picking of fragments from an LLM: of course LLMs (several, in fact!) were used to stitch those disparate blog posts together into a more coherent whole, and they certainly left an imprint in places. Otherwise, as a solo writer with a full-time job putting together a 200-page book, I'd have had to pay an editor, or work with O'Reilly (did this in 2010 on a Redis book; never again!), and perhaps the book wouldn't be free.
LLMs will continue to leave imprints in our work. Some words will, over time, be edited and whittled away. Other words, when the LLM writes well enough to convey a useful point, will be kept.
What I’m interested in is how to address the “grating” characteristics, or whatever it is readers detect, that make them focus on the LLM aspect. I suspect those traits are already removable with the right methods, or soon will be.
Ignore the haters; they're wrong to blanket-criticize. Their observations are helpful, though, for trying to improve the process. We want LLMs to assist in creating useful and effective content for humans.
Personally I have an extremely hard time reading text like this and it makes me lose trust in the author. Publishing potentially useful Git knowledge this way is a shame.
You probably have a great deal of understanding and knowledge about Git, and this book might be a good resource.
I'm not asking you to do anything differently, and yet I think it's important to realize that people have a deep aversion to text that appears to be LLM generated.
By "shame", I meant that just from a skim of the contents of this book, it can be hard to distinguish it from any other LLM generated text by any other author who has no idea what they're talking about.
That makes people (like me) inclined to discount what it has to say, potentially losing out on good technical content.
An interesting point to consider: an author that goes out of their way to hide any LLM influence may actually be degrading the signal. Because in that case, you'll not see the LLM's etchings, and misattribute skill to the author under the belief an LLM was not involved. Complicated times.
For those hunting witches, it doesn't matter whether you put in real effort and only used an LLM to fix grammar or do some research while the thoughts and experience were your own. Maybe you're simply not that good at writing; they'll still take up pitchforks and torches, drag you out, and call you names.
(The corollary is that the LLM writing you notice is mostly going to be from people who aren't actively trying to hide it from you)