From chapter 1:
> When Git slows down, engineers adapt in bad ways. They stop asking questions the history could answer. They batch work to avoid sync cost. They keep messy branches alive longer, postpone cleanup, and treat the repository like something slightly dangerous.
From https://gitperf.com/epilogue.html
> Once machines start producing code at machine cadence, the model from this book does not break. What changes is the pace: more branches, more commits, more automation, and more surrounding metadata. The traffic gets louder, and the features that keep Git legible under pressure move from "nice to have" to "essential."
> These stop looking like side optimizations. They are what keep machine-scale Git traffic usable.
The book is definitely LLM-assisted, yet it also has great content, so I'm not sure we can immediately jump to shaming it entirely as slop.
I'd written this piecemeal over the last year or so (originally a series of blog posts), and was happy to release it all for free in a single edition, and under CC.
I'll release an Edition 1.1 soon with some errata and adjustments. There's already a free PDF for reading on the go -> https://gitperf.com/pdf.html
Regarding the cherry-picked fragments that read like an LLM: of course an LLM (in fact, several!) was used to stitch those disparate blog posts together into a more coherent whole, and they certainly left an imprint in places. Otherwise, as a solo writer with a full-time job putting together a 200-page book, I'd have to pay an editor, or work with O'Reilly (did this in 2010 on a Redis book; never again!); and perhaps the book wouldn't be free!
LLMs will continue to leave imprints in our work. Some words will, over time, be edited and whittled away. Other words, when the LLM writes well enough to convey a useful point, will be kept.
(The corollary is that the LLM writing you notice is mostly going to be from people who aren't actively trying to hide it from you)
Personally I have an extremely hard time reading text like this and it makes me lose trust in the author. Publishing potentially useful Git knowledge this way is a shame.
But the day this breaks down and I have to deal with bloom filters, packfiles, maintaining the git garbage collector, or rerere cleanup is the day I switch our codebase to a centralized VCS.
This stuff is cool to learn about, but it's 5 layers removed from anything I want to be thinking about in my day-to-day work.
The tooling on top is inconsistent and kind of messy though, and harder to explain than the internals. I recall hearing somewhere that the commands we use today as the user tooling were really only meant as tooling for messing with git directly, with the expectation that something would sit above it and make it actually user-friendly. I don't remember where I heard this though, so it could just be a post-justification from my own brain to explain the situation :)
that's not true either. originally it was simple internally - it was mostly shell scripts! writing text files! - but now it has all sorts of complicated optimisations.
the "middle" is somewhat simple for CS people, though - a graph of commits, you can put labels on them, you can send and receive strict appends to the graph to another repository. both the stuff under and above that is quite complicated in practice, but the UI does continue to improve - e.g. editing a past commit message until the release last week was ... complicated.
Was it? `git log --oneline` to figure out the commit id if it's not really recent, then `git rebase -i <commit-id>^` and apply the reword action to your commit.
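Spelled out as a minimal sketch (abc1234 stands in for whatever hash `git log` shows you):

  git log --oneline         # find the hash of the commit to reword
  git rebase -i abc1234^    # start an interactive rebase at its parent
  # in the editor, change "pick" to "reword" on that commit's line and save;
  # Git then reopens the editor so you can write the new message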
Nah, I remember that time vividly. GitHub became a thing about a year or two after git was already very much taking the lead.
GitHub became GitHub because git was the winner. There were alternative hubs that supported Bazaar and Mercurial and whatnot, but git won because for most people, Linus and the kernel team being behind it was reason enough to trust it.
(and I say this as someone who liked hg more than git)
Most people just wanted to collaborate on the platform other people were on and where the popular projects were; that it used git was just an implementation detail at that point for most, I think.
So true. I used Mercurial back in the day and also used Darcs before it, and it helped me realize that the best versioning tool UX that exists is still the one Git provides.
PS: Also CVS, SVN, Perforce, and ClearCase professionally, and I gave Fossil a try. None of them even close to Git usability-wise.
Seemingly seconds on every remote-touching command, even on a very small repo.
Why isn't
git clone --depth 1 ...
the default? I would guess that for at least 90% of the repos I clone, I just want to install something. Even for the rest, I might hack on the code but seldom look into the history. If I do, then I could do a `git fetch` at that point and save the bandwidth and disk space the rest of the time.
https://github.blog/open-source/git/get-up-to-speed-with-par...
I was going to ask if there's a way to set that as the default but I guess I'll just set up an alias like I have for most of the subcommands I use daily.
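One option, assuming an alias fits your workflow (the name `sclone` here is just made up): git aliases can carry flags, so something like

  git config --global alias.sclone "clone --depth 1"
  git sclone https://example.com/some/repo.git

As far as I know there's no config key that makes --depth 1 the default for plain `git clone` itself.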
A) You can update them, because you can git pull to fetch changes.
B) If you want to apply patches on top, it's better to have version control so you can keep track of what you changed, especially useful if you want to rebase (quick sketch below).
B) See A
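For B), roughly what that looks like, assuming you've committed a local tweak on top of the upstream branch:

  git commit -am "local: tweak default config"   # your local patch, kept as a commit
  git pull --rebase                               # later: fetch upstream and replay your patch on top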
I use OpenBSD, and before that I was on Alpine, Debian, and Arch. If it was software I wanted to try, I downloaded the tarball. If it was something I wanted to keep for longer, I created a port or a custom package.
Downloading a tarball and running ./configure or make, editing a config file here or there, etc., then running `make install` is the most common flow. Nowadays I find myself frequently editing the Dockerfile to adjust it to my liking. With a git repo, the owners of the repo have excluded all the local files, build caches, etc., and you can keep pulling to get updates, stashing and reapplying your local changes. With tarballs, you have to figure it all out over again: lose your build cache (language dependent, maybe), lose a change you made here or there, etc.
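For the pull-with-local-edits part, a minimal sketch (assuming the edits are uncommitted working-tree changes):

  git stash        # set aside local, uncommitted edits
  git pull         # fetch and merge upstream updates
  git stash pop    # reapply your edits on top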
And also git.
Which makes more sense, I guess.
Even for a small office, git can be immensely useful. Entire production-line workflows can be implemented with git ... if only folks would learn to use it productively.
It's not just for development. Writers can use it productively. Accountants too.
It always kind of irks me that Git hasn't just been folded into the OS front-end UI by any of the OS vendors ... it'd be so revolutionary to give common folks an easy way to manage the timeline/history of their computer use with git.
Plus, it's chicken and egg. If the OS had a great interface to Git as part of its responsibilities in the Explorer/Finder interface, folks would be more inclined to use text-based file format standards that are coherent with the Git methodology.