john-tells-all
Git's data structure is shockingly simple. Here's an article with diagrams of the four basic data types. It also shows why "HEAD" is different from a "commit".
packetlost
Is HEAD not just a ref to a commit? There are basically only two "things" in git: refs and objects. Git internals are so easy that IMO people should start off by running through this tutorial [0] instead of learning the basics of git porcelain; it makes understanding what's going on so much easier.
[0]: https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
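For a taste of the object side, here's a minimal sketch you can run in any existing repo:
git rev-parse HEAD            # resolve a ref to an object id
git cat-file -t HEAD          # type of the object it points to: commit
git cat-file -p HEAD          # print it: tree, parent(s), author, message
git cat-file -p 'HEAD^{tree}' # the tree it references: blobs and subtrees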
WorldMaker
In most operations HEAD is a ref to a branch, which makes it somewhat unique as a ref type (it's a ref to a ref, a double pointer). When it is a ref to a commit instead, that's the detached HEAD state.
Plus, to the CLI, HEAD can also mean the family of refs under refs/heads/* that relate to the heads of each branch (which, depending on fetch status, may not be the same as the branch ref), and traversal into the reflog.
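You can watch the double pointer directly; a small sketch, assuming a repo whose current branch is main:
cat .git/HEAD           # on a branch: "ref: refs/heads/main" (a ref to a ref)
git symbolic-ref HEAD   # refs/heads/main
git checkout --detach   # detach HEAD from the branch
cat .git/HEAD           # now a bare commit id: detached HEAD state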
skydhash
Objects should be split into trees and blobs to make some operations clearer, especially checkout and rename detections.
packetlost
There are also commits and tags. Commits are important for understanding how branches and histories work. I was just trying to be brief; the types of objects are important and covered in that tutorial.
hmpc
Similarly, though it's not performance-focused, I can wholeheartedly recommend Building Git[0], which walks you through building your own git clone in Ruby (although the language is immaterial).
alxgsv
I never faced git performance issues when working with code. Guess my repos weren't big. But when I tried to use git as a versioned database of changes in my pet project, I learned a lot about indexes, compacting, etc. The article covers a lot and is very helpful!
WolfeReader
Git is noticeably slow on Windows. Git is built to run on top of Unix commands, which work great on Linux and Mac. For Windows, the commands have to be installed separately, and there's a performance penalty for each call. Individual Git commands are usually fine, but anything that calls several steps in sequence will visibly drag.
(WSL gets around this entirely.)
zokier
afaik a far bigger factor is that windows file io is just generally much slower than on linux. both of these are further exacerbated by av solutions, which are ubiquitous on windows. that is why ms introduced "dev drive" in windows a few years back, which in their own benchmarks showed the biggest gain specifically with git: https://blogs.windows.com/windowsdeveloper/2023/06/01/dev-dr...
retired
I hardly see developers with Windows anymore; I guess this is one of the many reasons.
WolfeReader
I have to use Windows for work. With WSL, it's actually perfectly fine! Which is really more of an endorsement of Linux than Windows.
ergl
Surprise, surprise, another piece of LLM-generated slop on the front page of HN.
From chapter 1:
> When Git slows down, engineers adapt in bad ways. They stop asking questions the history could answer. They batch work to avoid sync cost. They keep messy branches alive longer, postpone cleanup, and treat the repository like something slightly dangerous.
From https://gitperf.com/epilogue.html
> Once machines start producing code at machine cadence, the model from this book does not break. What changes is the pace: more branches, more commits, more automation, and more surrounding metadata. The traffic gets louder, and the features that keep Git legible under pressure move from "nice to have" to "essential."
> These stop looking like side optimizations. They are what keep machine-scale Git traffic usable.
redditor98654
I had the same thought. TBH there is nothing in those individual sentences that reads like AI, but when you read them all together I could see it too. I dunno what it is; the only way I can describe it is that it doesn't sound like a normal human, but rather like a monologue from a character trying to sound impressive with each successive sentence.
ergl
The author works at OpenAI, so it's no surprise that they've stopped noticing how grating this kind of structure is to read.
alchemist1e9
I think it’s likely there will be methods to fix this soon, some de-slop algorithms, or is there a deep reason it will always be detectable? Perhaps there are some PhD linguists who have figured out how to quantify the “slop” effect and are writing their thesis on it. Once that is done it will be possible to smooth it away.
The book is definitely LLM assisted authoring yet it also has great content, so not sure we can immediately jump to shaming it entirely for being slop.
frangonf
Although these LLMisms still stand out to me, I find them bearable as the glue part of this kind of technical / white-paper-like content.
Maybe I'm already lost in the AI psychosis, maybe some of us are in a transition phase trying to separate from pure synthetic "unmanned slop" to "acceptable slop", maybe someone could derive the same or more value getting the prompts that hold the industry experience the author seems to hold and pointing them to the git codebase/docs herself...
In my case (not seriously engaged in git performance since my git game is trivial) I find the explanations from the sections I have limited knowledge of to be very informative.
Cthulhu_
I think people 'scan' for LLM tells so that they know to read the text with some skepticism instead of accepting it as authoritative; this is probably a healthy attitude to have. However, I'm sure that over time the 'tells' will just go away entirely.
If the text is valuable and correct then it probably won't matter much. It's not like I read technical documentation in detail to begin with (more scan reading).
anitil
I'm only on to chapter two and already it's explained some plumbing details that I somehow have missed all these years. This is great
tnm
Thanks much. I hoped that chapter would stand well on its own. That and the packfile chapter were the first I wrote, partly as a reference for myself!
normie3000
> LFS adds its own operational overhead.
Seemingly seconds on every remote-touching command, even on a very small repo.
Hendrikto
What is worse is that for about half a year or so, I now have to authenticate my ed25519-sk key with my Yubikey thrice (!) when using LFS. On every push.
snthpy
I've been wanting to ask this:
Why isn't
git clone --depth 1 ...
the default? I would guess that for at least 90% of the repos I clone, I just want to install something. Even for the rest, I might hack on the code but seldom look into the history. If I do, I could do a `git fetch` at that point and save the bandwidth and disk space the rest of the time.
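Concretely, that rescue path would look something like this (repo URL made up):
git clone --depth 1 https://github.com/example/project
cd project
git fetch --unshallow   # later, pull down the full history after all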
joshka
try `git clone --filter=blob:none` instead
https://github.blog/open-source/git/get-up-to-speed-with-par...
snthpy
Thanks. That's great! I especially like that it then lazy loads the blobs as you need them.
I was going to ask if there's a way to set that as the default but I guess I'll just set up an alias like I have for most of the subcommands I use daily.
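Something like this should work (the alias name and repo URL are made up):
git config --global alias.bclone 'clone --filter=blob:none'
git bclone https://github.com/example/project   # expands to: git clone --filter=blob:none <url>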
dwattttt
A question: why is git involved at all in this? You don't want a repository.
snthpy
Good question! Idk and I don't make the rules. I guess people default to it because most people have git installed already?
I'm thinking of LazyVim for example which has [1]:
git clone https://github.com/LazyVim/starter ~/.config/nvim
After that, once you do a sync or update, there's a whole lot more cloning going on.
The other projects I was going to mention have apparently all switched away from using git for their package management (homebrew, Go, cargo, ...). I can't help but wonder to what extent that might have been influenced by the default slowness of doing a full git clone?
Of course these all could add `--depth 1` to their instructions or internal package management tooling, and ofc we need both options to be available. I'm pondering aloud that, in my observation, `--depth 1` is probably the option that I want more often than not, but YMMV.
skydhash
This! The default was to have a link to download a tarball of the source. And if the user wanted to contribute (or check the devel version), you would add a link to the vcs.
kingstnap
Grabbing git repos instead of just tarballs is useful.
A) You can update them, because you can git pull to fetch changes.
B) If you want to apply patches on top, it's better to have version control so you can keep track of what you changed - especially useful if you want to rebase (see the sketch below).
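Roughly this flow; a sketch where the repo URL and upstream branch name are made up:
git clone https://github.com/example/tool && cd tool
git checkout -b local-patches          # keep your changes on their own branch
# ...edit and commit as usual...
git fetch origin
git rebase origin/main local-patches   # replay your patches onto the new upstream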
eddythompson80
I think gitignore solves a problem that is hard to solve with the traditional tarball approach.
Downloading a tarball and running ./configure or make, editing a config file here or there, etc., then running `make install` is the most common flow. Nowadays I find myself frequently editing the Dockerfile to get it to my liking. With a git repo, the owners of the repo have excluded all the local files, build caches, etc., and you can keep pulling to get updates, stashing and reapplying your local changes. With tarballs, you have to figure it out all over again: lose your build cache (language-dependent, maybe), lose a change you made here or there, etc.
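That loop is just (a sketch):
git stash push -m "local tweaks"   # set aside the edited Dockerfile etc.
git pull --rebase                  # bring in upstream changes
git stash pop                      # reapply the tweaks on top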
jurakovic
What if that's only you? Git isn't made only for those who "just want to install something"
snthpy
Fair enough. I also work with a monorepo at work, but I cloned that like 5 years ago.
If I think about what I've cloned over the last week or so (LazyVim, gstack, my dotfiles), most of the time I just want the current state and the ability to pull updates. Even for my dotfiles or projects that I fork and hack on, most of the time I'm just adding commits, and it's seldom that I want to go back to historical ones.
Given how often I see `git clone ...` instructions in GitHub README.md files, I was just wondering how many other people felt the same?
So my contention is that most of the time, `git clone --depth 1` or `git clone --filter=blob:none` is what you actually want, and in the case that you want the full history you could do `git clone --depth 0` (or `git clone --full` for even better UX, not that the git CLI is known for its UX).
aa-jv
It's not the default because that'd be counter-productive to developers who use git with larger repositories, which is how git started life in the first place - your clone depth would be entirely useless for Linux kernel developers, for example, if it were the default ..
lthi747
The only thing I miss in git is an uncomplicated sparse checkout, something like svn's, which lets you check out just a specific path.
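For what it's worth, newer git (2.25+) gets close when you combine sparse-checkout with a blobless partial clone; a sketch where the repo URL, path, and branch name are made up:
git clone --filter=blob:none --no-checkout https://example.com/big-repo.git
cd big-repo
git sparse-checkout set some/subdir
git checkout main   # materializes only some/subdir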
wadefletch
ted nyman: #1 most knowledgeable college football fan in sf
and also git
which makes more sense i guess
mitchellh
Of most things, really, he was on Jeopardy for a reason! https://thejeopardyfan.com/tag/ted-nyman
nananana9
Git is the industry standard because, for what it gives you, it's a remarkably robust and simple program to use. We're all vaguely aware that the internals are complex, but the UX is clean and usable enough that the complexity usually doesn't leak out.
But the day this breaks down and I have to deal with bloom filters, packfiles, maintaining the git garbage collector, or rerere cleanup is the day I switch our codebase to a centralized VCS.
This stuff is cool to learn about, but it's 5 layers removed from anything I want to be thinking about in my day-to-day work.
codesnik
I think it's the other way around. Git is pretty simple internally, and its UI is just knobs and levers to reach into that simple, reliable internal structure. This is why to some people it seems like a mess - they want a "do what I want" button (and all people and their needs are different) - and to other people it's clean: open the throttle and the engine will rev.
embedding-shape
Agree, the insides are fairly simple and cleanly designed; you could explain exactly how almost everything works in a one-hour presentation, and most people will grok the main ideas fairly easily.
The tooling on top is inconsistent and kind of messy though, and harder to explain than the internals. I recall hearing somewhere that the tooling we see today as the user tooling was really supposed to just be the tooling for messing with git directly, with the expectation that something would sit above and make it actually user-friendly. I don't remember where I recall this from though, so could be just a post-justification from my own brain to explain the situation :)
skydhash
It’s more about using them to present a better interface for your workflow and the project.
bananapub
> Git is pretty simple internally, and its UI is just knobs and levers to reach into that simple, reliable internal structure.
that's not true either. originally it was simple internally - it was mostly shell scripts! writing text files! - but now it has all sorts of complicated optimisations.
the "middle" is somewhat simple for CS people, though - a graph of commits, you can put labels on them, you can send and receive strict appends to the graph to another repository. both the stuff under and above that is quite complicated in practice, but the UI does continue to improve - e.g. editing a past commit message until the release last week was ... complicated.
skydhash
> editing a past commit message until the release last week was ... complicated
Was it? `git log --oneline` to figure out the commit id if it's not really recent, then `git rebase -i <commit-id>^` and apply the reword action to your commit.
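Spelled out, with a made-up commit id:
git log --oneline        # find the target, say abc1234
git rebase -i abc1234^   # change "pick" to "reword" for that commit, save, edit the message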
thfuran
I'm pretty sure git is industry standard almost entirely because GitHub exists. And I very much disagree that the UX is clean. The CLI is more than a bit of a mess.
stingraycharles
> I'm pretty sure git is industry standard almost entirely because GitHub exists.
Nah, I remember that time vividly. GitHub became a thing about a year or two after git was already very much taking the lead.
GitHub became GitHub because git was the winner. There were alternative hubs that supported Bazaar and Mercurial and whatnot, but git won because, for most people, Linus and the kernel team being behind it was reason enough to trust it.
(and I say this as someone who liked hg more than git)
embedding-shape
I mean, I don't think anyone can say for sure whether "GitHub became GitHub because git was the winner" or "Git became mainstream because GitHub won the developer mindshare". Pretty much everyone I knew used GitHub for everything besides the actual VCS protocol, although a lot of us early users were users of GitHub especially because of git.
Most people just wanted to collaborate on the platform other people were on, and where the popular projects were; that it used git was just an implementation detail at that point, for most, I think.
barrkel
Git was blazingly fast when it came out, faster than hg (C vs Python) and of course a different order of complexity to svn, which was the actual existing alternative it supplanted.
BerislavLopac
Anyone who has ever used Mercurial knows very well what a good versioning tool UX looks like...
windward
No. When I left a job using Mercurial, I made a vow never to start a job that used it again. And that employer was seeking to move on from it.
liveoneggs
Good because clones take forever, so you get free time? Good because you need plugins/extensions/special config to support rebase?
miroljub
> Anyone who has ever used Mercurial knows very well what a good versioning tool UX looks like...
So true. I used Mercurial back in the day and also used Darcs before it, and it helped me realize that the best versioning tool UX that exists is still the one Git provides.
PS: I've also used CVS, SVN, Perforce, and ClearCase professionally, and gave Fossil a try. None of them comes even close to Git usability-wise.
aa-jv
I've always wanted to see a book that describes git for the common man and gives them tons of examples for how to use it to do productive things.
Even for a small office, git can be immensely useful. Entire production line workflows can be implemented with git .. if only folks would learn to use it productively.
It's not just for development. Writers can use it productively. Accountants too.
It always kind of irks me that Git hasn't just been folded into the OS front-end UI by any of the OS vendors .. it'd be so revolutionary to give common folks an easy way to manage the timeline/history of their computer use using git.
jcranmer
The thing is that change tracking and sometimes even full-on version control are actually integrated with a lot of the tools that people use, like your document editor. The incremental benefit of git then would largely not be automatic version tracking but only the interactive history browsing which git itself is kinda meh at (and is of questionable value for a lot of workflows). And the cost of this transition is forcing people to use a different tool from their regular workflow, one that wants to be used in an environment they're not comfortable in, and also one that is not conducive to handling anything other than plain text.
Rather, programmers should learn from how other software handles version control and incorporate those ideas into git instead. For example, perhaps we should automatically create a commit every time we build the project so that we can roll back or forward to previous builds and not rely on the programmer to remember to make commits so frequently.
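A sketch of that last idea as a shell wrapper (entirely hypothetical; a real version would probably commit to a dedicated branch rather than the working one):
#!/bin/sh
# snapshot the working tree before every build
git add -A
git commit -q -m "build snapshot $(date -u +%FT%TZ)" || true   # no-op when the tree is clean
make "$@"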
awesan
The obvious reason is that most file formats used by writers, accountants, etc. are binary files, which don't benefit much from git.
fragmede
Microsoft Office files are zipped XML these days, there's a standard and everything.
aa-jv
So? Doesn't matter. Git in that case still provides valuable historical archiving and versioning that is more useful than going without it.
Plus, it's chicken and egg. If the OS had a great interface to Git as part of its responsibilities in the Explorer/Finder interface, folks would be more inclined to use text-based file format standards that are coherent with the Git methodology.
tnm
So nice to see this get picked up, and honestly surprised to see the interest in what I think of as an extremely esoteric area. A few things:
- Just released an Edition 1.1 that fixed some small errors, amended a few chapters' content, and removed some general bluster. I'm going to try and, well, version these.
- New things are coming to Git, and I suspect I'll be talking about Git Futures or A Post-Git World soon enough.
- There's now a free PDF, https://gitperf.com/pdf.html
- I'll have a couple more highly practical chapters coming soon, focused on pragmatic organizational adoption, e.g., on wrapping the git CLI to follow best practices