
Baby’s First Garbage Collector (2013)

benhoyt

The "Garbage Collection" chapter in the OP's recent book Crafting Interpreters goes into this in a bunch more detail (in the context of a language VM): https://craftinginterpreters.com/garbage-collection.html -- so well written, both this and the 2013 blog post.

rurban

But it stops at a primitive mark-and-sweep collector rather than the more advanced Cheney compacting, copying, half-space collector. It doesn't even mention how badly mark-and-sweep behaves, especially on larger heaps.

vsnf

Like his other work, Crafting Interpreters, this is so much more approachable, understandable, and actionable than any academic textbooks or courses on the equivalent subjects are. It sometimes amazes me that one can learn anything at all without material like this.

tyleo

Agreed. I sometimes read such textbooks for my job and it is demanding. Sometimes I reread the same chapter multiple times over the course of a weekend just to make sense of anything.

sebastianconcpt

Would it make sense to have the VM do partial mark-and-sweeps instead?

Let's say there are a lot of references, on the order of tens of millions.

Would it make sense for the GC to sweep the heap in chunks?

ridiculous_fish

Yes indeed. The hard part is knowing when objects are referenced by something you didn't scan. For example you scan the first half of the heap and find some unreferenced objects; how do you know those objects aren't reachable from the second half of the heap? A lot of GC engineering is an attempt to answer that question as efficiently as possible.

vore

This is what incremental garbage collection does: you don't have to do it all at once; you can make incremental progress as needed.

MaulingMonkey

And the big caveat to this simple modification is that, done naively, an already-marked object can become the sole referencer of a live object without ever being re-scanned, potentially leading to the garbage collection of non-garbage, which would be a serious bug.

To fix this, you have the live code that runs in between the partial mark+sweep steps flag affected objects for re-scanning as it goes. This adds overhead and complexity, of course. Ruby's GC apparently calls this code a write barrier (a rough sketch follows the links below):

https://jemma.dev/blog/gc-incremental

https://blog.peterzhu.ca/notes-on-ruby-gc/

http://www.atdot.net/~ko1/activities/rgengc_ismm.pdf
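
A minimal sketch of what such a barrier can look like in C, assuming a tri-color scheme bolted onto the kind of intrusive object list the post uses (the Object layout, the gray stack, and the function names here are all made up for illustration, not the post's or Ruby's actual code):

    #include <assert.h>
    #include <stddef.h>

    typedef enum { WHITE, GRAY, BLACK } Color;

    typedef struct sObject {
      Color color;
      struct sObject* next;         /* intrusive "every object" list */
      struct sObject* children[2];  /* stand-in for the object's fields */
    } Object;

    /* Tiny fixed-size gray worklist; a real collector grows this dynamically. */
    #define GRAY_MAX 1024
    static Object* grayStack[GRAY_MAX];
    static int grayCount = 0;

    static void gray_push(Object* obj) {
      assert(grayCount < GRAY_MAX);
      obj->color = GRAY;
      grayStack[grayCount++] = obj;
    }

    /* Steele-style write barrier: if an already-scanned (black) object is about
       to reference a not-yet-scanned (white) object, demote the black object to
       gray so the collector re-scans it before declaring the cycle finished. */
    static void write_barrier(Object* owner, Object* value) {
      if (owner->color == BLACK && value != NULL && value->color == WHITE) {
        gray_push(owner);
      }
    }

    /* Every pointer store the running program makes goes through something like: */
    static void set_child(Object* owner, int i, Object* value) {
      write_barrier(owner, value);
      owner->children[i] = value;
    }

Between increments the collector keeps popping the gray stack and blackening objects; the barrier's only job is to make sure nothing reachable can hide behind an object the collector has already finished with.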

I'm left wondering what the overhead of such greylisting / write barriers is, compared to the reference-counting overhead mentioned by shwestrick in the "Reference count, don't garbage collect" thread:

https://news.ycombinator.com/item?id=32277088

moonchild

> This code is apparently called write barriers by Ruby's GC

'write barrier' is a standard term (along with 'read barrier'). See memory management glossary: https://www.memorymanagement.org/glossary/b.html#barrier-1

> wondering what the overhead of such greylisting / write barriers are

https://twitter.com/stevemblackburn/status/14942409060061102...

Read barriers are more expensive.

chc4

Conventional wisdom is that the overhead of write barriers is somewhere around 5-10%. There are tricks you can use to reduce it, though: you can coalesce the barriers for objects together into one block of operations if you have a JIT, only do the write barrier if the GC is in a collecting state (and then the overhead is 0% most of the time!), skip the barrier if you know an object is short-lived and doesn't escape a scope, etc.
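
The "only pay when collecting" trick is essentially one flag check on the fast path; a hedged sketch (the gcActive flag and remember() helper are illustrative, not any particular VM's API):

    /* Hypothetical fast-path barrier: between collections this is a single,
       highly predictable branch, so its cost rounds to zero. */
    static int gcActive = 0;             /* set while an incremental cycle runs */

    static void remember(void* owner) {
      /* In a real VM this would add 'owner' to a remembered/gray set. */
      (void)owner;
    }

    static inline void write_barrier_fast(void* owner) {
      if (gcActive) {                    /* almost always false */
        remember(owner);                 /* slow path, only mid-collection */
      }
    }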

im3w1l

So maybe I'm missing something, but couldn't the algorithm as described lead to a livelock where some annoying thread keeps turning black objects grey faster than the GC can turn them black again, so that the GC never makes progress? I think the fix is that when a black object gains a new reference to a white object, instead of making the black object grey, it should make the white object grey.
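
That fix is the classic Dijkstra-style "insertion" barrier. Reusing the Object and gray_push definitions from the sketch a few comments up, the only change is which object gets grayed (again purely illustrative):

    /* Dijkstra-style insertion barrier: shade the white target instead of
       demoting the black writer. Any object is grayed at most once per
       cycle, so a hostile mutator can't keep the collector from finishing. */
    static void write_barrier_dijkstra(Object* owner, Object* value) {
      if (owner->color == BLACK && value != NULL && value->color == WHITE) {
        gray_push(value);   /* gray the target, not the writer */
      }
    }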

iainctduncan

Do you have a suggestion for accessible material on learning about incremental GC?

mhh__

Why not the CPU?

(Intel put a garbage collector in hardware in the iAPX 432)

sebastianconcpt

Whoa! I didn't know about that one!

So that was in 1985?

Why didn't this feature get carried forward into today's architectures?

https://en.wikipedia.org/wiki/Intel_iAPX_432

wernsey

Something else from the author that I recently stumbled upon: his LISP2 Mark Compact garbage collector [0].

I needed a simple garbage collector for a toy project of mine and at first settled on a copying collector based on Cheney's algorithm [1] (roughly sketched below). The author's mark-compact code is so easy to read that I was able to grok it immediately, replace my original GC without trouble, and save half my memory.

[0] https://github.com/munificent/lisp2-gc [1] https://en.wikipedia.org/wiki/Cheney%27s_algorithm
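
For anyone curious what the Cheney approach in [1] boils down to, here is a heavily simplified sketch in C (fixed-size objects, hypothetical names, no root registration or space swapping beyond what's shown; a real collector also has to handle varying object sizes):

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical fixed-shape object: a forwarding pointer plus two fields. */
    typedef struct Obj {
      struct Obj* forward;     /* set once the object has been copied */
      struct Obj* fields[2];   /* references to other objects (or NULL) */
    } Obj;

    #define SEMI_SPACE 4096
    static Obj fromSpace[SEMI_SPACE], toSpace[SEMI_SPACE];
    static Obj* freePtr;       /* next free slot in to-space */

    /* Copy one object into to-space, or return the copy already made. */
    static Obj* forward(Obj* obj) {
      if (obj == NULL) return NULL;
      if (obj->forward != NULL) return obj->forward;   /* already moved */
      Obj* copy = freePtr++;
      memcpy(copy, obj, sizeof(Obj));
      copy->forward = NULL;
      obj->forward = copy;     /* leave a forwarding pointer behind */
      return copy;
    }

    /* Cheney's trick: to-space doubles as the breadth-first queue. 'scan'
       chases the copied objects, forwarding their children, until it
       catches up with 'freePtr'; everything live has then been copied. */
    static void collect(Obj** roots, size_t nroots) {
      freePtr = toSpace;
      Obj* scan = toSpace;
      for (size_t i = 0; i < nroots; i++) {
        roots[i] = forward(roots[i]);
      }
      while (scan < freePtr) {
        scan->fields[0] = forward(scan->fields[0]);
        scan->fields[1] = forward(scan->fields[1]);
        scan++;
      }
      /* from-space is now all garbage; the two spaces are swapped before
         allocation resumes (omitted here). */
    }

Collection work is proportional to live data only and allocation is just a pointer bump, but half the heap always sits idle, which is exactly the memory that switching to mark-compact gets back.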

dang

Not its first thread though:

Baby’s First Garbage Collector (2013) - https://news.ycombinator.com/item?id=21462190 - Nov 2019 (29 comments)

Baby's First Garbage Collector (2013) - https://news.ycombinator.com/item?id=10794026 - Dec 2015 (17 comments)

Baby's First Garbage Collector - https://news.ycombinator.com/item?id=6871202 - Dec 2013 (84 comments)

quickthrower2

Decent spacing between them though. Not oversubmitted by any means.

dang

Correct! Reposts are fine after a year or so: https://news.ycombinator.com/newsfaq.html.

matheusmoreira

My favorite article from that blog; I cite it whenever I see someone asking about garbage collectors. The other posts about programming language design and implementation are really great too, and they made a huge impression on me.

snarfy

For a long time I had an idea I thought was great - a lock free garbage collector that could be run on its own core to achieve zero latency. I've got 32 cores busy doing nothing and thought I could throw hardware at the problem and get garbage collection for free.

A bunch of people much smarter than me all chimed in on how this was a terrible idea. They argued it would destroy the cache and result in worse performance than a traditional garbage collector. Locality matters, apparently. I still sometimes think of this idea but it doesn't fit very well into the actual hardware.

hinkley

Interprocess cache invalidation is one of the bigger devils in the details. It's one part of the memory-model problem that the Java Memory Model tries to solve and that a number of other languages later copied.

There are concurrency libraries that allocate 8x as much memory as they need just to guarantee that two threads don’t share a cache line.
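
In C the trick looks roughly like this (a sketch assuming 64-byte cache lines; the actual line size and the right amount of padding are architecture-dependent):

    #include <stdalign.h>
    #include <stdatomic.h>

    /* Each counter gets a cache line to itself, so two threads bumping
       different counters never invalidate each other's line (no false
       sharing). Padding an 8-byte counter out to 64 bytes is the kind of
       8x memory blow-up mentioned above. */
    typedef struct {
      alignas(64) atomic_long value;
      char pad[64 - sizeof(atomic_long)];
    } PaddedCounter;

    static PaddedCounter perThreadCounters[32];   /* one per core/thread */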

pebal

Here is a working implementation: https://github.com/pebal/sgcl

iainctduncan

This motivated me to crack open Crafting Interpreters, which has been sitting on my shelf for a couple of months.

It is truly a wonderful book and a much-needed (IMHO) addition to the literature. I have a bunch of the classic language books, and this one is way easier and more fun to read! If you've been on the fence about ordering it, do it!

dietr1ch

Nice read. BTW, is there any reason to implement sweep with an Object** instead of just an Object*?

edit: Like this, https://gist.github.com/Dietr1ch/a636578fe9f06faa5d46e1b78aa...

edflsafoiewq

When an object O is deleted, it needs to be removed from the linked list by patching the pointer that led to O so it points to O->next instead. There are two cases: for the first object in the list, you patch vm->firstObject; for the rest, you patch prevObject->next. You can unify these cases by storing a pointer to the pointer you need to patch, i.e. a pointer to the pointer that led to O (see the sketch after the link below).

See also https://stackoverflow.com/questions/12914917/using-pointers-....
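
In code, that unification looks something like this; a sketch assuming the post's marked flag, intrusive next link, and a VM holding firstObject (close in spirit to the post's sweep, though details may differ):

    #include <stdlib.h>

    typedef struct sObject {
      unsigned char marked;
      struct sObject* next;   /* intrusive list of every allocated object */
      /* ... value fields omitted ... */
    } Object;

    typedef struct {
      Object* firstObject;
    } VM;

    void sweep(VM* vm) {
      /* 'object' always holds the address of the pointer that led here:
         &vm->firstObject at the start, &previous->next after that. */
      Object** object = &vm->firstObject;
      while (*object != NULL) {
        if (!(*object)->marked) {
          /* Unreached: splice it out by patching whichever pointer led to it. */
          Object* unreached = *object;
          *object = unreached->next;
          free(unreached);
        } else {
          /* Reached: clear the mark for the next cycle and move on. */
          (*object)->marked = 0;
          object = &(*object)->next;
        }
      }
    }

With a plain Object* you would need a separate "previous" pointer plus a special case for the head of the list; the Object** folds both into the same code path.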

hinkley

I want to know if Peter Norvig has written a First GC. I’m still just blown away by his sudoku solver. I thought I was good at paring accidental complexity from code…

I also want to know if Norvig has a biographer, and if not, why not?

Jagerbizzle

I've just read the blog post about his late dog and boy was that a tear-jerker.

tomxor

The author is quite good at writing; some of his blog posts are my favourites: the content, the tone, the style, the language.