kgeist

In my hobby project, I started always passing an allocator argument to every function or object which requires allocation (inspired by Zig) and I love it so far. Often I can just pass a bump-pointer allocator or a stack-based allocator and not care about deallocation of individual objects. I also wrote a simple unit testing framework to test out-of-memory conditions, because it's easy to do when you're in control of allocators. Basically, I inject an allocator which counts how many allocations are done when a unit test runs, and then the unit tests are rerun, injecting an OOM at every known allocation point. A lot of bugs and crashes happen when an OOM is encountered, because such paths are rarely tested. The idea of my pet project is a very resilient HTTP server with request-scoped allocations and recovery from OOM without crashes.
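The comment doesn't include code, but the counting-then-failing scheme can be sketched in C (the `TestAlloc` type and the `make_pair` unit under test are hypothetical, purely for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Counting allocator: pass 1 counts allocations made by a test; pass 2
 * reruns the test failing the Nth allocation to simulate OOM there. */
typedef struct {
    int count;    /* allocations seen so far           */
    int fail_at;  /* index of allocation to fail (-1 = never) */
} TestAlloc;

static void *test_malloc(TestAlloc *a, size_t size) {
    if (a->count++ == a->fail_at)
        return NULL;          /* injected OOM */
    return malloc(size);
}

/* Example unit under test: must report failure (and leak nothing)
 * when either allocation fails. */
static int make_pair(TestAlloc *a, void **x, void **y) {
    *x = test_malloc(a, 16);
    if (!*x) return 0;
    *y = test_malloc(a, 16);
    if (!*y) { free(*x); return 0; }
    return 1;
}
```

Running the test once with `fail_at = -1` records the allocation count; rerunning it with `fail_at` set to each index in turn exercises every OOM path.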

habibur

Adding that on many Linux distributions, when the system faces OOM, a killer daemon steps in and might kill your service even if you were handling the situation properly.

amalcon

Interestingly (confusingly), Linux's OOM killer is invoked for a different notion of OOM than a null return from malloc / bad_alloc exception. On a 64-bit machine, the latter will pretty much only ever happen if you set a vsize ulimit or you pass an absurd size into malloc. The OOM killer is the only response when you actually run out of memory.

If you want to avoid your program triggering the OOM killer all on its own, you need to set a vsize limit such that you'll get an application-level error before actually exhausting memory. Even that isn't completely foolproof (obviously anyone with a shell can allocate a large amount of RAM), but in practice -- if your program is the only significant thing on the system -- you can get it to be very reliable this way.

Add in some cgroup settings and you should be able to keep your program from being OOM killed at all, though that step is a bit more complex.
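A minimal sketch of the vsize cap described above, using POSIX `setrlimit` (the `cap_vsize` helper name is illustrative, not from the comment; behavior assumes a 64-bit Linux system where `RLIMIT_AS` is enforced):

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/resource.h>

/* Cap this process's address space so malloc returns NULL well
 * before the system actually runs out and the OOM killer wakes up. */
static int cap_vsize(rlim_t bytes) {
    struct rlimit rl = { bytes, bytes };
    return setrlimit(RLIMIT_AS, &rl);   /* 0 on success */
}
```

With the cap in place, an oversized allocation fails at the application level (a NULL return) instead of succeeding via overcommit.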

gpderetta

I wonder if it is possible to avoid OOM by making sure that all allocations are done from a named (on disk, not shm) memory file. That way, in principle, it is always possible to swap to disk and never overcommit.

I guess in practice the kernel might be in such dire straits that it is not able to even swap to disk and might need to kill indiscriminately.

kgeist

The idea is that the server must have a known allocation budget, similar to Java's max heap size. There's a tree of allocators, i.e. a temporarily created arena allocator needs initial memory for its arena, so it can grab it from the root allocator. And the root allocator ultimately must be fixed-size and deterministic. Sure if there are other processes in the system allocating without concern for other apps, then the OOM killer can kill the server. But if there's no such process, I think it should be pretty stable.

AndyKelley

You can disable the OOM killer on your server OS:

https://www.kernel.org/doc/Documentation/vm/overcommit-accou...

olodus

Oh wow that is a really interesting test solution. That would be an interesting thing to add to all zig tests (I know they already have the testing allocator and good valgrind support but I don't think that tests/simulates oom).

I love things like these that use existing tests and expand them to test further things in already-covered flows. We have done similar things at my work, where we test expansion of data models against old models to check that we cover upgrade scenarios.

squeek502

There's support for exactly this type of OOM testing in Zig via std.testing.checkAllAllocationFailures:

- https://github.com/ziglang/zig/blob/1606717b5fed83ee64ba1a91...

- https://www.ryanliptak.com/blog/zig-intro-to-check-all-alloc...

astrange

This is a clear sign of a badly designed language. You should never see a fixed-size (less than page size) allocation fail, simply because there's nothing you can reasonably do if it does fail. Either you should crash or it should block until it is possible again.

(Where crash means a worker process or something limited to something less than the entire system. See Erlang for the logical extension of this.)

I realize this implies Windows and Java are badly designed and my answer to that is "yes".

olodus

Oh cool, didn't know that. Thanks.

judofyr

I've been using this helper: https://github.com/judofyr/zini/blob/ea91f645b7dc061adcedc91.... It starts by making the first allocation fail, then the second, then the third, and so on. As long as it returns OutOfMemory (without leaking memory) then everything is fine.

corysama

A very old trick for running Lua in your PlayStation 2 game (where the PS2 is a machine with 32MB of RAM and no memory paging) is to hook Lua’s realloc function into the venerable Doug Lea’s Malloc (https://gee.cs.oswego.edu/dl/html/malloc.html) set up to run in arena mode (ONLY_MSPACES? It’s been a decade or two…). That way Lua can fragment the arena all it wants without making a mess of the rest of the tiny address space.
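dlmalloc's mspaces aren't reproduced here, but the hook itself has the shape of Lua's `lua_Alloc` callback. As a self-contained stand-in, this sketch delegates to `realloc` while enforcing a fixed budget (the `LuaHeap` bookkeeping is an assumption for illustration; the real PS2 trick would call `mspace_realloc`/`mspace_free` on a dedicated mspace instead):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct { size_t used, budget; } LuaHeap;

/* Same signature as lua_Alloc: nsize == 0 means free; otherwise
 * (re)allocate. Returning NULL reports OOM back to Lua. */
static void *l_alloc(void *ud, void *ptr, size_t osize, size_t nsize) {
    LuaHeap *h = ud;
    /* When ptr is NULL, Lua (5.2+) passes a type tag in osize, not a size. */
    size_t old = ptr ? osize : 0;
    if (nsize == 0) {                       /* free */
        free(ptr);
        h->used -= old;
        return NULL;
    }
    if (h->used - old + nsize > h->budget)
        return NULL;                        /* over budget: OOM */
    void *p = realloc(ptr, nsize);
    if (p) h->used = h->used - old + nsize;
    return p;
}
```

The state would be created with `lua_newstate(l_alloc, &heap)`, confining all of Lua's churn to one accounted region.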

phire

> (where the PS2 is a machine with 32MB of RAM and no memory paging)

The PS2 hardware does have full support for memory paging (at least on the main cpu core). PS2 Linux makes full use of it.

But the default TLB configuration from the BIOS is just a single static 31MB page (the other 1MB is reserved for the BIOS) and the SDK doesn't provide any tooling for dynamic pages.

And this is MIPS, so it's a software managed TLB with 48 entries. I wouldn't be surprised if some games did have dynamic paging, but they would need to provide their own TLB exception handler.

varispeed

I recently needed to write a memory allocator and, being lazy, asked ChatGPT for help. Interestingly, it came up with an implementation eerily similar to what is described in that document. Nonetheless, everything worked like a charm from the start.

SoftTalker

Writing a memory allocator is a standard homework assignment in any CS program. The solutions are mostly similar, and ChatGPT has probably learned hundreds of examples.

hgs3

There is also the double-sided arena allocator, which uses one contiguous buffer but grows in both directions (front-to-back and back-to-front). When allocating memory from it you need to indicate whether you want memory from the front or back. The allocator is out of memory when the front and back meet.

The double-sided approach is useful for various purposes. For instance you can allocate short-lived data from the front and long-lived data from the back. It also makes better use of free space: with two separate arena allocators one could be out-of-memory but the other might have free space. With the double-sided approach all memory is fair game.
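A minimal sketch of such a double-sided arena (names and layout are illustrative; alignment handling is omitted for brevity):

```c
#include <assert.h>
#include <stddef.h>

/* One buffer, two cursors: front bumps up, back bumps down.
 * Out of memory exactly when the cursors would cross. */
typedef struct {
    char  *buf;
    size_t front;   /* next free byte from the front            */
    size_t back;    /* one past the last free byte from the back */
} Arena2;

static void *alloc_front(Arena2 *a, size_t size) {
    if (size > a->back - a->front) return NULL;  /* cursors would meet */
    void *p = a->buf + a->front;
    a->front += size;
    return p;
}

static void *alloc_back(Arena2 *a, size_t size) {
    if (size > a->back - a->front) return NULL;
    a->back -= size;
    return a->buf + a->back;
}
```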

sakras

Fantastic article! I have a project with a similar arena allocator, so I'll definitely be taking some of these tricks. One thing my allocator does do is organize arenas into a linked list so that you can grow your size dynamically. However I really like the article's point that you're always going to be living within _some_ memory budget, so you might as well allocate everything up front into a giant arena, and then divide the giant arena up into smaller arenas.

Also I've heard that you can save an instruction when checking if your allocator is full by subtracting from the top, and checking the zero flag. It seems to complicate alignment logic. Does that ever end up mattering?

saagarjha

> However I really like the article's point that you're always going to be living within _some_ memory budget, so you might as well allocate everything up front into a giant arena, and then divide the giant arena up into smaller arenas.

That depends. If you’re running on e.g. a video game console where you’re the sole user of a block of pretty much all memory, go ahead. On a system with other things running, you generally don’t want to assume you can just take some amount of memory, even if it’s “just the free memory”, or even “I probably won’t use it so it will be overcommitted”. Changing system conditions and other system pressure are outside of your control and your reservation may prevent the system from doing its job effectively and prioritizing your application appropriately.

galangalalgol

Yeah, profiling is your friend. I forget if it's called a sharded slab or a buddy allocator, but the one where you have different preallocated buffers chunked at different sizes. Any time you allocate you are given the smallest chunk that will hold what you asked for. Profiling gives you optimal size boundaries as well as the number of each. Add a safety margin and off you go. Super fast allocation and guaranteed no fragmentation. In a c++ codebase overloading std::new to do this is probably the easiest way to get your allocation performance back and avoid fragmentation.

maccard

> If you’re running on e.g. a video game console where you’re the sole user of a block of pretty much all memory

Games consoles haven't been that for a long time. PS5 and XSS are full blown multi-user multi-application systems. PS4 and Xbox One were multi user systems with reserved blocks for the OS, but still very close to a modern OS.

ithkuil

I would argue bumping down makes it even easier to reason about alignment.

Anyways, you can find a full article about up vs down at https://fitzgeraldnick.com/2019/11/01/always-bump-downwards....
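The core of the bump-down trick from that article: rounding the cursor down to an alignment boundary is a single mask, with no separate padding computation (this sketch assumes `base` is itself maximally aligned and `align` is a power of two):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Downward-bumping arena: the offset of the lowest allocated byte
 * moves toward zero; aligning is just clearing low bits. */
typedef struct {
    char  *base;
    size_t off;
} DownArena;

static void *alloc_down(DownArena *a, size_t size, size_t align) {
    if (size > a->off) return NULL;               /* out of space */
    a->off = (a->off - size) & ~(align - 1);      /* bump and align */
    return a->base + a->off;
}
```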

wongarsu

> so you might as well allocate everything up front into a giant arena, and then divide the giant arena up into smaller arenas

However, if you do this, note how the article hints at this strategy needing a bit more code on Windows: Windows doesn't do overcommit by default. If you do one big malloc, Windows will grow the page file to ensure it can page that much memory in if you start writing to it. That's fine if you allocate a couple of megabytes, but if your arena is gigabytes in size, you want to call VirtualAlloc with MEM_RESERVE to get a big contiguous memory area, then call VirtualAlloc with MEM_COMMIT as needed on the chunks you actually want to use.

Cloudef

Zig has arena allocator in the standard library https://github.com/ziglang/zig/blob/master/lib/std/heap/aren...

fanf2

So does glibc, where they are called obstacks https://www.gnu.org/software/libc/manual/html_node/Obstacks....

matheusmoreira

Excellent article.

> While you could make a big, global char[] array to back your arena, it’s technically not permitted (strict aliasing).

Aren't char pointers/arrays allowed to alias everything?

I used that technique in my programming language and its allocator. It's freestanding so I couldn't use malloc. I had to get memory from somewhere so I just statically allocated a big array of bytes. It worked perfectly but I do disable strict aliasing as a matter of course since in systems programming there's aliasing everywhere.

gpderetta

char can alias everything, i.e. you can dereference a char pointer with impunity, even if the actual dynamic type[1] of an object is a different type. The reverse is not true: if the dynamic type of an object is char, just by using the alias rules, you can't dereference it as an object of a different type.

In C++ you can just use placement new to change the dynamic type of (part of) a char array (but beware of pointer provenance). In C it is more complex: I don't claim to understand this fully, but my understanding is that you can't change the type of a named object, but you should be able to change the type of anonymous memory (for example, what is allocated with malloc) by simply writing into it.

In practice at least GCC considers the full alias rules unimplementable and my understanding is that it uses a conservative model where every store can change the type of an object, and uses purely structural equivalence.

[1] of course most C implementations don't actually track dynamic types at runtime.
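For the common case of moving typed values in and out of raw char storage, `memcpy` sidesteps the aliasing rules entirely, and compilers optimize the fixed-size copies away (a small illustrative helper):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read a u32 out of raw char storage without an aliasing violation:
 * no typed pointer into the buffer is ever dereferenced. */
static uint32_t read_u32(const char *raw) {
    uint32_t v;
    memcpy(&v, raw, sizeof v);
    return v;
}
```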

matheusmoreira

> The reverse is not true: if the dynamic type of an object is char, just by using the alias rules, you can't dereference it as an object of a different type.

A limitation like that simply makes no sense to me. Everything is a valid char array but I can't place structs on top of one? Oh well, nothing I can do about it. I'll just keep strict aliasing disabled. If we're writing C, it's because we want to do stuff like that without the compiler getting clever about it.

> you should be able to change the type of anonymous memory (for example, what is allocated with malloc) by simply writing into it

Well, in my case, I'm the one writing the malloc and the buffer is the anonymous memory. I remember months ago I scoured the GCC documentation for some kind of builtin that would allow me to mark the memory as such but there was nothing. I did add some malloc attributes to my allocation function just like TFA suggested but apparently its main purpose is to optimize based on aliasing nonsense which I disabled anyway.

gpderetta

At some point you need to get the memory for your pool from somewhere, for example from malloc [1], hence you can safely set the type of the raw memory by writing into it. You can also change that type to some other type by writing other stuff (so you can reuse the memory). You can also write metadata to it between uses. What you cannot do is have overlapping lifetimes for different types.

Making sure that you respect all the underspecified, obscure, and often contradictory rules is not easy, so if you prefer to disable strict aliasing, you have my sympathy. For the most part it is useful for high-performance numerical code, and less advantageous for typical pointer-chasing stuff.

From the practical point of view, the safest way to implement a custom allocator is to make sure that the compiler can't see through it, so separate compilation and no LTO and/or launder your pointers through appropriate inline asm.

[1] but other 'anonymous' sources, like mmap, would also work in practice.

LoganDark

> Everything is a valid char array but I can't place structs on top of one? Oh well, nothing I can do about it. I'll just keep strict aliasing disabled.

Well, yeah. Strict aliasing is less about the incidental values of memory addresses and more about the actual semantics of what you're doing. Writing a struct into the middle of a char array makes no sense because you have no guarantee in the type system that the array is properly sized or aligned to contain that struct.

variadix

I feel like there should be a builtin that takes a void* and returns a void* which basically marks the input as aliased by the returned pointer regardless of the TBAA rules.

dellorter

If you overlay a struct on your (char) buffer and dereference a pointer to it, you would be accessing something with a different type than its declared type (char * accessed as struct something *); it's a strict aliasing violation.

To stay within the rules, you could set up a void* to a suitable region in a linker script.

matheusmoreira

Welp. Linus Torvalds was right about this stuff.

https://lwn.net/Articles/316126/

https://lkml.org/lkml/2003/2/26/158

saagarjha

Wait until someone tells Linus about how processors do reordering.

(Yes, I know he understands it. Clearly he just refuses to accept that compilers can also reorder his code and he needs to accommodate that.)

bajsejohannes

> Typically arena lifetime is the whole program

Some other good use cases for arenas are rendering a frame in a video game and handling an HTTP request. The memory is contained within that context and is short-lived.

gpderetta

That's the lifetime of the objects in the arena, but wouldn't you recycle the arena itself across frames?

For requests, it might make sense to have low and high water marks, so that additional arenas are created during request peaks and destroyed afterwards if you want to limit the long-term memory usage of your application.

flohofwoe

Typically you would have different arenas for different lifetimes ('per frame', 'per level', 'per game session' - or maybe even more specialized, like the duration of an IO operation), and 'reset' the arenas at the end of their respective lifetimes (which is basically just setting the current 'top offset' to zero). This sort-of expects that object destruction is a no-op (e.g. no destructors need to be called).

The general idea being that you don't need to track granular 'per-object lifetimes', but only a handful of 'arena lifetimes', and all objects share the lifetime of the arena they've been allocated in.

Of course it's also possible to manually call a destructor on an object in the arena without recycling its memory, but I heavily prefer using plain POD structs without owning pointers, which can be safely 'abandoned' at any time without leaking memory.
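The reset-at-end-of-lifetime lifecycle described above, as a minimal C sketch (names are illustrative; no destructors run, so contents are assumed to be plain POD):

```c
#include <assert.h>
#include <stddef.h>

/* Per-lifetime arena: allocate by bumping, and at the end of the
 * lifetime (frame, level, session) reset by zeroing the offset. */
typedef struct {
    char  *buf;
    size_t cap, off;
} Arena;

static void *arena_alloc(Arena *a, size_t size) {
    if (size > a->cap - a->off) return NULL;
    void *p = a->buf + a->off;
    a->off += size;
    return p;
}

/* 'Frees' every object at once; the memory is reused next lifetime. */
static void arena_reset(Arena *a) { a->off = 0; }
```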

bjourne

I had never heard the term "arena allocation" before. I always thought it was called "bump-pointer allocation", since all you do is add to (bump) a pointer. One useful trick when designing a bump allocator is to allocate an extra word (8 bytes on 64-bit) to store an object header in. For example, if you store each object's size in the header, you can iterate over all allocated objects, and you can often also reallocate objects in the middle of the arena without having to free every object.
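A sketch of that header trick: a word-sized size field before each object lets you walk the arena object by object (names are illustrative; sizes are assumed to be multiples of the word size so the header casts stay aligned):

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    char  *buf;
    size_t cap, off;
} HdrArena;

/* Each object is prefixed with a size_t header recording its size. */
static void *hdr_alloc(HdrArena *a, size_t size) {
    size_t total = sizeof(size_t) + size;
    if (total > a->cap - a->off) return NULL;
    char *p = a->buf + a->off;
    *(size_t *)p = size;           /* header: object size */
    a->off += total;
    return p + sizeof(size_t);     /* payload follows the header */
}

/* Visit every allocated object by hopping from header to header. */
static size_t hdr_count(const HdrArena *a) {
    size_t n = 0;
    for (size_t off = 0; off < a->off;
         off += sizeof(size_t) + *(const size_t *)(a->buf + off))
        n++;
    return n;
}
```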

eigenspace

I really like arena / bump allocators, they're really useful and powerful tools. I've been playing around lately with a Julia package that makes it relatively easy and safe to manage an arena https://github.com/MasonProtter/Bumper.jl

The thread-safety and dynamic extent is something I'm particularly pleased about.

saagarjha

> Unsigned sizes are another historically common source of defects, and offer no practical advantages in return. Case in point exercise for the reader: Change each ptrdiff_t to size_t in alloc, find the defect that results, then fix it.

I know that it’s a different “kind” of defect, but none of the code has overflow checks even with ptrdiff_t…

patrec

Why would it need overflow checks when subtracting two valid pointers?

saagarjha

The overflow is in size * count
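The standard guard for that product: dividing instead of multiplying checks capacity and overflow in one comparison (illustrative helper, using signed sizes as the article prefers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* For an alloc(arena, size, count) API, size * count can overflow
 * before any capacity check runs. Since available <= PTRDIFF_MAX,
 * count > available / size rejects both overflow and overrun. */
static int alloc_size_ok(ptrdiff_t size, ptrdiff_t count,
                         ptrdiff_t available) {
    if (size < 1 || count < 0) return 0;
    if (count > available / size) return 0;  /* size*count too large */
    return 1;
}
```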

patrec

Of course -- thanks!

yelnatz

Didn't know these were called Arenas, this technique is prevalent in game development.

TickleSteve

also called a "bump" allocator... because all it does is bump a pointer.

nice to use when you have a predictable order of execution, where you are guaranteed to always come back to a known position at which you can free the entire heap/arena at once (i.e. a typical main message-handling loop).

yxhuvud

You can however use bump allocation for things that are not arenas. There are some GC allocators that use the technique.

dgb23

The JVM GC has a generational model.

It first allocates objects into an arena-like structure. In a second step, it moves (evacuates) long-lived objects into a compact region. The first region is then deallocated all at once.

Roughly speaking this leans on a heuristic that most objects are short lived. So it has arena like characteristics, but is of course managed/dynamic.

This might be one reason why managed languages like Java/C# get such good out-of-the-box performance. You really need insight into your program and how it executes to beat this.

naasking

Also called "regions".

usrnm

> A minority of programs inherently require general purpose allocation, at least in part, that linear allocation cannot fulfill. This includes, for example, most programming language runtimes

Interesting definition of "minority"

vkazanov

As a share of the total number of programs written, this is a minority indeed. How many programming languages are out there vs. the number of libraries/apps?

The number of deploys is a different thing.

usrnm

Almost every program written in some programming language depends on the runtime provided by that programming language. Very few programming languages even let you opt out of using the runtime. Which means that if the runtime needs something, then your program also needs it. Complexity doesn't magically go away when you put it into a library.

dkersten

It’s not so much about using the code as it is about writing the code (or at least, providing utilities to the code being written). Yes every program uses the runtime, but very few people write the runtime. That is, perhaps the default provided to end users should be arena allocators, keeping a general malloc for special cases.

Arena allocator tips and tricks - Hacker News