Meta’s renewed commitment to jemalloc - Hacker News

adsharma

> We plan to deliver improvements to [..] purging mechanisms

During my time at Facebook, I maintained a bunch of kernel patches to improve jemalloc purging mechanisms. It wasn't popular with the kernel or security communities, but it was certainly more efficient on benchmarks.

Many programs run multiple threads, allocating in one and freeing in another. jemalloc's primary mechanism used to be: madvise the page back to the kernel and then have the kernel hand it out again into another thread's pool.

One problem: this involves zeroing memory, which has an impact on cache locality and overall app performance. It's completely unnecessary if the page is being recirculated within the same security domain.
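
A minimal sketch of the cost being described, assuming an anonymous private mapping on Linux: after madvise(MADV_DONTNEED), the next touch of the page faults in fresh zero-filled memory, so recirculating a page through the kernel implies a zeroing pass.

  #include <assert.h>
  #include <stddef.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void) {
      size_t len = 4096;
      char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      memset(p, 0xAB, len);            /* dirty the page */
      madvise(p, len, MADV_DONTNEED);  /* "purge": hand the page back */
      assert(p[0] == 0);               /* next touch sees a zeroed page */
      munmap(p, len);
      return 0;
  }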

The problem was getting everyone to agree on what that security domain is, even if the mechanism was opt-in.

https://marc.info/?l=linux-kernel&m=132691299630179&w=2

jcalvinowens

I'm really surprised to see you still hawking this.

We did extensive benchmarking of HHVM with and without your patches, and they made no statistically significant difference in high-level metrics. So we dropped them from the kernel, and they never went back in.

I don't doubt for a second you can come up with specific counterexamples and microbenchmarks which show a benefit. But you were unable to show an advantage at the system level when challenged on it, and that's what matters.

adsharma

You probably weren't there when servers were running for many days at a time.

By the time you joined and benchmarked these systems, continuous rolling deployment had taken over. If you're restarting the server every few hours, of course memory fragmentation isn't much of an issue.

> But you were unable to show an advantage at the system level when challenged on it, and that's what matters.

You mean 5 years after I stopped working on the kernel, and after the underlying system had changed?

I don't recall ever talking to you on the matter.

jcalvinowens

> By the time you joined and benchmarked these systems, the continuous rolling deployment had taken over

Nope, I started in 2014.

> I don't recall ever talking to you on the matter.

I recall. You refused to believe the benchmark results and made me repeat the test, then stopped replying after I did :)

vardump

I wouldn't be surprised if both 'adsharma' and 'jcalvinowens' were right, just at different points in time, perhaps in slightly different contexts. Things change.

google234123

I like your clocks!

asveikau

Maybe I'm misreading, but considering it OK to leak memory contents across a process boundary because it's within a cgroup sounds wild.

adsharma

It wasn't just any cgroup. If you put two untrusting processes in a memory cgroup, there is a lot that can go wrong.

If you don't like the idea of memory cgroups as a security domain, you could tighten it to be a process. But kernel developers have been opposed to tracking pages on a per-address-space basis for a long time. On the other hand, memory cgroup tracking happens by construction.

asveikau

> across a process boundary

> within a cgroup

Note the complementary language there. You seem to have interpreted it as my saying that it didn't matter which cgroup they are in, which is an odd thing to claim I implied. I meant within the same cgroup, obviously.

Yes, you can read memory out of another process through other means, but you shouldn't be able to map pages, read them, and see what happened in another process. That's the wild part. It strikes me as asking for problems.

I was unaware of MAP_UNINITIALIZED, support for which was disabled by default, and for good reason. It seems it has since been removed.

genxy

What metrics were improved by your patches?

adsharma

Some more historical context: it wasn't a random optimization idea that I thought about in the shower and implemented the next day. It built on previous work on company-wide profiling, where my contribution was the low-level perf_events plumbing:

https://research.google/pubs/google-wide-profiling-a-continu...

https://engineering.fb.com/2025/01/21/production-engineering...

The profiling clearly showed kernel memzero functions at the top of the profiles, which motivated the change. The performance impact (A/B testing, measuring throughput) also showed a benefit at the point the change was committed.

This was when "facebook" was a ~1GB ELF binary. https://en.wikipedia.org/wiki/HipHop_for_PHP

The change stopped being impactful sometime after 2013, when a JIT replaced the transpiler; I'm guessing likely before 2016, when continuous deployment came into play. But that was continuously deploying PHP code, not HHVM itself.

By the time the patches were reevaluated I was working on a Graph Database, which sounded a lot more interesting than going back to my old job function and defending a patch that may or may not be relevant.

I'm still working on one. Guilty as charged of carrying ideas in my head for 10+ years and acting on them later. Link in my profile.

genxy

This kind of thing always struck me as something the MMU and the memory controller could team up on. When you give memory back, you could simply not refresh it for some cycles. Or you could DMA the same page of zeros over all of it, so the CPU isn't involved in the menial labor.

bmenrigh

I recently started using Microsoft's mimalloc (via an LD_PRELOAD) to better use huge (1 GB) pages in a memory intensive program. The performance gains are significant (around 20%). It feels rather strange using an open source MS library for performance on my Linux system.

There needs to be more competition in the malloc space. Between the various huge page sizes and transparent huge pages, there are a lot of gains to be had over what you get from the default GNU libc malloc.

skavi

We evaluated a few allocators for some of our Linux apps and found (modern) tcmalloc to consistently win in time and space. Our applications are primarily written in Rust and the allocators were linked in statically (except for glibc). Unfortunately I didn't capture much context on the allocation patterns. I think in general the apps allocate and deallocate at a higher rate than most Rust apps (or more than I'd like at least).

Our results from July 2025:

rows are <allocator>: <RSS>, <time spent for allocator operations>

  app1:
  glibc: 215,580 KB, 133 ms
  mimalloc 2.1.7: 144,092 KB, 91 ms
  mimalloc 2.2.4: 173,240 KB, 280 ms
  tcmalloc: 138,496 KB, 96 ms
  jemalloc: 147,408 KB, 92 ms

  app2, bench1
  glibc: 1,165,000 KB, 1.4 s
  mimalloc 2.1.7: 1,072,000 KB, 5.1 s
  mimalloc 2.2.4:
  tcmalloc: 1,023,000 KB, 530 ms

  app2, bench2
  glibc: 1,190,224 KB, 1.5 s
  mimalloc 2.1.7: 1,128,328 KB, 5.3 s
  mimalloc 2.2.4: 1,657,600 KB, 3.7 s
  tcmalloc: 1,045,968 KB, 640 ms
  jemalloc: 1,210,000 KB, 1.1 s

  app3
  glibc: 284,616 KB, 440 ms
  mimalloc 2.1.7: 246,216 KB, 250 ms
  mimalloc 2.2.4: 325,184 KB, 290 ms
  tcmalloc: 178,688 KB, 200 ms
  jemalloc: 264,688 KB, 230 ms
tcmalloc was from github.com/google/tcmalloc/tree/24b3f29.

I don't recall which jemalloc was tested.

hedora

I’m surprised (unless they replaced the core tcmalloc algorithm but kept the name).

tcmalloc (thread caching malloc) assumes memory allocations have good thread locality. This is often a double win (less false sharing of cache lines, and most allocations hit thread-local data structures in the allocator).

Multithreaded async systems destroy that locality, so it constantly has to run through the exception case: A allocated a buffer, went async, the request wakes up on thread B, which frees the buffer and has to synchronize with A to give it back.
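
A minimal sketch of that exception case with plain malloc/free (not tcmalloc-specific): the buffer is allocated on one thread and freed on another, so a thread-caching allocator has to route the free back across threads.

  #include <pthread.h>
  #include <stdlib.h>

  static void *freer(void *buf) {
      /* freed on thread B: a thread-caching allocator must synchronize
         to return this memory to A's cache or to a shared pool */
      free(buf);
      return NULL;
  }

  int main(void) {
      void *buf = malloc(4096);  /* allocated on thread A (main) */
      pthread_t t;
      pthread_create(&t, NULL, freer, buf);
      pthread_join(t, NULL);
      return 0;
  }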

Are you using async rust, or sync rust?

skavi

Modern tcmalloc uses per-CPU caches via rseq [0]. We use async Rust with multithreaded tokio executors (sometimes multiple in the same application), so relatively high thread counts.

[0]: https://github.com/google/tcmalloc/blob/master/docs/design.m...

jhalstead

> I’m surprised (unless they replaced the core tcmalloc algorithm but kept the name).

Indeed, it's not the old gperftools version.

Blog: https://abseil.io/blog/20200212-tcmalloc

History / Diffs: https://google.github.io/tcmalloc/gperftools.html

skavi

also:

1. tcmalloc is actually the only allocator I tested which was not using thread-local caches. Even glibc malloc has tcache.

2. Async executors typically shouldn't have tasks jumping willy-nilly between threads. I see the issue you describe more often with the use of thread pools (like rayon or tokio's spawn_blocking). I'd argue that thread pools aren't necessarily an inherent feature of async executors. Certainly tokio relies on its thread pool for fs operations, but io_uring (for example) makes that mostly unnecessary.

ComputerGuru

That’s a considerable regression for mimalloc between 2.1 and 2.2 – did you track it down or report it upstream?

Edit: I see mimalloc v3 is out – I missed that! That probably moots this discussion altogether.

skavi

nope.

codexon

This is similar to what I experienced when I tested mimalloc many years ago. If it was faster, it wasn't faster by much, and it had pretty bad worst cases.

pjmlp

If you go through the Dr. Dobb's, C/C++ Users Journal, and BYTE digital archives, you'll find ads from companies whose product was basically a special-cased memory allocator.

Even toolchains like Turbo Pascal for MS-DOS had an API to customise the memory allocator.

One size fits all was never a solution.

adgjlsfhk1

One of the best parts about GC languages is that they tend to have much more efficient allocation/freeing, because the cost is much more lumped together and so shows up more clearly in a profile.

pjmlp

Agreed. However, there is also a reason the best ones pack multiple GC algorithms, as in Java and .NET: one approach doesn't fit all workloads.

nevdka

Then there’s Perl, which doesn’t free at all.

CyberDildonics

Any extra throughput is far overshadowed by trying to control pauses, and by the sheer volume of heap allocations that happen because too much gets put on the heap. For anything interactive, the options are usually fighting the GC or avoiding the GC.

bluGill

When it works. Many programs in GC languages end up fighting the GC by allocating a large buffer and managing it by hand anyway, because when performance counts you can't have allocation time in there at all. (You see this in C all the time as well.)

cogman10

That's generally a bad idea. Not always, but generally.

It was a better idea when Java had the old mark-and-sweep collector. However, with the generational collectors (which are all Java collectors now, except for Epsilon), it's more problematic. Reusing buffers, and objects in those buffers, pretty much guarantees that the buffer ends up in oldgen. That means that to clear it out, the VM has to do more expensive collections.

The actual allocation cost for most of Java's collectors is almost zero: it's a capacity check and a pointer bump in most circumstances. Giving the JVM more memory will generally solve issues with memory pressure and GC times, and that's (generally) a better solution to performance problems than the large buffer.
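
As a minimal sketch of that fast path in C (a simplified, hypothetical TLAB-style bump allocator, not the JVM's actual code):

  #include <stddef.h>
  #include <stdint.h>

  typedef struct { uint8_t *top, *end; } bump_arena;

  static void *bump_alloc(bump_arena *a, size_t size) {
      size = (size + 7) & ~(size_t)7;        /* 8-byte alignment */
      if ((size_t)(a->end - a->top) < size)  /* capacity check */
          return NULL;                       /* slow path: refill arena */
      void *p = a->top;
      a->top += size;                        /* pointer bump */
      return p;
  }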

Now, that said, there certainly have been times where allocation pressure is a major problem and removing the allocation is the solution. In particular, I've found boxing to often be a major cause of performance problems.

m463

I remember, in the early days of web services, using the Apache Portable Runtime, specifically its memory pools.

If you got a web request, you could allocate a memory pool for it and then do all your memory allocations from that pool. And when your web request ended - either cleanly or with a hundred different kinds of errors - you could just free the entire pool.
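
A minimal sketch of that pattern with the APR pool API (handle_request is a hypothetical handler; APR is assumed to be initialized elsewhere):

  #include <apr_pools.h>

  void handle_request(apr_pool_t *parent) {
      apr_pool_t *req_pool;
      apr_pool_create(&req_pool, parent);      /* one pool per request */

      char *buf = apr_palloc(req_pool, 4096);  /* allocate from the pool */
      /* ... build the response in buf ... */

      apr_pool_destroy(req_pool);  /* frees every allocation at once,
                                      on any exit path */
  }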

It was nice and made an impression on me.

I think the lowly malloc probably has lots of interesting ways of growing and changing.

Sesse__

This is called “an arena” more generally, and it is in wide use across many kinds of servers, compilers, and other programs.

jra_samba

Look into talloc, used inside Samba (and other FLOSS projects like sssd). It's exactly this.

pocksuppet

In many cases you can also do better than using malloc. E.g. if you know you need a huge page, map a huge page directly with mmap.

Yes, if you want to use huge pages with arbitrary alloc/free, then use a third-party malloc. If your alloc/free patterns are not arbitrary, you can do even better. We treat malloc as a magic black box, but it's actually not very good.
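
A minimal sketch of mapping a 2 MB huge page directly, assuming hugetlb pages have been reserved (e.g. via /proc/sys/vm/nr_hugepages):

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void) {
      size_t len = 2 * 1024 * 1024;  /* one 2 MB huge page */
      void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
      if (p == MAP_FAILED) {
          perror("mmap(MAP_HUGETLB)");  /* fails if none are reserved */
          return 1;
      }
      /* ... use the page; no malloc involved ... */
      munmap(p, len);
      return 0;
  }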

IshKebab

I feel like the real thing that needs to change is that we need a more expressive allocation interface than just malloc/realloc. I'm sure memory allocators could do a significantly better job if they had more information about what the program was intending to do.

liuliu

There are; look no further than the jemalloc API surface itself:

https://jemalloc.net/jemalloc.3.html

One thing to call out: sdallocx integrates well with C++'s sized delete semantics: https://isocpp.org/files/papers/n3778.html
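
A minimal sketch of the sized-deallocation pairing (assuming you link against jemalloc and include its non-standard header):

  #include <jemalloc/jemalloc.h>
  #include <stddef.h>

  int main(void) {
      size_t sz = 1024;
      void *p = mallocx(sz, 0);  /* allocate via the extended API */
      /* ... use p ... */
      sdallocx(p, sz, 0);        /* free with the known size, so the
                                    allocator can skip the size lookup */
      return 0;
  }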

hedora

You can also play tricks with inlining and constant propagation in C (especially on the malloc path, where the ground-truth allocation size is usually statically known).
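
A minimal sketch of that trick (size_class and my_malloc are hypothetical names): with a compile-time-constant size, inlining lets the compiler fold the size-class computation away.

  #include <stdlib.h>

  static inline size_t size_class(size_t n) {
      size_t c = 16;             /* toy power-of-two size classes */
      while (c < n) c <<= 1;
      return c;
  }

  static inline void *my_malloc(size_t n) {
      /* for constant n, the loop in size_class() constant-folds after
         inlining, leaving a fixed-size call */
      return malloc(size_class(n));
  }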

Dylan16807

I think some operating system improvements could get people a lot more motivated to use huge pages. In particular, make them less fragile on Linux and make them not need admin rights on Windows. The biggest factor causing problems there is that neither OS can swap a 2 MB page. So someone needs to care enough to fix that.

anthk

I used mimalloc to run zenlisp under OpenBSD, as it would clash with the paranoid malloc in base.

dang

Related. Others?

Jemalloc Postmortem - https://news.ycombinator.com/item?id=44264958 - June 2025 (233 comments)

Jemalloc Repositories Are Archived - https://news.ycombinator.com/item?id=44161128 - June 2025 (7 comments)

bfgeek

One has to wonder if this is due to the global memory shortage. ("Oh, changing our memory allocator to be more efficient will yield $XXM in savings over the next year.")

bluGill

Facebook gave talks about this years ago (10+). Nobody was allowed to share real numbers, but several Facebook employees were allowed to say that the company had measured savings from optimizations. Reading between the lines, a 0.1% efficiency improvement to some parts of Facebook would save them on the order of $100,000 a month (again, real numbers were never publicly shared, so there is a range; it can't be less than $20,000), and so they had teams of people whose job it was to find those improvements.

Most of the savings seemed to come from HVAC costs, followed by buying fewer computers and in turn fewer data centers. I'm sure these days saving memory is also a big deal, but it doesn't seem to have been then.

The above was already the case 10 years ago, so LLMs are at most another factor added on.

sethhochberg

I don't have many regrets about having spent my career in (relatively) tiny companies by comparison, but it sure does sound fun to be on the other side for this kind of thing - the scale where micro-optimizations have macro impact.

In startups I've put more effort into squeezing blood from a stone for far less change, even if the change was proportionally more significant to the business. Sometimes it would be neat to say "something I did saved $X million or Y kWh of energy" or whatever.

Anon1096

I've worked on optimizing systems in that ballpark. Memory is worth saving, but it isn't necessarily 1:1 with increasing revenue like CPU is. For CPU we have tables to calculate the infra cost savings (we're not really going to free up the server; it's more that the system is self-balancing, so it can run harder with the freed CPU). But for memory, as long as we can load in whatever we want to (rec systems or AI models), we're in the clear, so the marginal headroom isn't as important. It's more of a side thing that people optimizing CPU also get wins in by chance, because the skill sets are similar.

alex1138

I've heard of some people getting banned from FB to save memory space? Surely that can't be the case, but I swear I've seen something like that.

gzread

There are some people who think they can beat the system by treating apps like Telegram and Discord as free cloud storage, and they certainly get banned to save storage space.

HackerThemAll

> LLMs are at most another factor added on

At most... Think 10x rather than 0.1x or 1x.

runevault

On top of cost, they probably cannot get as much memory as they order in a timely fashion, so offsetting that with greater efficiency matters right now.

loeg

Yeah, identifying single-digit millions of savings out of profiles is relatively common practice at Meta. It's ~easy to come up with a big number when the impact is scaled across a very large number of servers. There is a culture of measuring and documenting these quantified wins.

foobarian

Oooh, maybe it's finally time for lovingly hand-optimized assembly to come back into fashion! (It probably has in AI workloads, or so I daydream.)

Nuzzerino

With that company's reputation, one can imagine a lot of backstories even more depressing than a memory shortage.

augusto-moura

Not just the shortage: any improvement to the memory footprint of LLMs/electricity/servers is becoming much more valuable as time goes on. If you can get 10% faster, you can easily get a lead in the LLM race. The incentives to transparently improve performance are tremendous.

mathisfun123

> changing our memory allocator

they've been using jemalloc (and employing "je", i.e. Jason Evans) since 2009.

apatheticonion

As an Australian who was just made redundant from a role that involved this type of low-level programming: I love working on these kinds of challenges.

I'm saddened that the job market in Australia is largely React CRUD applications, and that it's unlikely I will find a role that lets me leverage my niche skill set (which is also my hobby).

amacneil

I know this isn’t the Who’s Hiring thread, but we are hiring in AU for low-level data processing and have interesting performance challenges.

Link in bio.

maxwindiff

Love your product!

amacneil

Thanks!

ajxs

Speaking as an Australian who works on React CRUD applications because there's nothing else in the market, I've been reading through this thread thinking the exact same thing.

apatheticonion

Google had some positions open working on the ChromeOS kernel, and Microsoft had some positions working on data center network drivers.

I applied for both and got ghosted, haha.

I also saw a government role as a security researcher. It involves reverse engineering, Ghidra, and that sort of thing. Super awesome, but the pay is extremely uncompetitive. Such a shame.

Other than that, the most interesting roles are in finance (like HFT), where you need to juggle memory allocations and threads and use C++ (I'm hoping I can pitch Rust, but it's unlikely).

Sadly, they have a reputation for pretty rough cultures and uncompetitive salaries, and it's all in-office.

Pepe1vo

Not sure if it's the domain you're interested in, but there are quite a few HFT firms with offices in Australia.

The one I know of (IMC Trading) does a lot of low-level stuff like this and is currently hiring.

apatheticonion

I'm actually looking at HFT companies, hoping I find one that allows remote working - but it looks like there are basically no remote roles going at the moment.

apatheticonion

I just tried to apply to IMC; the form on their careers page is broken. Looks like that's the first boss to defeat, haha.

lukeh

I hear you. I actually read this thread because we’re using jemalloc in an embedded product. The only way I found to work on interesting problems here was to work for myself. (Having said that, I think Apple might have some security research in Canberra? Years ago there was LinuxCare there, and a lot of smart people. But that was in 2003…)

profquail

SIG is hiring for some roles in Sydney: https://careers.sig.com/global-experienced

gzread

I have a relative in Australia who was hired by some type of consultancy to work on Samba, but I don't know what work he was doing.

pabs3

Which consultancy was that?

gzread

I don't remember. It was years ago.

joelsiks

Opening up strong with a gigantic merge of the stuff they've been working on in their own fork: https://github.com/jemalloc/jemalloc/pull/2863

jjuliano

I remember being the senior lead software engineer on a World Bank-funded startup project, where we deployed Ruby with jemalloc in prod. There was a huge, noticeable gain in speed and memory efficiency, and it saved us a lot in AWS costs compared to plain Ruby. This was 8 years ago; why haven't more projects adopted it as the de facto default?

kortex

Usually it's a lack of knowledge that such a thing exists, or just plain ol' momentum. Changing something that's long been in production at an established company, even if there is a tangible benefit, can be a real challenge.

RegnisGnaw

Is there a concise timeline/history of this? I thought jemalloc was 100% open source; why is Meta in control of it?

masklinn

Jason Evans (the creator of jemalloc) recounted the entire thing last year: https://jasone.github.io/2025/06/12/jemalloc-postmortem/

vintermann

"Were I to reengage, the first step would be at least hundreds of hours of refactoring to pay off accrued technical debt."

Facebook's coding AIs to the rescue, maybe? I wonder how good all these "agentic" AIs are at dreaded refactoring jobs like these.

xxs

Refactoring doesn't just mean artificial puff-up jobs; it's very likely internal changes and reorganization (hence the hundreds of hours).

There aren't many engineers capable of working on memory allocators, so adding more burden via agentic stuff is unlikely to produce anything of value.

rvz

> Facebook's coding AIs to the rescue, maybe? I wonder how good all these "agentic" AIs are at dreaded refactoring jobs like these.

No.

This is something you shouldn't allow coding agents anywhere near, unless you have the expert-level understanding required to maintain the project, as the previous authors did without an AI for years.

echelon

If you filter the commits to the past five years, four of the top six committers are Meta employees. The other two might be as well; it just doesn't say so on their GitHub / personal websites.

robertlagrant

> With the leverage jemalloc provides however, it can be tempting to realize some short-term benefit. It requires strong self-discipline as an organization to resist that temptation and adhere to the core engineering principles.

This doesn't quite read properly to me. What does it actually mean? Does anyone know?

gjm11

I'm pretty sure it means something like this: "Because jemalloc is used all over the place in our systems that run at tremendous scale, some hack that improves its performance a little bit while degrading the longer-term maintainability of the code can look very appealing -- look, doing this thing will save us $X,000,000 per year! -- and it takes discipline to avoid giving in to that temptation and to insist on doing things properly even if sometimes it means passing up a chance to make the code 0.1% faster and 10% messier."

Asm2D

It would be great if Meta were able to sustain support for more open source projects, especially those they benefit from.

For example, they use AsmJit in a lot of projects (both internal and open source), and it's now unmaintained because of funding issues. Maybe they have internal forks now too.

jshorty

Surprised not to see any mention of the global memory supply shock. I would love to learn more about how those economics are shifting software priorities toward memory allocation for the first time in my (relatively young) career.

twodave

While it may seem directly related, it's just not. These things are worked on regardless of how cheap or expensive RAM is, because optimizing memory footprint pretty much always leads to fewer machines leased, which is a worthwhile goal even for smaller shops.

jshorty

That's useful to know, thank you.

refulgentis

There have been shocks at hyperscaler scale; e.g., this got yuge at Google for a couple of years before ChatGPT.
