
How to Prevent the Next Heartbleed (2020)


It is impossible to prevent the next Heartbleed; no one can predict the future.

Everything in the article is important and true: you need to take preventive measures into account when building your solutions! You need security by design in your products, and you need to learn from each disaster.

But you also need to make disaster recovery plans! You need to know what actions to take when the shit hits the fan!


> It is impossible to prevent the next Heartbleed; no one can predict the future.

I think that's a little strong. Verify code using theorem provers, and such vulnerabilities become all but impossible.

But yes, prepare for the worst anyway.


Then the next Heartbleed could be a side channel not covered by your theorems. Better to prepare in any case.


Side channels are indeed a tough problem, but if all we had to worry about were side-channel attacks, we'd be in a pretty good place overall. We're nowhere near that right now.


I'm the author, thanks for the link!

Mistakes happen, but I think we should try to learn from each one.


What I find interesting about buffer bleeds like Heartbleed (and also Cloudbleed), and about memory exploits in general, is that:

1. they surprise everyone by often being mere underflows into an allocated buffer (for example, any buffer where you forget to zero the padding, or where you write to the wrong place and leave another region uninitialized) rather than overflows beyond it. They can also be logical overflows, as in Heartbleed, without being physical overflows that something like a borrow checker could catch; it depends on the bleed (Cloudbleed would have been caught),

2. they let someone read all your sensitive server memory, with no use-after-free, even in a language with 100% memory safety (e.g. JavaScript, if you're reusing buffers), and finally that

3. they are often caused by simple integer overflow wrapping around to 0, where checked integer arithmetic enabled by default would mitigate the risk, but where it is typically not enabled. The default is usually left unsafe for performance with an opt-in for safety; I believe it would be better flipped around: checked arithmetic for safety by default, with an opt-out for performance in the few block scopes where it makes sense. Even in high-performance systems there are few places where loops are hot, assuming there's already a clear separation of concerns between control flow and data plane.
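To make point 3 concrete, here's a minimal sketch (not from the article; the record layout and function names are hypothetical) of how an unsigned subtraction on an untrusted length wraps around, and how checked arithmetic turns the same bug into a recoverable error:

```rust
// Hypothetical length-prefixed record with a 3-byte header, where
// `total` is an untrusted, attacker-controlled field. The payload
// length should be `total - 3`, but if total < 3 the unsigned
// subtraction wraps around.

fn payload_len_unchecked(total: u16) -> u16 {
    // With overflow checks disabled (the usual release default),
    // plain `total - 3` behaves like this: 2 - 3 wraps to 65535,
    // a huge over-read waiting to happen.
    total.wrapping_sub(3)
}

fn payload_len_checked(total: u16) -> Option<u16> {
    // Checked arithmetic turns the wraparound into an error the
    // caller must handle.
    total.checked_sub(3)
}

fn main() {
    assert_eq!(payload_len_unchecked(2), 65535);
    assert_eq!(payload_len_checked(2), None);
    assert_eq!(payload_len_checked(8), Some(5));
}
```

(Rust's debug builds are one example of the flipped default: arithmetic overflow panics unless you explicitly opt into wrapping.)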

And today there are all kinds of software systems that still have these vulnerabilities. For example, around 2019 I worked on some static analysis security software that could often detect bleeds automatically in outgoing email attachments, and it found more than one banking system leaking data in autogenerated statements. The problem is everywhere: think of popular file formats that people exchange and you can probably find a major codec with a bleed.

Finally, it's just such an epic post by Wheeler (he's done another great one on potential filename exploits) with so many insights into testing:

"Fuzzers make things go wrong. Assertions make sure we find out." — Jesse Ruderman

Many of the static techniques for countering Heartbleed-like defects, including manual review, were thwarted because the OpenSSL code is just too complex. Code that is security-sensitive needs to be “as simple as possible”.

In most cases only negative tests, not positive tests, have any value for security. As I noted earlier, what matters about test suites is how you create them. This is probably obvious to many readers of this paper. In particular, I suspect Eric S. Raymond is including these kinds of tests when he discusses the advantages of testing. However, this is not obvious to many software developers. All too many developers and organizations only use a mostly-positive test suite instead. Many developers find it very difficult to think like an attacker, and simply fail to consider widespread testing of inputs that “should not happen”.
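In the spirit of that last point, here's a sketch of what a negative test looks like for a hypothetical length-prefixed parser (everything here is illustrative, not from the article): feed it input whose declared length exceeds the bytes actually present, which is exactly the Heartbleed pattern, and assert it is rejected rather than served.

```rust
// Hypothetical parser: the first byte declares the payload length.
fn parse_record(buf: &[u8]) -> Option<&[u8]> {
    let (&len, rest) = buf.split_first()?;
    // `get` returns None instead of over-reading when the declared
    // length exceeds the data we actually received.
    rest.get(..len as usize)
}

fn main() {
    // Positive test: a well-formed record parses.
    assert_eq!(parse_record(&[3, b'a', b'b', b'c']), Some(&b"abc"[..]));
    // Negative tests: inputs that "should not happen".
    assert_eq!(parse_record(&[200, b'a', b'b', b'c']), None); // length lie
    assert_eq!(parse_record(&[]), None); // empty input
}
```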


Do buffer bleeds happen in languages that systematically zero their memory? (I vaguely remember reading that Java does, but I'm not sure.)


Yes, although as s_gourichon said, if there's any object pool reuse of these buffers then the potential for bleeds comes back.

So, for example, you could still have a bleed in Node.js even if you safely allocate buffers zeroed, if you then reuse them, say when you're writing something to a file and then shipping it off elsewhere.


If you are reusing things, beware that compilers will sometimes optimize out attempts to zero memory when they notice you write something else afterward, or free it. Check your compiler/implementation documentation for details; if it doesn't give specific guidance, assume that even if it does the right thing now, the next upgrade will behave differently.
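One common workaround is to zero through volatile writes, so the compiler cannot prove the stores are dead and elide them. A sketch of the core idea (crates like `zeroize` do this more carefully, with compiler fences):

```rust
use std::ptr;

fn secure_zero(buf: &mut [u8]) {
    for byte in buf.iter_mut() {
        // A volatile store is treated as having observable side effects,
        // so it survives dead-store elimination even if the buffer is
        // never read again.
        unsafe { ptr::write_volatile(byte, 0) };
    }
}

fn main() {
    let mut key = *b"super secret key";
    secure_zero(&mut key);
    assert!(key.iter().all(|&b| b == 0));
}
```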


That fixes the problem of buffer contents after allocation by the language/runtime/platform. When a buffer is reused at the library/application level, no zeroing is done.