
Knuth's Art of Computer Programming, V 4B, has gone into print


A few years ago I had the pleasure of meeting Knuth in his home; my wife was doing some photography for him. He took me to his room and showed me his setup: he was researching Sudoku algorithms at the time. His hands were blisteringly quick, moving between EMacs panes, triggering evaluations and printing results, as fast as any 20-year-old. In his 80s, he doesn't seem to have suffered any mental decline.

I started talking to him about some of the latest AI research and mentioned the papers' authors; he had already read them! Not only is he still very productive at 84, but he has not ossified into one particular subject: he continues to keep abreast of related fields.

He also played his huge pipe organ for us. He was gracious, humble, down to earth, truly a treasure. I only wish he could live another hundred years, selfishly, so I could see Volumes 5, 6, and 7 of The Art of Computer Programming finished.

I should note with some sadness that COVID hit not long after, and that year we lost a number of other luminaries in the field, like John Conway.


> moving between EMacs panes

Well, the editor wars are over folks.

More seriously, Don Knuth is a treasure. One of the coolest things about computing science is how all the greats in the field are either alive, or within the living memory of people who are.


These are the photos my wife took at his home.

In the 5th picture down you can see him standing (he uses a standing desk!), and if you blow up the screen you can see part of an Emacs window on the left-hand side. I believe he was also using Emacs shell windows, but I'm not 100% sure.

Direct Link:

I can't emphasize enough how incredibly nice and welcoming he is. He is bursting with enthusiasm for computers, and my wife and his wife had to almost restrain him so he would sit for his pictures, because he would start talking to me again about computer related stuff.


Hard to tell at this resolution, but it looks like he's using a fairly bare X setup with a tiling window manager and only one Emacs frame on the left. The colours and layout on Emacs look like org-mode. Does he write with org-mode? I would have expected him to write bare TeX, of course. Maybe he's doing something else with org-mode.

He's got Xeyes on the right, and the shell windows could conceivably be Emacs too, but if so, he's customised them to remove the minibuffer.


“The best way to communicate from one human being to another is through story,” Knuth said.

Great quote and great insight. Understanding how the brain works and is stimulated augments communication. Especially important for dry, theoretical subjects.


Look how he uses only a single screen, and has this extraordinary device to the left of it that nobody uses anymore, yet it proves to be one of the best tools for digging deep into code.




Whose photos are on the wall? Would be interesting to recognize some of them.


The pictures are beautiful, thank you for sharing that.


Emacs windows. There are no panes in Emacs.

The Emacs window is held in a frame. The operating system calls that frame a window, but that's not what Emacs calls it.

As to why Emacs calls everything something else, well, Emacs was first. As to why Emacs doesn't change its names, well, have you ever tried to change Stallman's mind?


I know. I've been an emacs user for 30 years, but I tend to just call them buffers. In the old days, when I would use a PPP or SLIP connection into a dial-up Unix system, my whole screen would be Emacs in VT100 mode, so it made a lot more sense to think of them as windows back then.


Learning that Knuth uses Emacs has been my proudest moment as an Emacs user.


We won! Emacs vs VI war over. It's finally over thanks to Knuth! VI users will be forced to pay reparations, and agree to cede territory back to the Emacs military alliance. :)




Just think how much more he could have created with vi ;)


He is only using Emacs as an OS....


I've also noticed that smart people type really fast (if they can touch-type) and can switch between multiple windows/tabs fast too. It's not because their fingers move faster than other people's, but because their minds move smoothly, so they can produce a long string of commands without stopping to think.

> as fast as any 20 year old

I certainly hope so. Though I think it's more likely that Knuth can be as fast as other 20-year-olds, but not as fast as the 20-year-old version of himself.


As someone whose computing career was improved by learning to touch-type in grade school (Mom had been a secretary, saw that these "personal" computers had keyboards, and bludgeoned the high school to let me in to the summer-school typing class) it's not actually the speed that matters - though that does certainly impress people - it's that it's "automatic". If you're not thinking about letters and where they are, you aren't distracted from the entire sentence (or function) you're holding in your head - it just pours out through your hands into the machine.

(Sometimes I joke about how my hands know emacs, not my brain - but if someone asks me what keys I hit for something, my fingers move and then I work backwards from that.)

(And if you're collecting correlations - I have no musical performance talent at all and have spent maybe 10 hours total poking at musical keyboards - for me, typing is all typing, I expect musical keyboarding is a related but parallel thing, not the same thing.)


There is however a terrible effect on productivity and self-respect when you have to switch keyboard layout for any reason.

I'm still one month into switching to a macbook, and I now dread cutting and pasting. Oh, and typing "pipe", too.

I wonder how long it will take my 40+ years old brain and hands to get their act together...


Not a joke, my fingers and brain know vim and I don't. Often have to try it out to know what command I'm actually using.


One of the smartest people I met as an undergrad at Berkeley was a theoretical chemistry PhD student. He was an extremely slow typist, with mostly hunt and peck. It would be frustrating for me because it would be like 1/4 my comfortable speed (20 vs. 80 wpm?). But being slow there made him faster, because he’d spend a little more time before typing thinking about what to type.


I've always had the opposite experience. When I am unable to type well, I am massively less effective. This happened when I switched from emacs to vim, or got a new keyboard with a weird layout, or broke my thumb, or, hell, anytime I use a phone.

First there’s the cognitive overhead involved with the task itself, as opposed to letting your fingers do the work while you think about real stuff. Second, when you’re slow at typing, typing becomes the bottleneck more often, and your brain is sometimes idling waiting for your typing to catch up. Your brain has the hard part; it should always be the bottleneck. Third, typing fast gives you more options, because you don’t consider “how long will this take me to type out?” when considering different approaches to some immediate challenge. If opening a REPL and evaluating something is going to take you five minutes, you might not do it.

I suspect your friend was a fast worker despite being a slow typist. If he learned to touch-type, he could spend the same time thinking as he does now, and then less time and effort actually typing.


Interesting. I do this, but I learned the skill from sight-reading music scores: you have to be playing several bars behind what you're reading, so the skill is in going into autopilot with your fingers as you read. No small coincidence then that Knuth is an organist.


> No small coincidence then that Knuth is an organist

Also great on the harpsichord and piano.


Google and YouTube often think I am a robot because of how fast I operate on their sites: typing, clicking, switching tabs, etc.


I wish modern OS could keep up with my typing and window switching and app launching and ...

I'm constantly perturbed and interrupted when some app window just 'hangs' for a few seconds (some garbage collection going on, probably?). I can't pinpoint any one thing; just... everything seems relatively slow in 2022 compared to even 10 years ago, and certainly 20. Applications weren't always phoning home to check things during boot-up, which I suspect happens a lot now. Our computers are dozens of times faster than in the 90s, and, computationally, some things get done much faster, but others - around UI, it seems - have apparently regressed.


I hope you never use the stock iOS calculator, which still eats keystrokes.


May we all age so gracefully. Perhaps part of the secret is to work like Don.


I'd imagine it's mostly a question of having the freedom to be able to work upon subjects which very much interest you, independently and on your own schedule, without much outside pressure or coercion to produce tangible results that are useful to anyone but yourself.


There may be something to that, at least according to one study. Specifically, this article [1] indicates "embracing new mental challenges may be the key to preserving both brain tissue and brain function."



A colleague and I also had the delight of visiting him, quite a few years ago now. I'd been doing some unusual stuff with TeX, and had been in touch and showed him some examples of our work.

He had recently been working on 3:16 and showed us some of the original calligraphy that he had at home at the time.

What a lovely man, as well as being a multi-talented genius.


Imagine Don being your interviewer at a FAANG company.


Note that, to get an idea of the contents, you can see the previously published fascicles, and earlier drafts online (pre-fascicles):

• Mathematical Preliminaries Redux:

• 7.2.2 Introduction to Backtracking:

• Dancing Links:

• Satisfiability:

The first three were published together as "Volume 4, Fascicle 5" in 2019, and the last one—Satisfiability—was published as "Volume 4, Fascicle 6" in 2015. Of course the actual publication as Volume 4B has hundreds of fixes and additions beyond what was published earlier, and a lovely preface that you can read here:

> You might think that a 700-page book has probably been padded with peripheral material. But I constantly had to "cut, cut, cut" while writing it, because […] new and potentially interesting-yet-unexplored topics kept popping up, more than enough to fill a lifetime; yet I knew that I must move on. […] I wrote nearly a thousand computer programs while preparing this material, because I find that I don't understand things unless I try to program them.



Re "being paid": consider that he took early retirement in 1992 to work on this book that he sees as his life's work (until then he had been doing it in his "side" time, separate from his job of teaching and research), giving up whatever salary a tenured Stanford CS professor would earn, and has been living off savings and royalties since then: he only gets "paid" when people choose to buy the results. (And if he were focused on money, he'd probably have put out more new volumes than 2 in 30 years, rather than keep making new editions / fixes to earlier volumes…)

But you're right that Knuth has a dream "job" that the rest of us, at least those with the slightest "scholarly" inclinations, can only envy him for: he reads, explores, digests, and explains the topics he likes, in the way he likes, and the results are great. But when you're already a legendary programmer in college, and have won a Turing Award at 36 for a book you've been writing in your free time, one can say you're entitled to the life.


> So teacher, that is how we spent the summer of 1960.

ENIAC was finished in 1945. I cannot imagine how small the programming world must've felt back then.


"Imagine being Noah Webster at work, writing 'thousands of definitions' of words that already exist, and being paid for the privilege!"

There is value in providing reference texts. There is value in providing sample usages of programs to the unfamiliar. There is value in demonstrating alternative means to reach the same goal.

Donald Knuth's work of cataloguing and examining the different branches and leaves of computer science would be valuable even if he weren't also a world-class computer scientist in his own right (which he just happens to be).


How's that unproductive?

He writes these programs to research things, and then publishes an authoritative book on the subject.


If you think Knuth's work is "unproductive" you obviously either don't know much about Knuth, or you just don't know what you are talking about. He has had a profound influence on the field of Computer Science - and everyone who has met him says he's one the nicest, humblest people they've ever met.

I for one would feel privileged to get to meet him.


He is writing programs to help him publish a volume of one of the most important series on computer programming. We have different definitions of the word productive.


You may have heard of the term "Kata"

It's used by many to perfect their craft.


Pumsae for the tae-kwon-do folks. It is awesome.


For the Christian, reading the Bible is internally transformative.


This is also an important topic for Knuth and he has published a book on the subject (which I have not read nor discussed with him as the topic is of zero interest to me).

OK, even though the bible doesn't interest me, we once went through the score of the Fantasia Apocalyptica together and it was useful that I knew enough of the bible to discuss some references. There are also musical "puzzles" or clues in the score a few of which I was lucky enough to catch so I didn't come off as a complete dufus.


I did like the fact that he led a Bible study in January - on the Book of Numbers. Does that count as an addition to semi-numerical algorithms?






God is strangely obsessed with foreskins.


Likewise for the Marxist reading Capital! (Especially Vol. 3)




In Google’s recent “Hacking Google” series [1] they appeared to credit Knuth via his bug reward program in his books [2] as having created the first software bug bounty offer.

Anyone aware of any software bug bounties that predate Knuth’s?



- specifically:



FWIW, here's a side-ways mention of Knuth's bounty, from

> So I sat down with him and proofread the pages as they came out of the typewriter. It seemed that he was composing and typing as fast as I could read. By morning the manual was done. Years later, when Don's first book came out, he told me that if I would proofread it for him he would pay me a dollar for every error that I found, including typographical errors. I took the offer as a great personal compliment at the time, but I think that he probably made that a standing offer to anyone.

also from earlier in that essay:

> Don claimed that he could write the compiler and a language manual all by himself during his three and a half month summer vacation. He said that he would do it for $5000. Our Fortran compiler required a card reader, card punch, line printer and automatic floating point. Don said that he would not need the card reader or card punch, but he wanted a magnetic tape unit and paper tape. I asked Gerard Guyod how Brad could have been suckered into paying this college kid $5000 to write something that had to be a piece of junk if he was only going to spend three and a half months on it. Gerard whispered his response to me. He said "We think that he already has it written. He probably did it in his spare time while working in the computer center at Case Institute." I still wasn't entirely satisfied with that answer because I was a college graduate whose first job was for $325 per month and I had just changed jobs and was making $525 per month. Besides that, it was taking mortal human beings 25 man-years to write compilers, not three and a half man-months. I thought that Brad had taken leave of his senses.


Thanks, there are a lot of gems in that piece.


As a saddening commentary, Knuth stopped sending those checks in 2008 because of problems with check fraud. Since this refers to Knuth, I carefully parsed that not as "because of check fraud," but as "because of problems with check fraud." There is an interesting difference between those two.

From your wiki link:

  Initially, Knuth sent real, negotiable checks to recipients. He stopped doing
  so in October 2008 because of problems with check fraud. As a replacement, he
  started his own "Bank of San Serriffe", in the fictional nation of
  San Serriffe, which keeps an account for everyone who found an error since
  2006.[2] Knuth now sends out "hexadecimal certificates" instead of negotiable
  checks.


Essentially, he also created NFTs then...


I think Knuth also created,, and He refused all google stock, however, since they implemented map-reduce incorrectly, using a set of filters. In the case of fb, he was outfoxed, because he didn't have the documents to prove that he created all the algorithms for matching faces.


Very cool. I own the originals and the 4th but sadly I have to admit that even though I am a reasonably accomplished developer with a couple of CS degrees I have a hard time understanding this work and have not yet really read it. I feel like I am publicly admitting failure just posting this. I still have time to make sure these are not just bookshelf decorations though.


The 1st Chapter is one of the best mathematical discussions I've ever read, bar none. But hardcore math isn't needed for most of us programmers. Knuth knows this.

I suggest skipping around on an "as needed" basis.

Try Volume 1, Section 2.2 "Linear Lists" as a starting point. I promise you it's an easier read than you expect. Despite being relatively easy material, you will probably learn something about arrays with just ~3 or ~4 pages of reading.

Start at the beginning of Section 2.2. Maybe work your way backwards to Section 2.1 as you realize it's not as hard as you think (so you start at the "proper" starting point). And then go on from there.

It's really basic stuff. But hitting "rock bottom" on the subject of arrays, stacks, queues, and such is fascinating.


He wrote his "Concrete Mathematics" book specifically to address your first point. I admit it's the only book of his I've read cover to cover, and it wrecked me.


I agree on the section on linear lists. It was really eye opening for me back when I read it the first time. It's remarkable how much useful information there is about such a seemingly simple concept.


You're not the only one. If we're honest, most of us buy the series as a mark of status/pride, not out of any plan to actually read it. I mean, yeah, I do plan to. Probably once I've retired. But that's a long way away :-)


I use it more like a dictionary. I don't read it from start to finish, but if I need an algorithm for something, I look it up in Knuth.


This is the function my copy of TAOCP serves. I'm not anywhere near knowledgeable enough to digest all of it, but when I need something and I want to understand the ground theory behind it then to Knuth I go.


Consider also CLRS.


If you're interested in reading them (and they are extremely readable), you should first know something (not everything) about programming and discrete mathematics. If you've watched some YouTube videos on logic, sets, graphs, etc., you'll be more than equipped. Knuth has even written a book, Concrete Mathematics, that covers these prerequisites, but it might be faster and easier to watch a Khan Academy video!


Uuuuuh. I know a lot about programming and discrete mathematics (I used to tutor both, and I've used a lot of discrete math throughout my 20 year career as a software developer) and I still struggle with the books. Maybe your definition of "something" varies from mine.


What's the struggle?

I've a moderate grasp of induction etc., but my impression is that that's all that's required.


Does Khan Academy cover the math topics of Concrete Mathematics yet? Looking at their list of math courses I don't see a single discrete math course in the bunch.


Concrete Mathematics has some fairly advanced stuff (e.g. asymptotics) which I doubt Khan Academy covers.


I have no idea! I just used it as an example of “educational video on YouTube!” :-)


That's not really what Concrete Mathematics is about. It's not a prereq to TAOCP.


Mathematical Induction in volume 1 is the prerequisite material. The work is supposed to be self-sufficient.


Concrete Mathematics is in fact an elaboration on the chapter covering the prerequisites in TAOCP. It was well worth it, considering how great that chapter is.

The best part of TAOCP for me is how incredibly good the exercises are. The difficulty gradient is perfect, and from rating 20 onwards you always get some interesting insight from solving them. I especially like the M and HM ones, which are among some of the best maths problems I have seen, despite being in a CS book.


Honestly, despite everyone's claims of how "beautiful" the books are, I think they are rather terribly formatted.

For example, the first volume consists of 2 "chapters". Each chapter spans about 250 pages on its own. So actually not a chapter, but a section, or maybe even a book in its own right. Then, each chapter is broken into sections and subsections that flow one into the next. There's a fundamental lack of whitespace and breathing room in a book that is supposedly "beautifully formatted".

At the very beginning of the book, there is a flowchart for how you're expected to read the book. It's not linear. It's not even close to linear, even within a specific topic. So why not just format the book in the suggested reading order?

I don't agree that Knuth is a great communicator. He's verbose when he could be succinct. He's taciturn when he should expound. He's prone to flowery language with humorous barbs in his introductions (and they can be quite funny, if you are clued in enough to what he's talking about already), but then spews dense mathematical notation when he gets into details.

So yeah, there's a lot of interesting information in the book, but it's a struggle to read it. Some people will say it's supposed to be a struggle, it's meant to make you work. Why? Programming is already hard on its own. Why make the act of reading about programming hard, too? I think, for the amount of effort being put into them, they could have been written more clearly. It makes the books feel like they are designed to mark an in-crowd of mathematicians who learned programming rather than to teach people computer science.


Writing style is subjective and I find Knuth's delightful: almost every paper of his that I've read has been fun. Even in his technical papers he writes at the level an undergraduate student with decent mathematical background (or a strong high-schooler) could mostly understand, which suits me just fine, especially compared to some other academics who write only for their peers / graduate-level and higher. And in any case, all the mathematical parts of TAOCP can be safely skipped if one is not interested.

But regarding your complaint about formatting, note that the book design of the TAOCP volumes is not Knuth's doing: someone at Addison-Wesley came up with it in the mid-1960s, and the first three volumes were published with Monotype (hot-metal) typesetting. When Knuth wrote TeX in the late 70s/early 80s, his goal was simply to retain the style of the books. (The layout was fine; the publishers were trying to move to phototypesetting and the fonts were poor, so he needed digital fonts and wrote METAFONT to create them, and he needed TeX just to be able to use those fonts.) Perhaps the book Concrete Mathematics will be a better example for you to critique, as Knuth had more of a hand in its typesetting (though there too a book designer was involved); the article Typesetting Concrete Mathematics has some details, reprinted in the Digital Typography volume, and there's a poor scan of its earlier publication here:

About chapter length: the initial table of contents was for a single book of 12 chapters; Knuth wildly underestimated the printed length of what he had written (not many people know he actually wrote the whole book, all 12 chapters, in the 1960s, amounting to about 3500 pages — since then, you could say, he's been updating the chapters and publishing each section when he thinks it's ready), so the plan changed to seven volumes with the same chapters. And the chapter sections' lengths have themselves grown with the growth of the subject; as you can see, the current Volume 4B is just a tiny fraction of the projected Chapter 7: covers and

Also, about the flowchart for how to read the books: it's partly a joke about flowcharts, but the suggested reading order is very much linear: — in words, it's just something like "read the chapters in order, skipping starred sections on the first reading. Also, feel free to skip boring sections, and skim math when it gets difficult" (he addresses the mathematics in the preface on p. viii).


That flowchart needs a HEFTY dose of, and I had ALMOST finished remaking it in it just now, but a terrible mistake on my part wiped my progress; thanks, Android. :(

cba / too frustrated to re-redo it right now.


You read a math book slowly, so you don't need whitespace for help skimming. Extra whitespace would make the huge books even huger.

Beautiful formatting here refers to the text, not the whitespace.


Except everyone who talks about how to read the books successfully--including the books themselves--talks about jumping around in them in a way that requires a lot of skimming.

I don't find the text in any other book I own to be insufficient, or even noticeably that different from Knuth's books (with the obvious exception of some fly-by-night print-on-demand stuff purchased off Amazon in recent years). It's an assertion in want of proof. TeX might be the greatest example of Yak Shaving in human history: Knuth spent at least 30 years between Volumes 3 and 4 to work on TeX because of dissatisfaction with layout in V3.

This is what I'm talking about. Notice the use of colored blocks and whitespace to break up the flow of the page, to callout different asides and section headers.

Adding whitespace doesn't have to make the book longer. It could just make the book wider. Besides, paper is cheap.


> Why make the act of reading about programming hard

These books are not really programming books, they are math books. There are oodles of "programming" books out there that teach algorithms, like how to implement a quicksort and why X algorithm is better than Y for problem Z. Knuth's books are more like a graduate level algorithms or discrete math course.

>then spews dense mathematical notation when he gets into details

Because The Art of Computer Programming isn't really about programming per se, it's about the theory and analysis of programming, IMHO. I'd say O'Reilly books are more famous for learning programming.


I vaguely recall an anecdote being told here on HN where Donald Knuth met Steve Jobs. Steve Jobs said something along the lines of "I admire you very much, I have read all of your books!", to which Donald Knuth replied something to the effect of "I don't think so." (more politely, though, I presume).

Which is to say, I don't think many people have actually worked their way all the way through them. Neither have I; there's no shame in that.

And, as you point out, it's not too late to give it a try.


The story is related at:

but probably didn't happen that way.


Anecdotally, Bill Gates once said that anyone who has read all of TAOCP could be hired by Microsoft on the spot.

davidrupp -- Dr. Knuth sets the record straight at a talk given by Randall Munroe (of xkcd fame). Spoiler alert: "I only met him a couple of times [...] and in each case I was impressed by him probably more than he was impressed by me."


The impression I have -- I'm not sure where I got it from; it could even be from the first book itself -- is that The Art of Computer Programming's mission is to be a useful resource if, a thousand years from now, computer science finds itself starting from scratch. Contemporaries can learn from it, but by even just buying copies you increase the odds that the books can be recovered and used in that hypothetical situation.


Same here. I was toward the end of an associate's degree in programming and finding Knuth Vol 1 in the library stacks felt like discovering there was a whole continent of intellectual effort where I'd only seen a tiny neighborhood. It helped motivate me to go get a BSCS. But 40 years later, I've still only dipped into it from time to time (I did buy my own copy :))


Acceptance is the first step on the road to recovery.


Nostalgia time! When I graduated years ago, Knuth's books were still all the rage, because the central themes in university CS departments were fundamentals: algorithms, compiler theory, operating systems and what not. We referenced TAOCP for implementing a priority queue using a Fibonacci heap for our OS projects. We studied chapters of Vol 2 for some of the numerical algorithms, and so on. We read Parsing Techniques cover to cover to understand how to write a parser. We still carefully studied AI: A Modern Approach and the dragon/dinosaur/tiger books. Every once in a while a discussion of Knuth's work went to the first page of HN or Programming Reddit. Then we got more and more mature libraries, and we rarely needed to understand 10 different algorithms that solve the same class of problems. Consequently, we got fewer discussions about Knuth's work too. Of course, it's super fun to read his meticulous discussion of combinatorial algorithms in Vol 4. It's just that TAOCP does not appear as essential to as many people as it did a decade ago.


Can someone honestly tell me if they have actually read these books AND found them useful ON THE JOB? What do you do for work? I tried reading Volume 1 about ten years ago, and it was pretty much assembly language or something, I believe. I gave up, since it wasn't something I needed in academia back then, and there were far better ways to learn DSA.


I used them to implement an arbitrary precision math library for an embedded processor that needed to do some precise calculation in an autonomous environment. The embedded processor had no floating point capabilities.

The book showed me the method of how to do it efficiently in a manner that could be adapted to the 16 bit architecture I was working with. It also showed me how to detect and handle overflow situations, as well as when they would happen. It kept me from creating a buggy, error-prone pile of crap code that I'm certain I would have created at the time.
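The classical multiple-precision algorithms being described here (TAOCP Vol. 2, Section 4.3.1) reduce to schoolbook arithmetic on fixed-width "limbs" with explicit carry propagation. A minimal sketch of the addition step, with Python lists standing in for the 16-bit words of such an architecture (the names and structure are illustrative, not the commenter's actual library):

```python
# Classical multi-word addition in the style of TAOCP Vol. 2, Sec. 4.3.1:
# numbers are little-endian lists of 16-bit limbs, and any overflow of a
# limb becomes the carry into the next one.

BASE = 1 << 16  # each limb holds 16 bits

def add_multiword(a, b):
    """Add two little-endian lists of 16-bit limbs; return the sum's limbs."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        limb_a = a[i] if i < len(a) else 0
        limb_b = b[i] if i < len(b) else 0
        total = limb_a + limb_b + carry
        result.append(total % BASE)   # keep the low 16 bits
        carry = total // BASE         # overflow feeds the next limb
    if carry:
        result.append(carry)          # final carry extends the number
    return result

def to_int(limbs):
    """Reference conversion back to a Python integer, for checking."""
    return sum(limb << (16 * i) for i, limb in enumerate(limbs))
```

On real 16-bit hardware without this discipline, the overflow situations the commenter mentions silently wrap; detecting `total >= BASE` at each step is exactly what keeps the result correct.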

At the same time, it took me a couple of days of study to understand the two or three pages I was interested in. The material was the most condensed I have ever consumed in computer science. Like you say, you have to learn a new machine architecture and assembly language to understand the examples. Things you learn from those books come at a high cost in terms of time and effort.

It reminds me of studying the Bible, where cross-checking, reading different translations, studying Hebrew and Greek, and learning the culture of the day are all necessary if you want to get the best understanding you can. The TAOCP books are scholarly articles and almost a kind of shorthand notation for computer science concepts.


It may not be a coincidence that one of the author's other books is on Biblical studies:

I didn't realize it had been blurbed by a professor emeritus at a seminary (as well as Martin Gardner).


My favourite thing about his 3:16 project is the name he gave to the technique (of, effectively, taking a stratified sample of the Bible in order to get an overview of its themes): "the way of the cross section". (If the pun isn't immediately apparent, see


I've spent a lot of time doing the latter and had trouble understanding the former the one time I needed it (during a high pressure project where I simply didn't have time or the quiet time to spend two days digesting it).

Glad to know that the problem isn't totally me :)


I've been randomly skipping around inside these books for a long, long time. They're full of little gems that you won't find elsewhere.

For example, in Volume 4A there's a simple technique for comparing two pointers in bit-reversed form. This technique was patented by Hewlett-Packard as a method of randomizing search trees (treaps):

From Volume 3, here is by far the fastest method of sorting integers (Singleton's method), an algorithm I have used professionally several times (e.g., for beam search in a speech decoder):

Knuth's books are just packed with gems like that.

The meticulous high quality of the books is also remarkable. However, if you look very carefully you can find rare mistakes. I received payment from Knuth and have an account at his bank:

Getting your name on the above list was once a Hacker rite of passage.


> However, if you look very carefully you can find mistakes.

Opened the book, thought "no, it can't be". Agonized for days and weeks over whether I was making a fool of myself.

Sent the bug report.

First word in first sentence in first paragraph in first chapter was wrong. Got my cheque. ;-)


What was it?!


I'm somehow on that list twice, which looks like a bug. I wonder if there is a reward for finding an error in the reward list.


He fixed the issue quickly after I sent an email.


TAOCP is not job training now, if it ever was; that is trying to get the cart to pull the horse.

The point of a job, unless you're lucky enough to get a job where you can change the world, is to pay your living expenses so that you have the leisure to do things like read TAOCP and write Sudoku solvers. If that's not your idea of an enjoyable vacation, don't read it; you'll regret it.


Writing a decent Sudoku solver takes just a few hours... I did this last year in order to refresh my memory on how Ruby worked. Reading TAOCP would take a few more hours than that.


Luckily, you are more than a few hours from death.


The world changes through the collective effort of the people who inhabit it. Not because some demigod wrote a script.


Mostly the world changes due to unpredictable emergent phenomena, but occasionally you do get intentional changes as well. But intention is individual, not collective; there's no such thing as collective intention. At best you have many people's intentions in agreement. Rarely does that arise in a workplace.


The one can be a special case of the other...


I'm currently going through Fascicule 6A dedicated to SAT solving and I can tell you it is by far the best reference. I haven't found anywhere else a complete reference that builds a SAT solver from scratch while explaining why each technique is used.

Regarding more classical algorithms, I never bothered reading the other fascicles. I think there are much more practical references but none is as complete and detailed as Knuth's treatment. He goes to the bottom of any algorithm, not just proving it has the right complexity but also asking what inputs are the most difficult, what other problems it can solve, etc. In the end you understand the field much better and have a good idea of what the boundary of knowledge (ie research) looks like.

But, and I say that as an ICPC world finalist, if you just want to solve practical problems you probably won't need TAOCP.


Knuth's chapters are mostly decades old. I don't think they are near the boundaries of research. (except the starting boundaries.)


It just means you're not doing anything interesting on your job.

If you have constraints, i.e. not enough compute, not enough memory, not enough storage, then you'll need a solution and, odds are, somebody has had the same problem before and it's been written up already in Knuth's TAOCP. Of course, the problem won't be exactly in the form it's been written up, but you have to have enough experience and knowledge to know the abstract form of your problem in order to figure out what parts of TAOCP would be useful to you.

Want a concrete example? Well, I'm in the generation that got to assume that storage access was basically constant-time, regardless of whether you wanted to access the 1st byte or the Nth one. New high-performance memory like NAND flash is weird because it's fundamentally unreliable. In order to make NAND reliable you have to do stuff, and some of the stuff you can do makes NAND look like tape, and accessing tape is best done sequentially. Guess what TAOCP has? All these great ideas about maximizing random access to sequential media.


>It just means you're not doing anything interesting on your job.

That man had a family...


The vast, vast majority of us may be doing difficult work, maybe even necessary work, but much of it is not really that interesting.

It's good to remind myself that I'm basically a plumber for the internet tubes.


Serious question - why wouldn't you search for the algorithm you need (or the problem statement) on Google Scholar these days? I have the books, but if I ever found myself reaching for them at work, I would consult the most recent literature.


I read a lot of papers I find on Google Scholar, and it's very common that they are wrong, or don't accurately survey related work, or pointless. For an overview of a field or a subfield, you want a review paper, and TAOCP is the best review of algorithms I've found. It's true that some of the volumes are outdated, but in many areas the new work is incremental.

Consider, as an example, Fascicle 5b, Introduction to Backtracking. His citations in the first 25 pages are from 1899, 1918, 2017, 1957, 1967, 1963, 1900, 1947, 1882, 1848, 1849, 1850, and 1960. Had he somehow managed to write this chapter in 01968 instead of 02019 he would only have been missing one of them, though surely the illustrations and experimental results of his own he reports would not have been as excellent, and surely the overall framing of the section benefits from the additional 61 years of hindsight.

In fact, although enormous progress has been made in backtracking in recent years, it hasn't affected the introductory aspects of the question. But clearly you can learn an enormous amount about backtracking without straying into the literature of the current millennium.


The book is a nice way to give you a hint about what exists and what stuff is called. From there you can search for more.

Of course other resources exist, many of which cite TAOCP or are derived from it


Nowadays, most papers in CS are filler (or even worse, sometimes they're just fraudulent), written to meet arbitrary bureaucratic requirements. Sifting through that noise is really tedious.


Over 25 years ago I was researching algorithms for a search engine I had been commissioned to write, and for its disk allocation strategy I had settled on a "buddy algorithm" from The Art of Computer Programming Vol 1. It had nice properties and was easy to make.

I implemented it to a tee, only to see it fail (I think in the part that merges abandoned blocks). I banged my head against this problem for days before finally daring to entertain the notion that the algorithm might be flawed. This was TAOCP after all!

Turned out my university library had an old edition and I had run into one of the scarce bugs in it (which had been duly fixed in the next edition).

So yes, I have used TAOCP on the job and, to prove it, I have a scar of which I am rather proud.


If you can't find a bug that exists in your copy of TAOCP, maybe you didn't learn what it was teaching you :-)

(Adapted from Feynman, who apologized for errors in his Lectures, but refused to accept blame for anyone who cited them in a dispute. The truth speaks for itself.)


Would you be willing to provide a reference for the original? My Google-fu is failing me.


A colleague of mine had invented a technique for factoring composite numbers that he thought would be quite fast.

I thought a little about it and saw that it would degenerate into sat solving.

I checked TAOCP and indeed my colleague's idea was in there. Also a faster algorithm was presented in the relevant section. I lent the book to him and we saved around a month of work.


This is exactly what TAOCP can be best at - if you know enough about the problem domain to be dangerous, or even to build a solution, you will know enough to read the relevant portion of TAOCP, where it is very likely you will find value.


There is quite a lot of money waiting for you if you ever figure out how to factor composites quickly. (Any old bum can factor primes quickly.)


We had a very specific use case, where the input was a known composite, and some properties of the factors were known.

Also, while no fast algorithms are known (and may never be, depending on whether P = NP), some are faster than others depending on the properties of your inputs, and choosing the right one can mean saving a lot of time and effort.


I once used them at work. I had a monitor that was too low, so putting a volume or two of Knuth under it raised it to the right height.

That's the only time I've ever used them though.


Quite expensive for that use.


And yet significantly cheaper than Apple's Pro Stand.


But you get to brag that you use TAOCP on a regular basis.


This is one of those book series that I really, really want to purchase as a beautiful, matching, leather-bound set. And, yes, I would read it for pleasure, and I would treasure it.


Each volume is taking longer and longer (4A and 4B and work on 4C have taken over 20 years). Mr Knuth is 84 and if there are 3 more volumes...


Maybe Brandon Sanderson can write the last three books.


The joke, for anyone else here who isn't a book nerd (my wife is), is that now whenever an old author dies with unfinished work, or hasn't finished their work (that's you, George!), people joke "Just let Brandon Sanderson finish it!" Mr. Sanderson is an absolute writing beast, a "100x writer" if there ever was one, and he worked on (or finished?) a book series[0] by a now-dead author that most people think he made better.



Never heard of him, is he another great computer scientist?

I'd love it if someone like Chris Wellons or Per Bothner could work on them, but filling Knuth's shoes may be a big task for one man.


Journey before destination


Well played


I was thinking the same thing when I clicked on the comments. I somewhat expect this will be the last volume in the series, so one could now hope to buy a "complete" set. I would love to revisit and explore all of the material using modern programming languages.


One might hope that he can at least complete 4C such that volume 4 is complete. I am skeptical that volume 5 or later will see the light of day, But I also cannot predict the future.


This comment made me check the history of the official TAOCP page [1] through the Wayback Machine. Volume 5 was estimated to be ready in 2009 at the very first snapshot (1997), in 2010 by 2003, in 2015 by 2007, in 2020 by 2010, and in 2025 by 2015, which is the latest estimate today. At first glance this looks like an eternal "10 years later", but note that the estimate hasn't changed since 2015 and the 2025 estimate is now only 3 years away. Does this mean that Knuth does plan to publish anything related to Volume 5 by that time?



Bookbinding is a great hobby. Or you can commission someone to create leather-bound covers for your book?


Yes - you can find book rebinders who can take a paperback book and make a bound book out of it.


Same - it's just a joy dropping into some random section and reading through, but my copies are a little abused by that practice. I'd love to have a second set for aesthetics and display.


R.A. MacAvoy has the titular character do this in her wonderful novel _Tea with the Black Dragon_.


There's something beautiful about his work on the series. He's been working on it for years, built TeX (not sure if that was for this series or other books of his), and has such ambitious plans. He's a serious legend in the field of computer science and we're lucky to have someone like him to have provided our industry with so much incredible knowledge. I'm interested to see how the series progresses for the next 30 years.

He also seems like the kind of guy you'd want to have a cup of coffee or tea with. Always seemed like a fun and sweet guy in the interviews I've seen with him.


> I'm interested to see how the series progresses for the next 30 years.

Donald is 84. And each time I’m reminded of that I get sad, it’s very unlikely we’ll get another 30 years of his work.


It reminds me of that quote from Hokusai (the Japanese painter):

“From the age of 6 I had a mania for drawing the shapes of things. When I was 50 I had published a universe of designs. But all I have done before the age of 70 is not worth bothering with. At 75 I'll have learned something of the pattern of nature, of animals, of plants, of trees, birds, fish and insects. When I am 80 you will see real progress. At 90 I shall have cut my way deeply into the mystery of life itself. At 100, I shall be a marvelous artist. At 110, everything I create; a dot, a line, will jump to life as never before. To all of you who are going to live as long as I do, I promise to keep my word. I am writing this in my old age. I used to call myself Hokusai, but today I sign myself 'The Old Man Mad About Drawing.’”


He's already well over the average life expectancy for American men. But hey, good genes exist and it's possible he's got another 20 lucid years.


Life expectancy is measured from birth. What's the expected remaining lifespan for someone who has already lived to 84?

People can live well into their 90s. Hopefully he's got another book in him. These people are still alive, for example:

Jimmy Carter: 98

Henry Kissinger: 99

Warren Buffett: 92

Charlie Munger: 98

Mel Brooks: 96

Alan Greenspan: 96

Noam Chomsky: 94 in December




Unfortunately, for an American man at age 84 he would only be expected to live (on average) another 6.44 years. See:


I didn't want to say it out right as it felt kind of morbid, but I'm wondering how it will carry on after he passes. Not sure if there's someone he's asked to continue it after his passing or not, but I hope so.



It looks like he started working on it in 1962.

Just imagine being that intelligent and pouring 60 years of your life into something like this.


Sounds blissful


I'd even say it sounds sacred.


The origins of TeX from Knuth himself:

His books were the reason he started TeX.


I was hoping that Fascicle 7 would come out, as that would be on Constraint Programming, and I think that's very similar to SAT-solvers (Fascicle 6) and the BDDs that he's already covered.

Then again: SAT-solvers are a big enough subject matter on their own that backtracking search + SAT-solvers is more than enough to fill a volume of its own. So choosing these two subjects as volume 4B makes sense.


As a fan of constraint programming, I'd love to see Knuth's take on the subject. I hope he doesn't skip over Fasicle 7.


Have you read his current draft of Fascicle 7a (Constraint Satisfaction)? It is currently 145 pages and is available at


I wasn't aware of this! Thanks for the info. I'll give it a look through.

It's not exactly an easy subject, so I'll need some time to digest the material. Especially if it's written in Knuth's famous "difficult, but terse" style.


I once had the pleasure of listening to Knuth speak at my university's computer science department. I joined the very large queue of other people, and expected the great man to totally revolutionise my understanding of, well, I didn't know what, but something: anything from computer programming to TeX, I didn't mind. I was expecting enlightening solutions to famously difficult problems, a great study of algorithms, that kind of thing.

Well, what happened was that he spoke for nearly two hours on self-referential aptitude tests. The kind of "20 questions" game where the 20th question is "The maximum score obtainable on this test is (a) 18 (b) 19 (c) 20 (d) indeterminate (e) achievable only by getting this question wrong" and all of the others are inherently recursively linked. I had no idea these things existed, even less of an idea why anyone would want to study them, and came away later with both my mind dripping down the side of my skull and a greater appreciation of SAT solvers and compiler design in functional programming languages. Highly recommended.

(And if anyone wants to do the quiz, it's here: [1])


Many years ago I needed to understand hashing for an important work project. I sat down and read Knuth's chapter on hashing. It was very, very good and it helped to greatly accelerate my project work.


What I also learned is how impossibly large a topic even just hashing is. A section of Knuth's encyclopedia isn't sufficient space to cover it thoroughly (for some definition of thoroughly) and further research showed how much progress had been made in the subject since Knuth published his work.

This isn't a complaint about Knuth, again, I'm grateful for how much his writing accelerated my understanding. But further research underlined how impossible a task he's undertaken.


A copy personally autographed by Knuth will be up for auction in the Gathering For Gardner fund raiser.

* The Art of Computer Programming, Volume 4B: Combinatorial Algorithms, Part 2