Yes, those two books are Programming Pearls and More Programming Pearls. I have both, and they are good. In one of them there is a section or chapter with a few examples of advanced uses of awk, involving awk's associative arrays, to do some non-trivial work in very few lines of code. He shows that the equivalent programs in C would be much longer and harder to write -- even more so than in most other common examples where awk programs are shorter than those in other languages.
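A hypothetical example in the same spirit (this one is not taken from the book): word-frequency counting in a couple of lines, using awk's associative arrays. The equivalent C would need a hash table, memory management, and string handling by hand.

```shell
# Count word frequencies with an associative array keyed by the word itself.
# NF is the number of fields on the line; the END block runs after all input.
printf 'the cat sat on the mat\n' |
awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
     END { for (w in count) print w, count[w] }'
```

Note that awk creates array entries on first use, so there is no declaration, allocation, or lookup-then-insert dance at all.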
Even in 1985 you could foresee wanting to reference something after 2000 - terms for a thirty year loan, for example.
Harder to see reasons past 2038 back in 1970.
> Harder to see reasons past 2038 back in 1970
I still find myself confused by this one. Did people in the 1970s not imagine that any particular program or database would "survive" for 30+ years? Was the expectation that programs written in the future could set the epoch to a later date?
Now that I think about it, I also find it interesting that the epoch date isn't configurable in any system that I'm aware of.
Nobody expected the Unix timestamp to survive so long when it was designed, and the norm was to use a different format in databases that would care about dates beyond 2038.
All computer evolution up to that day happened in the 30 years prior, so how on earth could they possibly expect that their system would last such a long time?
I do recall from the time frame of the mid-eighties to mid-nineties that three year old computers were obsolete. Ten year old stuff was ancient history. Never would I have imagined being able to use Linux for 30 years.
It's a worthy exercise to code for a while on a vintage machine with a hundred or so kilowords of memory. Puts things into perspective.
Unix timestamps can represent dates *before* 1970, by using negative numbers.
Dennis Ritchie has said that he picked signed 32-bit values with an epoch of 1/1/1970 because that would comfortably represent dates exceeding his life - before he was born until after his (expected) death.
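The range that choice buys can be checked directly (this assumes GNU date, which accepts `-d @seconds` for an epoch offset, including negative ones):

```shell
# A signed 32-bit timestamp spans from -2^31 to 2^31 - 1 seconds from the epoch.
date -u -d @2147483647    # latest:   Tue Jan 19 03:14:07 UTC 2038
date -u -d @-2147483648   # earliest: Fri Dec 13 20:45:52 UTC 1901
```

So the signed choice gives roughly 68 years on either side of 1970 -- comfortably bracketing a life lived around the design date, as Ritchie described.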
Y10K bug! Only 4 digit dates.
I don't think that's the text of the entire book -- it looks like a scrape of the book's former website (www.programmingpearls.com).
This is only an opinion -- but it's the opinion of someone who has been involved in software engineering since the 1990s:
Since this quote is in the section of general advice, I feel this advice applies mainly to the analysis and design of systems. Here's an example: You want to make a command line program (or it could be an API) that can behave differently, depending on certain conditions. So you have an analysis and design problem, which is: how do you cover these different scenarios (flexibility) without making the program/api too difficult to use (complexity)?
If you hard-code the choices into the system, it gets a part of the job done, but there is no flexibility and so your system will be useless to those who need the outcome to match certain common use cases. But, on the other hand, if you require users to type in a long list of parameters every time they use the system (complexity), users will avoid this solution because it is simply too difficult to use for the most common scenarios.
Symmetry in this sense means striking the right balance among the aspects of usefulness and simplicity so that your system will match the needs of its intended users. In my hypothetical case, the programmer might provide a config file which lists comments showing the default values for the config options. If a user needs to tweak this behavior slightly from the defaults, they can set the config values as it suits them in the config file once, rather than needing to remember to type the overrides in as command-line params each time.
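A minimal sketch of that pattern (the tool name, config file, and options are all illustrative, not from any real program): built-in defaults are documented as comments in the config file, the user overrides them there once, and command-line flags win over both.

```shell
# Hypothetical config file for a made-up tool; comments document the defaults.
cat > tool.conf <<'EOF'
# Defaults (uncomment and edit a line to change it):
# verbose=0
# retries=3
verbose=1
EOF

verbose=0 retries=3          # 1. built-in defaults
. ./tool.conf                # 2. config file overrides defaults
while [ $# -gt 0 ]; do       # 3. command-line flags override everything
  case "$1" in
    --verbose) verbose=1 ;;
    --retries) retries=$2; shift ;;
  esac
  shift
done
echo "verbose=$verbose retries=$retries"
rm -f tool.conf
```

The layering is the point: common use needs no flags at all, a recurring personal preference is typed once into the file, and a one-off tweak is still possible on the command line.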
Again -- this is one simple example. When defining a data model (a different realm where symmetry applies), it is common to take many iterations to "get it right" so that the system can properly reflect a real-world problem, such as how to represent that 'many students take many classes at the college'. Early efforts tend to be "asymmetric" in the sense that they fail to capture key relationships between important concepts by making the data model too "flat" or simplistic. Further modeling efforts, though, may make the opposite mistake by trying too hard to faithfully capture every possible corner case, including those which either seldom occur or have no practical value (such as recording whether any instructors took leaves of absence during a semester, when the point of the system is to capture student attendance and grades).
"asymmetry" typically involves exceptional state/behaviour. It's an extra cost while you're building it, and it's an extra cost while maintaining it. For example, a customer wants something special but it will only be needed by that customer.
I was wondering about that. I think it is about having the ability to go A->B without being able to go B->A.
But, as with many of the Pearls, it is heavily context dependent. And if you aren't well versed in "primitive" computing paradigms, you'll be confused.
It's a bit hard to explain this one, but symmetry is a form of simplicity. This might make more sense to a physicist or mathematician.