Hacker News

7 hours ago by dspillett

> CPUs have somewhat plateaued in their single core performance in the past decade

In fact, for many cases single-core performance has dropped at a given relative price point. Look at renting inexpensive (not bleeding edge) bare-metal servers: the brand new boxes often have a little less single-core performance than units a few years old, but have two, three, or four times the number of cores at a similar inflation-adjusted cost.

For most server workloads, at least where there is more than near-zero concurrency, adding more cores is far more effective than trying to make a single core go faster (up to a point - there are diminishing returns when shoving more and more cores into one machine, even for embarrassingly parallel workloads, due to other bottlenecks, unless using specialist kit for your task).

It can be more power efficient too, despite all the extra silicon - one of the reasons for the slight drop (rather than a plateau) in single-core oomph is that a small drop in core speed (or a reduction in complexity via pipeline depth and other features) can create a significant reduction in power consumption. Once you take into account modern CPUs being able to properly idle unused cores (so they aren't consuming more than a trickle of energy unless actively doing something) it becomes a bit of a no-brainer in many data-centre environments. There are exceptions to every rule of course - the power dynamic flips if you are running every core at full capacity most or all of the time (e.g. crypto mining).

7 hours ago by sandos

Yes, yes, this is why I haven't bothered to upgrade my 2500K, although it is actually time now, since games apparently learnt how to use more than 1 core. I always went to some benchmark every year and saw single-core performance barely moving upwards.

5 hours ago by VHRanger

The 2500K is on the borderline of the sweet spot, but something like a 4790K, 5775C, or 6700K can hold up 7 years later.

That said, the very latest processors (AMD Ryzen 5000 series, Apple M1 silicon) are starting to make real gains in single-threaded speed.

5 hours ago by CoolGuySteve

I replaced my 2500K a couple years ago using the cheapest AMD components I could find. The main improvements were mostly in the chipset/motherboard:

- The PCIe 2.0 lanes on the old CPU were throttling my NVMe drive to 1GB/sec transfer rates.

- USB3 compatibility and USB power delivery were vastly more reliable. My old 2500K ASUS motherboard couldn't power a Lenovo VR headset for example, and plugging too many things into my USB hub would cause device dropouts.

- Some improvement in either DDR4 memory bandwidth or latency fixed occasional loading stalls I'd see in games when transitioning to new areas. Even with the same GPU, before the upgrade games would sometimes lock up for about half a second and then go back to running at 60fps.

4 hours ago by dspillett

Similar here. My main home machine's CPU held out for years more than previous ones had. I didn't do much by way of heavy dev/test/DB work in that period[†] so games were the only big processing it did[‡], and they only used a couple of cores properly or were bottlenecked at the GPU.

I upgraded early last year because I was doing a bunch of video transcoding, something where going from 4 to 16 cores really helps, and had finally started to notice it bogging down more than a little elsewhere. There was a per-core performance bump too in this case, that R7 2700/x was excellent value for money at the time. Also there was a goodly increase in memory bandwidth with the new kit, to keep those cores' caches full of things to be getting on with, but again that wasn't a massive bottleneck for my other uses up to that point.

[†] which I had previously, but personal dev has dropped off significantly since developing outdoor habits (when you properly get into running it can be very time-consuming!) and day-job work is usually done via VPN+RDC when I do it at home.

[‡] and even then I wasn't spending time & money on the bleeding edge (though I did upgrade to a 1060/6GB for those that were demanding more than my old GPU could give)

9 hours ago by peter_d_sherman

>"Closing File Handles on Windows

Many years ago I was profiling Mercurial to help improve the working directory checkout speed on Windows, as users were observing that checkout times on Windows were much slower than on Linux, even on the same machine.

I thought I could chalk this up to NTFS versus Linux filesystems or general kernel/OS level efficiency differences. What I actually learned was much more surprising.

When I started profiling Mercurial on Windows, I observed that most I/O APIs were completing in a few dozen microseconds, maybe a single millisecond or two every now and then. Windows/NTFS performance seemed great!

Except for CloseHandle(). These calls were often taking 1-10+ milliseconds to complete. It seemed odd to me that file writes - even sustained file writes that were sufficient to blow past any write buffering capacity - were fast but closes slow. It was even more perplexing that CloseHandle() was slow even if you were using completion ports (i.e. async I/O). This behavior for completion ports was counter to what the MSDN documentation said should happen (the function should return immediately and its status can be retrieved later).

While I didn't realize it at the time, the cause for this was/is Windows Defender. Windows Defender (and other anti-virus / scanning software) typically work on Windows by installing what's called a filesystem filter driver. This is a kernel driver that essentially hooks itself into the kernel and receives callbacks on I/O and filesystem events. It turns out the close file callback triggers scanning of written data. And this scanning appears to occur synchronously, blocking CloseHandle() from returning. This adds milliseconds of overhead."
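
A rough way to see the same split on a Windows machine with real-time scanning enabled is to time the write and the close separately. This is just an illustrative probe (file name and size are arbitrary, and Python's close also flushes its own userspace buffer, so the split is approximate):

    import os, time

    data = os.urandom(16 * 1024 * 1024)  # 16 MB of fresh, never-scanned bytes

    t0 = time.perf_counter()
    f = open("defender_probe.bin", "wb")
    f.write(data)
    t1 = time.perf_counter()
    f.close()  # filter drivers often hook the close and scan synchronously
    t2 = time.perf_counter()

    print(f"write: {(t1 - t0) * 1e3:.1f} ms, close: {(t2 - t1) * 1e3:.1f} ms")
    os.remove("defender_probe.bin")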

PDS: Observation: In an OS, if I/O (or more generally, API calls) are initially written to run and return quickly -- this doesn't mean that they won't degrade (for whatever reason), as the OS expands and/or underlying hardware changes, over time...

For any OS writer, present or future, a key aspect of OS development is writing I/O (and API) performance tests, running them regularly, and immediately halting development to understand/fix the root cause -- if and when performance anomalies are detected... in large software systems, in large codebases, it's usually much harder to gain back performance several versions after performance has been lost (i.e., Browsers), than to be disciplined, constantly test performance, and halt development (and understand/fix the root cause) the instant any performance anomaly is detected...
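
As a sketch of what that discipline can look like in practice (pytest-style, with a purely illustrative threshold that would need tuning per platform and CI machine):

    import os, tempfile, time

    def test_small_file_roundtrip_stays_within_budget():
        budget_ms = 50  # hypothetical budget, tune per platform/CI machine
        path = os.path.join(tempfile.mkdtemp(), "probe.bin")
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            f.write(b"x" * 1_000_000)
        with open(path, "rb") as f:
            f.read()
        elapsed_ms = (time.perf_counter() - t0) * 1e3
        assert elapsed_ms < budget_ms, f"possible I/O regression: {elapsed_ms:.1f} ms"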

7 hours ago by greggman3

Related: if you copy a file via the OS's copy function, the system knows the file was scanned and you get fast copies. If you copy the file by opening a new destination file for write, opening the source file for read, and copying bytes, then of course you trigger the virus scanner.

So for example I was using a build system and part of my build needed to copy ~5000 files of assets to the "out" folder. It was taking 5 seconds on other OSes and 2 minutes on Windows. It turned out the build system was copying using the "make a new file and copy bytes" approach instead of calling their language's library copy function, which, at least on Windows, calls the OS copyfile function. I filed a bug and submitted a PR. Unfortunately, while they acknowledged the issue, they did not take the PR nor fix it on their side. My guess is they don't really care about devs that use Windows.
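
To illustrate the general shape of the two approaches (in Python here, purely as an example - the build system in question was in another language): the library call can defer to platform fast paths, while the manual loop always pushes every byte through userspace and produces what looks like a brand new file:

    import shutil

    def copy_fast(src, dst):
        # Defers to platform fast paths where available (e.g. sendfile on
        # Linux, fcopyfile on macOS since Python 3.8) and preserves metadata.
        shutil.copy2(src, dst)

    def copy_slow(src, dst, bufsize=64 * 1024):
        # Byte-for-byte copy through userspace: re-reads and re-writes all the
        # data, drops metadata, and looks brand new to filter drivers/scanners.
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while chunk := fin.read(bufsize):
                fout.write(chunk)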

Note that python's copyfile does this wrong on MacOS. It also uses the open, read bytes, write bytes to new file method instead of calling into the OS. While it doesn't have the virus scanning issue (yet) it does mean files aren't actually "copied" so metadata is lost.

3 hours ago by vetinari

> Note that python's copyfile does this wrong on MacOS. It also uses the open, read bytes, write bytes to new file method instead of calling into the OS.

It doesn't, since 3.8. It tries fcopyfile() and only if that fails does it do the read/write dance.

See: https://github.com/python/cpython/blob/master/Lib/shutil.py#...

2 hours ago by greggman3

I tested in 3.8, didn't seem to work

https://bugs.python.org/issue38906

9 hours ago by chrisweekly

> "For any OS writer, present or future, a key aspect of OS development is writing I/O (and API) performance tests, running them regularly, and immediately halting development to understand/fix the root cause -- if and when performance anomalies are detected... in large software systems, in large codebases, it's usually much harder to gain back performance several versions after performance has been lost (i.e., Browsers), than to be disciplined, constantly test performance, and halt development (and understand/fix the root cause) the instant any performance anomaly is detected..."

Yes, this! And not just OS writers, but authors of any kind of software. Performance is like a living thing; vigilance is required.

9 hours ago by swiley

I've had the displeasure of using machines with McAfee software that installed a filesystem driver. It made the machine completely unusable for development and I'm shocked Microsoft thought making that the default configuration was reasonable.

8 hours ago by MereInterest

Copying or moving a folder that contained a .git folder resulted in a very large number of small files being created. To this day, I'm not sure if it was the antivirus, the backup software, or Windows' built-in indexing, but the computer would become unusable for about 5 minutes whenever I would move folders around. It was always Windows Explorer and System Interrupts taking up a huge amount of CPU, and holy cow was it annoying.

8 hours ago by roywiggins

Even worse than that, moving a lot of small files in WSL reliably BSODs my work machine due to some sort of interaction with the mandated antivirus on-access scanning, making WSL totally unusable for development on that machine.

8 hours ago by orangeoxidation

Good talk about debugging I/O in rustup:

https://youtube.com/watch?v=qbKGw8MQ0i8

16 hours ago by chungy

> Historically, the Windows Command Prompt and the built-in Terminal.app on macOS were very slow at handling tons of output.

A very old trick I remember on Windows is to minimize command prompts if a lot of output was expected and would otherwise slow down the process. I don't know if it turned write operations into a no-op, or bypassed some slow GDI functions, but it made an extremely noticeable difference in performance.

11 hours ago by dan-robertson

I strongly don’t think that throughput is what terminal emulators should optimise for: basically no one cares about how quickly you can get a load of text you will ignore. Instead, kill the command and modify it to reduce output.

I think the right thing to optimise for is input latency.

11 hours ago by joosters

Input latency is the problem for many terminals when a command accidentally generates megabytes of output text - the console locks up and won't respond to input until it has rendered and scrolled through every single line of output.

Surprisingly few terminals, when suddenly faced with a million new lines of text, decide "hey, let's just draw the last screen of output, I don't need to draw each and every line and then move the existing text upwards by a row"

10 hours ago by tux3

There's a good reason terminals can't just skip ahead: even if we ignore ANSI/vt100/etc. escape sequences, even "plain text" is not so plain.

Some applications output a few lines, then many MBs of "\rcurrent progress: xx.xxx%" that constantly overwrites itself.

Rendering is just not linear.
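
A contrived taste of why, in Python - thousands of rewrites of what is logically a single screen line:

    import sys, time

    # One logical line, rewritten over and over via carriage returns; a
    # terminal can't just "jump to the end" without replaying the stream.
    for i in range(1001):
        sys.stdout.write(f"\rcurrent progress: {i / 10:7.3f}%")
        sys.stdout.flush()
        time.sleep(0.001)
    print()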

11 hours ago by iainmerrick

I strongly disagree! It shouldn’t be that hard to make this fast, and it’s a very common source of slowdown in builds, so why not just make it fast?

As to why you shouldn’t just silence all that noisy spam output, you never know when you might need it to diagnose a weird build error.

Sure, it would be great if the build system always produced nice concise output with exactly the error messages you need, but that’s not always realistic. A big bunch of spam that you can dig into if you need to is almost as good -- as long as it doesn’t slow you down.

Edit to add: I guess another approach is to redirect spammy output to a file instead of the terminal. But then I have to know in advance which commands are going to be noisy, and it introduces another workflow (maybe the error message you need is right at the end of the output, but now it’s tucked away in a file instead of right there in your terminal).

Just make the terminal fast at handling large output. No reason that should make it harder to have good input latency too.

11 hours ago by dan-robertson

One thing to consider is that ssh is much more likely to be throttling than the terminal emulator, so having a really fast terminal emulator won't really fix problems with programs that output too much. I'm also not saying that throughput should be totally ignored, just that it shouldn't be the metric used to benchmark and optimise one's terminal emulator.

11 hours ago by coldtea

>I strongly don’t think that throughput is what terminal emulators should optimise for: basically no one cares about how quickly you can get a load of text you will ignore. Instead

They would care if they knew it would also block their program doing the output...

11 hours ago by dan-robertson

Ssh also has this problem to a greater extent and is very commonly used

11 hours ago by howaboutnope

Maybe not something to optimize for, but still something to optimize when you can do it without sacrificing speed or functionality in other areas.

5 hours ago by hinkley

As I recall, command line scrolling output was also faster if you minimized the window. You just needed an alert of some sort when the thing was done, like a bell or audio file.

There was a period of time where I earned brownie points by flushing all of the defunct debugging output from old bugs that nobody removed, typically for a 2x improvement in app performance. All because of screen scroll bottlenecks.

15 hours ago by lifthrasiir

IIRC the font rendering in Windows was surprisingly slow and even somewhat unsafe. In one case, lots of different webfonts displayed in MSIE broke the whole text rendering stack, and all text across the entire system disappeared. I wouldn't be surprised if this is a root cause of slow command prompts.

15 hours ago by patates

Well, that explains the crazy slow performance of an old Windows Forms app I had written back when I was a junior developer. I'll try to see if I can reach anyone from my first employer and make them disable the debug output. Would be an interesting contribution, considering that I left more than 10 years ago :)

10 hours ago by ygra

Originally Windows Forms had its own text rendering (and still has), but most controls now have a second code path that uses the system's text renderer, which got updated with better shaping, more scripts, etc., while GDI+ basically never got any updates. You can see that when there's a call to SetCompatibleTextRenderingDefault(false) in the code somewhere; then it's using GDI instead of GDI+.

13 hours ago by gmueckl

This sounds a lot like GDI resource exhaustion. It looks like the limits on handle IDs and the GDI heap size are still in place even in Windows 10.

11 hours ago by magicalhippo

> bypassed some slow GDI functions

I think it is mainly just the scroll operation. You can visually see the process speeding up as you reduce the height of the console window.

Typically I reduce it to a couple of lines, then it goes several times faster yet I can keep an eye on it.

7 hours ago by ajuc

> Currently, many Linux distributions (including RHEL and Debian) have binary compatibility with the first x86_64 processor, the AMD K8, launched in 2003. [..] What this means is that by default, binaries provided by many Linux distributions won't contain instructions from modern Instruction Set Architectures (ISAs). No SSE4. No AVX. No AVX2. And more. (Well, technically binaries can contain newer instructions. But they likely won't be in default code paths and there will likely be run-time dispatching code to opt into using them.)

I've used Gentoo (everything compiled for my exact processor) and Kubuntu (default binaries) on the same laptop a few years ago and the difference in perceived software speed was negligible.

5 hours ago by CoolGuySteve

It depends on the software. I've recompiled the R core with -march=native and -ftree-vectorize and gotten 20-30% performance improvements on large dataframe operations.

If it were up to me, the R process would be a small shim that detects your CPU and then loads a .so that's compiled specifically for your architecture.
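
A hypothetical sketch of that shim idea (in Python rather than R's actual startup code; the per-ISA library names and the py-cpuinfo dependency are assumptions for illustration only):

    def best_isa():
        # py-cpuinfo is a third-party package; an assumption here, not part of
        # R or of any distribution's tooling.
        from cpuinfo import get_cpu_info
        flags = set(get_cpu_info().get("flags", []))
        for isa in ("avx2", "avx", "sse4_2"):
            if isa in flags:
                return isa
        return "baseline"

    if __name__ == "__main__":
        isa = best_isa()
        # A real shim would now load the matching build, e.g. via
        # ctypes.CDLL(f"./libkernels_{isa}.so") - hypothetical per-ISA builds.
        print(f"would load ./libkernels_{isa}.so")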

The same improvements would probably be seen in video/image codecs, especially on Linux where browsers seem incredibly eager to disable hardware acceleration.

6 hours ago by taeric

My understanding is that the system's standard library already selects the faster code paths for the machine at run time, such that, for most of the heavy lifting in many programs, it isn't that different.

Granted, I actually do think I can notice the difference on some programs.

11 hours ago by fabian2k

The Python overhead is something I've noticed as well in a system that runs a lot of Python scripts. Especially with a few more modules imported, the interpreter and module loading overhead can be quite significant for short-running scripts.

Numpy was particularly slow during imports, but I didn't see an easy way to fix this apart from removing it entirely. My impression was that it does a significant amount of work on module loading, without a way around it.
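
For anyone chasing the same thing, `python -X importtime` (3.7+) gives a per-module breakdown of import cost; a cruder probe is just timing the import directly:

    import time

    t0 = time.perf_counter()
    import numpy  # assumes numpy is installed; any heavy module works here
    print(f"importing numpy took {(time.perf_counter() - t0) * 1e3:.0f} ms")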

I think the other side of "surprisingly slow" is that computers are generally very fast, and the things we tend to think of as the "real" work can often be faster than this kind of stuff that we don't think about that much.

11 hours ago by nullify88

I see this a lot with Ansible. It's not particularly slow, but running it places a bigger burden on laptop CPU and fans than I'd imagined.

9 hours ago by throwdbaaway

Noticed the same too. It is likely that we are both impacted by the very aggressive default of 1ms for `internal_poll_interval`: https://github.com/ceph/ceph-cm-ansible/pull/308
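
If the ini key matches the constant that PR tweaks (treat this as an assumption rather than gospel), it can be relaxed in ansible.cfg:

    [defaults]
    # seconds; the shipped default of 0.001 busy-polls workers very aggressively
    internal_poll_interval = 0.05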

6 hours ago by mkj

Huh, polling with a sleep() rather than a proper event wait seems like a bad code smell there...

7 hours ago by nullify88

Thanks for the heads up, I'll give that a go :)

2 days ago by h2odragon

I'll throw in "hidden network dependencies / name resolution"; it's amazing how things break nowadays when there's no net.

5 hours ago by iudqnolq

For years I thought sudo just had to take seconds to start up. Then one day I stumbled across the fact that this is caused by a missing entry in /etc/hosts. I still don't understand why this is necessary.

https://serverfault.com/a/41820
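
In case it saves someone the same search: the usual fix (per that answer) is making sure the machine's own hostname resolves locally, e.g. along these lines in /etc/hosts, with myhost standing in for whatever `hostname` prints:

    127.0.0.1   localhost
    127.0.1.1   myhost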

16 hours ago by segmondy

Number one rule of distributed systems, "the network is not reliable"

16 hours ago by donw

Also the number one rule of Comcast.

13 hours ago by pjmlp

Second rule, trust no one.

17 hours ago by verdverm

I'd add SaaS dependencies as well, whether it be slowness or downtime

16 hours ago by segmondy

This is a solved problem tho: timeouts / retries / circuit breakers / fallbacks, etc.

See - https://github.com/resilience4j/resilience4j

15 hours ago by verdverm

I wouldn't call it solved if a downstream SaaS is down and my build still times out despite the aforementioned resiliency.

14 hours ago by balloneij

Windows' slow thread spawn time is incredibly noticeable when you use Magit in Emacs.

It runs a bunch of separate git commands to populate a detailed buffer. It's instantaneous on macOS, but I have to sit and stare on Windows.

14 hours ago by brabel

Do you mean *process* spawn time?

From the article:

> On Windows, assume a new process will take 10-30ms to spawn. On Linux, new processes (often via fork() + exec()) will take single digit milliseconds to spawn, if that.

> However, thread creation on Windows is very fast (~dozens of microseconds).

13 hours ago by murkt

Yes, they clearly mean process spawn time:

> It runs a bunch of separate git commands

14 hours ago by TeMPOraL

One of many reasons why I prefer to run Emacs under WSL1 when on Windows. WSL1 has faster process start times.

But then with git, there are other challenges. It took me a while to make Magit usable on our codebase (which for various reasons needs to be on the Windows side of the filesystem) - the main culprits were submodules, and someone's bright recommendation to configure git to query submodules when running git status.

Here are the things I did to get Magit status on our large codebase to show in a reasonable time (around 1-2 seconds):

- git config --global core.preloadindex true # This should default to true, but sometimes might not be; it lets git operations read the index in parallel.

- git config --global gc.auto 256 # Reduce GC threshold; didn't do much in my case, but everyone recommends it in case of performance problems on Windows...

- git config status.submoduleSummary false # This did the trick! It significantly cut down time to show status output.

Unfortunately, it turned out that even with submoduleSummary=false, git status still checks whether the submodules are there, which impacts performance. On the command line, you can use the --ignore-submodules argument to solve this, but for Magit I didn't find an easy way to configure it (and didn't want to defadvice the function that builds the status buffer), so I ended up editing .git/config and adding "ignore = all" to every single submodule entry in that config.
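
For reference, each entry in .git/config ends up looking roughly like this (submodule path and URL are placeholders):

    [submodule "third_party/somelib"]
        url = https://example.com/somelib.git
        ignore = all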

With this, finally, I get around ~1s for Magit status (and about 0.5s for raw git status). It only gets longer if I issue a git command against the same repo from the Windows side - git detects the index isn't correct for the platform, and rebuilds it, which takes several seconds.

Final note: if you want to check why Git is running slow on your end, set GIT_TRACE_PERFORMANCE to true before running your command[0], and you'll learn a lot. That's how I discovered submoduleSummary = false doesn't prevent git status from poking submodules.
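
For example (bash/zsh syntax; on Windows shells set the variable first with set or $env:):

    GIT_TRACE_PERFORMANCE=true git status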

--

[0] - https://git-scm.com/docs/git, ctrl+f GIT_TRACE_PERFORMANCE. Other values are 1, 2 (equivalent to true), or n, where n > 2, to output to a file descriptor instead of stderr.

4 hours ago by balloneij

Wow that's very helpful. I'll give it a shot next time I'm at work

13 hours ago by monsieurbanana

To precise, you say WSL1 is faster compared to Windows, or compared to WSL2? With WSL2 (and native-comp emacs branch) I've never noticed any unusual slowdowns with magit or other.

I haven't tried WSL1.

13 hours ago by TeMPOraL

WSL1 process creation is faster compared to Windows, because part of the black magic it does to run Linux processes on NT kernel is using minimal processes - so called "pico processes"[0]. These are much leaner than standard Windows processes, and more suited for UNIX-style workflow.

I can't say if it's faster relative to WSL2, but I'd guess so. WSL2 is a full VM, after all.

--

[0] - https://docs.microsoft.com/en-us/archive/blogs/wsl/pico-proc...

7 hours ago by ok123456

Writing things that do a lot of forking, like using the multiprocessing or subprocess modules in Python, is basically unusable for my coworkers who use Windows.

Startup time for those processes goes from basically instant to 30+ seconds.

I researched this a little bit and it seems that it may be related to DEP.
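
A minimal probe of the effect, if anyone wants to compare machines (timings will vary wildly with AV configuration; on Windows, multiprocessing uses the "spawn" start method, which launches a fresh interpreter per child):

    import multiprocessing as mp
    import time

    def noop():
        pass

    if __name__ == "__main__":
        t0 = time.perf_counter()
        procs = [mp.Process(target=noop) for _ in range(20)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(f"20 child processes took {time.perf_counter() - t0:.2f}s")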

5 hours ago by riskable

It's basically just Windows: back when the current Windows architecture was designed (OS/2 and Windows NT going forward - not Win9x), the primary purpose of any given PC was to run one application at a time. Sure, you could switch applications and that was well accounted for, but the entire concept was that one application would always be in focus, and pretty much everything from a process/memory/filesystem standpoint is based around this assumption.

Even for servers the concept was and is still just one (Windows) server per function. If you were running MSSQL on a Domain Controller this was considered bad form/you're doing something wrong.

The "big change" with the switch to the NT kernel in Windows 2000 was "proper" multi-user permissions/access controls but again, the assumption was that only one user would be using the PC at a time. Even if it was a server! Windows Terminal Server was special in a number of ways that I won't get into here but know that a lot of problems folks had with that product (and one of many reasons why it was never widely adopted) were due to the fact that it was basically just a hack on top of an architecture that wasn't made for that sort of thing.

Also, back then PC applications didn't have too many files and they tended to be much bigger than their Unix counterparts. Based on this assumption, they built hooks into the kernel that allow 3rd party applications to scan every file on use/close. This in itself was a hack of sorts to work around the problem of viruses, which really only exist because Windows makes all files executable by default. Unfortunately, by the time Microsoft realized their mistake it was too late to change it and would break (fundamental) backwards compatibility.

All this and more is the primary reason why file system and forking/new process performance is so bad on Windows. Everything that supposedly mitigates these problems (keeping one process open/using threads instead of forking, using OS copy utilities instead of copying files via your code, etc) are really just hacks to work around what is fundamentally a legacy/out-of-date OS architecture.

Don't get me wrong: Microsoft has kept the OS basically the same for nearly 30 years because it's super convenient for end users. It probably was a good business decision but I think we can all agree at this point that it has long since fallen behind the times when it comes to technical capabilities. Everything we do to make our apps work better on Windows these days are basically just workarounds and hacks and there doesn't appear to be anything coming down the pipe to change this.

My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

3 hours ago by Joker_vD

> Also, back then PC applications didn't have too many files and they tended to be much bigger than their Unix counterparts.

Okay, let me interrupt you right here. To this very day Linux has a default maximum number of file descriptors per process of 1024. And select(3), in fact, can't be persuaded to use FDs larger than 1023 without recompiling libc.

Now let's look at Windows XP Home Edition -- you can write a loop of "for (int i = 0; i < 1000000; i++) { char tmp[100]; sprintf(tmp, "%d", i); CreateFile(tmp, GENERIC_ALL, FILE_SHARE_READ, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); }" and it will dutifully open a million file handles in a single process (although it'll take quite some time) with no complaints at all. Also, on Windows, select(3) takes an arbitrary number of socket handles.

I dunno, but it looks to me like Windows was actually designed to handle applications that would work with lots of files simultaneously.

> fundamentally a legacy/out-of-date OS architecture

You probably wanted to write "badly designed OS architecture", because Linux (if you count it as a continuation of UNIX) is actually an older OS architecture than Windows.

an hour ago by TeMPOraL

> I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

I think one way they could pull it off is to do a WSL2 with Windows - run the NT kernel as a VM on the new OS.

As for the price, I think they're already heading there. They already officially consider Windows to be a service - I'm guessing they're just not finished getting everyone properly addicted to the cloud. If they turn Windows into a SaaS execution platform, they may just as well start giving it away for free.

4 hours ago by fasquoika

>My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

https://en.wikipedia.org/wiki/Midori_%28operating_system%29

3 hours ago by ok123456

>My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

More and more stuff gets offloaded onto WSL for things which don't need interactive graphics or interoperability through the traditional Windows IPC mechanisms.

11 hours ago by pjc50

Yeah, I hope this is one of the issues Microsoft addresses some time, because although CreateProcess is a slightly nicer API in some regards, the cost is very high. It may not be possible to fix it without removing backwards compatibility, but maybe we could have a new "lite" API.

The bit about Windows Defender being hooked into every process is also infuriating. We pay a high price for malware existing even if we're never hit by it.

10 hours ago by TeMPOraL

Yes. This makes me wonder if I could speed up our builds by 2x by whitelisting the source repository folder. If it's at all possible (and company policy allows for it)...

2 hours ago by skrebbel

One thing that deeply frustrates me is that I simply don't know which things are slowed down by Defender. I can add my source repos to some "exclude folder" list deep in the Defender settings, but I've yet to figure out whether that actually does anything, whether I'm doing it right, or whether I should whitelist processes instead of folders, or both - I have no idea.

If anyone here knows how to actually see which files Defender scans / slows down, then that would be awesome. Right now it's a black box and it feels like I'm doing it wrong, and it's easily the thing I dislike the most about developing on Windows.

5 hours ago by brundolf

This is a fascinating set of shop knowledge from someone who's clearly spent many years in a set of trenches that I hope I never have to enter. Great stuff.

an hour ago by tomhallett

Does anyone have any recommendations for interesting "lessons learned" type information like this blog article has?
