magnio
pmontra
Thanks. Your UI is much better than the one on the site. There are two problems there:
1. The vertical text is difficult to read despite its size, because it's vertical.
2. When we click on it a large part of the text disappears below the bottom margin of the page.
Problem number 1 is not so bad but the combination with 2 kills the UX. The text in the clicked bar should appear somewhere on screen, horizontally.
Edit: if anybody like me wonders what Zippy is: it's a C++ compression library from Google. It's called Snappy now [1]
samwho
1. I didn't expect people to have such a negative reaction to sideways text. It doesn't bother me personally, but it seems some people really can't work with it. I'll likely avoid it in everything else I do going forward.
2. I feel a big part of the problem here is that it's not obvious how to get it back once it's gone. I could certainly try making the text visible after the bar is gone.
wruza
I guess the most readable form would be a static logarithmic plot with colored dots/bars and a legend in the corner (or on tap/hover). Everyone interested in these numbers likely knows how to read it.
pmontra
Point 1, we're used to sideways text because of books on a shelf but here it's compounded by the text almost disappearing after the click. The only way to get it back is clicking multiple times in the empty space above the bar. The only hint to click there is in one of the steps in the text box on the left, which probably nobody reads. Something to click above the bar (an arrow up?) would probably remove the need for the help text. Other hints could remove the need for any help text and free the box to display the content of the clicked bar.
neilv
It seems like there's some potential here, but not quite nailed yet.
I'd already seen cost model numbers like these before, but this interactive visualization still seemed to obscure the information as I was taking a first look.
I wonder whether it would be more useful adapted to a visualization/calculator for specific numbers, maybe for multiple operations in an algorithm, and the alternatives for implementing each? (And the click-to-scale is for selecting N for each operation, and maybe somehow constants?)
klyrs
FWIW I don't mind the sideways text, but the ant's-eye-view histogram is one of the strangest user experiences I've had in data science.
nurettin
Not to diminish your art or anything, but if you just want to present some numbers, a <table> with two columns is fine. We can infer scale.
brody_hamer
My intuition was that scrolling would increase the y-axis maximum. (Effectively, scrolling would “zoom out”)
And that scrolling horizontally would pan me through the content.
Browsing on mobile, I should clarify.
But I’ll add that I also got the hang of scrolling back “in” fairly quickly. After I had zoomed out a couple times, then finally stopped to read the instructions.
adr1an
Mmm.. you could rotate both text and bars, right? Like, horizontal bars.
mattl
It also has very poor contrast. I can turn my phone on the side to read the vertical text but the white text is impossible to read on the yellow and orange backgrounds.
Narishma
Another problem is that on low resolution screens (or small browser windows) the boxes on the top left hide the text on the bars behind. I had to zoom out to 50% for it to be readable, which then put other bars behind the boxes.
undefined
denotational
The original source linked from this post [0] uses models that assume exponential growth of bandwidths over time (see the JavaScript at the bottom of the page): this is fun, but these figures are real things that can be measured, so I think it's very misleading for the site in this link to present them without explaining that they're basically made up.
The 1Gb network latency figure on this post is complete nonsense (I left another comment about this further down); looking at the source data it’s clear that this is because this isn’t based on a 1Gb network, but rather a “commodity NIC” with this model, and the quoted figure is for a 200Gb network:
function getNICTransmissionDelay(payloadBytes) {
  // NIC bandwidth doubles every 2 years
  // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
  // TODO: should really be a step function
  // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
  // 125*10^6 = a*b^x
  // b = 2^(1/2)
  // -> a = 125*10^6 / 2^(2003.5)
  var a = 125 * Math.pow(10,6) / Math.pow(2,shift(2003) * 0.5);
  var b = Math.pow(2, 1.0/2);
  var bw = a * Math.pow(b, shift(year));
  // B/s * s/ns = B/ns
  var ns = payloadBytes / (bw / Math.pow(10,9));
  return ns;
}
[0] https://colin-scott.github.io/personal_website/research/inte...
sspiff
It's very surprising to me that a main memory reference takes longer than sending 1K over a gigabit network.
IshKebab
Because they're comparing two different things. The main memory reference is latency, the 1K is a throughput measurement.
In other words they're not saying "if you send only 1K of data it will take this long". They're saying "if you send 1 GB, then the total time divided by 1 million is this much".
denotational
This figure quoted on this website is completely wrong: the serialisation delay of 1KiB on a 1Gb link is much higher than that, it’s actually closer to 10us.
This is a transcription error from the source data, which as it turns out is based on a rough exponential model rather than real data, but first let’s consider the original claim:
If there’s a buffer on the send side, then assuming the buffer has enough space, the send is fire and forget, and costs a 1KiB memcpy regardless of the link speed.
If there’s no buffer, or the buffer is full, then you will need to wait the entire serialisation delay, which is orders of magnitude higher than 44ns.
One might further make assumptions on the packet size and arrival rate distributions, and compute an expected wait time, but otherwise the default assumption for a figure like this would be to assume the link is saturated, and the sender has to wait the whole serialisation delay.
> They're saying "if you send 1 GB, then the total time divided by 1 million is this much".
This would take ~8s to serialise, neglecting L1 overheads, dividing that by 1MM gives you 8us (my ~10us figure above), which is ~200x higher than 44ns.
Looking at the source data [0], it says “commodity network”, not 1Gb, so based on the presented data, they must be talking about a 200Gb network, which is increasingly common (although rare outside of very serious data centres), not a 1Gb network like the post claims.
Interestingly the source data quotes an even smaller number of 11ns when first loaded, which jumps back to 44ns if you change the year away from 2020 (the default when it loads) and back again.
That implies 800Gb: there is an 800GbE spec (802.3df), but it’s very recent, and probably still too specialised/niche to be considered “commodity”.
Digging further, we see that the source data is computed from models that show various bandwidths growing exponentially over time, not from any real measurements, so these data are extremely rough, even though they are real figures that could actually be measured:
function getNICTransmissionDelay(payloadBytes) {
  // NIC bandwidth doubles every 2 years
  // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
  // TODO: should really be a step function
  // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
  // 125*10^6 = a*b^x
  // b = 2^(1/2)
  // -> a = 125*10^6 / 2^(2003.5)
  var a = 125 * Math.pow(10,6) / Math.pow(2,shift(2003) * 0.5);
  var b = Math.pow(2, 1.0/2);
  var bw = a * Math.pow(b, shift(year));
  // B/s * s/ns = B/ns
  var ns = payloadBytes / (bw / Math.pow(10,9));
  return ns;
}
[0] https://colin-scott.github.io/personal_website/research/inte...
undefined
Gibbon1
I think VME busses were extended using high speed serial links in order to send data faster than you could using the 32 bit address/data bus.
genman
> Send 1K bytes over 1 Gbps network = 44ns
Doubt.
lukasm
A few others:
40ms - average human thinks the operation is instant.
15s - user gets frustrated and closes your app or website.
8n4vidtmkvmk
I don't think I'd wait even 15 seconds. Maybe on average across all users because a lot of users have slower connections or devices so they're more patient. But I'd expect to have something in 3 or 4 seconds. Even that I consider slow. At probably 8 or 10 I'm gone.
metanonsense
I say this without hate: it's absolutely fascinating how bad this UX is. Having said that, I'm sure I have committed worse UX crimes in my career, but when the curse of knowledge hits you, only your users can see the problems. Luckily samwho has the HN community, which is not shy about criticizing ;-).
I think it's really interesting and instructional to think about why the UX feels so bad. My ideas are:
- The page has one main job: presenting latency numbers to the viewer.
- This job is easy enough. There are many ways to get it done, so people expect the main job to be done at least as well as with those other ways.
- I hypothesize that the page prioritizes other jobs before the main job. It tries to make the relationships between those numbers fun to discover.
- Users are foremost interested in the main job, but that main job is done poorly because you don't see all the latency numbers in one view (maybe after clicking a few times in the right places, but for such an easy task this is way too much work).
- It's very difficult to grasp the mental model of the UI just by using it. You click somewhere and things happen. Even now that I have used it for a few minutes, I have no idea what it does or is supposed to do. I found it very interesting how much it frustrated me that repeated clicks are not idempotent and made the UI "diverge". It makes you somehow feel lost and worried about breaking things.
- The user must read the help text. But users don't do this. At least I didn't until I was very frustrated. Then this help text changes. And changes again. I don't want to learn a new application only to read a simple list of numbers.
These are my main points, I think. To me, it was very interesting. Thanks for that, samwho, and kudos for sharing this publicly :-)
samwho
No hate taken. The art is not the artist, etc. :)
I'm in the middle of writing up a self-reflective post about this and I just wrote the following:
"Ultimately, the way I'm presenting the data is egregious and unnecessary. I can see why people are annoyed about it. The extra visuals and interactions get in the way of what's being shown, they don't enhance it. Tapping around feels fun to me, but it isn't helping people understand. This experiment prioritised form way more than it prioritised function."
We've come to some of the same conclusions, though you in more detail than me. The idea about clicks not being idempotent wasn't something I ever noticed, but now you've said it I can't not.
If you're willing, I'd love to connect with you 1:1 and talk a bit more about this. My contact details are on my homepage.
metanonsense
Great attitude :) I'll try to get in contact but don't be mad if I forget 8-|
heisenzombie
I mostly agree, and certainly a list of numbers or maybe a log plot would be better if the goal was communicating the raw data. Certainly the click-about jumpy interface is pretty janky. However there’s one thing I think this does better than a list of numbers would: Most people (me included) have a hard time getting an intuitive feel for things like just how much smaller 1ns is compared to 1ms or truly how much a billion dollars is. SI prefixes or a log scale can give the wrong /feeling/ even when they’re giving the right /information/.
Sometimes, the inconvenience of a linear scale is the point.
Pages that I think use this technique to really good effect:
https://xkcd.com/1732/
https://mkorostoff.github.io/1-pixel-wealth/
1letterunixname
Log scale on a static graph is far easier to visualize and understand without a complex, UX-unfriendly interaction that doesn't make sense.
karmakaze
The title is missing "Latency", which would surface many other results when searching. My go-to is this one [0] because it's plain text and includes "Syscall" and "Context switch".
Latency numbers every programmer should know
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Syscall on Intel 5150 ...................... 105 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Context switch on Intel 5150 ............. 4,300 ns = 4 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
Round trip within same datacenter ...... 500,000 ns = 0.5 ms
Read 1 MB sequentially from SSD* ..... 1,000,000 ns = 1 ms
Disk seek ........................... 10,000,000 ns = 10 ms
Read 1 MB sequentially from disk .... 20,000,000 ns = 20 ms
Send packet CA->Netherlands->CA .... 150,000,000 ns = 150 ms
* Assuming ~1GB/sec SSD
[0] https://gist.github.com/nelsnelson/3955759
samwho
I added the word "latency" into the title of the page. Sorry for the confusion.
_notreallyme_
I don't get how expressing these numbers in time units is useful.
I've been a developer for embedded systems in the telecom industry for nearly two decades now, and until today I had never met anyone using anything other than "cycles" or "symbols"... except, obviously, for the mean RTT US<->EU.
mlyle
> I've been a developer for embedded systems in the telecom industry for nearly two decades now
On big computers, cycles are squishy (HT, multicore, variable clock frequency, so many clock domains) and not what we're dealing with.
If we're making an architectural choice between local storage and the network, we need to be able to make an apples to apples comparison.
I think it's great this resource is out there, because the tradeoffs have changed. "RAM is the new disk", etc.
_notreallyme_
Then why not just use qualifiers, from slowest to fastest? You might not know this, but you can develop bare-metal solutions for HPC that are used in several industries, like telecommunications. Calculations based on cycles are totally accurate whatever the number of cores...
Izkata
Because it's something very different. I was expecting standalone numbers that would hint to the user something is wonky if they showed up in unexpected places - numbers like 255 or 2147483647.
saagarjha
It gives you a rough understanding of how many you can do in a second.
undefined
tux3
I bring criticism: The first few bars on my screen cannot be read, as the text is hidden behind the floating HUD. If I click on the next few bars, to bring those below the box, then the bar becomes too small and the text is cropped, so I cannot read it either.
It is also a bit uncomfortable to read 90° text. It's fun to click the bars and play with the UI, but not to actually read what they say. It's a nice visualization, but it suffers from form over function! I can't comfortably use it to learn about the numbers I should know :(
samwho
I appreciate the feedback! I'm trying to get better and comments like this genuinely help.
Are you reading on a landscape tablet? I know the sizes of stuff are wrong on that form factor. Desktop and mobile shouldn't have the first couple of bars obscured.
The sideways text is meant to be a subtle nod to the fact the page scrolls sideways, but I agree it's not as nice to read as it would be were the text the right way around.
readingnews
>> Desktop and mobile shouldn't have the first couple of bars obscured.
I am on a desktop with a huge monitor in ultra high res. It is pretty bad.
>> The sideways text is meant to be a subtle nod to the fact the page scrolls sideways, but I agree it's not as nice to read as it would be were the text the right way around.
Then the subtle nod is lost on me... why not turn the text when I click, or have hover text, or make the whole page rotated 90 degrees?
Like the original response, it was fun for one second, then I was like: I cannot read this stuff, it's painful.
tux3
I'm reading on a 1080p desktop. Although accounting for the browser chrome (bookmarks, tabs on the side), my window.inner{Width,Height} comes out as 1583x950
elaus
For me (pretty default FullHD desktop screen in landscape) the first bar is not obscured, but the next two are covered by the floating UI.
samldev
I’m on iOS and can’t see the bottom of any of the bars. They’re hidden behind the Safari controls at the bottom of the screen.
sccxy
Add some padding to the bottom and use horizontal text for the "compressed" columns.
It is big but unreadable on a 4K 32-inch screen.
montebicyclelo
Lineage:
- Peter Norvig (original (?)) - http://norvig.com/21-days.html#answers
- Jeff Dean (slides) - https://www.cs.cornell.edu/projects/ladis2009/talks/dean-key...
- Colin Scott - https://github.com/colin-scott/interactive_latencies?tab=rea...
- this post
codetrotter
Don’t forget Grace Hopper (1906 – 1992), American computer scientist, mathematician, and United States Navy rear admiral.
> Hopper became known for her nanoseconds visual aid. People (such as generals and admirals) used to ask her why satellite communication took so long. She started handing out pieces of wire that were just under one foot long—11.8 inches (30 cm)—the distance that light travels in one nanosecond. She gave these pieces of wire the metonym "nanoseconds." She was careful to tell her audience that the length of her nanoseconds was actually the maximum distance the signals would travel in a vacuum, and that signals would travel more slowly through the actual wires that were her teaching aids. Later she used the same pieces of wire to illustrate why computers had to be small to be fast. At many of her talks and visits, she handed out "nanoseconds" to everyone in the audience, contrasting them with a coil of wire 984 feet (300 meters) long, representing a microsecond. Later, while giving these lectures while working for DEC, she passed out packets of pepper, calling the individual grains of ground pepper picoseconds.
zeroonetwothree
Why not just make them 27cm and then you get the true distance signals travel?
dahart
Maybe because speed of light is more approachable to a general audience, and maybe because she’s making a point about absolute upper bounds for all possible signals and didn’t want to reference something that could be improved, or because signal speed depends on the medium and maybe she didn’t feel like rat-holing on materials to make a point about speed of light? Light signals travel at light speed through space. 27cm works for electrical signals but not optical signals in fiber or space, nor other signal types. 30cm as a bound always works, and happens to be a nice round number too, more memorable… I see a lot of reasons why not just. :P
menaerus
Great content from the Jeff Dean slides. Since they date from 2009, I wonder which of these numbers he has since changed his mind about.
baapercollege
a well documented rabbit hole
danpalmer
Some of these have always been quite counterintuitive to me, particularly the networking ones. Google Stadia was always an exercise in edge cases in expectations on these numbers for me.
It felt weird that a gaming computer in a datacenter could be "faster" than a computer on my network, but one frame takes ~16ms to render, bandwidth is big enough to stream, network latency might only be another ~frame, and suddenly the image is on my machine within 2 or 3 frames. However, there were unexpectedly slow parts! The controller actually ran over WiFi directly, so that inputs went straight to the server rather than via Bluetooth; compared with Xbox Cloud on a Bluetooth controller, this made a huge difference, which makes sense because Bluetooth's latency might be 1-2 frames itself. It's counterintuitive to me that the latency from my controller to my computer, less than 1m away, might be higher than the latency from my computer, to my router, to my ISP, to Google's DC, and to a server. Similarly, the latency on HDMI from a computer to my TV is in the same ballpark of a few frames, because of all the processing my cheap TV does to look good.
samwho
Man, I had such high hopes for Stadia. I was an SRE at Google when it was being built and knew some of the traffic folks working on the networking parts of it. Some of the absolute best people. Such a shame.
I’d never have considered adding WiFi to the controller to _reduce_ latency, that’s absolutely wild. Thanks for sharing!
CrimsonRain
I'm not sure why you'd find it wild. Any gamer with decent tech knowledge never buys Bluetooth wireless devices (mouse, kb, headset, etc.) for gaming, precisely for this reason. Sites like RTINGS measure latency for the same reason.
danpalmer
Gamers know about bluetooth latency when specifically compared to wired peripherals, and in that case I think that's intuitive. The counterintuitive part is that WiFi – a much more complex spec, plus all the IP stack, connecting to web services from a small low powered device, etc – is faster than a "simple" bluetooth connection designed for such devices.
account-5
I'm on Firefox mobile. I can make neither head nor tail of what this is meant to demonstrate.
politelemon
The meaning of the number with the + and - is completely escaping me. It looks like a year but goes into the future.
samwho
It is indeed a year. The latencies are based on the calculations done by Colin Scott in https://github.com/colin-scott/interactive_latencies and support projecting out into the future. Sorry it's not as obvious as it could be.
alkonaut
The main memory one stays 100ns for every year?
dan-robertson
It’s indeed been about 100ns for a long time. Part of this is that memory is larger though, so there may be more decisions to make to look up a line (and those are made faster). And throughput has improved. Some consumer high-end desktop hardware (think gaming rather than workstation) can have lower latency ram.
userbinator
Slightly mysterious title. I thought it would be about 16, 256, 65536, 16777216, 4294967296, etc.
samwho
It's a remix of something from quite a long time ago by the same name. There's another comment in here somewhere that links to the full lineage.
karmakaze
It's missing "Latency" at the beginning of the title, then it's very familiar to those who've seen them listed before.
jmpman
The 1MB streaming data from disk should be closer to 4ms (250MB/s). Disk streaming rates (on 7200rpm drives) have not improved significantly, based upon the published “sustained transfer rate OD” metrics from the three drive manufacturers.
peteri
The time to send over a 1Gbps network looks very wrong. Each bit takes 1ns (by definition), so sending 1K bytes must take at least 8192ns.
nebulous1
Yes, I think that's a transcription error
serial_dev
Holy smokes, this design is terrible and the site is unusable (on mobile at least).
samwho
Do you have any specific feedback about why it's unusable? I'd like to get better at this.
serial_dev
First of all, I hope you don't take the feedback here personally. It's great that you tried your design skills on a risky, unconventional presentation form. With all that said, I still think this page is unusable. The colors and fonts are nice, though.
Just open your site on mobile and imagine that you don't know the dataset by heart.
Can you glance at the values, can you easily compare different values? When you scroll half a screen to the right, are you completely lost? I know I am. When you select the largest or smallest item, what do you see? What if you then scroll to the other end of the spectrum? Can you read what the smallest item stands for? L1 C something? Can you change the scale so that you can improve what you see? Does scrolling up and down behave intuitively?
All in all, it's just impossible to extract useful information and context using this design. It looks great, you could post it on Behance and it will get positive feedback, but when someone actually wants to use it to discover what the data says, it's a very frustrating user experience.
paddim8
I honestly don't understand what people are talking about. Looks fine to me.
```
for (const { children } of document.getElementsByClassName("latency-container")) {
  console.log(`${children[0].innerText.padEnd(35, " ")} = ${children[1].innerText}`);
}
```
L1 cache reference = 1ns
Branch mispredict = 3ns
L2 cache reference = 4ns
Mutex lock/unlock = 17ns
Send 1K bytes over 1 Gbps network = 44ns
Main memory reference = 100ns
Compress 1K bytes with Zippy = 2us
Read 1 MB sequentially from memory = 3us
Read 4K randomly from SSD = 16us
Read 1 MB sequentially from SSD = 49us
Round trip within same datacenter = 500us
Read 1 MB sequentially from disk = 825us
Disk seek = 2ms
Send packet CA->Netherlands->CA = 150ms
Can we discuss the actual material now.