
Starlevel004

> I believe what is happening is that those images are being drawn by some script-kiddies. If I understand correctly, the website limited everyone to 1 pixel per 30 seconds, so I guess everyone was just scripting Puppeteer/Chromium to start a new browser, click a pixel, and close the browser, possibly with IP address rotation, but maybe that wasn't even needed.

I think you perhaps underestimate just how big of a thing this became basically overnight. I mentioned a drawing over my house to a few people and literally everyone instantly knew what I meant without even saying the website. People love /r/place-style things every few years, and this one having such a big canvas and being on a world map means there is a lot of space for everyone to draw literally where they live.

indrora

It's also way more than 1px/30s -- it's like 20px/30s, and you have a "tank" of them, which you can expand to however big you want.

Placing pixels gives you points, which you can turn into more pixels or a bigger bag of pixels over time. I've seen people who have done enough pixel pushing that they get 3-4K pixels at a time.

Aurornis

> I think you perhaps underestimate just how big of a thing this became basically overnight.

They don’t need to estimate, because in the article they talked to the site's developer and got traffic numbers: an estimated 2 million users.

That’s 1500 requests per user, which implies a lot of scripting is going on.
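
Back-of-envelope, using the article's figures:

    3,000,000,000 requests / 2,000,000 users = 1,500 requests per user

If a normal map load is 10-20 requests, that average is the equivalent of every single user loading the map roughly 100 times a day.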

zahlman

> I think you perhaps underestimate just how big of a thing this became basically overnight. I mentioned a drawing over my house to a few people and literally everyone instantly knew what I meant without even saying the website.

On the other hand, this is the first I've heard of this thing.

johnisgood

I've known about this kind of pixel drawing, but it was on an empty canvas.

yifanl

They have the user count from the dev: 2 million daily users shouldn't be generating billions of requests unless a good portion of them are botting.

zamadatix

Why not? These are tile requests, right, not login requests or something? So shouldn't a single user be expected to consume a few thousand while zipping around the map looking at the overlaid drawings?

I'm sure there is some botting, it's basically guaranteed, but I wouldn't be surprised if nearly half the traffic was "legitimate". The bots don't normally need to reload (or even load) the map tiles anyways.

LoganDark

> I believe what is happening is that those images are being drawn by some script-kiddies.

Oh absolutely not. I've seen so many autistic people literally just nolifing and collaborating on huge pieces of art on wplace. It is absolutely not just script kiddies.

> 3 billion requests / 2 million users is an average of 1,500 req/user. A normal user might make 10-20 requests when loading a map, so these are extremely high, scripted use cases.

I don't know about that either. Users don't just load a map; they look all around the place to search for and see the art others have made. I don't know how many requests are typical for "exploring a map for hours on end", but I imagine a lot of people are doing just that.

I wouldn't completely discount automation, but these usage patterns seem far from impossible. Especially since wplace didn't expect sudden popularity, so they may not have optimized their traffic patterns as much as they could have.

Karliss

Just scrolled around for 2-3 minutes with the network monitor open. That already resulted in 500 requests and 5 MB transferred (after filtering for vector tile data). Not sure how many of those were cached by the browser with no actual request, cached with the browser exchanging only headers, or cached by Cloudflare. I'm guessing the typical 10-20 requests/user case is for an embedded map fragment, like those commonly found on a contact page, where most users don't scroll at all, or at most zoom out slightly to better see the rest of the city.
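
Extrapolating from that quick test (very rough, and ignoring caching):

    500 requests / 2.5 min ≈ 200 requests/min

At that rate, the article's 1,500 requests/user average corresponds to well under ten minutes of active scrolling.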

nemomarx

There are some user scripts to overlay templates on the map and coordinate working together, but I can't imagine those increase the load much. What might is that wplace has been struggling under the load, so you have to refresh to see your placed pixels or any changes, and that could be causing more calls per hour.

andai

From the screenshot I wanted to say: couldn't this be done on a single VPS? It seemed over-engineered to me. Then I realized the silly pixels are on top of a map of the entire earth. Dang!

I'm curious what the peak req/s is like. I think it might be just barely within the range supported by benchmark-friendly web servers.

Unless there's some kind of order of magnitude slowdowns due to the nature of the application.

Edit: Looks like about 64 pixels per km (4096 per km^2). At full color, uncompressed, that's about 8TB to cover the entire earth (thinking long-term!). A 10TB box is €20/month from Hetzner. You'd definitely want some caching though ;)
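
Rough math behind that figure (assuming ~4 bytes per uncompressed pixel and Earth's full surface area):

    510,000,000 km^2 x 4,096 px/km^2 ≈ 2.1e12 pixels
    2.1e12 px x 4 bytes ≈ 8.4 TB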

Edit 2: wplace uses 1000x1000 px PNGs for the drawing layer. The drawings load instantly, while the map itself is currently very laggy, and some chunks are permanently missing.

TylerE

"€20/month from Hetzner" is great until you actually need it to be up and working when you need it.

motorest

> "€20/month from Hetzner" is great until you actually need it to be up and working when you need it.

I manage a few Hetzner cloud instances, and some report perfect uptime of over a year. For the ones that don't, I was the root cause.

What exactly leads you to make this sort of claim? Do you actually have any data or are you just running your mouth off?

immibis

IME Hetzner's not unreliable. I don't think you could serve 100k requests per second on a single VPS though. (And with dedicated, you're on the hook for redundancy yourself, same as any dedicated.)

cyberpunk

They’re unreliable as soon as you have to deal with their support, who have the technical knowledge of a brick.

And as soon as you have to do any business with / deal with the German side of the company, expect everything to slow down to two weeks per response, which will still be incorrect.

They are simply not worth the hassle. Go with a competent host.

colinbartlett

Thank you for this breakdown and for this level of transparency. We have been thinking of moving from MapTiler to OpenFreeMap for StatusGator's outage maps.

hyperknot

Feel free to migrate. If you ever worry about high availability, self-hosting is always an option. But I'm working hard on making the public instance as reliable as possible.

charcircuit

>Nice idea, interesting project, next time please contact me before.

It's impossible to predict whether one's project will go viral.

>As a single user, you broke the service for everyone.

Or you did, by not having a high enough fd limit. Blaming sites for using a service too much when you advertise that there are no limits is not cool. It's not like wplace themselves were maliciously hammering the API.

columb

You are so entitled... People like you are why most nice things have "no limits, but...". Not cool, stress-testing someone's infrastructure. Not cool. The author of this post was more than understanding: he tried to fix it and offered a solution even after blocking them. On a free service.

Show us what you have done.

charcircuit

>You are so entitled

That's how agreements work. If someone says they will sell a hamburger for $5, and another person pays $5 for a hamburger, then they are entitled to a hamburger.

>On a free service.

It's up to the owner to price the service. Being overwhelmed by traffic when there are no limits is not a problem limited only to free services.

perching_aix

> Do you offer support and SLA guarantees?

>

> At the moment, I don’t offer SLA guarantees or personalized support.

From the website.

eszed

Sure, and if you bulk-order 5k hamburgers the restaurant will honor the price, but they'll also tell you "we're going to need some notice to handle that much product". Perfect analogy, really. This guy handled the situation perfectly, imo.

austhrow743

The hamburger situation is not comparable. It's a trade.

This is just someone being not very specific in a text file on their computer. I have many such notes, some of them publicly viewable.

010101010101

Do you expect him just to let the service remain broken or to scale up to infinite cost to himself on this volunteer project? He worked with the project author to find a solution that works for both and does not degrade service for every other user, under literally no obligation to do anything at all. This isn’t Anthropic deciding to throttle users paying hundreds of dollars a month for a subscription. Constructive criticism is one thing, but entitlement to something run by an individual volunteer for free is absurd.

toast0

The project page kind of suggests he might scale up to infinite cost...

> Financially, the plan is to keep renting servers until they cover the bandwidth. I believe it can be self-sustainable if enough people subscribe to the support plans.

Especially since he said Cloudflare is providing the CDN for free... Yes, running the origins costs money, but in most cases default fd limits are low and you can push them a lot higher. At some point you'll run into I/O limits, but the I/O at the origin seems pretty manageable if my napkin math was right.

If the files are all tiny and the fd limit is the actual bottleneck, there are ways to make that work better too. IMHO it doesn't make sense to accept an inbound connection if you can't get an fd to read a file for it, so better to limit the concurrent connections, let connections sit in the listen queue, and use a short keepalive timeout to make sure you're not wasting your fds on idle connections. With no other knowledge, I'd put the connection limit at half the fd limit, assuming the origin server is dedicated to this and serves static files exclusively. But to be honest, if I set up something like this, I probably wouldn't have thought about fd limits until they got hit, so no big deal ... hopefully whatever I used for monitoring would include available fds by default and I'd have noticed, but it's not a default output everywhere.
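
A sketch of that budget as nginx config (the directives are real; the numbers are illustrative, not tuned):

    # per-worker fd ceiling (main context)
    worker_rlimit_nofile 1048576;

    events {
        # hold connections at roughly half the fd budget, so every
        # accepted connection can still get an fd to read a file
        worker_connections 524288;
    }

    http {
        # short keepalive so idle connections don't sit on fds
        keepalive_timeout 5;
    }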

charcircuit

We are talking about hosting a fixed set of static files. This should be a solved problem. This is nothing like running large AI models for people.


010101010101

The nature of the service is completely irrelevant.

rikafurude21

The funny part is that his service didn't break: Cloudflare's cache caught 99% of the requests. He just wanted to feel powerful and break the latest viral trend.

ivanjermakov

> Nice idea, interesting project, next time please contact me before.

I understand that my popular service might bring your less popular one to a halt, but please configure it on your end so I know _programmatically_ what its capabilities are.

I host no API without rate-limiting. Additionally, clearly listing usage limits might be a good idea.

Aurornis

> I understand that my popular service might bring your less popular one to the halt, but please configure it on your end so I know _programmatically_ what its capabilities are.

Quite entitled expectations for someone using a free and open service to underpin their project.

The requests were coming from distributed clients, not a central API gateway that could respond to rate limiting.

> I host no API without rate-limiting. Additionally, clearly listing usage limits might be a good idea.

Again, this wasn’t a central, well-behaved client hitting the API from a couple of IPs or with a known API key.

They calculate that for every single user of the wplace.live website, they were getting 1,500 requests. This implies a lot of botting and scripting.

This is basically load testing the web site at DDoS scale.

zamadatix

> The requests were coming from distributed clients, not a central API gateway that could respond to rate limiting requests

The block was done based on URL origin rather than client/token, so why wouldn't a rate-limiter solution consider the same? For this case (a site which uses the API) it would work perfectly fine. Especially since the bots don't even care about the information from this API, so non-site-based bots aren't even going to bother pulling the OpenFreeMap tiles.

ivanjermakov

Ugh, then I agree. This way it's indistinguishable from a DDoS attack.

Aeolun

I think it’s reasonable to assume that a free service is not going to deal gracefully with your 100k rps hug of death. The fact that it actually did is an exception, not the rule.

If you are hitting anything free with more than 10 rps (even temporarily), you are taking advantage, in my opinion.

sour-taste

Since the limit you ran into was the number of open files, could you just raise that limit? I get blocking the spammy traffic, but theoretically, could you have handled more if that limit was upped?

hyperknot

I've just written my question to the nginx community forum, after a lengthy debugging session with multiple LLMs. Right now, I believe it was the combination of multi_accept + open_file_cache > worker_rlimit_nofile.

https://community.nginx.org/t/too-many-open-files-at-1000-re...

Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.

toast0

I'm pretty sure your open file cache is way too large. If you're doing 1k/sec and you cache file descriptors for 60 minutes, assuming those are all unique, that's asking for ~3.6 million fds to be cached when you've only got 1 million available. I've never used nginx or open_file_cache[1], but I would tune it way down and see if you even notice a difference in performance in normal operation. Maybe 10k files, 60s timeout.
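
In config terms, that suggestion would look something like this (illustrative values, not benchmarked):

    http {
        # cache at most 10k descriptors, dropping any not touched for 60s,
        # so the cache stays well under worker_rlimit_nofile
        open_file_cache          max=10000 inactive=60s;
        open_file_cache_valid    60s;
        open_file_cache_errors   on;
    }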

> Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.

For cost reasons or system overload?

If system overload ... What kind of storage? Are you monitoring disk i/o? What kind of CPU do you have in your system? I used to push almost 10GBps with https on dual E5-2690 [2], but it was a larger file. 2690s were high end, but something more modern will have much better AES acceleration and should do better than 200 Mbps almost regardless of what it is.

[1] To be honest, I'm not sure I understand the intent of open_file_cache... Opening files is usually not that expensive; maybe at hundreds of thousands of rps, or if you have a very complex filesystem. PS: don't put tens of thousands of files in a directory. Everything works better if you take your ten thousand files and put one hundred files into each of one hundred directories. You can experiment to see what works best with your load, but a tree where you've got N layers of M directories and the last layer has M files is a good plan, 64 <= M <= 256. The goal is keeping the directories compact so searching and editing are effective.

[2] https://www.intel.com/content/www/us/en/products/sku/64596/i...

CoolCold

> [1] to be honest, I'm not sure I understand the intent of open_file_cache... Opening files is usually not that expensive

I may have a hint here: remember that nginx was created back when dialup was still a thing and a single Pentium 3 server was the norm (I believe I saw those wwwXXX machines in the Rambler DCs around that time).

So my somewhat educated guess is that saving every syscall was sort of the ultimate goal, and it was more efficient, at least in terms of latency, in those times. You can take a look at how nginx parses HTTP methods (GET/POST) to save operations.

Myself, I don't remember seeing large benefits from open_file_cache, but I likely never did a proper perf test here. Ensuring the use of sendfile/buffers/TLS termination made much more of a difference for me on modern (10-15 year old) HW.

Aeolun

If you do 200 Mbps on a Hetzner server after Cloudflare caching, you are going to run out of traffic pretty rapidly. The limit is 20TB/month (which you'd reach in roughly 9 days).
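
The arithmetic (assuming the 200 Mbps is sustained):

    200 Mbps = 25 MB/s ≈ 2.16 TB/day
    20 TB / 2.16 TB/day ≈ 9.3 days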

ndriscoll

One thing that might work for you is to actually make the empty tile file and hard link it everywhere it needs to be. Then you don't need to special-case it at runtime, but instead at generation time.

NVMe disks are incredibly fast, and 1k rps is not a lot (IIRC my N100 seems to be capable of ~40k if not for the 1 Gbit NIC bottlenecking). I'd try benchmarking without the tuning options you've got. Do you actually get 40k concurrent connections from Cloudflare? If your upstream connections are kept alive (so no constant slow starts), ideally you have numCores workers that each do one thing at a time, and that's enough to max out your NIC. You only add concurrency if latency prevents you from maxing bandwidth.
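
For example, at tile-generation time, something like (paths illustrative):

    ln tiles/empty.pbf tiles/12/2048/1365.pbf

All the links share one inode, so the duplicates cost almost nothing on disk, and the web server just serves whatever file sits at the requested path, with no fallback logic.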

hyperknot

Yes, that's a good idea. But we are talking about 90+% of the tiles being empty (I might be wrong on that), so that's a lot of hard links. I think the nginx config just needs to be fixed; I hope I'll receive some help on their forum.

justinclift

> so I couldn't have kept up _much_ longer, no matter the limits.

Why would that kind of rate cause a problem over time?

Ericson2314

Oh wow, TIL there is finally a simple way to actually view OpenStreetMap! Gosh, that's overdue. Glad it's done though!

bspammer

What was wrong with the main site? Genuine question https://www.openstreetmap.org

Ericson2314

Oh.... last I checked they didn't have that?

drewda

The OSM Foundation has been serving raster tiles for years and years (that's what's visible by default on the slippy map at www.openstreetmap.org): https://wiki.openstreetmap.org/wiki/OpenStreetMap_Carto

After on and off experimentation by various contributors, OSMF just released vector tiles as well: https://operations.osmfoundation.org/policies/vector/

Ericson2314

Thanks, I'm just out of date

bravesoul2

... And then it became 1M rps!

fnord77

Sounds like they survived 1,000 reqs/sec and the Cloudflare CDN absorbed the other 99,000 reqs/sec.

eggbrain

Limiting by referrer seems strange — if you know a normal user makes 10-20 requests (let’s assume per minute), can’t you just rate limit requests to 100 requests per minute per IP (5x the average load) and still block the majority of these cases?

Or, if it’s just a few bad actors, block based on JA4/JA3 fingerprint?
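
For the per-IP version, a minimal nginx sketch (the directives are real; the numbers and paths are illustrative):

    http {
        # allow ~100 requests/min per client IP, with a burst for fast map pans
        limit_req_zone $binary_remote_addr zone=perip:50m rate=100r/m;

        server {
            location /tiles/ {
                limit_req zone=perip burst=200 nodelay;
                limit_req_status 429;
            }
        }
    }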

hyperknot

What if one user really wants to browse around the world and explore the map? I remember spending half an hour in Google Earth desktop, just exploring interesting places.

I think Referer-based limits are better; this way I can ask heavy users to please choose self-hosting instead of the public instance.

toast0

Limiting by referrer is probably the right first step. (And changing the front-page text.)

You want to track usage by the site, not the person, because you can ask a site to change usage patterns in a way you can't really ask a site's users. Maybe a per-IP limit makes sense too, but you wouldn't want it low enough to be effective against something like this.
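
A sketch of what keying the limit on the referring site (rather than on the client) could look like in nginx; directives are real, values invented:

    http {
        # bucket requests by the Referer host, not by user
        map $http_referer $referer_host {
            ~^https?://([^/]+)  $1;
            default             "";   # empty key = not rate-limited
        }

        limit_req_zone $referer_host zone=persite:10m rate=1000r/s;

        server {
            location /tiles/ {
                limit_req zone=persite burst=2000;
            }
        }
    }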

rtaylorgarlock

Is it always/only 'laziness' (derogatory, I know) when caching isn't implemented by a site like wplace.live? Why wouldn't they save OpenFreeMap all the traffic, when a caching server on their side could presumably serve tiles almost as fast or faster than OpenFreeMap?

toast0

Why should they when openfreemap is behind a CDN and their home page says things like:

> Using our public instance is completely free: there are no limits on the number of map views or requests. There’s no registration, no user database, no API keys, and no cookies. We aim to cover the running costs of our public instance through donations.

> Is commercial usage allowed?

> Yes.

IMHO, reading this and then just using it makes a lot of sense. Yeah, you could put a cache in front of their CDN, but why, when they said it's all good, no limits, for free?

I might wonder a bit, if I knew the bandwidth it was using, but I might be busy with other stuff if my site went unexpectedly viral.

radu_floricica

Politeness? Also speed.

But it depends on the project architecture. If the tiles are needed only client-side, then there's really little reason to cache things on _my_ server. That would imply I'm better at caching OpenStreetMap tiles than... OpenStreetMap. Plus, you're just making the system needlessly more complicated.

And there's little reason for OpenStreetMap to be upset, since it's not like _I_ am making 2 million requests - there are 2 million separate users of OSM. It's aligned with their incentive of having more people use OSM and derived works. All is well.

Aeolun

I think, when you read that, you should be reassured that nobody is going to suddenly tell you to pay, and then still implement caching on your own side to preserve the free offering for everyone else.

Seriously, whose first thought on reading that is "oh great, I can exploit this"?

naniwaduni

You don't need to be thinking "I can't exploit this" when you can just stop thinking about it.

VladVladikoff

I actually have a direct answer for this: priorities. I run a fairly popular auction website, and we have map tiles via Stadia Maps. We spend about $80/month on this service for our volume. We could definitely get this cost down to a lower tier by caching the tiles and serving them from our proxy. However, we simply haven't yet had the time to work on it, as there is always some other task with higher priority.

latchkey

Like reading and commenting on HN articles! ;-)

hyperknot

We are talking about an insane amount of data here. It was 56 Gbit/s (or 56 x 1 Gbit servers 100% saturated!). This is not something a "caching server" could handle. We are talking about needing something on the order of a CDN, like Cloudflare, to handle this.

Sesse__

> We are talking about an insane amount of data here. It was 56 Gbit/s. This is not something a "caching server" could handle.

You are not talking about an insane amount of data if it's 56 Gbit/s. Of course a caching server could handle that.

Source: Has written servers that saturated 40gig (with TLS) on an old quadcore.

hyperknot

OK, technically such a server might exist; I guess Netflix and friends are using those. But we are talking about a community-supported, free service here. Hetzner servers are my only option, because of their unmetered bandwidth.

bigstrat2003

I realize that what constitutes "insane" is a subjective judgement. But, uh... I most certainly would call 56 Gbps insane. Which is not to say that hardware which handles it doesn't exist. It might not even be especially insane hardware. But that is a pretty insane data rate in my book.

wyager

> or 56 x 1 Gbit servers 100% saturated

Presumably a caching server would be 10GbE, 40GbE, or 100GbE

56 Gbit/sec of pre-generated data is definitely something you can handle from 1 or 2 decent servers, assuming each request doesn't generate a huge number of random disk reads or something.

ndriscoll

I'd be somewhat surprised if nginx couldn't saturate a 10Gbit link with an n150 serving static files, so I'd expect 6x $200 minipcs to handle it. I'd think the expensive part would be the hosting/connection.

markerz

It looks like a fun website, not a for-profit one. The expectation and focus of fun websites is more about just getting them working than handling scale. It sounds like their user base exploded overnight, doubling every 14 hours or so. It also sounds like it's either a solo dev or a small group, based on the maintainer's wording.


cube00

It's really surprising that no CDNs or cloud storage providers offer even the single PMTiles file in some sort of shared library customers can use.

I guess they'd all rather their customers each upload the 120GB file and then charge them all individually.

If they're crafty, they'll have their storage configured so there's only one actual copy on the underlying infra, making every other shadow copy pure profit.
