
Do svidaniya, Igor, and thank you for Nginx


I really take for granted how well Nginx works across a number of web backend functions.

Some of the container/orchestration world has tried to supplant the need for it as a reverse proxy, but you get so many goodies out of the box just by sticking this in front of your app, and for very little overhead.

I remember the pre-Nginx days and all of the struggles people routinely ran into with options like Apache or other reverse proxy tools.


I mean there was lighttpd before nginx and they have/had pretty similar structures, weights, etc.

I feel like I knew at one point why it got so thoroughly supplanted by nginx but I don't remember now why that happened.


The difference is that nginx really works. I had Panoramio, a photo website featured in Google Earth / Maps, running on Apache. It started to fall over under load, and I quickly switched to lighttpd. It was faster, but it kept crashing, getting OOM-killed, etc. I fixed a memory leak and a few more bugs, but it still crashed every now and then, so I looked for alternatives.

This was 2006 and nginx was the only realistic alternative on the market. It worked beautifully from day 1. It saved my startup. The next year we got acquired by Google.

I only got one crash with nginx, and it was partially my fault: I had an "expires 30y" on some images, and one morning in February 2008 I came to the office and the whole site was down. After a very quick gdb session under panic, I realized it was indexing an array of weekday names with a negative index. Nginx was adding 30 years to the current date, which put it past 2038, and the time value overflowed. Igor fixed that issue in hours, and he graciously explained that I could have used "expires max".
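For reference, a hedged sketch of the two directives from that story (the location path is made up):

```nginx
# Hypothetical location block illustrating the story above.
location /images/ {
    # expires 30y;   # current date + 30 years landed past 2038 and
                     # overflowed -- the crash described
    expires max;     # caps the Expires header at the end of 2037 instead
}
```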

Nginx has powered all my startups since then (Freepik, Flaticon, Slidesgo, Besoccer).

This guy has added more real value to the economy than most unicorns. A true hero.


Panoramio, Freepik and Flaticon? Man, you just collapsed what I thought were an early Spanish startup success story and two different corporations from the US into a single person :D Maximum respect.


Wait, you made Flaticon? I would like to say thank you. Before I truly got into software I was a humble associate consultant, and I honestly don't know how I would have made all those decks without you.


Panoramio was so good. I had photos there. People wrote me comments. Then Google just killed it. Fuck them.


Major Panoramio fan here. I have traveled a lot of places on that site. Thank you for making it :)


Yeah thanks to this thread I am definitely now remembering running into issues with memory usage and crashes with lighttpd.


Thanks for making Freepik


Panoramio user here! Big fan!


panoramio was amazing, thank you! a shame it was shutdown by g*gle.


The performance engineering in NGINX back then was really quite something.

This classic 2007 tutorial starts by pointing out that NGINX parses the HTTP verb by looking at the second letter first, so that if it's O it knows to check for POST or COPY!


Doing Boyer and Moore proud.

I think that if you want to support all the verbs, you face at least 3 'ambiguities' whichever of the first, second, or third character of the string you check first. (It must be at most the third, as the shortest verbs are 3 characters long.)

First checking the first character is ambiguous between POST, PUT and PATCH. First checking the second character is ambiguous between HEAD, DELETE, and GET. First checking the third character is ambiguous between GET, PUT, OPTIONS, and PATCH. [0]

edit: As danachow points out, the verbs are not all used with the same frequency. If real-world performance is the goal, we'd presumably want to optimise for the GET case, which presumably means checking the first character first, as the 'G' is unique to GET.
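To make the trick concrete, here is a minimal C sketch of dispatch-on-the-second-letter. This is an illustration only, not nginx's actual parser; the function name and the simplified return values are invented for the example.

```c
/* Check the second letter first: 'O' splits off POST/COPY, 'E' covers
 * the GET/HEAD/DELETE group, and 'U'/'A'/'P' uniquely identify PUT,
 * PATCH, and OPTIONS. A first-character check then disambiguates
 * within each group. */
const char *http_method(const char *m)
{
    switch (m[1]) {                 /* second letter first */
    case 'O':                       /* POST or COPY */
        return m[0] == 'P' ? "POST" : "COPY";
    case 'E':                       /* GET, HEAD, DELETE all share 'E' */
        if (m[0] == 'G') return "GET";
        if (m[0] == 'H') return "HEAD";
        return "DELETE";
    case 'U': return "PUT";
    case 'A': return "PATCH";
    case 'P': return "OPTIONS";
    default:  return "UNKNOWN";
    }
}
```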



These tricks are cute, but at the time nginx's performance came from much more fundamental design decisions.

First, the async model was literally years ahead of Apache. It leaned heavily on interfaces like epoll to manage large connection pools with a small number of processes, while Apache still used a thread or process per connection.

Second, it removed exactly the right features - those with minimal benefit and high performance impact. The classic example is .htaccess, which adds (at least) one stat to every single request, but in practice was only needed for the horrible multi-tenant LAMP reseller setups of the day - everyone else was fine with static centralized configuration.


Nice! I can't be the only one who went looking and spent way too much time trying to acquaint myself with the C code :)

Here's where I ended up:

PS, I'm not totally sure, but they definitely use the count of letters as an optimization, and it seems they increment the bits associated with each type, so the order of the bits behind each NGX_HTTP_GET etc seems to matter...!

Someday I will understand :)


Yes, nginx code is something you spend a lot of time studying and hope one day you can do as well.


Does it really look at second letter first or is that snippet taken out of context (it isn’t implied that it does in that email, just that it doesn’t use a library function) ? Since most requests are GET it still makes sense to handle that case first. Though after trying to common cases looking at the second letter for the P subcases may save some branching.


From my limited experience, lighttpd has non-stellar documentation and the community (including devs) is kinda rude. nginx has better documentation and a much more welcoming support community. At least for the tasks I use it for (proxying a bunch of random services on a home server), the syntax is a lot easier than lighttpd's, and it's easier to bring in goodies as modules that would require recompilation on the lighttpd side. It might be a bit of a chicken-and-egg problem, but the nginx binaries are also a lot more up to date in the package managers I used. On a Raspberry Pi I ended up needing to compile lighttpd from source to get a modern version, which got kinda annoying.
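For the home-server proxying case described, a minimal hedged sketch (the server name and backend port are made up):

```nginx
server {
    listen 80;
    server_name media.home.example;          # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:8096;    # hypothetical local service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```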


Not sure which year (decade? :P) you're talking about, but in the beginning (before 2010, maybe 2007-08?) Jan was still kinda involved in the German PHP scene and I can't imagine that to be true; I only remember good interactions, and we were one of the heaviest users of lighttpd back then. But the docs were never that great, and it seemed to be a one-man show, later with a very small team; that's when I think it stalled, and it only picked up pace years later.

But there indeed came a time (maybe 2010ish?) when nginx took the lead with great strides and most people (even the die-hard fans) mostly moved to nginx; that's about when it was clear that it would probably win and stay for the foreseeable future. In those circles at least (see some other comments for the PHP-FPM story), Apache was only kept for setups with lots of other dependencies, like mod_svn or WebDAV; if you "only" needed a webserver to front PHP, it was nginx.

I also remember many people holding out on adopting Apache 2.2 for a looong time.


Years ago, I was working at a small startup here in Austin (named “Ihiji”), and it was my job to completely rebuild their cloud systems infrastructure. They wanted to use lighttpd instead of Apache. It took a while, but with a bit of a kickstart from a friend who worked at Opscode Chef (thanks, Matt Ray!), I was able to get that done.

With Apache, at their current max load, the systems would just completely fall over, but with lighttpd, at that same load, the system breathed a little hard. I could push lighttpd 10x more before it fell over.

Then we started looking at replacing the haproxy solution, and I looked at nginx. I tried it out. I also tried out replacing lighttpd with nginx. And no matter how hard I pushed nginx, I couldn’t get the damn thing to breathe hard. I couldn’t even push the load average up over 1.0.

We went back to lighttpd and haproxy because those tools gave us better monitoring and logging, but we always held nginx in our back pocket in reserve, in case we needed another 10x beyond what lighttpd+haproxy could do.

And yes, I did get invited to Edinburgh to do a nice little talk on the subject.

They had their ups and downs over the years, but ihiji did end up getting acquired by Control4, and the founders are now off doing other weird and cool things.


It was believed that lighttpd leaked memory quite badly, and there was also a period of time when development and updates on lighttpd dramatically slowed down. Neither of which was a concern in the early days of nginx.


My memory is hazy but I think I remember running into an actual memory leak in lighttpd circa 2008.

We were serving dynamic content via FastCGI, IIRC.

This was a long time ago and I'm pretty hazy on the details, but I'm pretty sure I remember finding memory leak bug discussions on the lighttpd website around that time, and no clear answers on how to avoid it.


I used lighttpd from 2007 until early 2009. Then the company I was working for at the time went to managed hosting for about a year, and during that time we had to use Apache (possibly even with the prefork MPM). When we decided to go back to self-managed dedicated servers in early 2010, I chose nginx. What I remember from that time was that nginx was rising in popularity and being adopted by more and more sites. Also, unlike the 2007 setup, this early-2010 setup had a load balancer with three application servers behind it (using FastCGI, I think), and if I remember correctly, it was easier to do that with nginx than with lighttpd.


Lighttpd has received very nice updates lately (HTTP/2, etc.) and I use it daily for mission critical servers (facing private customers) behind HAProxy, Varnish and the like. No problems so far after 5 years.


I like Traefik; not sure if it's "as light", but it works well for me in a personal setting. I use it just by adding some lines to my docker-compose file, no further configuration required. It sits in front of several services and automatically uses Let's Encrypt for certs.


I’m sure that back when Nginx and Apache were the only reverse proxy options, Nginx was the way to go; these days, however, for a reverse proxy I prefer HAProxy for enterprise and Caddy for personal stuff.


I tend to use haproxy only in situations where I don't have access to a 'proper' load balancer (e.g., something which does TCP connection state failover and all that jazz); make it listen on lo0 on each client server, which then just talks to localhost.

It does mean you are doing health checks from each client service/app, but it only eats a few mb of ram, and then you don't have to deal with making your haproxy service HA :}


Use Caddy for enterprise stuff, too. Enterprise deployments deserve memory safety, too.


In January 2021, Nginx overtook Apache as the #1 web server on the Internet that users can install.

I'm a bit surprised it didn't happen earlier as it feels like it's been the dominant choice for tech people.


Too many shared hosts with LAMP not updating their stack in ages, I suppose.


Wordpress had some say in this i think.



I've founded and grown several web hosters, one specialised in WordPress. Our HTTP stack was varnish -> nginx (load balancer) -> nginx -> php-fpm.

It was painful. Not even WordPress core could (can?) run all its features; e.g. the SEO-friendly-URL feature relied (relies?) heavily on - I kid you not - rewriting the .htaccess file from the CMS. Really: the CMS rewriting webserver configuration files from the web.

Let alone all the plugins and themes. The community of plugin and theme devs is generally professional, but there is a staggering amount of stupidity to be found. Like a payment-processing plugin that would write all its payments into [bankaccount-number].txt files. Web-readable. Obviously a severe security breach for one of our clients. The plugin devs' reaction? "Not a bug: we include a .htaccess that denies access to those text files. So no-one can read them but the plugin". I can't even...

Point being: WordPress is highly coupled to Apache. If you want smooth experience of hosting, just go for Apache. Or don't use WordPress. I'd advise the latter.


I don't recognize anything about the WP Apache marriage.


It still surprises me that NGINX beat out Apache so quickly even though Apache had way more modules and was/is entirely free vs. NGINX which is more or less "open core" with some nice features requiring commercial licensing.


On the other hand, the unreadable weird-ass pseudo-XML configuration files of Apache made anyone touching them wish for something better.

I also expect ngx_lua did a lot for adoption, the fact that you could always "shell out" to lua if you needed was a huge boon even just for peace of mind.


> On the other hand, the unreadable weird-ass pseudo-XML configuration files

If I have one gripe about NGINX it's that its configuration is a still-half-baked DSL that has quirks you wouldn't expect and when they error you don't get great feedback.

Examples: You can have an if clause, but no else attached. You can't have an if clause with multiple conditions. Finally, "if [might be] evil." 1

You end up writing a bunch of partitioned control flow statements and you're never really sure at what level of config hierarchy they would best be applied.

I love the product, but Apache's XML versus NGINX's semi-declarative, hierarchical blocks isn't a night-and-day difference.



I agree with you completely. Nginx's config syntax is better than Apache's but it still feels like mystery meat. Can you use this directive or option within this block? Maybe, maybe not. If not, why? Who knows. It's just not allowed to use map within a location block and that's just how it is, okay?

My dream web server has Nginx's capabilities and Lighttpd's Lua configuration files/scripts. Is that what ngx_lua does? I've heard of it before but never really gave it a look.


With the rewrite and map blocks it is maybe a little easier for you to write fewer if statements….
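As a hedged sketch of that map pattern (hostnames and ports invented for the example), a map block at the http level sets a variable lazily, which then replaces a chain of if statements:

```nginx
# map must live at the http level (not inside server/location);
# $backend is computed lazily from the request's Host header.
map $host $backend {
    default          127.0.0.1:8080;
    app.example.com  127.0.0.1:8081;   # hypothetical upstream
}

server {
    listen 80;
    location / {
        proxy_pass http://$backend;    # no if chain needed
    }
}
```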


To be fair, NGINX config is not better: an ad-hoc grown soup of syntax without a clear concept to govern it all.

I would prefer a simple JSON file any day. Or some Lispy S-expressions. Or some TOML or well structured XML and XSD even.

NGINX makes you learn another lang only for one tool and for a config, which mostly (always?) does not need anything more than being declarative config.


No JSON, please. You can't have comments. A JSON config would be a deal-breaker for me to use a server.


Oh gosh, I had to figure out an Apache config file some time last year - it was a real slog working out what was happening, thanks in no small part to the poor documentation of their pseudo-XML.


You can do similarly in Apache with the Perl sections...

    # dynamic perl config goes here..


It should be remembered that NGINX is used as a reverse proxy for a lot of servers behind the scenes. That NGINX is the web server identified up front doesn't mean as much as it might because of this architectural construct. I use NGINX to front sites that have Apache on the back end, and as a result the Internet spiders think my websites are running NGINX rather than Apache. NGINX is incredibly easy to configure as a reverse proxy, image router, and SSL front-end. Thanks, Igor.


> That NGINX is the web server identified up front doesn't mean as much as it might because of this architectural construct.

The exact same argument can be made to explain why nginx is undercounted. A lot of setups will run nginx behind proxies, so you'll count a proxy: a Varnish, a single nginx, cloudfront servers (are they running nginx?) while in reality there may be many nginx-es running.

Nonetheless: nginx is a gift, and thanks go out to Igor, regardless of how well the spiders can count the number of nginx instances.


Granted I've done exactly this before, but why put Nginx in front of Apache? In my experience it added headaches without any real benefit.

(Unless you don't mean Apache webserver but rather some other Apache product)


Nginx manages a ton of connections better and can serve static files very fast. It can then multiplex the dynamic requests into fewer connections to Apache. If you mean why not only use nginx, I would guess that's easier than changing your legacy systems to use nginx (e.g. if you have a ton of htaccess files). It's also possible you got better performance with mod_php although most people seem to claim that php-fpm with nginx is faster.
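A hedged sketch of that split (paths and port invented): nginx serves static files straight from disk and multiplexes dynamic requests to Apache over a small keepalive pool:

```nginx
upstream apache_backend {
    server 127.0.0.1:8080;   # hypothetical legacy Apache
    keepalive 8;             # reuse a small pool of backend connections
}

server {
    listen 80;
    root /var/www/site;

    location /static/ {
        # served directly by nginx, never touches Apache
    }

    location / {
        proxy_pass http://apache_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # needed for upstream keepalive
        proxy_set_header Host $host;
    }
}
```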


2012-2015 I worked at a shared hosting company and towards the end of my tenure there we revamped the architecture to be centered on nginx (SSL termination, HTTP2 support, etc.) and invested quite a bit in API and GUI support for rewrite rules, redirects, etc.

However, for better or worse, a lot of the software people want to run on shared hosting come with a .htaccess file and documentation for how to configure it otherwise. So we gave customers a choice to put Apache behind nginx.

Unfortunately I left too early to learn what percentage of customers ended up enabling Apache, but they're still running this architecture today.


I have done this to host multiple services (running using multiple users and setups) from one host.


These days the cool kids love to call reverse proxies "load balancers" (when you have n>1 backends).


We added Nginx to our hosting environment in front of Apache and knew a bunch of other folks who did the same. The outwardly visible adoption of Nginx was not necessarily zero-sum with Apache’s footprint at first.

In my case we scaled Drupal and Wordpress sites by using Varnish as a reverse proxy cache in front of Apache. But then we wanted to go HTTPS across the board, which Varnish does not handle. So we terminated HTTPS in Nginx and then passed the connection back to the existing Varnish/Apache stack. I know other folks just skipped or ripped out the Varnish layer and used Nginx for both HTTPS and caching.

At the time both Drupal and Wordpress (and other popular PHP projects) depended on Apache-specific features for things like pretty URLs and even security. Over time, the community engineered away from those so there was little reason to prefer Apache anymore.


The web changed. We moved away from static HTML pages and CGI scripts to monolithic application servers in Java, Ruby, Python, etc. Apache excelled at static content sites and simple auth scenarios (remember .htaccess files?) but became painfully complex for proxying application servers. Nginx did exactly what was needed at exactly the time it was needed.


And yet interestingly, nginx started in 2002, which was still old-school internet. So really, it was ahead of its time.


2002 was the start of the glory days of Java web monoliths, big monstrosities with Spring and the like. Rails, Django, etc. came a couple of years later, and monolithic app servers really started to take off.


Painfully complex proxying? Can you explain? I still use Apache as my go to HTTP server and proxying is just 2 config lines.


Around here, Apache was heavily used for its mod_php. It could run PHP embedded, without a complex FastCGI setup.

Then everyone moved to ruby and python (and also perl) and mod_php stopped being an advantage.


Everyone moved to Ruby and Python? In your bubble perhaps, but PHP is probably more popular than Ruby and Python combined globally.


Somehow I still see .htaccess files in projects that aren't that old (and in a few cases never used Apache).


Yes - this would be my take as well.


Back when Apache was beaten, there was no commercial licensing in Nginx.

Also the Apache that was beaten was Apache 1, which was fork-only, and that was the whole reason Nginx was written in the first place.

Then Apache did Apache2 with mpm modules and badly missed the mark. After that Apache was doomed. No async support == dead. It was that simple.


This jibes with my memory of that time as well. Apache just couldn't keep up with Nginx's async speed, and if you didn't have to deal with PHP (before FastCGI's adoption), there was no real reason to use Apache.

And post-FCGI's adoption, you didn't need to use Apache, so... why use it?


mpm_event though from Apache 2.4 was async and kind of great.


I think the modules were Apache's curse, they made it possible to bring down Apache. Speed is great, but Always Responding is a more important feature. I'm sure most Nginx configurations could have been done with Apache without any real performance issue, but Apache hurt its own reputation by doing extra things poorly.


Nearly all the performance comparisons between stock Apache and Nginx at their hype time were like comparing Word vs Notepad: an Apache installed from a distribution package (with its full range of enabled modules) versus an Nginx compiled from source with nothing. A vanilla, well-built Apache is perfectly fine for real-world use, at the same level as Nginx, because by the time you get close to the limits of these pieces of software, your scalability problems are elsewhere.


I'm reminded of how Linux beat GNU Hurd, or how systemd is slowly replacing SysVinit. Highly modular systems often lose out to more monolithic ones, since they tend to be slower, more complex, and harder to use in practice, despite their theoretical advantages.


At the time there was no commercial Nginx, only open source. Also, Apache was a huge pain to configure for anything other than serving static files. Nginx config was a delight to deal with by comparison.


Yes - this. Building my first web site (we didn't call them apps back then) and wrangling with Apache and OpenSSL to enable encryption was ... not fun.


Tried Caddy yet? They provide really compact configuration templates and if needed can be reconfigured using the API.


As an open source developer and commercial OSS startup founder myself, Nginx gave me a lot of confidence to challenge status quo. Apache was so revered that you would have been crazy to think you could improve it, but he did and that really had an impact on me.


Fun story time: a few years back I worked at a major EU "traditional" (non-FAANG) IT company, and they were using Apache for handling web traffic. Rumour was that nginx, by then already the backbone of half the internet, had been dismissed as "too new" :) (we're talking mid-2010s)


Haha that reminds me of a company I worked at that used some MS library for .net.

I think it was like Microsoft.WebMatrix.Data

It was essentially a micro-ORM written using dynamic, but it had no caching, so it performed terribly with all the reflection. Dapper was a drop-in replacement, but it was dismissed as "demoware" despite it running Stack Overflow. I left that place 2 weeks later.


This story really shows the hype of nginx. It wasn't the backbone of half the internet until 2021.

Don't get me wrong, I am an nginx user now for the past decade at least but when it first came out I was very skeptical. People were saying apache was too bloated but you could already run apache with as few modules as possible so that was a false argument.

Then there was the c10k challenge of course. Basically, a lot of hype for nginx but it came out on top in the end so I guess it doesn't matter.


Waiting for everybody else to test the product before you migrate is a perfectly common-sense strategy. Especially if that product does not give you any special edge over the competition.


I think the irony is that newness is irrelevant once something is being used at a certain scale. You can battle-test more in an hour at scale than a small-scale project could in ten years.


How do you battle-test in an hour the ability of the upstream developer to provide security fixes? To provide updates at all as the ecosystem develops (e.g. the rise of systemd, taking advantage of advancements in worker models, SSL library API changes, new Lua versions)? Ability to keep backward compatibility with modules?

Your approach might have led you to invest heavily in lighttpd at some point in time.


By the mid 2010s it had definitely proved its mettle.

Arguing that it wouldn't have provided enough benefit to justify the switch is different than saying it was unproven by that point.


I think some of the pressure to update products is irrational. Just because something is newer and better is not by itself a reason to upgrade.

If Apache did everything they needed, I can imagine a company completely forgoing any investigation of Nginx, and that might have been the cause of that kind of statement. Or maybe this was just a way to explain it to younger devs who could not understand "don't break it if it works". We don't know.

The correct way to make this kind of decision (and many others) is to look at the RoI and your available bandwidth to run multiple projects.

I am still maintaining some very old (but still actively developed) products. I am busy with other projects and there just has not been any pressure to update. When I have some time available, I prefer to pick the project with the highest RoI rather than update stuff because of peer pressure.


No, switching creates risks. Risk of configuration errors leading to downtimes or vulnerabilities, risk of unexpected delays in deployment, risk of running into bugs that the users are unaware of.

Many software projects fail by facing delays due to excessive complexity and tech churn. Moving carefully helps.




I remember the days circa 2009 when the Nginx docs pages still had lots of Soviet-style graphics... those were the days :)


Yeah that was great. I think we all felt like we had a secret superpower few others knew about.


My job as an intern was to write an Nginx plugin for a specific type of high-performance filtering - boss was a bit of a masochist ;)


That’s actually pretty cool. Wish I found an excuse to need to do that.


I'd love to see this if anyone has a screenshot from this era!



Love it! Thank you for sharing!


Yup, look at that logo especially!


I remember watching a video from some conference Igor participated in. As soon as he says "Hello, I'm Igor Sysoev, creator of nginx", the audience bursts into extra-long applause. He even had to tell them "Come on guys, you haven't heard my presentation yet".


Would love to see that talk!


Here are relevant Russian discussions on OpenNET[0] & LOR[1].

N.B. From Nginx company history on Wikipedia:

> On 12 December 2019, it was reported that the Moscow offices of Nginx Inc. had been raided by police, and that Sysoev and Konovalov had been detained. The raid was conducted under a search warrant connected to a copyright claim over Nginx by Rambler—which asserts that it owns all rights to the code because it was written while Sysoev was an employee of the company. On 16 December 2019, Russian state lender Sberbank, which owns 46.5 percent of Rambler, called an extraordinary meeting of Rambler's board of directors asking Rambler's management team to request Russian law enforcement agencies cease pursuit of the criminal case, and begin talks with Nginx and with F5.[2]





Was the case resolved? Wikipedia doesn't provide any further information.


Yes; Wikipedia -- in broad strokes -- just sucks. Check what happened to the Scottish Wikipedia (but there are any number of issues in the English one as well; the "no credentials" policy made sure scientists shun it, because they don't want to endlessly argue with neckbeards with an agenda).

Anyways, everything was dropped in Russia. There's a lawsuit in the US, but the court dismissed the whole thing at first pass in 2021; I expect that one to go exactly nowhere.


That's why we rely on people like you to update the article at Wikipedia; your services are invaluable!


Thank you. This was a great source.


I think the characterization on Wikipedia is also incorrect. Igor seems to have had permission directly from the CTO to open-source the code, but 10 years later the company claimed that the CTO was not in a position to grant it.


The problem is that the CTO, who is rather famous in the Russosphere, only gave verbal permission, and only mentioned this happening when he was long gone from the company.

The lesson here is that, open source or not, you always need real documents to demarcate your IP, otherwise you're asking for trouble later down the line.

In typical US or UK companies software written would just go to the company, period. Here's a good article from Spolsky on how this works:


You can find a summary at

TL;DR: the Russian investigators closed the case of Rambler Group against Nginx/F5 in 2020 "for the absence of a crime event". Another company co-owned by the owner of Rambler Group started a case in the USA, but it was dismissed by a court in California in 2021.




One could only hope to build software as great as NGINX, keep it up for 20 years and receive a send off like this.



Thanks Igor for making a difference! I've always been after simple tools that do their job well, and, for the past 15 years, nginx was always one of them. Good luck with everything that comes next for you.


I know this is massively off-topic (have a good well paid "retirement" Igor), but I assumed that it would be written as

Dos Vidaniya

instead of (the correct)

Do svidanyia

My Russian studies are limited to listening to Sean Connery in The Russia House, and I guess I took "Dos" from the Latin languages. Odd.


It's worse than that: the first thing I scanned the article for is whether he is alive. A title like that without a pre-defined context could, in my native Russian perception, mean something much worse than just leaving a company. I'm glad he's doing well; I really enjoy NGINX as a casual user. It is a great gift to people. Удачи (good luck) or всего хорошего (best wishes) would not have triggered such a reaction of scanning the article for me.

I've just counted: only at the 13th paragraph could I get the answer.


As another native Russian speaker, to me the headline explicitly did mean that Igor is alive, and the subsequent meeting would be in the physical plane of our existence. Had it been one of the closer synonyms of "Good bye", I myself would have surmised the worst.


I know zero Russian, or any other language other than English. I had a very similar reaction to you. It sounded almost like an obituary.


maybe they've updated, but the fourth paragraph is currently "we announce today Igor has chosen to step back..."

which would seem to imply "not dead". but given the tone of the first three paragraphs i think even that is a bit too late in the post to clarify.


Yeah, but that's like the fourth paragraph.

I know hardly any Russian, only about enough to recognise "da svidanja" as "goodbye", so I'm not sure "in what language" I digested the headline (=link here on HN) -- probably a bit of all of my eclectic blend of European ones... But to me it certainly felt fifty-fifty whether it was a "changed jobs" or a "dearly departed" post. Checking which it was probably constituted my main reason for clicking through.


I "scanned" the article twice, so I could have missed it. The first time I quit, because HN comments are often clearer and more useful than "reading" every noise they publish out there, and I did not find what I was looking for. The second time I scanned again after I posted, just to find the paragraph, if it was there at all. Even if it was in the 4th paragraph originally, that's still far too deep. It should have been in the first sentence of the first paragraph.


It sounds the same to an American English speaker: even without knowing a word of Russian, we can infer the context, since we use the same farewell structure for the deceased.


> the first thing that I scanned in the article is if he is alive

Same thing, my first reaction was "oh my god, no, please no" and I rushed to see if he's alive.


Maybe it was just some sort of click bait to keep the reader on the page...


I would rather think machine translation and learning foreign languages should be better in general. But that is probably a C2-level subtlety, so if a non-Russian was writing that I could understand.

I imagine a situation: EN copywriter asks a RU colleague how to say "Goodbye", gets "Do svidaniya" as a transliteration without a context, and just puts it there. Which sounds like farewell.


Do (до) is basically “till”

Svidaniye (Свидание) has several meanings:

- most common modern single-word usage is for date as in “romantic date”

- archaic is for “meeting” that remained in this goodbye form.

So “do svidaniya” is literally for “till we meet again” :-)


I wonder why it is romanized to "do", when it's read as "da"?


Because it's spelt "do" in Russian. It's only due to stress and the fact it's a preposition that the way it is said becomes "da".

As the preposition is so small, it's considered together with the following word, which in the genitive has its stress on the "а".


English. The answer is almost always English, and its "quirky" way of transliteration.


I had the same thought. I think it's more Slavic vs Germanic/Romance. "sv", without a vowel, doesn't exist in any word I can think of. However, in Russian, consonant clusters like that are pretty common. See also, from the article, Sberbank. I'd bet there are plenty of examples in reverse too.


svelte is probably the most common english one


Ah that's a good one!




s: with, together

vid: videt' = to see, vid as in video

anie: just a suffix like "ing" in english

"till together-seeing"


The process is called “Romanization of Russian” [1], and there are various standard ways to do it.



I thought it was a typo and meant to say "to". Why not use Cyrillic here? Bit odd.


Dos Svidanya would be a great cocktail name.


Or early PC software.


do (till) svidaniya (seeing; I think it's called a gerund in English grammar)


Oh my gosh, I thought he passed away.


Same. An editorialised title may have been preferable, but I understand the rules here generally don't allow that.


An inserted translation of the literal meaning might have been allowed, since the rules also say (ISTR) that this is an English-language website.

Colloquialise that from the formal, exact "Till we meet again" to something a bit more informal (because that can still be read as "...in Heaven"), and you'd get something like

    "Do svidaniya (=See you later), Igor, and thank you for Nginx"
...which probably would have been much less likely to make half the readers start to think he'd died.