Get the top HN stories in your inbox every day.
judofyr
bradfitz
> and a tailetc cluster
What do you mean by this part? tailetc is a library used by the client of etcd.
Running an etcd cluster is much easier than running an HA PostgreSQL or MySQL config. (I previously made LiveJournal and ran its massively sharded HA MySQL setup)
jgraettinger1
Neat. This is very similar to [0], which is _not_ a cache but rather a complete mirror of an Etcd keyspace. It does Key/Value decoding up front, into a user-defined & validated runtime type, and promises to never mutate an existing instance (instead decoding into a new instance upon revision change).
The typical workflow is to do all of your "reads" out of the keyspace, attempt to apply Etcd transactions, and (if needed) block until your keyspace has caught up such that you read your write -- or someone else's conflicting write.
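A minimal sketch of that revision-compare pattern (a toy in-memory stand-in for illustration, not the real etcd clientv3 API or either library's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// entry is a value plus the revision at which it was last modified,
// mimicking etcd's ModRevision.
type entry struct {
	value string
	rev   int64
}

// store is a toy single-node stand-in for an etcd keyspace.
type store struct {
	rev  int64 // global revision counter
	data map[string]entry
}

var errConflict = errors.New("revision mismatch: concurrent write")

// txnPut writes key=value only if the key's current ModRevision still
// equals wantRev -- the same compare-and-swap shape as an etcd Txn
// guarded by If(ModRevision(key) == wantRev).
func (s *store) txnPut(key, value string, wantRev int64) (int64, error) {
	if s.data[key].rev != wantRev {
		return 0, errConflict
	}
	s.rev++
	s.data[key] = entry{value: value, rev: s.rev}
	return s.rev, nil
}

func main() {
	s := &store{data: make(map[string]entry)}

	// First write: the key is absent, so its ModRevision is 0.
	rev, err := s.txnPut("greeting", "hello", 0)
	fmt.Println(rev, err)

	// A writer still holding the stale revision 0 loses the race.
	_, err = s.txnPut("greeting", "howdy", 0)
	fmt.Println(err)

	// Re-reading the current revision first makes the retry succeed.
	rev, err = s.txnPut("greeting", "howdy", rev)
	fmt.Println(rev, err)
}
```

On a conflict, the caller re-reads (catching up past the other writer's change) and retries, which is the "block until you read your write -- or someone else's conflicting write" step described above.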
crawshaw
Drat! I went looking for people doing something similar when I sat down to design our client, but did not find your package. That's a real pity, I would love to have collaborated on this.
I guess Go package discovery remains an unsolved problem.
bradfitz
Whoa, we hadn't seen that! At first glance it indeed appears to be perhaps exactly identical to what we did.
harikb
I wish pkg.go.dev had a sign-in and an option to star/watch a package. I do this with GitHub repos I should revisit. Would have been handy for pkg.go.dev :) yes, I know - nobody wants yet another login
judofyr
> What do you mean by this part? tailetc is a library used by the client of etcd.
Oh. Since they have a full cache of the database, I thought it was intended to be used as a separate set of servers layered in front of etcd to lessen the read load. But you're actually using it directly? Interesting. What's the impact on memory usage and scalability? Aren't you worried that this won't scale over time, since all clients need to hold all the data?
bradfitz
Well, we have exactly 1 client (our 1 control server process).
So architecturally it's:
3 or 5 etcd (forget what we last deployed) <--> 1 control process <--> every Tailscale client in the world
The "Future" section is about bumping "1 control process" to "N control processes" where N will be like 2 or max 5 perhaps.
The memory overhead isn't bad, as the "database" isn't big. Modern computers have tons of RAM.
mwcampbell
> Running an etcd cluster is much easier than running an HA PostgreSQL or MySQL config.
What if you used one of the managed RDBMS services offered by the big cloud providers? BTW, if you don't mind sharing, where are you hosting the control plane?
bradfitz
> What if you used one of the managed RDBMS services offered by the big cloud providers?
We could (and likely would, despite the costs) but that doesn't address our testing requirements.
The control plane is on AWS.
We use 4 or 5 different cloud providers (Tailscale makes that much easier) but the most important bit is on AWS.
cakoose
> > Running an etcd cluster is much easier than running an HA PostgreSQL or MySQL config.
> What if you used one of the managed RDBMS services offered by the big cloud providers?
Yeah, AWS RDS "multi-AZ" does a good job of taking care of HA for you. (Google Cloud SQL's HA setup is extremely similar.) But you still get 1-2 minutes of full unavailability when hardware fails.
I haven't operated etcd in production myself, but I assume it does better because it's designed specifically for HA. You can't even run fewer than three nodes. (The etcd docs talk about election timeouts on the order of 1s, which is encouraging.)
For many use cases, 1-2 minutes of downtime is tolerable. But I can imagine situations where availability is paramount and you're willing to give up scale/performance/features to gain another 9.
endymi0n
This is about spot on. I do get the part about testability, but with a simple Key/Value use case like this, BoltDB or Pebble might have fit extremely well into the Native Golang paradigm as a backing store for the in-memory maps while not needing nearly as much custom code.
Plus maybe replacing the sync.Mutex with RWMutexes for optimum read performance in a seldom-write use case.
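For illustration, that swap looks roughly like this (a hypothetical cache type, not tailetc's actual code): `RLock` lets any number of readers proceed in parallel, while `Lock` remains exclusive for the rare writes.

```go
package main

import (
	"fmt"
	"sync"
)

// cache is a read-mostly map: many concurrent readers, rare writers.
// sync.RWMutex lets all readers proceed in parallel; a plain
// sync.Mutex would serialize them.
type cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func (c *cache) get(k string) (string, bool) {
	c.mu.RLock() // shared lock: readers don't block each other
	defer c.mu.RUnlock()
	v, ok := c.m[k]
	return v, ok
}

func (c *cache) set(k, v string) {
	c.mu.Lock() // exclusive lock: blocks all readers and writers
	defer c.mu.Unlock()
	c.m[k] = v
}

func main() {
	c := &cache{m: make(map[string]string)}
	c.set("node1", "100.64.0.1")

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // readers run concurrently under RLock
		wg.Add(1)
		go func() {
			defer wg.Done()
			if v, _ := c.get("node1"); v != "100.64.0.1" {
				panic("unexpected value")
			}
		}()
	}
	wg.Wait()
	fmt.Println("ok")
}
```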
On the other hand again, I feel a bit weird criticizing Brad Fitzpatrick ;-) — so there might be other things at play I don't have a clue about...
segmondy
If you want a distributed key/value data store, you want to use what's already out there and vetted. It used to be ZooKeeper, but etcd is much simpler; it's what Kubernetes uses, and it has been a big success and proved itself out there in the field. Definitely easier than a full SQL database, which is overkill and much harder to replicate, especially if you want a cluster of >= 3. Again, the key is "distributed", and that immediately rules out SQLite.
mindwok
It's overkill until it's not. We chose etcd initially, but after a while we started wanting to ask questions about our data that weren't necessarily organised the same way as the key/value hierarchy. That just moved all the processing to the client, and now I just wish we had used a SQL database from the beginning.
segmondy
Yeah, but for their use case it's just KV and also ability to link directly in go
strken
I was initially baffled by the choice of technology too. Part of it is that etcd is apparently much faster at handling writes, and offers more flexibility with regards to consistency, than I remember. Part of it might be that I don't understand the durability guarantees they're after, the gotchas they can avoid (e.g. transactions), or their overall architecture.
jeff-davis
This post illustrates the difference between persistence and a database.
If you are expecting to simply persist one instance of one application's state across different runs and failures, a database can be frustrating.
But if you want to manage your data across different versions of an app, different apps accessing the same data, or concurrent access, then a database will save you a lot of headaches.
The trick is knowing which one you want. Persistence is tempting, so a lot of people fool themselves into going that direction, and it can be pretty painful.
I like to say that rollback is the killer feature of SQL. A single request fails (e.g. unique violation), and the overall app keeps going, handling other requests. You application code can be pretty bad, and you can still have a good service. That's why PHP was awesome despite being bad -- SQL made it good (except for all the security pitfalls of PHP, which the DB couldn't help with).
perlgeek
I'd say the universal query capability is the killer feature of SQL.
In the OP they spent two weeks designing and implementing transaction-safe indexes -- something that all major SQL RDBMSes (and even many NoSQL solutions) have out of the box.
mrkstu
Maybe also part of the success of Rails? Similarly an easy to engineer veneer atop a database.
psankar
This comment helped me understand the problem and the solution better (along with a few followup tweets by the tailscale engineers). Thanks.
petters
> (Attempts to avoid this with ORMs usually replace an annoying amount of typing with an annoying amount of magic and loss of efficiency.)
Loss of efficiency? Come on, you were using a file before! :-)
Article makes me glad I'm using Django. Just set up a managed Postgres instance in AWS and be done with it. Sqlite for testing locally. Just works and very little engineering time spent on persistent storage.
Note: I do realize Brad is a very, very good engineer.
lmilcin
Efficiency can be measured in many different ways.
Having no dedicated database server or even database instance, being able to persist data to disk with almost no additional memory required, marginal amount of CPU and no heavy application dependencies can be considered very efficient depending on context.
Of course, if you start doing this on every change, many times a second, then it stops being efficient. But then there are ways to fix it without invoking Oracle or MongoDB or other beast.
When I worked on algorithmic trading framework the persistence was just two pointers in memory pointing to end of persisted and end of published region. Occasionally those pointers would be sent over to a dedicated CPU core that would be actually the only core talking to the operating system, and it would just append that data to a file and publish completion so that the other core can update the pointers.
The application would never read the file (the latency even to SSD is such that it could just as well be on the Moon) and the file was used to be able to retrace trading session and to bring up the application from event log in case it failed mid session.
As the data was nicely placed in order in the file, the entire process of reading that "database" would take no more than 1.5s, after which the application would be ready to synchronize with the trading session again.
robben1234
>Article makes me glad I'm using Django.
This was my main thought throughout reading it. So many things to consider, and such difficult issues to solve, that it seems they face a self-made database hell. Makes me appreciate the simplicity and stable performance of the Django ORM + Postgres.
0xbadcafebee
I am missing a lot of context from this post because this just sounds nonsensical.
First they're conflating storage with transport. SQL databases are a storage and query system. They're intended to be slow, but efficient, like a bodybuilder. You don't ask a bodybuilder to run the 500m dash.
Second, they had a 150MB dataset, and they moved to... a distributed decentralized key-value store? They went from the simplest thing imaginable to the most complicated thing imaginable. I guess SQL is just complex in a direct way, and etcd is complex in an indirect way. But the end results of both are drastically different. And doesn't etcd have a whole lot of functional limitations SQL databases don't? Not to mention its dependence on gRPC makes it a PITA to work with REST APIs. Consul has a much better general-purpose design, imo.
And more of it doesn't make sense. Is this a backend component? Client side, server side? Why was it using JSON if resources mattered (you coulda saved like 20% of that 150MB with something less bloated). Why a single process? Why global locks? Like, I really don't understand the implementation at all. It seems like they threw away a common-sense solution to make a weird toy.
bradfitz
I'd answer questions but I'm not sure where to start.
I think we're pretty well aware of the pros and cons of all the options and between the team members designing this we have pretty good experience with all of them. But it's entirely possible we didn't communicate the design constraints well enough. (More: https://news.ycombinator.com/item?id=25769320)
Our data's tiny. We don't want to do anything to access it. It's nice just having it in memory always.
Architecturally, see https://news.ycombinator.com/item?id=25768146
JSON vs compressed JSON isn't the point: see https://news.ycombinator.com/item?id=25768771 and my reply to it.
0xbadcafebee
You say you want to have a small database just for each control process deployment to be independent. But you need multiple nodes for etcd... So you currently have either a shared database for all control processes, or 3 nodes per control process, or 3 processes per control node, etc. Either way it seems weird.
I get that SQLite wouldn't work, but it also doesn't make sense to have one completely independent database per process. So I imagine you're using a shared database, at which point etcd starts to make more sense. It's just not as widely understood in production as SQL databases, and it has limitations which you might reach in a few years.
chupy
> It's just not as widely understood in production as SQL databases, and it has limitations which you might reach in a few years.
Reaching limitations in a few years and biting that bullet makes the difference between a successful startup that knows when and where to spend time innovating and a startup that spends all its time optimizing for that 1 million simultaneous requests/sec.
malisper
The post touches upon it, but I didn't really understand the point. Why doesn't synchronous replication in Postgres work for this use case? With synchronous replication you have a primary and secondary. Your queries go to the primary and the secondary is guaranteed to be at least as up to date as the primary. That way if the primary goes down, you can query the secondary instead and not lose any data.
apenwarr
That would have been considerably less scalable. etcd has some interesting scaling characteristics. I posted some followup notes on twitter here: https://twitter.com/apenwarr/status/1349453076541927425
judofyr
How is PostgreSQL (or MySQL) "considerably less scalable" exactly? etcd isn't particularly known for being scalable or performant. I'm sure it's fast enough for your use-case (since you've benchmarked it), but people have been scaling both PostgreSQL and MySQL far beyond what etcd can achieve (usually at the cost of availability of course).
apenwarr
[I work at Tailscale] I only mean scalable for our very specific and weird access patterns, which involves frequently read-iterating through a large section of the keyspace to calculate and distribute network+firewall updates.
Our database has very small amounts of data but a very, very large number of parallel readers. etcd explicitly disclaims any ability to scale to large data sizes, and probably rightly so :)
bradfitz
We could've done that. We could've also used DRBD, etc. But then we still have the SQL/ORM/testing latency/dependency problems.
irrational
Can you go into more about what these problems are? I've always used databases (about 15 years on Oracle and about 5 years on Postgres) and I'm not sure if I know what problems you are referring to. Maybe I have experienced them, but have thought of them by a different name.
SQL - I'm not sure what the problems are with SQL. But it is like a second language to me so maybe I experienced these problems long ago and have forgotten about them.
ORM - I never use an ORM, so I have no idea what the problems might be.
testing latency - I don't know what this refers to.
dependency - ditto
bradfitz
SQL is fine. We use it for some things. But not writing SQL is easier than writing SQL. Our data is small enough to fit in memory. Having all the data in memory and just accessible is easier than doing SQL + network round trips to get anything.
ORMs: consider yourself lucky. They try to make SQL easy by auto-generating terrible SQL.
Testing latency: we want to run many unit tests very quickly without high start-up cost. Launching MySQL/PostgreSQL docker containers and running tests against Real Databases is slower than we'd like.
Dependencies: Docker and those MySQL or PostgreSQL servers in containers.
jgraettinger1
I think the "database" label is tripping up the conversation here. What's being talked about here, really, is fast & HA coordination over a (relatively) small amount of shared state by multiple actors within a distributed system. This is literally Etcd's raison d'etre, it excels at this use case.
There are many operational differences between Etcd and a traditional RDBMS, but the biggest ones are that broadcasting updates (so that actors may react) is a core operation, and that the MVCC log is "exposed" (via ModRevision) so that actors can resolve state disagreements (am I out of date, or are you?).
lrossi
This reminds me of this post from the hostifi founder, sharing the code they used for the first 3 years:
https://twitter.com/_rchase_/status/1334619345935355905
It’s just 2 files.
Sometimes it’s better to focus on getting the product working, and handle tech debt later.
bob1029
I do like putting .json files on disk when it makes sense, as this is a one-liner to serialize both ways in .NET/C#. But, once you hit that wall of wanting to select subsets of data because the total dataset got larger than your CPU cache (or some other step-wise NUMA constraint)... It's time for a little bit more structure. I would have just gone with SQLite to start. If I am not serializing a singleton out to disk, I reach for SQLite by default.
Cthulhu_
I've seen the same when at one point we decided to just store most data in a JSON blob in the database, since "we will only read and write by ID anyway". Until we didn't, sigh. At least Postgres had JSON primitives for basic querying.
The real problem with that project was of course trying to set up a microservices architecture where it wasn't necessary yet and nobody had the right level of experience and critical thinking to determine where to separate the services.
bob1029
Storing JSON blobs in the database can be the best option if you are careful with your domain modeling.
miki123211
I use the same system (a JSON file protected with a mutex) for an internal tool I wrote, and it works great. For us, file size or request count is not a concern: it's serving a couple (internal) users per minute at peak loads, the JSON is about 150 KB after half a year, and old data could easily be deleted/archived if need be.
This tool needs to insert data in the middle of (pretty short) lists, using a pretty complicated algorithm to calculate the position to insert at. If I had used an RDBMS, I'd probably have to implement fractional indexes, or at least change the IDs of all the entries following the newly inserted one, and that would be a lot of code to write. This way, I just copy part of the old slice, insert the new item, copy the other part (which are very easy operations in Go), and then write the whole thing out to JSON.
I kept it simple, stupid, and I'm very happy I went with that decision. Sometimes you don't need a database after all.
Quekid5
As long as you're guaranteeing correctness[0], it's hard to disagree with the "simple" approach. As long as you don't over-promise or under-deliver, there's no problem, AFAICS.
[0] Via mutex in your case. Have you thought about durability, though? That one's actually weirdly difficult to guarantee...
a1369209993
> Have you thought about durability, though? That one's actually weirdly difficult to guarantee...
Strictly speaking, it's literally impossible to guarantee[0], so it's more a question of what kinds and degrees of problems are in- versus out-of-scope for being able to recover from.
0: What happens if I smash your hard drive with a hammer? Oh, you have multiple hard drives? That's fine, I have multiple hammers.
winrid
What happened to the first hammer :D
AlfeG
That's good, but a single file could break on power loss. I use SQLite. It's quite easy to use, though not a single line.
masklinn
Their point about schema migration is completely true though. An SQLite db is extremely stateful, and querying that state in order to apply schema migrations (for both the data schema and the indexes) is bothersome.
gfody
> The file reached a peak size of 150MB
is this a typo? 150MB is such a minuscule amount of data that you could do pretty much anything and be OK.
bradfitz
Not a typo. See why we're holding it all in RAM?
But writing out 150MB many times per second isn't super nice when both the 150MB and the number of times per second are growing.
cbushko
I think a lot of people are missing the point that a traditional DB (MySQL/Postgres) is not a good fit for this scenario. This isn't a CRUD application but a distributed control plane with a lot of reads and a small dataset. Joins and complex queries are not needed in this case, as the data is simple.
I am also going to go out on a limb and guess that this is all running in Kubernetes. Running etcd there is dead simple compared to even running something like Postgres.
Congrats on a well engineered solution that you can easily test on a dev machine. Running a DB in a docker container isn't difficult but it is just one more dev environment nuance that needs to be maintained.
bradfitz
We don't use Kubernetes (or even Docker) currently.
cbushko
Hopefully Tailscale gets to the size where Kubernetes is worth it. It's a complex thing to run and understand, but in the end I think it is worth it. It has certainly made my day-to-day life a lot easier and allowed our tiny team to build out a solid platform. It has greatly reduced the time our developers need to get a service up and running and new features out.
bradfitz
We have a lot of Kubernetes experience on the team. Multiple of us run Kubernetes clusters in our home labs (mine: https://github.com/bradfitz/homelab), and one of us used to be on the Google GKE team as an SRE, and is the author of https://metallb.universe.tf/ (which multiple of us also use).
Us _not_ using Kubernetes isn't because we don't know how to use it. It's because we _do_ know how to use it and when _not_ to use it. :)
dekhn
I never took a course in databases. At some point I was expected to store some data for a webserver, looked at the BSDDB API, and went straight to MySQL (this was in ~2000). I spent the time to read the manual on how to do CRUD but didn't really look at indices or anything exotic. The webserver just wrote raw SQL queries against an ultra-simple schema, storing lunch orders. It's worked for a good 20 years and only needed minor data updates when the vendor changed, and small Python syntax changes to move to Python 3.
At that point I thought "hmm, i guess I know databases" and a few years later, attempted to store some slightly larger, more complicated data in MySQL and query it. My query was basically "join every record in this table against itself, returning only rows that satisfy some filter". It ran incredibly slowly, but it turned out our lab secretary was actually an ex-IBM Database Engineer, and she said "did you try sorting the data first?" One call to strace showed that MySQL was doing a very inefficient full table scan for each row, but by inserting the data in sorted order, the query ran much faster. Uh, OK. I can't repeat the result, so I expect MySQL fixed it at some point. She showed me the sorts of DBs "real professionals" designed- it was a third order normal form menu ordering system for an early meal delivery website (wayyyyy ahead of its time. food.com). At that point I realized that there was obviously something I didn't know about databases, in particular that there was an entire schema theory on how to structure knowledge to take advantage of the features that databases have.
My next real experience with databases came when I was hired to help run Google's MySQL databases. Google's Ads DB was implemented as a collection of mysql primaries with many local and remote replicas. It was a beast to run, required many trained engineers, and never used any truly clever techniques, since the database was sharded so nobody could really do any interesting joins.
I gained a ton of appreciation for MySQL's capabilities from that experience, but I can't say I really enjoy MySQL as a system. I like PostgreSQL much better; it feels like a grown-up database.
What I can say is that all this experience, and some recent work with ORMs, has led me to believe that while the SQL query model is very powerful, and RDBMSes are very powerful, you basically have to fully buy into the mental model and retain some serious engineering talent: folks who understand database index disk structures, multithreading, etc.
For everybody else, a simple single-machine on-disk key-value store with no schema is probably the best thing you can do.
JacobiX
After reading the comments and the blog post, I think the requirements boil down to fast persistence to disk, minimal dependencies, and fast test runs. Fortunately the data is very small (150MB) and fits easily in memory. According to the post the data changes often, so they need to write it many times a second. But I'm not sure why they need to flush the entire 150MB every time. Why not structure the files/indexes so that only the modified data is written?
Interesting choice of technology, but you didn't completely convince me to why this is better than just using SQLite or PostgreSQL with a lagging replica. (You could probably start with either one and easily migrate to the other one if needed.)
In particular you've designed a very complicated system: Operationally you need an etcd cluster and a tailetc cluster. Code-wise you now have to maintain your own transaction-aware caching layer on top of etcd (https://github.com/tailscale/tailetc/blob/main/tailetc.go). That's quite a brave task considering how many databases fail at Jepsen. Have you tried running Jepsen tests on tailetc yourself? You also mentioned a secondary index system which I assume is built on top of tailetc again? How does that interact with tailetc?
Considering that high-availability was not a requirement and that the main problem with the previous solution was performance ("writes went from nearly a second (sometimes worse!) to milliseconds") it looks like a simple server with SQLite + some indexes could have gotten you quite far.
We don't really get the full overview from a short blog post like this though so maybe it turns out to be a great solution for you. The code quality itself looks great and it seems that you have thought about all of the hard problems.