MHordecki
bddicken
Author here. This is good feedback.
The combination of transactions, isolation levels, and MVCC is such a huge undertaking to cover all at once, especially when comparing how it's done across multiple DBs, which I attempted here. It's always a balance between technical depth, accessibility for people with less experience, and not letting it turn into an hour-long read.
libraryofbabel
I actually like this article a lot. I do a bit of teaching, and I imagined the ideal audience for this as a smart junior engineer who knows SQL and has encountered transactions but maybe doesn’t really understand them yet. I think introducing things via examples of isolation anomalies (which most engineers will have seen examples of in bugs, even if they didn’t fully understand them) gives the explanation a lot more concreteness than starting with serializability as a theoretical concept as GP is proposing. Sure, strict serializability is a powerful idea that ties all this together and is more satisfying for an expert who already knows this stuff. But for someone who is just learning, you have to motivate it first.
If anything, I’d say it might be better to start with the lower isolation levels first, highlight the concurrency problems that can arise with them, and gradually introduce higher isolation levels until you get to serializability. That feels a bit more intuitive rather than downward progression from serializability to read uncommitted as presented here.
It also might be nice to see a quick discussion of why people choose particular isolation levels in practice, e.g. why you might make a tradeoff under high concurrency and give up serializability to avoid waits and deadlocks.
But excellent article overall, and great visualizations.
jaxr
I love the work planetscale does on keeping this type of content accurate yet accessible. Keep it up!
Rapzid
https://aphyr.com/posts/327-jepsen-mariadb-galera-cluster
More notation, more citations, more better.
bddicken
Notation is useful. Citations are nice for further reading. But I don't agree more of this makes for a better article!
peterclary
Looks like the author is geoblocking in protest of the UK Online Safety Act (and fair enough).
lateforwork
Most RDBMSs offer serializable isolation if you need it. Often you don't need it. The downside of using serializable isolation unnecessarily is reduced concurrency and throughput due to increased coordination between transactions.
vvern
Still, I think it's the right default to start with serializable. Then when you have performance issues you can think long and hard about whether relaxed isolation levels will work in a bug-free way. Better to start with a correct application.
lateforwork
Starting with serializable is not free, there is a coding cost to pay to handle all the concurrency errors.
bddicken
Yep. It's a wonderful capability to have for some situations, but for 90% of applications SERIALIZABLE isolation is overkill.
ignoramous
> concept of (strict) serializability [("S")], which is really a generalization of the concept of thread safety
Unsure why "strict" (L + S) is in braces: Linearizability ("L") is what resembles safety in SMP systems the most?
mika6996
Then recommend a better explanation?
zzzeek
I think this is a great post to have but I'm going to make a critical usability suggestion:
* the videos should have "pause" and a "step at a time" control *
Even at the "half speed", without a deep knowledge of the context, the videos move way too fast for me to read the syntax that's being invoked and line it up with the data on the left side. I (and I'm definitely not the only one) need to be able to sit on one step and stare at the whole thing without the latent anxiety of the state changing before I've had a chance to grok the whole thing.
this has nothing to do with familiarity with the concepts (read my profile). I literally need time to read all the words and connect them together mentally (ooh, just noticed this is pseudo-SQL syntax also, e.g. "select id=4"; that probably added some load for me) without worrying they're going to change while I'm still reading.
please add a step-at-a-time button!
rmunn
One way to think about transactions, as I wrote in an earlier comment, would be to think of them as being like snapshots in a copy-on-write filesystem like btrfs or zfs. But another way to think of them is being like Git branches.
When you BEGIN a transaction, you're creating a branch in Git. Everyone else continues to work on the master branch, perhaps making their own branches (transactions) off of it while you're working. Every UPDATE command you run inside the transaction is a commit pushed to your branch. If you do a ROLLBACK, then you're deleting the branch unmerged, and its changes will be discarded without ever ending up in the master branch. But if you instead do a COMMIT, then that's a `git merge` command, and your changes will be merged into the master branch.

If they merge cleanly, then all is well. If they do NOT merge cleanly, because someone else merged their own branch (committed their own transaction) that touched the same files you touched (updated rows in the same table), then the DB will go through the file line by line (go through the table row by row) to try to get a clean merge. If it can successfully merge both changes without conflict, great. If it can't, then what happens depends on the transaction settings you chose.

You can, when you start the transaction, tell the DB "If this doesn't merge cleanly, roll it back." Or you can say "If this doesn't merge cleanly, I don't care, just make sure it gets merged, and I don't care if the conflict resolution ends up picking the 'wrong' value, because for my use case there is no wrong value." This is like choosing between the "READ UNCOMMITTED" and "SERIALIZABLE" transaction settings (isolation levels): you would use "READ UNCOMMITTED" if you don't care about merge conflicts in this particular table and just want a quick merge. You would use "SERIALIZABLE" for tables with data that must, MUST, be correct, e.g. account balances. And there are two more levels in between for subtle differences in your use case's requirements.
As with my previous comment, this is probably obvious to 98.5% of people here. But maybe it'll help someone get that "ah-ha!" moment and understand transactions better.
shep101
concurr
zadikian
Seems like a frequent surprise is that Postgres and MySQL don't default to serializable (so not fully the I in ACID). They default to read-committed. I didn't see this article mention that, but maybe I missed it. The article says read-committed provides "slightly" better performance, but it's been way faster in my experience. I forget where, but I think they said they chose this default for that reason.
Using read-committed ofc means having to keep locking details in mind. Like, UNIQUE doesn't just guard against bad data entry, it can also be necessary for avoiding race conditions. But now that I know, I'd rather do that than take the serializable performance hit, and also have to retry xacts and deal with the other caveats at the bottom of https://www.postgresql.org/docs/current/transaction-iso.html
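To make the UNIQUE-as-race-guard point concrete, here's a minimal sketch using Python's sqlite3 (the table and names are invented for illustration): even if two sessions both pass an application-level "does this name exist?" check under read-committed, the constraint makes the second insert fail instead of silently creating a duplicate.

```python
import sqlite3

# Two sessions race to claim the same username. Without UNIQUE, both
# could SELECT, see no existing row, and both INSERTs would succeed.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT UNIQUE)")
db.execute("INSERT INTO users VALUES ('alice')")   # first session wins
try:
    db.execute("INSERT INTO users VALUES ('alice')")  # second session loses
except sqlite3.IntegrityError as e:
    print("constraint caught the race:", e)
```

The point generalizes: the constraint is enforced at commit/insert time inside the database, so it holds regardless of what the racing transactions observed.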
benoitg
Recent versions of MySQL and MariaDB default to repeatable-read for InnoDB tables, not read-committed:
https://dev.mysql.com/doc/refman/8.4/en/set-transaction.html...
https://mariadb.com/docs/server/reference/sql-statements/adm...
I don't know about MyISAM though (who uses it anyway ;-) ).
layer8
The issue with SERIALIZABLE, aside from performance, is that transactions can fail due to conflicts/deadlocks/timeouts, so application code must be prepared to recognize those cases and have a strategy to retry the transactions.
zadikian
Right. So my code had a helper to run some inner func in a serializable xact, in rw or ro mode, which would retry with backoff. Like the TransactionRunner in Spanner. But even with no retries occurring, it was very slow.
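A minimal sketch of the kind of retry helper described here, assuming a driver that surfaces serialization conflicts as an exception; `SerializationFailure` and `flaky_txn` are stand-ins for illustration, not a real driver API:

```python
import random
import time


class SerializationFailure(Exception):
    """Stand-in for a driver error such as SQLSTATE 40001."""


def run_in_serializable_txn(fn, max_attempts=5, base_delay=0.01):
    """Run fn, retrying with jittered exponential backoff on conflicts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except SerializationFailure:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) * random.random())


# Demo: a "transaction" that conflicts twice before committing.
attempts = []
def flaky_txn():
    attempts.append(1)
    if len(attempts) < 3:
        raise SerializationFailure()
    return "committed"

print(run_in_serializable_txn(flaky_txn))
```

A real helper would also reopen the transaction on each attempt and distinguish retryable errors from genuine failures.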
twic
VoltDB took this to an extreme - the way you interact with it is by sending it some code which does a mix of queries and logic, and it automatically retries the code as many times as necessary if there's a conflict. Because it all happens inside the DBMS, it's transparent and fast. I thought that was really clever.
I'm using the past tense here, but VoltDB is still going. I don't think it's as well-known as it deserves to be.
lateforwork
> Postgres and MySQL don't default to serializable
Oracle and SQL Server also default to read committed, not serializable. Serializable looks good in textbooks but is rarely used in practice.
mike_hearn
One reason Oracle uses it is that this mode scales horizontally whilst allowing very large transactions. You can just keep adding write masters.
The best implementation of serializable transactions I've seen is in FoundationDB but it comes with serious costs. Transactions are limited in size and duration to a point where many normal database operations are disallowed by the system and require app-layer workarounds (at which point, of course, you lose serializability). And in many cases you do need cluster locks for other purposes anyway.
zadikian
Spanner has similar limitations on xact size, maybe for this reason?
zadikian
Yeah, the only examples I know of it being default are Spanner and Cockroach, which are for a different use case.
rishabhaiover
It's an absolute pleasure reading planetscale blogs. I'm curious about what tool is used to make these visualizations?
bddicken
Author here. Thank you! These visuals are built with js + gsap (https://gsap.com)
rishabhaiover
Thank you for sharing it, kind sir. Your explanation on b+trees (https://planetscale.com/blog/btrees-and-database-indexes) is probably the best one I've ever seen on the internet.
airstrike
Thought it was going to be a blog post about Jeopardy for a sec
interlocutor
A lot of database tools these days prioritize instant sharing of updates over transactions and ACID properties. Example: Airtable. As soon as you update a field the update shows up on your coworkers screen who also has the same table open. The downside of this is that Airtable doesn't do transactions. And the downside of not doing transactions is potentially dangerous data inconsistencies. More about that here: https://visualdb.com/blog/concurrencycontrol/
gtowey
Not so subtle product promo while taking a swipe at your competition.
shalabhc
For all interested in this topic, I highly recommend the book Designing Data Intensive Applications https://www.goodreads.com/book/show/23463279-designing-data-....
It goes into not only different isolation levels, but also some ambiguity in the traditional ACID definition.
I believe a 2nd edition is imminent.
vismit2000
Second edition is available now: https://www.oreilly.com/library/view/designing-data-intensiv...
unkulunkulu
> At this stage, it has nothing to do with xmin and xmax, but rather because other transactions cannot see uncommitted data
Am I missing something, or is this statement incomplete? Also, I find the visualization of commit weird: it "points to" the header of the table, but then xmax gets updated "behind the scenes"? Isn't xmax/xmin "the mechanism behind how the database knows what is committed/not committed"? Also, there could be subtransactions, which make this statement even more contradictory.
I enjoyed the visualizations and explanations otherwise, thanks!
CWIZO
I also think the article glossed/skipped over the xmax/xmin concepts. And they are fundamental to understand how different isolation levels actually work. It's quite jarring to the point I'm wondering if a whole section got accidentally dropped from the article.
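For readers puzzling over xmin/xmax alongside this thread, here is a deliberately simplified toy model of the visibility rule being discussed. It ignores subtransactions, xid wraparound, and in-progress snapshots, and all names and numbers are invented; it only shows the core idea that each row version carries the creating txn id (xmin) and the deleting/updating txn id (xmax):

```python
def visible(row, snapshot_xid, committed):
    """A snapshot sees a version iff its creator committed before the
    snapshot and its deleter (if any) had not committed before it."""
    xmin, xmax = row["xmin"], row["xmax"]
    if xmin not in committed or xmin >= snapshot_xid:
        return False  # created by an uncommitted or too-new txn
    if xmax is not None and xmax in committed and xmax < snapshot_xid:
        return False  # deleted/updated by a txn this snapshot can see
    return True

committed = {1, 2}  # txns 1 and 2 have committed; txn 3 is still open
rows = [
    {"val": "old", "xmin": 1, "xmax": 2},             # superseded by txn 2
    {"val": "new", "xmin": 2, "xmax": None},          # created by txn 2
    {"val": "uncommitted", "xmin": 3, "xmax": None},  # txn 3 not committed
]
snapshot = [r["val"] for r in rows if visible(r, snapshot_xid=4, committed=committed)]
print(snapshot)  # ['new']
```

This is why "other transactions cannot see uncommitted data" and the xmin/xmax bookkeeping are the same mechanism viewed from different angles: visibility is computed from those columns plus commit status.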
rmunn
I like to think of transactions, in an MVCC system like Postgres, as being like snapshots in copy-on-write filesystems like btrfs or zfs. When you BEGIN a transaction, the DB takes a snapshot of your data, so now there are two versions of the data, the snapshot (visible to everyone else) and the "private" version visible only to your transaction. Then as you run UPDATEs, the new data is written to the private copy, but everyone else continues to work with the snapshot. (And might be creating their own private copies for other transactions).
If you do a ROLLBACK, then your private copy of the data is discarded, and its changes never make it into the official copy. But if you do a COMMIT, then your private snapshot is made public and is the new, official, copy for everyone else to read from. (Except those who started a transaction before you ran COMMIT: they made their private copies from the older snapshot and don't have a copy of your changes).
This is probably obvious to nearly everyone here, but I figured I'd write it anyway. You never know who might read an analogy like this and have that lightbulb moment where it suddenly makes sense.
P.S. Another analogy would be Git branches, but I'll write that in a different comment.
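The snapshot analogy above can be sketched in a few lines of Python. This is a toy model of the idea only (real MVCC keeps per-row versions rather than whole-database copies); the class and field names are invented:

```python
import copy

class ToySnapshotDB:
    """Copy-on-write 'transactions' over a dict, echoing the analogy."""
    def __init__(self, data):
        self.public = data        # the version everyone else reads

    def begin(self):
        return copy.deepcopy(self.public)  # private snapshot for this txn

    def commit(self, private):
        self.public = private     # publish; earlier snapshots keep the old view

db = ToySnapshotDB({"balance": 100})
txn = db.begin()
txn["balance"] -= 30                   # a private UPDATE
assert db.public["balance"] == 100     # others still see the snapshot
db.commit(txn)                         # ROLLBACK would just drop `txn`
print(db.public["balance"])  # 70
```

Rolling back is simply discarding the private copy, which is why aborted transactions in MVCC systems are cheap for readers.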
Animats
Not quite. Databases use both branching and locking. Two transactions that conflict can cause one thread to block, rather than rolling back.
SELECT followed by an update is the most usual case for a block. (I have to code one today, and I want to see if I can rewrite it as one MySQL statement.)
rmunn
Yes, the analogy isn't perfect. I didn't want to get into all the subtleties in an introductory analogy, but I should probably have mentioned blocking. Too late for me to edit my post with your correction, though.
jascha_eng
This actually used to be one of my favorite interview questions for backend engineers. Everyone has used transactions but depending on your seniority you'd understand it to different degrees.
And no, I'd never expect people to know the isolation levels by heart, but if you know there are different ones and they behave differently, that's pretty good and tells me you are curious about how things work under the hood.
layer8
The nominally same isolation levels can also behave differently on different database systems, so in general you have to investigate the details on a case-by-case basis anyway.
piskov
> A phantom read is one where a transaction runs the same SELECT multiple times, but sees different results the second time around
> Under the SQL standard, the repeatable read level allows phantom reads, though in Postgres they still aren't possible.
This is bad wording, which could give the impression that a repeatable read may show different values. Values in rows will be the same, but new rows may be added to the second result set. "New rows" is the important part: no previously read rows can be changed or deleted, otherwise there would be no repetition for those rows the second time around.
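A deterministic illustration of this distinction: under repeatable read, the rows from the first SELECT reappear unchanged, but (per the SQL standard) a new qualifying row, the phantom, may join the second result set. The data here is invented:

```python
# First SELECT ... WHERE balance >= 100 inside the transaction:
first_read = [("alice", 100), ("bob", 200)]

# A concurrent transaction INSERTs a qualifying row and commits, then
# the same SELECT runs again in the original transaction:
second_read = [("alice", 100), ("bob", 200), ("carol", 150)]

# The "repeatable" part: every previously read row comes back unchanged.
assert all(row in second_read for row in first_read)

# The phantom: a row that matched the predicate appeared out of nowhere.
phantoms = [r for r in second_read if r not in first_read]
print(phantoms)  # [('carol', 150)]
```

(As the thread notes, Postgres's repeatable read is stricter than the standard requires and does not produce this.)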
Quarrelsome
I have learned about the beauty of predicate locks. That's such a sexy way of dealing with the issue instead of just blithely funneling all writes.
MHordecki
I’ve found this article lacking. Like some other articles in this space, it introduces isolation levels through the lens of the phenomena described in the SQL standard, but I find that there’s a different, more intuitive approach.
I think it’s more tractable to define this problem space starting from the concept of (strict) serializability, which is really a generalization of the concept of thread safety. Every software engineer has an intuitive understanding of it. Lack of serializability can lead to execution-dependent behavior, which usually results in hard-to-diagnose bugs. Thus, all systems should strive towards serializability, and the database can be a tool in achieving it.
Various non-serializable levels of database transaction isolation are relaxations of the serializability guarantee, where the database no longer enforces the guarantee and it’s up to the database user to ensure it through other means.
The isolation phenomena are a useful tool for visualizing various corner cases of non-serializability, but they are not inherently tied to it. It's possible to achieve serializability while observing all of the SQL phenomena. For example, a Kubernetes cluster with carefully-written controllers can be serializable.
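The "generalization of thread safety" framing can be made concrete with the classic lost update, simulated here deterministically with generators so the bad interleaving is explicit rather than left to a scheduler; the function and field names are invented:

```python
def deposit_steps(state, amount):
    """Yield between the read and write halves of a read-modify-write txn."""
    v = state["balance"]           # read
    yield                          # ...interleaving point...
    state["balance"] = v + amount  # write based on the (possibly stale) read

state = {"balance": 100}
t1 = deposit_steps(state, 50)
t2 = deposit_steps(state, 50)

# A non-serializable schedule: both transactions read 100 before either writes.
next(t1)
next(t2)
for t in (t1, t2):
    try:
        next(t)  # each writes 100 + 50, clobbering the other
    except StopIteration:
        pass

print(state["balance"])  # 150, not 200: one deposit was lost
```

Any serial execution of the two transactions yields 200; the interleaved schedule yields 150. Serializable isolation is exactly the guarantee that only schedules equivalent to some serial order are allowed.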