alt.hn

2/22/2026 at 12:28:41 PM

What is a database transaction?

https://planetscale.com/blog/database-transactions

by 0x54MUR41

2/22/2026 at 1:44:31 PM

I’ve found this article lacking. Like some other articles in this space, it introduces isolation levels through the lens of the phenomena described in the SQL standard, but I find that there’s a different, more intuitive approach.

I think it’s more tractable to define this problem space starting from the concept of (strict) serializability, which is really a generalization of the concept of thread safety. Every software engineer has an intuitive understanding of it. Lack of serializability can lead to execution-dependent behavior, which usually results in hard-to-diagnose bugs. Thus, all systems should strive towards serializability, and the database can be a tool in achieving it.

Various non-serializable levels of database transaction isolation are relaxations of the serializability guarantee, where the database no longer enforces the guarantee and it’s up to the database user to ensure it through other means.

The isolation phenomena are a useful tool for visualizing various corner cases of non-serializability, but they are not inherently tied to it. It's possible to achieve serializability while observing all of the SQL phenomena. For example, a Kubernetes cluster with carefully-written controllers can be serializable.

by MHordecki

2/22/2026 at 2:23:24 PM

Author here. This is good feedback.

The combination of transactions, isolation levels, and MVCC is such a huge undertaking to cover all at once, especially when comparing how it's done across multiple DBs, which I attempted here. It's always a balance between technical depth, accessibility to people with less experience, and not letting it turn into an hour-long read.

by bddicken

2/22/2026 at 5:05:14 PM

I actually like this article a lot. I do a bit of teaching, and I imagined the ideal audience for this as a smart junior engineer who knows SQL and has encountered transactions but maybe doesn’t really understand them yet. I think introducing things via examples of isolation anomalies (which most engineers will have seen examples of in bugs, even if they didn’t fully understand them) gives the explanation a lot more concreteness than starting with serializability as a theoretical concept as GP is proposing. Sure, strict serializability is a powerful idea that ties all this together and is more satisfying for an expert who already knows this stuff. But for someone who is just learning, you have to motivate it first.

If anything, I’d say it might be better to start with the lower isolation levels first, highlight the concurrency problems that can arise with them, and gradually introduce higher isolation levels until you get to serializability. That feels a bit more intuitive than the downward progression from serializability to read uncommitted presented here.

It also might be nice to see a quick discussion of why people choose particular isolation levels in practice, e.g. why you might make a tradeoff under high concurrency and give up serializability to avoid waits and deadlocks.

But excellent article overall, and great visualizations.

by libraryofbabel

2/22/2026 at 3:13:19 PM

I love the work planetscale does on keeping this type of content accurate yet accessible. Keep it up!

by jaxr

2/22/2026 at 2:22:48 PM

https://aphyr.com/posts/327-jepsen-mariadb-galera-cluster

More notation, more citations, more better.

by Rapzid

2/22/2026 at 2:32:51 PM

Notation is useful. Citations are nice for further reading. But I don't agree that more of this makes for a better article!

by bddicken

2/22/2026 at 2:33:07 PM

Looks like the author is geoblocking in protest of the UK Online Safety Act (and fair enough).

by peterclary

2/22/2026 at 3:08:31 PM

Most RDBMSs offer serializable isolation if you need it. Often you don't need it. The downside of using serializable isolation unnecessarily is reduced concurrency and throughput due to increased coordination between transactions.

by lateforwork

2/22/2026 at 11:31:52 PM

Still, I think it’s the right default to start with serializable. Then when you have performance issues you can think long and hard about whether relaxed isolation levels will work in a bug-free way. Better to start with a correct application.

by vvern

2/23/2026 at 12:00:27 AM

Starting with serializable is not free: there is a coding cost to pay to handle all the concurrency errors.

by lateforwork

2/23/2026 at 4:07:00 AM

Sybase SQLAnywhere implements (or at least did) strict serialization by taking an exclusive row lock on all rows... which you can imagine scales horribly for a table with a reasonable row count.

I found out the hard way at work. I had assumed it took an exclusive lock on the table level only, the documentation didn't really spell out the details of how it enforced the serialized access.

I changed it to a retry loop which worked fine and was fairly easy to implement all things considered. Not gonna reach for strict serialization again unless I have to.

by magicalhippo

2/22/2026 at 9:54:35 PM

Yep. It's a wonderful capability to have for some situations, but for 90% of applications SERIALIZABLE isolation is overkill.

by bddicken

2/22/2026 at 2:13:36 PM

Then recommend a better explanation?

by mika6996

2/22/2026 at 7:49:06 PM

> concept of (strict) serializability [("S")], which is really a generalization of the concept of thread safety

Unsure why "strict" (L + S) is in braces: Linearizability ("L") is what resembles safety in SMP systems the most?

by ignoramous

2/23/2026 at 1:36:35 AM

One way to think about transactions, as I wrote in an earlier comment, would be to think of them as being like snapshots in a copy-on-write filesystem like btrfs or zfs. But another way to think of them is being like Git branches.

When you BEGIN a transaction, you're creating a branch in Git. Everyone else continues to work on the master branch, perhaps making their own branches (transactions) off of it while you're working. Every UPDATE command you run inside the transaction is a commit pushed to your branch. If you do a ROLLBACK, then you're deleting the branch unmerged, and its changes will be discarded without ever ending up in the master branch. But if you instead do a COMMIT, then that's a `git merge` command, and your changes will be merged into the master branch.

If they merge cleanly, then all is well. If they do NOT merge cleanly, because someone else merged their own branch (committed their own transaction) that touched the same files you touched (updated rows in the same table), then the DB will go through the file line by line (go through the table row by row) to try to get a clean merge. If it can successfully merge both changes without conflict, great. If it can't, then what happens depends on the transaction settings you chose.

You can, when you start the transaction, tell the DB "If this doesn't merge cleanly, roll it back". Or you can say "If this doesn't merge cleanly, I don't care, just make sure it gets merged, even if the conflict resolution ends up picking the 'wrong' value, because for my use case there is no wrong value". This is like choosing between "SERIALIZABLE" and "READ UNCOMMITTED" transaction settings (isolation levels): you would use "READ UNCOMMITTED" if you don't care about merge conflicts in this particular table and just want a quick merge. You would use "SERIALIZABLE" for tables with data that must, MUST, be correct, e.g. account balances. And there are two more levels in between for subtle differences in your use case's requirements.
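The branch lifecycle in the analogy maps directly onto SQL. A minimal sketch using Python's sqlite3, with an invented `accounts` table (any DB with transactions behaves the same way for this part):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

# "Create a branch", commit work to it, then "delete it unmerged":
conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
conn.execute("ROLLBACK")
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 100

# "Create a branch" and "merge it": now the update is visible to everyone.
conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
conn.execute("COMMIT")
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 60
```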

As with my previous comment, this is probably obvious to 98.5% of people here. But maybe it'll help someone get that "ah-ha!" moment and understand transactions better.

by rmunn

2/23/2026 at 6:26:48 AM

concurr

by shep101

2/22/2026 at 5:29:41 PM

Seems like a frequent surprise is that Postgres and MySQL don't default to serializable (so not fully the I in ACID). Postgres defaults to read committed (MySQL's InnoDB actually defaults to repeatable read). I didn't see this article mention that, but maybe I missed it. The article says read committed provides "slightly" better performance, but it's been way faster in my experience. I forget where, but I think they said they chose this default for that reason.

Using read-committed ofc means having to keep locking details in mind. Like, UNIQUE doesn't just guard against bad data entry, it can also be necessary for avoiding race conditions. But now that I know, I'd rather do that than take the serializable performance hit, and also have to retry xacts and deal with the other caveats at the bottom of https://www.postgresql.org/docs/current/transaction-iso.html

by zadikian

2/22/2026 at 6:51:56 PM

The issue with SERIALIZABLE, aside from performance, is that transactions can fail due to conflicts/deadlocks/timeouts, so application code must be prepared to recognize those cases and have a strategy to retry the transactions.
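The retry strategy can be sketched as a small helper; this is a generic pattern, not any particular framework's API. sqlite3 signals lock conflicts with OperationalError, while Postgres under SERIALIZABLE reports SQLSTATE 40001 (serialization_failure), so the exception to catch depends on your driver:

```python
import random
import sqlite3
import time

def run_in_transaction(conn, fn, retries=5):
    """Run fn(conn) in a transaction, retrying when the DB reports a conflict."""
    for attempt in range(retries):
        try:
            conn.execute("BEGIN")
            result = fn(conn)
            conn.execute("COMMIT")
            return result
        except sqlite3.OperationalError:  # e.g. "database is locked"
            if conn.in_transaction:
                conn.execute("ROLLBACK")
            time.sleep(random.uniform(0, 0.05 * 2**attempt))  # jittered backoff
    raise RuntimeError(f"transaction failed after {retries} retries")

# Usage: fn must be safe to re-run, since it may execute several times.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
run_in_transaction(conn, lambda c: c.execute(
    "UPDATE accounts SET balance = balance - 40 WHERE id = 1"))
```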

by layer8

2/22/2026 at 7:11:04 PM

Right. So my code had a helper to run some inner func in a serializable xact, in rw or ro mode, which would retry with backoff. Like the TransactionRunner in Spanner. But even with no retries occurring, it was very slow.

by zadikian

2/22/2026 at 10:33:27 PM

VoltDB took this to an extreme - the way you interact with it is by sending it some code which does a mix of queries and logic, and it automatically retries the code as many times as necessary if there's a conflict. Because it all happens inside the DBMS, it's transparent and fast. I thought that was really clever.

I'm using the past tense here, but VoltDB is still going. I don't think it's as well-known as it deserves to be.

by twic

2/23/2026 at 1:46:48 AM

Interesting. How is that faster than just having the code running on the same machine as the DB? Guess it could be smarter about conflicts than random backoff.

by zadikian

2/22/2026 at 7:20:42 PM

> Postgres and MySQL don't default to serializable

Oracle and SQL Server also default to read committed, not serializable. Serializable looks good in textbooks but is rarely used in practice.

by lateforwork

2/22/2026 at 8:16:29 PM

Yeah, the only examples I know of it being default are Spanner and Cockroach, which are for a different use case.

by zadikian

2/22/2026 at 2:37:50 PM

A lot of database tools these days prioritize instant sharing of updates over transactions and ACID properties. Example: Airtable. As soon as you update a field, the update shows up on the screen of a coworker who has the same table open. The downside of this is that Airtable doesn't do transactions. And the downside of not doing transactions is potentially dangerous data inconsistencies. More about that here: https://visualdb.com/blog/concurrencycontrol/

by interlocutor

2/23/2026 at 12:20:39 AM

Not so subtle product promo while taking a swipe at your competition.

by gtowey

2/22/2026 at 1:17:21 PM

It's an absolute pleasure reading planetscale blogs. I'm curious about what tool is used to make these visualizations?

by rishabhaiover

2/22/2026 at 2:24:18 PM

Author here. Thank you! These visuals are built with js + gsap (https://gsap.com)

by bddicken

2/22/2026 at 5:49:12 PM

Thought it was going to be a blog post about Jeopardy for a sec

by airstrike

2/23/2026 at 1:27:41 AM

I like to think of transactions, in an MVCC system like Postgres, as being like snapshots in copy-on-write filesystems like btrfs or zfs. When you BEGIN a transaction, the DB takes a snapshot of your data, so now there are two versions of the data, the snapshot (visible to everyone else) and the "private" version visible only to your transaction. Then as you run UPDATEs, the new data is written to the private copy, but everyone else continues to work with the snapshot. (And might be creating their own private copies for other transactions).

If you do a ROLLBACK, then your private copy of the data is discarded, and its changes never make it into the official copy. But if you do a COMMIT, then your private snapshot is made public and is the new, official, copy for everyone else to read from. (Except those who started a transaction before you ran COMMIT: they made their private copies from the older snapshot and don't have a copy of your changes).
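The snapshot behavior can even be observed with SQLite's WAL journal mode, which gives readers a stable snapshot much like MVCC; the file path and table here are invented for the demo:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")  # readers keep a stable snapshot
writer.execute("CREATE TABLE t (v INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")
before = reader.execute("SELECT v FROM t").fetchone()[0]  # snapshot taken here

writer.execute("UPDATE t SET v = 2")  # another session commits a change

during = reader.execute("SELECT v FROM t").fetchone()[0]  # still sees the snapshot
reader.execute("COMMIT")
after = reader.execute("SELECT v FROM t").fetchone()[0]   # fresh snapshot

print(before, during, after)  # 1 1 2
```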

This is probably obvious to nearly everyone here, but I figured I'd write it anyway. You never know who might read an analogy like this and have that lightbulb moment where it suddenly makes sense.

P.S. Another analogy would be Git branches, but I'll write that in a different comment.

by rmunn

2/23/2026 at 2:42:31 AM

Not quite. Databases use both branching and locking. Two transactions that conflict can cause one thread to block, rather than rolling back.

SELECT followed by an update is the most usual case for a block. (I have to code one today, and I want to see if I can rewrite it as one MySQL statement.)

by Animats

2/23/2026 at 5:00:14 AM

Yes, the analogy isn't perfect. I didn't want to get into all the subtleties in an introductory analogy, but I should probably have mentioned blocking. Too late for me to edit my post with your correction, though.

by rmunn

2/22/2026 at 4:20:41 PM

This actually used to be one of my favorite interview questions for backend engineers. Everyone has used transactions but depending on your seniority you'd understand it to different degrees.

And no, I'd never expect people to know the isolation levels by heart, but if you know there are different ones and that they behave differently, that's pretty good and tells me you are curious about how things work under the hood.

by jascha_eng

2/22/2026 at 6:58:57 PM

The nominally same isolation levels can also behave differently on different database systems, so in general you have to investigate the details on a case-by-case basis anyway.

by layer8

2/22/2026 at 11:11:58 PM

> A phantom read is one where a transaction runs the same SELECT multiple times, but sees different results the second time around

> Under the SQL standard, the repeatable read level allows phantom reads, though in Postgres they still aren't possible.

This is bad wording, which could give the impression that repeatable read may show different values for the same rows. Values in previously read rows will be the same, but new rows may be added to the second result set. The "new rows" part is important: no previously read row can be changed or deleted, otherwise those rows would not read the same the second time around.
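The distinction can be sketched with Python's sqlite3. SQLite can't relax isolation inside a transaction, so the two SELECTs below run as separate autocommit transactions, standing in for a level that permits phantoms (table and values invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, isolation_level=None)  # autocommit: each stmt is its own txn
b = sqlite3.connect(path, isolation_level=None)
a.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER)")
a.execute("INSERT INTO orders VALUES (1, 50)")

first = a.execute("SELECT id, amount FROM orders WHERE amount > 10").fetchall()
b.execute("INSERT INTO orders VALUES (2, 99)")  # another session commits a new row
second = a.execute("SELECT id, amount FROM orders WHERE amount > 10").fetchall()

# Row (1, 50) reads back unchanged; the difference is the *new* row -- the phantom.
print(first)   # [(1, 50)]
print(second)  # [(1, 50), (2, 99)]
```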

by piskov

2/22/2026 at 5:13:46 PM

> At this stage, it has nothing to do with xmin and xmax, but rather because other transactions cannot see uncommitted data

Am I missing something, or is this statement incomplete? Also, I find the visualization of commit weird: it "points to" the header of the table, but then xmax gets updated "behind the scenes"? Isn't xmax/xmin "the mechanism behind how the database knows what is committed/not committed"? Also, there can be subtransactions, which make this statement even more contradictory.

I enjoyed the visualizations and explanations otherwise, thanks!

by unkulunkulu

2/22/2026 at 8:26:13 PM

I also think the article glossed over or skipped the xmax/xmin concepts. And they are fundamental to understanding how the different isolation levels actually work. It's quite jarring, to the point I'm wondering if a whole section got accidentally dropped from the article.

by CWIZO

2/22/2026 at 5:35:59 PM

I have learned about the beauty of predicate locks. That's such a sexy way of dealing with the issue instead of just blithely funneling all writes.

by Quarrelsome

2/22/2026 at 6:13:33 PM

We built an entire project for a client, with millions of SQL rows and thousands of users, without adding a single transaction. :/

by lasgawe

2/22/2026 at 6:55:02 PM

If you have no explicit transactions, every insert/update is its own transaction (aka auto-commit). Depending on what you do, you might not need more. It’s still important to know that these execute as a transaction.
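That each standalone statement executes as its own transaction is easy to demonstrate with Python's sqlite3: a multi-row UPDATE that fails partway is rolled back in full (schema invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER UNIQUE)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 1), (2, 2), (3, 3)])

# One statement, one implicit transaction: "v = v + 1" collides with an
# existing UNIQUE value partway through, so the whole statement rolls back.
try:
    conn.execute("UPDATE t SET v = v + 1")
except sqlite3.IntegrityError:
    pass

print(conn.execute("SELECT v FROM t ORDER BY id").fetchall())  # [(1,), (2,), (3,)]
```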

by layer8

2/22/2026 at 7:14:00 PM

Yep, there have been times I get through a whole project without any explicit transactions. In fact it can be a sign of not fully normalized schema design if you rely on those a lot (which can ofc be fine if you deliberately wanted that).

by zadikian

2/22/2026 at 9:48:14 PM

These are still transactions! It's not uncommon for a large percentage of transactions in an OLTP workload to be only one query without an explicit BEGIN / COMMIT. This is called an autocommit transaction or implicit transaction.

by bddicken

2/22/2026 at 7:53:32 PM

How nice for you. But since you totally neglected to say anything about your use-case or schema or query patterns, it’s impossible to know what this even means. Some use cases can trivially be done without any explicit transactions and you’re not giving anything up. For others (usually, something where you need to enforce invariants under high concurrency writes or writes+reads on the same data across multiple tables), transactions are pretty critical. So, it depends.

by libraryofbabel

2/22/2026 at 7:48:50 PM

Did you use SELECT FOR UPDATE at all, or just never had to update dependent data? If the complex operations are implemented using stored functions / procedures, then a transaction is implicit.

If the data is fairly straightforward like just one-to-many CRUD with no circular references, you would be able to do it without transactions, just table relationships would be enough to ensure consistency.

by ninjaoxygen

2/22/2026 at 7:10:29 PM

I thought this was pretty good, not least because it attempts to explain isolation levels, something I always found pretty tricky when teaching SQL. Mind you, I was only teaching SQL, and so isolation, as part of C and C++ courses so that our clients could do useful stuff, but explaining what levels to use was always tough.

by zabzonk

2/22/2026 at 2:34:00 PM

I think this is a great post to have but I'm going to make a critical usability suggestion:

* the videos should have "pause" and a "step at a time" control *

Even at "half speed", without deep knowledge of the context, the videos move way too fast for me to read the syntax being executed and line it up with the data on the left side. I (and I'm definitely not the only one) need to be able to sit on one step and stare at the whole thing without the latent anxiety of the state changing before I've had a chance to grok the whole thing.

this has nothing to do with familiarity with the concepts (read my profile). I literally need time to read all the words and connect them together mentally (ooh, just noticed this is pseudo-SQL syntax also, e.g. "select id=4", which probably added some load for me) without worrying they're going to change while I'm watching.

please add a step-at-a-time button!

by zzzeek

2/22/2026 at 4:12:20 PM

I appreciate this feedback, and that you read through it with enough rigor to notice.

by bddicken

2/23/2026 at 2:14:50 PM

I second it. At the very least a pause button is needed.

by johanyc

2/22/2026 at 9:04:09 PM

In the section about serializable read TFA gets the `accounts` `balance` wrong.

by cryptonector

2/22/2026 at 9:37:57 PM

Yes, I wasn't sure if I misunderstood the concept or if there was an error in the article.

by bone_tag

2/22/2026 at 9:46:36 PM

Thanks, fixed!

by bddicken

2/22/2026 at 6:12:54 PM

Have you ever seen anyone changing transaction isolation levels in code? I think pessimistic or optimistic locking is the preferred way to handle transaction concurrency.
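The optimistic variant is commonly done with a version column; a minimal sketch in Python's sqlite3, with an invented `accounts` schema (the same SQL works on any RDBMS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO accounts VALUES (1, 100, 0)")
conn.commit()

def optimistic_withdraw(conn, account_id, amount):
    # Read the current state and remember the version we saw.
    balance, version = conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    # Write back only if nobody bumped the version in the meantime.
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (balance - amount, account_id, version),
    )
    conn.commit()
    return cur.rowcount == 1  # False means a concurrent writer won; caller retries

ok = optimistic_withdraw(conn, 1, 30)  # succeeds: balance 70, version 1
```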

by nkzd

2/22/2026 at 9:40:55 PM

I have recommended that various teams drop down to READ-COMMITTED (MySQL) for various actions to avoid gap locks being taken, but AFAIK no one has done so yet.

by sgarland

2/22/2026 at 9:29:49 PM

Never used planetscale, but I've always liked their blog and other content. One of the founders was interviewed on the Software Engineering Daily podcast and it was super interesting.

by chamomeal

2/22/2026 at 9:18:21 PM

Is it just me, or are the final results of the deadlock visualisations incorrect? In both animations (mysql/pg), the final `SELECT balance from account...` queries appear to show the result of the two sessions, which have been terminated.

by nazwa

2/22/2026 at 9:35:27 PM

Author here. You're right! I'm fixing now.

by bddicken

2/22/2026 at 4:32:03 PM

[flagged]

by skeeter2020

2/22/2026 at 4:38:21 PM

>What's up with the trend of articles that seem like they're written by someone after a couple of sessions in a freshmen comp sci class?

This is funny and cracked me up, because the author actually teaches CS at a university.

>Nothing new here and a discussion of DB transactions without even mentioning ACID compliance and the trade-offs? You're better off picking up a 40 year old textbook than posts like this.

That would have been a very long blog post. Edit: I just realise Ben has already replied above.

by ksec

2/22/2026 at 11:05:00 PM

A miserable little pile of queries! But enough talk, have at you!

by bigstrat2003