alt.hn

2/21/2026 at 10:31:37 AM

A distributed queue in a single JSON file on object storage

https://turbopuffer.com/blog/object-storage-queue

by Sirupsen

2/24/2026 at 11:17:47 AM

Several things going on here:

- concurrency is very hard

- .. but object storage "solves" most of that for you, handing you a set of semantics which work reliably

- single file throughput sucks hilariously badly

- .. because 1 GB is ridiculously large for an atomic unit

- (this whole thing resembles a project I did a decade ago for transactional consistency on TFAT on Flash, except that somehow managed faster commit times despite running on a 400Mhz MIPS CPU. Edit: maybe I should try to remember how that worked and write it up for HN)

- therefore, all of the actual work is shifted to the broker. The broker is just periodically committing its state in case it crashes

- it's not clear whether the broker ACKs requests before they're in durable storage? Is it possible to lose requests in flight anyway?

- there's a great design for a message queue system between multiple nodes that aims for at least once delivery, and has existed for decades, while maintaining high throughput: SMTP. Actually, there's a whole bunch of message queue systems?

by pjc50

2/24/2026 at 1:48:04 PM

> The broker runs a single group commit loop on behalf of all clients, so no one contends for the object. Critically, it doesn't acknowledge a write until the group commit has landed in object storage. No client moves on until its data is durably committed.

by jitl

2/24/2026 at 9:16:35 PM

Yea, the group commit is the real insight here.

I read the blog post, and to help wrap my head around it I put together a simple TCP-based KV store with group commit; building it helped make it click for me.

https://github.com/a10y/group-commit/
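A minimal sketch of the group-commit idea (hypothetical names, not code from the linked repo): many writers block on `append()`, a single committer flushes the whole batch in one "write", and only then does anyone's append return.

```python
import threading

class GroupCommitLog:
    """Toy group commit: writers enqueue records and block; one committer
    flushes the entire batch at once, then wakes every waiter. No writer's
    append() returns until its record is in `committed`."""

    def __init__(self):
        self.lock = threading.Lock()
        self.flushed = threading.Condition(self.lock)
        self.pending = []        # records waiting for the next group commit
        self.committed = []      # stand-in for durable (object) storage
        self.commit_seq = 0      # bumped once per group commit

    def append(self, record):
        with self.lock:
            self.pending.append(record)
            target = self.commit_seq + 1
            # Block until a commit at or after `target` has landed.
            while self.commit_seq < target:
                self.flushed.wait()

    def commit_loop_once(self):
        """One iteration of the broker's commit loop (normally a background
        thread): swap out the batch, 'write' it in one shot, ack all waiters."""
        with self.lock:
            batch, self.pending = self.pending, []
            self.committed.extend(batch)   # one write for the whole group
            self.commit_seq += 1
            self.flushed.notify_all()
```

The point of the pattern, per the quote above: a thousand concurrent writers cost one object-storage PUT per batch instead of one each, and nobody is acknowledged before durability.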

by aduffy

2/24/2026 at 12:44:23 PM

AFAIK you can kinda "seek" reads in S3 using a range header, WCGW? =D

by candiddevmike

2/24/2026 at 2:11:59 PM

You can, and it's actually great if you store little "headers" etc to tell you those offsets. Their design doesn't seem super amenable to it because it appears to be one file, but this is why a system that actually intends to scale would break things up. You then cache these headers and, on cache hit, you know "the thing I want is in that chunk of the file, grab it". Throw in bloom filters and now you have a query engine.

Works great for Parquet.
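To make the "cached header + ranged read" pattern concrete, here's a toy sketch (an in-memory stand-in for S3; all names hypothetical): the object carries a small index of chunk offsets, you fetch the index once, and every later lookup on a cache hit is a single ranged GET.

```python
import json

class FakeObjectStore:
    """Tiny stand-in for S3: get_range(key, start, end) mimics an HTTP
    Range request ('bytes=start-end', end inclusive)."""
    def __init__(self):
        self.objects = {}
    def put(self, key, data: bytes):
        self.objects[key] = data
    def get_range(self, key, start, end):
        return self.objects[key][start:end + 1]

def build_object(chunks):
    """Lay chunks out back to back behind a JSON header mapping
    chunk name -> (offset, length). Header length goes in the first 8 bytes."""
    body, index = b"", {}
    for name, data in chunks.items():
        index[name] = (len(body), len(data))
        body += data
    header = json.dumps(index).encode()
    # offsets in the index are relative to the end of the header
    return len(header).to_bytes(8, "big") + header + body

def read_chunk(store, key, header_cache, name):
    if key not in header_cache:                       # one-time header fetch
        hlen = int.from_bytes(store.get_range(key, 0, 7), "big")
        index = json.loads(store.get_range(key, 8, 8 + hlen - 1))
        header_cache[key] = (index, 8 + hlen)
    index, base = header_cache[key]                   # cache hit: zero reads
    off, length = index[name]
    return store.get_range(key, base + off, base + off + length - 1)
```

On a warm cache, "the thing I want is in that chunk" costs exactly one range request, which is the property the parent comment is describing.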

by staticassertion

2/24/2026 at 3:27:21 PM

Yep! Other than random reads (~p99=200ms on larger ranges), range requests are essential for good download performance out of a single file. A single (range) request can "only" drive ~500 MB/s, so you need to fetch multiple offsets in parallel.

https://github.com/sirupsen/napkin-math

by Sirupsen

2/24/2026 at 2:17:52 PM

Amazon S3 Select enables SQL queries directly on CSV, JSON, or Apache Parquet objects, allowing retrieval of filtered data subsets to reduce latency and costs

by UltraSane

2/24/2026 at 2:25:53 PM

S3 Select is, very sadly, deprecated. It also supported HTTP RANGE headers! But they've killed it and I'll never forgive them :)

Still, it's nbd. You can cache a billion Parquet headers/footers on disk/memory and get 90% of the performance (or better tbh).

by staticassertion

2/25/2026 at 4:48:05 PM

Caching Parquet headers/footers sounds super interesting. Can you say more about how you implemented it?

by dotgov

2/25/2026 at 6:49:24 PM

Currently there's nothing in my headers, but the footer is straightforward. There's the schema, row group metadata, some statistics, byte offsets for each column in a group, page index, etc. It's everything you'd want if you wanted to reject a query outright or, if necessary, query extremely efficiently.

min/max stats for a column are huge because I pre-encode any low-cardinality strings into integers. This means I can skip entire row groups without ever touching S3, just with that footer information, and if I don't have it cached I can read it and skip decoding anything that doesn't have my data.

Footers can get quite large in one sense - 10s-100s of KB for a very large file. But that's obviously tiny compared to a multi-GB Parquet file, and the data can compress extremely well for a second/ third tier cache. You can store 1000s of these pre-parsed in memory no problem, and store 10s of thousands more on disk.

I've spent 0 time optimizing my footers currently. They can get smaller than they are, I assume, but I've not put much thought into it. In fact, I don't have to assume: I know that my own custom metadata overlaps with the existing Parquet stats and I just haven't bothered to deal with it. TBH there are a bunch of layout optimizations I've yet to explore; using headers, for example, would obviously have some benefits (streaming), whereas right now I do a sort of "attempt to grab the footer from the end in chunks until we find it lol". But it doesn't come up because... caching. And there are worse things than a few spurious RANGE requests.
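A sketch of the row-group pruning described above (hypothetical, heavily simplified stats; real Parquet footers carry much richer metadata): a row group can only contain the value you're looking for if it falls inside that group's min/max, so everything else is skipped without touching object storage.

```python
from dataclasses import dataclass

@dataclass
class RowGroupStats:
    offset: int      # byte offset of the row group in the file
    length: int      # byte length of the row group
    min_val: int     # min of the (integer-encoded) column in this group
    max_val: int     # max of the same column

def prune_row_groups(footer, needle):
    """Return only the byte ranges worth fetching: a row group can contain
    `needle` only if min_val <= needle <= max_val. Everything else is
    rejected purely from cached footer information."""
    return [(rg.offset, rg.offset + rg.length - 1)
            for rg in footer
            if rg.min_val <= needle <= rg.max_val]
```

With low-cardinality strings pre-encoded as integers (as the parent describes), this check is a couple of comparisons per row group, and a query that misses every range never issues a GET at all.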

by staticassertion

2/25/2026 at 7:18:01 PM

Have you tried AWS S3 Tables, which is a managed Iceberg service?

by UltraSane

2/25/2026 at 10:20:29 PM

I haven't. I'm sort of aware of it but I guess I prefer to just have tight control over the protocol/ data layout. It's not that hard and it gives me a ton of room to make niche optimizations. I doubt I'd get the same performance if I used it, but I could be wrong. Usually the more you can push your use case into the protocol the better.

by staticassertion

2/24/2026 at 5:53:38 PM

Wow, I didn't know that. To be fair, now that S3 Tables exists, it is rather redundant.

by UltraSane

2/24/2026 at 11:10:59 AM

The original graph appears to simply show the blocking issue of their previous synchronisation mechanism; 10 min to process an item down to 6 min. Any central system would seem to resolve this for them.

In any organisation it's good to make choices for simplicity rather than small optimisations - you're optimising maintenance, incident resolution, and development.

Typically I have a small pg server for these things. It'll work out slightly more expensive than this setup for one action, yet it will cope with so much more - extending to all kinds of other queues and config management - with simple management, off the shelf diagnostics etc.

While the object store is neat, there is a confluence of factors which make it great and simple for this workload, that may not extend to others. 200ms latency is a lot for other workloads, 5GB/s doesn't leave a lot of headroom, etc. And I don't want to be asked to diagnose transient issues with this.

So I'm torn. It's simple to deploy and configure from a fresh deployment PoV. Yet it wouldn't be accepted into any deployment I have worked on.

by Normal_gaussian

2/24/2026 at 2:09:09 PM

Yeah, I mean, I think we're all basically doing this now, right? I wouldn't choose this design, but I think something similar to DeltaLake can be simplified down for tons of use cases. Manifest with CAS + buffered objects to S3, maybe compaction if you intend to do lots of reads. It's not hard to put it together.

You can achieve stupidly fast read/write operations if you do this right, with a system that is shockingly simple to reason about.

> Step 4: queue.json with an HA brokered group commit

> The broker is stateless, so it's easy and inexpensive to move. And if we end up with more than one broker at a time? That's fine: CAS ensures correctness even with two brokers.

TBH this is the part that I think is tricky. Just resolving this in a way that doesn't end up with tons of clients wasting time talking to a broker that buffers their writes, pushes them, then always fails. I solved this at one point with token fencing and then decided it wasn't worth it and I just use a single instance to manage all writes. I'd again point to DeltaLake for the "good" design here, which is to have multiple manifests and only serialize compaction, which also unlocks parallel writers.

The other hard part is data deletion. For the queue it looks dead simple since it's one file, but if you want to ramp up your scale and get multiple writers or manage indexes (also in S3) then deletion becomes something you have to slip into compaction. Again, I had it at one point and backed it out because it was painful.

But I have 40k writes per second working just fine for my setup, so I'm not worrying. I'd suggest others basically punt as hard as possible on this. If you need more writes, start up a separate index with its own partition for its own separate set of data, or do naive sharding.

by staticassertion

2/24/2026 at 5:07:23 PM

This is news to me. What motivates you to reach for an S3-backed queue versus SQS?

by allknowingfrog

2/24/2026 at 5:52:15 PM

I'm not building a queue, but a lot of things on s3 end up being queue-shaped (more like 'log shaped') because it's very easy to compose many powerful systems out of CAS + "buffer, then push". Basically, you start with "build an immutable log" with those operations and the rest of your system becomes a matter of what you do with that log. A queue needs to support a "pop", but I am supporting other operations. Still, the architecture overlap all begins with CAS + buffer.

That said, I suspect that you can probably beat SQS for a number of use cases, and definitely if you want to hold onto the data long term or search over it then S3 has huge options there.

Performance will be extremely solid unless you need your worst case latency for "push -> pop" to be very tight in your p90.

by staticassertion

2/24/2026 at 6:55:36 PM

This is fascinating. It sounds like you're building "cloud datastructures" based on S3+CAS. What are the benefits, in your view, of using S3 instead of, say, Dynamo or Postgres? Or reaching for NATS/RabbitMQ/SQS/Kafka? I'd love to hear a bit more about what you're building.

by loevborg

2/24/2026 at 7:14:18 PM

It's just trade-offs. If you have a lot of data, s3 is just the only option for storing it. You don't want to pay for petabytes of storage in Dynamo or Postgres. I also don't want to manage postgres, even RDS - dealing with write loads that S3 handles easily is very annoying, dealing with availability, etc, all is painful. S3 "just works" but you need to build some of the protocol yourself.

If you want consistently really low latency/ can't tolerate a 50ms spike, don't retain tons of data, have <10K/s writes, and need complex indexing that might change over time, Postgres is probably what you want (or some other thing). If you know how your data should be indexed ahead of time, you need to store a massive amount, you care more about throughput than a latency spike here or there, or really a bunch of other use cases probably, S3 is just an insanely powerful primitive.

Insane storage also unlocks new capabilities. Immutable logs unlock "time travel" where you can ask questions like "what did the system look like at this point?" since no information is lost (unless you want to lose it, up to you).

Everything about a system like this comes down to reducing the cost of a GET. Bloom filters are your best friend, metadata is your best friend, prefetching is a reluctant friend, etc.

I'm not sure what I'm building. I had this idea years ago before S3 CAS was a thing and I was building a graph database on S3 with the fundamental primitive being an immutable event log (at the time using CRDTs for merge semantics, but I've abandoned that for now) and then maintaining an external index in Scylla with S3 Select for projections. Years later, I have fun poking at it sometimes and redesigning it. S3 CAS unlocked a lot of ways to completely move the system to S3.

by staticassertion

2/24/2026 at 7:10:46 PM

What you describe is very similar to how Icechunk[1] works. It works beautifully for transactional writes to "repos" containing PBs of scientific array data in object storage.

[1]: https://icechunk.io/en/latest/

by tomnicholas1

2/24/2026 at 3:10:54 PM

A lot of good insights here. I am also wondering whether they could simply put different jobs (unclaimed, in-progress, deleted/done) under different directories/prefixes and rely on an atomic object rename primitive [1][2][3] to solve the problem more gracefully (group commit could still be used if needed).

[1] https://docs.cloud.google.com/storage/docs/samples/storage-m... [2] https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameOb... [3] https://fractalbits.com/blog/why-we-built-another-object-sto...
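The claim-by-prefix idea can be sketched against a fake store whose rename is atomic (all names hypothetical; note that on real S3, RenameObject is currently limited to S3 Express One Zone): a worker owns a job exactly when it wins the rename out of `unclaimed/`.

```python
class FakePrefixStore:
    """In-memory stand-in for a bucket whose rename(src, dst) is atomic:
    it fails if src is already gone, so two workers can't both claim a job."""
    def __init__(self):
        self.objects = {}
    def put(self, key, data):
        self.objects[key] = data
    def list(self, prefix):
        return [k for k in self.objects if k.startswith(prefix)]
    def rename(self, src, dst):
        if src not in self.objects or dst in self.objects:
            return False            # lost the race
        self.objects[dst] = self.objects.pop(src)
        return True

def claim_next(store, worker_id):
    """Move the first unclaimed job into in-progress/ via atomic rename.
    Whoever wins the rename owns the job; a loser just tries the next one."""
    for key in sorted(store.list("unclaimed/")):
        job = key.removeprefix("unclaimed/")
        if store.rename(key, f"in-progress/{worker_id}/{job}"):
            return job
    return None
```

Completion would be another rename into a `done/` prefix, so the queue's state is just which prefix each object lives under.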

by thomas_fa

2/24/2026 at 9:47:22 PM

I didn't know about atomic object rename... it's going to take me a long time to think through the options here.

> RenameObject is only supported for objects stored in the S3 Express One Zone storage class.

Ah interesting, I don't use this but I bet in a year+ AWS will have this everywhere lol S3 is just too good.

by staticassertion

2/24/2026 at 3:06:44 PM

> I solved this at one point with token fencing

Could you expand on that? Even if it wasn't the approach you stuck with, I'm curious.

by zbentley

2/24/2026 at 3:28:42 PM

Oof, I probably misspoke there just slightly. I attempted to solve this with token fencing, I honestly don't know if it worked under failure conditions. This was also a while ago. But the idea was basically that there were two tiers - one was a ring based approach where a single file determined which writer was allocated a 'space' in the ring. Then every write was prepended with that token. Even if a node dropped/ joined and others didn't know about it (because they hadn't re-read the ring file), every write had this token.

Writes were not visible until compaction in this system. At compaction time, tokens would be checked and writes for older tokens would be rejected, so even if two nodes thought that they owned a 'place' in the ring, only writes for the higher value would be accepted. Soooomething like that. I ended up disliking this because it had undesirable failure modes like lots of stale/ wasted writes, and the code sucked.

by staticassertion

2/24/2026 at 2:40:27 PM

Reminds me of WarpStream: https://www.warpstream.com

Similar idea but you have the power of S3 scale (if you really need it). For context, I do not work at WS. My company switched to it recently and we've seen great improvements over traditional Kafka.

by salil999

2/24/2026 at 10:42:28 AM

The usual path an engineer takes is to take a complex and slow system and reengineer it into something simple, fast, and wrong. But as far as I can tell from the description in the blog, this one actually works at scale! This feels like a free lunch and I’m wondering what the tradeoff is.

by soletta

2/24/2026 at 10:48:40 AM

It seems like this is an approach that trades off scale and performance for operational simplicity. They say they only have 1GB of records and they can use a single committer to handle all requests. Failover happens by missing a compare-and-set so there's probably a second of latency to become leader?

This is not to say it's a bad system, but it's very precisely tailored for their needs. If you look at the original Kafka implementation, for instance, it was also very simple and targeted. As you bolt on more use cases and features you lose the simplicity to try and become all things to all people.

by jrjeksjd8d

2/24/2026 at 6:12:09 PM

(author here)

> It seems like this is an approach that trades off scale and performance for operational simplicity.

Yes, this is exactly it. Given that turbopuffer itself is built on the idea of object storage + stateless cache, we're all very comfortable dealing with it operationally. This design is enough for our needs and is much easier to be oncall for than adding an entirely new dependency would have been.

by danhhz

2/24/2026 at 6:43:16 PM

IMO this is the ideal way to engineer most (not all) systems. As simple as your needs allow. Nice work!

by packetlost

2/24/2026 at 1:58:35 PM

> Failover happens by missing a compare-and-set so there's probably a second of latency to become leader?

Conceptually that makes sense. How complicated is it to implement this failover logic in a safe way? If there are two processes competing for CAS wins, is there not a risk that both will think they're non-leaders and terminate themselves?

by loevborg

2/24/2026 at 3:32:47 PM

The broker lifecycle is presumably

1. Start

2. Load the queue.json from the object store

3. Receive request(s)

4. Edit in-memory JSON with batch data

5. Save data with CAS

6. On failure not due to CAS, recover (or fail)

7. On success, succeed requests and go to 3

8. On failure due to CAS, fail active requests and terminate

The client should have a retry mechanism against the broker (which may include looking up the address again).

From the broker's PoV, it will never fail a CAS until another broker wins a CAS, at which point that other broker is the leader. If it does fail a CAS, the client will retry with another broker, which will probably be the leader. The key insight is that the broker reads the file once; it doesn't compete to become leader by re-reading the data, and this is OK because of the nature of the data. You could also say that brokers are set up to consider themselves "maybe the leader" until they find out they are not, and losing leadership doesn't lose data.

The mechanism to start brokers is only vaguely discussed, but if a host-unreachable error also triggers a new broker, there is a neat scale-from-zero property.
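That lifecycle can be sketched against a fake CAS store (hypothetical names; an integer version stands in for the etag). The key behavior: a broker that loses a CAS does not retry, because the lost CAS is itself the signal that another broker is now the leader.

```python
import json

class CasStore:
    """Fake object store with compare-and-swap keyed on a version counter,
    standing in for an etag-conditioned PUT."""
    def __init__(self):
        self.value, self.version = b"[]", 0
    def read(self):
        return self.value, self.version
    def cas(self, expected_version, new_value):
        if self.version != expected_version:
            return False          # someone else committed first
        self.value, self.version = new_value, self.version + 1
        return True

class Broker:
    """Reads queue.json once at startup, then keeps CAS-writing batches.
    The first lost CAS means another broker has taken over, so this one
    fails its active requests and terminates rather than re-reading."""
    def __init__(self, store):
        self.store = store
    def start(self):
        raw, self.version = self.store.read()   # step 2: load once
        self.state = json.loads(raw)
    def commit_batch(self, batch):
        self.state.extend(batch)                # step 4: edit in memory
        if self.store.cas(self.version, json.dumps(self.state).encode()):
            self.version += 1                   # step 7: ack, keep going
            return True
        return False                            # step 8: lost leadership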

by Normal_gaussian

2/24/2026 at 2:34:54 PM

This is the hardest part because you can easily end up in a situation like you're describing, or having large portions of clients talking to a server just to have their writes rejected.

Further, this system (as described) scales best when writes are colocated (since it maximizes throughput via buffering). So even just by having a second writer you cut your throughput in ~half if one of them is basically dead.

If you split things up you can just do "merge manifests on conflict" since different writers would be writing to different files and the manifest is just an index, or you can do multiple manifests + compaction. DeltaLake does the latter, so you end up with a bunch of `0000.json`, `0001.json` and to reconstruct the full index you read all of them. You still have conflicts on allocating the json file but that's it, no wasted flushing. And then you can merge as you please. This all gets very complex at this stage I think, compaction becomes the "one writer only" bit, but you can serve reads and writes without compaction.

https://doi.org/10.14778/3415478.3415560

Note that since this paper was published we have gotten S3 CAS.

Alternatively, I guess just do what Kafka does or something like that?
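A sketch of that multi-manifest layout (names loosely borrowed from the DeltaLake convention; the store is a fake with put-if-absent): writers only ever conflict on allocating the next manifest name, never on the already-flushed data files, and readers reconstruct the index by merging all manifests in order.

```python
import json

class LogStore:
    """Fake bucket supporting put-if-absent, which is all this log needs:
    writers race to create _log/0000.json, _log/0001.json, ..."""
    def __init__(self):
        self.objects = {}
    def put_if_absent(self, key, data):
        if key in self.objects:
            return False
        self.objects[key] = data
        return True

def append_manifest(store, entries):
    """Claim the next numbered manifest; on conflict just retry with n+1.
    The buffered data files were already flushed, so nothing is wasted."""
    n = 0
    while not store.put_if_absent(f"_log/{n:04d}.json", json.dumps(entries)):
        n += 1
    return n

def read_index(store):
    """Reconstruct the full index by reading every manifest in order."""
    index, n = [], 0
    while f"_log/{n:04d}.json" in store.objects:
        index.extend(json.loads(store.objects[f"_log/{n:04d}.json"]))
        n += 1
    return index
```

Compaction (folding many manifests into one) is then the only operation that needs a single writer, which matches the parent's point.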

by staticassertion

2/24/2026 at 10:50:07 AM

Write amplification >9000 mostly

by formerly_proven

2/24/2026 at 4:04:23 PM

[dead]

by snowhale

2/24/2026 at 1:56:12 PM

Love this writeup. There's so much interesting stuff you can build on top of Object Storage + compare-and-swap. You learn a lot about distributed systems this way.

I'd love to see a full sample implementation based on s3 + ecs - just to study how it works.

by loevborg

2/24/2026 at 10:53:17 AM

Depending on who hosts your object storage this seems like it could get much more expensive than using a queue table in your database? But I'm also aware that this is a blog post of an object storage company.

by dewey

2/24/2026 at 12:20:20 PM

(cofounder of tpuf here)

We don't have a relational database, otherwise that would work great for a queue! You can imagine us continuing to iterate here to Step 5, Step 6, ... Step N over time. The tradeoff of each step is complexity, and complexity has to be deserved. This is working exceptionally well currently.

by Sirupsen

2/24/2026 at 12:37:10 PM

Makes total sense for your use case! I've been bitten by using object storage as a database before (churning through "update" ops), so this will depend on the pricing (and busyness of the queue, of course) of the provider anyway. Using whatever you have available instead of introducing complexity is the way. SQLite / Postgres goes a long way for use cases you wouldn't originally think would fit a relational database, too (full-text search, using it as a queue, ...).

by dewey

2/24/2026 at 12:47:37 PM

Due to the batching, this will only consume a few million Class B operations per month. They are $5/million.

by Sirupsen

2/24/2026 at 1:59:33 PM

> You can imagine us continuing to iterate here to Step 5, Step 6, ... Step N over time. The tradeoff of each step is complexity, and complexity has to be deserved. This is working exceptionally well currently.

Love this approach

by loevborg

2/24/2026 at 10:48:58 AM

This post touches on a realisation I made a while ago: just how far you can get with the guarantees and trade-offs of object storage.

What actually _needs_ to be in the database? I've never gone as far as building a job queue on top of object storage, but have been involved in building surprisingly consistent and reliable systems with object storage.

by jamescun

2/25/2026 at 8:41:52 AM

This is really cool but feels like an attack on the pride of software development at the same time.

Just slap some garbage on it and it is better

by ozgrakkurt

2/24/2026 at 4:16:36 PM

Does this suffer from ABA problem, or does object storage solve that for you by e.g. refusing to accept writes where content has changed between the read and write?

by talentedtumor

2/24/2026 at 6:23:20 PM

> refusing to accept writes where content has changed between the read and write?

Right. You can issue a write that will only be accepted if a condition is matched, like the etag of the object matching your expectation. If it doesn't match, your object was invalidated.

by staticassertion

2/24/2026 at 2:20:01 PM

By typography alone I can now tell turbopuffer is written in Zig.

by motoboi

2/24/2026 at 3:15:54 PM

It is by the juice of Zig that binaries acquire speed, the allocators acquire ownership, the ownership becomes a warning. It is by typography alone that I can now tell turbopuffer is written in Zig.

by soletta

2/24/2026 at 7:20:03 PM

thanks for that!

by motoboi

2/24/2026 at 7:57:37 PM

For performance reasons we needed a set of assets on all copies of a service. We were using consul for the task management, which is effectively a tree of data that’s tantamount to a json file (in fact we usually pull trees of data as a json file).

Among other problems I knew the next thing we were going to have to do was autoscaling and the system we had for call and response was a mess from that respect. Unanswered questions were: How do you know when all agents have succeeded, how do you avoid overwriting your peers’ data, and what do you do with agents that existed yesterday and don’t today?

I ended up rewriting all of the state management data so that each field had one writer and one or more readers. It also allowed me to move the last live service call off another service and decommission it. Instead of having an admin service, you just called one of the peers at random and elected it leader for the duration of that operation. I also arranged the data so the leader could watch the parent key for the roll call and avoid needing to poll.

Each time a task was created the leader would do a service discovery call to get a headcount and then wait for everyone to set a success or failure state. Some of these state transitions were idempotent, so if you reissued a task you didn't need to delete the old results. Everyone who had already completed it would no-op, and the ones that failed or the new servers that joined the cluster would finish up. If there was a delete operation later, the data would be purged from the data set and the agents, and a subsequent call would be considered new.

Long story short, your CS program should have distributed computing classes because this shit is hard to work out from first principles when you don’t know what the principles even are.

by hinkley

2/24/2026 at 11:29:29 AM

Is this reinventing a few redis features with an object storage for persistence?

by isoprophlex

2/24/2026 at 11:35:48 AM

Assuming you're already using object storage in your project but don't use Redis yet, it wouldn't be reinventing but just avoiding an extra dependency that would only be used by a single feature.

by dewey

2/24/2026 at 1:51:27 PM

it’s got some more 9s of durability compared to redis (redis did not invent “queue”)

by jitl

2/24/2026 at 4:49:47 PM

The window for this kind of opportunity has already passed, since there are dozens of people all doing the same thing. Also, abusing object storage is not very fun.

by up2isomorphism

2/24/2026 at 6:04:44 PM

[dead]

by SignalStackDev

2/24/2026 at 2:08:36 PM

[dead]

by octoclaw

2/24/2026 at 8:35:00 PM

[dead]

by umairnadeem123

2/24/2026 at 1:39:42 PM

[flagged]

by PunchyHamster

2/24/2026 at 7:03:25 PM

[flagged]

by naillang

2/24/2026 at 8:03:19 PM

Exponential backoff with random jitter is the solution there. It can be a bit tricky, though, because the random jitter can reduce throughput if it's too generous or not generous enough.

For instance backing off 53 ms is a lot of latency. Backing off 1 μs may still result in a collision, requiring two or three backoffs before it resolves, where a longer pause on the first backoff might have resulted in quicker recovery.

Optimistic locking is a statistical model just like bloom filters are, and scaling something 20 or 100 times higher is a hell of a lot of runway. Especially if the failure mode is a person sharing someone else’s account or using a laptop and tablet at exactly the same time.
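For reference, the common "full jitter" variant (assumed here; the parent doesn't name a specific scheme) picks a uniform delay inside a capped, exponentially growing window, which de-synchronises colliding writers while bounding the worst-case pause:

```python
import random

def backoff_delays(base=0.001, cap=0.050, attempts=5, rng=random.random):
    """'Full jitter' exponential backoff: retry a sleeps a uniformly
    random amount in [0, min(cap, base * 2**a)). The randomness breaks up
    lockstep retries; the cap bounds added latency on deep retries."""
    return [rng() * min(cap, base * (2 ** a)) for a in range(attempts)]
```

Tuning `base` is exactly the trade-off described above: too small and the first couple of retries still collide; too large and a single conflict costs tens of milliseconds of latency.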

by hinkley

2/24/2026 at 12:03:15 PM

that's A choice.

by jstrong