alt.hn

3/27/2025 at 1:51:44 PM

Building a modern durable execution engine from first principles

https://restate.dev/blog/building-a-modern-durable-execution-engine-from-first-principles/

by whoiskatrin

3/28/2025 at 8:42:38 PM

One of the authors worked on Apache Flink but is too modest to include that interesting detail! So I'm adding it here. Hopefully he won't mind.

by dang

3/28/2025 at 9:16:29 PM

All of the Restate co-founders come from various stages of Apache Flink.

Restate is in many ways a mirror image of Flink. Both are event-streaming architectures, but otherwise they make a lot of contrary design choices.

(This is not really helpful for understanding what Restate does for you, but it is an interesting tidbit about the design.)

           Flink          |        Restate
    ----------------------+-----------------------
         analytics        |     transactions
      coarse-grained      |     fine-grained
         snapshots        |   quorum replication
    throughput-optimized  |   latency-sensitive
      app and Flink       |   disaggregated code
      share process       |     and framework
           Java           |        Rust

the list goes on...

by sewen

3/28/2025 at 8:58:39 PM

They mention it on their about page: https://restate.dev/about

by yubblegum

3/28/2025 at 9:06:00 PM

Thanks—sounds like it was more than one of them!

by dang

3/29/2025 at 1:50:17 AM

Hi, CEO of DBOS here (we're a Restate competitor).

I really enjoyed this blog post. The depth of the technical explanations is appreciated. It really helps me understand the choices you’ve made and why.

We’ve obviously made some different design choices, but each solution has its place.

by jedberg

3/29/2025 at 1:41:05 PM

DBOS was the first thing I thought of when I saw this.

From a DBOS perspective, can you explain what the differences are between the two?

by digdugdirk

3/31/2025 at 9:04:14 PM

(DBOS co-founder here) DBOS embeds durable execution into your app as a library backed by Postgres, whereas Restate provides durable execution as a service.

In my opinion, this makes DBOS more lightweight and easier to integrate into an existing application. To use DBOS, you just install the library and annotate workflows and steps in your program. DBOS will checkpoint your workflows and steps in Postgres to make them durable and recover them from failures, but will otherwise leave your application alone. By contrast, to use Restate, you need to split out your durable code into a separate worker service and use the Restate server to dispatch events to it. You're essentially outsourcing control flow and event processing to the Restate server, which will require some rearchitecting.
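
To make the contrast concrete, here is a minimal sketch of that annotation style, loosely modeled on the DBOS TypeScript SDK (class, method, and parameter names are invented, and decorator names vary between SDK versions, so treat this as illustrative rather than exact):

  import { DBOS } from "@dbos-inc/dbos-sdk";

  class Checkout {
    // Steps are checkpointed in Postgres as they complete.
    @DBOS.step()
    static async reserveInventory(itemId: string): Promise<void> {
      // ... call the inventory service ...
    }

    @DBOS.step()
    static async chargeCard(orderId: string): Promise<void> {
      // ... call the payment provider ...
    }

    // If the process crashes mid-workflow, DBOS recovers the
    // workflow and skips steps that already committed.
    @DBOS.workflow()
    static async placeOrder(itemId: string, orderId: string): Promise<void> {
      await Checkout.reserveInventory(itemId);
      await Checkout.chargeCard(orderId);
    }
  }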

Here's a blog post with more detail comparing DBOS and Temporal, whose model is similar to (but not the same as!) Restate's: https://www.dbos.dev/blog/durable-execution-coding-compariso...

by KraftyOne

3/29/2025 at 5:28:24 PM

I have to say, the examples are really good: https://github.com/restatedev/examples

As someone (me) who is new to distributed systems, and to what durable execution even is, I learned a lot (both from the blog post and the examples!). Thanks!

ChatGPT also helped :)

by randomcatuser

3/27/2025 at 6:03:44 PM

Interesting, how does it compare to Inngest and DBOS?

by oulipo

3/27/2025 at 7:40:19 PM

Hey, I work on Restate. There are lots of differences throughout the architecture and the developer experience, but the one most relevant to this article is that Restate is itself a self-contained distributed stream-processing engine, which it uses to offer extremely low-latency durable execution with strong consistency across AZs/regions. Other products tend to layer on top of other stores, and they inherit both the good and the bad properties of those stores when it comes to throughput/latency/multi-region/consistency.

We are putting a lot of work into high throughput, low latency, distributed use cases, hence some of the decisions in this article. We felt that this necessitated a new database.

by p10jkle

3/28/2025 at 8:58:05 PM

Hi,

I'm building a distributed application based on hypergraphs, because the data being processed is mostly re-executable in different ways.

It's so refreshing to read this. I also sat down many nights thinking about the same problem that you guys solved. I'm so glad about this!

Would it be possible to plug other storage engines into Restate? The data-structure that needs to be persisted allows multiple-path execution and instant re-ordering without indexing requirements.

I'm mostly programming in Julia and would love to see a little support for it too =)

Great work guys!

by ALLTaken

3/28/2025 at 9:09:55 PM

Thank you for the kind words!

The storage engine is pretty tightly integrated with the log, but the programming model allows you to attach quasi-arbitrary state to keys.

To see whether this fits your use case, it would be great to better understand the data and structure you are working with. Do you have a link where we could look at this?

by sewen

3/30/2025 at 9:21:31 PM

> Do you have a link where we could look at this?

Hi, thank you for your reply, highly appreciated.

Happy to explain in more detail =) But it's not public yet.

I'm working on the consensus module, optimised for hypergraphs with a succinct data-structure. The edges serve as an order-free index (FIT). Achieving max-flow, flow-matching, and graph-reduction via circuits are among the goals.

Targeting low-latency/high-performance distributed inference, enabling layer-combination of distinct models and resumable multi-use computations as a sort of distributed compute cache.

The data-structure follows a data-format for persistence, resumability, and service resilience. Although I've learned quite a bit about state management, it's still a topic I have much respect for, and I think using restate.dev may be better than re-inventing the wheel. I didn't have in mind to also build a cellular automaton for state management; it may be trivial, but I don't currently feel I have the capacity for it. Restate looks like a great production-ready solution rather than delaying a release.

I intend to open-source it once it's mature. (But I believe binary distribution will be the more popular choice.)

by ALLTaken

3/28/2025 at 9:28:08 PM

I find this type of thing very interesting technically, but not very interesting commercially.

It would seem to me that durable execution implies long-running jobs, but this kind of work suggests micro-optimisation of a couple of ms. Surely the applications inherently don't care about this stuff?

What am I missing? Or is it just that at a big enough scale, anything matters?

by bluelightning2k

3/29/2025 at 12:58:22 AM

The way we think about durable execution is that it is not just for long-running code, where you may want to suspend and later resume. In those cases, low-latency implementations would not matter, agreed.

But durable execution is immensely helpful for anything that has multiple steps that build on each other: anytime your service interacts with multiple APIs, updates some state, keeps locks, or queues events. Payment processing, inventory, order processing, ledgers, token issuing, etc. Almost all backend logic that changes state ultimately benefits from a durable execution foundation. The database stores the business data, but there is so much implicit orchestration/coordination-related state - having a durable execution foundation makes all of this so much easier to reason about.

The question is then: Can we make the overhead low enough and the system lightweight enough such that it becomes attractive to use it for all those cases? That's what we are trying to build here.
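
As a concrete illustration, here is a minimal sketch of such a multi-step handler using the Restate TypeScript SDK (the service, handler, and step names are invented, and the downstream calls are stubs):

  import * as restate from "@restatedev/restate-sdk";

  type Order = { id: string; amount: number };

  // Stubs standing in for real downstream APIs.
  const reserveInventory = async (order: Order) => { /* inventory API */ };
  const chargeCard = async (order: Order) => `payment-${order.id}`;

  const orderService = restate.service({
    name: "order",
    handlers: {
      process: async (ctx: restate.Context, order: Order) => {
        // Each ctx.run result is journaled. If the process crashes
        // between the two steps, recovery replays "reserve" from the
        // journal instead of re-executing it, then continues.
        await ctx.run("reserve", () => reserveInventory(order));
        const paymentId = await ctx.run("charge", () => chargeCard(order));
        return { orderId: order.id, paymentId };
      },
    },
  });

  restate.endpoint().bind(orderService).listen(9080);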

by sewen

3/29/2025 at 2:43:46 AM

(from DBOS) Great question. For better or worse, discussions about workflows and durable execution often intertwine, usually ending up in what types of jobs or workflows require durable exec.

But really, any system that runs the risk of failing or committing an error should have something in place to observe it, undo it, resume it. Your point about "big enough scale" is true - you can write your own code to handle that, and manually troubleshoot and repair corrupted data up to a certain point. But that takes time.

By making durable execution more lightweight/seamless (a la DBOS or Restate), the use of durable execution libs becomes just good programming practice for any application where the cost of failure is a concern.

by secondrow

3/28/2025 at 10:45:51 PM

How does it compare against Trigger or Hatchet?

by popalchemist

3/29/2025 at 4:06:15 AM

Isn’t this just event sourcing? Why not re-use the terminology from there?

by agentultra

3/29/2025 at 7:56:54 AM

Can you elaborate more on your persistence layer?

One of the good reasons why most products will layer on top of an established database like Postgres is because concerns like ACID are solved problems in those databases, and the database itself is well battle-hardened, Jepsen-tested with a reputation for reliability, etc. One of the reasons why many new database startups fail is precisely because it is so difficult to get over that hump with potential customers - you can't really improve reliability until you run into production bugs, and you can't sell because it's not reliable. It's a tough chicken-and-egg problem.

I appreciate you have reasons to build your own persistence layer here (like a push-based model), but doesn't doing so expose you to the same kind of risk as a new database startup? Particularly when we're talking about a database for durable execution, for which, you know, durability is a hard requirement?

by solatic

3/29/2025 at 10:45:18 AM

Indeed, the persistence layer is sensitive, and we do take this pretty seriously.

All data is persisted via RocksDB. Not only the materialized state of invocations and journals, but even the log itself uses RocksDB as the storage layer for its sequence of events. We do that to benefit from the insane testing and hardening that Meta has done (they run millions of instances). We are currently even trying to understand which operations and code paths Meta uses most, and to adapt our code to use those, to get the best-tested paths possible.

The more sensitive part would be the consensus log, which only comes into play if you run a distributed deployment. In a way, that puts us in a similar boat to companies like Neon: having a reliable single-node storage engine, but having to build the replication and failover around it. But that is also where the value-add over most databases lies.

We do actually use Jepsen internally for a lot of testing.

(Side note: Jepsen might be one of the most valuable things that this industry has - the value it adds cannot be overstated)

by sewen

3/29/2025 at 10:53:27 AM

I just realized I missed an important part: the primary durability for the bulk of the state comes from S3 (or a similar object store). The periodic snapshots give you an automatic, frequent backup mechanism for free, which in itself is a nice property to have.

by sewen

3/29/2025 at 12:53:17 AM

Looks very interesting. How does it compare to Temporal?

by xwowsersx

3/29/2025 at 1:16:39 AM

There are a few dimensions where this is different.

(1) The design is a fully self-contained stack, event-driven, with its own replicated log and embedded storage engine.

That lets it ship as a single binary that you can use without dependencies (on your laptop or in the cloud). It is really easy to run.

It also scales out by starting more nodes. Every layer scales hand-in-hand, from log to processors. (You should give it an object store to offload data when running distributed.)

The goal is a really simple and lightweight way to run it yourself, while incrementally scaling to very large setups when necessary. I think that is non-trivial to do with most other systems.

(2) Restate pushes events, compared to Temporal pulling activities. This is to some extent a matter of taste, though the push model works very naturally with serverless functions (Lambda, CF Workers, fly.io, ...).

(3) Restate models services and stateful functions, not workflows. This means you can model logic that keeps state for longer than the scope of a workflow (you get something like a K/V store transactionally integrated with durable execution; see the sketch below). It also supports RPC and messaging between functions (exactly-once, integrated with the durable execution).

(4) The event-driven runtime, together with the push model, gets fairly good latencies (low overhead of durable execution).
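
To illustrate point (3), here is a minimal sketch of such a stateful function, using the virtual-object flavor of the Restate TypeScript SDK (the object, handler, and state-key names are invented for the example):

  import * as restate from "@restatedev/restate-sdk";

  // One logical instance exists per key (e.g. per user). Its K/V
  // state lives in Restate and is committed atomically with the
  // handler's journal, keeping state and durable execution in sync.
  const cart = restate.object({
    name: "cart",
    handlers: {
      addItem: async (ctx: restate.ObjectContext, item: string) => {
        const items = (await ctx.get<string[]>("items")) ?? [];
        items.push(item);
        ctx.set("items", items);
        return items.length;
      },
    },
  });

  restate.endpoint().bind(cart).listen(9080);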

by sewen

3/29/2025 at 8:26:45 AM

What do you mean by pushing events and pulling activities? Where exactly does that take place during a durable execution? I've used Temporal and I know what Temporal Activities are, but the pushing and pulling confuses me.

by 7bit

3/29/2025 at 10:11:48 AM

afaik, with Temporal you deploy workers. When a workflow calls an activity, the activity gets added to a queue, and the workers pull activities from queues.

In Restate, there are no workers like that. The durable functions (which contain the equivalent of the activity logic) get deployed on FaaS or as a containerized RPC service. The Restate broker calls the function/service with the argument and some attached context (journal, state, ...).

You can think of it a bit like Kafka vs. EventBridge. The former needs long-lived clients that poll for events; the latter pushes events to subscribers/listeners.

This "push" (Restate broker calls the service) means there doesn't have to be a long running process waiting for work (by polling a queue).

I think the difference also follows naturally from the programming abstraction: in Temporal, it is workflows that create activities; in Restate, it is stateful durable functions (bundled into services).
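
For a feel of the push side in code, here is a minimal sketch using the Restate TypeScript SDK (service name invented): the deployed code is just an HTTP endpoint that the Restate broker invokes, with no polling loop anywhere.

  import * as restate from "@restatedev/restate-sdk";

  const greeter = restate.service({
    name: "greeter",
    handlers: {
      greet: async (ctx: restate.Context, name: string) => `Hello, ${name}!`,
    },
  });

  // Push model: Restate POSTs invocations (argument plus journal and
  // state context) to this endpoint. Nothing here polls a task queue,
  // which is why the same code can run as a scale-to-zero serverless
  // function instead of a long-lived worker.
  restate.endpoint().bind(greeter).listen(9080);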

by sewen