3/31/2025 at 5:27:43 PM
It'd be nice if the post went into how conflict resolution will (?) work, because that's the hard part here and the main selling point imo.
by ezekg
4/1/2025 at 12:12:47 AM
A lot of offline sync projects just drop data on conflict and pretend it didn't happen. It's the salesman's job to divert your questions about it. I found another blog post from Turso where they say they offer three options on conflict: drop it, rebase it (and hope for no conflict?), or "handle it yourself".
Writing an offline sync isn't hard. Dealing with conflicts is a PITA.
https://turso.tech/blog/introducing-offline-writes-for-turso...
by titaphraz
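For concreteness, here is roughly what those three policies tend to look like in a generic offline-write queue. This is an illustrative TypeScript sketch only, not Turso's actual API; the LocalWrite/ServerRow types and the reconcile signature are invented:

    // Hypothetical conflict policies for a queued offline write.
    type ConflictPolicy =
      | { kind: "drop" }     // discard the local write and pretend it didn't happen
      | { kind: "rebase" }   // replay the local change on top of current server state
      | { kind: "custom"; resolve: (local: LocalWrite, server: ServerRow) => ServerRow };

    interface LocalWrite { rowId: string; changes: Record<string, unknown>; writtenAt: number; }
    interface ServerRow  { rowId: string; values: Record<string, unknown>; version: number; }

    function reconcile(local: LocalWrite, server: ServerRow, policy: ConflictPolicy): ServerRow | null {
      switch (policy.kind) {
        case "drop":
          return null; // data silently lost - the failure mode the parent comment describes
        case "rebase":
          // naive field-level overwrite on top of server state, hoping the fields don't collide
          return { ...server, values: { ...server.values, ...local.changes }, version: server.version + 1 };
        case "custom":
          return policy.resolve(local, server); // "handle it yourself"
      }
    }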
3/31/2025 at 5:37:34 PM
Conflict resolution can't work in a general sense. How you reconcile many copies of the same record could depend on time of action, server location, authority level of the user, causality between certain business events, enabled account features, prior phase of the moon, etc.
Whether or not offline sync can even work is very much a domain specific concern. You need to talk to the business about the pros & cons first. For example, they might not like the semantics regarding merchant terminals and offline processing. I can already hear the "what if the terminal never comes back online?" afternoon meeting arising out of that one.
by bob1029
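To make that concrete, a domain-specific resolver often ends up as a stack of ordered business rules rather than a generic merge. The fields and rules below are hypothetical, purely to illustrate the point:

    // Which copy of a record wins depends on business rules, not a generic algorithm.
    interface RecordCopy {
      value: unknown;
      modifiedAt: number;     // time of action
      userAuthority: number;  // e.g. clerk = 1, manager = 2, auditor = 3
      offlineWrite: boolean;  // written while the terminal was offline?
    }

    function pickWinner(a: RecordCopy, b: RecordCopy): RecordCopy {
      // Rule 1: a higher-authority user wins regardless of timing.
      if (a.userAuthority !== b.userAuthority) {
        return a.userAuthority > b.userAuthority ? a : b;
      }
      // Rule 2: prefer online writes for this record type (the outcome of the
      // "what if the terminal never comes back online?" meeting).
      if (a.offlineWrite !== b.offlineWrite) {
        return a.offlineWrite ? b : a;
      }
      // Rule 3: otherwise fall back to last-writer-wins.
      return a.modifiedAt >= b.modifiedAt ? a : b;
    }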
3/31/2025 at 8:00:42 PM
I would say this is why server-authoritative systems that allow for custom logic in the backend for conflict resolution work well in practice (like Replicache, PowerSync, and Zero - custom mutators are coming in beta for the latter). Predefined, deterministic distributed conflict resolution such as CRDT data structures works well for certain use cases like text editing, but many other use cases require deeper customizability based on various factors, like you said.
by ochiba
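As a rough illustration of the server-authoritative approach: the client queues named mutations while offline, and the server re-runs them against current state, applying whatever business rules it wants. This is a sketch, not the actual API of Replicache, PowerSync, or Zero; the mutation names and types are made up:

    interface Mutation { name: string; args: Record<string, unknown>; clientId: string; }
    interface Task { id: string; title: string; status: "open" | "done"; updatedAt: number; }

    const tasks = new Map<string, Task>(); // stand-in for the server's database

    function applyMutation(m: Mutation): void {
      switch (m.name) {
        case "completeTask": {
          const task = tasks.get(m.args.taskId as string);
          if (!task) return;                  // row deleted while the client was offline: the server decides this is a no-op
          if (task.status === "done") return; // already completed by someone else: also a no-op, not an error
          tasks.set(task.id, { ...task, status: "done", updatedAt: Date.now() });
          return;
        }
        default:
          throw new Error(`unknown mutation: ${m.name}`);
      }
    }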
3/31/2025 at 8:04:59 PM
CRDTs fall flat for rich text editing, though. So many nasty edge cases, and nobody has solved them all, despite their claims.
by jahewson
3/31/2025 at 8:33:21 PM
Have you tried Loro?
by mentalgear
3/31/2025 at 9:24:44 PM
https://loro.dev/
by mmerlin
3/31/2025 at 8:06:23 PM
Server-authoritative conflict resolution kind of mirrors my thinking as well: having resolution work like multiplayer netcode, where the client and server may or may not attempt to resolve recent conflicts, but the server has the final say on state. Just not sure how this plays out when a client starts dropping conflicting data because the server says so...
by ezekg
3/31/2025 at 8:07:36 PM
Yes, exactly. https://www.gabrielgambetta.com/client-side-prediction-serve...
by ochiba
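Adapted from the netcode pattern in that article to a sync setting, the client-side shape is roughly: apply writes optimistically, keep them in a pending queue, and rebuild local state whenever authoritative server state arrives. This is an illustrative sketch with invented names, not any library's API:

    interface AppState { [key: string]: unknown; }
    interface Write { seq: number; apply: (state: AppState) => AppState; }

    let confirmedState: AppState = {}; // last authoritative snapshot from the server
    let pending: Write[] = [];         // local writes the server hasn't acknowledged

    function localWrite(w: Write): AppState {
      pending.push(w);
      return render(); // optimistic: the user sees the write immediately
    }

    function onServerUpdate(serverState: AppState, ackedThroughSeq: number): AppState {
      confirmedState = serverState;                           // the server has the final say
      pending = pending.filter(w => w.seq > ackedThroughSeq); // acknowledged (or rejected) writes fall away
      return render();
    }

    function render(): AppState {
      // visible state = authoritative base + unconfirmed local writes replayed on top
      return pending.reduce((s, w) => w.apply(s), { ...confirmedState });
    }

The uncomfortable part the parent comment raises shows up in onServerUpdate: a write the server rejected simply disappears from the pending queue, so the UI has to decide how, or whether, to tell the user.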
3/31/2025 at 10:51:20 PM
CRDTs (Conflict-free Replicated Data Types) can absolutely reconcile many-writers situations. There are a number of these systems, and they all have their own rules around that replication (sometimes very complicated rules that are hard to reason about). As long as you can live inside those rules, and accept that they are going to have sharp corners that don't quite make sense for your use case, then you can get a virtually free lunch there. But living inside of those rules (and sometimes just understanding those rules) can be a big ask in some situations, so you have to know what you are doing.
by larkost
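A minimal example of "living inside the rules": a last-writer-wins register, one of the simplest CRDTs. Merging any two replicas in any order converges, but the sharp corner is baked in - the losing concurrent write is silently discarded. Illustrative TypeScript:

    interface LWWRegister<T> { value: T; timestamp: number; replicaId: string; }

    // Commutative, associative, idempotent merge: replicas converge no matter
    // how or in what order updates are exchanged.
    function merge<T>(a: LWWRegister<T>, b: LWWRegister<T>): LWWRegister<T> {
      if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
      // tie-break deterministically so every replica picks the same winner
      return a.replicaId > b.replicaId ? a : b;
    }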
3/31/2025 at 8:06:17 PM
This. It’s so obvious when you think about it, but everybody wants a free lunch.
by jahewson
3/31/2025 at 6:05:39 PM
I think this point confuses me the most in this regard:
> Local-first architectures allow for fast and responsive applications that are resilient to network failures
So are we talking about apps that can work for days and weeks offline and then sync a lot of data at once, or are we talking about apps that can survive a few seconds glitch in network connectivity? I think that what is promised is the former, but what will make sense in practice is the latter.
by mystifyingpoi
3/31/2025 at 8:10:58 PM
Local-first is overkill for transient faults. This is probably meant for outage and disaster scenarios.
by hnthrow90348765
3/31/2025 at 8:11:20 PM
There are niche use cases where the former (working for days to weeks offline) is useful and even critical - like certain field service use cases. Surviving glitches in network connectivity is useful for mainstream/consumer applications and users in general, especially those on mobile.
In my experience, it can affect the architecture and performance in a significant way. If a client can go offline for an arbitrary period of time, doing a delta sync when it comes back online is trickier, since we need to sync a specific range of operation history (and this needs to be adjusted for the specific scope/permissions that the client has access to). If you scale a system up to thousands or millions of clients, having them all do arbitrary range queries doesn't scale well. For this reason I've seen sync engines simply force a client to do a complete re-sync if it "falls behind" with deltas for too long (e.g. more than a day or so). Maintaining an operation log that is set up and indexed for querying arbitrary ranges of operations (for a specific scope of data) works well.
by ochiba
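A sketch of the operation-log shape described above: an append-only log that can be queried by scope and sequence range, with a forced full re-sync for clients that fall too far behind. The names and threshold are invented, not any particular sync engine's implementation:

    interface Op { seq: number; scope: string; table: string; rowId: string; payload: unknown; }

    const opLog: Op[] = [];      // in production: a table indexed on (scope, seq)
    const MAX_LAG_OPS = 100_000; // how far behind a client may fall before forcing a full re-sync

    function deltaSync(scope: string, lastSeenSeq: number):
        { kind: "delta"; ops: Op[] } | { kind: "full-resync" } {
      const latestSeq = opLog.length ? opLog[opLog.length - 1].seq : 0;
      if (latestSeq - lastSeenSeq > MAX_LAG_OPS) {
        return { kind: "full-resync" }; // cheaper than serving an enormous history range
      }
      // range query: only operations in this client's scope, after its checkpoint
      const ops = opLog.filter(op => op.scope === scope && op.seq > lastSeenSeq);
      return { kind: "delta", ops };
    }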
3/31/2025 at 9:48:43 PM
I'm wondering too. In general this seems to work only if there's a single offline client that accepts the writes.
With limitations to the data schema (e.g. distinct tables per client), it might work with multiple clients. However, those limitations would need to be documented, and I couldn't see anything about them in this blog post.
by Matthias247
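One way to read the "distinct tables per client" idea, purely as an illustration and assuming each client is the only writer of its own partition:

    // If each client writes exclusively to its own keyspace, merging replicas is
    // a plain union per client and cross-client conflicts cannot occur.
    type ClientId = string;
    type PerClientData = Map<ClientId, Map<string, unknown>>;

    function mergeReplicas(a: PerClientData, b: PerClientData): PerClientData {
      const out: PerClientData = new Map<ClientId, Map<string, unknown>>();
      for (const source of [a, b]) {
        for (const [client, rows] of source) {
          const target = out.get(client) ?? new Map<string, unknown>();
          // single-writer assumption: rows from the same client never conflict,
          // so a per-key overwrite is safe
          for (const [key, value] of rows) target.set(key, value);
          out.set(client, target);
        }
      }
      return out;
    }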