Seems to be another great way to build local-first applications, which makes me think of CRDTs and raises a (possibly silly) question: what's the relationship between Durable Streams and CRDTs? Are they replacements for one another, or can they work well together?
They primarily serve different purposes, but they could complement each other.
Durable Streams are a lightweight network protocol on top of standard HTTP. When you are building a synchronisation layer for, say, a local-first app, you not only need to exchange data over some lower-level protocol (e.g. HTTP / SSE / WS), you also have to define a higher-level protocol for how the client and server communicate - e.g. how to resume fetching once the client reconnects, based on the last data the client received (~offset). Since reconnects and offset tracking are handled automatically by the Durable Stream, you can build your domain logic directly on top of it.
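For intuition, here's a minimal sketch of what offset-based resumption looks like from the client side. The endpoint shape, the `offset` query parameter, and the newline-delimited JSON framing are all assumptions for illustration, not the actual Durable Streams wire format (a real client would follow the spec's framing and offset semantics):

```typescript
// Sketch of a resumable stream consumer. The wire format here is
// hypothetical: we assume newline-delimited JSON events, each carrying
// the offset a client would resume from.
type StreamEvent = { offset: string; data: unknown };

async function consume(url: string, onEvent: (e: StreamEvent) => void) {
  let lastOffset: string | null = null; // persist this to survive restarts

  while (true) {
    try {
      // Resume from the last offset we saw; the server replays from there.
      const res = await fetch(
        lastOffset ? `${url}?offset=${encodeURIComponent(lastOffset)}` : url,
      );
      if (!res.body) throw new Error("no response body");

      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      let buffer = "";

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });

        let nl: number;
        while ((nl = buffer.indexOf("\n")) >= 0) {
          const line = buffer.slice(0, nl);
          buffer = buffer.slice(nl + 1);
          if (!line.trim()) continue;
          const event: StreamEvent = JSON.parse(line);
          onEvent(event);
          lastOffset = event.offset; // advance only after handling
        }
      }
    } catch {
      // On network failure, wait briefly and reconnect from lastOffset.
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
```

The key property is that `lastOffset` is the only state the client needs to persist in order to resume exactly where it left off.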
CRDTs are primarily meant to resolve data conflicts, usually client-side, based on a defined conflict-resolution strategy (e.g. last-writer-wins). Some CRDT libraries, like Automerge, Loro, or Yjs, also implement a networking layer to exchange data between nodes (which could even be P2P), meaning they already have a built-in mechanism for reconnection and offsets (~"send me data since X"). However, nobody forces you to use their networking layer, and with Durable Streams you would have a good starting point to build your own.
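As a rough sketch of that separation, here's how a Yjs document could ride on top of a stream. `Y.Doc`, the `update` event, and `Y.applyUpdate` are real Yjs APIs; `appendToStream` and `subscribeToStream` are hypothetical stand-ins for whatever Durable Streams client you build:

```typescript
import * as Y from "yjs";

// Hypothetical transport functions - placeholders for a Durable Streams
// client, not a real API.
declare function appendToStream(bytes: Uint8Array): Promise<void>;
declare function subscribeToStream(onMessage: (bytes: Uint8Array) => void): void;

const doc = new Y.Doc();

// Outbound: every local change produces a binary update; publish it.
doc.on("update", (update: Uint8Array, origin: unknown) => {
  if (origin !== "remote") void appendToStream(update);
});

// Inbound: apply updates from the stream. CRDT merge semantics make this
// safe even if updates arrive duplicated or out of order after a resume.
subscribeToStream((bytes) => {
  Y.applyUpdate(doc, bytes, "remote");
});

// Domain usage: a shared text type that now syncs through the stream.
const text = doc.getText("note");
text.insert(0, "hello");
```

Because CRDT merges are idempotent and commutative, replaying a range of updates after a reconnect is harmless, which is exactly why a dumb-but-durable transport pairs well with them.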
Great answer! I was always confused about how CRDT data is transferred. Like you said, existing implementations often come with their own in-house networking solutions. Now it's totally clear: since CRDTs are only about the data model, it's no wonder their transfer methods differ. That makes Durable Streams a very good companion for CRDTs - the boundaries are clear, and they complement each other perfectly.
I also feel that I could hand Durable Streams' protocol spec to a coding agent and have it produce an implementation well suited to my current project (say, a Go repo). The simple yet sophisticated spec is more valuable than a bunch of SDKs.