architecture · domain-driven-design · event-driven

Data Liberation in pure DDD


I am reading the book "Building Event-Driven Microservices" by Adam Bellemare and there is a part that I do not understand: "In the perfect world, all state would be created, managed, maintained, and restored from the single source of truth of the event streams. Any shared state should be published to the event broker first and materialized back to any services that need to materialize the state, including the service that produced the data in the first place." (page 55)

If I have all the data I need why not update the relevant domain tables and then append to an outbox table in the same transaction? Seems a bit cumbersome to have to consume data back from an event stream.
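To make the idea concrete, something like the following rough sketch is what I have in mind (Python with SQLite purely for illustration; the orders/outbox tables and the OrderPlaced event are placeholders):

    import json
    import sqlite3
    import uuid

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
    conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, event_type TEXT, payload TEXT)")

    def place_order(order_id: str) -> None:
        # The domain write and the outbox append commit atomically in one transaction;
        # a separate relay process later ships outbox rows to the event broker.
        with conn:
            conn.execute("INSERT INTO orders (id, status) VALUES (?, ?)", (order_id, "PLACED"))
            conn.execute(
                "INSERT INTO outbox (id, event_type, payload) VALUES (?, ?, ?)",
                (str(uuid.uuid4()), "OrderPlaced", json.dumps({"order_id": order_id})),
            )

    place_order("order-42")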

I understand the principle of EDM that the stream should be the single source of truth, but I am wondering if I am misunderstanding something here and would appreciate an explanation/interpretation of the above snippet from the book.

I googled, but did not find anything.


Solution

  • Well, nobody stops you from doing that. But the outbox pattern solves a different problem.

    Of course, if you have a single bounded context, you might think that the outbox table is the same thing as the event stream. It actually is not, because the outbox does not necessarily span all domain events, only the integration events. Remember that the goal and the implementation of each one are different.
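
    To make that distinction concrete, here is a rough sketch (not from the book; the Order aggregate and the event names are invented for illustration). The aggregate records all of its domain events, but only the ones other bounded contexts care about are translated into integration events and written to the outbox:

        from dataclasses import dataclass, field

        @dataclass
        class DomainEvent:
            name: str
            data: dict

        @dataclass
        class Order:
            id: str
            # The aggregate records every domain event raised inside the bounded context.
            events: list = field(default_factory=list)

            def place(self) -> None:
                self.events.append(DomainEvent("OrderPlaced", {"order_id": self.id}))
                self.events.append(DomainEvent("StockReserved", {"order_id": self.id}))

        # Only events that other contexts care about become integration events in the outbox.
        PUBLISHED = {"OrderPlaced"}

        def to_outbox(order: Order) -> list:
            return [e for e in order.events if e.name in PUBLISHED]

        order = Order("order-42")
        order.place()
        print([e.name for e in to_outbox(order)])  # ['OrderPlaced']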

    Update

    My mistake, I did not mention that event sourcing is different from EDA. I'm going to clear that up.

    To be honest, I haven't read the mentioned book, though it seems interesting. I assume it treats the event stream as a way to share the domain state between all bounded contexts, i.e. something like event sourcing. With this assumption in mind, I'm going to answer your questions from the comment section:

    Why would I consume messages again from the stream to materialize my own state?

    This question suggests that the book is describing event sourcing, which is where the confusion comes from. In event sourcing there is a single source of truth, and it is the event stream. The latest state of the domain is obtained by replaying every event against the domain.
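
    As a toy sketch of that idea (an invented account-style aggregate, not an example from the book), the current state is just the result of folding every event in the stream:

        from dataclasses import dataclass

        @dataclass
        class Event:
            type: str
            amount: int = 0

        def apply_event(balance: int, event: Event) -> int:
            # Each event moves the state forward; the current state is never stored directly.
            if event.type == "Deposited":
                return balance + event.amount
            if event.type == "Withdrawn":
                return balance - event.amount
            return balance

        # The stream is the single source of truth; replaying it rebuilds the state.
        stream = [Event("Deposited", 100), Event("Withdrawn", 30), Event("Deposited", 5)]
        balance = 0
        for event in stream:
            balance = apply_event(balance, event)
        print(balance)  # 75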

    There are reasons to use event sourcing: we care about every step of modification to our domain, and the event stream is the shared timeline of all events. That makes it well suited to distributed systems: services are not coupled to each other, and no choreographed exchange of events between services is needed.

    Aren't only integration events produced to the event stream? Domain events are internal to the bounded context why would I want to produce those to the event stream?

    If you are talking about event sourcing, the answer is no: all events on the domain must be persisted to the stream. But if you are talking about EDA or event-driven microservices, then it can be integration events only.