events, event-handling, cqrs, event-sourcing, eda

Where is the "current state" stored in event sourcing?


I understand CQRS, but I'm having problems with a part of event sourcing. Everyone says "You don't store the aggregate's current state, you store the sequence of events that were applied to that aggregate". Fine by me. But in order to apply a command and produce an event, you need the current aggregate state. Take this table as an example (it is a sequence of API calls). Let's say I have this business rule: "A post can be updated at most twice".

[table: requests]

In this example, the third attempt to edit the post should fail, and no event should be produced or stored. Assume we are not using the approach "fetch all events from the event store, build the aggregate, apply the command, produce the event, store the event". In that case, where is the current aggregate's state stored? And if it is stored somewhere, doesn't that go against "you don't store the aggregate.."?
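
For reference, this is roughly the approach I am asking you to assume we are not using, sketched in TypeScript with a made-up store interface, just to be explicit about it: rebuild the state by folding the history, check the rule, then append.

    // Hypothetical event store API -- the names are illustrative, not a real library.
    type PostEvent =
      | { type: "PostCreated"; postId: string; title: string }
      | { type: "PostUpdated"; postId: string; title: string };

    interface EventStore {
      loadEvents(streamId: string): Promise<PostEvent[]>;
      appendEvent(streamId: string, event: PostEvent): Promise<void>;
    }

    // "Current state" is derived by folding the history, never stored directly.
    interface PostState { exists: boolean; updateCount: number }

    function replay(events: PostEvent[]): PostState {
      return events.reduce<PostState>(
        (state, e) =>
          e.type === "PostCreated"
            ? { exists: true, updateCount: 0 }
            : { ...state, updateCount: state.updateCount + 1 },
        { exists: false, updateCount: 0 }
      );
    }

    async function updatePost(store: EventStore, postId: string, title: string) {
      const history = await store.loadEvents(`post-${postId}`);  // fetch all events
      const state = replay(history);                             // build the aggregate
      if (!state.exists) throw new Error("post does not exist");
      if (state.updateCount >= 2) throw new Error("a post can be updated at most twice");
      // apply the command, produce the event, store the event
      await store.appendEvent(`post-${postId}`, { type: "PostUpdated", postId, title });
    }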

I have been through several articles, talks, and examples on the internet, but I always come to the same conclusion. I've seen people taking advantage of the projections in order to check domain invariants/business rules, but projections are eventually consistent and could lead to race conditions.


Solution

  • I understand CQRS, but I'm having problems with a part of event sourcing.

    Not your fault; the literature sucks.


    I've seen people taking advantage of the projections in order to check domain invariants/business rules, but projections are eventually consistent and could lead to race conditions.

    You don't get race conditions because people are using a (logical) lock to prevent write collisions on the history. Stripping away distracting details, the happy path would look something like:

    acquire lock
    read previous events
    compute new events
    write new events
    release lock
    
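    For concreteness, a minimal sketch of that locked happy path, enforcing the "a post can be updated at most twice" rule; the TypeScript store and lock interfaces here are hypothetical stand-ins, not any particular product's API.

    // Hypothetical interfaces -- illustrative only, not a real event store client.
    type PostEvent = { type: "PostCreated" } | { type: "PostUpdated"; title: string };

    interface Lock { release(): Promise<void> }

    interface EventStore {
      acquireLock(streamId: string): Promise<Lock>;
      read(streamId: string): Promise<PostEvent[]>;
      append(streamId: string, events: PostEvent[]): Promise<void>;
    }

    async function handleUpdate(store: EventStore, streamId: string, title: string) {
      const lock = await store.acquireLock(streamId);    // acquire lock
      try {
        const history = await store.read(streamId);      // read previous events
        const updateCount = history.filter(e => e.type === "PostUpdated").length;
        if (updateCount >= 2) throw new Error("post can be updated at most twice");
        const newEvents: PostEvent[] = [{ type: "PostUpdated", title }];  // compute new events
        await store.append(streamId, newEvents);          // write new events
      } finally {
        await lock.release();                             // release lock
      }
    }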

    In practice, it's common to instead use conditional writes, using stream metadata to ensure that nothing changed while the domain logic was running:

    read previous events
    compute new events
    acquire lock
    write-if new events
    release lock
    

    Think "compare and swap".

    Usually this is done via general-purpose metadata; we only perform the write if the current version of the history matches the "expected version" of the history (e.g., the history still has the same number of events in it that we started with).
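
    As a rough sketch of that conditional-write variant: the appendIf call and its expectedVersion parameter below are made-up names standing in for whatever compare-and-swap primitive a given event store exposes.

    // Hypothetical conditional-append API -- illustrative, not a specific product.
    type PostEvent = { type: "PostCreated" } | { type: "PostUpdated"; title: string };

    interface StreamSlice { events: PostEvent[]; version: number }

    interface EventStore {
      read(streamId: string): Promise<StreamSlice>;
      // Appends only if the stream is still at expectedVersion; otherwise reports a conflict.
      appendIf(streamId: string, expectedVersion: number, events: PostEvent[]): Promise<"ok" | "conflict">;
    }

    async function handleUpdate(store: EventStore, streamId: string, title: string) {
      const { events, version } = await store.read(streamId);   // read previous events
      const updateCount = events.filter(e => e.type === "PostUpdated").length;
      if (updateCount >= 2) throw new Error("post can be updated at most twice");

      const newEvents: PostEvent[] = [{ type: "PostUpdated", title }];  // compute new events

      // write-if new events: the append succeeds only when the stream is still at `version`.
      const result = await store.appendIf(streamId, version, newEvents);
      if (result === "conflict") {
        // Someone else appended while our domain logic was running;
        // the usual move is to re-read, re-run the logic, and try again.
        throw new Error("concurrent modification, please retry");
      }
    }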