Tags: event-sourcing, audit-logging, change-data-capture

Is it wise to have a dedicated Event Store service


My project has a monolith application and a number of microservices around it. The microservices communicate via Pub/Sub (ServiceBus) and APIs.

Every service, including the monolith, publishes events for every change.

Currently I am trying to design a solution which stores the change history of several entities. I should be able to view all the changes and generate a report telling me what changed between two dates.

My shortlisted options are:

  1. Use the database's ability to track changes (e.g. Change Data Capture (CDC) in SQL Server) and generate reports from it. But enabling tracking in every service sounds complex and expensive.
  2. Use event sourcing - I am new to event sourcing and not sure whether it is overkill here. Updating an existing microservice is also expensive in my case, considering what I want to achieve.
  3. Create a new dedicated event-store microservice - all it does is log every event in the system to an event store and expose the aggregated event data via APIs. This service would store event data from all entities and all domains.

I already have a microservice which does audit logging this way, so this would probably replace it. See the diagram below.

Diagram

Does this make sense? What could go wrong? Looking forward to seeing some thoughts. Ta!


Solution

  • Vlad Khononov shares his thoughts on this question in his book, Learning Domain-Driven Design, in a chapter specifically about event sourcing:

    Why can't I just write logs to a text file and use it as an audit log?

    Writing data both to an operational database and to a logfile is an error-prone operation. In its essence, it's a transaction against two storage mechanisms: the database and the file. If the first one fails, the second one has to be rolled back. For example, if a database transaction fails, no one cares to delete the prior log messages. Hence, such logs are not consistent, but rather, eventually inconsistent.

    This answer hopefully gives you some food for thought. It is perhaps not directly applicable to your situation, since you already publish all state changes as events: provided you make sure to consume them somewhere (i.e. in your dedicated event store microservice), you can more or less guarantee that the event log stays (eventually) consistent.
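    One practical detail on the consuming side: a message bus such as Service Bus typically delivers at least once, so the event-store consumer should be idempotent or the audit log will contain duplicates. A minimal sketch, assuming each published event carries a unique `event_id` (an assumption on my part; dedup via an in-memory set stands in for a persistent mechanism):

    ```python
    class EventLogConsumer:
        """Consumer side of the dedicated event-store service.
        At-least-once delivery means the same message may arrive twice,
        so appends are deduplicated by event_id."""

        def __init__(self) -> None:
            self._seen: set = set()      # in production: a unique index / dedup table
            self.log: list = []

        def handle(self, message: dict) -> None:
            if message["event_id"] in self._seen:
                return  # duplicate delivery: safe to ignore
            self._seen.add(message["event_id"])
            self.log.append(message)
    ```

    With this in place, redelivered messages do not distort the change report.
    
    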

    The biggest downside of that approach (publishing events for all state changes, but without applying event sourcing as the storage pattern) is that it becomes easier to make a mistake, such as forgetting to publish an event on a state change when adding a new feature. This rings especially true if other developers work on your codebase now or in the future. Choosing event sourcing as the storage pattern for your application state makes the creation, storage, and publishing of events on every state change explicit, which in turn makes it harder for you (or others) to introduce bugs or inconsistencies later on by, for example, unintentionally failing to publish an event for some state change.
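    To illustrate why event sourcing makes "forgetting to publish" structurally impossible, here is a toy event-sourced aggregate (names and the dict-based event shape are my own illustrative assumptions): commands never mutate state directly; they record an event, and state only ever changes by applying events, so replaying the stored events reproduces the exact same state.

    ```python
    class Account:
        """Toy event-sourced aggregate: every state change *is* an event."""

        def __init__(self) -> None:
            self.balance = 0
            self.pending_events: list = []  # to be persisted and published

        def deposit(self, amount: int) -> None:
            # Commands don't touch state directly; they record an event...
            self._record({"type": "Deposited", "amount": amount})

        def _record(self, event: dict) -> None:
            self.pending_events.append(event)
            self._apply(event)  # ...and state changes only via _apply

        def _apply(self, event: dict) -> None:
            if event["type"] == "Deposited":
                self.balance += event["amount"]

        @classmethod
        def replay(cls, events: list) -> "Account":
            """Rebuild current state purely from the stored event history."""
            acc = cls()
            for e in events:
                acc._apply(e)
            return acc
    ```

    Because the events are the write model itself rather than a side effect, there is no code path that changes state without producing the corresponding event.
    
    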