Around 60 enthusiastic members of the community had a great time with beer, pizza, and an interesting presentation. So, this Event Sourcing stuff... what is it all about?
DDD, CQRS, ES and other abbreviations
Domain Driven Design (DDD) is an approach to software design that has been steadily growing in popularity for years. Along with DDD other patterns like Command Query Responsibility Segregation (CQRS) and Event Sourcing (ES) have become buzzwords that interest many developers. If applied correctly, DDD, CQRS, and ES are patterns that can add great value.
However, each comes with its own difficulties. This meetup's presentation focused on the challenges that come with event sourcing. Michiel Overeem and Marten Spoor, both architects at AFAS, presented their insights.
The wonders of Event Sourcing
In Event Sourcing, all changes to the state of the system are stored individually. Each change is captured in an event object and stored in the order in which it took place. Let’s use a banking system as an example. When a new bank account is created, a BankAccountCreated event will be stored. This event will have attributes describing this change, such as the ID of the new account and the name of the owner.
When a deposit is made, a DepositPerformed event with an account ID and the deposited amount will be stored. These events are stored in event streams, and these streams are part of the event store. The store's schema describes the structure of the streams, the events, and their attributes. The stored events are used to restore objects to their current state.
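As a rough sketch of this replay mechanism (the class and attribute names here are illustrative, not taken from the presentation), rebuilding an account's current state from its event stream might look like:

```python
from dataclasses import dataclass

# Illustrative event types; the attribute names are assumptions.
@dataclass
class BankAccountCreated:
    account_id: str
    owner: str

@dataclass
class DepositPerformed:
    account_id: str
    amount: int

@dataclass
class BankAccount:
    account_id: str = ""
    owner: str = ""
    balance: int = 0

    def apply(self, event):
        # Each event mutates the state; replaying the full stream
        # in its original order reinstates the current state.
        if isinstance(event, BankAccountCreated):
            self.account_id = event.account_id
            self.owner = event.owner
        elif isinstance(event, DepositPerformed):
            self.balance += event.amount

# An event stream is simply the ordered list of stored events.
stream = [
    BankAccountCreated("acc-1", "Alice"),
    DepositPerformed("acc-1", 100),
    DepositPerformed("acc-1", 50),
]

account = BankAccount()
for event in stream:
    account.apply(event)
# account.balance is now 150
```

Note that the current state is never stored directly: it exists only as the result of folding the events, which is what makes schema changes to those events so consequential.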
I feel a change comin' on...
Everything changes: requirements change, new insight is gained, and bugs are discovered. As the software evolves, the schema will have to be updated too. It may be necessary to add new attributes to existing events. Existing attributes may have to be split or removed. The same goes for events and streams: they could be removed, split or otherwise changed to support new requirements.
As the events evolve, the structure of the data that is stored changes too. A DepositPerformed event stored yesterday could have different attributes than a DepositPerformed event stored today. The software that reads these events to instantiate a bank account object to its current state is required to process both events correctly. As the event store schema evolves, the software responsible for reading the data has to evolve with it.
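To make the problem concrete, suppose a hypothetical v2 of DepositPerformed adds a currency attribute. A reader deserializing raw stored events must then cope with both shapes (the attribute names and the fallback default are assumptions for illustration):

```python
def read_deposit(raw: dict) -> dict:
    # A v1 event has no "currency" attribute; a v2 event does.
    # The reader defaults old events to the currency that was
    # implicit when they were written (assumed to be EUR here).
    return {
        "account_id": raw["account_id"],
        "amount": raw["amount"],
        "currency": raw.get("currency", "EUR"),
    }

old_event = {"account_id": "acc-1", "amount": 100}  # stored yesterday (v1)
new_event = {"account_id": "acc-1", "amount": 100, "currency": "USD"}  # stored today (v2)

assert read_deposit(old_event)["currency"] == "EUR"
assert read_deposit(new_event)["currency"] == "USD"
```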
In search of the best technique
Michiel and Marten presented several techniques that can be used to implement the changes in the schema. The first three techniques do not change the stored data.
- Multiple versions: for each change in the schema a new version is introduced. When the DepositPerformed event has to change, a DepositPerformed_v2 event is introduced.
- Weak schema: the schema is weakened by marking attributes as optional.
- Upcasters: the code reading the events from the store can transform an older DepositPerformed_v1 event to a DepositPerformed_v2.
- Lazy transformation: if an event based on an older schema is read from the store, it is transformed to the current version. This transformed event is then stored.
- In place transformation: the events that need to be transformed are located in the store and fixed in place.
- Copy and transform: all events are copied to a new store and transformed if needed.
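As a rough sketch of the last technique, copy and transform (the store layout and the upcasting rule are assumptions, not the presenters' implementation), every event is streamed out of the old store, upgraded where needed, and appended to a fresh store:

```python
def upcast(event: dict) -> dict:
    # Upgrade a v1 DepositPerformed to v2 by adding the attribute
    # introduced in the new schema (the default is an assumption).
    if event["type"] == "DepositPerformed" and event.get("version", 1) == 1:
        return {**event, "version": 2, "currency": "EUR"}
    return event  # already at the current version

old_store = [
    {"type": "BankAccountCreated", "version": 1, "account_id": "acc-1", "owner": "Alice"},
    {"type": "DepositPerformed", "version": 1, "account_id": "acc-1", "amount": 100},
    {"type": "DepositPerformed", "version": 2, "account_id": "acc-1", "amount": 50, "currency": "USD"},
]

# Copy every event to a new store, transforming where needed;
# the old store stays untouched until the switch-over.
new_store = [upcast(e) for e in old_store]
```

The same `upcast` function could equally back the upcasters or lazy transformation techniques; what differs is when it runs and whether its output is written back.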
| Technique | Operation completeness | Maintainability | Performance efficiency | Reliability |
|---|---|---|---|---|
| In place transformation | + | + | +/- | - |
| Copy and transform | + | + | - | + |
Finally, several strategies for deploying the implemented techniques were presented, including the big flip, blue-green deployment, and the rolling upgrade. It’s important to note that which techniques and strategies will work best depends on the context of the system you’re working on.
Time for a beer
As Michiel and Marten concluded their presentation, there was some time for a Q&A. Of course, this was followed by some more beer and discussion of our new insights and other tech stuff. I had a great night. See you next time!