As more developers apply Command/Query Responsibility Segregation to their projects, many find that it INCREASES the complexity of their systems rather than decreasing it, often without any performance benefit either. Join Udi for a look at where you should, and more importantly SHOULDN'T, be using CQRS in your projects.
Derek Comartin collected questions about Event Sourcing and CQRS and put a selection of them to Greg Young, who answers on frameworks, legacy systems, use cases, projections, eventual consistency, and versioning.
As we all know, naming things is one of the really hard problems in computer science. Maybe this is the reason the term “event” is used for what feels like at least twenty different concepts. It’s all too easy to get confused when people talk about Event Sourcing, Event Streaming, Event-Carried State Transfer, Notification Events, Domain Events, Fat Events, Event Storming, and possibly still other kinds of events. Serverless functions are triggered by events, Apache Kafka users like to speak of events, and so do users of the Axon framework: but do they all mean the same thing? And above all, why should I even bother with an event-driven architecture, and what are the benefits? Time for a proper clean-up. Let's start with a clear and bounded definition of events, and from there explore the patterns of using events in micro- and macro-architecture, along with their benefits and challenges. After the talk, participants will know what questions to ask if someone suggests going event-driven, and will be able to assess the applicability of different approaches to their own architectural tasks.
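To make one of those distinctions concrete before the talk, here is a minimal, hypothetical sketch contrasting a thin notification event with a fat, event-carried-state-transfer event; all type and field names are invented for illustration and imply no particular framework.

```java
// EventFlavours.java — a minimal, hypothetical sketch of two event styles
// named above; every type and field name here is made up for illustration.
public class EventFlavours {

    // Notification event: a thin signal. Consumers learn *that* something
    // happened and must call back to the owning service for the details.
    record CustomerMoved(String customerId) {}

    // Event-carried state transfer: a "fat" event carrying the new state,
    // so consumers can update their own local copy without a callback.
    record CustomerMovedWithAddress(String customerId,
                                    String street,
                                    String city,
                                    String postalCode) {}

    public static void main(String[] args) {
        var thin = new CustomerMoved("c-42");
        var fat  = new CustomerMovedWithAddress("c-42", "1 Main St", "Springfield", "12345");
        System.out.println(thin);
        System.out.println(fat);
    }
}
```

The trade-off the talk alludes to is visible even here: the thin event keeps producers decoupled from consumer data needs but forces extra queries, while the fat event saves round trips at the cost of duplicating state across services.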
In this architecture lesson, Mark Richards describes the CQRS pattern (Command Query Responsibility Segregation) and shows how it can be applied in a microservices architecture.
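As a rough, framework-free sketch of the segregation the lesson covers (all class names here are hypothetical and not taken from the lesson), the write side handles commands against one model while reads are served from a separate, denormalised model:

```java
// CqrsSketch.java — a minimal, hypothetical sketch of the command/query split;
// every name is invented and no particular framework or store is implied.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class CqrsSketch {

    // Command side: accepts writes and enforces invariants.
    record PlaceOrder(String orderId, String productId, int quantity) {}

    static class OrderCommandHandler {
        private final Map<String, PlaceOrder> writeStore = new ConcurrentHashMap<>();
        private final OrderQueryModel queryModel;

        OrderCommandHandler(OrderQueryModel queryModel) { this.queryModel = queryModel; }

        void handle(PlaceOrder cmd) {
            if (cmd.quantity() <= 0) throw new IllegalArgumentException("quantity must be positive");
            writeStore.put(cmd.orderId(), cmd);
            // In a microservices setting this update would typically travel as an
            // event over a broker; a direct call keeps the sketch small.
            queryModel.project(cmd);
        }
    }

    // Query side: a denormalised read model, updated from the command side.
    record OrderSummary(String orderId, String productId, int quantity) {}

    static class OrderQueryModel {
        private final Map<String, OrderSummary> readStore = new ConcurrentHashMap<>();

        void project(PlaceOrder cmd) {
            readStore.put(cmd.orderId(),
                    new OrderSummary(cmd.orderId(), cmd.productId(), cmd.quantity()));
        }

        Optional<OrderSummary> byId(String orderId) {
            return Optional.ofNullable(readStore.get(orderId));
        }
    }

    public static void main(String[] args) {
        var queries = new OrderQueryModel();
        var commands = new OrderCommandHandler(queries);
        commands.handle(new PlaceOrder("o-1", "p-7", 3));
        System.out.println(queries.byId("o-1"));
    }
}
```

In a real microservices deployment the two sides would usually live in separate services with separate stores, which is where the eventual-consistency and scaling consequences discussed in the lesson come from.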