diff --git a/docs/.vitepress/config.mts b/docs/.vitepress/config.mts index 7adc481117..58f791f0ea 100644 --- a/docs/.vitepress/config.mts +++ b/docs/.vitepress/config.mts @@ -72,6 +72,22 @@ const config: UserConfig = { sidebar: { '/': [ + { + text: 'Tutorial', + collapsed: true, + items: [ + { text: 'Building a Freight & Delivery System', link: '/tutorials/introduction' }, + { text: 'Getting Started', link: '/tutorials/getting-started' }, + { text: 'Modeling documents', link: '/tutorials/modeling-documents' }, + { text: 'Evolve to event sourcing', link: '/tutorials/evolve-to-event-sourcing' }, + { text: 'Event-Sourced Aggregate', link: '/tutorials/event-sourced-aggregate' }, + { text: 'Read model projections', link: '/tutorials/read-model-projections' }, + { text: 'Cross-Aggregate Views', link: '/tutorials/cross-aggregate-views' }, + { text: 'Distributed systems with Wolverine', link: '/tutorials/wolverine-integration' }, + { text: 'Advanced Considerations', link: '/tutorials/advanced-considerations' }, + { text: 'Conclusion', link: '/tutorials/conclusion' } + ] + }, { text: 'Introduction', collapsed: true, diff --git a/docs/cSpell.json b/docs/cSpell.json index 9c8c5fd6cf..66c75b85ab 100644 --- a/docs/cSpell.json +++ b/docs/cSpell.json @@ -40,7 +40,8 @@ "Replayability", "revisioned", "Revisioned", - "Vogen" + "Vogen", + "upserts" ], "ignoreWords": [ "JSONB", diff --git a/docs/tutorials/advanced-considerations.md b/docs/tutorials/advanced-considerations.md new file mode 100644 index 0000000000..07900e99a6 --- /dev/null +++ b/docs/tutorials/advanced-considerations.md @@ -0,0 +1,78 @@ +# Part 7: Advanced Considerations – Optimistic Concurrency and Deployment + +In this final section, we address how to maintain data consistency with optimistic concurrency and how to evolve your projections safely using blue/green deployment techniques. 
The focus is on using Marten’s features (like `FetchLatest()` and `ProjectionVersion`) and Wolverine’s new capabilities to achieve zero downtime even as your freight delivery system grows in complexity. + +## Optimistic Concurrency with Marten + +Optimistic concurrency control prevents conflicts between concurrent operations by detecting collisions and aborting the transaction rather than overwriting data. In an event-sourced system like our freight and delivery application, multiple services or users might attempt to update the same aggregate (for example, the same `FreightShipment` stream) at the same time. Marten helps manage this by **optimistically** assuming each transaction will succeed, but it will **fail fast** if it detects another session has already modified the stream. + +Marten’s event store uses a versioning mechanism under the covers (each event stream has a current version number). To leverage this, Marten provides the `IDocumentSession.Events.FetchLatest<T>()` API for event streams. This method fetches the current state **and** version of the aggregate in one call. For instance, in a command handler updating a shipment: + +```csharp +// Fetch current state of the FreightShipment stream with concurrency check +var stream = await session.Events.FetchLatest<FreightShipment>(shipmentId); + +if (stream.Aggregate == null) throw new InvalidOperationException("Shipment not found"); + +// ... perform domain logic, possibly append new events ... +stream.AppendOne(new FreightDispatched(...)); + +await session.SaveChangesAsync(); // will throw ConcurrencyException if conflict +``` + +In the above example, `FetchLatest<FreightShipment>(shipmentId)` retrieves the latest state of the shipment aggregate (building it from events if needed) and tags the session with the current stream version. When you call `SaveChangesAsync()`, Marten will automatically check that no other updates have occurred on that stream. If another process successfully wrote to the same `FreightShipment` after our fetch (e.g. 
another dispatch command on the same shipment), Marten will throw a `ConcurrencyException` on save, aborting the transaction. This ensures you never accidentally persist changes based on stale data. In practice, you would catch this exception and handle it (for example, by retrying the operation or returning an error to the caller) as appropriate for your workflow. + +Why use `FetchLatest`? As explained in [the docs here](/events/projections/read-aggregates.html#fetchlatest), we strongly recommend using this pattern for any command that appends events to an existing stream ([Appending Events](/events/appending.html)). By loading the aggregate’s current state and version in one go, you both simplify your command logic and gain built-in concurrency protection. Another benefit is that `FetchLatest` abstracts whether the aggregate is computed on the fly (“live” aggregation) or read from a persisted projection (“inline” or async) – your code doesn’t need to care, it just gets the latest state. The trade-off with optimistic concurrency is that a conflict causes a rollback of the transaction; however, this is usually acceptable in a domain like freight shipping where, for example, two dispatch updates to the same shipment should not both succeed. It is far better to catch the conflict and handle it than to have undetected double updates. + +> **Note:** Marten also offers a more stringent **pessimistic concurrency** option via `FetchForExclusiveWriting()`, which places a database lock on the stream while processing. This guarantees exclusive access but can increase latency and risk deadlocks. In most cases, the optimistic approach is sufficient and more scalable. Use exclusive writes only if you truly need a single-writer guarantee and are aware of the performance implications. + +## Evolving Your Schema and Blue/Green Deployments + +Over time, you will likely need to evolve your projections or aggregate schemas. 
Perhaps you need to fix bugs, accommodate new business requirements, or add features. The challenge is doing this **without downtime**, especially in a running system where projections are continually updated by incoming events. Marten, with help from Wolverine, provides a strategy to deploy new projection versions side-by-side with old ones, commonly known as a **blue/green deployment**. + +Imagine our `DailyShipmentsProjection` (which aggregates daily freight shipment data for reporting) needs a schema change. Say we want to add a new calculated field or change how shipments are categorized. Rebuilding this projection from scratch will take time, and we don’t want to take the system offline. The solution is to run a new version of the projection in parallel with the old one until the new version is fully caught up and ready to replace the old. + +### Side-by-Side Projections with `ProjectionVersion` + +Marten allows you to define a new projection as a *versioned* upgrade of an existing one by using the `ProjectionVersion` property on the projection definition. By incrementing the version number, you signal to Marten that this projection should be treated as a separate entity (with its own underlying storage). For example, if `DailyShipmentsProjection` was version 1, we might create an updated projection class and set `ProjectionVersion = 2`. Marten will then write the v2 projection data to new tables, independent of the v1 tables. + +This versioning mechanism is the “magic sauce” that enables running two generations of a projection side by side. The old (v1) projection continues to operate on the existing data, while the new (v2) projection starts fresh, processing all historical events as if it were building from scratch. In our freight system, that means the new `DailyShipmentsProjection` will begin consuming the event stream for shipments (e.g. `FreightShipment` events) and populating its new tables from day one’s data forward. 
During this time, your application is still serving reads from the old projection (v1) so there’s no interruption in service. Marten effectively treats the two versions as distinct projections running in parallel. + +**Important:** When using this approach, the new projection **must run in the async lifecycle** (even if the old one was inline or live). In a blue/green deployment scenario, you typically deploy the new version of the service with the projection version bumped, and configure it as an asynchronous projection. This way, the new projection will build in the background without blocking incoming commands. Marten’s `FetchLatest` will still ensure that any command processing on an aggregate (e.g. a `FreightShipment` update) can “fast-forward” the relevant part of the projection on the fly, so even strongly consistent write-side operations continue to work with the new projection version. The system might run a bit slower while the async projection catches up, but it remains available – slow is better than down. + +### Isolating Projections with Wolverine + +Running two versions of a projection concurrently requires careful coordination in a multi-node environment. You want the **“blue”** instances of your application (running the old projection) and the **“green”** instances (running the new projection) to each process *their own* projection version, without stepping on each other’s toes. This is where Wolverine’s capabilities come into play. + +When you integrate Marten with Wolverine and enable Wolverine’s *projection distribution* feature, Wolverine can control which nodes run which projections. Specifically, Wolverine supports restricting projections (or event subscribers) to nodes that declare a certain capability ([Projection/Subscription Distribution | Wolverine](https://wolverinefx.net/guide/durability/marten/distribution.html)). 
In practice, this means you could deploy your new version of the application and tag those new nodes with a capability like `"DailyShipmentsV2"` (while old nodes either lack it or perhaps have `"DailyShipmentsV1"`). Wolverine’s runtime will ensure that the **DailyShipmentsProjection v2** runs only on the nodes that have the V2 capability, and likewise the v1 projection continues to run on the older nodes. This isolation is crucial: it prevents, say, an old node from trying to run the new projection (which it doesn’t know about) or a new node accidentally double-processing the old projection. Essentially, Wolverine helps orchestrate a clean cut between blue and green workloads. + +Enabling this is straightforward. When configuring Marten with Wolverine, you would call `IntegrateWithWolverine(...)` and set `UseWolverineManagedEventSubscriptionDistribution = true` (as shown in earlier chapters). Then, you assign capabilities to your application nodes (via configuration or environment variables). For example, you might configure the new deployment to advertise a `"v2"` capability. Marten’s projection registration for the new version can be made conditional or simply present only in the new code. Once running, Wolverine’s leader node will distribute projection agents such that every projection-version combination is active on exactly one node in the cluster ([Projection/Subscription Distribution | Wolverine](https://wolverinefx.net/guide/durability/marten/distribution.html)). The “blue” cluster continues to process v1, and the “green” cluster processes v2. + +### Blue/Green Deployment Step-by-Step + +Combining Marten’s projection versioning with Wolverine’s distribution gives you a robust zero-downtime deployment strategy. At a high level, the process to evolve a projection with no downtime looks like this: + +1. 
**Bump the projection version in code:** Update your projection class (e.g. `DailyShipmentsProjection`) to a new `ProjectionVersion`. This indicates a new schema/logic version that will use a separate set of tables. +2. **Deploy the new version (Green) alongside the old (Blue):** Start up one or more new application instances running the updated code. At this point, both the old and new versions of the service are running. The old nodes are still serving users with version 1 of the projection, while the new nodes begin operating with version 2. If using Wolverine, ensure the new nodes have the appropriate capability so they exclusively run the v2 projection ([Projection/Subscription Distribution | Wolverine](https://wolverinefx.net/guide/durability/marten/distribution.html)). +3. **Run projections in parallel:** The v2 projection starts in async mode and begins rebuilding its data from the event store. Both versions consume incoming events: blue nodes continue to update the v1 projection, and green nodes update v2. The event stream (e.g. all `FreightShipment` events) is essentially being forked into two projection outputs. Because of the version separation, there’s no conflict – v1 writes to the old tables, v2 writes to the new tables. +4. **Monitor and catch up:** Allow the new projection to catch up to near real-time. Depending on the volume of past events, this could take some time. During this phase, keep most user read traffic directed to the blue nodes (since they have the up-to-date v1 projection). The system remains fully operational; the only overhead is the background work on green nodes to build the new projection. Marten and Wolverine ensure that the new projection stays isolated while it lags behind. +5. **Cut over to the new version:** Once the v2 projection is up-to-date (or close enough), you can switch the user traffic to the green nodes. For example, update your load balancer or service discovery to route requests to the new deployment. 
Now the reads are coming from `DailyShipmentsProjection` v2. Because v2 has been fully built, users should see the new data (including any backfilled changes). +6. **Retire old nodes and clean up:** With traffic on the new version, you can shut down the remaining blue nodes. The old projection (v1) will stop receiving events. At this point, it’s safe to decommission the old projection’s resources. Marten does not automatically drop the old tables, so you should remove or archive them via a migration or manual SQL after confirming the new projection is stable. The system is now running entirely on the updated projection schema. + +```mermaid +flowchart TD + subgraph OldSystem["Blue (Old Version)"] + A[FreightShipment events] -->|v1 projection| P1[DailyShipmentsProjection V1] + end + + subgraph NewSystem["Green (New Version)"] + A -->|v2 projection| P2[DailyShipmentsProjection V2 - async rebuild] + end + + P2 -->|catch up| P2Done[Projection V2 up-to-date] + P1 -. serving reads .-> Users((Users)) + P2Done -. switch reads .-> Users + P1 -->|decommission| X[Old projection retired] +``` + +In summary, Marten’s `ProjectionVersion` feature and Wolverine’s projection distribution **work in tandem** to support zero-downtime deployments for projection changes. Use `ProjectionVersion` when you need to introduce breaking changes to a projection’s shape or data – it gives you a clean slate in the database for the new logic. Use Wolverine’s capabilities to **isolate the new projection to specific nodes**, ensuring old and new versions don’t interfere. By using both strategies together, your freight system can deploy updates (like a new `DailyShipmentsProjection` schema) with minimal disruption: the new projection back-fills data while the old one handles live traffic, and a smooth cutover ensures continuity. 
This approach, as described in [Jeremy Miller’s 2025 write-up on zero-downtime projections](https://jeremydmiller.com/2025/03/26/projections-consistency-models-and-zero-downtime-deployments-with-the-critter-stack/), lets you evolve your event-driven system confidently without ever putting up a “service unavailable” sign. diff --git a/docs/tutorials/conclusion.md b/docs/tutorials/conclusion.md new file mode 100644 index 0000000000..f242aa7778 --- /dev/null +++ b/docs/tutorials/conclusion.md @@ -0,0 +1,23 @@ +# Conclusion + +In this tutorial, we started with a basic document-oriented approach to tracking freight shipments and gradually transformed it into a robust event-sourced system using Marten. Along the way, we highlighted why Marten’s unified approach is so powerful: + +- **Unified Document & Event Store:** We saw how Marten allowed us to store JSON documents and event streams in the same PostgreSQL database, leveraging SQL reliability with NoSQL flexibility ([Projecting Marten Events to a Flat Table – The Shade Tree Developer](https://jeremydmiller.com/2022/07/25/projecting-marten-events-to-a-flat-table)). +- **ACID Transactions:** Marten gave us transactional consistency across both documents and events – updating an aggregate and its events together without losing consistency ([What would it take for you to adopt Marten? – The Shade Tree Developer](https://jeremydmiller.com/2021/01/11/what-would-it-take-for-you-to-adopt-marten/)). +- **Evolution to Event Sourcing:** We were able to introduce event sourcing incrementally. We began with a document model, then started recording events, and finally used projections to maintain the same document as a read model. This kind of gradual adoption is much harder if using separate technologies for state and events. +- **Projections and Queries:** Marten’s projections system let us derive new read models (like daily summaries) from the events with relative ease, all within our .NET code. 
We didn’t need external pipelines; the data stayed in PostgreSQL and remained strongly consistent thanks to Marten’s guarantees. +- **Integration with Tools:** By integrating with Wolverine, we glimpsed how to operate this system at scale, coordinating projections in a distributed environment and enabling modern deployment strategies like blue/green with minimal fuss ([Projection/Subscription Distribution | Wolverine](https://wolverinefx.net/guide/durability/marten/distribution.html)). + +We also followed best practices such as clear event naming, encapsulating aggregate behavior in apply methods, using optimistic concurrency (via `FetchForWriting`), and separating the write and read concerns appropriately. These patterns will serve you well in real-world applications. + +Marten stands out in the .NET ecosystem by making advanced patterns (like CQRS and Event Sourcing) more accessible and pragmatic. It lets you start with a simple approach and incrementally add complexity (audit logging, temporal queries, analytics projections, etc.) as needed – all without switching databases or sacrificing transactional safety. This means you can adopt event sourcing in parts of your system that truly benefit from it (like shipments with complex workflows) while still handling simpler data as straightforward documents, all using one tool. + +We encourage you to explore Marten’s documentation and experiment further: + +- For a deeper dive into event sourcing, its core concepts, advanced implementation scenarios, and Marten's specific features, be sure to check out our comprehensive guide: [Understanding Event Sourcing with Marten](/events/learning). +- Try adding a new type of event (e.g., a `ShipmentDelayed` event) and see how to handle it in the projection. +- Implement a query that uses Marten’s `AggregateToAsync` LINQ operator for an ad-hoc calculation. 
+- If you have multiple bounded contexts, consider using separate schemas or databases with Marten, and possibly multiple DocumentStores. +- Look into Marten’s support for **sagas** (long-running workflows) and how events can drive them. + +With Marten and PostgreSQL, you have a potent combination of conventional technology and innovative patterns. We hope this freight shipment example showed you how to harness Marten’s power in a real-world scenario. Happy coding with Marten! diff --git a/docs/tutorials/cross-aggregate-views.md b/docs/tutorials/cross-aggregate-views.md new file mode 100644 index 0000000000..aa2c8128e5 --- /dev/null +++ b/docs/tutorials/cross-aggregate-views.md @@ -0,0 +1,94 @@ +# Part 5: Multi-Stream Projections – Cross-Aggregate Views + +So far, each projection we considered was confined to a single stream (one shipment’s events). What if we want to derive insights that span across many shipments? For example, imagine we want a daily report of how many shipments were delivered on each day. This requires looking at `ShipmentDelivered` events from all shipment streams and grouping them by date. + +Marten supports this with [Multi Stream Projections](/events/projections/multi-stream-projections). A multi-stream projection processes events from *multiple* streams and aggregates them into one or more view documents. Essentially, you define how to group events (by some key) and how to apply events to a collective view. + +## Example: Daily Deliveries Count + +Let’s create a projection to count deliveries per day. We’ll make a view document called `DailyShipmentsDelivered` that has a date and a count of delivered shipments for that date. + +```csharp +public class DailyShipmentsDelivered +{ + public DateOnly Id { get; set; } // using DateOnly as the document Id (the day) + public int DeliveredCount { get; set; } +} +``` + +We will use the date (year-month-day) as the identity for this document. 
Each `DailyShipmentsDelivered` document will represent one calendar day. Now we need a projection that listens to `ShipmentDelivered` events from any shipment stream and updates the count for the corresponding day. + +Marten’s `MultiStreamProjection` base class makes this easier. We can subclass it: + +```csharp +using Marten.Events.Projections; + +public class DailyShipmentsProjection : MultiStreamProjection<DailyShipmentsDelivered, DateOnly> +{ + public DailyShipmentsProjection() + { + // Group events by the DateOnly key (extracted from DeliveredAt) + Identity<ShipmentDelivered>(e => DateOnly.FromDateTime(e.DeliveredAt)); + } + + public DailyShipmentsDelivered Create(ShipmentDelivered @event) + { + // Create a new view for the date if none exists + return new DailyShipmentsDelivered + { + Id = DateOnly.FromDateTime(@event.DeliveredAt), + DeliveredCount = 1 + }; + } + + public void Apply(ShipmentDelivered @event, DailyShipmentsDelivered view) + { + // Increment the count for this date + view.DeliveredCount += 1; + } +} +``` + +In this `DailyShipmentsProjection`: + +- We use `Identity<ShipmentDelivered>(...)` to tell Marten how to determine the grouping key (the `Id` of our view document) for each `ShipmentDelivered` event. Here we take the event’s timestamp and convert it to a DateOnly (year-month-day) – that’s our grouping key. +- The `Create` method specifies what to do if an event arrives for a date that doesn’t yet have a document. We create a new `DailyShipmentsDelivered` with count 1. +- The `Apply` method defines how to update an existing document when another event for that same date arrives – we just increment the counter. + +We would typically register this projection as **async** (since multi-stream projections are by default registered async for safety): + +```csharp +opts.Projections.Add(new DailyShipmentsProjection(), ProjectionLifecycle.Async); +``` + +With this in place, whenever a `ShipmentDelivered` event is stored, the async projection daemon will eventually invoke our projection. 
All delivered events on the same day will funnel into the same `DailyShipmentsDelivered` document (with Id = that date). Marten ensures that events are processed in order and handles concurrency so that our counts don’t collide (under high load, async projection uses locking to avoid race conditions, which is one reason multi-stream is best as async). + +After running the system for a while, we could query the daily deliveries: + +```csharp +// Example query: get the last 7 days of delivery counts +var lastWeek = DateOnly.FromDateTime(DateTime.UtcNow.AddDays(-7)); +var stats = await session.Query<DailyShipmentsDelivered>() + .Where(x => x.Id >= lastWeek) + .OrderBy(x => x.Id) + .ToListAsync(); + +foreach(var dayStat in stats) +{ + Console.WriteLine($"{dayStat.Id}: {dayStat.DeliveredCount} deliveries"); +} +``` + +This query is hitting a regular document table (`DailyShipmentsDelivered` documents), which Marten has been keeping up-to-date from the events. Under the covers, Marten’s projection daemon fetched new `ShipmentDelivered` events, grouped them by date key, and stored/updated the documents. + +This example shows the power of combining events from many streams. We could similarly create projections for other cross-cutting concerns, such as: + +- Total live shipments in transit per route or per region. +- A table of all cancellations with reasons, to analyze why shipments get cancelled. +- Anything that involves correlating multiple aggregates’ events. + +All of it can be done with Marten using the event data we already collect, without additional external ETL jobs. And because it’s within the Marten/PostgreSQL environment, it benefits from transactional safety (the projection daemon will not lose events; it will resume from where it left off if the app restarts, etc.). + +**Tip:** For multi-stream projections, consider the volume of data for each grouping key. Our daily summary is a natural grouping (there’s a finite number of days, and each day gets a cumulative count). 
If you tried to use a highly unique key (like each event creating its own group), that might just be a degenerate case of one event per group – which could have been done as individual documents anyway. Use multi-stream grouping when events truly need to be summarized or combined. + +Now that we’ve seen how Marten handles documents, single-stream aggregates, and multi-stream projections, let’s discuss how Marten integrates with an external library called **Wolverine** to scale out and manage these projections in a robust way. diff --git a/docs/tutorials/event-sourced-aggregate.md b/docs/tutorials/event-sourced-aggregate.md new file mode 100644 index 0000000000..242510badd --- /dev/null +++ b/docs/tutorials/event-sourced-aggregate.md @@ -0,0 +1,171 @@ +# Part 3: Building an Event-Sourced Aggregate + +Now that we are recording events for each shipment, we need a way to derive the latest state of the shipment from those events. In event sourcing, an **aggregate** is an entity that can **replay or apply events** to build up its current state. In our case, the `FreightShipment` is the aggregate, and we want to be able to construct a `FreightShipment` object by applying the `ShipmentScheduled`, `ShipmentPickedUp`, etc. events in sequence. + +Fortunately, Marten can do a lot of this heavy lifting for us if we define how our `FreightShipment` applies events. Marten follows a convention of using static factory and static apply methods to define aggregates. This approach provides explicit control and is the most idiomatic way of building aggregates with Marten. + +## Defining the Aggregate with Static Apply Methods + +```csharp +public class FreightShipment +{ + public Guid Id { get; private set; } + public string Origin { get; private set; } + public string Destination { get; private set; } + public ShipmentStatus Status { get; private set; } + public DateTime ScheduledAt { get; private set; } + public DateTime? PickedUpAt { get; private set; } + public DateTime? 
DeliveredAt { get; private set; } + public DateTime? CancelledAt { get; private set; } + public string CancellationReason { get; private set; } + + public static FreightShipment Create(ShipmentScheduled @event) + { + return new FreightShipment + { + Id = @event.ShipmentId, + Origin = @event.Origin, + Destination = @event.Destination, + Status = ShipmentStatus.Scheduled, + ScheduledAt = @event.ScheduledAt + }; + } + + public static FreightShipment Apply(FreightShipment current, ShipmentPickedUp @event) + { + current.Status = ShipmentStatus.InTransit; + current.PickedUpAt = @event.PickedUpAt; + return current; + } + + public static FreightShipment Apply(FreightShipment current, ShipmentDelivered @event) + { + current.Status = ShipmentStatus.Delivered; + current.DeliveredAt = @event.DeliveredAt; + return current; + } + + public static FreightShipment Apply(FreightShipment current, ShipmentCancelled @event) + { + current.Status = ShipmentStatus.Cancelled; + current.CancelledAt = @event.CancelledAt; + current.CancellationReason = @event.Reason; + return current; + } +} +``` + +### Comparison to Document Modeling + +Compared to our earlier document models in `Shipment` and `Driver`, this aggregate model differs in a few key ways: + +- The aggregate is built from **events**, not manually instantiated. +- Instead of using `set` properties and constructors to initialize the object directly, we use **event-driven construction** via `Create(...)`. +- State transitions are handled using **explicit `Apply(...)` methods**, each corresponding to a domain event. + +This approach enforces a clear, consistent flow of state changes and ensures that aggregates are always built in a valid state based on domain history. It also enables Marten to recognize and apply the projection automatically during event replay or live aggregation. + +### Why Static Apply and Create? + +- **Static `Create`**: Marten uses this method to initialize the aggregate from the first event in the stream. 
It must return a fully constructed aggregate. +- **Static `Apply`**: Each subsequent event in the stream is passed to the relevant `Apply` method to update the current instance. These methods must return the updated aggregate. +- This design avoids mutation during construction and encourages an immutable mindset. It also aligns with Marten's default code generation and improves transparency when reading aggregate logic. + +By adopting this convention, your aggregate classes are fully compatible with Marten’s projection engine and behave consistently across live and inline projections. + +Now that our `FreightShipment` can be built from events, let’s use Marten to do exactly that. We have two main ways to get the current state from events: + +1. **On-demand aggregation** – load events and aggregate them in memory when needed. +2. **Projections (stored aggregation)** – have Marten automatically update a stored `FreightShipment` document as events come in (so we can load it directly like a regular document). + +We’ll explore both, starting with on-demand aggregation. + +## Aggregating a Stream on Demand (Live Projection) + +Marten provides an easy way to aggregate a stream of events into an object: `AggregateStreamAsync<T>()`. We can use this to fetch the latest state without having stored a document. For example: + +```csharp +// Assuming we have a stream of events for shipmentId (from an earlier part) +var currentState = await session.Events.AggregateStreamAsync<FreightShipment>(shipmentId); +Console.WriteLine($"State: {currentState.Status}, PickedUpAt: {currentState.PickedUpAt}"); +``` + +When this code runs, Marten will fetch all events for the given stream ID from the database, then create a `FreightShipment` and apply each event (using the `Apply` methods we defined) to produce `currentState`. If our earlier events were Scheduled, PickedUp, Delivered, the resulting `currentState` should have `Status = Delivered` and the `PickedUpAt`/`DeliveredAt` times set appropriately. 
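Because the aggregate is rebuilt from the raw events, you are not limited to the latest state. As a sketch (assuming the optional `version` and `timestamp` parameters of Marten's `AggregateStreamAsync`), you can also "time travel" a shipment to an earlier point in its history:

```csharp
// Rebuild the shipment as of stream version 2 (i.e. after the first two events)
var afterPickup = await session.Events.AggregateStreamAsync<FreightShipment>(
    shipmentId, version: 2);

// Rebuild the shipment as it looked one day ago
var yesterdayState = await session.Events.AggregateStreamAsync<FreightShipment>(
    shipmentId, timestamp: DateTime.UtcNow.AddDays(-1));
```

This is handy for audit questions such as "what did this shipment look like before it was delivered?" – something a state-only document model simply cannot answer.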
+ +This on-demand approach is an example of a **live projection** – we compute the projection (the aggregate) from raw events in real time, without storing the result. Marten even lets you run live aggregations over a selection of events via LINQ queries (using the `AggregateToAsync` operator) ([Live Aggregation](/events/projections/live-aggregates)). For instance, you could query a subset of events (perhaps filtering by date or type) and aggregate those on the fly. This is powerful for ad-hoc queries or scenarios where you don’t need to frequently read the same aggregate. + +However, recalculating an aggregate from scratch each time can be inefficient if the event stream is long or if you need to read the state often. In our shipping example, if we have to show the current status of shipments frequently (e.g., in a UI dashboard), constantly replaying events might be overkill, especially as the number of events grows. This is where Marten’s **projection** support shines, allowing us to maintain a persistent up-to-date view of the aggregate. + +## Inline Projections: Mixing Events with Documents + +In some scenarios, we want Marten to automatically maintain a document that reflects the latest state of a single stream of events. This can be useful for dashboards or APIs where fast reads are important and you don’t want to replay the entire stream on every request. + +To do this, we can register a `SingleStreamProjection` and configure it to run inline. This projection class applies events from one stream and writes the resulting document to the database as part of the same transaction. + +```csharp +public class ShipmentView +{ + public Guid Id { get; set; } + public string Origin { get; set; } + public string Destination { get; set; } + public string Status { get; set; } + public DateTime? PickedUpAt { get; set; } + public DateTime? 
DeliveredAt { get; set; } +} + +public class ShipmentViewProjection : SingleStreamProjection<ShipmentView> +{ + public ShipmentView Create(ShipmentScheduled @event) => new ShipmentView + { + Id = @event.ShipmentId, + Origin = @event.Origin, + Destination = @event.Destination, + Status = "Scheduled" + }; + + public void Apply(ShipmentView view, ShipmentPickedUp @event) + { + view.Status = "InTransit"; + view.PickedUpAt = @event.PickedUpAt; + } + + public void Apply(ShipmentView view, ShipmentDelivered @event) + { + view.Status = "Delivered"; + view.DeliveredAt = @event.DeliveredAt; + } +} +``` + +Register the projection during configuration: + +```csharp +var store = DocumentStore.For(opts => +{ + opts.Connection(connectionString); + opts.Projections.Add<ShipmentViewProjection>(ProjectionLifecycle.Inline); +}); +``` + +Let’s illustrate how this works with our shipment example: + +```csharp +using var session = store.LightweightSession(); + +var sid = Guid.NewGuid(); +var evt1 = new ShipmentScheduled(sid, "Los Angeles", "Tokyo", DateTime.UtcNow); +session.Events.StartStream(sid, evt1); +await session.SaveChangesAsync(); // Inserts initial ShipmentView + +var evt2 = new ShipmentPickedUp(DateTime.UtcNow.AddHours(2)); +session.Events.Append(sid, evt2); +await session.SaveChangesAsync(); // Updates ShipmentView.Status and PickedUpAt + +var doc = await session.LoadAsync<ShipmentView>(sid); +Console.WriteLine(doc.Status); // InTransit +Console.WriteLine(doc.PickedUpAt); // Set to pickup time +``` + +This flow shows that each time we append an event, Marten applies the changes immediately and updates the document inside the same transaction. This ensures strong consistency between the event stream and the projected view. + +Note that this is not the same as aggregating a domain model like `FreightShipment` using `AggregateStreamAsync`. Instead, we're producing a derived view (or read model) designed for fast queries, based on a subset of event data.
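Because the projected `ShipmentView` is stored as an ordinary Marten document, you can query it with LINQ like any other document, with no event replay involved. A sketch, assuming the store registration and session from above:

```csharp
// Read straight from the projected documents; the events are never touched
var inTransit = await session.Query<ShipmentView>()
    .Where(x => x.Status == "InTransit")
    .ToListAsync();
```

This is the payoff of the inline lifecycle: reads are plain document queries, while the event stream remains the source of truth.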
diff --git a/docs/tutorials/evolve-to-event-sourcing.md b/docs/tutorials/evolve-to-event-sourcing.md new file mode 100644 index 0000000000..46e6c0dbaf --- /dev/null +++ b/docs/tutorials/evolve-to-event-sourcing.md @@ -0,0 +1,100 @@ +# Part 2: Evolving to Event Sourcing – Capturing Shipment Changes + +To gain more insight and auditability, we decide to model the shipment lifecycle with **events**. In an event-sourced system, every change (e.g. “Shipment picked up”) is recorded as an immutable event. The current state can always be derived from the sequence of past events, and we never lose information about what happened when. + +## Learning Goals + +- Understand the purpose and benefits of event sourcing +- Define and store domain events +- Start and append to event streams using Marten +- Read and replay event streams to reconstruct state + +## Why Event Sourcing? + +Traditional systems persist the current state of data, often mutating it in place. Event sourcing flips this around: instead of storing *state*, we store *facts* — each change is captured as an immutable event. + +For our delivery system, this means: + +- A full audit trail of what happened and when +- The ability to rebuild state at any point +- Natural modeling of workflows and temporal logic +- Easier integration with external systems through event publishing + +--- + +## Identifying Events (Event Modeling) + +The first step is to define our domain events. These events should represent meaningful transitions or actions in the freight shipping process. Based on our earlier description, we have a few key moments: + +- A shipment is **scheduled** (created/booked in the system). +- A shipment is **picked up** by a carrier. +- A shipment is **delivered** to its destination. +- A shipment is **cancelled** (perhaps if the order is called off). + +Each of these will be an event. 
In naming events, a common best practice is to use past-tense verbs or descriptive phrases because events represent **something that has happened**. Also, events should carry the data relevant to that change. Let’s define our events as C# record types: + +```csharp +public record ShipmentScheduled(Guid ShipmentId, string Origin, string Destination, DateTime ScheduledAt); +public record ShipmentPickedUp(DateTime PickedUpAt); +public record ShipmentDelivered(DateTime DeliveredAt); +public record ShipmentCancelled(string Reason, DateTime CancelledAt); +``` + +We’ve declared four event types: + +- **ShipmentScheduled** – marks the creation of a shipment, including its Id, origin, destination, and when it was scheduled. +- **ShipmentPickedUp** – occurs when the shipment is collected; we capture the time of pickup. +- **ShipmentDelivered** – occurs when delivered; we capture the delivery time. +- **ShipmentCancelled** – occurs if the shipment is cancelled; we include a Reason and time of cancellation. + +Why did we include `ShipmentId` in the `ShipmentScheduled` event but not in the others? In Marten (and event sourcing in general), events are stored in **streams** identified by an ID. In our case, each shipment will have its own event stream identified by the shipment’s Id. That means when we store a `ShipmentPickedUp` event, we will associate it with a specific shipment stream (so the context of which shipment it belongs to is known by the stream key). It’s not strictly necessary to duplicate the shipment Id inside every event (and often one wouldn’t), but including it in the initial “created” event can be useful for clarity or if that event might be handled independently. The key point is that Marten will ensure each event is linked to a particular Shipment aggregate. + +Before coding with events, it’s useful to visualize the expected flow. 
Here’s a simple state diagram of our shipment lifecycle with events triggering state changes: + +```mermaid +stateDiagram-v2 + [*] --> Scheduled : ShipmentScheduled + Scheduled --> InTransit : ShipmentPickedUp + InTransit --> Delivered : ShipmentDelivered + Scheduled --> Cancelled : ShipmentCancelled + InTransit --> Cancelled : ShipmentCancelled + Delivered --> [*] + Cancelled --> [*] +``` + +In this diagram, **Scheduled**, **InTransit**, **Delivered**, and **Cancelled** are states of a shipment, and the arrows show the events that transition between states. For example, when a `ShipmentPickedUp` event occurs, the shipment moves from Scheduled to InTransit. This helps ensure our events make sense and cover all transitions. + +## Storing Events in Marten + +Marten’s event store allows us to record these events in the database. Each shipment’s events will be stored in order within its own stream. Let’s see how to append events using Marten. We’ll simulate creating a shipment and then recording a pickup and delivery: + +```csharp +using var session = store.LightweightSession(); + +// 1. Start a new event stream for a shipment +var shipmentId = Guid.NewGuid(); +var scheduleEvent = new ShipmentScheduled(shipmentId, "Rotterdam", "New York", DateTime.UtcNow); +session.Events.StartStream(shipmentId, scheduleEvent); +await session.SaveChangesAsync(); +Console.WriteLine($"Started stream {shipmentId} with ShipmentScheduled."); + +// 2. Append a ShipmentPickedUp event (in a real scenario, later in time) +var pickupEvent = new ShipmentPickedUp(DateTime.UtcNow.AddHours(5)); +session.Events.Append(shipmentId, pickupEvent); + +// 3. Append a ShipmentDelivered event +var deliveredEvent = new ShipmentDelivered(DateTime.UtcNow.AddDays(1)); +session.Events.Append(shipmentId, deliveredEvent); + +// 4. 
Commit the new events +await session.SaveChangesAsync(); +Console.WriteLine($"Appended PickedUp and Delivered events to stream {shipmentId}."); +``` + +Let’s break down what’s happening: + +- **Starting a stream**: We call `session.Events.StartStream(shipmentId, scheduleEvent)` to begin a new event stream for a `FreightShipment` with a specific Id. The first event in this stream is `ShipmentScheduled`. Under the hood, Marten will save this event to an internal `mt_events` table (the default name) and assign it a sequence number (version 1 in the stream). We then save changes, which writes the event to the database. +- **Appending events**: Next, we use `session.Events.Append(streamId, event)` on the same session (in a real app this could just as well be a new session opened later) to add new events to that existing stream. We add `ShipmentPickedUp` and `ShipmentDelivered`. Notice we haven’t saved changes yet – Marten, like with documents, batches operations in the session until `SaveChangesAsync()` is called. +- **Committing events**: The second `SaveChangesAsync()` writes both the pickup and delivered events to the database in one transaction. If something went wrong (say a violation of a concurrency check), none of the events would be stored. After this, our stream has three events in order: Scheduled (version 1), PickedUp (version 2), Delivered (version 3). + +At this point, the event store contains a full history for the shipment. We can retrieve the raw events if needed via Marten’s API (for example, `session.Events.FetchStream(shipmentId)` would give us all events for that stream). But more typically, we want to derive the **current state** or some useful representation from these events. That’s the role of an **aggregate** or **projection**, which we’ll explore next.
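As an aside before moving on, the state diagram from earlier in this part can be encoded directly as a guard function, so a command handler can reject an impossible transition before ever appending an event. A minimal sketch (the `ShipmentLifecycle` helper is our own illustration, not a Marten API; it reuses the `ShipmentStatus` enum from Part 1):

```csharp
using System;

public enum ShipmentStatus { Scheduled, InTransit, Delivered, Cancelled }

public static class ShipmentLifecycle
{
    // Mirrors the transitions in the state diagram above
    public static bool CanTransition(ShipmentStatus from, ShipmentStatus to) => (from, to) switch
    {
        (ShipmentStatus.Scheduled, ShipmentStatus.InTransit) => true,  // ShipmentPickedUp
        (ShipmentStatus.InTransit, ShipmentStatus.Delivered) => true,  // ShipmentDelivered
        (ShipmentStatus.Scheduled, ShipmentStatus.Cancelled) => true,  // ShipmentCancelled
        (ShipmentStatus.InTransit, ShipmentStatus.Cancelled) => true,  // ShipmentCancelled
        _ => false
    };
}
```

Checking `CanTransition(current, next)` before calling `session.Events.Append(...)` keeps streams free of events that the aggregate could never apply.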
diff --git a/docs/tutorials/getting-started.md b/docs/tutorials/getting-started.md new file mode 100644 index 0000000000..e884ced786 --- /dev/null +++ b/docs/tutorials/getting-started.md @@ -0,0 +1,97 @@ +# Part 1: Freight Shipping Use Case – Document-First Approach + +To ground this tutorial, imagine we’re building a simple freight shipping system. In this domain, a **Shipment** has an origin and destination, and goes through a lifecycle (scheduled, picked up, in transit, delivered, etc.). We’ll begin by modeling a Shipment as a straightforward document – essentially a record in a database that we update as the shipment progresses. Marten’s document database features make this easy and familiar. + +## Defining the Shipment Document + +First, let’s define a Shipment class to represent the data we want to store. We’ll include an Id (as a Guid), origin and destination locations, a status, and timestamps for key events. For now, we’ll treat this as a simple POCO that Marten can persist as JSON: + +```csharp +public enum ShipmentStatus { Scheduled, InTransit, Delivered, Cancelled } + +public class FreightShipment +{ + public Guid Id { get; set; } + public string Origin { get; set; } + public string Destination { get; set; } + public ShipmentStatus Status { get; set; } + public DateTime ScheduledAt { get; set; } + public DateTime? PickedUpAt { get; set; } + public DateTime? DeliveredAt { get; set; } + public DateTime? CancelledAt { get; set; } +} +``` + +Here, **Id** will uniquely identify the shipment (Marten uses this as the document identity). We track the shipment’s Origin and Destination, and use a Status to reflect where it is in the process. We also have optional timestamps for when the shipment was picked up, delivered, or cancelled (those will remain null until those actions happen). This is a **document-first model** – all the current state of a shipment is kept in one document, and we’ll overwrite fields as things change. 
+ +## Storing and Retrieving Documents with Marten + +Marten makes it straightforward to work with documents. We start by configuring a `DocumentStore` with a connection to PostgreSQL. For example: + +```csharp +var store = DocumentStore.For(opts => +{ + opts.Connection("Host=localhost;Database=myapp;Username=myuser;Password=mypwd"); + opts.AutoCreateSchemaObjects = AutoCreate.All; // Dev mode: create tables if missing +}); +``` + +The `DocumentStore` is the main entry point to Marten. In the above setup, we provide the database connection string. We also enable `AutoCreateSchemaObjects` so Marten will automatically create the necessary tables (in a real app you might use a migration instead, but this is convenient for development). Marten will create a table to hold `FreightShipment` documents in JSON form. + +Now, let’s store a new shipment document and then load it back: + +```csharp +using var session = store.LightweightSession(); // open a new session + +// 1. Create a new shipment +var shipment = new FreightShipment +{ + Id = Guid.NewGuid(), + Origin = "Rotterdam", + Destination = "New York", + Status = ShipmentStatus.Scheduled, + ScheduledAt = DateTime.UtcNow +}; + +// 2. Store it in Marten +session.Store(shipment); +await session.SaveChangesAsync(); // saves the changes to the database + +// 3. Later... load the shipment by Id +var loaded = await session.LoadAsync<FreightShipment>(shipment.Id); +Console.WriteLine($"Shipment status: {loaded.Status}"); // Outputs: Scheduled +``` + +A few things to note in this code: + +- We use a **session** (`LightweightSession`) to interact with the database. This pattern is similar to an EF Core DbContext or an NHibernate session. The session is a unit of work; we save changes at the end (which wraps everything in a DB transaction). +- Calling `Store(shipment)` tells Marten to stage that document for saving. `SaveChangesAsync()` actually commits it to PostgreSQL. +- After saving, we can retrieve the document by Id using `LoadAsync<FreightShipment>`.
Marten deserializes the JSON back into our `FreightShipment` object. + +Behind the scenes, Marten stored the shipment as a JSON document in a Postgres table. Thanks to Marten’s use of PostgreSQL, this was an ACID transaction – if we had multiple documents or operations in the session, they’d all commit or rollback together. At this point, our shipment record might look like: + +```json +{ + "Id": "3a1f...d45", + "Origin": "Rotterdam", + "Destination": "New York", + "Status": "Scheduled", + "ScheduledAt": "2025-03-21T08:30:00Z", + "PickedUpAt": null, + "DeliveredAt": null, + "CancelledAt": null +} +``` + +As the shipment goes through its lifecycle, we would update this document. For example, when the freight is picked up, we might do: + +```csharp +loaded.Status = ShipmentStatus.InTransit; +loaded.PickedUpAt = DateTime.UtcNow; +session.Store(loaded); +await session.SaveChangesAsync(); +``` + +This will update the existing JSON document in place (Marten knows it’s an update because the Id matches an existing document). Similarly, upon delivery, we’d set `Status = Delivered` and set `DeliveredAt`. This **state-oriented approach** is simple and works well for many cases – we always have the latest status easily available by loading the document. + +However, one drawback of the document-only approach is that we lose the historical changes. Each update overwrites the previous state. If we later want to know *when* a shipment was picked up or delivered, we have those timestamps, but what if we need more detail or want an audit trail? We might log or archive old versions, but that gets complex. This is where **event sourcing** comes in. Instead of just storing the final state, we capture each state change as an event. Let’s see how Marten allows us to evolve our design to an event-sourced model without abandoning the benefits of the document store. 
diff --git a/docs/tutorials/introduction.md new file mode 100644 index 0000000000..12108aff0a --- /dev/null +++ b/docs/tutorials/introduction.md @@ -0,0 +1,32 @@ +# Marten Tutorial: Building a Freight & Delivery System + +> This tutorial introduces you to Marten through a real-world use case: building a freight and delivery management system using documents and event sourcing. You'll learn not just how to use Marten, but also why and when to apply different features, and how to integrate them with Wolverine for a complete CQRS and messaging architecture. + +--- + +## What You Will Learn + +- Why Marten's approach to Postgres as a document and event database is unique and powerful +- How to model real-world business workflows using documents and event sourcing +- How to define, store, and query domain models like shipments and drivers +- How to track the lifecycle of domain entities using event streams +- How to use projections to maintain real-time read models +- How to reliably send notifications using the Wolverine outbox +- How to scale with async projections and optimize performance + +--- + +## Why Marten? The Power of Postgres + .NET + +Most document databases and event stores are dedicated NoSQL engines, such as MongoDB, DynamoDB, or Cosmos DB. Marten takes a different path: it builds a document store and event sourcing system **on top of PostgreSQL**, a relational database. + +At first glance, this may seem unorthodox. Why use a relational database for document and event-based data? + +The answer lies in the unique strengths of PostgreSQL: + +- **ACID transactions** — PostgreSQL is battle-tested for transactional consistency. Marten builds on that to offer safe, predictable persistence of documents and events. +- **Powerful JSON support** — PostgreSQL's `jsonb` data type lets Marten store .NET objects as raw JSON with indexing and querying capabilities.
+- **Relational flexibility** — If needed, you can combine document-style storage with traditional columns or relational data in the same schema. +- **One database for everything** — No need to manage separate infrastructure for documents, events, messages, and relational queries. Marten and Wolverine build on the same reliable engine. + +Using Marten gives you the flexibility of NoSQL without leaving the safety and robustness of PostgreSQL. This hybrid approach enables high productivity, strong consistency, and powerful event-driven architectures in .NET applications. diff --git a/docs/tutorials/modeling-documents.md b/docs/tutorials/modeling-documents.md new file mode 100644 index 0000000000..707564060a --- /dev/null +++ b/docs/tutorials/modeling-documents.md @@ -0,0 +1,130 @@ +# Modeling documents + +In this chapter, we'll define the domain model for our freight and delivery system and store it in PostgreSQL using Marten as a document database. + +--- + +## Learning Goals + +- Design C# document types (`Shipment`, `Driver`) +- Store documents using Marten +- Query documents using LINQ +- Understand Marten's identity and schema conventions + +--- + +## Defining Documents + +We'll start by modeling two core entities in our domain: `Shipment` and `Driver`. + +```csharp +public class Shipment +{ + public Guid Id { get; set; } + public string Origin { get; set; } + public string Destination { get; set; } + public DateTime CreatedAt { get; set; } + public DateTime? DeliveredAt { get; set; } + public string Status { get; set; } + public Guid? AssignedDriverId { get; set; } +} + +public class Driver +{ + public Guid Id { get; set; } + public string Name { get; set; } + public string LicenseNumber { get; set; } +} +``` + +> Marten uses `Id` as the primary key by convention. No attributes or base classes are required. + +Once defined, Marten will automatically create tables like `mt_doc_shipment` and `mt_doc_driver` with a `jsonb` column to store the data. 
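By default those tables land in PostgreSQL's `public` schema. If you would rather isolate them, Marten's `DatabaseSchemaName` option moves everything into a schema of your choosing (`freight` below is our own example name, and `connectionString` is assumed to be defined elsewhere):

```csharp
var store = DocumentStore.For(opts =>
{
    opts.Connection(connectionString);

    // Create mt_doc_shipment, mt_doc_driver, etc. in a "freight" schema
    opts.DatabaseSchemaName = "freight";
});
```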
+ +--- + +## Storing Documents + +```csharp +var shipment = new Shipment +{ + Id = Guid.NewGuid(), + Origin = "New York", + Destination = "Chicago", + CreatedAt = DateTime.UtcNow, + Status = "Created" +}; + +var driver = new Driver +{ + Id = Guid.NewGuid(), + Name = "Alice Smith", + LicenseNumber = "A123456" +}; + +using var session = store.LightweightSession(); +session.Store(driver); +session.Store(shipment); +await session.SaveChangesAsync(); +``` + +Marten uses PostgreSQL's `INSERT ... ON CONFLICT DO UPDATE` under the hood to perform upserts. + +--- + +## Querying Documents + +Use LINQ queries to fetch or filter data: + +```csharp +using var querySession = store.QuerySession(); + +// Load by Id +var existingShipment = await querySession.LoadAsync<Shipment>(shipment.Id); + +// Filter by destination +var shipmentsToChicago = await querySession + .Query<Shipment>() + .Where(x => x.Destination == "Chicago") + .ToListAsync(); + +// Count active shipments for this driver +var active = await querySession + .Query<Shipment>() + .CountAsync(x => x.AssignedDriverId == driver.Id && x.Status != "Delivered"); +``` + +> You can also project into DTOs or anonymous types for performance if you don’t need the full document. + +--- + +## Indexing Fields for Performance + +If you frequently query by certain fields, consider duplicating them as indexed columns: + +```csharp +opts.Schema.For<Shipment>().Duplicate(x => x.Status); +opts.Schema.For<Shipment>().Duplicate(x => x.AssignedDriverId); +``` + +This improves query performance by creating indexes on those columns outside the JSON.
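If you would rather not copy the value out into a separate column, Marten can instead build a computed index over the JSON data itself. Treat the sketch below as a hedged example of the `Index()` schema API rather than a definitive recipe:

```csharp
// Computed index over the JSON value, with no duplicated column
opts.Schema.For<Shipment>().Index(x => x.Status);
```

Duplicated columns are generally the faster option for hot queries; a computed index avoids the extra storage.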
+ +--- + +## Visual Recap + +```mermaid +flowchart TB + A[Shipment Created] -->|Store| B["mt_doc_shipment (JSONB)"] + A2[Driver Registered] -->|Store| C["mt_doc_driver (JSONB)"] + B -->|Query: Destination = Chicago| D[LINQ Result] +``` + +--- + +## Summary + +- Documents are plain C# classes with an `Id` property +- Marten stores them in PostgreSQL using `jsonb` +- You can query documents using LINQ +- Index fields you query often for better performance diff --git a/docs/tutorials/read-model-projections.md b/docs/tutorials/read-model-projections.md new file mode 100644 index 0000000000..d72c455b61 --- /dev/null +++ b/docs/tutorials/read-model-projections.md @@ -0,0 +1,30 @@ +# Part 4: Projections – Building Read Models from Events + +In event sourcing, events are the source of truth, but they’re not always convenient for querying or presenting to users. **Projections** are derived views or read models built from those events ([Marten as Event Store](/events/)). We already saw one kind of projection: an aggregate projection that builds the current `FreightShipment` state. Marten’s projection system is quite powerful – it allows you to project events into virtually any shape of data: aggregated documents, view tables, or multiple related documents. + +Let’s discuss a few projection types and best practices, continuing with our freight shipment domain as context. + +## Inline vs. Async Projections + +Marten supports three projection lifecycles: **Inline**, **Async**, and **Live** ([Marten as Event Store](/events/)). We have touched on these, but here’s a quick comparison: + +- **Inline projections** run as part of the same transaction that records the events. This yields **strong consistency** (the projection is updated immediately, within the ACID transaction). The trade-off is that it can add latency to the write operation. In our example, updating the `FreightShipment` document inline ensures any query immediately after the event commit will see the new state. 
+- **Async projections** run in the background, typically via Marten’s Projection Daemon or with the help of Wolverine (more on that soon). When events are committed, they are queued for processing and a separate process (or thread) will update the projections shortly after. This is an **eventual consistency** model, but it can vastly improve write throughput, since the event insert transaction doesn’t do extra work. For heavy workloads, this is a common choice – you accept that there may be a tiny delay before the read models reflect the latest events. +- **Live projections** are on-demand and not persisted. We saw an example using `AggregateStreamAsync`. Another scenario for live projections might be a complex aggregation you only need once (like generating a report on the fly by scanning events). Marten’s `QueryAllRawEvents().AggregateToAsync<T>()` API allows you to apply a projection dynamically to any event query ([Live Aggregation](/events/projections/live-aggregates)). Live projections are essentially **ad hoc** computations and do not maintain state beyond the immediate query. + +In practice, you might use a mix: aggregates that are needed in real-time might be inline, whereas other read models might be async. Marten makes it easy to register different projections with different lifecycles. + +For our freight system, suppose we want to generate a **shipment timeline** view (list of events with timestamps for a shipment) whenever needed. We might simply fetch the events and not store that as a document – this can be a live projection (just materialize events to a DTO when needed). Meanwhile, we chose to maintain the current `FreightShipment` status inline for instant consistency. + +## Designing Projections and Naming Conventions + +When building projections, especially multi-step ones, it’s good to follow clear naming and separation: + +- Keep your event classes in a domain namespace (they represent business facts).
+- The aggregate (like `FreightShipment`) lives in the domain as well, with its `Apply` methods as defined earlier. +- If you create separate projection classes (as we will for multi-stream projections soon), name them after the view or purpose (e.g., `DailyShipmentsProjection` for a daily summary). This keeps things organized. +- Each projection should focus on one concern: an aggregate projection per stream type, or a specific read model that serves a query need. + +Marten will persist projection results as documents (or in user-defined tables for certain custom projections). By default, the document type name will determine the table name. For example, `FreightShipment` documents go in the `mt_doc_freightshipment` table (by Marten’s conventions). You can customize this via Marten’s schema config if needed. + +Now, let’s move on to a more advanced kind of projection: combining events from **multiple streams**. diff --git a/docs/tutorials/wolverine-integration.md new file mode 100644 index 0000000000..ef9b33e6e7 --- /dev/null +++ b/docs/tutorials/wolverine-integration.md @@ -0,0 +1,102 @@ +# Part 6: Integrating Marten with Wolverine + +Marten and Wolverine are a powerful combination for building reliable, distributed, and event-driven systems. Marten handles persistence (documents and events), while Wolverine provides messaging, transactional outbox/inbox support, background processing, and distributed coordination. + +This integration serves **two distinct purposes**: + +1. Event-driven messaging and transactional command handling +2. Coordinated background projection processing in distributed environments + +--- + +## Reliable Messaging with Aggregates + +When using event sourcing, emitting events from domain aggregates is common — and often, we want to trigger side effects (notifications, follow-up commands, integration events). With just Marten, you'd need to handle messaging yourself, risking lost messages in case of failure.
With Wolverine: + +- You can use `[AggregateHandler]` to define aggregate command handlers. +- Events and messages returned from the handler are **saved and dispatched atomically**. +- Wolverine uses **Marten’s outbox** to store messages until the transaction commits. + +Example: + +```csharp +[AggregateHandler] +public static IEnumerable<object> Handle(PickupShipment cmd, FreightShipment shipment) +{ + if (shipment.Status != ShipmentStatus.Scheduled) + throw new InvalidOperationException("Cannot pick up unscheduled shipment"); + + yield return new ShipmentPickedUp(cmd.Timestamp); + yield return new NotifyDispatchCenter(shipment.Id, "PickedUp"); +} +``` + +Wolverine will: + +1. Load the `FreightShipment` using `FetchForWriting` (for optimistic concurrency) +2. Pass the current aggregate and command to the handler +3. Append the returned events to the stream +4. Save the session +5. Dispatch messages after commit + +The outbox ensures messages are only dispatched once the transaction commits, so nothing is lost. The inbox can guarantee that incoming messages are processed **only once**, even across retries. + +--- + +## Distributed Projections in Multi-Node Environments + +If you run your freight system across multiple nodes (e.g. for horizontal scaling or redundancy), Marten’s async projection daemon needs coordination — to avoid multiple nodes processing the same projection. + +Wolverine offers cluster-wide coordination: + +- Only one node will run each async projection or event subscription. +- If a node fails, projection work is reassigned. +- You can configure **load balancing** and **capability-based projection routing** (e.g., for blue/green deployments).
Configuration: + +```csharp +builder.Services.AddMarten(opts => +{ + opts.Connection(connectionString); + opts.Projections.Add<ShipmentViewProjection>(ProjectionLifecycle.Async); + opts.Projections.Add<DailyShipmentsProjection>(ProjectionLifecycle.Async); +}) +.IntegrateWithWolverine(cfg => +{ + cfg.UseWolverineManagedEventSubscriptionDistribution = true; +}); +``` + +This ensures that your projection daemon is managed by Wolverine’s distributed coordinator. + +--- + +## Summary + +- **Messaging + Aggregates**: Wolverine makes it easy to process commands, generate events, and send follow-up messages, all with transactional guarantees. +- **Cluster-safe Projections**: In distributed deployments, Wolverine ensures that async projections are safely coordinated. +- **Inbox/Outbox**: Wolverine provides exactly-once processing semantics across your freight and delivery system. + +Together, Marten and Wolverine give you a solid foundation for a consistent, reliable freight management system — with minimal boilerplate and maximum safety. + +```mermaid +sequenceDiagram + participant API as API + participant Wolverine as Wolverine Handler + participant Marten as Marten Event Store + participant Outbox as Outbox + participant Dispatch as Dispatch Center + + API->>Wolverine: Send PickupShipment + Wolverine->>Marten: FetchForWriting(FreightShipment) + Marten-->>Wolverine: Aggregate state + Wolverine->>Wolverine: Handler emits ShipmentPickedUp + NotifyDispatchCenter + Wolverine->>Marten: Append events + Wolverine->>Outbox: Enqueue NotifyDispatchCenter + Wolverine->>Marten: SaveChanges (commit) + Marten-->>Wolverine: Committed + Wolverine-->>Dispatch: Publish NotifyDispatchCenter +```