
The ASP.NET Web API 2 HTTP Message Lifecycle in 43 Easy Steps

Anyone who works with ASP.NET Web API should check out this poster that Microsoft created to explain the Request/Response Pipeline that Web API utilizes. It's amazing, and if you do any work in Web API you should check it out! Right now. Yes, seriously. Go ahead, I'll wait.

I love this poster, but in my opinion it doesn't do a good job of explaining the decision logic and ideas behind each step in the pipeline. Further, it doesn't explicitly tell you exactly how many things happen during this pipeline (answer: a surprisingly large number of things). In short: it's awesome, but it can be made more awesome by incorporating just a little more detail.

Here's what we're going to do in this post: using that poster, we're going to enumerate every single step involved in processing a request and receiving a response using ASP.NET Web API 2, and explain a little more about each piece of the pipeline and where we programmers can extend, change, or otherwise make this complex lifecycle more awesome. So let's get going and step through the ASP.NET Web API 2 Request Lifecycle in just 43 easy steps!

The 43 Steps

It all starts with IIS:

  1. IIS (or OWIN self-hosting) receives a request.
  2. The request is then passed to an instance of HttpServer.

  3. HttpServer is responsible for dispatching HttpRequestMessage objects.

  4. HttpRequestMessage provides strongly-typed access to the request.

  5. If one or more global instances of DelegatingHandler exist on the pipeline, the request is passed to them. The request arrives at the DelegatingHandler instances in the order they were added to the pipeline.
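A global message handler is just a subclass of DelegatingHandler registered with the configuration. Here's a minimal sketch (the handler name and the header it adds are my own inventions, not part of the poster):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// A message handler sees every request on the way in (Step 5)
// and every response on the way out (Step 41).
public class RequestIdHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Code before base.SendAsync() runs on the inbound request...
        var response = await base.SendAsync(request, cancellationToken);

        // ...and code after it runs on the outbound response.
        response.Headers.Add("X-Request-Id", Guid.NewGuid().ToString());
        return response;
    }
}

// Registered globally in WebApiConfig.Register:
// config.MessageHandlers.Add(new RequestIdHandler());
```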

  6. If the HttpRequestMessage passes the DelegatingHandler instances (or no such handler exists), then the request proceeds to the HttpRoutingDispatcher instance.

    • HttpRoutingDispatcher chooses which routing handler to call based on the matching route. If no such route exists (e.g. Route.Handler is null, as seen in the diagram) then the request proceeds directly to Step 10.

  7. If a Route Handler exists for the given route, the HttpRequestMessage is sent to that handler.

  8. It is possible to have instances of DelegatingHandler attached to individual routes. If such handlers exist, the request goes to them (in the order they were added to the pipeline).
  9. An instance of HttpMessageHandler then handles the request. If you provide a custom HttpMessageHandler, said handler can optionally return the request to the "main" path or to a custom end point.
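Per-route handlers (Steps 8 and 9) are wired up when the route is mapped. A sketch of how that might look, with a made-up route template and handler name, using HttpClientFactory.CreatePipeline to chain the route-specific DelegatingHandler back into the "main" path via HttpControllerDispatcher:

```csharp
config.Routes.MapHttpRoute(
    name: "LegacyApi",
    routeTemplate: "legacy/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional },
    constraints: null,
    handler: HttpClientFactory.CreatePipeline(
        // The inner handler decides where the pipeline rejoins; using
        // HttpControllerDispatcher returns control to the main path (Step 10).
        innerHandler: new HttpControllerDispatcher(config),
        handlers: new DelegatingHandler[] { new MyRouteSpecificHandler() })
);
```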

  10. The request is received by an instance of HttpControllerDispatcher, which will route the request to the appropriate controller as determined by the request's URL.

  11. The HttpControllerDispatcher selects the appropriate controller to route the request to.

  12. An instance of IHttpControllerSelector selects the appropriate HttpControllerDescriptor for the given HttpRequestMessage.
  13. The IHttpControllerSelector calls an instance of IHttpControllerTypeResolver, which will finally call...
  14. ...an instance of IAssembliesResolver, which ultimately selects the appropriate controller and returns it to the HttpControllerDispatcher from Step 11.
    • NOTE: If you implement Dependency Injection, the IAssembliesResolver will be replaced by whatever container you register.
  15. Once the HttpControllerDispatcher has a reference to the appropriate controller, it calls the Create() method on an IHttpControllerActivator...
  16. ...which creates the actual controller and returns it to the Dispatcher. The dispatcher then sends the request into the Select Controller Action routine, as shown below.
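Most of the services in Steps 12-16 can be swapped out via the configuration's Services collection. For instance, here's a sketch of replacing the controller activator (the class name is hypothetical):

```csharp
public class CustomControllerActivator : IHttpControllerActivator
{
    public IHttpController Create(HttpRequestMessage request,
        HttpControllerDescriptor controllerDescriptor, Type controllerType)
    {
        // Resolve the controller however you like: a DI container,
        // an object pool, etc. Activator.CreateInstance mimics the default.
        return (IHttpController)Activator.CreateInstance(controllerType);
    }
}

// In WebApiConfig.Register:
// config.Services.Replace(typeof(IHttpControllerActivator), new CustomControllerActivator());
```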

  17. We now have an instance of ApiController which represents the actual controller class the request is routed to. Said instance calls the SelectAction() method on IHttpActionSelector...

  18. ...which returns an instance of HttpActionDescriptor representing the action that needs to be called.

  19. Once the pipeline has determined which action to route the request to, it executes any Authentication Filters which are inserted into the pipeline (either globally or local to the invoked action).

    • These filters allow you to authenticate requests to either individual actions, entire controllers, or globally throughout the application. Any filters which exist are executed in the order they are added to the pipeline (global filters first, then controller-level filters, then action-level filters).
  20. The request then proceeds to the [Authorization Filters] layer, where any Authorization Filters which exist are applied to the request.

    • Authorization Filters can optionally create their own response and send that back, rather than allowing the request to proceed through the pipeline. These filters are applied in the same manner as Authentication Filters (globally, controller-level, action-level). Note that Authorization Filters can only be used on the Request, not the Response, as it is assumed that if a Response exists, the user had the authorization to generate it.
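As a sketch of that short-circuit behavior, here's an authorization filter that ends the pipeline early by setting its own response (the API-key check is entirely made up for illustration):

```csharp
public class RequireApiKeyAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        if (!actionContext.Request.Headers.Contains("X-Api-Key"))
        {
            // Setting a Response here short-circuits the pipeline;
            // the action is never invoked.
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.Unauthorized, "An API key is required.");
        }
    }
}
```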
  21. The request now enters the Model Binding process, which is shown in the next part of the main poster. Each parameter needed by the action can be bound to its value by one of three separate paths. Which path the binding system uses depends on where the value needed exists within the request.

  22. If data needed for an action parameter value exists in the entity body, Web API reads the body of the request; an instance of FormatterParameterBinding will invoke the appropriate formatter classes...

  23. ...which bind the values to a media type (using MediaTypeFormatter)...

  24. ...which results in a new complex type.

  25. If data needed for a parameter value exists in the URL or query string, said URL is passed into an instance of IModelBinder, which uses an IValueProvider to map values to a model (see Phil Haack's post about this topic for more info)....

  26. ...which results in a simple type.

  27. If a custom HttpParameterBinding exists, the system uses that custom binding to build the value...

  28. ...which results in any kind (simple or complex) of object being mappable (see Mike Stall's wonderful series on this topic).
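Which of these binding paths a given parameter takes can be seen (and overridden) right in the action signature. A sketch, where Order and OrderFilter are hypothetical types:

```csharp
public class OrdersController : ApiController
{
    // 'id' is a simple type, so it binds from the URL/query string (Steps 25-26);
    // 'order' is a complex type, so a formatter reads it from the body (Steps 22-24).
    [HttpPut]
    public IHttpActionResult Update(int id, Order order)
    {
        return Ok();
    }

    // The defaults can be forced the other way with attributes:
    [HttpPost]
    public IHttpActionResult Search([FromUri] OrderFilter filter, [FromBody] string notes)
    {
        return Ok();
    }
}
```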

  29. Now that the request is bound to a model, it is passed through any Action Filters which may exist in the pipeline (either globally or just for the action being invoked).

  30. Once the action filters are passed, the action itself is invoked, and the system waits for a response from it.

  31. If the action produces an exception AND an exception filter exists, the exception filter receives and processes the exception.

  32. If no exception occurred, the action produces an instance of HttpResponseMessage by running the Result Conversion subroutine, shown in the next screenshot.

  33. If the return type is already an HttpResponseMessage, we don't need to do any conversion, so pass the return on through.

  34. If the return type is void, .NET will return an HttpResponseMessage with the status 204 No Content.

  35. If the return type is an IHttpActionResult, call the method ExecuteAsync to create an HttpResponseMessage.

    • In any Web API method in which you use return Ok(); or return BadRequest(); or something similar, that return statement follows this process, rather than any of the other processes, since the return type of those actions is IHttpActionResult.
  36. For all other types, .NET will create an HttpResponseMessage and place the serialized value of the return in the body of that message.
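In controller terms, the four conversion paths (Steps 33-36) look like this:

```csharp
public class SamplesController : ApiController
{
    // Step 33: already an HttpResponseMessage, so it passes through untouched.
    public HttpResponseMessage GetRaw()
    {
        return Request.CreateResponse(HttpStatusCode.OK, "raw");
    }

    // Step 34: void becomes 204 No Content.
    public void Delete(int id) { }

    // Step 35: IHttpActionResult; ExecuteAsync builds the HttpResponseMessage.
    public IHttpActionResult GetOne(int id)
    {
        return Ok(new { id });
    }

    // Step 36: any other return type is serialized into the response body.
    public string GetName(int id)
    {
        return "sample";
    }
}
```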

  37. Once the HttpResponseMessage has been created, return it to the main pipeline.

  38. Pass the newly-created HttpResponseMessage through any Authentication Filters which may exist.

    • Remember that Authorization Filters cannot be used on Responses.

  39. The HttpResponseMessage flows through the HttpControllerDispatcher, which at this point probably won't do anything with it.

  40. The Response also flows through the HttpRoutingDispatcher, which again won't do anything with it.

  41. The Response now proceeds through any DelegatingHandlers that are set up to handle it. At this point, the DelegatingHandler objects can really only change the response being sent (e.g. intercept certain responses and change to the appropriate HTTP status).

  42. The final HttpResponseMessage is given to the HttpServer instance...

  43. ...which returns an Http response to the invoking client.

Tada! We've successfully walked through the entire Web API 2 request/response pipeline, and in only 43 easy steps!

Let me know if this kind of deep dive has been helpful to you, and feel free to share in the comments! Microsoft people and other experts, please chime in to let me know if I got something wrong; I intend for this post to be the definitive guide to the Web API 2 Request/Response Lifecycle, and you can't be definitive without being correct.

Happy Coding!

Real-World CQRS/ES with ASP.NET and Redis Part 5 - Running the APIs

NOTE: This is the final part of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

All our work in the previous parts of this series (learning what Command-Query Responsibility Segregation and Event Sourcing are, building the Write Model to modify our aggregate roots, building the Read Model to query data, and building both our Write and Read APIs) has led to this. We can now test these two APIs using [Postman] and see how they operate.

In this post, the final part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Run the Commands API with both valid and invalid commands.
  • Run the Queries API with existent and non-existent data.
  • Discuss some shortcomings of this design.

You're on the last lap, so don't stop now!

Command - Creating Locations

The first thing we should do is run a few commands to load our Write and Read models with data. To do that, we're going to use my favorite tool Postman to create some requests.

First, let's run a command to create a new location. Here's a screenshot of the Postman request:
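(The Postman screenshot doesn't reproduce here; the request is just a POST to the Commands API's `locations/create` route from Part 4, with a JSON body along these lines. The values are made up:)

```
POST /locations/create HTTP/1.1
Content-Type: application/json

{
  "locationID": 1,
  "streetAddress": "123 Main St",
  "city": "Dallas",
  "state": "TX",
  "postalCode": "75201"
}
```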

Running this request returns 200 OK, which is what we expect. But what happens if we try to run the exact same request again?

Hey, lookie there! Our validation layer is working!

Let's create another location:

Well, seems our create location process is working fine. Or, at least, it looks like it is.

Query - Locations (All and By ID)

To be sure that our system is properly updating the read model, let's submit a query to our Queries API that returns all locations:

Which looks good. Let's also query for a single location by its ID. First, let's get Location #2:

Now we can query for Location #3:

Oh, wait, that's right, there is no Location #3. So we get back HTTP 400 Bad Request, which is also what we expect. (You could also make this return HTTP 404 Not Found, which is more semantically correct).
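If you preferred the 404 behavior, the query action from Part 4 would only need a one-line change, swapping BadRequest for ApiController's built-in NotFound helper:

```csharp
var location = _locationRepo.GetByID(id);
if (location == null)
{
    return NotFound(); // HTTP 404 instead of 400
}
return Ok(location);
```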

OK, great, adding and querying Locations works. But what about Employees?

Command - Creating Employees

Let's first create a new employee and assign him to Location #1:

Let's also create a couple more employees:

So now we should have two employees at Location #1 and a third employee at Location #2. Let's query for employees by location to confirm this.

Query - Employees by Location

Here's our Postman screenshot for the Employees by Location query for each location.

Just as we thought, there are two employees at Location #1 and a third at Location #2.

We're doing pretty darn good so far! But what happens if Reggie Martinez (Employee #3) needs to transfer to Location #2? We can do that with the proper commands.

Command - Assign Employee to Location

Here's a command to move Mr. Martinez to Location #2:

And now, if we query for all employees at Location #2:

I'd say we've done pretty darn good! All our commands do what we expect them to do, and all our queries return the appropriate data. We've now got ourselves a working CQRS/ES project with ASP.NET and Redis!

Shortcomings of This Design

Even though we've done a lot of work on this project and I think we've mostly gotten it right, there's still a few places that I think could be improved:

  • All Redis access through repositories. I don't like having the Event Handlers access the Redis database directly, I'd rather have them do that through the repositories. This would be easy to do, I just didn't have time before my publish date.
  • Better splitting of requests/commands and commands/events. I don't like how commands always seem to result in exactly one event.

That said, I'm really proud of the way this project turned out. If you see any additional areas for improvement, please let me know in the comments!

Summary

In this final part of our Real-World CQRS/ES with ASP.NET and Redis series:

  • Ran several queries and commands.
  • Confirmed that those queries and commands worked as expected.
  • Discussed a couple of shortcomings of this design.

As always, I welcome (civil) comments and discussion on my blog, and feel free to fork or patch or do whatever to the GitHub repository that goes along with this series. Thanks for reading!

Happy Coding!


Real-World CQRS/ES with ASP.NET and Redis Part 4 - Creating the APIs

NOTE: This is Part 4 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

We've done quite a lot of work to get to this point! We've discussed why we might want to use Command-Query Responsibility Segregation (CQRS) and Event Sourcing (ES) in our app, we've built a Write Model to handle the processing of our commands, and we've built a Read Model to query our data.

Now we can show why this is a "real-world" app. Here's what we're going to do in Part 4 of Real World CQRS/ES with ASP.NET:

  • Build a Queries API so we can query the system for data.
  • Build a Commands API so that we can issue commands to the system.
  • Implement a validation layer using FluentValidation to ensure that commands being issued are valid to execute.
  • Implement dependency injection using StructureMap in both the commands and queries APIs.

Don't stop now! Let's get started building our APIs!

The Queries API

We're going to switch it up a bit and build the Queries API first, as that turns out to be easier than building the Commands API right off the bat. After all, the Queries API doesn't have to worry about things like validation. So, let's create a new ASP.NET Web API app.

Dependency Injection with StructureMap

After creating the new ASP.NET Web API project, the first thing we need to do is download the StructureMap.WebApi2 NuGet package and install it. Doing so gives us a folder structure that looks something like this (notice the new DependencyResolution folder):

I've blogged about how to use StructureMap with Web API in a previous post, so if you're not familiar with the StructureMap.WebApi2 package, you might want to read that post first, then come back here. It's OK, I'll wait.

Once we've downloaded and installed the StructureMap.WebApi2 package, we'll need to change just a couple of things. In our Global.asax file, we need to start the StructureMap container:

public class WebApiApplication : System.Web.HttpApplication  
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        GlobalConfiguration.Configure(WebApiConfig.Register);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);

        StructuremapWebApi.Start(); //Start! Your! Containers! VROOOOOOOOM
    }
}

We also need to register the appropriate items with the container so that they can be injected. Among those items are the Repositories we created in the previous part of this series; we must register them so our API controllers have them injected.

In Part 3, we also established that we are using Redis as our Read Data Store, and that we are utilizing StackExchange.Redis to interface with said Redis instance. StackExchange.Redis conveniently comes prepared for dependency injection, so we need only register the IConnectionMultiplexer interface with our container.

In all, our DefaultRegistry class for the Queries API looks like this:

public class DefaultRegistry : Registry {  
    public DefaultRegistry() 
    {
        //Repositories
        For<IEmployeeRepository>().Use<EmployeeRepository>();
        For<ILocationRepository>().Use<LocationRepository>();

        //StackExchange.Redis
        ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
        For<IConnectionMultiplexer>().Use(multiplexer);
    }
}

See, that wasn't too bad! Just wait until you see the Commands API's registry.

Building the Queries

Anyway, with StructureMap now in place, we can start building the queries we need to support. Here's the queries list we talked about in Part 3:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's start with the easy one: getting an Employee by their ID.

Get Employee by ID

We need an EmployeeController, with a private IEmployeeRepository, to execute this query. The complete controller is as follows:

[RoutePrefix("employees")]
public class EmployeeController : ApiController  
{
    private readonly IEmployeeRepository _employeeRepo;

    public EmployeeController(IEmployeeRepository employeeRepo)
    {
        _employeeRepo = employeeRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public IHttpActionResult GetByID(int id)
    {
        var employee = _employeeRepo.GetByID(id);

        //It is possible for GetByID() to return null.
        //If it does, we return HTTP 400 Bad Request
        if(employee == null)
        {
            return BadRequest("No Employee with ID " + id.ToString() + " was found.");
        }

        //Otherwise, we return the employee
        return Ok(employee);
    }
}

Well, that looks pretty simple. How about the GetAll() query?

Get All Employees

[RoutePrefix("employees")]
public class EmployeeController : ApiController  
{
    ...

    [HttpGet]
    [Route("all")]
    public IHttpActionResult GetAll()
    {
        var employees = _employeeRepo.GetAll();
        return Ok(employees);
    }
}

I think I'm sensing a theme here.

The Location Queries Controller

Let's see what the Location queries are:

[RoutePrefix("location")]
public class LocationController : ApiController  
{
    private ILocationRepository _locationRepo;

    public LocationController(ILocationRepository locationRepo)
    {
        _locationRepo = locationRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public IHttpActionResult GetByID(int id)
    {
        var location = _locationRepo.GetByID(id);
        if(location == null)
        {
            return BadRequest("No location with ID " + id.ToString() + " was found.");
        }
        return Ok(location);
    }

    [HttpGet]
    [Route("all")]
    public IHttpActionResult GetAll()
    {
        var locations = _locationRepo.GetAll();
        return Ok(locations);
    }

    [HttpGet]
    [Route("{id}/employees")]
    public IHttpActionResult GetEmployees(int id)
    {
        var employees = _locationRepo.GetEmployees(id);
        return Ok(employees);
    }
}

Yep, definitely a theme going on. All this setup has made implementing our controllers very simple, and simplicity is definitely better when dealing with complex patterns like CQRS and ES.

We'll run queries against this API in Part 5, but for now let's turn our attention to the Commands API, which may prove to be a bit more difficult to write.

The Commands API

As I mentioned early on in this post, the Commands API is considerably more complex than the Queries API; this is largely due to the number of things we need to inject into our container, as well as the Commands API being responsible for validating the requests that come in to the system. We're going to tackle each of these problems.

Dependency Injection

First, let's deal with Dependency Injection. We'll use the same package as before, with the same Global.asax change. However, our DefaultRegistry looks much different.

In the Commands API, we need the following services available for injection:

  • CQRSLite's Commands and Events bus
  • Our Commands and Events (from Part 2)
  • Our Event Store (from Part 2)
  • AutoMapper
  • Our own Repositories (from Part 3)
  • StackExchange.Redis

That results in this monstrosity of a registry:

public class DefaultRegistry : Registry {  
    #region Constructors and Destructors

    public DefaultRegistry() {
        //Commands, Events, Handlers
        Scan(
            scan => {
                scan.TheCallingAssembly();
                scan.AssemblyContainingType<BaseEvent>();
                scan.Convention<FirstInterfaceConvention>();
            });

        //CQRSLite
        For<InProcessBus>().Singleton().Use<InProcessBus>();
        For<ICommandSender>().Use(y => y.GetInstance<InProcessBus>());
        For<IEventPublisher>().Use(y => y.GetInstance<InProcessBus>());
        For<IHandlerRegistrar>().Use(y => y.GetInstance<InProcessBus>());
        For<ISession>().HybridHttpOrThreadLocalScoped().Use<Session>();
        For<IEventStore>().Singleton().Use<InMemoryEventStore>();
        For<IRepository>().HybridHttpOrThreadLocalScoped().Use(y =>
                new CacheRepository(new Repository(y.GetInstance<IEventStore>()), y.GetInstance<IEventStore>()));

        //AutoMapper
        var profiles = from t in typeof(DefaultRegistry).Assembly.GetTypes()
                        where typeof(Profile).IsAssignableFrom(t)
                        select (Profile)Activator.CreateInstance(t);

        var config = new MapperConfiguration(cfg =>
        {
            foreach (var profile in profiles)
            {
                cfg.AddProfile(profile);
            }
        });

        var mapper = config.CreateMapper();

        For<IMapper>().Use(mapper);

        //StackExchange.Redis
        ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
        For<IConnectionMultiplexer>().Use(multiplexer);
    }

    #endregion
}

Holy crap that's a lot of things that need to be injected. But, as we will see, each of these things is actually necessary and provides a lot of value to our application.

(Hold on a second while I smack myself. I sounded way too much like a marketer just now.)

SMACK

Okay, I'm better now.

Requests

I've been using the term "request" liberally throughout this series, and now it's time to truly define what a request is.

In this system, a request is a potential command. That's all. Consuming applications which would like commands issued must submit a request first; that request will be validated and, if found to be valid, mapped to the corresponding command.

A request is, therefore, a C# class which contains the data needed to issue a particular command.

Request 1 - Create Employee

Let's begin to define our requests by first creating a request for creating a new employee.

public class CreateEmployeeRequest  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
}

WTF Matthew, you say, that looks almost EXACTLY like the CreateEmployeeCommand! Why can't we just use that?! And after I'm done looking around for my parents (nobody calls me Matthew), I can tell you that there are two reasons why we don't reuse the command objects as requests.

First, requests must be validated against the Read Model, whereas commands are assumed to be valid. Second, a single request may kick off more than one command, as is the case with this request.

But how do we accomplish that validation, you say? By using one of my favorite NuGet packages of all time: FluentValidation.

The Validation Layer

FluentValidation is a NuGet package which allows us to validate objects and places any validation errors found into the Controller's ModelState. But (unlike StackExchange.Redis) it doesn't come ready for use in a Dependency Injection environment, so we must do some setup.

First, we need a factory which will create the validator objects:

public class StructureMapValidatorFactory : ValidatorFactoryBase  
{
    private readonly HttpConfiguration _configuration;

    public StructureMapValidatorFactory(HttpConfiguration configuration)
    {
        _configuration = configuration;
    }

    public override IValidator CreateInstance(Type validatorType)
    {
        return _configuration.DependencyResolver.GetService(validatorType) as IValidator;
    }
}

Next, in our WebApiConfig.cs file, we need to enable FluentValidation's validator provider using our factory:

public static void Register(HttpConfiguration config)  
{
    ...

    FluentValidationModelValidatorProvider.Configure(config, x => x.ValidatorFactory = new StructureMapValidatorFactory(config));

    ...
}

Finally, we need to register the validator provider in our StructureMap container, which is done in the DefaultRegistry class.

public DefaultRegistry()  
{
    ...
    //FluentValidation 
    FluentValidation.AssemblyScanner.FindValidatorsInAssemblyContaining<CreateEmployeeRequestValidator>()
            .ForEach(result =>
            {
                For(result.InterfaceType)
                    .Use(result.ValidatorType);
            });
    ...
}

With all of that in place, we're ready to begin building our validators!

Create Employee - Validation

Here's the validation rules we need to implement when creating an employee:

  • The Employee ID must not already exist.
  • The First Name cannot be blank.
  • The Last Name cannot be blank.
  • The Job Title cannot be blank.
  • Employees must be 16 years of age or older.

Here's how we would implement such a validator using FluentValidation:

public class CreateEmployeeRequest  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
}

public class CreateEmployeeRequestValidator : AbstractValidator<CreateEmployeeRequest>  
{
    public CreateEmployeeRequestValidator(IEmployeeRepository employeeRepo, ILocationRepository locationRepo)
    {
        RuleFor(x => x.EmployeeID).Must(x => !employeeRepo.Exists(x)).WithMessage("An Employee with this ID already exists.");
        RuleFor(x => x.LocationID).Must(x => locationRepo.Exists(x)).WithMessage("No Location with this ID exists.");
        RuleFor(x => x.FirstName).NotNull().NotEmpty().WithMessage("The First Name cannot be blank.");
        RuleFor(x => x.LastName).NotNull().NotEmpty().WithMessage("The Last Name cannot be blank.");
        RuleFor(x => x.JobTitle).NotNull().NotEmpty().WithMessage("The Job Title cannot be blank.");
        RuleFor(x => x.DateOfBirth).LessThan(DateTime.Today.AddYears(-16)).WithMessage("Employees must be 16 years old or older.");
    }
}

Notice that IEmployeeRepository and ILocationRepository are constructor parameters to the validator class. We don't need to do anything else to get those objects injected, as that was taken care of by registering the Repositories and the FluentValidation factory.

There's just one last thing we need to do to have our validation layer fully integrated: whenever validation fails, we need to automatically return HTTP 400 Bad Request. We accomplish this by using an ActionFilter...

public class BadRequestActionFilter : ActionFilterAttribute  
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest, new ValidationErrorWrapper(actionContext.ModelState));
        }
        base.OnActionExecuting(actionContext);
    }
}
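The ValidationErrorWrapper used above isn't shown in this post. A minimal version that flattens the ModelState errors into a serializable shape might look like this (the class shape and property name are my own assumptions):

```csharp
public class ValidationErrorWrapper
{
    public List<string> Errors { get; } = new List<string>();

    public ValidationErrorWrapper(ModelStateDictionary modelState)
    {
        // Collect every error message FluentValidation placed into ModelState
        // so the 400 response body lists what failed and why.
        foreach (var state in modelState.Values)
        {
            foreach (var error in state.Errors)
            {
                Errors.Add(error.ErrorMessage);
            }
        }
    }
}
```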

...and registering that action filter in WebApiConfig.

public static class WebApiConfig  
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services
        config.Filters.Add(new BadRequestActionFilter());
        ...
    }
}

Controller Action

The controller action for creating an employee does two things: it issues the CreateEmployeeCommand, and then issues an AssignEmployeeToLocationCommand. Since this is the only action in the EmployeeController class, the entire class looks like this:

[RoutePrefix("employee")]
public class EmployeeController : ApiController  
{
    private IMapper _mapper;
    private ICommandSender _commandSender;

    public EmployeeController(ICommandSender commandSender, IMapper mapper)
    {
        _commandSender = commandSender;
        _mapper = mapper;
    }

    [HttpPost]
    [Route("create")]
    public IHttpActionResult Create(CreateEmployeeRequest request)
    {
        var command = _mapper.Map<CreateEmployeeCommand>(request);
        _commandSender.Send(command);

        var assignCommand = new AssignEmployeeToLocationCommand(request.LocationID, request.EmployeeID);
        _commandSender.Send(assignCommand);
        return Ok();
    }
}

Since we've now got the EmployeeController written, we can move on to the next request: creating a new location.

Request 2 - Create Location

Now let's build a request and a validator to create a location. Our validation rules for creating a new location look like this:

  1. The location ID must not already exist.
  2. The street address cannot be blank.
  3. The city cannot be blank.
  4. The state cannot be blank.
  5. The postal code cannot be blank.

Implementing those rules results in the following classes:

public class CreateLocationRequest  
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
}

public class CreateLocationRequestValidator : AbstractValidator<CreateLocationRequest>  
{
    public CreateLocationRequestValidator(ILocationRepository locationRepo)
    {
        RuleFor(x => x.LocationID).Must(x => !locationRepo.Exists(x)).WithMessage("A Location with this ID already exists.");
        RuleFor(x => x.StreetAddress).NotNull().NotEmpty().WithMessage("The Street Address cannot be null");
        RuleFor(x => x.City).NotNull().NotEmpty().WithMessage("The City cannot be null");
        RuleFor(x => x.State).NotNull().NotEmpty().WithMessage("The State cannot be null");
        RuleFor(x => x.PostalCode).NotNull().NotEmpty().WithMessage("The Postal Code cannot be null");
    }
}

The corresponding controller (LocationController) looks pretty similar to EmployeeController.

[RoutePrefix("locations")]
public class LocationController : ApiController  
{
    private IMapper _mapper;
    private ICommandSender _commandSender;
    private ILocationRepository _locationRepo;
    private IEmployeeRepository _employeeRepo;

    public LocationController(ICommandSender commandSender, IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _commandSender = commandSender;
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    [HttpPost]
    [Route("create")]
    public IHttpActionResult Create(CreateLocationRequest request)
    {
        var command = _mapper.Map<CreateLocationCommand>(request);
        _commandSender.Send(command);
        return Ok();
    }
}

Looking at this controller, you might be wondering why IEmployeeRepository and ILocationRepository are passed into the constructor when they aren't used by the Create() action. That's because we still have one request left to build: assigning an employee to a location.

Request 3 - Assign Employee to Location

Remember that one of our business rules (from Part 2) says the following:

  1. Employees may switch locations, but they may not be assigned to more than one location at a time.

The request we are going to build now will assign an employee to a new location, as well as remove that employee from the location s/he is currently assigned to.

But wait, you declare, we already defined a command to remove an employee from a location! Is that not also a request? Nope, it's not, and for the same reason that creating an employee results in two commands: a single request can map to multiple commands. In this case, assigning an employee to a location will result in either one or two commands, depending on whether the employee is currently assigned to a location.

First, let's build the request and its validator. In this case, we have three validation rules:

  1. The Location must exist.
  2. The Employee must exist.
  3. The Employee must not already be assigned to the given Location.

Implementing those rules results in the following classes:

public class AssignEmployeeToLocationRequest  
{
    public int LocationID { get; set; }
    public int EmployeeID { get; set; }
}

public class AssignEmployeeToLocationRequestValidator : AbstractValidator<AssignEmployeeToLocationRequest>  
{
    public AssignEmployeeToLocationRequestValidator(IEmployeeRepository employeeRepo, ILocationRepository locationRepo)
    {
        RuleFor(x => x.LocationID).Must(x => locationRepo.Exists(x)).WithMessage("No Location with this ID exists.");
        RuleFor(x => x.EmployeeID).Must(x => employeeRepo.Exists(x)).WithMessage("No Employee with this ID exists.");
        RuleFor(x => new { x.LocationID, x.EmployeeID }).Must(x => !locationRepo.HasEmployee(x.LocationID, x.EmployeeID)).WithMessage("This Employee is already assigned to that Location.");
    }
}
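These validators don't run by themselves; the DI wiring comes later, but the validation hook itself comes from the FluentValidation.WebApi integration package. Here's a sketch of that wiring, assuming that package is installed (the ValidateModelStateFilter class name is my own invention, not something from this series):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;
using FluentValidation.WebApi;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Run FluentValidation validators during model binding,
        // so ModelState is populated before any action executes.
        FluentValidationModelValidatorProvider.Configure(config);

        // Reject invalid requests globally instead of checking ModelState in every action.
        config.Filters.Add(new ValidateModelStateFilter());
    }
}

public class ValidateModelStateFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest, actionContext.ModelState);
        }
    }
}
```

With a filter like this in place, actions such as AssignEmployee() never execute against a request that failed validation.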

Now, all we have to do is write the controller action:

[RoutePrefix("locations")]
public class LocationController : ApiController  
{
    ...
    [HttpPost]
    [Route("assignemployee")]
    public IHttpActionResult AssignEmployee(AssignEmployeeToLocationRequest request)
    {
        var employee = _employeeRepo.GetByID(request.EmployeeID);
        if (employee.LocationID != 0)
        {
            var oldLocationAggregateID = _locationRepo.GetByID(employee.LocationID).AggregateID;

            RemoveEmployeeFromLocationCommand removeCommand = new RemoveEmployeeFromLocationCommand(oldLocationAggregateID, employee.LocationID, employee.EmployeeID);
            _commandSender.Send(removeCommand);
        }

        var locationAggregateID = _locationRepo.GetByID(request.LocationID).AggregateID;
        var assignCommand = new AssignEmployeeToLocationCommand(locationAggregateID, request.LocationID, request.EmployeeID);
        _commandSender.Send(assignCommand);

        return Ok();
    }
}

Whew! With that final controller action in place, we have completed building our APIs! Give yourselves a pat on the back for coming this far!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built a Queries API with a DI container and implemented our business queries.
  • Built a Commands API with a DI container and implemented our requests.
  • Used FluentValidation to implement the Commands API's validation layer.

Congratulations! We've completed the build of our real-world CQRS/ES system! All that's left to do is run a few commands and queries to show how the system works, and we will do that in the final part of this series. Keep your eyes (and feed readers) open for Part 5 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!

Real-World CQRS/ES with ASP.NET and Redis Part 3 - The Read Model

NOTE: This is Part 3 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

In Part 1, we talked about why we might want to use Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) in our apps, and in Part 2 we defined how the Write Model (Commands, Command Handlers, Events, Aggregate Roots) of our simple system behaves. In this part, we will define the system's Read Model; that is, how other apps will query for the data we use.

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Discover what comprises the Read Model for CQRS applications.
  • Gather our requirements for the queries we need to support
  • Choose a data store (and explain why we chose the one that we did)
  • Build the Repositories which will allow our app to query the Read Model data AND
  • Build the Event Handlers which will maintain the Read Model data store.

Let's get started!

What Is The Read Model?

Quite simply, the read model is the model of the data that consuming applications can query against. There are a few guidelines to keep in mind when designing a good read model:

  1. The Read Model should reflect the kinds of queries run against it.
  2. The Read Model should contain the current state of the data (this is important as we are using Event Sourcing).

In our system, the Read Model consists of the Read Model Objects, the Read Data Store, the Event Handlers, and the Repositories. This post will walk through designing all of these objects.

Query Requirements

First, a reminder: the entire point of CQRS is that the read model and the write model are totally separate things. You can model each in a completely different way, and in fact this is what we are doing in this tutorial: for the write model, we are storing events (using the Event Sourcing pattern), but our read model must conform to the guidelines laid out above.

When designing a Read Model for a CQRS system, you generally want said model to reflect the kinds of queries that will be run against that system. So, if you need a way to get all locations, locations by ID, and employees by ID, your Read Model should be able to do each of these easily, without a lot of round-tripping between the data store and the application.

But in order to design our Read Model, we must first know what queries exist. Here are the possible queries for our sample system:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's see how we can design our Read Model to reflect these queries.

Design of Read Model Objects

One of the benefits of using CQRS is that we can use fully-separate classes to define what the Read Model contains. Let's use two new classes (EmployeeRM and LocationRM, RM being short for Read Model) to represent how our Locations and Employees will be stored in our Read Model database.

public class EmployeeRM  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
    public Guid AggregateID { get; set; }
}

public class LocationRM  
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
    public List<int> Employees { get; set; }
    public Guid AggregateID { get; set; }

    public LocationRM()
    {
        Employees = new List<int>();
    }
}

For comparison, here are the properties from the Write Model versions of these objects (Employee and Location):

public class Employee : AggregateRoot  
{
    private int _employeeID;
    private string _firstName;
    private string _lastName;
    private DateTime _dateOfBirth;
    private string _jobTitle;

    ...
}

public class Location : AggregateRoot  
{
    private int _locationID;
    private string _streetAddress;
    private string _city;
    private string _state;
    private string _postalCode;
    private List<int> _employees;

    ...
}

As you can see, LocationRM and EmployeeRM both store the AggregateID that was assigned to them when they were created, and EmployeeRM additionally has a LocationID property, which does not exist on the Employee Write Model class.

Now we must tackle a different problem: what data store will we use?

Choosing a Data Store

In any CQRS system, the selection of a datastore comes down to a couple of questions:

  1. How fast do you need reads to be?
  2. How much functionality does the Read Model datastore need to be able to do on its own?

In my system, I am assuming there will be an order of magnitude more reads than writes (a very common scenario for CQRS applications). Further, I am assuming that my Read Model datastore can be treated as little more than a cache that gets updated occasionally. These two assumptions lead me to answer those questions like this:

  1. How fast do you need reads to be? Extremely
  2. How much functionality does the Read Model datastore need to be able to do on its own? Not a lot

I'm a SQL Server guy by trade, but SQL Server is not exactly known for being "fast". You absolutely can optimize it to be such, but at this time I'm more interested in trying a datastore that I've heard a lot about but haven't actually had a chance to use yet: Redis.

Redis calls itself a "data structure store". In practice, that means it stores values (strings, hashes, lists, sets, and so on) against string keys, rather than rows in relations (as you would have in a relational database such as SQL Server). Every piece of data lives under a key, and Redis gives you a great deal of freedom in how you structure those keys.

For this demo, you don't really need to know much more about how Redis works, but I encourage you to check it out on your own. Further, if you intend to run the sample app (and, like most .NET devs, you're running Windows), you'll want to download MSOpenTech's Redis port, which runs the Redis server natively on Windows.
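To make the "data structure store" idea concrete, here's about the smallest possible round-trip using the StackExchange.Redis client (the same library our repositories will use via IConnectionMultiplexer). The key name is just an example of the namespacing convention the repositories adopt below:

```csharp
using System;
using StackExchange.Redis;

class RedisDemo
{
    static void Main()
    {
        // Assumes a Redis server is listening on localhost:6379.
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        IDatabase db = redis.GetDatabase();

        // Keys are plain strings; we namespace them by object type, e.g. "location:1".
        db.StringSet("location:1", "{\"LocationID\":1,\"City\":\"Springfield\"}");

        string json = db.StringGet("location:1");
        Console.WriteLine(json);
    }
}
```

That set/get pair is essentially all our repositories do, plus JSON serialization of the Read Model objects.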

We now have two pieces of our Read Model in place: the Read Model Objects, and the Read Data Store. We can now begin implementation of a layer which will allow us to interface with the Read Data Store and update it as necessary: the Repository layer.

Creating the Repositories

The Repositories (for this project) are interfaces which allow us to query the Read Model. Remember that we have five possible queries that we need to support:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

However, we also need to support certain validation scenarios; for example, we cannot assign an Employee to a location that doesn't exist. Therefore we also need certain functions to check if employees or locations exist.

For the sake of good design, we need at least two Repositories: one for Locations and one for Employees. But a surprising amount of functionality is needed by both of these repositories:

  • They both need to get an object by its ID.
  • They both need to check if an object with a given ID exists.
  • They both need to save a changed object back into the Read Data Store.
  • They both need to be able to get multiple objects of the same type.

Consequently, we can build a common IBaseRepository interface and BaseRepository class to capture these common features. The IBaseRepository interface will be inherited by the other repository interfaces; it looks like this:

public interface IBaseRepository<T>  
{
    T GetByID(int id);
    List<T> GetMultiple(List<int> ids);
    bool Exists(int id);
    void Save(T item);
}

Now, we also need two more interfaces which inherit from IBaseRepository<T>: IEmployeeRepository and ILocationRepository:

public interface IEmployeeRepository : IBaseRepository<EmployeeRM>  
{
    IEnumerable<EmployeeRM> GetAll();
}

public interface ILocationRepository : IBaseRepository<LocationRM>  
{
    IEnumerable<LocationRM> GetAll();
    IEnumerable<EmployeeRM> GetEmployees(int locationID);
    bool HasEmployee(int locationID, int employeeID);
}

The next piece of the puzzle is the BaseRepository class (which, unfortunately, does NOT implement IBaseRepository<T>). This class provides methods by which items can be retrieved from or saved to the Redis Read Data Store:

public class BaseRepository  
{
    private readonly IConnectionMultiplexer _redisConnection;

    /// <summary>
    /// The Namespace is the first part of any key created by this Repository, e.g. "location" or "employee"
    /// </summary>
    private readonly string _namespace;

    public BaseRepository(IConnectionMultiplexer redis, string nameSpace)
    {
        _redisConnection = redis;
        _namespace = nameSpace;
    }

    public T Get<T>(int id)
    {
        return Get<T>(id.ToString());
    }

    public T Get<T>(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        if (serializedObject.IsNullOrEmpty) throw new KeyNotFoundException("No object was found for key " + key);
        return JsonConvert.DeserializeObject<T>(serializedObject.ToString());
    }

    public List<T> GetMultiple<T>(List<int> ids)
    {
        var database = _redisConnection.GetDatabase();
        List<RedisKey> keys = new List<RedisKey>();
        foreach (int id in ids)
        {
            keys.Add(MakeKey(id));
        }
        var serializedItems = database.StringGet(keys.ToArray(), CommandFlags.None);
        List<T> items = new List<T>();
        foreach (var item in serializedItems)
        {
            items.Add(JsonConvert.DeserializeObject<T>(item.ToString()));
        }
        return items;
    }

    public bool Exists(int id)
    {
        return Exists(id.ToString());
    }

    public bool Exists(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        return !serializedObject.IsNullOrEmpty;
    }

    public void Save(int id, object entity)
    {
        Save(id.ToString(), entity);
    }

    public void Save(string keySuffix, object entity)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        database.StringSet(key, JsonConvert.SerializeObject(entity));
    }

    private string MakeKey(int id)
    {
        return MakeKey(id.ToString());
    }

    private string MakeKey(string keySuffix)
    {
        if (!keySuffix.StartsWith(_namespace + ":"))
        {
            return _namespace + ":" + keySuffix;
        }
        else return keySuffix; //Key is already prefixed with namespace
    }
}

With all of that infrastructure in place, we can start implementing the EmployeeRepository and LocationRepository.

Employee Repository

In the EmployeeRepository, let's get a single Employee record with the given Employee ID.

public class EmployeeRepository : BaseRepository, IEmployeeRepository  
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }
}

Hey, that was easy! Because of the work we did in the BaseRepository, our Read Model Object repositories will be quite simple. Here's the rest of EmployeeRepository:

public class EmployeeRepository : BaseRepository, IEmployeeRepository  
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }

    public List<EmployeeRM> GetMultiple(List<int> employeeIDs)
    {
        return GetMultiple<EmployeeRM>(employeeIDs);
    }

    public IEnumerable<EmployeeRM> GetAll()
    {
        return Get<List<EmployeeRM>>("all");
    }

    public void Save(EmployeeRM employee)
    {
        Save(employee.EmployeeID, employee);
        MergeIntoAllCollection(employee);
    }

    private void MergeIntoAllCollection(EmployeeRM employee)
    {
        List<EmployeeRM> allEmployees = new List<EmployeeRM>();
        if (Exists("all"))
        {
            allEmployees = Get<List<EmployeeRM>>("all");
        }

        //If the employee already exists in the ALL collection, remove that entry
        if (allEmployees.Any(x => x.EmployeeID == employee.EmployeeID))
        {
            allEmployees.Remove(allEmployees.First(x => x.EmployeeID == employee.EmployeeID));
        }

        //Add the modified employee to the ALL collection
        allEmployees.Add(employee);

        Save("all", allEmployees);
    }
}

Take special note of the MergeIntoAllCollection() method, and let me take a minute to explain what I'm doing here.

Querying for Collections

As I mentioned earlier, Redis makes a distinction between keys and everything else, and because of this it doesn't really apply a "type" per se to anything stored against a key. Consequently, unlike in SQL Server, you don't really query for several objects (e.g. SELECT * FROM table WHERE condition) because that's not what Redis is for.

Remember that we're designing this to reflect the queries we need to run. We can think of this as changing when the work of making a collection is done.

In SQL Server or other relational databases, most of the time you do the work of creating a collection when you run a query. So, you might have a huge table of, say, vegetables, and then create a query to only give you carrots, or radishes, or whatever.

But in Redis, no such querying is possible. Therefore, instead of doing the work when we need the query, we prep the data in advance at the point where it changes. Consequently, the queries are ready for consumption immediately after the corresponding event handlers are done processing.

All we're doing is moving the time when we create the query results from "when the query runs" to "when the source data changes."

With the current setup of the repositories, any time a LocationRM or EmployeeRM object is saved, that object is merged back into the respective "all" collection for that object type. Hence, I needed MergeIntoAllCollection().

Location Repository

Now, let's see what the LocationRepository looks like:

public class LocationRepository : BaseRepository, ILocationRepository  
{
    public LocationRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "location") { }

    public LocationRM GetByID(int locationID)
    {
        return Get<LocationRM>(locationID);
    }

    public List<LocationRM> GetMultiple(List<int> locationIDs)
    {
        return GetMultiple<LocationRM>(locationIDs);
    }

    public bool HasEmployee(int locationID, int employeeID)
    {
        //Deserialize the LocationRM with the key location:{locationID}
        var location = Get<LocationRM>(locationID);

        //If that location has the specified Employee, return true
        return location.Employees.Contains(employeeID);
    }

    public IEnumerable<LocationRM> GetAll()
    {
        return Get<List<LocationRM>>("all");
    }

    public IEnumerable<EmployeeRM> GetEmployees(int locationID)
    {
        return Get<List<EmployeeRM>>(locationID.ToString() + ":employees");
    }

    public void Save(LocationRM location)
    {
        Save(location.LocationID, location);
        MergeIntoAllCollection(location);
    }

    private void MergeIntoAllCollection(LocationRM location)
    {
        List<LocationRM> allLocations = new List<LocationRM>();
        if (Exists("all"))
        {
            allLocations = Get<List<LocationRM>>("all");
        }

        //If the location already exists in the ALL collection, remove that entry
        if (allLocations.Any(x => x.LocationID == location.LocationID))
        {
            allLocations.Remove(allLocations.First(x => x.LocationID == location.LocationID));
        }

        //Add the modified location to the ALL collection
        allLocations.Add(location);

        Save("all", allLocations);
    }
}

Now our Repositories are complete, and we can finally write the last, best piece of our system's Read Model: the event handlers.

Building the Event Handlers

Whenever an event is issued by our system we can use an Event Handler to do something with that event. In our case, we need our Event Handlers to update our Redis data store.

First, let's create an Event Handler for the Create Employee event.

public class EmployeeEventHandler : IEventHandler<EmployeeCreatedEvent>  
{
    private readonly IMapper _mapper;
    private readonly IEmployeeRepository _employeeRepo;
    public EmployeeEventHandler(IMapper mapper, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _employeeRepo = employeeRepo;
    }

    public void Handle(EmployeeCreatedEvent message)
    {
        EmployeeRM employee = _mapper.Map<EmployeeRM>(message);
        _employeeRepo.Save(employee);
    }
}

Note that all interfacing with the Redis data store is done through the repository, and so the event handler consumes an instance of IEmployeeRepository in its constructor. Because we're using Dependency Injection (which we will set up in Part 4), this usage becomes possible and greatly simplifies our event handler.

In any case, notice that all this event handler is doing is creating the corresponding Read Model object from an event (specifically the EmployeeCreatedEvent).
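For that _mapper.Map<EmployeeRM>(message) call to work, a mapping configuration has to exist somewhere. Here's a hedged sketch of what an AutoMapper profile for the Read Model might look like, assuming the events' property names line up with the Read Model objects' properties (which is why no explicit member mappings appear):

```csharp
using AutoMapper;

// Sketch only: relies on AutoMapper's convention-based matching of property names.
public class ReadModelProfile : Profile
{
    public ReadModelProfile()
    {
        CreateMap<EmployeeCreatedEvent, EmployeeRM>();
        CreateMap<LocationCreatedEvent, LocationRM>();
    }
}
```

A profile like this would be registered when the MapperConfiguration is built, as part of the DI setup.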

Now let's build the event handler for a Location. In this case, we have three events to handle: creating a new Location, assigning an employee to a Location, and removing an employee from a Location (and in order to do all of those, it will need to take both ILocationRepository and IEmployeeRepository as constructor parameters):

public class LocationEventHandler : IEventHandler<LocationCreatedEvent>,  
                                    IEventHandler<EmployeeAssignedToLocationEvent>,
                                    IEventHandler<EmployeeRemovedFromLocationEvent>
{
    private readonly IMapper _mapper;
    private readonly ILocationRepository _locationRepo;
    private readonly IEmployeeRepository _employeeRepo;
    public LocationEventHandler(IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    public void Handle(LocationCreatedEvent message)
    {
        //Create a new LocationRM object from the LocationCreatedEvent
        LocationRM location = _mapper.Map<LocationRM>(message);

        _locationRepo.Save(location);
    }

    public void Handle(EmployeeAssignedToLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.NewLocationID);
        location.Employees.Add(message.EmployeeID);
        _locationRepo.Save(location);

        //Find the employee which was assigned to this Location
        var employee = _employeeRepo.GetByID(message.EmployeeID);
        employee.LocationID = message.NewLocationID;
        _employeeRepo.Save(employee);
    }

    public void Handle(EmployeeRemovedFromLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.OldLocationID);
        location.Employees.Remove(message.EmployeeID);
        _locationRepo.Save(location);
    }
}

With the Event Handlers in place, every time an Event is kicked off, it will be consumed by the Event Handlers and the Redis data model will be updated. Success!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built the Read Model Data Store using Redis,
  • Designed our Read Model to support our business's queries,
  • Built the Event Handlers which place data into said data store AND
  • Built a set of repositories to access the Redis data.

There's still a lot to do, though. We need to set up our Dependency Injection system, our validation layer, and our Requests. We'll do all of that in Part 4 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!