
Working with Environments and Launch Settings in ASP.NET Core

I'm continuing my series of deep dives into ASP.NET Core, on my never-ending quest to keep my coding career from becoming obsolete. In this part of the series, let's discuss a new feature in ASP.NET Core that will make debugging and testing much more intuitive and pain-free: Environments and Launch Settings.

ASP.NET Core includes the native ability to modify how an application behaves (and what it displays) based upon the environment it is currently executing in. This ends up being quite useful, since now we can do things as simple as not showing detailed exception pages to users in a production environment. But it requires just a tiny bit of setup, and some knowledge of how environments are controlled.

ASPNETCORE_ENVIRONMENT

At the heart of all the stuff ASP.NET Core can do with environments is the ASPNETCORE_ENVIRONMENT environment variable. This variable controls what type of environment the current build will run in. You can edit this variable manually by right-clicking your project file and selecting "Properties", then navigating to the "Debug" tab.
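You don't have to go through Visual Studio to set it, either; you can set the variable from a terminal before launching the app. For example (these only affect the current session):

REM Windows command prompt
set ASPNETCORE_ENVIRONMENT=Staging

# macOS/Linux (bash)
export ASPNETCORE_ENVIRONMENT=Staging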

There are three "default" values for this variable:

  • Development
  • Staging
  • Production

Each of those values corresponds to certain methods we can use in our Startup.cs file. In fact, the Startup.cs file includes several extension methods that make modifying our application based on its current environment quite simple.

Selecting an Environment

The Startup.cs file that every ASP.NET Core application needs does many things, not the least of which is setting up the environment in which the application runs. Here's a snippet of the default Startup.cs file that was generated when I created my sample project for this post:

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Notice the IHostingEnvironment variable. That variable represents the current environment in which the application runs, and ASP.NET Core includes four extension methods that allow us to check the current value of ASPNETCORE_ENVIRONMENT:

  • IsDevelopment()
  • IsStaging()
  • IsProduction()
  • IsEnvironment()

Notice what we're setting up in the Startup.cs file: if the current environment is Development, we use the Developer Exception Page (the replacement for the old Yellow Screen of Death) and enable Browser Link in Visual Studio, which helps us test our application in several browsers at once.

But we don't want those features enabled in a Production environment, so instead our configuration establishes an error handling route "/Home/Error".

This is the coolest thing about this feature: you can decide what services are enabled for your application in each environment without ever touching a configuration file!

However, what if you need more environments? For example, at my day job we have four environments: Development, Staging, Production, and Testing, and very often I'll need to change an application's behavior or display based on which environment it's currently running in. How do we do that in ASP.NET Core?
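On the code side, custom environments are no problem: the IsEnvironment() extension method we saw above accepts any environment name. Here's a minimal sketch, assuming a custom environment named "Testing":

if (env.IsEnvironment("Testing"))
{
    //Enable whatever services belong in our custom Testing environment,
    //e.g. the detailed developer exception page
    app.UseDeveloperExceptionPage();
}

That covers detecting the environment; actually launching the app in a given environment is the job of launch profiles.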

The launchSettings.json File

ASP.NET Core 1.0 includes a new file called launchSettings.json; in any given project, you'll find that file under the Properties submenu:

This file sets up the different launch environments that Visual Studio can launch automatically. Here's a snippet of the default launchSettings.json file that was generated with my demo project:

{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "CoreEnvironmentBehaviorDemo": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

At the moment we have two profiles: one which runs the app using IIS Express and one which runs the project directly from the command line (the "Project" command, i.e. Kestrel). The interesting thing about these two profiles is that they correspond directly to the Visual Studio play button:

In other words: the launchSettings.json file controls what environments you can run your app in using Visual Studio. In fact, if we add a new profile, Visual Studio automatically adds that profile to the button's dropdown, as seen in this GIF I made:

All this together means that we can run the app in multiple different environment settings from Visual Studio, simply by setting up the appropriate launch profiles.
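For example, a launch profile for my day job's custom Testing environment might look like this, added under the profiles node (the profile name and URL here are just placeholders):

"Testing": {
  "commandName": "Project",
  "launchBrowser": true,
  "launchUrl": "http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Testing"
  }
}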

The Environment Tag Helper

Quite a while ago I wrote a post about Tag Helpers and how awesome they are, and I want to call out a particular helper in this post: the <environment> tag helper.

With this helper, we can modify how our MVC views are constructed based upon which environment the application is currently running in. In fact, the default _Layout.cshtml file in Visual Studio 2015 comes with an example of this:

<environment names="Development">  
    <script src="~/lib/jquery/dist/jquery.js"></script>
    <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
    <script src="~/js/site.js" asp-append-version="true"></script>
</environment>  
<environment names="Staging,Production">  
    <script src="https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.2.0.min.js"
            asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
            asp-fallback-test="window.jQuery">
    </script>
    <script src="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.6/bootstrap.min.js"
            asp-fallback-src="~/lib/bootstrap/dist/js/bootstrap.min.js"
            asp-fallback-test="window.jQuery && window.jQuery.fn && window.jQuery.fn.modal">
    </script>
    <script src="~/js/site.min.js" asp-append-version="true"></script>
</environment>  

In this sample, when we run the app in the Development environment we use the local files for jQuery, Bootstrap, and our custom JavaScript; but if we're running in the Staging or Production environments, we use copies of those libraries hosted on the ASP.NET content delivery network (CDN), with local fallbacks via the asp-fallback-* attributes in case the CDN is unreachable. In this way, we improve performance, because a CDN can deliver these static files from one of many data centers around the globe, rather than the browser needing to round-trip all the way to our server to get them.

But, remember, in our sample we have the custom environment Testing. The environment tag helper also supports custom environments:

<div class="row">  
    <environment names="Development">
        <span>This is the Development environment.</span>
    </environment>
    <environment names="Testing">
        <span>This is the Testing environment.</span>
    </environment>
    <environment names="Production">
        <span>This is the Production environment.</span>
    </environment>
</div>  

Summary

Working with environments (both default and custom) gets a whole lot easier in ASP.NET Core, and with the new Launch Settings feature, getting the app started in different environment modes is a breeze! Using these features, we can:

  • Create and use custom environments.
  • Enable or disable application services based on the environment the app is running in.
  • Use the environment tag helper (<environment>) to modify our MVC views based on the current execution environment.

You can check out my sample project over on GitHub to see this code in action. Feel free to mention any cool tips and tricks you've found when dealing with environments and launch settings in the comments!

Happy Coding!

Real-World CQRS/ES with ASP.NET and Redis Part 5 - Running the APIs

NOTE: This is the final part of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

All our work in the previous parts of this series (learning what Command-Query Responsibility Segregation and Event Sourcing are, building the Write Model to modify our aggregate roots, building the Read Model to query data, and building both our Write and Read APIs) has led to this. We can now test these two APIs using Postman and see how they operate.

In this post, the final part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Run the Commands API with both valid and invalid commands.
  • Run the Queries API with both existing and non-existent data.
  • Discuss some shortcomings of this design.

You're on the last lap, so don't stop now!

Command - Creating Locations

The first thing we should do is run a few commands to load our Write and Read models with data. To do that, we're going to use my favorite tool Postman to create some requests.

First, let's run a command to create a new location. Here's a screenshot of the Postman request:
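In raw form, that request looks something like this (the host and port are placeholders, and the body values are illustrative; the fields follow the CreateLocationRequest class from Part 4):

POST http://localhost:51000/locations/create
Content-Type: application/json

{
  "LocationID": 1,
  "StreetAddress": "123 Main Street",
  "City": "Somewhere",
  "State": "CA",
  "PostalCode": "90001"
}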

Running this request returns 200 OK, which is what we expect. But what happens if we try to run the exact same request again?

Hey, lookie there! We get back HTTP 400 Bad Request with the message "A Location with this ID already exists." Our validation layer is working!

Let's create another location:

Well, seems our create location process is working fine. Or, at least, it looks like it is.

Query - Locations (All and By ID)

To be sure that our system is properly updating the read model, let's submit a query to our Queries API that returns all locations:
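That query is just a GET against the Queries API's location controller, using the route we built in Part 4 (the port is a placeholder):

GET http://localhost:52000/location/all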

Which looks good. Let's also query for a single location by its ID. First, let's get Location #2:

Now we can query for Location #3:

Oh, wait, that's right, there is no Location #3. So we get back HTTP 400 Bad Request, which is also what we expect. (You could also make this return HTTP 404 Not Found, which is more semantically correct).

OK, great, adding and querying Locations works. But what about Employees?

Command - Creating Employees

Let's first create a new employee and assign him to Location #1:
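Per the CreateEmployeeRequest class and the employee/create route from Part 4, that request looks something like this (values illustrative):

POST http://localhost:51000/employee/create
Content-Type: application/json

{
  "EmployeeID": 1,
  "FirstName": "John",
  "LastName": "Smith",
  "DateOfBirth": "1980-01-01",
  "JobTitle": "Store Manager",
  "LocationID": 1
}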

Let's also create a couple more employees:

So now we should have two employees at Location #1 and a third employee at Location #2. Let's query for employees by location to confirm this.

Query - Employees by Location

Here's our Postman screenshot for the Employees by Location query for each location.

Just as we thought, there are two employees at Location #1 and a third at Location #2.

We're doing pretty darn good so far! But what happens if Reggie Martinez (Employee #3) needs to transfer to Location #2? We can do that with the proper commands.

Command - Assign Employee to Location

Here's a command to move Mr. Martinez to Location #2:
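As a raw request (using the AssignEmployeeToLocationRequest class and the locations/assignemployee route from Part 4), that's simply:

POST http://localhost:51000/locations/assignemployee
Content-Type: application/json

{
  "LocationID": 2,
  "EmployeeID": 3
}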

And now, if we query for all employees at Location #2:

I'd say we've done pretty darn good! All our commands do what we expect them to do, and all our queries return the appropriate data. We've now got ourselves a working CQRS/ES project with ASP.NET and Redis!

Shortcomings of This Design

Even though we've done a lot of work on this project and I think we've mostly gotten it right, there are still a few places that I think could be improved:

  • All Redis access through repositories. I don't like having the Event Handlers access the Redis database directly; I'd rather have them go through the repositories. This would be easy to do, I just didn't have time before my publish date.
  • Better splitting of requests/commands and commands/events. I don't like how commands always seem to result in exactly one event.

That said, I'm really proud of the way this project turned out. If you see any additional areas for improvement, please let me know in the comments!

Summary

In this final part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Ran several queries and commands.
  • Confirmed that those queries and commands worked as expected.
  • Discussed a couple of shortcomings of this design.

As always, I welcome (civil) comments and discussion on my blog, and feel free to fork or patch or do whatever to the GitHub repository that goes along with this series. Thanks for reading!

Happy Coding!

Post image is Toddler running and falling from Wikimedia, used under license

Real-World CQRS/ES with ASP.NET and Redis Part 4 - Creating the APIs

NOTE: This is Part 4 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

We've done quite a lot of work to get to this point! We've discussed why we might want to use Command-Query Responsibility Segregation (CQRS) and Event Sourcing (ES) in our app, we've built a Write Model to handle the processing of our commands, and we've built a Read Model to query our data.

Now we can show why this is a "real-world" app. Here's what we're going to do in Part 4 of Real-World CQRS/ES with ASP.NET and Redis:

  • Build a Queries API so we can query the system for data.
  • Build a Commands API so that we can issue commands to the system.
  • Implement a validation layer using FluentValidation to ensure that commands being issued are valid to execute.
  • Implement dependency injection using StructureMap in both the commands and queries APIs.

Don't stop now! Let's get started building our APIs!

The Queries API

We're going to switch it up a bit and build the Queries API first, as that turns out to be easier than building the Commands API right off the bat. After all, the Queries API doesn't have to worry about things like validation. So, let's create a new ASP.NET Web API app.

Dependency Injection with StructureMap

After creating the new ASP.NET Web API project, the first thing we need to do is download the StructureMap.WebApi2 NuGet package and install it. Doing so gives us a folder structure that looks something like this (notice the new DependencyResolution folder):

I've blogged about how to use StructureMap with Web API in a previous post, so if you're not familiar with the StructureMap.WebApi2 package, you might want to read that post first, then come back here. It's OK, I'll wait.

Once we've downloaded and installed the StructureMap.WebApi2 package, we'll need to change just a couple of things. In our Global.asax file, we need to start the StructureMap container:

public class WebApiApplication : System.Web.HttpApplication  
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        GlobalConfiguration.Configure(WebApiConfig.Register);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);

        StructuremapWebApi.Start(); //Start! Your! Containers! VROOOOOOOOM
    }
}

We also need to register the appropriate items with the container so that they can be injected. Among those items are the Repositories we created in the previous part of this series; we must register them so our API controllers can have them injected.

In Part 3, we also established that we are using Redis as our Read Data Store, and that we are utilizing StackExchange.Redis to interface with said Redis instance. StackExchange.Redis conveniently comes prepared for dependency injection, so all we need to do is register the IConnectionMultiplexer interface with our container.

In all, our DefaultRegistry class for the Queries API looks like this:

public class DefaultRegistry : Registry {  
    public DefaultRegistry() 
    {
        //Repositories
        For<IEmployeeRepository>().Use<EmployeeRepository>();
        For<ILocationRepository>().Use<LocationRepository>();

        //StackExchange.Redis
        ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
        For<IConnectionMultiplexer>().Use(multiplexer);
    }
}

See, that wasn't too bad! Just wait until you see the Commands API's registry.

Building the Queries

Anyway, with StructureMap now in place, we can start building the queries we need to support. Here's the queries list we talked about in Part 3:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's start with the easy one: getting an Employee by their ID.

Get Employee by ID

We need an EmployeeController, with a private IEmployeeRepository, to execute this query. The complete controller is as follows:

[RoutePrefix("employees")]
public class EmployeeController : ApiController  
{
    private readonly IEmployeeRepository _employeeRepo;

    public EmployeeController(IEmployeeRepository employeeRepo)
    {
        _employeeRepo = employeeRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public IHttpActionResult GetByID(int id)
    {
        var employee = _employeeRepo.GetByID(id);

        //It is possible for GetByID() to return null.
        //If it does, we return HTTP 400 Bad Request
        if(employee == null)
        {
            return BadRequest("No Employee with ID " + id.ToString() + " was found.");
        }

        //Otherwise, we return the employee
        return Ok(employee);
    }
}

Well, that looks pretty simple. How about the GetAll() query?

Get All Employees

[RoutePrefix("employees")]
public class EmployeeController : ApiController  
{
    ...

    [HttpGet]
    [Route("all")]
    public IHttpActionResult GetAll()
    {
        var employees = _employeeRepo.GetAll();
        return Ok(employees);
    }
}

I think I'm sensing a theme here.

The Location Queries Controller

Let's see what the Location queries are:

[RoutePrefix("location")]
public class LocationController : ApiController  
{
    private ILocationRepository _locationRepo;

    public LocationController(ILocationRepository locationRepo)
    {
        _locationRepo = locationRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public IHttpActionResult GetByID(int id)
    {
        var location = _locationRepo.GetByID(id);
        if(location == null)
        {
            return BadRequest("No location with ID " + id.ToString() + " was found.");
        }
        return Ok(location);
    }

    [HttpGet]
    [Route("all")]
    public IHttpActionResult GetAll()
    {
        var locations = _locationRepo.GetAll();
        return Ok(locations);
    }

    [HttpGet]
    [Route("{id}/employees")]
    public IHttpActionResult GetEmployees(int id)
    {
        var employees = _locationRepo.GetEmployees(id);
        return Ok(employees);
    }
}

Yep, definitely a theme going on. All this setup has made implementing our controllers very simple, and simplicity is definitely better when dealing with complex patterns like CQRS and ES.

We'll run queries against this API in Part 5, but for now let's turn our attention to the Commands API, which may prove to be a bit more difficult to write.

The Commands API

As I mentioned early on in this post, the Commands API is considerably more complex than the Queries API; this is largely due to the number of things we need to inject into our container, as well as the Commands API being responsible for validating the requests that come in to the system. We're going to tackle each of these problems.

Dependency Injection

First, let's deal with Dependency Injection. We'll use the same package as before, with the same Global.asax change. However, our DefaultRegistry looks much different.

In the Commands API, we need the following services available for injection:

  • CQRSLite's Commands and Events bus
  • Our Commands and Events (from Part 2)
  • Our Event Store (from Part 2)
  • AutoMapper
  • Our own Repositories (from Part 3)
  • StackExchange.Redis

That results in this monstrosity of a registry:

public class DefaultRegistry : Registry {  
    #region Constructors and Destructors

    public DefaultRegistry() {
        //Commands, Events, Handlers
        Scan(
            scan => {
                scan.TheCallingAssembly();
                scan.AssemblyContainingType<BaseEvent>();
                scan.Convention<FirstInterfaceConvention>();
            });

        //CQRSLite
        For<InProcessBus>().Singleton().Use<InProcessBus>();
        For<ICommandSender>().Use(y => y.GetInstance<InProcessBus>());
        For<IEventPublisher>().Use(y => y.GetInstance<InProcessBus>());
        For<IHandlerRegistrar>().Use(y => y.GetInstance<InProcessBus>());
        For<ISession>().HybridHttpOrThreadLocalScoped().Use<Session>();
        For<IEventStore>().Singleton().Use<InMemoryEventStore>();
        For<IRepository>().HybridHttpOrThreadLocalScoped().Use(y =>
                new CacheRepository(new Repository(y.GetInstance<IEventStore>()), y.GetInstance<IEventStore>()));

        //AutoMapper
        var profiles = from t in typeof(DefaultRegistry).Assembly.GetTypes()
                        where typeof(Profile).IsAssignableFrom(t)
                        select (Profile)Activator.CreateInstance(t);

        var config = new MapperConfiguration(cfg =>
        {
            foreach (var profile in profiles)
            {
                cfg.AddProfile(profile);
            }
        });

        var mapper = config.CreateMapper();

        For<IMapper>().Use(mapper);

        //StackExchange.Redis
        ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
        For<IConnectionMultiplexer>().Use(multiplexer);
    }

    #endregion
}

Holy crap that's a lot of things that need to be injected. But, as we will see, each of these things is actually necessary and provides a lot of value to our application.

(Hold on a second while I smack myself. I sounded way too much like a marketer just now.)

SMACK

Okay, I'm better now.
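One more note on the registry above: the AutoMapper section scans the assembly for Profile classes, but we never actually look at one in this series. The real profiles live in the GitHub repository; a minimal sketch of one (the class name is mine, not the repo's) that maps requests to their commands might look like this:

public class RequestToCommandProfile : Profile  
{
    public RequestToCommandProfile()
    {
        //Map incoming requests to their corresponding commands (see the Requests section below)
        CreateMap<CreateEmployeeRequest, CreateEmployeeCommand>();
        CreateMap<CreateLocationRequest, CreateLocationCommand>();
    }
}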

Requests

I've been using the term "request" liberally throughout this series, and now it's time to truly define what a request is.

In this system, a request is a potential command. That's all. Consuming applications which would like commands issued must submit a request first; that request will be validated and, if found to be valid, mapped to the corresponding command.

A request is, therefore, a C# class which contains the data needed to issue a particular command.

Request 1 - Create Employee

Let's begin to define our requests by first creating a request for creating a new employee.

public class CreateEmployeeRequest  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
}

WTF Matthew, you say, that looks almost EXACTLY like the CreateEmployeeCommand! Why can't we just use that?! And after I'm done looking around for my parents (nobody calls me Matthew), I can tell you that there are two reasons why we don't reuse the command objects as requests.

First, requests must be validated against the Read Model, whereas commands are assumed to be valid. Second, a single request may kick off more than one command, as is the case with this request.

But how do we accomplish that validation, you say? By using one of my favorite NuGet packages of all time: FluentValidation.

The Validation Layer

FluentValidation is a NuGet package which allows us to validate objects and place any validation errors it finds into the controller's ModelState. But (unlike StackExchange.Redis) it doesn't come ready for use in a Dependency Injection environment, so we must do some setup.

First, we need a factory which will create the validator objects:

public class StructureMapValidatorFactory : ValidatorFactoryBase  
{
    private readonly HttpConfiguration _configuration;

    public StructureMapValidatorFactory(HttpConfiguration configuration)
    {
        _configuration = configuration;
    }

    public override IValidator CreateInstance(Type validatorType)
    {
        return _configuration.DependencyResolver.GetService(validatorType) as IValidator;
    }
}

Next, in our WebApiConfig.cs file, we need to enable FluentValidation's validator provider using our factory:

public static void Register(HttpConfiguration config)  
{
    ...

    FluentValidationModelValidatorProvider.Configure(config, x => x.ValidatorFactory = new StructureMapValidatorFactory(config));

    ...
}

Finally, we need to register the validators themselves in our StructureMap container, which is done in the DefaultRegistry class.

public DefaultRegistry()  
{
    ...
    //FluentValidation 
    FluentValidation.AssemblyScanner.FindValidatorsInAssemblyContaining<CreateEmployeeRequestValidator>()
            .ForEach(result =>
            {
                For(result.InterfaceType)
                    .Use(result.ValidatorType);
            });
}

With all of that in place, we're ready to begin building our validators!

Create Employee - Validation

Here are the validation rules we need to implement when creating an employee:

  • The Employee ID must not already exist.
  • The Location ID must belong to an existing Location.
  • The First Name cannot be blank.
  • The Last Name cannot be blank.
  • The Job Title cannot be blank.
  • Employees must be 16 years of age or older.

Here's how we would implement such a validator using FluentValidation:

public class CreateEmployeeRequest  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; } //Needed by the LocationID rule below; matches the definition shown earlier
}

public class CreateEmployeeRequestValidator : AbstractValidator<CreateEmployeeRequest>  
{
    public CreateEmployeeRequestValidator(IEmployeeRepository employeeRepo, ILocationRepository locationRepo)
    {
        RuleFor(x => x.EmployeeID).Must(x => !employeeRepo.Exists(x)).WithMessage("An Employee with this ID already exists.");
        RuleFor(x => x.LocationID).Must(x => locationRepo.Exists(x)).WithMessage("No Location with this ID exists.");
        RuleFor(x => x.FirstName).NotNull().NotEmpty().WithMessage("The First Name cannot be blank.");
        RuleFor(x => x.LastName).NotNull().NotEmpty().WithMessage("The Last Name cannot be blank.");
        RuleFor(x => x.JobTitle).NotNull().NotEmpty().WithMessage("The Job Title cannot be blank.");
        RuleFor(x => x.DateOfBirth).LessThan(DateTime.Today.AddYears(-16)).WithMessage("Employees must be 16 years old or older.");
    }
}

Notice that IEmployeeRepository and ILocationRepository are constructor parameters to the validator class. We don't need to do anything else to get those objects injected, as that was taken care of by registering the Repositories and the FluentValidation factory.

There's just one last thing we need to do to have our validation layer fully integrated: whenever validation fails, we need to automatically return HTTP 400 Bad Request. We accomplish this by using an ActionFilter...

public class BadRequestActionFilter : ActionFilterAttribute  
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest, new ValidationErrorWrapper(actionContext.ModelState));
        }
        base.OnActionExecuting(actionContext);
    }
}

...and registering that action filter in WebApiConfig.

public static class WebApiConfig  
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services
        config.Filters.Add(new BadRequestActionFilter());
        ...
    }
}
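By the way, the ValidationErrorWrapper class used in the action filter never gets shown in this post; the real one is in the GitHub repository. A minimal sketch of such a wrapper (my own version, not necessarily the repo's) just flattens the ModelState errors into a list of messages:

public class ValidationErrorWrapper  
{
    public List<string> Errors { get; private set; }

    public ValidationErrorWrapper(ModelStateDictionary modelState)
    {
        //Flatten every ModelState error into a simple list of messages for the response body
        Errors = modelState.Values
                           .SelectMany(v => v.Errors)
                           .Select(e => e.ErrorMessage)
                           .ToList();
    }
}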

Controller Action

The controller action for creating an employee does two things: it issues the CreateEmployeeCommand, and then issues an AssignEmployeeToLocationCommand. Since this is the only action in the EmployeeController class, the entire class looks like this:

[RoutePrefix("employee")]
public class EmployeeController : ApiController  
{
    private IMapper _mapper;
    private ICommandSender _commandSender;

    public EmployeeController(ICommandSender commandSender, IMapper mapper)
    {
        _commandSender = commandSender;
        _mapper = mapper;
    }

    [HttpPost]
    [Route("create")]
    public IHttpActionResult Create(CreateEmployeeRequest request)
    {
        var command = _mapper.Map<CreateEmployeeCommand>(request);
        _commandSender.Send(command);

        var assignCommand = new AssignEmployeeToLocationCommand(request.LocationID, request.EmployeeID);
        _commandSender.Send(assignCommand);
        return Ok();
    }
}

Since we've now got the EmployeeController written, we can move on to the next request: creating a new location.

Request 2 - Create Location

Now let's build a request and a validator to create a location. Our validation rules for creating a new location look like this:

  1. The location ID must not already exist.
  2. The street address cannot be blank.
  3. The city cannot be blank.
  4. The state cannot be blank.
  5. The postal code cannot be blank.

Implementing those rules results in the following classes:

public class CreateLocationRequest  
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
}

public class CreateLocationRequestValidator : AbstractValidator<CreateLocationRequest>  
{
    public CreateLocationRequestValidator(ILocationRepository locationRepo)
    {
        RuleFor(x => x.LocationID).Must(x => !locationRepo.Exists(x)).WithMessage("A Location with this ID already exists.");
        RuleFor(x => x.StreetAddress).NotNull().NotEmpty().WithMessage("The Street Address cannot be blank.");
        RuleFor(x => x.City).NotNull().NotEmpty().WithMessage("The City cannot be blank.");
        RuleFor(x => x.State).NotNull().NotEmpty().WithMessage("The State cannot be blank.");
        RuleFor(x => x.PostalCode).NotNull().NotEmpty().WithMessage("The Postal Code cannot be blank.");
    }
}

The corresponding controller (LocationController) looks pretty similar to EmployeeController.

[RoutePrefix("locations")]
public class LocationController : ApiController  
{
    private IMapper _mapper;
    private ICommandSender _commandSender;
    private ILocationRepository _locationRepo;
    private IEmployeeRepository _employeeRepo;

    public LocationController(ICommandSender commandSender, IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _commandSender = commandSender;
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    [HttpPost]
    [Route("create")]
    public IHttpActionResult Create(CreateLocationRequest request)
    {
        var command = _mapper.Map<CreateLocationCommand>(request);
        _commandSender.Send(command);
        return Ok();
    }
}

Looking at this controller, you might be wondering why IEmployeeRepository and ILocationRepository are passed into the constructor when they aren't used by the Create() action. That's because we still have one request left to build: assigning an employee to a location.

Request 3 - Assign Employee to Location

Remember that one of our business rules (from Part 2) says the following:

  1. Employees may switch locations, but they may not be assigned to more than one location at a time.

The request we are going to build now will assign an employee to a new location, as well as remove that employee from the location s/he is currently assigned to.

But, you declare, we defined a command to remove an employee from a location! Is that not also a request? Nope, it's not, and for the same reason that creating an employee results in two commands: one request can result in multiple commands. In this case, assigning an employee to a location could result in one or two commands, depending on whether the employee was just created or not.

First, let's build the request and its validator. In this case, we have three validation rules:

  1. The Location must exist.
  2. The Employee must exist.
  3. The Employee must not already be assigned to the given Location.

Implementing those rules results in the following classes:

public class AssignEmployeeToLocationRequest  
{
    public int LocationID { get; set; }
    public int EmployeeID { get; set; }
}

public class AssignEmployeeToLocationRequestValidator : AbstractValidator<AssignEmployeeToLocationRequest>  
{
    public AssignEmployeeToLocationRequestValidator(IEmployeeRepository employeeRepo, ILocationRepository locationRepo)
    {
        RuleFor(x => x.LocationID).Must(x => locationRepo.Exists(x)).WithMessage("No Location with this ID exists.");
        RuleFor(x => x.EmployeeID).Must(x => employeeRepo.Exists(x)).WithMessage("No Employee with this ID exists.");
        RuleFor(x => new { x.LocationID, x.EmployeeID }).Must(x => !locationRepo.HasEmployee(x.LocationID, x.EmployeeID)).WithMessage("This Employee is already assigned to that Location.");
    }
}

Now, all we have to do is write the controller action:

[RoutePrefix("locations")]
public class LocationController : ApiController  
{
    ...
    [HttpPost]
    [Route("assignemployee")]
    public IHttpActionResult AssignEmployee(AssignEmployeeToLocationRequest request)
    {
        var employee = _employeeRepo.GetByID(request.EmployeeID);
        if (employee.LocationID != 0)
        {
            var oldLocationAggregateID = _locationRepo.GetByID(employee.LocationID).AggregateID;

            RemoveEmployeeFromLocationCommand removeCommand = new RemoveEmployeeFromLocationCommand(oldLocationAggregateID, employee.LocationID, employee.EmployeeID); //Remove the employee from his/her OLD location
            _commandSender.Send(removeCommand);
        }

        var locationAggregateID = _locationRepo.GetByID(request.LocationID).AggregateID;
        var assignCommand = new AssignEmployeeToLocationCommand(locationAggregateID, request.LocationID, request.EmployeeID);
        _commandSender.Send(assignCommand);

        return Ok();
    }
}

Whew! With that final controller action in place, we have completed building our APIs! Give yourselves a pat on the back for coming this far!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built a Queries API with a DI container and implemented our business queries.
  • Built a Commands API with a DI container and implemented our requests.
  • Used FluentValidation to implement the Commands API's validation layer.

Congratulations! We've completed the build of our real-world CQRS/ES system! All that's left to do is run a few commands and queries to show how the system works, and we will do that in the final part of this series. Keep your eyes (and feed readers) open for Part 5 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!

Real-World CQRS/ES with ASP.NET and Redis Part 3 - The Read Model

NOTE: This is Part 3 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

In Part 1, we talked about why we might want to use Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) in our apps, and in Part 2 we defined how the Write Model (Commands, Command Handlers, Events, Aggregate Roots) of our simple system behaves. In this part, we will define the system's Read Model; that is, how other apps will query for the data we use.

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Discover what comprises the Read Model for CQRS applications.
  • Gather our requirements for the queries we need to support
  • Choose a data store (and explain why we chose the one that we did)
  • Build the Repositories which will allow our app to query the Read Model data AND
  • Build the Event Handlers which will maintain the Read Model data store.

Let's get started!

What Is The Read Model?

Quite simply, the read model is the model of the data that consuming applications can query against. There are a few guidelines to keep in mind when designing a good read model:

  1. The Read Model should reflect the kinds of queries run against it.
  2. The Read Model should contain the current state of the data (this is important as we are using Event Sourcing).

In our system, the Read Model consists of the Read Model Objects, the Read Data Store, the Event Handlers, and the Repositories. This post will walk through designing all of these objects.

Query Requirements

First, a reminder: the entire point of CQRS is that the read model and the write model are totally separate things. You can model each in a completely different way, and in fact this is what we are doing in this tutorial: for the write model, we are storing events (using the Event Sourcing pattern), but our read model must conform to the guidelines laid out above.

When designing a Read Model for a CQRS system, you generally want said model to reflect the kinds of queries that will be run against that system. So, if you need a way to get all locations, locations by ID, and employees by ID, your Read Model should be able to do each of these easily, without a lot of round-tripping between the data store and the application.

But in order to design our Read Model, we must first know what queries exist. Here are the possible queries for our sample system:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's see how we can design our Read Model to reflect these queries.

Design of Read Model Objects

One of the benefits of using CQRS is that we can use fully-separate classes to define what the Read Model contains. Let's use two new classes (EmployeeRM and LocationRM, RM being short for Read Model) to represent how our Locations and Employees will be stored in our Read Model database.

public class EmployeeRM  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
    public Guid AggregateID { get; set; }
}

public class LocationRM  
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
    public List<int> Employees { get; set; }
    public Guid AggregateID { get; set; }

    public LocationRM()
    {
        Employees = new List<int>();
    }
}

For comparison, here are the properties from the Write Model versions of these objects (Employee and Location):

public class Employee : AggregateRoot  
{
    private int _employeeID;
    private string _firstName;
    private string _lastName;
    private DateTime _dateOfBirth;
    private string _jobTitle;

    ...
}

public class Location : AggregateRoot  
{
    private int _locationID;
    private string _streetAddress;
    private string _city;
    private string _state;
    private string _postalCode;
    private List<int> _employees;

    ...
}

As you can see, the LocationRM and EmployeeRM both store their respective AggregateID that was assigned to them when they were created, and EmployeeRM further has the property LocationID which does not exist in the Employee Write Model class.

Now we must tackle a different problem: what data store will we use?

Choosing a Data Store

In any CQRS system, the selection of a datastore comes down to a couple of questions:

  1. How fast do you need reads to be?
  2. How much work does the Read Model datastore need to be able to do on its own?

In my system, I am assuming there will be an order of magnitude more reads than writes (a very common scenario for CQRS applications). Further, I am assuming that my Read Model datastore can be treated as little more than a cache that gets updated occasionally. These two assumptions lead me to answer those questions like this:

  1. How fast do you need reads to be? Extremely
  2. How much work does the Read Model datastore need to be able to do on its own? Not a lot

I'm a SQL Server guy by trade, but SQL Server is not exactly known for being "fast". You absolutely can optimize it to be such, but at this time I'm more interested in trying a datastore that I've heard a lot about but haven't actually had a chance to use yet: Redis.

Redis calls itself a "data structure store". What that really means is that it stores whole objects and data structures, not relations (as you would store in a relational database such as SQL Server). Further, Redis distinguishes between keys and everything else, and gives you several options for creating such keys.

For this demo, you don't really need to know more about how Redis works, but I encourage you to check it out on your own. Further, if you intend to run the sample app (and, like most .NET devs, you're running Windows), you'll want to download MSOpenTech's port of Redis for Windows.

We now have two pieces of our Read Model in place: the Read Model Objects, and the Read Data Store. We can now begin implementation of a layer which will allow us to interface with the Read Data Store and update it as necessary: the Repository layer.

Creating the Repositories

The Repositories (for this project) are interfaces which allow us to query the Read Model. Remember that we have five possible queries that we need to support:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

However, we also need to support certain validation scenarios; for example, we cannot assign an Employee to a location that doesn't exist. Therefore we also need certain functions to check if employees or locations exist.

For the sake of good design, we need at least two Repositories: one for Locations and one for Employees. But a surprising amount of functionality is needed by both of these repositories:

  • They both need to get an object by its ID.
  • They both need to check if an object with a given ID exists.
  • They both need to save a changed object back into the Read Data Store.
  • They both need to be able to get multiple objects of the same type.

Consequently, we can build a common IBaseRepository interface and BaseRepository class which implement these common features. The IBaseRepository interface will be inherited by the other repository interfaces; it looks like this:

public interface IBaseRepository<T>  
{
    T GetByID(int id);
    List<T> GetMultiple(List<int> ids);
    bool Exists(int id);
    void Save(T item);
}

Now, we also need two more interfaces which extend IBaseRepository<T>: IEmployeeRepository and ILocationRepository:

public interface IEmployeeRepository : IBaseRepository<EmployeeRM>  
{
    IEnumerable<EmployeeRM> GetAll();
}

public interface ILocationRepository : IBaseRepository<LocationRM>  
{
    IEnumerable<LocationRM> GetAll();
    IEnumerable<EmployeeRM> GetEmployees(int locationID);
    bool HasEmployee(int locationID, int employeeID);
}

The next piece of the puzzle is the BaseRepository class (which, unfortunately, does NOT implement IBaseRepository<T>). This class provides methods by which items can be retrieved from or saved to the Redis Read Data Store:

public class BaseRepository  
{
    private readonly IConnectionMultiplexer _redisConnection;

    /// <summary>
    /// The Namespace is the first part of any key created by this Repository, e.g. "location" or "employee"
    /// </summary>
    private readonly string _namespace;

    public BaseRepository(IConnectionMultiplexer redis, string nameSpace)
    {
        _redisConnection = redis;
        _namespace = nameSpace;
    }

    public T Get<T>(int id)
    {
        return Get<T>(id.ToString());
    }

    public T Get<T>(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        if (serializedObject.IsNullOrEmpty) throw new ArgumentNullException(); //Throw a better exception than this, please
        return JsonConvert.DeserializeObject<T>(serializedObject.ToString());
    }

    public List<T> GetMultiple<T>(List<int> ids)
    {
        var database = _redisConnection.GetDatabase();
        List<RedisKey> keys = new List<RedisKey>();
        foreach (int id in ids)
        {
            keys.Add(MakeKey(id));
        }
        var serializedItems = database.StringGet(keys.ToArray(), CommandFlags.None);
        List<T> items = new List<T>();
        foreach (var item in serializedItems)
        {
            items.Add(JsonConvert.DeserializeObject<T>(item.ToString()));
        }
        return items;
    }

    public bool Exists(int id)
    {
        return Exists(id.ToString());
    }

    public bool Exists(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        return !serializedObject.IsNullOrEmpty;
    }

    public void Save(int id, object entity)
    {
        Save(id.ToString(), entity);
    }

    public void Save(string keySuffix, object entity)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        database.StringSet(key, JsonConvert.SerializeObject(entity)); //key is already namespaced by MakeKey()
    }

    private string MakeKey(int id)
    {
        return MakeKey(id.ToString());
    }

    private string MakeKey(string keySuffix)
    {
        if (!keySuffix.StartsWith(_namespace + ":"))
        {
            return _namespace + ":" + keySuffix;
        }
        else return keySuffix; //Key is already prefixed with namespace
    }
}

With all of that infrastructure in place, we can start implementing the EmployeeRepository and LocationRepository.

Employee Repository

In the EmployeeRepository, let's get a single Employee record with the given Employee ID.

public class EmployeeRepository : BaseRepository, IEmployeeRepository  
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }
}

Hey, that was easy! Because of the work we did in the BaseRepository, our Read Model Object repositories will be quite simple. Here's the rest of EmployeeRepository:

public class EmployeeRepository : BaseRepository, IEmployeeRepository  
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }

    public List<EmployeeRM> GetMultiple(List<int> employeeIDs)
    {
        return GetMultiple<EmployeeRM>(employeeIDs);
    }

    public IEnumerable<EmployeeRM> GetAll()
    {
        return Get<List<EmployeeRM>>("all");
    }

    public void Save(EmployeeRM employee)
    {
        Save(employee.EmployeeID, employee);
        MergeIntoAllCollection(employee);
    }

    private void MergeIntoAllCollection(EmployeeRM employee)
    {
        List<EmployeeRM> allEmployees = new List<EmployeeRM>();
        if (Exists("all"))
        {
            allEmployees = Get<List<EmployeeRM>>("all");
        }

        //If the employee already exists in the ALL collection, remove that entry
        if (allEmployees.Any(x => x.EmployeeID == employee.EmployeeID))
        {
            allEmployees.Remove(allEmployees.First(x => x.EmployeeID == employee.EmployeeID));
        }

        //Add the modified employee to the ALL collection
        allEmployees.Add(employee);

        Save("all", allEmployees);
    }
}

Take special note of the MergeIntoAllCollection() method, and let me take a minute to explain what I'm doing here.

Querying for Collections

As I mentioned earlier, Redis makes a distinction between keys and everything else, and because of this it doesn't really apply a "type" per se to anything stored against a key. Consequently, unlike in SQL Server, you don't really query for several objects (e.g. SELECT * FROM table WHERE condition) because that's not what Redis is for.

Remember that we're designing this to reflect the queries we need to run. We can think of this as changing when the work of making a collection is done.

In SQL Server or other relational databases, most of the time you do the work of creating a collection when you run a query. So, you might have a huge table of, say, vegetables, and then create a query to only give you carrots, or radishes, or whatever.

But in Redis, no such querying is possible. Therefore, instead of doing the work when we need the query, we prep the data in advance at the point where it changes. Consequently, the queries are ready for consumption immediately after the corresponding event handlers are done processing.

All we're doing is moving the time when we create the query results from "when the query runs" to "when the source data changes."

With the current setup of the repositories, any time a LocationRM or EmployeeRM object is saved, that object is merged back into the respective "all collection" for that object. Hence, I needed MergeIntoAllCollection().
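To make that concrete, after a few commands have been processed our Redis data store contains keys along these lines (IDs illustrative; every value is a JSON-serialized object or list):

employee:1           -> a single EmployeeRM
employee:all         -> the List<EmployeeRM> "all collection"
location:1           -> a single LocationRM
location:1:employees -> the List<EmployeeRM> assigned to Location #1 (read by GetEmployees())
location:all         -> the List<LocationRM> "all collection"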

Location Repository

Now, let's see what the LocationRepository looks like:

public class LocationRepository : BaseRepository, ILocationRepository  
{
    public LocationRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "location") { }
    public LocationRM GetByID(int locationID)
    {
        return Get<LocationRM>(locationID);
    }

    public List<LocationRM> GetMultiple(List<int> locationIDs)
    {
        return GetMultiple<LocationRM>(locationIDs); //Note the type parameter; without it, this method would recurse into itself forever
    }

    public bool HasEmployee(int locationID, int employeeID)
    {
        //Deserialize the LocationRM with the key location:{locationID}
        var location = Get<LocationRM>(locationID);

        //If that location has the specified Employee, return true
        return location.Employees.Contains(employeeID);
    }

    public IEnumerable<LocationRM> GetAll()
    {
        return Get<List<LocationRM>>("all");
    }
    public IEnumerable<EmployeeRM> GetEmployees(int locationID)
    {
        return Get<List<EmployeeRM>>(locationID.ToString() + ":employees");
    }

    public void Save(LocationRM location)
    {
        Save(location.LocationID, location);
        MergeIntoAllCollection(location);
    }

    private void MergeIntoAllCollection(LocationRM location)
    {
        List<LocationRM> allLocations = new List<LocationRM>();
        if (Exists("all"))
        {
            allLocations = Get<List<LocationRM>>("all");
        }

        //If the location already exists in the ALL collection, remove that entry
        if (allLocations.Any(x => x.LocationID == location.LocationID))
        {
            allLocations.Remove(allLocations.First(x => x.LocationID == location.LocationID));
        }

        //Add the modified location to the ALL collection
        allLocations.Add(location);

        Save("all", allLocations);
    }
}

Now our Repositories are complete, and we can finally write the last, best piece of our system's Read Model: the event handlers.

Building the Event Handlers

Whenever an event is issued by our system, we can use an Event Handler to do something with that event. In our case, we need our Event Handlers to update our Redis data store.

First, let's create an Event Handler for the Create Employee event.

public class EmployeeEventHandler : IEventHandler<EmployeeCreatedEvent>  
{
    private readonly IMapper _mapper;
    private readonly IEmployeeRepository _employeeRepo;
    public EmployeeEventHandler(IMapper mapper, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _employeeRepo = employeeRepo;
    }

    public void Handle(EmployeeCreatedEvent message)
    {
        EmployeeRM employee = _mapper.Map<EmployeeRM>(message);
        _employeeRepo.Save(employee);
    }
}

Note that all interfacing with the Redis data store is done through the repository, and so the event handler consumes an instance of IEmployeeRepository in its constructor. Because we're using Dependency Injection (which we will set up in Part 4), this usage becomes possible and greatly simplifies our event handler.

In any case, notice that all this event handler is doing is creating the corresponding Read Model object from an event (specifically the EmployeeCreatedEvent).

Now let's build the event handler for a Location. In this case, we have three events to handle: creating a new Location, assigning an employee to a Location, and removing an employee from a Location (and in order to do all of those, it will need to take both ILocationRepository and IEmployeeRepository as constructor parameters):

public class LocationEventHandler : IEventHandler<LocationCreatedEvent>,  
                                    IEventHandler<EmployeeAssignedToLocationEvent>,
                                    IEventHandler<EmployeeRemovedFromLocationEvent>
{
    private readonly IMapper _mapper;
    private readonly ILocationRepository _locationRepo;
    private readonly IEmployeeRepository _employeeRepo;
    public LocationEventHandler(IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    public void Handle(LocationCreatedEvent message)
    {
        //Create a new LocationRM object from the LocationCreatedEvent
        LocationRM location = _mapper.Map<LocationRM>(message);

        _locationRepo.Save(location);
    }

    public void Handle(EmployeeAssignedToLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.NewLocationID);
        location.Employees.Add(message.EmployeeID);
        _locationRepo.Save(location);

        //Find the employee which was assigned to this Location
        var employee = _employeeRepo.GetByID(message.EmployeeID);
        employee.LocationID = message.NewLocationID;
        _employeeRepo.Save(employee);
    }

    public void Handle(EmployeeRemovedFromLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.OldLocationID);
        location.Employees.Remove(message.EmployeeID);
        _locationRepo.Save(location);
    }
}

With the Event Handlers in place, every time an Event is kicked off, it will be consumed by the Event Handlers and the Redis data model will be updated. Success!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built the Read Model Data Store using Redis,
  • Designed our Read Model to support our business's queries,
  • Built the Event Handlers which place data into said data store AND
  • Built a set of repositories to access the Redis data.

There's still a lot to do, though. We need to set up our Dependency Injection system, our validation layer, and our Requests. We'll do all of that in Part 4 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!