sample-project

All posts that include links to a sample project on GitHub.

A Simple Caching Scheme for Web API using Dependency Injection

I use Dependency Injection (DI) quite a bit in my ASP.NET projects, particularly in Web API and MVC web applications. Recently, I had a need to implement a caching layer in one of my MVC apps, and such a layer would be most useful if it could be injected into my client layer (i.e. the layer that calls an API and handles its responses). The solution I came up with seemed pretty simple, so I wanted to share it here.

In this post, we're going to set up a simple MVC project that consumes a Web API and implements a caching layer.

Requirements

For this project, I'm using two of my favorite NuGet packages: StructureMap.MVC5 (for dependency injection) and RestSharp (for consuming the API).

The sample project, which is over on GitHub, is a fully-functional implementation of the strategy described in this post. Feel free to check it out, branch it, download it, whatever!

Setting Up the API

First, let's take a look at our sample API. Here's the class DateNumberObject, which represents a response returned from the API to the web project:

public class DateNumberObject  
{
    public DateTime CurrentDate { get; set; }
    public int RandomNumber { get; set; }

    public DateNumberObject()
    {
        CurrentDate = DateTime.Now;
        Random rand = new Random();
        RandomNumber = rand.Next(1, 200);
    }
}

As you can see, all this class does is return the current date and time and a random number.

We can now build our API, which is also rather simple. Here's a snippet from the API's controller:

[RoutePrefix("samples")]
public class SampleController : ApiController  
{

    [HttpGet]
    [Route("date")]
    public IHttpActionResult GetDateAndNumber()
    {
        return Ok(new DateNumberObject());
    }
}

That's all our API will do: return the current date and a random number. Since there's no caching taking place at the service layer, we will need to implement caching at the web layer.

Speaking of the web layer, in our sample solution it is an MVC5 project, and we will be using my favorite Dependency Injection tool, StructureMap.

Setting up StructureMap

If you've never set up StructureMap in your MVC projects, you'll want to read this section; otherwise, skip to the next section.

The first thing we need to do is download the StructureMap.MVC5 NuGet package, which will add a DependencyResolution folder and a StructuremapMvc file to our app:

Inside the DependencyResolution folder will be a file called DefaultRegistry.cs, which will initially look something like this:

namespace WebApiCacheDemo.Mvc.DependencyResolution {  
    using Caching;
    using StructureMap.Configuration.DSL;
    using StructureMap.Graph;

    public class DefaultRegistry : Registry {
        #region Constructors and Destructors
        public DefaultRegistry() {
            Scan(
                scan => {
                    scan.TheCallingAssembly();
                    scan.WithDefaultConventions();
                    scan.With(new ControllerConvention());
                });
        }
        #endregion
    }
}

The way StructureMap works is that it keeps instances of the classes that need to be "injected" into other classes in a Container, and it supplies one of those instances any time a matching interface is asked for by another class. You'll see exactly what this means in the next section.
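
To make that idea concrete, here's a minimal standalone sketch (not part of the sample project; the IGreeter and Greeter types are made up purely for illustration) showing a container handing back a concrete class whenever its interface is requested:

using System;
using StructureMap;

public interface IGreeter
{
    string Greet();
}

public class Greeter : IGreeter
{
    public string Greet() { return "Hello from StructureMap!"; }
}

public class ContainerDemo
{
    public static void Main()
    {
        // Register the mapping explicitly (the StructureMap.MVC5 package normally does this for us via Scan()).
        var container = new Container(x => x.For<IGreeter>().Use<Greeter>());

        // Any time an IGreeter is asked for, the container supplies a Greeter instance.
        IGreeter greeter = container.GetInstance<IGreeter>();
        Console.WriteLine(greeter.Greet());
    }
}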

The Uncached Example

Let's build the uncached example first, and we can begin by setting up the MVC client. For this project we'll be using RestSharp (which I have also written about before) to consume the API responses, so be sure to download the NuGet package.

Because we're using Dependency Injection, we need our clients to be injectable into our controllers. For this reason, we will make both an interface and a class for our client, like so:

public interface ISampleClient : IRestClient  
{
    DateNumberObject GetSampleDateAndNumberUncached();
}

public class SampleClient : RestClient, ISampleClient  
{
    public SampleClient()
    {
        BaseUrl = new Uri("http://localhost:58566/");
    }

    public DateNumberObject GetSampleDateAndNumberUncached()
    {
        RestRequest request = new RestRequest("samples/date", Method.GET);
        var response = Execute<DateNumberObject>(request);
        return response.Data;
    }
}

Note that we don't need to register this client with StructureMap because the naming follows the standard conventions (ISampleClient maps to SampleClient).
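
If the names didn't line up like that, we would add the mapping ourselves in DefaultRegistry.cs. This isn't needed in the sample project; it's just to show what an explicit registration would look like:

public DefaultRegistry() {
    Scan(
        scan => {
            scan.TheCallingAssembly();
            scan.WithDefaultConventions();
            scan.With(new ControllerConvention());
        });

    // Only necessary when the names don't follow the I{Name}/{Name} convention.
    For<ISampleClient>().Use<SampleClient>();
}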

Now we need our controller...

[RoutePrefix("Home")]
public class HomeController : Controller  
{
    private ISampleClient _sampleClient;

    public HomeController(ISampleClient sampleClient)
    {
        _sampleClient = sampleClient;
    }

    [HttpGet]
    [Route("Uncached")]
    [Route("")]
    [Route("~/")]
    public ActionResult Uncached()
    {
        var model = _sampleClient.GetSampleDateAndNumberUncached();
        return View(model);
    }
}

...which returns a view:

@model WebApiCacheDemo.Contracts.Samples.DateNumberObject
@{
    ViewBag.Title = "Uncached Date Sample";
}

<h2>Uncached Results</h2>  
<div class="row">  
    <span><strong>Current Date:</strong></span>
    @Model.CurrentDate
</div>  
<div class="row">  
    <span><strong>Random Number:</strong></span>
    @Model.RandomNumber
</div>  

When we run this, we can see that the view returns the newest date and time, as shown in this gif:

That's exactly what we would expect to see, of course. Now, we can get down to implementing a cache at the web layer for this.

The Cached Example

The first thing we need is our CacheService interface and implementation, which I totally stole from this StackOverflow answer:

public class InMemoryCache : ICacheService  
{
    public T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class
    {
        T item = MemoryCache.Default.Get(cacheKey) as T;
        if (item == null)
        {
            item = getItemCallback();
            MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(30));
        }
        return item;
    }
}

public interface ICacheService  
{
    T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class;
}

We also need our SampleClient class updated to use this cache service, like so:

public class SampleClient : RestClient, ISampleClient  
{
    private ICacheService _cache;
    public SampleClient(ICacheService cache)
    {
        _cache = cache;
        BaseUrl = new Uri("http://localhost:58566/");
    }

    public DateNumberObject GetSampleDateAndNumber()
    {
        return _cache.GetOrSet("SampleDateAndNumber", () => GetSampleDateAndNumberUncached());
    }

    public DateNumberObject GetSampleDateAndNumberUncached()
    {
        RestRequest request = new RestRequest("samples/date", Method.GET);
        var response = Execute<DateNumberObject>(request);
        return response.Data;
    }
}

NOTE: In my structure, I am intentionally implementing the cached and uncached operations as separate methods in the same client, so that either one can be used. This may or may not be the correct usage in your application.

Further, we need to update our controller:

[RoutePrefix("Home")]
public class HomeController : Controller  
{
    ...

    [HttpGet]
    [Route("Cached")]
    public ActionResult Cached()
    {
        var model = _sampleClient.GetSampleDateAndNumber();
        return View(model);
    }
}

The last step is to register the ICacheService implementation in StructureMap's DefaultRegistry.cs class:

namespace WebApiCacheDemo.Mvc.DependencyResolution {  
    ...

    public class DefaultRegistry : Registry {
        public DefaultRegistry() {
            ...
            var inMemoryCache = new InMemoryCache();
            For<ICacheService>().Use(inMemoryCache);
        }
    }
}

Now, when we run this sample, we will see the cached values for the date and number, as shown in the following gif:

Now we've successfully implemented our cache AND used Dependency Injection in the process!

Summary

With this structure, we've successfully implemented a caching layer in our MVC application, all while using StructureMap to provide Dependency Injection. This structure allows us to swap cache providers without too much trouble (in case we, say, move to a database-backed cache for server farms), as well as to inject the cache service into our existing clients.

Don't forget to check out the sample project over on GitHub, and feel free to point out how you've used this implementation or where it can be improved in the comments.

Happy Coding!

Decimal vs Double and Other Tips About Number Types in .NET

I've been coding with .NET for a long time. In all of that time, I haven't really had a need to figure out the nitty-gritty differences between float and double, or between decimal and pretty much any other type. I've just used them as I saw fit, and hoped that's how they were meant to be used.

Until recently, anyway. Recently I attended the AngleBrackets conference, and one of the coolest parts of attending that conference is getting to be in an in-depth workshop. My particular workshop was called I Will Make You A Better C# Programmer by Kathleen Dollard, and my reaction was thus:

One of the most interesting things I learned at Kathleen's session was that the .NET number types don't always behave the way I think they do. In this post, I'm going to walk through a few (a VERY few) of Kathleen's examples and try to explain why .NET has so many different number types and what they are each for. Come along as I (with her code) attempt to show what the number types are, and what they are used for!

The Number Types in .NET

Let's start with a review of the more common number types in .NET. Here's a few of the basic types:

  • Int16 (aka short): A signed integer with 16 bits (2 bytes) of space available.
  • Int32 (aka int): A signed integer with 32 bits (4 bytes) of space available.
  • Int64 (aka long): A signed integer with 64 bits (8 bytes) of space available.
  • Single (aka float): A 32-bit floating point number.
  • Double (aka double): A 64-bit floating-point number.
  • Decimal (aka decimal): A 128-bit floating-point number with a higher precision and a smaller range than Single or Double.

There's an interesting thing to point out when comparing double and decimal: the range of double is ±5.0 × 10^-324 to ±1.7 × 10^308, while the range of decimal is (-7.9 × 10^28 to 7.9 × 10^28) / (10^(0 to 28)). In other words, the range of double is vastly larger than the range of decimal. The reason for this is that they are used for quite different things.
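
You don't have to take the documentation's word for it; the framework exposes these limits as constants. Here's a quick standalone sketch (not from Kathleen's samples) that prints them:

using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(double.MaxValue);   // roughly 1.8 x 10^308
        Console.WriteLine(double.Epsilon);    // roughly 4.9 x 10^-324, the smallest positive double
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335, roughly 7.9 x 10^28
        Console.WriteLine(decimal.MinValue);  // -79228162514264337593543950335
    }
}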

Precision vs Accuracy

One of the concepts that's really important to discuss when dealing with .NET number types is that of precision vs. accuracy. To make matters more complicated, there are actually two different definitions of precision, one of which I will call arithmetic precision.

  • Precision refers to the closeness of two or more measurements to each other. If you measure something five times and get exactly 4.321 each time, your measurement can be said to be very precise.
  • Accuracy refers to the closeness of a value to a standard or known value. If you measure something and find its weight to be 4.7kg, but the known value for that object is 10kg, your measurement is not very accurate.
  • Arithmetic precision refers to the number of digits used to represent a number (e.g. how many numbers after the decimal are used). The number 9.87 is less arithmetically precise than the number 9.87654332.

We always need mathematical operations in a computer system to be accurate; we can never accept 4 x 6 = 32. Further, we also need these calculations to be precise in the everyday sense of the term; 4 x 6 must be precisely 24 no matter how many times we make that calculation. However, the extent to which we need our systems to be arithmetically precise has a direct impact on the performance of those systems.

If we lose some arithmetic precision, we gain performance. The reverse is also true: if we need values to be arithmetically precise, we will spend more time calculating those values. Forgetting this can lead to incredible performance problems, problems which can be solved by using the correct type for the correct problem. These kinds of issues are most clearly shown during Test 3 later in this post.

Why Is Int The Default?

Here's something I've always wondered. If you take this line of code:

var number = 5;  

Why is the type of number always an int? Why not a short, since that takes up less space? Or maybe a long, since that will represent nearly any integer we could possibly use?

Turns out, the answer is, as it often is, performance. .NET optimizes your code to run in a 32-bit architecture, which means that any operations involving 32-bit integers will by definition be more performant than either 16-bit or 64-bit operations. I expect that this will change as we move toward a 64-bit architecture being standard, but for now, 32-bit integers are the most performant option.
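
You can see the compiler's choice for yourself with a tiny sketch (again, not from Kathleen's samples):

using System;

class InferenceDemo
{
    static void Main()
    {
        var number = 5;
        Console.WriteLine(number.GetType());    // System.Int32

        var bigNumber = 5000000000;             // too large for an int...
        Console.WriteLine(bigNumber.GetType()); // ...so this literal becomes a System.Int64
    }
}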

Testing the Number Types

One of the ways we can start to see the inherent differences between the above types is by trying to use them in calculations. We're going to see three different tests, each of which will reveal a little more about how .NET uses each of these types.

Test 1: Division

Let's start with a basic example using division. Consider the following code:

private void DivisionTest1()  
{
    int maxDiscountPercent = 30;
    int markupPercent = 20;
    Single niceFactor = 30;
    double discount = maxDiscountPercent * (markupPercent / niceFactor);
    Console.WriteLine("Discount (double): ${0:R}", discount);
}

private void DivisionTest2()  
{
    byte maxDiscountPercent = 30;
    int markupPercent = 20;
    int niceFactor = 30;
    int discount = maxDiscountPercent * (markupPercent / niceFactor);
    Console.WriteLine("Discount (int): ${0}", discount);
}

Note that the only real difference between these two methods is the types of the local variables.

Now here's the question: what will the discount be in each of these methods?

If you said that they'll both be $20, you're missing something very important.

The problem line is this one, from DivisionTest2():

int discount = maxDiscountPercent * (markupPercent / niceFactor);  

Here's the problem: because markupPercent and niceFactor are both declared as int (which in turn makes them Int32s), dividing one by the other is integer division, and the result will also be an int, even when we would logically expect it to be something like a double. .NET does this by truncating the result, so because 20 / 30 = 0.6666667, what you get back is 0 (and anything times 0 is 0).

In short, the discount for DivisionTest1 is the expected $20, but the discount for DivisionTest2 is $0, and the only difference between them is what types are used. That's quite a difference, no?
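
The usual fix, if you want integer inputs but a fractional result, is to promote one of the operands to a floating-point type before dividing. Here's a sketch (not part of the original samples) of what that looks like:

private void DivisionTest2Fixed()
{
    byte maxDiscountPercent = 30;
    int markupPercent = 20;
    int niceFactor = 30;

    // Casting one operand to double forces floating-point division,
    // so 20 / 30 is no longer truncated to 0.
    double discount = maxDiscountPercent * (markupPercent / (double)niceFactor);
    Console.WriteLine("Discount (fixed): ${0}", discount);  // $20
}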

Test 2: Double Addition

Now we get to see something really weird, and it involves the concept of arithmetic precision from earlier. Here's the next method:

public void DoubleAddition()  
{
    Double x = .1;
    Double result = 10 * x;
    Double result2 = x + x + x + x + x + x + x + x + x + x;

    Console.WriteLine("{0} - {1}", result, result2);
    Console.WriteLine("{0:R} - {1:R}", result, result2);
}

Just by reading this code, we expect result and result2 to be the same: multiplying .1 x 10 should equal .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1.

But there's another trick here, and that's the usage of the "{0:R}" string formatter. That's called the round-trip formatter, and it tells .NET to display this number to its maximum arithmetic precision.

If we run this method, what does the output look like?

By using the round-trip formatter, we see that the multiplication result ended up being exactly 1, but the addition result was off from 1 by a minuscule (but still potentially significant) amount. Question is: why does it do this?

In most systems, a number like 0.1 cannot be represented exactly in binary, so there will be some amount of arithmetic precision error when using such a number. Generally, that error is not noticeable when doing mathematical operations, but the more operations you perform, the more noticeable the error becomes. The reason we see the error above is that the multiplication portion performed only one operation, while the addition portion performed ten, allowing the arithmetic precision error to compound with each one.
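
This is also where decimal behaves differently: because it stores base-10 digits, 0.1 is represented exactly, and there is no error to compound. Here's a sketch mirroring the method above (not part of the original samples):

public void DecimalAddition()
{
    decimal x = .1m;
    decimal result = 10 * x;
    decimal result2 = x + x + x + x + x + x + x + x + x + x;

    // Both values are exactly 1.0; there is no representation error to compound.
    Console.WriteLine("{0} - {1}", result, result2);
}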

Test 3: Decimal vs Double Performance

Now we get to see something really interesting. I'm often approached by new .NET programmers with a question like the following: why should we use decimal over double and vice-versa? This test pretty clearly spells out when and why you should use these two types.

Here's the sample code:

private int iterations = 100000000;

private void DoubleTest()  
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    Double z = 0;
    for (int i = 0; i < iterations; i++)
    {
        Double x = i;
        Double y = x * i;
        z += y;
    }
    watch.Stop();
    Console.WriteLine("Double: " + watch.ElapsedTicks);
    Console.WriteLine(z);
}

private void DecimalTest()  
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    Decimal z = 0;
    for (int i = 0; i < iterations; i++)
    {
        Decimal x = i;
        Decimal y = x * i;
        z += y;
    }
    watch.Stop();
    Console.WriteLine("Decimal: " + watch.ElapsedTicks);
    Console.WriteLine(z);
}

For each of these types, we are doing a series of operations (100 million of them) and seeing how many ticks it takes for the double operation to execute vs how many ticks it takes for the decimal operations to execute. The answer is startling:

The operations involving double take 790836 ticks, while the operations involving decimal take a whopping 16728386 ticks. In other words, the decimal operations take 21 times longer to execute than the double operations. (If you run the sample project, you'll notice that the decimal operations take visibly longer than the double ones).

But why? Why does double take so much less time than decimal?

For one thing, double uses base-2 math, while decimal uses base-10 math. Base-2 math is much quicker for computers to calculate.

Further, what double is concerned with is performance, while what decimal is concerned with is precision. When using double, you are accepting a known trade-off: you won't be super precise in your calculations, but you will get an acceptable answer quickly. Whereas with decimal, precision is built into its type: it's meant to be used for money calculations, and guess how many people would riot if those weren't accurate down to the 23rd place after the decimal point.

In short, double and decimal are two totally separate types for a reason: one is for speed, and the other is for precision. Make sure you use the appropriate one at the appropriate time.

Summary

As can be expected from such a long-lived framework, .NET has several number types to help you with your calculations, ranging from simple integers to complex currency-based values. As always, it's important to use the correct tool for the job:

  • Use double for non-integer math where the most precise answer isn't necessary.
  • Use decimal for non-integer math where precision is needed (e.g. money and currency).
  • Use int by default for any integer-based operations that can use that type, as it will be more performant than short or long.

Don't forget to check out the sample project over on GitHub!

Are there any other pitfalls or improvements we should be aware of? Feel free to sound off in the comments!

Happy Coding!

Huge thanks to Kathleen Dollard (@kathleendollard) for providing the code samples and her invaluable insight into how to effectively explain what's going on in these samples. Check out her Pluralsight course for more!

Working with Environments and Launch Settings in ASP.NET Core

I'm continuing my series of deep dives into ASP.NET Core, on my never-ending quest to keep my coding career from becoming obsolete. In this part of the series, let's discuss a new feature in ASP.NET Core that will make debugging and testing much more intuitive and pain-free: Environments and Launch Settings.

ASP.NET Core includes the native ability to modify how an application behaves (and what it displays) based upon the environment it is currently executing in. This ends up being quite useful, since now we can do things as simple as not showing the error page to people using the application in a production environment. But it requires just a tiny bit of setup, and some knowledge of how environments are controlled.

ASPNETCORE_ENVIRONMENT

At the heart of all the stuff ASP.NET Core can do with environments is the ASPNETCORE_ENVIRONMENT environment variable. This variable controls what type of environment the current build will run in. You can edit this variable manually by right-clicking your project file and selecting "Properties", then navigating to the "Debug" tab.

There are three "default" values for this variable:

  • Development
  • Staging
  • Production

Each of those values corresponds to certain methods we can use in our Startup.cs file. In fact, the Startup.cs file includes several extension methods that make modifying our application based on its current environment quite simple.

Selecting an Environment

The Startup.cs file that every ASP.NET Core application needs does many things, not the least of which is setting up the environment in which the application runs. Here's a snippet of the default Startup.cs file that was generated when I created my sample project for this post:

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Notice the IHostingEnvironment parameter env. It represents the current environment in which the application is running, and ASP.NET Core includes four extension methods that allow us to check the current value of ASPNETCORE_ENVIRONMENT:

  • IsDevelopment()
  • IsStaging()
  • IsProduction()
  • IsEnvironment()

Notice what we're setting up in the Startup.cs file: if the current environment is Development, we use the Developer Exception Page (the replacement for the old Yellow Screen of Death) and enable Browser Link in Visual Studio, which helps us test our application in several browsers at once.

But we don't want those features enabled in a Production environment, so instead our configuration establishes an error handling route "/Home/Error".

This is the coolest thing about this feature: you can decide what services are enabled for your application in each environment without ever touching a configuration file!

However, what if you need more environments? For example, at my day job we have up to four environments: Development, Staging, Production, and Testing, and very often I'll need to change the application's behavior or display based on which environment it's currently running in. How do I do that in ASP.NET Core?
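
On the code side, the answer is the IsEnvironment() method from the list above, which compares the current environment name against any string we give it. Here's a sketch (assuming a custom Testing environment) of what a check inside Configure() might look like:

if (env.IsEnvironment("Testing"))
{
    // Enable whatever services the Testing environment needs;
    // the comparison ignores case, so "testing" matches too.
    app.UseDeveloperExceptionPage();
}

The other half of the answer, actually launching the app with ASPNETCORE_ENVIRONMENT set to Testing, is handled by the launchSettings.json file.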

The launchSettings.json File

ASP.NET Core 1.0 includes a new file called launchSettings.json; in any given project, you'll find that file under the Properties submenu:

This file sets up the different launch environments that Visual Studio can launch automatically. Here's a snippet of the default launchSettings.json file that was generated with my demo project:

{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "CoreEnvironmentBehaviorDemo": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

At the moment we have two profiles: one which runs the app using IIS Express and one which runs the app using a command-line handler. The interesting thing about these two profiles is that they correspond directly to the Visual Studio play button:

In other words: the launchSettings.json file controls what environments you can run your app in using Visual Studio. In fact, if we add a new profile, it automatically adds that profile to the button, as seen on this GIF I made:

All this together means that we can run the app in multiple different environment settings from Visual Studio, simply by setting up the appropriate launch profiles.
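
For example, a profile for our hypothetical Testing environment might look like this (the profile name here is made up; the pattern simply follows the generated profiles above):

"CoreEnvironmentBehaviorDemo (Testing)": {
  "commandName": "Project",
  "launchBrowser": true,
  "launchUrl": "http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Testing"
  }
}

With that profile selected from the play button's dropdown, env.IsEnvironment("Testing") returns true, and any Testing-specific services and markup light up.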

The Environment Tag Helper

Quite a while ago I wrote a post about Tag Helpers and how awesome they are, and I want to call out a particular helper in this post: the <environment> tag helper.

With this helper, we can modify how our MVC views are constructed based upon which environment the application is currently running in. In fact, the default _Layout.cshtml file in Visual Studio 2015 comes with an example of this:

<environment names="Development">  
    <script src="~/lib/jquery/dist/jquery.js"></script>
    <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
    <script src="~/js/site.js" asp-append-version="true"></script>
</environment>  
<environment names="Staging,Production">  
    <script src="https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.2.0.min.js"
            asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
            asp-fallback-test="window.jQuery">
    </script>
    <script src="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.6/bootstrap.min.js"
            asp-fallback-src="~/lib/bootstrap/dist/js/bootstrap.min.js"
            asp-fallback-test="window.jQuery && window.jQuery.fn && window.jQuery.fn.modal">
    </script>
    <script src="~/js/site.min.js" asp-append-version="true"></script>
</environment>  

In this sample, when we run the app in Development mode we use the local files for jQuery, Bootstrap, and our custom JavaScript; but if we're running in the Staging or Production environments, we use copies of those library files from the ASP.NET Content Delivery Network (CDN). In this way, we improve performance, because a CDN can deliver these static files from one of many data centers around the globe, rather than needing to round-trip all the way to the source server to get them.

But, remember, in our sample we have the custom environment Testing. The environment tag helper also supports custom environments:

<div class="row">  
    <environment names="Development">
        <span>This is the Development environment.</span>
    </environment>
    <environment names="Testing">
        <span>This is the Testing environment.</span>
    </environment>
    <environment names="Production">
        <span>This is the Production environment.</span>
    </environment>
</div>  

Summary

Working with environments (both default and custom) gets a whole lot easier in ASP.NET Core, and with the new Launch Settings feature, getting the app started in different environment modes is a breeze! Using these features, we can:

  • Create and use custom environments.
  • Enable or disable application services based on the environment the app is running in.
  • Use the environment tag helper (<environment>) to modify our MVC views based on the current execution environment.

You can check out my sample project over on GitHub to see this code in action. Feel free to mention any cool tips and tricks you've found when dealing with environments and launch settings in the comments!

Happy Coding!

Real-World CQRS/ES with ASP.NET and Redis Part 5 - Running the APIs

NOTE: This is the final part of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

All our work in the previous parts of this series (learning what Command-Query Responsibility Segregation and Event Sourcing are, building the Write Model to modify our aggregate roots, building the Read Model to query data, and building both our Write and Read APIs) has led to this. We can now test these two APIs using Postman and see how they operate.

In this post, the final part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Run the Commands API with both valid and invalid commands.
  • Run the Queries API with existent and non-existent data.
  • Discuss some shortcomings of this design.

You're on the last lap, so don't stop now!

Command - Creating Locations

The first thing we should do is run a few commands to load our Write and Read models with data. To do that, we're going to use my favorite tool Postman to create some requests.

First, let's run a command to create a new location. Here's a screenshot of the Postman request:

Running this request returns 200 OK, which is what we expect. But what happens if we try to run the exact same request again?

Hey, lookie there! Our validation layer is working!

Let's create another location:

Well, seems our create location process is working fine. Or, at least, it looks like it is.

Query - Locations (All and By ID)

To be sure that our system is properly updating the read model, let's submit a query to our Queries API that returns all locations:

Which looks good. Let's also query for a single location by its ID. First, let's get Location #2:

Now we can query for Location #3:

Oh, wait, that's right, there is no Location #3. So we get back HTTP 400 Bad Request, which is also what we expect. (You could also make this return HTTP 404 Not Found, which is more semantically correct).

OK, great, adding and querying Locations works. But what about Employees?

Command - Creating Employees

Let's first create a new employee and assign him to Location #1:

Let's also create a couple more employees:

So now we should have two employees at Location #1 and a third employee at Location #2. Let's query for employees by location to confirm this.

Query - Employees by Location

Here's our Postman screenshot for the Employees by Location query for each location.

Just as we thought, there are two employees at Location #1 and a third at Location #2.

We're doing pretty darn good so far! But what happens if Reggie Martinez (Employee #3) needs to transfer to Location #2? We can do that with the proper commands.

Command - Assign Employee to Location

Here's a command to move Mr. Martinez to Location #2:

And now, if we query for all employees at Location #2:

I'd say we've done pretty darn good! All our commands do what we expect them to do, and all our queries return the appropriate data. We've now got ourselves a working CQRS/ES project with ASP.NET and Redis!

Shortcomings of This Design

Even though we've done a lot of work on this project and I think we've mostly gotten it right, there's still a few places that I think could be improved:

  • All Redis access through repositories. I don't like having the Event Handlers access the Redis database directly; I'd rather have them do that through the repositories. This would be easy to do; I just didn't have time before my publish date.
  • Better splitting of requests/commands and commands/events. I don't like how commands always seem to result in exactly one event.

That said, I'm really proud of the way this project turned out. If you see any additional areas for improvement, please let me know in the comments!

Summary

In this final part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Ran several queries and commands.
  • Confirmed that those queries and commands worked as expected.
  • Discussed a couple of shortcomings of this design.

As always, I welcome (civil) comments and discussion on my blog, and feel free to fork or patch or do whatever to the GitHub repository that goes along with this series. Thanks for reading!

Happy Coding!

Post image is Toddler running and falling from Wikimedia, used under license