
Consuming Web API Custom Validation in MVC using RestSharp

I previously wrote a post called Custom Validation in ASP.NET Web API with FluentValidation, in which I showed how my group set up a validation framework over WebAPI using the FluentValidation NuGet package.

In this post, we'll go over how to consume those responses in an ASP.NET MVC app, as well as how to take the error messages from the API response and automatically add them to the MVC ModelState.

I have updated the sample project on GitHub with the code from this post, so it may be easier for you to pull that down first and then follow along. Whichever way you like to learn, let's get started!

Goals and Dependencies

In building this system, we have two major goals in mind:

  1. All validation will be done on the Web API side, but the consuming MVC app will need to display the errors to the user.
  2. In order to make the first goal easier, we want the errors from the Web API project automatically imported into the MVC ModelState.

For this solution, we are using a couple of packages from NuGet:

  • JSON.NET (allows simple serializing and deserializing to and from JSON)
  • RestSharp (simplifies making calls to REST APIs)

Finally, we are using a pattern called POST-REDIRECT-GET in our MVC app which enables us to pass ModelStates from one action to another (check out that post for more details).

Remember the Alamo Models!

First, let's remind ourselves what the models and package looked like from the previous post.

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime BirthDate { get; set; }
    public string Username { get; set; }
}

public class UserValidator : AbstractValidator<User>  
{
    public UserValidator()
    {
        RuleFor(x => x.FirstName).NotEmpty().WithMessage("The First Name cannot be blank.")
                                    .Length(0, 100).WithMessage("The First Name cannot be more than 100 characters.");

        RuleFor(x => x.LastName).NotEmpty().WithMessage("The Last Name cannot be blank.");

        RuleFor(x => x.BirthDate).LessThan(DateTime.Today).WithMessage("You cannot enter a birth date in the future.");

        RuleFor(x => x.Username).Length(8, 999).WithMessage("The user name must be at least 8 characters long.");
    }
}

public class ResponsePackage  
{
    public List<string> Errors { get; set; }
    public object Result { get; set; }
    public bool HasErrors
    {
        get
        {
            if (Errors != null)
            {
                return Errors.Any();
            }
            return false;
        }
    }
    public ResponsePackage(object result, List<string> errors)
    {
        Errors = errors;
        Result = result;
    }
}

Recall that in this system, any response at all will be wrapped by a ResponsePackage, so one of the things the consumer system will have to do is extract the JSON from the package and deserialize it to an object.
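To make that concrete, here's a hypothetical wrapped response for a failed add (the exact property casing depends on your serializer settings; HasErrors is computed server-side but gets serialized along with the other properties):

{
  "Errors": [ "The Last Name cannot be blank." ],
  "Result": null,
  "HasErrors": true
}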

Creating the Clients

In order to call the API using RestSharp, we can use "client" classes that implement methods which call API functions. In our case, we decided to create a "base" client class which all other clients would inherit from. The base client needs three things:

  1. A constructor which passes in the current ModelState (so that validation errors from the API can be inserted into it).
  2. An Execute() method which executes the current request.
  3. An Execute<T>() method which executes the current request and automatically extracts the content of the ResponsePackage to the specified type.

Here's the code for the base client:

public class ClientBase : RestClient  
{
    private ModelStateDictionary _modelstate;

    public ClientBase(ModelStateDictionary modelstate) : base(Constants.ApiUrl)
    {
        _modelstate = modelstate;
    }

    //This method executes the request and does not attempt to deserialize the response.  We use this for updates, deletes, etc.
    public new void Execute(IRestRequest request)
    {
        var response = base.Execute(request);
        var parsedObj = JObject.Parse(response.Content);
        var apiResponse = JsonConvert.DeserializeObject<ResponsePackage>(parsedObj.ToString());
        if (apiResponse.HasErrors)
        {
            AddErrors(apiResponse);
        }
    }

    //This method expects the Result property of the response to be a JSON object that can be deserialized into an object of type T
    public new T Execute<T>(IRestRequest request) where T : new()
    {
        var response = base.Execute(request);
        var parsedObj = JObject.Parse(response.Content);
        var apiResponse = JsonConvert.DeserializeObject<ResponsePackage>(parsedObj.ToString());
        if (apiResponse.HasErrors)
        {
            AddErrors(apiResponse);
            return default(T);
        }
        return response.Extract<T>();
    }

    private void AddErrors(ResponsePackage response)
    {
        List<string> listMessagesAdded = new List<string>();
        for (int i = 0; i < response.Errors.Count; i++)
        {
            if (listMessagesAdded.Contains(response.Errors[i])) continue;
            _modelstate.AddModelError("error" + i.ToString(), response.Errors[i]);
            listMessagesAdded.Add(response.Errors[i]);
        }
    }
}

Take particular note of the AddErrors() method. This method is key to the whole operation: it is what takes the error messages out of the ResponsePackage and inserts them into the MVC ModelState.

Now that we've got the base client, let's create the derivative UserClient. Recall from the previous post that we have two methods in the API: a method to get all the users and a method to add an additional one. We need corresponding methods in our UserClient, like so:

public class UserClient : ClientBase  
{
    public UserClient(ModelStateDictionary modelstate) : base(modelstate)
    {
    }

    public List<User> GetAll()
    {
        RestRequest request = new RestRequest("users/all");
        return Execute<List<User>>(request);
    }

    public void Add(User user)
    {
        RestRequest request = new RestRequest("users/add", Method.POST);
        request.AddJsonBody(user);
        Execute(request);
    }
}

There's still a piece missing. Exactly what does the Extract<T>() method above do? It reads the Result property of the ResponsePackage and deserializes it into an object of type T. We implement that in an extension class, like so:

public static class RestResponseExtensions  
{
    private static string ResultPropertyName = "Result";

    public static T Extract<T>(this IRestResponse response) where T : new()
    {
        var parsedObj = JObject.Parse(response.Content);
        return JsonConvert.DeserializeObject<T>(parsedObj[ResultPropertyName].ToString());
    }
}

NOTE: IRestResponse is an interface defined in RestSharp.

We're on our way! Now that we've got the clients defined, let's set up the MVC app.

Chaos Control(ler)

Let's do the code first. Here's the Controller:

[RoutePrefix("users")]
public class UserController : Controller  
{
    [HttpGet]
    [Route("all")]
    [Route("")]
    [Route("~/")]
    public ActionResult Index()
    {
        UserClient client = new UserClient(ModelState);
        var users = client.GetAll();
        return View(users);
    }

    [HttpGet]
    [Route("add")]
    [ImportModelState]
    public ActionResult Add()
    {
        var user = new User();
        return View(user);
    }

    [HttpPost]
    [Route("add")]
    [ExportModelState]
    public ActionResult Add(User user)
    {
        UserClient client = new UserClient(ModelState);
        client.Add(user);
        if(!ModelState.IsValid)
        {
            return RedirectToAction("Add");
        }
        return RedirectToAction("Index");
    }
}

The [ExportModelState] and [ImportModelState] attributes are part of the POST-REDIRECT-GET pattern that I've written about before. For now, just remember that those attributes allow the ModelState to get passed from one action to another (otherwise it gets lost on redirects).
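If you haven't seen those attributes before, here's a minimal sketch of the idea (the real implementations are in the PRG post and the GitHub project; the names and details below are illustrative): ModelState gets stashed in TempData on export and merged back in on import.

public abstract class ModelStateTransfer : ActionFilterAttribute
{
    protected const string Key = "ModelStateTransfer";
}

public class ExportModelStateAttribute : ModelStateTransfer
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        //Only export the ModelState if it's invalid and we're redirecting
        if (!filterContext.Controller.ViewData.ModelState.IsValid &&
            filterContext.Result is RedirectToRouteResult)
        {
            filterContext.Controller.TempData[Key] = filterContext.Controller.ViewData.ModelState;
        }
    }
}

public class ImportModelStateAttribute : ModelStateTransfer
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var modelState = filterContext.Controller.TempData[Key] as ModelStateDictionary;
        if (modelState != null)
        {
            //Merge the stashed errors into the current ModelState, then clean up
            filterContext.Controller.ViewData.ModelState.Merge(modelState);
            filterContext.Controller.TempData.Remove(Key);
        }
    }
}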

We also need to set up the Add view (if you want the code for the Index view, take a look at the GitHub project):

@model WebApiValidationDemo.Mvc.Lib.Models.User

@{
    ViewBag.Title = "Add a User";
}

@Html.ActionLink("Back to Index", "Index")

<h2>Add a User</h2>

@using (Html.BeginForm("Add", "User", FormMethod.Post))
{
    @Html.ValidationSummary() @* This is key *@
    <div>
        <div>
            @Html.LabelFor(x => x.FirstName)
            @Html.TextBoxFor(x => x.FirstName)
        </div>
        <div>
            @Html.LabelFor(x => x.LastName)
            @Html.TextBoxFor(x => x.LastName)
        </div>
        <div>
            @Html.LabelFor(x => x.BirthDate)
            @Html.TextBoxFor(x => x.BirthDate, new { type = "date" })
        </div>
        <div>
            @Html.LabelFor(x => x.Username)
            @Html.TextBoxFor(x => x.Username)
        </div>
        <div>
            <input type="submit" value="Save" />
        </div>
    </div>
}

The Html.ValidationSummary() is key, since that will display the error messages found in the ModelState.

What Does THIS Button Do?

It saves the User. Sheesh, if you just wait a minute, you'll find out.

(My son asked me this question regarding a different app as I was writing this section. My response was the same.)

We will now test the app to ensure that it is receiving validation errors and displaying them on the page in the Html.ValidationSummary() control. As a reminder, here are the rules that validate the User:

  1. The FirstName cannot be blank.
  2. The FirstName cannot be more than 100 characters.
  3. The LastName cannot be blank.
  4. The BirthDate cannot be in the future (relative to the current date).
  5. The Username must be at least 8 characters long.

If we run the MVC app, we'll end up on a page that looks like this:

Now let's test adding a valid user and an invalid user.

Valid User

We can click on "Add a User" to see the Add a User page. Let's attempt to add a valid user, like so:

Clicking on save causes none of the validation rules to fire, so we end up back at the index page. Success!

Invalid User

Now let's try to add an invalid user.

Note that this user violates three of the rules:

  • The first name is blank.
  • The birth date is in the future.
  • The username is less than 8 characters long.

Attempting to save this user results in this:

Success! The error messages were successfully returned to the ModelState and shown to the user.

The Best Part

Here's the best part about this whole situation: adding new validation rules on the API requires zero changes to the consumers! We've removed any dependency the MVC app has on the validation implemented by the API.

Let's say we implement a new validation rule:

  • The Last Name cannot be less than 5 characters.

The resulting change to our UserValidator looks like this:

public class UserValidator : AbstractValidator<User>  
{
    public UserValidator()
    {
        ...
        RuleFor(x => x.LastName).Length(5, 999).WithMessage("The Last Name cannot be less than 5 characters");
        ...
    }
}

Now, let's resubmit that invalid user we just created, and here are the error messages we get:

Summary

We have now built an ASP.NET MVC app that can successfully:

  • Consume the responses sent by the Web API's custom validation layer, even though they're all wrapped in ResponsePackage.
  • Display the error message returned by the API.

Take a look at the GitHub project if you haven't already done so, and feel free to point out what I did wrong (or right, hey I can hope) in the comments.

Happy Coding!

Exploring the JSON Configuration Files in ASP.NET Core 1.0

As I have mentioned before, my team and I are working on getting up and running with a new ASP.NET Core 1.0 application. I've spent some time with them going over the new JSON-based configuration files like project.json, global.json, and appsettings.json, and now I'm finally at a point where I can write this post. Let's take a look at the three default JSON configuration files in a new ASP.NET Core application, and see what each of them is responsible for and what new functionality we can expect to be using.

NOTE: This post was written using RC1 (Release Candidate 1) of ASP.NET Core, and as the framework gets closer to final release, some things may change. If they do, I'll do my best to keep this post up-to-date. If you notice anything that needs to be updated, let me know in the comments.

global.json

The first file we should take a look at is in the Solution Items folder, called "global.json":

Our sample project structure, with the global.json file singled out.

This file contains settings relevant to the entire solution, specifically what projects are included in said solution and what version of the .NET Execution Environment (DNX) is being used. Here's the contents of that file:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-rc1-update1"
  }
}

The "projects" section specifies the folders which contain projects that are to be included in this solution. In our case, only the "src" folder currently exists, but if we had a "test" folder any projects within it would be included into our solution.

The "sdk" section specifies the version of the DNX that our solution is to be run under. As this post was written when ASP.NET Core was a Release Candidate, you'll see that the version of the DNX in use is "1.0.0-rc1-update1". This value will change for each version of ASP.NET Core released.

There are other sections we can use under the "sdk" section:

  • "runtime": Allows us to specify whether we will run on the ASP.NET 4.6 runtime ("clr") or the ASP.NET Core 1.0 runtime ("coreclr"). If we don't specify this, the app will be compiled for both runtimes.
  • "architecture": Allows us to specify whether this app is run under x86 or x64 computer architectures. The default architecture depends on the runtime selected.

Even better, we now get full Intellisense support in these configuration files:

The next file we need to dive into is the "appsettings.json" file.

appsettings.json

Our sample project structure, with the appsettings.json file singled out.

"appsettings.json" stores the application settings for our program, which were formerly stored in the <appSettings> portion of the web.config file for previous versions of ASP.NET. The default file looks like this:

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=aspnet5-ASPNET5Demo-14da7232-82c7-4174-aee7-d8a42c04c5c3;Trusted_Connection=True;MultipleActiveResultSets=true"
    }
  },
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Verbose",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

Currently we have two sets of settings, "Data" and "Logging". These two sets are not special, and you can define any sets you like. The way to access these values from within our code base is covered quite nicely over at docs.asp.net.
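As a quick taste, though, here's a minimal sketch (assuming the RC1-era ConfigurationBuilder setup shown below): values are addressed with colon-delimited keys that mirror the JSON nesting.

//Somewhere with access to the built configuration (e.g. in Startup)
var connectionString = Configuration["Data:DefaultConnection:ConnectionString"];
var defaultLogLevel = Configuration["Logging:LogLevel:Default"]; //"Verbose"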

This file is, by default, loaded into the application's configuration by a couple of lines of code in the Startup.cs file:

var builder = new ConfigurationBuilder()  
                .AddJsonFile("appsettings.json")
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

Note the last call to AddJsonFile(). ASP.NET by default allows us to inject different application settings files based on the current environment. If you're running in the Development environment and you have a file called "appsettings.development.json", the values in that file will be added to the configuration only when executing in Development. This is how we handle adding different connection strings for different environments. In other words, environment-specific appsettings files replace XDT transforms (web.config transforms).
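For example, a hypothetical appsettings.Development.json might override just the connection string, while every other value falls through to appsettings.json:

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=MyDevDatabase;Trusted_Connection=True;"
    }
  }
}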

There's one last file we should look at, and it's arguably the most important of the three: project.json.

project.json

Our sample project structure, with the project.json file singled out.

The project.json file, in our default ASP.NET Web Application, contains information about how the project is to be built, what dependencies it has, what frameworks it runs on, etc. The default file looks like this:

{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },

  "userSecretsId": "aspnet5-ASPNET5Demo-14da7232-82c7-4174-aee7-d8a42c04c5c3",

  "dependencies": {
    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    "Microsoft.AspNet.Authentication.Cookies": "1.0.0-rc1-final",
    "Microsoft.AspNet.Diagnostics.Entity": "7.0.0-rc1-final",
    "Microsoft.AspNet.Identity.EntityFramework": "3.0.0-rc1-final",
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
    "Microsoft.AspNet.Tooling.Razor": "1.0.0-rc1-final",
    "Microsoft.Extensions.CodeGenerators.Mvc": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.FileProviderExtensions" : "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.Json": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.UserSecrets": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Console": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Debug": "1.0.0-rc1-final",
    "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0-rc1-final"
  },

  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "ef": "EntityFramework.Commands"
  },

  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },

  "exclude": [
    "wwwroot",
    "node_modules"
  ],
  "publishExclude": [
    "**.user",
    "**.vspscc"
  ],
  "scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ]
  }
}

Let's step through each of these sections to better understand what they do.

"version"

The version number of the application. I know, I know, it's positively mind-boggling, but try to contain your enthusiasm, please.

"compilationOptions"

The "compilationOptions" section defines certain options for the project, such as the language version or turning warnings into errors.

"compilationOptions": {
    "emitEntryPoint": true,
    "warningsAsErrors": true,
    "languageVersion": "csharp6"
},

"userSecretsId"

ASP.NET Core includes a secrets configuration system which allows you to store confidential information (e.g. connection strings that use passwords) on your local system rather than checking them in to source control. You can do this by using the Secret Manager tool. Please note that the Secret Manager is still in beta, and could change.

The "userSecretsId" section of project.json is used by the Secret Manager to access application secrets in a secure manner.

"dependencies"

This is, hands-down, the coolest section of this file.

ASP.NET Core 1.0 is completely broken down into NuGet packages, creating a pluggable system that allows you to only use the packages you actually need. The "dependencies" section of the project.json file allows you to tell the DNX what dependencies are required for your app to run. Here's a snippet of the section:

  "dependencies": {
    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    ...
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
    "Microsoft.AspNet.Tooling.Razor": "1.0.0-rc1-final",
    ...
  },

Each of the dependencies specifies its version number, and we get full Intellisense support here just like we do in the other files. Even better than that, we can get NuGet packages from the NuGet server, and we get Intellisense support for that as well:

But the most amazing thing is that, when we specify a new NuGet dependency and save the file, Visual Studio will automatically attempt to download the NuGet package and include it in our project. NuGet is now seamless, fully integrated into Visual Studio and ASP.NET projects. NuGet all the things!

"commands"

The "commands" section defines DNX commands that can be run from the command line or Visual Studio's Command Window. A command, according to the docs, is a named execution of a .NET entry point with specific arguments. Essentially, the .NET Execution Environment exposes a way to run commands within the environment, and you can define what exactly those commands do in this section.

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "ef": "EntityFramework.Commands"
  },

Note the defined "web" command. That command will start a Kestrel web server and begin listening for requests. In other words, the "web" command is a shortcut to getting Kestrel running. The ASP.NET docs have a more in-depth tutorial on using commands, both in the command line and VS's Command Window.
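For example, from the project folder you can launch the site yourself with the DNX (assuming packages have already been restored):

dnx web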

"frameworks"

This section goes hand-in-hand with the "dependencies" section. Here's the JSON for reference:

"frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },

ASP.NET can now target multiple versions of the .NET Framework, which is nice because the different versions support different things (for example, "dnx451" means .NET 4.5.1, which contains the DLLs for WebForms, something that "dnxcore50" doesn't have).

We can use this section to specify that certain dependencies should only be used when running on certain frameworks (regardless of whether they are framework dependencies or NuGet packages). For example, I could pull down the DLLs for WebForms and the NuGet package for JSON.NET only when using .NET 4.5.1, like so:

"frameworks": {
    "dnx451": {
      "frameworkAssemblies": {
        "System.Web.DynamicData": "4.0.0.0"
      },
      "dependencies": {
        "Newtonsoft.Json": "8.0.2"
      }
    },
    "dnxcore50": { }
},

"exclude"

The "exclude" section of project.json is used exclude folders from the Roslyn compilation search path (StackOverflow source).

"exclude": [
    "wwwroot",
    "node_modules"
],

This means that folders specified in this section will not be compiled (hence, by default, the folders included are "wwwroot" and "node_modules", both of which contain client-side resources like JS and CSS files and should not be compiled server-side). Simply adding another folder to the "exclude" collection will ensure that Roslyn does not attempt to compile that folder. For example, if you have a "forms" folder that contains PDF files, you probably don't want that folder to be compiled, so you would exclude it by placing it in this collection.
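That hypothetical "forms" folder exclusion would look like this:

"exclude": [
    "wwwroot",
    "node_modules",
    "forms"
],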

"publishExclude"

Similarly to "exclude", the "publishExclude" section specifies files that should not be published when the app is published.

"publishExclude": [
    "**.user",
    "**.vspscc"
  ],

The use of the ** wildcard indicates that any file, regardless of folder, with the specified extensions should not be published with the site. Right now, this section is just excluding some user- and Visual Studio-specific files.

"scripts"

The "scripts" section specifies commands that will be run at certain points in either the build or publish processes. The section currently defines four commands that run immediately before publishing.

"scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ]
  }

If I wanted to run certain commands immediately after the build, I could use the "postbuild" section, like so:

"scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ],
    "postbuild": [ "command1", "command2" ]
  }

The other possible sections are listed over on docs.asp.net, and most of them are self-explanatory.

Others

There are quite a few sections that ASP.NET and Visual Studio understand that are not included in the default project.json file. Of course, we get full Intellisense for these options, so feel free to explore them.

A screenshot of the intellisense options for the project.json file

Summary

The three default JSON configuration files (global.json, appsettings.json, project.json) contain a myriad of configuration options and completely replace the functionality of the old web.config files, as well as providing entirely new features like NuGet integration and full Intellisense support. In particular, I'm digging the Commands and Scripts sections, as now I can automate certain tedious parts of developing a web app (like bundling/minification, etc.).

If this was useful to you, if you noticed a bug in my code, or if you just need to sound off, feel free to do so in the comments!

Happy Coding!

Geocoding with Bing Maps REST Services in .NET

A major project we're working on in my team (which I've alluded to before) requires that we implement geocoding for a given address. This means that, given a valid address, we should have the ability to find that address's latitude and longitude coordinates so that we can store those values in our database.

A projection of the world, showing continents and oceans (Winkel triple projection by Strebe, used under license).

I've dealt with geocoding before, but never in a .NET server-side application and never using Bing Maps REST services, which is a requirement for this application. This post is mostly written so I'll be able to find it again, but hopefully some of you intrepid readers out there will get use out of it too.

Enough talk. Show me the code!

Setup

First of all, in order to even call the Bing Maps REST services you'll need a Bing Maps Key. If you want to use the code in this demo, be sure to supply your own, valid Bing Maps key.

Second, we need to know where the Bing Maps REST services exist. For this demo, we will use the Query service call using an unstructured URL. The basic format for that URL looks like this (I have omitted some of the query string options for brevity):

http://dev.virtualearth.net/REST/v1/Locations?query=locationQuery&maxResults=maxResults&key=BingMapsKey  

Whenever we make a request to this service, we will receive back a JSON response (though this is configurable), which means that our system will need to be able to read and deserialize JSON objects. But what objects would the system deserialize to? Turns out, Microsoft has already provided the data contracts necessary for deserializing Bing Maps API data; all we have to do is copy those contracts to our local project.
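For reference, the JSON the Query service sends back is shaped roughly like this, abbreviated down to just the fields this post actually reads (see the Bing Maps docs for the full structure):

{
  "resourceSets": [
    {
      "resources": [
        {
          "point": { "coordinates": [ 33.449001, -112.093643 ] },
          "address": {
            "addressLine": "1700 W Washington St",
            "locality": "Phoenix",
            "adminDistrict": "AZ",
            "postalCode": "85007"
          }
        }
      ]
    }
  ]
}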

Now, we've got three pieces: the location of the service, the key necessary to call the service, and the contracts necessary to read data returned by the service. Let's build some examples!

Geocoding an Address

First, let's imagine that we get an address that looks like this:

1700 W Washington St, Phoenix, AZ 85007

In order to make the call to the API, we'd insert this address into the URL format from above, so that our request would look like this:

http://dev.virtualearth.net/REST/v1/Locations?query=1700 W Washington St Phoenix, AZ 85007&key=BingMapsKey  
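One caveat: addresses contain spaces and punctuation, so in production code you'd want to URL-encode the query value rather than concatenate it raw (the samples below keep the raw concatenation for clarity, and the Uri class will escape simple spaces for you). A sketch of the safer version:

string url = "http://dev.virtualearth.net/REST/v1/Locations?query="
             + Uri.EscapeDataString(address) //Encodes spaces, commas, '&', etc.
             + "&key=BingMapsKey";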

But how do we do that in code? We want to accept an address and return a latitude and longitude. Here's how we set that up:

public class LatLong  
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public static class GeocodeHelper  
{
    public static LatLong Geocode(string address)
    {
        string url = "http://dev.virtualearth.net/REST/v1/Locations?query=" + address + "&key=BingMapsKey";
    }
}

OK great, now we've got the URL we need to hit to get results, but what do we use to actually make that request? We use a class called WebClient and a method called DownloadString:

string url = "http://dev.virtualearth.net/REST/v1/Locations?query=" + address + "&key=BingMapsKey";

using (var client = new WebClient())  
{
    string response = client.DownloadString(url);
}

Now, the response will be JSON, so we must find some way to translate that into the contracts we acquired from Bing earlier. We can do this using DataContractJsonSerializer and a MemoryStream:

string response = client.DownloadString(url);  
DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(Response));  
using (var es = new MemoryStream(Encoding.Unicode.GetBytes(response)))  
{
    var mapResponse = (ser.ReadObject(es) as Response); //Response is one of the Bing Maps DataContracts
}

Now, we have the response coded as Contracts. One last thing we want to do is take the latitude and longitude of the geocoded address and give them to our own LatLong class. We can do this like so:

Location location = (Location)mapResponse.ResourceSets.First().Resources.First();  
return new LatLong()
{
    Latitude = location.Point.Coordinates[0],
    Longitude = location.Point.Coordinates[1]
};

Our finished Geocode method looks like this:

public static LatLong Geocode(string address)  
{
    string url = "http://dev.virtualearth.net/REST/v1/Locations?query=" + address + "&key=BingMapsKey";

    using (var client = new WebClient())
    {
        string response = client.DownloadString(url);
        DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(Response));
        using (var es = new MemoryStream(Encoding.Unicode.GetBytes(response)))
        {
            var mapResponse = (ser.ReadObject(es) as Response); //Response is one of the Bing Maps DataContracts
            Location location = (Location)mapResponse.ResourceSets.First().Resources.First();
            return new LatLong()
            {
                Latitude = location.Point.Coordinates[0],
                Longitude = location.Point.Coordinates[1]
            };
        }
    }
}

Running this method for the address we had above (1700 W Washington St Phoenix, AZ 85007) returns coordinates of 33.449001, -112.093643.

Reverse Geocoding

But what if we have a set of coordinates and want to get back the address? We can do this via reverse-geocoding the latitude and longitude.

First thing we need is a new URL format:

http://dev.virtualearth.net/REST/v1/Locations/point?key=BingMapsKey  

Note that point is latitude and longitude separated by a comma. So, using the latitude and longitude we got by geocoding the earlier address, the URL would look like this:

http://dev.virtualearth.net/REST/v1/Locations/33.449001,-112.093643?key=BingMapsKey  

Now we can write the method. Once we get the response back, the serialization code is the same. Here's the complete AddressResult class and ReverseGeocode method:

public class AddressResult  
{
    public string AddressLine { get; set; }
    public string Locality { get; set; } //Roughly a city or town
    public string AdminDistrict { get; set; } //Roughly a state, province, or other similar area
    public string PostalCode { get; set; }
}

public static class GeocodeHelper  
{
    public static AddressResult ReverseGeocode(double latitude, double longitude)
    {
        using (var client = new WebClient())
        {
            var queryString = "http://dev.virtualearth.net/REST/v1/Locations/" + latitude.ToString() + "," + longitude.ToString() + "?key=BingMapsKey";

            string response = client.DownloadString(queryString);
            DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(Response));
            using (var es = new MemoryStream(Encoding.Unicode.GetBytes(response)))
            {
                var mapResponse = (ser.ReadObject(es) as Response);
                Location location = (Location)mapResponse.ResourceSets.First().Resources.First();
                return new AddressResult()
                {
                    AddressLine = location.Address.AddressLine,
                    Locality = location.Address.Locality,
                    AdminDistrict = location.Address.AdminDistrict,
                    PostalCode = location.Address.PostalCode
                };
            }
        }
    }
}

If we feed back the coordinates we got from the geocoding earlier, we get the following address:

1570 W Washington St. Phoenix, AZ 85007

Notice that this is not the same address that we fed into the geocoding method. Geocoding is not 100% precise, especially when dealing with things like addresses (it is entirely possible to have the same street address in the same city but in different zip codes, and that's just one of the weird little things geocoding has to take into account). That said, in our system, where we needed to store a lat/long for every address entered, this solution works out just fine.

Summary

Bing Maps provides a full-featured REST API that allows consumers to geocode and reverse-geocode locations. We needed a way to access this API in our server-side code, and this solution seems to work for us. It's quick, it's relatively simple, and it gets us the data we desire.

There's a couple of downsides, though:

  • No async/await support. One of the MSDN samples demonstrates this; I'd like to add async support in a later version (a minimal sketch follows this list).
  • Reverse geocoding is not very precise. Not something I expect to be able to fix.
  • I'm nitpicking, but there's a lot of string manipulation in the above samples. I feel like there should be a better way, I just dunno what it is.
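Here's roughly what that might look like, as a minimal sketch using WebClient.DownloadStringTaskAsync (untested against the real service; the parsing is identical to the synchronous Geocode method):

public static async Task<LatLong> GeocodeAsync(string address)
{
    string url = "http://dev.virtualearth.net/REST/v1/Locations?query=" + address + "&key=BingMapsKey";

    using (var client = new WebClient())
    {
        //Awaiting the download frees the calling thread while Bing responds
        string response = await client.DownloadStringTaskAsync(url);
        var ser = new DataContractJsonSerializer(typeof(Response));
        using (var es = new MemoryStream(Encoding.Unicode.GetBytes(response)))
        {
            var mapResponse = (ser.ReadObject(es) as Response);
            Location location = (Location)mapResponse.ResourceSets.First().Resources.First();
            return new LatLong()
            {
                Latitude = location.Point.Coordinates[0],
                Longitude = location.Point.Coordinates[1]
            };
        }
    }
}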

Overall, we're pretty happy with this solution. It works for now, anyway.

Have any of you dear readers worked on geocoding in .NET, whether using Bing Maps or some other service? How did it go? I'd love to hear your stories in the comments.

Happy Coding!