Modeling Battleship in C# - Introduction and Strategies

NOTE: This is Part 1 of a three-part series demonstrating how we might model the classic game Battleship as a C# program. You might want to use the sample project over on GitHub to follow along with this post. Also, check out my other posts in the Modeling Practice series!

In software development, we programmers are often asked to take large, complex problems and break them down into smaller, more manageable chunks in order to solve them. I find that this, as with many things, becomes easier the more you practice it, and so this blog has a series of posts called Modeling Practice in which we take large, complex problems and model them into working software applications.

In my case, I love games, so each of the previous entries in this series has been a popular, classic game (Candy Land, Minesweeper, UNO). That tradition continues here, and this time the board game we'll be modeling is the classic naval battle game Battleship.

A picture of the game box, showing two children playing the game and placing red and white pegs on the boards.

My boys (who I've written about before) are now old enough that they can play this game themselves, and so they've been killing hours trying to sink each other's ships.

That's the Modeling Practice we're going to do this time: we're going to model a game of Battleship from start to finish, including how our players will behave. So, let's get started!

What is Battleship?

For those of you who might not have played Battleship before, here's how it works. Each player gets a 10-by-10 grid on which to place five ships: the eponymous Battleship, as well as an Aircraft Carrier, a Cruiser, a Submarine, and a Destroyer. The ships have differing lengths, and larger ships can take more hits. Players cannot see the opposing player's game board.


Players also have a blank firing board from which they can call out shots. On each player's turn, they call out a panel (by using the panel coordinates, e.g. "A5" which means row A, column 5) on their opponent's board. The opponent will then tell them if that shot is a hit or a miss. If it's a hit, the player marks that panel with a red peg; if it is a miss, the player marks that panel with a white peg. It then becomes the other player's turn to call a shot.

When a ship is sunk, the player who owned that ship should call out what ship it was, so the other player can take note. Finally, when one player loses all five of his/her ships, that player loses.

Image is Sailors play "Battleship" aboard a carrier, found on Wikimedia. In this game, the player who owned the left board would have lost.

The game itself was known at least as far back as the 1890s, but it wasn't until 1967 that Milton Bradley produced the peg-and-board version that most people have seen today. It is that version (and its official rules) that we will use as part of our modeling practice.

Image is You sunk my battleship!, found on Flickr and used under license.

Now, let's get started modeling! First, we need to figure out the components of the game.

Components of the Game

In order to play a game of Battleship, our system will need to be able to model the following components:

  • Coordinates: The most basic unit in the game. Represents a row and column location where a shot can be fired (and where a ship may or may not exist).
  • Panels: The individual pieces of the board that can be fired at.
  • Game Board: The board on which players place their ships and their opponent's shots.
  • Firing Board: The board on which players place their own shots and their results (hit or miss).
  • Ships: The five kinds of ships that the game uses.
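Although the actual implementation comes in Part 2 of this series, we can already sketch in C# what a couple of these components might look like. The class shapes and property names below are illustrative guesses at this stage, not the final design:

```csharp
using System;

// A rough sketch of two of the components (names and properties are
// illustrative; the real implementation arrives in Part 2).
public class Coordinates
{
    public int Row { get; set; }     // 1-10, displayed to players as A-J
    public int Column { get; set; }  // 1-10

    public Coordinates(int row, int column)
    {
        Row = row;
        Column = column;
    }
}

public class Ship
{
    public string Name { get; set; }
    public int Width { get; set; }   // How many panels the ship occupies
    public int Hits { get; set; }    // How many hits the ship has taken

    // A ship sinks once every panel it occupies has been hit.
    public bool IsSunk { get { return Hits >= Width; } }
}
```

The `IsSunk` property captures the rule that larger ships can take more hits: a two-panel Destroyer sinks after two hits, while the five-panel Aircraft Carrier takes five.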

All of that is fine and good, but if our system is going to actually play a game, we're going to have to figure out the strategy involved.

Potential Strategies

Here's a sample Battleship game board:

There are two different strategies we'll need to model:

  1. How to place the ships AND
  2. How to determine what shots to fire.

Fortunately (or maybe unfortunately), the first strategy is terribly simple: place the ships randomly. Since your opponent will be firing at random for much of the game, no cleverer placement is really needed.

The really interesting strategy is the second one: how can we know where to fire our shots so as to sink our opponent's ships as quickly as possible? One possibility is to fire randomly, just as we placed the ships randomly. That will eventually sink all the opponent's ships, but there is a better way, and it involves combining two distinct strategies.

First, when selecting where to fire a shot, we don't need to pick from every possible panel. Instead, we only need to pick from every other panel, like so:

Because the smallest ship in the game (the Destroyer) is still two panels long, this strategy ensures that we will eventually hit each ship at least once.
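A C# sketch of that idea: build the checkerboard of candidate panels (every panel where row plus column is even) and pick one at random. The 10x10 size and tuple-based coordinates here are placeholder choices for illustration:

```csharp
using System;
using System.Collections.Generic;

// Sketch: fire only at panels where (row + column) is even, producing a
// checkerboard pattern that still guarantees at least one hit on every
// ship, since even the Destroyer spans two panels.
var random = new Random();
var candidates = new List<(int Row, int Column)>();
for (int row = 1; row <= 10; row++)
{
    for (int column = 1; column <= 10; column++)
    {
        if ((row + column) % 2 == 0)
        {
            candidates.Add((row, column));
        }
    }
}

// Pick a random candidate panel as our next shot.
var target = candidates[random.Next(candidates.Count)];
Console.WriteLine($"Firing at row {target.Row}, column {target.Column}");
```

Note that this cuts the search space in half: only 50 of the 100 panels ever need to be targeted by these random shots.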

But what about when we actually score a hit? At that point, we should only target adjacent panels, so as to ensure that we will sink that ship:

These are called "searching" shots in my system, and we only stop taking searching shots once we sink a ship.
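The neighbor-targeting part of that idea might be sketched in C# like this (the 10x10 bounds and tuple coordinates are, again, illustrative assumptions rather than the final design):

```csharp
using System;
using System.Collections.Generic;

// Sketch: once a shot at (row, column) scores a hit, the four adjacent
// panels become "searching" shots; bounds checks keep every shot on the
// 10x10 board.
static List<(int Row, int Column)> GetSearchingShots(int row, int column)
{
    var neighbors = new[] { (row - 1, column), (row + 1, column),
                            (row, column - 1), (row, column + 1) };
    var shots = new List<(int Row, int Column)>();
    foreach (var (r, c) in neighbors)
    {
        if (r >= 1 && r <= 10 && c >= 1 && c <= 10)
        {
            shots.Add((r, c));
        }
    }
    return shots;
}

// A hit in the corner only has two in-bounds neighbors to search.
Console.WriteLine(string.Join(", ", GetSearchingShots(1, 1)));
```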

By using these two strategies in tandem, we ensure that we can sink the opponent's ships in the shortest possible time (without using something like a probability algorithm, which more advanced solvers would do).


Here are all of the strategies we've discovered so far:

  1. Placement of ships is random; no better strategy available.
  2. Shot selection is partly random (every other panel) until a hit is scored.
  3. Once a hit is scored, we use "searching" shots to eventually sink that ship.
  4. The game ends when one player has lost all their ships.

In the next part of this series, we will begin our implementation by defining the components for our game, including the players, ships, coordinates, and so on. We'll also set up a game to be played by having our players place their ships.

Don't forget to check out the sample project over on GitHub!

Happy Modeling!

Eight Tips For Your Programming Team's Standup Meetings

As my organization has gone further down the Agile project management path (from our original process of a lean waterfall), one of the things we've started doing is daily standup meetings. These are short (15 minutes or less) meetings in which each team member reports on what they have accomplished recently and what they are planning to do today. They've been a fantastic tool to keep our team on track and on time, and I'm rapidly becoming convinced that they're going to be (if they aren't already) an essential part of modern software development teams.

A development team conducting a standup meeting.

Standup meetings are fast, directed conversations between team members where everyone updates everyone else on their own status and what they are doing. My team has come to a point where we are fully comfortable conducting our standup meetings, and so I thought I'd share some of the tips we've discovered for conducting your standups efficiently and quickly.

1. Keep it short

I cannot stress this enough: developers are busy people, and we don't like being interrupted. If a "standup" meeting is longer than 15 minutes, it is not a standup meeting. My group (six developers including me, plus a manager) aims for our meetings to be seven minutes or less. Keep the meeting short so everyone can get back to work!

2. Do it every day

Yes, every day. Even on days when half the office is out on vacation and there are pressing bugs that need to be fixed right NOW. If there are people working on that day, you should do the standup meeting.

3. Use a standard template

My team's template looks like this:

  1. Here's what I accomplished yesterday (including task numbers, bug reports, etc.).
  2. Here's what I'm planning to do today (including task numbers, work requests, etc.).
  3. Blocks (things I cannot accomplish without someone else's help).
  4. Lingers (things which I am waiting to be done by someone else but are not impeding my work).
  5. Task status (e.g. whether or not our TFS tasks are up-to-date).

4. Don't allow the conversation to drift

Only pertinent work topics are allowed. If anything besides what's in your team's template needs to be discussed, it should be discussed outside the standup.

5. Everybody gets a turn

This is important because, after all, we're a team, not a group of individuals. We're only as good as the least of us.

6. Meet at the same time every day

Pick a time when everyone is expected to be working. Late morning, before lunch, works particularly well for my team.

7. Have a designated meeting leader

In our company's case, it's the team lead (e.g. me) that directs these meetings. That means it falls directly on my shoulders to ensure that the meeting is short, effective, and gets everyone involved. This is critical because whoever this person is (and it does not have to be the team lead), they are responsible for ensuring that the meeting interrupts everyone as little as possible.

8. You don't have to actually stand up

My group does our standups over Slack, because many times people just aren't in the office but are still working (i.e. on work-from-home days).

The point of having a standup meeting is to prevent those soul-sucking, hour-long meetings where everything gets talked about but nothing gets done. Such meetings happen when people don't know what other people are doing, and so managers or team leads call meetings to discuss who's doing what. Inevitably, these meetings get derailed because in the three weeks (or longer!) since the last one, more has happened than can be discussed in an hour, and so the meeting takes three hours and nothing gets resolved. Those kinds of meetings are a drain on resources and team morale, and should be dragged out into the street and shot. Standups are a way for everyone to explain what they've done and what they need to do, and to get the team talking amongst one another, so that you don't need the soul-sucking meetings in the first place.

In short, standup meetings should be quick, directed, and done. That way, everyone can get back to what they want to be doing in the first place: programming!

The standup meeting tips I've listed above are what works for my team, at my company, but our way is not the only way to run effective standups. What tips do you have for your team's standup meetings? I'd love to hear them, so let me know in the comments!

Happy Standups!

Image is Equipe durante um stand up meeting (daily meeting), used under license

The Sublime Joy of Continuous Integration and Continuous Delivery (CI/CD)

Once in a while a new process comes along and blows your friggin' mind. That's what's been happening with me and my team recently, now that our organization has finally implemented Continuous Integration (CI) and Continuous Delivery (CD) on a large scale. These two processes have enabled our business to merge and push changes to production much more quickly than we could before.

In short, now that we have CI/CD, I cannot even fathom how we got any work done and delivered without it.

What are CI and CD?

Let's use ThoughtWorks's definition of Continuous Integration:

"Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early."

By its very nature, CI tends to catch small problems before they become big problems, by forcing the developers to merge their small changes together and making them sort out any issues that arise.

One major reason you would want to use a CI process is to enable the Continuous Delivery (CD) process, which also has a great definition:

"Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing."

The only way CD works is if you can test changes in a production-like environment, and so CD processes often introduce a "staging" environment in which changes can be tested. Further, because the deployment and testing are automated, you can be assured that once your changes are deployed to production they will just work.

Our Team's Process

Prior to this, we'd been doing what I believe a lot of organizations are still doing: manual releases. That process generally went like this:

  1. Software developers check in their (hopefully working) code to source control.
  2. Developers then make a release package, which contains all of the code that they need to deploy to the staging environment.
  3. Developers contact our server group to make the staging deployment.
  4. Server group makes the deployment, tells the developers to check.
  5. Developers check, confirm that it looks good, notify their managers.
  6. Managers tell the stakeholders (e.g. people that care that this code gets deployed) to check out the system in staging, connected to test data.
  7. Stakeholders confirm that the system looks good.
  8. Developers make a second release package, this time for the QA environment.
  9. Server team moves system to QA.
  10. QA engineers test the changes, then notify developers and server team if the tests pass.
  11. Developers make yet another release package, this time for production.
  12. Developers contact server group, server group schedules a deployment for sometime off-hours (e.g. late at night).
  13. Once system is deployed, developers and server group confirm that it looks good.
  14. If anything bad happens, server team rolls back to the previous deployment.

Whew! That's a lot of steps, and at any time during that process something bad could happen and we might end up needing to start the whole thing over again. It clearly wasn't an ideal process.

But Jerry (who pointed out to me that our organization at the time was not agile like we thought, rather it was a lean waterfall process) and his team finally finished developing our company's CI/CD process, and is it ever a joy to use. Now our deployment process looks like this:

  1. Developers check in their (hopefully working) code.
  2. Said code gets automatically deployed to a development environment.
  3. Approvals are needed from the developer and his/her manager to deploy to the QA environment.
  4. Once deployed to QA, the QA engineers test the system.
  5. Once the test passes, the exact same code gets approval and is deployed to production.
  6. Once on production, the stakeholders are notified.

From 14 steps down to 6, and most of the process is automated. That's a huge improvement, and I can't even begin to calculate the total time it has saved me and my team, just from a development standpoint.
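In concrete terms, the automated portion of a flow like this usually lives in a pipeline definition file checked in alongside the code. Here's a rough, hypothetical sketch; the YAML syntax, stage names, and commands are illustrative assumptions, not our actual configuration, and every CI/CD product has its own format:

```yaml
# Hypothetical pipeline definition; names, syntax, and commands are
# illustrative only, not our actual configuration.
stages:
  - build        # Runs automatically on every check-in
  - deploy-dev   # Automatic deployment to the development environment
  - deploy-qa    # Gated by developer and manager approval
  - deploy-prod  # Gated by passing QA; ships the exact same build

build:
  stage: build
  script:
    - dotnet build
    - dotnet test     # Failing tests stop the delivery right here

deploy-qa:
  stage: deploy-qa
  when: manual        # Represents the approval step
```

The key point a file like this captures is that the same build artifact flows through every environment, with human approvals as the only manual gates.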

Is It More Work?

Yes, but only in the beginning. Let's be clear, there is some extra work involved in setting this up.

First of all, your organization needs some extra physical infrastructure (e.g. build servers) that you should probably have anyway, though not everyone does.

Secondly, somebody (most likely the developers) has to take care of things like configuration and transforms (in our particular case, because we do everything in ASP.NET, we developers end up taking care of the web.config transforms), but your developers are probably already doing that work, just not in a unified manner.

Third, depending on how complicated your CI/CD process is, it might take some time to teach your developers and server admins the proper way to implement CI/CD, but that's time well spent and we need more teachers anyway.

Fourth, good, comprehensive testing becomes critically important with these processes. Failing tests cause a stop in delivery, and so writing valuable tests is now a requirement more than an option when using CI/CD.

Those obstacles are temporary, even if they are difficult, and once set up, Continuous Integration and Continuous Delivery are truly a joy to use. They have saved me more time in the first month of our CI/CD process being operational than I care to admit.

Developers of the world: if you're not already on a CI/CD process, start bugging your bosses and coworkers about setting one up. It's becoming (if it isn't already) an integral part of running a modern software development shop, and it takes what was once a time-consuming manual process (deploying an app to production) and dramatically speeds it up. There will come a time when Continuous Integration and Continuous Delivery are no longer optional, and that time is rapidly approaching.

Are your teams using Continuous Integration or Continuous Delivery? If so, how do you like your versions of those processes? Is there anyone out there who is dissatisfied with how their CI/CD process works? Sound off in the comments!

Happy Coding!

Page image is Baggage Claim Haneda 2nd Terminal, found on Wikimedia and used under license.

Creating a Post Archive with the Ghost API and jQuery

I've long been missing an important feature in Ghost, my blog publishing platform: there's no inherent feature to create a post archive, or a list of all my posts in one place. I've gotten several requests for this feature, so I finally decided to just sit down and develop it using the Ghost Public API and a tiny bit of jQuery.

What follows is how I built my post archive the first time around. I have since replaced it with a Ghost-generated structure using the #get helper to solve some caching issues, but I like the look of the following system better. Please note that this post was written using Version 0.11.4 of Ghost and so it may change as Ghost changes.

Using jQuery and Ghost

Setting Up the Ghost Page Template

The first problem I had was that this page (the archive page) was not going to be either a post or a static page using my theme's default template. It was going to be a page, but it needed its own template, one that was very stripped-down.

Ghost supports using custom page templates by having a .hbs file that is prefixed with "page-". So, in my Ghost theme's root folder, I now have a file called page-all-posts.hbs, which looks like the following:

{{!< default}}

{{! This is a page template. A page outputs content just like any other post, and has all the same
    attributes by default, but you can also customise it to behave differently if you prefer. }}
{{#post}}
    <header class="post-header">
        <h1 class="post-title">{{title}}</h1>
    </header>
    <section class="post-content">
        {{content}}
        <div id="postLoading">
            <h3>Loading post archive...</h3>
        </div>
        <div id="postList"></div>
    </section>
{{/post}}
The template is pretty darn simple. It has the "normal" Ghost stuff, like the {{#post}} tag, as well as the page title and content. The difference is what happens at the end: the div "postList" is where we will populate the list of all posts.

However, just having the template doesn't help us; I also need to create a static page with the same url as the name of the page template file. In my Ghost admin window, I created a new post, marked it as a static page, and gave it the URL "all-posts".

With that template and page set up, we can now write the jQuery to get us the actual posts.

Querying for All Posts

The first thing we have to do in order to display all posts is to get all the posts. I've already blogged about something like this when I implemented the 5 random posts sidebar, and so this solution will be very similar.

Ghost exposes a public API that can be used to query posts, users, and tags. In this case, I only want posts and I specifically want all my posts, so my query is very easy:

$(document).ready(function () {
    $.get(ghost.url.api('posts', {limit: 'all'})).done(onSuccess);
});

The onSuccess method is really just a pass-through to another method, called showArchive:

function onSuccess(data) {
    //... (here's the code that does the 5 random posts sidebar)
    showArchive(data.posts);
}

The real tricks begin when we implement the showArchive method.

Displaying the Posts

When we query for posts using the Ghost API, the posts object which is returned looks like this (I have simplified this schema):

    "title":"Welcome to Ghost",
  }, {
    "title":"Lorem Ipsum Dolor",

From this, I can use the published_at property to get the month and year each post was published.

NOTE: The query we executed earlier already loads the posts in most-recent-first order, so we don't need to do any further ordering.

So, here's the outline of what we need the showArchive function to do:

  1. For each post, get the month and year that post was published.
  2. Each time the month or year changes, output a new subheader for that month and year.
  3. For each post, output a link to that post.

Here's the complete function:

function showArchive(posts) {
    var monthNames = ["January", "February", "March", "April", "May", "June",
      "July", "August", "September", "October", "November", "December"];
    var currentMonth = -1;
    var currentYear = -1;
    if(window.location.pathname == "/your-archive-page-url/"){ //Only display on this page
        $(posts).each(function(index,value){ //For each post
            var datePublished = new Date(value.published_at); //Convert the string to a JS Date
            var postMonth = datePublished.getMonth();  //Get the month (as an integer)
            var postYear = datePublished.getFullYear(); //Get the 4-digit year

            if(postMonth != currentMonth || postYear != currentYear)
            { //If the current post's month and year are not the currently stored month and year, update them
                currentMonth = postMonth;
                currentYear = postYear;
                //Then show a month/year header
                $("#postList").append("<br><span><strong>" + monthNames[currentMonth] + " " + currentYear + "</strong></span><br>");
            }
            //For every post, display a link.
            $("#postList").append("<span><a href='" + value.url +"'>" + value.title + "</a></span><br>");
        });
    }
}

I fully realize that this is not the best HTML (or JavaScript, really), but it suits my purposes for now. Here's a screenshot of what this looks like on my blog:

Woohoo! I've got a working solution to show all my posts! Which would be great...except this isn't what I'm actually using now.

Using Ghost Only

The problem is that I use CloudFlare on this blog, and when doing so CloudFlare caches this page before the script has a chance to run. This results in an empty page, which is obviously not what I want. So, instead, I ended up going with a native Ghost solution, which looks something like this:

<div id="postList">
    {{#get "posts" limit="all"}}
        {{#foreach posts}}
            {{#if featured}}
                <span class='fa fa-star'></span>
                {{date published_at format="MMM DD, YYYY"}}:&nbsp;<a href="{{url}}">{{title}}</a>
            {{else}}
                {{date published_at format="MMM DD, YYYY"}}:&nbsp;<a href="{{url}}">{{title}}</a>
            {{/if}}
        {{/foreach}}
    {{/get}}
</div>

Here's a little breakdown of what this does:

  • The {{get}} helper gets me all my posts.
  • Within the {{get}} context, the {{foreach}} helper loops through each post.
  • Within the {{foreach}} context, the {{if}} helper enables me to check if a particular post is featured, and if so, output a star icon.
  • The {{date}} helper allows me to get the post's publishing date and format it.
  • Finally, the {{url}} helper and the {{title}} helper output the post's URL and title respectively.

Here's how the native Ghost solution looks:

The major downside to the native Ghost solution is that I no longer have the month and year section headers, something I would rather like to have. But, the page is no longer cached by CloudFlare, so it works. Further, this solution is much cleaner than the jQuery solution.

At any rate, now I've got a working page that lists all my blog posts! Check it out!


Please let me know if you found this post useful, and share other interesting tips about Ghost in the comments! Check out the Ghost documentation to learn about the other helpers they have available.

Happy Coding!