"Simpler" Is Subjective: How Bad Assumptions About Architecture Kicked My Ass

I'm doing some heavy refactoring on a project that I've recently joined but that has been underway for months. The code is a mishmash of different styles and an uncertain architecture, so my task up to this point has been to make it more consistent, testable, and readable.

One of the things I was tasked with improving was the data access layer. The system uses the Repository pattern for its data access, and the individual Repositories (the classes that both write data to and retrieve data from the database) are fairly dumb; they do exactly what they say they do, with no validation or other logic. Get, add, edit, delete, search; that's it.
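To make "dumb" concrete, here's roughly the shape of one of those Repositories. This is just a sketch with hypothetical names (Customer, ICustomerRepository), not the project's actual code; the real classes had different entities and members, but they followed the same pure-CRUD contract, with no validation or business rules anywhere in sight.

    using System.Collections.Generic;

    // A sketch of the "dumb" Repository shape described above.
    // The entity and member names are hypothetical; the real project's differed.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        // Pure CRUD plus search; no validation, no business rules.
        Customer Get(int id);
        IEnumerable<Customer> Search(string name);
        void Add(Customer customer);
        void Edit(Customer customer);
        void Delete(int id);
    }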

The service interface layer of this app is an ASP.NET Web API application, so that app's Controllers need some way to get data from the database. In simple scenarios, the Controllers can just call the Repositories directly. However, the previous development team also included another layer of static classes between the Controllers and the Repositories, called the Logic layer. The full architecture looked something like this: the Controllers called the static Logic classes, which called the Repositories, which talked to the database.


I had a problem with the Logic layer. For the majority of the functionality the app handled, the Logic layer just called a Repository method and didn't do anything else. Further, it was a set of static classes, and therefore couldn't take advantage of the dependency injection scheme that I wanted to use in the Controllers. It seemed to me like an unnecessary layer of architecture, and so I removed it; the Controllers now called the Repositories directly.
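To show just how thin that layer was, here's a sketch of a typical Logic class, building on the hypothetical Repository above (CustomerRepository stands in for some concrete implementation of ICustomerRepository). Nearly every method was a static wrapper that forwarded straight to a Repository, and because the class was static there was no constructor for a DI container to inject anything into.

    // A sketch of the pass-through Logic layer (hypothetical names).
    public static class CustomerLogic
    {
        // No constructor injection is possible on a static class, so the
        // repository has to be created (or be static itself) right here.
        private static readonly ICustomerRepository Repository = new CustomerRepository();

        public static Customer Get(int id)
        {
            return Repository.Get(id);
        }

        public static void Add(Customer customer)
        {
            Repository.Add(customer);
        }
    }

With wrappers like that in every Logic class, deleting the layer and calling the Repositories straight from the Controllers looked like a pure win.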


Long-time readers will know that this is par for the course for me: I'm an extreme deletionist, and any code we keep must have a reason to exist.

On the surface, the architecture change made sense. I'd been preaching to this group that controllers need to be small, repositories should only do data access, and less architecture is better than more. Seemed like a no-brainer to me. Little did I realize the no-brainer was me.

Did you catch the mistake I'd made? I'd waltzed into this project, taken over, and declared that I knew better than the two developers who had been working on this project for the six months prior. I had (unintentionally, to be sure) declared that my opinion on how this app should be constructed was more important than theirs, simply because my way had worked for me in the past. I had given myself a golden hammer, and now everything looked like a nail.

I, of course, didn't notice at all. The other two did. For this post we'll call them Rajit and Dave. Here's how that conversation went:

Dave: Hey Matt, did you remove the Logic classes that Rajit and I set up?

Matthew: Sure did! I couldn't see that they were doing anything different than if the controllers called the Repository classes directly. So I moved all that code to the Controllers. Now we don't need anything between Controllers and Repositories. It's a simpler design overall.

Rajit (stunned): ...Um, OK, it is simpler, kind of. But remember, this API will need to call other APIs to get all the data we need to send back to the requesting application. Where would it do that?

Matthew (realizing the mistake, not wanting to admit it): Well, wouldn't it do that in the Controllers....?

Dave (annoyed): But didn't you say that we want to keep the controllers small? Why bloat them like this when we could put the extra, non-database-access code in a different layer and then just call that from the controller? Or are we not worrying about bloat on the controllers? Which is it?

Matthew: Because...well...(facepalm)

It's easy to fall into the trap of "I've seen this before and X worked, so I'll just do X again". Programmers (including myself) are busy people, often overworked, and don't necessarily have time to consider all possible solutions to a given problem. Sometimes, the lack of time (or foresight, as is the case here) comes back to bite us in a big way.

Dave and Rajit were right, of course: we ought to have that extra layer to do things like consolidate calls to other web services and make multiple Repository submissions. I hadn't immediately found the Logic layer's reason to exist and had removed it, when in reality there was a purpose for this layer; I had just missed it. I still objected to the Logic layer being a set of static classes, so eventually Dave, Rajit, and I compromised on a new set of Service classes sitting between the Controllers and the Repositories:


The Services could call the Repositories and any other services, and because they were no longer static, we could inject them into the Controllers. I got my simpler architecture, and Rajit and Dave got their extra layer. Everyone's happy.
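Sketched in code, the compromise looked roughly like this. Again, these are hypothetical names and a placeholder URL, building on the earlier Repository sketch; the real services, entities, and endpoints differed. The Service consolidates the Repository call and the outbound API call in one place, and because it's an ordinary instance class, the DI container can hand it to the Controller.

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    // A sketch of the Service compromise (hypothetical names). The Service pulls
    // local data from the Repository and supplemental data from another API,
    // keeping the Controller small.
    public interface ICustomerService
    {
        Task<CustomerDetails> GetDetailsAsync(int id);
    }

    public class CustomerService : ICustomerService
    {
        private readonly ICustomerRepository _repository;
        private readonly HttpClient _httpClient;

        public CustomerService(ICustomerRepository repository, HttpClient httpClient)
        {
            _repository = repository;
            _httpClient = httpClient;
        }

        public async Task<CustomerDetails> GetDetailsAsync(int id)
        {
            // Local data comes from the Repository...
            var customer = _repository.Get(id);

            // ...and supplemental data comes from another API, consolidated here
            // instead of in the Controller. The URL is a placeholder.
            var ordersJson = await _httpClient.GetStringAsync(
                "https://example.com/api/orders?customerId=" + id);

            return new CustomerDetails { Customer = customer, OrdersJson = ordersJson };
        }
    }

    public class CustomerDetails
    {
        public Customer Customer { get; set; }
        public string OrdersJson { get; set; }
    }

    public class CustomersController : ApiController
    {
        private readonly ICustomerService _service;

        // Because the Service isn't static, it can be injected right here.
        public CustomersController(ICustomerService service)
        {
            _service = service;
        }

        public async Task<IHttpActionResult> Get(int id)
        {
            return Ok(await _service.GetDetailsAsync(id));
        }
    }

Anything that needs to orchestrate multiple Repositories or outside APIs now has a natural home, and the Controllers stay small.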

Point is, don't just assume that every app needs the same architecture. There are a myriad of software patterns for this reason: each of them has a different purpose, a different use case. They aren't all interchangeable. Don't assume the solution to a given problem is the same as a similar problem you've encountered before. Similarity does not mean sameness.

(Also, it helps if you actually talk to people who have more experience than you on a given project. Just sayin'.)

Anybody screwed up like this before? Alternately, any of you lovely readers find an over-architected app and pare it down successfully? Was there a better pattern for this app that Rajit, Dave, and I could've used? Share in the comments!

Happy Coding!

The Lean Waterfall: When Waterfall Looks Like Agile

I met a new team member (we'll call him Jerry) in our department today, and he offhandedly mentioned something that blew my mind. We're on the verge of a sea change in my organization, and only now am I becoming aware that the Agile process I thought I'd been doing was actually just an illusion, a mirage in the distance that obscured a regular Waterfall. And I didn't even realize it.

Image: Elakala Waterfall, West Virginia, by ForestWander, used under license.

Face, Meet Palm

For the entire time I've been employed at my current job, we've had the usual teams of people: development, quality assurance, project managers, business units. That's the way it has been at each of my three jobs, so I'd unconsciously assumed that's just how software development went. Business units write the specs and hand them off to development; development writes the software and hands it off to QA; QA tests the software and hands it back to management; management tells the server team when and where to deploy it. Such is the way it has always been.

In my current company, though, it's been a little different. The software development group here is much more agile than at my previous jobs. Any time we get a set of requirements, we develop iterations, phases, and stories (terminology varies by group) to cover those requirements, and we release as many versions as needed to get what's desired out onto production servers. We meet with the business units multiple times during this process to discuss what has been done and what there still is to do. All in all, from my point of view, we're doing agile development. Problem is, this point of view was severely limited.

A couple of months ago I was trying to explain this process to another software developer, one who didn't work in the same company. I laid out our system, and he immediately responded with "Well, that sounds a lot like the Waterfall model." I attempted to correct him, arguing that because our development team would iterate over requirements many times, each time figuring out what was wrong and how we could fix it, and because we were in constant communication with the business units, clearly this wasn't the rigid, creaky, old Waterfall method. Obviously we were doing better than that.

Weren't we?

That awkward conversation was on my mind when I sat down to chat with Jerry. I had originally shown up at his desk to ask what he meant by "acceptance testing," which we were going to be implementing in the near future. Since I know nothing about testing, Jerry launched into a neat summary of what each kind of testing was (unit testing, integration testing, acceptance testing), how it all fit together, and what our company's goals were for implementing all of that. It all makes sense, though we've got a long way to go before we actually do all of that.

The offhanded comment, the one that blew my mind, was when Jerry said that this organization, the one that I had insisted wasn't using Waterfall, was one in which each department is more or less separate from the others and information is thrown "over the wall" from one team to another. Turns out there's a term for this: information silos. Jerry pointed out that each time we got a set of requirements, one silo of people (the business units) would pass those requirements to another silo (management), who would pass them along to the next, and so on until the project was released.

Image: Ralls Texas Grain Silos 2010, by Leaflet, used under license.

Gee, what does that sound like? I can hear the rushing river from here. We clearly weren't doing better than Waterfall, and I hadn't even noticed until Jerry mentioned it.

An Iterative Pool

Now that I understood where we're trying to go, and how we're going to get there, I started to wonder exactly what we were trying to achieve with our old system, where the engineering group would iterate many times over but the rest of the process flowed in one direction. I still don't know if we ever chose that process intentionally (probably not), but I did find out that what we were doing had a name: the Lean Waterfall:

Agile, when executed by just the software development team, looks suspiciously like waterfall if the requirements are still being handed over from the rockstar product manager with the business plan, over to the guru designers, then to the agile ninja development team, and finally to the marketing diva.

That was us, to a T. Development was iterative, but nothing else in the flow was, and as such we were not really an Agile shop. We were just an iterative pool in a series of waterfalls.

Jerry's goal is to move us in the direction of true Agile. We're not the biggest company in the world, but we're not a tiny startup either, and so getting this ship of a business to turn in a new direction will take some time and effort. We know where we're going now (aka getting the hell away from Waterfall), but only time will tell if this new course brings us to the promised land or crashes us on the rocky shore. I'd be willing to bet, though, that it will be the former.

Have you ever worked in a Silo or Waterfall shop, and did you realize it at the time? Or, have you now moved to a fully Agile shop, and is it really better? Sound off in the comments!

What is the Golden Hammer Anti-Pattern?

The Golden Hammer anti-pattern (formally and originally called the Law of the Instrument) is an over-reliance on a familiar tool. The gist of this anti-pattern is summed up as "If all you have is a hammer, everything looks like a nail."

Image: a golden judge's gavel.

Generally, people fall into this anti-pattern if they try to use a particular tool, architecture, suite, or methodology to solve many kinds of problems (especially problems that could be better solved by alternate means). They are often doing this for one of two reasons:

  1. They are very familiar with that tool set or framework and believe it can be used to solve many kinds of problems, or
  2. They spent a lot of money acquiring the tool set or framework and need to justify the cost (hence the "golden" in Golden Hammer).

Show Me an Example!

Ben is a new web developer, having formerly been a database administrator, and he's just been given his first task in his new role. His lead has asked him to implement a new website, one that will display some static content to a small set of visitors within his company.

Ben, having dealt with these kinds of applications before on the database side, immediately begins laying out the database design for this new app. A simple schema and a few hours later, he presents his design to his lead, Paula, who has to explain to him that the website's content isn't expected to change frequently enough to warrant using a database at all.


The website would display static content, i.e. content that would not change often. Ben had picked up his golden hammer and tried to use it to solve a problem that didn't need it.

When Does This Happen?

Many times this anti-pattern is perpetrated by individuals who have had past successes with a given technology, and who then try to use that technology to solve a problem it isn't actually suited for.

In addition to seeing this result from inexperience (like Ben), we also see this coming from the management side. Has your manager ever said to you "Please use Technology Suite A for your solution" even if you found that Technology Suite B could implement the solution better? If the manager's suite was preferred because s/he knew it better, or because the company spent a lot of money acquiring that technology, that manager was (probably) falling victim to his/her own golden hammer.

How Can We Avoid This Anti-Pattern?

Evidence is your best weapon when combating this anti-pattern. You need evidence that shows why Technology Suite B outperforms Technology Suite A for a particular problem, and you need to be able to convince the people in charge of making decisions as to why Suite B is a better choice.

Experience is also an important weapon, but it isn't listed first because it is harder to acquire. The more experience you gain when designing and implementing non-trivial software systems, the more likely you are to recognize a golden hammer and not pick it up without analyzing the other possible solutions first.

For a wonderful in-depth writeup on this anti-pattern, check out the article over at SourceMaking.

Have you ever been a golden hammer wielder, and did you notice that you were doing such? How have you combated this anti-pattern in your job? Let me know in the comments!

What is the Big Ball of Mud Anti-Pattern?

A big ball of mud is a software system that lacks a perceivable architecture. The term appears to have been coined by computer scientists Brian Foote and Joseph Yoder in a 1997 paper.

A system that is a big ball of mud can be said to be held together by duct tape, sweat, and time. It is haphazardly structured, and as such attempting to modify it may be an exercise in futility. Any modifications that were made to the system in the past were just added to the outer parts with no recognizable planning (hence the term big ball of mud; it's like sticking some mud onto a ball of mud, which just makes the ball of mud bigger.)


Such a system often starts as a small, focused application, before scope creep and unmanageable deadlines eventually turn it into an unrecognizable beast. Because this anti-pattern results from a combination of time and non-code factors, it can be very difficult to prevent or even detect. Doing so requires someone with knowledge of the system and the power to control its architecture, a combination that gets increasingly rare the longer the system exists.

Show Me An Example!

If you are a software developer, you have probably seen an example of this architecture recently. In their paper, Foote and Yoder contend that this is the most common architecture in software development.

The fact is that most software is organic and grows to meet requirements as they arise. Because of this, without some benevolent overlord architect strictly controlling how the structure of the app can grow and change as the requirements do, any software project is liable to slip into this anti-pattern. This is exactly what makes the Big Ball of Mud anti-pattern so prevalent.

Why Does This Happen?

The real danger of this anti-pattern is that we often don't notice a system becoming a big ball of mud until we're past the point of being able to refactor it without doing a full rewrite. Generally this process is accelerated by changes in requirements, changes in developers, or simple incompetence.

If requirements change (whether due to scope creep or otherwise), the code changes with them, but you may now be implementing the solution for a slightly different set of customers or technologies than the one it was originally designed for. If that drift isn't immediately noticeable, no action is taken.

If programmers change, all of a sudden you have a different set of assumptions and ideals influencing the design of the application, and those assumptions may be very different from the previous person's.

Incompetence is the hardest of these three causes to plan for. It could be incompetence on the part of the developers (who may have no knowledge of patterns or anti-patterns), the management (with unrealistic deadlines) or the users (with unrealistic expectations).

How Can We Avoid This Anti-Pattern?

Because this anti-pattern is so hard to spot when it is occurring, options for mitigating it are fairly limited.

IMO the ideal solution would be to have a knowledgeable architect in place who must approve all changes to the app and who helps design the optimal solution for implementing new requirements. Of course, few shops have this kind of person available for all projects.

Another possible strategy would be to simply explore each problem thoroughly. This sounds like a cat poster, but it's true. Developers are the first line of defense against anti-patterns, and this one is no exception. By exploring the problem thoroughly and getting a better understanding of the architecture in place, we can design solutions that are less likely to develop into big balls of mud.

Of course, sometimes the best option is to just rewrite the damn thing.

Have you encountered applications in your career that were big balls of mud? How did you deal with changes to those apps? Let me know in the comments!