A big ball of mud is a software system that lacks a perceivable architecture. The term appears to have been coined by computer scientists Brian Foote and Joseph Yoder in a 1996 paper.

A system that is a big ball of mud can be said to be held together by duct tape, sweat, and time. It is haphazardly structured, and as such attempting to modify it may be an exercise in futility. Any modifications that were made to the system in the past were just added to the outer parts with no recognizable planning (hence the term big ball of mud: it's like sticking some mud onto a ball of mud, which just makes the ball of mud bigger).
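To make this concrete, here's a hypothetical sketch (the function, field names, and "history" comments are all invented for illustration) of what years of sticking mud onto the ball can look like in code. Each change works in isolation, but none of them reworked the underlying design:

```python
# A hypothetical pricing function after years of "just one more fix".
# Each patch was bolted onto the outside rather than redesigning anything.
def calculate_price(order):
    price = order["quantity"] * order["unit_price"]
    # Year 1: sales asked for a bulk discount
    if order["quantity"] > 100:
        price *= 0.9
    # Year 2: one big customer negotiated a special rate
    if order.get("customer") == "MegaCorp":
        price *= 0.85
    # Year 3: tax handling hacked in for a single region
    if order.get("region") == "EU":
        price *= 1.2
    # Year 4: a coupon system duct-taped on top of everything above
    if order.get("coupon") == "SAVE10":
        price -= 10
    return price
```

Every branch made sense to whoever added it, but the interactions between them (does the coupon apply before or after tax? do discounts stack?) were never designed, only accumulated. That's the mud.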


Such a system often starts as a small, focused application, before scope creep and unmanageable deadlines eventually turn it into an unrecognizable beast. Because this anti-pattern results from a combination of time and non-code factors, it can be very difficult to prevent or even detect. Doing so requires someone with knowledge of the system and the power to control its architecture, a combination that gets increasingly rare the longer the system exists.

Show Me An Example!

If you are a software developer, you have probably seen an example of this architecture recently. In their paper, Foote and Yoder contend that this is the most common architecture in software development.

The fact is that most software is organic and grows to meet requirements as they arise. Because of this, without some benevolent overlord architect strictly controlling how the structure of the app can grow and change as the requirements do, any software project is liable to slip into this anti-pattern. This is exactly what makes the Big Ball of Mud anti-pattern so prevalent.

Why Does This Happen?

The real danger of this anti-pattern is that we often don't notice a system becoming a big ball of mud until we're past the point of being able to refactor it without doing a full rewrite. Generally this process is accelerated by changes in requirements, changes in developers, or simple incompetence.

If requirements change (whether due to scope creep or otherwise), then the code changes with them; but you may now be implementing the solution for a slightly different set of customers or technologies than it was originally designed for. If that mismatch is not immediately noticeable, no corrective action is taken.

If programmers change, all of a sudden you have a different set of assumptions and ideals influencing the design of the application, and said assumptions may be very different from the previous person's.

Incompetence is the hardest of these three causes to plan for. It could be incompetence on the part of the developers (who may have no knowledge of patterns or anti-patterns), the management (with unrealistic deadlines) or the users (with unrealistic expectations).

How Can We Avoid This Anti-Pattern?

Because this anti-pattern is so hard to spot when it is occurring, options for mitigating it are fairly limited.

IMO the ideal solution would be to have a knowledgeable architect in place who must approve all changes to the app and help design the optimal way to implement new requirements. Of course, few shops have this kind of person available for all projects.

Another possible strategy would be to simply explore each problem thoroughly. This sounds like a cat poster, but it's true. Developers are the first line of defense against anti-patterns, and this one is no exception. By exploring the problem thoroughly and getting a better understanding of the architecture in place, we can design solutions that are less likely to develop into big balls of mud.

Of course, sometimes the best option is to just rewrite the damn thing.

Have you encountered applications in your career that were big balls of mud? How did you deal with changes to those apps? Let me know in the comments!