We've been having an ongoing debate on our team about which architecture to use for our new enterprise-level application. There are two candidate solutions, one familiar and one fast, but we can't seem to reach a conclusion as to which to use. A lack of applicable data is forcing us to make this key decision on intuition and guesswork, and I can't help but wonder how else we might decide which path to take.

Speed lights 2 from Flickr, used under license: a nighttime long-exposure photo of car lights streaking along a seaside highway.

Familiarity vs Performance

Our new teammate Jerry, my boss Frank, and I have been kicking around ways to ensure that this new service will be blazing fast and thoroughly scalable, since much of our company's infrastructure will depend on it. Specifically, we're trying to determine the best (read: fastest) way of accessing the information in this system's database, since we believe the number of reads will be orders of magnitude larger than the number of writes. It was partly for this reason that I benchmarked the performance of Entity Framework vs Dapper vs ADO.NET.
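For a sense of what that benchmark compares, here's a rough sketch (not the actual benchmark code) of the same read performed via raw ADO.NET and via Dapper against a hypothetical Customers table; the connection string and Customer POCO are invented for illustration, and a real benchmark would warm up, repeat, and average the runs.

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Linq;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ReadBenchmark
{
    // Placeholder connection string for the hypothetical AppDb database.
    const string ConnectionString = "Server=.;Database=AppDb;Trusted_Connection=True;";

    public static void Run()
    {
        var sw = Stopwatch.StartNew();

        // ADO.NET: raw SqlCommand and SqlDataReader, no mapping layer.
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* materialize rows by hand */ }
            }
        }
        Console.WriteLine($"ADO.NET: {sw.ElapsedMilliseconds} ms");

        // Dapper: the same query, mapped straight onto the Customer POCO.
        sw.Restart();
        using (var conn = new SqlConnection(ConnectionString))
        {
            var customers = conn.Query<Customer>("SELECT Id, Name FROM Customers").ToList();
        }
        Console.WriteLine($"Dapper: {sw.ElapsedMilliseconds} ms");

        // Entity Framework would sit on top of a DbContext (omitted here);
        // the equivalent query would be something like context.Customers.ToList().
    }
}
```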

Throughout all of this, Jerry, Frank, and I have collectively tried to determine which assortment of technologies will make the system both fast and scalable while not straying too far from what we already know. This, as you might imagine, is proving more difficult than we expected.

We have bandied about two possible architectures. The first is the one my group is most familiar with: the Microsoft stack of SQL Server, Entity Framework, and ASP.NET Web API. We build almost all of our other apps using this stack, so development time would be much quicker if we used this setup.
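For context, here's a bare-bones sketch of what the familiar stack looks like for a single read: an ASP.NET Web API controller pulling a record through an Entity Framework DbContext. The Order entity and OrderContext are hypothetical stand-ins, not types from our actual system.

```csharp
using System.Data.Entity;
using System.Linq;
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Entity Framework context over SQL Server (connection details omitted).
public class OrderContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrdersController : ApiController
{
    // GET api/orders/5 -- a straightforward EF lookup, no caching or queuing involved.
    [HttpGet]
    public IHttpActionResult Get(int id)
    {
        using (var context = new OrderContext())
        {
            var order = context.Orders.Find(id);
            return order == null ? (IHttpActionResult)NotFound() : Ok(order);
        }
    }
}
```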

The second possible architecture involves a less familiar but theoretically more performant stack: Redis, Dapper, RabbitMQ, and Web API, implemented using the Command Query Responsibility Segregation (CQRS) pattern. In theory, this architecture would allow the system to be more redundant, more scalable, more performant, more testable, more everything (at least according to Jerry). The problem is that, with the exception of Web API, nobody on my team has ever developed a product using these technologies or patterns.
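To make the unfamiliar stack a little less abstract, here's a rough sketch of how the CQRS split could look, assuming StackExchange.Redis on the read side, Dapper over SQL Server as the cache-miss fallback, and RabbitMQ.Client for dispatching commands. The queue name, cache keys, and read-model table are all invented for illustration; this is a sketch of the pattern, not our design.

```csharp
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using Dapper;
using RabbitMQ.Client;
using StackExchange.Redis;

public class OrderQueries
{
    private readonly IDatabase _cache;
    private readonly string _connectionString;

    public OrderQueries(IDatabase cache, string connectionString)
    {
        _cache = cache;
        _connectionString = connectionString;
    }

    // Query side: hit Redis first, fall back to SQL Server via Dapper on a miss.
    public string GetOrderJson(int id)
    {
        var cached = _cache.StringGet($"order:{id}");
        if (cached.HasValue) return cached;

        using (var conn = new SqlConnection(_connectionString))
        {
            var json = conn.Query<string>(
                "SELECT OrderJson FROM OrderReadModel WHERE Id = @id", new { id })
                .SingleOrDefault();
            if (json != null) _cache.StringSet($"order:{id}", json);
            return json;
        }
    }
}

public class OrderCommands
{
    // Command side: writes never touch the read store directly; they are queued
    // and applied by a separate handler (omitted) that also refreshes the read model.
    public void Submit(string commandJson)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "orders", durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null,
                                 body: Encoding.UTF8.GetBytes(commandJson));
        }
    }
}
```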

So, since we lack the experience to make an educated guess as to which technology stack is "better", we wanted to use metrics to help us make a more informed decision. That, unfortunately for us, proved to be impossible.

Blindfolded Decision-Making

There's an implicit assumption in the desire to use data to make a decision, and that is that said data exists.

Our thought process went like this: if we could determine the amount of load this system will need to handle, we could make a better decision about which architecture to use (moderate load = familiar stack, heavy load = performance stack). Say we choose to go with the full MS stack (SQL Server, Entity Framework, Web API), which many (including both Jerry and Frank) have argued will be less optimized and less performant than the theoretically optimized stack (Redis, Dapper, RabbitMQ, Web API). In an absolute sense, we would be picking the slower option. But do we care? Even if it is the slower of the two, would it still be fast enough for our purposes?

We have no data, no metrics, no information of any kind that can give us an idea of what our load expectation will be. There's no infrastructure in place, no repository of statistics and metrics that we can review, parse, and draw conclusions from. How can we make a decision as to which architecture to use if we don't have any pertinent data?

It's a Catch-22: we need the metrics to choose the best architecture, but we need to actually implement the damn thing in order to get metrics, and implementation requires us to select an architecture. In the best case, the metrics would reveal a clear path for us to venture down. In the worst case, well, we'd be in the same situation we're in now: having to make an important decision while blindfolded due to a lack of supporting data.

So how will we break this impasse? We're just gonna have to pick one.

There's no other choice left to us; we'll need to pick which stack we think is best for now, implement it, and improve it later as we start to collect metrics. Given this, it seems likely that we'll go with the performance-optimized stack, since we expect it to provide us with scalability and responsiveness benefits into the future.
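As a first step toward those metrics, something as simple as a Web API message handler that times every request would give us real load numbers to revisit this decision with. The sketch below logs to Trace purely as a placeholder for whatever metrics store we eventually pick.

```csharp
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class RequestTimingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var sw = Stopwatch.StartNew();
        var response = await base.SendAsync(request, cancellationToken);
        sw.Stop();

        // Record method, path, and elapsed time; swap Trace for a real metrics sink later.
        Trace.TraceInformation("{0} {1} took {2} ms",
            request.Method, request.RequestUri.AbsolutePath, sw.ElapsedMilliseconds);

        return response;
    }
}

// Registered once at startup, e.g. in WebApiConfig.Register:
//   config.MessageHandlers.Add(new RequestTimingHandler());
```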

Still, I have to wonder whether the metrics we needed might have clearly shown us which path to go down. Without evidence, the decision will be made out of hope, not proof. For now, we'll just have to hope we choose correctly.

Have you ever encountered a decision like this, where the "best" solution wasn't clear and the methods by which you could determine which solution was better didn't exist or weren't thorough enough? How did you pick a solution? Let me know in the comments!