Requirements Volatility Is the Core Problem of Software Engineering


The reason we develop software is to meet the needs of some customer, client, user, or market. The goal of software engineering is to make that development predictable and cost-effective.

It’s now been more than 50 years since the first NATO Conference on Software Engineering, and in that time many different software engineering methodologies, processes, and models have been proposed to help software developers achieve that predictable and cost-effective process. Yet we still see the same kinds of problems we always have: late delivery, unsatisfactory results, and complete project failures.

Take a government contract I worked on years ago. It was undoubtedly the most successful project I’ve ever worked on, at least by the usual project management metrics: it was delivered early, it came in under budget, and it passed a scheduled month-long acceptance test in three days.

This project operated under some unusual constraints: the contract was denominated and paid in a foreign currency and was absolutely firm fixed-price, with no change management process in the contract at all. In fact, as part of the contract, the acceptance test was laid out as a series of observable, do-this-and-this-follows tests that could be checked off, yes or no, with very little room for dispute. Because of the terms of the contract, all the risk of any variation in requirements or in foreign exchange rates was on my company.

The process was absolutely, firmly, the classical waterfall, and we proceeded through the steps with confidence, until the final system was completed, delivered, and the acceptance test was, well, accepted.

After which I spent another 18 months with the system, modifying it until it actually satisfied the customer’s needs.

In the intervening year between the contract being signed and the software being delivered, reporting formats had changed, some components of the hardware platform had been superseded by new and better products, and regulatory changes had been made to which the system had to respond.

Requirements change. Every software engineering project will face this hard problem at some point.

With this in mind, all software development processes can be seen as different responses to this essential truth. The original (and naive) waterfall process simply assumed that you could start with a firm statement of the requirements to be met.

W.W. Royce is credited with first describing the waterfall in his paper “Managing the Development of Large Software Systems,” and the illustrations in hundreds of software engineering papers, textbooks, and articles are recognizably the diagrams that he created. But what’s often forgotten about Royce’s original paper is that he also says “[The] implementation [in the diagram] is risky and invites failure.”

Matching your process with your environment

Royce’s observation—that every development goes through recognizable stages, from identifying the requirements and proposed solution, through building the software, and then testing it to see if it satisfies those requirements—was a good one. In fact, every programmer is familiar with that, even in their first classroom assignments. But when your requirements change over the duration of the project, you’re guaranteed that you won’t be able to satisfy the customer even if you completely satisfy the original requirements.

There is really only one answer to this: you need to find a way to match the requirements-development-delivery cycle to the rate at which the requirements change. In the case of my government project, we did so artificially: there were no changes of any substance, so it was simple to build to the specification and acceptance test.

Royce’s original paper actually recognized the problem of changes during development. His paper describes an iterative model in which unexpected changes and design decisions that don’t work out are fed back through the development process.

Realism in software development

Once we accept the core uncertainty in all software development, that the requirements never stay the same over time, we can begin to do development in ways that can cope with the inevitable changes.

Start by accepting that change is inevitable.

Any project, no matter how well planned, will result in something that is at least somewhat different than what was first envisioned. Development processes must accept this and be prepared to cope with it.

As a consequence of this, software is never finished, only abandoned.

We like to mark a special, crisply defined point at which a development project is “finished.” The reality, however, is that any fixed time at which we say “it’s done” is just an artificial dividing line. New features, revised features, and bug fixes will start to come in the moment the “finished” product is delivered. (In fact, changes will still be needed, representing technical debt and deferred requirements, at the very moment the software is released.) Those changes will continue as long as the software product is being used.

This means that no software product is ever exactly, perfectly satisfactory. Real software development is like shooting at a moving target—all the various random variations of aim, motion of the target, wind, and vibration ensure that while you may be close to the exact bullseye, you never ever achieve perfection.

Making our process fit the environment

Looked at in this light, software development could seem to be pretty depressing, even dismal. It sounds as if we’re saying that the whole notion of predictable, cost-effective development is chasing an impossible dream.

It’s not. We can be very effective developers as long as we keep the realities in mind.

The first reality is that while perfection is impossible, pragmatic success is quite possible. The Lean Startup movement has made the MVP—“minimum viable product”—the usual goal for startups. We need to extend this idea to all development and recognize that every product is really an MVP: our best approximation of a solution for the current understanding of the problem.

The second reality is that we can’t really stop changes in requirements, so we need to work with the changes. This has been understood for a long time in actual software development—Parnas’s criterion for decomposing a system into modules is that each module should hide a design decision that is likely to change. At the same time, there have been repeated attempts to describe software development processes that deliver successive approximations—incremental development processes (I’ve called it “The Once and Future methodology”).
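Parnas-style information hiding can be sketched briefly. In this hypothetical example (the names and the report-format scenario are my own illustration, not from the original article), the choice of report format—exactly the kind of requirement that changed on my government project—is hidden behind a small interface, so a format change touches one class rather than every call site:

```python
from abc import ABC, abstractmethod

# The report format is a decision likely to change, so we hide it
# behind an interface (Parnas-style information hiding).
class ReportFormatter(ABC):
    @abstractmethod
    def format(self, record: dict) -> str: ...

class CsvFormatter(ReportFormatter):
    def format(self, record: dict) -> str:
        return ",".join(str(v) for v in record.values())

class KeyValueFormatter(ReportFormatter):
    def format(self, record: dict) -> str:
        return "; ".join(f"{k}={v}" for k, v in record.items())

def publish(record: dict, formatter: ReportFormatter) -> str:
    # Callers depend only on the interface; a regulatory change in the
    # report layout means adding or swapping one formatter class.
    return formatter.format(record)

record = {"id": 42, "status": "ok"}
print(publish(record, CsvFormatter()))       # 42,ok
print(publish(record, KeyValueFormatter()))  # id=42; status=ok
```

When the reporting format changed mid-project, a module boundary like this would have confined the rework to a single formatter instead of rippling through the system.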

Once we accept the necessity of incremental development, once we free ourselves from the notion of completing the perfect solution, we can accept changes with some calm confidence.

The third and final reality is that all schedules are really time-boxed schedules. We go into a development project unable to say exactly what the final product will be. Because of that, no early prediction of time to complete can be accurate, and all final deliveries will be partial deliveries.

Agile development to the rescue

The Agile Manifesto grew out of a recognition of these facts. Regular delivery of working software is part of that recognition: a truly agile project produces working partial implementations on a regular basis. Close relationships with the eventual customer ensure that as requirements changes become manifest, they can be fit into the work plan.

In an agile project, ideally, there is a working partial implementation very early on, and observable progress toward a satisfactory product is visible from the start. Think of the target-shooting metaphor again: as we progress, we get closer and closer to the center ring, the bullseye. We can be confident that, when time is up, the product will be at least close to the goal.

This post was originally published on The Overflow.