What critical questions should you ask to determine the feasibility of your next software project? How do you determine whether your investment is going to pay off or take you and your company under?
For most customers, it's a matter of time and cost: "How long will it take?" and "How much will it cost?" While these questions seem reasonable, they don't give you nearly enough information to decide whether to proceed - irrespective of how much up-front work goes into the answers - because they're misaligned with the nature of software development work. At best, they will yield a sometimes-educated guess about what the final tally might be, but they tell you nothing about how you will protect and ensure your interests as the project progresses - in other words, your ROI.
Software development is a 100% design-driven process. As a result, unlike most physical construction or manufacturing projects, there's no real division of labour between "architects" and "builders", primarily because there's nothing physical about software. It's ethereal thought-stuff, captured in digital form by developers, artists, user interface experts, analysts, and the like, then constructed by machines into software that runs on a device.
In this process, "construction" is so cheap as to be free, but design is so pervasive as to be volatile, risky and expensive. In a design-driven process, the longer the lead time between request and result, the greater the probability it won't meet a customer's (or user's) expectations. (For a deeper explanation of why this is, see my post Understanding Misalignments in Software Development Projects). I often use the following simple graph to illustrate this correlation with new customers:
The X axis shows lead time; the Y axis shows investment and therefore risk. The dotted line shows customer investment into the project (negative) over a period of time and any subsequent "returns" they receive in working, tested, software (positive). The more this line "breaks through" to the positive side, the better.
In this graph, we see the typical "investment" curve for a project guided by asking "How long?/How much?" and related questions: a large, up-front investment in activities that do not result in seeing a representation of the final product in working, tested software until well after the optimistically-estimated launch date. In some cases this "long tail" can be so long that the project is scrapped, the project sponsor loses their job, or the company takes on serious debt and goes under. Two researchers wrote about this very phenomenon in an article in the September 2011 issue of the Harvard Business Review, Why Your IT Project May Be Riskier Than You Think, referring to these runaway projects as "black swans":
When we broke down the projects’ cost overruns, what we found surprised us. The average overrun was 27%—but that figure masks a far more alarming one. Graphing the projects’ budget overruns reveals a “fat tail”—a large number of gigantic overages. Fully one in six of the projects we studied was a black swan, with a cost overrun of 200%, on average, and a schedule overrun of almost 70%. This highlights the true pitfall of IT change initiatives: It’s not that they’re particularly prone to high cost overruns on average, as management consultants and academic studies have previously suggested. It’s that an unusually large proportion of them incur massive overages—that is, there are a disproportionate number of black swans. By focusing on averages instead of the more damaging outliers, most managers and consultants have been missing the real problem.
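The researchers' point about averages masking outliers is easy to see with a little arithmetic. The figures below are purely illustrative (they are not the HBR dataset): five projects with modest overruns plus one "black swan" produce a mean that looks nothing like the typical project.

```python
from statistics import mean, median

# Illustrative only: hypothetical overrun percentages, not the HBR data.
# Five projects with modest cost overruns and one "black swan" at 200%.
overruns = [5, 8, 10, 12, 15, 200]  # percent over budget

print(f"Mean overrun:   {mean(overruns):.1f}%")    # 41.7% - skewed upward by one outlier
print(f"Median overrun: {median(overruns):.1f}%")  # 11.0% - what the 'typical' project sees
```

A manager looking only at the mean would conclude overruns are moderate across the board; in fact five of the six projects did fine and one was a disaster, which is exactly the distinction the "black swan" framing captures.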
So what questions should you ask a contractor to determine their ability to protect your investment under such volatile conditions and avoid these IT project "black swans"? I suggest turning the traditional "How long/How much" questions and their variants a bit upside down and inside out, with the aim of quickly determining a software contractor's capabilities and tolerance for delivering valuable software under constraints. All of these should be answerable within five to ten minutes:
- How soon can I see a working, tested representation of my final product and what will be the cost?
- On what ongoing basis will I see working, tested representations of my final product and at what cost?
- How long will it take to make the product "production ready"?
- Can I make changes to the product mid-stream?
- What will it cost to cancel the project ahead of schedule?
Question 1) is perhaps the most important to ask because it subverts gross speculation in favour of delivering against a concrete, short-term goal. This is considerably more challenging than deferring to some optimistic, faraway milestone because it requires an ability to aggressively pare down technical complexity without compromising quality. Note that the objective here isn't to get a cheap prototype or proof-of-concept that will be discarded: it's to obtain the first "slice" of end-to-end functionality, which will form part of the end product's nucleus - sometimes called a "walking skeleton".
Question 2) extends the intent of the first question by probing the contractor's ability to deliver and demonstrate valuable software on a regular cadence. This tells you what the anticipated cycle time for your project will be, i.e. the periods of time between planning and delivering. In the graph above, there is only one "cycle", with extremely long lead times that expose your project to high levels of risk - shorter lead and cycle times help to mitigate it.
Question 3) probes the contractor's ability to ship at will. This requires a team that understands quality software engineering practices such as automated builds and testing. A warning sign here is a contractor who needs 2-4 weeks or more to "stabilize" the product - an indication that they're forgoing in-situ tests while accumulating unaddressed defects.
Question 4) probes the resilience of the contractor's development process to change - you can and should expect your product to evolve as it undergoes development. Accommodating change mid-stream should be welcome and seamless, not discouraged and painful.
Question 5) probes the contractor's process for a smooth wind-down in the event that your ROI objectives aren't being met - for example, if the purpose of the solution becomes irrelevant or if there have been budget cutbacks. As with Question 4), this shouldn't be painful.
Some ideal responses would include:
- Outside of some preliminary work to set the project up, 2-4 weeks depending on an interval we agree upon. Cost would be commensurate with time and materials for an agreed upon team size.
- Every 2-4 weeks, in accordance with our agreed-upon interval.
- Within 2-4 weeks of your decision to ship.
- Qualified "yes": Like-for-like feature swaps can be made at zero cost for the next interval period.
- Approximately 20% of the remaining value of the contracted period.
Teams that work in 2-4 week planning/delivery periods or "cadences" help to mitigate your risk exposure by dramatically decreasing the lead time between request and result, allowing for "course corrections" or changes to be made when it is most cost-effective. Using the same sample graph from above, we can visualize this difference with the traditional (and misaligned) "How long?/How much?" process:
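The risk-mitigation effect of a short cadence can be sketched with a deliberately simplified model. The numbers here are assumptions for illustration only (not real project data): "investment at risk" is money spent before any working, tested software has been delivered against it.

```python
# Simplified, hypothetical model: assumed spend figures, not real data.
monthly_spend = 100   # arbitrary units per month
months = 12

# Big-bang delivery: nothing ships until month 12, so every unit spent
# stays at risk for the entire project.
big_bang_peak_risk = monthly_spend * months

# Iterative delivery: a working, tested increment lands every 2 months,
# so at most one cycle's spend is ever at risk at once.
cadence_months = 2
iterative_peak_risk = monthly_spend * cadence_months

print(big_bang_peak_risk)   # 1200
print(iterative_peak_risk)  # 200
```

Under these assumptions the short cadence caps peak exposure at one cycle's worth of spend, which is the shape of the second curve in the graph: the dotted line repeatedly "breaks through" to the positive side instead of diving for a year and hoping.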
Viewed this way, it should be readily apparent which process is most effective at reducing risk exposure while delivering working, tested software. By subverting questions about "how long/how much" in favour of the above queries, you can gain valuable insights about candidate contractors capable of working this way without having to resort to lengthy RFPs or interviews. If you get responses in line with those I've listed above, you've likely found a contractor who understands how to align the work they do with how they do it - and will put that expertise to work for you.
Feel free to comment below or on Twitter via @DerailleurAgile.