All estimates are range estimates, whether we realize it or not. If we say that a project will finish on July 14, or that, by the end of the year, 56 function points will be complete, we likely do not mean we are 99% certain that these numbers are dead on.
We are implying some kind of range (e.g., "sometime in July"). And without making the bounds of the range explicit, or specifying a level of confidence that the value being estimated lies inside that range, we are probably doing more harm than good with our "estimates."
McConnell points out that in many areas of life we systematically do a bad job at constructing range estimates wide enough to include the value we're trying to estimate. He includes examples -- and an exercise for the reader -- showing that even when given specific instructions to generate an arbitrarily wide range that will include a target value, we still fail to make it wide enough.
We have a cognitive bias that causes us to mistake a wide estimate range for an unacceptably vague answer even when it is not. No wonder, then, that in real-world business scenarios, where pressures exist to create estimates that are both hyper-precise and artificially small, the estimation process comes apart right out of the gate.
How to think about ranges and estimates? Here are a few points to get started:
- Many values can be estimated in a software development project. Typical values to estimate include total effort or resources to implement a set of functionality, or quantity of functionality that can be implemented with fixed constraints.
- A point estimate is really just a very narrow range estimate -- and almost certainly an inaccurate range estimate.
- How wide should a range estimate be? Wide enough that one can have a specific confidence level that the range includes the actual number being estimated.
- If the confidence level is fixed, then the width of the range necessarily depends on how much is known to inform the estimate (as well as on the estimation techniques being used, etc.). For example, if you know the specs for a project, it is in the best case possible to achieve a narrower range estimate at a given confidence level than if you don't yet know the specs at all.
- The amount of information known (feature details, effort to implement each feature, unexpected dependencies, etc.) increases over the course of the project.
- Therefore, updated range estimates can narrow over the course of the project. Initial ranges, if they are accurate, will necessarily be quite wide. McConnell refers to these narrowing intervals over time as the "Cone of Uncertainty" based on the cone-like shape that the graph (range vs. time) makes.
- Despite the convergence of the "cone" graph lines, the cone concept reflects a best-case scenario of mapping limited information to limited accuracy in estimation. If other estimation best practices are not followed, it is possible to do much worse than the ranges reflected in the cone.
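The points above can be made concrete with a small sketch. Suppose we estimate a project as a sum of per-task effort ranges and use a simple Monte Carlo simulation to turn those into a project-level range at a chosen confidence level. The task names and numbers below are hypothetical, chosen only to illustrate the idea: as the per-task ranges narrow (more information later in the project), the project range at the same confidence level narrows too, which is the cone in miniature.

```python
import random

def project_range(task_ranges, confidence=0.80, trials=10_000):
    """Monte Carlo range estimate for total project effort.

    task_ranges: list of (low, high) per-task effort bounds, in days.
    Returns a (low, high) range that covers `confidence` of the
    simulated totals (a central interval from the sorted samples).
    """
    totals = sorted(
        sum(random.uniform(lo, hi) for lo, hi in task_ranges)
        for _ in range(trials)
    )
    tail = (1 - confidence) / 2
    lo_i = int(tail * trials)
    hi_i = int((1 - tail) * trials) - 1
    return totals[lo_i], totals[hi_i]

random.seed(0)

# Early in the project: little is known, so per-task ranges are wide.
early = [(2, 12), (5, 30), (1, 8), (4, 20)]
# Later: the same four tasks, better understood, with narrower ranges.
later = [(4, 7), (12, 18), (2, 4), (8, 12)]

early_lo, early_hi = project_range(early)
later_lo, later_hi = project_range(later)

print(f"early 80% range: {early_lo:.0f}-{early_hi:.0f} days")
print(f"later 80% range: {later_lo:.0f}-{later_hi:.0f} days")
```

Note that the simulation only narrows the range because the inputs improved; as the last point above says, this is the best case, and sloppy inputs will produce ranges far worse than the cone suggests.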