I was recently sent a link to a question on Quora. The question was “Why are software development task estimations regularly off by a factor of 2-3?” One of the most upvoted answers is a story about hiking down the coast of California from San Francisco to Los Angeles. The hikers in the story encounter many issues that affect their estimate of how long the trip will take. It is a good read with some good humor (though not exactly clean language the whole way…that happens in software development as well).
My favorite part of the story, though, comes towards the end, when the hikers are re-evaluating their estimate. One hiker uses “yesterday’s weather”, in other words the rate they had actually been able to achieve, to create a new estimate. The narrator is not willing to accept it.
My friend says, well, we’ve gone 40 miles in 4 days, it is at least a 600 mile trip, so that’s 60 days, probably 70 to be safe. I say, “no f–ing way… yes, I’ve never done this walk before, but I *know* it does not take 70 days to walk from San Francisco to Los Angeles. Our friends are going to laugh at us if we call and tell them we won’t see them until Easter!”
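The hiker’s math is a plain rate projection: divide the remaining work by the rate achieved so far. A minimal sketch of that calculation (the function name and the per-day breakdown are mine, purely illustrative):

```python
# A "yesterday's weather" forecast: project remaining work using only
# the rate actually achieved so far, not the rate we hoped for.

def yesterdays_weather_forecast(completed_per_period, remaining_work):
    """Estimate periods left from the average observed rate."""
    if not completed_per_period:
        raise ValueError("need at least one completed period to forecast")
    velocity = sum(completed_per_period) / len(completed_per_period)
    return remaining_work / velocity

# The hikers' numbers: 40 miles in 4 days leaves 560 of at least 600 miles.
days_left = yesterdays_weather_forecast([10, 10, 10, 10], 600 - 40)
print(round(days_left))  # 56 more days: roughly 60 total, "70 to be safe"
```

The same arithmetic works for an agile team by swapping miles per day for story points per iteration.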
I see this happening in new agile teams all the time, and I have fallen victim to it myself. We see the burndown tracking well above the rate required to meet our goal, yet we ignore it. We come up with excuses for why the data is wrong instead of being responsible and using the data we have to make a good business decision.
While I’ve made the mistake of ignoring the facts because I didn’t want to admit my original plan was wrong, I have also done the opposite. On one project, I saw after just a handful of iterations that our release was tracking behind, and I was able to discuss this with the business stakeholders. In that case we decided we could adjust the release date and give the project adequate time to complete. We were able to make that decision very early in the project instead of at the last minute.