In my last post, Is Working Software Enough, I lamented teams' desire to optimize velocity and talked about how that can lead to sub-optimal performance. Optimizing for velocity tends to push design off the team and onto a separate group, which is usually not the best solution (I won't say never, because there are always exceptions). As I mentioned in that post, we would be better off measuring and optimizing the value we deliver, but how do we do that? Over the past week we have been talking about metrics here at the office as we start thinking about changes to our company and department goals for next year. So I thought I would share some of the measures that have come up that seem like better things to track and optimize than velocity.
If we are delivering valuable software to our customers frequently, they should be satisfied, correct? Given that many other things factor into customer satisfaction (sales, support, and training, to name a few), this could be a stretch. Still, it is something most software product companies already measure, and it is a good indicator of the overall value delivered to customers. These surveys also often include lower-level questions that tie back more closely to the software itself; for example, one thing we have added to our satisfaction survey is a question about the perceived quality of the software. Obviously this is a lagging indicator, but depending on how frequently you survey, it may not lag by much. Some people avoid lagging indicators, but I think they are important: in reality, there is no way to know whether what you did was successful until after it is done. We can always come up with leading indicators that have historically predicted success, but that doesn't mean they always will.
An obvious measure of value could be revenue, or better yet, profit. However, calculating this can get complicated, especially for incremental enhancements. If revenue is hard to attribute, another measure could simply be usage. If people are using the software, that is a good sign it is providing them value, and usage is often much easier to track than revenue. If you ship installed software that requires users to upgrade, the number of customers who choose to upgrade is another potentially good indicator of the value delivered with that release.
So we've discussed two good lagging indicators; now we get to some leading ones. One thing companies usually want to measure is efficiency. Focus factor has been around as a measure for some time, though the standard definition has some issues: it was usually applied at the individual level and assumed that all time spent on a task was productive. Scott Downey at RapidScrum has redefined the measurement so it does a much better job of measuring efficiency on a scrum team. Under Scott's definition, Focus Factor is a measure of how much of a sprint's effort resulted in requested, completed, accepted working software. If you're familiar with Scott, you know he never uses hours: you estimate in story points and you report work in story points. So you end up with the total story points reported during the sprint (work capacity) and velocity (the sum of the original story point estimates of the completed, accepted stories). A team's Focus Factor is then its velocity divided by its work capacity. An added benefit: because it is a percentage, it can be compared across teams even if they use different story point scales.
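To make the arithmetic concrete, here is a minimal sketch in Python. The Story fields and the sample sprint are hypothetical, purely for illustration; none of this comes from RapidScrum itself.

```python
from dataclasses import dataclass

@dataclass
class Story:
    original_estimate: int  # story points agreed at sprint planning
    reported_work: int      # story points of work reported during the sprint
    accepted: bool          # whether the product owner accepted the story

def focus_factor(stories: list[Story]) -> float:
    """Velocity (original estimates of accepted stories) divided by
    work capacity (all story points reported during the sprint)."""
    velocity = sum(s.original_estimate for s in stories if s.accepted)
    work_capacity = sum(s.reported_work for s in stories)
    return velocity / work_capacity

# A made-up sprint: two accepted stories and one that was not accepted.
sprint = [Story(8, 10, True), Story(5, 5, True), Story(3, 6, False)]
print(f"Focus Factor: {focus_factor(sprint):.0%}")  # Focus Factor: 62%
```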
What is a desirable focus factor? Explaining that isn't really the focus of this post, so for now I'll just tell you that 80% is the target (this number comes from W. Edwards Deming). Anything too far from 80% (let's say lower than 70% or higher than 90%) would be an indicator that something may be off: the team may be committing to too much or too little, or a number of other things could be going on.
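Continuing the sketch above, a sprint outside that band could be flagged like this; the cutoff arguments and the wording of the messages are my own illustration, not part of Scott's definition.

```python
def check_focus_factor(ff: float, low: float = 0.70, high: float = 0.90) -> str:
    """Flag a Focus Factor that falls outside the 70%-90% band."""
    if ff < low:
        return "below target zone: maybe over-committing, or lots of unplanned work"
    if ff > high:
        return "above target zone: maybe under-committing"
    return "within target zone"

print(check_focus_factor(focus_factor(sprint)))  # below target zone: ...
```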
In Mike Cottmeyer's recent post on The Real Reason We Estimate, he points out that many times an estimating problem isn't really an estimating problem at all; it's a shared understanding problem. If we buy into that (and I do), what metric can we look at as an indicator of a shared understanding problem? Scott again provides a key measurement here, which he calls Found Work. Basically, it is a measure of how much unexpected complexity a team discovered mid-sprint on work it had already committed to; in other words, how well the team understood the work going into the sprint. Scott measures this as a percentage of the original commitment: take the work reported on a card minus the original estimate on the card (again, both in story points), sum that across the cards, and divide by the original commitment for the sprint (also in story points). This gives a percentage, and the closer it is to zero, the better the shared understanding.
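Reusing the hypothetical Story type and sprint from the Focus Factor sketch, Found Work might be computed like this. Note that letting under-estimates offset over-estimates is my reading of the definition; clamping each card at zero would be equally defensible.

```python
def found_work(stories: list[Story]) -> float:
    """Story points discovered mid-sprint beyond the original estimates,
    as a fraction of the original sprint commitment."""
    discovered = sum(s.reported_work - s.original_estimate for s in stories)
    commitment = sum(s.original_estimate for s in stories)
    return discovered / commitment

print(f"Found Work: {found_work(sprint):.0%}")  # ((10-8)+(5-5)+(6-3))/16 -> 31%
```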
So there we have four metrics, two lagging and two leading, that can provide good indicators of how a scrum team is performing. All of these measures also encompass the dynamic between the product owner and the team; both parties need to be doing a good job for the numbers to be good. If the team isn't building valuable things, customer satisfaction and usage will be low. If the team is not working efficiently, the focus factor will fall outside the target zone. High found work indicates a low level of shared understanding, either between the team and the product owner or within the team about the technology. Does anyone else have other metrics they like to track on their agile teams?
UPDATE: While I was working on this post, Mike Cottmeyer published an article on his blog, Gaming the Numbers, that covers this subject as well; there are some good thoughts and discussion in the comments thread that I would encourage you to check out.