My experience has been that many people in the agile world have an aversion to metrics.
One of our goals when Jason Gorman and I started running our metrics workshop, “Do you get what you measure?”, was to explore what metrics actually measure once people know they are being measured. Participants quickly see that almost every metric initially proposed drives very undesirable behaviour: people game the system, often producing the opposite of the metric’s intended effect.
This effect has been documented in real life too, and I think it’s unfortunate that so many pro-metrics people put metrics in place without considering how people will feel about being measured, or what they will do in response. Challenging and hardening a metric doesn’t take long or cost much, but, as the study of confirmation bias in cognitive psychology tells us, we all tend to seek evidence that supports what we already believe and to ignore, or fail to seek, evidence that does not.
Among agilistas it would be easy to stop the workshop right at this point. Metrics suck; it’s been proven. But there’s something I like about the workshop that stops it being a metrics-bashing session: we follow up with a couple of rounds of hardening the metrics, trying to make them resistant to gaming, and then attacking them again. Some metrics survive the hardening; some don’t.
Carry on to “Perception isn’t everything”, which finishes this line of thought.