A colleague of mine recently wrote about metrics, why they are important and what purpose they serve. In this post I want to take a few minutes to share some experience with metrics and how they can be both a help and a hindrance. A key skill is making this distinction.
Choose Wisely: Accuracy is Important, but not as much as the Value of the Measurement
By training and disposition, I can be pretty analytical and tend to think conceptually before applying ideas to the real world. As much as I enjoy theoretical arguments I’ve seen enough to know that practical application of ideas is what counts. In this regard metrics fascinate me because it is easy to be seduced by the false promises of implied accuracy and meaning. Let me give you a couple of examples.
Millions of Lines of Code: occasionally I hear people comment (brag, really) about how many lines of code are in a software solution, and some folks are really impressed by a number that is typically in the millions. Frankly, I don’t care about the number of lines of code because it gives me little real insight. I’d be more interested if the number told me something about changes in business efficiency, business process cycle time, headcount impact, or transaction processing time, but it doesn’t. It sounds like an impressive metric, and it can be measured pretty accurately, but its value to a client is limited.
Instead we need to identify metrics that have value and then decide how accurately we need to measure to understand the impact.
Percentage of Test Scripts Complete: I’ve been on projects where the test team diligently counted the number of test scripts and the number of steps within the scripts. By dividing the number of steps successfully executed by the total number of test steps, you can treat the result as the percentage of testing completed. I love hearing that 53.7% of testing is complete. My analytical side asks whether I should put much stock in this percentage; my conclusion is, not much, at least not to a decimal place of accuracy. This metric assumes all test steps are created equal: that never happens. Test tools often treat the data this way, so be careful.
However, some people may choose to use this measurement in this way for this purpose. I’m actually OK with that as long as there is an accompanying explanation of the measurement and its limitations. The danger is that you can look at 53.7% completion, infer that testing is slightly more than half done, and conclude that it requires X more weeks. Now imagine having to explain that you are really 30% (or 80%) complete and that you need another X weeks plus (or minus) a few more. Your project plan and timeline may be correct; it is the metric that is misleading.
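To make the step-counting pitfall concrete, here is a minimal sketch of the two ways to score the same testing data. All script names, step counts, and effort weights below are invented for illustration; the point is that a raw step-based percentage can overstate progress when the remaining scripts carry most of the effort.

```python
# Hypothetical test-script data: each script has a step count, the number
# of steps executed so far, and an estimated effort weight in hours.
# All values are invented for illustration.
scripts = [
    {"name": "login smoke test",       "steps": 10, "done": 10, "effort_hours": 2},
    {"name": "order entry happy path", "steps": 40, "done": 30, "effort_hours": 8},
    {"name": "month-end close",        "steps": 50, "done": 14, "effort_hours": 40},
]

def step_completion(scripts):
    """Naive metric: executed steps divided by total steps."""
    total = sum(s["steps"] for s in scripts)
    done = sum(s["done"] for s in scripts)
    return 100.0 * done / total

def effort_completion(scripts):
    """Weighted metric: credit each script's effort pro-rated by steps done."""
    total = sum(s["effort_hours"] for s in scripts)
    done = sum(s["effort_hours"] * s["done"] / s["steps"] for s in scripts)
    return 100.0 * done / total

print(f"step-based:   {step_completion(scripts):.1f}%")    # 54.0%
print(f"effort-based: {effort_completion(scripts):.1f}%")  # 38.4%
```

The step-based number says testing is slightly more than half done, while weighting by effort says it is barely a third done, because the hardest script has the most steps remaining. Neither number is "the" answer; the gap between them is exactly why the headline percentage needs an accompanying explanation.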
In both these examples we’ve chosen a method to measure something with a high degree of accuracy. Unfortunately, in both cases the metric doesn’t bring much value to the project.
Make Metrics Meaningful & Don’t Be Afraid to Change Them When They Don’t Work
The two metrics discussed above don’t work well for me, but they might work for you. Either way, it is important to road-test different metrics and see if they resonate with your audience. It can be a difficult process because consultants are often expected (albeit unreasonably) to show up with all the answers. Partnering with your client allows you to bring prior knowledge, adapt it to the new environment, and avoid the this-worked-on-my-last-project syndrome.
As an example, on a recent project I was temporarily acting as a development manager (not a core competency, so I was feeling my way around) and needed to provide a weekly readout on development object status. I readily admit I tried a few different ways to present the information: bar charts summarizing objects completed in the last week; running totals for completion; and open object counts showing what was in design, in development, in unit testing, and completed and ready for integration testing. After several attempts and refinements, I arrived at a presentation that made sense to the client and provided both a historical view of completion and a forward-looking view of what to expect over the coming weeks.
At its core, the same information was being presented in different ways. I was fortunate to have a collaborative client who provided input and guidance to massage and refine both the metric and the readout.
Measurements & Messages
Ultimately a metric must serve a purpose: if you can’t explain why something is being measured and how to measure it then it’s no better than a hot air balloon landing in a field.
We can use metrics to measure progress and status and ultimately use them as calls to action. In our competitive world we all want to get a good score for achievement and not just for showing up and doing busy work.