Bad metrics are worse than no metrics

When it comes to setting goals and assessing your progress toward achieving them, be smart about how you use metrics and what you pay attention to. Not SMART.

Metrics have always been a key facet of gauging a business area’s accomplishments, be it IT, accounting, operations, or anywhere else. That’s never been more true than today. And as existing metrics constantly lose their relevance to C-suite goals, new metrics continually arise to take their place.

There’s no escaping them, so you might as well be smart about how you define and use them, especially when it comes to setting goals for both business and IT.

Why SMART isn’t so smart

For business and IT leaders setting goals for their organisations, SMART is a popular framework. It stands for “Specific, Measurable, Achievable, Relevant, and Time-bound.” You shouldn’t stand for it.

Understand, there’s nothing particularly wrong with being specific, although given a choice, being clear matters more.

Being achievable? Okay, I’ll go for that one. After all, why on earth would anyone set a goal that isn’t achievable? That’s just pretending, and pretending isn’t a good way to run any operation.

Relevant? This is a case of a criterion being necessary but not sufficient. When it comes to achieving, say, a business strategy, all goals must be relevant — they must be consistent with the strategy or they are, to use a too-excitable word, insubordinate.

But they’d better be more than merely relevant or they aren’t good enough. Individual goals must, when combined, achieve the strategy. Merely being consistent just doesn’t get the job done.

Time-bound? That’s one of those criteria that is sometimes a good idea but sometimes exactly wrong. When the goal corresponds to a tangible work product, a deadline makes all kinds of sense. But when steady progress is what’s called for, a deadline can be counterproductive. So set “time-bound” aside as a maybe.

That leaves measurability. When it comes to SMART goals, it is, as is so often the case, the measurable part that gets managers into trouble. That’s because it frequently runs afoul of one of Lewis’s Laws of Metrics: Anything you don’t measure you don’t get.

Measurability’s slippery slope

There’s nothing subtle about this. Think for a moment about what makes the business you support successful or, if not entirely successful, at least surviving. With very few exceptions, the business depends on one or more of three factors: (1) superior products; (2) excellent customer experiences; or (3) lower prices than its competitors.

You can measure the difference between your prices and your competitors’ prices without much difficulty. With few quibbles worth worrying about, price competitiveness is easily measured and has been ever since someone first explained that “Gimbels shops Macy’s and Macy’s shops Gimbels.”

Measuring product competitiveness is a far different and far more difficult matter. Measuring the excellence of your customers’ experiences is harder yet.

Which means that if the executive team buys into the SMART goal formulation, your business will inevitably and inexorably find itself competing on price, price, and only price, because price competitiveness is the only goal that passes the SMART test.

And as your business slides down this WD-40-coated slope, it will put itself on an irreversible customer-alienation trajectory, shunned by every potential customer willing to pay for better products and outstanding service.

The invisibility index

Even if you avoid the SMART formulation, the underlying issue is the same. Take out measurability, substitute “pays attention to,” and very little changes: Anything nobody pays attention to doesn’t get done.

And in case the point isn’t clear, “pay attention to” includes a few requirements of its own. In no particular order, management isn’t paying attention if a business function doesn’t get:

Budget: A lesson too many businesses failed to learn following IT’s successful resolution of the Y2K crisis 20+ years ago is that if you want important work to get done you need to be willing to pay for it. And for those whose memories don’t extend that far back, no, the Y2K issue wasn’t a fraud.

It was very real. IT’s mistake was addressing it too well, resulting in a phenomenon well-known in risk management circles, namely, that successful prevention is indistinguishable from absence of risk.

Recognition: I have, from time to time, proposed the creation of an “invisibility index” for gauging IT Operations’ level of success. It’s an essential metric, because, as stated, anything you don’t measure you don’t get.

With IT Operations, invisibility is what the function’s leader tries to achieve, because nobody notices IT Operations except when something goes wrong.
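To make the idea concrete, here is a minimal sketch of how such an invisibility index might be computed. This is a hypothetical illustration of the concept, not an established metric: the function name, the input format (user-visible incidents per month), and the definition (the fraction of months in which nobody outside IT had reason to notice IT Operations) are all assumptions for the sake of the example.

```python
def invisibility_index(incidents_per_month: list[int]) -> float:
    """Fraction of reporting periods with zero user-visible incidents.

    Hypothetical metric: a month counts as "invisible" only if nothing
    went wrong that anyone outside IT Operations would have noticed.
    """
    if not incidents_per_month:
        raise ValueError("need at least one month of data")
    quiet_months = sum(1 for n in incidents_per_month if n == 0)
    return quiet_months / len(incidents_per_month)


# Seven quiet months out of eight:
print(invisibility_index([0, 0, 0, 1, 0, 0, 0, 0]))  # 0.875
```

A metric this simple is the point: it gives a CEO something to congratulate, rather than something to ignore.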

As a general rule, businesses use metrics as an alternative to paying attention to something. A CEO might be on a first-name basis with every sales manager, and listen to their ideas on how effective the sales function truly is, and on the barriers to making it even more effective.

Even if that same CEO can identify the IT Operations manager when they’re in the same room, should they find themselves in an actual conversation, any interest the CEO expresses in what’s really going on and what’s needed is probably feigned. Should the CEO have any questions about how well the function is doing, there’s a service-level report to reference.

Imagine the impact should that same CEO call the head of IT Operations to express appreciation for the seventh consecutive month of perfect invisibility … or, following a disconcerting outage, to express confidence and ask if IT Operations is getting the support it needs.

Organisational listening: An organisation’s knowledge, experience, and judgment extend far beyond the executive suite and management layers. In large organisations, while executives provide guidance and managers know how they want work to get done, it’s the line employees — the ones who do the work — who know how work actually happens.

Recognising this fact puts metrics in their proper place in the broader realm of organisational listening — aka knowing What’s Going On Out There.

When it comes to paying attention, walking around (or its virtual-workforce equivalent) and employee roundtables do more than give executives and managers information they separately need. These two organisational-listening tools also deliver a message that those engaging in them are paying attention.

Bad metrics are worse than no metrics

In the wise words of Mark Twain, it ain’t what you don’t know that gets you into trouble. It’s what you do know that ain’t so. Bad metrics are what you do know that ain’t so. So when it comes to metrics, yes, it is important to use them. But it’s even more important to avoid placing too much reliance on them.
