I was addressing an audience of top management at a company recently, just before a new assignment was about to kick off. To my surprise, one of the team heads at the meeting informed me, a little belligerently and with very obvious ‘consultant fatigue’, that the ‘on-time’ performance at their company had recently jumped from 8% to 70%.
Not only was I taken aback, frankly, I was also a bit amused. Here I was, called in by the company bosses to tackle a chronic problem of operational performance, and right there was someone in the team, if not denying the very existence of such a problem, at least contesting the extent of the issue.
More reviews = better performance?
The explanation from the team went thus: A short while back, the company had engaged a consulting firm to lay down new roles and KPIs. They had begun a routine of rigorous weekly review meetings focused on nailing down ‘accountability’. Consequently, the on-time performance had improved to these current levels.
“Well, why not push the on-time performance to 99%?” I asked. The room waited with bated breath for me to unveil the secret to such an unprecedented achievement.
“Simply increase the frequency of your review meetings from weekly to daily,” I continued sarcastically.
Yes, it was a cheeky line, one I could not resist. But it got the job done. I had the team’s attention.
“It seems to me that you’ve managed to increase vehicle speed just by looking at the speedometer more frequently,” I said with a smile, tongue in cheek.
The audience was not receptive to my sarcasm.
“Let me explain with another analogy.”
“Imagine you want to get the bath water to a comfortable temperature. You would have to adjust the tap a few times between the hot and cold settings, right? But if, in a bid to get the temperature just right, you keep turning it right or left without waiting for any single adjustment to deliver results, you are more likely to get a jet of ice-cold water or be assaulted by a scalding hot stream. Constantly intervening, overdoing the inputs without giving the actions enough time to have an effect, does not help stabilize output; rather, it amplifies the variability of the system.”
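For readers who like to see this effect in numbers, here is a minimal simulation sketch (a hypothetical illustration, not data from the company discussed): a process whose output is a controllable setting plus random noise, managed two ways – left alone, or ‘corrected’ after every single reading.

```python
import random

def run(adjust: bool, n: int = 10000, seed: int = 42) -> float:
    """Simulate a noisy process and return the variance of its output."""
    rng = random.Random(seed)
    setting = 0.0                    # the controllable input (the 'tap')
    readings = []
    for _ in range(n):
        noise = rng.gauss(0, 1)      # natural variability of the system
        output = setting + noise
        readings.append(output)
        if adjust:
            setting -= output        # over-react: cancel the full deviation
    mean = sum(readings) / n
    return sum((r - mean) ** 2 for r in readings) / n

hands_off = run(adjust=False)
tampered = run(adjust=True)
print(f"variance, hands-off:            {hands_off:.2f}")
print(f"variance, adjust every reading: {tampered:.2f}")
```

With this over-reaction rule, each intervention carries the previous reading’s noise into the next output, and the variance roughly doubles compared with leaving the process alone – exactly the bath-tap effect.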
I thought I should make my point more explicitly and so continued.
“Let’s assume the lead time for an order is 10 to 12 weeks. What is the advantage of looking at the on-time performance every week?”
Pat came the response, “We can take corrective steps, if required.”
“Well, what steps have you taken during these reviews?”
There was no clear answer to that one. I heard “focus”, “alignment”, “planning” and a whole lot of other generic words and jargon.
Validating reliability numbers
I moved into the next phase of investigating the truth behind the numbers.
I asked, “What’s the month-end skew in production and dispatch?”
The answer was: “40% in the last week of the month, a large part of the dispatch is on the last day.” The team admitted that although the due dates are random and there is no weekly pattern on the demand side, most of the production is completed during the last few days of the month.
If there is no skew in customer demand, with improved reliability and better on-time performance, the month-end skew in production should have simultaneously come down!
In this instance, in spite of a claim of improvement in on-time performance, there seemed to be no change in month-end skew of production and dispatch. In fact, the skew continued to give the team sleepless nights, quite literally.
This was intriguing, so I wanted to know what stopped them from manufacturing at the beginning of the month. And what went on through the month?
“Expediting, planning and re-planning,” the head of planning responded.
“Plans don’t stay stable even for a day. There’s no predicting arrivals or dispatches,” the logistics head added.
This contradicted all claims of on-time performance improvement. Here was a team struggling to plan ahead beyond a day or two. How could they be meeting deadlines that had been set 12 weeks in advance?
“Predictability and reliable processes are integral to high on-time performance. Your reality is far from this,” I concluded.
The issue with “managing numbers” and not processes
It eventually emerged that the OTIF numbers at this company had improved recently because the team had been able to ‘manage’ or tweak delivery dates favorably!
The customers, in this case, are project businesses. Most project environments have multiple touch points where the customers have to give the company approvals, clearances, payments and so on. So, when there is a delay in an order, it is sometimes difficult to ascertain who is responsible. Moreover, the customer’s project itself is often delayed – because of slow progress in execution or because of delays at the customer’s many other suppliers. In both these situations, customers can often be coaxed to accommodate requests for deadline extensions, especially if they don’t need the material on the original deadline. In such circumstances, it is common practice to change purchase order dates.
As the team was under pressure to show better on-time performance numbers during the reviews, it had gone about aggressively apportioning delays to customers and finding ways to manipulate the on-time figure to look good. This is a much easier way to improve OTIF than improving the flow of orders in the factory and reducing lead-time variability! Clearly, the improvements were all on paper. No real change had been effected. The shop floor continued to suffer the chaos it always had.
It was yet another case of a company’s obsession with measurement and targets backfiring on them.
Why do companies focus on measurement and targets?
“If you cannot measure it, you cannot improve it,” the famous management guru Peter Drucker had declared. Eli Goldratt, elaborating on the Theory of Constraints, followed with, “Tell me how you measure me, and I will tell you how I will behave.” He also warned that “wrong measures drive wrong behaviours.”
Managers desirous of running efficient plants swear by these lines. They channel their energies into identifying the right variable to measure and holding people accountable for the same. Carrots and sticks are waved around to achieve the “right” numbers or targeted benchmarks.
This company (discussed above) had earlier lived in a paradigm of measures and targets based on ‘utilization’ or ‘tonnage’ in its factories. Of late, the senior management had learned that ‘tonnage’ is a wrong measure which can lead to wrong behaviors. So they changed the measure and started tracking OTIF, presumably a ‘right’ and more customer-centric measure. But sustainable actions that have a deep and lasting impact on performance take the concerted efforts of different departments – not easy. So, after the measurement changed, the shop floor did not change. And managers continued to spend large portions of their time gathering data, beautifying reports and fighting over figures – just a different set of figures!
A call to action
It is time that managers abandoned the “measure – set the target – assign accountability” style of management. It is time they let go of sacred management quotes and opened their minds to the fact that performance cannot be improved merely by measuring it! In fact, measurement alone is a great way to tempt people to game the system.
To ensure sustainable improvement, it has to be recognized that
- Many important parameters are not amenable to measurement, yet still have to be managed and improved.
- Variables do not move in isolation; they are interdependent and influence each other.
- Uncertainties will always affect the outcomes in any environment, so isolating an individual’s contribution to any organizational outcome will be difficult.
Let’s examine how each of these plays a role.
Not everything can be measured
Companies are made up of people. Employee passion, commitment, loyalty and the like are difficult to measure and track. But does that mean they are unimportant? Do they not require any managerial attention? All smart managers value these variables and know that softer aspects like encouraging healthy relationships and fostering interdependence can have a significant impact on businesses.
Even when some variables lend themselves to easy measurement, managers are wary of relying completely on available data to make crucial decisions because, as any researcher knows, there are far too many probable sources of bias or error. Thus, in practice, intuition often tends to trump data – for instance, when promoting a subordinate, launching a new product or appointing a new dealer.
Variables are interdependent
Companies are systems with interdependent parts. Variables can have positive and negative impacts on each other. For instance, when sales numbers go up in a B2B business, receivables are likely to go up with them. A hike in production can often mean an almost commensurate jump in inventory. It is difficult to find a completely independent performance variable. Therefore, trying to improve one without thought to the connected variables could spell trouble for the company as a whole.
Because of the interconnectedness of variables in a company, TOC proponents are highly critical of local efficiency measures, because actions taken to directly improve local measures can negatively impact global outcomes. But keeping an eye on local measures may not be as criminal as it is made out to be. At times, it is a necessity. For instance, in some environments, it gives us an idea of the protective capacity at a non-bottleneck – a crucial requirement for maintaining high on-time performance and preventing the bottleneck from shifting.
Interestingly, this interconnectedness of variables can be leveraged to immense advantage. In implementations of Theory of Constraints operations solutions, once the core problem of a unit is sorted, there is a ripple effect through the entire system: improvements show up across interlinked areas, and local as well as global indicators improve. A plant manager, happy with the dramatic post-TOC improvement in cost per ton of his paint booth, a non-bottleneck machine, remarked, “The TOC operations solution is great not only for overall output but also for local efficiencies!”
Numbers do not offer the complete picture
Let’s say some people in the sales team achieve their targets and some do not. Is this a true reflection of the salesmen’s skills? Not always. For example, a salesman who met the targets set for a period could have done so because the competition in his territory had supply issues. Or a salesman could have missed his targets in spite of making extraordinary efforts to salvage a situation. It is really difficult to separate the impact of extraneous conditions from the impact of human intervention (or skill) on an outcome! Only close observation of the environment and of an individual’s or a team’s work can offer the right insights. Measures alone do not reveal the full story.
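The point can be illustrated with a small thought experiment in code (the numbers are hypothetical, assumed purely for illustration): give an entire sales team identical skill, add random territory conditions, and count how many ‘make target’.

```python
import random

rng = random.Random(7)

# Hypothetical illustration: 100 salespeople with IDENTICAL skill,
# each facing random territory conditions in a given quarter.
skill = 100.0        # everyone contributes the same baseline sales
target = 110.0       # a stretch target set by management

results = []
for _ in range(100):
    luck = rng.gauss(0, 20)          # supply issues, competitor moves, etc.
    results.append(skill + luck)

hit = sum(1 for r in results if r >= target)
print(f"{hit} of 100 equally skilled salespeople 'made target'")
```

A sizeable fraction (around a third, in expectation) of these identical salespeople beat the target in any given period – purely by luck. Rewarding or penalizing on the hit/miss count alone would be rewarding noise.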
Therefore, fuzzy as these axioms may seem, the design of any improvement project has to accommodate or leverage the fact that
1) No single measure is inherently right or wrong.
2) Not everything that is important can be measured; and just because something can be measured does not make it important.
3) Every measure has to be understood in context and in its interrelationships with other measures.
4) Achievement of a set target may or may not reveal anything about an individual’s or a team’s contribution.