What enables effective decisions?

Published by Indiainfoline.com

“Big Data” or “Big Intuition”

With cloud computing and the ever-falling price of storage, the capacity available to store data has grown exponentially. At the same time, data sets collated from varied sources are growing at an ever-increasing pace, to the scale of petabytes. As experts put it, this combined effect is a huge opportunity: with the right technological support to store and analyse petabytes across sources, one can gain new insights in business, medicine, e-commerce, intelligence gathering and so on. There is plenty of anecdotal evidence from data-rich industries to back up the claim. This definitely sounds exciting.

But before jumping on the bandwagon, we need to ask a few critical questions:

  • With ever more data available over the years, have we been able to improve our decision-making in the social sciences?
  • Can the ability of “big data” technologies to provide new insights eventually replace the need to rely on human intuition?

If the efficacy of data-driven decision-making is argued from anecdotes, let us look for counterexamples. Nassim Taleb, in his book ‘Antifragile’, highlights how the US government was unable to predict the Arab Spring revolutions or even the financial crisis of 2008, despite investing billions in predictive analytics.

He argues that, in the physical world, we may be able to predict the trajectory of a rocket’s flight, but it is far harder to predict rare events (what he calls black swans) in a non-linear complex system, where a cause may not have a proportionate effect because of feedback loops. In such systems, mathematical models will fail regardless of their sophistication or the volume and variety of data fed into them.


The way to test this claim is to run predictive models retrospectively, i.e., to predict a past social event using only information from the periods preceding it. Most demand forecasting tools fail this test.
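To make the idea concrete, here is a minimal Python sketch of such a retrospective test. The demand series and the trailing-average model are made-up stand-ins for a real forecasting tool, purely for illustration:

```python
# Retrospective (rolling-origin) test: at every step the "model" sees only
# data from the periods preceding the point it is asked to predict.
# The demand series and the trailing-average model are hypothetical.

demand = [100, 98, 103, 101, 250, 97, 99, 240, 102, 100]  # spikes = rare events

def trailing_average(history, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

errors = []
for t in range(3, len(demand)):       # start once there is enough history
    history = demand[:t]              # only information available before period t
    forecast = trailing_average(history)
    errors.append(abs(demand[t] - forecast))

print(f"Mean absolute error: {sum(errors) / len(errors):.1f}")
# The spikes (the "rare events") dominate the error, which is exactly the
# weakness the retrospective test is meant to expose.
```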

If this is the case, why do we feel so sure about our ability to predict which movie will be a super hit or which product will be the next hit in the market?

This is because of hindsight bias. Our ability to construct a perfect narrative of cause and effect for a major event or crisis after it has happened makes us believe that, with more information collated from different sources, we could easily predict it next time. If we analyse any major terrorist attack in hindsight, the indirect signals leading up to it seem obvious and unmistakable, and so we feel a sense of frustration at the “incompetence” of the people in charge.

But if we look at data as it arrives, much of it is contradictory and full of noise. As the historian Roberta Wohlstetter once remarked, “After the event, of course, a signal is always crystal clear; we can see what disaster it was signalling. But before the event it is obscure and pregnant with conflicting meanings.” In his book “The Drunkard’s Walk” (regarded as one of the top 10 science books of 2008), physicist Leonard Mlodinow remarks, “The crystal ball of events is possible only when the event has happened. So we believe we know why a film did well, a candidate won an election, a new product failed or a disease turned worse. But such expertise is empty in the sense that it is of little use in predicting when a film will do well, a new product will fail or a team will lose.”

Randomness, contradiction and irrelevance in the data make it difficult to pick up signals. At the same time, our biases and prejudices can blind us to signals even when they are distinctly present in the data. We can, at times, ignore whatever does not fit our thinking paradigms – the confirmation bias!

In 2001, Cisco, which ran one of the most “wired” supply chains, announced to the stock market that it was writing off $2.5 billion of excess raw material. Was this simply a huge error in its forecasting software? Unlikely: the amount was almost half of a typical quarter’s sales, far too large to be explained by forecast error alone. The real problem was that Cisco’s suppliers were producing in anticipation of future consumption. When demand dropped with the recession, the suppliers kept producing at the old rate, so excess components built up gradually over 18 months until the write-off became inevitable. Was the data on rising inventory at suppliers not visible to Cisco’s planners? Or is it a trap of local-optima paradigms?
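To see how quickly such a gap compounds, here is a back-of-the-envelope Python sketch. The old production rate, the reduced demand and the opening assumptions are all invented; only the 18-month horizon comes from the story above:

```python
# Build-up sketch: suppliers keep producing at the pre-downturn rate while
# actual consumption has fallen. All numbers are invented for illustration.

old_rate = 100        # units produced per month, pegged to earlier demand
new_demand = 60       # actual monthly consumption after the downturn
months = 18           # the build-up period mentioned in the article

excess = 0
for month in range(1, months + 1):
    excess += old_rate - new_demand   # the gap accumulates every single month
    if month % 6 == 0:
        print(f"Month {month:2d}: excess components = {excess} units")

# The final pile equals a full year of real demand -- the kind of stock
# that eventually has to be written off.
print(f"Excess as months of actual demand: {excess / new_demand:.1f}")
```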

Not many supply chain managers bother about the inventory levels of their suppliers; they are driven more by the local need for fast supplies. That is the paradigm through which they look at the data around them. If the local optimum is the predominant paradigm, one is blind to signals of a building problem at the global level until the mess hits the global picture. In India, almost the entire automotive and consumer goods supply chain goes through a similar “bullwhip” effect on a monthly horizon – a heavy month-end skew in dispatches followed by a dip in the first two weeks, even though actual end-consumer demand shows no such swing. This way of working wreaks havoc on working capital and on stock availability at the point of sale, as space and capital are locked up in slow-moving or non-moving items while others are stocked out.
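For illustration, a small Python sketch with assumed weekly numbers shows how the month-end skew whipsaws channel inventory even when consumer offtake is perfectly flat:

```python
# Hypothetical weekly numbers for one month: consumer offtake at the point of
# sale is flat, but factory dispatches dip early and spike at the month-end.
consumer_offtake = [250, 250, 250, 250]   # steady end-consumer demand
factory_dispatch = [100, 150, 250, 500]   # first-half dip, month-end push

stock = 300                               # assumed opening channel stock
for week, (sold, received) in enumerate(zip(consumer_offtake, factory_dispatch), 1):
    stock += received - sold
    print(f"Week {week}: dispatched {received:3d}, sold {sold}, channel stock {stock}")

# Total supply equals total demand over the month, yet the skew swings channel
# inventory from near stock-out to a month-end pile, locking up working capital.
```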
