Not all the decisions managers have to make are about risk, meaning decisions that might cause a serious loss but might also yield a considerable gain. Actually, such risky decisions are quite infrequent. While I still claim that the vast majority of organizations force their managers to be ultra-conservative, the losses from those decisions are small relative to the huge losses from wrong policies for dealing with “common and expected uncertainty.”
Take CCPM (the TOC Project Management solution) and ask yourself: how come planning an explicit project buffer is such a dramatic new insight? How come people insist that a task has one clear time duration?
Eli Goldratt said that organizations force certainty on uncertain situations.
The paradox is that by forcing certainty, management increases the negative impact of uncertainty. We see projects that take too long and shop floors that carry too much inventory. Many organizations suffer damage from a lack of manpower, a relatively cheap resource.
The common cause is every manager’s fear of being blamed for creating “waste.”
The prime example I like pointing to is the use, actually misuse, of sales forecasts. We know from probability theory, or statistics, that the minimum description of an uncertain variable contains two numbers, usually the average and the standard deviation. However, the vast majority of forecast reports, used for various decisions, include only ONE number.
What is the value of one central measure for a forecast when nothing describes the spread around that measure? If next month’s sales of Product134 are forecast to be 10,000, what is the likelihood that the actual sales will be 4,776, 8,244, 13,004 or even 18,559?
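The point can be made concrete with a short sketch. Assume, purely for illustration, that the forecast of 10,000 comes with a standard deviation of 3,000 and a Normal shape (neither number nor shape is in the original; both are assumptions). With those two numbers we can finally say how surprising any actual sales figure would be:

```python
# Sketch: why a forecast needs two numbers, not one.
# mu=10,000 is the forecast from the text; sigma=3,000 and the Normal
# shape are assumed here purely for illustration.
from statistics import NormalDist

forecast = NormalDist(mu=10_000, sigma=3_000)

# An interval expected to contain ~80% of actual outcomes:
lo, hi = forecast.inv_cdf(0.10), forecast.inv_cdf(0.90)
print(f"80% range: {lo:,.0f} .. {hi:,.0f}")

# How surprising are the sample actuals mentioned in the text?
for actual in (4_776, 8_244, 13_004, 18_559):
    z = (actual - forecast.mean) / forecast.stdev
    print(f"sales={actual:>6,}  z-score={z:+.2f}")
```

With a spread attached, 8,244 or 13,004 turn out to be entirely unremarkable outcomes of a “10,000” forecast, while only 18,559 is a genuine outlier; the single number 10,000 hides all of that.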
Suppose the magic number of 10,000 comes from the assessment of the salespeople. Is it clear that it represents an estimated average (an expected value, in mathematical language)? Isn’t it possible that the salespeople, who have no magic power to see the future, state a number they are comfortable with? If they are measured on meeting sales objectives that are set according to the forecast, they will reduce their estimate. But if they need Operations to provide availability, they will inflate the forecast.
I think that there is no way to manage an organization without forecasting!
I also think that Dynamic Buffer Management is actually a forecast: it looks at the combination of sales and inventory and predicts whether the stock buffer is about right.
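To show what such a forecast-by-rule can look like, here is a minimal sketch of a DBM-style signal. The thresholds (thirds of the buffer, a five-day window) are illustrative assumptions, not TOC’s canonical parameters:

```python
# A minimal sketch of a Dynamic-Buffer-Management-style rule.
# Assumed, illustrative parameters: the buffer is split into thirds,
# and a signal fires only after `window` consecutive days in a zone.

def dbm_signal(on_hand_history, buffer_size, window=5):
    """Return 'increase', 'decrease' or 'hold' for a stock buffer.

    on_hand_history: recent daily on-hand quantities, newest last.
    """
    recent = on_hand_history[-window:]
    red_line = buffer_size / 3        # bottom third: risk of shortage
    green_line = 2 * buffer_size / 3  # top third: excess stock
    if all(q < red_line for q in recent):    # persistently too low
        return "increase"
    if all(q > green_line for q in recent):  # persistently too high
        return "decrease"
    return "hold"

print(dbm_signal([25, 20, 18, 15, 12], buffer_size=100))  # increase
print(dbm_signal([80, 75, 72, 70, 71], buffer_size=100))  # decrease
print(dbm_signal([50, 40, 60, 55, 45], buffer_size=100))  # hold
```

The rule is a forecast in exactly the sense above: from recent sales-and-inventory behavior it predicts whether the current buffer will keep protecting availability without carrying excess.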
However, treating a forecast as one number is a gross mistake. The reliance on one number allows top management to judge their sales and operations people; however, that judgment is flawed, and the sales and operations managers have to protect themselves from the ignorance of top management.
The overall impact of mishandling common and expected uncertainty is HUGE. Management does not recognize the need for protective capacity and thus pushes for high “efficiency,” causing people to pretend to be very busy, which means they constantly look for “something to do,” regardless of whether it creates value or not.
However, protective capacity is truly required to maintain enough flexibility to deal with Murphy, as well as with temporary peaks of demand. TOC buffers help a lot to stabilize the flow and thereby improve overall performance, but they do not cover all the areas where people use their own hidden buffers, causing huge damage: the hiring process is fundamentally flawed by ridiculous requirements of a 100% technical fit instead of requiring learning capabilities; budgeting processes are flawed, carrying no appropriate reserves; even the need to maintain a presence in several different market segments is not fully recognized in many organizations.
Is it possible to learn how to deal with uncertainty, particularly common and expected uncertainty? The vast majority of managers have taken a basic course in Statistics, but it does not prepare them to handle uncertainty that has no clear probabilities, is definitely different from the Gaussian (Normal) distribution, and for which the samples of similar occurrences in the recent past are very small.
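A small simulation illustrates the last point. All the numbers here are assumptions chosen for illustration: demand follows a skewed, spiking pattern (log-normal rather than Gaussian), and only five past observations are available. Watch how unstable the textbook estimates become:

```python
# Illustration (assumed distribution and parameters): with skewed,
# fat-tailed demand and only 5 observations, the sample mean and
# standard deviation jump around wildly from sample to sample.
import random
from statistics import mean, stdev

random.seed(1)  # reproducible illustration

def small_sample_estimate(n=5):
    # Log-normal demand: skewed and occasionally spiking -- nothing
    # like the symmetric Gaussian of the basic Statistics course.
    sample = [random.lognormvariate(mu=9.2, sigma=0.5) for _ in range(n)]
    return mean(sample), stdev(sample)

for trial in range(4):
    m, s = small_sample_estimate()
    print(f"trial {trial}: mean~{m:,.0f}  stdev~{s:,.0f}")
```

Four equally legitimate five-observation samples yield noticeably different means and spreads, which is exactly the situation a basic course, built on large samples from well-behaved distributions, does not cover.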
The real obstacle to improving the policies, making them a better match to the inherent uncertainty, is getting rid of the utopia of “optimal decisions,” replacing it with “good enough,” and ceasing to measure people by numbers that are exposed to both uncertainty and dependencies.
Is that doable? For me, that is what TOC is all about.