The problems with “Common and Expected Uncertainty”

Not all the decisions managers have to make are about risk, meaning decisions that might cause a serious loss but might also bring a considerable gain. Actually, those risky decisions are quite infrequent. While I still claim that the vast majority of organizations force their managers to be ultra-conservative, the losses from those decisions are small relative to the huge loss from wrong policies for dealing with “common and expected uncertainty.”

Take CCPM (the TOC project management solution) and ask yourself: how come planning an explicit project buffer is such a dramatic new insight? How come people insist that a task has one clear time duration?

Eli Goldratt said that organizations force certainty on uncertain situations.

The paradox is that by forcing certainty, management increases the negative impact of uncertainty. We see projects that take too long and shop floors that carry too much inventory. Many organizations suffer hazards because of a lack of manpower, a relatively cheap resource.

The common cause is every human manager’s concern about being blamed for creating “waste”.

The prime example I like pointing to is the use, actually the misuse, of sales forecasts. We know from probability theory, or statistics, that the minimum description of an uncertain variable contains two numbers, usually the average and the standard deviation. However, the vast majority of forecast reports, used for various decisions, include only ONE number.

What is the value of one central measure for a forecast when nothing describes the spread around that measure? If next month’s sales of Product134 are forecasted to be 10,000, what is the likelihood that the actual sales will be 4,776, 8,244, 13,004 or even 18,559?
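To make the point concrete, here is a minimal sketch; the spread of 3,000 units and the Normal distribution are purely illustrative assumptions, not a claim about how real demand behaves:

```python
# A minimal sketch: how little a single-number forecast says once spread is
# acknowledged. All numbers are hypothetical and the Normal distribution is
# used only for illustration; real demand is rarely Normal.
from statistics import NormalDist

point_forecast = 10_000      # the "one number" in the report
assumed_sigma = 3_000        # hypothetical spread around that number

demand = NormalDist(mu=point_forecast, sigma=assumed_sigma)

# Chance that actual sales land within +/-5% of the forecast number.
near_hit = demand.cdf(10_500) - demand.cdf(9_500)

# Chance of selling no more than 4,776 units.
low_tail = demand.cdf(4_776)

print(f"P(9,500..10,500) ~ {near_hit:.0%}")    # ~13%: the single number is rarely "right"
print(f"P(actual <= 4,776) ~ {low_tail:.1%}")  # ~4%: far-off outcomes are not negligible
```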

Suppose that the magic number of 10,000 comes from the assessment of salespeople. Is it clear that it represents an estimated average (an expected value, in mathematical language)? Isn’t it possible that salespeople, who have no magic power to see the future, simply state a number they are comfortable with? If they are measured on meeting sales objectives that are set according to the forecast, they will reduce their estimate. But if they need Operations to provide availability, they will inflate the forecast.

I think that there is no way to manage an organization without forecasting!  

I also think that Dynamic-Buffer-Management is actually a forecast looking at the combination of sales and inventory and predicts whether the stock buffer is about right.  
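As a rough sketch of that idea (the one-third buffer zones, the adjustment factor and the check window below are common rules of thumb used only for illustration, not something prescribed here):

```python
# A rough sketch of dynamic buffer management; zones, factors and the window
# are illustrative rules of thumb.
def adjust_buffer(buffer_size: int, on_hand_history: list[int],
                  check_window: int = 5) -> int:
    """Return a new buffer size based on recent on-hand inventory levels."""
    recent = on_hand_history[-check_window:]
    red_line = buffer_size / 3           # bottom third of the buffer
    green_line = 2 * buffer_size / 3     # top third of the buffer

    if all(level < red_line for level in recent):
        # On-hand keeps sitting deep in the red: the buffer (an implicit
        # forecast of demand over the replenishment time) is too small.
        return round(buffer_size * 4 / 3)
    if all(level > green_line for level in recent):
        # On-hand barely leaves the green: the buffer is too large.
        return round(buffer_size * 2 / 3)
    return buffer_size                   # the buffer is "about right"

# Example: a 900-unit buffer whose on-hand level stays below the red line (300).
print(adjust_buffer(900, [290, 260, 240, 210, 190]))  # -> 1200
```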

However, treating a forecast as one number is a gross mistake. The reliance on one number lets top management judge their sales and operations people, but that judgment is flawed, and the sales and operations managers have to protect themselves from the ignorance of top management.

The overall impact of mishandling the common and expected uncertainty is HUGE. Management does not recognize the need for protective capacity and thus pushes for high “efficiency”, causing people to pretend to be very busy, which means they constantly look for “something to do”, regardless of whether it creates value or not.

However, protective capacity is truly required in order to maintain enough flexibility to deal with Murphy, as well as with temporary peaks of demand. TOC buffers help a lot to stabilize the flow and thereby improve overall performance, but they do not cover all the areas where people build their own hidden buffers, causing huge damage: the hiring process is basically flawed, with ridiculous requirements of a 100% technical fit instead of a requirement for learning capabilities; budgeting processes are flawed, carrying no appropriate reserves; even the need to maintain a presence in several different market segments is not fully recognized in many organizations.

Is it possible to learn how to deal with uncertainty, particularly the common and expected kind? The vast majority of managers have taken a basic course in statistics, but it does not prepare them to handle uncertainty that has no clear probabilities, behaves quite differently from the Gaussian (Normal) distribution, and offers only very small samples of similar occurrences in the recent past.

The real obstacle to improving the policies, making them a better match to the inherent uncertainty, is getting rid of the utopia of “optimal decisions”, replacing it with “good enough”, and ceasing to measure people by numbers that are exposed to both uncertainty and dependencies.

Is that doable? For me that is what TOC is all about.

Published by

Eli Schragenheim

My love for challenges makes my life interesting. I'm concerned when I see organizations ignore uncertainty and I cannot understand people blindly following their leader.

16 thoughts on “The problems with “Common and Expected Uncertainty””

  1. Eli,
    In so many ways, you concisely describe the overall current reality of most organizations, whether business, not-for-profit, governmental, political, or even military systems. An almost paralyzing fear of making a mistake results in so many missed opportunities. Then they oscillate to the other side and take a foolish decision to “catch up.”

    On one hand, what you describe is an exciting opportunity, and yet it is somewhat discouraging. I can see that by removing or reducing this one belief system, which encodes a “fear of failure” almost at the DNA level, tremendous prosperity, and even innovation, could be unleashed pretty much worldwide.

    Yet at the same time, the inertia created by attempting to impose certainty upon uncertain decisions seems a bit overwhelming, even for the smallest of systems.

    I would assume that there is a greater frequency of success for the few that can see these opportunities and have the wherewithal to navigate them. Of course, for those that try, it might seem like an Indiana Jones movie while they step around the skeletons of those that previously attempted the same!

    I am looking forward to your thoughts on the leverage point(s) to address this phenomenon.


    1. Thank you, Michael. I agree that it is a big challenge.
      Question: what could be the damage of trying to face uncertainty?
      I would like, first of all, to address the concern: what might tight-control management lose if they formally admit, through their policies, that uncertainty exists? The ramifications would be considerable, but I doubt they are that overwhelming. Is it really as dangerous as the adventures of Indiana Jones?


      1. Eli,

        It will be interesting to see how this experiment goes in reality, but I see that management is very comfortable pretending that uncertainty can be managed as if it were certainty. This is where Dr. Jones comes into play. Just like almost every TOC change, convincing management to change a deep belief can be dangerous. I would expect massive pushback and a perception of career damage.

        Do you find that senior managers and up are willing to “rock the boat”? I find that very rare.

        It would be exciting to me to believe that enough managers are creative and open minded enough to challenge “managing uncertainty as if it is certain.”


        1. Michael, is forcing certainty on uncertainty a “deep belief”? The encouraging point is that individuals, including those managers, are aware in their own lives of the impact of uncertainty. So that behavior is the result of a particular situation in which those managers do their best to survive. If we can come up with a solution to what truly bothers them, if that solution can be tested without “rocking the boat”, and if all, or most, of the bothersome negative branches are eliminated, then we do have a good chance.

          Note that some of the TOC solutions have already dealt with uncertainty, especially CCPM, and managers are ready to consider them. They know that ignoring common and expected uncertainty does not work well. They just want to be certain there is a working solution without too-serious negative branches.

          I am developing a new process for top-management decision-making concerning the evaluation of product mix, pricing and capacity planning. The assessment of the impact of uncertainty is part of it. If that solution proves itself to be an answer to a known need, then I believe many doors will open. It does not threaten anyone with the question “how come you have used the wrong tools and repeatedly made all those mistakes”, because it simply offers a new way to look at a known problem. If we succeed in not condemning management, but instead understand their current position and then add a new insight – then we have a chance.


          1. Eli,

            I believe it is a profound question: is forcing certainty on uncertainty a deep belief of management? It is a question that needs to be explored.

            Of course, a deep belief can be (and often is) unknown to the believer … a deep belief almost defines a core, and possibly erroneous, assumption.

            I would just observe and recall my own experiences with management. Frankly, you may be one of the few voices in the desert addressing how to appropriately identify and manage uncertainty. From that alone, one must seriously consider that it is a natural, and possibly almost universal, deep belief, at least among managers (if not everyone), that uncertainty must be treated as if it is certain.

            This almost defines your mission. It’s not so complex to understand that managers (parents, politicians, you name it) don’t really know how to adequately identify uncertainty and, for sure, they have no method for handling uncertainty when it is found.

            To me, that’s good enough … yes, forcing certainty on uncertainty is a deep belief of management (and more).

            Help me see it another way if you see fit.


  2. Hi Eli,

    I have a couple of questions regarding the last paragraph. Measurements drive people’s behavior, so correct measurements can work in our favor, but you say that we should stop measuring people with numbers that are exposed to both uncertainty and dependencies. I’m not sure whether you are suggesting we stop measuring people with numbers altogether or proposing to change the measurements we are using.

    In the first case, I think that in small companies it would be easy to differentiate the good employees from the bad ones, but in big companies is it really possible to manage without measurements of people’s performance? If it’s the second case, what measurements would you propose for evaluating people’s performance, for example a salesperson’s or a plant manager’s?

    Congratulations on your new blog!

    Alejandro Céspedes


    1. Alejandro, I do think that quantitative measurements should NOT be used to judge/evaluate people.
      Management needs to evaluate people for just two major reasons:

      1. To identify rotten apples that cause damage when they are part of the organization.
      2. To identify rising stars – people with special skills whom the organization has to prepare for important future roles.
      Neither category can be evaluated by numbers; both can be identified first by signals and then by analysis.

      All the rest are good employees who should be kept in the organization without being bothered all the time by measurements that, at the very best, show a partial and stochastic picture of their true abilities and motivation.

      More on that in later posts.


  3. Hi Eli,
    Your assessment that two numbers (average + standard deviation) describe a forecast better leaves me somewhat perplexed. It is obviously better than relying on the average alone.
    However, you can have two distributions with the same average and the same standard deviation that are profoundly different when you look at the numbers. The missing factor is time: knowing how the numbers are spread over time is critical for deciding.
    Using Statistical Process Control can help.
    What are your thoughts about that?
    Congratulations on the blog. When are you coming back to Paris?
    Best regards. Joël-Henry GROSSARD


  4. Hi Joel-Henry, can you give me an example where the details of the distribution function would truly be significant for a practical decision?

    Note, most of the time we do not even have a good estimation of the average (expected value), certainly not of the standard deviation. Too many times in reality we look at uncertain behavior that is constantly changing, and new events change the basic parameters of both the expected value and the standard deviation. In those cases we do not know the distribution function itself, as it is different from all the known functions. When you look at the behavior of sales, you see constant change, because new competitors join and some leave, changes in the economy affect the shape of the demand, and so, certainly, does the introduction of new products. So what is the value of a time series if looking even one year back is IRRELEVANT?

    I try to shy away from using the average, and certainly the standard deviation, because both are not intuitive, and too many times intuition is all we have. I like to use a range, like a confidence interval but without specifying the confidence level, because we don’t know it. So I call it “the reasonable range”, where “reasonable” is the subjective judgment of a person – or the average of the ranges given by a group of people.

    I believe we live in “approximate” situations where a substantial part of the uncertainty lies in parameters we don’t know. The good news is – IT MAKES DEALING WITH DECISIONS SIMPLER – because you know the boundaries of what you don’t know and you have to live with it, knowing that some of the decisions will end in negative results. As long as NO SINGLE NEGATIVE RESULT IS GOING TO KILL YOU, and most of the decisions end in positive results – this is what we can do.
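    To make the “reasonable range” concrete, a minimal sketch (all figures are hypothetical): evaluate a stocking decision at both ends of the range and check that the pessimistic end is survivable.

    ```python
    # A minimal sketch of deciding with a "reasonable range" instead of a single
    # forecast number; all figures are hypothetical.
    def evaluate_order(order_qty: int, unit_cost: float, unit_price: float,
                       reasonable_range: tuple[int, int]) -> dict:
        """Profit at the low and high ends of the reasonable demand range."""
        low_demand, high_demand = reasonable_range

        def profit(demand: int) -> float:
            sold = min(order_qty, demand)   # cannot sell more than is available
            return sold * unit_price - order_qty * unit_cost

        return {"worst_case": profit(low_demand), "best_case": profit(high_demand)}

    # Order 10,000 units at $6 each, sell at $10; reasonable demand range 6,000..14,000.
    print(evaluate_order(10_000, 6.0, 10.0, (6_000, 14_000)))
    # {'worst_case': 0.0, 'best_case': 40000.0} -- the worst case does not kill us
    ```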


  5. Dear Eli,
    You make an important observation:
    “I think that there is no way to manage an organization without forecasting!
    I also think that Dynamic-Buffer-Management is actually a forecast looking at the combination of sales and inventory and predicts whether the stock buffer is about right.”
    There is a common view that “pull”-based techniques do not need forecasting (as they are based on replacing what is consumed), while “push”-based techniques (like typical MRP) are based on forecasting. No business manager will agree that forecasting is not required, hence “pull”-based techniques are usually not given due consideration. As you rightly mention, Dynamic-Buffer-Management is based on a forecast. I would like to add:
    – but only a forecast over the short horizon of the replenishment lead time. Typical MRP, in contrast, aims at forecasting BEYOND the replenishment lead-time horizon, and the period between now (today) and the replenishment time is “frozen”. So can we say that it is “blind” to the short horizon? I would therefore claim that the major difference between push and pull is not that the former is based on a forecast and the latter is not, but rather the forecast horizon relative to the replenishment time.

    In your book “Supply Chain at Warp Speed” you make the point about why looking at the short horizon of the replenishment time is important: because in the short horizon “the past can be a reasonable predictor of the future”, and hence a forecast based on recent past consumption can be reasonably accurate for the immediate future.


  6. Dr. W. Edwards Deming also thought that Statistical Thinking was an essential part of effective Management, which is why he emphasized it in the four-day course he taught for over three decades. [Dr. Wheeler’s slim and practical book, Understanding Variation, is an excellent summary of Dr. Deming’s teachings on this point.] Not understanding variation is a deep ignorance, not a deep belief. [Some would call it innumeracy… ] This is also a fundamental part of Six Sigma.
    But shouldn’t Theory of Constraints presentations and case studies also show a basic grasp of fundamental statistics? Why do I almost always see one number (a mean or median) given and rarely any measure of variation? [I’d take anything — a range, a standard deviation, a mean squared error… ] And where is the data shown over TIME — as on SPC control charts — so we can see IF the system actually changed, by how much, and the MAGNITUDE of the noise (and if it was reduced)? [And see if the change actually was greater than the noise, or not.] The term ‘variation’ is fairly common in ToC discussions, but proper treatment of it is not…


  7. Richard, I agree with Dr. Deming, and I started my blog with the aim of dealing more, and in more depth, with the handling of uncertainty in general and with common and expected uncertainty in particular.
    I read Wheeler’s book. While I liked it, what I felt was missing is the starting point of having to make a decision. Starting with the variation is very academic, because this is what statisticians know, but it is not always what the decision maker needs. The level of accuracy and detail we need from the variation depends on the specific decision.
    Moreover, the DAMAGE from certain possible results, like the impact on the bottom line in the short and longer time frames, is the real concern for the decision maker, not the variation of the main variable that somehow affects the end result (damage or gain).

    In the next posts I will clarify what TOC has offered for handling common and expected uncertainty, which also highlights what more can be done. In too many cases regular statistics, at least as taught in MBA programs, is too inflexible to handle situations where the probabilities are unknown. The principles remain, of course, but the ready-made tools might need to be different. Deming and Juran did not force the regular statistical models onto quality; instead, something more practical and usable was suggested and implemented. I would very much like to expand the tools to all the critical decision areas of organizations.


  8. Statistical Process Control, as developed by Shewhart and popularized by Deming, makes NO ASSUMPTIONS about your data, or the distribution it comes from. You start with the process you have, see what the actual performance is (location and spread), and see if it is changing — or not. This current, and ACTUAL, reality is a solid foundation for decisions, yes?
    I’m also curious why you use the phrase ‘common and expected uncertainty’. How do you distinguish this from uncommon or unexpected uncertainty?


    1. Richard, I fully agree that SPC is a great tool. It can use statistical data because the samples are not too small and the data is good enough, at least in the majority of cases.
      My categories of decisions under uncertainty are:
      1. Every alternative of the decision could end in a “catastrophe” – this happens only in emergency situations.
      2. One alternative of the decision could cause a “catastrophe”, but with a very small probability.
      3. Decisions in a relatively unfamiliar area of uncertainty.
      4. Decisions under common and expected uncertainty – we have intuition about the possible outcomes and no result would be truly catastrophic.
      So, “common” means the decision repeats itself, but many times the data is not as good as it is for quality control. Suppose we have sold 10 pieces – was that the demand? Maybe we had 10 in stock, sold 10, and do not know whether there was more demand. Or maybe the demand was for only 7, but we pressed a buyer to buy more for future consumption, which means he won’t come back soon.
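      A tiny sketch of that data problem (the numbers are made up): averaging raw sales understates demand whenever sales were capped by the stock on hand.

      ```python
      # A tiny sketch of censored demand data; all numbers are made up.
      sales_and_stock = [          # (units sold, units on hand) per period
          (7, 10), (10, 10), (10, 10), (6, 10), (9, 10), (10, 10),
      ]

      avg_sales = sum(sold for sold, _ in sales_and_stock) / len(sales_and_stock)
      capped = sum(1 for sold, stock in sales_and_stock if sold == stock)

      print(f"average recorded sales: {avg_sales:.1f} units")  # 8.7
      print(f"periods where demand may have been higher: {capped} of {len(sales_and_stock)}")  # 3 of 6
      ```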


    1. Thank you, Pablo, this is an excellent question. In my coming presentation at TOCICO 2016 I’m going to define uncertainty as all the missing information we have to tolerate when making a decision. So it contains the variability, the fluctuations of the relevant variables, but also whatever we don’t know. For instance, how much we are going to sell next week is variability. Whether the government, given the spread of power within it, will decide to change the tax rules next month is more a matter of “we don’t know” or “it is too complex to predict the result.”
      The reason I like to define uncertainty in this way is that I suggest focusing on decisions for which uncertainty is a big problem needing a reasonable solution. Elsewhere I have seen uncertainty defined as the cases where you do not know the probabilities, while risk is when there is a good idea of the probabilities. My observation is that for the vast majority of decisions there is very little knowledge of the probabilities involved.

