Joel-Henry Grossard has made an important comment that I'd like to share, along with my view of it. He wrote:
“However you can have two distributions which have the same average and the same standard deviation, but which are profoundly different when you look at the numbers. The missing factor is time: to know how the numbers are spread over time is critical to decide. Using Statistical Process Control can help.”
Do we know how the variables we look at in our practical reality behave with time?
Let’s consider a process on the shop floor where we are able to record a lot of data and see how the results spread over time. What we get is a time series of results, but that graph represents only one possible spread of the results; it is not a replica of the real distribution function. Usually we don’t really know the full characteristics of the distribution function. For instance, if an operator gets tired after one hour, and if this tiredness shows in the quality of the output, then we should see a certain deviation. But unless we suspect this could happen, there is little chance that such a cause of deviations would be detected.
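Joel-Henry's point can be shown with a minimal sketch (the numbers are made up for illustration): two series contain exactly the same values, so their mean and standard deviation are identical, yet only one of them drifts over time. A simple SPC-style run check, in the spirit of the Western Electric rules, tells them apart where the summary statistics cannot.

```python
import statistics

# Two "measurement" series built from the SAME values, hence identical
# mean and standard deviation -- only the ordering over time differs.
values = [9.6, 10.4, 9.8, 10.2, 9.9, 10.1, 9.7, 10.3, 9.5, 10.5]
stable = values[:]            # fluctuates back and forth around the mean
trending = sorted(values)     # drifts steadily upward over time

mean = statistics.mean(values)

def longest_run_one_side(series, center):
    """Length of the longest run of consecutive points on one side of
    the center line -- a classic SPC-style signal of a process shift."""
    best = run = 0
    prev_side = None
    for x in series:
        side = x > center
        run = run + 1 if side == prev_side else 1
        prev_side = side
        best = max(best, run)
    return best

print(statistics.mean(stable) == statistics.mean(trending))    # True
print(statistics.stdev(stable) == statistics.stdev(trending))  # True
print(longest_run_one_side(stable, mean))     # 1: points alternate, in control
print(longest_run_one_side(trending, mean))   # 5: long run, drift over time
```

The averages and standard deviations are indistinguishable; only looking at how the numbers are spread over time reveals that one process is drifting.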
When the process is fully under our control we are able to confirm that the basic parameters of the process haven’t gone through a significant change. Yet even in such a convenient case, my understanding is that Prof. Deming did not apply the full power of Statistics, and just went for standard heuristics to establish good-enough quality control.
Once we step out of what is fully under our influence we know even less about the behavior of the surrounding uncertainty. We don’t even know whether all the recorded results belong to the same distribution function.
Suppose a new Harry Potter book suddenly appears. The last book in the series appeared in 2007. What statistical model could predict the number of copies to be sold in the first week? We do have past results, and they are relevant to a certain degree, but due to the long intermission between the former series and the surprise new book, the original distribution function has changed, and we don’t know exactly what the change is. We don’t even know whether the demand will be up or down relative to the last book.
Does it mean we don’t know anything? Can it really be any number? We know some of the parameters that impact the demand for the next book. The reputation of the original series is still high. But many of the past readers are now older, and it is not clear whether their interest is still high. Thus, some intuitive estimation can be made regarding the reasonable minimum demand. We can also estimate how much more demand is reasonable, taking into account the demand for the last book. But what meaning do the previous results carry? The one conclusion from them is that the last book was not a single incident and the whole series was a big success. However, the detailed results and how they spread over time do not add much value to the prediction.
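This way of thinking can be sketched in a few lines of code. The numbers below are entirely made up, and the names (`reasonable_min`, `reasonable_max`) are hypothetical; the point is only the shape of the decision: express the intuition as a range and hedge both sides of it, rather than commit to one number.

```python
# A minimal sketch, with made-up numbers: an intuition-based forecast
# expressed as a reasonable range instead of a single number.
forecast = {
    "reasonable_min": 400_000,   # hypothetical: demand we are confident of
    "reasonable_max": 1_200_000, # hypothetical: demand that is still plausible
}

# A decision can then hedge both sides of the uncertainty:
# commit to covering the reasonable minimum, and keep the capacity
# to react quickly if demand climbs toward the reasonable maximum.
initial_print_run = forecast["reasonable_min"]
reserved_capacity = forecast["reasonable_max"] - forecast["reasonable_min"]

print(initial_print_run)   # 400000
print(reserved_capacity)   # 800000
```

Nothing here requires knowing the distribution function; it only requires an honest admission that the forecast is a range, not a number.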
I put a special emphasis on the word “reasonable”. First, we know that sometimes unreasonable things do happen. But, assuming we have intuition built on care and experience, most of what happens looks reasonable to us. Our intuition is shaped by many small events and by whether we judged them reasonable or unreasonable.
This intuition is a source of valuable, but partial, information to guide our decisions concerning common and expected uncertainty.
Partial information is usually all we have. It is not enough to prevent every damaging decision, but it is good enough to guide us so that overall we gain much more benefit than damage.
But when we knowingly ignore the partial information, we cause definite damage. This is what most organizational policies do: force certainty on uncertain situations, for instance by using one-number forecasts and turning them into sales objectives that serve as prime performance measurements.
I claim that forcing certainty on uncertainty is due to fear from being unjustly criticized. I’d like to deal with the causes and effects of the fear that performance measurements instill in people, and to highlight the nasty sides of the relationship between the organization and its employees. This is, hopefully, going to be my next post, subject to the inherent uncertainty.
4 thoughts on “The balance between Statistics and Intuition”
Forcing certainty on uncertain situations may also be a function of managers who equate raising negative branches with being “not on board with the program” or with sabotaging a good idea. More and more, our US culture is elevating narcissistic personalities into positions of leadership and power. After researching the narcissistic personality for several years, I can say (with certainty) that they are unrealistic in their unwillingness to consider risk, and they view objections as personal challenges. One reason why their companies tend to underperform in the long run.
Marjorie, I like your comment and fully agree with it. My question is only: how many of the CEOs of small, medium and even large companies are narcissists who push their organizations in risky directions? The vast majority of organizations I had a chance to observe refrain from any move that is even a little beyond the comfort zone. “Great leaders” do take risks that are too big and are blind to them, and this is an aspect I’d like to raise more, because many people look for the “leader” who would tell them what to do. Fortunately for us we do not have too many “great leaders” – or do we?
Another superb post about uncertainty Eli, thanks. “forcing certainty on uncertainty is due to fear from being unjustly criticized” – GOLD! Buffer management and range estimates should be applied to, dare I say it, even finance budgets…