Eli Goldratt spoke about the need to notice inconsistencies. He even claimed we need courage to do that. Once you notice an inconsistency you are urged to reconcile it, even when that means updating one or more of your basic assumptions.
I’m not sure ‘courage’ is the right word, because if we choose to ignore the inconsistency we expose ourselves to the threats that come from failing to understand our reality. What we truly need is to acknowledge that we don’t know, that we therefore make mistakes, and that we need to fix them in order never to repeat the same mistake.
When I learned programming I realized that the number of mistakes I made, bugs in software terminology, was enormous. I had to accept the fact that I’m not clever enough to do it right the first time. The good news was that every programmer generated many bugs, so the culture of software development was forced to recognize the existence of bugs and to expect every programmer to fix them as soon as possible. I assume this is still the common culture in software, but only there. The frightening realization was that I probably make many mistakes in my “other life” as well, but there is no computer to throw them in my face.
Failing to identify a threat is the mistake of ignoring the signals that such a threat is developing.
It is our responsibility to identify emerging threats as early as possible and to find the appropriate actions to neutralize them.
Some threats can be anticipated before they emerge into our reality and put us, or our organization, in jeopardy. For those we are able to put a “control mechanism” in place. Here is my definition of the term ‘control’:
A reactive mechanism to handle uncertainty by monitoring information that points to a threatening situation and taking corrective actions accordingly
Buffer management, alarm systems, auditing of financial transactions, periodic medical checks and performance measurements are all control mechanisms for a variety of potential threats that we anticipate, a priori, might emerge.
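To make the definition concrete, here is a minimal sketch, in Python, of buffer management viewed as such a control mechanism. The three-zone thresholds and the specific corrective actions are my illustrative assumptions, just one possible way of monitoring a signal and reacting to it, not a prescription.

```python
# A minimal sketch of buffer management as a control mechanism.
# Illustrative assumptions: a three-zone buffer (green/yellow/red)
# and simple corrective actions attached to each zone.

def buffer_zone(consumed: float, buffer_size: float) -> str:
    """Classify how much of the protective buffer has been consumed."""
    penetration = consumed / buffer_size
    if penetration < 1 / 3:
        return "green"   # on track, no action needed
    if penetration < 2 / 3:
        return "yellow"  # watch closely, prepare a recovery plan
    return "red"         # the threat is materializing, act now


def control_check(order: str, consumed: float, buffer_size: float) -> None:
    """Monitor the signal and trigger a corrective action accordingly."""
    zone = buffer_zone(consumed, buffer_size)
    if zone == "red":
        print(f"{order}: buffer in RED - expedite immediately")
    elif zone == "yellow":
        print(f"{order}: buffer in YELLOW - prepare recovery options")
    else:
        print(f"{order}: buffer in GREEN - no action needed")


# Example: 8 of 10 buffer days consumed -> red zone, corrective action.
control_check("Deal X", consumed=8.0, buffer_size=10.0)
```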
Sometimes a new threat emerges with enough visibility that identifying it is almost immediate; for instance, a proposed change in the tax laws that might compromise our profits. Failing to note such a publicized threat signals an even bigger threat: if such a clear threat is ignored, how many other emerging threats are being ignored?
What can we do to identify the emergence of a new threat, one we have not given any thought to?
An emerging threat generates signals that are inconsistent with our expectations. We have a name for the strange feeling that something unexpected has just happened: we call that feeling a SURPRISE.
Surprises can be good or bad, but the common effect is that every surprise points to a flawed paradigm that our brain uses automatically. A paradigm is actually a small cause-and-effect branch that we take as true automatically. For example:
When I clearly tell my people what to do, then they do what needs to be done
Does the above paradigm ALWAYS work? It does happen that my people do not do what needs to be done. This is a signal that there is a flaw in the logic, though at this stage it is not clear what the flaw is.
The kernel of the signal is the gap between expectations and outcomes:
My expectation is: my people do exactly what I’ve asked them to do!
Now I face the following incident: I asked Dave to put in writing the full facts regarding Deal X. What I got is one page with just the most obvious facts; many of the important happenings are missing.
I’m angry with Dave. This is not what I expected from him. What should I do?
We usually deal with the gap between what we want and what we get. Organizations often check the gap between the plan and the execution. I point to the gap between realistic expectations and actual outcomes as a key opportunity to update a flawed paradigm.
Let’s look again at the simple case of Dave’s too-short report on Deal X. The gap signals a flawed paradigm, but what is it? I did explain what to do, and I’m still the boss.
Three initial possible explanations (there should be more):
- Dave did not care about what I asked him to do
- Dave was too busy with many other things he had to do
- Dave did not understand the required level of details
To understand what truly went wrong in this case, we need to validate the facts and then dive into the paradigms that both Dave and I operate under.
Without carrying out the full analysis here, let’s imagine the following realization is revealed:
When I do not clearly answer the “what for?” question behind what I ask to be done, it is possible that the outcome will not be what I think it should be.
If this is a newly learned lesson, can you see the scope of the positive consequences? The possibility that their instructions will not be followed is a threat to every manager. If managers internalize the need to explain why it is necessary to do this-and-that, then a big part of that threat will not materialize. However, a similar gap might still emerge from other flawed paradigms, so we should stay open to spotting surprises and continue the learning process.