The above “cloud” describes a generic conflict of living under the limitations of our knowledge. Goldratt coined the epigram “Never Say I Know” to guide us to always look for signals that challenge our current understanding, and by this to prepare the next leap in performance. The problem is that recognizing the limitations of our knowledge might cause paralysis: trying not to do anything we don’t have to, and certainly not taking any decisions that lie outside our “comfort zone”. Taking actions only within the comfort zone is a widespread compromise of the conflict.
Being fortunate to work closely with Dr. Goldratt included the not-too-nice, but extremely inspiring, experience of answering him “I don’t know” to a pointed question. He got angry, pushing me to verbalize what partial information I did know and how it leads to the best answer we can have. The only reason, according to Goldratt, for not knowing anything is not caring. When you do care, you notice certain effects that logically lead to certain conclusions, even though far from certain. This experience led me to recognize the following statement:
Never Say I Don’t Know Anything
This recognition is an even more pointed paradox than the above conflict. How can we both know and not know?
My resolution of the paradox, focused on decision making, is to recognize the wider impact of uncertainty by distinguishing two parts of our knowledge: the part we believe we “reasonably know” and the part we know we don’t know. A new element we have to introduce into both conflicts is the term “reasonable”, describing what we think we know with a certain level of certainty, while still recognizing that in some infrequent cases we are wrong.
Assessing our reasonable knowledge is much more concrete when we predict what is reasonably NOT true. Let’s look at some examples.
- Two teams in sport. Team A has, so far, 10 wins and no defeats. Team B has, so far, 10 defeats and no wins. Who is going to win?
- We could assume that the possible result of B winning is not reasonable.
- When a draw is a possibility (like in soccer), then a draw might be regarded as reasonable.
- We can also assess what results are not reasonable. For instance, if the example is about basketball, then a result of 100 to 0 is unreasonable. Someone with better intuition might even claim a win by 60 points or more is unreasonable. This assessment of what outcomes are not reasonable leaves quite a lot of latitude for what is reasonable – and this leaves us with the reasonable boundaries of what we don’t know!
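The idea of bounding what is unreasonable can be sketched numerically. As a rough illustration (my own hypothetical data and rule, not from the post), treat past basketball point margins as a sample and flag any predicted margin far outside the observed spread as unreasonable:

```python
# A crude, hypothetical sketch of "reasonable boundaries": use the spread
# of past point margins to decide which future margins are unreasonable.
from statistics import mean, stdev

def reasonable_range(samples, k=3.0):
    """Return (low, high) bounds; outcomes outside them are deemed unreasonable."""
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

past_margins = [8, 12, -5, 20, 15, 3, 10, 18, 7, 14]  # made-up historical data
low, high = reasonable_range(past_margins)
print(low <= 60 <= high)   # False: a 60-point win falls outside the bounds
print(low <= 100 <= high)  # False: a 100-point margin is even less reasonable
```

Note the asymmetry the post describes: the rule does not say which margin will occur, only which margins lie beyond the reasonable boundaries. Everything inside the bounds remains the part we know we don’t know.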
- Predicting the rate of growth of the economy.
- All the current knowledge, based on the past and on understanding the markets, would lead us to outline what rates are unreasonably high and also what rates are unreasonably low. This defines the range of reasonable results, where we don’t know which one would occur.
- Nominating a new C-level executive.
- We like to nominate someone who would not fail! This means that the criterion for the choice is that a failure of such a nomination is considered unreasonable. It could be that we have rejected another candidate whom we reasonably assessed could achieve even more, but who also might fail, causing serious damage.
Even with this recognition of what is reasonably not true, we still acknowledge the fact that in some relatively rare cases our reasonable knowledge is flawed.
The boundaries of what we-know-we-don’t-know include all the cases that we assess could reasonably happen – but we don’t know which one. These are also the boundaries of the uncertainty as viewed from our perspective. I define uncertainty as everything I don’t know. The decision-making process has to consider seriously the possibility of every potential outcome that lies within the boundaries of what we don’t know and to consider the range of the potential benefit and damage of all those outcomes.
There is a perception that TOC is against sales forecasting, claiming there is no need to forecast sales at all. I challenge this perception.
The real problem with forecasts is a basic misunderstanding of what information a forecast should convey. We absolutely need sales forecasts to tell us what level of sales is unreasonable, so we can prepare for the range of possible reasonable demand.
Suppose the inventory target level, according to the TOC replenishment solution, for product A is 100 units. Currently the on-hand stock is 30, and two orders are on the way – one for 40 and another for 30.
Buffer management tells you that the on-hand stock is in the RED zone. This means we assume it is reasonable that the demand in the next day or two might exceed 30 units; thus we have to expedite the next order. However, the possibility that tomorrow the demand would exceed 140 units is considered unreasonable! This is the forecast we use in TOC – predicting that within the regular response time the demand will not exceed the target level!
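The zone logic above can be sketched in a few lines. This is a minimal illustration of the common TOC convention of splitting the target level into thirds; the function name and the thresholds are my assumptions for the sketch, not an official implementation:

```python
# Illustrative sketch of TOC buffer-management zones (hypothetical helper):
# the target level is split into thirds, and the on-hand stock determines
# the color zone that drives expediting decisions.

def buffer_zone(on_hand: int, target: int) -> str:
    """Return the buffer-management color for the current on-hand stock."""
    if on_hand <= 0:
        return "BLACK"   # stock-out: the buffer has been fully consumed
    ratio = on_hand / target
    if ratio <= 1 / 3:
        return "RED"     # expedite the next replenishment order
    if ratio <= 2 / 3:
        return "YELLOW"  # watch, no action yet
    return "GREEN"       # comfortably covered

# The example from the post: target 100, on-hand 30, orders of 40 and 30 in transit.
target, on_hand, in_transit = 100, 30, [40, 30]
print(buffer_zone(on_hand, target))         # RED -> expedite
# The total pipeline (on-hand + open orders) does not exceed the target level:
print(on_hand + sum(in_transit) <= target)  # True
```

The point of the sketch is the asymmetry of the forecast: RED says that demand above 30 units is reasonable in the short term, while the target level of 100 encodes the claim that larger demand within the replenishment time is unreasonable.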
Suppose you hear a rumor that a major competitor is going to close its business in three or four days. It is a “rumor”, not a hard fact, but how many important hard facts with potentially big impact do you actually have when you need to make a critical decision? Suppose you think this rumor is “reasonable”, and if it materialized then the demand for your products would go up.
Would you take actions based on a forecast, which is based on a rumor?
I know I would, though perhaps very carefully, so as not to cause too much damage by overproducing, since the demand might not go up at all if the rumor is false.
This is the essence of living in uncertainty: partially knowing and partially not knowing. We need to make decisions that would never hurt us too seriously and that most of the time will bring huge benefits.
9 thoughts on ““Never Say I Know” and the Limitations of our (Reasonable) Knowledge”
I always look forward to your insights and views.
In general, nevertheless, I am always confused when the (C) in the cloud, a necessary condition, is formulated as a negative, or when a negation (“no”) is included in the wording (like in this case).
Having “no candy” has no impact on making fire. Get my point?
My suggestion: let’s try to avoid negatives in the necessary conditions; here maybe: limit the global impact of our individual actions (mistakes).
Mark, how should I verbalize C? B and C are necessary conditions. Making sure a certain negative does not happen is a legitimate condition. Verbalizing C as “Every action on your part is appreciated in the future” conveys the same message, but I doubt whether it is clear in the same way.
Mark, I’d like to add that “Limit global impact of our individual actions (mistakes)” is not clear. You do want to have a HUGE positive impact; you do not want to have a HUGE negative impact. Why not include negatives in the requirements (B & C)? I agree that “no candy” has no logical bearing on making fire, but is that the same for this cloud?
My reaction is a general one about using negations, especially in the C of a cloud. My plea is to try to formulate the (C) in positive terms (without negations), as we do in most (B)’s of clouds.
I could also say it like this: let’s not word it with what we do not want, but with what we do want. Referring to your cloud (“ensure no big damage…”): what exactly do we want now?
Mark, what I want is to gain recognition for my courageous RIGHT decisions, which brought a lot of value to the organization. However, I want this recognition NOT to be spoiled by “mistakes”. These are my B (what I want) and C (what I want to prevent), and I think my verbalization clearly conveys what I need in both B and C.
From my mathematical background, I really don’t see any difference between conditions stated in a positive way, leading you to understand what is missing, and statements that use negatives. I prefer to use whichever is clearer to the reader.
It is always good to listen to you! Here is a little joke: A very successful guy was asked why he is so successful, and he said, “Because I make the right decisions.” So naturally the person asked him how he developed this ability to make the right decisions, and the guy said, “From experience.” OK, so to complete the story, the guy asked him where he got this experience, and he said, “From making the wrong decisions.” So the unavoidable conclusion might be that we can only learn from doing things wrong! – Ha ha ha
Jokes aside: Eli, I came to the understanding that managing risk (the impact of the unknown) successfully means turning uncertainty (knowing nothing) into a probability (knowing something). There is good evidence to suggest that we cannot turn uncertainty (not knowing) into certainty (knowing). The moment you have turned it into a probability, you can manage the probability or protect the global from the local source of uncertainty. Example: reliability of supply is hitting me very badly (it is a source of uncertainty – I don’t know), so I take some effort to convert it into some understanding of the variability of supply (the probabilities – now I know something); then I can buffer against it instead of trying to accurately predict supply.
There are two mindsets:
1. The first one is called Buffer against potential Failure and
2. The second one is Buffer for Success
When you buffer for success, you turn uncertainty into probability and you protect the system given that understanding. I will put in a big enough buffer (it will cost me some capital) to make sure my suppliers will not become temporary bottlenecks. We end up having one constraint as the main driver of our success.
When you buffer against potential failure, you will put in as little capital as possible to make sure that when it fails you don’t lose too much. We end up having everything as a potential constraint in life!
Just thinking out loud!
All the best my friend.
Henning du Preez
Hi Henning, I fully agree with you and I love very much the description of turning knowing nothing into probability.
The serious part of the joke is that we can learn from experiencing failures and very little from successes. Learning means gaining higher probability for success.
An insight is to learn from the experience (failures) of others.
Hi Eli, I relate to the uncertainty and to giving the “I don’t know” answer when there is just too much of it. In my work, I often get a “hint” that some client has some requirements, and I am pressured to give an estimate. Naturally, any explanation of why I cannot answer such a simple question is met with resistance. How can this kind of resistance be overcome? Not everyone is as reasonable as yourself or Eliyahu Goldratt; not everyone can accept a partial “I (do/don’t) know” as an answer.
In many cases, I find myself in the position of the “expert” in this comedy video – https://www.youtube.com/watch?v=BKorP55Aqvg
I am really interested in your thoughts on this matter.
Evgeny, thank you very much for the link. It truly demonstrates the conflict you are in. It is a generic conflict for any consultant, certainly every TOC consultant. On one hand, we want to give value to our client, and in order to do so we have to be very sincere and direct, including the possibility of telling the client he is wrong! On the other hand, we need to get the job with the client, and for that we might need to radiate different views from what we truly think.
Behind this conflict there is an even more generic conflict between building your future (gaining a very reliable and trustworthy image) and preserving what you have now.
One possibility to evaporate the conflict is to radiate your real views, but be careful with the style of your answer/explanation. If you are not able to estimate without knowing more, then give an estimate based on certain assumptions, BUT RADIATE THOSE ASSUMPTIONS CLEARLY. For instance: “If we assume the current economic situation stays as it is today, then I estimate the potential between X and Y; however, if there is a change then the difference might be much larger.” Now, if the client does not accept such a careful answer, then the chance of giving him value is very low.
Eventually this conflict can be resolved only by personal considerations.