Have a quick look at the small cause and effect branch. Is the logic sound?
Can it be that in reality effects 1-3 are valid, but effect 4 is not?
We can come up with various explanations for insufficiency in the above logic. For instance, if the clients are not free to make their own decisions, as in totalitarian countries, then it could be that the regime prefers something else. Another explanation might be that the brand name of Product P1 is much less known.
The generic point is: the vast majority of practical cause-and-effect connections are not 100% valid.
In other words, most good logical branches are valid only statistically, because they might be impacted by additional uncertain effects that distort the main cause-and-effect. Actually, the uncertainty represents insufficiencies we are not aware of, or that we know about but cannot confirm whether they exist in our reality. For all practical purposes there is no difference between uncertainty and information we are unable to get.
This recognition has ramifications. Suppose we have a series of logical arrows:
eff1 –> eff2 –> eff3 –> eff4 –> eff5
If every arrow is 90% valid (true in 90% of the cases), then the long arrow from eff1 to eff5 – four arrows in a row – is only about 66% valid (0.9^4 ≈ 0.656).
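The compounding can be sketched in a few lines of Python – a toy illustration, assuming each arrow holds or fails independently of the others:

```python
# Toy model: if each causal arrow holds with probability p, a chain of
# n independent arrows holds with probability p ** n.
def chain_validity(p: float, n_arrows: int) -> float:
    return p ** n_arrows

# Four 90%-valid arrows from eff1 to eff5:
print(round(chain_validity(0.9, 4), 3))  # 0.656 -- about two thirds
```

Independence is itself an assumption; correlated insufficiencies can make the real figure better or worse.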
The point is that while we should use cause-and-effect, because it is much better than ignoring it, we can never be sure we know! The real negative branch of using the TP to outline various potential impacts is that frustrated people could blame the TP and its logic and refrain from using it in the future. This false logic says: if ([I behave according to the TP branch] -> [Sometimes I do not get the expected effect]) then [I stop using the TP].
The way to deal with this serious and damaging negative branch is to internalize the role of uncertainty in our lives, and the idea that partial information is still better than no information – provided we take its limitations seriously. We can never be sure that whatever we do will bring benefits. However, when we use good logic, then most of the time we’ll gain far more benefit than damage.
It’d be even better to consider the possibility of something going wrong at every step we take. This would guide us to check the results and re-check the logic whenever the result differs from what we expected. It is always possible that there is a flaw in our logic, and in such a case we had better fix the flawed part and gain a better logical understanding of the cause-and-effect. When we see no flaw in our logic, there is still room for some crazy insufficiency to mess up our lives – that is the price we pay for living with uncertainty.
10 thoughts on “The Thinking Processes (TP) and uncertainty”
I think you are raising here a general problem in systems thinking and not only an issue regarding the TOC Thinking Processes.
The validity of an effect always depends on what we know – in other words, on what we can take as a true fact of life. But we can never say “I know”. For a couple of years now everyone should be aware of the ‘Black Swan’ (“An event or occurrence that deviates beyond what is normally expected of a situation and that would be extremely difficult to predict. This term was popularized by Nassim Nicholas Taleb, a finance professor and former Wall Street trader.”)
From knowledge management we know the classification of the ‘State of our Knowledge’ vs. the ‘State of Available Information’:
Known Known,
Known Unknown,
Unknown Known, and
Unknown Unknown.
So, there are a number of factors which can have an impact on the quality of a tree, or of a model in other methodologies. I think it is important to take as many of these factors as possible into account while applying the Thinking Processes.
If it really happens that a tree fails – meaning reality does not match the outcome of a thinking tool – your important work on “Learning from One Event” will help to make the necessary corrections.
It might be that “The real negative branch of using the TP to outline various potential impacts is that frustrated people could blame the TP and its logic and refrain from using it in the future.” But they would face exactly the same problems with or without another approach. We can try to limit uncertainties, but we are not able to avoid them.
Thanks Jürgen, I fully agree with your comment. It is certainly a general view of the relationship between logic and the sometimes unexpected results in reality. I wanted to make TOC TP users aware that their logic might be valid and still it is possible that the actual result will be different.
There is a second thought I would like to share with you. This is the answer to your question:
Can it be that in reality effects 1-3 are valid, but effect 4 is not? My answer: YES !
In effect 4 you have stated: “Product P1 is sold significantly better than its competitors and this trend continues.” We can assume that based on effects 1-3 the Product P1 will be sold significantly better for a period of time, but there is no effect which supports the argument “…and this trend continues”. According to our CLR (Categories of Legitimate Reservations) I tend to say we are facing here a Cause Insufficiency.
Am I right?
A confusing element in the TP is the impact of time: how long it takes from the moment the effects at the bottom of the tree become valid until we see the effect at the top. This aspect is worth another post. In the example I’ve used, I claim that as long as the three effects at the bottom are valid, more clients will prefer P1 – which is exactly a trend.
The trend could be stopped when either ALL clients buy P1 or, more realistically, when the competitors make a change, for instance reduce price (then effect 2 becomes invalid) or improve their products.
I agree. There are, in the end, only two TP tools that take time into consideration. First, the CRT (Current Reality Tree), where time t = 0: a snapshot describing the current situation. Second, the transition from the PRT (Prerequisite Tree) to a project plan (via a Transition Tree or otherwise). But we have no information about the dynamics of the system itself. Furthermore, we often have no idea what impact delaying one initiative will have while implementing a flow of initiatives.
That’s the reason why I am studying Systems Thinking and System Dynamics (http://www.valuebasedmanagement.net/methods_forrester_system_dynamics.html) on my own. My personal objective is to take results from those methodologies into account during the development of an NBR (Negative Branch Reservation), FRT (Future Reality Tree) and/or PRT, to improve the quality of a full TP analysis.
But also here the question about validity comes back, because as already stated by George Box „Essentially, all models are wrong, but some are useful“.
In other words, I apply the TP and as an add-on other methodologies to improve the quality of my trees and to reduce given uncertainty.
Great post Eli. I really liked the way you showed how uncertainty at each level of the tree compounds into lower overall confidence at the top. It’s brilliant how you show that TPs are subject to uncertainty, and how that fact makes it important to adjust the (statistical) confidence of the “predictions” they make (especially their reliability).
As usual, Eli, your thinking indicated deeper reflection than merely the superficial and obvious statements of cause and effect. For more than 15 years, in my Logical Thinking Process courses, I have been emphasizing several conclusions I’ve drawn over my 20+ years of using and refining the TP.
1. The TP is NOT a quantitative tool. It is strictly qualitative. It was designed to express, in a logical way, the intuition of the tree-builder.
2. Any tree-builder’s intuition is a function of verifiable fact, experience, documentary evidence, and his or her ability to integrate all these in their own mind.
3. There is no such thing as perfect information. So, any logic tree always has a risk of inaccuracy in the content information. The conclusions drawn from the tree are likewise subject to error.
4. A CRT or FRT is also further subject to compromise by the skill of the tree-builder, and his or her understanding of the Categories of Legitimate Reservation and ability to apply them effectively.
5. The three most common errors in any cause-effect tree (CRT or FRT) are, in order of criticality: a) Entity Existence (i.e., inaccurate, unsubstantiated, or false statements); b) Clarity on the causal arrows (“long arrows”; excessive leaps of logic); and c) Cause Insufficiency. (This last comes with a caveat: there is such a thing as too many contributing causes in a logic tree).
The TP is not a perfect tool. But it’s way ahead of whatever’s in second place, especially when it comes to analyzing complicated qualitative situations—i.e., ones in which measurable, reliable data are not available. Which is most of them. As Deming once said, “The answers to the most important questions are often unknown and unknowable.” I would modify that slightly to say “unquantified and unmeasurable”. Data frequently run out of utility in decision making.
Because we can never be completely certain of an outcome of a particular causal chain, the best that we can expect from a TP analysis is that it improves our confidence or our odds. If a particular outcome fails to materialize, there could be several reasons, only one of which is uncertainty. In Eli’s example, the one that popped out at me was the absence from the contributing causes of a crucial (and perhaps likely) one: The customer actually NEEDS the product. This insufficiency raises another conclusion: the facts (Entity Existence) can all be known (i.e., NO uncertainty), but the failure to include all truly dependent contributing causes is fatal to the conclusion.
One other comment about previous comments: There are NO TP tools that take time into consideration, least of all the CRT. All that ANY of the TP trees depict is sequence. I can see no way to attach any files to these posts, but if I could, I would include, as evidence, a CRT that took 14 years to unfold. But the last 10 layers unfolded in 73 seconds. Yet only when time is included as part of the statement in an entity is it even considered in a CRT. The same is true of FRTs. PRTs are even less related to time, because there’s no sufficiency in them.
Bill, like so many times in the past, we wildly agree with each other, but we, many times, verbalize it somewhat differently. I certainly see huge value in the use of the TP. I can add one additional value to your list:
The TP is an effective way to explain to others a non-trivial line of thought.
All I wanted to express is the fact that, many times, we are unable to come up with 100% causality. In almost every realistic cause there is some insufficiency, and the main problem is that we cannot verify whether the missing conditions hold. In my terminology, not knowing whether a certain effect, required to achieve better sufficiency, is valid is part of the uncertainty we live with. The superior features we include in our products might not seem important to our clients, and we might not know it. The design of the competitive product might be more to the clients’ taste, and we might not know it. All these unknowns are, by my definition, part of the uncertainty.
As you say, the TP improves the odds, and this is A LOT, but we have to understand it is not 100% certain.
I try to develop ways to translate intuition into a range of numbers, and by that overcome the assumption that intuitions are all qualitative and cannot be quantified. Of course, expressing anything as a range, like “we expect this move would add anything between 10% and 150% to our bottom line”, requires some changes in the way we make decisions. Actually it might have an impact on the FRT, as we might develop a reasonably pessimistic FRT and a reasonably optimistic FRT. I know this is not easy, and even the pessimistic FRT might not materialize. I also don’t have the tool – it needs to be carefully developed. I simply think it is needed. Until then I agree that the TP is the best tool for qualitative analysis and it can guide us to a better understanding of our reality.
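As a minimal illustration of the range idea – the function name and the figures below are hypothetical, not a tool the TP defines – a pessimistic and an optimistic estimate can be carried through a projection instead of a single number:

```python
# Sketch: propagate a pessimistic/optimistic range through a projection
# instead of a single-point estimate. All names and figures are illustrative.
def project_bottom_line(current: float, low_pct: float, high_pct: float):
    """Return the (pessimistic, optimistic) bottom line after a move
    expected to add between low_pct and high_pct percent."""
    return (current * (1 + low_pct / 100), current * (1 + high_pct / 100))

pessimistic, optimistic = project_bottom_line(1_000_000, 10, 150)
print(pessimistic, optimistic)  # 1100000.0 2500000.0
```

The two endpoints would then seed the two FRTs: the pessimistic figure for the reasonably pessimistic tree, the optimistic one for the other.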
Even when E1, E2 and E3 are all missing, it doesn’t mean E4 does not happen.
Likewise, a chain eff1 -> eff2 -> eff3 -> eff4 -> eff5 in which every link holds with only 60% probability does not mean eff5 is not going to happen. For the same reason, a per-link probability of 99.9999999% does not mean that eff5 is going to happen.
As you taught me, Eli, quite a while ago, those are opportunities to learn. One of the first steps is to be open to seeing reality unfold in a way you didn’t expect, and to try to learn which of your assumptions predicted a different outcome.
For quite some time I used this to improve my predictions. Although it certainly helped me to ‘out-predict’ some others, it was, as could be expected, by no means a guarantee. However, it made me more sensitive to spotting potential early warnings – and to the leeway that ‘buffers’ bring: alternative plans or mechanisms to make things happen.
The Focusing Steps are more than helpful. A bit of shorthand: this assumes a Step 0, with Step 5 being to go back to Step 1 or even Step 0, and Step 2 being about deciding how to exploit the constraint (the stress is on the decision, not the exploitation), which might even trigger re-verbalizing the objective – again going back to Step 0.
I really appreciate that reality, though it seems to take our actions into account for the better and worse, it keeps unfolding as it pleases and not as it would please us.
Therefore, IMHO, assuming that the TP is a tool to guide us to a better understanding of reality is wrong. It is a tool that verbalizes our assumptions and predictions with but two aims: to feel more comfortable deciding to do (or not do) something, and to communicate that decision in a way people feel they understand. The clarity removes a lot of noise that usually leads to something that was definitely not intended, but it by no means guarantees anything.
I’ve been experimenting with combining the TP with concepts borrowed from Hypothesis-Driven-Development ( https://barryoreilly.com/2013/10/21/how-to-implement-hypothesis-driven-development/ ).
The basic idea is to express the Core Problem as a hypothesis, rather than a fact.
And then you try to frame your injections as tests of the hypothesis:

We believe <this injection>
Will result in <this outcome>
We will have confidence to proceed when <we see this measurable signal>
The injections should be small enough so that a test can be performed quickly, but large enough to yield useful information.
Based on the results, you adjust your trees and proceed.
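A minimal sketch of how such a hypothesis record might look in code – the class, field names, and example values are all illustrative assumptions, not part of the TP or of Hypothesis-Driven Development:

```python
# Sketch: record an injection as a testable hypothesis instead of a fact.
# Field names follow the "we believe / will result in / confident when"
# template; all values here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InjectionHypothesis:
    we_believe: str              # the injection -- the change we make
    will_result_in: str          # the predicted effect in the tree
    confident_when: str          # the measurable signal we wait for
    confirmed: Optional[bool] = None  # None until the test has run

h = InjectionHypothesis(
    we_believe="offering a 30-day free trial of Product P1",
    will_result_in="more prospects experience its superior features",
    confident_when="trial-to-paid conversion exceeds 15% within a quarter",
)
# Once the signal is observed, set h.confirmed and adjust the trees.
```

Keeping injections in this form makes it explicit which ones are still unconfirmed hypotheses and which have survived a test.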