Suppose you are the CEO of a manufacturing company and Ken, your VP of Operations, comes to you with the idea of offering clients fast-response options for a significant markup: on top of the regular four-week delivery at the regular price, deliver in two weeks for a 15% markup or in one week for a 30% markup. You ask Ken what Mia, the VP of Sales, thinks of it, and he tells you she neither objects nor fully supports it; as long as you, the CEO, back the idea, she will cooperate. A similar response comes from Ian, the company's CFO.
There are two possible problems with the idea. One is that Operations might be unable to deliver that fast, though Ken is confident Production can do it. The other is that clients might refuse to pay the markup and still press for faster response.
Suppose you tell Ken the following: “Sounds like an interesting idea. If you truly believe in it, go and do it. Get the support of Mia and Ian and bring results.”
Is this “good leadership”? The words “if you truly believe…” signal that the full responsibility for a failure would fall on Ken, no matter what causes the failure, and regardless of the fact that any new move is exposed to considerable variability. Would this encourage more people with creative ideas to bring them to management? What do you think would happen to Ken if “his idea” fails to impact sales?
A manager who dares to raise a new idea for improving the performance of an organization faces two major fears. The first is of being unable to meet the challenge and the responsibility and accountability that come with it. The second fear, far more devastating, is of unjust criticism if the idea does not work out according to prior expectations, given the significant inherent uncertainty. Expectations are usually built on an optimistic forecast, even when the person who raised the idea also took the less optimistic outcomes into account. The unjust behavior of critics, who choose to ignore the uncertainty, is what eventually gives most managers cold feet and prevents them from raising new ideas.
Think of the conflict facing the coach of a top sports team before an ultra-important game, whose key player has just returned from a long break due to a bad injury: should he use the player in the game? On one hand, that player could be the decisive factor in winning the game. On the other hand, he might get injured again. How would you judge the coach after the game? How much is your judgment influenced by the actual outcome?
A coach before a game has to make several critical decisions. But when someone raises a daring new idea, simply ignoring the idea is always a valid option.
Nassim Taleb is absolutely right in saying that instead of trying to avoid uncertainty we should use it to our benefit. However, one cannot simply ignore the fear that any negative actual result would cause disproportionate personal damage, much larger than the damage to the company from the specific idea. The fear is intensified by knowing that many of the people who might judge the outcomes of the idea do not really understand the nature of uncertainty. Then there is the concern that a “failure” would play a role in the power games within the organization, damaging the person who came up with an idea that worked less well than expected.
Most people are afraid of a variety of uncertain events. One question is what to do with that fear. Most people delay critical decisions, which amounts to deciding to do nothing. Others make uncertain decisions quickly in order to avoid the torment of the fear. The use of superstition to handle uncertainty, and mainly to reduce the fear, is also widespread.
As the reader has already realized, the focus of this article is not on individuals who make decisions for their own lives, but on people who make decisions on behalf of their organization.
One difference is that decision making on behalf of an organization is supposed to be based on rational analysis. The business culture of organizations projects an expectation of optimal decisions that carefully weigh costs against benefits. People make their own decisions based mainly on emotions and then justify them with rational arguments. Organizations might hold certain values based on the emotions of the owners, but the vast majority of the derived decisions are supposed to be the outcome of rational analysis.
Are they? Can people maintain two different sets of behaviors when making decisions?
Goldratt’s famous saying, “tell me how you measure me and I’ll tell you how I’ll behave,” gives a clue to what could cause a basic change in behavior between the workplace and every other environment a person interacts with. Every organization sets certain expectations of its employees. When continuing to work for the organization counts, and certainly when there is a wish to climb the ladder, fulfilling those expectations carries major weight.
“People are not optimizers, they are satisficers,” said Prof. Herbert Simon, the 1978 Nobel laureate in Economics. A ‘satisficer’ tries to meet satisfactory criteria, and once they are met, the search for better alternatives stops. If Prof. Simon is right, and if the organizational culture promotes the value of optimization, then organizations demand a different way of making decisions than the one people use for themselves.
The critical and devastating conflict lies in making a decision whose ramifications cannot be accurately determined. This is the core problem with all decisions, due to uncertainty and lack of relevant information. When it seems possible that the ramifications of a decision might be bad, but could also be great, we have a “hard decision” on our hands. Satisficers would naturally base the decision on a key criterion: evaluating the worst case and whether such an outcome could be tolerated. The other criterion would be how good the ramifications could be. Even though most people give much more weight to negative results, there are enough cases where people are ready to take a certain risk for the chance of gaining much more value. Many times taking the risk is the right decision for the long term. This is definitely true for organizations, which could gain a lot by making many decisions whose average gain is high while the damage from losing is relatively small. While some organizations, like high-risk funds, operate like this all the time, the vast majority behave as if the single-number forecast is what the future is going to be. In such a culture, failing to achieve the ‘target’ is attributed to the incompetence of specific people, who are openly blamed for it. This forces medium- and high-level managers to protect themselves by aiming at lower business targets they feel confident can be safely achieved.
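The long-term argument can be sketched numerically. The following simulation uses made-up probabilities and payoffs (none of them come from this article): a portfolio of risky decisions whose average gain is high, even though a sizable fraction of the individual decisions end in a loss.

```python
import random

def simulate_portfolio(n_decisions, p_win=0.6, gain=100, loss=30, seed=42):
    """Simulate n risky decisions; each wins `gain` with probability p_win,
    otherwise loses `loss`. Returns the average gain per decision and the
    number of individual decisions that ended in a loss."""
    rng = random.Random(seed)
    outcomes = [gain if rng.random() < p_win else -loss
                for _ in range(n_decisions)]
    avg = sum(outcomes) / n_decisions
    failures = sum(1 for o in outcomes if o < 0)
    return avg, failures

avg, failures = simulate_portfolio(1000)
# Expected value per decision: 0.6 * 100 - 0.4 * 30 = 48, so the portfolio
# gains handsomely overall, although roughly 40% of the decisions "fail".
print(f"average gain per decision: {avg:.1f}, individual failures: {failures}")
```

Under a single-number-forecast culture, each of those several hundred losing decisions would be labeled someone's failure, even though making all of them was clearly the right policy.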
The devastating damage caused by managers' fear of raising new ideas is being stuck in the current state, exposed to the risk that the competition will learn how to handle uncertainty in a superior way that vastly reduces that fear.
Reducing the fear can be done by making clear to the organization's employees that every forecast should be stated as a range rather than a single number. When any opportunity or new idea is formally analyzed by a team of managers from all the relevant functions, checking two different scenarios, one based on conservative assessments and the other on reasonably optimistic ones, such a team decision is better protected from unjust after-the-fact criticism. Addressing uncertainty through estimated ranges and the creation of two reasonable extreme scenarios is a key element in Throughput Economics, aimed at supporting much better decisions, and it opens the door for many new ideas.
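A minimal sketch of the two-scenario approach, applied to Ken's proposal. The 15% and 30% markups come from the opening example; the order volume, base price, and uptake rates are invented purely for illustration.

```python
def extra_revenue(orders, uptake_2wk, uptake_1wk,
                  base_price=10_000, markup_2wk=0.15, markup_1wk=0.30):
    """Additional revenue from fast-response markups, given assumed
    fractions of orders that take the 2-week and 1-week options."""
    return orders * base_price * (uptake_2wk * markup_2wk
                                  + uptake_1wk * markup_1wk)

# Conservative scenario: few clients take the fast options.
conservative = extra_revenue(orders=500, uptake_2wk=0.05, uptake_1wk=0.02)
# Reasonably optimistic scenario: noticeably higher uptake.
optimistic = extra_revenue(orders=500, uptake_2wk=0.20, uptake_1wk=0.10)

# The forecast is stated as a range, not a single number.
print(f"extra revenue range: {conservative:,.0f} to {optimistic:,.0f}")
```

If even the conservative end of the range justifies the operational effort, the team can commit to the idea; if the idea lands anywhere inside the stated range, no one can fairly be blamed for "missing the target".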
> How would you judge the coach after the game? By how much your judgment is influenced by the actual outcome?
That would be like judging a decision to play the lottery by whether or not the person in question won the lottery. Or like judging a decision in poker by whether or not the player won the hand in question. It’s totally ignoring statistics. It’s pseudo-science. We must judge a decision by the information that the person had at the time of making the decision.
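The lottery/poker point boils down to expected value: a decision is judged by the odds and payoffs known before the fact, not by the single outcome that happened to occur. A small sketch with invented numbers:

```python
def expected_value(outcomes):
    """Expected payoff of a decision, where `outcomes` is a list of
    (probability, payoff) pairs covering all possible results."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical lottery: a $2 ticket with a 1-in-10-million chance of $1M.
lottery = expected_value([(1e-7, 1_000_000 - 2), (1 - 1e-7, -2)])

# Hypothetical business bet: 60% chance of +$100k, 40% chance of -$30k.
bet = expected_value([(0.6, 100_000), (0.4, -30_000)])

# Buying the ticket is a bad decision even for the rare winner;
# the bet is a good decision even in the runs where it loses.
print(f"lottery EV: {lottery:.2f}, bet EV: {bet:,.0f}")
```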
> People make their own decisions based mainly on emotions and then justify the decisions using rational arguments.
Which is pseudo-science (it’s not rational). They are trying to support their ideas (the foundationalism mistake made by Aristotle, mentioned in my essay) instead of trying to criticize their ideas in search of ideas that they don’t see any flaws in.