Behavioral biases and their impact on managing organizations

By Eli Schragenheim and Alejandro Céspedes (Simple Solutions)

Most of us, especially TOC practitioners, consider ourselves very good at decision making thanks to our cause-and-effect thinking.  However, behavioral economics, notably the research of two Nobel Prize-winning professors, Daniel Kahneman and Richard Thaler, has convincingly shown several general deviations from rational economic thinking, pushing most people to make decisions that look logically flawed. The psychology behind how people make decisions is extremely relevant to TOC because organizations are run by people, the same people TOC tries hard to convince to change their decision-making process. TOC usually treats cause-and-effect relationships as based on rational logic.  But cause and effect can also account for irrational causes, like a special negative response to the term “loss” even when there is no actual loss, and predict the resulting effect of such a negative response.

Generally speaking, TOC should look for answers to three key questions regarding such biases:

    1. Can we use cause-and-effect logic to map and explain biases? If so, can we eliminate, or significantly reduce, the biases?
    2. Is the impact of the biases on the organization’s decision making the same as on the decisions of an individual? Can we improve the organization’s decision making by treating the cause of the biases?
    3.  When numbers are fuzzy, as they usually are in corporate scenarios, what do managers rely on to make decisions?

Understanding loss aversion

In itself, loss aversion is not a bias. It is a reasonable way to stay away from trouble.  What has been shown is that it is often applied inconsistently, which in practice means that human beings are influenced by irrelevant parameters that should have no impact. To demonstrate the inconsistency we will use two experiments presented in “Thinking, Fast and Slow” by Prof. Kahneman.

In one experiment people were told that they had been given US$1,000 and that they had to choose between a 50% chance of winning an additional US$1,000 or getting US$500 for sure.

It’s no surprise that the vast majority of people were risk averse and chose to get the US$500.

What’s interesting is that when people were told that they had been given US$2,000 and then had to choose between a 50% chance of losing US$1,000 or losing US$500 for sure, many of them suddenly became risk seekers and chose the gamble.

In terms of the final state of wealth the two cases are exactly the same. Both offer a choice between getting US$1,500 for sure and accepting a gamble with equal chances of ending up with US$1,000 or US$2,000. The two cases differ only in how they frame the choice.  The first case verbalizes the choice as gaining versus taking a risk to gain more. The second frames the dilemma as losing versus potentially losing more (or not losing at all).  The fact that many people made a different decision between the two cases reveals a bias based on framing a ‘loss’ versus ‘gaining less’. It demonstrates how decisive the impact of mere words can be.
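
For readers who like to see the equivalence spelled out, here is a minimal sketch in Python (our own illustration, not from Kahneman’s text) that computes the final states of wealth under both framings:

    # Our own illustration: the two framings lead to identical final states of wealth.

    # Framing 1: you are given $1,000, then choose between $500 for sure
    # and a 50% chance of winning an additional $1,000.
    frame1_sure = 1000 + 500                   # $1,500 for sure
    frame1_gamble = (1000 + 0, 1000 + 1000)    # $1,000 or $2,000, equal chances

    # Framing 2: you are given $2,000, then choose between losing $500 for sure
    # and a 50% chance of losing $1,000.
    frame2_sure = 2000 - 500                   # $1,500 for sure
    frame2_gamble = (2000 - 1000, 2000 - 0)    # $1,000 or $2,000, equal chances

    assert frame1_sure == frame2_sure
    assert sorted(frame1_gamble) == sorted(frame2_gamble)
    print("Both framings: $1,500 for sure vs. an even gamble between $1,000 and $2,000")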

These two experiments demonstrate two important findings. One is that “losses” loom much larger than “gains”; the other is that people become risk seeking when all their options are bad. This also explains why most people turn down a bet with a 50% chance of losing US$100 and a 50% chance of winning US$150, even though on average the result is positive. If the bet offered a 50% chance of winning US$200, a balance between risk seeking and risk aversion would be reached. That means “losing” is about twice as strong as “winning” in a person’s general value assessment.
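
As a rough illustration of that “twice as strong” claim, the following sketch (our own, assuming a simple loss-aversion weight of 2 in the spirit of prospect theory) shows why the US$150 bet feels unattractive while the US$200 bet sits close to the point of indifference:

    # Illustrative only: a crude model in which losses weigh twice as much as gains.
    def perceived_value(win, lose, p_win=0.5, loss_weight=2.0):
        """Subjective value of a bet when losses are over-weighted."""
        return p_win * win - (1 - p_win) * loss_weight * lose

    print(0.5 * 150 - 0.5 * 100)        # monetary expectation: +25, yet most people decline
    print(perceived_value(150, 100))    # perceived value: 75 - 100 = -25 -> bet is rejected
    print(perceived_value(200, 100))    # perceived value: 100 - 100 = 0 -> rough indifference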

Should we be surprised by this seemingly irrational behavior?

Losing an existing $100 might disrupt the short-term plans of a person, while the value of an additional $150 is less clear. So, even though it is clear to most people that overall this is a good gamble, they resist it because they recognize the greater negative impact of the loss.

Losing all we have is a huge threat, so every person sets a survival mode that should not be breached.  As the ability of most people to manage gains and losses in detail is limited, the survival instinct leads to a negative reaction to any potential loss, weighting it more heavily than the equivalent gain. So, taking into account our limited ability to control our exact state, we develop simple, fast rules for making safe decisions. A simple rule could be “don’t gamble ever!”, or, “don’t bother with gambles unless you are certain to win much more.” These heuristics are definitely very helpful in most situations, but they can be costly in others.

While risk aversion seems rational enough, the framing bias is an irrational element; yet the cause behind it is fairly clear and can be outlined as regular cause and effect.

We further assume that a person with a good background in probability theory would be able, most of the time, to resist the framing bias and come up with consistent decisions, especially for significant ones.

Does this hold true for decisions made on behalf of an organization?

Suppose you are the regional sales manager of a big company and have to decide whether or not to launch a new product. Historical statistics show a fifty-fifty chance that the new product will either make a profit of US$2 million in one year or lose US$1 million, in which case its production would be stopped at the end of the year.

What would you do?

Our experience says that most seasoned managers will refuse to take the risk. Managers are naturally risk averse regarding any outcomes that will be attributed directly to them. As a matter of fact, every decision made on behalf of an organization goes through two different evaluations: one is what is good for the organization, and the other is what is good for the decision maker.

It’s common in many organizations that a “success” leads to a modest reward while a “failure” leads to a significant negative result for the manager. What’s more, because of hindsight bias, decisions are assessed not by the quality of the decision-making process and the information available at the time they were made, but by their outcomes. No wonder loss aversion intensifies in corporate scenarios!

Earlier we mentioned that teaching the basics of probability theory and acknowledging the different biases should reduce their impact. But the unfortunate fact is that in most cases decision makers face uncertain outcomes for which the probabilities are unknown. Launching a new product is such a case.  The statistical assessment of a fifty-fifty chance is very broad, and the decision maker cannot assume she knows the real odds.  This fuzzy nature of the assessments naturally makes people even more risk averse, because the risk could be bigger than what is formally assessed. On the other hand, managers are expected to make decisions, so they are sometimes pushed to take risky decisions just to look as active as expected.

Now suppose that you are the Sales Vice-President and you have to decide whether to launch 20 different new products in 20 different regions. All product launches carry the same statistics as presented earlier (a 50% chance of making US$2M and a 50% chance of losing US$1M). Suppose the company is big enough to overcome several product flops without threatening its solvency.

Would you launch all of them?

Assuming the success or failure of each product is independent of the other products, the simple statistical model predicts, on average, a total profit of US$10M. However, since top management will most probably judge each decision separately, a bias known as narrow framing, the VP of Sales will try her best to minimize the number of failures. She might decide to launch only 8, basing her choice on the best intuition she has, even though she is aware she doesn’t really know. What’s ironic is that, because of the aggregation effect, launching only 8 products exposes the company to a higher overall risk than launching all 20, as the sketch below illustrates.
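
To make the aggregation effect concrete, here is a short sketch (our own illustration, assuming independent launches that each win exactly US$2M with probability 0.5 and otherwise lose exactly US$1M) comparing the chance of an overall loss when launching 8 products versus 20:

    from math import comb

    def prob_overall_loss(n_launches, p_success=0.5, win=2.0, loss=1.0):
        """Probability that the combined result of all launches is negative (in US$M)."""
        total = 0.0
        for k in range(n_launches + 1):          # k = number of successful launches
            profit = k * win - (n_launches - k) * loss
            if profit < 0:
                total += comb(n_launches, k) * p_success**k * (1 - p_success)**(n_launches - k)
        return total

    for n in (8, 20):
        expected = n * (0.5 * 2.0 - 0.5 * 1.0)   # expected total profit in US$M
        print(f"{n} launches: expected profit US${expected:.0f}M, "
              f"P(overall loss) = {prob_overall_loss(n):.1%}")

    # Roughly: 8 launches  -> expected US$4M,  ~14.5% chance of an overall loss
    #          20 launches -> expected US$10M, ~5.8% chance of an overall loss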

There are many well-known examples of companies that decided to play it safe and paid a huge price for it; Kodak, Blockbuster, Nokia and Atari immediately come to mind. So, if organizations want managers to take more “intelligent” risks, they need to create an environment that doesn’t punish managers for the results of their individual decisions, even when the outcome turns out to be negative. This is not to say organizations shouldn’t maintain a certain control over their human decision makers so that they take potential losses seriously; otherwise, managers might take huge risks because it is not really their money.  It means understanding how significant decisions under uncertainty have to be made, and enforcing procedures for making such decisions, including documenting the assumptions and expectations, preferably for both a reasonable ‘worst case’ and a ‘best case’ scenario, which will later allow a much more objective evaluation of the decisions made.

This balancing act for taking risks is definitely a challenge, but organizations have to recognize that excessive risk aversion favors the status quo, which could eventually be even riskier.

A Spanish translation of this article can be found at: www.simplesolutions.com.co/blog


3 thoughts on “Behavioral biases and their impact on managing organizations”

  1. Thanks for an excellent article. Kahneman’s System 1/System 2 distinction is very useful when it comes to understanding decision making, especially logical flaws in decision making. Would you agree that rigorous training in cause-effect thinking (TOC Thinking Processes / Dettmer’s Logical Thinking Process) improves people’s System 1 thinking, or would you rather say it trains them to use System 2 processes more often?


    1. This is an excellent question and I’m not sure, actually I don’t know, how we eventually influence our System 1. I like to assume we can signal to System 1 that a certain pattern requires System 2. It seems logical to me that such a “warning” that a pattern needs more thought is possible. Maybe it is even possible to make System 1 get used to new patterns that are the result of recognizing the criticality of certain cause-and-effect relationships, and make them part of System 1’s automatic response. The problem is: I’m not knowledgeable enough in how our mind works.

