## The Non-Linear Behavior of the Cost of Capacity

And its impact on decision making

Part 1 of a series on using T, I and OE for key decision making

Challenging widely accepted paradigms creates new opportunities

The terminology of Physics does not usually use words of dramatic intensity.  However, a certain incident in the late 19th century was so embarrassing that it was called “the ultraviolet catastrophe”, and by that it caught my imagination.  The story is about radiation emitted from a black body: the mathematical equations, according to the knowledge of that time, predicted that the emitted radiation should be infinite.  Well, it was easy to see that this is NOT the case.  What eventually solved the riddle was the discovery, later understood through Quantum Theory, that the energy of the emitted radiation is not continuous but discrete.  As it turns out, discrete functions behave very differently from continuous functions.

There is a tendency in social science circles to assume that the main functions describing the behavior of key variables, like capacity or the cost of capacity, are continuous.

Really???

I claim that all cost functions in reality are discrete.  This is most certainly true when we speak about the cost of capacity.

All organizations spend their overhead money on providing the capacity required for the business.  The usual way is to purchase a certain fixed amount of capacity: space for storage or offices, a machine capable of processing a certain quantity per hour, or employees who agree to work N hours every week.

The cost of providing that capacity is fixed whether you actually use all that capacity or only part of it.

This means that using 25% of the available fixed amount of capacity, or using 85% of that quantity, costs exactly the same!  This is a basic non-linear behavior, and its impact on the decision of what to do with the capacity at hand is HUGE.

Once all the available capacity is used, new options for acquiring additional capacity open up.

But the principle of being able to purchase capacity only in certain fixed sizes still holds.

An employee might agree to work another hour, but usually not a part of an hour.  So, if you need just 34 minutes of overtime the cost is one hour of overtime, which is also considerably more expensive than the relative cost of a regular hour.

So, when we look at the behavior of the cost of capacity we observe the following:

The initial cost is HIGH.  Then the cost of additional usage is zero (0) until a certain load is reached.  Then the cost jumps by another fixed amount.  Using more capacity again costs zero until the next step.

This actual behavior is quite different from the current practice of assigning the average cost to any use of capacity.
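The step behavior described above can be put into a short sketch.  The figures here are hypothetical (a fixed block of 160 hours costing 4,000, and overtime sold only in whole hours at a 50% premium over the implied regular rate); the point is the shape of the function, not the numbers:

```python
import math

def capacity_cost(hours_needed, block_hours=160, block_cost=4000.0,
                  overtime_rate=37.5):
    """Cost of capacity as a step function (hypothetical figures).

    The fixed block costs the same whether 25% or 85% of it is used.
    Beyond the block, overtime is bought only in whole hours, each at
    a premium over the implied regular rate (4000 / 160 = 25 per hour).
    """
    if hours_needed <= block_hours:
        # Any utilization of the fixed block costs exactly the same.
        return block_cost
    # Overtime is purchased in whole hours: 34 minutes cost a full hour.
    extra_hours = math.ceil(hours_needed - block_hours)
    return block_cost + extra_hours * overtime_rate

# Using 25% or 85% of the block costs exactly the same:
assert capacity_cost(40) == capacity_cost(136) == 4000.0
# 34 minutes (0.57 hours) of overtime cost one full premium hour:
assert capacity_cost(160.57) == 4000.0 + 37.5
```

Plotting this function gives the staircase described in the text: a high initial step, flat stretches of zero marginal cost, and fixed jumps at each purchase point.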

This is the kernel of the TOC challenge to cost accounting!

So, the simple principle of cost accounting is invalid in our reality.  The use of the average cost of capacity has led all the way to the fiction of cost-per-unit.

Do we really need “per unit” measures to support sales decisions?

We still believe in simplicity, but reject the wrong simplicity.  What could be simpler than having a way to measure the direct impact of a decision on the bottom line?

Let’s now look at another realization:

There is no hope in hell of using all the available capacity!

This is certainly in direct clash with the common paradigms.

There are three causes for being unable to use all the available capacity to generate value:

1. TOC has demonstrated the need for protective capacity to provide good and reliable delivery performance.
2. The market demand fluctuates at a faster pace than our ability to adjust the available capacity.
3. Capacity is purchased only in certain sizes. This is similar to what has already been stated above.

What are the ramifications for decision making?

When a new market opportunity pops up we need to consider the state of capacity usage of every resource.  When there is enough excess capacity the usage is FREE!  When the additional load penetrates into the protective capacity, then there is a need to carefully check the cost of additional capacity or the ramifications of giving up some existing sales.
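For one resource, that decision rule can be sketched as follows.  The protective-capacity share used here is a made-up managerial judgment, not a formula (as stressed elsewhere in this series, there is no formula for protective capacity):

```python
def capacity_verdict(current_load: float, extra_load: float,
                     available: float, protective_share: float = 0.15) -> str:
    """Classify the capacity impact of a new opportunity on one resource.

    protective_share is the slice of capacity kept as protection;
    its size is a managerial judgment, not a formula.
    """
    threshold = available * (1 - protective_share)
    total = current_load + extra_load
    if total <= threshold:
        return "free"        # excess capacity: the usage adds no cost
    if total <= available:
        return "check"       # eats into protection: weigh the cost of
                             # more capacity or of giving up some sales
    return "infeasible"      # cannot fit without buying more capacity

assert capacity_verdict(100, 20, 160) == "free"
assert capacity_verdict(130, 10, 160) == "check"
assert capacity_verdict(150, 20, 160) == "infeasible"
```

In reality the check covers every resource, and the "check" branch has to consider the sizes and prices of the specific capacity increments actually available.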

This is a very different generic approach from the existing management accounting tools!

The next post will explain more on how to calculate the impact of an opportunity on the bottom line, without using any “per-unit” kind of measure that would force us to use averages and get a distorted answer.

## The Thinking Processes (TP) and uncertainty

Have a quick look at the small cause and effect branch.  Is the logic sound?

Can it be that in reality effects 1-3 are valid, but effect 4 is not?

We can come up with various explanations of insufficiency in the above logic.  For instance, if the clients are not free to make their own decisions, as in totalitarian countries, then it could be that the regime prefers something else.  Another explanation might be that the brand name of Product P1 is much less known.

The generic point is: the vast majority of the practical cause and effect connections are not 100% valid.

In other words, most good logical branches are valid only statistically, because they might be impacted by uncertain additional effects that distort the main cause-and-effect.  Actually, the uncertainty represents insufficiencies we are not aware of, or that we know about but cannot confirm whether they exist in our reality.  For all practical purposes there is no difference between uncertainty and information we are not able to get.

This recognition has ramifications.  Suppose we have a series of logical arrows:

eff1 → eff2 → eff3 → eff4 → eff5

If every arrow is 90% valid (it is true in 90% of the cases), and the arrows can fail independently, then the long arrow from eff1 to eff5 is only about 66% valid (0.9⁴ ≈ 0.66).
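Assuming, for the sake of the arithmetic, that each arrow fails independently of the others, the decay of validity along a chain is a simple product:

```python
def chain_validity(per_arrow: float, n_arrows: int) -> float:
    """End-to-end validity of a cause-and-effect chain.

    Assumes each arrow holds independently with the same probability;
    real branches may be correlated, so this is only an illustration.
    """
    return per_arrow ** n_arrows

# Four arrows at 90% each: the end-to-end claim holds ~66% of the time.
assert round(chain_validity(0.9, 4), 3) == 0.656
# Ten such arrows and the chain is barely better than a coin flip:
assert round(chain_validity(0.9, 10), 2) == 0.35
```

The longer the chain, the more the conclusion deserves a reality check, which is exactly the point of the paragraph that follows.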

The point is that while we should use cause-and-effect, because it is much better than ignoring it, we can never be sure we know!  The real negative branch of using the TP to outline various potential impacts is that frustrated people could blame the TP and its logic and refrain from using it in the future.  This false logic says:  if ([I behave according to the TP branch] → [Sometimes I do not get the expected effect]) then [I stop using the TP].

The way to deal with this serious and damaging negative branch is to acknowledge the role of uncertainty in our lives, and the idea that partial information is still better than no information – provided we take the limitations of being partial seriously.  We can never be sure that whatever we do will bring benefits.  However, when we use good logic, then most of the time we’ll get benefits far greater than the total damage.

It’d be even better to consider the possibility of something going wrong in every step we take.  This would guide us to check the results and re-check the logic when the result is different from what we expected.  It is always possible that there is a flaw in our logic, and in such a case we had better fix the flawed part and gain a better logical understanding of the cause-and-effect.  When we do not see any flaw in our logic – there is still room for some crazy insufficiency to mess up our lives, and this is the price we pay for living with uncertainty.

## The Mysterious Power of Synergy

Synergy means that a system can achieve more, sometimes much more, than the sum of its parts.  This extra power is not easily understood and thus it is difficult to manage.

It is straightforward to see the value of synergy in sports.  You can build a basketball team by bringing together great players, each excelling in one particular role, and let them play and hopefully win.  Does it ALWAYS work?

When it does, one might get the impression that the power of the team is far more than the accumulated level of each player.  This is when the mystery of synergy works.

When it does not work then there is no ‘team’, but just a group of excellent players, each playing according to his own interests.  In TOC we call it local thinking rather than holistic thinking.  I think it is quite natural that a person thinks and acts based on his own interests.  The only rational way to cause a person to think holistically is to make a convincing argument that synergy does work – in other words, that the success of the whole would contribute much more to the person than whatever he can achieve by himself.

Theoretically there is a way to create such a clear win-win structure that the interests of every player are exactly the same as the holistic ones.  I understand the theory, but I admit I have not been able to construct such a network of win-wins in reality.  Still, the intuitive recognition that synergy exists in a big way could help in aligning different parts into a holistic system.

Very large organizations use their natural synergy to gain much more value.  We can recognize some of the causes of such synergy, and by that reduce its ‘mysterious’ impact.  When some of the products/services of a giant company get excellent recognition the other services gain recognition as well.  The stability and security radiated by large organizations is a synergy asset and its cause is pretty clear.

However, many other causes of synergy are not all that clear, but this does not mean they do not exist.  A strange manner of speech calls ‘chemistry’ the effect where two players play with great understanding of each other and thus generate synergy.  It is funny, to my mind, to give a ‘scientific’ name to something whose cause and effect is hard to map.  Still, in reality we see how some product mixes have more impact than others.  One needs to look at the overall characteristics of a ‘package’ to understand the advantage of one supplier over another, rather than go into the details of every product.  It is exactly like recognizing a forest rather than trees.

Project-portfolio is a managerial topic that calls for assessment of its synergy.  It means that when we consider a new project there is a need to assess the somewhat vague impact of adding this project to the portfolio and predict the total impact of the whole portfolio on the organization.

As such assessment is mainly intuitive we need to recognize it as ‘partial information’ or basically uncertain information.  We should NOT ignore the synergy impact just because we are unable to predict its exact impact.  While we recognize “never say I know” we should not take the position of “we don’t know”, because we do know something.  We are able to carefully assess the impact of synergy as a reasonable range and thus take the range as part of the decision making process.

Any good Strategy planning has to strive to gain synergy from all the initiatives that are integrated into one effective decisive-competitive-edge.  Synergy is a critical part of the creation of an ever-flourishing organization, and it requires a holistic view and a good tolerance for using ‘partial information’ to guide our decisions.

## The problematic relationships between the individual and the organization – Part 2

A common belief is: Many employees don’t want to make all the efforts they are required to make

The point is not so much whether the belief is true, but whether it is self-fulfilling – meaning employees try to avoid too much work and effort because they realize they are not being trusted. When you are not trusted, the objective of feeling good about what you have done evaporates into thin air.

Suppose management succeeds in creating a culture of trust. Would the employees be willing to be loyal to the organization they work for? By ‘being loyal’ I mean do whatever it takes to achieve more of the organizational goal.

Employees come to work because they need the money. This starting point has several ramifications.

1. If the employees think they are getting less than what they deserve – then they become frustrated and hostile towards the organization, which is the opposite of loyalty.
2. As the money is important the employees choose to stay in the organization until something better pops up. This forces them to try hard to be considered “good”, or at the very least “OK”.

Other ramifications are caused by the mere fact that employees spend a large part of their lives at their workplace:

1. Most employees prefer to “do something” while they are at the workplace, and so they usually work willingly according to what is expected of them, unless they have a reason not to.
2. Employees that have the passion to excel look for an appropriate chance.

An important observation:

It is easier for an employee to be loyal to the organization than for the organization to be loyal to the employee.

The organization always looks at the cost and compares it to the perceived value from the employee. However, as we have seen, that value is not easy to assess. It is even more difficult to assess the indirect damage of being disloyal to the employee, as many of the employees become disillusioned and even hostile in a hidden way.

From the above it seems that even when management trusts its employees and is loyal to them, there is no guarantee that all the employees will be loyal to the organization. It is enough that a specific employee believes he is underpaid to cause him to betray that trust. And if this is the case then the organization should actively look for signals of low motivation and disloyalty from the employees.

However, the need for making sure all employees are loyal does not necessarily mean the solution has to be the use of personal performance measurements.

What are the true needs of organizations in assessing their employees?

I can see two such needs:

1. Identify employees that generate damage. Some might simply lack the appropriate capabilities. Others might be ‘rotten apples’ – those who are disloyal. Such people might influence others to become disloyal.
2. Identify potential ‘stars’ – employees who can bring huge value in the future – if they are nurtured in the right way.

All the rest are good employees who bring value when management makes it possible. Is there a point in measuring their performance more accurately?

If the organization maintains a culture of respect and loyalty, then the employees will do their best for the organization, because this is their work – a substantial part of their life. What the organization has to do is make sure there is a certain code of work, and when there is a signal that the code is broken then, and only then, should those employees be chased out.

In some cases, the organization has no choice but to let people go because they cannot yield value anymore. The point is that when this happens management needs to recognize it as its own failure! Management then has to be aware that it needs to re-build the trust and loyalty of the employees who are still in the organization to prevent the next disaster.

## The problematic relationships between the individual and the organization – Part 1

“Tell me how you measure me and I’ll tell you how I’ll behave.”

This famous citation exposes an inherent conflict between the individual and the organization.  The individual, according to Goldratt, behaves according to the way he is measured, which is not necessarily to the good of the organization.

Why would an organization measure its employees???

Do you measure the performance of your spouse, children or close friends?  You have certain expectations, which are not always met, but do you look for ways to quantify your expectations?

What is the gain from performance measurements?

One real and important gain is knowing whether you need to analyze much more deeply how come the results deviate from your initial expectations.  In order to achieve that objective you need recorded expectations.  Now – expectations are never one clear number – are they?

As a father I expected my children to achieve good grades at school, say B and above.  Getting a low grade did not call for punishment; instead we tried to understand the reason and what could be done next time.  Other parents push their children much more.  Is the push valuable?  Certainly, treating the grades as performance measurements that invoke positive and negative reactions would improve the grades.  The more important question is: would it improve the life of the children?

Do performance measurements improve performance?

Goldratt has shown us how flawed measurements can reduce global performance.  There are three different causes for a performance measurement to radiate the wrong message:

1. They are the wrong measurements.
2. Dependency – the measurement depends not just on my performance, but also on other factors, like the performance of others.
3. Variation. My performance varies. Some of the causes for the variance can be explained by external factors (like headache) and some have no clear explanation.

How significant is the variation factor?  I like to watch sport on TV in order to spot the unexplained variation in the performance of the players.  It is most noticeable in Tennis how the number of “unforced errors” and the “first serve percentage” fluctuate within one game.  It demonstrates that the variation in our ability to do things is quite significant.

When the performance measurements ignore the impact of dependency and variation, the employee distrusts the measurements and is led to do whatever he can to manipulate them for his own sake.

See what we have got so far:  ignoring complexity (dependency) and ignoring uncertainty (variation) are both direct consequences of a lack of trust.

Trust, or the lack of it, is a critical factor in life as well as in business and most certainly in managing organizations. The fact that the organization does not trust its employees and thus uses performance measurements actually causes the employees to distrust their bosses and pushes them to plan carefully their behavior – against the interests of the organization.

It all starts with the relationship between the CEO and the owners (shareholders and possibly the board). Every CEO wants to make sure the organization achieves the results that please the owners and make him/her a highly appreciated CEO.

An obvious difficulty for the CEO is to make sure all other employees do whatever they need to do to achieve the results. Thus, when possible, they impose performance measurements to push their subordinates to make more efforts.  Do they really make more efforts? Or do they just manipulate their available capacity according to the specific measurements and nothing else is important?

A simple cause-and-effect tree showing the rush to use performance measurements on employees

Is there anything wrong with the above logic?

What do you think of entity 2.3 – is it really valid in reality?  Let’s discuss it further to lead us to the direction of a solution.

## The balance between Statistics and Intuition

Joel-Henry Grossard has made an important comment that I’d like to share with you, along with my view. He wrote:

“However you can have two distributions which have the same average and the same standard deviation, but which are profoundly different when you look at the numbers. The missing factor is time: to know how the numbers are spread over time is critical to decide. Using Statistical Process Control can help.”

Do we know how the variables we look at in our practical reality behave with time?

Let’s consider a process on the shop floor where we are able to record a lot of data and how it spreads over time.  What we get is a time series of results, but that graph represents only one possible spread of the results; it is not a replicate of the real distribution function. Usually we don’t really know the full characteristics of the distribution function.  For instance, if an operator gets tired after one hour, and if this tiredness is expressed in the quality of the output, then we should see a certain deviation. But, unless we suspect this could happen, there is not much chance that such a cause of deviations would be detected.

When the process is fully under our control we are able to confirm that the basic parameters of the process haven’t gone through a significant change. Even in such a convenient case, my understanding is that Prof. Deming himself did not apply the full power of Statistics, and just went for standard heuristics to establish good-enough quality control.
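Those standard heuristics are essentially Shewhart-style control limits.  A minimal sketch, with made-up readings (the 3-sigma threshold is the customary choice, not the only one):

```python
from statistics import mean, stdev

def control_limits(samples, k=3.0):
    """Shewhart-style limits: points beyond k sigma from the mean are flagged.

    This is the good-enough heuristic, not the full machinery of
    Statistics; k = 3 is the customary choice.
    """
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

# Hypothetical in-control readings from a stable process:
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
lo, hi = control_limits(readings)

# A drifting reading (the tired operator after an hour?) falls outside:
assert not (lo <= 11.0 <= hi)
# An ordinary reading stays inside the limits:
assert lo <= 10.2 <= hi
```

The heuristic only tells us the process has probably changed; as the text argues, it cannot tell us the cause, and it assumes all the readings came from the same distribution in the first place.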

Once we step out of what is fully under our influence we know even less about the behavior of the surrounding uncertainty. We don’t even know whether all the recorded results belong to the same distribution function.

Suppose a new Harry Potter book suddenly appears. The last book in the series appeared in 2007. What statistical model could predict the number of copies to be sold in the first week?  We do have past results, and they are relevant to a certain degree, but due to the long intermission between the former series and the surprising new book the original distribution function has changed and we don’t exactly know what the change is.  We don’t even know whether the demand will be up or down relative to the last book.

Does it mean we don’t know anything?  Can it really be any number?  We know some of the parameters that impact the demand for the next book.  The reputation of the original series is still high. But many of the past readers are now older, and it is not clear whether their interest is still high. Thus, some intuitive estimation can be made regarding the reasonable minimum demand.  We can also estimate how much more demand is reasonable, taking into account the demand for the last book; but what meaning do the previous results carry?  The one conclusion from them is that the last book was not a single incident and the whole series was a big success.  However, the detailed results and how they spread over time do not add much value to the prediction.

I put a special emphasis on the word “reasonable”.  First, we know that sometimes unreasonable things do happen.  But, assuming we do have intuition based on care and experience, most of the time what is happening is reasonable to us.  Our intuition is shaped by many small events and whether they are considered reasonable or unreasonable by us.

This intuition is a source of valuable, but partial, information to guide our decisions concerning common and expected uncertainty.

Partial information is what we usually have that could help us.  Not enough to prevent us from some damaging decisions but good enough to guide us so that overall we’d get much more benefits than damage.

But, when we knowingly ignore the partial information, we cause definite damage.  This is what most organizational policies do: force certainty on uncertain situations, like using one-number forecasts and turning them into sales objectives as prime performance measurements.

I claim that forcing certainty on uncertainty is due to the fear of being unjustly criticized.  I’d like to deal with the causes and effects of people’s fear of performance measurements, and highlight the nasty sides of the relationships between the organization and its employees.  This is, hopefully, going to be my next post, subject to the inherent uncertainty.

## The current TOC achievements in handling uncertainty

TOC has always been focused on common and expected uncertainty.  It just never fully generalized the global ramifications of its tools for handling uncertainty.  In this post I’d like to highlight the wider impact that stems from DBR, CCPM and Replenishment.

The critical TOC terms that are key to handling common and expected uncertainty are:

1. Buffers
2. Buffer Management
3. Protective capacity
4. Thin and focused planning

Buffers:  The concept of inserting visible buffers as an integral part of the plan is, for me, a landmark in managing uncertainty – and by “managing” I also mean the behavioral side.  People use buffers all the time to protect themselves, but they have to hide them. The main problem with hidden buffers is that they are wasted by being always fully consumed, because the organization does not recognize the need for buffers.

The visible use of buffers in the planning raises several issues that planners have to consider:

1. What to buffer? Should we spread buffers everywhere or concentrate on specific locations?
2. How should we size buffers?
3. What is the cost of maintaining such buffers? What are the benefits?

Struggling with the above questions forces people to recognize the impact of uncertainty and to employ certain key insights from Probability Theory.

Buffer Management:  This is a unique TOC concept; I don’t know of any similar idea of inquiring into the actual usage of buffers to guide decisions.  Buffer Management is relevant only to buffers that are frequently partially consumed.  Buffers that are either fully consumed or not at all, like alarm systems or insurance, cannot be managed by buffer management.

The value of buffer management lies on two different fronts:

1. Dictating a priority system in the execution phase, striving to achieve all the plan’s true objectives.
2. Generating valuable feedback on the planning, thus improving the future planning, including the more appropriate size of the buffers.
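As an illustrative sketch of the first front, buffer management maps buffer consumption to execution priority.  The split into thirds is the customary convention in TOC; the actual thresholds are a managerial choice, not a formula:

```python
def buffer_zone(consumed: float, buffer_size: float) -> str:
    """Map buffer consumption to the conventional three zones.

    Thirds are the customary split in TOC buffer management;
    the exact thresholds are a managerial choice.
    """
    ratio = consumed / buffer_size
    if ratio < 1 / 3:
        return "green"   # comfortable: no action needed
    if ratio < 2 / 3:
        return "yellow"  # watch: prepare a recovery if it keeps climbing
    return "red"         # expedite: the buffer is nearly gone

# Priority follows buffer consumption, not the original schedule:
orders = {"A": buffer_zone(2, 12), "B": buffer_zone(9, 12)}
assert orders == {"A": "green", "B": "red"}
```

The second front falls out of the same record: orders that routinely finish deep in the red (or never leave the green) are direct feedback that the buffer was sized wrong.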

Protective Capacity:  This is the most revealing concept, as it is in direct clash with the utopia of being able to match capacity to demand, and with the efficiency syndrome.  The important message is that due to both external and internal uncertainty, a lack of enough excess capacity hurts the delivery performance to the market.  Note that there is no formula for how much protective capacity is necessary.  Buffer management lets us know when one or more resources come close to the limit of their protective capacity, but it is unable to tell us whether we have too much protective capacity.

Thin and focused planning:  This is a TOC concept, even though it was never verbalized as such.  From the five focusing steps we realize that the key planning rule is exploitation of the constraint.  Subordination is about adding buffers to the planning, and mainly about execution – making sure the exploitation plan progresses smoothly.  Both DBR and Replenishment use very thin planning, leaving many decisions for the last minute, when the actual impact of uncertainty is known.  CCPM does not fully follow the thin-planning direction, and this has led to recent ideas, by James Holt and Sanjeev Gupta, for simplifying CCPM planning.

The above achievements should encourage us all to develop more tools that will allow management to recognize and manage uncertainty.  I think most managers are aware of the need, but are simply trapped by the fear of being unjustly criticized.

A superior level of performing well in spite of significant uncertainty will be achieved ONLY when a decision-making process is established that verbalizes the uncertain potential results and leads the decision makers to contemplate decisions that achieve high gains most of the time, while also accepting that in some cases limited damage will occur.  The emphasis is on ‘limited damage’ – damage the organization is able to tolerate – so that the range of potential results considered at the time can later be used to demonstrate the validity of the decision.

## The problems with “Common and Expected Uncertainty”

Not all the decisions managers have to make are about risk, meaning decisions that might cause a serious loss but might also bring a considerable gain.  Actually, those risky decisions are quite infrequent.  While I still claim that the vast majority of organizations force their managers to be ultra-conservative, the losses from those decisions are small relative to the huge loss from wrong policies for dealing with “common and expected uncertainty.”

Take CCPM (the TOC Project Management solution) and ask yourself: how come planning a visible project buffer is such a dramatic new insight?  How come people insist that there is a clear time duration for a task?

Eli Goldratt said that organizations force certainty on uncertain situations.

The paradox is that by forcing certainty, management increases the negative impact of uncertainty. We see projects that take too long and shop floors that process too much inventory.  Many organizations suffer from hazards because of a lack of manpower, a relatively cheap resource.

The common cause is the concern of every human manager about being blamed for creating “waste”.

The prime example I like pointing to is the use, actually the misuse, of sales forecasts.  We know from Probability Theory, or Statistics, that the minimum description of an uncertain variable contains two numbers, usually the average and the standard deviation.  However, the vast majority of forecast reports, used for various decisions, include only ONE number.

What is the value of one central measure for a forecast when nothing describes the spread around that measure?  If next month’s sales of Product134 are forecast to be 10,000, what is the likelihood that the actual sales would be 4,776, 8,244, 13,004 or even 18,559?
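A tiny sketch shows why the second number matters, assuming (purely for illustration, since real demand is rarely Normal) a Normal spread around the 10,000 average:

```python
from statistics import NormalDist

forecast_mean = 10_000
# The same one-number forecast with two different spreads:
tight = NormalDist(forecast_mean, 500)
wide = NormalDist(forecast_mean, 3_000)

# Probability that actual sales land within 10% of the forecast:
p_tight = tight.cdf(11_000) - tight.cdf(9_000)
p_wide = wide.cdf(11_000) - wide.cdf(9_000)

# With a tight spread the single number is informative; with a wide
# spread the very same number tells us almost nothing.
assert p_tight > 0.95
assert p_wide < 0.30
```

Both forecasts report “10,000”, yet they call for entirely different stocking and capacity decisions; the one-number report hides exactly this difference.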

Suppose that the magic number of 10,000 comes from the assessment of salespeople; is it clear that it represents an estimated average (an expected value, in mathematical language)?  Isn’t it possible that salespeople, who have no magic power to see the future, state a number they are comfortable with?  If they are measured by meeting sales objectives that are set according to the forecast, then they will reduce their estimation. But if they need Operations to provide availability, they will inflate the forecast.

I think that there is no way to manage an organization without forecasting!

I also think that Dynamic Buffer Management is actually a forecast: it looks at the combination of sales and inventory and predicts whether the stock buffer is about right.

However, treating a forecast as one number is a gross mistake.  The reliance on one number allows top management to judge their sales and operations people, but that judgment is flawed, and the sales and operations managers have to protect themselves from the ignorance of top management.

The overall impact of mishandling the common and expected uncertainty is HUGE.  Management does not recognize the need for protective capacity and thus looks for high “efficiency”, causing people to pretend to be very busy, which means they constantly look for “something to do”, regardless of whether it creates value or not.

However, protective capacity is truly required in order to maintain enough flexibility to deal with Murphy, as well as with temporary peaks of demand. TOC buffers help a lot to stabilize the flow and thereby improve overall performance, but they do not cover all the areas where people use their own hidden buffers, causing huge damage:  the hiring process is basically flawed, with ridiculous requirements of a 100% technical fit instead of requiring learning capabilities; budgeting processes are flawed, carrying no appropriate reserves; even the need for maintaining a presence in several different market segments is not fully recognized in many organizations.

Is it possible to learn how to deal with uncertainty, particularly the common and expected uncertainty?  The vast majority of managers have taken a basic course in Statistics, but it does not prepare them to handle uncertainty that has no clear probabilities, is definitely different from the Gaussian (Normal) distribution, and for which the samples of similar occurrences in the near past are very small.

The real obstacle to improving the policies, making them a better match to the inherent uncertainty, is getting rid of the utopia of “optimal decisions”, replacing it with “good enough”, and ceasing to measure people by numbers that are exposed to both uncertainty and dependencies.

Is that doable? For me that is what TOC is all about.

## How come managers take different decisions for their organization than for themselves?

This post continues the previous post, “A decision is required – a management story with a lesson.”

Prof. Herbert Simon, the Nobel Prize winner, claimed that people are NOT OPTIMIZERS – they do not search for the ultimate optimal choice. Simon called people like you and me “satisficers”: when making a decision we set certain criteria and choose the first option that satisfies all of them. This is quite similar to what we call in TOC a “good-enough solution.”
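Simon’s distinction can be sketched in a few lines of code. The supplier options and criteria below are invented purely for illustration: a satisficer accepts the first option that meets every criterion, while an optimizer must scan them all to find the best one.

```python
# Hypothetical supplier options: (name, price, delivery_days)
options = [
    ("A", 120, 5),
    ("B", 95, 6),   # first option that meets both criteria
    ("C", 80, 4),   # cheaper still, but a satisficer never gets here
]

criteria = [
    lambda o: o[1] <= 100,  # price within budget
    lambda o: o[2] <= 7,    # acceptable delivery time
]

def satisfice(options, criteria):
    """Return the first option that satisfies all criteria (Simon's satisficer)."""
    for opt in options:
        if all(c(opt) for c in criteria):
            return opt
    return None

def optimize(options, criteria):
    """Scan every acceptable option for the lowest price (the optimizer)."""
    acceptable = [o for o in options if all(c(o) for c in criteria)]
    return min(acceptable, key=lambda o: o[1]) if acceptable else None

print(satisfice(options, criteria))  # ('B', 95, 6)  - good enough, found fast
print(optimize(options, criteria))   # ('C', 80, 4)  - "optimal", needed a full scan
```

The satisficer stops as soon as a good-enough choice appears; the optimizer pays the full search cost, which in real organizations is paid in complexity and delay.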

My point is that while people are satisficers, once they make decisions on behalf of their organization they are forced to demonstrate that they are actively looking for the optimal decision. However, there is too much complexity, with too much uncertainty on top of it, to truly reach optimal decisions. This makes the search for optimal decisions, based on “books” written by academics other than Prof. Simon, look pathetic. Too many of those decisions are wrong, leading to inferior results.

A common cause for this different behaviour is:

Managers are afraid of after-the-fact criticism, which they consider unfair because it ignores the conditions under which the decision was made.

The key frightening aspect is the possible impact of uncertainty. After the fact, a decision can easily be labeled either “right” or “wrong”. Admitting to a mistake causes two different undesired effects:

1. Being punished because of the “mistake”, like being fired or just not being promoted.
2. Losing the feeling of creating value and being appreciated. This matters a great deal to executives and highly professional people.

The fear of unjust criticism pushes managers toward two means of protection:

1. Being super conservative.
2. Following the “book” when there is a book.

As Mr. Preston Sumner noted in his very interesting comment on the story, some CEOs are influenced to do the exact opposite: take larger risks than they would allow for themselves. This tendency stems from the way some large organizations compensate their C-level executives. When a CEO is pushed toward great results by hefty bonuses, while failure carries no symmetric penalty, the resulting greed pushes them to take high risks. Is this really what the stockholders want?

There is a critical mistake in trying to “motivate” a person, whether a CEO or just a regular salesperson, by linking payments to actual financial results. Money is always a necessary condition – but it is far from sufficient to ensure a genuine intention to look after the interests of the organization.

Dr. Goldratt said that organizations force certainty on uncertain situations. Ignoring uncertainty makes people believe that they can judge any decision by its actual result. If we recognize the need to live with significant uncertainty, we need to learn how to judge decisions in a way that reasonably assesses what might happen – the potential damage as well as the potential gain.
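One way to make “judge the decision, not the outcome” operational is to assess the range of outcomes before any result is known. The sketch below is only an illustration of that idea; the rule and all the $M figures are invented for the example, not taken from the text: a decision is treated as reasonable when the worst case is survivable and the potential gain outweighs the potential damage.

```python
def judge_decision(worst_case, best_case, max_tolerable_loss):
    """Assess a decision by its potential damage and potential gain,
    not by the single outcome that eventually materializes.
    All amounts are in the same units (e.g. $M); losses are negative."""
    if -worst_case > max_tolerable_loss:
        return "reject: the worst case threatens survival"
    if best_case <= -worst_case:
        return "reject: the upside does not justify the downside"
    return "reasonable risk: worst case is survivable, upside is larger"

# Invented figures: lose at most 15, gain up to 60,
# while the organization can absorb a loss of up to 25.
print(judge_decision(worst_case=-15, best_case=60, max_tolerable_loss=25))
```

Whether the deal later succeeds or fails, the judgment of the decision itself stays the same – which is precisely the protection managers lack when only actual results are measured.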

This is just the beginning. I claim that the failure to deal openly and visibly with uncertainty is the core problem of most organizations! I’ll certainly come back to this topic, highlighting more undesired effects that result from this core problem.

## A decision is required – a management story with a lesson

Imagine that, by a strange incident, the following notes have been found. They express the hidden thoughts of a CEO – thoughts he’d never dare to reveal in public.

I bet everyone believes I’m deep in thought about the Fedora Russia Contract. Dale is still talking about the potential threat that Fedora would use its relationship with us to copy our technology and design and sell imitated products throughout Asia. Dale speaks with a lot of self-conviction, which should cause Martha and Gideon to have second thoughts about the contract they are passionately promoting. Dale brings a real example where something similar happened: a collaboration between a British and a Chinese company that ended badly for the British and splendidly for the Chinese – a typical win-lose.

I think about my little daughter. Well, she is not exactly “little”; we celebrated her 22nd birthday last week, together with her graduation with distinction from Stanford. She is my youngest child, and she is determined to have laser eye surgery to get rid of her glasses after failing to get used to contact lenses.

Martha is trying to argue that the probability of Fedora doing something outrageous against us is low. The main current markets of Fedora are all in Western Europe. She further claims that the Russian oligarch Ilia Mushkin, the owner of Fedora, recognizes the potential in collaborating with us exactly because he prefers doing business in Europe rather than in Asia. Dale says that this policy can easily evaporate in one day. Technically he is right. It is possible, but highly unlikely. Fedora certainly realizes that as long as they continue to export to Europe, we have enough legal options that could hurt them. What Dale does stress is that if the worst scenario materializes, the damage would be huge. We have invested \$15M in opening our export channel to India, Malaysia and Indonesia. However, even if all the investment were lost, which in itself is unlikely, the corporation would still be able to sustain the loss. And the business potential of the collaboration is much higher – for us as well as for Fedora.

The situation with my daughter is different. The damage of a failed eye operation is very high and would reduce her overall quality of life. I try not to think about it – but refraining from thinking about potential consequences does not solve any problem. On the other hand, a successful operation would improve her quality of life, and the probability of success is very high. Actually, she tells me the eye surgeon has, so far, a one-hundred-percent success rate. That does not reduce my fear of the “black swan”. I don’t know how to weigh such a critical decision when I have no idea what the probability of such a failure is. Actually, I have no real say in the matter. My daughter is an adult and she does not ask my advice – she simply tells me what she is going to do.

I notice that Dale and Martha are looking at me, expecting me to make the decision. Gideon seems to be in deep thoughts. He probably thinks about the personal impact on him if the Fedora Contract is cancelled. The last project he was deeply involved with had been cancelled as well.

Is the decision regarding Fedora truly straightforward? It has big potential, along with a certain, not-too-large risk of a real flop – and even that would not be a real disaster, unlike what might happen to my daughter. The only other consideration is that if the Russian Project, as the directors on the board call it, fails, all the fingers will point at me. I think some more caution is required here.

“Let’s see what Claire could add to the contract to protect us from hostile moves by Fedora,” I say. Claire is our legal adviser. I can see the light in Martha’s eyes fade out. Gideon continues to look down. I’m not sure what Dale’s real position is. Everybody is aware that I have just killed the Russian Project.

I would love to hear your opinion and have a discussion about the following 3 questions:

1.

2.

3.

Is there a generic inherent problem in making decisions on behalf of an organization that is expressed in the story? If so, what can we do about it?