A new book opens a new direction for making superior managerial decisions


I have struggled with the insights for this book for almost twenty years. While all the ideas are based on the current body of knowledge of the Theory of Constraints (TOC), they extend its applicability and usefulness. TOC has already challenged the use of cost-per-unit and local performance measurements, which have a huge negative impact on managerial decisions today. Expanding the already well-advanced TOC body of knowledge (BOK) requires special care and self-checking of every part in the chain of logic. When Henry Camp and Rocco Surace joined me in the writing, outlining the necessary direction of the solution and adding their own perspectives, it was a huge and absolutely necessary help.

There is a basic difference between a book and a blog, as they serve different needs. Short blog articles focus on one insight, and if the reader sees value in that insight, more effort is still required to develop the generic insight into a practical process. A book should encompass the development of several new insights and integrate them into a clear, focused message that is valid both theoretically and practically.

What is the particular need for Throughput Economics?

We claim that good management decisions must be analyzed and supported in a very different way than is customary today. Managers have huge responsibilities on their shoulders and they deserve a better method to consider the relevant and available information to generate the best possible picture of what might happen when any significant decision is undertaken.

The simple fact is that the current (and long established) methods of cost accounting distort the decision-making process by presenting a flawed picture of expected profits or losses resulting from the decision.  Managers might intuitively sense the impact of the considered decision on the bottom-line and they are also aware that the quality of their intuition is questionable.  Fear of unjust criticism, once the results become clear, is another factor that impels managers to utilize well-accepted tools, even if they feel those very tools are flawed.  In order to change the way managers are making decisions there must be a comprehensive alternative procedure that is demonstrably superior.

While the insights of TOC contributed much to clarify the flaws of cost accounting tools for decision making, pinpointing the underlying flawed paradigm behind the concept of cost-per-unit could be quite beneficial. The core mistaken assumption is treating the cost of maintaining capacity as if it were linear. This is just wrong, because the capacity of most resources can only be purchased in certain amounts – in chunks, if you will. For instance, if you are looking for space for your office you might have a few alternatives, each with its own specific square footage. Eventually, you choose the most convenient one, which has more space than you actually need. It is up to you to treat the extra space as a “waste” or as an opportunity: when new market opportunities arrive, you already have the space you will need. This benefit is offset by paying more for a bigger space in the meantime. The point is that it is unrealistic to expect to be able to purchase exactly the capacity required for the changing level of activity in your business.

The fact that most resources have excess capacity means that consuming the surplus capacity generates no additional cost.  However, once the practical limit of the available capacity is reached, then any optional capacity increase is typically quite expensive and the quantities in which more capacity can be quickly purchased are normally subject to certain minimums.  This characteristic of buying capacity is what makes it non-linear.  The rub is: to know whether an additional consumption of capacity required for a new opportunity is ‘free’ or expensive, you must consider all your capacity requirements – the proposed new needs on top of all current activities.  In other words, a global calculation has to be made to estimate the actual impact of implementing a new decision on total operating expenses.
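To make the non-linearity concrete, here is a minimal sketch in Python (the chunk size, costs and loads are invented for illustration, not taken from the book): capacity can only be bought in whole chunks, so adding load is either free or triggers the cost of a full extra chunk.

```python
# Hypothetical illustration: capacity is bought in chunks, so the cost of
# serving one more unit of load is either zero (spare capacity exists) or
# the price of a whole extra chunk.

CHUNK_SIZE = 100      # units of capacity per chunk (e.g., machine-hours)
CHUNK_COST = 5000     # cost of maintaining one chunk per period

def chunks_needed(load):
    """Smallest number of chunks that covers the load."""
    return -(-load // CHUNK_SIZE)  # ceiling division

def capacity_cost(load):
    return chunks_needed(load) * CHUNK_COST

current_load = 270                      # current activity consumes 270 units
for extra in (10, 25, 40):              # candidate additions to the load
    delta_oe = capacity_cost(current_load + extra) - capacity_cost(current_load)
    print(f"adding {extra:>2} units -> extra capacity cost: {delta_oe}")
# Adding 10 or 25 units is 'free' (spare capacity in the third chunk);
# adding 40 units forces buying a fourth chunk and costs 5000 more.
```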

The most important decisions undertaken by any organization concern sales or capacity. Sales are the key factor for income, and maintaining capacity is the crux of the expenses that enable the organization to provide what it sells. It was an ingenious idea of Goldratt to look for two distinct information categories that impact any new decision concerning sales or capacity: Throughput (T) on one hand and Operating Expenses (OE) (and Investment (I)) on the other. T focuses on the added value generated by sales. OE describes the cost of maintaining the required capacity. While changes in T usually behave in a linear fashion, the true impact of a new move on OE, the expenses for maintaining capacity, must take the non-linear behavior of OE into account. This non-linear behavior often surprises managerial intuition about whether the proposed move is positive or negative.

What clearly comes out from cost’s non-linear nature is the requirement to analyze any new potential deal, not just by its own specific details and definitely not by any ‘per-unit’ artificial measurement, but by simulating the new deal as an addition to the load of the current activity of the whole organization and then checking the impact on ∆T, ∆I and ∆OE.

Does this idea seem frightening because so many numbers and variables are involved? This is where the right kind of decision-support software can help us with the calculations. The principles are simple and straightforward, but making a huge number of calculations should be delegated to a computer, as long as we human beings dictate the logic.
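As a rough sketch of what such a decision-support calculation might look like (all the resources, loads and prices below are invented for illustration and are not from the book): the proposed deal's load is added to the current load of every resource, ∆T comes from the deal's sales minus its truly variable costs, and ∆OE comes only from the resources that would be pushed beyond their available capacity.

```python
# Hypothetical sketch of checking a new deal against the whole current load.
# Throughput (T) = revenue minus truly variable costs; OE changes only when
# a resource must add capacity (here: overtime at a given cost per hour).

resources = {                       # available hours and overtime cost/hour
    "assembly": {"available": 800, "overtime_cost": 60},
    "packaging": {"available": 400, "overtime_cost": 35},
}
current_load = {"assembly": 720, "packaging": 310}   # hours already committed

new_deal = {
    "revenue": 50000,
    "variable_cost": 28000,                          # materials etc.
    "load": {"assembly": 120, "packaging": 60},      # extra hours required
}

delta_T = new_deal["revenue"] - new_deal["variable_cost"]
delta_OE = 0
for name, res in resources.items():
    total = current_load[name] + new_deal["load"].get(name, 0)
    overload = max(0, total - res["available"])
    delta_OE += overload * res["overtime_cost"]      # cost of the extra capacity

print(f"delta T  = {delta_T}")
print(f"delta OE = {delta_OE}")
print(f"net impact on profit = {delta_T - delta_OE}")
```

The same deal could look profitable or disastrous depending only on how loaded the critical resources already are, which is exactly why the check has to be global rather than per-unit.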

The power of our book is going into the details of such a broad idea without losing its inherent simplicity. We present the holistic direction, while also covering enough detail to answer any doubt that might emerge.

In order to preserve the sensitive balance between the generic method and the tiny details, making sure nothing is lost in the process, we came up with several fictional cases where a management team needs to analyze specific, non-trivial new opportunities that could be great but might also be disastrous. Unless the analysis is done comprehensively, the outcomes are practically impossible to predict. I have already used these types of fictional cases to demonstrate the cause-and-effect behind generic principles in a previous book (Management Dilemmas). In this book, the detailed fictional cases are of special importance. Subsequent chapters refer to these cases and their intrinsic ideas in a more general way, explaining the processes from your perspective as an outside observer. Our objective was to lead you to see the insights from both perspectives: the practical case where managers have to deal with a specific non-trivial decision, and the higher-level world of defining the global process for dealing with a variety of such decisions.

Amazon, of course, sells and ships the book:  Throughput Economics by Eli Schragenheim, Henry Camp, and Rocco Surace

ISBN  978-0-367-03061-2 / Cat# 978-0-367-03061-2

If you need any help in purchasing the book write to me at:  elischragenheim@gmail.com


Threat Control – not just cyber!

The TOC core insights are focused on improving the current business. TOC has contributed a lot to the first three parts of SWOT: strengths, weaknesses and opportunities. What is left is to contribute to the early identification of threats and then to developing the best way to deal with them. Handling threats is not so much about taking new initiatives to achieve more and more success. It is about preventing the damage caused by unanticipated changes or events. Threats could come from inside the organization, like a major flaw in one of the company’s products, or from outside, like the emergence of a disruptive technology. TOC definitely has the tools to develop the processes for identifying emerging threats and coming up with the right way to deal with them.

The last time I wrote about identifying threats was in 2015, but time brings new thoughts and new ways to express both the problem and the direction of the solution. The importance of the topic hardly needs any explanation; however, it still does not get enough attention from management. Risk management covers only part of the potential threats, usually just for very big proposed moves. My conclusion is that managers ignore problematic issues when they don’t see a clear solution.

There are only a few environments where considerable effort is devoted to this topic. Countries, and their armies and police forces, have created special dedicated sub-organizations called ‘intelligence’ to identify well-defined security threats. While most organizations use various control mechanisms to face a few specific anticipated threats, like alarm systems, basic data protection and accounting methods to spot unexplained money transfers, many other threats are not properly controlled.

The nature of every control mechanism is to identify a threat and either warn against it or even take automatic steps to neutralize the risk. My definition of ‘control mechanism’ is: “A reactive mechanism to handle uncertainty by monitoring information that points to a threatening situation and taking corrective actions accordingly.”

While the topic does not appear in the TOC BOK, some TOC basic insights are relevant for developing the solution for a structured process that deals with identifying the emergence of threats. Another process is required for planning the actions to neutralize the threat, maybe even turn it into an opportunity.

Any such process is much better prepared when the threat is recognized a priori as probable. For instance, quality assurance of new products should include special checks to prevent launching a new product with a defect, which would force recalling all the sold units. When a product is found to be dangerous the threat is too big to tolerate. In less damaging cases the financial loss, as well as the damage to future reputation, is still high. Yet, such a threat is still possible for almost any company. Early identification, before the big damage is caused, is of major importance.

The key difficulty in identifying threats is that each threat is usually independent of other threats, so the variety of potential threats is wide. It could be that the same policies and behaviors that have caused an internal threat would also cause more threats, but the timing of those potential threats could be far apart. For example, distrust between top management and the employees might cause major quality issues leading to lawsuits. It could also cause leaks of confidential information, or a high number of people leaving the organization, robbing it of its core capabilities. However, which threat would emerge first is subject to very high variability.

It is important to distinguish between the need to identify emerging threats and deal with them and the need to prevent the emergence of threats. Once a threat is identified and dealt with, it is highly beneficial to analyze the root cause and find a way to prevent that kind of threat from appearing in the future.

External threats are less dependent on the organization’s own actions, even though it could well be that management ignored early signals that the threat was developing.

Challenge no 1:  Early identification of emerging threats

Step 1:

Create a list of categories of anticipated threats.

The idea is that every category is characterized by similar signals, deduced by cause-and-effect logic, which can be monitored by a dedicated control mechanism. Buffer Management is such a control mechanism for identifying threats to perfect delivery to the market.

Another example is identifying ‘hard-to-explain’ money transactions, which might signal illegal or unauthorized financial actions taken by certain employees. Accounting techniques are used to quickly point to such suspicious transactions. An important category of threats is built from temporary failures and losses that together could drive the organization to bankruptcy. Thus, a financial buffer should be maintained, so that penetration into the Red-Zone triggers special care and an intense search for bringing cash in.
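A minimal sketch of such a financial buffer as a control mechanism (the buffer size, zone thresholds and cash figures are all invented for illustration): the buffer is divided into green, yellow and red zones, and penetration into the red zone raises the flag.

```python
# Hypothetical financial buffer: the organization decides how much cash
# (or available credit) it must keep as protection; penetrating the red
# zone of that buffer triggers a focused effort to bring cash in.

BUFFER = 300_000          # total cash buffer the organization maintains

def buffer_zone(cash_on_hand):
    """Classify the remaining buffer into green / yellow / red."""
    remaining = cash_on_hand / BUFFER
    if remaining > 2 / 3:
        return "green"
    if remaining > 1 / 3:
        return "yellow"
    return "red"

for cash in (250_000, 150_000, 80_000):
    zone = buffer_zone(cash)
    alert = " -> trigger intense search for cash" if zone == "red" else ""
    print(f"cash {cash:>7}: {zone}{alert}")
```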

Other categories should be created, including their lists of signals. These include quality, employee morale, and loss of reputation in the market, for instance due to too slow a pace of innovative products and services.

Much less is done today on categories of external threats. The one category of threats that is usually monitored is the state of the direct competitors. There are at least two other important categories that need constant monitoring: regulatory and economic moves that might impact specific markets, and the emergence of fast-rising competition. The latter includes the rise of a disruptive technology, the entry of a giant new competitor and a surprising change in the market’s taste.

Step 2:

For each category a list of signals to be carefully monitored is built.

Each signal should predict, with good enough confidence, the emergence of a threat. A signal is any effect that can be spotted in reality and that, by applying cause-and-effect analysis, can be logically connected to the actual emergence of the threat. Such a signal could be another effect caused by the threat, or a cause of the threat. For instance, a red order is caused by a local delay, or a combination of several delays, which might cause the delay of the whole order.

When it comes to external threats, my assumption is that signals can be found mainly in news channels, social networks and other Internet publications. This makes it hard to identify the right signals out of the ocean of published reports. So, focusing techniques are required to search for signals that anticipate that something is going to change.

Step 3:

A continual search for the signals requires a formal process for a periodic check of the signals.

This process has to be defined and implemented, including nominating the responsible people. Buffer Management is best used when the computerized system displays the open orders, sorted according to their buffer status, to all the relevant people. An alarm system, used to warn of a fire or burglary, has to have a very clear and strong sound, making sure everybody is aware of what might happen.
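A sketch of such a periodic check, in the spirit of Buffer Management (the orders, dates and zone thresholds are invented): each open order's buffer consumption is computed and the list is displayed sorted, reds first, so the responsible people see the emerging threats at a glance.

```python
from datetime import date

# Hypothetical open orders: when each was released and when it is due.
orders = [
    {"id": "A-17", "released": date(2019, 3, 1),  "due": date(2019, 3, 21)},
    {"id": "B-02", "released": date(2019, 3, 10), "due": date(2019, 3, 24)},
    {"id": "C-33", "released": date(2019, 3, 5),  "due": date(2019, 4, 4)},
]

def buffer_consumed(order, today):
    """Fraction of the time buffer already consumed (exceeds 1.0 when late)."""
    total = (order["due"] - order["released"]).days
    elapsed = (today - order["released"]).days
    return elapsed / total

def status(consumed):
    if consumed >= 2 / 3:
        return "RED"
    if consumed >= 1 / 3:
        return "yellow"
    return "green"

today = date(2019, 3, 18)
for order in sorted(orders, key=lambda o: buffer_consumed(o, today), reverse=True):
    c = buffer_consumed(order, today)
    print(f"{order['id']}: {c:.0%} of buffer consumed -> {status(c)}")
```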

Challenge no 2:  Handling the emerging threats effectively

The idea behind any control mechanism is that once the flag is raised, based on the signals received, there is already a certain set of guidelines for what actions are required first. When there is an internal threat the urgency to react ASAP is obvious. Suppose there are signals that raise suspicions, but no full proof, that a certain employee has betrayed the trust of the organization. A quick procedure has to be already in place, with a well-defined line of action to formally investigate the suspicion, not forgetting the presumption of innocence. When the signals point to a threat of a major defect in a new product, then sales of that product have to be discontinued for a while until the suspicion is proven wrong. When the suspicion is confirmed, a focused analysis has to be carried out to decide what else to do.

External threats are tough to identify and even tougher to handle. The search for signals that anticipate the emergence of threats is non-trivial. The evaluation of the emerging threat, and of the alternative ways to deal with it, would greatly benefit from logical cause-and-effect analysis. This is where a more flexible process has to be established.

In previous posts I have already mentioned a possible use of an insight developed by all the Intelligence organizations:  the clever distinction between two different processes:

  1. Collecting relevant data, usually according to clear focusing guidelines.
  2. Research and analysis of the received data.

Of course, the output of the research and analysis process is given to the decision makers to decide upon the actions.  Such a generic structure seems useful for threat control.

Challenge no 3:  Facing unanticipated emergence of threats

How can threats we don’t anticipate be controlled?

We probably cannot prevent the first appearance of such a threat. But the actual damage of the first appearance might be limited. In such a case the key point is to identify the new undesired event as having the potential to become something much more damaging. In other words, to anticipate, based on the first appearance, the full scope of the threat.

The title of this article uses the example of a serious external threat called cyber! Until recently this threat was outside the paradigm of both individuals and organizations. As the surprise of being hit by hackers, causing serious damage, started to become known, the need for strong cyber control was established. As implied, Threat Control is much wider and bigger than cyber.

An insight that could lead to building the capability of identifying emerging new threats, while they are still relatively small, is to understand the impact of a ‘surprise’. Being surprised should be treated as a warning signal that we have been exposed to an invalid paradigm that ignores certain possibilities in our reality. The practical way to recognize such a paradigm is by treating surprises as warning signals. This learning exposes both the potential causes of the surprise and other unanticipated results. I suggest readers refer back to my post entitled ‘Learning from Surprises’, https://wordpress.com/post/elischragenheim.com/1834

My conclusion is that Threat Control is an absolutely required formal mechanism for any organization. It would be wise to stand on the shoulders of Dr. Goldratt, understand the thinking tools he provided us, and use them to build a practical process to make our organizations safer and more successful.

Cause-and-effect as the ABC of practical logic

Outlining clearly the causality behind undesired effects, and wondering what effects, desired or not, would be caused by the actions we take, have been an integral part of TOC from its start in the early 80s. In the early 90s several structured procedures were developed by Dr. Eli Goldratt, in the format of cause-and-effect trees, called the Thinking Processes. I think it is time to examine the merits, but also the limitations, of using logical claims in the form of ‘Effect A causes Effect B’ for managing organizations.

My bachelor’s degree was in Mathematics, which is the ultimate use of strict logic. In our daily practice we use logic both to reveal the causes behind effects we experience and to speculate about what is going to happen if we take a certain action. However, that use of logic is not easy; it is mixed with a lot of emotions that confuse the strict logic. Even when we do our best to stay within the logical directives we are faced with several obstacles. One of them is being able to distinguish between assumptions about cause and effect and actual causality. We certainly have great difficulty with hidden assumptions, meaning not being fully aware that the causality is only assumed and not necessarily valid.

Reality is fuzzy and includes a huge number of variables that have some impact. In order to live in such a reality we have to simplify the picture we have in our mind. We do it by ignoring many variables, assuming their impact is too small to truly matter. The choice of what we ignore is part of the basic assumptions behind our cause-and-effect logic.

To experience the value and the boundaries of applying cause and effect, let’s examine the following practical logical argument.

It seems straight-forward logic to claim:

If ‘We improve the availability of items on the shelf from 80% to 98%’ then ‘Sales will go up’.

Is this assertion always true?  Are there some missing conditions (insufficiencies) for the causality to be true?  Even if it is true can we deduce how much more sales will be generated?

The initial logical explanation is that the missing 20% of items have demand that is not satisfied, thus sales are lost. If those 20% were available, they would be sold according to their natural demand.

The claim is shown in a simple chart:

[Chart: the initial state of the logical claim]

The right-hand side represents the original claim, together with some more explanation of the current lost sales that would no longer be lost. The oval shape indicates that the two causes act together.

Two different reservations to the above logic are:

‘Some customers might buy the same item somewhere else’ and ‘Customers might buy another item instead of the missing item.’ Both reservations aim at the causal arrow connecting unavailability of items to losing sales, and from that effect, together with the improvement, to the resulting effect of ‘Sales go up’.

The two reservations highlight a clarity issue. The improvement cause is stated as “We improve…”, but who are ‘we’? It could be the management of the chain of stores, the local management of a particular store, or a supplier of a family of items. Each of them gives a different meaning to the current state and then has its own reservation about the claimed effect of “Sales go up”. The supplier of certain products means ‘his products’ are available only 80% of the time, and customers who buy replacement products cause the supplier to lose sales. If the availability of the supplier’s products went up, then those specific products would sell more.

This is a non-trivial ‘clarity’ issue. We first have to deal with the clarity reservation by making a choice. I have chosen the perspective of the store, and now I have to relate to the causality reservation: does unavailability of an item always cause a loss of sales to the store?

When customers don’t find a specific item they might buy a similar item.  In this case the store does not lose the sale.  In other cases the clients might simply give up.  In some rare cases the client might walk out, which could mean other sales are lost as well.  So, we conclude that some sales are lost because of unavailability, but the direct loss of sales is less than the calculated average sales of that item in the period of time it is short.

So, the above logical claim seems valid, but its real impact could be low. We would like to go deeper into the question: when is the loss of sales due to unavailability significant?

Is the loss of sales equal for all items?

There are two parameters that have a significant impact on the loss of sales for the store when an item is missing. The first parameter is the average level of daily sales and the second is the level of loyalty of the clients to the brand/item.

Fast runners, when they are short, create considerable damage, not just the direct loss of sales but also damage to the reputation of the store – meaning customers might look for a different store in the future. The logical statement is: if ‘a fast runner is missing’ then ‘many customers are pissed off’, causing ‘some regular customers look for another store to make their purchases’, causing ‘total sales go significantly down’. I’ve added ‘significantly’ to mark the total impact.

But, as ‘management are aware of the potential damage to the store from missing fast-runners’ then we expect the following effect to apply: ‘management is focused on maintaining the perfect availability of fast-runners’.

So, we can deduce that if ‘the current management is reasonably capable’ then ‘the missing items do not include fast runners’. Of course, 20% of the items being short might still mean a non-negligible amount of sales of medium and slow movers is lost. The open questions are how much, and even more: how does the current level of shortages impact the reputation of the store and, through it, future sales?

So, we need to look deeper into the impact of the second parameter – loyalty to a specific brand/item. The effect ‘some items are special for some clients’ causes the effect ‘some customers develop loyalty to that item’. This effect causes ‘the probability that some customers refuse to buy a replacement is high’. Thus, if ‘items with strong loyalty are frequently missing’ then ‘some customers try other stores’. The effect of ‘items with strong loyalty are frequently missing’ also causes ‘our reputation for what we carry on the shelves goes down’, with a clear impact on future sales.

The difficulty with ‘loyalty of customers to the brand/item’ is that it is hard to validate its power. The true test of the strength of loyalty comes when the item is short: do the sales of alternative items go up or not?
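A back-of-the-envelope sketch of how the two parameters combine (all the items and numbers below are invented): the direct loss from a shortage is roughly the item's average daily sales times the days of shortage, discounted by the share of customers who accept a substitute, so only the loyal share actually costs the store the sale.

```python
# Hypothetical estimate of the direct sales lost when an item is missing.
# Customers who accept a substitute do not cost the store the sale;
# loyal customers who refuse a substitute do.

def lost_sales(avg_daily_sales, days_short, loyalty):
    """loyalty = probability a customer refuses to buy a replacement."""
    return avg_daily_sales * days_short * loyalty

items = [
    # name, avg daily units, days short per month, loyalty
    ("fast runner, low loyalty",    40, 0.5, 0.2),
    ("medium runner, high loyalty",  6, 4.0, 0.8),
    ("slow runner, low loyalty",     1, 6.0, 0.2),
]

for name, daily, days, loyalty in items:
    print(f"{name:28s}: ~{lost_sales(daily, days, loyalty):5.1f} units lost per month")
# Note: this ignores the longer-term damage to reputation when loyal
# customers repeatedly fail to find their item and switch stores.
```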

One additional reservation about the basic claim that improving the availability of items on the shelf would increase sales: it assumes that ‘most customers entering the store know exactly what they want to buy’. If this effect is not valid, then what is important for sales is that the shelf is full of items that have good enough demand. If some items planned to be on the shelf are missing, but other items with an equal chance of being sold fill the space well, there is no clear impact on sales. The kind of items that people come to browse and then choose (‘when I see it I’ll know’) have to be managed in a very different way than maintaining availability of specific items. For such items it makes sense to replenish the shelf with new items, unless a specific item seems such a hit that keeping it available is beneficial, given its high desirability to clients.

The effect ‘The store has many regular customers’ has an impact on the meaning of ‘availability’ for the incidental customer. A shop in a big airport serves mostly incidental clients, so unavailability of items doesn’t impact future sales. When there are no regular customers, there is no difference between items that the store does not hold and items that are short. This is a relatively small issue.

There are many more conditions that we consider true without further thought: ‘we live in a free economy’, ‘there are many competing choices for most items’ and ‘there are enough middle-class customers who can afford to buy a variety of products’. If we try to include all ‘sufficiency’ conditions we’ll never end up with anything useful. On the other hand, leaving them out also opens the way to major mistakes due to hidden assumptions about what not to include in the analysis. One needs intuition to know when to stop the logical analysis, recognizing also the validity of ‘never say I know’ (an insight by Dr. Goldratt). Another aspect is the impact of uncertainty: there are no 100% cause-and-effect relationships. But causal relationships that are 90%, or more, valid are still highly valuable.

Eventually we get the following structure as a summary of the above arguments. Not all the previous effects have been included, which means some of the logical arrows require more detail, but essentially this is the claim.

[Diagram: the resulting cause-and-effect tree, leading to ‘Sales go up’]

We still cannot determine how much the sales would go up, because it depends on the characteristics of the medium and slow runners: how many of them have strong loyalty. If we also add to the initial effects ‘The chain makes marketing efforts to radiate the message that the chain maintains very high availability at every store’, then the chain can expect a faster and stronger increase in its reputation and in its sales.

Was it worth going through the logical analysis?

While we still have only a partial picture, it is probably better than a picture based just on intuition without any analysis.

Antifragile – strengths and boundaries from the TOC perspective

Antifragile is a term invented by Nassim Taleb as a major insight for dealing with uncertainty. It directs us to identify when and how uncertainties we have to live with can be handled in our favor, making us stronger, instead of reducing our quality of life. Taleb emphasizes the benefit we can get when the upside is very high while the downside is relatively small and easily tolerable. Actually there is a somewhat different way to turn uncertainty into a key positive factor: significantly increasing the chance for a big gain while reducing the chance for losing.  A generic message is that variability could be made positive and beneficial when you understand it well.

While it is obvious that the concept of antifragile is powerful, I have two reservations about it. One is that it is impossible to become fully antifragile. We, human beings, are very fragile due to many uncertain causes that impact our life. There is no way we can treat all of them in a way that gains from the variability. For instance, there is always a risk of being killed by an act of terror, a road accident or an earthquake. Organizations, small and big, are also fragile, without any viable way to become antifragile to all potential threats. So, while becoming antifragile to specific causes adds great value, it cannot be done for all sources of uncertainty.

The other reservation is that finding where we gain so much more from the upside, while we lose relatively little from the downside, still requires special care because the accumulation of too many small downsides might still kill us. Taleb brings several examples where small pains do not accumulate to a big pain, but this is not always the case, certainly not when we speak about money lost in one period of time.  So, there is a constant need to measure our current state before we take a gamble where the upside is much bigger than the downside.

The focus of this article is on the impact of the concept of antifragile, and its related insights, on managing organizations. The post doesn’t deal with the personal impact or macroeconomics.  The objective is to learn how the generic insights lead to practical insights for organizations.

There are some interesting parallels between TOC and the general idea behind antifragile. Goldratt strove to focus on directions where the outcomes are way beyond the inherent noise in the environment. TOC uses several tools that aim not just to be robust, but to use the uncertainty to achieve more and more of the goal.

A commercial organization is fragile, first of all, in its ability to finance the coming short-term activities. This defines a line that the accumulated short-term losses may reach before bankruptcy becomes imminent. Losses and profits are accumulated by time periods and their fluctuations are relatively mild. Sudden huge increases in profits are very rare in organizational activities. They can happen when critical tests of new drugs or of revolutionary technologies take place; then the success or failure has a very high and immediate impact. As developing a new product usually involves long efforts over time, it means a very substantial investment, so the downside is not small. The gain could be much higher, even by a factor of 10 or 100, but such a big success must have been intended early in the process, with only a very low probability of success. So, from the perspective of the organization, the number of failures of such ambitious developments has to be limited, unless it is a startup that takes the possibility of not surviving into account.

Where I disagree with Mr. Taleb is the assertion of unpredictability. The way Mr. Taleb states it is grossly misleading. It is true that we can never predict a sporadic event, and we can never be sure of success. But in many cases a careful analysis, and certain actions, raise the odds of success and reduce the odds of failure.

One of the favorite sayings of Dr. Goldratt, actually one of the ‘pillars of TOC’, is: “Never Say I Know”, which is somewhat similar to the unpredictability statement. But Goldratt never meant that we know nothing; he meant that while we have a big impact on what we do, we should never assume we know everything. I agree with Taleb that companies that set for themselves specific numbers to reach in the future, sometimes for several years ahead, shoot themselves in the foot in a particularly idiotic way.

Can I offer the notion of ‘limited predictability’ as something people, and certainly management of organizations, can employ? A more general statement is: “We always have only partial information and yet we have to make our moves based on it”.

There are ways to increase the probability of very big successes relative to failures, and by that achieve the right convex curve of business growth while keeping reasonable robustness. The downside in case of failure could still be significant, but not big enough to threaten the existence of the organization. One of the key tools for evaluating the potential value of new products/services/technology is the Goldratt Six Questions, which have appeared several times in my previous posts on this blog. The Six Questions guide the organization to look for the elimination of several probable causes of failure, but, of course, not all of them.

Add to it Throughput Economics, a recent development of Throughput Accounting, which helps check the short-term potential outcomes of various opportunities, including careful consideration of capacity. Throughput Economics is also the name of a new book by Schragenheim, Camp and Surace, expected to be published in May 2019, which goes into great detail on how to evaluate the possible range of impact on profit of ideas, and the combined impact of several ideas, considering the limited predictability.

Buffers are usually used to protect commitments to the market. The initial objective is being robust in delivering orders on time and in full. But being able to meet commitments is an advantage over competitors who cannot, and it helps the client maintain robust supply. So, actually the buffers serve to gain from the inherent uncertainty in the supply chain.

But there are buffers that provide flexibility, which is an even stronger means to gain from uncertainty. For instance, capacity buffers, keeping options for a quick temporary increase in capacity at an additional cost, let the organization grab opportunities that without the buffer would be lost. Using multi-skilled people is a similar buffer with a similar advantage.

So far we have dealt with evaluating risky opportunities, with their potential big gains versus the potential failure, trying both to increase the gain and to increase its probability of materializing. There is another side to existing fragility: dealing with threats that could shake, even kill, the organization.

Some threats are developed outside the organization, like sanctions on a country by other countries, a new competitor, or the emergence of a disruptive technology. But most threats are a direct outcome of the doings, or non-doings, of the organization.  So, they include stupid moves like buying another company and then finding out the purchased company has no value at all (it happened to Teva).  Most of the threats are relatively simple mistakes, flaws in the existing paradigms or missing elements in certain procedures that, together with a statistical fluke (or “black swan”) cause huge damage.

How can we deal with threats?

If management is aware of such a threat, then putting in place a control mechanism that is capable not only of identifying when the threat is happening, but also of suggesting a way to deal with it, is the way to go. This handling of threats adds to the robustness of the organization, but not necessarily to its antifragility, unless new lessons are learned.

But, too many truly dangerous threats are not anticipated, and that leaves the organizations quite fragile. The antifragile way should be to have the courage to note a surprising signal, or event, and be able to analyze it in a way that will expose the flaw in the current paradigms or procedures.  When such lessons are learned this is definitely gaining from the uncertainty.  The initial impact is that the organization becomes stronger through the lessons learned.  An additional impact takes place when the organization learns to learn from experience, which makes it more antifragile than just robust.

I have developed a structured process of learning from a single event, actually learning from experience, mostly from surprises, good or bad. The methodology uses some of the Thinking Processes of TOC in a somewhat different form, but in general prior knowledge of TOC is not necessary. The detailed description of the process appears as a white paper at: https://drive.google.com/file/d/0B5bMuP-zfXtrMy1XanRDbi12ZUU/view.

The insights of antifragility have to be coupled with another set of insights that are adjusted to managing organizations and provide effective tools for making superior decisions under uncertainty. The TOC tools do exactly that.

Innovation as a double-edged sword

Innovation is one of the slogans that the current fashion in management adopts. The problem with every slogan is that it combines truth and utopia together. Should every organization open a dedicated function for developing “innovation”? I doubt it. This blog has already touched upon various topics that belong to the generic term “innovative technology”, like Industry 4.0, Big Data, Bitcoin and Blockchain. Here I’d like to touch upon the generic need to be innovative, while also being aware of the risks.

It is obvious that without introducing something new the performance of the organization is going to get stuck. For many organizations staying at their current level of performance is good enough. However, this objective is under constant threat, because a competitor might introduce something new and steal the market. So, doing nothing innovative is risky as well. In some areas coming up with something new is quite common. Sometimes the new element is significant and causes a long sequence of related changes, but many times the change is small and its impact is not truly felt. Other business areas are considered ‘conservative’, meaning there is a clear tendency to stick to whatever seems to work now. In many areas, mainly conservative and semi-conservative, the culture is to watch the competition very closely and imitate every new move (not too many and not too often) that a competitor implements. We see it in the banking systems and in the airlines. Even this culture of quick imitation is problematic when a new disruptive innovation appears from what is not considered “proper competition”. A good example is the hotel business, now under the disruptive innovation of Airbnb. The airlines experienced a similar innovative disruption when the low-cost airlines appeared.

It is common to link innovation to technology. Listening to music went through several technological changes, from 78s to LPs, to cassettes, to CDs, to MP3, each of which disrupted the previous industry. However, there are many innovations, including disruptive innovations, which do not depend on any new technology, like the previous examples of Airbnb and low-cost flights, which use available technology. Technological companies actively look to introduce more and more features that are no longer defined as innovative. After all, what new feature has appeared in Microsoft Windows in the last 10 years that deserves to be called innovative?

Non-technological innovations could have the same potential impact as new technology. Fixing flawed current paradigms, like batch policies, has been proven very effective by TOC users. Other options for innovation are offering a new payment scheme or coming up with a new way to order a service, as Uber did. An interesting question is whether non-technological innovations are less risky than developing a new technology. They usually require less heavy investment in R&D, but they are also more exposed to fast imitation. The nice point, when current flawed paradigms are challenged, is that the competitors might be frightened by the idea of going against a well-established paradigm.

It seems obvious to assume that innovation should be a chief ongoing concern of top management and the board of directors. There are two critical objectives for including innovation within top management’s focus. One is to find ways to grow the company; the other is checking for signals that a potentially disruptive innovation is emerging. Such an identification should lead to an analysis of how to face that threat, which is pretty difficult to do because of the impact of inertia.

There is an ongoing search for new innovations, but it is much more noticeable in academia and among management consultants than among executives. The following paper describes typical academic research depicting the key concerns of board members; innovation is not high on their list.  https://hbswk.hbs.edu/item/everyone-knows-innovation-is-essential-to-business-success-and-mdash-except-board-directors

How come so many directors do not see innovation as a major topic to focus on?

Let us investigate what it means for an executive, or a director on the board, to evaluate an innovative idea. Somehow, many enthusiasts of innovation don’t bother to tell us about the (obvious) risks of innovations. But experienced executives are well aware of the risks; actually they are tuned to exaggerate the risks, unless the original idea is theirs.

On top of the risk of grand failure there should be another realization about any innovation: the novel idea, good and valuable as it may be, is far from being enough to ensure success. Eventually there is a need for many complementary elements, in operations as well as in marketing, and most certainly in sales, to be part of the overall solution that makes the innovation a commercial success. This means the chances of failure are truly high, not just because the innovation itself might not work, but because of one missing element that is necessary for success. That missing element could be the handling of a significant negative consequence of using the innovative product/service, meaning a missing part of the solution that should have overcome that negative side of using the product.

Consider the very impressive past innovation of the Concorde aircraft – a jet plane that was twice as fast as any other. It flew from New York to Paris in a mere 3.5 hours. The Concorde was in use for 27 years until its limitations, cost and far too much noise, suppressed the innovation. So here is just one example of a great innovation and a colossal failure due to two important negative sides of the specific product.

When we analyze the risk of a proposed innovative idea we have to include the personal risk to the director or manager who brought the idea and stands behind it all the way. To be associated with a grand failure is quite damaging to a career, and it is also not very nice to be remembered as the father of a colossal failure.

This is probably a more rational explanation than the one the above article suggests for the fact that innovation is not among the top concerns of board directors. Of course, relatively young people, or executives who are close to retirement, might be more willing to take the chance.

One big question is how we can reduce the risks when an innovation carrying a big promise is put on the table. In other words, how to do a much better job of analyzing the future value of the innovation, and also of planning the other parts that are required in order to significantly increase the chance of success. Another element is to understand the potential damage of failure and how most of that damage can be avoided.

‘Thinking out of the box’ is a common name for the kind of thinking that could be truly innovative. This gives a very positive image to such thinking, where ‘sacred cows’ are slaughtered. On one hand, in order to come up with a worthy innovative insight one has to challenge well-rooted paradigms; on the other hand, just being out of the box does not guarantee new value, while it definitely means high risk.

TOC offers several tools to conduct the analysis much better. First are the Goldratt Six Questions, which guide a careful check from the perspective of the users who could gain from the innovation, leading also to the other parts that have to accompany the innovative idea. Using the Future Reality Tree (FRT) to identify possible negative branches for the user could be useful. Throughput Economics tools could be used to predict the range of possible impacts on capacity levels and, through this, get a clue about the financial risk versus the potential financial gain. The same FRT tool could become truly powerful for inquiring into the potential threat of a new innovation developed by another party. We cannot afford to ignore innovation, but we need to be careful; thus, developing the steps for a detailed analysis should get high priority.

 

The confusion over Blockchain

By Amir and Eli Schragenheim

Blockchain is often described as the technology that is going to change the world economy. In itself such a declaration makes it vital to dedicate a lot of time to learning the new technology and what value it can generate. Blockchain is vital for Bitcoin and similar crypto-currencies, but the claim of changing the economy looks far beyond the virtual money. The direct connection between Blockchain and Bitcoin is causing a lot of confusion. While Bitcoin is based on Blockchain technology, there might be a lot of other things to do with Blockchain as a technology by itself. Assessing the value of a new technology is open to wide speculation that adds to the confusion. For instance, Don Tapscott says, among other predictions, that Blockchain will lead to the creation of a true sharing economy. A post on Bitcoin already appeared in this blog (https://elischragenheim.com/2017/12/28/raw-thoughts-about-the-bitcoin/), where the biggest concern was that the exchange rate of Bitcoin behaves in too volatile a way to be useful as a currency. Let’s have a look at Blockchain as a new technology and inquire what its future value can be.

Let’s start with Goldratt’s Six Questions on assessing the value of a new technology. This is a great tool for guiding us to raise the right questions and look for possible answers:

  1. What is the power of the new technology?
  2. What current limitation or barrier does the new technology eliminate or vastly reduce?
  3. What are the current usage rules, patterns and behaviors that bypass the limitation?
  4. What rules, patterns and behaviors need to be changed to get the benefits of the new technology?
  5. What is the application of the new technology that will enable the above change without causing resistance?
  6. How to build, capitalize and sustain the business?

The power of the Blockchain technology

The simple answer to the first question (What is the power of the new technology?) is being able both to execute financial transactions and (mainly) to record the confirmed information in a way that is very safe. The first part means transferring money from one digital account to another without the need for an intermediary. However, the currency has to be one of the crypto-currencies and both sides need to maintain their digital wallets. The technology checks that there is enough money in the wallet to make the transfer.

The second part of the power is keeping the information records that comprise the general ledger safe. This is the truly unique feature of Blockchain. Entering the general ledger already involves a certain level of checking and confirmation by many distributed computers. In itself the recorded information is transparent to all (unless one encrypts it using currently available techniques). The unique part is that it is practically impossible, even for the involved parties, to change the information of a transaction. If there is a mistake, then a new transaction correcting the previous one has to be executed and stored.
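To make the "practically impossible to change" point concrete, here is a minimal sketch of the hash-chaining idea (it deliberately omits the distributed consensus and proof-of-work that real Blockchain adds on top; the transactions are invented): every record includes the hash of the previous record, so altering an old transaction invalidates the chain from that point on.

```python
import hashlib
import json

def record_hash(record):
    """Deterministic hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain, transaction):
    """Add a transaction, chaining it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"tx": transaction, "prev": prev}
    record["hash"] = record_hash({"tx": transaction, "prev": prev})
    chain.append(record)

def verify(chain):
    """Recompute every hash and link; any tampering breaks the check."""
    for i, record in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if record["prev"] != prev:
            return False
        if record["hash"] != record_hash({"tx": record["tx"], "prev": record["prev"]}):
            return False
    return True

chain = []
append(chain, {"from": "buyer", "to": "seller", "amount": 10})
append(chain, {"from": "seller", "to": "shipper", "amount": 2})
print("chain valid:", verify(chain))        # True

chain[0]["tx"]["amount"] = 1000             # try to rewrite an old transaction
print("after tampering:", verify(chain))    # False - the change is detected
```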

Coming now to the second question: what limitation is eliminated or vastly reduced by the new technology?

Blockchain experts claim that the current limitation of lack of trust between parties that hardly know each other is eliminated by Blockchain. Trade is problematic when a minimum of trust isn’t maintained; thus governments force rules on trade. The basic minimum trust means that when you pay the required price you have confidence that you are getting the merchandise you have paid for. This is what governments try to control through regulations and laws. When it comes to exchanging value between entities in different countries, maintaining that trust is problematic.

Is the limitation the need to use intermediaries? In most value exchanges through the Internet we currently need, at the very least, two different intermediate parties – one that transfers the money and one that transfers the purchased value. The intermediaries are, many times, slow and expensive. Can Blockchain substitute for the shipping company? Is the essence of Blockchain’s value lowering the cost of the value transfer? If Blockchain becomes effective in bypassing the banks, then we might see a major improvement in the banks and a substantial reduction of the cost. When that takes place, what would then be the limitation removed by Blockchain?

While Blockchain can directly support the actual transfer of virtual money, it can only record the data about the physical transport of merchandise, unless the merchandise is digital. So, for buying music, ebooks, videos and other digital information it is possible to overcome the limitation of trust with Blockchain. This is a unique market segment where Blockchain provides the necessary minimum trust for the value exchange.

We propose that the safety of the data is the key limitation that Blockchain is able to vastly reduce.

Is the current safety of the information on transactions, especially financial transactions, limited?

The irony is that the threat to our digital data is not that high today, but it is growing very fast. So, while people still feel relatively secure with their financial and intellectual data stored in the bank, on their computer or on the cloud, in the not-too-distant future this safety is likely to diminish substantially.

Let’s now evaluate the third question: how are the security issues of value exchange handled today?

First let’s focus on value exchange. Later, let’s review whether keeping very critical data safe would add substantial value.

What are the current generic difficulties of exchanging value? The first need is reaching an agreement between buyer and seller. Does the seller truly own the specific merchandise the buyer is interested in? The current practice is to buy only from businesses that have established their reputation – like digital stores for which seemingly objective sites have recorded testimonies of satisfied buyers who purchased from those stores. The more expensive the merchandise, the more care the buyer needs to apply.

Credit cards, banks, PayPal and the like play a major part in making money transfer relatively safe. Very large deals would use direct transfer between banks, and it is true that such a transfer, between different banks in different countries, today takes about three days and uses the cumbersome SWIFT system. A credit card transaction might carry the risk of giving away the credit card details, but there currently seems to be good enough protection, on top of the credit card companies taking a certain responsibility and operating sophisticated machine-learning algorithms to solve that. As already mentioned, we have no guarantee that in the near future all the current safety measures will not be violated by clever hackers.

Yet, there are two different major safety concerns in the exchange of value. One is the identity of the site I’m communicating with for the value exchange. More and more fake sites appear that disguise themselves as a known site. This causes an increasing feeling of insecurity. The other concern is that the seller will not honor the commitment to send the right goods on time.

The current generic practices regarding the safety of data lean heavily on the financial institutions, using their most sophisticated solutions to protect the data. However, those institutions also become the preferred targets for hackers.

Protecting our most important data, especially a person’s identity, the ownership of real-estate assets and medical records, is of high value and requires using the best available means of protection; if a much better data-protection technology appears, then for such data it could bring a lot of value. Other data, which is much less critical, could use less expensive protective means.

The fourth question focuses on the detailed answer of how Blockchain should operate, and what other means are required to significantly improve the current situation regarding safety.

A solution based on Blockchain should come with procedures that, at the very least, follow a whole deal: from recording the basic intent to buy X for the price of Y, to initiating the money transfer, no matter whether it is a direct transfer or sending instructions to a financial institution to move dollars from the buyer’s account to the seller’s account. Then the solution should record the shipment data of the goods until confirmation of acceptance. The chain of confirmed data on transactions seems to be the minimum solution where the safety and objectivity provided by the Blockchain service (an intermediary!) yields significant added value over the current practices.

Such a service could also check the record of both the seller and the buyer: how many past deals were completed successfully, how many pending deals have been open for a relatively long time. This is a much more powerful check than testimonials. Fake accounts, without a proven history, could be identified by that service, providing extra safety to deals.

Using such a service should have a cost associated with it, and we’re not sure it should be low. The users will have to decide whether to use it or stick to the current technologies depending on the perceived level of safety.

When such a service is launched, offering extra-safe records of deals, it could be extended to keeping records of ownership and identity. In a world under growing threats to the safety of its digital records, such a service is very valuable. Will it cause a revolution in the economy? We don’t think so.

As we don’t have, at the moment, a full Blockchain service, there is no point in addressing the last two Goldratt questions. Organizations that would like to offer a service using Blockchain, and complement it with the required additional elements, would need to provide the full answer to the fourth question and then also answer the two final questions in order to build the full vision for the Blockchain technology to become viable.

Behavioral biases and their impact on managing organizations – Part 2

By Eli Schragenheim and Alejandro Céspedes (Simple Solutions)

This is the second of our posts on the topic of biases and how TOC should treat them. Behavioral biases mean that many people make decisions that seem wrong from the outside. Such a judgment is based on following the cause-and-effect from the decision to the expected results, which seem inferior to those of a different decision. The troubling point for the TOC community, and for several other approaches trying to change established paradigms, is to understand why managers continue to make decisions that lead to undesired outcomes. We’ll focus this time on ‘mental accounting’ and ‘sunk cost’, and like the first post, we’ll eventually deal not just with the biases themselves, but mainly with how they affect managing organizations.

Suppose that you bought yourself a new car but it turned out to be very disappointing. How would you consider the idea of selling the new car, for just 75% of what you paid, and buying another one?

The above example demonstrates two different biases, the sunk cost fallacy and mental accounting, both blocking the idea of selling the car and buying another one.

The sunk cost fallacy

‘Sunk cost’ is a cost that has already been incurred and cannot be recovered. Standard economic theory states that sunk costs should be ignored when making decisions because what happened in the past is irrelevant. Only costs that have not yet been incurred and are relevant to the decision at hand should be considered. This seems to be a logical process, provided emotions are not allowed to interrupt it. However, in reality people prefer to sit through a boring movie instead of leaving halfway through, because the ticket has already been paid for. Organizations continue to invest money in unpromising or doomed projects just because of the time and money they’ve already put in. The simple realization is that emotions have enough power to twist the logical process of decision making.

In the car example, the part of the cost that cannot be recovered, 25% of the original price, is a sunk cost, meaning it shouldn’t be part of the decision. What should be part of the decision is whether you can afford a new car. Assuming that buying the disappointing car consumed all the money you could afford, you might need to look for a second-hand car that is better tuned to you. Isn’t it quite natural to think like that? However, most people would stick with the disappointing car without even considering selling it and buying another car they can somehow afford, just to avoid recognizing that they bought the wrong car. Ignoring sunk costs means openly admitting a mistake. This causes a very unpleasant feeling that threatens our self-confidence, so we try to ignore it by pretending the spending was worthwhile.
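A tiny worked example of what "ignore the sunk cost" means in the car case (the prices are invented): compare only the cash and satisfaction that still depend on the decision, not the original purchase price.

```python
# Hypothetical car example: the 25% you cannot recover is sunk either way,
# so the comparison is between what changes from here on.

original_price = 30000
resale_value   = 0.75 * original_price     # what the disappointing car fetches now
replacement    = 24000                     # a second-hand car that suits you better

# Option A: keep the disappointing car -> no further cash out, low satisfaction.
# Option B: sell and buy the replacement -> extra cash needed:
extra_cash_needed = replacement - resale_value
print(f"extra cash needed to switch: {extra_cash_needed}")   # 1500.0

# The sunk 7500 (25% of the original price) appears in neither option;
# the real question is whether the better car is worth 1500 to you now.
```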

Mental accounting

The decision to buy a car without having to consider the buyer’s full current financial state, based solely on the budget allocated for this specific purpose, is mental accounting. This bias, considering only the money available for a specific topic, is typical of significant spending categories, usually not top priority but important enough not to ignore. The core cause behind this bias is different from the cause of evading sunk cost.

Maintaining a special ‘account’ for a specific need is actually a way to protect a need from being stolen by other needs. The protection is required because we don’t have the capacity and capability to view, every time we need to make a decision, the whole financial situation against the whole group of different needs and desires in order to come up with the right priorities.  Thus, we create those accounts for worthy needs, and decouple them from re-considerations, even though from time to time we might make a serious error.

These biases seem reasonable for the average Joe, but what is their impact on managing organizations?

As we saw in the previous post, these biases are even more relevant for organizational decision making because of the decision maker’s concern about how the decisions might be judged by others, especially after-the-fact judgment based on the actual outcome. The point here is that if a significant sum was invested in something that produced no value, then somebody has to pay for such a mistake. Ignoring the sunk cost reveals the recognition that money was wasted.

Thus, ‘sunk cost’ is a devastating element in organizational behavior, responsible for continuing with projects when it is already clear there is no value left in them. Another typical case is refusing to write off inventory that has no chance of being sold, or refusing to sell it for less than the calculated cost. The direct cause is practical and rational: do not rock the boat, because if you do, all hell breaks loose. This is much more devastating than individuals trying to keep their dignity by not admitting they wasted money without getting real value.

The impact of mental accounting on organizations is HUGE! It encompasses all the aspects of what is called in TOC ‘local thinking’.  It is caused by being unable to handle the complexity of considering the ramifications of any decision on the holistic system.  Organizations are built of parts and it is simple enough to measure the performance of every part, even when its real impact on the organization is quite different.  Evaluating the full impact of a decision on the whole organization is frightening, because it seems way too complicated.

The common way to reduce the impact of complexity is to assign an account to every product, big deal, and client, and consider only the data required for maintaining that specific account: the revenues, the costs and the calculated “profit”. We put “profit” in quotation marks because without considering the wider dependencies, including the capacity of critical resources, there is no good measure of the true added profit of the product/deal/client to the organization.  Eventually current cost accounting methods are based on mental accounting to simplify the overall system.

Understanding the difficulty of considering all the dependencies within the holistic system is critical for the effort of the TOC insights to overcome that difficulty without the resulting distortions. People’s basic thinking habits bypass complexity in a straightforward way, looking just at the decision at hand and its immediate data, avoiding information that complicates the simple rules.

The TOC way of simplifying complex situations is by finding the few variables that impact the outcomes much more than the level of the ‘noise’ (the inherent regular variation). The existence of uncertainty, on top of the complexity, actually simplifies the situation, because the variation introduces a level of noise that makes it practically impossible to optimize within that noise. Recognizing the limitation of optimization enables management to look just for the few variables that impact performance beyond the noise, which vastly simplifies the complexity and provides a way to make superior decisions. TOC may seem to a newcomer more complicated than the common way. Actually all it requires is a lot of common sense and the clear recognition that approximately right is much superior to precisely wrong.

A Spanish translation of this article can be found at: www.simplesolutions.com.co/blog