Cause-and-effect as the ABC of practical logic

Outlining clearly the causality behind undesired effects, and asking what effects, desired or not, would be caused by the actions we take, have been an integral part of TOC from its start in the early 80s. In the early 90s Dr. Eli Goldratt developed several structured procedures, in the format of cause-and-effect trees, called the Thinking Processes.  I think it is time to examine the merits, but also the limitations, of using logical claims of the form 'Effect A' causes 'Effect B' for managing organizations.

My bachelor's degree was in Mathematics, which is the ultimate use of strict logic.  In our daily practice we use logic both to reveal the causes behind effects we experience and to speculate about what is going to happen if we take a certain action.  However, that use of logic is not easy; it is mixed with a lot of emotions that confuse the strict logic.  Even when we do our best to stay within the logical directives we face several obstacles.  One of them is being able to distinguish between assumptions about cause and effect and actual causality. We have particular difficulty with hidden assumptions, meaning not being fully aware that the causality is only assumed and not necessarily valid.

Reality is fuzzy and includes a huge number of variables that have some impact.  In order to live in such a reality we have to simplify the picture we hold in our mind.  We do it by ignoring many variables, assuming their impact is too small to truly matter.  The choice of what to ignore is part of the basic assumptions behind our cause-and-effect logic.

To experience the value and the boundaries of applying cause and effect, let's examine the following attempt to understand a practical logical argument.

It seems straightforward logic to claim:

If ‘We improve the availability of items on the shelf from 80% to 98%’ then ‘Sales will go up’.

Is this assertion always true?  Are there some missing conditions (insufficiencies) for the causality to hold?  Even if it is true, can we deduce how much more sales will be generated?

The initial logical explanation is that the missing 20% of items have demand that is not satisfied, thus sales are lost.  If those 20% were available they would be sold according to their natural demand.

The claim is shown in a simple chart:

[Figure: initial state]

The right-hand side represents the original claim; the rest adds explanations of currently lost sales that would no longer be lost.  The oval shape indicates that the two causes act together.

Two different reservations to the above logic are:

'Some customers might buy the same item somewhere else.'   And: 'Customers might buy another item instead of the missing item.'  Both reservations target the causal arrow connecting unavailability of items to lost sales, and from that effect, together with the improvement, to the resulting effect of 'Sales go up'.

The two reservations highlight a clarity issue. The improvement cause is stated as "We improve…", but who are 'we'?   It could be the management of the chain of stores, the local management of a particular store, or a supplier of a family of items.  Each of them gives a different meaning to the current state and then has its own reservation about the claimed effect of "Sales go up".  The supplier of certain products means that 'his products' are available only 80% of the time and that customers who buy replacement products cause the supplier to lose sales. If the availability of the supplier's products goes up, then those specific products will sell more.

This is a non-trivial 'clarity' issue.   We first have to deal with the clarity reservation by making a choice.  I have chosen the perspective of the store, and now I have to address the causality reservation that doubts whether unavailability of an item always causes a loss of sales to the store.

When customers don’t find a specific item they might buy a similar item.  In this case the store does not lose the sale.  In other cases the clients might simply give up.  In some rare cases the client might walk out, which could mean other sales are lost as well.  So, we conclude that some sales are lost because of unavailability, but the direct loss of sales is less than the calculated average sales of that item in the period of time it is short.
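To make the point concrete, here is a minimal sketch, in Python and with purely hypothetical numbers, of how the direct lost sales can be estimated once substitution is taken into account. The substitution probability is an assumption for illustration only.

```python
# Rough sketch (hypothetical numbers) of direct lost sales for one item.
# Only customers who do not buy a similar item are truly lost to the store.

avg_daily_sales = 10          # units per day when the item is available
days_in_period = 30
share_of_days_short = 0.20    # item missing 20% of the time

p_substitute = 0.6            # buys a similar item, so the store keeps the sale
p_lost = 1 - p_substitute     # gives up or buys elsewhere: sale lost to the store

naive_loss  = avg_daily_sales * days_in_period * share_of_days_short
direct_loss = naive_loss * p_lost

print(f"Naive estimate of lost units:        {naive_loss:.0f}")
print(f"Estimate allowing for substitution:  {direct_loss:.0f}")
```

The gap between the two numbers is exactly the point of the paragraph above: the direct loss is smaller than the naive calculation suggests.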

So, the above logical claim seems valid, but its real impact could be low.  We would like to go deeper into the question: when is the loss of sales due to unavailability significant?

Is the loss of sales equal for all items?

There are two parameters that have a significant impact on the loss of sales for the store when an item is missing.  The first is the average level of daily sales and the second is the level of loyalty of the clients to the brand/item.

Fast runners, when they are short, create considerable damage, not just through the direct loss of sales but also to the reputation of the store – meaning customers might look for a different store in the future.  The logical statement is: if 'a fast runner is missing' then 'many customers are pissed off', causing 'some regular customers look for another store to make their purchases', causing 'total sales go significantly down'.  I've added 'significantly' to mark the total impact.

But, if 'management is aware of the potential damage to the store from missing fast-runners' then we expect the following effect to apply: 'management is focused on maintaining the perfect availability of fast-runners'.

So, we can deduce that if 'the current management is reasonably capable' then 'the missing items do not include fast runners'.  Of course, 20% of the items being short might still mean a non-negligible amount of sales of medium and slow movers is lost.   The open question is how much, and even more: how does the current level of shortages impact the reputation of the store and, through it, future sales?

So, we need to look deeper into the impact of the second parameter – loyalty to a specific brand/item.  The effect 'some items are special for some clients' causes the effect 'some customers develop loyalty to that item'. This effect causes 'the probability that some customers refuse to buy a replacement is high'.  Thus, if 'items with strong loyalty are frequently missing' then 'some customers try other stores'.  The effect 'items with strong loyalty are frequently missing' also causes 'our reputation for what we carry on the shelves goes down', with a clear impact on future sales.

The difficulty with 'loyalty of customers to the brand/item' is validating its power.  The true test of the strength of loyalty is when the item is short: do the sales of alternative items go up or not?
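The test can be sketched as a simple comparison of the sales of alternative items on days when the focal item is available versus days when it is short. The data layout and numbers below are hypothetical; the point is only the shape of the check.

```python
# Sketch of the loyalty test: compare sales of alternative items on days the
# focal item is in stock vs. days it is short. Data layout is hypothetical.
days = [
    # (focal_item_in_stock, units_sold_of_alternative_items)
    (True, 40), (True, 42), (False, 44), (True, 41), (False, 43), (False, 45),
]

in_stock = [alt for ok, alt in days if ok]
short    = [alt for ok, alt in days if not ok]

avg_in_stock = sum(in_stock) / len(in_stock)
avg_short    = sum(short) / len(short)

# If alternatives barely rise when the item is short, loyalty is strong:
# customers refuse the replacement, and both the sale and some reputation are lost.
print(f"Alternatives per day when available: {avg_in_stock:.1f}")
print(f"Alternatives per day when short:     {avg_short:.1f}")
```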

One additional reservation about the basic claim that improving the availability of items on the shelf would increase sales:  it assumes that 'most customers entering the store know exactly what they want to buy'.  If this effect is not valid, then what matters for sales is that the shelf is full of items with good enough demand.  If some items planned to be on the shelf are missing, but other items with an equal chance of being sold fill the space well, there is no clear impact on sales.  The kind of items that people come to browse and then choose ('when I see it I'll know') have to be managed in a very different way than maintaining availability of specific items.  For such items it makes sense to replenish them with new items, unless a specific item seems such a hit that keeping it available is beneficial, given its high desirability to clients.

The effect 'The store has many regular customers' also has an impact on the meaning of 'availability'. A shop in a big airport serves mostly incidental clients, so unavailability of items doesn't impact future sales.  When there are no regular customers, there is no difference between items the store does not carry and items that are short.  This is a relatively small issue.

There are many more conditions that we consider true without further thought: 'we live in a free economy', 'there are many competing choices for most items' and 'there are enough middle-class customers who can afford to buy a variety of products'.  If we try to include all 'sufficiency' conditions we'll never end up with anything useful.  On the other hand, leaving them out opens the way to major mistakes due to hidden assumptions about what not to include in the analysis.  One needs intuition to know when to stop the logical analysis, while also recognizing the validity of 'never say I know' (an insight of Dr. Goldratt).  Another aspect is the impact of uncertainty:  there are no 100% cause-and-effect relationships.  But causal relationships that are 90%, or more, valid are still highly valuable.

Eventually we get the following structure as a summary of the above arguments.  Not all the previous effects are included, which means some of the logical arrows require more detail, but this is the resulting claim.

[Figure: summary cause-and-effect tree leading to 'Sales go up']

We still cannot determine how much the sales would go up, because it depends on the characteristics of the medium and slow runners:  how many of them have strong loyalty.  If we also add to the initial effects 'The chain makes marketing efforts to radiate the message that the chain maintains very high availability at every store', then the chain can expect a faster and stronger increase in its reputation and in its sales.

Was it worth going through the logical analysis?

While we still have only a partial picture, it is probably better than a picture based just on intuition without any analysis.


Antifragile – strengths and boundaries from the TOC perspective

Antifragile is a term invented by Nassim Taleb as a major insight for dealing with uncertainty. It directs us to identify when and how uncertainties we have to live with can be handled in our favor, making us stronger, instead of reducing our quality of life. Taleb emphasizes the benefit we can get when the upside is very high while the downside is relatively small and easily tolerable. Actually there is a somewhat different way to turn uncertainty into a key positive factor: significantly increasing the chance for a big gain while reducing the chance for losing.  A generic message is that variability could be made positive and beneficial when you understand it well.

While it is obvious that the concept of antifragile is powerful, I have two reservations about it. One is that it is impossible to become fully antifragile.  We, human beings, are very fragile due to many uncertain causes that impact our life.  There is no way we can treat all of them in a way that gains from the variability.  For instance, there is always a risk of being killed by an act of terror, a road accident or an earthquake.   Organizations, small and big, are also fragile, without any viable way to become antifragile to all potential threats. So, while becoming antifragile to specific causes adds great value, it cannot be done for all sources of uncertainty.

The other reservation is that finding where we gain so much more from the upside, while we lose relatively little from the downside, still requires special care because the accumulation of too many small downsides might still kill us. Taleb brings several examples where small pains do not accumulate to a big pain, but this is not always the case, certainly not when we speak about money lost in one period of time.  So, there is a constant need to measure our current state before we take a gamble where the upside is much bigger than the downside.
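A small Monte Carlo sketch, with purely hypothetical numbers, illustrates this reservation: a gamble can have a large upside, a small downside and a positive expected value per round, and still ruin a player whose capital is limited, because the small downsides accumulate before the rare big win arrives.

```python
import random

# Sketch: repeated gambles with a small downside and a rare large upside.
# Numbers are hypothetical; the point is that small losses accumulate.
random.seed(1)

def survives(capital=200, rounds=200, p_win=0.05, win=300, loss=10):
    """Return True if capital stays positive through all rounds."""
    for _ in range(rounds):
        capital += win if random.random() < p_win else -loss
        if capital <= 0:
            return False          # accumulated small downsides killed us
    return True

trials = 10_000
survived = sum(survives() for _ in range(trials))
print(f"Survival rate: {survived / trials:.1%}")
# Expected value per round is positive (0.05*300 - 0.95*10 = +5.5), yet a
# noticeable share of runs goes broke before the big wins arrive.
```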

The focus of this article is on the impact of the concept of antifragile, and its related insights, on managing organizations. The post doesn’t deal with the personal impact or macroeconomics.  The objective is to learn how the generic insights lead to practical insights for organizations.

There are some interesting parallels between TOC and the general idea behind antifragile. Goldratt strove to focus on directions where the outcomes are way beyond the inherent noise of the environment.  TOC uses several tools that aim not just at robustness, but at using the uncertainty to achieve more and more of the goal.

A commercial organization is fragile, first of all, in its ability to finance the coming short-term activities. This defines a line that the accumulated short-term losses may reach before bankruptcy becomes imminent. Losses and profits are accumulated by time periods and their fluctuations are relatively mild.  Sudden huge increases in profits are very rare in organizational activities.  They can happen when critical tests of new drugs or of revolutionary technologies take place; then the success or failure has a very high and immediate impact.  As developing a new product usually involves long efforts over time, it means a very substantial investment, so the downside is not small.  The gain could be much higher, even by a factor of 10 or 100, but such a big success must have been intended early in the process, with only a very low probability of success.  So, from the perspective of the organization the number of failures of such ambitious developments has to be limited, unless it is a startup that takes the possibility of failing to survive into account.

Where I disagree with Mr. Taleb is the assertion of unpredictability. The way Mr. Taleb states it is grossly misleading.  It is right that we can never predict a sporadic event, and we can never be sure of success.  But in many cases a careful analysis and certain actions raise the odds of success and reduce the odds of failure.

One of the favorite sayings of Dr. Goldratt, actually one of the 'pillars of TOC', is "Never Say I Know", which is somewhat similar to the unpredictability statement.  But Goldratt never meant that we know nothing; he meant that while we have a big impact on what we do, we should never assume we know everything. I agree with Taleb that companies that set for themselves specific numbers to reach in the future, sometimes for several years ahead, shoot themselves in the foot in a particularly idiotic way.

Can I offer the notion of ‘limited predictability’ as something people, and certainly management of organizations, can employ? A more general statement is: “We always have only partial information and yet we have to make our moves based on it”.

There are ways to increase the probability of very big successes relative to failures and by that achieve the desired convex graph of business growth while keeping reasonable robustness. The downside in case of failure could still be significant, but not big enough to threaten the existence of the organization.  One of the key tools for evaluating the potential value of new products/services/technology is the Goldratt Six Questions, which have appeared several times in my previous posts on this blog.  The Six Questions guide the organization to look for the elimination of several probable causes of failure, though, of course, not all of them.

Add to it Throughput Economics, a recent development of Throughput Accounting, which helps in checking the short-term potential outcomes of various opportunities, including careful consideration of capacity. Throughput Economics is also the name of a new book by Schragenheim, Camp and Surace, expected to be published in May 2019, which goes into great detail on how to evaluate the possible range of impact on profit of ideas, and the combined impact of several ideas, considering the limited predictability.

Buffers are usually used to protect commitments to the market. The initial objective is being robust in delivering orders on time and in full.  But being able to meet commitments is an advantage over competitors who cannot, and by that it helps the client maintain robust supply.  So, actually the buffers serve to gain from the inherent uncertainty in the supply chain.

But there are buffers that provide flexibility, which is an even stronger means to gain from uncertainty. For instance, capacity buffers, keeping options for a quick temporary increase in capacity at additional cost, let the organization grab opportunities that would otherwise be lost.  Using multi-skilled people is a similar buffer with a similar advantage.
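A back-of-the-envelope sketch, with hypothetical numbers, of why paying for temporary capacity can be worthwhile: the extra cost is compared with the throughput of the opportunity that would otherwise be lost.

```python
# Sketch: is it worth using a capacity buffer (e.g., overtime or an outside
# contractor) to grab an unexpected order? Numbers are hypothetical.

order_revenue        = 50_000
truly_variable_costs = 28_000            # materials etc.
delta_T              = order_revenue - truly_variable_costs   # throughput added

extra_capacity_hours   = 120
cost_per_overtime_hour = 60
delta_OE               = extra_capacity_hours * cost_per_overtime_hour

print(f"delta-T = {delta_T}, delta-OE = {delta_OE}, net = {delta_T - delta_OE}")
# Without the buffer the order is simply lost; with it the organization
# gains delta-T minus delta-OE, here clearly positive.
```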

So far we have dealt with evaluating risky opportunities, with their potential big gains versus the potential failure, trying both to increase the gain and its probability of materializing. There is another side to existing fragility: dealing with threats that could shake, even kill, the organization.

Some threats develop outside the organization, like sanctions on a country by other countries, a new competitor, or the emergence of a disruptive technology. But most threats are a direct outcome of the doings, or non-doings, of the organization.  They include stupid moves like buying another company and then finding out the purchased company has no value at all (it happened to Teva).  Most of the threats are relatively simple mistakes, flaws in the existing paradigms or missing elements in certain procedures that, together with a statistical fluke (or "black swan"), cause huge damage.

How can we deal with threats?

If management is aware of such a threat, then the way to go is to put in place a control mechanism that is capable not only of identifying when the threat is materializing, but also of suggesting a way to deal with it. This handling of threats adds to the robustness of the organization, but not necessarily to its antifragility, unless new lessons are learned.

But, too many truly dangerous threats are not anticipated, and that leaves the organizations quite fragile. The antifragile way should be to have the courage to note a surprising signal, or event, and be able to analyze it in a way that will expose the flaw in the current paradigms or procedures.  When such lessons are learned this is definitely gaining from the uncertainty.  The initial impact is that the organization becomes stronger through the lessons learned.  An additional impact takes place when the organization learns to learn from experience, which makes it more antifragile than just robust.

I have developed a structured process for learning from a single event, actually learning from experience, mostly from surprises, good or bad. The methodology uses some of the Thinking Processes of TOC in a somewhat different form, but in general prior knowledge of TOC is not necessary.  The detailed description of the process appears as a white paper at: https://drive.google.com/file/d/0B5bMuP-zfXtrMy1XanRDbi12ZUU/view.

The insights of Antifragility have to be coupled with another set of insights that are adjusted to managing organizations and offer effective tools for making superior decisions under uncertainty. The TOC tools do exactly that.

Innovation as a double-edged sword

Innovation is one of the few slogans that the current fashion in management adopts. The problem with every slogan is that it combines truth and utopia together.  Should every organization open a dedicated function for developing "innovation"?  I doubt it.  This blog has already touched upon various topics that belong to the generic term "innovative technology", like Industry 4.0, Big Data, Bitcoin and Blockchain.  Here I'd like to touch upon the generic need to be innovative, while also being aware of the risks.

It is obvious that without introducing something new the performance of the organization is going to get stuck. For many organizations staying at their current level of performance is good enough. However, this objective is under constant threat, because a competitor might introduce something new and steal the market.  So doing nothing innovative is risky as well. In some areas coming up with something new is quite common.  Sometimes the new element is significant and causes a long sequence of related changes, but many times the change is small and its impact is not truly felt.  Other business areas are considered 'conservative', meaning there is a clear tendency to stick to whatever seems to work now.  In many areas, mainly conservative and semi-conservative ones, the culture is to watch the competition very closely and imitate every new move (not too many and not often) that a competitor implements.  We see it in the banking systems and in the airlines.  Even this culture of quick imitation is problematic when a new disruptive innovation appears from what is not considered "proper competition".  A good example is the hotel business, now under the disruptive innovation of Airbnb.  The airlines experienced a similar innovative disruption when the low-cost airlines appeared.

It is common to link innovation to technology. Listening to music went through several technological changes, from 78 records to LPs, to cassettes, to CDs, to MP3, each of which disrupted the previous industry.  However, there are many innovations, including disruptive innovations, which do not depend on any new technology, like the previous examples of Airbnb and low-cost flights, which use the available technology.  Technological companies actively look to introduce more and more features that are no longer defined as innovative.  After all, what new feature has appeared in Microsoft Windows in the last 10 years that deserves to be called innovative?

Non-technological innovations could have the same potential impact as new technology. Fixing flawed current paradigms, like batch policies, has been proven very effective by TOC users. Other options for innovation are offering a new payment scheme or coming up with a new way to order a service, like Uber did.  An interesting question is whether non-technological innovations are less risky than developing a new technology.  They usually require less heavy investment in R&D, but they are also more exposed to fast imitation.  The nice point when current flawed paradigms are challenged is that the competitors might be frightened by the idea of going against a well-established paradigm.

It seems obvious to assume that innovation should be a chief ongoing concern of top management and the board of directors. There are two critical objectives for including innovation within top management's focus.  One is to find ways to grow the company, and the other is checking for signals that a potential new disruptive innovation is emerging.  Such an identification should lead to an analysis of how to face that threat, which is pretty difficult to do because of the impact of inertia.

There is an ongoing search for new innovations, but it is much more noticeable in academia and among management consultants than among executives.   The following paper describes a typical academic study that depicts the key concerns of board members, and innovation is not high on their list.  https://hbswk.hbs.edu/item/everyone-knows-innovation-is-essential-to-business-success-and-mdash-except-board-directors

How come so many directors do not see innovation as a major topic to focus on?

Let us investigate what it means for an executive, or a director on the board, to evaluate an innovative idea. Somehow, many enthusiasts of innovation don't bother to tell us about the (obvious) risks of innovations. But experienced executives are well aware of the risks; actually, they are tuned to exaggerate the risks, unless the original idea is theirs.

On top of the risk of grand failure there should be another realization about any innovation: the novel idea, good and valuable as it may be, is far from being enough to ensure success.  Eventually there is a need for many complementary elements, in operations as well as in marketing, and most certainly in sales, to be part of the overall solution that makes the innovation a commercial success. This means the chances of failure are truly high, not just because the innovation itself might not work, but because of one missing element necessary for success.  The missing element could be the handling of a significant negative consequence of using the innovative product/service, meaning a missing part of the solution that should have overcome that negative side of the product's use.

Consider the very impressive past innovation of the Concorde aircraft, a jet plane that was twice as fast as any other jet plane. It flew from New York to Paris in a mere 3.5 hours.  The Concorde was in use for 27 years until its limitations, cost and much too high noise, suppressed the innovation.  So, here is one example of a great innovation and a colossal failure due to two important negative sides of the specific product.

When we analyze the risk of a proposed innovative idea we have to include the personal risk to the director or manager who brought the idea and stands behind it all the way.  To be associated with a grand failure is quite damaging to one's career, and it is also not very nice to be remembered as the father of a colossal failure.

This is probably a more rational explanation than the one the above article suggests for the fact that innovation is not among the top concerns of board directors. Of course, relatively young people, or executives who are close to retirement, might be more willing to take the chance.

One big question is how we can reduce the risks when an innovation carrying a big promise is put on the table. In other words, how can we do a much better job of analyzing the future value of the innovation, and also plan the other parts that are required in order to significantly increase the chance of success?   Another element is to understand the potential damage of failure and how most of that damage can be avoided.

'Thinking out of the box' is a common name for the kind of thinking that could be truly innovative. This gives a very positive image to such thinking, where 'sacred cows' are slaughtered.  On one hand, in order to come up with a worthy innovative insight one has to challenge well-rooted paradigms; on the other hand, just being out of the box does not guarantee new value, while it definitely means high risk.

TOC offers several tools to conduct the analysis much better. First are Goldratt's Six Questions, which guide a careful check from the perspective of the users who could gain from the innovation, leading also to the other parts that have to accompany the innovative idea.   Using the Future Reality Tree (FRT) to identify possible negative branches for the user could be useful.  Throughput Economics tools could be used to predict the range of possible impacts on capacity levels and through this get a clue of the financial risk versus the potential financial gain.  The same FRT tool could become truly powerful for inquiring into the potential threat of a new innovation developed by another party.  We cannot afford to ignore innovation, but we need to be careful, thus developing the steps for a detailed analysis should get high priority.

 

The confusion over Blockchain

By Amir and Eli Schragenheim

Blockchain is often described as the technology that is going to change the world economy. In itself such a declaration makes it vital to dedicate a lot of time to learning the new technology and what value it can generate.  Blockchain is vital for Bitcoin and similar crypto-currencies, but the claim of changing the economy looks far beyond virtual money.  The direct connection between Blockchain and Bitcoin causes a lot of confusion. While Bitcoin is based on Blockchain technology, there might be a lot of other things to do with Blockchain as a technology in itself. Assessing the value of a new technology is open to wide speculations that add to the confusion.  For instance, Don Tapscott says, among other predictions, that Blockchain will lead to the creation of a true sharing economy. A post on Bitcoin already appeared in this blog (https://elischragenheim.com/2017/12/28/raw-thoughts-about-the-bitcoin/), where the biggest concern was that the exchange rate of Bitcoin is far too volatile for it to be useful as a currency.  Let's have a look at Blockchain as a new technology and inquire what its future value could be.

Let’s start with Goldratt’s Six Questions on assessing the value of a new technology. This is a great tool for guiding us to raise the right questions and look for possible answers:

  1. What is the power of the new technology?
  2. What current limitation or barrier does the new technology eliminate or vastly reduce?
  3. What are the current usage rules, patterns and behaviors that bypass the limitation?
  4. What rules, patterns and behaviors need to be changed to get the benefits of the new technology?
  5. What is the application of the new technology that will enable the above change without causing resistance?
  6. How to build, capitalize and sustain the business?

The power of the Blockchain technology

The simple answer to the first question (what is the power of the new technology?) is the ability both to execute financial transactions and (mainly) to record the confirmed information in a way that is very safe.  The first part means transferring money from one digital account to another without the need for an intermediary.  However, the currency has to be one of the crypto-currencies and both sides need to maintain their digital wallets.  The technology checks that there is enough money in the wallet to make the transfer.

The second part of the power is keeping the information records that comprise the general ledger safe. This is the truly unique feature of Blockchain.  Getting into the general ledger already involves a certain level of checking and confirmation by many distributed computers.  In itself the recorded information is transparent to all (unless one encrypts it using currently available techniques). The unique part is that it is practically impossible, even for the involved parties, to change the information of the transaction.  If there is a mistake, then a new transaction correcting the previous one has to be executed and stored.
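A toy sketch of the mechanism that makes stored records practically impossible to change: each record carries a hash of the previous one, so altering any past record breaks the chain. This is only an illustration of the principle, not the actual Bitcoin/Blockchain protocol; there is no distributed confirmation or proof-of-work here.

```python
import hashlib, json

# Toy illustration of tamper-evidence in a chain of records.
# Not the real Blockchain protocol: no distributed nodes, no proof-of-work.

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"data": block["data"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("pay 2 coins from A to B", prev_hash="0" * 64)]
chain.append(make_block("ship goods for order 17", chain[-1]["hash"]))

print(chain_is_valid(chain))                  # True
chain[0]["data"] = "pay 20 coins from A to B" # try to rewrite history
print(chain_is_valid(chain))                  # False: a correction must be a new record
```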

Coming now to the second question: what limitation is eliminated or vastly reduced by the new technology?

Blockchain experts claim that the current limitation of lack of trust between parties that hardly know each other is eliminated by Blockchain. Trade is problematic when the minimum trust isn't maintained, thus governments enforce rules on trade.  The basic minimum trust means that when you pay the required price you have confidence that you are getting the merchandise you have paid for.  This is what governments try to control through regulations and laws. When it comes to exchanging value between entities in different countries, maintaining that trust is problematic.

Is the limitation the need to use intermediaries? In most value exchange through the Internet we currently need, at the very least, two different intermediate parties: one that transfers the money and one that transfers the purchased value.  The intermediaries are often slow and expensive.  Can Blockchain substitute for the shipping company? Does the essence of the value of Blockchain lie in lowering the cost of the value transfer?  If Blockchain becomes effective in bypassing the banks, then we might see a major improvement in the banks and a substantial reduction of the cost.  When this takes place, what would then be the limitation removed by Blockchain?

While Blockchain can directly support the actual transfer of virtual money, it can only record the data about the physical transport of merchandise, unless the merchandise is digital. So, for buying music, ebooks, videos and other digital information it is possible to overcome the limitation of trust with Blockchain.  This is a unique market segment where Blockchain provides the necessary minimum trust for the value exchange.

We propose that the safety of the data is the key limitation that Blockchain is able to vastly reduce.

Is the current safety of the information on transactions, especially financial transactions, limited?

The irony is that the threat to our digital data is not that high today, but it is growing very fast. So, while people still feel relatively secure with their financial and intellectual data stored in the bank, on their computer or on the cloud, in the not-too-distant future this safety is likely to diminish substantially.

Let's now evaluate the third question: how are the security issues of value exchange handled today?

First let’s focus on value exchange. Later, let’s review whether keeping very critical data safe would add substantial value.

What are the current generic difficulties of exchanging value? The first need is reaching an agreement between buyer and seller.  Does the seller truly own the specific merchandise the buyer is interested in?  The current practice is to buy only from businesses that have established their reputation, like digital stores for which seemingly objective sites have recorded the testimonies of satisfied buyers.  The more expensive the merchandise, the more care the buyer needs to apply.

Credit cards, banks, PayPal and the like play a major part in making money transfer relatively safe. Very large deals use direct transfers between banks, and it is true that such a transfer, between different banks in different countries, takes today about three days and uses the cumbersome SWIFT system.  Credit card transactions might face the risk of giving away the credit card details, but there currently seems to be good enough protection, on top of the credit card companies taking certain responsibility and operating sophisticated machine-learning algorithms to address that.  As already mentioned, we have no guarantee that in the near future the current safety measures will not be violated by clever hackers.

Yet, there are two major safety concerns in the exchange of value. One is the identity of the site I'm communicating with for the value exchange.  More and more fake sites appear that disguise themselves as known sites.  This causes an increasing feeling of insecurity.  The other concern is that the seller will not follow through on the commitment to send the right goods on time.

The current generic practices regarding the safety of data lean heavily on the financial institutions using their most sophisticated solutions to protect the data. However, those institutions also become prime targets for hackers.

Protecting our most important data, especially personal identity, the ownership of real-estate assets and medical records, is of high value and requires using the best available means of protection; if a much better data-protection technology appears, then for such data it could bring a lot of value. Other, much less critical, data could use less expensive protective means.

The fourth question focuses on the detailed answer of how Blockchain should operate, and what other means are required to significantly improve the current situation regarding safety.

A solution based on Blockchain should come with procedures that, at the very least, follow a whole deal: from recording the basic intent to buy X for the price of Y, to initiating the money transfer, no matter whether it is a direct transfer or instructions sent to a financial institution to move dollars from the buyer's account to the seller's account.  Then the solution should record the shipment data of the goods until confirmation of acceptance.  The chain of confirmed data on transactions seems to be the minimum solution where the safety and objectivity provided by the Blockchain service (an intermediary!) yields significant added value over the current practices.

Such a service could also check the record of both the seller and the buyer: how many past deals were completed successfully, how many pending deals have been open for a relatively long time.  This is a much more powerful check than testimonials.  Fake accounts, without a proven history, could be identified by that service, providing extra safety to deals.

Using such a service should have a cost associated with it, and we’re not sure it should be low. The users will have to decide whether to use it or stick to the current technologies depending on the perceived level of safety.

When such a service is launched, offering extra-safe records of deals, it could be extended to record-keeping of ownership and identities. In a world that is under growing threats to the safety of its digital records such a service is very valuable.  Will it cause a revolution in the economy?  We don't think so.

As we don't have, at the moment, a full Blockchain service, there is no point in addressing the two last Goldratt questions.  Organizations that would like to offer a service using Blockchain, and complement it with the required additional elements, would need to provide the full answer to the fourth question and then also answer the two final questions in order to build the full vision for the Blockchain technology to become viable.

Behavioral biases and their impact on managing organizations – Part 2

By Eli Schragenheim and Alejandro Céspedes (Simple Solutions)

This is the second of our posts on the topic of biases and how TOC should treat them. Behavioral biases mean that many people make decisions that seem wrong from the outside.  Such a judgment is based on following the cause-and-effect from the decision to expected results that seem inferior to those of a different decision.  The troubling point for the TOC community, and for several other approaches trying to change established paradigms, is to understand how come managers continue to make decisions that lead to undesired outcomes.  This time we'll focus on 'mental accounting' and 'sunk cost', and as in the first post, we'll eventually deal not just with the bias itself, but mainly with how it affects managing organizations.

Suppose that you bought yourself a new car but it turned out to be very disappointing.   How would you consider the idea of selling the new car, for just 75% of what you paid, and buying another one?

The above example demonstrates two different biases, the sunk cost fallacy and mental accounting, both blocking the idea of selling the car and buying a new one.

The sunk cost fallacy

‘Sunk cost’ is a cost that has already been incurred and cannot be recovered. Standard economic theory states that sunk costs should be ignored when making decisions because what happened in the past is irrelevant. Only costs that have not been incurred so far and are necessary to the decision at hand should be considered. This seems to be a logical process without letting emotions interrupt the process.  However, in reality people prefer to sit through a boring movie instead of leaving halfway through because the ticket has already been paid for. Organizations continue to invest money in unpromising or doomed projects just because of the time and money they’ve already put in.  The simple realization is that emotions have enough power to twist the logical process of decision making.

In the car example, the part of the car's cost that cannot be redeemed, 25% of the original price, is sunk cost, meaning it shouldn't be part of the decision. What should be part of the decision is whether you can afford a new car.  Assuming that buying the disappointing car has consumed all the money you could afford, you might need to look for a second-hand car that will be better tuned to you.  Isn't it quite natural to think like that?  However, most people would stick with the disappointing car without even considering selling it and buying another car they can somehow afford, just to avoid having to recognize they bought the wrong car. Ignoring sunk costs means openly admitting a mistake.  This causes a very unpleasant feeling that threatens our self-confidence, so we try to ignore it by pretending the spending was worthwhile.

Mental accounting

The decision to buy a car without considering the full current financial state of the buyer, based solely on the budget allocated for this specific purpose, is mental accounting. This bias, considering only the money available for a specific topic, is typical of significant categories of spending, usually not top priority but important enough not to ignore.  The core cause behind this bias is different from the cause of evading sunk cost.

Maintaining a special ‘account’ for a specific need is actually a way to protect a need from being stolen by other needs. The protection is required because we don’t have the capacity and capability to view, every time we need to make a decision, the whole financial situation against the whole group of different needs and desires in order to come up with the right priorities.  Thus, we create those accounts for worthy needs, and decouple them from re-considerations, even though from time to time we might make a serious error.

These biases seem reasonable for the average Joe, but what is their impact on managing organizations?

As we saw in the previous post, these biases are even more relevant for organizational decision making because of the decision maker's concern about how the decisions might be judged by others, especially after-the-fact judgment based on the actual outcome. The point here is that if a significant sum was invested in something that produced no value, then somebody has to pay for such a mistake.  Ignoring the sunk cost reveals the recognition that money has been wasted.

Thus, 'sunk cost' is a devastating element in organizational behavior, being responsible for continuing with projects when it is already clear there is no value left in them. Another typical case is refusing to write off inventory that has no chance of being sold, or refusing to sell it for less than the calculated cost. The direct cause is practical and rational: do not rock the boat, because if you do, then all hell breaks loose.  This is much more devastating than individuals trying to keep their dignity by not admitting they wasted money without getting real value.

The impact of mental accounting on organizations is HUGE! It encompasses all the aspects of what is called in TOC ‘local thinking’.  It is caused by being unable to handle the complexity of considering the ramifications of any decision on the holistic system.  Organizations are built of parts and it is simple enough to measure the performance of every part, even when its real impact on the organization is quite different.  Evaluating the full impact of a decision on the whole organization is frightening, because it seems way too complicated.

The common way to reduce the impact of complexity is to assign an account to every product, big deal, and client, and consider only the data required for maintaining that specific account: the revenues, the costs and the calculated “profit”. We put “profit” in quotation marks because without considering the wider dependencies, including the capacity of critical resources, there is no good measure of the true added profit of the product/deal/client to the organization.  Eventually current cost accounting methods are based on mental accounting to simplify the overall system.
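A small sketch, with hypothetical numbers, of how such a per-product 'account' can mislead: two products look similar when overhead is allocated per unit, but once the capacity of the critical resource is considered their contributions to the organization differ sharply.

```python
# Hypothetical comparison of two products: allocated-cost "profit" per unit
# versus throughput per minute of the constraint (the scarce resource).

products = {
    "P1": dict(price=100, material=40, constraint_min=10),
    "P2": dict(price=80,  material=30, constraint_min=2),
}
allocated_overhead_per_unit = 35   # the "mental account" style cost allocation

for name, p in products.items():
    accounting_profit = p["price"] - p["material"] - allocated_overhead_per_unit
    throughput = p["price"] - p["material"]                  # T per unit
    t_per_constraint_minute = throughput / p["constraint_min"]
    print(f"{name}: allocated 'profit'={accounting_profit:>3}  "
          f"T per constraint minute={t_per_constraint_minute:.1f}")

# P1 looks better on allocated "profit" (25 vs 15), but P2 generates 25 of T
# per constraint minute versus 6 for P1; when capacity is scarce, pushing P2
# adds far more to the organization's bottom line.
```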

Understanding the difficulty of considering all the dependencies within the holistic system is critical for the efforts of the TOC insights to overcome that difficulty without the resulting distortions. The basic thinking habits of people are set to bypass complexity in a straightforward way: looking just at the decision at hand and its immediate data, avoiding information that complicates the simple rules.

The TOC way of simplifying complex situations is to find the few variables that impact the outcomes much more than the level of the 'noise' (the inherent regular variation). The existence of uncertainty, on top of the complexity, actually simplifies the situation, because the variation introduces a level of noise that makes it practically impossible to optimize within that noise.  Recognizing the limitation of optimization enables management to look just for the few variables that impact performance beyond the noise, and by this vastly simplify the complexity and provide a way to make superior decisions.  TOC may seem to a newcomer more complicated than the common way.  Actually all it requires is a lot of common sense and a clear recognition that approximately right is much superior to precisely wrong.

A Spanish translation of this article can be found at: www.simplesolutions.com.co/blog

Behavioral biases and their impact on managing organizations

By Eli Schragenheim and Alejandro Céspedes (Simple Solutions)

Most of us, especially TOC practitioners, consider ourselves very good at decision making thanks to our cause-and-effect thinking.  However, behavioral economics, notably the research of two Nobel Prize professors, Daniel Kahneman and Richard Thaler, has convincingly shown several general deviations from rational economic thinking, pushing most people to make decisions that look logically flawed. The psychology behind how people make decisions is extremely relevant to TOC because organizations are run by people, the same people TOC tries hard to convince to change their decision-making processes. TOC usually treats cause-and-effect relationships as based on rational logic.  But cause-and-effect could also consider irrational causes, like having a special negative response to the term "loss", even when it is not a loss, and predict the resulting effect of such a negative response.

Generally speaking TOC should look for answers to three key questions regarding such biases:

    1. Can we use cause-and-effect logic to map and explain biases? If so, can we eliminate, or significantly reduce, the biases?
    2. Is the impact of the biases on the organization’s decision making the same as on the decisions of an individual? Can we improve the organization’s decision making by treating the cause of the biases?
    3.  When numbers are fuzzy, as they usually are in corporate scenarios, what do managers rely on to make decisions?

Understanding loss aversion

In itself loss aversion is not a bias. It is a reasonable way to stay away from trouble.  What has been shown is that it is often inconsistent, which practically means that human beings are impacted by irrelevant parameters that should not have an impact. To demonstrate the inconsistency we will use two experiments presented in "Thinking, Fast and Slow" by Prof. Kahneman.

In one experiment people were told that they had been given US$1,000 and that they had to choose between a 50% chance of winning an additional US$1,000 or getting US$500 for sure.

It’s no surprise that the vast majority of people was risk averse and chose to get the US$500.

What's interesting is that when people were told that they had been given US$2,000 and that they then had to choose between a 50% chance of losing US$1,000 or losing US$500 for sure, many people suddenly became risk seekers and chose the gamble.

In terms of final state of wealth both cases are exactly the same. Both cases put the choice between getting US$1,500 and accepting a gamble with equal chances of having US$1,000 or US$2,000. The two cases differ in their framing of the choice.  In the first case the choice is verbalized between gaining and taking a risk to gain more. The second case frames the dilemma between losing and potentially losing more (or not losing).  The fact that many people made a different decision between the cases shows a bias based on the framing of a ‘loss’ versus ‘gaining less’. It demonstrates how the words have a decisive impact.

These two experiments demonstrate two important findings. One is that "losses" loom much larger than "gains", and the other is that people become risk seeking when all their options are bad. This also explains why most people turn down a bet with a 50% chance of losing US$100 and a 50% chance of winning US$150, even though on average the result is positive. If the bet were a 50% chance of winning US$200, a balance between risk seeking and risk aversion would be achieved. That means "losing" is about twice as strong as "winning" as a general value assessment.
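The 'about twice as strong' assessment can be written as a simple weighted evaluation. The sketch below is only an illustration of that weighting, with the factor of 2 taken from the paragraph above.

```python
# Sketch: how a 2x weight on losses turns an objectively good bet into a bad one.
LOSS_WEIGHT = 2.0     # "losing" counts about twice as much as "winning"

def perceived_value(p_win, win, p_lose, lose):
    return p_win * win - p_lose * lose * LOSS_WEIGHT

# 50% win $150 / 50% lose $100: expected value is +25, yet it feels negative.
print(perceived_value(0.5, 150, 0.5, 100))   # -25.0 -> bet rejected
# Raising the win to $200 balances the perception, as described above.
print(perceived_value(0.5, 200, 0.5, 100))   #   0.0 -> indifference point
```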

Should we be surprised by this seemingly irrational behavior?

Losing an existing $100 might disrupt the short-term plans of a person, while the value of an additional $150 is less clear. So, even though it is clear to most people that overall this is a good gamble, they resist it, recognizing the greater negative impact of the loss.

Losing all we have is a huge threat. So, every person sets a mode of survival that should not be breached.  As the ability of most people to manage gains and losses in detail is limited, the survival instincts lead to a negative reaction to any potential loss, weighting it more heavily than the equivalent gain. So, taking into account our limited capabilities to control our exact state, we develop simple fast rules to make safe decisions. A simple rule could be "don't gamble ever!", or "don't bother with gambles unless you are certain to win much more." These heuristics are definitely very helpful in most situations, but they can be costly in others.

While risk aversion seems rational enough, the framing bias is an irrational element, but the cause behind it is pretty clear and can be outlined as regular cause-and-effect.

We further assume that 'framing' is a bias that a person with a good background in probability theory would be able, most of the time, to resist, coming up with consistent decisions, especially for significant ones.

Does this hold true for decisions made on behalf of an organization?

Suppose you are the regional sales manager of a big company and have to decide whether to launch a new product or not. Historically it has been statistically shown that there is a fifty-fifty chance that the new product will make a profit of US$2 million in one year or that it will lose a million dollars and its production would stop at the end of the year.

What would you do?

Our experience says that most seasoned managers will refuse to take the risk. Managers are naturally risk averse regarding any outcomes that will be attributed directly to them. As a matter of fact, every decision on behalf of an organization goes through two different evaluations: one is what is good for the organization and the other is what is good for the decision maker.

It's common in many organizations that a "success" leads to a modest reward while a "failure" leads to a significant negative result for the manager. What's more, because of hindsight bias, decisions are assessed not by the quality of the decision-making process and the information available at the time they were made, but by their outcome. No wonder loss aversion intensifies in corporate scenarios!

Earlier we mentioned that teaching the basics of probability theory and acknowledging the different biases should reduce their impact. But the unfortunate fact is that in most cases the decision makers face uncertain outcomes for which the probabilities are unknown. Launching a new product is such a case.  The statistical assessment of a fifty-fifty chance is very broad and the decision maker cannot assume she knows the real odds.  This fuzzy nature of assessments naturally makes people even more risk averse, because the risk could be bigger than what is formally assessed. On the other hand, managers are expected to make some decisions, so they are sometimes pushed to take risky decisions just in order to look as active as expected.

Now suppose that you are the Sales Vice-President and you have to decide whether to launch 20 different new products in 20 different regions. All product launches carry statistics similar to those presented earlier (50% chance of making US$2M and 50% of losing US$1M). Suppose the company is big enough to be able to overcome several product flops without threatening its solvency.

Would you launch all of them?

Assuming the success or failure of each of the products is independent of the other products, the simple statistical model would predict, on average, a total profit of $10M. However, since top management will most probably judge each decision independently, another bias known as narrow framing, the VP of Sales will try her best to minimize the number of failures. She might decide to launch only 8, basing her choice on the best intuition she has, even though she is aware she doesn't really know. What's ironic is that there's a higher overall risk for the company in launching 8 products than 20 because of the aggregation effect.
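A small Monte Carlo sketch of the aggregation effect, using the 50% / US$2M / US$1M figures from the example above (the code itself is only an illustration):

```python
import random

# Each launch: 50% chance of +$2M, 50% chance of -$1M (figures from the example).
random.seed(7)

def portfolio_result(n_launches):
    return sum(2.0 if random.random() < 0.5 else -1.0 for _ in range(n_launches))

def summarize(n_launches, trials=100_000):
    results = [portfolio_result(n_launches) for _ in range(trials)]
    mean = sum(results) / trials
    p_loss = sum(r < 0 for r in results) / trials
    print(f"{n_launches:>2} launches: mean profit ${mean:.1f}M, "
          f"probability of an overall loss {p_loss:.1%}")

summarize(8)    # fewer launches: lower expected profit, higher chance of a net loss
summarize(20)   # aggregation: higher expected profit and lower chance of a net loss
```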

There are many well-known examples of companies that decided to play it safe and paid a huge price for it. Kodak, Blockbuster, Nokia and Atari immediately come to mind. So, if organizations want managers to take more "intelligent" risks they need to create an environment that doesn't punish managers for the results of their individual decisions, even when the outcome turns out to be negative. This is not to say organizations shouldn't have a certain control over their human decision makers so they take potential losses seriously. Otherwise, managers might take huge risks because it is not really their money.  This means understanding how significant decisions under uncertainty have to be taken, and enforcing procedures for making such decisions, including documenting the assumptions and expectations, preferably for both reasonable 'worst case' and 'best case' scenarios, which will later allow a much more objective evaluation of the decisions made.

This balancing act for taking risks is definitely a challenge, but what organizations have to recognize is that excessive risk aversion favors the status quo which could eventually be even riskier.

A Spanish translation of this article can be found at: www.simplesolutions.com.co/blog

The value organizations can get from computerized simulations

The power of today's computers opens a new way to assess the impact of a variety of ideas on the performance of an organization, taking into account both complexity and uncertainty. The need stems from the common view of organizations and their links to the environment as inherently complex, while also exposed to high uncertainty. Thus every decision, sensible as it may seem at the time, could easily lead to very negative results.

One of the pillars of TOC is the axiom/belief that every organization is inherently simple. Practically it means that only a few variables truly limit the performance of the organization, even under significant uncertainty.

The use of simulations could bridge the gap between a seemingly complex system and relatively simple rules to manage it well. In other words, it can and should be used to reveal the simplicity.  Uncovering the simple rules is especially valuable in times of change, no matter whether the change is the result of an internal initiative or of an external event.

Simulations can be used to achieve two different objectives:

  1. Providing the understanding of the cause-and-effect in certain situations and the impact of uncertainty on these situations.

The understanding is achieved through a series of simulations of a chosen well-defined environment that shows the significant difference in results between various decisions. An effective educational simulator should prove that there is a clear cause-and-effect flow that leads from a decision to the result.

Self-discovery of ideas and concepts is a special optional subset of educational simulators.  It requires the ability to make many different decisions, as long as the logic behind the actual results is clear.

[Figure: a simple educational simulator for distribution systems]

  2. Supporting hard decisions by simulating a specific environment in detail, letting the user dictate a variety of parameters that represent different alternatives and get a reliable picture of the spread of results. The challenge is to model the environment in a way that keeps the basic complexity and represents well all the key variables that truly impact the performance.

I started my career in TOC by creating a computer game (the 'OPT Game') that aimed to "teach managers how to think", and then continued to develop a variety of simulations. While most of the simulators were for TOC education, I developed two simulations of specific environments aimed at answering specific managerial questions.

The power of today's computers is such that developing wide-scope simulators, which can be adjusted to various environments and eventually support very complex decisions, is absolutely viable. My experience shows that the basic library of functions of such simulators should be developed from scratch, as using general modules provided by others slows the simulations to a degree that makes them unusable.   Managers have to make many of their decisions very fast.  This means the supporting information has to be readily accessible.  Being fast is one of the critical necessary conditions for wide-scope simulations to serve as an effective decision-support tool.

Dr. Alan Barnard, one of the best-known TOC experts, is also the creator of a full supply chain simulator. He defines the managerial need as first being convinced that the new general TOC policies behind the flow of products would work truly well. But there is also a need to determine the right parameters, like the appropriate buffers and the replenishment times, and this can be achieved by a simulation.

There is a huge variety of other types of decisions that a good wide-scope simulator could support. The basic capability of a simulation is to depict a flow, like the flow of products through the supply chain, the flow of materials through manufacturing, the flow of projects, or the flow of money going in and out.   The simulated flow is characterized by its nodes, policies and uncertainty.  In order to support decisions there is a need to simulate several flows that interact with each other.  Only when the product flow, order flow, money flow and capacity flow (purchasing capacity) are simulated together can the essence of the holistic business be captured.  The simulator should allow easy introduction of new ideas, like new products that compete with existing products, to be simulated fast enough.  The emerging platform for 'what-if' scenarios is then open for checking the impact of the idea on the bottom line.

For many decisions the inherent simplicity, as argued by Dr. Goldratt, provides the ability to predict well enough the impact of a proposed change on the bottom line. Throughput Economics defines the process of checking new ideas by calculating the pessimistic and optimistic impact of that idea on the bottom line of the organization.  It relies on being able to come up with good enough calculations on the total impact on sales and on capacity consumption to predict the resulting delta-T minus delta-OE.
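A minimal sketch of such a check, with hypothetical numbers: the idea's impact on the bottom line is computed twice, once under pessimistic and once under optimistic assumptions about sales, while capacity consumption determines whether paid overtime is needed.

```python
# Sketch of a pessimistic/optimistic check of one idea (hypothetical numbers):
# accept a new deal that adds sales but consumes capacity on the critical
# resource, possibly requiring paid overtime.

def delta_profit(extra_units, t_per_unit, minutes_per_unit,
                 spare_constraint_minutes, overtime_cost_per_minute):
    delta_T = extra_units * t_per_unit
    minutes_needed = extra_units * minutes_per_unit
    overtime = max(0, minutes_needed - spare_constraint_minutes)
    delta_OE = overtime * overtime_cost_per_minute
    return delta_T - delta_OE

pessimistic = delta_profit(extra_units=800,  t_per_unit=25, minutes_per_unit=3,
                           spare_constraint_minutes=1500, overtime_cost_per_minute=5)
optimistic  = delta_profit(extra_units=1500, t_per_unit=25, minutes_per_unit=3,
                           spare_constraint_minutes=1500, overtime_cost_per_minute=5)

print(f"delta-T minus delta-OE: pessimistic {pessimistic}, optimistic {optimistic}")
# If even the pessimistic case is acceptable, the idea is safe to pursue;
# if only the optimistic case looks good, management knows where the risk lies.
```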

However, sometimes the organization faces events or ideas with wider ramifications, like impacting lead-times or being exposed to the 'domino effect', where a certain random mishap causes a sequence of mishaps, so more sophisticated ways to support decisions have to be in place. Such extra complications in predicting the full potential ramifications of new ideas can be resolved by simulating the situation with and without the changes due to the new ideas.  The simulation is the ultimate aid when straightforward calculations are too complex.

Suppose a relatively big company, with several manufacturing sites in various locations throughout the globe, plus its transportation lines, clients and suppliers, is simulated. All the key flows, including the money transactions and their timing, are part of the simulation.  This provides the infrastructure where various ideas regarding the market, operations, engineering and supply can be carefully reviewed and given a predicted impact on the net profit.  When new products are introduced, determining the initial level of the stock in the supply chain is tough because of its high reliance on forecasts.  Every decision should be tested according to both pessimistic and optimistic assumptions, and thus management can make a sensible decision that considers several extreme future market behaviors, looking for the decision that minimizes downsides and still captures potentially high gains.

Such a simulation can be of great help when an external event happens that disrupts the usual conduct of the organization. For instance, suppose one of the suppliers is hit by a tsunami.  While there is enough inventory for the next four weeks, the need is to find alternatives as soon as possible and also realize the potential damage of every alternative taken.  Checking this kind of 'what-if' scenario is easy to do with such a simulator, revealing the real financial impact of every alternative.

Other big areas that could use large simulations to check various ideas are the airline and shipping businesses.  The key problem in operating transportation is not just the capacity of every vehicle, but also its exact location at a specific time.  Any delay or breakdown creates a domino effect on the other missions and resources.  Checking the economic desirability of opening a new line has to include the possible impact of such a domino effect.  Of course, the exploitation of the vehicles, assuming that they are the constraint, should be a target for checking various scenarios through simulations.  Checking various options for dynamic pricing policies, known as yield management, could be enlightening as well.

While the benefits can be high indeed, one should be aware of the limitations. Simulations are based on assumptions, which open the way to manipulations or simply to failures. Let's distinguish between two different categories of causes for failure.

  1. Bugs and mistakes in the given parameters. These are failures within the simulation software or wrong inputs representing the key parameters requested by the simulation.
  2. Failure of the modeling to capture the true reality. It is impossible to simulate reality as is. There are too many parameters to capture. So, we need to simplify reality and focus only on the parameters that have, or might have in certain circumstances, a significant impact on the performance. For instance, it is crazy to model the detailed behavior of every single human resource. However, we might need to capture the behavior of large groups of people, such as market segments and groups of suppliers.

Modeling the stochastic behavior of different markets, specific resources and suppliers is another challenge. When the actual stochastic function is unknown there is a tendency to use common mathematical distributions, like the Normal, Beta or Poisson, even when they don't match the specific reality.
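A small sketch of why the choice of distribution matters: lead-times, for instance, are often skewed with a long right tail, and fitting a Normal distribution with the same mean and standard deviation understates how often a time buffer is breached. The numbers below are hypothetical.

```python
import random, statistics

# Hypothetical "true" lead-time: usually around 10 days, occasionally much longer.
random.seed(3)
true_lead_times = [random.lognormvariate(2.3, 0.5) for _ in range(100_000)]

mu = statistics.mean(true_lead_times)
sigma = statistics.stdev(true_lead_times)

buffer_days = 24
p_late_true = sum(t > buffer_days for t in true_lead_times) / len(true_lead_times)

# The same question answered with a Normal distribution of equal mean and stdev.
normal_samples = [random.gauss(mu, sigma) for _ in range(100_000)]
p_late_normal = sum(t > buffer_days for t in normal_samples) / len(normal_samples)

print(f"P(lead-time > {buffer_days} days): skewed reality {p_late_true:.1%}, "
      f"Normal model {p_late_normal:.1%}")
# The Normal model underestimates the tail, so buffers sized with it are too small.
```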

So, simulations should be subject to careful checks. The first big test should be depicting the current state. Does the simulation really show the current behavior?  As there should be enough intuition and data to compare the simulated results with the current-state results, this is a critical milestone in the use of simulations for decision support. In most cases there will at first be deviations caused by bugs and flawed input.  Once the simulation seems robust enough, more careful tests should be done to ensure its ability to predict future performance under certain assumptions.

So, while there is a lot to be careful about with simulations, there is even more to be gained by better understanding the impact of uncertainty and, through that, enhancing the performance of the organization.