Antifragile – strengths and boundaries from the TOC perspective

Antifragile is a term coined by Nassim Taleb as a major insight for dealing with uncertainty. It directs us to identify when and how the uncertainties we have to live with can be handled in our favor, making us stronger instead of reducing our quality of life. Taleb emphasizes the benefit we can get when the upside is very high while the downside is relatively small and easily tolerable. There is also a somewhat different way to turn uncertainty into a key positive factor: significantly increasing the chance of a big gain while reducing the chance of losing.  The generic message is that variability can be made positive and beneficial when you understand it well.
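Taleb's asymmetry can be made concrete with a tiny numeric sketch; the probabilities and payoffs below are purely hypothetical, chosen only to show how a bet that loses most of the time can still be strongly positive overall:

```python
# Hypothetical convex bet: frequent small losses, rare large gain.
p_loss, loss = 0.9, -1.0   # 90% chance of a small, tolerable downside
p_gain, gain = 0.1, 50.0   # 10% chance of a large upside

expected_value = p_loss * loss + p_gain * gain
print(expected_value)  # 4.1: positive overall, despite losing 9 times out of 10
```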

While it is obvious that the concept of antifragility is powerful, I have two reservations about it. One is that it is impossible to become fully antifragile.  We human beings are very fragile due to the many uncertain causes that impact our lives.  There is no way we can treat all of them in a way that gains from the variability.  For instance, there is always a risk of being killed by an act of terror, a road accident or an earthquake.   Organizations, small and big, are also fragile, without any viable way to become antifragile to all potential threats. So, while looking to become antifragile to specific causes adds great value, it cannot be done for all sources of uncertainty.

The other reservation is that finding where we gain much more from the upside, while losing relatively little from the downside, still requires special care, because the accumulation of too many small downsides might still kill us. Taleb gives several examples where small pains do not accumulate into a big pain, but this is not always the case, certainly not when we speak about money lost in one period of time.  So, there is a constant need to measure our current state before we take a gamble where the upside is much bigger than the downside.
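A quick Monte-Carlo sketch (illustrative numbers only) makes the point: even a bet with a clearly positive expected value can ruin a player whose capital is finite, once enough small downsides accumulate before the big upside arrives:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def survives(capital=10.0, bets=200):
    """Play repeated convex bets; return True if capital never hits zero."""
    for _ in range(bets):
        capital += 50.0 if random.random() < 0.1 else -1.0
        if capital <= 0:
            return False  # the accumulated small losses killed us first
    return True

ruined = sum(not survives() for _ in range(10_000))
print(f"ruined in {ruined / 100:.1f}% of runs")  # roughly a third, despite EV > 0
```

With only 10 units of starting capital, a run of ten small losses before the first win is enough to go bankrupt, which is why the current state must be measured before taking even a favorable gamble.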

The focus of this article is on the impact of the concept of antifragility, and its related insights, on managing organizations. This post doesn't deal with the personal impact or with macroeconomics.  The objective is to learn how the generic insights lead to practical insights for organizations.

There are some interesting parallels between TOC and the general idea behind antifragility. Goldratt strove to focus on directions where the outcomes are way beyond the inherent noise in the environment.  TOC uses several tools that aim not just at being robust, but at using the uncertainty to achieve more and more of the goal.

A commercial organization is fragile, first of all, in its ability to finance its coming short-term activities. This defines a line that accumulated short-term losses are allowed to reach before bankruptcy becomes imminent. Losses and profits accumulate over time periods and their fluctuations are relatively mild.  Sudden huge increases in profit are very rare in organizational activities.  They can happen when critical tests of new drugs or of revolutionary technologies take place, where success or failure has a very high and immediate impact.  As developing a new product usually involves long efforts over time, it means a very substantial investment, so the downside is not small.  The gain could be much higher, even by a factor of 10 or 100, but such a big success must have been intended early in the process, with only a very low probability of succeeding.  So, from the perspective of the organization, the number of failures of such ambitious developments has to be limited, unless it is a startup that takes the risk of failing to survive into account.

Where I disagree with Mr. Taleb is the assertion of unpredictability. The way Mr. Taleb states it is grossly misleading.  It is true that we can never predict a sporadic event, and we can never be sure of success.  But, in many cases, careful analysis and certain actions raise the odds of success and reduce the odds of failure.

One of the favorite sayings of Dr. Goldratt, actually one of the 'pillars of TOC', is: "Never Say I Know", which is somewhat similar to the unpredictability statement.  But Goldratt never meant that we know nothing; he meant that while we have a big impact on what we do, we should never assume we know everything. I agree with Taleb that companies that set for themselves specific numbers to reach in the future, sometimes for several years ahead, shoot themselves in the foot in a particularly idiotic way.

Can I offer the notion of 'limited predictability' as something people, and certainly the management of organizations, can employ? A more general statement is: "We always have only partial information, and yet we have to make our moves based on it".

There are ways to increase the probability of very big successes relative to failures, and by that achieve the right convex graph of business growth while keeping reasonable robustness. The downside in case of failure could still be significant, but not big enough to threaten the existence of the organization.  One of the key tools for evaluating the potential value of new products/services/technology is Goldratt's Six Questions, which have appeared several times in my previous posts on this blog.  The Six Questions guide the organization to look for the elimination of several probable causes of failure, though, of course, not all of them.

Add to it Throughput Economics, a recent development of Throughput Accounting, which helps check the short-term potential outcomes of various opportunities, including careful consideration of capacity. Throughput Economics is also the name of a new book by Schragenheim, Camp and Surace, expected to be published in May 2019, which goes into great detail on how to evaluate the possible range of impact on profit of an idea, and the combined impact of several ideas, considering the limited predictability.

Buffers are usually used to protect commitments to the market. The initial objective is being robust in delivering orders on time and in full.  But being able to meet commitments is an advantage over competitors who cannot, and by that it helps the client maintain a robust supply.  So, actually, the buffers serve to gain from the inherent uncertainty in the supply chain.

But there are buffers that provide flexibility, which is an even stronger means of gaining from uncertainty. For instance, capacity buffers, keeping options for a quick temporary increase in capacity for additional cost, let the organization grab opportunities that without the buffer would be lost.  Using multi-skilled people is a similar buffer with a similar advantage.
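The economics of such a capacity buffer can be sketched with hypothetical numbers: the extra order is worth grabbing whenever its throughput (revenue minus the truly variable costs) exceeds the cost of the temporary capacity needed to deliver it:

```python
def opportunity_delta(revenue, variable_cost, overtime_hours, overtime_rate):
    """Throughput-based check for grabbing an extra order via a capacity buffer."""
    throughput = revenue - variable_cost      # money the order truly adds
    buffer_cost = overtime_hours * overtime_rate  # price of the temporary capacity
    return throughput - buffer_cost           # accept when the delta is positive

delta = opportunity_delta(revenue=12_000, variable_cost=7_000,
                          overtime_hours=40, overtime_rate=60)
print(delta)  # 2600: the opportunity is worth activating the buffer for
```

All figures here are invented; the point is only that the buffer turns an otherwise lost opportunity into a simple throughput-versus-buffer-cost comparison.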

So far we dealt with evaluating risky opportunities, with their potential big gains versus the potential failure, trying both to increase the gain and its probability of materializing. There is another side to existing fragility: dealing with threats that could shake, even kill, the organization.

Some threats develop outside the organization, like sanctions on a country by other countries, a new competitor, or the emergence of a disruptive technology. But most threats are a direct outcome of the doings, or non-doings, of the organization itself.  These include stupid moves like buying another company and then finding out the purchased company has no value at all (it happened to Teva).  Most of the threats are relatively simple mistakes, flaws in the existing paradigms or missing elements in certain procedures that, together with a statistical fluke (or "black swan"), cause huge damage.

How can we deal with threats?

If management is aware of such a threat, then the way to go is to put in place a control mechanism that is capable not only of identifying when the threat is materializing, but also of suggesting a way to deal with it. This handling of threats adds to the robustness of the organization, but not necessarily to its antifragility, unless new lessons are learned.

But too many truly dangerous threats are not anticipated, and that leaves organizations quite fragile. The antifragile way should be to have the courage to note a surprising signal, or event, and be able to analyze it in a way that exposes the flaw in the current paradigms or procedures.  When such lessons are learned, this is definitely gaining from the uncertainty.  The initial impact is that the organization becomes stronger through the lessons learned.  An additional impact takes place when the organization learns how to learn from experience, which makes it antifragile rather than just robust.

I developed a structured process of learning from one event, which is actually learning from experience, mostly from surprises, good or bad. The methodology uses some of the Thinking Processes of TOC in a somewhat different form, but in general prior knowledge of TOC is not necessary.  The detailed description of the process appears as a white paper at:

The insights of antifragility have to be coupled with another set of insights, adjusted to managing organizations, that offers effective tools for making superior decisions under uncertainty. The TOC tools do exactly that.

Innovation as a double-edged sword

Innovation is one of the few slogans that the current fashion in management adopts. The problem with every slogan is that it combines truth and utopia together.  Should every organization open a dedicated function for developing "innovation"?  I doubt it.  This blog has already touched upon various topics that belong to the generic term "innovative technology", like Industry 4.0, Big Data, Bitcoin and Blockchain.  Here I'd like to touch upon the generic need to be innovative, while also being aware of the risks.

It is obvious that without introducing something new the performance of the organization is going to get stuck. For many organizations, staying at their current level of performance is good enough. However, this objective is under constant threat, because a competitor might introduce something new and steal the market.  So, doing nothing innovative is risky as well. In some areas coming up with something new is quite common.  Sometimes the new element is significant and causes a long sequence of related changes, but many times the change is small and its impact is not truly felt.  Other business areas are considered 'conservative', meaning there is a clear tendency to stick to whatever seems to be working now.  In many areas, mainly conservative and semi-conservative ones, the culture is to watch the competition very closely and imitate every new move (not too many and not too often) that a competitor implements.  We see it in the banking systems and in the airlines.  Even this culture of quick imitation is problematic when a disruptive innovation appears from what is not considered "proper competition".  A good example is the hotel business, now under the disruptive innovation of Airbnb.  The airlines experienced a similar innovative disruption when the low-cost airlines appeared.

It is common to link innovation to technology. Listening to music went through several technological changes, from 78 records to LPs, to cassettes, to CDs, to MP3, each disrupting the previous industry.  However, there are many innovations, including disruptive innovations, that do not depend on any new technology, like the previous examples of Airbnb and low-cost flights, which use the available technology.  Technology companies actively look to introduce more and more features, most of which can no longer be defined as innovative.  After all, what new feature that appeared in Microsoft Windows in the last 10 years deserves to be called innovative?

Non-technological innovations could have the same potential impact as new technology. Fixing flawed current paradigms, like batch policies, has been proven very effective by TOC users. Other options for innovation are offering a new payment scheme or coming up with a new way to order a service, like Uber did.  An interesting question is whether non-technological innovations are less risky than developing a new technology.  They usually require less heavy investment in R&D, but they are also more exposed to fast imitation.  The nice point when current flawed paradigms are challenged is that the competitors might be frightened by the very idea of going against a well-established paradigm.

It seems obvious to assume that innovation should be a chief ongoing concern of top management and the board of directors. There are two critical objectives for including innovation within top management's focus.  One is to find ways to grow the company; the other is checking for signals that a potential new disruptive innovation is emerging.  Identifying such a threat should lead to an analysis of how to face it, which is pretty difficult to do because of the impact of inertia.

There is an ongoing search for new innovations, but it is much more noticeable in academia and among management consultants than among executives.   The following paper describes a typical academic research study that depicts the key concerns of board members; innovation is not high on their list.

How come so many directors do not see innovation as a major topic to focus on?

Let us investigate what evaluating an innovative idea means for an executive, or a director on the board. Somehow, many enthusiasts of innovation don't bother to tell us about the (obvious) risks of innovations. But experienced executives are well aware of the risks; actually, they are tuned to even exaggerate the risks, unless the original idea is theirs.

On top of the risk of grand failure there should be another realization about any innovation: the novel idea, good and valuable as it may be, is far from being enough to ensure success.  Eventually there is a need for many complementary elements, in operations as well as in marketing, and most certainly in sales, to be part of the overall solution that makes the innovation a commercial success. This means the chances of failure are truly high, not just because the innovation itself might not work, but because of one missing element necessary for success.  That missing element could be, for instance, the part of the solution that should have overcome a significant negative consequence of using the innovative product/service.

Consider the very impressive past innovation of the Concorde aircraft, a jet plane that was twice as fast as any other. It flew from New York to Paris in a mere 3.5 hours.  The Concorde was in use for 27 years until its limitations, its cost and its much too high noise, suppressed the innovation.  So, here is just one example of a great innovation and a colossal failure due to two important negative sides of the specific product.

When we analyze the risk of a proposed innovative idea we have to include the personal risk to the director or manager who brought the idea and stands behind it all the way.  To be associated with a grand failure is quite damaging to a career, and it is also not very nice to be remembered as the father of a colossal failure.

This is probably a more rational explanation for the fact that innovation is not among the top concerns of board directors than what the above article suggests. Of course, relatively young people, or executives who are close to retirement, might be more willing to take the chance.

One big question is how we can reduce the risks when an innovation carrying a big promise is put on the table. In other words, how to do a much better job of analyzing the future value of the innovation, and also of planning the other parts required to significantly increase the chance of success.   Another element is understanding the potential damage of failure and how most of that damage can be contained.

'Thinking out of the box' is a common name for the kind of thinking that could be truly innovative. This gives a very positive image to such thinking, where 'sacred cows' are slaughtered.  On one hand, in order to come up with a worthy innovative insight one has to challenge well-rooted paradigms; on the other hand, just being out of the box does not guarantee new value, while it definitely means high risk.

TOC offers several tools to conduct the analysis much better. First are Goldratt's Six Questions, which guide a careful check from the perspective of the users who could win from the innovation, leading also to the other parts that have to accompany the innovative idea.   Using the Future Reality Tree (FRT) to identify possible negative branches for the user could be useful.  Throughput Economics tools could be used to predict the range of possible impacts on capacity levels, and through this get a clue of the financial risk versus the potential financial gain.  The same tool, the FRT, could become truly powerful for inquiring into the potential threat of a new innovation developed by another party.  We cannot afford to ignore innovation, but we need to be careful; thus, developing the steps for a detailed analysis should get high priority.


The confusion over Blockchain

By Amir and Eli Schragenheim

Blockchain is often described as the technology that is going to change the world economy. In itself such a declaration makes it vital to dedicate a lot of time to learning the new technology and what value it can generate.  Blockchain is vital for Bitcoin and similar crypto-currencies, but the claim of changing the economy looks far beyond the virtual money.  The direct connection between Blockchain and Bitcoin causes a lot of confusion: while Bitcoin is based on Blockchain technology, there might be a lot of other things to do with Blockchain as a technology by itself. Assessing the value of a new technology is open to wide speculations that add to the confusion.  For instance, Don Tapscott says, among other predictions, that Blockchain will lead to the creation of a true sharing economy. A post on Bitcoin already appeared in this blog, where the biggest concern was that the exchange rate of Bitcoin behaves in a way too volatile to be useful as a currency.  Let's have a look at Blockchain as a new technology and inquire what its future value can be.

Let's start with Goldratt's Six Questions for assessing the value of a new technology. This is a great tool for guiding us to raise the right questions and look for possible answers:

  1. What is the power of the new technology?
  2. What current limitation or barrier does the new technology eliminate or vastly reduce?
  3. What are the current usage rules, patterns and behaviors that bypass the limitation?
  4. What rules, patterns and behaviors need to be changed to get the benefits of the new technology?
  5. What is the application of the new technology that will enable the above change without causing resistance?
  6. How to build, capitalize and sustain the business?

The power of the Blockchain technology

The simple answer to the first question (what is the power of the new technology?) is being able both to execute financial transactions and, mainly, to record the confirmed information in a way that is very safe.  The first part means transferring money from one digital account to another without the need for an intermediary.  However, the currency has to be one of the crypto-currencies and both sides need to maintain their digital wallets.  The technology checks that there is enough money in the wallet to make the transfer.

The second part of the power is keeping the information records that comprise the general ledger safe. This is the truly unique feature of Blockchain.  Entering the general ledger already involves a certain level of checking and confirmation by many distributed computers.  In itself the recorded information is transparent to all (unless one encrypts it using the currently available techniques). The unique part is that it is practically impossible, even for the involved parties, to change the information of a transaction.  If there is a mistake, then a new transaction correcting the previous one has to be executed and stored.
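A toy hash chain in Python illustrates this tamper-evidence (a drastic simplification: a real Blockchain adds distributed consensus among many computers on top of the chaining shown here):

```python
import hashlib
import json

def _digest(tx, prev):
    """Hash a transaction together with the previous block's hash."""
    return hashlib.sha256(json.dumps({"tx": tx, "prev": prev}).encode()).hexdigest()

def add_block(chain, tx):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"tx": tx, "prev": prev, "hash": _digest(tx, prev)})

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        # Any edit to an earlier transaction breaks every hash after it.
        if block["prev"] != prev or block["hash"] != _digest(block["tx"], prev):
            return False
        prev = block["hash"]
    return True

ledger = []
add_block(ledger, "Alice pays Bob 5 coins")
add_block(ledger, "Bob pays Carol 2 coins")
print(is_valid(ledger))                       # True
ledger[0]["tx"] = "Alice pays Bob 500 coins"  # attempt to rewrite history
print(is_valid(ledger))                       # False: tampering is detected
```

This is why a mistake can only be fixed by appending a correcting transaction, never by editing the stored one.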

Coming now to the second question: what limitation is eliminated or vastly reduced by the new technology?

Blockchain experts claim that the current limitation of lack of trust between parties that hardly know each other is eliminated by Blockchain. Trade is problematic when a minimum of trust isn't maintained; thus, governments force rules on trade.  The basic minimum trust means that when you pay the required price you have confidence that you are getting the merchandise you have paid for.  This is what governments try to control through regulations and laws. When it comes to exchanging value between entities in different countries, maintaining that trust is problematic.

Is the limitation the need to use intermediaries? In most value exchanges through the Internet we currently need, at the very least, two different intermediate parties: one that transfers the money and one that transfers the purchased value.  The intermediaries are, many times, slow and expensive.  Can Blockchain substitute for the shipping company? Does the essence of the value of Blockchain lie in lowering the cost of the value transfer?  If Blockchain becomes effective in bypassing the banks, then we might see a major improvement in the banks and a substantial reduction of their costs.  When this takes place, what would then be the limitation removed by Blockchain?

While Blockchain can directly support the actual transfer of virtual money, it can only record the data about the physical transport of merchandise, unless the merchandise is digital. So, for buying music, ebooks, videos and other digital information it is possible to overcome the limitation of trust through Blockchain.  This is a unique market segment where Blockchain provides the necessary minimum trust for the value exchange.

We propose that the safety of the data is the key limitation that Blockchain is able to vastly reduce.

Is the current safety of the information on transactions, especially financial transactions, limited?

The irony is that the threat to our digital data is not that high today, but it is growing very fast. So, while people still feel relatively secure with their financial and intellectual data stored in the bank, on their computer or on the cloud, in the not-too-far future this safety is likely to diminish substantially.

Let's now evaluate the third question: how are the security issues of value exchange handled today?

First let's focus on value exchange. Later, we'll review whether keeping very critical data safe would add substantial value.

What are the current generic difficulties of exchanging value? The first need is reaching an agreement between buyer and seller.  Does the seller truly own the specific merchandise the buyer is interested in?  The current practice is to buy only from businesses that have established their reputation, like digital stores for which seemingly objective sites record the testimonies of satisfied buyers who purchased from those stores.  The more expensive the merchandise, the more care the buyer needs to apply.

Credit cards, banks, PayPal and the like play a major part in making money transfer relatively safe. Very large deals would use direct transfer between banks, and it is true that such a transfer, between different banks in different countries, currently takes about three days and uses the cumbersome SWIFT system.  Credit card transactions might face the risk of giving away the credit card details, but there currently seems to be good enough protection, on top of the credit card companies taking certain responsibility and operating sophisticated machine-learning algorithms to solve that.  As already mentioned, we have no guarantee that in the near future all the current safety measures won't be violated by clever hackers.

Yet, there are two different major safety concerns in the exchange of value. One is the identity of the site I'm communicating with for the value exchange.  More and more fake sites appear that disguise themselves as a known site.  This causes an increasing feeling of insecurity.  The other concern is that the seller will not follow the commitment to send the right goods on time.

The current generic practices regarding the safety of data lean heavily on the financial institutions using their most sophisticated solutions to protect the data. However, those institutions also become the most desired targets for hackers.

Protecting our most important data, especially a person's identity, the ownership of real-estate assets and medical records, is of high value and requires using the best available means of protection; if a much better data-protection technology appears, then for such data it could bring a lot of value. Other data, which is much less critical, could use less expensive protective means.

The fourth question focuses on the detailed answer of how Blockchain should operate, and what other means are required to significantly improve the current situation regarding safety.

A solution based on Blockchain should come with procedures that, at the very least, follow a whole deal: from recording the basic intent to buy X for the price of Y, to initiating the money transfer, no matter whether it is a direct transfer or an instruction to a financial institution to move dollars from the buyer's account to the seller's account.  Then the solution should record the shipment data of the goods until confirmation of acceptance.  The chain of confirmed data on transactions seems to be the minimum solution where the safety and objectivity provided by the Blockchain service (an intermediary!) yield significant added value relative to the current practices.

Such a service could also check the records of both the seller and the buyer: how many past deals were completed successfully, and how many pending deals have been open for a relatively long time.  This is a much more powerful check than testimonials.  Fake accounts, without a proven history, could be identified by that service, providing extra safety to deals.

Using such a service would have a cost associated with it, and we're not sure it would be low. The users will have to decide whether to use it or stick with the current technologies, depending on the perceived level of safety.

When such a service is launched, offering extra-safe records of deals, it could be extended to keeping records of ownership and identities. In a world facing growing threats to the safety of its digital records, such a service is very valuable.  Will it cause a revolution in the economy?  We don't think so.

As we don't have, at the moment, a full Blockchain service, there is no point in addressing the last two Goldratt questions.  Organizations that would like to offer a service using Blockchain and complement it with the required additional elements would need to provide the full answer to the fourth question, and then also answer the two final questions, in order to build the full vision for the Blockchain technology to become viable.

Behavioral biases and their impact on managing organizations – Part 2

By Eli Schragenheim and Alejandro Céspedes (Simple Solutions)

This is the second of our posts on the topic of biases and how TOC should treat them. Behavioral biases mean that many people make decisions that seem wrong from the outside.  Such judgment is based on following the cause-and-effect from the decision up to expected results that seem inferior to those of a different decision.  The troubling point for the TOC community, and for several other approaches trying to change established paradigms, is to understand how come managers continue to make decisions that lead to undesired outcomes.  We'll focus this time on 'mental accounting' and 'sunk cost', and, like in the first post, we'll eventually deal not just with the bias itself, but mainly with how it affects managing organizations.

Suppose that you bought yourself a new car but it turned out to be very disappointing.   How would you consider the idea of selling the new car, for just 75% of what you paid, and buying another one?

The above example demonstrates two different biases, the sunk cost fallacy and mental accounting, both blocking the idea of selling the car and buying another one.

The sunk cost fallacy

A 'sunk cost' is a cost that has already been incurred and cannot be recovered. Standard economic theory states that sunk costs should be ignored when making decisions, because what happened in the past is irrelevant. Only costs that have not yet been incurred and are necessary to the decision at hand should be considered. This seems to be a logical process, without letting emotions interrupt it.  However, in reality people prefer to sit through a boring movie instead of leaving halfway through, because the ticket has already been paid for. Organizations continue to invest money in unpromising or doomed projects just because of the time and money they've already put in.  The simple realization is that emotions have enough power to twist the logical process of decision making.

In the car example, the part of the car's cost that cannot be redeemed, 25% of the original price, is a sunk cost, meaning it shouldn't be part of the decision. What should be part of the decision is whether you can afford another car.  Assuming that buying the disappointing car consumed all the money you could afford, you might need to look for a second-hand car that will be better tuned to you.  Isn't it quite natural to think like that?  However, most people would stick with the disappointing car without even considering selling it and buying another car they can somehow afford, just to avoid having to recognize that they bought the wrong car. Ignoring sunk costs means openly admitting a mistake.  This causes a very unpleasant feeling that threatens our self-confidence, so we try to ignore it by pretending the spending was worthy.
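The car example reduces to simple arithmetic; the price below is a hypothetical figure, used only to separate the number that belongs in the decision from the one that does not:

```python
original_price = 30_000               # hypothetical purchase price
resale_value = 0.75 * original_price  # 22,500 you can still recover

sunk = original_price - resale_value  # 7,500 is gone either way: ignore it
# The only relevant question going forward: is resale_value (plus any spare
# budget) enough for a car you will actually be happy with?
print(sunk, resale_value)  # 7500.0 22500.0
```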

Mental accounting

The decision to buy a car without considering the full current financial state of the buyer, based solely on the budget allocated for this specific purpose, is mental accounting. This bias, considering only the money available for a specific topic, is typical of significant categories of spending, usually not top priority, but important enough not to ignore.  The core cause behind this bias is different from the cause of evading sunk costs.

Maintaining a special 'account' for a specific need is actually a way to protect that need from being 'stolen' by other needs. The protection is required because we don't have the capacity and capability to view, every time we need to make a decision, the whole financial situation against the whole group of different needs and desires in order to come up with the right priorities.  Thus, we create those accounts for worthy needs and decouple them from re-consideration, even though from time to time we might make a serious error.

These biases seem reasonable for the average Joe, but what is their impact on managing organizations?

As we saw in the previous post, these biases are even more relevant to organizational decision making, because of the decision maker's concern about how the decisions might be judged by others, especially after-the-fact judgment based on the actual outcome. The point here is that if a significant sum was invested in something that produced no value, then somebody has to pay for the mistake.  Ignoring the sunk cost amounts to openly recognizing that money was wasted.

Thus, 'sunk cost' is a devastating element in organizational behavior, being responsible for continuing with projects when it is already clear there is no value left in them. Another typical case is refusing to write off inventory that has no chance of being sold, or refusing to sell it for less than the calculated cost. The direct cause is practical and rational: don't rock the boat, because if you do, all hell breaks loose.  This is much more devastating than individuals trying to keep their dignity by not admitting they wasted money without getting real value.

The impact of mental accounting on organizations is HUGE! It encompasses all the aspects of what is called in TOC ‘local thinking’.  It is caused by being unable to handle the complexity of considering the ramifications of any decision on the holistic system.  Organizations are built of parts, and it is simple enough to measure the performance of every part, even when its real impact on the organization is quite different.  Evaluating the full impact of a decision on the whole organization is frightening, because it seems way too complicated.

The common way to reduce the impact of complexity is to assign an account to every product, big deal and client, and consider only the data required for maintaining that specific account: the revenues, the costs and the calculated “profit”. We put “profit” in quotation marks because without considering the wider dependencies, including the capacity of critical resources, there is no good measure of the true added profit of the product/deal/client to the organization.  Essentially, current cost accounting methods are based on mental accounting to simplify the overall system.

Understanding the difficulty of considering all the dependencies within the holistic system is critical for the TOC insights aimed at overcoming the difficulty without the resulting distortions. The basic thinking habits of people are set to bypass complexity in a straightforward way: looking just at the decision at hand and its immediate data, avoiding information that complicates the simple rules.

The TOC way of simplifying complex situations is to find the few variables that impact the outcomes much more than the level of the ‘noise’ (the inherent regular variation). The existence of uncertainty, on top of the complexity, actually simplifies the situation, because the variation introduces a level of noise that makes it practically impossible to optimize within that noise.  Recognizing the limitation of optimization enables management to look just for the few variables that impact performance beyond the noise, which vastly simplifies the complexity and provides a way to make superior decisions.  TOC may seem to a newcomer more complicated than the common way.  Actually, all it requires is a lot of common sense and the clear recognition that being approximately right is much superior to being precisely wrong.
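The point about optimizing within the noise can be illustrated with a small Monte Carlo sketch. All the figures and the model here are invented for illustration: tweaking a parameter whose effect is smaller than the noise produces no detectable difference, while changing the variable that truly limits performance stands out clearly.

```python
import random

random.seed(1)

def weekly_throughput(constraint_capacity, batch_size):
    # Invented model: output is set by the constraint's capacity, while the
    # batch-size "optimization" moves the result far less than the noise.
    noise = random.gauss(0, 10)              # inherent regular variation
    tuning = -abs(batch_size - 50) * 0.05    # tiny effect of fine-tuning
    return constraint_capacity + tuning + noise

def average(runs=1000, **kwargs):
    return sum(weekly_throughput(**kwargs) for _ in range(runs)) / runs

a = average(constraint_capacity=100, batch_size=40)  # tweak within the noise
b = average(constraint_capacity=100, batch_size=60)  # another tweak
c = average(constraint_capacity=120, batch_size=50)  # elevate the constraint

print(round(a, 1), round(b, 1), round(c, 1))  # a and b are indistinguishable
```

The two batch-size "optimizations" end up inside the noise band, while elevating the constraint shifts the average well beyond it — the few variables that matter are visible precisely because the noise swamps everything else.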

A Spanish translation of this article can be found at:

Behavioral biases and their impact on managing organizations

By Eli Schragenheim and Alejandro Céspedes (Simple Solutions)

Most of us, especially TOC practitioners, consider ourselves very good at decision making thanks to our cause-and-effect thinking.  However, behavioral economics, notably the research of two Nobel laureates, Daniel Kahneman and Richard Thaler, has convincingly shown several general biases away from rational economic thinking, pushing most people to make decisions that look logically flawed. The psychology behind how people make decisions is extremely relevant to TOC because organizations are run by people, the same people TOC tries hard to convince to change their decision-making process. TOC usually treats cause-and-effect relationships as based on rational logic.  But cause-and-effect could also consider irrational causes, like having a special negative response to the term “loss”, even when it is not a loss, and predict the resulting effect of such a negative response.

Generally speaking TOC should look for answers to three key questions regarding such biases:

    1. Can we use cause-and-effect logic to map and explain biases? If so, can we eliminate, or significantly reduce, the biases?
    2. Is the impact of the biases on the organization’s decision making the same as on the decisions of an individual? Can we improve the organization’s decision making by treating the cause of the biases?
    3.  When numbers are fuzzy, as they usually are in corporate scenarios, what do managers rely on to make decisions?

Understanding loss aversion

In itself loss aversion is not a bias. It is a reasonable way to stay away from trouble.  What has been shown is that it is often inconsistent, which practically means that human beings are impacted by irrelevant parameters that should not have an impact. To demonstrate the inconsistency we will use two experiments presented in “Thinking, Fast and Slow” by Prof. Kahneman.

In one experiment people were told that they had been given US$1,000 and that they had to choose between a 50% chance of winning an additional US$1,000 or getting US$500 for sure.

It’s no surprise that the vast majority of people were risk averse and chose to get the US$500.

What’s interesting is that when people were told that they had been given US$2,000 and then had to choose between a 50% chance of losing US$1,000 or losing US$500 for sure, many people suddenly became risk seekers and chose the gamble.

In terms of final state of wealth both cases are exactly the same. Both cases put the choice between getting US$1,500 for sure and accepting a gamble with equal chances of ending up with US$1,000 or US$2,000. The two cases differ in their framing of the choice.  In the first case the choice is verbalized between gaining and taking a risk to gain more. The second case frames the dilemma between losing and potentially losing more (or not losing).  The fact that many people made a different decision between the cases shows a bias based on the framing of a ‘loss’ versus ‘gaining less’. It demonstrates that the wording has a decisive impact.

These two experiments demonstrate two important findings. One is that “losses” loom much larger than “gains”; the other is that people become risk seeking when all their options are bad. This also explains why most people turn down a bet with a 50% chance of losing US$100 and a 50% chance of winning US$150, even though on average the result is positive. If the bet offered a 50% chance of winning US$200, then a balance between risk seeking and risk aversion would be achieved. That means “losing” is about twice as strong as “winning” as a general value assessment.
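The 2:1 weighting can be made concrete with a rough calculation. The simple linear "subjective value" below is our illustrative assumption, not Kahneman's exact model, but the bet figures are those of the example above:

```python
# Subjective value of a 50/50 gamble under loss aversion, assuming (as a
# simplification) that gains count at face value while losses are weighted
# by a loss-aversion factor of about 2, as suggested by the experiments.
LOSS_AVERSION = 2.0

def subjective_value(gain, loss, p_gain=0.5):
    return p_gain * gain - (1 - p_gain) * LOSS_AVERSION * loss

print(subjective_value(150, 100))  # -> -25.0: rejected despite +25 expected value
print(subjective_value(200, 100))  # -> 0.0: the balance point mentioned above
```

The first bet has a positive expected value of US$25, yet its subjective value is negative once losses are doubled; only around a US$200 upside does the gamble feel neutral.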

Should we be surprised by this seemingly irrational behavior?

Losing an existing US$100 might disrupt the short-term plans of a person, while the value of an additional US$150 is less clear. So, even though it is clear to most people that overall this is a good gamble, they resist it based on recognizing the greater negative impact of the loss.

Losing all we have is a huge threat. So, every person sets a mode of survival that should not be breached.  As the ability of most people to manage gains and losses in detail is limited, the survival instincts lead to a negative reaction to any potential loss, weighting it more heavily than the equivalent gain. So, taking into account our limited capabilities to control our exact state, we develop simple, fast rules to make safe decisions. A simple rule could be “don’t gamble ever!”, or “don’t bother with gambles unless you are certain to win much more.” These heuristics are definitely very helpful in most situations, but they can be costly in others.

While risk aversion seems rational enough, the framing bias is an irrational element, but the cause behind it is pretty clear and can be outlined as regular cause-and-effect.

We further assume that ‘framing’ is a bias that a person with a good background in probability theory would be able, most of the time, to resist, coming up with consistent decisions, especially for significant ones.

Does this hold true for decisions made on behalf of an organization?

Suppose you are the regional sales manager of a big company and have to decide whether to launch a new product or not. Historically it has been statistically shown that there is a fifty-fifty chance that the new product will make a profit of US$2 million in one year or that it will lose a million dollars and its production would stop at the end of the year.

What would you do?

Our experience says that most seasoned managers will refuse to take the risk. Managers are naturally risk averse regarding any outcomes that will be attributed directly to them. As a matter of fact, every decision on behalf of an organization goes through two different evaluations: One is what is good to the organization and the other is what is good to the decision maker.

It’s common in many organizations that a “success” leads to a modest reward while a “failure” leads to a significant negative result for the manager. What’s more, because of hindsight bias, decisions are assessed not by the quality of the decision-making process and the information available at the time they were made, but by their outcomes. No wonder loss aversion intensifies in corporate scenarios!

Earlier we mentioned that teaching the basics of probability theory and the acknowledgement of the different biases should reduce their impact. But, the unfortunate fact is that in most cases the decision makers face uncertain outcomes for which the probabilities are unknown. The case of launching a new product is such a case.  The statistical assessment of fifty-fifty chance is very broad and the decision maker cannot assume she knows the real odds.  This fuzzy nature of assessments naturally makes people even more risk averse, because the risk could be bigger than what is formally assessed. On the other hand, managers are expected to make some decisions, so they are sometimes pushed to take risky decisions just in order to look active as expected.

Now suppose that you are the Sales Vice-President and you have to decide whether to launch 20 different new products in 20 different regions. All product launches carry statistics similar to those presented earlier (50% chance of making US$2M and 50% of losing US$1M). Suppose the company is big enough to overcome several product flops without threatening its solvency.

Would you launch all of them?

Assuming the success or failure of each of the products is independent of the other products, the simple statistical model would predict, on average, a total profit of US$10M. However, since top management will most probably judge each decision independently, a bias known as narrow framing, the VP of Sales will try her best to minimize the number of failures. She might decide to launch only 8, basing her choice on the best intuition she has, even though she is aware she doesn’t really know. What’s ironic is that there’s a higher overall risk for the company in launching 8 products than 20 because of the aggregation effect.
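The aggregation effect is easy to check with a short Monte Carlo sketch using the figures above: launching all 20 products yields a higher average profit and a noticeably lower chance of an overall loss than launching only 8.

```python
import random

random.seed(42)

# Each launch (figures from the example above): 50% chance of +$2M, 50% of -$1M.
def portfolio(n_launches, trials=50_000):
    totals = [sum(2 if random.random() < 0.5 else -1 for _ in range(n_launches))
              for _ in range(trials)]
    avg = sum(totals) / trials
    p_loss = sum(1 for t in totals if t < 0) / trials
    return avg, p_loss

avg8, loss8 = portfolio(8)
avg20, loss20 = portfolio(20)
print(f"8 launches:  average ${avg8:.1f}M, chance of overall loss {loss8:.1%}")
print(f"20 launches: average ${avg20:.1f}M, chance of overall loss {loss20:.1%}")
```

With 8 launches the chance of ending the year with an overall loss is roughly 14-15%; with all 20 it drops to about 6%, while the expected profit grows from US$4M to US$10M. Judging each launch separately hides exactly this effect.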

There are many well-known examples of companies that decided to play it safe and paid a huge price for it. Kodak, Blockbuster, Nokia and Atari immediately come to mind. So, if organizations want managers to take more “intelligent” risks they need to create an environment that doesn’t punish managers for the results of their individual decisions, even when the outcome turns out to be negative. This is not to say organizations shouldn’t keep certain controls over their human decision makers so that they take potential losses seriously. Otherwise, managers might take huge risks because it is not really their money.  This means understanding how significant decisions under uncertainty have to be taken, and enforcing procedures for making such decisions, including documenting the assumptions and expectations, preferably for both reasonable ‘worst case’ and ‘best case’ scenarios, which will later allow a much more objective evaluation of the decisions made.

This balancing act for taking risks is definitely a challenge, but what organizations have to recognize is that excessive risk aversion favors the status quo which could eventually be even riskier.

A Spanish translation of this article can be found at:

The value organizations can get from computerized simulations

The power of today’s computers opens a new way to assess the impact of a variety of ideas on the performance of an organization, taking into account both complexity and uncertainty. The need stems from the common view of organizations and their links to the environment as inherently complex, while also exposed to high uncertainty. Thus, every decision, sensible as it may seem at the time, could easily lead to very negative results.

One of the pillars of TOC is the axiom/belief that every organization is inherently simple. Practically, it means that only a few variables truly limit the performance of the organization, even under significant uncertainty.

The use of simulations could bridge the gap between a seemingly complex system and relatively simple rules to manage it well. In other words, simulations can and should be used to reveal the simplicity.  Uncovering the simple rules is especially valuable in times of change, no matter whether the change is the result of an internal initiative or of an external event.

Simulations can be used to achieve two different objectives:

  1. Providing the understanding of the cause-and-effect in certain situations and the impact of uncertainty on these situations.

The understanding is achieved through a series of simulations of a chosen well-defined environment that show the significant difference in results between various decisions. An effective educational simulator should prove that there is a clear cause-and-effect flow leading from a decision to the result.

Self-discovery of ideas and concepts is a special optional subset of educational simulators.  It requires the ability to make many different decisions, as long as the logic behind the actual results is clear.


A simple educational simulator for distribution systems

  2. Supporting hard decisions by simulating a specific environment in detail, letting the user set a variety of parameters that represent different alternatives and get a reliable picture of the spread of results. The challenge is to model the environment in a way that keeps the basic complexity and represents well all the key variables that truly impact the performance.
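As a toy illustration of the first, educational, objective, here is a hypothetical single-location distribution simulator (all parameters are invented): it compares daily replenishment up to a stock buffer against periodic batch ordering, and shows a similar or better service level with a much lower average stock.

```python
import random

random.seed(7)

def simulate(daily_replenish, days=360, buffer=150, batch=400, lead_time=3):
    """Toy single-location stock simulator. Daily demand is uncertain
    (uniform 5..35, average 20); orders arrive after a fixed lead time.
    Returns (average end-of-day stock, total lost sales)."""
    stock, lost, stock_sum = buffer, 0, 0
    pipeline = []  # list of (arrival_day, quantity)
    for day in range(days):
        stock += sum(q for d, q in pipeline if d == day)  # receive arrivals
        pipeline = [(d, q) for d, q in pipeline if d > day]
        demand = random.randint(5, 35)
        sold = min(stock, demand)
        lost += demand - sold
        stock -= sold
        if daily_replenish:
            # order exactly what brings the inventory position back to the buffer
            gap = buffer - stock - sum(q for _, q in pipeline)
            if gap > 0:
                pipeline.append((day + lead_time, gap))
        elif day % 20 == 0:
            # periodic batch ordering: one big order every 20 days
            pipeline.append((day + lead_time, batch))
        stock_sum += stock
    return stock_sum / days, lost

print("daily replenishment:", simulate(True))
print("periodic batches:   ", simulate(False))
```

The educational point is not the specific numbers but the cause-and-effect a learner can discover: replenishing frequently to a buffer decouples the stock level from the forecast horizon, while big periodic batches carry a much larger average stock for the same demand.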

I started my career in TOC by creating a computer game (the ‘OPT Game’) that aimed to “teach managers how to think”, and then continued to develop a variety of simulations. While most of the simulators were for TOC education, I developed two simulations of specific environments aiming at answering specific managerial questions.

The power of today’s computers is such that developing wide-scope simulators, which can be adjusted to various environments and eventually support very complex decisions, is absolutely feasible. My experience shows that the basic library of functions of such simulators should be developed from scratch, as using general modules provided by others slows the simulations to a degree that makes them unusable.   Managers have to make many of their decisions very fast.  This means the supporting information has to be readily accessible.  Being fast is one of the critical necessary conditions for wide-scope simulations to serve as an effective decision-support tool.

Dr. Alan Barnard, one of the best-known TOC experts, is also the creator of a full supply chain simulator. He defines the managerial need: first, to be convinced that the new general TOC policies behind the flow of products would truly work well. But there is also a need to determine the right parameters, like the appropriate buffers and the replenishment times, and this can be achieved by a simulation.

There is a huge variety of other types of decisions that a good wide-scope simulator could support. The basic capability of a simulation is to depict a flow, like the flow of products through the supply chain, the flow of materials through manufacturing, the flow of projects, or the flow of money going in and out.   The simulated flow is characterized by its nodes, policies and uncertainty.  In order to support decisions there is a need to simulate several flows that interact with each other.  Only when the product flow, order flow, money flow and capacity flow (purchasing capacity) are simulated together can the essence of the holistic business be captured.  The simulator should allow easy introduction of new ideas, like new products that compete with existing products, to be simulated fast enough.  The resulting platform for ‘what-if’ scenarios is then open for checking the impact of an idea on the bottom line.

For many decisions the inherent simplicity, as argued by Dr. Goldratt, provides the ability to predict well enough the impact of a proposed change on the bottom line. Throughput Economics defines the process of checking new ideas by calculating the pessimistic and optimistic impact of that idea on the bottom line of the organization.  It relies on being able to come up with good enough calculations on the total impact on sales and on capacity consumption to predict the resulting delta-T minus delta-OE.
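The Throughput Economics check described above can be sketched as a small calculation. All figures here are hypothetical; delta-T is the added throughput (sales minus truly variable costs) and delta-OE the added operating expense, such as overtime to cover a capacity overload:

```python
# A minimal sketch of a Throughput Economics check: compute delta-T minus
# delta-OE for a proposed idea under both pessimistic and optimistic
# assumptions. All figures are invented for illustration.
def delta_profit(extra_units, t_per_unit, overload_hours, overtime_cost_per_hour):
    delta_t = extra_units * t_per_unit                   # added throughput
    delta_oe = overload_hours * overtime_cost_per_hour   # cost of added capacity
    return delta_t - delta_oe

# Idea: accept a new deal; check both ends of the assumed sales range.
pessimistic = delta_profit(extra_units=800,  t_per_unit=25,
                           overload_hours=120, overtime_cost_per_hour=90)
optimistic  = delta_profit(extra_units=1500, t_per_unit=25,
                           overload_hours=300, overtime_cost_per_hour=90)
print(pessimistic, optimistic)  # accept only if both ends are acceptable
```

In this invented case both the pessimistic (9,200) and optimistic (10,500) outcomes are positive, so the idea passes the check; when the pessimistic end is deeply negative, management faces a genuine risk decision rather than a calculation.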

However, sometimes the organization faces events or ideas with wider ramifications, like impacting lead-times or being exposed to the ‘domino effect’ where a certain random mishap causes a sequence of mishaps, so more sophisticated ways to support decisions have to be in place. Such extra complications of predicting the full potential ramifications of new ideas can be solved by simulating the situation with and without the changes due to the new ideas.  The simulation is the ultimate aid when straight-forward calculations are too complex.

Suppose a relatively big company, with several manufacturing sites in various locations throughout the globe, plus its transportation lines, clients and suppliers, is simulated. All the key flows, including the money transactions and their timing, are part of the simulation.  This provides the infrastructure where various ideas regarding the market, operations, engineering and supply can be carefully reviewed and given a predicted impact on the net profit.  When new products are introduced, determining the initial level of stock in the supply chain is tough because of its high reliance on forecast.  Every decision should be tested according to both the pessimistic and optimistic assumptions; thus management can make a sensible decision that considers several extreme future market behaviors, looking for the decision that minimizes downsides and still captures potentially high gains.

Such a simulation can be of great help when an external event disrupts the usual conduct of the organization. For instance, suppose one of the suppliers is hit by a tsunami.  While there is enough inventory for the next four weeks, the need is to find alternatives as soon as possible and also realize the potential damage of every alternative taken.  Checking this kind of ‘what-if’ scenario is easy to do with such a simulator, revealing the real financial impact of every alternative.

Other big areas that could use large simulation to check various ideas are the airline and shipping businesses.  The key problem in operating transportation is not just the capacity of every vehicle, but also its exact location at a specific time.  Any delay or breakdown creates a domino effect on the other missions and resources.  Checking the economic desirability of opening a new line has to include the possible impact of such a domino effect.  Of course, the exploitation of the vehicles, assuming that they are the constraint, should be a target for checking various scenarios through simulations.  Checking various options for the dynamic pricing policies, known as yield-management, could be enlightening as well.

While the benefits can be high indeed, one should be aware of the limitations. Simulations are based on assumptions, which open the way to manipulations or just failures. Let’s distinguish between two different categories of causes for failure.

  1. Bugs and mistakes in the given parameters. These are failures within the simulation software or wrong inputs representing the key parameters requested by the simulation.
  2. Failure of the modeling to capture the true reality. It is impossible to simulate reality as is. There are too many parameters to capture. So, we need to simplify reality and focus only on the parameters that have, or might have in certain circumstances, a significant impact on the performance. For instance, it is crazy to model the detailed behavior of every single human resource. However, we might need to capture the behavior of large groups of people, such as market segments and groups of suppliers.

Modeling the stochastic behavior of different markets, specific resources and suppliers is another challenge. When the actual stochastic function is unknown, there is a tendency to use common mathematical functions like the Normal, Beta or Poisson distributions, even when they don’t match the specific reality.
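A small sketch shows how costly that mismatch can be. Suppose real demand is skewed (here lognormal, with invented parameters) but the model assumes a Normal distribution with the same mean and standard deviation; a stock level set for a 99% service level under the Normal model then covers noticeably less of the actual demand:

```python
import random, statistics

random.seed(3)

# Real demand is skewed (lognormal); the model wrongly assumes it is Normal
# with the same mean and standard deviation. All parameters are invented.
actual = [random.lognormvariate(3, 0.8) for _ in range(100_000)]
mu, sigma = statistics.mean(actual), statistics.stdev(actual)
modeled = sorted(random.gauss(mu, sigma) for _ in range(100_000))

stock = modeled[int(0.99 * len(modeled))]   # "99% service" per the Normal model
coverage = sum(1 for d in actual if d <= stock) / len(actual)
print(f"stock sized for 99% under the Normal model covers {coverage:.1%} of actual demand")
```

The heavy right tail of the real distribution is exactly where shortages hurt, and it is exactly what the symmetric Normal model fails to represent.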

So, simulations should be subject to careful checks. The first big test should be depicting the current state. Does the simulation really show the current behavior?  As there should be enough intuition and data to compare the simulated results with the current-state results, this is a critical milestone in the use of simulations for decision support. In most cases there will at first be deviations caused by bugs and flawed input.  Once the simulation seems robust enough, more careful tests should be done to ensure its ability to predict the future performance under certain assumptions.

So, while there is a lot to be careful about with simulations, there is even more to be gained from them by better understanding the impact of uncertainty and thereby enhancing the performance of the organization.

The big slogan and the potential real value of Industry 4.0

By Eli Schragenheim and Jürgen Kanz

We are told that in order to keep up with the quick changes in the world, facing ever fiercer competition, manufacturing organizations have to join the fourth industrial revolution, called Industry 4.0, which is a very big suite of different new technologies in the field of IT, namely the Internet of Things (IoT), artificial intelligence and robotics.

The slogan of Industry 4.0 claims it is highly desirable to join the revolution before the competitors do. Well, we are not sure whether the term ‘revolution’ truly fits the new digital technologies.  But this is truly the smallest issue.  The fast pace of the technology should definitely force every top management team to think clearly about what the impact of the newest technological developments on the organization and its business environment could be.  Thinking clearly is required not only for finding new ways to achieve more of the goal, but also for understanding the potential new threats that such developments might bring.

There are two significant threats that new technology might create. First, it may push management to invest heavily in technology that is still half-baked and whose potential value is small, if it exists at all.  Second, it may cause a loss of focus on what brings value and what does not. Trying too many ideas, investing money and management attention in too many issues, could end with a big loss, or very low value.  Just look at the wide area of claims to bring value:

Image 1: Improvement areas for Industry 4.0, adapted from McKinsey Digital 2015,
“Industry 4.0: How to navigate digitization of the manufacturing sector”

The application of new IT technologies and the connection of known technologies is expected to lead to the following improvements per area:


Image 2: Expected improvements, adapted from McKinsey Digital 2015,
“Industry 4.0: How to navigate digitization of the manufacturing sector”

We can recognize a number of good time and cost reductions that will increase overall productivity, but what are the expectations of top managers? To gain more insight we can look into a survey by Roland Berger Strategy Consultants, with input from 300 top managers of German industries, as an example:

Image 3: Top-Manager expectations, adapted from “Die digitale Transformation der Industrie”, Roland Berger Strategy Consultants & BDI, last download 09/24/18

A big group of executives (43%) target only cost reduction with the help of Industry 4.0, while other managers want to have more sales with new products (32%) or more sales with existing products (10%). The objective to achieve more sales and cost reduction is a wish of 14% of the managers.

We can expect that approximately half of the managers will be satisfied with cost reduction due to improvements in the above-mentioned areas, but there is no element in the above images that directly supports gaining more sales of new or existing products.

We assume that, on one hand, IT-related product innovations will push sales of new technological products, like wearables (GPS watches, health control, sleep trackers, etc.). On the other hand, there is the wish to cut costs in production, which, due to the fierce competition, would press the price down and increase the sales quantity. Will this trend also increase net profit?  The short answer is: it depends; companies need to analyze the full impact on the bottom line very carefully.

The new digital technology can help reduce the “time to market”: the time to run a new product development project from idea to market launch, including customer contribution. One question is: by how much? The answer depends a lot on the specific technology of the new products.  Another question is whether Industry 4.0 can reduce production lead time, and what this could do to improve sales.

The mentioned improvements in Sales / Aftersales have an impact only on after-sales activities, but are they sufficient to create new sales?

It seems that the main vision of McKinsey and many other big players is limited to cutting operating expenses, which is fine for bringing certain value, but it is NOT a revolution. The tough reality of cutting costs is that it cannot be focused; it is spread over many cost drivers.  It requires a lot of management attention and usually brings limited net business value.  The question is: what if the management attention had been directed to giving higher value to more customers?

We understand that when some of the most relevant Industry 4.0 technology is implemented, and when the technological changes are combined with the right management processes, achievements like cutting lead times by 20-50% are not only possible, but should also dramatically improve the general responsiveness to customers, along with becoming truly reliable in meeting all commitments.

But, is it sufficient that the technology is installed and used to achieve such results?

And is it enough to reduce the time to market, or to cut the production lead time, to get better business results?

Significantly improved business results are achieved only when at least one of the following two conditions applies:

  1. Sales would grow either from selling more or from charging more and not losing too many sales because of the price increase.
  2. Costs are cut in a way that does not harm the delivery performance and the quality from the customer perspective.

The above should be the top objectives of any new move by management, including deliberating on implementing a new technology, like one of the Industry 4.0 elements.

On top of carefully checking how any of Industry 4.0 components could achieve one, or both, of the above conditions, a parallel analysis has to be used to identify the negative branches, the variety of possible harms that might be caused by the new technology.

For instance, the use of any 3D printer is limited by the basic materials that the particular printer can use. If this limitation wasn’t considered at the time the decision to use such a printer was made, then it could easily make the use of the “state-of-the-art” technology a farce.

We suggest that every manufacturing organization consider using chosen parts of Industry 4.0 in order to achieve one or both of the above top objectives, bringing a higher level of business achievement.

An especially effective tool for analyzing any specific element of Industry 4.0 is the Six Questions on the Value of New Technology, developed by Dr. Eli Goldratt. The first four questions first appeared in Necessary but Not Sufficient, written by Goldratt with Schragenheim and Ptak.

Question 1: What is the power of the new technology?

This is a straight-forward question on what the new technology can do, relative to older technologies, and also what it cannot do.

For instance, consider the ability of IoT to use PLC (programmable logic controller) sensors on machines to send precise information to a web page about the state of a machine: whether it functions properly or there is a problem. Predictions about the next maintenance step based on machine data are useful as well, because the results can help to avoid unexpected machine downtime and to exploit the constraint.

Question 2: What current limitation or barrier does the new technology eliminate or vastly reduce?

This is a non-trivial question, and it is asked from the perspective of the user. In order for a new technology to deliver value there has to be at least one significant current limitation that negatively impacts the user.  Overcoming this limitation is the source of value of the new technology.  It is self-evident that verbalizing clearly the limitation for the user is key to evaluating the potential value of the new technology.

The leading example of using PLC sensors to provide online information to a variety of relevant users reveals that the limitation is the current need to have an operator physically near the machine to get information that could lead to an immediate action. We do not consider the capabilities of the PLC itself as the new technology in this analysis, as it is not truly a “new technology” by now.  The new concept is to use the Internet to reach faraway people who can gain, or help others to gain, from the online information on the current state of the machine and the specific batch being processed.

There are two different uses for such immediate information. The first is when there is a problem in the flow of products, which could be technical or due to bad-quality materials. The other type of information is for checking the likelihood of exploiting an opportunity, like changing over the production line to process a super-urgent request, or handling an unexpected delay.  Today the operator at the actual location has to get the fresh information and communicate it to certain people who appear on a predefined list.  The operator is also expected to update the IT system in an effort to support the next actions. Overcoming the limitation means the flow of information no longer needs anybody at the physical location.  Depending on the technology, the reactive actions could be taken from afar.

Question 3: What are the current usage rules, patterns and behaviors that bypass the limitation?

This question highlights an area that is too often ignored by technology enthusiasts. Assuming the current limitation causes real damage, ways to reduce its negative impact are already in use.  For instance, before cellular phones there were public phones spread all over the big cities to allow people to contact others from wherever they were.  Devices like beepers or pagers were in use to let someone far away know somebody was looking for her.  It is critical to clearly verbalize the current means of dealing with the limitation because of two different objectives.  One is to understand better the net added value of the new solution provided by the new technology.  The other is to understand the current inertia that might impact the users when the new solution is provided.  This side is further explained and analyzed through the next question.

Today the industrial manufacturing landscape is roughly divided in two. There are factories that have had a high automation level for many years. These are often process industries using fully automated production lines for chemistry, pharmaceutics, etc. The machines and processes are connected by an independent data network that includes analysis. The monitoring of the production line and related processes takes place in a dedicated control room, where the operator has to watch the information on a big screen; when spotting a problem the operator finds the best solution, or calls for help.

On the other side we find many small and medium-sized enterprises (SMEs) running modern machine tools with powerful controls and integrated sensors. These machines already provide all the data needed for analysis, but in most cases the data is left unused. It is also not very common to store information about problems in the production flow in a database that can feed future analyses on improving the uptime of the production line. Operators can fix mainly small issues; they have to call the external supplier's service in case of bigger problems with the machines. Bypassing the problem until it is resolved is typically managed by the operator and/or by a cross-functional team. With today's technology, most of the information given to the various decision makers is updated only up to the previous day, so urgent requests and unexpected delays might wait until the next day to be fully handled.
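To make the idea of storing production-flow problems concrete, here is a minimal sketch of such a stoppage log. The table layout, event fields, causes, and machine names are illustrative assumptions, not a prescription; a real plant would use a persistent database and its own categories:

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative schema: each stoppage of a machine is logged with its
# cause and duration, so future analyses can target the biggest losses.
conn = sqlite3.connect(":memory:")  # a real plant would use a persistent DB
conn.execute("""
    CREATE TABLE stoppages (
        machine  TEXT,
        cause    TEXT,
        started  TEXT,
        minutes  REAL
    )
""")

def log_stoppage(machine, cause, minutes):
    """Record one production-flow problem for later uptime analysis."""
    conn.execute(
        "INSERT INTO stoppages VALUES (?, ?, ?, ?)",
        (machine, cause, datetime.now(timezone.utc).isoformat(), minutes),
    )

# Hypothetical events on hypothetical machines
log_stoppage("CNC-3", "tool wear", 25)
log_stoppage("CNC-3", "bad material batch", 90)
log_stoppage("Press-1", "tool wear", 10)

# Which causes cost the most uptime?  This is exactly the kind of
# question the stored data can answer later on.
worst = conn.execute(
    "SELECT cause, SUM(minutes) AS lost FROM stoppages "
    "GROUP BY cause ORDER BY lost DESC"
).fetchall()
print(worst)  # [('bad material batch', 90.0), ('tool wear', 35.0)]
```

The point of the sketch is only that the data the machines already produce becomes useful once somebody decides to keep it and query it.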

Question 4: What rules, patterns and behaviors need to be changed to get the benefits of the new technology?

Answering this question requires clearly detailing the optimal use of the new technology to achieve the maximum value. The behavior once the new technology is operational is, in many cases, different than without it.  The value of cellular technology is fully realized only when users carry their phones with them all the time.  There are many other ramifications of the change imposed by implementing the new technology, like being very careful not to lose our phone. New rules have to be developed to guide us in extracting the most value from the new technology.

Industry 4.0 is pushing the idea that all available machine data should be used for monitoring, controlling and analysis. Modern machines can be connected directly to the IoT, while older machines need PLC sensors installed to provide data to the Internet. A continuous stream of collected data is stored in the "cloud," somewhere on a server farm of an external service provider.

Having the PLC information within immediate reach from everywhere adds value if, and only if, people who have not been exposed to the information before not only receive it but are also able to use it.  To use immediate online information, one has to be aware that it is there.  Since the true value of the new solution is the speed of getting up-to-date information, there is a basic need for alarms that make the relevant person aware that something important requires immediate attention.  This also means there is a need for effective analytics that note when the information becomes critical, and for whom.  This requirement should be part of answering question 4.  The more general lesson is that exposing the user to a huge continuous stream of data from manufacturing, sales and the supply chain is a problem the new technology itself has to solve.
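A sketch of what such analytics-driven alerting could look like is shown below. The sensor names, thresholds, and recipient lists are invented for illustration; a real system would derive them from the plant's own rules. The essential point is that only critical readings reach a named person, while the rest of the stream stays out of everyone's way:

```python
# Minimal sketch: decide when a reading from the continuous PLC stream
# becomes critical, and who must be alerted -- instead of pushing the
# whole stream at every manager.

# Hypothetical routing rules: sensor -> (critical threshold, recipients)
ALERT_RULES = {
    "line1.temperature": (90.0, ["shift_leader", "maintenance"]),
    "line1.vibration":   (12.0, ["maintenance"]),
}

def check_reading(sensor, value):
    """Return an alert for this reading, or None if it is within the
    normal range and should not disturb anybody."""
    rule = ALERT_RULES.get(sensor)
    if rule is None:
        return None
    threshold, recipients = rule
    if value > threshold:
        return {"sensor": sensor, "value": value, "notify": recipients}
    return None

# A few readings from the stream: only the critical one raises an alert.
stream = [("line1.temperature", 71.5),
          ("line1.vibration", 14.2),
          ("line1.temperature", 85.0)]
alerts = [a for a in (check_reading(s, v) for s, v in stream) if a]
print(alerts)
```

In practice the thresholds themselves would come from analysis of past data, but even this trivial filter shows how the "who needs to know, and when" decision can be encoded rather than left to whoever happens to watch the screen.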

When everything is connected with everything else, not just within the company but with the external world via the IoT, an opportunity for a new level of business is created, but the rules and the wide ramifications of such a connection have to be examined very carefully.  For instance, an OEM could have full access to all kinds of data from every one of its suppliers, which should enhance win-win collaboration between the players.  But this requires all the players to intentionally strive for that kind of collaboration.  The technology is just the enabler for the intentions; creating this kind of transparency is a necessary condition for effective win-win collaboration.

The connectivity of everything is truly beneficial only when the right focus is in place, preventing the human managers from being overwhelmed and confused by the ocean of data. This insight, that the right focus must be maintained, is the most basic general insight of the Theory of Constraints (TOC), and it is absolutely relevant for evaluating the potential contribution of every Industry 4.0 element, as it is so easy to lose focus and get no value at all.

Question 5: What is the application of the new technology that will enable the above change without causing resistance?

Resistance usually arises because a proposed change might cause a negative, usually unintended, consequence. The fact that every new medical drug for curing an illness also causes negative side effects, sometimes worse than the cure, carries a wider message: this characteristic is much more general than just medical drugs.

It is crucial not just to identify all the potential new negatives the new technology would cause, but also to think hard about how to trim them. The transition from film cameras to digital ones raised the negative consequence of having too many pictures taken.  Over the years, some solutions for organizing the photos in a more manageable way have appeared.  Had the thinking about that problem started sooner, the added value would have been much higher.  This is a crucial part of the analysis: giving the negatives serious thought, despite the natural tendency to be happy with the new value.

It is advisable to analyze every IoT idea for its probable negatives. A generic negative of almost all electronic devices is that when they fail to function properly, the damage is usually greater than with the previous technologies.  This means stricter quality analysis is absolutely required, plus keeping replacement electronic cards or devices in stock.

Question 6: How to build, capitalize and sustain the business?

This question is a reminder that the value of the new technology, plus all the decisions around it, is part of the global strategy of the company.

How does the above analysis fit with the top objective of the organization? Does the plan for extracting value from the new technology create synergy with the other strategic efforts required for achieving the goal?

So, here in the sixth question the global aspects of the proposition to implement a specific application of the new technology have to be analyzed. Actually, when several applications of new technologies are considered, question 6 should apply to all of them together.  Thus, when analyzing the various elements of Industry 4.0, the first step is choosing several for more detailed analysis; the last step is evaluating the global strategy and deciding which ones, if any, to implement and what other actions are required to draw the expected value as soon as possible.

The previous questions of the leading example should reveal by how much linking the PLC data stream to the Internet would either increase sales or significantly reduce cost.

Suppose the company has an active constraint in a specific machine, or a whole production line, and the constraint is frequently stopped due to various problems. In this case, a quick-response mechanism, based on fast analysis of the PLC information and immediately reaching the right people who can instruct the operator how to fix the problem, is truly worthwhile.  The generated added value, both from keeping the customers unharmed and from superior exploitation of the capacity constraint, is high.

Add to this the decision to use 3D printers to overcome the management limitation in viewing new product designs from the original drawings, as managers might not have the capability of viewing a 2D drawing and imagining the finished product. The cost of producing prototypes restricts the number of models management can use to judge the design, and the number of alterations is also limited.   Using 3D printers eliminates this limitation.  After answering the rest of the questions, the organization has to consider question 6 for both elements of Industry 4.0 and decide whether together the value is even greater.  If we consider the possibility that current prototypes of new products have to compete for the capacity of the constraint, while using the 3D printer bypasses the constraint, we can see the synergetic added value: improved product design that could enhance sales, while the capacity constraint is better exploited.

The overall conclusion has to highlight the sensitivity of the strategic analysis when the issue of Industry 4.0 is seriously considered. The contribution of the six questions could be truly decisive.