The Business Potential from a Unique Capability

A unique capability is rare among businesses, as most businesses compete without being perceived by their customers as "special."  It is quite different in Art, Sport, and Science, where being unique or special is truly desired.  The unique capability of artists, athletes, and scientists can lead, in various ways, to commercial success.  Both Art and Sport influence Fashion, where a unique capability, when available, carries great weight.  Some key high-tech organizations succeed in developing their own special capability, usually around one person, who sometimes, though not often, is also willing to teach and inspire others, so the unique capability grows stronger and spreads within the organization.

Most commercial organizations don't have a unique capability; because of that, they face fierce competition and, as a result, a chronic difficulty in prospering.

The Theory of Constraints (TOC) naturally focuses on the impact of capacity constraints on the performance of the organization, and comes up with breakthrough ideas on how to better exploit the constraint so that more of the goal is achieved.  Two key concepts that arise from the recognition of having to deal with a capacity-constrained resource are:

  1. Exploitation of the constraint: making sure its capacity is utilized for what generates the best profit.  Practically, exploitation means a plan for how to use the limited capacity.
  2. Subordination to the exploitation scheme: setting the right policies so the exploitation plan can work to the fullest extent.

'Capacity' is a term related to 'capability.'  It refers to the maximum number of output units a resource can produce in a period, such as a day, a week, or a year.  The capacity limitation impacts the potential quantity of products/services that can be sold, but it doesn't control the value to the customer, relative to the value the customer gets from the competition.

So, when the operations of a relatively routine production system are significantly improved by identifying the capacity constraint and instituting the most effective exploitation and subordination processes, there is a great opportunity to sell much more, which could make the profit leap.

Identifying a unique capability, which delivers very high value to the customers, could yield even more, but just gaining a unique capability is insufficient.  At least two additional conditions must be in place.  One is that the unique capability can generate additional value for many customers; the other is having a holistic program to generate as much value as possible.

Within the TOC methodology for Strategy the concept of gaining a ‘decisive-competitive-edge’ (DCE) is especially important. 

Having a decisive competitive edge (DCE) means the company answers a critical need of the client in a way that its competitors do not, and in a way that is difficult for the competition to quickly replicate.  Another requirement of a DCE is that on all the other critical parameters the company performs well enough, at about the same level as its competitors.  A last requirement for validating a DCE is that it doesn't create significant new problems.

The concepts of ‘DCE’ and ‘unique capability’ are connected.  A DCE that doesn’t rely on a unique capability can be easily imitated, except when the DCE is based on overcoming a very widely held but flawed assumption.  When an organization buys a unique capability, like acquiring a small company with such capability, then the challenge is developing and implementing a holistic scheme to draw the full value.  So, while imitating a competitor that came first with the unique capability is possible, it takes considerable time and effort.

How should a holistic scheme be made?  Here is an insight to digest: 

It is not enough to gain a unique capability, which can add considerable value to the customers, in order to be successful.  There is a need to develop effective exploitation and subordination procedures.  This means that, on top of the capacity constraint, the unique capability, when available, requires such procedures to ensure it is fully directed at maximizing the value, and that nothing the customer requires in order to draw the value is missing.  The exploitation plan is to design the products/services in a way that emphasizes the added value of the unique capability, and to price them accordingly.  The subordination processes should come up with the appropriate intermediate objectives, and performance measurements, that are targeted at fulfilling all the necessary requirements of the exploitation scheme.

Exploitation is a plan, a set of absolutely necessary decisions, targeted at the best overall achievement of the goal.  The subordination processes and policies are targeted at allowing the exploitation plan to be performed as smoothly as possible.

It makes sense that the leading exploitation/subordination logic should be applied first to the unique capability, and only then derive how the capacity constraint should be exploited.  The unique capability is used to enhance the value, and its broad perception, in the market.  Once that is planned, the capacity constraint requires its own exploitation and subordination to control the volume of sales and the timely delivery.

Watching the final of the 2022 Mondial (the soccer World Cup) highlighted for me some relevant observations.  A highly desired unique capability for a star soccer/football player is being able to spot, and take advantage of, a very short-lived opportunity to score a goal.  The whole team should strive to create as many such opportunities as possible.  This is the essence of the exploitation, and the subordination means that all the team members stick to that objective.

The goalkeeper should be viewed as the natural constraint, because no human being has the capacity to protect the whole goal area equally.  The overall exploitation of both the constraint (our goalkeeper) and the unique capability means keeping the ball as far away from our goal as possible, which also supports the exploitation scheme of preparing opportunities at the other end.  The overall subordination means focusing on the two targets: keeping the ball away from our goal, and creating more opportunities for the main striker(s).

In business, the opportunity to develop a unique capability has a major impact on gaining a competitive edge, possibly even a decisive one.  Certainly, Steve Jobs had this kind of unique capability, which Apple still succeeds in maintaining.  But, instead of looking for a one-time genius, it is possible to create a team whose combined skills and methodology are focused on achieving the required unique capability.  When the specific capability is the outcome of a plan, the key ingredients of the exploitation scheme should already have been thought of, and the challenge is to come up with effective subordination of the whole organization, making it a true decisive-competitive-edge (DCE).

Dr. Goldratt, the developer of TOC, came up with three major steps for a strategy that is based on a new DCE: Build, Capitalize, and Sustain.

Build is the step of developing all the required skills for the unique capability.  Capitalize is the exploitation scheme for Marketing and Sales.  Sustain is a critical element for Operations to be ready for significantly increased demand, a necessary condition for success.

Thinking about the first two steps raises the issue that exploiting the key unique capability should involve not just Marketing, but also Operations and Finance.

Many restaurants, all over the world, struggle to achieve a competitive edge through the unique food made by their chef.  While such restaurants may succeed in achieving a competitive edge, it is seldom a "decisive" one.  Gaining Michelin star(s) does that by providing "proof" that the food is exceptional, worthy of the high price.  But even when the edge is truly decisive, it is hard to scale up the volume of business, because the stars apply only to the specific restaurant, not to a whole chain, and customers are aware that when the same chef expands his/her reach to more restaurants, it is unclear whether the local chefs truly cook in the spirit of the star chef.  Another business problem of such a chain is that by spreading the unique knowledge of preparing special dishes, it risks other chains learning the secret and imitating the famous chef's specialties.

An interesting case is the success and eventually failure of the Concorde plane.  The unique technology made the plane significantly faster than all other commercial aircraft.  The value to customers came from the much shorter flying time across long distances.  The extra value for a traveler with time to spare was limited and the price for a Concorde flight was too high for them.  So, the target market segment had to be top business and political people, who assume that their time is worth a great deal of money.

Then came the problem of failing to find an effective means of exploiting the unique capability: a busy businessperson, say in New York, has a specific window of time to go to Paris, meet associates, and quickly return to New York.  The few Concordes couldn't offer terribly flexible departure and arrival schedules.  Private planes, even though they are much slower, provide that overall flexibility.

On top of that, the Concorde created a new problem, what TOC calls a "negative branch," where a valuable new idea also causes a new problem.  The Concorde, on top of its high cost, was far too noisy (a sonic boom every time it broke the sound barrier), and big cities don't like noisy airplanes taking off and landing nearby.  Not finding a solution to the negative branch, coupled with the difficulty of finding enough demand, brought the unique capability of the Concorde to an end, and that was before the current priority on reducing carbon emissions.

Is it possible to gain a unique capability in Operations? 

Imitating a new operational procedure is a piece of cake, right?  Sometimes it is, but look at the Toyota Production System.  How many other manufacturing organizations have succeeded in being as effective?  Maybe we still don't fully understand all the key insights Toyota has adopted.

Dr. Goldratt strived to achieve a unique capability from significantly improved operations.  One provocative idea is being able to deliver some orders much faster than normal, for a substantial markup.  The emphasis is not on always delivering faster than others, but on being able to reliably give that premium service when it is truly beneficial to the customer, which makes it possible to ask for a markup.  This is a key idea regarding exploitation: letting the customer decide whether there is a real need to get the product sooner.

The idea follows FedEx, UPS, and similar international delivery companies, which developed their own unique capabilities to do just that; Goldratt adapted the idea to the more complex environment of manufacturing.

Some generic conclusions

All commercial organizations, and some not-for-profits, should recognize the potential of gaining a DCE that is based on a carefully developed unique capability, one that brings huge value to well-defined market segment(s) and is difficult for competitors to quickly imitate.  Basing the strategy on that unique capability means using the core insights for effective exploitation and developing the rules for subordination.  Thus, the exploitation and subordination of a unique capability should be the core of the whole strategy.

There should be two major inputs for new ideas about developing a strategy:

  1. What are we good at?  What particular skills do we have?  What new skills can we acquire?
  2. What seems to be currently painful to quite a lot of potential clients? 

The major challenge is recognizing the current pain of potential customers, which our special capabilities could remove.

The challenge in recognizing what we are good at is being able to judge our skills objectively.  Note that even when our skills are not extraordinary, we may find a way to utilize them to develop a unique capability that answers a real need; this is what many other people and organizations, with similar raw skills, fail to see.

The skill of developing worthy new skills is a huge advantage.  It is a pity that so few organizations look for people who can quickly learn new skills.


Is it Right, Wrong, or Unclear?

I find respectful discussions on the content of key issues of our life especially rewarding.  Here is an issue my friend Alejandro Fernandez faced during his presentation at the 2022 TOCICO Conference.

The topic was The Measurement Nightmare Solved with Throughput Economics Approach. The idea is to judge the added value of a new move or idea, opening the door to evaluate the contribution of the new move to the Goal of the organization.

One of the financial measurements that can be used is the return-on-investment of the new move.  Here is the formula stated by Alejandro:
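Reconstructing it from the restaurant example below (Delta-T: the additional Throughput; Delta-OE: the additional Operating Expenses; Delta-I: the additional Investment), the formula presumably reads:

Delta-ROI = (Delta-T – Delta-OE) / Delta-I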

Sanjeev Gupta and Filippo Pescara, two well-known TOC experts, claimed that the above formula is incorrect.  Presenting live can be too pressing a situation to fully grasp the criticism and its validity.  Moreover, one of the most common, but also trickiest, problems is when a specific expression can be interpreted in two very different ways.  I believe this is the situation here.

Let us use an example:

Imagine a restaurant chain with four restaurants spread over the city.  The owner is contemplating adding a fifth branch.  He believes that such a restaurant, at a location far away from the others, would add mainly new customers, who are aware of the reputation of the chain, but highly prefer the new location.

  • The new restaurant requires a net investment of $500K.
  • The additional operating expenses of the chain would go up by $1.2M a year.
  • The evaluation of the additional overall Throughput (revenues minus the truly-variable-costs, like the purchased food) comes to $1.5M a year.
  • This means the chain of restaurants will gain, due to the additional branch, a net profit, before tax, of ($1.5M – $1.2M) = $300K a year.
  • The ROI of the investment in the new restaurant is $300K / $500K = 60%.

But here is the clarity issue: the ROI of 60% is only for the new restaurant; it is NOT the ROI of the chain, and it is obvious that the total ROI does NOT go up by 60%!  To calculate the new ROI for the whole chain, we need to take the new total throughput of all five restaurants, subtract the operating expenses of all the restaurants, and then divide by the total of the current investment plus the new one.

The point here is: what do you understand by the expression 'Delta-ROI'?

Is it the change in ROI for the whole organization?  Or is it the ROI of just the new move? 

The above formula refers to the latter interpretation!
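To make the two interpretations concrete, here is a minimal Python sketch; the baseline figures for the existing four restaurants are hypothetical, invented only to show why the two readings of Delta-ROI give very different numbers.

```python
# Figures from the example; the baseline chain numbers are assumed for illustration.
base_t, base_oe, base_i = 6.0e6, 4.5e6, 3.0e6     # hypothetical existing 4 branches
delta_t, delta_oe, delta_i = 1.5e6, 1.2e6, 0.5e6  # the new 5th branch

# Reading 1: ROI of the new move alone (the formula in the presentation).
roi_of_move = (delta_t - delta_oe) / delta_i
print(f"ROI of the new restaurant alone: {roi_of_move:.0%}")        # 60%

# Reading 2: the change in the ROI of the whole chain.
roi_before = (base_t - base_oe) / base_i
roi_after = ((base_t + delta_t) - (base_oe + delta_oe)) / (base_i + delta_i)
print(f"Chain ROI: {roi_before:.1%} -> {roi_after:.1%} "
      f"(a change of {roi_after - roi_before:+.1%}, nowhere near 60%)")
```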

Comment: The full Throughput Economics method involves TWO series of calculations, one is based on conservative assessments of the additional Throughput and additional Operating Expenses, and one is based on optimistic assessments.  To understand the reason for going through the calculations twice, see Alejandro’s whole presentation, or read the book: Throughput Economics, by Henry Camp, Rocco Surace, and me.

Please come up with your reservations, so the open discussion can continue.

TOC and AI: Using the TOC Wisdom to Draw the Full Value from AI for Managing Organizations

By Eli and Amir Schragenheim

A powerful new technology has the potential of achieving huge benefits, but it is also able to cause huge damage.  So, it is mandatory to carefully analyze that power.  We think it is the duty of the TOC experts to look hard at AI and see how to exploit the benefits, while eliminating, or vastly reducing, the possible negative consequences.

Modern AI systems are able to make predictions based on large volumes of data and simulated results, and either take actions, like robots do, or support human decisions.  An important example is the ability to understand language, get the real meaning behind it, and generate a variety of services.  The "experience" is created by the provided dataset, which has to be very large.  This kind of learning tries to imitate how human beings learn from their experience, with the advantage of being able to learn from a HUGE amount of past experience, hopefully with fewer biases.

AI currently generates value mainly by replacing human beings in relatively simple jobs, performing them faster, more accurately, and with less 'noise'.

AI has some critical flaws; one is being unable to explain how a specific decision was reached.  Combined with its dependency on both the large datasets and the training, this inability to explain a decision creates a threat of making mistakes that most human beings wouldn't.  Even huge datasets are biased by the time, location, and circumstances in which the data were collected, so the AI might misinterpret a specific situation.

This document deals with the potential value for managing organizations that can be achieved by combining the Theory of Constraints (TOC) with AI.  It doesn’t deal with other valuable uses of AI.

The focus of TOC is on the goal and how to achieve more of it; so, in terms of management, it looks at what prevents the management team from achieving more of the goal.

TOC focuses on options for finding breakthroughs, trying to explore where there’s a current limitation to achieve more goal units, so we’d like to explore whether the power of AI can be used to overcome such limitations.

Without a deep understanding of the management needs, the potential value of AI, or of any other new technology, is limited to needs that are obvious to all and that AI is able to answer via automation, without requiring additional elements for the solution to work.  In the more complicated case of using robots to move merchandise in a huge warehouse, we have a fairly obvious combination of two technologies, AI and robotics, answering the need to replace lower-level human workers, probably also improving speed with fewer mistakes (higher quality).

When it comes to supporting the decisions of higher-level managers, the added value of AI is much less obvious.  One aspect that is fundamentally different from the current regular uses of AI is that the human decision maker has to be fully responsible for the decision.  This means the AI could recommend, or just supply information and trade-offs, but it should not be the decision maker.  This places several tough demands on AI technology, but when these demands are answered, new opportunities to gain value arise.

Providing absolutely necessary information, which is either missing today, or given by the biased and inaccurate intuition of the human manager, is such an opportunity. 

By covering for not-good-enough human intuition, considering a very large volume of data, performing a huge number of calculations, looking for correlations and patterns in ways that imitate the human mind, and using reinforcement rewards to identify the best path to the supporting information, AI gives the human decision maker a generic opportunity to improve the quality of decisions.  Eventually, the decision maker might need to include facts that aren't part of the datasets, and use human intuition and intelligence to complement the information upon which an important decision has to be made.

Measuring the uncertainty and its impact

The trickiest part in predictions is getting a good idea not just of the exact value we would like to know, but also of the reasonable range of deviations from it.  Any prediction of the future isn't certain, so the key question should be 'what should we reasonably expect?'

TOC developed the necessary tools for keeping a stable flow of products and materials in spite of all the noise (common and expected uncertainty), using visible buffers as an integral part of the planning, and buffer management for determining the priorities during the execution.  This line of thinking should be at the core of developing AI tools to support the management of the organization.

The most immediate need in managing a supply chain (and other critical and important decisions in business) is to get a good idea of the demand tomorrow, next week, next month and also in the long term.  Assessing the potential demand for next year(s) is critical for investing in capacity or in R&D. There is NO WAY to come up today with a reliable exact number of the demand tomorrow, and it gets worse the longer we go into the future (this is just the way uncertainty works). 

Example: Suppose the very best forecast algorithm tells you that next week’s demand for SKU13 is 1,237 units, but the actual demand turns out to be 1,411. 

Was the original forecast wrong? 

Suppose another forecast predicted sales of 1,358; is the algorithm behind the second forecast necessarily better?  After all, both were wrong.

Suppose now that the first algorithm included an estimation of the average absolute deviation, called the 'forecasting error,' estimated at plus or minus 254.  This puts the first forecast in a better light, because the prediction included the possibility of getting 1,411 as the actual result.  If the second algorithm doesn't include any 'forecasting error,' how could you rely on it?
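To illustrate the point, here is a minimal Python sketch, assuming the common convention of judging a forecast by whether actuals fall inside the point forecast plus or minus the stated forecasting error; the numbers are the ones from the example.

```python
def within_expected_range(forecast: float, error: float, actual: float) -> bool:
    """Is the actual demand inside the forecast plus-minus its stated error?"""
    return abs(actual - forecast) <= error

actual = 1411

# First algorithm: a point forecast together with an estimated error of +/-254.
print(within_expected_range(forecast=1237, error=254, actual=actual))
# True: the expected range 983..1491 covers the actual 1,411.

# Second algorithm: 1358 looks closer this week (off by only 53), but with no
# stated error there is no way to know what deviation to expect next week.
```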

Effective managers have to be aware of what the demand might be.  When they face one-number forecasts, no matter how good the forecasting algorithm is, they frequently fail to make the best decision, given the available information.

Thus, a critical request from any type of forecasting is to reveal the size of the uncertainty around the critical variables, and its impact on the decision.  Having to live with uncertainty means recognizing the damage when the actual demand differs from the one-number forecast.  The relative size of the damage when demand falls below the forecast, versus when it exceeds the forecast, should significantly shape the manager's decision.

There are meta-parameters of the AI algorithm that dictate the decisions it makes. Adjusting these meta-parameters can easily generate a result that is more conservative or more optimistic (for example, instead of using 0.76 as the threshold we can use 0.7 in one instance and 0.82 in the other). This way, being exposed to both predictions gives the decision-maker better information for choosing the most appropriate action, without having to get used to standard deviations or the like.
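A sketch of the idea in Python, using quantiles of simulated demand scenarios as a stand-in for the meta-parameters mentioned above; the model and the specific quantile levels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(13)
# Stand-in for a trained model: simulated demand scenarios for next week.
scenarios = rng.normal(loc=1300, scale=200, size=10_000)

conservative = np.quantile(scenarios, 0.30)   # pessimistic meta-parameter setting
optimistic = np.quantile(scenarios, 0.80)     # optimistic meta-parameter setting
print(f"Plan for a demand somewhere between {conservative:.0f} and {optimistic:.0f}")
```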

Reaching for more valuable information on sensing the market

A critical need of every management team is to predict the market's reaction to actions aimed at attracting more demand, or at being able to charge more.  Most forecasting methods, with few exceptions, assume no new change in the market.  Thus, on top of dealing with the quality of forecasting the demand based only on past behavior, there is a need to evaluate the impact on market demand of proposed changes, as well as of changes imposed by external events.

Analyzing the potential changes in the market using the logical tools provided by the Thinking Processes can usually predict, with reasonable confidence, the overall trends that the changes would generate.  But the Thinking Processes cannot provide a good sense of the size of the change in the market.  When proposed changes cause different reactions, like when the esthetics of the products go through a major design change, human predictions are especially risky. 

Significant changes are a problem for the current practices of AI. However, AI algorithms that detect a deviation from a known reality already exist, and are used extensively in predictive maintenance of manufacturing facilities. Such a signal from the AI can alert the decision-makers that reality has changed and that manual intervention is needed.

Predicting the impact of big changes that are made internally, like changing item pricing or launching a big promotion, is a real need for management.  While changing the pricing of an item seems like an easy task, it is tricky to assess all the implications for the demand for other items and the response of the competitors. Moreover, such changes don't occur very frequently, and the internal data gathered for such changes in the past might not be enough to generate an AI model that predicts the implications accurately enough. This presents an opportunity for a third-party organization that deals with Big Data. Such an organization can gather data from many interested organizations and use the aggregated data to build a much more capable AI model, which the organizations sharing their data can use to better predict the effects of such actions. This would create a win-win for all parties involved, and could easily cover the operating costs.  Such an organization should guarantee not to disclose the data of any specific organization, sharing only the overall insights.

Warnings about changes in the supply

The natural focus of management is first on the market, then on Operations, which represents the capability of the organization to satisfy the demand (or not), and possibly to generate more of it.

The supply is, of course, an absolutely necessary element for maintaining the business. The problem is that when a supplier goes through a change that negatively impacts the supply, it might take a considerable amount of time for some clients to realize the change and the resulting damage.  The focus of management should not be on routine relationships.  However, when a change in a supplier's behavior is identified early enough, possibly by using software, a basic need is answered.  It is especially valuable when the cause of the change is not known, for instance, when a supplier faces financial problems or a change of management.

Achieving effective collaboration between AI, analytics, and human intuition

The three key limitations of AI are:

  1. Being a ‘black box’ where its recommendations are not explained.
  2. The current practices don’t use cause-and-effect logic.  There are moves within AI to include cause-and-effect sometime in the future.
  3. AI is fully dependent on the database and the training.

One way to partially overcome the limitations is to use software modules, based both on cause-and-effect logic and on 'old-fashioned' statistical analysis, that evaluate the AI's recommendations and check how reasonable they are, possibly also re-activating the AI module in order to check a somewhat different request.

Example.

Suppose the AI prediction for product P1 deviates significantly from the regular expectation (either the regular forecast or simply the current demand). The AI module could then be asked to predict the demand for a group of similar products, say P2 up to P5, on the assumption that if there is a real increase in demand for P1, similar products should show a similar trend.  Predicting the demand for a group of products should not be done by predicting the demand for each and combining them, but by repeating the sequence of operations on the combined demand in the past. Thus, logical feedback is obtained, checking whether the AI's unexplained prediction or recommendation makes sense.
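A minimal sketch of this feedback loop; the numbers and the 25-point tolerance are hypothetical, and the two predictions stand in for two separate runs of the AI module (one on P1's history, one on the combined history of P2..P5).

```python
def trend(predicted: float, recent: float) -> float:
    """Relative change of a prediction versus recent actual demand."""
    return (predicted - recent) / recent

# Hypothetical outputs: the AI predicts a big jump for P1...
p1_trend = trend(predicted=1900, recent=1300)       # about +46%

# ...so the model is re-run on the combined history of P2..P5 (one forecast
# of the group's combined demand, not a sum of per-product forecasts).
group_trend = trend(predicted=5200, recent=5000)    # about +4%

TOLERANCE = 0.25
if abs(p1_trend - group_trend) > TOLERANCE:
    print("P1's prediction deviates from its peer group; flag it for human review.")
```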

The other way is to let the human user accept or reject the AI output.  It is desirable that the rejection be expressed in a cause-and-effect way, which the AI could use in the future as new input.

Additional inputs from the human user

AI cannot have all the relevant data required for making a critical decision.  If the human manager is able to input the additional relevant data to the AI module, and a certain level of training is done to ensure that the additional data participate in the learning and the output of the AI module, this could improve the usability of the AI as high-level decision support.

Conclusions and the vision for the future

AI is a powerful technology that can bring a lot of value, but also may cause a lot of damage.  In order to bring value, AI has to eliminate or reduce a current limitation.  Implementing AI has also to consider the current practices and to outline how the decision-makers should adjust to the new practice and how to evaluate the AI recommendations before taking the actions. 

Supporting management decisions is a worthy next direction for AI.  But it definitely needs a focus to ensure that truly high value is generated, and possible damage is prevented.

TOC can definitely contribute a focused view into the real needs of top management.  It also enables an analysis of all the necessary conditions for supporting the need.  This means that while AI can be a necessary element in making superior decisions, in most cases the AI application alone would be insufficient.  To draw the full value, other parts, like responsible human inputs, other software modules, and proper training of the users, have to be in place.

TOC is about gaining the right focus for management on what is required in order to get more of the organizational goal.  Assisting managers to define what needs immediate focus, as well as assisting in understanding the inherent ‘noise’ and allowing quick identification of signals, is a critical direction for AI and TOC combined to improve the way organizations are managed.  Even human intuition could be significantly improved, while being focused on the areas where AI is unable to assist.

Improvements that AI can give to TOC

The proposed collaboration between the TOC philosophy and AI should not be just one way. The TOC applications can get substantial support from AI, especially for buffer sizing and buffer management.

Buffer sizing is a sensitive area.  The introduction of buffers for protecting, actually stabilizing, the delivery performance is problematic in the initial state. At that point AI cannot help, because analyzing the history from before the TOC insights were actively used is not helpful.  But, after one or two years under the TOC guidelines, AI should be able to point to too-large buffers, as well as to the few too-small ones.  The Dynamic-Buffer-Management (DBM) procedure for stock buffers, based on analyzing the penetrations into the Red Zone and their duration, could be significantly improved by AI. Another potential improvement is letting AI recommend by how much to increase the buffer.  Similar improvements could be achieved by analyzing when staying too long in the Green Zone signals that the stock buffer can safely be decreased.
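A minimal sketch of the DBM heuristic that such an AI layer would refine, following the common TOC convention of one-third zones and one-third adjustment steps; the window length and thresholds here are conventions assumed for illustration, not prescriptions from the article.

```python
def dbm_adjust(buffer_size: int, recent_stock: list[int]) -> int:
    """Classic Dynamic-Buffer-Management rule of thumb on recent on-hand stock.
    Red zone: bottom third of the buffer; green zone: top third."""
    red_line, green_line = buffer_size / 3, 2 * buffer_size / 3
    red_days = sum(1 for s in recent_stock if s < red_line)
    green_days = sum(1 for s in recent_stock if s > green_line)

    if red_days >= len(recent_stock) // 3:   # too much time in the red: enlarge
        return round(buffer_size * 4 / 3)
    if green_days == len(recent_stock):      # never left the green: shrink
        return round(buffer_size * 2 / 3)
    return buffer_size                       # buffer looks about right

# AI's opportunity: learn which penetration patterns truly precede shortages,
# and recommend by how much to change the buffer, instead of fixed 1/3 steps.
print(dbm_adjust(90, [25, 40, 28, 22, 35, 55, 29]))  # -> 120
```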

The most important use of buffer management is setting one priority system for Operations, guiding what the most urgent next job is for delivering all the orders on time.  A part that needs improvement is determining when expediting actions are truly needed, including the use of capacity buffers to restore the stability of the delivery performance.  Here again is a critical mission for AI: to come up with an improved prediction of the current state of the orders against the commitments to the market.
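For order (time) buffers, the single priority signal is the fraction of the buffer already consumed; a small sketch with the usual thirds convention, where the order data are invented for illustration.

```python
from datetime import date

def buffer_consumed(released: date, due: date, today: date) -> float:
    """Fraction of the order's time buffer already consumed (above 1.0 = late)."""
    return (today - released).days / (due - released).days

def zone(consumed: float) -> str:
    if consumed < 1/3:
        return "green"
    if consumed < 2/3:
        return "yellow"
    return "red"   # candidates for expediting come from here

today = date(2023, 3, 10)
orders = {"A17": (date(2023, 3, 1), date(2023, 3, 13)),
          "B02": (date(2023, 3, 6), date(2023, 3, 20))}
for name, (rel, due) in sorted(orders.items(),
                               key=lambda kv: -buffer_consumed(*kv[1], today)):
    c = buffer_consumed(rel, due, today)
    print(name, f"{c:.0%}", zone(c))   # A17 75% red, B02 29% green
```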

The TOC procedures were shaped by recognizing the capacity limitations of management attention. By relieving management of some of the ongoing, relatively routine cases, where AI is fast and reliable enough, TOC can focus management attention on the most critical strategic steps for the next era.

What should WE learn from Boeing’s two-crash tragedy?

The case of the two crashes of Boeing's 737 MAX aircraft, less than six months apart, in 2018 and 2019, involves three big management failures that deserve to be studied, so that effective lessons can be internalized by all management teams.  The story, including some of the truly important detailed facts, is told in "Downfall: The Case Against Boeing," a recently released Netflix documentary.

We all demand 100% safety from airlines, and practically also from every organization: never let your focus on making money cause a fatal flaw!

However, any promise of 100% safety is utopian.  We can come very close to 100%, but there is no way to ensure that fatal accidents never happen.  The true practical responsibility consists of two different actions:

  1. Invest time and effort to put protection mechanisms in place.  We in the Theory of Constraints (TOC) call them 'buffers', so that even when something goes wrong, no disaster happens.  All aircraft manufacturers, and all airlines, are fully aware of the need.  They build protection mechanisms, and very detailed safety procedures, into the everyday life of their organizations.  Due to the many safety procedures, any crash of an aircraft is the result of several things going wrong together, and thus is very rare.  Yet, crashes sometimes happen.
  2. If there is a signal that something that shouldn't have happened has happened, then a full learning process has to be in place to identify the operational cause, and from that identify the flawed paradigm that let the operational cause happen.  This is just the first part of the learning. Next is deducing how to fix the flawed paradigm without causing serious negative consequences.  Airlines have internalized the culture of inquiring into every signal that something went wrong.  Still, such a process could and should be improved.

I have developed a structured process of learning from a single event, now entitled "Systematic Learning from Significant Surprising Events".  TOCICO members can download it from the New BOK Papers section of the TOCICO site; the direct link is https://www.tocico.org/page/TOCBodyofKnowledgeSystematicLearningfromSignificantSurprisingEvents

Others can approach me and I'll gladly send them the paper.

Back to Boeing.  I, at least, don't think it is right to blame Boeing for what led to the crash of the Indonesian aircraft on October 29th, 2018.  In hindsight, every flawed paradigm looks as if everybody should have recognized the flaw, but that expectation is inhuman.  There is no way for human beings to eliminate all their flawed assumptions.  But it is our duty to reveal the flawed paradigm once we see a signal that points to it.  Then we need to fix the flawed assumption, so the same mistake won't be repeated in the future.

The general objective of the movie, like that of most public and media inquiries, is to find the 'guilty party that is responsible for so-and-so many deaths and other damage.'  Boeing's top management at the time was an easy target, given the number of victims.  However, blaming top management for being 'greedy' will not prevent any safety issue in the future.  I do expect management to strive to make more money now, as well as in the future.  However, the Goal should include several necessary conditions, and refusing to risk a major disaster is one of them.  Pressing for a very ambitious development schedule, and launching a new aircraft that requires no extra training for pilots already trained on the current models, are legitimate managerial objectives.  The flaw is not being greedy, but failing to see that the pressure might lead to cutting corners and prevent employees from raising a flag when there is a problem.  Resolving the conflict between ambitious business targets and dealing with all the safety issues is a worthy challenge that needs to be addressed.

Blaming is a natural consequence of anything that goes wrong.  It is the result of a widespread flawed paradigm, which pushes good people to conceal facts that might link them to highly undesired events.  The fear is that they will be blamed and their careers will end.  So, they do their best to avoid revealing their flawed paradigms.  The problem is: other people still use the flawed paradigm!

Let's see what the critical flawed paradigms were that caused the Indonesian crash.  As is typical, it took a combination of two different flaws to bring down the Indonesian plane.  A damaged sensor sent wrong data to a critical new automatic software module, called MCAS, which was designed to correct the airplane's tendency to pitch the nose up too steeply.  This was a major technical flaw: failing to consider that if the sensor is damaged, MCAS could cause a crash.  The sensors stick out of the airplane body, so hitting a balloon or a bird can destroy a sensor, and this made the MCAS system deadly.

The second flaw, this time managerial, was deciding not to let the pilots know about the new automatic software. The result was that the Indonesian pilots couldn't understand why the airplane was going down.  As the sensor was out of order, many alarms filled with wrong information sounded, and the stick shaker on the captain's side was loudly vibrating.  To fix that state, the pilots had to shut off the new system, but they didn't know anything about MCAS and what it was supposed to do.

The reason for refraining from telling the pilots about the MCAS module was the concern that it would trigger mandatory pilot training, which would limit the sales of the new aircraft.  It seems reasonable to me that management tried their best to come up with a new aircraft, with improved performance, and with no need for special pilot training.  The underlying managerial flaw was failing to realize how the pilots' unawareness of the MCAS module could lead to a disaster.

Once the first crash happened and the technical operational cause was revealed, the second managerial flaw took place.  It is almost natural after such a disaster to come up with the first possible cause that is the least damaging to management.  This time it was easy to claim that the Indonesian pilot wasn't competent.  This is an ugly, yet widespread, paradigm of putting the blame on someone else.  However, the facts coming from the black box eventually told the true story.  The role of MCAS in causing the crash was clearly established, as was the role of the pilots' lack of any prior information about it.

The safe response to the crash should have been grounding all the 737 MAX aircraft until a fix for MCAS was ready and proven safe.  It is my hypothesis that the key management paradigm flaw, after establishing the cause of the crash, was heavily influenced by the fear of being blamed for the huge cost of grounding all the 737 MAX airplanes.  The public claim from Boeing's top management was that "everything is under control": a software fix would be implemented in six weeks, so there was no need to ground the 737 MAX airplanes.  The possibility that the same flaw of MCAS would lead to another crash was ignored, in a way that can be explained only by top management being in great fear for their careers. It doesn't make sense that the reason for ignoring the risk was just to reduce the cost of compensating the victims by still putting the responsibility on the pilots.  My assumption is that the top executives of Boeing at the time were not idiots; so, something else pushed them to take the gamble of another crash.

Realizing the technical flaw forced Boeing to reveal the functionality of MCAS to all airlines and pilot unions, including the instruction to shut off the system when MCAS goes wrong.  At the same time, they announced that a software fix to the problem would be ready in six weeks, an announcement that was received with a lot of skepticism.  On the strength of these two developments, Boeing formally refused to ground the 737 MAX aircraft.  When directly asked by a member of the Allied Pilots Association, during a visit of a group of Boeing managers (and lobbyists) to the union, the unbelievable answer was: no one has concluded that this was the sole cause of the crash!  In other words: until we have full formal proof, we prefer to continue business as usual.

Actually, the FAA, the Federal Aviation Administration, issued a report assessing that without a fix there would be a similar crash every two years!  This means there was roughly a 2-4% chance that a second crash could happen within a month!  How come the FAA allowed Boeing to keep all the aircraft flying?  Did they analyze their own conduct when the second crash occurred, five months later, with no fix of the MCAS system in place?

Another fact mentioned in the movie is that once the sensors are out of order and MCAS points the airplane down, the pilots have to shut off the system within 10 seconds; otherwise, the airplane is doomed due to the speed of its descent!  I wonder whether this fact was discussed during the inquiry into the first crash.

When the second crash happened, Boeing's top management went into fright mode, misreading the reality that the trust of the airlines, and of the public, in Boeing had been lost. In short: the key lessons from the crash, and from the after-crash pressure, had not been learned!  They still didn't want to ground the airplanes, but now the airlines took the initiative and, one by one, decided to ground them.  A public investigation was initiated, and from the Boeing management team's perspective, hell broke loose.

The key point for all management teams: 

It is unavoidable to make mistakes, even though a lot of effort should be put into minimizing them.  But it is UNFORGIVABLE not to update the flawed paradigms causing the mistakes.

If that conclusion is adopted, then a systematic method for learning from unexpected cases should be in place, with the objective of never repeating the same mistake.  Well, I cannot guarantee it'll never happen, but most of the repeats can be avoided.  Actually, much more can be avoided: once a current flawed paradigm is recognized and updated, the derived ramifications can be very wide.  If the flawed paradigm is discovered from a signal that, by luck, is not catastrophic, but is surprising enough to initiate the learning, then huge disastrous consequences are prevented and the organization is much more secure.

It is important for everyone to identify flawed paradigms, based on certain surprising signals, and update them.  It is also possible to learn from the mistakes of other people or organizations. I hope the key paradigm of refusing to see a problem that is already visible, and trying to hide it, is now well understood, not just within Boeing, but within the top management of every organization.  I hope my article can help in coming up with the proper procedures for learning the right lessons from such events.

The other side of the coin: Amazon’s future threats

Part 2

By Henry F. Camp and Eli Schragenheim

What future threats does Amazon face?  Don't believe they are bulletproof because of their current dominant position, or because of an internal culture that embraces Jeff Bezos' "Day One" philosophy, which demands that the company stay as sharp as it was when first founded: vulnerable, before amassing financial or political strength.  While Amazon serves its customers well, it behaves differently toward its lower-level employees, and even many of its business collaborators are less than satisfied.

As with any company that operates on a massive scale, there is significant pressure on Amazon to control wages.  This is obvious, right?  Their size, in terms of the sheer number of employees, means that paying well would come at a tremendous cost.  After all, the purpose of getting big was to gain operational efficiencies that allow Amazon to both earn high profits and offer low prices.  Given their customer orientation, this combination means Amazon feels it must look out for its customers at the expense of its employees and suppliers.

The Amazon decisive competitive edge relies on efficiencies.  The company’s approach to achieving them is multidimensional.  They work to automate wherever possible, so they require fewer employees and gain speed and accurate delivery.  They push back on their suppliers as well, sharing the cost of providing logistics.  More on this later.

When you employ 1.5 million people, assuming 2,000 hours for each per year, adding one dollar to hourly compensation increases costs by $3 billion per year, a non-trivial consideration.  That may be exactly what they had to do to maintain operations during fiscal 2021, the year of the casual COVID-worker. 

Nevertheless, Amazon’s cash flow increased by $17 billion to $60 billion in fiscal 2021.  To put that number into perspective, it is closing in on Microsoft, $87 billion, and Apple, $120 billion, the two most profitable companies in the world, outside of government owned entities.

Now, having high profits is not intrinsically bad.  Customers, employees, suppliers, and governments alike benefit enormously from Amazon's success.  Nor do high profits oblige the companies that earn the most to do more for their employees than any other company does.  Whether compensation is fair or unfair is judged in comparison to what other companies pay for equivalent work in an equivalent context.

By context, we mean culture, as well as the physical environment and relative safety.  People who work in coal mines may demand higher compensation than those who sit all day in comfortable office chairs.  A wonderful corporate culture or purpose may attract some employees even if the pay is not up to par, as in missionary work.  Is the workplace tough or even cruel?  Bad cultures typically result in high turnover and quitting-in-place, where workers accept paychecks for doing as little as possible.  Lastly, the degree to which a person's work relies on their ability to plan out into the future determines what Elliott Jaques called felt-fair pay.  A PhD does not expect to be paid more than their coworkers for flipping burgers at McDonald's.

Back to Amazon in particular, dozens of articles going back for years have decried their treatment of front-line employees, claiming heavy workloads and loss of autonomy, down to timed restroom breaks.  The question is, are these factors a potential risk to Amazon’s future?

Amazon is a system and a very efficient one.  A system that largely took in stride a massive increase in volume as a result of the totally unforeseen COVID pandemic.  They provided the world with goods when we were unable or unwilling to go out shopping in our neighborhood brick-and-mortar stores.

They have dialed in exactly what they expect of employees to gain this efficiency.  The hope is that their customers are the beneficiaries.  Meanwhile, Amazon's pay rates are not the lowest.  So, where is the risk?

From a TOC point of view, it is a local/global conflict.  The efficiencies Amazon measures their operators against are local, not global.  The reason they want higher efficiencies is to become more effective globally, across their enormous company.  It all seems to be working far better than we might have reasonably predicted.  So, again, what risk?

Systems Theory points us in the direction of an answer.  It informs us that the sum of local optima does not lead to a global optimum.  This conclusion is in sync with TOC, which implores us to focus on the system's constraint.  The implication is that non-constraint resources must have some slack in their scheduled workloads.  The common thread in the complaints about Amazon as an employer is that it seems to focus on "sweating the assets," a quote from John Seddon, the British consultant to service industries.  By assets he means the employees themselves.

Let's start from Amazon's point of view.  They have scale.  This means they can apply division of labor and workloads to an extent seldom seen.  The scale of their operations requires many distribution and sorting centers, which are generally larger than one million square feet.  They have subordinated physical centralization in favor of speed of delivery to their customers.  A high-level manager runs each of these facilities, and they are scored on their efficiencies.  Getting more work out of the staff at each facility drives great customer service and validates what a person can produce.  The latter allows Amazon to know when to hire and how many, which leads to low excess capacity.  All of these result in the incredibly high profits mentioned earlier.

One of us recently spoke to the manager of one of these distribution centers.  He exposed us to a problem with these measurements.  They are not applied consistently.  Safety and external events blow the circuit breakers on the metrics machine that is Amazon.

For example, during thunderstorms outside a DC (and if the buildings get any bigger, there may soon be weather inside) drivers are prohibited, for their own safety, from leaving their trucks to dash into the building.  This prevents their trucks from being unloaded.  So, if a storm persists, as they often do in the Southern United States, the DC can become both starved of inputs and constipated for lack of outputs. 

The newest and most efficient DCs employ a direct-flow model, where receipts that are immediately required flow directly to shipping without having first to be put away (buffered) and then picked to be shipped.  Efficient, skipping steps, resulting in faster shipments out to customers, particularly of those items that have been on backorder.  However, during a long-lasting thunderstorm, if drivers can't deliver their loads, the DC grinds to a stop after just a few hours.  (Interestingly, the older, less efficient designs that put away first and pick later are resilient for much longer when such external events occur.)  When a stoppage occurs, the efficiency metrics are ignored, as it is not the fault of the staff of the distribution center; the stoppage is caused by an external event and a built-in desire for people's safety.

What is really interesting is that behaviors change as soon as an external event is declared.  The sub-system resets.  Training that was needed but not prescribed takes place.  Equipment that needed fixing is repaired.  Preventive maintenance is accomplished.  It seems that efficiency measurements prohibit managers from doing what they instinctively understand must be done to be effective.  Without an external event to turn off the efficiency spotlight, efforts to improve future productivity (training, coaching, maintenance, repairs, etc.) make the manager look bad in the short term.  We imagine this creates a pressure-cooker workplace: damned if I do, damned if I don't.

Furthermore, there is no metric that scores how quickly the facility recovers from the externally triggered reset.  As such, what often happens is that the managers of the sub-systems prolong an interrupted condition specifically to be able to afford to work on what they perceive as necessary, over and above what is prescribed by upper management.  In other words, managers must cheat to do the right things.  Eli Goldratt described this as his fifth Engine of Disharmony: gaps between responsibility and authority. You are responsible for accomplishing something, both in the short term and over the long term, but you are not given the authority to undertake the actions that are necessary to meet your responsibilities.

Let's investigate what this revelation means to employees.  Many psychological studies, popularized by Dan Pink, suggest that there are three main drivers of motivation: autonomy, mastery, and purpose.  The last two are certainly achievable at Amazon as it is today.  Our question is about the opportunity for autonomy, other than when there is an external interruption of Amazon's prescribed processes.

The conflict is between allowing people to realize their very human need for autonomy versus expecting them to be part of a well-oiled machine.  At this point, allow us to broaden our discussion beyond employees to include suppliers, service providers and other stakeholders.

Earlier, we promised to return to the sellers of goods through Amazon.  The desire to provide top value to customers does not extend to suppliers.  Amazon provides the owners of the products sold on its site a world-class operating system, but it also charges high fees.  Why?  These fees offset Amazon's expenses.  Many Amazon partners claim that the giant dictates terms to its 'partners' and forces them to largely finance its efficiency machine.

Let us briefly explain the underlying conflict diagram (a TOC 'cloud').  For Amazon to succeed, it must be both effective at doing what the market expects and efficient enough to earn a vast profit.  To gain that efficiency, they must prescribe precisely what they understand needs to be done and hold others, employees and providers, accountable for delivering.  On the other hand, in order to keep their enormous system functioning, now as well as in the future, they rely on the continuing support of their stakeholders.  To maintain that support, they must honor the needs of those stakeholders to express themselves, especially when they identify, and desire to implement, good ideas that improve Amazon's effectiveness.  But how can they give stakeholders autonomy, while ensuring the flow of work for as little money as possible?

We know from Eli Goldratt’s pillars of TOC that every conflict can be eliminated.  Our intuition suggests that the D action creates a legitimate risk to the C need.  However, is there an inherent jeopardy to efficiency if they seek collaboration and allow autonomy?  With an injection of seeking advice from staff, suppliers, providers and acting on it in meaningful ways, the cloud falls apart.  The D’ action is sufficient to both be globally effective (C) and locally efficient (B).

Current relationships with its workforce and suppliers cannot be fully described as win-lose (there are other places to sell your wares), but it would be correct to claim that they are BIG-WIN / small-win, with Amazon always getting the bigger win.  The more their internal and external constituents accumulate objections, and the longer those remain unresolved, the greater the resultant animosity.  The threat is that complaints of unethical behavior are spreading through the media and might erode customers' trust, which seems to us to be Amazon's biggest strategic asset.  Should public opinion swing against them, the government may be forced to intervene.  After all, politicians love to pander to their constituents.  Many other giants have fallen.  We little folk love to tell the stories.  Antitrust is a real risk to a company of Amazon's scale.  That should be warning enough.

Amazon’s Decisive Competitive Edge

Being able to keep it, and facing some future threats

Part 1

By Eli Schragenheim and Henry F. Camp.

Having a great idea, which would generate immense value to many, is a necessary starting point for building a big, successful business.  But it is far from sufficient.  Eventually, in order to truly succeed in a consistent way, there is a need to make sure that the whole set of necessary and sufficient conditions applies.  The idea will fail if even one necessary condition doesn't apply.  Thus, it is usually easier to learn from failures than from big successes. Yet learning from successes how to predict the perceived value of a new offering to the market, and how to connect it to the strategy, can be highly valuable.

In this article, we try to better understand some critical points in the success of Amazon.  The benefit is being able to better assess the potential of ideas, and hopefully also to point to some of the other necessary elements for ensuring success. We also want to check how the Theory of Constraints (TOC) insights and tools support shaping such ideas and identifying the other necessary elements for success.

An important tool, developed by Dr. Eliyahu Goldratt, is the Six Questions, originally for assessing the potential value of new technology.  We use them to assess the potential value of new ideas for products or services.

The Six Questions are:

  1. What is the power of the new technology?
  2. What current limitation or barrier does the new technology eliminate or vastly reduce?
  3. What are the current usage rules, patterns and behaviors that bypass the limitation?
  4. What rules, patterns and behaviors need to be changed to get the benefits of the new technology?
  5. What is the application of the new technology that will enable the above change without causing resistance?
  6. How to build, capitalize and sustain the business?

Another important insight by Goldratt is the concept of the ‘decisive competitive edge,’ which he defined as answering a need of a big enough market to an extent that no significant competitor can, while being on par with the competition on the other important aspects. To maintain the decisive competitive edge, delivering the value must be difficult for potential competitors to imitate or just counterintuitive to their business perception.

Jeff Bezos had a long-term, very ambitious vision of becoming BIG, and he made it happen.  Offering the market a decisive competitive edge is initially a must, in order to grow.  Today, Amazon is so big that its size itself is a decisive competitive edge, because it answers the need for security, both for the payment and for receiving the goods on time, which are significant concerns of buyers from any e-tailer.  This decisive competitive edge is on top of the usual advantage of economies of scale.

So, in retrospect, what was the secret that made Amazon such a giant?

We have to go back in time, visualize the decisions taken then, and speculate on Bezos’s rationale – what he saw that others didn’t.  The frequent mantra all Amazon key people cite, “we focus on the customers”, is hardly an insight.  Too many others, much less successful than Bezos, thought they did exactly that.

A non-trivial question is how any organization should determine what its clients want.  Is asking the clients, either directly or through a market survey, the best way?  Do clients know how to answer questions about something they have never encountered?  Moreover, it is obvious that to stay well ahead of all competitors one needs to develop a somewhat different approach and predict what potential clients want before they know they want it.  How can that be done, and what is the risk of being grossly mistaken?

Back in 1995, when Amazon started to sell books through the Internet, not too many people felt it answered a need.  Well, at least not those who lived fairly close to a Barnes and Noble, or Borders, store. So, what’s the big deal? 

This is what made the decision so far-sighted.  Books are easy to purchase through the Internet.  You see clearly enough what’s for sale; the possibility of buying the wrong item is relatively low.  One advantage of a virtual store is the ability to order whenever one finds information on an interesting book.  The chance of finding the specific book at a physical store is definitely less than 100%, but if the virtual store is well-managed, the chance is quite high. 

This ability to quickly trigger a purchase, without having to invest time and effort, is the key limitation vastly reduced by the virtual store.  The technology of providing the 1-Click method for quickly making a purchase was certainly an improvement.

However, answering a need by reducing the current limitation is not enough.  The third and fourth questions must be answered as well.  At the time, the norm was to drive to a store and buy, or call the store over the phone, when such a service was offered.  Speaking over the phone is bound to result in mistakes – misunderstanding the title, author or the shipping address.  A way to bypass the limitation is to write down a shopping list to be used when someone from the family goes to the mall.

The relevance of Goldratt’s fourth question is revealed once the idea of the virtual bookstore is understood.  In 1995, when Amazon started to sell books through the Internet, not everyone had computer access and knowledge.  The phones of that time didn’t have access to the Internet.  So, the offer and the reduced limitation were directed at people with computers, preferably at home.  Even then, the market of people with easy access to computers was big enough for a good start.  Whenever there was a wish to buy a book, the user had to enter the website, find the specific book and complete the sale by providing valuable personal information, like name, address and credit card.  This is a frightening procedure for many people even nowadays, and it certainly was back in 1995 when Amazon started selling books.

Security of financial data is a critical concern that could easily be a source of resistance, which is what the fifth question is all about.  Another concern is the assurance that the order will arrive on time and in good shape.  To reduce doubts and resistance, the whole mechanism of accepting orders must be easy and super friendly, plus an effective logistics system must be established that gets a copy of the ordered book(s), ships quickly and securely to the user.

Back in 1995 for Amazon, the answer to the fifth question was: We must build trust with potential customers.  The vast majority of the resistance to purchasing through the Internet, even today, is based on mistrust!  But once trust is built and maintained – it opens the door for more and more new offerings.

Selling through the Internet requires excellent operations.  Technology can attract early adopters, but the test is the ability to deliver.  The choice of selling books was right from the perspective of clients looking for professional books, which means books that are far from being bestsellers, books that are not easy to find even at the largest bookstores. 

Maintaining inventory is a challenge.  However, it is not mandatory to actually carry every book in the catalogue at the warehouse, if the supply from the publisher is fast and reliable.  It is best to keep most of the desired books in stock and to be able to replenish very quickly.  Ordering very small quantities from a publisher was a challenge, as the low-price/big-batch culture was strong in book printing.  Of course, once the store is huge, certainly like Amazon today, it can dictate replenishment terms.  The point is – it is tough for a young virtual shop to ensure the required agility from publishers.

Another challenge is logistics – shipping to the clients.  The challenge for a small company is huge, even in a geographically small area, and Jeff Bezos was insistent on shipping to any point on the globe.  Many virtual shops use the logistics services of third-party partners, which serve a variety of similar shops.  The problem is that, from the customers’ perspective, the shop is responsible; when problems occur, trust in the shop is eroded.

Goldratt’s sixth question raises the issue of strategy, especially facing three key needs: building the operational capabilities, building the marketing and sales capabilities, and keeping operational performance perfect when sales surge.  Evaluating the strategy to reach such a vision, it becomes clear that the only way to truly succeed is to become BIG, so that technology, operations and business are managed in perfect synchronization; then, and only then, can the company become truly profitable.  It is not surprising at all that it took Amazon six years to become even somewhat profitable.  On the way to becoming big, losing money served to deter potential competitors from taking the same route of accelerated growth.  This left Amazon as the only truly big e-tailer until the emergence of Alibaba, making the competitive edge truly decisive.

Expanding the offering to other products, like CDs and video games, was quite natural.  The characteristics were similar to books: standard products that don’t allow much room for mistakes, plus the advantage over physical stores due to the huge number of items that even very large stores cannot always have on hand.  The decisive competitive edge of a truly reliable virtual store, providing an unbelievably wide choice, was kept intact.  

Every decisive competitive edge is limited by the time it takes competitors to copy the idea, and even improve on it.  Being the first to answer a need might carry some value for a while, but competition will eventually catch up.

The introduction of Amazon Prime, in 2005, added substance to the operational commitment of Amazon and can be viewed as an additional decisive competitive edge and a barrier to competition.  The reduced limitation (Question 2), for customers who consider buying frequently from Amazon, is getting any item within two days.  The extra commitment comes at an annual subscription price, in return for which Amazon provides free two-day delivery and lower charges when customers need next-day shipment.

Amazon Prime cannot be delivered worldwide.  While Amazon serves customers all over the globe, the commitment to two-day delivery was at first confined to the US.  Later, Amazon extended the Prime offer to many more countries.  The commitment by Amazon to fast delivery for Prime customers has increased and, in some places, it is now two hours.

Diving into the third question reveals that, without the Prime service, customers prefer to reduce their shipping costs, which are detrimental to purchasing online, by combining several items into one order.  But this means a customer sometimes delays a purchase until enough items accumulate within one order.

The advantage of ordering whenever you like and getting it fast, without extra cost, provides a considerable benefit for customers and a different benefit for Amazon.  From the customer’s perspective there is no financial advantage in combining different purchases, so every item can be ordered in isolation.  From Amazon’s perspective, since other virtual shops don’t offer such a fast response for free, many items, which customers might have preferred to purchase elsewhere, go to Amazon.

The cost of Prime membership is probably the only reason to resist the offer.  Once paid, it triggers a tendency to habitually buy from Amazon, rather than from another, more specialized virtual shop – thus significantly capitalizing on the decisive competitive edge.  Amazon’s technology and its excellent delivery performance make sure it can sustain the growth, which completes the answer to the sixth question.

Another key decisive competitive edge launched by Amazon was the appearance of the Kindle and the related popularization of eBooks.  Reading books from the screen of a computer was known long before.  Theoretically, eBooks offer a decisive competitive edge by allowing the storage of a huge number of books without the space that printed books require.  However, reading a book from a laptop is not particularly engaging.  Developing a dedicated, light electronic book reader reduced the limitation of having to sit at a computer, and the E-Ink display used by the Kindle allowed reading even in sunlight.  This offered an experience somewhat similar to reading a printed book, sitting at any convenient spot, with a special advantage for comfortable reading on a flight.  The Kindle had to compete with the iPad, which came about two years later.  The Kindle deserves a more detailed value analysis of its own.  Strategically, it established Amazon’s status as the one place to turn to for books, printed or in Kindle format.

On top of becoming a giant e-tailer, Amazon looked to other markets.  One of the most valuable of Goldratt’s insights was: “segment your market – don’t segment your resources.”  The logic is that capable resources with excess capacity can serve more than one market.  Furthermore, from an uncertainty-management standpoint, diversifying markets adds stability.  Amazon developed as a technological enterprise, whose business is aided by a state-of-the-art IT system.  So, why not capitalize on it and offer valuable IT services to other businesses, as well as governments?

Amazon identified two key needs, not addressed at the time, of a wide spectrum of organizations.  First, the need to store huge amounts of data that are consistently growing and, secondly, the need for more computing power.  Both gave rise to the concept of the ‘cloud’ as a major new IT service.  The fear of cyber-attacks, which have become a serious threat, and the provision of quick access to data from any point on the globe, added substance to the new offering.  The need for computing power, without having to rely on a growing number of private servers, was a need seeking a safe and stable answer.

Goldratt’s first four questions have straightforward answers when a giant like Amazon offers cloud services, mainly storage and computing power.  The fifth question, looking for possible resistance, raises the issue of safety from two different perspectives.  One is the dependency on the security service of a third party, big as it is, to keep the data safe and prevent professional hackers, for whom the cloud is an especially desirable target, from penetrating it.  The other angle is security from Amazon itself, which surely has the technical ability to penetrate the stored data of its competitors when it resides on Amazon’s own servers.

Amazon may make much more money from Amazon Web Services (AWS), but its reputation rests on being a giant e-tailer, where it has only Alibaba as a huge competitor.  Practically, it seems to us that Amazon and Alibaba are not truly head-to-head competitors.  However, when it comes to AWS, two other giants, Google and Microsoft, are direct competitors, offering similar answers to the same need, and a whole array of smaller, yet big enough, cloud-services providers has appeared as well.

Revealing the potential value of combining the Theory of Constraints (TOC) with Artificial Intelligence (AI)

AI, as a generic name for computerized tools capable of learning from past data in order to take independent decisions, or to support decisions, is becoming the key buzzword for how future technology will change the world.

However, there are quite a lot of concerns regarding the ability of AI not just to improve our lives, but maybe also to cause considerable damage.

I strongly believe that the Theory of Constraints (TOC) brings rational judgement and a tendency to look for the inherent simplicity, revealing the true potential of seemingly complex and uncertain situations.  Can the qualities of TOC significantly improve the potential of AI to bring value to the management of organizations?

The emphasis of TOC on identifying the right focus for achieving the best performance, which also means what not to focus on, is based on recognizing the capacity limitation of our mind.

Can AI significantly help in better exploiting the limited capacity of the human mind?

All human beings should guide their minds to focus on what truly matters.  In managing organizations, achieving more of the GOAL, now and in the future, provides the ultimate objective for assessing what to focus on right now.  Yet no matter how cleverly we identify what truly matters, some important matters might be missed.  One of the neglected areas in TOC, which no manager can afford to ignore, is identifying emerging threats as early as possible.

Computers are also limited in their ability to process huge amounts of data – but their limit is far above the human one, and that big gap keeps widening.  So, can we hope that, while the top objective is defined by the human manager, clever use of software, particularly AI, could constantly check the validity of the current focus and warn whenever a new critical issue emerges?

AI is widely used to replace human beings in simple, straightforward actions, like using robots in large distribution centers.  Driving cars without a human driver is a more ambitious target, but it is also something that the vast majority of human beings do well (when they are not under the influence).  The current managerial emphasis in the use of AI is on reducing the ongoing cost of employing workers for simple enough jobs.  It would be good to show that AI could also support substantial growth of throughput and even enhance strategic decision-making.

The special power of AI is its ability to learn from huge amounts of past data.  This means it can also be trained to come up with critical decision-support information, based on observed correlations between variables, noting trends and sudden changes in the behavior of market demand, suppliers, and flow blockages.  So, instead of making the AI module the decision maker for relatively simple decisions, it can be used for improving the performance of organizations.  A natural first target is improving the forecasting algorithms, highlighting also the reasonable possible spread.  The ability to identify correlations could expose dependencies between various SKUs, and that would significantly improve the forecasts.  The more challenging tasks are to provide information on the potential impact on the market of price changes, and of other critical characteristics of the offerings.  Another worthy challenge is to highlight irregularities that require immediate management attention.  From the TOC BOK perspective, it would be valuable to evaluate the effectiveness of the buffers better than is done today.  Working with AI could indirectly improve the intuition, and even the thinking, of open-minded managers!  If human managers were able to use AI to validate, or invalidate, assumptions and hypotheses, this would considerably improve the quality of management’s evaluation of the ramifications of changes.
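
A minimal sketch of that first target, assuming a hypothetical table of daily unit sales per SKU (the file name and columns are illustrative, not a real dataset), shows how observed correlations between SKUs could be surfaced for the forecasting process:

```python
import pandas as pd

# Hypothetical input: one column of daily unit sales per SKU.
sales = pd.read_csv("daily_sales_by_sku.csv", index_col="date", parse_dates=True)

# Pairwise correlation of daily demand between SKUs.
corr = sales.corr()

# Keep only strongly correlated pairs (the 0.7 threshold is an arbitrary assumption).
pairs = (
    corr.where(lambda c: c.abs() >= 0.7)
        .stack()                       # long format: (sku_a, sku_b) -> correlation
        .reset_index(name="correlation")
)
pairs = pairs[pairs["level_0"] < pairs["level_1"]]   # drop self- and mirrored pairs
print(pairs.sort_values("correlation", ascending=False))
```

SKUs that consistently move together hint at dependencies that the forecast should treat jointly rather than per item.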

One important downside of AI, especially from a TOC perspective, is that it does not consider logical cause-and-effect.  Being able to check cause-and-effect hypotheses is a key mission.  Another downside is the dependency on the training data, which could lead to erroneous results.  A key challenge in implementing AI is finding the way to reduce the probability of a significant mistake, and being able to spot such a mistake through cause-and-effect analysis.

The process of aiming to use AI most effectively starts with the GOAL, derives the key elements that impact it, and then deduces worthy objectives.  The list of valuable objectives, which would enhance the performance of the organization, should be analyzed to find out whether AI, maybe together with other software modules, can overcome the obstacles that currently prevent the achievement of these objectives.

A key generic idea is to recognize the potential of AI providing vital information, or even new observed insights, as an integral part of the human decision-making process. 

Setting the worthy objectives, and guiding the AI to bring the supporting and necessary information, is where TOC wisdom can be so useful for drawing the most from AI.  Suddenly the title of “The Haystack Syndrome – Sifting Information out of the Data Ocean” finds wider meaning: the data ocean has grown by several orders of magnitude, but so has the technology for making the best sense of it.

While computers in general, and AI in particular, are vastly superior in handling complexity, meaning many different variables that interact with each other, the tougher challenge is to face uncertainty, both the ‘noise’, the inherent common and expected variations, and the risks, which are rarer, yet highly damaging.

Here comes the opportunity of using the emerging power of AI, combined with the TOC wisdom, to support the assessment of future moves. 

Guiding the AI to observe predicted trends in the market, especially the impact of external forces, like changes in the economy, and even predicting the effect of increasing or reducing prices, could yield major value to the decision makers.  Much of the relevant data required for such missions lies in external databases.  It is possible that services for obtaining the data from various external sources would be required.  It would be good if cooperation between competitors, allowing AI analysis of their combined data, were achieved and carried out by a neutral third party.  Such cooperation should ensure that no internal data of one company ever leaks to another company.  But the outcome of the analysis, highlighting issues like price sensitivity, the impact of inflation, changes in government regulations, and many others, could yield knowledge that is currently hidden, leaving the decisions to be based solely on intuition.  The key disadvantage of human intuition is being slow to adapt to changes.  Feeding the AI a huge number of similar past changes makes it much better at predicting the outcomes, as long as there is enough relevant data that wasn’t made irrelevant by the change.

My current thinking about the effective use of AI for managerial decision-making, including the critical question ‘what to focus on’, is that there are two focused categories of targets for TOC-AI processes that would bring huge value to managing any organization:

  1. Sensing the market demand.  This includes forecasting current trends and predicting the potential outcomes of certain moves and changes, plus giving a good idea of the impact of price, the economy and the variety of choice.
  2. Pointing to an emerging threat.  The TOC wisdom could easily yield a list of potential threats that management should be made aware of as early as possible.  There is a need to identify signals, observed in the recent past, that testify that a certain threat is developing.  Giving enough examples to the AI could trigger an ongoing search for evidence.
    • For instance, when an important supplier starts to behave erratically, it could point to problems with its management, even the possibility of bankruptcy, or to a decline in our standing as a client.  Similarly, a change in the quantities, and/or frequency, of the purchasing orders of a big client could signal a change in the client’s purchasing policies.  A sketch of such a signal monitor follows this list.
    • One of the problems of complex environments is the accuracy of the data.  If the AI module intentionally looks for outcomes that don’t fit the data, then notifying the user to check specific data items could be meaningful.
    • An existing example is monitoring the need for maintenance of machines.  This is an Industry 4.0 related feature that identifies when the current pace and quality of the machine deviates from the norm, before it becomes critical, leaving enough time to plan the necessary maintenance activities.
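
For the supplier example in the first bullet above, here is a minimal sketch, not a production monitor, assuming we log the lead time of every delivery; the window size and thresholds are arbitrary tuning assumptions:

```python
import statistics

def supplier_alert(lead_times_days, recent_n=5, k=2.0):
    """Flag a supplier whose recent deliveries deviate from its own history."""
    history = lead_times_days[:-recent_n]
    recent = lead_times_days[-recent_n:]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    # Alert when the recent average drifts beyond k standard deviations,
    # or when recent variability far exceeds the historical one.
    drifting = abs(statistics.mean(recent) - mean) > k * stdev
    erratic = statistics.stdev(recent) > 2 * stdev
    return drifting or erratic

# A supplier that used to deliver in about 7 days now fluctuates wildly:
print(supplier_alert([7, 8, 7, 6, 7, 8, 7, 7, 12, 3, 15, 6, 14]))  # True
```

A real implementation would learn such thresholds from many examples; the point is that the signal is cheap to compute and can surface the threat early.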

Critical questions for continued discussion:

  • Are there more generic organizational topics where AI, when guided by TOC, can contribute to management?
  • Can we come to a generic set of insights on how TOC can impact the objectives, training, and the actual use of AI?
    • For instance, guiding the AI to thoroughly check the quality of the capacity consumption data of the constraint and the few other critical resources. Comparing them to the capacity requirements of the past and incoming demand could help in determining whether the available protective capacity is adequate.
  • How can we make it happen?
    • And what training should the people using the AI module go through?

Between Sophisticated and Simple Solutions and the Role of Software

Smart people like to exploit their special capability by finding sophisticated solutions to seemingly complex problems.  Software allows even more sophistication and with better control. 

However, two immediate concerns arise.  One is whether the impact on the actual results is significant enough to care about.  In other words: do the results justify the extra effort?  The other is whether such a solution would fail in reality.  In other words, what is the risk of getting inferior results?

Simple solutions focus on only a few variables, use much less data, and their inherent logic can be well understood by all the involved players.  However, simple solutions are not optimal, meaning more could have been achieved, at least theoretically.

Here is the basic conflict: to manage well we need to draw the full potential from the system, which pushes towards sophisticated solutions; at the same time we need solutions that are robust under uncertainty and understood by everyone involved, which pushes towards simple solutions.

Until recently we could argue that the simple solutions have an edge, because of three common problems of sophistication.

  1. Inaccurate data could easily mess up the optimal solution, as most sophisticated solutions are sensitive to the exact values of many variables.
  2. People executing the solution might misunderstand the logic and make major mistakes, which prevent achieving the expected excellent results.
  3. Any flawed basic assumption behind the optimal solution disrupts the results.
    • For instance, assuming certain variables, like sales of an item at different locations, are stochastically independent.
    • When the solution is based on software, bugs might easily disrupt the results.  The more sophisticated the algorithm, the higher the chance of bugs that aren’t easily identified.

The recent penetration of new technologies might push back towards sophistication.  Digitization of the flow of materials through shop-floors and warehouses, including RFID, has advanced the accuracy of much of the data.  Artificial Intelligence (AI), coupled with Big Data, is able to consider the combined impact of many variables and also take into account dependencies and newly discovered correlations.

What are the limitations of sophisticated automation?

There are two different types of potential causes of failure:

  1. Flawed results due to problems of the sophisticated algorithm:
    • Missing information on matters that have a clear impact.
      • Like a change in the regulations, a new competitor, etc.
        • In other words, information that humans are naturally aware of, but that is not included in the digital databases.
    • Flawed assumptions, especially in modelling reality, plus software bugs.  This includes assessments of the behavior of the uncertainty and of the relevance of past data to the current state.
  2. Misunderstanding the full objective of top management.  Human beings have emotions, desires and values.  There could be results that are in line with the formal objective function but violate certain key values, like being fair and honest with clients and suppliers.  These values are hard to code.

The human mind operates in a different way than computers do, leading to inconsistencies in evaluating what a good solution is.

The human mind uses cause-and-effect logic for predicting the future, using informal and intuitive information.  On the one hand, intuitive information might be wrong.  On the other hand, ignoring clear causality and truly relevant information could definitely yield inferior results.

Artificial Intelligence uses statistical tools to identify correlations between different variables.  But it refrains from assuming causality, and thus its predictions are often limited to existing data and fail to consider recent changes that have no precedent in the past.  The only way to predict the ramifications of such a new change is by cause-and-effect.

Human beings are limited in carrying out a lot of calculations.  Human capacity also limits the number of different topics the mind can deal with in a period of time.

Another aspect to consider is the impact of uncertainty.  The common approach to uncertainty is that it adds considerable complexity to the ability to predict the future based on what is known from the past. 

Uncertainty significantly limits our ability to predict anything that lies within the ‘noise’.  The noise can be described as the “common and expected uncertainty”, meaning the combined variability of all the relevant variables, focusing on the area of the vast majority of the cases (say 90% of the results), ignoring rare cases.  So, any outcome that falls within the ‘noise’ should not come as a surprise.  As long as the ‘noise’ stays at about the same level, it represents a limit to the ability to predict the future.  But that is already more than nothing, as it is possible to outline the boundaries of the noise, and predictions that are beyond the noise should be the focus for analysis and decisions.

Goldratt said: “Don’t optimize within the noise!”

Good statistical analysis of all the known contributors to the noise might be able to reduce it.  According to Goldratt, this is often a poor exploitation of management time.  First, because in most cases the reduction in the noise is relatively small, while requiring effort to look for the additional required data.  Secondly, it takes time to prove that the reduction in the noise is real.  And thirdly, most important, there are other changes that could improve the performance well beyond the existing noise.
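
A minimal sketch of outlining the boundaries of the noise, with invented weekly sales figures and a 90% band as one reasonable choice:

```python
import numpy as np

# Hypothetical weekly sales history for one item.
weekly_sales = np.array([950, 1020, 1100, 980, 1230, 1050, 890, 1180,
                         1010, 1090, 970, 1150, 1060, 940, 1120])

# The 'noise' band: the range covering roughly 90% of past outcomes.
low, high = np.percentile(weekly_sales, [5, 95])
print(f"noise band: {low:.0f} .. {high:.0f} units per week")

# Only an outcome beyond the band deserves analysis and decisions.
new_week = 1420
if not low <= new_week <= high:
    print(f"{new_week} lies outside the noise - investigate")
```

Anything inside the band is noise and deserves no managerial reaction; a result outside it is a candidate signal of a real change.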

A potential failing of statistical analyses is considering past data that are no longer relevant due to a major change that impacts the relevant economy.  One can wonder whether forecasts that consider data from before Covid-19 have any relevance to the future after Covid.

The realization that a true improvement of performance should be far above the noise greatly simplifies the natural complexity, and could lead to effective simple solutions that remain highly adaptive to significant changes beyond the natural noise.

Demonstrating the generic problem:

Inventory management is a critical element of supply chains.  Forecasting the demand for every specific item at every specific location is quite challenging.  Human intuition might not be good enough.  The current practice is to determine a period of time, like two weeks, of inventory of item X at location Y, where the quantity of “two weeks of inventory” is determined either through a forecast or by calculating average daily sales.

Now, with much more sophisticated AI, it is assumed that it is possible to accurately forecast the demand and align it with the supply time, including the fluctuations in supply.  However, a forecast is never one precise number, and neither is the supply time.  Every forecast is a stochastic prediction, meaning the outcome could vary.  Having a more accurate forecast means that the spread of the likely results is narrower than for a less accurate forecast.  The sophisticated solution could try to assess the damage of shortages versus surpluses; however, part of the required information for such an assessment might not be in the available data.  For instance, the significant damage of a shortage is often the negative response of the customers.  It might be possible to track actual loss of sales due to shortages, but it is challenging to assess the future behavior of disappointed customers.

The simpler key TOC insight for inventory management is to replenish as fast as possible.  This recognition means narrowing down the forecasting horizon.  Actually, TOC assumes, as an initial good-enough forecast, no change in the demand over that horizon, so replenishing what was sold yesterday is good enough.

Another key insight is to base the target inventory not just on the on-hand stock, but to include the inventory that is already in the pipeline.  This is a more practical definition, as it represents the current commitment for holding inventory, and it makes it straightforward to keep the target level intact.

Defining the target inventory to include both on-hand and pipeline stock makes it possible to issue signals reflecting the current status of the stock at the location.  Normally we’d expect anything between 1/3 and 2/3 of the target level to be available on-hand to represent the “about right” status of inventory, knowing the rest is somewhere on the way.  When less than one-third is on-hand, the stock is at risk, and actions to expedite the shipments are required.  It is the duty of the human manager to evaluate the situation and find the best way to respond.  Such an occurrence also triggers an evaluation of whether the target level is too low and needs to be increased.  Generally speaking, target levels should be stable most of the time.  Frequent re-forecasting usually comes up with only minor changes.
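
A minimal sketch of such a signal, under the assumptions above: the target level covers on-hand plus pipeline stock, and one-third and two-thirds of the target serve as the zone boundaries (the zone names are illustrative, not a full buffer-management system):

```python
def stock_status(on_hand, target_level):
    """Classify the on-hand stock relative to the target level."""
    fraction = on_hand / target_level
    if fraction < 1 / 3:
        return "AT RISK: expedite, and check whether the target is too low"
    if fraction <= 2 / 3:
        return "ABOUT RIGHT: the rest of the target is on the way"
    return "COMFORTABLE: consider whether the target is too high"

print(stock_status(on_hand=40, target_level=150))   # AT RISK
print(stock_status(on_hand=80, target_level=150))   # ABOUT RIGHT
```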

The question is: as the target level includes safety, what is the rationale for introducing frequent changes of 1%-10% to the target level, when such changes are just a reflection of the regular noise and probably not of a real change in the demand?

A sophisticated solution, without the wisdom of the key insights, would try to assess the two uncertain situations: how much demand might show up in the short term, and whether the on-hand stock, plus whatever is on the way, will arrive on time.  It would also estimate whether the anticipated results would fall within the required service level.

Service level is an artificial and misleading concept.  Try telling customers that their delivery was subject to the 3-5% of cases that the service level doesn’t cover.  Customers can understand that rare cases happen, but then they like to hear the story that justifies the failure.  It is also practically impossible to target a given service level, say 95%, because even with the most sophisticated statistical analysis there is no way to truly capture the stochastic function.  Assuming that the spread of the combined delivery performance follows the Normal distribution is convenient, but wrong.

The practical need for humans to understand the logic of the solution, and to be able to input important information that isn’t contained in any database, combined with the superiority of computers in following well-built algorithms and carrying out huge numbers of calculations, points to the direction of the solution.  It has to include two elements: simple, powerful, agreed-upon logic enabled by semi-sophisticated software, coupled with interaction with the human manager.  Definitely not an easy, straightforward mission, but an ambitious, yet doable, challenge.

Forecasts – the Need, the Great Damage, and Using it Right

Forecasting means predicting the future based on data and knowledge gained in the past.

According to the above definition every single decision we make depends on a forecast.  This is definitely true for every managerial decision.

The problem with every prediction is that it is never certain.

Treating forecasting as a prophet telling us the future is a huge mistake.  So, we need a forecast that would present what the future might look like, including what is more likely to happen, and what is somewhat less likely, but still possible.

Math teaches us that describing any uncertain behavior requires, at the very least, two different descriptors/parameters: a central value, like the ‘expected value’, and another that describes the expected deviation from the average.  This leads to the definition of a ‘confidence interval’ within which the more likely possible results lie.  Any sound decision has to consider a range of possible results.

While there are several ways to obtain effective forecasts, which could be used for superior decision-making, the real generic problem is the misuse of forecasts.

There are two major mistakes in using forecasts:

  1. Using one-number forecasts.
  2. Using the wrong forecasting horizon or level of detail.  The generic point is that the exact type of the forecast has to fit the decision that takes the forecast as a critical information input.  A similar mistake is using the wrong parameters for computerized forecasts, or relying on irrelevant or poor-quality data.

The use of one-number forecasts

The vast majority of the forecasts used in business display only one number per item/location/period.  There is no indication of the estimated forecasting error.  Thus, if the forecast states that 1,000 units are going to be sold next week, there is no indication of whether selling 1,500 is still likely to happen, or only 600.  This distorts the value of the information required for a sound decision, like how much to buy for next week’s sales.

Any computerized forecast, based on even the simplest mathematical model, includes an estimation of the mean possible deviation from the mean.  Taking the expected value of a forecast and turning it into a reasonable range – for example, plus or minus 1.5 to 2 estimated standard deviations, or a multiple of the mean absolute percentage error (MAPE) – yields about an 80-90% chance that the actual outcome falls within that range.
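
A minimal sketch, with invented numbers, of turning a one-number forecast into a reasonable range using the estimated standard deviation of past forecast errors (the 1.5 multiplier is one of the choices mentioned above):

```python
import numpy as np

# Hypothetical history: past one-number forecasts and the actual sales.
forecasts = np.array([1000, 1100, 950, 1200, 1050, 980, 1150])
actuals = np.array([1080, 990, 1010, 1260, 930, 1040, 1110])

errors = actuals - forecasts
spread = 1.5 * errors.std(ddof=1)   # aiming at roughly 80-90% coverage

next_forecast = 1200
low, high = next_forecast - spread, next_forecast + spread
print(f"reasonable range: {low:.0f} .. {high:.0f} units")
```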

How is such a reasonable range able to support decisions?

The two key meaningful pieces of information are the boundaries of the range.  Every alternative choice should be considered against both extreme values of the range in order to calculate/estimate the potential damage.  When the actual demand equals the lower side of the range there is one outcome; when it equals the higher side there is another.  When the demand falls somewhere within the range, the outcome also falls between the extreme outcomes.  Given both extreme outcomes, the choice between the practical alternatives becomes realistic and leads to better decisions than when no such range of reasonable outcomes is presented to the decision makers.

A simple example:  The forecast states that next week sales of Product X would be somewhere between 1,000 and 1,400 units.  The decision is about the level of stock at the beginning of the week. For simplicity let’s assume that there is no practical way to add X units during the week, or move units to a different location.

There are three reasonable alternatives for the decision: holding 1,000 units, holding 1,400, or going after the mean forecast of 1,200.

If only 1,000 units are held and the demand is just 1,000 – the result is perfect.  However, if the demand turns out to be 1,400, there is unsatisfied demand for 400 units.  The real damage depends on the situation: what might the unsatisfied customers do?  Will they buy similar products, develop a grudge against the company, or patiently wait for next week?

When the decision is to hold 1,400 in stock, the question is: there might be a surplus of unsold units at the end of the week – is that a problem?  If sales continue next week and the units continue to look new, then the only damage is the too-early expense of purchasing the extra 400 units.  There might be, of course, other cases.

What is the rationale for storing 1,200 units?  It makes sense only when a shortage and a surplus cause about the same damage.  If being short is worse than having a surplus, then storing 1,400 is the common-sense decision.  When a surplus causes the higher damage – store just 1,000.

The example demonstrates the advantage of having a range rather than 1,200 as the one-number forecast, which leaves the decision maker wondering how much the demand might be.
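
The comparison can be made explicit with a small calculation.  In the sketch below, the per-unit damage of a shortage and of a surplus are illustrative assumptions that every company must estimate for itself:

```python
# The range forecast for next week, per the example above.
low_demand, high_demand = 1000, 1400

# Illustrative assumptions: lost margin plus goodwill per unit short,
# and holding/obsolescence cost per unit of surplus.
shortage_damage_per_unit = 25
surplus_damage_per_unit = 5

for stock in (low_demand, (low_demand + high_demand) // 2, high_demand):
    # Worst cases occur at the two extremes of the range.
    worst_shortage = max(high_demand - stock, 0) * shortage_damage_per_unit
    worst_surplus = max(stock - low_demand, 0) * surplus_damage_per_unit
    print(f"stock {stock}: worst shortage damage {worst_shortage}, "
          f"worst surplus damage {worst_surplus}")
```

With these assumed damages, holding 1,400 minimizes the worst case, matching the common-sense reasoning above; change the two damage figures and the preferred alternative may flip.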

There are two very different ways to forecast the demand.  One is through a mathematical forecasting algorithm, based on past results and performed by a computer.  The other is using the people closest to the market to express their intuition.  The mathematical algorithm can be used to create the required range, but there is a need to define the parameters of that range, mainly the probability that the actual result will fall within it.

The other type, where human beings use their intuition to forecast the demand, also lends itself to forecasting a range, rather than one number.  Human intuition is definitely not tuned to one number.  But certain rules should be clearly verbalized; otherwise, the human-forecasted ranges might be too wide.  The idea behind the reasonable range is that possible, but extreme, results should be left outside the range.  This means the organizational culture must accept that sometimes, not too often, the actual result deviates from the forecasted range.  There is no practical way to assess an intuitive 90% confidence interval, as the exact probabilities, even the formula describing the behavior of the uncertainty, are unknown.  Still, it is possible to approximately describe the uncertainty in a way that is superior to simply ignoring it.

We do not expect all actual results to fall within the range; we expect 10-20% to lie outside the reasonable range.

There could be more variations on the key decision.  When both shortages and surpluses cause considerable damage, maybe Operations should check whether it is possible to expedite a certain number of units in the middle of the week.  If this is possible, then holding 1,000 at the beginning of the week and being ready to expedite 400, or fewer, during the week makes sense.  It assumes, though, that watching the actual sales at the start of the week will yield a better forecast, meaning a much narrower range.  It also assumes the cost of expediting is less than the cost of being short or of carrying too much.

Another rule that has to be fully understood is to avoid combining the ranges of items/locations when forecasting the demand for a product family, a specific market segment, or the total demand.  While the sum of the means is the mean of the combined forecasts, combining the ranges yields a huge exaggeration of the reasonable range.  The mathematical forecasting should re-forecast the mean and the mean absolute deviation based on the past data of the combined demand.  The human forecast should, again, rely on human intuition.

Remember the objective: supporting better decisions by exposing the best partial information that is relevant to the decision.  Considering a too-wide range, which includes cases that rarely happen, doesn’t support good decisions, unless the rare case might yield catastrophic damage.  Too-wide ranges push towards overly safe decisions – definitely not the decisions required of successful companies.

Warning: another related common mistake is assuming that the demand for each item/location is independent of the demand for another item or location.  THIS IS USUALLY WRONG!  There are partial dependencies of demand between items and across locations.  However, the dependencies are not 100%.  The only practical advice is: forecast what you need.  When you need the forecast for one item – do it just for that item.  When you need the forecast of total sales – do it for the total from scratch.  The one piece of information you might use: the sum of the means should be equal to the mean of the sum.  When there is a mismatch between the sum of the means and the mean of the sum, it is time to question the underlying assumptions behind both the detailed and the global forecasts.
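
A minimal Monte-Carlo sketch of the warning above, assuming ten items whose demands are only partially correlated (the 0.3 correlation and all other figures are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_weeks, rho = 10, 10_000, 0.3

# Partially correlated demand: a shared market factor plus item-specific noise.
shared = rng.normal(0, 1, (n_weeks, 1))
noise = rng.normal(0, 1, (n_weeks, n_items))
demand = 1000 + 150 * (np.sqrt(rho) * shared + np.sqrt(1 - rho) * noise)

# Per-item 90% ranges, naively summed.
lows, highs = np.percentile(demand, [5, 95], axis=0)
print("sum of the item ranges:", round(lows.sum()), "..", round(highs.sum()))

# The correct range of the total, forecast from the totals themselves.
total = demand.sum(axis=1)
low, high = np.percentile(total, [5, 95])
print("range of the sum      :", round(low), "..", round(high))
```

With these invented figures, the naively summed ranges come out roughly 60% wider than the true range of the total, and the gap grows as the correlation drops – which is exactly why the advice is to forecast what you need.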

The right forecast for the specific decision

Suppose that a consistent growth in sales raises the issue of a considerable capacity increase, both equipment and manpower. 

Is there a need to consider the expected growth in sales of every product? 

The additional equipment is required for several product families, so the capacity requirements depend mainly on the growth of total sales, even though some products require more capacity than others.

So, the key parameter is the approximate new level of sales, from which the required increase in capacity is calculated back.  That increase in sales could also require an increase in raw materials, which has to be checked with the suppliers.  There might even be the need for a larger credit line to bridge the gap between the timing of the material purchasing, the regular operating expenses for maintaining capacity, and the timing of the incoming revenues.

Relying on the accumulation of individual forecasts is problematic.  It is good for calculating the average of the total, but not for assessing the average errors.  Being exposed to a reasonably conservative forecast of the total, versus a reasonably optimistic one, would highlight the probable risk in the investment as well as the probable gain.

A decision about how much to store at a specific location has to be based on the individual ranges per SKU/location.  This is a different type of forecast, facing a higher level of uncertainty, and it should thus be based on short horizons and fast replenishment, thereby dealing better with the fluctuations in the demand.  The main assumption of TOC and Lean is that the demand for the next short period is similar to the last period, so fast replenishment according to the actual demand provides quick adjustment to random fluctuations.  Longer-term planning needs to consider trends, seasonality and other potentially significant changes.  This requires forecasts that look further into the future and are able to capture the probability of such changes and include them in the reasonable range.

There are also decisions that have to consider the forecast for a specific family of products, or decisions that concern a particular market segment, which is a part of the market the company sells to.

The current practice regarding computerized forecasting is to come up with detailed forecasts for every item and accumulate them based on the need.  The problem, as already mentioned, is that while the accumulation of the averages yields the average of the total, when it comes to ranges the resulting range is much too wide.

Another practice, usually based on intuitive forecasts, is to forecast the sales of a family of products/locations and then assume a certain distribution over the individual items.  This practice adds considerable noise to the average demand for individual items, without any reference to the likely spread.

Considering the power of today’s computers, the simple solution is to run several forecasts, each matched to the decision-making requirements.

When it comes to human-intuition based forecasts, there is flexibility in matching the forecast to the specific decision.  The significant change is using the reasonable range as the key information for the decision at hand.

Data quality

A special issue for forecasting is being aware of what past data is truly relevant to the decision at hand.  Statistics, as well as forecasting algorithms, have to rely on time-series data from the not-too-close past in order to identify trends, seasonality and other factors that impact future sales.  The potential problem is that consumption patterns might have gone through a major change in the product, the market or the economy, so it is possible that what happened prior to the change is not relevant anymore.

Covid-19 caused a dramatic change to many businesses, like tourism, restaurants, pubs and cinemas.  Other businesses have also been impacted, but in a less dramatic way.  So, special care should be taken when forecasting future demand after Covid-19 while relying on demand data from during the pandemic.  The author assumes that the consumption patterns for most products and services will behave differently after Covid-19 relative to 2019.  This means the power of the computerized forecasts might go down for a while, as not much good data will be available.  Even human-intuition forecasts should be used with extra care, as intuition, like computerized forecasting, is slow to adapt to a change and to predict its behavior.  Using rational cause-and-effect to re-shape the intuition is the right thing to do.

Conclusions

All organizations have to try their best to predict future demand, but all managers have to internalize the basic common and expected uncertainty around their predictions and include the assessment of that uncertainty in their decision-making.

Once this recognition is in place, forecasts that yield a reasonable range of outcomes become the best supportive information, leading to much improved decisions.  In times when the common and expected uncertainty is considerably higher than it was prior to 2020, organizations that learn faster to use such range-forecasting will gain a decisive competitive edge.

Lack of Trust as a Major Obstacle in Business

By Eli Schragenheim and George Dekker

Business organizations and individuals clearly try to maximize their own interests, even at the expense of others. This creates an inherent lack of trust between any two different business entities.

Just to be on the safe side of clarity, let’s consider the following definitions:

Trust: assured reliance on the character, ability, strength, or truth of someone or something (Merriam-Webster)

A feeling of confidence in someone that shows you believe they are honest, fair, and reliable (Macmillan dictionary)

Trust is a key concept in human relations, but does it have a role in business? Some elements of trust can be found in business relationships, like reliability and accountability. But does an organization have a ‘character’ that can be appreciated by another organization, or even by an individual? Is it common to attribute ‘honesty’ to a business organization?

Yet, trust is part of many business relationships. Actually, there are three categories of trust that business organizations require:

  1. Maintaining stable and effective ongoing business relationships with another organization. This is especially needed when the quality of performance of the other organization matters. For instance, trusting a supplier to be able to respond faster when necessary. Suppliers need to trust their clients to honor the payment terms. When two organizations partner for a mutual business objective, the two-way trust is an even stronger need.
  2. The required two-way trust between an organization and its employees. This covers shareholders trusting the CEO, top management trusting their subordinates, and lower and mid-level employees trusting top management. When that trust is not there, the performance of the organization suffers.
  3. The trust of an individual, a customer or a professional, towards a company from which they expect service, or which they expect to follow the terms of an engagement.

The need of governments to gain and retain trust of citizens is out of scope for this article.

What happens when there is no trust, but both sides like to maintain the relationships?

The simple, yet limited, alternative is basing the relationship on a formal agreement, expressed as a contract, which generally includes inspection, reporting and other types of assurance of compliance, and details what happens when one of the parties deviates from the agreement.

There are two basic flaws in relying on contracts to ensure proper business relationships.

  1. When the gain from breaking the contract is larger than the realistic compensation, the contract cannot protect the other side. It is also quite possible that the realistic compensation, considering the time and effort needed to obtain it, is poor relative to the damage done.
  2. Contracts are limited to what is clearly expressed. As language is quite ambiguous, contracts tend to be long, cryptically written, and leave ample room for opportunism and conjecture. They contain only what the sides are clearly anticipating might happen, but reality generates its own surprises. The unavoidable result is that too much damage can be caused without clearly breaching the contract.

We can also point to another significant and generic problem when the business does not trust others:

Lack of trust impedes the ability to focus on key issues, as significant managerial attention is spent on monitoring and reacting to actions of others.

This realization is directly derived from the concept of ‘focus’, which is essential to TOC. Without being able to focus on exploitation and subordination to the constraint, the organization fails to materialize its inherent potential.

Before going deeper into the meaning of ‘trust’, let’s examine the somewhat similar concept of win-win.

Unlike trust, which is mostly emotional and intuitive, win-win is based on logical cause-and-effect. The essence is that when both parties win from a specific collaboration, there is a very good chance that the collaboration will work well. It seems clear that when the collaboration is based on win-lose, there is a high probability that the losing party will find a way to hurt the winning partner.

Win-win usually keeps the collaboration going, but it does not prevent deviations from its core spirit. Moreover, while win-win seems logical, it is not all that common in the business world. In too many cases there is no realization that win-win is absolutely necessary. The main obstacle to win-win is that managers are not used to analyzing a situation from the perspective of the other side. In other words, they are not aware of the wins and losses of the other party. Too many salespeople believe they do a good job, even though they do not really understand their client’s business case.

Another problem with win-win is that the initial conditions, upon which the win-win has been based, might go through a change. In such a case one party might realize that the collaboration could cause a loss, and that creates a temptation to violate the formal or informal agreement. The outbreak of Covid-19 certainly led to many cases where a seemingly win-win agreement came to an abrupt end, or led to updated terms that are actually win-lose.

Trust is even more ambitious than win-win. It goes deeper into the area of broad rules of “what shouldn’t be done”. It also requires reliance on the other party to be fully capable and it stretches beyond the current relationship.

Trust is based on emotions that generate a belief in the capabilities and integrity of the other. It is natural for people to trust or distrust others based on their intuition. Marriage is a good example of trust as a necessary condition for a “good marriage.” The practical requirements of a collaboration based on trust are far more than just win-win, because trust is less dependent on conditions that might easily be invalidated by external sources or events. Of course, there are many cases where people breach the trust given to them, which usually causes a shock to the believers and makes some people less open to trusting others again. It is also almost impossible to restore trust once it has been breached.

Generally speaking, many humans feel a basic need to trust some people; it makes their lives more focused, as they are less occupied with checking everyone and less fearful of being cheated.

But, trust between organizations is a very difficult concept.

Trust is an elusive concept that is difficult to measure. While humans use their emotions and intuition, the organizational setting prefers facts, measurements and analysis. Another difficulty in trusting an organization is that its management, the people who have made the trust possible, could be replaced at any time, or could be coerced into betraying the trust by forces within or outside the organization.

Still, if trusting others is a need for organizations, then the organization needs to relax its basic norms, and the damage caused by lack of trust needs to be clearly realized.

Let’s first check the relationships between the organization and its employees.

When an individual chooses to be an employee, the common desire is to stay within one organization until retirement, hopefully going up the ladder. At least, this was a common wish before high-tech companies, and the search for truly great high-tech professionals, changed the culture. The rise of high-tech revealed more and more employees who don’t intend to stay long in the organization they currently work for. In other words, they radiate that the organization should not trust them to be there when they are badly needed. This creates a problematic situation for high-tech, where the key employees are actually temporary workers and either side could decide to end the working relationship.

The commitment of organizations to their employees in all areas of business has also weakened, even though in Europe regulations restrict the freedom of management to easily lay off employees. Covid-19 made it clear to many employees that they cannot trust top management.

In a display of mistrust, most organizations consistently measure the performance of their employees. Many have serious doubts regarding the effectiveness of these personal performance measurements, but the most devastating effect is that the vast majority of employees look for any way, legitimate or not, to protect themselves from this kind of unfair judgment, including taking actions that violate the goal of the organization.

So, currently there is common mutual mistrust between employees and top management. In spite of that, most organizations continue to function, even though far from exploiting their true business potential. The price the organization pays is stagnation, low motivation to excel and a general refusal to take risks that might have personal implications.

As already mentioned, when two organizations need to collaborate, certain rules have to be set and agreed upon. Monitoring the other party’s performance is not only difficult; it consumes considerable managerial capacity and prevents managers from focusing on the more critical issues. As already noted, even detailed contracts do not fully protect the fulfillment of the agreement.

So, there is a real need for organizations to trust other collaborating organizations. This means a ‘belief’, without any concrete proof, that the other side would truly follow the agreement, and even the ‘spirit of the agreement’. The rationale is that trust greatly simplifies the relationships and increases the prospects of truly valuable collaboration.

Agreements between organizations are achieved through people, who meet face-to-face, which helps in establishing the trust. The feelings of the people involved are a key factor. This is what the term ‘chemistry’ between business people, or politicians, means.

However, a negative branch (a potential undesirable effect) of trusting another is:

It is possible, even quite common, for organizations to breach the trust placed in them, and by that cause considerable damage. The same is true between the organization and its employees.

How is it possible to trim the negative branch, taking into account the cost and difficulty of closely monitoring the performance and behavior of the other party?

A practical way is to trust the other party, but once a clear signal of misbehavior is identified – stop trusting. A breach of trust need not be the product of slow erosion; a single observable instance is sufficient to damage the trust for good.

This actually means that it is possible to build the image of a ‘trustworthy organization’. What makes it possible to trust, without frantically looking for signals of misbehavior, is that ‘trustworthiness’ applies not just to a specific agreement; it is a generic concept that applies to the general conduct of an individual or an organization. When an organization spreads the notion of its trustworthy behavior and capabilities, this can be monitored, as any deviation from trustworthy conduct would be publicized, and all the organizations that do business with that organization, or individual, will get the message.

Social media makes it possible to build, or ruin, a reputation of trustworthiness. There is a need, though, to handle cases where intentionally spread fake facts might disrupt that reputation. So, every organization that chooses to build a reputation of being trustworthy has to react fast to false accusations in order to keep its reputation.

E-commerce made the need to radiate trustworthiness particularly clear. Take a company like Booking.com as an example. Consumers who purchase hotel reservations through Booking have to trust that when they appear at the hotel they really have a room. The relationships between the digital store and its suppliers can also greatly benefit from trust.

So, it is up to the strategy of every company to evaluate the merits, and the cost, of committing to be trustworthy and of using that image as part of its key marketing messages. It is all about recognizing the perceived value, in the eyes of clients, suppliers and other potential collaborators, of being trustworthy in the long term. What organizations need to consider, though, is that a true breach of trust would make the task of re-establishing trustworthiness very hard indeed. So, when being trustworthy is a true competitive advantage, maybe even a decisive competitive edge, management has to protect it very thoroughly.