What should WE learn from Boeing’s two-crash tragedy?

The case of the two crashes of Boeing’s 737 MAX aircraft, less than six months apart, in 2018 and 2019, involves three big management failures that deserve to be studied, so that effective lessons can be internalized by all management.  The story, including some of the truly important detailed facts, is told in “Downfall: The Case Against Boeing”, a documentary recently released by Netflix.

We all demand 100% safety from airlines, and practically also from every organization: never let your focus on making money cause a fatal flaw!

However, any promise of 100% safety is utopian.  We can come very close to 100%, but there is no way to ensure that fatal accidents will never happen.  The true practical responsibility consists of two different actions:

  1. Invest time and effort to put protection mechanisms in place.  We in the Theory of Constraints (TOC) call them ‘buffers’, so even when something goes wrong, no disaster happens.  All aircraft manufacturers, and all airlines, are fully aware of the need.  They build protection mechanisms, and very detailed safety procedures, into the everyday life of their organizations.  Because of the many safety procedures, any crash of an aircraft is the result of several things going wrong together, and is thus very rare. Yet, crashes sometimes happen.
  2. If there is a signal that something that shouldn’t have happened has happened, then a full learning process has to be in place to identify the operational cause, and from that to identify the flawed paradigm that let the operational cause happen.  This is just the first part of the learning. Next comes deducing how to fix the flawed paradigm without causing serious negative consequences.  Airlines have internalized the culture of inquiring into every signal that something went wrong.  Still, such a process could and should be improved.

I have developed a structured process of learning from a single event, now entitled “Systematic Learning from Significant Surprising Events”.  TOCICO members can download it from the TOCICO site under New BOK Papers; the direct link is https://www.tocico.org/page/TOCBodyofKnowledgeSystematicLearningfromSignificantSurprisingEvents

Others could approach me and I’ll gladly send the paper to them.

Back to Boeing.  I, at least, don’t think it is right to blame Boeing for what led to the crash of the Indonesian aircraft on October 29th, 2018.  In hindsight, every flawed paradigm looks as if everybody should have recognized the flaw, but that expectation is inhuman.  There is no way for human beings to eliminate all their flawed assumptions.  But it is our duty to reveal the flawed paradigm once we see a signal that points to it.  Then we need to fix the flawed assumption, so the same mistake won’t be repeated in the future.

The general objective of the movie, like that of most public and media inquiries, is to find the ‘guilty party that is responsible for so-and-so many deaths and other damage.’  Boeing’s top management at the time was an easy target given the number of victims.  However, blaming top management for being ‘greedy’ will not prevent any safety issue in the future.  I do expect management to strive to make more money now, as well as in the future.  However, the Goal should include several necessary conditions, and refusing to risk a major disaster is one of them.  Pressing for a very ambitious, short development time, and launching a new aircraft that does not require retraining pilots who are already trained on the current models, are legitimate managerial objectives.  The flaw is not being greedy, but failing to see that the pressure might lead to cutting corners and prevent employees from raising a flag when there is a problem.  Resolving the conflict between ambitious business targets and dealing with all the safety issues is a worthy challenge that needs to be addressed.

Blaming is a natural consequence of anything that goes wrong.  It is the result of a widespread flawed paradigm, which pushes good people to conceal facts that might link them to highly undesired events.  The fear is that they will be blamed and their careers will end.  So, they do their best to avoid revealing their flawed paradigms.  The problem is: other people still use the flawed paradigm!

Let’s see what the critical flawed paradigms were that caused the Indonesian crash.  Two combined flaws led to the crash of the Indonesian plane.  A damaged sensor sent wrong data to a critical new automatic software module, called MCAS, which was designed to correct a too-high angle of attack (the nose rising too steeply).  The major technical flaw was failing to consider that if the sensor is damaged, then MCAS itself would cause a crash.  The sensors stick out of the airplane body, so hitting a balloon or a bird can destroy a sensor, and that makes the MCAS system deadly.

The second flaw, this time managerial, was deciding not to let the pilots know about the new automatic software. The result was that the Indonesian pilots couldn’t understand why the airplane was going down.  As the sensor was out of order, many alarms sounded, carrying wrong information, and the stick shaker on the captain’s side was vibrating loudly.  To recover from that state the pilots had to shut off the new system, but they didn’t know anything about MCAS and what it was supposed to do.

The reason for refraining from telling the pilots about the MCAS module was the concern that it would trigger mandatory pilot training, which would limit the sales of the new aircraft.  The underlying managerial flaw was failing to realize how that lack of knowledge could lead to a disaster.  It seems reasonable to me that management tried their best to come up with a new aircraft, with improved performance, and no need for special pilot training.  The flaw was not seeing that pilots being unaware of the MCAS module could lead to such a disaster.

Once the first crash happened, and the technical operational cause was revealed, the second managerial flaw took place.  It is almost natural after such a disaster to come up with the first possible cause that is the least damaging to management.  This time it was easy to claim that the Indonesian pilot wasn’t competent.  This is an ugly, yet widespread, paradigm of putting the blame on someone else.  However, facts coming from the black box eventually told the true story.  The role of MCAS in causing the crash was clearly established, as was the role of the pilots’ lack of any prior information about it.

The safe response to the crash should have been grounding all the 737 MAX aircraft until a fix for MCAS was ready and proven safe.  It is my hypothesis that the key management paradigm flaw, after establishing the cause of the crash, was heavily influenced by the fear of being blamed for the huge cost of grounding all the 737 MAX airplanes.  The public claim from Boeing top management was: “everything is under control”, a software fix would be implemented in six weeks, so there is no need to ground the 737 MAX airplanes.  The possibility that the same flaw in MCAS would lead to another crash was ignored in a way that can be explained only by top management being in huge fear for their careers. It doesn’t make sense that the reason for ignoring the risk was just to reduce the cost of compensating the victims by continuing to put the responsibility on the pilots.  My assumption is that the top executives of Boeing at the time were not idiots. So, something else pushed them to take the gamble of another crash.

Realizing the technical flaw forced Boeing to reveal the functionality of MCAS to all airlines and pilot unions.  This included the instruction to shut off the system when MCAS goes wrong.  At the same time, they announced that a software fix to the problem would be ready in six weeks, an announcement that was received with a lot of skepticism.  On the strength of these two developments Boeing formally refused to ground the 737 MAX aircraft.  When directly asked by a member of the Allied Pilots Association, during a visit of a group of Boeing managers (and lobbyists) to the union, the unbelievable answer was: no one has concluded that this was the sole cause of the crash!  In other words, until we have full formal proof, we prefer to continue business as usual.

Actually, the FAA, the Federal Aviation Administration, issued a report assessing that without a fix there would be a similar crash every two years!  This implies roughly a 2-4% chance that a second crash could happen within a single month!  How come the FAA allowed Boeing to keep all the aircraft flying?  Did they analyze their own behavior when the second crash occurred after five months, still without a fix for the MCAS system?

Another fact mentioned in the movie is that once the sensor is out of order and MCAS points the airplane down, the pilots have to shut off the system within 10 seconds, otherwise the airplane is doomed due to the speed of the descent!  I wonder whether this was discussed during the inquiry into the first crash.

When the second crash happened, Boeing top management went into fright mode, misreading the reality that the trust of the airlines, and of the public, in Boeing had been lost. In short: the key lessons from the crash and the after-crash pressure were not learned!  They still didn’t want to ground the airplanes, but now the airlines took the initiative and, one by one, decided to ground them.  A public investigation was initiated and, from the perspective of the Boeing management team, hell broke loose.

The key point for all management teams: 

It is unavoidable to make mistakes, even though a lot of effort should be put into trying to minimize them.  But it is UNFORGIVABLE not to update the flawed paradigms that caused the mistakes.

If that conclusion is adopted, then a systematic method for learning from unexpected events should be in place, with the objective of never repeating the same mistake.  Well, I cannot guarantee it’ll never happen, but most of the repeats can be avoided.  Actually, much more can be avoided, because once a current flawed paradigm is recognized and updated, the ramifications can be very wide.  If the flawed paradigm is discovered from a signal that, by luck, is not catastrophic, but surprising enough to initiate the learning, then huge disastrous consequences are prevented and the organization is much more secure.

It is important for everyone to identify flawed paradigms, based on certain surprising signals, and to update them.  It is also possible to learn from other people’s, or other organizations’, mistakes. I hope the key paradigm of refusing to see a problem that is already visible, and trying to hide it, is now well understood not just within Boeing, but within the top management of every organization.  I hope my article can help in coming up with the proper procedures for learning the right lessons from such events.

The other side of the coin: Amazon’s future threats

Part 2

By Henry F. Camp and Eli Schragenheim

What future threats does Amazon face?  Don’t believe they are bulletproof because of their current dominant position or an internal culture that embraces Jeff Bezos’ “Day One” philosophy, which demands that companies stay as sharp as they were when they were first founded – vulnerable, before amassing financial or political strength.  While Amazon serves its customers well, it behaves differently toward its lower-level employees, and even many of its business collaborators are less than satisfied.

As with any company that operates on a massive scale, there is significant pressure on Amazon to control wages.  This is obvious, right?  Their size, in terms of the sheer number of employees, means that paying well would come at a tremendous cost.  After all, the purpose of getting big was to gain operational efficiencies that allow Amazon to both earn high profits and offer low prices.  Given their customer orientation, this combination means Amazon feels it must look out for its customers at the expense of its employees and suppliers.

Amazon’s decisive competitive edge relies on efficiencies.  The company’s approach to achieving them is multidimensional.  They work to automate wherever possible, so they require fewer employees and gain speed and delivery accuracy.  They also push back on their suppliers, making them share the cost of providing logistics.  More on this later.

When you employ 1.5 million people, assuming 2,000 hours per employee per year, adding one dollar to hourly compensation increases costs by $3 billion per year, a non-trivial consideration.  That may be exactly what they had to do to maintain operations during fiscal 2021, the year of the casual COVID-worker.
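For readers who want the arithmetic spelled out, here it is as a simple equation (the figures are the ones quoted above, not independently verified):

```latex
% Wage arithmetic from the paragraph above
\[
1{,}500{,}000\ \text{employees} \times 2{,}000\ \tfrac{\text{hours}}{\text{year}} \times \$1/\text{hour}
\;=\; \$3\ \text{billion per year}
\]
```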

Nevertheless, Amazon’s cash flow increased by $17 billion to $60 billion in fiscal 2021.  To put that number into perspective, it is closing in on Microsoft, $87 billion, and Apple, $120 billion, the two most profitable companies in the world, outside of government owned entities.

Now, having high profits is not intrinsically bad.  Customers, employees, suppliers and governments alike benefit enormously from Amazon’s success.  Nor do high profits oblige the companies that earn the most to do more for their employees than any other company does.  Whether compensation is fair or unfair is judged in comparison to what other companies pay for equivalent work in an equivalent context.

By context, we mean culture, as well as the physical environment and relative safety.  People who work in coal mines may demand higher compensation than those who sit all day in comfortable office chairs.  A wonderful corporate culture or purpose may attract some employees, even if the pay is not up to par, such as missionary work.  Is the workplace tough or even cruel?  Bad cultures typically result in high turnover and quitting-in-place, where workers accept paychecks for doing as little as possible.  Lastly, the degree to which a person’s work relies on their ability to plan out into the future determines what Elliott Jaques called felt-fair pay.  A PhD does not expect to be paid more than their coworkers for flipping burgers at McDonalds.

Back to Amazon in particular, dozens of articles going back for years have decried their treatment of front-line employees, claiming heavy workloads and loss of autonomy, down to timed restroom breaks.  The question is, are these factors a potential risk to Amazon’s future?

Amazon is a system and a very efficient one.  A system that largely took in stride a massive increase in volume as a result of the totally unforeseen COVID pandemic.  They provided the world with goods when we were unable or unwilling to go out shopping in our neighborhood brick-and-mortar stores.

They have dialed in exactly what they expect of employees to gain this efficiency.  The hope is that their customers are the beneficiaries.  Meanwhile Amazon’s pay rates are not the lowest.  So, where is the risk?

From a TOC point of view, it is a local/global conflict.  The efficiencies Amazon measures their operators against are local, not global.  The reason they want higher efficiencies is to become more effective globally, across their enormous company.  It all seems to be working far better than we might have reasonably predicted.  So, again, what risk?

Systems Theory points us in the direction of an answer.  It informs us that the sum of local optima does not lead to a global optimum.  This conclusion is in sync with TOC, which implores us to focus on the system’s constraint.  The implication is that non-constraint resources must have some slack in their scheduled workloads.  The many complaints about Amazon as an employer are that it seems to focus on “sweating the assets,” a phrase from John Seddon, a British consultant to service industries.  By assets he means the employees themselves.

Let’s start from Amazon’s point of view.  They have scale.  This means they can apply division of labor and workloads to an extent seldom seen.  The scale of their operations requires many distribution and sorting centers, which are generally larger than one million square feet.  They have subordinated physical centralization in favor of speed of delivery to their customers.  A high-level manager runs each of these facilities and is scored on its efficiencies.  Getting more work out of the staff at each facility drives great customer service and validates what a person can produce.  The latter allows Amazon to know when to hire and how many to hire, which leads to low excess capacity.  All of this results in the incredibly high profits mentioned earlier.

One of us recently spoke to the manager of one of these distribution centers.  He exposed us to a problem with these measurements.  They are not applied consistently.  Safety and external events blow the circuit breakers on the metrics machine that is Amazon.

For example, during thunderstorms outside a DC (and if the buildings get any bigger, there may soon be weather inside) drivers are prohibited, for their own safety, from leaving their trucks to dash into the building.  This prevents their trucks from being unloaded.  So, if a storm persists, as they often do in the Southern United States, the DC can become both starved of inputs and constipated for lack of outputs. 

The newest and most efficient DCs employ a direct flow model where receipts that are immediately required flow directly to shipping without having to first be put away (buffered) and then picked to be shipped.  Efficient – skipping steps – resulting in faster shipments out to customers, particularly of those items that have been on backorder.  However, during a long-lasting thunderstorm, if drivers can’t deliver their loads, the DC grinds to a stop after just a few hours.  (Interestingly, the older, less efficient put-away-first-then-pick designs remain resilient for much longer when such external events occur.)  When a stoppage occurs, the efficiency metrics are ignored, as it is not the fault of the staff of the distribution center; the stoppage is caused by an external event and a built-in desire for people’s safety.

What is really interesting is that behaviors change as soon as an external event is declared.  The sub-system resets.  Training that was needed but not prescribed takes place.  Equipment that needed fixing is repaired.  Preventative maintenance is accomplished.  It seems that efficiency measurements prohibit management from doing what they instinctively understand must be done to be effective.  Without an external event to turn off the efficiency spotlight, a manager’s efforts to improve future productivity (training, coaching, maintenance, repairs, etc.) make that manager look bad in the short term.  We imagine this creates a pressure-cooker workplace – damned if I do, damned if I don’t.

Furthermore, there is no metric that scores how quickly the facility recovers from the externally triggered reset.  As such, what often happens is that the managers of the sub-systems prolong an interrupted condition specifically to be able to afford to work on what they perceive as necessary, over and above what is prescribed by upper management.  In other words, managers must cheat to do the right things.  Eli Goldratt described this as his fifth Engine of Disharmony: gaps between responsibility and authority. You are responsible for accomplishing something both in the short term and over the long term, but you are not given the authority to undertake the actions that are necessary to meet your responsibilities.

Let’s investigate what this revelation means to employees.  Many psychological studies, popularized by Dan Pink, suggest that there are three main drivers of motivation: autonomy, mastery and purpose.  The last two are certainly achievable at Amazon, as it is today.  Our question is about the opportunity for autonomy, other than when there is an external interruption of Amazon’s prescribed processes.

The conflict is between allowing people to realize their very human need for autonomy versus expecting them to be part of a well-oiled machine.  At this point, allow us to broaden our discussion beyond employees to include suppliers, service providers and other stakeholders.

Earlier, we promised to return to the sellers of goods through Amazon.  The desire to provide top value to customers does not extend to suppliers.  Amazon provides these owners of the products sold on its site a world-class operating system, but it also charges high fees.  Why?  These fees offset Amazon’s expenses.  Many Amazon partners claim that the giant dictates to its ‘partners’ and forces them to largely finance its efficiency machine.

Let us briefly explain the diagram.  For Amazon to succeed, it must be both effective at doing what the market expects and do so efficiently enough to earn a vast profit.  To gain that efficiency, they must prescribe precisely what they understand needs to be done and hold others, employees and providers, accountable for delivering.  On the other hand, in order to keep their enormous system functioning now as well as in the future, they rely on the continuing support of their stakeholders.  To maintain that support, they must honor the needs of those stakeholders to express themselves, especially when they identify and desire to implement good ideas that improve Amazon’s effectiveness.  But how can they give stakeholders autonomy, while ensuring the flow of work for as little money as possible?

We know from Eli Goldratt’s pillars of TOC that every conflict can be eliminated.  Our intuition suggests that the D action creates a legitimate risk to the C need.  However, is there an inherent jeopardy to efficiency if they seek collaboration and allow autonomy?  With an injection of seeking advice from staff, suppliers, providers and acting on it in meaningful ways, the cloud falls apart.  The D’ action is sufficient to both be globally effective (C) and locally efficient (B).

Current relationships with its workforce and suppliers cannot be fully described as win-lose (there are other places selling your wares), but it would be correct to claim that they are BIG-WIN/small-win, with Amazon always getting the bigger win.  The more their internal and external constituents accumulate objections, and the longer those objections remain unresolved, the greater the resultant animosity.  The threatening situation is that complaints of unethical behavior are spreading through the media and might erode their customers’ trust, which seems to us to be Amazon’s biggest strategic asset.  Should public opinion swing against them, the government may be forced to intervene.  After all, politicians love to pander to their constituents.  Many other giants have fallen.  We little folk love to tell the stories.  Antitrust is a real risk to a company of Amazon’s scale.  That should be warning enough.

Amazon’s Decisive Competitive Edge

Being able to keep it, and facing some future threats

Part 1

By Eli Schragenheim and Henry F. Camp.

Having a great idea, which would generate immense value to many, is a necessary starting point for building a big, successful business.  But it is far from sufficient.  Eventually, in order to truly succeed in a consistent way, there is a need to make sure that the whole set of necessary and sufficient conditions applies.  The idea will fail if even one necessary condition doesn’t apply.  Thus, it is usually easier to learn from failures than from big successes. Yet learning from successes how to predict the perceived value of a new offering to the market, and how to connect it to the strategy, can be highly valuable.

In this article, we try to better understand some critical points in the success of Amazon.  The benefit is being able to better assess the potential of ideas, and hopefully also to point to some of the other elements necessary for ensuring success. We also want to check how the insights and tools of the Theory of Constraints (TOC) support shaping such ideas and identifying those other necessary elements for success.

An important tool, developed by Dr. Eliyahu Goldratt, is the Six Questions, originally for assessing the potential value of new technology.  We use them to assess the potential value of new ideas for products or services.

The Six Questions are:

  1. What is the power of the new technology?
  2. What current limitation or barrier does the new technology eliminate or vastly reduce?
  3. What are the current usage rules, patterns and behaviors that bypass the limitation?
  4. What rules, patterns and behaviors need to be changed to get the benefits of the new technology?
  5. What is the application of the new technology that will enable the above change without causing resistance?
  6. How to build, capitalize and sustain the business?

Another important insight by Goldratt is the concept of the ‘decisive competitive edge,’ which he defined as answering a need of a big enough market to an extent that no significant competitor can, while being on par with the competition on the other important aspects. To maintain the decisive competitive edge, delivering the value must be difficult for potential competitors to imitate or just counterintuitive to their business perception.

Jeff Bezos had a long-term, very ambitious vision of becoming BIG, and he made it happen.  Offering the market a decisive competitive edge is initially a must, in order to grow.  Today, Amazon is so big that its size itself is a decisive competitive edge, because it answers the need for security both for the payment and for receiving the goods on time, which are significant concerns of buyers from any e-tailer.  This decisive competitive edge is on top of the usual advantage of the economy of scale.

So, in retrospect, what was the secret that made Amazon such a giant?

We have to go back in time, visualize the decisions taken then, and speculate about what Bezos saw that others didn’t.  The frequent mantra all Amazon key people cite, “we focus on the customers”, is hardly an insight.  Too many others, much less successful than Bezos, thought they did exactly that.

A non-trivial question is: how should any organization figure out what its clients want?  Is asking the clients, either directly or through a market survey, the best way?  Do clients know how to answer questions about something they have never encountered?  Moreover, it is obvious that to stay well ahead of all competitors one needs to develop a somewhat different approach and predict what potential clients want before they know they want it.  How can that be done, and what is the risk of being grossly mistaken?

Back in 1995, when Amazon started to sell books through the Internet, not too many people felt it answered a need.  Well, at least not those who lived fairly close to a Barnes and Noble, or Borders, store. So, what’s the big deal? 

This is what made the decision so far-sighted.  Books are easy to purchase through the Internet.  You see clearly enough what’s for sale; the possibility of buying the wrong item is relatively low.  One advantage of a virtual store is the ability to order whenever one finds information on an interesting book.  The chance of finding the specific book at a physical store is definitely less than 100%, but if the virtual store is well-managed, the chance is quite high. 

This ability to quickly trigger a purchase, without having to invest time and effort, is the key limitation vastly reduced by the virtual store.  The technology of providing the 1-Click method for quickly making a purchase was certainly an improvement.

However, answering a need by reducing the current limitation is not enough.  The third and fourth questions must be answered as well.  At the time, the norm was to drive to a store and buy, or call the store over the phone, when such a service was offered.  Speaking over the phone is bound to result in mistakes – misunderstanding the title, author or the shipping address.  A way to bypass the limitation is to write down a shopping list to be used when someone from the family goes to the mall.

The relevancy of Goldratt’s fourth question is revealed once the idea of the virtual bookstore is understood.  In 1995, when Amazon started to sell books through the Internet, not everyone had computer access and knowledge.  The phones of that time didn’t have access to the Internet.  So, the offer and the reduced limitation were directed at those people with computers, preferably at home.  Even then, the market of people that had easy access to computers was big enough for a good start.  Whenever there was a wish to buy a book, the user had to enter the website, find the specific book and complete the sale by providing valuable personal information, like name, address and credit card.  This is a frightening procedure for many people even nowadays, definitely back in 1995 when Amazon started selling books.

Security of financial data is a critical concern that could easily be a source of resistance, which is what the fifth question is all about.  Another concern is the assurance that the order will arrive on time and in good shape.  To reduce doubts and resistance, the whole mechanism of accepting orders must be easy and super friendly, plus an effective logistics system must be established that gets a copy of the ordered book(s), ships quickly and securely to the user.

Back in 1995 for Amazon, the answer to the fifth question was: We must build trust with potential customers.  The vast majority of the resistance to purchasing through the Internet, even today, is based on mistrust!  But once trust is built and maintained – it opens the door for more and more new offerings.

Selling through the Internet requires excellent operations.  Technology can attract early adopters, but the test is the ability to deliver.  The choice of selling books was right from the perspective of clients looking for professional books, which means books that are far from being bestsellers, books that are not easy to find even at the largest bookstores. 

Maintaining inventory is a challenge.  However, it is not mandatory to actually carry every book in the catalogue at the warehouse, if the supply from the publisher is fast and reliable.  It is best to keep most of the desired books in stock and to be able to replenish very quickly.  Ordering very small quantities from a publisher was a challenge, as the low-price/big-batch culture was strong in book printing.  Of course, once the store is huge, certainly like Amazon today, it can dictate replenishment terms.  The point is – it is tough for a young virtual shop to ensure the required agility from publishers.

Another challenge is logistics – shipping to the clients.  The challenge for a small company is huge, even in a geographically small area, and Jeff Bezos was insistent on shipping to any point on the globe.  Many virtual shops use the logistics services of third-party partners, which serve a variety of similar shops.  The difficulty is that, from the customers’ perspective, the shop is responsible, and when problems occur trust in the shop is eroded.

Goldratt’s sixth question raises the issue of strategy, especially facing three key needs: building the operational capabilities, building the marketing and sales capabilities, and keeping operational performance perfect when sales surge.  Evaluating the strategy to reach such a vision, it becomes clear that the only way to truly succeed is to become BIG, so that technology, operations and the business side are managed in perfect synchronization; then, and only then, can the company become truly profitable.  It is not surprising at all that it took Amazon six years to become even somewhat profitable. On the way to becoming big, losing money served to deter potential competitors from taking the same route of accelerated growth.  This left Amazon as the only truly big e-tailer until the emergence of Alibaba, making its competitive edge truly decisive.

Expanding the offering to other products, like CDs and video games, was quite natural.  The characteristics were similar to books: standard products that don’t allow much room for mistakes, plus the advantage over physical stores due to the huge number of items that even very large stores cannot always have on hand.  The decisive competitive edge of a truly reliable virtual store, providing an unbelievably wide choice, was kept intact.  

Every decisive competitive edge is limited by the time it takes competitors to copy the idea and even improve on it.  Being the first to answer a need might carry some value for a while, but competition will eventually catch up.

The introduction of Amazon Prime, in 2005, added substance to the operational commitment of Amazon and can be viewed as an additional decisive competitive edge and a barrier to competition.  The reduced limitation (Question 2) for customers who consider buying frequently from Amazon is getting any item within two days.  The extra commitment comes for an annual subscription price, in return for which Amazon provides free two-day delivery and lower charges when the customer needs next-day shipment.

Amazon Prime cannot be delivered worldwide.  While Amazon serves customers all over the globe, the commitment to two-day delivery was at first confined to the US.  Later, Amazon extended the Prime offer to many more countries.  Amazon’s commitment to fast delivery for Prime customers has since increased and, in some places, it is now two hours.

Diving into the third question reveals that, without the Prime service, customers prefer to reduce their shipping costs, which are a deterrent to purchasing online, by combining several items into one order. But this sometimes delays the purchase until the customer has enough items for one order.

The advantage of ordering whenever you like and getting it fast, without extra cost, provides a considerable benefit for customers and a different benefit for Amazon.  From the customer’s perspective there is no financial advantage in combining different purchases, so every item can be ordered in isolation.  From Amazon’s perspective, since other virtual shops don’t offer such a fast response for free (beyond the annual subscription fee), many items that customers might have preferred to purchase elsewhere go to Amazon.

The cost of Prime membership is probably the only reason to resist the offer.  Once subscribed, customers tend to habitually buy from Amazon, rather than from another, more specialized virtual shop, thus significantly capitalizing on the decisive competitive edge.  Amazon’s technology and its excellent delivery performance make sure it can sustain the growth, which completes the answer to the sixth question.

Another key decisive competitive edge launched by Amazon was the appearance of the Kindle and the related popularization of eBooks.  Reading books from the screen of a computer was known long before.   Theoretically eBooks offer a decisive competitive edge by allowing the storage of a huge number of books without the space that printed books require.  However, reading a book from a laptop is not particularly engaging.  Developing a dedicated light electronic book reader reduced the limitation of having to sit at a computer, and the special E-Ink developed for the Kindle allowed reading even in sunlight.  This offered a somewhat similar experience to reading a printed book, sitting at any convenient spot, with a special advantage for comfortable reading on a flight.  The Kindle had to compete with the iPad that came about two years later.  The Kindle deserves a more detailed value analysis for itself.  Strategically, it established Amazon’s status as the one place to turn for books, printed or in Kindle format.

On top of becoming a giant e-tailer, Amazon looked to other markets.  One of the most valuable of Goldratt’s insights was: “segment your market – don’t segment your resources.”  The logic is that capable resources with excess capacity can serve more than one market.  Furthermore, from an uncertainty management standpoint, diversifying markets adds stability.  Amazon developed as a technological enterprise, whose business is aided by a state-of-the-art IT system.  So, why not capitalize on it and offer valuable IT services to other businesses as well as governments?

Amazon identified two key needs, not addressed at the time, for a wide spectrum of organizations.  First, the need to store huge and consistently growing amounts of data and, secondly, the need for more computing power.  Both gave rise to the concept of the ‘cloud’ as a major new IT service.  The fear of cyber-attacks, which have become a serious threat, and the ability to provide quick access to data from any point on the globe, added substance to the new offering.  The need for computing power, without having to rely on a growing number of private servers, was a need seeking a safe and stable answer.

Goldratt’s first four questions have straightforward answers when a giant like Amazon offers cloud services, mainly storage and computing power.  The fifth question, looking for possible resistance, raises the issue of safety from two different perspectives.  One is the dependency on the security service of a third party, big as it is, to keep the data safe and prevent it from being hacked by professional hackers, for whom the cloud is an especially desirable target.  The other angle is security from Amazon itself, which surely has the technical ability to penetrate the stored data of its competitors when it lies on its own servers.

Amazon may make much more money from its Amazon Web Services (AWS), but its reputation lies with being a giant e-tailer, where it has only Alibaba as a huge competitor.  Practically, it seems to us that Amazon and Alibaba are not truly head-to-head competitors.  However, when it comes to AWS, two other giants, Google and Microsoft, are direct competitors, offering similar answers to the same need, and a whole array of smaller, yet big enough, cloud services providers have appeared as well.

Revealing the potential value of combining the Theory of Constraints (TOC) with Artificial Intelligence (AI)

AI, as a generic name for computerized tools capable of learning from past data in order to take independent decisions, or to support decisions, is becoming the key buzzword for the future of technology changing the world.

However, there are quite a lot of concerns that AI might not just improve our lives, but also cause considerable damage.

I strongly believe that the Theory of Constraints (TOC) brings rational judgement and a tendency to look for the inherent simplicity, revealing the true potential of seemingly complex and uncertain situations.  Can the qualities of TOC significantly improve the potential of AI to bring value to the management of organizations?

The emphasis of TOC on identifying the right focus for achieving the best performance, which also means what not to focus on, is based on recognizing the capacity limitation of our mind.

Can AI significantly help in better exploiting the limited capacity of the human mind?

All human beings should guide their minds to focus on what truly matters.  In managing organizations, achieving more of the GOAL, now and in the future, is the ultimate criterion for deciding what to focus on right now.  No matter how cleverly we identify what truly matters, some important matters might be missed.  One of the neglected areas in TOC, which no manager can afford to ignore, is identifying emerging threats as early as possible.

Computers are also limited in their ability to process huge amounts of data – but their limit is way above a human’s, and that gap keeps widening.  So, can we hope that, while the top objective is defined by the human manager, clever use of software, particularly AI, could constantly check the validity of the current focus and warn whenever a new critical issue emerges?

AI is widely used to replace human beings in simple, straightforward actions, like using robots in large distribution centers.  Driving cars without a human driver is a more ambitious target, but it is also something that the vast majority of human beings do well (when they are not under the influence).   The current managerial emphasis in the use of AI is on reducing the ongoing cost of employing workers for simple enough jobs.  It would be good to show that AI could also support substantial growth of throughput and even enhance strategic decision-making.

The special power of AI is its ability to learn from huge amounts of past data.  This means it can also be trained to come up with critical decision-support information, based on observed correlations between variables, noting trends and sudden changes in the behavior of market demand, suppliers, and flow blockages.  So, instead of making the AI module the maker of relatively simple decisions, it can be used for improving the performance of organizations.  A natural first target is improving the forecasting algorithms, highlighting also the reasonable possible spread. The ability to identify correlations could reveal dependencies between various SKUs, and that would significantly improve the forecasts.  The more challenging tasks are to provide information on the potential impact of price changes, and of other critical characteristics of the offerings, on the market.  Another worthy challenge is to highlight irregularities that require immediate management attention.  From the TOC BOK perspective it would be valuable to evaluate the effectiveness of the buffers better than is done today.  Working with AI could indirectly be used to improve the intuition, and even the thinking, of open-minded managers!  If the human manager is able to use AI to validate, or invalidate, assumptions and hypotheses, this will have a considerable impact on the quality of management’s evaluation of the ramifications of changes.
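To make the idea of using observed correlations between SKUs a little more concrete, here is a minimal sketch in Python.  It is only an illustration under assumed, made-up sales data and an arbitrary threshold, not a description of any specific AI product or algorithm:

```python
# Minimal sketch: flag a dependency between two hypothetical SKUs so they can
# be forecast jointly rather than in isolation. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
shared_driver = rng.poisson(100, size=52)              # assumed common demand driver
sku_a = shared_driver + rng.poisson(10, size=52)        # weekly sales of SKU A (made up)
sku_b = 0.8 * shared_driver + rng.poisson(8, size=52)   # weekly sales of SKU B (made up)

corr = np.corrcoef(sku_a, sku_b)[0, 1]                  # observed correlation
if corr > 0.7:                                          # threshold is an arbitrary assumption
    print(f"SKUs look dependent (r = {corr:.2f}); consider forecasting them together.")
```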

One important downside of AI, especially from a TOC perspective, is not considering logical cause-and-effect.  Being able to check cause-and-effect hypotheses is a key mission.  Another downside is the dependency on the training data, which could lead to erroneous results.  A key challenge of implementing AI is finding the way to reduce the probability of a significant mistake and be able to spot such a mistake through cause-and-effect analysis.

The process of using AI most effectively starts with the GOAL, then derives the key elements that impact it, and then deduces worthy and valuable objectives.  The list of valuable objectives, which would enhance the performance of the organization, should be analyzed to find out whether AI, possibly together with other software modules, can overcome the obstacles that currently prevent the achievement of these objectives.

A key generic idea is to recognize the potential of AI providing vital information, or even new observed insights, as an integral part of the human decision-making process. 

Setting the worthy objectives, and guiding the AI to bring the necessary supporting information, is where TOC wisdom can be so useful for drawing the most from AI.  Suddenly the title of “The Haystack Syndrome – Sifting Information out of the Data Ocean” takes on wider meaning, now that the data ocean has grown by several orders of magnitude, but so has the technology for making the best sense of it.

While computers in general, and AI in particular, are vastly superior in handling complexity, meaning many different variables that interact with each other, the tougher challenge is to face uncertainty, both the ‘noise’, the inherent common and expected variations, and the risks, which are rarer, yet highly damaging.

Here comes the opportunity of using the emerging power of AI, combined with the TOC wisdom, to support the assessment of future moves. 

Guiding the AI to observe predicted trends in the market, especially the impact of external forces like changes in the economy, and even to predict the effect of increasing or reducing prices, could yield major value to the decision makers.  Much of the relevant data required for such missions lies in external databases.  It is possible that services for obtaining the data from various external sources would be required.  It would be good if cooperation between competitors, allowing AI analysis of their combined data, were achieved and carried out by a neutral third party.  Such cooperation should ensure that no internal data of one company will ever leak to another company.  But the outcome of the analysis, highlighting issues like price sensitivity, the impact of inflation, changes in government regulations, and many others, could yield knowledge that is currently hidden, leaving the decisions to be based solely on intuition.  The key disadvantage of human intuition is being slow to adapt to changes.  Feeding the AI with a huge number of similar past changes makes it much better at predicting the outcomes, as long as there is enough relevant data that wasn’t made irrelevant by the change.

My current thinking about the effective use of AI for managerial decision-making, including the critical question ‘what to focus on’, is that there are two focused categories of targets for TOC-AI processes that would bring huge value to managing any organization:

  1. Sensing the market demand.  This includes forecasting current trends, and predicting the potential outcomes of certain moves and changes.  Plus giving a good idea of the impact of price, the economy, and the variety of choice.
  2. Pointing to an emerging threat.  The TOC wisdom could easily yield a list of potential threats that management should be aware of as early as possible.  There is a need to identify signals, observed in the recent past, that testify that a certain threat is developing.  Giving enough examples to the AI could trigger the ongoing search for enough evidence.
    • For instance, when an important supplier starts to behave erratically, it could point to problems with its management, even the possibility of bankruptcy, or that our image as a client, in that supplier’s eyes, is going down.  Similarly, a change in the quantities, and/or frequency, of the purchasing orders of a big client could signal a change in the client’s purchasing policies (see the sketch after this list).
    • One of the problems of complex environments is the accuracy of the data.  If the AI module intentionally looks for outcomes that don’t fit the data, then notifying the user to check specific data items could be meaningful.
    • An existing example is monitoring the need for maintenance of machines.  This is an Industry 4.0 related feature that identifies when the current pace and quality of the machine deviates from the norm, before it becomes critical, leaving enough time to plan the necessary maintenance activities.
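As referenced above, here is a minimal, hypothetical sketch of such a signal: flagging a supplier whose recent delivery lead times fall far outside its own historical ‘noise’.  The data, the three-standard-deviation rule, and the “two outliers” trigger are all illustrative assumptions, not a prescribed TOC or AI procedure:

```python
# Minimal sketch: detect erratic supplier behavior from lead-time history.
import statistics

history = [7, 8, 7, 9, 8, 7, 8, 9, 7, 8]   # past lead times in days (made up)
recent = [13, 15, 6, 14]                    # last few deliveries (made up)

mean = statistics.mean(history)
sd = statistics.stdev(history)

outliers = [x for x in recent if abs(x - mean) > 3 * sd]
if len(outliers) >= 2:                      # arbitrary rule: two strong deviations
    print("Supplier is behaving erratically - investigate before the flow is hurt.")
```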

Critical questions for continued discussion:

  • Are there more generic organizational topics where AI, when guided by TOC, can contribute to management?
  • Can we come to a generic set of insights on how TOC can impact the objectives, training, and the actual use of AI?
    • For instance, guiding the AI to thoroughly check the quality of the capacity consumption data of the constraint and the few other critical resources. Comparing them to the capacity requirements of the past and incoming demand could help in determining whether the available protective capacity is adequate.
  • How can we make it happen?
    • And what training should the people using the AI module go through?

Between Sophisticated and Simple Solutions and the Role of Software

Smart people like to exploit their special capability by finding sophisticated solutions to seemingly complex problems.  Software allows even more sophistication and with better control. 

However, two immediate concerns are raised.  One is whether the impact on the actual results is significant enough to care about.  In other words: do the results justify the extra effort?  The other is whether such a solution might fail in reality.  In other words, what is the risk of getting inferior results?

Simple solutions focus on only a few variables, use much less data, and their inherent logic can be well understood by all the involved players. However, simple solutions are not optimal, meaning more could have been achieved, at least theoretically.

Here is the basic conflict:

Until recently we could argue that the simple solutions have an edge, because of three common problems of sophistication.

  1. Inaccurate data could easily mess up the optimal solution, as most sophisticated solutions are sensitive to the exact values of many variables.
  2. People executing the solution might misunderstand the logic and make major mistakes, which prevent achieving the expected excellent results.
  3. Any flawed basic assumption behind the optimal solution disrupts the results.
    • For instance, assuming certain variables, like sales of an item at different locations, are stochastically independent.
    • When the solution is based on software, then bugs might easily disrupt the results.  The more sophisticated the algorithm, the higher the chance of bugs that aren’t easily identified.

The recent penetration of new technologies might push back towards sophistication.  Digitization of the flow of materials through shop-floors and warehouses, including RFID, has advanced the accuracy of much of the data.  Artificial Intelligence (AI), coupled with Big Data, is able to consider the combined impact of many variables and also take into account dependencies and newly discovered correlations.

What are the limitations of sophisticated automation?

Two different types of potential causes for failures:

  1. Flawed results due to problems of the sophisticated algorithm:
    • Missing information on matters that have clear impact.
      • Like a change in the regulations, a new competitor etc.
        • In other words, information that humans are naturally aware of, but are not included in the digital databases.
    • Flawed assumptions, especially regarding the modelling of reality, and software bugs.  This includes assessments of the behavior of the uncertainty and of the relevance of past data to the current state.
  2. Misunderstanding the full objective of top management.  Human beings have emotions, desires and values.  There could be results which are in line with the formal objective function, but violate certain key values, like being fair and honest to clients and suppliers.  These values are hard to code.

The human mind operates differently from computers, leading to inconsistencies in evaluating what a good solution is.

The human mind uses cause-and-effect logic for predicting the future, using informal and intuitive information.  On one hand intuitive information might be wrong.  On the other hand, ignoring clear causality and truly relevant information could definitely yield inferior results.

Artificial Intelligence uses statistical tools to identify correlations between different variables.  But it refrains from assuming causality, and thus its predictions are often limited to existing data and fail to consider recent changes that have no precedent in the past.  The only way to predict the ramifications of such a new change is by cause-and-effect.

Human beings are limited in their ability to perform many calculations.  Human capacity also limits the number of different topics the mind can deal with in a given period of time.

Another aspect to consider is the impact of uncertainty.  The common approach to uncertainty is that it adds considerable complexity to the ability to predict the future based on what is known from the past. 

Uncertainty significantly limits our ability to predict anything that lies within the ‘noise’.  The noise can be described as the “common and expected uncertainty”, meaning the combined variability of all the relevant variables, focusing on the area of the vast majority of the cases (say 90% of the results), ignoring rare cases.  So, any outcome that falls within the ‘noise’ should not come as a surprise.  As long as the ‘noise’ stays at about the same level, it represents a limit to the ability to predict the future.  But that is already more than nothing, as it is possible to outline the boundaries of the noise, and predictions that are beyond the noise should be the focus for analysis and decisions.

Goldratt said: “Don’t optimize within the noise!”
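As a minimal sketch of how this principle can be made operational, the boundaries of the noise can be estimated from past results, and only outcomes that fall outside them are treated as signals.  The data and the 90% band below are illustrative assumptions, not a prescribed procedure:

```python
# Minimal sketch: estimate the 'noise' band from past results and only react
# to outcomes that fall outside it. All data here is simulated.
import numpy as np

past_weekly_sales = np.random.default_rng(1).normal(1000, 120, size=104)
low, high = np.percentile(past_weekly_sales, [5, 95])   # ~90% noise band (assumed choice)

def assess(outcome: float) -> str:
    if low <= outcome <= high:
        return "within the noise - no conclusion, no reaction"
    return "outside the noise - worth analysis and possibly a decision"

print(f"noise band: {low:.0f} to {high:.0f}")
print(assess(1050))   # likely inside the band
print(assess(1600))   # clearly outside the band
```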

Good statistical analysis of all the known contributors to the noise might be able to reduce the noise.  According to Goldratt this is often a poor exploitation of management time.  First, because in most cases the reduction in the noise is relatively small, while requiring effort to look for the additional required data.  Secondly, it takes time to prove that the reduction in the noise is real.  And thirdly, and most importantly, there are other changes that could improve performance well beyond the existing noise.

A potential failing of statistical analyses is considering past data that are no longer relevant due to a major change that impacts the relevant economy.  One can wonder whether forecasts that consider data before Covid-19 have any relevance to the future after Covid.

The realization that a true improvement in performance should be far above the noise greatly simplifies the natural complexity, and could lead to effective simple solutions that are highly adaptive to significant changes beyond the natural noise.

Demonstrating the generic problem:

Inventory management is a critical element for supply chains.  Forecasting the demand for every specific item at every specific location is quite challenging.  Human intuition might not be good enough.  The current practice is to hold a period of time of inventory, like two weeks, of item X at location Y, where the quantity of “two weeks of inventory” is determined either through a forecast or by calculating an average daily sale.

Now, with much more sophisticated AI it is assumed that it is possible to accurately forecast the demand and align it with the supply time, including the fluctuations in supply.  However, a forecast is never one precise number, and neither is the supply time.  Every forecast is a stochastic prediction, meaning the outcome could vary.  Having a more accurate forecast means that the spread of the likely results is narrower than for a less accurate forecast.  The sophisticated solution could try to assess the damage of shortages versus surpluses; however, part of the information required for such an assessment might not be in the available data.  For instance, the significant damage of a shortage is often the negative response of the customers.  It might be possible to track actual loss of sales due to shortages, but it is challenging to assess the future behavior of disappointed customers.

The simpler key TOC insight for inventory management is to replenish as fast as possible.  This recognition means narrowing down the forecasting horizon.  Actually, TOC assumes, as an initial good-enough forecast, no change in the demand for that horizon, so replenishing what was sold yesterday is good enough. 

Another key insight is to base the target inventory not just on the on-hand stock, but to include the inventory that is already in the pipeline.  This is a more practical definition, as it represents the current commitment for holding inventory, and it makes it straight-forward to keep the target level intact.

Defining the target inventory to include both on-hand and pipeline stock makes it possible to issue signals reflecting the current status of the stock at the location.  Normally we’d expect anything between one-third and two-thirds of the target level to be available on-hand to represent the “about right” status of inventory, knowing the rest is somewhere on the way.  When less than one-third is on-hand, the stock is at risk, and actions to expedite the shipments are required.  It is the duty of the human manager to evaluate the situation and find the best way to respond to it.  Such an occurrence also triggers an evaluation of whether the target level is too low and needs to be increased.  Generally speaking, target levels should be stable most of the time.  Frequent re-forecasting usually comes up with only minor changes.
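The two insights above can be summarized in a short sketch.  The target level, the sales figure and the wording of the zone messages are illustrative assumptions; only the replenish-what-was-consumed rule and the one-third / two-thirds zones come from the text:

```python
# Minimal sketch: TOC-style replenishment and buffer-status signal.
TARGET_LEVEL = 300   # assumed target (on-hand + pipeline) for item X at location Y

def replenishment_order(sold_yesterday: int) -> int:
    # Replenish what was consumed, keeping the target level intact.
    return sold_yesterday

def buffer_status(on_hand: int) -> str:
    # Judge the stock by how much of the target level is available on-hand.
    if on_hand < TARGET_LEVEL / 3:
        return "at risk - expedite, and evaluate whether the target level is too low"
    if on_hand <= 2 * TARGET_LEVEL / 3:
        return "about right - the rest of the target is somewhere in the pipeline"
    return "more than two-thirds on-hand - possibly the target is too high (my assumption)"

print(replenishment_order(sold_yesterday=25))   # order 25 units back
print(buffer_status(on_hand=80))                # 80 < 100 -> "at risk"
```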

The question is: as the target level already includes safety, what is the rationale for introducing frequent changes of 1%-10% to the target level, when such changes are just a reflection of the regular noise, and probably not of a real change in the demand?

A sophisticated solution, without the wisdom of the key insights, would try to assess the two uncertain situations: how much demand might show up in the short term, and whether the on-hand plus whatever is on the way will arrive on time. It would also estimate whether the anticipated results would fall within the required service level.

Service level is an artificial and misleading concept.  Try telling customers that their delivery fell within the 3-5% of cases that the service level doesn't cover.  Customers can understand that rare cases happen, but then they like to hear the story that justifies the failure.  It is also practically impossible to target a given service level, say 95%, because even the most sophisticated statistical analysis cannot truly capture the underlying stochastic function.  Assuming the spread of the combined delivery performance follows the Normal distribution is convenient, but wrong.

The direction of the solution follows from two recognitions: humans need to understand the logic of the solution and be able to input important information that isn't contained in any database, while computers are superior at following well-built algorithms and carrying out huge numbers of calculations.  The solution has to include two elements: simple, powerful, agreed-upon logic enabled by semi-sophisticated software, coupled with interaction with the human manager.  Definitely not an easy, straightforward mission, but an ambitious, yet doable, challenge.

Forecasts – the Need, the Great Damage, and Using it Right

Forecasting means predicting the future based on data and knowledge gained in the past.

According to the above definition every single decision we make depends on a forecast.  This is definitely true for every managerial decision.

The problem with every prediction is that it is never certain

Treating forecasting as a prophet telling us the future is a huge mistake.  So, we need a forecast that would present what the future might look like, including what is more likely to happen, and what is somewhat less likely, but still possible.

Math teaches us that describing any uncertain behavior requires, at the very least, two different descriptors/parameters: a central value, like 'the expected value', and another one that describes the expected deviation from that average.  This leads to the definition of a 'confidence interval' within which the more likely results lie.  Any sound decision has to consider a range of possible results.

While there are several ways to obtain effective forecasts, which could be used for superior decision-making, the real generic problem is the misuse of forecasts.

There are two major mistakes in using forecasts:

  1. Using one-number forecasts.
  2. Using the wrong forecasting horizon or level of detail.  The generic point is that the exact type of the forecast has to fit the decision that would take the forecast as a critical information input. A similar mistake is using the wrong parameters for computerized forecasts or relying on irrelevant, or poor quality, data.

The use of one-number forecasts

The vast majority of the forecasts used in business display only one number per item/location/period.  There is no indication of the estimated forecasting error.  Thus, if the forecast states that 1,000 units are going to be sold next week, there is no indication of whether selling 1,500 is still likely, or only 600.  This distorts the value of the information required for a sound decision, like how much to buy for next week's sales.

Any computerized forecast, based on even the simplest mathematical model, includes an estimate of the likely deviation from the mean.  Turning the expected value of a forecast into a reasonable range, for example plus or minus 1.5 to 2 estimated standard deviations, or a multiple of the mean absolute deviation or the mean absolute percentage error (MAPE), yields roughly an 80-90% chance that the actual outcome falls within that range.
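As a rough illustration, the sketch below (hypothetical numbers; the 1.5 multiplier is only one possible choice, not a rule) turns a one-number forecast plus its estimated mean absolute deviation into such a reasonable range:

```python
# A minimal sketch of deriving a reasonable range from a forecast and its error estimate.

def reasonable_range(mean_forecast: float, mad: float, k: float = 1.5):
    """Return (low, high) given the mean forecast and its mean absolute deviation."""
    return max(0.0, mean_forecast - k * mad), mean_forecast + k * mad

low, high = reasonable_range(mean_forecast=1200, mad=130)
print(f"Plan for somewhere between {low:.0f} and {high:.0f} units next week")
# -> roughly 1005 to 1395, instead of the single number 1,200
```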

How can such a reasonable range support decisions?

The two key pieces of information are the boundaries of the range.  Every alternative choice should be evaluated against both extreme values of the range to calculate/estimate the potential damage.  When the actual demand equals the lower end of the range there is one outcome, and when it equals the higher end there is another.  When the demand falls somewhere within the range, the outcome also falls between the two extreme outcomes.  Given both extreme outcomes, the choice between the practical alternatives becomes realistic and leads to better decisions than when no such range of reasonable outcomes is presented to the decision makers.

A simple example:  The forecast states that next week's sales of Product X would be somewhere between 1,000 and 1,400 units.  The decision is about the level of stock at the beginning of the week.  For simplicity, let's assume there is no practical way to add units of X during the week, or to move units between locations.

There are three reasonable alternatives for the decision: holding 1,000 units, holding 1,400, or going with the mean forecast of 1,200.

If only 1,000 units are held and the demand is just 1,000, the result is perfect.  However, if the demand turns out to be 1,400, there is unsatisfied demand for 400 units.  The real damage depends on the situation: what might the unsatisfied customers do?  Will they buy similar products, develop a grudge against the company, or patiently wait for next week?

When the decision is to hold 1,400 in stock, the question is: there might be a surplus of unsold units at the end of the week – is that a problem?  If sales continue next week and the units still look new, then the only damage is the too-early expense of purchasing the extra 400 units.  There might be, of course, other cases.

What is the rationale for storing 1,200 units?  It makes sense only when a shortage and a surplus cause roughly the same damage.  If being short is worse than having a surplus, then storing 1,400 is the common-sense decision.  When a surplus causes the higher damage, the decision should be to store just 1,000.

The example demonstrates the advantage of having a range rather than 1,200 as the one-number forecast, which leaves the decision maker wondering how far the demand might stray.
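A small sketch of how the two ends of the range can be used in this example: the per-unit damage figures below are assumptions the decision maker has to supply, not something the data can provide.

```python
# Evaluating the three stocking alternatives against both ends of the range.
SHORTAGE_DAMAGE_PER_UNIT = 8.0   # assumed: lost margin plus some customer ill-will
SURPLUS_DAMAGE_PER_UNIT = 1.0    # assumed: cash tied up in units that sell next week

def worst_outcomes(stock: int, low: int, high: int):
    """Damage if demand hits the low end and if it hits the high end of the range."""
    damage_if_low = max(0, stock - low) * SURPLUS_DAMAGE_PER_UNIT
    damage_if_high = max(0, high - stock) * SHORTAGE_DAMAGE_PER_UNIT
    return damage_if_low, damage_if_high

for stock in (1000, 1200, 1400):
    print(stock, worst_outcomes(stock, low=1000, high=1400))
# 1000 -> (0.0, 3200.0)   shortage of 400 units if demand turns out high
# 1200 -> (200.0, 1600.0)
# 1400 -> (400.0, 0.0)    surplus of 400 units if demand turns out low
```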

There are two very different ways to forecast demand.  One is using a mathematical forecasting algorithm, based on past results and performed by a computer.  The other is asking the people closest to the market to express their intuition.  The mathematical algorithm can be used to create the required range, but the parameters defining the range, mainly the probability that the actual result falls within it, still have to be set.

The other type, where human beings use their intuition to forecast the demand, also lends itself to forecasting a range rather than one number.  Human intuition is definitely not tuned to one number.  But certain rules should be clearly verbalized; otherwise, the human-forecasted ranges might be too wide.  The idea behind the reasonable range is that possible, but extreme, results should be left outside the range.  This means the organizational culture accepts that sometimes, not too often, the actual result deviates from the forecasted range.  There is no practical way to assess an intuitive 90% confidence interval, as the exact probabilities, and even the formula describing the behavior of the uncertainty, are unknown.  Still, it is possible to describe the uncertainty approximately in a way that is superior to simply ignoring it.

We do not expect that all actual results will fall within the range; we expect that 10-20% would lie outside the reasonable range

There could be more variations on the key decision.  When both shortages and surpluses cause considerable damage, maybe Operations should check whether it is possible to expedite a certain number of units in the middle of the week.  If this is possible, then holding 1,000 at the beginning of the week and being ready to expedite 400, or fewer, during the week makes sense.  It assumes, though, that watching the actual sales at the start of the week yields a better forecast, meaning a much narrower range.  It also assumes the cost of expediting is less than the cost of being short or of carrying too much.

Another rule that has to be fully understood is to avoid combining the ranges of individual items/locations to forecast the demand for a product family, a specific market segment, or the total demand.  While the sum of the means is the mean of the combined forecasts, summing the ranges yields a huge exaggeration of the reasonable range.  The mathematical forecasting should re-forecast the mean and the mean absolute deviation based on the past data of the combined demand.  The human forecast should, again, rely on human intuition applied directly to the aggregate.

Remember the objective: supporting better decisions by exposing the best partial information that is relevant to the decision.  Considering too wide a range, which includes cases that rarely happen, doesn't support good decisions, unless the rare case could yield catastrophic damage.  Too-wide ranges push toward overly safe decisions, definitely not the decisions required of successful companies.

Warning:  Another related common mistake is assuming that the demand for each item/location is independent of the demand for another item or location.  THIS IS USUALLY WRONG!  There are partial dependencies of demand between items and across locations.  However, the dependencies are not 100%.  The only practical advice is: forecast what you need.  When you need the forecast for one item – do it just for that item.  When you need the forecast of total sales – do it for the total from scratch.  The one piece of information you might use: the sum of the means should be equal to the mean of the sum.  When there is a mismatch between the sum of the means and the mean of the sum, it is time to question the underlying assumptions behind both the detailed and the global forecasts.  The small simulation after this warning illustrates the point about ranges.
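The following simulation sketch (with invented numbers) illustrates why adding up per-item ranges misleads: when item demands are only partially dependent, the summed ranges come out several times wider than the range measured directly on the aggregated demand.

```python
# Partial dependence: each item shares a market factor but also has its own noise.
import numpy as np

rng = np.random.default_rng(0)
weeks, items = 1000, 50
market = rng.normal(0, 10, size=(weeks, 1))      # shared (dependent) component
local = rng.normal(0, 30, size=(weeks, items))   # item-specific component
demand = 100 + market + local                    # partial dependence, not 100%

item_mad = np.mean(np.abs(demand - demand.mean(axis=0)), axis=0)
sum_of_item_ranges = 2 * 1.5 * item_mad.sum()    # +/- 1.5 MAD per item, then summed

total = demand.sum(axis=1)
total_mad = np.mean(np.abs(total - total.mean()))
range_of_total = 2 * 1.5 * total_mad             # re-forecast the total directly

print(round(sum_of_item_ranges), round(range_of_total))
# The summed per-item ranges are roughly three times wider than the directly
# measured range of the total, because item-level noise partly cancels out.
```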

The right forecast for the specific decision

Suppose that a consistent growth in sales raises the issue of a considerable capacity increase, both equipment and manpower. 

Is there a need to consider the expected growth in sales of every product? 

The additional equipment is required for several product families, so the capacity requirements depend mainly on the growth of total sales, even though some products require more capacity than others.

So, the key parameter is the approximate new level of sales, from which the required increase in capacity is calculated back.  That increase in sales could also require an increase in raw materials, which has to be checked with the suppliers.  There might even be a need for a larger credit line to bridge the gaps between the timing of material purchasing, the regular operating expenses for maintaining capacity, and the timing of incoming revenues.

Relying on the accumulation of individual forecasts is problematic.  It is good for calculating the average of the total, but not for assessing the error of the total.  Being exposed to a reasonably conservative forecast of the total versus a reasonably optimistic one would highlight both the probable risk in the investment and the probable gain.

A decision about how much to store at a specific location has to be based on the individual ranges per SKU/location.  This is a different type of forecast, facing a higher level of uncertainty, and thus it should be based on short horizons and fast replenishment, which deal better with the fluctuations in the demand.  The main assumption of TOC and Lean is that the demand for the next short period is similar to the last period, so fast replenishment according to actual demand provides quick adjustment to random fluctuations.  Longer-term planning needs to consider trends, seasonality, and other potentially significant changes.  This requires forecasts that look further into the future and are able to capture the probability of such changes and include them in the reasonable range.

There are also decisions that have to consider the forecast for a specific family of products, or decisions that concern a particular market segment, which is a part of the market the company sells to.

The current practice regarding computerized forecasting is to come up with detailed forecasts for every item and accumulate them based on the need.  The problem, as already mentioned, is that while the accumulation of the averages yields the average of the total, accumulating the ranges yields a range that is much too wide.

Another practice, usually based on intuitive forecasts, is to forecast the sales of a family of products/locations and then assume a certain distribution within the individual items.  This practice adds considerable noise to the average demand for individual items, without any reference to the likely spread.

Considering the power of today's computers, the simple solution is to run several forecasts, each matched to the decision-making requirements

When it comes to human-intuition based forecasts, there is flexibility in matching the forecast to the specific decision.  The significant change is using the reasonable range as the key information for the decision at hand.

Data quality

A special issue for forecasting is being aware of what past data is truly relevant to the decision at hand.  Statistics, as well as forecasting algorithms, have to rely on time-series data reaching back far enough to identify trends, seasonality, and other factors that impact future sales.  The potential problem is that consumption patterns might have gone through a major change in the product, the market, or the economy, so what happened prior to the change may no longer be relevant.

Covid-19 caused a dramatic change to many businesses, like tourism, restaurants, pubs, and cinemas.  Other businesses have also been impacted, but in a less dramatic way.  So, special care should be taken when forecasting future demand after Covid-19 while relying on demand observed during the pandemic.  The author assumes the future consumption patterns for most products and services will behave differently after Covid-19 relative to 2019.  This means the power of computerized forecasts might go down for a while, as not much good data will be available.  Even human-intuition forecasts should be used with extra care, as intuition, like computerized forecasting, is slow to adapt to a change and to predict its behavior.  Using rational cause-and-effect to re-shape the intuition is the right thing to do.

Conclusions

All organizations have to try their best to predict the future demand, but all managers have to internalize the basic common and expected uncertainty around their predictions and include the assessment of that uncertainty into their decision-making.

Once this recognition is in place, forecasts that yield a reasonable range of outcomes become the best supportive information, leading to much improved decisions.  At times when the common and expected uncertainty is considerably higher than it was prior to 2020, the organizations that learn faster to use such range-forecasting will gain a decisive competitive edge.

Lack of Trust as a Major Obstacle in Business

By Eli Schragenheim and George Dekker

Business organizations and individuals clearly try to maximize their own interests, even at the expense of others. This creates an inherent lack of trust between any two different business entities.

Just to be on the safe side of clarity, let’s consider the following definitions:

Trust: assured reliance on the character, ability, strength, or truth of someone or something (Merriam-Webster)

A feeling of confidence in someone that shows you believe they are honest, fair, and reliable (Macmillan dictionary)

Trust is a key concept in human relations, but does it have a role in business? Some elements of trust can be found in business relationships, like reliability and accountability. But does an organization have a ‘character’ that can be appreciated by another organization, or even by an individual? Is it common to attribute ‘honesty’ to a business organization?

Yet, trust is part of many business relationships. Actually there are three categories of trust that are required for business organizations:

  1. Maintaining stable and effective ongoing business relationships with another organization. This is especially needed when the quality performance of the other organization matters. For instance, trusting a supplier to be able to respond faster when necessary. Suppliers need to trust their clients to honor the payment terms. When two organizations partner for a mutual business objective, the two-way trust is an even stronger need.
  2. The required two-way trust between an organization and its employees. This covers shareholders trusting the CEO, top management trusting their subordinates, and lower and mid-level employees trusting top management. When that trust is not there, the performance of the organization suffers.
  3. The trust of an individual, a customer or a professional, towards a company they expect service from or they expect to follow the terms of an engagement.

The need of governments to gain and retain trust of citizens is out of scope for this article.

What happens when there is no trust, but both sides like to maintain the relationships?

The simple, yet limited, alternative is to base the relationship on a formal agreement, expressed as a contract, which generally includes inspection, reporting, and other types of assurance of compliance, and details what happens when one of the parties deviates from the agreement.

There are two basic flaws in relying on contracts to ensure proper business relationships.

  1. When the gain from breaking the contract is larger than the realistic compensation then the contract cannot protect the other side. It is also quite possible that the realistic compensation, including the time and efforts to achieve it, is poor relative to the damage done.
  2. Contracts are limited to what is clearly expressed. As language is quite ambiguous, contracts tend to be long, cryptically written, and leave ample room for opportunism and conjecture. They contain only what the sides are clearly anticipating might happen, but reality generates its own surprises. The unavoidable result is that too much damage can be caused without clearly breaching the contract.

We can also point to another significant and generic problem when the business does not trust others:

Lack of trust impedes the ability to focus on key issues, as significant managerial attention is spent on monitoring and reacting to actions of others.

This realization is directly derived from the concept of ‘focus’, which is essential to TOC. Without being able to focus on exploitation and subordination to the constraint, the organization fails to materialize its inherent potential.

Before going deeper into the meaning of ‘trust’, let’s examine the somewhat similar concept of win-win.

Unlike trust, which is mostly emotional and intuitive, win-win is based on logical cause-and-effect. The essence is that when both parties win from a specific collaboration then there is a very good chance that the collaboration will work well. It seems clear that when the collaboration is based on win-lose there is high probability that the losing party would find a way to hurt the winning partner.

Win-win usually keeps the collaboration going, but it does not prevent deviations from its core spirit. Moreover, while win-win seems logical, it is not all that common in the business world. In too many cases there is no realization that win-win is absolutely necessary. The main obstacle to win-win is that managers are not used to analyzing a situation from the perspective of the other side. In other words, they are not aware of the other party's win and loss. Too many salespeople believe they do a good job even though they do not really understand their client's business case.

Another problem with win-win is that the initial conditions, upon which the win-win has been based, might go through a change. In such a case one party might realize that the collaboration could cause a loss and that creates a temptation to violate the formal or informal agreement. The burst of Covid-19 certainly led to many cases where the seemingly win-win agreement came to an abrupt end or led to updated terms that are actually win-lose.

Trust is even more ambitious than win-win. It goes deeper into the area of broad rules of “what shouldn’t be done”. It also requires reliance on the other party to be fully capable and it stretches beyond the current relationship.

Trust is based on emotions that generate a belief in the capabilities and integrity of the other. It is natural for people to trust or distrust others based on their intuition. Marriage is a good example of trust being a necessary condition for a "good marriage." The practical requirements of a collaboration based on trust go far beyond win-win, because trust is less dependent on conditions that might easily be invalidated by external sources or events. Of course, there are many cases where people breach the trust given to them, which usually causes a shock to the believers and makes some people less open to trusting others again. It also makes it almost impossible to restore trust once it existed and vanished.

Generally speaking, many humans feel a basic need to trust some people; it makes their lives more focused, with less effort spent on checking everyone and less fear of being cheated.

But, trust between organizations is a very difficult concept.

Trust is an elusive concept that is difficult to measure. While humans use their emotions and intuition, the organizational setting prefers facts, measurements and analysis. Another difficulty in trusting an organization is that its management, the people that have made the trust possible, could be easily replaced at any time, or can be coerced to betray trust by forces within and outside an organization.

Still, if trusting others is a need for organizations, then the organization has to relax its basic norms, and the damage caused by a lack of trust needs to be clearly realized.

Let’s first check the relationships between the organization and its employees.

When an individual chooses to be an employee, the common desire is to stay within one organization until retirement, hopefully going up the ladder. At least this was the common wish before high-tech companies, and the search for truly great high-tech professionals, changed the culture. The rise of high-tech revealed more and more employees who don't intend to stay long in the organization they currently work for. In other words, they radiate that the organization should not trust them to be there when they are badly needed. This creates a problematic situation for high-tech, where the key employees are actually temporary workers and either side could decide to end the working relationship.

The commitment of organizations to their employees in all areas of business has also weakened, even though in Europe regulations restrict the freedom of management to easily lay off employees. Covid-19 made it clear to many employees that they cannot trust top management.

As an expression of mistrust, most organizations consistently measure the performance of their employees. Many have serious doubts regarding the effectiveness of these personal performance measurements, but the most devastating effect is that the vast majority of employees look for any way, legitimate or not, to protect themselves from this kind of unfair judgment, including taking actions that violate the goal of the organization.

So, currently there is common mutual mistrust between employees and top management. In spite of that, most organizations continue to function, though far from exploiting their true business potential. The price the organization pays is stagnation, low motivation to excel, and a general refusal to take risks that might have personal implications.

As already mentioned, when there is a need for two organizations to collaborate certain rules have to be set and agreed upon. Monitoring the other party’s performance in reality is not only difficult; it consumes considerable managerial capacity and prevents managers from focusing on the more critical issues. As already noted even detailed contracts do not fully protect the fulfillment of the agreement.

So, there is a real need for organizations to trust other collaborating organizations. This means a ‘belief’, without any concrete proof, that the other side would truly follow the agreement, and even the ‘spirit of the agreement’. The rationale is that trust greatly simplifies the relationships and increases the prospects of truly valuable collaboration.

Agreements between organizations are achieved through people, who meet face-to-face to help in establishing the trust. The feelings of the people involved are a key factor. This is what the term ‘chemistry’ between business people and politicians means.

However, a negative branch (a potential undesirable effect) of trusting another is:

It is possible, even quite common, that organizations breach the trust placed on them, and by that cause considerable damage. The same is true between the organization and its employees.

How is it possible to trim the negative branch, taking into account the cost and difficulty of closely monitoring the performance and behavior of the other party?

A practical way is to trust the other party, but once a clear signal of misbehavior is identified – stop trusting. A breach of trust is not a matter of gradual erosion; a single observable instance is sufficient to damage the trust for good.

This actually means it is possible to build the image of a 'trustworthy organization'. What makes it possible to trust, without frantically looking for such signals, is that 'trustworthiness' applies not just to a specific agreement; it is a generic quality of the general conduct of an individual or an organization. When an organization spreads the notion of its trustworthy behavior and capabilities, this can be monitored, as any deviation from trustworthy conduct would become public, and all the organizations and individuals that do business with it will get the message.

Social media makes it possible to build, or ruin, a reputation for trustworthiness. There is a need, though, to handle cases where intentionally spread fake facts might disrupt that reputation. So, every organization that chooses to build a reputation for being trustworthy has to react fast to false accusations in order to keep that reputation.

E-commerce made the need to radiate trustworthiness particularly clear. Take a company like Booking.com as an example. Consumers who purchase hotel reservations through Booking have to trust that when they appear at the hotel they really have a room. The relationships between the digital store and its suppliers can also greatly benefit from trust.

So, it is up to the strategy of every company to evaluate the merits, and the cost, of committing to be trustworthy and of using that image as part of its key marketing messages. It is all about recognizing the long-term perceived value, in the eyes of clients, suppliers, and other potential collaborators, of being trustworthy. What organizations need to consider, though, is that a true breach of trust would make the task of re-establishing trustworthiness very hard indeed. So, when being trustworthy is a true competitive advantage, maybe even a decisive competitive edge, management has to protect it very thoroughly.

The Role of Intuition in Managing Organizations

Is it possible to make good decisions based solely on quantitative analysis of available hard data?

Is it possible to make good decisions based solely on intuition?

The key question behind this article is:

Is it valuable, and possible, to combine intuition together with quantitative data in a structured decision-making process in order to make better decisions?

For the sake of this article, I use the following definition of intuition:

The ability to understand something immediately without the need for conscious reasoning

Intuition is basically subjective information about reality.  Intuitive decision makers take their decisions based on their intuitive information, including predictions about the reaction of people to specific actions and happenings.  Intuition is led by a variety of assumptions to form a comprehensive picture of the relevant reality.  For instance, the VP of Marketing might claim, based on intuition, that a significant part of the market is ready for a certain new technology.  While intuition is a source of information, its accuracy is highly questionable due to a lack of data and rational reasoning.

Decisions are based on emotions, which dictate the desired objective, but should also include logical analysis of potential consequences.  Intuition replaces logic when there is not enough data, or time, to support good-enough prediction of the outcomes of an action.   We frequently make decisions that use intuition as the only supporting information, together with emotions determining what we want to achieve.

From the perspective of managing organizations, with a clear goal, using intuition contradicts the desire for optimal results, because intuition is imprecise, exposed to personal biases and very slow to adjust to changes.  But, in the vast majority of the cases, the decision-makers do not have enough “objective data” to make an optimal decision.  So, there is a real need for using intuition to complement the missing information.

Any decision is about impacting the future, so it cannot be deterministic as it is impacted by inherent uncertainty.   The actual probabilities of the various uncertainties are usually unknown.  Thus, using intuition to assess the boundaries of the relevant uncertainty is a must.

So, while intuition is not based on rational analysis, it opens the opportunity to apply logical, quantitative analysis of the probable ramifications, using both the available hard data and the complementing intuitive information.

Assessing the uncertainty by using statistical models, which look at past data from similar situations, is usually more objective and preferable to intuition.  However, too often the past data is either not available or can be grossly misleading, as it could belong to basically different situations.

People make intuitive decisions all the time.  Intuition is heavily based on life experience where the individual recognizes patterns of behaviors that look as if following certain rules.  These rules are based on cause-and-effect, but without going through full logical awareness.  Intuition is also affected by emotions and values, which sometimes distort the more rational intuition.

Taking into account the imprecise nature of intuition and its personal biases raises the question: what good can it bring to the realm of managing organizations?

The push for “optimal solutions” forces managers to go through logically based quantitative analysis.  However, when some relevant information is missing then the decisions become arbitrary.  This drive for optimal solutions actually pushes managers to simply ignore a lot of the uncertainty when no clear probabilities can be used.

A side comment:  The common use of cost-per-unit is also backed by the drive for optimal solutions, because cost-per-unit allows quantitative analysis.  Mathematically, the use of cost-per-unit ignores the fact that cost does not behave linearly.  The unavoidable result is that managers make decisions against their best intuition and judgment and follow a flawed analysis, which seems to be based on hard data but presents a distorted picture of reality.

The reality of any organization is represented by the term VUCA: volatility, uncertainty, complexity, and ambiguity.  From the perspective of the decision-maker within an organization, all the four elements can be described together as ‘uncertainty’ as it describes the situation where too much information is missing at the time when the decision has to be made.  In the vast majority of the VUCA situations the overall uncertainty is pretty common and known, so most outcomes are not surprising.  In other words, the VUCA in most organizations is made of common and expected uncertainty, causing any manager to rely on his/her intuition to fill the information required for making the final decision.  Eventually, the decision itself would also be based on intuition, but having the best picture of what is somewhat known, and what clearly is not known, is the best that can be sought for in such reality.

What is it that the human decision-maker considers as “reasonably known”? 

On top of facts that are given high confidence in being true, there are assessments, most of them intuitive, which consider a reasonable range that represents the level of uncertainty.  The range represents an assessment of the boundaries of what we know, or believe we know, and what we definitely don’t know.

An example:  A company considers the promotion of several products at a 20% price reduction during one weekend.  The VP of Sales claims that the sales of those products would be five times the average units sold on a weekend.

Does the factor of five times the average sales represent the full intuition of the VP of Sales? 

As intuition is imprecise by nature, the VP probably has in her mind a certain range for the impact of the reduced price, but she is expected to quote just one number.  It could well be that the true reasonable range, in the mind of the VP of Sales, is anything between 150% and 1,000% of the average sales, which actually means a very high level of uncertainty, or a much narrower range of just 400% to 500% of the average sales.

The point is that if the actual intuitive range, instead of an almost arbitrary single number, were shown to management, it would lead to a different decision.  With a reasonably possible outcome of 150% of the average sales, and assuming the cost of material is 50% of the price, the total throughput would actually go down!

Throughput calculations:  In the current state, sales = 100 and throughput = 100 – 50 = 50.  During the promotion, sales = 100*0.8*1.5 = 120 and throughput = 120 – 50*1.5 = 45.
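A minimal sketch of the same arithmetic applied to the ends of both quoted ranges instead of a single factor (baseline weekend sales normalized to 100; the uplift factors are the ones quoted above, not real data):

```python
# Throughput of the promotion weekend for a given sales-uplift factor.
def promotion_throughput(uplift: float, baseline_sales: float = 100.0,
                         price_cut: float = 0.20, material_rate: float = 0.50) -> float:
    revenue = baseline_sales * (1 - price_cut) * uplift      # discounted price, more units
    material = baseline_sales * material_rate * uplift       # material grows with units sold
    return revenue - material

regular_weekend_T = 100.0 - 50.0                             # = 50, the bar to beat
for uplift in (1.5, 4.0, 5.0, 10.0):
    print(uplift, promotion_throughput(uplift))
# 1.5 -> 45.0  (below the regular 50: the low end of the wide range loses throughput)
# 4.0 -> 120.0, 5.0 -> 150.0, 10.0 -> 300.0  (clear gains)
```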

So, if the wide range is put on the table of management, and its low side would produce a loss, then management might decide to avoid the promotion.  The narrower range supports going for the promotion even when its lower side is considered a valid potential outcome.

Comment:  In order to make a truly good decision for the above example, more information/intuition might be required.  I only wanted to demonstrate the advantage of the range relative to one number.

What is the meaning of being “reasonable” when evaluating a range? 

Intuition is ambiguous by nature.  Measuring the total impact of uncertainty (the whole VUCA) has to consider the practicality of the case and its reality.  Should we consider very rare cases?  It is a matter of judgment as the practical consequences of including rare cases could be intolerable.  When the potential damage of a not too-rare case might be disastrous then we might “reasonably” take into account a wider range.  But, when the potential damage can be tolerated, then a somewhat narrower range is more practical.  Being ‘reasonable’ is a judgment that managers have to make.

Using intuition to assess what is practically known to a certain degree is a major practical step.  The next step is recognizing that most decisions have a holistic impact on the organization, and thus the final quantitative analysis, combining hard data and intuitive information, might include several ‘local intuitions’.  This wider view lends itself to develop conservative and optimistic scenarios, which consider several ranges of different variables that impact the outcomes.  Such a decision-making process is described in the book ‘Throughput Economics’ (Schragenheim, Camp, and Surace).

Another critical question is: If we recognize the importance of intuition, can we systematically improve the intuition of the key people in the organization?

When the current intuition of a person is not exposed to meaningful feedback from reality, the signals that point to significant deviations are never received.  When a person's intuition is expressed as one number, the feedback is almost useless.  If the VP of Sales assessed the factor on sales as 5 and it eventually was 4.2, 3.6, or 7, how should she treat the results?  When a range is offered, the first feedback is: was the result within the range?  When many past assessments are analyzed, the personal bias of the person can be evaluated, and important learning from experience can lead to considerably improved intuition in the future.
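A sketch of such a feedback loop, using invented records: each past intuitive range is stored next to the actual result, and the hit rate plus the direction of the misses expose a possible personal bias.

```python
# (low, high, actual) per past intuitive assessment - hypothetical sample records.
forecasts = [
    (1000, 1400, 1320),
    (400, 700, 820),
    (50, 90, 60),
    (2000, 3000, 1900),
]

inside = sum(low <= actual <= high for low, high, actual in forecasts)
above = sum(actual > high for low, high, actual in forecasts)   # chronic underestimation?
below = sum(actual < low for low, high, actual in forecasts)    # chronic overestimation?

print(f"hit rate: {inside / len(forecasts):.0%}, above range: {above}, below range: {below}")
# With enough history, a hit rate far from the expected 80-90%, or misses mostly on
# one side, points to a personal bias worth discussing with the forecaster.
```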

Once we recognize the importance of intuition then we can appreciate how to enhance it effectively.

Between the Strategic Constraint and the Current Constraint

This article assumes the reader is familiar with the Theory of Constraints (TOC), especially the definition of a constraint and the five focusing steps, which belong to the basic TOC body of knowledge.

The concept of the Strategic Constraint has been raised because it could well be an important strategic choice: targeting the desired future situation where a particular resource would become the active constraint.  Once this happens, the organization's performance will depend on the exploitation of, and subordination to, that resource.

Actually, strategic constraint does not need to be a resource.  There are two other options.  The first is to declare the market as the strategic constraint.  The second is a rare situation where a critical material is truly scarce, so it could constrain the performance of the organization when nothing else limits it.

First, let’s deal with an internal resource as the strategic constraint.

The characteristics of a strategic constraint are:

  1. Adding capacity is very expensive, and it is also limited either by low availability in the market or by having to purchase the capacity in big chunks. It certainly needs to be much more expensive than any other resource.
  2. There is an effective way to control the exploitation of the strategic constraint resource.
  3. The overall achievable results when the specific resource is the constraint are better than when the constraint is something else. Determining the wishful future state where a specific resource would become a constraint is a very challenging objective, as is going to be explained and demonstrated later in this article.

Some organizations have an obvious strategic constraint.  When we consider an expensive restaurant it is easy to determine that the space where the guests sit and eat is naturally the strategic constraint.  Space is the more expensive resource, and enlarging the space is difficult or even impossible.  All the other resources, the kitchen, the chef, the staff, and the waiters are easier to manage and control.  Eventually, they are also not as expensive as space.  Even if one is tempted to think of the chef as the constraint, because of being the core of the decisive competitive edge, then space would easily become the actual constraint.  The reputation of the chef could serve several restaurants, which emphasizes the point that it is the reputation, rather than the physical capacity of the chef, that is exploited.

Most organizations do not have one clear resource that is much more expensive to elevate than all the rest, even though one resource is naturally somewhat more expensive than the rest.  Is it enough to make it the chosen constraint?

In order to answer the question, we need to understand better the way from the current situation to the desired situation where the strategic constraint becomes the actual constraint.

What happens when the current constraint is not the chosen strategic constraint?

The five focusing steps lead management to focus on exploitation of, and subordination to, the constraint, which brings a considerable increase to the bottom line.  The question is: would these steps bring the organization closer to the strategy of having the chosen resource become the constraint?

Suppose the organization is constrained by the current demand.  A good exploitation scheme is to ensure reliable delivery performance.  When the organization succeeds in improving the flow and delivering faster, more demand could be generated.  As long as there is no internal constraint, any additional demand with positive T increases the net profit.  There is no need to choose which specific sales to increase, as there is no tradeoff between increasing the sales of product A or product B.  When these efforts continue, an internal constraint eventually emerges.  So, we come back to the question: what should we do when the current active internal constraint (new or old) is not the strategic one?

When the current constraint is a resource, but not the one we would like to have, the only way to come closer to establishing the chosen strategic constraint is to elevate the current constraint as soon as possible.  Exploiting and subordinating to the current "wrong" constraint doesn't make sense unless the elevation takes a very long time.

So, how can we make the chosen strategic constraint the actual constraint?

Trying to exploit the strategic constraint when it is not the current constraint is not effective and could cause considerable damage.  Using T per constraint-time as a priority mechanism works contrary to the objective when the constraint-unit isn't the current constraint.  To illustrate the point, assume two product categories: A and B.  Category A takes significant capacity from MX, the chosen strategic constraint, so its T per MX-time is low.  Category B requires less of MX, but much more of another resource called MY.  In the current state there is no active capacity constraint.  MX is more expensive than MY, which is the main reason MX is considered the strategic constraint.  Which product would you like to expand?  Considering T/MX would lead to expanding category B sales as much as possible, but then MY might emerge as the constraint.  Expanding the sales of category A would make MX the constraint, which is what we want, but for low overall T.  Is that the product mix we have longed for?

The point is that high T/constraint-time means nice T for little utilization of the constraint.  However, that product might need much more capacity from a non-constraint, which means that significantly more sales might cause a non-constraint to penetrate into its protective capacity and become an interactive constraint.  When this happens a new question emerges: are there quick means to add capacity to that resource, and what is the cost of adding this capacity?

Generally speaking whenever growth is considered it is necessary to carefully check the capacity of several critical resources and not just the strategic constraint!

The P&Q is the best-known example demonstrating the concept of T per constraint-unit.  Here is the original case:

[Figure: the original P&Q example diagram]

Just suppose, unlike the original case where the demand for P and Q is fixed, that it is possible to expand the demand to both products.  It is also possible to double the capacity of every resource for an extra $1,500 per week.

Starting with the Blue resource as the strategic constraint:

If our chosen constraint is the Blue resource, then P yields T(P) per hour-of-Blue of (90-45)*4 = $180 (the Blue is able to produce 4 Ps per hour), while Q yields (100-40)*2 = $120 (the Blue allows only two units of Q per hour), so the organization should produce only P!  The total weekly T from selling 160 Ps would be 180*40 (weekly hours) = $7,200.  We still have the same operating expenses (OE) of $6,000 per week, so the net weekly profit is $1,200.

By the way, when producing only P, all four resources carry exactly the same load.  If you need protective capacity, that is problematic, as any additional unit of capacity would increase OE by $1,500 and thereby bring a loss!  Selling only P could also be risky for the long term.  For now, let's stay with the theoretical situation where there are no fluctuations (no Murphy).

What if we choose the Light-Blue resource as the strategic constraint?

First obvious recognition: there is a need to elevate the capacity of the Blue resource.  Actually, this might not be obvious to everybody.  When the focus is on the Light Blue, we might lose sight of the current constraint.

Anyway, considering a future state where the Light Blue is the constraint, similar calculations show that Q brings more T per hour of the Light Blue resource ($360 per Light-Blue hour, as 6 Qs are produced per hour) than P.  Selling only Qs, with the Light Blue as the constraint, would generate weekly T of 360*40 = $14,400.  But OE cannot stay at $6,000.  Three units of the Blue resource are needed, so two units have to be added.  The OE would be 6,000 + 2*1,500 = $9,000, and the net weekly profit would be $5,400, a better profit than with the Blue resource as the constraint, but with a higher level of OE.  This situation is also theoretical, as both the Blue and the Light Blue are loaded to 100% of their available capacity.
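The sketch below reproduces the two scenarios using only the figures quoted above (and the text's implication that producing only P loads all four resources equally, so the Light Blue also needs 15 minutes per P); rounding resource units up to whole machines is a simplification.

```python
import math

HOURS_PER_WEEK = 40
BASE_OE = 6000          # weekly operating expenses with one unit of each resource
EXTRA_UNIT_COST = 1500  # weekly cost of one additional resource unit

def scenario(t_per_unit, minutes_on_constraint, minutes_on_other, other_units_available=1):
    """Weekly T, OE, and profit when one resource is fully loaded as the constraint."""
    units_per_week = HOURS_PER_WEEK * 60 / minutes_on_constraint
    weekly_T = units_per_week * t_per_unit
    other_units_needed = math.ceil(units_per_week * minutes_on_other / (HOURS_PER_WEEK * 60))
    extra_units = max(0, other_units_needed - other_units_available)
    OE = BASE_OE + extra_units * EXTRA_UNIT_COST
    return weekly_T, OE, weekly_T - OE

# Only P, Blue (15 min/P) as the constraint; the Light Blue also needs 15 min per P.
print(scenario(t_per_unit=90 - 45, minutes_on_constraint=15, minutes_on_other=15))
# -> (7200.0, 6000, 1200.0)

# Only Q, Light Blue (10 min/Q) as the constraint; each Q needs 30 min of Blue.
print(scenario(t_per_unit=100 - 40, minutes_on_constraint=10, minutes_on_other=30))
# -> (14400.0, 9000, 5400.0)
```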

A simple realization is:

It is not trivial to guess which resource as a strategic constraint would yield the best profit!

One needs to consider the capacity profile of other resources to ensure they have enough capacity to support the maximum T that the strategic constraint is able to generate.  Practically it means trying several scenarios where the limited capacity of several critical resources is calculated and solved.

In reality, there is a need to keep protective capacity on all non-constraints.  Actually, even the constraint itself should not be planned for 100% of its theoretical capacity, in order to keep the delivery performance intact.

Realizing the above lessons, and assuming there is no single resource that is clearly very difficult to elevate, why should we be bound by the capacity of a resource that can easily be elevated, when there is enough potential demand to grab?

Another issue is the wish for stability.  If the capacity constraint resource moves frequently, the exploitation schemes, including the priorities between products and markets, might change frequently as well.  But the problem is that looking for stability might constrain the growth, or force the elevation of several resources whenever the demand expands.

This leads us to consider recognizing the market demand as the strategic constraint.

Subordinating to the market demand is actually a basic requirement for the vast majority of businesses, even when an internal constraint prevents management from accepting more orders.  It is easy to imagine what might happen when an organization, focusing on exploiting the limited capacity of an internal constraint, fails to maintain reliable, high-quality delivery to its clients.  If the demand then goes down, the internal constraint will stop being a constraint.

Keeping growth going means constantly expanding market demand!  Keeping enough protective capacity on the critical resources means frequently increasing the capacity of one or more resources whenever buffer management, or the planned load, warns of penetration into the protective capacity.  Goldratt coined the term "progressive equilibrium" for this.  The difference between this and keeping a strategic capacity constraint resource is that there is no need to keep one particular resource as the weakest link, which means fewer overall elevations to serve the same growth in sales.

It seems to me that as long as there is no natural strategic constraint, treating the market demand as the constraint makes better sense.  The growth plan has to frequently check the capacity profile of several key resources, making sure all commitments to the market can be reliably delivered.

As a final comment:  Goldratt mentioned Management Attention as the ultimate constraint.  To my mind, this is true for the Flow of Initiatives, which looks at how to improve the current Flow of Value (the products and services delivered to existing clients).  Management Attention constrains the pace of growth of organizations.  Once managers learn how to focus on the right issues, their attention capacity becomes the strategic constraint for growth.

The special role of common and expected uncertainty for management


After what we recently went through, the area of risk management gets naturally more attention.  The question is centered on what an organization can do to face very big risks; many of them come from outside the organization.

What about the known small risks managers face all the time?

I suggest distinguishing between two different types of uncertainty/risks, which call for distinct methods of handling.  One is what we usually refer to as risks, meaning possible occurrences that generate big damage.  This kind of uncertain event is viewed as something we strive to avoid, and when we are unable to we try to minimize the damage.

The other type, which I call ‘common and expected uncertainty’, is simply everything we cannot accurately predict, but we know well the reasonable range of possible results.   The various results are sometimes positive and sometimes negative, but not to the degree that one such event would destroy the organization.  The emphasis on ‘common and expected’ is that none of the possible actual outcomes should come as a surprise.  While the actual outcome frequently causes some damage, true significant damage could come only from the accumulation of many such uncertain outcomes, and this is usually rare. So, losing one bid might not be disastrous, but losing ten in a row might be.

This article claims that there is a basic difference in handling the two types of uncertainty.  While both impact decisions and both call for protective mechanisms, the objective of those mechanisms and the practice of managing them is quite different.

The economic impact of ‘common and expected uncertainty’ is by far underappreciated by most decision-makers.

Hence the value of improving the method for dealing with ‘common and expected uncertainty’ is much higher than expected.

A big risk is something to be prepared for, but the means have to be carefully evaluated.  For instance, dealing with the risk of earthquakes involves economic considerations.  It is definitely required to apply standards of safety in the construction of buildings, roads, and bridges, but the costs, and the impact on the lead time, have to be considered.  Another common protection against the damage of earthquakes is given by insurance, which again raises the issue of financial implications.

Some risks are very hard to prepare for.  What could the airlines have done to prepare for the Coronavirus other than carrying enough cash reserves?  Airlines invest a lot in preventing fatal accidents and have procedures to deal with such events.  But there are risks for which preparations, or insurance, don't really help.  Every time I go on a flight I'm aware that there is a certain risk for which I have no meaningful protection.  So, I accept the risk and just hope that it'll never materialize.

Ignoring common and expected uncertainty is not reasonable!  However, it is practically ignored by too many organizations, which pretend they are able to predict the future accurately and base their planning on that pretense.  This illogical behavior creates an edge for organizations with a better capability to deal with common and expected uncertainty, which can generate very high business value based on reliable and fast service to customers.  That capability also leads to built-in flexibility that quickly adapts to the changing tastes of the market.  Isn't this a basic capability for facing the new market behavior resulting from the Coronavirus crisis?  The burst of the epidemic changed the common and expected uncertainty, but by now we should be used to its new behavior, making it more "expected" than it was in March 2020.

Failing to deal with the common and expected uncertainty is especially noted in supply chain management.

For instance, a past CEO of a supermarket chain admitted to me that at any given time the rate of shortages on the shelf is, at least, 15%.  The damage of 15% shortages is definitely significant, as it means that many of the customers, coming to a supermarket store with a list of items to buy, don’t go home with the full list fulfilled.  When this is an ongoing situation then some customers might decide to move to another store.  As long as all the chains suffer from the same level of shortages this move of customers is not so damaging.  But, if a specific chain would significantly reduce the shortages it would steal customers from the other chains.

Given the common and expected uncertainty in both the demand and the supply, is there a way to manage the supply chain much more reliably?

To establish a superior way the basic flaw(s) in the current practice should be clearly verbalized.

The current flawed managerial use of forecasts points to an even deeper core problem.  Mathematically, a forecast is a stochastic function exposed to significant variability and thus should be described by a minimum of two parameters: an average and a measure of the spread around that average.  The norm in forecasting is to use the forecast itself as the average and the forecasting error as the measure of the average absolute deviation from it.  The forecasting error, like the forecast itself, is deduced from past results.

The use of just ONE-number forecasts in most management reports demonstrates how managers pretend to "know" what the future should be, ignoring the expected spread around the average.  When the forecast is not achieved, it is treated as the fault of employees who failed, and this is often a distortion of what truly happened.  Once the employees learn the lesson, they know how to maneuver the forecast to secure their measured performance.  The organization loses from that behavior.

When the MRP algorithm in the ERP software takes the forecast and calculates the required materials, the organization doesn't really get what might be needed!  Safety stock set without reference to the forecasting errors is too arbitrary to fix the situation.

A decision-maker viewing an uncertain situation needs to have two different estimations in order to make a reasonable decision:

  1. What could be the situation in a reasonable best-case scenario?
  2. What might be the reasonable worst-case situation?

The way to handle uncertainty is to forecast a reasonable range of what we try to predict.  Forecasting sales is the most common way to determine what Operations should be prepared to do.  Other cases where reasonable ranges should be used include considering the time to complete a project or just a manufacturing order.  The need for the range is to support the promise for completion, leaving also room for delays due to common and expected uncertainty.  The budget for a project, or a function within the organization, is an uncertain variable that should be handled by predicting a reasonable range.

The size of the range dictates the size of the buffer, the protection mechanism against common and expected uncertainty.  One side of the reasonable range expresses a minimum assessment, where an actual result below that number seems "unreasonable" based on what we know; the other side expresses the maximum reasonable assessment.  If you choose to protect against the possibility that reality comes close to the maximum assessment, as when you strive to prevent any shortage, then you need to tolerate too much stock, time, money, or whatever other entity constitutes the buffer.  In cases where the cost of the buffer is high, the financial consequences of losing sales due to shortages have to be weighed against it.

One truly critical variable in the supply chain is the forecasting horizon.  Cost considerations can push planners to use too long horizons, which increases the level of uncertainty in an exponential way.  When it comes to managing the supply chain, which is all about managing the common and expected uncertainty, the horizon of the demand forecast should reflect the reliable supply time and not beyond that value.

Buffer Management is an unbelievably important concept, developed by Dr. Goldratt, which is invaluable for managing common and expected uncertainty during the execution phase; it also helps to identify emerging situations where the buffers, based on the predicted reasonable ranges, fail to function properly.  The idea is simple: as long as you are using a buffer against a stream of fluctuations, the state of the buffer tells you the real current level of urgency of the particular item, order, or even the state of cash.  Buffer management uses the well-known code of Green, Yellow, and Red to radiate what is more urgent, and this provides the best behavior model for dealing with common and expected uncertainty.
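A minimal sketch of that mechanism for time (due-date) buffers, using the customary one-third and two-thirds thresholds and invented orders; the "black" label for a fully consumed buffer is one common convention, not a rule from the text.

```python
# Color-coding order buffers by the fraction of the buffer already consumed,
# then handling the most penetrated buffers first.

def buffer_color(elapsed_days: float, buffer_days: float) -> str:
    consumed = elapsed_days / buffer_days
    if consumed < 1/3:
        return "GREEN"
    if consumed < 2/3:
        return "YELLOW"
    return "RED" if consumed <= 1.0 else "BLACK (buffer fully consumed - already late)"

orders = {"A17": (3, 12), "B02": (9, 12), "C33": (13, 12)}   # elapsed vs. buffer, in days
for order, (elapsed, buffer) in sorted(orders.items(),
                                       key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(order, buffer_color(elapsed, buffer))
# C33 BLACK..., B02 RED, A17 GREEN -> the most penetrated buffers get attention first
```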

The big obstacle to becoming much more effective is recognizing the impact of both risks and common and expected uncertainty.  The difficulty in recognizing the obvious is: how can the boss know whether a subordinate does a good job?  The inherent uncertainty is an easy explanation for any failure to meet targets.  The problem is that shutting our eyes does not help to improve the situation.  So, the perceived need of managers to constantly measure the performance of every employee, and to demand accountability for results over which the employees have only partial influence, is the ultimate cause of most managers ignoring common and expected uncertainty.