Forecasts – the Need, the Great Damage, and Using Them Right

Forecasting means predicting the future based on data and knowledge gained in the past.

According to the above definition every single decision we make depends on a forecast.  This is definitely true for every managerial decision.

The problem with every prediction is that it is never certain

Treating a forecast as a prophecy that tells us the future is a huge mistake.  What we need is a forecast that presents what the future might look like: what is more likely to happen, and what is somewhat less likely but still possible.

Mathematics teaches us that describing any uncertain behavior requires, at the very least, two different descriptors/parameters:  a central value, like ‘the expected value’, and another that describes ‘the expected deviation from the average’.  This leads to the definition of a ‘confidence interval’ within which the more likely results lie.  Any sound decision has to consider a range of possible results.

While there are several ways to obtain effective forecasts, which could be used for superior decision-making, the real generic problem is the misuse of forecasts.

There are two major mistakes in using forecasts:

  1. Using one-number forecasts.
  2. Using the wrong forecasting horizon or level of detail.  The generic point is that the exact type of forecast has to fit the decision that takes the forecast as a critical information input. A similar mistake is using the wrong parameters for computerized forecasts, or relying on irrelevant or poor-quality data.

The use of one-number forecasts

The vast majority of the forecasts used in business display only one number per item/location/period.  There is no indication of the estimated forecasting error.  Thus, if the forecast states that 1,000 units are going to be sold next week, there is no indication whether selling 1,500 is still likely, or whether sales might reach only 600.  This distorts the value of the information required for a sound decision, like how much to buy for next week’s sales.

Any computerized forecast, based on even the simplest mathematical model, includes an estimate of the expected deviation from the mean.  Taking the expected value of the forecast and turning it into a reasonable range, like plus or minus 1.5 to 2 estimated standard deviations, or a similar multiple of the mean absolute percentage error (MAPE), yields roughly an 80-90% chance that the actual outcome will fall within that range.
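To make this concrete, here is a minimal sketch (in Python) of turning a one-number forecast plus its error estimate into a reasonable range.  The function name, the coverage factor, and the example numbers are illustrative assumptions, not the API of any specific forecasting package.

```python
# A minimal sketch (not any specific package's API): turning a one-number
# forecast plus an estimated error into a "reasonable range".
def reasonable_range(expected, error_estimate, k=1.5):
    """expected: the forecasted mean (e.g. 1,200 units).
    error_estimate: estimated standard deviation (or MAPE * expected).
    k: how many error units to cover; 1.5-2 gives roughly 80-90% coverage
       under common assumptions about the error distribution."""
    low = max(0, expected - k * error_estimate)
    high = expected + k * error_estimate
    return low, high

# Example: forecast of 1,200 units with an estimated deviation of ~130 units
print(reasonable_range(1200, 130))   # -> (1005.0, 1395.0), roughly 1,000-1,400
```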

How can such a reasonable range support decisions?

The two key meaningful information items are the boundaries of the range.  Every alternative choice should be evaluated against both extreme values of the range in order to calculate/estimate the potential damage. When the actual demand equals the lower side of the range it leads to one outcome, and when it equals the higher side there is another outcome.  When the demand falls somewhere within the range, the outcome also falls between the two extreme outcomes. Given both extreme outcomes, the choice between the practical alternatives becomes realistic and leads to better decisions than when no such range of reasonable outcomes is presented to the decision makers.

A simple example:  The forecast states that next week’s sales of Product X would be somewhere between 1,000 and 1,400 units.  The decision is about the level of stock at the beginning of the week. For simplicity let’s assume that there is no practical way to add units of Product X during the week, or to move units to a different location.

There are three reasonable alternatives for the decision:  holding 1,000 units, holding 1,400, or going with the mean forecast of 1,200.

If only 1,000 units are held and the demand is just 1,000 – the result is perfect.  However, if the demand turns out to be 1,400 there is unsatisfied demand for 400 units.  The real damage depends on the situation:  what might the unsatisfied customers do?  Will they buy similar products, develop a grudge against the company, or patiently wait for next week?

When the decision is to hold 1,400 in stock, the question is:  there might be a surplus of unsold units at the end of the week – is it a problem?  If sales continue next week and the units still look new, then the only damage is the too-early expense of purchasing the extra 400 units.  There might be, of course, other cases.

What is the rationale for storing 1,200 units?  It makes sense only when a shortage and a surplus cause roughly the same damage.  If being short is worse than having a surplus, then storing 1,400 is the common-sense decision, as illustrated by the sketch below.  When a surplus causes the higher damage – let’s decide to store just 1,000.
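A small illustrative calculation can show how the two boundaries of the range drive the choice between the three alternatives.  The per-unit shortage and surplus damages below are hypothetical assumptions; in practice they come from the situational questions raised above.

```python
# Hypothetical sketch: compare stocking alternatives against both ends of the
# reasonable range (1,000-1,400 units). The damage figures are assumptions.
SHORTAGE_DAMAGE_PER_UNIT = 30.0   # lost margin, possible lost goodwill
SURPLUS_DAMAGE_PER_UNIT = 5.0     # early cash outlay / holding cost

def damage(stock, demand):
    if demand > stock:
        return (demand - stock) * SHORTAGE_DAMAGE_PER_UNIT
    return (stock - demand) * SURPLUS_DAMAGE_PER_UNIT

for stock in (1000, 1200, 1400):
    low, high = damage(stock, 1000), damage(stock, 1400)
    print(f"stock {stock}: damage if demand=1,000 -> {low:>7.0f}, "
          f"if demand=1,400 -> {high:>7.0f}")
# With these assumed numbers, a shortage hurts more than a surplus, so holding
# 1,400 looks like the common-sense choice - exactly the logic in the text.
```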

The example demonstrates the advantage of having a range rather than 1,200 as a one-number forecast, which leaves the decision maker wondering how large the demand might actually be.

There are two very different ways to forecast the demand.  One is through a mathematical forecasting algorithm, based on past results and performed by a computer.  The other is asking the people closest to the market to express their intuition.  The mathematical algorithm can be used to create the required range, but the parameters that define the range, mainly the probability that the actual result will fall within it, have to be set.

The other type, where human beings use their intuition to forecast the demand, also lends itself to forecasting a range, rather than one number.  Human intuition is definitely not tuned to one number.  But certain rules should be clearly verbalized; otherwise, the human-forecasted ranges might be too wide.  The idea behind the reasonable range is that possible, but extreme, results should be left outside the range.  This means the organizational culture accepts that sometimes, though not too often, the actual result deviates from the forecasted range. There is no practical way to assess an intuitive 90% confidence interval, as the exact probabilities, and even the formula describing the behavior of the uncertainty, are unknown.  Still, it is possible to approximately describe the uncertainty in a way that is superior to simply ignoring it.

We do not expect that all actual results will fall within the range; we expect that 10-20% would lie outside the reasonable range

There could be more variations on the key decision.  When both shortages and surpluses cause considerable damage, maybe Operations should check whether it is possible to expedite a certain number of units in the middle of the week.  If this is possible then holding 1,000 at the beginning of the week and being ready to expedite 400, or fewer, during the week makes sense.  It assumes, though, that watching the actual sales at the start of the week yields a better forecast, meaning a much narrower range.  It also assumes the cost of expediting is less than the cost of being short or of carrying too much.

Another rule that has to be fully understood is avoiding the use of combined ranges of items/locations for forecasting the demand of a product family, a specific market segment, or the total demand.  While the sum of the means is the mean of the combined forecasts, adding up the individual ranges yields a huge exaggeration of the reasonable range of the total.  The mathematical forecasting should re-forecast the mean and the mean absolute deviation based on the past data of the combined demand.  The human forecast should, again, rely on human intuition.

Remember the objective:  supporting better decisions by exposing the best partial information that is relevant to the decision.  Working with too wide a range, which includes cases that rarely happen, doesn’t support good decisions, unless the rare case might cause catastrophic damage.  Too-wide ranges lead to overly safe decisions, definitely not the decisions required for successful companies.

Warning:  Another related common mistake is assuming that the demand for each item/location is independent of the demand for another item or location.  THIS IS USUALLY WRONG!  There are partial dependencies of demand between items and across locations.  However, the dependencies are not 100%.  The only practical advice is:  forecast what you need.  When you need the forecast of one item – do it just for that item.  When you need the forecast of total sales – do it for the total from scratch.  The one piece of information you might use:  the sum of the means should be equal to the mean of the sum.  When there is a mismatch between the sum of the means and the mean of the sum, it is time to question the underlying assumptions behind both the detailed and the global forecasts.
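A quick simulation can illustrate why adding up the individual ranges exaggerates the range of the total, assuming (for illustration only) partly independent item demands; all the numbers below are made up.

```python
# Illustrative simulation: the sum of individual ranges vs. the range of the sum.
# Assumes 20 items whose demands fluctuate independently (illustration only).
import random

random.seed(1)
N_ITEMS, MEAN, SIGMA, K = 20, 100, 20, 1.5

# Naive aggregation: add each item's "reasonable range" edge to edge.
naive_low = N_ITEMS * (MEAN - K * SIGMA)
naive_high = N_ITEMS * (MEAN + K * SIGMA)

# Re-forecasting the total: simulate the combined demand and take its own range.
totals = [sum(random.gauss(MEAN, SIGMA) for _ in range(N_ITEMS))
          for _ in range(10_000)]
mean_total = sum(totals) / len(totals)
sigma_total = (sum((t - mean_total) ** 2 for t in totals) / len(totals)) ** 0.5

print(f"naive combined range : {naive_low:.0f} - {naive_high:.0f}")
print(f"range of the total   : {mean_total - K*sigma_total:.0f} - "
      f"{mean_total + K*sigma_total:.0f}")
# The means match (about 2,000), but the naive range is far wider than the
# range obtained by forecasting the total directly.
```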

The right forecast for the specific decision

Suppose that a consistent growth in sales raises the issue of a considerable capacity increase, both equipment and manpower. 

Is there a need to consider the expected growth in sales of every product? 

The additional equipment is required for several product families, so the capacity requirements depend mainly on the growth of total sales, even though some products require more capacity than others.

So, the key parameter is the approximate new level of sales, from which the required increase in capacity is calculated back.  That increase in sales could also require an increase in raw materials, which has to be checked with the suppliers.  There might even be a need for a larger credit line to bridge the gap between the timing of material purchases, the regular operating expenses for maintaining capacity, and the timing of incoming revenues.

Relying on the accumulation of individual forecasts is problematic.  It is good for calculating the average of the total, but not for assessing the average errors.  Being exposed to a reasonably conservative forecast of the total versus a reasonably optimistic one would highlight the probable risk in the investment as well as the probable gain.

A decision about how much to store at a specific location has to be based on the individual ranges per SKU/location.  This is a different type of forecast that faces a higher level of uncertainty and, thus, should be based on short horizons and fast replenishment, which deal better with the fluctuations in the demand.  The main assumption of TOC and Lean is that the demand for the next short period is similar to the last period, thus fast replenishment according to the actual demand provides quick adjustment to random fluctuations.  Longer-term planning needs to consider trends, seasonality and other potentially significant changes. This requires forecasts that look further into the future and are able to capture the probability of such changes and include them in the reasonable range.

There are also decisions that have to consider the forecast for a specific family of products, or decisions that concern a particular market segment, which is a part of the market the company sells to.

The current practice regarding computerized forecasting is to come up with detailed forecasts for every item and accumulate them based on the need.  The problem, as already mentioned, is that while the accumulation of the averages yields the average of the total, the resulting combined range is much too wide.

Another practice, usually based on intuitive forecasts, is to forecast the sales of a family of products/locations and then assume a certain distribution within the individual items.  This practice adds considerable noise to the average demand for individual items, without any reference to the likely spread.

Considering the power of today’s computers, the simple solution is to run several forecasts, each matched to the decision-making requirements.

When it comes to human-intuition based forecasts, there is flexibility in matching the forecast to the specific decision.  The significant change is using the reasonable range as the key information for the decision at hand.

Data quality

A special issue for forecasting is being aware of what past data is truly relevant to the decision at hand.  Statistics, as well as forecasting algorithms, have to rely on time-series data that goes back far enough in order to identify trends, seasonality and other factors that impact future sales.  The potential problem is that the consumption patterns might have gone through a major change, in the product, the market or the economy, so it is possible that what happened prior to the change is not relevant anymore.

Covid-19 caused a dramatic change to many businesses, like tourism, restaurants, pubs and cinemas.  Other businesses have also been impacted, but in a less dramatic way.  So, special care should be taken when forecasting future demand after Covid-19 while relying on the demand during the pandemic.  The author assumes the future consumption patterns for most products and services will behave differently after Covid-19 relative to 2019. This means the power of computerized forecasts might go down for a while, as not much good data will be available.  Even human-intuition forecasts should be used with extra care, as intuition, like computerized forecasting, is slow to adapt to a change and to predict its behavior.  Using rational cause-and-effect to re-shape the intuition is the right thing to do.

Conclusions

All organizations have to try their best to predict future demand, but all managers have to internalize the basic common and expected uncertainty around their predictions and include the assessment of that uncertainty in their decision-making.

Once this recognition is in place, forecasts that yield a reasonable range of outcomes would become the best supportive information, leading to much improved decisions.  At a time when the common and expected uncertainty is considerably higher than prior to 2020, organizations that learn faster to use such range-forecasting will gain a decisive competitive edge.

Lack of Trust as a Major Obstacle in Business

By Eli Schragenheim and George Dekker

Business organizations and individuals clearly try to maximize their own interests, even at the expense of others. This creates an inherent lack of trust between any two different business entities.

Just to be on the safe side of clarity, let’s consider the following definitions:

Trust: assured reliance on the character, ability, strength, or truth of someone or something (Merriam-Webster)

feeling of confidence in someone that shows you believe they are honest, fair, and reliable (Macmillan dictionary)

Trust is a key concept in human relations, but does it have a role in business? Some elements of trust can be found in business relationships, like reliability and accountability. But does an organization have a ‘character’ that can be appreciated by another organization, or even by an individual? Is it common to attribute ‘honesty’ to a business organization?

Yet, trust is part of many business relationships. Actually, there are three categories of trust that are required by business organizations:

  1. Maintaining stable and effective ongoing business relationships with another organization. This is especially needed when the quality of the other organization’s performance matters. For instance, trusting a supplier to respond faster when necessary. Suppliers need to trust their clients to honor the payment terms. When two organizations partner for a mutual business objective, the two-way trust is an even stronger need.
  2. The required two-way trust between an organization and its employees. This covers shareholders trusting the CEO, top management trusting their subordinates, and lower and mid-level employees trusting top management. When that trust is not there, the performance of the organization suffers.
  3. The trust of an individual, a customer or a professional, towards a company from which they expect service, or which they expect to follow the terms of an engagement.

The need of governments to gain and retain trust of citizens is out of scope for this article.

What happens when there is no trust, but both sides would like to maintain the relationship?

The simple, yet limited, alternative is basing the relationship on a formal agreement, expressed as a contract, which generally includes inspection, reporting and other types of assurance of compliance, and details what happens when one of the parties deviates from the agreement.

There are two basic flaws in relying on contracts to ensure proper business relationships.

  1. When the gain from breaking the contract is larger than the realistic compensation then the contract cannot protect the other side. It is also quite possible that the realistic compensation, including the time and efforts to achieve it, is poor relative to the damage done.
  2. Contracts are limited to what is clearly expressed. As language is quite ambiguous, contracts tend to be long, cryptically written, and leave ample room for opportunism and conjecture. They contain only what the sides are clearly anticipating might happen, but reality generates its own surprises. The unavoidable result is that too much damage can be caused without clearly breaching the contract.

We can also point to another significant and generic problem when the business does not trust others:

Lack of trust impedes the ability to focus on key issues, as significant managerial attention is spent on monitoring and reacting to actions of others.

This realization is directly derived from the concept of ‘focus’, which is essential to TOC. Without being able to focus on exploitation and subordination to the constraint, the organization fails to materialize its inherent potential.

Before going deeper into the meaning of ‘trust’, let’s examine the somewhat similar concept of win-win.

Unlike trust, which is mostly emotional and intuitive, win-win is based on logical cause-and-effect. The essence is that when both parties win from a specific collaboration then there is a very good chance that the collaboration will work well. It seems clear that when the collaboration is based on win-lose there is high probability that the losing party would find a way to hurt the winning partner.

Win-win usually keeps the collaboration going, but it does not prevent deviations from its core spirit. Moreover, while win-win seems logical, it is not all that common in the business world. In too many cases there is no realization that win-win is absolutely necessary. The main obstacle for win-win is that managers are not used to analyzing a situation from the perspective of the other side. In other words, they are not aware of the wins and losses of the other party. Too many salespeople believe they do a good job, even though they do not really understand their client’s business case.

Another problem with win-win is that the initial conditions, upon which the win-win has been based, might go through a change. In such a case one party might realize that the collaboration could cause a loss, and that creates a temptation to violate the formal or informal agreement. The outbreak of Covid-19 certainly led to many cases where a seemingly win-win agreement came to an abrupt end or led to updated terms that are actually win-lose.

Trust is even more ambitious than win-win. It goes deeper into the area of broad rules of “what shouldn’t be done”. It also requires reliance on the other party to be fully capable and it stretches beyond the current relationship.

Trust is based on emotions that generate a belief in the capabilities and integrity of the other. It is natural for people to trust or distrust people based on their intuition. Marriage is a good example of trust being a necessary condition for a “good marriage.” The practical requirements from a collaboration based on trust are far more than just win-win, because trust is less dependent on conditions that might easily become invalidated by external sources or events. Of course, there are many cases where people breach the trust given to them, which usually causes a shock to the believers and makes some people less open to trusting other people again. It also makes it almost impossible to restore trust once it existed and vanished.

Generally speaking, many humans feel a basic need to trust some people, making their lives more focused by being less occupied with checking everyone and less fearful of being cheated.

But, trust between organizations is a very difficult concept.

Trust is an elusive concept that is difficult to measure. While humans use their emotions and intuition, the organizational setting prefers facts, measurements and analysis. Another difficulty in trusting an organization is that its management, the people that have made the trust possible, could be easily replaced at any time, or can be coerced to betray trust by forces within and outside an organization.

Still, if trusting others is a need for organizations, then the organization needs to relax its basic norms. The damage caused by lack of trust needs to be clearly realized.

Let’s first check the relationships between the organization and its employees.

When an individual chooses to be an employee, the common desire is to stay within one organization until retirement, hopefully moving up the ladder. At least this was a common wish before high-tech companies, and the search for truly great high-tech professionals, changed the culture. The rise of high-tech revealed more and more employees who don’t intend to stay too long in the organization they currently work for. In other words, they radiate that the organization should not trust them to be there when they are badly needed. This creates a problematic situation for high-tech, where the key employees are actually temporary workers and either side could decide at any time to end the working relationship.

The commitment of organizations to their employees in all areas of business has also weakened, even though in Europe the regulations restrict the freedom of management to easily lay off their employees. Covid-19 made it clear to many employees that they cannot trust top management.

In a display of mistrust, most organizations consistently measure the performance of their employees. Many have serious doubts regarding the effectiveness of these personal performance measurements, but the most devastating effect is that the vast majority of the employees look for any way, legitimate or not, to protect themselves from this kind of unfair judgment, including taking actions that violate the goal of the organization.

So, currently there is common mutual mistrust between employees and top management. But, in spite of that most organizations continue to function, even though far from exploiting the true business potential. The price the organization pays is stagnation, low motivation to excel and general refusal to take risks that might have personal implications.

As already mentioned, when there is a need for two organizations to collaborate certain rules have to be set and agreed upon. Monitoring the other party’s performance in reality is not only difficult; it consumes considerable managerial capacity and prevents managers from focusing on the more critical issues. As already noted even detailed contracts do not fully protect the fulfillment of the agreement.

So, there is a real need for organizations to trust other collaborating organizations. This means a ‘belief’, without any concrete proof, that the other side would truly follow the agreement, and even the ‘spirit of the agreement’. The rationale is that trust greatly simplifies the relationships and increases the prospects of truly valuable collaboration.

Agreements between organizations are achieved through people, who meet face-to-face to help in establishing the trust. The feelings of the people involved are a key factor. This is what the term ‘chemistry’ between business people and politicians means.

However, a negative branch (a potential undesirable effect) of trusting another is:

It is possible, even quite common, that organizations breach the trust placed in them, and by that cause considerable damage. The same is true between the organization and its employees.

How is it possible to trim the negative branch, taking into account the cost and difficulty of closely monitoring the performance and behavior of the other party?

A practical way is to trust the other party, but once a clear signal of misbehavior is identified – stop trusting. A breach of trust does not have to be a product of slow erosion; a single observable instance is sufficient to damage the trust for good.

This actually means that it is possible to build the image of a ‘trustworthy organization’. What makes it possible to trust, without frantically looking for such signals, is that ‘trustworthiness’ applies not just to a specific agreement; it is a generic concept that applies to the general conduct of an individual or an organization. When an organization spreads the notion of trustworthy behavior and capabilities, this can be monitored, as any deviation from trustworthy conduct would be publicized, and all the organizations or individuals that do business with that organization will get the message.

Social media makes it possible to build, or ruin, the reputation of trustworthiness. There is a need, though, to handle cases where spreading intentional fake facts might disrupt that reputation. So, every organization that chooses to build the reputation of being trustworthy has to react fast to false accusations to keep its reputation.

E-commerce made the need to radiate trustworthiness particularly clear. Take a company like Booking.com as an example. Consumers who purchase hotel reservations through Booking have to trust that when they appear at the hotel they really have a room. The relationships between the digital store and its suppliers can also greatly benefit from trust.

So, it is up to the strategy of every company to evaluate the merits, and the cost, of being committed to being trustworthy and using that image as part of its key marketing messages. It is all about recognizing the perceived value, in the eyes of clients, suppliers and other potential collaborators, of being trustworthy in the long term. What organizations need to consider, though, is that a true breach of trust would make the task of re-establishing trustworthiness very hard indeed. So, when being trustworthy provides a true competitive advantage, maybe even a decisive competitive edge, then management has to protect it very thoroughly.

The Role of Intuition in Managing Organizations

Is it possible to make good decisions based solely on quantitative analysis of available hard data?

Is it possible to make good decisions based solely on intuition?

The key question behind this article is:

Is it valuable, and possible, to combine intuition together with quantitative data in a structured decision-making process in order to make better decisions?

For the sake of this article, I use the following definition of intuition:

The ability to understand something immediately without the need for conscious reasoning

Intuition is basically subjective information about reality.  Intuitive decision makers take their decisions based on their intuitive information, including predictions about the reaction of people to specific actions and happenings.  Intuition is led by a variety of assumptions to form a comprehensive picture of the relevant reality.  For instance, the VP of Marketing might claim, based on intuition, that a significant part of the market is ready for a certain new technology.  While intuition is a source of information, its accuracy is highly questionable due to a lack of data and rational reasoning.

Decisions are based on emotions, which dictate the desired objective, but should also include logical analysis of potential consequences.  Intuition replaces logic when there is not enough data, or time, to support good-enough prediction of the outcomes of an action.   We frequently make decisions that use intuition as the only supporting information, together with emotions determining what we want to achieve.

From the perspective of managing organizations, with a clear goal, using intuition contradicts the desire for optimal results, because intuition is imprecise, exposed to personal biases and very slow to adjust to changes.  But, in the vast majority of the cases, the decision-makers do not have enough “objective data” to make an optimal decision.  So, there is a real need for using intuition to complement the missing information.

Any decision is about impacting the future, so it cannot be deterministic as it is impacted by inherent uncertainty.   The actual probabilities of the various uncertainties are usually unknown.  Thus, using intuition to assess the boundaries of the relevant uncertainty is a must.

So, while intuition is not based on rational analysis, it provides the opportunity to apply logical quantitative analysis of the probable ramifications when both the available hard data and the complementary intuitive information are used.

Assessing the uncertainty by using statistical models, which look for past data of similar situations, is usually more objective and preferable to intuition.  However, too often past data is either not available, or can be grossly misleading as it belongs to basically different situations.

People make intuitive decisions all the time.  Intuition is heavily based on life experience where the individual recognizes patterns of behaviors that look as if following certain rules.  These rules are based on cause-and-effect, but without going through full logical awareness.  Intuition is also affected by emotions and values, which sometimes distort the more rational intuition.

Taking into account the imprecise nature of intuition and its personal biases raises the question: what good can it bring to the realm of managing organizations?

The push for “optimal solutions” forces managers to go through logically based quantitative analysis.  However, when some relevant information is missing then the decisions become arbitrary.  This drive for optimal solutions actually pushes managers to simply ignore a lot of the uncertainty when no clear probabilities can be used.

A side comment:  The common use of cost-per-unit is also backed up by the drive for optimal solutions because the cost-per-unit allows quantitative analysis.  Mathematically the use of cost-per-unit ignores the fact that cost does not behave linearly.  The unavoidable result is that managers make decisions against their best intuition and judgment and follow a flawed analysis, which seems like being based on hard data, but present a distorted picture of reality.

The reality of any organization is represented by the term VUCA: volatility, uncertainty, complexity, and ambiguity.  From the perspective of the decision-maker within an organization, all the four elements can be described together as ‘uncertainty’ as it describes the situation where too much information is missing at the time when the decision has to be made.  In the vast majority of the VUCA situations the overall uncertainty is pretty common and known, so most outcomes are not surprising.  In other words, the VUCA in most organizations is made of common and expected uncertainty, causing any manager to rely on his/her intuition to fill the information required for making the final decision.  Eventually, the decision itself would also be based on intuition, but having the best picture of what is somewhat known, and what clearly is not known, is the best that can be sought for in such reality.

What is it that the human decision-maker considers as “reasonably known”? 

On top of facts that are given high confidence in being true, there are assessments, most of them intuitive, which consider a reasonable range that represents the level of uncertainty.  The range represents an assessment of the boundaries of what we know, or believe we know, and what we definitely don’t know.

An example:  A company considers the promotion of several products at a 20% price reduction during one weekend.  The VP of Sales claims that the sales of those products would be five times the average units sold on a weekend.

Does the factor of five times the average sales represent the full intuition of the VP of Sales? 

As intuition is imprecise in nature, the VP probably has a certain range of the impact of the reduced price in her mind, but she is expected to quote just one number.  It could well be that the true reasonable range, in the mind of the VP of Sales, is anything between 150% and 1,000% of the average sales, which actually means a very high level of uncertainty, or a much narrower range of just 400% to 500% of the average sales.

The point is that if the actual intuitive range, instead of an almost arbitrary single number, were shown to management, it could lead to a different decision.  With a reasonable possible outcome of 150% of the average sales, and assuming the cost of material is 50% of the price, the total throughput would actually go down!

Throughput calculations:  In the current state, sales = 100 and throughput = 100 – 50 = 50.  During the promotion weekend, revenue = 100*0.8*1.5 = 120 and material cost = 50*1.5 = 75, so throughput = 120 – 75 = 45.

So, if the wide range is put on the table of management, and the low side would produce a loss of throughput, then management might decide to avoid the promotion.  The narrower range supports going for the promotion even when its lower side is considered a valid potential outcome.
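The same throughput check, applied across the intuitive range, can be laid out as a short sketch.  The 80% promotion price, the 50% material cost, and the uplift factors are the ones quoted in the example above; the code itself is only an illustration.

```python
# Sketch of the promotion check from the example: baseline weekend sales = 100,
# material cost = 50% of the regular price, promotion price = 80% of regular.
BASELINE_SALES = 100.0
MATERIAL_RATIO = 0.5
PROMO_PRICE_RATIO = 0.8

def weekend_throughput(uplift):
    revenue = BASELINE_SALES * PROMO_PRICE_RATIO * uplift
    material = BASELINE_SALES * MATERIAL_RATIO * uplift
    return revenue - material

baseline_t = BASELINE_SALES - BASELINE_SALES * MATERIAL_RATIO   # = 50
for uplift in (1.5, 4.0, 5.0, 10.0):   # low end, narrow range, the quoted 5x, high end
    print(f"uplift x{uplift:>4}: throughput = {weekend_throughput(uplift):6.1f} "
          f"(baseline {baseline_t:.0f})")
# At x1.5 the throughput drops to 45 (below 50); at x4 and above it clearly rises.
```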

Comment:  In order to make a truly good decision for the above example, more information/intuition might be required.  I only wanted to demonstrate the advantage of the range relative to one number.

What is the meaning of being “reasonable” when evaluating a range? 

Intuition is ambiguous by nature.  Measuring the total impact of uncertainty (the whole VUCA) has to consider the practicality of the case and its reality.  Should we consider very rare cases?  It is a matter of judgment as the practical consequences of including rare cases could be intolerable.  When the potential damage of a not too-rare case might be disastrous then we might “reasonably” take into account a wider range.  But, when the potential damage can be tolerated, then a somewhat narrower range is more practical.  Being ‘reasonable’ is a judgment that managers have to make.

Using intuition to assess what is practically known to a certain degree is a major practical step.  The next step is recognizing that most decisions have a holistic impact on the organization, and thus the final quantitative analysis, combining hard data and intuitive information, might include several ‘local intuitions’.  This wider view lends itself to develop conservative and optimistic scenarios, which consider several ranges of different variables that impact the outcomes.  Such a decision-making process is described in the book ‘Throughput Economics’ (Schragenheim, Camp, and Surace).

Another critical question is: If we recognize the importance of intuition, can we systematically improve the intuition of the key people in the organization?

When the current intuition of a person is not exposed to meaningful feedback from reality, then signals that point to significant deviations are not received.  When a person’s intuition is expressed as one number, the feedback is almost useless.  If the VP of Sales assessed the factor on sales as 5 and it eventually was 4.2, 3.6, or 7, how should she treat the results?  When a range is offered, the first feedback is:  was the result within the range?  When many past assessments are analyzed, as sketched below, the personal bias of the person can be evaluated, and important learning from experience can lead to considerably improved intuition in the future.
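As a rough sketch of such learning from feedback, consider comparing past intuitive ranges with the actual results.  The sample data below is invented purely for illustration.

```python
# Illustrative sketch: evaluating intuitive range forecasts against actual
# results. The sample data is made up purely for illustration.
past = [  # (low, high, actual)
    (120, 180, 150), (200, 300, 310), (80, 140, 100),
    (150, 250, 260), (90, 160, 120), (300, 500, 280),
]

hits = sum(low <= actual <= high for low, high, actual in past)
above = sum(actual > high for low, high, actual in past)
below = sum(actual < low for low, high, actual in past)

print(f"within range: {hits}/{len(past)}")
print(f"above the range: {above}, below the range: {below}")
# A persistent excess of 'above' (or 'below') results reveals a personal bias
# that the forecaster can learn from and correct over time.
```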

Once we recognize the importance of intuition then we can appreciate how to enhance it effectively.

Between the Strategic Constraint and the Current Constraint

This article assumes the reader is familiar with the Theory of Constraints (TOC), especially the definition of a constraint and the five focusing steps, which belong to the basic knowledge of TOC.

The concept of the Strategic Constraint has been raised because it could well be an important strategic choice: targeting the desired future situation where a particular resource would become the active constraint.  Once this happens, the organization’s performance will depend on the exploitation of, and subordination to, that resource.

Actually, the strategic constraint does not need to be a resource.  There are two other options.  The first is to declare the market as the strategic constraint.  The second is a rare situation where a critical material is truly scarce, so it could constrain the performance of the organization when nothing else limits it.

First, let’s deal with an internal resource as the strategic constraint.

The characteristics of a strategic constraint are:

  1. Adding capacity is very expensive, and it is also limited either by low availability in the market or by having to purchase the capacity in big chunks. It certainly needs to be much more expensive than any other resource.
  2. There is an effective way to control the exploitation of the strategic constraint resource.
  3. The overall achievable results when the specific resource is the constraint are better than when the constraint is something else. Determining the wishful future state where a specific resource would become a constraint is a very challenging objective, as is going to be explained and demonstrated later in this article.

Some organizations have an obvious strategic constraint.  When we consider an expensive restaurant, it is easy to determine that the space where the guests sit and eat is naturally the strategic constraint.  Space is the most expensive resource, and enlarging the space is difficult or even impossible.  All the other resources, the kitchen, the chef, the staff, and the waiters, are easier to manage and control.  Eventually, they are also not as expensive as space.  Even if one is tempted to think of the chef as the constraint, being the core of the decisive competitive edge, space would still easily become the actual constraint.  The reputation of the chef could serve several restaurants, which emphasizes the point that it is the reputation, rather than the physical capacity of the chef, that is exploited.

Most organizations do not have one clear resource that is much more expensive to elevate than all the rest, even though one resource is naturally somewhat more expensive than the rest.  Is it enough to make it the chosen constraint?

In order to answer the question, we need to understand better the way from the current situation to the desired situation where the strategic constraint becomes the actual constraint.

What happens when the current constraint is not the chosen strategic constraint?

The five focusing steps lead management to focus on exploitation of, and subordination to, the constraint, which would bring a considerable increase to the bottom line.  The question is:  would these steps bring the organization closer to the strategy of having the chosen resource become the constraint?

Suppose the organization is constrained by the current demand.  A good exploitation scheme is to ensure reliable delivery performance.  When the organization succeeds in improving the flow and delivering faster – more demand can be generated.  As long as there is no internal constraint, any additional demand with positive throughput (T) would increase the net profit.   There is no need to choose which specific sales to increase, as there is no tradeoff between increasing the sales of product A or product B.  When these efforts continue, an internal constraint would eventually emerge.  So, we come back to the question of what to do when the current active internal constraint (new or old) is not the strategic one.

When the current constraint is a resource, but not the one we would like to have, the only way to come closer to making the chosen strategic constraint the actual one is to elevate the current constraint as soon as possible.  Exploiting and subordinating to the current “wrong” constraint doesn’t make sense unless the elevation takes a very long time.

So, how can we make the chosen strategic constraint the actual constraint?

Trying to exploit the strategic constraint, when it is not the current constraint, is not effective and could cause considerable damage.  Using T per constraint-time as a priority mechanism works contrary to the objective when the constraint-unit isn’t the current constraint.  To illustrate the point, assume two product categories:  A and B.  Category A takes significant capacity from MX, which is the strategic constraint, so its T per strategic-constraint-time is low.  Category B requires less of MX, but much more from another resource called MY.  In the current state, there is no active capacity constraint.  MX is more expensive than MY, which is the main reason why MX is considered the strategic constraint.  Which product would you like to expand?  Considering T/MX would lead to expanding category B sales as much as possible, but then MY might emerge as the constraint.  Expanding the sales of category A would make MX the constraint, which is what we want, but with low overall T.  Is that the product mix we have longed for?

The point is that high T/constraint-time means nice T for less utilization of the constraint.  However, that product might need much more utilization of a non-constraint, which means that significantly more sales might cause a non-constraint to penetrate into its protective capacity and become an interactive constraint.  When this happens a new question emerges: are there quick means to add capacity to that resource, and what is the cost of adding this capacity?

Generally speaking, whenever growth is considered it is necessary to carefully check the capacity of several critical resources, and not just the strategic constraint!

The P&Q is the best-known example demonstrating the concept of T per constraint-unit.  Here is the original case:

[Figure: the original P&Q diagram]

Just suppose, unlike the original case where the demand for P and Q is fixed, that it is possible to expand the demand to both products.  It is also possible to double the capacity of every resource for an extra $1,500 per week.

Starting with the Blue resource as the strategic constraint:

If our chosen constraint is the Blue resource, then product P yields T(P) per hour of Blue of (90-45)*4 = $180 (the Blue is able to produce 4 Ps per hour), while Q yields (100-40)*2 = $120 (Blue produces only two units of Q per hour), so the organization should produce only P!  The total weekly T, from selling 160 Ps, would be:  180*40 (weekly hours) = $7,200. We still have the same operating expenses (OE) of $6,000 per week, so the net weekly profit is $1,200.

By the way, when producing only P there are four resources with the exact same load.  If you need protective capacity this is problematic, as any additional capacity unit would increase OE by $1,500 and by that bring a loss!  Selling only P could also be risky for the long term.  Right now let’s stay with the theoretical situation where there are no fluctuations (Murphy).

What if we choose the Light-Blue resource as the strategic constraint?

First obvious recognition: there is a need to elevate the capacity of the Blue resource.  Actually, this might not be obvious to everybody.  When the focus is on the Light Blue, we might lose sight of the current constraint.

Anyway, considering a future state where the Light-Blue is the constraint, similar calculations show that Q brings more T per hour of the Light-Blue resource ($360 per Light-Blue hour, as 6 Qs are produced) than P.  Selling only Qs, with the Light-Blue as the constraint, would generate weekly T of 360*40 = $14,400.  But OE cannot stay at just $6,000.  There is a need for 3 units of the Blue resource, so we need to add two units of the Blue resource.  The OE would be:  6,000 + 2*1,500 = $9,000.  The net weekly profit would be $5,400, a better profit than with the Blue resource as the constraint, but with a higher level of OE.  This situation is also theoretical, as both the Blue and the Light-Blue are loaded to 100% of their available capacity.
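A back-of-the-envelope script, using only the numbers quoted above, reproduces both scenarios; the structure of the calculation is an illustration, not a prescribed method.

```python
# The two scenarios from the text: only-P with Blue as constraint,
# only-Q with Light-Blue as constraint (extra capacity unit = +$1,500/week OE).
WEEK_HOURS, BASE_OE, UNIT_COST = 40, 6000, 1500

def weekly_profit(price, material, units_per_hour, extra_capacity_units=0):
    """units_per_hour: output rate on the constraining resource."""
    t_per_hour = (price - material) * units_per_hour
    weekly_t = t_per_hour * WEEK_HOURS
    oe = BASE_OE + extra_capacity_units * UNIT_COST
    return weekly_t, oe, weekly_t - oe

# Only P, Blue is the constraint: T/hour = (90-45)*4 = 180
print("Only P, Blue constraint:   ", weekly_profit(90, 45, 4))    # (7200, 6000, 1200)

# Only Q, Light-Blue is the constraint: T/hour = (100-40)*6 = 360,
# but two extra Blue units are needed to keep up (3 Blues in total).
print("Only Q, Light-Blue constr.:", weekly_profit(100, 40, 6, extra_capacity_units=2))
# -> (14400, 9000, 5400): a better profit, achieved with a higher OE.
```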

A simple realization is:

It is not trivial to guess which resource as a strategic constraint would yield the best profit!

One needs to consider the capacity profile of other resources to ensure they have enough capacity to support the maximum T that the strategic constraint is able to generate.  Practically it means trying several scenarios where the limited capacity of several critical resources is calculated and solved.

In reality, there is a need to keep protective capacity on all non-constraints.  Actually, even the constraint itself should not be planned for 100% of its theoretical capacity, in order to keep the delivery performance intact.

Realizing the above lessons, and assuming there is no single resource that is clearly very difficult to elevate, why should we be bound by the capacity of a resource that can be easily elevated, when there is enough potential demand to grab?

Another issue is the wish for stability.  If the capacity constraint resource moves frequently, then the exploitation schemes, including the priorities between products and markets, might frequently change.  But the problem is that looking for stability might constrain the growth, or force the elevation of several resources whenever the demand expands.

This leads us to consider recognizing the market demand as the strategic constraint.

Subordinating to the market demand is actually a basic necessary requirement for the vast majority of businesses, even when an internal constraint prevents management from accepting more orders.   It is easy to imagine what might happen when an organization, focusing on exploiting the limited capacity of an internal constraint, fails to maintain reliable and high-quality delivery to its clients.  If the demand goes down, the internal constraint will stop being a constraint.

Keeping growth means constantly expanding market demand!  Keeping enough protective capacity for the critical resources means frequently increasing the capacity of one or more resources whenever buffer management, or the planned load, warns of penetration into the protective capacity.  Goldratt called this “progressive equilibrium”.  The difference between this and keeping a strategic capacity constraint resource is that there is no need to keep that particular resource as the weakest link, which means fewer overall elevations to serve the same growth in sales.

It seems to me that as long as there is no natural strategic constraint, treating the market demand as the constraint makes better sense.  The growth plan has to frequently check the capacity profile of several key resources, making sure all commitments to the market can be reliably delivered.

As a final comment:  Goldratt mentioned Management Attention as the ultimate constraint.  To my mind, this is true for the Flow of Initiatives, which looks at how to improve the current Flow of Value (products and services delivered to existing clients).  Management Attention constrains the pace of growth of organizations.  Once managers learn how to focus on the right issues, their attention capacity becomes the strategic constraint for growth.

The special role of common and expected uncertainty for management


After what we recently went through, the area of risk management naturally gets more attention.  The question is centered on what an organization can do to face very big risks, many of which come from outside the organization.

What about the known small risks managers face all the time?

I suggest distinguishing between two different types of uncertainty/risks, which call for distinct methods of handling.  One is what we usually refer to as risks, meaning possible occurrences that generate big damage.  This kind of uncertain event is viewed as something we strive to avoid, and when we are unable to we try to minimize the damage.

The other type, which I call ‘common and expected uncertainty’, is simply everything we cannot accurately predict, but we know well the reasonable range of possible results.   The various results are sometimes positive and sometimes negative, but not to the degree that one such event would destroy the organization.  The emphasis on ‘common and expected’ is that none of the possible actual outcomes should come as a surprise.  While the actual outcome frequently causes some damage, true significant damage could come only from the accumulation of many such uncertain outcomes, and this is usually rare. So, losing one bid might not be disastrous, but losing ten in a row might be.

This article claims that there is a basic difference in handling the two types of uncertainty.  While both impact decisions and both call for protective mechanisms, the objective of those mechanisms and the practice of managing them is quite different.

The economic impact of ‘common and expected uncertainty’ is by far underappreciated by most decision-makers.

Hence the value of improving the method for dealing with ‘common and expected uncertainty’ is much higher than expected.

A big risk is something to be prepared for, but the means have to be carefully evaluated.  For instance, dealing with the risk of earthquakes involves economic considerations.  It is definitely required to apply standards of safety in the construction of buildings, roads, and bridges, but the costs, and the impact on the lead time, have to be considered.  Another common protection against the damage of earthquakes is given by insurance, which again raises the issue of financial implications.

Some risks are very hard to prepare for.  What could the airlines have done to prepare for the Coronavirus, other than carrying enough cash reserves?  Airlines invest a lot in preventing fatal accidents and have procedures to deal with such events.  But there are risks for which preparations, or insurance, don’t really help.  Every time I go on a flight I’m aware that there is a certain risk for which I have no meaningful protection.  So, I accept the risk and just hope that it’ll never materialize.

Ignoring common and expected uncertainty is not reasonable!  However, it is practically ignored by too many organizations, which pretend they are able to predict the future accurately and base their planning on it.  This illogical behavior creates an edge for organizations with better capability to deal with common and expected uncertainty and generate very high business value based on reliable and fast service to customers.  That capability leads also to built-in flexibility that quickly adapts to the changing tastes of the market.  Isn’t this a basic capability for facing the new market behavior resulting from the Coronavirus crisis?  The burst of the epidemic changed the common and expected uncertainty, but by now we should be used to its new behavior, making it more “expected” than it was in March 2020.

Failing to deal with the common and expected uncertainty is especially noted in supply chain management.

For instance, a past CEO of a supermarket chain admitted to me that at any given time the rate of shortages on the shelf is, at least, 15%.  The damage of 15% shortages is definitely significant, as it means that many of the customers, coming to a supermarket store with a list of items to buy, don’t go home with the full list fulfilled.  When this is an ongoing situation then some customers might decide to move to another store.  As long as all the chains suffer from the same level of shortages this move of customers is not so damaging.  But, if a specific chain would significantly reduce the shortages it would steal customers from the other chains.

Given the common and expected uncertainty in both the demand and the supply, is there a way to manage the supply chain much more reliably?

To establish a superior way the basic flaw(s) in the current practice should be clearly verbalized.

The current flawed managerial use of forecasts points to an even deeper core problem.  Mathematically, a forecast describes a stochastic variable exposed to significant variability and thus should be stated with a minimum of two parameters: an average and a measure of the spread around that average.  The norm in forecasting is to use the forecast itself as the average, and the forecasting error as the measure of the average absolute deviation from it.  The forecasting error, like the forecast itself, is deduced from past results.

The use of just ONE-number forecasts in most management reports demonstrates how managers pretend to “know” what the future should be, ignoring the expected spread around the average. When the forecast is not achieved, it is treated as the fault of employees who failed, and this is many times a distortion of what truly happened.  Once the employees learn the lesson, they maneuver the forecast to secure their measured performance.  The organization loses from that behavior.

When the MRP algorithm in the ERP software takes the forecast and calculates the required materials, the organization doesn’t really get what might be needed!  Safety stock set without reference to the forecasting errors is too arbitrary to fix the situation.

A decision-maker viewing an uncertain situation needs to have two different estimations in order to make a reasonable decision:

  1. What could be the situation in a reasonable best-case scenario?
  2. What might be the reasonable worst-case situation?

The way to handle uncertainty is to forecast a reasonable range of what we try to predict.  Forecasting sales is the most common way to determine what Operations should be prepared to do.  Other cases where reasonable ranges should be used include considering the time to complete a project or just a manufacturing order.  The need for the range is to support the promise for completion, leaving also room for delays due to common and expected uncertainty.  The budget for a project, or a function within the organization, is an uncertain variable that should be handled by predicting a reasonable range.

The size of the range provides the basis for a buffer, the protection mechanism against common and expected uncertainty.  One side of the reasonable range expresses a minimum assessment, where an actual result lower than that number seems “unreasonable” based on what we know.  The other side expresses the maximum reasonable assessment.  If you choose to protect against the possibility of reality being close to the maximum assessment, like when you strive to prevent any shortage, then you need to tolerate a lot of stock, time, money, or whatever other entity constitutes the buffer.  In cases where the cost of the buffer is high, the financial consequences of losing sales due to shortages have to be weighed against it.

One truly critical variable in the supply chain is the forecasting horizon.  Cost considerations can push planners to use horizons that are too long, which increases the level of uncertainty in an exponential way.  When it comes to managing the supply chain, which is all about managing the common and expected uncertainty, the horizon of the demand forecast should reflect the reliable supply time and not go beyond it.

Buffer Management is an extremely important concept, developed by Dr. Goldratt, which is invaluable for managing common and expected uncertainty during the execution phase, and also helps to identify emerging situations where the buffers, based on the predicted reasonable ranges, fail to function properly.  The idea is simple:  as long as you are using a buffer against a stream of fluctuations, the state of the buffer tells you the real current level of urgency of the particular item, order, or even the state of cash.  Buffer management uses the well-known code of Green, Yellow, and Red to radiate what is more urgent, and this provides the best behavior model for dealing with common and expected uncertainty.
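As a minimal sketch, assuming the common TOC practice of dividing the buffer into three equal zones, the priority signal can be expressed as follows; the function name and the exact thresholds are illustrative, not a prescribed implementation.

```python
# Minimal sketch of buffer-management colors: the deeper the penetration into
# the buffer, the more urgent the item/order. Thresholds of one-third and
# two-thirds follow the common TOC practice of three equal zones.
def buffer_status(buffer_size, remaining):
    """buffer_size: the planned protection (stock units, days, cash...).
    remaining: how much of that protection is still left."""
    penetration = 1.0 - remaining / buffer_size
    if penetration < 1.0 / 3.0:
        return "GREEN"    # comfortable, no action needed
    if penetration < 2.0 / 3.0:
        return "YELLOW"   # watch, prepare to act
    return "RED"          # urgent: expedite / replenish now

# Example: a stock buffer of 300 units with only 60 units left on hand
print(buffer_status(300, 60))   # -> RED (80% of the buffer consumed)
```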

The big obstacle to becoming much more effective is recognizing the impact of both risks and common and expected uncertainty.  The difficulty in recognizing the obvious is: how can the boss know when a subordinate does a good job?  The inherent uncertainty is an easy explanation for any failure to meet targets.  The problem is: shutting our eyes does not help to improve the situation.  So, the need for managers to constantly measure the performance of every employee, and demand accountability for results over which the employees have only partial influence, is the ultimate cause of most managers ignoring common and expected uncertainty.

After the Crisis

We are within a global crisis, and this is the best time to think about how the world is going to look after the crisis is over.  Obviously, one also has to deal with how to survive the crisis, but it’d be a grave mistake to focus only on the obvious.

There is a lot of debate about how long the Coronavirus is going to last, and how long it will take the economy to overcome the crisis.

But, here is a critical question:

Are the future demand characteristics going to be the same, or even just similar, to the demand we had in 2019?

I have doubts, and if I’m right then all organizations need to exploit the time to re-think their strategy.

We are at the beginning of two different crises, and each of them will have an impact on future demand patterns in a way we have not experienced before.

‘Market demand’ is created by the tastes, habits, and preferences of consumers in the use of their free cash.  This core of worldwide consumer consumption is exactly what the Coronavirus has disrupted, causing a significant trauma that is still going on.  Too many people all over the world are losing their jobs and feeling anxiety about their very basic survival.  Add to this the feeling of loneliness and the concern for elderly parents and other relatives.  All these effects influence our buying decisions.  Most of what we are used to buying in ‘normal times’ is not for survival but for enjoyment.  Having more money than absolutely necessary raises the thought that "we have the means, so why not use it?"  The hard times we are going through, especially those of us who are stuck at home, have an impact on our preferences and will probably change our habits.  The ongoing pressure on cash, without knowing when we will be back to a steady income, changes the perception of what we need in order to have a "good life".

So, the realization of the danger to life, plus the lack of enough secure cash to cover our needs, leaves a stamp on how we are going to consume even after the economy recovers from the current crisis.

The combination of personal trauma and economic uncertainty will cause various long-term changes in the consumption of goods.  The obvious change for many is becoming more conservative, spending only on things that seem practical and necessary.  It also means spending less overall than the available cash, which raises the question of where to keep the savings.  The actual behavior of the stock market will determine whether it remains the common vehicle for savings in the future.  New ways to save money could become attractive if their stability, rather than their interest rate, can be demonstrated in a convincing way.

However, this ‘conservatism’ is only one possible aspect of the change in behavior.  The general "status" culture of impressing others with what we own might decline as well.  When the threat of early death becomes widespread, the meaning of life, and of enjoying it, goes through a change; a different set of priorities emerges.  Being stuck at home pushes many people to find rewarding ways to fill the time: reading books, watching a wider variety of shows on TV, listening to music.  All of these will have a more subtle kind of impact.

A very different reaction to the current hard times is to put more emphasis on having fun now, because who knows whether we will be alive tomorrow.  Every human being faces the chronic conflict between enjoying life now and preparing for a better future.  The common way people treat conflicts is by looking for an acceptable compromise.  Very few people succeed in making their goal in life the center, so that everything they do serves to achieve more and more of that goal.  Most people have to sacrifice something in the present, like money or time devoted to studies, in order to provide the necessary conditions for a good future.  That concept might be shaken by the thought that life could end at any given time, and by the even more general realization that you cannot rely on any prediction of what is going to happen, even in the very short term.  So, this kind of reaction might lead to preferring the near future over the distant future.

Would these contrasting reactions cancel each other out in their impact on global market behavior?

I don’t think so, because the two opposing ways of reacting to the current trauma affect different sets of products and services.  Together they reduce the demand for products and services that represent the compromise.  Currently every product category offers a wide variety of prices, from the cheapest to the most expensive.  It seems that the role of middle-priced products will shrink: demand for lower-priced products will grow, while some of the high-priced products will still find customers willing to pay.  Pricing is only one parameter to watch; certain product types that have no practical value, but a certain aesthetic value without being truly great art, might find no demand at all.

The crisis will have a significant influence on the use of technology.  On one hand it will accelerate the adoption of newer ways to communicate and work from a distance.  On the other hand I expect the use of newer technologies to be much more focused on real practical value, rather than on enthusiasm for new features and the attraction of gadgets that are not truly needed.

What should the organizations do now?

This is the most appropriate time to re-think strategy and tactics.  All organizations face the following three periods, each with its different external impact on the business.

  1. Within the Coronavirus crisis: how to survive without losing the future
  2. After the Coronavirus: experiencing economic recession
  3. Coming out of both crises into a changed economy and demand patterns

The current period of the Coronavirus crisis gives top management the opportunity to dedicate time to consider the possible changes in the market after the economy stabilizes.

TOC teaches us to make the best guess, based on cause-and-effect analysis, and then put signals in place to tell us whether we are wrong.  While we know that we don't really know, assuming "we know nothing" is also wrong, and more damaging.  So it is the duty of every high-level manager to make plausible assumptions, build a direction for taking the lead in several relevant market sectors, and monitor the warning signals.

There is another aspect of building a plan to win the competition: coming up with new products and services that would be highly valuable to sufficiently large market sectors and thereby gain high demand.  This requires building up capabilities and capacity buffers that provide the flexibility to adjust quickly to new trends in the market.  It means, among other necessary conditions, having multi-skilled and highly motivated employees to support quick changes in the product mix and/or in delivery to customers.  To achieve this, top management might need to come up with a new scheme for maintaining win-win relationships with the employees.

How should such re-thinking be carried out?  Certainly time has to be dedicated to joint re-thinking, most probably using communication tools like Zoom or Skype with all the key players.  I strongly believe that TOC consultants should be involved as well: first, because external (but clever) people are less attached to the current paradigms, so they can reveal and challenge hidden assumptions; secondly, because the thinking tools of TOC (including Goldratt's Six Questions) can be effectively used to analyze the customers' perception of value.  And a good prediction of the customers' perception of value is the key to any good strategy.

Conferences: Between Onsite and Virtual

An obvious and unavoidable result of the Coronavirus is that conferences are cancelled.  The obvious solutions are either delaying the onsite conference to better times or moving to virtual conferencing.  This is what happened to the TOCICO annual conference, which was planned for June 22-24 in Paris and has been cancelled.  Instead, TOCICO is going to organize a virtual conference based on the best available technology.  A virtual conference cannot fully replace an onsite one, certainly not when the conference is planned for a great city like Paris.  But it can offer other benefits.

The Coronavirus has only accelerated a basic need: finding a proper solution for large-scale conferences so people can join without having to travel and without the associated costs of running such big events.  This need is especially critical for international conferences.

Technology for good video communication has existed for a number of years, and it keeps getting better.  Yet there are certain deficiencies in distance video communication that are not technological.  The most serious negative outcome of replacing an onsite conference with a virtual one is the lack of face-to-face communication.  Several aspects make face-to-face contact more valuable than distance communication technologies.

  • The emotional value of meeting a person is much higher face-to-face. When all the senses are engaged, plus the sense of occasion of meeting a meaningful figure who lives far away, the overall experience is considerably stronger.  This emotional pleasure is best achieved in the mingling that takes place at breaks and dinners during an onsite conference.
  • Even when the communication is a purely rational exchange of knowledge, as is typical of presentations, the knowledge transfer seems more effective in live contact.
    • Even in such a rational, logic-based flow of ideas, there are emotional controls that establish trust or the lack of it. These control mechanisms seem to work more effectively in onsite conferences.  During a live presentation a human being judges not just the content of what is said, but also the reaction of the crowd, which cannot be fully replicated by any distance communication technology.
  • A listener's ability to concentrate is better when no external disturbances compete for the limited attention capacity. Listening at the computer or smartphone at home or in the office unavoidably exposes one to many distractions.

What new benefits can be generated by a virtual conference?

The most obvious benefit of a virtual conference is that its cost is far lower, both for the participants and for the organizers.  This is on top, of course, of the special impact of the current health crisis, where people from all over the world are stuck at home.

On the face of it a virtual conference can easily accommodate more presentations, giving participants a wider choice.  But the negative branch of offering too wide a choice is less overall impact and somewhat reduced quality of the conference as a whole.  In an onsite conference there is a need to offer a full program for every track throughout the whole day.  A virtual conference, by contrast, can spread the truly good and effective presentations over more days, say just three net hours of presentations per day.  By lowering the daily load of new knowledge, covering more material over a longer period, this limitation of virtual conferencing is vastly reduced.

A truly special benefit of a virtual conference is the option to listen to recordings of presentations that were missed, whether because of interesting parallel presentations or because of the need to rest, while still within the sense of occasion of the conference.  This option is one of several that are not possible at a live conference.

Higher quality of the presentations is the key advantage of virtual conferences!

First, the choice of speakers is wider, because speakers are not required to travel to the conference location. Secondly, the presentations can be pre-recorded, so the quality of picture and sound can be carefully monitored.  Pre-recording also allows making more than one take of a presentation and choosing the better one.

Pre-recording the presentations gives an opportunity to add English subtitles and, based on them, create subtitles in several other languages that participants can choose.  Coming from a non-English-speaking country, I can attest to the importance of English captions: it is easier for me to watch English movies on TV with English captions, even though I understand 95% of the spoken text, and captions also help overcome the difficulty of understanding different accents.  Creating the English captions provides the option to translate them into other languages, which opens the door for people with limited knowledge of English to participate.

While the presentations themselves are pre-recorded, it is possible to conduct live Q&A sessions.  This combination of recordings and live sessions has the potential to achieve superior overall delivery of the content while still capturing the audience's reaction to the ideas.  One idea we would like to examine is having two or three Q&A sessions within every presentation, making the content more approachable.

A considerable difficulty of an international virtual conference is dealing with different time zones.  When the audience is spread all over the world, many people face a practical difficulty in attending the live Q&A sessions.  A partial solution is asking the speakers to run two sets of Q&A sessions, 10-12 hours apart, to better fit the time zones of participants on the other side of the globe.

There are several software packages for managing virtual conferences.  On top of handling the presentations they provide chat rooms, making more intimate meetings with key presenters possible and compensating, to a certain degree, for the lost ability to approach them directly during a break.

My own conclusion is that while I would still like to attend a live conference in an attractive location, virtual conferences, with the best speakers, can provide huge value and are much more affordable.  Eventually this is the direction for the future.  It is not just that technology allows us to do something similar to an onsite conference from afar; we can capitalize on the virtues of the new technology to achieve new benefits.

Eleven years ago I initiated the delivery of webinars for TOCICO.  It was a new and very exciting experience for me.  TOCICO members now have more than 120 past webinars to watch at any time.  I personally look forward to experiencing my first virtual conference as a major vehicle for spreading the most up-to-date and powerful knowledge of the Theory of Constraints (TOC), at an affordable price, to whoever is curious enough to learn about it.  I sincerely believe this is an opportunity for readers of The Goal to learn what TOC can do for their organizations.

Fixing a mistake in our book. Our apologies!

And some thoughts on how easy it is to make mistakes and remain blind to them.

Writing a book is an ultra-challenging mission.  It is far more than finding the most effective way to express what you have in mind.  It certainly involves the especially difficult task of putting yourself in the reader's shoes, wondering whether the text is clear enough, and interesting enough to motivate the reader to continue.

There is much more.  You need to look for mistakes.

Having spent twelve years of my life in programming, I know what every code writer knows so well:  it is damn easy to produce bugs!  They are so common that no programmer in the world claims to have learned to write code that works right the first time.  A positive side of programming culture is that programmers are not blamed for bugs, as long as they are quick to fix them.

Writing a book is not so different from writing code, just much more challenging.  This means we are constantly introducing mistakes into our writing.  Reading back what we wrote isn't good enough: when you have the meaning of the sentence in your mind, you may fail to see that what appears on the page isn't what you meant.  Thus, publishing houses employ editors trained to read text with the purpose of discovering some of the writer's mistakes.  My assumption is that many mistakes still exist in any book.

We refer to the following book, which has been published and printed after all the careful editing that we have done:

[Book cover image]

So, there we were, preparing for the webinar "Building a Bridge of Understanding," when we suddenly discovered that the following table from our book does not correctly represent the specific case.  Here is the table, which looks quite normal:

Table 7‑3:  Alternative Accounting Treatments

[Image: original (mistaken) Table 7-3]

The problem is that the real cost of Materials and Freight is not $300,000, which is based on the 10,000 sold units with a materials cost per unit of $30.  While 10,000 units were sold, the company actually produced 14,286 units.  So the real cost of materials purchased and used in production is $428,580 (14,286 units × $30 per unit).  The correct table should be:

Table 7‑3:  Alternative Accounting Treatments

[Image: corrected Table 7-3]

It is particularly difficult to fix finished books.  We delivered the corrected table to the publisher, but what was printed contains the mistaken table.  Here is a link to a PDF file from which you can print the fixed table, cut the margins, and paste it over the flawed table in the book: https://www.dropbox.com/s/699x01nqouibaeq/Table%207.3%20corrections%20with%20Border%20Cut%20Guide.pdf?dl=0

Let me emphasize: we found a mistake only in the table, not in the text itself.  As we mentioned, we still might have made other mistakes here and there.

So, readers of the book, be aware of the mistake and use the fixed table.  And please let us know if you find any other mistake!

For those who do not have the book but find the topic interesting, have a look at the page on my blog describing the need for the book and the key questions it answers.  Here is the link: https://elischragenheim.com/toc-economics-top-management-decision-support/

IT as a universal bottleneck

Eli Goldratt defined management attention as the ultimate constraint.  However, sometimes other constraints emerge.  Let’s analyze the emergence of IT as a universal bottleneck for improvement efforts of many organizations and corporations.

The ultra-fast development of IT, including the cloud, Big Data, artificial intelligence (AI), Industry 4.0, mobile applications, e-commerce, cyber protection, and routine software, creates a problematic situation in almost all medium and large organizations:  the IT department becomes a bottleneck.  The simple meaning of a bottleneck is being incapable of performing all the required work.

Dealing with a real bottleneck is a critical strategic problem, because it forces top management to decide what good business to give up.  This is not a problem with the technology itself; it is the difficult managerial duty of deciding which new technology (actually, any promising change) to adopt and at what pace.  That said, the technology people do not make it easy for top management to make the right decisions; the technology perspective is quite different from the goal of the organization.

While the products and services of the organization might have nothing to do with IT, the flow of materials, products, and services, and the means for more effective marketing and sales, depend more and more on IT.  Most new technologies are heavily based on IT, and adapting to the incoming flow of new IT capabilities becomes more and more difficult.  As the use of IT grows bigger and more complex, controlling the whole IT activity might become chaotic.  New managerial capabilities are needed to keep the huge suite of features, coming from various sources, under control.

Just to illustrate the problem:  banking systems generate a huge number of IT requests, from better mobile applications to improved international and multi-currency transactions, which have to conform to various local regulations. The appearance of crypto-currencies adds challenges that require new IT tools. Naturally banks have endless artificial intelligence initiatives and, of course, they need to constantly improve their cyber protection.  Other business sectors, like retail, also face new threats that require new IT tools to provide new services to their customers.  The whole manufacturing sector is facing the new digital control and connectivity of Industry 4.0.

The “Flow of Value” is defined as the current way the organization delivers value to customers. This is definitely a critical flow for every organization.  However, every organization also has to maintain another critical flow:  the “Flow of Initiatives” to improve the Flow of Value in the future.  The Flow of Initiatives consists of all the efforts invested in expanding market demand, developing new products/services, or finding better ways for the Flow of Value to adapt to trends in the market, like faster response time.  Each of these flows has its own constraint.

The means for gaining more market share, or even preserving current demand, rely heavily on advanced IT features.  Being blocked by the bottleneck causes considerable damage, like confusion about which initiatives are in the pipeline and what priority each should get in competing for resources.  The confusion and the heightened internal competition for IT resources cause multi-tasking and frequent priority changes, which waste significant capacity of the IT people, turning the situation into a vicious cycle.  Considering that truly good IT people are scarce, and the competition for them is fierce, IT being a bottleneck is not going to be resolved in the near future.

IT as a bottleneck creates only minor problems for the current Flow of Value, because that flow continues to function using the older IT tools and usually has adequate capacity of key resources.  However, the perceived shortcomings of the IT system are obvious to the employees and also to key clients and suppliers.  This creates significant tension within the organization, and between the organization and its clients and suppliers.  Rumors and facts about competitors jumping on the new technology wagon add to the ongoing pressure, and the department that has to respond to all the requests is IT.  The big problem of IT is that while it needs to plan and implement new tools and upgrades to existing tools, it still needs to support all the regular services.  The conflict between the urgent and the important is especially noticeable in IT departments.

The really acute problem with IT as a bottleneck to growth is that the situation looks temporary, but it is not.  At any given time it looks as if the current stream of new IT-based technologies will take a certain time, maybe even a few years, to implement, and then the pressure on the Flow of Initiatives will go down.  This is simply not true.  The whole area of Artificial Intelligence (AI) is just starting to penetrate into actual use in large organizations, and this flow will definitely continue and even accelerate.  In the same way, the need to improve IT security keeps growing.  So the state of having to deal with a stream of significant new IT-oriented technologies will continue.

Two parallel efforts are required to get IT, which faces many routine but urgent requests while also struggling with a stream of seemingly relevant new technologies, under control.

  1. Significantly improving the flow of work-in-process (WIP) within IT.
    1. The first key insight is to keep the number of work items in progress to a necessary minimum and choke the release of additional work items according to the pace of completed ones (see the sketch after this list). In itself this reduces multi-tasking and improves the focus and concentration of the people on the current request or project task they are working on.
    2. Implementing one scheme of priorities that warns when a certain project, or a specific mission, is stuck. The TOC methodology for priorities within execution is called Buffer Management.
    3. Identifying the internal constraint resource within IT, then searching for the best scheme to exploit its capacity and subordinating everything else to that scheme.
  2. Carefully choosing and prioritizing all IT requests. In practice this means creating the organizational structure and processes to evaluate the potential value of each request. This is an ultra-sensitive process, and it has to be led by an executive rather than by an IT professional manager, because the value should be assessed beyond the IT perspective.  Such a check should also include the rough capacity requirements of each request and the timeline to complete it. This creates a process for defining the most effective portfolio of projects and missions. TOC has contributed the ‘Six Questions on the Value of New Technology’ and other thinking tools to support the evaluation of value.  This process dictates the required stream of requests to the IT department, making sure every request is truly needed in the specific time frame, and by that it helps define the capabilities and capacity levels the IT department should maintain.
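To make the first effort more concrete, here is a minimal sketch of choking the release of work according to the pace of completion.  The WIP limit, the queue mechanics, and the request names are hypothetical illustrations, not a description of any specific IT tool.

```python
# A minimal sketch of choked work release: new items enter work only when
# earlier items are completed, keeping the number of items in progress low.

from collections import deque

class ChokedRelease:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit   # maximum items allowed in progress at once
        self.backlog = deque()       # approved requests waiting for release
        self.in_progress = set()     # items currently being worked on

    def add_request(self, name):
        """New approved requests wait in the backlog instead of flooding the floor."""
        self.backlog.append(name)
        self.release_if_possible()

    def complete(self, name):
        """Completing an item is what pulls the next one in."""
        self.in_progress.discard(name)
        self.release_if_possible()

    def release_if_possible(self):
        while self.backlog and len(self.in_progress) < self.wip_limit:
            item = self.backlog.popleft()
            self.in_progress.add(item)
            print(f"Released to work: {item}")

# Hypothetical usage: only 3 items may be active at once.
flow = ChokedRelease(wip_limit=3)
for req in ["mobile app fix", "AI pilot", "cyber patch", "ERP upgrade", "new report"]:
    flow.add_request(req)
flow.complete("cyber patch")   # finishing one item releases the next from the backlog
```

The point of the sketch is only the rule: an item is released to work only when another item is completed, which by itself curbs multi-tasking and frequent priority changes.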

The key argument is that improving the flow of WIP in the IT department might solve the problem for some time, but without managerial priorities on the incoming requests the problem will return in the near future.

The next step is to implement the new IT tools into the Flow of Value to generate the added value. The implementation may require a transition period; getting used to new tools takes time. During that period mistakes are made, and all managers have to watch for signals of a problem and deal with it as soon as possible.  During the transition the need for management attention across the hierarchy is at its peak.

Should IT be the strategic constraint of the Flow of Initiatives?

The natural constraint on growth is management attention, because adding more managers creates considerably more communication obstacles and diminishes productivity.  Elevating management attention requires, first of all, learning to focus on the issues that matter most.  This is exactly what is also required to exploit the limited capacity of IT resources.  It means introducing a process of sorting all the ideas, including partially developed initiatives, according to systematic assessments of their value, the capacity required from the relevant critical resources, and their expected completion time.  Such a process is the ultimate solution both for management attention and for IT.  Another conclusion is that the required capacities of all other resources, IT included, considering also the absolutely necessary protective capacity, have to conform to the ability of top management to lead the most effective portfolio of improvement initiatives.
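Here is a minimal sketch of how such sorting might be supported numerically, assuming each initiative comes with an intuition-based value range and a rough estimate of the capacity it requires from the constrained resource (here, IT).  All initiative names and numbers are hypothetical.

```python
# Hypothetical initiatives: (name, conservative value, optimistic value,
# rough capacity required from the constrained resource, e.g. IT person-weeks).
initiatives = [
    ("e-commerce upgrade", 300_000, 900_000, 40),
    ("AI demand sensing", 100_000, 1_200_000, 60),
    ("cyber hardening", 150_000, 250_000, 15),
    ("new BI dashboards", 50_000, 120_000, 25),
]

def value_per_capacity(item, pessimistic=True):
    """Value generated per unit of constrained capacity, under the chosen assessment."""
    name, low, high, capacity = item
    value = low if pessimistic else high
    return value / capacity

# Rank by conservative value generated per unit of constrained capacity,
# while still showing the optimistic end of the range.
ranked = sorted(initiatives, key=value_per_capacity, reverse=True)
for name, low, high, cap in ranked:
    print(f"{name}: {low / cap:,.0f} to {high / cap:,.0f} value per capacity unit")
```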

Decision Support Systems (DSS)

By Avraham Mordoch and Eli Schragenheim

How can the fast development of new technology effectively help organizations achieve more of their goal?  The vast majority of new technologies have a considerable impact on IT (Information Technology) departments, which puts huge pressure on the workload of IT people in most organizations that need to keep themselves frequently updated, causing headaches for top management and a lack of focus, instead of helping them move the organization forward.

But the new power of computerization, including methodologies like Big Data, Artificial Intelligence (AI), and the Internet of Things (IoT), could also be used cleverly to improve the effectiveness of management, leading the organization to secure and successful growth of its activity.

On one hand the new technologies provide more data, which is also more accurate than ever before.  That data, when properly analyzed with a focus on what is truly meaningful, can serve managers in assessing the current state of the business and its weaknesses, and can lead to new ideas on how to improve the bottom line.

This article offers new ways to use recent technology to give management beneficial support in evaluating new ideas or preparing for expected changes coming from elsewhere.  We have chosen to focus on manufacturing organizations, which face both the threat and the potential benefits of digitization of the manufacturing shop-floor, considered the fourth industrial revolution and thus given the title Industry 4.0.  The threat is being pushed into enormous expenses without gaining any business benefit. The capabilities of the new technology could support a dramatic improvement in the way tactical and strategic moves are evaluated.  The point, though, is that in order to realize the benefits, some management paradigms have to be challenged and replaced with common-sense paradigms that utilize the new capabilities to support decisions.

The current software systems supporting manufacturing organizations can be classified into four types:

  1. MES (Manufacturing Execution Systems). This type of system focuses on the very short term and aims to provide operators and production management with the most up-to-date state of the flow from raw materials all the way to finished goods inventory.  It allows handling priorities, fixing problems quickly, and achieving efficient utilization of the equipment.  MES collects data and organizes it in a way that can be easily viewed by middle operational managers. SCADA systems, for example, are a subset of MES systems.
  2. ERP (Enterprise Resource Planning). We include in this class also the CRM (Customer Relationship Management) systems. This class consists of a suite of integrated applications that the manufacturing organization uses to plan operations and to collect, store, manage, and interpret data from different business activities. What integrates the various parts of the ERP class of systems is one database of all the key transactions, financial and material, that have been recorded or are planned for the short to medium term.  The main function of this type of system is planning the basic operations required to deliver the firm orders, while also recording the transactions and instituting order and systemization of all the data related to the main processes in the organization. ERP and CRM systems are mainly data systems with some crude planning functionality. When information is defined as the answer to the question asked (Goldratt, The Haystack Syndrome), ERP and CRM supply answers to the most frequent and simple questions, like what needs to be done in order to deliver a customer order. SAP, Oracle, and Dynamics 365 are just a few examples of ERP systems.
  3. BI (Business Intelligence). The objective of BI programs is to display high-level information for top management, providing a picture of the current situation and possibly pointing to certain observed trends.  The power of BI technologies is the ability to collect relevant data elements from various databases. Internal data is mixed with data collected from the Internet and used to create graphs and charts so that management is aware of what's going on within the organization and how it compares to what's going on in the market and with the competitors.  The Key Performance Indicators (KPIs) are supported by BI, making them clear to top management.  This gives management the background to evaluate 'what to change?', but it does not provide the tools for 'what to change to?', and definitely not for 'how to cause the change?'.  In other words, BI supports decisions by pointing to areas that require attention, but it does not support specific decisions.
  4. Decision Support Systems (DSS). While the title of DSS was coined back in the 1980s, the true capability of actually supporting decisions has been achieved only recently. Every management decision considers a change to the current state, and every significant decision is also exposed to considerable uncertainty.  So the key capability of a DSS is to direct managers to various alternatives to the considered decision and present the possible ramifications of these changes.  We can divide this level of DSS into two parts:
    1. Supporting routine decisions normally made by experts, so that less experienced people can make them, or even letting the computer make the decision.  These kinds of programs are based on new Artificial Intelligence technologies and create a variety of expert systems that support such decisions.
    2. Supporting more significant tactical and strategic decisions by providing the decision makers with a holistic analysis of the potential financial and other ramifications of the decisions.  These systems support organizational decision-making, including decisions that consider unstructured or semi-structured potential opportunities exposed to significant uncertainty. The assessment should consider the short term as well as the long term. A DSS must "understand" the cause-and-effect relationships between the different functions of the organization. These systems should allow direct interaction between the human decision-maker(s) and the computerized algorithm.  The objective, given the amount of uncertainty and the lack of fully precise information, is to present the decision makers with a full picture of what MIGHT happen, for good and for worse, as a result of the decision(s).

The above four classifications are not clear-cut and there are systems that cross the lines between them.

On top of that there is often an interaction, even a loop, between the above types of systems. The ERP consumes data accumulated by the MES and accordingly creates work orders that feed the MES.  The ERP database plays a major role for the BI system in showing the current state, and the ERP and BI data are inputs into the DSS programs.

Interestingly enough, the effort a manufacturing organization needs to invest is especially significant when implementing an MES system, since there is a need to overcome cultural objections, including the antagonism found in organizations with no established culture of reporting what has been done. Once this initial infrastructure is laid down, it is somewhat easier to implement and properly use the ERP system, and far easier to continue climbing the ladder and implement expert systems and the higher level of DSS. So the effort decreases going up the ladder through the four types of systems, while the benefits of the implementations increase, and there are very significant benefits when top management, the C-level managers, use a DSS for solving the crucial dilemmas they face.

We have to take into account that manufacturing organizations are both complex and exposed to significant uncertainty. Still, C-level managers have to make tough decisions like:

  • Should the company offer packages of its existing products for a reduced price?
  • Should the company accept small orders for customized products for a not-too-high markup?
  • Should the company expand the product-mix with additional product family (or families)?
  • Should the company save considerable cost by shrinking its resources, as well as stopping the production of products with very low demand?
  • Should the company launch a massive advertising campaign?
  • Should the company participate in a big tender, quoting a moderate price, knowing that winning might affect the good delivery performance of the regular orders?
  • Should the company invest in opening a new export market?
  • Should the company invest in a new production-line when the market seems to go up, but some people believe this upward trend is going to stop?

These decisions lie outside the comfort zone of the decision makers, because of the obvious risk and the limited past experience with such situations.  The decisions are risky not just from the perspective of the organization, but also from the perspective of the personal risk to the decision maker, who ties himself/herself to the success or failure of the initiative.

These risks push toward conservative decisions whenever the needed decision is beyond the known comfort zone. Lack of proper support for a holistic analysis blocks many organizations from achieving their true potential.

There are two big obstacles for any DSS tackling the above decisions and many others. The first is expressing the intuition of the people closest to the relevant area so it plays its role in the analysis.  Even when the situation is beyond the decision-maker's comfort zone, that intuition is still valuable, as the people involved always know something, which is more than nothing.  While the lack of good, precise, relevant data is a constant issue, analyzing what MIGHT happen is a valid possibility, and it yields a focused picture of the actual risk.

The second obstacle is being able to evaluate the proposed decision when it is added to everything else the organization is doing or is committed to do.  This requires a deep understanding of the rules behind the flow of materials, products, orders, and financial transactions, including the various dependencies in Operations and in Sales.  It leads to massive calculations, checking the state of capacity, materials, and cash.

The DSS needs to "simulate" the top-level dilemmas (like the examples above) and come up with predicted financial results.  It has to make it easy to run a variety of ‘what-if’ scenarios and compare the results.  In the end, it has to display the predicted results for at least two different scenarios:  one based on reasonably conservative assessments, the other on reasonably optimistic ones.  The range of end results means the actual result should fall somewhere between the extremes of the range.  Decisions should never be made automatically by the system; it needs the constant intervention of the decision maker, who looks for better alternatives and uses human judgment to make the final decision.
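A minimal sketch of such a what-if comparison, assuming a single proposed move evaluated under a conservative and an optimistic demand assessment; the product figures are hypothetical and the calculation is deliberately simplified.

```python
# Each proposal is evaluated twice, once with conservative assessments and once
# with optimistic ones, so the decision maker sees a range rather than one number.

def scenario_profit(units_sold, price, truly_variable_cost, added_operating_expense):
    throughput = units_sold * (price - truly_variable_cost)  # revenue minus truly variable cost
    return throughput - added_operating_expense               # net impact of the move on profit

proposal = {
    "price": 95.0,
    "truly_variable_cost": 40.0,
    "added_operating_expense": 150_000.0,  # e.g. extra shifts or advertising for the move
    "units_conservative": 3_000,
    "units_optimistic": 6_500,
}

low = scenario_profit(proposal["units_conservative"], proposal["price"],
                      proposal["truly_variable_cost"], proposal["added_operating_expense"])
high = scenario_profit(proposal["units_optimistic"], proposal["price"],
                       proposal["truly_variable_cost"], proposal["added_operating_expense"])
print(f"Predicted impact on profit: {low:,.0f} to {high:,.0f}")
```

The output is a range of end results, which is exactly what the decision maker needs in order to judge the risk, not a single misleading number.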

Generally speaking, there are two main ways to accomplish effective support for decisions:

  1. Being able to carry out a mass of calculations, based on good cause-and-effect rules, which describe the materials and capacity requirements for every product sold as well as the impact on revenues and cost (a rough sketch follows this list). This way is described in detail in the book Throughput Economics, written by Eli Schragenheim, Henry Camp, and Rocco Surace.
  2. Using a powerful computerized simulator that closely follows the flow rules and records revenues, truly variable expenses, and the cost of capacity as an integral part of the simulation. Uncertainty has to be built into the simulator's critical parameters to provide the possible range of results.
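As a rough illustration of the first way, the sketch below performs the kind of mass calculation involved: computing the capacity load that a proposed product mix places on a few critical resources and flagging overloads.  The products, resources, and times are hypothetical and far simpler than the full method described in Throughput Economics.

```python
# Hypothetical data: minutes required per unit on each critical resource,
# proposed weekly sales quantities, and weekly available minutes per resource.
minutes_per_unit = {
    "Product A": {"assembly": 6, "packaging": 2},
    "Product B": {"assembly": 9, "packaging": 3},
}
proposed_mix = {"Product A": 1_200, "Product B": 700}
available_minutes = {"assembly": 12_000, "packaging": 4_500}

# Accumulate the total load per resource for the whole proposed mix.
load = {resource: 0 for resource in available_minutes}
for product, qty in proposed_mix.items():
    for resource, minutes in minutes_per_unit[product].items():
        load[resource] += minutes * qty

for resource, total in load.items():
    utilization = total / available_minutes[resource]
    flag = "OVERLOAD" if utilization > 1 else "ok"
    print(f"{resource}: {utilization:.0%} of capacity ({flag})")
```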

The mass-calculations way is more transparent to the decision makers, as the calculations are all straightforward and the added value of the computer is its ability to carry out such a mass of calculations.  This means the decision makers fully understand the assumptions at the core of the calculations.

Computerized simulation better fits complex situations, whether on the production floor or with complicated dependencies within sales.  For instance, simulating different flow rules, like batch sizing and different prioritization schemes, is much more effective than mere calculations that have to rely on assumptions about the effectiveness of the flow rules.  On the other hand, the user has to inquire deeply to validate that the internal parameters of such a simulation are in line with reality in order to trust the results.

Both ways have to start with a good representation of the current state as a reference against which all changes are compared.  For a simulation this means creating a ‘digital twin’ that reproduces the current performance of the organization.

A computerized system that produces a reliable reference, or digital twin, is able to introduce a variety of changes and compare the results to the reference, and also depicts the potential impact of uncertainty and inaccurate data, deserves to be called a decision support system (DSS). Such a system will significantly reduce the risk in making top-level decisions and will also reduce the procrastination usually found whenever ‘hard decisions’ are evaluated.  This would do much to put the company ahead of the competition.

The first few true DSSs to appear in the market will enjoy a "Blue Ocean" strategy, compared to the "Red Ocean" that is typically the current situation for systems supporting manufacturing organizations.