The Role of Intuition in Managing Organizations

Is it possible to make good decisions based solely on quantitative analysis of available hard data?

Is it possible to make good decisions based solely on intuition?

The key question behind this article is:

Is it valuable, and possible, to combine intuition together with quantitative data in a structured decision-making process in order to make better decisions?

For the sake of this article, I use the following definition of intuition:

The ability to understand something immediately without the need for conscious reasoning

Intuition is basically subjective information about reality.  Intuitive decision-makers base their decisions on that information, including predictions of how people will react to specific actions and events.  Intuition combines a variety of assumptions into a comprehensive picture of the relevant reality.  For instance, the VP of Marketing might claim, based on intuition, that a significant part of the market is ready for a certain new technology.  While intuition is a source of information, its accuracy is highly questionable due to the lack of data and rational reasoning behind it.

Decisions are based on emotions, which dictate the desired objective, but should also include logical analysis of potential consequences.  Intuition replaces logic when there is not enough data, or time, to support good-enough prediction of the outcomes of an action.   We frequently make decisions that use intuition as the only supporting information, together with emotions determining what we want to achieve.

From the perspective of managing organizations, with a clear goal, using intuition contradicts the desire for optimal results, because intuition is imprecise, exposed to personal biases and very slow to adjust to changes.  But, in the vast majority of the cases, the decision-makers do not have enough “objective data” to make an optimal decision.  So, there is a real need for using intuition to complement the missing information.

Any decision is about impacting the future, so it cannot be deterministic as it is impacted by inherent uncertainty.   The actual probabilities of the various uncertainties are usually unknown.  Thus, using intuition to assess the boundaries of the relevant uncertainty is a must.

So, while intuition itself is not based on rational analysis, combining the available hard data with the complementing intuitive information opens the opportunity for a logical, quantitative analysis of the probable ramifications.

Assessing the uncertainty using statistical models, which look for past data from similar situations, is usually more objective and preferable to intuition.  However, too often past data is either not available, or can be grossly misleading because it belongs to fundamentally different situations.

People make intuitive decisions all the time.  Intuition is heavily based on life experience where the individual recognizes patterns of behaviors that look as if following certain rules.  These rules are based on cause-and-effect, but without going through full logical awareness.  Intuition is also affected by emotions and values, which sometimes distort the more rational intuition.

Given the imprecise nature of intuition and its personal biases, the question arises: what good can it bring to the realm of managing organizations?

The push for “optimal solutions” forces managers to go through logically based quantitative analysis.  However, when some relevant information is missing then the decisions become arbitrary.  This drive for optimal solutions actually pushes managers to simply ignore a lot of the uncertainty when no clear probabilities can be used.

A side comment:  The common use of cost-per-unit is also backed by the drive for optimal solutions, because cost-per-unit allows quantitative analysis.  Mathematically, the use of cost-per-unit ignores the fact that cost does not behave linearly.  The unavoidable result is that managers make decisions against their best intuition and judgment and follow a flawed analysis, which seems to be based on hard data but presents a distorted picture of reality.

The reality of any organization is represented by the term VUCA: volatility, uncertainty, complexity, and ambiguity.  From the perspective of the decision-maker within an organization, all four elements can be described together as 'uncertainty', as they describe a situation where too much information is missing at the time the decision has to be made.  In the vast majority of VUCA situations the overall uncertainty is pretty common and known, so most outcomes are not surprising.  In other words, the VUCA in most organizations is made of common and expected uncertainty, causing any manager to rely on his/her intuition to fill in the information required for making the final decision.  Eventually, the decision itself would also be based on intuition, but having the best picture of what is somewhat known, and what clearly is not known, is the best that can be sought in such a reality.

What is it that the human decision-maker considers as “reasonably known”? 

On top of facts that are given high confidence in being true, there are assessments, most of them intuitive, which consider a reasonable range that represents the level of uncertainty.  The range represents an assessment of the boundaries of what we know, or believe we know, and what we definitely don’t know.

An example:  A company considers the promotion of several products at a 20% price reduction during one weekend.  The VP of Sales claims that the sales of those products would be five times the average units sold on a weekend.

Does the factor of five times the average sales represent the full intuition of the VP of Sales? 

As intuition is imprecise by nature, the VP probably has a certain range for the impact of the reduced price in her mind, but she is expected to quote just one number.  It could well be that the true reasonable range, in the mind of the VP of Sales, is anything between 150% and 1,000% of the average sales, which actually means a very high level of uncertainty, or a much narrower range of just 400% to 500% of the average sales.

The point is that if the actual intuitive range, instead of an almost arbitrary single number, were shown to management, it might lead to a different decision.  With a reasonable possible outcome of 150% of the average sales, and assuming the cost of materials is 50% of the price, the total throughput would go down!

Throughput calculations:  The current state in sales = 100 and throughput = 100 – 50 = 50.  During the sales we get sales = 100*0.8*1.5 = 120, throughput = 100*0.8*1.5 – 50*1.5 = 45.
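The same arithmetic can be run over the whole intuitive range rather than a single point.  Here is a minimal sketch in Python (the function name and the chosen multipliers are illustrative; current weekend sales are normalized to 100 with a 50% material cost, as in the calculation above):

    def promotion_throughput(sales_multiplier, base_sales=100.0,
                             material_ratio=0.5, discount=0.2):
        """Throughput (revenue minus material cost) of a promotion weekend."""
        revenue = base_sales * (1 - discount) * sales_multiplier
        material_cost = base_sales * material_ratio * sales_multiplier
        return revenue - material_cost

    baseline = promotion_throughput(1.0, discount=0.0)   # no promotion: 50.0
    for m in (1.5, 4.0, 5.0, 10.0):                      # points across the VP's intuitive range
        t = promotion_throughput(m)
        print(f"{m:4.1f}x average sales -> throughput {t:6.1f} "
              f"({'worse' if t < baseline else 'better'} than the baseline of {baseline})")

The low end of the wide range (1.5 times the average sales) is the only point shown that reduces throughput, which is exactly the information the single number of "five times" hides.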

So, if the wide range is put on the table of management, and the low side would produce a loss of throughput, then management might decide to avoid the promotion.  The narrower range supports going ahead with the promotion even when its lower side is considered a valid potential outcome.

Comment:  In order to make a truly good decision for the above example, more information/intuition might be required.  I only wanted to demonstrate the advantage of the range relative to one number.

What is the meaning of being “reasonable” when evaluating a range? 

Intuition is ambiguous by nature.  Measuring the total impact of uncertainty (the whole VUCA) has to consider the practicality of the case and its reality.  Should we consider very rare cases?  It is a matter of judgment as the practical consequences of including rare cases could be intolerable.  When the potential damage of a not too-rare case might be disastrous then we might “reasonably” take into account a wider range.  But, when the potential damage can be tolerated, then a somewhat narrower range is more practical.  Being ‘reasonable’ is a judgment that managers have to make.

Using intuition to assess what is practically known, to a certain degree, is a major practical step.  The next step is recognizing that most decisions have a holistic impact on the organization, and thus the final quantitative analysis, combining hard data and intuitive information, might include several 'local intuitions'.  This wider view lends itself to developing conservative and optimistic scenarios, which consider several ranges of different variables that impact the outcomes.  Such a decision-making process is described in the book 'Throughput Economics' (Schragenheim, Camp, and Surace).

Another critical question is: If we recognize the importance of intuition, can we systematically improve the intuition of the key people in the organization?

When the current intuition of a person is not exposed to meaningful feedback from reality, then the signals that point to significant deviations are not received.  When a person's intuition is expressed as one number, the feedback is almost useless.  If the VP of Sales assessed the factor on sales as 5 and it eventually turned out to be 4.2, 3.6, or 7, how should she treat the results?  When a range is offered, the first feedback question is: was the result within the range?  When many past assessments are analyzed, the personal bias of that person can be evaluated, and important learning from experience can lead to considerably improved intuition in the future.
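A minimal sketch of how such feedback could be recorded and analyzed (the numbers are made up; the point is only that ranges, unlike single numbers, give a clear hit-or-miss signal and expose personal bias across many assessments):

    # Each record: (low estimate, high estimate, actual outcome) of a past assessment.
    assessments = [(4.0, 5.0, 4.2), (4.0, 5.0, 3.6), (1.5, 10.0, 7.0), (2.0, 3.0, 3.4)]

    hits = sum(1 for low, high, actual in assessments if low <= actual <= high)
    # Bias: how far, on average, the actual falls from the midpoint of the quoted range.
    bias = sum(actual - (low + high) / 2 for low, high, actual in assessments) / len(assessments)

    print(f"{hits}/{len(assessments)} actual results fell inside the quoted range")
    print(f"average deviation from the range midpoint: {bias:+.2f}")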

Once we recognize the importance of intuition then we can appreciate how to enhance it effectively.


Between the Strategic Constraint and the Current Constraint

This article assumes the reader is familiar with the Theory of Constraints (TOC), especially the definition of a constraint and the five focusing steps, which belong to the basic body of TOC knowledge.

The concept of the Strategic Constraint has been raised because it could well be an important strategic choice: targeting a desired future situation where a particular resource would become the active constraint.  Once this happens, the organization's performance will depend on the exploitation of, and subordination to, that resource.

Actually, strategic constraint does not need to be a resource.  There are two other options.  The first is to declare the market as the strategic constraint.  The second is a rare situation where a critical material is truly scarce, so it could constrain the performance of the organization when nothing else limits it.

First, let’s deal with an internal resource as the strategic constraint.

The characteristics of a strategic constraint are:

  1. Adding capacity is very expensive, and it is also limited either by low availability in the market or by having to purchase the capacity in big chunks. It certainly needs to be much more expensive than any other resource.
  2. There is an effective way to control the exploitation of the strategic constraint resource.
  3. The overall achievable results when the specific resource is the constraint are better than when the constraint is something else. Determining the desired future state where a specific resource would become the constraint is a very challenging objective, as will be explained and demonstrated later in this article.

Some organizations have an obvious strategic constraint.  When we consider an expensive restaurant it is easy to determine that the space where the guests sit and eat is naturally the strategic constraint.  Space is the most expensive resource, and enlarging the space is difficult or even impossible.  All the other resources, the kitchen, the chef, the staff, and the waiters, are easier to manage and control.  Eventually, they are also not as expensive as the space.  Even if one is tempted to think of the chef as the constraint, being the core of the decisive competitive edge, space would still easily become the actual constraint.  The reputation of the chef could serve several restaurants, which emphasizes the point that it is the reputation, rather than the physical capacity of the chef, that is exploited.

Most organizations do not have one clear resource that is much more expensive to elevate than all the rest, even though one resource is naturally somewhat more expensive than the rest.  Is it enough to make it the chosen constraint?

In order to answer the question, we need to understand better the way from the current situation to the desired situation where the strategic constraint becomes the actual constraint.

What happens when the current constraint is not the chosen strategic constraint?

The five focusing steps lead management to focus on exploitation of, and subordination to, the constraint, which would bring a considerable increase to the bottom line.  The question is: would these steps bring the organization closer to the strategy of having the chosen resource become the constraint?

Suppose the organization is constrained by the current demand.  A good exploitation scheme is to ensure reliable delivery performance.  When the organization succeeds in improving the flow and delivering faster, more demand can be generated.  As long as there is no internal constraint, any additional demand with positive T would increase the net profit.  There is no need to choose which specific sales to increase, as there is no tradeoff between increasing the sales of product A or product B.  When these efforts continue, an internal constraint would eventually emerge.  So, we come back to the question: what should we do when the current active internal constraint (new or old) is not the strategic one?

When the current constraint is a resource, but not the one we would like to have, the only way to come closer to making the chosen strategic constraint active is to elevate the current constraint as soon as possible.  Exploiting and subordinating to the current "wrong" constraint doesn't make sense unless the elevation takes a very long time.

So, how can we make the chosen strategic constraint the actual constraint?

Trying to exploit the strategic constraint, when it is not the current constraint, is not effective and could cause considerable damage.  Using T/constraint-time as a priority mechanism works contrary to the objective when the strategic-constraint unit isn't the current constraint.  To illustrate the point, assume two product categories: A and B.  Category A takes significant capacity from MX, which is the strategic constraint, so its T/strategic-constraint-time is low.  Category B requires less of MX, but much more of another resource called MY.  In the current state, there is no active capacity constraint.  MX is more expensive than MY, which is the main reason why MX is considered the strategic constraint.  Which product would you like to expand?  Considering T/MX would lead to expanding category B sales as much as possible, but then MY might emerge as the constraint.  Expanding the sales of category A would make MX the constraint, which is what we want, but with low overall T.  Is this the product mix we have longed for?

The point is that high T/constraint-time means nice T for less utilization of the constraint.  However, that product might need much more utilization of a non-constraint, which means that significantly more sales might cause a non-constraint to penetrate into its protective capacity and become an interactive constraint.  When this happens a new question emerges: are there quick means to add capacity to that resource, and what are the costs of adding this capacity?

Generally speaking whenever growth is considered it is necessary to carefully check the capacity of several critical resources and not just the strategic constraint!
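Here is a minimal sketch of such a capacity check (all the load figures, capacities, and the proposed mix are hypothetical, reusing the MX and MY resources from the example above; the point is only that a proposed mix must be tested against every critical resource, not just the strategic constraint):

    # Minutes per unit on each critical resource, for the two product categories.
    load_per_unit = {
        "A": {"MX": 10, "MY": 2},   # category A: heavy on MX, the strategic constraint
        "B": {"MX": 2,  "MY": 12},  # category B: high T per MX minute, but heavy on MY
    }
    available_minutes = {"MX": 2400, "MY": 2400}   # weekly capacity of each resource

    def utilization(mix):
        """mix maps category -> planned units; returns the load ratio of every resource."""
        return {
            r: sum(load_per_unit[p][r] * units for p, units in mix.items()) / available_minutes[r]
            for r in available_minutes
        }

    # Expanding mainly category B (the best T per MX minute) overloads MY
    # while MX, the chosen strategic constraint, is still far from its limit:
    print(utilization({"A": 50, "B": 210}))   # MX ~ 38%, MY ~ 109%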

The P&Q is the best-known example demonstrating the concept of T/constraint-unit.  Here is the original case:

[Figure: the original P&Q diagram]

Just suppose, unlike the original case where the demand for P and Q is fixed, that it is possible to expand the demand to both products.  It is also possible to double the capacity of every resource for an extra $1,500 per week.

Starting with the Blue resource as the strategic constraint:

If our chosen constraint is the Blue resource, where the P product yields T(P)/hour-of-Blue is (90-45)*4 (the Blue is able to produce 4 Ps per hour) = $180, while Q yields (100-40)*2 (Blue allows only two units of Q per hour) = $120, then the organization should produce only P!  The total weekly T, of selling 160 Ps, would be:  180*40 (weekly hours) = $7,200. We still have the same operating expenses (OE) of $6,000 per week, so the net weekly profit is $1,200.

By the way, when producing only P, four resources carry the exact same load.  If you need protective capacity this is problematic, as any additional unit of capacity would increase OE by $1,500 and by that bring a loss!  Selling only P could also be risky for the long term.  Right now let's stay with the theoretical situation that there are no fluctuations (Murphy).

What if we choose the Light-Blue resource as the strategic constraint?

First obvious recognition: there is a need to elevate the capacity of the Blue resource.  Actually, this might not be obvious to everybody.  When the focus is on the Light Blue, we might lose sight of the current constraint.

Anyway, considering a future state where the Light Blue is the constraint, then similar calculations would show that Q brings more T per hour of the Light Blue resource ($360 per Light Blue hour as 6 Qs are produced) than P.  Selling only Qs, with the Light-Blue as the constraint, would generate weekly T of 360*40 = $14,400.  But, OE cannot be just $6,000.  There is a need for 3 units of the Blue resource, so we need to add two units of the Blue resource.  The OE would be:  6,000 + 2*1,500 = $9,000.  The net weekly profit would be $5,400, bringing better profit than with the Blue resource as the constraint, but with a higher level of OE.  This situation is also theoretical as both the Blue and the Light-Blue are loaded to 100% of their available capacity.
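The two scenarios can be laid out as one simple calculation (a sketch only; the per-unit T, units per hour, and capacity costs are the figures used in the text above):

    def weekly_profit(t_per_unit, units_per_hour, hours=40, base_oe=6000,
                      extra_capacity_units=0, cost_per_capacity_unit=1500):
        """Weekly net profit: throughput generated by the constraint minus OE."""
        throughput = t_per_unit * units_per_hour * hours
        oe = base_oe + extra_capacity_units * cost_per_capacity_unit
        return throughput - oe

    # Blue as the strategic constraint: sell only P (T = $45 per unit, 4 Ps per Blue hour).
    print(weekly_profit(45, 4))                          # 7,200 - 6,000 = 1,200

    # Light-Blue as the strategic constraint: sell only Q (T = $60 per unit, 6 Qs per hour),
    # which requires two extra units of the Blue resource.
    print(weekly_profit(60, 6, extra_capacity_units=2))  # 14,400 - 9,000 = 5,400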

A simple realization is:

It is not trivial to guess which resource as a strategic constraint would yield the best profit!

One needs to consider the capacity profile of other resources to ensure they have enough capacity to support the maximum T that the strategic constraint is able to generate.  Practically it means trying several scenarios where the limited capacity of several critical resources is calculated and solved.

In reality, there is a need to keep protective capacity on all non-constraints.  Actually, even the constraint itself should not be planned for 100% of its theoretical capacity, in order to keep the delivery performance intact.

Realizing the above lessons, and assuming there is no single resource that is clearly very difficult to elevate, why should we be bound by the capacity of a resource that can be easily elevated, when there is enough potential demand to grab?

Another issue is the wish for stability.  If the capacity constraint resource moves frequently, then the exploitation schemes, including the priorities between products and markets, might change frequently as well.  But the problem is that looking for stability might constrain the growth, or force the elevation of several resources whenever an expansion of the demand occurs.

This leads us to consider recognizing the market demand as the strategic constraint.

Subordinating to the market demand is actually a basic necessary requirement for the vast majority of businesses, even when an internal constraint prevents management from accepting more orders.  It is easy to imagine what might happen when an organization, focusing on exploiting the limited capacity of an internal constraint, fails to maintain reliable and high-quality delivery to its clients.  If the demand goes down, the internal constraint stops being a constraint.

Keeping growth means constantly expanding market demand!  Keeping enough protective capacity for the critical resources means frequently increasing the capacity of one or more resources whenever buffer management, or the planned load, warns of penetration into the protective capacity.  Goldratt called this "progressive equilibrium".  The difference between this and keeping a strategic capacity constraint resource is that there is no need to keep that particular resource as the weakest link, which means fewer elevations overall to serve the same growth in sales.

It seems to me that as long as there is no natural strategic constraint, treating the market demand as the constraint makes better sense.  The growth plan has to frequently check the capacity profile of several key resources, making sure all commitments to the market can be reliably delivered.

As a final comment:  Goldratt mentioned Management Attention as the ultimate constraint.  To my mind, this is true for the Flow of Initiatives, which looks at how to improve the current Flow of Value (products and services delivered to existing clients).  Management Attention constrains the pace of growth of organizations.  Once managers learn how to focus on the right issues, their attention capacity becomes the strategic constraint for growth.

The special role of common and expected uncertainty for management


After what we recently went through, the area of risk management gets naturally more attention.  The question is centered on what an organization can do to face very big risks; many of them come from outside the organization.

What about the known small risks managers face all the time?

I suggest distinguishing between two different types of uncertainty/risks, which call for distinct methods of handling.  One is what we usually refer to as risks, meaning possible occurrences that generate big damage.  This kind of uncertain event is viewed as something we strive to avoid, and when we are unable to avoid it, we try to minimize the damage.

The other type, which I call ‘common and expected uncertainty’, is simply everything we cannot accurately predict, but we know well the reasonable range of possible results.   The various results are sometimes positive and sometimes negative, but not to the degree that one such event would destroy the organization.  The emphasis on ‘common and expected’ is that none of the possible actual outcomes should come as a surprise.  While the actual outcome frequently causes some damage, true significant damage could come only from the accumulation of many such uncertain outcomes, and this is usually rare. So, losing one bid might not be disastrous, but losing ten in a row might be.

This article claims that there is a basic difference in handling the two types of uncertainty.  While both impact decisions and both call for protective mechanisms, the objective of those mechanisms and the practice of managing them is quite different.

The economic impact of ‘common and expected uncertainty’ is by far underappreciated by most decision-makers.

Hence the value of improving the method for dealing with ‘common and expected uncertainty’ is much higher than expected.

A big risk is something to be prepared for, but the means have to be carefully evaluated.  For instance, dealing with the risk of earthquakes involves economic considerations.  It is definitely required to apply standards of safety in the construction of buildings, roads, and bridges, but the costs, and the impact on the lead time, have to be considered.  Another common protection against the damage of earthquakes is given by insurance, which again raises the issue of financial implications.

Some risks are very hard to prepare for.  What could the airlines have done to prepare for the Coronavirus other than carrying enough cash reserves?  Airlines invest a lot in preventing fatal accidents and have procedures to deal with such events.  But there are risks for which preparations, or insurance, don't really help.  Every time I go on a flight I'm aware that there is a certain risk against which I have no meaningful protection.  So, I accept the risk and just hope that it'll never occur.

Ignoring common and expected uncertainty is not reasonable!  However, it is practically ignored by too many organizations, which pretend they are able to predict the future accurately and base their planning on it.  This illogical behavior creates an edge for organizations with better capability to deal with common and expected uncertainty and generate very high business value based on reliable and fast service to customers.  That capability leads also to built-in flexibility that quickly adapts to the changing tastes of the market.  Isn’t this a basic capability for facing the new market behavior resulting from the Coronavirus crisis?  The burst of the epidemic changed the common and expected uncertainty, but by now we should be used to its new behavior, making it more “expected” than it was in March 2020.

Failing to deal with the common and expected uncertainty is especially noted in supply chain management.

For instance, a past CEO of a supermarket chain admitted to me that at any given time the rate of shortages on the shelf is, at least, 15%.  The damage of 15% shortages is definitely significant, as it means that many of the customers, coming to a supermarket store with a list of items to buy, don’t go home with the full list fulfilled.  When this is an ongoing situation then some customers might decide to move to another store.  As long as all the chains suffer from the same level of shortages this move of customers is not so damaging.  But, if a specific chain would significantly reduce the shortages it would steal customers from the other chains.

Given the common and expected uncertainty in both the demand and the supply is there a better way to manage the supply chain in a much more reliable way?

To establish a superior way the basic flaw(s) in the current practice should be clearly verbalized.

The current flawed managerial use of forecasts points to an even deeper core problem.  Mathematically, a forecast is a stochastic function exposed to significant variability and thus should be described by a minimum of two parameters: an average and a measure of the spread around that average.  The norm in forecasting is to treat the forecast itself as the average and to measure a forecasting error, which represents the average absolute deviation from that average.  The forecasting error, like the forecast itself, is deduced from past results.

The use of just one-number forecasts in most management reports demonstrates how managers pretend to "know" what the future should be, ignoring the expected spread around the average.  When the forecast is not achieved, it is declared the fault of employees who failed, and this is many times a distortion of what truly happened.  Once the employees learn the lesson, they know how to maneuver the forecast to secure their performance.  The organization loses from that behavior.

When the MRP algorithm in the ERP software takes the forecast and calculates the required materials the organization doesn’t really get what might be needed!  Safety stock without reference to the forecasting errors is too arbitrary to fix the situation.
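A minimal sketch of expressing the forecast as a range rather than as one number (the figures are made up, and the width multiplier applied to the forecasting error is a managerial judgment, not a statistical law):

    forecast = 1000          # units: the single number usually passed to MRP
    forecast_error = 0.25    # average absolute deviation, as a fraction of the forecast

    # A reasonable range built from the forecast and its historical error.
    low = forecast * (1 - 2 * forecast_error)
    high = forecast * (1 + 2 * forecast_error)

    print(f"plan materials and capacity for the range {low:.0f} .. {high:.0f} units, "
          f"not for exactly {forecast} units")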

A decision-maker viewing an uncertain situation needs to have two different estimations in order to make a reasonable decision:

  1. What could be the situation in a reasonable best-case scenario?
  2. What might be the reasonable worst-case situation?

The way to handle uncertainty is to forecast a reasonable range of what we try to predict.  Forecasting sales is the most common way to determine what Operations should be prepared to do.  Other cases where reasonable ranges should be used include considering the time to complete a project or just a manufacturing order.  The need for the range is to support the promise for completion, leaving also room for delays due to common and expected uncertainty.  The budget for a project, or a function within the organization, is an uncertain variable that should be handled by predicting a reasonable range.

The size of the range provides the option of using a buffer, the protection mechanism against common and expected uncertainty.  One side of the reasonable range expresses a minimum assessment, where an actual result below that number seems "unreasonable" based on what we know.  The other side expresses the maximum reasonable assessment.  If you choose to protect against the possibility of reality being close to the maximum assessment, as when you strive to prevent any shortage, then you need to tolerate too much stock, time, money, or whatever other entity constitutes the buffer.  In cases where the cost of the buffer is high, the financial consequences of losing sales due to shortages have to be weighed against it.

One truly critical variable in the supply chain is the forecasting horizon.  Cost considerations can push planners to use too long horizons, which increases the level of uncertainty in an exponential way.  When it comes to managing the supply chain, which is all about managing the common and expected uncertainty, the horizon of the demand forecast should reflect the reliable supply time and not beyond that value.

Buffer Management is an unbelievably important concept, developed by Dr. Goldratt, which is invaluable for managing common and expected uncertainty during the execution phase, and it also helps to identify emerging situations where the buffers, based on the predicted reasonable ranges, fail to function properly.  The idea is simple: as long as you are using a buffer against a stream of fluctuations, the state of the buffer tells you the real current level of urgency of the particular item, order, or even the state of cash.  Buffer Management uses the well-known code of Green, Yellow, and Red to radiate what is more urgent, and this provides the best behavior model for dealing with common and expected uncertainty.
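A minimal sketch of the Green/Yellow/Red signal (the one-third thresholds are the common convention and should be treated as an assumption, not a fixed rule):

    def buffer_status(buffer_consumed_fraction):
        """Map buffer consumption (0.0 .. 1.0) to the Buffer Management color code."""
        if buffer_consumed_fraction < 1 / 3:
            return "GREEN"      # comfortable, no special attention needed
        if buffer_consumed_fraction < 2 / 3:
            return "YELLOW"     # watch, prepare to expedite
        return "RED"            # urgent, expedite now

    # Example: an order with a time buffer of 9 days, of which 7 days have already elapsed.
    print(buffer_status(7 / 9))   # RED -> this order gets priority over Yellow and Green ones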

The big obstacle to becoming much more effective is recognizing the impact of both risks and common and expected uncertainty.  The difficulty in recognizing the obvious is: how can the boss know when a subordinate does a good job?  The inherent uncertainty is an easy explanation for any failure to meet targets.  The problem is: shutting our eyes does not help to improve the situation.  So, the need of managers to constantly measure the performance of every employee, and to demand accountability for results over which the employees have only partial influence, is the ultimate cause for most managers to ignore common and expected uncertainty.

After the Crisis

We are within a global crisis, and this is the best time to think about how the world is going to look after the crisis is over.  Obviously we also have to deal with how to survive the crisis, but it'd be a grave mistake to focus only on the obvious.

There is a lot of debate about how long the Coronavirus is going to last, and how long it'd take the economy to overcome the crisis.

But, here is a critical question:

Are the future demand characteristics going to be the same, or even just similar, to the demand we had in 2019?

I have doubts, and if I’m right then all organizations need to exploit the time to re-think their strategy.

We are at the beginning of two different crises, and each of them will have an impact on future demand patterns in a way we haven't felt before.

The 'market demand' is created by the tastes, habits, and preferences of consumers regarding the use of their free cash.  This core of world consumer consumption is exactly what is disrupted by the Coronavirus, causing a significant trauma that is still going on.  Too many people all over the world are losing their jobs and experiencing anxiety about their very basic survival.  Add to this the feeling of loneliness and the concern for elderly parents and other relatives.  All these effects influence our buying decisions.  The vast majority of what we are used to buying in 'normal times' is not in order to survive, but in order to enjoy.  Having more money than absolutely necessary raises the realization that "we have the means, so why not use it?"  The hard times we go through, especially those of us who are stuck at home, have an impact on our preferences and will probably change our habits.  The ongoing pressure on cash, without knowing when we'll be back to a steady income, changes the perception of what we need in order to have a "good life".

So, the realization of the danger to life, plus the lack of enough secure cash to provide for our needs, leave a stamp on how we are going to consume even when the economy recovers from the current crisis.

The combination of personal trauma and economic uncertainty would cause various long-term changes in consumption of goods.  The obvious change for many is becoming more conservative by spending only on things that seem practical and necessary.  It also means spending overall less money than the available cash, which raises the issue of the means of saving money.  The actual behavior of the stock market will determine whether the stock market is going to be the common way to savings in the future.  New ways to save money could become attractive if their stability, rather than their interest rate, can be shown in a convincing way.

However, this 'conservatism' is only one possible aspect of the change in behavior.  The general "status" culture of impressing others with what we have might decline as well.  When the threat of early death becomes widespread, the meaning of life, and of what it means to enjoy it, goes through a change.  A different set of priorities will emerge.  Being stuck at home pushes many people to find rewarding ways to fill up the time: reading books, watching a wider variety of shows on TV, and listening to music.  All of these will have a more subtle kind of impact.

A very different reaction to the current hard times is to put more emphasis on having fun now, because who knows whether we'll live tomorrow.  Every human being faces the chronic conflict between enjoying life now and preparing for a better future.  The common way people treat conflicts is by looking for an acceptable compromise.  Very few people succeed in making their goal for living the center, so that everything they do achieves more and more of that goal.  Most people have to sacrifice something in the present, like money or time devoted to studies or preparations, to provide the necessary conditions for a good future.  That concept might be hit by the thought that life could end at any given time, and by the even more general realization that you cannot rely on any prediction of what's going to happen even in the very short time frame.  So, this kind of behavior might lead to preferring the near future over the far-away future.

Would these contrasting reactions cancel each other regarding their impact on the global market behavior?

I don't think so, because the two opposing ways of reacting to the current trauma would impact different sets of products and services.  Both ways combine to reduce the demand for products and services that represent the compromise.  Currently every category of products offers a wide variety of prices, from the cheapest to the most expensive.  It seems that the role of the middle-price products would go down.  The demand for the lower-price products would grow, while some of the high-price products would still find customers willing to pay.  Pricing is only one parameter to watch; certain product types that have no practical value, but have a certain aesthetic value without being truly great art, might not find demand at all.

The crisis would have a significant influence on the use of technology.  On one hand it will accelerate the use of newer ways to communicate and work from a distance.  On the other hand, I expect the use of the newer technologies to be much more focused on real practical value, rather than on enthusiasm for new features and the attraction of gadgets that are not truly needed.

What should the organizations do now?

These are the most appropriate times to re-think strategy and tactics.  All organizations face the following three periods of time, each with its different external impact on the business.

  1. Within the Coronavirus crisis: how to survive without losing the future
  2. After the Coronavirus: experiencing economic recession
  3. Coming out of both crises into a changed economy and demand patterns

The current period of the Coronavirus crisis gives top management the opportunity to dedicate time to consider the possible changes in the market after the economy would be stabilized. 

TOC teaches us to make the best guess, based on cause-and-effect analysis, and then put in place the signals that tell us whether we are wrong.  This means that while we know that we don't really know, assuming "we know nothing" is also wrong, and more damaging.  So, it is the duty of every high-level manager to make plausible assumptions, build a direction for taking the lead in several relevant market sectors, and monitor the warning signals.

Another aspect of building a plan to win the competition is coming up with new products and services that would be highly valuable to big-enough market sectors and by this gain high demand.  There is an absolute need to build up the capabilities and capacity buffers that provide the flexibility to quickly adjust to the new trends in the market.  This means, among other necessary conditions, having multi-skilled and highly motivated employees to support any quick changes in the product mix and/or delivery to customers.  To achieve it, top management might need to come up with a new scheme for maintaining win-win relationships with the employees.

How should such re-thinking be carried out?  Certainly time has to be dedicated to mutual re-thinking, most probably using communication packages like Zoom or Skype with all the key players.  I strongly believe that TOC consultants should be used as well.  First of all, because external (but clever) people are less attached to the current paradigms, so they can reveal and challenge hidden assumptions.  Secondly, because the thinking tools of TOC (including Goldratt's Six Questions) can be effectively used to analyze the customers' perception of value.  And a good prediction of the customers' perception of value is the key to any good strategy.

Conferences: Between Onsite and Virtual

An obvious and unavoidable result of the Coronavirus is that conferences are cancelled.  The obvious solutions are either delaying the onsite conference to better times or moving to virtual conferencing.  This is what happened to the TOCICO annual conference, which was planned for June 22-24 in Paris and has been cancelled.  Instead, TOCICO is going to organize a virtual conference based on the best available technology.  A virtual conference is not able to fully replace an onsite one, certainly not when the conference is planned for a great city like Paris.  But it can offer other benefits.

The Coronavirus only accelerated a basic need to find a proper solution for large scale conferences so people can join without having to travel and without the associated costs of running such big events.  This need is especially critical for international conferences.

Technology for good video communication has existed for a number of years and keeps getting better.  Yet, there are certain deficiencies in distance video communication that are not technological.  The most serious negative outcome of replacing an onsite conference with a virtual one is the lack of face-to-face communication.  There are several aspects that make face-to-face more valuable than using distance communication technologies.

  • The emotional value of meeting a person is much higher in face-to-face meeting. When all the senses are activated, plus the sense of an occasion of meeting a meaningful figure who lives far away, the overall experience is considerably stronger.  This emotional pleasure is best achieved in the mingling that takes place at breaks and dinners during an onsite conference.
  • Even when the communication is based on just rational exchange of knowledge, which is typical to presentations, it seems that the quality of the knowledge transfer is more effective in live contact.
    • Even in such rational flow of ideas, based on logic, there are emotional controls that establish trust or the lack of it. These control mechanisms seem to work more effectively in onsite conferences.  During a live presentation a human being is not just judging the content of what is said, but also the reaction of the crowd, which cannot be fully replicated in any distance communication technology.
  • The ability of a listener to concentrate seems better when no external disturbances are competing on the limited attention capacity. Listening to the computer or the smartphone at home/office is unavoidably exposed to many distractions.

What new benefits can be generated by a virtual conference?

The most obvious benefit of a virtual conference is that its cost is far lower, both for the participants and for the organizers.  This is on top, of course, of the special impact of the current health crisis, where people from all over the world are stuck at home.

On the face of it, a virtual conference can easily accommodate more presentations, giving a wider choice to the participants.  But the negative branch of offering too wide a choice is overall less impact and somewhat reduced quality of the conference as a whole.  In an onsite conference there is a need to offer a full program for every track throughout the whole day.  A virtual conference can spread truly good and effective presentations over more days, say just three net hours of presentations a day.  By lowering the daily load of new knowledge, covering more material over a longer period of time, this limitation of virtual conferencing is vastly reduced.

A truly special benefit of a virtual conference is the option to listen to the recordings of presentations that were missed, either because of parallel interesting presentations, or because of the need to rest, while still within the sense of occasion of the conference.  This option is one of several that are not possible in a live conference.

Higher quality of the presentations is the key advantage of virtual conferences!

First, the choice of speakers is wider, because the speakers are not required to come physically to the conference location. Secondly the presentations can be pre-recorded, so the quality of the picture and sound can be carefully monitored.  It also allows making more than one take of the presentation and choosing the better one.

Pre-recording the presentations gives an opportunity to add subtitles in English to the presentation, and based on them create subtitles in several other languages that the participants can choose.  From my personal perspective, coming from a non-English-speaking country, I can testify to the importance of captions in English.  It is easier for me to watch movies in English on TV with captions in English, even though I understand 95% of the spoken text.  Captions also help to overcome the difficulty of understanding different accents.  Creating the captions in English provides the option to translate them into different languages, and this opens the door for people with limited knowledge of English to participate.

While the presentations themselves are pre-recorded, it is possible to conduct live Q&A sessions.  This combination of recordings and live sessions has the potential to achieve an overall superior quality of delivering content and capturing the audience's reaction to the ideas.  One of the ideas we would like to examine is to have two or three Q&A sessions within every presentation, making the content more approachable.

A considerable difficulty of an international virtual conference is accommodating different time zones.  When the audience is spread all over the world, many people face a practical difficulty in attending the live Q&A sessions.  A partial solution to this difficulty is asking the speakers to hold two sets of Q&A sessions, 10-12 hours apart, to better fit the time zones of participants on the other side of the globe.

There are several software packages that manage virtual conferences.  On top of handling the presentations they provide chat rooms, so more intimate meetings with key presenters are made possible, compensating to a certain degree for the inability to approach them directly during a break.

My own conclusion is that while I would still like to attend a live conference in an attractive location, virtual conferences, with the best speakers, can provide huge value and are so much more affordable.  Eventually this is a direction for the future.  It is not just that technology allows us to do something similar to an onsite conference from afar; we can capitalize on the virtues of the new technology to achieve new benefits.

Eleven years ago I initiated the delivery of webinars for TOCICO.  It was a new and very exciting experience for me.  Members of TOCICO now have more than 120 past webinars to watch at any time.  I personally look forward to experiencing my first virtual conference as a major vehicle for spreading the most updated and powerful knowledge of the Theory of Constraints (TOC), at an affordable price, to whoever is curious enough to learn about it.  I sincerely believe this is an opportunity for readers of The Goal to learn what TOC can do for their organizations.

Fixing a mistake in our book. Our apologies!

And, some thoughts on how easy it is to make mistakes and continue to be blind to them.

Writing a book is an ultra-challenging mission.  It is far more than finding the most effective way to express what you have in your mind.  It certainly involves the especially difficult task of putting yourself in the shoes of the reader wondering whether the text is clear enough, and interesting enough to motivate the reader to continue.

There is much more.  You need to look for mistakes.

Having spent twelve years of my life in programming I know what every code writer knows so well:  it is damn easy to produce bugs!  It is so common that no programmer in the world claims that he/she has learned to write code that works right first time.  A positive side in the programming culture is that programmers are not blamed for bugs, as long as they are quick to fix them.

Writing a book is not so different from writing code, just much more challenging.  This means we are constantly introducing mistakes into our writing.  Reading back what we wrote isn’t good enough.  When you have the meaning of the sentence in your mind, you may fail to see that what appears on the page isn’t what you meant.  Thus, publishing houses employ editors trained to read text with the purpose of discovering some of the writer mistakes.  My assumption is that many mistakes still exist in any book.

We refer to the following book, which has been published and printed after all the careful editing that we have done:

[Image: the book cover]

So, there we were, preparing for the webinar “Building a Bridge of Understanding,” when we suddenly discovered that the following table from our book does not represent the specific case.  Here is the table that looks quite normal:

Table 7‑3:  Alternative Accounting Treatments

[Image: Table 7-3 as printed in the book]

The problem is that the real cost of Materials and Freight is not $300,000, which is based on 10,000 sold units with a materials cost per unit of $30.  While 10,000 units were sold, the company actually produced 14,286 units.  So, the real cost of materials purchased and used in production is 428,580.  The correct table should be:

Table 7‑3:  Alternative Accounting Treatments

[Image: the corrected Table 7-3]
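The correction itself boils down to this arithmetic (a trivial check, using only the figures quoted above):

    cost_per_unit = 30
    units_sold = 10_000
    units_produced = 14_286

    print(units_sold * cost_per_unit)      # 300,000 -- the figure mistakenly used in the printed table
    print(units_produced * cost_per_unit)  # 428,580 -- the real cost of Materials and Freight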

It is particularly difficult to fix finished books.  We delivered the corrected table to the publishers but what has been printed contains the mistaken table.  Here is a link that allows you to download a PDF file where you can easily print the fixed table, cut the margins, and put it onto the flawed table in the book: https://www.dropbox.com/s/699x01nqouibaeq/Table%207.3%20corrections%20with%20Border%20Cut%20Guide.pdf?dl=0

Let me emphasize, we have found a mistake only in the table, not in the text itself.  As we mentioned, we still might have made other mistakes here and there.

So, readers of the book, be aware of the mistake and the fixed table.  Please let us know if you find one!

For all others who do not have the book, but the topic seems interesting, have a look at the page in my blog describing the need and the key questions answered by the content of the book.  Here is the link to this page: https://elischragenheim.com/toc-economics-top-management-decision-support/

IT as a universal bottleneck

Eli Goldratt defined management attention as the ultimate constraint.  However, sometimes other constraints emerge.  Let’s analyze the emergence of IT as a universal bottleneck for improvement efforts of many organizations and corporations.

The ultra-fast development of IT, including the cloud, Big Data, artificial intelligence (AI), Industry 4.0, mobile applications, e-commerce, cyber protection, and routine software, creates a problematic situation in almost all medium and big organizations: the IT department becomes a bottleneck.  The simple meaning of a bottleneck is: being incapable of performing all the required work.

Dealing with a real bottleneck is a critical strategic problem, because it forces top management to decide what good business to give up.  This is not a problem with the technology itself; it is a difficult managerial duty to decide which new technology (actually any promising change) to adopt, and at what pace.  That said, the technology people don't make it easy for top management to make the right decisions.  The technology perspective is quite different from the goal of the organization.

While the products and services of the organization might have nothing to do with IT, the flow of materials, products, and services, and the means for more effective marketing and sales, depend more and more on IT.  Most new technologies are heavily based on IT, and adapting to the flow of incoming new IT capabilities becomes more and more difficult.  Actually, as the use of IT becomes bigger and more complex, controlling the whole IT activity might become chaotic.  New managerial capabilities are needed to keep the huge suite of features, coming from various sources, under control.

Just to illustrate the problems:  The banking systems generate a huge amount of IT requests, from better mobile applications to improving international and multi-currency transactions, which have to conform to various local regulations.  The appearance of cryptocurrency adds challenges that require new IT tools.  Naturally, banks have endless artificial intelligence initiatives and, of course, they need to constantly improve their cyber protection.  Other business sectors, like Retail, also face new threats and require new IT tools to provide new services to their customers.  The whole Manufacturing sector is facing the new digital control and connectivity of Industry 4.0.

The “Flow of Value” is defined as the current way the organization delivers value to customers. This is definitely a critical flow for every organization.  However, every organization has also to maintain another critical flow:  the “Flow of Initiatives” to improve the Flow of Value in the future.  The Flow of Initiatives consists of all the efforts invested in expanding the market demand, developing new products/services or finding better ways for the “Flow of Value” to adapt to the trends in the market, like faster response time.  Each of these flows has its own constraint.

The means for gaining more market, or even preserving the current demand, rely heavily on advanced IT features.  Being blocked by the bottleneck causes considerable damage, like confusion about which initiatives are in the pipeline and the priority they should get in competing for resources.  The confusion and the increased internal competition for IT resources cause multi-tasking and frequent priority changes, which waste a significant part of the IT people's capacity, turning the situation into a vicious cycle.  Considering that truly good IT people are scarce, so the competition for them is wide, IT being a bottleneck will not be solved in the near future.

IT as a bottleneck creates only minor problems for the current Flow of Value, because that flow continues to function using the older IT tools and usually has adequate capacity of key resources.  However, the perceived shortcomings of the IT system are obvious to the employees and also to the key clients and suppliers.  This creates significant tension within the organization and between the organization and its clients and suppliers.  Rumors and facts about competitors jumping on the new technology wagon add to the ongoing pressure, and the department that needs to respond to all the requests is IT.  The big problem of IT is that while they need to plan and implement new tools and upgrades to existing tools, they still need to support all the regular services.  The conflict between the urgent and the important is especially noticeable in IT departments.

The real acute problem with IT as a bottleneck to growth is that the situation looks temporary, but it is not.  At any given time the stream of new IT-based technologies looks like it would take a certain time, maybe even a few years, to implement, and then the pressure on the Flow of Initiatives would go down.  This is simply not true.  The whole area of Artificial Intelligence (AI) is just starting to penetrate actual use in large organizations, and this flow will definitely continue and even accelerate.  In the same way, the need to improve IT security will keep going up.  So, this state of having to deal with a stream of significant new IT-oriented technologies will continue.

There are two parallel efforts required to get IT, which faces many routine but urgent requests while also struggling with a stream of seemingly relevant new technologies, under control:

  1. Significantly improving the flow of the work-in-process (WIP) of the IT.
    1. The first key insight is to keep a certain minimum amount of work items in progress and choke the release of more work items based on the pace of completed ones (a minimal sketch of such a choking mechanism appears right after this list). In itself this would reduce multi-tasking and improve the focus and concentration of the human resources on the current request or project task they are working on.
    2. Implementing one scheme of priorities to warn when a certain project, or a specific mission, are stuck. The TOC methodology for priorities within execution is called Buffer Management.
    3. Identify within IT the internal constraint resource, and then search for the best scheme to exploit its work and subordinate all the rest to that scheme.
  2. Carefully choosing and prioritizing all IT requests. This practically means creating the organizational structure and processes to evaluate the potential value of each request. This is an ultra-sensitive process and it has to be led by an executive, rather than by an IT professional manager, because the value should be assessed beyond the IT perspective.  Such a check should also include the rough capacity requirements of each request and the timeline to complete it. This creates a process for defining the most effective portfolio of projects and missions. TOC has contributed the 'Six Questions on the Value of New Technology' and other thinking tools to support the evaluation of value.  This process would dictate the required effective stream of requests to the IT department, making sure every request is truly needed in its specific time frame, and by that helps to define the capabilities and capacity levels that the IT department should maintain.
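Here is a minimal sketch of the choking mechanism mentioned in the first point (the class name, the WIP limit, and the request names are made up; a real implementation would hang this logic on the actual work-tracking system):

    from collections import deque

    class WipChoke:
        """Release new IT work items only while open work is below an agreed cap."""
        def __init__(self, wip_limit):
            self.wip_limit = wip_limit
            self.backlog = deque()     # approved requests waiting for release
            self.in_progress = set()

        def submit(self, item):
            self.backlog.append(item)
            self._release()

        def complete(self, item):
            self.in_progress.discard(item)
            self._release()            # every completion pulls in the next waiting request

        def _release(self):
            while self.backlog and len(self.in_progress) < self.wip_limit:
                self.in_progress.add(self.backlog.popleft())

    # With a limit of 3, the fourth request waits until something is finished.
    board = WipChoke(wip_limit=3)
    for request in ["mobile app fix", "AI pilot", "cyber patch", "new report"]:
        board.submit(request)
    print(sorted(board.in_progress), list(board.backlog))
    board.complete("cyber patch")
    print(sorted(board.in_progress), list(board.backlog))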

The key argument is that improving the flow of WIP in the IT department might solve the problem for some time, but without managerial priorities on the incoming requests the problem will return in the near future.

The next step is to implement the new IT tools into the Flow of Value to generate the added value.  The implementation may require going through a transition period.  Getting used to the new tools takes time.  During that period mistakes are made, and all the managers have to be ready to pick up the signals of a problem and deal with it as soon as possible.  During the transition, the need for management attention across the hierarchy is at its peak.

Should IT be the strategic constraint of the Flow of Initiatives?

The natural constraint for growth is management attention, because adding more managers might create considerably more communication obstacles and diminish productivity.  Elevating management attention requires, first of all, learning to focus on the issues that matter the most.  This is exactly what is also required to exploit the limited capacity of the IT resources.  It means introducing a process of sorting all the ideas, including the partially developed initiatives, according to systematic assessments of their value, the required capacity from the relevant critical resources, and their expected completion time.  Such a process is the ultimate solution both for management attention and for IT.  Another conclusion is that the required capacities of all other resources, IT included, considering also the absolutely necessary protective capacity, have to conform to the ability of top management to lead the most effective portfolio of improvement initiatives.

Decision Support Systems (DSS)

By Avraham Mordoch and Eli Schragenheim

How can the rapid development of new technology effectively help organizations achieve more of their goal?  The vast majority of the new technologies have a considerable impact on IT (Information Technology) departments, which need to keep themselves frequently updated.  This puts huge pressure on the workload of the IT people in most organizations, causing headaches for top management and a lack of focus, instead of helping them move the organization forward.

But the new power of computerization, including technologies like Big Data, Artificial Intelligence (AI) and the Internet of Things (IoT), could also be used cleverly to improve the effectiveness of management and lead the organization to secure and successful growth of its activity.

On one hand, the new technologies provide more data, which is also more accurate than ever before.  That data, when properly analyzed with a focus on what is truly meaningful, can help managers analyze the current state of the business and its weaknesses, and can lead to new ideas for improving the bottom line.

This article offers new ways to use the recent technology to give management beneficial support in evaluating new ideas or preparing for expected changes coming from elsewhere.  We have chosen to focus on manufacturing organizations, which face the threat, and the potential benefits, of the digitization of the manufacturing shop-floor, considered to be the fourth industrial revolution and thus given the title Industry 4.0.  The threat is being pushed into enormous expenses without gaining any business benefit. The capabilities of the new technology could support a dramatic improvement in the way tactical and strategic moves are evaluated.  The point, though, is that in order to realize the benefits some management paradigms have to be challenged and replaced with common-sense paradigms that utilize the new capabilities to support decisions.

Looking at the current types of software systems supporting manufacturing organizations, these systems can be classified into four types:

  1. MES (Manufacturing Execution Systems). This type of system is focused on the very short term and aims at providing operators and production management with the most up-to-date state of the flow of raw materials all the way to the finished-goods inventory.  It allows handling priorities, fixing problems fast and achieving efficient utilization of the equipment.  MES collects data and organizes it in a way that can be easily viewed by middle operational managers. SCADA systems, for example, are a subset of MES systems.
  2. ERP (Enterprise Resource Planning). We include in this class also the CRM (Customer Relationship Management) systems. This class consists of a suite of integrated applications that the manufacturing organization can use to plan operations and to collect, store, manage and interpret data from different business activities. What integrates all the various parts of the ERP class of systems is one database of all the key transactions, financial and material, that have been recorded or are planned for the short to medium term.  The main function of this type of system is planning the basic operations required to deliver the firm orders, while also recording the transactions and instituting order and systemization of all the data related to the main processes in the organization. ERP and CRM systems are mainly data systems with some crude planning functionality. When information is defined as the answer to the question asked (Goldratt, The Haystack Syndrome), ERP and CRM supply answers to the most frequent and simple questions, like what needs to be done in order to deliver a customer order. SAP, Oracle and Dynamics 365 are just a few examples of ERP systems.
  3. BI (Business Intelligence). The objective of BI programs is to display high-level information for top management, providing a picture of the current situation and possibly pointing to certain observed trends.  The power of the BI technologies is the ability to collect relevant data elements from various databases. Internal data is mixed with data collected from the Internet and used to create graphs and charts that let management be aware of what’s going on within their organization and how it compares to what’s going on in the market and with their competitors.  The Key Performance Indicators (KPIs) are supported by BI, making them clear to top management.  This gives management the background to evaluate ‘what to change?’  But it does not provide the tools to answer ‘what to change to?’, and definitely not ‘how to cause the change?’.  In other words, BI supports decisions by pointing to the areas that require attention, but it does not support specific decisions.
  4. Decision Support Systems (DSS). While the term DSS was already in use in the 1980s, the true capability of actually supporting decisions has been achieved only recently. Every management decision considers a change to the current state.  Every significant decision is also exposed to considerable uncertainty.  So, the key capability of a DSS is to direct the managers to various alternatives to the considered decision and present the possible ramifications of these changes.  We can divide this level of DSS into two parts:
    1. Supporting routine decisions currently made by experts, so that less experienced people can take them, or even letting the computer make the decision.  These computerized programs are based on new Artificial Intelligence technologies and create a variety of expert systems that support such decisions.
    2. Supporting more significant tactical and strategic decisions by providing the decision makers with a holistic analysis of the potential financial and other ramifications of the decisions.  These are systems that support organizational decision-making, including decisions that consider unstructured or semi-structured potential opportunities exposed to significant uncertainty. The assessment should consider the short term as well as the long term. A DSS must “understand” the cause-and-effect relationships between the different functions of the organization. These systems should allow a direct interaction between the human decision-maker(s) and the computerized algorithm.  The objective, given the amount of uncertainty and the lack of full, precise information, is to present the decision-makers with a full picture of what MIGHT happen, for good and for worse, as the result of the decision(s).

The above four classifications are not clear-cut and there are systems that cross the lines between them.

On top of that there is often an interaction, even a loop, between the above types of systems. The ERP consumes data accumulated by the MES and accordingly creates work orders that feed the MES.  The ERP database plays a major role for the BI system in showing the current state, and the ERP and BI data are inputs to the DSS programs.

Interestingly enough, the effort a manufacturing organization needs to make to implement these systems is especially significant when implementing an MES system, since there is a need to overcome cultural objections, including the antagonism found in organizations with no established culture of reporting what has been done. Once this initial infrastructure is laid down, it is a bit easier to implement and properly use the ERP system, and far easier to continue to climb the ladder and implement Expert Systems and the higher level of DSSs. So, the effort is reduced going up the ladder through the four types of systems, but the benefits from the implementations increase, and there are very significant benefits when top management, the C-level managers, use a DSS for solving the crucial dilemmas they face.

We have to take into account that manufacturing organizations are both complex and exposed to significant uncertainties. Still the C-level managers have to make tough decisions like:

  • Should the company offer packages of its existing products for a reduced price?
  • Should the company accept small orders for customized products for a not-too-high markup?
  • Should the company expand the product-mix with an additional product family (or families)?
  • Should the company save considerable cost by shrinking its resources, as well as stopping the production of products with very low demand?
  • Should the company go on a massive advertising campaign?
  • Should the company participate in a big tender, quoting a moderate price, knowing that winning might affect the good delivery performance of the regular orders?
  • Should the company invest in opening a new export market?
  • Should the company invest in a new production-line when the market seems to go up, but some people believe this upward trend is going to stop?

These decisions lie outside the comfort zone of the decision makers, because of the obvious risk and the limited past experience with such situations.  The decisions are risky not just from the perspective of the organization, but also from the perspective of the personal risk of the decision maker, who ties himself/herself to the success or failure of the initiative.

The above risks force conservative decisions whenever the needed decision is beyond the known comfort zone. The lack of proper support for a holistic analysis blocks many organizations from achieving their true potential.

There are two big obstacles for any DSS that tackles the above decisions and many others. The first obstacle is expressing the intuition of the people close to the relevant area so it can play its role in the analysis.  Even when the situation is beyond the comfort zone of the decision-maker, that intuition is still valuable, as the people involved always know something, which is more than nothing.  While the lack of good, precise, relevant data is a constant issue, analyzing what MIGHT happen is a valid possibility, which yields a focused picture of the actual risk.

The second obstacle is being able to evaluate the proposed decision when it is added to everything else the organization is doing or is committed to do.  This requires a deep understanding of the rules behind the flow of materials, products, orders and financial transactions, including the various dependencies in Operations and in Sales.  This leads to massive calculations checking the state of capacity, materials and cash.

The DSS program needs to “simulate” the top-level dilemmas (like the examples above) and come up with the predicted financial results.  It has to make it easy to run a variety of ‘what-if’ scenarios and compare the results.  In the end, it has to display the predicted results for at least two different scenarios: one based on reasonably conservative assessments and the other on reasonably optimistic ones.  The range of end results means the actual result should fall anywhere between the extremes of the range.  Decisions should never be made automatically by the system – it needs the constant intervention of the decision maker, who looks for better alternatives and uses human judgment to make the final decision.

Generally speaking, there are two main ways to accomplish effective support for decisions:

  1. Being able to carry out a mass of calculations, based on good cause-and-effect rules, which describe the materials and capacity requirements for every product sold as well as the impact on revenues and cost. This way is described in detail in the book Throughput Economics, written by Eli Schragenheim, Henry Camp and Rocco Surace; a minimal sketch of it appears after this list.
  2. Using a powerful computerized simulator that closely follows the flow rules, and records revenues, truly variable expenses and the cost of capacity as an integral part of the simulation. The uncertainty has to be input into the simulator’s critical parameters to provide the possible range of the results.
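As a rough illustration of the first way, the sketch below sums throughput per scenario and checks the load on a single assumed critical resource. All prices, costs, demand figures and capacity numbers are invented for the example and carry no recommendation.

    # Each product: (price, totally variable cost, minutes on the critical resource)
    products = {
        "A": (100.0, 40.0, 5.0),
        "B": (150.0, 70.0, 9.0),
    }

    # Two reasonable extreme scenarios for monthly unit sales (assumed numbers)
    scenarios = {
        "conservative": {"A": 800, "B": 300},
        "optimistic":   {"A": 1100, "B": 450},
    }

    CAPACITY_MINUTES = 9000.0     # assumed monthly availability of the critical resource
    OPERATING_EXPENSE = 60000.0   # assumed monthly OE (the cost of maintaining capacity)

    for name, demand in scenarios.items():
        throughput = sum((price - tvc) * demand[p]
                         for p, (price, tvc, _) in products.items())
        load = sum(minutes * demand[p]
                   for p, (_, _, minutes) in products.items())
        print(f"{name}: T = {throughput:,.0f}, "
              f"profit = {throughput - OPERATING_EXPENSE:,.0f}, "
              f"critical-resource load = {load / CAPACITY_MINUTES:.0%}")

Even this toy example shows why both extremes matter: with these made-up numbers the optimistic scenario is the one that overloads the critical resource, so it is the range, not a single forecast, that informs the decision.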

The mass-calculations way is more visible to the decision-makers, as the calculations are all straightforward and the added value of the computer is the ability to carry out such a mass of calculations.  This means the decision makers fully understand the assumptions that are at the core of the calculations.

Using computerized simulation better fits complex situations, either within the production floor or with complicated dependencies within sales.  For instance, simulating different flow rules, like batch sizing and different prioritization schemes, is much more effective than mere calculations that have to rely on assumptions regarding the effectiveness of the flow rules.  On the other hand, the user has to inquire deeply to validate that the internal parameters of such a simulation are in line with reality in order to trust the results.

Both ways have to start with a good representation of the current state as a reference to which all the changes are compared.  For a simulation this means creating a ‘digital twin’ that reproduces the current performance of the organization.

A computerized system that produces a reliable reference or digital twin, can introduce a variety of changes and compare the results to the reference, and also depicts the potential impact of uncertainty and of the lack of accurate data, deserves to be called a decision support system (DSS). Such a system significantly reduces the risk in taking top-level decisions and also reduces the procrastination usually found whenever ‘hard decisions’ are evaluated.  This would help significantly in putting the company ahead of the competition.

The first few true DSSs to appear in the market will enjoy a “Blue Ocean Strategy” compared to the “Red Ocean” that typically characterizes the current market for systems supporting manufacturing organizations.

Preparing for a recession

By Rudi Burkhard and Eli Schragenheim

Recession is an external threat that no organization has the power to stop or even delay.  A recession starts when enough people (those who influence the economy) expect it to happen soon and start to take action to protect themselves from its impact. The trigger may be actions by politicians or central banks, or a major financial scandal, that accelerate people’s decisions to take action.

What should a company do when recession is a close possibility?

A recession pushes most managers out of their comfort zone.  Managers’ intuition about the markets’ direction dives and their fear level surges.  Every organization suffers the reaction of clients and suppliers to the coming recession. The first blow to sales is often much bigger than the actual decline of the economy.  The knee-jerk reaction is to reduce inventories; suppliers are the first to feel these reactions, sometimes over-reactions, to the recession’s threat.

It takes time to understand the actual impact of a recession. Until that time hysteria and limited intuition frequently cause major mistakes. Managers also do not have good intuition about a recession’s impact on real physical demand. They often do not understand what really takes place in the economy.

Common practice is to cut cost. Warning: This common practice takes management’s focus away from the one parameter they must not hurt: sales!  To survive a recession a company must, as much as possible, protect sales revenue. We don’t claim that reducing cost is not important or critical to survival, but managers should carefully analyze their situation to ensure they do not disrupt their sales more than the recession  does. They must be careful to not make the recession’s damage worse.

Estimate the impact of a possible X% sales decline

Throughput Accounting, and more recently Throughput Economics, lead to a clear set of data and information that together estimate the range of valid probable results from such a macro-economic event.   Using these two tools, and the related knowledge, can lead to a much better perspective on how a recession might cause damage to the company’s future.

Two financial parameters are of special importance:

  1. Total Throughput (T), revenues minus totally variable costs (TVC).
  2. Total operating expense (OE) – all the money spent to maintain the necessary capacities of resources (space, equipment, manpower and even cash).

Comment:  The TOC goal measurements also include the money captured within the organization, called ‘Investment’ or just ‘I’.  While it is one of the key measurements, for evaluating the potential impact of a recession the important part of Investment is inventory, which is a natural candidate for reduction.
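A small worked example, with assumed numbers and not data from any company, shows why the split between T and OE matters in a recession: Throughput falls roughly with sales, while OE is largely fixed in the short term, so profit falls much more steeply than sales.

    throughput = 1_000_000.0        # current annual T (revenue minus TVC), assumed
    operating_expense = 850_000.0   # current annual OE, assumed largely fixed

    for decline in (0.10, 0.20, 0.30):           # an assumed conservative-to-pessimistic range
        t_after = throughput * (1 - decline)     # T assumed to fall in proportion to sales
        print(f"sales down {decline:.0%}: T = {t_after:,.0f}, "
              f"profit = {t_after - operating_expense:,.0f}")

With these assumptions a 10% drop in sales wipes out two thirds of the profit and a 20% drop already produces a loss, which is exactly why the range of the possible decline, and not a single guess, has to be estimated.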

Interested readers are referred to the Theory of Constraints materials on Throughput Accounting, such as Thomas Corbett’s book Throughput Accounting and the more recent Throughput Economics by Schragenheim, Camp and Surace. These resources explain how the concepts they introduce give much better insight into the current and future financial state of a company.

Throughput is the cash inflow from sales (minus the cost of materials).  OE is the cash outflow to maintain the necessary capacity required to stay in business; it also covers the depreciation of the investment in capacity.  Sufficient capacity is required to support the two key flows:

  1. The Flow of Value, focusing on the current flow of products and services to clients. This flow encompasses the entire chain from purchase orders to suppliers to product delivery to clients and finally payment collection.
  2. The Flow of Initiatives to improve (increase) the Flow of Value. This flow contains all improvement projects and new idea evaluations for products and processes.

A cost cut reduces capacities; for instance, by stopping all overtime, special shifts and temporary workers, production capacity is reduced.  Depending on local regulations the company could also consider lay-offs and/or short workweeks.  Understanding the impact of these actions on sales volumes is critical for the survival of the organization.  Companies should use careful prioritization to avoid cutting capacities that ensure the Flow of Value is maintained. An unavoidable and longer-lasting decrease in demand makes such cuts possible, as long as delivery lead-times and reliability to the remaining clients are not negatively impacted.

Practically this means that cuts to capacity, and to the cost required to maintain it, should be considered mainly for resources that support the Flow of Initiatives, rather than for resources that maintain the Flow of Value.  Proper consideration reduces the threat to future prosperity posed by the delay or cancellation of improvement initiatives.  It also implies that some of the luxuries management gives itself are valid potential cost reductions. Every organization may have capacity that supports the Flow of Value only indirectly, or perhaps does not support it at all.  Such capacities, with a questionable direct contribution to the business, are the natural and sensible candidates to be cut in a recession.

In practice this means weighing the short-term benefit of cost cuts against the longer-term benefits that will stem from the Flow of Initiatives. Cutting cost in a way that stops or slows the Flow of Initiatives threatens future prosperity and creates opportunities for competitors.  Sometimes short-term survival dictates giving up future opportunities, but extra care is needed in doing so.

The two categories of actions necessary to keep a company safe, even truly successful, over the longer term are:

  1. Predict the possible range of the financial impact in order to correctly choose the resources that must remain, both to provide good enough financial (cash flow) performance during the recession and to best support growth once the recession recedes.
  2. Improve Operations to a level at which the reaction to market demand changes is faster than any competitor’s. By this the company gains a clear, even decisive, competitive edge. Rapid identification of the products less impacted by the recession is one example of how improved operations can be used to gain an advantage over competitors. A recession presents opportunities to capture more market demand; demand that until the recession was served by competitors. A recession ‘forces’ clients and their suppliers to reduce inventory, which in turn requires a faster response to supply smaller quantities.  The supplier able to respond faster with smaller quantities (replenishing clients at a higher frequency) than the competition gains a decisive advantage.  Spare capacity should be used to accelerate response times, and to keep low stocks of finished goods, to maximize the competitive advantage.

Major points to realize when predicting the depth and duration of the recession are:

Operating Expense behavior is not linear: it is impossible to reduce capacity by exactly the predicted decrease in sales. Most resources come in sizable increments, so cutting capacity is possible only in amounts different from what the decrease in sales requires.
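A minimal sketch of this non-linearity, using whole shifts as the assumed capacity increment and invented figures:

    import math

    SHIFT_OUTPUT = 1000      # assumed units of monthly output per shift
    CURRENT_SHIFTS = 5

    for sales_decline in (0.12, 0.25):
        required_output = CURRENT_SHIFTS * SHIFT_OUTPUT * (1 - sales_decline)
        shifts_needed = math.ceil(required_output / SHIFT_OUTPUT)   # capacity only comes in whole shifts
        possible_cut = 1 - shifts_needed / CURRENT_SHIFTS
        print(f"sales down {sales_decline:.0%}: capacity can only be cut by "
              f"{possible_cut:.0%} (from {CURRENT_SHIFTS} to {shifts_needed} shifts)")

In this made-up case a 12% drop in sales allows no cut at all without hurting delivery, while a 25% drop allows only a 20% cut; the step size of the resources, not the sales forecast, dictates what is actually possible.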

Can we make reliable estimates of the extent of reduced demand? Can we make reliable estimates of the extent of price reductions? We cannot!

All decisions are based on forecasts that are mostly intuitive, sometimes quantitative, or a combination of the two.  Forecasts are always based on the past, with assumptions about how past behavior will change.  The management practice of treating forecasts as deterministic is the core problem behind erratic decisions about demand.  A single number will never be reality – the best we can do is estimate a range and prepare to respond quickly as reality becomes clearer.  A valid way is to define a range from a conservative to an optimistic assessment. Both estimates should be reasonable; put aside possible results with a very low probability.

Thus, it is possible to estimate the reasonable range of the impact of the recession on a specific market and then check the extreme predictions to decide what, how much and where to cut cost and how much stock is absolutely necessary.  Note that you need to check both extremes, not just the conservative side. The optimistic side shows you what you might lose if you only consider the worst case.  As the recession’s impact can be quite different for different businesses and for different countries or regions, it is the responsibility of the management of every company to estimate the reasonable range of change in their specific market.  Estimating a range is easier (not “easy”, just easier) than predicting a single number. The company can discern reasonable estimates of how bad the situation might become and what can be done about it. The company should also consider what it can achieve if it maintains the level of resources required to obtain the best outcome of the recession.  This is the Throughput Economics process for obtaining vital and relevant information that supports superior decisions and results.

Both the conservative and the optimistic assessments lead to actions.  The job of management is to make decisions such that, even when they turn out to be based on the wrong side of the estimates, the damage is limited while the potential gains are high.

Probably all managers realize that in their market final consumer behavior is critical.  Consumers dictate demand.  Consumers’ demand impacts all players in a supply or value chain. For some value chains, or positions in the value chain, the recession’s impact may be somewhat delayed.  It is essential that B2B organizations extend their evaluation beyond their immediate clients. In order to predict the evolution of demand they must evaluate what is likely to happen to demand all along the chain, starting from the final consumer.  Suppliers to retail organizations might suffer a very high drop in sales at the beginning of the recession.  However, the real drop in sales to the end consumers is usually much smaller; the retailers nevertheless decide to reduce inventories.  For suppliers this means demand is likely to return quite soon. Understanding clients and their clients’ business well is an essential capability for every organization in a value chain. This capability is not only critical in a recession – it is always a critical competency to understand clients’ needs even better than they do!

How do streamlined operations win in the market during a recession?

The Theory of Constraints (TOC) for manufacturing and distribution companies is the ultimate Lean. TOC methods ensure excellent response times and/or product availability with the lowest possible amount of work in process (WIP).  These methods are the correct choice whether in a recession or not.  They also require the least amount of cash to finance inventory and operations.  This creates the potential for competitive advantage, especially during a recession. During a recession everyone is under pressure to spend money much more carefully.  Every purchase, by an individual or by an organization, is thoroughly checked. Competition becomes fiercer than before, opening the path to price competition (price wars). The other, often ignored, option is to compete using faster response and smaller quantities. Throughput Economics and Simplified Drum-Buffer-Rope (SDBR) can be combined to define the best strategy and tactics for dealing with an arriving recession.  As inventories are flushed out of the system, once the original demand comes back the suppliers should be ready to deliver, with less inventory, the original demand plus the new demand gained through fast response during the recession.  Constantly updated holistic information on the actual demand and its trends allows management to estimate when demand will start to rebound. This information is critical to decisions about the required capacity levels, which in turn determine cost and cash flow. The questions to answer are how strongly the company should protect current sales and to what extent improvement projects should continue to be implemented.

The Theory of Constraints (TOC) tools, particularly SDBR and Throughput Economics, support this route, combining operational and financial capabilities.  The idea is to limit the potential damage of a recession, gain competitive advantages and be ready when the economy rebounds to fully capitalize on the acquired competitive edge.

The devastating impact of the fear of uncertainty

Suppose you are the CEO of a manufacturing company and Ken, your VP of Operations, comes to you with the idea of offering fast-response options to clients for a nice markup in price.  That means, on top of delivering in four weeks for the regular price, also delivering in two weeks for a 15% markup and in one week for a 30% markup.  You ask Ken what Mia, the VP of Sales, thinks of it, and he tells you she does not object, but does not fully support it either.  As long as you, the CEO, support the idea she will cooperate.  A similar response comes from Ian, the CFO of the company.

There are two possible problems with the idea.  One is that Operations might be unable to respond with such fast deliveries, but Ken is confident Production can do it.  The second potential problem is that clients would refuse to pay the markup and still put pressure on the company for a faster response.

Suppose you tell Ken the following: “Sounds like an interesting idea.  If you truly believe in it – go and do it.  Get the support of Mia and Ian and bring results.”

Is this “good leadership”?  The words “if you truly believe …” radiate that the full responsibility for a failure would fall on Ken, no matter what caused the failure, and regardless of the fact that any new move is exposed to considerable variability.  Would this encourage more people with creative ideas to bring them to management?  What do you think would happen to Ken if “his idea” fails to impact sales?

A manager who dares to raise a new idea for improving the performance of an organization faces two major fears.  The first is of being unable to meet the challenge and the responsibility and accountability that come with it.  The second fear, much more devastating, is of unjust criticism if the idea does not work according to the prior expectations because of the significant inherent uncertainty.  Expectations are usually built on an optimistic forecast, even when the one who raised the idea also took into account the less optimistic results.  The unjust behavior of critics, who choose to ignore the uncertainty, is what eventually gives most managers cold feet, preventing them from raising new ideas.

Think of the conflict of a coach of a top sports team before an ultra-important game, when a key player is available again after a long break due to a bad injury: should he use the player in the game?  On one hand, that player could be the decisive factor in winning the game.  On the other hand, he might get injured again.  How would you judge the coach after the game?  How much would your judgment be influenced by the actual outcome?

A coach before a game has to make several critical decisions.  But when a manager has a daring new idea, simply ignoring the idea is a valid option.

Nassim Taleb is absolutely right in saying that instead of trying to avoid uncertainty we should use uncertainty for our benefit.  However, you cannot just ignore the fear that any negative actual result would cause too much personal damage, much larger than the damage to the company from the specific idea.  The fear is intensified by knowing that many of the people who might judge the outcomes of the idea do not really understand the nature of uncertainty.  Then there is the concern that a “failure” would play a role in the power game within the organization, causing damage to the person who came up with an idea that worked less well than expected.

Most people are afraid of a variety of uncertain events.  One question is what to do with the fear.  Most people delay critical decisions, which is the same as deciding to do nothing.  Others make the uncertain decisions fast in order to avoid the torment of the fear.  The use of superstition to handle uncertainty, and mainly to reduce the fear, is also widespread.

As the reader has already realized, the focus of this article is not on individuals who make decisions for their own life, but on people who are making decisions on behalf of their organization.

One difference is that decision making on behalf of the organization should be based on rational analysis.  The business culture of organizations radiates the expectation of optimal decisions that carefully check the cost-benefit relationships.  People make their own decisions based mainly on emotions and then justify those decisions with rational arguments.  Organizations might hold certain values based on the emotions of the owners, but the vast majority of the derived decisions are supposed to be the outcome of rational analysis.

Are they?  Can people have two different sets of behaviors when they need to take decisions?

Goldratt’s famous saying, “tell me how you measure me and I will tell you how I will behave”, gives a clue to what could cause a basic change in behavior between the workplace and all the other environments a person interacts with.  Every organization sets certain expectations of its employees.  When continuing to work for the organization counts, and certainly when there is a wish to go up the ladder, fulfilling those expectations has a major impact.

“People are not optimizers, they are satisficers,” said Prof. Herbert Simon, the Nobel Prize laureate (Economics, 1978).  A ‘satisficer’ tries to meet satisfactory criteria and, once they are met, stops searching for better alternatives.  If Prof. Simon is right, and if the organizational culture promotes the value of optimization, then organizations demand a different way of making decisions than the one people use for themselves.

The critical and devastating conflict lies with making a decision whose ramifications cannot be accurately determined.  This is the core problem with all decisions, due to uncertainty and the lack of relevant information.  When it seems possible that the ramifications of a decision might be bad, but could also be great, then we have a “hard decision” on our hands.  Satisficers would naturally make the decision by evaluating the worst case, with whether such an outcome could be tolerated as a key criterion.  At the same time, the other criterion would be how good the ramifications could be.  Even though most people give much more weight to negative results, there are enough cases where people are ready to take a certain risk for the chance of gaining much more value.  Many times taking the risk is the right decision for the long term.  This is definitely true for organizations that could gain a lot by making many decisions whose average gain is high and whose damage from losing is relatively small.  While there are organizations that operate like this all the time, such as high-risk funds, the vast majority of organizations behave as if the single-number forecast is what the future is going to be. In such a culture, failing to achieve the ‘target’ is attributed to the incompetence of specific people, who are openly blamed for it.  This culture forces medium and high-level managers to protect themselves by aiming at lower business targets that they feel confident can be safely achieved.

The devastating damage of managers’ fear of raising new ideas is that the organization stays stuck in its current state, exposed to the probability that the competition will learn how to handle uncertainty in a superior way that vastly reduces that fear.

This can be done by radiating to the organization’s employees that every forecast should be stated as a range rather than a single number.  When any opportunity or new idea is formally analyzed by a team of managers from all the relevant functions, checking two different scenarios, one based on conservative assessments and the other on reasonably optimistic ones, then such a team decision is better protected from unjust after-the-fact criticism.  Addressing uncertainty through estimated ranges and the creation of two reasonable extreme scenarios is a key element of Throughput Economics, aimed at supporting much better decisions, and it opens the door for many new ideas.