Decision Support Systems (DSS)

By Avraham Mordoch and Eli Schragenheim

How can the rapid development of new technology effectively help organizations achieve more of their goal?  The vast majority of new technologies have a considerable impact on IT (Information Technology) departments, putting huge pressure on the workload of the IT people in most organizations, who need to keep themselves frequently updated. This causes headaches for top management and a lack of focus, instead of helping them move the organization forward.

Yet the new power of computerization, including methodologies like Big Data, Artificial Intelligence (AI) and the Internet of Things (IoT), could also be used cleverly to improve the effectiveness of management by leading the organization to secure and successful growth of its activity.

On one hand, the new technologies allow getting more data, which is also more accurate than ever before.  That data, when properly analyzed with focus on what is truly meaningful, could serve managers in analyzing the current state of the business and its weaknesses, and could lead to new ideas on how to improve the bottom line.

This article offers new ways to use recent technology to give management beneficial support in evaluating new ideas or preparing for expected changes coming from elsewhere.  We have chosen to focus on manufacturing organizations, which face both the threat and the potential benefits of digitization of the manufacturing shop-floor, considered to be the fourth industrial revolution and thus given the title Industry 4.0.  The threat is being pushed into enormous expenses without gaining any business benefits. The capabilities of the new technology could assist a dramatic improvement in the way tactical and strategic moves are evaluated.  The point, though, is that in order to materialize the benefits some management paradigms have to be challenged and replaced with common-sense paradigms that utilize the new capabilities to support decisions.

The current types of software systems supporting manufacturing organizations can be classified into four types:

  1. MES (Manufacturing Execution Systems). This type of system is focused on the very short term and aims at providing operators and production management with the most up-to-date state of the flow, from raw materials all the way to finished-goods inventory.  It allows handling priorities, fixing problems fast and achieving efficient utilization of the equipment.  MES collects data and organizes it in a way that can be easily viewed by middle operational managers. SCADA systems, for example, are a subset of MES systems.
  2. ERP (Enterprise Resource Planning). We include in this class also the CRM (Customer Relationship Management) systems. This class consists of a suite of integrated applications that the manufacturing organization can use to plan operations and collect, store, manage, and interpret data from different business activities. What integrates the various parts of the ERP class of systems is one database of all the key transactions, financial and material, that have been recorded or are planned for the short to medium term.  The main function of this type of system is planning the basic operations required to deliver the firm orders, while also recording the transactions and instituting order and systemization of all the data related to the main processes in the organization. ERP and CRM systems are mainly data systems with some crude planning functionality. When information is defined as the answer to the question asked (Goldratt, The Haystack Syndrome), ERP and CRM supply answers to the most frequent and simple questions, like what needs to be done in order to deliver a customer order. SAP, Oracle and Dynamics 365 are just a few examples of ERP systems.
  3. BI (Business Intelligence). The objective of BI programs is to display high-level information for top management, providing a picture of the current situation and possibly pointing to certain observed trends.  The power of BI technologies is the ability to collect relevant data elements from various databases. Internal data is mixed with data collected from the Internet and used to create graphs and charts, so management can be aware of what’s going on within their organization and how it compares to what’s going on in the market and with their competitors.  The Key Performance Indicators (KPIs) are supported by BI, making them clear to top management.  This gives management the background to evaluate ‘what to change?’  But it does not provide the tools for ‘what to change to?’, and definitely not for ‘how to cause the change?’.  In other words, BI supports decisions by pointing to required areas, but it does not support specific decisions.
  4. Decision Support Systems (DSS). While the term DSS was coined as early as the 1980s, the true capability of actually supporting decisions has been achieved only recently. Every management decision considers a change to the current state.  Every significant decision is also exposed to considerable uncertainty.  So, the key capability of a DSS is to be able to direct the managers to various alternatives to the considered decisions and present the possible ramifications of these changes.  We can divide this level of DSS into two parts:
    1. Supporting routine decisions currently made by experts, so that less experienced people can take them, or even letting the computer make the decision.  These computerized programs are based on new Artificial Intelligence technologies and create a variety of expert systems that support such decisions.
    2. Supporting more significant tactical and strategic decisions by providing the decision makers with a holistic analysis of the potential financial and other ramifications of the decisions.  These are systems that support business organizational decision-making, including decisions that consider unstructured or semi-structured potential opportunities that are exposed to significant uncertainty. The assessment should consider the short term as well as the long term. A DSS must “understand” the cause-and-effect relationships between the different functions of the organization. These systems should allow a direct interaction between the human decision-maker(s) and the computerized algorithm.  The objective, given the amount of uncertainty and the lack of full, precise information, is to present the decision-makers with a full picture of what MIGHT happen, for good and for worse, as the result of the decision(s).

The above four classifications are not clear-cut and there are systems that cross the lines between them.

On top of that, there is often an interaction, even a loop, between the above types of systems. The ERP consumes data accumulated by the MES and accordingly creates work orders that feed the MES.  The ERP database has a major role for the BI system in showing the current state, and the ERP and BI data are input into DSS programs.

Interestingly enough, the effort a manufacturing organization needs to make to implement these systems is especially significant when implementing an MES system, since there is a need to overcome cultural objections, including the antagonism found in organizations with no established culture of reporting what has been done. Once this initial infrastructure is laid down, it is somewhat easier to implement and properly use the ERP system, and far easier to continue climbing the ladder and implement Expert Systems and the higher level of DSSs. So, the effort is reduced going up the ladder through the four types of systems, but the benefits from the implementations increase, and there are very significant benefits when top management, the C-level managers, use a DSS for solving the crucial dilemmas they may have.

We have to take into account that manufacturing organizations are both complex and exposed to significant uncertainties. Still the C-level managers have to make tough decisions like:

  • Should the company offer packages of its existing products for a reduced price?
  • Should the company accept small orders for customized products for a not-too-high markup?
  • Should the company expand the product-mix with an additional product family (or families)?
  • Should the company save considerable cost by shrinking its resources, as well as stopping the production of products with very low demand?
  • Should the company go on a massive advertising campaign?
  • Should the company participate in a big tender, quoting a moderate price, knowing that winning might affect the good delivery performance of the regular orders?
  • Should the company invest in opening a new export market?
  • Should the company invest in a new production-line when the market seems to go up, but some people believe this upward trend is going to stop?

These decisions lie outside the comfort zone of the decision makers, because of the obvious risk and because there is much less past experience with such situations.  The decisions are risky not just from the perspective of the organization, but also from the perspective of the personal risk of the decision maker, who ties himself/herself to the success or failure of the initiative.

The above risks force conservative decisions whenever the needed decision is beyond the known comfort zone. Lack of proper support for a holistic analysis blocks many organizations from achieving their true potential.

There are two big obstacles for any DSS tackling the above decisions and many others. One obstacle is letting the intuition of the people close to the relevant area play its role in the analysis.  Even when the situation is beyond the comfort zone of the decision-maker, that intuition is still valuable, as the people involved always know something that is more than nothing.  While the lack of good, precise, relevant data is a constant issue, analyzing what MIGHT happen is a valid possibility, which yields a focused picture of the actual risk.

The second obstacle is being able to evaluate the proposed decision when it is added to everything else the organization is doing or committed to do.  This requires a deep understanding of the rules behind the flow of materials, products, orders and financial transactions, including the various dependencies in Operations and in Sales.  This leads to massive calculations, checking the state of capacity, materials and cash.

The DSS program needs to “simulate” the top-level dilemmas (like the examples above) and come up with the predicted financial results.  It has to make it easy to run a variety of ‘what-if’ scenarios and compare the results.  In the end, it has to display the predicted results for, at least, two different scenarios: one based on reasonably conservative assessments and the other on reasonably optimistic ones.  The range of the end results means the actual result should fall anywhere between the two extremes of the range.  The decisions should never be made automatically by the system – it needs constant intervention by the decision maker, looking for better alternatives and using human judgment to make the final decision.
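
To make the idea of comparing a conservative and an optimistic scenario concrete, here is a minimal sketch in Python. All the numbers and names (products, demand figures, capacity, OE) are hypothetical and only illustrate the kind of calculation a DSS would run at a much larger scale.

```python
# A minimal sketch of a what-if comparison between a conservative and an
# optimistic scenario.  All figures are hypothetical; a real DSS would
# evaluate the full product mix against the capacity of every relevant resource.

PRODUCTS = {
    # product: (throughput per unit, minutes per unit on the critical resource)
    "A": (40.0, 5.0),
    "B": (25.0, 3.0),
}

CAPACITY_MINUTES = 9_000        # available minutes on the critical resource
OPERATING_EXPENSE = 60_000.0    # cost of maintaining the current capacity

def evaluate(demand):
    """Return (total Throughput, load on the critical resource) for a scenario."""
    t = sum(units * PRODUCTS[p][0] for p, units in demand.items())
    load = sum(units * PRODUCTS[p][1] for p, units in demand.items())
    return t, load

scenarios = {
    "conservative": {"A": 800, "B": 1_200},
    "optimistic":   {"A": 1_100, "B": 1_600},
}

for name, demand in scenarios.items():
    t, load = evaluate(demand)
    print(f"{name:12s}  T={t:10,.0f}  profit={t - OPERATING_EXPENSE:10,.0f}  "
          f"load on critical resource={load / CAPACITY_MINUTES:6.1%}")
```

With these invented numbers the conservative scenario stays comfortably within capacity, while the optimistic one overloads the critical resource; the decision maker then has to judge whether to buy extra capacity (raising OE) or give up part of the potential demand – precisely the kind of trade-off the range of scenarios is meant to expose.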

Generally speaking, there are two main ways to accomplish effective support for decisions:

  1. Being able to carry out a mass of calculations, based on good cause-and-effect rules, which describe the materials and capacity requirements for every product sold as well as the impact on revenues and cost. This way is described in detail in the book Throughput Economics, written by Eli Schragenheim, Henry Camp and Rocco Surace.
  2. Using a powerful computerized simulator that closely follows the flow rules, and records revenues, truly variable expenses and the cost of capacity as an integral part of the simulation. The uncertainty has to be input into the simulator’s critical parameters to provide the possible range of the results.

The mass-calculations way is more visible to the decision-makers, as the calculations are all straightforward and the added value of the computer is the ability to carry out such mass calculations.  This means the decision makers fully understand the assumptions at the core of the calculations.

Using computerized simulation better fits complex situations, either within the production floor or with complicated dependencies within sales.  For instance, simulating different flow rules, like batch sizing and different prioritization, is much more effective than mere calculations that have to rely on assumptions regarding the effectiveness of the flow rules.  On the other hand, the user has to inquire deeply to validate that the internal parameters of such a simulation are in line with reality in order to trust the results.
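
The following is a deliberately tiny sketch of what “simulating different flow rules” means in code, assuming a single work center with invented setup, processing and arrival times; a real simulator would model the full routing, several resources and the chosen prioritization rules.

```python
import random

# Toy flow simulation: units arrive one by one and are processed in batches,
# with a setup before each batch.  All times are invented for illustration.

random.seed(1)

def average_lead_time(batch_size, n_units=400, setup=30.0,
                      unit_time=2.0, arrival_gap=2.5):
    arrivals = [i * arrival_gap for i in range(n_units)]
    clock = 0.0
    lead_times = []
    for start in range(0, n_units, batch_size):
        batch = arrivals[start:start + batch_size]
        # a batch starts only when its last unit has arrived and the machine is free
        clock = max(clock, batch[-1]) + setup
        for arrived in batch:
            clock += unit_time * random.uniform(0.8, 1.2)   # variable processing time
            lead_times.append(clock - arrived)
    return sum(lead_times) / len(lead_times)

for batch_size in (10, 50, 200):
    print(f"batch size {batch_size:3d} -> average lead time "
          f"{average_lead_time(batch_size):7.1f} minutes")
```

Even this toy model exposes behavior that a static calculation would miss: with these invented parameters very small batches waste capacity on setups and the queue keeps growing, while very large batches make early units wait a long time for their batch to accumulate.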

Both ways have to start with a good representation of the current state as a reference against which all the changes are compared.  For a simulation this means creating a ‘digital twin’ that reproduces the current performance of the organization.

A computerized system that produces a reliable reference, or digital twin, is able to introduce a variety of changes and compare the results to the reference, and also depicts the potential impact of uncertainty and lack of accurate data, deserves to be called a decision support system (DSS). Such a system will significantly reduce the risk in taking top-level decisions and will also reduce the procrastination usually found whenever ‘hard decisions’ are evaluated.  This would help significantly to put the company ahead of the competition.

The first few true DSSs to appear in the market will enjoy a “Blue Ocean Strategy” compared to the “Red Ocean” which is typically the current situation in systems supporting manufacturing organizations.

Preparing for a recession

By Rudi Burkhard and Eli Schragenheim

Recession is an external threat that no organization has the power to stop or even delay.  A recession starts when enough people (those who influence the economy) expect it to happen soon and start to take action to protect themselves from its impact. Actions by politicians or central banks, or a major financial scandal, may accelerate people’s decisions to act.

What should a company do when recession is a close possibility?

A recession pushes most managers out of their comfort zone.  Managers’ intuition about the markets’ direction dives and their fear level surges.  Every organization suffers the reaction of clients and suppliers to the coming recession. The first blow to sales is often much bigger than the actual decline of the economy.  The knee-jerk reaction is to reduce inventories; suppliers are the first to feel these reactions, sometimes even over-reactions, to the recession’s threats.

It takes time to understand the actual impact of a recession. Until that time hysteria and limited intuition frequently cause major mistakes. Managers also do not have good intuition about a recession’s impact on real physical demand. They often do not understand what really takes place in the economy.

Common practice is to cut cost. Warning: this common practice takes management’s focus away from the one parameter they must not hurt: sales!  To survive a recession a company must, as much as possible, protect sales revenue. We don’t claim that reducing cost is not important or critical to survival, but managers should carefully analyze their situation to ensure they do not disrupt their sales more than the recession does. They must be careful not to make the recession’s damage worse.

Estimate the impact of a possible X% sales decline

Throughput Accounting, and more recently Throughput Economics, lead to a clear set of data and information that together estimate the range of valid probable results from such a macro-economic event.   Using these two tools, and the related knowledge, can lead to a much better perspective on how a recession might cause damage to the company’s future.

Two financial parameters are of special importance:

  1. Total Throughput (T), revenues minus totally variable costs (TVC).
  2. Total operating expense (OE) – all the money spent to maintain the necessary capacities of resources (space, equipment, manpower and even cash).

Comment:  The TOC goal measurements also include the money captured within the organization, called ‘Investment’ or just ‘I’.  While it is one of the key measurements, for evaluating the potential impact of a recession the important part of Investment is inventory, which is a natural candidate for reduction.

Interested readers are referred to the Theory of Constraints materials on Throughput Accounting, like Thomas Corbett’s book Throughput Accounting and the more recent Throughput Economics by Schragenheim, Camp and Surace. These resources explain how the concepts they introduce give much better insights into the current and future financial state of a company.

Throughput is the cash inflow from sales (minus the cost of materials).  OE is the cash outflow to maintain the necessary capacity required to stay in business; it also includes the depreciation of the investment in capacity.  Sufficient capacity is required to support the two key flows:

  1. The Flow of Value, focusing on the current flow of products and services to clients. This flow encompasses the entire chain from purchase orders to suppliers to product delivery to clients and finally payment collection.
  2. The Flow of Initiatives to improve (increase) the Flow of Value. This flow contains all improvement projects and new idea evaluations for products and processes.

A cost cut reduces capacities; for instance, by stopping all overtime, special shifts and temporary workers, production capacity is reduced.  Depending on local regulations the company could also consider lay-offs and/or short workweeks.  Understanding the impact of these actions on sales volumes is critical for the survival of the organization.  Companies should use careful prioritization to avoid cutting the capacities that maintain the Flow of Value. An unavoidable and longer-lasting decrease in demand makes such cuts possible, as long as delivery lead-times and reliability to the remaining clients are not negatively impacted.

Practically this means that when cutting capacity, the cost of maintaining capacity should be considered mainly for resources that support the Flow of Initiatives, rather than resources required to maintain the Flow of Value.  Proper consideration reduces the threat to future prosperity posed by delaying or cancelling initiatives.  It also implies that some of the luxuries management gives itself are valid potential cost reductions. Every organization may have capacity that supports the Flow of Value only in an indirect way, or perhaps does not support it at all.  Such capacities, with a questionable direct contribution to the business, are the natural and sensible candidates to be cut in a recession.

In practice this means considering the short-term benefit from cost cuts vs. the longer-term benefits that will stem from the Flow of Initiatives. Cutting cost that stops or slows the Flow of Initiatives threatens future prosperity and creates opportunities for competitors.  Sometimes the short-term survival dictates having to give up future opportunities, but extra care is needed for that.

The two categories of actions necessary to keep a company safe, even truly successful, over the longer term are:

  1. Predict the possible range of the financial impact in order to correctly choose the resources that must remain to provide a good enough financial (cash flow) performance during the recession and enough to best support growth once the recession recedes.
  2. Improve Operations to a level where reaction to market demand changes is faster than any competitor’s. By this the company gains a clear, even decisive, competitive edge. Rapid identification of the products less impacted by the recession is one example of how improved operations can be used to gain an advantage over competitors. A recession presents opportunities to capture more market demand; demand that until the recession has been served by competitors. A recession ‘forces’ clients and their suppliers to reduce inventory, which in turn requires a faster response to supply smaller quantities.  The supplier able to respond faster with smaller quantities (replenishing its clients at a higher frequency) than the competition gains a decisive advantage.  Spare capacity should be used to accelerate response times, and to keep low stocks of finished goods, to maximize the competitive advantage.

Major points to realize when predicting the depth and duration of the recession are:

Operating Expense behavior is not linear:  it is impossible to reduce capacity by exactly the predicted decrease in sales. Most resources come in sizable increments, so cutting capacity is possible only in amounts different from what the decrease in sales requires.
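
A small numeric sketch of this non-linearity, with invented figures: sales drop by X%, Throughput drops roughly proportionally, but capacity (and therefore OE) can only be shed in discrete increments, so the impact on profit cannot be read off a simple percentage.

```python
import math

# Hypothetical figures only: a 20% sales decline cuts Throughput by ~20%,
# but capacity (and so Operating Expense) can only be released in whole chunks.

CURRENT_T = 1_000_000.0      # yearly Throughput before the recession
SALES_DECLINE = 0.20         # the X% decline being evaluated

CHUNK_COST = 90_000.0        # yearly cost of one capacity increment (a shift, a line)
CHUNKS_IN_USE = 10           # current increments, so OE = 900,000
PROTECTIVE = 1               # increment kept as protective capacity for the Flow of Value

new_t = CURRENT_T * (1 - SALES_DECLINE)

# The load shrinks with sales, but only whole increments can be released,
# and one increment is deliberately kept as protective capacity.
needed = min(math.ceil(CHUNKS_IN_USE * (1 - SALES_DECLINE)) + PROTECTIVE, CHUNKS_IN_USE)
new_oe = needed * CHUNK_COST

print(f"Throughput: {CURRENT_T:,.0f} -> {new_t:,.0f}  ({SALES_DECLINE:.0%} drop)")
print(f"OE:         {CHUNKS_IN_USE * CHUNK_COST:,.0f} -> {new_oe:,.0f}  "
      f"({1 - new_oe / (CHUNKS_IN_USE * CHUNK_COST):.0%} drop)")
print(f"Profit:     {CURRENT_T - CHUNKS_IN_USE * CHUNK_COST:,.0f} -> {new_t - new_oe:,.0f}")
```

With these invented numbers a 20% drop in sales allows only a 10% cut in OE, and the profit swings from a 100,000 surplus to a small loss; the exact figures are irrelevant, the shape of the effect is the point.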

Can we make reliable estimates of the extent of reduced demand? Can we make reliable estimates of the extent of price reductions? We cannot!

All decisions are based on forecasts that are mostly intuitive, sometimes quantitative, or a combination.  Forecasts are always based on the past with assumptions on how past behavior will change.  The management practice of treating forecasts as deterministic is the core problem behind erratic decisions about demand.  A single number will never be reality – the best we can do is estimate a range and prepare to respond quickly as reality becomes clearer.  A valid way is to define a range from a conservative to an optimistic assessment. Both estimates should be reasonable; put aside possible results with a very low probability.

Thus, it is possible to estimate the reasonable range of the impact of the recession on a specific market and then check the extreme predictions to decide what, how much and where to cut cost and how much stock is absolutely necessary.  Note, you need to check both extremes, not just the conservative side.  The optimistic side shows what you might lose if you consider only the worst case.  As the recession’s impact can be quite different for different businesses and for different countries or regions, the responsibility of the management of every company is to estimate the reasonable range of change in their specific market.  Estimating a range is easier (not “easy”, just easier) than predicting a single number. The company can discern reasonable estimates of how bad the situation might become and what can be done about it. The company should also consider what it can achieve if it maintains the level of resources required to obtain the best outcome of the recession.  This is the Throughput Economics process for obtaining vital and relevant information that supports superior decisions and results.

Both the conservative and the optimistic assessments lead to actions.  The job of management is to make decisions such that, even when they turn out to be based on the wrong side of the estimates, the damage is limited, while the potential gains are high.

Probably all managers realize that in their market final consumer behavior is critical.  Consumers dictate demand.  Consumers’ demand impacts all players in a supply or value chain. For some value chains, or positions in the value chain, the recession’s impact may be somewhat delayed.  It is essential that B2B organizations extend their evaluation beyond their immediate clients. In order to predict the evolution of demand they must evaluate what is likely to happen to demand all along the chain, starting from the final consumer.  Suppliers to retail organizations might suffer a very high drop in sales at the beginning of the recession.  However, the real drop in sales to the end consumers is usually much smaller; the retailers simply decide to reduce inventories.  For suppliers this means demand is likely to return quite soon. Understanding clients, and their clients’ business, is an essential capability for every organization in a value chain. This capability is not only critical in a recession – it is always a critical competency to understand clients’ needs even better than they do!

How do streamlined operations win in the market during recession?

The Theory of Constraints (TOC) for manufacturing and distribution companies is the ultimate Lean. TOC methods ensure excellent response times and/or product availability with the lowest possible amount of work in process (WIP).  These methods are the correct choice whether in a recession or not.  They also require the least amount of cash to finance inventory and operations.  This creates the potential for a competitive advantage, especially during a recession, when everyone is under pressure to spend money much more carefully.  Every purchase, by an individual or by an organization, is thoroughly checked. Competition becomes fiercer than before, opening the path to price competition (price wars). The other, often ignored, option is to compete using faster response and smaller quantities. Throughput Economics and Simplified Drum-Buffer-Rope (SDBR) can be combined to define your best strategy and tactics for dealing with an arriving recession.  As inventories are flushed out of the system, once the original demand comes back the suppliers should be ready to deliver, with less inventory, the original demand plus the new demand achieved through fast response during the recession.  Constantly updated holistic information on the actual demand and its trends allows management to estimate when demand will start to rebound. This information is critical to decisions about the required capacity levels, which in turn determine cost and cash flow. The questions to answer are how strongly the company should protect current sales and to what extent improvement projects should continue to be implemented.

The Theory of Constraints (TOC) tools, particularly SDBR and Throughput Economics, support this route, combining operational and financial capabilities.  The idea is to limit the potential damage of a recession, gain competitive advantages and be ready when the economy rebounds to fully capitalize on the acquired competitive edge.

The devastating impact of the fear of uncertainty

Suppose you are the CEO of a manufacturing company and Ken, your VP of Operations, comes to you with the idea of offering fast-response options to clients for a nice markup in price. That means, on top of delivering in four weeks for the regular price, delivering in two weeks for a 15% markup and in one week for 30%.  You ask Ken what Mia, the VP of Sales, thinks of it, and he tells you she does not object, but does not fully support it either.  As long as you, the CEO, support the idea she will cooperate.  A similar response comes from Ian, the CFO of the company.

There are two possible problems with the idea.  One is that Operations might be unable to deliver that fast, but Ken is confident Production can do it.  The second potential problem is that clients would refuse to pay the markup and still put pressure on the company to respond faster.

Suppose you tell Ken the following: “Sounds like an interesting idea.  If you truly believe in it – go and do it.  Get the support of Mia and Ian and bring results.”

Is this “good leadership”?  The words “if you truly believe …” signal that the full responsibility for a failure would fall on Ken, no matter what caused the failure, and despite the fact that any new move is exposed to considerable variability.  Would this pave the way for more people with creative ideas to bring them to management?  What do you think would happen to Ken if “his idea” fails to impact sales?

A manager who dares to raise a new idea for improving the performance of an organization faces two major fears.  The first is of being unable to meet the challenge and the responsibility and accountability that come with it.  The second fear, much more devastating, is of unjust criticism if the idea does not work according to prior expectations because of the significant inherent uncertainty.  Expectations are usually built according to the optimistic forecast, even when the one who raised the idea also took into account the less optimistic results.  The unjust behavior of critics, who choose to ignore the uncertainty, is what eventually gives most managers cold feet, preventing them from raising new ideas.

Think of the conflict of the coach of a top sports team before an ultra-important game, when a key player is returning after a long break due to a bad injury:  should he use the player in the game?  On one hand, that player could be the decisive factor for winning the game.  On the other hand, he might be injured again.  How would you judge the coach after the game?  How much is your judgment influenced by the actual outcome?

A coach before a game has to make several critical decisions.  But when a manager has a daring new idea, simply ignoring the idea is a valid option.

Nassim Taleb is absolutely right in saying that instead of trying to avoid uncertainty we should use uncertainty for our benefit.  However, you cannot just ignore the fear that any negative actual result would cause too much personal damage, much larger than the damage to the company from the specific idea.  The fear is intensified by knowing that many of the people who might judge the outcomes of the idea do not really understand the nature of uncertainty.  Then there is a concern that a “failure” would play a role in the power games within the organization, causing damage to the person who came up with an idea that worked less well than expected.

Most people are afraid of a variety of uncertain events.  One question is what you do with the fear.  Most people delay critical decisions, which is the same as deciding to do nothing.  Others take the uncertain decisions fast in order to avoid the torment of the fear.  The use of superstition to handle uncertainty, and mainly to reduce the fear, is also widespread.

As the reader has already realized, the focus of this article is not on individuals who make decisions for their own life, but on people who are making decisions on behalf of their organization.

One difference is that decision making on behalf of the organization should be based on rational analysis.  The business culture of organizations radiates the expectation of optimal decisions, carefully checking the cost-benefit relationships.  People make their own decisions based mainly on emotions and then justify the decisions using rational arguments.  Organizations might have certain values based on the emotions of the owners, but the vast majority of the derived decisions are supposed to be the outcome of rational analysis.

Are they?  Can people have two different sets of behaviors when they need to make decisions?

The famous Goldratt saying, “tell me how I’m measured and I’ll tell you how I’ll behave”, gives a clue to what could cause a basic change in behavior between the workplace and all the other environments a person interacts with.  Every organization sets certain expectations of its employees.  When continuing to work for the organization counts, and certainly when there is a wish to go up the ladder, fulfilling those expectations has a major impact.

“People are not optimizers, they are satisficers”, said Prof. Herbert Simon, the 1978 Nobel laureate in Economics.  ‘Satisficer’ means trying to meet satisfactory criteria; once they are met, the search for better alternatives stops.  If Prof. Simon is right, and if the organization culture promotes the value of optimization, then organizations demand a different way of making decisions than what people do for themselves.

The critical and devastating conflict lies with making a decision whose ramifications cannot be accurately determined.  This is the core problem with all decisions, due to uncertainty and lack of relevant information.  When it seems possible that the ramifications of the decision might be bad, but could also be great, then we have a “hard decision” on our hands.  Satisficers would naturally make the decision by evaluating the worst case and whether such an outcome could be tolerated as a key criterion.  At the same time, the other criterion would be based on how good the ramifications could be.  Even though most people give much more weight to negative results, there are enough cases where people are ready to take a certain risk for the chance of gaining much more value.  Many times taking the risk is the right decision for the long term.  This is definitely true for organizations that could gain a lot by taking many decisions whose average gain is high and whose damage from losing is relatively small.  While there are organizations that operate like this all the time, like high-risk funds, the vast majority of organizations behave as if the single-number forecast is what the future is going to be. In such a culture, failing to achieve the ‘target’ is attributed to the incompetence of specific people, who are openly blamed for it.  This forces medium and high-level managers to protect themselves by aiming at lower business targets that they feel confident can be safely achieved.

The devastating damage of managers’ fear of raising new ideas is being stuck in the current state, while being exposed to the probability that the competition will learn how to handle uncertainty in a superior way that vastly reduces that fear.

Reducing that fear can be done by signaling to the organization’s employees that every forecast should be stated as a range rather than a single number.  When any opportunity or new idea is formally analyzed by a team of managers from all the relevant functions, checking two different scenarios, one based on conservative assessments and the other on reasonably optimistic ones, then such a team decision is better protected from unjust after-the-fact criticism.  Addressing uncertainty by estimated ranges and the creation of two reasonable extreme scenarios is a key element in Throughput Economics, aimed at supporting much better decisions, and it opens the door for many new ideas.

The Value Generated by TOCICO


TOCICO, as an independent and neutral organization of the international TOC community, is at a crossroads: its value to the community is, to my mind, very high, but it struggles with cash problems. For a non-profit organization the core problem is that generating value for customers does not always return enough revenue.

Of course, the duty of the management of any non-profit organization is to secure the cash necessary for generating the value.  Generally speaking there are three ways for non-profit organizations to raise cash, and a combination of the three sources is very common:

  • Sell the generated value in the same way as for-profit organizations do. Membership fees are one way to sell value. Selling specific products or services is another way.
  • Be financed by the government or another large organization that provides a budget.
  • Donations by the various parties who appreciate the global value generated by the organization. Bill Gates comes to mind as an example of a billionaire who donates huge sums to keep non-profit organizations effective.  Most of the performing-arts organizations in the US are financed this way, on top of selling tickets.

Two general comments:

  1. Every non-profit organization produces value that, for whatever reason, is difficult to sell commercially, and thus such an organization has to be careful to protect its goal from being driven by profit considerations. Customers evaluate every purchase based on the perceived value of the specific product/service they buy, and not by the overall value generated by the non-profit organization.  Thus, the incoming revenues do not represent the true value of the organization.
  2. Every non-profit organization is constrained by cash. The underlying assumption is that the organization is able to generate more value when more cash is available.  While this assumption has to be checked against reality, it should drive management to exploit the money to generate as much value as possible, no matter how much cash that value brings back.  Eventually the budget of such an organization is spent on maintaining resources, and it makes sense that the cash limitation creates a specific internal capacity-constrained resource (CCR) that limits the Flow of Value, while the other resources have excess capacity in order to support the internal strategic constraint.

TOCICO was, and still is, financed by selling value, mainly the revenues from the annual conference, delivering the certification exams, and membership fees.

What is the value generated by TOCICO?  And for whom is this value significant?

The goal of TOCICO, to my mind, is to support the spread of TOC awareness, knowledge and successful implementation throughout the world. 

This goal could be especially valuable to four different market segments. Each of the segments should be divided into two sub-segments:  those who are already familiar with TOC, and those who are not.  Generally speaking, TOCICO reaches only those who are somewhat familiar with TOC, like having read The Goal.  We need to find marketing ways to raise enough curiosity about TOC, and then the value from TOCICO would become clear.  Here are the four segments:

  1. Management consultants. Many managers look down on consultants, viewing them as people without deep understanding of the particular reality who are also not accountable for the results.  However, getting an external viewpoint, based on many other organizations, could be a major opportunity to identify flawed assumptions, which are typical for specific industries, and which any inside manager finds very difficult to identify and challenge.  Such an external view could bring to the table new opportunities that the competitors, being trapped by the same flawed assumptions, cannot recognize.  Certainly TOC develops the skills of consultants to quickly identify key problems and deduce the flawed assumptions behind those problems.
  2. Every manager in any organization. This is based on recognizing that understanding the TOC insights well significantly improves the managerial skills of every manager.  Most managers would gain immense value when they recognize the opportunity.  A possible negative branch for such managers is to announce their views too early and thereby be viewed by others as zealots, perhaps even being forced to leave the organization.  So, understanding the perspectives of the other managers should be part of the TOC insights and education.  The focus here is on the personal value to the managers, assuming that doing their job well would improve their self-satisfaction as well as their career.
  3. Organizations and corporations that are using TOC, which could get immense value by exposing the wide spectrum of TOC knowledge to all the organization’s members. A more specific need of such a corporation is to get focused TOC knowledge and certification for its employees.  Corporations that are just contemplating implementing TOC should, in the vast majority of cases, look for TOC consultants to guide the implementation. TOCICO shouldn’t recommend one TOC consultant over another, but participating in the TOCICO conference, or watching videos and webinars presented by consultants, could indirectly assist in the choice.
  4. Academics, both students and professors, who get access to important knowledge that is not a regular part of the current curriculum. Certainly the TOC insights could easily serve new worthy research topics.

TOC knowledge gives consultants a significant advantage, no matter whether they use the name ‘TOC’ in their practice or not.  The ability to quickly identify the constraint / core problem, to use the available TOC insights for the direction of solutions, plus the systematic use of cause-and-effect, are required capabilities for doing a better job.  This is the highest value consultants can get.  Add to it the spread of TOC, which brings more relevant leads and opportunities.

Another value to consultants is the SHARING platform provided by TOCICO.  When you use ideas that are in conflict with the current paradigms, the internal discussions with other experienced consultants are of immense value.  TOCICO is a platform for such internal discussions to take place.

The value to corporations is especially interesting.  In most cases the initial TOC implementation is guided by consultants.  The managers who learn directly from the consultants might, mainly after the consulting company leaves, get a constant stream of value from TOCICO.  At this stage the corporation faces new needs.  One of them is to make sure all its employees know what they need to know to get the improved results.  New employees also need simple and effective training on the basic TOC knowledge to understand the new thinking behind the not-so-common procedures.  This might require guiding the employees to take the certification exams to prove their level of understanding.  To support learning the materials TOCICO will launch several educational programs that cover the various certification areas.

Another need of corporations, which can be addressed by TOCICO, is answering the question: what to implement next?  This means covering topics that have not yet been implemented by the organization, like Throughput Economics and the use of Goldratt’s Six Questions for guiding the evaluation of new products/services.  An effective way to evaluate what additional TOC insights should be incorporated next is by participating in a TOCICO conference as well as viewing several of the videos and webinars offered by TOCICO.

Yet another need of corporations is getting external recognition of achievements.  When the implementation is done in one division it is valuable to radiate the excellence to the other divisions.  External recognition by TOCICO could create pressure on other divisions to achieve this kind of recognition.  Of course, the PR department of the organization could also use that public recognition of achievement to generate more value.

The products/services of TOCICO need expanding into education/training programs to provide more value to all its market segments.  In particular there is a need for training programs. The Alex Rogo program, announced at the 2019 TOCICO conference in Chicago, provides effective guidance for self-learning. Training programs for higher-level people could be adjusted to the specific requirements of corporations.  Eventually TOCICO should be able to offer basic TOC education to any individual, or organization, all over the world in a variety of languages.

In order to generate a new ongoing stream of value TOCICO needs to stabilize its financial state.  This means TOCICO needs the support of its members, both individuals and corporations.

My call is to everyone who appreciates the potential value to be generated by TOCICO: become a member and thereby support the ongoing TOCICO activities.

Increasing the membership is a necessary condition. When the membership becomes wide enough it will also become sufficient, as there are enough great people who are willing to contribute their time and do voluntary work to generate that value.  The neutrality of TOCICO is an asset that is absolutely necessary for both the value and the willingness to volunteer.

Keep in mind the global value of TOCICO when you evaluate the cost versus its specific value.  Eventually any charge for products/services generates revenues that are needed for carrying the full value of TOCICO to the world.

A new book opens a new direction for making superior managerial decisions


I have struggled with the insights for this book for almost twenty years.  While all the ideas are based on the current body of knowledge of the Theory of Constraints (TOC), they extend its applicability and usefulness.  TOC already has challenged the use of cost-per-unit and local performance measurements that have a huge negative impact on managerial decisions today.  Expanding the already well-advanced TOC BOK requires special care and self-checking every part in the chain of logic.  When Henry Camp and Rocco Surace joined me in the writing, including outlining the necessary direction of solution and adding their own perspectives, it was a huge and absolutely necessary help.

There is a basic difference between a book and a blog, as they serve different needs.  Short articles in the blog are focused on one insight and if the reader sees value in the insight, then more efforts are required to develop the generic insight into a practical process.  A book should encompass the development of several new insights and integrate them into a clear focused message that is valid both theoretically and practically.

What is the particular need for Throughput Economics?

We claim that good management decisions must be analyzed and supported in a very different way than is customary today.  Managers have huge responsibilities on their shoulders and they deserve a better method of considering the relevant and available information to generate the best possible picture of what might happen when any significant decision is undertaken.

The simple fact is that the current (and long established) methods of cost accounting distort the decision-making process by presenting a flawed picture of expected profits or losses resulting from the decision.  Managers might intuitively sense the impact of the considered decision on the bottom-line and they are also aware that the quality of their intuition is questionable.  Fear of unjust criticism, once the results become clear, is another factor that impels managers to utilize well-accepted tools, even if they feel those very tools are flawed.  In order to change the way managers are making decisions there must be a comprehensive alternative procedure that is demonstrably superior.

While the insights of TOC contributed much to clarifying the flaws of cost accounting tools for decision making, pinpointing the underlying flawed paradigm behind the concept of cost-per-unit could be quite beneficial. The core mistaken assumption is treating the cost of maintaining capacity as if it were linear.   This is just wrong, because the capacity of most resources can only be purchased in certain amounts – in chunks, if you will.  For instance, if you are looking for space for your office you might have a few alternatives, each with its own specific square footage.  Eventually, you choose the most convenient one, which has more space than you actually need.  It is up to you to treat the extra space as “waste” or as an opportunity: when triggered by new market opportunities, you already have the space you will need.  This benefit is offset by paying more for a bigger space in the meantime.  The point is that it is unrealistic to expect to be able to purchase exactly the capacity required for the changing level of activity in your business.

The fact that most resources have excess capacity means that consuming the surplus capacity generates no additional cost.  However, once the practical limit of the available capacity is reached, then any optional capacity increase is typically quite expensive and the quantities in which more capacity can be quickly purchased are normally subject to certain minimums.  This characteristic of buying capacity is what makes it non-linear.  The rub is: to know whether an additional consumption of capacity required for a new opportunity is ‘free’ or expensive, you must consider all your capacity requirements – the proposed new needs on top of all current activities.  In other words, a global calculation has to be made to estimate the actual impact of implementing a new decision on total operating expenses.

The most important decisions undertaken by any organization concern sales or capacity.  Sales are the key factor for income and maintaining capacity is the crux of expenses that enable the organization to provide what it sells.  It was an ingenious idea of Goldratt to look for two distinct information categories that impact any new decision concerning sales or capacity:  Throughput (T) on one hand and Operating Expenses (OE) (and Investment (I)) on the other.  T focuses on the added-value generated by sales.  OE describes the cost of maintaining the required capacity.  While changes in T usually behave in a linear fashion, the true impact of a new move on OE, the expenses for maintaining capacity, must consider the non-linear behavior of OE.  This non-linear behavior is often a big surprise to the managerial intuition of whether the proposed move is positive or negative.

What clearly follows from cost’s non-linear nature is the requirement to analyze any new potential deal, not just by its own specific details and definitely not by any ‘per-unit’ artificial measurement, but by simulating the new deal as an addition to the load of the current activity of the whole organization and then checking the impact on ∆T, ∆I and ∆OE.
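
The following sketch shows the shape of such a check, with hypothetical numbers: the new deal is added to the existing load, and ∆OE is non-zero only when the added load pushes a resource beyond its available capacity, forcing the purchase of an extra chunk of capacity.

```python
# Hypothetical sketch of evaluating a new deal on top of the current activity:
# deltaT is the added Throughput; deltaOE appears only if the combined load
# exceeds the capacity of some resource and extra capacity must be bought.

CAPACITY = {"assembly": 10_000, "packing": 6_000}        # available minutes
CURRENT_LOAD = {"assembly": 8_500, "packing": 4_200}     # minutes already committed
EXTRA_CHUNK = {"assembly": (2_000, 15_000.0),            # (minutes, cost) of an overtime block
               "packing":  (1_500, 8_000.0)}

def evaluate_deal(deal_t, deal_load):
    """Return (deltaT, deltaOE) of adding the deal to everything already planned."""
    delta_oe = 0.0
    for resource, minutes in deal_load.items():
        shortfall = CURRENT_LOAD[resource] + minutes - CAPACITY[resource]
        if shortfall > 0:
            chunk_minutes, chunk_cost = EXTRA_CHUNK[resource]
            chunks = -(-shortfall // chunk_minutes)      # ceiling division
            delta_oe += chunks * chunk_cost
    return deal_t, delta_oe

delta_t, delta_oe = evaluate_deal(deal_t=22_000.0,
                                  deal_load={"assembly": 2_400, "packing": 900})
print(f"deltaT = {delta_t:,.0f}, deltaOE = {delta_oe:,.0f}, "
      f"net effect = {delta_t - delta_oe:,.0f}")
```

Note that the very same deal would show ∆OE = 0 if the current load on assembly were lower; the “cost” of the deal depends entirely on what else the organization is already committed to, which is exactly why a per-unit cost figure misleads.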

Does this idea seem frightening because so many numbers and variables are involved?  This is where the right kind of decision-support software can help us with the calculations.  The principles are simple and straightforward, but making a huge number of calculations should be delegated to a computer, as long as we human beings dictate the logic.

The power of our book lies in going into the details of such a broad idea without losing its inherent simplicity.  We present the holistic direction, while also covering enough details to answer any doubt that might emerge.

In order to preserve the sensitive balance between the generic method and the tiny details, making sure nothing is lost in the process, we came up with several fictional cases where a management team needs to specifically analyze non-trivial new opportunities that could be great but might also be disastrous.  Unless the analysis is done comprehensively outcomes are practically impossible to predict.  I have already used these types of fictional cases to demonstrate the cause-and-effect behind generic principles in a previous book (Management Dilemmas).  In this book, the detailed fictional cases are of special importance.   Subsequent chapters refer to these cases and their intrinsic ideas in a more general way, explaining the processes from your perspective as an outside observer.  Our objective was to lead you to see the insights from both perspectives: the practical case where managers have to deal with a specific non-trivial decision and the higher-level world of defining the global process for dealing with a variety of such decisions.

Amazon, of course, sells and ships the book:  Throughput Economics by Eli Schragenheim, Henry Camp, and Rocco Surace

ISBN  978-0-367-03061-2 / Cat# 978-0-367-03061-2

If you need any help in purchasing the book write to me at:  elischragenheim@gmail.com

Threat Control – not just cyber!

The TOC core insights are focused on improving the current business.  TOC contributed a lot to the first three parts of SWOT: strengths, weaknesses and opportunities.  What is left is to contribute to the early identification of threats and then to developing the best way to deal with them.  Handling threats is not so much about taking new initiatives to achieve more and more success.  It is about preventing the damage caused by unanticipated changes or events.  Threats could come from inside the organization, like a major flaw in one of the company’s products, or from outside, like the emergence of a disruptive technology.  TOC definitely has the tools to develop the processes for identifying emerging threats and coming up with the right way to deal with them.

The last time I wrote about identifying threats was in 2015, but time brings new thoughts and new ways to express both the problem and the direction of solution.  The importance of the topic hardly needs any explanation; however, it still does not get enough management attention.  Risk management covers only part of the potential threats, usually just for very big proposed moves.  My conclusion is that managers ignore problematic issues when they don’t see a clear solution.

There are a few environments where considerable effort is devoted to this topic.  Countries, and their army and police, have created special dedicated sub-organizations called ‘intelligence’ to identify well-defined security threats.  While most organizations use various control mechanisms to face a few specific anticipated threats, like alarm systems, basic data protection and accounting methods to spot unexplained money transfers, many other threats are not properly controlled.

The nature of every control mechanism is to identify a threat and either warn against it or even take automatic steps to neutralize the risk.  My definition of a ‘control mechanism’ is: “A reactive mechanism to handle uncertainty by monitoring information that points to a threatening situation and taking corrective actions accordingly.”

While the topic does not appear in the TOC BOK, some TOC basic insights are relevant for developing the solution for a structured process that deals with identifying the emergence of threats. Another process is required for planning the actions to neutralize the threat, maybe even turn it into an opportunity.

Any such process is much better prepared when the threat is recognized a priori as probable.  For instance, quality assurance of new products should include special checks to prevent launching a new product with a defect that would force recalling all the sold units.  When a product is found to be dangerous the threat is too big to tolerate.  In less damaging cases the financial loss, as well as the damage to the future reputation, is still high. Yet, such a threat is possible for almost any company.  Early identification, before the big damage is caused, is of major importance.

The key difficulty in identifying threats is that each threat is usually independent of other threats, so the variety of potential threats is wide.  It could be that the same policies and behaviors that have caused one internal threat would also cause more threats, but the timing of each potential emerging threat could be far from the others. For example, distrust between top management and the employees might cause major quality issues leading to lawsuits. It could also cause a leak of confidential information, and a high number of people leaving the organization, robbing it of its core capabilities.  However, which threat would emerge first is exposed to very high variability.

It is important to distinguish between the need to identify emerging threats and deal with them and the need to prevent the emergence of threats.  Once a threat is identified and dealt with, it would be highly beneficial to analyze the root cause and find a way to prevent that kind of threat from appearing in the future.

External threats are less dependent on the organization’s own actions, even though it could well be that management ignored early signals that the threat was developing.

Challenge no 1:  Early identification of emerging threats

Step 1:

Create a list of categories of anticipated threats.

The idea is that every category is characterized by similar signals, deduced by cause-and-effect logic, that can be monitored by a dedicated control mechanism.  Buffer Management is such a control mechanism for identifying threats to perfect delivery to the market.

Another example is identifying ‘hard-to-explain’ money transactions, which might signal illegal or unauthorized financial actions taken by certain employees.  Accounting techniques are used to quickly point to such suspicious transactions.  An important category of threats is built from temporary failures and losses that together could drive the organization to bankruptcy.  Thus, a financial buffer should be maintained, so that penetration into the Red Zone would trigger special care and an intense search for bringing cash in.
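
As a trivial sketch of such a financial buffer (all thresholds invented), the control mechanism only has to translate the current cash position into a zone and raise a flag when it penetrates the red zone:

```python
# Hypothetical cash-buffer control: the buffer is the cash needed to absorb
# temporary failures and losses; penetrating the red zone triggers an alert.

CASH_BUFFER = 300_000.0          # total buffer size (a management assumption)

def cash_zone(cash_on_hand):
    """Classify the remaining buffer into green / yellow / red thirds."""
    remaining = cash_on_hand / CASH_BUFFER
    if remaining > 2 / 3:
        return "green"
    if remaining > 1 / 3:
        return "yellow"
    return "red"

for cash in (280_000, 150_000, 80_000):
    zone = cash_zone(cash)
    flag = "  -> trigger intense search for bringing cash in" if zone == "red" else ""
    print(f"cash {cash:>9,}: {zone}{flag}")
```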

Other categories should be created, including their lists of signals.  These include quality, employee morale, and loss of reputation in the market, for instance due to a too-low pace of innovative products and services.

Much less is done today on categories of external threats.  The one category of threats that is usually monitored is the state of the direct competitors.  There are at least two other important categories that need constant monitoring:  regulatory and economic moves that might impact specific markets, and the emergence of quickly rising competition.  The latter includes the rise of a disruptive technology, the entry of a giant new competitor and a surprising change in the taste of the market.

Step 2:

For each category a list of signals to be carefully monitored is built.

Each signal should predict, with good enough confidence, the emergence of a threat.  A signal is any effect that can be spotted in reality and, by applying cause-and-effect analysis, can be logically connected to the actual emergence of the threat.  Such an effect could either be caused by the threat or be a cause of the threat.  A red order, for example, is caused by a local delay, or a combination of several delays, which might end up delaying the whole order.

When it comes to external threats my assumption is that signals can be found mainly on news channels, social networks and other Internet publications.  This makes it hard to identify the right signals out of the ocean of published reports.  So, focusing techniques are required to search for signals that anticipate that something is going to change.

Step 3:

A continual search for the signals requires a formal process for checking them periodically.

This process has to be defined and implemented, including nominating the responsible people.  Buffer Management is best used when the computerized system displays the open orders, sorted by their buffer status, to all the relevant people.  An alarm system, used to warn of a fire or a burglary, has to have a very clear and strong sound, making sure everybody is aware of what might happen.
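As a rough illustration of such a display, the following sketch sorts open orders by buffer penetration so the most threatened deliveries appear on top, and red orders raise an explicit alarm.  The order records, dates and the two-thirds threshold are made-up assumptions for the example.

    # Illustrative sketch: show open orders sorted by buffer status, loudest first.
    # The order data and the 2/3 red threshold are assumptions for the example.
    from datetime import date

    orders = [
        {"id": "A-101", "due": date(2024, 6, 20), "buffer_days": 12},
        {"id": "A-102", "due": date(2024, 6, 25), "buffer_days": 10},
        {"id": "A-103", "due": date(2024, 6, 18), "buffer_days": 15},
    ]

    def penetration(order, today=date(2024, 6, 16)):
        remaining = (order["due"] - today).days
        return (order["buffer_days"] - remaining) / order["buffer_days"]

    for order in sorted(orders, key=penetration, reverse=True):
        p = penetration(order)
        alarm = "  <-- ALARM: RED ORDER" if p > 2 / 3 else ""
        print(f"{order['id']}  buffer consumed: {p:.0%}{alarm}")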

Challenge no 2:  Handling the emerging threats effectively

The idea behind any control mechanism is that once the flag, based on the signals received, is raised, there is already a certain set of guidelines stating what actions are required first.  When there is an internal threat the urgency to react ASAP is obvious.  Suppose there are signals that raise suspicion, but do not fully prove, that a certain employee has betrayed the trust of the organization.  A quick procedure has to be already in place, with a well-defined line of action to formally investigate the suspicion, not forgetting the presumption of innocence.   When the signals point to a threat of a major defect in a new product, then the sales of that product have to be discontinued for a while until the suspicion is proven wrong. When the suspicion is confirmed, a focused analysis has to be carried out to decide what else to do.

External threats are tough to identify and even tougher to handle.  The search for signals that anticipate the emergence of threats is non-trivial.  The evaluation of the emerging threat, and of the alternative ways to deal with it, would greatly benefit from logical cause-and-effect analysis.  This is where a more flexible process has to be established.

In previous posts I have already mentioned a possible use of an insight developed by intelligence organizations:  the clever distinction between two different processes:

  1. Collecting relevant data, usually according to clear focusing guidelines.
  2. Research and analysis of the received data.

Of course, the output of the research and analysis process is given to the decision makers to decide upon the actions.  Such a generic structure seems useful for threat control.

Challenge no 3:  Facing unanticipated emergence of threats

How can threats we don’t anticipate be controlled?

We probably cannot prevent the first appearance of such a threat.  But the actual damage of the first appearance might be limited.  In such a case the key point is to identify the new undesired event as pointing to something potentially much more damaging.  In other words, to anticipate, based on the first appearance, the full extent of the threat.

The title of this article uses the example of a serious external threat called: cyber!  Until recently this threat was outside the paradigm of both individuals and organizations.  As the surprise of being hit by hackers, causing serious damage, started to become known, the need for strong cyber control became established.  As implied, Threat Control is much wider and bigger than cyber.

An insight that could lead to building the capability of identifying emerging new threats while they are still relatively small is to understand the impact of a ‘surprise’.  Being surprised should be treated as a warning signal that we have been exposed to an invalid paradigm, one that ignores certain possibilities in our reality.  The practical way to recognize such a paradigm is by treating surprises as warning signals.  This learning exposes both the potential causes of the surprise and other unanticipated results.  I suggest readers refer back to my post entitled ‘Learning from Surprises’, https://wordpress.com/post/elischragenheim.com/1834

My conclusion is that Threat Control is an absolutely required formal mechanism for any organization.  It would be useful to stand on the shoulders of Dr. Goldratt, understand the thinking tools he provided us, and use them to build a practical process to make our organizations safer and more successful.

Cause-and-effect as the ABC of practical logic

Outlining clearly the causality behind undesired effects, and wondering what effects, desired or not, would be caused by the actions we take, has been an integral part of TOC from its start in the early 80s. In the early 90s several structured procedures were developed by Dr. Eli Goldratt, in the format of cause-and-effect trees, called the Thinking Processes.  I think it is time to experience the merits, but also the limitations, of using logical claims of the shape ‘Effect A causes Effect B’ for managing organizations.

My bachelor’s degree was in Mathematics, which is the ultimate use of strict logic.  In our daily practice we use logic both to reveal the causes behind effects we experience and to speculate what is going to happen if we take a certain action.  However, that use of logic is not easy; it is mixed with a lot of emotions that confuse the strict logic.  Even when we do our best to stay within the logical directives we face several obstacles.  One of them is the difficulty of distinguishing between assumptions about cause and effect and actual causality. We certainly have great difficulty with hidden assumptions, meaning we are not fully aware that the causality is only assumed and not necessarily valid.

Reality is fuzzy and includes a huge number of variables that have some impact.  In order to live in such a reality we have to simplify the picture we hold in our minds.  We do it by ignoring many variables, assuming their impact is too small to truly matter.  The choice of what we ignore is part of the basic assumptions behind our cause-and-effect logic.

To experience the value and the boundaries of applying cause and effect, let’s examine the following effort to understand a practical logical argument.

It seems straight-forward logic to claim:

If ‘We improve the availability of items on the shelf from 80% to 98%’ then ‘Sales will go up’.

Is this assertion always true?  Are there some missing conditions (insufficiencies) for the causality to be true?  Even if it is true can we deduce how much more sales will be generated?

The initial logical explanation is that the missing 20% of items have demand that is not satisfied, thus sales are lost.  If those 20% were available they would be sold according to their natural demand.

The claim is shown in a simple chart:

[Chart: initial state of the cause-and-effect claim]

The right-hand side represents the original claim, with some further explanation of the current lost sales that would no longer be lost.  The oval shape indicates that the two causes act together.

Two different reservations to the above logic are:

‘Some customers might buy the same item somewhere else.’   And: ‘Customers might buy another item instead of the missing item.’  Both reservations target the causal arrow connecting the unavailability of items to losing sales, and from that effect, together with the improvement, to the resulting effect of ‘Sales go up’.

The two reservations highlight a clarity issue. The improvement cause is stated as “We improve…”, but who are ‘we’?   It could be the management of the chain of stores, the local management of a particular store, or a supplier of a family of items.  Each of them gives a different meaning to the current state and has its own reservation about the claimed effect of “Sales go up”.  The supplier of certain products means his products are available only 80% of the time, and customers who buy replacement products cause the supplier to lose sales. If the availability of the supplier’s products went up, then those specific products would sell more.

This is a non-trivial ‘clarity’ issue.   We first have to deal with the clarity reservation by making a choice.  I have chosen the perspective of the store, and now I have to address the causality reservation, which doubts whether unavailability of an item always causes a loss of sales to the store.

When customers don’t find a specific item they might buy a similar item.  In this case the store does not lose the sale.  In other cases the customers might simply give up.  In some rare cases the customer might walk out, which could mean other sales are lost as well.  So, we conclude that some sales are lost because of unavailability, but the direct loss of sales is less than the calculated average sales of that item over the period of time it is short.

So, the above logical claim seems valid, but its real impact could be low.  We would like to go deeper into the question: when is the loss of sales due to unavailability significant?

Is the loss of sales equal for all items?

There are two parameters that have a significant impact on the store’s loss of sales when an item is missing.  The first parameter is the average level of daily sales and the second is the level of loyalty of the customers to the brand/item.
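To see how these two parameters combine, here is a back-of-the-envelope sketch of the direct lost sales of one missing item.  The simple multiplicative model and all the numbers are assumptions for illustration, not a validated formula.

    # Rough illustrative estimate of the direct lost sales of one missing item.
    # Assumed model: lost units ~ average daily demand * days out of stock *
    # share of customers loyal enough to refuse any substitute. Numbers are made up.
    avg_daily_sales = 30      # units per day for this item
    days_out_of_stock = 4
    loyalty_share = 0.25      # fraction of customers who will not buy a replacement

    direct_lost_units = avg_daily_sales * days_out_of_stock * loyalty_share
    print(direct_lost_units)  # 30.0 units - far below the 120 units of "natural" demand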

Fast runners, when they are short, create considerable damage, not just the direct loss of sales but also damage to the reputation of the store – meaning customers might look for a different store in the future.  The logical statement is: if ‘a fast runner is missing’ then ‘many customers are pissed off’, causing ‘some regular customers look for another store for their purchases’, causing ‘total sales go significantly down’.  I’ve added ‘significantly’ to mark the total impact.

But, as ‘management is aware of the potential damage to the store from missing fast runners’, we expect the following effect to apply: ‘management is focused on maintaining perfect availability of fast runners’.

So, we can deduce that if ‘the current management is reasonably capable’ then ‘the missing items do not include fast runners’.  Of course, 20% of the items being short might still mean a non-negligible amount of sales of medium and slow movers being lost.   The open question is how much, and even more:  how does the current level of shortages impact the reputation of the store and, through it, future sales?

So, we need to look deeper into the impact of the second parameter – loyalty to a specific brand/item.  The effect ‘some items are special for some customers’ causes the effect ‘some customers develop loyalty to that item’. This effect causes ‘the probability that some customers refuse to buy a replacement is high’.  Thus, if ‘items with strong loyalty are frequently missing’ then ‘some customers try other stores’.  The effect ‘items with strong loyalty are frequently missing’ also causes ‘our reputation for what we carry on the shelves goes down’, with a clear impact on future sales.

The difficulty with ‘loyalty of customers to the brand/item’ is that its power is hard to validate.  The true test of the strength of loyalty comes when the item is short: checking whether the sales of alternative items go up or not.

One additional reservation about the basic claim that improving the availability of items on the shelf would increase sales:  it assumes that ‘most customers entering the store know exactly what they want to buy’.  If this effect is not valid, then what matters for sales is that the shelf is full of items that have good enough demand.  If some items planned to be on the shelf are missing, but other items with an equal chance of being sold fill the space well, there is no clear impact on sales.  The kind of items that people come to browse and then choose (‘when I see it I’ll know’) have to be managed in a very different way from maintaining availability of specific items.  For such items it makes sense to replenish them with new items, unless a specific item seems such a hit that keeping it available is beneficial, given its high desirability to customers.

The effect ‘the store has many regular customers’ also has an impact on what ‘availability’ means for the incidental customer. A shop in a big airport serves mostly incidental customers, so unavailability of items doesn’t impact future sales.  When there are no regular customers, there is no difference between items the store does not hold and items that are short.  This is a relatively small issue.

There are many more conditions that we consider true without further thought: ‘we live in a free economy’, ‘there are many competing choices for most items’ and ‘there are enough middle-class customers who can afford to buy a variety of products’.  If we try to include all ‘sufficiency’ conditions we’ll never end up with anything useful.  On the other hand, leaving conditions out opens the way to major mistakes due to hidden assumptions about what not to include in the analysis.  One needs intuition to know when to stop the logical analysis, recognizing also the validity of ‘never say I know’ (an insight of Dr. Goldratt).  Another aspect is the impact of uncertainty:  there are no 100% cause-and-effect relationships.  But causal relationships that are 90%, or more, valid are still highly valuable.

Eventually we get the following structure as a summary of the above arguments.  Not all the previous effects are included, which means some of the logical arrows require more detail, but eventually this is the claim.

[Chart: the resulting cause-and-effect tree leading to ‘Sales go up’]

We still cannot determine how much sales would go up, because it depends on the characteristics of the medium and slow runners:  how many of them command strong loyalty.  If we add to the initial effects ‘The chain makes marketing efforts to radiate the message that the chain maintains very high availability at every store’, then the chain can expect a faster and stronger increase in its reputation and in its sales.

Was it worth going through the logical analysis?

While we still have only a partial picture, it is probably better than a picture based just on intuition without any analysis.