Between a true insight and an effective recipe

We all look to overcome the main problems that trouble us, to the extent that much of our time is wasted just thinking about them. Generally speaking, there are two possible kinds of solutions that could miraculously eliminate such a problem:

  1. A detailed procedure that handles the specific problem. I call it a ‘recipe’, because it is similar to a medical treatment we follow without truly understanding how it helps us. Many times these recipes, sometimes even called “best practices”, are not based on clear cause-and-effect reasoning, but on past experience showing that the recipe usually works.
  2. An insight that is the core of a new potential solution. A new understanding that leads to the detailed development of a full solution using logical tools.

Insights have to be based on cause-and-effect logic; otherwise there is no new understanding. Here are several definitions of the term ‘insight’ taken from Google:

  • An understanding of the true nature of something – merriam-webster.com
  • An understanding of relationships that sheds light on or helps solve a problem – dictionary.com
  • The ability to have a clear, deep and sometimes sudden understanding of a complicated problem or situation – Cambridge

The term ‘understanding’ appears in all the above definitions. It seems that to ‘understand’ anything, the underlying causes and effects have to be part of it. Thus, I suggest the following definition:

Insight: A generic cause-and-effect branch that becomes clear and can be applied to many different situations

Filling in a missing cause that explains an effect creates a sudden understanding about reality and also causes an emotional sensation of overcoming a vague situation. In my years of close interaction with Dr. Eli Goldratt I have had many ‘aha’ moments of gaining a new insight. Those insights had a huge influence on my life and they constitute very precious memories of a truly great person.

The key value of an insight is being generic, and thus its real impact is very wide. So, new opportunities are opened with every new insight we learn. It is our duty to fully comprehend the new opportunities, which poses the challenge of using our limited time in the most effective way to achieve our dream. This characteristic of an insight is in contrast with a recipe, which is effective only within certain boundaries.

Every practical problem actually means being torn between two different actions, where each action is targeted to satisfy a need and there is no way to take both actions. In itself this understanding is not all that useful, as it only says that one has to choose the more important need and sacrifice the other, or decide to compromise. The key TOC insight is that it is enough to challenge ONE assumption that lies behind one of three different causal statements to solve the problem by satisfying both needs. One statement is that the two actions cannot be done together, and the other two statements are that without the specific action the related need would not be satisfied.

This key insight is the core of the TOC conflict resolution tool, but in itself it has wider ramifications of constantly challenging key assumptions, like “When the costs go up the selling price goes up.”

When an insight triggers an effective solution to a long troubling problem, there is a risk that the insight would be wasted by the recipe that is built upon it. It is amazing how many valuable insights that have been implemented in one business sector are unknown in other business sectors. For instance, consider the following insight: It is highly beneficial to keep our current customers, and it is not self-evident to our customers that they should remain loyal. Thus, we have to give our customers a special and well-appreciated incentive for continuing to be our customers. This insight has been translated by the airlines into a recipe called a ‘frequent flier program’. The insight is generic, while the recipe is specific, and one has to go back to the insight to realize how to draw value from it in very different business sectors.

Benchmarking is a popular recipe for adopting recipes from similar organizations. It leads to major moves that imitate practices bringing limited value, while closing the mind to identifying insights that could lead to so much more value.

TOC has developed several important recipes that work well within the boundaries determined by several basic assumptions. We should never forget the original insights.  Their scope of value is far wider.

I intend to talk about the key TOC insights in the next TOCICO annual conference in Berlin (July 2017).

Looking for the right pilot as proof-of-concept

Any change, even one that promises big benefits, raises concerns that something might go wrong. Too many times there is no practical way to prove that the concerns are either easily solvable or too small relative to the benefits. There are also concerns about the unknown, what we cannot even imagine. Even the value of the benefits is often hard to translate into actual impact on the goal.

Proof-of-concept is a general expression for providing a logical proof that the concept works. Theoretically a good Future-Reality-Tree (FRT) can provide such a proof, but it might not be good enough to eliminate all concerns, especially not the concerns about the unknown.

A more robust way to prove a concept is through simulation. However, it requires utmost care that the underlying assumptions are valid in the reality where the concept is considered. It is not trivial to check the assumptions behind a computerized simulation that has been developed by other people. One category of assumptions that needs to be carefully checked is the behavior of uncertainty. Another important aspect is identifying real effects that have not been included in the simulation. For instance, most simulations do not consider human behavioral aspects like Parkinson's Law.

The most robust way to prove a concept is running a pilot. The pilot should give a better idea of the value and reveal the negative branches and their impact, providing the opportunity to trim or reduce the negative impact.

A pilot generates considerable hassle.   Management attention is given beyond what is usually required in such a project.  All pilots are aimed at proving a concept.  This actually points to the very first task of planning a pilot:  defining the concept to be proven and its resulting benefits, including assessing the reduced uncertainty due to the pilot.

For instance, consider the case of choosing a project as a pilot for proving the concept of CCPM. The concept is aimed at completing the project on time, preferably earlier than the realistic expectations. The CCPM concept includes planning the critical chain, cutting the task times and inserting buffers, mainly the project buffer. In the execution phase, the use of buffer management as the only priority mechanism is a key insight. The choice of the particular project should address the concern that meeting the due-date for that project might be due either to special attention given to that project or to a merely arbitrary good result.
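
To make the planning part concrete, here is a minimal Python sketch of the common CCPM convention of halving the padded task estimates and sizing the project buffer at roughly half of the resulting critical chain. The task names and numbers are made up for illustration; real implementations derive the chain from dependencies and resource contention.

```python
# A minimal CCPM planning sketch (hypothetical task names and numbers).
# Assumes the common convention of halving padded estimates and sizing
# the project buffer at about half of the resulting critical chain.

padded_estimates = {"design": 20, "build": 30, "test": 10}   # days, as estimated with safety

# Cut the padded estimates to 'focused' durations.
focused = {task: days / 2 for task, days in padded_estimates.items()}

critical_chain = sum(focused.values())      # 30 days in this example
project_buffer = critical_chain / 2         # aggregated protection at the end of the chain

print(f"Critical chain: {critical_chain} days, project buffer: {project_buffer} days")
print(f"Committed completion: {critical_chain + project_buffer} days from start")
```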

It is important to realize that pilots should be done only when there is strong conviction that the concept offers considerable value, but might also cause damage.

The main factors in designing the right pilot are:

  • Being able to significantly reduce the potential damage from a full implementation of the concept.
  • Providing good information on the value from full implementation.
  • Limited consumption of special management attention.
  • Limited investment in the pilot.
  • Limited hassle throughout the organization. The impact of the pilot on the daily management and performance of the organization as a whole should be relatively low.

A suggestion for planning the pilot:

Define, a priori, the performance measurements and the decision rules for determining whether to implement the concept after the pilot’s completion.

The concern is that if the measurements and rules for such a decision are not defined before the pilot is implemented, then there is a high probability that no decision will be made, and the situation that led to the pilot, the conflict between believing in the value and being concerned about the negatives, will continue indefinitely.
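
As an illustration of what such an a-priori definition might look like, here is a small sketch; the measurement names, thresholds and decision rules are hypothetical and would have to be tailored to the specific pilot.

```python
# Hypothetical a-priori decision rules for a CCPM pilot; every name and threshold is illustrative.
pilot_plan = {
    "concept": "CCPM on one representative project",
    "measurements": [
        "completion vs. committed due-date",
        "percentage of project buffer consumed at completion",
        "number of times buffer-management priorities were overridden",
    ],
    "decision_rules": {
        "implement": "finished on time with no more than 100% of the project buffer consumed",
        "abandon": "finished late even though buffer-management priorities were followed",
        "extend_pilot": "results inconclusive because priorities were frequently overridden",
    },
}
print(pilot_plan["decision_rules"]["implement"])
```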

A key case for considering a pilot for a TOC implementation is a distribution chain. The size of the full implementation, which could cover a wide geographical area, many regional warehouses and a huge number of retailers, makes the decision to abandon the current rules and move to the dynamic TOC solution very tough. Let’s just state some of the reasonable concerns of senior managers:

  • The improved availability would not lead to significantly improved sales.
    • For instance, because clients always have reasonable alternatives.
    • Or, because the items that are short are not high runners.
  • The resulting inventory levels, and their impact on cash-flow, might still be high, maybe even higher than now.
  • Transport costs would go up.
  • New difficulties would emerge in loading trucks with very small batches of many SKUs.
  • Getting used to new software modules and implementing them at many sites would take a long time, causing problems in daily management and thereby harming performance.

These concerns, weighed against the big promise of achieving a decisive competitive edge through vastly improved availability and lower inventory levels, lead to going for a limited pilot as a proof-of-concept.

How should such a pilot be defined?

There are several options to consider:

  1. Start by implementing the solution at the central warehouse, but wait with the regional warehouses.
  2. Focus on 3-4 regional warehouses.
  3. Choose one family of products, including both fast and slow runners, and cover the way from the central warehouse to several, or even all, regions.

The issue of defining the characteristics of a good pilot is one of the most important open issues of TOC implementations. I highly recommend that TOCICO arrange for several well-known TOC experts to publicly discuss the issues during the next TOCICO annual conference.

I would like to express my own view on the above options for a distribution pilot.

The effectiveness of the replenishment solution depends on maintaining a stable and flexible flow, according to a clear set of priorities, throughout the distribution chain. The relationships with suppliers have to be carefully re-thought in order to maintain replenishment that is as fast and frequent as possible. The number of different suppliers poses a difficulty that is no smaller than the difficulty of dealing with a huge number of small retailers. It is part of the overall implementation plan to grow gradually through the suppliers and through the retailers.

What a pilot cannot afford is to cause disappointment with the results. Implementing a pilot only at the central warehouse exposes the chain to negatives that are not part of the final process. The regions would still order relatively large quantities, based on their local view of the supply chain, forcing the central warehouse to hold high inventory levels. This could easily cause disappointment with the results and shut down the whole implementation.

Focusing on several regions causes two different negative branches. One is that as long as the central warehouse cannot ensure a fast and reliable response to the regions, the availability at the regions becomes questionable, leading again to disappointment. The other negative branch is that the regions in the pilot might demand, and get, special treatment. This could lead to good results in the pilot, but cause huge resistance in all other parts of the organization, because they have to deal with tougher conditions.

So, my preference is to run a pilot on one family of products, including the central warehouse and all the regions for that family of products. I don’t see an urgent need to go to the retailers as part of the pilot, especially when they are managed by another organization. The expected outcome of the pilot is a somewhat reduced overall inventory in the central warehouse plus the regions, while ensuring excellent availability. The retailers might still order in batches, but the number of retailers would reduce the negative impact of the batching. The experience would lead to a better understanding of the actual impact on transportation costs and on loading the trucks, even when most trucks would also carry items that are not part of the pilot.

I hope that this post raises the issues behind planning and executing TOC pilots in a variety of TOC applications.

Common mistakes of TOC practitioners in well-known TOC applications

I think that none of the TOC applications is at the stage of just following a recipe for implementation. There are certainly recipes for SDBR, Make-to-availability, Replenishment and CCPM, but on too many occasions in reality there is a need to deviate from the recipe.

There are two different categories of basic, sometimes hidden, assumptions behind a TOC recipe that may turn out to be invalid in a specific implementation:

  1. Invalid necessary assumptions about the reality of the organization.
  2. Invalid assumptions about the clients of that organization.

All TOC applications have several necessary assumptions that define the boundaries where the TOC application is effective.  Here are some examples where failing to notice that one necessary assumption is not valid causes problems in the implementation.

Basic SDBR assumption: The total touch-time is less than 10% of the total production lead-time. When that assumption is not valid, the problem lies mainly with buffer management, which might not show penetration into the Red at the appropriate time, when the user can still expedite. In such a case certain changes to buffer management are required, taking into account the touch-time that still lies ahead of the order. When the touch time is 50% or more of the lead-time, the problem is wider than just buffer management. Such environments either have a very high amount of idle capacity or have to be planned according to CCPM.

Comment: Touch time also includes necessary wait time, like drying, even when no resource capacity is required.
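
As a rough illustration of such a change, here is a sketch that adds the touch time still ahead of the order to the consumed part of the buffer, so the order turns Red while there is still time to react. This adjustment is one possible interpretation of the text, not a standard SDBR formula; the numbers are illustrative.

```python
# A sketch of adjusting buffer status for long touch times (one possible interpretation,
# not a standard SDBR formula). The remaining touch time cannot be compressed by
# expediting, so it is counted as already-consumed buffer.

def buffer_status(elapsed_days, buffer_days, remaining_touch_days=0.0):
    """Fraction of the time buffer effectively consumed."""
    return (elapsed_days + remaining_touch_days) / buffer_days

status = buffer_status(elapsed_days=6, buffer_days=12, remaining_touch_days=4)
zone = "Red" if status > 2/3 else "Yellow" if status > 1/3 else "Green"
print(f"{status:.0%} of the buffer is effectively gone -> {zone}")   # 83% -> Red
```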

Basic SDBR assumption: From material release until completion the order is under full control of the organization. This is not valid when one or more of the processes are outsourced. The contractor usually does not commit to following buffer management priorities, and the whole batch goes to and from the contractor. In a way this is similar to a long touch-time process, but not being in control during that time adds to the problem. The situation calls for protecting the intermediate due-date by which the order should go to the contractor. This means having two back-to-back time buffers, one until going to the contractor and the other covering the route from that time until completion.
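
A small sketch of the two back-to-back buffers, with made-up dates and buffer sizes: the intermediate due-date for handing the batch to the contractor is derived from the final due-date, and the release date from the intermediate one.

```python
# Two back-to-back time buffers around an outsourced process (illustrative dates and sizes).
from datetime import date, timedelta

due_date = date(2017, 6, 30)                 # hypothetical final due-date of the order
buffer_to_contractor = timedelta(days=10)    # release until the batch must leave for the contractor
buffer_from_contractor = timedelta(days=8)   # contractor work plus the remaining in-house route

intermediate_due_date = due_date - buffer_from_contractor   # protects the hand-off to the contractor
release_date = intermediate_due_date - buffer_to_contractor

print("Release:", release_date)
print("Send to contractor by:", intermediate_due_date)
print("Final due-date:", due_date)
```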

TOC common assumption for manufacturing organizations:  The common practice is to release orders as early as possible. In one striking case this almost automatic assumption about the organization was proven wrong!  The plant was very careful, actually too careful, in releasing orders, causing very low WIP throughout the plant.  Can you imagine what happened when the production lead-time was cut by half and orders were not allowed to be released earlier?

Basic CCPM assumption: The project earns a lot from quick completion and loses a lot from slow completion. When this assumption is not valid, the whole concept of the ‘critical chain’ (or the ‘critical path’) loses its impact. While it is always true that completing the project fast is valuable and being slow causes some damage, the important question is whether the value of being fast, or the loss from being late, is such that the organization is ready to invest effort and money to complete the project fast! The reason that in manufacturing the concept of critical chain is not known is that the value of fast delivery is lower than the value of keeping high efficiency of the expensive resources. When some resources are highly loaded, tasks have to wait for the resource to become available. TOC, which challenges the value of efficiency for non-constraints, does not challenge the manufacturing concept of orders waiting for loaded resources, certainly for the constraint. Thus, in manufacturing the lead-time is much longer than the touch-time, while in projects special efforts are put into preventing the project from standing idle. Recognizing when fast completion is critical should be part of the definition of a ‘project’ that should be planned using CCPM.

A common assumption in CCPM: Professionals intentionally inflate the time to complete a job in order to always be on time. This assumption might be invalid in software and in sophisticated technology organizations, where the professionals are not bothered by being on time and are more interested in getting the green light to develop something new and exciting. To that end they might intentionally understate the time it takes to do the job! Cutting this kind of time assessment in half is a major mistake!

TOC assumption in Distribution: The TOC solution would dramatically cut the inventory levels. This is a common expectation, and sometimes the success of the implementation is judged by the amount of reduced inventory. The real aim of the TOC solution is to provide excellent availability. In most cases trying to do so without the TOC insights ends up holding too much inventory, so in most cases the expectations are met. But when many slow movers are maintained for perfect availability, the required inventory, according to the TOC model, might be higher than the current practice. Should the organization commit to keeping those slow movers in perfect availability? This question leads us to the second category of invalid assumptions.

Here are some common examples of failing to understand the clients of the organization.

A common assumption in make-to-availability and distribution: the market suffers from frequent unavailability of every item.  There are two devastating results stemming from this assumption:

  1. All the items are held for availability, including slow movers that require large stock buffers to maintain availability, even when the clients are ready to wait some time for delivery. Another related problem is offering availability for items with unstable supply. 
  2. All clients like to be offered perfect availability. Well, perfect availability is always nice, but:

Do the clients truly suffer from unavailability?

Many times the suffering is real, and it is beneficial to offer a level of availability that the clients can rely on. But sometimes the missing items have obvious replacements, so the damage is minimal. In other cases the client carries enough stock and is not bothered by short-term unavailability. When the value to the client is low, providing perfect availability is merely “nice”. For instance, offering perfect availability to wholesalers, which base their competitive advantage on low prices and do not offer availability of specific items, is bound to fail.

A hidden assumption in CCPM: The original due-date, which is important to the client, does not change. An important planning principle in CCPM is keeping the planning intact. However, the critical due-date might often slide later because of other needs of the client. When this happens, there is no added value in completing the project by the original date. Showing the project in Red when the client can easily wait adds unnecessary pressure and tension. It could also make project managers suspect buffer management when the project is Red and they know it is not truly late. When the change of the due-date is small, the project buffer can get extra time until the new due-date, relieving the pressure on delayed chains. When the change is considerable, re-planning is recommended. The main point is: check frequently with the client whether the original due-date still holds.
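
A sketch of one possible reaction rule: small slides extend the project buffer, large slides trigger re-planning. The 25% threshold for "small" is my own illustrative choice, not a prescribed CCPM number.

```python
# A sketch of reacting to a client-driven slide of the due-date (the 25% threshold is illustrative).

def handle_due_date_change(slide_days, project_buffer_days):
    """Small slides extend the project buffer; large ones call for re-planning."""
    if slide_days <= 0:
        return "keep the original plan"
    if slide_days <= 0.25 * project_buffer_days:
        return f"add {slide_days} days to the project buffer, relieving pressure on delayed chains"
    return "re-plan the project against the new due-date"

print(handle_due_date_change(slide_days=3, project_buffer_days=20))
```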

We all make mistakes. We need to learn from our mistakes, and even better, learn from the mistakes of others. The key learning from mistakes is the ability to generalize the case, so it becomes a new insight. It bothers me that most TOCICO case presentations take the usual approach of showing successful results, without revealing the mistakes and hurdles on the way, as if someone needs to be ashamed of the mistakes, thereby also hiding the achievement of identifying the mistake and fixing it. There was one great presentation I remember, from the Canadian CMS group, that came up with lessons learned from mistakes in an implementation. The implementation apparently ended well after the new understanding. I think we should all learn the lessons from mistakes, made by us and by others.

Dynamic Buffer Management (DBM) – the breakthrough idea and several problems to solve

The most common procedure for maintaining stock of items is relying on a forecast, translating it into average daily sales/consumption, and aiming at holding a fixed number of sale-days in stock, or defining min and max numbers of sale-days. That number of sale-days (or sale-weeks) is determined by a policy for a whole category of items, defined broadly by the supply lead time.

This common procedure leads to significant deviations from the determined levels in both directions causing shortages and huge surpluses at the same time.

The main flaws in the rationale of the common procedure:

  1. The common procedure monitors the demand fluctuations and, based on them, forecasts the future, but it ignores the uncertainty in the supply time. The stock level should consider both the demand and supply fluctuations.
  2. The current forecasting method is based on predicting the average demand, but ignores assessing the level of uncertainty (forecasting error). Thus, information regarding the stock that is required to satisfy the constantly fluctuating demand is missing.
  3. Frequent forecasting increases the noise in the system.
  4. The min-max definition encourages batching and slows down the replenishment frequency, which increases the impact of uncertainty.

The TOC key insights for holding stock are:

  1. Considering not just the on-hand stock, but also the items ‘on the way’, meaning all the open purchasing orders should be part of the mechanism to provide good availability. The Target-Level defines the buffer of stock, including both on-hand and open orders.
  2. The Target-Level is kept constant until a clear signal is received that it is not appropriate.
  3. Fast and frequent replenishments to the target-level.
  4. Buffer Management is used for establishing one priority system for moving stock from one location to another.
  5. Tracking the behavior of the buffers to decide whether the Target Level is too small or too large. This is the objective of the DBM algorithm.
    1. The idea is to check the combination of two different sources of uncertainty:
      1. The market demand – its ups and downs!
      2. The replenishment time – its own ups and downs, including the impact of the frequency of replenishments.
    2. There is no point in introducing small changes.
      3. The signal for increasing the buffer is too long a stay and too deep a penetration into the Red Zone of the on-hand stock.
      4. The signal for decreasing the buffer is too long a stay in the Green Zone.

The breakthrough idea of DBM is monitoring the effectiveness of the protection mechanism rather than re-calculating the buffer-size. Both the demand and the replenishment time behave in an erratic way, which is difficult to describe.  The main difficulty is frequent changes in the environment, which upset the key parameters of the demand and supply time.  Events like the emergence of a new competitor, a controversial article in the media, changes in the economy or regulations all could cause a quantum change in the market demand.

The replenishment time is highly influenced by the operational management of the supplier and the state of load versus capacity. Changes in both factors could lead to significant changes in the replenishment time.

Re-calculation of the buffers when such a drastic change happens is problematic, because the calculations rely on past performance. Sensing the actual state of the protection mechanism leads to taking quick actions based on the most recent past. The quick response does not try to speculate on the exact size of the change, just its direction: up or down. Goldratt recommended increasing or decreasing the buffer by 33%, once DBM signals the need.
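
Here is a minimal sketch of the mechanism described above, assuming the usual division of the buffer into three equal zones and the 33% step; the "too long" time thresholds and parameter names are illustrative placeholders that would have to be tuned to the specific environment.

```python
# A minimal DBM sketch: buffer zones at thirds of the target level, a 33% step once a
# signal fires. The 'too long' thresholds and parameter names are illustrative.

def zone(on_hand, target):
    status = on_hand / target
    if status <= 1/3:
        return "Red"
    if status <= 2/3:
        return "Yellow"
    return "Green"

def dbm_signal(days_in_red, deepest_red_penetration, days_in_green,
               red_days_limit=5, depth_limit=0.5, green_days_limit=15):
    """'increase' when the stay in Red is both long and deep; 'decrease' after a long stay in Green.

    deepest_red_penetration: fraction of the Red zone consumed (0 = just entered Red, 1 = stock-out).
    """
    if days_in_red >= red_days_limit and deepest_red_penetration >= depth_limit:
        return "increase"
    if days_in_green >= green_days_limit:
        return "decrease"
    return "hold"

def adjust_target(target, signal, step=0.33):
    if signal == "increase":
        return target * (1 + step)
    if signal == "decrease":
        return target * (1 - step)
    return target

target = 900
print(zone(on_hand=250, target=target))                                        # Red
signal = dbm_signal(days_in_red=6, deepest_red_penetration=0.7, days_in_green=0)
print(signal, adjust_target(target, signal))                                   # increase 1197.0
```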

The impact of DBM on the performance of the organization is quite strong and faulty DBM signals might be very costly. Constant learning should be used to tune its algorithm to the specific reality, especially identifying situations that require different reaction.

When the reason for the deep and lengthy penetration into the Red is a (temporary) inability to replenish, like when the source lacks inventory or capacity, then DBM should not increase the buffer.

A conceptual issue is the fixed-ratio change of buffers; it does not even matter whether it is an increase or a decrease. It is always possible that a change has been invoked, but after some time reality shows there was no real need for the change. In other words, a short time after the increase there is a signal to decrease. However, if we use 33% for any change, then we end up with about 90% of the buffer before the increase (1.33 × 0.67 ≈ 0.89). The problem is that it is hard to explain that inconsistency.

An idea, raised by Dmitry Egorov, was to check carefully the behavior immediately after such a buffer increase in order to validate that it is truly needed. The result of an increase in the buffer is that the buffer status is deeper into the Red relative to the new buffer size. If after a very short time the buffer status goes up into the Yellow, then it should signal returning to the former size.

A similar check should be made after decreasing the buffer. This move would temporarily make the on-hand stock sit above the new Green line. If the buffer status goes down into the Yellow very soon, DBM should recommend increasing the buffer back to its previous size.
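
A sketch of that validation idea: shortly after a buffer change, a quick return of the buffer status into Yellow suggests the change was not really needed and the former target level should be restored. The validation window of a few days is an assumption for illustration.

```python
# A sketch of the validation idea above: shortly after a buffer change, a quick return of
# the buffer status into Yellow suggests the change was unnecessary and should be undone.
from typing import Optional

def validate_recent_change(change, days_since_change, current_zone,
                           previous_target, validation_window_days=7) -> Optional[float]:
    """Return the target level to restore, or None if the change still looks justified."""
    if days_since_change > validation_window_days:
        return None          # the validation window has passed; keep the new target
    if current_zone == "Yellow" and change in ("increase", "decrease"):
        # After an increase the status sits deep in Red relative to the new target;
        # after a decrease it sits above the new Green line. Reaching Yellow quickly
        # in either case hints that the old target was adequate.
        return previous_target
    return None

print(validate_recent_change("increase", days_since_change=3,
                             current_zone="Yellow", previous_target=900))      # 900
```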

A related issue is the asymmetry of the DBM algorithm between increasing and decreasing the buffer. For a buffer increase the algorithm considers the depth of the penetration into the Red-Zone. For decreasing the buffer, the amount of penetration into the Green is not considered at all. Actually, there is a good reason to be much more conservative about reducing buffers than about increasing them.

The use of the replenishment time as part of the DBM algorithm is of concern to me, because the TOC algorithm does not monitor that time, and its relevance for the decision is dubious. The whole point of DBM is monitoring the combination of demand and replenishment time. The only important need for the replenishment time in the DBM algorithm is for stopping further increases until the effect of the new size can be evaluated. However, this can be done by monitoring the arrival of the specific order generated by the buffer increase. The algorithm for a buffer increase could be based on a continuous stay in the Red-Zone, taking the depth into account. For decreasing the buffer there is no reason to refer to the replenishment time; all that is required is a time parameter for too long a stay in the Green.

DBM works in a similar way to forecasts, meaning it looks back to the past to deduce the near future. However, DBM looks only to the very recent past and considers only the actual state of the on-hand stock.

Should we use forecasts as additional information?

The idea is NOT to change the buffer unless there is a clear signal that the buffer is inappropriate. The additional information, based on a forecast that considers more parameters than DBM does, would be a rough estimation of whether the current buffer is definitely too large or too small. Considering seasonality, knowledge of a change in the economy, or the emergence of new products could add valid information to the decision whether to change buffers, and also give a rough idea by how much. When the forecast points to a minor change in the buffer, less than 20%, the buffer size should be kept as is.
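
A sketch of using the forecast only as a coarse gate, following the 20% rule above; the function and parameter names are illustrative.

```python
# A sketch of the 20% gate: the forecast may override the buffer only when it implies a
# substantial change (function and parameter names are illustrative).

def forecast_adjusted_target(current_target, forecast_target, minimum_change=0.20):
    """Keep the current buffer unless the forecast implies a change of at least ~20%."""
    relative_change = abs(forecast_target - current_target) / current_target
    if relative_change < minimum_change:
        return current_target      # minor difference: do not touch the buffer
    return forecast_target         # seasonality, economy, new products, etc. justify the move

print(forecast_adjusted_target(current_target=900, forecast_target=1000))   # 900 (change ~11%)
```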

The above issues are, to my mind, central for coming up with an overall more effective way to control stock buffers. I always prefer to leave the final decision to humans, but give them the most relevant information to do that. When millions of stock buffers are maintained throughout the supply chain at various locations, and 1-2% of the buffers seem inappropriate on any given day, it is practically difficult for humans to consider the changes for so many buffers. In that case there is a need to let DBM, coupled or not with forecasts, change buffers automatically. This means the effectiveness of DBM directly impacts the financial and strategic performance of the organization.

DBM is important enough to encourage TOC experts to collaborate in order to come up with effective DBM specifications for software companies to follow. The full detailed solution should have a wide acceptance.  TOC is clearly against any “black box” algorithms.

The TOC contribution to Healthcare

The life of every physician is influenced by the oath to treat every patient and do no harm. A practical evil conflict appears when several patients require treatment at the same time. If this peak of demand for the physician's capability is sporadic, then a certain priority mechanism, based on the net medical conditions of the “competing” patients, can handle the situation in a satisfactory way.

However, when the load on the medical resources is high, the conflict causes serious delays that endanger many patients and actually threaten the performance of the whole medical system.

Improving the flow of patients by exploiting the weak links in the medical chain and using buffer management for setting priorities is the essence of the current significant contribution of TOC to healthcare. This emphasis on FLOW is well expressed in Alex Knight’s excellent book Pride and Joy. Alex Knight made a breakthrough in TOC in general by “translating” the insights of TOC, which were applied to manufacturing and multi-project environments, to a very different environment. Healthcare is governed by a different set of values and behavioral patterns. Healthcare is also exposed to a high level of fluctuations, and it has to accept any demand that shows up.

Several other TOC people have contributed to the basic concept of improving the flow of patients, among them Prof. Boaz Ronen, Prof. James Cox, Bill Taylor and Gijs Andrea.

The core insight for improving a flow of patients, considering the uncertainty, the lack of enough capacity of critical capabilities and the value of healthcare to humanity in general, is based on creating one priority mechanism that mainly considers time. Only critical medical emergencies might disrupt the previous priorities, but they are less frequent than outsiders assume. The flow of patients in hospitals is significantly different from the flows in manufacturing and multi-project environments, and not just because the flow consists of live human beings. The process of treating a patient includes substantial time between treatments where the patient simply rests, and that time is part of the absolutely necessary “touch time”. The sequence of medical treatments and checks can be approximately determined in the initial phase, but is subject to considerable uncertainty. The flow of incoming emergency patients is definitely subject to very high uncertainty that impacts the whole system.

It is not surprising that TOC started its penetration into healthcare by improving the flow at the emergency room, the center of the unplanned flow of patients. Few patients show up because of a true emergency; most show up for regular, still highly required, treatments. We now know from wide experience that implementing buffer management priorities shortens not just the longest stays in the emergency room, but also cuts the average time considerably. This effect can be explained only by the impact on the behavior of the team, which now actively looks for a quick release of patients and takes action when the stay time starts to penetrate into the Red. Weekly buffer management meetings analyzing the Pareto list of the Red and Black cases add much more to improving the flow.
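
As an illustration, here is a sketch of a time-based priority for the emergency room: the color of each patient is set by how much of a time buffer has been consumed since arrival. The four-hour buffer and the patient data are made up; true medical emergencies override this priority.

```python
# A sketch of a time-based priority in the emergency room (4-hour buffer and patient data are
# illustrative; true medical emergencies override this priority).

def time_zone(minutes_in_er, buffer_minutes=240):
    consumed = minutes_in_er / buffer_minutes
    if consumed > 1.0:
        return "Black"     # exceeded the buffer: candidate for the weekly Pareto analysis
    if consumed > 2/3:
        return "Red"       # act now to release the patient or move the next step forward
    if consumed > 1/3:
        return "Yellow"
    return "Green"

patients = {"A": 250, "B": 180, "C": 40}      # minutes since arrival
for name, minutes in sorted(patients.items(), key=lambda p: p[1], reverse=True):
    print(name, time_zone(minutes))           # A Black, B Red, C Green
```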

TOC has also contributed superior rules for planning and scheduling appointments and operations. These planning rules apply to non-emergency requests within hospitals and also to external day-care clinics. Using time buffers, and checking closely the willingness and readiness of the scheduled patients to truly show up ready for the treatment, is part of the process. Being able to call patients to come in a hurry because a given time slot is free actually means managing a buffer of patients who are ready to be called in this way.

Improving the flow of patients in hospitals is aimed at shortening the time patients spend in the hospital. This is key to uncovering capacity of beds, which many times constrains the hospital from admitting more patients. I very much like Alex Knight's observation that “a hospital is a very unhealthy place to be.”

Can there be additional TOC contribution to healthcare?

The inherent conflict between the duty to give full and equal treatment to every person and managing the scarce capacity and money is still a major evil conflict that troubles not only every healthcare organization, but also governments. It touches upon another huge generic conflict of every government: exploiting the budget constraint. Healthcare creates a vicious cycle, where improved healthcare causes people to live longer and with better quality of life, and both aspects require a higher budget to sustain the improved healthcare.

The government conflict has a wide area of ramifications. The basic values of capitalism versus socialism have a considerable impact on how that conflict is handled. While I still believe in Goldratt’s axiom that every conflict can be resolved, I admit I don’t know how to resolve this key generic conflict of every government. I believe that I and other TOC experts are able to relieve the intensity of the conflict, but not to resolve it in full.

What we can do is accept the compromised budget and exploit it for the best healthcare for all citizens. Of course, values regarding what is “best” and how to measure it are part of both the obstacles and the solution. Accepting the conflicting values as they are, we still realize that the common current policies lead to behaviors that distort the exploitation of the healthcare budget. The resulting behaviors do not really follow the original values. Many of the common policies are based on flawed assumptions that cause distortions, quite similar to what we know in business organizations. Actually, the core of TOC is revealing current flawed assumptions that distort the achievement of the goal. TOC does not interfere with the values themselves.

Eventually every healthcare institution needs to develop a strategy to improve its current state of creating healthcare value.

When we go down to the level of a hospital, as a key example of a healthcare institution, we should assume the policies between the hospital and the government are given. But within those policies, in the detailed offering of the hospital and in the internal policies and processes, there is a lot of room to improve the value generated by the hospital.

Shouldn’t we unite to map the causes and effects that impact the overall performance of healthcare organizations? Understanding how the government and other imposed policies impact behavior could lead us to generate a more universal solution to a truly wicked problem.

Capacity buffers – a grossly underrated strategic concept

Available capacity is an issue for every organization or business. In order to deliver value any provider needs two different entities:  the appropriate capabilities to generate the value, and available capacity of each of those capabilities in order to react to the demand in an acceptable way.

Capabilities are easier to manage than capacity. The relatively difficult mission in managing capabilities is synchronizing effectively between the different capabilities. For instance, engineering capabilities are required to design a new product, and then production capabilities are required to produce it. The interface between the engineers and the production operators is far from trivial. These synchronization capabilities are part of the more generic requirements for management capabilities. In order to generate throughput from new products additional capabilities are required, like purchasing, sales, finance and usually also IT. All have to be managed to ensure the integration.

Capacity is the trickier part in the process of generating value. Answering the question “do we have enough capacity?” has to consider all the demand, translated into load, compared to the limited available specific capacities. A shortage of capacity, even of just one resource, is the ultimate cause of delays in delivery. So, the practical focused question is:

Given the demand and the capacity limitations would our delivery performance be good enough?

A common complication of managing capacity is when on average there is ample capacity, but at specific times a bottleneck emerges, causing very long delays in the delivery of value, possibly far beyond the clients' tolerance time.

At such a peak of load, the threat of losing reputation, leading to reduced future sales, is very serious. The question is:

Is it possible to increase the available capacity at very short notice?

It is clear that if such a short-term fix of capacity is possible, it is going to be relatively expensive. The point is that the damage of not paying that price could be much more expensive in the future.

Suppose the night manager of a hotel realizes that due to a mistake there is no availability of standard rooms for that night, but two clients with reserved rooms are expected to arrive from the airport.

This is a case where the clear damage from a shortage of capacity of a specific resource is very high. It is pretty obvious that the hotel has to solve the problem, not the clients; otherwise the clients might sue the hotel, causing more negative ramifications than just the compensation.

How can the hotel find more rooms?

If there are available upgraded rooms, like suites, then giving them to the clients is the first choice. Note that these rooms are not part of the available capacity of standard rooms; yet it is possible to use them to quickly increase the capacity of standard rooms. The next choice is to find equivalent rooms at another hotel with similar characteristics.

Fluctuations in the market are the most vicious uncertainty that every organization has to be prepared for. Maintaining inventory is a common way to deal with this kind of uncertainty. However, in most service environments, and also in job-shops, holding inventory is not practical. Inventory also protects only against peaks of demand for specific products; only enough capacity protects against a simultaneous peak across many SKUs.

How much available capacity of each type of resource (providing a specific capability) should be held?

Maintaining a very high level of internal capacity to cover all temporary peaks of load is a problematic business approach, creating considerably high OE (operating expenses) and high pressure to generate very high T (throughput) to cover for it. Yet, being able to respond to all demand is a necessary condition for maintaining business stability in the future.

The simple solution is distinguishing between two types of capacity: fully available, and capacity by request.  This leads me to define both:

Available Capacity: The periodical amount of capacity for a specific capability that is regularly purchased by the organization and paid for whether it is fully used or not.

For instance, an operator who is paid for 180 work hours every month. If that operator consistently works 20 additional hours of overtime per month, then that overtime should be included in the available capacity and its cost included in the regular OE.

Capacity Buffer: Additional capacity that can be quickly purchased in relatively small amounts.

Capacity buffers are usually made of overtime, temporary workers, freelancers, outsourcing and the use of resources that are primarily used for different capabilities, like using the store manager as a cashier to reduce overly long queues.

The capacity buffers protect the organization's sales and reputation from temporary peaks. The obvious downside is the relatively high expense of using them.

Capacity buffers should be planned ahead of time and constantly managed! Looking for options at the last minute causes considerable damage in the long run. Maintaining capacity buffers might involve additional costs. For instance, in order to add a special shift on the shop floor, temp workers are needed. It is unrealistic to call temp workers who were not given work for several months. Thus, to ensure having temp workers to call in a hurry, they need to get a minimum amount of work every month.

Thus, the global Strategy of the organization, looking to ensure the organization's high reputation, should define the tactic of maintaining capacity buffers. The strategy plan should detail the rules for maintaining the amounts of available capacity and the rules for the capacity buffers.

Even a capacity buffer might run out of capacity!

This is a considerable risk, because when the buffer is exhausted there is no protection left, and the damage is high, as no one truly expects it to happen. It is necessary to monitor the capacity buffers using buffer management.

Implementing buffer management means the buffer is measured in the same units as regular capacity. The most common unit of capacity is one hour of a specific resource. So, a buffer of operators on a manufacturing shop-floor could be 120 man-hours per week. When an additional 60 man-hours are used in one day, it means a penetration of 50% into the buffer. An additional consumption of 30 man-hours means penetrating into the red-zone, which should generate a warning for the short term, and feedback to the planning that the buffer might not be big enough. A detailed periodical analysis of the performance of the capacity buffers has to be in place, both for a better understanding of the flexibility level the organization has to maintain to satisfy the market requirements, and to keep reasonable control over the operating expenses.
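
A small sketch of that calculation, using the man-hour numbers from the example above; the zone boundaries at thirds follow the usual buffer-management convention.

```python
# A sketch of tracking penetration into a capacity buffer, using the man-hour numbers above.

def capacity_buffer_zone(used_hours, buffer_hours):
    penetration = used_hours / buffer_hours
    if penetration > 2/3:
        zone = "Red"       # warn operations and feed back to planning: the buffer may be too small
    elif penetration > 1/3:
        zone = "Yellow"
    else:
        zone = "Green"
    return penetration, zone

print(capacity_buffer_zone(60, 120))    # (0.5, 'Yellow') - 50% penetration
print(capacity_buffer_zone(90, 120))    # (0.75, 'Red')   - deep penetration into the buffer
```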

Virtual questions and answers session: Can we safely commit to full availability of all items?

Imagine that you have a meeting with a C-level executive of a supermarket chain. You know that the chain suffers from 12% shortages, on average, on any given day, at any store. The chain operates 1,125 stores throughout the country.

In order to prepare yourself for the critical meeting, you approach a well-known international TOC expert for a free 30-minute Ask-an-Expert service.

You already know that a typical store holds about 30,000 different items. Every day 3,600 items are missing from the shelf.

The main question you have in your mind:

Is it feasible to ensure excellent availability of all the items?

What level of unavailability should be considered good enough?

From now on there is a (virtual) dialogue between Y (you) and E (the TOC expert). Eventually an answer, not necessarily to the original question, but to the practical need, will emerge.

E: The chain's objective is to sell more, leading to higher profit, right? Suppose the percentage of shortages went down to one-third, about 4%; how would that lead to additional sales?

Y: Two different effects lead to more sales. First, the 8% of the items that have been missing are now available for selling. Second, it’d reduce the number of disappointed clients and add new clients who could not find good availability at the competing stores.

E: Just a minute, how many sales are truly lost due to unavailability? First, we don’t know whether the missing 8% of items sell at the average rate of all items. Some items that seem very important to many customers, like Coca-Cola, are in excellent availability, because the store handles them with greater care. Slow movers have a higher probability of being short. But the more critical point is that in many cases the customer easily finds a similar item and no sale is lost. So, let’s think again: what is the impact of 12% unavailability on sales?

Y: But the customers are still disappointed, even when they find a replacement for the missing item. In time, customers would learn that the particular chain has much better availability than other chains.

E: How would customers realize that one chain has only one-third of the shortages relative to another? Do you expect customers to run tests? How would the customer know which items are missing and which items are simply not offered at this store? How long would it take customers to realize that the advantage is real?

Y: Do you suggest that once the availability level goes up significantly the chain should advertise that the availability level at their stores is higher than the competitors'? I'm not sure customers would buy such a vague message; after all, some sporadic items are still missing. It is not safe to offer compensation for a shortage in such a state. I think that even without advertising better availability, sales would, after some time, go up 3-4%, and as the TOC solution also reduces the inventory levels, profitability would go up nicely.

E: Yes, but it’d be very hard to prove it was due to the TOC methodology. Without radiating the added value to the market, the increase in sales would be slow and the benefits would remain somewhat controversial.

Y: This is what concerns me; it is important to show fast results that lead to truly impressive outcomes. Do you have an idea what else can be done?

E: Let me ask you:

Why do you strive for perfect availability for all 30K items?

Many of the items are easily replaceable by other items. They are on the shelf in order to offer wide choice, but one missing item does not bother the vast majority of the customers.

Y: So, you say to offer perfect availability only for a few key items?

E: Not just truly key items, but all items that some customers enter the store intending to buy. Customers do not expect every brand and size to be available. The store chooses to hold 30K items, which means there are many items that are not held at all. But there are items that customers actively look for, and they expect to find those items on the shelf. It could be that there are 2,000 or even 5,000 such items. A chain using the TOC replenishment solution can provide excellent availability of 5K items and gain a decisive-competitive-edge. It is much more effective than trying to provide perfect availability for 30K items.

Y: Sounds interesting, I suppose the chain should publish a catalogue containing the items for which the chain guarantees perfect availability at every store.

E: Out of the 5,000 items, 3,000 could be a chain-wide commitment to hold at every store, while the rest depend on the choice of the store. So, there is an availability-commitment catalogue of the chain and an additional catalogue of the store. Handling the store-commitment items would have some logistical impact on the central and regional warehouses, but it is relatively easy to solve.

Y: How do you suggest managing the stock of the items that are not on full commitment for availability?

E: You keep a buffer for every item/location and continue to replenish in the same way.  The difference is that buffer-management should not have a red-zone for these items.  Green and yellow are enough not to let the system “forget” the availability of those items.

Y: How would you monitor the size of those buffers?

E: By statistics on the time the item is in Black and the time it is in Green. Anyway, this is not part of your first meeting with the C-level executive of the chain.

Y: Thank you.

The free 30-minute service is offered by TOC Global, www.toc-global.com. You can write your question to info@toc-global.com and such a session will be set up.