Dynamic Buffer Management (DBM) – the breakthrough idea and several problems to solve

Warehouse Check

The most common procedure for maintaining stock of items relies on a forecast, translates it into average daily sales/consumption, and aims at holding a fixed number of sale-days in stock, or at defining minimum and maximum numbers of sale-days. That number of sale-days (or sale-weeks) is determined by a policy for a whole category of items, defined broadly by the supply lead time.

This common procedure leads to significant deviations from the determined levels in both directions causing shortages and huge surpluses at the same time.

The main flaws in the rationale of the common procedure:

  1. The common procedure monitors the demand fluctuations and, based on them, forecasts the future, but it ignores the uncertainty in the supply time. The stock level should consider both the demand and supply fluctuations.
  2. The current forecasting method is based on predicting the average demand, but ignores assessing the level of uncertainty (forecasting error). Thus, information regarding the stock that is required to satisfy the constantly fluctuating demand is missing.
  3. Frequent forecasting increases the noise in the system.
  4. The min-max definition encourages batching and slows down the replenishment frequency, which increases the impact of uncertainty.

The TOC key insights for holding stock are:

  1. Considering not just the on-hand stock, but also the items ‘on the way’, meaning all the open purchasing orders should be part of the mechanism to provide good availability. The Target-Level defines the buffer of stock, including both on-hand and open orders.
  2. The Target-Level is kept constant until a clear signal is received that it is no longer appropriate.
  3. Fast and frequent replenishments to the target-level.
  4. Buffer Management is used for establishing one priority system for moving stock from one location to another.
  5. Tracking the behavior of the buffers to decide whether the Target Level is too small or too large. This is the objective of the DBM algorithm.
    1. The idea is to check the combination of two different sources of uncertainty:
      1. The market demand – its ups and downs!
      2. The replenishment time – its own ups and downs, including the impact of the frequency of replenishments.
    2. There is no point in introducing small changes.
    3. The signal for increasing the buffer is too long a stay, and too deep a penetration, into the Red Zone of the on-hand stock.
    4. The signal for decreasing the buffer is too long a stay in the Green Zone.
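The two signals above can be sketched as a simple rule set. This is only an illustrative sketch: the zone lines (thirds of the buffer), the "deep Red" threshold, and the day-count parameters are assumptions of mine for demonstration, not part of the published DBM algorithm.

```python
def dbm_signal(buffer_size, on_hand_history, red_days_limit=3, green_days_limit=14):
    """Sketch of the DBM signals: a long, deep stay in the Red suggests
    increasing the buffer; a long stay in the Green suggests decreasing it.
    All thresholds here are illustrative assumptions."""
    red_line = buffer_size / 3           # bottom third of the buffer (assumed)
    green_line = 2 * buffer_size / 3     # top third of the buffer (assumed)

    # Count consecutive recent days deep in the Red (below half the Red zone)
    red_days = 0
    for level in reversed(on_hand_history):
        if level < red_line / 2:
            red_days += 1
        else:
            break

    # Count consecutive recent days in the Green
    green_days = 0
    for level in reversed(on_hand_history):
        if level > green_line:
            green_days += 1
        else:
            break

    if red_days >= red_days_limit:
        return "increase"    # Goldratt recommended an increase of 33%
    if green_days >= green_days_limit:
        return "decrease"    # and a decrease of 33% on the opposite signal
    return "keep"
```

For a buffer of 90 units, three straight days with on-hand stock of 10 units would trigger "increase", while two weeks at 70 units would trigger "decrease".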

The breakthrough idea of DBM is monitoring the effectiveness of the protection mechanism rather than re-calculating the buffer-size. Both the demand and the replenishment time behave in an erratic way, which is difficult to describe.  The main difficulty is frequent changes in the environment, which upset the key parameters of the demand and supply time.  Events like the emergence of a new competitor, a controversial article in the media, changes in the economy or regulations all could cause a quantum change in the market demand.

The replenishment time is highly influenced by the operational management of the supplier and the state of load versus capacity. Changes in both factors could lead to significant changes in the replenishment time.

Re-calculation of the buffers when such a drastic change happens is problematic because the calculations rely on past performance. Sensing the actual state of the protection mechanism leads to taking quick actions based on the most recent past. The quick response does not try to speculate about the exact size of the change – just its direction: up or down. Goldratt recommended increasing or decreasing the buffer, once DBM signals the need, by 33%.

The impact of DBM on the performance of the organization is quite strong, and faulty DBM signals might be very costly. Constant learning should be used to tune its algorithm to the specific reality, especially to identify situations that require a different reaction.

When the reason for the deep and lengthy penetration into the Red is a (temporary) inability to replenish, such as when the source lacks inventory or capacity, then DBM should not increase the buffer.

A conceptual issue is the fixed-ratio change of buffers. It does not even matter whether it is an increase or a decrease. It is always possible that a change has been invoked, but after some time reality shows there was no real need for the change. In other words, a short time after the increase there is a signal to decrease. However, if we use 33% for any change, then we end up with about 90% of the buffer size before the increase. The problem is that it is hard to explain that inconsistency.
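The arithmetic behind that inconsistency is simple: an increase by a third followed by a decrease by a third does not return the buffer to its starting size.

```python
buffer = 100
increased = buffer * (1 + 1/3)       # DBM signals an increase: ~133.3
reverted = increased * (1 - 1/3)     # soon after, a decrease signal: ~88.9

# After both 33% changes we hold about 89% of the original buffer,
# not 100% - the "about 90%" inconsistency described above.
print(round(reverted))  # 89
```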

An idea, raised by Dmitry Egorov, was to carefully check the behavior immediately after such a buffer increase in order to validate that it is truly needed. The result of an increase in the buffer is that the buffer status is deeper into the Red relative to the new buffer size. If after a very short time the buffer status goes up into the Yellow, this should signal a return to the former size.

A similar check should be applied after decreasing the buffer. This move would temporarily place the on-hand stock above the new Green line. If the buffer status goes down into the Yellow very soon, DBM should recommend increasing the buffer back to its previous size.
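Egorov's validation idea can be sketched as one small check applied shortly after any buffer change. The `short_window` parameter and the zone names are illustrative assumptions of mine; an actual implementation would need tuning to the environment.

```python
def validate_buffer_change(old_size, new_size, status_zone, days_since_change,
                           short_window=3):
    """Sketch of the post-change validation: if very soon after a change
    the buffer status crosses back into Yellow, revert to the former size.
    'short_window' (in days) is an illustrative assumption."""
    if days_since_change > short_window:
        return new_size          # enough time has passed; keep the change

    if new_size > old_size and status_zone == "yellow":
        # After an increase, the status starts deep in the Red relative to
        # the new size; a fast climb into Yellow means the increase was not needed.
        return old_size
    if new_size < old_size and status_zone == "yellow":
        # After a decrease, stock starts above the new Green line; a fast drop
        # into Yellow means the decrease was not needed.
        return old_size
    return new_size
```

For example, a buffer increased from 100 to 133 that reaches Yellow after two days would be reverted to 100; the same status after five days would leave the new size in place.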

A related issue is the asymmetry of the DBM algorithm between increasing and decreasing the buffer. For a buffer increase the algorithm considers the depth of the penetration into the Red-Zone. For decreasing the buffer, the amount of penetration into the Green is not considered at all. Actually, there is a good reason to be much more conservative about reducing buffers than about increasing them.

The use of the replenishment time as part of the DBM algorithm is of concern to me, because the TOC algorithm does not monitor that time, and its relevance to the decision is dubious. The whole point of DBM is monitoring the combination of demand and replenishment time. The only important need for the replenishment time in the DBM algorithm is for stopping further increases until the effect of the new size can be evaluated. However, this can be done by monitoring the arrival of the specific order generated by the buffer increase. The algorithm for buffer increase could be based on a continuous stay in the Red-Zone, taking the depth into account. For decreasing the buffer there is no reason to refer to the replenishment time. All that is required is a time parameter for too long a stay in the Green.

DBM works in a similar way to forecasts, meaning it looks back to the past to deduce the near future. However, DBM looks only to the very recent past and considers only the actual state of the on-hand stock.

Should we use forecasts as additional information?

The idea is NOT to change the buffer unless there is a clear signal that the buffer is inappropriate. The additional information, based on a forecast that considers more parameters than DBM does, would be a rough estimation of whether the current buffer is definitely too large or too small. Considering seasonality, knowledge of a change in the economy, or the emergence of new products could add valid information to the decision whether to change buffers, and also give a rough idea of by how much. When the forecast points to a minor change in the buffer, less than 20%, the buffer size should be kept as is.

The above issues are, to my mind, central for coming up with an overall more effective way to control stock buffers. I always prefer to leave the final decision to humans, but to give them the most relevant information to make it. When millions of stock buffers are maintained throughout the supply chain at various locations, and 1-2% of the buffers seem to be inappropriate on any given day, it is practically difficult for humans to consider changes for so many buffers. In that case there is a need to let DBM, coupled or not with forecasts, change buffers automatically. This means the effectiveness of DBM directly impacts the financial and strategic performance of the organization.

DBM is important enough to encourage TOC experts to collaborate in order to come up with effective DBM specifications for software companies to follow. The full detailed solution should have a wide acceptance.  TOC is clearly against any “black box” algorithms.

The TOC contribution to Healthcare


The life of every physician is influenced by the oath to treat every patient and do no harm. A practical evil conflict appears when several patients require treatment at the same time.  If this peak of demand for the physician capability is sporadic then a certain priority mechanism, based on the net medical conditions of the “competing” patients, can handle the situation in a satisfactory way.

However, when the load on the medical resources is high, the conflict causes serious delays that endanger many patients and actually threaten the performance of the whole medical system.

Improving the flow of patients by exploiting the weak links in the medical chain and using buffer management for setting priorities is the essence of the current significant contribution of TOC to healthcare. This emphasis on FLOW is well expressed in Alex Knight’s excellent book Pride and Joy. Alex Knight made a breakthrough in TOC in general by “translating” the insights of TOC, which were applied to manufacturing and multi-project environments, to a very different environment. Healthcare is governed by a different set of values and behavioral patterns. Healthcare is also exposed to a high level of fluctuations, and it has to accept any demand that shows up.

Several other TOC people have contributed to the basic concept of improving the flow of patients, among them Prof. Boaz Ronen, Prof. James Cox, Bill Taylor and Gijs Andrea.

The core insight for improving a flow of patients, considering the uncertainty, the lack of enough capacity of critical capabilities, and the value of healthcare to humanity in general, is based on creating one priority mechanism that mainly considers time. Only critical medical emergencies might disrupt the previous priorities, but they are less frequent than what outsiders assume. The flow of patients in hospitals is significantly different from the flows in manufacturing and multi-project environments, and not just because the flow consists of live human beings. The process of treating a patient includes substantial time between treatments, in which the patient simply rests, and that time is part of the absolutely necessary “touch time”. The sequence of medical treatments and checks can be approximately determined in the initial phase, but is subject to considerable uncertainty. The flow of incoming emergency patients is definitely subject to very high uncertainty that impacts the whole system.

It is not surprising that TOC started its penetration into healthcare by improving the flow at the emergency room – the center of the unplanned flow of patients. Few patients show up because of a true emergency. Most patients show up for regular, still highly required, treatments. We now know from wide experience that implementing buffer-management priorities shortens not just the longest stays in the emergency room, but also considerably cuts the average time. This effect can be explained only by the impact on the behavior of the team, now actively looking for quick release of patients and taking actions when the stay time starts to penetrate into the red. Weekly buffer management meetings analyzing the Pareto list of the red and black cases add much more to improving the flow.

TOC has also contributed superior rules for planning and scheduling appointments and operations. These planning rules apply to non-emergency requests within hospitals and also to external day-care clinics. Using time buffers, and closely checking the willingness and readiness of the scheduled patients to truly show up ready for the treatment, is part of the process. Being able to call patients to come in a hurry because a given slot of time is free is actually managing a buffer of patients who are ready to be called this way.

Improving the flow of patients in hospitals is aimed at shortening the time patients spend in the hospital. This is the key to uncovering capacity of beds that, many times, constrains the hospital from admitting more patients. I very much like Alex Knight’s observation that “a hospital is a very unhealthy place to be.”

Can there be additional TOC contribution to healthcare?

The inherent conflict between the duty to give full and equal treatment to every person and managing the scarce capacity and money is still a major evil conflict that troubles not only every healthcare organization, but also governments. It touches upon another huge generic conflict of every government: exploiting the budget constraint. Healthcare creates a vicious cycle: improved healthcare causes people to live longer and with better quality of life, and both aspects require a higher budget to sustain the improved healthcare.

The government conflict has a wide area of ramifications. The basic values of capitalism versus socialism have a considerable impact on how that conflict is handled. While I still believe in Goldratt’s axiom that every conflict can be resolved, I admit I don’t know how to resolve this key generic conflict of every government. I believe I and other TOC experts are able to relieve the intensity of the conflict, but not to resolve it in full.

What we can do is accept the compromised budget and exploit it for the best healthcare for all citizens. Of course, values regarding what is “best”, and how to measure it, are part of both the obstacles and the solution. Accepting the conflicting values as they are, we still realize that the common current policies lead to behaviors that distort the exploitation of the healthcare budget. The resulting behaviors do not really follow the original values. Many of the common policies are based on flawed assumptions that cause distortions, quite similar to what we know in business organizations. Actually, the core of TOC is revealing current flawed assumptions that distort the achievement of the goal. TOC does not interfere with the values themselves.

Eventually every healthcare institution needs to develop a strategy to improve its current state of creating healthcare value.

When we go down to the level of a hospital, as a key example of a healthcare institution, we should assume the policies between the hospital and the government are given. But within those policies, in the detailed offering of the hospital and in its internal policies and processes, there is a lot of room to improve the value generated by the hospital.

Shouldn’t we unite to map the causes and effects that impact the overall performance of healthcare organizations? Understanding how the government and other imposed policies impact behavior could lead us to a more universal solution to a truly wicked problem.

Capacity buffers – a grossly underrated strategic concept

Available capacity is an issue for every organization or business. In order to deliver value any provider needs two different entities:  the appropriate capabilities to generate the value, and available capacity of each of those capabilities in order to react to the demand in an acceptable way.

Capabilities are easier to manage than capacity. The relatively difficult mission in managing capabilities is synchronizing effectively between the different capabilities. For instance, engineering capabilities are required to design a new product, and then production capabilities are required to produce it. The interface between the engineers and the production operators is far from trivial. These synchronization capabilities are part of the more generic requirements for management capabilities. In order to generate throughput from new products, additional capabilities are required, like purchasing, sales, finance and usually also IT. All have to be managed to ensure integration.

Capacity is the trickier part in the process of generating value. Answering the question “do we have enough capacity?” has to consider all the demand, translated into load, compared to the limited available specific capacities. A shortage of capacity, even of just one resource, is the ultimate cause of delays in delivery. So, the practical focused question is:

Given the demand and the capacity limitations would our delivery performance be good enough?

A common complication of managing capacity is when on average there is ample capacity, but at specific times a bottleneck emerges causing very long delays in the delivery of value, probably far beyond the client tolerance time.

At such a peak of load, the threat of losing reputation, leading to reduced future sales, is very serious. The question is:

Is it possible to increase the available capacity at very short notice?

It is clear that if such a short-notice addition of capacity is possible, it is going to be relatively expensive. The point is that the damage of not paying that price could be much more expensive in the future.

Suppose the night manager of a hotel realizes that, due to a mistake, there are no standard rooms available for that night, but two clients with reserved rooms are expected to arrive from the airport.

This is a case where the clear damage of shortage of capacity of a specific resource is very high. It is pretty obvious that the hotel has to solve the problem – not the clients; otherwise the clients might sue the hotel causing more negative ramifications than just the compensation.

How can the hotel find more rooms?

If there are upgraded rooms available, like suites, then giving them to the clients is the first choice. Note, these rooms are not part of the available capacity of standard rooms – yet it is possible to use them to quickly increase the capacity of standard rooms. The next choice is to find equivalent rooms at another hotel with similar characteristics.

Fluctuations in the market are the most vicious uncertainty that every organization has to be prepared for. Maintaining inventory is a common way to deal with this kind of uncertainty. However, in most service environments, as well as in job-shops, holding inventory is not practical. Inventory also protects only against peaks of demand for specific products; only enough capacity protects against simultaneous peaks across many SKUs.

How much available capacity of each type of resource (providing specific capability) to hold?

Maintaining a very high level of internal capacity to cover all temporary peaks of load is a problematic business approach, creating considerably high operating expenses (OE) and high pressure to generate very high throughput (T) to cover for it. Yet, being able to respond to all demand is a necessary condition for maintaining business stability in the future.

The simple solution is distinguishing between two types of capacity: fully available, and capacity by request.  This leads me to define both:

Available Capacity: The periodical amount of capacity of a specific capability that is regularly purchased by the organization and paid for whether it is fully used or not.

For instance, consider an operator who is paid for 180 work hours every month. If that operator consistently works 20 additional hours of overtime per month, then that amount of overtime should be included in the available capacity, and its cost should be included in the regular OE.

Capacity Buffer: Additional capacity that can be quickly purchased at relatively small amounts.

Capacity buffers are usually made of overtime, temporary workers, free-lancers, outsourcing, and the use of resources that are primarily used for different capabilities, like using the store manager as a cashier to reduce too-long queues.

The capacity buffers protect the organization’s sales and reputation from temporary peaks. The obvious cost is the relatively high expense of using them.

Capacity buffers should be planned ahead of time and constantly managed! Looking for options at the last minute causes considerable damage in the long run. Maintaining capacity buffers might cause additional costs. For instance, in order to add a special shift on the shop floor, temp workers are needed. Temp workers who were not given any work for several months are unlikely to answer a call. Thus, to ensure having temp workers to call in a hurry, they need to get a minimum amount of work every month.

Thus, the global Strategy of the organization, looking to ensure the high reputation of the organization, should define the tactics of maintaining capacity buffers. The Strategy plan should detail the rules for maintaining the amount of available capacities and the rules for capacity buffers.

Even a capacity buffer might run out of capacity!

This is a considerable risk, because when the capacity buffer is exhausted there is no protection left, and the damage is high because no one truly expects it to happen. It is necessary to monitor the capacity buffers using buffer management.

Implementing buffer management means the buffer is measured in the same units as regular capacity. The most common unit of capacity is one hour of a specific resource. So, a buffer of operators on a manufacturing shop-floor could be 120 man-hours per week. When an additional 60 man-hours are used in one day, it means a penetration of 50% into the buffer. Additional consumption of 30 man-hours means penetrating into the red-zone, which should generate a warning for the short term, and feedback to the planning that the buffer might not be big enough. A detailed periodical analysis of the performance of the capacity buffers has to be in place, both for better understanding the level of flexibility the organization has to maintain to satisfy the market requirements, and to keep reasonable control of the operating expenses.
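The man-hours example translates directly into a small calculation. The assumption that the red zone starts at two-thirds of the buffer is mine, for illustration; the text only says that 75% consumption is already in the red.

```python
def buffer_penetration(buffer_capacity, consumed):
    """Fraction of the capacity buffer already consumed."""
    return consumed / buffer_capacity

weekly_buffer = 120                              # man-hours per week, as in the example
RED_LINE = 2 / 3                                 # assumed start of the red zone

print(buffer_penetration(weekly_buffer, 60))     # 0.5  -> 50% penetration
print(buffer_penetration(weekly_buffer, 90))     # 0.75 -> past the assumed red line
print(buffer_penetration(weekly_buffer, 90) > RED_LINE)  # True -> warning to planning
```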

Virtual questions and answers session: Can we safely commit to full availability of all items?


Imagine that you have a meeting with a C-level executive of a supermarket chain. You know that, on any given day, the chain suffers from 12% shortages, on average, at any store. The chain operates 1,125 stores throughout the country.

In order to prepare yourself for the critical meeting you approach a well-known international TOC expert for a free 30-minute Ask-an-Expert service.

You already know that a typical store holds about 30,000 different items. Every day 3,600 items are missing from the shelf.

The main question you have in your mind:

Is it feasible to ensure excellent availability of all the items?

What level of unavailability should be considered good enough?

From now on there is a (virtual) dialogue between Y (you) and E (the TOC expert). Eventually an answer, not necessarily to the original question, but to the practical need, will emerge.

E: The chain objective is to sell more, leading to higher profit, right?  Suppose the percentage of shortages would go down to one-third, about 4%, how would that lead to additional sales?

Y: Two different effects lead to more sales.  First, the 8% of the items which have been missing are now available for selling.  Second, it’d reduce the number of disappointed clients and add new clients who could not find good availability at the competing stores.

E: Just a minute, how much in sales is truly lost due to unavailability?  First, we don’t know whether the missing 8% of items sell at the average rate of all items. Some items that seem very important to many customers, like Coca-Cola, enjoy excellent availability, because the store handles them with greater care.  Slow movers have a higher probability of being short.  But the more critical point is that in many cases the customer easily finds a similar item and no sale is lost.  So, let’s think again: what is the impact of 12% unavailability on the sales?

Y: But, the customers are still disappointed, even when they find a replacement for the missing item.  Within time customers would learn that the particular chain has much better availability than other chains.

E: How would customers realize that one chain has only one-third of the shortages relative to another?  Do you expect customers to run tests?  How would the customer know which items are missing and which items are simply not offered at this store?  How long would it take customers to realize that the advantage is real?

Y: Do you suggest that once the availability level goes up significantly the chain should advertise that the availability level at its stores is higher than at the competitors’?  I am not sure customers would buy such a vague message; after all, some sporadic items are still missing.  It is not safe to offer compensation for a shortage in such a state.  I think that even without advertising better availability, sales would, after some time, go up 3-4% due to better availability, and as the TOC solution also reduces inventory levels, profitability would go up nicely.

E: Yes, but it’d be very hard to prove it was due to the TOC methodology.  Without radiating the added value to the market, the increase in sales would be slow and the benefits subject to controversy.

Y: This is what concerns me – it is important to show fast and truly impressive results.  Do you have an idea what else can be done?

E: Let me ask you:

Why do you strive for perfect availability for all 30K items?

Many of the items are easily replaceable by other items. They are on the shelf in order to offer a wide choice, but one missing item does not bother the vast majority of the customers.

Y: So, you say to offer perfect availability only for a few key items?

E: Not just true key items. All the items that some customers enter the store intending to buy specifically.  Customers do not expect every brand and size to be available.  The store chooses to hold 30K items, which means there are many items that are not held at all.  But there are items that customers actively look for, and they expect to find those items on the shelf.  It could be that there are 2,000 or even 5,000 such items.  A chain using the TOC replenishment solution can provide excellent availability of 5K items and gain a decisive competitive edge.  It is much more effective than trying to provide perfect availability for 30K items.

Y: Sounds interesting, I suppose the chain should publish a catalogue containing the items for which the chain guarantees perfect availability at every store.

E: Out of the 5,000 items, 3,000 could be a chain-wide commitment to hold at every store, while the rest depend on the choice of the store.  So, there is an availability-commitment catalogue of the chain and an additional catalogue of the store.  Handling the store-commitment items would have some logistical impact on the central and regional warehouses, but it is relatively easy to solve.

Y: How do you suggest managing the stock of the items that are not on full commitment for availability?

E: You keep a buffer for every item/location and continue to replenish in the same way.  The difference is that buffer-management should not have a red-zone for these items.  Green and yellow are enough not to let the system “forget” the availability of those items.

Y: How would you monitor the size of those buffers?

E: By statistics of the time the item is in black and the time it is in green.  Anyway, this is not part of your first meeting with the C-level executive of the chain.

Y: Thank you.

The free 30-minute service is offered by TOC Global, www.toc-global.com.  You can write your question to info@toc-global.com and such a session will be set.

Learning from surprises: the need and several obstacles


What looks like a big surprise is actually an opportunity to learn an important insight about our reality. Many times we ignore the opportunity, and in other cases we learn the wrong lesson. We should learn how to learn the right lessons.

Bad surprises, but also good surprises, signal to us that somewhere in our mind we hold a flawed paradigm.  A paradigm is a small branch of a cause-and-effect tree that we automatically treat as true; a surprise makes it clear that the tree contains a flaw.

The first task in learning from a surprise is to verbalize the gap between prior expectations and the actual outcome.

I like to use the latest news about the US presidential election. The focus is not on any political aspect. My focus is on the failure of ALL the polls.  The gap is: we were led to believe the prediction implied by the polls, but the actual outcome was the opposite – creating a big surprise that raises many practical questions, both for the political arena and for managing organizations.  The most obvious lesson is to stop believing in polls in general. Is this a valid lesson???

Such a lesson has huge practical ramifications. After all, statistical models, like those at the core of the polls, are important tools for managerial information that lead to many decisions.

This is the first serious obstacle of learning from experience:

There is a big risk of learning the WRONG lesson

The first lesson in dealing with surprises is:

Do not jump to fast conclusions!

Instead, come up with several alternative explanations before carrying out a cause-and-effect analysis, leading eventually to the right lesson, which leads to practical consequences based on an improved understanding of the cause-and-effect in our reality.

For instance, it is possible to claim that a “rare occurrence” led to the failure. After all, no poll claimed the predicted result with 100% confidence. Even when the claim is 90%, there is a 10% chance of a different result.

The hard fact is that we cannot prove the “rare occurrence” theory wrong. However, in this particular case the failure of ALL the polls in the US adds to similar failures of polls in the UK (the Brexit vote) and in Israel. It is still possible that all of them are rare occurrences, but it is unlikely.
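To make "unlikely" concrete: if each poll failure had, say, a 10% chance (the text's own 90%-confidence example) and the three national failures were independent – both strong simplifying assumptions – then the chance of all three happening together is very small.

```python
# The text's example: a poll stated with 90% confidence fails 10% of the time.
p_single_failure = 0.10

# Assuming (unrealistically) independent events in the US, the UK and Israel:
p_all_three = p_single_failure ** 3

print(round(p_all_three, 4))  # 0.001 -> a 0.1% chance, which is why
                              # "all of them are rare occurrences" is unlikely
```

In reality the three failures probably share common causes, so they are not independent; the calculation only illustrates why the coincidence explanation strains credibility.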

Possible alternative explanations:

  1. It is IMPOSSIBLE to predict what people will do by simply asking them direct questions before the event. In other words, we don’t know how to predict the reaction of masses of people. Note that if this is true then all market surveys are useless.
  2. It is possible to have a credible poll, but the current method has a basic common flaw. This hypothesis is actually a category of several, more detailed, assumptions regarding the flaw.
    1. The way the statistical sample is built ignores, or underestimates, relevant parts of the society.
    2. Many people deliberately lie to polls.
    3. Many people change their mind in the last minute.
    4. Many people are influenced by an informal leader, so they have no strong opinion of their own.
    5. Asking people about their preference does not mean they will take an action (like going to vote or buy anything) accordingly.
  3. Carrying out polls creates a new situation where many people react to the poll in a way that changes the result.
  4. Polls rely too much on mathematical formulas, without sufficiently considering the impact of the communication with the people answering the poll.
  5. The polls are grossly misunderstood as they never say something concrete and people do not understand the meaning of “statistical error”.

Each one of the possible explanations should be analyzed using cause-and-effect logic, including looking for other effects that validate or invalidate the explanation. Eventually we get an explanation that is more likely than the others.  Even here we should tell ourselves “Never say I know”, but the analysis, in spite of its level of uncertainty, gives us a much higher chance of enhancing our life than telling ourselves “I don’t know anything”.

Generally speaking, identifying the flawed paradigm does not complete the analysis. One needs to come up with an updated paradigm and then analyze its practical ramifications.

I don’t have the knowledge to carry out a full analysis of the failure of the polls by myself. This is a process that requires people who have been involved with the polls, together with others who can spot and challenge assumptions.  The question is: would such a serious process take place?

Two different groups should be interested in such an analysis:

  1. Politicians, campaign managers and also C-level managers, notably the heads of Marketing and Sales.
  2. The statisticians, both academics and professionals, who conduct those polls or influence them through their work.

Here is the truly big obstacle to learning:

Do you really want to recognize your own flaws?

This obstacle is emotional; there is no practical rationale behind it.  But it exists in a big way and it causes HUGE damage.  It contributes to our repeated failures in life and it never generates anything positive.  We like to protect our ego – but our ego is NOT protected by that at all.

Eventually, it is a matter of our own decision to look hard in the mirror.  Some would claim it is not possible to overcome such strong emotions.  It is a good excuse.  Somehow I have seen people making this decision and going quite far with it – maybe not with every flaw they have, but overcoming some of them is already an achievement.  One of them, by the way, was Eli Goldratt, who clearly struggled, and eventually succeeded, in learning to compliment other people on their achievements.

If you would like to know more about learning from ONE event – go to the main menu of this blog, enter Articles, Videos and More, and read the second article, entitled “Learning from ONE Event – A white paper for expanding the BOK of TOC”.

When Support is Truly Required


No matter whether you are a consultant leading an implementation at a client organization or a practitioner leading a change, you are bound to face certain outcomes that find you unprepared.

For example, you have choked the release of orders according to the new shipping buffers. After one or two months the WIP went down and deliveries were on time. Now you are approached by the production manager, who is worried that two or three non-constraints have been idle for three days in a row. Is this a natural consequence of being non-constraints? Could it be a signal that the buffers are too small?  Is it possible that another constraint has emerged upstream of the three work-centers?  Are you sure you know?  The ramifications of a mistake could be very bad for you!

In other cases you struggle with one of the details of the TOC solution and ask yourself: does it really apply in this case?  

For instance, should every single task planned for a CCPM network be cut by half? Should the red zone always be one-third of the buffer?  Should we offer perfect availability for ALL the SKUs held at our supermarket stores? Should we always exploit the constraint before elevating it?
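To make the CCPM questions concrete, here is a minimal sketch of the common rules of thumb being questioned: cut each “safe” task estimate in half, size the project buffer as half of the resulting critical chain, and mark the last third of the buffer as the red zone. The function name and the example durations are mine, for illustration only; the point of the questions above is precisely whether these fixed ratios should always apply.

```python
def ccpm_buffer_sketch(task_estimates, cut_factor=0.5, red_fraction=1/3):
    """Apply the common CCPM rules of thumb (the very ones questioned above):
    halve each 'safe' task estimate, size the project buffer as half of the
    resulting chain, and define the red zone as the last third of the buffer."""
    cut_tasks = [t * cut_factor for t in task_estimates]
    chain = sum(cut_tasks)
    project_buffer = chain * 0.5          # the "buffer = 50% of the chain" rule
    red_zone = project_buffer * red_fraction
    return cut_tasks, project_buffer, red_zone

tasks = [10, 8, 6]  # hypothetical "safe" estimates, in days
cut, buf, red = ccpm_buffer_sketch(tasks)
print(cut, buf, red)  # [5.0, 4.0, 3.0] 6.0 2.0
```

Changing `cut_factor` or `red_fraction` is exactly the kind of case-by-case judgment the paragraph above is wrestling with.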

“Never Say I Know,” said Eli, but we still have to take actions that might have serious ramifications. Whenever we introduce a significant change there is a risk of causing damage.  Considerable effort should go into reducing the risk by carrying out a good logical analysis and inserting sensors that would give timely warning when something goes wrong.
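One simple example of such a “sensor”, sketched here under the standard TOC buffer-management convention of dividing a buffer into three equal zones (the function and numbers are my own illustration, not a prescription from this post):

```python
def buffer_status(consumed: float, buffer_size: float) -> str:
    """Classify buffer penetration into the classic TOC zones:
    green (bottom third), yellow (middle third), red (top third)."""
    penetration = consumed / buffer_size
    if penetration < 1/3:
        return "green"
    if penetration < 2/3:
        return "yellow"
    return "red"

# A shipping buffer of 9 days with 7 days already consumed penetrates
# the red zone - a timely warning that deserves management attention:
print(buffer_status(7, 9))  # red
```

Tracking how often orders go red is one concrete way to get the timely warning the analysis alone cannot guarantee.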

Is that all one can do?

What if the logical analysis failed to identify certain negative aspects? We all use preconceived assumptions in our analysis, and the quality of the analysis depends on their validity.  Many of the critical assumptions are hidden – meaning we are not aware we are using them.

A simple answer to the need is sharing our analysis with people we appreciate. There are two different positive ramifications:

  1. By having to explain to someone else, we have to articulate the problem and the proposed solution in a clear cause-and-effect way. We cannot “cut corners” the way we do when we think “in our head”. So, the mere fact that someone else is listening, and we feel obliged to clarify the logic for that person, forces us to go through the logic. Then we sometimes see for ourselves what is missing – usually a critical insufficiency.
  2. When we don’t see the possible flaw, the other person might lead us to see it. The assumption we take for granted is not necessarily shared by the other person. Of course, the quality of the feedback has a lot to do with the expertise and open-mindedness of the other person, and also with our readiness to digest feedback that contradicts our own analysis.

Our own biggest obstacle to improving our life is our ego. It prevents us from being open to ideas and opportunities.  I don’t have any insights on how to deal with that; just be aware that you lose by refusing to listen to others.

Listening is not the same as being led. Eventually we all have to decide for ourselves what to do, including whom, if anyone, to listen to.  The assumption that listening to others diminishes our own reputation and ego is definitely false.  Discussing issues with another person requires a certain respect for that person – at the very least some appreciation for his/her knowledge and ability to think with a fresh mind.  But that person does not need to be considered on a higher level than us; actually such a case has some negative impact, because then it looks as if we have to accept that person’s view even when we are not convinced.  I have to admit that when I wanted to discuss an issue that felt important to me with Eli Goldratt, I had that concern: what if Eli suggests something I choose not to follow?  No matter what – the final decision has to be our own.

When should we seek the support of somebody else?

Certainly not every issue requires such efforts. It should be done when our intuition tells us we are not sure.  I think that whenever we struggle with a certain issue there is a reason for it.  We might believe we have eventually verbalized the conflict to ourselves and “evaporated the cloud”, but when it still hangs over us there is a reason for it – a reason we might fail to fully recognize, so our intuition continues to radiate dissatisfaction.

A personal story: I realized the critical importance of that kind of intuition when my first book, “Management Dilemmas”, was translated into Japanese.  The translator sent me several paragraphs asking me, “What the hell did you mean in this paragraph?” Well, she used much more polite words, but that was the meaning.  It was with a lot of pain that I realized that every paragraph she included had caused me a struggle when I wrote it, and eventually dissatisfaction, which I simply ignored.  If only I had had the wit to ask one of my friends to read and comment.  My later books I have co-authored with others, mainly with Bill Dettmer, to avoid this feeling.

So, this kind of sanity check – getting support in diagnosing problems and shaping solutions – is something we should sometimes do, based on our true intuition.

This is what we, at TOC Global, have in mind when we offer the free service of 30 minutes of Ask-an-Expert. We in no way check the time with a stopwatch. We recognize the simple fact that discussing practical, and possibly debatable, issues is a real need.  The TOC world does not know of any super-expert who does not need to talk with another knowledgeable person.

The TOC Global website is www.toc-global.com.  You can register through the site or send an email to info@toc-global.com.  It is free and it brings value.

Small-TOC and Big-TOC – dealing with a key wicked problem of TOC

TOC is now at a crossroads. On one hand we have well-defined methodologies for improving the flow of products and services, and for making delivery reliable.  On the other hand, Goldratt taught us the use of cause-and-effect tools for diagnosing the current blockages to success, pointing to future ramifications, challenging assumptions and coming up with a winning strategy plan.  The critical blockage, by the way, does not necessarily delay the flow of products.  Many times it is being in the wrong market, failing to see the real needs of the market, or de-motivating the employees.

We have a conflict: do and sell what we know well, versus shooting for the sky – using the variety of the TOC tools, and maybe other tools, to solve the real problems that block the organization from achieving much more.

Here is the conflict – the way I see it:

[conflict cloud diagram]

The well-known TOC methodologies are DBR, SDBR, buffer management, Replenishment, CCPM and possibly throughput accounting. The full scope would also include the TP, the six questions, S&T, SFS, the pillars, the engines of disharmony and many other insights that are not well integrated into a coherent BOK.

The cloud – one of those tools that are not part of the “well-known and effective TOC methodologies” – represents a wicked problem in the TOC community. The upper leg expresses the notion of “Small-TOC”, which is proven to give excellent results and can be sold nicely (when the focus is on it), while the bottom leg, “Big-TOC”, brings higher value, as it integrates the functional results into bottom-line improvements and is more universal, but is also more difficult to sell.

What is not mentioned is my additional observation of an undesired-effect (UDE):

There is growing competition from other methodologies on improving the flow of products, services and projects.

The point is that those new competing methodologies are not superior to TOC, but they are superior to the current practices, so they compete for the minds of potential clients, including The Goal readers.  These methods compete with Small-TOC, but not with Big-TOC.  Let me just mention Lean, DDMRP and Agile as such methods.  If you agree with this assumption, then the advantage of selling Small-TOC is threatened and could be temporary.

Most of Small-TOC implementations are functional, and thus do not need the full support of top management. Big-TOC should be sold and addressed to top management as its advantage is integrating the whole company to the desired state of growth coupled with stability.

How can we evaporate the above cloud?

We certainly have difficulty selling Big-TOC, but selling Small-TOC is also far from trivial.

A potential solution, challenging the above critical assumption, is to present TOC as a method to answer two critical questions, as verbalized by Dr. Alan Barnard:

  • How much better can you do? In other words, what is limiting the performance of the organization from achieving much more?
  • What is the simplest, fastest, lowest-cost and lowest-risk way to achieve that?

These questions are holistic and generic, and they apply to the top management of the organization. While the two questions can be easily translated into actual value for the client organization, they raise the issue of whether the client trusts that TOC can lead to effective and safe answers to the questions.  Moreover, letting “consultants”, with all the connotations the word raises, lead the strategy of an organization generates fear, which is also personal (what might it do to ME?).

The obstacles to convincing executives who have some idea of TOC, like The Goal readers, are much better handled when the clients see a large organization of truly experienced TOC experts who closely collaborate to achieve the most effective answer to the second question.

Currently there are two relatively large TOC consultancy companies that do well, even though their growth is not spectacular, and they are not truly large compared to several non-TOC consultancy companies. Having several high-level consultants involved in every implementation provides an opportunity to quickly identify unexpected signals and draw the right response, and by this reduce the risk – which is also what the client expects from an array of highly experienced people.

TOC Global is a new TOC non-profit organization aimed at solving wicked problems that limit the performance of organizations, by combining the experience and knowledge of a diverse group of TOC experts. TOC Global is an international network of top consultants, coaches and practitioners who are ready to contribute time and effort to improve the awareness, adoption, actual value generated, and also the sustainability of TOC implementations.  There are three major routes that TOC Global is determined to take:

  1. Supporting new and ongoing TOC implementations to achieve very high value. This means guiding local consultants and practitioners, through active dialogue, to address the specific issues, challenge hidden assumptions, and deal with the fears of managers that block them from moving. In other words, helping those who are ready to be helped to deal with the wicked problems of specific implementations. The free service of Ask-an-Expert is an initial step in this direction (write to info@toc-global.com).
  2. Choosing challenging wicked problems and running projects to analyze them, carry out careful experiments, and eventually complete an effective solution, which would add huge value to the specific organization and similar ones.
  3. Improving the awareness of TOC through investing in marketing efforts.

This activity would lead to another desired effect: making the current TOC BOK more complete and more effective. Being a non-profit organization allows sharing the lessons and the new knowledge with the whole TOC community.

Big-TOC always looks for possible negative branches of any new exciting idea that solves a problem. The grouping of specific people in TOC Global naturally generates concerns about possible competition with other TOC experts.  The only way to trim that negative branch is by instituting a very strong ethical code, and by being ready to collaborate and join forces with others.  The real competition of TOC is not Lean or Six Sigma; it is the big consulting companies.  Big-TOC offers a leaner and more collaborative process, based on focusing on the truly critical issues, helping the organization verbalize its valuable intuition, and achieving huge value based on simplicity and reliability.  One might say this is the right way to become more antifragile.