Virtual question-and-answer session: Can we safely commit to full availability of all items?


Imagine that you have a meeting with a C-level executive of a supermarket chain. You know that, on any given day, the average store in the chain is short of about 12% of its items. The chain operates 1,125 stores throughout the country.

In order to prepare yourself for the critical meeting you approach a well-known international TOC expert for a free 30-minute Ask-an-Expert session.

You already know that a typical store holds about 30,000 different items. Every day 3,600 items are missing from the shelf.

The main questions on your mind are:

Is it feasible to ensure excellent availability of all the items?

What level of unavailability should be considered good enough?

What follows is a (virtual) dialogue between Y (you) and E (the TOC expert). Eventually an answer will emerge, not necessarily to the original question, but to the practical need behind it.

E: The chain's objective is to sell more, leading to higher profit, right?  Suppose the percentage of shortages went down to one-third of today's level, to about 4%. How would that lead to additional sales?

Y: Two different effects lead to more sales.  First, the 8% of items that used to be missing are now available for sale.  Second, it would reduce the number of disappointed customers and attract new customers who could not find good availability at the competing stores.

E: Just a minute, how many sales are truly lost due to unavailability?  First, we don't know whether the missing 8% of items would sell at the average rate of all items. Some items that seem very important to many customers, like Coca-Cola, enjoy excellent availability because the store handles them with greater care.  Slow movers have a higher probability of being short.  But the more critical point is that in many cases the customer easily finds a similar item and no sale is lost.  So, let's think again: what is the impact of 12% unavailability on sales?

Y: But the customers are still disappointed, even when they find a replacement for the missing item.  Over time customers would learn that this particular chain has much better availability than other chains.

E: How would customers realize that one chain has only one-third of the shortages of another?  Do you expect customers to run tests?  How would a customer know which items are missing and which items are simply not offered at this store?  How long would it take customers to realize that the advantage is real?

Y: Do you suggest that once the availability level goes up significantly the chain should advertise that availability at its stores is higher than at the competitors?  I'm not sure customers would buy such a vague message; after all, some sporadic items are still missing, and it is not safe to offer compensation for a shortage in such a state.  I think that even without advertising, sales would go up 3-4% after some time due to the better availability, and as the TOC solution also reduces inventory levels, profitability would go up nicely.

E: Yes, but it'd be very hard to prove it was due to the TOC methodology.  Without radiating the added value to the market, the increase in sales would be slow and the benefits open to dispute.

Y: This is what concerns me: it is important to show fast results that lead to truly impressive outcomes.  Do you have an idea what else can be done?

E: Let me ask you:

Why do you strive for perfect availability for all 30K items?

Many of the items are easily replaceable by other items. They are on the shelf in order to offer a wide choice, but one missing item does not bother the vast majority of the customers.

Y: So, you are saying we should offer perfect availability only for a few key items?

E: Not just the true key items: all the items that some customers enter the store intending to buy exactly.  Customers do not expect every brand and size to be available.  The store chooses to hold 30K items, which means there are many items it does not hold at all.  But there are items that customers actively look for and expect to find on the shelf.  There could be 2,000 or even 5,000 such items.  A chain using the TOC replenishment solution can provide excellent availability of those 5K items and gain a decisive competitive edge.  That is much more effective than trying to provide perfect availability for all 30K items.

Y: Sounds interesting. I suppose the chain should publish a catalogue of the items for which it guarantees perfect availability at every store.

E: Out of the 5,000 items, 3,000 could be a chain-wide commitment to hold at every store, while the rest depend on the choice of each store.  So, there is an availability-commitment catalogue of the chain and an additional catalogue of the store.  Handling the store-committed items has some logistical impact on the central and regional warehouses, but it is relatively easy to solve.

Y: How do you suggest managing the stock of the items that are not under the full availability commitment?

E: You keep a buffer for every item/location and continue to replenish in the same way.  The difference is that buffer management should not have a red zone for these items.  Green and yellow are enough to keep the system from "forgetting" the availability of those items.

Y: How would you monitor the size of those buffers?

E: By tracking statistics of how much time the item spends in the black (stock-out) and how much in the green.  Anyway, this is not part of your first meeting with the C-level executive of the chain.

Y: Thank you.
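
A side note on E's last answer about monitoring by black/green statistics: below is a minimal sketch, in Python, of how such statistics could be tracked per item/location. The green threshold, the 30-day minimum sample and the resizing rule are illustrative assumptions of mine, not the formal TOC buffer management procedure.

```python
from dataclasses import dataclass

@dataclass
class BufferStats:
    days_observed: int = 0
    days_black: int = 0   # days with zero stock on the shelf (stock-out)
    days_green: int = 0   # days with stock at or above the green line

def record_day(stats: BufferStats, on_hand: int, buffer_size: int) -> None:
    """Record one day of on-hand stock for an item/location against its buffer."""
    stats.days_observed += 1
    if on_hand == 0:
        stats.days_black += 1
    elif on_hand >= (2 / 3) * buffer_size:   # illustrative green threshold
        stats.days_green += 1

def suggest_buffer_change(stats: BufferStats) -> str:
    """Hypothetical rule of thumb: too much time in black suggests a larger buffer,
    being almost always in green suggests the buffer could be smaller."""
    if stats.days_observed < 30:              # illustrative minimum sample
        return "not enough data"
    black_ratio = stats.days_black / stats.days_observed
    green_ratio = stats.days_green / stats.days_observed
    if black_ratio > 0.05:                    # illustrative threshold
        return "increase buffer"
    if green_ratio > 0.90:                    # illustrative threshold
        return "consider decreasing buffer"
    return "keep buffer"
```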

The free 30-minute service is offered by TOC Global, www.toc-global.com.  You can send your question to info@toc-global.com and such a session will be set up.

Learning from surprises: the need and several obstacles


What looks like a big surprise is actually an opportunity to learn an important insight about our reality. Many times we ignore the opportunity, and in other cases we learn the wrong lesson. We should learn how to learn the right lessons.

Bad surprises, but also good surprises, signal that somewhere in our mind we hold a flawed paradigm.  A paradigm is a small branch of a cause-and-effect tree that we automatically treat as true; when a surprise occurs it becomes clear that the tree contains a flaw.

The first task in learning from a surprise is to verbalize the gap between the prior expectations and the actual outcome.

I like to use the latest news about the US presidential election. The focus is not on any political aspect; my focus is on the failure of ALL the polls.  The gap is: we were led to believe the prediction implied by the polls, but the actual outcome was the opposite, creating a big surprise that raises many practical questions for the political arena and also for managing organizations.  The most obvious lesson is to stop believing in polls in general. Is this a valid lesson?

Such a lesson has huge practical ramifications. After all, statistical models, like those at the core of the polls, are important tools for managerial information and lead to many decisions.

This is the first serious obstacle to learning from experience:

There is a big risk of learning the WRONG lesson

The first lesson in dealing with surprises is:

Do not jump to fast conclusions!

Instead, come up with several alternative explanations before carrying out a cause-and-effect analysis, eventually arriving at the right lesson, which leads to practical consequences based on an improved understanding of cause and effect in our reality.

For instance, it is possible to claim that a "rare occurrence" caused the failure. After all, no poll claimed the predicted result with 100% confidence. Even when the claim is made with 90% confidence, there is a 10% chance of a different result.

The hard fact is that we cannot prove the "rare occurrence" theory wrong. However, in this particular case the failure of ALL the polls in the US comes on top of similar failures of polls in the UK (the Brexit vote) and in Israel. It is still possible that all of them are rare occurrences, but it is unlikely.
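
Just to put a rough number on "unlikely": if, purely for illustration, we treat each of the three national failures (US, UK, Israel) as an independent 10% tail event, the chance of all three happening is about 0.1 × 0.1 × 0.1 = 0.001, one in a thousand. The failures are surely not independent, which is exactly the point: a common flawed paradigm is a far more plausible explanation than a string of separate rare occurrences.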

Possible alternative explanations:

  1. It is IMPOSSIBLE to predict what people will do by simply asking them direct questions before the event. In other words, we don't know how to predict the reaction of masses of people. Note that if this is true, then all market surveys are useless.
  2. It is possible to have a credible poll, but the current method has a basic common flaw. This hypothesis is actually a category of several, more detailed, assumptions regarding the flaw:
    1. The way the statistical sample is built ignores, or underestimates, relevant parts of society.
    2. Many people deliberately lie to pollsters.
    3. Many people change their mind at the last minute.
    4. Many people are influenced by an informal leader, so they have no strong opinion of their own.
    5. Asking people about their preference does not mean they will act on it (like actually going to vote, or buying anything).
  3. Carrying out polls creates a new situation in which many people react to the poll in a way that changes the result.
  4. Polls rely too much on mathematical formulas, without sufficiently considering the impact of the communication with the people answering the poll.
  5. The polls are grossly misunderstood, as they never say anything concrete, and people do not understand the meaning of "statistical error".

Each one of the possible explanations should be analyzed using cause-and-effect logic, including looking for other effects that validate or invalidate the explanation. Eventually we get an explanation that is more likely than the others.  Even then we should tell ourselves "never say I know", but the analysis, in spite of its level of uncertainty, gives us a much higher chance of enhancing our life than telling ourselves "I don't know anything".

Generally speaking, identifying the flawed paradigm does not complete the analysis. One needs to come up with an updated paradigm and then analyze its practical ramifications.

I don't have the knowledge to carry out a full analysis of the failure of the polls by myself. This is a process that requires people who have been involved with the polls, together with others who can spot and challenge assumptions.  The question is:  would such a serious process take place?

Two different groups should be interested in such an analysis:

  1. Politicians, campaign managers and also C-level managers, notably the heads of Marketing and Sales.
  2. The statisticians, both academics and professionals, who carry out those polls or affect them through their work.

Here is the truly big obstacle to learning:

Do you really want to recognize your own flaws?

This obstacle is emotional; there is no practical rationale behind it.  But it exists in a big way and it causes HUGE damage.  It drives our repeated failures in life and it never generates anything positive.  We like to protect our ego, but our ego is NOT actually protected by this at all.

Eventually, it is a matter of our own decision to look hard in the mirror.  Some would claim it is not possible to overcome such strong emotions.  It is a good excuse.  Somehow I have seen people making this decision and going quite far with it, maybe not for every flaw they have, but overcoming some of them is already an achievement.   One of them, by the way, was Eli Goldratt, who clearly struggled, and eventually succeeded, in being able to compliment other people on their achievements.

If you would like to know more about learning from ONE event, go to the main menu of this blog, enter Articles, Videos and More, and read the second article, entitled "Learning from ONE Event – A white paper for expanding the BOK of TOC".

When Support is Truly Required


No matter whether you are a consultant leading an implementation at a client organization or a practitioner leading a change, you are bound to face certain outcomes that find you unprepared.

For example, you have choked the release of orders according to the new shipping buffers. After one or two months the WIP went down and deliveries were on time. Now you are approached by the production manager, who is worried that two or three non-constraints have been idle for three days in a row. Is this a natural consequence of being non-constraints? Could it be a signal that the buffers are too small?  Is it possible that another constraint has emerged upstream of the three work-centers?  Are you sure you know?  The ramifications of a mistake could be very bad for you!

In other cases you struggle with one of the details of the TOC solution and ask yourself: does it really apply in this case?  

For instance, should every single task planned for a CCPM network be cut by half? Should the red zone always be one-third of the buffer?  Should we offer perfect availability for ALL the SKUs held at our supermarket stores? Should we always exploit the constraint before elevating it?

"Never Say I Know," said Eli, but we still have to take actions that might have serious ramifications. Whenever we introduce a significant change there is a risk of causing damage.  Considerable effort should go into reducing the risk by carrying out a good logical analysis and inserting sensors that would give a timely warning of something going wrong.

Is that all one can do?

What if the logical analysis failed to identify certain negative aspects? We all use preconceived assumptions in our analysis, and the quality of the analysis depends on their validity.  Many of the critical assumptions are hidden, meaning we are not aware that we are using them.

A simple answer to this need is sharing our analysis with people we appreciate. There are two different positive ramifications:

  1. By having to explain to someone else, we have to articulate the problem and the proposed solution in a clear cause-and-effect way. We cannot "cut corners" the way we do when we think "in our head". So the mere fact that someone else is listening, and that we feel obliged to clarify the logic for that person, forces us to go through the logic. Then we sometimes see for ourselves what is missing, usually a critical insufficiency.
  2. When we don't see the possible flaw ourselves, the other person might lead us to see it. The assumption we take for granted is not necessarily shared by the other person. Of course, the quality of the feedback has a lot to do with the expertise and open-mindedness of the other person, and also with our readiness to digest feedback that contradicts our own analysis.

Our own biggest obstacle to improving our life is our ego. It prevents us from being open to ideas and opportunities.  I don't have any insights on how to deal with that; just be aware that you lose by refusing to listen to others.

Listening is not the same as being led. Eventually we all have to decide for ourselves what to do, including whom, if anyone, to listen to.  The assumption that listening to others diminishes our own reputation and ego is definitely false.  Discussing issues with another person requires a certain respect for that person, at the very least some appreciation for his or her knowledge and ability to think with a fresh mind.  But that person does not need to be considered on a higher level than us; actually such a case has some negative impact, because then it looks as if we have to accept that person's view even when we are not convinced.  I have to admit that when I wanted to discuss an issue that felt important to me with Eli Goldratt, I had that concern: what if Eli suggests something I choose not to follow?  No matter what, the final decision has to be our own.

When should we seek the support of somebody else?

Certainly not every issue requires such effort. It should be used when our intuition tells us we are not sure.  I think that whenever we struggle with a certain issue there is a reason for it.  We might believe we have eventually verbalized the conflict to ourselves and "evaporated the cloud", but when it still hangs over us there is a reason for it: a reason we might fail to fully recognize, so our intuition continues to radiate dissatisfaction.

A personal story: I realized the critical importance of that kind of intuition when my first book, Management Dilemmas, was translated into Japanese.  The translator sent me several paragraphs asking me, "What the hell did you mean in this paragraph?" Well, she used much more polite words, but that was the meaning.  It was with a lot of pain that I realized that every paragraph she picked had caused me, when I wrote it, a struggle and eventually dissatisfaction, which I simply ignored.  If only I had had the wit to ask one of my friends to read and comment.  My later books I have co-authored with others, mainly with Bill Dettmer, to avoid this feeling.

So, this kind of sanity check, getting support in diagnosing problems and shaping solutions, is something we should sometimes seek, based on our true intuition.

This is what we, at TOC Global, have in mind when we offer the free 30-minute Ask-an-Expert service. We in no way check the time with a stopwatch. We recognize the simple fact that discussing practical and possibly debatable issues is a real need.  The TOC world does not know of any super-expert who does not need to talk with another knowledgeable person.

The TOC Global website is www.toc-global.com.  You can register through the site or send an email to info@toc-global.com.   It is free and it brings value.

Small-TOC and Big-TOC – dealing with a key wicked problem of TOC

TOC is now at a crossroads. On one hand we have well-defined methodologies for improving the flow of products and services and making delivery reliable.  On the other hand, Goldratt taught us to use cause-and-effect tools for diagnosing the current blockages to success, pointing to future ramifications, challenging assumptions and coming up with a winning strategy plan.  The critical blockage, by the way, does not necessarily delay the flow of products.  Many times it is being in the wrong market, failing to see the real needs of the market, or de-motivating the employees.

We have a conflict: do and sell what we know well, versus trying to shoot for the sky, using the variety of TOC tools, and maybe other tools, to solve the real problems that block the organization from achieving much more.

Here is the conflict, the way I see it:

[Figure: the Small-TOC versus Big-TOC conflict cloud]

The well-known TOC methodologies are DBR, SDBR, buffer management, Replenishment, CCPM and possibly Throughput Accounting. The full scope would also include the TP (Thinking Processes), the six questions, S&T, SFS, the pillars, the engines of disharmony and many other insights that are not well integrated into a coherent BOK.

The cloud, one of the tools that are not part of the "well-known and effective TOC methodologies", represents a wicked problem within the TOC community. The upper leg expresses the notion of "Small-TOC", which is proven to give excellent results and can be sold nicely (when the focus is on it), while the bottom leg, "Big-TOC", brings higher value, as it integrates the functional results into bottom-line improvements and is more universal, but it is also more difficult to sell.

What is not mentioned in the cloud is my additional observation of an undesired effect (UDE):

There is growing competition from other methodologies on improving the flow of products, services and projects.

The point is that those new competing methodologies are not superior to TOC, but they are superior to the current practices, so they compete for the minds of potential clients, including readers of The Goal.  These methods compete with Small-TOC, but not with Big-TOC.  Let me just mention Lean, DDMRP and Agile as such methods.  If you agree with this assumption, then the advantage of selling Small-TOC is threatened and could be temporary.

Most Small-TOC implementations are functional, and thus do not need the full support of top management. Big-TOC should be sold and addressed to top management, as its advantage is integrating the whole company towards the desired state of growth coupled with stability.

How can we evaporate the above cloud?

We certainly have difficulty selling Big-TOC, but selling Small-TOC is also far from trivial.

A potential solution, challenging the above critical assumption, is to present TOC as a method for answering two critical questions, as verbalized by Dr. Alan Barnard:

  • How much better can you do? In other words, what is limiting the performance of the organization from achieving much more?
  • What is the simplest, fastest, lowest-cost and lowest-risk way to achieve that?

These questions are holistic and generic, and they apply to the top management of the organization. While the two questions can be easily translated into actual value for the client organization, they raise the issue of whether the client trusts that TOC can lead to effective and safe answers to them.  Moreover, letting "consultants", with all the connotations the word raises, lead the strategy of an organization generates fear, which is also personal (what might it do to ME?).

The obstacles to convincing executives who have some idea of TOC, like readers of The Goal, are much better handled when the clients see a large organization of truly experienced TOC experts who closely collaborate to achieve the most effective answer to the second question.

Currently there are two relatively large TOC consultancy companies that do well, even though their growth is not spectacular, and they are not truly large compared to several non-TOC consultancy companies. Having several high-level consultants involved in every implementation provides an opportunity to quickly identify unexpected signals and draw the right response, thereby reducing the risk, and this is also what the client expects from an array of highly experienced people.

TOC Global is a new TOC non-profit organization that aims to solve the wicked problems limiting the performance of organizations by combining the experience and knowledge of a diverse group of TOC experts. TOC Global is an international network of top consultants, coaches and practitioners who are ready to contribute time and effort to improve the awareness, adoption, actual value generated and also the sustainability of TOC implementations.  There are three major routes that TOC Global is determined to take:

  1. Supporting new and ongoing TOC implementations to achieve very high value. This means guiding local consultants and practitioners, through active dialogue, to address the specific issues, challenge hidden assumptions, and deal with the fears of managers that block them from moving. In other words, helping those who are ready to be helped to deal with the wicked problems of their specific implementations. The free Ask-an-Expert service is an initial step in this direction (write to info@toc-global.com).
  2. Choosing challenging wicked problems and running projects to analyze them, carry out careful experiments and eventually complete an effective solution, which would add huge value to the specific organization and similar ones.
  3. Improving awareness of TOC by investing in marketing efforts.

This activity would lead to another desired effect: making the current TOC BOK more complete and more effective. Being a non-profit organization allows sharing the lessons and the new knowledge with the whole TOC community.

Big-TOC always looks for possible negative branches of any new exciting idea that solves a problem. The grouping of specific people in TOC Global naturally generates concerns about possible competition with other TOC experts.  The only way to trim that negative branch is by instituting a very strong ethical code, and by being ready to collaborate and join forces with others.  The real competition of TOC is not Lean or Six Sigma; it is the big consulting companies.  Big-TOC offers a leaner and more collaborative process, based on focusing on the truly critical issues, helping the organization verbalize its valuable intuition, and achieving huge value based on simplicity and reliability.  One might say this is the right way to become more antifragile.

Sales and Operations Planning the TOC Way


S&OP is a known practice, usually focused on the immediate short time frame, where Sales and Operations negotiate what to produce.

Much more value can be generated when ideas regarding market opportunities are truly analyzed, considering the potential throughput (T) and the capacity requirements, which include the cost of using overtime, special shifts, temporary workers and outsourcing. I refer to these means of quickly increasing the available capacity, for a certain additional cost (delta-OE), as the capacity buffer.

When there is excess capacity throughout all operations we are used to describing this state as "the constraint lies in the market."  In such a situation any additional sale is welcome.  The question is how such a situation impacts the sales agents: are they truly compelled to make big moves to bring in new clients and new markets, or do they still focus on the existing clients, looking for a few small opportunities to increase sales just a bit?

When salespeople come up with new ideas, pointing to new market segments, are they listened to? How are those ideas, which might raise concerns on top of opening a potential opportunity, checked by top management?

Suppose someone raises the idea of packaging several different products together, selling the package for a lower price than the simple sum of the individual items' prices. Several questions are immediately raised (a small sketch of such a check follows the list):

  • Selling the package would definitely reduce the sales of the individual items. The question is: by how much? Does the overall total T go up or down? Are there ramifications for the operating expenses (OE)?
  • Is there enough protective capacity to face the new potential demand? If not, can we use the capacity buffer and still gain delta-T >> delta-OE? Or should we intentionally reduce sales of products that yield less T per unit of the critical capacity they require?
  • As no one can accurately forecast the demand for such a new offer, how can we assess both the risk of causing a loss and the chance of gaining much more profit? In other words, what are the possible upside and downside of the decision?
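
Here is the sketch promised above: a minimal Python illustration of checking delta-T against delta-OE for a reasonable worst case and a reasonable best case. All the numbers (packages sold, cannibalized item sales, capacity-buffer cost, throughput per unit) are hypothetical, invented only to show the mechanics; they are not taken from any real case.

```python
def delta_profit(packages_sold, t_per_package, items_lost, t_per_item, extra_capacity_cost):
    """delta-T minus delta-OE for one scenario of the package offer."""
    delta_t = packages_sold * t_per_package - items_lost * t_per_item
    return delta_t - extra_capacity_cost

T_PER_PACKAGE = 25.0   # hypothetical throughput per package (price minus materials)
T_PER_ITEM = 18.0      # hypothetical throughput per individual item whose sale is lost

# The intuition of Sales expressed as ranges, not as one number:
# (packages sold, individual-item sales lost to cannibalization, cost of the capacity buffer)
scenarios = {
    "reasonable worst case": (1_500, 2_000, 20_000),
    "reasonable best case":  (6_000, 2_500, 40_000),
}

for name, (packages, items_lost, extra_oe) in scenarios.items():
    print(name, delta_profit(packages, T_PER_PACKAGE, items_lost, T_PER_ITEM, extra_oe))
# Here the worst case shows a modest loss and the best case a much larger gain:
# a judgment call for management, not the output of a formula.
```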

The regular S&OP process does not ask these questions. Forecasts are treated as one number representing reality, and the financial impact is supposed to be based on cost-per-unit.  The flaw of such a process lies in these two erroneous concepts, cost-per-unit and the one-number forecast, which lead to wrong decisions and mediocre results, in spite of the good intentions of both the Sales and Operations teams.

The real result is that most organizations are stuck with their current clients and market segments, and they do very little to make a real move towards a leap in the organization's financial performance.

Kiran Kothekar, co-founding director at Vector Consulting Group India, made the following important observation during his presentation at the TOCICO conference, 2016.

Using targets leads to inferior overall performance of the organization!

The rationale behind the statement is that targets behave very much like Parkinson's Law. We try to hit the target, but we know better than to exceed it, because we don't want to get higher targets in the future.  Another negative ramification is that most targets are set for a local area.  The target is derived from the overall forecast, but when one of the other parts fails to reach its target, trying to meet the targets of the remaining local areas causes damage.  So, hitting targets locally causes problems elsewhere in the organization.  For instance, focused efforts to sell specific products in order to hit their target might come at the expense of other sales that are someone else's responsibility, or at the expense of future sales.  Promotions, carried out in order to meet the longer-term targets, create massive temporary capacity problems that harm the sales of other products and reduce the overall Net Profit = T - OE.

The practical ramification is that setting targets would eventually disrupt any implementation of TOC, no matter the level of benefits already earned.  In his presentation Kiran made it clear that using TOC performance measurements as targets would cause the same negatives.  I fully agree.

In the mind of management, setting targets has a reason: pushing salespeople, operators and middle-level managers to make the efforts required to achieve good results.  Without those quantitative measures there is a concern that employees would constantly do less than they can and should.

Judging whether the performance of an individual, or of a whole function, is about right cannot be done by relying on quantitative measurements. There is too much variability, and on top of it there are too many dependencies on other individuals, functions and external events.  Observing behavioural patterns is a better way to identify low motivation. Motivating people by encouraging them to raise improvement ideas, and treating those ideas seriously by analyzing their impact on the goal, is a way to maintain the right culture.

Treating uncertainty calls for using forecast ranges. Falling below the range should call for analysis, not automatic blaming.  Going beyond the range also calls for analysis.  Most of the time the cause is not someone's under- or over-performance, but a signal about changing reality that allows us to know a little better, which is the key to handling uncertainty and gaining a competitive edge from it.

I have described in previous posts the process of ongoing sales and operations planning, called DSTOC (decision support based on TOC), which encourages capturing the intuition of salespeople as well as operational people, converting it into ranges, and checking the financial ramifications of the reasonable worst case and the reasonable best case. It is not just a way to make sensible decisions under uncertainty; it is also a sensible way to abolish the use of targets for getting people to do what they know they should do.

The Critical Information behind the Planned-Load

When I developed the concept of the Planned-Load I thought it would be a "nice-to-have" additional feature of my MICSS simulator.  MICSS provided me with the opportunity to check different policies and ideas relevant to managing manufacturing. It took me years to realize how important the planned-load really is.

Actually, without the use of the planned-load it is impossible to use Simplified Drum-Buffer-Rope (S-DBR), which replaced the much more complex DBR. Still, it seems most people in the TOC world, including those who develop TOC software, are not aware of its importance, and certainly not of its potential value, which has not yet been fully materialized.

The planned-load of a critical resource is the accumulation of the time it would take that resource to complete all the work that is firmly planned for it. The objective is to help us understand the connection between load, available capacity and response time!

Let's use the justice system as an example. Imagine a judge examining the work he has to do: sitting in already-scheduled trial sessions for a total of 160 hours.  On top of that he needs to spend 80 hours reading protocols of previous sessions and writing verdicts.  All in all, 240 hours are required to complete the existing known workload.

Assuming the judge works 40 hours a week, all the trials currently waiting for him should theoretically be completed within six weeks. We can expect, with reasonable certainty, that a new trial, requiring about 40 hours of net work from the judge, would be finished no later than ten weeks from now.  I have added three weeks as a buffer against whatever could go wrong, like some of the trials requiring additional sessions or the judge falling sick for a few days. This buffer is required in order to reasonably predict the completion of a new trial.
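
A minimal sketch of the arithmetic above; the numbers are taken from the example, only the small function is mine:

```python
def quote_completion_weeks(planned_load_hours, new_work_hours, weekly_capacity_hours, buffer_weeks):
    """Rough quote: the existing planned-load plus the new work, translated into weeks
    of capacity, plus a time-buffer against whatever could go wrong."""
    total_hours = planned_load_hours + new_work_hours
    return total_hours / weekly_capacity_hours + buffer_weeks

# The judge: 160 hours of scheduled sessions plus 80 hours of reading and writing verdicts,
# a new trial of about 40 hours, 40 working hours per week, and a 3-week buffer.
print(quote_completion_weeks(160 + 80, 40, 40, 3))   # -> 10.0 weeks
```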

However, in reality we could easily find that the average lead time of a new trial is fifty (50) weeks. How can that be explained?

When expectations based on the planned-load do not materialize we have to assume one of the following explanations:

  1. The resource we are looking at is a non-constraint and thus has a lot of idle time. In the judge's case, the real bottleneck could be the lawyers, who ask for and get long delays between sessions.
  2. While the resource is a constraint, the management of the resource, specifically the efforts to exploit its capacity, is so bad that substantial capacity is wasted.
  3. The actual demand contains a high ratio of urgent cases, not planned a priori and thus not part of the current planned-load. Those cases frequently appear and delay the regular planned work already registered in the system.

The importance of the planned-load of the "weakest link" in operations is that it provides a quick, rough estimation of the net queue time of work orders and missions.  When the queue time is relatively long, the planned-load gives an estimation of the actual lead time, provided there is no major waste of capacity.  In other words, the planned-load is a measure of the potential responsiveness of the system to new demand.

Have a look at the following picture, depicting the planned-load (the red bar) of six different resources of a manufacturing organization. The numbers on the right side of each bar denote hours of work, including average setups.

[Figure: MICSS screen showing the planned-load (red bars) and on-site work (green bars) of six resources]

Assuming every resource works 40 hours a week, we see one resource (PK – the packaging line) that has almost three weeks of net work to do.

The green bars represent the amount of planned work that has already arrived at the site of the resource. In production lines, but also in environments with missions that require several resources, a certain work order can already be firm even though it has not yet reached the specific resource.  That work is part of the planned-load, but NOT part of the green bar.  PK is the last operation, and most of the known work orders for it reside at upstream resources, or have perhaps not even been released to the floor yet.

The truly critical information is contained in the red bars, the planned-load. To understand the message you need one additional piece of information:  the customer lead time is between 6 and 7 weeks.

The above information is enough to safely conclude that this shop floor is very poorly managed.

The actual lead time should have been about three weeks on average. So, quoting a four-week delivery time might bring more demand.  Of course, when demand goes sharply up, the planned-load should be carefully re-checked to validate that the four-week delivery is still safe.

Just to illustrate the power of the planned-load information, here is the planned-load of the same organization four months after introducing the required changes in operations:

[Figure: planned-load of the same six resources four months after the changes]

The customer lead time is now four weeks. Demand went up by 25%, causing work-center MB to become an active capacity constraint. As the simulation now follows the correct TOC principles, most of the WIP naturally gathers in front of the constraint.  Non-constraint resources downstream of the constraint have very little WIP residing at their site.

The longest planned-load is three weeks (about 120 hours of planned work for MB). The four-week quotation includes the time-buffer needed to overcome problems at MB itself and to get through the downstream operations.

This is just the basic understanding of the criticality of the planned-load information. Once Dr. Goldratt internalized the hidden value of that information, he based the algorithm for quoting "safe dates" on the planned-load plus a part of the time-buffer.
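
A minimal sketch of a safe-date quote in that spirit. The text says "a part of the time-buffer" without specifying the fraction, so the one half used below is purely an illustrative assumption, as are the numbers in the example call.

```python
import math
from datetime import date, timedelta

def safe_date(today, planned_load_hours, order_hours, weekly_capacity_hours,
              time_buffer_days, buffer_fraction=0.5):
    """Quote a 'safe date': the planned-load of the weakest link plus the new order's own
    work, converted into calendar days, plus a fraction of the time-buffer.
    The 0.5 fraction is an assumption for illustration, not the formal rule."""
    load_days = 7 * (planned_load_hours + order_hours) / weekly_capacity_hours
    total_days = math.ceil(load_days + buffer_fraction * time_buffer_days)
    return today + timedelta(days=total_days)

# Illustrative numbers: 120 hours already planned on the constraint (three 40-hour weeks),
# a new order needing 8 hours of the constraint, and a 14-day time-buffer.
print(safe_date(date(2016, 12, 1), 120, 8, 40, 14))   # -> 2016-12-31
```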

A graph of the behavior of the planned-load over time can be revealing. When the graph goes up, it tells you that more new demand is coming in than is being delivered, which means a risk of an emerging bottleneck.  When the graph goes down, it means excess capacity is available, allowing a faster response.  Both Sales and Production should base their plans accordingly.

Another important use is to look at the immediate short-term impact of the planned-load in order to manage overtime and keep delivery reliable. The time horizon is the time-buffer: making sure the capacity is enough to deliver all orders within that horizon. It identifies problems before buffer management warns of "too much red".  One should always look at BOTH the planned-load and buffer management to manage the execution.
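
A minimal sketch of that short-term check: compare the load that must be completed within the time-buffer horizon to the capacity available in that horizon. The 10% protective margin and the numbers in the example are illustrative assumptions.

```python
def needs_overtime(hours_due_within_buffer, hours_per_day, buffer_days, protective_margin=0.10):
    """True when the work due within the time-buffer horizon exceeds the capacity
    available in that horizon, keeping a protective margin (10% is an assumption)."""
    available = hours_per_day * buffer_days * (1 - protective_margin)
    return hours_due_within_buffer > available

# Illustrative use: 95 hours of work must ship within a 10-working-day buffer,
# with 8 regular hours per day on the critical resource.
print(needs_overtime(95, 8, 10))   # 95 > 72 -> True: plan overtime before the reds pile up
```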

SDBR, and its planned-load component, certainly applies to manufacturing. But SDBR, including both the planned-load and buffer management, should also be used to manage the timely completion of missions in almost all organizations.

There is a tendency in TOC circles to assume that any big enough mission, for which several resources are required, is a project that should be managed according to CCPM.  While it is possible to do so and get positive results, this is not necessarily the best way, and certainly not the simplest way, to manage such missions.

I think we should make the effort to distinguish clearly when SDBR can be effectively used for missions and when CCPM is truly required.   And, by the way, we should all think about how the capacity of key resources should be managed in multi-project environments.  Is there a way to use the planned-load as a key indicator of whether there is enough capacity to prevent overly long delays, or whether there is an urgent need to increase capacity?

Is Throughput-per-constraint-unit truly useful?

[Figure: the P&Q routing diagram]

Cost-per-unit is the most devastating flawed paradigm TOC has challenged. From my experience, many managers, and certainly most certified accountants, are aware of some of the potential distortions.  One needs to examine several situations to grasp the full impact of the distortion.

Cost-per-unit supports a simple process for decision-making, and this process is "the book" that managers believe they should follow.  It is difficult to blame a manager for making decisions based on cost-per-unit.  There are many more ramifications of the blind acceptance of cost-per-unit, like the concept of "efficiency" on which most performance measurements are based.   TOC logically proves how those "efficiency" measurements force loyal employees to take actions that damage the organization.

Does Throughput Accounting offer a valid “book” replacing the flawed concept of cost-per-unit?

Hint: Yes, but some critical developments are required.

The P&Q is a famous example, originally used by Dr. Goldratt, which proves that cost-per-unit gives a wrong answer to the question: how much money is the company in the example able to make?

Every colored ellipse in the picture represents a resource that is available 8 hours a day, five days a week. The chart shows the routing for two products, P and Q. The numbers in the colored ellipses represent the time per part in minutes.  The weekly fixed expenses are $6,000.

The first mistake is ignoring the possible lack of capacity. The Blue resource is actually a bottleneck, preventing the full production of all the required 100 units of P and 50 units of Q every week.  The obvious required decision is:

What current market should be given up?

Regular cost-accounting principles lead us to give up part of the P sales, because a unit of P yields a lower price than Q, requires more materials and also more work time.

This is the second common mistake: when you check what happens if some of the Q sales are given up instead of the P sales, you realize that giving up Q is the better decision!

The reason is that the Blue resource is the only resource that lacks capacity, and a unit of Q requires much more time from the Blue than a unit of P does, while the rest of the resources have idle capacity.

The simple and effective way to explain what seems like a big surprise is to calculate, for every product, the ratio T/CU: the throughput (selling price minus material cost) divided by the time required from the capacity constraint.  In this case a unit of P yields T of ($90-$45) divided by 15 minutes = $3 per minute of the Blue resource's capacity. A unit of Q yields only ($100-$40)/30 = $2 per minute of Blue.

This is quite a proof that cost-per-unit distorts decisions. It is NOT a proof that T/CU is always right.  Following regular cost-accounting principles, once it is recognized that Blue is a bottleneck, the normal result is a loss of $300 per week.  When the T/CU rule is followed, the result is a profit of $300 per week.
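
A small Python sketch that reproduces the numbers above from the data given in the example (prices, material costs, minutes on the Blue resource, weekly demand of 100 P and 50 Q, 2,400 available minutes per resource and $6,000 of weekly operating expenses):

```python
# Data of the P&Q example
PRICE    = {"P": 90,  "Q": 100}
MATERIAL = {"P": 45,  "Q": 40}
BLUE_MIN = {"P": 15,  "Q": 30}      # minutes per unit on the Blue (bottleneck) resource
DEMAND   = {"P": 100, "Q": 50}      # weekly market demand
BLUE_CAPACITY = 8 * 60 * 5          # 2,400 minutes per week
WEEKLY_OE = 6000

T = {p: PRICE[p] - MATERIAL[p] for p in PRICE}        # throughput per unit
T_PER_CU = {p: T[p] / BLUE_MIN[p] for p in PRICE}     # throughput per constraint minute
print(T_PER_CU)                                       # {'P': 3.0, 'Q': 2.0}

def weekly_profit(priority):
    """Fill the Blue resource's capacity according to the given product priority."""
    minutes_left, total_t = BLUE_CAPACITY, 0
    for product in priority:
        units = min(DEMAND[product], minutes_left // BLUE_MIN[product])
        total_t += units * T[product]
        minutes_left -= units * BLUE_MIN[product]
    return total_t - WEEKLY_OE

print(weekly_profit(["Q", "P"]))   # cost-accounting preference -> -300 (a weekly loss)
print(weekly_profit(["P", "Q"]))   # T/CU preference            ->  300 (a weekly profit)
```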

I claim that the T/CU is a flawed concept!

I still claim that the concept of throughput, together with operating-expenses and investment, is a major breakthrough for business decision-making!

The above statement about T/CU has already been presented by Dr. Alan Barnard at TOCICO and in a subsequent paper.  Dr. Barnard showed that when there is more than one constraint, T/CU yields wrong answers. I wish to explain the full ramifications of that for decisions taken before new capacity constraints emerge.

The logic behind T/CU is based on two critical assumptions:

  1. There is ONE active capacity constraint and only one.
    1. Comment: an active capacity constraint means that if we got a little more of its capacity the bottom line would go up, and if we wasted a little of that capacity the bottom line would definitely go down.
  2. The decision at hand is relatively small, so it would NOT cause new constraints to appear.

Some observations of reality:

Most organizations are NOT constrained by their internal capacity! We should note two different situations:

  • The market demand is clearly lower than the load on the weakest-link.
  • While one, or even several, resources are loaded to 100% of their available capacity, the organization has means to quickly obtain enough additional capacity for a certain price (delta-OE), like overtime, extra shifts, temporary workers or outsourcing. In this situation the lack of capacity does not limit the generation of T and profit, and thus capacity is not the constraint.

The second critical assumption, that the decision considered is small, means T/CU should NOT be used for the vast majority of new marketing and sales initiatives! This is because most marketing and sales moves could easily add load that penetrates the protective capacity of one or more resources, creating interactive constraints that disrupt reliable delivery.  Every company running promotions is familiar with the effects of running out of capacity and what happens to the delivery of the products that are not part of the promotion.

That said, it is possible that there are enough means to quickly elevate the capacity of the overloaded resources, but both operations and financial managers should certainly be well prepared for that situation.

Let’s view a somewhat different P&Q problem:

[Figure: the modified P&Q routing diagram, including the new product W]

Suppose that management considers adding a new product, W, without adding more capacity.  The new product W uses the Blue resource's capacity, but relatively little of it.

The question is: what are the ramifications?

Whereas before adding Product W the company had only the Blue resource as a bottleneck (loaded to 125%), now three resources are overloaded. The most loaded resource is now the Light-Blue (154%), then the Blue (146%), and the Grey also reaches 135%.

So, according to which resource should the T/CU guide us?

Finding the "optimized solution" does not follow any T/CU guidance. The new product seems great from the Blue resource's perspective: Product W's ratio of T to time on the Blue is (77-30)/5 = 9.4, the best of all the products.  If we go all the way according to this T/CU, we should sell all the W demand, part of the demand for P (we have the Blue capacity for 46 units of P) and none of the Q. That demand would generate a profit of $770, which is more than the $300 profit without Product W.

Is it the best profit we can get?

However, when you consider the T/CU ratio relative to the Light-Blue resource, Product W is the lowest, with only $2.76 per minute of Light-Blue.

Techniques of linear programming can be used in the ultra-simple example above. As only sales of complete units are realistic, a profit of $1,719 can be reached by selling 97 units of P, 23 units of Q and 42 units of W.  This is considerably higher than the profit without Product W, but also much higher than what relying on the T/CU of the Blue resource yields!
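
For readers who want to see the shape of such a calculation, here is a minimal sketch of the product-mix LP using scipy. The throughput per unit (P: $45, Q: $60, W: $47) and the minutes on the Blue resource follow the text; the routing minutes on the Light-Blue and Grey resources appear only in the picture, so the corresponding rows below, as well as the assumed W demand of 100 units, are illustrative placeholders, and the sketch is not expected to reproduce the $1,719 figure.

```python
from scipy.optimize import linprog

t_per_unit = [45, 60, 47]            # throughput per unit of P, Q, W (from the text)
c = [-t for t in t_per_unit]         # linprog minimizes, so negate to maximize total T

A_ub = [
    [15, 30, 5],    # minutes on Blue per unit of P, Q, W (from the text)
    [10, 20, 17],   # minutes on Light-Blue (placeholder values)
    [14, 18, 9],    # minutes on Grey (placeholder values)
]
b_ub = [2400, 2400, 2400]            # 8 hours x 5 days per resource, in minutes

bounds = [(0, 100), (0, 50), (0, 100)]   # demand limits for P, Q and W (W demand assumed)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
product_mix = res.x                       # optimal (fractional) quantities of P, Q, W
weekly_profit = -res.fun - 6000           # total throughput minus the $6,000 of OE
print(product_mix, round(weekly_profit, 1))
```

To insist on whole units, as in the 97/23/42 mix quoted above, the same model would be solved as an integer program, or the LP result rounded and re-checked against the capacity limits.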

As already mentioned, these conclusions have been dealt with by Dr. Barnard. The emphasis here is that decisions that could cause the emergence of new constraints have to be analyzed as such at the decision-making stage.

Until now we have respected the capacity limitations as they are. In reality we usually have some more flexibility.  When there are easy and fast means to increase capacity, for instance paying the operators overtime, then a whole new avenue opens for assessing the worthiness of adding new products and new market segments.  Even when the extra capacity is expensive, in many cases the impact on the bottom line is highly positive.

The non-linear behavior of capacity (a previous post deals with it) should be viewed as a huge opportunity to drive profits up through product-mix decisions and the use of additional capacity (the "capacity buffer"). Looking towards the longer time frame could lead to superior strategy planning, understanding the worth of every market segment and using the capacity buffer to absorb the fluctuations of demand. This is the essence of Throughput Economics, an expansion of Throughput Accounting that uses the practical combination of intuition and hard analysis as a full replacement for the flawed parts of cost accounting.

T/CU is useless when the capacity buffer, the means for quick temporary capacity, can be used.  When relatively large decisions are considered, the use of T/CU leads to wrong decisions, similar to the use of cost-per-unit.

Goldratt told me: "I have a problem with Throughput Accounting. People expect me to give them a number and I cannot give them a number."  He meant that the T/CU number is too often irrelevant and distorting.  I do not have a number, but I believe I have an answer.  Read my paper on a special page in this blog, entitled "TOC Economics: Top Management Decision Support".  It appears in the menu at the left corner.