The Critical Information behind the Planned-Load

When I developed the concept of the Planned-Load I thought it would be a “nice-to-have” additional feature to my MICSS simulator.  MICSS provided me with the opportunity to check different policies and ideas relevant for managing manufacturing. It took me years to realize how important the planned-load is.

Actually, without the use of the planned-load it is impossible to use Simplified-Drum-Buffer-Rope (S-DBR), which replaced the much more complex DBR. Still, it seems most people in the TOC world, including those who develop TOC software, are not aware of its importance, and certainly not of its potential value, which has not yet been fully materialized.

The planned-load for a critical resource is the accumulation of the time it would take that resource to complete all the work that is firmly planned for it. The objective is to help us understand the connection between load, available capacity and response time!

Let’s use the justice system as an example. Imagine a judge examining the work he has to do: sitting in already scheduled trial-sessions for a total of 160 hours.  On top of that he needs to spend 80 hours on reading protocols of previous sessions and writing verdicts.  All in all, 240 hours are required to complete the existing known workload.

Assuming the judge works 40 hours a week, all the trials currently waiting for the judge should theoretically be completed in six weeks. We can expect, with reasonable certainty, that a new trial, requiring about 40 hours of net work from the judge, would be finished within ten weeks.  I have added three weeks as a buffer against whatever could go wrong, like some of the trials requiring additional sessions or the judge becoming sick for a few days.  This buffer is required in order to reasonably predict the completion of a new trial.
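Here is a minimal sketch of that arithmetic, using the figures above; the function and variable names are mine, for illustration only.

```python
# Planned-load arithmetic for the judge example; all figures come from the text above.

def weeks_of_planned_load(planned_hours, hours_per_week):
    """The planned-load expressed in weeks of available capacity."""
    return planned_hours / hours_per_week

planned_hours = 160 + 80        # scheduled sessions + reading protocols and writing verdicts
hours_per_week = 40
new_trial_hours = 40            # net work the new trial requires from the judge
buffer_weeks = 3                # protection against extra sessions, sick days, etc.

queue_weeks = weeks_of_planned_load(planned_hours, hours_per_week)            # 6.0 weeks
quoted_weeks = queue_weeks + new_trial_hours / hours_per_week + buffer_weeks  # 10.0 weeks
print(queue_weeks, quoted_weeks)
```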

However, in reality we could easily find out that the average lead-time of a new trial is fifty (50) weeks – how can that be explained?

When expectations based on the planned-load do not materialize we have to assume one of the following explanations:

  1. The resource we are looking at is a non-constraint and thus has a lot of idle time. In the judge’s case, the real bottleneck could be the lawyers, who ask for and get long delays between sessions.
  2. While the resource is a constraint, the management of the resource, specifically the efforts to exploit its capacity, is so bad that substantial capacity is wasted.
  3. The actual demand contains a high ratio of urgent cases, not planned a priori and thus not part of the current planned-load. Such cases frequently appear and delay the regular planned work already registered in the system.

The importance of the planned-load of the “weakest link” in operations is that it gives a quick, rough estimation of the net queue-time of work-orders and missions.  When the queue time is relatively long, the planned-load gives an estimation of the actual lead-time, provided there is no major waste of capacity.  In other words, the planned-load is a measure of the potential responsiveness of the system to new demand.

Have a look at the following picture depicting the planned-load (the red bar) of six different resources of a manufacturing organization. The numbers on the right side of the bars denote hours of work, including average setups.

[Figure micss1: the planned-load of six resources]

Assuming every resource works 40 hours a week, we see one resource (PK – the packaging line) that has almost three weeks of net work to do.

The green bars represent the amount of planned work that is already at the site of the resource. In production lines, but also in environments with missions that require several resources, a certain piece of work could already be firm even though it has not yet reached the specific resource.  That work is part of the planned-load, but it is NOT part of the green bar.  PK is the last operation, and most of the known work-orders for it reside at upstream resources, or may not even have been released to the floor yet.

The truly critical information is contained in the red bars – the Planned-Load. To understand the message you need one additional piece of information:  the customer lead-time is between 6 and 7 weeks.

The above information is enough to safely conclude that this shop floor is very poorly managed.

The actual lead-time should have been three weeks on average. So, quoting a four-week delivery time might bring more demand.  Of course, when the demand goes sharply up, the planned-load should be carefully re-checked to validate that the four-week delivery is still safe.

Just to illustrate the power of the planned-load information – here is the planned-load of the same organization four months after introducing the required changes in operations:

[Figure Micss3: the planned-load of the same resources four months later]

The customer lead time is now four weeks. The demand went up by 25%, causing work-center MB to become an active capacity constraint. As the simulation is now using the correct TOC principles, most of the WIP is naturally gathered at the constraint.  Non-constraint resources that are downstream of the constraint have very little WIP residing at their site.

The longest planned-load is three weeks (about 120 hours of planned work for MB). The four-week quotation includes the time-buffer, allowing for problems at MB itself and for the time to go through the downstream operations.

This is just the basic understanding of the criticality of the planned-load information. Once Dr. Goldratt internalized the hidden value of that information, he based the algorithm for quoting “safe-dates” on the planned-load plus a part of the time-buffer.
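A hedged sketch of such a quoting rule is below. The text only says “a part of the time-buffer”; using half of it, and the specific dates and buffer size, are my own illustrative assumptions, not a prescription.

```python
from datetime import date, timedelta

def safe_date(today, planned_load_hours, hours_per_week, time_buffer_weeks, buffer_fraction=0.5):
    """Quote a delivery date: the planned-load converted to weeks, plus a part of the time-buffer.
    buffer_fraction = 0.5 is an assumption for illustration only."""
    weeks = planned_load_hours / hours_per_week + buffer_fraction * time_buffer_weeks
    return today + timedelta(weeks=weeks)

# Using the numbers mentioned earlier: ~120 hours of planned load on the weakest link,
# 40 hours per week, and an illustrative two-week time-buffer, from an arbitrary "today".
print(safe_date(date(2016, 10, 1), 120, 40, 2))
```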

A graph of the behavior of the planned-load through time can be revealing. When the graph goes up, it tells you there is more incoming new demand than demand delivered, which means a risk of an emerging bottleneck.  When the graph goes down, it means excess capacity is available, allowing faster response.  Both Sales and Production should base their plans accordingly.
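A trivial sketch of such a signal (my own illustration; the weekly readings are invented):

```python
# Watching the trend of the planned-load (hours on the weakest link, sampled weekly).
def planned_load_signal(samples):
    if samples[-1] > samples[0]:
        return "rising: more incoming demand than delivered - risk of an emerging bottleneck"
    if samples[-1] < samples[0]:
        return "falling: excess capacity is available - faster response is possible"
    return "stable"

print(planned_load_signal([95, 102, 110, 118, 126]))   # hypothetical weekly readings
```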

Another important distinction is to look at the immediate short-term impact of the planned-load in order to manage overtime and so control reliable delivery. The time horizon is the time-buffer – making sure the capacity is enough to deliver all the orders within that horizon.  It identifies problems before buffer management warns of “too much red”.  One should always look at BOTH the planned-load and buffer management to manage the execution.

S-DBR, and its planned-load part, certainly applies to manufacturing. But S-DBR, including both the planned-load and buffer-management, should also be used to manage the timely completion of missions in almost all organizations.

There is a tendency in TOC circles to assume that any big enough mission, for which several resources are required, is a project that should be managed according to CCPM.  While it is possible to do so and get positive results, this is not necessarily the best way, and certainly not the simplest way, to manage such missions.

I think we should make the effort to distinguish clearly between when S-DBR can be effectively used for missions and when CCPM is truly required.   And, by the way, we can all think about how the capacity of key resources should be managed in multi-project environments.  Is there a way to use the Planned-Load as a key indicator of whether there is enough capacity to prevent overly long delays, or whether there is an urgent need to increase capacity?

Is Throughput-per-constraint-unit truly useful?

[Figure P&Q1: the P&Q routing chart]

Cost-per-unit is the most devastating of the flawed paradigms TOC has challenged. From my experience, many managers, and certainly most certified accountants, are aware of some of the potential distortions.  One needs to examine several situations to get the full impact of the distortion.

Cost-per-unit supports a simple process for decision-making, and this process is “the book” that managers believe they should follow.  It is difficult to blame a manager for making decisions based on cost-per-unit.  There are many more ramifications of the blind acceptance of cost-per-unit, like the concept of “efficiency” on which most performance measurements are based.   TOC logically proves how those “efficient” performance measurements force loyal employees to take actions that damage the organization.

Does Throughput Accounting offer a valid “book” replacing the flawed concept of cost-per-unit?

Hint: Yes, but some critical developments are required.

The P&Q is a famous example, originally used by Dr. Goldratt, which proves that cost-per-unit gives a wrong answer to the question: how much money is the company in the example able to make?

Every colored ellipse in the picture represents a resource that is available 8 hours a day, five days a week. The chart represents the routing for two products: P and Q. The numbers in the colored ellipses represent the time-per-part in minutes.  The weekly fixed expenses are $6,000.

The first mistake is ignoring the possibility of a lack of capacity. The Blue resource is actually a bottleneck – preventing the full production of the required 100 units of P and 50 units of Q every week.  The obviously required decision is:

What current market should be given up?

Regular cost accounting principles lead us to give up part of the P sales, because a unit of P yields a lower price than Q, requires more materials and also longer work time.

This is the second common mistake: when you check what happens when some of the Q sales are given up instead of P units, you realize that giving up Q is the better decision!

The reason is that the Blue resource is the only resource that lacks capacity, and the Q product requires much more of the Blue’s time than it does of the rest of the resources, which have idle capacity.

The simple and effective way to demonstrate the reason behind what seems like a big surprise is to calculate for every product the ratio T/CU – throughput (selling price minus the material cost) divided by the time required from the capacity constraint.  In this case a unit of P yields T of ($90-$45) divided by 15 minutes = $3 per minute of the Blue resource’s capacity. Product Q yields only ($100-$40)/30 = $2 per minute of the Blue.

This is quite a proof that cost-per-unit distorts decisions. It is NOT a proof that T/CU is always right.  Following regular cost-accounting principles, once it is recognized that the Blue is a bottleneck, the normal result is a loss of $300 per week.  When the T/CU rule is followed, the result is a positive profit of $300 per week.
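To see the two results concretely, here is a small sketch that fills the Blue resource’s 2,400 weekly minutes (8 hours a day, five days a week) with one product first and then the other; the numbers are exactly those given above, only the code structure is mine.

```python
# Verifying the two product-mix results of the P&Q example.
BLUE_MINUTES = 8 * 60 * 5      # 2,400 minutes of Blue capacity per week
OE = 6000                      # weekly fixed expenses

# product = (throughput per unit = price - materials, Blue minutes per unit, weekly demand)
P = (90 - 45, 15, 100)
Q = (100 - 40, 30, 50)

def weekly_profit(first, second):
    """Fill the Blue capacity with the preferred product first, then the other one."""
    t1, m1, d1 = first
    t2, m2, d2 = second
    units1 = min(d1, BLUE_MINUTES // m1)
    units2 = min(d2, (BLUE_MINUTES - units1 * m1) // m2)
    return units1 * t1 + units2 * t2 - OE

print(weekly_profit(Q, P))   # cost-per-unit logic prefers Q: 50 Q + 60 P -> a loss of $300
print(weekly_profit(P, Q))   # T/CU prefers P ($3/min vs $2/min): 100 P + 30 Q -> a profit of $300
```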

I claim that the T/CU is a flawed concept!

I still claim that the concept of throughput, together with operating-expenses and investment, is a major breakthrough for business decision-making!

The above statement about T/CU has already been presented by Dr. Alan Barnard at TOCICO and in a subsequent paper.  Dr. Barnard showed that when there is more than one constraint, T/CU yields wrong answers. I wish to explain the full ramifications of that for decisions that are taken prior to the emergence of new capacity constraints.

The logic behind T/CU is based on two critical assumptions:

  1. There is ONE active capacity constraint and only one.
    1. Comment: An active capacity constraint means that if we got a little more capacity the bottom-line would go up, and if we wasted a little of that capacity the bottom-line would definitely go down.
  2. The decision at hand is relatively small, so it would NOT cause new constraints to appear.

Some observations of reality:

Most organizations are NOT constrained by their internal capacity! We should note two different situations:

  • The market demand is clearly lower than the available capacity of the weakest link.
  • While one, or even several, resources are loaded to 100% of their available capacity, the organization has means to quickly obtain enough additional capacity for a certain price (delta(OE)), like overtime, extra shifts, temporary workers or outsourcing. In this situation the lack of capacity does not limit the generation of T and profit, and thus capacity is not the constraint.

The second critical assumption, that the decision considered is small, means T/CU should NOT be used for the vast majority of new marketing and sales initiatives! This is because most marketing and sales moves could easily cause extra load that penetrates the protective capacity of one or more resources, creating interactive constraints that disrupt reliable delivery.  Every company using promotions is familiar with the effects of running out of capacity, and with what happens to the delivery of the products that are not part of the promotion.

That said, it is possible that there are enough means to quickly elevate the capacity of the overloaded resources, but certainly both operations and financial managers should be well prepared for that situation.

Let’s view a somewhat different P&Q problem:

[Figure pq12: the modified P&Q routing chart with the additional product W]

Suppose that the management considers adding an additional product W, without adding more capacity.  The new product W uses the Blue resource capacity, but relatively little.

The question is: What are the ramifications?

If before having Product W the company had the Blue resource as a bottleneck (loaded to 125%), now three resources are overloaded. The most loaded resource is now the Light-Blue (154%), then the Blue (146%), and the Grey also reaches 135%.

So, according to which resource should T/CU guide us?

Finding the “optimized solution” does not follow any T/CU guidance. The new product seems great from the Blue machine’s perspective.  Product W’s ratio of T to time-on-the-Blue is ($77-$30)/5 = $9.4 per minute, the best of all the products.  If we go all the way according to this T/CU, we should sell all the W demand, part of the demand for P (the remaining Blue capacity suffices for 46 units of P) and none of the Q. That mix would generate a profit of $770, which is more than the $300 profit without the W product.

Is it the best profit we can get?

However, when you calculate the T/CU ratio relative to the Light-Blue resource, Product W is the lowest, with only $2.76 per minute of Light-Blue.

Techniques of linear programming can be used in the ultra-simple example above. As only sales of complete items are realistic, a profit of $1,719 can be reached by selling 97 units of P, 23 units of Q and 42 units of W.  This is considerably higher than without Product W, but also much higher than the result of relying on the T/CU of the Blue resource!
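For readers who want the structure behind that computation, this is the generic formulation (my own rendering): x_i is the number of units of product i sold, T_i its throughput per unit, t_{r,i} the minutes it needs on resource r (taken from the routing chart above), C_r = 2,400 available minutes per resource per week, D_i the weekly demand, and OE = $6,000. The integrality requirement expresses that only sales of complete items are realistic.

```latex
\max_{x_P,\,x_Q,\,x_W}\ \sum_{i\in\{P,Q,W\}} T_i\,x_i \;-\; \mathrm{OE}
\quad\text{subject to}\quad
\sum_i t_{r,i}\,x_i \le C_r \ \text{for every resource } r,
\qquad 0 \le x_i \le D_i,\qquad x_i \in \mathbb{Z}
```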

As mentioned, the above conclusions have already been dealt with by Dr. Barnard. The emphasis here is that decisions which could cause the emergence of new constraints are something we have to be able to analyze at the decision-making stage.

Until now we have respected the capacity limitations as they are. In reality we usually have some more flexibility.  When there are easy and fast means to increase capacity, for instance paying the operators for overtime, a whole new avenue opens up for assessing the worthiness of adding new products and new market segments.  Even when the extra capacity is expensive, in many cases the impact on the bottom-line is highly positive.

The non-linear behavior of capacity (there is a previous post dealing with it) has to be viewed as a huge opportunity to drive profits up by product-mix decisions and by the use of additional capacity (defined as the “capacity buffer”). Looking towards the longer time-frame could lead to superior strategy planning: understanding the worth of every market segment and using the capacity buffer to face the fluctuations of the demand. This is the essence of Throughput Economics, an expansion of Throughput Accounting that uses the practical combination of intuition and hard analysis as the full replacement of the flawed parts of cost accounting.

T/CU is useless when the capacity buffer, the means for quick temporary capacity, can be used.  When relatively large decisions are considered, the use of T/CU leads to wrong decisions, similar to the use of cost-per-unit.

Goldratt told me: “I have a problem with Throughput Accounting. People expect me to give them a number and I cannot give them a number”.  He meant the T/CU number is too often irrelevant and distorting.  I do not have a number, but I believe I have an answer.  Read my paper on a special page in this blog entitled “TOC Economics: Top Management Decision Support”.  It appears on the menu at the left corner.

A Dialogue between TOC and SWOT

[Figure: a SWOT analysis table with the main objectives]

It is not easy for TOC people to evaluate ideas created outside the TOC community, for three interconnected reasons.

The first is the damaging tendency to assume that TOC challenges everything that is not part of the TOC BOK.  I hope we get over this tendency.

Another reason is the specific terminology used in TOC, which can differ from how the same terms are used elsewhere. Just think of the term ‘constraint’ and how its use in TOC is different from that of the rest of the world.

The third reason is that the TOC school of thought implies a certain sequence of analysis. It always starts from the goal or an important objective and asks the question:

What prevents you from achieving more?

It is a must to create enough bridges and dialogues between TOC and other sources of relevant managerial knowledge, in order to expand both its scope and its power.

Let’s check the relationship between TOC and SWOT analysis. SWOT, the acronym for Strengths, Weaknesses, Opportunities and Threats, is basically a marketing picture of an organization, a brand or just a product.  The objective of SWOT is to lead the mind to improve the impact of the strengths, to note the potential opportunities and grab the best of them, to reduce the damage from weaknesses and to become more careful about threats.  The idea is that every part of SWOT impacts marketing, so appropriate planning should take it into account.

SWOT starts with the Strengths, assuming they are the key to identifying the target markets and to emphasizing these aspects in the marketing campaign. TOC, on the other hand, starts its analysis with the weaknesses of the organization as a whole. These weaknesses are the key reason for the current state of the organization.  TOC uses several types of weaknesses – constraints, core problems and flawed policies (policy constraints) – all leading to the identification of flawed assumptions that can be challenged.

The basic assumption behind this part of TOC is that the core weakness, in capacity, capability and possibly in the market perception, is the key leverage point, the most immediate opportunity to do much better in a relatively short time.

It took time for TOC to recognize the role of strengths in outlining the way to vastly improve the future of the organization. I see the insight of defining the Decisive-Competitive-Edge (DCE) as a key development of TOC.  Goldratt defined the DCE as “answering a need of potential clients in a way that no competitor is able to.”  A TOC way to spot a need of the potential market – its pains that are now taken as “natural” or “part of reality” – is to look for possible UDEs of the market, by developing a branch of a current-reality-tree starting with the products, services and delivery.  But, in order to be able to solve an UDE, certain key capabilities are required for developing an answer to that need.

So, the unique capabilities of the organization, like fast yet reliable flow of products, are its key strengths.  These capabilities are the source of new opportunities, meaning the ability to match an unanswered need in the market with the ability to answer that need.  The logical cause-and-effect branch can start with the unique capabilities and then deduce the undesired effects in the market that those capabilities could solve.

For example, fast and reliable flow could solve urgent situations for potential clients who badly need the products, when the current standard of delivery is too slow to resolve such an urgency. The next step in the analysis is estimating the value for the potential clients of receiving quick and reliable delivery, and whether this solution could generate new business for such a client, knowing there is a satisfactory, even if somewhat more expensive, answer to such emergencies.  Such an analysis should come to the conclusion that the organization should not “waste” the unique capability by selling the fast response to everybody, even when no urgency exists, without charging more for it.

The usual SWOT analysis looks at the strengths of a product or service from the perspective of the market. These strengths are all due to certain capabilities of the company. Knowing the unique capabilities better, coupled with sensitivity to the pains and needs of the market, is critical for identifying new opportunities.   Strengths and opportunities have to be bundled together to get the full effect.

The last part of SWOT is threats. From a marketing perspective, threats can be competitors who might find better ways to compete. Another type of threat consists of economic and cultural developments that might negatively impact future sales.  These are mostly external events, which the company might not be prepared to handle.

There is a definite need to look not just for external threats, but also for internally developing threats.  For instance, the retirement of a key professional whose unique capabilities are behind some of the current strengths.  Another could be cash turning into a constraint when overly large long-term investments draw down too much of the current financial assets.

TOC has, generally speaking, neglected the issue of threats, both external and internal.  The notion of an UDE is the closest signal that TOC might note and use to lead the user to draw the fuller cause-and-effect picture.  An UDE is defined as a well-known undesired effect. The missing part in the current TOC BOK is constant monitoring for new emerging effects that have the potential to become most undesired, sometimes even disastrous.   I have already written a post about “Identifying the emergence of threats”  (https://elischragenheim.com/2015/09/24/indentifying-the-emergence-of-threats/).

SWOT in general encourages a detailed definition of market segments – those that enjoy the strengths and care less about the weaknesses. TOC has not, to my mind, fully developed a technique for clever market segmentation, where features, delivery service and the variety of the product mix are all used to define the clients that should get the best value, and by that define the targets.  It is not too difficult to develop such TOC-influenced tools.

The personal challenge of being a CEO


Clarification:  This post was written after several discussions on the topic with prominent people in the TOC field.  The main discussion was led by Ray Immelman.

Understanding how to manage organizations has to include the personal aspects of the one who is in charge of the organization – the managing director or the CEO. While the undesired effects of the organization affect the CEO, we also have to consider the CEO as an individual with interests, wishes and also ego.

Given the wide spread of organization sizes, and the spectrum of personalities who are CEOs, can we have any idea of what it takes to be one?

Taking the responsibility for the future of an organization, for its shareholders and employees, fits only people who have enough self-confidence to believe they can do it. Actually every single CEO has to demonstrate self-confidence at all times, which requires a lot of self-control.

I believe many CEOs have doubts and fears they hide well behind the façade of someone who clearly knows what needs to be done.

The challenge of every CEO is to get hold of the basic complexity of the organization, its stakeholders, clients, and suppliers. On top of the complexity there is considerable uncertainty.  The combination of complexity and uncertainty impacts the efforts of the CEO to answer two critical questions: “what to change?” and “what to change to?”  On top of dealing with complexity and uncertainty, every CEO also has to constantly resolve conflicts within the organization and between the CEO and the shareholders.  These conflicts produce obstacles to implementing the changes proposed by the answer to the second question, and thereby raise the third basic question: “how to cause the change?”

The first key personal dilemma of every CEO derives from the difficulty of answering the three key questions, and from how the actual results are judged by the board, shareholders and possibly stock-market analysts.  The seemingly unavoidable outcome is that the CEO fears that taking any risk, even when the possible damage is low, might be harshly criticized.  Considering the complexity and the variability, the level of pressure is so high that it pushes the CEO not to take even the limited risks required for potential growth.

This means that within the generic conflict of take-the-risk versus do-not-take-the-risk the interest of the organization might be to take-the-risk, yet the CEO decides against taking such a risk because of the potential personal damage.

When analysing the CEO conflict we also have to consider the risk of not taking risks. First of all, the shareholders expect better results, and the CEO, trying to resolve the conflict, has to promise certain improved results – and he would be judged against these expectations.  Actually, achieving phenomenal results might also be seen as risky, as it creates too-high expectations in the stock market.  On top of that there are enough other threats to the organization, and failing to handle them would be detrimental to the CEO as well. Having to behave with the utmost care on almost every move adds to the potential opportunity for TOC, with its capability for superior handling of uncertainty and risks.

The key TOC insight is that the combination of complexity and variability leads to inherent simplicity.  The essence of the simplicity is that actions whose impact is lower than the level of the noise (variability) cannot be proven to have had positive outcomes. This leads to focusing only on the more meaningful actions. The simplicity also supports judging more daring actions, looking for those whose potential downsides are limited while the upsides are high. When you add other TOC insights that reduce variability, improve operational flow, check the true financial impact through throughput economics, and apply powerful cause-and-effect analysis, the combination yields an overall safer environment.

Taking risks is not the only dilemma with a special impact on the personal side of the CEO. While the fear of being harshly criticized is pretty strong, the CEO wishes to get the glory of any big success.  The dilemma arises when success requires the active participation, and considerable inspiration, of other people.  It is even more pronounced when those other people are external to the organization, like consultants.  Challenging existing paradigms, which is the core of the TOC power, might shine the light on the TOC consultants and rob the glory from the CEO, who has chosen to take the risk but might not fully enjoy the successful outcomes.

How do people react to someone who suddenly changes his own previous paradigms? Isn’t it easy and even natural to blame such a manager for not having been able to change much earlier, or for being too easily influenced?

Actually, this dilemma seems tough to resolve in a way that achieves a win-win between the organization, the CEO and also the other executives. Emphasizing how wrong the old paradigms are makes the manager’s dilemma stronger.  People have a reason to refuse to admit mistakes: it harms their self-confidence and it radiates incompetence – probably a flawed impression, but still a pretty common one.  Of course, the other side of the conflict is the potential damage of not admitting the previous mistakes.

In the old days of TOC we used the OPT Game and the Goldratt Simulators, which I developed, to push managers to admit they don’t know.  This was quite effective in creating the “wow effect”.  However, the humiliation the managers went through proved beneficial only to those with very strong personalities.  Too many managers paid some lip-service to the proof of the flawed concept and continued with the old ways.

We expect a CEO to have the strong personality that allows recognizing a mistake and taking whatever steps are required to get onto the right track. We expect a CEO to act according to the full interests of the organization without considering his personal interests.  Very few truly live up to this challenge.  Many believe that the way to handle the “agent” problem is to pay high bonuses for success.  Actually, this only legitimizes the personal-organizational conflict and could easily influence CEOs to take bigger risks that increase the fragility of the organization.

It seems we need to help CEOs resolve both dilemmas. We have to promote the contribution of the CEO to the success, and we have to reduce the fears of the potentially unknown outcomes, organizational and personal, of the change we believe would bring huge value, and which even in the worst case would still be better than the current state.

My own realization is to reduce the pressure on what is “wrong” and make much more of what is “better”, presenting the change as an improvement rather than a revolution that discards everything people have learned in the past.

What does FOCUS mean?


The vast majority of managers believe that focusing is absolutely necessary for managing organizations.

If this is the case, then FOCUS as the key short description of TOC has very little to offer to managers.

Let’s consider the possible consequences of the Pareto law. A typical company gets 80% of its revenues from 20% of its clients.  How about focusing on the 20% of big clients and dumping the other 80%?  Does it make sense?

The point is that the real question is not whether to focus or not, but:

What to focus on?

And, even more importantly:

What NOT to focus on?

The reason for emphasizing what not to focus on is that the need to focus is caused by limited capacity, which means it is impossible to focus on everything and draw the full value from it.  The limitation could be a capacity constraint that forces us to concentrate on what exploits that resource.  Empowering subordinates is a means for an individual manager to focus on the key issues without becoming the constraint of the organization.  In many cases the critical limitation is the inability of the management team to multi-task in a way that would not delay truly important issues that require action.  This is why management attention is considered the ultimate constraint on designing the future of the organization.

Giving up part of the market demand could make sense only when more overall sales, more total Throughput, could thereby be materialized.  Only in very rare cases is it possible to reduce the OE, after giving up certain demand, to the level where T minus OE would be improved.  Dropping 80% of the clients – the smaller clients that yield only 20% of the turnover – would almost never reduce the OE by what is required to compensate for the lost T, which is significantly more than 20% of the OE.  This is due to the non-linearity of the OE: reducing the capacity requirements does NOT yield an OE reduction at the same rate.  Just think about the space the organization holds and whether the reduced number of clients would allow using less space – and even when that is possible, it might be impossible to actually save OE because of it.
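A purely hypothetical illustration of that arithmetic (all figures are invented for the sake of the example; only the 80/20 split comes from the text):

```python
# Hypothetical illustration: dumping the clients that bring 20% of the revenue.
revenue, t_margin, oe = 10_000_000, 0.60, 5_000_000   # invented figures

profit_before = revenue * t_margin - oe        # T - OE = 6.0M - 5.0M = 1.0M

lost_t = 0.20 * revenue * t_margin             # 1.2M of Throughput given up
oe_saved = 0.10 * oe                           # assume only 10% of OE can actually be cut
profit_after = profit_before - lost_t + oe_saved   # 1.0M - 1.2M + 0.5M = 0.3M

print(profit_before, profit_after)             # "focusing" on the big clients destroys profit
```

To break even, the OE cut would have to match the lost T – here 24% of the OE – which the non-linear behavior of capacity rarely allows.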

FOCUS should NOT be interpreted as focusing on just one specific area. It has to be linked to an estimation of what the available capacity can effectively focus on without causing considerable delays to the other areas and topics.  And remember the following Goldratt insight:

Segment your Market – Do Not Segment your Resources!

The idea is that many of our resources are able to serve a variety of different products and services targeted at different segments. This is a very beneficial capability.  Effective focusing should exploit the weakest link based on its limited capacity.  In most cases the exploitation encourages serving several market segments, but not too many.

The question of what to focus on goes into all the different parts of TOC, always looking, first of all, at the limitation, from which the answer is derived.  In DBR it is the constraint.  In Buffer Management the question gets a small twist: “what should we do NOW, without which the subordination might fail?” The Current-Reality-Tree defines the core problem of the organization, which is the first focus for designing the future. CCPM focuses on the Critical Chain rather than on the Critical Path, pointing also to multi-tasking as a lack of focus that causes huge damage.  The key concept in the Strategy and Tactic (S&T) tree is the Decisive-Competitive-Edge (DCE), which again points to where the focus should be.  The DCE is actually based on an identified limitation of the client, which the organization has the capability of removing or reducing.  Building a DCE is a huge challenge that adds considerable load to all managers and professionals, so it makes sense to avoid more than one DCE at a time.

Goldratt brilliantly used a slang word in Hebrew, actually taken from Russian, “choopchik”, describing an issue with very low impact on performance. The whole point is that choopchiks do have a certain positive impact, which makes them tempting to tackle, but tackling them causes the huge loss of not doing the vastly more important missions.  I look at choopchiks as a key TOC concept that is directly derived from the search for the right focus.

The notion of focus according to TOC is also relevant for recognizing the impact of uncertainty on management. Choopchiks impact the performance of the organization less than the normal existing variability; call it the level of the “noise”.  With such a small impact you don’t know whether there has been any actual benefit in the real case.  Worthy missions have an impact that is considerably bigger than the noise.

What to focus on is key for achieving better and better performance. The elements involved are the diagnosis of the current state, the few variables that dictate the global performance, and what could go wrong.   Mistakes in what to focus on are common, and they are main causes of the death of organizations and of so many being in survival mode.

What is the right time to make a decision?


Every decision is a choice between alternatives. Another element is the right time for the decision to be made. Very few decisions force the manager to decide immediately; such a situation is in itself undesired, as it means a threat has emerged as a complete surprise.  Most decisions leave enough time for the decision maker.

Facing substantial uncertainty suggests that every decision should be delayed until the last moment that still allows executing the decision in full.  The underlying assumption is that time adds information that reduces the uncertainty.

There are serious negative branches to the above logical claim. All of them look at what might go wrong with the suggestion.  Here are several of them:

  • We don’t truly know the exact timing of “the last moment”, so we may miss it.
  • We might forget to make the decision at the right moment.
  • Our attention might be occupied by more important issues at the critical time.
  • Making the decision at the very last moment makes it urgent and generates stress. The stress might push us to make the wrong decision!

Inspired by the idea of time buffers, we should treat every non-trivial decision as a mission that should be completed at a given time, and assign a time buffer to that mission.  According to the DBR interpretation of time buffers, the mission should not start earlier than one time-buffer before its due date.  The time buffer itself should provide enough time to deal with all the other requirements for attention, without creating any stress except the need to make the RIGHT decision.

Managing the execution of missions by individuals or teams through assigning time-buffers, and using buffer management as a control mechanism, is a much more effective process than check-lists. It reduces multi-tasking through buffer-management priorities and prevents handling missions, especially decisions, too early. Only non-trivial tasks should be included in the missions.  It is a step forward in understanding the behavior of the capacity (attention) of individual managers.  It would also clarify the issue of distinguishing between missions and projects.

An example

Suppose a decision to stop working with a certain supplier and switch to another one is being considered. The decision process requires updated information on the trouble with the current supplier and, mainly, finding alternative suppliers: inquiring how they are evaluated by their clients, whether they have the specific capabilities and, of course, what their pricing is.

When is the deadline to make the above decision?

Suppose the contract with the current supplier ends on December 31st, 2016.  If the contract is not going to be extended, it is fair to announce it by December 1st, which also leaves enough time for finalizing the contract with the new supplier. The mission includes getting the relevant information, bringing it to the decision maker(s) and allowing 1-2 hours for the decision itself.  Assigning three weeks for the whole mission is reasonable.  This means no one should work on that mission prior to November 10th!
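The timing arithmetic of this example, sketched with Python’s datetime (a trivial illustration of deadline minus time-buffer):

```python
from datetime import date, timedelta

decision_deadline = date(2016, 12, 1)   # announce the switch by December 1st
time_buffer = timedelta(weeks=3)        # the whole mission gets a three-week buffer

earliest_start = decision_deadline - time_buffer
print(earliest_start)                   # 2016-11-10: no one should touch the mission before this date
```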

The impact of the criticality of the decision

Goldratt said: “Don’t ever let something important become urgent”.

The practical lesson is: important decisions should be given a reasonable time buffer.  Very important decisions, those that we call ‘critical’, should be given a longer time buffer, ensuring the decision is not going to be made under stress. Of course, a critical decision might still be made under stress because of possible negative ramifications for which no viable solution has been developed.  This post focuses only on the time element.

The suggested process expands what TOC has developed for the production-floor to the work of managers taking responsibility for important missions.

Comment: a mission can be viewed as a very small and simple project, but it does not make sense to work on the mission continuously, unlike the expected behavior in projects, especially along the critical chain, where we strive for continuous progress.

Batching of Decisions

Batching of decisions, usually in periodical planning sessions, is widely done.  The tendency to plan according to time periods can be explained by the need to gather the relevant management team to come up with a periodical financial budget based on forecasts.  The targets for the various functions are derived from that periodic plan.

I’ve expressed in previous posts my negative view on one-number forecasts and how they reduce the overall performance of the organization.  My focus here is to highlight that the planning sessions provide an “opportunity” to include other decisions that are not directly related to the purpose of the periodical planning.

Any plan is a combination of decisions that are interconnected in order to achieve a certain objective. The plan should include only those decisions from which any deviation would impact the objective.  This message is explained in a previous post called “What is a good plan – the relationships between planning and execution”, including the need to plan buffers within the planning.

Does it make sense to include the decision to switch suppliers within the annual planning session aimed at determining the financial boundaries for next year? Is the identity of the specific supplier critical to the quality of that high-level planning?  Suppose there is a small impact of the switch on the total cost of goods – does it justify forcing a decision too early?

The key point is that including decisions with very limited impact on the objective within the planning disrupts the quality of the plan, which needs to be focused only on the critical factors for achieving the objective. It also forces a timing that does not support the quality of the particular decision.

Planning, execution and the right timing of decisions are all part of handling common-and-expected uncertainty. We need to vastly improve the processes that dictate what goes into the planning, what is left for execution, and how the whole variety of non-trivial decisions is handled, including making sure they are made at the right time.

What does Simplicity truly mean?


Goldratt assumed that every organization has to be inherently simple.  This recognition is one of the four pillars of TOC, and to my mind the most important one.  It is in direct clash with the newer Complexity Theory when applied to human organizations.

Comment: I refer in this post only to organizations that have a goal and serve clients.

Is Inherent Simplicity just a philosophical concept without practical impact?

One of the most practical pieces of advice I got from Eli Goldratt was:

If the situation you are looking at seems too complex for you then:

You are looking at too small a subsystem – look at the larger system to see the simplicity

This is very counter-intuitive advice. When you see complexity, should you look for even more complexity? But actually the situation is relieved when you analyze the larger system, because what is important – and mainly what is not important – becomes clearer.  A production shop floor might look very complex to schedule.  Only when you include the actual demand, and distinguish between firm and forecasted demand, do you realize what the real constraint is, and only then do the exploitation and subordination become apparent.

The term ‘simplicity’ needs to be clarified. There are two different definitions of ‘complexity’, which also clarify what ‘simplicity’, its opposite, means.

  1. Many variables, with partial dependencies between them, impact the outcomes.
  2. It is difficult to predict the outcome of an action or a change in the value of a certain variable.

The second definition describes why complexity bothers us.  The first one describes what seems to be the cause for the difficulty.

The term ‘partial dependency’ is what makes the interaction between variables complicated. When the variables are fully dependent on each other, a formula can be developed to predict the combined outcome.   When the variables are absolutely independent, then again it is easy to calculate the total impact.  It is when partial dependencies govern the output that prediction becomes difficult.

Examples of independent, fully dependent and partially dependent variables:

  1. Several units of the same resource. The units are independent of each other.
  2. A production line where every problem stops the whole line. The line certainly works according to the pace of the slowest station, and every station is fully dependent on all the other stations in the line.
  3. A regular production floor with different work centers and enough space between them. Every work center is partially dependent on the previous ones to provide enough materials for processing.

When, on top of the complexity, every variable is exposed to significant variability, the overall complexity is overwhelming.
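To make the three cases above concrete, here is a small Monte-Carlo sketch of my own (an illustration, not taken from the original post): one independent unit, a line that moves at the pace of its slowest station, and a floor where each work center can process only what its predecessor has already delivered. All stations share the same average rate and variability; the specific numbers are arbitrary.

```python
import random

random.seed(7)
STATIONS, DAYS = 4, 100

def rate():
    # units per day a station can process: average 10, with significant variability
    return random.randint(6, 14)

def independent_unit():
    # 1. an independent unit: its average daily output is simply its own average rate
    return sum(rate() for _ in range(DAYS)) / DAYS

def fully_dependent_line():
    # 2. a line where every problem stops the whole line: each day's output is the slowest station
    return sum(min(rate() for _ in range(STATIONS)) for _ in range(DAYS)) / DAYS

def partially_dependent_floor():
    # 3. a regular floor: each work center can process only what its predecessor delivered
    wip = [0] * STATIONS            # material queuing in front of each station (station 0 never starves)
    finished = 0
    for _ in range(DAYS):
        for s in range(STATIONS):
            done = rate() if s == 0 else min(rate(), wip[s])
            if s > 0:
                wip[s] -= done
            if s < STATIONS - 1:
                wip[s + 1] += done
            else:
                finished += done
    return finished / DAYS

print(independent_unit(), fully_dependent_line(), partially_dependent_floor())
```

Running it, the independent unit averages close to its mean rate, the fully dependent line settles predictably around the average of the daily minimum, while the partially dependent floor ends up somewhere in between – and exactly where depends on how the queues happen to build up, which is why partial dependency is the hard case to predict.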

Can the performance of the organization be truly unpredictable?

You may call this state “chaos”, or just “on the verge of chaos”; the point is that clients cannot tolerate such performance.  When I’m promised delivery on October 1st at 2pm and the delivery shows up on October 22nd at 6:30am – this is intolerable.

Is it possible to be on the verge of chaos internally, but still provide acceptable delivery to clients?

In order to achieve acceptable reliability, organizations have to become simple enough.  The initial impression of complexity is wrong because the partial dependencies are pushed down, so their impact on the deliveries is limited.  The reduction of the partial dependencies is achieved by providing excess capacity and long lead-times.  TOC simplifies this more effectively by using buffers and buffer management.  What we get is good-enough predictions of meeting due-dates, and even the ability to promise rapid response to the part of the market that is ready to pay more for quick supply.

Still, the use of the buffers means: the predictability is limited!

Even Inherent Simplicity cannot truly mean precise predictability! The whole idea is to determine the range of our ability to predict.  When the CCPM plan of a project predicts completion in June 2017, it actually means no later than June 2017.  It could be completed earlier, and we usually like it to be earlier, but the prediction of June 2017 is good enough.

Thus, the simplicity means predictions within an acceptable range!

Does simplicity mean the solution can be described in one paragraph? I doubt whether one paragraph on CCPM is enough to give the user the ability to judge the possible ramifications.  Certainly we cannot describe the BOK of TOC in one paragraph.

Simplicity in radiating an idea means the idea is well understood. This is the meaning of “predictability” when we deal with marketing messages:  we are able to predict what the reader, listener or spectator understands!  Even here there is a certain range of interpretation that we have to live with.

What about the details of the solution itself? Is the solution necessarily easy to implement?

Easy and simple are not synonymous. The concepts could be simple, but the implementation might face obstacles – usually predictable ones – and overcoming them might be difficult.  So, both simplicity and ease of implementation are highly desirable, but not always perfectly reachable.

We in TOC appreciate simplicity, but achieving it is a challenge. The requirements for truly good solutions are: Simplicity, Viability (possible to do in reality) and Effectiveness (achieving the objective).

An example illustrating the challenge:

Simplified-DBR is a simple, effective solution for reliable delivery in manufacturing. However, for buffer management to work properly we assume the net touch time is less than 10% of the production lead-time.  When this assumption does not hold, we face a complication!  A solution for manufacturing environments where the net touch time is longer than 10% has been developed. It complicates the information required for buffer management, but it is effective.

I remember my professor of the History of Physics, Prof. Sambursky, who explained to us:

“At all times, since ancient Greece, scientists have looked for the ONE formula that would explain everything. They always came up with such a formula, and then a newly discovered effect did not behave according to it.  The formula was corrected to fit the behavior of that effect.  Then more new effects contradicted the formula, and the formula became very cumbersome and could not predict the behaviors of new effects.  Then a new theory came along with a new simple formula, and the cycle went on again.”

TOC is basically simple. It strives to identify the Inherent Simplicity, come up with simple solutions, simple messages and easy implementations.  But, we have, from time to time, to add something to deal with environments where a certain basic assumption is invalid.   This is, to my mind, the most practically effective way to manage organizations.

Until a new, simpler, yet effective, approach emerges.