Is Throughput-per-constraint-unit truly useful?


Cost-per-unit is the most damaging flawed paradigm TOC has challenged. In my experience many managers, and certainly most certified accountants, are aware of some of its potential distortions.  One needs to examine several situations to grasp the full impact of the distortion.

Cost-per-unit supports a simple process for decision-making, and this process is “the book” that managers believe they should follow.  It is difficult to blame a manager for making decisions based on cost-per-unit.  There are many more ramifications of the blind acceptance of cost-per-unit, like the concept of “efficiency” on which most performance measurements are based.   TOC logically proves how those “efficient” performance measurements force loyal employees to take actions that damage the organization.

Does Throughput Accounting offer a valid “book” replacing the flawed concept of cost-per-unit?

Hint: Yes, but some critical developments are required.

The P&Q is a famous example, originally used by Dr. Goldratt, which proves that cost-per-unit gives the wrong answer to the question: how much money can the company in the example make?

Every colored ellipse in the picture represents a resource that is available 8 hours a day, five days a week. The chart represents the routing for two products: P and Q. The numbers in the colored ellipses represent the time-per-part in minutes.  The weekly fixed expenses are $6,000.

The first mistake is ignoring the possible lack of capacity. The Blue resource is actually a bottleneck – preventing the full production of the required 100 units of P and 50 units of Q every week.  The obviously required decision is:

What current market should be given up?

The regular cost accounting principles lead us to give up part of the P sales, because a unit of P yields a lower price than Q, requires more materials and also a longer total work time.

This is the second common mistake: when you check what happens when some of the Q sales are given up instead of P sales, you realize that giving up Q is the better decision!

The reason is that the Blue resource is the only resource that lacks capacity, and a unit of Q requires much more time from the Blue than a unit of P does, while the rest of the resources have idle capacity.

A simple and effective way to demonstrate the reasons behind what seems like a big surprise is to calculate for every product the ratio T/CU – throughput (selling price minus the material cost) divided by the time required from the capacity constraint.  In this case a unit of P yields T of ($90-$45) divided by 15 minutes = $3 per minute of Blue capacity. A unit of Q yields only ($100-$40)/30 = $2 per minute of the Blue.

This is quite a proof that cost-per-unit distorts decisions. It is NOT a proof that T/CU is always right.  Following the regular cost-accounting principles, once it is recognized that the Blue is a bottleneck, yields a loss of $300 per week.  When the T/CU rule is followed the result is a profit of $300 per week.
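Both results can be checked with a short sketch that uses only the figures already given above: 8 hours a day, five days a week of Blue capacity, the prices, material costs and Blue minutes per unit, and the $6,000 weekly fixed expenses. The only choice being varied is the order in which the bottleneck capacity is filled:

```python
# Weekly P&Q data, taken from the example in the text.
BLUE_CAPACITY = 8 * 60 * 5          # 2,400 Blue minutes per week
OE = 6_000                          # weekly fixed expenses

# product: (price, material cost, Blue minutes per unit, weekly demand)
products = {
    "P": (90, 45, 15, 100),
    "Q": (100, 40, 30, 50),
}

def profit(priority):
    """Fill the Blue capacity in the given priority order, return T - OE."""
    minutes_left, total_T = BLUE_CAPACITY, 0
    for name in priority:
        price, material, blue_min, demand = products[name]
        units = min(demand, minutes_left // blue_min)
        total_T += units * (price - material)
        minutes_left -= units * blue_min
    return total_T - OE

print(profit(["Q", "P"]))   # cost-per-unit priority (give up P): -300
print(profit(["P", "Q"]))   # T/CU priority (give up Q): 300
```

The only difference between the $300 loss and the $300 profit is which product gets the Blue minutes first.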

I claim that the T/CU is a flawed concept!

I still claim that the concept of throughput, together with operating-expenses and investment, is a major breakthrough for business decision-making!

The above statement about T/CU has already been presented by Dr. Alan Barnard at a TOCICO conference and in a subsequent paper.  Dr. Barnard showed that when there is more than one constraint, T/CU yields wrong answers. I wish to explain the full ramifications of that for decisions that are taken prior to the emergence of new capacity constraints.

The logic behind T/CU is based on two critical assumptions:

  1. There is ONE active capacity constraint and only one.
    1. Comment: an active capacity constraint means that if we got a little more capacity the bottom line would go up, and when we waste a little of that capacity the bottom line definitely goes down.
  2. The decision at hand is relatively small, so it would NOT cause new constraints to appear.

Some observations of reality:

Most organizations are NOT constrained by their internal capacity! We should note two different situations:

  • The market demand generates a load that is clearly lower than the available capacity of the weakest link.
  • While one, or even several, resources are loaded to 100% of their available capacity, the organization has means to quickly get enough additional capacity for a certain price (delta(OE)), like overtime, extra shifts, temporary workers or outsourcing. In this situation the lack of capacity does not limit the generation of T and profit, and thus the capacity is not the constraint.

The second critical assumption, that the decision considered is small, means that T/CU should NOT be used for the vast majority of new marketing and sales initiatives! This is because most marketing and sales moves could easily cause extra load that penetrates into the protective capacity of one or more resources, creating interactive constraints that disrupt reliable delivery.  Every company using promotions is familiar with the effects of running out of capacity and with what happens to the delivery of the products that are not part of the promotion.

That said, it is possible that there are enough means to quickly elevate the capacity of the overloaded resources, but certainly both operations and financial managers should be well prepared for that situation.

Let’s view a somewhat different P&Q problem:


Suppose that the management considers adding an additional product W, without adding more capacity.  The new product W uses the Blue resource capacity, but relatively little.

The question is: what are the ramifications?

If before having Product W the company had the Blue resource as a bottleneck (loaded to 125%), now three resources are overloaded. The most loaded resource now is the Light-Blue (154%), then the Blue (146%) and also the Grey reaches 135%.

So, according to which resource should T/CU guide us?

Finding the “optimized solution” does not follow any T/CU guidance. The new product seems great from the Blue machine perspective.  Product W’s ratio of T to time-on-the-Blue is (77-30)/5 = $9.4 per minute, the best of all the products.  If we go all the way according to this T/CU, we should sell all the W demand, part of the demand for P (the remaining Blue capacity suffices for 46 units of P) and none of the Q. That mix would generate a profit of $770, which is more than the $300 profit without Product W.

Is it the best profit we can get?

However, when you compute the T/CU relative to the Light-Blue resource, Product W is the lowest, with only $2.76 per minute of the Light-Blue.

Linear programming techniques can be applied to the ultra-simple example above. As only sales of complete units are realistic, a profit of $1,719 can be reached by selling 97 units of P, 23 units of Q and 42 units of W.  This is considerably higher than the profit without Product W, but also much higher than what relying on the T/CU of the Blue resource yields!
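The full data behind the modified example is not reproduced here, so the sketch below uses hypothetical figures: the throughputs, per-unit minutes and demands are my own invention, chosen only so that two resources are overloaded, and they do NOT reproduce the $1,719 result in the text. An exhaustive search over all whole-unit mixes plays the role of the linear program, and a greedy mix built from the Blue resource’s T/CU is computed for comparison:

```python
from itertools import product as cartesian

# Hypothetical data: product -> (T per unit, Blue min, Light-Blue min, demand)
data = {
    "P": (45, 15, 10, 100),
    "Q": (60, 30, 5, 50),
    "W": (47, 5, 17, 60),
}
CAPACITY = {"Blue": 2400, "LightBlue": 1500}

def best_mix():
    """Brute-force the best feasible whole-unit product mix (total T)."""
    names = list(data)
    best, best_T = None, float("-inf")
    for units in cartesian(*(range(data[n][3] + 1) for n in names)):
        blue = sum(u * data[n][1] for u, n in zip(units, names))
        light = sum(u * data[n][2] for u, n in zip(units, names))
        if blue > CAPACITY["Blue"] or light > CAPACITY["LightBlue"]:
            continue
        t = sum(u * data[n][0] for u, n in zip(units, names))
        if t > best_T:
            best, best_T = dict(zip(names, units)), t
    return best, best_T

def greedy_by_blue_tcu():
    """Fill capacity by T per Blue minute (W: 9.4, P: 3.0, Q: 2.0),
    respecting BOTH capacities."""
    order = sorted(data, key=lambda n: data[n][0] / data[n][1], reverse=True)
    left, total = dict(CAPACITY), 0
    for n in order:
        t, blue, light, demand = data[n]
        units = min(demand, left["Blue"] // blue, left["LightBlue"] // light)
        total += units * t
        left["Blue"] -= units * blue
        left["LightBlue"] -= units * light
    return total
```

With these invented numbers the greedy rule, applied first to W (the best T/CU on the Blue), stalls once the Light-Blue runs out and earns T of $4,980, while the exhaustive search finds a mix worth over $7,000 – the same qualitative gap the text describes.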

As already mentioned, the above conclusions have already been dealt with by Dr. Barnard. The emphasis here is that decisions that could cause the emergence of new constraints must be analyzed for that possibility at the decision-making stage.

Until now we have respected the capacity limitations as is. In reality we usually have some more flexibility.  When there are easy and fast means to increase capacity, for instance paying the operators for overtime, then a whole new avenue is opened for assessing the worthiness of adding new products and new market segments.  Even when the extra capacity is expensive – in many cases the impact on the bottom-line is highly positive.
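The basic check for such flexibility is simply whether delta(T) exceeds delta(OE). In the tiny sketch below, the $2 of T per Blue minute comes from Product Q in the example above, while the overtime rate is a made-up figure:

```python
# T per Blue minute for Q comes from the P&Q example ($2/min);
# the overtime rate is a hypothetical figure for illustration.
T_PER_BLUE_MINUTE_Q = 2          # $ of T generated per extra Blue minute
OVERTIME_COST_PER_HOUR = 90      # hypothetical delta(OE) for one extra hour

delta_T = 60 * T_PER_BLUE_MINUTE_Q    # one hour of overtime adds $120 of T
delta_OE = OVERTIME_COST_PER_HOUR     # and costs $90
worth_it = delta_T > delta_OE         # the hour adds $30 to the bottom line
```

Even though the overtime is “expensive” relative to regular wages, the decision is clearly positive for the bottom line, which is the point of the paragraph above.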

The non-linear behavior of capacity (there is a previous post dealing with it) has to be viewed as a huge opportunity to drive the profits up by product-mix decisions and by the use of additional capacity (defined as the “capacity buffer”). Looking towards the longer time-frame could lead to superior strategy planning, understanding the worth of every market segment and using the capacity buffer to face the fluctuations of the demand. This is the essence of Throughput Economics, an expansion of Throughput Accounting using the practical combination of intuition and hard-analysis as the full replacement of the flawed parts of cost accounting.

T/CU is useless when the use of the capacity buffer, the means for quick temporary capacity, is possible.  When relatively large decisions are considered the use of T/CU leads to wrong decisions similar to the use of cost-per-unit.

Goldratt told me: “I have a problem with Throughput Accounting. People expect me to give them a number and I cannot give them a number”.  He meant the T/CU number is too often irrelevant and distorting.  I do not have a number, but I believe I have an answer.  Read my paper on a special page in this blog entitled “TOC Economics: Top Management Decision Support”.  It appears on the menu at the left corner.

A Dialogue between TOC and SWOT


It is not easy for TOC people to evaluate ideas created outside the TOC community, because of three interconnected reasons.

The first is the damaging tendency to assume that TOC challenges everything that is not part of the TOC BOK.  I hope we get over this reason.

Another reason is the specific terminology used in TOC, which can be different from the use of those terms elsewhere. Just think of the term ‘constraint’ and how the use of it in TOC is different than the rest of the world.

The third reason is that the TOC school of thought implies a certain sequence of analysis. It always starts from the goal or an important objective and asks the question:

What prevents you from achieving more?

It is a must to create bridges and dialogues that bring other sources of relevant managerial knowledge into TOC, expanding both its scope and its power.

Let’s check the relationships between TOC and SWOT analysis. SWOT, the acronym of Strengths, Weaknesses, Opportunities and Threats, is basically a marketing picture of the organization, a brand or just a product.  The objective of SWOT is to lead the mind to improve the impact of the strengths, noting the potential opportunities and grabbing the best of them, reducing the damage from weaknesses and becoming more watchful of threats.  The idea is that every part of SWOT impacts marketing, so the appropriate planning should take all of them into account.

SWOT starts with the Strengths, assuming they are the key to identifying the target markets and to emphasizing these aspects in the marketing campaign. TOC, on the other hand, starts its analysis with the weaknesses of the organization as a whole. These weaknesses are the key reason for the current state of the organization.  TOC deals with several types of weaknesses – constraints, core problems, flawed policies (policy constraints) – all leading to the identification of flawed assumptions that can be challenged.

The basic assumption behind this part of TOC is that the core weakness, in capacity, capability and possibly in the market perception, is the key leverage point, the most immediate opportunity to do much better in relatively short time.

It took time for TOC to recognize the role of the strengths in outlining the way to vastly improve the future of the organization. I see the insight of defining the Decisive-Competitive-Edge (DCE) as a key development of TOC.  Goldratt defined the DCE as “answering a need of potential clients in a way that no competitor is able to.”  A TOC way to spot a need of the potential market, its pains that are now taken as “natural” or “part of reality”, is to look for possible UDEs of the market by developing a branch of a current-reality-tree starting with the products, services and delivery.  But in order to solve a UDE, certain key capabilities are required for developing an answer to that need.

So, the unique capabilities of the organization, like fast, yet reliable, flow of products, are the key strengths of the organization.  These capabilities are the source of new opportunities, which means the ability to combine an unanswered need in the market with the ability to answer that need.  The logical cause-and-effect branch can start with the unique capabilities and then deduce the undesired-effects in the market that could be solved by those capabilities.

For example, fast and reliable flow could solve urgent situations of potential clients badly needing the products, when the current standard of delivery is too slow to resolve such an urgency. The next step in the analysis is estimating the value for potential clients of receiving quick and reliable delivery, and whether this solution could generate new business for such clients once they know there is a satisfactory, even if somewhat more expensive, answer to such emergencies.  Such an analysis should come to the conclusion that the organization should not “waste” the unique capability by selling the fast response to everybody, even when no urgency exists, without charging more for it.

The usual SWOT analysis looks at the strengths of a product or service from the perspective of the market. These strengths are all due to certain capabilities of the company. Knowing the unique capabilities well, coupled with sensitivity to the pains and needs of the market, is critical for identifying new opportunities.   Strengths and opportunities have to be bundled together to get the full effect.

The last part of SWOT is threats. From the marketing perspective, threats can be competitors who might find better ways to compete. Another type of threat consists of economic and cultural events that might negatively impact future sales.  These are mostly external events that the company might not be prepared to handle.

There is a definite need to look not just for external threats, but also for internally developing threats.  One instance is the retirement of a key professional whose unique capabilities are behind some of the current strengths.  Another could be cash turning into a constraint when overly large long-term investments drain too much of the current financial assets.

TOC has, generally speaking, neglected the issue of threats, both external and internal.  The notion of a UDE, defined as a well-known undesired effect, is the closest signal that TOC might note, leading the user to draw the fuller cause-and-effect picture.  The missing part in the current TOC BOK is constantly monitoring for new emerging effects that have the potential of becoming most undesired, sometimes even disastrous.   I have already written a post about identifying the emergence of threats.

SWOT in general encourages a detailed definition of market segments, those that enjoy the strengths and care less about the weaknesses. TOC has not fully developed, to my mind, a technique for clever market segmentation, where features, delivery service and the variety of the product mix are all used to define the clients that should get the best value, and by that define the targets.  It is not too difficult to develop such TOC-influenced tools.

The personal challenge of being a CEO


Clarification:  This post was written after several discussions on the topic with prominent people in the TOC field.  The main discussion was led by Ray Immelman.

Understanding how to manage organizations has to include the personal aspects of the one who is in charge of the organization – the managing director or the CEO. While the undesired effects of the organization affect the CEO, we also have to consider the CEO as an individual with interests, wishes and ego.

Given the wide spread of the size of organizations, and spectrum of personalities who are CEOs, can we have any idea of what it takes to be one?

Taking the responsibility for the future of an organization, for its shareholders and employees, fits only people who have enough self-confidence to believe they can do it. Actually every single CEO has to demonstrate self-confidence at all times, which requires a lot of self-control.

I believe many CEOs have doubts and fears they hide well behind the façade of someone who clearly knows what needs to be done.

The challenge of every CEO is to get hold of the basic complexity of the organization, its stakeholders, clients, and suppliers. On top of the complexity there is considerable uncertainty.  The combination of complexity and uncertainty impacts the efforts of the CEO to answer two critical questions: “what to change?” and “what to change to?”  On top of dealing with complexity and uncertainty every CEO also has to constantly resolve conflicts within the organization and between the CEO and the shareholders.  These conflicts produce obstacles to implement the changes proposed by the answer to the second question and by that raise the third basic question: “how to cause the change?”

The first key personal dilemma of every CEO derives from the difficulty of answering the three key questions and from how the actual results are judged by the board, shareholders and possibly stock-market analysts.  The seemingly unavoidable outcome is that the CEO fears that taking any risk, even when the possible damage is low, might be harshly criticized.  Considering the complexity and the variability, the pressure is so high that it pushes the CEO to avoid even the limited risks required for potential growth.

This means that within the generic conflict of take-the-risk versus do-not-take-the-risk the interest of the organization might be to take-the-risk, yet the CEO decides against taking such a risk because of the potential personal damage.

When analyzing the CEO conflict we also have to consider the risk of not taking risks. First of all, the shareholders expect better results, and the CEO, trying to resolve the conflict, has to promise certain improved results – and he or she will be judged against these expectations.  Actually, achieving phenomenal results might also be seen as risky, as it creates overly high expectations in the stock market.  On top of that there are enough other threats to the organization, and failing to handle them would be detrimental to the CEO as well. Having to behave with the utmost care on almost every move adds to the potential opportunity for TOC, with its superior handling of uncertainty and risk.

The key TOC insight is that the combination of complexity and variability leads to inherent simplicity.  The essence of the simplicity is that actions whose impact is lower than the level of the noise (the variability) cannot be proven to have had positive outcomes. This leads to focusing only on the more meaningful actions. The simplicity also supports judging more daring actions, looking for those whose potential downsides are limited and whose upsides are high. When you add the other TOC insights that reduce variability, improve operational flow, check the true financial impact through Throughput Economics and apply powerful cause-and-effect analysis, the combination yields an overall safer environment.

Taking risks is not the only dilemma with a special impact on the personal side of the CEO. While the fear of being harshly criticized is pretty strong, the CEO wishes to get the glory of any big success.  The dilemma arises when the success requires the active participation, and even considerable inspiration, of other people.  It is even more notable when the other people are external to the organization, like consultants.  Challenging existing paradigms, which is the core of the TOC power, might put the spotlight on the TOC consultants and rob the glory from the CEO, who has chosen to take the risk but might not fully enjoy the successful outcomes.

How do people react to someone who suddenly changes his own previous paradigms? Isn’t it easy, and even natural, to blame such a manager for not being able to change much earlier, or for being too easily influenced?

Actually this dilemma seems tough to resolve in a way that achieves a win-win between the organization, the CEO and the other executives. Emphasizing how wrong the old paradigms are makes the manager’s dilemma stronger.  People have a reason to refuse to admit mistakes: it harms their self-confidence and it radiates incompetence, which is probably a flawed impression but still a common one.  Of course, the other side of the conflict is the potential damage of not admitting the previous mistakes.

In the old days of TOC we used the OPT Game and the Goldratt Simulators, which I developed, to push managers to admit they don’t know.  This was quite effective in creating the “wow” effect.  However, the humiliation the managers went through proved beneficial only to those with a very strong personality.  Too many managers paid some lip service to the proof of the flawed concept and continued in the old ways.

We expect a CEO to have the strong personality that allows recognizing a mistake and taking whatever steps are required to get back on the right track. We expect a CEO to act according to the full interests of the organization without considering his or her personal interests.  Very few truly live up to this challenge.  Many believe that the way to handle the “agent” problem is to pay high bonuses for success.  Actually this only legitimizes the personal-organizational conflict, and could easily influence CEOs to take bigger risks that increase the fragility of the organization.

It seems we need to help CEOs resolve both dilemmas. We have to promote the contribution of the CEO to the success, and we have to reduce the fears of the potential unknown outcomes, organizational and personal, of the change that we believe would bring huge value, and that even in the worst case will still leave the company better off than the current state.

My own realization is to reduce the pressure on what is “wrong” and make much more of what is “better”, presenting the change as an improvement rather than a revolution that discards everything people have learned in the past.

What does FOCUS mean?


The vast majority of managers believe that focusing is absolutely necessary for managing organizations.

If this is the case, then FOCUS as the key short description of TOC has very little to offer to managers.

Let’s consider the possible consequences of the Pareto principle. A typical company gets 80% of its revenues from 20% of its clients.  How about focusing on the 20% big clients and dumping the other 80%?  Does it make sense?

The point is that the real question is not whether to focus or not, but:

What to focus on?

And, even more importantly:

What NOT to focus on?

The reason for emphasizing what NOT to focus on is that the need to focus is caused by limited capacity, which means it is impossible to focus on everything and draw the full value from it.  The limitation could be a capacity constraint that forces us to concentrate on what exploits that resource.  Empowering subordinates is a means for an individual manager to focus on the key issues without becoming the constraint of the organization.  In many cases the critical limitation is the inability of the management team to multi-task in a way that would not delay truly important issues that require action.  This is the sense in which management attention is the ultimate constraint on designing the future of the organization.

Giving up part of the market demand could make sense only when more overall sales, more total Throughput, could thereby be materialized.  Only in very rare cases is it possible to reduce the OE, after giving up certain demand, to the level where T-OE would be improved.  Dropping 80% of the clients, the smaller clients that yield only 20% of the turnover, would almost never reduce the OE by what is required to compensate for the lost T; the required reduction is significantly more than 20% of the OE.  This is due to the non-linearity of the OE: reducing the capacity requirements does NOT yield an OE reduction at the same rate.  Just think about the space the organization occupies, whether the reduced number of clients would allow using less space, and, even when that is possible, whether any OE can actually be saved because of it.
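A back-of-the-envelope sketch, with entirely made-up weekly figures, shows how badly this trade usually works out:

```python
# Illustrative (made-up) weekly figures for the 80/20 claim above.
T_total = 100_000       # throughput generated by all clients
T_small = 20_000        # the 80% smaller clients yield 20% of T
OE = 70_000             # weekly operating expenses
profit_before = T_total - OE                     # 30,000

# Because of the non-linearity of OE, releasing the capacity used by
# the small clients rarely releases expenses at the same rate; assume
# only 5% of OE can actually be saved.
OE_saving = 0.05 * OE                            # 3,500
profit_after = (T_total - T_small) - (OE - OE_saving)   # 13,500

# To merely break even, the OE saving would have to match the lost T:
required_saving_share = T_small / OE             # about 28.6% of ALL the OE
```

With these numbers the “focused” company cuts its profit by more than half, and breaking even would require shedding almost a third of all operating expenses, which is rarely realistic.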

FOCUS should NOT be interpreted as focusing on just one specific area. It has to be linked to an estimate of how much the available capacity can effectively focus on without causing considerable delays to the other areas and topics.  And remember the following Goldratt insight:

Segment your Market – Do Not Segment your Resources!

The idea is that many of our resources are able to serve a variety of different products and services targeted at different segments. This is a very beneficial capability.  Effective focusing should exploit the weakest link based on its limiting capacity.  In most cases the exploitation encourages serving several market segments, but not too many.

The question of what to focus on goes into all the different parts of TOC, always looking first of all for the limitation, from which the answer is derived.  In DBR it is the constraint.  In Buffer Management the question gets a small twist: “what should we do NOW so that the subordination does not fail?” The Current-Reality-Tree defines the core problem of the organization, which is the first focus for designing the future. CCPM focuses on the Critical Chain rather than on the Critical Path, pointing also to multi-tasking as a lack of focus that causes huge damage.  The key concept in the Strategy and Tactic (S&T) trees is the Decisive-Competitive-Edge (DCE), which again points to where the focus should be.  The DCE is actually based on an identified limitation of the client, which the organization has the capability of removing or reducing.  Building a DCE is a huge challenge that adds considerable load to all managers and professionals, so it makes sense to avoid pursuing more than one DCE at a time.

Goldratt brilliantly used a Hebrew slang word, actually taken from Russian, “choopchik”, describing an issue with very low impact on performance. The whole point is that choopchiks do have a certain positive impact, which makes them tempting to tackle, but tackling them causes the huge loss of not doing the vastly more important missions.  I look at choopchiks as a key concept of TOC, directly derived from the search for the right focus.

The TOC notion of focus is also relevant to recognizing the impact of uncertainty on management. Choopchiks impact the performance of the organization less than the normal existing variability; call it the level of the “noise”.  With such a small impact you cannot know whether there has been any actual benefit in the real case.  Worthy missions have an impact that is considerably bigger than the noise.
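The point can be sketched as a rough signal-to-noise check. All the figures below are made up, and the ~2.0 threshold is only the usual rule of thumb for when an effect becomes statistically visible:

```python
from math import sqrt

def signal_to_noise(impact, weekly_noise_std, weeks):
    """Rough t-like ratio: how many standard errors the average weekly
    impact stands above the noise after `weeks` of observed results."""
    return impact / (weekly_noise_std / sqrt(weeks))

# A "choopchik": a $200/week improvement against $1,000/week of noise.
# Even after 12 weeks the signal stays well below the ~2.0 bar, so
# nobody can prove it helped at all.
small = signal_to_noise(200, 1_000, 12)    # about 0.69
# A worthy mission: a $2,000/week improvement is clearly visible.
big = signal_to_noise(2_000, 1_000, 12)    # about 6.93
```

The exact threshold does not matter much; the point is that an impact well inside the noise band is indistinguishable from doing nothing.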

What to focus on is a key for achieving better and better performance. The elements involved are the diagnosis of the current state, the few variables that dictate the global performance, and what could go wrong.   Mistakes in what to focus on are common, and they are main causes of the death of organizations and of so many others being in mere survival mode.

What is the right time to make a decision?


Every decision is a choice between alternatives. Another element is the right time for the decision to be made. Very few decisions force the manager to decide immediately; such a case is in itself an undesired situation, where a threat has emerged as a complete surprise.  Most decisions leave enough time to the decision maker.

Facing substantial uncertainty suggests that every decision should be delayed until the last moment that still allows executing the decision in full.  The underlying assumption is that time adds information that reduces the uncertainty.

There are serious negative branches to the above logical claim. All of them look at what might go wrong with the suggestion.  Here are several of them:

  • We don’t truly know the exact timing of “the last moment”, so we may miss it.
  • We might forget the decision at the right moment.
  • Our attention might be occupied by more important issues at the critical time.
  • Making the decision at the very last moment makes it urgent and generates stress. The stress might affect us to make the wrong decision!

Inspired by the idea of time buffers, we should treat every non-trivial decision as a mission that should be completed at a given time, and assign a time buffer for that mission.  According to the DBR interpretation of time buffers, the mission should not start prior to the start of the buffer.  The time buffer itself should provide enough time to deal with all the other requirements for attention, without creating stress, except the need to make the RIGHT decision.

Managing the execution of missions by individuals or teams through assigning time-buffers, and using buffer management as a control mechanism, is a much more effective process than check-lists. It reduces multi-tasking through buffer management priorities and limits handling missions, especially decisions, too early. Only non-trivial tasks should be included in the missions.  It is a step forward in understanding the behavior of the capacity (attention) of individual managers.  It would also clarify the issue of distinguishing between missions and projects.
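A minimal sketch of such a mechanism follows; the missions and dates are invented for illustration. Each mission gets a time buffer, and at any moment its buffer-management priority is the fraction of the buffer already consumed:

```python
from datetime import date

def buffer_consumed(start: date, due: date, today: date) -> float:
    """Fraction of the mission's time buffer already consumed."""
    total_days = (due - start).days
    return (today - start).days / total_days

# Hypothetical missions: (name, buffer start, deadline)
missions = [
    ("switch supplier", date(2016, 11, 10), date(2016, 12, 1)),
    ("yearly budget",   date(2016, 11, 1),  date(2016, 12, 15)),
    ("hire engineer",   date(2016, 10, 20), date(2016, 11, 25)),
]

today = date(2016, 11, 20)
# Buffer-management priority: the mission that has consumed the largest
# fraction of its buffer comes first; missions whose buffer has not
# started yet would simply not appear on the list.
by_priority = sorted(missions,
                     key=lambda m: -buffer_consumed(m[1], m[2], today))
```

On November 20 the “hire engineer” mission has eaten about 86% of its buffer and jumps to the top, while the budget, despite its earlier start, can still wait.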

An example

Suppose a decision to stop working with a certain supplier and switch to another one is considered. The decision process requires updated information on the trouble with the current supplier and mainly finding alternative suppliers, inquiring how they are evaluated by their clients, whether they have the specific capabilities and, of course, their pricing.

When is the deadline to make the above decision?

Suppose the contract with the current supplier ends on December 31st 2016.  If the contract is not going to be extended, it is fair to announce it by December 1st, which also leaves enough time for finalizing the contract with the new supplier. The mission includes getting the relevant information, bringing it to the decision maker(s) and letting 1-2 hours for the decision itself.  Assigning three weeks for the whole mission is reasonable.  This means no one should work on that mission prior to November 10th!
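The arithmetic of the example can be checked directly:

```python
from datetime import date, timedelta

decision_deadline = date(2016, 12, 1)   # fair notice to the current supplier
mission_buffer = timedelta(weeks=3)     # gather information, evaluate, decide
buffer_start = decision_deadline - mission_buffer
print(buffer_start)                     # 2016-11-10
```

Three weeks before December 1st is indeed November 10th, the earliest date anyone should start working on the mission.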

The impact of the criticality of the decision

Goldratt said: “Don’t ever let something important become urgent”.

The practical lesson is: important decisions should be given a reasonable time buffer.  Very important decisions, those we call ‘critical’, should be given a longer time buffer, ensuring the decision is not going to be made under stress. Of course, a critical decision might still be made under stress because of possible negative ramifications for which no viable solution has been successfully developed.  This post focuses only on the time element.

The suggested process expands what TOC has developed for the production-floor to the work of managers taking responsibility for important missions.

Comment: a mission can be viewed as a very small and simple project, but it does not make sense to work on the mission continuously, unlike the expected behavior in projects, especially along the critical chain, where we strive for continuous progress.

Batching of Decisions

Batching of decisions, usually by periodical planning sessions, is widely done.  The tendency to plan according to time periods can be explained by the need to gather together the relevant management team to come up with a periodical financial budgeting process based on forecasts.  The targets for the various functions are derived from that periodic plan.

I’ve expressed in previous posts my negative view on one-number forecasts and how they reduce the overall performance of the organization.  My focus here is to highlight that the planning sessions provide an “opportunity” to include other decisions that are not directly related to the purpose of the periodical planning.

Any plan is a combination of interconnected decisions aimed at achieving a certain objective. The plan should include only those decisions from which any deviation would impact the objective.  This message is explained in a previous post called “What is a good plan – the relationships between planning and execution”, including the need to plan buffers within the plan.

Does it make sense to include the decision to switch suppliers within the annual planning session aimed at determining the financial boundaries for next year? Is the identity of the specific supplier critical to the quality of that high-level planning?  Suppose there is a small impact of the switch on the total cost of goods – does it justify forcing a decision too early?

The key point is that including decisions with very limited impact on the objective within the planning disrupts the quality of the plan, which needs to focus only on the factors critical to achieving the objective. It also forces timing that does not support the quality of the particular decision.

Planning, execution and the right timing of decisions are all part of handling common-and-expected-uncertainty. We need to vastly improve the processes that dictate what goes into the planning, what is left for execution, and the handling of the whole variety of non-trivial decisions, including making sure they are made at the right time.

What does Simplicity truly mean?


Goldratt assumed that every organization has to be inherently simple.  This recognition is one of the four pillars of TOC, and to my mind the most important.  It is in direct clash with Complexity Theory when applied to human organizations.

Comment: I refer in this post only to organizations that have a goal and serve clients.

Is Inherent Simplicity just a philosophical concept without practical impact?

One of the most practical pieces of advice I got from Eli Goldratt was:

If the situation you are looking at seems too complex for you then:

You are looking at too small a subsystem – look at the larger system to see the simplicity

This is very counter-intuitive advice. When you see complexity, should you look at an even bigger picture? But the situation actually becomes clearer when you analyze the larger system, because what is important, and mainly what is not important, becomes apparent.  A production shop-floor might look very complex to schedule.  Only when you include the actual demand, and distinguish between firm and forecasted demand, do you realize what the real constraint is, and only then do the exploitation and subordination become apparent.

The term ‘simplicity’ needs to be clarified. There are two different definitions of ‘complexity’, which also clarify what ‘simplicity’, its opposite, means.

  1. Many variables, with partial dependencies between them, impact the outcomes.
  2. It is difficult to predict the outcome of an action or a change in the value of a certain variable.

The second definition describes why complexity bothers us.  The first one describes what seems to be the cause for the difficulty.

The term ‘partial dependency’ is what makes the interaction between variables complicated. When the variables are fully dependent on each other, a formula can be developed to predict the combined outcome.   When the variables are absolutely independent, it is again easy to calculate the total impact.  It is when partial dependencies govern the output that prediction becomes difficult.

Examples of independent, fully dependent, and partially dependent variables:

  1. Several units of the same resource. The units are independent of each other.
  2. A production line where every problem stops the whole line. The line certainly works at the pace of the slowest station, and every station is fully dependent on all the other stations in the line.
  3. A regular production floor with different work centers and enough space between them. Every work center is partially dependent on the previous ones to provide enough materials for processing.
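A minimal simulation can make the three cases tangible. The daily rates and the ±50% daily variability below are arbitrary assumptions chosen only to illustrate the behavior, and the function names are mine:

```python
import random

def independent_output(mean_rates, days):
    # Case 1 - several units of the same resource: total output is simply the sum.
    return sum(mean_rates) * days

def fully_dependent_output(mean_rates, days):
    # Case 2 - a line where every problem stops everything: the slowest station sets the pace.
    return min(mean_rates) * days

def partially_dependent_output(mean_rates, days, rng):
    # Case 3 - work centers with space between them: each center can process only
    # what the previous center has already supplied, so daily fluctuations interact.
    wip = [0.0] * len(mean_rates)  # material waiting in front of each center
    finished = 0.0
    for _ in range(days):
        for i, mean in enumerate(mean_rates):
            capacity = rng.uniform(0.5 * mean, 1.5 * mean)  # +/-50% daily variability
            available = capacity if i == 0 else wip[i]      # first center: raw material always on hand
            processed = min(capacity, available)
            if i > 0:
                wip[i] -= processed
            if i + 1 < len(mean_rates):
                wip[i + 1] += processed
            else:
                finished += processed
    return finished

rates = [10, 12, 11]  # assumed average units per day for three stations
print(independent_output(rates, 100))      # 3300
print(fully_dependent_output(rates, 100))  # 1000
print(partially_dependent_output(rates, 100, random.Random(7)))  # typically near or below 1000
```

Only the third case needs a simulation at all: there is no closed formula, which is exactly why partial dependency, combined with variability, makes outcomes hard to predict.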

When, on top of the complexity, every variable is exposed to significant variability then the overall complexity is overwhelming.

Can the performance of the organization be truly unpredictable?

You may call this state “chaos”, or just “on the verge of chaos”; the point is that clients cannot tolerate such performance.  When I am promised delivery on October 1st at 2pm and the delivery shows up on October 22nd at 6:30am – this is intolerable.

Is it possible to be on the verge of chaos internally, but still provide acceptable delivery to clients?

In order to achieve acceptable reliability, organizations have to become simple enough.  The initial impression of complexity is misleading because the partial dependencies are dampened, so their impact on the deliveries is limited.  The reduction of the partial dependencies is achieved by providing excess capacity and long lead-times.  TOC simplifies it more effectively by using buffers and buffer management.  What we get is good enough predictions of meeting due-dates, and even the ability to promise rapid response to the part of the market that is ready to pay more for quick supply.

Still, the use of the buffers means: the predictability is limited!
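Buffer management conventionally splits the buffer into three equal zones. A minimal sketch, using the standard TOC green/yellow/red convention (the function name and the example numbers are mine):

```python
def buffer_status(buffer_consumed, buffer_size):
    """Classify buffer penetration into the three standard TOC zones."""
    penetration = buffer_consumed / buffer_size
    if penetration < 1 / 3:
        return "green"   # on track, no action needed
    if penetration < 2 / 3:
        return "yellow"  # watch closely, prepare a recovery plan
    return "red"         # expedite now

# An order with a 9-day buffer:
print(buffer_status(2, 9))  # green
print(buffer_status(5, 9))  # yellow
print(buffer_status(8, 9))  # red
```

The zones translate limited predictability into a simple priority signal: nothing is expedited while in green, and attention is spent only where the buffer is heavily consumed.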

Even Inherent Simplicity cannot truly mean precise predictability! The whole idea is to determine the range within which we are able to predict.  When CCPM planning of a project predicts completion in June 2017, it actually means no later than June 2017.  It could be completed earlier, and we usually prefer it earlier, but the prediction of June 2017 is good enough.

Thus, simplicity means predictions within an acceptable range!

Does simplicity mean the solution can be described in one paragraph? I doubt whether one paragraph on CCPM is enough to give the user the ability to judge the possible ramifications.  Certainly we cannot describe the BOK of TOC in one paragraph.

Simplicity in radiating an idea means the idea is well understood. This is the meaning of “predictability” when we deal with marketing messages:  we are able to predict what the reader, listener or viewer understands!  Even here there is a certain range of interpretation that we have to live with.

What about the details of the solution itself? Is the solution necessarily easy to implement?

Easy and simple are not synonymous. The concepts could be simple, but the implementation might face obstacles, usually predictable obstacles, but overcoming the obstacles might be difficult.  So, both simplicity and ease of implementation are highly desirable, but not always perfectly reachable.

We in TOC appreciate simplicity, but achieving it is a challenge. The requirements for truly good solutions are: Simplicity, Viability (possible to do in reality) and Effectiveness (achieving the objective).

An example illustrating the challenge:

Simplified-DBR is a simple, effective solution for reliable delivery in manufacturing. However, for buffer management to work properly we assume the net touch time is less than 10% of the production lead-time.  This is a complication!  A solution for manufacturing environments where net touch time is longer than 10% has been developed. It complicates the information required for buffer management, but it is effective.
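The 10% assumption can be stated as a one-line check; the function name, the threshold as a parameter, and the example numbers are illustrative only:

```python
def sdbr_assumption_holds(net_touch_time, lead_time, threshold=0.10):
    # Simplified-DBR buffer management assumes net touch time is a small
    # fraction (< 10%) of the production lead-time.
    return net_touch_time / lead_time < threshold

print(sdbr_assumption_holds(6, 80))   # True:  6/80 = 7.5%, the assumption holds
print(sdbr_assumption_holds(20, 80))  # False: 20/80 = 25%, the extended solution is needed
```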

I remember my professor of the History of Physics, Prof. Sambursky, who explained to us:

“At all times, since ancient Greece, the scientists looked for the ONE formula that would explain everything. They always came up with such a formula, and then a newly discovered effect did not behave according to the formula.  The formula was corrected to fit the behavior of that effect.  Then more new effects contradicted the formula, the formula became very cumbersome, and it could not predict the behavior of new effects.  Then a new theory came with a new simple formula, and the cycle went on again.”

TOC is basically simple. It strives to identify the Inherent Simplicity, come up with simple solutions, simple messages and easy implementations.  But, we have, from time to time, to add something to deal with environments where a certain basic assumption is invalid.   This is, to my mind, the most practically effective way to manage organizations.

Until a new, simpler, yet still effective, approach emerges…

From a TOC perspective: Paying tribute to a Great Pragmatic Thinker

Written by Dr. Alan Barnard and Eli Schragenheim


We both encountered the name of Prof. Herbert Simon, long before we met Dr. Eli Goldratt. Prof. Herbert Simon (1916-2001), a recipient of the Nobel Prize for Economics in 1978, was an American political scientist, economist, sociologist, psychologist, and computer scientist. Prof. Simon was among the founding fathers of several of today’s most important scientific domains, including artificial intelligence, information processing, decision-making, problem-solving, organization theory, complex systems, and computer simulation.

He coined the terms bounded rationality and satisficing.

Bounded rationality is the idea that, when we make decisions, our rationality is limited not only by the inadequate information we have available and/or inadequate knowledge to predict the outcomes of our decisions, but also by the cognitive limitations of our minds and the limited time available to make these decisions.

Simon coined the term satisficing (a combination of satisfy and suffice) to describe the heuristic we likely use when having to quickly make difficult decisions with inadequate information and/or knowledge.

Simon often said: “people are not optimizers, they are satisficers” – we seek a satisfactory solution rather than an optimal one. When faced with a challenging problem or decision, we search for a solution that satisfies our pre-defined criteria to a sufficient level.  When such a solution is found there is no need to continue searching – we have found a solution that is good enough!
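The difference between the two search strategies can be sketched as follows; the function names and the supplier-scoring example are hypothetical:

```python
def satisfice(options, score, good_enough):
    """Stop at the first option that meets the aspiration level (Simon's satisficing)."""
    examined = 0
    for option in options:
        examined += 1
        if score(option) >= good_enough:
            return option, examined   # good enough: stop searching
    return None, examined             # nothing met the aspiration level

def optimize(options, score):
    """Exhaustive search: examine every option to find the very best one."""
    return max(options, key=score), len(options)

candidates = [55, 72, 91, 64, 98, 70]  # e.g. quality scores of candidate suppliers
choice, looked_at = satisfice(candidates, lambda x: x, good_enough=90)
best, examined_all = optimize(candidates, lambda x: x)
print(choice, looked_at)   # 91 3 -- accepted the third option and stopped
print(best, examined_all)  # 98 6 -- had to examine all six
```

The satisficer accepts a slightly lower score in exchange for examining half the options, which is precisely the trade of decision quality for scarce attention that bounded rationality describes.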

We both rediscovered Simon’s incredible insights when we recently started researching the limitations managers are confronted with, which the Theory of Constraints and its applications can help diminish or even eliminate. These are the limitations imposed by complex and uncertain situations, as well as by conflicting objectives, in solving problems and making decisions.

Below are three of the highlights we found during this “rediscovery”.

In 1971, the world was just at the beginning of the huge advancements in information technology and the exponential growth in the access to information. Yet, Prof. Simon already had the foresight to warn us about one of the major negatives of the increased access to more and more information.

In a public speech he gave in 1971 he warned:

“The wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

And he went further. In his 1973 paper titled, “Applying Information Technology to Organizational Design”, Prof. Simon wrote:

“The information-processing systems of our contemporary world swim in an exceedingly rich soup of information. In a world of this kind, the scarce resource is not information; it is processing capacity to attend to information. Attention is the chief bottleneck in organizational activity, and the bottleneck becomes narrower and narrower as we move to the top of organizations.”

Sound familiar?

Dr. Goldratt gave similar warnings. First, in The Haystack Syndrome, he warned about the importance of differentiating between data and information (the answer to the question asked) and the need to build true information systems that deliver only the information relevant to managers for making important decisions. Later, he also shared his insight that the ultimate constraint in any organization is Management, especially Top Management’s limited Attention.

Goldratt explained in “Standing on the Shoulders of Giants” that he simply advanced the work started by Henry Ford and Taiichi Ohno, realizing that to improve flow you need a practical mechanism to prevent overproduction – producing things that are not needed, or at least not needed now.

To our mind, Dr. Goldratt also advanced the work started by Prof. Simon, by outlining practical mechanisms for helping managers decide what to focus on (and as importantly, what not), to better exploit and not waste the scarcest resource in any organization – management’s limited attention.

Considering the growth in Big Data, what are the implications of these warnings on managers’ problem solving and decision making today?

Will Big Data really provide the step-change in managers’ and domain specialists’ ability to improve the quality and speed of their decisions?

And, even if it did, will it be sufficient?

And the third insight: a citation from Prof. Simon’s article entitled “Making Management Decisions: The Role of Intuition”:

“What all of these decision-making situations have in common is stress, a powerful force that diverts behavior from the urgings of reason. They are examples of a much broader class of situations in which managers frequently behave in clearly nonproductive ways.”

We, Alan and Eli, are deeply interested in the impact of fear on the way managers make decisions and manage organizations. We all fear being blamed for our decisions and actions. So it is safer not to act. But we also fear missing something (in the ocean of data) and being blamed for not acting.  Whether we do (act) or don’t, we are damned.

Fear results in stress. And we know that when people are under stress, they often freeze-up, not doing something they should, or they over-react, doing something they should not. When under fear-induced stress, we often act in irrational ways.

Prof. Simon also frequently warned against excessive fear of unforeseen consequences. He advised that the best way to overcome such fears was to experiment and to see what happens.

We have four questions regarding Prof. Simon’s concepts and their implications for managers today.

  1. Are decisions made within organizations also aimed at satisficing rather than optimizing?  Our reason for asking is our observation that organizations seem to impose the value of optimization on decisions, and by that almost force managers to look beyond satisficing, leading them to be considerably less focused and resulting in decision delays and/or errors.
  2. To what extent does the fear of being blamed for wrong decisions or wrong actions still cause many of the avoidable decision errors and delays today?
  3. To what extent does the fear of missing something important in the data still cause managers to look at too much data and too many measurements, causing distractions which, in turn, waste management attention and also result in decision errors and delays?
  4. Assuming there is consensus on the timeless importance of the above insights from Prof. Simon, a Nobel Prize recipient, why have many others not followed and continued this important work?

The answers to these questions could hint at why the awareness and adoption of TOC are still much lower than we expected…