Small-TOC and Big-TOC – dealing with a key wicked problem of TOC

TOC is now at a crossroads. On one hand, we have well-defined methodologies for improving the flow of products and services while making delivery reliable.  On the other hand, Goldratt taught us to use cause-and-effect tools for diagnosing the current blockages to success, pointing to future ramifications, challenging assumptions and coming up with a winning strategy plan.  The critical blockage, by the way, does not necessarily delay the flow of products.  Many times it is being in the wrong market, failing to see the real needs of the market or de-motivating the employees.

We have a conflict: do and sell what we know well, versus shooting for the sky, using the variety of TOC tools, and maybe other tools as well, to solve the real problems that block the organization from achieving much more.

Here is the conflict – the way I see it:

The well-known TOC methodologies are DBR, S-DBR, buffer management, Replenishment, CCPM and possibly throughput accounting. The full scope would also include the TP, the six questions, S&T, SFS, the pillars, the engines of disharmony and many other insights that are not well integrated into a coherent BOK.

The cloud, one of the tools that is not part of the “well-known and effective TOC methodologies”, represents a wicked problem within the TOC community. The upper leg expresses the notion of “Small-TOC”, which is proven to give excellent results and can be sold well (when the focus is on it), while the bottom leg, “Big-TOC”, brings higher value, as it integrates the functional results into bottom-line improvements and is more universal, but it is also more difficult to sell.

What is not mentioned in the cloud is my additional observation of an undesired-effect (UDE):

There is growing competition, from other methodologies, on improving the flow of products, services and projects.

The point is that these new competing methodologies are not superior to TOC, but they are superior to the current practices, so they compete for the mind of potential clients, including The Goal readers.  These methods compete with Small-TOC, but not with Big-TOC.  Let me just mention Lean, DDMRP and Agile as such methods.  If you agree with this assumption then the advantage of selling Small-TOC is threatened and could be temporary.

Most Small-TOC implementations are functional, and thus do not need the full support of top management. Big-TOC should be sold and addressed to top management, as its advantage is integrating the whole company towards the desired state of growth coupled with stability.

How can we evaporate the above cloud?

We certainly have difficulty in selling Big-TOC, but selling Small-TOC is also far from being trivial.

A potential solution, challenging the above critical assumption, is to present TOC as a method to answer two critical questions, as verbalized by Dr. Alan Barnard:

  • How much better can you do? In other words, what is limiting the performance of the organization, preventing it from achieving much more?
  • What is the simplest, fastest, lowest-cost and lowest-risk way to achieve that?

These questions are holistic and generic, and they apply to the top management of the organization. While the two questions can easily be translated into actual value for the client organization, they raise the issue of whether the client trusts that TOC can lead to effective and safe answers to the questions.  Moreover, letting “consultants”, with all the connotations the word raises, lead the strategy of an organization generates fear, which is also personal (what might it do to ME?).

The obstacles to convincing executives who have some idea of TOC, like The Goal readers, are much better handled when the clients see a large organization of truly experienced TOC experts who closely collaborate to achieve the most effective answer to the second question.

Currently there are two relatively large TOC consultancy companies that do well, even though their growth is not spectacular, and they are not truly large compared to several non-TOC consultancy companies. Having several high-level consultants involved in every implementation provides an opportunity to quickly identify unexpected signals and draw the right response, and by this reduce the risk – and this is also what the client expects from an array of highly experienced people.

TOC Global is a new TOC non-profit organization that aims to solve wicked problems that limit the performance of organizations, by combining the experience and knowledge of a diverse group of TOC experts. TOC Global is an international network of top consultants, coaches and practitioners who are ready to contribute time and effort to improve the awareness, adoption, actual value generated and also the sustainability of TOC implementations.  There are three major routes that TOC Global is determined to take:

  1. Supporting new and ongoing TOC implementations to achieve very high value. This means guiding local consultants and practitioners through active dialogue to address the specific issues, challenge hidden assumptions, and deal with the fears of managers, which block them from moving. In other words, helping those who are ready to be helped to deal with the wicked problems of specific implementations. A free Ask-an-Expert service is an initial step in this direction.
  2. Choosing challenging wicked problems and running projects to analyze them, carry out careful experiments and eventually complete an effective solution, which would add huge value to the specific organization and similar ones.
  3. Improving the awareness of TOC through investing in marketing efforts.

This activity would lead to another desired effect: making the current TOC BOK more complete and more effective. Being a non-profit organization allows sharing the lessons and the new knowledge with the whole TOC community.

Big-TOC always looks for possible negative branches of any new exciting idea that solves a problem. The grouping of specific people in TOC Global naturally generates concerns of possible competition with other TOC experts.  The only way to trim that negative branch is by instituting a very strong ethical code, and by being ready to collaborate and join forces with others.  The real competition of TOC is not Lean or Six Sigma; it is the big consulting companies.  Big-TOC offers a leaner and more collaborative process based on focusing on the truly critical issues, helping the organization verbalize its valuable intuition, and achieving huge value based on simplicity and reliability.  One might say this is the right way to become more antifragile.

Sales and Operations Planning the TOC Way


S&OP is a known practice, usually focused on the immediate short time frame, in which Sales and Operations negotiate what to produce.

Much more value can be generated when ideas regarding market opportunities are truly analyzed, considering the potential throughput (T) and the capacity requirements, which include the cost of using overtime, special shifts, temporary workers and outsourcing. I refer to these means of quickly increasing the available capacity, for a certain additional cost (delta-OE), as the capacity buffer.

When there is excess capacity throughout all of operations, we usually describe this state as “the constraint lies in the market.” In such a situation any additional sale is welcome.  The question is how such a situation impacts the sales agents – are they truly compelled to make big moves to bring in new clients and new markets, or do they still focus on the existing clients, looking for a few small opportunities to increase sales just a bit?

When salespeople come up with new ideas, pointing to new market segments, are they being listened to? How are those ideas, which might also raise concerns on top of opening a potential opportunity, checked by top management?

Suppose an idea is raised of packaging several different products together, selling the package for a lower price than the simple cumulative price of all the items. Several questions immediately arise:

  • Selling a package would definitely reduce the sales of the individual items. The question is: by how much? Does the overall total T go up or down? Are there ramifications for the operating expenses (OE)?
  • Is there enough protective capacity to face the new potential demand? If not, can we use the capacity buffer and still gain delta-T >> delta-OE? Or should we intentionally reduce sales of products that yield less T per unit of the critical capacity they require?
  • As no one can accurately forecast the demand for such a new offer – how can we test both the risk of causing a loss and the chance of gaining much more profit? In other words, what are the possible upside and downside of the decision?

The regular S&OP process does not ask these questions. Forecasts are treated as one number representing reality, and the financial impact is supposed to be based on the cost-per-unit.  The flaw of such a process lies in two erroneous concepts, cost-per-unit and the one-number forecast, which lead to wrong decisions and mediocre results, in spite of the good intentions of both the Sales and Operations teams.

The real result is that most organizations are stuck with their current clients and market segments, and they do very little to make a real move towards a leap in the organization’s financial performance.

Kiran Kothekar, co-founding director at Vector Consulting Group, India, made the following important observation during his presentation at the 2016 TOCICO conference:

Using targets leads to inferior overall performance of the organization!

The rationale of the above statement is that targets behave very much like Parkinson’s Law. We try to hit the target, but we know better than to go above the target, because we don’t like getting higher targets in the future.  Another negative ramification is that most targets are set for a local area.  The derivation of the target is done by considering the overall forecast, but when one of the other parts fails to reach its target, trying to meet the targets of the other local areas causes damage.  So, hitting targets locally causes problems elsewhere in the organization.  For instance, focused efforts to sell specific products in order to hit their target might come at the expense of other sales that are someone else’s responsibility, or at the expense of future sales.  Promotions, carried out in order to meet the longer-term targets, create massive temporary capacity problems that harm the sales of other products, and reduce the overall Net-Profit = T – OE.

The practical ramification is that setting targets would eventually disrupt any implementation of TOC, no matter the level of benefits already earned.  In his presentation Kiran made it clear that using TOC performance measurements as targets would cause the same negatives.  I fully agree.

In the mind of management, setting targets has a reason: pushing salespeople, operators and middle-level managers to make the required efforts to achieve good results.  Without those quantitative measures there is a concern that employees would constantly do less than they can and should.

Judging whether the performance of an individual, or a whole function, is about right cannot be done by relying on quantitative measurements alone. The variability is too high, and on top of it there are too many dependencies on other individuals, functions and external events.  Observing behavioural patterns is a better way to identify low motivation. Motivating people by encouraging them to raise improvement ideas, and treating those ideas seriously by analyzing their impact on the goal, is a way to maintain the right culture.

Treating uncertainty calls for using forecasting ranges. Falling below the range should call for analysis, not automatic blaming.  Going beyond the range also calls for an analysis.  Most of the time the cause is not the under- or over-performance of someone, but a signal about changing reality that allows us to know a little better, which is the key to handling uncertainty and gaining a competitive edge from it.

I have described the process of ongoing sales and operations planning, called DSTOC (decision support based on TOC), in previous posts. It encourages capturing the intuition of salespeople as well as operational people, converting it into ranges, and checking the financial ramifications of the reasonable worst case and the reasonable best case.  It is not just a way to make sensible decisions under uncertainty; it is also the sensible way to abolish the use of targets to get people to do what they know they should do.
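To make that last step concrete, here is a minimal sketch of the range-based check, with purely hypothetical figures for the bundle idea discussed earlier:

```python
# A minimal sketch of checking a decision against its reasonable worst case
# and best case. All figures are hypothetical. T = throughput (sales minus
# truly variable costs); delta_OE = the extra cost of using the capacity
# buffer (overtime, extra shifts, temporary workers, outsourcing).

def decision_range(t_worst, t_best, delta_oe_worst, delta_oe_best):
    """Return the reasonable worst-case and best-case impact on profit."""
    worst = t_worst - delta_oe_worst   # pessimistic extra T, pessimistic extra cost
    best = t_best - delta_oe_best      # optimistic extra T, optimistic extra cost
    return worst, best

# Hypothetical bundle idea: sales intuition says the package adds $300-$900
# of T per week; covering the extra load with overtime costs $200-$400.
worst, best = decision_range(t_worst=300, t_best=900,
                             delta_oe_worst=400, delta_oe_best=200)
print(f"Reasonable worst case: {worst:+}")   # -100: a limited downside
print(f"Reasonable best case:  {best:+}")    # +700: a much larger upside
```

If the reasonable worst case is tolerable and the reasonable best case is significant, the decision can be taken without attaching any target to it.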

The Critical Information behind the Planned-Load

When I developed the concept of the Planned-Load, I thought it would be a “nice-to-have” additional feature of my MICSS simulator.  MICSS provided me with the opportunity to check different policies and ideas relevant to managing manufacturing. It took me years to realize how important the planned-load is.

Actually, without the use of the planned-load it is impossible to use Simplified Drum-Buffer-Rope (S-DBR), which replaced the much more complex DBR. Still, it seems most people in the TOC world, including those who develop TOC software, are not aware of its importance, and certainly not of its potential value, which has not yet fully materialized.

The planned-load for a critical resource is the accumulation of the time it would take to complete all the work that is firmly planned. The objective is to help us understand the connection between load, available capacity and response time!

Let’s use the justice system as an example. Imagine a judge examining the work he has to do: sitting in already scheduled trial sessions for a total of 160 hours.  On top of that he needs to spend 80 hours reading protocols of previous sessions and writing verdicts.  All in all, 240 hours are required to complete the existing known workload.

Assuming the judge works 40 hours a week, all the trials currently waiting for the judge should theoretically be completed in six weeks. We can expect, with reasonable certainty, that a new trial, requiring about 40 hours of net work from the judge, would be finished no later than ten weeks from now.  I have added three weeks as a buffer against whatever could go wrong, like some of the trials requiring additional sessions or the judge becoming sick for a few days. This buffer is required in order to reasonably predict the completion of a new trial.
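As a small sketch, the arithmetic of the example is:

```python
# The judge example in numbers: planned-load as a predictor of response time.
WEEKLY_HOURS = 40            # the judge's working week
planned_load = 160 + 80      # scheduled sessions plus reading and writing, in hours

weeks_to_clear = planned_load / WEEKLY_HOURS    # existing load clears in 6.0 weeks

new_trial_hours = 40         # net work the new trial requires from the judge
buffer_weeks = 3             # protection against extra sessions, sickness, etc.
quoted_weeks = (planned_load + new_trial_hours) / WEEKLY_HOURS + buffer_weeks

print(weeks_to_clear)   # 6.0
print(quoted_weeks)     # 10.0: a safe promise for the new trial
```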

However, in reality we could easily find out that the average lead-time of a new trial is fifty (50) weeks – how can that be explained?

When expectations based on the planned-load do not materialize we have to assume one of the following explanations:

  1. The resource we are looking at is a non-constraint and thus has a lot of idle time. In the judge’s case, the real bottleneck could be the lawyers, who ask for and get long delays between sessions.
  2. While the resource is a constraint, the management of the resource, specifically the efforts to exploit its capacity, is so bad that substantial capacity is wasted.
  3. The actual demand contains a high ratio of urgent cases, not planned a-priori and thus not part of the current planned-load. Such cases frequently appear and delay the regular planned work already registered in the system.

The importance of the planned-load of the “weakest link” in operations is that it gives a quick rough estimation of the net queue time of work orders and missions.  When the queue time is relatively long, the planned-load gives an estimation of the actual lead-time, provided there is no major waste of capacity.  In other words, the planned-load is a measure of the potential responsiveness of the system to new demand.

Have a look at the following picture depicting the planned-load (the red bar) of six different resources of a manufacturing organization. The numbers on the right side of the bars denote hours of work, including average setups.


Assuming every resource works 40 hours a week, we see one resource (PK – the packaging line) that has almost three weeks of net work to do.

The green bars represent the amount of planned work that is already at the site of the resource. In production lines, but also in environments with missions that require several resources, it could be that a certain work-order is already firm but has not yet reached the specific resource.  That work is part of the planned-load; it is NOT part of the green bar.  PK is the last operation, and most of the known work-orders for it reside at upstream resources, or may not even have been released to the floor yet.

The truly critical information is contained in the red bars – the planned-load. To understand the message you need an additional piece of information: the customer lead-time is between six and seven weeks.

The above information is enough to safely conclude that the shop floor above is very poorly handled.

The actual lead-time should have been three weeks on average. So, quoting a four-week delivery time might bring more demand.  Of course, when the demand goes sharply up, the planned-load should be carefully re-checked to validate that the four-week delivery is safe.

Just to illustrate the power of the planned-load information – here is the planned-load of the same organization four months after introducing the required changes in operations:


The customer lead-time is now four weeks. The demand went up by 25%, causing work-center MB to become an active capacity constraint. As the simulation now follows the correct TOC principles, most of the WIP naturally gathers at the constraint.  Non-constraint resources that are downstream of the constraint have very little WIP residing at their site.

The longest planned-load is three weeks (about 120 hours of planned work for MB). The four-week quotation includes the time-buffer to allow getting over problems at MB itself, and going through the downstream operations.

This is just the basic understanding of the criticality of the planned-load information. Once Dr. Goldratt internalized the hidden value of that information, he based the algorithm for quoting “safe dates” on the planned-load plus a part of the time-buffer.
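A minimal sketch of that quoting logic follows. The text does not specify which part of the time-buffer to add, so the fraction used here (one half), the size of the new order and the two-week buffer are assumptions for illustration only:

```python
# Sketch of a safe-date quote: the planned-load of the weakest link, plus the
# new order itself, plus a part of the time-buffer. The buffer fraction is an
# assumption; the text only says "a part of the time-buffer".

def safe_date_weeks(planned_load_hours, order_hours, weekly_capacity_hours,
                    time_buffer_weeks, buffer_fraction=0.5):
    queue_weeks = (planned_load_hours + order_hours) / weekly_capacity_hours
    return queue_weeks + buffer_fraction * time_buffer_weeks

# MB carries about 120 hours of planned work; assuming a 40-hour week, a
# 20-hour new order and a two-week time-buffer:
print(safe_date_weeks(120, 20, 40, time_buffer_weeks=2))   # 4.5 weeks
```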

A graph of the behavior of the planned-load through time can be revealing. When the graph goes up, it tells you that more new demand is coming in than is being delivered, which means a risk of an emerging bottleneck.  When the graph goes down, it means excess capacity is available, allowing faster response.  Both Sales and Production should base their plans accordingly.
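A small sketch of that signal, using an illustrative series of weekly planned-load readings:

```python
# If the planned-load of a resource keeps rising week after week, demand is
# arriving faster than it is delivered: a bottleneck may be emerging. The
# series below is illustrative.

def planned_load_trend(history):
    """Average week-over-week change of the planned-load, in hours."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    return sum(deltas) / len(deltas)

history = [96, 104, 118, 131, 150]      # weekly planned-load of one resource
slope = planned_load_trend(history)     # +13.5 hours per week
if slope > 0:
    print(f"Rising by {slope:.1f} hours/week: risk of an emerging bottleneck")
else:
    print("Falling: excess capacity allows faster response")
```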

Another important practice is to look at the immediate short-term impact of the planned-load in order to manage overtime and protect reliable delivery. The time horizon is the time-buffer – making sure the capacity is enough to deliver all orders within that horizon. This identifies problems before buffer management warns of “too much red”.  One should always look at BOTH the planned-load and buffer management to manage the execution.
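A minimal sketch of that short-term check, with illustrative numbers:

```python
# Within the time-buffer horizon, compare the load that must be delivered
# against the available capacity; any gap is the overtime to plan NOW,
# before buffer management starts showing "too much red".

def overtime_needed(due_load_hours, horizon_weeks, weekly_capacity_hours):
    """Hours of extra capacity needed to deliver everything due in the horizon."""
    available = horizon_weeks * weekly_capacity_hours
    return max(0, due_load_hours - available)

# Illustrative: 180 hours are due within a two-week time-buffer, but only
# 2 * 80 = 160 hours of regular capacity exist, so plan 20 hours of overtime.
print(overtime_needed(due_load_hours=180, horizon_weeks=2,
                      weekly_capacity_hours=80))   # 20
```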

S-DBR, and its planned-load part, certainly applies to manufacturing. But S-DBR, including both the planned-load and buffer management, should also be used to manage the timely completion of missions in almost all organizations.

There is a tendency in TOC circles to assume that any big enough mission, for which several resources are required, is a project that should be managed according to CCPM.  While it is possible to do so and get positive results, this is not necessarily the best way, and certainly not the simplest way, to manage such missions.

I think we should make the effort to distinguish clearly when S-DBR can be effectively used for missions and when CCPM is truly required.   And, by the way, we should all think about how the capacity of key resources should be managed in multi-project environments.  Is there a way to use the planned-load as a key indicator of whether there is enough capacity to prevent too-long delays, or whether there is an urgent need to increase capacity?

Is Throughput-per-constraint-unit truly useful?


Cost-per-unit is the most devastating flawed paradigm TOC has challenged. From my experience many managers, certainly most certified accountants, are aware of some of the potential distortions.  One needs to examine several situations to get the full impact of the distortion.

Cost-per-unit supports a simple process for decision-making, and this process is “the book” that managers believe they should follow.  It is difficult to blame a manager for making decisions based on cost-per-unit.  There are many more ramifications of the blind acceptance of cost-per-unit, like the concept of “efficiency” on which most performance measurements are based.   TOC logically proves how those “efficient” performance measurements force loyal employees to take actions that damage the organization.

Does Throughput Accounting offer a valid “book” replacing the flawed concept of cost-per-unit?

Hint: Yes, but some critical developments are required.

The P&Q is a famous example, originally used by Dr. Goldratt, which proves that cost-per-unit gives a wrong answer to the question: how much money is the company in the example able to make?

Every colored ellipse in the picture represents a resource that is available 8 hours a day, five days a week. The chart represents the routing for two products: P and Q. The numbers in the colored ellipses represent the time-per-part in minutes.  The weekly fixed expenses are $6,000.

The first mistake is ignoring the possibility of a lack of capacity. The Blue resource is actually a bottleneck, preventing the full production of all the required 100 units of P and 50 units of Q every week.  The obvious required decision is:

What current market should be given up?

Regular cost-accounting principles lead us to give up part of the P sales, because a unit of P yields a lower price than Q, requires more materials and also longer work time.

This is the second common mistake: when you check what happens when some of the Q sales are given up instead of P units, you realize that giving up Q is the better decision!

The reason is that the Blue resource is the only one that lacks capacity, and a unit of Q requires much more time from the Blue resource than a unit of P does, while the rest of the resources have idle capacity.

The simple and effective way to demonstrate the reason behind what seems like a big surprise is to calculate, for every product, the ratio T/CU – throughput (selling price minus material cost) divided by the time required from the capacity constraint.  In this case a unit of P yields T of ($90-$45) divided by 15 minutes = $3 per minute of the Blue resource’s capacity. A unit of Q yields only ($100-$40)/30 = $2 per minute of Blue.

This is quite a proof that cost-per-unit distorts decisions. It is NOT a proof that T/CU is always right.  Following regular cost-accounting principles, once the Blue resource is recognized as a bottleneck, the normal result is a loss of $300 per week.  When the T/CU rule is followed, the result is a positive profit of $300 per week.
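A minimal sketch that reproduces both outcomes from the data above, by filling the Blue capacity in each priority order:

```python
# P&Q in numbers: Blue has 8h * 5 days = 2400 minutes a week, weekly demand
# is 100 P and 50 Q, and fixed expenses are $6,000 per week.

BLUE_MINUTES = 8 * 60 * 5      # 2400 minutes of Blue capacity per week
FIXED_EXPENSES = 6000
PRODUCTS = {   # throughput per unit, Blue minutes per unit, weekly demand
    "P": {"t": 90 - 45, "blue": 15, "demand": 100},
    "Q": {"t": 100 - 40, "blue": 30, "demand": 50},
}

def weekly_profit(priority):
    """Fill the Blue capacity in the given priority order; return T - OE."""
    minutes_left, total_t = BLUE_MINUTES, 0
    for name in priority:
        p = PRODUCTS[name]
        units = min(p["demand"], minutes_left // p["blue"])
        total_t += units * p["t"]
        minutes_left -= units * p["blue"]
    return total_t - FIXED_EXPENSES

print(weekly_profit(["Q", "P"]))   # -300: the cost-accounting choice (give up P)
print(weekly_profit(["P", "Q"]))   # +300: the T/CU choice (P first)
```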

I claim that the T/CU is a flawed concept!

I still claim that the concept of throughput, together with operating-expenses and investment, is a major breakthrough for business decision-making!

The above statement about T/CU has already been presented by Dr. Alan Barnard at TOCICO and in a subsequent paper.  Dr. Barnard showed that when there is more than one constraint, T/CU yields wrong answers. I wish to explain the full ramifications of that for decisions that are taken prior to the emergence of new capacity constraints.

The logic behind T/CU is based on two critical assumptions:

  1. There is ONE active capacity constraint and only one.
     Comment: an active capacity constraint means that if we got a little more capacity, the bottom line would go up, and if we wasted a little of that capacity, the bottom line would definitely go down.
  2. The decision at hand is relatively small, so it would NOT cause new constraints to appear.

Some observations of reality:

Most organizations are NOT constrained by their internal capacity! We should note two different situations:

  • The load the market demand puts on the weakest link is clearly lower than its available capacity.
  • While one, or even several, resources are loaded to 100% of their available capacity, the organization has the means to quickly get enough additional capacity for a certain price (delta-OE), like overtime, extra shifts, temporary workers or outsourcing. In this situation the lack of capacity does not limit the generation of T and profit, and thus capacity is not the constraint.

The second critical assumption, that the decision considered is small, means that T/CU should NOT be used for the vast majority of new marketing and sales initiatives! This is because most marketing and sales moves could easily cause extra load that penetrates the protective capacity of one or more resources, creating interactive constraints that disrupt reliable delivery.  Every company using promotions is familiar with the effects of running out of capacity, and with what happens to the delivery of the products that are not part of the promotion.

That said, it is possible that there are enough means to quickly elevate the capacity of the overloaded resources, but certainly both operations and financial managers should be well prepared for that situation.

Let’s view a somewhat different P&Q problem:


Suppose that management considers adding a new product W, without adding more capacity.  The new product W uses the Blue resource’s capacity, but relatively little of it.

The question is: what are the ramifications?

If before adding Product W the company had the Blue resource as a bottleneck (loaded to 125%), now three resources are overloaded. The most loaded resource is now the Light-Blue (154%), then the Blue (146%), and the Grey also reaches 135%.

So, according to which resource should the T/CU guide us?

Finding the “optimized solution” does not follow any T/CU guidance. The new product seems great from the Blue machine’s perspective.  Product W’s ratio of T to Blue time is (77-30)/5 = 9.4, the best of all the products.  If we went all the way according to this T/CU, we would sell all the W demand and part of the demand for P (the remaining capacity suffices for only 46 units of P) and none of the Q. That demand would generate a profit of $770, which is more than the $300 profit without the W product.

Is it the best profit we can get?

However, when you consider the T/CU ratio with respect to the Light-Blue resource, Product W is the lowest, with only $2.76 per minute of Light-Blue.

Techniques of linear programming can be used in the ultra-simple example above. As only sales of complete items are realistic, the resulting profit of $1,719 can be reached by selling 97 units of P, 23 units of Q and 42 units of W.  This is considerably higher than without Product W, but also much higher than relying on the T/CU of the Blue resource!
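A small sketch for checking such a mix. The throughput per unit, the fixed expenses and the Blue routing times are given in the text; the Light-Blue and Grey times below are my reconstruction from the reported utilization percentages (the original figure is missing here), so the feasibility part is illustrative only:

```python
# Profit of a mix is exact: T per unit is P: 90-45=45, Q: 100-40=60,
# W: 77-30=47, and fixed expenses are $6,000 per week.

T_PER_UNIT = {"P": 45, "Q": 60, "W": 47}
FIXED_EXPENSES = 6000
MINUTES_PER_WEEK = 8 * 60 * 5            # 2400 minutes per resource

ROUTING = {                              # minutes per unit on each resource
    "Blue":       {"P": 15, "Q": 30, "W": 5},    # given in the text
    "Light-Blue": {"P": 15, "Q": 10, "W": 17},   # reconstructed: an assumption
    "Grey":       {"P": 15, "Q": 5,  "W": 15},   # reconstructed: an assumption
}

def profit(mix):
    """Weekly net profit, T - OE, of a given product mix."""
    return sum(T_PER_UNIT[p] * n for p, n in mix.items()) - FIXED_EXPENSES

def feasible(mix):
    """True if the mix fits within the weekly capacity of every resource."""
    return all(sum(times[p] * mix[p] for p in mix) <= MINUTES_PER_WEEK
               for times in ROUTING.values())

reported = {"P": 97, "Q": 23, "W": 42}
print(profit(reported), feasible(reported))   # 1719 True
```

An actual solver, or full enumeration over whole units, would search for the feasible mix with the highest profit; the point is that no single resource’s T/CU ranking produces it.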

As already mentioned, the above conclusions have already been dealt with by Dr. Barnard. The emphasis here is that decision-making that could cause the emergence of new constraints is something we have to be able to analyze at the decision-making stage.

Until now we have respected the capacity limitations as they are. In reality we usually have some more flexibility.  When there are easy and fast means to increase capacity, for instance paying the operators for overtime, a whole new avenue opens for assessing the worthiness of adding new products and new market segments.  Even when the extra capacity is expensive, in many cases the impact on the bottom line is highly positive.

The non-linear behavior of capacity (there is a previous post dealing with it) has to be viewed as a huge opportunity to drive profits up, by product-mix decisions and by the use of additional capacity (defined as the “capacity buffer”). Looking towards the longer time frame could lead to superior strategy planning, understanding the worth of every market segment and using the capacity buffer to face the fluctuations of demand. This is the essence of Throughput Economics, an expansion of Throughput Accounting that uses the practical combination of intuition and hard analysis as the full replacement for the flawed parts of cost accounting.

T/CU is useless when the use of the capacity buffer, the means for quick temporary capacity, is possible.  When relatively large decisions are considered, the use of T/CU leads to wrong decisions, similar to the use of cost-per-unit.

Goldratt told me: “I have a problem with Throughput Accounting. People expect me to give them a number and I cannot give them a number.”  He meant that the T/CU number is too often irrelevant and distorting.  I do not have a number, but I believe I have an answer.  Read my paper on a special page in this blog entitled “TOC Economics: Top Management Decision Support”.  It appears on the menu at the left corner.

A Dialogue between TOC and SWOT


It is not easy for TOC people to evaluate ideas created outside the TOC community, for three interconnected reasons.

The first is the damaging tendency to assume that TOC challenges everything that is not part of the TOC BOK.  I hope we get over this tendency.

Another reason is the specific terminology used in TOC, which can differ from the use of the same terms elsewhere. Just think of the term ‘constraint’ and how its use in TOC is different from the rest of the world.

The third reason is that the TOC school of thought implies a certain sequence of analysis. It always starts from the goal or an important objective and asks the question:

What prevents you from achieving more?

It is a must to create enough bridges and dialogues with other sources of relevant managerial knowledge, to bring them into TOC and expand both its scope and its power.

Let’s check the relationship between TOC and SWOT analysis. SWOT, the acronym for Strengths, Weaknesses, Opportunities and Threats, is basically a marketing picture of the organization, a brand, or just a product.  The objective of SWOT is to lead the mind to improve the impact of the strengths, note the potential opportunities and grab the best of them, reduce the damage from the weaknesses, and become more careful about the threats.  The idea is that every part of SWOT impacts marketing, so the appropriate planning should take it into account.

SWOT starts with the Strengths, assuming they are the key to identifying the target markets and to emphasizing those aspects in the marketing campaign. TOC, on the other hand, starts its analysis with the weaknesses of the organization as a whole. These weaknesses are the key reason for the current state of the organization.  TOC uses several types of weaknesses – constraints, core problems, flawed policies (policy constraints) – all leading to the identification of flawed assumptions that can be challenged.

The basic assumption behind this part of TOC is that the core weakness, in capacity, capability and possibly in market perception, is the key leverage point: the most immediate opportunity to do much better in a relatively short time.

It took time for TOC to recognize the role of the strengths in outlining the way to vastly improve the future of the organization. I see the insight of defining the Decisive-Competitive-Edge (DCE) as a key development of TOC.  Goldratt defined the DCE as “answering a need of potential clients in a way that no competitor is able to.”  A TOC way to spot a need of the potential market, its pains that are now taken as “natural” or “part of reality”, is to look for possible UDEs of the market, by developing a branch of a current-reality-tree starting with the products, services and delivery.  But, in order to solve an UDE, certain key capabilities are required for developing an answer to that need.

So, the unique capabilities of the organization, like fast yet reliable flow of products, are its key strengths.  These capabilities are the source of new opportunities, meaning the ability to combine an unanswered need in the market with the ability to answer that need.  The logical cause-and-effect branch can start with the unique capabilities and then deduce the undesired effects in the market that could be solved by those capabilities.

For example, fast and reliable flow could solve the urgent situations of potential clients badly needing the products, when the current standard of delivery is too slow to resolve such an urgency. The next step in the analysis is estimating the value, for the potential clients, of receiving quick and reliable delivery, and whether this solution could generate new business for such a client, knowing there is a satisfactory, even if somewhat more expensive, answer to such emergencies.  Such an analysis should come to the conclusion that the organization should not “waste” the unique capability by selling the fast response to everybody, even when no urgency exists, without charging more for it.

The usual SWOT analysis looks at the strengths of a product or service from the perspective of the market. These strengths are all due to certain capabilities of the company. Knowing the unique capabilities better, coupled with sensitivity to the pains and needs of the market, is critical for identifying new opportunities.   Strengths and opportunities have to be bundled together to get the full effect.

The last part of SWOT is the threats. From a marketing perspective, threats can be competitors who might find better ways to compete. Another type of threat is economic and cultural developments that might negatively impact future sales.  These are mostly external events that the company might not be prepared to handle.

There is a definite need to look not just at external threats, but also at internally developing threats.  One instance is the retirement of a key professional whose unique capabilities are behind some of the current strengths.  Another could be cash turning into a constraint when too-high long-term investments draw too much of the current financial assets.

TOC has, generally speaking, neglected the issue of threats, both external and internal.  The notion of an UDE is the closest signal that TOC might note, leading the user to draw the fuller cause-and-effect picture.  An UDE is defined as a well-known undesired effect. The missing part in the current TOC BOK is constantly monitoring for new emerging effects that have the potential of becoming most undesired, sometimes even disastrous.   I have already written a post about “Identifying the emergence of threats”.

SWOT in general encourages a detailed definition of market segments: those that enjoy the strengths and care less about the weaknesses. TOC has not, to my mind, fully developed a technique for clever market segmentation, where features, delivery service and the variety of the product mix are all used to define the clients that should get the best value, and by that define the targets.  It is not too difficult to develop such TOC-influenced tools.

The personal challenge of being a CEO


Clarification:  This post was written after several discussions on the topic with prominent people in the TOC field.  The main discussion was led by Ray Immelman.

Understanding how to manage organizations has to include the personal aspects of the one who is in charge of the organization – the managing director or CEO. While the undesired effects of the organization affect the CEO, we also have to consider the CEO as an individual with interests, wishes and also ego.

Given the wide spread of sizes of organizations, and the spectrum of personalities who are CEOs, can we have any idea of what it takes to be one?

Taking responsibility for the future of an organization, for its shareholders and employees, fits only people who have enough self-confidence to believe they can do it. Actually, every single CEO has to demonstrate self-confidence at all times, which requires a lot of self-control.

I believe many CEOs have doubts and fears they hide well behind the façade of someone who clearly knows what needs to be done.

The challenge of every CEO is to get hold of the basic complexity of the organization, its stakeholders, clients and suppliers. On top of the complexity there is considerable uncertainty.  The combination of complexity and uncertainty impacts the efforts of the CEO to answer two critical questions: “what to change?” and “what to change to?”  On top of dealing with complexity and uncertainty, every CEO also has to constantly resolve conflicts within the organization and between the CEO and the shareholders.  These conflicts produce obstacles to implementing the changes proposed by the answer to the second question, and by that raise the third basic question: “how to cause the change?”

The first key personal dilemma of every CEO derives from the difficulty of answering the three key questions, and from how the actual results are judged by the board, the shareholders and possibly stock-market analysts.  The seemingly unavoidable outcome is that the CEO fears that taking any risk, even when the possible damage is low, might be harshly criticized.  Considering the complexity and the variability, the level of pressure is so high that it pushes the CEO not to take even the limited risks required for potential growth.

This means that within the generic conflict of take-the-risk versus do-not-take-the-risk, the interest of the organization might be to take the risk, yet the CEO decides against taking it because of the potential personal damage.

When analysing the CEO’s conflict we also have to consider the risk of not taking risks. First of all, the shareholders expect better results, and the CEO, trying to resolve the conflict, has to promise certain improved results – and he’d be judged according to these expectations.  Actually, achieving phenomenal results might also be seen as too risky, creating too-high expectations in the stock market.  On top of that there are enough other threats to the organization, and failing to handle them would be detrimental to the CEO as well. Having to behave with the utmost care on almost every move adds to the potential opportunity for TOC, with its superior handling of uncertainty and risks.

The key TOC insight is that the combination of complexity and variability leads to inherent simplicity.  The essence of the simplicity is that actions whose impact is lower than the level of the noise (the variability) cannot be proven to have positive outcomes. This leads to focusing only on the more meaningful actions. The simplicity also supports judging the more daring actions, looking for those whose potential downsides are limited while the upsides are high. When you add the other TOC insights that reduce variability, improve operational flow, check the true financial impact through throughput economics, and apply powerful cause-and-effect analysis, the combination yields an overall safer environment.

Taking risks is not the only dilemma with a special impact on the personal side of the CEO. While the fear of being harshly criticized is pretty strong, the CEO wishes to get the glory of any big success.  The dilemma arises when the success requires the active participation, and considerable inspiration, of other people.  It is even more pronounced when the other people are external to the organization, like consultants.  Challenging existing paradigms, which is the core of the TOC power, might shine the light on the TOC consultants and rob the glory from the CEO, who has chosen to take the risk but might not fully enjoy the successful outcomes.

How do people react to someone who suddenly changes his own previous paradigms? Isn’t it easy, and even natural, to blame such a manager for not being able to change much earlier, or for being too easily influenced?

Actually, this dilemma seems tough to resolve in a way that achieves a win-win between the organization, the CEO and also the other executives. Emphasizing how wrong the old paradigms are makes the manager’s dilemma stronger.  People have a reason to refuse to admit mistakes: it harms their self-confidence and it radiates incompetence, which is probably a flawed impression but still a pretty common one.  Of course, the other side of the conflict is the potential damage of not admitting the previous mistakes.

In the old days of TOC we used the OPT Game and the Goldratt Simulators, which I developed, to push managers to admit they don’t know.  This was quite effective in creating the “wow impact”.  However, the humiliation the managers went through proved beneficial only to those with a very strong personality.  Too many managers paid some lip service to the proof of the flawed concept and continued in the old ways.

We expect a CEO to have that strong personality that allows recognizing a mistake and taking whatever steps are required to get back on the right track. We expect a CEO to act according to the full interests of the organization without considering his personal interests.  Very few truly live up to this challenge.  Many believe that the way to handle the “agent” problem is to pay high bonuses for success.  Actually, it only makes the personal-organizational conflict legitimate, and could easily influence CEOs to take bigger risks that increase the fragility of the organization.

It seems we need to help CEOs resolve both dilemmas. We have to promote the contribution of the CEO to the success, and we have to reduce the fears of the potential unknown outcomes, organizational and personal, of the change we believe would bring huge value, and that even in the worst case would still be better than the current state.

My own realization is to reduce the pressure on what is “wrong” and make much more of what is “better”, presenting it as an improvement rather than a revolution that discards everything people have learned in the past.

What does FOCUS mean?


The vast majority of managers believe that focusing is absolutely necessary for managing organizations.

If this is the case, then FOCUS as the key short description of TOC has very little to offer to managers.

Let’s consider the possible consequences of the Pareto Law. A typical company gets 80% of its revenues from 20% of its clients.  How about focusing on the 20% of big clients and dumping the other 80%?  Does it make sense?

The point is that the real question is not whether to focus or not, but:

What to focus on?

And, even more importantly:

What NOT to focus on?

The reason for emphasizing what NOT to focus on is that the need to focus is caused by limited capacity, which makes it impossible to focus on everything and draw the full value from it.  The limitation could be a capacity constraint that forces us to concentrate on what exploits that resource.  Empowering subordinates is a means for an individual manager to focus on the key issues without becoming the constraint of the organization.  In many cases the critical limitation is the inability of the management team to multi-task in a way that would not delay truly important issues that require action.  This is what is meant by management attention being the ultimate constraint on designing the future of the organization.

Giving up part of the market demand makes sense only when more overall sales, more total throughput, can be materialized as a result.  Only in very rare cases is it possible to reduce the OE, after giving up certain demand, to the level where T-OE is improved.  Dropping 80% of the clients, the smaller clients that yield only 20% of the turnover, would almost never reduce the OE by enough to compensate for the lost T; the achievable OE saving is far smaller.  This is due to the non-linearity of the OE: reducing the capacity requirements does NOT yield an OE reduction at the same rate.  Just think about the space the organization holds, whether the reduced number of clients would allow using less space, and, even when that is possible, whether any OE can actually be saved by it.
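A tiny illustration, with made-up numbers chosen only to show the asymmetry between the lost T and the achievable OE saving:

```python
# Hypothetical company: dropping the clients that bring 20% of revenue loses
# roughly 20% of T, while the OE saved is far smaller, because OE does not
# shrink linearly with the reduced load (space, overhead, skilled staff).

total_t = 1_000_000                 # yearly throughput, hypothetical
oe = 900_000                        # yearly operating expenses, hypothetical

lost_t = 0.20 * total_t             # T of the many small clients: 200,000
oe_saved = 0.05 * oe                # a realistic saving: only 45,000

profit_before = total_t - oe                          # 100,000
profit_after = (total_t - lost_t) - (oe - oe_saved)   # -55,000: a loss!
print(profit_before, profit_after)
```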

FOCUS should NOT be interpreted as focusing on just one specific area. It has to be linked to an estimation of what the available capacity can effectively focus on without considerably delaying the other areas and topics.  And remember the following Goldratt insight:

Segment your Market – Do Not Segment your Resources!

The idea is that many of our resources are able to serve a variety of different products and services, targeted at different segments. This is a very beneficial capability.  Effective focusing should exploit the weakest link based on the limiting capacity.  In most cases the exploitation encourages serving several market segments, but not too many.

The question of what to focus on goes into all the different parts of TOC, always looking first for the limitation, from which the answer is derived.  In DBR it is the constraint.  In Buffer Management the question gets a small twist: “what should we do NOW, otherwise the subordination might fail?” The Current-Reality-Tree defines the core problem of the organization, which is the first focus for designing the future. CCPM focuses on the Critical Chain rather than on the Critical Path, pointing also to multi-tasking as a lack of focus that causes huge damage.  The key concept in the Strategy and Tactic (S&T) tree is the Decisive-Competitive-Edge (DCE), which again points to where the focus should be.  The DCE is actually based on an identified limitation of the client, which the organization has the capability of removing or reducing.  Building a DCE is a huge challenge that adds considerable load to all managers and professionals, so it makes sense to avoid more than one DCE at a time.

Goldratt brilliantly used a slang word in Hebrew, actually taken from Russian, “choopchik”, describing an issue with very low impact on performance. The whole point is that choopchiks do have a certain positive impact, which makes them tempting to tackle, but tackling them causes a huge loss by not doing the vastly more important missions.  I look at choopchiks as a key concept of TOC, directly derived from the search for the right focus.

The TOC notion of focus is also relevant for recognizing the impact of uncertainty on management. Choopchiks impact the performance of the organization by less than the normal existing variability; call it the level of the “noise”.  With such a small impact you cannot know whether there has been any actual benefit in the real case.  Worthy missions have an impact that is considerably bigger than the noise.

What to focus on is the key to achieving better and better performance. The elements involved are the diagnosis of the current state, the few variables that dictate the global performance, and what could go wrong.   Mistakes in what to focus on are common, and they are main causes of the death of organizations, and of so many being in survival mode.