A concise history of constraints


A recent discussion about the appropriate TOC definition of ‘constraint’ leads me to state some historical facts that highlight the development of Goldratt’s approach to constraints.

Prior to the Theory of Constraints (TOC) the breakthrough idea was to distinguish between bottlenecks and non-bottlenecks.  The definition of a bottleneck was simple: “The load placed on the resource is more than what the resource is able to do.” Thus, a bottleneck is always a resource.

The term ‘constraint’ was defined as “Anything that limits the system versus its goal”.  It was conceived to answer three significant limitations of the term ‘bottleneck’.

1. When all resources have enough capacity to process all the demand, there is no bottleneck. However, the system is able to do more than the current demand.  Thus, looking at the market demand as a ‘constraint’ is quite valuable.  It allowed managers to understand that there is no excuse not to ship everything on time.

2. Being a bottleneck does not ensure being a constraint. There might be another bottleneck with even more load.

3. We might have a true capacity-constrained resource (CCR) that is not a bottleneck.  While on average it has idle time, at other times the queue behind the CCR grows so long that some potential demand is lost.

It was realized from the start that the constraint limits the throughput (T) of the organization.  Goldratt even played with the idea of introducing the term ‘inventory constraints’, referring to trouble-makers that force the organization to maintain more WIP.  He backed off from this term to keep the simplicity.

The real power of the term ‘constraint’ came through the paradigm that an organization cannot have many constraints.  Dependencies coupled with statistical fluctuations do not allow interactive constraints in the chain. This realization led to the conclusion that the shop floor can handle only one constraint without creating chaos.  In 1989 Goldratt wrote The Haystack Syndrome and presented a rather complicated algorithm to handle multiple constraints.  The whole development of the ideas was set around capacity constraints.  The chain analogy, where there has to be one, and only one, weakest link, was widely used. Thus, the default for a constraint was lack of enough capacity of a resource.

Limited capabilities, like being unable to produce top quality products, were not considered constraints.  Limited capabilities are less exposed to statistical fluctuations.

The wide definition of the term constraint did cause problems.  People used to say that the constraint lies between the eyes of the CEO.  Flawed policies, especially policies concerning efficiency, were called ‘policy constraints’.  So, the idea was that the system is limited by a capacity constraint, and failing to exploit it is due to policy constraints.

The full set of the TP (Thinking Processes) was developed in 1990. Effect-cause-effect trees and the cloud existed before (even before the Five Focusing Steps), but not the other tools we know today.  The definition of the CRT (Current Reality Tree) raised the notion of the core problem – the conflict (cloud) that causes all the undesired effects.  Resolving the conflict by challenging a basic assumption behind it would push the organization to a new level of performance.

So, is the core-problem the real constraint?

It remained an open question for a while.  Core problems touched upon local versus holistic thinking, but also on behavioral patterns, and opened the door to re-evaluating the value the organization brings to the market.  The core problem could also challenge the paradigm of exploiting a CCR rather than immediately elevating it.

Fact is: we did not ask ourselves these questions in the 80s.

Goldratt publicly regretted calling flawed policies “policy constraints” sometime in the 90s, explaining that policies should be eliminated and not exploited and subordinated to.

A major development in TOC thinking came around 2003 with the idea of the Viable Vision.  Suddenly the way to improve an organization did not come through elevation of a capacity constraint, and not even through challenging the conflict behind a policy constraint.  With the term “decisive competitive edge”, TOC thinking recognized the need to challenge the value the organization offers to its customers.  The core idea was to answer a need of the customer in a way no competitor can.

Explaining why the VV did not care what the constraint is, Goldratt spoke about two different kinds of change.  One is a minus-minus change – you identify something that is not right (a minus) and you change it (the minus of the minus).  The other is a plus-plus change, where you take a big step towards the “pot-of-gold”.  When such a step is taken, one needs to carefully re-think all the conditions that would be sufficient to bring the organization to growth along the “red curve”.  Lack of capacity of a specific resource becomes a triviality that needs to be eliminated.  Many other potential constraints would be elevated long before they become constraints.

Food for thought?


Mutual decision-making process

Part 3 of a series on using T, I and OE for key decision making


In the last post I showed the need for inputs based on intuition for making sound decisions.  Thus, any structured decision-making process absolutely requires the involvement of the people with the best relevant intuition.

This is not enough.  There is still a need to check the wider ramifications of the decision at hand, considering the various intuitive inputs.  This check has to be based on logic, serving both as a control mechanism over the intuition and as a way to look at the bigger picture.

There is a known managerial practice where the top manager calls his people to a meeting, lays down a decision to consider and asks every one of the participants to voice their view, one at a time.  In the end the top manager states HIS opinion, and this is the decision to be acted upon.

While that practice ensures everyone has an opportunity to present his/her view and intuition, exposing the top guy to the inputs, it lacks a critical element: logical analysis of the full ramifications of every alternative!

Some of the frequent, but very basic, decisions every company has to make are about its product-mix and capacity.  Suppose the following decision is now considered:

Currently the company sells two different chocolate packages containing the same basic product. The idea is to sell a much larger package at a reduced price per unit of product.

The intuition of the sales people is required for the following inputs:

  • What might be the pessimistic and optimistic sales of the new package?
  • By how much would the sales of the other two packages be impacted?
    • We can be reasonably certain the sales of the other packages will be reduced – but by how much?
  • Would other products, somehow similar to the above product, face reduced sales?

Given the above intuition and simple calculations, the impact on the total T can be derived – according to both the pessimistic and the optimistic estimates.
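
To make this concrete, here is a minimal sketch of the calculation in Python. All the numbers – the throughput per unit of each package and the sales estimates – are hypothetical placeholders for the intuition-based inputs described above.

```python
# A minimal sketch of the delta-T calculation for the new package idea.
# All numbers are hypothetical, standing in for the sales people's intuition.

# Throughput per unit = price minus truly-variable costs (mainly materials).
t_per_unit = {"small": 2.0, "large": 3.5, "jumbo": 9.0}   # "jumbo" = the new package

# Units per month: positive for the new package, negative for the expected
# reduction in the two existing packages, per scenario.
scenarios = {
    "pessimistic": {"jumbo": 1_000, "small": -800,   "large": -500},
    "optimistic":  {"jumbo": 4_000, "small": -1_500, "large": -900},
}

for name, delta_units in scenarios.items():
    delta_t = sum(t_per_unit[p] * units for p, units in delta_units.items())
    print(f"{name}: delta-T = {delta_t:+,.0f} per month")
```

Once the intuitive estimates are given, deriving the change in total T for both scenarios is trivial arithmetic; the hard part is the quality of the estimates.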

One more issue needs to be resolved:

Do we have enough capacity to sustain the possible increase in sales, especially according to the optimistic assessment?

Lacking capacity on just one resource is enough to invalidate the above T and OE calculations.  We also need to understand that by “lack of capacity” we include the case where on average we do have enough capacity, but lack capacity at specific points in time, causing delays to the market.  We call “protective capacity” the amount of excess capacity that is absolutely necessary for keeping the delivery performance in a “good-enough” state.  When the protective capacity is penetrated there is damage.

How much protective capacity is required?

Eventually we need the intuition of the key people in Operations to assess the answer.  There is no TOC formula determining the right amount of protective capacity.

Calculations can easily depict the load on critical resources generated by the assessed demand.

If there is enough capacity then the calculated total T, with and without the new package, is all the support management truly needs.

If one or more resources have lost their protective capacity, then the management team has to consider quick ways to increase capacity, or find products whose sales could be reduced (maybe by increasing their price).  Again we need the intuition of Sales and Operations to make sure the solution is doable.
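
As an illustration, here is a minimal sketch of the load-versus-protective-capacity check. The resources, capacity figures and protective-capacity percentages are all hypothetical – remember that the right amount of protective capacity is a matter of Operations’ intuition, not a formula.

```python
# A rough sketch of the capacity check under the optimistic demand scenario.
# All figures are hypothetical illustrations.

available_hours  = {"packing_line": 160.0, "mixing": 180.0}   # hours per month
protective_share = {"packing_line": 0.15,  "mixing": 0.10}    # assumed protective capacity

# Hours required per 1,000 units of each package on each resource (assumed).
hours_per_k_units = {
    "packing_line": {"small": 1.0, "large": 1.4, "jumbo": 2.5},
    "mixing":       {"small": 0.8, "large": 1.2, "jumbo": 2.8},
}

optimistic_k_units = {"small": 8.5, "large": 5.1, "jumbo": 4.0}  # thousands of units

for resource, capacity in available_hours.items():
    load = sum(hours_per_k_units[resource][p] * q for p, q in optimistic_k_units.items())
    usable = capacity * (1 - protective_share[resource])
    status = "OK" if load <= usable else "PROTECTIVE CAPACITY PENETRATED"
    print(f"{resource}: load {load:.1f}h vs usable {usable:.1f}h -> {status}")
```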

What might happen in the decision making is that while the optimistic assessment brings a very nice addition to the profit, the pessimistic scenario shows a loss. We expect that if making a much higher profit is more likely than suffering a relatively small loss, then accepting the new idea is the right decision. However, one more point needs to be checked.  Small losses might accumulate to the point where they endanger the organization.  The current state of the cash flow, plus the intuition of the finance guy, should be part of the mutual decision process.

A mutual decision-making process is a managerial must. Such a process has to use the intuition of key people as legitimate and necessary inputs.  Then data processing and logical analysis lead the management team to sound decisions.

Common sense – combining intuition and logic

We all know that common sense is not common at all, especially within organizations that have the ‘optimization’ culture.  Common sense tells us that reaching optimization is an illusion – one that drives damaging behaviors and keeps us far away from even a good-enough state.

What is the common sense way to assess the worthiness of a new idea?

The first common sense question is:  what information is required to assess the idea?


A reminder: Goldratt defined information as “an answer to a question asked”. In a way it means that there are some things we need to know.  So, when we have to make a decision there are several inputs we look for – and these are the necessary information items.

Example:

On behalf of your organization you look for a vendor for office supplies.  You talk with the representative of a large office-supplies company and also with the enthusiastic owner of a new office-supply business.  The large company’s rep is a tired and not-too-bright fellow who just recites the standard sales pitch.  The owner of the new business is definitely brilliant, and your intuition tells you he is going to be successful.

What information – answers to questions – do you have to look for?

  1. From whom are you going to get the better overall deal?
  2. From whom are you going to get the better overall service, especially a better response to any urgent request?

The first question gets a precise numerical answer.  Suppose the new business offers a price that is 4% cheaper for the first 6 months.  After that the prices would be the same. Let’s also lay down the data that the total expense on office supplies in your organization comes to 0.94% of the turnover.
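
It is worth doing the trivial arithmetic to see how small the stake actually is (the 4% discount and the 0.94% share of turnover are the figures given above):

```python
# How much is the 4% discount actually worth, as a share of annual turnover?
supplies_share = 0.0094   # office supplies = 0.94% of turnover
discount = 0.04           # 4% cheaper
months = 6                # the discount applies only to the first 6 months

saving = supplies_share * discount * (months / 12)
print(f"saving = {saving:.4%} of annual turnover")   # about 0.019%
```

About 0.02% of annual turnover – a truly marginal amount, which is worth keeping in mind when we weigh the saving against the service risk below.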

The second question has to use intuition, as any question about the future has no precise number, and I doubt whether you would find any valid statistical model for this specific question.

Your intuition, based roughly on your life experience plus some emotions and biases, tells you the new business is going to give much better service.  You don’t expect a large business to do “favors”, but a new business with a wish for future growth is more open to responding to special requests.

Decisions are certainly impacted by emotions, and in this case the emotion and the intuition go hand in hand in favor of the new business.

So, is the decision obvious? 

Here comes the important role of applying logic as a critical control mechanism and a means to look at the bigger picture.

What might be the damage from failing to serve at the required level and is it likely to happen?

Large suppliers might miss items here and there.  However, we do expect them to fix those in a day or two.  A new business might face more difficulties, especially when it tries to grow too fast, exhausting its resources and possibly suffering from poor cash flow. It could also suffer from less expertise in the area.

Let’s now ask: what is the size of the damage, and to whom?

Most organizations do not suffer too much from an incidental lack of office supplies.  However, it creates a hassle, and when there is a hassle there is a person who is held responsible for it. So, while the real impact on the performance of the organization is relatively low, from the standpoint of the well-being of the decision maker the common-sense decision is to take the safer alternative, which is the larger supplier!

The need for safety, in this case, is usually stronger than the very small impact on the cost. Well, this is my intuition even for organizations that are in the Cost World.

My general observation

Decisions involve emotions, intuition and logical analysis.  To my mind, emotions have a negative impact on organizational decisions.  Intuition is critically necessary for the main information inputs.  The final decision has to look at the bigger picture and consider the ramifications of the inputs on its other aspects, and for that you need logical analysis.

Is it really an opportunity?

Part 2 of a series on using T, I and OE for key decision making

Opportunities present themselves in various ways.  Only seldom do we see an opportunity so good that there is no point asking any more questions.  Most of what looks like a potential opportunity comes with an embedded doubt: is it really an opportunity or a trap?


A typical managerial conflict arises when Sales proposes a promotion, offering several products at a certain price reduction.  Sales managers believe this would significantly enhance the sales of those products next month, and this belief is backed up by past experience.

A promotion creates huge pressure on the shop floor, reduces the sales of other products, and mainly reduces sales for a certain period after the promotion is over.  Yet, sometimes the extra revenues (minus the variable costs) generated, especially by selling to new clients and gaining their future purchases, more than compensate for the damage.

  • How can we truly check the net financial impact of a promotion?
  • How can we check the financial impact of penetrating into another market segment?
  • How can we check the financial impact of launching a series of new products?
  • How can we check the financial impact of purchasing a new production line as an elevation of our current capacity constraint?

We are aware that cost-per-unit is not the right tool to support sound decisions. So, how should we make such decisions?

The most straightforward way is to assess the financial impact of the decision at hand on the bottom line, without relying on some funny ‘per-unit’ fabricated measures.  It looks like quite a difficult objective due to the complexity of the various expenses.  However, when we look at the decision as an optional addition to the current level of sales, we can see two clear factors that simplify the situation:

  1. The change in the incoming flow of money: the revenues from the change in sales, both the additional sales and the possible loss of other sales, minus the truly variable costs of those sales.  This is what we call Throughput (T).
  2. The change in the outgoing flow of money (all the other operating expenses, called OE). Note: those additional expenses are all due to the required changes in the available capacity!  This insight was revealed in the previous posts about the behavior of the cost of capacity.

What we get is:   ΔP = ΔT − ΔOE

ΔP is the change in net profit before tax.  For the decision at hand we’d like to know whether ΔP is positive or negative.

What information do we need in order to get a good estimate from the above equation?

One obvious problem is the impact of uncertainty, which includes everything we don’t know at the time of the decision.  We should come back to this issue in later posts.

Following the general direction described so far, the first step has to be defining the current state of the organization, as we want to evaluate the difference between the state with the additional decision and the state without it.

There are two main categories of information describing the current state:

  1. The current sales: the items sold, their respective quantities, prices and truly variable costs (mainly the cost of materials).  We can then calculate the throughput (T) generated per item and the resulting total T – the flow of incoming money.
  2. The available capacity and the load generated by the current sales.
    • In order to calculate the load we need to know how much capacity, for every resource, is required for every product sold!

Then we need the following categories of information for every new opportunity / deal / idea:

  1. The new sales / T to be generated by this idea, including the longer-term impact.
  2. The impact of the new sales on the current sales – would some current sales be reduced?
  3. The updated load versus capacity – do one or more resources lack enough available capacity?
  4. When one or more resources lack capacity what special options are open?
    • Purchasing additional capacity for extra cost (how much?)
    • Reducing some sales, provided it can practically be done without tampering with other sales!
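
Putting the categories together, here is a minimal sketch of the whole ΔP check for a single opportunity. The function and all the figures are hypothetical illustrations; in practice every input comes from the intuition-based estimates listed above.

```python
# A sketch of the delta-P check: delta_P = delta_T - delta_OE.

def evaluate_opportunity(new_t: float, lost_t: float, extra_capacity_cost: float) -> float:
    """new_t: T generated by the new sales (including longer-term impact);
    lost_t: T lost on current sales the idea cannibalizes;
    extra_capacity_cost: delta-OE, the cost of additional capacity (if needed)."""
    return (new_t - lost_t) - extra_capacity_cost

# Hypothetical promotion: 120k of new T, 30k of cannibalized T,
# 25k of overtime/temporary capacity to stay out of the protective capacity.
delta_p = evaluate_opportunity(new_t=120_000, lost_t=30_000, extra_capacity_cost=25_000)
print(f"delta-P = {delta_p:+,.0f}")   # +65,000 -> worth doing, by these estimates
```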

Critical questions for advancing ahead:

  • Is it possible to gather all the above information?
  • How far into the future do we need to look in order to make a decision?
  • How can we handle many different opportunities for the same time frame?
  • How do we consider the impact of uncertainty?
  • What is the structured process to make a sound decision?
  • Is it too complex? If so, can we simplify it without distorting the decision process?

To allocate or not to allocate – is this the question?

An intermediate post to clarify a point

Part 1.5 in the series on using T, I and OE for key decision making

The previous post argued that the cost of capacity is not linear, and it is also not continuous.  I did not deal with the fact that in too many cases there is no direct way to relate specific capacity consumption to specific products.  The remedy of cost accounting, in its effort to calculate the cost of a product unit, is to allocate the cost of capacity that is not directly related to a product unit based on some arbitrary parameter, like direct labor.

[Photo: Camp Nou stadium in Barcelona – the highest-capacity soccer stadium in Europe]

Should we allocate the cost of the arena based on tickets or on the result of the match?

Activity-Based Costing (ABC) challenges the older methods on that point.  ABC tries hard to relate every consumption of capacity to its “cost driver”, which could be a product unit, but also a new client or even an order.  Anything wrong with that?

The real mistake of ABC, and of all the other cost accounting methods, is to associate the average cost of the specific capacity consumption with those cost drivers.  The non-linear behavior of the cost of capacity causes a huge distortion in the ABC management information.  It gives the wrong impression that certain cost drivers are too costly when there is a lot of excess capacity, while other cost drivers look good, concealing the fact that they use capacity that is truly limited (and purchasing more is truly expensive), thus leading to flawed business decisions.
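
A toy example may sharpen the point. In the sketch below (all numbers assumed), an extra order consumes only idle machine time, so the true change in OE is zero; ABC nevertheless charges the order the average machine rate, and the decision flips:

```python
# An extra order for 1,000 units, processed entirely on *idle* capacity.
units = 1_000
price, materials = 50.0, 20.0          # per unit
machine_hours_needed = 100
abc_rate_per_hour = 400.0              # average cost of the machine per hour

# ABC view: the allocated machine cost makes the order look like a loss.
abc_view = units * (price - materials) - machine_hours_needed * abc_rate_per_hour
print(f"ABC view:        {abc_view:+,.0f}")   # -10,000 -> reject

# Throughput view: the machine is already paid for, so delta-OE = 0.
toc_view = units * (price - materials)
print(f"Throughput view: {toc_view:+,.0f}")   # +30,000 -> accept
```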

Of course, in order to convince organizations to stop assuming that every time capacity is consumed a certain cost is generated, we need to establish an alternative way to make sound decisions.  We need a good method to check whether a new opportunity/idea would improve the bottom line or not.  We would also like a good method to decide whether purchasing more capacity is profitable, or whether we had better give up some of the available capacity.  I promise to arrive at the solution in later posts.

Sometimes we need to allocate certain costs even when we use the TOC logic!

For instance, suppose your company has partnered with another company in leasing a whole floor of offices.  The reason: the owner of the floor refused to lease only part of it.  That space is a resource, and the total space is the limit of the available capacity of the space resource.  Any agreement between you and the other company on splitting the cost of the rent, and probably also of some other capacities you use (cleaning, communication lines), is basically arbitrary and based on some allocation of the space (the lobby and the lifts are certainly shared).

I’m going to raise more cases where allocation is a good-enough solution when direct calculations are not possible.

The Non-Linear Behavior of the Cost of Capacity

And its impact on decision making

Part 1 of a series on using T, I and OE for key decision making


Challenging widely accepted paradigms creates new opportunities

The terminology of Physics does not usually use words of dramatic intensity.  However, a certain embarrassment in the late 19th century was so acute that it was called “the ultraviolet catastrophe”, and by that it caught my imagination.  The story is about the radiation emitted from a black body: the mathematical equations, according to the knowledge of that time, predicted that the emitted radiation should be infinite.  Well, it was easy to see that this is NOT the case.  What eventually solved the riddle was the discovery, understood through Quantum Theory, that the energy of the emitted radiation is not continuous but comes in discrete quanta.  As it turns out, discrete functions behave very differently from continuous functions.

There is a tendency in social-science circles to assume that the main functions describing the behavior of key variables, like capacity or the cost of capacity, are continuous.

Really???

I claim that all cost functions in reality are discrete.  This is most certainly true when we speak about the cost of capacity.

All organizations spend their overhead expenses on providing the capacity required for the business.  The usual way is to purchase a certain fixed amount of capacity, like space for storage or offices, a machine capable of processing a certain quantity per hour, or employees who agree to work N hours every week.

The cost of providing that capacity is fixed whether you actually use all that capacity or only part of it.

This means that using 25% of the available fixed amount of capacity, or using 85% of it, costs exactly the same!  This is a basic non-linear behavior, and its impact on the decision of what to do with the capacity at hand is HUGE.

Once all the available capacity is used, new options of using additional capacity open up.

But the principle of being able to purchase capacity only in certain fixed sizes still holds.

An employee might agree to work another hour, but usually not part of an hour.  So, if you need just 34 minutes of overtime, the cost is a full hour of overtime, which is also considerably more expensive than the relative cost of a regular hour.

So, when we look at the behavior of the cost of capacity we realize the following:

The initial cost is HIGH.  Then the cost of additional usage is zero (0) until a certain load is reached. Then the cost jumps by another fixed amount. Using more capacity again costs zero until the next fixed point.

This actual behavior is quite different from the current practice of associating an average cost with any use of capacity.
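
For illustration, the step-like behavior can be written down directly. In this sketch (assumed figures) a fixed monthly salary buys a block of 160 hours, and beyond that overtime is sold only in whole hours at a premium:

```python
import math

REGULAR_HOURS = 160            # hours bought by the fixed monthly salary
REGULAR_COST = 4_800.0         # paid whether the hours are used or not
OVERTIME_RATE = 45.0           # per whole overtime hour (vs. 4800/160 = 30 regular)

def cost_of_capacity(hours_used: float) -> float:
    """Stepwise cost: flat up to the fixed block, then jumps per whole overtime hour."""
    if hours_used <= REGULAR_HOURS:
        return REGULAR_COST                           # 25% or 85% usage: same cost
    overtime = math.ceil(hours_used - REGULAR_HOURS)  # 34 minutes still costs a full hour
    return REGULAR_COST + overtime * OVERTIME_RATE

for h in (40, 136, 160, 160.5, 170):
    print(f"{h:>6} hours -> cost {cost_of_capacity(h):,.1f}")
```

Note that 40 hours and 160 hours cost exactly the same, while one extra minute beyond 160 triggers a full overtime hour – nothing like the smooth average-cost line that cost accounting assumes.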

This is the kernel of the TOC challenge to cost accounting!

So, the simple principle of cost accounting is invalid in our reality.  This use of the average cost of capacity has led all the way to the fiction of cost-per-unit.

Do we really need “per unit” measures to support sales decisions?

We still believe in simplicity, but reject the wrong simplicity.  What could be simpler than having a way to measure the direct impact of a decision on the bottom line?

Let’s now look at another realization:

There is no hope in hell of using all the available capacity!

This is certainly in direct clash with the common paradigms.

There are three causes for being unable to use all the available capacity to generate value:

  1. TOC has demonstrated the need for protective capacity to provide good and reliable delivery performance.
  2. The market demand fluctuates at a faster pace than our ability to adjust the available capacity.
  3. Capacity is purchased only in certain sizes, as already stated above.

What are the ramifications for decision making?

When a new market opportunity pops up we need to consider the state of the capacity usage of every resource.  When there is enough excess capacity the usage is FREE!  When the additional load penetrates into the protective capacity, there is a need to carefully check the cost of additional capacity, or the ramifications of giving up some existing sales.

This is a very different generic approach from that of the existing management accounting tools!

The next post will explain more about how to calculate the impact of an opportunity on the bottom line, without using any “per-unit” kind of measure that would force us to use averages and get a distorted answer.

The Thinking Processes (TP) and uncertainty

Have a quick look at the small cause and effect branch.  Is the logic sound?

[Figure: a small cause-and-effect branch of four effects concerning the sales of Product P1]

 Can it be that in reality effects 1-3 are valid, but effect 4 is not?

We can come up with various explanations of insufficiency in the above logic.  For instance, if the clients are not free to make their own decisions, as in totalitarian countries, then it could be that the regime prefers something else.  Another explanation might be that the brand name of Product P1 is much less known.

The generic point is: the vast majority of the practical cause and effect connections are not 100% valid.

In other words, most good logical branches are valid only statistically, because they might be impacted by uncertain additional effects that distort the main cause-and-effect.  Actually, the uncertainty represents insufficiencies we are not aware of, or ones we know about but cannot confirm whether they exist in our reality.  For all practical purposes there is no difference between uncertainty and information we are not able to get.

This recognition has ramifications.  Suppose we have a series of logical arrows:

eff1 → eff2 → eff3 → eff4 → eff5

If every arrow is 90% valid (it holds in 90% of the cases), then the combined arrow from eff1 to eff5 is only about 66% valid (0.9⁴ ≈ 0.656).
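
The arithmetic is simple compounding; a tiny sketch makes it easy to play with longer chains or weaker arrows (the 90% figure is just the assumption above):

```python
validity_per_arrow = 0.9
arrows = 4    # eff1 -> eff2 -> eff3 -> eff4 -> eff5
print(f"chain validity: {validity_per_arrow ** arrows:.1%}")   # 65.6%
```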

The point is that while we should use cause-and-effect, because it is much better than ignoring it, we can never be sure we know!  The real negative branch of using the TP to outline various potential impacts is that frustrated people could blame the TP and its logic and refrain from using it in the future.  This false logic says:  if ([I behave according to the TP branch] → [Sometimes I do not get the expected effect]) then [I stop using the TP].

The way to deal with this serious and damaging negative branch is to institute the role of uncertainty in our lives, and the idea that partial information is still better than no information – provided we take the limitations of being partial seriously.  We can never be sure that whatever we do will bring benefits.  However, when we use good logic, then most of the time the benefits will far exceed the total damage.

It’d be even better to consider the possibility of something going wrong at every step we take.  This would guide us to check the results and re-check the logic when the result differs from what we expected.  It is always possible that there is a flaw in our logic, and in such a case we had better fix the flawed part and gain a better logical understanding of the cause-and-effect.  When we do not see any flaw in our logic – there is still room for some crazy insufficiency to mess up our lives, and this is the price we pay for living with uncertainty.