A concise history of constraints

A recent discussion about the appropriate TOC definition of ‘constraint’ led me to state some historical facts that highlight the development of Goldratt’s approach to constraints.

Prior to the Theory of Constraints (TOC) the breakthrough idea was to distinguish between bottlenecks and non-bottlenecks.  The definition of a bottleneck was simple: “The load placed on the resource is more than what the resource is able to do.” Thus, a bottleneck is always a resource.

The term ‘constraint’ was defined as “anything that limits the system versus its goal”.  It was conceived to answer three significant limitations of the term ‘bottleneck’.

1. When all resources have enough capacity to process all the demand, there is no bottleneck; yet the system could still do more.  Thus, looking at the market demand as a ‘constraint’ is quite valuable.  It allowed managers to understand that there is no excuse for failing to ship everything on time.

2. Being a bottleneck does not ensure being a constraint. There might be another bottleneck with even more load.

3. We might have a true capacity-constrained resource (CCR) that is not a bottleneck.  While on average there is idle time, at other times the queue behind the CCR is so long that some potential demand is lost.

It was realized from the start that the constraint limits the throughput (T) of the organization.  Goldratt even played with the idea of introducing the term ‘inventory constraint’, referring to trouble-makers that force the organization to maintain more WIP.  He backed off from the term to preserve simplicity.

The real power of the term ‘constraint’ came through the paradigm that an organization cannot have many constraints.  Dependencies, coupled with statistical fluctuations, do not allow interactive constraints in the chain.  This realization led to the conclusion that the shop floor can handle only one constraint without creating chaos.  In 1989 Goldratt wrote The Haystack Syndrome, in which he presented a rather complicated algorithm for handling multiple constraints.  The whole development of the ideas was set around capacity constraints.  The chain analogy, where there has to be one, and only one, weakest link, was widely used. Thus, the default for a constraint was a resource’s lack of sufficient capacity.

Limited capabilities, like being unable to produce top-quality products, were not considered constraints, partly because limited capabilities are less exposed to statistical fluctuations.

The wide definition of the term constraint did cause problems.  People used to say that the constraint lies between the eyes of the CEO.  Flawed policies, especially policies concerning efficiency, were called ‘policy constraints’.  So, the idea was that the system is limited by a capacity constraint, and failing to exploit it is due to policy constraints.

The full set of the Thinking Processes (TP) was developed in 1990. Effect-cause-effect trees and the cloud existed before (even before the Five Focusing Steps), but not the other tools we know today.  The definition of the Current Reality Tree (CRT) raised the notion of the core problem: the conflict (cloud) that causes all the undesirable effects.  Resolving the conflict by challenging a basic assumption behind it would push the organization to a new level of performance.

So, is the core-problem the real constraint?

It remained an open question for a while.  Core problems touched upon local versus holistic thinking, but also on behavioral patterns, and opened the door for re-evaluating the value the organization brings to the market.  The core problem could also challenge the paradigm of exploiting a CCR rather than immediately elevating it.

Fact is: we did not ask ourselves these questions in the 80s.

Goldratt publicly regretted calling flawed policies “policy constraints” sometime in the 90s, explaining that flawed policies should simply be eliminated, not exploited and subordinated to.

A major development in TOC thinking came around 2003 with the idea of the Viable Vision.  Suddenly the way to improve an organization came neither through elevating a capacity constraint nor through challenging the conflict behind a policy constraint.  With the term “decisive competitive edge”, TOC thinking recognized the need to challenge the value the organization offers its customers.  The core idea was to answer a need of the customer in a way no competitor can.

Explaining why the Viable Vision did not care what the constraint is, Goldratt spoke about two different kinds of change.  One is ‘minus-minus’: you identify something that is not right (a minus) and you remove it (the minus of the minus).  The other is ‘plus-plus’: you take a big step towards the “pot of gold”.  When such a step is taken, one needs to carefully re-think all the conditions that would be sufficient to bring the organization to growth along the “red curve”.  Lack of capacity of a specific resource becomes a triviality that needs to be eliminated; many other potential constraints would be elevated long before they become constraints.

Food for thought?

Mutual decision-making process

Part 3 of a series on using T, I and OE for key decision making

In the last post I showed the need for inputs based on intuition for making sound decisions.  Thus, any structured decision making absolutely requires the involvement of the people with the best relevant intuition.

This is not enough.  There is still a need to check the wider ramifications of the decision at hand, considering the various intuitive inputs.  This check has to be based on logic, serving both as an intuition-control mechanism and as a way to look at the bigger picture.

There is a known managerial practice in which the top manager calls his people to a meeting, lays out a decision to consider and asks each of the participants to voice their view one at a time.  In the end the top manager states HIS opinion, and this is the decision to be acted upon.

While that practice ensures everyone has an opportunity to present his or her view and intuition, exposing the top manager to the inputs, it lacks a critical element: logical analysis of the full ramifications of every alternative!

Some of the frequent, but very basic, decisions every company has to make are about its product-mix and capacity.  Suppose the following decision is now considered:

Currently the company sells two different chocolate packages containing the same basic product. The idea is to sell a much larger package at a reduced price per unit of product.

The intuition of the sales people is required for the following inputs:

  • What might be the pessimistic and optimistic sales of the new package?
  • By how much would the sales of the other two packages be impacted?
    • We can be reasonably certain the sales of the other packages will be reduced – but by how much?
  • Would other products, somehow similar to the above product, face reduced sales?

Given the above intuition and simple calculations, the impact on the total T can be derived, according to both the pessimistic and the optimistic estimations.
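A minimal sketch of that calculation. All the quantities and T-per-unit figures below are invented for illustration, not taken from the actual case:

```python
# Hypothetical sketch: impact of the new large package on total Throughput (T).
# T per unit = price minus truly variable costs; all numbers are invented.

def delta_t(new_units, t_new, lost_small, t_small, lost_medium, t_medium):
    """Change in total T: gain from the new package minus cannibalized sales."""
    return new_units * t_new - lost_small * t_small - lost_medium * t_medium

# Pessimistic: few new sales, heavy cannibalization of the two existing packages.
pessimistic = delta_t(new_units=1000, t_new=12.0,
                      lost_small=1500, t_small=3.0,
                      lost_medium=800, t_medium=6.0)
# Optimistic: strong new sales, milder cannibalization.
optimistic = delta_t(new_units=3000, t_new=12.0,
                     lost_small=900, t_small=3.0,
                     lost_medium=400, t_medium=6.0)
print(pessimistic, optimistic)
```

Running both scenarios side by side is exactly what lets management see whether even the pessimistic case still adds T.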

One more issue needs to be resolved:

Do we have enough capacity to sustain the possible increase in sales, especially according to the optimistic assessment?

Lacking capacity on just one resource is enough to invalidate the above T and OE calculations.  We also need to understand that “lack of capacity” includes the case where on average we do have enough capacity, but lack it at specific points in time, causing delays to the market.  We call “protective capacity” the amount of excess capacity that is absolutely necessary for keeping delivery performance in a “good enough” state.  When the protective capacity is penetrated there is damage.

How much protective capacity is required?

Eventually we need the intuition of the key people in Operations to assess the answer.  There is no TOC formula determining the right amount of protective capacity.

Calculations can easily depict the load on critical resources generated by the assessment of the demand.

If there is enough capacity then the calculated total T, with and without the new package, is all the support management truly needs.

If one or more resources have lost their protective capacity, then the management team has to consider quick ways to increase capacity, or find products whose sales can be reduced (maybe by increasing the price).  Again we need the intuition of Sales and Operations to make sure the solution is doable.

What might happen in the decision making is that while the optimistic assessment adds nicely to the profit, the pessimistic scenario shows a loss. We expect that if making a much higher profit is more likely than suffering a relatively small loss, then accepting the new idea is the right decision. However, one more point needs to be checked: small losses might accumulate to the point where they endanger the organization.  The current state of cash flow, plus the intuition of the finance people, should be part of the mutual decision process.

A mutual decision-making process is a managerial must. Such a process has to use the intuition of key people as legitimate and necessary inputs.  Then data processing and logical analysis will lead the management team to sound decisions.

Common sense – combining intuition and logic

We all know that common sense is not common at all, especially within organizations that have an ‘optimization’ culture.  Common sense tells us that reaching optimization is an illusion; chasing it drives damaging behaviors and keeps us far from even a good-enough state.

What is the common sense way to assess the worthiness of a new idea?

The first common sense question is:  what information is required to assess the idea?

As a reminder, Goldratt defined information as “an answer to a question asked”. In a way this means there are some things we need to know.  So, when we have to make a decision there are several inputs we look for, and these are the necessary information items.


Suppose that on behalf of your organization you look for a vendor of office supplies.  You talk with the representative of a large office-supplies company and also with the enthusiastic owner of a new office-supply business.  The large company’s rep is a tired and not-too-bright fellow who just recites the standard sales pitch.  The owner of the new business is definitely brilliant, and your intuition tells you he is going to be successful.

What information – answers to questions – do you have to look for?

  1. From whom are you going to get the better overall deal?
  2. From whom are you going to get the better overall service, especially a better response to any urgent request?

The first question gets a precise numerical answer.  Suppose the new business offers prices 4% cheaper for the first 6 months; after that the prices would be the same. Let’s also lay down the data that the total expense on office supplies in your organization comes to 0.94% of the turnover.
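A quick back-of-the-envelope check, using only the figures just given, shows how small the difference in the deal actually is:

```python
# Back-of-the-envelope check of the first question, using the figures above.
discount = 0.04           # the new business is 4% cheaper...
months = 6                # ...but only for the first 6 months
supplies_share = 0.0094   # office supplies come to 0.94% of turnover

saving_share = discount * (months / 12) * supplies_share
print(f"{saving_share:.4%} of annual turnover")
```

Less than two hundredths of a percent of annual turnover: the precise numerical answer turns out to carry very little weight in the decision.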

The second question has to rely on intuition, as any question about the future has no precise answer, and I doubt whether you would find any valid statistical model for this specific question.

Your intuition, based roughly on your life experience plus some emotions and biases, tells you the new business is going to give much better service.  You don’t expect a large business to do “favors”, but a new business with a wish for future growth is more open to responding to special requests.

Decisions are certainly impacted by emotions, and in this case the emotion and the intuition go hand in hand in favor of the new business.

So, is the decision obvious? 

Here comes the important role of applying logic as a critical control mechanism and a means to look at the bigger picture.

What might be the damage from failing to serve at the required level and is it likely to happen?

Large suppliers might miss items here and there.  However, we expect them to fix that in a day or two.  A new business might face more difficulties, especially when it tries to grow too fast, exhausting its resources and possibly suffering from poor cash flow. It could also suffer from less expertise in the area.

Let’s now ask: what is the size of the damage, and to whom?

Most organizations do not suffer much from an incidental lack of office supplies.  However, it creates a hassle, and when there is a hassle there is a person who is held responsible for it. So, while the real impact on the performance of the organization is relatively low, for the well-being of the decision maker the common-sense decision is to take the safer alternative, which is the larger supplier!

The need for safety, in this case, is usually stronger than the very little impact on the cost. Well, this is my intuition even for organizations that are in the Cost World.

My general observation

Decisions involve emotions, intuition and logical analysis.  To my mind, emotions have a negative impact on organizational decisions.  Intuition is critically necessary for the main information inputs.  The final decision has to look at the bigger picture and consider the ramifications of the inputs on its other aspects, and for that you need logical analysis.

Is it really an opportunity?

Part 2 of a series on using T, I and OE for key decision making

Opportunities present themselves in various ways.  Only seldom do we see an opportunity so good that there is no point in asking any more questions.  Most of what looks like a potential opportunity comes with an embedded doubt: is it really an opportunity, or a trap?

A typical managerial conflict happens when Sales proposes a promotion, offering several products for a certain price reduction.  Sales managers believe this would significantly enhance the sales of those products next month, and this belief is backed up by past experience.

A promotion creates huge pressure on the shop floor, reduces the sales of other products and, mainly, reduces sales for a certain period after the promotion is over.  Yet sometimes the extra revenues (minus the variable costs) it generates, especially from selling to new clients and gaining their future purchases, more than compensate for the damage.

  • How can we truly check the net financial impact of a promotion?
  • How can we check the financial impact of penetrating into another market segment?
  • How can we check the financial impact of launching a series of new products?
  • How can we check the financial impact of purchasing a new production line as an elevation of our current capacity constraint?

We are aware that cost-per-unit is not the right tool to support sound decisions. So, how should we make such decisions?

The most straightforward way is to assess the financial impact of the decision at hand on the bottom line, without relying on some funny ‘per-unit’ fabricated measures.  It looks like quite a difficult objective due to the complexity of the various expenses.  However, when we look at the decision as an optional addition to the current level of sales, we can see two clear factors that simplify the situation:

  1. The change in the incoming flow of money: the revenues from the change in sales, both the additional sales and possible loss of other sales, minus the truly variable costs of those sales.  This is what we call Throughput (T).
  2. The change in the outgoing flow of money (all the other operating expenses, called OE). Note that those additional expenses are all due to the required changes in the available capacity!  This insight was revealed in the previous posts about the behavior of the cost of capacity.

What we get is:   Delta(P) = Delta(T) minus Delta(OE)

Delta(P) is the change in net profit before tax.  For the decision at hand we want to know whether Delta(P) is positive or negative.
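As a sketch, the whole decision rule fits in a few lines. The promotion numbers here are hypothetical, chosen only to show the sign test:

```python
# Delta(P) = Delta(T) - Delta(OE): change in net profit before tax.

def delta_p(delta_t: float, delta_oe: float) -> float:
    """A positive result means the idea improves the bottom line."""
    return delta_t - delta_oe

# Hypothetical promotion: adds 50,000 in T but requires 20,000 of extra capacity (OE).
print(delta_p(50_000, 20_000))  # 30000 -> worth doing, capacity permitting
```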

What information do we need in order to get a good estimation of the above equation?

One obvious problem is the impact of uncertainty, which includes everything we don’t know at the time of the decision.  We will come back to this issue in later posts.

Following the general direction described so far, the first step has to be defining the current state of the organization, as we want to evaluate the difference between the state with the additional decision and the state without it.

There are two main categories of information describing the current state:

  1. The current sales. The items sold, their respective quantities, prices and truly-variable-costs (mainly cost of materials).   We can then calculate the generated throughput (T) per item and the resulting total T – the flow of incoming money.
  2. The available capacity and the load generated by the current sales.
    • In order to calculate the load we need to know how much capacity, for every resource, is required for every product sold!
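The load calculation in item 2 can be sketched as follows. The products, resources and per-unit capacity figures are invented purely for illustration:

```python
# Hypothetical sketch: load per resource generated by a sales mix.
# capacity_per_unit[product][resource] = minutes of that resource per unit sold.

capacity_per_unit = {
    "P1": {"mixer": 2.0, "packer": 1.0},
    "P2": {"mixer": 3.5, "packer": 1.5},
}
sales = {"P1": 4000, "P2": 2000}               # units per month
available = {"mixer": 16000, "packer": 9000}   # minutes per month

load = {resource: 0.0 for resource in available}
for product, qty in sales.items():
    for resource, minutes in capacity_per_unit[product].items():
        load[resource] += qty * minutes

for resource in available:
    print(resource, f"{load[resource] / available[resource]:.0%}")
```

The output immediately shows which resources are approaching their limit and therefore deserve a careful protective-capacity check.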

Then we need the following categories of information for every new opportunity / deal / idea:

  1. The new sales / T to be generated by this idea, including the longer-term impact
  2. The impact of the new sales on the current sales – would some current sales be reduced?
  3. The updated load versus capacity – do one or more resources lack enough available capacity?
  4. When one or more resources lack capacity, what special options are open?
    • Purchasing additional capacity at extra cost (how much?)
    • Reducing some sales, provided it can be done in practice without tampering with other sales!

Critical questions for advancing ahead:

  • Is it possible to gather all the above information?
  • How far into the future do we need to look in order to make a decision?
  • How can we handle many different opportunities for the same time frame?
  • How do we consider the impact of uncertainty?
  • What is the structured process to make a sound decision?
  • Is it too complex? If so, can we simplify it without distorting the decision process?

To allocate or not to allocate – is this the question?

An intermediate post to clarify a point

Part 1.5 in the series on using T, I and OE for key decision making

The previous post argued that the cost of capacity is neither linear nor continuous.  I did not deal with the fact that in too many cases there is no direct way to relate specific capacity consumption to specific products.  The remedy of cost accounting, in its effort to calculate the cost of a product unit, is to allocate the cost of capacity that is not directly related to a product unit based on some arbitrary parameter like direct labor.

Camp Nou stadium in Barcelona, the highest-capacity soccer stadium in Europe.

Should we allocate the cost of the arena based on tickets or on the result of the match?

Activity-Based Costing (ABC) challenges the older methods on that point.  ABC tries hard to relate every consumption of capacity to its “cost driver”, which could be a product unit, but also a new client or even an order.  Is anything wrong with that?

The real mistake of ABC, and of all the other cost accounting methods, is to associate the average cost of the specific capacity consumption with those cost drivers.  The non-linear behavior of the cost of capacity causes a huge distortion in the ABC management information.  It gives the wrong impression that certain cost drivers are too costly when there is actually a lot of excess capacity, while other cost drivers look good, concealing the fact that they use capacity that is truly limited (and purchasing more of it is truly expensive), thus leading to flawed business decisions.

Of course, in order to convince organizations to stop assuming that every time capacity is consumed a certain cost is generated, we need to establish an alternative way to make sound decisions.  We need a good method to check whether a new opportunity or idea would improve the bottom line or not.  We would also like a good method to decide whether purchasing more capacity is profitable, or whether we are better off giving up some of the available capacity.  I promise to arrive at the solution in later posts.

Sometimes we need to allocate certain costs even when we use the TOC logic!

For instance, suppose your company has partnered with another company in leasing a whole floor of offices, because the owner of the floor refused to lease only part of it.  That space is a resource, and the total space is the limit of the available capacity of the space resource.  Any agreement between you and the other company on splitting the cost of the rent, and probably also of some other capacities you use (cleaning, communication lines), is basically arbitrary and based on some allocation of the space (the lobby and the lifts are certainly shared).

I’m going to raise more cases of allocation as a good-enough solution when direct calculations are not possible.

The Non-Linear Behavior of the Cost of Capacity

And its impact on decision making

Part 1 of a series on using T, I and OE for key decision making

Challenging widely accepted paradigms creates new opportunities

The terminology of physics does not usually include words of dramatic intensity.  However, a certain incident in the late 19th century was so embarrassing that it was called “the ultraviolet catastrophe”, and by that it caught my imagination.  The story is about the radiation emitted from a black body: the mathematical equations, according to the knowledge of that time, showed that the emitted radiation should be infinite.  Well, it was easy to see that this is NOT the case.  What eventually solved the riddle was the discovery, understood through quantum theory, that radiation energy is exchanged not continuously but in discrete quanta.  As it turns out, discrete functions behave very differently from continuous functions.

There is a tendency in the social science circles to assume that the main functions, describing the behavior of key variables, like capacity or the cost of capacity, are continuous.


I claim that all cost functions in reality are discrete.  This is most certainly true when we speak about the cost of capacity.

All organizations spend their overhead expenses on providing the capacity required for the business.  The usual way is to purchase a certain fixed amount of capacity: space for storage or offices, a machine capable of processing a certain quantity per hour, or employees who agree to work N hours every week.

The cost of providing that capacity is fixed whether you actually use all that capacity or only part of it.

This means that using 25% of the available fixed amount of capacity costs exactly the same as using 85% of it!  This is a basic non-linear behavior, and its impact on decisions about what to do with the capacity at hand is HUGE.

Once all the available capacity is used, new options for obtaining additional capacity open up.

But the principle that capacity can be purchased only in certain fixed sizes still holds.

An employee might agree to work another hour, but usually not part of an hour.  So, if you need just 34 minutes of overtime, the cost is a full hour of overtime, which is also considerably more expensive than the relative cost of a regular hour.

So, when we look at the behavior of the cost of capacity, we realize the following:

The initial cost is HIGH.  Then the marginal cost is zero (0) until a certain load is reached. Then the cost jumps by another fixed amount, and using more capacity again costs nothing until the next fixed point.
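This step-wise behavior is easy to sketch in code. The block size and cost per block below are assumptions for illustration (e.g. overtime sold in whole hours):

```python
# Sketch of the discrete, step-wise cost of capacity described above:
# capacity can only be bought in whole blocks, so partial use of a block
# costs the same as full use.
import math

def capacity_cost(required_units, block_size, cost_per_block):
    """Cost of providing capacity when it is sold only in whole blocks."""
    blocks = math.ceil(required_units / block_size)
    return blocks * cost_per_block

# Needing just 34 minutes of overtime still costs a full hour:
print(capacity_cost(34, block_size=60, cost_per_block=45))  # 45
# Using 25% or 85% of one block costs exactly the same:
print(capacity_cost(15, 60, 45) == capacity_cost(51, 60, 45))  # True
```

Averaging this step function over units of product is precisely what produces the cost-per-unit distortion discussed below.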

This actual behavior is quite different from the current practice of associating the average cost to any use of capacity.

This is the kernel of the TOC challenge at cost accounting!

So, the simple principle of cost accounting is invalid in our reality.  The use of the average cost of capacity has led all the way to the fiction of cost per unit.

Do we really need “per unit” measures to support sales decisions?

We still believe in simplicity, but reject the wrong simplicity.  What could be simpler than a way to measure the direct impact of a decision on the bottom line?

Let’s now look at another realization:

There is no hope in hell to use all the available capacity!

This is certainly in direct clash with the common paradigms.

There are three causes for being unable to use all the available capacity to generate value:

  1. TOC has demonstrated the need for protective capacity to provide good and reliable delivery performance.
  2. The market demand fluctuates at a faster pace than our ability to adjust the available capacity.
  3. Capacity can be purchased only in certain sizes, as already stated above.

What are the ramifications for decision making?

When a new market opportunity pops up, we need to consider the state of capacity usage of every resource.  When there is enough excess capacity, the usage is FREE!  When the additional load penetrates into the protective capacity, then there is a need to carefully check the cost of additional capacity, or the ramifications of giving up some existing sales.
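A hedged sketch of that rule. The 15% share of protective capacity here is an assumption for illustration only; in reality that number comes from the intuition of the people in Operations:

```python
# Is the extra load free, or does it penetrate protective capacity?
# protective_share = 0.15 is an illustrative assumption, not a TOC formula.

def extra_capacity_cost(current_load, extra_load, available, protective_share=0.15):
    """Return 0.0 if the added load stays out of the protective capacity;
    return None to flag that the decision needs an explicit capacity check."""
    safe_limit = available * (1 - protective_share)
    if current_load + extra_load <= safe_limit:
        return 0.0   # usage of existing excess capacity is free
    return None      # penetrates protective capacity: cost it explicitly

print(extra_capacity_cost(700, 100, 1000))  # 0.0 -> free
print(extra_capacity_cost(700, 200, 1000))  # None -> needs careful checking
```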

This is a very different generic approach from the existing management accounting tools!

The next post will explain more about how to calculate the impact of an opportunity on the bottom line, without using any “per-unit” measure that would force us to use averages and get a distorted answer.

The Thinking Processes (TP) and uncertainty

Have a quick look at the small cause and effect branch.  Is the logic sound?


Can it be that in reality effects 1-3 are valid, but effect 4 is not?

We can come up with various explanations of insufficiency in the above logic.  For instance, if the clients are not free to make their own decisions, as in totalitarian countries, then it could be that the regime prefers something else.  Another explanation might be that the brand name of product P1 is much less known.

The generic point is: the vast majority of the practical cause and effect connections are not 100% valid.

In other words, most good logical branches are valid only statistically, because they might be impacted by uncertain additional effects that distort the main cause and effect.  Actually, the uncertainty represents insufficiencies we are not aware of, or that we know about but cannot confirm whether they exist in our reality.  For all practical purposes there is no difference between uncertainty and information we are unable to get.

This recognition has ramifications.  Suppose we have a series of logical arrows:

eff1 –> eff2 –> eff3 –> eff4 –> eff5

If every arrow is 90% valid (it is true in 90% of the cases) then the long arrow from eff1 to eff5 is only about 66% valid.
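One line of arithmetic confirms the order of magnitude: four arrows at 90% each compound to well under 70%.

```python
# Four 90%-valid arrows chained from eff1 to eff5.
p_arrow = 0.9
p_chain = p_arrow ** 4
print(f"{p_chain:.1%}")  # 65.6%
```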

The point is that while we should use cause and effect, because it is much better than ignoring it, we can never be sure we know!  The real negative branch of using the TP to outline various potential impacts is that frustrated people could blame the TP and its logic and refrain from using it in the future.  This false logic says:  if ([I behave according to the TP branch] –> [sometimes I do not get the expected effect]) then [I stop using the TP].

The way to deal with this serious and damaging negative branch is to acknowledge the role of uncertainty in our life, and the idea that partial information is still better than no information – provided we take the limitations of being partial seriously.  We can never be sure that whatever we do will bring benefits.  However, when we use good logic, then most of the time the benefits will far exceed the total damage.

It would be even better to consider the possibility of something going wrong in every step we take.  This would guide us to check the results and re-check the logic when the result differs from what we expected.  It is always possible that there is a flaw in our logic, and in such a case we had better fix the flawed part and gain a better logical understanding of the cause and effect.  When we do not see any flaw in our logic, there is still room for some crazy insufficiency to mess up our life, and this is the price we pay for living with uncertainty.

The Mysterious Power of Synergy

Synergy means that a system can achieve more, sometimes much more, than the sum of its parts.  This extra power is not easily understood and thus it is difficult to manage.

It is straightforward to see the value of synergy in sports.  You can build a basketball team by bringing together great players, each of whom excels in a particular role, and let them play and hopefully win.  Does it ALWAYS work?

When it does, one might get the impression that the power of the team is far more than the accumulated level of each player.  This is when the mystery of synergy works.

When it does not work, there is no ‘team’, just a group of excellent players, each playing according to his own interests.  In TOC we call it local thinking, as opposed to holistic thinking.  I think it is quite natural for a person to think and act based on his own interests.  The only rational way to cause a person to think holistically is to make a convincing argument that synergy does work; in other words, that the success of the whole would contribute much more to the person than whatever he can achieve by himself.

Theoretically there is a way to create such a clear win-win structure that the interests of every player are exactly the same as the holistic ones.  I understand the theory, but I admit I have not been able to construct such a network of win-wins in reality.  Still, the intuitive recognition that synergy exists in a big way could help in aligning different parts into a holistic system.

Very large organizations use their natural synergy to gain much more value.  We can recognize some of the causes of such synergy, and thereby reduce its ‘mysterious’ impact.  When some of the products or services of a giant company earn excellent recognition, the other services gain recognition as well.  The stability and security radiated by large organizations is a synergy asset whose cause is pretty clear.

However, many other causes of synergy are not all that clear, which does not mean they do not exist.  A strange manner of speech calls ‘chemistry’ the effect where two players play with great understanding of each other and thus generate synergy.  It is funny, to my mind, to give a ‘scientific’ name to something whose cause and effect is hard to map.  Still, in reality we see how some product mixes have more impact than others.  One needs to look at the overall characteristics of a ‘package’ to understand the advantage of one supplier over another, rather than go into the details of every product.  It is exactly like recognizing a forest rather than trees.

Project-portfolio management is a managerial topic that calls for an assessment of synergy.  When we consider a new project there is a need to assess the somewhat vague impact of adding it to the portfolio, and to predict the total impact of the whole portfolio on the organization.

As such an assessment is mainly intuitive, we need to recognize it as ‘partial information’, or basically uncertain information.  We should NOT ignore the synergy just because we are unable to predict its exact impact.  While we recognize “never say I know”, we should not take the position of “we don’t know”, because we do know something.  We are able to carefully assess the impact of synergy as a reasonable range, and then take that range as part of the decision-making process.

Any good strategy planning has to strive to gain synergy from all the initiatives that are integrated into one effective decisive-competitive-edge.  Synergy is a critical part of creating an ever-flourishing organization, and it requires a holistic view and a good tolerance for using ‘partial information’ to guide our decisions.

The problematic relationships between the individual and the organization – Part 2

A common belief is: many employees don’t want to make all the efforts they are required to make.

The point is not so much whether the belief is true, but whether it is self-fulfilling: employees try to avoid too much work and effort because they realize they are not trusted. When you are not trusted, the objective of feeling good about what you have done evaporates into thin air.

Suppose management succeeds in creating a culture of trust. Would the employees be willing to be loyal to the organization they work for? By ‘being loyal’ I mean do whatever it takes to achieve more of the organizational goal.

Employees come to work because they need the money. This starting point has several ramifications.

  1. If employees think they are getting less than they deserve, they become frustrated and hostile towards the organization – the opposite of loyalty.
  2. As money is important, employees choose to stay in the organization until something better pops up. This forces them to try hard to be considered “good”, or at the very least “OK”.

Other ramifications are caused by the mere fact that employees spend a large part of their lives at their workplace:

  1. Most employees prefer to “do something” while they are at the workplace, and so they usually work willingly according to what is expected of them, unless they have a reason not to.
  2. Employees who have the passion to excel look for an appropriate chance.

An important observation:

It is easier for an employee to be loyal to the organization than for the organization to be loyal to the employee.

The organization always looks at the cost and compares it to the perceived value from the employee. However, as we have seen, that value is not easy to assess. It is even more difficult to assess the indirect damage of being disloyal to an employee, as many employees become disillusioned and even hostile in a hidden way.

From the above it seems that even when management trusts its employees and is loyal to them, there is no guarantee that all employees will be loyal to the organization. It is enough that a specific employee believes he is underpaid for him to betray that trust. If this is the case, then the organization should actively look for signals of low motivation and disloyalty among its employees.

However, the need for making sure all employees are loyal does not necessarily mean the solution has to be the use of personal performance measurements.

What are the true needs of organizations in assessing their employees?

I can see two such needs:

  1. Identify employees that generate damage. Some might simply lack the appropriate capabilities. Others might be ‘rotten apples’ – those who are disloyal. Such people might influence others to become disloyal.
  2. Identify potential ‘stars’ – employees who can bring huge value in the future – if they are nurtured in the right way.

All the rest are good employees who bring value when management makes it possible. Is there a point in measuring their performance more accurately?

If the organization maintains a culture of respect and loyalty, then employees will do their best for the organization, because this is their work – a substantial part of their lives. What the organization has to do is make sure there is a certain code of work; when there is a signal that the code has been broken, then, and only then, should those employees be chased out.

In some cases, the organization has no choice but to let people go because they cannot yield value anymore. The point is that when this happens, management needs to recognize it as its own failure! Management then has to be aware that it needs to rebuild the trust and loyalty of the employees who remain in the organization to prevent the next disaster.


The problematic relationships between the individual and the organization – Part 1


Cartoon of businessman dog getting his evaluation over the phone, great teamwork, great morale, you're a good dog.

“Tell me how you measure me and I’ll tell you how I’ll behave.”

This famous quotation expresses an inherent conflict between the individual and the organization.  The individual, according to Goldratt, behaves according to the way he is measured, which is not necessarily for the good of the organization.

Why would an organization measure its employees???

Do you measure the performance of your spouse, children or close friends?  You have certain expectations, which are not always met, but do you look for ways to quantify your expectations?

What is the gain from performance measurements? 

One real and important gain is knowing whether you need to analyze much more deeply why the results deviate from your initial expectations.  To achieve that objective you need recorded expectations.  Now – expectations are never one clear number, are they?

As a father I expected my children to achieve good grades at school, say B and above.  A low grade did not call for punishment; instead we tried to understand the reason and what could be done next time.  Other parents push their children much harder.  Is the push valuable?  Certainly, treating grades as performance measurements that invoke positive and negative reactions would improve the grades.  The more important question is: would it improve the lives of the children?

Do performance measurements improve performance?

Goldratt has shown us how flawed measurements can reduce global performance.  There are three different causes for a performance measurement to radiate the wrong message:

  1. They are the wrong measurements.
  2. Dependency – the measurement depends not just on my performance, but also on other factors, like the performance of others.
  3. Variation. My performance varies. Some of the causes of the variation can be explained by external factors (like a headache) and some have no clear explanation.

How significant is the variation factor?  I like to watch sports on TV in order to spot the unexplained variation in the performance of the players.  It is most noticeable in tennis, where the number of “unforced errors” and the “first-serve percentage” fluctuate within a single match.  It demonstrates that the variation in our ability to do things is quite significant.

When performance measurements ignore the impact of dependency and variation, the employee distrusts the measurements and is led to do whatever he can to manipulate them for his own sake.

See what we have so far: ignoring complexity (dependency) and ignoring uncertainty (variation) are both direct consequences of a lack of trust.

Trust, or the lack of it, is a critical factor in life, in business, and most certainly in managing organizations. The fact that the organization does not trust its employees, and thus uses performance measurements, actually causes the employees to distrust their bosses and pushes them to plan their behavior carefully – against the interests of the organization.

It all starts with the relationship between the CEO and the owners (shareholders and possibly the board). Every CEO wants to make sure the organization achieves results that please the owners and make him/her a highly appreciated CEO.

An obvious difficulty for the CEO is making sure all other employees do whatever they need to do to achieve the results. Thus, when possible, CEOs impose performance measurements to push their subordinates to make more effort.  Do the subordinates really make more effort? Or do they just allocate their available capacity according to the specific measurements, with nothing else being important?


A simple cause-and-effect tree showing the rush to use performance measurements on employees

Is there anything wrong with the above logic?

What do you think of entity 2.3 – is it really valid in reality?  Let’s discuss it further to lead us in the direction of a solution.