The Challenge of Facing Complexity and Uncertainty

Mickey Granot has published a very interesting article entitled “The 3 mistakes that prevent exploiting your business potential”; see https://www.linkedin.com/pulse/3-mistakes-prevent-exploiting-your-business-potential-mickey-granot/?published=t.  The mistakes Mickey has identified are:

  1. Spreading management attention too thin.
  2. Misunderstanding the customer.
  3. Misusing measures.

I agree that each of the three mistakes has a major negative impact, preventing better exploitation of the current capabilities and capacity in the vast majority of businesses. I think there is a core problem that causes management to repeat the above mistakes all the time.

The constant fear of negative consequences from changes that look promising

The fear is invoked by the inherent complexity coupled with uncertainty. There are simply too many unknown facts for every proposed idea that could, maybe, generate more throughput (T) without significant additional operating expenses.  The difficulty of handling complexity coupled with uncertainty is the key obstacle for every manager. The fear is partially on behalf of the organization and partially due to the potential personal negative consequences of a “failure”.

Example: offering a variety of packages of regular products with a price tag that is 10% less than the regular price.  The idea is, first of all, to combine products that aim at the same end-consumer.  Another parameter is combining fast movers with medium movers, and by that expanding the market of the medium movers.  Another aspect is the ability to use excess capacity of most resources, even when the organization has to add overtime on the weakest link.  The idea is that the resulting delta-T would be much larger than delta-OE.  For instance, publishers can offer packages of several books by a known author. It is known in this market that while the newest book of a famous writer sells very well, the previous books now sell much less and might not even be available on the shelf.  Offering a package of the newest book coupled with the first book by that author could be relevant to fans of that writer who missed the older book.
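To make the delta-T vs. delta-OE comparison concrete, here is a minimal sketch in Python; every number in it (prices, truly-variable costs, expected bundle demand, overtime cost) is an invented assumption for illustration, not data from any real publisher.

```python
# Rough check of a package idea: delta-T vs delta-OE (all numbers are illustrative assumptions).

new_book_price, old_book_price = 25.0, 20.0   # list prices of the two bundled books
tvc_per_book = 6.0                            # truly-variable cost (printing, royalties) per copy

bundle_price = 0.9 * (new_book_price + old_book_price)   # 10% off the combined price
bundle_tvc = 2 * tvc_per_book
t_per_bundle = bundle_price - bundle_tvc                  # throughput per bundle sold

expected_bundles = 3000            # assumed incremental demand created by the package
cannibalized_new_book_sales = 500  # assumed buyers who would have bought the new book anyway
t_lost_per_cannibalized = new_book_price - tvc_per_book

delta_t = expected_bundles * t_per_bundle - cannibalized_new_book_sales * t_lost_per_cannibalized
delta_oe = 8000.0                  # assumed overtime on the weakest link to produce the extra copies

print(f"delta-T = {delta_t:,.0f}, delta-OE = {delta_oe:,.0f}, net = {delta_t - delta_oe:,.0f}")
```

With these assumed numbers the delta-T is roughly an order of magnitude larger than the delta-OE; changing the assumptions is exactly the kind of what-if check the idea calls for.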

How would managers approach such an idea? It is not a-priori clear how much more sales will be generated this way and what the impact on the bottom line will be, taking into account the reduced price of the package, meaning significantly reduced throughput per copy.

So, a decision to test such an idea very carefully and for a long time seems reasonable. In practice it means introducing a very small number of packages and monitoring their sales.  The result is that the impact on the bottom line is usually not so clear.  So management, while giving the idea very limited attention, needs to try several other new ideas at the same time.  The unavoidable result is spreading the management attention very thinly. This is one effect caused by the basic fear of uncertainty.

The causalities behind the second mistake are trickier to fully understand.  How come we frequently fail to recognize the right value as perceived by the customer?   When the customer is an organization we can assume the generated value is based on the practical needs of that organization.  Understanding the business of the customer should guide the supplier-organization to identify the true needs and by that gain major insights on how the products/services could be more valuable.  The problem is that such an understanding is not common at all – most marketing people have very little knowledge of the business of their customers, because of two key obstacles.

  1. Analyzing a different business from afar seems too complex and hence uncertain.
  2. The current tool for understanding how the customer appreciates the products/services is to analyze the complaints raised by the customer.  This proves to be a very partial and problematic tool, which gives rise to secondary elements and ignores the more critical ones, sometimes just because the customer does not expect the supplier to be able to deal with the real missing, or flawed, element.  Yet, having a practice seems good enough to many.

When it comes to the end-consumer, understanding the value of the product is even tougher, because the consumer often sees value that is not practical. For instance, taste preferences, or the aesthetics of the product design, cannot be logically defined by objective attributes. I wrote in the past about the three categories of value, see https://elischragenheim.com/2015/08/03/the-categories-of-value/.

When we analyze the example, the creation of the right packages has to be based on a good understanding of the value perceived by the customer. Even the question of whether a 10% reduced price is a good enough cause for buying a whole package depends on the customer’s overall perception of value.

The fear of negative consequences causes organizations to be very careful, especially with assumptions, based on intuition, about the external world, like customers and also vendors.  Understanding the end-consumer is difficult because analyzing hard data is not sufficient.  Some logical analysis is certainly required.  But even then, several not fully proven assumptions have to be in place in order to understand the end-consumer and be able to reasonably predict the reaction to certain moves.  The fear of failing to predict the behavior of customers limits the efforts to create a ‘theory’ of the true needs of specific market segments, and by that prevents the actual test of the ‘theory’, missing many powerful opportunities that might be much worthier than the current ideas.

The use of performance measurements to measure people is a clear announcement of mistrust created by the fear of failure. Measurements are definitely required for the diagnosis of emerging problems and as necessary inputs to decision making.  A wickedly flawed part is the assumption that the measurements reflect the capabilities and motivation of the people in charge.  This lack of confidence in people leads to many local performance measurements, and we know how distorting those are.  See also my previous post, https://elischragenheim.com/2018/03/30/the-problem-with-performance-measurements-and-how-to-deal-with-them/.

It is my view that eventually the fear of complexity and uncertainty is the ultimate core problem of the vast majority of organizations. Only very small organizations, where everyone knows everybody else well, are able to overcome the obstacle of fear of potential negative outcomes of each specific decision or action.

While TOC provides us with great tools to manage common and expected uncertainty in the key TOC applications for Production, Project Management and Distribution, the Pillars of TOC relate to handling uncertainty only indirectly. Humberto Baptista has already offered to include a fifth pillar to cover the need to live with uncertainty and be able to handle it effectively, actually to use it in order to truly flourish.  Humberto’s verbalization is:  “Optimizing in the noise increases the noise.”  This insight, which is also part of Dr. Deming’s methodology for quality, should lead us to realize that in order to improve one has to beat the natural variability.

We should come up with a detailed approach to “managing expectations” that will include full recognition of uncertainty, and by that reduce the fear and let people, managers and executives included, exploit their own capabilities.


The problem with Performance Measurements and how to deal with them

Dr. Goldratt’s famous saying “Tell me how you measure me and I’ll tell you how I’ll behave” shows one dark side of any measurement: it impacts the behavior of the people involved, actually the behavior of the whole system.  The hope of management is that the impact would be positive:  people will do their best to achieve the best results.  Unfortunately, in most cases the opposite is happening.  Just to illustrate another problematic side we don’t always pay attention to:  when the prime measurement is to make money, then some managers might break the law and other moral rules in their quest for making more money.  The 2008 crisis is just an example of the impact of money as a prime measurement.

The Theory of Constraints (TOC) went deep into the clash between the local performance measurements and the global ones, showing how the local ones disrupt the global. This is certainly one of the most concerning issues, and a lot has been written and presented on it.  However, performance measurements pose several additional negative branches (potential negative consequences).

The key objective of performance measurements is showing a full picture of the current performance in order to lead to the required actions for improved performance.

A devastating side of all performance measurements is the personal interpretation of them. Managers, and actually many lower-level people in the organization, are measured by these measurements as to whether they have succeeded or failed in their job.  This linkage causes the devastating effects that lie behind the famous comment by Goldratt.  I’d like to state an effect that looks obvious to me:

Performance measurements, at best, represent the current state; they do NOT answer the question “how come?”

In order to conclude what to do next, performance measurements, expressing the current state, are absolutely necessary, but they are definitely not sufficient. An analysis has to be carried out to explain the results.  I’m aware that “explanations for poor results” have a bad reputation, but that is part of the big problem. The poor results have to be openly recognized in order to identify the core cause.  An explanation like “the people involved were dumb” should lead to the immediate question of how come incompetent people were given that particular job!

What makes performance measurements even more problematic is the tendency to set a target for them.  There are two basic negative characteristics of determining targets:

  1. Parkinson’s Law claims that “work expands so as to fill the time available for its completion”. The same law applies to any quantitative target. The simple rationale is: outperforming the target is bad for the future of the individual, because next time the target will be set higher. So, the best case is to reach the target – no more and no less. Almost all means are allowed, including lowering the target to something “reasonable”, and then definitely not trying to achieve more. I have seen several cases where the organization claimed that 90% of the tasks finish exactly on time. This statistically impossible result gives evidence that Parkinson’s Law works (see the small simulation sketch right after this list).
  2. Determining the target is a problematic issue in itself. Sometimes targets are determined by hope and prayers. Sometimes there is a certain rationale for the top target, but then all the lower levels are given targets that are, more or less, arbitrary, with the sole requirement that they have to support the higher level.  These lower-level targets are those that middle-level management try their best to restrain.
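As a small illustration of why “90% of tasks finish exactly on time” is statistically implausible, the following sketch simulates task durations under ordinary variability (the lognormal distribution and its parameters are my assumptions) and counts how often a task ends exactly on its due date.

```python
# Illustrative simulation: with natural variability, tasks rarely finish *exactly* on the due date.
import random

random.seed(1)
n_tasks = 10_000
due = 10  # every task is given a 10-day target (assumption)

exactly_on_time = 0
for _ in range(n_tasks):
    # assumed right-skewed duration: most tasks fall near 7-14 days, with occasional long tails
    duration = round(random.lognormvariate(2.3, 0.3))
    if duration == due:
        exactly_on_time += 1

print(f"finished exactly on day {due}: {exactly_on_time / n_tasks:.1%}")
# Typically well under 20%; a reported 90% "exactly on time" points to Parkinson's Law at work.
```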

The idea behind setting targets is determining the “success” or “failure” of the people involved. On one hand the idea is to push people to excel.  On the other hand it creates fear, mistrust and manipulations.  This is a typical generic conflict.  The basic assumption that without being given clear and quantitative targets people would not do everything they can to accomplish their missions at the highest level is, to my mind, flawed.  The tricky point is that it is a self-fulfilling prophecy.  When people are used to targets, removing the targets leaves them wondering what they should do, which drives them to do a little, but definitely not too much.  Only a very clear message from management would make a change, and it’d take time to be believed.

Another problematic side of performance measurements is their dependency on the time periods. Suppose that this year the organization has to produce considerable stock because of an expected peak demand at the start of next year.  The annual T is relatively low, while the OE, maybe containing overtime, is relatively high.  Next year T will get much higher.  Question:  would management be aware of the causality?  If not, would Operations support producing stock for next year, when the TOC accounting practices do not reward any increase in inventory?

How can we deal with the negative ramifications of performance measurements?

I think this is the critical question for any organization striving to become ever-flourishing. To call the solution “Leadership” is to underestimate the obstacles and to rely on a vague term as if it were a solution.  I think the answer is implementing a structure for management decision making where predictions are based on ranges, rather than a blind commitment to a number, where the potential risks are openly discussed, and where the management team eventually reaches consensus – until the next management meeting, where the actual signals are observed and the discussion might be opened again.  This should be a procedure that is not over-dependent on the charisma of a specific leader.

In short, a procedure that truly respects uncertainty, recognizes mistakes without automatic blaming, and tries to correct them is a solution that could work.

Throughput (T), Operating Expenses (OE), and the critical connection of Capacity – the key for decision making

T represents the added value generated by the organization. Operating expenses represent the financial cost of providing the capacity of all the required resources, with the appropriate capabilities, for generating the value to customers that produces the T.

Confused? Read it again; this comprises most of the truly required data for managerial decisions.  The division between T, which is focused on sales data, and OE, which is focused on the internal resources, is of immense simplifying value for all managerial decisions.

Here is a rough diagram:

[Diagram: Value to customers]

I apologize for the poor graphics; I’m not very good with the use of graphical tools.

OE is just the cost of providing capacity. The goal is to have Throughput (T) much bigger than OE and then find the way to grow T faster than OE.  That should be the sole objective of every single decision taken by any manager in the organization.  There might be difficulties in doing the analysis, but the objective is the same. T for business organizations is defined as Revenues minus the Truly-Variable-Costs (TVC).  The truly variable costs are those that occur with every single sale.  So, T is the added value as measured by the customers, who are willing to pay the price. But the value for customers also includes what others, who are not part of the organization, have contributed.

Thus, T is the true performance measurement of what the organization has succeeded in achieving. OE is what the organization has to pay in order to achieve the T.

Well, I should have also included ‘I’, standing for ‘Investment’, as the part of the capital being invested to make it possible to achieve the T. But, I think there is no conceptual difference between ‘I’ and ‘OE’.  The difference is about the time frame.  ‘I’ refers to expenses that stretch beyond one year.  There are mechanisms to convert multi-year expenses into an equivalent stream of annual expenses – and these are part of the OE.  So, a $10M machine, which is supposed to work for 10 years, represents an annual expense of $1.1M, or whatever conversion rate you think is appropriate.
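As a rough illustration of that conversion, the sketch below computes T as revenue minus TVC and spreads a $10M, 10-year machine into an equivalent annual expense using a standard annuity formula; the 2% rate and the revenue figures are assumptions, since the text only says to use whatever conversion rate seems appropriate.

```python
# T and OE basics, plus converting an investment ('I') into an equivalent annual expense.

def throughput(revenue: float, truly_variable_cost: float) -> float:
    """T = revenue minus the costs that occur with every single sale (TVC)."""
    return revenue - truly_variable_cost

def annualized_investment(investment: float, years: int, rate: float = 0.02) -> float:
    """Spread a multi-year investment into an equivalent annual expense (annuity).
    The 2% rate is an illustrative assumption, not a prescribed TOC number."""
    if rate == 0:
        return investment / years
    return investment * rate / (1 - (1 + rate) ** -years)

t = throughput(revenue=12_000_000, truly_variable_cost=4_000_000)   # assumed figures
machine_as_oe = annualized_investment(10_000_000, years=10)          # about $1.1M per year
print(f"T = {t:,.0f}; the machine adds about {machine_as_oe:,.0f} to annual OE")
```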

Comment on a minor complication: originally Goldratt defined ‘I’ as ‘Inventory’. He moved to the more generic term later.  But a small missing point in the above rough diagram is that the materials being purchased are in a temporary state of Inventory (part of Investment) until they either become part of T, or part of OE when scrapped.  I don’t think it really complicates the simple picture.

The key point is to understand that OE is the critical enabler for generating T.  The fact that OE is made of many individual items creates a technical problem in predicting how much OE would support future T; for instance, taking actual initiatives to double the current level of T might require an additional delta-OE that could be more, or much less, than the current level of OE.

The majority of management decisions are about growing, or just maintaining, the current level of T. After all, Sales is about achieving T, and the efforts of Operations are aimed at delivery.  But there is a constant pressure to reduce OE, mainly because OE represents an ongoing threat to the organization:  you have to pay the OE whether or not you made enough T.  The tricky point of saving OE is that in most cases the negative impact on T is ignored.  The emphasis on T makes you aware that you need to be very careful not to reduce T.

So, we have to understand the dependencies between T and OE, and they look very complicated, because OE is about the capacity of so many different, seemingly independent, items.

TOC, through Throughput Accounting plus understanding the full impact of the five focusing steps, the role of buffers in planning and buffer management in execution, gives a much simpler answer to the connection between OE and T.

Critical insight #1: It is enough that one resource is overloaded, receiving more load than its available capacity, to seriously harm the expected T, unless significant additional OE is added.

Critical insight #2: There is a real need to maintain protective capacity, a certain amount of excess capacity, in order to provide enough flexibility to overcome market fluctuations and other types of uncertainty.  There is no safe formula to calculate precisely the required protective capacity, so a conservative assessment is required, followed by the appropriate feedback to ascertain that it is enough.

Critical insight #3: Every internal resource has a finite capacity covered by a portion of OE, but many times there are temporary ways to increase capacity for a cost, usually much more expensive per unit of capacity than the regular available capacity.  Such means could be part of the protective capacity, but their real value is in allowing the organization to take opportunities that clearly require more capacity than the current OE covers. That means delta-OE has to be considered and compared to the expected delta-T.

Any decision that deals with ways to increase T has to analyze the possibility that one or more of the critical resources would be overloaded, and if so find a way to either reduce other sales or increase the capacity of the specific resource(s).

The cost of capacity changes in a stepwise way, which makes the behavior of OE clearly non-linear.  One might look at it as a complication, and it really makes the whole notion of “per-unit” measurements non-usable in reality.  But when the full impact of uncertainty is recognized, then simulating ‘what-if’ scenarios could reveal when the connection between T and OE is clear enough to support a decision, or when there is a doubt.
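A minimal ‘what-if’ sketch of that connection, using invented resources, loads and costs: it checks whether a proposed sales increase overloads any resource, prices the temporary capacity (overtime) needed on the overloaded ones, and compares the resulting delta-OE to the expected delta-T.

```python
# What-if sketch: does a proposed sales increase overload any resource, and is delta-T > delta-OE?
# All resources, loads and costs are illustrative assumptions.

resources = {                      # name: (available hours, load per extra unit sold, overtime $/hour)
    "assembly":  (1600.0, 0.05, 90.0),
    "packaging": (1200.0, 0.04, 60.0),
    "qa":        (800.0,  0.03, 120.0),
}
current_load = {"assembly": 1450.0, "packaging": 1000.0, "qa": 760.0}

extra_units = 2000                 # proposed increase in sales (assumption)
t_per_unit = 35.0                  # throughput per extra unit (assumption)

delta_t = extra_units * t_per_unit
delta_oe = 0.0
for name, (capacity, load_per_unit, overtime_rate) in resources.items():
    new_load = current_load[name] + extra_units * load_per_unit
    overload = max(0.0, new_load - capacity)
    if overload:
        delta_oe += overload * overtime_rate   # buy temporary capacity for the overloaded resource
        print(f"{name}: overloaded by {overload:.0f} hours")

print(f"delta-T = {delta_t:,.0f}, delta-OE = {delta_oe:,.0f}, net = {delta_t - delta_oe:,.0f}")
```

With these assumed numbers only one resource goes over its capacity (insight #1), and the overtime needed to cover it (insight #3) is much smaller than the delta-T, so the move looks worthwhile; changing the assumed demand or loads can flip the conclusion.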

Another realization is that ideas for increasing T are usually significant, and their expected impact, both on Sales/Throughput and on the required capacity, is far from deterministic.  So, both the conservative realistic possibilities and the more optimistic ones have to be carefully checked.

Another insight: When judging the impact of an idea on sales it seems that if the conservative assessment of the impact is already good, then there is no need to check the option that the impact would be far greater. This is a mistake! When the market reacts very favorably then more problems in capacity, causing delays in delivery, have to be taken into careful consideration.  So, there are clear possible negative impacts of succeeding too well.  It can be called “The curse of blessing”.  I heard that interesting insight from Shimon Pass.  This is a devastating insight if you are not aware of it.

Is the above “simple”?

I think it is as simple as we can get when we strive to be right most of the time.

People who would like to know more about what I have briefly outlined above could ask me for a presentation and demo of Throughput Economics, a detailed methodology for evaluating decisions for achieving much more delta-T than delta-OE.

Decision Making: between Emotions and Logic

We in TOC think we are people of logic, doing our best to think clearly and thereby be able to make the best decisions. Suppose it is true that we are able to think clearly – how does that impact our decisions? And does the capability to think clearly help in influencing others to take the right decisions based on our clear-thinking analysis?

It is widely accepted that decisions are made based on emotions not logic.

That claim is obviously true, not just because of the structure and functioning of the brain, but also based on the logical analysis that logic alone cannot make ANY decision, because what you want to achieve and what you don’t want to tolerate cannot be determined by logic.  Abstract logic does not have a goal or any wish.  Logic cannot determine how important it is to earn more money, or whether to live alone or with the family, or even whether to live or commit suicide.  All the above critical inputs are dictated by our emotions, and all the worthy objectives are emotional.  Also, every risk we have to consider involves our emotions in evaluating the damage the risk might cause us.  The measurement of ‘damage’ is done by our emotions.  So, the decision has to take into account our feelings for or against various potential outcomes.  We can logically quantify the damage, say losing $1,000, but the interpretation of the damage is done by our emotions.

So, what is the role of logic in the decision making?

First, logic looks for rational ways to accomplish the objectives set by the emotions. You like to buy a car?  Logic raises the financial impact and predicts the response of other people to your new car, but it does not tell you how much joy the enthusiastic response of others would mean to you.  Logic, of course, does not mind the esthetics of the car and the general feeling of driving such a car unless the emotions include them in the logical process.  When the car is for specific needs logical analysis could note whether the capabilities of the car are good enough for those needs.  The logic predicts some of the future problems like facing complaints from the family that the bank account is now too low, so they cannot buy what they desire.

Thus, logic is used to identify both the good and the negative outcomes of the decision. If one is very angry at another person, logic might raise the option to hit the other person in the face, but also warn about possible outcomes.  The judgment lies with the emotions in order to make the final decision.  Would the satisfaction of hitting the other person be worth more than the consequences?  This is a detailed dialogue between emotional inputs and logical analysis and predictions.

So, it is absolutely right that eventually every decision is emotional. It is also true that after the decision is made logic is used to justify it to other people.

But, logic plays an important role in the decision making itself. It is stronger for decisions that have less obvious personal impact, like many of the managerial decisions, even though we’ll discuss later some possible emotional impact also on these decisions.  When we try to impact a decision to be taken by another person we have to use very strong logic, highlighting the pros and cons and presenting them intentionally to impact the emotions of the other side.  The TOC tools for outlining cause-and-effect are great for this purpose, but the emotional effects have to be part of the cause-effect analysis.

Every decision involves a choice. We are able to respond fast to daily common decisions in an automatic way.  Inertia plays a big role in decisions that seem similar to past decisions, but logic, when used, might raise reservations about the routine model for such decisions when negative outcomes are observed.  When such reservations are raised the emotions have to respond, either by rejecting the logical arguments, or by considering the impact and only then making the decision.  Rejecting logical arguments because they clash with already established models is quite common, but such arguments also raise a certain fear that might lead to their reconsideration later.

A person who tries to influence another has to consider the possibility of blind rejection as a possible response, which can be logically understood only when we are aware of the hidden threat of the negative emotion of being influenced. Taking into account emotions within logical analysis requires good understanding of the emotions involved.

The buy-in process, developed by Goldratt, is directed at change management. The first level of the process is to achieve a consensus on the problem. The point here is that “the problem” refers to the organization. But, the decision maker is a specific person, who also considers how “the problem” impacts his personal interests.  So, the same problem has two different settings to be judged upon.

Suppose we try to influence project managers to recognize the generic problem in managing projects. We know that most projects take longer than planned, cost more and achieve less of the planned content.  This is definitely a problem for the organization having to ensure the timing and quality of the project when the decision to go for the project is made.  Another top management need is to manage well the organization’s resources, which also suffer from late projects.

How would a typical project manager evaluate the personal aspects of project lateness? What emotions would be impacted if the performance of the next project were similar to that of previous projects?  Which of the project manager’s emotions truly mind the future wellbeing of the organization? Does the project manager see it as a personal failing when the performance is about the same as in the past?  Does the manager fear that her personal reputation might be harmed?

We need to recognize the fact that it is absolutely necessary to include the personal aspects of the person we are communicating with within the cause-and-effect logical process.

We also need to recognize the fact that emotions are effects that cause other effects like behaviors, views, responses and decisions. The effects caused by emotions should not be viewed as irrational; much of the time they reflect perfect rationality when we understand the emotions.

There are two categories of emotions that have an impact on the role of logic for managerial decisions.

  1. Positive emotions for being able to think logically. People with a great passion for success have to develop one of two different, even conflicting, emotions. One is the desire to see reality objectively. This desire leads to a feeling of respect for logical analysis. The other emotion is a desire to develop a “sixth sense” that would mysteriously lead to success through taking the right gambles. The first emotion, for being objective, directly causes respect for logic and efforts to use it properly. It can be seen in most successful managers. The second type is made of people ready to take big risks; when successful they become great business people, but not necessarily great managers. Those people rely on their emotions much more than on logic.
  2. Handling the fear of uncertainty. The emotions lead to a choice between “fight” and “flight”. Fighting uncertainty draws a person to logically analyze the odds and to systematically look for ways to reduce the damage. Such people respect objectivity and logical thinking. Other people hate being in fear even more than the subject matter of the fear itself. If I suspect I have cancer, I might avoid doing the necessary checkups because I don’t want to know. Such people try to view reality according to what suits them.

Generally speaking, FEAR is a critical source of various emotions, and it has a huge impact on our decisions.  Logic does not tell us to be brave or cowardly.  These behavioral patterns are dictated by emotions; then logic can take the objective and look for the best way to handle it.

It is my view that FEAR is a major cause for inconsistencies in the behavior of managers.  While most managers try to do well for the sake of the organization, the potential impact on their personal emotions might lead to different decisions.  Thus, the inconsistencies are not irrational – they just reflect the role of their emotions and self-interests.  We can use LOGIC to identify the inconsistency and build the rational explanation for it.  When we fail to do so, it is usually the failure of our logic, not the irrational nature of the person we try to understand.

Caught within the shared paradigms of their business area

A common shared paradigm being challenged

Every business area has its own “best practices” (are they really the best?) and a whole group of paradigms that are shared by everybody in this particular area. The consequence is being caught in a status quo, where the performance of the organization is stuck and slowly declines due to the increasing efforts of every competitor to steal customers from the others.

This day-to-day constant fighting to preserve the current state, without any leap in performance, is the reality of the vast majority of organizations. It causes them to be satisfied with relatively small profit, or to tolerate limited losses, with the feeling that this is the best they can do.  Such businesses succeed in being in a reasonably stable state, but without any hope for a better future.

A necessary condition, though far from being sufficient, for getting back to good business growth is to be able to challenge one important shared paradigm. Once this is done the organization deviates from the common way all the competitors are going, and by this establishes a clear differentiation from the competition.  The risk of not challenging such a paradigm is that a competitor might do it first, and this would change the false impression of stability.

However, this absolutely necessary step for growth is very risky, as being different does not mean outclassing the competition, and it certainly does not mean bringing any new value to customers. Too many times being different from the standard only reduces the value perceived by the customers, who just see the difficulty of getting used to something different without any benefit from it.  In other cases the new added value seems too expensive for the target market.

Another risk is that even if the organization succeeds to create new value to customers, it does not mean the customers are able to recognize and appreciate the new value. The difficulty is that unexpected added value might require a change in habits, and even when the customer sees the new value as something surprisingly nice (“how come we never got such an offer before”) the move raises suspicions that it is too good to be true.

The point with the risk is that it creates FEAR, which sometimes blocks any attempt to challenge a common paradigm that could lead to a breakthrough. The way FEAR should be handled is full acknowledgment that it is legitimate, but the risk can be handled by logically analyzing it, striving to reduce the risk, or its negative impact, and also creating a safety net of control with immediate corrective actions to neutralize the negative impact on time.  When the risk is properly evaluated and controlled, it is possible to overcome the fear.

Another, seemingly unrelated, effect of a similar fear is the high number of R&D projects that continue in spite of the fact that the early promise has already vanished.   The causal relation of that effect with the reluctance to challenge established paradigms shared within a business sector is the fear of failure and its personal impact.  The term “failure” has an especially negative connotation in the world of measurements and false accountability, and is in itself a paradigm that should be challenged.  An alternative related expression is “taking a calculated risk”, which naturally leads to the realization that the move might fail, but it is not interpreted with the full connotation of a “failure” because it has been considered ahead of time and the choice has been to go for it.  In the high-tech startup world the expectation of failures is so high that the damage to the pride and reputation of the individuals involved is minimal, which opens the way to many worthy efforts to do something exceptional.

Taking a calculated risk should be widely used not just for new technologies, but for every business sector, as the ways to come up with a new significant value to potential clients are diverse and only very few of them require a technological breakthrough.

But, taking a calculated risk has to be based on two necessary elements.

  1. A culture that endorses taking calculated risks with the full realization that they might fail.
  2. Using a valid process of analyzing the risk. Such a process should include searching for ways to reduce the potential risk, and eventually producing an analysis of both the potential damage and the potential gain.

The difficulty in the process of calculating the risk is that in the majority of the cases we don’t have good enough probability numbers. Using statistical models to estimate the probability is also frequently misleading.

Yet, the difficulty of estimating the amount of uncertainty should not cause management to ignore the notion of well-calculated risks, because the future of every organization simply requires taking some risks, and if you have to take risks you had better develop good-enough ways to estimate them. Developing the right culture depends on finding an acceptable way to estimate risks.  The term “estimate” is more appropriate than “calculate”, which seems to suggest the outcome of precise calculations.

There is a need to differentiate between estimating the uncertainty and estimating the level of damage that would be generated as a result of it. Let’s use the following example to comprehend the full ramifications of the difference.

A food company evaluates an idea to add a higher-level variant to its popular product line SoupOne. The new line will target the market segment that appreciates true gourmet soups. The line will be called SuperSoupOne and will cost 50% more.  This is a kind of new paradigm, because the usual assumption is that gourmet people shy away from processed food.

Suppose that the management has enough evidence to be convinced that gourmet-loving people could be tempted to try such a soup and, assuming it is really up to their standard, will continue to consume it. The “most likely” estimation, based on a certain market survey, is that SuperSoupOne will gain market demand of 10% of the current demand for SoupOne, but only 5% of SoupOne buyers would switch to the new product; the rest are going to be new customers.

However, one of the senior executives has raised another potential risk:

“What would SuperSoupOne do to the reputation of our most popular product line? It would radiate the message that it is a rather poor product, and that even the producer is now selling a much better version of it.  What would the buyers do when they cannot afford the better product?  I’m concerned that some of them will try the competitors’ products.”

The risk is losing part of the sales of the key product of the company. How large might the impact of SuperSoupOne on the sales of SoupOne be?  Actually the impact might be positive. Do we really know? We need to evaluate the possibility of having a negative effect and how it would impact the bottom line.

Note, the risk to be evaluated is the impact of the new line on the old line – not whether the new line would generate high enough throughput to cover all the delta-operating-expenses of launching the new line.

How could such a risk be evaluated? Suppose the current throughput generated by SoupOne is $5M.  According to the forecast for SuperSoupOne, the quantity to be sold of the new line will be 10% of the current quantity sold of SoupOne.  Suppose that such sales would generate 20% of the current throughput due to the higher unit price. So, we get additional throughput of $1M from the new line, while losing only $250K (5%) from the old line.

But the drop of 5% in the old line is only a forecast described vaguely as “most likely”, and those 5% are now buying the new line. If the reputation is truly harmed, it might cause up to 30% fewer sales of the old line.  In this case the loss of $1.5M of throughput from the old line would not be compensated for by the $1M “most likely” estimation of the new throughput.

The above rough calculations help management to realize the potential risk of losing up to $0.5M as a kind of reasonable worst case. Other reasonable possibilities seem much more optimistic for overall additional profit from the move.
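For clarity, the sketch below just lays out the same arithmetic, reproducing the numbers of the example for the “most likely” and the “reputation harmed” scenarios side by side.

```python
# SoupOne / SuperSoupOne risk estimate, reproducing the rough numbers from the example.

soupone_t = 5_000_000                 # current annual throughput of SoupOne
super_t = 0.20 * soupone_t            # new line: 10% of the quantity, but ~20% of the T (higher price)

scenarios = {
    "most likely":       0.05,        # 5% of SoupOne buyers switch to the new line
    "reputation harmed": 0.30,        # up to 30% drop in old-line sales
}

for name, old_line_drop in scenarios.items():
    lost_old_t = old_line_drop * soupone_t
    net = super_t - lost_old_t
    print(f"{name:18s}: gain {super_t:,.0f}, lose {lost_old_t:,.0f}, net {net:,.0f}")

# most likely       : gain 1,000,000, lose   250,000, net  750,000
# reputation harmed : gain 1,000,000, lose 1,500,000, net -500,000
```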

Can the risk be reduced? How about giving the new line of products a totally different brand name, which does not refer at all to the current popular product?  It probably won’t eliminate the full negative impact, but it will significantly reduce it.

The objective of the detailed example was not to reach a firm, clear decision. There is no claim that the existing paradigm is not valid and thus can be challenged. We also don’t know whether the idea of coming up with a higher-level product is a good one and what the actual impact on the current market is going to be.  The example has been used to demonstrate the need for getting a better idea about the risk and its potential impact on the bottom line, using the intuition of the relevant people.  A certain direction of solution for estimating a risky move has been briefly demonstrated.

Such an analysis is a necessary condition for the bigger need of opening the door to a constant search for a breakthrough that has to be based on challenging an existing shared paradigm. This is the objective of this post: to claim that challenging widely shared paradigms is truly required for every organization.  You might say the same about your own desire to make a personal breakthrough: it passes through challenging a common paradigm.

Collaboration, Win-Win and TOC

This article is broadly based on a joint webinar at TOCICO.  We have found that the topic is of special importance and ought to be expressed in more than just one way.

People often collaborate with each other. Family, ideology, security and business are good objectives for collaboration.  When the candidates for collaboration trust each other it makes win-win easier to achieve. Win-win is necessary for maintaining long-term collaboration.  Sometimes we have to collaborate with people we do not trust.  It happens when a mutual pressing need makes it mandatory to overcome the distrust.

Collaboration between different organizations is harder to establish. A simple, straightforward relationship like “we buy from you and we pay according to agreed pricing and related conditions” is more about “cooperation” than “collaboration”.  Cooperation needs to be present in most of our organizational relationships, whereas collaboration – where we have some mutual goals to achieve, and we need to ensure we both find a win in what we do – is rarer.

There are obvious difficulties in maintaining ‘trust’ between organizations. We can trust a specific person.  While any relationships between organizations are handled by people, the obvious concern is that those people might be replaced or be forced to act against the spirit of the collaboration.

However, collaboration could open the door to new opportunities, even creating the desired decisive-competitive-edge, for at least one of the sides, while improving the profitability of the other. Collaboration between competitors could strengthen the position of both towards the other competitors.  Collaboration between vertical links in a supply chain could improve the whole supply chain, and if all the links in a supply chain would collaborate effectively then the overall decisive-competitive-edge would be hard to beat.

So, we should look, first of all, at the new opportunity to be opened, and only then analyze how such collaboration could be sustained, overcoming the usual obstacles.

So, a key insight is that collaboration might SOMETIMES work well, thus it should be carefully decided when it truly pays. There are two key negative branches of collaboration:

  1. There are several risks in collaboration, especially between organizations, which might disrupt the positive outcomes.
  2. Collaboration requires a considerable amount of management attention.

An example where many efforts have been made to establish effective collaboration is found in the area of big construction projects. A methodology called ‘Project Alliance’, or Integrated Project Delivery, has emerged to deal with the basic dilemma posed by the common contracts and basic structure of such big projects.

The problem is that the client has to come up with a very detailed plan of the project, which is required to allow the competing contractors to come up with a fixed price for the whole project. The winning main/general contractor is then able to contract a number of sub-contractors.  What usually happens next is that some errors, additional requests and missing parts are revealed, and then re-negotiations take place, which please the contractors, but much less the client.

The concept of cost-plus came to settle this kind of re-negotiation, but when there is no fair visibility into the true cost, the above changes to the original plan are still great opportunities to squeeze more money from the client. From the client’s perspective it is difficult to assess the true cost of the project, and it is even more difficult to ensure the quality and duration of the project.  When we add up the efforts of the clients to plan in great detail and then feel helpless when errors are identified and more changes have to be introduced, the pain from running such a project is severe.

From the contractor perspective the initial bidding/competition phase forces him to reduce the price too much and by that take considerable risks, hoping to be lucky enough to gain a lot from changes in order to preserve satisfactory profitability.

A combined bad aspect of this kind of relationship is the mutual dissatisfaction with the outcome of the project. With all the changes and re-negotiations the project typically takes too long, costs too much, and the quality of the end product is compromised.  That basic dissatisfaction has a negative impact on the reputation of all the contractors.

It could be nice for the client to have more open and collaborative dialogue with the contractors, giving them more influence on the planning and ongoing execution. It is a more effective way to handle the complexity and uncertainty of such big projects.  However, any solution has to be beneficial to every contractor.  Without win-win no alternative way would be truly useful.

How can the concern about cost, from the client’s view, and the concern for profitability, from the contractor’s view, be dealt with in a way that is also in line with the success of the project?

The idea behind the Project Alliance is based on two elements.

One is to create a collaborative team of the key contractors that manages the project with the understanding of getting the most out of the collaborative efforts to achieve great success. This structure is different from having just one key contractor who manages the whole project and contracts several, even many, subcontractors.  Under the Project Alliance structure a consensus between the alliance members has to be achieved through active collaboration.  One consequence is far better synchronization between all the different professional aspects.

The second element is establishing a gain/pain payment scheme that is based on achieving a few targets, defined by specific measurements, which together define the success of the project. The payment to each alliance member is made of three parts:

  1. Actual cost – the true cost paid by the members to suppliers and freelancers, plus the salaries of the employees who are fully dedicated to the project.
  2. Fixed payment to the members for their work. In this kind of project the fixed payment is less than the normal expected profit.
  3. Variable fee based on the agreed performance measurements for the project as a whole. The variable fee plus the fixed payment could end up with much higher profit for the contractor-member than the norm.

The variable fee and the fixed payment are not proportional to the cost!   They are defined independently of the cost, and the measurements might include ‘total cost of the project’ as one of them. This split eliminates the interest of the contractors in increasing the costs that pass through their books in order to make a profit.  It also eliminates the damage caused if their scope is reduced in value.  The acronym given to this payment method is ‘CFV’, for Cost-Fixed-Variable.  In TOC terms the throughput for every contractor is the Fixed plus the Variable.
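Here is a minimal sketch of how a CFV payment for one alliance member could be computed; the specific measurements, weights and fee sizes are assumptions for illustration only, as the actual agreement defines them per project.

```python
# Cost-Fixed-Variable (CFV) payment for one alliance member - illustrative numbers only.

def cfv_payment(actual_cost, fixed_fee, max_variable_fee, performance_scores):
    """performance_scores: dict of measurement -> achievement in [0, 1] against the agreed targets."""
    achievement = sum(performance_scores.values()) / len(performance_scores)
    variable_fee = max_variable_fee * achievement
    total = actual_cost + fixed_fee + variable_fee
    throughput = fixed_fee + variable_fee          # in TOC terms: the member's T is Fixed + Variable
    return total, throughput

scores = {"total project cost": 0.9, "lead-time": 0.8, "quality": 1.0}   # assumed measurements
total, t = cfv_payment(actual_cost=4_000_000, fixed_fee=300_000,
                       max_variable_fee=600_000, performance_scores=scores)
print(f"payment = {total:,.0f}, member throughput (Fixed + Variable) = {t:,.0f}")
```

Note that the actual cost only passes through: raising it does not raise the member's throughput, which is the whole point of the split.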

The Project Alliance has been used in several big projects that are considered especially successful, taking into account the lead-time from start to finish, the overall cost of the project and the general satisfaction of the client. However, the vast majority of the big construction projects all over the world are still managed in the old way, despite all the predictable undesired effects.  In order to understand the fears of adopting that direction of solution, let’s point to some potential negative branches:

  1. Maintaining trust between organizations is assumed to be shaky, especially without prior experience.
  2. The individuals within the client organization, who are in charge of the project, may feel robbed of their power to dictate whatever has to be done in the project.
  3. The contractors might find themselves in a dilemma when they see a short-term chance to squeeze more money, but feel bound by the collaboration agreement, where the variable fee is less than the opportunity they see.
  4. The uncertainty of the budget due to the variable payments might seem problematic. This might be more of a bureaucratic issue as big construction projects are exposed to much higher uncertainty.
  5. Many clients feel uncomfortable in selecting suppliers without having a fixed-price bid, and some formal procedures require such bids (though these are rare), mainly due to inertia and the fear of going against the current wisdom.

Another area where collaboration could add immense value is the relationship between a client organization and a few of its suppliers. The regular relationship in B2B is:  the client organization tells the supplier what is required – specs, quality, quantity, delivery time and price.  Negotiations are about due-dates and price.  The underlying assumption is that the client knows what and how much is required.  In the majority of the cases that assumption is valid enough.  In many cases the client is able to generate a bid/competition in order to get the cheapest price.

There are other cases where creating a longer-term engagement between the client and the supplier could boost the business of both, creating win-win. In most of those cases the basic information flow still comes from the client telling the supplier what, how much and when to supply.  The agreement covers a longer time frame and exclusivity, thus ensuring availability of supply and security for the supplier.

There are fewer cases where true collaboration between client-supplier could truly enhance both organizations, creating new opportunities that cannot be achieved with the formal relationships of “we tell you what we need, you supply according to a general agreement on time and price, probably also minimum annual quantity.”

In those fewer cases the potential could be huge.  When the collaboration opens the way to reaching wider demand and/or achieving a higher price, then the potential for a significant increase in throughput, far exceeding any possible additional cost, exists for both organizations.  Longer-term collaboration can also assist the buying organization in simplifying its purchasing processes and specifications, allowing a reduction in overall purchase cost and leading each party to increased profit.

Both organizations have to invest management attention in providing true, close collaboration; one may call it partnership. Both need to earn a lot from it.  One characteristic of any ongoing collaboration is not just trust, difficult as it is to maintain between organizations, but also a deeper understanding of the interests, values and general culture of the two organizations.  Actually, this kind of understanding of the other side is a necessary element in establishing such a partnership, because only when you understand the situation and its related effects on the other side can collaboration truly be beneficial and far superior to the common competitive approach.

Goldratt already developed a vision connecting the whole supply chain in partnership/collaboration, ensuring fast response to the taste and wishes of the market, following the insight that every link in the supply chain truly sells only when the end product is sold to the consumer. The idea was to split the throughput from every sale between all the participants, making them collaborate in order to ensure as high a throughput as possible.

The vision of Goldratt raises several negative branches that need to be eliminated, like the conflicting interests of a link in the chain when it partners with more than one supply chain. The collaboration along the supply chain should focus not only on response time and lower inventory, but also on actively developing new products, and modifications to existing products, that would capture more and more of the market.

All the above difficulties can be overcome with analysis and innovative thinking when the potential amount of the opportunity becomes clear. Collaboration is a means, especially for medium and small organizations, to gain extra competitive edge by becoming virtually larger with additional capabilities, capacity and more effective synchronization.  Using the right thinking, and taking win-win seriously, the potential is unlimited – even though the ultimate constraint could still be management attention.

 

Raw Thoughts on the Management Attention Constraint

Goldratt called management attention the ultimate constraint, meaning that after elevating the other constraints, including the market demand by establishing a decisive competitive edge, the pace of growth of the organization is still limited by the capacity of management attention.

How much attention can a person give in a period of time? The term itself is elusive and very difficult to assess even when focusing on just one person, and even more so when we try to assess the attention capacity of a group of people.  Yet, there is no doubt that there is a limit where adding more issues to think about and control causes chaos.  The evils of multi-tasking are well known.

Capacity of management attention means how many different issues can be handled in a period of time. I do not try, in this post, to assess the ability and skills to deal with a certain issue – just how many can be properly done in a period.

What contributes to the difficulty is the simple fact that we are used to filling our attention all the time. We cannot tolerate being “bored”.  We always think of something.  So, just by making our mind busy all the time, the question of whether we are now close to the limit cannot be easily answered.  After all, if something urgent pops up then our mind abandons whatever it has been occupied with and switches to the new issue, which requires immediate attention.  We can easily say that while there are issues that force themselves upon us, most of the time we choose the issues that are worth spending time on.

Focusing is an exploitation scheme to direct our mind to deal with the more valuable issues, putting aside the less critical ones.  However, we are only partially successful in concentrating on what we have decided to focus on.  We are certainly limited in how long we can concentrate deeply on one issue before having a break.  This means people have to multi-think on several issues.  However, when we don’t focus our mind on just a few issues we achieve nothing of substance.

So, here is the key difficulty in utilizing our mind in the most effective way. We need to let our mind wander between several issues, but not let it wander between too many issues.  Let us assume that every person has somehow learned to maneuver his/her mind in an acceptable way.  This means we can feel when too many critical issues call for our attention and then we lose control and become erratic.  What we can do is try our best to decide what to push out of our mind, so we won’t reach the stage of overloading our attention.  This is a change of behavior that is very difficult to do, but even when we’re only partially successful our effectiveness goes up considerably.

How does management attention become an issue in the life of an organization?

Even high-level executives give their work only part of their overall span of attention. So, part of the competition for our attention has nothing to do with work.  People who love their work are highly motivated to do well, feel deeply responsible at work and give more attention to the work issues. But still, all other personal issues, like family, health and hobbies, have to have their part.

From the organization perspective the limited attention of all management has to be properly utilized, but not allowed to come near the line of confusion causing mistakes and delaying critical decisions.

In several previous posts and webinars I have expressed my view that in any organization there are two different critical flows:

  1. The current Flow-of-Value to clients. This involves short-term planning, execution and control, doing whatever it takes to serve current customers.
  2. Developing the future flow-of-value. This is the Flow-of-Initiatives, aimed at bringing the organization to a superior state according to the goal of the organization.

Is it likely to have an active management attention constraint in the flow-of-value?

When this situation occurs, the delivery performance gets out of control. Some orders are forgotten, others are stuck for a very long time, and without the client screaming there is little chance of delivering an order at “about the agreed time”.  Such a situation might bring the organization into chaos, and no human system can live for long with chaotic performance.

The clear conclusion is that a management attention constraint in the flow-of-value cannot be tolerated, and thus all organizations look for the right skilled managers to maintain the flow-of-value at a certain level of stability, which is on par with the competition. The typical operations manager is one who is active in spotting fires and is able to put them out.  Such a manager is less tuned to coming up with a new vision.

But when we examine the flow-of-initiatives, the situation is quite different. There are usually many more ideas for improving the current performance than there is management attention for developing, carefully checking and implementing them.  The result of overloading the management attention is being stuck for much too long in the current state – same clients, same products and same procedures – while improvement plans take very long to implement and the stream of new products is also slow and erratic.

Having management attention as the constraint of the flow-of-initiatives makes sense, because of the unlimited number of ideas, but it requires strong discipline to keep management focused. It means having consensus on the strategy and, based on it, on what raw ideas should be checked; then having a process for deciding which ones to develop in detail; and after that choosing the few to implement.  As measuring attention capacity is currently impossible, because we lack the knowledge, some broad rules should be employed to dictate the number of open issues every management team has to deal with.

This kind of discipline requires monitoring the issues, call them ‘missions’, where every manager is in charge of completion: assigning a due-date to each mission, monitoring the number of open missions, and also checking whether too many missions are late, signaling that one or more managers are overloaded and thus the rules have to be updated.

It is not practical to count every tiny issue as a ‘mission’. Managers definitely need to put out fires, deal with other urgent issues and many other small issues that take relatively short time.  Being able to empower the subordinates could significantly relieve the critical load on any manager.  But changing habits is very tough, so most of the time we have to accept the manager’s character as given, and eventually come close enough to a fair assessment of how many medium and large missions a typical manager can handle on top of all the smaller issues that the manager has to control.
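As a rough illustration of such monitoring, the sketch below counts each manager’s open and late ‘missions’ and flags possible attention overload; the thresholds and the data are invented assumptions, not established rules.

```python
# Rough monitoring of 'missions' per manager: flag overload by open count and late count.
# Thresholds and data are illustrative assumptions.
from datetime import date

MAX_OPEN, MAX_LATE = 6, 1          # assumed broad rules for one manager

missions = [                        # (manager, mission, due date, done?)
    ("Dana", "new pricing proposal", date(2018, 5, 1), False),
    ("Dana", "supplier agreement",   date(2018, 4, 1), False),
    ("Dana", "ERP data cleanup",     date(2018, 4, 15), False),
    ("Avi",  "new product spec",     date(2018, 6, 1), False),
]

today = date(2018, 4, 20)
for manager in {m for m, *_ in missions}:
    open_m = [m for m in missions if m[0] == manager and not m[3]]
    late_m = [m for m in open_m if m[2] < today]
    if len(open_m) > MAX_OPEN or len(late_m) > MAX_LATE:
        print(f"{manager}: possible attention overload "
              f"({len(open_m)} open, {len(late_m)} late) - revisit the rules or offload missions")
    else:
        print(f"{manager}: {len(open_m)} open, {len(late_m)} late")
```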

What happens when management needs more capacity?

How difficult is it to elevate the attention constraint? The simple answer is adding more managers.  There are two problems with that.  One problem is called “diminishing marginal returns/productivity”: any additional manager adds less to the total management attention than the previous one, because of the additional load of communications on the existing management group.  The other problem is whether the whole management structure needs to be re-checked and maybe changed.

Empowering the subordinates is another way to relieve the load on the shoulders of managers, and it does not require re-structuring the management hierarchy. The problem here is that for a manager to change in order to trust subordinates is even tougher than improving the control over what issues should occupy the attention and what should be pushed away.

So, empowerment and a wider managerial pyramid are valid ways to increase managerial capacity, but the ongoing duty of top management is to focus the attention of all managers on the most promising issues top management has chosen, while also keeping part of the attention open for controlling the current situation and looking for signals of emerging threats.