The personal challenge of being a CEO


Clarification:  This post was written after several discussions on the topic with prominent people in the TOC field.  The main discussion was led by Ray Immelman.

Understanding how to manage organizations has to include the personal side of the person in charge – the managing director or CEO. While the organization's undesired effects affect the CEO, we also have to consider the CEO as an individual with interests, wishes and an ego.

Given the wide range of organization sizes, and the spectrum of personalities who become CEOs, can we say anything general about what it takes to be one?

Taking responsibility for the future of an organization, its shareholders and its employees suits only people with enough self-confidence to believe they can do it. Actually, every single CEO has to project self-confidence at all times, which requires a lot of self-control.

I believe many CEOs have doubts and fears they hide well behind the façade of someone who clearly knows what needs to be done.

The challenge of every CEO is to get hold of the basic complexity of the organization, its stakeholders, clients and suppliers. On top of the complexity there is considerable uncertainty. The combination of complexity and uncertainty impedes the CEO's efforts to answer two critical questions: "what to change?" and "what to change to?" On top of dealing with complexity and uncertainty, every CEO also has to constantly resolve conflicts within the organization and between the CEO and the shareholders. These conflicts create obstacles to implementing the changes proposed by the answer to the second question, and by that raise the third basic question: "how to cause the change?"

The first key personal dilemma of every CEO derives from the difficulty of answering the three key questions and from how the actual results are judged by the board, the shareholders and possibly stock-market analysts. The seemingly unavoidable outcome is that the CEO fears being harshly criticized for taking any risk, even when the possible damage is low. Considering the complexity and the variability, the pressure is so high that it pushes the CEO away from even the limited risks required for potential growth.

This means that, within the generic conflict of take-the-risk versus do-not-take-the-risk, the interest of the organization might be to take the risk, yet the CEO decides against it because of the potential personal damage.

When analysing the CEO's conflict we also have to consider the risk of not taking risks. First of all, the shareholders expect better results, and the CEO, trying to resolve the conflict, has to promise certain improved results – and will be judged against these expectations. Achieving phenomenal results might also be seen as risky, as it creates very high expectations in the stock market. On top of that, there are enough other threats to the organization, and failing to handle them would be detrimental to the CEO as well. Having to act with the utmost care on almost every move adds to the potential opportunity for TOC, with its superior handling of uncertainty and risk.

The key TOC insight is that the combination of complexity and variability leads to inherent simplicity. The essence of the simplicity is that actions whose impact is lower than the level of the noise (variability) can never be shown to have produced a positive outcome. This leads to focusing only on the more meaningful actions. The simplicity also supports judging more daring actions, looking for those whose potential downsides are limited and whose upsides are high. When you add the other TOC insights that reduce variability, improve operational flow, check the true financial impact through Throughput Economics and apply powerful cause-and-effect analysis, the combination yields an overall safer environment.
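To make this concrete, here is a small illustrative sketch with made-up numbers (the baseline, the noise level and the two action impacts are all mine) showing why an action whose impact is below the noise can never be demonstrated to have helped, while a much bigger action clearly stands out:

```python
import random

random.seed(1)
BASELINE = 1_000_000   # average monthly throughput ($) - hypothetical
NOISE_SD = 80_000      # typical month-to-month variability ($) - hypothetical

def simulate_year(improvement):
    """Twelve monthly results, given an action with the stated average impact."""
    return [random.gauss(BASELINE + improvement, NOISE_SD) for _ in range(12)]

small_action      = simulate_year(10_000)    # impact well below the noise
meaningful_action = simulate_year(250_000)   # impact well above the noise

for name, months in (("small action", small_action), ("meaningful action", meaningful_action)):
    avg = sum(months) / len(months)
    print(f"{name}: yearly average {avg:,.0f} vs baseline {BASELINE:,}")
# Whatever difference the small action produces is indistinguishable from the
# noise, so its benefit cannot be proven; the meaningful action clearly stands out.
```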

Taking risks is not the only dilemma with a special impact on the personal side of the CEO. While the fear of being harshly criticized is pretty strong, the CEO also wishes to get the glory of any big success. The dilemma arises when the success requires the active participation, and considerable inspiration, of other people. It is even more pronounced when those people are external to the organization, like consultants. Challenging existing paradigms, which is the core of the TOC power, might put the spotlight on the TOC consultants and rob the glory from the CEO, who chose to take the risk but might not fully enjoy the successful outcome.

How do people react to someone who suddenly changes his own previous paradigms? Isn't it easy, and even natural, to blame such a manager for not having changed much earlier, or for being too easily influenced?

This dilemma seems tough to resolve in a way that achieves a win-win between the organization, the CEO and the other executives. Emphasizing how wrong the old paradigms are makes the manager's dilemma stronger. People have a reason to refuse to admit mistakes: it harms their self-confidence and it radiates incompetence – probably a flawed impression, but still a pretty common one. Of course, the other side of the conflict is the potential damage of not admitting the previous mistakes.

In the old days of TOC we used the OPT Game and the Goldratt Simulators, which I developed, to push managers to admit they don't know. This was quite effective in creating the "wow" effect. However, the humiliation the managers went through proved beneficial only for those with a very strong personality. Too many managers paid lip-service to the proof that their concepts were flawed and continued with the old ways.

We expect a CEO to have the strong personality that allows recognizing a mistake and taking whatever steps are required to get back on the right track. We expect a CEO to act according to the full interests of the organization without considering his or her personal interests. Very few truly rise to this challenge. Many believe that the way to handle the "agent" problem is to pay high bonuses for success. Actually, this only legitimizes the personal-organizational conflict and could easily push CEOs to take bigger risks that increase the fragility of the organization.

It seems we need to help CEOs to resolve both dilemmas. We have to promote the contribution of the CEO to the success, and we have to reduce the fears of the potential unknown outcomes, organizational and personal, of the change we believe would bring huge value and even in the worst case will still be better than the current state.

My own realization is to reduce the pressure on what is "wrong" and make much more of what is "better", presenting the change as an improvement rather than a revolution that discards everything people have learned in the past.

What does FOCUS mean?


The vast majority of managers believe that focusing is absolutely necessary for managing organizations.

If this is the case, then FOCUS as the key short description of TOC has very little to offer to managers.

Let's consider the possible consequences of the Pareto Law. A typical company gets 80% of its revenues from 20% of its clients. How about focusing on the 20% of big clients and dumping the other 80%? Does that make sense?

The point is that the real question is not whether to focus or not, but:

What to focus on?

And, even more importantly:

What NOT to focus on?

The reason for emphasizing what not to focus on is that the need to focus is caused by limited capacity, which means it is impossible to focus on everything and draw the full value from it. The limitation could be a capacity constraint that forces us to concentrate on what exploits that resource. Empowering subordinates is a means for an individual manager to focus on the key issues without becoming the constraint of the organization. In many cases the critical limitation is the inability of the management team to multi-task in a way that would not delay truly important issues that require action. This is what is meant by management attention being the ultimate constraint in designing the future of the organization.

Giving up part of the market demand could make sense only when more overall sales, more total Throughput, could be materialized. Only in very rare cases is it possible to reduce OE, after giving up certain demand, to the level where T minus OE improves. Dropping 80% of the clients – the smaller clients that yield only 20% of the turnover – would almost never reduce OE by what is required to compensate for the lost T, which is significantly more than 20% of the OE. This is due to the non-linearity of OE: reducing the capacity requirements does NOT yield an OE reduction at the same rate. Just think about the space the organization holds and whether the reduced number of clients would allow using less space – and even when that is possible, it might be impossible to actually save OE because of it.
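A small sketch with hypothetical figures (the ratios are mine, not from any specific company) makes the arithmetic visible: dropping the clients who bring 20% of the turnover removes their T in full, while OE hardly moves:

```python
revenue       = 10_000_000                      # annual revenue ($)
tvc_ratio     = 0.40                            # truly-variable costs as a share of revenue
throughput    = revenue * (1 - tvc_ratio)       # T = 6,000,000
operating_exp = 5_000_000                       # OE: capacity, space, salaries

profit_now = throughput - operating_exp         # 1,000,000

# Drop the small clients that generate 20% of the turnover.
lost_T   = 0.20 * throughput                    # 1,200,000 of T disappears
# OE is non-linear: shedding 80% of the clients rarely frees anything close to a
# proportional amount of capacity, space and overhead. Assume 10% is saved here.
saved_OE = 0.10 * operating_exp                 #   500,000

profit_after = (throughput - lost_T) - (operating_exp - saved_OE)
print(profit_now, profit_after)                 # 1,000,000 vs 300,000
```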

FOCUS should NOT be interpreted as picking just one specific area. It has to be linked to an estimate of how much the available capacity can effectively focus on without causing considerable delays to the other areas and topics. And remember the following Goldratt insight:

Segment your Market – Do Not Segment your Resources!

The idea is that many of our resources are able to serve a variety of different products and services targeted at different segments. This is a very beneficial capability. Effective focusing should exploit the weakest link, based on the limiting capacity. In most cases the exploitation encourages serving several market segments, but not too many.

The question of what to focus on goes into all the different parts of TOC, always looking, first of all, for the limitation, and from that the answer is derived. In DBR it is the constraint. In Buffer Management the question gets a small twist: "what should we do NOW, otherwise the subordination might fail?" The Current-Reality-Tree defines the core problem of the organization, which is the first focus for designing the future. CCPM focuses on the Critical Chain rather than on the Critical Path, also pointing to multi-tasking as a lack of focus that causes huge damage. The key concept in the Strategy and Tactic (S&T) is the Decisive-Competitive-Edge (DCE), which again points to where the focus should be. The DCE is actually based on an identified limitation of the client, which the organization has the capability of removing or reducing. Building a DCE is a huge challenge that adds considerable load to all managers and professionals, so it makes sense to avoid more than one DCE at a time.

Goldratt brilliantly used a Hebrew slang word, actually borrowed from Russian, "choopchik", to describe an issue with very low impact on performance. The whole point is that choopchiks do have a certain positive impact, which makes them tempting to tackle, but tackling them causes a huge loss by crowding out the vastly more important missions. I regard choopchiks as a key TOC concept that is directly derived from the search for the right focus.

The TOC notion of focus is also relevant for recognizing the impact of uncertainty on management. A choopchik impacts the performance of the organization by less than the normal existing variability; call it the level of the "noise". With such a small impact you cannot know whether there has been any actual benefit in the real case. Worthy missions have an impact that is considerably bigger than the noise.

What to focus on is key to achieving better and better performance. The elements involved are the diagnosis of the current state, the few variables that dictate the global performance, and what could go wrong. Mistakes in what to focus on are common, and they are among the main causes for the death of organizations and for so many being in survival mode.

What is the right time to make a decision?


Every decision is a choice between alternatives. Another element is the right time for the decision to be made. Very few decisions force the manager to decide immediately; that in itself is an undesired situation, where a threat has emerged as a complete surprise. Most decisions leave the decision maker enough time.

Facing substantial uncertainty suggests that every decision should be delayed until the last moment that still allows executing the decision in full. The underlying assumption is that time adds information that reduces the uncertainty.

There are serious negative branches to the above logical claim. All of them look at what might go wrong with the suggestion.  Here are several of them:

  • We don’t truly know the exact timing of “the last moment”, so we may miss it.
  • We might forget to make the decision at the right moment.
  • Our attention might be occupied by more important issues at the critical time.
  • Making the decision at the very last moment makes it urgent and generates stress. The stress might affect us to make the wrong decision!

Inspired by the idea of time buffers, we should treat every non-trivial decision as a mission that has to be completed by a given time, and assign a time buffer to that mission. According to the DBR interpretation of time buffers, work on the mission should not start before the buffer begins.

Managing the execution of missions by individuals or teams through assigning time buffers, and using buffer management as a control mechanism, is a much more effective process than check-lists. It reduces multi-tasking through buffer-management priorities and prevents handling missions, especially decisions, too early. Only non-trivial tasks should be included in the missions. It is a step forward in understanding the behavior of the capacity (attention) of individual managers. It would also clarify the issue of distinguishing between missions and projects.

An example

Suppose a decision to stop working with a certain supplier and switch to another one is considered. The decision process requires updated information on the trouble with the current supplier and mainly finding alternative suppliers, inquiring how they are evaluated by their clients, whether they have the specific capabilities and, of course, their pricing.

When is the deadline to make the above decision?

Suppose the contract with the current supplier ends on December 31st, 2016. If the contract is not going to be extended, it is fair to announce it by December 1st, which also leaves enough time for finalizing the contract with the new supplier. The mission includes getting the relevant information, bringing it to the decision maker(s) and allowing 1-2 hours for the decision itself. Assigning three weeks for the whole mission is reasonable. This means no one should work on that mission prior to November 10th!
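The dates above follow from a trivial buffer calculation; the sketch below just spells it out (the one-month notice and the three-week mission buffer are the assumptions stated in the example):

```python
from datetime import date, timedelta

contract_end   = date(2016, 12, 31)
notice_period  = timedelta(days=30)   # announce the non-extension about a month ahead
mission_buffer = timedelta(weeks=3)   # time buffer assigned to the decision mission

decision_deadline = contract_end - notice_period        # December 1st
mission_start     = decision_deadline - mission_buffer  # November 10th

print("Decide by:", decision_deadline)
print("Do not start working on the mission before:", mission_start)
```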

The impact of the criticality of the decision

Goldratt said: "Don't ever let something important become urgent."

The practical lesson is: important decisions should be given a reasonable time buffer. Very important decisions, those we call 'critical', should be given a longer time buffer, ensuring the decision is not going to be taken under stress. Of course, a critical decision might still be taken under stress because of possible negative ramifications for which no viable solution has been found. This post focuses only on the time element.

The suggested process expands what TOC has developed for the production-floor to the work of managers taking responsibility for important missions.

Comment: a mission can be viewed as a very small and simple project, but it does not make sense to work on the mission continuously, unlike the expected behavior in projects, especially along the critical chain, where we strive for continuous progress.

Batching of Decisions

Batching of decisions, usually in periodic planning sessions, is widely practiced. The tendency to plan according to time periods can be explained by the need to gather the relevant management team to come up with a periodic financial budget based on forecasts. The targets for the various functions are derived from that periodic plan.

I’ve expressed in previous posts my negative view on one-number forecasts and how they reduce the overall performance of the organization.  My focus here is to highlight that the planning sessions provide an “opportunity” to include other decisions that are not directly related to the purpose of the periodical planning.

Any plan is a combination of interconnected decisions aimed at achieving a certain objective. The plan should include only those decisions where a deviation from them would impact the objective. This message is explained in a previous post called "What is a good plan – the relationships between planning and execution", including the need to plan buffers within the planning.

Does it make sense to include the decision to switch suppliers within the annual planning session aimed at determining the financial boundaries for next year? Is the identity of the specific supplier critical to the quality of that high-level planning?  Suppose there is a small impact of the switch on the total cost of goods – does it justify forcing a decision too early?

The key point is that including decisions with very limited impact on the objective within the planning disrupts the quality of the plan, which needs to be focused only on the critical factors for achieving the objective. It also forces a timing that does not support the quality of the particular decision.

Planning, execution and the right timing of decisions are all part of handling common-and-expected uncertainty. We need to vastly improve the processes that dictate what goes into the planning, what is left for execution, and how the whole variety of non-trivial decisions is handled, including making sure they are made at the right time.

What does Simplicity truly mean?


Goldratt assumed that every organization has to be inherently simple. This recognition is one of the four pillars of TOC, and to my mind the most important one. It clashes directly with the newer Complexity Theory when applied to human organizations.

Comment: I refer in this post only to organizations that have a goal and serve clients.

Is Inherent Simplicity just a philosophical concept without practical impact?

One of the most practical pieces of advice I got from Eli Goldratt was:

If the situation you are looking at seems too complex for you then:

You are looking at too small a subsystem – look at the larger system to see the simplicity

This is very counter-intuitive advice. When you see complexity, should you really look for even more of it? But actually the situation is relieved when you analyze the larger system, because what is important, and mainly what is not important, becomes clearer. A production shop-floor might look very complex to schedule. Only when you include the actual demand, distinguishing between firm and forecasted demand, do you realize what the real constraint is, and only then do the exploitation and subordination become apparent.

The term 'simplicity' needs to be clarified. There are two different definitions of 'complexity', which also clarify what 'simplicity', the opposite, means.

  1. Many variables, with partial dependencies between them, impact the outcomes.
  2. It is difficult to predict the outcome of an action or a change in the value of a certain variable.

The second definition describes why complexity bothers us.  The first one describes what seems to be the cause for the difficulty.

The term 'partial dependency' is what makes the interaction between variables complicated. When the variables are fully dependent on each other, a formula can be developed to predict the combined outcome. When the variables are absolutely independent, it is again easy to calculate the total impact. It is when partial dependencies govern the outcome that prediction becomes difficult.

Examples of independent, fully dependent and partially dependent variables (a small simulation after the list illustrates the difference):

  1. Several units of the same resource. The units are independent of each other.
  2. A production line where every problem stops the whole line. The line works at the pace of the slowest station, and every station is fully dependent on all the other stations in the line.
  3. A regular production floor with different work centers and enough space between them. Every work center is partially dependent on the previous ones to provide enough materials for processing.
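A small simulation with hypothetical rates makes the three cases tangible. The numbers and the tiny line model are mine; the point is only that the first two cases follow a simple formula (a sum, a minimum), while the partially dependent case forces us to follow the actual history of fluctuations:

```python
import random

random.seed(7)
STATIONS = 5

def day_rate():
    """Daily capacity of one station, fluctuating between 90 and 110 units."""
    return random.uniform(90, 110)

# 1. Independent units of the same resource: total capacity is simply the sum.
independent_total = sum(day_rate() for _ in range(STATIONS))

# 2. A fully dependent line: every problem stops everything, so the day's output
#    follows the slowest station - again a simple formula (the minimum).
fully_dependent_output = min(day_rate() for _ in range(STATIONS))

# 3. Partial dependency: each work center can process only what the previous one
#    has already delivered (limited WIP in between), so the output depends on the
#    particular sequence of fluctuations and has no simple closed formula.
def partially_dependent_output(days=100, start_wip=20.0):
    wip = [start_wip] * STATIONS     # material waiting in front of each station
    shipped = 0.0
    for _ in range(days):
        for i in range(STATIONS):
            if i == 0:
                done = day_rate()                # raw material is always available
            else:
                done = min(day_rate(), wip[i])   # limited by what is waiting here
                wip[i] -= done
            if i + 1 < STATIONS:
                wip[i + 1] += done               # feed the next work center
            else:
                shipped += done                  # finished goods leave the line
    return shipped / days

print(round(independent_total), round(fully_dependent_output), round(partially_dependent_output()))
```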

When, on top of the complexity, every variable is exposed to significant variability then the overall complexity is overwhelming.

Can the performance of the organization be truly unpredictable?

You may call this state "chaos", or just "on the verge of chaos"; the point is that clients cannot tolerate such performance. When I'm promised delivery on October 1st at 2pm and the delivery shows up on October 22nd at 6:30am – that is intolerable.

Is it possible to be on the verge of chaos internally, but still provide acceptable delivery to clients?

In order to achieve acceptable reliability, organizations have to become simple enough. The initial impression of complexity is wrong because the partial dependencies are pushed down, so their impact on the deliveries is limited. The reduction of the partial dependencies is achieved by providing excess capacity and long lead-times. TOC simplifies it more effectively by using buffers and buffer management. What we get are good enough predictions of meeting due-dates, and even the ability to promise rapid response to the part of the market that is ready to pay more for quick supply.

Still, the use of the buffers means: the predictability is limited!

Even Inherent Simplicity cannot truly mean precise predictability! The whole idea is to determine the range of our ability to predict.  When CCPM planning of a project predicts completion on June 2017, it actually means no later than June 2017.  It could be completed earlier and we usually like it to be earlier, but the prediction of June 2017 is good enough.

Thus, the simplicity means predictions within an acceptable range!

Does simplicity mean the solution can be described in one paragraph? I doubt whether one paragraph on CCPM is enough to give the user the ability to judge the possible ramifications. Certainly we cannot describe the whole body of knowledge (BOK) of TOC in one paragraph.

Simplicity in communicating an idea means the idea is well understood. This is the meaning of "predictability" when we deal with marketing messages: we are able to predict what the reader, listener or viewer understands! Even here there is a certain range of interpretation that we have to live with.

What about the details of the solution itself? Is the solution necessarily easy to implement?

Easy and simple are not synonymous. The concepts could be simple, but the implementation might face obstacles, usually predictable obstacles, but overcoming the obstacles might be difficult.  So, both simplicity and ease of implementation are highly desirable, but not always perfectly reachable.

We in TOC appreciate simplicity, but achieving it is a challenge. The requirements for truly good solutions are: Simplicity, Viability (possible to do in reality) and Effectiveness (achieving the objective).

An example illustrating the challenge:

Simplified-DBR is a simple, effective solution for reliable delivery in manufacturing. However, for buffer management to work properly we assume that the net touch time is less than 10% of the production lead-time. This is a complication! A solution for manufacturing environments where the net touch time is longer than 10% has been developed. It complicates the information required for buffer management, but it is effective.

I remember my professor of the History of Physics, Prof. Sambursky, who explained to us:

"At all times, since ancient Greece, scientists have looked for the ONE formula that would explain everything. They always came up with such a formula, and then a newly discovered effect did not behave according to it. The formula was corrected to fit the behavior of that effect. Then more new effects contradicted the formula, and the formula became very cumbersome and could not predict the behavior of new effects. Then a new theory came with a new simple formula, and the cycle went on again."

TOC is basically simple. It strives to identify the Inherent Simplicity, come up with simple solutions, simple messages and easy implementations.  But, we have, from time to time, to add something to deal with environments where a certain basic assumption is invalid.   This is, to my mind, the most practically effective way to manage organizations.

Until a new, simpler, yet effective approach emerges.

From a TOC perspective: Paying tribute to a Great Pragmatic Thinker

Written by Dr. Alan Barnard and Eli Schragenheim


We both encountered the name of Prof. Herbert Simon, long before we met Dr. Eli Goldratt. Prof. Herbert Simon (1916-2001), a recipient of the Nobel Prize for Economics in 1978, was an American political scientist, economist, sociologist, psychologist, and computer scientist. Prof. Simon was among the founding fathers of several of today’s most important scientific domains, including artificial intelligence, information processing, decision-making, problem-solving, organization theory, complex systems, and computer simulation.

He coined the terms bounded rationality and satisficing.

Bounded rationality is the idea that, when we make decisions, our rationality is limited not only by the inadequate information available to us and/or inadequate knowledge to predict the outcomes of our decisions, but also by the cognitive limitations of our minds and the limited time available to make these decisions.

Simon coined the term satisficing (a combination of satisfy and suffice) to describe the heuristic we likely use when having to quickly make difficult decisions with inadequate information and/or knowledge.

Simon often said: "people are not optimizers, they are satisficers" – we seek a satisfactory solution rather than an optimal one. When faced with a challenging problem or decision, we search for a solution that satisfies our pre-defined criteria to a sufficient level. When such a solution is reached there is no need to continue searching – we have found a solution that is good enough!

We both rediscovered Simon’s incredible insights, when we recently started doing research on the limitations managers are confronted with, which Theory of Constraints and its applications can help diminish or even eliminate. These are the limitations imposed by complex and uncertain situations as well as by conflicting objectives in solving problems and making decisions.

Below are three of the highlights we found during this “rediscovery”.

In 1971, the world was just at the beginning of the huge advancements in information technology and the exponential growth in the access to information. Yet, Prof. Simon already had the foresight to warn us about one of the major negatives of the increased access to more and more information.

In a public speech he gave in 1971 he warned:

“The wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

And he went further. In his 1973 paper titled, “Applying Information Technology to Organizational Design”, Prof. Simon wrote:

“The information-processing systems of our contemporary world swim in an exceedingly rich soup of information. In a world of this kind, the scarce resource is not information; it is processing capacity to attend to information. Attention is the chief bottleneck in organizational activity, and the bottleneck becomes narrower and narrower as we move to the top of organizations.”

Sound familiar?

Dr. Goldratt gave similar warnings. First, in The Haystack Syndrome, he warned about the importance of differentiating between data and information (the answer to the question asked) and the need to build true information systems that deliver only the information relevant to managers for making important decisions. Later, he also shared his insight that the ultimate constraint in any organization is Management, especially Top Management's limited Attention.

Goldratt explained in “Standing on the Shoulders of Giants” that he simply advanced the work started by Henry Ford and Taiichi Ohno, realizing that to improve flow you need a practical mechanism to prevent overproduction – producing things that are not needed, or at least not needed now.

To our mind, Dr. Goldratt also advanced the work started by Prof. Simon, by outlining practical mechanisms for helping managers decide what to focus on (and as importantly, what not), to better exploit and not waste the scarcest resource in any organization – management’s limited attention.

Considering the growth in Big Data, what are the implications of these warnings on managers’ problem solving and decision making today?

Will Big Data really provide the step-change in managers' and domain specialists' ability to improve the quality and speed of their decisions?

And, even if it did, will it be sufficient?

And the third insight: a citation from Prof. Simon's article entitled "Making Management Decisions: the Role of Intuition":

"What all of these decision-making situations have in common is stress, a powerful force that diverts behavior from the urgings of reason. They are examples of a much broader class of situations in which managers frequently behave in clearly nonproductive ways."

We, Alan and Eli, are deeply interested in the impact of fear on the way managers make decisions and manage organizations. We all fear being blamed for our decisions and actions. So it is safer not to act. But we also fear missing something (in the ocean of data) and being blamed for not acting.  Whether we do (act) or don’t, we are damned.

Fear results in stress. And we know that when people are under stress, they often freeze-up, not doing something they should, or they over-react, doing something they should not. When under fear-induced stress, we often act in irrational ways.

Prof. Simon also frequently warned against excessive fear of unforeseen consequences. He advised that the best way to overcome such fears was to experiment and to see what happens.

We have four questions regarding Prof. Simon's concepts and their implications for managers today.

  1. Are decisions made within the organization also aimed at satisficing rather than at optimizing?  Our reason for asking is our observation that organizations seem to impose the value of optimization on decisions, and by that almost force managers to look beyond satisficing, leading them to be considerably less focused and resulting in decision delays and/or errors.
  2. To what extent today does the fear of being blamed for taking the wrong decisions or wrong actions still cause many of the avoidable decision errors and delays?
  3. To what extent today does the fear of missing something important in the data still cause managers to look at too much data and too many measurements, causing distractions which in turn waste management attention and also result in decision errors and delays?
  4. Why, assuming there is consensus on the timeless importance of the above insights from Prof. Simon, a Nobel Prize recipient, have so few others followed up and continued this important work?

The answers to these questions could hint at why the awareness and adoption of TOC is still much lower than we expected…

What can TOC contribute to a Transportation Organization?


Underlying the current TOC methodologies for Operations is a basic assumption that the available capacity exists in one location. In other words, the resources don't move!

This assumption is, of course, invalid for transportation organizations. The meaning of ‘available capacity’ has to include two additional pieces of information:

  1. Is there available capacity close enough to the required starting point within the appropriate time frame?
  2. When capacity is used, where does the vehicle end up and for how long? Are there opportunities for transport from the proximity of the destination back to the usual location? How long does it take for the vehicle to be available here again?

These additional variables make the transportation business different from the environments in which TOC has been established so far. The dependency on widely spread geographical locations causes low effective utilization of vehicles, while the business still suffers from lost opportunities due to lack of timely capacity. Taking into account that every vehicle is relatively expensive, the challenge of finding more demand for the available capacity is key to a successful transportation business.

From a TOC perspective the vehicles are the internal constraint of the organization, even though there is a lot of idle capacity.

In itself, the service of carrying people or goods from point A to point B is simple. It requires several resources at the same time: a vehicle, a driver, and sometimes a whole crew on the vehicle and in the terminals. Supporting processes include planning the vehicles' missions, maintenance, accepting orders and collecting the money.

A major simplifying factor is that there is no direct interaction between the vehicles.

Thus, exploiting the missions of every vehicle is the key business issue.

Thus, we can look at every single vehicle as an independent constraint! Exploiting one vehicle is only seldom at the expense of another.

Do transportation companies exploit their constraining units?

In a previous post I dealt with an exploitation scheme used by the airlines called "Yield Management" (also Revenue Management), which is basically an exploitation scheme for a single flight (a micro-constraint) through the use of dynamic pricing. The general direction of Yield Management is right, but the airlines use it in an overly extreme way (pathetic in my view) to optimize the revenues within the "noise".

But optimizing the flights, or any transport from A to B, is not necessarily the same as exploiting the capacity of the vehicle! What is missed is the number of trips the vehicle actually makes in a period of time.

A key flawed paradigm of most transportation companies is that the full cost-per-km (or mile) is the only key parameter that dictates whether the specific travel is profitable.  So, every kilometer travelled needs to cover its cost.  The cost includes not just the truly-variable-costs of travelling one kilometer (mainly the fuel), but also the allocated fixed cost associated with the vehicle, especially the purchasing investment of that vehicle.

This paradigm causes the rejection of business opportunities, preferring to leave the vehicle standing still, and certainly never letting the vehicle travel empty unless that travel is paid for by a client.

An example: there is a shipping order from A to B. How should the vehicle come back to A? The obvious wish is to find another shipment that covers the full cost of travelling back. What happens when such an opportunity is available only 24 hours later? Is it obvious that the vehicle should stay idle for 24 hours? The cost-per-km does not address the economics of standing idle.

The TOC solution is to use Throughput Economics to plan the business of transportation. This means, first of all, calculating the true throughput (T) of the whole travel.  Certainly all TVC per kilometer have to be included.

The T-per-travel should lead the company to calculate the total T-per-specific-vehicle for a period of time, like a week or a month. The focus of management should be to maximize the total monthly T for every vehicle.

Planning the generation of next week's T by Vehicle-X involves checking various options from the minute the vehicle is free, taking location and time into account. It could be that the vehicle should come back empty in order to be available at point A for higher-T opportunities.
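Here is a small sketch of that kind of weekly comparison, with invented figures; the point is only that the plan is judged by the vehicle's total T for the week, not by whether each leg covers its allocated cost-per-km:

```python
# T of a trip = revenue minus the truly-variable cost of the kilometers driven,
# empty or loaded. All figures are hypothetical.
TVC_PER_KM = 0.9   # $ per km (mainly fuel)

def trip_T(revenue, km):
    return revenue - km * TVC_PER_KM

# Plan 1: after delivering at B, wait 24 hours for a paid return load;
# the lost day leaves room for only two trips this week.
plan_wait = trip_T(2_000, 500) + trip_T(1_800, 500)

# Plan 2: return empty right away (that leg has negative T), which frees the
# vehicle for two more well-paid jobs from A during the same week.
plan_empty = trip_T(2_000, 500) + trip_T(0, 500) + trip_T(2_000, 500) + trip_T(2_000, 500)

print("Weekly T, waiting for a paid return:", plan_wait)    # 2,900
print("Weekly T, returning empty          :", plan_empty)   # 4,200
```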

Dynamic pricing should be used to encourage potential customers to allow enough lead time, giving the planner better flexibility. There should be a price difference between flexible timing given by the customer and a very specific timing for the service. Certainly, for an urgent service the price should be higher.

This different focus should achieve better exploitation of the constraint(s).

The company still needs to understand and implement subordination. For instance, loading and unloading might take a long time, causing the loss of potential business. Suppose that adding people to help with the loading would significantly shorten that time. Adding people adds delta-Operating-Expenses (delta-OE). The question is: can we gain additional delta-T, higher than the delta-OE, from the time saved?

Isn’t this focus what made Southwest Airlines so successful? Using operational flexibility to subordinate to the most efficient use of the constraint, which is every single aircraft.  The use of the same type of aircraft enables flexible use of pilots. It is just an example of effective subordination.

Strategy according to TOC has to come up with a decisive competitive edge, in the shape of unique value, targeted at big enough market segment(s). Generally speaking, all transportation companies struggle to offer clients the following key values:

  • Reliability, both regarding the agreed upon timing and the safety of the shipment.
  • Fast response to any request.

The difficulty in delivering the above is that the excess capacity is not enough to overcome temporary peaks of demand in one location. Improved exploitation of the pool of vehicles, including clever buffering of commitments to key clients, would improve both the reliability and the response time.

There are two different modes of operation for transportation service:

  • Fixed schedule of transportation from A to B and back from B to A.  The route could cover many intermediate points. The ultimate examples are trains, flights and ships. This way high reliability can be achieved, but there is no ability for fast response or for adjusting the timing. The key challenge is establishing the fixed routes and schedules in a way that maximizes the T-per-vehicle.
  • Flexible routes and schedules.  The ultimate examples are taxis and trucks.

An overall superior Strategy can be developed using collaboration between competitors to deliver better service. Airlines use a certain level of collaboration, allowing passengers to move between airlines for routes not fully covered by one airline. They also collaborate to provide a buffer for passengers when flights are cancelled.

It is my view that additional strategic collaborations can vastly improve the businesses of many transportation companies. For instance, a company located at point A could collaborate with a company located at B to ensure quick returns of vehicles. Answering the real needs of users, coupled with effective control of the T-per-week-per-vehicle, could yield very substantial business improvement for organizations that are open to change.

Are there organizations where TOC is not applicable?


TOC was born on the manufacturing shop-floor. It has expanded into distribution and projects. It has had notable success in healthcare, which is a pretty different environment, where some of the basic concepts had to go through "translation" to fit.

The Thinking Processes (TP) were created with the intent of being applicable to any environment. Twenty-five years after the definition of the main TP tools, we should ask ourselves whether the TP alone is enough to address environments that are very different from the ones where TOC is known to have an impact.

The weakness of starting the analysis with the TP is being swamped by a huge number of undesired effects (UDEs) that might be irrelevant for pinpointing what makes that environment different from what we already know. We do need a practical way to cut corners when we look at an unfamiliar environment. We also, of course, need the opportunity to start working with such an environment. Right now I'd like to deal only with understanding the possible value TOC is able to bring.

If you accept the axiom that every organization is inherently simple, then there must be only a few key differences between environments that truly impact the way to manage them effectively. Under this assumption we can speculate what those few differences are, and use the cause-and-effect process to derive the core problem(s) of the environment, hopefully leading to the identification of the critical flawed assumption(s) that could be challenged.

Here is a list of such environments, where TOC currently has, if at all, only minor impact:

  1. Financial institutions, banks, insurance and credit-card companies.
  2. Transportation companies: in the air, sea and land.
  3. Performing arts organizations: theatres, music (opera) and dance groups. You can add TV stations to that list. I would also include museums in this group.
  4. Organizations for emergency cases: Army and fire-squads.

Let’s have a quick look at the banks.

What makes a bank strictly different from other organizations?

I’m aware of several TOC implementations in banks that were focused on improving the flow of specific missions within the bank. This is certainly valuable, but I doubt whether it touches the core problem.

All for-profit organizations strive to make more money now and in the future. Banks use money as their key resource for producing more money. This forces the banks to inquire in depth into what "money" means, uncovering the "virtual" part of money – the option to lend more money than they actually have. This understanding, which is not part of most other businesses, is a key internal recognition that leads to more specific paradigms that are unique to the financial world.

Banks deliver two very different categories of services:

    • The classic services of providing loans
      • Lending money now in order to get more in the future.
      • From the customer's perception of value: bridging between the times the customer does not have the required money and the times when there is more than enough.
      • A crossover service of providing deposits. The bank needs more money as a resource to use for loans. Customers give a loan to the bank at times when they have enough money, and get back just a little more when they need it.
    • Services of protecting and managing the money of customers, which have nothing to do with loans or interest rates
      • Keeping the money, recording transactions, transferring money, buying and selling shares (when applicable), dealing with different currencies and more.
      • Applications that let customers manage their bank accounts through computers and smartphones.

The simplicity is expressed by noting the growing demand for handling money. The older services of loans and deposits are still required, but there is no basic change in the needs or in the offerings. However, the advance of communication technology opens huge opportunities for the other category of bank services. That technology generates growing expectations among clients for more sophisticated options and information.

The competition in the banking system is shifting towards using the most advanced technology for new services and newer looks. This creates a trend of reducing the number of branches and agents in the banking system, but it also increases the need for state-of-the-art IT, and for the managers and professionals who understand the wishes of the customers and are able to define the requirements for the IT developers. It generates more and more demand for new offerings, which have to be supported by the IT while keeping security high, which adds many more requirements for the IT developers.

This change in the whole environment creates a natural bottleneck in IT projects. The banks used to be in a stalemate, each able to quickly imitate the others. Now the change is leading to growing competition over the best and widest use of the most advanced technology. New threats emerge from companies, like PayPal, that offer services bypassing the banking system. The appearance of Bitcoin is another threat to the banking system.

What value can TOC bring to the banks?

Implementing CCPM in the IT-related projects is a partial solution. CCPM does not have a clear mechanism for generating the right priorities between competing projects. But the TOC school of thought can handle it better than any other method I can think of.

What should an effective project portfolio look like? What is the balance between big projects and small ones? How should the timing of projects be planned so that, completed together, they generate the synergy that supports real growth? There is no ready-made TOC methodology for that, but a group of good TOC experts would be able to develop a good solution.

The above analysis is based on my observations and assumptions, and I'm careful never to say I know. Given a real opportunity to look into a specific bank, the assumptions would have to be validated and the focus might shift a little. It is just an example of the ability to quickly identify the core problem and outline a direction. A similar approach to the other environments mentioned above could show real value pretty quickly.

Throughput-Dollar-days (TDD) and Inventory-Dollar-Days (IDD) – the value and limitations


The concept of multiplying money by time occupied Eli Goldratt's thinking for quite a long time. Money and time represent two different dimensions, and thus the product of the two represents their combined impact.

In the financial world, value-days have been known for a very long time, but with one substantial addition: the concept of the "price of money" is accepted and widely used. The value of the product of money and time can be translated into money in the same way as other types of value. The "price of money" is an interest rate, and it allows quantifying the financial worth of loans as well as investments. More on this later in the post.

The use of dollar-days replaces certain, grossly biased, performance measurements that express

The damage of failing to achieve the delivery commitments

Thus, Throughput-Dollar-Days (TDD) is by definition a negative performance measurement. The best value you can get is zero. The prime use of TDD is measuring on-time, in-full delivery of orders. When an order is late relative to the promise to the client, the T of the order multiplied by the days late is far superior to simple due-date performance. Considering the extent of the lateness creates motivation to minimize it. Without it there is a real concern that once an order is late it loses its priority, because the damage to the measurement has already been done.
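A minimal sketch of the calculation, with invented orders: the TDD of the open orders is the sum, over the late ones, of each order's T multiplied by its days late:

```python
from datetime import date

today = date(2016, 10, 15)

open_orders = [
    # (order id, T of the order in $, promised due date)
    ("A-101", 4_000, date(2016, 10, 1)),
    ("A-102", 9_000, date(2016, 10, 12)),
    ("A-103", 2_500, date(2016, 10, 20)),   # not late yet - contributes nothing
]

def tdd(orders, as_of):
    total = 0
    for order_id, t_value, due in orders:
        days_late = (as_of - due).days
        if days_late > 0:
            total += t_value * days_late
    return total

print("TDD as of", today, "=", tdd(open_orders, today), "dollar-days")
# A-101 contributes 4,000 * 14 = 56,000; A-102 contributes 9,000 * 3 = 27,000.
```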

About twenty years ago the flight captains of El-Al, the Israeli airline, were measured by their on-time pull-back from the terminal, and their bonuses were determined by it. As a traveler I could see the efforts to be on time. But when there was a delay, it was alarming to see how people stopped running and started walking slowly, because they no longer cared.

However, TDD generates several negative branches. The T worth of an order is a key factor in the measurement. But do you really want to give automatically higher priority to a $1,000 order over a $500 order? This measurement does not consider the damage to reputation, nor the characteristics of the client and the level of business with that client. Buffer management, the formal TOC priority scheme in Operations, totally ignores the T of an order when setting the priority. So, is TDD an additional priority mechanism?

Another question is why we use the T of an order rather than its revenue. From the client's perspective the worth of the order is its full price. We want to receive the full payment, including the TVC, as soon as possible. The division of the revenue into TVC and T has nothing to do with the need to get the payment on time. Shouldn't we use RDD (revenue times days late) as the measurement?

My biggest issue with the concept of dollar-days is that it is not intuitive. Dollar-days generate very high numbers, which are quite confusing when compared with the net worth. An order of $1K delayed for 60 days is 60K dollar-days. How clearly does this one number reflect the true state of the situation?

Eli Goldratt wished TDD to become a key measurement for the whole supply chain – keeping every link responsible for possible loss of sales. The practical problem is: how do we measure the TDD of items that are sold from stock? When there is a shortage we suspect some lost sales – it is less clear how many. We can use statistics of sales per day, which are actually based on forecasts. The problem is that forecasts are quite confusing and many do not understand their ramifications.

My conclusion is that TDD has the potential of creating real value, but we should review the logic and be ready to introduce changes.  Reservations and new ideas are welcome.

Inventory-Dollar-Days (IDD), supposedly the twin concept of TDD, is actually a different concept.  The original idea was that while TDD expressed failing to deliver, IDD represents the effectiveness of the investment in inventory.

IDD is the accumulation, over every item in inventory, of its cost – the actual price paid for it – multiplied by the days that have passed since the purchase. So, it represents the dollars invested combined with the time those dollars have been held without generating value.

So, in order to achieve very low TDD we need to invest in inventory. An analysis is required to set a standard for the “right” level of IDD that would achieve reasonable value of TDD.

Does IDD really represent the effectiveness of the investment? IDD does not consider whether the items leaving the IDD calculation have generated money or were simply scrapped. While items spend time in inventory, or are processed but not yet sold, their real monetary worth might change, but IDD cannot reflect that – the real value is revealed only when the item is sold.

What value do we get from IDD?

We can use it to identify items that are both expensive and have spent a long time in inventory, contributing the most to the IDD, motivating operations to get rid of those items. It also motivates the purchasing people to be more careful when they order large amounts of such materials. If this motivation is important, can't we identify those items by cross-referencing the expensive items with their aging? Is the use of one number, which is not intuitive, a better way?

IDD is for inventory and cannot be used for other investments. Suppose we have bought a new machine. The intention is to use it for many years. Dollar-days would accumulate from the day the machine was purchased. Without considering the T generated by that machine, the IDD of infrastructure is useless.

Here comes the concept of 'Flush' as a measurement of such an investment. The dollar-days start with the initial investment. From that date negative dollar-days (DD) accumulate. Additional expenses increase the negative DD. When T is generated, positive DD are added. Hopefully, at a certain time the cumulative investment-DD reaches zero: the DD of the investment is fully recovered. Flush is the number of days until the DD of the original investment break even.
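A small sketch of the Flush mechanics described above, with made-up cash flows: every day the running money balance of the investment is added to the cumulative dollar-days, and Flush is the day that cumulative figure climbs back to zero:

```python
investment          = 500_000   # day 0: the money goes out
daily_extra_expense = 200       # ongoing delta-OE tied to the new machine - hypothetical
daily_T             = 2_500     # T generated by the machine per day - hypothetical

balance       = -investment     # running money balance (negative at the start)
cumulative_dd = 0.0             # running dollar-days of the investment
day = 0
while True:
    day += 1
    balance += daily_T - daily_extra_expense
    cumulative_dd += balance    # today's balance adds (or removes) dollar-days
    if cumulative_dd >= 0 or day > 10_000:   # stop at breakeven (or give up)
        break

print("Flush: the investment's dollar-days break even after", day, "days")
```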

Flush is superior to the simplistic measurement of the time to return the cost of the investment.

But, is Flush superior to Net-Present-Value (NPV), where the DD are converted into money?

Flush ignores whatever happens after the DD break even. More income might be generated, yet it has no impact on Flush. I also think we cannot simply ignore the concept of the "price of money", which is a simpler, yet effective, way to evaluate an investment.

The real difficulty in evaluating an investment is the risk associated with it. Neither Flush nor NPV provides a good answer to that.

Another point that puts Flush in a funny perspective: when one spends money for pleasure, its DD grow to infinity. Does this seem intuitive to you?

Would you like to discuss this further?

Throughput Economics: Discussing the case when materials should be treated as resources

This post is targeted at people who know Throughput Accounting well and are ready for an intellectual exercise. I think the title Throughput Economics fits better than Throughput Accounting in describing a methodology that supports superior decisions but has very little relevance to accounting.

The usual setting is that organizations buy materials or goods they eventually intend to sell, then they use the capacity of internal resources and then they sell.

A key distinction, important for making decisions using the Throughput Economics rules, is that the cost of materials behaves in an almost linear way. When 20% more material is needed, the additional cost of materials is about 20% of the original cost. Transportation and other factors might add some costs, but for being "about right" treating the cost of materials as linear is good enough.

This linearity is quite important for simple, yet effective, calculation of the impact of increased sales resulting from promotions, export or special deals.

For instance, suppose 1,000 items of Product-X are sold for $10 and the cost of material per unit of X is $6. The resulting total T is $4K. If there is demand from a large client for 3,000 additional units of X at only $8 per unit, and that demand does not affect the regular demand, then the additional delta(T) is 3,000*($8-$6) = $6K. Of course, this simple calculation is not enough to fully justify the decision. At the very least there is a need to validate that the available capacity is enough for the total of 4,000 units.

The cost of capacity behaves in a very different way than the cost of materials. Suppose the most loaded resource has enough capacity for up to 3,000 units of X.  This means that by increasing the required quantity of X from 1,000 to 3,000 – there is no extra cost of capacity.

However, there is a need to consider the cost of capacity for processing the extra 1,000 units.

Do we know that cost? We cannot automatically assume it is possible to produce the additional 1,000 units. Sometimes there is no valid way to get the extra capacity needed for such extra demand. In such a case the only option is to free capacity by not producing something else – for instance, by preferring to sell 3,000 units to the client that pays less per unit but generates overall higher T than the regular market. In other cases it is possible to use overtime or outsourcing, but then it is mandatory to calculate the extra cost, delta(OE), required for the extra capacity beyond the regularly available capacity. Thus, the cost of capacity behaves in a non-linear way, while the cost of materials is linear.
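A small sketch of that check for the example above; the 3,000-unit capacity limit comes from the text, while the overtime cost per extra unit is my own assumption:

```python
regular_units = 1_000
deal_units    = 3_000
price_deal    = 8.0
tvc_per_unit  = 6.0              # cost of materials, assumed linear

capacity_limit         = 3_000   # units the most loaded resource can handle in regular time
overtime_cost_per_unit = 1.5     # hypothetical delta(OE) per unit produced beyond the limit

delta_T = deal_units * (price_deal - tvc_per_unit)                         # 3,000 * 2 = 6,000

units_over_capacity = max(0, regular_units + deal_units - capacity_limit)  # 1,000
delta_OE = units_over_capacity * overtime_cost_per_unit                    # 1,500

print("delta T :", delta_T)
print("delta OE:", delta_OE)
print("net gain:", delta_T - delta_OE)   # worth accepting only if clearly positive
```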

What if the organization owns part of the materials?

This is a typical case with manufacturers based on agriculture – meat, wine, juice or other types of food processing – when those organizations also own the farms that raise the key raw materials. Usually those manufacturers are also able to buy additional quantities of those materials from other suppliers.

The farms plan, ahead of time, the amount of materials required for manufacturing. However, the lead-time of agriculture cannot be shortened.  Thus, the initial decision on the quantity of every specific item cannot be altered within the given lead-time.  In most cases the frequency of such decisions is highly limited, mostly only once a year.

There are two ways to model such a case:

  1. Define two different alternative products. The primary products use the self-owned materials. These products have a high T per unit because the cost of those materials is not included in the TVC. However, the primary products are limited by the available amount of materials. When the demand is much higher than that limit, the alternative products – actually exactly the same from the client's perspective – can be produced and offered to the market, but with higher TVC, hence lower T per unit.
  2. Treat the owned materials as if they were resources!

The idea is that resources behave in a similar way to those materials anyway. There is a cost to maintain a certain amount of available capacity. The cost of utilizing the resource for 20%, 60% or 99% of its availability is the same. But when there is a need for capacity beyond that availability, the cost jumps.

This basic non-linearity of the cost of resources forces Throughput Economics to use an algorithm able to consider such behavior in order to support better decisions. In itself this ability is still pretty simple: just noting when a resource starts to be overloaded. Modeling certain materials as resources does not add any new difficulty.

The initial quantity of materials owned by the organization carries a fixed cost, whether or not all of it is used. So, modeling such materials as resources, with the appropriate level of available quantity and a cost figure for adding materials above that level, is in line with the Throughput Economics algorithm for considering resource capacity and its temporary elevation.

The one difference in the Throughput Economics methodology is that purchasing more materials from suppliers should be treated as delta(OE) rather than as part of delta(T). The key supporting information for a decision is delta(T) minus delta(OE), and that result is not affected by the classification.
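
A minimal sketch of the second modeling option; the quantities, the yield per unit and the external purchase price are invented only for the illustration:

```python
# Sketch of modeling owned materials as a "resource": using the owned quantity
# adds no extra cost, while buying more from outside suppliers adds delta(OE).
# All names and numbers below are invented for the illustration.

owned_kg = 50_000            # quantity of the self-grown raw material on hand
external_price_per_kg = 2.0  # assumed price when buying more from suppliers
kg_per_unit = 5              # raw material needed per unit of finished product

def material_delta_oe(units_to_produce: int) -> float:
    """delta(OE) for material: zero while the owned stock suffices, then it jumps."""
    required_kg = units_to_produce * kg_per_unit
    extra_kg = max(0, required_kg - owned_kg)
    return extra_kg * external_price_per_kg

# Producing 12,000 units needs 60,000 kg: 10,000 kg must be bought outside.
print(f"delta(OE) for materials: ${material_delta_oe(12_000):,.0f}")  # $20,000
```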

My general insight:

We like linear behavior because it is simple and, in most cases, good enough for supporting decisions. We are all aware that reality does not truly behave in a linear fashion. There is a common temptation to look for more precise solutions by modeling the full complexity. Falling into this trap yields confusion and a far inferior level of performance.

The insight is that even when we face truly non-linear behavior, it is still relatively simple: all we have to do is consider the sudden jumps at certain values.

Eventually we are able to reasonably predict the impact of decisions and changes within a certain range of possible outcomes, and that is much better than what we get by trying to model the full complexity.
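
To put the insight in a generic form, here is a small sketch, with arbitrary numbers, of the piecewise behavior meant by "sudden jumps at certain values":

```python
# Generic sketch of the "simple" non-linearity: flat cost up to a limit, then a
# linear rise. Within each segment the behavior is linear and easy to predict.
# The numbers in the checks below are arbitrary, chosen only for the sketch.

def capacity_cost(load: float, available: float,
                  fixed_cost: float, extra_cost_per_unit: float) -> float:
    """Cost of capacity: fixed up to availability, then rising per extra unit."""
    if load <= available:
        return fixed_cost
    return fixed_cost + (load - available) * extra_cost_per_unit

# Within the available range the cost does not change at all...
assert capacity_cost(2_500, 3_000, 10_000, 1.5) == capacity_cost(500, 3_000, 10_000, 1.5)
# ...but past the jump point a marginal cost appears.
assert capacity_cost(4_000, 3_000, 10_000, 1.5) == 10_000 + 1_000 * 1.5
```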

Analysis of an incident of bad service by PayPal as an opportunity to learn


PayPal offers a service of safe payment transfer over the internet, competing with the use of credit cards and with the banking system. The safety and ease of use create a decisive competitive edge (DCE). Any DCE should be under constant scrutiny: the competition might close the gap, and the company itself might give bad service that eliminates its own advantage.

It all started for me with a transfer of a small amount of money from the US. I got a happy email telling me to claim the money through my PayPal account. The problem was:

In my account there was no mention of any payment!

This is a typical moment where the basic service collapses and there is an urgent need for customer service.

To cut a long story short: I did not receive the money, and I spent a very long time with PayPal customer service in Israel, spoke with five different people, and exchanged several messages in between. The service agents gave me several guesses about what had happened, which did not improve my level of frustration. Eventually the following verdict came:

The sender of the money defined the transaction as "personal", which is allowed in the US but not in Israel.

According to this policy PayPal should have rejected the transfer upfront. Another failure is that the customer service people had not been exposed to the information that the transaction was rejected.

The truly big failure: PayPal did not let me know that the transaction was rejected, after having told me I could claim the money.

Why?

Is it a PayPal policy that it has NO obligation to the recipient of the transaction?

Such a failure in performing a simple service could be dangerous to PayPal. I think this incident should invoke learning from experience to draw the right lessons. When clients, like me, are exposed to such treatment, they will not want to do more business with PayPal, at least as long as there are other acceptable service providers.

The main gap in performance is:

Expectation: when an account receives a notification that money has been received, the money truly lies in that account.

Actual outcome: I got the email, but the money was not there!

The next step is to find out the facts that led to the above gap. Through my long talks with customer service I was exposed to some facts:

  • It seems the transaction was declined. The official reason is that I, the recipient, declined it. This is not true!
  • PayPal Israel, where I live, has a policy of not dealing with personal transactions, even though PayPal US delivers that service.

My main hypothesis:

When a personal transaction is initiated in the US, the global PayPal system accepts it. In Israel there is a check, probably through human intervention, whether the transaction is personal or business, and personal transactions are declined. No information is recorded regarding the reason the transaction was declined, so the obvious interpretation is that the recipient rejected the payment.

All the other hypotheses I can think of, like a major bug in the IT system, seem to me unreasonable for such a sensitive international organization. So, I assume the above hypothesis is right.

Behind the facts that led to a gap between expectations and outcomes there is always a flawed paradigm!

If my hypothesis is right then the flawed paradigm is:

The policies of any specific local market can be set independently of the policies in other local markets, without serious negative ramifications.

This is a significantly flawed paradigm. Different local policies do often clash, especially when international operations are at the core of the service. If PayPal Israel thinks it is good to focus solely on business transactions, while in other parts of the world personal transactions are gladly accepted, then both directions of such transactions should still be handled smoothly. However, while it is relatively easy to prevent personal transactions from Israel to the US, it is not easy to stop, in time, a transaction from the US to Israel. In order to make the paradigm valid, a super-sophisticated system would have to be in place, containing all the policies everywhere, with the ability to identify from an email address where in the world the recipient is located and what the rules are in that location. Sophisticated systems are far more exposed to bugs than simple, straightforward systems.

The failure I have been exposed to is just one example of many possible ways of failing to deliver the promised service.

This means that decisions made in PayPal Israel might have a damaging effect on the reputation and reliability of PayPal throughout the world!

There are several options to fix the original paradigm, and every alternative has its own negative branches. One alternative is to develop the super-sophisticated system.

Another alternative is that local policies would apply only to outgoing transactions, not to inbound ones.

Yet another alternative is that PayPal decides to deliver the same service, using the same policies and rules, all over the globe. I'd choose this alternative, but that calls for a different kind of analysis.

I’d be glad if PayPal officials would participate in this discussion concentrating on the generic case.

In a learning organization this is exactly what would happen.