What does FOCUS mean?


The vast majority of managers believe that focusing is absolutely necessary for managing organizations.

If this is the case, then FOCUS, as the key short description of TOC, has very little to offer managers.

Let’s consider the possible consequences of the Pareto Law. A typical company gets 80% of its revenues from 20% of its clients.  How about focusing on the 20% of big clients and dumping the other 80%?  Does that make sense?

The point is that the real question is not whether to focus or not, but:

What to focus on?

And, even more importantly:

What NOT to focus on?

The reason for emphasizing what NOT to focus on is that the need to focus is caused by limited capacity: it is impossible to focus on everything and draw the full value from it all.  The limitation could be a capacity constraint that forces us to concentrate on whatever exploits that resource.  Empowering subordinates is a means for an individual manager to focus on the key issues without becoming the constraint of the organization.  In many cases the critical limitation is the inability of the management team to multi-task without delaying truly important issues that require action.  This is what is meant by management attention being the ultimate constraint on designing the future of the organization.

Giving up part of the market demand could make sense only when more overall sales, more total Throughput, could be materialized.  Only in very rare cases is it possible to reduce the OE, after giving up certain demand, to a level where T-OE improves.  Dropping 80% of the clients, the smaller clients that yield only 20% of the turnover, would almost never reduce the OE by what is required to compensate for the lost T, which is significantly more than 20% of the OE.  This is due to the non-linearity of OE: reducing capacity requirements does NOT yield an OE reduction at the same rate.  Just think about the space the organization occupies and whether the reduced number of clients would allow using less space; even when that is possible, it might be impossible to save OE because of it.
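To make the arithmetic concrete, here is a minimal sketch with purely hypothetical figures (the 5% OE saving is my assumption for illustration, not data from any real company):

```python
# Hypothetical illustration of the "dump the small clients" trap.
total_T = 10_000_000               # annual Throughput (T)
T_small_clients = 0.20 * total_T   # the small 80% of clients yield 20% of T

OE = 8_000_000                     # annual Operating Expense
OE_saved = 0.05 * OE               # assumed: dropping them frees only ~5% of OE,
                                   # since space, salaries etc. do not shrink linearly

print(total_T - OE)                                   # before: 2,000,000
print((total_T - T_small_clients) - (OE - OE_saved))  # after:    400,000
```

Unless dropping the small clients released about 2M of OE, which almost never happens, T-OE deteriorates.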

FOCUS should NOT be interpreted as attending to just one specific area. It has to be linked to an estimation of how much the available capacity can effectively focus on without considerable delays to the other areas and topics.  And remember the following Goldratt insight:

Segment your Market – Do Not Segment your Resources!

The idea is that many of our resources are able to serve a variety of different products and services targeted at different segments. This is a very beneficial capability.  Effective focusing should exploit the weakest link, based on its limiting capacity.  In most cases the exploitation encourages serving several market segments, but not too many.

The question of what to focus on runs through all the different parts of TOC, always looking, first of all, at the limitation, and from that the answer is derived.  In DBR it is the constraint.  In Buffer Management the question gets a small twist: “what should we do NOW, or else the subordination might fail?” The Current-Reality-Tree defines the core problem of the organization, which is the first focus for designing the future. CCPM focuses on the Critical Chain rather than on the Critical Path, also pointing to multi-tasking as a lack of focus that causes huge damage.  The key concept in Strategy and Tactic (S&T) is the Decisive-Competitive-Edge (DCE), which again points to where the focus should be.  The DCE is actually based on an identified limitation of the client, one that the organization has the capability of removing or reducing.  Building a DCE is a huge challenge that adds considerable load to all managers and professionals, so it makes sense to avoid pursuing more than one DCE at a time.

Goldratt brilliantly used a Hebrew slang word, actually borrowed from Russian, “choopchik”, to describe an issue with very low impact on performance. The whole point is that choopchiks do have a certain positive impact, which makes them tempting to tackle, but tackling them causes a huge loss by displacing the vastly more important missions.  I look at choopchiks as a key concept of TOC, directly derived from the search for the right focus.

The TOC notion of focus is also tied to recognizing the impact of uncertainty on management. A choopchik impacts the performance of the organization less than the normal existing variability; call it the level of the “noise”.  With such a small impact you never know whether there has been any actual benefit in a real case.  Worthy missions have an impact that is considerably bigger than the noise.

What to focus on is a key to achieving better and better performance. The elements involved are the diagnosis of the current state, the few variables that dictate the global performance, and what could go wrong.   Mistakes in what to focus on are common, and they are main causes for the death of organizations and for so many merely being in survival mode.

What is the right time to make a decision?


Every decision is a choice between alternatives. Another element is the right time for the decision to be made. Very few decisions force the manager to decide immediately.  Such a forced decision is in itself an undesired situation, where a threat has emerged as a complete surprise.  Most decisions leave the decision maker enough time.

Facing substantial uncertainty suggests that every decision should be delayed until the last moment that still allows executing the decision in full.  The underlying assumption is that time adds information that reduces the uncertainty.

There are serious negative branches to the above logical claim. All of them look at what might go wrong with the suggestion.  Here are several of them:

  • We don’t truly know the exact timing of “the last moment”, so we may miss it.
  • We might forget to make the decision at the right moment.
  • Our attention might be occupied by more important issues at the critical time.
  • Making the decision at the very last moment makes it urgent and generates stress. The stress might push us into making the wrong decision!

Inspired by the idea of time buffers, we should treat every non-trivial decision as a mission that has to be completed by a given time, and assign a time buffer to that mission.  According to the DBR interpretation of time buffers, the mission should not start prior to the buffer.  The time buffer itself should provide enough time to deal with all the other demands on attention, without creating stress beyond the need to make the RIGHT decision.

Managing the execution of missions by individuals or teams through assigning time-buffers, and using buffer management as a control mechanism, is a much more effective process than check-lists. It reduces multi-tasking through buffer-management priorities and prevents handling missions, especially decisions, too early. Only non-trivial tasks should be included in the missions.  It is a step forward in understanding the behavior of the capacity (attention) of individual managers.  It would also clarify the distinction between missions and projects.
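As a minimal sketch of how such priorities could be computed (the red/green reading follows the usual buffer-management convention; the dates anticipate the supplier example below):

```python
from datetime import date

# Priority of a mission = the fraction of its time buffer already consumed.
# Roughly: below 1/3 is "green", above 2/3 is "red" - act on it now.
def buffer_penetration(start: date, due: date, today: date) -> float:
    return (today - start).days / (due - start).days

p = buffer_penetration(date(2016, 11, 10), date(2016, 12, 1), date(2016, 11, 25))
print(round(p, 2))   # 0.71 -> red zone: this mission now outranks the others
```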

An example

Suppose a decision to stop working with a certain supplier and switch to another is being considered. The decision process requires updated information on the trouble with the current supplier and, mainly, finding alternative suppliers: inquiring how they are evaluated by their clients, whether they have the specific capabilities and, of course, their pricing.

When is the deadline to make the above decision?

Suppose the contract with the current supplier ends on December 31st, 2016.  If the contract is not going to be extended, it is fair to announce that by December 1st, which also leaves enough time for finalizing the contract with the new supplier. The mission includes getting the relevant information, bringing it to the decision maker(s) and allowing 1-2 hours for the decision itself.  Assigning three weeks for the whole mission is reasonable.  This means no one should work on that mission prior to November 10th!
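The date arithmetic, as a minimal sketch (the 30-day notice and the three-week buffer are the assumptions stated above):

```python
from datetime import date, timedelta

contract_end = date(2016, 12, 31)
decision_deadline = contract_end - timedelta(days=30)   # announce by Dec 1st
latest_start = decision_deadline - timedelta(weeks=3)   # the mission's buffer

print(decision_deadline)   # 2016-12-01
print(latest_start)        # 2016-11-10: nobody should touch the mission before this
```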

The impact of the criticality of the decision

Goldratt said: “Don’t ever let something important become urgent”.

The practical lesson is: important decisions should be given a reasonable time buffer.  Very important decisions, those we call ‘critical’, should be given a longer time buffer, ensuring the decision is not taken under stress. Of course, a critical decision might still be made under stress because of possible negative ramifications for which no viable solution has been found.  This post focuses only on the time element.

The suggested process expands what TOC has developed for the production-floor to the work of managers taking responsibility for important missions.

Comment: a mission can be viewed as a very small and simple project, but it does not make sense to work on a mission continuously, unlike the expected behavior in projects, especially along the critical chain, where we strive for continuous progress.

Batching of Decisions

Batching decisions, usually in periodical planning sessions, is widely done.  The tendency to plan according to time periods can be explained by the need to gather the relevant management team for a periodic financial budgeting process based on forecasts.  The targets for the various functions are derived from that periodic plan.

I’ve expressed in previous posts my negative view of one-number forecasts and how they reduce the overall performance of the organization.  My focus here is to highlight that the planning sessions provide an “opportunity” to include other decisions that are not directly related to the purpose of the periodic planning.

Any plan is a combination of decisions that are interconnected in order to achieve a certain objective. The plan should include only those decisions where any deviation from them would impact the objective.  This message is explained in a previous post called “What is a good plan – the relationships between planning and execution”, including the need to plan buffers within the planning.

Does it make sense to include the decision to switch suppliers within the annual planning session aimed at determining the financial boundaries for next year? Is the identity of the specific supplier critical to the quality of that high-level planning?  Suppose the switch has only a small impact on the total cost of goods – does that justify forcing the decision too early?

The key point is that including decisions with very limited impact on the objective within the planning disrupts the quality of the plan, which needs to focus on just the critical factors for achieving the objective. It also forces a timing that does not support the quality of the particular decision.

Planning, execution and the right timing of decisions are all part of handling common-and-expected uncertainty. We need to vastly improve the processes that dictate what goes into the planning, what is left for execution, and the handling of the whole variety of non-trivial decisions, including making sure they are made at the right time.

What does Simplicity truly mean?


Goldratt assumed that every organization has to be inherently simple.  This recognition is one of the four pillars of TOC, and to my mind the most important.  It is in direct clash with the new Complexity Theory when applied to human organizations.

Comment: I refer in this post only to organizations that have a goal and serve clients.

Is Inherent Simplicity just a philosophical concept without practical impact?

One of the most practical pieces of advice I got from Eli Goldratt was:

If the situation you are looking at seems too complex for you then:

You are looking at too small a subsystem – look at the larger system to see the simplicity

This is very counter-intuitive advice. When you see complexity, should you look for even more complexity? But actually the situation is relieved when you analyze the larger system, because what is important, and mainly what is not important, becomes clearer.  A production shop-floor might look very complex to schedule.  Only when you include the actual demand, distinguishing between firm and forecasted demand, do you realize what the real constraint is, and only then do the exploitation and subordination become apparent.

The term ‘simplicity’ needs to be clarified. There are two different definitions of ‘complexity’, which also clarify what ‘simplicity’, its opposite, means.

  1. Many variables, with partial dependencies between them, impact the outcomes.
  2. It is difficult to predict the outcome of an action or a change in the value of a certain variable.

The second definition describes why complexity bothers us.  The first one describes what seems to be the cause for the difficulty.

The term ‘partial dependency’ is what makes the interaction between variables complicated. When the variables are fully dependent on each other, a formula can be developed to predict the combined outcome.   When the variables are absolutely independent, it is again easy to calculate the total impact.  It is when partial dependencies govern the output that prediction becomes difficult.

Examples of independent, fully dependent, and partially dependent variables (a small simulation sketch follows the list):

  1. Several units of the same resource. The units are independent of each other.
  2. A production line where every problem stops the whole line. The line certainly works according to the pace of the slowest station, and every station is fully dependent on all the other stations in the line.
  3. A regular production floor with different work centers and enough space between them. Every work center is partially dependent on the previous ones providing enough material for processing.
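A minimal simulation sketch (all the distributions and numbers are illustrative assumptions) makes the distinction visible: the independent units simply add up, the fully dependent line is simply its slowest station, but the partially dependent flow has to be simulated to be predicted:

```python
import random

random.seed(1)
DAYS, STATIONS = 1000, 4

def daily_capacity():
    return random.randint(8, 12)   # units/day, same variability everywhere

# Independent units: total output is just the sum of the units.
independent = sum(daily_capacity() for _ in range(DAYS * STATIONS)) / DAYS

# Fully dependent line: each day's output is the slowest station's capacity.
line = sum(min(daily_capacity() for _ in range(STATIONS)) for _ in range(DAYS)) / DAYS

# Partial dependency: each station can only process what its predecessor
# has already finished; leftovers wait in a buffer, so history matters.
buffers = [float("inf")] + [0.0] * STATIONS   # unlimited raw material feeds station 0
done_total = 0.0
for _ in range(DAYS):
    for s in range(STATIONS):
        moved = min(daily_capacity(), buffers[s])
        if buffers[s] != float("inf"):
            buffers[s] -= moved
        buffers[s + 1] += moved
    done_total += buffers[STATIONS]
    buffers[STATIONS] = 0.0

print(independent, line, done_total / DAYS)
# ~40/day for four independent units, ~8.6/day for the locked line, and a
# value in between (per line) for the buffered flow - with no simple formula.
```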

When, on top of the complexity, every variable is exposed to significant variability, the overall complexity becomes overwhelming.

Can the performance of the organization be truly unpredictable?

You may call this state “chaos”, or just “on the verge of chaos”; the point is that clients cannot tolerate such performance.  When I’m promised delivery on October 1st at 2pm and the delivery shows up on October 22nd at 6:30am – that is intolerable.

Is it possible to be on the verge of chaos internally, but still provide acceptable delivery to clients?

In order to achieve acceptable reliability, organizations have to become simple enough.  The initial impression of complexity is wrong because the partial dependencies are pushed down, so their impact on deliveries is limited.  The reduction of the partial dependencies is achieved by providing excess capacity and long lead-times.  TOC simplifies more effectively by using buffers and buffer management.  What we get is good-enough predictions of meeting due-dates, and even the ability to promise rapid response to the part of the market that is ready to pay more for quick supply.

Still, the use of the buffers means: the predictability is limited!

Even Inherent Simplicity cannot truly mean precise predictability! The whole idea is to determine the range of our ability to predict.  When the CCPM plan of a project predicts completion in June 2017, it actually means no later than June 2017.  It could be completed earlier, and we usually like it to be earlier, but the prediction of June 2017 is good enough.

Thus, simplicity means predictions within an acceptable range!

Does simplicity mean the solution can be described in one paragraph? I doubt whether one paragraph on CCPM is enough to give the user the ability to judge the possible ramifications.  Certainly we cannot describe the BOK of TOC in one paragraph.

Simplicity in radiating an idea means the idea is well understood. This is the meaning of “predictability” when we deal with marketing messages:  we are able to predict what the reader, listener or spectator understands!  Even here there is a certain range of interpretation that we have to live with.

What about the details of the solution itself? Is the solution necessarily easy to implement?

Easy and simple are not synonymous. The concepts could be simple, but the implementation might face obstacles, usually predictable ones, and overcoming them might be difficult.  So, both simplicity and ease of implementation are highly desirable, but not always perfectly reachable.

We in TOC appreciate simplicity, but achieving it is a challenge. The requirements for truly good solutions are: Simplicity, Viability (possible to do in reality) and Effectiveness (achieving the objective).

An example illustrating the challenge:

Simplified-DBR is a simple, effective solution for reliable delivery in manufacturing. However, for buffer management to work properly we assume the net touch time is less than 10% of the production lead-time.  This is a complication!  A solution has been developed for manufacturing environments where the net touch time is longer than 10%. It complicates the information required for buffer management, but it is effective.

I remember my professor of the History of Physics, Prof. Sambursky, who explained to us:

“At all times, since ancient Greece, scientists have looked for the ONE formula that would explain everything. They always came up with such a formula, and then a newly discovered effect did not behave according to it.  The formula was corrected to fit the behavior of that effect.  Then more new effects contradicted the formula, and the formula became very cumbersome and could not predict the behaviors of new effects.  Then a new theory came with a new simple formula, and the cycle went on again.”

TOC is basically simple. It strives to identify the Inherent Simplicity and come up with simple solutions, simple messages and easy implementations.  But we have, from time to time, to add something to deal with environments where a certain basic assumption is invalid.   This is, to my mind, the most practically effective way to manage organizations.

Until a new, simpler, yet effective, approach emerges…

From a TOC perspective: Paying tribute to a Great Pragmatic Thinker

Written by Dr. Alan Barnard and Eli Schragenheim


We both encountered the name of Prof. Herbert Simon, long before we met Dr. Eli Goldratt. Prof. Herbert Simon (1916-2001), a recipient of the Nobel Prize for Economics in 1978, was an American political scientist, economist, sociologist, psychologist, and computer scientist. Prof. Simon was among the founding fathers of several of today’s most important scientific domains, including artificial intelligence, information processing, decision-making, problem-solving, organization theory, complex systems, and computer simulation.

He coined the terms bounded rationality and satisficing.

Bounded rationality is the idea that, when we make decisions, our rationality is limited not only by the inadequate information available to us and/or our inadequate knowledge to predict the outcomes of our decisions, but also by the cognitive limitations of our minds and the limited time available to make these decisions.

Simon coined the term satisficing (a combination of satisfy and suffice) to describe the heuristic we likely use when having to quickly make difficult decisions with inadequate information and/or knowledge.

Simon often said: “people are not optimizers, they are satisficers” – we seek a satisfactory solution rather than an optimal one. When faced with a challenging problem or decision, we search for a solution that satisfies our pre-defined criteria to a sufficient level.  When such a solution is reached there is no need to continue searching – we have found a solution that is good enough!

We both rediscovered Simon’s incredible insights when we recently started doing research on the limitations managers are confronted with, which the Theory of Constraints and its applications can help diminish or even eliminate. These are the limitations imposed by complex and uncertain situations, as well as by conflicting objectives, in solving problems and making decisions.

Below are three of the highlights we found during this “rediscovery”.

In 1971, the world was just at the beginning of the huge advancements in information technology and the exponential growth in the access to information. Yet, Prof. Simon already had the foresight to warn us about one of the major negatives of the increased access to more and more information.

In a public speech he gave in 1971 he warned:

“The wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

And he went further. In his 1973 paper titled, “Applying Information Technology to Organizational Design”, Prof. Simon wrote:

“The information-processing systems of our contemporary world swim in an exceedingly rich soup of information. In a world of this kind, the scarce resource is not information; it is processing capacity to attend to information. Attention is the chief bottleneck in organizational activity, and the bottleneck becomes narrower and narrower as we move to the top of organizations.”

Sounds familiar?

Dr. Goldratt gave similar warnings. First, in The Haystack Syndrome, he warned about the importance of differentiating between data and information (the answer to the question asked) and the need to build true information systems that deliver only the information relevant to managers for making important decisions. Later, he also shared his insight that the ultimate constraint in any organization is Management, especially Top Management’s limited Attention.

Goldratt explained in “Standing on the Shoulders of Giants” that he simply advanced the work started by Henry Ford and Taiichi Ohno, realizing that to improve flow you need a practical mechanism to prevent overproduction – producing things that are not needed, or at least not needed now.

To our mind, Dr. Goldratt also advanced the work started by Prof. Simon, by outlining practical mechanisms for helping managers decide what to focus on (and as importantly, what not), to better exploit and not waste the scarcest resource in any organization – management’s limited attention.

Considering the growth in Big Data, what are the implications of these warnings for managers’ problem solving and decision making today?

Will Big Data really provide the step-change in managers’ and domain specialists’ ability to improve the quality and speed of their decisions?

And, even if it did, will it be sufficient?

And the third insight: a citation from Prof. Simon’s article entitled “Making Management Decisions: the Role of Intuition and Emotion”:

“What all of these decision-making situations have in common is stress, a powerful force that can divert behavior from the urgings of reason. They are examples of a much broader class of situations in which managers frequently behave in clearly nonproductive ways.”

We, Alan and Eli, are deeply interested in the impact of fear on the way managers make decisions and manage organizations. We all fear being blamed for our decisions and actions. So it is safer not to act. But we also fear missing something (in the ocean of data) and being blamed for not acting.  Whether we do (act) or don’t, we are damned.

Fear results in stress. And we know that when people are under stress, they often freeze-up, not doing something they should, or they over-react, doing something they should not. When under fear-induced stress, we often act in irrational ways.

Prof. Simon also frequently warned against excessive fear of unforeseen consequences. He advised that the best way to overcome such fears was to experiment and to see what happens.

We have four questions regarding Prof. Simon’s concepts and their implications for managers today.

  1. Are decisions made within the organization also aimed at satisficing rather than at optimizing?  Our reason for asking is our observation that organizations seem to impose the value of optimization on decisions, and by that almost force managers to look beyond satisficing, leading them to be considerably less focused and resulting in decision delays and/or errors.
  2. To what extent today does the fear of being blamed for making the wrong decisions or taking the wrong actions still cause many of the avoidable decision errors and delays?
  3. To what extent today does the fear of missing something important in the data still cause managers to look at too much data and too many measurements, causing distractions which, in turn, waste management attention and also result in decision errors and delays?
  4. Assuming there is consensus on the timeless importance of the above insights from Prof. Simon, a Nobel Prize recipient, why have so many others not followed and continued this important work?

The answers to these questions could hint at why the awareness and adoption of TOC is still much lower than we expected…

What can TOC contribute to a Transportation Organization?


Underlying the current TOC methodologies for Operations is a basic assumption that the available capacity exists in one location. In other words, the resources don’t move!

This assumption is, of course, invalid for transportation organizations. The meaning of ‘available capacity’ has to include two additional pieces of information:

  1. Is there available capacity close enough to the required starting point within the appropriate time frame?
  2. Where to, and for how long? Are there opportunities for transport from the proximity of the destination back to the usual location? How long does it take for the vehicle to be available here again?

These additional variables make the transportation business different from the environments TOC has been established in so far. The dependency on wide geographical spread causes low effective utilization of vehicles, while the business still suffers lost opportunities due to lack of timely capacity.  Taking into account that every vehicle is relatively expensive, the challenge of finding more demand for the available capacity is key to a successful transportation business.

From a TOC perspective the vehicles are the internal constraint of the organization, even though there is a lot of idle capacity.

In itself, the service of carrying people or goods from point A to point B is simple. It requires several resources at the same time: a vehicle, a driver, sometimes a whole crew on the vehicle and in the terminals.  Supporting processes are planning the vehicles’ missions, maintenance, accepting orders and collecting the money.

A major simplifying factor is that there is no direct interaction between the vehicles.

Thus, exploiting the missions of every vehicle is the key business issue.

Thus, we can look at every single vehicle as an independent constraint! Exploiting one vehicle only seldom comes at the expense of another.

Do transporting companies exploit their constraining units?

In a previous post I dealt with an exploitation scheme used by the airlines called “Yield Management” (also Revenue Management), which is basically an exploitation scheme for a single flight (a micro-constraint) through the use of dynamic pricing.  The general direction of Yield Management is right, but the airlines use it in an overly extreme way (pathetic, to my view) to optimize revenues within the “noise”.

But optimizing the flights, or any transport from A to B, is not necessarily the same as exploiting the capacity of the vehicle! What is missed is the number of trips the vehicle actually makes in a given period of time.

A key flawed paradigm of most transportation companies is that the full cost-per-km (or mile) is the key parameter that dictates whether a specific trip is profitable.  So every kilometer travelled needs to cover its cost.  The cost includes not just the truly-variable-costs of travelling one kilometer (mainly the fuel), but also the allocated fixed cost associated with the vehicle, especially the purchase investment in that vehicle.

This paradigm causes companies to reject business opportunities, preferring to leave the vehicle standing still, and certainly never letting the vehicle travel empty unless that travel is paid for by a client.

An example:  there is a shipping order from A to B. How should the vehicle come back to A?  The obvious wish is to find another shipment to cover the full cost of travelling back.  What happens when such an opportunity arises only 24 hours later?  Is it obvious that the vehicle should stand idle for 24 hours?  The cost-per-km does not address the economics of standing idle.

The TOC solution is to use Throughput Economics to plan the transportation business. This means, first of all, calculating the true Throughput (T) of the whole trip.  Certainly all the TVC per kilometer have to be included.

The T-per-trip should lead the company to calculate the total T-per-specific-vehicle for a period of time, like a week or a month. The focus of management should be maximizing the total monthly T of every vehicle.

Planning the generation of next week’s T by Vehicle-X involves checking various options from the minute the vehicle is free, taking location and time into account.  It could be that the vehicle should come back empty in order to be available at point A for higher-T opportunities.
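A minimal sketch, with purely hypothetical prices, distances and fuel rates, of comparing options by total T per vehicle per period rather than by cost-per-km:

```python
FUEL_PER_KM = 0.40      # assumed truly-variable cost per km
DIST_AB = 500           # km between A and B

def T(revenue, km):     # Throughput of a leg: revenue minus its TVC
    return revenue - FUEL_PER_KM * km

# Option 1: wait 24h at B for a paid backhaul worth 900.
opt1 = T(900, DIST_AB)                # = 700, over 3 elapsed days

# Option 2: return empty at once, then run a new 700-km job worth 1,400 from A.
opt2 = T(0, DIST_AB) + T(1400, 700)   # = -200 + 1120 = 920, same 3 days

print(opt1 / 3, opt2 / 3)   # T per day: ~233 vs ~307 -> the empty leg wins here
```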

Dynamic pricing should be used to encourage potential customers to allow enough lead time, giving the planner better flexibility.  There should be a price difference between flexible timing offered by the customer and a very specific timing for the service.  Certainly, for an urgent service the price should be higher.

This different focus should achieve better exploitation of the constraint(s).

The company still needs to understand and implement subordination. For instance, loading and unloading might take a long time, causing the loss of potential business.  Suppose that adding people to help with the loading would significantly shorten that time.  Adding people adds delta-Operating-Expenses (delta-OE). The question is: can we, by saving time, get additional delta-T that is higher than the delta-OE?
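A tiny worked check of that question, with assumed numbers (reusing the ~700-T trip from the sketch above):

```python
delta_OE = 1_200        # assumed weekly cost of two extra loaders
delta_T = 2 * 700       # assumed: faster turnaround frees time for two more trips

print(delta_T - delta_OE)   # +200 per week -> the extra loaders pay for themselves
```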

Isn’t this focus what made Southwest Airlines so successful? Using operational flexibility to subordinate to the most efficient use of the constraint, which is every single aircraft.  The use of a single type of aircraft enables flexible use of pilots. It is just one example of effective subordination.

Strategy according to TOC has to come up with a decisive-competitive-edge, in the shape of unique value, targeted at big enough market segment(s).   Generally speaking, all transportation companies struggle to offer clients the following key values:

  • Reliability, both regarding the agreed upon timing and the safety of the shipment.
  • Fast response to any request.

The difficulty in delivering the above is that the excess capacity is not enough to overcome temporary peaks of demand in one location. Improved exploitation of the pool of vehicles, including clever buffering of commitments to key clients, would improve both reliability and speed of response.

There are two different modes of operation for transportation service:

  • Fixed schedules of transportation from A to B and back from B to A.  The route could cover many intermediate points. The ultimate examples are trains, flights and ships. This way high reliability can be achieved, but there is no ability for fast response or for adjusting the timing. The key challenge is establishing the fixed routes and schedules in a way that maximizes the T-per-vehicle.
  • Flexible routes and schedules.   The ultimate examples are taxis and trucks.

An overall superior Strategy can be developed using collaboration between competitors to deliver better service. Airlines already use a certain level of collaboration, allowing passengers to move between airlines for routes not fully covered by one airline.  They also collaborate to provide a buffer for passengers when flights are cancelled.

It is my view that additional strategic collaborations can vastly improve the businesses of many transportation companies. For instance, a company located at point A could collaborate with a company located at B to ensure quick returns of vehicles.  Answering the real needs of users, coupled with effective control of the T-per-week-per-vehicle, could bring very substantial business improvement to organizations that are open to change.

Are there organizations where TOC is not applicable?


TOC was born on the manufacturing shop-floor. It has expanded into distribution and projects.  It has had notable success in healthcare, which is a pretty different environment, where some of the basic concepts had to go through “translation” to fit.

The Thinking Processes (TP) were created with the intent of being applicable to any environment. Twenty-five years after the definition of the main TP tools, we should ask ourselves whether the TP are enough to address environments that are very different from the existing environments where TOC is known to have an impact.

The weakness of starting the analysis with the TP is being swamped by a huge number of undesired effects (UDEs) that might be irrelevant to pinpointing what makes the environment different from what we already know.  We need a practical way to cut corners when we look at an unfamiliar environment.  We also, of course, need the opportunity to start working with such an environment. Right now I only want to deal with understanding the possible value TOC is able to draw.

If you accept the axiom that every organization is inherently simple, then there have to be very few key differences between environments that truly impact the way to manage them effectively.  Under this assumption we can speculate on what those few differences are, and use the cause-and-effect process to derive the core problem(s) of the environment, hopefully leading to the identification of the critical flawed assumption(s) that could be challenged.

Here is a list of such environments, where TOC currently has only minor impact, if any:

  1. Financial institutions, banks, insurance and credit-card companies.
  2. Transportation companies: in the air, sea and land.
  3. Performing-arts organizations: theatres, music (opera) and dance groups. You can add TV stations to that list. I like to treat museums as part of this group as well.
  4. Organizations for emergencies: the army and fire brigades.

Let’s have a quick look at the banks.

What makes a bank strictly different from other organizations?

I’m aware of several TOC implementations in banks that focused on improving the flow of specific missions within the bank. This is certainly valuable, but I doubt whether it touches the core problem.

All for-profit organizations strive to make more money now and in the future. Banks use money as their key resource for producing more money.  This forces the banks to inquire in depth into what “money” means, uncovering the “virtual” part of money – the option to lend more money than they actually have.  This understanding, which is not part of most other businesses, is a key internal recognition that leads to more specific paradigms unique to the financial world.

Banks deliver two very different categories of services:

    • The classic services of providing loans:
      • Lending money now in order to get more in the future.
      • From the customer’s perception of value: bridging between the times when the customer does not have the required money and the times when there is more than enough.
      • A crossover service: deposits. The bank needs more money as a resource to use for loans. Customers give a loan to the bank at times when they have enough money, and get back just a little more when they need it.
    • Services of protecting and managing the money of customers, which have nothing to do with loans or interest rates:
      • Keeping the money, recording transactions, transferring money, buying and selling shares (when applicable), dealing with different currencies and more.
      • User applications that let customers manage their bank accounts through computers and smart phones.

The simplicity is expressed by noting the growing demand for handling money. The older services of loans and deposits are still required, but there is no basic change in the needs or the offerings.  However, the advance of communication technology opens huge opportunities for the other category of bank services.  That technology generates growing client expectations for more sophisticated options and information.

Competition in the banking system is shifting towards using the most advanced technology for new services and newer looks. This creates a trend of reducing the number of branches and agents in the banking system, but it also increases the need for state-of-the-art IT, and for managers and professionals who understand the wishes of the customers and are able to define the requirements for the IT developers.  It generates more and more demand for new offerings, which have to be supported by the IT while keeping security high, adding many more requirements for the IT developers.

This change in the whole environment creates a natural bottleneck in IT projects.  The banking system used to be in a deadlock, with banks able to quickly imitate each other.  Now the change is leading to growing competition for the best and widest use of the most advanced technology.  New threats emerge from companies, like PayPal, that offer services bypassing the banking system.  The appearance of Bitcoin is another threat to the banking system.

What value can TOC bring to the banks?

Implementing CCPM in the IT-related projects is a partial solution. CCPM does not have a clear mechanism for generating the right priorities between competing projects.  But the TOC school of thought can handle this better than any other method I can think of.

What should an effective project portfolio look like? What is the balance between big projects and small ones?  How should one plan the time horizon of completed projects so that together they generate synergy that supports real growth? There is no ready-made TOC methodology for that, but a group of good TOC experts would be able to develop a good solution.

The above analysis is based on my observations and assumptions, and I’m careful never to say I know. Given a real opportunity to look into a specific bank, the assumptions would have to be validated, and the focus might shift a little.  It is just an example of the ability to quickly identify the core problem and outline a direction.    A similar approach to the other environments mentioned above could show real value pretty quickly.

Throughput-Dollar-days (TDD) and Inventory-Dollar-Days (IDD) – the value and limitations


The concept of multiplying money by time occupied a lot of Eli Goldratt’s thought for quite a long time. Money and time represent two different dimensions, and thus their product represents their combined impact.

In the financial world, value-days have been known for a very long time, but with one substantial addition: the concept of the “price of money” is accepted and widely used.  The value of the product of money and time can be translated into money in the same way as other types of value.  The “price of money” is an interest rate, and it allows quantifying the financial worth of loans as well as investments. More on this later in the post.

The use of dollar-days replaces certain grossly biased performance measurements that express:

The damage of failing to achieve the delivery commitments

Thus, Throughput-Dollar-Days (TDD) is by definition a negative performance measurement.  The best value you can get is zero.  The prime use of TDD is measuring on-time, in-full delivery of orders.  When an order is late relative to the promise to the client, then the T of the order multiplied by the days late is far superior to simple due-date performance.  Considering the extent of lateness creates motivation to minimize it.  Without it there is a real concern that once an order is late it loses its priority, because the damage to the measurement has already been done.
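As a minimal sketch of the calculation (the orders here are invented):

```python
# TDD: for every late order, the order's Throughput times its days late.
# Zero is the best possible score.
orders = [
    {"T": 1_000, "days_late": 0},
    {"T": 500,   "days_late": 12},
    {"T": 2_000, "days_late": 3},
]
tdd = sum(o["T"] * o["days_late"] for o in orders)
print(tdd)   # 500*12 + 2000*3 = 12,000 TDD
```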

About twenty years ago the flight captains of El-Al, the Israeli airline, were measured on their on-time pull-back from the terminal, and their bonuses were determined by it. As a traveler I could see the efforts to be on time.  But when there was a delay it was alarming to see how people stopped running and started walking, slowly, because they did not mind anymore.

However, TDD generates several negative branches. The T worth of an order is a key factor in the measurement. But do you really want to give automatic higher priority to a $1,000 order over a $500 order? The measurement does not consider the damage to reputation, nor the characteristics of the client and the level of business with that client. Buffer management, the formal TOC priority scheme in Operations, totally ignores the T of an order when setting the priority. So, is TDD an additional priority mechanism?

Another question is why to use the T of an order rather than its revenue. From the client’s perspective the worth of an order is the full price.  We would like to get the full payment, including the TVC, as soon as possible.  The division of the revenue into TVC and T has nothing to do with the need to get the payments on time.  Shouldn’t we use RDD (revenue × days-late) as the measurement?

My biggest issue with the concept of dollar-days is that it is not intuitive.  DD generates very high numbers, which are quite confusing when compared with net worth.  An order of $1K delayed for 60 days is 60K dollar-days. How clearly does this one number reflect the true state of the situation?

Eli Goldratt wished TDD to become a key measurement for the whole supply chain – keeping every link responsible for possible loss of sales. The practical problem is: how do we measure the TDD of items that are sold from stock? When there is a shortage we suspect some lost sales – it is less clear how many.  We can use statistics of sale-days, which are actually based on forecasts. The problem is that forecasts are quite confusing and many do not understand their ramifications.

My conclusion is that TDD has the potential to create real value, but we should review the logic and be ready to introduce changes.  Reservations and new ideas are welcome.

Inventory-Dollar-Days (IDD), supposedly the twin concept of TDD, is actually a different concept.  The original idea was that while TDD expresses failure to deliver, IDD represents the effectiveness of the investment in inventory.

IDD is the accumulation, over every item in inventory, of the actual price paid for the item multiplied by the days passed since its purchase. So it represents the dollars invested, combined with the time those dollars have been held without generating value.
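The parallel sketch for IDD (again with invented data):

```python
from datetime import date

today = date(2016, 11, 1)
inventory = [
    {"cost": 50,  "purchased": date(2016, 10, 2)},   # 30 days in stock
    {"cost": 200, "purchased": date(2016, 5, 4)},    # 181 days in stock
]
idd = sum(i["cost"] * (today - i["purchased"]).days for i in inventory)
print(idd)   # 50*30 + 200*181 = 37,700 inventory-dollar-days
```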

So, in order to achieve very low TDD we need to invest in inventory. An analysis is required to set a standard for the “right” level of IDD that would achieve reasonable value of TDD.

Does the IDD really represent the effectiveness of the investment? IDD does not consider whether the items leaving the IDD calculation have generated money or were simply scrapped.  While items spend time in inventory, or are processed but not yet sold, their real worth in money might change, but the IDD cannot reflect this – the real value is revealed only when the item is sold.

What value do we get from IDD?

We can use it to identify items that are both expensive and have spent a long time in inventory, contributing the most to the IDD, motivating operations to get rid of those items.  It also motivates the purchasing people to be more careful when they order large amounts of such materials.  But if this motivation is important, can’t we identify those items by crossing the expensive items with their aging?  Is the use of one number, which is not intuitive, a better way?

IDD is for inventory; it cannot be used for other investments. Suppose we have bought a new machine, with the intention of using it for many years.  Dollar-days would accumulate from the day the machine was purchased.  Without considering the T generated by that machine, the IDD of infrastructure is useless.

Here comes the concept of ‘Flush’ as a measurement of such an investment.  The dollar-days start with the initial investment.  From that date negative dollar-days (DD) accumulate.  Additional expenses increase the negative DD.  When T is generated, positive DD are added.  Hopefully, at a certain point the state of the investment-DD reaches zero:  the DD of the investment is fully recovered. Flush is the number of days until the DD of the original investment breaks even.
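A minimal sketch of how I read the Flush mechanics (the cash flows are invented, and the exact accumulation rule, each flow gathering dollar-days from its own date, is my interpretation of the description above):

```python
# Day 0: an investment of 100,000; then an assumed 6,000 of T at each month-end.
flows = [(0, -100_000)] + [(30 * m, 6_000) for m in range(1, 61)]

def dollar_days(t):
    return sum(amount * (t - day) for day, amount in flows if day <= t)

flush = next(t for t in range(1, 10_000) if dollar_days(t) >= 0)
print(flush)   # 1030 days: well beyond the naive payback of ~17 months (~500
               # days), because early T must first repay the accumulated DD.
```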

Flush is superior to the simplistic measurement of the time to recover the cost of the investment.

But, is Flush superior to Net-Present-Value (NPV), where the DD are converted into money?

Flush ignores whatever happens after the DD break even.  More income might be generated, but it has no impact on Flush.  I also think we cannot simply ignore the concept of the “price of money”, which is a simpler, yet effective, way to evaluate an investment.
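For comparison, a sketch of the NPV of the same invented cash flows, using an assumed 8% annual price of money:

```python
RATE_DAILY = 0.08 / 365
flows = [(0, -100_000)] + [(30 * m, 6_000) for m in range(1, 61)]
npv = sum(amount / (1 + RATE_DAILY) ** day for day, amount in flows)
print(round(npv))   # ~ +196,000: NPV also counts the income that arrives
                    # after the Flush breakeven, which Flush ignores
```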

The real difficulty in evaluating an investment is the risk associated with it. Neither Flush nor NPV provides a good answer to that.

Another point that puts Flush in a funny perspective: when one spends money on pleasure, its DD grow to infinity.  Does this seem intuitive to you?

Do you like to discuss this further?