Managing assortment vs. managing availability

There are two types of assortments, differing in their basic message to the potential client:

  1. Whatever you truly need, we have it in stock or can produce it for you.
  2. We offer an attractive assortment for you to choose from.

Generally speaking, the first applies to products whose main value lies in answering a practical need.  The clients are assumed to know a priori their exact needs.

The second, with its emphasis on ‘choice’, is for potential clients who enjoy browsing, and is based on the value of ‘pleasure’.  The reader is advised to read my post on The Categories of Value.

The difference in the type of value lends itself to different logistics.  In the first type, managing availability is critical to sales.  A shortage means the right product for a specific client is missing, leaving the client either to choose the second best or go elsewhere.

The second type, covering fashion, books, music, art and similar products, relies on clients spending a significant amount of time browsing, which already creates value for them.  While sometimes clients might have a good idea of what they are looking for (like the latest Grisham novel), most clients enter the store wishing to be exposed to an assortment from which they might choose one or more items.

This characteristic impacts the replenishment procedure.  A sale of an item should trigger replenishment, but not necessarily of the same item.  An assortment of fashion requires frequent changes.  It seems logical that when a certain design generates high or stable demand, the same item should be replenished.  However, when there is no evidence of especially wide appeal, maybe it is a good idea to replace the sold item with a different one.

I assume that creating an attractive assortment is an ‘art’ based on intuition, and it is difficult to come up with clear guidelines.  Most of the items within the same assortment have to appeal to a certain market segment in order to attract the browsing that eventually leads to sales.  Sometimes one client even buys several items of an assortment just because he/she likes them and deciding on only one is tough.  This never happens when selling products for practical needs.

So, the choice of items is key to the success of an assortment.  There should be a certain common taste running through the whole assortment, while also providing enough variety to create pleasure during browsing.

Another factor is the size of the assortment.  How many similar, yet different, items can a store display as one assortment and still be effective in attracting potential clients?

An assortment of fashion goods might be too large.  Browsing for too long might cause the client to give up.  It is a known effect that very high variety attracts people to browse, but actual sales stay low.  Too small an assortment, on the other hand, definitely reduces sales.

How should we decide upon the size of an assortment?

The TOC generic directive is to start with a guess and then check for signals that the initial assessment is too large or too small.  Such a signal can be the rate of clients entering the store who go to the specific assortment.  Another is how much time on average is spent there.  When the time is short, it might signal disappointment, which could be due to general taste or to the assortment being too small.  A very long average time indicates too large an assortment, which could be validated by checking the ratio of sales to the number of clients browsing.  All that information has to be collected.  A few sporadic observations during a week could provide good enough signals on whether to re-evaluate the size and content of the assortment.
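
To make this concrete, here is a minimal sketch of turning such observations into signals.  The data structure, the thresholds and the function are my own illustrative assumptions, not TOC-prescribed values:

```python
from dataclasses import dataclass

@dataclass
class AssortmentObservation:
    visitors_entering_store: int     # clients entering the store
    visitors_at_assortment: int      # clients who reached the assortment
    avg_browsing_minutes: float      # average time spent at the assortment
    units_sold: int                  # sales attributed to the assortment

def assortment_signals(obs, short_visit=2.0, long_visit=15.0, low_conversion=0.05):
    """Return rough signals suggesting the assortment needs re-evaluation."""
    attraction = obs.visitors_at_assortment / max(obs.visitors_entering_store, 1)
    conversion = obs.units_sold / max(obs.visitors_at_assortment, 1)
    signals = []
    if obs.avg_browsing_minutes < short_visit:
        signals.append("short visits: disappointment, or assortment too small")
    if obs.avg_browsing_minutes > long_visit and conversion < low_conversion:
        signals.append("long visits but low conversion: assortment may be too large")
    if attraction < 0.10:
        signals.append("few clients reach the assortment: check appeal or placement")
    return signals

week = AssortmentObservation(1500, 90, 18.0, 3)
print(assortment_signals(week))   # long visits with low conversion; weak attraction
```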

The distribution of sales of the different items within the assortment is another signal, but it is not straightforward.  When the tail of the distribution, the sales of the slow-movers, is pretty long, a possible cause is that the assortment is too large.  However, providing variety is an important factor for attracting clients even when most clients eventually choose only a few specific items.  Different types of assortments might need different lengths of effective tails to provide the pleasure of browsing.  Testing the effectiveness by carefully dropping several slow-movers and observing the resulting impact on the sales of the whole assortment is absolutely necessary.

Managing assortments is linked to managing availability and creates several challenging dilemmas.  Many products/designs are offered in several sizes.  As long as the design is to be kept as part of the assortment, we should keep all sizes available.  However, when there is a decision to discontinue a design, the whole design should NOT be replenished.  That opens the question of what to do with the remaining items of that design.  My own recommendation is to move the remaining items to another spot and sell them cheaply, without any commitment to availability.

The idea of maintaining a certain size of assortment means that when you add a new design you take out an existing one and, vice-versa, when a specific design is taken out there has to be a new design ready to move in.  This means managing a buffer of new designs waiting for an opportunity to enter the assortment at a specific store.  These buffers have to be managed through buffer-management to ensure there are never shortages of new designs, and never too many new designs waiting for the opportunity to be displayed and sold at a store.

The more generic point is the need to find practical and effective ways to combine human intuition with systematic analysis supported by software.  I saw that need, combining intuition and hard analysis, when I developed decision-support the TOC Way (DSTOC) as an expansion of Throughput Accounting.  As you can see, the need to combine intuition with logical, systematic analysis does not stop there.

What is a Good Plan? The relationships between planning and execution

Keep it simple

A ‘plan’ is a group of decisions and guidelines, most of them to be executed sometime in the future, in order to achieve a desired objective.  Describing the objective, the outcome of the plan, including the criteria for determining the quality of the outcome, is the first mission of any planning.

For instance, a project plan according to CCPM is a typical group of decisions, specifying what to do, who should do it, and in what sequence.  The objective is to achieve the specific project output at, or before, a certain date.  Meeting the specifications, the budget and being on-time constitute the criteria for most projects.  All the other elements in any planning should be derived from the objective.

When the desired objective is not achieved as planned, we cannot easily determine whether it is due to a flawed plan, mistakes made in the execution, or rare bad luck.

Or can we?

When the objectives are consistently not achieved, then most likely the core problem is a flaw in the planning.  After all, limitations in the capabilities or capacities of the resources should be dealt with in the planning.

Achieving an objective faces two problems:

  1. Complexity – too many variables with partial dependencies between them.
  2. Uncertainty – the fluctuations of the key variables on top of the dependencies.

One of the most important insights I learned from Eli Goldratt:

The combination of complexity and uncertainty makes the situation SIMPLER!

Because uncertainty outlines the boundaries of what we don’t know, Goldratt called it “the noise in the system.”  We should ignore variables whose impact relative to the noise is low.  Our expectations from any proposed change should be way above the noise/uncertainty in the system.  Other changes do not matter at all.

A necessary element in any plan:

The timing of making any decision is an important factor.  On one hand, a decision taken too early exposes itself to considerable uncertainty.  On the other hand, we do not want to delay the decision until it is too late.  Any detailed instruction in the planning has to have a good answer to:

Isn’t it too early to finalize that instruction?

What about letting the people at the execution phase decide when the time is appropriate?

For instance, should we include every nice-to-have feature in the planning of a new product?  At the time of the planning we don’t know the exact pressure on capacity or the value of the feature in the face of the competition.  Wouldn’t it be better to delay that decision to a later point in time, when the actual work on the feature has to start in order to meet the due-date?

What comes out is that the plan has to be truly lean, including only the most critical elements, those where any deviation might damage or delay achieving the objective.

Another necessary element in any plan:

Protecting the objective from whatever could threaten it. TOC uses buffers to protect due-dates and availability.  We should also consider buffers for capabilities, capacity, budget and key risks of the technology used.

The impact of the two elements on the planning:

  1. Any plan has to include ONLY the elements that would cause serious damage if left to a later stage. The plan could include generic guidelines for later decisions and checklists to ensure they won’t be skipped by mistake.
  2. The planning includes visible and well-defined buffers protecting the sensitive and vulnerable elements in the planning.

The role of the execution is to achieve the planning objective(s).  The more decisions are left to the execution, the more flexibility there is to face uncertainty.  But synchronization issues, important requirements and other wide-effect decisions are better handled in the planning.

In the last twenty years we have seen the formidable failures of several key planning methodologies.  Many advanced-planning-and-scheduling (APS) software packages, like i2 and APO by SAP, have almost vanished.  The common undesired-effect of those methodologies is vulnerability to common and expected uncertainty.  In a similar way, the common project management planning methods have been exposed as frequently missing due-dates and content.  Planning of budgets, with a lot of detailed entries, is a similarly ridiculous kind of planning, widely exposed to uncertainty.

The failures of sophisticated planning methodologies initiated movements that bypass planning and concentrate only on execution.  However, abandoning planning generates several negative branches, like failing to reach any truly desirable objective and not supplying top management with critical information: what time and value commitments are we able to promise our customers?

The TOC way is to plan holistically the objectives and put them on the timeline, protecting whatever is truly important by the use of buffers.  At the end of the day, the use of visible buffers is a paradigm shift whose full meaning we, the TOC community, do not yet fully grasp.  For instance, where are the buffers in the S&T?

The Boundaries of Make-to-Availability

This post focuses on defining when providing excellent availability makes sense.  The main insights behind the proposed solution appeared in the previous post.

Managing stock is always based on a forecast that there will be demand for the specific items in the future.  Even under that assumption there has to be a reason to take the risk of producing/purchasing things without firm orders.  Possible reasons are:

  • Spare capacity often tempts us to make stock in order to achieve high short-term utilization.
  • There are immediate cost advantages that will not be available in the future.
  • The stock is absolutely required for selling, but there is NO commitment to perfect availability.
    • Because storing that much stock is not economical.
    • Clients do not really expect perfect availability of specific items and can find alternatives very easily.
  • Maintaining excellent availability as an ongoing objective, answering a real need.

Three key categories of assumptions outline the boundaries of make-to-availability (MTA):

  1. There is high added-value stemming from providing excellent availability.
    • Otherwise, we should consider holding less stock, allowing a certain level of shortages, or providing a way to order the item.
  2. There is continued demand for the specific items for a while.
    • Either the item is a standard product sold to many clients with good overall demand.
    • Or the item is sold to a single client with good and relatively stable demand.
    • Or the client is committed to cover the cost of maintaining a certain level of stock.
    • Or the item is an intermediate part on the production-floor, required for many end-items, for which the accumulated demand is ongoing.
  3. The ROI of maintaining stock is considered good enough.

Eventually the above assumptions define when providing excellent availability makes economic sense, making sure it generates real value for potential clients and the cost is not too high.  Technically it is possible to offer any product with excellent availability using the insights from my previous post, but the impact on the bottom-line could be disastrous.

When we evaluate the cost of providing availability of an individual item we also need to consider the impact on the sales of other items!  There are a few key items whose shortage drives down the sales of other items.  Other items have close alternatives, so the damage of a shortage is reduced.  These dependencies between different SKUs are usually recognized intuitively and are not part of any computerized data.  They still need to be included in the analysis of the overall Strategy defining which items to manage to availability and how to offer the others.

The second assumption, having relatively stable future demand, has to be carefully checked all the time!  Demand could go down, most of the time gradually, but sometimes it drops to zero very fast.  So there is a point where stopping the replenishment is truly urgent.  Sudden changes in demand are beyond the power of DBM to spot early on.  This is where human intuition and intelligence have to be on guard.

Let’s expand on the ROI angle.

The investment in maintaining excellent availability is the cost required to create the stock buffer (we deal with capacity later).  This is a standing investment!  When sales occur, replenishment keeps the stock buffer constant.  So the investment is not depreciated with time.

The cost of the stock buffer is based on the TVC for the full amount of the buffer.  The cost also has to include the expected obsolescence for such a size of stock.  This is especially critical for products with short expiration times.

The return on the investment is not based on the revenues, but on the annual Throughput (T).  The revenues have to finance the TVC to restore the basic investment.

Thus, a key measurement to analyze the ROI of an item managed to availability is:

Annual(T) / (The full cost of generating the stock buffer)

We can use the above formula to derive the individual ROI for one SKU and also the ROI of all the product-mix that is managed to availability.
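
To make the measurement concrete, here is a minimal numeric sketch.  The function and the numbers are illustrative assumptions; TVC (Totally Variable Cost) is used as in Throughput Accounting:

```python
def availability_roi(annual_units_sold, price, tvc_per_unit, buffer_units,
                     expected_obsolescence_cost=0.0):
    """ROI of an item managed to availability: annual T over the buffer investment."""
    annual_t = annual_units_sold * (price - tvc_per_unit)   # Throughput = revenue minus TVC
    buffer_investment = buffer_units * tvc_per_unit + expected_obsolescence_cost
    return annual_t / buffer_investment

# A fast mover: stable sales allow a small buffer relative to annual sales.
print(availability_roi(annual_units_sold=1200, price=10.0, tvc_per_unit=6.0,
                       buffer_units=100))   # -> 8.0

# A slow mover: better T per unit, but a large buffer relative to sporadic sales.
print(availability_roi(annual_units_sold=60, price=25.0, tvc_per_unit=10.0,
                       buffer_units=40))    # -> 2.25
```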

When an individual ROI is too low we might need to find other ways to sell the item.  Fast movers usually have excellent ROI, because their relatively stable sales enable low stock buffers.  Slow movers usually have a better T/Price ratio, but their sales are sporadic, which means high stock buffers relative to the sales.  Many times it is worthwhile to manage certain slow-movers for-order, but with a short lead-time, to keep good enough sales.

The capacity of critical resources is another type of investment.  For distribution organizations the relevant capacity is space, especially in retail, and the cash invested in stock.  The relative priority of items according to their ROI is important when there are trade-offs in the product-mix due to lack of capacity.  When there is enough capacity to build all the required stock buffers, we can consider the capacity investment as “free”.  However, when there are trade-offs we need to find the best product-mix that maximizes the resulting profit (Total-T minus Total-OE) and the overall ROI.  This holistic view of the whole product-mix is a critical part of the overall Strategy of organizations.

It is quite rare that offering excellent availability of ALL the product-mix makes sense.  This means some items are managed-to-availability, others are sometimes in stock, and others are sold by placing orders.  The mechanism for managing stock without commitment to availability still needs to be properly defined.

The main insights behind Make-to-Availability

Covering a narrower area than make-to-stock or managing stock

I have seen too many big mistakes in make-to-availability implementations, especially when availability should not have been offered in the first place.  Other mistakes show misunderstanding of the key insights.  I wish first to verbalize the main insights, according to my understanding, behind the methodology called ‘replenishment’, and then, in a subsequent post, discuss the boundaries of the currently known TOC solution for ‘make-to-availability’.

Eli Goldratt coined the term ‘make-to-availability’ to characterize an environment where a promise is made to potential clients that whenever they need the specific items they will find them at the specific warehouse.  Goldratt thought that by offering stable availability the organization would win much more demand, possibly also at a higher price.

Every ‘make-to-stock’ or ‘purchase-to-stock’ is about managing uncertainty.  While ‘make-to-availability’ is certainly a form of ‘make-to-stock’, not every time stock is produced or purchased is the intent to provide excellent availability.  Sometimes the message is actually the opposite: “The stock will soon be gone!!!”  The idea of ‘Sales’ is based on the message of scarcity, pushing the client to buy now.

Insight 1 of the TOC solution to managing stock (even when availability is not offered):

We can never perfectly align our stock with the actual demand

This insight immediately leads to recognizing the fact that either we have surplus or we face unanswered demand!  The question is: what is more damaging – shortages or surplus stock?  Most of the time, but not always, we had better hold just a little more rather than disappoint the market.  When a commitment for availability is given, there is no doubt that we have chosen to hold more stock than the average demand, but not too much.

There are two different sources of uncertainty in managing stock: the uncertainty of the demand and the uncertainty of the supply.  The common forecasting methods look only at the demand.  The problematic characteristic of any forecast is that the forecasting error grows sharply with the horizon.  Recognizing the combined effect of demand and supply, and the wish to offer excellent availability, leads to the next insight.

Insight 2:

The relevant horizon for assessing the appropriate stock is the lead-time from consumption until replenishment

This insight means we should NOT look for longer horizon forecasts, because the supply has the flexibility to react properly to any change in the demand.

Given the horizon dictated by the supply lead-time – how much stock do we need to maintain to ensure excellent availability?  The on-hand stock protects the immediate demand.  The stock on-the-way, meaning the open replenishment orders, covers the rest of the horizon.  If that stock yields excellent protection we should keep it constant.  This practically means:

Insight 3:

Any consumption of stock is immediately replenished – not more and not less

In this way the stock in the system, both on-hand and on the way, is kept constant. As long as the stock seems to do the job: keeping excellent availability, while keeping stock that is not in clear excess, there is no need to change the buffer.
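
A minimal sketch of that rule, assuming one replenishment order per consumption event; the names and the simple event flow are illustrative, not a prescribed implementation:

```python
def replenish_on_consumption(buffer_target, on_hand, on_the_way, units_sold):
    """Issue a replenishment order equal to consumption - not more, not less.

    The total stock in the system (on-hand plus open replenishment orders)
    stays constant at the buffer target.
    """
    on_hand -= units_sold
    new_order = units_sold              # replenish exactly what was consumed
    on_the_way += new_order
    assert on_hand + on_the_way == buffer_target
    return on_hand, on_the_way, new_order

on_hand, on_the_way = 60, 40            # a buffer target of 100 units
on_hand, on_the_way, order = replenish_on_consumption(100, on_hand, on_the_way, 7)
print(order, on_hand, on_the_way)       # 7 53 47 - the total is still 100
```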

Buffer management adds additional capabilities to ensure availability by providing priorities in the execution phase. This ability is critical mainly for manufacturing.

Insight 4:

The state of the on-hand stock relative to the buffer indicates the criticality of the specific replenishment orders

Actually, this is quite a revolutionary idea.  The common practice is to assign a due-date to the replenishment order and judge the priority of the order accordingly.  However, in managing stock the due-date is artificial, as no one really needs the whole quantity at any specific future date, and sales during the lead-time could vary and thereby change the priorities.  Thus, instead of assigning dates, the buffer management algorithm looks at the actual state of the on-hand stock and bases the priorities of the replenishment orders on it.
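
Here is a sketch of how such priorities can be derived.  Splitting the buffer into three equal color zones is a common TOC convention, but the exact thresholds should be treated as illustrative assumptions:

```python
def buffer_status(on_hand, buffer_size):
    """Return the buffer penetration and color of an item's stock buffer."""
    penetration = 1.0 - on_hand / buffer_size   # share of the buffer consumed
    if penetration >= 2 / 3:
        color = "RED"      # urgent: expedite the open replenishment orders
    elif penetration >= 1 / 3:
        color = "YELLOW"   # on track
    else:
        color = "GREEN"    # comfortable
    return penetration, color

# Replenishment orders are prioritized by penetration, deepest first:
items = {"A": (20, 100), "B": (70, 100), "C": (45, 100)}
ranked = sorted(items, key=lambda k: buffer_status(*items[k])[0], reverse=True)
print(ranked)   # ['A', 'C', 'B'] - item A penetrated deepest into its buffer
```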

Buffer management enables another key capability – a new kind of forecasting to get a warning when the stock buffer is too small, unable to ensure excellent availability, or too large, having too much stock. This is an expansion of the former insight.

Insight 5:

The behavior of the on-hand stock can be used to forecast the combined impact of demand and supply that determine the effective stock that ensures excellent availability

The resulting methodology, called Dynamic-Buffer-Management (DBM), is based on the above insight and recommends increasing or decreasing stock buffers.  I call it “forecast” because it predicts the future based on the past.  It is a different sort of forecasting because it looks at the combination of demand and supply and guides the amount of required stock.
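
A minimal sketch of the DBM logic as I understand it; the zone definitions, the observation window and the resizing factors are illustrative assumptions, not fixed TOC parameters:

```python
def dbm_recommendation(on_hand_history, buffer_size,
                       red_days_trigger=3, green_days_trigger=10):
    """Recommend a buffer change from the recent behavior of on-hand stock."""
    red_days = sum(1 for oh in on_hand_history if oh < buffer_size / 3)
    green_days = sum(1 for oh in on_hand_history if oh > 2 * buffer_size / 3)
    if red_days >= red_days_trigger:         # too long in red: protection eroding
        return round(buffer_size * 1.33)     # increase the buffer
    if green_days >= green_days_trigger:     # too long in green: excess stock
        return round(buffer_size * 0.75)     # decrease the buffer
    return buffer_size                       # leave the buffer as-is

# A week of on-hand readings deep in the red zone triggers an increase:
print(dbm_recommendation([25, 30, 28, 31, 22, 29, 27], buffer_size=100))  # 133
```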

Another important insight is to look for the most effective points for holding the stock.

Insight 6:

The main stocks should be held at a central location to reduce the overall level of uncertainty

The direct result is holding less stock for the same level of availability.  This insight is relevant both for distribution channels and for those production shop-floors where common intermediate parts are used for different end items.  Holding stock of intermediate parts could shorten the response time to demand and reduce the amount of stock in the system.

Warning: While centralizing the stock required at various locations reduces the overall uncertainty, the total impact of that reduction is often exaggerated.  The centralized stock damps the local fluctuations, but it does NOT damp the fluctuations caused by global causes.  For instance, local taste varies with the location, but a change in the economy impacts the demand everywhere at once.
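
A small numeric sketch of that warning, assuming a simple safety-stock formula; all numbers are illustrative.  Independent local noise pools when aggregated, while the correlated, economy-wide component does not:

```python
import math

n = 10                # stocking locations
sigma_local = 20.0    # independent demand fluctuation per location
sigma_global = 10.0   # correlated, economy-wide fluctuation per location
k = 2.0               # safety factor for the target service level

# Decentralized: every location buffers its own combined fluctuation.
decentralized = n * k * math.sqrt(sigma_local**2 + sigma_global**2)

# Centralized: the independent parts pool (a sqrt(n) effect),
# but the correlated global part adds up in full and is NOT damped.
centralized = k * math.sqrt(n * sigma_local**2 + (n * sigma_global)**2)

print(round(decentralized))  # ~447 units of safety stock
print(round(centralized))    # ~237 - less, but far from a full sqrt(10) reduction
```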

The next post will deal with the boundaries of the TOC solution for make-to-availability.

The boundaries of DBR/SDBR

DBR, Drum-Buffer-Rope, is a TOC planning methodology for manufacturing companies.  In this post I focus on DBR/SDBR for make-to-order (MTO), when client orders specify the products, the quantities and the delivery time.  I’ll dedicate another post to the boundaries of managing stock.  On top of the planning there is Buffer-Management, which is used to guide the priorities in the execution.

“Sim10”, used for learning the DBR methodology with the Goldratt simulator

I’m not going to discuss here the details of DBR or the differences between DBR and SDBR.  I wish to outline the situations where the rationale of DBR/SDBR is valid.  For instance, Goldratt created CCPM because it became evident that DBR would NOT work for projects.  Is it clear to YOU, the reader, which basic assumption(s) behind DBR are not valid in a multi-project environment, and vice-versa?

The basic situation is manufacturing or service organizations getting orders from clients who ask for certain packages of well-defined products or services to be delivered within a certain lead-time. The organization does not usually hold these products in stock, so there is a need to produce them. The organization might need materials purchased from suppliers and then produce the specific orders for those clients.

Key basic assumptions where using DBR/SDBR would yield reliable delivery performance:

  1. The net touch-time of any production order is very low relative to the lead-time, the time from accepting an order until delivery.
    1. Goldratt assumed less than 10% of the time-buffer used for such an order.
    2. The rest of the time the production order waits for resources to become available, which means that the utilization level of many resources is not very low.
  2. The level of statistical fluctuations within the shop-floor itself is not too large.
    1. For instance, when the average setup is two hours it is unlikely to take 20 hours.
    2. Overall such an environment is exposed to much less fluctuations than in projects!
    3. This assumption also addresses the level of scrap.  It is unlikely, or very rare, that a whole production order is scrapped.
    4. It means that the shop-floor maintains good enough quality in all stages.
  3. The organization maintains acceptable control over its supply and outsourcing.
  4. All the operations are at the same location, or close enough, so transportation time between the facilities is short relative to the time-buffer.
    1. This assumption could be viewed as an extension of assumption #1.  We need to understand that transportation, and also dry-time, is part of the “touch time”.

One key assumption for Buffer-Management:

  1. Most orders either finish in Yellow or finish very close to their penetration into Red.

When one assumption, or more, is not valid it does not mean that DBR/SDBR is useless, but it does mean certain changes are mandatory.

I’d like to explain further the impact of touch-time on the methodology.  Actually, the assumption about touch-time is the main difference between production and multi-projects.  In projects we expect that the time it takes to process the tasks along the critical chain equals the time to complete the whole project.  This is in sharp contrast to production, where orders wait between operations for quite a long time.  Because of this key difference the term “critical path” or “critical chain” is not relevant for production.

What happens when the touch-time of a certain operation is about 30% of the time-buffer?

In SDBR the time-buffer covers the whole production process, from the release of the materials to completion. When one operation takes 30% of that time we have to ask two critical questions:

  1. Is the time-buffer enough to protect the due-date from fluctuations along the whole production process?  The 30% touch-time is not part of the protection time, so do the remaining 70% provide enough protection?
  2. In buffer management for DBR/SDBR the exact location of the production order is not reflected in the status of the buffer, because when the touch-time is negligible it does not truly matter.  However, when one specific operation takes 30% of the buffer it matters a lot whether the production order still has to go through it or has already passed it.  When the order is still behind the long operation, the remaining effective buffer is significantly shorter than the remaining time until the due-date, as the sketch below illustrates.
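
A minimal sketch of that second question, with illustrative numbers only:

```python
def remaining_protection(days_to_due_date, long_op_days, long_op_done):
    """Effective remaining buffer: time to due-date minus touch-time still ahead."""
    remaining_touch = 0 if long_op_done else long_op_days
    return days_to_due_date - remaining_touch

buffer_days = 20
long_op = int(0.3 * buffer_days)       # a 6-day operation in a 20-day buffer
print(remaining_protection(10, long_op, long_op_done=False))  # 4 days of real protection
print(remaining_protection(10, long_op, long_op_done=True))   # 10 days of protection
```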

A new application of buffer management within SDBR implementations facing high touch-time was developed and presented at the TOCICO conference in 2012 (Lisa Scheinkopf, Yuji Kishira, and Amir Schragenheim).  This is an example of understanding the boundaries of the basic assumptions and the level of required changes when a specific assumption is invalid.

It is important to emphasize again that the above assumptions refer to make-to-order production.  The critical distinction between MTO and MTA (make-to-availability) was recognized much later than DBR and even SDBR.  The assumption that we have a specific quantity to deliver at a specific date was not clearly verbalized in either MRP or DBR.  Once the role of that assumption became clear, a different methodology for MTA was developed.  The lesson is to try our best to verbalize the underlying assumptions, which together define the boundaries within which a specific methodology is valid.  Then we can identify the situations that lie beyond those boundaries and look for an appropriate solution, which could be close to or very different from the original methodology.

The boundaries of CCPM

Critical-Chain-Project-Management, CCPM, is the most successful and widely known application of TOC.  The most striking feature of CCPM is the handling of uncertainty, and the second most striking feature is the understanding of the impact of performance measurements on human behavior, and how that behavior affects performance.

I’ve put the handling of uncertainty first, because adding a visible project buffer at the end of the project is a bold statement that we do not really know the exact date when the project will finish.  It is a clearer buffer than the DBR buffer, which looks like a regular lead-time attached to a chain of operations.  This concept of the project-buffer is a significant contribution to handling “common and expected uncertainty”.
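
In execution, such a buffer is commonly tracked against the progress of the critical chain (the so-called fever chart).  A minimal sketch, where the zone boundaries are my own illustrative assumptions:

```python
def fever_zone(chain_complete, buffer_consumed):
    """Compare the fraction of critical chain done with project-buffer consumption."""
    if buffer_consumed <= chain_complete:
        return "GREEN"    # buffer burn is slower than chain progress
    if buffer_consumed <= chain_complete + 0.3:
        return "YELLOW"   # watch: prepare recovery actions
    return "RED"          # act: the due-date is in danger

print(fever_zone(chain_complete=0.40, buffer_consumed=0.25))  # GREEN
print(fever_zone(chain_complete=0.40, buffer_consumed=0.80))  # RED
```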

The famous Goldratt saying, “Tell me how you measure me and I’ll tell you how I’ll behave”, was coined a decade before CCPM.  The negative impact of performance measurements on human behavior is clearly seen on the shop floor, and it is even more relevant for projects, where people are the key resources.  The devastating use of multi-tasking is also derived from performance measurements that judge every project manager in isolation from all other open projects, thereby forcing him to fight for “his project.”

While CCPM is a breakthrough of huge value, one should always ask:

What are the boundaries of the given methodology?

In other words, when is the use of CCPM beneficial, and when do certain changes have to be introduced?

It all comes down to checking the assumptions behind the methodology and identifying when they are not valid.

Just to illustrate the point, let’s think about a program to find a cure for cancer.  At the beginning of such research, is it possible to give it a due-date?  Is it possible to build the network of tasks for the whole project?  Do we have any idea how long it would take?  Is there any role for a project buffer when you do not have a due-date to start with?

Here is my list of basic assumptions behind CCPM, which might be invalid in certain situations:

  1. We have a good enough idea how to perform the project from start to finish.
    • In particular, we have a good idea, from the start of the project, what features have to be developed.
    • We do not have conditional points in the project that could dramatically change the downstream operations or go backwards to an earlier task.
  2. Completing the project before or at the due-date is of utmost importance.
  3. Finishing on-time is nicely correlated to meeting the specifications and budget.
  4. Control of the progress and its timing issues is in our hands.
  5. We plan to work continuously on the critical chain.  Without that the definition of the critical chain as what determines the duration of the project is meaningless.

What happens when one or more of the basic assumptions are not valid?

Let’s view two examples:

Suppose the client has to confirm certain stages in the project, and the client takes his own time to confirm.  Two assumptions are not valid: the fourth and the fifth.  The fourth is invalid because we do not have the means to force the client to adhere to our timetable.  The fifth is invalid because the whole project is on hold during the client’s check.  One might include the client’s task in the planning and assume a 50%-probability time estimate for that task.  The point is – when the client delays the signal to go ahead, how can the project people keep the due-date?

My recommendation for such a case is to cut the overall project into subprojects.  The project team controls the progress until the client gets the output for confirmation, and that is the completion of project one.  Once the client confirms to go ahead, even with new requirements, the second project starts.

Another problematic project is one where the due-date is mandatory, so no delay is possible.  In such a case the content of the “complete project” might be much less than the original plan, or the costs might go up sharply, which means the third assumption is invalid.  The time buffer does not protect all the truly relevant variables.  The planning of such a project has to make a clear distinction between “must-have” and “nice-to-have” features.  I can see a direction of solution with a buffer of nice-to-have features on top of the project-buffer protecting the mandatory due-date.

A general comment: I find CCPM to be the least holistic methodology in TOC.  It is targeted at solving a problem that is quite common, but not necessarily the key problem of the organization.  I’m troubled by solving an undesired-effect (UDE) without seeing the overall picture.  You can improve the management of projects and still the organization would not reach more of its goal.

As a concluding anecdote: I met a high executive in a huge international conglomerate.  When I mentioned the nice CCPM implementation done in Israel, the executive commented: “They finished the WRONG project early!”

Shouldn’t TOC be involved not just in the planning and execution of projects, but also in making sure they are the RIGHT projects, with the right choice of content and specifications?

Can we do it without being involved with the Strategy of the organization?

Short-Term and Long-Term TOC, or the Two Critical Flows

Two different flows, each constrained by a different constraint

TOC has been criticized as being focused solely on the short-term.  Exploiting a capacity constraint, while subordinating everything else to the exploitation scheme, is a typically effective and fast way to get more of the goal in the short-term.

In the long-term, provided the constraint is already exploited and subordinated to, we are guided to elevate the constraint.  The constraint would then move to the market.  Further market expansion, using mafia-offers, will cause the emergence of a constraint back inside the organization, and so on.  This is the classical description of how the five focusing steps work in the long-term.

When the constraint is in the market – how should management decide where to focus their efforts to expand the demand?

Theoretically, every expansion of the market, when no internal constraint is active, should be encouraged.  But, arbitrary market efforts might eventually cause the emergence of the “wrong” capacity constraint, not the one that yields the best bottom-line.

The notion of the “strategic constraint” could guide us to expand the market so that the emerging constraint would be that particular resource.  With this target in mind, what market should be the focus of the marketing and sales efforts?  Would you base your plan on products with high throughput per strategic-constraint-unit (T/sCU)?  Actually, if you do so you are definitely going to end up with another constraint – not the strategic one, which would not have enough load to even come close to being a constraint.

Is it good to have an internal strategic constraint?

Not to my mind!  There is an effective rationale for maintaining spare capacity to ensure flexibility in answering market opportunities, while offering excellent reliability to the existing clients.  I’m aware that reliability is not always a decisive-competitive-edge, but even when it is not, it is still a necessary condition.  An internal constraint is always a threat to reliability and fast response.

TOC never ignored the impact on the long-term, but let’s admit that the major focus, exploiting the constraint, was on the short term. There is a simple explanation why:

The further we look into the future the higher is the impact of uncertainty

And thus any approach to the long-term is subject to possible failure.  However, we cannot and should not ignore what we can do NOW to reach the ever-flourishing state we would like to have in the future.

The focus on the short-term considers resources, capabilities and opportunities we already have.  It does not consider creating new value for potential clients.  Coming up with a decisive-competitive-edge takes time to synchronize and integrate the various parts into an effective launch that expands the market demand.  We also need to build the appropriate infrastructure, resources, capabilities and robust processes, to handle this growth.

Thus, we can easily identify two critical flows within any organization:

  1. The current value-flow – the short-term generation of T.
  2. The flow of initiatives to increase the value to potential clients, enter new markets and possibly limit the growth of OE and I relative to the growth of T. These initiatives together construct the actual Strategy of the organization.

The two flows use mostly different resources. The current-value-flow is all about getting orders, responding fast and ensuring good quality.  The main efforts come from Sales and Operations.

The flow-of-initiatives is quite different.  It includes Marketing, as the function that looks for new opportunities, and R&D, which comes up with new answers for the various markets.  The key growth initiatives try to expand the market demand or elevate the value perception of the market.  Those key initiatives derive further initiatives to prepare the appropriate infrastructure, like bringing in new capabilities, resources and new processes.  Other initiatives build control mechanisms to warn of developing threats.

The resources for the flow-of-initiatives are mainly managers and high-level professionals.  A high enough budget is another needed resource.

Both flows are critical to the success of any organization.  Of course, there are many other flows in an organization like the flow of incoming payments, the flow of information and the flow of human resources.  The above two flows seem to me as the key ones for achieving the goal.

Both flows are constrained, but not by the same constraint!

It has been my own view since the mid-90s that the current-value-flow is almost always constrained by the market, and sometimes also by lack of capacity of an internal resource.  Forcing the clients to subordinate to the internal constraint of their supplier is very damaging to the future of the organization.

However, the flow-of-initiatives is always constrained by an internal resource.  There should always be more ideas that might improve future performance than either the budget or the human resources can handle.

This is how I interpret Eli Goldratt’s saying that the ultimate constraint is management attention!

I doubt whether there are many cases where management attention truly limits the current-value-flow.  Once the basic TOC methodology is implemented and the level of WIP is about right, we should not expect operations managers to be loaded up to the level where damage is unavoidable.

But when it comes to striving to ensure future growth and stability, management attention is the strategic constraint.  The most obvious exploitation scheme is focusing on the most beneficial ideas.  Carefully planning the Strategy based on TOC, preferably using the S&T format, is the best way to achieve not just the exploitation, but also the proper subordination processes, giving the right priorities to the rest of the organization.

This is, to my mind, what the long-term TOC should be.

The failure of a grand technological idea – part 3

This brings the case, involving learning from experience, to an end with new lessons learned.  The last part ended with the main facts that led to the failure.  I call them the operational causes.  They are important to know, but they do not supply us with the answer to the question:

How come those operational causes happened, leading to such a negative result?

Next step: Identifying the core flawed paradigm

The team concluded that there are two key operational causes to which the question “how come?” should be applied.

  1. How come the initial specifications were verbalized only in a very generic way?
  2. How come the project team did not update the management with the “holes” in the system?

The team checked all the probable causes to make sure they were valid.  Most of the effects were confirmed in a direct way, as both top managers and project members answered the questions openly.

The one effect that needed indirect validation was: “The project team did not fully understand the business requirements of the project and were not aware which holes are critical.”  As it turned out, the potential business/marketing value of having perfect external protection without anybody watching the screens was not appreciated by the project team, hence they did not bother to notify management that it seemed technically infeasible.

Next step: Verbalizing the main updated paradigm

Technicians, scientists and engineers understand the technology, but many times they do not have a good understanding of the way it should be utilized by the users, the necessary marketing messages, and other business aspects.

What would be the impact of the new paradigm on the way TopSecurity shall be managed?

Next step: Developing the Future-Reality-Tree

Let’s outline some possible ramifications of recognizing the new paradigm:

[Diagram: the Future-Reality-Tree derived from the new paradigm]

Expanding the generic message from the updated paradigm

There are, at least, two ways to generalize the new understanding.

  1. Stick with the verbalization, but look for impacts that are beyond the specific operational causes of the case.

For instance, can we come up with changes to the way ideas for new products/projects are raised?  A company whose business is based on state-of-the-art technology has to ensure a good match between the technological ideas and the current limitations of users!  The technological people have good intuition about the possibilities of the technology.  People who are close to business development better understand the current needs of users.

  2. Expand the verbalization from pointing to “Technicians, scientists and engineers” to a wider scope of professional people.  The need to fully understand the business case of a company, its strategy, and the exact message to the various market segments the company would like to have as clients applies to a variety of people working for the company.

Last step: Verbalizing the lessons learned

The point is now to allow people to understand what lies beneath the new processes and the new paradigm, and to learn from the previous mistakes made by others.

It should consist of the following parts:

  1. A good summary of the story.
  2. The definition of the gap – without going into more detail of the discussions and the other alternatives that were considered.
  3. A summary of the operational facts that have caused the gap.
  4. The logical tree identifying the flawed paradigm and how it caused the gap.
  5. The verbalization of the new paradigm.
  6. The new processes that were built from the new paradigm.

Please, feel free to comment or argue the case, and mainly the proposed process for learning from such an event.

The failure of a grand technological idea – part 2

The inquiry team continued its quest to learn the right lessons from the gap between expectations and outcomes.

The formal structure of the gap being inquired:

[Diagram: the formal structure of the gap]

The task of raising hypotheses requires keeping an open mind to all possibilities, based on the very few facts known.  Every hypothesis is then used to direct the team to look for information that would either invalidate or support it.

The team came up with the following hypotheses:

Hypothesis 1: The expectations were not realistic to start with – it is impossible to build a perfect system that will always warn when it should and never when it should not.

Hypothesis 2: The project people developed what they were capable of developing. What seemed to be too difficult was not developed.

Hypothesis 3: The project people focused on preventing false alarms, even at the expense of failing to raise the alarm when a real approach occurs.

Hypothesis 4: There were no clear and detailed specifications, approved by top management, of what the Wise-Cameras should do.

Hypothesis 5: There was not enough involvement of highly professional security people in the development of the system.

Hypothesis 6: The project team did not have all the skills required for such a mission, and they tried to conceal this from top management by announcing success.

The hypotheses were verbalized in regular daily language by the members of the team.  There are obvious causal connections between some of the hypotheses, so several hypotheses could be valid.  At this stage the team simply checks whether each hypothesis explains the gap.

Verbalizing a potential explanation:

Note that additional effects that are required to cause the gap are actually new hypotheses that need to be validated as well – checking for known facts that would tell whether the effect existed in the specific case.

[Diagram: a potential explanation of the gap]

Validating the facts until the operational cause(s) are clear

Going through several possible explanations, drawn from the basic hypotheses, the team had no difficulty coming up with the following straightforward facts:

  • Top management had verbalized only high-level specifications with the following main requests:
    • The system should identify intruders before they arrive to a door or a window of a protected building.
    • The number of false alarms would be minimal – not more than 5% of the alarms should be false.
    • The system should have clear advantages over any other camera-based protection system.
    • The management did not specify in writing that the system should eliminate the need for human guards to watch the screens.  However, this request had been raised in several informal talks.
  • The project leader, Raphael Turina, told top management he’d achieve all the requests that appeared in writing, and would strive to get to the point where watching the screens would not be necessary.
  • Sam Fuller, the CEO, said that Raphael promised to let top management know whether there is a need to watch the screens.
  • Raphael said that he notified Sam that the written requests were all answered.
  • The idea of people rolling towards the building was not raised by the team and thus was not considered.
  • The main technical challenge was to distinguish between a moving person and a moving animal.  The team assumed animals have four legs and based the ultra-sophisticated image recognition on this idea.
  • All the project professionals were found to have the highest professional skills.
  • Both Raphael and Alex have very wide experience in security.
  • Neither the project professionals nor internal management were involved in planning the test for the event.

Facts that were validated indirectly:

  • The project team were reluctant to inform top management of failures or specifications that were not achieved
    • Raphael, Alex and other members of the team deny this tendency
    • However, checking mails and reports to top management showed detailed reporting of successful internal tests and no reporting of difficulties
    • One clear failure of identifying movement in extreme weather conditions was not reported
  • Top management were certain that the system did not need human intervention during the identification stage
    • Gilbert, the manager of the external testing team and the head of the inquiry team, testified that he understood no human watching the screens was necessary
    • Nobody was supposed to watch the screens during the external test

A summary of the revealed facts:

The project lacked a clear definition of the required characteristics.  It strove to cover everything, but some problems surfaced.  Two problems were clearly identified:

  1. In very bad weather the Cameras are unable to identify human movements.
  2. The project team had a conflict between the need to identify any suspicious attempt to come close to the building and the need to avoid false alarms.  This conflict led to the decision to identify human movement as movement on two legs.  Possible exceptions and bypasses were not discussed.

The problems were not reported to top management, who were under the impression that the system was perfect and eliminated the need for watching the screens.  The impression of the project people was that all written requests had been achieved.  The project team thought there was no need to mention the state of issues that were not raised in writing.  Management was under the impression that all the requests, not just those in writing, had found appropriate answers.

Questions:

  1. We know now what happened – is there anything else we need to know?
  2. What do we do now? Is the inquiry complete?

To be continued!

The failure of a grand technological idea – part 1

I have used this fictional case in workshops on Learning from ONE Event to demonstrate the process.  I’d like you to read it, stopping from time to time to ask yourself how to proceed, and also to think about its relevance to less spectacular failures and what could be gained by learning the RIGHT lessons.  This is part 1 of the case; part 2 will follow in a week.

The trigger for learning and the start of the process

The big test of the Wise-Cameras, intended to be the new diamond in the crown of TopSecurity, was organized for two hundred people from the army, air force, police and the secret service.  It ended in total disaster.  Samuel Fuller, the CEO of TopSecurity, the one who brought it all the way to becoming a $5 billion giant, nominated a team of three people to inquire into the shameful event where the Wise-Cameras failed to identify the break-in by a test-group of five well-trained people into TopSecurity’s building, in spite of all the sophistication that went into the Wise-Cameras system.

The inquiry team was led by Gilbert, the manager of SecurityCheck, an independent organization that checked the functionality and effectiveness of various security products.  SecurityCheck was asked to try breaking into TopSecurity HQ.  Their success was the biggest failure TopSecurity ever had.  Linda, the brilliant CEO of ThoughtWare, a software company without any business linkage to TopSecurity, was another member.  Jacob, an organizational consultant, was the third member of the team.

The project to develop Wise-Cameras started three years ago and was supposed to take two years.  The idea was to use security cameras to automatically identify any attempt to approach a protected building.  The identification of the potential intruders was supposed to be automatic and with high certainty.  Additional expectations were to identify the exact number of people trying to break in, and to record special features of the intruders, notably the distance between the eyes, in order to support identification even when the intruders wear masks.

What went wrong in the test was that the five intruders came to the building by rolling on the ground and by that fooled the system.

The failure certainly took Sam by surprise.  Its economic and operational impact on TopSecurity was immediately recognized.  The actual cost of the project was around $5M, but the hopes for future revenues were closer to $100M a year!

Is it a good idea not to include anyone from the project team in the learning-from-one-event-experience team?

Naturally, the tendency of anyone involved in such a project is to cover up the wrong actions and decisions that led to the failure.  Nobody likes to be blamed, even for much smaller failures.

But what is the objective of the learning?  Is it really to identify the guilt of some people, or to reveal some common flawed paradigms, probably shared by many people, and fix them?

Jacob, the organizational consultant, proposed adding someone from the project team in order to radiate the message that the intent is not to blame anybody.  Linda added that they need at least two people with the knowledge and intuition of what happened.  Thus, Alex, the chief project engineer, and Martha, a software specialist responsible for the movement recognition in the project, joined the team.

The new team, now consisting of five people, sat down to resolve the first issue:

Verbalizing the gap between prior expectations and actual outcomes

They soon realized this issue was more challenging than it first seemed.  Actually, two different gaps were recognized:

Gap no. 1:

Prior expectations: The important guests, who are potential clients, would be greatly impressed by the performance of the Wise-Cameras.

Actual outcome: Huge disappointment, leading to a damaged reputation and a low perception of the system and of TopSecurity as a reliable and innovative supplier.

Gap no. 2:

Prior expectations: The system is capable of tracing even the most sophisticated breaking into a protected building.

Actual outcome: The system was tricked by a clever team.

Gap 1 focused on the event planning. A possible lesson might be running a rehearsal before such a show, or conducting closer communication between the developers and the testing team.

Gap 2 focused on the question of how a three-year project failed to achieve an effective product.

Gilbert, the team leader, asked Alex a clear question:

Was the failure due to a minor technical problem that could be fixed pretty soon?

Alex: “The specifications were very demanding: no incidental movement of an animal, like a dog, should activate the alarm.  In other words, no false alarms allowed!  Thus, we assumed that any human being coming to the building would be walking on two feet.  The image recognition algorithm was based on that assumption.”

Based on this statement it was clear that gap no. 2 was real and substantial.  Considering the value of better understanding what led to the failure to achieve the technological objective of the project, and of ensuring that such a failure never happens again, the team pointed towards the second gap.

What is important to understand is that inquiring into both gaps by the same team reduces the ability to focus, and by that the chance of learning a lesson of real value.

Questions: What should the next step be? Is there a need for more information and analysis? If so, what missing information is required?