Caught within the shared paradigms of their business area

[Image: a common shared paradigm being challenged]

Every business area has its own “best practices” (are they really the best?) and a whole set of paradigms shared by everybody in that particular area. The consequence is being caught in a status quo, where the performance of the organization is stuck and slowly declines as every competitor intensifies its efforts to steal customers from the others.

This day-to-day constant fighting to preserve the current state, without any leap in performance, is the reality of the vast majority of organizations. It causes them to be satisfied with relatively small profits, or to tolerate limited losses, with the feeling that this is the best they can do. Such businesses succeed in remaining in a reasonably stable state, but without any hope for a better future.

A necessary condition, though far from a sufficient one, for getting back to good business growth is the ability to challenge one important shared paradigm. Once this is done the organization deviates from the common path all the competitors follow, and by doing so establishes a clear differentiation from the competition. The risk of not challenging such a paradigm is that a competitor might do it first, shattering the false impression of stability.

However, this absolutely necessary step for growth is very risky, as being different does not mean outclassing the competition, and it certainly does not guarantee bringing any new value to customers. Too often being different from the standard only reduces the value perceived by the customers, who see just the difficulty of getting used to something different without any benefit from it. In other cases the new added value seems too expensive for the target market.

Another risk is that even if the organization succeeds in creating new value for customers, it does not mean the customers are able to recognize and appreciate that value. The difficulty is that unexpected added value might require a change in habits, and even when the customer sees the new value as something surprisingly nice (“how come we never got such an offer before?”), the move raises suspicion that it is too good to be true.

The point about the risk is that it creates FEAR, which sometimes blocks any attempt to challenge a common paradigm that could lead to a breakthrough. The way FEAR should be handled is full acknowledgment that it is legitimate, while the risk itself is handled by analyzing it logically, striving to reduce the risk or its negative impact, and creating a safety net of control with immediate corrective actions to neutralize the negative impact in time. When the risk is properly evaluated and controlled it is possible to overcome the fear.

Another, seemingly unrelated, effect of a similar fear is the high number of R&D projects that continue even though their early promise has already vanished. The causal relation of that effect to the reluctance to challenge established paradigms shared within a business sector is the fear of failure and its personal impact. The term “failure” has an especially negative connotation in the world of measurements and false accountability, and is in itself a paradigm that should be challenged. An alternative expression is “taking a calculated risk”, which naturally leads to the realization that the move might fail, but failure is then not interpreted with the full connotation of a “failure”, because it has been considered ahead of time and the choice has been to go for it. In the high-tech startup world the expectation of failures is so high that the damage to the pride and reputation of the individuals involved is minimal, which opens the way to many worthy efforts to do something exceptional.

Taking a calculated risk should be widely used not just for new technologies, but in every business sector, as the ways to come up with significant new value for potential clients are diverse, and only very few of them require a technological breakthrough.

But, taking a calculated risk has to be based on two necessary elements.

  1. A culture that endorses taking calculated risks with the full realization that they might fail.
  2. A valid process for analyzing the risk. Such a process should include searching for ways to reduce the potential risk, eventually producing an analysis of both the potential damage and the potential gain.

The difficulty in the process of calculating the risk is that in the majority of cases we don’t have good-enough probability numbers. Using statistical models to estimate the probability is also frequently misleading.

Yet, the difficulty of estimating the amount of uncertainty should not cause management to ignore the notion of well-calculated risks, because the future of every organization simply requires taking some risks, and if you have to take risks you had better develop good-enough ways to estimate them. Developing the right culture depends on finding an acceptable way to estimate risks. The term “estimate” is more appropriate than “calculate”, which seems to suggest the outcome of precise calculations.

There is a need to differentiate between estimating the uncertainty and estimating the level of damage that would be generated as a result of it. Let’s use the following example to comprehend the full ramifications of the difference.

A food company evaluates an idea to add a high-end variant to its popular product line SoupOne. The new line will target the market segment that appreciates true gourmet soups. The line will be called SuperSoupOne and will cost 50% more. This is a kind of new paradigm, because the usual assumption is that gourmet lovers shy away from processed food.

Suppose that management has enough evidence to be convinced that gourmet-loving people could be tempted to try such a soup and, assuming it is really up to their standard, will continue to consume it. The “most likely” estimation, based on a certain market survey, is that SuperSoupOne will gain market demand of 10% of the current demand for SoupOne, and that only 5% of SoupOne buyers will switch to the new product; the rest are going to be new customers.

However, one of the senior executives has raised another potential risk:

“What would SuperSoupOne do to the reputation of our most popular product line? It would radiate the message that it is a rather poor product, now that even the producer is selling a much better version of it. What would the buyers do when they cannot afford the better product? I’m concerned that some of them will try the competitors’ products.”

The risk is losing part of the sales of the key product of the company. How big might the impact of SuperSoupOne on the sales of SoupOne be? Actually, the impact might even be positive. Do we really know? We need to evaluate the possibility of a negative effect and how it would impact the bottom line.

Note, the risk to be evaluated is the impact of the new line on the old line – not whether the new line would generate high enough throughput to cover all the delta-operating-expenses of launching the new line.

How could such a risk be evaluated? Suppose the current throughput generated by SoupOne is $5M. According to the forecast for SuperSoupOne, the quantity sold of the new line will be 10% of the current quantity sold of SoupOne. Suppose that such sales would generate 20% of the current throughput, due to the higher unit price. So, we get additional throughput of $1M from the new line, while losing only $250K (5%) from the old line.

But the drop of 5% in the old line is only a forecast, described vaguely as “most likely”, and those 5% are now buying the new line. If the reputation is truly harmed, it might cause up to 30% lower sales of the old line. In that case the loss of $1.5M of throughput from the old line would not be compensated by the $1M “most likely” estimation of the new throughput.

The above rough calculations help management realize the potential risk of losing up to $0.5M as a reasonable worst case. Other reasonable possibilities look much more optimistic, yielding overall additional profit from the move.
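For readers who want to replay the arithmetic, here is a minimal sketch of the calculation, assuming the illustrative figures used above (the $5M throughput of SoupOne, the 20% throughput gain from the new line, the “most likely” 5% loss and the 30% reasonable worst case):

```python
# Rough risk estimation for launching SuperSoupOne.
# All figures are the illustrative assumptions from the example, not real data.

current_T = 5_000_000                  # current throughput of SoupOne ($)

gain_new_line = 0.20 * current_T       # $1.0M added by SuperSoupOne
loss_most_likely = 0.05 * current_T    # $250K lost from SoupOne ("most likely")
loss_worst_case = 0.30 * current_T     # $1.5M lost if the reputation is harmed

print(gain_new_line - loss_most_likely)  # +750,000: most likely net change
print(gain_new_line - loss_worst_case)   # -500,000: reasonable worst case
```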

Can the risk be reduced? How about giving the new product line a totally different brand name, which does not refer at all to the current popular product? This probably does not eliminate the full negative impact, but it would significantly reduce it.

The objective of the detailed example was not to reach a firm, clear decision. There is no claim that the existing paradigm is invalid and thus can be challenged. We also don’t know whether coming up with a higher-end product is a good idea, or what the actual impact on the current market is going to be. The example has been used to demonstrate the need to get a better idea of the risk and its potential impact on the bottom line, using the intuition of the relevant people. A certain direction of solution for estimating a risky move has been briefly demonstrated.

Such an analysis is a necessary condition for the bigger need of opening the door to a constant search for a breakthrough, which has to be based on challenging an existing shared paradigm. This is the objective of this post: to claim that challenging widely shared paradigms is truly required of every organization. You might say the same about your own desire to make a personal breakthrough: it passes through challenging a common paradigm.


Collaboration, Win-Win and TOC

This article is broadly based on a joint webinar at TOCICO. We found that the topic is of special importance and ought to be expressed in more than just one way.

People often collaborate with each other. Family, ideology, security and business are common objectives for collaboration. When the candidates for collaboration trust each other, win-win is easier to achieve, and win-win is necessary for maintaining long-term collaboration. Sometimes we have to collaborate with people we do not trust; it happens when a mutual pressing need makes it mandatory to overcome the distrust.

Collaboration between different organizations is harder to establish. A simple, straightforward relationship like “we buy from you and we pay according to agreed pricing and related conditions” is more about “cooperation” than “collaboration”. Cooperation needs to be present in most of our organizational relationships, whereas collaboration – where we have some mutual goals to achieve, and we need to ensure we both find a win in what we do – is rarer.

There are obvious difficulties in maintaining ‘trust’ between organizations. We can trust a specific person. But while all relationships between organizations are handled by people, the obvious concern is that those people might be replaced, or be forced to act against the spirit of the collaboration.

However, collaboration could open the door to new opportunities, even creating the desired decisive-competitive-edge, for at least one of the sides, while improving the profitability of the other. Collaboration between competitors could strengthen the position of both towards the other competitors.  Collaboration between vertical links in a supply chain could improve the whole supply chain, and if all the links in a supply chain would collaborate effectively then the overall decisive-competitive-edge would be hard to beat.

So, we should look, first of all, at the new opportunity to be opened, and only then analyze how such a collaboration could be sustained, overcoming the usual obstacles.

So, a key insight is that collaboration might SOMETIMES work well; thus it should be carefully decided when it truly pays. There are two key negative branches of collaboration:

  1. There are several risks in collaboration, especially between organizations, which might disrupt the positive outcomes.
  2. Collaboration requires a considerable amount of management attention.

An example where many efforts have been made to establish effective collaboration is found in the area of big construction projects. A methodology called ‘Project Alliance’, or Integrated Project Delivery, has emerged to deal with the basic dilemma posed by the common contracts and basic structure of such big projects.

The problem is that the client has to come up with a very detailed plan of the project, which is required to allow the competing contractors to offer a fixed price for the whole project. The winning main/general contractor then contracts a number of sub-contractors. What usually happens next is that errors, additional requests and missing parts are revealed, and re-negotiations take place – which please the contractors, but much less so the client.

The concept of cost-plus came to settle this kind of re-negotiation, but when there is no fair visibility into the true cost, the above changes to the original plan are still great opportunities to squeeze more money from the client. From the client's perspective it is difficult to assess the true cost of the project, and even more difficult to ensure its quality and duration. When we add up the client's effort to plan in great detail and the helplessness felt when errors are identified and more changes have to be introduced, the pain of running such a project is severe.

From the contractor's perspective, the initial bidding/competition phase forces them to cut the price too much, thereby taking considerable risks, hoping to be lucky enough to gain a lot from changes in order to preserve satisfactory profitability.

A combined bad aspect of this kind of relationship is the mutual dissatisfaction with the outcome of the project. With all the changes and re-negotiations the project typically takes too long, costs too much, and the quality of the end product is compromised. That basic dissatisfaction has a negative impact on the reputation of all the contractors.

It would be better for the client to have a more open and collaborative dialogue with the contractors, giving them more influence on the planning and ongoing execution. It is a more effective way to handle the complexity and uncertainty of such big projects. However, any solution has to be beneficial to every contractor. Without win-win no alternative way would be truly useful.

How can the concern about cost, from the client's view, and the concern about profitability, from the contractor's view, be dealt with in a way that is also in line with the success of the project?

The idea behind the Project Alliance is based on two elements.

One is to create a collaborative team of the key contractors that manages the project, with the understanding that getting the most out of the collaborative effort is the way to achieve great success. This structure is different from having just one key contractor who manages the whole project and contracts several, even many, subcontractors. Under the Project Alliance structure a consensus between the alliance members has to be achieved through active collaboration. One consequence is far better synchronization between all the different professional aspects.

The second element is establishing a gain/pain payment based on achieving a few targets, defined by specific measurements, which together define the success of the project. The payment to each alliance member is made of three parts:

  1. Actual cost – the true cost paid by the members to suppliers and freelancers, plus the salaries of the employees who are fully dedicated to the project.
  2. Fixed payment to the members for their work. In this kind of project the fixed payment is less than the normal expected profit.
  3. Variable fee based on the agreed performance measurements for the project as a whole. The variable fee plus the fixed payment could end up giving the contractor-member a much higher profit than the norm.

The variable fee and the fixed payment are not proportional to the cost! They are defined independently of the cost, and the measurements might include ‘total cost of the project’ as one of them. This split eliminates the contractors’ interest in inflating the costs that pass through their books in order to make a profit. It also eliminates the damage caused if the value of their scope is reduced. The acronym given to this payment method is ‘CFV’, for Cost-Fixed-Variable. In TOC terms the throughput for every contractor is the Fixed plus the Variable.
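Here is a minimal sketch of how a CFV payment could be computed, with hypothetical fee figures and a single aggregated performance score (real alliance agreements define several distinct measurements):

```python
# A sketch of the CFV (Cost-Fixed-Variable) payment to one alliance member.
# All figures and the single performance score are hypothetical illustrations.

def cfv_payment(actual_cost, fixed_fee, max_variable_fee, performance_score):
    """performance_score in [0, 1], derived from the agreed project-wide
    measurements (e.g. total project cost, lead-time, quality targets)."""
    variable_fee = max_variable_fee * performance_score
    return actual_cost + fixed_fee + variable_fee

# Example: $10M of actual cost reimbursed, a $0.5M fixed fee (below the normal
# expected profit), and up to $1.5M variable fee tied to overall success.
payment = cfv_payment(10_000_000, 500_000, 1_500_000, performance_score=0.8)
throughput = payment - 10_000_000  # in TOC terms: fixed + variable = $1.7M
```

Note that neither fee depends on the actual cost, which is exactly what removes the incentive to inflate it.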

The Project Alliance has been used in several big projects that are considered especially successful, taking into account the lead-time from start to finish, the overall cost of the project and the general satisfaction of the client. However, the vast majority of big construction projects all over the world are still managed in the old way, despite all the predictable undesired effects. In order to understand the fears of adopting this direction of solution, let’s point to some potential negative branches:

  1. Trust between organizations is assumed to be shaky, especially without prior experience.
  2. The individuals within the client organization who are in charge of the project may feel robbed of their power to dictate whatever has to be done in the project.
  3. The contractors might find themselves in a dilemma when they see a short-term chance to squeeze more money, but feel bound by the collaboration agreement, where the variable fee is less than the opportunity they see.
  4. The uncertainty of the budget, due to the variable payments, might seem problematic. This might be more of a bureaucratic issue, as big construction projects are exposed to much higher uncertainty anyway.
  5. Many clients feel uncomfortable selecting suppliers without a fixed-price bid, and some formal procedures require such bids (though these are rare), mainly due to inertia and the fear of going against the current wisdom.

Another area where collaboration could add immense value is the relationship between a client organization and a few of its suppliers. The regular relationship in B2B is: the client organization tells the supplier what is required – specs, quality, quantity, delivery time and price. Negotiations are about due-dates and price. The underlying assumption is that the client knows what and how much is required. In the majority of cases that assumption is valid enough. In many cases the client is able to run a bid/competition in order to get the cheapest price.

There are other cases where creating a longer-term engagement between the client and the supplier could boost the business of both, creating win-win. In most of those cases the basic information flow still comes from the client, telling the supplier what, how much and when to supply. The agreement covers a longer time frame and exclusivity, thus ensuring availability of supply for the client and security for the supplier.

There are fewer cases where true collaboration between client and supplier could truly enhance both organizations, creating new opportunities that cannot be achieved with the formal relationship of “we tell you what we need, you supply according to a general agreement on time and price, probably also a minimum annual quantity.”

In those fewer cases the potential could be huge. When the collaboration opens the way to reach wider demand and/or achieve a higher price, the potential for a significant increase in throughput, far exceeding any possible additional cost, exists for both organizations. Longer-term collaboration can also help the buying organization simplify its purchasing processes and specifications, allowing a reduction in overall purchasing cost and leading each party to increased profit.

Both organizations have to invest management attention in true, close collaboration; one may call it partnership. Both need to earn a lot from it. One characteristic of any ongoing collaboration is not just trust, difficult as it is to maintain between organizations, but also a deeper understanding of the interests, values and general culture of the two organizations. Actually, this kind of understanding of the other side is a necessary element in establishing such a partnership, because only when you understand the situation and its related effects on the other side can collaboration truly be beneficial and far superior to the common competitive approach.

Goldratt developed a vision connecting the whole supply chain in partnership/collaboration, ensuring fast response to the taste and wishes of the market, following the insight that every link in the supply chain truly sells only when the end product is sold to the consumer. The idea was to split the throughput from every sale between all the participants, making them collaborate in order to ensure as high a throughput as possible.

Goldratt’s vision raises several negative branches that need to be eliminated, like the conflicting interests of a link that partners with more than one supply chain. The collaboration along the supply chain should focus not only on response time and lower inventory, but also on actively developing new products, and modifications to existing products, that would capture more and more of the market.

All the above difficulties can be overcome with analysis and innovative thinking once the potential size of the opportunity becomes clear. Collaboration is a means, especially for medium and small organizations, to gain an extra competitive edge by becoming virtually larger, with additional capabilities, more capacity and more effective synchronization. With the right thinking, and taking win-win seriously, the potential is unlimited – even though the ultimate constraint could still be management attention.

 

Raw Thoughts on the Management Attention Constraint

Goldratt called management attention the ultimate constraint, meaning that after elevating all other constraints, including market demand (by establishing a decisive competitive edge), the pace of growth of the organization is still limited by the capacity of management attention.

How much attention can a person give in a period of time? The term itself is elusive and very difficult to assess even when focusing on just one person, and even more so when we try to assess the attention capacity of a group of people. Yet there is no doubt that there is a limit beyond which adding more issues to think about and control causes chaos. The evils of multi-tasking are well known.

Capacity of management attention means how many different issues can be handled in a period of time. I do not try, in this post, to assess the ability and skills needed to deal with a certain issue – just how many issues can be properly handled in a period.

What contributes to the difficulty is the simple fact that we are used to filling our attention all the time. We cannot tolerate being “bored”. We always think of something. So, since our mind is busy all the time, the question of whether we are close to the limit cannot be easily answered. After all, if something urgent pops up, our mind abandons whatever it has been occupied with and switches to the new issue, which requires immediate attention. We can safely say that while there are issues that force themselves upon us, most of the time we choose the issues that are worth spending time on.

Focusing is an exploitation scheme directing our mind to deal with the more valuable issues, putting aside the less critical ones. However, we are only partially successful in concentrating on what we have decided to focus on. We are certainly limited in how long we can concentrate deeply on one issue before needing a break. This means people have to multi-think across several issues. But when we don’t try to focus our mind on just a few issues, we achieve nothing of substance.

So, here is the key difficulty in utilizing our mind in the most effective way: we need to let our mind wander between several issues, but not between too many. Let us assume that every person has somehow learned to maneuver his or her mind in an acceptable way. This means we can feel when too many critical issues call for our attention, at which point we lose control and become erratic. What we can do is try our best to decide what to push out of our mind, so we don't reach the stage of overloading our attention. This is a change of behavior that is very difficult to make, but even when we are only partially successful our effectiveness goes up considerably.

How does management attention become an issue in the life of an organization?

Even high-level executives give their work only part of their overall span of attention. So, part of the competition for our attention has nothing to do with work. People who love their work are highly motivated to do well, feel deeply responsible at work and give more attention to work issues. But still, all the other personal issues, like family, health and hobbies, have to have their part.

From the organization perspective the limited attention of all management has to be properly utilized, but not allowed to come near the line of confusion causing mistakes and delaying critical decisions.

In several previous posts and webinars I have expressed my view that in any organization there are two different critical flows:

  1. The current Flow-of-Value to clients. This involves short-term planning, execution and control, doing whatever it takes to serve current customers.
  2. Developing the future flow-of-value. This is the Flow-of-Initiatives, aimed at bringing the organization to a superior state according to the goal of the organization.

Is it likely to have an active management-attention constraint in the flow-of-value?

When this situation occurs, delivery performance gets out of control. Some orders are forgotten, others are stuck for a very long time, and without the client screaming there is little chance of delivering an order at “about the agreed time”. Such a situation might bring the organization into chaos, and no human system can live for long with chaotic performance.

The clear conclusion is that a management-attention constraint in the flow-of-value cannot be tolerated, and thus all organizations look for the right skilled managers to maintain the flow-of-value at a certain level of stability, on par with the competition. The typical operations manager is one who is active in spotting fires and able to put them out. Such a manager is less tuned to coming up with a new vision.

But when we examine the flow-of-initiatives the situation is quite different. There are usually many more ideas for improving current performance than there is management attention available for developing, carefully checking and implementing them. The result of overloading the management attention is being stuck for much too long in the current state – same clients, same products and same procedures – while improvement plans take very long to implement and the stream of new products is also slow and erratic.

Having management attention as the constraint of the flow-of-initiatives makes sense, because of the unlimited number of ideas, but it requires strong discipline to keep management focused. It means having consensus on the strategy, and based on it deciding which raw ideas should be checked, then having a process for deciding which ones to develop in detail, and after that choosing the few to implement. As measuring attention capacity is currently impossible, because we lack the knowledge, some broad rules should be employed to limit the number of open issues every management team has to deal with.

This kind of discipline requires monitoring the issues – call them ‘missions’ – where every manager is in charge of completion: assigning a due-date to each mission, monitoring the number of open missions, and also checking whether too many missions are late, signaling that one or more managers are overloaded and thus the rules have to be updated.
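A minimal sketch of such a monitor, assuming two broad rules of the kind suggested above (a hypothetical cap on open missions per manager and a late-mission threshold):

```python
# A sketch of a 'mission' monitor. The two thresholds are hypothetical
# broad rules, to be tuned by each management team.
from dataclasses import dataclass
from datetime import date

MAX_OPEN_MISSIONS = 5      # cap on open missions per manager
MAX_LATE_FRACTION = 0.25   # flag overload when >25% of open missions are late

@dataclass
class Mission:
    owner: str
    due_date: date
    done: bool = False

def overload_signals(missions, today):
    """Return a warning per manager whose open missions breach a rule."""
    open_by_owner = {}
    for m in missions:
        if not m.done:
            open_by_owner.setdefault(m.owner, []).append(m)
    signals = []
    for owner, open_missions in open_by_owner.items():
        late = [m for m in open_missions if m.due_date < today]
        if len(open_missions) > MAX_OPEN_MISSIONS:
            signals.append(f"{owner}: {len(open_missions)} open missions - too many")
        if len(late) / len(open_missions) > MAX_LATE_FRACTION:
            signals.append(f"{owner}: {len(late)} late missions - likely overloaded")
    return signals
```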

It is not practical to count every tiny issue as a ‘mission’. Managers definitely need to put out fires, deal with other urgent issues and handle many small issues that take relatively little time. Being able to empower subordinates could significantly reduce the critical load on any manager. But changing habits is very tough, so most of the time we have to accept a manager's character as given, and eventually come close enough to a fair assessment of how many medium and large missions a typical manager can handle on top of all the smaller issues that manager has to control.

What happens when management needs more capacity?

How difficult is it to elevate the attention constraint? The simple answer is adding more managers. There are two problems with that. One is called “diminishing marginal returns/productivity”: every additional manager adds less to the total management attention than the previous one, because of the additional communication load on the existing management group. The other problem is whether the whole management structure needs to be re-checked and perhaps changed.

Empowering subordinates is another way to relieve the load on the shoulders of managers, and it does not require re-structuring the management hierarchy. The problem here is that for a manager, changing in order to trust subordinates is even tougher than improving control over which issues should occupy the attention and which should be pushed away.

So, empowerment and a wider managerial pyramid are valid ways to increase managerial capacity, but the ongoing duty of top management is to focus the attention of all managers on the most promising issues top management has chosen, while also keeping part of the attention open for controlling the current situation and watching for signals of emerging threats.

Raw Thoughts about the Bitcoin

TOC should guide us to think clearly. I’m going to try to do this regarding the phenomenon of Bitcoin, and expose myself to your cause-and-effect criticism. I don’t know much about Bitcoin, and I’m certainly not an expert on how the Blockchain, which is supposed to protect the security and privacy of transactions and agreements between parties, truly operates, or whether the information is absolutely safe from any break-in.

I’m going to broadly and freely follow Goldratt’s Six Questions for assessing the value of a new technology, but use only four of them, from the second to the fifth. The point is that the Six Questions are targeted at the developing organization, providing guidance to improve the overall value, while here I’d like to assess the value of Bitcoin for me.

Two posts covering the six questions have been published on this blog. The link to the first one: https://elischragenheim.com/2015/12/28/uncovering-the-value-of-new-products-part-1/.  Then, at the bottom, click on the next one.

The current limitation that Bitcoin eliminates (question #2) is the need to rely on a specific middle player, like a bank or a credit card company, to provide safe transfer of money from one party to another. Another related limitation is the need to exchange different currencies for international trade. The use of Bitcoin still requires a middle player to record the transaction as part of a block of information, but this can be done by many service providers. The purchase of Bitcoin still requires exchanging the current currency for Bitcoin.

There are two problems with the current middle players. First, they charge per transaction and their fee is not low. Second, the middle player gains direct access to private information that can be used against the parties, most notably by informing the tax authorities. The threat of losing privacy is an obstacle also for the many who wish to engage in illegal, or immoral, activities, like selling or buying drugs. The bigger the middle player, the higher the concern that the stored information will be used against its true owners.

Do we really want to help people hide their illegal activities? This is a moral question that I leave to the reader to decide. Generally speaking, there are other violations of privacy that we would all like to prevent.

The current ways to bypass the limitation (question #3) are using cash or barter. Due to security issues and practical logistics, cash is used in a very limited way. Another way to reduce the limitation is working with many banks and several credit cards.

The first step in using Bitcoin (question #4) is to buy enough Bitcoin or mine it. Just as a comment – a Bitcoin amount can be any decimal number. Mining Bitcoin is tricky, as only very big computers can do it, and only a relatively small quantity of Bitcoin is still available to be mined. So, the easy way is to buy Bitcoin from others who already have it. The problem is that you can buy Bitcoin only with cash or through the same middle players. Businesses can acquire Bitcoin by selling products or services to clients who pay with Bitcoin, and this could be a good way to keep their business transactions confidential.

So, it seems that Bitcoin could give considerable value to businesses that need confidentiality in their financial transactions. It could also give some value to small clients buying through the Internet from foreign sellers, as it might be cheaper and quicker than the current means of international transactions. It also helps small clients keep private what they buy and from whom.

But when we give attention to potential reservations about using Bitcoin (question #5), we see a very serious drawback:

The exchange rate of Bitcoin behaves in a very volatile way

The fact that the exchange rates of most regular currencies fluctuate is already a major problem. The bottom line of many businesses with wide international activity depends considerably on the exchange rates between the local currency and the dollar or euro, and the fluctuations of these rates cause considerable pain.

For the TOC readers let me add some explanation of the impact of the exchange rate on throughput (T). Suppose a deal is made in American dollars and the money is to be paid one month after the delivery of the goods. Suppose we live in another country with a currency called the s-coin, where every s-coin equals exactly $1 on the date of delivery. The actual generated T might easily be 3-5% up or down from the theoretical T at the date the goods were delivered. This means the value of T cannot be known until the actual payment. The same distortion could affect the price of materials as well. Throughput means revenues minus the truly-variable-costs (TVC), mainly materials. But what is the cost of materials? Is it the original purchase price, or the current price of replenishing the same materials? I think it should be the latter. When the materials are purchased from a foreign supplier, it adds another source of uncertainty to the level of throughput.
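A minimal sketch of this effect, using the s-coin example and hypothetical deal figures:

```python
# How the exchange rate at payment time distorts throughput (T).
# T = revenues minus truly-variable-costs (TVC); the revenue is fixed in
# dollars, but its s-coin value is known only when the payment arrives.

def throughput_local(revenue_usd, tvc_local, usd_rate_at_payment):
    return revenue_usd * usd_rate_at_payment - tvc_local

deal_usd = 100_000   # deal priced in dollars
tvc = 60_000         # materials, at the current replenishment price (s-coin)

print(throughput_local(deal_usd, tvc, 1.00))  # 40,000: T at the delivery date
print(throughput_local(deal_usd, tvc, 0.95))  # 35,000: rate dropped 5%
print(throughput_local(deal_usd, tvc, 1.05))  # 45,000: rate rose 5%
# A 5% currency move shifts T by 12.5% here - and Bitcoin swings far more.
```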

As we see, the impact of the exchange rate on international business activity is quite significant. Using Bitcoin dramatically increases the uncertainty of the T of deals that have been formally completed. The deal itself is actually a combination of two very different deals at the same time: one is a deal of goods sold at the value of $X; the other is selling $X for Y Bitcoin, which is a very uncertain deal in itself.

Currently Bitcoin behaves in a crazy way that suits gambling addicts and almost nobody else. It may well continue to be highly volatile in the future, because we don’t see any clear way to give Bitcoin a value. I claim that this is a major reason not to use Bitcoin as money – it is too risky.

Can this basic flaw in the functionality of virtual money – having an unstable value – be fixed? Think of it as a necessary condition for any future replacement of Bitcoin, and let the great minds in economics work it out.

Going out of trouble – or running directly into it?

Teva is a multibillion-dollar pharmaceutical company, originally Israeli, now international, but one that Israel still feels close to. Two years ago the value of the company was estimated at $65 billion. That now looks like ancient times, as its current value is estimated at $19 billion.

Teva is a sad story from which we all should learn. It reflects the problematic behavior of top executives – their megalomania, coupled with personal fears – and how they are dragged into gambling with the livelihoods of their employees.

Teva's big problem started with a very big success that had a clear expiration time. The patent on a specific original drug that contributed substantial profits was going to expire. This meant profitability would go sharply down – a situation quite frightening to people used to high bonuses. Another threat to the executives was the possibility of a hostile takeover. The “remedy” was to become bigger, with less excess cash, by purchasing more and more companies.

Teva operates mainly in the generic drugs business, taking advantage of the expiration of other drugs' patents. To do well, such a company needs to be truly agile, starting with being first to come up with good replacements for drugs that have just lost their patent, which grants six months of exclusivity shared only with the original drug producer. The second field of potential advantage is being truly good at managing the supply chain, ensuring no shortages while not holding too much surplus.

Due to fierce competition, the generic drugs business has to live with low profitability relative to the potential “easy profits” coming from a few successful original drugs. This requires a very high volume of sales backed by a smooth flow of products. The need for excellent management of both the R&D of new generic drugs and the supply chain is an ongoing burden on management in a field where clients have very low loyalty and a reduced price is the key parameter. It is a real must for such a company to keep highly competent middle-level managers in order to ensure stable financial performance.

The situation in the sad story of Teva was that the pressure of the ticking clock and the threat of a takeover caused hysteria among the top executives. Failing to come up with worthy new original drugs meant going back to operating just in the generics market. This is where the company, much smaller at the time, excelled in the past: being a truly great performer in the generics business.

Somehow Teva management believed that the way to stay successful enough after the expiration of the one key original drug was to buy a large competitor and become the largest producer of generic drugs in the world. Being the biggest still means just a single-digit percentage of the world generic drugs business.

This pressure to buy companies – showing momentum, becoming the biggest in the world and being relatively immune to takeovers – led to a series of very bad decisions. One of them was to acquire a Mexican company for $2.5 billion, only to realize that the company was empty and its real worth was zero. Then Teva's executives and board of directors decided to acquire Allergan's generics business, a large competitor in generics manufacturing, for $40 billion. After the purchase Teva became the biggest, with 8% of the generic drug market. Financing that deal created a debt of about $35 billion.

This was a kind of gamble that top executives should never take!

The simple meaning was basing the future of Teva on the HOPE that the market would behave as Teva wanted.

What actually happened in the market to disrupt Teva's position is ironic. One of the reasons to become the largest supplier of generic drugs is the ability to dictate higher prices. However, many buyers of drugs in the US collaborated to create bigger purchasing organizations that buy the drugs at reduced prices. Here is an insight to digest: the idea that a big player is able to force the rules on the whole business area works both ways!

There are two common objectives for a company buying a competitor, on top of upgrading the image of the executives and making the possibility of a hostile takeover remote:

  1. Gaining a stronger position in the market in order to dictate better prices.
  2. Being able to reduce cost by cutting redundant jobs.

However, there are also several possible negative branches:

  1. Merging the operations of two different organizations invites problems both companies have not faced before, like a clash between two different cultures, managerial policies and middle-level management practices, language barriers and generally reduced employee morale because of too many unknowns.
    1. Note the possible negative impact on the performance of the combined supply chain.
  2. A very high increase in the burden on several key executives and managers whose areas of responsibility, including the number of subordinates, have grown. Such an increase might overload the capacities and capabilities of these individual managers. In other words, a management-attention constraint might emerge and cause a certain level of chaos.
  3. Making CASH the constraint of such an organization. We in TOC understand the ramifications, which are not obvious to non-TOC managers. A previous post on cash constraints, by Ravi Gilani and me, appeared on this blog. The new constraint can emerge in two different ways:
    1. The company drains its own free-cash and line-of-credit to partially finance the deal.
    2. The company finances the deal by taking a big loan, hopefully to be repaid from its own future profits. If the company faces difficulty repaying the debt, its very existence is in jeopardy.

Failing to deal with the above potential negative consequences, and deciding unanimously to take such a clearly big risk, caused the collapse of a once great and successful company. We in TOC should study the emotional causes of such behavior and think about how we could prevent such recklessness when we have the chance.

Up to this point we have briefly examined how a big organization chooses to get into trouble. Let's now try to analyze the pitfalls when the company tries to get out of it.

The common paradigm is: an organization in financial trouble needs to decrease its size. This means either selling parts of the business or running a massive “efficiency program”, meaning mainly cutting jobs, closing some facilities and moving operations to cheaper locations. The common focus is on reducing costs, not on increasing sales!

For Teva the potential alternative of selling the very company whose purchase caused the trouble is UNTHINKABLE. Suppose the former owners of Allergan were ready to pay $15 billion to buy their company back, in spite of the tougher market in generic drugs. For the management of Teva the humiliation of such a move is unbearable. Teva would still need to pay back the $35 billion in full. But it needs to pay it anyway, even if Allergan does not produce enough profit to be worth more than $15 billion. Was such an alternative even considered?

So, the solution that looks acceptable is becoming more efficient, which would, hopefully, bring in enough cash to cover the loan – as required to keep Teva alive.

The key assumption is that right now there are many pockets of inefficiency in Teva's operations. By inefficiency I mean costs that can be cut without reducing revenues. There might be cases where some functions could be closed because their costs are higher than the revenue they generate. I assume these are minor cases.

It is relatively easy to calculate the cost savings of cutting 14,000 jobs (the number quoted in the media). It is not easy to fairly estimate the true impact on revenues. However, the numbers leaked about Teva's current efforts to stay alive are all about costs saved; nothing about revenues.

Think about it: when you evaluate an investment, estimating the generated increase in revenues is a must. But when you decide to cut jobs, your assumption is that revenues will stay at their current level. When jobs are cut due to a significant decrease in demand, the utopian idea is that it is possible to save costs in just the right amount required to serve the shrinking demand. That assumption is flawed in itself, but the case of Teva is different and makes it irrelevant. The world consumption of generic drugs is probably not going down, certainly not by much. What truly goes up is the pressure to reduce prices. So, Teva faces about the same demand in quantity, but its revenues go down. If Teva ends up with less available capacity, due to massive job cutting, then revenues would go down much further. What is the net of saved costs and reduced revenues? Would it generate enough cash to repay the loans?

We in TOC are fully aware of the insight:

Operational excellence definitely requires excess capacity

The amount of excess capacity required to preserve the agility needed for a competitive advantage cannot be easily determined. In TOC we get signals, through buffer management, as to whether the current situation is about right. I doubt whether any other methodology does better.

Cutting jobs reduces excess capacity, allows new bottlenecks to emerge and reduces responsiveness to the market. The common move of shifting production from an expensive location to a cheaper one takes considerable time and expense, especially in pharmaceuticals. But the real long-term question is whether the cheaper plant can produce all the required quantity, including the quantity produced by the closed plant, while adhering to the agility standards of the supply chain. If the answer is ‘NO’, or it requires substantial unplanned investment in additional capacity, then the risk to future business is considerable.

Another devastating aspect is the ability to maintain the required high standards of middle-level management looking after the flow of goods throughout the supply chain. Cutting jobs means losing a substantial amount of middle-level management expertise. How would the remaining middle-level managers be motivated to excel in what they do? Are they going to try harder out of gratitude for not losing their jobs? Or will they learn from that experience to actively look elsewhere, because they have lost trust in Teva's management?

Instead of cutting x% of the manpower, organizations could offer all employees a temporary cut in salary in order to avoid the layoffs, and promise that once the bottom line reaches a certain point the previous salaries would automatically be restored. There are negative branches to such a move, but in most cases they can be handled. The upside is that the capacities and capabilities are kept intact and can be used to gain a competitive edge.

Throughput Economics provides superior tools for assessing the bottom-line impact of moves like purchasing a competitor or cutting a high number of jobs. The distinction TOC makes between Throughput (T) and Operating Expenses (OE) helps to clarify what to consider in evaluating the net impact.
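A minimal sketch of that check, with hypothetical numbers, contrasting the plan's implicit assumption (revenues hold) with the feared scenario (lost capacity and responsiveness erode T):

```python
# Throughput Economics view of a mass job cut: the change in net profit is
# delta-T minus delta-OE. All figures below are hypothetical illustrations.

def net_bottom_line_impact(delta_T, delta_OE):
    return delta_T - delta_OE

delta_OE = -700_000_000   # yearly OE savings from the job cuts (OE goes down)

# The plan's implicit assumption: revenues, and therefore T, are unchanged.
print(net_bottom_line_impact(0, delta_OE))             # +700M to the bottom line

# The feared scenario: reduced capacity and responsiveness cut T as well.
print(net_bottom_line_impact(-900_000_000, delta_OE))  # -200M to the bottom line
```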

Eventually, any company in crisis needs to improve its sales. In the generics market this means managing the supply chain in a way superior to every competitor. That would create a real advantage in the market by providing perfect availability everywhere in the chain, while holding lower inventories than the competitors. This could open the door to improved VMI agreements, and even to Teva taking, in certain chosen locations, responsibility for the whole drug inventory of hospitals and other medical-care organizations, maybe even of some pharmacy networks. This could be a viable vision for Teva, and TOC has the tools to achieve it.

The Objective and the Challenge of Improving the Supply Chain – and the Personal Dilemma of the Key People

Improving the supply chain applies to two different scopes:

  1. Improving the performance of our own organization by improving the flow to our clients as well as the flow of supply from our suppliers.
  2. Improving the overall performance of a whole supply chain, across many different organizations, up to the end users of the end product/service.

My assumption is that the first, relatively local, perspective of the supply chain is the current common focus for considerable improvements. It already contains some thinking and vision on how that improvement is going to impact the overall supply chain, especially as it looks at both the suppliers and the clients of the organization.

The long-term vision for the whole supply chain, developed by Eli Goldratt, deserves a special post sometime soon. Let’s focus now on the challenge of one link in the supply chain improving its business. An organization usually has a variety of products for a variety of clients. It also has to maintain good relationships with various suppliers. Part of the value generated by the organization depends on the perceived value of the products, their quality, and the level of usage by the clients. A major point is the size of the target market segment that truly likes the end products. Another critical part of the value is the quality of delivery to the clients. This fully depends on the management of the flow.

What blocks the flow of products according to the true wish of the market?

One of the two key obstacles to fast flow is batching, meaning grouping many pieces and working on them together as they move from one work station to the next. Naturally, every batch serves many customers. Batching is at the center of the attack by Lean, once called Just-in-Time, in order to come as close as possible to the idea of one-piece flow.

Batching is caused either by long setups or by certain resources that work on a whole batch, like ovens or transportation vehicles. One of Lean's means of dramatically reducing batching is cutting the setup time. Another is using more resources with lower capacity – for instance, smaller trucks for more frequent deliveries. The common concern is that these means add cost.

A different key obstacle to flow is the lack of enough capacity, which causes long wait times. The first obstacle, batching, clearly impacts the second, the lack of capacity. When the batches are smaller, capacity is spent on more setups, which looks like wasted cost. TOC clearly shows this is not necessarily the case. But it is certainly possible that too many setups would turn a specific non-constraint resource into a bottleneck, causing huge delays.

TOC has the right tools to deal with both obstacles, and thereby maintain good flow, without becoming too orthodox, through sensible management of capacity and consideration of the real impact on cost and on demand.

The two obstacles seem to be a major problem because of their impact on the flow of products and services, and through that on the organization's goal. But when we examine the goal, there is an even more critical obstacle that makes the life of all managers so difficult:

Having to deal with the considerable uncertainty of the demand

The connection with managing the flow will be analyzed shortly. Meanwhile, let's understand how common management practice deals with the uncertainty.

The common way most businesses deal with uncertainty is using forecasts to predict the future. The problem with forecasts is that they are, at best, partial information. In probability theory every stochastic behavior has to be described by at least two parameters. The most common are the predicted ‘average’ and the standard deviation. Forecasting methods use past results, plus previous forecasts, to estimate the average result and the ‘forecasting error’, which estimates the equivalent standard deviation for the coming forecast. The big problem with using forecasts is that when looking at the demand many weeks ahead, the estimation of the forecasting error becomes messy. Actually, the whole notion of the forecasting error is problematic, because when the error is relatively big – as when you forecast the weekly demand of one SKU at one location three months from now – the burden on the decision maker is quite significant. Pretending the forecast determines the demand accurately seems like a convenient solution.
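A minimal sketch of the point, treating a forecast as the pair (average, error) and letting the error grow with the horizon (the square-root growth used here is one common simplifying assumption, not a universal law):

```python
# A forecast is at least two numbers: the average and the forecasting error.
# Quoting only the average hides how wide the band really is.
import math

def forecast(avg_weekly_demand, error_week1, weeks_ahead):
    error = error_week1 * math.sqrt(weeks_ahead)   # error grows with horizon
    return avg_weekly_demand, error

for weeks in (1, 4, 12):   # one week, one month, three months ahead
    avg, err = forecast(avg_weekly_demand=100, error_week1=30, weeks_ahead=weeks)
    print(f"{weeks:>2} weeks ahead: {avg} +/- {err:.0f} units")
# 12 weeks ahead the +/- band (about 104) is wider than the average itself.
```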

The personal fear of every decision maker of being proved wrong dominates the current practice. The usual response is that the forecast was RIGHT but the execution was wrong – which provides a way to blame somebody else.

In reality the demand at any specific geographical location fluctuates much more, relatively, than the aggregated demand across all locations. Another complicating factor is that the level of uncertainty grows sharply with time: the longer the horizon, the more uncertainty the weekly/monthly demand forecasts are subject to. The weekly demand a year from now might even be irrelevant, due to new products and/or a different economic situation.

The practical ramification for every supply chain is that the upstream nodes face much harder decisions on what to make and how much of each specific item. This is because the time between the relevant decisions on what to produce and the actual sales of the end products is relatively long, which means a high level of uncertainty.

If reality were deterministic, the two obstacles to flow would not matter, and an optimized solution for capacity utilization, using optimal batches, could be provided by a good, yet routine, software algorithm. This is, of course, not our reality.

One critical insight should be well understood:

Instead of trying to improve the forecast, which might be impossible or yield only minor gains, it is possible to improve the flow throughout the supply chain so it quickly reacts and adjusts to the actual demand!

Even with fast reaction to actual demand, we have to make sure there is enough stock, either on hand or in the pipeline, to answer the immediate demand. It seems impossible to determine the exact stock, due to the volatile uncertainty, but we can come up with a good-enough estimation and adjust it based on the actual behavior.

Estimating the right amount of stock is a kind of FORECAST! But we need to be clever in how we use the partial information to come up with a good-enough estimation of how much stock to hold in the system. In TOC the underlying assumption is that the demand tomorrow will be roughly the same as today, unless we get a signal that this might not be the case. So, we need to be clever in analyzing the signals that the current stock level might NOT be right. There is no viable way to determine a precise number, and being slightly “mistaken” does not matter as long as the fast, reactive flow to actual demand works properly.
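A minimal sketch of such signal logic, in the spirit of TOC's dynamic buffer management (the zone thresholds, the seven-day window and the one-third adjustment step are common rules of thumb, not fixed prescriptions):

```python
# Adjust a stock buffer based on signals that its level might be wrong:
# too long in the Red zone -> increase; too long in the Green zone -> decrease.

def buffer_zone(on_hand, buffer_size):
    ratio = on_hand / buffer_size
    if ratio < 1 / 3:
        return "RED"
    if ratio < 2 / 3:
        return "YELLOW"
    return "GREEN"

def adjust_buffer(buffer_size, recent_zones, window=7):
    """recent_zones: the zone recorded on each of the last days."""
    last = recent_zones[-window:]
    if len(last) == window and all(z == "RED" for z in last):
        return buffer_size * 4 // 3    # persistent shortage signal: +1/3
    if len(last) == window and all(z == "GREEN" for z in last):
        return buffer_size * 2 // 3    # persistent surplus signal: -1/3
    return buffer_size                 # no clear signal: leave it alone
```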

These are the key core insights of TOC for managing a supply chain. The process needs to be much more detailed, but this is certainly beyond this post.

Still a troubling question is:

Is “good enough” truly enough for an organization that values optimization?

There is a strange conflict in the minds of most key managers in the area of supply chains. On the one hand, they recognize the need to improvise, because everything changes all the time. Production managers are certainly used to improvisation. This requires having the appropriate infrastructure built in, like enough stock and available capacity for sudden changes. On the other hand, managers are aware that improvisation means extra cost for maintaining flexibility, that such an approach cannot be optimized and, even worse, that it is considered far from today's common best practices. It is frightening to go against the common best practices, and every manager whose career depends on the judgment of others, like a boss or a board of directors, has a lot to fear from doing something different from what is accepted as “right”.

Here is the conflict cloud:

[Image: the conflict cloud – Conflict SCM]

Actually, the TOC rules for dealing with uncertainty do not require frequent improvisation; they simply follow common sense and make decisions for relatively short periods of time, a result of the fast flow that management is able to maintain.

Handling uncertainty in such a way makes the organization superior to most competitors in the eyes of the market, which could lead to a very successful business.

The TOC approach challenges the need for the ‘C’ entity above in order to achieve the objective. So, the resolved conflict now looks like this:

[Image: the resolved conflict cloud – SCM-TOCsolution1]

Overcoming the natural fear of going against the common practices can be dealt with by running a Proof-of-Concept. It has to be a good-enough “proof”, and it has to be limited, so that even failure would not create too high a perceived loss. A former post on Proof-of-Concept can be found at: https://elischragenheim.com/2017/02/26/looking-for-the-right-pilot-as-proof-of-concept/


Managing a mix of make-to-availability (MTA) and make-to-order (MTO)

A critical insight emerged in TOC around the year 2000:

There should be clear separation between customer orders, which specify quantities and due-dates, and work orders to stock!

This is NOT the common practice, which strives to merge the quantities required by customer orders with stock quantities based on forecast. TOC clearly refrained from merging customer orders with different due-dates into one work order.  But up to that time even the TOC methodology used to assign artificial due-dates to make-to-stock orders.  By realizing that no due-date should be assigned to stock orders, TOC achieved a true separation between make-to-order and make-to-stock/availability.  This vastly simplified the production process by recognizing two different types of flow with two different types of buffers: time and stock.

Standard products with good and relatively stable demand, certainly fast-runners, fit being produced to availability. Fully customized products naturally fit make-to-order.  In between there are several gray-area categories of products that could be treated either as MTO or as MTA, for instance slow movers that the market still expects to find available, or products whose required delivery time is more than one day but less than the current production lead-time.

In some cases a combination of make-to-order and make-to-stock is applied to the same item (SKU): for instance, an item managed as regular MTA where a few very large orders, possibly bigger than the whole stock buffer, are better handled as MTO. Another pretty common case is supplying big clients, like automobile producers, that give the supplier rolling forecasts for several weeks ahead, but then expect to draw somewhat different quantities on the spot.  Here, too, a combination of MTO and MTA is the preferable direction for a solution.

The situation where a company runs both MTO orders, along their time buffers, and MTA orders, controlled by stock buffers, with both competing for the capacity of the same resources, is very common. TOC is effective in maintaining the separation, and by this every single order is exposed to the true priorities dictated by buffer management, no matter which type of buffer it carries.

Are there generic problems in managing a product mix that contains both MTA and MTO?

The MTO buffer is based on time. The order is released to the floor one time-buffer prior to the due-date. So the consumption of the buffer is linear: the buffer is consumed day by day at the same pace.  An important advance in the TOC methodology was to use the Planned-Load, the load on the weakest-link/constraint, to determine the “safe-date” for any incoming order.  This provides a mechanism to flatten a temporary peak of demand by increasing the promised customer lead-time based on the incoming demand.  The safe-date mechanism smoothes the load and by that ensures stable performance.  During off-peak periods the company is able, depending on its strategy, to offer shorter response times.  This offering has to be carefully thought through, as it might cause customers to expect fast delivery at all times and, when relevant, to refuse to pay more for a truly faster response.

The MTA buffer is based on stock, so the buffer status depends on the actual sales. The immediate consequence is that the buffer status of an order can jump from Green to Red in one day.

On the other hand, an order in the Green might stay in Green for a very long time when the sales of that item are very low. All in all, we see more volatility in the buffer-management sorted list of priorities due to MTA orders, while the MTO orders keep a steady pace.
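To illustrate the two behaviors, here is a minimal sketch of both buffer-status computations feeding one unified priority list. The one-third/two-thirds zone boundaries follow the usual TOC convention; the function names and the sample data are illustrative assumptions:

```python
from datetime import date

def mto_penetration(today, release_date, due_date):
    """MTO: time buffer. Penetration grows linearly, day by day."""
    buffer_days = (due_date - release_date).days
    return (today - release_date).days / buffer_days

def mta_penetration(buffer_size, on_hand, pipeline):
    """MTA: stock buffer. Penetration depends on actual sales,
    so it can jump sharply from one day to the next."""
    return 1 - (on_hand + pipeline) / buffer_size

def color(penetration):
    if penetration >= 2 / 3:
        return "Red"
    if penetration >= 1 / 3:
        return "Yellow"
    return "Green"

# One unified priority list for both flows, sorted by buffer penetration:
today = date(2018, 5, 14)
orders = [
    ("MTO-17", mto_penetration(today, date(2018, 5, 2), date(2018, 5, 16))),
    ("MTA-SKU-4", mta_penetration(buffer_size=600, on_hand=90, pipeline=60)),
]
for name, p in sorted(orders, key=lambda o: o[1], reverse=True):
    print(name, color(p), round(p, 2))
```

Whatever the buffer type, every order ends up on the same sorted list, which is exactly how the separation between the two flows is maintained without losing a single set of priorities.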

This difference in behavior is not relevant to the question: if we have two red orders, one MTO and one MTA, which one is more urgent?

One insight has to be considered here:

Buffer Management is effective as long as there is a fair chance to deliver ALL red orders on time!

Both MTO and MTA radiate commitments made to the market. TOC uses its capability to stabilize the operational system in order to gain reliability in meeting all the commitments, and to make this a decisive-competitive-edge.  When at least one commitment given to the market has to be violated, the conflict is which order should be delayed, and a new question is raised:

Which order would create less damage when its specific commitment is violated?

The buffer-management algorithm does not consider the size of the orders or the throughput they generate, and it ignores the identity of the client. However, when it seems clear that the company is (temporarily) unable to meet all its commitments, then the truly critical information is: who is the client?

The answer should generate more questions about how this particular client is going to react and how this might impact the other businesses the client has with the company.

So, when an MTO order and an MTA order compete over which one will turn Black, the deciding factor is the predicted damage, and it could easily be either the MTO or the MTA order.

When the company manages a mix of MTO and MTA then there is relevancy to the question:

What is the ratio of capacity consumption of the weakest link between MTO and MTA orders?

The reason for ‘reserving capacity’ of, say, 40% for MTO and 60% for MTA is to enable smoothing the load of the MTO part through the mechanism of the “safe-date”. To do that, we assume that every day 40% of the available capacity, on average, is dedicated to MTO.  Thus we can consider the planned-load for the MTO orders, which means calculating how long it would take the weakest link to process all the current MTO orders, when only 40% of the daily capacity is counted.  The calculated date represents when the weakest link would be free to process a new MTO order just received. The safe-date for that order is the calculated planned-load date plus half of the time buffer for that order.
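A rough sketch of this arithmetic, assuming the 40% reservation and purely illustrative capacity and order figures:

```python
from datetime import date, timedelta

def safe_date(today, mto_hours_booked, new_order_hours,
              daily_capacity_hours, mto_ratio, time_buffer_days):
    """Quote a safe-date for a new MTO order from the planned-load
    of the weakest link (a sketch; all numbers are illustrative)."""
    # Weakest-link hours already committed to MTO, plus the new order.
    planned_load = mto_hours_booked + new_order_hours
    # Only the reserved share of daily capacity counts for MTO.
    mto_hours_per_day = daily_capacity_hours * mto_ratio
    days_until_free = planned_load / mto_hours_per_day
    planned_load_date = today + timedelta(days=round(days_until_free))
    # Safe-date: planned-load date plus half the order's time buffer.
    return planned_load_date + timedelta(days=time_buffer_days // 2)

# Example: 64 hours already booked for MTO, a new 8-hour order arrives,
# the weakest link has 16 hours per day with 40% reserved for MTO,
# and the order carries a 10-day time buffer.
print(safe_date(date(2018, 5, 14), 64, 8, 16, 0.4, 10))
```

Note how a temporary peak of incoming MTO demand automatically pushes the quoted safe-dates further out, which is precisely the smoothing effect described above.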

The reader can find a more detailed description of determining the safe-dates for MTO orders elsewhere. This very brief description is intended to explain that the average reserved capacity for MTO orders, in a mixed environment, is required just for that mechanism.

The 40/60 ratio might give the wrong impression that the weakest link, the potential capacity constraint, is planned to utilize 100% of its available capacity. That would be a major mistake.  The commitments to reliable MTO delivery and to excellent availability of MTA items at all times clearly require a certain level of protective capacity.  Even when the mechanism of quoting safe-dates for MTO is working properly, there is still a need for protective capacity to cover unexpected downtime, rework and, mainly, inaccurate capacity data.

Maintaining excellent availability of the MTA items requires MORE protective capacity because of the impact of incidental peaks of load.

Dr. Goldratt required that the total capacity utilization of any resource in an MTA environment, including a mixed environment, would not be over 80% of the available capacity. Goldratt’s concern was not just the fluctuations in the total demand for MTA items, but also having enough time to deal with an increase in the total demand.  Many simulations showed that when the demand is growing there is a point where suddenly the number of red orders goes up sharply, and then it is just a matter of time until many items become short.  Note that maintaining large buffers restrains the impact of the lack of protective capacity only for a short time.  Using dynamic buffer management at that point makes the situation WORSE, because increasing the buffers increases the demand exactly when too many red orders compete for the limited capacity of the weakest link, which has turned into a bottleneck.  The emergency policies at this stage should be to reduce the buffers while looking for quick ways to add capacity as soon as possible.  We had better not experiment too much with the tolerance of the protective capacity, especially for MTA.
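The sharp, nonlinear rise of red orders is easy to reproduce with a toy simulation (my own illustration, not taken from any TOC simulator; the uniform demand model, the 3-day buffer and the 100-unit daily capacity are all illustrative assumptions). The share of deep-red days stays negligible over a wide range of utilization and then climbs steeply as the weakest link approaches 100%:

```python
import random

def red_share(utilization, days=10_000, buffer_days=3, seed=1):
    """Fraction of days on which the backlog in front of the weakest
    link exceeds two thirds of the time buffer (deep in the red)."""
    random.seed(seed)
    capacity = 100.0   # units the weakest link can process per day
    backlog = 0.0      # units of work waiting in front of it
    red_days = 0
    for _ in range(days):
        # Daily demand fluctuates around utilization * capacity.
        demand = random.uniform(0.5, 1.5) * utilization * capacity
        backlog = max(0.0, backlog + demand - capacity)
        if backlog / capacity > buffer_days * 2 / 3:
            red_days += 1
    return red_days / days

for u in (0.7, 0.8, 0.9, 0.95, 0.99):
    print(f"average utilization {u:.0%}: deep-red on {red_share(u):.1%} of days")
```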

The idea behind setting the line of average utilization of the weakest link at 80% is that when the total demand goes up there is still an opportunity to increase the capacity before the reputation of the company, as one that meets all its commitments, deteriorates.

Having to exploit the internal capacity constraint to only 80% of its theoretical capacity is problematic. When more potential demand exists, the temptation to draw more from the constraint is considerable.  However, the risk of overexploiting the constraint, and by that ruining the decisive-competitive-edge of reliability, is also high.

The solution, offered by Goldratt, is to maintain a market with no, or very limited, commitments! For instance, when the constraint is idle it could produce stock items that can be sold to another market segment without any commitment for future availability to that segment.  These make-to-stock orders are definitely ‘least priority’ orders, carrying lower priority than green orders.
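In terms of the earlier priority sketch, such no-commitment stock orders would simply be pinned below every buffered order; the convention below is illustrative, not a formal TOC rule:

```python
def priority_key(order_type, penetration):
    """Sort key: committed (buffered) orders first, by penetration;
    no-commitment stock orders always last (illustrative convention)."""
    committed = order_type in ("MTO", "MTA")
    return (0 if committed else 1, -penetration)

work_list = sorted([
    ("MTO", 0.86), ("MTA", 0.25), ("NO-COMMIT", 0.0),
], key=lambda o: priority_key(*o))
print(work_list)  # red MTO first, green MTA next, no-commitment last
```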

This idea is based on an important insight that is good to end the article with:

Specific commitments that provide high value to the clients should be directed at specific market segments. Other segments could be offered less binding commitments.