The big slogan and the potential real value of Industry 4.0

By Eli Schragenheim and Jürgen Kanz

We are told that in order to keep up with the rapid changes in the world, and to face ever fiercer competition, manufacturing organizations have to join the fourth industrial revolution, called Industry 4.0: a very broad suite of new IT technologies, notably the Internet of Things (IoT), artificial intelligence and robotics.

The slogan of Industry 4.0 claims it is highly desirable to join the revolution before the competitors do. Well, we are not sure whether the term 'revolution' truly fits the new digital technologies. But this is the smallest issue. The fast pace of the technology should definitely force every top management team to think clearly about the impact the newest technological developments could have on the organization and its business environment. Thinking clearly is required not only for finding new ways to achieve more of the goal, but also for understanding the potential new threats that such developments might bring.

There are two significant threats that new technology might create. First, it can push management to invest heavily in technology that is still half-baked and whose potential value, if any, is small. Second, it can cause a loss of focus on what brings value and what does not. Trying too many ideas, and investing money and management attention in too many issues, could end with a big loss, or very low value. Just look at the wide range of areas that are claimed to bring value:

image 1

Image 1: Improvement areas for Industry 4.0, adapted from McKinsey Digital 2015,
“Industry 4.0: How to navigate digitization of the manufacturing sector”

The application of new IT technologies, and the connection of known technologies, should lead to the following expected improvements per area:

image 2

Image 2: Expected improvements, adapted from McKinsey Digital 2015,
“Industry 4.0: How to navigate digitization of the manufacturing sector”

We can recognize a number of worthwhile time and cost reductions that should increase overall productivity, but what do top managers actually expect? To gain more insight we can look at a survey by Roland Berger Strategy Consultants, based on input from 300 top managers of German industries, as an example:

image 3
Image 3: Top-Manager expectations, adapted from “Die digitale Transformation der Industrie”, Roland Berger Strategy Consultants & BDI, https://bdi.eu/media/user_upload/Digitale_Transformation.pdf, last download 09/24/18

A big group of executives (43%) target only cost reduction with the help of Industry 4.0, while other managers want more sales from new products (32%) or more sales from existing products (10%). Achieving both more sales and cost reduction is the wish of 14% of the managers.

We can expect that approximately half of the managers will be satisfied with cost reduction due to improvements in the above-mentioned areas, but no element in the above images directly supports gaining more sales of new or existing products.

We assume that, on one hand, IT-related product innovations will push sales of new technological products, like wearables (GPS watches, health monitors, sleep trackers, etc.). On the other hand, there is the wish to cut cost in production, which, due to the fierce competition, would press the price down and increase the sales quantity. Will this trend also increase net profit? The short answer is: it depends; companies need to analyze the full impact on the bottom line very carefully.

The new digital technology can help reduce the 'time to market': the time to run a new product development project from idea to market launch, including customer contribution. One question is: by how much? The answer depends a lot on the specific technology of the new products. Another question is whether Industry 4.0 can reduce the production lead-time, and what that could do to improve sales.

The improvements mentioned under Sales / Aftersales have an impact only on after-sales activities; are they sufficient to create new sales?

It seems that the main vision of McKinsey and many other big players is limited to cutting operating expenses, which is fine for bringing certain value, but it is NOT a revolution. The tough reality of cutting costs is that it cannot be focused; it is spread over many cost drivers. It requires a lot of management attention and usually brings limited net business value. The question is: what if that management attention had been directed to giving higher value to more customers?

We understand that when some of the most relevant Industry 4.0 technology is implemented, and the technological changes are combined with the right management processes, achievements like cutting lead times by 20-50% are not only possible, but should also dramatically improve the general responsiveness to customers and make the organization truly reliable in meeting all its commitments.

But, is it sufficient that the technology is installed and used to achieve such results?

And is reducing the time to market, or cutting the production lead-time, enough to get better business results?

Significantly improved business results are achieved if and only if at least one of the following two conditions applies:

  1. Sales grow, either from selling more or from charging more without losing too many sales because of the price increase.
  2. Costs are cut in a way that does not harm the delivery performance and the quality from the customer perspective.

The above should be the top objectives of any new move by management, including the decision to implement a new technology, like one of the Industry 4.0 elements.

On top of carefully checking how any of Industry 4.0 components could achieve one, or both, of the above conditions, a parallel analysis has to be used to identify the negative branches, the variety of possible harms that might be caused by the new technology.

For instance, the use of any 3D printer is limited by the basic materials that the particular printer can use. If this limitation was not considered when the decision to use such a printer was made, it could easily turn the use of the 'state-of-the-art' technology into a farce.

We suggest that every manufacturing organization consider using selected parts of Industry 4.0 in order to achieve one or both of the above top objectives, bringing a higher level of business achievement.

An especially effective tool for analyzing any specific element of Industry 4.0 is the Six Questions on the Value of New Technology, developed by Dr. Eli Goldratt. The first four questions first appeared in Necessary but Not Sufficient, written by Goldratt with Schragenheim and Ptak.

Question 1: What is the power of the new technology?

This is a straightforward question about what the new technology can do, relative to older technologies, and also what it cannot do.

For instance, IoT provides the ability to use PLC (programmable logic controller) sensors on machines to send precise information to a web page about the state of a machine: whether it functions properly or there is a problem. Predictions about the next maintenance step, based on machine data, are useful as well, because the results can help avoid unexpected machine downtime and exploit the constraint.
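To make this concrete, here is a minimal sketch, in Python, of a machine periodically reporting its state to the web. It is only an illustration under stated assumptions: the endpoint URL is hypothetical, and read_plc_sensor() is a made-up placeholder for a real, vendor-specific PLC read.

    # Minimal sketch: push the machine state to a (hypothetical) web endpoint.
    import time
    import requests  # widely used HTTP client library

    ENDPOINT = "https://plant-dashboard.example.com/api/machine-state"  # hypothetical

    def read_plc_sensor():
        # Placeholder for the real PLC read; returns the current machine state.
        return {"machine_id": "press-07", "status": "RUNNING", "temperature_c": 68.4}

    while True:
        reading = read_plc_sensor()
        reading["timestamp"] = time.time()
        # Publish the state so any authorized person can view it from afar.
        requests.post(ENDPOINT, json=reading, timeout=5)
        time.sleep(10)  # report every 10 seconds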

Question 2: What current limitation or barrier does the new technology eliminate or vastly reduce?

This is a non-trivial question, and it is asked from the perspective of the user. In order for a new technology to deliver value there has to be at least one significant current limitation that negatively impacts the user. Overcoming this limitation is the source of the value of the new technology. It is self-evident that clearly verbalizing the limitation for the user is key to evaluating the potential value of the new technology.

The leading example of using PLC sensors to provide online information to a variety of relevant users reveals that the limitation is the current need to have an operator physically near the machine to get information that could lead to an immediate action. We do not consider the capabilities of the PLC itself as the new technology in this analysis, as by now it is not truly a 'new technology'. The new concept is to use the Internet to reach faraway people who can gain, or help others gain, from the online information on the current state of the machine and the specific batch being processed.

There are two different uses for such immediate information. The first is when there is a problem in the flow of products, which could be technical or caused by bad-quality materials. The other is checking the likelihood of exploiting an opportunity, like changing over the production line to process a super-urgent request, or handling an unexpected delay. Today the operator at the actual location has to get the fresh information and communicate it to certain people who appear in a predefined list. The operator is also expected to update the IT system to support deciding on the next actions. Overcoming the limitation means the flow of information no longer needs anybody at the physical location. Depending on the technology, the corrective actions could even be taken from afar.

Question 3: What are the current usage rules, patterns and behaviors that bypass the limitation?

This question highlights an area that is too often ignored by technology enthusiasts. Assuming the current limitation causes real damage, ways to reduce its negative impact are already in use. For instance, before cellular phones there were public phones spread all over the big cities to allow people to contact others from wherever they were. Devices like beepers or pagers were used to let someone far away know that somebody is looking for her. It is critical to clearly verbalize the current means of dealing with the limitation, for two different objectives. One is to better understand the net added value of the solution provided by the new technology. The other is to understand the current inertia that might impact the users when the new solution is provided. This side is further explained and analyzed through the next question.

Today the industrial manufacturing landscape is roughly divided in two parts. On one side, there are factories that have had a high automation level for many years. These are often process industries using fully automated production lines for chemistry, pharmaceutics, etc. The machines and processes are connected by an independent data network that includes analysis. The monitoring of the production line and related processes takes place in a dedicated control room, where the operator watches the information on a big screen; when spotting a problem, the operator finds the best solution or calls for help.

On the other side, we find many small and medium-sized enterprises (SMEs) running modern machine tools with powerful controls and integrated sensors. These machines already provide all the data needed for analysis, but in most cases the data is left unused. It is also not very common to store information about problems in the production flow in a database that can feed future analyses on improving the uptime of the production line. Operators can mainly fix small issues; they have to call the external supplier's service in case of bigger problems with the machines. Bypassing the problem until it is resolved is typically managed by the operator and/or by a cross-functional team. With today's technology, most of the information given to the various decision makers is updated only up to the previous day. So, urgent requests and unexpected delays might wait until the next day to be fully handled.

Question 4: What rules, patterns and behaviors need to be changed to get the benefits of the new technology?

Answering this question requires clearly detailing the optimal use of the new technology to achieve the maximum value. The behavior when the new technology is operational is, in many cases, different from the behavior without it. The value of cellular technology is fully realized only when users carry their phones with them all the time. There are many other ramifications of the change imposed by implementing the new technology, like being very careful not to lose the phone. New rules have to be developed to guide us in drawing the most value from the new technology.

Industry 4.0 pushes the idea that all available machine data should be used for monitoring, controlling and analysis. Modern machines can be connected directly to the IoT, while older machines need PLC sensors installed to provide data to the Internet. A continuous stream of collected data is stored in a 'cloud', somewhere on the server farm of an external service provider.

The idea of having the PLC information within immediate reach from everywhere adds value if and only if people who were not exposed to the information before would not only get the information but also be able to use it. In order to use immediate online information one has to be aware that it is there. As the true value of the new solution is the speed of getting truly updated information, there is a basic need to be able to set an alarm that makes the relevant person aware that something important needs attention immediately. This also means there is a need for effective analytics to note when the information becomes critical, and for whom. This requirement should be part of answering Question 4. The more general lesson is that exposing the user to a huge continuous stream of data from manufacturing, sales and the supply chain is a problem the new technology has to offer a solution for.
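To illustrate the alarm idea, here is a minimal sketch in Python. The control limits, the recipient list and the notify() helper are made-up assumptions, not part of any specific Industry 4.0 product:

    # Minimal sketch: route a reading to the predefined list when it turns critical.
    ALERT_RULES = {
        # metric: (upper control limit, who must be alerted immediately)
        "temperature_c": (70.0, ["shift-lead@example.com", "maintenance@example.com"]),
    }

    def notify(recipients, message):
        # Placeholder: in practice an SMS, email or push notification.
        for person in recipients:
            print(f"ALERT for {person}: {message}")

    def check_reading(reading):
        for metric, (limit, recipients) in ALERT_RULES.items():
            value = reading.get(metric)
            if value is not None and value > limit:
                notify(recipients, f"{reading['machine_id']}: {metric}={value} exceeds {limit}")

    check_reading({"machine_id": "press-07", "temperature_c": 71.3})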

When everything is connected with everything else, not just within the company, there is a direct connection with the external world via the IoT. This move could create an opportunity for a new level of business, but the rules and wide ramifications of such a connection have to be examined very carefully. For instance, OEMs could have full access to all kinds of data from every supplier, which should enhance the win-win collaboration between the players. But this requires all the players to intentionally strive for this kind of collaboration. The technology is just the enabler for the intentions. Creating this kind of transparency is a necessary condition for effective win-win collaborations.

The connectivity of everything is truly beneficial only with the right focus in place, preventing the human managers from being overwhelmed and confused by the ocean of data. This insight of having to maintain the right focus, the most basic general insight of the Theory of Constraints (TOC), is absolutely relevant for evaluating the potential contribution of every Industry 4.0 element, as it is so easy to lose focus and get no value at all.

Question 5: What is the application of the new technology that will enable the above change without causing resistance?

Resistance usually arises because a proposed change might cause a negative, usually unintended, consequence. The example of medical drugs, where every new drug for curing an illness also causes negative side effects, sometimes bigger than the cure, carries the wider message that this characteristic is much more general than medical drugs alone.

It is crucial not just to identify all the potential new negatives that the new technology would cause, but also to think hard about how to trim them. The transition from film cameras to digital ones raised the negative consequence of having too many pictures taken. Over the years some solutions for organizing the photos in a more manageable way have appeared. Had the thinking on that problem started sooner, the added value would have been much higher. This is a crucial part of the analysis: to give the negatives much thought, despite the natural tendency to be happy with the new value.

It is advisable to analyze every IoT idea for its probable negatives. A generic negative of almost all electronic devices is that when they fail to function properly, the damage is usually greater than with the previous technologies. This means stricter quality analysis is absolutely required, plus carrying replacement electronic cards or devices in stock.

Question 6: How to build, capitalize and sustain the business?

This question is a reminder that the value of the new technology, plus all the decisions around it, is part of the global strategy of the company.

How does the above analysis fit the top objective of the organization? Does the plan for extracting value from the new technology provide synergy with the other strategic efforts required for achieving the goal?

So, here in the sixth question, the global aspects of the proposal to implement a specific application of the new technology have to be analyzed. Actually, when several applications of new technologies are considered, then Question 6 should apply to all of them together. Thus, when analyzing the various elements of Industry 4.0, the first step is choosing several for more detailed analysis; the last step is evaluating the global strategy and deciding which ones, if any, to implement, and what other actions are required to draw the expected value as soon as possible.

For the leading example, the previous questions should discover by how much linking the PLC data stream to the Internet would either increase sales or significantly reduce cost.

Suppose that the company has an active constraint in a specific machine or a whole production line, and the constraint is frequently stopped due to various problems. In this case, having a quick-response mechanism, based on fast analysis of the PLC information, that immediately reaches the right people who can instruct the operator how to fix the problem, is truly worthwhile. The generated added value, both by keeping the customers unharmed and by superior exploitation of the capacity constraint, is high.

Add to this the decision to use 3D printers to overcome the limitation managers face in viewing new product designs from the original drawings, as managers might not have the capability of viewing a 2D drawing and imagining the finished product. The cost of producing prototypes restricts the number of models available for management to judge the design, and the number of alterations is also limited. Using 3D printers eliminates the limitation. After answering the rest of the questions, the organization has to consider Question 6 for both elements of Industry 4.0 and decide whether together the value is even greater. If we consider the possibility that current prototypes of new products have to compete for the capacity of the constraint, while using the 3D printer bypasses the constraint, we can see the synergetic added value: improved product design that could enhance sales, while the capacity constraint is better exploited.

The overall conclusion has to highlight the sensitivity of the strategic analysis when the issue of Industry 4.0 is seriously considered. The contribution of the six questions could be truly decisive.

 

What can Brick-and-Mortar Retail learn from Big Data?

By Amir and Eli Schragenheim

Physical stores face a wicked problem that gets bigger and bigger: an increasing number of customers buy through online shops, which offer lower prices and a wider choice. Online shops also face a huge business problem, coming from fierce global competition and the lack of a clear competitive edge other than price. This does not help the physical stores re-invent themselves and offer a real alternative to customers, who are offered by the online shops a wider variety, home delivery of everything, and lower prices.

Part of the reason that so many customers buy online is that e-commerce stores succeed in understanding better the specific taste of customers and approaching them with lucrative suggestions. This understanding is achieved by investing huge efforts in gathering a lot of data on every user entering the site, and performing sophisticated analysis of that data. These are ongoing efforts, so we can expect more improvements in manipulating customers to buy through the Internet.

How come physical stores do not take similar actions?

Physical stores have much harder access to the relevant information. For instance, there is currently no good way to record customers who do not find the product they're looking for. The behavior patterns of the customers are not recorded. There might be video cameras in a store, but their objective is security, and all other behavioral aspects are not considered. So, it seems there is not much the physical stores can do to study their customers better.

This is a HUGE mistake.  There is plenty of available data that can be processed into valuable information, and there are ways to access more data, which could be used to yield even more information.

Customers coming to big stores, certainly to supermarkets and drug-stores, often buy more than one item. This is an opportunity to learn more about the possible relationships between different items, the personal tendencies toward brands, and the role of price in choosing from a variety of similar products.

The total purchase of different items bought by a customer carries hidden information that testifies to the taste and economic level of the customer. To reveal the relevant information, a certain analysis has to be carried out, with the aid of statistics and machine learning (ML), to come up with answers to key questions concerning the most important decisions every retail store has to make:

What new items to hold? What items should be eliminated?  What is the relative importance of keeping perfect availability of an item?  What items should be placed close together?  What items could be used for promotions?  What additional services, like an on-site bakery, should be added (or removed)?

Inquiring into the total purchase of a customer reveals much more than just looking at the sales of every item. The mix of items purchased at one time reveals wider needs, taste and economic behavior. When it is legally possible to identify the specific buyer, then the previous purchases of the buyer can be analyzed in order to define in greater detail the market segment that customer belongs to. Part of the value of maintaining customer loyalty clubs is the ability to link different purchases, at different times, to the same client. Thus, a client profile can be deduced.

The most obvious outcome is mapping the customers into market segments, noting the different characteristics of each segment. Inquiring into the purchases could highlight aspects regarding the family of the customer: spouse and children, their approximate ages, their financial status and their preferences. These characteristics can be revealed through analysis of the purchases and the frequency of purchasing. Certain preferences, like smoking or being vegetarian, can be identified. Together the key characteristics are revealed in order to define several layers of the market segment.
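As an illustration of how such a mapping could be automated, here is a minimal sketch using k-means clustering from scikit-learn. The customer features (average basket value, visits per month, share of premium brands) and all the numbers are made up; a real analysis would derive richer features from the purchase history.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # One row per customer: avg basket value, visits per month, share of premium brands.
    customers = np.array([
        [120.0, 4, 0.60],
        [ 35.0, 2, 0.10],
        [ 90.0, 8, 0.45],
        [ 30.0, 1, 0.05],
        [110.0, 5, 0.55],
        [ 40.0, 2, 0.15],
    ])

    X = StandardScaler().fit_transform(customers)  # put features on a common scale
    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(segments)  # e.g. [0 1 0 1 0 1] - two candidate market segments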

Another important value that can be deduced from inquiring into purchases is the dependencies between different items: when item X is purchased, there is a good chance that item Y is purchased as well. These dependencies are sometimes deduced intuitively by some managers, and they definitely impact decisions concerning maintaining the availability of both items, their placement within the store, and even the possibility of selling them together as a package. Understanding the linkages between products also helps to check changes in purchasing habits over time. For instance, when the economy goes down, we can see how it impacts different segments, which is much more valuable than just watching the impact on individual products. We can expect a general shift to cheaper products, but knowing which brands are replaced by cheaper ones, and which segments make those changes more than others, should be valuable in forecasting those changes before the actual change in the economy takes place.
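The "when X is purchased, Y is likely purchased too" relationship is exactly what classic market-basket analysis measures with support, confidence and lift. A minimal sketch, with made-up baskets:

    baskets = [
        {"pasta", "tomato sauce", "parmesan"},
        {"pasta", "tomato sauce"},
        {"bread", "butter"},
        {"pasta", "parmesan"},
        {"bread", "tomato sauce"},
    ]

    def support(items):
        # Fraction of baskets containing all the given items.
        return sum(items <= basket for basket in baskets) / len(baskets)

    def confidence(x, y):
        # P(Y in basket | X in basket)
        return support({x, y}) / support({x})

    def lift(x, y):
        # > 1: X and Y appear together more often than if they were independent.
        return confidence(x, y) / support({y})

    print(round(confidence("pasta", "tomato sauce"), 2))  # 0.67
    print(round(lift("pasta", "tomato sauce"), 2))        # 1.11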

The structure of typical purchases by different market segments would certainly initiate marketing moves that capitalize on that understanding. Analytical knowledge, translated into operational policies, would impact the performance of the various branches of the retail chain, recognizing both the specific needs of each branch and some generic insights.

When every purchase of a specific client can be linked with the previous purchases of that client, then the frequency of buying could lead to initiatives to influence the content of a typical purchase of a specific market segment.

Developing a machine-learning (ML) module to better categorize the different segments the store serves should enhance both the marketing and the logistics of every retail store. There are always dilemmas in holding slow movers, given the amount of logistical effort required to keep a slow mover available. Being exposed to the right priorities, by understanding the full financial impact of the slow-mover sales, would lead to better decisions about what items to hold in stock. The relative value of a slow mover includes its impact on the sales of other products. Understanding the relative importance of that particular item to a specific segment contributes to determining the slow mover's impact on the desirability of the store from the viewpoint of that market segment.

Through ML the retailers can get a better understanding of customer loyalty to specific brands and items. When it is already established that a certain segment prefers item X to item Y, then by intentionally creating unavailability of X for one day, it is possible to discover whether most clients from that segment bought Y, or refrained from buying a replacement. It also answers the question of whether buying the replacement impacts brand loyalty. Promotions also cause people to buy the less preferable items, and here too it is of major interest to know whether brand loyalty is impacted.
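A minimal sketch of how such a one-day stock-out test could be evaluated; the transaction records and the segment tag are made up for illustration:

    # Purchases by previously X-loyal customers on the day X was unavailable.
    stockout_day_purchases = [
        {"segment": "premium", "bought_replacement_y": True},
        {"segment": "premium", "bought_replacement_y": False},
        {"segment": "premium", "bought_replacement_y": True},
        {"segment": "budget",  "bought_replacement_y": True},
    ]

    segment = "premium"
    in_segment = [p for p in stockout_day_purchases if p["segment"] == segment]
    substitution_rate = sum(p["bought_replacement_y"] for p in in_segment) / len(in_segment)
    print(f"{segment}: {substitution_rate:.0%} switched to Y")  # premium: 67% switched to Y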

It is highly desirable to have access to the information on the availability of all items at the time a certain purchase was made. When item X happens to be unavailable, then it provides an opportunity to check whether the seemingly brand-loyal customers switch easily to the replacement.

Supermarkets and drug-stores are typical retailers where a purchase usually includes several different items. It seems absolutely necessary that such retail chains invest efforts in ML to learn more about their customers' habits and develop the process of coming up with superior decisions to capitalize on that knowledge.

The Frustration of a Middle-level Manager – A short story by Eli Schragenheim

My boss, Dr. Christopher Logan, asked me to come to his office at 2pm sharp, and report in detail how come the delivery to MKM did not have all the 100 B12 units. I know how such meetings are conducted:  it looks friendly enough, but actually there is nothing friendly in such a meeting.  The kind of attitude, “we are nice understanding people” is supposed to hang over, but beneath the politeness you’re on TRIAL – try to prove it is not YOUR incompetence that caused the trouble.

Such a hostile inquiry takes place every two months. Many things go wrong every day, but only a few receive this kind of treatment. It is usually because the client is very important, very big or new, and when such a client makes a complaint the incident becomes critical and somebody has to be blamed and punished.

MKM is both big and new. For whatever reason, getting 97 good units out of 100, exactly on the formal due date, was not enough for them. I immediately promised to deliver the missing three units in five business days. Apparently this was not good enough, so a formal complaint was put on Dr. Logan's desk. I have no idea why the delivery of all 100 units on the promised day is so sensitive that 3% less is such a disaster.

I'm now preparing my defense, trying not to exaggerate too much the role of Freddy in the blunder. I know that almost any one of my people might have made the same mistake he did, and that mistake only partially led to missing the delivery. As MKM demanded 100 good units that would fully pass their test no later than June 1st, 2018, we decided to produce 120 units, as there is no viable way to identify such quality exceptions at an early stage of B12 production. Freddy's mistake was assuming that a temperature of less than 1 degree above the standard is still within the control limits. Usually this is right, but for the MKM specifications it is not. That small deviation impacted, at most, five units, because the temperature was fixed very fast. Five out of the extra buffer of twenty cannot be the only reason that only 97 units passed the MKM test. We, by the way, sent 101 units that passed our own test, rejecting 19 for a variety of reasons. How come four units that passed our test failed in their similar test is an open question. No one, I repeat, NO ONE, has any explanation for this fact.

This is the situation I face, and I just hope it won't turn out as badly as the blunder of last year, which led to the layoff of two good people. The charge was not paying enough attention to a rare and unfortunate incident that caused the breakdown of expensive equipment. In such cases, Dr. Logan takes on the judicial authority to find whom to blame. I actually understand him; his superiors would blame him unless he succeeds in finding another scapegoat. So, it is now my mission to avoid being the scapegoat. I hope I'll also succeed in protecting Freddy from such an undeserved verdict.

Part of the pain we feel in Operations is that things could have been much better if we had known more about the clients' true needs. We are told not to be in touch with the clients, so all I'm able to know is what is written in the documents submitted by the clients. We only see the name of the client, the name of the responsible product manager and a list of specifications without much detail. The product managers also know very little about the clients' true needs. I suggested to Larry, the product manager responsible for MKM, that we deliver the first 50-60 units of the order as early as May 16th, and all the rest two weeks later. The reason was that we had to split the order into two batches due to some technical difficulties. Larry called them and got a refusal, but no explanation.

Why?

Why do I have to function under strict instructions whose rationale I fail to see?

Why do I meet the executives, to whom Dr. Logan reports, only at special public ceremonies?

All I see above me is Dr. Logan. The rest are located far away and there is no active dialogue between me and them.

I stay in this company, first of all, because I have a wife and two small daughters. I also think I’m doing a very good job.  I cannot prove it, but I think that under someone else, with less experience and technical knowledge, the MKM failures would triple.  Most of the time the procedures we have, and the willingness of my people to react to any signal pointing that something might be wrong, keep what I consider to be very good overall quality of operations.

But the ridiculous performance measurements make it look as if our performance is just moderate. The cost of our operations is, according to the measurements and the funny benchmarking they use, somewhat higher than the "average" of similar facilities. This is so wrong that it is an insult! If you don't even contemplate listening to us in order to understand what we do and why we do it this way, how do you expect us to improve?

I feel the situation is “us against them” – the people who do the job against the people who play the role of God and judge our performance, even though in some public speeches they say “we have to do much better”, making the wrong impression that they think they should improve as well. I don’t think Dr. Logan believes he has to improve.  He has me and my people to improve, and it is just me who has to learn the lesson and make sure the MKM case would never happen again.

So, here is a potential action plan. I shall argue that the whole incident happened because Sales intentionally concealed part of the detailed specifications of the order, being concerned we’d reject the order, because we don’t have the capability to do it right.  Larry, the product manager, told us that the chief salesperson hinted that MKM has the most sophisticated equipment in the world.  I didn’t see the implications at the time, but it is evident now that such equipment allows more precise measurements, so it could be that the true specifications were not given to us because our equipment is unable to meet such precise specifications.

Does MKM really need more precise specifications for their products?

The fact is that 97 units passed the test. It means that our equipment is able to achieve the required specifications. But how can we test the final quality when our testing equipment is not the same as MKM's new testing equipment?

Human relationships as a part of the holistic approach in managing organizations

Management is about achieving results for the organization. The obvious meaning is that integrating the various organizational functions, like Sales, Operations, Finance and R&D, together to achieve the best performance is what the CEO has to accomplish.   This is the holistic approach: integrating the parts into a whole entity.

What is the role of HR in this need for integration?

In every part of the organization there are people who are truly required to achieve the global objectives. Every resource has a set of capabilities and a certain capacity that limits the amount of output in a period of time.  When it comes to human resources both capabilities and capacity are much more difficult to define and measure than the other types of resources, like machines, space and money.  But, the limitations of both human capabilities and capacities play a considerable part in the performance of the organization.

A characteristic of human resources is that in order to properly utilize their capabilities, the right motivation has to be in place. For instance, a salesperson is meeting a new potential client. Would she do everything she can to bring the client in? Is this true also when she isn't entitled to a special bonus for it? Similar situations are a purchasing agent negotiating price and terms with a supplier, and a foreman who gets a special request to expedite an order, which requires extra effort. Eventually all the above examples depend on the willingness of employees to help the company prosper.

People might cause unintentional damage by failing to act in the right way. This could happen because of being untrained, or incapable, to do the job properly.  Another cause is flawed procedures and measurements that push people to do what harms the performance.  TQM, Lean, Six Sigma and TOC act in their different ways to fix that.

There are very few cases, though they cause huge damage, where employees intentionally harm the performance of the organization. This could take the form of refraining from doing what is required, like a strike, or even taking concrete actions that disrupt the performance of the organization. It is definitely the responsibility of top management to prevent such highly damaging cases, which raise emotions, like rage, that prevent win-win solutions.

In the vast majority of cases employees simply do their job as they are told by their superiors. When top management does a good job of integrating all the parts into synergetic performance, then the results are positive; otherwise the employees cause damage, exactly because they follow instructions.

Can employees add high value that is far beyond just doing their job?

High-level employees, like executives and highly professional employees, are expected to make huge efforts beyond their job. The question is: what is expected from the rest of the employees?

It is definitely possible that relatively lower level employees might know how to help the company to do better. In most cases the employees decide to keep quiet, believing the boss would not listen or appreciate their ideas.  Many employees feel that helping the company, beyond the formal description of the job, is a waste of their intellect.  This is a declaration of indifference:

“This is just my job, not my life. I’m not going to waste my intellect and special efforts for the organization that does not employ me for that purpose.”

So, the message for top management is that the employees might become a problem, either because they are not capable or because they are not motivated to do everything they can for the sake of the organization.

Henry Camp is the CEO of Shippers Supply Inc. and the owner of four other companies. Henry conducted a TOCICO webinar highlighting the 10 steps required to achieve the active collaboration of the employees with the company. A special emphasis was on being ready to assist in implementing a change in the way the company operates.

Henry Camp's webinar focuses on what management should do to ensure that this indifference never happens, also preventing the damage of intentional acts of frustration by the employees.

I recommend that the reader watch the recording of the webinar on the TOCICO site. A somewhat shorter alternative is watching his 30-minute video on YouTube: https://www.youtube.com/watch?v=4B0Azc6MNn0

I'd like to raise the issue of a CEO who is either new to the organization, or has not paid much attention to the human-relationships culture in the organization, and now realizes that the time has come to diagnose the current state.

How much effort should management dedicate to diagnose problems with the motivation of their subordinates and solve them?

The objective is to find out whether the current performance of the organization is seriously harmed by the existing level of distrust between employees and management. In the terminology of TOC the actual question is:

Is the internal human relationship the core problem of the organization?

The easy, but not always best, way is inviting organizational-behavior consultants to do the diagnosis. The result is often a lack of focus, as the tendency is to come up with a long list of what needs to be fixed. The true damage to the Throughput is usually not defined.

There are two key organizational flows that determine the rate of achieving the goal. The first is the current flow-of-value to the customers. The second is the flow of initiatives to improve the flow-of-value. The concern regarding the impact of behavior on the current flow-of-value is that it creates blockages and thereby harms the reputation of the organization. The main concern regarding the initiatives is not trying hard enough to come up with great innovative ideas.

The chronic problem of the organizational culture is not with individual employees. Such problems are relatively easy to handle. The problem is when most employees radiate indifference toward achieving more of the goal.

Dealing with power groups within the organization is a situation that can easily become disastrous. Every airline has to manage its relationships with the pilots with extra care, while all the other groups are watching and might react to any change in the status quo. In hospitals the surgeons have extra dominance, and universities are run by the full-time professors. The balance between the power group, top management and the other groups is quite sensitive. It is possible to get a win-win for all the groups, but it is not easy to maintain it for a long time.

Can we apply rational cause-and-effect to diagnose existing or emerging behavioral problems and then find the effective win-win?

There is a common claim that people behave irrationally, and thus analyzing behavior with rational logic is not effective. The argument is that we, human beings, often act on impulses, stirred by emotions, leading to behavior that seems irrational because the actual results for the person are bad. For instance, criminals behave in a way that eventually leads them to jail. The question is whether the decision to commit a crime is irrational from the perspective of the person committing it. Criminals could choose to satisfy their immediate desires in spite of possible negative consequences, as they judge being in jail less negatively than most people do.

Is human behavior often unpredictable?

If the behavior is the result of known causes, like the desire for dominance over other people, then the logical analysis should lead to expected behavior that is in line with reality. Most of the time we predict the behavior of the people we communicate with well enough. This is also true for managers predicting the response of their people, and vice versa.

When negative behavior of employees can be predicted, it might appear on one side of the core conflict of the organization. For instance, when management distrusts its employees, it could suspect that the employees would not cooperate in introducing a change. Such a conflict looks like this:

Image: the conflict cloud between management and employees

Suppose that indifference is causing many undesired effects that reduce the potential of the organization to achieve more of the goal. Does it mean the indifference is the core problem? Or is the indifference a symptom, caused by another effect that causes several other undesired effects?

Most of the time indifference is caused by the reluctance of management to trust their employees, or by poor performance of the organization that harms the morale and the trust of the employees in the management. Both causes have further negative ramifications for the organization. Having to control everything has a huge negative impact on management attention, and through that on the ability of the organization to grow.

This basic trust and sense of purpose should be carefully maintained by the management of the organization. When there is a change in the mutual trust between management and employees then new emerging undesired effects should signal that such a change is happening.  A drop in the delivery performance to customers could be such a signal.  Failing to meet commitments, reduced quality and increase in customer complaints should be carefully viewed and monitored.  When such a change in the mutual trust is validated, the next step is to understand what happened to this sensitive balance.   Understanding the causes for human behavior is very much needed at this stage.  When such signals are not observed, then the focus of management should be elsewhere – on what truly constrains the performance of the organization.

The importance of Big Data

By Amir and Eli Schragenheim

Is Big Data important? Can every organization draw considerable value from it?

Amir and I assumed that the ultimate answer of most people in management would be: Yes, there is a big potential, but there is also a problem of drowning in the ocean of data (Goldratt in The Haystack Syndrome).

Well, it seems that too many people think there is not much value to be found in Big Data. So, maybe we, who think there is a very substantial potential value, need to back up this assertion.

Big Data in its narrow form is the ability of every organization to store huge quantities of data relatively cheaply in the cloud, together with software tools for extracting specific data from various databases and formats, and organizing it in a way that allows the human manager to focus on what is truly relevant.

A much wider approach to Big Data includes the huge amounts of data from external sources that are freely accessible through the Internet. Google, Facebook and LinkedIn provide the tools to do it, and there are also public databases that allow searching and using their data for a certain cost.

It seems obvious that some organizations, certainly the bigger ones, draw a lot of value from Big Data, like the three big data manipulators mentioned above. Those giant organizations offer focused ways to advertise to a well-defined audience. Having the means to approach very specific market segments can be used to gain knowledge of the preferences of their customers.

The business sector of e-commerce, especially digital stores, uses its own huge data, taken from everyone who enters the website, recording every move the user makes, to draw conclusions about what the customer is interested in. The analysis of this accumulation of data opens a way not only to offer more to that customer with a good chance of selling, but also to win that customer for future deals. Beyond guessing the specific taste of every single customer, a generic understanding of groups of customers, like the role of price in their choices, can be established.

Physical retail stores invest much less effort in capturing data that would reflect the clients' preferences, beyond the trivial analysis of actual sales. Without direct access to client information, and even worse, without knowing what data could help them gain more sales, they are helpless. The retail stores lose a lot from their inability to collect the data they need to become more effective.

So, companies that have easy access to pretty straight-forward relevant data find answers to critical questions and gain a lot of value. Other organizations don’t.

When a new technology, like the ability to store and analyze huge amounts of data, presents itself to the market it raises two seemingly similar, but actually different, questions.

  1. Given the existence of the technology can we utilize it to bring benefits?
  2. Given our current obstacles – does the new technology lead us to overcome them? If so, what are the benefits going to be?

Many organizations don’t immediately see the benefits of a major new technology, meaning their answer to the first question is NO.

However, we believe more effort should be given to analyzing which obstacles might be overcome. Currently the organization accepts them as hard facts of reality, but when the new technology is able to vastly reduce the limitation imposed by an obstacle, new opportunities can be identified.

Goldratt's second question, of the Six Questions for assessing the value of a new technology, states:

What current limitation or barrier does the new technology eliminate or vastly reduce?

The obvious limitation of storage is not the relevant answer to the above question, because the value of storing huge amounts of data is not clear and could easily lead to wasted effort. Likewise, speeding up the slow and cumbersome collection of huge amounts of data and organizing it in a friendly, visible way does not always add value.

But we always have the wish to have more relevant information on the critical issues the organization is dealing with. We never have perfect information when a decision has to be taken. So, decision making is always under high uncertainty, due to variation plus unknown facts. While this basic life situation will continue in the future, the unknowns could be significantly reduced if the right relevant information is collected and given to the decision makers.

Thus, Amir and I suggested the following limitation/barrier that the new IT technology reduces:

Not being able to get reliable answers to questions that require data that was previously either unavailable or not accessible

For instance, what are the features that many customers miss in our current products?

It is possible to ask the customers such questions, and even store all the answers, but many of them simply refuse to answer, and maybe they do not know what they miss; when they see it, they will know. Can we answer the question if we analyze data on what caused certain products, from different sources, to suddenly become highly popular?

Failing to answer critical questions is a key limitation for every company, and a search for the truly relevant data should often yield new information that, together with an effective analysis, should yield substantial value.

To clarify the sensitive connection between data and information, let's recall the definition Goldratt gave to 'information' in his 1990 book 'The Haystack Syndrome':

Information is an answer to a question asked

The definition highlights two insights. One is the power of asking questions: in most cases you ask about something that bothers you, so the answer to the question is also an answer to a need.

The other insight is that in order to answer a question certain data is required, and through the question that data becomes information.

In order to manage an organization successfully questions have to be asked and each one of them is directed to highlight a required aspect for one of two categories of managerial needs:

  1. Identifying new opportunities and how to draw the value from them
  2. Identifying emerging threats and how they could be eliminated or controlled

The first category is about new initiatives for success. The second category is about protecting your back.  Both are critical to every organization.

Goldratt’s third question is:

What are the current usage rules, patterns and behaviors that bypass the limitation?

Without the means of gathering data from many sources, decision makers still have to make decisions, and the practice has to be based on the following elements:

  • Using the routine data from the ERP or legacy system of the organization
  • Using the intuition of the key people in the organization closest to the specific topic
  • Employing a generally ultra-conservative approach, due to the unknowns and the perceived risk

The most important element is the use of intuition, based on one’s past experience.  So, it is certainly relevant data, but its quality is questionable.  The lack of objectivity, the various personal biases and being very slow to embrace any change, comprise the problematic side of intuition.

Intuition will still play a big role in the future. However, analysis based on data that was unavailable before true Big Data can check the validity of the initial intuition (especially its hidden assumptions) and also be a source of new insights that inspire new intuition; this could settle a new relationship between hard analysis and intuition.

TOC people argue that on top of intuition there should be cause-and-effect analysis, which enables great managers to speculate correctly even when actual data is minimal. This is sometimes true, but as all cause-and-effect logic is based on observed effects, which are not always true facts, even the most robust logic cannot deal with too many unknowns without data to rely on.

So, how could we improve our ability to spot new opportunities and emerging threats with the aid of the new IT capabilities of accessing huge amount of data?

The big trap of using the new IT capabilities is losing the focus: investing a huge amount of effort in searching for data and analyzing it, and eventually coming up with almost nothing. This is a real threat to many organizations.

The direction of the solution we offer is building a high-level strategic process, run by a special team operating as a headquarters function, that follows these steps:

  1. Decide on a prioritized list of worthy objectives that are not satisfactorily achieved.
  2. For each of the objectives, identify the key obstacle(s) and what is required to overcome them. We assume many of the obstacles are due to unknowns.
  3. Based on the above, come up with a prioritized list of specific questions that require good answers, which currently are not available with a reasonably high confidence level.
  4. Search for the specific data required for answering the questions. Many times the search is for external data, which should then be imported into a central internal storage.
  5. Generate the global picture of how to achieve more of the top objectives. The answers to the questions are merged with cause-and-effect logic plus intuition to create possible alternatives for action. The final analysis is submitted to the decision makers.

The above process is similar to what intelligence bureaus do for countries. The priorities and the means are clearly different. Countries' most critical questions are about threats, with much less emphasis on opportunities, and their means of collecting data are usually ones that would be illegal without a special permit from the government.

Customizing the process for true business intelligence isn't trivial. The big mistake of imitation is ignoring the basic differences. However, ignoring the similarities, and the opportunity to learn from a well-established process, is another huge mistake. Given the differences in ethics, priorities and means, the basic need and the analysis tools are similar enough, and the emergence of Big Data gives the potential value a great chance of being materialized.

What makes these efforts worth going after is the simple fact that the underlying new insights do not clash with any deep paradigm of big companies.

We, Amir and I, will be glad to take part in such an endeavor. We have delivered a webinar on the topic that goes deeper into analyzing the value of Big Data. The recording can be viewed on the TOCICO site, https://www.tocico.org/page/replay?.

In another post we intend to deal with the potential value of simulations for gaining new insights and answering very troubling questions. Like Big Data, and actually any new technology, simulations could bring huge value, but they require special care to avoid severe pitfalls.

The Challenge of Facing Complexity and Uncertainty

Mickey Granot has published a very interesting article entitled "The 3 mistakes that prevent exploiting your business potential"; see https://www.linkedin.com/pulse/3-mistakes-prevent-exploiting-your-business-potential-mickey-granot/?published=t. The mistakes Mickey has identified are:

  1. Spreading management attention too thin.
  2. Misunderstanding the customer.
  3. Misusing measures.

I agree that each of the three mistakes has a major negative impact, preventing better exploitation of the current capabilities and capacity in the vast majority of businesses. I think there is a core problem causing management to repeat the above mistakes all the time:

The constant fear of negative consequences from changes that look promising

The fear is invoked by the inherent complexity coupled with uncertainty. There are simply too many unknown facts for every proposed idea that could, maybe, generate more throughput (T) without significant additional operating expenses (OE). The difficulty of handling complexity coupled with uncertainty is the key obstacle for every manager. The fear is partially on behalf of the organization and partially due to the potential personal negative consequences of a "failure".

Example: offering a variety of packages of regular products with a price tag that is 10% less than the regular total. The idea is, first of all, to combine products that aim at the same end-consumer. Another parameter is combining fast movers with medium movers, thereby expanding the market of the medium movers. Another aspect is the ability to use the excess capacity of most resources, even when the organization has to add overtime on the weakest link. The idea is that the resulting delta-T would be much larger than the delta-OE. For instance, publishers can offer packages of several books by a known author. It is known in this market that while the newest book of a famous writer sells very well, the previous books now sell much less and might not even be available on the shelf. Offering a package of the newest book coupled with the first book by that author could be relevant to fans of that writer who missed the older book.
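A minimal sketch, with entirely made-up numbers, of how the delta-T versus delta-OE check for such a package could look:

    # Does the package add more throughput (delta-T) than operating expense (delta-OE)?
    new_book = {"price": 25.0, "tvc": 7.0}  # tvc = totally variable cost per copy
    old_book = {"price": 20.0, "tvc": 5.0}

    package_price = (new_book["price"] + old_book["price"]) * 0.90  # 10% off
    t_per_package = package_price - (new_book["tvc"] + old_book["tvc"])

    expected_package_sales = 1000  # assumption, to be tested in the market
    cannibalized_sales = 300       # buyers who would have bought the new book anyway
    t_lost = cannibalized_sales * (new_book["price"] - new_book["tvc"])

    delta_T = expected_package_sales * t_per_package - t_lost
    delta_OE = 4000.0              # e.g. overtime on the weakest link

    print(delta_T, delta_OE, delta_T - delta_OE)  # 23100.0 4000.0 19100.0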

How would managers approach such an idea? It is not a priori clear how much more sales will be generated this way, and what the impact on the bottom line will be, taking into account the reduced price of the package, meaning significantly reduced throughput per copy.

So, a decision to test such an idea very carefully, and for a long time, seems reasonable. In practice it means introducing a very small number of packages and monitoring their sales. The result is that the impact on the bottom line is usually not so clear. So, the management, while giving the idea very limited attention, needs to try several other new ideas at the same time. The unavoidable result is spreading the management attention very thinly. This is one effect caused by the basic fear of uncertainty.

The causalities behind the second mistake are trickier to fully understand. How come we frequently fail to recognize the right value as perceived by the customer? When the customer is an organization, we can assume the generated value is based on the practical needs of that organization. Understanding the business of the customer should guide the supplier-organization to identify the true needs and thereby gain major insights on how the products/services could be more valuable. The problem is that such an understanding is not common at all; most marketing people have very little knowledge of the business of their customers, because of two key obstacles:

  1. Analyzing a different business from afar seems too complex and hence uncertain.
  2. The current tool for understanding how the customer appreciates the products/services is to analyze the complaints raised by the customer.  This proves to be a very partial and problematic tool, which gives rise to secondary elements and ignores the more critical ones, sometimes just because the customer does not expect the supplier to be able to deal with the real missing, or flawed, element.  Yet, having a practice seems good enough to many.

When it comes to the end-consumer, understanding the value of the product is even tougher, because the consumer often sees value that is not practical. For instance, taste preferences, or the aesthetics of the product design, cannot be logically defined by objective attributes. I wrote in the past about the three categories of value, see https://elischragenheim.com/2015/08/03/the-categories-of-value/.

Going back to the packages example, the creation of the right packages has to be based on a good understanding of the value as perceived by the customer. Even the question of whether a 10% reduced price is a good enough reason for buying a whole package depends on the customer's overall perception of value.

The fear of negative consequences causes organizations to be very careful, especially with assumptions, based on intuition, about the external world, like customers and also vendors.  Understanding the end-consumer is difficult because analyzing hard data is not sufficient; some logical analysis is certainly required.  But even then, several not fully proven assumptions have to be in place in order to understand the end-consumer and be able to reasonably predict the reaction to certain moves.  The fear of failing to predict the behavior of customers limits the efforts to create a ‘theory’ of the true needs of specific market segments, prevents the actual test of that ‘theory’, and by that misses many powerful opportunities that might be far worthier than the current ideas.

The use of performance measurements to measure people is a clear announcement of mistrust, created by the fear of failure. Measurements are definitely required for diagnosing emerging problems and as necessary inputs to decision making.  The wickedly flawed part is to assume that the measurements reflect the capabilities and motivation of the people in charge.  This lack of confidence in people leads to many local performance measurements, and we know how distorting those are.  See also my previous post, https://elischragenheim.com/2018/03/30/the-problem-with-performance-measurements-and-how-to-deal-with-them/.

It is my view that, eventually, fearing complexity and uncertainty is the ultimate core problem of the vast majority of organizations. Only very small organizations, where everyone knows everybody else well, are able to overcome the obstacle of fear of the potential negative outcomes of each specific decision or action.

While TOC provides us with great tools to manage common and expected uncertainty in the key TOC applications for Production, Project Management and Distribution, the Pillars of TOC relate to handling uncertainty only indirectly. Humberto Baptista has already proposed adding a fifth pillar to cover the need to live with uncertainty and to be able to handle it effectively, actually to use it in order to truly flourish.  Humberto's verbalization is:  “Optimizing in the noise increases the noise.”  This insight, which is also part of Dr. Deming's methodology for quality, should lead us to realize that in order to improve one has to beat the natural variability.

We should come up with a detailed approach to “managing expectations” that includes full recognition of uncertainty, and by that reduce the fear and let people, managers and executives included, exploit their own capabilities.

The problem with Performance Measurements and how to deal with them

Dr. Goldratt's famous saying, “Tell me how you measure me and I’ll tell you how I’ll behave”, points to one dark side of any measurement: it impacts the behavior of the people involved, actually the behavior of the whole system.  The hope of management is that the impact would be positive:  people will do their best to achieve the best results.  Unfortunately, in most cases the opposite happens.  Just to illustrate another problematic side we don’t always pay attention to:  when the prime measurement is to make money, then some managers might break the law and other moral rules in their quest for making more money.  The 2008 crisis is just one example of the impact of money as a prime measurement.

The Theory of Constraints (TOC) went deep into the clash between the local performance measurements and the global ones, showing how the local disrupt the global. This is certainly one of the most concerning issues, and a lot has been written and presented on it.  However, performance measurements pose several additional negative branches (potential negative consequences).

The key objective of performance measurements is to show a full picture of the current performance in order to guide the actions required for improved performance.

A devastating side of all performance measurements is the personal interpretation of them. Managers, and actually many lower-level people in the organization, are judged by these measurements as to whether they have succeeded or failed in their job.  This linkage causes the devastating effects that lie behind the famous comment by Goldratt.  I'd like to state an effect that looks obvious to me:

Performance measurements, at best, represent the current state; they do NOT answer the question “how come?”

In order to conclude what to do next, performance measurements, expressing the current state, are absolutely necessary, but they are definitely not sufficient. An analysis has to be carried out to explain the results.  I’m aware that “explanations for poor results” have a bad reputation, but that is part of the big problem. The poor results have to be openly recognized in order to identify the core cause.  An explanation like “the people involved were dumb” should lead to the immediate question of how incompetent people were given that particular job!

What makes performance measurements even more problematic is the tendency to set a target for them.  There are two basic negative characteristics of determining targets:

  1. Parkinson's Law claims that “work expands so as to fill the time available for its completion”. The same law applies to any quantitative target. The simple rationale is: outperforming the target is bad for the future of the individual, because next time the target will be set higher. So, the best case is to reach the target – no more and no less. Almost all means are allowed, including lowering the target to something reasonable, and then definitely not trying to achieve more. I have seen several cases where the organization claimed that 90% of the tasks finish exactly on time. This statistically impossible result gives evidence that Parkinson's Law works.
  2. Determining the target is a problematic issue in itself. Sometimes targets are determined by hope and prayers. Sometimes there is a certain rationale for the top target, but then all the lower levels are given targets that are, more or less, arbitrary, with the sole requirement that they have to support the higher level.  These lower-level targets are those that middle-level management try their best to keep as low as possible.

The idea behind setting targets is determining the “success” or “failure” of the people involved. On one hand the idea is to push people to excel.  On the other hand it creates fear, mistrust and manipulations.  This is a typical generic conflict.  The basic assumption that, without being given clear quantitative targets, people would not do everything they can to accomplish their missions at the highest level is, to my mind, flawed.  The tricky point is that it is a self-fulfilling prophecy.  When people are used to targets, removing the targets leaves them wondering what they should do, which drives them to do a little, but definitely not too much.  Only a very clear message from management would make a change, and it would take time to be believed.

Another problematic side of performance measurements is their dependency on time periods. Suppose that this year the organization has to produce considerable stock because of an expected peak in demand at the start of next year.  The annual T is relatively low, while the OE, maybe containing overtime, is relatively high.  Next year T will get much higher.  Question:  would management be aware of the causality?  If not, would Operations support producing stock for next year, when the TOC accounting practices do not reward any increase in inventory?
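To illustrate with made-up numbers: suppose Operations spends an extra $200K of overtime OE in the last quarter of this year to build stock for next year's peak. This year's report then shows higher OE with no matching T, while next year's report shows a jump in T with no matching OE. Unless the causality across the period boundary is made explicit, this year's effort looks like poor performance and next year's result like excellence, although both stem from the same decision.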

How can we deal with the negative ramifications of performance measurements?

I think this is the critical question for any organization striving to become ever-flourishing. To call the solution “Leadership” is to underestimate the obstacles and to rely on a vague term as if it were a solution.  What I think is needed is a structure for management decision making where predictions are based on ranges rather than a blind commitment to a single number, where the potential risks are openly discussed, and where the management team eventually reaches consensus – until the next management meeting, where the actual signals are observed and the discussion might be reopened.  This should be a procedure that is not overly dependent on the charisma of a specific leader.

In short, a procedure that truly respects uncertainty, recognizes mistakes without automatic blaming, and tries to correct them is a solution that could work.

Throughput (T), Operating Expenses (OE), and the critical connection through Capacity – the key for decision making

T represents the added value generated by the organization. OE represents the financial cost of providing the capacity of all the resources, with the appropriate capabilities, that are needed to generate the value to customers which, in turn, generates the T.

Confused? Read it again; this comprises most of the truly required data for managerial decisions.  The division between T, which is focused on sales data, and OE, which is focused on the internal resources, is of immense simplifying value for all managerial decisions.

A rough diagram (not reproduced here) connects OE, the capacity it provides, and the value to customers that generates T.

OE is just the cost of providing capacity. The goal is to have Throughput (T) much bigger than OE and then to find the way to grow T faster than OE.  That should be the sole objective of every single decision taken by any manager in the organization.  There might be difficulties in doing the analysis, but the objective is the same. T for business organizations is defined as Revenues minus the Truly-Variable-Costs (TVC), where the truly variable costs are those that occur with every single sale.  So, T is the added value as measured by the customers who are willing to pay the price. But the value for customers also includes what others, who are not part of the organization, have contributed.

Thus, T is the true performance measurement of what the organization succeeded to achieve. OE is what the organization has to pay in order to achieve the T.

Well, I should have also included ‘I’, standing for ‘Investment’, as the part of the capital being invested to make it possible to achieve the T. But I think there is no conceptual difference between ‘I’ and ‘OE’.  The difference is about the time frame.  ‘I’ refers to expenses that stretch beyond one year.  There are mechanisms to convert multi-year expenses into an equivalent stream of annual expenses – and these become part of the OE.  So, a $10M machine, which is supposed to work for 10 years, represents an annual expense of $1.1M, or whatever conversion rate you think is appropriate.
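For illustration, one common conversion mechanism is the equivalent-annual-cost (annuity) formula: annual expense = P × r / (1 − (1 + r)^−n). With P = $10M, n = 10 years and an assumed cost of capital of about 2% (the rate here is just an assumption for the example), this gives roughly 10M × 0.02 / (1 − 1.02^−10) ≈ $1.11M per year; with no interest at all, a straight-line conversion would give exactly $1M per year.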

Comment on a minor complication: originally Goldratt defined ‘I’ as ‘Inventory’; he moved to the more generic term later.  A small point missing in the rough picture above is that the materials being purchased are in a temporary state of Inventory (part of Investment) until they either become part of T or, when scrapped, part of OE.  I don’t think it really complicates the simple picture.

The key point is to understand that OE is the critical enabler for generating T.  But OE being made up of many individual items creates a technical problem in predicting how much OE would support future T; for instance, initiatives to double the current level of T might require an additional delta-OE that could be more, or much less, than the current level of OE.

The majority of management decisions are about growing, or just maintaining, the current level of T. After all, Sales is about achieving T, and the efforts of Operations are aimed at delivery.  But there is constant pressure to reduce OE, mainly because OE represents an ongoing threat to the organization:  you have to pay the OE whether or not you made enough T.  The tricky point of saving OE is that in most cases the negative impact on T is ignored.  The emphasis on T makes you aware that, when cutting OE, you need to be very careful not to reduce T.

So, we have to understand the dependencies between T and OE, and they look very complicated, because OE is about the capacity of so many different, seemingly independent, items.

TOC, through Throughput Accounting plus understanding the full impact of the five focusing steps, the role of buffers in planning and buffer management in execution, gives a much simpler answer to the connection between OE and T.

Critical insight #1: It is enough for one resource to be overloaded, receiving more load than its available capacity, to seriously harm the expected T, unless significant additional OE is added.

Critical insight #2: There is a real need to maintain protective capacity, a certain amount of excess capacity, in order to provide enough flexibility to overcome market fluctuations and other types of uncertainty.  There is no safe formula to calculate the required protective capacity precisely, so a conservative assessment is required, followed by the appropriate feedback to ascertain that it is enough.

Critical insight #3: Every internal resource has a finite capacity covered by a portion of OE, but many times there are temporary ways to increase capacity for a cost, usually much more expensive per unit of capacity than the regular available capacity.  Such means could be part of the protective capacity, but their real value is in allowing the organization to take opportunities that clearly require more capacity than the current OE covers. That means the delta-OE has to be considered and compared to the expected delta-T.

Any decision that deals with ways to increase T has to analyze the possibility that one or more of the critical resources would be overloaded, and if so, find a way to either reduce other sales or increase the capacity of the specific resource(s).

The cost of capacity changes in a stepwise way, which makes the behavior of OE clearly non-linear.  One might look at it as a complication, and it really makes the whole notion of “per-unit” measurements unusable in reality.  But, when the full impact of uncertainty is recognized, simulating ‘what-if’ scenarios could reveal when the connection between T and OE is clear enough to support a decision, and when there is a doubt.
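Here is a minimal sketch of such a ‘what-if’ check. The resources, loads, overtime rates and throughput figures are all hypothetical assumptions made up for illustration:

```python
# What-if check: does an opportunity still pay off once overloaded resources
# require extra, more expensive capacity (e.g. overtime)? Illustrative numbers only.

def evaluate_opportunity(extra_t, extra_load_hours, resources):
    """Return delta-T minus delta-OE for one scenario."""
    delta_oe = 0.0
    for r in resources:
        new_load = r["current_load"] + extra_load_hours.get(r["name"], 0)
        overload = new_load - r["available_hours"]
        if overload > 0:
            # buy temporary capacity at a premium rate per hour
            delta_oe += overload * r["overtime_cost_per_hour"]
    return extra_t - delta_oe

resources = [
    {"name": "assembly",  "available_hours": 800, "current_load": 760,
     "overtime_cost_per_hour": 90},
    {"name": "packaging", "available_hours": 400, "current_load": 300,
     "overtime_cost_per_hour": 60},
]

# the same idea checked under a conservative and an optimistic scenario
for label, extra_t, load in [
    ("conservative", 25_000, {"assembly": 60,  "packaging": 40}),
    ("optimistic",   60_000, {"assembly": 150, "packaging": 100}),
]:
    print(label, "delta-T minus delta-OE =", evaluate_opportunity(extra_t, load, resources))
```

Running both a conservative and an optimistic scenario through the same check is the kind of test referred to in the next paragraph; here both scenarios remain positive, but the optimistic one already consumes a lot of expensive overtime.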

Another realization is that ideas for increasing T are usually significant, and their expected impact, both on Sales/Throughput and on the required capacity, is far from deterministic.  So, both the conservative realistic possibilities and the more optimistic ones have to be carefully checked.

Another insight: when judging the impact of an idea on sales, it seems that if the conservative assessment of the impact is already good, then there is no need to check the option that the impact would be far greater. This is a mistake! When the market reacts very favorably, more problems in capacity, causing delays in delivery, have to be taken into careful consideration.  So, there are clear possible negative impacts of succeeding too well.  It can be called “the curse of blessing”, an interesting insight I heard from Shimon Pass.  It is a devastating trap if you are not aware of it.

Is the above “simple”?

I think it is as simple as we can get when we strive to be right most of the time.

People who would like to know more about what I have briefly outlined above could ask me for a presentation and demo of Throughput Economics, a detailed methodology for evaluating decisions aimed at achieving much more delta-T than delta-OE.

Decision Making: between Emotions and Logic

We in TOC think we are people of logic, doing our best to think clearly and by that be able to make the best decisions. Suppose it is true that we are able to think clearly – how does that impact our decisions? And does the capability to think clearly help in influencing others to make the right decisions based on our clear-thinking analysis?

It is widely accepted that decisions are made based on emotions not logic.

That claim is obviously true, not just because of the structure and functioning of the brain, but also based on the logical observation that logic alone cannot make ANY decision, because what you want to achieve and what you don’t want to tolerate cannot be determined by logic.  Abstract logic does not have a goal or any wish.  Logic cannot determine how important it is to earn more money, or whether to live alone or with the family, or even whether to live or commit suicide.  All the above critical inputs are dictated by our emotions, and all the worthy objectives are emotional.  Also, every risk we have to consider involves our emotions in evaluating the damage the risk might cause us.  The measurement of ‘damage’ is done by our emotions.  So, the decision has to take into account our feelings for or against the various potential outcomes.  We can logically quantify the damage, say losing $1,000, but the interpretation of that damage is done by our emotions.

So, what is the role of logic in the decision making?

First, logic looks for rational ways to accomplish the objectives set by the emotions. You would like to buy a car?  Logic raises the financial impact and predicts the response of other people to your new car, but it does not tell you how much joy the enthusiastic response of others would mean to you.  Logic, of course, does not mind the aesthetics of the car and the general feeling of driving such a car, unless the emotions include them in the logical process.  When the car is for specific needs, logical analysis could note whether the capabilities of the car are good enough for those needs.  Logic also predicts some of the future problems, like facing complaints from the family that the bank account is now too low, so they cannot buy what they desire.

Thus, logic is used to identify both the good and the negative outcomes of the decision. If one is very angry at another person, logic might raise the option to hit the other person in the face, but also warn of the possible outcomes.  The judgment lies with the emotions in order to make the final decision.  Would the satisfaction of hitting the other person be worth more than the consequences?  This is a detailed dialogue between emotional inputs and logical analysis and predictions.

So, it is absolutely right that eventually every decision is emotional. It is also true that after the decision is made logic is used to justify it to other people.

But, logic plays an important role in the decision making itself. It is stronger for decisions that have less obvious personal impact, like many of the managerial decisions, even though we’ll discuss later some possible emotional impact also on these decisions.  When we try to impact a decision to be taken by another person we have to use very strong logic, highlighting the pros and cons and presenting them intentionally to impact the emotions of the other side.  The TOC tools for outlining cause-and-effect are great for this purpose, but the emotional effects have to be part of the cause-effect analysis.

Every decision involves a choice. We are able to respond fast to daily common decisions in an automatic way.  Inertia plays a big role in decisions that seem similar to past decisions, but logic, when used, might raise reservations about the routine model for such decisions when negative outcomes are observed.  When such reservations are raised the emotions have to respond, either by rejecting the logical arguments, or by considering the impact and only then making the decision.  Rejecting logical arguments because they clash with already established models is quite common, but doing so also raises a certain fear that might lead to reconsidering the logical arguments later.

A person who tries to influence another has to consider blind rejection as a possible response, which can be logically understood only when we are aware of the hidden threat posed by the negative emotion of being influenced. Taking emotions into account within a logical analysis requires a good understanding of the emotions involved.

The buy-in process, developed by Goldratt, is directed at change management. The first level of the process is to achieve a consensus on the problem. The point here is that “the problem” refers to the organization. But, the decision maker is a specific person, who also considers how “the problem” impacts his personal interests.  So, the same problem has two different settings to be judged upon.

Suppose we try to influence project managers to recognize the generic problem in managing projects. We know that most projects take longer than planned, cost more and deliver less of the planned content.  This is definitely a problem for the organization, which has to ensure the timing and quality of the project when the decision to go for the project is made.  Another top management need is to manage the organization’s resources well, and these also suffer from late projects.

How would a typical project manager evaluate the personal aspects of project lateness? What emotions would be impacted if the performance of the next project were similar to previous projects?  Which of the project manager's emotions truly concern the future well-being of the organization? Does the project manager see it as a personal failing when the performance is about the same as in the past?  Does the manager fear that her personal reputation might be harmed?

We need to recognize the fact that it is absolutely necessary to include the personal aspects of the person we are communicating with in the cause-and-effect logical process.

We also need to recognize the fact that emotions are effects that cause other effects like behaviors, views, responses and decisions. The effects caused by emotions should not be viewed as irrational; much of the time they reflect perfect rationality when we understand the emotions.

There are two categories of emotions that have an impact on the role of logic for managerial decisions.

  1. Positive emotions for being able to think logically. People with a great passion for success have to develop one of two different, even conflicting, emotions. One is the desire to see reality objectively. This desire leads to a feeling of respect for logical analysis. The other emotion is the desire to develop a “sixth sense” that would mysteriously lead to success through taking the right gambles. The first emotion, for being objective, directly causes respect for logic and efforts to use it properly; it can be seen in most successful managers. The second type is made of people ready to take big risks; when successful they become great business people, but not necessarily great managers. Those people rely on their emotions much more than on logic.
  2. Handling the fear of uncertainty. The emotions lead to a choice between “fight” and “flight”. People who fight uncertainty are drawn to logically analyze the odds and to systematically look for ways to reduce the damage; they respect objectivity and logical thinking. Other people hate being in fear even more than the subject matter of the fear itself. If I suspect I have cancer, I might avoid doing the necessary checkups because I don’t want to know. Such people try to view reality according to what suits them.

Generally speaking, FEAR is a critical source of various emotions and it has a huge impact on our decisions.  Logic does not tell us to be brave or cowardly.  These behavioral patterns are dictated by emotions, and then logic can take the objective and look for the best way to handle it.

It is my view that FEAR is a major cause for inconsistencies in the behavior of managers.  While most managers try to do well for the sake of the organization, the potential impact on their personal emotions might lead to different decisions.  Thus, the inconsistencies are not irrational – they just reflect the role of their emotions and self-interests.  We can use LOGIC to identify the inconsistency and build the rational explanation for it.  When we fail to do so, it is usually the failure of our logic, not the irrational nature of the person we try to understand.

Caught within the shared paradigms of their business area

A common shared paradigm being challenged

Every business area has its own “best practices” (are they really the best?) and a whole group of paradigms that are shared by everybody in that particular area. The consequence is being caught in a status quo, where the performance of the organization is stuck and slowly declines as every competitor intensifies its efforts to steal customers from the others.

This day-to-day constant fight to preserve the current state, without any leap in performance, is the reality of the vast majority of organizations. It causes them to be satisfied with a relatively small profit, or to tolerate limited losses, with the feeling that this is the best they can do.  Such businesses succeed in remaining reasonably stable, but without any hope for a better future.

A necessary condition, though far from sufficient, for getting back to good business growth is to be able to challenge one important shared paradigm. Once this is done, the organization deviates from the common way all the competitors are going, and by this establishes a clear differentiation from the competition.  The risk of not challenging such a paradigm is that a competitor might do it first, and this would change the false impression of stability.

However, this absolutely necessary step for growth is very risky, as being different does not mean outclassing the competition, and it certainly does not guarantee bringing any new value to customers. Too many times, being different from the standard only reduces the value as perceived by the customers, who just see the difficulty of getting used to something different without any benefit from it.  In other cases the new added value seems too expensive for the target market.

Another risk is that even if the organization succeeds in creating new value for customers, it does not mean the customers are able to recognize and appreciate the new value. The difficulty is that unexpected added value might require a change in habits, and even when the customer sees the new value as something surprisingly nice (“how come we never got such an offer before?”), the move raises suspicions that it is too good to be true.

The point with the risk is that it creates FEAR, which sometimes blocks any attempt to challenge a common paradigm that could lead to a breakthrough. The way FEAR should be handled is full acknowledgment that it is legitimate, while recognizing that the risk can be handled by logically analyzing it, striving to reduce the risk, or its negative impact, and also creating a safety net of control with immediate corrective actions to neutralize the negative impact in time.  When the risk is properly evaluated and controlled it is possible to overcome the fear.

Another, seemingly unrelated, effect of a similar fear is the high number of R&D projects that continue in spite of the fact that their early promise has already vanished.  The causal relation of that effect to the reluctance to challenge established paradigms shared within a business sector is the fear of failure and its personal impact.  The term “failure” has an especially negative connotation in the world of measurements and false accountability, and is in itself a paradigm that should be challenged.  An alternative related expression is “taking a calculated risk”, which naturally leads to the realization that the move might fail, but it is not interpreted with the full connotation of a “failure” because it has been considered ahead of time and the choice has been to go for it.  In the high-tech startup world the expectation of failures is so high that the damage to the pride and reputation of the individuals involved is minimal, which opens the way to many worthy efforts to do something exceptional.

Taking a calculated risk should be widely used not just for new technologies, but for every business sector, as the ways to come up with a new significant value to potential clients are diverse and only very few of them require a technological breakthrough.

But, taking a calculated risk has to be based on two necessary elements.

  1. A culture that endorses taking calculated risks with the full realization that they might fail.
  2. Using a valid process of analyzing the risk. Such a process should include searching for ways to reduce the potential risk, and eventually producing an analysis of both the potential damage and the potential gain.

The difficulty in the process of calculating the risk is that in the majority of the cases we don’t have good enough probability numbers. Using statistical models to estimate the probability is also frequently misleading.

Yet, the difficulty of estimating the amount of uncertainty should not cause management to ignore the notion of well-calculated risks, because the future of every organization simply requires taking some risks, and if you have to take risks you had better develop good-enough ways to estimate them. Developing the right culture depends on finding an acceptable way to estimate risks.  The term “estimate” is more appropriate than “calculate”, which seems to suggest the outcome of precise calculations.

There is a need to differentiate between estimating the uncertainty itself and estimating the level of damage that could be generated as a result of it. Let’s use the following example to comprehend the full ramifications of the difference.

A food company evaluates an idea to add a high-end variant to its popular product line SoupOne. The new line will target the market segment that appreciates true gourmet soups. The line will be called SuperSoupOne and will cost 50% more.  This is a kind of new paradigm, because the usual assumption is that gourmet-loving people shy away from processed food.

Suppose that the management has enough evidence to be convinced that gourmet-loving people could be tempted to try such a soup and, assuming it is really up to their standard, will continue to consume it. The “most likely” estimation, based on a certain market survey, is that SuperSoupOne will gain market demand of 10% of the current demand for SoupOne, but only 5% of SoupOne buyers would switch to the new product; the rest are going to be new customers.

However, one of the senior executives has raised another potential risk:

“What would SuperSoupOne do to the reputation of our most popular product line? It would radiate the message that it is a rather poor product and even the producer is now selling a much better version of it?  What would the buyers do when they cannot afford the better product?  I’m concerned that some of them will try the competitors’ products.”

The risk is losing part of the sales of the key product of the company. How big might the impact of SuperSoupOne on the sales of SoupOne be?  Actually, the impact might even be positive. Do we really know? We need to evaluate the possibility of a negative effect and how it would impact the bottom line.

Note, the risk to be evaluated is the impact of the new line on the old line – not whether the new line would generate high enough throughput to cover all the delta-operating-expenses of launching the new line.

How could such a risk be evaluated? Suppose the current throughput generated by SoupOne is $5M.  According to the forecast for SuperSoupOne, the quantity sold of the new line will be 10% of the current quantity sold of SoupOne.  Suppose that such sales would generate 20% of the current throughput, due to the higher unit price. So, we get additional throughput of $1M from the new line, while losing only $250K (5%) from the old line.

But the drop of 5% of the old line is only a forecast, described vaguely as “most likely”, and those 5% are now buying the new line. If the reputation is truly harmed, it might cause up to 30% less sales of the old line.  In that case the loss of $1.5M of throughput from the old line would not be compensated for by the $1M “most likely” estimation of the new throughput.

The above rough calculations help management realize the potential risk of losing up to $0.5M as a kind of reasonable worst case. Other reasonable possibilities seem much more optimistic for the overall additional profit from the move.
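Using the figures already mentioned ($5M of current SoupOne throughput and roughly $1M of “most likely” new throughput from SuperSoupOne, ignoring the delta-OE of the launch itself), a tiny scan over possible drops in SoupOne sales shows where the move stops paying off; the specific percentages scanned are just an illustration:

```python
# Net throughput effect of launching SuperSoupOne as a function of how much
# SoupOne sales drop. Figures taken from the rough example above.

current_soupone_t = 5_000_000   # current annual throughput of SoupOne
new_line_t = 1_000_000          # "most likely" throughput of SuperSoupOne

for drop in (0.05, 0.10, 0.20, 0.30):
    net = new_line_t - drop * current_soupone_t
    print(f"SoupOne drop of {drop:.0%}: net delta-T = {net:+,.0f}")

# Break-even is at a 20% drop (1M / 5M); beyond that the move loses throughput.
```

The scan makes the break-even visible: as long as the damage to SoupOne stays below a 20% drop in its sales, the move still adds throughput, while the 30% reputation-damage scenario is the one that produces the roughly $0.5M loss.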

Can the risk be reduced? How about giving the new product line a totally different brand name, which does not refer at all to the current popular product?  That would probably not eliminate the full negative impact, but it would significantly reduce it.

The objective of the detailed example was not to reach a firm, clear decision. There is no claim that the existing paradigm is invalid and thus can be challenged. We also don’t know whether coming up with a higher-end product is a good idea and what the actual impact on the current market is going to be.  The example has been used to demonstrate the need to get a better idea of the risk and its potential impact on the bottom line, using the intuition of the relevant people.  A certain direction of a solution for estimating a risky move has been briefly demonstrated.

Such an analysis is a necessary condition for the bigger need of opening the door to a constant search for a breakthrough, which has to be based on challenging an existing shared paradigm. This is the objective of this post: to claim that challenging widely shared paradigms is truly required for every organization.  You might say the same about your own desire to make a personal breakthrough: it passes through challenging a common paradigm.