TOC and AI: Using the TOC Wisdom to Draw the Full Value from AI for Managing Organizations

By Eli and Amir Schragenheim

A powerful new technology has the potential to achieve huge benefits, but it can also cause huge damage.  So, it is mandatory to analyze that power carefully.  We think it is the duty of TOC experts to look hard at AI and see how to exploit the benefits, while eliminating, or vastly reducing, the possible negative consequences.

Modern AI systems are able to make predictions based on large volumes of data and simulated results, and either take actions, as robots do, or support human decisions.  An important example is the ability to understand language, grasp the real meaning behind it, and generate a variety of services. The “experience” is created by the provided dataset, which has to be very large.  This kind of learning tries to imitate human beings learning from their experience, with the advantage of being able to learn from a HUGE amount of past experience, hopefully with fewer biases. 

AI currently generates value mainly by replacing human beings in relatively simple jobs, performing them faster, more accurately, and with less ‘noise’.

AI has some critical flaws; one is being unable to explain how a specific decision has been reached.  Combined with its dependency on large datasets and on training, this inability to explain a decision creates a threat of making mistakes that most human beings would not.  Even huge datasets are biased by the time, location, and circumstances in which the data were collected, so they might misinterpret a specific situation. 

This document deals with the potential value for managing organizations that can be achieved by combining the Theory of Constraints (TOC) with AI.  It doesn’t deal with other valuable uses of AI.

The focus of TOC is on the goal and how to achieve more of it – so in terms of management it looks at what prevents the management team from achieving more of the goal.

TOC focuses on finding breakthroughs by exploring where a current limitation prevents achieving more goal units, so we’d like to explore whether the power of AI can be used to overcome such limitations.

Without a deep understanding of the management needs, the potential value of AI, or of any other new technology, is limited to needs that are obvious to all and that AI can answer through automation, without requiring additional elements for the solution to work. In the more complicated case of using robots to move merchandise in a huge warehouse, we have a fairly obvious combination of two technologies, AI and robotics, answering the need to replace lower-level human workers, probably also improving speed with fewer mistakes (higher quality).

When it comes to supporting the decisions of higher-level managers, the added value of AI is much less obvious.  One aspect that is fundamentally different from the current uses of AI is that the human decision maker has to be fully responsible for the decision.  This means the AI could recommend, or just supply information and trade-offs, but it should not be the decision maker.  This places several tough demands on AI technology, but when these demands are answered, new opportunities to gain value arise.

Providing absolutely necessary information, which today is either missing or supplied by the biased and inaccurate intuition of the human manager, is such an opportunity. 

Covering for not-good-enough human intuition means considering a very large volume of data, performing a huge number of calculations, looking for correlations and patterns the way the human mind does, and using reinforcement rewards to identify the best path to the supporting information.  This gives the human decision maker a generic opportunity to improve the quality of decisions.  Eventually, the decision maker might still need to include facts that are not part of the datasets, using human intuition and intelligence to complement the information upon which an important decision has to be made.

Measuring the uncertainty and its impact

The trickiest part of prediction is getting a good idea not just of the exact value we would like to know, but also of the reasonable range of deviations from it.  No prediction of the future is certain, so the key question should be: ‘what should we reasonably expect?’

TOC developed the necessary tools for keeping a stable flow of products and materials in spite of all the noise (common and expected uncertainty), using visible buffers as an integral part of the planning, and buffer management for determining the priorities during the execution.  This line of thinking should be at the core of developing AI tools to support the management of the organization.

The most immediate need in managing a supply chain (and in other critical business decisions) is to get a good idea of the demand tomorrow, next week, next month, and also in the long term.  Assessing the potential demand for the next year or years is critical for investing in capacity or in R&D. There is NO WAY to come up today with a reliable exact number for the demand tomorrow, and it gets worse the further we go into the future (this is just the way uncertainty works). 

Example: Suppose the very best forecast algorithm tells you that next week’s demand for SKU13 is 1,237 units, but the actual demand turns out to be 1,411. 

Was the original forecast wrong? 

Suppose another forecast predicted the sales to be 1,358; is the algorithm behind the second forecast necessarily better?  After all, both were wrong.

Suppose now that the first algorithm included an estimation of the average absolute deviation, called the ‘forecasting error’.  The estimation was plus-minus 254.  This puts the first forecast in a better light, because the prediction included the possibility that 1,411 would be the actual result.  If the second algorithm doesn’t provide any ‘forecasting error’, how could you rely on it?
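The comparison above can be sketched in a few lines of code. This is a minimal illustration, not a real forecasting algorithm: the forecast (1,237), the actual demand (1,411), and the forecasting error (254) come from the SKU13 example, while the list of past forecast errors is an invented stand-in for whatever history the algorithm learned from.

```python
# A minimal sketch of judging a one-number forecast by its error range.
# The numbers 1,237 / 1,411 / 254 are from the SKU13 example in the text;
# the past_errors list is illustrative only.

def mean_absolute_deviation(errors):
    """Average absolute forecast error over past periods (the 'forecasting error')."""
    return sum(abs(e) for e in errors) / len(errors)

past_errors = [210, -180, 305, -240, 335, -254]   # forecast minus actual, invented history
mad = mean_absolute_deviation(past_errors)         # 254.0 for this history

forecast = 1237
actual = 1411

low, high = forecast - mad, forecast + mad
within_range = low <= actual <= high
print(f"Expected range: {low:.0f}..{high:.0f}, actual {actual}, "
      f"within range: {within_range}")
```

The point of the sketch is that the first forecast, reported as a range (983 to 1,491), already contains the actual result of 1,411; a bare number like 1,358 offers no such check.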

Effective managers have to be aware of what the demand might be.  When they face one-number forecasts, no matter how good the forecasting algorithm is, they frequently fail to make the best decision, given the available information.

Thus, a critical requirement from any type of forecasting is to reveal the size of the uncertainty around the critical variables, and its impact on the decision.  Having to live with uncertainty means recognizing the damage when the actual demand differs from the one-number forecast.  The relative size of the damage when the demand is below the forecast, compared to when it is above, should significantly influence the manager’s choice.

The AI algorithm has meta-parameters that dictate the decisions it makes. Adjusting these meta-parameters can easily generate a result that is more conservative or more optimistic (for example, instead of using 0.76 as the threshold, we can use 0.7 in one instance and 0.82 in the other). Being exposed to both predictions gives the decision maker better information for choosing the most appropriate action, without having to master standard deviations or the like.
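One simple analogue of such a meta-parameter is the quantile at which a demand estimate is taken: shifting a single number turns the same data into a conservative or an optimistic view. This is only a sketch of the idea; the demand history and the two quantile values (0.30 and 0.82) are illustrative, not taken from any real model.

```python
# Sketch: two demand estimates from the same history by shifting one
# meta-parameter (the quantile). Data and quantile values are illustrative.

def demand_quantile(data, q):
    """Linear-interpolation quantile of past demand (0 <= q <= 1)."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

history = [980, 1120, 1050, 1237, 1300, 1190, 1411, 1260]

conservative = demand_quantile(history, 0.30)   # plan for lower demand
optimistic = demand_quantile(history, 0.82)     # plan for higher demand
print(conservative, optimistic)
```

Presenting both numbers, rather than a single forecast, lets the manager weigh the relative damage of under- versus over-estimating the demand.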

Reaching for more valuable information on sensing the market

A critical need of every management team is to predict the market’s reaction to actions aimed at attracting more demand, or at being able to charge more.  Most forecasting methods, with few exceptions, assume no new change in the market.  Thus, on top of the quality of forecasting demand based only on past behavior, there is a need to evaluate the impact on market demand of proposed changes, as well as of expected changes imposed by external events.

Analyzing potential changes in the market using the logical tools of the Thinking Processes can usually predict, with reasonable confidence, the overall trends the changes would generate.  But the Thinking Processes cannot provide a good sense of the size of the change in the market.  When proposed changes cause mixed reactions, as when the esthetics of the products go through a major design change, human predictions are especially risky. 

Significant changes are a problem for the current practices of AI. However, AI algorithms that detect a deviation from a known reality already exist, and are used extensively in predictive maintenance of manufacturing facilities. Such a signal can alert the decision makers that reality has changed and that manual intervention is needed. 
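The essence of such a deviation signal can be sketched very simply: compare recent demand against the historical baseline and flag when the gap exceeds the expected noise. This is a toy illustration of the idea, not a real predictive-maintenance algorithm; the data and the threshold of three standard errors are invented.

```python
# Sketch of a change-detection signal: flag when the recent average drifts
# more than k standard errors from the historical mean, prompting human
# review. All numbers and the threshold k are illustrative.
import statistics

baseline = [100, 104, 98, 102, 96, 101, 99, 103]   # stable past demand
recent = [121, 125, 119]                           # last few periods

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
recent_mean = statistics.mean(recent)

k = 3.0
# Standard error of the recent average shrinks with more observations.
signal = abs(recent_mean - mean) > k * sd / len(recent) ** 0.5
print("Reality may have changed, manual review needed:", signal)
```

The value of such a signal is not the number itself but the timing: it directs management attention to a change early, even when the cause is still unknown.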

Predicting the impact of big changes made internally, like changing item pricing, launching a big promotion, etc., is a real need for management.  While changing the pricing of an item seems like an easy task, it is tricky to assess all the implications for the demand for other items and the response of the competitors. Moreover, those changes don’t occur very frequently, and the internal data gathered on such changes in the past might not be enough to generate an effective AI model that predicts the implications accurately enough. This presents an opportunity for a third-party organization that deals with Big Data. Such an organization can gather data from many interested organizations and use the aggregated data to build a much more capable AI model, which the organizations sharing their data can use to better predict the effects of those actions. This would create a win-win for all parties involved and could easily cover the operating costs.  Such an organization must guarantee not to disclose any data of a specific organization, sharing only the overall insights.

Warnings about changes in the supply

The natural focus of management is first on the market, then on Operations, which represents the organization’s capability to satisfy the demand, and possibly to achieve more of it.

The supply is, of course, an absolutely necessary element for maintaining the business. The problem is that when a supplier goes through a change that negatively impacts the supply, it might take some clients a considerable amount of time to realize the change and the resulting damage.  Management’s focus should not be on routine relationships.  However, identifying a change in a supplier’s behavior early enough, possibly by software, answers a basic need.  It is especially valuable when the cause of the change is not known, for instance when a supplier faces financial problems or a change of management.

Achieving effective collaboration between AI, analytics, and human intuition

The three key limitations of AI are:

  1. Being a ‘black box’ where its recommendations are not explained.
  2. The current practices don’t use cause-and-effect logic.  There are moves within AI to include cause-and-effect reasoning sometime in the future.
  3. AI is fully dependent on its dataset and training.

One way to partially overcome these limitations is to use software modules, based both on cause-and-effect logic and on ‘old-fashioned’ statistical analysis, that evaluate the AI’s recommendations and check how reasonable they are, possibly also re-running the AI module to check a somewhat different request. 

Example.

Suppose the AI prediction for product P1 deviates significantly from the regular expectation (either the regular forecast or simply the current demand). The AI module could then be asked to predict the demand for a group of similar products, say P2 through P5, assuming that if there is a real increase in demand for P1, the similar products should show a similar trend.  Predicting the demand for a group of products should not be done by predicting the demand for each one and combining them, but by repeating the sequence of operations on the combined past demand. Thus, logical feedback is obtained, checking whether the unexplained AI prediction or recommendation makes sense.
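The sanity check described above can be sketched as a comparison of two trends. All the numbers here are hypothetical, and the two “AI” forecasts are hard-coded stand-ins for calls to a real model, one for the item and one for the whole group.

```python
# Sketch of the P1 sanity check: an unexplained jump in the item-level
# AI forecast is compared against a separate AI forecast for the whole
# product group. Numbers and thresholds are hypothetical.

current = {"P1": 1000, "P2": 900, "P3": 1100, "P4": 950, "P5": 1050}
ai_item_forecast = {"P1": 1600}     # stand-in for an AI call: a 60% jump for P1
ai_group_forecast = 5300            # stand-in for an AI call on the combined group

item_trend = ai_item_forecast["P1"] / current["P1"]        # 1.60
group_trend = ai_group_forecast / sum(current.values())    # 5300 / 5000 = 1.06

# If P1 is predicted to surge while the group barely moves, the
# unexplained item-level prediction deserves human review before acting.
needs_review = item_trend > 1.3 and group_trend < 1.1
print("Flag for human review:", needs_review)
```

The thresholds (1.3 and 1.1) are arbitrary illustrations; in practice they would themselves be meta-parameters that management can tune toward conservative or optimistic behavior.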

The other way is to let the human user accept or reject the AI output.  It is desirable that the rejection be expressed in cause-and-effect terms, which the AI could use in the future as new input.

Additional inputs from the human user

AI cannot have all the relevant data required for making a critical decision.  If the human manager is able to feed the additional relevant data into the AI module, and a certain level of training ensures that the additional data participate in the learning and in the output of the AI module, this could improve the usability of AI as high-level decision support.

Conclusions and the vision for the future

AI is a powerful technology that can bring a lot of value, but it may also cause a lot of damage.  To bring value, AI has to eliminate or reduce a current limitation.  Implementing AI also has to consider the current practices, outlining how decision makers should adjust to the new practice and how to evaluate AI recommendations before taking action. 

Supporting management decisions is a worthy next direction for AI.  But it definitely needs a focus to ensure that truly high value is generated, and possible damage is prevented.

TOC can definitely contribute a focused view of the real needs of top management.  It also enables an analysis of all the necessary conditions for supporting those needs.  This means that while AI can be a necessary element in making superior decisions, in most cases the AI application alone would be insufficient. To draw the full value, other parts, like responsible human inputs, other software modules, and proper training of the users, have to be in place.

TOC is about gaining the right focus for management on what is required to get more of the organizational goal.  Assisting managers to define what needs immediate focus, as well as helping them understand the inherent ‘noise’ and quickly identify true signals, is a critical direction for AI and TOC combined to improve the way organizations are managed.  Even human intuition could be significantly improved by being focused on the areas where AI is unable to assist.

Improvements that AI can give to TOC

The proposed collaboration between the TOC philosophy and AI should not be just one way. The TOC applications can get substantial support from AI, especially for buffer sizing and buffer management.

Buffer sizing is a sensitive area.  The introduction of buffers for protecting, actually stabilizing, the delivery performance is problematic at the initial stage. At that point AI cannot help, because analyzing the history from before the TOC insights were actively used is not helpful.  But after one or two years under the TOC guidelines, AI should be able to point to buffers that are too large, as well as to the few that are too small.  The Dynamic Buffer Management (DBM) procedure for stock buffers, based on analyzing the penetrations into the Red Zone and their duration, could be significantly improved by AI. Another potential improvement is letting AI recommend by how much to increase the buffer.  Similar improvements could be achieved by analyzing when staying too long in the Green Zone signals a safe decrease of the stock buffer.
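A bare-bones version of the DBM rule described above can be sketched as follows. This is an illustration of the mechanism only: the Red Zone as the lowest third of the buffer is standard DBM practice, but the tolerance of two red days and the one-third increase are illustrative choices, exactly the kind of meta-parameters an AI model could learn to set better.

```python
# Sketch of a Dynamic Buffer Management (DBM) check for a stock buffer:
# count recent penetrations into the Red Zone (lowest third of the buffer)
# and recommend an increase when they are too frequent. The tolerance
# (max_red_days) and the increase factor are illustrative.

def dbm_recommendation(buffer_size, on_hand_history,
                       red_fraction=1 / 3, max_red_days=2):
    red_line = buffer_size * red_fraction
    red_days = sum(1 for level in on_hand_history if level < red_line)
    if red_days > max_red_days:
        return round(buffer_size * 4 / 3)   # increase the buffer by a third
    return buffer_size

# Ten days of on-hand stock against a buffer of 90 units:
# three days dip below the red line of 30, so an increase is recommended.
print(dbm_recommendation(90, [50, 42, 28, 25, 33, 29, 61, 70, 55, 48]))
```

An AI model could refine both when to react and by how much, based on far more history and context than this fixed rule uses.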

The most important use of buffer management is setting one priority system for Operations, guiding what the most urgent next job is for delivering all the orders on time.  A part that needs improvement is identifying when expediting actions are truly needed, including the use of capacity buffers to restore the stability of delivery performance.  Here again is a critical mission for AI: coming up with an improved prediction of the current state of the orders against the commitments to the market.
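The single priority system itself is straightforward to express: each order's priority is the fraction of its time buffer already consumed, and the highest fraction is the most urgent job. The sketch below uses invented orders and field names; the two-thirds Red Zone boundary follows standard buffer-management convention.

```python
# Sketch of buffer management as one priority system: rank orders by the
# fraction of their time buffer already consumed. Orders and field names
# are illustrative.

def buffer_status(elapsed_days, buffer_days):
    """Fraction of the time buffer consumed (above 2/3 means Red Zone)."""
    return elapsed_days / buffer_days

orders = [
    {"id": "A", "elapsed": 2, "buffer": 10},
    {"id": "B", "elapsed": 8, "buffer": 10},   # deep in the Red Zone
    {"id": "C", "elapsed": 5, "buffer": 12},
]
queue = sorted(orders,
               key=lambda o: buffer_status(o["elapsed"], o["buffer"]),
               reverse=True)
print([o["id"] for o in queue])   # most urgent first
```

Where AI could add value is upstream of this ranking: predicting which orders are about to penetrate the Red Zone, so expediting starts before the damage occurs rather than after.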

The TOC procedures were shaped by recognizing the capacity limitations of management attention. By relieving managers of some of the ongoing, relatively routine cases, where AI is fast and reliable enough, TOC can focus management attention on the most critical strategic steps for the next era.
