Between Sophisticated and Simple Solutions and the Role of Software

Smart people like to exploit their special capabilities by finding sophisticated solutions to seemingly complex problems.  Software allows even more sophistication, with better control.

However, two immediate concerns are raised.  One is whether the impact on the actual results is significant enough to care about.  In other words: do the results justify the extra effort?  The other is whether such a solution might fail in reality.  In other words, what is the risk of getting inferior results?

Simple solutions focus on only a few variables, use much less data, and their inherent logic can be well understood by all the involved players. However, simple solutions are not optimal, meaning more could have been achieved, at least theoretically.

Here is the basic conflict:

Until recently we could argue that simple solutions have an edge, because of three common problems of sophistication.

  1. Inaccurate data could easily mess up the optimal solution, as most sophisticated solutions are sensitive to the exact values of many variables.
  2. People executing the solution might misunderstand the logic and make major mistakes, which prevent achieving the excellent expected results.
  3. Any flawed basic assumption behind the optimal solution disrupts the results.
    • For instance, assuming certain variables, like sales of an item at different locations, are stochastically independent.
    • When the solution is based on software, bugs might easily disrupt the results.  The more sophisticated the algorithm, the higher the chance of bugs that aren’t easily identified.

The recent penetration of new technologies might push back towards sophistication.  Digitization of the flow of materials through shop-floors and warehouses, including RFID, has advanced the accuracy of much of the data.  Artificial Intelligence (AI), coupled with Big Data, is able to consider the combined impact of many variables and to take into account dependencies and newly discovered correlations.

What are the limitations of sophisticated automation?

There are two different types of potential causes for failure:

  1. Flawed results due to problems of the sophisticated algorithm:
    • Missing information on matters that have a clear impact.
      • Like a change in the regulations, a new competitor, etc.
        • In other words, information that humans are naturally aware of but that is not included in the digital databases.
    • Flawed assumptions, especially in modelling reality, as well as software bugs.  This includes assessments of the behavior of the uncertainty and of the relevance of past data to the current state.
  2. Misunderstanding the full objective of top management.  Human beings have emotions, desires and values.  There could be results that are in line with the formal objective function but violate certain key values, like being fair and honest to clients and suppliers.  Such values are hard to code.

The human mind operates in a different way than computers do, leading to inconsistencies in evaluating what a good solution is.

The human mind uses cause-and-effect logic to predict the future, drawing on informal and intuitive information.  On one hand, intuitive information might be wrong.  On the other hand, ignoring clear causality and truly relevant information could definitely yield inferior results.

Artificial Intelligence uses statistical tools to identify correlations between different variables.  But it refrains from assuming causality, and thus its predictions are often limited to existing data and fail to consider recent changes that have no precedent in the past.  The only way to predict the ramifications of such a new change is by cause-and-effect reasoning.

Human beings are limited in carrying out a lot of calculations.  Human capacity also limits the number of different topics a person can deal with in a given period of time.

Another aspect to consider is the impact of uncertainty.  The common view is that uncertainty adds considerable complexity to predicting the future based on what is known from the past.

Uncertainty significantly limits our ability to predict anything that lies within the ‘noise’.  The noise can be described as the “common and expected uncertainty”, meaning the combined variability of all the relevant variables, focusing on the range where the vast majority of the cases fall (say 90% of the results) and ignoring rare cases.  So, any outcome that falls within the ‘noise’ should not come as a surprise.  As long as the ‘noise’ stays at about the same level, it represents a limit to the ability to predict the future.  But that is already more than nothing, as it is possible to outline the boundaries of the noise, and predictions that are beyond the noise should be the focus for analysis and decisions.
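
To make the idea of outlining the boundaries of the noise concrete, here is a minimal sketch in Python.  The weekly-sales data and the 90% band are illustrative assumptions, not part of the analysis above.

```python
import numpy as np

# A minimal sketch (made-up data): outline the boundaries of the 'noise'
# from past outcomes and flag only results that fall beyond them.
rng = np.random.default_rng(0)
history = rng.normal(loc=1000, scale=120, size=104)  # two years of weekly sales, simulated

# The "common and expected uncertainty": the band covering ~90% of past outcomes.
low, high = np.percentile(history, [5, 95])

def beyond_noise(outcome):
    """True only when an outcome falls outside the noise and deserves causal analysis."""
    return outcome < low or outcome > high

print(f"noise band: {low:.0f} .. {high:.0f}")
print(beyond_noise(1050))   # within the noise -> no surprise, no special action
print(beyond_noise(1600))   # beyond the noise -> focus for analysis and decisions
```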

Goldratt said: “Don’t optimize within the noise!”

Good statistical analysis of all the known contributors to the noise might be able to reduce the noise.  According to Goldratt this is often a poor exploitation of management time.  First, because in most cases the reduction in the noise is relatively small, while it requires effort to look for the additional required data.  Secondly, it takes time to prove the reduction in the noise is real.  And thirdly, and most important, there are other changes that could improve the performance well beyond the existing noise.

A potential failing of statistical analyses is relying on past data that are no longer relevant due to a major change that impacts the relevant economy.  One can wonder whether forecasts that rely on data from before Covid-19 have any relevance to the future after Covid.

The realization that a true improvement in performance should be far above the noise greatly simplifies the natural complexity, and could lead to effective simple solutions that are highly adaptive to significant changes beyond the natural noise.

Demonstrating the generic problem:

Inventory management is a critical element for supply chains.  Forecasting the demand for every specific item at every specific location is quite challenging.  Human intuition might not be good enough.  The current practice is to hold a period of time, like two weeks, of inventory of item X at location Y, where the quantity behind “two weeks of inventory” is determined through either a forecast or an average of daily sales.
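
As a rough illustration of this common practice, the following sketch sizes a “two weeks of inventory” target from an average sale-day; all the numbers are made up.

```python
# Illustrative sketch of the common practice: size "two weeks of inventory" of item X
# at location Y from an average sale-day (all numbers are made up).
daily_sales_history = [38, 42, 35, 51, 29, 44, 40, 37, 48, 33, 45, 39, 41, 36]  # units/day
avg_daily_sales = sum(daily_sales_history) / len(daily_sales_history)

days_of_cover = 14                                  # "two weeks of inventory"
target_quantity = round(days_of_cover * avg_daily_sales)
print(target_quantity)                              # the quantity behind the rule of thumb
```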

Now, with much more sophisticated AI, it is assumed that it is possible to accurately forecast the demand and align it with the supply time, including the fluctuations in the supply.  However, a forecast is never one precise number, and neither is the supply time.  Every forecast is a stochastic prediction, meaning it could vary.  A more accurate forecast means that the spread of the likely results is narrower than for a less accurate forecast.  The sophisticated solution could try to assess the damage of shortages versus surpluses; however, part of the required information for such an assessment might not be in the available data.  For instance, the significant damage of a shortage is often the negative response of the customers.  It might be possible to track the actual loss of sales due to shortages, but it is challenging to assess the future behavior of disappointed customers.
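
To show what such an assessment might look like, and why the missing information matters, here is a hedged sketch: a stocking quantity chosen to balance expected shortage and surplus damage.  The demand distribution and both cost figures are assumptions; the shortage cost, in particular, stands for exactly the customer-response damage that is not in the data.

```python
import numpy as np

# Hedged sketch of a 'sophisticated' assessment: choose the stock quantity that minimizes
# the expected damage of shortages versus surpluses over the supply time.
# The demand spread and both per-unit costs are assumptions; the shortage cost stands
# for the customers' negative response, which is exactly what the data does not contain.
rng = np.random.default_rng(1)
lead_time_demand = rng.normal(560, 90, size=20_000)   # simulated demand over the supply time

surplus_cost = 2.0     # holding / obsolescence per leftover unit (assumed)
shortage_cost = 15.0   # damage per missed unit -- a guess, not a measured fact

def expected_cost(stock):
    shortage = np.maximum(lead_time_demand - stock, 0).mean()
    surplus = np.maximum(stock - lead_time_demand, 0).mean()
    return shortage_cost * shortage + surplus_cost * surplus

candidates = np.arange(400, 900)
best = candidates[np.argmin([expected_cost(q) for q in candidates])]
print(best)   # the 'optimal' quantity moves considerably when the guessed shortage cost changes
```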

The simpler key TOC insight for inventory management is to replenish as fast as possible.  This recognition means narrowing down the forecasting horizon.  Actually, TOC assumes, as an initial good-enough forecast, no change in the demand within that horizon, so replenishing what was sold yesterday is good enough.
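
A minimal sketch of this replenishment rule, with illustrative names and numbers:

```python
# Minimal sketch of the rule above: within a short horizon assume no change in demand,
# so today's replenishment order simply equals yesterday's consumption.
def replenishment_order(units_sold_yesterday: int) -> int:
    return units_sold_yesterday

print(replenishment_order(42))   # yesterday's sales become today's order
```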

Another key insight is to base the target inventory not just on the on-hand stock, but to include the inventory that is already in the pipeline.  This is a more practical definition, as it represents the current commitment for holding inventory, and it makes it straightforward to keep the target level intact.

Defining the target inventory to include both on-hand and pipeline stock makes it possible to issue signals reflecting the current status of the stock at the location.  Normally we’d expect anything between one-third and two-thirds of the target level to be available on-hand to represent the “about right” status of inventory, knowing the rest is somewhere on the way.  When less than one-third is on-hand, the status of the stock is at risk, and actions to expedite the shipments are required.  It is the duty of the human manager to evaluate the situation and find the best way to respond to it.  Such an occurrence also triggers the evaluation of whether the target level is too low and needs to be increased.  Generally speaking, target levels should be stable most of the time.  Frequent re-forecasting usually comes up with only minor changes.
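
The signals described above can be sketched as follows.  The thresholds follow the one-third / two-thirds split in the text, while the “too much on-hand” case and all the numbers are added here only for illustration.

```python
# Sketch of the buffer-status signal: compare the on-hand stock to the target level,
# which covers both on-hand and pipeline inventory. Thresholds follow the text;
# the 'too much on-hand' case is an added illustration, not part of the post.
def buffer_status(on_hand, target_level):
    fraction = on_hand / target_level
    if fraction < 1/3:
        return "AT RISK"       # expedite shipments; consider whether the target is too low
    if fraction <= 2/3:
        return "ABOUT RIGHT"   # the rest of the target is somewhere in the pipeline
    return "TOO MUCH ON-HAND"  # added case: consider whether the target is too high

target = 120   # current commitment: on-hand plus pipeline for item X at location Y
for on_hand in (30, 55, 95):
    print(on_hand, buffer_status(on_hand, target))  # 30 -> AT RISK, 55 -> ABOUT RIGHT, 95 -> TOO MUCH ON-HAND
```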

The question is: as the target level already includes safety, what is the rationale for introducing frequent changes of 1%-10% to the target level, when such changes are just a reflection of the regular noise, and probably not a real change in the demand?

A sophisticated solution, without the wisdom of the key insights, would try to assess two uncertain situations: how much demand might show up in the short term, and whether the on-hand stock plus whatever is on the way will arrive on time. It would also estimate whether the anticipated results fall within the required service level.

Service level is an artificial and misleading concept.  Try to tell the customer that their delivery was subject to the 3-5% of cases that the service level doesn’t cover.  Customers can understand that rare cases happen, but then they like to hear the story that justifies the failure.  It is also practically impossible to target a given service level, say 95%, because even with the most sophisticated statistical analysis there is no way to truly capture the stochastic function.  Assuming the spread of the combined delivery performance follows the Normal distribution is convenient, but wrong.
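
A small sketch illustrates the last point: if stock is sized for a 95% service level under a Normal assumption while the real lead-time demand is skewed, the achieved service level falls short.  Both distributions and all parameters here are made up for the illustration.

```python
import numpy as np

# Sketch of the Normal-assumption trap: size stock for a 95% service level as if the
# lead-time demand were Normal, then check it against a skewed 'reality'.
# The lognormal choice and every parameter below are illustrative assumptions.
rng = np.random.default_rng(2)
actual_demand = rng.lognormal(mean=6.0, sigma=0.5, size=50_000)  # skewed simulated demand

mu, sigma = actual_demand.mean(), actual_demand.std()
stock = mu + 1.645 * sigma          # 95th percentile of a Normal with the same mean and std

achieved = (actual_demand <= stock).mean()
print(f"intended service level: 95.0%, achieved: {achieved:.1%}")  # falls short of the target
```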

The practical need for humans to understand the logic of the solution, and to be able to input important information that isn’t contained in any database, combined with the superiority of computers in following well-built algorithms and carrying out huge numbers of calculations, points to the direction of the solution.  It has to include two elements: simple, powerful, and agreed-upon logic enabled by semi-sophisticated software, coupled with interaction with the human manager.  This is definitely not an easy, straightforward mission, but it is an ambitious, yet doable, challenge.


How come managers take different decisions for their organization than for themselves?

This post continues the previous post, “A decision is required – a management story with a lesson.”

Prof. Herbert Simon, the Nobel Prize winner, claimed that people are NOT OPTIMIZERS – they do not search for the ultimate optimal choice. Simon called people like you and me “satisficers” – looking for a satisfying choice by setting certain criteria and choosing the first option that satisfies all of them. This is quite similar to what we call in TOC a “good-enough solution.”

My point is that while people are satisficers, once they make decisions on behalf of their organization they are forced to demonstrate that they actively look for the optimal decision. However, there is too much complexity, and too much uncertainty on top of it, to truly reach optimum decisions. This situation makes the search for optimal decisions, based on “books” written by academics other than Prof. Simon, look pathetic. Too many of those decisions are wrong and lead to inferior results.

A common cause for this different behaviour is:

Managers are afraid of after-the-fact criticism, which they consider unfair because it does not take into account the conditions under which the decision was made.


The key frightening aspect is the possible impact of uncertainty. After the fact, the decision can easily be seen as either “right” or “wrong”. Admitting to having made a mistake causes two different undesired effects:

  1. Being punished because of the “mistake”, like being fired or just not being promoted.
  2. Losing the feeling of creating value and being appreciated. This matters greatly to executives and highly professional people.

The fear of unjust criticism forces managers to look for two means of protection:

  1. Being super conservative.
  2. Following the “book” when there is a book.

As Mr. Preston Sumner noted in his very interesting comment on the story, some CEOs are influenced to do the exact opposite: take larger risks than they would allow for themselves. This tendency stems from the way some large organizations compensate their C-level executives. When a CEO is pushed to show great results by hefty bonuses, while poor results carry no comparable penalty, the resulting greed pushes them to take high risks. Is this really what the stockholders want?

There is a critical mistake in looking to “motivate” a person, whether a CEO or just a regular salesperson, by linking the actual financial results to payments to that person. Money is always a necessary condition – but it is far from sufficient to ensure genuine intentions to look after the interests of the organization.

Dr. Goldratt said that organizations force certainty on uncertain situations. Ignoring uncertainty makes people believe that they can judge any decision according to its actual result. If we recognize the need to live with significant uncertainty, we need to learn how to judge decisions in a way that reasonably assesses what might happen – the potential damage as well as the potential gain.

This is just the beginning. I claim that failing to deal openly and visibly with uncertainty is the core problem of most organizations! I’ll certainly come back to this topic, highlighting more undesired effects resulting from this core problem.