Common sense – combining intuition and logic

We all know that common sense is not common at all, especially within organizations that have an ‘optimization’ culture.  Common sense tells us that reaching true optimization is an illusion; chasing it drives damaging behaviors and keeps us far from even a good-enough state.

What is the common sense way to assess the worthiness of a new idea?

The first common sense question is:  what information is required to assess the idea?


A reminder: Goldratt defined information as “an answer to a question asked.”  In a way this means there are certain things we need to know.  So, when we have to make a decision there are several inputs we look for – and these are the necessary information items.

Example:

On behalf of your organization you look for a vendor for office supplies.  You talk with the representative of a large office-supplies company and also with the enthusiastic owner of a new office-supply business.  The large company’s rep is a tired and not-too-bright fellow who just recites the standard sales pitch.  The owner of the new business is definitely brilliant, and your intuition tells you he is going to be successful.

What information – answers to questions – do you have to look for?

  1. From whom are you going to get the better overall deal?
  2. From whom are you going to get the better overall service, especially a better response to any urgent request?

The first question gets a precise numerical answer.  Suppose the new business offers prices that are 4% cheaper for the first six months; after that the prices would be the same.  Let’s also lay down the data that the total expense on office supplies in your organization comes to 0.94% of turnover.
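A quick back-of-the-envelope calculation shows how small the financial difference really is.  Here is a minimal sketch in Python; the turnover figure is an assumption added purely for concreteness:

```python
# Illustrative numbers from the example above; the turnover is an assumed figure
annual_turnover = 100_000_000      # assume $100M annual turnover for concreteness
supplies_share = 0.0094            # office supplies come to 0.94% of turnover
discount = 0.04                    # the new vendor is 4% cheaper
discount_months = 6                # the discount holds for six months only

annual_supplies_spend = annual_turnover * supplies_share
savings = annual_supplies_spend * discount * (discount_months / 12)

print(f"Annual supplies spend: ${annual_supplies_spend:,.0f}")          # $940,000
print(f"Savings from the discount: ${savings:,.0f}")                    # $18,800
print(f"Savings as share of turnover: {savings / annual_turnover:.4%}") # 0.0188%
```

Less than two-hundredths of a percent of turnover – which is why the second question, about service, should dominate the decision.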

The second question has to rely on intuition, as any question about the future has no precise numerical answer, and I doubt whether you would find any valid statistical model for this specific question.

Your intuition, based roughly on your life experience plus some emotions and biases, tells you the new business is going to give much better service.  You don’t expect a large business to do “favors”, but a new business with a wish for future growth is more open to responding to special requests.

Decisions are certainly impacted by emotions, and in this case the emotion and the intuition go hand in hand in favoring the new business.

So, is the decision obvious? 

Here comes the important role of applying logic as a critical control mechanism and a means to look at the bigger picture.

What might be the damage from failing to serve at the required level and is it likely to happen?

Large suppliers might miss items here and there.  However, we do expect them to fix such misses in a day or two.  A new business might face more difficulties, especially when it tries to grow too fast, exhausting its resources and possibly suffering from poor cash flow.  It could also suffer from less expertise in the area.

Let’s now ask the next question: what is the size of the damage, and to whom?

Most organizations do not suffer much from an incidental lack of office supplies.  However, it creates a hassle, and when there is a hassle there is a person who is held responsible for it.  So, while the real impact on the performance of the organization is relatively low, for the well-being of the decision maker the common-sense decision is to take the safer alternative, which is the larger supplier!

The need for safety, in this case, is usually stronger than the very small impact on cost.  Well, this is my intuition even for organizations that live in the Cost World.

My general observation

Decisions involve emotions, intuition and logical analysis.  To my mind, emotions have a negative impact on organizational decisions.  Intuition is critically necessary for the main information inputs.  The final decision has to look at the bigger picture and consider the ramifications of the inputs on other aspects of that picture, and for that you need logical analysis.


Is it really an opportunity?

Part 2 of a series on using T, I and OE for key decision making

Opportunities present themselves in various ways.  Only seldom do we see an opportunity so good that there is no point asking any more questions.  Most of what looks like a potential opportunity comes with an embedded doubt: is it really an opportunity or a trap?


A typical managerial conflict happens when Sales proposes a promotion, offering several products at a certain price reduction.  Sales managers believe this would significantly enhance the sales of those products next month, and this belief is backed up by past experience.

A promotion creates huge pressure on the shop floor, reduces the sales of other products and, mainly, reduces sales for a certain period after the promotion is over.  Yet sometimes the extra revenues (minus the variable costs) generated – especially by selling to new clients and gaining their future purchases – more than compensate for the damage.

  • How can we truly check the net financial impact of a promotion?
  • How can we check the financial impact of penetrating another market segment?
  • How can we check the financial impact of launching a series of new products?
  • How can we check the financial impact of purchasing a new production line as an elevation of our current capacity constraint?

We are aware that cost-per-unit is not the right tool to support sound decisions. So, how should we make such decisions?

The most straightforward way is to assess the financial impact of the decision at hand on the bottom line, without relying on some funny ‘per-unit’ fabricated measures.  This looks like quite a difficult objective due to the complexity of the various expenses.  However, when we look at the decision as an optional addition to the current level of sales, we can see two clear factors that simplify the situation:

  1. The change in the incoming flow of money: the revenues from the change in sales, both the additional sales and possible loss of other sales, minus the truly variable costs of those sales.  This is what we call Throughput (T).
  2. The change in the outgoing flow of money (all the other operating expenses, called OE).  Note that those additional expenses are all due to required changes in the available capacity!  This insight was revealed in the previous posts about the behavior of the cost of capacity.

What we get is:   ΔP = ΔT − ΔOE

ΔP is the change in net profit before tax.  For the decision at hand we want to know whether ΔP is positive or negative.
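A minimal numeric sketch of the equation, using made-up promotion figures (every number below is an assumption for illustration):

```python
# Hypothetical promotion; every number is an illustrative assumption
added_revenue = 120_000      # extra sales the promotion generates
variable_costs = 70_000      # truly variable costs (mainly materials) of those sales
lost_T = 15_000              # Throughput lost on cannibalized and post-promotion sales

delta_T = (added_revenue - variable_costs) - lost_T   # change in Throughput

delta_OE = 20_000            # e.g. overtime purchased to handle the extra load

delta_P = delta_T - delta_OE
print(delta_P)               # 15000 -> positive, so the promotion looks worthwhile
```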

What information do we need in order to get a good estimation of the above equation?

One obvious problem is the impact of uncertainty, which includes everything we don’t know at the time of the decision.  We will come back to this issue in later posts.

Given the general direction described so far, the first step has to be defining the current state of the organization, as we want to evaluate the difference between the state with the additional decision and the state without it.

There are two main categories of information describing the current state:

  1. The current sales: the items sold, their respective quantities, prices and truly variable costs (mainly the cost of materials).   We can then calculate the Throughput (T) generated per item and the resulting total T – the flow of incoming money.
  2. The available capacity and the load generated by the current sales.
    • In order to calculate the load we need to know how much capacity, for every resource, is required for every product sold!

Then, for every new opportunity / deal / idea, we need the following categories of information (a rough sketch in code follows the list):

  1. The new sales / T to be generated by this idea, including longer term impact
  2. The impact of the new sales on the current sales – would some current sales be reduced?
  3. The updated load versus capacity – do one or more resources lack enough available capacity?
  4. When one or more resources lack capacity what special options are open?
    • Purchasing additional capacity for extra cost (how much?)
    • Reduce some sales, provided it can be practically done without tampering with other sales!
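To make the categories concrete, here is a rough sketch of how such a check could be organized.  The data layout, resource names and numbers are all illustrative assumptions, not a prescribed implementation:

```python
# Rough sketch: checking one opportunity against Throughput and capacity.
# All names and numbers are illustrative assumptions.

# Current sales: units sold and Throughput (price minus truly variable cost) per unit
current_sales = {"A": {"units": 1000, "T_per_unit": 50},
                 "B": {"units": 400,  "T_per_unit": 120}}

# Available capacity per resource, and minutes each product unit consumes
capacity = {"assembly": 60_000, "packing": 30_000}
usage = {"A": {"assembly": 30, "packing": 10},
         "B": {"assembly": 60, "packing": 25}}

def total_T(sales):
    """The flow of incoming money generated by a sales mix."""
    return sum(d["units"] * d["T_per_unit"] for d in sales.values())

def load(sales):
    """Minutes required on every resource for a given sales mix."""
    totals = {r: 0 for r in capacity}
    for product, d in sales.items():
        for resource, minutes in usage[product].items():
            totals[resource] += minutes * d["units"]
    return totals

# The opportunity: 300 more units of B, cannibalizing 100 units of A
new_sales = {"A": {"units": 900, "T_per_unit": 50},
             "B": {"units": 700, "T_per_unit": 120}}

delta_T = total_T(new_sales) - total_T(current_sales)    # +31,000

# Do one or more resources lack capacity?
for resource, required in load(new_sales).items():
    ok = required <= capacity[resource]
    print(resource, "OK" if ok else "-> buy extra capacity or reduce some sales")

delta_OE = 0   # fill in the cost of whatever extra capacity the check above demands
print("delta_P =", delta_T - delta_OE)
```

In this made-up case assembly runs out of capacity, so the positive ΔT must be weighed against the ΔOE of buying more assembly capacity, or against the T lost by giving up some sales.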

Critical questions for moving ahead:

  • Is it possible to gather all the above information?
  • How long into the future do we need to look in order to make a decision?
  • How can we handle many different opportunities for the same time frame?
  • How do we consider the impact of uncertainty?
  • What is the structured process to make a sound decision?
  • Is it too complex? If so, can we simplify it without distorting the decision process?

To allocate or not to allocate – is this the question?

An intermediate post to clarify a point

Part 1.5 in the series on using T, I and OE for key decision making

The previous post argued that the cost of capacity is neither linear nor continuous.  I did not deal with the fact that in too many cases there is no direct way to relate specific capacity consumption to specific products.  The remedy of cost accounting, in its effort to calculate the cost of a product unit, is to allocate the cost of capacity that is not directly related to a product unit based on some arbitrary parameter, like direct labor.

[Image: Camp Nou stadium in Barcelona, the highest-capacity soccer stadium in Europe]

Should we allocate the cost of the arena based on tickets or on the result of the match?

Activity-Based Costing (ABC) challenges the older methods on that point.  ABC tries hard to relate every consumption of capacity to its “cost-driver”, which could be a product unit, but also a new client or even an order.  Is anything wrong with that?

The real mistake of ABC, and of all the other cost-accounting methods, is to associate the average cost of the specific capacity consumption with those cost-drivers.  The non-linear behavior of the cost of capacity causes a huge distortion in ABC’s management information.  It gives the wrong impression that certain cost-drivers are too costly when there is actually a lot of excess capacity, while other cost-drivers look good, concealing the fact that they use capacity that is truly limited (and purchasing more of it is truly expensive), thus leading to flawed business decisions.

Of course, in order to convince organizations to stop assuming that every consumption of capacity generates a certain cost, we need to establish an alternative way to make sound decisions.  We need a good method to check whether a new opportunity/idea would improve the bottom line or not.  We would also like a good method to decide whether purchasing more capacity is profitable, or whether we are better off giving up some of the available capacity.  I promise to arrive at the solution in later posts.

Sometimes we need to allocate certain costs even when we use the TOC logic!

For instance, suppose your company has partnered with another company in leasing a whole floor of offices, because the owner of the floor refused to lease only part of it.  That space is a resource, and the total space is the limit of the available capacity of that resource.  Any agreement between you and the other company on splitting the cost of the rent – and probably also of some other capacities you share (cleaning, communication lines) – is basically arbitrary and based on some allocation of the space (the lobby and the lifts are certainly shared).
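For illustration, a naive area-based split might look like the sketch below.  The areas and rent are made-up numbers, and the choice of private area as the allocation key is exactly the arbitrary part:

```python
# Naive allocation of a shared floor's rent by private area.
# All numbers are illustrative; the allocation key itself is arbitrary.
monthly_rent = 30_000
private_area = {"our_company": 650, "partner": 350}   # square meters
shared_area = 200                                     # lobby, lifts, corridors

total_private = sum(private_area.values())
for company, area in private_area.items():
    share = area / total_private        # shared space simply follows the same key
    print(f"{company}: {share:.0%} -> {monthly_rent * share:,.0f} per month")
```

Splitting by headcount, or by revenue, would be just as defensible – which is the point: no direct calculation exists, so a good-enough allocation has to be agreed upon.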

I’m going to raise more cases where allocation is a good-enough solution, whenever direct calculations are not possible.

The Non-Linear Behavior of the Cost of Capacity

And its impact on decision making

Part 1 of a series on using T, I and OE for key decision making


Challenging widely accepted paradigms creates new opportunities

The terminology of physics does not usually use words of dramatic intensity.  However, a certain incident in the late 19th century was so embarrassing that it was called “the ultraviolet catastrophe”, and by that it caught my imagination.  The story is about the radiation emitted from a black body: the mathematical equations, according to the knowledge of that time, showed that the emitted radiation should be infinite.  Well, it was easy to see that this is NOT the case.  What eventually solved the riddle was the discovery, later understood through quantum theory, that the energy of the emitted radiation is not continuous but discrete.  As it turns out, discrete functions behave very differently from continuous functions.

There is a tendency in social-science circles to assume that the main functions describing the behavior of key variables, like capacity or the cost of capacity, are continuous.

Really???

I claim that all cost functions in reality are discrete.  This is most certainly true when we speak about the cost of capacity.

All organizations spend their overhead expenses on providing the capacity required for the business.  The usual way is to purchase a fixed amount of capacity: space for storage or offices, a machine capable of processing a certain quantity per hour, or employees who agree to work N hours every week.

The cost of providing that capacity is fixed whether you actually use all that capacity or only part of it.

This means that using 25% of the available fixed amount of capacity costs exactly the same as using 85% of it!  This is a basic non-linear behavior, and its impact on the decision of what to do with the capacity at hand is HUGE.

Once all the available capacity is used, new options for acquiring additional capacity open up.

But the principle that capacity can be purchased only in certain fixed sizes still holds.

An employee might agree to work another hour, but usually not a fraction of an hour.  So, if you need just 34 minutes of overtime, you pay for a full hour of overtime, which is also considerably more expensive than the relative cost of a regular hour.

So, when we look at the behavior of the cost of capacity we realize the following:

The initial cost is HIGH.  Then the marginal cost is zero until a certain load is reached.  Then the cost jumps by another fixed amount, and using more capacity again costs nothing until the next fixed point.

This actual behavior is quite different from the current practice of associating the average cost with any use of capacity.
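A small sketch of this staircase behavior, contrasted with the averaged cost that cost accounting assumes.  All the numbers are made up for illustration:

```python
# Step-function cost of capacity vs. the averaged 'cost per hour' fiction.
# All numbers are illustrative assumptions.
import math

REGULAR_HOURS = 160        # purchased block: one employee-month of capacity
REGULAR_COST = 4800        # fixed cost of that block, used fully or not
OVERTIME_HOUR_COST = 45    # overtime is sold only in whole hours, at a premium

def actual_cost(hours_used):
    """What providing the capacity actually costs: a staircase, not a line."""
    if hours_used <= REGULAR_HOURS:
        return REGULAR_COST                # using 25% or 85% costs the same
    overtime = math.ceil(hours_used - REGULAR_HOURS)   # 34 minutes -> a full hour
    return REGULAR_COST + overtime * OVERTIME_HOUR_COST

def averaged_cost(hours_used):
    """The cost-accounting approximation: every hour 'costs' the average rate."""
    return hours_used * (REGULAR_COST / REGULAR_HOURS)

for hours in (40, 136, 160, 160.57):
    print(f"{hours:>7} hours: actual {actual_cost(hours):>5.0f}, "
          f"averaged {averaged_cost(hours):>5.0f}")
```

At 40 hours the averaged model reports 1,200 while the real cost is already 4,800; at 160.57 hours the real cost jumps by a full overtime hour.  The average is wrong at both ends.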

This is the kernel of the TOC challenge to cost accounting!

So, the simple principle of cost accounting is invalid in our reality.  The use of the average cost of capacity has led all the way to the fiction of cost-per-unit.

Do we really need “per unit” measures to support sales decisions?

We still believe in simplicity, but reject the wrong kind of simplicity.  What could be simpler than having a way to measure the direct impact of a decision on the bottom line?

Let’s now look at another realization:

There is no hope in hell of using all the available capacity!

This is certainly in direct clash with the common paradigms.

There are three causes for being unable to use all the available capacity to generate value:

  1. TOC has demonstrated the need for protective capacity to provide good and reliable delivery performance.
  2. The market demand fluctuates at a faster pace than our ability to adjust the available capacity.
  3. Capacity can be purchased only in certain sizes, as already stated above.

What are the ramifications for decision making?

When a new market opportunity pops up we need to consider the state of the capacity usage of every resource.  When there is enough excess capacity, the usage is FREE!  When the additional load penetrates into the protective capacity, then there is a need to carefully check the cost of additional capacity, or the ramifications of giving up some existing sales.
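The resulting decision rule per resource can be sketched roughly as follows.  The 85% protective-capacity threshold is an illustrative assumption, not a prescribed TOC number:

```python
# Rough per-resource decision rule; the threshold is an illustrative assumption
PROTECTIVE_THRESHOLD = 0.85

def capacity_verdict(current_load, added_load, available):
    """Classify what an added load means for one resource."""
    new_load = current_load + added_load
    if new_load <= available * PROTECTIVE_THRESHOLD:
        return "excess capacity: the usage is free"
    if new_load <= available:
        return "penetrates protective capacity: check the risk to reliable delivery"
    return "exceeds capacity: price the extra capacity, or give up some sales"

print(capacity_verdict(400, 50, 1000))    # excess capacity: the usage is free
print(capacity_verdict(800, 100, 1000))   # penetrates protective capacity
print(capacity_verdict(950, 100, 1000))   # exceeds capacity
```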

This generic approach is very different from the existing management-accounting tools!

The next post will explain more about how to calculate the impact of an opportunity on the bottom line, without using any “per-unit” kind of measure that would force us to use averages and get a distorted answer.

The Thinking Processes (TP) and uncertainty

Have a quick look at the small cause and effect branch.  Is the logic sound?

[Figure: logic-pic2 – a small cause-and-effect branch connecting effects 1 through 4]

 Can it be that in reality effects 1-3 are valid, but effect 4 is not?

We can come up with various explanations for insufficiencies in the above logic.  For instance, if the clients are not free to make their own decisions, as in totalitarian countries, then it could be that the regime prefers something else.  Another explanation might be that the brand name of Product P1 is much less known.

The generic point is: the vast majority of the practical cause and effect connections are not 100% valid.

In other words, most good logical branches are valid only statistically, because they might be impacted by uncertain additional effects that distort the main cause-and-effect.  Actually, the uncertainty represents insufficiencies we are not aware of, or that we know about but cannot confirm whether they exist in our reality.  For all practical purposes there is no difference between uncertainty and information we are not able to get.

This recognition has ramifications.  Suppose we have a series of logical arrows:

eff1 –> eff2 –> eff3 –> eff4 –> eff5

If every arrow is 90% valid (it holds in 90% of the cases), then the long arrow from eff1 to eff5 is only about 66% valid (0.9^4 ≈ 0.66).
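The compounding is easy to verify directly, assuming the arrows can fail independently (itself a simplification):

```python
# Validity of a chain of cause-and-effect arrows, assuming independent failures
arrow_validity = 0.9
arrows = 4                      # eff1 -> eff2 -> eff3 -> eff4 -> eff5

chain_validity = arrow_validity ** arrows
print(f"{chain_validity:.2%}")  # 65.61% -- the long arrow is far weaker than any link
```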

The point is that while we should use cause-and-effect, because it is much better than ignoring it, we can never be sure we know!  The real negative branch of using the TP to outline various potential impacts is that frustrated people could blame the TP and its logic and refrain from using them in the future.  This false logic says:  if ([I behave according to the TP branch] → [Sometimes I do not get the expected effect]) then [I stop using the TP].

The way to deal with this serious and damaging negative branch is to institute the role of uncertainty in our life, and the idea that partial information is still better than no information – provided we take the limitations of partial information seriously.  We can never be sure that whatever we do will bring benefits.  However, when we use good logic, then most of the time the benefits will far exceed the total damage.

It would be even better to consider the possibility of something going wrong in every step we take.  This would guide us to check the results and re-check the logic when the result is different from what we expected.  It is always possible that there is a flaw in our logic, and in such a case we had better fix the flawed part and gain a better logical understanding of the cause-and-effect.  When we do not see any flaw in our logic, there is still room for some crazy insufficiency to mess up our life, and this is the price we pay for living with uncertainty.

The Mysterious Power of Synergy

Synergy means that a system can achieve more, sometimes much more, than the sum of its parts.  This extra power is not easily understood and thus it is difficult to manage.


It is straightforward to see the value of synergy in sports.  You can build a basketball team by bringing together great players, each excelling in one particular role, and let them play and hopefully win.  Does it ALWAYS work?

When it does, one might get the impression that the power of the team is far more than the accumulated level of each player.  This is when the mystery of synergy works.

When it does not work, there is no ‘team’, just a group of excellent players, each playing according to his own interests.  In TOC we call it local thinking, rather than holistic thinking.  I think it is quite natural for a person to think and act based on his own interests.  The only rational way to cause a person to think holistically is to make a convincing argument that the synergy does work – in other words, that the success of the whole would contribute much more to the person than whatever he can achieve by himself.

Theoretically there is a way to create such a clear win-win structure that the interests of every player are exactly the same as the holistic ones.  I understand the theory, but I admit I have not been able to construct such a network of win-wins in reality.  Still, the intuitive recognition that synergy exists in a big way could help in aligning different parts into a holistic system.

Very large organizations use their natural synergy to gain much more value.  We can recognize some of the causes of such synergy, and by that reduce its ‘mysterious’ impact.  When some of the products/services of a giant company gain excellent recognition, the other services gain recognition as well.  The stability and security radiated by large organizations is a synergy asset, and its cause is pretty clear.

However, many other causes of synergy are not all that clear, but this does not mean they do not exist.  A strange manner of speech calls ‘chemistry’ the effect where two players play with great understanding of each other and thus generate synergy.  It seems funny, to my mind, to give a ‘scientific’ name to something whose cause and effect are hard to map.  Still, in reality we see that some product mixes have more impact than others.  One needs to look at the overall characteristics of a ‘package’ to understand the advantage of one supplier over another, rather than go into the details of every product.  It is exactly like recognizing a forest rather than trees.

Project portfolio management is a managerial topic that calls for an assessment of synergy.  When we consider a new project there is a need to assess the somewhat vague impact of adding this project to the portfolio, and to predict the total impact of the whole portfolio on the organization.

As such an assessment is mainly intuitive, we need to recognize it as ‘partial information’, or basically uncertain information.  We should NOT ignore the synergy impact just because we are unable to predict its exact size.  While we recognize “never say I know”, we should not take the position of “we don’t know”, because we do know something.  We are able to carefully assess the impact of synergy as a reasonable range, and thus take the range as part of the decision-making process.

Any good strategy planning has to strive to gain synergy from all the initiatives that are integrated into one effective decisive competitive edge.  Synergy is a critical part of the creation of an ever-flourishing organization, and it requires a holistic view and a good tolerance for using ‘partial information’ to guide our decisions.

The problematic relationships between the individual and the organization – Part 2

A common belief is: many employees don’t want to make all the efforts they are required to make.

The point is not so much whether the belief is true, but whether it is self-fulfilling, meaning employees try to avoid too much work and effort because they realize they are not trusted. When you are not trusted, the objective of feeling good about what you have done evaporates into thin air.

Suppose management succeeds in creating a culture of trust. Would the employees then be willing to be loyal to the organization they work for? By ‘being loyal’ I mean doing whatever it takes to achieve more of the organizational goal.

Employees come to work because they need the money. This starting point has several ramifications.

  1. If employees think they are getting less than they deserve, then they become frustrated and hostile towards the organization, which is the opposite of loyalty.
  2. As the money is important, employees choose to stay in the organization until something better pops up. This forces them to try hard to be considered “good”, or at the very least “OK”.

Other ramifications are caused by the mere fact that employees spend a large part of their life at their workplace:

  1. Most employees prefer to “do something” while they are at the workplace, so they usually work willingly according to what is expected of them, unless they have a reason not to.
  2. Employees who have the passion to excel look for an appropriate chance.

An important observation:

It is easier for an employee to be loyal to the organization than for the organization to be loyal to the employee.

The organization always looks at the cost and compares it to the perceived value from the employee. However, as we have seen, that value is not easy to assess. It is even more difficult to assess the indirect damage of being disloyal to the employee, as many of the employees become disillusioned and even hostile in a hidden way.

From the above it seems that even when management trusts its employees and is loyal to them, there is no guarantee that all the employees will be loyal to the organization. It is enough that a specific employee believes he is underpaid for him to betray that trust. If this is the case, then the organization should actively look for signals of low motivation and disloyalty among the employees.

However, the need for making sure all employees are loyal does not necessarily mean the solution has to be the use of personal performance measurements.

What are the true needs of organizations in assessing their employees?

I can see two such needs:

  1. Identify employees that generate damage. Some might simply lack the appropriate capabilities. Others might be ‘rotten apples’ – those who are disloyal. Such people might influence others to become disloyal.
  2. Identify potential ‘stars’ – employees who can bring huge value in the future – if they are nurtured in the right way.

All the rest are good employees who bring value when management makes it possible. Is there a point in measuring performance more accurately?

If the organization maintains a culture of respect and loyalty, then the employees will do their best for the organization, because this is their work – a substantial part of their life. What the organization has to do is make sure there is a certain code of work, and when there is a signal that the code has been broken – then, and only then – those employees should be chased out.

In some cases the organization has no choice but to let people go because they cannot yield value anymore. The point is that when this happens, management needs to recognize it as its own failure! Management then has to be aware that it needs to rebuild the trust and loyalty of the employees who remain in the organization, to prevent the next disaster.