# The Non-Linear Behavior of the Cost of Capacity

And its impact on decision making

Part 1 of a series on using T, I and OE for key decision making

Challenging widely accepted paradigms creates new opportunities

The terminology of physics does not usually employ dramatic words.  However, a certain failure of theory in the late 19th century was so embarrassing that it was named “the ultraviolet catastrophe,” and that caught my imagination.  The story concerns the radiation emitted from a black body: the mathematical equations, according to the knowledge of that time, predicted that the emitted radiation should be infinite.  Well, it was easy to see that this is NOT the case.  What eventually solved the riddle, and gave rise to Quantum Theory, was the discovery that energy is emitted not continuously but in discrete packets (quanta).  As it turns out, discrete functions behave very differently from continuous functions.

There is a tendency in the social sciences to assume that the key functions describing the behavior of important variables, like capacity or the cost of capacity, are continuous.

Really???

I claim that all cost functions in reality are discrete.  This is most certainly true when we speak about the cost of capacity.

All organizations spend money, recorded as overhead, to provide the capacity the business requires.  The usual way is to purchase a fixed amount of capacity: space for storage or offices, a machine capable of processing a certain quantity per hour, or employees who agree to work N hours every week.

The cost of providing that capacity is fixed whether you actually use all that capacity or only part of it.

This means that using 25% of the available fixed amount of capacity, or using 85% of that quantity, costs exactly the same!  This is a basic non-linear behavior, and its impact on the decision of what to do with the capacity at hand is HUGE.

Once all the available capacity is used, new options for acquiring additional capacity open up.

But the principle that capacity can be purchased only in certain fixed sizes still holds.

An employee might agree to work another hour, but usually not part of an hour.  So, if you need just 34 minutes of overtime, the cost is a full hour of overtime, which is also considerably more expensive than the relative cost of a regular hour.
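To make the arithmetic concrete, here is a minimal sketch of that overtime rounding.  The rates (30 for a regular hour, 45 for an overtime hour) are hypothetical assumptions of mine, not figures from the post:

```python
import math

def overtime_cost(minutes_needed, overtime_rate_per_hour):
    """Overtime is purchased in whole hours: needing any fraction
    of an hour means paying for the full hour."""
    hours_billed = math.ceil(minutes_needed / 60)
    return hours_billed * overtime_rate_per_hour

# Hypothetical rates: a regular hour costs 30, overtime is billed at 45.
regular_rate, overtime_rate = 30.0, 45.0

# 34 minutes of extra work still costs one full overtime hour:
cost = overtime_cost(34, overtime_rate)
print(cost)              # 45.0
print(cost / (34 / 60))  # effective hourly rate ~79.4, far above 30
```

The effective cost per minute actually worked is thus much higher than the average rate a cost-per-unit calculation would assign.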

So, when we look at the cost of capacity, we observe the following behavior:

The initial cost is HIGH.  Then the marginal cost is zero until a certain load is reached, at which point the cost jumps by another fixed amount.  Beyond that, using more capacity again costs zero until the next fixed point.
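This step-like behavior can be sketched as a simple function.  The block size and block cost below are illustrative assumptions only:

```python
import math

def cost_of_capacity(load, block_size=100, cost_per_block=1000.0):
    """Total cost when capacity can only be bought in fixed blocks:
    even zero usage costs one block (the initial fixed commitment),
    and the cost jumps by a fixed amount each time a block is exhausted."""
    if load <= 0:
        return cost_per_block
    blocks = math.ceil(load / block_size)
    return blocks * cost_per_block

# Using 25% or 85% of the first block costs exactly the same:
print(cost_of_capacity(25))   # 1000.0
print(cost_of_capacity(85))   # 1000.0
# Crossing the block boundary causes a jump:
print(cost_of_capacity(101))  # 2000.0
```

Between the jumps the marginal cost of one more unit of load is exactly zero, which is the opposite of what an average cost-per-unit suggests.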

This actual behavior is quite different from the current practice of associating the average cost with any use of capacity.

This is the kernel of the TOC challenge to cost accounting!

So, the simple principle of cost accounting is invalid in reality.  The use of an average cost of capacity has led all the way to the fiction of cost-per-unit.

Do we really need “per unit” measures to support sales decisions?

We still believe in simplicity, but reject the wrong simplicity.  What could be simpler than having a way to measure the direct impact of a decision on the bottom line?

Let’s now look at another realization:

There is no hope in hell of using all the available capacity!

This is certainly in direct clash with the common paradigms.

There are three causes for being unable to use all the available capacity to generate value:

1. TOC has demonstrated the need for protective capacity to provide good and reliable delivery performance.
2. The market demand fluctuates at a faster pace than our ability to adjust the available capacity.
3. Capacity is purchased only in certain fixed sizes, as already stated above.

What are the ramifications for decision making?

When a new market opportunity pops up, we need to consider the state of capacity usage of every resource.  When there is enough excess capacity, the usage is FREE!  When the additional load penetrates into the protective capacity, then we need to carefully check the cost of additional capacity or the ramifications of giving up some existing sales.
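Under the assumptions above, that check for a single resource might be sketched as follows.  The function name, parameters, and numbers are mine, for illustration only, and are not a standard management accounting tool:

```python
def opportunity_capacity_cost(extra_load, spare_capacity,
                              protective_capacity, extra_block_cost):
    """Decision sketch for one resource.
    spare_capacity: capacity currently unused on this resource.
    protective_capacity: the part of the spare that must stay free
    to keep delivery reliable (the TOC capacity buffer)."""
    truly_free = spare_capacity - protective_capacity
    if extra_load <= truly_free:
        return 0.0  # the opportunity fits in the excess capacity: FREE
    # The load penetrates the protective capacity: the real cost is
    # another fixed block of capacity (or the loss of existing sales).
    return extra_block_cost

# 40 units spare, 15 reserved as protection, a new block costs 1000:
print(opportunity_capacity_cost(10, 40, 15, 1000.0))  # 0.0
print(opportunity_capacity_cost(30, 40, 15, 1000.0))  # 1000.0
```

The point of the sketch is the discontinuity: the same opportunity costs nothing or a full block, depending on where the current load sits, which no single average cost-per-unit can express.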

This is a very different generic approach from the existing management accounting tools!

The next post will explain how to calculate the impact of an opportunity on the bottom line without using any “per-unit” measure that would force us to use averages and get a distorted answer.

### Eli Schragenheim

My love for challenges makes my life interesting. I'm concerned when I see organizations ignore uncertainty and I cannot understand people blindly following their leader.

## 9 thoughts on “The Non-Linear Behavior of the Cost of Capacity”

1. Eli,

I think it could be said that cost accounting does not assume continuous functions. It seems that it assumes a single-point function: full capacity utilization. Cost accounting is based on the assumption that everything that is produced is sold and that there is no additional capacity (again, full capacity utilization).

On the other hand, the need for protective capacity (a capacity buffer) was clearly demonstrated by Kingman’s equation (the VUT equation) as early as 1961, and extensively treated and discussed by Factory Physics.

The combination of Little’s Law and Kingman’s equation revealed the intimate relationship between the 3 types of buffers: stock, time and capacity.

Anyone interested can check:


2. David, I do claim that cost accounting assumes capacity is continuous. If the cost of producing 50 units a day is N, then cost accounting would claim that the cost of producing 48 units is 0.96N and the cost of 51 units is 1.02N. In other words, the cost is linear, certainly continuous, and the question of whether there is enough available capacity is not asked.

Almost everything Goldratt talked about was raised before him, but, to my knowledge, no one else had united it all into a global approach.

By the way, all the mathematical functions in the Factory Physics article are continuous. They believe in optimized solutions where the variability, and thus all the costs, are known – so the holy balance (optimization) is sought.

What I try to point out is the existence of singular points, a term used in physics, where functions go through a significant change – for instance, when no available capacity is left. Then, either the market demand is not fully satisfied, or options of purchasing capacity in a hurry, like outsourcing, open up.


1. Eli, you are right. It is assumed linear and continuous. Total agreement on the fact that this is absolutely wrong. It is still my opinion that the assumption that everything that is produced is sold is far more devastating.

I brought up the issue of the VUT equation since you wrote that TOC has demonstrated the need for a capacity buffer. Maybe it is my ignorance, but I have only heard that TOC claims that. Absolutely true, no question about it. The only demonstration I have seen (and that I use) is given by the Kingman equation.

We will leave the Factory Physics debate out.


3. Loved the article, Eli. I completely agree. I’ll email you a 3-slide PowerPoint I made to share some ideas about pricing with some of the people at my companies. (Anybody else, feel free to request it from me.) It exposes another problem, which is assuming that there are fewer variables affecting a non-continuous function than there actually are. My PPTX shows just one extra variable, which is an aggregation of several but leaves others out.


1. Rocco Surace says:

Henry – would love to see your slides – thanx


4. Thank you Eli –

Very insightful article. There is always something to learn from you and many hours of thinking to do.

Henry – I would very much appreciate a copy of your 3 slides, thank you.

Kind regards to all
Andrew
