Promotions – a problematical habit with ongoing consequences

When does a business truly gain from promotions?

First let’s clarify the definition for this post:

A promotion is a stated period of time in which a certain product mix is sold at reduced prices.

Thus, reducing prices to get rid of unsold inventory is not a promotion. A promotion requires a clear date on which the prices go back to their original level.

The end-date of a promotion puts pressure on the customer to buy now and buy more.  However, that same pressure also teaches the customer to wait for promotions and to refrain from buying after the promotion.  This is what makes promotions different from price wars.

There are four reasons for using promotions:

  1. Introducing new products, where the intent is to gain as many first-time customers as possible, in the hope that they will become regular customers. Sometimes this kind of promotion makes sense.
  2. Fixing a state of lower-than-expected sales, or countering a marketing campaign of a competitor. The objective is selling high volume. This is a key cause of sliding into the vicious cycle where the vast majority of sales are made at reduced prices.
  3. Retailers and distribution channels use promotions to bring customers into the store, where they buy additional items. The problem here is that all the direct competitors use the same scheme, which vastly reduces its added value.
  4. Being forced by a distribution channel to participate in its ongoing promotions. The distribution channels are the Gorilla dictating the business rules; their suppliers need to find good-enough ways to live under that tyranny, or rebel against it.

There are four different negative branches to promotions, even before considering the relationships within the supply chain.

  • The total T (throughput) might go down, because the increase in volume does not cover the reduced T per product-unit.
  • The capacity of at least one resource is exhausted by the sudden increase in volume, causing lost sales, usually of other products sold at a premium price. Promotions cause a peak of load that often turns the weakest link into a temporary constraint; cost goes up because of overtime and quality problems, and everybody in the organization feels the waves of that peak.
  • The lower prices steal sales of other products that are sold at premium price!
  • A critical effect is the reduction of sales after the promotion.

The result of the above is a high probability that the promotion causes a loss rather than a profit, while most of the potential damage goes unmeasured. In TOC terms the loss is expressed by:

Delta(T) – Delta(OE) < 0.

So, the first mission when a proposal for a promotion is on the table is to assess the true impact of the negative effects relative to the positive effect of selling a higher volume of items. The biggest mistake is to consider revenues instead of throughput.  Reducing prices reduces T-per-unit much more than it reduces revenue-per-unit.  Then the extra costs have to be included in the analysis.

The main difficulty in calculating the net impact of a promotion on the bottom line is assessing the demand during the promotion, and also after it!

Building two what-if scenarios, the reasonable pessimistic and the reasonable optimistic, is a tool for checking the full impact of the negatives and guiding the required actions.
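To make the two-scenario check concrete, here is a minimal sketch in Python. All the numbers (baseline volume, prices, truly-variable cost per unit, extra OE) are invented for illustration; the function simply computes Delta(T) minus Delta(OE) for one scenario:

```python
# Minimal sketch of a pessimistic/optimistic what-if check for a promotion.
# All numbers below are illustrative assumptions, not data from the post.

def promotion_delta(promo_units, promo_price, unit_cost, extra_oe,
                    base_units, base_price):
    """Delta(T) - Delta(OE) of running the promotion vs. not running it."""
    delta_t = (promo_units * (promo_price - unit_cost)
               - base_units * (base_price - unit_cost))
    return delta_t - extra_oe

# Baseline: 1,000 units at $10, truly-variable cost $5 per unit.
# Promotion: a 20% price cut plus $1,500 extra OE (overtime, logistics).
pessimistic = promotion_delta(1400, 8.0, 5.0, 1500, 1000, 10.0)
optimistic = promotion_delta(2200, 8.0, 5.0, 1500, 1000, 10.0)
print(pessimistic, optimistic)  # -2300.0 100.0
```

Note how even the 40% volume lift of the pessimistic scenario loses money, because the price cut reduced T-per-unit from $5 to $3; only the 120% lift of the optimistic scenario barely covers the extra OE.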

Promotions are typical moves that impact both sales and operations. The end-date of a promotion impacts the behavior of customers.  Operations are hit by a huge peak, creating waves of shifting priorities that take a long time to calm down.  The crazy chase after sales volume, expressed either as revenue or as quantities sold, blocks the view of the true economic impact.

However, being aware of the damage of most promotions does not automatically resolve the two basic conflicts that involve promotions.

One conflict arises when a competitor launches a promotion, which would reduce our sales. Should we respond with our own promotion? Preserving market share is a flawed argument, because the worthy objectives are high and stable profits, and it is not obvious that preserving market share is the means to achieve them.  But does that mean ignoring the reduced prices of a competitor is the right action? When you don’t have truly superior products directed at specific market segments, you pay the price of being dependent on the rationality of your competitors.

The other conflict is whether to accept the demand of your distributors to run a promotion, or to insist on your own interests, which might cause them to give up your products. Again, this can happen only when your products, as viewed by the end customers, are practically the same as your competitors’.

When you have to run a promotion, how should your operations behave during it?

The high demand starts on the first day of the promotion. The resulting demand is highly uncertain, meaning the range of the potential demand is quite wide.  Only after the first day or two is it possible to assess a narrower range.  The length of the promotion is critical to the ability to respond with fast replenishments: the shorter the promotion, the more stock is required at its start, both at the central warehouse and in the stores.

A short promotion poses a dilemma regarding the stock buffers. The optimistic forecast leads to very high buffers, but the actual demand might follow the pessimistic forecast.  Sizing the buffers according to the pessimistic forecast might cause shortages.  Starting with low buffers and increasing them during the promotion requires a considerable amount of excess capacity.

My recommendation is to prepare enough stock in the central location to cover the pessimistic forecast for the whole promotion period!!!

Every store has to hold enough stock for at least two days, based on the optimistic forecast for that store. After the first day or two of the promotion, re-adjusting all the buffers in the stores can be done more sensibly.  The central warehouse, starting with overstock, should also recalculate the target levels after the first two days, and replenish only when the inventory goes below the target.
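The stocking rules above can be sketched as three small helpers; all the forecast numbers in the example are hypothetical:

```python
# Sketch of the stocking rules described above; forecasts are hypothetical.

def central_stock(pessimistic_daily, promo_days):
    """Central warehouse: cover the pessimistic forecast for the whole promotion."""
    return pessimistic_daily * promo_days

def store_initial_stock(optimistic_daily, cover_days=2):
    """Each store: hold at least two days of its optimistic forecast."""
    return optimistic_daily * cover_days

def replenish_qty(on_hand, target_level):
    """After target levels are re-set, replenish only when below the target."""
    return max(0, target_level - on_hand)

print(central_stock(300, 7))    # 2100 units for a 7-day promotion
print(store_initial_stock(50))  # 100 units to open the promotion
print(replenish_qty(80, 100))   # 20 units shipped; 0 if on-hand >= target
```

The point of the asymmetry is exactly the one in the text: the central location protects against the pessimistic whole-period demand, while each store starts with a short optimistic cover and is corrected by fast replenishment once real demand is observed.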

Like in handling seasonality, it is imperative to reduce all the buffers to their original size sometime before the end of the promotion.

The damage of promotions is high, and its most critical element is the fixation of an end-date. This is a kind of dynamic pricing, a topic I mentioned in a post about yield management, which pushes customers to change their natural behavior.

The true remedy is to be able to deliver unique value to many loyal customers, creating a decisive competitive edge (DCE). Is it always possible?  Unless you truly think hard about the possibilities you cannot claim it is impossible.

Published by

Eli Schragenheim

My love for challenges makes my life interesting. I'm concerned when I see organizations ignore uncertainty and I cannot understand people blindly following their leader.

6 thoughts on “Promotions – a problematical habit with ongoing consequences”

  1. Thank you Eli for sharing your ideas and thoughts (interesting as always) about this controversial subject.
    Maybe I didn’t digest it well enough, but I am still missing the full way to assess delta throughput and delta OE.
    I have elements that are extremely hard to assess: how many extra loyal consumers are we going to get from this action? How many extra sales of premium products are we going to get? How much extra OE will we have in order to support the promotion?
    Maybe it’s my lack of experience, but my gut feeling is that for the same promotion event I could create one model showing that we are going to gain from it and another showing that we are going to lose from it (using different assumptions about those hard-to-assess elements described above).
    I would appreciate your feedback on the above.
    Pitshon


  2. Dear Pitshon, I suggest you call me to schedule a meeting, and I can show you not only how to create REASONABLE pessimistic and optimistic scenarios that hold water, but also how to use the supporting software to truly go through suggested variations to the general idea of a promotion, making sure we do not suffer too much damage and instead make a profit. I have nice realistic examples to demonstrate the process.

    What is the alternative?

    Suppose one model (what I call optimistic) predicts a $50K profit, and another, pessimistic one predicts a loss of $35K. If this was done based on the best knowledge we have, it means the real impact would be somewhere within that range. Now it is up to management to make a decision, but at least that decision is a knowledgeable one, not based on empty hopes or empty conservatism.
    Would you be surprised that in too many cases BOTH the pessimistic and the optimistic scenarios show a loss?
    Would you be surprised that some “crazy ideas” could be found for which BOTH scenarios lead to high net profit? Ideas that are now thrown into the trash bin because there is no easy, fast and reliable process to truly check all the ramifications?

    I claim that technology today makes it possible to use Throughput Economics to support such decisions and many others in a friendly and fast way. I’ll be glad to explain and demo the process to everyone who is interested through GoToMeeting session. Just write to me at elischragnheim@gmail.com

    If this is not an important topic, then I don’t know what is.


    1. Dear Eli,

      I tried to send you an email but I got “fail” notification for some reason… anyway, this is what I wanted to write:

      Indeed it is an important issue, and we see it growing a lot in the retail industry. In the past they used to have 2-3 promotions per year, and today it is more like 10-15 promotions. Having said that, these are not exactly the promotions that you described, but I believe there are lines of similarity between them.

      Once I know what next week looks like (we have a big retail summit in GC), I will try to schedule a meeting with you about this issue. I think face-to-face is better than GoToMeeting, but if you prefer GoToMeeting we can do that as well.

      Thanks again,

      Pitshon


  3. Hi Eli,

    Your discussions are always worth reading and thought-provoking, as they are intended to be.

    I don’t often comment, but this discussion made me think back to the old days of the TP, when a lot of development was taking place and “thinking” was the light on the path of TOC.

    Here is what you made me think about:

    TOC is essentially a science based approach to understanding ‘reality’,
    so how do we judge a scientific approach? What is a good Scientific Model?

    I personally like this description, based on Stephen Hawking’s “The Grand Design”.

    A model is a good scientific model if it:
    – Acts to fulfill its purpose – consistently helps describe a reality or phenomena.

    – Is elegant in its simplicity (aesthetically pleasing, simple, neat, tidy and beautiful) and symmetrical, without “fudge factors” to make its predictions work, and with no arbitrary, or only very few adjustable, assumptions.

    – Repeatable and consistently agrees with and explains the detail about all existing observations.

    – Makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out (and contains the ability to self destruct if predictions are proven false).

    —————————————————————————————————————————————————

    Back to the subject of Probability High, Low approach to assumptions:

    If something is in the realm of human subjective probability assessment – some might consider it closer to an axiom* , rather than a scientific model.

    From the above description of a good scientific model (“without arbitrary or very few adjustable assumptions”), assumptions of probability compounded with assumptions of performance could potentially be the genesis of sophisticated complexity. And a slippery slope.

    The moment “selecting a subjective assumption” is introduced, it could be a red flag that fudge factors are being introduced to make the model work. Subjective also means the start of disputable discussions.

    How do we create “indisputable, Effect-Cause-Effect solid logic” and high probability of an expected outcome?

    From the TP tools the “Categories of Legitimate Reservations” approach might provide a better direction for exploration and answer.

    What does this mean in practice? Could a better answer result from using combination of “positive” and “negative branches” approach to test the assumptions?

    Could one simply place the assumption at the base of a mini Future Reality Tree and use logical E-C-E connections to reach the expected Desirable Effects? Then check for any positive loops reinforced by injections, and then check for Negative Branches and trim them with additional injections.

    The probability of a predictable outcome is now significantly higher without the potential ambiguity of a high, low scenarios analysis.

    There is a second step, now that we have thoroughly “tested” and better understood the base assumptions.

    The second step is to actively track the “evolvement” / unfolding of your assumptions in reality, as key indicators “disprove or falsify your model if they are not borne out (and contain the ability to self-destruct if predictions are proven false)”.

    Best Regards – John.

    *(Axiom: proposition on which an abstractly defined structure is based)


    1. John, this is a most interesting challenge. I will think more about it and probably write a whole post on it.

      My first reactions are:

      1. TOC strives to use scientific methods, but it is far (yet???) from being a science. There are several arbitrary and adjustable assumptions in TOC, including the definition of “constraint”, the meaning of “exploitation”, and certainly “subordination” is not a scientifically robust definition. Certainly dividing a buffer into three is an arbitrary assumption, and to my mind not too bad a one.
      2. Unlike science that waits for a good enough valid hypothesis to include it in the current knowledge, in management the managers need to decide NOW. So, if an assumption looks subjectively valid and there is no big harm if we follow the assumption and it proves wrong, then let’s use it NOW and check it again in the future.
      3. I wrote a post on the limitations of the TP. A cause-and-effect tree, even one that passed several people’s checks according to the legitimate reservations, is still a very subjective statement. We do have robust tools to check logical connections, but in reality there are always more insufficiency reservations. We used to call them “oxygen” in order to justify ignoring them, but this is a very subjective approach.
      4. The TP does not go at all into quantifications. However, in many cases there is no way to ignore it.
      5. Just a correction: I’m not speaking about high and low probability, but high and low potential results of a decision, action or an initiative. I do assume that given the subjective assessment of several people we have a range with relatively high probability that the actual result would fall into it.

      To demonstrate the two last points, let me give a specific example:

      The sales of product P1 are below expectations. It is a good product, which went through a TOC analysis according to the 6 questions, and the offer to the market seems irresistible, yet results are below expectations. The suggestion is to reduce the price by 20%. The demand would, almost certainly, go up. The question is: by how much? Considering that the current T/Revenue is 50%, this ratio would drop to only 37.5%, requiring considerably more sales to get the same level of T. Can you trim the negative branch? How do you compare an FRT with NBRs of this move to another one, say launching another new product, assuming you do have the capacity and focus to do both? The TP does not know how to choose one FRT over another!
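      The arithmetic of that 20% price cut can be verified in a few lines; the price and cost figures are illustrative, chosen so that T/Revenue starts at 50%:

```python
# Illustrative check of the price-cut arithmetic: T/Revenue starts at 50%.
price, cost = 100.0, 50.0    # T per unit = 50, so T/Revenue = 50%
new_price = price * 0.8      # a 20% price reduction
t_old = price - cost         # 50
t_new = new_price - cost     # 30
print(t_new / new_price)     # 0.375 -> T/Revenue drops to 37.5%
print(t_old / t_new)         # ~1.67 -> ~67% more units needed for the same total T
```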

      To be continued!

      Thanks, John, I’m ready to continue the discussion.

