Predicting Competitors

Predicting Competitors: Or, They Did What?, by Mark Chussil

This essay was published as “Predicting Competitors: Game Theory in Pricing” in the Journal of Professional Pricing, First Quarter 2010 (www.pricingsociety.com).

I wrote and you may have read an essay called Predictable Competitors. In that essay we explored the assumption of predictability and the easy-to-fall-into traps of using competitors’ previous behavior to predict their future behavior. We also discussed how to avoid those traps and, in so doing, how to open up promising opportunities to make better strategy decisions. We did it in the context of pricing.

Here we’ll talk less about illusions of predictability and more about delusions of predicting. We’ll do so in the context of a pricing tournament in which over 250 able strategists participated and for which I ran millions of simulations. The pricing tournament was a kind of massive business war game using humans’ strategies and a computer’s calculations.

Unless you already know what I’m going to say — if you think you can predict what I’m going to say, write it down so you can check later — you will have two surprises by the time this essay is over.

The case

Here’s the situation I presented to those able strategists.

You are a pricing strategist for a company with businesses in three industries. You will develop pricing strategies for each of those three businesses, covering 12 quarters (three years). In each industry your business has two competitors, and in each industry your business and your competitors’ businesses start from identical positions. You define success: you decide how much you care about profitability and market share.

I proceeded to describe the three industries in some detail. Those details included these and more:

  • The Ailing industry was shrinking, had somewhat price-sensitive customers, and was capital intensive.
  • The Mature industry was growing slowly, had relatively price-insensitive customers, and was labor intensive.
  • The Fast Growth industry was growing rapidly, had price-sensitive customers, and was a bit on the capital-intensive side.

Because the businesses in each industry began from identical positions, everyone had an equal opportunity to win. The only thing that would determine who won was the quality of their pricing-strategy decisions.

Participants selected a pricing move for Q1 (the first of the 12 quarters), a pricing strategy for Q2-4, another pricing strategy for Q5-8 (year 2), and a third pricing strategy for Q9-12 (year 3). The three multi-quarter pricing strategies could be the same or different, in any combination. The Q1 pricing move would be cut, hold, or raise (see picture); the three subsequent pricing strategies would be selected from a list of strategy options. Just as in real life, participants had to select their strategies without knowing what their competitors would do.

Q1 pricing decision
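If it helps to picture the shape of a submission, here is a minimal sketch of one tournament entry as a data structure. The field and type names are my own hypothetical labels, not the tournament's; participants filled in a decision form, not code.

```python
# A hypothetical sketch of one strategist's entry; names are illustrative, not the tournament's.
from dataclasses import dataclass
from typing import Literal

Q1Move = Literal["cut", "hold", "raise"]   # the only options for the first-quarter move
Strategy = str                             # chosen from a list of strategy options (e.g., aggressive, reactive)

@dataclass
class IndustryPlan:
    q1_move: Q1Move           # single pricing move for Q1
    year1_strategy: Strategy  # covers Q2-Q4
    year2_strategy: Strategy  # covers Q5-Q8
    year3_strategy: Strategy  # covers Q9-Q12
    share_weight: float       # how much this strategist cares about share vs. profit (0 = all profit, 1 = all share)

@dataclass
class TournamentEntry:
    ailing: IndustryPlan
    mature: IndustryPlan
    fast_growth: IndustryPlan
```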

(I know what you’re thinking. Rest assured, there are analytic nuances, technological marvels, and good answers for your good questions, none of which will we go into here. What we’re about to cover doesn’t depend on those nuances, marvels, and answered questions.)

What would you do?

Let’s focus on the first move, the one affecting only Q1, where in each industry (Ailing, Mature, Fast Growth) you could cut, hold, or raise your price.

What would you do in each industry?

Write down your answer, or, if you’re telling yourself you’ll remember your answers, record them legibly in mental ink.

Of course I haven’t given you as much information as I gave the tournament participants. Still, though, you probably have some idea of what you’d do, something along the lines of “in a declining market it is best to _____ prices” or “I’d calculate the effect on profitability of _____ my prices and then decide.”

Second question. I didn’t directly ask this one in the tournament but it’s relevant for our discussion here.

What do you think your competitors (i.e., the other strategists participating in the tournament) will do for the two competing businesses you’ll face in each industry?

Record those answers too.

Notice how that question explicitly focuses your attention on competitors. An approach such as “in a declining market it is best to _____ prices” addresses competitors only obliquely: you’d probably consider the role of competition in a declining market, but perhaps not think about competitors’ specific actions.

Now this third question:

Did you think your competitors would do something different from what you chose to do?

In my experience strategists tend to assume competitors will behave as they wish them to or as they have behaved in the past. Who knows, they might even assume competitors are not as clever, quick, or attentive as they are. Hence that third question. But why would you think your competitors will do something different from you in scenarios where we have explicitly said they start from positions identical to your own?

Having now encountered that question, would you change your answer to the first question, the one about what you would do in each industry?

Our first surprise

Unless you correctly predicted me (did you?) and so there was no surprise, we’ve completed our first surprise: the realization that we unconsciously make assumptions we wouldn’t make deliberately. (Remember, you heard it first here.) Further evidence appears as the subject of our second surprise.

Our second surprise

If the right pricing strategy were obvious, we would expect our able strategists to be pretty close to unanimous in their strategy selections for the tournament. They were not. In other words: surprise, we’re all over the map on how to kick off an effective pricing strategy.

Q1 decisions chart 1


The chart above shows the percentage of strategists who chose to cut, hold, or raise price in their Q1 pricing decision for each industry. Even the most-popular choice — hold price in Q1 in the Mature industry — was preferred by only 57% of the strategists. No move even got a majority in the Ailing and Fast Growth industries. In the Ailing industry, roughly equal percentages of strategists thought it would be best to cut or to raise their prices!

I mentioned earlier that the strategists indicated their performance objectives: market share, profits, or any combination. Perhaps if we control for their objectives we’ll see something closer to consensus. Here’s as close as we get to consensus:

Q1 decisions chart 2 (Mature)

Sixty-seven percent of the strategists who wanted a mix of share and profit in the Mature industry chose to hold their prices in Q1. No other pricing decision, in any of the industries, got anywhere near that level. For instance, here’s the Ailing industry:

Q1 decisions chart 3 (Ailing)

The highest we see is a tie: 49% in favor of holding prices in Q1 to achieve a mix of share and profit, and 49% who’d cut price to gain share.

We do see an effect we might expect. The strategists who preferred market share as their performance objective were more likely to cut price than to raise it, and those who sought profit were more likely to raise price than to cut. Even so, those effects seem muted: raising or cutting prices didn’t win a clear majority of strategists. (That changed only when we looked at the most-extreme of the share-seekers, and we’d have to get really extreme — the 5 or 10 most share-happy strategists, out of over 250 — to approach a consensus.)

(Sidebar. Since objectives have a demonstrable effect on pricing decisions, we might argue that predicting competitors’ pricing moves translates, at least in part, to understanding their objectives. We would come to the same conclusions and the same surprises, though, because the strategists were far from unanimous about objectives too. There were many in the Ailing industry, for example, who wanted growth, and many in Fast Growth who wanted profits. We’ll talk more about objectives presently. End of sidebar.)

Let’s flip the numbers around a bit. What are the odds that you would be wrong if you predicted your competitors would make the most-popular move? At best, you’d have a 33% chance of being wrong, if you were in the Mature industry and knew your competitors wanted some share and some profit. At worst, you’d have a 63% chance of being wrong for profit-oriented competitors in Fast Growth. (Look at the chart below. The odds of being wrong equal 100% minus the percentage choosing the most-popular move, which was 37% for “hold.”) Not good odds.

Q1 decisions chart 4 (Fast Growth)
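If you want to check that arithmetic, here is a minimal sketch. Only the two percentages cited above (67% for “hold” among mixed-objective strategists in the Mature industry, 37% for “hold” among profit-seekers in Fast Growth) come from the tournament; the function name is mine.

```python
# A minimal sketch of the "chance of being wrong" arithmetic described above.

def chance_of_wrong_prediction(most_popular_pct: float) -> float:
    """If you predict a competitor will make the most-popular move,
    your chance of being wrong is 100% minus that move's share of strategists."""
    return 100.0 - most_popular_pct

# Mature industry, strategists seeking a mix of share and profit: 67% chose "hold".
print(chance_of_wrong_prediction(67))  # 33.0 -- the best case in the essay

# Fast Growth industry, profit-oriented strategists: the most-popular move ("hold") got only 37%.
print(chance_of_wrong_prediction(37))  # 63.0 -- the worst case in the essay
```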

So far we’ve focused on one quarter’s pricing decisions. In the tournament our able strategists could choose from a longer list of strategies that ranged from aggressive to reactive, cooperative to confrontational, tend-to-raise to tend-to-cut, and so on, for their moves after Q1. There was nothing approaching consensus or even popularity in those decisions.

Why might real life feel different?

Here we’ve seen that hundreds of strategists are far from agreement about which pricing moves would work and, therefore, why it would be difficult to predict what any of them would do. So why might it seem that competitors are predictable in real life?

Here are a few ideas. Although I don’t have data to prove or disprove them, they are consistent with my experience working with thousands of strategists around the world, and consistent with the way that competitive-strategy tools think. (See Further Reading, below.)

  • Clean slate. In the pricing tournament, the strategists started with clean slates: no history, no politics. No one could rely on, or had to defend, previous decisions. In real life, that’s not the case.
  • Safety. In the tournament, the worst that could happen was that a person wouldn’t do terribly well in a simulation. Big deal. In real life, a person could lose his or her job for a change that seems to backfire. There’s perceived safety in consistency: don’t blame me, this strategy has worked well for [fill in suitable time span and/or credible other people]. In the tournament, no strategy has an inside track.
  • Clarity. In the tournament, all the facts were laid out and the decisions were relatively simple. In real life, there’s more complexity and ambiguity, which might make strategists wary of upsetting a precariously balanced system.
  • No tradition. In the tournament, strategists were free to define success according to their own preferred combinations of market share and profit, unburdened by “how we do things around here.” In real life, different companies may assign similar missions to their businesses (e.g., “fly full” in airlines). We’ve seen that definitions of success — objectives — influence pricing decisions. That effect is doubtless magnified by real-life industries’ oral traditions of what it takes to achieve objectives.

In real life, competitors often emulate others’ moves. That happened in the tournament too, in various ways and for various reasons. However, that affected pricing after Q1, and here we focus mostly on the Q1 decisions. The phenomena after Q1 are fascinating but this essay may already be long enough to try your patience, and certainly mine.

What our surprises mean

Strategists face the problem of predicting competitors. We have just seen why doing so may be harder than we might have thought. Our first surprise suggests that we don’t ask questions that might help us predict competitors (more on that quite soon). Our second surprise suggests that our competitors may not be so easy to predict, unless the slate isn’t clean, jobs aren’t safe, issues aren’t clear, and tradition is binding.

The predicting problem has two elements, carbon and silicon, a.k.a. humans and software.

On the human side, we’ve seen that we are prone to make optimistic, or at least unexamined, assumptions about what competitors will do. We know we assume, and we work in good faith not to do so. Take, for instance, SWOT (strengths, weaknesses, opportunities, and threats) analysis. There we endeavor to give equal time to our competitors so we don’t get too convinced of our infallibility and invulnerability.

But let us look at the SW of SWOT. Its strength is that it is easy, fast, portable, and potentially insightful. (I say “potentially” because whether we get insight depends as much on what we receive as on what it transmits.) Its weakness is… well, let’s illustrate by comparing SWOT to business war games. In a business war game you walk in your competitors’ shoes. In SWOT analysis you merely look at them.

Ways to do better:

  • Use competitive intelligence to learn about your competitors’ objectives. A change in objectives may well upset any pricing (or other) “pattern” you may have observed.
  • Also use CI to learn about new management. Our able strategists have demonstrated that smart people differ in what they think will work. A change in management may foretell a change in strategy. That’s especially true because “now under new management” rarely means “still under previous thinking.”
  • Competitive dynamics resemble chess more than accounting or trend lines. Practice your game before the big real-life match. Business war games let you do that. In my experience, surprises like those we’ve explored here are the rule, not the exception, in business war games. The good news is, it’s a lot cheaper to get surprised during practice.
  • During strategy debates, ask: if you were our competitors, how would you take advantage of our move? Ask what could go wrong. Ask what we are assuming, and whether we believe what we are assuming.

Then there’s the software side. Our software thinks like us. (Who else would it think like?) We tell it how to think. We tell it that profit equals revenue minus costs. We tell it demand will change X% for a Y% change in price. We tell it how to combine those thoughts, and others, to figure out the bottom-line effects of price changes.

Since our thinking includes assumptions, so does the thinking of the software that calculates on our behalf. Some of the able strategists, participating in the tournament at a pricing conference I addressed, actually wrote down their spreadsheet-style calculations on their strategy-decision forms: at this price, with these fixed and variable costs, here’s how much I’ll make. If such people were in their offices, it’s likely they would run those calculations in Excel to guide their decisions. Those people chose a paradigm for how to decide before deciding on pricing strategies. That paradigm, like all others, has its paradigm-specific assumptions. In that case, the paradigm assumed competitors were simply irrelevant.
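Here is a minimal sketch of that spreadsheet-style paradigm, with made-up numbers: a simple elasticity-driven profit projection of the kind those strategists wrote on their forms. Notice what it does not contain: any term for what competitors might do.

```python
# A hypothetical sketch of the spreadsheet-style paradigm described above; all numbers are made up.
# It encodes "profit equals revenue minus costs" and "demand changes X% for a Y% change in price"
# -- and nothing about competitors. That omission is the paradigm's hidden assumption.

def projected_profit(price_change_pct: float,
                     base_price: float = 100.0,
                     base_units: float = 10_000,
                     elasticity: float = -1.5,   # % change in demand per 1% change in price
                     variable_cost: float = 60.0,
                     fixed_costs: float = 250_000) -> float:
    new_price = base_price * (1 + price_change_pct / 100)
    new_units = base_units * (1 + elasticity * price_change_pct / 100)
    revenue = new_price * new_units
    costs = fixed_costs + variable_cost * new_units
    return revenue - costs

# "At this price, with these fixed and variable costs, here's how much I'll make."
# No competitor appears anywhere in the formula.
for change in (-5, 0, 5):
    print(f"{change:+d}% price change -> projected profit {projected_profit(change):,.0f}")
```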

Ways to do better:

  • Ask your tools what assumptions they make. They’re not talking? Okay, ask the tool-designers or -wielders what assumptions their tools make. Ask especially how competitors’ moves would be taken into account.
  • Ask about competitive dynamics. What gets held constant over time, what doesn’t.
  • Ask about what-if. If there’s anything we should take away from the Q1 pricing decisions we explored, let alone the similar variation in the other pricing decisions that we didn’t explore, it’s that we need to test the very real possibility that competitors will move in a surprising direction. (Subtle point: they may even want to move in a different direction, and may be watching us before they commit.)
  • Worry more about the what-ifs than about precision. Who cares if the 63% chance of being wrong for the profit-oriented competitors in the Fast Growth industry should really be 62% or 64%? It’s far more important to explore your chess opponent’s options than to measure precisely where each piece is in its square.

Bonus surprise

Did you predict there’d be only two surprises since that’s what I said at the beginning? Oh my poor student.

There’s one more surprise concerning the able strategists. We noted the variation in their pricing decisions. Every one of them believed that he or she was selecting the strategy that’d win; if she or he believed otherwise, he or she would have selected a different strategy.

Surprise: not all of them won. (I didn’t either. See Further Reading, below.)

That means that we strategists don’t know what will work. Actually, that’s a slight overstatement. Some did know what would work, and their strategies performed well in the tournament, in which I ran over 25,000,000 (no joke) what-if simulations. The problem is, we don’t know in advance whose strategies will work, nor do we know if they will be successful in the future. For what it’s worth, and to add a minor surprise: the person who did best, so far, isn’t a pricing expert. The person is a market-research practitioner.

It’s humbling and perhaps infuriating that we don’t know what will work. We may take solace in thinking that we do know what’ll work in our industries, and that those Ailing, Fast Growth, and Mature industries are pretty weird. Maybe that’s true, but I don’t think so. And I’ll end by asking one more annoying question: if we’re so good at pricing, where do price wars come from?

Further reading

Predictable Competitors, on using history and trends to predict competitors (or not)
Motor Swilling Forbidden, on how people use the same words and mean different things
When I Was Wrong, on the consequences of and opportunities from mistakes
House, MBA, on the envy two CEOs have for the other’s pricing strategy
The Rules, about surprises and assumptions
Decision Tournaments, on the technology behind the pricing tournament

Appendix

Participating in the pricing tournament 

If you would like to run a pricing tournament for your group, let us know. Strategies and scores will be held in confidence. For more information, please write to info@whatifyourstrategy.com.

A representative sample of strategists

In the essay I mentioned that 250+ real-life strategists have participated in the massive pricing simulation. One might ask, especially if one has been exposed to statistical analysis, whether those 250+ people constitute a representative sample. It’s a good question, and surprisingly hard to answer, and fortunately quite inconsequential.

First, what would be a representative sample? Pricing specialists? Pricing consultants? People with responsibility for pricing decisions? With how much experience? In what countries and industries? Big companies or small? Highly competitive markets, long-established markets, markets with many competitors, or not? It’s hard to know what’s representative.

It’s a little easier to know what’s relevant, which is what leads me to say the “representativeness” of our sample is inconsequential. What’s relevant is that you could be up against pricers of any kind.

The strategists in the pricing tournament:

  • Came from several countries, mostly from the USA
  • Came from many industries
  • Included mostly corporate strategists, augmented by some consultants, academics, and MBA students
  • Knew their strategies would be held in confidence.

My analysis so far shows little reason to believe that demographic characteristics (location, occupation, etc.) have a material effect on the strategy decisions the strategists made. In other words, the quality of thinking and strategizing doesn’t seem to depend much, if at all, on demographics. More research is needed.

Update, June 30, 2019. Over 1,850 people from six continents have entered the tournament. The tournament has simulated almost ten billion scenarios in analyzing the entrants’ strategies.
