The Model Whisperer: The Strategist’s Guide to Creating Value with Models, by Mark Chussil
Preamble.
This is the third and last in a series of essays about models and competitive strategy.
From All About Models, the first in the series:
- Models describe how we believe the world works.
- Models may be in our heads (mental models) or computer-based.
- Computer-based models are always based on mental models, though they can calculate more and better.
- A model is always involved, though perhaps unconsciously, when we say or compute "if we do this, we will get that."
- No model is perfect. The test for whether a model is useful is whether we can make a better decision with it than without it.
From What The Model Says, the second in the series:
- What the model “says” is at least as much about its design as about the numbers that come out. For example, a model that knows about costs will give you cost-based advice, nothing more.
- The most important decision we make about our model is to choose how our model should think.
- If your model omits key variables, it will leave you vulnerable to competitors who make use of those variables.
- A key test for whether a model is valid is whether it makes conceptual sense for the problem you want to solve. Many common modeling techniques do not.
Excessively cautious notice. This essay may occasionally sound commercial because in it I describe techniques ACS uses, including one technique proprietary to ACS. The two business war-gaming techniques I’ll describe are practiced by ACS and other companies; each company has its own way of doing things. The third, strategy decision tests, is proprietary to ACS. Related techniques (e.g., Monte Carlo simulation) are not.
End of preamble.
People whisper to horses, dogs, and (according to TV) ghosts. Why not models? The model whisperer — perhaps you, savvy strategist — gently, wisely guides models into shape and helps them achieve fulfillment as oracles of your business's future.
In our last episode I promised that we’d explore when to use mental and computer-based models in competitive strategy. I further promised that we’d talk about them in the context of three strategy-development techniques: qualitative business war games, quantitative business war games, and strategy decision tests.
In qualitative business war games we experiment with ideas and paradigms embedded in our mental models. In quantitative business war games we use computer-based models to stress-test specific strategy options. In strategy decision tests we run massive what-if simulations to gauge the risks and rewards of going down different paths.
The hypothetical examples I’ll present are actually not hypothetical. Each is an amalgam of multiple war games I’ve conducted in multiple industries, with identifying marks removed to protect confidentiality.
Qualitative business war games
Imagine. You run a business preparing a major change. You’re contemplating something unprecedented, even controversial. Your move will be highly visible and very expensive to retract. If it works, the rewards are great. But there are many interested parties out there, including your competitors, and it’s far from clear which side they’ll take. Their opposition could hurt you badly. Should you make the move?
You’ve thought it through and, although you favor making the move (your mental model at work), you’ve quickly bumped up against the chess-like complexity of your situation. You feel how easy it is to slip into rosy predictions. Your management team is split, some offering passionate support, some passionate opposition (their mental models at work). You and they have debated for some time without reaching a conclusion. Your window of opportunity is closing.
What would you do? Why, you’d run a qualitative business war game, of course.
Business war games involve role-playing. Teams of strategists role-play their business, competitors’ businesses, government regulators, consumers, investors, and so on, as needed for the problem the company wants to solve. Through a series of structured exercises and interactions, the teams construct, debate, select, execute, and react to strategy moves.
In the safe environment of a qualitative war game your strategists get to experience what could happen as they roll out a strategy… or, as a role-played competitor, as they fight against it. A qualitative war game is not unlike a mock trial or a field exercise: it doesn’t ascertain what will happen, but it shows what could happen, with far greater richness than a conference-room discussion. It reveals gaps in your thinking. It surfaces unconscious and unfounded assumptions that could haunt you later.
A qualitative business war game plays out your mental model of what you believe or hope will happen. In addition, it will probably uncover new ideas, favorable and unfavorable. You may choose to explore those new ideas with the quantitative methods I’ll describe later, or by running through another qualitative session. It’s because of those new ideas that I strongly advise you to run at least two “rounds” in a business war game; the second round turns the clock back to the kick-off point of the war game, but with the first round behind them the teams know more than they did before.
By definition, qualitative business war games don’t require computer-based models. They are flexible, time- and cost-efficient, and they can simulate just about anything. They are a great way to generate ideas because the adrenaline rush (I’m not kidding) of simulated competition stimulates creativity. Oh, and they’re fun.
Their drawbacks mirror those of mental models: humans make lousy calculators, even more so under uncertainty, and so it is difficult to get reasonable performance projections in a qualitative war game. Frankly, I don’t even try. I wouldn’t believe the projections.
Quantitative business war games
Imagine. You run a business facing a new competitive threat that’s sure to materialize. You and your colleagues have developed several mutually exclusive countermoves. The countermoves share no common ground and there’s no split-the-difference compromise to be found. Your colleagues advance persuasive arguments in favor of each possible move, and promise future glory. But how can that be, when the moves are so different? How can you know what to expect? What if competitors counter your countermove?
You ponder and reject the usual tools. Financial spreadsheet: doesn’t take competitors into account. (Notice the mental model in those spreadsheets, and refer to our discussion of Model V and Model M in What The Model Says.) Forecast: history won’t be a good guide because the new competitive threat changes too much. (Notice the mental model there too.) SWOT analysis: just a way to express your mental models using four categories.
So you turn to a quantitative business war game. You use a qualitative game when you’re looking for ideas and surprises; you use a quantitative game when you’re looking for analysis and choices. A quantitative war game uses teams and role-playing like the qualitative variety, but it adds (as you might expect) a computer-based model. The model, calibrated for your competitive environment, takes as input the strategy decisions made by the teams and provides as output estimates of sales, market share, profits, and so on.
Sidebar. Tempting and fascinating as it is, I’m not going to discuss the process of building such models in this essay. Contact me at info@whatifyourstrategy.com if you’d like to talk about it. I’ll just say 1) it’s possible to build such models, 2) it’s not as hard as it might sound, and 3) such models easily satisfy the usefulness criterion, to wit, whether you can make a better decision with the model than without it. See, for example, the Shell case at the start of Putting the Lesson before the Test from Wharton on Dynamic Competitive Strategy (Day and Reibstein, editors, 1997). That model projected a nine-figure loss if Shell were to execute a certain strategy. End of sidebar.
In qualitative war games participants are surprised by the actions teams take. In quantitative war games participants are surprised by the outcomes of those actions. Surprises are good, of course, if they’re in the safe environment of a war game.
Here’s a tip-of-the-iceberg illustration that shows the dynamic of a quantitative war game. Teams develop strategies. They all expect to perform well. Unless it begins with a near-monopoly, no team expects to lose market share. Yet each and every market in the world always, always contains exactly 100% market share, which means that for one business to gain share someone must lose share. Without a model, the teams argue endlessly about who gains and who loses. With a model, teams see who gains and who loses, and why. It makes the discussion much more fruitful.
By definition, quantitative business war games require computer-based models. They generate ideas, as do qualitative games, but their key benefit is that they let you evaluate and contrast your strategy options rigorously and objectively. They can produce consensus and action quickly because participants see the results of option A versus option B. (I’ve seen Fortune 500 companies change course overnight.) They help set reasonable targets and expectations. Oh, and they’re fun.
Their drawbacks mirror those of computer-based models: more time and cost to set up, and the model won’t be able to simulate actions not anticipated in the design phase. More subtly, some models are simply better-designed than others, so you must choose carefully.
Strategy decision tests
Imagine. You direct pricing for a business in a highly competitive market. You and your competitors watch each other’s prices like genetically modified hawks. You and they can change prices frequently. How should you set your prices?
This situation is rife with mental models and computer-based models. Some pricing strategists — your competitors? — look at prices periodically and then decide what to do. Those decisions, of course, come from some invisible and visceral combination of their mental models and any analysis they see. Other pricing strategists — your competitors? — may use computer-based models that figure out "optimal" pricing based on, for example, demand curves or the status of supply and demand at this microsecond.
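To see the mental model hiding inside such an "optimal"-pricing engine, here's a minimal and entirely hypothetical sketch, assuming a linear demand curve q(p) = a - b*p and a known unit cost c. The profit-maximizing price falls out of freshman calculus.

```python
# "Optimal" pricing from a demand curve: a hypothetical sketch, not any
# vendor's algorithm. Assumes linear demand q(p) = a - b*p and unit cost c.
# Profit is (p - c) * (a - b*p); setting its derivative to zero gives
# p* = (a/b + c) / 2. Note the buried assumption: competitors don't react.

def optimal_price(a: float, b: float, c: float) -> float:
    """Price that maximizes (p - c) * (a - b*p)."""
    return (a / b + c) / 2

# Hypothetical calibration: demand q = 1000 - 5*p, unit cost $60.
p_star = optimal_price(a=1000.0, b=5.0, c=60.0)
print(f"'Optimal' price: ${p_star:.2f}")  # $130.00, under these assumptions
```

The calculus is fine as far as it goes. The catch is the mental model baked in: demand is linear, the curve is known, and competitors sit still, which in the market described above they most certainly will not.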
So how should you set your prices? This much is clear: if you go too low you may trigger a price war or leave money on the table, and if you go too high you may lose share and long-time customers. If you try to duplicate others’ apparently successful moves you may make matters worse, sort of like grabbing someone else in midair instead of opening your own parachute. And of course prices are relative, and of course your prices affect their prices, and vice versa. So how should you set your prices?
You set up a strategy decision test.
Sidebar. Or perhaps you use other tools, of which there are at least several. As my focus in this essay is models and decision-making, not a pricing-technique critique, I won’t delve into methods other than strategy decision tests. I’ll just reiterate that all computer-based models are based on mental models, and it behooves the model consumer to ensure that a model is conceptually sensible and appropriate for the intended use. End of sidebar.
A strategy decision test is a massive what-if simulation. It’s useful when there are many options for you and/or your competitors and you want to understand the risks and rewards of your options. Each of your options may face many thousands of possible outcomes, and evaluating all your options may mean looking at many millions of futures. (See also The How-Likely Case.) That’s too many to do in your head. It’s also too many to do in a spreadsheet. But it’s not too many for a strategy decision test.
Strategy decision tests look at all the possibilities and summarize the results. They don’t involve teams or role-playing. They do, however, thrive on ideas for strategy options, specifically including ideas generated in war games or by brainstorming.
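Here is the idea in miniature, a deliberately tiny and hypothetical sketch (a real strategy decision test sweeps vastly more options through a far richer outcome model): enumerate the combinations, simulate each one, and summarize the risk and reward of each of your options.

```python
# A strategy decision test in miniature: a hypothetical sketch, not ACS's
# proprietary method. Enumerate every combination of our price, competitor
# reactions, and demand conditions; simulate each future; summarize per option.
from itertools import product
from statistics import mean

UNIT_COST = 60.0

def profit(our_price: float, comp_price: float, demand: float) -> float:
    """Toy outcome model: relative price drives our share of a demand pool."""
    our_share = comp_price / (our_price + comp_price)
    return (our_price - UNIT_COST) * demand * our_share

our_options = [80.0, 100.0, 120.0]           # candidate prices for us
comp_reactions = [70.0, 90.0, 110.0, 130.0]  # plausible competitor prices
demand_levels = [800.0, 1000.0, 1200.0]      # market conditions

for ours in our_options:
    futures = [profit(ours, comp, dem)
               for comp, dem in product(comp_reactions, demand_levels)]
    print(f"our price {ours:5.0f}: worst {min(futures):8.0f}  "
          f"mean {mean(futures):8.0f}  best {max(futures):8.0f}")
```

Twelve futures per option is eyeball-able; many millions are not, and that's the point of the summary. The worst / mean / best line for each option is the view of risk and reward.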
Obviously, strategy decision tests use computer-based models, and all my usual admonitions about model quality and sensibility apply.
One of the benefits of strategy decision tests is their comprehensiveness, which lets them show a clear view of risk; that is, of the range of what could happen. That comprehensiveness would have been very difficult to achieve not many years ago, and simply impossible not many years before that.
Another benefit, quite an intriguing one, is what Malcolm Gladwell called "serendipity" in his article about drug companies screening millions of compounds to see if they might have useful medicinal effects. ("The Treatment," The New Yorker, May 17, 2010.) "[Screening] provided a chance of stumbling across something by accident — something so novel and unexpected that no scientist would have dreamed it up. It provided for serendipity, and the history of drug discovery is full of stories of serendipity." He cites penicillin and Viagra. "What he [a cancer researcher] found was exactly what he'd hoped for when he started his hunt: something he could never have imagined on his own."
That serendipity happens in strategy decision tests. It’s not only that I’ve seen strategy decision tests find strategies that beat the best I could develop. I’ve also seen strategies pop out that I could never have imagined on my own.
The drawbacks of strategy decision tests are two. First, developing a strategy decision test takes somewhat longer than developing other kinds of computer-based models. Second, the analysis may look like a black box, not because anything is hidden but because the scope is too big to be eyeballed. Oh, and maybe a third. The results may be fascinating, insightful, and even jaw-dropping — I've felt mine fall a few times — but the experience isn't as entertaining as a business war game.
The n rules for model whispering
My goodness, we made it! Thank you, friendly reader, for staying with me to the end (almost) of this series of long essays.
We end with a few rules to remember as you whisper to your models and make your competitive-strategy decisions.
There is always a model. Computer-based or mental; Model V or Model M or Model XYZ; big or small. You never decide whether to use a model. You always decide which model to use. Don’t expect a financial spreadsheet to give you advice about marketing strategy. Corollary: choose a model on the basis of sensibility for the decision you need to make.
Computer-based models are people too, sort of. Computers only know what people have fed them and they only think what people have taught them. It's not human versus computer. It's, again, a question of which model you want to use and whether that model is better applied in your head or in a computer. Corollary: don't try to do too much arithmetic in your head.
Models can stimulate creativity. People get results from models — their mental models, others’ mental models, computer-based models — and they create ideas for how to do better. (War games are a prime example.) They even get ideas by designing models, as they become attuned to the wide range of variables under their control. Corollary: you can build your creativity by asking yourself “how would I model that?” when you come across an interesting strategy. That activity thrills me and makes me endlessly fascinating at parties. I know that because people can hardly wait to wander off and think about it on their own.
Surprise is good. If you can always predict what a model will say, the model doesn’t add any value. The model adds value when it tells you something you didn’t already know. Corollary: if you (i.e., your mental model) disagree with another model, take the opportunity to learn whether to update your mental model or to jettison the other model.
There is no such thing as precision or accuracy about the future. There are many, many, many possible futures. No practical model can capture every relevant variable and tell you which one will happen. (Put another way: a perfect model of the next five years would take five years to run.) Depending on the technology you employ, you can get a good / better / best view of those futures, but no one can guarantee a correct view. Incidentally, there is also no such thing as data about the future, but that's only one reason why there's no such thing as precision or accuracy about the future. Corollary: the objective is to improve the odds of making good strategy decisions.
Finally: ask what-if, and imagine. “Whenever you see a successful business, someone once made a courageous decision.” — Peter Drucker
Further Reading
About business war games in general
Business War Games
Learning Faster Than The Competition
The Seven Deadly Sins of Business War Games
About quantitative business war games and methods
Honey, We Shrunk The Industry
Honey, We Shrunk The Industry Again
Precision In, Garbage Out
About strategy decision tests
Predicting Competitors
Strategy decision tests
When I Was Wrong