All About Models: The Strategist’s Guide to How They Think, by Mark Chussil

This essay is full of good stuff about models. Of course that depends on how you define good stuff and models. If you mean something involving delectable (your definition here too) folk lurching down a runway with sour expressions stitched onto their faces, well, sorry, I’m not that kind of manager. But if “good stuff about models” to you means cool and insightful methods to improve decision-making, as it does to me during working hours, then you’ve clicked to the right place.

The fascinating world of models and decision-making

All of us live with models. More accurately, models live within all of us. They help us every day.

As if that weren’t enough, there are models outside us too, and they help us too.

In general, we can classify models chemically: those based on carbon and those based on silicon. The former, mental models, operate inside our heads. The latter, computer-based models, operate inside computers. They behave differently.

Mental models

Mental models are the things in our heads that say if you do this then the result will be that. They reflect knowledge and/or beliefs about how the world works.

Our ability to, say, throw physical objects to a specific target without consciously solving equations comes from mental models. If asked, we could enumerate the key variables involved — the force we apply, the distance it must travel, wind resistance — but we don’t think about them when we throw an object.

Those mental models work well, and without controversy, because thrown objects precisely obey the laws of physics for everyone, every time, so far. But not everything covered by mental models is so consistent. Humans’ mental models can differ dramatically and contentiously where our experience is more varied, complex, or ambiguous. As with competitive strategy.

Imagine you and I run a large manufacturing business. A mental model for what makes it tick may go like this:

  • We have heavy fixed costs.
  • To be profitable, we must cover those fixed costs and all other costs.
  • Fixed costs go down per-unit if we sell more units.
  • Selling more units also brings in more revenue.

I’m not saying that mental model is right or wrong. I’m saying only that it is an example of a mental model.

That mental model reflects a certain list of variables (akin to those for throwing physical objects) and it leads its owner to think in a particular way. It focuses on internal operations (our costs), it recognizes the effects of volume (keeping the factory full reduces costs per unit), and it would favor moves such as price cuts or mass marketing because, given the list of variables, those actions would promise to build volume.

I’ve encountered exactly that mental model in the business war games I’ve conducted, and I’m sure you’ve heard it too. It’s common and it at least appears to make sense.

Think again about our large manufacturing business. What would be a different mental model? How about this:

  • We have heavy fixed costs.
  • To be profitable, we must cover those fixed costs and all other costs.
  • It’s easier to cover our costs if our prices are high.
  • Excellent quality and a strong brand give us pricing power.

I’m not judging that mental model either. Notice, though, that it reflects a different list of variables and it leads its owner to think in a different particular way. It focuses on customer perceptions (quality and brand), it recognizes the causes of margin (profit comes from the difference between prices and costs), and it would favor moves such as price increases or product differentiation because, given the list of variables, those actions would promise to build margin.

I’ve encountered that mental model as well, and I’m sure you have too. It’s common and it at least appears to make sense.

Our large manufacturing business is hurting. What should we do? Your mental model says cut price, my mental model says don’t. We battle, we spreadsheet, we PowerPoint. We argue passionately about why a price cut is a good idea or a bad idea, we selectively cite anecdotes and trends and studies. Oddly, we spend less time asking how we can end up at different conclusions when we start with the same data; that is, asking how our mental models differ. (See also Predicting Competitors.)

The point is not that any given mental model is right or wrong. (That said, we all know that no one is always right.) The point isn’t even that we must eventually choose one mental model or another. The point is that we all have mental models, we use them all the time, they’re unconscious to us and invisible to others, they appear self-evidently true to their owners, and they differ from person to person.

Mental models have benefits.

  • They’re as flexible and creative as the human brain.
  • They accommodate new information in real time. We can revise or update them quickly; we just change our minds.
  • We always have them with us, they’re always ready to go, and we don’t need help from IT to operate them.
  • Mental-model conflicts, annoying as they may be, help us learn. The wisdom of crowds. The marketplace of ideas. Very beneficial in qualitative business war games.

Mental models have drawbacks. For example, they require that we handle numbers in our heads. Sure, we can handle the math behind accurately throwing objects, but try this much simpler arithmetic: what’s 1,248 x 3,579 – 9,876 + 5,555? You’re right, it’s 4,362,271. I was just checking to be sure you knew too. How about this: if our sales are $1,000 today and we expect 3.15% compound annual growth, what will our sales be after 63 months? And we haven’t even gotten into any interesting problems yet. How about our profits after 5 years, assuming that brand popularity rises 9% and fixed costs are… well, you get the point.
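
If you’d rather let silicon handle that last one, here is a minimal sketch in Python; the $1,000, the 3.15%, and the 63 months are simply the illustrative figures from above:

```python
# Compound growth: $1,000 growing at 3.15% per year, over 63 months.
starting_sales = 1_000.00   # dollars
annual_growth = 0.0315      # 3.15% compound annual growth
months = 63

years = months / 12         # 63 months = 5.25 years
future_sales = starting_sales * (1 + annual_growth) ** years

print(f"Sales after {months} months: ${future_sales:,.2f}")
# prints: Sales after 63 months: $1,176.83 (roughly)
```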

It’s not just about handling numbers; it’s also about understanding ripple effects. We cut our price and expect volume to grow, but our competitors want volume too and they feel threatened and so they cut their price in response, so now we’re still at parity but at lower prices, so we cut our price again or cut our costs, and then, and then, and then. Unintended consequences come from unanticipated effects. Scenario-planning and role-playing programs, like business war games, help reveal those effects and consequences. (See also Do Not Overtighten.)
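
To see how fast those ripples compound, here is a toy sketch in Python; the prices, the cost, and the assumption that a single competitor simply matches every cut are all hypothetical:

```python
# Toy ripple effect: we cut price to win volume, a competitor matches,
# parity returns at a lower price, and margin quietly erodes.
our_price = 100.0        # hypothetical starting price
unit_cost = 70.0         # hypothetical variable cost per unit

for round_number in range(1, 5):
    our_price *= 0.95                # we cut 5%, hoping to gain share
    competitor_price = our_price     # they feel threatened and match
    margin = our_price - unit_cost
    print(f"Round {round_number}: we're at ${our_price:.2f}, "
          f"they're at ${competitor_price:.2f}, "
          f"margin per unit ${margin:.2f}")

# Round 1: we're at $95.00, they're at $95.00, margin per unit $25.00
# ...
# Round 4: we're at $81.45, they're at $81.45, margin per unit $11.45
```

Share never moves in that little loop; only the margin does, which is exactly the kind of ripple a one-move mental model tends to miss.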

For an amusing and related diversion — and we probably all need one right about now — you might enjoy the lyrics to Tom Lehrer’s classic satire New Math.

Computer-based models

Remember that our mental models tell us if you do this then the result will be that. Say you tell your computer about “this” and ask it what “that” will be. Sorry; it will just sit there in stolid silicon silence. [Update: Not necessarily so. If you ask Siri, you’ll always get a snappy comeback.] Before the computer can tell you about “that,” you have to tell it how. Your computer thinks as you tell it to. Your computer thinks like you.

All computer-based models start in life as mental models.

If you believe you should forecast your sales by extrapolating trend lines into the future, you or your Excelophilic proxy will insert trend-line-extrapolation equations. If you believe you should forecast sales with a statistical model, you will commission a statistical model. If you ask someone else to figure it out, your computer will think like the person to whom you delegated thinking. And if you believe your mental model is good enough, you won’t even turn on your computer. Let it sit there.

Computer-based models think neither better nor worse than humans. That’s because they think like humans. Their benefits are in clarity, speed, precision, and scope.

  • They’re explicit and visible. People can see, discuss, enhance, and ultimately share the way of thinking embodied in the model.
  • They compute far, far faster than humans can. That’s more than a convenience. It also means it’s practical to conduct serious what-if tests with many scenarios. A simulator I wrote calculates about 20,000 scenarios per second, which is handy when I ask it to process millions. (See Millions of Pricing Simulations.)
  • They calculate much more accurately than humans can. (See also Precision In, Garbage Out.) Oh, by the way: the arithmetic we covered earlier? The real answer is 4,462,271. The 4,362,271 I mentioned was a sly test. You didn’t know it was wrong? That’s the point.
  • They can keep track of many more variables than humans can. A strategy simulator I created for a company could handle roughly a thousand decisions for them and their competitors in multiple countries. Not a job for a mental model. Very beneficial in quantitative business war games and strategy analysis.

Computer-based models also provide a different kind of benefit. Yes, they serve as mental models mated with deft calculators. They also let strategists compare mental models. Back to our large manufacturing business, the one where you and I have been arguing whether to cut price. Use a computer-based model to run it both ways. And maybe some other ways too: cut capacity, broaden the product line, lead prices up, and more.

In other words, computer-based models allow us to run what-if tests. They can do so in quantitative business war games or even in the massive explorations ACS calls decision tournaments. (See also When I Was Wrong.)
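
To make the what-if idea concrete, here is a rough sketch, in Python, of comparing two strategies across many simulated scenarios. It is not the simulator described above; the demand model, the elasticity range, and every number in it are invented for illustration.

```python
import random

# Sketch of a what-if comparison: "hold price" vs. "cut price 10%" evaluated
# across many randomly drawn scenarios. All assumptions are hypothetical.

def profit(price, base_demand, elasticity, unit_cost=70.0, reference_price=100.0):
    """Toy demand model: volume responds to price through an elasticity."""
    volume = base_demand * (price / reference_price) ** elasticity
    return (price - unit_cost) * volume

random.seed(42)
strategies = {"hold price": 100.0, "cut price 10%": 90.0}
results = {name: [] for name in strategies}

for _ in range(100_000):                        # many scenarios run in moments
    base_demand = random.uniform(800, 1_200)    # uncertain market size
    elasticity = random.uniform(-5.0, -1.0)     # uncertain price sensitivity
    for name, price in strategies.items():
        results[name].append(profit(price, base_demand, elasticity))

for name, outcomes in results.items():
    average = sum(outcomes) / len(outcomes)
    print(f"{name}: average profit of {average:,.0f} over {len(outcomes):,} scenarios")
```

The averages themselves don’t matter; what matters is that your mental model and mine get tested against the same explicit assumptions, out in the open.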

Computer-based models sometimes are like rocket science, but their drawbacks are simple. They take time, money, and skill to create. And, of course, they need mental models for their raw material, which makes them vulnerable to lousy mental model in, garbage out. Note that the garbage-out would be the fault of the lousy mental model in, not the computer.

Is the model valid?

I think we’ve established that strategists always use models, whether mental or computer-based. Sooner or later someone asks how you know your model is valid. I think that mostly they want to know about numerical validity and accuracy; does the model work, has it been tested. That’s a good question we’ve all been trained to ask. We’ll talk about that next. However, I think it’s even more important to ask about conceptual validity; in effect, should the model work, does it make sense. We’ll talk about that in the part about rephrasing the question, just a few terrific paragraphs from now.

Actually, people ask the validity question only with computer-based models. Mental models are invisible, don’t feel like models, and are self-evidently correct to their owners. People may contest other people’s conclusions, but, as we said a while back, people rarely ask why they reach different conclusions from the same data.

When people ask if a model is valid they usually mean to ask whether it fits known data. The validity question is fair, but fitting known data is often not a useful test.

It’s always possible to put together a model that fits known data. If you expect the future to look just like the past, then such a model might even be useful. (Assuming it didn’t violate good model-building hygiene, such as using up too many degrees of freedom. See also Predictable Competitors.) But it’s not especially interesting to model a future that will look just like the past. The time you really need a good model is when the future will not look like the past. Unfortunately, there are no data about the future, so the does-it-fit-the-data test isn’t available. (See also More Internet Users than People.)
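
For a feel of why fitting known data proves so little, here is a small sketch using NumPy; the five years of “sales” are made up, and the fourth-degree polynomial is chosen precisely because it uses up every degree of freedom the five points offer:

```python
import numpy as np

# Five (hypothetical) years of sales, fit exactly by a 4th-degree polynomial.
years = np.array([1, 2, 3, 4, 5], dtype=float)
sales = np.array([100, 112, 108, 121, 118], dtype=float)

coefficients = np.polyfit(years, sales, deg=4)   # five points, five coefficients
fitted = np.polyval(coefficients, years)
print("worst in-sample error:", np.max(np.abs(fitted - sales)))   # essentially zero

# The model "fits the data" perfectly. Its forecasts are another story:
print("year 6 forecast:", np.polyval(coefficients, 6.0))   # roughly 0
print("year 8 forecast:", np.polyval(coefficients, 8.0))   # roughly -1,300
```

A perfect fit to the past, and forecasts of zero and then negative sales: the does-it-fit-the-data test told us nothing about whether the model makes sense.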

I recommend rephrasing the question to shift from numerical validity to conceptual validity. Rather than ask if a model is accurate, ask if it is sensible. Ask how it works, ask what it takes into account, ask on what paradigm it rests. Remember that making strategy decisions is not about accounting, trend lines, or forecasting. It is about strategy, and the models you select should work with strategy concepts. (See also Pundits and Stress and With All This Intelligence, Why Don’t We Have Better Strategies?)

The bottom line

The ultimate point about validating models — and about selecting models and especially about using models — is this. You are going to make decisions no matter what. The relevant question is whether you can make a better decision with the model than without it.

No model is perfect and there are no guarantees of fabulous future performance. Improving the odds of success is all you can hope to get by using models. Fortunately, it is a lot to get. It’s the difference between the gambler and the casino.

This essay talks about models themselves. The next in the series, What The Model Says, discusses which models are worth listening to. The final essay, The Model Whisperer, connects mental and computer-based models to techniques for developing competitive strategy.

Update, November 30, 2012: Further reading. See Paul Krugman’s New York Times essay Varieties of Error. He distinguishes between errors caused by extraneous forces, which make good models produce bad predictions, and errors caused by models that are flawed.
