Numbers Gone Wild: Or, Precision In, Garbage Out (the essay), by Mark Chussil

This brief essay highlights key themes from my Numbers Gone Wild workshop at the SCIP International Annual Conference on March 11, 2010. It does not replay the workshop, as the workshop was energetically interactive, verging at times on raucous; well, as raucous as competitive-intelligence professionals get. Not to mention that the group enjoyed numerous punch lines and shockers, and I wouldn’t want to spoil the delights and surprises if someday you experience the workshop yourself. Rather, this essay focuses on concepts and conclusions, which will shock and delight better than would a play-by-play recap.

Introduction: Crazy

Having numbers drives us crazy.

Lacking numbers drives us crazy.

Seeking precision drives us crazy.

Living with imprecision drives us crazy.

In short, we are going crazy.

Numbers surround us. Even people who prefer qualitative methods (“quals”) use numbers every day.

Numbers bug us. Even people who like quantitative methods (“quants”) criticize numbers every day.

In the workshop, prudent people explored the unintended ways in which numbers going wild cause us to suffer. We suffer as a result of the bad decisions, and the collateral career damage, that can come from untamed numbers.

How to Like Numbers

I like most numbers. When I say “most” I don’t mean that I get along with all the digits except 7 and 8. I mean that numbers are literally the only way that we can gain certain insights, knowledge, and even inspiration, and that makes them likable.

Of course, the presence of numbers does not guarantee insight, knowledge, or inspiration. On the other hand, the absence of numbers limits us to anecdotes, impressions, and hypotheses. So, we want numbers.

I like most numbers. I don’t like most factoids. Here’s an example: “GM’s market share went down at the same time the reliability of its cars went up.” That statement is true, and over many years it was measured precisely in cars sold and in defects per hundred vehicles, but it is not likable. Why not? Here’s one of several reasons: because it is context-free. Context would tell us what else was happening while GM improved reliability and lost market share. Context would reveal that many more-reliable Japanese cars had entered the market and that the other American producers were improving reliability too. GM almost certainly would have lost even more market share if it hadn’t improved reliability.

Our unlikable factoid illustrates how we can decide which numbers to like and which not. The relevant criterion for likability is not whether a number is “actionable.” After all, the factoid is actionable: advise GM management to regain market share by making their cars less reliable. The relevant criterion is whether a number is sensible.

“Sensible” needs a big tent. It includes context, robustness, and quality of analysis. It does not imply or require approval. You might not like a number, but numbers are not popularity contests. (Which is amusingly ironic, if one is easily amused, because popularity contests are judged with numbers.)

The big tent of sensibility also does not imply or require precision. It doesn’t matter a whole lot whether GM’s market share declined 11.2% or 11.3%. No one would breathe a sigh of relief to find the loss was the former and not the latter. Moreover, precision is not even an option for many numbers, such as all numbers about the future.

Notice that the sensibility criterion separates the wildness of a number from what we do with that number (e.g., approve of it or take action on it). That is a good thing. When we humans get wrapped up in strategy debates and decision-making, we confound the goodness of a number (its sensibility) with our approval of the number (its value in getting our idea adopted). We slip into judging the goodness of a number by the support it lends to our idea, rather than judging the goodness of our idea by the support it gains from a number. It is to avoid confounding sensibility and approval that I design the structure of my strategy-simulation models before calibrating the models with data, and that I calibrate the models before simulating strategies with them.

Models in Our Heads!

Think about your business. What would happen to its profits if its prices were to rise 5%?

To answer that question you will execute a series of unconscious cerebral calculations to forecast the net results of many interconnected moving parts. How do I know you will do that? Because you will have an answer to my what-would-happen question, meaning you will promptly predict how prices will prod profits.

Let’s think about that. To promptly predict how prices will prod profits you will assess:

  • Competitors’ reactions to your price increase. Will they follow you up or will they tout their new price advantage? How quickly will they move, how far will they go?
  • Customers’ reactions to the price increase. What will happen to your business’ sales in the short term? In the long term? Will customers expect a better product for the higher price? Can you market the increase in a soothing way? Will you try to hide the increase via smaller packages or hidden fees? Will customer loyalty dampen or conceal the effects?
  • Given the changes (if any) in your business’ volume, what happens to costs? Are fixed costs fully covered? What are the implications for contracts with labor, distributors, suppliers, and shipping? What about sales support and customer service?
  • And more. (A toy sketch of such a model follows this list.)
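
To see how much machinery even a crude, explicit version of that mental model implies, here is a toy sketch in code. Every parameter name and number in it is hypothetical, invented purely for illustration; a real model would be calibrated with data before anyone trusted its output, per the earlier point about calibrating before simulating:

```python
# Toy sketch of the raise-prices-5% question. All figures are hypothetical.

def profit_after_price_increase(
    price=100.0,            # current price per unit (hypothetical)
    volume=10_000,          # units sold per quarter (hypothetical)
    unit_cost=60.0,         # variable cost per unit (hypothetical)
    fixed_costs=250_000.0,  # fixed costs per quarter (hypothetical)
    increase=0.05,          # the 5% price increase
    elasticity=-1.5,        # customers' volume response to price (hypothetical)
    follow_share=0.5,       # fraction of competitors expected to match (hypothetical)
):
    new_price = price * (1 + increase)
    # If competitors follow the increase, customers have nowhere cheaper
    # to go, so the volume loss is dampened.
    effective_elasticity = elasticity * (1 - follow_share)
    new_volume = volume * (1 + increase) ** effective_elasticity
    return (new_price - unit_cost) * new_volume - fixed_costs

print(f"Baseline profit: {(100.0 - 60.0) * 10_000 - 250_000.0:,.0f}")
print(f"If half of competitors follow: {profit_after_price_increase():,.0f}")
print(f"If none follow: {profit_after_price_increase(follow_share=0.0):,.0f}")
```

Even this caricature juggles seven interlocking assumptions, and it still omits most of the list above.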

It would be extraordinarily difficult for you to take all of those factors into account in your head, not to mention doing the arithmetic. And yet you will proffer a prompt prediction to the what-would-happen question. As would most business professionals. You and I hear it all the time; just listen to debates about what works and what to do.

That prediction comes from what’s called a mental model. It’s just like a computer model, except it happens in one’s head and it operates with a lower degree of computational accuracy.

A lower degree of computational accuracy? Yes. Try this. You sell a product for $129 per unit. You sold 51,500 units last quarter. The market is growing at a compound annual rate of 3.8%. What will be your revenue next quarter? Write down your answer. That, by the way, is a much simpler question than the one I posed before, the one about raising prices. Even so, you probably had a confident opinion about the earlier, harder question while feeling helpless to answer this simpler one without electronic assistance.

Mental models are always switched on, even within the quals wandering among us. They matter because we use them to predict the results of actions we might take and to judge the validity of the computer models we might use. We can and should take these steps to use both mental models and computer models without producing garbage-out:

  • Mental models are generally unconscious. Make them explicit. Draw them, discuss them, think through the assumptions and principles that drive them. Question mental models and their owners, respectfully, just as we question computer models. Separate their conceptual sensibility from the numbers they produce, and focus on the former first.
  • Because humans tell computer models how to think, computer models think like humans. If you believe a financial analysis is appropriate, you use a financial model in which the computer has been taught that profit equals sales minus costs. There’s a flip side: when you use a financial model, you implicitly aver that the problem at hand is properly addressed by financial analysis. Worries about “garbage in” apply to the choice of analysis at least as much as to the numbers fed in. (A bare-bones sketch follows this list.)
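
Here is about as bare-bones as a taught-to-the-computer financial model gets. It is a hypothetical illustration, not anyone’s production model; note what it has been taught, and note everything it has not:

```python
# A computer "taught" the financial worldview: profit = sales - costs.
# The figures below are hypothetical, for illustration only.

def profit(units, price, unit_cost, fixed_costs):
    sales = units * price
    costs = units * unit_cost + fixed_costs
    return sales - costs  # the model's one belief about how the world works

# The model has no opinion about competitors, customers, or loyalty.
# Choosing it is itself an assertion that the problem is financial.
print(f"{profit(units=51_500, price=129.0, unit_cost=90.0, fixed_costs=1_000_000.0):,.2f}")
```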

Important point: both mental models and computer models are about thinking. Both reflect how we believe the world works.

By the way, the answer to the arithmetic-accuracy question is $6,705,733.32.
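
For the curious, here is one reading of the question that reproduces that figure. It assumes the 3.8% compound annual rate is converted to a single quarter’s growth by taking the fourth root; the question did not spell out a compounding convention, so treat that as my assumption:

```python
# Reconstructing the arithmetic. Assumes the 3.8% compound annual rate
# becomes one quarter's growth via the fourth root.

price = 129.00
units_last_quarter = 51_500
annual_growth = 0.038

quarterly_factor = (1 + annual_growth) ** 0.25   # about 1.00937
revenue = price * units_last_quarter * quarterly_factor
print(f"${revenue:,.2f}")  # prints $6,705,733.32
```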

Oh, Them

Let’s compare models. We’ll go back to the $129 product arithmetic and contrast it with the raise-prices prediction. There’s a critical difference between the two.

  • Yes, the $129 product arithmetic is more precise. We have actual numbers. But although it’s a real difference, it’s not a critical difference.
  • The $129 product arithmetic can be calculated, monitored, and verified. Another real difference, though not yet critical.
  • The $129 product arithmetic implicitly and invisibly implied that the solution was a matter of financial analysis. I reinforced that perception by 1) supplying an answer 2) that was calculated to the penny. The raise-prices prediction implied financial analysis much less, and perhaps not at all. That’s critical.

Review the list of assessments you’d make in the raise-prices prediction. You might assess well and you might assess badly, but the way you would think about that problem would probably differ dramatically from the way you would think about the $129 product arithmetic.

The superficial precision of the $129 product arithmetic nudged you toward a method of analysis. (That’s known as “framing” the problem.) Thus, precision in could cause garbage out. Why garbage? Because the $129 product arithmetic should raise exactly the same issues about competitors and customers as the raise-prices prediction, and using a financial-analysis approach would completely, presumably unintentionally, and potentially disastrously ignore competitors and customers. Oh, them.

Conclusion: Not Crazy

The cure to our initial craziness is not to dispense with numbers wholesale, nor is it to embrace them unconditionally. The way to prevent garbage-out is not to scorn precision, nor is it to demand it. The answer is not either-or or some mythical “sweet spot.”

The answer is to deploy the best of both worlds, thinking strategically and calculating rigorously. Not perfectly or exhaustively, but consciously and vigorously.

That means being careful as we frame problems and choose models. It means remembering that there’s always a model, whether it’s in your head or in your computer. It means taking relevant factors into account, even imprecisely, because omitting relevant factors hurts more than using them imprecisely.

I know it’s possible because I’ve done it in the strategy models I’ve built in my career, and I’m not the only one who’s done it.

Let’s close with lessons from the remarkable Deep Blue.

Garry Kasparov, the world’s reigning chess champion at the time, was defeated in 1997 by IBM’s Deep Blue supercomputer. There are several points we can make about that match.

First, Deep Blue was not programmed with trend lines, gap analysis, or financial fundamentals. Of course not; it was a chess match. So why use trend lines, gap analysis, or financial fundamentals to analyze competitive-strategy problems?

Second, Deep Blue was a machine but it thought like a human. That’s because humans programmed it, with their chess-playing knowledge and expertise. Computers do not contest us; they express us. They are amplifiers for our thinking the way power tools are amplifiers for our muscles.

Third, it’s virtually certain that none of the individuals who programmed Deep Blue could have beaten Kasparov. Their combined talent, though, did. It’s also virtually certain that Deep Blue would have beaten any single person whose knowledge and expertise it contained.

Finally, we know that Deep Blue didn’t win every game in the match, nor (obviously) did Kasparov. But there’s a question deeper than which of them was best. The deeper question is this: who or what could beat a Kasparov/Deep Blue team? The value of numbers and models is not numbers and models. The value is better business decisions that help you succeed.

How to avoid garbage-out on your way to better decisions? Think about it.
