But Not Simpler: Getting Your Analysis Just Right

by Mark Chussil

Albert Einstein said, “Things should be as simple as possible, but not simpler.”

No one wants to spend three months studying whether we ought to offer our new product in 182 colors or 183. No one wants to pluck decisions from the air when billions of dollars and thousands of livelihoods are at stake.

I didn’t say we don’t do those things. I just said no one wants to do those things. At least I hope no one does.

As simple as possible, but not simpler. So when it comes to business decision-making, what’s too simple, what’s not simple enough, and what’s just right?

Too Simple

I’ve been interviewed a few times recently about the value of daily-deal coupons from Groupon and LivingSocial. The deals are generally terrific for consumers. They’ve been good for Groupon and LivingSocial too. Groupon turned down a $6 billion takeover offer from Google early in 2011, and may be valuing itself in the $20 billion range. Not bad for a company that didn’t exist five years ago.

But are daily-deal coupons a good deal for the restaurateurs, merchants, and others who buy into the system and offer those deals to potential customers? How would you decide?

If you look around, such as in those interviews, you’ll find plenty of answers to that question. Some of those answers are learned, many of them are passionate, almost all of them are anecdotal. They’re not too simple because they’re wrong; on the contrary, they make good points. Rather, they’re wrong because they’re too simple.

They’re too simple because they isolate narrow pieces of the problem, like predicting life success on the basis of a high-school trigonometry test. (I hope that technique doesn’t work.) Yes, the coupons might cheapen a brand and that’s dangerous. Yes, on the other hand, the coupons might bring in new customers and that’s progress. But to make a good business decision about the coupons, we must account for both factors, and more.

In other words, the answer is it depends. Or, equivalently, there is no single answer that’s right for every business. Which is emphatically not the same as saying that there is no answer at all or that every business is different.

Before my interviews I spent a few hours creating a simple what-if simulator for restaurants that might use daily-deal coupons. It’s not pretty. Still, it handles multiple factors, such as how many tables the restaurant typically fills, how much the restaurant relies on repeat customers, and whether the restaurant has high fixed costs (e.g., fancy décor) or high variable costs (e.g., expensive ingredients). Based on those factors and others, you could get a reasonable idea whether the coupons would pay off for your restaurant. It depends. For some restaurants the coupons would pay off, for others they would hurt.
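
If you’d like to see the flavor of such a what-if model, here’s a minimal sketch in Python. To be clear, this is not that simulator; every parameter name and default value below is an illustrative assumption, and a real restaurant would substitute its own numbers.

```python
# A minimal sketch of a daily-deal what-if model. All parameter names and
# default values are illustrative assumptions, not real restaurant data.

def coupon_payoff(
    avg_check=40.0,          # average spend per table, in dollars
    variable_cost_pct=0.35,  # ingredient/food cost as a share of the check
    coupon_discount=0.50,    # e.g., "pay $20 for $40 of food"
    deal_site_cut=0.50,      # share of coupon revenue kept by the deal site
    coupon_tables=200,       # tables expected to redeem the coupon
    displaced_pct=0.30,      # coupon tables that would have come anyway
    repeat_pct=0.20,         # new coupon customers who return at full price
    repeat_visits=3,         # full-price visits per converted customer
):
    """Rough contribution margin from running one daily-deal campaign."""
    # Revenue the restaurant actually keeps per coupon table
    kept_revenue = avg_check * (1 - coupon_discount) * (1 - deal_site_cut)
    margin_per_coupon_table = kept_revenue - avg_check * variable_cost_pct

    # Margin a full-price table would have contributed
    full_price_margin = avg_check * (1 - variable_cost_pct)

    # Tables that would have shown up anyway trade full price for coupon price
    displaced = coupon_tables * displaced_pct
    new_tables = coupon_tables - displaced

    gain_from_new = new_tables * margin_per_coupon_table
    loss_from_displaced = displaced * (margin_per_coupon_table - full_price_margin)
    future_repeat = new_tables * repeat_pct * repeat_visits * full_price_margin

    return gain_from_new + loss_from_displaced + future_repeat

# Negative under these defaults; nudge repeat_pct up or displaced_pct down
# and the sign flips. In other words: it depends.
print(f"Estimated campaign payoff: ${coupon_payoff():,.0f}")
```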

The point is that getting decision-making insight doesn’t have to be too complicated and that sound-bite answers are too simple.

Further reading: The Burden of Anecdote.

Not Simple Enough

With a friend and colleague I co-founded a company called Benefitics, which specializes in quantifying the social ROI of non-profit organizations. That means we estimate the monetary value to society of an organization’s activities and divide it by the cost of conducting those activities. We did it for Friends of the Children, on whose Board of Directors I am honored to serve, and for others.

We sloshed through a lot of academic research as we got our quantitative feet wet on our first project. We found deep thinkers whose studies shed rigorous light on genuinely tough social-science questions. The problem from our perspective was that their exacting methods were prohibitively expensive, in time as well as in money. Some studies had taken many years. If we modeled our analysis on those studies we would have to wait about 65 years for our results, which we felt was too long.

A great answer in 65 years would be useless (to us). Not simple enough. We didn’t need pinpoint precision or encyclopedic breadth. We needed something simpler: a good answer, soon. After all, an estimated benefit/cost ratio of 5.0 tells us what we really need to know even if the “right” number is 4.7 or 5.2.

So we developed a good answer. We used census data and government statistics rather than compiling our own data through new primary research. We made reasonable assumptions about what influenced what, and we tested those assumptions with smart people who know the field. Whenever we faced a choice about how to do something, we chose the analytically conservative path; if we introduced any bias, we would underestimate, not inflate, the results.
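
To show the spirit of that approach, and only the spirit, here’s a toy benefit/cost calculation in Python. It is not the Benefitics model; every figure is an invented placeholder. What matters is the structure: take low-end (conservative) estimates of each benefit, total them, and divide by cost.

```python
# A toy social-ROI calculation. All figures are invented placeholders;
# the point is the conservative structure, not the numbers.

# Hypothetical per-participant lifetime benefits to society, in dollars,
# each taken from the LOW end of its plausible range (the conservative path).
benefits_per_participant = {
    "reduced_incarceration": 40_000,
    "higher_lifetime_earnings": 60_000,
    "reduced_public_assistance": 15_000,
}

participants = 100
program_cost = 2_000_000  # total cost of serving those participants

total_benefit = participants * sum(benefits_per_participant.values())
ratio = total_benefit / program_cost

# Prints about 5.8. Whether the "right" number is 4.7, 5.2, or 5.8,
# the decision-relevant message -- well above 1.0 -- is the same.
print(f"Estimated social benefit/cost ratio: {ratio:.1f}")
```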

As thorough as a gold-standard scientific study? Our study was no slouch, but no, it was silver-standard. The good news: by avoiding not-simple-enough we saved 65 years.

Further reading: Precision In, Garbage Out.

Just Right

I ran a business war game recently in which the client and I wanted to use a simulation model so we could see which strategy ideas were promising and which were not.

We were budget- and time-constrained. I had one month to, among other things, put together a simulator. It had to be custom-calibrated for their industry. It had to handle multiple market segments, new competitors entering those segments, parallel universes, and the key “levers” each business could pull, to any degree, in any combination, and in real time. Impressed? I know I was, when it worked.

In the middle of the business war game, strategists role-playing one of the client’s competitors took several actions, including a price increase. The simulation model projected that the price-raising company would enjoy an increase in market share of several percentage points.

“Wait a minute!” cried someone from a competing team. “They raised their price and the model said their market share would go up? That doesn’t make sense.”

Was it a bug in a quickly developed model? Did the model fail to capture the way their industry worked? Did their industry have a perverse affection for higher prices?

No, no, and no. It wasn’t a bug; it didn’t misrepresent their industry; their industry, like most others, preferred lower prices.

Here’s what was going on. The price increase did not cause the gain in market share. It did not even contribute to it. The gain in market share came in spite of the price increase.

At the same time that they’d upped their price, they’d also spent more on marketing. Meanwhile, their competitors had moved, but less aggressively and less effectively. The gain in market share was due to the net interaction of everyone’s moves, aggressive or not, effective or not.

To demonstrate, we removed the price increase, left in all the other moves, and re-simulated. As you, I, and they would expect, the team would have gained more share if they had not increased their price.
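
For readers who want to see the mechanics, here’s a toy version of that experiment in Python. It is not the war-game simulator; the attraction-model form, the exponents, and the moves are all illustrative assumptions. It shows how a share gain can arrive in spite of a price increase, and how removing one move and re-simulating isolates that move’s effect.

```python
# A toy market-share model demonstrating how a price increase can coincide
# with a share gain. The model form, exponents, and moves are illustrative
# assumptions, not the actual war-game simulator.

def shares(firms, price_elasticity=2.0, marketing_power=0.5):
    """Attraction model: share is proportional to marketing^a / price^b."""
    attraction = {
        name: f["marketing"] ** marketing_power / f["price"] ** price_elasticity
        for name, f in firms.items()
    }
    total = sum(attraction.values())
    return {name: a / total for name, a in attraction.items()}

base = {
    "Team A":  {"price": 10.0, "marketing": 100.0},
    "Rival B": {"price": 10.0, "marketing": 100.0},
    "Rival C": {"price": 10.0, "marketing": 100.0},
}

# Team A raises price 10% AND doubles marketing; rivals move, but less
# aggressively and less effectively.
moves = {
    "Team A":  {"price": 11.0, "marketing": 200.0},
    "Rival B": {"price": 10.0, "marketing": 110.0},
    "Rival C": {"price": 10.0, "marketing": 105.0},
}

# Counterfactual: identical moves, except Team A's price increase is removed.
no_price_hike = {**moves, "Team A": {"price": 10.0, "marketing": 200.0}}

# Team A gains share despite the price hike (~0.360 vs. 0.333), and would
# have gained even more without it (~0.405) -- the gain comes in spite of
# the price increase, not because of it.
for label, scenario in [("Before any moves", base),
                        ("After all moves", moves),
                        ("Same moves, no price hike", no_price_hike)]:
    print(label, {k: round(v, 3) for k, v in shares(scenario).items()})
```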

And that’s what made the simulator just right on the simplicity scale. Sure, the model wasn’t “accurate.” (NB: no one can calculate “accurately,” or even discern accuracy, in any such analysis. But that’s another subject.) Even so, it moved in a sensible direction however the teams tuned their strategies, and the more forceful the action, the more the model would move. It gave sensible feedback. Just right for what we needed.

Further reading: What The Model Says.

Getting Your Analysis Just Right

Here’s what I’ve learned over 35 years working with analytic models. Well, not everything I’ve learned about them, but important stuff. And here’s what to notice and implement from those three simple stories.

An analysis is too simple if people, especially you, keep saying “but what if” or “but does it take into account” or “but that’s not realistic.” That’s the feeling you might have if you’re told Groupon is good, period, because it brings in new customers.

  • It’s a bad sign when all the numbers in an analysis are about your business, when you’re never surprised by the results, or when you see happy numbers come out no matter what numbers you put in. It’s a bad sign when all you hear is anecdotes or when all you see is tunnel vision.
  • It’s worth asking, as a sanity check on the analysis, if a big, obvious move by someone else (e.g., a competitor) would affect the analysis. If it wouldn’t, watch out.
  • What may help it be just-right is adding realism. That may mean changing the conceptual underpinning of the analysis (e.g., don’t extrapolate history if you need to anticipate competitive dynamics). It may mean taking more factors into account.

An analysis is not simple enough if it won’t be ready on time to affect a decision, like a program that takes two days to calculate tomorrow’s weather forecast. That’s the feeling you might have when you find out that the data you really want will take 65 years to gather.

  • It’s a bad sign when people quarrel about minute differences that just aren’t big enough to affect a decision. If that’s where the debate is, ask open-ended, non-confrontational questions, such as: what would we do differently if the minute differences turned out one way or the other?
  • It’s worth asking, if the problem is that the analysis will take too long, whether the decision can be delayed.
  • What may help the analysis be just-right is to look for approximations or proxies. Approximations give you a reasonable number quickly. For example, you can average market-analysts’ forecasts of Microsoft Office sales rather than construct your own forecast. Proxies give you a similar or related number quickly. For example, look at sales of Windows PCs to get an idea of sales of Microsoft Office.

An analysis is just right if you can comfortably answer questions about and with the analysis. The analysis makes sense, it’s directionally correct, it focuses on the problem at hand. The analysis fits the need.

  • It’s a good sign when people stop asking about the analysis and start getting excited about the results. It’s also a good sign when they want to run what-if experiments or have a copy to use themselves. Extra credit if they say they want to “play with” the model.
  • It’s worth asking if you can apply what you created to related problems. The key is to recognize the general principles common to those problems. Experience at doing that is one reason I could build that strategy simulator in a month. (See also The Good, the Bad, and the Lucky.)
  • What will help the analysis stay just-right is to filter proposed enhancements with this question: if we had this enhancement, what would we be able to do that we cannot do now? Which brings us to…

The Bottom Line

The purpose of an analysis is not to tell us a number we like. It’s not to deliver a definitive, immutable last word. It’s not to guarantee anything.

The purpose of an analysis is to positively affect the quality of a decision. Keep that in mind, and your analysis will be as simple as possible, and not simpler.
