Why Strategies Fail (P.S. We Expect Them to Succeed)

by Mark Chussil

On May 10, 2011, thirty intrepid, curious, thoughtful strategists joined me in a three-hour workshop at the SCIP2011 Conference. Our subject was “Why Strategies Fail.” (See the end of this post for access to a 30-minute video of a similar speech.)

The workshop combined audience interaction, experiential learning, live simulations, performance art, and ah-ha surprises. This essay presents conceptual highlights from the workshop even if, alas, it falls short on interaction, experiences, simulation, performance, and surprise. As you read please imagine a lively, dynamic, engaging session, thanks to a great group of participants.

 

At first glance “Why Strategies Fail” is an odd subject. After all, don’t we really want to know what makes strategies succeed?

One thing we strategists must do to succeed is to not fail. And there’s sad evidence that we aren’t necessarily good at not-failing.

First, don’t fail

What do these companies have in common?

Blockbuster Video. Borders. Chrysler. Circuit City. Delta Air Lines. Enron. General Growth Properties. General Motors. Hollywood Video. Kmart. Lehman Brothers. Six Flags. Texaco. Texas Rangers. Trans World Airlines. United Airlines. Washington Mutual. WorldCom.

What they have in common is that they all went bankrupt.

Some of them have emerged from bankruptcy. Still, it’s safe to say that none of them wanted to enter bankruptcy in the first place.

It’s not immediately clear how they came to such unhappy ends.

  • They didn’t enter bankruptcy only when times were bad. Some did in good times.
  • They didn’t enter bankruptcy because their industries were imploding. They had competitors who survived and even prospered.
  • They didn’t enter bankruptcy overnight. Some took decades to fail, meaning that generations of well-intentioned strategists didn’t prevent the fall.

Bankruptcy is only one form of failure. We also say a strategy failed when it misses its performance targets, loses ground to competitors, or costs its author his or her job.

Strategies fail when

When strategists choose bad strategies, strategies fail. That sounds obvious until we remember that no strategist purposely chooses a bad strategy. Strategists are smart, experienced, industry-savvy, data-rich, and highly motivated to succeed. They want to choose smart strategies. Yet smart strategists can and do choose bad strategies.

If you doubt that smart strategists choose bad strategies, look again at the list of companies above. Do you believe they employed, promoted, and trusted incompetent strategists? Do you also believe their senior management approved bad strategies due to incompetence of their own?

Because smart strategists choose bad strategies, we’re unlikely to prevent bad strategies merely by shuffling people around. That’s an expensive, haphazard way to solve the problem of strategies that fail.

Instead, let’s address why smart strategists can mistakenly believe that a bad strategy is a good strategy. That’s what we did in the workshop.

Seven habits of highly ineffective strategizing

I didn’t call this section “the” seven habits because there are more than seven. And even though you may read them in a few minutes, take a few seconds to consider why we spent a few hours on them in the workshop. It’s because it’s more effective, not to mention more fun, to learn through experience than through lectures. In the workshop we discovered each of these seven habits experientially.

Here, HHIS = habit of highly ineffective strategizing.

HHIS#1: Using wrong paradigms

Discovered in the workshop by deconstructing Marketing 101 with a simple pricing question.

We have strategy problems. How should we position our product? How should we defend against a new entrant? How should we price as we enter or exit a recession?

We say that if you have a hammer, you see problems as nails. There’s also the reverse to consider: if you have a nail, you need a hammer.

The tools we choose to solve strategy problems are often accounting-based spreadsheets, trend lines, anecdotes, and advice from confident-sounding people. Those tools rely on paradigms that may not fit strategy problems. For example, a trend line assumes that conditions from the past will persist into the future. If the past will persist, we don’t have a very hard problem; if it won’t, the trend line itself isn’t reliable.
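The trend-line problem is easy to see with a toy example. Here's a minimal sketch in Python; the sales figures and the year-six shock are invented, and the point is only that a line can be fit with perfect precision to history and still miss badly when conditions change:

```python
# Hypothetical numbers: five years of perfectly steady sales growth.
years = [1, 2, 3, 4, 5]
sales = [100, 110, 120, 130, 140]

# Ordinary least-squares fit of a straight line to the history.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(sales) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, sales)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

trend_forecast = intercept + slope * 6  # the trend line's year-6 forecast: 150
shocked_actual = 115                    # hypothetical: conditions change in year 6

print(f"Trend-line forecast for year 6: {trend_forecast:.0f}")
print(f"Actual if conditions change:    {shocked_actual}")
```

The fit is flawless — every historical point sits exactly on the line — yet the forecast is only as good as the assumption that the past will persist.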

We need to use thinking and tools based on relevant paradigms. That means if we have a strategy nail, we need a strategy hammer.

Further reading: Why Do War Games Work?

HHIS#2: Seeking pseudo-precision

Discovered in the workshop with an interactive exercise and a case study from a business war game.

When a strategy fails, we reasonably turn attention to the analysis and forecasts that led us to adopt the strategy. We figure that if we can make the analysis and forecasts more precise, we’ll be more likely to succeed in the future.

That may be true if lack of precision is the problem. In my experience, though, lack of precision is rarely, if ever, the problem.

A large telecommunications company faced a new competitive threat. Their strategists had been unable to choose whether to respond with Strategy A or Strategy B. Think of how they could have resolved their impasse: take a vote, have the boss rule, wait (for what?) and see, kick the decision up to top management, get a consultant to make a recommendation. Note that those options are merely means to make a choice. They decided instead to work it through in a business war game.

In their business war game we had them role-play their company and the new competitor, and we used a strategy simulator to estimate the outcomes. The choice between Strategy A and Strategy B came down to whether strategists would prefer to lose 20 points of market share or 40. To make a good decision, who cares if it’s 20 versus 40, or 19 versus 37, or 20.311 versus 38.726?

Quantifying helps. Precision, not so much.

Further reading: Predictable Competitors and Predicting Competitors.

HHIS#3: Relying on anecdotes and stories

Discovered in the workshop with a vigorous virtual debate between Steve Burd, CEO of Safeway, and Craig Herkert, CEO of Supervalu.

We humans love anecdotes and stories. We glow as we imagine ourselves the hero, which we call aspiration and inspiration. We shudder as we imagine ourselves the victim or villain, which we call fear or lessons learned.

Anecdotes and stories prove that something is possible even if, swept up in a good tale, we forget that possible doesn’t mean probable. But anecdotes and stories hardly provide solid ground to make complex decisions.

A question I find helpful is this: It works in practice, but does it work in theory? No, I didn’t scramble practice and theory when I wrote that.

Something may appear to work in practice. We infer that through a process that goes like this: I did X, then Y happened, and I like Y, so X works. But we all know that X-preceded-Y doesn’t mean X-caused-Y, especially in a field as turbulent, complex, and interconnected as competitive strategy.

Asking “does it work in theory” injects intellectual discipline where we otherwise would have only assertion, inflation, persuasion, and frustration. It asks whether we can draw plausible cause-and-effect links from X to Y before we risk our Y on that X.

Further reading: Numbers, Circular Reasoning, and Numbers and The Burden of Anecdote.

HHIS#4: Assuming our strategy will work

Discovered in the workshop in a miniature business war game on the automobile industry.

If you ask strategists “will your competitors do what you want them to do?,” of course they’ll answer “maybe, but probably not.” Yet tools commonly used in strategy development implicitly assume that your competitors will do what you want. When’s the last time you saw a spreadsheet take competitors’ reactions into account as it forecasted your business’ future profits or market share? And on the off chance that it did, how strong, sustained, or clever was the competitive response that was fed in?

In the workshop we ran a miniature business war game based on the automobile industry. Teams of workshop participants role-played various car-makers as they allocated production and forecasted results in three consumer segments.

What we saw mirrored what I’ve seen in hundreds of business war games I’ve conducted for companies and at conferences: teams made rational decisions that rammed head-on into competing teams’ rational decisions. Teams raced to expand in the segments they considered desirable. Teams assumed an orderly exit from shrinking segments. Every team expected to gain market share somewhere. Not a single team expected to lose market share anywhere. Net result: overproduction in every segment, and performance below expectations.
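The arithmetic behind that net result is simple. A stylized sketch (the four teams and their share targets are hypothetical, not the workshop's actual numbers):

```python
# Four hypothetical teams, each holding 25% share and each planning to reach 30%.
current_shares = [25, 25, 25, 25]
planned_shares = [30, 30, 30, 30]

total_planned = sum(planned_shares)       # shares can only sum to 100
impossible_points = total_planned - 100   # expectations that must be dashed

print(f"Sum of planned shares: {total_planned}%")
print(f"Share points that cannot materialize: {impossible_points}")
```

Each plan is individually rational; collectively they demand 120% of a 100% market, so at least 20 points of someone's expectations cannot come true.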

Of course strategists want to grow; people don’t like strategies that will shrink their businesses. And in a business war game, as in real life, some businesses will grow.

But hopes for at least some will be dashed, in real life as in war games. Does that reflect not-good-enough strategies or too-optimistic hopes? A full answer goes beyond this space and perhaps your patience. Let’s just say that we may inadvertently cause not-good-enough strategies, too-optimistic hopes, or both, when our strategy-development tools implicitly assume our strategy will work.

Further reading: Honey, We Shrunk the Industry and Honey, We Shrunk the Industry Again.

HHIS#5: Believing we can make it happen

Discovered in the workshop in case studies based on two business war games.

The phrase “make it happen” stirs the accountability and glory centers of our brains. It feels triumphant to proclaim that we will make it happen. It feels magisterial to demand that you must make it happen.

The thing is, whether it happens isn’t up to us (or you) alone. Others are involved, such as competitors, whose brains are similarly stirred. Then there are customers, suppliers, distributors, regulators, shareholders, and financiers, not to mention constraints from budgets and technology.

When it happens, it’s not necessarily because we made it. And even if we made it happen, we didn’t necessarily do it in a way we prefer. Perhaps we made our profits happen by cutting costs when expected sales didn’t materialize.

We train telescopes on our goals and microscopes on our budgets, and what we need is a wide-angle lens to scan our scenarios. After all, if we fail to make it happen because of the things that swoop out of a metaphorical left field, perhaps we ought to pay more attention to left field.

Further reading: The How-Likely Case.

HHIS#6: Deciding while being human

Discovered in the workshop in several interactive exercises.

Imagine an unfair coin. A fair coin, when flipped, comes up heads 50% of the time and tails 50% of the time. The unfair coin you’re imagining comes up heads 60% of the time and tails 40%.

Say you repeatedly flip your unfair coin with humans and ask them to predict the result of the next flip. Humans observe previous outcomes closely. They perceive patterns and construct elaborate schemes and rules. They regard correct predictions as vindication and incorrect predictions as an imperative to refine their systems. They believe that with practice they can do better.

Say you flip your unfair coin with rats (or the rat equivalent of such a coin) and reward them with food for each correct prediction.

The rats out-perform the humans.

The rats learn that guessing the rat-equivalent of heads every time will maximize their food. They get fed 60% of the time. The humans, with their big brains and complicated systems, get it right less than 60% of the time.
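The gap is easy to reproduce in a quick simulation. This is a stylized sketch, not the actual experiment: the "human" strategy below is probability matching — predicting heads about as often as heads appears — which is correct only about 0.6 × 0.6 + 0.4 × 0.4 = 52% of the time, versus 60% for always predicting the likelier side:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

P_HEADS = 0.6  # the unfair coin
FLIPS = 100_000

flips = ["H" if random.random() < P_HEADS else "T" for _ in range(FLIPS)]

# "Rat" strategy: always predict the more frequent outcome.
rat_correct = sum(f == "H" for f in flips)

# "Human" strategy (probability matching): predict heads about 60% of
# the time, mirroring the frequency observed in the flips.
human_correct = sum(
    f == ("H" if random.random() < P_HEADS else "T") for f in flips
)

print(f"Always-predict-heads accuracy: {rat_correct / FLIPS:.1%}")
print(f"Probability-matching accuracy: {human_correct / FLIPS:.1%}")
```

The elaborate pattern-seeking strategy loses to the simple one, which is the rats' whole trick.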

I’m not against big brains. I’m not against complicated systems. I’m not for rats and I’m not against humans.

What I am against is forgetting that we’re human, and therefore subject to human biases and foibles, when we make decisions: overconfidence, groupthink, innumeracy, confirmation bias (believing only the data with which we agree), and much more. We can fight those biases and foibles if we learn and try. The point is, we have to learn and try.

Further reading: It’s Working! and Marvelous Techniques.

HHIS#7: Not figuring out what it’s about

Discovered in the workshop in a very cool team exercise.

In business schools, at conferences, and in news reports, everything arrives labeled. This is a marketing case, that’s a sales problem, this is competition, that’s product development. We reach for mindsets and tools with matching labels and get to work.

Challenges in real life don’t come labeled. (Notice, by the way, that “challenge” is itself a label.) We’re predisposed to think of finance, prices, and trend lines if we have spreadsheets just as we think of nails if we have hammers, but that says as much about our thinking as it does about the challenge at hand.

We included an unlabeled challenge in the “Why Strategies Fail” workshop. I’m not going to describe it here, partly because it would lose too much in the translation and partly because you may experience it someday with me or someone else and I don’t want to spoil it for you. The point of the unlabeled exercise is that it’s easy for us to fail if we don’t pay attention and figure out what it’s about. Doing so requires taking a few minutes to think, to question assumptions, and to be willing to be wrong.

Further reading: When I Was Wrong.

The bottom line

We strategists often think in terms of strategy fundamentals: understanding customers, anticipating competitors, seeking profitable markets, achieving market share, controlling costs, and so on. When we build a strategy we carefully deploy the fundamentals. When a strategy fails we investigate what went wrong with the fundamentals.

I suggest that there is also such a thing as strategist fundamentals. Strategist fundamentals drive and reflect how we think. If you review the seven habits of highly ineffective strategizing we’ve covered here, you’ll see they are about those thinking fundamentals.

We’ve come full circle. We began with why strategies fail and now we end with how strategies succeed. Good strategies come from good decisions. Good decisions come from good decision-making. Good decision-making comes from good strategist fundamentals; that is, from good thinking. And that’s the bottom line.

 

For more information

Please feel free to contact me if you’d like to know more about the “Why Strategies Fail” workshop or others that I’ve conducted for companies and conferences around the world. Or, visit How to Think Better on ACS’ website.

You can view and download The ACS Why Strategies Fail Bibliography. It lists essays by ACS, with links to full text, and lists thought-provoking books that have shaped my thinking.

You can view and download The ACS Business War-Gaming Bibliography. It lists essays by ACS on business war games, with summaries and links to full text.

Update. You can watch a video of Mark Chussil’s “Why Strategies Fail” speech for The IE Group (30 minutes; not the same as the 3-hour workshop for SCIP). It was delivered at the Chief Strategy Officer Summit in New York City on December 9, 2011.
