Deciding How to Decide
Senior managers are paid to make difficult decisions. Much depends on the outcomes of those strategic choices, and executives are rightly judged on their overall success rate. It is impossible to eliminate the risk inherent in strategic decision making, of course. But we believe that executives, and companies, can significantly improve their odds of success by making a straightforward (though not simple) change: expanding their decision-support toolkit and understanding which tools work best for which decisions.
Most companies rely on basic tools such as discounted cash flow analysis or simple quantitative scenario testing, even when faced with highly complex and uncertain contexts. We see this constantly in our executive education and consulting work, and research confirms our impressions.
Don't misunderstand us. The conventional tools we all learned in business school are excellent when you are operating in a stable environment, with a business model you understand and access to solid information. They are much less useful if you are on unfamiliar terrain: if you are in a rapidly changing industry, launching a new kind of product, or switching to a new business model. That's because conventional tools assume that decision makers have access to remarkably complete and reliable information. Yet every business leader we have worked with over the past 20 years recognizes that more and more decisions involve judgments that must be made with incomplete and uncertain information.
The problem managers face is not a lack of appropriate tools. A wide variety of tools, including case-based decision analysis, qualitative scenario analysis, and information markets, can be used for decisions made under high degrees of uncertainty. But the sheer variety can be overwhelming without clear guidance on when to use one tool, or combination of tools, over another. In the absence of such guidance, decision makers will continue to rely solely on the tools they know best, in an honest but misguided attempt to impose logic and structure on their most consequential decisions.
In the first half of this article, we describe a model for matching the decision-making tool to the decision at hand, based on three factors: how well you understand the variables that will determine success, how well you can predict the range of possible outcomes, and how centralized the relevant information is. We make a strong case for greater use of case-based decision analysis (which draws on multiple analogies) and qualitative scenario analysis under conditions of uncertainty.
Inevitably, the model we propose simplifies a very complicated reality in order to surface some important truths. (That's what models do.) In the second half of the article, we explore some of the most common complications: most executives underestimate the uncertainty they face; organizational protocols can get in the way of good decision making; and managers have little understanding of when it is best to use several different tools to analyze a decision, or when it makes sense to delay a decision until it can be better framed.
Developing a decision profile
As you reflect on what tools are appropriate for a given context, you should ask yourself two fundamental questions:
You need to know whether you have a causal model, that is, a solid understanding of which critical success factors and economic conditions, in what combination, will lead to a successful outcome. Companies that make similar decisions over and over have strong causal models. Consider a retailer that has been opening stores in one country for years, or one that has made many small acquisitions of adjacent competitors.
A simple test of the strength of your causal model is whether you can confidently specify a set of "if-then" statements about the decision. ("If our proposed new process technology reduces costs by X% and we can achieve a Y% market share by passing those savings on to our customers, then we should invest in this technology.") You should also be able to specify a financial model into which you can plug different assumptions (such as how much the technology reduces costs and how much market share it captures).
For the vast majority of strategic decisions, executives cannot specify a clear causal model. Some managers have a reasonably good idea of the critical success factors that matter, but not a complete picture; this would generally be true for a company developing a new product, for example. Others don't even know how to frame the decision; think of a company blindsided by a new technology introduced by a player outside its industry.
- Do you understand what combination of critical success factors will determine if your decision leads to a successful outcome?
- Do you know what metrics must be met to ensure success?
- Do you have a precise understanding of, almost a recipe for, how to achieve success?
- Can you predict the range of possible outcomes?
When choosing the right decision support tools, you also need to know whether it is possible to predict an outcome, or a range of outcomes, that could result from the decision.
Sometimes it is possible to predict a single outcome with reasonable certainty, such as when a company has made similar decisions many times before. More frequently, decision makers can identify a range of possible outcomes, both for specific success factors and for the decision as a whole. They can often also predict the probability of those outcomes. Under conditions of uncertainty, however, executives commonly cannot specify the range of possible outcomes or their likelihood of occurring with real precision (even in cases where they understand the critical success factors and the model for success).
- Can you define the range of outcomes that could result from your decision, both as a whole and for each critical success factor?
- Can you measure the probability of each result?
Choosing the Right Tools: Five Contexts
As the "Diagnosing Your Decision" exhibit suggests, the answers to the questions above will point you to the best decision support tools. (For brief definitions of each, see "Decision Support Tools: A Glossary.") In some cases you will need only one tool; in others you will need a combination. Many of these tools will be familiar. However, the tool we recommend using most frequently, case-based decision analysis, is not yet widely used, partly because the more formal and rigorous versions of it are relatively new, and partly because executives generally underestimate the degree of uncertainty they face. (For more on case-based analysis, see the sidebar "Developing Rigorous Analogies: An Underused Tool.")
To illustrate, let's look at five situations that McDonald's executives might face. (These are oversimplified for the sake of clarity.)
Situation 1: You understand your causal model and can predict the outcome of your decision with reasonable certainty.
Suppose McDonald's executives must decide where to locate new restaurants in the United States. The company has, or can get, all the information it needs to be reasonably sure how a given location will perform. First, executives know which variables matter to success: local demographics, traffic patterns, real estate availability and prices, and competitors' point-of-sale locations. Second, they have, or can get, rich data on those variables. And third, they have well-calibrated restaurant revenue and cost models. Together, this information constitutes a causal model. Decision makers can feed data on traffic and the other variables into standard discounted cash flow models to predict (with a close enough approximation) how a proposed location will perform, and then make a clear go/no-go decision.
Tools: Conventional capital budgeting tools, such as discounted cash flow and expected rate of return
Situation 2: You understand your causal model and can predict a range of possible outcomes, along with the probabilities of those outcomes.
Imagine now that McDonald's managers are deciding whether to introduce a new sandwich in the United States. They still have a reliable way to model costs and revenues; they have relevant data on demographics, foot traffic, and so on. (In other words, they have a causal model.) But there is great uncertainty about the outcome of introducing the sandwich: they don't know what demand will be, for example, nor do they know what impact the new product will have on sales of complementary products.
However, they can predict a range of possible outcomes using quantitative multi-scenario tools. Some preliminary market research in different regions of the country will probably give them a range of results, and perhaps even the probability of each. It might be possible to summarize this information in simple outcome trees that show the probability of different demand outcomes and the associated profits for McDonald's. The trees could then be used to calculate the expected value, variance, and range of financial outcomes McDonald's would face if it introduced the sandwich. Managers could use standard decision-analysis techniques to make their final determination.
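The arithmetic behind such an outcome tree is simple enough to sketch. The probabilities and profit figures below are invented purely for illustration; they are not McDonald's data:

```python
# Hypothetical sketch: summarizing a demand outcome tree for a new
# product. Each branch is (probability, annual profit in $ millions).
# All numbers are made-up assumptions for illustration only.
from math import sqrt

outcome_tree = [
    (0.25, -10.0),  # weak demand
    (0.50,  15.0),  # moderate demand
    (0.25,  40.0),  # strong demand
]

# Expected value: probability-weighted average of the branch profits
expected_value = sum(p * v for p, v in outcome_tree)

# Variance: probability-weighted squared deviations from the mean
variance = sum(p * (v - expected_value) ** 2 for p, v in outcome_tree)
std_dev = sqrt(variance)

print(f"Expected profit: ${expected_value:.1f}M")   # $15.0M
print(f"Std deviation:   ${std_dev:.1f}M")          # $17.7M
print(f"Range: ${min(v for _, v in outcome_tree):.0f}M "
      f"to ${max(v for _, v in outcome_tree):.0f}M")
```

A decision maker would compare the expected value against the alternative (not launching), while the standard deviation and range convey how much downside risk the launch carries.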
Alternatively, McDonald's could test the new sandwich in a limited number of regions. Such pilots provide useful information about total potential market demand without incurring the risk of a large-scale rollout. Conducting a pilot is similar to buying an "option" that provides information and gives you the right, but not the obligation, to roll the product out more widely in the future. (This approach is still market research, but a generally more expensive form of it.) Real-options analysis, which quantifies the benefits and costs of the pilot in light of market uncertainty, would be the appropriate decision-making tool in this case.
Tools: Quantitative multi-scenario tools, such as Monte Carlo simulations, decision analysis, and real-options valuation. (These tools combine statistical methods with the conventional capital budgeting models preferred in Situation 1. Managers can simulate possible outcomes using known probabilities and discounted cash flow models, and then use decision-analysis tools to calculate expected values, ranges, and so on.)
Situation 3: You understand your causal model but cannot predict the results.
Now suppose McDonald's is entering an emerging market for the first time. Executives still understand the model that drives store profitability, and the cost and revenue drivers may well be the same from market to market. But the company has much less information about outcomes, and it would be difficult to predict them using market research and statistical analysis. Its products are relatively new to this market, it will face unfamiliar competitors, it is less sure of supplier reliability, and it knows less about whom to hire and how to train them.
In this situation, McDonald's can use qualitative scenario analysis to get a better sense of the possible outcomes. It can create revenue-side scenarios that cover a wide range of customer-acceptance and competitor-response profiles. On the supply side, scenarios could focus on uncertainties in the emerging market's supply chain and regulatory structure that could cause wide variation in supplier costs and reliability.
These scenarios will be representative, not comprehensive, but they will help executives evaluate the pros and cons of various approaches and determine how much they are willing to invest in the market. Executives should supplement the scenarios with case-based decision analysis of analogous business situations. They could look at the results of their own or other fast-food companies' entries into developing markets, or consider the results of consumer goods entries into this particular market.
Tools: Qualitative scenario analysis supplemented by case-based decision analysis
Situation 4: You don't understand your causal model, but you can still predict a range of outcomes.
Suppose McDonald's wants to enter a new line of business with a new business model, such as consulting services for food-service process improvement. In this case, executives probably can't define a full causal model or easily identify the drivers of success. However, that does not mean they can't define a range of possible outcomes for the business by tapping the right sources of information, for example, getting estimates of success from people with more experience of this business model, or aggregating data on the range of outcomes experienced by others using similar business models. It is often easier to leverage others' outcome data (and thus define a range of possible outcomes) than to ask people to reveal the details of their underlying business models. (That "secret sauce" is confidential at many companies.)
Tools: Case-based decision analysis
Situation 5: You do not understand your causal model and cannot predict a variety of outcomes.
Even a well-established market leader in a well-established industry faces decisions under high levels of ambiguity and uncertainty. When considering how to respond to recent concerns about obesity in the United States, and to the backlash over the fast-food industry's role in the obesity epidemic, McDonald's cannot be sure what effect various moves might have on customer demand.
The backlash has the potential to fundamentally rewrite the rules of leadership in the fast-food industry and render existing decision models and historical data obsolete. McDonald's certainly cannot accurately forecast the future lawsuits, medical research, legislative changes, and competitive moves that will ultimately determine the payoff of any decision it makes.
When faced with this level of uncertainty, the company must once again rely on case-based decision analysis. Relevant reference cases might include attempts by other consumer goods companies to reposition themselves as healthy or safe alternatives in an otherwise "dangerous" sector, or to influence legislation, regulation, or stakeholder perceptions through public relations and lobbying campaigns. McDonald's could analyze cases in the gaming, tobacco, firearms, soft drink, and baked goods industries, for example.
Tools: Case-based decision analysis
Careful readers will have noticed that the exhibit includes a set of tools we have not yet covered: information aggregation tools. We treat them separately because, for the most part, they operate independently of the decision-profile questions asked above (do you have a causal model, and do you know the range of possible outcomes?).
A newer approach to gathering dispersed information is to use information markets (also known as prediction markets) to capture the collective wisdom of informed crowds on key variables, such as likely macroeconomic performance in the coming year or how a proposed product will be received. Two limitations of this approach should be kept in mind: First, because information and prediction markets are structured like financial markets, in which participants "bet" on different outcomes, they can be used only when executives can specify a range of possible outcomes (as in Situations 2 and 4 above). Second, such markets may allow information that executives would prefer to keep private (for example, expected revenue from a new drug) to leak out.
Two alternatives to information markets can overcome these limitations. The first is incentivized estimates: people with access to diverse information are asked to provide estimates of a key outcome, and the person who comes closest to the actual number receives a payment of some sort (which may or may not be monetary). The second is similarity-based forecasting: people are asked to rate how similar a particular decision or asset is to past decisions or assets. The ratings are then aggregated using simple statistical procedures to generate forecasts of revenues, completion times, or costs, depending on the objective. (This is actually a case-based decision-analysis tool.)
- Is the information you need centralized or decentralized?
- If it's decentralized, can you tap the experts you need and aggregate their insights?
- Is it feasible and useful to use “the crowd” for some parts of your information gathering?
- Is it possible to aggregate useful information from the crowd without revealing confidential information?
For the sake of clarity, the examples above were presented in simplified form. In practice, of course, all sorts of complications arise when important decisions are made. We explore some of them below.
Executives don't know what they don't know.
The model we have developed for choosing decision support tools depends on managers' being able to accurately gauge the level of ambiguity and uncertainty they face. This can be problematic, because decision makers, like all human beings, are subject to cognitive limitations and behavioral biases. Particularly relevant here are the well-established findings that decision makers are overconfident in their ability to forecast uncertain outcomes and that they interpret data in ways that tend to confirm their initial hypotheses.
Essentially, executives don't know what they don't know, but are generally happy to assume they do.
Cognitive bias creeps in.
Managers' biased assessments of the level of uncertainty they face may lead some to conclude that our diagnostic is of limited practical use, since it may point them toward the wrong approach. Our consulting experience suggests that most organizations can manage those biases if, when considering a strategic decision, managers choose their decision-making approach through a systematic, transparent, and public process in which their judgments can be evaluated by their peers. (This will require process and culture change at many organizations.)
For example, any decision maker who assumes a firm understanding of the economics underlying a big decision should be challenged with questions such as: Is there any reason to believe that the relationship between critical success factors and outcomes has changed over time, making our historical models invalid? Similarly, those who assume that all possible outcomes and their probabilities can be identified in advance might be asked: Why are other seemingly plausible outcomes impossible? What assumptions are you making when estimating the probabilities? Finally, those who conclude that the information relevant to the decision resides within the company, or even within a small group of senior executives, might be asked: If we could assemble a "dream team" to advise us on this decision, who would be on it, and why?
When asked such questions, decision makers are less likely to assume that their decisions are straightforward or even intuitive, and more likely to use tools such as scenario analysis and case-based decision making. This is especially important when a relatively novel or unique strategic investment is under consideration.
Organizational processes get in the way.
Organizations need to develop general protocols for decision making, because political and behavioral traps abound when money or power is at stake. Here is just one of many examples we could give: We worked with a leading technology company whose forecasting group used the same decision support tool regardless of where a product was in its life cycle. This made no sense at all. When we investigated, we learned two things: First, business unit heads demanded simple forecasts because they did not understand how to interpret or use complex ones. Second, the company did not charge business units for capital spent on R&D investments, so unit heads pressured the forecasting team to inflate revenue estimates. As a result, the forecasts were highly distorted. It would have made more sense for the forecasting team to report to the CFO, who was more sophisticated about financial modeling and could also be more objective about the business units' investment needs. It is impossible to design away all of a system's perverse incentives, but some common-sense protocols can make a big difference.
Decision makers tend to trust a single tool.
We were moved to create the decision-profile diagnostic in part because we saw so many managers relying solely on conventional capital budgeting techniques. The most important decisions involve degrees of ambiguity and uncertainty that those approaches are not equipped to handle on their own.
It is often useful to supplement one tool with another, or to combine tools. To illustrate this point, imagine that an executive at a Hollywood studio is tasked with making a go/no-go decision on a mainstream movie. Decisions of this kind are of vital importance: today the average production cost for films released in 600 or more theaters is $70 million (many have production budgets of more than $100 million), and only three or four in ten movies break even or earn a profit. Yet the decision to green-light a project is generally based solely on "expert opinion", in other words, executives' intuition supplemented by standard regression analysis.
In a recent study, two of us used similarity-based forecasting to predict box office revenues for 19 first-run movies. Through online polls, non-expert moviegoers were asked to judge how similar each movie was, based on a brief summary of the plot, the stars, and other salient features, to previously released movies. Revenues for the new movies were then forecast by taking similarity-weighted averages of the revenues of the previously released movies. On average, those predictions were twice as accurate as those driven by expert opinion and standard regression forecasts. They were particularly effective at identifying smaller films that would generate revenue. This type of case-based decision analysis is an effective way to tap the wisdom of the crowd.
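The aggregation step in similarity-based forecasting is just a weighted average, which can be sketched in a few lines. The movie names, similarity ratings, and revenue figures below are entirely invented for illustration; they are not data from the study:

```python
# Illustrative sketch of similarity-based forecasting: revenue for a
# new movie is forecast as the similarity-weighted average of revenues
# of previously released movies. All names and numbers are hypothetical.

# reference movie -> (avg. similarity rating on a 0-10 scale,
#                     box office revenue in $ millions)
reference_movies = {
    "past_film_a": (9.0, 120.0),
    "past_film_b": (6.0,  45.0),
    "past_film_c": (2.0, 300.0),
}

def similarity_forecast(refs: dict) -> float:
    """Weighted average of past revenues, weighted by similarity."""
    total_weight = sum(sim for sim, _ in refs.values())
    return sum(sim * rev for sim, rev in refs.values()) / total_weight

forecast = similarity_forecast(reference_movies)
print(f"Forecast revenue: ${forecast:.1f}M")  # -> $114.7M
```

Note how the high-grossing but dissimilar reference film contributes relatively little to the forecast: the crowd's similarity judgments, not raw revenue, determine each analogy's weight.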
Even in situations that seem relatively unambiguous, it is often worth supplementing capital budgeting and quantitative multi-scenario tools with case-based decision analysis to check for potential biases. For example, if your "sure thing" investment project is expected to deliver an unprecedented rate of return compared with similar projects in the past, that may reflect overconfidence rather than the extraordinary nature of your project. A robust analysis of analogous situations forces decision makers to view their particular situation more objectively and tends to uncover any wishful thinking embedded in their return projections.
Managers do not consider the option of delaying a decision.
Deciding when to decide is often as important as deciding how to decide. In highly uncertain circumstances, such as a rapidly changing industry or a major change in the business model, it is wise to borrow a completely different set of tools: iterative learning-based experimentation. For example, today's universities are being disrupted by massive open online courses (MOOCs), and most administrators don't know if their institutions should react or how to react.
Instead of making an expensive, high-risk decision now, many universities are conducting small-scale experiments to test the waters and learn more about what "success" will look like in this space. (Of course, they are also using analogies, for example by trying to understand whether the unbundling of the music business holds lessons for higher education.)
What can you start doing tomorrow to become a better business decision maker? Start by developing your decision-support toolkit more fully. There is a clear disconnect between the tools that are being used and those that should be used more frequently. Make it a priority to learn more about quantitative multi-scenario tools such as Monte Carlo simulation, decision analysis, and real-options valuation. Get trained in scenario planning.
Explore the growing academic and practitioner literature on information markets. Make more rigorous use of historical analogies to inform your most ambiguous and uncertain, and generally most important, decisions. We all use analogies, implicitly or explicitly, when making decisions.
Cognitive scientist Douglas Hofstadter argues that analogy is the "fuel and fire of thinking." But it is all too easy to fall prey to our biases and focus on a limited set of self-serving analogies that support our preconceived notions. Those tendencies can be kept in check through rigorous case-based decision methods, such as similarity-based forecasting.
Finally, and perhaps most important, get your company into the habit of consciously deciding how, and when, to make any given decision.
A version of this article appeared in the November 2013 issue of the Harvard Business Review.
- Hugh Courtney is dean and professor of international business and strategy at D'Amore-McKim School of Business at Northeastern University, and a former consultant at McKinsey & Company.
- Dan Lovallo is a professor of business strategy at the University of Sydney and a senior advisor to McKinsey & Company.
- Carmina Clarke is a senior manager at Macquarie Group.