Job Market Paper
Prospective Outcome Bias: Investing to Succeed When Success is Already Likely
Joshua Lewis and Joseph Simmons
Invited for minor revisions at: Journal of Experimental Psychology: General.
Abstract: How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we developed a theory called prospective outcome bias. According to this theory, people make decisions that they expect to feel good about after the outcome has been realized. Importantly, people expect to feel best about decisions that are followed by successes – even when the decisions did not cause the successes. Consequently, they are most inclined to incur costs to increase their likelihood of success when success is already likely (e.g., people are more inclined to increase their probability of winning a prize from 80% to 90% than from 10% to 20%). We find evidence for this effect, and for prospective outcome bias, in nine experiments. In Study 1, we establish that people expect to evaluate decisions that precede successes more favorably than decisions that precede failures, even when the decisions did not cause the success or failure. Then, we document that people are more motivated to increase higher chances of success. Study 2 establishes this effect in an incentive-compatible laboratory setting, and Studies 3-5 generalize the effect to different kinds of decisions. Studies 6-8 establish that prospective outcome bias drives the effect (rather than regret aversion, waste aversion, or probability weighting). Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: holding expected value constant, people prefer small increases in the probability of a large reward to large increases in the probability of a small reward.
The full paper is available here.
Publications and Working Papers
Extremeness Aversion Is a Cause of Anchoring
Joshua Lewis, Celia Gaertig, and Joseph Simmons
Psychological Science (2019)
Abstract: When estimating unknown quantities, people insufficiently adjust from values they have previously considered, a phenomenon known as anchoring. We suggest that anchoring is at least partially caused by a desire to avoid making extreme adjustments. In seven studies (N = 5,279), we found that transparently irrelevant cues of extremeness influenced people’s adjustments from anchors. In Studies 1-6, participants were less likely to adjust beyond a particular amount when that amount was closer to the maximum allowable adjustment. For example, in Study 5, participants were less likely to adjust by at least 6 units when they were allowed to adjust by a maximum of 6 units than by a maximum of 15 units. In Study 7, participants adjusted less after considering whether an outcome would be within a smaller distance of the anchor. These results suggest that anchoring effects may reflect a desire to avoid adjustments that feel too extreme.
The full paper is available here.
Diminishing Sensitivity to Outcomes: What Prospect Theory Gets Wrong About Diminishing Sensitivity to Price
Joshua Lewis, Alex Rees-Jones, Uri Simonsohn, and Joseph Simmons
Under review at Journal of Marketing Research.
Abstract: Prospect Theory assumes that decision makers are diminishingly sensitive to gains and losses. For example, as losses get larger, the perceived harm of any additional increase in the loss gets smaller. A well-known demonstration of this phenomenon involves prices: people are more interested in saving $5 off a $15 purchase than in saving $5 off a $125 purchase (e.g., the “Jacket/Calculator” scenario). However, we present evidence that diminishing sensitivity to price changes is separate from Prospect Theory and arguably inconsistent with it. Across four studies, we find that people exhibit diminishing sensitivity with respect to outcomes that do not align with their evaluations of gains and losses. Specifically, while the reference point determines whether a price is coded as a gain or a loss, the magnitude of the overall transaction determines how large or small a given gain or loss is perceived to be. This implies that, contrary to Prospect Theory, diminishing sensitivity does not occur with increases in gains and losses per se, but rather, it occurs with increases in the magnitude of the underlying transaction.
The full paper is available here.
Ineffective Altruism: Donating Less When Donations Do More Good
Joshua Lewis and Deborah Small, working paper.
Abstract: Despite well-meaning intentions, people rarely allocate their charitable donations in the most cost-effective way possible. The manner in which cost-effectiveness information is presented can be a contributing factor. In four studies (N = 2,725), when we inform participants of the cost of a unit of impact (e.g., the cost of a mosquito net), they perversely donate less when the cost is cheaper. This result arises because people want their donation to have a tangible impact, and when the cost of such an impact is lower, people can achieve it with a smaller donation. A remedy for this inefficiency is to express cost-effectiveness in terms of “units per dollar amount” (e.g., 5 nets provided per $10 donated) and leave the cost of providing one tangible item unstated, rendering it less salient as a target donation amount. Across Studies 2 and 3, we demonstrate both the inefficiency and the effectiveness of the remedy for incentive-compatible donation decisions about providing meals, oral rehydration therapy, deworming medication, and measles vaccines.
The full paper is available here.
The Directional Anchoring Bias
Joshua Lewis and Joseph Simmons, working paper.
Abstract: When people estimate an unknown quantity after previously considering a high candidate value (or “anchor”), they estimate higher values than they would have done after considering a low anchor. In explaining this effect, previous anchoring research has emphasized the distance between the anchor and the estimate. However, across 5 studies (N = 5,662), we find a directional anchoring bias: people disproportionately estimate values that are higher than high anchors and lower than low anchors, and this bias accounts for between 10% and 20% of the total anchoring effect (Study 1). The bias seems to result from people expressing their intuitions about estimation quantities. For example, when estimating an intuitively high quantity (such as the weight of an elephant), people tend to express their intuition that the quantity is “high” by adjusting their estimates upwards from the anchor. When anchors are higher, a decision to adjust upwards necessitates a higher estimate, so higher anchors lead to higher estimates. Consistent with this mechanism, we find that participants’ intuitions about the stimuli moderate the directional anchoring bias (Studies 2-5). In addition, we demonstrate the adverse effects of this bias for estimation accuracy (Study 3) and consumer choice (Studies 4 & 5).
The Forgone-Option Fallacy
Etan Green and Joshua Lewis, working paper.
Abstract: Forgoing an option motivates decision makers to achieve outcomes that “justify” the initial choice. Analyzing a natural experiment in the National Football League, we find that forgoing an option motivates professional athletes to do no worse than they would have done had they taken the option in the first place. We complement this result with evidence from survey experiments.
The full paper is available here.
Barely Plausible Anchor Values Maximize Bias
Etan Green and Joshua Lewis, working paper.
Abstract: Anchors such as list prices and first offers bias judgments. Which anchor values maximally bias judgments? Analyzing a three-year field experiment in which thousands of participants estimated dozens of familiar probabilities, we generate a 95% confidence interval for the extremity of the bias-maximizing anchor. The lower bound for this interval is 1.5 standard deviations from the mean estimate. However, because we observe similar levels of bias for anchor extremities of up to 3 standard deviations from the mean estimate, we do not obtain an upper bound for this interval. Thus, in the context of probability estimates, bias-maximizing anchors may be maximally extreme, and lack any semblance of credibility.
Trusting Kind Friends and Fair Leaders: How Relationships Affect the Antecedents of Trust
Alexander Moore, Joshua Lewis, Emma Levine, and Maurice Schweitzer, working paper.
Abstract: Although a robust literature explores the antecedents of trust in organizations, little research has addressed how and why the nature of trust differs across relationships. In the present research, we demonstrate that the nature of trust in friends and leaders is fundamentally different. Specifically, trust in friends relies on perceptions of benevolence, whereas trust in leaders relies on integrity. We establish these results across one pilot study, three main experiments, and three replications (total N = 1,387) that explore how individuals trust targets as friends and leaders after observing those targets resolve ethical dilemmas involving tradeoffs between benevolence and integrity. Across our studies, we use both attitudinal and behavioral measures of trust, and a range of ethical dilemmas. This research deepens our understanding of trust and of the relationships among morality, hierarchy, and relationship formation.