Thanks to everyone who participated in the Escaping Paternalism Book Club, and special thanks to authors Mario Rizzo and Glen Whitman for joining the discussion. In case you missed any of the installments, here’s the full list.
We want to thank Bryan one more time for hosting this book club, which has been entertaining and enlightening for both of us. Although the discussion has naturally gravitated toward points of disagreement, in truth we and Bryan are largely on the same page. It’s worth enumerating some of our many points of agreement, most of which relate to the policy application of behavioral findings:
Even outside applications to policy, we and Bryan agree on quite a lot. Specifically:
So where do we and Bryan differ? There are various small points of difference, but the most important and persistent is that (consistent with the literature) we adopt a subjective theory of value, whereas Bryan believes in a form of objective welfare. We have trodden this ground well in previous posts, so we’re inclined to let it lie here.
However, we do wish to respond to one specific point from Bryan’s most recent post. Once again, he invokes the case of children. But this time, his point is not that children might be inclusively irrational (a point we have conceded), but that, in trying to shape our own children’s behavior, we are revealing our own support for objective welfare! Needless to say, we disagree. The reasons are complicated, but the primary one is that most of the ways we control our children fall in the category of “dealing with people who don’t actually understand the world yet.” In Bryan’s specific example, he imagines “a bright child [who] stubbornly insisted that he wanted to play with a loaded gun after you thoroughly warned him of the risks.” We would indeed intervene in this case. But we would do so in large part because we don’t actually believe he genuinely grasps the risks, regardless of how bright he is or how much he insists. Yes, this is a paternalist impulse – appropriately directed at a child rather than an adult – but it doesn’t prove the existence of objective welfare. (There is much more we could say about this example, and about the justification for paternalism toward children in general, but we need to end this somewhere.)
In any case, our disagreement with Bryan about objective welfare confirms the wisdom of our having given the book a nested “even if” structure. At each stage of the book, we essentially say, “Even if you don’t agree with the arguments so far, we’re now offering you an additional and independent argument for resisting paternalism.” Bryan finds the book’s early arguments only moderately persuasive, but the later arguments strongly persuasive. That works for us! (We suspect that different readers might be more persuaded by the earlier arguments.)
As a final observation, we would note that Bryan’s personal anti-paternalism relies to a great extent on his libertarian (and specifically Huemerian) political philosophy. That’s fine, and we have a good deal of sympathy for that point of view. But as we say in the book, we are not philosophers, and therefore we focus on our comparative advantage: conceptual and consequentialist problems with behavioral paternalism. We endeavor to offer argumentation for the anti-paternalist position that doesn’t require one to come from a libertarian philosophical perspective. In many respects, our book is an immanent critique of behavioral paternalism, and it’s one we hope will cause even non-libertarian supporters of that position to reconsider.
This is the third of a series of responses by Mario Rizzo and Glen Whitman, authors of Escaping Paternalism, for my Book Club on their treatise.
Once again, we’d like to thank Bryan for hosting this book club. We also appreciate the many insightful contributions in the comments section. In this post, we’ll discuss a handful of questions raised over the course of the book club but especially in Bryan’s last installment.
But first, a personal request. If you’ve read the book, we would truly appreciate your reviewing it on Amazon!
And now for the remaining topics…
In our earlier reply on this topic, we may have taken the question too literally. For many, “falsifiability” is simply a shorthand way of asking, “Is this claim scientific?” or “Is this claim supported by evidence?” And we believe the answer is a clear yes for both questions. There are mountains of evidence for inclusive rationality. It takes the form of people engaging in all manner of self-regulatory behaviors, from mental budgeting to diet plans to self-reward-and-punishment schemes to environmental structuring to mutual support groups. It takes the form of people learning over time. It takes the form of people reducing their “biases” in response to higher costs and indulging them more in response to lower costs (a phenomenon that Bryan himself identified and named). It takes the form of behavioral economic research revealing tendencies that – shorn of unwarranted normative judgments – evince the existence of non-standard preferences. And so on.
Some might reply, “What you’re doing is seeking confirmation rather than falsification.” But these are in fact two sides of the same coin. Scientific hypotheses quite often generate predictions about the existence of certain phenomena. If a thorough search fails to turn up examples of such phenomena, then eventually that failure is deemed a falsification. For example, evolutionary biologists hypothesized – on the basis of primitive marsupial fossils in South America and contemporary marsupials in Australia – that transitionary marsupial fossils would be found in Antarctica (which at one point connected South America to Australia). This hypothesis led to explorations that did indeed find such fossils. But what if these explorations had turned up nothing? With enough failures, biologists would eventually have had to move on to other, better-supported hypotheses. That’s how falsification works in evolutionary biology: a search for confirmatory evidence that might not materialize.
The process is similar in the study of human decision-making. If the search for inclusively rational behaviors had turned up nothing, we would have had to reconsider our position. But it didn’t turn out that way.
Children and Opioid Addicts
Even if inclusive rationality is generally correct, there is still the question of its limits. Are there any exceptions? This, we think, is what Bryan really wants to know when he asks about falsifiability; hence his frequent references to children and opioid addicts. We are happy to take these as probable exceptions, at least in many instances. Why do we hesitate to go beyond “probable”? Because we are experts in neither child psychology nor opioid addiction. What we know of them from general awareness and personal observation tends to support that hypothesis, but we don’t want to oversell our understanding.
We also suspect that, in both cases, the story is more complex than “these people aren’t rational.” It seems likely that some child and addict behaviors can be explained as the result of peculiar but not irrational preferences. Other behaviors, especially for children, can be explained as mere ignorance about how the world works (kids don’t necessarily understand that throwing a tantrum won’t make the broken toy fix itself) or testing the limits of their power (maybe the tantrum will motivate Mom and Dad to go buy a new toy). None of this means genuine irrationality isn’t also involved – only that teasing out its role could be a difficult undertaking.
Could other exceptions be found? Sure. Some varieties of mental illness might qualify. There might be some specific behaviors even in “normal” (adult, non-addicted, mentally healthy) people that would qualify. But if our research has taught us anything, it’s that rationality is more complex than you think and often appears where you least expect it. Behavioral economists have found countless “gotcha” cases of alleged irrationality – but then closer examination forces us to say, “Hmm, maybe not.” The inclusive rationality approach forces analysts to have some humility, and to examine more carefully rather than indulging the impulse to find fault.
Bryan’s overarching perspective comes out most clearly in Part 5 of the book club. Here, he makes clear his support for a notion of objective welfare.
We should emphasize that Bryan’s position was not our primary target in the book. Many behavioral paternalists (i.e., new-school paternalists) explicitly disavow the notion of objective welfare, even if they sometimes unwittingly slip into that frame of mind. Like most economists, behavioral paternalists embrace the subjectivism of preferences and personal values. We have met them on that playing field.
By contrast, although Bryan is not a paternalist, he shares the old-school paternalists’ fundamental value judgment: that some people’s preferences are just wrong, and they would be better off if corrected. This is a philosophical position we do not share, so perhaps we should simply agree to disagree.
However, Bryan also suggests that we, too, have accepted some notion of objective welfare. In response to a passage in the book where we concede – for the sake of argument – that we could indulge our intuition in certain very extreme cases that people are acting irrationally, Bryan responds: “I agree that ‘intuition’ (or just ‘common-sense’) says this. The reason, though, is that ‘well-considered well-being’ is a thinly-veiled version of objective well-being. The morbidly obese are plainly acting in accordance with their own preferences, but they are acting contrary to their own long-run happiness.”
On this point, we strongly beg to differ. Well-considered well-being is not just thinly veiled objective well-being. We dispute the notion that anyone who thinks carefully enough will choose Bryan’s personal values! On this front, the behavioral paternalists are right: the appropriate standard of well-being is the one you would impose on yourself. If the morbidly obese person looks at his life and genuinely concludes, “You know, all things considered, this is the life for me,” we, as economists, have no objective basis for saying otherwise. If there are legitimate grounds for deeming this person irrational and possibly in need of help, it’s because his behavior is making him worse off from his own perspective.
We would also observe that the appeal to “common sense” is potentially tyrannical – not in Bryan’s hands, but in the hands of those who don’t share his libertarian value commitments. Common sense is often shorthand for “what we happen to like.” Laden with social desirability bias, common sense can become a cudgel for imposing one’s own values on others. Which is not to say we should never apply common sense; again, we might be willing to indulge that intuition in some extreme cases. But it is playing with fire, so to speak.
To be very clear, we don’t dispute the existence of objective standards. In principle, we can objectively define the choices that will maximize health, or lifespan, or long-term financial wealth. But how should those things be weighed against other values, such as spontaneity, indulgence, and hedonic pleasure? That is a matter of personal preferences and values. Most importantly, tradeoffs among competing values – some or all of which may have a claim to objectivity – are what matter in any policy evaluation. We cannot say, for example, that people are undersaving if we do not know the appropriate tradeoff between present and future “objective goods” at the margin.
Overapplication of “Irrationality”
A recurring theme of the book is that the term “irrational” has been applied far too broadly. “Irrational” is not a synonym for “wrong,” “undesirable,” “ill-informed,” or any number of other negative qualities. “Rationality” and “irrationality” relate specifically to the suitability of means to ends – that is, how well people match their choices to their goals, whatever those goals might be. At least, that is how we use them in the book. The need to distinguish irrationality from other negative things comes up in Bryan’s fourth book-club post:
RW embrace glorious candor. If we fully embrace their “inclusive rationality,” RW freely admit that we can no longer impugn paternalists as irrational. I’d add, however, that RW expose their opponents’ irrationality so convincingly that we should reject inclusive rationality. Many beliefs are arrant folly, and relabeling this arrant folly as “inclusive rationality” is bending over backwards for no good reason.
We would not want to be accused of supporting “arrant folly”! But it’s worthwhile to recognize that arrant folly can nevertheless be rational. We lay out the reasons why in the chapters on political economy and slippery slopes, but here’s the short version: Even fully rational policymakers can support bad policies. This can happen for numerous reasons. They may lack good information. They may not have absorbed the wisdom of our book! They may care only about what appeals to a rationally ignorant public. They may expect never to bear the long-run consequences of their policies. They may expect to be rewarded for the up-front benefits or illusory benefits of their policies. They may expect to receive greater status and larger budgets in administering such policies. They may enjoy exercising control over others. They may belong to (or take contributions from) interest groups that benefit from paternalistic policies.
Furthermore, as we emphasize in the book, even “irrational” contributors to bad policy – such as action bias, overconfidence, and confirmation bias – are not clearly irrational. They are tendencies with low costs and sometimes notable benefits to policymakers. This is Bryan’s own “rational irrationality” in action: the political arena does little to discourage, and much to encourage, the indulgence of our biases. And we don’t have to concede they are genuine biases to make this point! Some biases are just nonstandard (but totally relatable) preferences, such as “I enjoy feeling like I’m right, and I dislike admitting I’m wrong.” If we conclude that so-called biases like these are in fact inclusively rational, that doesn’t excuse the resulting bad policy. See above: even fully rational policymakers can support bad policies.
Finally, policymakers may not share our values. They may implicitly or explicitly believe in a notion of objective welfare (as Bryan does) and weight it heavily enough to outweigh pragmatic concerns. They may not share Bryan’s libertarian value commitments, which do a lot of the work in helping him reach anti-paternalistic conclusions despite his belief in objective welfare. We disagree with those who arrive at paternalism in this way, but we wouldn’t call them irrational. In the book, although some of our arguments may invoke libertarian values, for the most part we endeavor to provide reasons to resist paternalism that could persuade people who don’t share our ideological priors.
Thanks to everyone who’s participated in the Escaping Paternalism Book Club. I just left my review on Amazon, and encourage readers to do the same.
I’m also happy to take one last round of questions on Escaping Paternalism, and suspect that Rizzo and Whitman are up for an encore as well. If you have any remaining or Big Picture comments or questions, please share them in the comments for this post.
For now, here are my responses to some earlier questions.
Hazlitt takes an economic model of willpower. The will is simply that which we choose to do at one time, and which has won out over competing desires. Given that many short-term desires may conflict with long-term goals, the difficulty lies in keeping the long-term vision or goal front of mind so that it is not ‘dethroned’ by shorter-term desires. So far, this seems like a fairly standard view: we aim at the best, but trip up on the way. This would be deemed irrational – we haven’t chosen the correct means to achieve our ends.
But Hazlitt doesn’t call this irrational behaviour. It is not, as Rizzo and Whitman describe the behavioural economists’ view, a failure to choose effective means to achieve given, subjective ends. In Hazlitt’s view, it is that the long-term goal was poorly chosen. If, when the time comes to pay the price – to sacrifice by giving up something – we choose not to give up that thing, we have chosen a goal that we were not willing and able to pay the price for. That is, we desired the long-term goal, but did not demand it. Therefore, willpower is about what we demand – which is the product of how we value some things relative to others. Are we choosing goals in alignment with our values?
So, is a preference only useful in considering whether behaviour is rational or not if we are willing to pay the price for it, i.e., that we demand it? Are preferences assumed to be the things we demand, not desire? How does this idea relate to Rizzo and Whitman’s book? Thanks.
Hazlitt’s idea fits well with my view that most alleged “self-control problems” are driven by Social Desirability Bias. People say they want socially approved things, but their actions reveal what they actually want. When someone says, “Nothing means more to me than my family” after drinking away the rent money, the correct interpretation is that they prefer alcoholic beverages to their family’s well-being. Why claim otherwise? Because it sounds better. Yes, actions speak louder than words – but as long as some people fail to accept this truism, expect the flow of flowery verbiage to continue.
“I say the rational discount rate for utility is no time discounting at all.”
Isn’t that like ignoring compounding interest? Wouldn’t that be irrational, or at least unintelligent?
No. This confuses prices with preferences. I’m saying that you should not discount future utility merely because it is in the future, not that you should have flat consumption regardless of interest rates. If you can earn interest by waiting, a person who does not discount the future might opt to wait. Indeed, if you can earn interest by waiting, even a person who does discount the future might opt to wait!
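The prices-versus-preferences distinction can be made concrete with a small numeric sketch (the square-root utility function, endowment, and rates below are illustrative assumptions, not figures from the discussion):

```python
import math

def utility(c):
    # Concave utility of consumption: diminishing marginal utility.
    return math.sqrt(c)

ENDOWMENT = 100.0  # resources available to consume now or invest

def prefers_to_wait(discount_factor, interest_rate):
    # Compare utility of consuming now against the *discounted* utility
    # of investing at the market rate and consuming later.
    u_now = utility(ENDOWMENT)
    u_later = discount_factor * utility(ENDOWMENT * (1 + interest_rate))
    return u_later > u_now

# No utility discounting (factor 1.0): a 10% interest rate is enough to wait.
print(prefers_to_wait(1.0, 0.10))   # True

# A mild utility discounter (factor 0.95) consumes now at a 10% rate...
print(prefers_to_wait(0.95, 0.10))  # False

# ...but even the discounter waits when the rate is high enough (30%).
print(prefers_to_wait(0.95, 0.30))  # True
```

Zero utility discounting thus doesn’t mean ignoring compound interest: the interest rate enters through the budget, not through the preferences.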
Which is a worse problem overall: people discounting the future too much or not discounting the future enough? A lot of people have trouble optimizing for the present and thus put a great deal of weight on the future. They sacrifice happiness in the present for the promise of happiness in the future. For many people, this makes sense: The pains of pregnancy are worth it for the joy of a child. If you want a career in the military, the few months in boot camp make a lot of sense even if you’re miserable for that brief time.
A great and underrated question! Ultimately I say that excessively discounting the future is the greater problem. High discounting (better known as “impulsivity”) leads predictably to all the canonical “social pathologies”: poverty, broken families, substance abuse, crime, and more. Furthermore, impulsivity is much more common, especially among young people. At the same time, I agree that a noticeable share of adults, perhaps 10-15%, are so future-oriented that they fail to enjoy life. They accumulate much wealth but get little pleasure out of it.
I don’t think it’s reasonable to say objective welfare exists. It’s always possible to imagine a mind that does not want something, even if all real people want whatever it is.
I’m not saying that all humans want their own objective welfare. I’m saying that objective welfare exists whether a human wants it or not. Not all kids want to go to sleep, but all kids will have unhappier lives if they’re habitually sleep-deprived.
To be clear, when I say “happy” and “unhappy” I mean emotional states, not preference satisfaction. We know them in ourselves directly via introspection and in others inferentially via facial expression, demeanor, and so on.
If you really want to bite the bullet on anti-paternalism, I’d like to hear how Russia should have handled the alcohol epidemic during the 1990s. Should it have kept taxes low?
I say yes. And instead of preaching to heavy drinkers to stop, I would have preached to families and romantic partners to shun heavy drinkers.
When you begin the final chapter of Escaping Paternalism, Rizzo and Whitman (RW) seem ready to rest their case. They neatly recap their nested argumentative strategy:
We began this book with an extended critique of the neoclassical model of rationality, which behavioral economists have rejected as a positive model of human behavior but nevertheless have accepted as a normative standard…
The rest of the book could easily be read as a series of “even if” arguments: even if we accept neoclassical rationality as true rationality, behavioral science has not advanced far enough to answer a number of crucial questions for policymaking, such as the generalizability of behavioral results across different contexts and the applicability of laboratory results in the wild. Even if we had better research on those questions, policymakers would face practically insurmountable local and personal knowledge gaps that would hobble attempts at crafting paternalist policies that are effective and cost-benefit justified. Even if we could acquire such knowledge, policymakers have little incentive to do the hard work of crafting good policies, especially when interested parties (such as rent-seekers, moralists, and bureaucrats) can tilt the legislative and regulatory process in their favor – and so much the worse if policymakers are afflicted by any of the cognitive biases attributed to regular people. And even if we set all of these concerns aside, behavioral paternalism creates the risk of a slippery slope toward more extensive and intrusive policies that go beyond what has been justified by theory and evidence, resulting in ever greater restrictions on individual choice.
They also fault their opponents for dire intellectual negligence:
The reality of behavioral policymaking stands in sharp opposition to the rhetoric. Behavioral paternalists have frequently emphasized the need for evidence-based policy (Thaler 2015b, 338) that should be implemented in a cautious and disciplined manner (Camerer et al. 2003, 1212). Their confident tone often suggests that it is anti-paternalists who eschew evidence and rely on gut instinct for policymaking. To paraphrase the Bible, they behold the mote in the anti-paternalists’ eyes while neglecting the beam in their own. Consider this telling passage from Sunstein that purports to recognize the need for better evidence:

With respect to errors, more is being learned every day. Some behavioral findings remain highly preliminary and need further testing. There is much that we do not know. Randomized controlled trials, the gold standard for empirical research, must be used far more to obtain a better understanding of how the relevant findings operate in the world. Even at this stage, however, the underlying findings have been widely noticed, and behavioral economics, cognitive and social psychology, and related fields have had a significant effect on policies in several nations, including the United States and the United Kingdom. (Sunstein 2014, 11–12)
Notice the speed of the transition from the need for cautious collection of more research to a congratulatory discussion of policy impact. If the need for better research were taken seriously, surely the “significant effect on policies in several nations” would be cause for concern, not approbation.
Instead of heading home, however, RW return to the stage for a stunning encore. They list, explore, and critique an encyclopedic list of intellectual “escape routes” for paternalists. In order of appearance: revert to objective welfare paternalism; appeal to obviousness; shift the burden of proof; loosen the definition of paternalism; rely on the “libertarian condition”; invoke the inevitability of choice architecture; focus on the irrational subset of the population; rely on extreme cases; treat behavioral paternalism as a “toolbox”; and invoke fiscal externalities.
Each of these sections is rich and wise. Yes, if you make paternalism vague enough, “We’re all paternalists now.” A silly game.
Is GPS really a form of paternalism? Sunstein thinks so. In fact, GPS is one of his favorite examples, and not just when it’s a gift. He even calls GPS an “iconic nudge” that should be seen as “a form of means paternalism,” and one that paternalists should seek to build upon (Sunstein 2015, 61–62). GPS is, of course, almost always self-adopted… Is consulting a map also a form of paternalism? A map does, after all, simplify the territory that it depicts in order to ease the process of finding things.
But GPS is not the most trivial example of alleged paternalism. According to Sunstein, a restaurant providing a low-calorie menu for its customers is (or can be) a paternalist nudge (2014, 2). According to Thaler, giving someone accurate instructions on how to get to the subway is a form of paternalism (2015b, 324). Text-message reminders from doctors (Thaler 2015b, 342) and credit card companies (Sunstein 2015, 518) are also apparently paternalism… As Sunstein and Thaler see it, any time someone gives helpful advice, provides useful information, or gives a friendly warning, that’s paternalism. The paternalist bar seems to be remarkably low.
Escaping Paternalism then provides a series of “friendly warnings” about how to spot dangerous paternalism:
We want to focus on the characteristics that distinguish innocuous interventions from more problematic ones, irrespective of the label attached. Here are several factors, often overlapping and highly correlated with each other, that will help us to distinguish the harmless activities from more troubling ones…
RW name the following criteria: self-imposed versus other-imposed; invited versus uninvited; competitive versus monopolistic environment; coercive versus voluntary; public versus private; and informative versus manipulative. On the latter point:
When a Swedish maker of snus, a form of smokeless tobacco, petitioned to modify the warning label on its product to say that it carries “substantially lower risks to health than cigarettes,” the FDA rejected the petition even though the claim is true given current medical knowledge. Why? The primary concern seems to have been that “labels that indicate lower risk may tempt people, particularly young people, to use tobacco products that they might not have tried otherwise” (Tavernise 2015). We don’t know whether any behavioral paternalists weighed in on this particular issue, but it is indicative of how providing truthful information, even that which is clearly relevant to some consumers, takes a back seat when the regulatory focus is on changing behavior.
In short, behavioral paternalism has a complicated relationship with the truth. Truthful disclosures may be useful from a paternalist standpoint . . . but not necessarily. To judge whether a given piece of information is desirable, the paternalist must have some notion of how the targeted agent should behave, all things considered. Then information can be delivered – or obscured – in the manner most likely to nudge the agent in the supposedly correct direction.
None of these “escape route” sections is without its charms, but RW’s critique of the “inevitability” argument is my favorite. Highlights:
If paternalism is inevitable, it’s pointless to discuss whether or not to be paternalistic, and instead we should focus on how and how much to be paternalistic. But in fact, only choice architecture is inevitable. That means we can ask about other ways, besides paternalistically, that choice architecture might be chosen.
Even this wording may be too narrow, as “chosen” implies a level of intentionality… It would be a mistake, then, to assume that undesigned choice architecture is simply arbitrary or random. Here is one simple example: goods displayed in public view in a store, particularly those with price tags, are available for sale to anyone who can afford the price. When this rule is violated, merchants will usually say so explicitly (“Display items not for sale”). This simple default rule, a kind of choice architecture, minimizes confusion and eases communication between potential buyers and sellers. As far as we know, this practice was never explicitly chosen by anyone.
Consider a more detailed example: the case of refunds and exchanges. In the United States, there is no general legal rule requiring merchants to let customers return goods for a refund or exchange. Although there are some exceptions, especially with regard to door-to-door sales, for most transactions merchants are free to make all sales final (Ben-Shahar and Posner 2011, 115–116)… Consumers assume goods can be returned in good condition during a reasonable period, unless sellers explicitly say otherwise…
Furthermore, the usual default rule is suspended in a number of familiar cases, undergarments and perishable foods being well-known examples. In these cases, most customers are aware that refunds cannot be taken for granted, even if the merchant hasn’t said so explicitly. These goods presumably differ from other goods because the goods in question “depreciate” quickly after sale or use. What all of these cases together suggest is an ongoing market for default rules. If all-sales-final were the universal default, we might suspect that consumers aren’t thinking carefully and sellers are simply taking advantage of them. But the variation of default rules tells a different story – one in which market default rules are responsive to the needs of both buyers and sellers.
Even when defaults are chosen deliberately rather than evolving, there exist ways to decide among default rules that do not necessarily involve paternalism. Here are a few possibilities:
* Defaults may be chosen in line with conventional expectations. This has the advantage of not surprising or confusing people who have become accustomed to the usual rules…
* Default rules may be chosen in a minimalistic fashion – i.e., assuming that people are not making promises or exchanges unless they explicitly consent to them. This would rule out, for instance, a default rule that assumes for-cause termination, as that constitutes an additional promise on top of the simple agreement to exchange labor for compensation…
* Defaults may be chosen in line with the goal of minimizing transaction costs, which usually would mean minimizing opt-outs…
* Defaults could be chosen to minimize how much choosers infer from the rule. In other words, people are more inclined to infer advice or information from some defaults (or framings) than others…
1. Big picture: Rizzo and Whitman’s Escaping Paternalism is probably the best book about paternalism ever written. The authors demonstrate an old-school mastery of their subject, and pack the work with so much insight that you’ll keep learning new things if you read it five times. This book is a more valuable contribution to human knowledge than the latest paper in the AER, or even the whole latest issue of the AER.
2. While RW insist otherwise, I reluctantly conclude that their “inclusive rationality” verges on unfalsifiable. Given their level of expertise, their refusal to name a single definite example of irrationality in the world is telling. They’ve raised their bar so high even they can’t meet it.
3. Still, RW rightly point out that mainstream economists are too quick to call others irrational. Yes, saying, “People changed their mind,” is an unsatisfying explanation of human behavior. Yet people really do change their minds, and they’re hardly “irrational” to do so. Indeed, stubbornness is a greater cognitive flaw than flightiness. To quote Emerson:
A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day.
4. While reading the book, I kept thinking about (a) kids; and (b) opioid addicts. RW cover an immense range of topics, but gloss over the human beings almost everyone thinks should be treated paternalistically. RW could have bitten the bullet and declared kids and opioid addicts to be “inclusively rational.” Or they could have backed off and said, “Paternalism for such people clears the burden of proof.” Disappointingly, they decline to take a stand.
5. RW have little patience for the “revert to objective welfare paternalism” escape route:
From our perspective, the very idea of objective welfare is implausible. No such thing as “welfare” exists until an individual mind comes into being. The individual mind generates values, desires, and preferences (typically through interaction with many other minds). And as it turns out, different minds can generate very different values, desires, and preferences. People are idiosyncratic; they want different things. And despite the many wants they have in common, they want the same things to a different extent. We see no plausible grounds for stepping outside of the mind to define what is good for it.
Despite my shared hostility to paternalism, I staunchly disagree. The very idea of objective welfare is not merely plausible, but compelling. (Even “inevitable”!) Like everyone else, the homeless act on their own preferences; but almost all of them would have much higher objective welfare if they adopted a sober bourgeois lifestyle. The same goes for children, alcoholics, drug addicts, and so on. Left to their own devices, these impulsive humans generate “very different values, desires, and preferences.” And left to their own devices, they ruin their lives.
6. RW convincingly accuse the “new paternalists” of being crypto old paternalists.
The apparent willingness of behavioral paternalists to favor some expressed preferences over others also colors the policy debate. They lend the veneer of science to what are in fact subjective judgments, giving policymakers the cover they need to implement policies based on prejudices and moralistic attitudes, such as the universal desirability of conventional virtues such as patience, moderation, and temperance.
So far so good. But aren’t the new paternalists wrong for the right reason? Namely: Despite RW’s incredulity, the conventional virtues of patience, moderation, and temperance are indeed universally desirable. If someone taught their kids to be impatient, immoderate, and intemperate, you would be baffled. And if you had to quickly summarize the “root causes” of the miseries of homelessness, drug addiction, and so forth, isn’t the obvious answer that those who suffer fail to practice these bourgeois virtues? This is no “veneer of science”; this is common sense.
7. RW almost grant this point in their “Rely on Extreme Cases” section:
If you want to demonstrate the irrationality of human beings, one very simple strategy is to point to extreme cases: drug addicts whose actions destroy their lives, compulsive gamblers who lose everything they have and more, morbidly obese people who cannot even leave their homes. It is hard to believe that such people are acting rationally, even by the most permissive definition.
In these cases, perhaps we can safely indulge our intuition that their biases are truly damaging in terms of their own well-considered well-being, and if only they could see their situation globally they would truly wish to behave differently.
I agree that “intuition” (or just “common-sense”) says this. The reason, though, is that “well-considered well-being” is a thinly-veiled version of objective well-being. The morbidly obese are plainly acting in accordance with their own preferences, but they are acting contrary to their own long-run happiness.
Couldn’t you simply say, “The morbidly obese care about food more than happiness”? You can and you should. I’ve argued much the same about the “mentally ill.” But this doesn’t show that objective well-being is a myth.
8. If we concede that objective welfare exists, doesn’t this open the door to paternalism? Sure, in the same sense that conceding that China exists opens the door to protectionism. Logically speaking, the cleanest way to prevent a trade war with China is to deny that China exists. This, however, is an absurd claim, and will at best convince your most dogmatic allies. Similarly, the cleanest way to prevent paternalism is to deny that objective welfare exists. But this, too, is an absurd claim, and will at best convince your most dogmatic allies.
9. What then is the reasonable position on paternalism? Per Michael Huemer’s The Problem of Political Authority, we should begin with a strong but defeasible moral presumption against using coercion to increase objective welfare. This is hardly an exotic libertarian position; in their personal lives, almost everyone holds such a presumption. To justify paternalism, you have to show that the net benefits of paternalism are large, even after factoring in knowledge problems, public choice problems, slippery slopes, and all the other practical difficulties RW detail.
10. This Huemerian position explains why it is typically OK to treat your own young children paternalistically. Why? Because they’re incompetent, you really do know better, and you really do have their best interests at heart. (And don’t forget, “My house, my rules.”) The same goes for elderly relatives with dementia, though the bar should be much higher because the love of the old for the young is so much stronger than the love of the young for the old.
11. Doesn’t this justify actually-existing paternalistic government policy? Hardly. As RW explain, governments habitually use their powers with gross negligence – and plenty of earlier researchers document the collateral damage of policies like Prohibition and the War on Drugs. Yes, the world is complex; but as long as there is a strong moral presumption against coercion, the complexity of the world tips the moral scales strongly toward inaction.
12. To return to my own nagging concern, what about the opioid addicts? I say government should leave them – and their suppliers – in peace. Why? Yes, they’re acting strongly against their own objective welfare. Yet this pales before (a) the classic economic arguments against prohibition, (b) RW’s pragmatic concerns, and (c) the strong moral presumption in favor of leaving strangers alone. The case against parents forcibly placing their adult children in rehab for their own good is weaker, though even there we should be skeptical. As the old joke goes, “How many therapists does it take to screw in a lightbulb? Just one, but the bulb must want to change.”
If you live in a city, there’s a good chance that on the first weekend of May each year you can find free walking tours highlighting the history, local insights, and hidden nuggets of the neighbourhood you’re walking through. The people leading these tours do so in honour of the life and work of Jane Jacobs.
It seems strange that a text destined to become a cornerstone of the study of urban planning begins by declaring itself against the enterprise of urban planning as it existed in 1961. But Jacobs opens her masterpiece,
During our VRG discussions, contributor Jon Murphy made an interesting observation: Jacobs offers not only a condemnation of city planning in her day but a new model for understanding cities and practicing city planning. In this she parallels Adam Smith, who saw his An Inquiry into the Nature and Causes of the Wealth of Nations as “the very violent attack…made upon the whole commercial system of Great Britain.” In presenting his attack, Smith laid much of the groundwork for modern economics.
Jacobs found plenty to criticize in Adam Smith’s thought. She devotes pages in The Economy of Cities and Cities and the Wealth of Nations to criticizing fundamental assumptions in Smith’s theories, including his failure to question the assumptions underlying his four-stage theory of civilization and his treatment of countries as meaningful economic units. Jacobs also criticizes Smith’s emphasis on the division of labour as the driving economic force. But Jacobs and Smith had more in common than she may have cared to admit.
In Smith’s case, the “whole commercial system of Great Britain” had been preoccupied with wealth as measured by its stores of bullion, either mined or purchased by maintaining a favourable balance of trade—more exports, fewer imports. Jacobs faced a city planning establishment that seemed obsessed with controlling and understanding cities and their inhabitants by controlling and analyzing their built environment. They both confronted orthodox thought preoccupied with gold and silver, streets and plazas. With stuff.
Jacobs and Smith were pursuing the same goal, though they may not have put it in these terms: they sought to improve their field by re-centering analysis around people. Though Jacobs’ human-centric urban analysis is now much more mainstream among committed urbanists, it remains tempting to envision cities as plannable and perfectible independent of the lives of the people living in them. And the “doctrine of the balance of trade” that Smith decried as absurd in the late 18th century remains a contested issue in public policy.
Jacobs says that cities have to make room for even the plans of eccentric weirdos, and the only way that happens is if everybody participates in the business of creating the city. And Smith’s political-economic system measures the wealth of a nation not by the gold in its treasury but by the production and consumption of its people. All of its people. Not only the producers, and not only the rich and great.
If there’s one thing that should be clear after a close reading of Death and Life, it’s this: Cities are made of people. Reading more of Jacobs’ work reveals that she thought that cities are also natural units of economic analysis (hence her criticism of Smith’s use of countries). Like Smith, Jacobs believed that economies are made of people. Maybe we shouldn’t be surprised that both authors also devoted time to theories of morality: Smith in The Theory of Moral Sentiments and Jacobs in Systems of Survival.
After many of our reading groups, we’ve had the chance to ask the author we’ve been reading a few questions sparked by reading group discussions in an “Ask Me Anything” video. Unfortunately, this time that option wasn’t available—Jane Jacobs passed away in 2006. Instead, I spoke with Sandy Ikeda, a Jacobs scholar and professor of economics at Purchase College, about Jacobs’ human-centric conception of cities and economies. We discussed Jacobs’ thought, its implications, and what she might have made of some of the challenges facing the world today.
Janet Bufton (Neilson) co-founded the Institute for Liberal Studies in 2006 and has worked as a program coordinator with the Institute for Liberal Studies since 2013. She manages the Liberal Studies Guides project.
This is the second of a series of responses by Mario Rizzo and Glen Whitman, authors of Escaping Paternalism, for my Book Club on their treatise.
Inclusive Rationality and Falsifiability
In the first installment of the book club, Bryan raised questions about the falsifiability of inclusive rationality and wondered whether we can offer a concrete example of a potential falsification.
This is an issue that we wrestled with as authors (as indicated by the passages Bryan quoted). We don’t wish to present our position as tautological. We do think that irrationality is possible, even under our permissive definition of the term. But we also think finding examples that are both systematic and undeniable is extremely difficult. With apologies, we can’t explain why without delving into the philosophy of science.
In most philosophies of science, even Karl Popper’s, it is not true that everything is directly falsifiable. Instead, scientific research is typically governed by a paradigm (Kuhn), research programme (Lakatos), or metaphysical research program (Popper). The purpose of such a programme is not to be directly falsified, but to generate ideas and falsifiable hypotheses. We believe “inclusive rationality” to be such a programme, and it incorporates many subsidiary questions with testable implications: whether (and how much) people learn over time; whether (and how) people construct their environments to shape their behavior; whether (and how) people adopt regimes of self-reward and self-punishments; whether these strategies are effective at changing behavior; and so on. Bryan’s own “rational irrationality” is another example. Many of these hypotheses have passed the test, but it’s possible that some won’t. If some of these hypotheses were falsified, the programme could still survive. But with enough falsifications, it could in principle be overturned in favor of some alternative programme.
Aside from its being a research programme rather than a simple hypothesis, there is another barrier to the direct falsification of inclusive rationality, and that relates to positive versus normative/welfare analysis. If we just want to know whether people behave in a certain way (do demand curves slope downward?), then no problem – test away! Likewise, if we just want to know if people behave consistently with a given model (do they act as if maximizing a Cobb-Douglas utility function?), again there’s no problem – test away! But once we get to normative/welfare issues, “as if” is no longer good enough. The explanation must be accurate in a more thoroughgoing way. We have to know something about what specific people really want. And this, it turns out, is extremely difficult to get at.
Neoclassical economics “solved” this problem via the presumption of preference revelation. But if, for the sake of argument, we agree with the behavioral economists that actions don’t necessarily reveal true preferences, then how can we know? To repeat a line from the book that Bryan quotes, “If actions do not reveal preference, then what does?” It is a conundrum that we do not purport to solve. We should note that the behavioral paternalists have not solved it, either.
To summarize, the scientific method, great as it is, does not always answer the precise questions that interest us. Some questions lie beyond its reach, at least for now. But we can use the scientific method to explore a set of related questions. We don’t know what people really want, but we can investigate to see what kind of tools they use – or don’t use – to shape their behavior, as well as how effective those tools are. It turns out that people don’t behave consistently with the old neoclassical model of rationality – but nevertheless, their behavior strongly suggests an ongoing process of trying to achieve their purposes as well as they can, given both environmental and cognitive constraints.
We still owe an answer to the question, “Can you provide a concrete counter-example to inclusive rationality?” Given the need to get inside someone’s head to learn what they really want, a psychiatrist might be able to offer better answers than we could. She might point to specific clients who commit the same error again and again, yet cannot manage to break out of the destructive cycle. Then again, their going to a shrink suggests at least a meta-rational desire to change.
Bryan offers the example of opioid addiction, and we think it might work. Not in the sense that it’s irrational to take opioids – the initial decision is surely quite rational for many – but that for some users, an emergent addiction may progressively reduce their ability to engage in the type of self-management behaviors we lay out in the book – in essence, robbing their inclusively-rational toolkit of its tools. The concrete evidence might involve proof that opioid users tend to exhibit few or none of these behaviors, or that these tools fail for them at a much higher rate. We are far from experts in this field, so we would not want to commit ourselves to this position, but it seems reasonable.
Returning to the main point, we view the practical difficulties of ascertaining inclusively irrational behavior, not as an unfortunate limitation of our analysis, but as a reflection of a fundamental epistemic problem: human beings are purposeful, but their purposes are not easily identified. Taking off our economist hats and putting on our policy hats, it seems to us that policymakers ought to adopt a presumption of inclusive rationality. The opposite presumption of irrationality would surely be vastly over-inclusive and a threat to liberty.
This is the first of a series of responses by Mario Rizzo and Glen Whitman, authors of Escaping Paternalism, for my Book Club on their treatise.
Present Bias and Time Preference
We should begin by thanking Bryan for his many kind words about our book, and also for hosting this book club. We appreciate his critiques almost as much as his praise! Given the many excellent points raised by Bryan and others, we’ve decided to break our reply into a few posts – beginning with the issue of time preference and present bias.
In both of the first two installments, Bryan argues that any time discounting ought to be considered irrational – i.e., that future utility ought to be weighted exactly as much as present utility. The most important thing to realize is that, while Bryan’s position is certainly defensible, it is highly uncommon among both neoclassical and behavioral economists. So in that sense, his position is orthogonal to our arguments. Neoclassical and behavioral notions of rationality both allow discounting of the future relative to the present. For them, the red flag of irrationality is inconsistency of discount rates, not discount rates that differ from zero. That is the argument that we have engaged with (and attempted to refute) in our book.
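For readers unfamiliar with the distinction, it can be sketched with the standard quasi-hyperbolic (β-δ) model of present bias used in this literature. This is a minimal illustration, not anything from the book; the value of β and the payoffs below are purely illustrative numbers chosen to make the reversal visible.

```python
# A minimal sketch of quasi-hyperbolic (beta-delta) discounting.
# The point: even with a constant long-run discount factor (delta),
# the one-time present-bias penalty (beta) on all delayed rewards
# produces a *predictable* preference reversal as time passes --
# the "inconsistency" that behavioral economists flag, as opposed
# to mere discounting of the future.

def discounted_value(reward, delay, beta=0.6, delta=1.0):
    """Value today of `reward` received `delay` periods from now.
    Immediate rewards are undiscounted; all delayed rewards get the
    extra one-time penalty beta (the present-bias factor)."""
    if delay == 0:
        return reward
    return beta * (delta ** delay) * reward

# Choice: smaller-sooner reward (10 at t=1) vs. larger-later (15 at t=2).

# Viewed from t=0, both rewards lie in the future:
plan_small = discounted_value(10, delay=1)   # 0.6 * 10 = 6.0
plan_large = discounted_value(15, delay=2)   # 0.6 * 15 = 9.0

# Viewed from t=1, the smaller reward has become immediate:
now_small = discounted_value(10, delay=0)    # 10.0
now_large = discounted_value(15, delay=1)    # 0.6 * 15 = 9.0

assert plan_large > plan_small   # at t=0, the agent plans to wait
assert now_small > now_large     # at t=1, the preference reverses
```

Note that if β were 1 (pure exponential discounting), the ranking of the two rewards would be the same at every date, no matter how small δ is; it is the β term, not discounting per se, that generates the reversal.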
That said, it is worth considering Bryan’s position on its merits. The idea of zero time-discounting is based on the idea that “utility is utility is utility,” and that discounting on the basis of anything else – such as uncertainty about the world, potential change of one’s preferences, or declining enjoyment because of age – is a separate matter. In other words, once we have controlled for these other factors, the remaining level of time discounting should be zero.
Our first reply to this is to ask whether it’s really possible to separate out everything else that matters. In any real-world measurement of time preference, you can’t rule out uncertainty. You can’t rule out the participants’ awareness that the passage of time means they might die, their preferences might change, their ability to experience pleasure may decline, and so on. So measured time preference will always incorporate at least some of those factors.
The word utility itself may be misleading in this context. What is this utility that might be discounted? Presumably it is something real – a feeling of happiness or satisfaction. But we don’t feel in the abstract; we feel about specific things, such as consumption of food, socializing with friends, watching movies, and so on. It is all contextual, including the experience of the passage of time. And time itself is not a homogeneous medium except in idealized models. So outside of a mathematical model created for predictive or heuristic purposes, it’s not clear that utility (and discounting utility) means much at all. Humans do not make trade-offs between abstract amounts of utility to be received at different times. They make trade-offs between various types of consumption and well-being to be experienced at different times. It is these specific sources of satisfaction that may be discounted. This is why the notion of an abstract rate of time preference, which applies directly to “utility” rather than emerging from the process of comparing different forms of satisfaction, is a chimera.
In addition, the idea of zero-discounting presupposes a specific concept of the self as it endures through time. We would echo the comments by B K, who mentioned philosophical arguments to the effect that your distant-future self is arguably not even you anymore, or is you to a lesser extent, so aside from altruism it’s not clear why you should care so much about that person. Discounting the utility of a distant future self is arguably no more irrational than discounting the utility of a stranger. While this argument may sound like austere oxygen-free philosophy, we think it’s closely connected to the issues of uncertainty and preference change above. The more likely future-me is to have different preferences or values from my own, the less identity I feel with him, and thus the less inclined I am to be concerned about his welfare. This would seem to be an ethical-value question, not a question of rationality.
Finally, Bryan makes a different sort of argument when he says, “The whole idea of time inconsistency is that people predictably change their minds. If this is not a strong sign of irrationality, what is?” Here, Bryan is saying something closer to what the neoclassical and behavioral economists say: that the issue is inconsistent time discounting, not discounting that differs from zero.
But is it true? Does predictably changing one’s mind really imply irrationality? As we conceive it, the issue is that we humans have conflicting preferences within ourselves – a desire to indulge, and a desire to delay gratification – and we are in a process of negotiating between those desires. Rationality does not, or should not, mean that we’ve already concluded the negotiation and landed on the final answers at the beginning of time. When there is not a “correct answer” dictated by pre-existing well-defined preferences, then choice reversal can reflect a person’s process of forming and discovering their preferences over time.
Learning may also be taking place, including learning about both oneself and the outside world. Individuals can make a plan, and then when it comes time to carry it out, they may realize the plan is higher-cost than they thought, or unexpected factors have intervened, or new options have become available, or their preferences and beliefs have evolved in the meantime. It might take someone a few failed plans to realize that a specific kind of plan isn’t optimal for them. By introspection, we’ve seen this in ourselves: Well, I wasn’t able to carry through with that dieting strategy, so maybe I should try another. Now, maaaaybe if someone makes the same plan again and again, and breaks that plan again and again, that could be a sign of failure to learn. We haven’t seen the literature on time preference demonstrate this, although addiction could be one example. Our personal experiences hew more closely to a model of experimenting with different strategies of self-management and learning more about ourselves over time.
In Chapter 6 of Escaping Paternalism, Rizzo and Whitman argue that paternalistic behavioral economists have recklessly rushed from laboratory experiments to real life. Even if the experiments were above reproach, their external validity is questionable at best. Sunstein, Thaler, and the rest have overpromised and underdelivered:
A central claim of behavioral paternalists is that their approach is “evidence-based” (Thaler 2015b, 330–345). They claim to eschew ideology and simply advocate “what works” (Halpern 2015, 266–298). They say their policy recommendations rest on strong evidence provided by both behavioral economics and cognitive psychology. This decades-long research program has supposedly enabled them to discover how actual people behave rather than how hypothetical economic agents behave.
Why is there such a slip ‘twixt cup and lip?
Most importantly, the crafting of behavioral paternalist policies depends not simply on the existence of phenomena such as the endowment effect or present bias, but on the quantitative magnitudes of such phenomena. These magnitudes are indispensable for answering such questions as: How large should sin taxes be? How long should cooling off periods be? How graphic does a risk narrative need to be? How much income should people be defaulted into saving for retirement? How difficult should it be, in terms of time and effort, to opt out of default terms? All of these quantitative questions, raised automatically by any attempt to implement the policies in question, require quantitative inputs to calculate their answers. Research must therefore establish people’s true preferences (whose better satisfaction is the raison d’être of behavioral paternalism), as well as the strength of the biases that impede them. The methods used by behavioral economists have not reliably estimated such quantitative magnitudes.
And if the new paternalists retreat to, “Well, that’s all up to the political process,” what good are they?
Question: Why is behavioral economics so inadequate to the task of crafting specific policies? RW offer many complementary answers:
1. Because one of their recurring findings is that human behavior is “context-dependent.” If changing contexts in the lab makes a big difference, imagine what happens when we move from the lab to real life! Example:
But perhaps the most important contextual question relates to the reference point from which a loss or a gain is defined. The reference point is subjective. As we saw in Chapter 4, the reference point need not be the subject’s current endowment; it could be the subject’s expectation of something in the future.
2. Behavioral economists find strong “hypothetical bias” – yet selectively use hypotheticals to reach desired policy conclusions:
The upshot is that extrapolating the results of stated-choice experiments into the realm of actual behavior is fraught with difficulties, especially since most of the evidence on preference reversal (due to present bias) rests on stated-choice experiments.
3. Psychological findings replicate poorly:
The Open Science Collaboration, a group of more than 125 psychologists, conducted replications of 100 experimental and correlation studies in three major psychology journals for the year 2008. There is no single standard of successful replications, but the results that are most important for our purposes are these: (1) only 36 percent of the replications showed a statistically significant effect in the same direction as the original study, and (2) the “mean effect size of the replication effects . . . was half the magnitude of the mean effect size of the original effects . . . representing a substantial decline” (Open Science Collaboration 2015, 943). The Open Science Collaboration’s project has since been criticized on statistical grounds by Gilbert et al. (2016), who say that the project did not faithfully replicate the conditions of the original studies…
We don’t know how this particular discussion will ultimately be resolved, but it is safe to say that the reproducibility of much psychological research is simply unknown.
The replication of economic experiments comes off only moderately better:
In the first systematic, but limited, effort to replicate laboratory experiments in economics, Camerer et al. (2016) replicated eighteen studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. They found that 61 percent of the replications showed a statistically significant effect in the same direction as the original study. However, the mean effect size was 66 percent of the original magnitude. In most cases such a difference in magnitude (or a greater difference, as in the large study of psychology articles) will have a considerable impact on policy prescriptions.
4. Mere replication is not enough! The populations researchers study aren’t just unrepresentative. They’re based on a skewed sample of a skewed sample: namely, students willing to join experiments.
5. Researchers fail to properly account for incentives and learning. RW’s discussion is too thoughtful and subtle to quickly capture, but here are two nice cases:
Consider the experiment conducted by Thaler (1985, 206). In a hypothetical beer-on-the-beach scenario, people were asked their maximum willingness to pay for a beer… It turned out that people said that they were willing to pay more for the same beer in the fancy resort case than in the run-down grocery case. In theory, this difference in willingness to pay is deemed irrational, inasmuch as the beer is the same regardless and will be consumed on the beach, not in the place where it was purchased. When the experiment was repeated (Shah et al. 2015) while dividing the participants by income constraint, it was found that there was no statistically significant difference in willingness to pay between the two kinds of stores for the lower-income (i.e., more income-constrained) group.
We do not interpret this result as showing that lower-income people are more standardly rational by temperament or character; this seems highly unlikely. What seems more likely is that for lower-income people, the subjective opportunity cost of money is higher… Thus, the cost of succumbing to so-called irrelevant framing does seem to affect its incidence.
Becker and Rubinstein (2004) examine the use of public bus services in Israel after a spate of suicide bombings on buses. Their hypothesis was, broadly speaking, that the greater the cost of one’s fears in terms of reducing the consumption of the terror-infected good (bus rides), the more agents will expend effort to control those fears…
Since Becker and Rubinstein could not measure fear directly, they sought to measure the effect on the consumption of the terror-infected good. They found that frequent or more intensive users of buses were not affected at all by the terror threat, while all of the reduced consumption was on the part of low-frequency users. This differential impact conforms to the rational application of more effort to reduce fear when there is greater value from doing so. Thus it appears that when the opportunity cost of riding on the bus is relatively high, the operative bias disappears.
6. New paternalists give short shrift to self-help; people often realize that their decisions and beliefs are biased – and strive to offset these biases. RW’s discussion is rich with detail, so let me just share one striking passage:
In an attempt to produce results of general applicability, many experiments are devoid of relevant context (Loewenstein 1999). Familiar cues are omitted and individuals are treated as abstract agents. And yet this attempt at generality results in an impoverished and narrow view of self-regulation. Consider that it is impossible in a laboratory experiment to avoid facing the prescribed choice. The participants cannot say, “No. I would never face that temptation. I would change or modify the situation.” …In natural environments, people choose their own regulatory strategies.
Chapter 7 zeroes in on “knowledge problems.” I usually find references to Hayek gratuitous, but not here.
[S]cientific knowledge is not the only kind of knowledge relevant to policy. There is another type of knowledge that lies largely beyond the reach of academics and policymakers: the particular details of time and place that affect the preferences, constraints, and choices of individuals. Following Friedrich Hayek (1945), we will call this kind of knowledge “local knowledge.” In the case of scientific knowledge, a suitable body of experts may legitimately claim to have the best and most recent knowledge available. But when it comes to local knowledge, individuals have insights and perspective unavailable to outside experts.
[S]ometimes individuals lack knowledge of themselves… Behavioral economists may have scientific knowledge of a certain kind of bias that afflicts many people or the “average” person. But they do not typically know how much any particular individual is affected by a given bias, the extent to which the individual has become aware of her own bias, and the ways in which she may have attempted to compensate for it. The best the expert can hope for is population-level or group-level summary statistics, not the specific contextual knowledge needed to guide and correct individual behavior.
What are the central knowledge problems that dog the new paternalist enterprise?
1. Knowledge of “true preferences.” Helping agents satisfy their “true preferences” is the whole point of the new paternalist project, but distinguishing “true preferences” from “false preferences” is daunting even in theory. One great observation:
Did the hot decision-maker know she would later regret her decision? In other words, is she sophisticated in her bias? If so, then she may reason in this way: “I know I will regret this in the morning because then I will be in a cool state. But it is totally worth it. My cool self is such a bore. I am always choosing the ‘safe’ way. Perhaps I need to take some risks and live a little.”
2. Knowledge of the extent of the bias.
Just one illuminating passage:
It might be argued that the widely varying existing estimates are good enough; we can simply take their mean or median value. However, it turns out that optimal policies can be highly sensitive to small differences in parameter values. In a theoretical exercise, O’Donoghue and Rabin (2006, 1838) provide a striking example of the sensitivity of their sin-tax model to parameter estimates:
If half the population is fully self-controlled while the other half [of] the population has a very small present bias of β = 0.99, then the optimal tax is 5.15%. If instead the half [of] the population with self-control problems has a somewhat larger present bias of β = 0.90 – which is still a smaller present bias (larger β) than often discussed in the literature – the optimal tax is 63.71%. Thus, a mere 9 percentage-point shift in one parameter (from β = 0.99 to β = 0.90) results in a twelvefold increase in the optimal tax.
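The sensitivity claim is easy to verify with a quick arithmetic check. The 5.15% and 63.71% figures are O'Donoghue and Rabin's as quoted above; the snippet below merely confirms the "twelvefold" ratio:

```python
# Optimal sin-tax figures from O'Donoghue and Rabin (2006), as quoted above.
tax_mild = 5.15    # optimal tax (%) when the biased half has beta = 0.99
tax_strong = 63.71 # optimal tax (%) when the biased half has beta = 0.90

ratio = tax_strong / tax_mild
print(f"beta shifts 0.99 -> 0.90 (9 percentage points)")
print(f"optimal tax rises {ratio:.1f}-fold")  # roughly twelvefold
```

A policy parameter that swings from 5% to 64% on a barely-measurable change in a behavioral parameter is, to put it mildly, hard to estimate responsibly.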
3. Knowledge of self-debiasing and small group debiasing. One of many weighty insights:
Self-regulation is complex and much of it is not obvious. Consider an overweight individual with a propensity to eat junk food. Imagine that she often stays away from restaurants that serve junk food but occasionally indulges herself. Does she need the help of a paternalist? Should her indulgences be taxed or should she be nudged away from junk food on those occasions? A person who has made an intrapersonal bargain to abstain, but also to reward her “present self” with some tasty junk food from time to time, may not require a correction. Or, if she does to some extent, the paternalist would have to know in which respects this bargain has broken down and to what extent it is inadequate. To tax the present self reward would tend to unravel the bargain, thereby potentially putting the agent in a worse condition than before.
4. Knowledge of bias interactions.
Because biases can reinforce or offset each other, policies that would improve welfare by correcting a bias if that were the only bias present may in fact reduce welfare when multiple biases are in play. As Besharov (2004) has pointed out, this problem is analogous to the second-best problem in the study of market failure. To take one example, negative externalities (such as air pollution from the burning of fossil fuels) can lead to too much consumption, while a degree of monopoly power (such as that created by OPEC in the petroleum industry) can lead to too little consumption. When both market imperfections are present, theory alone cannot say whether consumption is too high, too low, or just right; any of these are possible.
5. Knowledge of population heterogeneity.
Mitchell (2002) cites at least 100 studies on this point, showing that behavioral phenomena (including cognitive biases) differ in the population along such dimensions as educational level, cognitive ability (as measured by, for instance, performance on the Scholastic Aptitude Test), cognitive mindsets or dispositions, cultural differences, age differences, and gender differences (pp. 94–95, 140–156).
Chapters 1-5 of Escaping Paternalism were very good. Beginning in Chapter 6, however, the book becomes a relentless bulldozer of the intellectual pretensions of the new paternalism. After reading Chapters 6 and 7, the idea that behavioral economics provides a “scientific foundation” for any concrete paternalist program seems absurd. The best-case scenario is that behavioral economics will provide a vague rationalization for the feel-good, seat-of-the-pants paternalistic policies governments wished to adopt anyway. Furthermore, the “It will be up to the democratic process” answer is mere democratic fundamentalism. Isn’t the central message of behavioral economics that we can’t blithely trust the wisdom of the people?!
1. Outsiders may struggle to believe that practicing behavioral economists neglect to offer specific quantitative recommendations. As far as I can tell, though, the intellectual picture is as dire as RW paint it. Researchers announce the discovery of an “effect,” then let policy-makers use (and abuse) their discoveries to rationalize both old regulations already on the books and new regulations they dream of putting on the books.
2. RW never mention “aging out.” Yet many forms of self-destructive behavior erode with age, and we can plausibly interpret this as a kind of “learning.”
3. If RW are right, why don’t researchers work harder to achieve external validity? I prefer a straightforwardly neoclassical story: The costs of external validity are very high, and the professional rewards are low. If politicians really wanted scientifically-grounded policy, of course, matters would be entirely different. Governments rarely give astronomers credit for qualitative “discoveries” about space travel.
4. RW describe many credible examples of self-debiasing. The main weakness with their discussion: Most people seem extraordinarily stubborn. Finding and emulating highly successful people is easy, but few less-successful folks are humble enough to take advantage of this golden opportunity. In my experience, the typical human being prefers to either (a) keep their own counsel, or (b) “heed” advice that confirms their prejudices.
5. RW’s discussion of the knowledge problem makes old ideas seem new again. “Helping people achieve their true preferences” indeed! If revealed preference doesn’t reveal true preference, what on Earth does?
6. The extension of the “second-best” model to individual decision-making is great – and deserves a far wider audience. Without optimistic bias to counter their unreasonable fear of rejection, how many males would ever find love? How many children would never have been born?
7. You could object, “Behavioral economics is no worse a foundation for concrete regulation than any other branch of economics.” And you probably wouldn’t be wrong. Most economists, sadly, would rather rationalize regulation than hold regulation up to a mirror. Insurance regulation is a fine example: economists routinely use moral hazard and adverse selection to justify policies that make these “market failures” worse.
Rizzo and Whitman now devote two chapters to critiquing (a) the underlying empirics of behavioral economics, plus (b) the way behavioral economists market these results. In Chapter 4, they go after “defective” preferences; in Chapter 5, they reconsider “biased” beliefs.
RW repeatedly stress the modesty of their project. They aren’t saying that behavioral economics is worthless, just oversold. If only behavioral economists had vetted their own work with the same (motivated?) skepticism they’ve honed for standard economics! RW take up this neglected task, and claim to “show that the preferences deemed better or more ‘true’ by paternalists are often just as questionable on behavioral grounds, if not more so.”
Chapter 4’s main applied topics are: (a) hyperbolic discounting, including preference reversals and intransitivities; (b) endowment effects, including loss aversion and status quo bias; and (c) poor affective forecasting (failure to correctly predict how outcomes will make you feel).
1. RW argue that hyperbolic discounting could arise because of our subjective perception of time:
People do not necessarily perceive time in the way that the calendar or the number-line portrays it. When asked, “How long do you consider the duration between today and a day some distance in the future,” with the interval ranging from three months to thirty-six months, people answer in a nonlinear fashion. For example, while the time horizon from three months to one year grows 300 percent by calendar measure, it grows only 35 percent in subjective duration…
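One way to see the force of this point: if subjective duration grows as a power of calendar time, an agent who discounts exponentially in his own perceived time will look hyperbolic in calendar time. A minimal sketch, calibrating the power exponent to the 300%-vs-35% figures quoted above (the power-law form and the discount rate are my own illustrative assumptions, not RW's):

```python
import math

# Calibrate tau(t) = t**alpha so that a 4x calendar ratio (3 to 12 months)
# feels like only a 1.35x subjective ratio, per the quoted figures.
alpha = math.log(1.35) / math.log(4)   # roughly 0.22

def discount(t, r=0.5):
    """Exponential discounting applied to subjective time t**alpha."""
    return math.exp(-r * t ** alpha)

# Implied per-month discount rate over successive one-month calendar intervals:
for t0, t1 in [(1, 2), (3, 4), (11, 12)]:
    rate = -(math.log(discount(t1)) - math.log(discount(t0))) / (t1 - t0)
    print(f"months {t0}->{t1}: implied monthly rate {rate:.3f}")
```

The implied rate falls as the horizon recedes — the signature pattern of hyperbolic discounting — even though the agent is a perfectly consistent exponential discounter in his own perceived time.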
2. In any case, preference reversals for intertemporal choice are rather rare – and often in the opposite of the expected direction:
In recent research there have been experiments that elicit preferences by questioning participants over the passage of actual time, and not simply at a point in time as in the process just described. These are called “longitudinal” studies. A large majority of individuals do not actually switch over time: those who are patient with regard to the distant decision remain patient and those who are impatient remain impatient (Read et al. 2012). Relatedly, Halevy (2015) finds that only 10 percent of participants are actually time inconsistent in a longitudinal study. Furthermore, the ubiquity of impatient preference reversals is in doubt. The longitudinal experiments (Read et al. 2012; Sayman and Öncüler 2009) have found a very large number of shifts from SS to LL – that is, patient reversals – as the distant decision becomes more nearly immediate. In fact, in two experiments conducted by Read and coauthors (2012) the numbers of impatient and patient reversals were roughly equal.
3. Does time inconsistency actually hurt individuals? The evidence is thin, but RW discuss some fascinating results:
In [Berg et al.]’s own study of 881 participants, they find that time-inconsistent individuals earn substantially more than time-consistent ones across the forty time trade-off payoff options in the experiment. This is because the time-consistent individuals are, for whatever reason, consistently more impatient in the choices…
Therefore, if time inconsistency is to be deemed irrational in more than a presumptive neoclassical sense, the comparison cannot be relative to just any time-consistent agent. It must be with a suitably patient time-consistent agent. Impatience in whatever form can depress lifetime earnings or wealth. Why single out inconsistency as the problem?
4. When endowment effects matter, Willingness to Pay (WTP) is less than Willingness to Accept (WTA). Behavioral economists tend to want to treat WTP as the suspect measure. But this raises a host of issues. Case in point:
[A]dopting WTP rather than WTA as the appropriate norm for “true” preferences would mean abandoning the case for many paternalist interventions, such as new labor-friendly default rules that offer workers additional contractual benefits. The alleged superiority of such rules rests on the implicit assumption that WTA is the right valuation.
An exquisitely clever point:
[I]f status quo bias is the true explanation for the endowment effect, it casts doubt on the validity of paternalist nudges whose “sticking power” depends on it. Suppose a new default rule entitles workers to paid vacation time (presumably funded by lower wages). Now endowed with this new benefit, workers resist giving it up during contract negotiations. But why? Simply because the new rule is now the status quo. If status quo bias is indeed irrational, then the persistence of the new status quo offers no grounds for thinking the new rule is an improvement.
[T]he experimental evidence we do have is for an “instant endowment effect.” This means that the experimenters test for WTA–WTP gaps and reluctance to exchange within a few minutes after the subjects are given a mug or some other good (Ericson and Fuster 2014, 557). How these subjects react after they have possessed the good for some time (a day, week, month, or more) is unknown. Does the novelty of the gift wear off, or do they become more attached?
Chapter 5’s main applied topics are: (a) the functions of beliefs and learning; (b) violations of classical logic; (c) the conjunctive effect; (d) Bayes’ Rule, base-rate neglect, and belief revision; (e) availability bias; (f) salience; and (g) overconfidence. A few highlights:
1. Biased beliefs can offset confused preferences, or even other biased beliefs.
Varki (2009), among others, argues that optimistic illusions may have had adaptive value for early humans because it counterbalanced the fear of death and oblivion that came with the emergence of conscious foresight. In short, the “best” beliefs for attention, motivational, and even survival purposes may not be the most correct from the standpoint of truth.
2. Logical consistency is overrated:
From a pragmatic perspective, the case for limiting the role of logic is even stronger. It is uneconomic for the agent who wants to attain his goals efficiently to worry about the consistency of all of his beliefs. The inconsistency of an entire system of beliefs is likely to be vast.
Trying to make all your beliefs consistent is as foolish as trying to keep your house perfectly clean.
3. Experimental subjects often make “mistakes” because the experimenters expect the subjects to interpret all instructions literally. In real life, you’re not supposed to do so!
The behavior of the experimenters violates the expected norms of conversational interaction (Grice 1989). Among these norms is the maxim of relevance, which says that in a cooperative setting, people assume that their interlocutors present them with the information required for current purposes and no more.
4. In real life, people often neglect base rates because they should:
Consider the case of a doctor who takes a job in a new clinic… A significant number of her patients test positive for HIV. Now, if the doctor applied Bayes’ rule using base rates from the national population – where the fraction of people with HIV is small – she would have to conclude that most of these are false positives. Fortunately, the doctor is smarter than that… So she adjusts her priors away from the base rates, and thus concludes that many of the positive test results are probably true positives.
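The doctor's reasoning drops straight out of Bayes' rule. In the sketch below, the test's sensitivity and false-positive rate, and the two priors, are hypothetical numbers I've chosen purely to illustrate how much the conclusion turns on which prior the doctor brings to the test result:

```python
def posterior(prior, sensitivity=0.997, false_pos=0.015):
    """P(HIV | positive test) via Bayes' rule (illustrative parameters)."""
    p_positive = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_positive

# National base rate vs. the doctor's local knowledge of her own clinic:
print(f"{posterior(0.003):.2f}")  # ~0.17: most positives would be false
print(f"{posterior(0.20):.2f}")   # ~0.94: most positives are probably true
```

Same test, same arithmetic, opposite practical conclusion — the difference is entirely the local knowledge embedded in the prior, which is exactly the knowledge outside experts lack.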
5. Availability bias doesn’t matter much when you have lots of first-hand experience:
[W]hen economics students and nursing students were asked to estimate the frequency of deaths from various causes in their own age cohorts, the whole picture changed – in fact, the bias vanished (Benjamin et al. 2001). The logic of this result is compelling. In a world of scarce resources, people have a tendency to learn what is in their interest to learn. The students, by and large, did not need to know population figures, but they did find it useful to have a decent idea of the hazards they actually face in their age groups.
6. The standard evidence of overconfidence is largely artifactual. People seem overconfident if you ask them their confidence question-by-question. But if you ask them for their overall accuracy, there is often no sign of overconfidence. Format matters.
What all this format dependence ultimately means for an understanding of “overconfidence” phenomena is unclear. This is because there is no good theory to guide us in determining which measures are most relevant to pragmatic concerns. In other words, we do not know which formats mirror the process by which real-world individuals evaluate their own knowledge and, most importantly, make decisions about significant matters.
Chapters 4 and 5 are packed with good points. Behavioral economists should be more self-critical. There is considerable contrary empirical evidence they should take to heart. Most remarkably, RW manage to combine a careful survey of the conventional view with insightful critique.
Yet ultimately, the main results of behavioral economics seem pretty solid to me. In particular:
1. The whole idea of time inconsistency is that people predictably change their minds. If this is not a strong sign of irrationality, what is? As I claimed last week, the time consistency literature doesn’t go far enough. Discounting the future purely because it is in the future should be classified as irrational even if you do so with perfect consistency.
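The standard formalization of that predictable change of mind is quasi-hyperbolic (β–δ) discounting, where every non-immediate payoff gets an extra penalty β < 1 — the present-bias parameter from the O'Donoghue–Rabin passage above. A minimal sketch (the payoffs, β, and δ are illustrative assumptions):

```python
def value(amount, delay, beta=0.7, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounted value of a future payoff."""
    if delay == 0:
        return amount
    return beta * (delta ** delay) * amount

# $100 sooner (SS) vs. $110 one period later (LL), from two vantage points.
# Viewed from far away, both rewards carry the beta penalty, so LL wins:
print(value(100, 30) > value(110, 31))  # False -> agent plans to take LL
# Once the smaller reward is immediate, it escapes the penalty and wins:
print(value(100, 0) > value(110, 1))    # True  -> agent actually takes SS
```

The agent sincerely plans to wait for the larger reward, then reverses himself when the moment arrives — exactly the predictable mind-change at issue.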
2. The evidence RW present on the subjectivity of time is credible. However, we should interpret it as a further sign of irrationality rather than use it to rationalize time inconsistent choices.
3. RW flatly deny the irrationality of loss aversion (and, by extension, endowment effects):
Loss aversion would then seem to be a taste variable no different from the nonpecuniary aspects of labor that economics has recognized from early on. Loss-averse agents happen to attach value to changes in wealth, with greater value attached to a loss than to the equivalent gain.
I get where they’re coming from. I whole-heartedly love the kids I have, even though I recognize that I would have felt the same way about very different offspring. In common-sense terms, I see nothing “irrational” about this. On the other hand, I would deem it highly irrational to fall in love with the peaches I bought yesterday, knowing full well that I would have felt the same love for whatever peaches I purchased. Can I formally model this distinction? No, but it seems dogmatic to deny the silliness of getting attached to a specific bag of peaches.
4. RW’s discussion of the “maxim of relevance” is impressive. Still, how much does it really buy them? Suppose experimenters loudly and clearly announced, “Focus on the literal meaning of all our instructions.” Would that really lead anyone to avoid the conjunctive fallacy? Similarly, think about how long it took humans to apply the experimental method. Thinking in clear-cut, literal terms yields enormous gains – but the experimental method has only been around for a few centuries, and only a few people really understand it even today.
5. RW sternly remind us:
In a Bayesian framework, all probabilities are conditional. The priors are conditional on everything the agent believes as background knowledge. This knowledge may include base rates, but not exclusively. In the subjectivist version of Bayesianism, any prior probability would be allowable. The rationality of Bayes’ theorem begins after the agent has chosen his priors.
In most experiments, however, the descriptions are so austere that using any prior probability other than the base rates is bizarre. Suppose, for example, experimenters tell you that the balls in an urn are half blue, half green. Next they ask, “What is the probability that you draw a green ball?” Sure, you could say, “My prior probability says that experimenters always stack green balls on top, so the chance that the first ball I draw will be green is 100%.” Isn’t that absurd, though?
In a sense, RW accidentally show that behavioral economics sets the bar of rationality much too low. Rationality is actually a matter of substance, not form alone.
6. For availability bias and overconfidence, the reasonable prior is that they’re serious. Ponder all you’ve seen. Human beings overweight rare, vivid events. Human beings are overconfident. We should have believed this before any experimental evidence arrived, because daily life overwhelmingly affirms these patterns. And we should continue to believe these problems are severe even if the scientific evidence of these biases is fragile. So while RW do a fine job of exposing researchers’ overconfidence in their own research, we should only marginally change our minds about human psychology itself.