Is Amazon a Corporate Mother Teresa?

Amazon is in many ways a fascinating company and deserves to be defended against most of its mainstream critics. However, it would be simplistic to explain its campaign for a $15 federally imposed minimum wage by casting the company as a corporate Mother Teresa. Its more obvious reasons for preaching minimum wages are not defensible.

I will not repeat all the arguments against the minimum wage, which are summarized in a good article by the Cato Institute’s Ryan Bourne (“The Case Against a $15 Federal Minimum Wage: Q&A”). My co-blogger David Henderson has also defended many of the standard economic arguments. There is some disagreement among economists about the employment effects of minimum wages, but it mainly concerns the size of the negative effect and who bears it (see Bourne’s overview).

One thing is sure: Amazon would benefit from forcing higher costs on its small competitors, including mom-and-pop businesses. A higher minimum wage would have exactly this effect while it would have zero effect on Amazon’s costs. As the company already pays a starting wage equal to the proposed $15 minimum, the latter would be non-binding and irrelevant for the retail behemoth.

One reason why Amazon was able to bid up the wage of its entry-level workforce is that the technology and other capital embedded in its warehouses and distribution network increase the productivity of its employees, which justifies the bidding up from a pure profit-maximizing viewpoint. There is nothing wrong with profits, but there is something wrong with using state power to bankrupt one’s competitors. This is what is happening. Jonathan Meer, an economist at Texas A&M University, observes:

It’s a lot harder for Joe’s Hardware. We should take note that Amazon—the place with no cashiers—is the one calling for a higher minimum wage.

Other large companies—such as Walmart—have come out in favor of an increase in the federal minimum but not up to $15. In their case, indeed, $15 would be binding for some employees. (Cf. Eric Morath and Heather Haddon, “Many Businesses Support a Minimum-Wage Increase—Just Not Biden’s $15-an-Hour Plan,” Wall Street Journal, March 1, 2021)

Amazon has another reason to be politically correct, that is, to signal its virtue in line with currently fashionable and unrealistic ideas. The company can hope to cajole DC’s powerful into sparing it from regulations that would bite. The systemic effects of such behavior point to crony capitalism and groveling toward the state, which are not good for free enterprise and future prosperity.

It is not clear, to say the least, what kind of acceptable ethics could justify Amazon’s current behavior.


What the Success Sequence Means

[continued from yesterday]

…This is a strange state of affairs.  Everyone – even the original researchers – insists that the success sequence sheds little or no light on who to blame for poverty.  And since I’m writing a book called Poverty: Who To Blame, I beg to differ.

Consider this hypothetical.  Suppose the success sequence discovered that people could only reliably avoid poverty by finishing a Ph.D. in engineering, working 80 hours a week, and practicing lifelong celibacy.  What would be the right reaction?  Something along the lines of, “Then we shouldn’t blame people for their own poverty, because self-help is just too damn hard.”

The underlying moral principle: You shouldn’t blame people for problems they have no reasonable way to avoid.  You shouldn’t blame them if avoiding the problem is literally impossible; nor should you blame them if they can only avoid the problem by enduring years of abject misery.

The flip side, though, is that you should blame people for problems they do have a reasonable way to avoid.  And the steps of the success sequence are eminently reasonable.  This is especially clear in the U.S.  American high schools have low standards, so almost any student who puts in a little effort will graduate.  Outside of severe recessions, American labor markets offer ample opportunities for full-time work.  And since cheap, effective contraception is available, people can easily avoid having children before they are ready to support them.

These realizations are probably the main reason why talking about the success sequence so agitates the critics.  The success sequence isn’t merely a powerful recipe for avoiding poverty.  It is a recipe easy enough for almost any adult to understand and follow.

But can’t we still blame society for failing to foster the bourgeois values necessary to actually adhere to the success sequence?  Despite the popularity of this rhetorical question, my answer is an unequivocal no.  In ordinary moral reasoning, virtually no one buys such attempts to shift blame for individual misdeeds to “society.”

Suppose, for example, that your spouse cheats on you.  When caught, he objects, “I come from a broken home, so I didn’t have a good role model for fidelity, so you shouldn’t blame me.”  Not very morally convincing, is it?

Similarly, suppose you hire a worker, and he steals from you.  When you catch him, he protests, “Don’t blame me.  Blame racism.”  How do you react?  Poorly, I bet.

Or imagine that your brother drinks his way into homelessness.  When you tell him he has to reform if he wants your help, he denounces your “bloodless moralism.”  Are you still obliged to help him?  Really?

Finally, imagine you’re a juror on a war crimes trial.  A soldier accused of murdering a dozen children says, “It was war, I’m a product of my violent circumstances.”  Could you in good conscience exonerate him?

So what?  We should place much greater confidence in our concrete moral judgments than in grand moral theories.  This is moral reasoning 101.  And virtually all of our concrete moral judgments say that we should blame individuals – not “society” – for their own bad behavior.  When wrong-doers point to broad social forces that influenced their behavior, the right response is, “Social forces influence us all, but that’s no excuse.  You can and should have done the right thing despite your upbringing, racism, love of drink, or violent circumstances.”

To be clear, I’m not saying that we should pretend that individuals are morally responsible for their own actions to give better incentives.  What I’m saying, rather, is that individuals really are morally responsible for their actions.  Better incentives are just icing on the cake.

This is not my eccentric opinion.  As long as we stick to concrete cases, virtually everyone agrees with me.  Each of my little moral vignettes is a forceful counter-example to the grand moral theory that invokes “broad social forces” to excuse wrong-doing.  And retaining a grand moral theory in the face of multitudinous counter-examples is practically the definition of bad philosophy.

Does empirical research on the success sequence really show that the poor are entirely to blame for their own poverty?  Of course not!  In rich countries, following the success sequence is normally easy for able-bodied adults, but not for children or the severely handicapped.  In poor countries, even able-bodied adults often find that the success sequence falls short (though this would be far less true under open borders).  Haitians who follow the success sequence usually remain quite poor because economic conditions in Haiti are grim.  Though even there, we can properly blame Haitians who stray from the success sequence for making a bad situation worse.

Research on the success sequence clearly makes people nervous.  Few modern thinkers, left or right, want to declare: “Despite numerous bad economic policies, responsible behavior is virtually a sufficient condition for avoiding poverty in the First World.  And we have every right to blame individuals for the predictable consequences of their own irresponsible behavior.”  Yet if you combine the rather obvious empirics of the success sequence with common-sense morality, this is exactly what you will end up believing.


What Does the Success Sequence Mean?

If you live in the First World, there is a simple and highly effective formula for avoiding poverty:

1. Finish high school.

2. Get a full-time job once you finish school.

3. Get married before you have children.

Researchers call this formula the “success sequence.”  Ron Haskins and Isabel Sawhill got the ball rolling with their book Creating an Opportunity Society, calling for a change in social norms to “bring back the success sequence as the expected path for young Americans.”  The highest-quality research on this success sequence probably comes from Wendy Wang and Brad Wilcox.  In their Millennial Success Sequence, they observe:

97% of Millennials who follow what has been called the “success sequence”—that is, who get at least a high school degree, work, and then marry before having any children, in that order—are not poor by the time they reach their prime young adult years (ages 28-34).

One common criticism is that full-time work does almost all the work of the success sequence.  Even if you drop out of high school and have five kids with five different partners, you’ll probably avoid poverty as long as you work full-time.  Wilcox and Wang disagree:

…This analysis is especially relevant since some critics of the success sequence have argued that marriage does not matter once education and work status are controlled.

The regression results indicate that after controlling for a range of background factors, the order of marriage and parenthood in Millennials’ lives is significantly associated with their financial well-being in the prime of young adulthood. Simply put, compared with the path of having a baby first, marrying before children more than doubles young adults’ odds of being in the middle or top income. Meanwhile, putting marriage first reduces the odds of young adults being in poverty by 60% (vs. having a baby first).

But even if the “work does all the work” criticism were statistically true, it misses the point: Single parenthood makes it very hard to work full-time.

A more agnostic criticism doubts causation.  Sure, poverty correlates with failure to follow the success sequence.  How, though, do we know that the so-called success sequence actually causes success?  It’s not like we run experiments where we randomly assign lifestyles to people.  The best answer to this challenge, frankly, is that causation is obvious.  “Dropping out of school, idleness, and single parenthood make you poor” is on par with “burning money makes you poor.”  The demand for further proof of the obvious is a thinly-veiled veto of unpalatable truths.

A very different criticism, however, challenges the perceived moral premise behind the success sequence.  What is this alleged moral premise?  Something along the lines of: “Since people can reliably escape poverty with moderately responsible behavior, the poor are largely to blame for their own poverty, and society is not obliged to help them.”  Or perhaps simply, “The success sequence shifts much of the moral blame for poverty from broad social forces to individual behavior.”  While hardly anyone explicitly uses the success sequence to argue that we underrate the blameworthiness of the poor for their own troubles, critics still hear this argument loud and clear – and vociferously object.

Thus, Eve Tushnet writes:

To me, the success sequence is an example of what Helen Andrews dubbed “bloodless moralism”…

All bloodless moralisms conflate material success and virtue, presenting successful people as moral exemplars. And this, like “it’s better to have a diploma than a GED,” is something virtually every poor American already believes: that escaping poverty proves your virtue and remaining poor is shameful.

Brian Alexander similarly remarks:

The appeal of the success sequence, then, appears to be about more than whether it’s a good idea. In a society where so much of one’s prospects are determined by birth, it makes sense that narratives pushing individual responsibility—narratives that convince the well-off that they deserve what they have—take hold.

Cato’s Michael Tanner says much the same:

The success sequence also ignores the circumstances in which the poor make choices. Our choices result from a complex process that is influenced at each step by a variety of outside factors. We are not perfectly rational actors, carefully weighing the likely outcomes for each choice. In particular, progressives are correct to point to the impact of racism, gender-based discrimination, and economic dislocation on the decisions that the poor make in their lives. Focusing on the choices and not the underlying conditions is akin to a doctor treating only the visible symptoms without dealing with the underlying disease.

Strikingly,  the leading researchers of the success sequence seem to agree with the critics!  Wang and Wilcox:

We do not take the view that the success sequence is simply a “pull yourselves up by your own bootstraps” strategy that individuals adopt on their own. Rather, for many, the “success sequence” does not exist in a cultural vacuum; it’s inculcated by an interlocking cultural array of ideals, norms, expectations, and knowledge.*

This is a strange state of affairs.  Everyone – even the original researchers – insists that the success sequence sheds little or no light on who to blame for poverty.  And since I’m writing a book called Poverty: Who To Blame, I beg to differ…

* To be fair, Wang and Wilcox also tell us: “But it’s not just about natural endowments, social structure, and culture; agency also matters. Most men and women have the  capacity to make choices, to embrace virtues or avoid vices, and to otherwise take steps that increase or decrease their odds of doing well in school, finding and keeping a job, or deciding when to marry and have children.”

[to be continued]


Jeff Hummel on Classical Liberals and Libertarians

Economist and libertarian Jeff Hummel, pictured above, sent me the following and I think it’s worth sharing:

In a Zoom session some libertarian friends and colleagues had a lively discussion of the correct usage of the term “libertarian.” Afterwards I had some additional thoughts. So I wrote this message to lay out my argument in more detail.

In the late 60s and early 70s, as the libertarian movement was just distinguishing and disentangling itself from conservatism, the terms “libertarian” and “classical liberal” had clear and relatively precise meanings, at least among the U.S. libertarians with whom I associated. These meanings were once very clearly articulated by libertarian philosopher Eric Mack at one of the IHS (or Cato?) summer seminars I attended.

He defined a classical liberal as someone who believes that maximizing individual liberty should be the highest (if not sole) goal of government (or the State). A libertarian is a classical liberal who further believes that government should have no morally privileged status with regard to its powers. In other words, government and its agents could justly engage only in actions that were legitimate for individuals or groups of individuals. Thus, governments should be confined to using force (coercion) to the extent that individuals can rightfully do so for defense or restitution. Stating the libertarian constraint on government in this way gets around (or evades, if you prefer) some of the difficult problems of defining the moral limits of legitimate defense and restitution, about which libertarians sometimes disagree, especially in the realm of so-called national defense. Yet nearly all moral philosophies, religious and non-religious, share certain broad outlines, disapproving of murder and theft, as much as they may differ on the borderline details of justifiable defense or restitution.

This way of putting the constraint also leaves open some room for the libertarian archipelagos of Chandran Kukathas, or for the proprietary communities that other libertarians favor. But in order to qualify as genuine libertarian social orders, such communities must be voluntary associations. The opposition to government taxation is also what distinguished libertarians from non-libertarian classical liberals, who in contrast believe that there is a difficult trade-off between liberty and coercion. In their view, government must impose taxes and perhaps exercise other coercive powers not derived from individual rights in order to effectively maximize total liberty. Libertarians, in contrast, held that government should be entirely voluntarily funded, a position that even Ayn Rand embraced at one point.

Libertarians then divided into limited-government libertarians (or, to use Sam Konkin’s term, minarchists) and anarchist libertarians (or anarcho-capitalists, a term I never liked). Rand, among others, was a limited-government libertarian. Anarchist libertarians, such as myself and the younger Roy Childs, did argue that the limited-government libertarian position was inconsistent, pointing out that there is no sure way that a government (even if it eschews taxation) can maintain its monopoly without using some coercive powers that are illegitimate for individuals. But we never therefore denied that limited-government libertarians qualified as libertarians, as long as they continued to believe that a voluntarily funded government was desirable and possible, no matter how mistaken we found that belief. Moreover, these distinctions were fairly widely recognized and accepted by libertarians of all varieties, whether primarily influenced by Rand or Rothbard.

I admittedly recognize two problems with maintaining these clear distinctions today. There is often a tension between prescriptive and descriptive definitions for words. I accept that the meanings of words spontaneously evolve over time. The word “libertarian” was used with a less precise meaning before the modern movement, even being embraced by some socialist libertarians. And in common usage today, the terms libertarian and classical liberal have become virtually synonymous. I attribute that evolution to two developments. (1) As some (many?) of the young libertarians of the 60s and 70s matured and aged, having to deal with real-world problems and issues, their views became less consistent or more nuanced and subtle, depending on your point of view. For particularly extreme cases of this intellectual evolution, I like Jeffrey Friedman’s term of “post-libertarian.” (2) The newer generation of libertarians is much more focused on current government policies, and has little interest in the fundamental but thorny philosophical and ideological foundations of their views. Even some of us in the older generation have gotten tired of those endless debates. So in casual conversation, I have to go along with current usage. Yet I still think the greater clarity of the original meanings should sometimes be maintained and specified for more serious discussions.

A second problem with a strict definition for the term “libertarian” is that in the past it led to endless internecine squabbles about who was a “genuine” libertarian, almost like the hair-splitting divisions and deviations that arose among early Marxists. I certainly have no interest in bringing back these counter-productive excommunications and denunciations. If someone wants to claim the label “libertarian,” there is not much to be gained from arguing about that, unless the self-identification is particularly outlandish. I’d rather focus on specific and concrete differences of opinion.

Some contend that one consideration should be whether people self-identify as libertarians, as Rand did not. For labels that describe people’s ideas, there is a smidgen of validity to this claim. With respect to religion, we usually accept as definitive people’s self-identification as Christian, Muslim, atheist, etc. But that is simply a courteous and usually reliable rule of thumb. Lurking behind it is still some objective notion of what, for instance, a Christian believes. If, on questioning someone who claims to be a Christian, you discover that he or she does not believe in the historical existence of Jesus or in the existence of God, and also thinks the New Testament has less religious relevance than the Koran, you would be justified in doubting his or her self-identification.

By the way, by the strict standard that Jeff lays out for libertarians, I am not quite a libertarian.

Bioethics: Tuskegee vs. COVID

When bioethicists want to justify their own existence, they routinely point to the infamous Tuskegee Syphilis Study.  It’s a gripping story.  Back in 1932, the U.S. Public Health Service started a study of 399 black men with latent syphilis, plus a control group of 201 black men without syphilis.  Contrary to what I’ve sometimes heard, the researchers never injected anyone with syphilis.  However, they grossly violated the principle of informed consent, with disastrous consequences:

As an incentive for participation in the study, the men were promised free medical care, but were deceived by the PHS, who never informed subjects of their diagnosis and disguised placebos, ineffective methods, and diagnostic procedures as treatment.

The men were initially told that the “study” was only going to last six months, but it was extended to 40 years. After funding for treatment was lost, the study was continued without informing the men that they would never be treated. None of the infected men were treated with penicillin despite the fact that by 1947, the antibiotic was widely available and had become the standard treatment for syphilis.

Why do bioethicists habitually invoke the Tuskegee experiment?  To justify current Human Subjects Review.  Which is bizarre, because Human Subjects Review applies to a vast range of obviously innocuous activities.  Under current rules, you need approval from Human Subjects merely to conduct a survey – i.e., to talk to a bunch of people and record their answers.

The rationale, presumably, is: “You should only conduct research on human beings if they give you informed consent.  And we shouldn’t let researchers decide for themselves if informed consent has been given.  Only bioethicists (and their well-trained minions) can make that call.”

On reflection, this just pushes the issue back a step.  Researchers aren’t allowed to decide if their human experiment requires informed consent.  However, they are allowed to decide if what they’re doing counts as an experiment.  No one submits a formal request to their Human Subjects Review Board before emailing other researchers questions about their work.  No professor submits a formal request to their Human Subjects Review Board before polling his students.  Why not?  Because they don’t classify such activities as “experiments.”  How is a formal survey any more “experimental” than emailing researchers or polling students?  To quote The Prisoner, “Questions are a burden to others; answers, a prison for oneself.”

The safest answer for bioethicists, of course, is simply: “They should get our approval for those activities, too.”  The more territory bioethicists claim for themselves, however, the more you have to wonder, “How good is bioethicists’ moral judgment in the first place?”

To answer this question, let me bring up a bioethical incident thousands of times deadlier than the Tuskegee experiment.  You see, there was a deadly plague called COVID-19.  Researchers quickly came up with promising vaccines.  They could have tested the safety and efficacy of these vaccines in about one month using voluntary paid human experimentation.  How?

Step 1: Vaccinate half the volunteers and give the other half a placebo.

Step 2: Wait a week, then inject all the volunteers with COVID-19.  (Alternately, give half of each subgroup a placebo injection).

Step 3: Compare the COVID infection rates of the vaccinated and unvaccinated 2-4 weeks later.
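To make the Step 3 comparison concrete, here is a toy sketch of the arithmetic researchers would run at the end of such a trial. Every number in it (arm size, attack rate, assumed efficacy) is invented purely for illustration; it is not a real trial design or analysis plan.

```python
import random

random.seed(0)

# Toy simulation of the Step 3 comparison; all parameters are invented for illustration.
n_per_arm = 100                  # volunteers in each arm
attack_rate_placebo = 0.60       # chance a deliberately exposed, unvaccinated subject is infected
assumed_efficacy = 0.90          # assumed true efficacy of the candidate vaccine

def infections(n, attack_rate):
    return sum(random.random() < attack_rate for _ in range(n))

placebo_infected = infections(n_per_arm, attack_rate_placebo)
vaccine_infected = infections(n_per_arm, attack_rate_placebo * (1 - assumed_efficacy))

# Estimated efficacy = 1 - (risk in vaccine arm / risk in placebo arm)
estimated_efficacy = 1 - vaccine_infected / max(placebo_infected, 1)
print(f"Placebo arm infected: {placebo_infected}/{n_per_arm}")
print(f"Vaccine arm infected: {vaccine_infected}/{n_per_arm}")
print(f"Estimated efficacy:   {estimated_efficacy:.0%}")
```

With deliberate exposure, even a few hundred volunteers produce a stark, quick contrast between the arms, which is the whole point of the challenge-trial design.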

In the real world, researchers only did Step 1, then waited about six months to compare naturally-occurring infection rates.  During this period, ignorance of the various vaccines’ efficacy continued, almost no one received any COVID vaccine, and over a million people died.  In the end, researchers discovered that the vaccines were highly effective, so this delay really did cause mass death.

How come no country on Earth tried voluntary paid human experimentation?*  As far as I can tell, the most important factor was the formal and informal opposition of bioethicists.  In particular, bioethicists converged on absurdly (or impossibly) high standards for “truly informed consent” to deliberate infection.  Here’s a prime example:

An important principle in human challenge studies is that subjects must give their informed consent in order to take part. That means they should be provided with all the relevant information about the risk they are considering. But that is impossible for such a new disease.

Why can’t you bluntly tell would-be subjects, “This is a very new disease, so there could be all sorts of unforeseen complications.  Do you still consent?”  Because the real point of bioethics isn’t to ensure informed consent, but to veto informed consent to whatever gives bioethicists the willies.

I’m no paternalist, but I understand paternalism.  Paternalists want to stop people from harming themselves.  The goal of bioethicists, however, is far stranger.  Bioethicists want to stop people from helping others! Even if experimental subjects heroically volunteer to be injected for no money at all, bioethicists stand on guard to overrule them.

I’ve said it before and I’ll say it again: Bioethics is to ethics as astrology is to astronomy.  If bioethicists had previously prevented a hundred Tuskegees from happening, COVID would still have turned the existence of their entire profession into a net negative for humanity.  Verily, we would be better off if their field had never existed.

If you find this hard to believe, remember: What the Tuskegee researchers did was already illegal in 1932.  Instead of creating a pile of new rules enforced by a cult of sanctimonious busybodies, the obvious response was to apply the familiar laws of contract and fiduciary duty.  These rules alone would have sent people like the Tuskegee researchers to jail where they belong.  And they would have left forthright practitioners of voluntary paid human experimentation free to do their vital life-saving work.

In a just world, future generations would hear stories of the monstrous effort to impede COVID-19 vaccine research.  Textbooks and documentaries would icily describe bioethicists’ lame rationalizations for allowing over a million people to die.  If the Tuskegee experiments laid the groundwork for modern Human Subjects Review, the COVID non-experiments would lay the groundwork for the abolition of these deadly shackles on medical progress.

Which is further proof, in case you needed any, that we don’t live in a just world.

* At least as I’m writing.  Maybe this will have started by the time you read this.


Will Joe Biden Be a Dictator?

This might look like a ridiculous question to ask about a soft-looking near-octogenarian who signals his virtue by repeating the inclusiveness mantra. But not so much if you define “dictator” as a political ruler who imposes on the whole population some shared preferences of the minority who brought him to power or keeps him there. A more inclusive definition would replace “minority” with “majority short of unanimity.”

Biden was elected by 51% of the American voters. If, to be inclusive indeed, we include the third of the electorate (that is, of Americans eligible to vote) who did not vote, Mr. Biden’s support shrinks to 34% (51% × 66%). Now, consider that many who voted for him probably did so only or mainly because they thought that his adversary, Donald Trump, was even worse—not an unrealistic hypothesis. If Biden imposes the preferences of 17% of the electorate (say, half of his 34%, assuming half of his voters were mainly voting against Trump) on the other 83%, or even of 34% on 66%, he will be a dictator. (Note that my definition of the term is not very different from the one in Kenneth Arrow’s famous Impossibility Theorem.)
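For readers who want the arithmetic spelled out, here is a minimal sketch. The turnout, vote-share, and “half voted mainly against Trump” figures are the rough approximations used above, not precise election statistics.

```python
# Back-of-the-envelope support arithmetic (all figures are rough approximations).
turnout = 0.66            # share of eligible voters who actually voted
biden_vote_share = 0.51   # Biden's share of those who voted

# Share of the whole electorate (all eligible voters) who voted for Biden
support_of_electorate = biden_vote_share * turnout
print(f"Support among the electorate: {support_of_electorate:.0%}")   # ~34%

# Hypothetically, suppose half of those voters were mainly voting against Trump
# rather than for Biden's program.
positive_support = support_of_electorate / 2
print(f"Positive support for Biden's program: {positive_support:.0%}")  # ~17%
```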

An interesting article that bears on this topic is John G. Grove’s “Numerical Democracy or Constitutional Reality,” Law & Liberty (our sister website), November 12, 2020. Grove argues that the United States is a limited, compound republic, not a numerical democracy, and that the whole structure of checks and balances is meant to prevent a numerical majority from bulldozing the preferences of the rest. In this perspective, each side has a right to have its preferences incorporated in the winner’s legislation, and an adverse electoral result is not, for the losers, a catastrophe to be corrected at any cost.

By the very nature of government, however, it is not easy to prevent winner-take-all results: a law is enforced against everybody, especially against individuals who did not agree with it. It seems that, on the basis of an individualist philosophy, only a near-universal consensus could justify radical change.

One disturbing implication is the following. Grove’s idea is a two-edged sword. When we start from several decades of a collectivist legislative and regulatory drift that has trammeled the minority of individuals who want to be largely left alone, even a new numerical majority may not and could not rapidly change course. Ronald Reagan, with his many good ideas (and a number of bad ones) did not bring much change and perhaps no lasting change. But for the same reason, thank God, Trump was not able to do more damage than he did.

James Buchanan, the Nobel economist, understood the conundrum: How can one reverse dictatorship without being a dictator himself? The solution, Buchanan argues with Geoffrey Brennan in their book The Reason of Rules: Constitutional Political Economy (Liberty Fund, 2000[1985]), is a “constitutional revolution.” That is, we—“we” classical liberals and libertarians—need to promote radical change to which our fellow citizens can unanimously consent, at least in theory. This pedagogical and abstract task is not an easy one.


White Guilt and Reparations: A True Story

Co-blogger Bryan Caplan’s post this morning on collective guilt and the subsequent discussion in the comment section reminded me of something that happened my first day of a microeconomics class in 2001. At the end of the opening class, a number of people came up to ask questions. One was a young black woman who said, “Professor, what do you think of reparations for slavery?”

I answered, “I promise I’ll answer but first I want to know what you think.”

She said, “I favor them.”

“And those reparations would be paid for by white people?”

“Yes,” she answered.

I turned to a white guy who was waiting to ask a question, and I took a risk.

“Where are your grandparents from?” I asked.

“The Netherlands,” he answered.

I then turned back to the woman who had asked and said, “I’m ready to answer you. His grandparents came to this country well after slavery had ended. I think it’s wrong for the government to tax people who didn’t even inherit wealth from slavery to give to the great-great-grandchildren of former slaves.”

Note: Of course it’s possible that his grandparents inherited wealth from their predecessors having had slaves in the Netherlands. I don’t know the history of slavery in the Netherlands. But the odds that they gained big time and came to the United States as wealthy people were probably pretty low.


Collective Guilt for Everyone for Everything

Here’s an excerpt from my book-in-progress, Poverty: Who To Blame.


After “Don’t blame the victim,” the second-most obvious maxim for blame is, “Only blame the perpetrators.”  Precisely who, though, are the “perpetrators”?  Another deep criticism of my approach is that I blame too narrowly.  Instead of concentrating blame on specific wrong-doers, we should blame large swaths of society – or even whole countries.  To my ears, this echoes a blood-curdling passage from Deuteronomy:

If you hear it said about one of the towns the Lord your God is giving you to live in that troublemakers have arisen among you and have led the people of their town astray, saying, “Let us go and worship other gods” (gods you have not known), then you must inquire, probe and investigate it thoroughly. And if it is true and it has been proved that this detestable thing has been done among you,  you must certainly put to the sword all who live in that town. You must destroy it completely, both its people and its livestock.[i]

While most moderns would deny any affinity, the Deuteronomic mentality is alive and well.  In wartime, citing an offending government’s actions to rationalize collective punishment of its citizens is the default.  Japan attacked Pearl Harbor, so the people of Japan have only themselves to blame when we firebomb Tokyo.  Hamas won’t make peace with Israel, so the inhabitants of the Gaza strip have only themselves to blame for the ongoing blockade.  Israel won’t leave the West Bank, so Israeli citizens have only themselves to blame for terrorist attacks.  Even in peacetime, though, collective blame occasionally surfaces.  Many wish to exclude immigrants because of the crimes of a handful of people from the same country or religion.  And in recent years, collective blame for “structural” or “institutional” racism and sexism has become common in progressive spaces – especially college campuses.  The core idea is that a white male can’t hold himself blameless for racism and sexism merely because he personally is neither racist nor sexist.

Can an idea with such broad appeal really be wrong?  What is telling is that barely anyone endorses all or even most appeals to collective blame.  Those who invoke it do so selectively.  Indeed, they normally invoke it nepotistically; when my cause or my group suffers, we are entitled to blame loosely.  Historian Stephen Roberts once quipped: “I contend that we are both atheists. I just believe in one fewer god than you do. When you understand why you dismiss all the other possible gods, you will understand why I dismiss yours.”  Similarly, I contend that those who insist that “We are all guilty” dismiss almost as many forms of collective blame as I do.  I just dismiss their carve-outs too.

[i] Deuteronomy 13:12-15.  These are odd injunctions even in the context of fanatical religious intolerance.  At least some adults in an entire town would have remained true to Yahweh; and what about townsfolk too young (or too senile) to detect apostasy?  Later books of the Bible take these questions to heart, firmly switching from collective to individual responsibility:

Yet you ask, “Why does the son not share the guilt of his father?” Since the son has done what is just and right and has been careful to keep all my decrees, he will surely live.  The one who sins is the one who will die. The child will not share the guilt of the parent, nor will the parent share the guilt of the child. The righteousness of the righteous will be credited to them, and the wickedness of the wicked will be charged against them. (Ezekiel 18:19-20)


What is Equity?

In a comment on one of my recent posts, co-blogger Scott Sumner quotes my statement:

But implicit in his discussion is the idea that equity is synonymous with income equality or, at least, reduced income inequality. That’s not my view. My view is that people are treated equitably when other people don’t take their stuff.

Scott then writes:

That’s fine as a definition, but in that case I’d just use a different term.  Even if I accepted your definition of “equity”, it would not change my views on Romney’s proposal at all.  I’d just replace “equity” with “income equality” in my post, and otherwise keep the argument the same.

If he had stopped there, we wouldn’t have had a disagreement, except that we have very different views on the desirability of Romney’s proposal.

But then Scott writes:

I think my use of equity is consistent with how it’s used in economics textbooks when they discuss the equity/efficiency trade-off.

That’s a bridge too far.

It is consistent with how it’s used in some, possibly many, economics textbooks. I found words to that effect, for example, in Jack Hirshleifer’s microeconomics text. But some, maybe many, economics textbooks give a few versions of “equity.”

Here are two examples, and I didn’t have to look hard in my remaining few economics textbooks to find them.

In the 9th edition of their textbook Economics: Principles and Policy (2003), William J. Baumol and Alan S. Blinder consider various versions of equity. The two most directly related to this discussion are the concept of “vertical equity” and the benefits principle.

The version of the vertical equity principle they discuss is the “ability-to-pay principle,” which says that “those most able to pay should pay the highest taxes.” But they then give examples of a progressive, a proportional, and a regressive income tax system, in all of which those most able to pay do pay the highest taxes. So that’s not guidance that leads you to income equality or even to reducing income inequality.
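A tiny numerical sketch may help here; the incomes and rates below are my own illustrative figures, not Baumol and Blinder’s. Under progressive, proportional, and regressive rate structures alike, the higher earner pays the larger tax bill in dollars, so the ability-to-pay principle by itself does not single out any of the three.

```python
# Hypothetical incomes and tax rates, purely for illustration.
low_income, high_income = 30_000, 300_000

def tax_bill(income, rate):
    return income * rate

schedules = {
    "progressive":  (0.10, 0.30),  # higher income taxed at a higher rate
    "proportional": (0.20, 0.20),  # same flat rate for both
    "regressive":   (0.25, 0.15),  # higher income taxed at a lower rate
}

for name, (low_rate, high_rate) in schedules.items():
    low_tax = tax_bill(low_income, low_rate)
    high_tax = tax_bill(high_income, high_rate)
    # In every case, the person "most able to pay" pays the larger dollar amount.
    print(f"{name:>12}: low earner pays {low_tax:,.0f}, high earner pays {high_tax:,.0f}")
```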

The other equity principle they discuss is the “benefits principle,” which says that “those who reap the benefits from government services should pay the taxes.” They note that this principle of fair taxation “often violates commonly accepted notions of vertical equity.”

You can see why. Bill Gates gains a lot from national defense, but does he gain much from U.S. foreign policy, which tends to be focused on national offense? Or even more obviously, does he gain from the existence of the SNAP (food stamp) program?

In the 5th edition of N. Gregory Mankiw’s Principles of Economics (2009), there’s a similar discussion. Indeed, Mankiw uses examples of progressive, proportional, and regressive tax systems, all of which cause those with higher incomes to pay more taxes.

Mankiw also discusses the benefits principle and even argues that it could be used to justify taxing higher-income people more for anti-poverty programs, on the grounds that reducing poverty is a public good. Whether or not you think this is a stretch, the point is that we still don’t get to the conclusion that high-income people should pay a larger percent of their income.

And notice that neither of the two textbooks equates equity with income equality.

Revolution is the Hell of It: Algerian Edition

In 1968, Abbie Hoffman famously wrote a book called Revolution for the Hell of It.

In 1973, this negatively inspired David Friedman to write a chapter called “Revolution is the Hell of It.”

Last month, I watched The Battle of Algiers, probably the most famous pro-terrorist (or at least anti-anti-terrorist) movie in history.  If you don’t know the sordid history of the “liberation” of Algeria, you should.  The whole movie is gripping, but this little speech by terrorist Ben M’Hidi stayed with me.  Though the writers probably intended the speech to be an inspiration rather than a warning, it’s a vivid vindication of Friedman over Hoffman.

BEN M’HIDI: Do you know something Ali? Starting a revolution is hard, and it’s even harder to continue it. Winning is hardest of all. But only afterward, when we have won, will the real hardships begin.

Which raises the obvious pacifist question: “Then why start?”  Committing evil deeds when the benefits are large and reliable might be justified.  Committing evil deeds when the benefits are deeply speculative is absurd.

Am I really going to defend colonialism?  No.  As I’ve said before, both colonialism and anti-colonialism are blameworthy expressions of violent nationalism:

But don’t you either have to be pro-colonial or anti-colonial?  No.  You can take the cynical view that foreign and native rule are about equally bad.  You can take the pacifist view that the difference between foreign and native rule isn’t worth a war.  Or, like me, you can merge these positions into cynical pacifism.  On this view, fighting wars to start colonial rule was one monstrous crime – and fighting wars to end colonial rule was another.

In the case of Algeria, however, I should add that native rule turned out to be vastly worse.

