Businesspeople Earn Every Penny

Back in February, I got the idea to create a COVID vaccination t-shirt (now on sale!).  Reflecting on my past experience, I figured it would be easy.

Step 1: Run an illustration contest, something I’ve successfully done several times before.

Step 2: Take the winning entries, design some shirts, and sell them using the same interface, another thing I’ve done several times before (albeit on a small scale).


My thinking: The whole process would be pretty fun, so I’d only need to sell a few dozen shirts to cover the cost of the contest and count the project a success.  I’m still optimistic, but the process has definitely been much more aggravating than expected.  A chronological list of snags:

1. One of my winning entrants warned me that the other two winners had copied their designs.  Unpleasant news.

2. When I followed up, one of the accused was able to produce clear documentation that she had purchased the rights to her design.  One problem solved.

3. The other accused contestant, however, seemed quite evasive about the situation.  Or perhaps it was a language problem?  I didn’t like the idea of paying for an unusable design, but I also felt bad about refusing to reward one of my winners.  After much prodding, he finally produced clear documentation that the images he incorporated into his design were in the public domain.  Another problem solved, but the conflict weighed on me.

4. I was planning on immediately announcing that the three shirts were available for sale, but I decided I ought to order test copies for myself first.  And figuring I was losing sales every day, I paid for rush delivery from Zazzle.

5. A couple days later, Zazzle sent me an email canceling the order.  Why?  They claimed that the sole fully original design violated copyright!  Hopefully I’ll work this out eventually, but apparently every drawing of a guy in a white suit at a disco infringes on Saturday Night Fever.  Argh.

6. I could have challenged the ruling, but instead I looked around for an alternative vendor.  I figured they’d all be pretty similar, so I quickly settled on Printful.

7. Since I’d never done business with Printful, I had to place another test order.

8. After a couple days, Printful emailed with a new problem: Printing a white semi-transparent design on a black sweatshirt yields an unwanted gray color.  So I went back and revised the order.

9. Soon afterwards, Zazzle let me know my cancelled order was in the mail, rush order surcharge included!  In the past, Zazzle cancelled all items in an order if it flagged any item for copyright problems.  Now, apparently, it sends everything that wasn’t cancelled.  Argh.

10. A week later, I checked on my Printful order, and discovered that I had somehow failed to click the final “OK” after revising the gray sweatshirt snafu, so my test order was still in limbo.  Sigh.  So I fixed it again, double stampies no erasies.

11. A few days later, I finally got my Printful order.  The products looked good.  I was ready to go.  But when I went to the website to offer the products for sale to customers, I discovered that Printful – unlike Zazzle – makes selling designs a pain in the neck.  To do business on Printful, I’d first have to sign up for a totally separate vendor website, and then merge the two accounts.  Argh.

12. After trying this for a half hour, I realized that I would be better off going back to Zazzle.  So I dumped Printful and created a new Zazzle store, #FearMeNot, minus the disapproved design.  Happily, the Zazzle interface seemed to work just as seamlessly as I remembered.  You can order “Fear me not! I got my COVID vaccine” shirts, sweatshirts, and hoodies now.  (Be sure to use the coupon code TUESDAYGIFTZ).


So at the end of this arduous and aggravating journey, I finally started selling my products to nudge the world back to normalcy.  In a week or so, I’ll try to convince the Zazzle copyright people that my third design is legit.  (Even if Saturday Night Fever does have a copyright on all images of disco-dancers in white suits, my design should clearly be protected as parody.)  Overall, I think this will be a positive experience for me.  The creative pleasure I’ve enjoyed plus the money I expect to make will probably exceed the subjective and financial cost of the dozen hassles I’ve already swallowed.

Still, a few more hassles could easily change my mind.  And selling t-shirts on Zazzle is virtually the lowest-hassle business I can imagine running.  Which makes me picture the horrors of creating and managing an actual business.

Indeed, I suspect that anyone who’s ever run an actual business has been rolling their eyes at my self-pity.  Twelve little snags?  Real entrepreneurs face more challenges every day.  Unlike me, they have to coordinate a long list of products, each with their own attendant baggage.  Unlike me, they have to manage a physical space.  Unlike me, they have to hire and direct employees.  And unlike me, they have to cope with a morass of government regulation.  I don’t care if actual businesspeople do roll their eyes at me; their can-do attitude in the face of endless obstacles still fills me with awe.

Note further that in this very blog post I’ve already publicly complained more about my business woes than most businesspeople ever will.  Are they stoic?  Do they realize that hardly anyone will sympathize with their plight?  Or are they just too busy making the trains run on time to stop and reflect?  All three answers make businesspeople look admirable indeed.  They don’t just make the world work.  They bear the suffering of the world in silence.  No wonder I love them!

What motivates businesspeople?  While the full answer is complex, the basic answer is clear: Money.  People run businesses to get richer – and ideally, to get rich.  And whenever I get a small taste of the challenges businesspeople overcome, not to mention the disrespect they endure in our society, I have to say that businesspeople earn every penny.  As someone who definitely does not want your job, entrepreneurs of the world, I thank you.

P.S. Put your customers at ease with a #FearMeNot shirt!





Privileges and Privacy for the Rulers

Recent journalistic investigations revealed that the family and friends of New York governor Andrew Cuomo benefited from nomenklatura privileges at a time when ordinary people had problems getting Covid-19 tests and timely results. These state-privileged people could be tested rapidly, often at home and many times if they wished. Their tests were often rushed to laboratories by state troopers and given priority. Liz Wolfe of Reason Magazine writes:

There was limited testing if you thought you’d been exposed, and long wait times if you did manage to nab one of those precious few tests.

But not if your last name starts with a C and ends with an uomo! …

The Albany Times Union reported last night that Democratic Gov. Andrew Cuomo directed the state’s top health officials to prioritize COVID testing for “the governor’s relatives as well as influential people with ties to the administration.”

This reminded me that, in late December, I reported on Cuomo’s intention to prosecute those who would give or sell Covid-19 vaccines to anybody outside the groups favored by the state and its priorities (“Free Enterprise: A Daring New Year Wish”). At that time, I asked the governor’s office, through its website, if he had himself received the vaccine. Two weeks later, having received no reply, I rapidly drafted a freedom-of-information request (called Freedom of Information Law or FOIL request in New York State) and emailed it to both the governor’s office and the New York State Department of Health.

The two replies landed in my virtual mailbox a few days apart in January. The letter from the Executive Chamber of the State of New York said:

This letter responds to your correspondence dated January 12, 2021, which pursuant to FOIL, requested:

“the dates Governor Cuomo, members of his family, and immediate staff have received vaccines against Covid-19; and indicate in which group of priority recipients (according to the State of New York’s policies) they fall.”

To the extent your request is reasonably described, these records are not maintained by the NYS Executive Chamber.

Please be advised that even assuming such records were maintained by the Executive Chamber, they would be exempt pursuant to Public Officers Law § 87(2)(b) because, if disclosed, would “constitute an unwarranted invasion of personal privacy.”

Additionally, pursuant to Public Officers Law § 87(2)(a), an agency may deny access to records or portions thereof that are “specifically exempted from disclosure by state or federal statute.” Accordingly, to the extent records may exist said records are exempt from production pursuant to Health Insurance Portability and Accountability Act of 1996, Public Law 104-191 and New York State Public Health Law §18.

The reply from the Department of Health was not very different:

This letter responds to your Freedom of Information Law (FOIL) request of January 12, 2021, in which you requested “the dates Governor Cuomo, members of his family, and immediate staff have received vaccines against Covid-19; and indicate in which group of priority recipients (according to the State of New York’s policies) they fall.”

Please be advised, the records you are requesting, to the extent such records exist, contain protected health information (PHI) regarding the individuals referenced in your request. In accordance with New York State law and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Federal Law 45 C.F.R. §164.524), the Department requires a duly executed HIPAA authorization form in order to release PHI regarding any individual. We note your request was not accompanied by any HIPAA authorization forms.

Accordingly, your request is denied pursuant to POL §87(2)(a) as “specifically exempted from disclosure by state or federal statute” in accordance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Federal Law 45 C.F.R. §164.524), and §87(2)(b), because disclosure “would constitute an unwarranted invasion of personal privacy.”

We now know that the governor himself waited his turn and received the vaccine in mid-March with much public fanfare.

The replies to my FOIL requests, however, show something interesting. One might have thought that privacy laws were meant to protect individuals against Leviathan’s lust for private information. But these laws seem to have been hijacked to protect the privacy of the rulers themselves. Perhaps actual governments don’t work as their ideal models?

Is “hijacked” exaggerated? Consider the following. If, as current legal doctrine claims, ordinary individuals have no expectation of privacy when they enter an air terminal or cross the U.S. border or relate to their loving governments in certain other ways, why would political rulers have an expectation of privacy while they serve the people and sacrifice themselves for the “public good”?



Moral Relativism and Moral Fanaticism

In high school, Ayn Rand convinced me that moral relativism was a grave social problem.  Not in the weak sense that, “If everyone were moral relativists, there would be bad consequences,” but in the strong sense that, “Moral relativism has terrible consequences already.”  Soon afterwards, I read Paul Johnson’s Modern Times, and he reinforced my Randian belief.  In Johnson’s words:

At the beginning of the 1920s the belief began to circulate, for the first time at a popular level, that there were no longer any absolutes: of time and space, of good and evil, of knowledge, above all of value. Mistakenly but perhaps inevitably, relativity became confused with relativism.

Johnson then proceeds to interpret the world from the 1920s to the 1980s through the lens of moral relativism.  Moral relativism leads to Marxism-Leninism, fascism, Nazism, and World War II, as well as the barbaric wars of “national liberation” and the subsequent petty tyrannies.

Over time, however, I’ve almost completely changed my mind.  While I definitely think that moral relativism is false, I no longer think that moral relativism has grave geopolitical consequences.  Instead, I say that the horrors that Johnson describes were heavily driven by what I call moral fanaticism.  And the same goes for our contemporary political landscape.  The vast majority of liberals and conservatives are much closer to moral fanaticism than moral relativism.

What exactly is moral fanaticism?  Like moral relativism, moral fanaticism is a meta-ethical theory – a theory about moral facts and moral reasoning.  Moral relativism says, roughly, that there are no moral facts, and moral “reasoning” is just thinly-veiled emoting.  Moral fanaticism, in contrast, affirms that there are moral facts, but pretends that thinly-veiled emoting is ironclad moral reasoning.  The predictable result is that moral fanatics hold bizarre moral views with immense confidence.  They’re like people who use love to solve math problems.

Consider Nazism.  Leonard Peikoff notwithstanding, moral relativism had near-zero influence on the Nazis.  The Nazis didn’t think the truth of their moral position was a matter of opinion.  They totally thought they were right.  They believed that Aryans were the master race, and that as the master race they had the right to treat lesser people as slaves or vermin.  That’s the kind of self-righteousness you need to murder millions.  What made them fanatics?  The way they reached these conclusions.  They didn’t try to stay calm.  They didn’t test their moral positions against hypotheticals.  They didn’t invite intelligent people who disagreed to check their work.  They didn’t ponder Bayes’ Rule, or study cognitive biases.  Instead, they adopted the moral positions most compatible with their own power-hunger and hate.

Basically the same goes for Johnson’s entire rogues’ gallery.  Marxist-Leninists also totally thought they were right – and had the kind of self-righteousness you need to murder millions.  And while their writing style was obviously very appealing to the highly-educated, their reasoning process was fanatical.  In their writings, neither Marx nor Lenin tries to stay calm.  They make near-zero effort to find and respond to intelligent critics.  They virtually never wonder if they’re just plain wrong.  Instead, they preach to the choir – with a subtext of fire and blood.  The anti-colonialist movement was obviously more varied.  But almost none of the prominent proponents of “national liberation” seriously wondered if their struggle against foreign oppression would unleash homegrown tyranny.  Questions like, “War is hell, so does it really make sense to turn to violence to obtain independence?” were thought crimes.  Yes, even Nelson Mandela was such a moral fanatic – even according to his falsified autobiography, which lies about his long-standing membership in the South African Communist Party.

The best case for my original position is that moral relativism enables moral fanaticism.  In the words of Bertrand Russell: “The trouble with the world is that the stupid are cocksure and the intelligent full of doubt.”  If reasonable people had the courage of their convictions, they would have proudly crushed Marxism-Leninism, Nazism, and other expressions of moral fanaticism before they became severe threats.  If you search carefully, you can definitely find statements consistent with this story.  Here’s what the great historian Carlton Hayes had to say about the Soviet Union in 1924:

Nevertheless, some order was emerging from the Russian chaos.  The world had failed to overcome Bolshevist Russia, and Bolshevist Russia had failed to overcome the world.  The Russian Revolution was left to work itself out as a great political and social experiment.  Already it stood forth in history as a most significant outcome of the Great War, and it promised to command the attention and interest of the whole world for many years to come.

In the end, however, these relativistic sentiments are throw-away comments.  A few casual words in a career.  When push comes to shove, almost everyone treats their political views as undeniable.  Take a look, for instance, at Hayes’ A Brief History of the Great War.  This book-length expression of absolute moral certitude in Wilson’s crusade to make the world safe for democracy starts with the dedication:

To those students of his who loyally left their books and proudly paid the supreme sacrifice in the cause of human solidarity against international anarchy the author inscribes this book.

A true believer mentality infuses the entire book.  None of the sordid history of the origins or aftermath of World War I even fazes Hayes.  (Though to his credit, Hayes later wrote a book-length critique of moral fanaticism called Nationalism: A Religion.)  And while he’s obviously just one man, he’s an archetype.

If moral fanaticism rules the world, though, why aren’t violent conflicts much more common?  Not because of moral relativism, but because of political pragmatism.  Even most moral fanatics realize that trying to impose their dogmas on the entire world would end in disaster.  For their own power-hungry selves.  They combine absurd confidence in their own moral rectitude with reasonable doubts about their ability to bring a world of enemies to their knees.  So life goes on.



Individual and Collective Choices in Cars

There appears to be something basic that most people in most of human history don’t understand. Or is it me (along with a lot of economists)? Here is the argument.

It would be better if our car were chosen democratically. A democratic referendum could ask voters to choose which car will be available to consumers. (How individual purchases would be financed, either with private money or by government, does not matter at this point.) Assume the voting system is the one you prefer and that the number of choices or write-in options is also what you think is most democratic. The voters are asked to vote for the single brand and model of car to be produced or imported. Each individual has one vote, however “one vote” is defined in your preferred voting system. The economies of scale brought about by a single model would reduce the price of cars compared to the wasteful diversity of the market—the 250 different models available on the American market, not counting the numerous options and colors for each model. Collective rationality would replace individual ignorance and market anarchy. Equality would be promoted: the times would be over when the rich could afford more luxurious and safer cars than the average American. The car manufacturer whose model has been chosen would, in a real sense, be elected democratically. A true collective choice would democratically decide which car we drive. What can be wrong with that?

Many things. In fact, this whole argument is invalid. A few reasons:

(1) Depending on the voting system (majority, plurality, ranked-choice, Borda, Hare, etc.), a different choice of car would likely prevail, so the person or group that chooses the voting system and the choices to be proposed would indirectly decide, or at least strongly influence, which car you will drive (on voting systems, see, in the forthcoming Spring issue of Regulation, my essay on William Riker’s Liberalism against Populism).

(2) Each individual’s vote has an infinitesimal chance of changing the result of the referendum, that is, of getting him the car he wants—or, for the real altruist, the car he thinks is better for the large masses.

(3) With only one producer and a lack of competition, including import competition, economies of scale would soon be overcome by reduced incentives, bureaucratic growth, and union power. In between referendums, the main incentives of the chosen producer would be to satisfy a faceless average consumer; or to swindle him if the incumbent thinks it is unlikely to be allowed to put one of its cars among the candidates next time. The consequences would be similar if, in a more complex referendum, a number of producers were chosen to offer, say, a black-made car, a white-made car, an LGBTQ+-made car, or any other stakeholder-made car. History provides us with an example of a near-collective car named Trabant, “a sparkplug with a roof.” (A Trabant model is shown on the featured image of this post.)

(4) This reminds us that political processes, even democratic ones, are very rough and imperfect. The most popular car in the American market is a pickup, Ford’s F-150, but only those who individually choose it are obliged to drive it, which is what individual choices are about.

(5) Economic efficiency, which is defined as the satisfaction of the varied preferences of different individuals, would be replaced by some common preferences of a centrist group of voters according to the median-voter theorem.

(6) Socialism and imposed uniformity in consumption are antithetical to individual dreams and their subjective utility—the sort of car you have wanted to buy for yourself since childhood, for example. I say “socialism” but it is the same in conservative collectivism or the old elitist right.

(7) Another obstacle to “collective rationality” would come from the voters’ “rational ignorance.” Since every individual voter knows that his vote has practically a zero chance of delivering the car he prefers, he would have no incentive to buy (if only with his time) information on the referendum alternatives—to subscribe to Consumer Reports, to read car magazines, to google technical terms or watch YouTube videos, to visit manufacturers’ online or physical showrooms, and so forth. (See also David Henderson, “The Logical Basis is a Difference in Incentives,” Econlog, March 9, 2021; and my own post “One Thing Rationally Ignorant Voters Don’t Know,” September 14, 2020.) And if there are many voters whose cognitive limitations or susceptibility to propaganda lead each of them to believe that he will decide the vote, how can anybody trust the rationality of such an electorate? Collective rationality amounts to voting blind or, at best, voting with one’s tribe.

(8) Market competition, not political competition, is, theoretically and historically, the way to reach economic efficiency.

(9) The equality obtained by letting every voter vote on our collective car would be illusory. Even with the ideal referendum, the real influencers would be the car manufacturers’ P.R. departments and popular pundits and media personalities (as well as perhaps QAnon-type websites).

(10) Even in this ideal democratic system, political competition would fill the void of economic competition. When economic markets are forbidden to clear, political markets will clear. Rent-seekers would try to influence which models will be put on the ballot, which producers will thus be privileged, and how long the monopoly will last.

(11) Consider the financing aspect of car purchases, ignored up to now. This issue would also need to be decided by an equally imperfect referendum. Suppose that “our national car” is to be paid for by the government and financed by public debt. All the voters who think that their individual votes count and who want “social justice” hic et nunc would likely vote for Cadillacs paid for by their descendants.
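The point that the chosen voting rule can itself decide the outcome is easy to demonstrate with a toy example (the ballots, counts, and car names below are invented for illustration). The very same ranked ballots crown one winner under plurality and a different one under a Borda count:

```python
from collections import Counter

# Hypothetical ranked ballots: each of 10 voters ranks three car models.
ballots = (
    [["Compact", "Sedan", "Pickup"]] * 4 +   # 4 voters prefer the Compact
    [["Sedan", "Compact", "Pickup"]] * 3 +   # 3 voters prefer the Sedan
    [["Pickup", "Sedan", "Compact"]] * 3     # 3 voters prefer the Pickup
)

def plurality_winner(ballots):
    # Plurality counts only first-place votes.
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda_winner(ballots):
    # Borda: with n candidates, a ballot gives n-1 points to its first
    # choice, n-2 to its second, and so on down to 0 for its last.
    scores = Counter()
    for ballot in ballots:
        for points, car in enumerate(reversed(ballot)):
            scores[car] += points
    return scores.most_common(1)[0][0]

print(plurality_winner(ballots))  # Compact (most first-place votes: 4)
print(borda_winner(ballots))      # Sedan (broad second-choice support)
```

Here the Compact wins under plurality with 4 first-place votes, but the Sedan wins the Borda count (13 points to the Compact’s 11), since it is nearly everyone’s first or second choice. Whoever picks the rule effectively picks the car.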

What most people do not understand, even apparently in America and in other sophisticated countries, is that individual choices are preferable to collective choices for both economic and moral reasons. This is true not only for cars but also for most other goods. Only goods or services that must be consumed simultaneously by all—what economists call “public goods”—escape this characterization but a separate argument has to be made for them.



Paul Krugman and the Notion of Choice

In one of his recent New York Times columns (“Too Much Choice Is Hurting America,” March 1, 2021), Nobel economist Paul Krugman worries about the US having become

a country in which many of us are actually offered too many choices, in ways that can do a lot of harm.

It’s true that both Economics 101 and conservative ideology say that more choice is always a good thing. …

In the real world, too much choice can be a big problem. …

Too much choice creates space for predators who exploit our all-too-human limitations.

… people have limited “bandwidth” for processing complex issues.

This is not an original opinion. It is typical of the authoritarian left and of the authoritarian right. Even Trump, who did not mind preventing people from buying dolls made in China, might agree. Krugman should know why, for nearly three centuries, mainstream economics, and not only at the 101 level, has taken the opposite stance.

As a positive science, economics shows that individuals make their choices on the basis of their preferences and their constraints. Individuals generally prefer more choices because that increases the possibilities of satisfying their preferences. One is more limited if he can choose between only a rotary telephone and a push-button one.  If we use economics normatively on the basis of classical-liberal values (from which Krugman has far drifted), every individual should have the right to choose what he wants—except for committing crimes such as murder, because a social system allowing such crimes is presumably in nobody’s interest.

When Prof. Krugman states that “many of us are actually offered too many choices,” he doesn’t include himself, but only the poor or those who he thinks don’t have his intellect. What would he say if some intellectual told him that he has too many book choices? What if we told him that he is manipulated by “predators who exploit our all-too-human limitations”?

Of course, it is true that some individuals make choices that turn out to be bad for their own future. But what is the alternative?

Mr. Krugman only dislikes individual choices. He loves the alternative: collective choices imposed on everybody, choices where individuals are much more impotent and blind. Impotent because the typical individual—as opposed to a Nobel prizewinner with a column in the New York Times—has only one vote that will not change the result of any election; and moreover because, “in the real world,” politicians and bureaucrats make most of the collective decisions anyway. Blind because, for the reasons we have just seen, the ordinary voter remains “rationally ignorant” (as public-choice economists say) of politics and spends less time getting information on politics than when, as a consumer, he buys a new car. Even if you think that you are buying something when you vote, it is nearly infinitely more difficult to figure out what you buy than when you purchase for yourself a car, a computer, a health insurance policy, or a mortgage.

Given his predilection for collective choices, we can suspect that Krugman wants his elitism to be imposed by laws and regulations, mandates and bans. His columns are not meant to be about aesthetics and literary criticism. Isn’t he worried that, in collective choices, political predators, with the same cognitive limitations as “many of us,” will “exploit our all-too-human limitations”? Who is more dangerous, a political demagogue or the VP Marketing at Ford?

Krugman cites the case of the Great Recession as an example of individuals being incompetent to make choices:

One cause of the 2008 financial crisis was the proliferation of novel financial arrangements, like interest-only loans, that looked like good deals but exposed borrowers to huge risks.

To be fair, he does speak of one cause, but why does he not mention the major role played by the federal government, which was already guaranteeing nearly half of residential mortgages and was controlling the whole market? The main “novel financial arrangement,” the mortgage-backed security, was created in 1970 by Ginnie Mae, a federal government agency that long boasted about it on its website. Krugman knows something about this because he wrote in a 2009 book that securitization was “pioneered by Fannie Mae,” a government-sponsored enterprise. Moreover, the federal government had spent decades encouraging poor people to take out mortgages and coercing banks into not discriminating against people likely to be incapable of repaying them. In 2003, congressman Barney Frank declared (all citations in my 2011 book Somebody in Charge):

I believe that we, as the Federal Government, have probably done too little rather than too much to push them to meet their goals of affordable housing and to set reasonable goals. … I would like to get Fannie [Mae] and Freddie [Mac] more deeply into helping low-income housing and possibly moving into something that is more explicitly a subsidy. … I want to roll the dice a little bit more in this situation towards subsidizing housing.

All that is a bit troubling. How can somebody like Krugman, who is, after all, an economist and obviously an intelligent man, defend such simplistic ideas? Should we just suppose that his New York Times columns are so heavily edited that they don’t really represent his own opinions? (The New York Times is apparently known as an “editors’ paper” as opposed to a “writers’ paper.”) But if so, why would he agree to play that game?



Is Amazon a Corporate Mother Teresa?

Amazon is in many ways a fascinating company and deserves to be defended against most of its mainstream critics. However, it would be simplistic to explain its campaign for a $15 federally-imposed minimum wage by casting the company as a corporate Mother Teresa. Its more obvious reasons for preaching minimum wages are not defensible.

I will not repeat all the arguments against the minimum wage, summarized in a good article by Cato Institute’s Ryan Bourne (“The Case Against a $15 Federal Minimum Wage: Q&A”). My co-blogger David Henderson has also defended many of the standard economic arguments. There exist some disagreements among economists about the employment effect of minimum wages, but they mainly relate to the size and victims of the negative effect (see Bourne’s overview).

One thing is sure: Amazon would benefit from forcing higher costs on its small competitors, including mom-and-pop businesses. A higher minimum wage would have exactly this effect while it would have zero effect on Amazon’s costs. As the company already pays a starting wage equal to the proposed $15 minimum, the latter would be non-binding and irrelevant for the retail behemoth.

One reason why Amazon was able to bid up the wage of its entry-level workforce is that its technology and other capital embedded in its warehouses and distribution network increase the productivity of its employees, which justifies the bidding up from a pure profit-maximizing viewpoint. There is nothing wrong with profits, but there is something wrong with using state power to bankrupt one’s competitors. This is what is happening. Jonathan Meer, an economist at Texas A&M University, observes:

It’s a lot harder for Joe’s Hardware. We should take note that Amazon—the place with no cashiers—is the one calling for a higher minimum wage.

Other large companies—such as Walmart—have come out in favor of an increase in the federal minimum but not up to $15. In their case, indeed, $15 would be binding for some employees. (Cf. Eric Morath and Heather Haddon, “Many Businesses Support a Minimum-Wage Increase—Just Not Biden’s $15-an-Hour Plan,” Wall Street Journal, March 1, 2021)

Amazon has another reason to be politically correct, that is, to signal its virtue under current faddish and unrealistic ideas. The company can hope to cajole DC’s powerful men to spare it from some regulation that would bite. The systemic effects of such behavior point to crony capitalism and groveling toward the state, which are not good for free enterprise and future prosperity.

It is not clear, to say the least, what kind of acceptable ethics could justify Amazon’s current behavior.



What the Success Sequence Means

[continued from yesterday]

…This is a strange state of affairs.  Everyone – even the original researchers – insists that the success sequence sheds little or no light on who to blame for poverty.  And since I’m writing a book called Poverty: Who To Blame, I beg to differ.

Consider this hypothetical.  Suppose the success sequence discovered that people could only reliably avoid poverty by finishing a Ph.D. in engineering, working 80 hours a week, and practicing lifelong celibacy.  What would be the right reaction?  Something along the lines of, “Then we shouldn’t blame people for their own poverty, because self-help is just too damn hard.”

The underlying moral principle: You shouldn’t blame people for problems they have no reasonable way to avoid.  You shouldn’t blame them if avoiding the problem is literally impossible; nor should you blame them if they can only avoid the problem by enduring years of abject misery.

The flip side, though, is that you should blame people for problems they do have a reasonable way to avoid.  And the steps of the success sequence are eminently reasonable.  This is especially clear in the U.S.  American high schools have low standards, so almost any student who puts in a little effort will graduate.  Outside of severe recessions, American labor markets offer ample opportunities for full-time work.  And since cheap, effective contraception is available, people can easily avoid having children before they are ready to support them.

These realizations are probably the main reason why talking about the success sequence so agitates the critics.  The success sequence isn’t merely a powerful recipe for avoiding poverty.  It is a recipe easy enough for almost any adult to understand and follow.

But can’t we still blame society for failing to foster the bourgeois values necessary to actually adhere to the success sequence?  Despite the popularity of this rhetorical question, my answer is an unequivocal no.  In ordinary moral reasoning, virtually no one buys such attempts to shift blame for individual misdeeds to “society.”

Suppose, for example, that your spouse cheats on you.  When caught, he objects, “I come from a broken home, so I didn’t have a good role model for fidelity, so you shouldn’t blame me.”  Not very morally convincing, is it?

Similarly, suppose you hire a worker, and he steals from you.  When you catch him, he protests, “Don’t blame me.  Blame racism.”  How do you react?  Poorly, I bet.

Or imagine that your brother drinks his way into homelessness.  When you tell him he has to reform if he wants your help, he denounces your “bloodless moralism.”  Are you still obliged to help him?  Really?

Finally, imagine you’re a juror on a war crimes trial.  A soldier accused of murdering a dozen children says, “It was war, I’m a product of my violent circumstances.”  Could you in good conscience exonerate him?

So what?  We should place much greater confidence in our concrete moral judgments than in grand moral theories.  This is moral reasoning 101.  And virtually all of our concrete moral judgments say that we should blame individuals – not “society” – for their own bad behavior.  When wrong-doers point to broad social forces that influenced their behavior, the right response is, “Social forces influence us all, but that’s no excuse.  You can and should have done the right thing despite your upbringing, racism, love of drink, or violent circumstances.”

To be clear, I’m not saying that we should pretend that individuals are morally responsible for their own actions to give better incentives.  What I’m saying, rather, is that individuals really are morally responsible for their actions.  Better incentives are just icing on the cake.

This is not my eccentric opinion.  As long as we stick to concrete cases, virtually everyone agrees with me.  Each of my little moral vignettes is a forceful counter-example to the grand moral theory that invokes “broad social forces” to excuse wrong-doing.  And retaining a grand moral theory in the face of multitudinous counter-examples is practically the definition of bad philosophy.

Does empirical research on the success sequence really show that the poor are entirely to blame for their own poverty?  Of course not!  In rich countries, following the success sequence is normally easy for able-bodied adults, but not for children or the severely handicapped.  In poor countries, even able-bodied adults often find that the success sequence falls short (though this would be far less true under open borders).  Haitians who follow the success sequence usually remain quite poor because economic conditions in Haiti are grim.  Though even there, we can properly blame Haitians who stray from the success sequence for making a bad situation worse.

Research on the success sequence clearly makes people nervous.  Few modern thinkers, left or right, want to declare: “Despite numerous bad economic policies, responsible behavior is virtually a sufficient condition for avoiding poverty in the First World.  And we have every right to blame individuals for the predictable consequences of their own irresponsible behavior.”  Yet if you combine the rather obvious empirics of the success sequence with common-sense morality, this is exactly what you will end up believing.



What Does the Success Sequence Mean?

If you live in the First World, there is a simple and highly effective formula for avoiding poverty:

1. Finish high school.

2. Get a full-time job once you finish school.

3. Get married before you have children.

Researchers call this formula the “success sequence.”  Ron Haskins and Isabel Sawhill got the ball rolling with their book Creating an Opportunity Society, calling for a change in social norms to “bring back the success sequence as the expected path for young Americans.”  The highest-quality research on this success sequence probably comes from Wendy Wang and Brad Wilcox.  In their Millennial Success Sequence, they observe:

97% of Millennials who follow what has been called the “success sequence”—that is, who get at least a high school degree, work, and then marry before having any children, in that order—are not poor by the time they reach their prime young adult years (ages 28-34).

One common criticism is that full-time work does almost all the work of the success sequence.  Even if you drop out of high school and have five kids with five different partners, you’ll probably avoid poverty as long as you work full-time.  Wilcox and Wang disagree:

…This analysis is especially relevant since some critics of the success sequence have argued that marriage does not matter once education and work status are controlled.

The regression results indicate that after controlling for a range of background factors, the order of marriage and parenthood in Millennials’ lives is significantly associated with their financial well-being in the prime of young adulthood. Simply put, compared with the path of having a baby first, marrying before children more than doubles young adults’ odds of being in the middle or top income. Meanwhile, putting marriage first reduces the odds of young adults being in poverty by 60% (vs. having a baby first).

But even if the “work does all the work” criticism were statistically true, it misses the point: Single parenthood makes it very hard to work full-time.
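The odds-ratio language in the Wilcox and Wang quote above (“more than doubles young adults’ odds”) is easy to misread as “more than doubles the probability.”  A toy calculation makes the distinction concrete; the counts below are invented for illustration and are not taken from their data:

```python
# Hypothetical illustration of odds vs. probability, using made-up numbers
# (not Wang and Wilcox's actual estimates).

def odds(p):
    """Convert a probability to odds: p / (1 - p)."""
    return p / (1 - p)

# Suppose 80% of a "married first" group and 67% of a "baby first" group
# reach the middle or top of the income distribution.
p_married_first = 0.80
p_baby_first = 0.67

odds_ratio = odds(p_married_first) / odds(p_baby_first)
print(round(odds_ratio, 2))  # roughly 1.97: about double the odds,
                             # even though the probability rose only
                             # from 0.67 to 0.80
```

So an odds ratio above 2 is compatible with a much smaller gap in raw probabilities, which is worth keeping in mind when reading regression results reported on the odds scale.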

A more agnostic criticism doubts causation.  Sure, poverty correlates with failure to follow the success sequence.  How, though, do we know that the so-called success sequence actually causes success?  It’s not like we run experiments where we randomly assign lifestyles to people.  The best answer to this challenge, frankly, is that causation is obvious.  “Dropping out of school, idleness, and single parenthood make you poor” is on par with “burning money makes you poor.”  The demand for further proof of the obvious is a thinly-veiled veto of unpalatable truths.

A very different criticism, however, challenges the perceived moral premise behind the success sequence.  What is this alleged moral premise?  Something along the lines of: “Since people can reliably escape poverty with moderately responsible behavior, the poor are largely to blame for their own poverty, and society is not obliged to help them.”  Or perhaps simply, “The success sequence shifts much of the moral blame for poverty from broad social forces to individual behavior.”  While hardly anyone explicitly uses the success sequence to argue that we underrate the blameworthiness of the poor for their own troubles, critics still hear this argument loud and clear – and vociferously object.

Thus, Eve Tushnet writes:

To me, the success sequence is an example of what Helen Andrews dubbed “bloodless moralism”…

All bloodless moralisms conflate material success and virtue, presenting successful people as moral exemplars. And this, like “it’s better to have a diploma than a GED,” is something virtually every poor American already believes: that escaping poverty proves your virtue and remaining poor is shameful.

Brian Alexander similarly remarks:

The appeal of the success sequence, then, appears to be about more than whether it’s a good idea. In a society where so much of one’s prospects are determined by birth, it makes sense that narratives pushing individual responsibility—narratives that convince the well-off that they deserve what they have—take hold.

Cato’s Michael Tanner says much the same:

The success sequence also ignores the circumstances in which the poor make choices. Our choices result from a complex process that is influenced at each step by a variety of outside factors. We are not perfectly rational actors, carefully weighing the likely outcomes for each choice. In particular, progressives are correct to point to the impact of racism, gender-based discrimination, and economic dislocation on the decisions that the poor make in their lives. Focusing on the choices and not the underlying conditions is akin to a doctor treating only the visible symptoms without dealing with the underlying disease.

Strikingly, the leading researchers of the success sequence seem to agree with the critics!  Wang and Wilcox:

We do not take the view that the success sequence is simply a “pull yourselves up by your own bootstraps” strategy that individuals adopt on their own. Rather, for many, the “success sequence” does not exist in a cultural vacuum; it’s inculcated by an interlocking cultural array of ideals, norms, expectations, and knowledge.*

This is a strange state of affairs.  Everyone – even the original researchers – insists that the success sequence sheds little or no light on who to blame for poverty.  And since I’m writing a book called Poverty: Who To Blame, I beg to differ…

* To be fair, Wang and Wilcox also tell us: “But it’s not just about natural endowments, social structure, and culture; agency also matters. Most men and women have the capacity to make choices, to embrace virtues or avoid vices, and to otherwise take steps that increase or decrease their odds of doing well in school, finding and keeping a job, or deciding when to marry and have children.”

[to be continued]



Jeff Hummel on Classical Liberals and Libertarians

Economist and libertarian Jeff Hummel, pictured above, sent me the following and I think it’s worth sharing:

In a Zoom session some libertarian friends and colleagues had a lively discussion of the correct usage of the term “libertarian.” Afterwards I had some additional thoughts. So I wrote this message to lay out my argument in more detail.

In the late 60s and early 70s, as the libertarian movement was just distinguishing and disentangling itself from conservatism, the terms “libertarian” and “classical liberal” had clear and relatively precise meanings, at least among the U.S. libertarians with whom I associated. These meanings were once very clearly articulated by libertarian philosopher Eric Mack at one of the IHS (or Cato?) summer seminars I attended.

He defined a classical liberal as someone who believes that maximizing individual liberty should be the highest (if not sole) goal of government (or the State). A libertarian is a classical liberal who further believes that government should have no morally privileged status with regard to its powers. In other words, government and its agents could justly engage only in actions that were legitimate for individuals or groups of individuals. Thus, governments should be confined to using force (coercion) to the extent that individuals can rightfully do so for defense or restitution. Stating the libertarian constraint on government in this way gets around (or evades, if you prefer) some of the difficult problems defining the moral limits of legitimate defense and restitution, about which libertarians sometimes disagree, especially in the realm of so-called national defense. Yet nearly all moral philosophies, religious and non-religious, share certain broad outlines, disapproving of murder and theft, as much as they may differ on the borderline details of justifiable defense or restitution.

This way of putting the constraint also leaves open some room for the libertarian archipelagos of Chandran Kukathas, or for the proprietary communities that other libertarians favor. But in order to qualify as genuine libertarian social orders, such communities must be voluntary associations. The opposition to government taxation is also what distinguished libertarians from non-libertarian classical liberals, who in contrast believe that there is a difficult trade-off between liberty and coercion. In their view, government must impose taxes and perhaps exercise other coercive powers not derived from individual rights in order to effectively maximize total liberty. Libertarians, in contrast, held that government should be entirely voluntarily funded, a position that even Ayn Rand embraced at one point.

Libertarians then divided into limited-government libertarians (or to use Sam Konkin’s term, minarchists) and anarchist libertarians (or anarcho-capitalists, a term I never liked). Rand, among others, was a limited-government libertarian. Anarchist libertarians, such as myself and the younger Roy Childs, did argue that the limited-government libertarian position was inconsistent, pointing out that there is no sure way that a government (even if it eschews taxation) can maintain its monopoly without using some coercive powers that are illegitimate for individuals. But we never denied that limited-government libertarians qualified as libertarians, as long as they continued to believe that a voluntarily funded government was desirable and possible, no matter how mistaken we found that belief. Moreover, these distinctions were fairly widely recognized and accepted by libertarians of all varieties, whether primarily influenced by Rand or Rothbard.

I admittedly recognize two problems with maintaining these clear distinctions today. There is often a tension between prescriptive and descriptive definitions for words. I accept that the meanings of words spontaneously evolve over time. The word “libertarian” was used with a less precise meaning before the modern movement, even being embraced by some socialist libertarians. And in common usage today, the terms libertarian and classical liberal have become virtually synonymous. I attribute that evolution to two developments. (1) As some (many?) of the young libertarians of the 60s and 70s matured and aged, having to deal with real-world problems and issues, their views became less consistent or more nuanced and subtle, depending on your point of view. For particularly extreme cases of this intellectual evolution, I like Jeffrey Friedman’s term of “post-libertarian.” (2) The newer generation of libertarians is much more focused on current government policies, and has little interest in the fundamental but thorny philosophical and ideological foundations of their views. Even some of us in the older generation have gotten tired of those endless debates. So in casual conversation, I have to go along with current usage. Yet I still think the greater clarity of the original meanings should sometimes be maintained and specified for more serious discussions.

A second problem with a strict definition for the term “libertarian” is that in the past it led to endless internecine squabbles about who was a “genuine” libertarian, almost like the hair-splitting divisions and deviations that arose among early Marxists. I certainly have no interest in bringing back these counter-productive excommunications and denunciations. If someone wants to claim the label “libertarian,” there is not much to be gained from arguing about that, unless the self-identification is particularly outlandish. I’d rather focus on specific and concrete differences of opinion.

Some contend that one consideration should be whether people self-identify as libertarians, as Rand did not. For labels that describe people’s ideas there is a smidgen of validity to this claim. With respect to religion, we usually accept as definitive people’s self-identification as Christian, Muslim, atheist, etc. But that is simply a courteous and usually reliable rule of thumb. Lurking behind it is still some objective notion of what, for instance, a Christian believes. If, on questioning someone who claims to be a Christian, you discover that he or she does not believe in the historical existence of Jesus or in the existence of God, and also thinks the New Testament has less religious relevance than the Koran, you would be justified in doubting his or her self-identification.

By the way, by the strict standard that Jeff lays out for libertarians, I am not quite a libertarian.





Bioethics: Tuskegee vs. COVID

When bioethicists want to justify their own existence, they routinely point to the infamous Tuskegee Syphilis Study.  It’s a gripping story.  Back in 1932, the U.S. Public Health Service started a study of 399 black men with latent syphilis, plus a control group of 201 black men without syphilis.  Contrary to what I’ve sometimes heard, the researchers never injected anyone with syphilis.  However, they grossly violated the principle of informed consent, with disastrous consequences:

As an incentive for participation in the study, the men were promised free medical care, but were deceived by the PHS, who never informed subjects of their diagnosis and disguised placebos, ineffective methods, and diagnostic procedures as treatment.

The men were initially told that the “study” was only going to last six months, but it was extended to 40 years. After funding for treatment was lost, the study was continued without informing the men that they would never be treated. None of the infected men were treated with penicillin despite the fact that by 1947, the antibiotic was widely available and had become the standard treatment for syphilis.

Why do bioethicists habitually invoke the Tuskegee experiment?  To justify current Human Subjects Review.  Which is bizarre, because Human Subjects Review applies to a vast range of obviously innocuous activities.  Under current rules, you need approval from Human Subjects merely to conduct a survey – i.e., to talk to a bunch of people and record their answers.

The rationale, presumably, is: “You should only conduct research on human beings if they give you informed consent.  And we shouldn’t let researchers decide for themselves if informed consent has been given.  Only bioethicists (and their well-trained minions) can make that call.”

On reflection, this just pushes the issue back a step.  Researchers aren’t allowed to decide if their human experiment requires informed consent.  However, they are allowed to decide if what they’re doing counts as an experiment.  No one submits a formal request to their Human Subjects Review Board before emailing other researchers questions about their work.  No professor submits a formal request to their Human Subjects Review Board before polling their students.  Why not?  Because they don’t classify such activities as “experiments.”  How is a formal survey any more “experimental” than emailing researchers or polling students?  To quote The Prisoner, “Questions are a burden to others; answers, a prison for oneself.”

The safest answer for bioethicists, of course, is simply: “They should get our approval for those activities, too.”  The more territory bioethicists claim for themselves, however, the more you have to wonder, “How good is bioethicists’ moral judgment in the first place?”

To answer this question, let me bring up a bioethical incident thousands of times deadlier than the Tuskegee experiment.  You see, there was a deadly plague called COVID-19.  Researchers quickly came up with promising vaccines.  They could have tested the safety and efficacy of these vaccines in about one month using voluntary paid human experimentation.  How?

Step 1: Vaccinate half the volunteers and give the other half a placebo.

Step 2: Wait a week, then inject all the volunteers with COVID-19.  (Alternatively, give half of each subgroup a placebo injection.)

Step 3: Compare the COVID infection rates of the vaccinated and unvaccinated 2-4 weeks later.

In the real world, researchers only did Step 1, then waited about six months to compare naturally-occurring infection rates.  During this period, ignorance of the various vaccines’ efficacy continued, almost no one received any COVID vaccine, and over a million people died.  In the end, researchers discovered that the vaccines were highly effective, so this delay really did cause mass death.

How come no country on Earth tried voluntary paid human experimentation?*  As far as I can tell, the most important factor was the formal and informal opposition of bioethicists.  In particular, bioethicists converged on absurdly (or impossibly) high standards for “truly informed consent” to deliberate infection.  Here’s a prime example:

An important principle in human challenge studies is that subjects must give their informed consent in order to take part. That means they should be provided with all the relevant information about the risk they are considering. But that is impossible for such a new disease.

Why can’t you bluntly tell would-be subjects, “This is a very new disease, so there could be all sorts of unforeseen complications.  Do you still consent?”  Because the real point of bioethics isn’t to ensure informed consent, but to veto informed consent to whatever gives bioethicists the willies.

I’m no paternalist, but I understand paternalism.  Paternalists want to stop people from harming themselves.  The goal of bioethicists, however, is far stranger.  Bioethicists want to stop people from helping others! Even if experimental subjects heroically volunteer to be injected for no money at all, bioethicists stand on guard to overrule them.

I’ve said it before and I’ll say it again: Bioethics is to ethics as astrology is to astronomy.  If bioethicists had previously prevented a hundred Tuskegees from happening, COVID would still have turned the existence of their entire profession into a net negative for humanity.  Verily, we would be better off if their field had never existed.

If you find this hard to believe, remember: What the Tuskegee researchers did was already illegal in 1932.  Instead of creating a pile of new rules enforced by a cult of sanctimonious busybodies, the obvious response was to apply the familiar laws of contract and fiduciary duty.  These rules alone would have sent people like the Tuskegee researchers to jail where they belong.  And they would have left forthright practitioners of voluntary paid human experimentation free to do their vital life-saving work.

In a just world, future generations would hear stories of the monstrous effort to impede COVID-19 vaccine research.  Textbooks and documentaries would icily describe bioethicists’ lame rationalizations for allowing over a million people to die.  If the Tuskegee experiments laid the groundwork for modern Human Subjects Review, the COVID non-experiments would lay the groundwork for the abolition of these deadly shackles on medical progress.

Which is further proof, in case you needed any, that we don’t live in a just world.

* At least as I’m writing.  Maybe this will have started by the time you read this.

