Is Modern Democracy So Modern and How?

The Decline and Rise of Democracy, a new book by David Stasavage, a political scientist at New York University, reviews the history of democracy, from “early democracy” to “modern democracy.” I review the book in the just-out Fall issue of Regulation. Here is one short quote from my review about the plight of modern democracy in America:

[Stasavage] notes the “tremendous expansion of the ability of presidents to rule by executive order.” Presidential powers, he explains, “have sometimes been expanded by presidents who cannot be accused of having authoritarian tendencies, such as Barack Obama, only to have this expanded power then used by Donald Trump.” We could, of course, as well say that the new powers grabbed by Trump will likely be used by a future Democratic president “who cannot be accused of authoritarian tendencies,” or perhaps who might legitimately be so accused.

The book is a book of history and political theory, not a partisan book. But the history of democracy has implications for today. An interesting one is how bureaucracy typically helped rulers prevent the development of democracy. Another quote from my review—Stasavage deals with imperial China and I compare it with today’s America:

At the apogee of the Han dynasty, at the beginning of the first millennium CE, there was one bureaucrat for every 440 subjects in the empire. … In the United States, which is at the low end of government bureaucracies in the rich world, public employees at all levels of government translate into one bureaucrat for 15 residents (about one for 79 at the federal level only).

If you read my review in the paper version of Regulation, beware. I made an error in my estimate for the federal bureaucracy and the printed version says “37” instead of “79”. It is corrected in the electronic version. Mea culpa.


Raghuram Rajan’s The Third Pillar

In his latest book, Raghuram Rajan, a chaired professor of finance at the University of Chicago’s Booth School of Business and former governor of the Reserve Bank of India, advocates what he calls “inclusive localism.” His basic idea is that there are three pillars of a good and productive society: the market, the state, and the community. He argues that the community, which is the third pillar, nicely balances the excesses of both the free market and the state.

Although there is a strong case to be made for the importance of the community, Rajan does not make it nearly as well as he could have. The Third Pillar contains many insights and important facts, but his argument for inclusive localism is half-hearted. He concedes far too much to the current large state apparatus and, in doing so, implicitly accepts that communities will be weak. Again and again in the book, when contemplating how to make local communities more powerful relative to federal governments, he fails to call for a massive reduction in state power. At times he accepts the state apparatus because he believes, often unjustifiably, in its goodness and effectiveness, and at times he accepts it because he seems to have a status quo bias.

Moreover, although he has better than the median economist’s understanding of the free market, he misses opportunities to point out how the market would straightforwardly solve some of the dilemmas he presents. Rajan also gets some important history wrong. And he makes too weak a case for free trade and favors ending child labor even in third-world countries where children and their families desperately need them to work.

This is from David R. Henderson, “An Unpersuasive Book with Some Encouraging Insights,” my review of Raghuram Rajan, The Third Pillar, Regulation, Fall 2020.

Rajan’s Misunderstanding of the Term “The Dismal Science”

In making his case that we can go too far in the direction of markets, Rajan writes, “Reverend Thomas Robert Malthus epitomized the heartless side of [classical] liberalism, when taken to its extreme.” Commenting on Malthus’s claim that disease, war, and famine would be natural checks on population growth, he writes, “No wonder historian Thomas Carlyle termed economics the ‘dismal science.’” But that is not why Carlyle coined the term. Carlyle called economics “dismal” precisely because the dominant economists of his day strongly opposed slavery. That is a big difference.

Rajan on Child Labor

One thing that is well established in economics is that child labor in very poor countries is a boon to children and their families. I made that point in Fortune in 1996 and Nobel economics prizewinner Paul Krugman made it in Slate in 1997. We both pointed out that children who work in “sweat shops” are virtually always better off than in their next best alternative. That next best alternative, if they are lucky, is a lower-paid job in agriculture or, if they are unlucky, picking through garbage or starving. Yet Rajan, who comes from a poor country, writes, “All countries should, of course, respect universal human rights, including refraining from using slave labor or child labor.” He is right on slave labor; he is horribly wrong on child labor. If he got his way, millions of poor children would suffer needlessly.

Rajan Has a Way with Words

One bright spot is Rajan’s refreshing way of expressing insights. For example, he sees a lot of problems with China’s unusual mixed economy and coins a beautiful phrase to describe it: “competitive cronyism.” And here is how he characterizes populism: “Populism, at its core, is a cry for help, sheathed in a demand for respect, and enveloped in the anger of those who feel they have been ignored.”

Read the whole thing.


Case and Deaton on Deaths of Despair and the Future of Capitalism

In their recent book Deaths of Despair and the Future of Capitalism, Anne Case and Nobel economics prizewinner Angus Deaton, both emeritus economists at Princeton University, show that the death rate for middle-age whites without a college degree bottomed out in 1999 and has risen since. They attribute the increase to drugs, alcohol, and suicide. Their data on deaths are impeccable. They are careful not to attribute the deaths to some of the standard but problematic reasons people might think of, such as increasing inequality, poverty, or a lousy health care system. At the same time, they claim that capitalism, pharmaceutical companies, and expensive health insurance are major contributors to this despair.

The dust jacket of their book states, “Capitalism, which over two centuries lifted countless people out of poverty, is now destroying the lives of blue-collar America.” Fortunately, their argument is much more nuanced than the book jacket. But it is also, at times, contradictory. Their discussion of the health care system is particularly interesting both for its insights and for its confusions. In their last chapter, “What to Do?” the authors suggest various policies but, compared to the empirical rigor with which they established the facts about deaths of despair, their proposals are not well worked out. One particularly badly crafted policy is their proposal on the minimum wage.

This is from “Blame Capitalism?”, my review of Deaths of Despair and the Future of Capitalism, in Regulation, Fall 2020.

Another excerpt:

To understand what is behind the increase in the death rate, the authors look at state data and note that death rates increased in all but six states. The largest increases in mortality were in West Virginia, Kentucky, Arkansas, and Mississippi. The only states in which midlife white mortality fell much were California, New York, New Jersey, and Illinois. All four of the latter states, they note, have high levels of formal education. That fact leads them to one of their main “aha!” findings: the huge negative correlation between having a bachelor’s degree and deaths of despair.

To illustrate, they focus on Kentucky, a state with one of the lowest levels of educational attainment. Between the mid-1990s and 2015, Case and Deaton show, for white non-Hispanics age 45–54 who had a four-year college degree, deaths from suicide, drug overdose, or alcoholic liver disease stayed fairly flat at about 25–30 per 100,000. But for that same group but without a college degree, the deaths in the same categories zoomed up from about 40 in the mid-1990s to a whopping 130 by 2015, over four times the rate for those with a college degree.

Why is a college degree so important? One big difference between those with and without a degree is the probability of being employed. In 2017, the U.S. unemployment rate was a low 3.6%. Of those with a bachelor’s degree or more, 84% of Americans age 25–64 were employed. By contrast, only 68% of those in the same age range who had only a high school degree were employed.

That leads to two questions. First, why are those without a college degree so much less likely to have jobs? Second, how does the absence of a degree lead to more suicide and drug and alcohol consumption? On the first question, the authors note that a higher percentage of jobs than in the past require higher skills and ability. Also, they write, “some jobs that were once open to nongraduates are now reserved for those with a college degree.”

I wish they had addressed this educational “rat race” in more detail. My Econlog blogging colleague Bryan Caplan, an economist at George Mason University, argues in his 2018 book The Case Against Education that a huge amount of the value of higher education is for people to signal to potential employers that they can finish a major project and be appropriately docile. To the extent he is right, government subsidies to higher education make many jobs even more off-limits to high school graduates. Yet, Case and Deaton do not cite Caplan’s work. Moreover, in their final chapter on what to do, they go the exact wrong way, writing, “Perhaps it is time to up our game to make college the norm?” That policy would further narrow the range of jobs available to nongraduates, making them even worse off.

On the second question—why absence of a degree leads to more deaths of despair—they cite a Gallup poll asking Americans to rate their lives on a scale from 0 (“the worst possible life you can imagine”) to 10 (“the best possible life you can imagine”). Those with a college degree averaged 7.3, while those with just a high school diploma averaged 6.6. That is not a large difference, a fact they do not note.

And note their novel argument for why improved health care, better entertainment through the internet, and more convenience don’t count in people’s real wages:

So, what are the culprits behind the deaths of those without college degrees? Case and Deaton blame the job market and health insurance. Jobs for those without college degrees do not pay as much and do not generally carry much prestige. And, as noted above, Case and Deaton mistakenly think that real wages for such jobs have fallen. Some economists, by adding nonmonetary benefits provided by employers and by noting the amazing goods we can buy with our wages such as cell phones, conclude that even those without a college degree are doing better. Case and Deaton reject that argument. They do not deny that health care now is better than it was 20 years ago, but they write that a typical worker is doing better now than then “only if the improvements—in healthcare, or in better entertainment through the internet, or in more convenience from ATMs—can be turned into hard cash by buying less of the good affected, or less of something else, a possibility that, however desirable, is usually not available.” They continue, “People may be happier as a result of the innovations, but while it is often disputed whether money buys happiness, we have yet to discover a way of using happiness to buy money.”

That thinking is stunning. Over many decades, economists have been accused, usually unjustly, of saying that only money counts. We have usually responded by saying, “No, what counts is utility, the satisfaction we get out of goods and services and life in general.” But now Case and Deaton dismiss major improvements in the happiness provided by goods and services by noting that happiness cannot be converted to money. That is a big step backward in economic thinking.

Read the whole thing.


Hello Mind, Nice to Meet Ya

Universities at their best are places where reading, writing, speaking, and (hopefully) listening are carried out at the highest level. The core activity here is sharing words with other people. We share words—written, verbal, and non-verbal—to meet other minds, to learn and share experiences for the sake of mutual betterment. So, as teachers and students return to campus, I thought it might be fun to take a moment to reflect on WORDS, with a little inspiration from Vincent Ostrom, F. A. Hayek, and Stephen King.

Vincent Ostrom, maybe more than any other 20th century political economist, emphasized the fact that language is a powerful tool. When we name what we experience by assigning words to objects and relationships, we generate “shared communities of understanding” (The Meaning of Democracy and the Vulnerability of Democracies: A Response to Tocqueville’s Challenge, p. 153). These words and the understanding they enable are how people share what they learn with others, including across generations. Through words, our experiences benefit others. I interpret this as similar to Hayek’s claim from The Constitution of Liberty that “civilization begins where the individual can benefit from more knowledge than he can himself acquire, and is able to cope with his ignorance by using knowledge which he does not possess.” Words—along with markets, culture, and law/rules of conduct—form the extended orders that make society possible.

In Stephen King’s On Writing—an intellectual memoir from a true master of words—he equates successful writing with being able to pull off the near-supernatural act of telepathy. Language is a vehicle through which we are able to either send or receive mental images that otherwise would remain electrical impulses with nowhere to go, trapped inside our own minds. He gives the following example of “telepathy in action”:

“Look, here’s a table covered with a red cloth. On it is a cage the size of a small fish aquarium. In the cage is a white rabbit, with a pink nose and pink rimmed eyes. In its front paws is a carrot stub upon which it is contentedly munching. On its back, clearly marked in blue ink, is the numeral eight. Do we see the same thing? We’d have to get together and compare notes to make absolutely sure, but I think we do.”

The quote doesn’t do the chapter full justice, so I definitely recommend reading the whole thing, especially if you have an interest in writing as a craft. He goes on to explain that we might imagine very different details, but nearly everybody comes away with the same understanding about what is important about the description: the blue number eight on the rabbit’s back. This is the puzzle, the unexpected element that makes the information new and unites our attention around an idea. What I take from this is that there’s something about finding the right way to say something—precise but only to the point of usefulness, thorough yet focused, with some understanding of what the reader is bringing to the table—that makes it possible to get a message across in the way it was intended. That makes it possible for two minds to meet.

King’s conclusion is that “You must not come lightly to the blank page.” To write is to transmit ideas to other people’s minds. That’s a serious responsibility that can be carried out well or poorly, put to good use or ill. I can think of no reason why the same admonition should not apply to lectures, conversations, and video presentations.

Vincent Ostrom built on this idea. For Ostrom, language is created through the process of continued communication, and the language that is created then enters back into every aspect of our lives: “The learning, use, and alteration of language articulations is constitutional in character, applicable to the constitutive character of human personality, to patterns of human association, and to the aggregate structure of the conventions of language usage… the way languages are associated with institutions, goods, cultures, and personality attributes means that we find languages permeating all aspects of human existence” (pp. 171-72).

In other words, by embarking on the academic’s quest to use words better, we are all taking on a particularly important constitutive role. Global markets are made up of millions of buyers and sellers scattered around the world. Languages are made up of millions of people talking, reading, writing, listening, and—to borrow King’s analogy—making telepathic connections with each other in an attempt to connect words to better ideas, and better ideas to better lives. It might be an abstract quest, but it’s a noble one. Getting it right can make the world better; getting it wrong can make the world worse.

There are several dozen morals about the importance of the endeavor, of sticking to one’s principles, of mastering the fundamentals, etc. that can be drawn from this, and I don’t really want to moralize or pontificate more than I already have. So I’ll just end by saying that if you’re still reading, it was nice to meet your mind for a moment. I hope we’ll meet again soon.

Jayme Lemke is a Senior Research Fellow and Associate Director of Academic and Student Programs at the Mercatus Center at George Mason University and a Senior Fellow in the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics.


As an Amazon Associate, Econlib earns from qualifying purchases.


Our Great Purpose

The Theory of Moral Sentiments (1759) is the first book that Adam Smith wrote, and for decades it was contrasted with his other, more famous book, The Wealth of Nations (1776). Most scholars today no longer see the contrast, but Ryan Patrick Hanley revives this so-called Adam Smith Problem in his Our Great Purpose: Adam Smith on Living a Better Life. For Hanley, the Wealth of Nations is the book about self-interest (but not greed) and wealth accumulation, and the Theory of Moral Sentiments is the book about “love” and living a good life. But there is no Problem, because the two books complement each other. The Wealth of Nations celebrates wealth accumulation and decreased poverty, and the Theory of Moral Sentiments warns us against the moral costs of this wealth accumulation (181), helping us stay on the right path in a “capitalist” society.

Thanks to his usual beautiful prose and narrative, Hanley achieves his goal of showing that the Theory of Moral Sentiments is a normative book offering prescriptions for how to live the good life (86), rather than, as it is typically read, a description of moral development.

So the image of Adam Smith that we get from Hanley is the explicit opposite of “Greed is good” (13). Hanley’s Smith promotes a society in which “everyone loved each other and was loved by them in return” (90), a love of others that is so great and complete that our goal in life is “to feel much for others and little for ourselves” (132), a love that drives us to become a “wise and virtuous person, […] serving others and […] always striving for their well-being, [who] lives a life that is good for those who live with her. […] A person who ‘sacrifice[s]’ herself for others, […] for a life of active service, [who] sacrifice[s] promoting her own self-interest in order to promote the interest of others” (148).

But suppose there is truth in this quest of “always working for others, never promoting herself, all the while knowing that nobody is ever going to recognize her for all this” (151), which for Hanley is what Smith asks of us if we are to live a good life. Then the implication, which Hanley does not consider, is that Smith would also aspire to see the end of markets, since in a world of “love lover[s]” (88) markets become useless. For his reading to hold, Hanley has to ignore, and indeed does ignore, that for Smith people face binding time constraints:

“In civilized society, [a person] stands at all times in need of the cooperation and assistance of great multitudes, while his whole life is scarce sufficient to gain the friendship of a few persons. […] But man has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love in his favor and shew them that it is for their own advantage to do for him what he requires of them.” WN I.ii.2

So the implication of Hanley’s reading of Theory of Moral Sentiments is that the achievement of a good life is actually not a complement, but a substitute for markets.

To achieve this unconventional reading of Smith, Hanley “arranged and ordered (each chapter) in such a way as to tell a story that starts with the first chapter and ends with the last” (10).

Hanley’s Smith thinks that we should not just live, we should “live a better life”, a good life, a meaningful life, a purposeful life: “living a life requires that we be actively engaged in pursuing a trajectory that we can recognize as ‘a life’—that is, a trajectory that not only has a beginning and a middle and end, but also has a unity to it that enables us to see all its different parts as fitting together in a meaningful way” (1). In this journey that is our life, we are torn “in two very different directions. On the one hand, we are naturally led to be concerned with ourselves and our own well-being. On the other hand, we are naturally led to be concerned with the well-being and happiness of others” (10). “These competing demands raise key challenges to the project of living a single and unified life” (11).

So we need to battle against natural tendencies that lure us to be blindly attracted to the “trinkets and baubles” of wealth; we need to fight against our ambition that deludes us into pursuing wealth when we would be better off stopping and smelling the roses more often.

The story that Hanley tells is of Smith’s “cautionary tale” (43), in which we are to strive to become perfect “wise and virtuous” people. It is a story that becomes even more powerful when compared to Tyler Cowen’s TED talk “Be suspicious of stories.” Cowen does not talk about Adam Smith at all, yet he may capture an aspect of Smith that is absent in Hanley’s story. Cowen simply warns us about stories, stories that describe our life as journeys, as battles, as quests. “A story is about intention. A story is not about spontaneous order or complex human institutions which are the product of human action, but not of human design.”

Hanley claims that for Smith “living a life requires more than just the activity of living. … [We are required] to see ourselves as a self, engaged in the project of living a life of virtue and flourishing, of unity and coherence, and thus, hopefully, of purpose and meaning” (12). For Cowen our life is a mess, and it is good that it is a mess: “If I actually had to live those journeys, and quests, and battles, that would be so oppressive to me! It’s like, my goodness, can’t I just have my life in its messy, ordinary – I hesitate to use the word – glory but that it’s fun for me? Do I really have to follow some kind of narrative? Can’t I just live?”

The book is not for specialists and has very limited scholarly references. But it is a challenge for people who think in terms of spontaneous order and unintended consequences, not only at the macro level but also at the individual level. Hopefully it will induce more people to read The Theory of Moral Sentiments.


Maria Pia Paganelli is a Professor of Economics at Trinity University. She works on Adam Smith, David Hume, 18th-century theories of money, as well as the links between the Scottish Enlightenment and behavioral economics.

For more articles by Maria Pia Paganelli, see the Archive.




The chair and its enemies

This article won’t come as a surprise to those among our readers who are partisans of standing desks (quite a few of them, I suppose, in the US; not so many in Europe). This piece by Sara Hendren, excerpted from her book What Can a Body Do? How We Meet the Built World, presents interesting arguments against the chair. “Sitting for hours and hours can weaken your back and core muscles, pinch the nerves of your rear end and constrain the flow of blood that your body needs for peak energy and attention. Most people’s bodies are largely unsuited to extended periods in these structures”. If the chair is an old invention, its widespread use is a rather new thing; “for most of human history, a mix of postures was the norm for a body meeting the world”.

In part the article stresses that a “chair-and-table culture” is actually a recent thing, basically a byproduct of our industrial society with its factories and offices, and thus our bodies have difficulty coping with it.

In part the article builds on Victor Papanek’s polemic against industrial design and its pursuit of virtuosity rather than comfort, and on the emergence of “universal design”.

The conflict between the beautiful and the convenient is older than contemporary design, which, it seems to me, has solved it better than most of its predecessors. Anyway, fascinating.


Skidelsky on Economics

On our sister website, Law and Liberty, I have a review of Robert Skidelsky’s latest book, What’s Wrong with Economics? I was unimpressed by the book. It looks to me like an attempt to build a straw man out of modern economics, which Skidelsky blames, of course, for “neoliberal” policies.

The book is strongly ideological but, leaving ideology aside for a minute, I was amazed by the view of the social sciences Lord Skidelsky proposes.

He

is apparently incapable of understanding the pursuit of social science as something different from policy punditry. It is revealing that Skidelsky is puzzled by a quote from Milton Friedman, who charmingly described himself as “somewhat of a schizophrenic”: “On the one hand, I was interested in science qua science, and I have tried—successfully, I hope—not to let my ideological viewpoints contaminate my scientific work. On the other, I felt deeply concerned with the course of events and I wanted to influence them so as to enhance human freedom.” Some economists, political scientists, or philosophers may enter their fields because of their political vision of how the world should be improved. Yet it does not mean that they do not try to challenge their own opinions about the facts. Nor does it mean that they may not be, in pursuing their studies, interested merely in understanding how or why a particular phenomenon happened. Friedman honestly described a difficult navigation which is hardly exclusive to the economist. (Consider a Democratic reporter at the Republican National Convention, for instance.)

The review is here.


Brains and Bias, continued.

Read Part 1 of my review.

While Ritchie’s book does a laudable job of describing for the reader some of the most common pitfalls in scientific research, after these first chapters he starts to lose his way.  He includes an additional chapter about what he calls “hype,” in which he tries to describe the risks that arise when academics rush to publish exciting, provocative results without thoroughly examining them or subjecting them to peer review.  Unfortunately, a lot of what he describes here are examples of the problems he articulated in the previous chapters.  But in the hype chapter, he finally talks at more length about the bias that many journals and media outlets have toward glitzy positive results.  In the cases he documents, this bias encourages people to fudge results or rush them to press rather than recheck their work and explore other explanations.  Hype can create a rush to conclusion, which the public saw in an embarrassing public and political debate among doctors and medical organizations over the possible effectiveness of hydroxychloroquine against COVID.  But even hydroxychloroquine is more of a cautionary tale, reminding researchers to be more precise and to cross-check their work.

But where the book really disappoints is in the proposed solutions to the problems he has rightly described.  A lot of this seems to rest on a rather disorganized vision of human nature and competition in the academic world, and a very odd view of incentives.  On the one hand, he understands why journals might not want to publish “boring” replication findings and are drawn to more “exciting” findings of positive effects.  He also recognizes why scholars might be reluctant to share data and have reasons to keep their research agendas under the radar lest another scientist swoop in and beat them to publication.

But then he lashes out at the increasing productivity of young professors, which he seems to believe is leading to more of the problems he has identified.  However, arguing that increasing productivity is a problem, rather than a possible solution, reveals his underlying preferences.  He writes that, rather than pursue “publication itself” (emphasis his), scientists should “practice science,” which apparently means more careful work.  One can understand how this can appear odd to a non-academic.  And why, the reader can fairly ask, is increasing research productivity necessarily an indicator of poor research?  In the earlier part of the book, he acknowledges that advancing computer technology is making cross-checking for statistical errors and confirming results easier.  One would naturally assume the same is true as processing power makes producing more research less costly.  Instead, Ritchie argues that the psychology finding of a “speed-accuracy trade-off” in brain function proves his point that more productivity is bad.  It is now that Ritchie starts to look a little like the biased one.

Ritchie then reviews the rise of half-baked journals and cash prizes for productivity, and he cherry-picks examples to make his case that such measures show the rat race of research is destroying scientific quality.  Any reasonable university can distinguish non-refereed, low-quality journals from good ones.  The issue of cash prizes seems largely centered on China and other authoritarian regimes.  He piles on examples of papers that address very small problems in disciplines (which he calls salami slicing) and the problem of self-referencing to boost one’s h-index.  Still, he does not exactly make a strong case that these phenomena are undermining the progress of science.  It is also hardly groundbreaking or new that competition occurs in the sciences; in fact, it can be highly productive, as in the competitive pursuit of nuclear weapons, a race that was critical to win.

Ritchie is also concerned about the private-sector biases of drug companies, but says virtually nothing about the biases and dangers presented by the single biggest funder of scientific research: the government.  According to Science, the US government still funds almost half of all scientific research in America.  Why focus on problems with the private sector when the National Science Foundation is still the 800-pound gorilla in the room?

Finally, one more complaint, which I think helps explain the “bias” chapter’s conflation of a few different types of bias: Ritchie lumps together the empirical social sciences and the hard sciences.  Many of his concerns will ring true to economists and empirical political scientists.  However, there is a critical distinction between a discipline like physics and one like psychology.  Psychology experiments are run on human subjects, and as any social scientist will tell you, figuring out how humans tick is very difficult, even with advanced econometrics and good research design.  The problems that the physical and social sciences face are somewhat similar, but ultimately what Ritchie has given us is a useful reminder that all research is done by imperfect humans.  He is right to argue for care, precision, an openness to publishing null results, and concern about findings that cannot be confirmed.  But you cannot remove the human element, and because of our ingenuity, creativity, and intelligence we have done a lot of good work unlocking how the various parts of the world work.  Ritchie has given us a higher bar to strive to achieve, but he might want to recognize that discouraging and disparaging increasing productivity, dismissing the possible role of incentives, and ignoring the promise of technological progress shows a bias in his thinking as well.

(0 COMMENTS)

Read More

Brains and Bias, continued.

Read Part 1 of my review.

 

While Ritchie’s book does a laudable job of describing some of the most common pitfalls in scientific research, after these first chapters he starts to lose his way.  He includes an additional chapter on what he calls “hype,” in which he tries to describe the risks that arise when academics rush to publish exciting, provocative results without thoroughly examining them or subjecting them to peer review.  Unfortunately, much of what he describes here consists of examples of the problems he articulated in the previous chapters.  But in the hype chapter he finally talks at more length about the bias that many journals and media outlets have toward glitzy positive results.  In the cases he documents, this bias encourages people to fudge results or rush them to press rather than recheck their work and explore other explanations.  Hype can create a rush to conclusion, as the public saw in an embarrassing political debate among doctors and medical organizations over the possible effectiveness of hydroxychloroquine against COVID.  But even hydroxychloroquine is more of a cautionary tale, reminding researchers to be more precise and cross-check their work.

 

But where the book really disappoints is in the proposed solutions to the problems he’s rightly described.  A lot of this seems to rest on a rather disorganized vision of human nature, competition in the academic world, and a very odd view of incentives.  On the one hand, he understands why journals might not want to publish “boring” findings on replications and might be drawn to more “exciting” findings of positive effects.  He also recognizes why scholars might be reluctant to share data and might want to keep their research agendas under the radar, lest another scientist swoop in and beat them to publication.

 

But then he lashes out at the increasing productivity of young professors, which he seems to believe is leading to more of the problems he’s identified.  Arguing that increasing productivity is a problem, rather than a possible solution, reveals his underlying preferences.  He writes that rather than pursue “publication itself” (emphasis his), scientists should “practice science,” which apparently means doing more careful work.  One can understand how this might appear odd to a non-academic.  And why, the reader can fairly ask, is increasing research productivity necessarily an indicator of poor research?  In the earlier part of the book, he acknowledges that advancing computer technology is making it easier to cross-check for statistical errors and confirm results.  One would naturally assume the same technology also makes producing more research less costly.  Instead, Ritchie argues that the psychology finding of a “speed-accuracy trade-off” in brain function proves his point that more productivity is bad.  It is here that Ritchie starts to look a little like the biased one.

 

Ritchie then reviews the rise of half-baked journals and cash prizes for productivity, and he cherry-picks examples to make his case that such measures show the rat race of research is destroying scientific quality.  Any reasonable university can distinguish non-refereed, low-quality journals from good ones.  The issue of cash prizes seems largely centered on China and other authoritarian regimes.  He piles on examples of papers that address very small problems in their disciplines (what he calls “salami slicing”) and of self-citation to boost one’s h-index.  Still, he doesn’t make a strong case that these phenomena are undermining the progress of science.  Nor is it groundbreaking or new that competition occurs in the sciences; in fact, competition can be a highly productive force, as in the pursuit of nuclear weapons – a race that was critical to win.

 

Ritchie is also concerned about the private-sector biases of drug companies, but he says virtually nothing about the biases and dangers presented by the single biggest funder of scientific research: the government.  According to Science, the US government still funds almost half of all scientific research in America.  Why focus on problems with the private sector when the National Science Foundation is still the 800-pound gorilla in the room?

 

Finally, one more complaint, which I think helps explain the “bias” chapter’s conflation of a few different types of bias: Ritchie lumps together the empirical social sciences and the hard sciences.  Many of his concerns will ring true to economists and empirical political scientists.  However, there is a critical distinction between a discipline like physics and one like psychology.  Psychology experiments are run on human subjects, and as any social scientist will tell you, figuring out how humans tick is very difficult, even with advanced econometrics and good research design.  The problems the physical and social sciences face are somewhat similar, but ultimately what Ritchie has given us is a useful reminder that all research is done by imperfect humans.  He is right to argue for care, precision, openness to publishing null results, and concern about findings that can’t be confirmed.  But you can’t remove the human element, and thanks to our ingenuity, creativity, and intelligence we have done a lot of good work unlocking how the various parts of the world work.  Ritchie has given us a higher bar to strive for, but he might want to recognize that disparaging increased productivity, dismissing the possible role of incentives, and ignoring the promise of technological progress reveal a bias in his thinking as well.


Noblesse Oblige: Thicker than Water

A useful postscript to my reading of Bad Blood and my blog posts about the podcast The Dropout, both of which examined the Elizabeth Holmes/Theranos story, is Tyler Shultz’s new Audible podcast, Thicker Than Water.

 

Shultz is, of course, the grandson of George Shultz and the whistleblower who began the process of exposing the lies and misrepresentations behind Holmes and Theranos.

 

In much the same way that my most pressing question about Holmes and her company was “How could anyone do this?” my most pressing question about Tyler Shultz when I encountered him in Carreyrou’s book and The Dropout was, “How did he do this?” Among the many people who knew, should have known, or seem to have known how badly Theranos’s technology was failing and how boldly Holmes was lying about it, how is it that Tyler Shultz was the one who decided he had to do something?

 

Shultz’s podcast, I think, provides some helpful answers. He’s clearly a smart and charming young man who has led a life protected by as much privilege as any American can hope for. I don’t mean that he’s part of some incredibly wealthy, hard-partying jet set. I just mean that he’s the youngest generation of a famous family, attended good schools, got good internships, and was brought up with the understanding that his opinions matter and that what happens to him is worthy of note.

 

He could be annoying if he weren’t clearly such a good guy (and I do confess to some eye-rolling over a few self-indulgent moments in the podcast). But one of the things we don’t talk about when we talk about the problems caused by inherited privilege is that, sometimes, it can have a good side.

 

Tyler Shultz is fairly clear that he got his internship with Theranos because his grandfather is George Shultz. But it’s equally apparent that his sense of his own significance and his assumption that he would be listened to and taken seriously are part of what allowed him to turn Theranos in.

 

The heartbreaking part of the podcast is hearing Shultz talk about his realization that, somehow, his grandfather’s loyalties had switched to Holmes and to Theranos, and away from his grandson. He still sounds baffled when he mentions that Holmes was invited to family parties from which he was excluded. And the pain in his voice is unforgettable when he discusses the ways his grandfather pressured him to retract his statements about Theranos despite mounting evidence that he was right about the company’s lies. Shultz’s decision to do the right thing was clearly agonizing, yet he stuck to it.

 

It’s easy to be dismissive of young white men who have easy roads to travel in their lives, and there are probably some good reasons for that. But the Thicker Than Water podcast will remind you that there is always more to people than we initially think. Just as the world’s first impression of Elizabeth Holmes as a technological wunderkind turned out to be hopelessly, painfully mistaken, my first impression of Tyler Shultz as “just another one of those kids who wanders into class late, unprepared, and hungover, wearing Nike slides and a ball cap” was mistaken.

 

Underneath the soft sheen of his privilege, Tyler Shultz is a man to respect, and one whose insistence on sticking to his principles has done more for market-tested innovation than Elizabeth Holmes and her former company ever did.

