Brains and Bias, continued.

Read Part 1 of my review.

 

While Ritchie’s book does a laudable job of describing for the reader some of the most common pitfalls in scientific research, after these first chapters he starts to lose his way. He includes an additional chapter on what he calls “hype,” in which he tries to describe the risks that arise when academics rush to publish exciting, provocative results without thoroughly examining them or subjecting them to peer review. Unfortunately, much of what he describes here consists of examples of the problems he articulated in the previous chapters. But in the hype chapter he finally talks at more length about the bias that many journals and media outlets have toward glitzy positive results. In the cases he documents, this bias encourages people either to fudge results or to rush them to press rather than recheck their work and explore other explanations. Hype can create a rush to conclusions, as the public saw in the embarrassing political debate among doctors and medical organizations over the possible effectiveness of hydroxychloroquine against COVID. But even hydroxychloroquine is more of a cautionary tale, reminding researchers to be more precise and to cross-check their work.

 

But where the book really disappoints is in the proposed solutions to the problems he has rightly described. A lot of this rests on a rather disorganized vision of human nature and academic competition, and a very odd view of incentives. On the one hand, he understands why journals might not want to publish “boring” replication findings and are drawn instead to more “exciting” positive results. He also recognizes why scholars might be reluctant to share data, and why they might keep their research agendas under the radar lest another scientist swoop in and beat them to publication.

 

But then he lashes out at the increasing productivity of young professors, which he seems to believe is leading to more of the problems he has identified. However, arguing that increasing productivity is a problem rather than a possible solution reveals his underlying preferences. He writes that, rather than pursuing “publication itself” (emphasis his), scientists should “practice science,” which apparently means doing more careful work. One can understand how this can appear odd to a non-academic. And why, the reader can fairly ask, is increasing research productivity necessarily an indicator of poor research? In the earlier part of the book, he acknowledges that advancing computer technology is making it easier to cross-check for statistical errors and confirm results. One would naturally assume the same is true as processing power makes producing more research less costly. Instead, Ritchie argues that the psychology finding of a “speed-accuracy trade-off” in brain function proves his point that more productivity is bad. It is here that Ritchie starts to look a little like the biased one.

 

Ritchie then reviews the rise of half-baked journals and cash prizes for productivity, and he cherry-picks examples to make his case that such measures show the rat race of research is destroying scientific quality. Any reasonable university can distinguish non-refereed, low-quality journals from good ones. The issue of cash prizes seems largely centered on China and other authoritarian regimes. He piles on examples of papers that address very small problems in a discipline (what he calls “salami slicing”) and of self-citation used to boost one’s h-index. Still, he doesn’t exactly make a strong case that these phenomena are undermining the progress of science. It is also certainly not groundbreaking or new that competition occurs in the sciences; in fact, it can be highly productive, as the competitive pursuit of nuclear weapons showed: that was a race, and one that was critical to win.

 

Ritchie is also concerned about the private-sector biases of drug companies, but he says virtually nothing about the biases and dangers presented by the single biggest funder of scientific research: the government. According to Science, the US government still funds almost half of all scientific research in America. Why focus on problems with the private sector when the National Science Foundation is still the 800-pound gorilla in the room?

 

Finally, one more complaint, which I think helps explain the “bias” chapter’s conflation of a few different types of bias: Ritchie lumps together the empirical social sciences and the hard sciences. Many of his concerns will ring true to economists and empirical political scientists. However, there is a critical distinction between a discipline like physics and one like psychology. Psychology experiments are run on human subjects, and as any social scientist will tell you, figuring out what makes humans tick is very difficult, even with advanced econometrics and good research design. The problems that the physical and social sciences face are somewhat similar, but ultimately what Ritchie has given us is a useful reminder that all research is done by imperfect humans. He is right to argue for care, precision, an openness to publishing null results, and concern about findings that can’t be confirmed. But you can’t remove the human element, and because of our ingenuity, creativity, and intelligence we have done a lot of good work unlocking how the various parts of the world work. Ritchie has given us a higher bar to strive for, but he might want to recognize that discouraging and disparaging increased productivity, dismissing the possible role of incentives, and ignoring the promise of technological progress show a bias in his thinking as well.

Brains and Bias: A Review of Science Fictions

Stuart Ritchie has written a very interesting and timely half of a book in Science Fictions. He has correctly identified and painstakingly articulated why those who claim to have clear-cut answers based on “science” should be taken with a grain of salt. His biggest accomplishment is reminding the reader that part of the problem with science is that humans, we imperfect creatures with all of our biases and flaws, are the ones actually doing the science. That is a particularly important thing to remember as more and more policy in the world of COVID seems predicated on “science.” Ritchie’s bracing warning about the limitations of scientific research is a welcome message at the moment. Overcoming the COVID pandemic, or at least effectively managing its risks, requires tremendous scientific precision and accuracy to create a successful vaccine. Science can also warn us about truly risky behaviors and develop other treatments for those with COVID-19. Yet he falters in the later stages of the text, where he conflates his overlapping concerns and suggests prescriptions for the problems facing science that don’t seem to align human nature with the institutions of scientific research. He also fails to recognize the role the government plays in funding and directing scientific research.

 

Much of Ritchie’s book addresses the crisis many fields are facing because of an inability to replicate research findings. Replication is the ability to reproduce reported results in subsequent research, whether across different samples or using the same data as the original published work. Ritchie does a convincing job explaining the scope of the problem to his readers. This is particularly true in psychology, where replication work has invalidated about half of the papers that have appeared in major journals such as Nature. For example, in a well-known and widely cited study, participants who were shown words like grey, knits, wise, and Florida appeared to walk more slowly because of the supposed “priming” effect of those words. However, subsequent attempts to replicate this study in different contexts did not show any effect. The inability to replicate results is not limited to psychology. He also discusses examples from medicine, where the inability to replicate findings has more costly and tangible consequences.

 

Compounding the problem is another legitimate issue Ritchie discusses: the bias of journals toward publishing only what he calls “positive, flashy, novel, newsworthy” findings rather than dull replications and null results, which don’t excite readers. More than a few academics will acknowledge that they have papers in their “file drawers” that didn’t produce positive results and that they never bothered to send out for review. There is a lot of potentially beneficial human knowledge hidden away because of these journal biases.

 

Ritchie then spends the next four chapters describing what he calls “Faults and Flaws,” namely fraud, bias, negligence, and hype. Fraud is about what the reader might expect. Ritchie highlights some of the better-known cases of fraud in various disciplines, including political science, social psychology, physics, and medicine, among them the now-retracted and debunked paper in the Lancet on the relationship between autism and vaccinations. He explores the possible motivations in these cases; while most often it is greed and ambition, he also speculates that in some cases it is researchers’ hope and desire that their hypotheses be true that prompts them to falsify data. This raises interesting questions about human nature and the influence of prior beliefs in “science.” The reader will be both amused and somewhat horrified to learn that there is a Hall of Shame for individuals who have been forced to retract many of their scientific papers because they engaged in fraudulent science. Additionally, Ritchie shows that many of these retracted papers continue to be cited favorably by researchers who are unaware they have been retracted. This is obviously a serious problem.

 

This leads him to his second problem, which he describes as “bias.” Now “bias” for Ritchie means a couple of different things, but it seems to boil down to the mostly unconscious biases scholars bring to their scientific research. He begins the chapter by discussing the historical “replication” of Samuel Morton’s infamous craniometry study of skulls from different human races, which claimed to show that Caucasian skulls were larger on average. The replication by the esteemed paleontologist Stephen Jay Gould concluded that Morton had made errors, such as selecting more male Caucasian skulls (thus inflating the average size). Ritchie tells his readers that Gould said of Morton’s research, “all of his errors moved the results in that same direction,” and the reader is left with the impression that bias is really about prior belief.

 

He then goes on to explain publication bias (the above-mentioned general bias toward positive rather than null results) as well as what he calls “p-hacking,” the manipulation of statistical significance tests to achieve significance for important variables. Let me be clear: all of these issues are significant problems in the social and “hard” sciences. However, the way he blends them makes the chapter a little choppy and tough to follow, especially for someone not as familiar with statistics and methodology. He still does a solid job explaining graphically how small sample sizes can skew results. But when the chapter ends he returns the reader to the story of Morton and Gould and tells us that other anthropologists retested the skulls yet again. These researchers, using more modern technology, found errors that both helped and hurt Morton’s case, while Gould’s own work showed bias in only one direction. If a reader is left somewhat confused about how exactly Ritchie is thinking about bias, she can probably be forgiven.
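To make the p-hacking and small-sample concerns concrete, here is a minimal simulation sketch; it is not an example from Ritchie’s book, and the sample size, the number of outcomes, and the assumption that the outcomes are independent are all illustrative choices. It shows how a researcher who measures many outcomes in a small study and reports only the best one will find “significant” results far more often than the nominal 5 percent false-positive rate, even when no real effect exists.

```python
# Illustrative sketch (not from Ritchie's book): small samples plus selective
# reporting ("p-hacking") generate spurious "significant" findings even when
# there is no true effect. Sample size and number of outcomes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000   # simulated studies
n_per_group = 15       # a small sample in each group
n_outcomes = 10        # outcomes measured; only the best p-value gets reported
                       # (treated as independent here for simplicity)

false_positives = 0
for _ in range(n_experiments):
    # No true effect: treatment and control are drawn from the same distribution.
    best_p = min(
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    )
    if best_p < 0.05:
        false_positives += 1

# With a single pre-registered outcome the share below would hover near 5%;
# cherry-picking the best of ten pushes it to roughly 40%.
print(f"Share of studies with a 'significant' result and no real effect: "
      f"{false_positives / n_experiments:.0%}")
```

The exact numbers depend on these assumptions, but the qualitative point is the one Ritchie is making: flexibility in analysis, combined with journals’ appetite for positive results, inflates the false-positive rate well beyond what the reported p-values suggest.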

 

Next he turns to “negligence,” the sloppy methods that can skew results and lead to faulty conclusions. As with fraud, the theme of this chapter shouldn’t be surprising: people make mistakes; we are human. But when it comes to critical research on things like genetic sequencing or the effectiveness of government austerity programs, Ritchie’s concerns about the inability to confirm and replicate results because of simple data-entry or coding errors are important to recognize. Ultimately, his solution to this problem is again rechecking and replication. Most of the cases he documents were discovered as the underlying data became more widely available and other researchers, aided by more advanced computer programs, began to examine the data to confirm the results. In this sense, he is actually telling a somewhat more optimistic story than he seems to realize.

 

Where Ritchie loses his way… Coming next week.


Benjamin Boyce Interviews Adrian Lee Oliver

As regular readers of my posts know, I have little patience for long interviews. But I was blown away by this one. It’s the best interview I’ve seen in 2020.

I hadn’t heard of either Boyce or Oliver before, but from now on, whenever I see their names, I’ll pay attention.

I was so enthralled by the first 37 minutes that I forgot to time-stamp things. So the time stamps will be rough up until the 38th minute.

In the first 30 minutes or so, Oliver talks about what it was like to grow up as a black kid facing extreme discrimination and racism in America. And not 1940s or 1950s America, but America of the 1990s and early 2000s. In one story he tells how he won over a white racist in school. Really neat story.

38:00: Why the cops suddenly apologized for torturing him. Hint: nepotism.

39:50: Why what’s going on with cops is not systemic racism. Many of them would like to treat everyone badly, but their statistical analysis stops them with certain groups.

41:38: The left’s preemptive strike against expertise and the recent Steven Pinker attack.

46:45: How what the left is doing could lead to an even more virulent and widespread white nationalism.

57:00: We need arenas for the non-political. (By the way, my own is pickleball. Political conversations are actively discouraged.)

59:00: Oliver makes a fantastic point about the failure of Communism.

1:03:40: Being in a cult isn’t fun.

1:14:50: We need to be ruthless against bad ideas but not against the people who hold them.

HT2 Bob Murphy.


The Bad and the Great News about Unemployment in May

 

My economist friend Jack Tatom wrote the following on Facebook and gave me permission to share. For background, see my “Why the Drop in Unemployment Did Not Surprise Me,” June 5. It’s pretty involved and you might have to pause at various points to take it in, but it’s by far the best explanation I’ve seen. Here goes:

On Friday, June 5, the Bureau of Labor Statistics (BLS) announced the Employment Situation for May showing that the nation’s unemployment rate had declined in May from 14.7 percent to 13.3 percent, a shock to many who had expected a fall in employment of over 9 million and a rise in the unemployment rate to 20 percent. I was not among the shocked. I had expected employment to rise very sharply due to a reduction in the number unemployed.

The BLS added a footnote to their May report that indicates there had been an error in data submitted by survey takers who had counted many people as employed instead of as unemployed. The latter was the explicit instruction BLS had given for the treatment of furloughed workers who did not work during the previous week. According to the BLS, had the surveys been correctly completed, the April unemployment rate would have been shown as 19.7 percent and the rate would have fallen to 16.3 percent in May. [DRH note: Note that that still is a large fall in the unemployment rate; I believe it’s the largest one-month fall in recorded U.S. history.]

So, what happened and what does this mean? First of all, it means that in both months unemployment and the unemployment rate were higher than previously thought. That’s the bad news. The 20 percent unemployment rate expected for May nearly occurred in April. The good news is that these data show the turnaround in the economy was actually bigger than the official data indicate. The officially reported data show a decline of 1.4 percentage points in the unemployment rate; the actual decline that the BLS indicates occurred is more than twice as large.

Based on the data released, my calculations indicate approximately 7.7 million furloughed workers were “mistakenly” treated as employed in April but should have been treated as unemployed. Instead of the 18.1 million reported in Table A-11 of the Employment Situation for April, the correct number was about 25.8 million. For May, Table A-11 shows 15.3 million workers on furlough, down about 2.7 million from April. Using the revised data based on the footnote to the BLS report, it now appears that the decline in unemployment in May due to falling numbers of unemployed workers on temporary layoff was 5.7 million workers instead of the officially reported 2.7 million. This larger return of furloughed workers to employment again accounted for more than all of the approximately 5.0 million increase in overall employment implied by the footnote. Five million more workers returning to work in May is dramatically more than the continuing decline that others expected a week ago, or even the 2.1 million official gain reported on Friday. That is not good news; it is great news.

What about the decline in employment expected by nearly all analysts and the press? Buried in all these numbers is a decline in employment among workers who were not on furlough, a decline that was swamped by the return of formerly furloughed workers. In the official data, the reduction in unemployed furloughed workers was 2.7 million, but the reduction in the total number unemployed was 2.1 million. The difference consists of workers who were not furloughed but were added to the number unemployed. When the corrected measures are used, furloughed workers declined by 5.7 million, larger by about 0.7 million than the approximately 5 million reduction in overall unemployment (the difference from official measures is due to rounding error). So there was a decline in employment in May, but it was overwhelmed by the return of workers who had been on furlough in April and were back in employment by May.

Where do we go from here? Depends on to whom “we” refers. The BLS official view is not to tamper with suspected errors in the collected surveys. So, whether BLS will adjust the official data in the future remains to be seen. But where the economy goes is more certain. Given the opening of states’ economies in late May and early June, the accelerating opening of businesses suggests even larger increases in employment and declines in the unemployment rate in June and the rest of the summer.
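For readers who want to see roughly how the corrected rates in the BLS footnote fit with the misclassification counts Tatom describes, here is a back-of-the-envelope sketch, not part of Tatom’s post. The labor-force sizes and the implied May misclassification of roughly 4.7 million workers are approximations rather than figures taken from the BLS footnote.

```python
# Back-of-the-envelope check (an approximation, not the BLS's own calculation):
# shift the misclassified furloughed workers from "employed" to "unemployed"
# and recompute the headline unemployment rate. Labor-force sizes are assumed
# (roughly the BLS household-survey figures for April and May 2020).

def corrected_rate(official_rate_pct, misclassified_millions, labor_force_millions):
    """Reclassify workers as unemployed; the labor force itself is unchanged."""
    return official_rate_pct + 100 * misclassified_millions / labor_force_millions

# April 2020: official rate 14.7%, about 7.7 million misclassified,
# labor force of roughly 156.5 million (assumption).
april = corrected_rate(14.7, 7.7, 156.5)   # about 19.6%, near the footnote's 19.7%

# May 2020: official rate 13.3%, misclassification of roughly 4.7 million
# (implied estimate), labor force of roughly 158 million (assumption).
may = corrected_rate(13.3, 4.7, 158.0)     # about 16.3%, matching the footnote

print(f"Corrected April rate: {april:.1f}%")
print(f"Corrected May rate:   {may:.1f}%")
print(f"Corrected decline: {april - may:.1f} points vs. the official 1.4")
```

Reclassifying the furloughed workers raises both months’ rates, but it raises April’s by more, which is why the corrected decline is more than twice the official 1.4-point drop, exactly the pattern Tatom highlights.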
