Covid Caution and Curry

On March 17, my favorite NBA player, Steph Curry, shot a 3-pointer and then, as is his wont, backpedalled. The problem: he was backpedalling off the sideline instead of down the court, and there was no barrier to stop him. In a normal game, something would have stopped his backward motion, whether chairs that other players were sitting on or some other barrier.

But because of Covid cautions, there are large spaces between chairs, and so as Steph went backward, he didn’t stop until his tailbone came into hard contact with some metal stairs. Go to this link and page down to the 38-second video if you want to see what happened. But be prepared to watch something painful.

Why do I highlight this in an economics blog? Because it illustrates in microcosm the failure to make reasonable tradeoffs to deal with Covid-19. We know that Covid-19 is not particularly risky for young people, especially young people without co-morbidities. NBA players are not a random sample; their physical fitness certainly puts them in the top 1 percent and maybe even in the top 0.1 percent of people their age, let alone of all ages. (And maybe in the top 0.01 percent.) The probability that Steph Curry would get badly sick from Covid, even if he didn’t get the shot, is really low. But the NBA did not make the tradeoffs the way I would have. I’m not challenging their right to do so: it’s their arena, pun not intended. I’m challenging the bad thinking behind their decision.

We often hear from behavioral economists like Richard Thaler and behavioral legal scholars like Cass Sunstein about “availability bias.” The idea is that people pay attention to what’s most prominent, not to what’s most likely. Where, oh where, are Thaler and Sunstein? Shouldn’t this be their moment to shine by pointing out how absurd some of these policies are?

 


On Price Formation Theory

Summary: On price formation theory in Sabiou M. Inoua and Vernon L. Smith, Economics of Markets.

To suppose that utility-maximizing individuals choose quantities to buy (sell) contingent on given prices is to pose a consumer demand and supply problem without a price-determining solution. This neoclassical problem formulation imposes (1) exogenous prices, (2) price-taking behavior, and (3) the law of one equilibrium-clearing price on markets, before prices can have formed. Hence, unexplained prices are presumed to exist before consumers arrive in the market. If conditions (2) and (3) are hypothesized to characterize markets, the theoretical challenge is to show that they follow from a theory of how markets function. Hence, neoclassical economics did not, because it could not, articulate a market price formation process.

 

The classical economists suffered none of these inconsistencies (see, for example, Adam Smith, 1776, Book I, Chapter VII). They articulated a coherent theory of price formation and discovery based on operational pre-market assumptions about the decentralized information that buyers and sellers brought to market, their interactive market behavior in aggregating this information, and the simultaneous determination of prices and contract quantities in the market’s end state.

Buyers (sellers) were postulated as having pre-market max willingness to pay, wtp (min willingness to accept, wta), value (cost) for given desired quantities to purchase (sell) that bounded the price at which each would buy (sell) as they sought to buy cheap (sell dear). We articulate a mathematical theory of this classical price formation process, its connections with Shannon information theory, and the unexpected role of early experiments in using designs and reporting results consistent with classical theory:

  1. Individuals go to market with pre-market max wtp (min wta) reservation values (costs) for discrete (integer) units. Arriving, they bring aggregate WTP (WTA) conditions governing price bounds, and motivation to buy cheap (sell dear).
  2. Any trial (bid/offer) price, P, if too low, tends to rise; if too high, tends to fall. Hence, in classical price adjustment, the “law of demand and supply” is dynamic. Formally, price change and excess demand, e(P), have the same sign: e(P) ΔP/Δt > 0 if e ≠ 0; price changes if excess demand is nonzero.
  3. Short-side rationing: If any (bid or offer) trial price, P, is too low, the units bought (demanded) are limited by the supply quantity; if P is too high, the units sold (supplied) are limited by the demand quantity. Hence, quantity traded is the minimum of quantity supplied and quantity demanded, or formally, min[s(P), d(P)].

From 2., let V(P) = the integral (sum) of −e(P) [namely, excess supply, s(p) − d(p)]; for discrete values,

$$V(p) = \sum_{v \ge p} |v - p| \;+\; \sum_{c \le p} |c - p|,$$

 

where the notation means summation over all values, v ≥ p, and all costs, c ≤ p, to assure that no goods trade at a loss. Define (the market center of value),

$$C = \arg\min_P V(P),$$

which includes market clearing, with C = P* (“equilibrium” price), but is more general, by including important cases like constant cost industries where the exchange quantity is determined by demand.
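To make the construction concrete, here is a minimal Python sketch (mine, not from Inoua and Smith) that computes the discrete V(p) for small, hypothetical sets of buyer values and seller costs and searches a price grid for the minimizers. With these numbers the minimizers form the whole clearing interval, illustrating why C includes, but is more general than, a single equilibrium price P*.

```python
# Minimal sketch (not from Inoua and Smith) of the discrete construction:
# each buyer has a max willingness to pay v, each seller a min willingness
# to accept c, one unit apiece. All numbers are hypothetical.

def V(p, values, costs):
    """V(p) = sum_{v >= p} |v - p| + sum_{c <= p} |c - p|."""
    return (sum(v - p for v in values if v >= p)
            + sum(p - c for c in costs if c <= p))

values = [10, 9, 8, 7, 6, 5]   # buyers' reservation values (wtp)
costs = [3, 4, 5, 6, 7, 8]     # sellers' reservation costs (wta)

grid = [k / 10 for k in range(0, 121)]          # trial prices 0.0 .. 12.0
v_min = min(V(p, values, costs) for p in grid)
C = [p for p in grid if abs(V(p, values, costs) - v_min) < 1e-9]

# Every price in [6.0, 7.0] clears this market (4 units trade), so the
# whole interval minimizes V -- C is a set, not necessarily a point.
print(f"arg min V(P) = [{C[0]}, {C[-1]}], V = {v_min}")
```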

 

Notice that V(P), a Lyapunov function, measures the distance between price and the traders’ reservation values in profit space. At any P we have ΔV/Δt = −e(P) ΔP/Δt ≤ 0, where t is transaction number.

Hence, V changes non-positively as transactions increase, a parameter-free law of classical market convergence. To get convergence speed, a quantitative result, we would need institutional parameters relating transactions to calendar time.
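A qualitative simulation of this adjustment law is easy to write down. The sketch below (again mine, with hypothetical numbers) moves a trial price in the direction of excess demand, one step per transaction round, and prints V(P): it falls, then goes flat once the price enters the clearing interval, and never rises.

```python
# Sketch (mine) of the dynamic law: price moves with the sign of excess
# demand e(P), and the Lyapunov function V(P) is non-increasing in t.

def excess_demand(p, values, costs):
    """e(p) = d(p) - s(p): buyers willing to buy minus sellers willing to sell."""
    return (sum(1 for v in values if v >= p)
            - sum(1 for c in costs if c <= p))

def V(p, values, costs):
    return (sum(v - p for v in values if v >= p)
            + sum(p - c for c in costs if c <= p))

values = [10, 9, 8, 7, 6, 5]
costs = [3, 4, 5, 6, 7, 8]

p, step = 2.0, 0.25              # start from a trial price that is too low
for t in range(30):
    e = excess_demand(p, values, costs)
    print(f"t={t:2d}  P={p:5.2f}  e(P)={e:+d}  V(P)={V(p, values, costs):6.2f}")
    if e == 0:                   # no excess demand: no pressure on price
        break
    p += step if e > 0 else -step   # e(P) dP/dt > 0: follow the sign of e
```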

For a smooth “large market,” where we let the number of agents increase without bound,

$$V(p) = \int_0^p S(x)\,dx + \int_p^{\infty} D(x)\,dx$$

 

and dV/dt = -e(P)dP/dt ≤ 0.

 

Here is a chart illustrating the above equations for large markets, in which all “motion” is in terms of transactions, t ≥ 0, not time, and is therefore a qualitative dynamics.


To Fear or Not to Fear: That is the Question

This is a talk I gave to the local Osher Lifelong Learning Institute (OLLI), sponsored by California State University, Monterey Bay (CSUMB). The local one is run by Michele Crompton, and she does a great job. My topic was things we should fear a lot and things we shouldn’t fear much. In the middle category was COVID-19. In the “not fear much” category were China (unless you live near China), terrorism, running out of land, genetically modified foods, getting shot by police, and global warming.

I’ll leave the “what we should fear a lot” to people who view the video.

You can read about Bernard Osher here.

If you ever get a chance to give an OLLI talk, I recommend it. The pay is low and, to do a good job, I usually put a fair amount of time into it. But I get articles and blog posts out of that work.

As important, I get a great audience. It tends to be older people and what I like about them is that they’re not a random pick. They tend to be quite curious and want to learn.

My take on the locals, after giving about 6 of these talks over the last 8 years or so, is that they like me but don’t necessarily like my message, especially on this topic. But they ask good questions and it’s a highly civil discussion. Moreover, many of them know a lot.

My first OLLI talk was titled “The Cost of War.” My usual style with an audience is to ask questions as I go. It worked really well with that audience. Each time I asked a question of an audience of about 30 people, 5 or 6 hands would go up and I would call on someone randomly who would invariably get the answer right. (As I recall, my early questions were about World War I.) After about the third one, I paused and said, “I love talking to people who know things.”


Counting the Cost

A review of Hench by Natalie Zina Walschots, William Morrow Press, 399 pages.

 

As much as I have enjoyed watching The Boys on Amazon Prime, I confess that I am somewhat worn out with dark revisionings of superheroes. Radical and genre-bending when they first began to appear, much of this work has become as predictable and formulaic as the worst versions of the material it seeks to overturn. I sometimes amuse myself by wondering when genre writers will decide it’s time to do something really radical and write non-dystopian, non-apocalyptic works with contented characters and happy endings. 

 

That said, Natalie Zina Walschots’s novel Hench does explore some new territory. While we have seen novels that focused on sidekicks before (Lexie Dunne’s “Superheroes Anonymous” series, for example) and while we have seen novels that have focused on villains (V.E. Schwab’s Vicious and Austin Grossman’s Soon I Will Be Invincible come to mind), I don’t think we have seen a novel (with the possible exception of The Henchmen’s Book Club by Danny King) that clearly imagines the world of the henchmen who support supervillains. 

 

Our hero, Anna Tromedlov, works data-entry temp jobs for supervillains. She is, as the novel opens, a completely insignificant individual in a world occupied by heroes and villains, and preoccupied with their interactions. When a temp job goes wrong, Anna is horribly injured by the biggest hero of her world–a Superman analogue named Supercollider. As a result, her supervillain boss fires her. (In what may be the most villainous moment of the book, he does so by sending a fruit basket to her bedside in the hospital…with a pink slip attached.)

 

Anna’s combination of devastating injuries and unemployment sends her on a quest that Econlog readers should find particularly interesting. She begins to calculate the cost of superheroes in life-years, using the work of real-life economist Ilan Noy as her inspiration. Superhero costs are a common topic of discussion on Reddit, and the website Law and the Multiverse gives the question a good deal of attention as well, but it’s fun to see it brought into a fictional setting.

 

It’s even more fun when Anna decides to weaponize her blog that counts these costs as a way to take down the superhero who ruined her life. Her carefully calculated, gradual attacks on Supercollider, her growing alliance with the supervillain Leviathan, and her slow transformation from a temporary data-entry clerk to a henchman, and then to a supervillain in her own right provide much of the interest of the novel. Considerable horror (or gross-out humor, depending on the reader’s tastes) is provided by her increased reliance on body modifications to ramp up her power, and by the various ways she finds to deal with the invulnerable flesh of her nemesis Supercollider. The book’s final scenes, where Supercollider is turned into a weapon against himself, are not for the squeamish.

 

I’m not sure that anything in Hench is really new. The more familiar you are with the genre of superheroes and particularly with the genre of dark superhero reimaginings, the more it will remind you of other things you’ve read before. But Hench is a good read, with a fun economic twist. It’s a comment on modern office culture, the struggles of temp work, and an increasing sense of powerlessness that demands “decisive evidence that once the pieces are assembled, a hero can fall. A king can fall. No matter how absolute the stranglehold of power might seem, I can take them down. The data is there.”

 




Dogs, Mountain Lions, and COVID-19

How dangerous are mountain lions? The data tell an interesting story. Since 1980, there have been only 13 attacks in all of California (where David and Charley live) and three people have died as a result. Compare this with attacks by dogs. Each year in California, about 100,000 dog attacks cause their victims to get medical attention. This means that California residents are approximately 180,000 times as likely to be seriously attacked by a dog as by a mountain lion. But to really compare dogs and mountain lions, we need to check our base, because there are a lot more dogs than mountain lions. With so many more dogs running around, a reasonable person would expect more dog attacks. With about 8 million dogs and 5,000 mountain lions in California, we see that there are approximately 1,500 dogs for each mountain lion. Once we check our base and correct for the numbers of dogs and mountain lions, we see that dogs are still more dangerous, and in fact, the risk of serious attack from an individual dog is about 120 times that of the risk from an individual mountain lion. Mountain lions present a daunting and ferocious image, but with so few attacks, they must have very little interest in attacking people.

This quote is from David R. Henderson and Charles L. Hooper, Making Great Decisions in Business and Life, Chicago Park Press, 2006.
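As a check on the passage’s arithmetic, here’s a quick back-of-the-envelope in Python. One assumption is mine: the 13 attacks are spread over roughly the 23 years between 1980 and the book’s writing, giving a bit over half an attack per year.

```python
# Back-of-the-envelope check of the dog vs. mountain lion numbers.
# Assumption (mine): 13 lion attacks spread over ~23 years (1980-2003).

dog_attacks_per_year = 100_000
lion_attacks_per_year = 13 / 23        # about 0.57 per year

dogs, lions = 8_000_000, 5_000

# Overall risk ratio: ~177,000, close to the "approximately 180,000" above.
print(dog_attacks_per_year / lion_attacks_per_year)

# Per-animal risk after "checking the base": ~110, in the ballpark of
# the "about 120" figure in the book.
per_dog = dog_attacks_per_year / dogs
per_lion = lion_attacks_per_year / lions
print(per_dog / per_lion)
```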

The picture above is of me at the entrance to Stanford University yesterday. It reminded me of maps I saw of Germany divided into 4 zones after World War II. My friend and co-author Charley Hooper and I walked on the campus and noticed a whole lot of 20-somethings walking around wearing masks even though they were typically walking alone and were not closer than 30 feet to anyone else.

They seem to fear COVID-19 the way some people fear mountain lions. My guess is that it’s a mixture of their fear and the fear of the administrators of Stanford, who seem to have taken an extremely anti-intellectual approach to the issue.

Either way, the risk to the young is extremely low. Here are data from the Centers for Disease Control as of November 12, 2020. They are for the number of deaths between February 1, 2020 and November 7, 2020. (The CDC notes that there is a lag because death counts are somewhat delayed.)

The number of Americans of age 15 to 24 who have died of COVID-19 is 410.

The number of Americans of age 15 to 24 who have died of all causes is 26,662.

Watch out for dogs, not mountain lions.

If you’re young, watch out for deaths from other causes, not COVID-19.

Postscript: What brought us to Palo Alto is my 70th birthday, which I celebrate on Saturday. Because it’s impractical right now to do what Governor Newsom did but told the rest of us not to do, I’m not having a 70th birthday party. Instead, I’m seeing individual friends for outdoor lunches and conversation. Charley, Jay Bhattacharya, and I had a wonderful outdoor lunch and conversation that lasted almost 3 hours.


Brains and Bias, continued.

Read Part 1 of my review.

 

While Ritchie’s book does a laudable job of describing for the reader some of the most common pitfalls in scientific research, after these first chapters he starts to lose his way.  He includes an additional chapter about what he calls “hype,” in which he tries to describe the risks that arise when academics rush to publish exciting, provocative results without thoroughly examining them or subjecting them to peer review.  Unfortunately, a lot of what he describes here consists of examples of the problems he articulated in the previous chapters.  But in the hype chapter, he finally talks at more length about the bias that many journals and media outlets have toward glitzy positive results.  In the cases he documents, this bias encourages people to fudge results or rush them to press rather than recheck their work and explore other explanations.  Hype can create a rush to conclusions, which the public saw in an embarrassing political debate among doctors and medical organizations over the possible effectiveness of hydroxychloroquine against COVID.  But even hydroxychloroquine is more of a cautionary tale, reminding researchers to be more precise and to cross-check their work.

 

But where the book really disappoints is in the proposed solutions to the problems he’s rightly described.  A lot of this rests on a rather disorganized vision of human nature, competition in the academic world, and a very odd view of incentives.  On the one hand, he understands that journals have reasons to avoid publishing “boring” findings on replications and to be drawn to more “exciting” findings of positive effects.  He also recognizes why scholars might be reluctant to share data and might keep their research agendas under the radar lest another scientist swoop in and beat them to publication.

 

But then he lashes out at the increasing productivity of young professors, which he seems to believe is leading to more of the problems he’s identified.  However, arguing that increasing productivity is a problem, rather than a possible solution, reveals his underlying preferences.  He writes that rather than pursuing “publication itself” (emphasis his), scientists should “practice science,” which apparently means more careful work.  One can understand how this can appear odd to a non-academic.  And why, the reader can fairly ask, is increasing research productivity necessarily an indicator of poor research?  In the earlier part of the book, he acknowledges that advancing computer technology is making cross-checking for statistical errors and confirming results easier.  One would naturally assume the same is true as processing power makes producing more research less costly.  Instead, Ritchie argues that the psychology finding of a “speed-accuracy trade-off” in brain function proves his point that more productivity is bad.  It’s here that Ritchie starts to look a little like the biased one.

 

Ritchie then reviews the rise of half-baked journals and cash prizes for productivity, and he cherry-picks examples to make his case that such measures show the rat race of research is destroying scientific quality.  Any reasonable university can distinguish non-refereed, low-quality journals from good ones.  The issue of cash prizes seems largely centered on China and other authoritarian regimes.  He piles on examples of papers that address very small problems in disciplines (which he calls salami slicing) and the problem of self-referencing to boost one’s h-index.  Still, he doesn’t exactly make a strong case that these phenomena are undermining the progress of science.  It’s also certainly not groundbreaking or new to observe that competition occurs in the sciences; in fact, it can be highly productive, as in the competitive pursuit of nuclear weapons – a race that was critical to win.

 

Ritchie is also concerned about the private-sector biases of drug companies, but he says virtually nothing about the biases and dangers presented by the single biggest funder of scientific research – the government.  According to Science, the US government still funds almost half of all scientific research in America.  Why focus on problems with the private sector when the National Science Foundation is still the 800-pound gorilla in the room?

 

Finally, one more complaint, which I think helps explain the “bias” chapter’s conflation of a few different types of bias: Ritchie lumps together the empirical social sciences and the hard sciences.  Many of his concerns will ring true to economists and empirical political scientists.  However, there is a critical distinction between a discipline like physics and one like psychology.  Psychology experiments are run on human subjects, and as any social scientist will tell you, figuring out how humans tick is very difficult, even with advanced econometrics and good research design.  The problems that the physical and social sciences face are somewhat similar, but ultimately what Ritchie has given us is a useful reminder that all research is done by imperfect humans.  He is right to argue for care, precision, an openness to publishing null results, and concern about findings that can’t be confirmed.  But you can’t remove the human element, and because of our ingenuity, creativity, and intelligence, we have done a lot of good work unlocking how the various parts of the world work.  Ritchie has given us a higher bar to strive for, but he might want to recognize that discouraging and disparaging increasing productivity, dismissing the possible role of incentives, and ignoring the promise of technological progress show a bias in his thinking as well.


Brains and Bias: A Review of Science Fictions

Stuart Ritchie has written a very interesting and timely half of a book in Science Fictions. He has correctly identified, and painstakingly articulated, why those who claim to have clear-cut answers based on “science” should be taken with a grain of salt.  His biggest accomplishment is reminding the reader that part of the problem with science is that humans, we imperfect creatures with all of our biases and flaws, are the ones actually doing the science.  That is a particularly important thing to remember as more and more policy in the world of COVID seems predicated on “science.”  Ritchie’s bracing warning about the limitations of scientific research is a welcome message at the moment.  Overcoming the COVID pandemic, or at least effectively managing its risks, requires tremendous scientific precision and accuracy to create a successful vaccine. Science can also warn us about truly risky behaviors and develop other treatments for those with COVID-19.  Yet he falters in the later stages of the text, in which he conflates his overlapping concerns and suggests prescriptions for the problems facing science that don’t seem to align human nature with the institutions of scientific research.  He also fails to recognize the role that the government plays in funding and directing scientific research.

 

Much of Ritchie’s book addresses the crisis many fields are facing because of an inability to replicate research findings.  Replication is the ability to reproduce reported results in subsequent research, whether across various samples or using the same data as the original published work.  Ritchie does a convincing job of explaining the scope of the problem to his readers.  This is particularly true in psychology, where replication work has invalidated about half of the papers that have appeared in major journals such as Nature.  For example, in a well-known and widely cited study, participants who were shown words like grey, knits, wise, and Florida appeared to walk more slowly because of the supposed “priming” effect of those words.  However, subsequent attempts to replicate this study in different contexts did not show any effect.  The inability to replicate results is not limited to psychology.  He also discusses examples from medicine, where the inability to replicate findings has more costly and tangible consequences.

 

Compounding the problem is another legitimate issue Ritchie discusses – the bias of journals toward publishing only what he calls “positive, flashy, novel, newsworthy” findings rather than dull replications and null results, which don’t excite readers.  More than a few academics will acknowledge that they have papers in their “file drawers” that didn’t produce positive results and that they never bothered to send out for review.  There’s a lot of potentially beneficial human knowledge hidden away because of these journal biases.

 

Ritchie then spends the next four chapters describing what he calls “Faults and Flaws” – namely fraud, bias, negligence, and hype.  Fraud is about what the reader might expect.  Ritchie highlights some of the better-known cases of fraud in various disciplines, including political science, social psychology, physics, and medicine – including the now-retracted and debunked paper in the Lancet on the relationship between autism and vaccinations.  He explores the possible motivations in these cases; while the most frequent are greed and ambition, he also speculates that in some cases it is researchers’ hope and desire that their hypotheses be true that prompts them to falsify data.  This raises interesting questions about human nature and the influence of prior beliefs in “science.”  The reader will be both amused and somewhat horrified to learn that there is a Hall of Shame for individuals who have been forced to retract many of their scientific papers because they engaged in fraudulent science.  Additionally, Ritchie shows that many of these retracted papers continue to be cited favorably by researchers who are unaware they have been retracted.  This is obviously a serious problem.

 

This leads him to his second problem, which he describes as “bias.”  Now “bias” for Ritchie means a couple of different things, but it seems to boil down to the mostly unconscious biases scholars bring to their scientific research.  He begins the chapter by discussing the historical “replication” of Samuel Morton’s infamous study of skulls from different human races, which claimed to show the larger average size of Caucasian skulls.  The replication by the esteemed paleontologist Stephen Jay Gould concluded that Morton had made errors such as selecting more male Caucasian skulls (thus enhancing the average size).  Ritchie tells his readers that Gould said of Morton’s research, “all of his errors moved the results in that same direction,” and the reader is left with the impression that bias is really about prior belief.

 

He then goes on to explain bias in publications (the above-mentioned general bias toward positive rather than null results) as well as what he calls “p-hacking”: the manipulation of statistical significance tests to achieve significance for important variables in research.  Let me be clear – all of these issues are significant problems in the social and “hard” sciences.  However, the way he blends them makes the chapter a little choppy and tough to follow, especially for someone not familiar with statistics and methodology. He still does a solid job of explaining graphically how small sample sizes can skew results.  But when the chapter ends, he returns the reader to the story of Morton and Gould and tells us that a team of anthropologists retested the skulls yet again.  These researchers, using more modern technology, found errors both helping and harming Morton’s case, while Gould’s work showed bias in one direction only.  If a reader is left somewhat confused about how exactly Ritchie is thinking about bias, she can probably be forgiven.
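Since p-hacking is easiest to grasp by example, here is a toy simulation (mine, not Ritchie’s): regress a pure-noise outcome on twenty pure-noise predictors, report only the “significant” ones, and chance alone will usually hand you a publishable-looking result at p < 0.05.

```python
# A toy p-hacking demo (mine, not from Ritchie's book): a pure-noise
# outcome is tested against 20 pure-noise predictors. At the usual
# p < 0.05 threshold, about one spurious "finding" is expected by chance.

import math
import random
import statistics

random.seed(1)
n = 30                                   # sample size per variable

def p_value(x, y):
    """Two-sided p for the correlation of x and y (normal approximation)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return 1 - math.erf(abs(t) / math.sqrt(2))

outcome = [random.gauss(0, 1) for _ in range(n)]
hits = 0
for j in range(20):
    predictor = [random.gauss(0, 1) for _ in range(n)]
    if p_value(predictor, outcome) < 0.05:
        hits += 1
        print(f"predictor {j}: 'significant' -- and pure noise")
print(f"{hits} spurious hits out of 20 noise predictors")
```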

 

Next he turns to “negligence,” or sloppy methods that can skew results and lead to faulty conclusions.  Like fraud, the theme of this chapter shouldn’t be surprising.  People make mistakes; we are human.  But when it comes to critical research on things like genetic sequencing or the effectiveness of government austerity programs, Ritchie’s concerns about the inability to confirm and replicate results because of simple errors in entering data or coding are important to recognize.  Ultimately, his solution to this problem is again rechecking and replication.  Most of the cases he documents were discovered as the data for this work became more widely available and other researchers and more advanced computer programs began to examine the data to confirm the results.  In this sense, he’s actually telling a somewhat more optimistic story than he seems to realize.

 

Where Ritchie loses his way… Coming next week.


Benjamin Boyce Interviews Adrian Lee Oliver

As regular readers of my posts know, I have little patience for long interviews. But I was blown away by this one. It’s the best interview I’ve seen in 2020.

I hadn’t heard of either Boyce or Oliver before, but from now on, whenever I see their names, I’ll pay attention.

I was so enthralled by the first 37 minutes that I forgot to time stamp things. So this time stamping will be rough up until the 38th minute.

In the first 30 minutes or so, Oliver talks about what it was like to grow up as a black kid facing extreme discrimination and racism in America. And not 1940s or 1950s America, but America of the 1990s and early 2000s. In one story he tells how he won over a white racist in school. Really neat story.

38:00: Why the cops suddenly apologized for torturing him. Hint: nepotism.

39:50: Why what’s going on with cops is not systemic racism. Many of them would like to treat everyone badly, but their statistical analysis stops them with certain groups.

41:38: The left’s preemptive strike against expertise and the recent Steven Pinker attack.

46:45: How what the left is doing could lead to an even more virulent and wider spread white nationalism.

57:00: We need arenas for the non-political. (By the way, my own is pickleball. Political conversations are actively discouraged.)

59:00: Oliver makes a fantastic point about the failure of Communism.

1:03:40: Being in a cult isn’t fun.

1:14:50: We need to be ruthless against bad ideas but not against the people who hold them.

HT2 Bob Murphy.


The Bad and the Great News about Unemployment in May

 

My economist friend Jack Tatom wrote the following on Facebook and gave me permission to share. For background, see my “Why the Drop in Unemployment Did Not Surprise Me,” June 5. It’s pretty involved and you might have to pause at various points to take it in, but it’s by far the best explanation I’ve seen. Here goes:

On Friday, June 5, the Bureau of Labor Statistics (BLS) announced the Employment Situation for May showing that the nation’s unemployment rate had declined in May from 14.7 percent to 13.3 percent, a shock to many who had expected a fall in employment of over 9 million and a rise in the unemployment rate to 20 percent. I was not among the shocked. I had expected employment to rise very sharply due to a reduction in the number unemployed.

The BLS added a footnote to its May report indicating that there had been an error in data submitted by survey takers, who had counted many people as employed instead of as unemployed. Counting them as unemployed was the explicit instruction the BLS had given for the treatment of furloughed workers who did not work during the previous week. According to the BLS, had the surveys been correctly completed, the April unemployment rate would have been shown as 19.7 percent and the rate would have fallen to 16.3 percent in May. [DRH note: Note that that is still a large fall in the unemployment rate; I believe it’s the largest one-month fall in recorded U.S. history.]

So, what happened and what does this mean? First of all, it means that in both months unemployment and the unemployment rate were higher than previously thought. That’s the bad news. The 20 percent unemployment rate expected for May nearly occurred in April. The good news is that these data show the turnaround in the economy was actually bigger than the official data indicate. The officially reported data show a decline of 1.4 percentage points in the unemployment rate; the actual decline that the BLS indicates occurred is more than twice as large.

Based on the data released, my calculations indicate that approximately 7.7 million furloughed workers were “mistakenly” treated as employed in April but should have been treated as unemployed. Instead of the 18.1 million reported in Table A-11 of the Employment Situation for April, the correct number was about 25.8 million. For May, Table A-11 shows the number on furlough falling to 15.3 million. Using the revised data based on the footnote to the BLS report, it now appears that the decline in unemployment in May due to falling numbers of unemployed workers on temporary layoff was 5.7 million workers instead of the officially reported 2.7 million. This larger return of furloughed workers to employment again accounted for more than all of the approximately 5.0 million increase in overall employment implied by the footnote. Five million more workers returning to work in May is dramatically more than the continuing decline expected a week ago by others, or even the 2.1 million official gain reported on Friday. That is not good news; it is great news.
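The 7.7 million figure can be roughly reconstructed from the rate revisions alone. The sketch below assumes an April 2020 civilian labor force of about 156 million; that number is my assumption, not Tatom’s.

```python
# Rough reconstruction (mine) of Tatom's figures from the rate revisions.
# Assumption: April 2020 civilian labor force of ~156 million (not in the post).

labor_force = 156_000_000

official = {"april": 0.147, "may": 0.133}    # reported unemployment rates
corrected = {"april": 0.197, "may": 0.163}   # rates implied by the BLS footnote

# Workers counted as employed in April who should have been unemployed:
misclassified = (corrected["april"] - official["april"]) * labor_force
print(f"April misclassified: {misclassified / 1e6:.1f} million")
# ~7.8 million, close to the post's 7.7 million figure

# The corrected one-month drop is more than twice the official one:
print(f"official drop:  {(official['april'] - official['may']) * 100:.1f} pts")   # 1.4
print(f"corrected drop: {(corrected['april'] - corrected['may']) * 100:.1f} pts") # 3.4
```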

What about the decline in employment expected by nearly all analysts and the press? Buried in all these numbers is a decline in employment for workers who were not on furlough, a decline that was swamped by the return of formerly furloughed workers. In the official data, the reduction in unemployed furloughed workers was 2.7 million, but the reduction in the overall number unemployed was 2.1 million. The difference is other workers who were not furloughed but were added to the number unemployed. When the corrected measures are used, the number of furloughed workers declined by 5.7 million, about 0.7 million larger than the approximately 5 million reduction in overall unemployment (different from official measures due to rounding error). So there was a decline in employment in May, but it was overwhelmed by the return of workers who had been on furlough in April and were back in employment by May.

Where do we go from here? Depends on to whom “we” refers. The BLS official view is not to tamper with suspected errors in the collected surveys. So, whether BLS will adjust the official data in the future remains to be seen. But where the economy goes is more certain. Given the opening of states’ economies in late May and early June, the accelerating opening of businesses suggests even larger increases in employment and declines in the unemployment rate in June and the rest of the summer.
