Arnold Kling on Modigliani-Miller and money

Arnold Kling recently posted on the implications of the Modigliani-Miller (M-M) theory of corporate finance for monetary economics. He seems to suggest that open market operations will not significantly affect the price level, even when interest rates are positive. (Although it’s possible I’ve misread his post):

The metaphor that I would propose is that a single firm’s changes to its financial structure is like me sticking my hand in the ocean, scooping up water, and throwing it in the air: I don’t make a hole in the ocean.

Modigliani-Miller is not strictly true. But it is the best first approximation to use in looking at financial markets. That is, you should start with Modigliani-Miller and think carefully about what might cause deviations from it, rather than casually theorize under the implicit assumption that it has no validity whatsoever.

Taking this approach, I view the Fed as just another bank. Its portfolio decisions do not make a hole in the ocean. That heretical view is the basis for the analysis of inflation in my book Specialization and Trade.

*Sumner’s response is actually beside the point, in my view. The Modigliani-Miller theorem does not in any way rely on different asset classes being close substitutes. It relies on financial markets offering opportunities for people to align their portfolios to meet their needs in response to a corporate restructuring.

I have the opposite view. I believe that money is neutral in the long run and that if you permanently and exogenously double the monetary base through open market operations, this action will double the price level.

Consider the following example. Apple decides to exchange tens of billions of dollars of T-bills for another asset, say palladium. Let’s say Apple suddenly buys up 30% of the world’s stock of palladium. As a first approximation, the value of Apple has not changed–they simply exchanged one asset for another at current market prices.

But the market value of palladium has likely increased sharply! If palladium were the medium of account when this occurred, then Apple’s portfolio readjustment would be highly deflationary.  Indeed something quite like this caused the Great Depression.

You might argue that palladium has nothing to do with M-M, and I’d agree. But my claim is that cash is much more like palladium than it is like stocks or bonds, which are the sorts of assets envisioned when people discuss M-M.  In 1981, $100 bills yielded 0% while one-month T-bills yielded nearly 15%.  These assets are clearly not close substitutes.  When the government exchanges T-bills for T-Notes, the financial markets yawn.  When the Fed (implicitly) announces a plan to exchange T-bills for cash, the global market value of all stocks can decline by trillions of dollars in seconds.  There’s a good reason why stock investors believe I’m right and Kling is wrong; cash and T-bills are not close substitutes.

I’ve spent my whole life looking at the impact of open market operations (OMOs) on macro variables, as have much smarter people like Milton Friedman.  There’s no doubt in my mind that exogenous and permanent OMOs have a huge impact on the economy, at least at positive interest rates and very likely at zero interest rates as well.  That’s not in doubt.  The question we need to examine is not, “Does M-M help us to understand monetary policy?”  Rather the interesting question is, “Why doesn’t M-M help us to understand monetary policy?”

PS.  At 2:15 on December 11, 2007, the Fed made an announcement. They didn’t even mention “cash”, but the only way to implement their announced policy was to reduce the growth rate of cash to a level below what the market had expected 5 minutes earlier. This is how the stock market (DJIA) responded:

Stock investors don’t seem to believe that M-M applies to OMOs.


Look at money!

I ended a recent MoneyIllusion post with this amusing equation, as a sort of throwaway:

M*V = C + I + G + (X-M)

The comment section convinced me to say a bit more about the equation. How should we think about it?

Start with a barter economy and look at the exchange of apples and oranges. If the price of apples in terms of oranges moves over time, how should we think of that change? Obviously it might reflect a shock in the apple industry, the orange industry, or both. Therefore it might be useful to compare each good against many other goods, to see if one of the two goods was clearly moving in price against almost all other goods.

A more familiar example is exchange rates. If the yen price of euros goes up, that might mean a stronger euro, a weaker yen, or both. Here again you might look at each currency in terms of all other currencies, to get a better read as to where the “shock” was concentrated.  If the euro appreciated against all other currencies, then that would be suggestive that the move didn’t just reflect events in Japan.

Of course most transactions involve money for goods.  My general view is that in a microeconomic context it makes more sense to focus on the good in question, whereas in macro I focus on the money side of the transaction.

Consider a case where global oil output suddenly falls by 2%, and oil prices shoot up by 20% in the short run.  (Oil demand is highly inelastic in the short run.)  In that case, money expenditures on oil rise by about 18%.  And yes, that expenditure is a part of NGDP.  So why do I focus on the OPEC output shock and not the money side when explaining why 18% more is spent on oil?  Because the oil market is just one of many uses of money.  If people spend more on oil, it’s pretty easy to move money over from other sectors, or from savings.
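The arithmetic in the oil example can be checked directly; the 2% and 20% figures are the stylized numbers from the paragraph above:

```python
# Stylized oil-shock arithmetic: output falls 2%, price rises 20%
# (demand is highly inelastic in the short run).
output_change = -0.02
price_change = 0.20

# New expenditure relative to old: quantity ratio times price ratio.
expenditure_change = (1 + output_change) * (1 + price_change) - 1

print(f"Oil expenditure rises by {expenditure_change:.1%}")  # ~17.6%, i.e. about 18%
```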

In macro it’s almost exactly the opposite.  When we spend more at a macro level, we have more money being exchanged for thousands of different types of goods, services, and assets, in a wide variety of markets.  Now it’s simpler to focus on the money side of the transaction.  It’s easier to figure out why M times V goes up, rather than explain spending on many different types of goods.

In my book on the gold standard I looked at changes in the price level, although I would have used NGDP if better high frequency data were available.  I found that the easiest way to explain the big fall in global prices (and NGDP) during the early 1930s was to look at changes in the supply and demand for gold, rather than C + I + G in dozens of countries.

That’s not to say it’s impossible to tell a “Keynesian” story.  You could claim that reduced animal spirits led to less investment demand and a lower equilibrium global interest rate.  A lower interest rate boosted gold demand (or reduced gold velocity.)  But in practice, animal spirits don’t typically change that dramatically for no reason, especially in the aggregate.  In the early 1930s, it was increased demand for gold by central banks that first triggered the Great Depression, and this is what later reduced “animal spirits”, leading to additional feedback effects that further boosted gold demand and further reduced NGDP and prices.

Overall, I believe that NGDP during the gold standard can be best explained by focusing on the global gold market, but it’s not the only option.  It’s when we turn to fiat money regimes that the argument for a monetary explanation for NGDP becomes completely overwhelming, for two reasons:

1. Central banks have almost infinite ability to impact M (and great ability to impact V as well—via IOER)

2. Central banks often have mandates to target things that are closely related to NGDP, like prices and employment.

Perhaps the following analogy would help explain why the money focus is even more desirable with fiat money than with gold.

With old-fashioned sailing ships, you might want to focus on the captain’s decisions when explaining the path of the ship, but wind and waves would also play a non-trivial role.

With modern oil tankers, you’d be crazy to not focus on the captain’s decisions when explaining the path of the ship; the role of wind and waves would be trivial.

Now let’s return to M*V = C + I + G + (X-M)

It’s an identity, so it tells us nothing about causation.  One can think of this equation as a sort of argument, a debate.  It reflects two ways of describing NGDP, and it can be viewed as a dispute as to which approach is more useful in explaining what causes changes in NGDP.  Should we focus on the goods sold, or the money spent on those goods?

When inflation is extremely high, it’s pretty obvious that the monetary approach is the most useful.  You can’t explain hyperinflation by looking at what causes price rises in 13,000 individual markets, and then adding them up.  There’s obviously a common factor. In most cases, close to 99% of hyperinflation is due to money growth, and the rest is higher velocity.  When inflation is very low, it’s more debatable which approach is the most useful for explaining changes in P and NGDP.
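The “99% money, the rest velocity” claim can be illustrated with the equation of exchange in log form. The growth numbers below are hypothetical, chosen only to mimic a hyperinflation:

```python
import math

# Equation of exchange: M * V = P * Y, so in log changes
# dlog(M) + dlog(V) = dlog(P) + dlog(Y).
# Hypothetical hyperinflation year: money grows 1000-fold,
# velocity rises 10%, real output is flat.
M_ratio, V_ratio, Y_ratio = 1000.0, 1.10, 1.00

dlogM, dlogV, dlogY = (math.log(x) for x in (M_ratio, V_ratio, Y_ratio))
dlogP = dlogM + dlogV - dlogY  # implied (log) change in the price level

money_share = dlogM / dlogP  # share of inflation attributable to money growth
print(f"Money growth accounts for {money_share:.0%} of the inflation")
```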

I happen to believe the monetary approach continues to be the most useful framework at lower rates of NGDP growth and inflation, because even though M and P (or M and NGDP) are no longer closely correlated, the central bank can and should move M to offset changes in V.  When it fails to do so, we can think about monetary reform proposals to make that failure less likely in the future.  Such reforms were enacted after both the Great Depression and the Great Inflation, which is why we now freak out about 1.5% inflation when there’s a 2% target, instead of minus 12% inflation or plus 13% inflation, which used to happen before we fixed the Fed.

If the monetary approach were not the most useful, then it’s unlikely that Fed reforms would have eliminated the wild price level swings we used to see, from double digit inflation to double-digit deflation.  Perhaps fiscal reforms could be said to have fixed the deflation problem (I doubt it), but obviously fiscal didn’t fix the high inflation episodes.

So what does M*V = NGDP actually mean?  One definition is, “The person who wrote down this equation believes that one should explain movements in NGDP by looking at the market for money.”  It’s a sort of exhortation:  Look at money!

And what does C + I + G + (X-M) = NGDP actually mean?  One definition is, “The person who wrote down this equation believes that one should explain movements in NGDP by looking at the factors that determine each type of spending.”  It’s a sort of exhortation:  Look at the major expenditure categories!

Me:  Look at money!

 


Europe Needs to Maintain Strong Policy Support to Sustain the Recovery

By Alfred Kammer The pandemic is exacting a heavy toll on Europe. More than 240,000 people have lost their lives. Millions have suffered the illness themselves, the loss of loved ones, or major disruption in their work, their businesses, and their daily lives. The economic impact of the pandemic has been enormous. Our latest Regional […]


Just how expansionary has the Fed actually been?

In my view, Fed policy during 2020 has been contractionary, as both NGDP and price level forecasts have declined. Many people prefer to look at monetary policy in a different way, focusing on the money that the Fed injects into the banking system. And by that measure, policy has indeed been expansionary.

But perhaps not as expansionary as you might assume. Six years ago, total commercial bank reserve balances at the Fed totaled $2.8 trillion. The most recent data indicates that total reserve balances at the Fed are $2.8 trillion. That’s no growth in 6 years, even in nominal terms.  (Reserves have obviously declined as a share of GDP.)

Again, both the overall monetary base and the part of the base held as bank reserves at the Fed have increased sharply this year, after declining from a peak in August 2014. But the current level of reserves is not extraordinary; at least not in the way that this year’s budget deficit is historically unprecedented.  (The deficit rose from below $500 billion in 2014 to $3.1 trillion this year.)

The Fed could have injected vast sums of money into the US economy, many times more than they injected during the Great Recession. They chose not to do so for reasons that I cannot explain.  Someone should ask Jay Powell, especially given that he’s calling for even more trillions in fiscal stimulus.  Monetary stimulus does not add a penny to the national debt.


Tim Duy on fiscal stimulus

David Beckworth directed me to an interesting post by Tim Duy:

Most likely, net job growth will continue even if at a slower pace. That job growth will be sufficient to drive income growth, and income growth will support consumption. But what about the missing fiscal stimulus, you say? I know this will be widely hated, but the decline in spending in nominal and real terms at this point pretty much matches the decline in income excluding current transfer payments . . .

The fall in consumption exceeded the fall in incomes early in the cycle while, on net, transfer payments are ending up as forced saving. The virus is the key impediment to growth at this point; there are certain sectors of the economy, leisure and hospitality in particular, with limited prospects until the virus is under greater control. There isn’t really a debate on this point; there is simply a nontrivial supply-side constraint on the economy right now.

I don’t hate Tim Duy for saying that the key problem now is on the supply side, as I’ve also been making that argument.  That’s not to say that demand stimulus would have no value right now—both Duy and I would prefer somewhat higher inflation expectations—but this isn’t the main factor currently holding back the recovery.

Some people complain that the Fed’s new “average inflation targeting” policy is quite vague and ambiguous.  That’s true, but in an earlier blog post I argued that as a practical matter it is pretty clear what the Fed intends to do.  Duy linked to a speech by Chicago Fed president Charles Evans that confirms my prediction:

Forget the many years of underrunning 2 percent since 2008, and let’s just start averaging beginning with the price level in the first quarter of 2020. Core PCE inflation in the SEP is projected to be 1-1/2 percent this year and then gradually rise to 2 percent in 2023. Suppose it hits 2-1/4 percent in 2024 and then stays there. In this scenario, cumulative average core inflation starting from the first quarter of 2020 does not reach 2 percent until mid-2026. That is a long time. If you can produce 2-1/2 percent inflation in 2024, you can get there about a year quicker.

That was my view as well, that the Fed would start the clock at the beginning of 2020 and begin to push inflation a few tenths of a percent above 2.0% in 2024, until the near-term inflation undershoot was offset.  I don’t think there’s much doubt any longer as to what the Fed intends to do.  The only question now is whether they will do what they promise or renege on their promise.  We’ll probably know the answer by 2025.
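Evans’s scenario can be sketched numerically. The annual inflation path below is my own stylized interpolation of the SEP numbers, not his quarterly path, so the catch-up date comes out somewhat later than his mid-2026 figure; the point is the mechanism, not the exact date:

```python
# Stylized average-inflation-targeting arithmetic. The clock starts in 2020.
# Annual core inflation rates (hypothetical interpolation): an undershoot
# through 2023, then a 2.25% overshoot from 2024 onward.
TARGET = 2.0
path = {2020: 1.5, 2021: 1.7, 2022: 1.9, 2023: 2.0}
overshoot_rate = 2.25

rates = []
for year in range(2020, 2035):
    rates.append(path.get(year, overshoot_rate))
    average = sum(rates) / len(rates)
    if average >= TARGET:
        print(f"Cumulative average inflation reaches 2% in {year}")
        break
```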


Identifying monetary shocks

This post is a follow-up to my recent post on the “masquerading problem”. Recall that changes in interest rates are not a reliable indicator of changes in the stance of monetary policy. A new paper by Marek Jarociński and Peter Karadi discusses an interesting method of identifying monetary shocks:

Central bank announcements simultaneously convey information about monetary policy and the central bank’s assessment of the economic outlook. This paper disentangles these two components and studies their effect on the economy using a structural vector autoregression. It relies on the information inherent in high-frequency co-movement of interest rates and stock prices around policy announcements: a surprise policy tightening raises interest rates and reduces stock prices, while the complementary positive central bank information shock raises both. These two shocks have intuitive and very different effects on the economy. Ignoring the central bank information shocks biases the inference on monetary policy nonneutrality.

I see this as a promising first step toward the market monetarist goal of using asset prices linked to NGDP as an indicator of monetary policy.  To be sure, stock prices are a very noisy indicator of NGDP expectations, but they are better than changes in short-term interest rates, which often don’t even have the right sign.  Tight money can occasionally cause lower nominal interest rates, as NeoFisherians have pointed out.  In the Jarociński and Karadi paper, only interest rate increases associated with falling stock prices are identified as an actual move toward tighter money.  Ideally we’d replace stock prices with NGDP futures prices.
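Here is a toy version of the sign scheme described in the abstract; it is my own simplified sketch, not the Jarociński–Karadi estimator itself, which embeds these restrictions in a structural VAR:

```python
def classify_surprise(rate_surprise: float, stock_surprise: float) -> str:
    """Classify a high-frequency announcement window by the co-movement
    of interest rates and stock prices (toy sign scheme)."""
    if rate_surprise > 0 and stock_surprise < 0:
        return "monetary tightening"          # rates up, stocks down
    if rate_surprise > 0 and stock_surprise > 0:
        return "positive information shock"   # rates up, stocks up
    if rate_surprise < 0 and stock_surprise > 0:
        return "monetary easing"              # rates down, stocks up
    if rate_surprise < 0 and stock_surprise < 0:
        return "negative information shock"   # rates down, stocks down
    return "no identified shock"

# A rate cut that coincides with falling stocks is read as bad news
# about the outlook, not as easy money:
print(classify_surprise(-0.25, -1.5))  # negative information shock
```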

This article was sent to me by Basil Halperin, who also made the following comment about my earlier masquerading problem post:

The Wolf paper mentions the “sign restrictions” identification strategy that is usually credited to Uhlig (2005). . . .

I think of your 1989 JPE with Silver as having done a proto-version of this approach! Both your paper and the Uhlig paper use the idea that “monetary shocks should send output and inflation in the same direction” to identify which episodes are demand shocks, versus which are supply.

Readers who are studying economics might be interested in the background of our 1989 JPE paper.  In the late 1980s, I was interested in studying business cycles.  When I looked at the data, the 1920-21 depression seemed like the purest example I could find of a stereotypical business cycle.  It saw the steepest one-year drop in industrial production, the steepest one-year drop in the monetary base, the steepest one-year drop in the price level, and the steepest one-year rise in real wages.  This made me sympathetic to the sticky wage theory of the business cycle, which is based on the idea that a severe deflation is contractionary because wages fall more slowly than prices.

Later I discovered research that found real wages to be procyclical, falling during booms and rising during recessions.  This surprised me, so I tried to reconcile these results with the evidence from 1921.  It turns out that the more recent studies that found procyclical real wages tended to rely heavily on some business cycles associated with the two oil shocks (1974 and 1979), which were periods when inflation rose and real wages fell during recessions.

Steve Silver and I responded with a study that divided business cycles up into two types, those with procyclical inflation (like 1921) and those with countercyclical inflation (such as 1974 and 1979).  Real wages were countercyclical during demand-side recessions such as 1921 and procyclical during the supply-side recessions of the 1970s.  This pushed me even more firmly into the sticky wages camp, as both findings are consistent with the idea that nominal wages are sticky when the price level moves suddenly and unexpectedly.

This work also dovetails nicely with my general view that NGDP is a good measure of monetary policy.  During supply shocks, prices and output move in the opposite direction, and NGDP doesn’t necessarily change all that much.  In contrast, tight money causes both falling prices and falling output.  If nominal wages are sticky then this results in higher real wages and higher unemployment.  This is why I later switched my focus from price level shocks to NGDP shocks. NGDP measures monetary shocks better than does the price level. George Selgin also reached this conclusion a few decades back, albeit for a slightly different set of reasons.

Perhaps it might seem “unscientific” to base one’s views on a single episode like 1920-21.  But my view is that extreme events are very revealing.  Yes, you should not use data mining to test a model, but data mining is a very good way to develop a model.  Then test it with a completely different set of data.  I’d encourage younger economists to pay close attention to Fed announcements that led to unusually pronounced real time market reactions, such as January 2001, September 2007 and December 2007.  In those cases, the background “noise” is less likely to disguise the causal relationships.  In my book on the Great Depression I discuss numerous such natural experiments.

PS.  There was a slightly steeper drop in IP right after WWII, but that was clearly a very unusual business cycle.  There was a larger drop during the Great Depression, but spread over a much longer period of time.  So I believe 1920-21 is the purest negative demand shock.  Furthermore, the drop in demand was not endogenous.  It wasn’t partly caused by bank failures as in the 1930s; it was almost entirely due to the Fed sharply reducing the monetary base.


The masquerading problem

For the past four decades, I’ve been complaining about the way the profession does empirical work on monetary policy. The studies often use “vector autoregressions” to estimate the impact of changes in the policy interest rate. Unfortunately, interest rates are not monetary policy. You can try to estimate the part of interest rate movements that are “exogenous” and hence reflective of monetary policy, but in practice this is almost impossible.

So after four decades of VAR studies by the best and the brightest in the economics profession, where are we? Here’s the abstract to a promising new paper by Christian Wolf of Princeton University:

I argue that the seemingly disparate findings of the recent empirical literature on monetary policy transmission are in fact all consistent with the same standard macro models. Weak sign restrictions, which suggest that contractionary monetary policy if anything boosts output, present as policy shocks what actually are expansionary demand and supply shocks. Classical zero restrictions are robust to such misidentification, but miss short-horizon effects. Two recent approaches – restrictions on Taylor rules and external instruments – instead work well. My findings suggest that empirical evidence is consistent with models in which the real effects of monetary policy are larger than commonly estimated.

Many previous VAR studies have found monetary shocks to have relatively weak effects on the economy.  But these shocks tend to conflate reductions in interest rates with expansionary monetary policy.  In fact, in the vast majority of cases a decline in interest rates reflects slower growth in aggregate demand, not easier money.  So it’s no surprise that lower interest rates don’t seem to provide much of a boost to the economy.  Lower interest rates are not easier money:

Sign restrictions, as in Uhlig (2005), are vulnerable to expansionary demand and supply shocks “masquerading” as contractionary monetary policy shocks, which then seemingly boost – rather than depress – output. . . . For monetary policy transmission, my results encouragingly suggest that, first, recent advances in identification effectively address the masquerading problem, and second, even small sets of macro observables may carry a lot of information about policy shocks. Viewed in this light, I conclude that existing empirical work quite consistently paints the picture of significant, medium-sized effects of monetary policy on the real economy.

This “masquerading problem” is sometimes called the identification problem; it’s what happens when people engage in reasoning from a price change.
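The masquerading problem can be shown with a toy dataset (entirely hypothetical numbers): if every episode with rising interest rates is labeled “tight money,” expansionary demand shocks contaminate the sample and the estimated output effect of policy is biased toward zero:

```python
# Each episode: (true shock, interest rate change, output change).
# Hypothetical effects: a true tightening raises rates and cuts output;
# an expansionary demand shock also raises rates but boosts output.
episodes = [
    ("policy tightening", +1.0, -2.0),
    ("policy tightening", +1.0, -2.0),
    ("demand expansion",  +1.0, +2.0),
    ("demand expansion",  +1.0, +2.0),
]

# Naive identification: "rates rose, so money must have been tight."
naive_sample = [dy for _, dr, dy in episodes if dr > 0]
naive_effect = sum(naive_sample) / len(naive_sample)

# Correct identification uses the true shock labels.
true_sample = [dy for shock, _, dy in episodes if shock == "policy tightening"]
true_effect = sum(true_sample) / len(true_sample)

print(naive_effect, true_effect)  # 0.0 vs -2.0: policy looks far weaker than it is
```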

After forty years, economists have yet to develop a generally accepted VAR model of the monetary policy transmission mechanism.  Like fusion power, there’s a small chance that it may happen some day.  But there’s almost no chance I will live long enough to see this approach yield useful results.  The profession would be much better off switching to an approach that uses NGDP growth expectations as the primary indicator of monetary policy shocks, and then develops models to estimate those expectations using real time data from asset markets.  Interest rates are not a useful variable when analyzing monetary policy.


Monetary Policy for all? Inequality and the Conduct of Monetary Policy

By Niels-Jakob Hansen, Alessandro Lin, and Rui C. Mano Inequality in both advanced economies and emerging markets has been on the rise in recent decades. The COVID-19 pandemic has exacerbated and raised awareness of disparities between the rich and poor. Fiscal policies and structural reforms are long known to be powerful mitigators of inequality. But […]


Stephen Williamson on NGDP level targeting

Over at TheMoneyIllusion I did a post discussing Steve Ambler’s presentation on NGDP targeting, and Nick Rowe’s subsequent discussion. Now I’d like to address some concerns expressed by Stephen Williamson.

One concern is that NGDPLT (or average inflation targeting) might be rather complicated to communicate. I believe that’s true of average inflation targeting, but not NGDPLT. Williamson mentioned that each year there would be a different NGDP target growth rate, depending on whether current NGDP was above or below the target path. But I believe it’s a mistake to think in terms of growth rates when evaluating a level targeting regime.

Consider the analogy of exchange rates under the Bretton Woods regime. The British pound was targeted at $2.80, plus or minus 1%. In my view that’s a very simple and easy to understand system. But if you consider the exchange rate in terms of growth rates, it might seem very complicated. Thus if the current exchange rate is $2.82, (i.e. slightly overvalued) then the Bank of England might be said to “target a negative growth rate of the exchange rate” until the pound returns to $2.80. The opposite would be true if the exchange rate were currently $2.78. And how long would it take to return the exchange rate to the target?

I favor a system where the Fed targets 12-month forward NGDP at exactly the rate implied by a predetermined target path, growing at 4% per year. The focus should not be on current NGDP, or the expected growth rate over the next 12 months; rather, the focus should always be on the expected level of NGDP in 12 months. And that expected value should always be equal to the target value. In other words, monetary policymakers should “target the forecast”. Over time, I believe that people would begin to think in level terms, just as they did with exchange rates under Bretton Woods.
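The level-versus-growth-rate point can be made concrete. The base level and the size of the shortfall below are hypothetical; the 4% path is the one proposed above:

```python
def ngdp_target(base: float, years: int, growth: float = 0.04) -> float:
    """Target NGDP level on a path growing 4% per year from a base level."""
    return base * (1 + growth) ** years

# Hypothetical: the path starts at 100, and one year in, actual NGDP
# is 98 -- six points below the year-1 target of 104.
base, actual = 100.0, 98.0
forward_target = ngdp_target(base, 2)  # 12-month-forward target: ~108.16

# Under level targeting, policy is set so expected NGDP in 12 months
# equals the path; the implied growth rate is well above 4%.
required_growth = forward_target / actual - 1
print(f"Required 12-month NGDP growth: {required_growth:.1%}")  # ~10.4%
```

The growth rate the central bank “targets” thus varies year to year, but only because the level target is fixed, just as the Bank of England’s interventions varied while the $2.80 peg stayed constant.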

If 12 months is too short (arguably true in the special case of Covid-19) then use a 24-month forward target.  But even with inflation targeting there will be special cases, as with severe supply shocks.

Williamson also argued that NGDPLT might lead central banks to adopt a “lower for longer” policy after an event like 2008, as both RGDP and inflation would be below trend.  In contrast, with an inflation target the central bank need not make up for depressed RGDP.  (Actually, with the Fed’s dual mandate that distinction is less clear, but I’d like to focus on some other issues.)

I have two responses to the questions raised by Williamson:

1.  I believe it’s incorrect to treat the severe NGDP gap of 2008-09 as a given.  In my view, the big drop in NGDP was mostly caused by the Fed’s unwillingness to adopt a policy of 5% NGDP level targeting in 2007.  Had such a policy been in place, the drop in NGDP during 2008-09 would have been far smaller.  This is consistent with models developed by Michael Woodford and others, where current levels of aggregate demand (NGDP) are heavily dependent on expected future aggregate demand, i.e. expected future path of monetary policy.  In 1933, FDR raised current gold prices by promising to raise future gold prices.  Then in 1934, FDR did raise the price of gold from $20.67 to $35/oz.  If the Fed promises to quickly push NGDP back to the 5% growth trend line, then NGDP will fall less sharply below the trend line in the short run.  (Of course that’s not to say there wouldn’t be a recession in 2008-09, but it would have been considerably milder.)

2.  I believe it is a mistake to assume that if a central bank did X and fell short of its inflation and/or NGDP goal, it would have had to do 2X or 3X to have hit the goal.  My claim sounds counterintuitive, but in fact my argument has an affinity to NeoFisherian ideas that have been developed by Williamson.  The very low interest rates of 2009-15 to some extent reflected the low levels and growth rates of nominal GDP during this period.  With higher NGDP and/or faster expected growth in NGDP, the equilibrium (natural) interest rate would have been higher in nominal terms, perhaps allowing the central bank to hit its policy target with a higher policy interest rate.

In 2001, Lars Svensson proposed a “foolproof” method for Japan to escape its liquidity trap.  Svensson’s proposal called for a one-time depreciation of the yen, but the most important part of the proposal was that the yen would subsequently be pegged to the US dollar.  In the long run, this would raise Japan’s inflation close to US levels, due to purchasing power parity.  But due to interest parity it would also raise Japan’s nominal interest rates up to US levels, which were still well above zero back in 2001.  I believe Williamson would recognize Svensson’s proposal as being NeoFisherian in spirit, even though Svensson himself is a New Keynesian.  Svensson reassured his readers that the policy would be expansionary “in spite of” higher Japanese nominal interest rates, but a NeoFisherian would say there’s no need to say “in spite of”.
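The parity logic behind Svensson’s proposal can be written down directly. The rate and expected-appreciation numbers are my own stylized illustration, not Svensson’s figures:

```python
def uncovered_interest_parity(foreign_rate: float,
                              expected_depreciation: float) -> float:
    """Domestic nominal rate implied by uncovered interest parity:
    i = i* + expected depreciation of the domestic currency."""
    return foreign_rate + expected_depreciation

# Hypothetical 2001-style numbers: US short rate of 4%.
# Floating yen expected to appreciate 3.5%/yr: Japanese rates near zero.
print(uncovered_interest_parity(4.0, -3.5))  # 0.5

# Credible dollar peg: expected depreciation is zero, so Japanese
# rates rise to the US level -- higher rates, yet easier money.
print(uncovered_interest_parity(4.0, 0.0))   # 4.0
```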

Monetary policy affects interest rates in two ways.  First, policy can target a short-term interest rate.  Second, a change in the policy regime can impact the equilibrium or natural rate of interest.  In monetary policy models, the policy stance is often described as the difference between the policy interest rate and the equilibrium rate.  My argument is that a shift to a 5% NGDPLT target path in 2007 would have radically boosted NGDP growth expectations during 2008.  Actual NGDP expectations fell sharply in the second half of 2008, even for 2009 and 2010.  With NGDPLT, expectations for future NGDP would have held up better during 2008, and thus the equilibrium interest rate would have been higher.

This does not mean that the Fed could have gotten by with a higher target interest rate.  We know that the actual target rate was too high to maintain adequate NGDP growth (or even 2% inflation) in 2009.  So we needed a lower interest rate relative to the equilibrium rate. Whether we needed a lower interest rate in absolute terms is uncertain.  If a switch to NGDP level targeting had raised the equilibrium interest rate in 2008, then the effect on the actual short-term interest rate would be ambiguous.  It depends on whether the Keynesian effect or the NeoFisherian effect was dominant.
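The stance definition used in this argument is easy to state in code; the rate levels are hypothetical:

```python
def policy_stance(policy_rate: float, equilibrium_rate: float) -> float:
    """Stance of monetary policy as the gap between the policy rate and
    the equilibrium (natural) rate: positive = tight, negative = easy."""
    return policy_rate - equilibrium_rate

# Same 2% policy rate under two regimes (hypothetical numbers):
# weak NGDP expectations depress the equilibrium rate to 1%...
print(policy_stance(2.0, 1.0))   # +1.0: tight money
# ...while a credible NGDPLT regime raises it to 3%.
print(policy_stance(2.0, 3.0))   # -1.0: easy money, at the same policy rate
```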

We should never assume that if low rates and lots of QE failed, then even lower rates and even more QE would have been needed to hit the target.  That’s one option, but regime change is another.  The Australian central bank did not cut rates to zero, and didn’t do any QE, and yet completely avoided recession in 2008-09.  A credible and successful policy of maintaining adequate NGDP growth expectations in the long run is the best way of keeping equilibrium interest rates above zero, and avoiding the need to do QE.

 


Congress >>> economics profession

The Joint Economic Committee in Congress put out its annual report on the economy, written by Alan Cole. My overall impression is that the JEC has a better grasp of real world macroeconomics than many people at top 10 econ departments.

Let’s start with their diagnosis of the Great Recession:

Unfortunately, Federal Reserve policy from 2007-2018 erred too far towards curbing the growth of nominal spending—a stance known colloquially as “too tight” monetary policy. The result was a long, persistent “output gap,” or shortfall in GDP relative to what the economy could have produced with more ample nominal spending. While not the only policy problem of the time period, the output gap was a clear consequence of the Federal Reserve’s choice of policy anchor and its level of commitment to the anchor.

The mass unemployment that followed the 2008 financial crisis was an economic disaster whose effects will be felt for years to come. Americans lost trillions of dollars of income and tens of millions of years of work. The job losses were also concentrated among disadvantaged groups, increasing inequality along the dimensions of both education and race.

This era is useful to study because it can inform policy in future recessions, including, to some extent, the current one. A well-chosen and consistent monetary policy anchor will not solve every problem—and certainly not ones directly related to public health—but it can facilitate the execution of financial and business contracts and shore up the social contract by lowering uncertainty about the future.

How many macroeconomists understand that the Great Recession was caused by a tight money policy by the Fed?  You could almost count them on one hand.

The report cites Kevin Erdmann’s excellent book on the housing crisis:

[I]n his book Shut Out, Kevin Erdmann notes that the Federal Reserve as a whole would issue statements describing the weakness in the housing market as a “correction,” suggesting a kind of normative view that housing prices should fall.31 The Federal Reserve kept this language even well into the decline of employment measures. The focus on moral hazard and housing prices largely detracted from attention to an ailing labor market.

Most economists believe the Fed was “doing all it could” in 2008.  The JEC report understands that actual policy was quite schizophrenic, both expansionary and contractionary at the same time:

Taken separately, the bailout and interest rate decisions are coherent. But together, it is difficult to square them. As the Federal Reserve told it, spending enabled by emergency below-market-rate liquidity injections to Bear Stearns was good spending that helps Main Street, while spending enabled by a federal funds rate of (for example) 1.75 percent would have been bad spending that would spur inflation.

This pattern of easier credit for troubled financial institutions but tighter credit than necessary for the rest of us continued throughout 2008: as George Selgin documents, the Federal Reserve actually took care to offset its emergency operations’ effect on overall demand. Increases in credit to troubled banks were matched with corresponding decreases in credit elsewhere in the system.34 In Bernanke’s words, this was done to “keep a lid on inflation.”35

One tool in this offsetting process was interest on excess reserves (IOER). In October of 2008, the Federal Reserve began paying IOER.36 This policy induced banks to hold reserves and earn interest from the government rather than lending to private-sector individuals or institutions. This constrained credit for the private sector, outside of the banks that were rescued with below-market-rate lending.37

It’s as if the Fed simultaneously believed the economy faced a danger of too little spending and too much inflation—which is literally impossible!

The report also correctly describes how the Fed completely screwed up its forward guidance:

But there was a problem with forward guidance in the 2010s: Federal Reserve communications often described a hawkish reaction function—an inclination to run monetary policy relatively tightly.

Consider the Federal Reserve Board’s projections from January 2012,40 when interest rate predictions (often known as “dot plots,” for the way they were frequently charted) had just been issued for the first time. The projections told us that the median participant in the exercise believed that 2014 was the appropriate year for interest rates to rise. They also told us some other things about 2014: that participants believed Core PCE inflation would be below-target in the range of 1.6 to 2.0 percent, and that participants believed the unemployment rate would be in the range of 6.7 to 7.6 percent.

Put together, these predictions paint a clear picture of extraordinarily tight monetary policy. They told us that a Federal Reserve faced with an economy with elevated unemployment and below-target inflation would act to curb spending by tightening credit.

There’s also a recognition that the unemployment rate is often a useful warning sign of recessions—the Sahm Rule:

Recent work by the Federal Reserve has affirmed this view of employment measures. Economist Claudia Sahm devised an algorithm colloquially known as the “Sahm Rule,” which treats sudden rises in the unemployment rate as reliable early warning signs of a contraction.44 While the Sahm Rule is based on the official unemployment rate for simplicity’s sake and to facilitate comparability across time, it is likely that other employment measures, such as payroll surveys or unemployment claims, could be used as additional data points to scan for early signs of recession.
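The Sahm Rule itself is simple enough to sketch in a few lines of code. Its standard formulation: a recession signal fires when the three-month moving average of the unemployment rate rises at least 0.5 percentage points above the minimum of that average over the preceding twelve months. The sketch below uses synthetic data, not the official real-time series:

```python
# Minimal sketch of the Sahm Rule recession indicator (not the official
# FRED implementation). Unemployment rates are monthly, in percent.

def three_month_avg(u, t):
    """Three-month moving average of the unemployment rate ending at month t."""
    return sum(u[t - 2:t + 1]) / 3.0

def sahm_signal(u, t):
    """True if the 3-month average at month t is at least 0.5 percentage
    points above its minimum over the preceding 12 months.
    Requires t >= 14 so all moving averages are well-defined."""
    current = three_month_avg(u, t)
    low = min(three_month_avg(u, s) for s in range(t - 12, t))
    return current - low >= 0.5

# Synthetic example: unemployment flat at 4.0% for 20 months, then jumps to 5.0%.
u = [4.0] * 20 + [5.0] * 3
print(sahm_signal(u, 19))  # False: no rise yet
print(sahm_signal(u, 22))  # True: 3-mo avg is 1.0 pp above its 12-mo low
```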

Most economists put too much weight on interest rates as an indicator of the stance of monetary policy, which led them to (wrongly) assume that policy was accommodative during 2008.  The JEC report understands that rates are not a good policy indicator:

FOMC statements have frequently identified low interest rates as a sign of accommodative policy.

This is not always and everywhere correct. Neither is the converse: that high interest rates are a sign of tight policy. As Milton Friedman observed in his famous American Economic Association presidential address:

As an empirical matter, low interest rates are a sign that monetary policy has been tight—in the sense that the quantity of money has grown slowly; high interest rates are a sign that monetary policy has been easy—in the sense that the quantity of money has grown rapidly.45

This observation—made in 1968—has largely held up, and in fact predicted to some degree both the late 1970s (when, despite high interest rates, inflation soared to record levels) and the early 2010s (when, despite low interest rates, inflation remained persistently below target and unemployment remained elevated).

They suggest that NGDP growth is a superior policy indicator:

Scott Sumner phrases it in an improved and more modern formulation.46

Interest rates are not a reliable indicator of the stance of monetary policy. On any given day, an unexpected reduction in the fed funds target is usually an easing of policy. However, an extended period of time when interest rates are declining usually represents a tightening of monetary policy. That’s because during periods when interest rates are falling, the natural rate of interest is usually falling even faster (due to slowing NGDP growth), and vice versa.

The natural rate of interest is another economic abstraction that is hard to pin down precisely, but Sumner can be loosely translated as follows: during periods where the central bank is cutting interest rates, the risk-adjusted attractiveness of private-sector investments is falling even faster, so savers are still crowding into government bonds even at the lower rates.

Sumner considers the growth rate of NGDP a better guide to the stance of monetary policy. A policy that enables an acceleration in spending—however it is implemented—is loose, and one that forces a deceleration or contraction—however it is implemented—is tight. This formulation—based on effects—seems more appropriate than a measure based on interest rates alone.

The report then explains why measuring the policy stance correctly is so important:

Why are the semantics here important? First, because effects matter. Monetary policy stances are named after their intended effects; loose or accommodative or expansionary monetary policy should presumably be loosening, accommodating, or expanding something. Tight or contractionary policy should presumably be tightening or contracting something.

Second, semantics are important because names have an effect on the policy’s politics. The Federal Reserve in 2015 had essentially achieved some relatively-normal results for years: steady improvement in the employment rate, steady (though below-target) core inflation, and steady four percent growth in NGDP, which is also a normal result. However, it labeled these policies “accommodative.” This lent credibility to the plausible-sounding-but-wrong critique that the low interest rates at the time were “artificial” in a way that higher interest rates would not have been. It put the FOMC under pressure to “normalize” policy by tightening, which it did by the end of the year.

Third, a results-based measure of the stance of monetary policy, such as NGDP growth, appropriately captures the effects of policies that do not involve the setting of short-term interest rates: for example, quantitative easing or forward guidance.

The report also contains excellent policy suggestions:

A number of market indicators can help the Federal Reserve make good predictions about the future. Mechanically tying Federal Reserve actions to market data is largely not a reasonable policy option, but markets can help the Federal Reserve predict the consequences of policy.

And:

The dual mandate leaves much room for ambiguity in terms of how to weight unemployment and inflation concerns; however, it is possible to integrate inflation and unemployment data into a single mandate that implicitly contains both components. The most promising methods for this begin with the observation that inflation is a price, and employment is a quantity. Therefore, they look to measures of price multiplied by quantity.

Fortunately, many such metrics exist. One of the most obvious of these is nominal GDP. The idea of targeting nominal GDP originated with monetary economist Bennett McCallum,48 but also has been advocated by other economists such as Scott Sumner, Christina Romer,49 Jan Hatzius,50 and Joshua Hendrickson.51 While there are some technical issues implementing a nominal GDP target in real time, economist David Beckworth, another advocate, proposes methods to predict nominal GDP more quickly, including the use of new data sources or futures markets.52 At a minimum, stable nominal GDP growth is an excellent medium- and longer-run measure of central bank performance.

Level targeting is especially important:

Level targeting is perhaps the single most effective zero lower bound policy, and likely has benefits even outside of the zero lower bound. The idea of “level targeting” is to have a consistent long-run growth path in mind for the target variable, not just a growth rate to target anew each period.

There are two strong reasons to believe a level target would be effective. The first is that level targets would do a better job of anchoring expectations for long-term contracts, such as mortgages. For example, it is considerably easier for a mortgage lender to operate if she has at least a general sense of what nominal incomes in America will look like in the 30th year of the loan. Will they double? Will they triple? A nominal income level targeting regime can actually provide an answer to that question, making long-term contracts considerably easier to write. Similarly, if a pension plan were interested in implementing a cost-of-living adjustment to benefits based on inflation, it would be easy to make long-run projections under an inflation level targeting regime.

The second reason for believing in the effectiveness of a level target is that a level target constitutes a kind of forward guidance, which—through its impact on expectations—can actually work backwards in time. In promising a steady long-run path, it encourages people to invest more steadily in the present, knowing that over the long run, rough patches will be smoothed out.

Nominal GDP level targeting, or NGDPLT, is one of the most popular uses of the level targeting idea. Level targeting dovetails particularly well with NGDP targeting because it turns the target into a long-run goal. In a level-targeting regime, short-run blips like revisions to GDP data are understood to be less consequential; instead the central bank maintains focus on keeping the long-run path steady.
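The difference between level targeting and ordinary growth-rate targeting can be made concrete with a toy simulation. The 5% target path and the one-year shortfall below are hypothetical numbers, used only to show the mechanics:

```python
# Stylized comparison of growth-rate targeting vs level targeting after a
# one-year NGDP shortfall. Hypothetical numbers: 5% target growth, and in
# year 2 actual NGDP grows only 1%.

TARGET_GROWTH = 0.05

def growth_rate_target_path(start, shortfall_year, shortfall_growth, years):
    """Growth-rate targeting 'lets bygones be bygones': after the miss,
    the bank resumes 5% growth from the lower level."""
    level, path = start, [start]
    for y in range(1, years + 1):
        g = shortfall_growth if y == shortfall_year else TARGET_GROWTH
        level *= 1 + g
        path.append(level)
    return path

def level_target_path(start, shortfall_year, shortfall_growth, years):
    """Level targeting aims back at the original 5% path, so the year after
    the miss the bank engineers catch-up growth."""
    target = [start * (1 + TARGET_GROWTH) ** y for y in range(years + 1)]
    level, path = start, [start]
    for y in range(1, years + 1):
        level = level * (1 + shortfall_growth) if y == shortfall_year else target[y]
        path.append(level)
    return path

gp = growth_rate_target_path(100.0, 2, 0.01, 4)
lp = level_target_path(100.0, 2, 0.01, 4)
print(round(gp[-1], 2))  # 116.92: permanently below the original path
print(round(lp[-1], 2))  # 121.55: back on the original 5% path
```

After the year-2 miss, the growth-rate regime leaves NGDP permanently below the original path, while the level-targeting regime returns to it the following year—which is exactly the property that anchors expectations for long-term contracts.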

Honestly, this report is far better than 90% of the articles one reads in top-level economics journals.  It’s fine to be able to write down highly mathematical models of the economy, but one also needs an intuitive grasp of which economic concepts are relevant to the sort of macroeconomic problems faced by real-world economies.  Alan Cole definitely understands the role of monetary policy in business cycles.
