Rick Rieder argues that the Fed’s price stability mandate has been fulfilled and that today’s drivers of inflation are being misunderstood. Ultimately, the Fed’s mandate should evolve away from inflation targets in favor of other measures.
In the U.S. today, despite the lowest levels of inflation volatility in the last 60 years, policy makers and market participants alike place an excessive focus on the Federal Reserve’s 2% inflation target, in our view. To understand how we’ve arrived here, we think it useful to provide context with a brief history of the Fed’s inflation mandate, and from there we take a fresh look at what we think are the true drivers of inflation. Finally, assessing the forward-looking balance of risks informs our view that the Fed should place greater emphasis on targeting other metrics, such as nominal GDP, which, though an imperfect indicator of well-being, may be more indicative of overall economic health.
A brief history of the Fed’s inflation mandate
From 1965 to 1985, inflation in the United States represented a much different risk than it does today. Inflation was rising precipitously as baby boomers began entering the workforce and women headed to work in much greater numbers, and the Fed was determined to contain it in any way possible. Further, the oil price shocks of 1973 and 1979, along with other factors, exacerbated both the absolute level of inflation and the volatility of price changes. Given the very high outright level and volatility of inflation in this era (see first graph), in 1977 Congress tasked the Fed to “promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” As the population growth influence peaked in 1980, inflation calmed. Yet as the children of the baby boomers hit the workforce, another (smaller) wave of population growth saw inflation re-accelerate toward 6% in the late 1980s, which sparked unfounded fears of another 1970s experience.
As a result, the Fed implicitly adopted a 1.5% to 2.0% inflation target range in the early 1990s in a renewed effort to tame prices. As population growth continued to cool, inflation slowed alongside it, until 2008, when inflation volatility spiked again, this time in the form of disinflation; so in 2012 the Fed decided to adopt an official target at the high end of its range: 2.0%. There is no magic to 2% exactly; rather, the target is simply meant to stand in as a proxy for stable prices. Moreover, when we consider that the volatility of prices is as low (stable) as it has been in the past 60 years, alongside the low absolute levels of inflation, we think the Fed has achieved price stability for the time being.
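To make the volatility point concrete, the sketch below (our own illustration, not BlackRock’s methodology) shows one common way to measure the “volatility of prices”: the rolling standard deviation of year-over-year CPI inflation. The function name, window choice, and synthetic data are assumptions for demonstration; in practice one would feed in actual monthly CPI levels (for example, the FRED series CPIAUCSL).

```python
import numpy as np
import pandas as pd

def inflation_volatility(cpi_levels: pd.Series, window_months: int = 60) -> pd.Series:
    """Rolling standard deviation (in percentage points) of year-over-year inflation."""
    yoy_inflation = cpi_levels.pct_change(periods=12) * 100.0  # YoY % change in the price level
    return yoy_inflation.rolling(window=window_months).std()

# Illustrative stand-in data so the sketch runs on its own: 60 years of
# monthly CPI levels with slowly drifting inflation plus noise.
rng = np.random.default_rng(0)
dates = pd.date_range("1959-01-01", periods=720, freq="MS")
monthly_inflation = 0.002 + 0.001 * np.sin(np.linspace(0, 6, 720)) + rng.normal(0, 0.001, 720)
cpi = pd.Series(100 * np.cumprod(1 + monthly_inflation), index=dates)

vol = inflation_volatility(cpi)  # 5-year rolling volatility of YoY inflation
print(vol.dropna().tail())
```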
A fresh look at the true drivers of inflation
The monetary policy decisions described in the prior section have been dominated by a long-standing belief in setting monetary policy based on the Phillips Curve; however, we see little evidence that low unemployment explains a majority of higher inflation. Rather, we believe that demographic trends, technological innovation, and global supply chains have a much greater influence on general price level growth today.
Some hold the view that higher interest rates and higher unemployment were the factors primarily responsible for the lower rates of inflation in recent history, but we think that population growth has a much stronger relationship to future inflation and explains most of its trajectory over the past 60 years (and for hundreds of years before that). The same demographic models that would have informed policy makers well in the past now suggest inflation is likely to stay in a healthy range between 1% and 3% for the foreseeable future.
Meanwhile, technology continues to create disinflation/deflation in many areas of the economy (regardless of the stance of monetary policy), and broadly we see that as a good thing, as it tends to be associated with higher productivity. Additionally, the prevalence of technological advancement makes inflation harder to measure: inflation must be calculated by adjusting for changes in quality (so-called hedonic adjustment), and these adjustments are, by definition, ambiguous and imprecise. For this reason, we see nominal GDP growth targeting as a cleaner, more reliable practice for policy makers. A case in point: if an economy is exhibiting a 5% nominal growth rate, the central bank might completely flip its policy stance depending on whether that figure reflects 1) 2% price gains and 3% real growth or 2) a -1% price change and 6% real growth. The interpretation of prices should be contingent on the real growth that accompanies them; hence, the nominal target makes more sense to us.
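To make that arithmetic explicit, the hedged sketch below (our own illustration, not a Fed or BlackRock model) runs the two scenarios side by side: a simple inflation-targeting rule sees very different gaps in the two cases, while a nominal-GDP rule sees the same 5% nominal growth in both. The 2% inflation target and 5% nominal-growth target are illustrative assumptions.

```python
# Stylized comparison of an inflation-targeting rule vs. a nominal-GDP rule.
INFLATION_TARGET = 2.0    # percent, illustrative
NOMINAL_GDP_TARGET = 5.0  # percent, illustrative

scenarios = {
    "2% inflation + 3% real growth":  {"inflation": 2.0,  "real_growth": 3.0},
    "-1% inflation + 6% real growth": {"inflation": -1.0, "real_growth": 6.0},
}

for name, s in scenarios.items():
    nominal_growth = s["inflation"] + s["real_growth"]  # approximate growth identity
    inflation_gap = s["inflation"] - INFLATION_TARGET   # what an inflation-targeting rule reacts to
    ngdp_gap = nominal_growth - NOMINAL_GDP_TARGET      # what a nominal-GDP rule reacts to
    print(f"{name}: nominal growth {nominal_growth:+.1f}%, "
          f"inflation gap {inflation_gap:+.1f} pp, nominal-GDP gap {ngdp_gap:+.1f} pp")
```

In this toy setup the inflation-targeting rule would call for easing in the second scenario even though the economy is growing 6% in real terms, while the nominal-GDP rule treats both scenarios identically.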
We also agree that globalization has played a role in reducing supply-chain costs across the world, and thus inflation. It’s notable that in 1979 Deng Xiaoping opened the Chinese economy and the U.S. established diplomatic relations with China. Despite modest shifts toward reduced globalization today, due in part to recent trade policies, global supply chains are likely to remain quite integrated. Lastly, it’s important to note that the underlying contributors to inflation have evolved meaningfully over time. In 1960, for instance, just 45% of consumption was in the service sectors (55% goods), but today that figure is 70%, illustrating the well-known shift from a goods-oriented economy to a services-dominated one. Under these conditions, inflation naturally becomes more stable and less sensitive to interest rates.
A way forward for monetary policy
Not only do we believe that setting monetary policy based on the Phillips Curve is too backward-looking, but we also think that setting policy in this manner has likely exacerbated cyclical swings and hurt employment in years past. By its own models, the Fed currently indicates it will have to tighten policy beyond neutral, and increase unemployment, so that inflation does not overheat. That plan would see policy actively reduce employment in the short run, all to protect against a theoretical hyperinflation imagined through the lens of a very weak Phillips Curve model, with eyes turned away from much stronger evidence to the contrary.
Relative to our demographic models, we can see that monetary policy was in general too loose prior to 1980, too tight from 1980 to 2000 (in the wake of Volcker’s dramatic policy shift), again too loose pre-crisis, and today appears to be close to neutral (see second graph). At this moment, however, we find ourselves in the midst of a “triple-barreled tightening”: higher policy rates, Fed balance sheet reduction, and an unprecedented amount of U.S. Treasury issuance. The last of these can appropriately be thought of as a form of tightening of financial conditions, as it can draw capital away from more productive private-sector uses.
In our estimation, the Fed tends to be anticipatory during easing cycles and more reactionary during tightening cycles. Being reactionary today would potentially be a big mistake, given that financial conditions become restrictive much more rapidly during a tightening cycle than during an easing cycle. In turn, this can tangibly stunt growth in the broader economy, as we are seeing in many of its interest-sensitive parts today (such as housing). As we witnessed at year-end, the stress is already showing up in the economy and markets; so with rates near neutral in our view, we strongly suggest pausing on any further policy rate increases and ending balance sheet reduction in order to adopt a closer-to-neutral policy stance.
Rick Rieder, Managing Director, is BlackRock’s Chief Investment Officer of Global Fixed Income and is a regular contributor to The Blog. Russell Brownback, Managing Director, is Head of Global Macro positioning for Fixed Income, and he contributed to this post.
© BlackRock