New Series on Income Inequality

Interested laypersons and members of the general public who are unfamiliar with econometrics will undoubtedly find the following passage hard to swallow, given the language. Bear with it until the end, and I will translate for you.

Excerpt from The ET Interview: Professor David F. Hendry

3.5 Money Demand

You have analyzed UK broad money demand on both quarterly and annual data, and quarterly narrow money demand for both the United Kingdom and the United States. In your first money-demand study [29], you and Grayham Mizon were responding to work by Graham Hacche (1974) at the Bank of England. How did that arise?

“Tony Courakis (1978) had submitted a comment to the Economic Journal criticizing Hacche for differencing data in order to achieve stationarity. Grayham Mizon and I proposed testing the restrictions imposed by differencing as an example of Denis’s new common-factor tests–later published as Sargan (1980)–and we developed an equilibrium-correction representation for money demand, using the Bank’s data. The common-factor restriction in Hacche (1974) was rejected, and the equilibrium-correction term in our model was significant.”

So, you assumed that the data were stationary, even though differencing was needed.

“We implicitly assumed that both the equilibrium-correction term and the differences would be stationary, even though the concept of cointegration did not yet exist; and we assumed that the significance of the equilibrium-correction term was equivalent to rejecting the common factor from differencing. Also, the Bank study was specific-to-general in its approach, whereas we argued for general-to-specific modeling, which was the natural way to test common-factor restrictions using Denis’s determinantal conditions. Denis’s COMFAC algorithm was already included in GIVE, although Grayham’s and my Monte Carlo study of COMFAC only appeared two years later in [34].”
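For readers who want the algebra behind that exchange, here is a generic textbook sketch of the common-factor point, in standard notation rather than the exact specification of [29]:

```latex
% An autoregressive-distributed-lag model ADL(1,1):
%   y_t = a + b_0 x_t + b_1 x_{t-1} + c y_{t-1} + e_t
% can be rewritten exactly in equilibrium-correction form:
\begin{equation*}
  \Delta y_t = a + b_0\,\Delta x_t
             + (c-1)\Bigl(y_{t-1} - \tfrac{b_0+b_1}{1-c}\,x_{t-1}\Bigr) + e_t .
\end{equation*}
% Modeling in differences alone, as Hacche (1974) did, imposes the common
% factor (1-L) on the dynamics, i.e. the restrictions c = 1 and b_1 = -b_0,
% under which the equilibrium-correction term vanishes. A significant
% coefficient (c-1) on that term therefore rejects the restriction that
% differencing imposes.
```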

Did Courakis (1978) and [29] change modeling strategies in the United Kingdom? What was the Bank of England’s reaction?

“The next Bank study–of M1 by Richard Coghlan (1978)–considered general dynamic specifications, but they still lacked an equilibrium-correction term. As I discussed in my follow-up [31], narrow money acts as a buffer stock for agents’ expenditures: agents hold target ratios of money to expenditure, and deviations from those targets prompt adjustment. That target ratio should depend on the opportunity costs of holding money relative to alternative financial assets and to goods, as measured by interest rates and inflation respectively. Also, because some agents are taxed on interest earnings and others are not, the Fisher equation cannot hold.”
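The tax point can be put in one line. In standard notation (my gloss, not the interview’s), the Fisher equation links the nominal rate i, the real rate r, and expected inflation:

```latex
% Untaxed agent: the nominal rate i delivers real return r given
% expected inflation pi^e:
\begin{equation*}
  i \approx r + \pi^{e} .
\end{equation*}
% An agent taxed at rate tau on interest income keeps only (1-tau)i,
% so that agent instead requires
\begin{equation*}
  (1-\tau)\,i \approx r + \pi^{e}
  \quad\Longrightarrow\quad
  i \approx \frac{r + \pi^{e}}{1-\tau} .
\end{equation*}
% One market interest rate cannot satisfy both conditions at once, so
% with heterogeneous tax rates the Fisher equation cannot hold for all
% agents simultaneously.
```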

So your interest rate measure did not adjust for tax.

“Right. [31] also highlighted the problems confronting a simple-to-general approach. Those problems include the misinterpretation of earlier results in the modeling sequence, the impossibility of constructively interpreting test rejections, the many possible expansion paths, the unknown stopping point, the collapse of the strategy if later mis-specifications are detected, and the poor properties that result from stopping at the first non-rejection–a criticism dating back to Anderson (1962).

A key difficulty with earlier UK money-demand equations had been parameter nonconstancy. However, my equilibrium-correction model was constant over a sample with considerable turbulence after Competition and Credit Control regulations in 1971.”

[31] also served as the starting point for a sequence of papers on UK and US M1. You returned to modeling UK M1 again in [60] and [94].

“That research resulted in a simple representation for UK M1 demand, despite a very general initial model, with only four variables representing opportunity costs against goods and other assets, adjustment costs, and equilibrium adjustment.”

In 1982, Milton Friedman and Anna Schwartz published their book Monetary Trends in the United States and the United Kingdom, and it had many potential policy implications. Early the following year, the Bank asked you to evaluate the econometrics in Friedman and Schwartz (1982) for the Bank’s panel of academic consultants, leading to Hendry and Ericsson (1983) and eventually to [93].

“You were my research officer then. Friedman and Schwartz’s approach was deliberately simple-to-general, commencing with bivariate regressions, generalizing to trivariate regressions, etc. By the early 1980s, most British econometricians had realized that such an approach was not a good modeling strategy. However, replicating their results revealed numerous other problems as well.”

I recall that one of those was simply graphing velocity.

“Yes. The graph in Friedman and Schwartz (1982, p. 178, Chart 5.5) made UK velocity look constant over their century of data. I initially questioned your plot of UK velocity–using Friedman and Schwartz’s own annual data–because your graph showed considerable nonconstancy in velocity. We discovered that the discrepancy between the two graphs arose mainly because Friedman and Schwartz had plotted velocity on an axis spanning 1 to 10, whereas UK velocity itself varied only between 1 and 2.4. Figure 1 reproduces the comparison.

Testing Friedman and Schwartz’s equations revealed a considerable lack of congruence. Friedman and Schwartz phase-averaged their annual data in an attempt to remove the business cycle, but phase averaging still left highly autocorrelated, nonstationary processes. Because filtering (such as phase averaging) imposes dynamic restrictions, we analyzed the original annual data. Our paper for the Bank of England panel started a modeling sequence, with contributions from Andrew Longbottom and Sean Holly (1985) and Alvaro Escribano (1985).”
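The graphing episode is easy to reproduce. Below is a minimal Python sketch using synthetic data (not Friedman and Schwartz’s actual series): a series confined to roughly 1 to 2.4 looks flat on an axis running from 1 to 10, yet is plainly nonconstant once the axis is fitted to the data.

```python
# Minimal sketch (synthetic data, not Friedman and Schwartz's series):
# the same series looks constant on a 1-10 axis but clearly nonconstant
# when the axis is fitted to its actual range of roughly 1 to 2.4.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1875, 1976)
velocity = (1.0
            + 1.4 * np.sin(np.linspace(0.0, np.pi, years.size)) ** 2
            + rng.normal(0.0, 0.05, years.size))

fig, (ax_wide, ax_fit) = plt.subplots(1, 2, figsize=(10, 4))
ax_wide.plot(years, velocity)
ax_wide.set_ylim(1, 10)       # the wide axis range flattens all movement
ax_wide.set_title("Axis range 1 to 10: looks constant")
ax_fit.plot(years, velocity)  # default autoscaling reveals the swings
ax_fit.set_title("Axis fitted to the data: nonconstant")
for ax in (ax_wide, ax_fit):
    ax.set_xlabel("Year")
    ax.set_ylabel("Velocity")
plt.tight_layout()
plt.show()
```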

Shortly after the meeting of the Bank’s panel of academic consultants, there was considerable press coverage. Do you recall how that occurred? The Guardian newspaper started the debate.

“As background, monetarism was at its peak. Margaret Thatcher–the Prime Minister–had instituted a regime of monetary control, as she believed that money caused inflation, precisely the view put forward by Friedman and Schwartz. From this perspective, a credible monetary tightening would rapidly reduce inflation because expectations were rational. In fact, inflation fell slowly, whereas unemployment leapt to levels not seen since the 1930s. The Treasury and Civil Service Committee on Monetary Policy (which I had advised in [36] and [37]) had found no evidence that monetary expansion was the cause of the post-oil-crisis inflation. If anything, inflation caused money, whereas money was almost an epiphenomenon. The structure of the British banking system made the Bank of England a “lender of the first resort,” and so the Bank could only control the quantity of money by varying interest rates.

At the time, Christopher Huhne was the economics editor at the Guardian. He had seen our critique, and he deemed our evidence central to the policy debate.”

As I recall, when Huhne’s article hit the press, your phone rang for hours on end.

“That it did. There were actually two articles about Friedman and Schwartz (1982) in the Guardian on December 15, 1983. On page 19, Huhne had written an article that summarized–in layman’s terms–our critique of Friedman and Schwartz (1982). Huhne and I had talked at length about this piece, and it provided an accurate statement of Hendry and Ericsson (1983) and its implications. In addition–and unknown to us–the Guardian decided to run a front-page editorial on Friedman and Schwartz with the headline Monetarism’s guru ‘distorts his evidence’. That headline summarized Huhne’s view that it was unacceptable for Friedman and Schwartz to use their data-based dummy variable for 1921–1955 and still claim parameter constancy of their money-demand equation. Rather, that dummy variable actually implied nonconstancy because the regression results were substantively different in its absence. That nonconstancy undermined Friedman and Schwartz’s policy conclusions.”

Charles Goodhart (1982) had also questioned that dummy.

“It is legitimate to question any data-based dummy selected for a period unrelated to historical events. Whether that dummy “distorted the evidence” is less obvious, since econometricians often use indicators to clarify evidence or to proxy for unobserved variables. In its place, we used a nonlinear equilibrium correction, which had two equilibria, one for normal times and one for disturbed times (although one could hardly call the First World War “normal”). Like Friedman and Schwartz, we did include a dummy for the two world wars that captured a 4% increase in demand, probably due to increased risks. Huhne later did a TV program about the debate, spending a day at my house filming.”
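For the curious, one standard way to obtain two equilibria from a nonlinear equilibrium-correction term is a cubic adjustment function. This is a generic sketch, not necessarily the exact functional form used in [93]:

```latex
% Let e_t be the equilibrium-correction (disequilibrium) term and
% suppose adjustment follows a cubic:
\begin{equation*}
  \Delta e_t \approx -\,\gamma\, e_{t-1}\,(e_{t-1}-a)\,(e_{t-1}-b),
  \qquad \gamma > 0,\quad 0 < a < b .
\end{equation*}
% The fixed points are e = 0, a, and b. For small enough gamma, e = 0
% and e = b are locally stable while e = a is unstable, so the system
% is pulled toward e = 0 in "normal" times and toward e = b in
% "disturbed" times: two distinct equilibria.
```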

Hendry and Ericsson (1983) was finally published nearly eight years later in [93], after a prolonged editorial process. Just when we thought the issue was laid to rest, Chris Attfield, David Demery, and Nigel Duck (1995) claimed that our equation had broken down on data extended to the early 1990s whereas the Friedman and Schwartz specification was constant.

“To compile a coherent statistical series over a long run of history, Attfield, Demery, and Duck had spliced several different money measures together; but they had not 26 adjusted the corresponding measures of the opportunity cost. With that combination, our model did indeed fail. However, as shown in [166], our model remained constant over the whole sample once we used an appropriate measure of opportunity cost, whereas the updated Friedman and Schwartz model failed. Escribano (2004) updates our equation through 2000 and confirms its continued constancy.”

Your model of US narrow money demand also generated controversy, as when you presented it at the Fed.

“Yes, that research appeared as [96] with Yoshi Baba and Ross Starr. After the supposed breakdown in US money demand recorded by Steve Goldfeld (1976), it was natural to implement similar models for the United States. Many new financial instruments had been introduced, including money market mutual funds, CDs, and NOW and SuperNOW accounts, so we hypothesized that these non-modeled financial innovations were the cause of the instability in money demand. Ross also thought that long-term interest-rate volatility had changed the maturity structure of the bond market, especially when the Fed implemented its New Operating Procedures. A high long rate was no longer a signal to buy because high interest rates were associated with high variances, and interest rates might go higher still and induce capital losses. This situation suggested calculating a certainty-equivalent long-run interest rate–that is, the interest rate adjusted for risk.

Otherwise, the basic approach and specifications were similar. We treated M1 as being determined by the private sector, conditional on interest rates set by the Fed, although the income elasticity was one half, rather than unity, as in the United Kingdom. Seminars at the Fed indeed produced a number of challenges, including the claim that the Fed engineered a monetary expansion for Richard Nixon’s re-election. Dummies for that period were insignificant, so agents were willing to hold that money at the interest rates set, confirming valid conditioning. Another criticism concerned the lag structure, which represented average adjustment speeds in a large and complex economy.”
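A generic way to write the risk adjustment Ross Starr had in mind is a mean-variance penalty; the construction in [96] differs in detail, so treat this as illustration only:

```latex
% A mean-variance risk adjustment to the observed long rate R_t:
\begin{equation*}
  R^{\mathrm{ce}}_{t} = R_{t} - \lambda\,\sigma^{2}_{t},
  \qquad \lambda > 0 ,
\end{equation*}
% where sigma^2_t measures recent long-rate volatility and lambda
% reflects risk aversion. When volatility is high, a high observed R_t
% no longer signals a high risk-adjusted return, which is the
% "no longer a signal to buy" point above.
```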

Some economists still regard the final formulation in [96] as too complicated. Sometimes, I think that they believe the world is inherently simple. Other times, I think that they are concerned about data mining. Have you had similar reactions?

“Data mining could never spuriously produce the sizes of t-values we found, however many search paths were explored. The variables might proxy unmodeled effects, but their large t-statistics could not arise by chance.”
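That claim about t-values can be checked with a small simulation. The Python sketch below (my illustration, not Hendry’s calculation) cherry-picks the best of 40 irrelevant regressors in repeated trials: the largest |t| rarely exceeds 4, so t-values far larger than that cannot plausibly be a pure selection artifact.

```python
# Monte Carlo sketch: the largest |t|-statistic obtainable by trying many
# irrelevant regressors, one at a time, against an unrelated y.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_candidates, n_trials = 100, 40, 2_000
max_abs_t = np.empty(n_trials)

for i in range(n_trials):
    y = rng.normal(size=n_obs)                # y unrelated to every candidate
    X = rng.normal(size=(n_obs, n_candidates))
    xx = (X ** 2).sum(axis=0)                 # x'x for each candidate
    b = X.T @ y / xx                          # bivariate slope estimates
    rss = (y ** 2).sum() - b ** 2 * xx        # residual sum of squares per fit
    t = b / np.sqrt(rss / (n_obs - 1) / xx)   # t-ratio for each candidate
    max_abs_t[i] = np.abs(t).max()            # cherry-pick the best one

print(f"average max |t| over {n_candidates} candidates: {max_abs_t.mean():.2f}")
print(f"99th percentile of max |t|: {np.percentile(max_abs_t, 99):.2f}")
# Typical maxima are around 2.5, and even the 99th percentile stays below
# about 4: one-variable-at-a-time search cannot manufacture very large
# t-values by chance.
```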

In layperson’s terms, what all of this says is that Milton Friedman and his co-conspirator Anna Schwartz’s claim that the velocity of money was constant was total bullshit.

So, what then does this all mean to you?

Quite frankly, it means a lot, and it is important information that the public needs to understand.

Therefore, I’m writing a somewhat comprehensive series on income inequality, which will begin with a look at the post-World War II policies of full employment up to the 1970s. Next, I will discuss the OPEC cost shock, the Great Inflation, and the rise of Monetarism. From there, I will discuss the end of full employment, union busting, and wage suppression up to 1992, along with the coming of the “New Democrats”. I will then finish with a look at the expansion of private debt.

So, there’s lots to discuss in the coming weeks.