Winner of the New Statesman SPERI Prize in Political Economy 2016

Friday, 17 February 2017

NAIRU bashing

The NAIRU is the level of unemployment at which inflation is stable. Ever since economists invented the concept people have poked fun at how difficult to measure and elusive the NAIRU appears to be, and these articles often end with the proclamation that it is time we ditched the concept. Even good journalists can do it. But few of these attempts to trash the NAIRU answer a very simple and obvious question - how else do we link the real economy to inflation?

One exception is those who suggest that all we need to control the economy effectively is a nominal anchor, like the money supply or the exchange rate. But to cut a long story short, attempts to put this into practice have never worked out well. The most recent attempt has been the Euro: just adopt a common currency, and inflation in individual countries will be forced to follow the average. This didn’t prove true for either Germany or the periphery, with disastrous results.

The NAIRU is one of those economic concepts which is essential to understand the economy but is extremely difficult to measure. Let’s start with the reasons for difficulty. First, unemployment is not perfectly measured (with people giving up looking for work who start looking again when the economy grows strongly), and may not capture the idea it is meant to represent, which is excess supply or demand in the labour market. Second, it looks at only the labour market, whereas inflation may also have something to do with excess demand in the goods market. Third, even if neither of these problems existed, the way unemployment interacts with inflation is still not clear.

The way economists have thought about the relationship between unemployment and inflation over the last 50 years is the Phillips curve. That says that inflation depends on expected inflation and unemployment. The importance of expected inflation means that simply plotting unemployment against inflation will always produce a mess. I remember that one of the earlier editions of Mankiw’s textbook had a lovely plot of this for the US which seemed to contradict what I just said: it displayed clear ‘Phillips curve loops’. But the picture was always messier for other countries, and it got messier for the US once we had inflation targeting (as it should with rational expectations). See this post for details.
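In symbols, the expectations-augmented Phillips curve takes the standard textbook form (my notation, not anything specific to this post):

```latex
\pi_t = \pi^e_t - \alpha\,(u_t - u^*), \qquad \alpha > 0
```

where \(\pi\) is inflation, \(\pi^e\) expected inflation and \(u\) unemployment. Only at \(u_t = u^*\) does actual inflation equal expected inflation, so only there can inflation be stable: that level is the NAIRU.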

The ubiquity of the New Keynesian Phillips Curve (NKPC) in current macroeconomics should not fool anyone into thinking that we finally have the true model of inflation. Its frequency of use reflects the obsession with microfoundations methodology and the consequent downgrading of empirical analysis. We know that workers and employers don’t like nominal wage cuts, but that aversion is not in the NKPC. If monetary policy is stuck at the Zero Lower Bound the NKPC says that inflation should become rather volatile, but that did not appear to happen, a point John Cochrane has stressed.

I could go on and on, and write my own NAIRU bashing piece. But here is the rub. If we really think there is no relationship between unemployment and inflation, why on earth are we not trying to get unemployment below 4%? We know that the government could, by spending more, raise demand and reduce unemployment. And why would we ever raise interest rates above their lower bound?

I’ve been there, done that. While we should not be obsessed by the 1970s, we should not wipe it from our minds either. Then policy makers did in effect ditch the NAIRU, and we got uncomfortably high inflation. In 1980 in the US and UK policy changed and increased unemployment, and inflation fell. There is a relationship between inflation and unemployment, but it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth.

A more subtle critique of the NAIRU would be to acknowledge that truth, but say that because the relationship is difficult to measure, we should stop using unemployment as a guide to setting monetary policy. Let’s just focus on the objective, inflation, and move rates according to what actually happens to inflation. In other words forget forecasting, and let monetary policy operate like a thermostat, raising rates when inflation is above target and vice versa.
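Why a pure thermostat can misbehave is easy to see in a toy sketch (a two-equation caricature with made-up parameters, not a model of any actual central bank): policy reacts to current inflation, but its effect on inflation arrives with a lag.

```python
# A toy "thermostat" rule for monetary policy (illustrative parameters).
# Inflation responds to last period's interest rate, so policy is
# always reacting to what has already happened.

def thermostat(b, c, pi0, target=2.0, r_star=1.0, periods=12):
    """b: effect of rates on inflation; c: policy response to inflation."""
    pi, path = pi0, [pi0]
    for _ in range(periods):
        r = r_star + c * (pi - target)   # raise rates when inflation is high
        pi = pi - b * (r - r_star)       # effect arrives one period later
        path.append(pi)
    return path

# If the response is strong relative to the lag (b*c > 1), inflation
# overshoots the target in alternate periods: an oscillation.
gaps = [p - 2.0 for p in thermostat(b=0.9, c=2.0, pi0=4.0)]
# gaps alternate in sign, shrinking by a factor |1 - b*c| each period
```

With a weaker response (b*c below one) inflation converges smoothly, which is why the size of the response, not the thermostat idea itself, drives the oscillation.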

That could lead to large oscillations in inflation, but there is a more serious problem. This tends to be forgotten, but inflation is not the only goal of monetary policy. Take what is currently happening in the UK. Inflation is rising, and is expected to soon exceed its target, but the central bank has cut interest rates because it is more concerned about the impact of Brexit on the real economy. That shows quite clearly that policy makers in reality target some measure of the output gap as well as inflation. And they are quite right to: why create a recession just to smooth inflation?

OK, so just target some weighted average of inflation and unemployment like a thermostat. But what level of unemployment? The danger is that we would always end up tolerating high inflation whenever unemployment is low. We know that is not a good idea, because inflation would just go on rising. So why not target the difference between unemployment and some level consistent with stable inflation? We could call that level X, but we should try to be more descriptive. Any suggestions?

Friday, 11 November 2016

Do New Keynesians assume full employment?

I’ve tried to write this as jargon-free as I can, but it is mainly for economists.

Nick Rowe claims that the New Keynesian model assumes full employment. I think he is onto something, but while he treats it as a problem with the model, I think it is a problem with the real world.

Nick sets up a simple consumption-only economy with infinitely lived self-employed workers, where we are at the steady state (= long run) level of consumption C(t) = output Y(t) = 100. Then something bad happens (what macroeconomists call a shock):

“every agent has a bad case of animal spirits. There's a sunspot. Or someone forgets to sacrifice a goat. So each agent expects every other agent to consume at C(t)=50 from now on. ... So each agent expects his income to be 50 per period from now on. So each agent realises that he must cut his consumption to 50 per period from now on too, otherwise he will have to borrow to finance his negative saving and will go deeper and deeper into debt, till he hits his borrowing limit and is forced to cut his consumption below 50 so he can pay at least the interest on his debt.”

To put it more formally: each agent believes the steady state level of output has fallen. That in turn has to imply that everyone makes a mistake about the desired labour supply of everyone else. I assume this is a mistaken belief. If the belief was correct, then there is no problem: the steady state level of output should fall, because people want more leisure and less work.

Nick says that there is nothing a monetary authority that controls the real interest rate can do about this mistaken belief about the steady state, because changing real rates only changes the profile of consumption (shifting consumption from the future to the present) and not its overall level. That is correct. Furthermore if each individual simply assumes what they think is true, and does not even bother to offer his pre-shock level of labour to others, then this is indeed a new equilibrium which the monetary authority can do nothing about.
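A sketch of why, using the standard consumption Euler equation with log utility (my notation; the argument does not depend on this particular functional form):

```latex
\frac{1}{C_t} = \beta\,(1 + r_t)\,\mathbb{E}_t\!\left[\frac{1}{C_{t+1}}\right]
```

The real interest rate \(r_t\) controls the slope of the consumption path, while the level of the path is anchored by where agents expect consumption to settle in the steady state. If that expectation falls from 100 to 50, no sequence of real rates can restore the old level.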

But people and economies are not like that. Each agent wants to work at the pre-shock level, and will signal that in some way. They will see that the economy has widespread underemployment, and as a result they will revise their expectations about the steady state. I think Nick knows that, because he writes that the NK model needs “to just assume the economy always approaches full employment in the limit as time goes to infinity, otherwise our Phillips Curve tells us we will eventually get hyperinflation or hyperdeflation, and we can't have our model predicting that, can we.”

He treats that as if it were a problem, but I do not see that it is. After all, we have no problem with the idea that consumers will revise down their expectations of their future income if they unexpectedly find they are always in debt. Equally I have no problem with the idea that in Nick’s economy, with widespread and visible involuntary underemployment, consumers might think they had made a mistake about others’ desired labour supply.

Let me put it another way. In a single person economy we never get underemployment. The problem arises because in a real economy we need to form expectations about what others will do. But if there exist signals which help us get our expectations right, that should shift us out of a mistaken belief equilibrium.

Which gets us to why I think Nick is on to something about the real world. Suppose there is a shock like a financial crisis, which for the sake of argument just temporarily reduces demand by a lot and creates unemployment. Central banks cannot cut real interest rates enough to get rid of the unemployment because of the zero lower bound. Inflation falls, but because everyone initially thinks this is all temporary, and maybe also because of an aversion to nominal wage cuts, we get a modest fall in inflation.

Now suppose people erroneously revise down their beliefs about steady state output, to be more like current output. Suppose also that visible unemployment goes away, because firms substitute labour for capital (UK) or workers get discouraged (US). We get to what looks like Nick’s bad equilibrium. Even inflation moves back to target, because the current output gap appears to disappear. We no longer have any signals that there is an alternative, better for everyone, inflation at target equilibrium with higher output.

Now we could get out of this bad equilibrium, if some positive shock or monetary/fiscal policy raised demand ‘temporarily’ and people saw that, because firms substituted capital for labour, or discouraged workers came back into the labour force, inflation did not rise well above target. But suppose policymakers also start to hold these erroneous beliefs, and so do not try and get us out of the bad equilibrium. Could that describe the secular stagnation we are in?



Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
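To see why the ground was so weak, write the traditional Phillips curve out (my notation):

```latex
\pi_t = \mathbb{E}_{t-1}\pi_t + \alpha\,(y_t - y^*_t) + \varepsilon_t
\;\;\Longrightarrow\;\;
y_t - y^*_t = \frac{\pi_t - \mathbb{E}_{t-1}\pi_t - \varepsilon_t}{\alpha}
```

Under rational expectations the inflation surprise on the right-hand side is unforecastable, so deviations of output from its natural rate are unforecastable too, and systematic demand management can do nothing about them.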

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analyse small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived both in the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the latter 1990s when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have done), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Sunday, 26 July 2015

The F story about the Great Inflation

Here F could stand for folk. The story that is often told by economists to their students goes as follows. After Phillips discovered his curve, which relates inflation to unemployment, Samuelson and Solow in 1960 suggested this implied a trade-off that policymakers could use. They could permanently have a bit less unemployment at the cost of a bit more inflation. Policymakers took up that option, but then could not understand why inflation didn’t just go up a bit, but kept on going up and up. Along came Milton Friedman to the rescue, who in a 1968 presidential address argued that inflation also depended on inflation expectations, which meant the long run Phillips curve was vertical and there was no permanent inflation unemployment trade-off. Policymakers then saw the light, and the steady rise in inflation seen in the 1960s and 1970s came to an end.

This is a neat little story, particularly if you like the idea that all great macroeconomic disasters stem from errors in mainstream macroeconomics. However even a half-awake student should spot one small difficulty with this tale. Why did it take over ten years for Friedman’s wisdom to be adopted by policymakers, while Samuelson and Solow’s alleged mistake seems to have been adopted quickly? Even if you think that the inflation problem only really started in the 1970s, that imparts a ten-year lag into the knowledge transmission mechanism, which is a little strange.

However none of that matters, because this folk story is simply untrue. There has been some discussion of this in blogs (by Robert Waldmann in particular - see Mark Thoma here), and the best source on this is another F: James Forder. There are papers (e.g. here), but the most comprehensive source is now his book, which presents an exhaustive study of this folk story. It is, he argues, untrue in every respect. Not only did Samuelson and Solow not argue that there was a permanent inflation unemployment trade-off that policymakers could exploit, policymakers never believed there was such a trade-off. So how did this folk story arise? Quite simply from another F: Friedman himself, in his Nobel Prize lecture in 1977.

Forder discusses much else in his book, including the extent to which Friedman’s 1968 emphasis on the importance of expectations was particularly original (it wasn’t). He also describes how and why he thinks Friedman’s story became so embedded that it became folklore. The reason I write about this now is that I’m in the process of finishing a paper on the knowledge transmission mechanism and the 2010 switch to austerity, and I wanted to look back at previous macroeconomic crises.

If it wasn’t a belief in a long run inflation unemployment trade-off, what was it that allowed inflation to gradually rise during those two decades? Forder has a lot to say on this, but the following is my own take. I think two things were critical: the idea that demand management was primarily designed to achieve full employment, and that full employment had primacy over the objective of price stability. Although more and more economists over that period began to see the policy problem within a Phillips curve framework, many still hoped that other measures like prices and incomes policies (in the UK in particular but also in the US) could override the Phillips curve logic. The primacy of the full employment objective meant the problem was often described as ‘cost-push inflation’ rather than a rise in the natural rate of unemployment.

If you find this hard to imagine, think about historians in some possible future, say 2050, discussing the current period. Suppose that by then nonlinearities in the Phillips curve and the power of the inflation target in anchoring inflation expectations have become firmly entrenched in mainstream thinking, and that partly as a result the inflation target has been replaced by a target for the level of nominal income. With the benefit of hindsight these historians will be amazed to calculate the extent to which resources were lost decades earlier because policy had become fixated on a 2% inflation target and budget deficits. They will recount with amusement the number of economists and policymakers who thought that the way to deal with deficient demand was ‘structural reform’. Rather than construct folk tales, they will observe that even when most economists realised what was required to avoid being misled again, policymakers were extremely reluctant to change the inflation target.


Tuesday, 24 February 2015

Greece and primary surpluses

In my simple guide to the current macroeconomic position of Greece, I said that a major mistake made by the Troika was to insist on a pace of fiscal adjustment that was far too fast. It led to a collapse in the economy. Of course a collapse in the economy itself raises the deficit. So people who just look at the deficit, including many comments on that post, say ‘what adjustment’ and ‘just how many years does Greece need’.

It is easy to avoid this trap. The OECD publishes a series for the underlying primary balance, which is their guess at what the primary balance (taxes less spending excl. interest payments) would be if the output gap was zero. It is the first row in the table below; the estimated output gap is in the second row. I’ve also shown the scale of the decline in GDP, just to show that the output gap numbers are pretty conservative. Unemployment in Greece is over 25%, and over half of all young people are unemployed.


                                        2009   2010   2011   2012   2013   2014
Underlying primary balance (% of GDP)  -12.1   -6.0   -0.7    2.9    6.7    7.6
Output gap (%)                           4.3    0.2   -7.1  -11.6  -14.2  -12.7
GDP growth (%)                          -4.4   -5.3   -8.9   -6.6   -4.0    0.8

The underlying primary deficit peaked in 2009, and it was huge, representing the actions of a truly profligate government. However what followed was complete cold turkey: within two years the underlying primary balance was close to zero. A pretty conservative estimate of the impact of fiscal consolidation is that GDP falls by 1% for each 1% of GDP improvement in the primary balance. In those terms, all of the current output gap in Greece can be explained by austerity.
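A back-of-the-envelope check of that claim, using the OECD numbers from the table above and an assumed multiplier of one (my illustration, not an OECD calculation):

```python
# OECD underlying primary balance, % of GDP (from the table above)
primary_balance = {2009: -12.1, 2010: -6.0, 2011: -0.7,
                   2012: 2.9, 2013: 6.7, 2014: 7.6}
output_gap_2013 = -14.2   # OECD estimate, % of potential GDP

# Consolidation = improvement in the underlying balance since the 2009 peak
consolidation = primary_balance[2013] - primary_balance[2009]
print(round(consolidation, 1))   # 18.8 points of GDP

# With a multiplier of 1, that consolidation lowers GDP by roughly
# 18.8%, more than enough to account for the whole -14.2% output gap.
```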

As I have always said, some period involving a negative output gap was inevitable because Greece had to regain the competitiveness it lost as a result of the previous boom fuelled by fiscal profligacy. But slow gradual adjustment is more efficient than cold turkey. Paul Krugman explains one reason for this: resistance to nominal wage cuts. But there is another which is even more conventional. If we have a Phillips curve where inflation expectations are endogenous (either through rational or adaptive expectations) rather than anchored to some inflation target (as Paul implicitly assumes), then competitiveness adjustment can be achieved with a much lower cost in terms of the cumulated output gap if it is done slowly. (I gave an example here, at the time reacting to the idea that Latvia’s cold turkey adjustment had been a success.)
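A stylized version of that argument (not the example linked above; an accelerationist Phillips curve with made-up parameters): to cut the price level relative to your partners and end with inflation back where it started, a negative gap early must be matched by a positive gap later, and spreading the adjustment over more periods shrinks the gaps needed.

```python
# Accelerationist Phillips curve with adaptive expectations:
#   x[t] = x[t-1] + a*u[t]   (x = inflation relative to partners)
#   p[t] = p[t-1] + x[t]     (p = price level relative to partners)
# Simplest program: a gap of -s in period 1, +s in period T.

def cumulated_gap(P, a, T):
    """Cumulated |output gap| needed to cut relative prices by P over T
    periods while returning relative inflation to zero at the end."""
    s = P / (a * (T - 1))
    u = [-s] + [0.0] * (T - 2) + [s]
    x = p = 0.0
    for ut in u:                 # verify the program delivers the target
        x += a * ut
        p += x
    assert abs(p + P) < 1e-9 and abs(x) < 1e-9
    return 2 * s

a, P = 0.5, 10.0                 # illustrative numbers only
print(cumulated_gap(P, a, T=4))  # cold turkey: large gaps
print(cumulated_gap(P, a, T=21)) # gradual: same competitiveness gain,
                                 # a fraction of the cumulated gap
```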

There are only two serious barriers to this more efficient adjustment path. The first is the willingness of some outside body to provide the loans to fund the gradual reduction in the government’s deficit. The second is getting those outside bodies to recognise this basic macro: austerity hits output, and gradual adjustment is better. I think the second turned out to be the crucial problem with Greece: as has been extensively documented, the Troika were hopelessly optimistic about the impact cold turkey would have.

So it is as clear as it can be that the current dire position of the Greek economy is the result of a huge mistake by the Troika. The size of the collapse in the Greek economy is similar to the fall in Irish output during the Great Irish Famine of 1845-53, and while the suffering in the latter is obviously of a different order, the attitude of some in the Eurozone is as misconceived as that of most English politicians during the famine. Of that event it is said that ‘God sent the blight but the English made the famine’. In the future the Greeks may justly say ‘our politicians caused the deficit but the Troika made the depression’.


Tuesday, 14 October 2014

The mythical Phillips curve?

Suppose you had just an hour to teach the basics of macroeconomics, what relationship would you be sure to include? My answer would be the Phillips curve. With the Phillips curve you can go a long way to understanding what monetary policy is all about.

My faith in the Phillips curve comes from simple but highly plausible ideas. In a boom, demand is strong relative to the economy’s capacity to produce, so prices and wages tend to rise faster than in an economic downturn. However workers do not normally suffer from money illusion: in a boom they want higher real wages to go with increasing labour supply. Equally firms are interested in profit margins, so if costs rise, so will prices. As firms do not change prices every day, they will think about future as well as current costs. That means that inflation depends on expected inflation as well as some indicator of excess demand, like unemployment.

Microfoundations confirm this logic, but add a crucial point that is not immediately obvious. Inflation today will depend on expectations about inflation in the future, not expectations about current inflation. That is the major contribution of New Keynesian theory to macroeconomics.
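The contrast is easiest to see written out (standard textbook forms; my notation):

```latex
\text{traditional:}\quad \pi_t = \mathbb{E}\,\pi_t + f(u^* - u_t)
\qquad
\text{NKPC:}\quad \pi_t = \beta\,\mathbb{E}_t\,\pi_{t+1} + \kappa\,x_t
```

where \(x_t\) is the output gap (or real marginal cost) and \(\beta\) is a discount factor just below one. It is the forward-looking term \(\mathbb{E}_t\,\pi_{t+1}\) that the microfoundations deliver.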

This combination of simple and formal theory would be of little interest if it was inconsistent with the data. A few do periodically claim just this: that it is very hard to find a Phillips curve in the data. (For example here is Stephen Williamson talking about Europe - but see also this from László Andor claiming just the opposite - and this from Chris Dillow on the UK.) If this was true, it would mean that monetary policymakers the world over were using the wrong framework in taking their decisions.

So is it true? The problem is that we do not have good data series going back very far on inflation expectations. Results from estimating econometric equations can therefore vary a lot depending how this crucial variable is treated. What I want to do here is just look at the raw data on inflation and unemployment for the US, and see whether it is really true that it is hard to find a Phillips curve.

The first chart plots consumer price inflation (y axis) against unemployment (x axis), where a line joins one year to the next. We start down the bottom right in 1961, when inflation was about 1% and unemployment 6.7%. Over the next few years we get the kind of pattern Phillips originally observed: unemployment falls and inflation rises.


The problem is that with inflation rising to 5.5% in 1969, it made sense for agents to raise their expectations about inflation. (In fact they almost surely started doing this before 1969, which may give the line from 1961 to 1969 its curvature. For given expectations, the line might be quite flat, a point I will come back to later.) So when unemployment started rising again, inflation didn’t go back to 1%, because expected inflation had risen. The patterns we get are called Phillips curve loops: falling unemployment over time is clearly associated with rising inflation, but this short-run pattern is overlaid on a trend rise in inflation because inflation expectations are rising. Of course the other thing going on here is that we had two oil price hikes in 1974 and 1979. The chart finishes in 1980.
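The loops are exactly what an expectations-augmented curve with slowly adjusting expectations generates. A minimal sketch (illustrative parameters; adaptive expectations stand in for whatever agents actually did):

```python
import math

# pi[t] = pi[t-1] - a*(u[t] - u_star): last period's inflation proxies
# for expected inflation. Drive unemployment around an 8-period cycle
# and the (u, pi) points trace loops, not a single curve.
a, u_star, pi = 0.5, 5.0, 1.0
points = []
for t in range(24):
    u = u_star + 1.5 * math.sin(2 * math.pi * t / 8)
    pi = pi - a * (u - u_star)
    points.append((u, pi))

# The same unemployment rate recurs with different inflation rates:
# u is identical at t = 1 and t = 3, but inflation is not.
```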

Most economists agree that things changed in 1980, as Volcker used monetary policy aggressively to get inflation down. The next chart plots inflation and unemployment from 1980 to 2000.


Inflation came down from 13.5% in 1980 to 3.2% in 1983 partly because unemployment was high, but also because inflation expectations fell rapidly. (We do have survey evidence showing this happening.) The remaining period is dominated by a large fall in unemployment. So why didn’t this fall in unemployment push inflation back up? In terms of the chart, why isn’t the 2000 point much higher? Again expectations are confusing things. One survey has inflation expectations at around 5% in 1983, falling towards 3% at the end of 1999. So inflation was being held back for that reason. A Phillips curve, and its loops, is still there, but pretty flat.

The final chart goes from 2000 to 2013. Note that the inflation axis has changed - it now peaks at 4.5% rather than 16%. The interesting point, which Paul Krugman and others have noted, is that this looks much more like Phillips’s original observation: a simple negative relationship between inflation and unemployment. This could happen if expectations had become much more anchored as a result of credible inflation targeting, and survey data on expectations do suggest this has happened to some extent. There are also important changes in commodity prices at work here.


While the change in inflation scale allows us to see this more clearly, it hides an important point. Once again the Phillips curve is pretty flat. We go from 4% to 10% unemployment, but inflation changes by at most 4%. However from the previous discussion we can see that this is not necessarily a new phenomenon, once we allow for changing inflation expectations.
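The loop mechanism described above is easy to reproduce in a toy simulation. This is purely illustrative: the 2% anchor, NAIRU of 5%, cycle amplitude and slope of 0.5 are made-up numbers, not estimates from the charts.

```python
import math

def simulate(periods, anchored, target=2.0, u_star=5.0, slope=0.5):
    """Toy Phillips curve: p_t = E_t + slope * (u_star - u_t), where
    unemployment u_t cycles slowly around the NAIRU u_star.
    Expectations E_t are either anchored at the target or adaptive
    (equal to last period's inflation)."""
    p_prev = target
    path = []
    for t in range(periods):
        u = u_star + 2.0 * math.sin(2 * math.pi * t / 20)  # 20-period cycle
        expected = target if anchored else p_prev
        p = expected + slope * (u_star - u)
        path.append((u, p))
        p_prev = p
    return path

adaptive = simulate(40, anchored=False)
anchored = simulate(40, anchored=True)

# Periods 2 and 8 have identical unemployment. Under adaptive
# expectations inflation differs sharply between them - the gap between
# the two arms of the cycle is what traces out a loop in a scatter plot.
print(adaptive[2][1] - adaptive[8][1])  # large: a loop
print(anchored[2][1] - anchored[8][1])  # ~0: same u, same p, no loop
```

With anchored expectations the scatter collapses onto a single flat line, which is the 2000-2013 picture; with adaptive expectations the same unemployment cycle generates loops, which is the pre-1980 picture.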

Is it this data which makes me believe in the Phillips curve? To be honest, no. Instead it is the basic theory that I discussed at the beginning of this post. It may also be because I’m old enough to remember the 1970s when there were still economists around who denied that lower unemployment would lead to higher inflation, or who thought that the influence of expectations on inflation was weak, or who thought any relationship could be negated by direct controls on wages and prices, with disastrous results. But given how ‘noisy’ macro data normally is, I find the data I have shown here pretty consistent with my beliefs.

    

Friday, 18 July 2014

Further thoughts on Phillips curves

In a post from a few days ago I looked at some recent evidence on Phillips curves, treating the Great Recession as a test case. I cast the discussion as a debate between rational and adaptive expectations. Neither is likely to be 100% right of course, but I suggested the evidence implied rational expectations were more right than adaptive. In this post I want to relate this to some other people’s work and discussion. (See also this post from Mark Thoma.)

The first issue is why look at just half a dozen years, in only a few countries. As I noted in the original post, when looking at CPI inflation there are many short term factors that may mislead. Another reason for excluding European countries which I did not mention is the impact of austerity driven higher VAT rates (and other similar taxes or administered prices), nicely documented by Klitgaard and Peck. Surely all this ‘noise’ is an excellent reason to look over a much longer time horizon?

One answer is given in this recent JEL paper by Mavroeidis, Plagborg-Møller and Stock. As Plagborg-Møller notes in an email to Mark Thoma: “Our meta-analysis finds that essentially any desired parameter estimates can be generated by some reasonable-sounding specification. That is, estimation of the NKPC is subject to enormous specification uncertainty. This is consistent with the range of estimates reported in the literature….traditional aggregate time series analysis is just not very informative about the nature of inflation dynamics.” This had been my reading based on work I’d seen.

This is often going to be the case with time series econometrics, particularly when key variables appear in the form of expectations. Faced with this, what economists often look for is some decisive and hopefully large event, where all the issues involving specification uncertainty can be sidelined or become second order. The Great Recession, for countries that did not suffer a second recession, might be just such an event. In earlier, milder recessions it was also much less clear what the monetary authority’s inflation target was (if it had one at all), and how credible it was.

How does what I did relate to recent discussions by Paul Krugman? Paul observes that recent observations look like a Phillips curve without any expected inflation term at all. He mentions various possible explanations for this, but of those the most obvious to me is that expectations have become anchored because of inflation targeting. This was one of the cases I considered in my earlier post: that agents always believed inflation would return to target next year. So in that sense Paul and I are talking about the same evidence.

Before discussing interpretation further, let me bring in a paper by Ball and Mazumder. This appears to come to completely the opposite conclusion to mine. They say “we show that the Great Recession provides fresh evidence against the New Keynesian Phillips curve with rational expectations”. I do not want to discuss the specific section of their paper where they draw that conclusion, because it involves just the kind of specification uncertainties that Mavroeidis et al discuss. Instead I will simply note that the Ball and Mazumder study had data up to 2010. We now have data up to 2013. In its most basic form, the contest between the two Phillips curves is whether underlying inflation is now higher or lower than in 2009 (see maths below). It is higher. So to rescue the adaptive expectations view, you have to argue that underlying inflation is actually lower now than in 2009. Maybe it is possible to do that, but I have not seen that done.

However it would be a big mistake to think that the Ball and Mazumder paper finds support for the adaptive expectations Friedman/Phelps Phillips curve. They too find clear evidence that expectations have become more and more anchored. So in this sense the evidence is all pointing in the same way.

So I suspect the main differences here come from interpretation. I’m happy to interpret anchoring as agents acting rationally as inflation targets have become established and credible, although I also agree that it is not the only possible interpretation (see Thomas Palley and this paper in particular). My interpretation suggests that the New Keynesian Phillips curve is a more sensible place to start from than the adaptive expectations Friedman/Phelps version. As this is the view implicitly taken by most mainstream academic macroeconomics, but using a methodology that does not ensure congruence with the data, I think it is useful to point out when the mainstream does have empirical support.


Some maths

Suppose the Phillips curve has the following form:

p(t) = E[p(t+1)] + a.y(t) + u(t)

where ‘p’ is inflation, E[..] is the expectations operator, ‘a’ is a positive parameter on the output gap ‘y’, and ‘u’ is an error term. We have two reference cases:

Static expectations: E[p(t+1)] = p(t-1)

Rational expectations: E[p(t+1)] = p(t+1) + e(t+1)

where ‘e’ is the error on expectations of future inflation and is random. Some simple maths shows that under static expectations, negative output gaps are associated with falling inflation, while under rational expectations they are associated with rising inflation. If we agree that between 2009 and today we have had a series of negative output gaps, we just need to ask whether underlying inflation is now higher or lower than in 2009. 
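The ‘simple maths’ can be sketched numerically. The sketch below is illustrative only: a slope of a = 0.5, a 2% target, and a fully anticipated five-period recession are my own assumed numbers.

```python
def static_path(gaps, a=0.5, start=2.0):
    """Static expectations: p_t = p_{t-1} + a*y_t, so inflation
    inherits its own past and drifts down while gaps are negative."""
    path, p = [], start
    for y in gaps:
        p = p + a * y
        path.append(p)
    return path

def rational_path(gaps, a=0.5, target=2.0):
    """Rational expectations with a credible target: solving
    p_t = E[p_(t+1)] + a*y_t forward (no discounting) gives
    p_t = target + a * (current gap plus all future gaps)."""
    return [target + a * sum(gaps[t:]) for t in range(len(gaps))]

gaps = [-1.0] * 5 + [0.0] * 3   # five periods of negative output gaps

print(static_path(gaps))    # [1.5, 1.0, 0.5, 0.0, -0.5, -0.5, -0.5, -0.5]
print(rational_path(gaps))  # [-0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.0, 2.0]
```

Under static expectations inflation falls throughout the recession and stays low; under rational expectations it jumps down at once and then climbs back to target. That is exactly the higher-or-lower-than-2009 test described in the text.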



Monday, 14 July 2014

Has the Great Recession killed the traditional Phillips Curve?

Before the New Classical revolution there was the Friedman/Phelps Phillips Curve (FPPC), which said that current inflation depended on some measure of the output/unemployment gap and the expected value of current inflation (with a unit coefficient). Expectations of inflation were modelled as some function of past inflation (e.g. adaptive expectations) - at its simplest just one lag in inflation. Therefore in practice inflation depended on lagged inflation and the output gap.

After the New Classical revolution came the New Keynesian Phillips Curve (NKPC), which had current inflation depending on some measure of the output/unemployment gap and the expected value of inflation in the next period. If this was combined with adaptive expectations, it would amount to much the same thing as the FPPC, but instead it was normally combined with rational expectations, where agents made their best guess at what inflation would be next period using all relevant information. This would include past inflation, but it would include other things as well, like prospects for output and any official inflation target.

Which better describes the data? The great attraction of the FPPC is that it can describe stagflation. We have a boom, which while it lasts steadily raises inflation. When the boom comes to an end, inflation stabilises, but at a much higher level than it began. So policy has to engineer a recession to get inflation back down again: a period in which above average unemployment is accompanied by above average inflation, which we call stagflation. If over this period we had had credible independent central banks setting inflation targets, the NKPC would not give us stagflation: when the boom came to an end, inflation would return to target. (For more explanation, see this post.) The trouble is we did not have inflation targeting during this period, so it is difficult to tell whether stagflation is evidence against the NKPC. (As an example of this ambiguity, see this survey of the empirical evidence by Nason and Smith. This enabled Robert Gordon to be quite supportive of the FPPC in 2009.)

The Great Recession could provide a much better test. In some countries output fell sharply in 2009, but has since seen a slow but steady recovery, such that the output gap today is less than it was in 2009. With the FPPC, inflation should have been steadily falling over this period, reaching its lowest level today. So if we plotted the output gap (x axis) and inflation (y axis) together, we should see a line pointing South East. With the NKPC, we can consider two polar cases. In the first, agents fully anticipate that the recovery will be slow, so we will get a sharp immediate fall in inflation, but subsequently inflation will rise towards the target. That will give us a line pointing North East. In the second, agents keep thinking inflation will return to target next year. That also gives us a line pointing North East, but it is flatter. [Postscript - I should have added that this last gives us what Krugman calls the Neo-paleo-Keynesian Phillips curve.]

Here is this plot for four countries, using OECD estimates for the output gap on the horizontal axis, consumer price inflation less 2% on the vertical axis, and OECD forecasts for 2014 and 2015. I’ve chosen these countries simply because in most of Europe we had a double dip recession, which is a more complicated experiment. If you do not like the idea of including forecasts, just ignore the last two points for each country.



The lines point North East, not South East. This gives more support to the rational expectations NKPC than the adaptive expectations FPPC. To take just one example, US inflation in 2013 is higher than it was in 2009, which is consistent with the NKPC. The traditional FPPC, on the other hand, would suggest that after a string of negative output gaps, US inflation should be a lot lower in 2013 than it was in 2009.

OK, now the caveats. Commodity prices will have an important influence on the CPI, and these are not part of either simple Phillips curve story. They may help explain the blip in inflation around 2011 in some countries, but they also helped depress inflation in 2009. Exchange rate changes will also matter. The simple Phillips curve also takes no account of non-linearity caused by a reluctance to cut nominal wages. And of course estimates of the output gap may be wrong.

In the case of Japan, we also had a recent increase in the inflation target. This may explain the forecast upward shift in inflation in 2014/5. If it does, that is clear evidence in favour of rational rather than adaptive expectations. 

All these caveats point to the need to do more empirical analysis. Nevertheless we can see why some more elaborate studies (like this for the US) can claim that recent inflation experience is consistent with the NKPC. It seems much more difficult to square this experience with the traditional adaptive expectations Phillips curve. As I suggested at the beginning, this is really a test of whether rational expectations is a better description of reality than adaptive expectations. But I know the conclusion I draw from the data will upset some people, so I look forward to a more sophisticated empirical analysis showing why I’m wrong.


Wednesday, 2 April 2014

Microfoundations and the Phillips curve

This is a rather long post that follows from my earlier one on Faustian bargains. Mainly for economists.

Although the debate about microfoundations normally talks about models, it is possible in some cases to also talk about individual relationships. This has the advantage of simplicity. I think the Phillips curve is a nice illustration of many of the key issues in the debate - even for those who do not believe in it! (I make the more general argument here.)

First, it illustrates for me why the debate is not, I repeat not, about whether the microfoundations approach is useful. Of course it is. Let’s think about the Phillips curve before microfoundations. This traditional Phillips curve had inflation at time t depending on expectations of inflation also at time t, and the output gap. This seemed at the time plausible: you could tell a story about lags in wage setting, so actors in the labour market had to guess the relevant real wage etc.

This is an example of the dangers of ad hoc theorising. The lags involved in wage decisions arise primarily because of contracts: wages are being fixed not just for the next period, but for a number of periods. Once you do the microfoundations properly, you find that inflation today depends on expected inflation next period: the New Keynesian Phillips curve (NKPC).

A detail? Unfortunately the ad hoc version implied that the output gap would move with expectation errors. So the dynamics of the business cycle became all about the inflation expectation error process. This was always implausible, even if you did not believe in rational expectations. With rational expectations, it appeared to deny the existence of Keynesian business cycles, and led to a whole generation of students being taught that anticipated monetary policy was ineffective.

So the traditional Phillips curve illustrates the dangers of ‘ad hoc’ theorising. But the relevant debate is not about whether microfounded relationships are useful, but whether they should be the only game in the academic town. So now let me give you various reasons why they should not, based on the Phillips curve example.

First, although various microfounded models suggest inflation should depend on expected inflation next period, the empirical evidence is more equivocal. A number of studies have found that inflation today depends on both expected inflation next period, but also on actual inflation last period (‘inflation inertia’). There are also empirical regularities (e.g. Phillips curve loops) that are difficult to rationalise with just a NKPC. As yet there is no generally accepted microfoundation for why this inflation inertia might occur, but given the key role this equation plays in policy it is important to look at the consequences of inflation inertia. That is what Steinsson did in a 2003 paper in JME, for example. [1] Now Steinsson’s introduction of ‘rule of thumb’ price setters was entirely ad hoc, with no microfoundation (traditional or behavioural) offered. Would those who decry theory by ‘hand waving’ as ad hoc think that article should not have been published in the JME?

At this point some might say that this example illustrates that there is no problem - papers with non-microfounded relationships do get published. Yet everyone knows that there is a large effective bias against publishing models that are not fully microfounded: Steinsson’s paper is the exception rather than the rule. (See my discussion of middlebrow models.) Indeed I started seriously thinking about these issues after I attended a conference where one respected participant attacked a couple of papers that did include inflation inertia, saying we must ‘respect microfoundations’. It was this that gave rise to my distinction between microfoundations purists and microfoundations pragmatists.

This example illustrates that we are often not confronted with a choice between two relationships: one microfounded and one not. Instead we have the ad hoc relationship, and we are still waiting for the microfounded version. Microfoundations takes time. There is a simple reason for this: microfoundations modelling is difficult, and it often relies on certain tricks to retain tractability. Those tricks may not be immediately obvious, and may take time to become ‘acceptable’. So in the early 1970s, before New Keynesian theory was developed, we only had the traditional (ad hoc) Phillips curve. In those circumstances, are we seriously suggesting that academic macroeconomics should have ignored this relationship because it was ad hoc? Surely we want to model what we can see, and not just what we can microfound.

Second, an aggregate relationship can be much more robust than any particular microfoundation. You get the result that inflation depends on expected inflation next period from different models of price rigidity: Calvo contracts of course (which can be said to mimic the impact of menu costs - see below), but also fixed contracts. Why tie things down to one particular price rigidity story, rather than going directly to an aggregate relationship which is compatible with more than one microfoundation?
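For readers who want the Calvo case spelled out, the standard textbook derivation delivers, to a log-linear approximation, the NKPC below. Here θ is the probability a firm does not reset its price in a given period, β the discount factor, x the output gap, and λ the (assumed) elasticity linking real marginal cost to the gap.

```latex
\pi_t = \beta \, \mathbb{E}_t \pi_{t+1} + \kappa \, x_t,
\qquad
\kappa = \lambda \, \frac{(1-\theta)(1-\beta\theta)}{\theta}
```

The expected-future-inflation structure on the right hand side also emerges from overlapping fixed-length (Taylor) contracts, which is precisely the point about the aggregate relationship being compatible with more than one microfoundation.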

The answer is often consistency. You need to demonstrate that your microfoundations are consistent across the model as a whole. Now this is clearly true for something like consumption and labour supply decisions. However models of nominal inertia are pretty separable from the rest of a model, or we employ tricks to ensure that they are. So the reasons for having to go through the Calvo set-up every time anyone wants to use a NKPC hardly seem compelling.

Third, just how great are these microfoundations anyway? The Calvo contract story has the expected probability of changing prices exogenous to the firm, and it is not obvious why this is a reasonable thing to assume. In practice microfoundation modellers wave their hands as much as anyone, talking about the impact of menu costs without formally modelling them. As I argue here, this is an ‘as if’ form of justification which is an important step removed from establishing model consistency based on only deep parameters.

If all this were not compelling enough, a belief that the microfoundations methodology is the only way of ‘doing macroeconomics properly’ has unfortunate side effects. It moves modellers away from the real world and into an imaginary world of yeoman farmers and the like. So some will react to the last paragraph by agreeing, saying that they were always dubious that Calvo contracts represented proper microfoundations, and that it is better therefore to ignore price stickiness until a more acceptable microfoundation can be found.

This is a bizarre reaction. The issue with price rigidity is that there are too many potential microfoundations, not too few. (See this well known paper by Alan Blinder for example.) The problem is in reality twofold. First, each reason is quite complex, so they are difficult to incorporate easily in a model. Tractability is the problem, not evidence. Second, even if you can do this for one particular motivation for price stickiness (e.g. Calvo contracts for menu costs), you are ignoring all the others - you ignore heterogeneity. Is this because you do not think these other reasons matter, or is it because the model is complicated enough as it is? The latter, obviously. Yes, of course it is important to do microfoundations modelling with heterogeneity - but this often results in a loss of clarity. If this heterogeneity in practice leads to a very similar aggregate outcome, why not go straight to the aggregate?

We are in danger of fooling ourselves here. We choose Calvo contracts to microfound price rigidity not because we believe this is how the majority of firms behave, but because it is tractable, and gives us an aggregate relationship (the New Keynesian Phillips curve) that is consistent with the data. Yet we then go on to argue that if any other aspect of the model is internally inconsistent in terms of its microeconomics with Calvo contracts, the model should be rejected. Finally, we then claim that anyone who uses a New Keynesian Phillips curve without doing this microfoundations check is not doing serious macroeconomics.

So we choose a microfoundation because it gives us the aggregate answer suggested by the data, and not because of evidence that this microfoundation is appropriate. We then insist that everything in that model has to be consistent with this microfoundation, and that our model has been built from only thinking about what individual agents do. Have we just replaced ad hoc with post hoc?

So, to summarise. The Phillips curve shows why microfoundations macro is really useful. However it also shows why microfoundations modelling should not be the only way of doing proper macroeconomics.

[1] Steinsson, J. (2003), ‘Optimal Monetary Policy in an Economy with Inflation Persistence’, Journal of Monetary Economics, 50, 1425-1456.


Thursday, 12 December 2013

New versus traditional Phillips curves and the Great Recession

For economists

One of the questions I like asking students is whether inflation following the Great Recession has tended to favour the New Keynesian (NK) Phillips curve or its more traditional counterpart (TK). I like it because it allows me to draw a nice diagram, and also because it shows students how difficult it is to discriminate between theories in macro.

So first the theory. The two competing models are
  • NK: Inflation at t = expected inflation at t+1 together with a term in the output gap
  • TK: inflation at t = inflation at t-1 together with a term in the output gap

I’m ignoring discounting in the NK Phillips curve for simplicity. Assume expectations about inflation are rational, and suppose the economy is hit by an unexpected recession of known size and duration. The two models predict the following:



With the traditional model, inflation gradually falls as the recession continues, and once it comes to an end, inflation remains lower. In the New Keynesian model, assuming that the inflation target is credible, inflation jumps down when the unexpected recession occurs, and then inflation gradually rises towards its target as the recession progresses. (We assume here that the output gap is constant while the recession lasts, again for simplicity.) For the NK model, it is critical in drawing this diagram that the extent of the recession is known – more on this below. The patterns implied by the two models are distinct, and this difference is likely to persist even if each curve becomes flatter as we approach zero inflation because of nominal wage rigidities.

To see what has actually happened, see this nice post from Gavyn Davies. The immediate aftermath of the recession looked more like the NK model: a sharp fall followed by a gradual rise. Furthermore I would argue that – once the recession hit – most people expected it to be large and persistent, so my diagram is not totally unrealistic. But if we look at what has been happening in the last two years, it looks much more like the TK model, with inflation gradually falling below target.

That is probably as far as we should go without doing some econometrics, and also taking account of some of the complexities discussed here. We could probably get any pattern to fit the NK model by imagining a suitable sequence of expectations errors. In addition if we are looking at consumer price inflation we should account for commodity price changes, which neither model does. (If we look at GDP deflators, you could tell a story where agents were initially expecting a recession lasting three or four years, and have been surprised that the recession has persisted ever since.) That is why some proper econometrics is required, preferably looking at both price and wage inflation together with expectations data. (If such studies have been done, please let me know.)

However perhaps I can suggest two possible conclusions that such studies could test more rigorously. First, the traditional Phillips curve, where expectations are implicitly naive and backward looking, does not look like a promising basis for explaining inflation following the recession. Either the New Keynesian model, or some combination of the two models, looks more like providing an adequate foundation for a reasonable explanation. Second, an explanation based on the NK model that treats the size and duration of the recession (whatever that turns out to be) as one initially unexpected but then completely anticipated shock is also going to struggle to fit the data.