Winner of the New Statesman SPERI Prize in Political Economy 2016
Showing posts with label New Keynesian. Show all posts

Friday, 17 February 2017

NAIRU bashing

The NAIRU is the level of unemployment at which inflation is stable. Ever since economists invented the concept people have poked fun at how difficult to measure and elusive the NAIRU appears to be, and these articles often end with the proclamation that it is time we ditched the concept. Even good journalists can do it. But few of these attempts to trash the NAIRU answer a very simple and obvious question - how else do we link the real economy to inflation?

One exception is those who suggest that all we need to control the economy effectively is a nominal anchor, like the money supply or the exchange rate. But to cut a long story short, attempts to put this into practice have never worked out well. The most recent attempt has been the Euro: just adopt a common currency, and inflation in individual countries will be forced to follow the average. This didn’t prove to be true for either Germany or the periphery, with disastrous results.

The NAIRU is one of those economic concepts which is essential to understand the economy but is extremely difficult to measure. Let’s start with the reasons for difficulty. First, unemployment is not perfectly measured (with people giving up looking for work who start looking again when the economy grows strongly), and may not capture the idea it is meant to represent, which is excess supply or demand in the labour market. Second, it looks at only the labour market, whereas inflation may also have something to do with excess demand in the goods market. Third, even if neither of these problems existed, the way unemployment interacts with inflation is still not clear.

The way economists have thought about the relationship between unemployment and inflation over the last 50 years is the Phillips curve. That says that inflation depends on expected inflation and unemployment. The importance of expected inflation means that simply plotting unemployment against inflation will always produce a mess. I remember that one of the earlier editions of Mankiw’s textbook had a lovely plot of this for the US that contradicted what I just said: it displayed clear ‘Phillips curve loops’. But it was always messier for other countries, and it got messier for the US once we had inflation targeting (as it should with rational expectations). See this post for details.
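In symbols, the expectations-augmented Phillips curve takes the textbook form below (the notation is mine; the post itself stays verbal):

```latex
\pi_t = \pi^{e}_t - \alpha\,(u_t - u^{*}) + \varepsilon_t, \qquad \alpha > 0
```

Here \pi^{e}_t is expected inflation, u^{*} is the NAIRU and \varepsilon_t collects supply shocks. Because \pi^{e}_t moves around over time, a raw scatter of \pi_t against u_t traces loops or a shapeless cloud rather than a stable curve.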

The ubiquity of the New Keynesian Phillips Curve (NKPC) in current macroeconomics should not fool anyone into thinking that we finally have the true model of inflation. Its frequency of use reflects the obsession with microfoundations methodology and the consequent downgrading of empirical analysis. We know that workers and employers don’t like nominal wage cuts, but that aversion is not in the NKPC. If monetary policy is stuck at the Zero Lower Bound, the NKPC says that inflation should become rather volatile, but that did not appear to happen, a point John Cochrane has stressed.
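For reference, the NKPC is usually written in the standard form below (again my notation, not the post’s):

```latex
\pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t
```

with discount factor \beta close to one and x_t the output gap (or real marginal cost); \kappa shrinks as Calvo price stickiness rises. Note the equation is symmetric in x_t, so the aversion to nominal wage cuts mentioned above has no place in it.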

I could go on and on, and write my own NAIRU bashing piece. But here is the rub. If we really think there is no relationship between unemployment and inflation, why on earth are we not trying to get unemployment below 4%? We know that the government could, by spending more, raise demand and reduce unemployment. And why would we ever raise interest rates above their lower bound?

I’ve been there, done that. While we should not be obsessed by the 1970s, we should not wipe it from our minds either. Then policy makers did in effect ditch the NAIRU, and we got uncomfortably high inflation. Around 1980, policy in the US and UK changed and unemployment increased, and inflation fell. There is a relationship between inflation and unemployment, but it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth.

A more subtle critique of the NAIRU would be to acknowledge that truth, but say that because the relationship is difficult to measure, we should stop using unemployment as a guide to setting monetary policy. Let’s just focus on the objective, inflation, and move rates according to what actually happens to inflation. In other words forget forecasting, and let monetary policy operate like a thermostat, raising rates when inflation is above target and vice versa.

That could lead to large oscillations in inflation, but there is a more serious problem. This tends to be forgotten, but inflation is not the only goal of monetary policy. Take what is currently happening in the UK. Inflation is rising, and is expected to soon exceed its target, but the central bank has cut interest rates because it is more concerned about the impact of Brexit on the real economy. That shows quite clearly that policy makers in reality target some measure of the output gap as well as inflation. And they are quite right to, because why create a recession just to smooth inflation?

OK, so just target some weighted average of inflation and unemployment like a thermostat. But what level of unemployment? There is a danger that this would always mean we tolerate high inflation whenever unemployment is low. We know that is not a good idea, because inflation would just go on rising. So why not target the difference between unemployment and some level which is consistent with stable inflation? We could call that level X, but we should try to be more descriptive. Any suggestions?
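The point that tolerating low unemployment makes inflation just go on rising can be illustrated with a toy accelerationist Phillips curve. This is a sketch with invented parameters, not a calibrated model:

```python
def simulate(u_policy, u_nairu=5.0, alpha=0.5, pi0=2.0, periods=10):
    """Inflation path if policy holds unemployment at u_policy.

    Toy accelerationist Phillips curve:
        pi_t = pi_{t-1} - alpha * (u_policy - u_nairu)
    """
    path = [pi0]
    for _ in range(periods):
        path.append(path[-1] - alpha * (u_policy - u_nairu))
    return path

stable = simulate(u_policy=5.0)   # at the NAIRU: inflation stays at 2%
rising = simulate(u_policy=3.0)   # below it: inflation climbs every period
```

Holding unemployment one point below u_nairu adds alpha percentage points to inflation every period, without limit; only at u_policy = u_nairu is inflation stable, which is all the NAIRU concept claims.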

Friday, 11 November 2016

Do New Keynesians assume full employment?

I’ve tried to write this as jargon-free as I can, but it is mainly for economists.

Nick Rowe claims that the New Keynesian model assumes full employment. I think he is onto something, but while he treats it as a problem with the model, I think it is a problem with the real world.

Nick sets up a simple consumption only economy with infinitely lived self employed workers, where we are at the steady state (=long run) level of consumption C(t)= output Y(t)=100. Then something bad happens (what macroeconomists call a shock):
“every agent has a bad case of animal spirits. There's a sunspot. Or someone forgets to sacrifice a goat. So each agent expects every other agent to consume at C(t)=50 from now on. ... So each agent expects his income to be 50 per period from now on. So each agent realises that he must cut his consumption to 50 per period from now on too, otherwise he will have to borrow to finance his negative saving and will go deeper and deeper into debt, till he hits his borrowing limit and is forced to cut his consumption below 50 so he can pay at least the interest on his debt.”

To put it more formally: each agent believes the steady state level of output has fallen. That in turn has to imply that everyone makes a mistake about the desired labour supply of everyone else. I assume this is a mistaken belief. If the belief was correct, then there is no problem: the steady state level of output should fall, because people want more leisure and less work.

Nick says that there is nothing a monetary authority that controls the real interest rate can do about this mistaken belief about the steady state, because changing real rates only changes the profile of consumption (shifting consumption from the future to the present) and not its overall level. That is correct. Furthermore, if each individual simply acts on what they believe to be true, and does not even bother to offer their pre-shock level of labour to others, then this is indeed a new equilibrium which the monetary authority can do nothing about.
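The claim about profile versus level follows from the log-linearised consumption Euler equation of the model (standard notation, mine):

```latex
c_t = E_t c_{t+1} - \sigma\,(r_t - \rho)
\;\;\Longrightarrow\;\;
c_t = \lim_{T\to\infty} E_t c_T \;-\; \sigma \sum_{s=t}^{\infty} E_t (r_s - \rho)
```

Solving forward, the path of real rates r_s only tilts consumption around the terminal term \lim E_t c_T. If every agent’s belief about that terminal (steady state) level drops to 50, no sequence of real rates the monetary authority can engineer will change it.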

But people and economies are not like that. Each agent wants to work at the pre-shock level, and will signal that in some way. They will see that the economy has widespread underemployment, and as a result they will revise their expectations about the steady state. I think Nick knows that, because he writes that the NK model needs “to just assume the economy always approaches full employment in the limit as time goes to infinity, otherwise our Phillips Curve tells us we will eventually get hyperinflation or hyperdeflation, and we can't have our model predicting that, can we.”

He treats that as if it were a problem, but I do not see that it is. After all, we have no problem with the idea that consumers will revise down their expectations of their future income if they unexpectedly find they are always in debt. Equally I have no problem with the idea that in Nick’s economy, with widespread and visible involuntary underemployment, consumers might think they had made a mistake about others’ desired labour supply.

Let me put it another way. In a single person economy we never get underemployment. The problem arises because in a real economy we need to form expectations about what others will do. But if there exist signals which help us get our expectations right, that should shift us out of a mistaken belief equilibrium.

Which gets us to why I think Nick is on to something about the real world. Suppose there is a shock like a financial crisis, which for the sake of argument just temporarily reduces demand by a lot and creates unemployment. Central banks cannot cut real interest rates enough to get rid of the unemployment because of the zero lower bound. Inflation falls, but because everyone initially thinks this is all temporary, and maybe also because of an aversion to nominal wage cuts, we get a modest fall in inflation.

Now suppose people erroneously revise down their beliefs about steady state output, to be more like current output. Suppose also that visible unemployment goes away, because firms substitute labour for capital (UK) or workers get discouraged (US). We get to what looks like Nick’s bad equilibrium. Even inflation moves back to target, because the current output gap appears to disappear. We no longer have any signals that there is an alternative, better for everyone, inflation at target equilibrium with higher output.

Now we could get out of this bad equilibrium, if some positive shock or monetary/fiscal policy raised demand ‘temporarily’ and people saw that, because firms substituted capital for labour, or discouraged workers came back into the labour force, inflation did not rise well above target. But suppose policymakers also start to hold these erroneous beliefs, and so do not try to get us out of the bad equilibrium. Could that describe the secular stagnation we are in?



Saturday, 24 September 2016

What is so bad about the RBC model?

This post has its genesis in a short twitter exchange storified by Brad DeLong

DSGE models, the models that mainstream macroeconomists use to model the business cycle, are built on the foundations of the Real Business Cycle (RBC) model. We (almost) all know that the RBC project failed. So how can anything built on these foundations be acceptable? As Donald Trump might say, what is going on here?

The basic RBC model contains a production function relating output to capital (owned by individuals) and labour plus a stochastic element representing technical progress, an identity relating investment and capital, a national income identity giving output as the sum of consumption and investment, marginal productivity conditions (from profit maximisation by perfectly competitive representative firms) giving the real wage and real interest rate, and the representative consumer’s optimisation problem for consumption, labour supply and capital. (See here, for example.)
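Written out under standard functional form assumptions (Cobb-Douglas technology and a representative household; the post’s verbal description does not commit to these), the model is:

```latex
\begin{aligned}
Y_t &= A_t K_t^{\alpha} L_t^{1-\alpha} && \text{production, } A_t \text{ stochastic}\\
K_{t+1} &= (1-\delta)K_t + I_t && \text{capital accumulation}\\
Y_t &= C_t + I_t && \text{national income identity}\\
w_t = (1-\alpha)\frac{Y_t}{L_t}, \quad & r_t = \alpha \frac{Y_t}{K_t} - \delta && \text{marginal productivity conditions}\\
\max_{\{C_t, L_t, K_{t+1}\}} \; & E_0 \sum_{t=0}^{\infty} \beta^t\, u(C_t, 1 - L_t) && \text{representative consumer's problem}
\end{aligned}
```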

What is the really big problem with this model? Not a problem along the lines of ‘I would want to add this’, but more like ‘I would not even start from here’. Let’s ignore capital, because capital does not appear in the bare bones New Keynesian model. If you were to point to the primacy given to shocks to technical progress, I would agree that is a big problem: all the behavioural equations should contain stochastic elements which can also shock this economy, but New Keynesian models do this to varying degrees. If you were to point to the assumption of labour market clearing, I would also agree that is a big problem.

However none of the above is the biggest problem in my view. The biggest problem is the assumption of continuous goods market clearing, aka fully flexible prices. That is the assumption that tells you monetary policy has no impact on real variables. Now an RBC modeller might respond: how do you know that? Surely it makes sense to see whether a model that does assume price flexibility could generate something like business cycles?

The answer to that question is no, it does not. It does not because we know it cannot for a simple reason: unemployment in recessions is involuntary, and this model cannot generate involuntary unemployment, but only voluntary variations in labour supply as a result of short term movements in the real wage. Once you accept that higher unemployment in recessions is involuntary (and the evidence for that is very strong), the RBC project was never going to work.

So how did RBC models ever get off the ground? Because the New Classical revolution said everything we knew before that revolution should be discounted because it did not use the right methodology. And also because the right methodology - the microfoundations methodology - allowed the researcher to select what evidence (micro or macro) was admissible. That, in turn, is why the microfoundations methodology has to be central to any critique of modern macro. Why RBC modellers chose to dismiss the evidence on involuntary unemployment I will leave as an exercise for the reader.

The New Keynesian (NK) model, although it may have just added one equation to the RBC model, did something which corrected its central failure: the failure to acknowledge the pre-revolution wisdom about what causes business cycles and what you had to do to combat them. In that sense its break from its RBC heritage was profound. Is New Keynesian analysis still hampered by its RBC parentage? The answer is complex (see here), but can be summarised as no and yes. But once again, I would argue that what holds back modern macro much more is its reliance on its particular methodology.

One final point. Many people outside mainstream macro feel happy to describe DSGE modelling as a degenerative research strategy. I think that is a very difficult claim to substantiate, and is hardly going to convince mainstream macroeconomists. The claim I want to make is much weaker, and that is that there is no good reason why microfoundations modelling should be the only research strategy employed by academic economists. I challenge anyone to argue against my claim.




Wednesday, 9 March 2016

Multipliers from Eurozone periphery austerity

For macroeconomists

We often see graphs relating fiscal consolidation to output growth since the Great Recession. Despite such scatter plots being very weak evidence, they appear to show that fiscal multipliers in the periphery countries like Greece have been very large indeed. At first sight this is not difficult to explain. These countries do not have their own monetary policies, and to the extent that fiscal consolidation reduces local inflation, real interest rates will rise, which increases the fiscal multiplier.

Unfortunately the basic New Keynesian (NK) model suggests this reasoning is incorrect, as Farhi and Werning show for temporary changes in government spending. While real rates might rise in the short run following a negative government spending shock, being in a monetary union ties down the long run price level in these economies. So, other things being equal, a negative government spending shock that reduces inflation now will be followed by higher inflation (compared to the no shock case) later, as the real exchange rate self-corrects. That in turn means that fiscal consolidation in the form of temporary cuts to government spending will produce a small rise in consumption for a period after the shock. (Consumption depends on the forward sum of future real interest rates, so as time progresses lower future rates dominate this sum.)
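The parenthetical remark at the end can be made explicit with the forward-solved Euler equation (my notation):

```latex
c_t = \bar{c} - \sigma \sum_{s=t}^{\infty} E_t\,(r_s - \bar{r})
```

In a monetary union the long-run price level is tied down, so a period of below-average inflation must be followed by above-average inflation: high real rates now imply low real rates later. Once the early high-rate periods drop out of the sum, the remaining lower future rates dominate, and consumption rises above baseline for a time.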

Of course that may simply mean that the basic NK model is incorrect or incomplete. As Farhi and Werning show in the same paper, with some credit constrained consumers we can get back to positive short term consumption multipliers, and therefore output multipliers greater than one. But it occurred to me, just before I was about to discuss this paper in an advanced macro graduate class, that the basic NK model could still give us what appeared to be large multipliers without such additions.

What we had in periphery countries was not just a government spending shock. In Ireland and Greece at least, that spending shock was preceded by a government debt shock. Either the government admitted to borrowing more than the official data suggested, or it had to bail out the banks. We can think of at least two types of response to a pure government debt shock. It could lead to a short sharp contraction in spending, in which case the analysis of Farhi and Werning would apply. Alternatively the government accepts that its debt will be permanently higher, and it only plans to cut spending or raise taxes to pay the interest on that additional debt.

In the latter case, assume that a significant proportion of that extra debt was owned overseas. We would have a permanent transfer from domestic to overseas citizens, and that would require a permanent depreciation in the real exchange rate. An increase in competitiveness is needed to make up for the permanently lower level of domestic demand that these transfers would produce. That in itself produces a terms of trade loss that impacts on consumption. But in addition in a monetary union, that depreciation would have to come about through a period of lower inflation, which would lead to a period in which real interest rates were higher. That in turn would decrease consumption, with the peak effect when the debt shock happened.

This is probably already written down somewhere, but it does explain why you could get apparently large multipliers in Greece and Ireland even if the simple NK model was broadly correct. What we had was a combination of a negative government spending shock and a positive government debt shock, and the latter could have led to significant falls in consumption. For these economies at least, true government spending multipliers may not be as large as they appear.

There I go again, choosing my economics to get the answer I want. Oh, wait ….



Wednesday, 4 November 2015

Tax cuts vs spending vs helicopters

Some people still seem unable, or maybe unwilling, to understand the basic New Keynesian (NK) model. Should it be surprising in this model that cutting taxes on wages at the Zero Lower Bound (i.e. when nominal interest rates are fixed) is contractionary? Of course not. The basic NK model contains an intertemporal consumption function that implies Ricardian Equivalence holds, so consumers save all of the extra income they get from a tax cut. But cutting taxes increases the incentive to work, thereby increasing labour supply, which through a Phillips curve decreases inflation. With a fixed nominal interest rate that implies higher real rates, which are contractionary. QED.
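The mechanism can be sketched numerically with the model’s Euler equation. This is an illustrative toy with invented numbers, not the NK model itself:

```python
SIGMA = 1.0   # intertemporal elasticity of substitution (assumed)
RHO = 0.02    # steady-state real interest rate (assumed)

def consumption_today(c_next, i_nominal, expected_inflation,
                      sigma=SIGMA, rho=RHO):
    """Log-linearised Euler equation: c_t = c_{t+1} - sigma*(i - E[pi] - rho)."""
    return c_next - sigma * (i_nominal - expected_inflation - rho)

i = 0.0       # nominal rate stuck at the Zero Lower Bound
c_next = 1.0  # next period's (log) consumption, held fixed for the comparison

# Baseline: expected inflation at 2%.
c_base = consumption_today(c_next, i, expected_inflation=0.02)

# Wage tax cut -> more labour supply -> via the Phillips curve,
# lower expected inflation (here 2% -> 1%) -> higher real rate at the ZLB.
c_taxcut = consumption_today(c_next, i, expected_inflation=0.01)
# c_taxcut < c_base: the tax cut is contractionary at the ZLB
```

With Ricardian consumers there is no offsetting income effect, so the higher real rate is the whole story, which is why the result looks so stark.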

Now the main thing not to like here is the consumption function and Ricardian Equivalence. Empirical evidence points strongly to a significant income effect, with a marginal propensity to consume around a third rather than zero. There are good theoretical reasons why you might get this result, even with totally rational consumers. But the implication that cutting taxes will lead to some increase in labour supply seems reasonable, and that will put some downward pressure on inflation. This is why pushing ‘structural reforms’ that expand the supply side in a liquidity trap can be counterproductive in the short term. (Things are more complex when you have a fixed exchange rate.)

Now you may quite reasonably believe that in the real world a positive income effect from a tax cut will raise demand by more than any increase in supply, so inflation will rise and real rates will fall. But it remains the case that as a stimulus measure directly raising demand through higher government spending does not generate this supply side offset. That the NK model has this feature seems like a virtue to me. The only point I have to add is that because helicopter money, as traditionally envisaged, is a lump sum transfer (everyone gets an equal amount, so it is independent of wages), you do not get this offsetting supply side effect. So for that reason helicopter money is more effective as a stimulus instrument in a liquidity trap than cutting taxes on wages.


Sunday, 7 June 2015

Austerity as a Knowledge Transmission Mechanism failure

In this post I talked about the Knowledge Transmission Mechanism: the process by which academic ideas do or do not get translated into economic policy. I pointed to the importance of what I called ‘policy intermediaries’ in this process: civil servants, think tanks, policy entrepreneurs, the media, and occasionally financial sector economists and central banks. Here I want to ask whether thinking about these intermediaries could help explain the continuing popularity amongst policy makers of austerity during a liquidity trap, even though there is an academic consensus behind the idea that austerity now would harm output.

In this post I looked at various reasons for thinking there was such a consensus, and one of them was that the framework generally used to analyse business cycles was the (New) Keynesian model. In this Keynesian framework cuts in government spending when interest rates are stuck at their lower bound clearly reduce output, with multipliers around one or more.

Where are these models used in anger? Among academics studying business cycles of course, but also within central banks. As far as I know, pretty well all the core models used by central banks to do forecasting and policy analysis are (New) Keynesian. (This includes the ECB.) An important point about the delegation of stabilisation policy to independent central banks is that expertise on business cycles has tended to shift from civil servants working in finance ministries to economists working in central banks.

Suppose you are a policy maker, who is genuinely concerned about what impact cuts in government spending might have in the period after the Great Recession. Where would you, or your civil servants, go to find expertise on this issue? Given the above, one obvious source, and perhaps the main source, would be independent central banks. One big advantage that independent central banks have over academics as a source for the received wisdom on this issue is that they are a single point of reference. No need to ask the many economists working in the central bank - just ask the central bank governor, who you would expect to distil the wisdom of their own economists.

Following this logic, you might expect to find central banks shouting the loudest about the dangers of austerity. After all, they get the rap for deflation, so anything that makes their job more difficult and uncertain when interest rates have hit their lower bound they should perceive as especially unwelcome. In front of committees of congress/select committees and the like, they should be banging on about how they cannot be expected to do their job if politicians continue to make life difficult by deflating demand. If they did this, some politicians (particularly on the centre left) would have had ammunition with which to counter homilies about Swabian housewives and maxed out credit cards. 

Of course this does not happen. The extent to which it does not happen varies among the major banks. In the US Bernanke did very occasionally (and somewhat discreetly) say things along these lines, but he seemed reluctant to do so in any way that might prove influential. In the UK Mervyn King is believed to have actively pushed for greater austerity, and the Bank of England has never to my knowledge suggested that austerity might compromise its control of inflation. The ECB, of course, always argues for austerity. It is one of the great paradoxes of our time how the ECB can continue to encourage governments to take fiscal or other actions that their own models tell them will reduce output and inflation at a time when the ECB is failing so miserably to control both.

So what is going on here? I think there are two classes of explanation, related to the distinction between the roles of interests and ideas in political economy (see Campbell here, for example). The first class talks about why the interests of the elite might favour austerity, and how these interests could be easily mediated through senior central bankers. It could also explore the interests of finance, and their close connections to central banks.

The second class might focus on ideas involving perceived threats to central bank independence. In the US, this might be nothing more than a desired quid pro quo whereby central bankers avoided mentioning fiscal policy so that politicians steer clear of comments on monetary policy. More seriously, amongst other central bankers it may represent a primal (and in the current context quite unjustified) fear of fiscal dominance: being forced to monetise debt and as a result losing both independence and control of inflation. In this context I often quote Mervyn King, who said “Central banks are often accused of being obsessed with inflation. This is untrue. If they are obsessed with anything, it is with fiscal policy.”

These ideas are in conflict with the message on fiscal policy coming from the central bank’s own models. In the UK and US, this contradiction is partly resolved by an excessive optimism about unconventional monetary policy. But it can also be resolved through overoptimistic forecasts, given that inflation targeting is in reality targeting future inflation. Although both these mechanisms come with a limited shelf life, they only need to operate for as long as austerity and the liquidity trap last.

The story I like to use about the Great Recession is that it exposed an Achilles’ heel in the consensus assignment that helped give us the Great Moderation. Yes, it was best to leave monetary policy to independent central banks, but the Achilles’ heel is that this would not work if interest rates hit their lower bound. In that situation fiscal policy had to come in as a backup for monetary policy. But if the analysis above is right the creation of independent central banks may have helped make that backup process much more difficult to achieve. By concentrating macroeconomic received wisdom in institutions that were predisposed to worry far too much about budget deficits, a huge spanner was thrown into the (socially efficient) working of the knowledge transmission mechanism.

  

Friday, 5 June 2015

The academic consensus on the impact of austerity

In discussing the forthcoming UK budget, Robert Peston writes:

“And before I am savaged (as I always am) by the Krugman crew of Keynesian economists for even allowing George Osborne's argument an airing, I am not saying that the net negative impact on our national income and living standards of cutting the deficit faster is less than their alternative route of slower so-called fiscal consolidation.

I am simply pointing out that there is a debate here (though Krugman, Wren-Lewis and Portes are utterly persuaded they've won this match - and take the somewhat patronising view that voters who think differently are ignorant sheep led astray by a malign or blinkered media).”

I do not want to disappoint, and as I was about to write something on the macroeconomic consensus on austerity anyway, let me oblige - not in savaging (I leave that to my American colleague in arms!), but in justifying why I think there is such a consensus in the places that count. By consensus I do not mean that everyone agrees - of course not - but that a very large majority do, which probably counts as consensus in economics.

Unfortunately we do not have a great deal of information on what academic economists as a whole think about austerity, but we do have two important survey results which are pretty conclusive. In the US, there is the IGM Forum, which regularly asks a group of distinguished economists - including many macroeconomists - their views on key policy issues. The last poll I have seen suggests that 82% of that panel thought the 2009 Obama stimulus had reduced unemployment, while only 2% disagreed. In the UK, the CFM survey asked a similar question to a smaller group of academic economists, most of whom are macroeconomists. Only 15% agreed that the austerity policies of the coalition government have had a positive effect on aggregate economic activity, while 66% disagreed. That consensus is not universal - it would not apply in Germany for example - but I doubt if anyone would disagree when I say that US economists call the shots as far as academic macroeconomics is concerned.

This is why economists the world over continue to teach Keynesian macro to undergraduates, and normally not as one ‘school of thought’ but rather as an initial approximation of how the economy actually works. As Amartya Sen so forcefully reminds us, the experience of the last hundred years has earned Keynesian theory this central role.

However we have another, more indirect, source of evidence. If you asked whether there was a standard model for analysing the business cycle among economists in academia and in policy making institutions, the answer would have to be the New Keynesian model. I want to include economists in central banks in particular because they have to put theories of the business cycle into practice on a regular basis. The key macromodels that central banks use to forecast and to analyse policy are Keynesian, and many are New Keynesian. Having worked a great deal with New Keynesian models myself, I also know what they imply about temporary changes in government spending in a liquidity trap (see this paper by Mike Woodford, for example). It may be possible to adapt these models to give you expansionary austerity, but no such adaptations command general or even partial support.

The models used by pretty well all central banks would therefore imply that temporary cuts in government spending were contractionary, absent any monetary policy offset. The governors of the central banks of the UK and US say this publicly. European central bank governors do not tend to say this, and instead continue to advocate austerity despite deflation. The reason why they might do this despite what their models tell them will be the subject of a later post, but I suspect it has little to do with conventional macroeconomics (but see also the point about German academic views above, and Sen’s article). If temporary cuts in government spending are contractionary in a liquidity trap, it follows that it is much better to delay this form of austerity.

I could add repeated arguments from economists at the IMF (e.g. here and most recently here), and now also the OECD (FT here, or ungated here). Of course there are some academic economists who continue to argue that the impact of austerity is expansionary or at least minor - I suspect there always will be, as long as this remains an intense political debate. They would be joined by many City economists, but they are neither unbiased nor the source of any particular expertise on this issue.

This is why, among economists with expertise, there is a clear majority view that fiscal austerity is significantly contractionary in a liquidity trap. That does not automatically mean that the 2010 policy switch was wrong, or that it had a big impact on the UK in 2010-2012: there are additional issues here which I have discussed many times. How damaging to the macroeconomy any additional austerity from Osborne will be also depends on whether we are or will be in a liquidity trap. But the fact that we might well be means that additional austerity now is a big mistake, and on this I believe the great majority of academic macroeconomists and those macroeconomists working in policy making institutions would agree.

As far as the media is concerned, I cannot believe that Robert Peston would disagree that a large section are ‘malign’, given how political this issue is. When I have talked to journalists who have some freedom to report the facts rather than what their editors want them to report, the argument I most often hear is that because this issue is political, they have to report it as a ‘debate’ come what may. I have never had the pleasure of talking to Robert Peston (he is welcome to email at any time), and I would be very interested in how he would respond to the evidence I have laid out. As for the public, the word sheep is his not mine. Would he really argue that the public are independently well informed on these matters, or unaffected by the media’s presentation of this and similar issues? Which is why I will continue to - as he might say - bang on about this, even though my audience is tiny in comparison to most journalists.



Wednesday, 25 March 2015

Why do central banks use New Keynesian models?

And more on whether price setting is microfounded in RBC models. For macroeconomists.

Why do central banks like using the New Keynesian (NK) model? Stephen Williamson says: “I work for one of these institutions, and I have a hard time answering that question, so it's not clear why Simon wants David [Levine] to answer it. Simon posed the question, so I think he should answer it.” The answer is very simple: the model helps these banks do their job of setting an appropriate interest rate. (I suspect because the answer is very simple this is really a setup for another post Stephen wants to write, but as I always find what Stephen writes interesting I have no problem with that.)

What is an NK model? It is an RBC model plus a microfounded model of price setting, and a nominal interest rate set by the central bank. Every NK model has its inner RBC model. You could reasonably say that these NK models were designed to help tell the central bank what interest rate to set. In the simplest case, this involves setting a nominal rate that achieves, or moves towards, the level of real interest rates that is assumed to occur in the inner RBC model: the natural real rate. These models do not tell us how and why the central bank can set the nominal short rate, and those are interesting questions which occasionally might be important. As Stephen points out, NK models tell us very little about money. Most of the time, however, I think interest rate setters can get by without worrying about these how and why questions.

Why not just use the restricted RBC version of the NK model? Because the central bank sets a nominal rate, so it needs an estimate of what expected inflation is. It could get that from surveys, but it also wants to know how expected inflation will change if it changes its nominal rate. I think a central banker might also add that they are supposed to be achieving an inflation target, so having a model that examines the response of inflation to the rest of the economy and nominal interest rate changes seems like an important thing to do.
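To make this concrete, here is a toy version of the three-equation NK setup (IS curve, Phillips curve, Taylor rule) that underlies this discussion. All parameter values are illustrative, and the model is deliberately far simpler than anything a central bank would actually use:

```python
# Toy three-equation New Keynesian model (all parameters illustrative):
#   IS curve:       y = -sigma * (i - pi_e - r_star)
#   Phillips curve: pi = pi_e + kappa * y
#   Taylor rule:    i = r_star + pi_target + phi_pi * (pi_prev - pi_target)
# With anchored expectations (pi_e = pi_target), the rule steers the
# real rate towards the natural rate of the inner RBC model (r_star).

sigma, kappa, phi_pi = 1.0, 0.3, 1.5   # hypothetical parameters
r_star, pi_target = 2.0, 2.0           # natural real rate, inflation target
pi_e = pi_target                       # anchored inflation expectations

pi_prev = 5.0                          # start with inflation above target
path = []
for _ in range(12):
    i = r_star + pi_target + phi_pi * (pi_prev - pi_target)  # Taylor rule
    y = -sigma * (i - pi_e - r_star)                         # IS curve
    pi = pi_e + kappa * y                                    # Phillips curve
    path.append((i, y, pi))
    pi_prev = pi

# The nominal rate converges to r_star + pi_target, so the real rate
# converges to the natural rate and the output gap closes.
print(path[-1])
```

Because phi_pi exceeds one (the Taylor principle), the deviation of inflation from target shrinks each period. The point is simply that the model exists to tell the bank what nominal rate delivers the real rate assumed in the inner RBC model.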

The reason why I expect people like David Levine to at least acknowledge the question I have just answered is also simple. David Levine claimed that Keynesian economics is nonsense, and had been shown to be nonsense since the New Classical revolution. With views like that, I would at least expect some acknowledgement that central banks appear to think differently. For him, like Stephen, that must be a puzzle. He may not be able to answer that puzzle, but it is good practice to note the puzzles that your worldview throws up.

Stephen also seems to miss my point about the lack of any microfounded model of price setting in the RBC model. The key variable is the real interest rate, and as he points out the difference between perfect competition and monopolistic competition is not critical here. In a monetary economy the real interest rate is set by both price setters in the goods market and the central bank. The RBC model contains neither. To say that the RBC model assumes that agents set the appropriate market clearing prices describes an outcome, but not the mechanism by which it is achieved.

That may be fine - a perfectly acceptable simplification - if when we do think how price setters and the central bank interact, that is the outcome we generally converge towards. NK models suggest that most of the time that is true. This in turn means that the microfoundations of price setting in RBC models applied to a monetary economy rest on NK foundations. The RBC model assumes the real interest rate clears the goods market, and the NK model shows us why in a monetary economy that can happen (and occasionally why it does not). 


Thursday, 24 July 2014

Synthesis!? David Beckworth's Insurance Policy

Could it be that New Keynesians and Market Monetarists can converge on a common policy proposal? I really like David Beckworth’s Insurance proposal against ‘incompetent’ monetary policy. Here it is.

1) Target the level of nominal GDP (NGDP)

2) “the Fed and Treasury sign an agreement that should a liquidity trap emerge anyhow [say due to central bank incompetence] and knock NGDP off its targeted path, they would then quickly work together to implement a helicopter drop. The Fed would provide the funding and the Treasury Department would provide the logistical support to deliver the funds to households. Once NGDP returned to its targeted path the helicopter drop would end and the Fed would implement policy using normal open market operations. If the public understood this plan, it would further stabilize NGDP expectations and make it unlikely a helicopter drop would ever be needed.”

In fact I like it so much that Jonathan Portes and I proposed something very like it in our recent paper. There we acknowledge that outside the Zero Lower Bound (ZLB), monetary policy does the stabilisation. But we also suggest that if the central bank thinks there is more than a 50% probability that they will hit the ZLB, they get together with the national fiscal council (in the US case, the CBO) to propose to the government a fiscal package that is designed to allow interest rates to rise above the ZLB.

There we did not specify what monetary policy should be, but speaking just for myself I have endorsed using the level of NGDP as an intermediate target for monetary policy, so there is no real disagreement there. A helicopter drop is a fiscal stimulus involving tax cuts plus Quantitative Easing (QE). Again we did not specify that the central bank had to undertake QE as part of its proposed package, but I think we both assumed that it would (outside the Eurozone, where for the moment we can just say it should). I think a central bank could suggest that an income tax cut might not be the most effective form of fiscal stimulus (compared to public investment, for example), but let’s not spoil the party by arguing over that.
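The logic of the insurance policy can be sketched as a simple trigger rule. To be clear, everything below (the function, the coefficients, the numbers) is a hypothetical illustration of the mechanism, not part of Beckworth's actual proposal:

```python
# Hypothetical sketch of an NGDP-level-target insurance rule.
# Coefficients (1.5 and 0.5) and all numbers are invented for
# illustration only.

def policy(ngdp, target_path_level, nominal_rate):
    """Return (new_rate, helicopter_transfer) given the NGDP level gap."""
    gap = ngdp / target_path_level - 1.0  # proportional shortfall from path
    new_rate = max(0.0, nominal_rate + 1.5 * gap)  # lean against the gap
    if new_rate == 0.0 and gap < 0:
        # At the zero lower bound with NGDP below its targeted path:
        # the Fed funds a transfer, the Treasury delivers it, and it
        # continues until NGDP returns to the path.
        transfer = -0.5 * gap * target_path_level
    else:
        transfer = 0.0
    return new_rate, transfer

# Normal times: NGDP on path, conventional policy, no helicopter drop.
print(policy(100.0, 100.0, 2.0))
# Liquidity trap: NGDP 5% below path with the rate already at zero,
# so the rule triggers a transfer.
print(policy(95.0, 100.0, 0.0))
```

If the public understood a rule like this, expectations of NGDP would be stabilised and, as Beckworth argues, the helicopter drop branch might never need to fire.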

Now this does not mean that Market Monetarists and New Keynesians suddenly agree about everything. A key difference is that for David this is an insurance against incompetence by the central bank, whereas Keynesians are as likely to view hitting the ZLB as unavoidable if the shock is big enough. However this difference is not critical, as New Keynesians are more than happy to try and improve how monetary policy works. The reason I wrote this post was not because of these differences in how we understand the world. It was because I thought New Keynesians and Market Monetarists could be much closer on policy than at least some let on. I now think this even more. 



Friday, 18 July 2014

Further thoughts on Phillips curves

In a post from a few days ago I looked at some recent evidence on Phillips curves, treating the Great Recession as a test case. I cast the discussion as a debate between rational and adaptive expectations. Neither is likely to be 100% right of course, but I suggested the evidence implied rational expectations were more right than adaptive. In this post I want to relate this to some other people’s work and discussion. (See also this post from Mark Thoma.)

The first issue is why look at just half a dozen years, in only a few countries. As I noted in the original post, when looking at CPI inflation there are many short term factors that may mislead. Another reason for excluding European countries which I did not mention is the impact of austerity driven higher VAT rates (and other similar taxes or administered prices), nicely documented by Klitgaard and Peck. Surely all this ‘noise’ is an excellent reason to look over a much longer time horizon?

One answer is given in this recent JEL paper by Mavroeidis, Plagborg-Møller and Stock. As Plagborg-Møller notes in an email to Mark Thoma: “Our meta-analysis finds that essentially any desired parameter estimates can be generated by some reasonable-sounding specification. That is, estimation of the NKPC is subject to enormous specification uncertainty. This is consistent with the range of estimates reported in the literature….traditional aggregate time series analysis is just not very informative about the nature of inflation dynamics.” This had been my reading based on work I’d seen.

This is often going to be the case with time series econometrics, particularly when key variables appear in the form of expectations. Faced with this, what economists often look for is some decisive and hopefully large event, where all the issues involving specification uncertainty can be sidelined or become second order. The Great Recession, for countries that did not suffer a second recession, might be just such an event. In earlier, milder recessions it was also much less clear what the monetary authority’s inflation target was (if it had one at all), and how credible it was.

How does what I did relate to recent discussions by Paul Krugman? Paul observes that recent observations look like a Phillips curve without any expected inflation term at all. He mentions various possible explanations for this, but of those the most obvious to me is that expectations have become anchored because of inflation targeting. This was one of the cases I considered in my earlier post: that agents always believed inflation would return to target next year. So in that sense Paul and I are talking about the same evidence.

Before discussing interpretation further, let me bring in a paper by Ball and Mazumder. This appears to come to completely the opposite conclusion to mine. They say “we show that the Great Recession provides fresh evidence against the New Keynesian Phillips curve with rational expectations”. I do not want to discuss the specific section of their paper where they draw that conclusion, because it involves just the kind of specification uncertainties that Mavroeidis et al discuss. Instead I will simply note that the Ball and Mazumder study had data up to 2010. We now have data up to 2013. In its most basic form, the contest between the two Phillips curves is whether underlying inflation is now higher or lower than in 2009 (see maths below). It is higher. So to rescue the adaptive expectations view, you have to argue that underlying inflation is actually lower now than in 2009. Maybe it is possible to do that, but I have not seen that done.

However it would be a big mistake to think that the Ball and Mazumder paper finds support for the adaptive expectations Friedman/Phelps Phillips curve. They too find clear evidence that expectations have become more and more anchored. So in this sense the evidence is all pointing in the same way.

So I suspect the main differences here come from interpretation. I’m happy to interpret anchoring as agents acting rationally as inflation targets have become established and credible, although I also agree that it is not the only possible interpretation (see Thomas Palley and this paper in particular). My interpretation suggests that the New Keynesian Phillips curve is a more sensible place to start from than the adaptive expectations Friedman/Phelps version. As this is the view implicitly taken by most mainstream academic macroeconomics, but using a methodology that does not ensure congruence with the data, I think it is useful to point out when the mainstream does have empirical support.


Some maths

Suppose the Phillips curve has the following form:

p(t) = E[p(t+1)] + a.y(t) + u(t)

where ‘p’ is inflation, E[..] is the expectations operator, ‘a’ is a positive parameter on the output gap ‘y’, and ‘u’ is an error term. We have two reference cases:

Static expectations: E[p(t+1)] = p(t-1)

Rational expectations: E[p(t+1)] = p(t+1) + e(t+1)

where ‘e’ is the error on expectations of future inflation and is random. Some simple maths shows that under static expectations, negative output gaps are associated with falling inflation, while under rational expectations they are associated with rising inflation. If we agree that between 2009 and today we have had a series of negative output gaps, we just need to ask whether underlying inflation is now higher or lower than in 2009. 
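For anyone who wants to see that maths run rather than take it on trust, here is a small Python simulation of the two reference cases; the slope and output gap values are purely illustrative:

```python
# A constant negative output gap under the two expectations assumptions
# (all numbers illustrative; expectation and model errors set to zero).
a, y = 0.3, -2.0   # Phillips curve slope, persistent negative output gap
T = 6

# Static expectations: p(t) = p(t-1) + a*y(t),
# so inflation falls each period the gap persists.
p_static = [2.0]
for _ in range(T):
    p_static.append(p_static[-1] + a * y)

# Rational expectations: p(t) = p(t+1) + a*y(t),
# i.e. p(t+1) = p(t) - a*y(t),
# so inflation rises each period the negative gap persists.
p_rational = [2.0]
for _ in range(T):
    p_rational.append(p_rational[-1] - a * y)

print(p_static)    # inflation falling
print(p_rational)  # inflation rising
```

So the contest between the two curves reduces, in its most basic form, to whether underlying inflation ends up lower (static) or higher (rational) than where it started, which is exactly the question posed in the text.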



Thursday, 1 May 2014

Looking for the flimflam

According to Thomas Palley, Paul Krugman’s and my defence of mainstream economics is “pure flimflam”. The definition of flimflam is ‘nonsensical or insincere talk’ or ‘a confidence trick’. Nonsensical I guess is possible, but insincere or a confidence trick it most definitely is not. But I guess this is no worse than ‘pure drivel’, which is how Lars Syll once described one of my posts.

Despite all this, I would like to have a debate about macroeconomics with heterodox economists, and have tried to initiate one in the past. A debate that gets beyond generalities (and name calling), and talks about actual macroeconomic mechanisms and what policy makers should do. This is because I’m genuinely puzzled about what I am doing that heterodox economists find so wrong.

According to Thomas Palley, New Keynesian economics “retained the nonsense of marginal productivity distribution theory while discarding the foundations of Keynesian economics”. We “use price and nominal wage rigidity to explain cyclical unemployment”. Now I admit to not being terribly concerned about what Keynes really meant, but I’m at a loss to see marginal productivity distribution theory at the centre of New Keynesian theory. What New Keynesian theory does need is that falls in real interest rates stimulate aggregate demand (i.e. some form of IS curve), and in the basic model this comes from changing the intertemporal pattern of consumption. Is that wrong? What explains cyclical unemployment is real interest rates being at the wrong level. Movements in wages and prices get us out of a recession because they lead the central bank to reduce real interest rates. At the zero bound they cannot do that, and in those circumstances wage and price flexibility could make things worse. Is that wrong?

Now it is true that the standard New Keynesian model assumes a labour market that clears, but a model that replaces this with labour market imperfect competition would not behave very differently. That is what I actually teach. Equally the basic New Keynesian model assumes rational expectations, but if we want to change this to a case where agents make predictable errors that is easy enough to do. I also teach this to undergraduates. (For a pretty good guide to what I teach, see this paper by Carlin and Soskice. I use their textbook.)

Which brings us back to teaching. As I said in my original post, I would like to make students aware of heterodox critiques, but I want to point out where in my mainstream account that critique would enter. (I think what I teach is pretty close to how many central bankers think, if not the rest of 'my tribe'!)  I believe I can do that for what I call anti-Keynesians (freshwater or whatever), although I remain at a loss as to how flexible prices can get us out of a liquidity trap when central banks target inflation (see here and here). So where (in terms of macroeconomic mechanisms) do I locate the heterodox (post-Keynesian or whatever) critique of New Keynesian analysis? This is not an insincere or trick (flimflam) question.   

Friday, 14 February 2014

Are New Keynesian DSGE models a Faustian bargain?

Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events.

Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build?

One way of looking at this (and I’ll argue at the end that it misses a key element) is to think about the individual components of models. If you do this, the Faustian bargain story looks implausible. Let’s start with the mainstream before the New Classical revolution. This was the famous post-war neoclassical synthesis popularised by Paul Samuelson, which integrated traditional Keynesian and Classical models in a common overall framework. While prices were sticky we were in a Keynesian world, but once prices had adjusted the world was Classical.

In terms of components, the RBC model is just the classical macromodel with two key additions. The first is rational expectations. The second is intertemporal optimisation by agents. (In non-jargon, it takes seriously the ability of agents to choose when they consume by saving or borrowing, rather than simply assuming they just consume a fixed proportion of their current income. This is often called the consumption smoothing model, because typically consumers smooth consumption relative to income e.g. by saving for retirement.) In both cases I do not think Keynesian economists were forced to adopt these ideas against their better judgement. Instead I think quite the opposite is true: both ideas were readily adopted because they appeared to be a distinct improvement on previous methods.

The key point here is that they were an improvement on previous practice. It does not mean that economists thought they were the final answer, or indeed that they were half adequate answers. Instead they were a better foundation to build on compared to what had gone before. I’ve argued this for rational expectations before, but I also think it is true for intertemporal consumption. I find it very difficult to think about more complex ideas, like liquidity constraints or precautionary saving, without starting with consumption smoothing.

I have talked about the real world events that convinced me of this, but here let me make the same point in a more informal way. When teaching on the Oxford masters programme, I give students a question. If they won a large sum, would they spend it over the next year, over the next few years, spend a significant proportion now but save the rest, or save nearly all? The last response is the answer given by the simple intertemporal model, but I argue that the first two responses make perfect sense if you are a credit constrained student. However I tell my audience that those who gave the first answer are not intending to do a PhD after finishing the masters, while those who gave the second are, because they are expecting the credit constraint to last longer. The serious point is that credit constrained consumers do not automatically consume all of a temporary increase in income. If the period over which income is higher is less than the period over which they expect to be constrained, they will smooth their additional consumption.
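The arithmetic behind that classroom example is very simple; the figures below are invented purely for illustration:

```python
# Hypothetical illustration of the windfall example: a constrained
# consumer cannot borrow against later income, but can save the
# windfall, so they smooth it over the years they expect the
# constraint to bind. All numbers are made up.

def extra_consumption_per_year(windfall, constrained_years):
    """Spread a one-off windfall evenly over the expected
    remaining years of the credit constraint."""
    return windfall / constrained_years

# Student stopping after the masters: spend it all this year.
print(extra_consumption_per_year(10_000, 1))   # 10000.0
# Student planning a PhD expects to be constrained for ~4 more years.
print(extra_consumption_per_year(10_000, 4))   # 2500.0
```

The point of the sketch is the one in the text: even a credit constrained consumer smooths a temporary income gain whenever the income period is shorter than the period the constraint is expected to bind.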

So, in terms of the components of New Keynesian models, I can see little that most modellers would love to junk if it wasn’t for those nasty New Classicals. [1] But what this ignores is methodology, and the fact that the RBC model is a microfounded Classical model. (By microfounded, I mean that every macroeconomic relationship has to be formally derived from optimisation by individual agents.) Yet here again, I doubt that most New Keynesian modellers adopted the microfoundations perspective against their better judgement. Instead I suspect most saw the power of the microfoundations approach (in analysing consumption, in particular), recognised the dangers in ad hoc theorising about dynamics (as in the traditional Phillips curve), and thought there was no contest.

The more interesting question is whether this has turned out to be a Faustian pact between macroeconomics and microfoundations ex post. To be more precise, by putting all our macroeconomic model building eggs in one microfounded basket, have we significantly slowed down the pace at which macroeconomists can say something helpful about the rapidly changing real world? That is a question I have written a lot about (e.g. here, and here) and no doubt will write more, but the key point I want to make now is this. If there was a Faustian bargain, I think we should acknowledge that most Keynesian economists agreed to it for good reasons, and that they were not forced into it by others.



[1] I must add a caveat here, although it is rather controversial. I think one sense in which RBC models have cast an annoying shadow is the idea that we must have models in which labour supply is endogenous. Often it would make things simpler if we could assume a fixed labour supply, and my own view is that for many issues we would lose little empirical relevance if we did so. Here I do think New Keynesians are too deferential to the always silly idea of trying to explain movements in unemployment as simply a labour supply choice.   

Saturday, 8 February 2014

Speaking as an Old New Keynesian …

Labels are fun, and get attention. They can be a useful shorthand to capture an idea, or related set of ideas. But is there really a New Old Keynesian school of thought? I don’t think so. Here are a couple of bold assertions, which I think I believe, and which I will try to justify. First, in academic research terms there is only one meaningful division, between mainstream and heterodox. (Of course the heterodox divide themselves up into various ‘schools’, but their size is small and their influence is also small.) Second, in macroeconomic policy terms I think there is only one significant division, between mainstream and anti-Keynesians.

But before trying to justify these statements, I want to defend being a killjoy. As I said, putting people into categories can be fun - why spoil it by taking the exercise seriously? Two reasons. First, I want to make some points which do not get said often enough on economics blogs. Second, labels can lead to confusion or worse. Just think about the label Keynesian. Any sensible definition would involve the words sticky prices and aggregate demand. Yet there are still some economists (generally not academics) who think Keynesian means believing fiscal rather than monetary policy should be used to stabilise demand. Fifty years ago maybe, but no longer. Even worse are non-economists who think being a Keynesian means believing in market imperfections, government intervention in general and a mixed economy. (If you do not believe this happens, look at the definition in Wikipedia.)

So what do I mean by a meaningful division in academic research terms? I mean speaking a different language. Thanks to the microfoundations revolution in macro, mainstream macroeconomists speak the same language. I can go to a seminar that involves an RBC model with flexible prices and no involuntary unemployment and still contribute and possibly learn something. Equally an economist like John Cochrane can and does engage in meaningful discussions of New Keynesian theory (pdf). [1]

Of course all academic macroeconomists have their own idea of how the world actually works and will probably do research using models that roughly conform to that. Yet I think you would be hard put to draw meaningful boundaries here. Take John Quiggin’s Old/New Keynesian post, for example (which followed this from Tyler Cowen). He characterises New New Keynesians as those still working with DSGE models who are now attempting to add financial frictions. He wants to argue (and labels New Old Keynesian) the idea that following this recession, there “is no unique long-run equilibrium growth path, determined by technology and preferences, to which the economy is bound to return. In particular, the loss of productive capacity, skills and so on in the current depression is, for all practical purposes, permanent.” Now listening with my mainstream ears, this sounds like a combination of hysteresis effects and endogenous growth, which sounds interesting. Yet I also think we can learn a lot from adding financial frictions to DSGE models. Does this make me a middle aged Keynesian?

What I suspect Quiggin is getting at here is that New New Keynesians are still following a microfoundations research programme (using DSGE), whereas he would not. Now many mainstream macroeconomists, myself included, can be pretty critical of the limitations that this programme can place on economic thinking, particularly if it is taken too literally by microfoundations purists. But like it or not, that is how most macro research is done nowadays in the mainstream, and I see no sign of this changing anytime soon. (Paul Krugman discusses some reasons why here.) My own view is that I would like to see more tolerance and a greater variety of modelling approaches, but a pragmatic microfoundations macro will and should remain the major academic research paradigm.

When it comes to macroeconomic policy, and keeping to the different language idea, the only significant division I see is between the mainstream macro practised by most economists, including those in most central banks, and anti-Keynesians. By anti-Keynesian I mean those who deny the potential for aggregate demand to influence output and unemployment in the short term. [2] Why do I use the term anti-Keynesian rather than, say, New Classical? Partly because New Keynesian economics essentially just augments New Classical macroeconomics with sticky prices. But also because as far as I can see what holds anti-Keynesians together isn’t some coherent and realistic view of the world, but instead a dislike of what taking aggregate demand seriously implies.

What is incoherent about believing in pretty flexible prices, you might ask? Two things. First, as I have argued before, with the demise of the Pigou effect flexible prices do not get you out of a deficient demand problem at the zero lower bound when there are inflation targets. Second, the evidence that prices are not flexible is so overwhelming that you need something else to drive you to ignore this evidence. Or to put it another way, you need something pretty strong to drive politicians or economists to the ‘schoolboy error’ that is Say’s Law, which is why I think the basis of the anti-Keynesian view is essentially ideological.

Of course there are a huge number of policy debates in macroeconomics, and you can attach labels to those if you like. Should we use fiscal stimulus at the zero lower bound, for example. Was austerity a good idea? However, anti-Keynesians aside, I don’t think these debates reveal large fault lines in economic thinking. Economists do not rigidly line up on one side or another, and some even change their mind over time as the facts change. It is possible to have serious discussions about the effectiveness of monetary policy, the dangers of high debt etc. The only group where a discussion can fail to get off the ground is with those who contend that aggregate demand is always irrelevant.

[1] Heterodox economists might argue that they have to be bilingual - they are able to speak mainstream, even if they prefer not to among friends. Those more critical might detect a reluctance to get past certain words.


[2] An alternative, and more positive, way to define the anti-Keynesian group is that they believe macroeconomic outcomes are essentially efficient, and so intervention by a government (or central bank, beyond providing a nominal anchor) is not required. This difference might be important in placing someone like Roger Farmer (who I’m glad to see now has a blog), who is not an anti-Keynesian under this positive definition, but might be using my more negative criteria.

Thursday, 12 December 2013

New versus traditional Phillips curves and the Great Recession

For economists

One of the questions I like asking students is whether inflation following the Great Recession has tended to favour the New Keynesian (NK) Phillips curve or its more traditional counterpart (TK). I like it because it allows me to draw a nice diagram, and also because it shows students how difficult it is to discriminate between theories in macro.

So first the theory. The two competing models are
  • NK: Inflation at t = expected inflation at t+1 together with a term in the output gap
  • TK: inflation at t = inflation at t-1 together with a term in the output gap

I’m ignoring discounting in the NK Phillips curve for simplicity. Assume expectations about inflation are rational, and suppose the economy is hit by an unexpected recession of known size and duration. The two models predict the following:



With the traditional model, inflation gradually falls as the recession continues, and once it comes to an end, inflation remains lower. In the New Keynesian model, assuming that the inflation target is credible, inflation jumps down when the unexpected recession occurs, and then inflation gradually rises towards its target as the recession progresses. (We assume here that the output gap is constant while the recession lasts, again for simplicity.) For the NK model, it is critical in drawing this diagram that the extent of the recession is known – more on this below. The patterns implied by the two models are distinct, and this difference is likely to persist even if each curve becomes flatter as we approach zero inflation because of nominal wage rigidities.
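The two paths in the diagram are easy to generate numerically. Here is a sketch with illustrative parameter values, assuming a constant output gap y over a recession of known length N:

```python
# NK versus TK inflation paths for an unexpected recession of known
# size and duration; all parameter values are illustrative.
a, y, N = 0.2, -2.0, 8   # Phillips curve slope, output gap, recession length
pi_target = 2.0

# TK: p(t) = p(t-1) + a*y, so inflation falls throughout the recession
# and remains at its lower level once the recession ends.
tk = [pi_target]
for _ in range(N):
    tk.append(tk[-1] + a * y)

# NK with a credible inflation target, solved backwards from the end
# of the recession (inflation returns to target once it is over):
# p(t) = p(t+1) + a*y. Inflation jumps down on impact, then climbs
# gradually back towards target as the recession proceeds.
nk = []
p = pi_target
for _ in range(N):
    p = p + a * y          # one more recession period still to come
    nk.append(p)
nk.reverse()               # nk[0] is the initial jump down

print(tk)   # falls monotonically from target
print(nk)   # starts well below target, rises back towards it
```

With these numbers the NK path jumps to pi_target + N*a*y on impact and recovers by a*y each period, which is exactly the pattern described above; the contrast with the steadily falling TK path is what makes the test case informative.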

To see what has actually happened, see this nice post from Gavyn Davies. The immediate aftermath of the recession looked more like the NK model: a sharp fall followed by a gradual rise. Furthermore I would argue that – once the recession hit – most people expected it to be large and persistent, so my diagram is not totally unrealistic. But if we look at what has been happening in the last two years, it looks much more like the TK model, with inflation gradually falling below target.

That is probably as far as we should go without doing some econometrics, and also taking account of some of the complexities discussed here. We could probably get any pattern to fit the NK model by imagining a suitable sequence of expectations errors. In addition if we are looking at consumer price inflation we should account for commodity price changes, which neither model does. (If we look at GDP deflators, you could tell a story where agents were initially expecting a recession lasting three or four years, and have been surprised that the recession has persisted ever since.) That is why some proper econometrics is required, preferably looking at both price and wage inflation together with expectations data. (If such studies have been done, please let me know.)

However perhaps I can suggest two possible conclusions that such studies could test more rigorously. First, the traditional Phillips curve, where expectations are implicitly naive and backward looking, does not look like a promising basis for explaining inflation following the recession. Either the New Keynesian model, or some combination of the two models, looks more like providing an adequate foundation for a reasonable explanation. Second, an explanation based on the NK model that treats the size and extent of the recession (whatever that turns out to be) as one initially unexpected but then completely anticipated shock is also going to struggle to fit the data.