Winner of the New Statesman SPERI Prize in Political Economy 2016

Friday, 17 June 2016

Postgrad teaching and Keynesian economics: a survey

This post is joint with André Moreira, an economist at the Bank of England

Three years ago one of us got into a discussion with Paul Krugman and Brad DeLong about how dominant, or otherwise, the New Keynesian model of business cycles was in academia. That post contained a footnote with an idea: why not look at what the top schools actually teach their postgraduates, to see whether a large proportion of students are not being taught any Keynesian economics? We came together with the idea of doing just that.

We initially thought that we could do this by ‘simply’ contacting the academics teaching core macro courses and asking them for their syllabus. You can probably guess the problem we encountered (though thanks to those who did respond). So instead we decided to survey recent students. That worked much better, with one exception we note below. (The results are written up in more detail as a short paper – see the top of the main list from this link).

We decided to ask a simple question: “Approximately what percentage of the core Macro sequence that you received covered models involving price and/or wage rigidities (including New Keynesian models)? Please round your answer to the nearest 5% mark.” We sent this question to graduate students at the top 15 schools during 2014/15 [1]. Here are the results:


Table 2: Survey results

The one school missing is Chicago. Contact information for these graduates is not publicly available, and we were told by both the course administrator and the academic in charge of the course that they have a policy of not divulging the email addresses of postgraduates, even after we made it clear what we wanted them for. We also received no information from our earlier requests for a syllabus.

Of course no simple question like this is perfect: at NYU, for example, the first year teaching focuses on methods, and Keynesian analysis is covered in a later (but optional) course. We found no major discontinuities according to the year students entered the programme (a few variations are noted in the paper), and the information matched the syllabus information from those who had been good enough to respond to our first survey. We sent the results to course teachers a few months ago for any comments or corrections.

The main message we draw from these results is that, at least among the top schools, there is no major divide between a group that teaches Keynesian economics and those that do not. There is a large amount of variation among schools, but there is little evidence of the marked bifurcation among top universities that some discussions of a saltwater/freshwater divide might suggest. We suspect opinions will differ on whether the variation we still found is natural in a discipline like economics, or an indication that something is wrong.

We would be interested in any thoughts about whether it would be worth taking this analysis further in any way.


[1] Top 15 according to the IDEAS ranking of economic institutions as of September 2013.

Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
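The step from the traditional Phillips curve to the New Classical conclusion can be written out in a few lines. This is the standard textbook sketch in the usual notation (the coefficient α and the error term are the conventional ones):

```latex
% Traditional Phillips curve: inflation at time t depends on the
% expectation of *time-t* inflation and the unemployment gap
\pi_t = E_{t-1}\pi_t + \alpha\,(u^{*} - u_t) + \varepsilon_t,
\qquad \alpha > 0
% Under rational expectations, take period t-1 expectations of both sides:
E_{t-1}\pi_t = E_{t-1}\pi_t + \alpha\,E_{t-1}(u^{*} - u_t)
% which forces E_{t-1}(u_t - u^{*}) = 0: deviations of unemployment from
% its natural rate are unforecastable and purely random, leaving no
% systematic role for Keynesian policy.
% The later New Keynesian curve escapes this by dating expectations
% forward rather than questioning rationality:
\pi_t = \beta\,E_t\pi_{t+1} + \kappa\,(y_t - y^{*}_t)
```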

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First, two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analysis of small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived in both the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have done), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Thursday, 21 May 2015

Ferguson tries again

Perhaps stung by the widespread criticism of the way he treated data in his original FT op-ed piece, Professor Niall Ferguson has had another go. This new piece is distinctly better than the original. For example, he now acknowledges that “From 2010 to 2015, average inflation-adjusted weekly earnings fell more than under any postwar government.” Let me focus here on his central claim, which is that the economy did far better than Keynesians had predicted, and that Keynesians have refused to acknowledge this. (There are some other problematic points in the piece, but they are largely distractions from the main idea.)

Rather than get into the pointless business of comparing quotes, let us do this the academic way, which is to see what the Keynesian model says. Recall this is the model that pretty well all central banks use to regulate the economy. Everyone agrees that UK austerity was at its most intense in the first two years of Osborne’s Chancellorship. The UK Office for Budget Responsibility, which does the number crunching for Osborne, calculates using standard (although conservative in the current context) Keynesian multipliers that austerity in those first two years reduced GDP growth by 1% in each year. That is the basis for my calculation that austerity cost the average UK household at least the equivalent of £4,000. The OBR also calculate that overall UK austerity had no significant impact on growth in subsequent years.
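As a rough check on where a figure like £4,000 per household can come from, here is the back-of-the-envelope arithmetic. The GDP level, household count and number of years below are my illustrative assumptions, not the OBR’s inputs:

```python
# Back-of-the-envelope version of the austerity cost calculation.
# All inputs are illustrative assumptions, not official OBR figures.
gdp = 1.6e12          # UK annual GDP, roughly 1.6 trillion pounds
households = 26.5e6   # roughly 26.5 million UK households

# Growth reduced by 1 percentage point in each of the first two years,
# with no significant effect afterwards: the *level* of GDP is about 1%
# below baseline in year one and about 2% below in each later year.
gdp_gap = [0.01, 0.02, 0.02, 0.02]    # four years of lost output
cumulative_loss = sum(gdp_gap) * gdp  # output never produced

loss_per_household = cumulative_loss / households
print(f"{loss_per_household:,.0f} pounds per household")
```

On these assumptions the loss comes out a little above £4,000, which is why the “at least” qualifier matters: extending the window or using a larger multiplier only raises the number.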

Does the data show that Keynesian assessment is clearly wrong? Ferguson includes an IMF chart in his article, and I commend the fact that it uses GDP per head. Unfortunately it also includes a forecast, which is as reliable as most macro forecasts, so just focus on the part that is data. In 2011 and 2012 the UK flatlined, and only started recovering when the drag imposed by austerity came to an end. This is entirely consistent with my and the OBR’s analysis. (For the record, I also said three years ago that we would see a recovery in subsequent years.) Anti-Keynesians like to point to UK growth from 2013 onwards being healthy compared to other countries, but this is also entirely consistent with Keynesian analysis, because if you look at underlying primary balances UK fiscal austerity was much weaker in those years than in the US or the Eurozone.

General government underlying primary balances: source OECD Economic Outlook

In a way Ferguson acknowledges all this, because he lists other potential reasons why UK growth might have been weak in those early years. Fair enough, except that his central claim is that the numbers show the Keynesian analysis of austerity is wrong. But his chart shows that the numbers are in fact completely consistent with the Keynesian story. That does not prove the Keynesian story is right, but it sure does not show it is wrong!

Macroeconomics is a messy subject, because there are always so many things going on. For this reason the impact of policy is often not immediately apparent in the data, and some econometrics is required to sort things out. The unusual feature of the last few years across the UK and Eurozone is that events have largely followed the Keynesian script - no econometrics required. An election result does not change this fact.


Wednesday, 20 May 2015

On what I said before

This is something of a personal indulgence, but my excuse is obvious given recent posts.

I always smile when certain people claim that Keynesians said there would be no recovery. There are two reasons. The first is personal. A well known UK economist (clue: someone who the economics editor at a well known newspaper finds it best to ignore!) reminded me of this post I wrote three years ago. Here is an excerpt:

“Good spin is simple, and plays off real events. So the line “we have to reduce debt quickly because otherwise we will be like Greece, or Spain” works, while the response “but the Eurozone is special because member countries do not have their own central bank” is too technical to be an effective counter. In contrast the argument that Wolf and Portes put forward above – why not invest when it’s so cheap to borrow – is effective, which is why it is dangerous. So of course is “austerity is stifling growth”, as long as growth is negative or negligible. However, come 2015, the spin “we have done the hard work and the strategy has worked” will accord with (relatively) strong growth, while talk of output gaps and lost capacity will have less resonance. True, unemployment will still be high, but not many of the unemployed are Conservative voters, and the immunising spin about lack of willingness to work can be quite effective.

Will the strategy, and the associated spin, work? The risk that growth will not be respectable in 2014 must be low: by then consumers and firms should have adjusted their borrowing and wealth sufficiently such that growth can resume.  If there is a chance that it might not be, I expect to see some measures in next year’s budget that do not conflict with the overriding ideological objective, such as incentives for firms to bring forward investment.”

I got two things wrong here. First, I did not foresee the continuing stagnation in productivity, and therefore that unemployment would fall rapidly despite at best average output growth. (Perhaps another piece of Cameron ‘luck’?) Second, I got the example of a budget stimulus measure wrong (we in fact got Help to Buy), because I was thinking like an economist and not a certain politician. But one thing I did not get wrong is that there would be a recovery. Indeed I was if anything expecting a rather stronger recovery than actually took place.

The reason I got this right, and the second reason I smile, is that this has nothing to do with any personal insight on my part. As Paul Krugman explains, I was just using the standard Keynesian model. What amuses me is how some anti-Keynesians seem to think that Keynesian ideas are embodied in the words of certain well known Keynesians, rather than in the journals, textbooks and central bank models. As Paul has rightly said many times, the basic ideas of Keynesian economics have been pretty well vindicated by macroeconomic developments in recent years. This, you might argue, is why they are in the textbooks and central bank models in the first place. 

Monday, 6 October 2014

More asymmetries: Is Keynesian economics left wing?

In the textbooks it is suggested that Keynesian economics is what happens when ‘prices are sticky’. Sticky prices sound like prices failing to equate supply and demand, which in turn sounds like markets not working. Hence whether you believe in Keynesian theory depends on whether you think markets work, so it obviously maps to a left/right political perspective.

Reality is rather different. Suppose we start from a position where firms are selling all they wish. Aggregate demand equals aggregate supply. If then aggregate demand for goods falls, perhaps because consumers or firms are trying to rebuild their balance sheets after a financial crisis, producers of these goods will start to reduce output, and lay off workers. The idea that they would ignore the fall in demand and just carry on producing the same amount is ludicrous. So output appears to be influenced by aggregate demand at least in the short run, which is at the heart of what most economists think of as Keynesian theory.

So where do sticky prices come in? Here we have to go back to the textbooks, and to an imaginary world where the monetary authority fixes the money supply. Firms, in an effort to stimulate demand for their goods, cut prices. Lower prices mean people do not need to hold so much money to buy goods. However if the nominal money supply is fixed, interest rates will fall to encourage people to hold more money. The textbooks encourage us to think of a market for money, with interest rates as the price that equates supply and demand. Lower interest rates provide an incentive to consumers and firms to increase demand, which in turn raises output.

Now suppose that firms carry on cutting prices as long as they are selling less than they would like. The process just described will continue, with interest rates getting lower and aggregate demand rising in response. The process stops when firms stop cutting prices, which means aggregate demand has increased back to its original level. Suppose further that prices adjusted very quickly. This mechanism would work very quickly, so we would only observe aggregate demand being below supply for very short periods. If prices were extremely flexible, we could ignore aggregate demand altogether in thinking about output. Hence aggregate demand matters only if ‘prices are sticky’.
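The adjustment story in the last two paragraphs can be put in a toy simulation. The linear money demand and goods demand schedules, and all parameter values, are my own illustrative assumptions, chosen only to make the mechanism visible:

```python
# Toy version of the textbook fixed-money-supply correction mechanism:
# prices fall -> real balances rise -> the interest rate falls -> demand
# rises, until demand is back at capacity. Functional forms and numbers
# are illustrative assumptions, not a calibrated model.
M = 100.0            # fixed nominal money supply
Ybar = 100.0         # aggregate supply (capacity output)
k, h = 1.0, 50.0     # money market equilibrium: M/P = k*Y - h*i
a, b = 120.0, 40.0   # goods demand: Y = a - b*i

P = 2.0              # start with prices 'too high': demand below supply
Y = 0.0
for step in range(500):
    i = (k * Ybar - M / P) / h   # rate that clears the money market
    Y = a - b * i                # aggregate demand at that rate
    if Y >= Ybar:                # sales back at capacity: cuts stop
        break
    P *= 0.99                    # firms cut prices while under-selling

print(round(P, 3), round(Y, 1))  # P ends near 1.33, Y back near 100
```

With the central bank setting i directly (the real-world case discussed below), the price-cutting loop becomes redundant: the policymaker can jump straight to the interest rate at which demand equals capacity.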

Note that this correction mechanism is quite complex, and some way from the simple microeconomic world of the market for a single good. But we need to move back to the real world again. Monetary authorities do not fix the money supply; they fix short term interest rates. So they are directly in charge of the correction mechanism that is at the heart of this story. If central banks had some way of knowing what aggregate supply was, and also had perfect knowledge of aggregate demand and how interest rates influenced it, they could make sure aggregate demand equalled supply without any need for prices to change at all. Equally, if prices were very flexible but the monetary authority always moved nominal rates in such a way as to fail to stimulate aggregate demand, aggregate demand and therefore output would not return back to equal aggregate supply. Demand would still matter, even with flexible prices.

Once you see things as they are in the real world, rather than as they are portrayed in the textbooks, the importance of aggregate demand (and therefore of Keynesian theory) is all about how good monetary policy is, and not about sticky prices. If monetary policy was perfect, then Keynesian theory would only be used by central banks in order to be perfect, and everyone else could ignore it. Of course for many good reasons monetary policy is not perfect, and so Keynesian theory matters.

We could re-establish the link between Keynesian theory and price flexibility by assuming the monetary authority follows a rule which would make policy perfect if and only if prices moved very fast, but the key point remains. The importance or otherwise of Keynesian theory depends on monetary policy. It is not about market failure. Keynesian economics is not left wing, but it is about how the economy actually works, which is why all monetary policymakers use it.

It is also common sense, which is why I’m often perplexed by those who dispute Keynesian ideas. Now maybe they are confused by the strange world portrayed in textbooks, but even if they think it is all about ‘sticky prices’, the evidence that prices are slow to adjust is overwhelming, so it is hard to dispute Keynesian theory on those grounds. Yet a whole revolution in macroeconomic theory was based around a movement that wanted to overthrow Keynesian ideas, and build models where this correction mechanism I described happened automatically. The people who built these models did not describe them as assuming monetary policy worked perfectly: instead they said it was all about assuming markets worked. As a description this was at best opaque and at worst a deliberate deception.

So why is there this desire to deny the importance of Keynesian theory coming from the political right? Perhaps it is precisely because monetary policy is necessary to ensure aggregate demand is neither excessive nor deficient. Monetary policy is state intervention: by setting a market price, an arm of the state ensures the macroeconomy works. When this particular procedure fails to work, in a liquidity trap for example, state intervention of another kind is required (fiscal policy). While these statements are self-evident to many mainstream economists, to someone of a neoliberal or ordoliberal persuasion they are discomforting. At the macroeconomic level, things only work well because of state intervention. This was so discomforting that New Classical economists attempted to create an alternative theory of business cycles where booms and recessions were nothing to be concerned about, but just the optimal response of agents to exogenous shocks.

So my argument is that Keynesian theory is not left wing, because it is not about market failure - it is just about how the macroeconomy works. On the other hand anti-Keynesian views are often politically motivated, because the pivotal role the state plays in managing the macroeconomy does not fit the ideology. Is this asymmetry odd? I do not think so - just think about the debate over climate change. Now of course it is true that there is a small minority of scientists who do not believe in manmade climate change and who are not politically motivated, and I’m sure the same is true for Keynesian theory. But to claim that the majority of anti-Keynesian views were innocent of ideological preference would be like – well, like trying to pretend that monetary policy has no role in stabilising the business cycle.

There are of course many differences between climate change denial and anti-Keynesian positions. One is the extent to which the antagonism has infiltrated the subject itself. Another is the extent to which the mainstream wants to deny this influence. I do wonder if the unreal view of monetary policy that remains in the textbooks does so in part so as to not offend a particular ideological position. I do know that macroeconomics is often taught as if this ideological influence was non-existent, or at least not important to the development of the discipline. I think doing good social science involves recognising ideological influence, rather than pretending it does not exist.

  

Friday, 22 August 2014

Types of unemployment

For economists

This post completes a discussion of a new paper by Pascal Michaillat and Emmanuel Saez. My earlier post outlined their initial model that just had a goods market with yeoman farmers, but with search costs in finding goods to consume. Here I want to look at their main model where there are firms, and a labour market as well as a goods market.

The labour market has an identical search structure to the goods market. We can move straight to the equivalent diagram to the one I reproduced in my previous post.



The firm needs ‘recruiters’ to hire productive workers (n). As labour market tightness (theta) increases, any vacancy is less likely to result in a hire. In the yeoman farmer model capacity k was exogenous. Here it is endogenous, and linked to n using a simple production function. Labour demand is given by profit maximisation. Employing one extra producer has a cost that depends on the real wage w, but also on the cost of recruiting and hence on labour market tightness theta. They generate revenue equal to the sales they enable, but only because by raising capacity they make a visit more likely to result in a sale. The difference between a firm’s capacity (k, now given by the production function and n) and its output (y) is what the paper calls ‘idle time’ for workers. As y < k, workers are always idle some of the time. So, crucially, profit maximisation determines capacity, not output. Output is influenced by capacity, but it is also influenced by aggregate demand.

Now consider an increase in the aggregate demand for goods, caused by - for example - a reduction in the price level. That results in more visits to producers, which will lead to more sales (trades = output). This leads firms to want to increase their capacity, which means increasing employment. (More employment raises capacity and so reduces goods market tightness x, but the net effect of an increase in aggregate demand is higher x, so workers’ idle time falls.) This increases labour market tightness and reduces unemployment.
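The idle-time mechanism can be illustrated with a stylized matching function. The Cobb-Douglas form and the numbers below are my assumptions for illustration, not the paper’s actual specification:

```python
# Stylized goods market with matching: output (sales) is a matching
# function of aggregate demand (visits v) and capacity k, so y < k and
# workers are partly idle. Cobb-Douglas is an illustrative choice.
def sales(v, k, eta=0.5):
    # constant-returns matching function; y < k whenever v < k
    return v**eta * k**(1 - eta)

k = 100.0                         # capacity, held fixed for the comparison
for v in (40.0, 60.0):            # a rise in aggregate demand: more visits
    y = sales(v, k)
    idle = 1 - y / k              # share of capacity not resulting in sales
    print(f"visits={v:.0f}  output={y:.1f}  idle share={idle:.2f}")
```

Even with capacity held fixed, higher demand raises sales and cuts the idle share; in the full model firms also respond by raising capacity (employment), which is the second channel described in the paragraph above.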

Here I think the discussion in the paper (bottom of page 28) might be a little confusing. It notes that in fixed price models like Barro and Grossman, in a regime that is goods demand constrained, an increase in demand will raise employment by exactly the amount required to meet demand (providing we stay within that regime). It then says that in their model the mechanism is different, because aggregate demand determines idle time, which in turn affects labour demand and hence unemployment. I would prefer to put it differently. In this model a firm responds to an increase in aggregate demand in two ways: by increasing employment (as in fix price models) but also by reducing worker idle time. The advantage of adding the second mechanism is that, as aggregate demand varies, it generates pro-cyclical movements in productivity. (There are of course other means of doing this, like employment adjustment costs.)

There are additional interesting comparisons with this earlier fixed price literature. In this model unemployment can be of three ‘types’: classical (w too high), Keynesian (aggregate demand too low), but also frictional. This model can also generate four ‘regimes’, each corresponding to some combination of real wage and price. However, unlike the fixed price models, these regimes are all determined by the same set of equations (there are no discontinuities), and are relative to the efficient level of goods and labour market tightness.

For me, this is one of the neat aspects of the model. We do not need to ask whether demand is greater or less than ‘supply’, but equally we do not presume that output is always independent of ‘supply’. Instead output is always less than capacity, just as unemployment (workers actually looking for work) is always positive. One way to think about this is that actual output is always a combination of ‘supply’ (capacity) and demand (visits), a combination determined by the matching function. This is what matching allows you to do. What this also means is that increases in supply in either the goods market (technical progress) or labour market will increase both output and employment, even if prices remain fixed. In Keynesian models additional supply will only increase output if something boosts aggregate demand, but that is not the case here. However, if the equilibrium was efficient before this supply shock, output will be inefficiently low after it unless something happens to increase aggregate demand (e.g. prices fall).

The aggregate demand framework in the model, borrowed from fixed price models, is rather old fashioned, but there is no barrier to replacing it with a more modern dynamic analysis of a New Keynesian type. Indeed, this is exactly what the authors have done in a companion paper.

The paper ends with an empirical analysis of the sources of fluctuations in unemployment. It suggests that unemployment fluctuations are driven mostly by aggregate demand shocks. (This is also well covered in their Vox post.) This ties in with the message of Michaillat’s earlier AER paper, where he argued that in recessions, frictional unemployment is low and most unemployment is caused by depressed labour demand. What this paper adds is a goods market where changes in aggregate demand can be the source of depressed labour demand, and therefore movements in unemployment.    



Tuesday, 15 July 2014

Aggregate demand and the labour market

I’m surprised I do not see a diagram like this more often:



The black lines are familiar from any introductory macro textbook. There is a labour supply curve. It is drawn such that a higher real wage encourages more labour supply, but I will consider what happens if it is vertical later. What I call the ‘unconstrained labour demand’ curve is what firms would choose to do if (a) labour gets less productive as output increases (which is why the curve slopes downward), and (b) firms can sell whatever they like. This is the classical model, and you will find this diagram in nearly every introductory textbook. With flexible wages the level of employment is determined at the intersection of the two curves, and we could call it the ‘natural’ employment level.

However (b) is a fiction. No (surviving) firm produces whatever it wants, irrespective of whether people want to buy its product. Aggregate demand can diverge from the level of output implied by the intersection of the two black curves for all kinds of reasons: Say’s Law does not hold. For simplicity, suppose the aggregate demand for goods in the economy is independent of the real wage, and there is no factor substitution. In that case employment is determined by my red line: it is determined only by aggregate demand. This is pretty clear in the case I have drawn, where we have deficient aggregate demand. Firms produce what they can sell: they would like to sell more, but there is no point producing more if no one wants to buy more.
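To pin down the diagram with numbers, here is a minimal sketch using an illustrative production function y = L^0.7 (the exponent, the wage, and the demand level are my assumptions, not from the post):

```python
ALPHA = 0.7  # illustrative diminishing-returns parameter

def classical_employment(real_wage):
    # Unconstrained labour demand: firms hire until the marginal
    # product equals the real wage, i.e. w = ALPHA * L**(ALPHA - 1)
    return (real_wage / ALPHA) ** (1 / (ALPHA - 1))

def demand_determined_employment(y_demand):
    # Firms produce what they can sell: invert y = L**ALPHA
    return y_demand ** (1 / ALPHA)
```

At a real wage of 0.5, unconstrained labour demand is about 3.1, producing about 2.2 units. If aggregate demand is only 1.8 units, employment is instead about 2.3: the red line, not the intersection of the black curves, pins down employment.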

It is less clear what happens if the red line shifts right until we have excess aggregate demand. If the number of firms is fixed, why would they produce more than they want to - i.e. at a level where they are losing money on the extra products they are producing? In New Keynesian models where firms are monopolistic and prices are sticky they will produce more if excess demand is modest, because for given prices it is profitable to produce more up to some limit. But will firms be prepared to pay higher (e.g. overtime) wages to workers for long, or will workers be prepared to work more for unchanged wages for long? I will come back to this at the end.

If there is deficient demand, is there anything that makes the red line move right to achieve the natural employment level? With deficient demand, prices may tend to fall, but that alone is unlikely to shift the red line to the right: few people talk about Pigou effects anymore, and the general concern is that deflation is deflationary because the real value of debt has increased. It is monetary policy that shifts the red line in the required direction, either because it responds to falling prices or because it wants to reduce the output gap. Ironically, Real Business Cycle models, which assume the red line is always at the natural employment level, only make sense in a world where monetary policy is very efficient. However, monetary policy does normally work over the medium term, which is why the classical or RBC model makes sense if we are just doing medium/long term analysis.

What happens to the real wage? In the most basic of New Keynesian models, the labour market clears, which means real wages fall until the labour supply curve intersects the red line. (Here a vertical supply curve would be a problem.) If, in contrast, real wages were sticky, we would get involuntary unemployment - rationing in the labour market. Which is true matters a great deal to those unemployed, but in terms of modelling deviations from the natural rate the difference is not that great. Real wages have no direct impact on aggregate demand, and so all that might happen is that monetary policy could be influenced one way or the other. If it is not, what happens to real wages is irrelevant to the basic problem, which is deficient aggregate demand. This is one reason why New Keynesian economists may be happy to work with a model where the labour market clears, but there may be other reasons.

The vertical red line assumes no factor substitution. With factor substitution it can become downward sloping: falling real wages lead firms to substitute labour for capital, which can increase employment even if output is unchanged because aggregate demand is unchanged. As I speculate in this post, based on the work of Pessoa and Van Reenen, something like this could help explain the current UK productivity puzzle. If this speculation is correct, at some point as aggregate demand and investment expand real wages will rise, and labour productivity will increase as factor substitution is reversed.

I was prompted to write this post by this from Noah Smith, which in turn comments on a post by John Quiggin. (See also Mike Konczal and Nick Rowe.) They talk about models of labour market matching, which I have not mentioned, but similar considerations apply with matching models taking the place of the classical model (although there are key differences between the two). This gives me another chance to plug an AER paper by Pascal Michaillat, which addresses the possible asymmetry that may occur when we have excess demand. Michaillat’s model is a matching model when aggregate demand is strong, but a rationing model (with Keynesian involuntary unemployment) when aggregate demand is low. This is one, rather interesting, answer to the question I asked above about possible asymmetry. Putting matching together with Keynesian rationing is complicated, but if Michaillat’s paper is ignored just for this reason, that says something rather sad about current macro methodology. 

Friday, 11 July 2014

Rereading Lucas and Sargent 1979

Mainly for macroeconomists and those interested in macroeconomic thought

Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldman, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Economics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.

What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is, I think, crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, of the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.

In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation:

“Though not, of course, designed as such by anyone, macroeconometric models were subjected to a decisive test in the 1970s. A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4% as consistent with a 4% annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. Certainly the erratic ‘fits and starts’ character of actual U.S. policy in the 1970s cannot be attributed to recommendations based on Keynesian models, but the inflationary bias on average of monetary and fiscal policy in this period should, according to all of these models, have produced the lowest unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale.”

There is no attempt to link this stagflation failure to the identification problems discussed earlier. Indeed, they go on to say that they recognise that particular empirical failures (by inference, like stagflation) might be solved by changes to particular equations within SEMs. Of course that is exactly what mainstream macroeconomics was doing at the time, with the expectations augmented Phillips curve.

In the schema due to Lakatos, a failing mainstream theory may still be able to explain previously anomalous results, but only in such a contrived way that it makes the programme degenerate. Yet, as Jesse Zinn argues in this paper, the changes to the Phillips curve suggested by Friedman and Phelps appear progressive rather than degenerate. True, this innovation came from thinking about microeconomic theory, but innovations in SEMs had always come from a mixture of microeconomic theory and evidence. 

This is why LS go on to say: “We have couched our criticisms in such general terms precisely to emphasise their generic character and hence the futility of pursuing minor variations within this general framework.” The rest of the article is about how, given additions like a Lucas supply curve, classical ‘equilibrium’ analysis may be able to explain the ‘facts’ about output and unemployment that Keynes thought classical economics was incapable of doing. It is not about how these models are, or even might be, better able to explain the particular problem of stagflation than SEMs.

In their conclusion, LS summarise their argument. They say:

“First, and most important, existing Keynesian macroeconometric models are incapable of providing reliable guidance in formulating monetary, fiscal and other types of policy. This conclusion is based in part on the spectacular recent failures of these models, and in part on their lack of a sound theoretical or econometric basis.”

Reading the paper as a whole, I think it would be fair to say that these two parts were not equal. The focus of the paper is about the lack of a sound theoretical or econometric basis for SEMs, rather than the failure to predict or explain stagflation. As I will argue in a subsequent post, it was this methodological critique, rather than any superior empirical ability, that led to the success of this manifesto.



Tuesday, 24 June 2014

Was the neoclassical synthesis unstable?

This post presents a very simple story of the development of macroeconomic thought from Keynes until today. It is related to a recent post from Brad DeLong on ‘economic theology’ and the neoclassical synthesis. (See also a response from Robert Waldmann.) 

Economics as a science that studies markets is ideologically neutral. Economic theory can be used to support ‘unfettered’ markets, or it can be used to justify interventions to avoid various kinds of market failure. The former means that it will inevitably be used by some to support a laissez-faire ideological position. There are two checks against this one-sided presentation of economic theory: economists presenting alternative theories that embody imperfections, and the use of evidence to show that a particular theory works, either in terms of its assumptions or results.

Before considering macroeconomics, take an example from labour economics: the minimum wage. Standard competitive theory suggests a minimum wage will reduce employment and raise unemployment. Card and Krueger undertook a famous study suggesting that in one particular example where the minimum wage was increased there was no reduction in employment. That led to a substantial amount of additional research, much (but by no means all) backing up the result that the impact of moderate increases in the minimum wage on employment was either non-existent or very small. For similar developments in the UK, see this account by Alan Manning. This empirical evidence was sufficient to encourage the development of alternative theoretical models: principally but not only monopsony.

So here we see theory and evidence interacting in a Popperian way, hopefully leading to better theory. [1] Yet with economics there will always be ideological resistance, so there will always be those who want to stick to the basic model and who select those empirical studies that support it. For the discipline to survive, those ideologues have to be a minority. But even if this condition is met, a healthy discipline has to recognise the influence of that minority, rather than try to pretend it does not exist or does not matter.

There is a slight twist for macroeconomics. As governments are the monopoly providers of cash, and provide a backstop to the financial system, they are involved in the ‘market’ whether they like it or not. Complete non-intervention is not an option: instead the next best thing (from a laissez-faire point of view) is some kind of ‘neutral’ default policy rule, like keeping the stock of money constant.

The Great Depression was the empirical wake-up call (the equivalent of the Card and Krueger study) for macroeconomics. So profound was the impact of this empirical event that it led to a whole new way of doing the subject. Keynesian economics was methodologically different from much of microeconomics: it put much more weight on aggregate evidence (through time series econometrics), and much less on microeconomic theory. One way of putting this is that in the 1960s, general equilibrium theory of the Arrow-Debreu-McKenzie type seemed a complete contrast to what macroeconomists were doing. That an event as powerful as the Great Depression should have had such a profound methodological impact is not really surprising.

The Great Depression also meant that those advocating non-intervention had to make an exception of macroeconomics. For the generation after the Great Depression, it was abundantly clear that here was a colossal market failure. This is one sense in which the term neo-classical synthesis can be used: to allow the state to combat the market failure represented by Keynesian unemployment (albeit, in the case of Friedman, in as rule like way as possible), but to maintain advocacy of non-intervention elsewhere. Note however that this is a synthesis servicing a particular ideological point of view, rather than being anything inherent within economics as a discipline.

Was this ‘ideological synthesis’ tenable among those supporting the ideology? There were two natural tensions. First, the position that macro intervention should be rule based and minimal was contestable. Second, and more importantly, as the memory of the Great Depression faded (and neoliberalism spread), the temptation grew to ask: do we really have to accept the need for state intervention at the macro level? However, I’m not sure the latter would have become critical had it not been for another tension within macroeconomics itself.

What was not tenable from a methodological point of view was the distance between the very empirical orientation of macroeconomics, and the more axiomatic foundation of much of microeconomics. What was required here was a different kind of synthesis, one which allowed for a healthy dialogue between theory and evidence. My impression is that in many areas of microeconomics this happened: that is partly why I gave the minimum wage example, but it is also worth noting that general equilibrium theory lost the primacy that it might once have had among microeconomists. But these are impressions, and I’ll happily be corrected.

I think the same thing could have happened in macroeconomics. Heterodox economists (and Robert Waldmann) would almost certainly disagree, but I think macroeconomics has gained a great deal from the project to add microfoundations. Where I hope heterodox economists would agree is that a dialogue where theorists engaged with macroeconomics and tried to persuade macroeconomists of the importance of following particular theories would have been healthy. But that was not the way it turned out. What could have been a dialogue of the Popperian kind became instead a theoretical and methodological counter revolution. Instead of asking ‘what can we do to get better microfoundations for sticky prices’, the assertion became ‘without good microfoundations we should ignore sticky prices’.

Why was there a counter revolution in macro rather than a Popperian dialogue? I think it is here that the second tension in the ‘ideological synthesis’ I identified above is important. Those who wanted to dispute the need for macro intervention realised that the microfoundations for macro market failures that existed at the time were poor (adaptive expectations in a traditional Phillips curve), and so any macroeconomics based on ‘rigorous’ (textbook, imperfection free) microfoundations would not be Keynesian. They also realised that they could produce models which generated real business cycles which were entirely efficient. These models assumed all unemployment was voluntary, which in any normal science would lead to their rejection, but in an axiomatic approach where some evidence can be ignored it was acceptable.

New Classical economics did not want to improve Keynesian economics, but to overthrow it. It is very difficult to believe this motivation was not ideological. Does the fact that this counter revolution was largely successful among academic macroeconomists imply that the majority of macroeconomists shared this ideological outlook? I suspect not. What New Classical economists succeeded in doing was framing the issue as one where a choice had to be made, between an eclectic empirically orientated approach where theory was weak and empirical methods shaky, and an alternative whose methodological foundations were solidly based within the discipline of economics. So we moved from a position where macroeconomics and Arrow-Debreu-McKenzie seemed worlds apart, to one where at least some see the former arising naturally from the latter. Ironically this happened at the same time as many microeconomists saw Arrow-Debreu-McKenzie as less relevant to what they did.

Of course we have moved on from the 1980s. Yet in some respects we have not moved very far. With the counter revolution we swung from one methodological extreme to the other, and we have not moved much since. The admissibility of models still depends on their theoretical consistency rather than consistency with evidence. It is still seen as more important when building models of the business cycle to allow for the endogeneity of labour supply than to allow for involuntary unemployment. What this means is that many macroeconomists who think they are just ‘taking theory seriously’ are in fact applying a particular theoretical view which happens to suit the ideology of the counter revolutionaries. The key to changing that is to first accept it.

[1] By Popperian type, I just mean that a theory proves inconsistent with data and so a better theory is developed. The Popperian ideal where one piece of evidence (one black swan) is enough on its own to disprove a theory is never going to apply in economics (if it applies anywhere), because evidence is probabilistic and fragile. There are no black swans in economics.

Sunday, 18 May 2014

Keynesian economics works: Eurozone edition

Paul Krugman is fond of saying that since the financial crisis, basic Keynesian economics has performed pretty well. Increases in government debt did not lead to rising interest rates. Increases in the monetary base (QE) did not lead to rapid inflation. But these are not the only places where Keynesian economics works. Keynesian analysis tells us almost all we need to know to understand what has happened to the Eurozone since its formation.

Some people are fond of denouncing mainstream economics because it failed to predict the financial crisis. But the nature of the Euro crisis was predicted by standard Keynesian open economy macro. The big problem with a monetary union was that countries could be hit by asymmetric shocks, and would no longer have their own monetary policy to deal with them. Many economists, myself included, said that this problem needed to be tackled by an active countercyclical fiscal policy - again standard Keynesian analysis. This advice was ignored.

What those using Keynesian analysis did not predict was the shock that would reveal all this: that the financial markets would make the mistake of assuming country specific risk on government borrowing had disappeared once the Euro was formed, which helped lead to a substantial and rapid fall in interest rates in the periphery. But once that happened, Keynesian economics tells the rest of the story. This large monetary stimulus led to excess demand in the periphery relative to the core. This in turn raised periphery inflation relative to the core, leading to a steady deterioration in competitiveness.

This boom in the periphery was not offset by fiscal contraction. Instead the public finances looked good, because that is what a boom does, and the focus of the Stability and Growth Pact on deficits meant that there was no pressure on politicians to tighten fiscal policy. Eventually the decline in competitiveness would bring the boom to an end, but a standard feature of quantitative Keynesian analysis is that this corrective process can take some time, if it is fighting against powerful expansionary forces.
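The boom-then-bust arithmetic can be sketched in a few lines. All parameter values here are invented for illustration; the point is only the shape of the path:

```python
def periphery_boom(T=30, rate_fall=2.0, kappa=0.3, trade_effect=0.5):
    """Lower interest rates open a demand gap in the periphery; excess
    demand raises inflation relative to the core; the accumulated loss
    of competitiveness slowly erodes the gap until the boom ends."""
    competitiveness = 0.0  # relative price level vs the core (log scale)
    gaps = []
    for t in range(T):
        gap = rate_fall - trade_effect * competitiveness
        competitiveness += kappa * gap  # relative inflation accumulates
        gaps.append(gap)
    return gaps
```

The demand gap starts at its full size and decays only geometrically (here by 15% a period), so the correction through competitiveness takes many periods: exactly the slow-burn dynamic described above.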

So Keynesian economics said this would end in tears, and it did. The precise nature of the tears is to some extent a detail. (If you think the Eurozone crisis was all about fiscal profligacy rather than private sector excess, you are sadly misinformed.) Of course Keynesian economics could not have predicted the perverse reaction to the crisis when it came: austerity in the core as well as the periphery. It could not have predicted it because it was so obviously stupid given a Keynesian framework. But when general austerity came, from 2010 onwards, the implications of Keynesian analysis were clear. Sure enough in 2012 we had the second Eurozone recession, helped along by some perverse monetary policy decisions.

Paul Krugman also tends to note how most of those who bet against Keynesian predictions on interest rates and inflation after 2009 have yet to concede they were wrong, and Keynesian analysis was right. The bad news from the Eurozone is that this kind of denial can go on for fifteen years (and counting)! But there is a reason why we teach Keynesian economics - it works.   

Sunday, 11 May 2014

Sticky prices: how we confuse students, and sometimes ourselves

For teachers and students of macroeconomics.

I’m about to teach a small number of first year undergraduate students Keynesian macroeconomics, and my aim will be to avoid telling them that this is the macroeconomics of sticky prices. Yet I realise I’ve already gone wrong. In week one I talked about time periods in macro, and how the ‘short run’ was the length of time ‘it takes prices to fully adjust’. I must have been saying this for years. But it is at best highly misleading.

In both the New Keynesian closed economy model, and the IS/LM model, the short run is the length of time it takes the central bank to stabilise inflation (output goes to its natural rate), or less precisely to achieve full employment. For students we could equally say it is the period it takes monetary policy to achieve the real rate of interest implied by the RBC, or Classical, model.  Calling this the time period it takes prices to fully adjust only makes sense when monetary policy involves some kind of nominal anchor, like a fixed money target in IS/LM. It makes no sense when monetary policy involves a central bank trying to choose the best nominal interest rate. The impact of an unexpected but subsequently known preference/demand shock, for example, would be very short lived when such a central bank knew what it was doing. (See this excellent post from Nick Rowe.)  

The big danger in equating Keynesian economics with sticky prices is that students forget about the crucial role monetary policy is playing. Too many think that after an increase in aggregate demand, if contracts and menu costs were absent, higher prices would in themselves choke off the increase in aggregate demand.  As they have just learnt micro, it is a natural mistake to make. They then get very confused when price flexibility does (at best) nothing at the zero lower bound.

Yet the linking of the short run with sticky prices is ubiquitous. In the edition of Mankiw I have to hand it says
“In the long run, prices are flexible and can respond to changes in supply or demand. In the short run, many prices are sticky at some predetermined level. Because prices behave differently in the short run than the long run, economic policies have different effects over different time horizons.”
This kind of statement makes sense in a fixed money supply world, but it makes much less sense in the real world. (Mankiw uses the term ‘long run’ where others would use ‘medium run’, but let us not worry about that.) Compare it with this alternative statement:
“In the long run, monetary policy adjusts to achieve steady inflation, which means output goes to its ‘natural’ or Classical level. In the short run, monetary policy fails to achieve this, so we need to look at movements in aggregate demand to explain output.”
This works for any sensible monetary policy.

In my second year lectures, I ask my students to think about a monetary policy that involved moving real interest rates in response to the output gap, but not to excess inflation.  If that policy stabilised a closed economy, then what impact would the speed of price adjustment have on anything except inflation? Inflation aside, a world where price adjustment was quick would look much like a world where prices were much stickier. The ‘short run’ would have the same length, irrespective of how quickly prices adjusted.
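That thought experiment is easy to run. Below is a minimal sketch under assumed parameters (the model, the rule, and all the numbers are mine, purely for illustration): a one-off demand shock hits, the real rate responds to the lagged output gap only, and the Phillips-curve slope kappa stands in for the speed of price adjustment.

```python
def simulate(kappa, T=20, shock=1.0, a=1.0, b=0.8):
    """Toy economy: a demand shock opens an output gap; policy moves the
    real rate in response to the lagged gap, ignoring inflation."""
    pi, r = 2.0, 0.0
    gaps, inflation = [], []
    for t in range(T):
        d = shock if t == 0 else 0.0
        y = d - a * r            # the gap is demand-determined
        pi = pi + kappa * y      # price-adjustment speed enters ONLY here
        r = b * y                # the rule responds to the gap alone
        gaps.append(y)
        inflation.append(pi)
    return gaps, inflation
```

Run it with kappa = 0.1 and kappa = 1.0: the output-gap paths are identical to the last digit, while the inflation paths differ. The ‘short run’ has the same length however quickly prices adjust, just as the text argues.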

All this is about how Keynesian economics is taught, rather than about how it is done. Yet how it is taught can also influence how it is eventually understood. One of the problems some people have with understanding that we are still in a situation of deficient demand is that it is five years after the recession ‘and surely prices should have adjusted by now’. There is also a rather more profound point. Many anti-Keynesians use this misunderstanding about price adjustment to dismiss Keynesian economics. When they say ‘I ignore Keynesian economics, because I think prices adjust rapidly’ they are really saying ‘I ignore Keynesian economics because I think monetary policy is very successful’. And in the real world, monetary policy can only be very successful by understanding Keynesian economics!