Winner of the New Statesman SPERI Prize in Political Economy 2016
Showing posts with label New Classical. Show all posts

Thursday, 27 August 2015

The day macroeconomics changed

It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair (HT Mark Thoma), who was at the conference, it is the day that macroeconomics started to go wrong.

Ray Fair is a bit of a hero of mine. When I left the National Institute to become a formal academic, I had the goal (with the essential help of two excellent and courageous colleagues) of constructing a new econometric model of the UK economy, which would incorporate the latest theory: in essence, it would be New Keynesian, but with additional features like allowing variable credit conditions to influence consumption. Unlike a DSGE it would as far as possible involve econometric estimation. I had previously worked with the Treasury’s model, and then set up what is now NIGEM at the National Institute by adapting a global model used by the Treasury, and finally I had been in charge of developing the Institute’s domestic model. But creating a new model from scratch within two years was something else, and although the academics on the ESRC board gave me the money to do it, I could sense that some of them thought it could not be done. In believing (correctly) that it could, Ray Fair was one of the people who inspired me.

I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations: the UK model I built had rational expectations in all the key relationships.

Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about its concerns at the time.

However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. I have talked before about the work that Carroll and Muellbauer have done which shows that you have to think about credit conditions if you want to explain the pre-crisis time series for UK or US consumption. DSGE models could avoid this problem, but more traditional structural econometric (aka CC) models would find it harder to do so. So perhaps if academic macro had given greater priority to explaining these time series, it would have been better prepared for understanding the impact of the financial crisis.

What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.


Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
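The logic can be written out in a couple of lines (a standard textbook derivation; the notation, including α and the natural level of output, is my own choice rather than anything from this post):

```latex
% Traditional Phillips curve: current inflation depends on expectations of
% current inflation and the deviation of output from its natural level,
% plus a shock
\pi_t = E_{t-1}\pi_t + \alpha\,(y_t - y_t^n) + \varepsilon_t
% Rearranging:
y_t - y_t^n = \frac{\pi_t - E_{t-1}\pi_t - \varepsilon_t}{\alpha}
% Under rational expectations both terms in the numerator are unforecastable,
% so deviations of output from its natural rate are purely random, and
% systematic demand policy cannot affect them.
```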

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analyse small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived both in the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have given it), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Wednesday, 24 September 2014

Where macroeconomics went wrong

In my view, the answer is in the 1970/80s with the New Classical revolution (NCR). However I also think the new ideas that came with that revolution were progressive. I have defended rational expectations, I think intertemporal theory is the right place to start in thinking about consumption, and exploring the implications of time inconsistency is very important to macro policy, as well as many other areas of economics. I also think, along with nearly all macroeconomists, that the microfoundations approach to macro (DSGE models) is a progressive research strategy.

That is why discussion about these issues can become so confused. New Classical economics made academic macroeconomics take a number of big steps forward, but a couple of big steps backward at the same time. The clue to the backward steps comes from the name NCR. The research programme was anti-Keynesian (hence New Classical), and it did not want microfounded macro to be an alternative to the then dominant existing methodology, it wanted to replace it (hence revolution). Because the revolution succeeded (although the victory over Keynesian ideas was temporary), generations of students were taught that Keynesian economics was out of date. They were not taught about the pros and cons of the old and new methodologies, but were taught that the old methodology was simply wrong. And that teaching was/is a problem because it itself is wrong.

There had always been opposition to Keynesian ideas, and much (though not all) was ideological, such as attempts to remove Keynesian textbooks from US universities. However the NCR gave what should more precisely be called ‘aggregate demand denial’ an intellectual respectability that it never deserved. The reasons for believing that shifts in demand move output and employment in the short run and prices are sticky are overwhelming, so to deny both was a seemingly impossible task. It was achieved by adopting a methodological position which could ignore inconvenient evidence.

I do not think it had to be like this. Mainstream macroeconomics did not need a revolution in the 1970s and 1980s. Ideas like rational expectations could have been assimilated into the mainstream methodology, and microfounded models could have been developed alongside more eclectic econometric models (SEMs, not VARs), or aggregate theoretical models that Blanchard and Fischer rightly called ‘useful models’. Microfounded models could have shown the kind of errors that can arise in more empirically based models when theory is ignored or only applied piecemeal, and these empirical models could have highlighted the key areas where additional microfoundations were needed.

I think if this had happened, macroeconomics would have been better prepared when the financial crisis hit. Take just one issue: the role of credit conditions in influencing consumption. This is clearly crucial in understanding how consumption might respond to a credit crunch, yet any mechanism of this kind was absent from most DSGE models in 2008. However a more empirically based model of consumption would have had to address this issue well before 2008, as I argue here. If these types of models had continued to be developed within academia, rather than confined to the dustbin by the microfoundations revolution, then at least policymakers would have had something to work with. If there had been interaction between empirical and microfounded models some of the financial frictions literature that has flourished since 2008 might have appeared earlier.

So why didn’t this happen? Why did we have a revolution which overturned an existing methodology and temporarily banished Keynesian theory, rather than an adaptation and augmentation of what was then mainstream? Was the attraction of overturning orthodoxy too strong, as it is for a minority of heterodox economists today? Did an ideological imperative of dismissing Keynesian ideas play a role? To what extent was the hostile reaction of many in the macroeconomic establishment to eminently sensible ideas like rational expectations responsible? Was the attraction of a methodology where at least you could be sure you were consistent too enticing, perhaps encouraged by increasing segmentation between theoretical and empirical macro? I would love to know the answer to these questions. 

  

Wednesday, 30 July 2014

Methodological seduction

Mainly for macroeconomists or those interested in economic methodology. I first summarise my discussion in two earlier posts (here and here), and then address why this matters.

If there is such a thing as the standard account of scientific revolutions, it goes like this:

1) Theory A explains body of evidence X

2) Important additional evidence Y comes to light (or just happens)

3) Theory A cannot explain Y, or can only explain it by means which seem contrived or ‘degenerate’. (All swans are white, and the black swans you saw in New Zealand are just white swans after a mud bath.)

4) Theory B can explain X and Y

5) After a struggle, theory B replaces A.

For a more detailed schema due to Lakatos, which talks about a theory’s ‘core’ and ‘protective belt’ and tries to distinguish between theoretical evolution and revolution, see this paper by Zinn which also considers the New Classical counterrevolution.

The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory. Does the New Classical counterrevolution (NCCR) also fit, with Y being stagflation?

My argument is that it does not. Arnold Kling makes the point clearly. In his stage one, Keynesian/Monetarist theory adapts to stagflation, using the Friedman/Phelps accelerationist Phillips curve. Stage two involves rational expectations, the Lucas supply curve and other New Classical ideas. As Kling says, “there was no empirical event that drove the stage two conversion.” I think from this that Paul Krugman also agrees, although perhaps with an odd quibble.

Now of course the counter revolutionaries do talk about the stagflation failure, and there is no dispute that stagflation left the Keynesian/Monetarist framework vulnerable. The key question, however, is whether points (3) and (4) are correct. On (3) Zinn argues that changes to Keynesian theory to account for stagflation were progressive rather than contrived, and I agree. I also agree with John Cochrane that this adaptation was still empirically inadequate, and that further progress needed rational expectations (see this separate thread), but as I note below the old methodology could (and did) incorporate this particular New Classical innovation.

More critically, (4) did not happen: New Classical models were not able to explain the behaviour of output and inflation in the 1970s and 1980s, or in my view the Great Depression either. Yet the NCCR was successful. So why did (5) happen, without (3) and (4)?

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example. Ironically the innovation that had allowed conventional macro to explain stagflation, the accelerationist Phillips curve, also made it appear unable to adapt to rational expectations. But if that was all, then you need to ask why New Classical ideas could not have been gradually assimilated into the mainstream. Many of the counter revolutionaries did not want this (as this note from Judy Klein via Mark Thoma makes clear), because they had an (ideological?) agenda which required the destruction of Keynesian ideas. However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM), which is what I spent a lot of time doing in the 1990s.

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions. How serious this problem was, relative to the alternative of being theoretically consistent but empirically wide of the mark, was seldom asked.   

So why does this matter? For those who are critical of the total dominance of current macro microfoundations methodology, it is important to understand its appeal. I do not think this comes from macroeconomics being dominated by a ‘self-perpetuating clique that cared very little about evidence and regarded the assumption of perfect rationality as sacrosanct’, although I do think that the ideological preoccupations of many New Classical economists have an impact on what is regarded as de rigueur in model building even today. Nor do I think most macroeconomists are ‘seduced by the vision of a perfect, frictionless market system.’ As with economics more generally, the game is to explore imperfections rather than ignore them. The more critical question is whether the starting point of a ‘frictionless’ world constrains realistic model building in practice.

If mainstream academic macroeconomists were seduced by anything, it was a methodology - a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous! 

Noah Smith, who does believe stagflation was important in the NCCR, says at the end of his post: “this raises the question of how the 2008 crisis and Great Recession are going to affect the field”. However, if you think as I do that stagflation was not critical to the success of the NCCR, the question you might ask instead is whether there is anything in the Great Recession that challenges the methodology established by that revolution. The answer that I, and most academics, would give is absolutely not – instead it has provided the motivation for a burgeoning literature on financial frictions. To speak in the language of Lakatos, the paradigm is far from degenerate.  

Is there a chance of the older methodology making a comeback? I suspect the place to look is not in academia but in central banks. John Cochrane says that after the New Classical revolution there was a split, with the old style way of doing things surviving among policymakers. I think this was initially true, but over the last decade or so DSGE models have become standard in many central banks. At the Bank of England, their main model used to be a SEM, was replaced by a hybrid DSGE/SEM, and was replaced in turn by a DSGE model. The Fed operates both a DSGE model and a more old-fashioned SEM. It is in central banks that the limitations of DSGE analysis may be felt most acutely, as I suggested here. But central bank economists are trained by academics. Perhaps those that are seduced are bound to remain smitten.


Friday, 11 July 2014

Rereading Lucas and Sargent 1979

Mainly for macroeconomists and those interested in macroeconomic thought

Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldman, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Economics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.

What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.

In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation:

“Though not, of course, designed as such by anyone, macroeconometric models were subjected to a decisive test in the 1970s. A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4% as consistent with a 4% annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. Certainly the erratic ‘fits and starts’ character of actual U.S. policy in the 1970s cannot be attributed to recommendations based on Keynesian models, but the inflationary bias on average of monetary and fiscal policy in this period should, according to all of these models, have produced the lowest unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale.”

There is no attempt to link this stagflation failure to the identification problems discussed earlier. Indeed, they go on to say that they recognise that particular empirical failures (by inference, like stagflation) might be solved by changes to particular equations within SEMs. Of course that is exactly what mainstream macroeconomics was doing at the time, with the expectations augmented Phillips curve.

In the schema due to Lakatos, a failing mainstream theory may still be able to explain previously anomalous results, but only in such a contrived way that it makes the programme degenerate. Yet, as Jesse Zinn argues in this paper, the changes to the Phillips curve suggested by Friedman and Phelps appear progressive rather than degenerate. True, this innovation came from thinking about microeconomic theory, but innovations in SEMs had always come from a mixture of microeconomic theory and evidence. 

This is why LS go on to say: “We have couched our criticisms in such general terms precisely to emphasise their generic character and hence the futility of pursuing minor variations within this general framework.” The rest of the article is about how, given additions like a Lucas supply curve, classical ‘equilibrium’ analysis may be able to explain the ‘facts’ about output and unemployment that Keynes thought classical economics was incapable of doing. It is not about how these models are, or even might be, better able to explain the particular problem of stagflation than SEMs.

In their conclusion, LS summarise their argument. They say:

“First, and most important, existing Keynesian macroeconometric models are incapable of providing reliable guidance in formulating monetary, fiscal and other types of policy. This conclusion is based in part on the spectacular recent failures of these models, and in part on their lack of a sound theoretical or econometric basis.”

Reading the paper as a whole, I think it would be fair to say that these two parts were not equal. The focus of the paper is about the lack of a sound theoretical or econometric basis for SEMs, rather than the failure to predict or explain stagflation. As I will argue in a subsequent post, it was this methodological critique, rather than any superior empirical ability, that led to the success of this manifesto.



Saturday, 28 June 2014

Understanding the New Classical revolution

In the account of the history of macroeconomic thought I gave here, the New Classical counter revolution was both methodological and ideological in nature. It was successful, I suggested, because too many economists were unhappy with the gulf between the methodology used in much of microeconomics, and the methodology of macroeconomics at the time.

There is a much simpler reading. Just as the original Keynesian revolution was caused by massive empirical failure (the Great Depression), the New Classical revolution was caused by the Keynesian failure of the 1970s: stagflation. An example of this reading is in this piece by the philosopher Alex Rosenberg (HT Diane Coyle). He writes: “Back then it was the New Classical macrotheory that gave the right answers and explained what the matter with the Keynesian models was.”

I just do not think that is right. Stagflation is very easily explained: you just need an ‘accelerationist’ Phillips curve (i.e. where the coefficient on expected inflation is one), plus a period in which monetary policymakers systematically underestimate the natural rate of unemployment. You do not need rational expectations, or any of the other innovations introduced by New Classical economists.
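That explanation can be sketched numerically. The following is a toy simulation, with illustrative parameter values entirely of my own choosing (the slope ALPHA, the true and believed natural rates, the initial inflation rate), not anything taken from this post:

```python
# Toy simulation of an accelerationist Phillips curve,
#   pi_t = pi_{t-1} + ALPHA * (U_STAR - u_t),
# where policymakers hold unemployment at an underestimate of the
# natural rate. All parameter values below are illustrative assumptions.

ALPHA = 0.5        # Phillips curve slope (assumed)
U_STAR = 6.0       # true natural rate of unemployment, %
U_BELIEVED = 4.0   # policymakers' underestimate of the natural rate, %

def simulate(periods=10, pi0=2.0):
    """Hold unemployment at the believed natural rate; track inflation."""
    pi = pi0
    path = []
    for _ in range(periods):
        u = U_BELIEVED                  # policy keeps u at the believed rate
        pi += ALPHA * (U_STAR - u)      # coefficient on lagged inflation is one
        path.append((u, pi))
    return path

path = simulate()
# Inflation rises by ALPHA * (U_STAR - U_BELIEVED) = 1 point every period,
# with no lasting unemployment gain: accelerating inflation alongside
# persistently disappointing unemployment, i.e. stagflation.
```

Note that nothing here requires rational expectations: adaptive (lagged) inflation expectations with a unit coefficient do all the work, which is the point of the paragraph above.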

No doubt the inflation of the 1970s made the macroeconomic status quo unattractive. But I do not think the basic appeal of New Classical ideas lay in their better predictive ability. The attraction of rational expectations was not that it explained actual expectations data better than some form of adaptive scheme. Instead it just seemed more consistent with the general idea of rationality that economists used all the time. Ricardian Equivalence was not successful because the data revealed that tax cuts had no impact on consumption - in fact study after study has shown that tax cuts do have a significant impact on consumption.

Stagflation did not kill IS-LM. In fact, because empirical validity was so central to the methodology of macroeconomics at the time, it adapted to stagflation very quickly. This gave a boost to the policy of monetarism, but this used the same IS-LM framework. If you want to find the decisive event that led to New Classical economists winning their counterrevolution, it was the theoretical realisation that if expectations were rational, but inflation was described by an accelerationist Phillips curve with expectations about current inflation on the right hand side, then deviations from the natural rate had to be random. The fatal flaw in the Keynesian/Monetarist theory of the 1970s was theoretical rather than empirical.


Tuesday, 24 June 2014

Was the neoclassical synthesis unstable?

This post presents a very simple story of the development of macroeconomic thought from Keynes until today. It is related to a recent post from Brad DeLong on ‘economic theology’ and the neoclassical synthesis. (See also a response from Robert Waldmann.) 

Economics as a science that studies markets is ideologically neutral. Economic theory can be used to support ‘unfettered’ markets, or it can be used to justify interventions to avoid various kinds of market failure. The former means that it will inevitably be used by some to support a laissez-faire ideological position. There are two checks against this one-sided presentation of economic theory: economists presenting alternative theories that embody imperfections, and the use of evidence to show that a particular theory works, either in terms of its assumptions or results.

Before considering macroeconomics, take an example from labour economics: the minimum wage. Standard competitive theory suggests a minimum wage will reduce employment and raise unemployment. Card and Krueger undertook a famous study suggesting that in one particular example where the minimum wage was increased there was no reduction in employment. That led to a substantial amount of additional research, much (but by no means all) backing up the result that the impact of moderate increases in the minimum wage on employment was either non-existent or very small. For similar developments in the UK, see this account by Alan Manning. This empirical evidence was sufficient to encourage the development of alternative theoretical models: principally but not only monopsony.

So here we see theory and evidence interacting in a Popperian type way, hopefully leading to better theory. [1] Yet with economics there will always be ideological resistance, so there will always be those who want to stick to the basic model and who select those empirical studies that support it. For the discipline to survive, those ideologues have to be a minority. But even if this condition is met, a healthy discipline has to recognise the influence of that minority, rather than try and pretend it does not exist or does not matter.

There is a slight twist for macroeconomics. As governments are the monopoly providers of cash, and provide a backstop to the financial system, they are involved in the ‘market’ whether they like it or not. Complete non-intervention is not an option: instead the next best thing (from a laissez-faire point of view) is some kind of ‘neutral’ default policy rule, like keeping the stock of money constant.

The Great Depression was the empirical wake-up call (the equivalent of the Card and Krueger study) for macroeconomics. So profound was the impact of this empirical event that it led to a whole new way of doing the subject. Keynesian economics was methodologically different from much of microeconomics: it put much more weight on aggregate evidence (through time series econometrics), and much less on microeconomic theory. One way of putting this is that in the 1960s, general equilibrium theory of the Arrow-Debreu-McKenzie type seemed a complete contrast to what macroeconomists were doing. That an event as powerful as the Great Depression should have had such a profound methodological impact is not really surprising.

The Great Depression also meant that those advocating non-intervention had to make an exception of macroeconomics. For the generation after the Great Depression it was abundantly clear that here was a colossal market failure. This is one sense in which the term neo-classical synthesis can be used: to allow the state to combat the market failure represented by Keynesian unemployment (albeit, in the case of Friedman, in as rule-like a way as possible), but to maintain advocacy of non-intervention elsewhere. Note however that this is a synthesis servicing a particular ideological point of view, rather than being anything inherent within economics as a discipline.

Was this ‘ideological synthesis’ tenable among those supporting the ideology? There were two natural tensions. First, the position that macro intervention should be rule based and minimal was contestable. Second and more importantly, as the memory of the Great Depression faded (and neoliberalism spread), the temptation grew to ask ‘do we really have to accept the need for state intervention at the macro level?’. However I’m not sure the latter would have become critical had it not been for another tension within macroeconomics itself.

What was not tenable from a methodological point of view was the distance between the very empirical orientation of macroeconomics, and the more axiomatic foundation of much of microeconomics. What was required here was a different kind of synthesis, one which allowed for a healthy dialogue between theory and evidence. My impression is that in many areas of microeconomics this happened: that is partly why I gave the minimum wage example, but it is also worth noting that general equilibrium theory lost the primacy that it might once have had among microeconomists. But these are impressions, and I’ll happily be corrected.

I think the same thing could have happened in macroeconomics. Heterodox economists (and Robert Waldmann) would almost certainly disagree, but I think macroeconomics has gained a great deal from the project to add microfoundations. Where I hope heterodox economists would agree is that a dialogue where theorists engaged with macroeconomics and tried to persuade macroeconomists of the importance of following particular theories would have been healthy. But that was not the way it turned out. What could have been a dialogue of the Popperian kind became instead a theoretical and methodological counter revolution. Instead of asking ‘what can we do to get better microfoundations for sticky prices’, the assertion became ‘without good microfoundations we should ignore sticky prices’.

Why was there a counter revolution in macro rather than a Popperian dialogue? I think it is here that the second tension in the ‘ideological synthesis’ I identified above is important. Those who wanted to dispute the need for macro intervention realised that the microfoundations for macro market failures that existed at the time were poor (adaptive expectations in a traditional Phillips curve), and so any macroeconomics based on ‘rigorous’ (textbook, imperfection-free) microfoundations would not be Keynesian. They also realised that they could produce models which generated real business cycles which were entirely efficient. These models assumed all unemployment was voluntary, which in any normal science would lead to their rejection, but in an axiomatically based approach where some evidence can be ignored it was acceptable.

New Classical economics did not want to improve Keynesian economics, but to overthrow it. It is very difficult to believe this motivation was not ideological. Does the fact that this counter revolution was largely successful among academic macroeconomists imply that the majority of macroeconomists shared this ideological outlook? I suspect not. What New Classical economists succeeded in doing was framing the issue as one where a choice had to be made, between an eclectic empirically orientated approach where theory was weak and empirical methods shaky, and an alternative whose methodological foundations were solidly based within the discipline of economics. So we moved from a position where macroeconomics and Arrow-Debreu-McKenzie seemed worlds apart, to one where at least some see the former arising naturally from the latter. Ironically this happened at the same time as many microeconomists saw Arrow-Debreu-McKenzie as less relevant to what they did.

Of course we have moved on from the 1980s. Yet in some respects we have not moved very far. With the counter revolution we swung from one methodological extreme to the other, and we have not moved much since. The admissibility of models still depends on their theoretical consistency rather than consistency with evidence. It is still seen as more important when building models of the business cycle to allow for the endogeneity of labour supply than to allow for involuntary unemployment. What this means is that many macroeconomists who think they are just ‘taking theory seriously’ are in fact applying a particular theoretical view which happens to suit the ideology of the counter revolutionaries. The key to changing that is to first accept it.

[1] By Popperian type, I just mean that a theory proves inconsistent with data and so a better theory is developed. The Popperian ideal where one piece of evidence (one black swan) is enough on its own to disprove a theory is never going to apply in economics (if it applies anywhere), because evidence is probabilistic and fragile. There are no black swans in economics.

Tuesday, 15 October 2013

Microfoundations and Macro Wars

There have been two strands of reaction to my last post. One has been to interpret it as yet another salvo in the macro wars. The second has been to deny there is an issue here: to quote Tony Yates: “The pragmatic microfounders and empirical macro people have won out entirely”. If people are confused, perhaps some remarks by way of clarification might be helpful.

There are potentially three different debates going on here. The first is the familiar Keynesian/anti-Keynesian debate. The second is whether ‘proper’ policy analysis has to be done with microfounded models, or whether there is also an important role for more eclectic (and data-based) aggregate models in policy analysis, like IS-LM. The third is about how far microfoundation modellers should be allowed to go in incorporating non-microfounded (or maybe behavioural) relationships in their models.

Although all three debates are important in their own right, in this post I want to explore the extent to which they are linked. But I want to say at the outset what, in my view, is not up for debate among mainstream macroeconomists: microfounded macromodels are likely to remain the mainstay of academic macro analysis for the foreseeable future. Many macroeconomists outside the mainstream, and some other economists, might wish it otherwise, but I think they are wrong to do so. DSGE models really do tell us a lot of interesting and important things.

For those who are not economists, let’s be clear what the microfoundations project in macro is all about. The idea is that a macro model should be built up from a formal analysis of the behaviour of individual agents in a consistent way. There may be just a single representative agent, or, increasingly, heterogeneous agents. So a typical journal paper in macro nowadays will involve lots of optimisation by individual agents as a way of deriving aggregate relationships.

Compare this to two alternative ways of ‘doing macro’. The first goes to the other extreme: choose a bunch of macro variables, and just look at the historic relationship between them (a VAR). This uses minimal theory, and the focus is all about the past empirical interaction between macro aggregates. The second would sit in between the two. It might start off with aggregate macro relationships, and justify them with some eclectic mix of theory and empirics. You can think of IS/LM as an example of this third way. In reality there is probably a spectrum of alternatives here, with different mixes between theoretical consistency and consistency with the data (see this post).
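To make the first, minimal-theory extreme concrete, here is a sketch of a two-variable VAR(1) estimated by OLS. The data are simulated and the coefficient matrix is purely illustrative: the point is only that estimation imposes no theoretical restrictions on how the variables interact.

```python
import numpy as np

# Sketch of the 'minimal theory' extreme: a two-variable VAR(1),
# y_t = A y_{t-1} + shock, estimated by OLS. Data are simulated and
# the lag matrix A is purely illustrative - nothing here restricts
# how the two variables may interact.
rng = np.random.default_rng(0)
A = np.array([[0.7, 0.1],
              [0.2, 0.5]])   # true (stable) lag matrix

T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Regress y_t on y_{t-1}: lstsq solves X @ B = Y, so B is A transposed
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.round(A_hat, 2))   # close to A, recovered with no theory imposed
```

The contrast with a microfounded model is that here every coefficient is free to be whatever the data say, which is both the approach's strength (it fits history) and its weakness (it embodies no structural interpretation).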

In the 1960s and 1970s, a good deal of macro analysis in journals was of this third type. The trouble with this approach, as New Classical economists demonstrated, was that the theoretical rationale behind equations often turned out to be inadequate and inconsistent. The Lucas critique is the most widely quoted example where this happens. So the microfoundations project said let’s do the theory properly and rigorously, so we do not make these kinds of errors. In fact, let’s make theoretical (‘internal’) consistency the overriding aim, such that anything which fails on these grounds is rejected. There were two practical costs of this approach. First, doing this was hard, so for a time many real world complexities had to be set aside (like the importance of banks in rationing credit, for example, or the reluctance of firms to cut nominal wages). This led to a second cost, which was that less notice was taken of how each aggregate macro relationship tracked the data (‘external’ consistency). To use a jargon phrase that sums it up quite well: internal rather than external consistency became the test of admissibility for these models.

The microfoundations project was extremely successful, such that it became generally accepted among most academics that all policy analysis should be done with microfounded models. However I think macroeconomists are divided about how strict to be about microfoundations: this is the distinction between purists and pragmatists that I made here. Should every part of a model be microfounded, or are we allowed a bit of discretion occasionally? Plenty of ‘pragmatic’ papers exist, so just referencing a few tells us very little. Tony Yates thinks the pragmatists have won, and I think David Andolfatto in a comment on my post agrees. I would like to think they are right, but my own experience talking to other macroeconomists suggests they are not.

But let’s just explore what it might mean if they were right. Macroeconomists would be quite happy incorporating non-microfounded elements into their models, when strong empirical evidence appeared to warrant this. Referees would not be concerned. But there is no logical reason to only include one non-microfounded element at a time: why not allow more than one aggregate equation to be data rather than theory based? In that case, ‘very pragmatic’ microfoundation models could begin to look like the aggregate models of the past, which used a combination of theory and empirical evidence to justify particular equations.

I would have no problem with this, as I have argued that these more eclectic aggregate models have an important role to play alongside more traditional DSGE models in policy analysis, particularly in policy making institutions that require flexible and robust tools. Paul Krugman is fond of suggesting that IS-LM type models are more useful than microfounded models, with the latter being a check on the former, so I guess he wouldn’t worry about this either. But others do seem to want to argue that IS-LM type models should have no place in ‘proper’ policy analysis, at least in the pages of academic journals. If you take this view but want to be a microfoundations pragmatist, just where do you draw the line on pragmatism?

I have deliberately avoided mentioning the K word so far. This is because I think it is possible to imagine a world where Keynesian economics had not been invented, but where debates over microfoundations would still take place. For example, Heathcote et al talk about modelling ‘what you can microfound’ versus ‘modelling what you can see’ in relation to the incompleteness of asset markets, and I think this is a very similar purist/pragmatist microfoundations debate, but there is no direct connection to sticky prices.

However in the real world where thankfully Keynesian economics does exist, I think it becomes problematic to be both a New Keynesian and a microfoundations purist. First, there is Paul Krugman’s basic point. Before New Keynesian theory, New Classical economists argued that because sticky wages and prices were not microfounded, they should not be in our models. (Some who are unconvinced by New Keynesian ideas still make that case.) Were they right at the time? I think a microfoundations purist would have to say yes, which is problematic because it seems an absurd position for a Keynesian to take. Second, in this paper I argued that the microfoundations project, in embracing sticky prices, actually had to make an important methodological compromise which a microfoundations purist should worry about. I think Chari, Kehoe and McGrattan are making similar kinds of points. Yet my own paper arose out of talking to New Keynesian economists who appeared to take a purist position, which was why I wrote it.

It is clear what the attraction of microfoundations purity was to those who wanted to banish Keynesian theory in the 1970s and 1980s. The argument of those who championed rational expectations and intertemporal consumption theory should have been: your existing [Keynesian] theory is full of holes, and you really need to do better – here are some ideas that might help, and let’s see how you get on. Instead for many it was: your theory is irredeemable, and the problems you are trying to explain (and alleviate) are not really problems at all. In taking that kind of position it is quite helpful to follow a methodology where you get rather a lot of choice over what empirical facts you try and be consistent with.

So it is clear why the microfoundations debate is mixed up with the debate over Keynesian economics. It also seems clear to me that the microfoundations approach did reveal serious problems with the Keynesian analysis that had gone before, and that the New Keynesian analysis that has emerged as a result of the microfoundations project is a lot better for it. We now understand more about the dynamics of inflation and business cycles and so monetary policy is better. This shows that the microfoundations project is progressive.

But just because a methodology is progressive does not imply that it is the only proper way to proceed. When I wrote that focusing on microfoundations can distort the way macroeconomists think, I was talking about myself as much as anyone else. I feel I spend too much time thinking about microfoundations tricks, and give insufficient attention to empirical evidence that should have much more influence on modelling choices. I don’t think I can just blame anti-Keynesians for this: I would argue New Keynesians also need to be more pragmatic about what they do, and more tolerant of other ways of building macromodels.  


Wednesday, 28 August 2013

Macro workers and macro wars

A long post I’m afraid, even with extensive use of footnotes. But I really think it is much more productive to try and understand someone’s opposing point of view than just be rude about it.

Most academic macroeconomists are just trying to advance the discipline by getting their papers published, and are certainly not consciously trying to defend some ideological viewpoint. As a result, there are lots of macroeconomists producing high quality work in a wide variety of diverse areas. There are many interesting new ideas being explored. Furthermore this work can be appreciated by most fellow researchers. Unlike in the 60s and 70s, when members of different schools of thought talked past each other, we now have a shared language as a result of the microfoundation of macro. My own view is that as a result macro today is much more interesting than macro was back then. Furthermore, this work can be useful to policymakers, as Paul Krugman outlines in the case of monetary policy here.

Ahh - that may have made you pause for thought. Isn’t that the same Paul Krugman who says there is something “deeply wrong with economics”? Who talks about how ideology and politics distort the advice that economists - and perhaps particularly macroeconomists - give to policymakers? And who suggests that in many cases policymakers would be better off thinking about good old IS-LM than any of this more modern stuff?

One of the problems when Paul Krugman does this is that it gets on the nerves of many academic macroeconomists, who would much rather identify with the sentiments I express in my first paragraph. Economists like Tony Yates, for example. I suspect they see Paul Krugman’s attacks on the state of macroeconomics as akin to a personal attack on their own and their colleagues’ work, and as a result they can go way over the top in reaction. I think this is understandable, but it is wrong.

As I see it both points of view are correct. As Stephen Williamson argues, in one sense macroeconomics is flourishing. Take the example of financial frictions. There is now a wealth of papers out there exploring different types of friction, in large part responding to events of just the last five years. This is hardly the response of a moribund, out of touch discipline. And as Krugman says, sometimes this work can be useful to policymakers. But that is not the acid test for the integrity of a supposedly scientific discipline. In days of old, policymakers made much use of astrology. The acid test is when the discipline tells policymakers something they do not want to hear, and unites behind this implication of its models and the data.

Nearly fifteen years ago I began working with DSGE models looking at monetary and fiscal interactions. [1] Doing this work taught me a lot about how fiscal policy worked in New Keynesian models. I understood more clearly why monetary policy was the stabilisation tool of choice in those models, but also why fiscal policy - appropriately designed - was also quite effective in that role if monetary policy was absent (individual countries in the Eurozone) or impaired (the ZLB). Why New Keynesian models? Because if you were interested in business cycle stabilisation, that is the framework that most involved in that area (academics and policymakers) were using. So when we hit the ZLB, the reaction of policymakers in using fiscal stimulus seemed logical, entirely appropriate and fully in line with current theory.

I also found that in these models the basics of Barro’s tax smoothing hypothesis continue to apply, so if you needed to reduce debt, you should do so as gradually as possible. This also seemed as robust a result as one can get in macro.

The acid test for macro came in 2010. Policymakers, for a variety of reasons, went into reverse with fiscal policy. Austerity replaced stimulus around the world. If academic macroeconomists had been true to their discipline, they would have been united in saying that our standard models tell us this will reduce output and raise unemployment. They would have said that if markets allow, the time to reduce debt is when the ZLB comes to an end, and then it should be done gradually. Many did say that, but many did not. This division at least encouraged policymakers to continue with austerity.

So macroeconomists as a collective failed this test, repeating errors made in the 1930s. But unlike the 1930s, it did not have ignorance as an excuse. 

This is a crucial point that many on both sides tend to ignore. In 2010, the standard business cycle model was the New Keynesian model, and the implications of that model for the efficacy of appropriately designed fiscal policy are clear. So to blame the failure of 2010 on the current dominant macro model is just wrong. You may not like that model, but it cannot be blamed for the widespread adoption of austerity, or the ambivalent attitude of many macroeconomists towards that policy change.

So in my view macroeconomists, not the dominant macroeconomic model, failed. Why? The easy answer is to say that macroeconomists were too influenced by ideology, but I think for most (not all) it is the wrong answer, as I said in my opening paragraph. I think for most (not all) it is not simple politics, so in that respect I agree with Stephen Williamson. [2] So what is it?

Here is one possible answer. First I have to appeal to macroeconomists who learnt their trade after the New Classical revolution to consider where the now dominant methodology in macro came from. If you start your history of macro thought in 1980, then everything can seem nicely progressive. First there were RBC models, applying basic micro to macro. The data showed that this framework could not explain how monetary policy appeared to work, so to resolve this puzzle New Keynesian theory was developed.

But it was not like that at all. [3]

Macro started well before 1980. Its theoretical basis may have been shaky, but it was a scientific discipline in the Popperian sense. The New Classical revolution attempted to kill off Keynesian ideas, and deny the importance of aggregate demand. In that respect it now seems clear, because of the dominance of New Keynesian theory, that this was regressive rather than progressive. 
 
Yet among many who were part of the New Classical revolution, or who were taught by its leaders, there remains a deep antagonism to Keynesian ideas. This was not enough to prevent the emergence of New Keynesian theory, but the NK model built upon rather than challenged the RBC framework, and it could always be dismissed with an assertion about price flexibility. As a result, in certain places NK theory was tolerated rather than embraced, or was quietly marginalised.

This attitude was facilitated by another aspect of the revolution. Although it failed to kill Keynesian economics, it did succeed in changing how macro was done. Microfoundations macro did not just become an important way of explaining the economy, it became the only acceptable way. Now we can debate the wisdom of that. But I think it is very difficult to deny that this methodological revolution reduced the discipline that data can provide on model proliferation. It becomes far too easy to say ‘but in this model something different happens’, even if there is compelling empirical evidence that said model is not applicable. [4] [5]

I think this may help explain why a good proportion of macroeconomists failed to advocate fiscal stimulus in 2009 and call the consequences of subsequent austerity. [6] Now you may disagree with my view that this represented a failure for macroeconomists as a whole. What I want to convince you of here is that my view that it did (a view perhaps shared by others) does not amount to an assertion that modern macro is fundamentally flawed, or that those working within it are wasting their time. While the New Classical revolution may have moved macro many steps forward, in condemning Keynesian ideas it took one large step backwards, and the consequences of that mistake are still with us.



[1] In part this was a reaction to a different policy decision. The work of various economists before the formation of the Euro had suggested that countercyclical fiscal policy should be a key stabilisation tool for individual Eurozone members. That work was ignored by policymakers, and they were backed up by a significant number of macroeconomists, in part using other models that focused on free rider problems and fiscal dominance. How this all turned out is another story, but once again the collective of academic macroeconomists hardly covered itself in glory.

[2] Personally, I do not think the actions of some eminent economists who ignore their economics when batting for their favoured politicians is critical here, regrettable though it is. Nor on its own were the comments of other eminent macroeconomists who appeared not to have kept up with the literature, although as I suggest here I think this was indicative. Both could have been isolated examples, quickly brushed aside. More revealing is this survey, where although 46% of economists agreed that the US stimulus was a good policy, a large 40% were uncertain or did not answer. That is just one survey, but it reflects a similar division amongst the macroeconomists I know, and the views you find on the web. See, for example, the quote from Tom Sargent in Stephen Williamson’s post. Now I think some of this 40% are equivocal about fiscal stimulus (and therefore not too worried about austerity) because of a deep distrust of government intervention or the state. Whatever the merits of this mistrust, this is exactly the ideology and politics that Paul Krugman and I complain about. I think some others of this 40% take a view that monetary policy is still up to the job, even at the ZLB. I have talked about this most recently here. This post provides a third possible explanation, although I am sure there are others. (I tried to be comprehensive here.) 

[3] Getting the history of macro thought right is important for other reasons as well. As I suggested here, the structure of NK models may owe as much to the need to work with rather than against the then dominant RBC paradigm, as to any intrinsic empirical merits of that structure.

[4] I have tried to argue that economists working in central banks take more notice of the data, which helps explain why the NK model is dominant there, but Stephen Williamson disagrees.

[5] Another arguable consequence is that modelbuilding became too conformist. The charge that some element of a model is ad hoc and lacks microfoundations hangs like a Sword of Damocles over modelbuilders. Probably too much intolerance of alternative methodologies came with this revolution as well. I think this is part of any explanation of why so little work was done on financial frictions before 2008, as Mark Thoma suggests. Again I am not arguing that the methodology is wrong, but instead that it may have certain perhaps unintended and unfortunate consequences. In my own experience if you talk to many microeconomists about DSGE in macro they can be quite critical: for example about how narrow the micro used is, or how obsessed with technicalities the analysis can become. Sometimes they can be downright dismissive.

[6] You could argue that ideology lay behind the New Classical revolution’s attempt to kill off Keynesian economics, and I do not really know enough to agree or disagree. What I do think is that most of those involved in the revolution thought that they were just exposing deficient theory, which in many cases they were.