Winner of the New Statesman SPERI Prize in Political Economy 2016

Friday, 25 August 2017

Medicine and the microfoundations hegemony in macroeconomics

Mainly for economists

I’m beginning to think I should have made much more of analogies between economics and medicine in discussing what I call the microfoundations hegemony: the idea that the only ‘proper’ macroeconomic models are those that have all their equations consistently derived from microeconomic theory. The analogy I have in mind is that biology represents microfoundations, while statistical analyses linking, say, smoking to lung cancer represent the non-microfounded models. (I've used the analogy in other contexts.)

I was thinking about this in the context of a paper I have just finished which uses a diagram that Adrian Pagan used to describe different types of macromodels. The diagram, which you can find in an earlier post, has ‘degree of empirical coherence’ and ‘degree of theoretical coherence’ on the two axes. Particular macromodels can be placed within this space. At one extreme, with the highest theoretical coherence but weaker empirical coherence, are microfounded DSGE models. At the other are VARs: statistical correlations between a group of macro variables with no theoretical restrictions imposed. In the middle are what I call Structural Econometric Models and Blanchard calls Policy Models, which use an eclectic mix of theory and econometric evidence.

If you have a simple view of the hard sciences, this diagram looks very odd. Theories either fit the facts or they do not. But I think a medic could make sense of this diagram by thinking about medical practice based on biology (for example how cells work and interact with various chemicals) and practice based on epidemiological studies. Ideally the two should work together, but at any particular moment in time some medical ideas may borrow more from one side or the other. In particular, statistical studies could throw up links which do not have a clear and well established biological explanation.

Now imagine the microfoundations hegemony in macroeconomics applied to medicine. Statistical longitudinal studies in the 1950s showed a link between smoking and lung cancer, but the biological mechanisms were unclear. The microfoundations hegemony applied to this example would mean that medics would argue that until those biological mechanisms are clearly established they should ignore these statistical results. The investigation of such mechanisms should remain a top research priority, but for the moment advice to patients should be to carry on smoking.

OK, that is perhaps a little harsh, but only a little. That some macroeconomists (I call them microfoundations purists) can argue that you should model and give policy advice based not on what you see but on what you can microfound is something I cannot imagine any philosopher of science taking seriously (after they had stopped laughing).

Tuesday, 11 October 2016

Ricardian Equivalence, benchmark models, and academics’ response to the financial crisis

Mainly for economists

In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:
“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified.

It is not only academics who think policy has to be done using microfounded models. The core model used by the Bank of England is a microfounded DSGE model. So even in this policy making institution, their core model does not conform to Blanchard’s prescription. (Yes, I know they have lots of other models, but still. The Fed is closer to Blanchard than the Bank.)

Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, which means Ricardian Equivalence holds in this benchmark model. [1]
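Written out, a minimal sketch of that two-equation benchmark looks like this (my notation, in the standard textbook form rather than any particular paper's):

```latex
% Phillips curve: inflation depends on expected future inflation
% and the output gap y_t
\pi_t = \beta \,\mathbb{E}_t \pi_{t+1} + \kappa \, y_t
% Purely forward-looking IS curve, from the representative
% consumer's Euler equation (i_t is the nominal interest rate,
% r^n_t the natural real rate)
y_t = \mathbb{E}_t y_{t+1} - \sigma \left( i_t - \mathbb{E}_t \pi_{t+1} - r^n_t \right)
```

Note that fiscal variables appear nowhere: with the infinitely lived representative consumer behind the IS curve, the timing of taxes is irrelevant.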

Ricardian Equivalence means that a bond financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they should do so is not some deep mystery: the model’s failure to capture it may be traced back to the assumption that the intertemporal consumer is never credit constrained. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
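A toy two-period illustration of the point (my own sketch, with made-up numbers, not any of the models discussed here): a forward-looking consumer who internalises the government budget constraint leaves consumption unchanged after a bond financed tax cut, while a credit constrained consumer spends all of it.

```python
# Two-period sketch of Ricardian Equivalence. A bond financed tax
# cut of size "cut" in period 1 must be repaid with interest in
# period 2, so the present value of taxes is unchanged.

def lifetime_resources(y1, y2, t1, t2, r):
    """Present value of after-tax income over both periods."""
    return (y1 - t1) + (y2 - t2) / (1 + r)

def ricardian_c1(y1, y2, t1, t2, r):
    """Period-1 consumption of a forward-looking consumer who
    smooths consumption perfectly across the two periods
    (c1 = c2, so c1 * (1 + 1/(1+r)) equals lifetime resources)."""
    w = lifetime_resources(y1, y2, t1, t2, r)
    return w / (1 + 1 / (1 + r))

def hand_to_mouth_c1(y1, t1):
    """A credit constrained consumer spends current disposable income."""
    return y1 - t1

r, y1, y2, t1, t2 = 0.05, 100.0, 100.0, 20.0, 20.0
cut = 10.0  # tax cut now, repaid with interest next period

base_r = ricardian_c1(y1, y2, t1, t2, r)
cut_r = ricardian_c1(y1, y2, t1 - cut, t2 + cut * (1 + r), r)

base_h = hand_to_mouth_c1(y1, t1)
cut_h = hand_to_mouth_c1(y1, t1 - cut)

print(cut_r - base_r)  # ~0: the Ricardian consumer saves the whole cut
print(cut_h - base_h)  # the constrained consumer spends it all
```

The contrast with the stylised empirical fact above is the whole point: real consumers behave much more like the second function than the first.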

Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the importance of shifting credit conditions for consumption, and speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacy) associated with that crisis and its aftermath.

I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.

In my view, the benchmark models that academic macroeconomists carry round in their heads should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about in his graduate textbook with Stan Fischer, although then they were confined to chapter 10! These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms, and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not models with consistent microfoundations.


[1] Recognising this point, some might add some ‘rule of thumb’ consumers into the model. This is fine, as long as you do not continue to think the model is microfounded. If these rule of thumb consumers spend all their income because of credit constraints, what happens when these constraints are expected to last for more than the next period? Does the model correctly predict what would happen to consumption if the proportion of rule of thumb consumers changes? It does not.  

Saturday, 24 September 2016

What is so bad about the RBC model?

This post has its genesis in a short twitter exchange storified by Brad DeLong.

DSGE models, the models that mainstream macroeconomists use to model the business cycle, are built on the foundations of the Real Business Cycle (RBC) model. We (almost) all know that the RBC project failed. So how can anything built on these foundations be acceptable? As Donald Trump might say, what is going on here?

The basic RBC model contains a production function relating output to capital (owned by individuals) and labour plus a stochastic element representing technical progress, an identity relating investment and capital, a national income identity giving output as the sum of consumption and investment, marginal productivity conditions (from profit maximisation by perfectly competitive representative firms) giving the real wage and real interest rate, and the representative consumer’s optimisation problem for consumption, labour supply and capital. (See here, for example.)
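Written out (in my notation, a bare-bones sketch rather than any specific paper's formulation), those ingredients are:

```latex
% Production function with stochastic technology A_t
Y_t = A_t F(K_t, N_t)
% Capital accumulation identity
K_{t+1} = (1 - \delta) K_t + I_t
% National income identity
Y_t = C_t + I_t
% Marginal productivity conditions from perfect competition
w_t = A_t F_N(K_t, N_t), \qquad r_t = A_t F_K(K_t, N_t) - \delta
% Representative consumer chooses C_t, N_t, K_{t+1} to maximise
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, u(C_t, 1 - N_t)
```

With fully flexible prices, these real equations pin down everything of interest, which is precisely why money plays no role in what follows.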

What is the really big problem with this model? Not problems along the lines of ‘I would want to add this’, but more problems like ‘I would not even start from here’. Let’s ignore capital, because in the bare bones New Keynesian model capital does not appear. If you were to say giving primacy to shocks to technical progress, I would agree that is a big problem: all the behavioural equations should contain stochastic elements which can also shock this economy, but New Keynesian models do this to varying degrees. If you were to say the assumption of labour market clearing, I would also agree that is a big problem.

However, none of the above is the biggest problem in my view. The biggest problem is the assumption of continuous goods market clearing, aka fully flexible prices. That is the assumption that tells you monetary policy has no impact on real variables. Now an RBC modeller might respond: how do you know that? Surely it makes sense to see whether a model that does assume price flexibility could generate something like business cycles?

The answer to that question is no, because we know it cannot for a simple reason: unemployment in recessions is involuntary, and this model cannot generate involuntary unemployment, only voluntary variations in labour supply in response to short term movements in the real wage. Once you accept that higher unemployment in recessions is involuntary (and the evidence for that is very strong), the RBC project was never going to work.

So how did RBC models ever get off the ground? Because the New Classical revolution said everything we knew before that revolution should be discounted because it did not use the right methodology. And also because the right methodology - the microfoundations methodology - allowed the researcher to select what evidence (micro or macro) was admissible. That, in turn, is why the microfoundations methodology has to be central to any critique of modern macro. Why RBC modellers chose to dismiss the evidence on involuntary unemployment I will leave as an exercise for the reader.

The New Keynesian (NK) model, although it may have just added one equation to the RBC model, did something which corrected its central failure: the failure to acknowledge the pre-revolution wisdom about what causes business cycles and what you had to do to combat them. In that sense its break from its RBC heritage was profound. Is New Keynesian analysis still hampered by its RBC parentage? The answer is complex (see here), but can be summarised as no and yes. But once again, I would argue that what holds back modern macro much more is its reliance on its particular methodology.

One final point. Many people outside mainstream macro feel happy to describe DSGE modelling as a degenerative research strategy. I think that is a very difficult claim to substantiate, and is hardly going to convince mainstream macroeconomists. The claim I want to make is much weaker, and that is that there is no good reason why microfoundations modelling should be the only research strategy employed by academic economists. I challenge anyone to argue against my claim.




Tuesday, 20 September 2016

Paul Romer on macroeconomics

It is a great irony that the microfoundations project, which was meant to make macro just another application of microeconomics, has left macroeconomics with very few friends among other economists. The latest broadside comes from Paul Romer. Yes it is unfair, and yes it is wide of the mark in places, but it will not be ignored by those outside mainstream macro. This is partly because he discusses issues on which modern macro is extremely vulnerable.

The first is its treatment of data. Paul’s discussion of identification illustrates how macroeconomics needs to use all the hard information it can get to parameterise its models. Yet microfounded models, the only models deemed acceptable in top journals for both theoretical and empirical analysis, are normally rather selective about the data they focus on. Micro and macro evidence alike is either ignored because it is inconvenient, or put on a to-do list for further research. This is an inevitable result of making internal consistency an admissibility criterion for publishable work.

The second vulnerability is a conservatism which also arises from this methodology. The microfoundations criterion taken in its strict form makes some processes intractable to model: for example, sticky prices where actual menu costs are a deep parameter. Instead DSGE modelling uses tricks, like Calvo contracts. But who decides whether these tricks amount to acceptable microfoundations or are instead ad hoc or implausible? The answer depends a lot on conventions among macroeconomists, and like all conventions these move slowly. Again this is a problem generated by the microfoundations methodology.
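For readers unfamiliar with the trick: under Calvo contracts each firm gets to reset its price in any period only with a fixed probability $1-\theta$, regardless of how long its price has been fixed. Aggregating the optimal reset decisions gives (in the standard textbook form, my notation):

```latex
% New Keynesian Phillips curve implied by Calvo pricing:
% inflation responds to expected future inflation and real
% marginal cost, with a slope determined by the reset
% probability 1 - theta and the discount factor beta
\pi_t = \beta \, \mathbb{E}_t \pi_{t+1}
  + \frac{(1-\theta)(1-\beta\theta)}{\theta} \, \widehat{mc}_t
```

The convenience is obvious: one parameter, $\theta$, summarises all nominal rigidity. The awkwardness is equally obvious: no firm actually draws a lottery ticket to decide when it may change its price.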

Paul’s discussion of real effects from monetary policy, and the insistence on productivity shocks as business cycle drivers, is pretty dated. (And, as a result, it completely misleads Paul Mason here.) Yet it took a long time for RBC models to be replaced by New Keynesian models, and you will still see RBC models around. Elements of the New Classical counter revolution of the 1980s still persist in some places. It was only a few years ago that I listened to a seminar paper where the financial crisis was modelled as a large negative productivity shock.

Only in a discipline which has deemed microfoundations the only acceptable way of modelling can practitioners still feel embarrassed about including sticky prices because their microfoundations (the tricks mentioned above) are problematic. Only in that discipline can respected macroeconomists argue that because of these problematic microfoundations it is best to ignore something like sticky prices when doing policy work: an argument that would be laughed out of court in any other science. In no other discipline could you have a debate about whether it was better to model what you can microfound rather than model what you can see. Other economists understand this, but many macroeconomists still think this is all quite normal.

Friday, 16 September 2016

Economics, DSGE and Reality: a personal story

As I do not win prizes very often, I thought I would use the occasion of this one to write something much more personal than I normally allow myself. But this mini autobiography has a theme involving something quite topical: the relationship between academic macroeconomics and reality, and in particular the debate over DSGE modelling and the lack of economics in current policymaking. [1]

I first learnt economics at Cambridge, a department which at that time was hopelessly split between different factions or ‘schools of thought’. I thought if this is what being an academic is all about I want nothing to do with it, and instead of doing a PhD went to work at the UK Treasury. The one useful thing about economics that Cambridge taught me (with some help from tutorials with Mervyn King) was that mainstream economics contained too much wisdom to be dismissed as fundamentally flawed, but also (with the help of John Eatwell) that economics of all kinds could easily be bent by ideology.

My idea that by working at the Treasury I could avoid clashes between different schools of thought was of course naive. Although the institution I joined had a well developed and empirically orientated Keynesian framework [2], it immediately came under attack from monetarists, and once again we had different schools using different models and talking past each other. I needed more knowledge to understand competing claims, and the Treasury kindly paid for me to do a masters at Birkbeck, with the only condition being that I subsequently return to the Treasury for at least two years. Birkbeck at the time was also a very diverse department (including John Muellbauer, Richard Portes, Ron Smith, Ben Fine and Laurence Harris), but unlike Cambridge a faculty where dedication to teaching trumped factional warfare.

I returned to the Treasury, which while I was away had seen the election of Margaret Thatcher and had its (correct) advice about the impact of monetarism completely rejected. I was, largely by accident, immediately thrust into controversy: first by being given the job of preparing a published paper evaluating the empirical evidence for monetarism, and then by internally evaluating the economic effects of the 1981 budget. (I talk about each here and here.) I left for a job at NIESR exactly two years after I returned from Birkbeck. It was partly that experience that informed this post about giving advice: when your advice is simply ignored, there is no point giving it.

NIESR was like a halfway house between academia and the Treasury: research, but with forecasting rather than teaching. I became very involved in building structural econometric models and doing empirical research to back them up. I built the first version of what is now called NIGEM (a world model widely used by policy making and financial institutions), and with Stephen Hall incorporated rational expectations and other New Classical elements into its domestic model.

At its best, NIESR was an interface between academic macro and policy. It worked very well just before 1990, when with colleagues I showed that entering the ERM at an overvalued exchange rate would lead to a UK recession. A well respected Financial Times journalist responded that we had won the intellectual argument, but that he was still going with his heart that we should enter at 2.95 DM/£. The Conservative government did likewise, and the recession of 1992 inevitably followed.

This was the first public occasion where academic research that I had organised could have made a big difference to UK policy and people’s lives, but like previous occasions it did not do so because others were using simplistic and perhaps politically motivated reasoning. It was also the first occasion that I saw close up academics who had not done similar research, but who had influence, use that influence to support simplistic reasoning. It is difficult to overstate the impact that had on me: being centrally involved in a policy debate, losing that debate for partly political reasons, and subsequently seeing your analysis vindicated but at the cost of people becoming unemployed.

My time at NIESR convinced me that I would find teaching more fulfilling than forecasting, so I moved to academia. The publications I had produced at NIESR were sufficient to allow me to become a professor. I went to Strathclyde University in Glasgow partly because they agreed to give temporary funding to two colleagues at NIESR to come with me so we could bid to build a new UK model. [3] At the time the UK’s social science research funding body, the ESRC, allocated a significant proportion of its funds to support econometric macromodels, subject to competitions every 4 years. It also funded a Bureau at Warwick University that analysed and compared the main UK models. This Bureau at its best allowed a strong link between academia and policy debate.

Our bid was successful, and in the model called COMPACT I would argue we built the first UK large scale structural econometric model which was New Keynesian but which also incorporated innovative features like an influence of (exogenous) financial conditions on intertemporal consumption decisions. [4] We deliberately avoided forecasting, but I was very pleased to work with the IPPR in providing model based economic analysis in regular articles in their new journal, many written with Rebecca Driver.

Our efforts impressed the academics on the ESRC board that allocated funds, and we won another 4 years’ funding, and both projects were subsequently rated outstanding by academic assessors. But the writing was on the wall for this kind of modelling in the UK, because it did not fit the ‘it has to be DSGE’ edict from the US. A third round of funding, which wanted to add more influences from the financial sector into the model using ideas based on work by Stiglitz and Greenwald, was rejected because our approach was ‘old fashioned’, i.e. not DSGE. (The irony given events some 20 years later is immense, and helped inform this paper.)

As my modelling work had always been heavily theory based, I had no problem moving with the tide, and now at Exeter University with Campbell Leith we began a very successful stream of work looking at monetary and fiscal policy interactions using DSGE models. [5] We obtained a series of ESRC grants for this work, again all subsequently rated as outstanding. Having to ensure everything was microfounded I think created more heat than light, but I learnt a great deal from this work which would prove invaluable over the last decade.

The work on exchange rates got revitalised with Gordon Brown’s 5 tests for Euro entry, and although the exchange rate with the Euro was around 1.6 at the time, the work I submitted to the Treasury implied an equilibrium rate closer to 1.4. When the work was eventually published it had fallen to around 1.4, and stayed there for some years. Yet as I note here, that work again used an ’old fashioned’ (non DSGE) framework, so it was of no interest to journals, and I never had time to translate it (something Obstfeld and Rogoff subsequently did, but ignoring all that had gone before). I also advised the Bank of England on building its ‘crossover’ DSGE/econometric model (described here).

Although my main work in the 2000s was on monetary and fiscal policy, the DSGE framework meant I had no need to follow evolving macro data, in contrast to the earlier modelling work. With Campbell and Tatiana I did use that work to help argue for an independent fiscal council in the UK, a cause I first argued for in 1996. This time Conservative policymakers were listening, and our paper helped make the case for the OBR.

My work on monetary and fiscal interaction also became highly relevant after the financial crisis when interest rates hit their lower bound. In what I hope by now is a familiar story, governments from around the world first went with what macroeconomic theory and evidence would prescribe, and then in 2010 dramatically went the opposite way. The latter event was undoubtedly the underlying motivation for me starting to write this blog (coupled with the difficulty I had getting anything I wrote published in the Financial Times or Guardian).

When I was asked to write an academic article on the fiscal policy record of the Labour government, I discovered not just that the Coalition government’s constant refrain was simply wrong, but also that the Labour opposition seemed uninterested in what I found. Given what I found only validated what was obvious from key data series, I began to ask why no one in the media appeared to have done this, or was interested (beyond making fun) in what I had found. Once I started looking at what and how the media reported, I realised this was just one of many areas where basic economic analysis was just being ignored, which led to my inventing the term mediamacro.

You can see from all this why I have a love/hate relationship with microfoundations and DSGE. It does produce insights, and also ended the school of thought mentality within mainstream macro, but more traditional forms of macromodelling also had virtues that were lost with DSGE. Which is why those who believe microfounded modelling is a dead end are wrong: it is an essential part of macro, but it should not be all of academic macro. What I think this criticism can do is two things: revitalise non-microfounded analysis, and also stop editors taking what I have called ‘microfoundations purists’ too seriously.

As for macroeconomic advice and policy, you can see that austerity is not the first time good advice has been ignored at considerable cost. And for the few that sometimes tell me I should ‘stick with the economics’, you can see why given my experience I find that rather difficult to do. It is a bit like asking a chef to ignore how bad the service is in his restaurant, and just stick with the cooking. [6]

[1] This exercise in introspection is also prompted by having just returned from a conference in Cambridge, where I first studied economics. I must also admit that the Wikipedia page on me is terrible, and I have never felt it kosher to edit it myself, so this is a more informative alternative.

[2] Old, not new Keynesian, and still attached to incomes policies. And with a phobia about floating rates that could easily become ‘the end is nigh’ stuff (hence 1976 IMF).

[3] I hope neither regret their brave decision: Julia Darby is now a professor at Strathclyde and John Ireland is a deputy director in the Scottish Government.

[4] Consumption was of the Blanchard Yaari type, which allowed feedback from wealth to consumption. The model was not fully microfounded, and therefore not internally consistent, but it did attempt to track individual data series.

[5] The work continued when Campbell went to Glasgow, but I also began working with Tatiana Kirsanova at Exeter. I kept COMPACT going enough to be able to contribute to this article looking at flu pandemics, but even there one referee argued that the analysis did not use a ‘proper’ (i.e. DSGE) model.

[6] At which point I show my true macro credentials in choosing analogies based on restaurants.  

Wednesday, 13 January 2016

Is mainstream academic macroeconomics eclectic?

For economists, and those interested in macroeconomics as a discipline

Eric Lonergan has a short little post that is well worth reading. Not because it is particularly deep or profound, but because it makes an important point in a clear and simple way that cuts through a lot of the nonsense written on macroeconomics nowadays. The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations. You need New Keynesian models in recessions, but Real Business Cycle models may describe some inflation free booms. You need Minsky in a financial crisis, and in order to prevent the next one. As Dani Rodrik says, there are many models, and the key questions are about their applicability.

If we take that as given, the question I want to ask is whether current mainstream academic macroeconomics is also eclectic. (My original title for this post was ‘can DSGE models be eclectic’, but that got sidetracked into definitional issues; from the way I tend to define things it is the same question.) My answer is yes and no.

Let’s take the five ‘schools’ that Eric talks about. We clearly already have three: New Keynesian, Classical, and Rational Expectations. (Rational Expectations is not normally thought of in the same terms, but I understand why Eric wanted to single it out.) There is currently a huge research programme which aims to incorporate the financial sector, and (sometimes) the potential for financial crises, into DSGE analysis, so soon we may have Minsky too. Indeed the variety of models that academic macro currently uses is far wider than this.

Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really, in the following important sense. I think that any of this huge range of models could be presented at an academic seminar, and the audience would have some idea of what was going on, and be able to raise issues and make criticisms about the model on its own terms. This is because these models (unlike those of 40+ years ago) use a common language. The idea that the academic ranking of economists like Lucas should reflect events like the financial crisis seems misconceived from this point of view.

It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated. If too many academic models do appear otherwise, I think this reflects the sociology of macroeconomics and the history of macroeconomic thought more than anything (see below).

It also means that the range of issues that models (DSGE models) can address is also huge. To take just one example: the idea that the financial crisis was caused by growing inequality which led to too much borrowing by less wealthy individuals. This is the theme of a 2013 paper by Michael Kumhof and colleagues. Yet the model they use to address this issue is a standard DSGE model with some twists. There is nothing fundamentally non-mainstream about it.

So why is the popular perception so different? Why do people talk about schools of thought? I think there are two reasons. First, while the above is true in the realm of academic understanding and discourse, it does not carry over into policy. When it comes to policy, we get to learn which models academics think are applicable to particular policy problems, and here divisions can be sharp. Second, there are plenty of people outside academia who have a public voice about economics (and generally a policy orientation), and they often do see themselves as school followers.

In terms of working practice rather than the hot end of macro policy decisions, most academic macroeconomists would regard themselves as eclectic in terms of the kind of work they are prepared to spend an hour or two seeing presented. But this view, and the common language that mainstream academics use, leads me to the No part of the answer to my original question. The common theme of the work I have talked about so far is that it is microfounded. Models are built up from individual behaviour.

You may have noted that I have so far missed out one of Eric’s schools: Marxian theory. What Eric wants to point out here is clear from his first sentence: “Although economists are notorious for modelling individuals as self-interested, most macroeconomists ignore the likelihood that groups also act in their self-interest.” Here I think we do have to say that mainstream macro is not eclectic. Microfoundations is all about grounding macro behaviour in the aggregate of individual behaviour.

I have many posts where I argue that this non-eclecticism in terms of excluding non-microfounded work is deeply problematic. Not so much for an inability to handle Marxian theory (I plead agnosticism on that), but in excluding the investigation of other parts of the real macroeconomic world. (Start here, or type microfoundations into this blog’s search box and work backwards in time.) But for me at least this is a methodological point, rather than anything associated with any school of thought. Attempts to link the two, of which I think many people including myself have been guilty, just confuse matters.

The confusion goes right back, as I will argue in a forthcoming paper, to the New Classical Counter Revolution of the 1970s and 1980s. That revolution, like most revolutions, was not eclectic! It was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.




Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
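The logic of that single equation can be made explicit. Here is a sketch in standard textbook notation (my own illustration, with the expectation dated at t-1 as in the traditional formulation; the coefficient $\alpha$ is generic):

```latex
% Traditional Phillips curve: inflation depends on expected inflation
% and the deviation of unemployment from its natural rate u^*.
\pi_t = E_{t-1}\pi_t + \alpha\,(u^* - u_t), \qquad \alpha > 0
% Under rational expectations, \pi_t - E_{t-1}\pi_t is an
% unforecastable error \varepsilon_t, so
u^* - u_t = \varepsilon_t / \alpha
% i.e. deviations from the natural rate are purely random, and
% systematic stabilisation policy has no role.
```

This is why rational expectations, combined with this particular Phillips curve, looked like an existential threat: the weak link was the Phillips curve, not the expectations assumption.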

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analyse small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived both in the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.
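To see what “(virtually) atheoretical” means here, consider a minimal sketch of a VAR (my own illustration, not from the post): each variable is regressed on lags of every variable, with no restrictions from theory. The bivariate system and coefficient matrix below are invented purely for the example.

```python
import numpy as np

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t, with a known
# (made-up) coefficient matrix A_true.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# "Atheoretical" estimation: regress y_t on y_{t-1} by least squares,
# imposing no restrictions on any coefficient.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.round(A_hat, 2))
```

The point of the contrast: a structural econometric model would restrict or interpret the entries of `A` using a mix of theory and evidence, while the VAR lets the data determine everything.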

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have done), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Saturday, 16 May 2015

Paul Romer and microfoundations

For economists

In an AER P&P paper, Paul Romer talks about many things: a distinction between scientific consensus and political discourse, a divide in growth theory between those that use models based on perfect competition and those using imperfect competition, but mainly the distinction between appropriate mathematical theory and what he calls ‘mathiness’. To better see how these things connect up, and how they could have wider applicability, I suggest reading his blog post first. There he writes:
“the problems I identify in growth theory may be of broader interest. If economists can understand what the problem is in this sub-field, we may be in a better position to evaluate the scientific health of other parts of economics. The field to which scrutiny might first extend is economic fluctuations.”
So how might such a comparison go? The attachment to using perfect rather than imperfect competition could map into an aversion to either price stickiness or the importance/autonomy of aggregate demand, both of which could be labelled as ‘anti-Keynesian’. Keynesian theory is denigrated in some cases not because of empirical evidence but because of the policy implications that may follow from that theory. The microfoundations methodology, as practiced by some, allows those that want to deny the importance of Keynesian effects to continue to study business cycles, because this methodology can place such a low weight on the importance of evidence when it comes to the elements of model building. (Ask not whether price stickiness has empirical support, but whether it has solid microfoundations.)

Paul Romer’s post also links to the idea in this paper by Paul Pfleiderer about theoretical models becoming “chameleons”. To quote: “A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.” I think we could add that these conclusions are usually associated with defending a particular political view or sectional interest.

It is important to stress that this is not an attack on the microfoundations methodology, just as Paul Romer’s article is not an attack on mathematical modelling. Most DSGE modellers, who are not subject to any political aversion to using price rigidity, happily use this methodology to advance the discipline. But if that methodology is taken too seriously (by what I call here microfoundations purists), so that modellers only look at what they can microfound rather than what they actually see in the real world, it can allow approaches that should have been discarded to live on, perhaps because they support a particular policy position.

A discipline where a huge number of alternative models persist could be described as ‘flourishing’, but risks disintegrating into alternative schools of thought, where some schools have an immunisation strategy that protects them from particular kinds of empirical evidence. As Paul perceptively points out, this makes economics more like political discourse than a scientific discipline. Some people welcome that, or regard it as inevitable - I hope most economists do not. This means we first need to collectively recognise the problem, rather than keeping our heads down to avoid upsetting others. I hope Paul Romer’s article can be part of that process. 

Friday, 3 April 2015

Do not underestimate the power of microfoundations

Mainly for economists

Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies. (Recently Stephen Williamson asked the same question, but I suspect from a different perspective!) Brad says his question is closely related to the “question of why models that are microfounded in ways we know to be wrong are preferable in the discourse to models that try to get the aggregate emergent properties right.”

I would guess the two questions are in fact exactly the same. The NK model is the microfounded way of doing Keynesian economics, and microfounded (DSGE) models are de rigueur in academic macro, so any mainstream academic wanting to analyse business cycle issues from a Keynesian perspective will use a variant of the NK model. Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency.

As macro 50 years ago was very different, it is an interesting methodological question to ask why things changed, even if you think the change has greatly improved how macro is done (as I do). I would argue that the New Classical (counter) revolution was essentially a methodological revolution. However there are two problems with having such a discussion. First, economists are usually not comfortable talking about methodology. Second, it will be a struggle to get macroeconomists below a certain age to admit this is a methodological issue. Instead they view microfoundations as just putting right inadequacies with what went before.

So, for example, you will be told that internal consistency is clearly an essential feature of any model, even if it is achieved by abandoning external consistency. You will hear how the Lucas critique proved that any non-microfounded model is inadequate for doing policy analysis, rather than it simply being one aspect of a complex trade-off between internal and external consistency. In essence, many macroeconomists today are blind to the fact that adopting microfoundations is a methodological choice, rather than simply a means of correcting the errors of the past.

I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there. (And arguing that it should be there is a relevant point for economics teaching, but is pointless if you are trying to change current discourse.) That makes discussion difficult, but I’m not sure it makes it impossible.


Wednesday, 25 March 2015

Why do central banks use New Keynesian models?

And more on whether price setting is microfounded in RBC models. For macroeconomists.

Why do central banks like using the New Keynesian (NK) model? Stephen Williamson says: “I work for one of these institutions, and I have a hard time answering that question, so it's not clear why Simon wants David [Levine] to answer it. Simon posed the question, so I think he should answer it.” The answer is very simple: the model helps these banks do their job of setting an appropriate interest rate. (I suspect because the answer is very simple this is really a setup for another post Stephen wants to write, but as I always find what Stephen writes interesting I have no problem with that.)

What is an NK model? It is an RBC model plus a microfounded model of price setting, and a nominal interest rate set by the central bank. Every NK model has its inner RBC model. You could reasonably say that these NK models were designed to help tell the central bank what interest rate to set. In the simplest case, this involves setting a nominal rate that achieves, or moves towards, the level of real interest rates that is assumed to occur in the inner RBC model: the natural real rate. These models do not tell us how and why the central bank can set the nominal short rate, and those are interesting questions which occasionally might be important. As Stephen points out, NK models tell us very little about money. Most of the time, however, I think interest rate setters can get by without worrying about these how and why questions.
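The structure can be made concrete with the standard textbook three-equation sketch (my notation, not drawn from the post itself; $x_t$ is the output gap, $r^n_t$ the natural real rate):

```latex
% IS curve, from the intertemporal (RBC) consumer side:
x_t = E_t x_{t+1} - \sigma\,(i_t - E_t\pi_{t+1} - r^n_t)
% NK Phillips curve, from microfounded (e.g. Calvo) price setting:
\pi_t = \beta\,E_t\pi_{t+1} + \kappa\,x_t
% Interest rate rule for the central bank (Taylor-type):
i_t = r^n_t + \phi_\pi \pi_t + \phi_x x_t
% If the bank sets i_t so that x_t = 0 and \pi_t = 0, the real rate
% tracks r^n_t and the inner RBC outcome is replicated.
```

The first equation is the inner RBC model, the second is the added price-setting block, and the third is the instrument the central bank actually controls, which is why the model maps so directly onto its job.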

Why not just use the restricted RBC version of the NK model? Because the central bank sets a nominal rate, so it needs an estimate of what expected inflation is. It could get that from surveys, but it also wants to know how expected inflation will change if it changes its nominal rate. I think a central banker might also add that they are supposed to be achieving an inflation target, so having a model that examines the response of inflation to the rest of the economy and nominal interest rate changes seems like an important thing to do.

The reason why I expect people like David Levine to at least acknowledge the question I have just answered is also simple. David Levine claimed that Keynesian economics is nonsense, and had been shown to be nonsense since the New Classical revolution. With views like that, I would at least expect some acknowledgement that central banks appear to think differently. For him, like Stephen, that must be a puzzle. He may not be able to answer that puzzle, but it is good practice to note the puzzles that your worldview throws up.

Stephen also seems to miss my point about the lack of any microfounded model of price setting in the RBC model. The key variable is the real interest rate, and as he points out the difference between perfect competition and monopolistic competition is not critical here. In a monetary economy the real interest rate is set by both price setters in the goods market and the central bank. The RBC model contains neither. To say that the RBC model assumes that agents set the appropriate market clearing prices describes an outcome, but not the mechanism by which it is achieved.

That may be fine - a perfectly acceptable simplification - if when we do think how price setters and the central bank interact, that is the outcome we generally converge towards. NK models suggest that most of the time that is true. This in turn means that the microfoundations of price setting in RBC models applied to a monetary economy rest on NK foundations. The RBC model assumes the real interest rate clears the goods market, and the NK model shows us why in a monetary economy that can happen (and occasionally why it does not).