Winner of the New Statesman SPERI Prize in Political Economy 2016
Showing posts with label methodology.

Wednesday, 12 April 2017

Economics is an inexact science

When I wrote about why the BBC should treat a clear consensus in economics the same way as it now treats climate science, I got a number of comments about why economics is not a science. A common theme was that economics cannot prove theories ‘beyond doubt’ in the way the hard sciences can. A more sophisticated version of this complaint is that most economic theories cannot be disproved in the way Popper thought scientific theories could be.

All this ignores a key feature of any social science: its inexact nature. Instead of proof, we have accumulations of evidence that confirm the applicability of some theories and reject the applicability of others. Economists’ views about which models are applicable change as this evidence accumulates.

A good example involves the minimum wage, as Noah Smith suggests. The basic economic model suggested even a modest minimum wage should significantly reduce employment, but economists discovered that the evidence did not show this. As this evidence accumulated, alternative theories and models (monopsony and search) were thought to be more relevant. It is this response to evidence that makes economics a science.
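The monopsony case can be made concrete with a deliberately toy example (the linear labour supply curve and all the numbers below are invented for illustration, not estimates): a single employer facing an upward-sloping labour supply curve hires fewer workers than a competitive market would, so a minimum wage set between the monopsony wage and the competitive wage raises employment.

```python
# Toy monopsony model: inverse labour supply w(L) = c*L, constant
# marginal revenue product p. All parameter values are made up.
p, c = 10.0, 1.0

# Monopsonist maximises p*L - c*L**2, so the first-order condition is p = 2*c*L:
# it hires fewer workers, at a lower wage, than the competitive benchmark.
L_monopsony = p / (2 * c)       # 5.0 workers
w_monopsony = c * L_monopsony   # wage of 5.0

# Competitive benchmark: wage equals marginal revenue product, w = p.
L_competitive = p / c           # 10.0 workers

# A binding minimum wage between w_monopsony and p flattens the supply
# curve the firm faces, so it hires everyone willing to work at that wage.
w_min = 8.0
L_with_min_wage = w_min / c     # 8.0 workers

print(L_monopsony, L_with_min_wage)  # employment rises from 5.0 to 8.0
```

In this sketch the minimum wage raises employment, which is exactly the kind of outcome the accumulating evidence made economists take seriously.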

Jo Michell writes “The scientific method of forming a hypothesis and then testing that hypothesis against reality can never be the final arbiter of knowledge, as it can in the physical sciences.” He is right that no single experiment or regression can kill a theory, but wrong that the accumulation of evidence is not the final arbiter, because no other arbiter is available. He links to a post by Noah Smith which talks about the failures of forecasting. But as that post makes clear, this is not about data rejecting models, but the inability of models to predict the future. We would never dream of condemning medics because they cannot predict the exact time of our death, still less suggest that this failure indicates they are not doing science.

Of course economics involves cases where economists appear too reluctant to give up their favoured models. You can find similar stories in the hard sciences. There will be more such stories in economics because the inexact nature of economics makes it easier to discount any single piece of evidence. What I cannot understand is what leads someone like Russ Roberts to argue against the use of evidence, claiming instead that “economics is primarily a way of organizing one’s thinking”. Astrology is also a way of organising one’s thinking, but it fails because the evidence does not back it up.

That comparison is slightly unfair, because while the theory behind astrology is obviously implausible, the basic principles of microeconomics are not. In a class on economic methodology I once drew a huge tree that showed how most of economics could be derived from principles of rational choice. But go beyond the basics, and add in complications involving information and transactions costs (to name but two) and you very quickly derive competing models. There is no single model that comes from thinking like an economist, so for that reason alone we need data to tell us which models are more applicable.

So thinking like an economist does not tell me at what point raising the minimum wage will reduce employment. But why would anyone want to keep their models from being proved relevant or otherwise by data? The only reason I can think of is that some models give answers that are ideologically convenient. Of course allowing data to establish the relevance of some models over others does not make economics ideology proof. For example people can always select the one study that suggests that fiscal policy does not influence output and ignore the hundreds that show otherwise. That is why the accumulation of evidence, which includes its replicability, is so important. If you think economics has problems in that respect, have a look at psychology.

This is why economists’ views about the long term impact of Brexit should be treated as knowledge rather than just opinion. Here knowledge is shorthand for the accumulation of evidence consistent with plausible theory. Sometimes the theories are common sense, such as that making trade more difficult will reduce trade. Estimates of the size of the trade reduction based on evidence are uncertain, but they are better than estimates based on wishful thinking. Empirical gravity equations consistently show that geography still matters a lot in determining how much is traded. Finally, there is clear evidence that trade is positively associated with productivity growth. To say that all this has no more worth than some politician’s opinion is ultimately to degrade evidence and the science which interprets it.
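For concreteness, the gravity equations referred to here typically take the following textbook form (the notation is generic rather than drawn from any particular study): bilateral trade rises with the two countries’ GDPs and falls with distance, and the log-linear version is what gets estimated.

```latex
T_{ij} \;=\; G\,\frac{Y_i^{\alpha}\,Y_j^{\beta}}{D_{ij}^{\gamma}}
\qquad\Longrightarrow\qquad
\ln T_{ij} \;=\; \ln G + \alpha \ln Y_i + \beta \ln Y_j - \gamma \ln D_{ij} + \varepsilon_{ij}
```

Estimated distance elasticities are consistently large (typically around one), which is the sense in which geography still matters a lot.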



Sunday, 15 January 2017

Blanchard joins calls for Structural Econometric Models to be brought in from the cold

Mainly for economists

Ever since I started blogging I have written posts on macroeconomic methodology. One objective was to try and convince fellow macroeconomists that Structural Econometric Models (SEMs), with their ad hoc blend of theory and data fitting, were not some old fashioned dinosaur, but a perfectly viable way to do macroeconomics and macroeconomic policy. I wrote this with the experience of having built and published papers with both SEMs and DSGE models.

Olivier Blanchard’s third post on DSGE models does exactly the same thing. The only slight confusion is that he calls them ‘policy models’, but when he writes

“Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals.”

he can only mean SEMs. [1] I prefer the term SEM to ‘policy model’ because it describes what is in the tin: structural because they utilise lots of theory, and econometric because they try to match the data.

In a tweet, Noah Smith says he is puzzled. “What else is the point of DSGEs??” besides advising policy he asks? This post tries to help him and others see how the two classes of model can work together.

The way I would estimate a SEM today (but not necessarily the only valid way) would be to start with an elaborate DSGE model. But rather than estimate this model using Bayesian methods, I would use it as a theoretical template with which to start econometric work, either on an equation by equation basis or as a set of sub-systems. Where lag structures or cross equation restrictions were clearly rejected by the data, I would change the model to more closely match the data. If some variables had strong power in explaining others but were not in the DSGE specification, but I could think of reasons for a causal relationship (i.e. why the DSGE specification was inadequate), I would include them in the model. That would become the SEM. [2]
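The middle step of that procedure - change the model where the data clearly rejects the theoretical restriction - can be sketched with a toy example (the data-generating process, lag lengths and numbers below are all invented for illustration; the OLS and F-test are done by hand with numpy). Suppose the DSGE template implies one lag of the dependent variable, but the true process has two:

```python
import numpy as np

# Invented data-generating process: the "DSGE template" equation has one lag,
# but the true process (unknown to the modeller) has two.
rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + 1.0 * x[t] + 0.1 * rng.normal()

def ssr(Y, X):
    """Sum of squared residuals from an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return float(np.sum((Y - X @ beta) ** 2))

Y = y[2:]
X_restricted = np.column_stack([np.ones(T - 2), y[1:-1], x[2:]])  # one lag
X_unrestricted = np.column_stack([X_restricted, y[:-2]])          # two lags

ssr_r = ssr(Y, X_restricted)
ssr_u = ssr(Y, X_unrestricted)

# F-test of the single restriction (coefficient on the second lag = 0).
# The 5% critical value for F(1, 194) is about 3.9.
F = (ssr_r - ssr_u) / (ssr_u / (len(Y) - X_unrestricted.shape[1]))
print(F > 3.9)  # True: the one-lag restriction is rejected, so enrich the dynamics
```

When the restriction is rejected like this, the SEM keeps the richer lag structure even though the DSGE template does not imply it.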

If that sounds terribly ad hoc to you, that is right. SEMs are an eclectic mix of theory and data. But SEMs will still be useful to academics and policymakers who want to work with a model that is reasonably close to the data. What those I call DSGE purists have to admit is that because DSGE models do not match the data in many respects, they are misspecified and therefore any policy advice from them is invalid. The fact that you can be sure they satisfy the Lucas critique is not sufficient compensation for this misspecification.

By setting the relationship between a DSGE and a SEM in the way I have, it makes it clear why both types of model will continue to be used, and how SEMs can take their theoretical lead from DSGE models. SEMs are also useful for DSGE model development because their departures from DSGEs provide a whole list of potential puzzles for DSGE theorists to investigate. Maybe one day DSGE will get so good at matching the data that we no longer need SEMs, but we are a long way from that.

Will what Blanchard and I call for happen? It already does to a large extent at the Fed: as Blanchard says, what is effectively their main model is a SEM. The Bank of England uses a DSGE model, and the MPC would get more useful advice from its staff if this was replaced by a SEM. The real problem is with academics, and in particular (as Blanchard again identified in an earlier post) journal editors. Of course most academics will go on using DSGE, and I have no problem with that. But the few who do instead decide to use a SEM should not be automatically shut out from the pages of the top journals. They would be at present, and I’m not confident - even with Blanchard’s intervention - that this is going to change anytime soon.


[1] What Ray Fair, longtime builder and user of his own SEM, calls Cowles Commission models.

[2] Something like this could have happened when the Bank of England built BEQM, a model I was a consultant on. Instead the Bank chose a core/periphery structure which was interesting, but ultimately too complex even for the economists at the Bank.

Friday, 15 January 2016

Heterodox economists and mainstream eclecticism

I knew when I wrote this post some economists would not like it. These are economists who locate themselves outside the mainstream: heterodox economists. They often claim that mainstream economics is this narrow discipline wedded to particular assumptions that are both implausible and ideological. So when I argue that in principle and practice it is not, they will not like it. Simply saying (as I do) that economists are often too reluctant (sometimes for good reason in terms of the sociology of economists) to explore this freedom is not enough for them. Some of them require mainstream economics to be beyond redemption.

Sure enough, Lars Syll attacks my post. He writes 
“And just as his colleagues, when it really counts, Wren-Lewis shows what he is — a mainstream neoclassical economist fanatically defending the insistence of using an axiomatic-deductive economic modeling strategy.” 

I can only think that reading my post got him so angry he temporarily lost his critical faculties. Because what he writes is completely false. I wrote a comment on his blog but it has not appeared, although fortunately Bruce Wilder makes my point in very gentle terms. As I have been here before with Syll (see the footnote to this post), I will be less gentle.

My post ended with the following sentence:
“Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.”

I argue in the post that “this non-eclecticism in terms of excluding non-microfounded work is deeply problematic.” I then link to my many earlier posts where I have expanded on this theme. So how I can be a fanatic defender of insisting that this modelling strategy be used escapes me. Unless I have misunderstood what an ‘axiomatic-deductive’ strategy is. Perhaps for Syll not following this strategy means being able to completely (180 degrees completely) misrepresent what someone else says.

That out of the way, I wanted to say something more substantive. In macro, the insistence on using microfounded models also works as an exclusion device. (I am suggesting this as a fact, not a deliberate strategy.) You cannot just write down an aggregate macro model, based on other people’s work or empirical findings or whatever. You have to microfound it, and that requires a lot of skill and practice, as many a PhD student has found out.

If it also turns out when doing this that the issue you want to address or the innovation you want to make is ‘difficult’ in terms of finding an acceptable microfoundation, there are many wise supervisors who will suggest that the student tries something else. It is hardly surprising that this might put some people off mainstream macro.

I think some knowledge of these things is essential - the kind of knowledge needed to read and understand a journal article. But is the depth of knowledge required to create your own microfounded model really necessary if your interests are more empirical, but you nevertheless want to explore the implications of your empirical work for the economy as a whole? Here I think what Lars Syll says has validity. But his wish to tar even the critics of this aspect of mainstream macro with this brush is just bizarre. More generally, heterodox economists misdirect their fire when they accuse mainstream macro of being inescapably narrow in its subject matter or assumptions, when their criticism should be directed at the limitations implied by microfoundations formalism.



Friday, 3 April 2015

Do not underestimate the power of microfoundations

Mainly for economists

Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies. (Recently Stephen Williamson asked the same question, but I suspect from a different perspective!) Brad says his question is closely related to the “question of why models that are microfounded in ways we know to be wrong are preferable in the discourse to models that try to get the aggregate emergent properties right.”

I would guess the two questions are in fact exactly the same. The NK model is the microfounded way of doing Keynesian economics, and microfounded (DSGE) models are de rigueur in academic macro, so any mainstream academic wanting to analyse business cycle issues from a Keynesian perspective will use a variant of the NK model. Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency.

As macro 50 years ago was very different, it is an interesting methodological question to ask why things changed, even if you think the change has greatly improved how macro is done (as I do). I would argue that the New Classical (counter) revolution was essentially a methodological revolution. However there are two problems with having such a discussion. First, economists are usually not comfortable talking about methodology. Second, it will be a struggle to get macroeconomists below a certain age to admit this is a methodological issue. Instead they view microfoundations as just putting right inadequacies with what went before.

So, for example, you will be told that internal consistency is clearly an essential feature of any model, even if it is achieved by abandoning external consistency. You will hear how the Lucas critique proved that any non-microfounded model is inadequate for doing policy analysis, rather than it simply being one aspect of a complex trade-off between internal and external consistency. In essence, many macroeconomists today are blind to the fact that adopting microfoundations is a methodological choice, rather than simply a means of correcting the errors of the past.

I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there. (And arguing that it should be there is a relevant point for economics teaching, but is pointless if you are trying to change current discourse.) That makes discussion difficult, but I’m not sure it makes it impossible.


Wednesday, 30 July 2014

Methodological seduction

Mainly for macroeconomists or those interested in economic methodology. I first summarise my discussion in two earlier posts (here and here), and then address why this matters.

If there is such a thing as the standard account of scientific revolutions, it goes like this:

1) Theory A explains body of evidence X

2) Important additional evidence Y comes to light (or just happens)

3) Theory A cannot explain Y, or can only explain it by means which seem contrived or ‘degenerate’. (All swans are white, and the black swans you saw in New Zealand are just white swans after a mud bath.)

4) Theory B can explain X and Y

5) After a struggle, theory B replaces A.

For a more detailed schema due to Lakatos, which talks about a theory’s ‘core’ and ‘protective belt’ and tries to distinguish between theoretical evolution and revolution, see this paper by Zinn which also considers the New Classical counterrevolution.

The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory. Does the New Classical counterrevolution (NCCR) also fit, with Y being stagflation?

My argument is that it does not. Arnold Kling makes the point clearly. In his stage one, Keynesian/Monetarist theory adapts to stagflation, using the Friedman/Phelps accelerationist Phillips curve. Stage two involves rational expectations, the Lucas supply curve and other New Classical ideas. As Kling says, “there was no empirical event that drove the stage two conversion.” I think from this that Paul Krugman also agrees, although perhaps with an odd quibble.

Now of course the counter revolutionaries do talk about the stagflation failure, and there is no dispute that stagflation left the Keynesian/Monetarist framework vulnerable. The key question, however, is whether points (3) and (4) are correct. On (3) Zinn argues that changes to Keynesian theory to account for stagflation were progressive rather than contrived, and I agree. I also agree with John Cochrane that this adaptation was still empirically inadequate, and that further progress needed rational expectations (see this separate thread), but as I note below the old methodology could (and did) incorporate this particular New Classical innovation.

More critically, (4) did not happen: New Classical models were not able to explain the behaviour of output and inflation in the 1970s and 1980s, or in my view the Great Depression either. Yet the NCCR was successful. So why did (5) happen, without (3) and (4)?

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example. Ironically the innovation that had allowed conventional macro to explain stagflation, the accelerationist Phillips curve, also made it appear unable to adapt to rational expectations. But if that was all, then you need to ask why New Classical ideas could not have been gradually assimilated into the mainstream. Many of the counter revolutionaries did not want this (as this note from Judy Klein via Mark Thoma makes clear), because they had an (ideological?) agenda which required the destruction of Keynesian ideas. However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM), which is what I spent a lot of time in the 1990s doing.

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions. How serious this problem was, relative to the alternative of being theoretically consistent but empirically wide of the mark, was seldom asked.   

So why does this matter? For those who are critical of the total dominance of current macro microfoundations methodology, it is important to understand its appeal. I do not think this comes from macroeconomics being dominated by a ‘self-perpetuating clique that cared very little about evidence and regarded the assumption of perfect rationality as sacrosanct’, although I do think that the ideological preoccupations of many New Classical economists have an impact on what is regarded as de rigueur in model building even today. Nor do I think most macroeconomists are ‘seduced by the vision of a perfect, frictionless market system.’ As with economics more generally, the game is to explore imperfections rather than ignore them. The more critical question is whether the starting point of a ‘frictionless’ world constrains realistic model building in practice.

If mainstream academic macroeconomists were seduced by anything, it was a methodology - a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous! 

Noah Smith, who does believe stagflation was important in the NCCR, says at the end of his post: “this raises the question of how the 2008 crisis and Great Recession are going to affect the field”. However, if you think as I do that stagflation was not critical to the success of the NCCR, the question you might ask instead is whether there is anything in the Great Recession that challenges the methodology established by that revolution. The answer that I, and most academics, would give is absolutely not – instead it has provided the motivation for a burgeoning literature on financial frictions. To speak in the language of Lakatos, the paradigm is far from degenerate.  

Is there a chance of the older methodology making a comeback? I suspect the place to look is not in academia but in central banks. John Cochrane says that after the New Classical revolution there was a split, with the old style way of doing things surviving among policymakers. I think this was initially true, but over the last decade or so DSGE models have become standard in many central banks. At the Bank of England, their main model used to be a SEM, was replaced by a hybrid DSGE/SEM, and was replaced in turn by a DSGE model. The Fed operates both a DSGE model and a more old-fashioned SEM. It is in central banks that the limitations of DSGE analysis may be felt most acutely, as I suggested here. But central bank economists are trained by academics. Perhaps those that are seduced are bound to remain smitten.


Tuesday, 13 May 2014

Humility and Chameleons

Macroeconomics tells you to (temporarily) raise, not cut, government spending when we have a recession caused by deficient demand and interest rates are at their lower bound. That is the claim that some of us make. Others say we are being far too sure of ourselves and our subject, in part because there exist models where this is not true. As a result, we should not loudly complain when politicians do not follow this advice. A bit more humility please.

If you think we should have more humility, imagine the following. The UK or US government tomorrow abolishes their independent central bank, and immediately raises rates to 5%, saying it was about time savers had a better deal. Well macroeconomists generally think that independent central banks are a good idea, and we nearly all believe that raising interest rates when inflation is below target and unemployment is high is crazy. But wait a minute. There are models that suggest keeping interest rates low is causing low inflation, and that raising rates could stimulate the economy - I discuss one here. So perhaps we should not be critical of a government that did this. We should be humble, and leave the politicians to do as they please while we get on with our research. Let us make sure we are absolutely sure before shouting too loud.

Why is that wrong? Two reasons. First, the existence of a model that says higher interest rates could stimulate the economy is not in itself evidence that they would. In an interesting paper, Paul Pfleiderer talks about Chameleon models. He defines a chameleon model as “built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.” The model that I discussed where higher rates could stimulate the economy assumes (among other things) agents believe the inflation target is negative, and that raising rates will show them they are wrong. Possible, but highly improbable.
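For what it is worth, the mechanism in models of this type typically runs through the steady-state Fisher relation (stated here generically, not taken from the particular model discussed in the post):

```latex
i \;=\; r + \pi^{e}
\qquad\Longrightarrow\qquad
\pi \;=\; i - r \quad \text{in a steady state where } \pi^{e} = \pi .
```

With the real rate $r$ pinned down by preferences, a permanently higher nominal rate $i$ must eventually be matched by higher inflation, but only if expectations adjust in the required direction, which is precisely the implausible step.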

Second, economic policy always takes place in an uncertain environment. Raising interest rates might have reduced inflation in the past, but maybe this time is different? If we wait until we are all absolutely sure about the impact of a policy change, we will wait forever. However, if we are more than 90% certain that raising interest rates, or cutting government spending, will make the recession worse, we should say so. If politicians ignore this advice, we should make sure everyone knows. This is no intellectual game - people’s welfare is at stake.


Friday, 9 May 2014

Economists and methodology

Where I argue that mainstream economists should think more about the methodology of their subject, but that to study this methodology it is much better to look at what economists actually do than at their (occasional) writing on the subject.

Methodology? Why should I worry about that? It’s what all those heterodox people do - lots of ‘isms’ and ‘ologies’ that are totally incomprehensible! Unlike those guys, I get on with doing real economics. After all, doctors do not spend large amounts of their time worrying about the methodology of medicine. So why should economists?

This is a caricature, but not far off the mark for many economists. (When I refer to just economists/economics from now on, I mean mainstream.) Perhaps more of a concern is that very few economists write much about methodology. This would be understandable if economics was just like some other discipline where methodological discussion was routine. This is not the case. Economics is not like the physical sciences for well known reasons. Yet economics is not like most other social sciences either: it is highly deductive, highly abstractive (in the non-philosophical sense) and rarely holistic. This is all nicely expressed in the title of what I think is one of the best books written on economic methodology: Dan Hausman’s ‘The inexact and separate science of economics’.

This is a long winded way of saying that the methodology used by economics is interesting because it is unusual. Yet, as I say, you will generally not find economists writing about methodology. One reason for this is the one implied by my opening paragraph: a feeling that the methodology being used is unproblematic, and therefore requires little discussion.

I cannot help giving the example of macroeconomics to show that this view is quite wrong. The methodology of macroeconomics in the 1960s was heavily evidence based. Microeconomics was used to suggest aggregate relationships, but not to determine them. Consistency with the data (using some chosen set of econometric criteria) often governed what was or was not allowed in a parameterised (numerical) model, or even a theoretical model. It was a methodology that some interpreted as Popperian. The methodology of macroeconomics now is very different. Consistency with microeconomic theory governs what is in a DSGE model, and evidence plays a much more indirect role. Now I have only a limited knowledge of the philosophy of science, and have only published one paper on methodology, but I know enough to recognise this as an important methodological change. Yet I find many macroeconomists just assume that their methodology is unproblematic, because it is what everyone mainstream currently does.  

This reluctance by economists to investigate their own methodology has a consequence which is the main subject of this post. It occurred to me when I recently re-read a methodology paper entitled “Two Responses to the Failings of Modern Economics: the Instrumentalist and the Realist” by Tony Lawson. The paper, written in 2001, starts on the first page with “There is little doubt that the modern discipline of economics is in a state of some disarray.” This is a strong claim. For example, I have previously written that the influence of economists within the UK government at that time may have been at an all time high, and as this account (pdf) shows, economics remains very influential within the civil service. Where is the evidence for the claim about disarray? The answer in this paper is a selection of quotes from economists writing about aspects of their subject. Now any economist would immediately wonder how representative these quotes were. But more fundamentally, are expressions of concern within a discipline equivalent to it being ‘in disarray’? (For example, see the first quote from a physicist here. Would this be a good basis for a paper that asserts that physics is in disarray?)

Even if we ignore these concerns, given the unfamiliarity of most economists with methodological discussion, it may be unwise to use what economists write about their discipline as evidence about what economists actually do. The classic example of an economist writing about methodology is Friedman’s Essays in Positive Economics. This puts forward an instrumentalist view: the idea that the realism of assumptions does not matter; it is results that count.

Yet does instrumentalism describe Friedman’s major contributions to macroeconomics? Well one of those was the expectations augmented Phillips curve. Before his famous 1968 presidential lecture, the Phillips curve had related wage inflation to unemployment, and if expectations about inflation were included (in some way), the coefficient on this expectations term was often empirically determined (see above) and was often less than one. Friedman argued that the coefficient on expected inflation should be one. His main reason for doing so was not that such an adaptation predicted better, but because it was based on better assumptions about what workers were interested in: real rather than nominal wages. In other words, it was based on more realistic assumptions. (For a good discussion of the history of the ‘expectations critique’, see this paper by James Forder.)
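In symbols (a standard textbook rendering rather than Friedman’s own notation), the shift was from an empirically estimated coefficient on expected inflation to a coefficient of one:

```latex
\pi_t \;=\; \lambda\,\pi_t^{e} + f(u_t) \quad (\lambda \text{ estimated, often} < 1)
\qquad\text{versus}\qquad
\pi_t \;=\; \pi_t^{e} + f(u_t) \quad (\lambda = 1)
```

With $\lambda = 1$ there is no long-run trade-off: holding unemployment below its natural rate makes inflation accelerate rather than merely rise, which is why this version is called accelerationist.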

Economists do not think enough about their own methodology. This means economists are often not familiar with methodological discussion, which implies that using what they write on the subject as evidence about what they do can be misleading. Yet most methodological discussion of economics is (and should be) about what economists do, rather than what they think they do. That is why I find that the more interesting and accurate methodological writing on economics looks at the models and methods economists actually use, rather than relying on selected quotations.

Afterthought

There is a nice self-confirmational element to this post. Someone is bound to tell me that, in my comments on Friedman, I do not really understand what instrumentalism means. And that, of course, just goes to make my point that you should not rely on what economists say about their own methodology!


Friday, 25 April 2014

Retiring macroeconomic theory

Dear Professor Diamond

Thank you for sending your paper ‘National Debt in a Neoclassical Growth Model’ to the American Economic Review. The paper has now been read by two referees, and I’m afraid the news is not good.

Referee A raises a fundamental objection. Your model has a two period structure, where agents work in the first period but do not work in the second. This assumption is simply stated in one paragraph on your page 2, but is not justified in any way. In that sense it appears entirely ad hoc. Furthermore, as referee A stresses, it appears to contradict (is internally inconsistent with) another fundamental part of your model, which is that agents attempt to smooth consumption over time. The referee is quite happy with that assumption, as it clearly comes from standard postulates about the utility of the consumption of goods. Yet why should these postulates not also apply to the consumption of leisure? As the referee points out, if agents tried to smooth leisure in the same way as they smoothed consumption, there would not be any ‘retirement’. As this concern strikes at the heart of your model, it is troubling.

Referee B raised rather different issues. They pointed out that the model implies a constant interest rate that is only a function of the population growth rate. The model therefore makes a clear prediction, but as the referee points out interest rates have fallen in this country over the last two decades, without any matching declines in the population growth rate. So the model has been clearly falsified by events, and therefore cannot be the basis of any meaningful discussion of the impact of national debt. The referee is also concerned that you failed to locate your analysis within an ontological discussion of the open rather than closed nature of the social realm, which makes your deductivist and formalist reasoning about socially constructed variables problematic, to say the least.

I am therefore very sorry to inform you that we will be unable to publish your paper. Referee A did make a number of helpful suggestions about how ‘retirement’ could be microfounded, and I am sure you will find the extensive reading list referee B provided on economic methodology helpful in any future work. 


My apologies to Nick Rowe, whose post gave me the idea. I actually think asking the question why we have retirement is revealing, but writing the above was easier than attempting an answer. (And I also think economic methodology is important!)  

Thursday, 19 December 2013

More on the illusion of superiority

For economists, and those interested in methodology

Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory, and time.

My argument can be put as follows: an ad hoc but data inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.

But what about the Lucas critique? Surely that says that only a microfounded model can avoid the Lucas critique. Tony says we might not need to worry about the Lucas critique if policy changes are consistent with what policy has done in the past. I do not need this, so let’s make our policy changes radical. My argument still stands. The reason is very simple. A misspecified model can produce bad policy. These misspecification errors may far outweigh any errors due to the Lucas critique. Robert Waldmann is I think making the same point here. (According to Stephen Williamson, even Lucas thinks that the Lucas critique is used as a bludgeon to do away with ideas one doesn't like.)

Stephen thinks that I think the data speaks directly to us. What I think is that the way a good deal of research is actually done involves a constant interaction between data and theory. We observe some correlation in the data and think why that might be. We get some ideas. These ideas are what we might call informal theory. Now the trouble with informal theory is that it may be inconsistent with the rest of the theory in the model - that is why we build microfounded models. But this takes time, and in the meantime, because it is also possible that the informal theory may be roughly OK, I can incorporate it in my eclectic model.[1] In fact we could have a complete model that uses informal theory - what Blanchard and Fischer call useful models. The defining characteristic of microfounded models is not that they use theory, but that the theory they use can be shown to be internally consistent.

Now Tony does end by saying “ad-hoc modifications seem attractive if they are a guess at what a microfounded model would look like, and you are a policymaker who can’t wait, and you find a way to assess the Lucas-Critique errors you might be making.” I have dealt with the last point – it’s perfectly OK to say the Lucas critique may apply to my model, but that is a price worth paying to use more evidence than a microfounded model does to better guide policy. For the sake of argument let’s also assume that one day we will be able to build a microfounded model that is consistent with this evidence. (As Noah says, I’m far too deferential, but I want to persuade rather than win arguments.) [2] In that case, if I’m a policy maker who cannot wait for this to happen, Tony will allow me my eclectic model.

This is where time comes in. Tony’s position is that policymakers in a hurry can do this eclectic stuff, but we academics should just focus on building better microfoundations. There are two problems with this. First, building better microfoundations can take a very long time. Second, there is a great deal that academics can say using eclectic, or useful, models.

The most obvious example of this is Keynesian business cycle theory. Go back to the 1970s. The majority of microfoundations modellers at that time, New Classical economists, said price rigidity should not be in macromodels because it was not microfounded. I think Tony, if he had been writing then, would have been a little more charitable: policymakers could put ad hoc price rigidities into models if they must, but academics should just use models without such rigidities until those rigidities could be microfounded.

This example shows us clearly why eclectic models (in this case with ad hoc price rigidities) can be a far superior guide for policy than the best microfounded models available at the time. Suppose policymakers in the 1970s, working within a fixed exchange rate regime, wanted to devalue their currency because they felt it had become overvalued after a temporary burst of domestic inflation. Those using microfounded models would have said there was no point - any change in the nominal exchange rate would be immediately offset by a change in domestic prices. (Actually they would probably have asked how the exchange rate can be overvalued in the first place.) Those using eclectic models with ad hoc price rigidities would have known better. Would those eclectic models have got things exactly right? Almost certainly not, but they would have said something useful, and pointed policy in the right direction.

Should academic macroeconomists in the 1970s have left these policymakers to their own devices, and instead got on with developing New Keynesian theory? In my view some should have worked away at New Keynesian theory, because it has improved our understanding a lot, but this took a decade or two to become accepted. (Acceptance that, alas, remains incomplete.) But in the meantime they could also have done lots of useful work with the eclectic models that incorporated price stickiness, such as working out what policies should accompany the devaluation. Which of course in reality they did: microfoundations hegemony was less complete in those days.

Today I think the situation is rather different. Nearly all the young academic macroeconomists I know want to work with DSGE models, because that is what gets published. They are very reluctant to add what might be regarded as ad hoc elements to these models, however strong the evidence and informal theory that could support such modifications might be. They are also understandably unclear about what counts as ad hoc and what does not. The situation in central banks is not so very different.

This is a shame. The idea that the only proper way to do macro that involves theory is to work with fully microfounded DSGE models is simply wrong. I think it can distort policy, and can hold back innovation. If our DSGE models were pretty good descriptions of the world then this misconception might not matter too much, but the real world keeps reminding us that they are not. We really should be more broad minded. 

[1] Suppose there is some correlation in the past that appears to have no plausible informal theory that might explain it. Including that in our eclectic model would be more problematic, for reasons Nick Rowe gives.


[2] I suggest why this might not be the case here. Nick Rowe discusses one key problem, while comments on my earlier post discuss others.

Wednesday, 27 November 2013

Bertrand Russell’s chicken (and why it was not an economist)

When that pioneering economist David Hume wrote about the problem of induction, he talked about the possibility that the sun would not rise one morning. There is no way we can know ‘for sure’ that it will rise. (In contrast, we know for sure that 1+1=2.) Just because the theories we have suggest it will rise each morning, and those theories have been right so far, does nothing to ensure they will continue to be right.

The problem with this example is that it is very difficult to imagine the sun not rising every morning. Bertrand Russell had perhaps a better example. The chicken that is fed by the farmer each morning may well have a theory that it will always be fed each morning - it becomes a ‘law’. And it works every day, until the day the chicken is instead slaughtered.

When I used to lecture about economic methodology, I liked to say that this chicken was not an economist. Now you might say that no chicken is an economist, but suppose that chickens were as intelligent as the farmer who keeps them, so they could be an economist. Economics is at a disadvantage compared to the physical sciences because we cannot do so many types of experiments (although we are doing more and more), but we have another source of evidence: introspection. So if Bertrand Russell’s chicken had been an economist, they would not simply have observed that every morning the farmer brought them food, and therefore concluded that this must happen forever. Instead they would have asked a crucial additional question: why is the farmer doing this? What is in it for him? If I was the farmer, why would I do this? And of course trying to answer that question might have led them to the unfortunate truth.

I thought of this when reading through the fascinating comments on my post on rational expectations, and posts others had written in response. You can see why the habit of introspection would make economists predisposed to assume rationality generally, and rational expectations in particular. (I think it also helps explain economists’ aversion to paternalism.) It only works to use your own thought processes as a guide to how people in general might behave if you think other people are essentially like yourself. So if your own thoughts lead you to postulate some theory about how the economy behaves, then others similar to yourself might be able to do something like the same thing.
 
But of course this line of reasoning could also be misleading. An economist who introspects does so with the help of the economic theory they already have, so their introspection is not representative. A psychologist or behavioural economist might come to very different conclusions from introspection - what biases do I bring to this problem, they may ask. Economists may also be fooled into thinking their introspection is representative, because they are surrounded by other economists. So this conjecture about introspection does little to show that assuming agents have rational expectations is right (or wrong), but it may be one reason why most economists find the concept of rational expectations so attractive.


Sunday, 24 November 2013

Attacks on mainstream economics and reforming economics teaching

Mainstream (orthodox) economics is having a hard time in the pages of the Guardian. First Aditya Chakrabortty writes “How do elites remain in charge? If the tale of the economists is any guide, by clearing out the opposition and then blocking their ears to reality. The result is the one we're all paying for.” Then Seumas Milne adds “Any other profession that had proved so spectacularly wrong and caused such devastation would surely be in disgrace.” In this post I want to say why such attacks are wide of the mark, but also say something about how these attacks gain traction, and why they suggest changing the way the subject is taught.

One frequent accusation, very evident in Milne’s piece, and often repeated by heterodox economists, is that mainstream economics and neoliberal ideas are inextricably linked. Of course economics is used to support neoliberalism. Yet I find mainstream economics full of ideas and analysis that permits a wide ranging and deep critique of these same positions. The idea that the two live and die together is just silly.

The absurdity of linking mainstream economics to all our current problems is also obvious if you think about austerity. As I never tire of saying, the proposition that austerity was a crazy thing to try in this recession is prominent in the pages of undergraduate and graduate textbooks. It is what mainstream economics, as practiced in central banks, tells us. Now I agree that it is a great shame that some influential economists sometimes seem to ignore or have forgotten what is in these textbooks, or put their own textbooks aside to provide support for particular political parties. However it remains the case that the most effective critic of austerity is using totally orthodox economics.

Nearly all complaints about the mainstream start off with the economics profession’s failure to foresee the financial crisis. Again it’s important to make some fairly basic points. First, economics is not just (or even mainly) about trying to forecast the future: the fraction of the profession engaged in forecasting is tiny. Another of my favourite lines from when I did forecasting is that macro forecasts are only slightly better than guesswork. We know that, both from past evidence and the models themselves. It is a difficult message to get across, because a very visible part of economics - making decisions about interest rates - necessarily involves forecasts, and the media loves simplistic messages, but institutions like central banks do their best to emphasise the uncertainty involved.

It is also obviously not true that mainstream economics is incapable of understanding what led to the crisis, and what needs to be done to avoid it happening again. I think it’s fair to say that much that is in Admati and Hellwig’s The Bankers’ New Clothes is pretty mainstream. Perhaps in the past economists have been rather narrow, and even politically naive, in issues from regulation to overseas aid, but that is clearly changing and has been changing for some time.

Having said all this, it would also be a mistake of equal magnitude to think that everything is just fine in the land of academic economics. I am struck by how economists, while at least partially defending their own particular field, are quite happy to express grave concern about what some of their colleagues in other fields do. I’ve noted Andy Haldane and Diane Coyle’s criticisms of DSGE modelling before, and you will find plenty of economists who can be very rude about their colleagues doing finance. More generally I suspect slightly less shrill versions of the sentiments expressed by the two Guardian columnists would attract considerable sympathy from lots of very sensible people who know quite a lot about economics.

Whether this should, or will, lead to any major upheaval in economic thinking – as suggested by Martin Wolf in this lecture for example – is a question for perhaps another post. What I want to focus on here is how the subject is taught, if only because that has a large influence on how the subject is perceived and how it develops. Both Guardian articles talk about student dissatisfaction (as expressed here for example), and there seems to be widespread support for the idea that economics teaching needs some fairly radical reform: see this recent meeting at the UK Treasury (which followed this) and Wendy Carlin’s article in the FT.

I think part of the problem with economics, which is very evident in the way it is taught, is how economists see themselves. (I think Alex Marsh describes this well.) The vision that I think many economists are attached to is that economics is like a physical science. So there is a body of knowledge, which has been accumulated over time in much the same way as the physical sciences have developed. This approach plays down the context in which that knowledge was developed - it may provide a bit of diversion in a lecture, but is not essential. There is certainly no need to worry about the methodology behind the way the discipline works.

An alternative, and I now think better, vision would give more emphasis to how economics developed. Economic history would play a central role. Economic theory would be seen as responding to historical events and processes. For example placing Keynesian theory in the context of the Great Depression is clearly useful, given the events of the last five years. I think it is also important to recognise the links between economic theory and ideology. This is partly to understand why governments might not act on the wisdom of economists, but it also leads naturally to recognising that economists need to adapt to the social and political context in which they work. We should also be more honest that our wisdom might be influenced by ideology. Given the limits to experimental and econometric evidence, but with a very clear axiomatic structure, methodology is always going to be an important issue in economics. [1]

Of course this alternative vision can be taken too far. I do not think it is helpful to teach the subject like a course in the history of economic thought. The insight gained from trying to understand what some past great economist actually said (or still worse, actually meant) is small. We do not necessarily need to know the details of every historical debate. In addition some important ideas in economics do not come from problems thrown up by major historical events or ideology: rational expectations is a clear example. We do try and integrate solutions to new problems into a coherent overall framework. I do not want to go back to teaching a schools of thought type of macro, because the mainstream is much more integrated.

There is an additional problem in teaching economics relative to the sciences. The world that we attempt to describe and advise changes rapidly. This makes a model in which teaching is based on textbooks problematic. Not just because it takes time for textbooks to be produced and updated, but because they tend to want to appeal to those who learnt their subject many years ago, and are not actively researching in the field. How else can you explain the continuing centrality of things like the money multiplier in nearly every undergraduate textbook?  

So I look forward to seeing what comes out of the Institute of New Economic Thinking’s project to reform the undergraduate syllabus, headed by Wendy Carlin. Her macro textbook with David Soskice is innovative in replacing the IS-LM framework with a more realistic and up to date three equation model (IS, Phillips curve, monetary rule), and by giving imperfect competition a central role, and a new version where the financial sector has much more prominence is due out soon. While it is plainly nonsense to say that mainstream economics cannot explain the financial crisis and critique neoliberal policies, we need to do what we can to make that clear, and we should start with our students.
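To make the contrast with IS-LM concrete, here is a minimal sketch of the three-equation structure mentioned above: an IS curve, an adaptive-expectations Phillips curve, and a monetary rule that leans against deviations of inflation from target. The functional forms and parameter values are purely illustrative, not taken from the Carlin-Soskice textbook itself.

```python
# Illustrative three-equation model: IS curve, Phillips curve, monetary rule.
# All parameter values are hypothetical, chosen only to show the mechanism.

def simulate(periods=12, a=1.0, b=0.5, c=1.0, pi_target=2.0, r_star=2.0,
             pi0=4.0):
    """Start with inflation above target. Each period the monetary rule
    raises the real interest rate above its natural level, the IS curve
    opens a negative output gap, and the Phillips curve pulls inflation
    back toward target. Returns a list of (output gap, inflation) pairs."""
    pi = pi0
    path = []
    for _ in range(periods):
        r = r_star + (c / a) * (pi - pi_target)  # monetary rule
        y = -a * (r - r_star)                    # IS curve: gap from real rate
        pi = pi + b * y                          # adaptive-expectations PC
        path.append((round(y, 3), round(pi, 3)))
    return path

path = simulate()
print(path[0])    # rule opens a negative output gap immediately
print(path[-1])   # inflation has converged back to (roughly) target
```

The point of the exercise is pedagogical: unlike IS-LM with a fixed money supply, the model makes the central bank's reaction function explicit, so students see disinflation as a deliberate policy choice with an output cost.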


[1] In fact, I think the lack of interest in methodology among mainstream economists is itself revealing. The combination of a highly deductive theoretical structure with many alternative but problematic ways of getting evidence makes economics a rather unusual discipline from a methodological point of view, so it would be natural to want to explore the methodology of economics. However you might want to shy away from this if you pretended economics was just like biology or physics.

Saturday, 12 October 2013

Nominal wage rigidity in macro: an example of methodological failure

This post develops a point made by Bryan Caplan (HT MT). I have two stock complaints about the dominance of the microfoundations approach in macro. Neither implies that the microfoundations approach is ‘fundamentally flawed’ or should be abandoned: I still learn useful things from building DSGE models. My first complaint is that too many economists follow what I call the microfoundations purist position: if it cannot be microfounded, it should not be in your model. Perhaps a better way of putting it is that they only model what they can microfound, not what they see. This corresponds to a standard method of rejecting an innovative macro paper: the innovation is ‘ad hoc’.

My second complaint is that the microfoundations used by macroeconomists are so out of date. Behavioural economics just does not get a look in. A good and very important example comes from the reluctance of firms to cut nominal wages. There is overwhelming empirical evidence for this phenomenon (see for example here (HT Timothy Taylor) or the work of Jennifer Smith at Warwick). The behavioural reasons for this are explored in detail in this book by Truman Bewley, which Bryan Caplan discusses here. Both money illusion and the importance of workforce morale are now well accepted ideas in behavioural economics.

Yet debates among macroeconomists about whether and why wages are sticky go on. As this excellent example (I’ve been wanting to link to it for some time, just because of its quality) shows, they are not just debates between Keynesians and anti-Keynesians, so I do not think you can put this all down to some kind of ideological divide. I suspect nearly all economists are naturally reluctant to embrace cases where agents appear to miss opportunities for Pareto improvement - I give another example related to wage setting here. However in most other areas of the discipline overwhelming evidence is now able to trump these suspicions. But not, it seems, in macro.

While we can debate why this is at the level of general methodology, the importance of this particular example to current policy is huge. Many have argued that the failure of inflation to fall further in the recession is evidence that the output gap is not that large. As Paul Krugman in particular has repeatedly suggested, the reluctance of workers or firms to cut nominal wages may mean that inflation could be much more sticky at very low levels, so the current behaviour of inflation is not inconsistent with a large output gap. Work by the IMF supports this idea. Yet this is hardly a new discovery, so why is macro having to rediscover these basic empirical truths?
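The mechanism is simple enough to sketch. Below is a toy comparison (with purely hypothetical parameter values) of a standard linear wage Phillips curve against one where nominal wage cuts are resisted, so wage inflation is floored at zero. With a large negative output gap, the linear curve predicts falling nominal wages, while the rigid version keeps wage inflation stuck at zero, so low-but-positive inflation is consistent with a large output gap.

```python
# Toy sketch of downward nominal wage rigidity. Parameters are illustrative,
# not estimates from any model discussed in the post.

def wage_inflation_linear(expected_inflation, output_gap, slope=0.5):
    """Standard linear wage Phillips curve: wage inflation moves
    one-for-one with expected inflation and with the output gap."""
    return expected_inflation + slope * output_gap

def wage_inflation_rigid(expected_inflation, output_gap, slope=0.5):
    """Same curve, but firms resist nominal wage cuts:
    wage inflation cannot fall below zero."""
    return max(0.0, wage_inflation_linear(expected_inflation, output_gap, slope))

# A large negative output gap of -6% with expected inflation of 1%:
gap = -6.0
linear = wage_inflation_linear(1.0, gap)
rigid = wage_inflation_rigid(1.0, gap)
print(linear, rigid)  # linear predicts wage cuts; rigid version is stuck at zero
```

The policy implication is the one Krugman draws: an observer who fits the linear curve to the data and sees stable nominal wages will wrongly conclude the output gap must be small.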

There may be an even more concrete example of the price paid for failing to allow for this non-linearity in wage behaviour. For all its inadequacies, the Eurozone Fiscal Compact does at least include a measure of the cyclically adjusted budget deficit among its many indicators that are meant to guide/proscribe fiscal policy. However, as Jeremie Cohen-Setton discusses here, the Commission now think they have been underestimating the output gap. As he suggests, the reason is pretty obvious: they have overestimated how much the natural rate of unemployment has risen in this recession. Here is the example he gives for Spain.

[Figure: Actual unemployment and European Commission estimates of the NAWRU for Spain, from the European Commission 2013 spring forecast exercise. Source: Jeremie Cohen-Setton]

How could the Commission have been so foolish as to believe the natural rate had risen from 10% to 27% in a few years? Might it be because they looked at nominal wages in Spain, and inferred from the fact that nominal wages were not falling that therefore actual unemployment must be close to its natural rate? If empirical macromodels as a matter of course allowed for the absence of nominal wage cuts, would they have made such an obvious (to anyone who is not a macroeconomist) mistake?


I think this example illustrates why it can be dangerous to rely on DSGE models to guide policy. Yet the influence of DSGE models in policy making institutions is strong and growing. The Bank of England’s core forecasting model (pdf) is a fairly basic DSGE construct, and as far as I can see its wage equation is a standard New Keynesian specification, with no non-linearity when wage inflation approaches zero. Now I know the Bank have many other models they look at, and they will undoubtedly have looked at the implications of a reluctance to cut nominal wages. (I discuss the Bank’s ‘new’ model in more detail here.) However default positions are important, as the examples I discussed earlier show. Focusing on models where consistency with fairly simplistic microfoundations is all important, and consistency with empirical evidence is less of a concern, can distort the way macroeconomists think.