Winner of the New Statesman SPERI Prize in Political Economy 2016

Thursday, 2 March 2017

A self-fulfilling expectations led recession?

The only two lectures on Oxford’s core undergraduate macro course that I still teach, and which I have just taught, are the last two on fiscal policy. I use the privilege of the last lecture to end on a reflective note. I acknowledge that macro rightly got a lot of stick for largely ignoring the role of finance, but I also point out that the poor recovery has involved a vindication of the core macro model: austerity is a bad idea at the ZLB, QE was not inflationary, and interest rates on government debt did not rise but fell.

So far so familiar. But I end by showing them this chart.

And I say that we really have no idea why there has been no recovery from the Great Recession, so there are plenty of mysteries left in macro. The puzzle is sharpest in the UK because the pre-crisis trend is so stable, but something similar has happened in most places. I think it is a suitable note of humility (and perhaps inspiration) on which to end the course. 

A mechanical way to explain what has happened is to bend the trend: to suggest that technical progress has been slowing down for some time. This inevitably means that the pre-crisis period is transformed into a boom. I have been highly skeptical about that story, but I have to admit my skepticism comes in part from traditional ideas about what inflation would do in a boom.

However another explanation, which I have always wondered about and which others are beginning to explore, is that perhaps we remain in an extended period of demand deficiency. Keynesian theory suggests such a possibility could occur. Suppose that firms and consumers came to believe that the output gap was currently zero when it is not, and that they erroneously believed the recession caused a step change in both the level of potential GDP and possibly its growth rate. Suppose also that unemployed workers priced themselves into jobs by cutting their (real) wage, or disappeared from the unemployment statistics by no longer looking for work. The former could happen because firms could choose more labour intensive production techniques: scrapping the car wash machine for workers with hoses.

In that situation, how do we know that we are suffering from demand deficiency? The traditional answer in macroeconomics is nominal deflation: falling wages and prices. But because workers have already priced themselves into jobs, nothing more will come from the wages route. So why would firms cut prices?

If the pre-crisis trend still applies, it means that there are a large number of innovations waiting to be embodied in new investment. With this new more efficient capital in place, firms would either increase their profits on selling to their existing market or try to expand their market by undercutting competitors. We would get an investment led recovery, accompanied by rising productivity and perhaps falling prices.

But suppose the innovations are just not profitable enough to generate an increase in profits that would justify undertaking the investment, even though borrowing costs are low. Maybe a far more dependable motivator for embodied technical progress to take place is the need to satisfy an expanding market. The firm needs to install new capacity to satisfy growing demand for its product, and then it is natural to invest in equipment that embodies new innovations. The accelerator remains a very successful empirical model of investment. (On both points, see this discussion by Caballero.) But if beliefs are such that the market is not going to expand that much, because firms believe the economy is ‘at trend’ and trend growth has now become pretty small, then the need to invest to meet an expanding market largely goes away.
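A minimal sketch of the accelerator mechanism may make the point concrete. All the numbers here are hypothetical: investment responds to expected demand *growth*, so a downgrade in beliefs about trend growth cuts investment whatever the level of borrowing costs.

```python
def accelerator_investment(expected_demand_growth, capital_output_ratio=2.0,
                           output=100.0, depreciation=0.05):
    """Gross investment = replacement of worn-out capital + the new
    capacity needed to serve expected demand growth (textbook accelerator)."""
    capital = capital_output_ratio * output
    replacement = depreciation * capital
    expansion = capital_output_ratio * expected_demand_growth * output
    return replacement + expansion

# Pre-crisis beliefs (say 2.5% expected growth) versus post-crisis beliefs
# that the economy is 'at trend' with trend growth nearer 1%.
investment_optimistic = accelerator_investment(0.025)   # roughly 15
investment_pessimistic = accelerator_investment(0.01)   # roughly 12
```

Note that nothing in this sketch depends on the interest rate: if expected demand growth falls, so does investment, even with borrowing costs at zero.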

This idea goes right back to Keynes and animal spirits of course. Others have more recently reformulated similar ideas, such as Roger Farmer. This is a little different from the idea of adding endogenous growth to a Keynesian model, as in this paper by Benigno and Fornaro for example. I’m assuming in this discussion that potential output has not been lost, because innovation has not slowed, but it is simply not being utilised.

It is this possibility that explains why I have always argued that central banks and governments should have been much more ambitious about demand stimulation after the Great Recession. As I and others have pointed out, you do not have to attach a very high probability to the scenario that demand will create supply before it justifies a policy of ‘testing the water’ by letting the economy run hot. Every time I look at the data above, I ask whether we have brought this on ourselves by a combination of destructive austerity and timidity.




Sunday, 15 January 2017

Blanchard joins calls for Structural Econometric Models to be brought in from the cold

Mainly for economists

Ever since I started blogging I have written posts on macroeconomic methodology. One objective was to try and convince fellow macroeconomists that Structural Econometric Models (SEMs), with their ad hoc blend of theory and data fitting, were not some old fashioned dinosaur, but a perfectly viable way to do macroeconomics and macroeconomic policy. I wrote this with the experience of having built and published papers with both SEMs and DSGE models.

Olivier Blanchard’s third post on DSGE models does exactly the same thing. The only slight confusion is that he calls them ‘policy models’, but when he writes

“Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals.”

he can only mean SEMs. [1] I prefer the term SEM to ‘policy model’ because it describes what is in the tin: structural because these models utilise lots of theory, econometric because they try and match the data.

In a tweet, Noah Smith says he is puzzled. “What else is the point of DSGEs??” besides advising policy he asks? This post tries to help him and others see how the two classes of model can work together.

The way I would estimate a SEM today (but not necessarily the only valid way) would be to start with an elaborate DSGE model. But rather than estimate this model using Bayesian methods, I would use it as a theoretical template with which to start econometric work, either on an equation by equation basis or as a set of sub-systems. Where lag structures or cross equation restrictions were clearly rejected by the data, I would change the model to more closely match the data. If some variables had strong power in explaining others but were not in the DSGE specification, but I could think of reasons for a causal relationship (i.e. why the DSGE specification was inadequate), I would include them in the model. That would become the SEM. [2]
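The workflow above can be sketched, in a purely illustrative way, for a single equation: fit the DSGE-implied specification, then test whether a richer lag structure is rejected by the data. The data, coefficients, and specification below are all hypothetical, constructed so that the 'true' process has a second lag the DSGE template omits.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
income = rng.normal(size=T).cumsum()            # a trending income series
c = np.zeros(T)
for t in range(2, T):
    # 'True' consumption process with a second lag absent from the template.
    c[t] = 0.5 * c[t-1] + 0.2 * c[t-2] + 0.3 * income[t] + rng.normal(scale=0.5)

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

y = c[2:]
X_dsge = np.column_stack([np.ones(T - 2), c[1:-1], income[2:]])  # DSGE template
X_sem = np.column_stack([X_dsge, c[:-2]])                        # add a second lag

# F-test of the single restriction (coefficient on the second lag = 0).
rss_r, rss_u = rss(y, X_dsge), rss(y, X_sem)
F = (rss_r - rss_u) / (rss_u / (len(y) - X_sem.shape[1]))
# A large F rejects the DSGE lag structure: the extra lag goes into the SEM.
```

In a real exercise this would be repeated equation by equation (or over sub-systems), with candidate extra variables admitted only where a causal story can be told for them.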

If that sounds terribly ad hoc to you, that is right. SEMs are an eclectic mix of theory and data. But SEMs will still be useful to academics and policymakers who want to work with a model that is reasonably close to the data. What those I call DSGE purists have to admit is that because DSGE models do not match the data in many respects, they are misspecified and therefore any policy advice from them is invalid. The fact that you can be sure they satisfy the Lucas critique is not sufficient compensation for this misspecification.

By setting the relationship between a DSGE and a SEM in the way I have, it makes it clear why both types of model will continue to be used, and how SEMs can take their theoretical lead from DSGE models. SEMs are also useful for DSGE model development because their departures from DSGEs provide a whole list of potential puzzles for DSGE theorists to investigate. Maybe one day DSGE will get so good at matching the data that we no longer need SEMs, but we are a long way from that.

Will what Blanchard and I call for happen? It already does to a large extent at the Fed: as Blanchard says what is effectively their main model is a SEM. The Bank of England uses a DSGE model, and the MPC would get more useful advice from its staff if this was replaced by a SEM. The real problem is with academics, and in particular (as Blanchard again identified in an earlier post) journal editors. Of course most academics will go on using DSGE, and I have no problem with that. But the few who do instead decide to use a SEM should not be automatically shut out from the pages of the top journals. They would be at present, and I’m not confident - even with Blanchard’s intervention - that this is going to change anytime soon.


[1] What Ray Fair, longtime builder and user of his own SEM, calls Cowles Commission models.

[2] Something like this could have happened when the Bank of England built BEQM, a model I was consultant on. Instead the Bank chose a core/periphery structure which was interesting, but ultimately too complex even for the economists at the Bank.

Tuesday, 11 October 2016

Ricardian Equivalence, benchmark models, and academics response to the financial crisis

Mainly for economists

In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:
“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified.

It is not only academics who think policy has to be done using microfounded models. The core model used by the Bank of England is a microfounded DSGE model. So even in this policy making institution, their core model does not conform to Blanchard’s prescription. (Yes, I know they have lots of other models, but still. The Fed is closer to Blanchard than the Bank.)

Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, which means that Ricardian Equivalence holds in this benchmark model. [1]
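In standard notation (the notation is mine, not the post's), that purely forward-looking IS curve is the log-linearised Euler equation of the representative consumer:

```latex
c_t = \mathbb{E}_t\, c_{t+1} - \sigma \left( i_t - \mathbb{E}_t \pi_{t+1} - \rho \right)
```

Because the infinitely lived consumer also internalises the government's intertemporal budget constraint, the timing of lump-sum taxes drops out of this equation entirely, which is precisely Ricardian Equivalence.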

Ricardian Equivalence means that a bond financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they should do so is not some deep mystery, but may be traced back to the assumption that the intertemporal consumer is never credit constrained. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
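A two-period arithmetic sketch of the contrast (the numbers are hypothetical): a bond-financed tax cut today, repaid with interest through higher taxes tomorrow, moves consumption only for the credit-constrained household.

```python
r = 0.02          # real interest rate
tax_cut = 100.0   # transfer to the household today

# Ricardian household: internalises the government budget constraint. The
# present value of the tax cut and the discounted future tax rise nets to
# zero, so lifetime resources and hence consumption are unchanged.
pv_of_package = tax_cut - (tax_cut * (1 + r)) / (1 + r)   # zero, up to rounding

# Credit-constrained household: cannot borrow against future income, so it
# spends a sizeable fraction of the windfall at once (empirical marginal
# propensities to consume out of transfers are estimated well above zero;
# 0.4 here is just an illustrative value).
mpc = 0.4
constrained_spending = mpc * tax_cut
```

The empirical finding that consumers spend a large share of tax cuts is evidence that the second household matters, and so that the benchmark model's Ricardian prediction fails.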

Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the importance of shifting credit conditions for consumption, and speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacies) associated with that crisis and its aftermath.

I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.

In my view, the benchmark models that academic macroeconomists carry round in their heads should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about in his graduate textbook with Stan Fischer, although then they were confined to chapter 10! These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms, and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not models with consistent microfoundations.


[1] Recognising this point, some might add some ‘rule of thumb’ consumers into the model. This is fine, as long as you do not continue to think the model is microfounded. If these rule of thumb consumers spend all their income because of credit constraints, what happens when these constraints are expected to last for more than the next period? Does the model correctly predict what would happen to consumption if the proportion of rule of thumb consumers changes? It does not.  

Friday, 12 August 2016

Blanchard on DSGE

Olivier Blanchard, former director of the IMF’s research department, has written a short critical piece about DSGE models. Forget all the econblog reaction that essentially says he has been too kind: DSGE completely dominates academic macroeconomics, and there is no way that all these academics are going to suddenly decide this research programme is a waste of time. (I happen to think Blanchard is right that it isn’t a waste of time.) What is at issue is not the existence of DSGE models, but their hegemony.

One of Blanchard’s recommendations is that DSGE “has to become less imperialistic. Or, perhaps more fairly, the profession (and again, this is a note to the editors of the major journals) must realize that different model types are needed for different tasks.” The most important part of that sentence is the bit in brackets. He talks about a distinction between fully microfounded models and ‘policy models’. The latter used to be called Structural Econometric Models (SEMs), and they are the type of model that Lucas and Sargent famously attacked.

These SEMs have survived as the core model used in many important policy institutions (except for the Bank of England) for good reason, but DSGE trained academics have followed Lucas and Sargent in viewing these as not ‘proper macroeconomics’. Their reasoning is simply wrong, as I discuss here. As Blanchard notes, it is the editors of top journals that need to realise this, and stop insisting that all aggregate models have to be microfounded. The moment they allow space for eclecticism, then academics will be able to choose which methods they use.

Blanchard has one other ‘note for editors’ remark, and it also gets to the heart of the problem with today’s macroeconomics. He writes “Not every discussion of a new mechanism should be required to come with a complete general equilibrium closure.” The example he discusses, and which I have also used in this context, is consumption. DSGE modellers have of course often departed from the simple Euler equation, but I suspect the ways they have done this (rule of thumb consumers, habits) reflect analytical convenience rather than realism.

What sometimes seems to be missing in macro nowadays is a connection between people working on partial equilibrium analysis (like consumption) and general equilibrium modellers. Top journal editors’ preference for the latter means that the former is less highly valued. In my view this has already had important costs. I argue that the failure to take seriously the strong evidence about the importance of changes in credit availability for consumption played an important part in the inability of macroeconomics to adequately model the response to the financial crisis (for more discussion see here and here). Even if you do not accept that, the failure of most DSGE models to include any kind of precautionary saving behaviour does not seem right when DSGE has a monopoly in ‘proper modelling’. [1]

Criticism of the DSGE hegemony from those outside economics, from macroeconomists who are not part of it, or even from economic policymakers has had little impact on those all important journal editors up until now. Perhaps similar comments from one of the best macroeconomists in the world might.

[1] I discuss the reasons why this may have occurred in relation to Chris Carroll’s work here.

Saturday, 4 May 2013

Blanchard on Fiscal Policy


I was recently rather negative about the way the IMF frames the fiscal policy debate around the right speed of consolidation. In my view this always prioritises long run debt control over fiscal stimulus at the zero lower bound (ZLB), and so starts us off on the wrong foot when thinking about the current conjuncture. It’s the spirit of 2011 rather than the spirit of 2009.

Blanchard and Leigh have a recent Vox post, which allows me to make this point in perhaps a clearer way, and also to link it to a recent piece by David Romer. The Vox post is entitled “fiscal consolidation: at what speed”, but I want to suggest the rest of the article undermines the title. The first three sections are under the subtitle “Less now, more later”. They discuss the (now familiar from the IMF) argument that fiscal multipliers will be significantly larger in current circumstances, the point that output losses are more painful when output is low, and the dangers of hysteresis. I have no quarrel with anything written here, except the subtitle, of which more below.

A more interesting section is the one subtitled “More now, less later”. This section starts by noting that the textbook case for consolidation is that high debt crowds out productive capital and increases tax distortions. Yet these issues are not discussed further. The article does not say why, but the reason is pretty obvious. While both are long term concerns, they are not relevant at the ZLB.

Instead the section focuses on default, and multiple equilibria. After running through the standard De Grauwe argument, the text then says: “This probably exaggerates the role that central banks can play: Knowing whether the market indeed exhibits the good or the bad equilibrium, and what the interest rate associated with the good equilibrium might be is far from easy to assess, and the central bank may be reluctant to take what could be excessive risk onto its balance sheet.” This is more a description of ECB excuses before OMT than an argument.

More interesting is what comes next. Does default risk actually imply more austerity now, less later? I totally agree with the following: “The evidence shows that markets, to assess risk, look at much more than just current debt and deficits. In a word, they care about credibility.” “How best to achieve credibility? A medium-term plan is clearly important. So are fiscal rules, and, where needed, retirement and public health care reforms which reduce the growth rate of spending over time. The question, in our context, is whether frontloading increases credibility.”

So here we come to a critical point. Does more now, less later, actually increase the credibility of consolidation? If it does not, then the only argument for frontloading austerity disappears. The next paragraph discusses econometric evidence from the crisis, and concludes it is ambiguous. The whole rationale for more now, less later, is hanging by a thread. And there is just one paragraph left! Let me reproduce it in full.

“The econometric evidence is rough, however, and may not carry the argument. Adjustment fatigue and the limited ability of current governments to bind the hands of future governments are also relevant. Tough decisions may need to be taken before fatigue sets in. One must realise that, in many cases, the fiscal adjustment will have to continue well beyond the tenure of the current government. Still, these arguments support doing more now.”

Is this paragraph intentionally weak and contradictory? If credible fiscal adjustment requires consolidation by future governments, why does doing more now add to credibility? You could equally well argue that overdoing it now, because of the adverse reaction it creates (‘fatigue’ !?), turns future governments (and the electorate) away from consolidation, and so it is less credible.

So what we have is an article that appears to be a classic ‘on the one hand, on the other’ type, but is in fact a convincing argument for ‘less now, more later’. Perhaps that is intentional. But even if it is, I’m still unhappy. Although the arguments on multipliers, output gaps and hysteresis appear under the subtitle ‘less now, more later’, they in fact imply ‘stimulus now, consolidation later’, once you take the ZLB seriously. If you are walking along a path, and there is a snake blocking your way, you don’t react by walking towards it more slowly!

Why does this matter? Let me refer to recent comments David Romer made about the ‘Rethinking Macro’ IMF conference, which he suggests avoided the big questions. For example he notes “I heard virtually no discussion of larger changes to the fiscal framework.” He goes on (my italics)

“Another fiscal idea that has received little attention either at the conference or in the broader policy debate is the idea of fiscal rules or constraints. For example, one can imagine some type of constitutional rule or independent agency (or a combination, with a constitutional rule enforced by an independent agency) that requires highly responsible fiscal policy in good times, and provides a mechanism for fiscal stimulus in a downturn that is credibly temporary.”

As I argued here, it is not a matter of having a fiscal rule for consolidation that allows you to just ease up a bit at the ZLB. What we need is a rule that obliges governments to switch from consolidation to stimulus at or near the ZLB. Otherwise, the next time a large crisis hits (and Romer plausibly suggests that could be sooner rather than later), we will have to go through all of this stuff once again.