
Saturday, 29 April 2017

The Brexit slowdown begins (probably)

When, after the Brexit vote, the Bank of England forecast 0.8% GDP growth in 2017, they expected consumption growth to decline to just 1%, with only a small fall in the savings ratio. But consumption growth proved much stronger in the second half of 2016 than the Bank had expected. As this chart from the Resolution Foundation shows, pretty well all the GDP growth through 2016 was down to consumption growth, something they rightly describe as unsustainable. (If consumption is growing but the other components of GDP are not, that implies consumers are eating into their savings. That cannot go on forever.)


This strong growth in consumption in 2016 led the Bank to change its forecast. By February their forecast for 2017 involved 2% growth in consumption and GDP, and a substantial fall in the savings ratio.

What was going on here? In August, the Bank reasoned that consumers would recognise that Brexit would lead to a significant fall in future income growth, and that they would quickly start reducing their consumption as a result. When that didn’t happen the Bank appeared to adopt something close to the opposite assumption, which is that consumers would assume that Brexit would have little impact on expected income growth. As a result, in the Bank’s February forecast, the savings ratio was expected to decline further in 2018 and 2019, as I noted here. Consumers, in this new forecast, would continually be surprised that income growth was less than they had expected.

The first estimate for 2017 Q1 GDP that came out yesterday showed growth of only 0.3%, about half what the Bank had expected in February. This low growth figure appeared to be mainly down to weakness in sectors associated with consumption (although we will not get the consumption growth figure until the second GDP estimate comes out). So what is going on?

There are three possible explanations. The first, which is the least likely, is that 2017 Q1 is just a blip. The second is that many more consumers are starting to realise that Brexit will indeed mean they are worse off (I noted some polling evidence suggesting that here), and are now adjusting their spending accordingly. The third is that consumption was strong at the end of 2016 because people were buying overseas goods before prices went up as a result of the Brexit depreciation.

If you have followed me so far, you can get an idea of how difficult this kind of forecasting is, and why the huge fuss the Brexiteers made about the August to February revision to the Bank’s forecast was both completely overblown and also probably premature. All Philip Hammond could manage to say about the latest disappointing growth data was how it showed that we needed ‘strong and stable’ government! I suspect, however, that we might be hearing a little less about our strong economy in the next few weeks.

Of course growth could easily pick up in subsequent quarters, particularly if firms take advantage of the temporary ‘sweet spot’ created by the depreciation preceding our actually leaving the EU. Forecasts are almost always wrong. But even if this happens, what I do not think most journalists have realised yet is just how inappropriate it is to use GDP as a measure of economic health after a large depreciation. Because that depreciation makes overseas goods more expensive to buy, people in the UK can see a deterioration in their real income, and therefore their wellbeing, even if GDP growth is reasonable. As I pointed out here, that is why real earnings have fallen since 2010 even though we have had positive (although low) growth in real GDP per head, and as I pointed out here that is why Brexit will make the average UK citizen worse off even if GDP growth does not decline. If it does decline, that just makes things worse.
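A back-of-the-envelope sketch of that wedge between GDP and real income (all numbers are assumed purely for illustration, not actual UK data):

```python
# Illustrative only: how a depreciation can leave GDP growth positive
# while real incomes fall. All numbers are made up for exposition.
gdp_growth = 0.015          # real GDP growth (production at constant prices)
import_share = 0.30         # assumed share of imports in the consumption basket
import_price_rise = 0.08    # assumed import price rise after the depreciation

# Consumer price inflation attributable to dearer imports alone
extra_inflation = import_share * import_price_rise

# Real income (purchasing power) grows by less than GDP because part of
# what is produced now buys fewer imports: a terms-of-trade loss.
real_income_growth = gdp_growth - extra_inflation

print(f"Real GDP growth:    {gdp_growth:+.1%}")           # +1.5%
print(f"Real income growth: {real_income_growth:+.1%}")   # -0.9%: worse off
```

The point is simply that the deflator for income includes import prices while the deflator for production does not, so a terms-of-trade loss drives the two measures apart.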

Saturday, 3 May 2014

Pareto, Inequality and Government Debt

Or is economics inherently right wing?

I noted in passing in an earlier post that Pareto efficiency was obviously not a value-free criterion. So those who argue that economists should only look for Pareto improvements – changes where no one is made worse off – are making a value judgement. One, and only one, of its implicit normative assumptions is that inequality does not matter. For others see, for example, Elizabeth Anderson (pdf, HT Anon). Now you could argue that an assumption that inequality does not matter intrinsically is at least internally consistent with the conventional assumption that personal utility depends only on personal variables. However, as that assumption is clearly incorrect, this is a rather weak defence.

(You could also reasonably argue that Pareto improving increases in inequality could have a negative impact on the personal variables of others that conventional economic analysis ignores. So, for example, rising incomes of the 1% - even if these initially come from just increasing the size of the pie - allow that 1% greater political power, which they will subsequently use to redistribute income away from the 99%.)

This is hardly a new point. For just two recent examples of other posts saying the same thing: Richard Serlin here, and Ingrid Robeyns here. It only has to keep being said because too many students are taught that economists like the Pareto criterion because it is value free. One of the comments on that second post says that the task should not be to “import liberal or left-wing moral philosophy into economics. It’s to scrub right-wing, libertarian moral philosophy out of it.” Well, in my usual moderate manner, I’d say we should at least expose it.

A more sophisticated defence of Pareto optimality is the second welfare theorem, which says that we can separate issues of distribution from issues of allocative efficiency. So, if some Pareto improving measure only makes the 1% better off, we can go ahead with it and deal with any reduction in social welfare generated by additional inequality using lump sum transfers. One obvious problem with this idea is that there are no lump sum transfers. Another is that we do not, as a society, decide at some date each year what the optimal distribution of income is and then implement it. In practice the only chance of reversing any inequality created by a Pareto improving measure is to use compensation alongside that measure, but then agents will recognise this connection, which in turn will influence incentives.

The only possibly original point I wanted to make here is that the absurdity of restricting policies to Pareto improvements becomes immediately apparent if we think about government debt. Measures to reduce currently high levels of debt will almost certainly make current generations worse off, because they will have to pay the taxes (or whatever) to get debt down. Yet I do not often hear people arguing that we have to let debt stay high because the government can only implement Pareto improvements. If you think about it for a second, restricting government debt policy to Pareto improvements would be a sure fire recipe for deficit bias.

While this may be obvious, textbooks still make a big deal of dynamic inefficiency. This is the idea that the amount of productive capital in society can be too high, so that too much output is going to preserving that level of capital (replacement investment to offset depreciation etc.), and not enough to consumption. If that is true, then if the current generation saves less, everyone can be made better off. Government intervention to discourage saving would be a Pareto improvement: the current generation consumes more because it saves less, and future generations consume more because less output needs to go to replacement investment.
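To see what dynamic inefficiency means concretely, here is a minimal sketch in a Solow-style steady state (the Cobb-Douglas technology and all parameter values are assumed for illustration):

```python
# Sketch of dynamic (in)efficiency in a Solow-style steady state.
# Cobb-Douglas technology y = k**alpha is assumed; parameters are illustrative.
alpha, n, delta = 0.33, 0.02, 0.05   # capital share, growth rate, depreciation

def steady_state_consumption(s):
    """Steady-state consumption per head for a given saving rate s."""
    k = (s / (n + delta)) ** (1 / (1 - alpha))   # solves s*k**alpha = (n+delta)*k
    return k ** alpha - (n + delta) * k

# Golden rule: f'(k) = n + delta, which here means a saving rate s = alpha.
for s in [0.25, 0.33, 0.50, 0.70]:
    print(f"s = {s:.2f}: steady-state consumption = {steady_state_consumption(s):.3f}")

# Any saving rate above alpha is dynamically inefficient: saving less raises
# consumption today and in every future period - a Pareto improvement.
```

Above the golden-rule saving rate, consumption is lower in every period, so reducing saving really is a free lunch; below it, raising saving helps the future only by hurting the present.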

The symmetrical case is where there is too little capital, which also reduces long run consumption compared to what could be achieved. Yet the implication in many textbooks is that this case is not one we should worry about, because to change it (by raising saving) would make the current generation worse off and is therefore not a Pareto improvement. The discussion in Romer, for example, is all about whether economies are dynamically inefficient rather than sub-optimally small. We don’t think this way about government debt, so why should we when it comes to productive capital?

Why is there this emphasis on only looking at Pareto improvements? I think you would have to work quite hard to argue that it was intrinsic to economic theory - it would be, and is, quite possible to do economics without it. (Many economists use social welfare functions.) But one thing that is intrinsic to economic theory is the diminishing marginal utility of consumption. Couple that with the idea of representative agents that macro uses all the time (who share the same preferences), and you have a natural bias towards equality. Focusing just on Pareto improvements neutralises that possibility. Now I mention this not to imply that the emphasis put on Pareto improvements in textbooks and elsewhere is a right wing plot - I do not know enough to argue that. But it should make those (mainstream or heterodox) who believe that economics is inherently conservative pause for thought.    
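That pull towards equality from diminishing marginal utility is easy to verify in a toy calculation (log utility and the two-agent split are assumed purely for illustration):

```python
import math

# Toy illustration: with concave (here log) utility and identical agents,
# the total utility from a fixed pie is highest when it is shared equally.
def total_utility(incomes):
    return sum(math.log(y) for y in incomes)

pie_equal   = [50.0, 50.0]   # a 100-unit pie, split equally
pie_unequal = [90.0, 10.0]   # the same pie, split unequally

print(f"equal split:   {total_utility(pie_equal):.2f}")    # ~7.82
print(f"unequal split: {total_utility(pie_unequal):.2f}")  # ~6.80
# The unequal split delivers less total utility from the same total income.
```

Any concave utility function gives the same ranking: transferring a pound from rich to poor raises total utility, which is exactly the bias that a Pareto-improvements-only rule switches off.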


Friday, 23 August 2013

New Keynesian models and the labour market

Do all those using New Keynesian models have to believe everything in those models? To answer this question, you have to know the history of macroeconomic thought. I think the answer is also relevant to another frequently asked question: what is the difference between a ‘New Keynesian’ and an ‘Old Keynesian’?

You cannot understand macro today without going back to the New Classical revolution of the 1970s/80s. I often say that the war between traditional macro (Keynesian or Monetarist) and New Classical macro was won and lost on the battlefield of rational expectations. This was not just because rational expectations was such an innovative and refreshing idea, but also because the main weapon in the traditionalists’ armoury was so vulnerable to it. Take Friedman’s version of the Phillips curve, and replace adaptive expectations by rational expectations, and the traditional mainstream Keynesian story just fell apart. It really was no contest. (See Roger Farmer here, for example.)
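To see why the old framework was so vulnerable, consider the expectations-augmented Phillips curve in standard textbook notation (a sketch of the familiar argument, not a formulation taken from any of the links above):

```latex
\pi_t = \pi_t^{e} + \alpha\,(y_t - y^{*}), \qquad \alpha > 0.
```

Under adaptive expectations ($\pi_t^{e} = \pi_{t-1}$) policy can hold $y_t$ above $y^{*}$ indefinitely at the cost of ever-rising inflation, because expectations always lag one step behind. Under rational expectations ($\pi_t^{e} = E_{t-1}\pi_t$) any systematic policy is already embedded in expectations, so output equals $y^{*}$ on average and the exploitable trade-off vanishes.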

I believe that revolution, and the microfoundations programme that lay behind it, brought huge improvements to macro. But it also led to a near death experience for Keynesian economics. I think it is fairly clear that this was one of the objectives of the revolution, and the winners of wars get to rewrite the rules. So getting Keynesian ideas back into macro was a slow process of attrition. The New Classical view was not overthrown but amended. New Keynesian models were RBC models plus sticky prices (and occasionally sticky wages), where stickiness was now microfounded (sort of). Yet from the New Classical point of view, New Keynesian analysis was not a fundamental threat to the revolution. It built upon their analysis, and could be easily dismissed with an assertion about price flexibility. Specifically NK models retained the labour leisure choice, which was at the heart of RBC analysis. Monetary policymakers were doing the Keynesian thing anyway, so little was being conceded in terms of policy. [1]

So labour supply choice and labour market clearing became part of the core New Keynesian model. Is this because all those who use New Keynesian models believe it is a good approximation to what happens in business cycles? I doubt it very much. However, for many purposes assuming perfect labour markets does not matter too much. Sticky prices give you a distortion that monetary policy can attempt to negate by stabilising the business cycle. The position you are trying to stabilise towards is the outcome of an RBC model (natural levels), but in many cases that involves the same sort of stabilisation that would be familiar to more traditional Keynesians.

This is not to suggest that New Keynesians are closet traditionalists. Speaking for myself, I am much happier using rational expectations than anything adaptive, and I find it very difficult to think about consumption decisions without starting with an intertemporally optimising consumer. I also think Old Keynesians could be very confused about the relationship between aggregate supply and demand, whereas I find the New Keynesian approach both coherent and intuitive. However, the idea that labour markets clear in a recession is another matter. It is so obviously wrong (again, see Roger Farmer). So why did New Keynesian analysis not quickly abandon the labour market clearing assumption?

Part of the answer is the standard one: it is a useful simplifying assumption which does not give us misleading answers for some questions. However the reason for my initial excursion into macro history is because I think there was, and still is, another answer. If you want to stay within the mainstream, the less you raise the hackles of those who won the great macro war, the more chance you have of getting your paper published.

There are of course a number of standard ways of complicating the labour market in the baseline New Keynesian model. We can make the labour market imperfectly competitive, which allows involuntary unemployment to exist. We can assume wages are sticky, of course. We can add matching. But I would argue that none of these on its own gets close to realistically modelling unemployment in business cycles. In a recession, I doubt very much if unemployment would disappear if the unemployed spent an infinite amount of time searching. (I have always seen programmes designed to give job search assistance to the unemployed as trying to reduce the scarring effects of long term unemployment, rather than as a way of reducing aggregate unemployment in a recession.) To capture unemployment in the business cycle, we need rationing, as Pascal Michaillat argues here (AER article here). This is not an alternative to these other imperfections: to ‘support’ rationing we need some real wage rigidity, and Michaillat’s model incorporates matching. [2]

I think a rationing model of this type is what many ‘Old Keynesians’ had in mind when thinking about unemployment during a business cycle. If this is true, then in this particular sense I am much more of an Old Keynesian than a New Keynesian. The interesting question then becomes when this matters. When does a rationed labour market make a significant difference? I have two suggestions, one tentative and one less so. I am sure there are others.

The tentative suggestion concerns asymmetries. In the baseline NK model, booms are just the opposite of downturns - there is no fundamental asymmetry. Yet traditional measures of the business cycle, with their talk of ‘productive potential’ and ‘capacity’, are implicitly based on a rather different conception of the cycle. A recent paper (Vox) by Antonio Fatás and Ilian Mihov takes a similar approach. (See also Paul Krugman here.) Now there is in fact an asymmetry implicit in the NK model: although imperfect competition means that firms may find it profitable to raise production and keep prices unchanged following ‘small’ increases in demand, at some point additional production is likely to become unprofitable. There is no equivalent point with falling demand. However, that potential asymmetry is normally ignored. I suspect that a model of unemployment based on rationing will produce asymmetries which cannot be ignored.

The other area where modelling unemployment matters concerns welfare. As I have noted before, Woodford type derivations of social welfare give a low weight to the output gap relative to inflation. This is because the costs of working a bit less than the efficient level are small: what we lose in output we almost gain back in additional leisure. If we have unemployment because of rationing, those costs will rise just because of convexity. [3]

However I think there is a more subtle reason why models that treat cyclical unemployment as rationing should be more prevalent. It will allow New Keynesian economists to say that this is what they would ideally model, even when for reasons of tractability they can get away with simpler models where the labour market clears. Once you recognise that periods of rationing in the labour market are fairly common because economic downturns are common, and that to be on the wrong end of that rationing is very costly, you can see more clearly why the labour contract between a worker and a firm itself involves important asymmetries - asymmetries that firms would be tempted to exploit during a recession. 

Yet you have to ask, if I am right that this way of modelling unemployment is both more realistic and implicit in quite traditional ways of thinking, why is it so rare in the literature? Are we still in a situation where departures from the RBC paradigm have to be limited and non-threatening to the victors of the New Classical revolution?

[1] When, in a liquidity trap, macroeconomists started using these very same models to show that fiscal policy might be effective as a replacement for monetary policy, the response was very different. Countercyclical fiscal policy was something that New Classical economists had thought they had killed off for good.

[2] Some technical remarks.

(a) Indivisibility of labour, reflecting the observation (e.g. Shimer, 2010) that hours per worker are quite acyclical, has been used in RBC models: early examples include Hansen (1985) and Hansen and Wright (1992). Michaillat also assumes constant labour force participation, so the labour supply curve is vertical, and critically some real wage rigidity and diminishing returns.

(b) Consider a deterioration in technology. With flexible wages, we would get no rationing, because real wages would fall until all labour was employed. What if real wages were fixed? If we have constant returns to labour, then if anyone is employed, everyone would be employed, because hiring more workers is always profitable (mpl>w always). What Michaillat does is to allow diminishing returns (and a degree of wage flexibility): some workers will be employed, but after a point hiring becomes unprofitable, so rationing can occur.  

(c) Michaillat adds matching frictions to the model, so as productivity improves, rationing unemployment declines but frictional unemployment increases (as matches become more difficult). Michaillat’s model is not New Keynesian, as there is no price rigidity, but there is no reason why price rigidity could not be added. Blanchard and Gali (2010) is a NK model with matching frictions, but constant returns rules out rationing.
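To make remark (b) concrete, here is a toy numerical sketch (my own illustration with made-up numbers, not Michaillat’s calibration):

```python
# Sketch of job rationing under a rigid real wage and diminishing returns,
# in the spirit of remark (b) above. Numbers are made up for illustration.
alpha = 0.67             # diminishing returns: output = a * n**alpha
w = 0.70                 # rigid real wage
labour_force = 1.0       # inelastic labour supply (constant participation)

def labour_demand(a):
    """Employment where the marginal product of labour equals the wage:
    alpha * a * n**(alpha-1) = w  =>  n = (alpha * a / w)**(1/(1-alpha))."""
    return (alpha * a / w) ** (1 / (1 - alpha))

for a in [1.0, 0.85]:    # baseline productivity vs a deterioration
    n = min(labour_demand(a), labour_force)
    u = labour_force - n
    print(f"a = {a:.2f}: employment = {n:.2f}, rationing unemployment = {u:.2f}")

# With constant returns (alpha = 1) the mpl equals a for any n, so whenever
# a > w firms would hire the whole labour force and rationing could not occur.
```

The diminishing returns are what cap hiring: after a fall in technology, the point at which further hiring becomes unprofitable arrives sooner, and the workers beyond it are rationed out of jobs however hard they search.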

[3] I do not think they will rise enough, because in the standard formulation the unemployed are still ‘enjoying’ their additional leisure. One day macroeconomists will feel able to note that in reality most view the cost of being unemployed as far greater than its pecuniary cost less any benefit they get from their additional leisure time. This may be a result of a rational anticipation of future personal costs (e.g. here or here), or a more ‘behavioural’ status issue, but the evidence that it is there is undeniable. And please do not tell me that microfounding this unhappiness is hard - why should macro be the only place where behavioural economics is not allowed to enter!? (There is a literature on using wellbeing data to make valuations.) Once we have got this bit of reality (back?) into macro, it should be much more difficult for policymakers to give up on the unemployed.

References (some with links in the main text)

Blanchard, O. and Galí, J. (2010), “Labor Markets and Monetary Policy: A New Keynesian Model with Unemployment”, American Economic Journal: Macroeconomics, vol. 2(2), pages 1-30.

Hansen, G. D. (1985), “Indivisible Labor and the Business Cycle”, Journal of Monetary Economics, vol. 16, pages 309-327.

Hansen, G. D. and Wright, R. (1992), “The Labor Market in Real Business Cycle Theory”, Federal Reserve Bank of Minneapolis Quarterly Review, vol. 16, pages 2-12.

Michaillat, P. (2012), “Do Matching Frictions Explain Unemployment? Not in Bad Times”, American Economic Review, vol. 102(4), pages 1721-1750.

Shimer, R. (2010), Labor Markets and Business Cycles, Princeton, NJ: Princeton University Press.

Saturday, 4 May 2013

Microfounded Social Welfare Functions


More on Beauty and Truth for economists


I have just been rereading Ricardo Caballero’s Journal of Economic Perspectives paper entitled “Macroeconomics after the Crisis: Time to Deal with the Pretense-of-Knowledge Syndrome”. I particularly like this quote:

The dynamic stochastic general equilibrium strategy is so attractive, and even plain addictive, because it allows one to generate impulse responses that can be fully described in terms of seemingly scientific statements. The model is an irresistible snake-charmer.


I thought of this when describing here (footnote [5]) Woodford’s derivation of social welfare functions from a representative agent’s utility. Although it has now become a standard part of the DSGE toolkit, I remember when I had to really work through the maths for this paper. I recall how exciting it was, first to be able to say something about policy objectives that was more than ad hoc, and second to see how terms in second order Taylor expansions nicely cancelled out when first order conditions describing optimal individual behaviour were added.

This kind of exercise can tell us some things that are interesting. But can it provide us with a realistic (as opposed to model consistent) social welfare function that should guide many monetary and fiscal policy decisions? Absolutely not. As I noted in that recent post, these derived social welfare functions typically tell you that deviations of inflation from target are much more important than output gaps - ten or twenty times more important. If this was really the case, and given the uncertainties surrounding measurement of the output gap, it would be tempting to make central banks pure (not flexible) inflation targeters - what Mervyn King calls inflation nutters.
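For concreteness, the derivation delivers a quadratic loss of roughly the following textbook form (the notation and illustrative calibration here are mine, not taken from any particular paper):

```latex
W \approx -\,\Omega \, E_0 \sum_{t=0}^{\infty} \beta^{t}\left[\pi_t^{2} + \lambda\, x_t^{2}\right],
\qquad \lambda = \frac{\kappa}{\theta},
```

where $x_t$ is the output gap, $\kappa$ is the slope of the New Keynesian Phillips curve and $\theta$ is the elasticity of substitution between goods. Because $\theta$ is typically calibrated at 6 or more and $\kappa$ is well below 1, $\lambda$ comes out well below 0.1 - which is where the ten-or-twenty-fold weight on inflation comes from.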


Where does this result come from? The inflation term in Woodford’s derivation of social welfare comes from relative price distortions when prices are sticky due to Calvo contracts. Let’s assume for the sake of argument that these costs are captured correctly. The output gap term comes from sticky prices leading to fluctuations in consumption and fluctuations in labour supply. Lucas famously argued [1] that the former are small. Again, for the sake of argument let’s focus on fluctuations in labour supply.

Many DSGE models use sticky prices and not sticky wages, so labour markets clear. They tend, partly as a result, to assume labour supply is elastic. Gaps between the marginal product of labour and the marginal rate of substitution between consumption and leisure become small. Canzoneri and coauthors show here how sticky wages and more inelastic labour supply will increase the cost of output fluctuations: agents are now working more or less as a result of fluctuations in labour demand, and inelasticity means that these fluctuations are more costly in terms of utility. Canzoneri et al argue that labour supply inelasticity is more consistent with micro evidence.

Just as important, I would suggest, is heterogeneity. The labour supply of many agents is largely unaffected by recessions, while others lose their jobs and become unemployed. Now this will matter in ways that models in principle can quantify. Large losses for a few are more costly than the same aggregate loss equally spread. Yet I believe even this would not come near to describing the unhappiness the unemployed actually feel (see Chris Dillow here). For many there is a psychological/social cost to unemployment that our standard models just do not capture. Other evidence tends to corroborate this happiness data.

So there are two general points here. First, simplifications made to ensure DSGE analysis remains tractable tend to diminish the importance of output gap fluctuations. Second, the simple microfoundations we use are not very good at capturing how people feel about being unemployed. What this implies is that conclusions about inflation/output trade-offs, or the cost of business cycles, derived from microfounded social welfare functions in DSGE models will be highly suspect, and almost certainly biased.

Now I do not want to use this as a stick to beat up DSGE models, because often there is a simple and straightforward solution. Just recalculate any results using an alternative social welfare function where the cost of output gaps is equal to the cost of inflation. For many questions addressed by these models results will be robust, which is worth knowing. If they are not, that is worth knowing too. So it’s a virtually costless thing to do, with clear benefits.

Yet it is rarely done. I suspect the reason why is that a referee would say ‘but that ad hoc (aka more realistic) social welfare function is inconsistent with the rest of your model. Your complete model becomes internally inconsistent, and therefore no longer properly microfounded.’ This is so wrong. It is modelling what we can microfound, rather than modelling what we can see. Let me quote Caballero again

“[This suggests a discipline that] has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.”

As I have argued before (post here, article here), those using microfoundations should be pragmatic about the need to sometimes depart from those microfoundations when there are clear reasons for doing so. (For an example of this pragmatic approach to social welfare functions in the context of US monetary policy, see this paper by Chen, Kirsanova and Leith.) The microfoundation purist position is a snake charmer, and has to be faced down.

[1] Lucas, R. E., 2003, Macroeconomic Priorities, American Economic Review 93(1): 1-14.

Tuesday, 20 March 2012

The optimal speed of debt correction

This month’s Economic Journal sees the publication of a paper by Tatiana Kirsanova and myself that asks how quickly fiscal policy should respond to excess government debt. It is a very topical issue, but we started work on it back in 2004. It was Tanya’s idea so she deserves the credit for any foresight.

The set up is both straightforward and not too unrealistic. We assume that monetary policy pursues an optimal policy, and that it has the credibility to commit. (We ignore the zero lower bound problem - it is not that topical!) In contrast, fiscal policy involves a simple feedback rule: following some shock, either government spending or taxes are adjusted by some proportion of the level of excess debt. The economic model is of a pretty standard New Keynesian variety. We also ignore the possibility of default on debt. Following various shocks, we look at how social welfare changes as the speed of fiscal feedback varies. The idea was not just to find out what the optimal degree of feedback is, but also to observe how much welfare suffers if we depart from this optimal speed.
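To convey the flavour of such a rule, here is a toy debt-accumulation sketch (my own illustration with made-up parameters; the paper embeds the rule in a full New Keynesian model rather than this bare accounting exercise):

```python
# Toy sketch of a fiscal feedback rule of this kind: the primary surplus
# responds to excess debt, surplus = phi * (b - b_star). Parameters are
# made up for illustration only.
r = 0.02                 # real interest rate
b_star = 0.6             # debt target (as a share of GDP)

def debt_path(phi, b0=0.8, periods=40):
    """Debt-to-GDP ratio over time when the surplus feeds back on excess debt."""
    b, path = b0, []
    for _ in range(periods):
        surplus = phi * (b - b_star)
        b = (1 + r) * b - surplus
        path.append(b)
    return path

for phi in [0.05, 0.5]:  # slow versus rapid correction
    print(f"phi = {phi}: debt after 10 periods = {debt_path(phi)[9]:.3f}")

# Any phi > r stabilises debt eventually; the question the paper asks is
# which speed is best for welfare, given the costs of sharp fiscal changes.
```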
One extreme case is where there is no fiscal feedback – the fiscal authorities just ignore debt completely. The presumption might be that this would lead to an explosive debt spiral. In fact it does not, because monetary policy manages to control both debt and inflation. The way it does this is clever (remember policy is optimal, so it can be clever). Following a shock that raises debt, it first cuts interest rates, which helps get debt under control. It subsequently raises interest rates, which controls inflation. The reason this works is that the inflation process, unlike the government’s budget constraint, is very forward looking – the model is New Keynesian. With rational expectations, promises of higher interest rates in the future help control inflation today, while lower interest rates today keep debt under control. (Those who follow such things may detect an element of the Fiscal Theory of the Price Level here.) However, although the economy is stabilised, inflation is not controlled very efficiently, and so the welfare costs of the shock are high.

At the other extreme is where fiscal feedback is very rapid: government spending (say) is cut very quickly in response to a shock that raises debt. This does better in welfare terms, but it is not the best policy for two reasons. First, quickly cutting government spending has costs. Second, large changes in government spending mess with inflation. If a shock raises both debt and inflation, you might think this is a good thing, but in this type of model monetary policy is better at controlling inflation than fiscal policy. Having both fiscal and monetary policy control inflation actually reduces welfare.

The best amount of fiscal feedback turns out to involve pretty slow correction of debt. Although this is an important result, it was not unexpected or very new. This is because the ideal fiscal policy (when both monetary and fiscal policy are jointly determined in an optimal manner under commitment) in this type of model would not eliminate excess debt at all. It is better to allow debt to change permanently as a result of a shock: although this means either government spending or taxes will have to change permanently to finance this debt, the discounted costs of this are less than the large short term changes in fiscal policy required to eliminate all excess debt. (This is an implication of Barro’s famous tax smoothing result.) A simple feedback rule cannot duplicate this ideal policy exactly, but it can get pretty close by not reacting very much to excess debt.
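The tax smoothing logic, in its simplest form (a sketch under the standard assumption of convex distortion costs, not the exact specification in our paper): the government minimises the expected discounted cost of tax distortions subject to its budget constraint,

```latex
\min_{\{\tau_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, c(\tau_t)
\quad\Longrightarrow\quad c'(\tau_t) = E_t\, c'(\tau_{t+1}),
```

with $c(\cdot)$ convex, so marginal distortion costs are equalised across time and taxes follow (approximately) a random walk. A debt shock is then best financed by a small permanent tax change rather than a sharp temporary one, which is why some of the excess debt is left to persist.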
The result that it is best to correct debt very slowly is neither surprising nor unusual – a similar conclusion is reached by most papers that look at this issue. In that sense it is a robust result. In more recent work that I have done with Campbell Leith and Ioana Moldovan, where the ideal policy is no longer to accommodate shocks to debt, we still get very slow adjustment. The more interesting result in the paper is quantifying the degree to which welfare deteriorates when we try and correct debt too quickly. Controlling debt too quickly is never as bad as not controlling it at all, but it is significantly worse than more gradual adjustment. Trying to correct debt too rapidly is costly, and should be avoided if possible, even when we are not at the zero lower bound.

One interesting result that surprised me at the time was how well the simple feedback rule for fiscal policy did compared to the ideal fiscal policy (i.e. a joint optimisation benchmark). I now know that this reflects a fundamental property of this basic New Keynesian model, which is that in many cases, and in the absence of the zero lower bound, monetary policy is all you need for stabilisation of output and inflation. (This is well known for demand shocks, but it turns out to be more general than that.) With the help of Fabian Eser and Campbell Leith, we show this analytically here. Of course if monetary policy is compromised by hitting a zero lower bound for nominal interest rates, this result no longer holds. In this case you would want fiscal policy to help out on output stabilisation, as Gauti Eggertsson has examined in a number of papers. The position is similar to that of an individual economy in a monetary union. Tanya and I showed in joint work with Mathan Satchi and David Vines that the simple feedback rule on debt should be supplemented with additional terms reflecting this dual role in a monetary union.

About two years ago, there was a debate via letters to newspapers over how quickly UK debt should be brought down. I’m told that when one of the economists who signed the ‘cut quickly’ letter saw Tanya present this paper, they said that they now understood why I had signed the ‘cut slowly’ alternative. And for those who would say ‘ah yes, but slow correction would not be credible’, read this.