Winner of the New Statesman SPERI Prize in Political Economy 2016

Wednesday, 12 April 2017

Economics is an inexact science

When I wrote about why the BBC should treat a clear consensus in economics the same way as it now treats climate science, I got a number of comments arguing that economics is not a science. A common theme was that economics cannot prove theories ‘beyond doubt’ the way the hard sciences can. A more sophisticated version of this complaint is that most economic theories cannot be disproved in the same way that Popper thought scientific theories could be disproved.

All this ignores a key feature of any social science: its inexact nature. Instead of proof, we have accumulations of evidence that confirm the applicability of some theories and reject the applicability of others. Economists’ views about which models are applicable change as this evidence accumulates.

A good example involves the minimum wage, as Noah Smith suggests. The basic economic model suggested that even a modest minimum wage should significantly reduce employment, but economists discovered that the evidence did not show this. As the evidence accumulated, alternative theories and models (monopsony and search) were thought to be more relevant. It is this response to evidence that makes economics a science.

Jo Michell writes “The scientific method of forming a hypothesis and then testing that hypothesis against reality can never be the final arbiter of knowledge, as it can in the physical sciences.” He is right that no single experiment or regression can kill a theory, but wrong that the accumulation of evidence is not the final arbiter, because no other arbiter is available. He links to a post by Noah Smith which talks about the failures of forecasting. But as that post makes clear, this is not about data rejecting models, but the inability of models to predict the future. We would never dream of condemning medics because they cannot predict the exact time of our death, still less suggest that this failure indicates they are not doing science.

Of course economics involves cases where economists appear too reluctant to give up their favoured models. You can find similar stories in the hard sciences. There will be more such stories in economics because its inexact nature makes it easier to discount any single piece of evidence. What I cannot understand is what leads someone like Russ Roberts to argue against the use of evidence, claiming instead that “economics is primarily a way of organizing one’s thinking”. Astrology is also a way of organising one’s thinking, but it fails because the evidence does not back it up.

That comparison is slightly unfair, because while the theory behind astrology is obviously implausible, the basic principles of microeconomics are not. In a class on economic methodology I once drew a huge tree that showed how most of economics could be derived from principles of rational choice. But go beyond the basics, and add in complications involving information and transactions costs (to name but two) and you very quickly derive competing models. There is no single model that comes from thinking like an economist, so for that reason alone we need data to tell us which models are more applicable.

So thinking like an economist does not tell me at what point raising the minimum wage will reduce employment. But why would anyone want to keep their models from being proved relevant or otherwise by data? The only reason I can think of is that some models give answers that are ideologically convenient. Of course allowing data to establish the relevance of some models over others does not make economics ideology proof. For example people can always select the one study that suggests that fiscal policy does not influence output and ignore the hundreds that show otherwise. That is why the accumulation of evidence, which includes its replicability, is so important. If you think economics has problems in that respect, have a look at psychology.

This is why economists’ views about the long-term impact of Brexit should be treated as knowledge rather than just opinion. Here knowledge is shorthand for the accumulation of evidence consistent with plausible theory. Sometimes the theories are common sense: making trade more difficult will reduce trade. Estimates of the size of the trade reduction based on evidence are uncertain, but they are better than estimates based on wishful thinking. Empirical gravity equations consistently show that geography still matters a lot in determining how much is traded. Finally, there is clear evidence that trade is positively associated with productivity growth. To say that all this has no more worth than some politician’s opinion is ultimately to degrade evidence and the science that interprets it.
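For readers unfamiliar with gravity equations, a minimal sketch may help. The form below is a deliberately simplified textbook version (trade rises with the product of the partners’ GDPs and falls with distance); the constant and elasticity are hypothetical round numbers, not estimates from any actual study.

```python
# Simplified gravity equation for bilateral trade: predicted trade
# between countries i and j rises with the product of their GDPs and
# falls with the distance between them. All parameter values here are
# illustrative placeholders, not empirical estimates.

def gravity_trade(gdp_i, gdp_j, distance, g=1.0, distance_elasticity=1.0):
    """Predicted bilateral trade flow, in arbitrary units."""
    return g * gdp_i * gdp_j / distance ** distance_elasticity

# With a distance elasticity of 1, doubling the distance between two
# otherwise identical trading partners halves predicted trade.
near = gravity_trade(gdp_i=2.0, gdp_j=3.0, distance=1000)
far = gravity_trade(gdp_i=2.0, gdp_j=3.0, distance=2000)
assert near == 2 * far
```

Empirical versions are estimated in logs with many controls, but this is the core mechanism behind the claim that geography still matters.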



Friday, 13 January 2017

Miles on Haldane on Economics in Crises

Anything that says economics is in crisis always gets a lot of attention, particularly after Brexit (because economists are so pessimistic about its outcome), and Andy Haldane’s public comments were no exception. But former Monetary Policy Committee colleague David Miles has hit back, saying Haldane is wrong and economics is not in crisis. David is right, but (perhaps inevitably) he slightly overstates his case.

First an obvious point that is beyond dispute. Economics is much more than macroeconomics and finance. Look at an economics department, and you will typically find less than 20% are macroeconomists, and in some departments there can be just a single macroeconomist. Those working on labour economics, experimental economics, behavioural economics, public economics, microeconomic theory and applied microeconomics, econometric theory, industrial economics and so on would not have felt their sub-discipline was remotely challenged by the financial crisis.

David Miles is also right that economists have not found it difficult to explain the basic story of the financial crisis from the tools that they already had at their disposal. Here I will tell again a story about an ESRC seminar held at the Bank of England about whether other subjects like the physical sciences could tell economists anything useful post-crisis. It was by invitation only, Andy Haldane was there throughout, and for some reason I was there and asked to give my impressions at the end. In the background document there was a picture a bit like this.
[Figure: UK bank leverage: ratio of total assets to shareholder claims. Source: Bank of England Financial Stability Report, June 2012. Added by popular request 17/1/17.] [3]

I made what I hope is a correct observation. Show most economists a version of this chart just before the crisis, and they would have become very concerned. Some might have had their concern reduced by assurances and stories about how new risk management techniques made the huge increase in leverage seen in the years just before the crisis perfectly safe, but I think most would not. In particular, many macroeconomists would have said what about systemic risk?

The problem before the financial crisis was that hardly anyone looked at this data. There is one institution that surely would have looked at data like this, and that is the Bank of England. As Peter Doyle writes:

“ .. it was not “economics” that missed the GFC, but, dare I say it (and amongst some others), the Bank of England.”

If there is a discussion of the increase in bank leverage and the consequent risks to the economy in any Inflation Report in 2006 or 2007, I missed it. I do not think we have been given a real account of why the Bank missed what was going on: who looked at the data, who discussed it, and so on. I think we should know, if only for history’s sake.

What I think David Miles could have said, but didn’t, is that macroeconomists were at fault in taking the financial sector for granted, and therefore typically not including key finance-to-real interactions in their models. [1] As a result, the crisis has inspired a wave of new research that tries to make up for that, but this involves taking existing ideas and applying them to macroeconomic models. There has also been new work using new techniques that has tried to look at network effects, which Andy Haldane mentions here. Whether this work could be usefully applied much more widely, as he suggests, is not yet clear, and to say that until that happens there is a crisis in economics is just silly.

The failure to forecast that consumers after the Brexit vote would reduce their savings ratio is a typical kind of forecasting error. Would they have done this anyway, and if not, what about the Brexit vote and its aftermath inspired it? We will probably never know for sure. This kind of mistake happens all the time in macro forecasting, which is why comparisons to weather forecasting and Michael Fish are not really apt. [2] That is what David Miles means by calling it a non-event.

What is hardly ever said, so I make no apologies for doing so once more, is that macroeconomic theory has in some ways ‘had a good crisis’. Basic Keynesian macroeconomic theory says you don’t worry about borrowing in a recession because interest rates will not rise, and they have not. New Keynesian theory says creating loads of new money will not lead to runaway inflation, and it has not. Above all else, macroeconomic theory and most evidence said that the turn to austerity in 2010 would delay or weaken the recovery, and that is exactly what happened. As Paul Krugman often says, it is quite rare for macroeconomics to be so fundamentally tested, and it passed that test. We should be talking not about a phoney crisis in economics, but about why policy makers today have ignored economics, and thereby cost their citizens the equivalent of a lot of money.

[1] In the COMPACT model I built in the early 1990s, credit conditions played an important role in consumption decisions, reflecting the work of John Muellbauer. But as I set out here, proposals to continue the model and develop further financial/real linkages were rejected by economists and the ESRC because it was not a DSGE model.

[2] Weather forecasts for the next few days are more accurate than macro forecasts, although perhaps longer-term forecasts are more comparable. More fundamentally, while the weather is a highly complex system like the economy, it is made up of physical processes that are predictable in a way human behaviour will never be. As a result, I doubt that simply having more data will have much impact on the ability to forecast the economy.

[3] Total assets are the size of the bank's balance sheet. Shareholder claims are the part of those assets that belongs to shareholders, and which therefore represents a cushion that can absorb losses without the bank facing bankruptcy. So at the peak of the financial crisis, banks had over 60 times as many assets as that cushion. That makes a bank very vulnerable to losses on those assets.
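The arithmetic in this footnote can be made concrete with a small sketch. The balance-sheet numbers below are hypothetical round figures chosen only to reproduce the 60x ratio mentioned above; they are not actual Bank of England data.

```python
# Leverage ratio as described in the footnote: total assets divided by
# shareholder claims (equity). Figures are illustrative placeholders.

def leverage_ratio(total_assets, shareholder_claims):
    """Ratio of balance-sheet size to the equity cushion."""
    return total_assets / shareholder_claims

ratio = leverage_ratio(total_assets=600.0, shareholder_claims=10.0)
print(ratio)  # 60.0

# At 60x leverage, a fall of just 1/60 (about 1.67%) in the value of
# the bank's assets wipes out the entire equity cushion.
loss_that_wipes_out_equity_pct = round(100 / ratio, 2)
print(loss_that_wipes_out_equity_pct)  # 1.67
```

This is why the chart would have alarmed most economists: the higher the ratio, the smaller the asset-price fall needed to make the bank insolvent.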

Saturday, 3 December 2016

Hitting back

Not a post about a certain byelection, but a reaction to reading this:
“A more serious incident was the forecast by the Office for Budget Responsibility in the UK, which said last week that Brexit would have severe economic consequences. Coming only a few months after the economics profession discredited itself with a doomy forecast about the consequences of Brexit, this is an astonishing reminder of the inadequacy of economic forecasting models.

The truth about the impact of Brexit is that it is uncertain, beyond the ability of any human being to forecast and almost entirely dependent on how the process will be managed. “Don’t know” is the technically correct answer. Before the referendum, Project Fear was merely a monumental tactical miscalculation. Today it is stupidity. One of the debates was whether people should be listening to experts. We have moved beyond that. Because of a tendency to exaggerate, macroeconomists are no longer considered experts on the macroeconomy.”

Shrug your shoulders and move on? If it had appeared in the partisan press that would be a sensible reaction, but this was written by a widely respected journalist in the UK’s internationally renowned financial newspaper. Furthermore - lest my motives be misunderstood - it was written by someone whose knowledge of the Eurozone is beyond dispute and whose views I often agree with. Well, on this occasion this particular member of a discredited profession, who is apparently no longer considered an expert on macroeconomics, is not prepared to take this kind of stuff anymore, whoever it may come from.

It is difficult to know where to start with such apparent and complete ignorance. Nonsense expressed as platitudes. You can only make sense of “beyond the ability of any human to forecast” if you either think we know nothing about the impact of trade restrictions, which is false, or that forecasts are non-probabilistic. No journalist has any excuse nowadays for misunderstanding the probabilistic nature of forecasts (Bank of England fan charts), and any academic economist who knows anything about forecasting will tell you that unconditional macro forecasts are only slightly better than intelligent guesswork. They exist because it is worth being slightly better than guesswork when the stakes are so high.

You can also only make sense of these two paragraphs if the writer is unaware of, or is choosing to ignore, the difference between conditional and unconditional forecasts. These are long words for a very simple concept. You would not dream of asking your doctor to forecast the number of times you will catch a cold over the next year (an unconditional forecast), but if you gave them all your relevant data they could probably make a better guess than you could yourself. Their forecast would be probabilistic, but if you took the mean as ‘the forecast’ then in any particular year your doctor would generally be wrong. It would be absurd for you to then say that, having ‘discredited the profession with this inaccuracy’, you were now going to ignore their advice about how to avoid catching colds (advice based on conditional forecasts). But this is the logic of these two paragraphs.
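The cold-forecasting analogy can be sketched as a toy simulation. This is my own illustration, not the author's: the "conditional forecast" is an expected number of colds per year given a person's data, while each year's outcome is a random integer draw that will almost never equal that mean, even though the mean remains the best available guess.

```python
import random

# Toy illustration of a probabilistic point forecast. Each day carries
# a small independent chance of catching a cold; the expected annual
# count is the conditional forecast. All numbers are hypothetical.

random.seed(42)

def simulate_colds(expected_per_year, days=365):
    """One year's realised cold count, drawn day by day."""
    p = expected_per_year / days
    return sum(1 for _ in range(days) if random.random() < p)

forecast = 2.4  # mean colds per year, conditional on the person's data
years = [simulate_colds(forecast) for _ in range(1000)]

# The point forecast is "wrong" every single year: outcomes are whole
# numbers, the forecast is not. Yet the long-run average sits close to
# it, which is exactly what a probabilistic forecast promises.
exactly_right = sum(1 for y in years if y == forecast)
print(exactly_right)  # 0
print(abs(sum(years) / len(years) - forecast) < 0.5)  # True
```

The same distinction applies to the OBR: its Brexit numbers are conditional, probabilistic statements, and judging them as if they were unconditional point predictions repeats the mistake in the quoted paragraphs.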

As for a tendency to exaggerate, the simplest response involves a black kettle. But on this particular occasion I think there is a more honest response. In the Brexit campaign I felt the temptation to exaggerate (I don’t think I ever succumbed), because the media was failing to get the message from economists across. Our collective knowledge about the impact of trade restrictions was treated as just one more opinion, or described as Project Fear. When you are effectively being ignored you tend to shout louder.

But this is all defensive. Trying to explain yet again some basic economic ideas, and to be honest about what you can or cannot do and any failings you have. I’m just tired of doing this stuff over and over again, so it is time not just to defend. There are many good journalists out there, who when they write about macroeconomics do try to check with academics that what they are writing makes sense. (It was one of those journalists who drew my attention to the article I quote above.) It simply lets them down when others think they can write this sort of stuff without any of the kind of basic fact checking that journalists are supposed to do. It brings the profession of journalism into disrepute.

And they can only get away with it because academic economists only get a media voice by the grace and favour of journalists. If anyone should be doing some serious introspection after the Brexit result it should be journalists and the media. Warning of the dangers of trade restrictions was not a ‘tactical mistake’. What was a mistake was for journalists to allow those warnings, that knowledge, to be characterised as Project Fear, all in the name of ‘balance’ or cheap copy. But this was not a temporary lapse in an otherwise good record, but just another example of a growing tendency for the media to allow politicians to define economic facts and truths, a record I described in my lecture.

To have the nerve to blame economists for the Brexit result, to suggest that using their knowledge was a ‘tactical mistake’, to imply that the OBR should pretend they know nothing about Brexit, all that is itself amazing malevolent chutzpah. But it goes beyond audacity to criticise a profession and subject matter you appear not to understand when it is this lack of understanding that has contributed so much to the damage over the last few years.



Friday, 12 December 2014

Bond market fairy tales part 2

In part 1 I contrasted the way I think about how different speeds of deficit reduction in the UK or US today will influence interest rates on government debt with how at least some people in those markets say they think about the same issue. That was a particular example of a more general phenomenon. The macroeconomics coming from economists attached to financial institutions often seems to be rather different to the macroeconomics of academic economists. When it comes to an issue involving financial markets, then it seems obvious who mediamacro should believe. Those close to the markets surely must know more about how those markets work than some unworldly academic. This post will suggest a more nuanced view.

As is often the case in macroeconomics, it all depends on the time horizon. Are we talking about what may happen over the next few days or weeks, or are we talking about what will happen over the next few years?

In terms of very short term prediction, financial market economists beat academic economists hands down. The only thing most academic economists can usefully tell you is that it is unlikely you will outsmart market opinion. If you really want to try then you need lots of short term information and a good nose for how that short term information is interconnected. Most academics (there are exceptions) just do not have time to do that work. I always remember the reply an academic member of the Bank of England’s Monetary Policy Committee gave to some MP who asked him about the implications of some latest data. I must have been doing some marking (grading) at the time that came out, was the reply.

Perhaps more surprisingly, those working in the markets are not as concerned about the longer term (what might happen in three or five years’ time) as you might expect. That is because money is made in predicting short-term movements, and knowledge of where things are going over the next few years is a relatively weak guide to what might happen over the next few days. When I first started doing work on ‘equilibrium exchange rates’, I got a lot of queries from those in the markets, but the interest largely disappeared when I told them that ‘equilibrium’ meant where rates might be in about five years’ time.

This may surprise you because economists attached to financial market institutions often tell longer term stories, and sometimes they even produce detailed numerical forecasts of the type produced by central banks or governments. (See the list that the UK Treasury compiles for example.) But as I have often said, macroeconomic forecasts are only slightly better than guesswork. So it is only really worth putting any significant resources into producing a macro forecast if you are taking or seriously influencing decisions - like setting interest rates - where the costs of getting things wrong are extremely large. My suspicion is that financial sector macro forecasts are mainly there to give the impression of expertise to the institution’s clients.

I also suspect that economists working for financial institutions spend rather more time talking to their institution’s clients than to market traders. They earn their money by telling stories that interest and impress their clients. To do that it helps if they have the same worldview as their clients. Getting things right over the longer term seems less important, as Paul Krugman keeps complaining about in the context of those who have been predicting rapid inflation as a result of Quantitative Easing. 

It is also useful if they leave their clients with the impression that they have some unique insight into how the markets work. So instead of suggesting - as an academic would - that markets are governed by basic principles, it is better to suggest that the market is like some capricious god, and they are one of a few high priests who can detect its mood. Now in the short term the market really can behave in volatile, unexpected and sometimes mysterious ways, but over the longer term there are some basic rules that markets obey.

The incentive system for academics is very different. They are judged by their peers. If they present stories to the media that differ greatly from conventional wisdom about theory or the empirical evidence, they will be given a hard time by their colleagues. They need to have an idea about how markets work to do good macroeconomics. They want to be more like scientists than high priests. (This has an unfortunate by-product. Most academics would rather not lose precious research time talking to journalists, particularly if the quotes they give may fail to contain the caveats normally demanded in academic work. In contrast talking to the media is part of a city economist’s job description.)   

So who should journalists trust on the economy? If you want to know about the latest retail sales numbers or where the economy might be heading over the next few months, with a few exceptions financial economists are better bets than academic economists. If you have a more long term question, like how alternative speeds of deficit reduction will influence interest rates, then perhaps surprisingly you may tend to get a more reliable answer from academics. Like most things in economics, this is a tendency: there are some seasoned city economists who I would trust over many academics.

There is an important implication about political bias as well. Academic economists are no saints on this, but I do not think there is a clear average bias among academic macroeconomists towards the left or right. However partly because financial economists need to be good at telling stories that their clients find sympathetic, their worldview tends to be one where a smaller state is good for the economy, higher taxes on top incomes are a bad idea, markets are generally efficient and regulation is harmful.

If you think this is just self-serving conjecture, look at this evidence. The question of whether, in the UK, the 2013 recovery vindicated 2010 austerity was a no-brainer. Anyone who thinks about the logic for a moment will realise the answer is no, even if they think austerity was a good idea. To suggest otherwise would be to argue that it was a good idea to close half the economy down for a year, because growth in the following year would be fantastic. To answer yes to this question probably indicates political bias rather than lack of thought. When the Financial Times asked this question, only two out of twelve academics gave the answer yes. About half the city economists who were asked said yes.