Winner of the New Statesman SPERI Prize in Political Economy 2016
Showing posts with label Friedman. Show all posts

Friday, 27 January 2017

The UK’s 1976 IMF crisis in the light of modern theory

During the arguments over austerity, its supporters would often point to 1976 as evidence that it was possible for a country with its own currency to have a debt funding crisis. At the time this was frustrating for me, because I had been a very junior economist in the Treasury at the time, and my dim recollection was of an exchange rate crisis rather than a debt funding crisis. But I could not trust my memory and did not have time to do much research myself.

So with the publication of a new book by Richard Roberts on this exact subject (many thanks to Diane Coyle, whose FT review of the book is here), I thought it was time to revisit that episode, combining Roberts's comprehensive account with our current understanding of macroeconomic theory. I think any macroeconomist would find what happened in 1976 puzzling until they realised that senior policymakers did not have two key pieces of modern knowledge: the centrality of the Phillips curve, and an understanding of how the foreign exchange market works.

In terms of where the economy was, there is one crucial difference between 1976 and 2010. In the previous year of 1975 CPI inflation had reached a postwar peak of 24.2%. Although that peak owed a lot to a disastrous agreement with the unions, it probably also had a lot to do with the ‘Barber boom’, which had led to output in 1973 being 6.5% above the level at which inflation would be stable (using the OBR’s measure of the output gap). Although this output gap had disappeared by 1975 and 1976, inflation was still 16.5%. Given the lack of any kind of credible inflation target, a period of negative output gaps would almost certainly be required to reduce inflation to reasonable levels.

The lack of understanding by senior policymakers of how the foreign exchange market worked was due to floating exchange rates being a novelty, Bretton Woods having broken down only 5 years earlier. We had a policy of ‘managed floating’, where policymakers thought the Bank of England could intervene in the FOREX markets to ‘smooth’ the trajectory of the exchange rate. My job at the time - forecasting the world economy - was a long way from where the action was, but my main recollection of the time comes from one of the periodic meetings of all the Treasury’s economists. It seemed as if the Treasury’s senior economists believed in a ‘cliff model’ of the exchange rate: if the rate moved significantly away from the target the Bank was aiming at, it would collapse with no lower bound in sight. At the meeting I remember some more junior economists (but more senior than I) trying to explain ideas about fundamentals and Uncovered Interest Parity, but their seniors seemed unconvinced.

It is much easier to understand the 1976 crisis if you see it as a classic attempt to peg the currency when the markets wanted to depreciate it, and this is the main story Roberts tells. The immediate need for the IMF money was to be able to repay a credit from the G10 central banks that had been used to support sterling. It is also true that sales of government debt had been weak, but Roberts describes this as stemming from a (correct) belief that rates on new debt were about to rise - a classic buyers' strike. Although nominal interest rates at the time were at a record high, they were still at a similar level to inflation, implying real rates of around zero.

Here we get to the heart of the difference between 2010 and 1976. If there had been a strike of gilt buyers in 2010, the Bank of England would have simply increased its purchases of government debt through the QE programme, the whole aim of which was to keep long rates low. It could do this because inflation was low and showed no sign of rising. Contrast this with 1976, with inflation in double figures and real rates near zero.

I think what would strike a macroeconomist even more about this period was the absence of the Phillips curve from the way policymakers thought. Take this extract from the famous Callaghan speech to the party conference that Peter Jay helped draft.
“We used to think that you could spend your way out of a recession and increase employment by cutting taxes and boosting government spending. I tell you with all candour that that option no longer exists, and so far as it ever did exist, it only worked on each occasion since the war by injecting a bigger dose of inflation into the economy, followed by a higher level of unemployment as the next step”

As a piece of text it only makes sense to modern ears if there is a missing sentence: that we failed to raise taxes and cut spending in a boom. Far from a denunciation of Keynesian countercyclical fiscal policy, it was an admission that politicians could not be trusted with operating such a policy, essentially because they imagined they could beat the Phillips curve using direct controls on prices and incomes. The fact that fiscal rather than (government controlled) interest rate policy was being used as the countercyclical instrument here was incidental.

Reading this book also confirmed to me how misleading the Friedman (1977) story of the Great Inflation was, at least applied to the UK. These were not policymakers trying to exploit a permanent inflation output trade-off, but policymakers trying to escape the discipline of any kind of Phillips curve. They were also policymakers who had not fully adjusted to a floating rate world, and the IMF crisis was superficially a failed attempt to manage the exchange rate. More fundamentally it was a reaction by the markets to a government that was not doing enough to bring down an inflation rate that was way too high. The IMF loan was useful both as a means of paying back existing foreign currency loans and as a means of getting fiscal policy, and therefore demand, to the level required to reduce inflation.

Although inflation fell steadily until 1979, another boom in 1978 together with rising oil prices reversed this, and through the winter of discontent helped elect Margaret Thatcher. Unfortunately the IMF crisis, and the 1970s more generally, is another example of the consequences of politicians, in this case particularly those on the left, not accepting basic lessons from economics.

Wednesday, 16 November 2016

Macroeconomics and Ideology

Jo Michell has a long post in which he enters a debate between Ann Pettifor and myself about the role of mainstream macroeconomics in austerity. Ann wanted to pin a large part of the blame for austerity on mainstream macroeconomics, and Jo largely sides with her. Now I have great respect for Jo’s attempts to bridge the divide between mainstream and heterodox economics, but here he is both wrong about austerity and paints a rather distorted picture of the history of macroeconomic thought.

Let’s start with austerity. I think he would agree that the consensus model of the business cycle among mainstream Keynesians for the last decade or two is the New Keynesian (NK) model. That model is absolutely clear about austerity. At the zero lower bound (ZLB) you do not cut government spending. It will reduce output. No ifs or buts.

So to argue that mainstream macro was pushing for austerity you would have to argue that mainstream economists thought the NK model was deficient in some important and rather fundamental respect. This was just not happening. One of, if not the, leading macroeconomists of the last decade or two is Michael Woodford. His book is something of a bible for those using the New Keynesian model. In June 2010 he wrote “Simple Analytics of the Government Expenditure Multiplier”, showing that increases in government spending could be particularly effective at the ZLB. The interest in that paper for those working in this area was not that this form of fiscal policy would have some effect - that was obvious to those like myself working on monetary and fiscal policy using the NK model - but that it could generate very large multipliers.

This consensus that austerity would be damaging and fiscal stimulus useful was a major reason why we had fiscal stimulus in the UK and US in 2009, and why even the IMF backed fiscal stimulus in 2009. There were some from Chicago in particular who argued against that stimulus, but as bloggers like DeLong, Krugman and myself pointed out, they simply showed up their ignorance of the NK model. Krugman in particular was very familiar with ZLB macro, having done some important work on Japan’s lost decade.

What changed this policy consensus in 2010 was not agitation from the majority of mainstream academic macroeconomists, but two other events: the Eurozone crisis and the election of a right wing government in the UK and right wing gains in the US Congress.

Jo tries to argue that because discussion of the ZLB was not in the macroeconomic textbooks, it was not part of the consensus. But textbooks are notorious for being about 30 years out of date, and most still base their teaching around IS/LM rather than the NK model. Now it might just be possible that right wing policy makers were misled by the consensus assignment taught in these textbooks, and that it was just a coincidence that these policy makers chose spending cuts rather than tax increases (and later tax cuts!), but that seems rather unlikely. You do not have to be working in the field to realise that the pre-financial crisis consensus for using changes in interest rates as the stabilisation tool of choice kind of depended on being able to change interest rates!

Moving on from austerity, Jo’s post also tries to argue that mainstream macroeconomics has always been heavily influenced by neoliberal ideology. To do that he gives a short account of the post-war history of macroeconomic thought that has Friedman, well known member of the Mont Pelerin society, as its guiding light, at least before New Classical economics came along. There is so much that could be said here, but let me limit myself to two points.

First, the idea that Keynesian economics was about short term periods of excess or deficient demand rather than permanent stagnation pre-dated Friedman, and goes back to the earliest attempts to formalise Keynesian economics. It was called the neoclassical synthesis. It was why the Keynesian Bob Solow could give an account of growth theory that assumed full employment.

Second, the debates around monetarism in the 1970s were not about the validity of that Keynesian model, but about its parameters and policy activism. Friedman’s own contributions to macroeconomic theory, such as the permanent income hypothesis and the expectations augmented Phillips curve, did not obviously steer theory in a neoliberal direction. His main policy proposal, targeting the money supply, lost out to policy activism using changes to interest rates. And Friedman certainly did not approve of New Classical views on macroeconomic policy.

Jo may be on firmer ground when he argues that the neoliberal spirit of the 1980s might have had something to do with the success of New Classical economics, but I do not think it was at all central. As I have argued many times, the New Classical revolution was successful because rational expectations made sense to economists used to applying rationality in their microeconomics, and once you accept rational expectations then there were serious problems with the then dominant Keynesian consensus. I suppose you could try to argue a link between the appeal of microfoundations as a methodology and neoliberalism, but I think it would be a bit of a stretch.

This brings me to my final point. Jo notes that I have suggested an ideological influence behind the development of Real Business Cycle (RBC) theory, but asks why I stop there. He then writes
“It’s therefore odd that when Simon discusses the relationship between ideology and economics he chooses to draw a dividing line between those who use a sticky-price New Keynesian DSGE model and those who use a flexible-price New Classical version. The beliefs of the latter group are, Simon suggests, ideological, while those of the former group are based on ideology-free science. This strikes me as arbitrary. Simon’s justification is that, despite the evidence, the RBC model denies the possibility of involuntary unemployment. But the sticky-price version – which denies any role for inequality, finance, money, banking, liquidity, default, long-run unemployment, the use of fiscal policy away from the ZLB, supply-side hysteresis effects and plenty else besides – is acceptable.”

This misses a crucial distinction. The whole rationale of RBC theory was to show that business cycles were just an optimal response to technology shocks in a market clearing world. This would always deny an essential feature of business cycles, which is involuntary unemployment (IU). It is absurd to argue that NK theory denies all the things on Jo’s list. Abstraction is different from denial. The Solow model does not deny the existence of business cycles, but just assumes (rightly or wrongly) that they are not essential in looking at aspects of long term economic growth. Jo is right that the very basic NK model does not include IU, but there is nothing in the NK model that denies its possibility. Indeed it is fairly easy to elaborate the model to include it.

Why does the very basic NK model not include IU? The best thing to read on this is Woodford’s bible, but the basic idea is to focus on a model that allows variations in aggregate demand to be the driving force behind business cycles. I happen to think that is right: that is what drives these cycles, and IU is a consequence. Or to put it another way, you could still get business cycles even if the labour market always cleared.

To suggest, as Jo seems to, that the development of NK models had something to do with the Third Way politics of Blair and Bill Clinton is really far fetched. It was the inevitable response to RBC theory and its refusal to incorporate rigid prices, for which there is again strong evidence, and its inability to allow for IU.

That’s all. I do not want to talk about globalisation and trade theory partly because it is not my field, but also because I suspect there is some culpability there. I would also never want to suggest, as Jo implies I would, that ideological influence is confined to the New Classical part of macroeconomics. But just as it is absurd to deny any such influence, it is also wrong to imagine that the discipline and ideology are inextricably entwined. 2010 austerity is a proof of that.







Sunday, 26 July 2015

The F story about the Great Inflation

Here F could stand for folk. The story that is often told by economists to their students goes as follows. After Phillips discovered his curve, which relates inflation to unemployment, Samuelson and Solow in 1960 suggested this implied a trade-off that policymakers could use. They could permanently have a bit less unemployment at the cost of a bit more inflation. Policymakers took up that option, but then could not understand why inflation didn’t just go up a bit, but kept on going up and up. Along came Milton Friedman to the rescue, who in a 1968 presidential address argued that inflation also depended on inflation expectations, which meant the long run Phillips curve was vertical and there was no permanent inflation unemployment trade-off. Policymakers then saw the light, and the steady rise in inflation seen in the 1960s and 1970s came to an end.

This is a neat little story, particularly if you like the idea that all great macroeconomic disasters stem from errors in mainstream macroeconomics. However even a half awake student should spot one small difficulty with this tale. Why did it take over 10 years for Friedman’s wisdom to be adopted by policymakers, while Samuelson and Solow’s alleged mistake seems to have been adopted quickly? Even if you think that the inflation problem only really started in the 1970s that imparts a 10 year lag into the knowledge transmission mechanism, which is a little strange.

However none of that matters, because this folk story is simply untrue. There has been some discussion of this in blogs (by Robert Waldmann in particular - see Mark Thoma here), and the best source on this is another F: James Forder. There are papers (e.g. here), but the most comprehensive source is now his book, which presents an exhaustive study of this folk story. It is, he argues, untrue in every respect. Not only did Samuelson and Solow not argue that there was a permanent inflation unemployment trade-off that policymakers could exploit, policymakers never believed there was such a trade-off. So how did this folk story arise? Quite simply from another F: Friedman himself, in his Nobel Prize lecture in 1977.

Forder discusses much else in his book, including the extent to which Friedman’s 1968 emphasis on the importance of expectations was particularly original (it wasn’t). He also describes how and why he thinks Friedman’s story became so embedded that it became folklore. The reason I write about this now is that I’m in the process of finishing a paper on the knowledge transmission mechanism and the 2010 switch to austerity, and I wanted to look back at previous macroeconomic crises.

If it wasn’t a belief in a long run inflation unemployment trade-off, what was it that allowed inflation to gradually rise during those two decades? Forder has a lot to say on this, but the following is my own take. I think two things were critical: the idea that demand management was primarily designed to achieve full employment, and that full employment had primacy over the objective of price stability. Although more and more economists over that period began to see the policy problem within a Phillips curve framework, many still hoped that other measures like prices and incomes policies (in the UK in particular but also in the US) could override the Phillips curve logic. The primacy of the full employment objective meant the problem was often described as ‘cost-push inflation’ rather than a rise in the natural rate of unemployment.

If you find this hard to imagine, think about historians in some possible future - say 2050 - discussing the current period. Suppose that by then nonlinearities in the Phillips curve and the power of the inflation target in anchoring inflation expectations have become firmly entrenched in mainstream thinking, and that partly as a result the inflation target has been replaced by a target for the level of nominal income. With the benefit of hindsight these historians will be amazed to calculate the extent to which resources were lost decades earlier because policy had become fixated on a 2% inflation target and budget deficits. They will recount with amusement the number of economists and policymakers who thought that the way to deal with deficient demand was ‘structural reform’. Rather than construct folk tales, they will observe that even when most economists realised what was required to avoid being misled again, policymakers were extremely reluctant to change the inflation target.


Friday, 18 July 2014

Further thoughts on Phillips curves

In a post from a few days ago I looked at some recent evidence on Phillips curves, treating the Great Recession as a test case. I cast the discussion as a debate between rational and adaptive expectations. Neither is likely to be 100% right of course, but I suggested the evidence implied rational expectations were more right than adaptive. In this post I want to relate this to some other people’s work and discussion. (See also this post from Mark Thoma.)

The first issue is why look at just half a dozen years, in only a few countries. As I noted in the original post, when looking at CPI inflation there are many short term factors that may mislead. Another reason for excluding European countries which I did not mention is the impact of austerity driven higher VAT rates (and other similar taxes or administered prices), nicely documented by Klitgaard and Peck. Surely all this ‘noise’ is an excellent reason to look over a much longer time horizon?

One answer is given in this recent JEL paper by Mavroeidis, Plagborg-Møller and Stock. As Plagborg-Moller notes in an email to Mark Thoma: “Our meta-analysis finds that essentially any desired parameter estimates can be generated by some reasonable-sounding specification. That is, estimation of the NKPC is subject to enormous specification uncertainty. This is consistent with the range of estimates reported in the literature….traditional aggregate time series analysis is just not very informative about the nature of inflation dynamics.” This had been my reading based on work I’d seen.

This is often going to be the case with time series econometrics, particularly when key variables appear in the form of expectations. Faced with this, what economists often look for is some decisive and hopefully large event, where all the issues involving specification uncertainty can be sidelined or become second order. The Great Recession, for countries that did not suffer a second recession, might be just such an event. In earlier, milder recessions it was also much less clear what the monetary authority’s inflation target was (if it had one at all), and how credible it was.

How does what I did relate to recent discussions by Paul Krugman? Paul observes that recent observations look like a Phillips curve without any expected inflation term at all. He mentions various possible explanations for this, but of those the most obvious to me is that expectations have become anchored because of inflation targeting. This was one of the cases I considered in my earlier post: that agents always believed inflation would return to target next year. So in that sense Paul and I are talking about the same evidence.

Before discussing interpretation further, let me bring in a paper by Ball and Mazumder. This appears to come to completely the opposite conclusion to mine. They say “we show that the Great Recession provides fresh evidence against the New Keynesian Phillips curve with rational expectations”. I do not want to discuss the specific section of their paper where they draw that conclusion, because it involves just the kind of specification uncertainties that Mavroeidis et al discuss. Instead I will simply note that the Ball and Mazumder study had data up to 2010. We now have data up to 2013. In its most basic form, the contest between the two Phillips curves is whether underlying inflation is now higher or lower than in 2009 (see maths below). It is higher. So to rescue the adaptive expectations view, you have to argue that underlying inflation is actually lower now than in 2009. Maybe it is possible to do that, but I have not seen that done.

However it would be a big mistake to think that the Ball and Mazumder paper finds support for the adaptive expectations Friedman/Phelps Phillips curve. They too find clear evidence that expectations have become more and more anchored. So in this sense the evidence is all pointing in the same way.

So I suspect the main differences here come from interpretation. I’m happy to interpret anchoring as agents acting rationally as inflation targets have become established and credible, although I also agree that it is not the only possible interpretation (see Thomas Palley and this paper in particular). My interpretation suggests that the New Keynesian Phillips curve is a more sensible place to start from than the adaptive expectations Friedman/Phelps version. As this is the view implicitly taken by most mainstream academic macroeconomics, but using a methodology that does not ensure congruence with the data, I think it is useful to point out when the mainstream does have empirical support.


Some maths

Suppose the Phillips curve has the following form:

p(t) = E[p(t+1)] + a.y(t) + u(t)

where ‘p’ is inflation, E[..] is the expectations operator, ‘a’ is a positive parameter on the output gap ‘y’, and ‘u’ is an error term. We have two reference cases:

Static expectations: E[p(t+1)] = p(t-1)

Rational expectations: E[p(t+1)] = p(t+1) + e(t+1)

where ‘e’ is the error on expectations of future inflation and is random. Some simple maths shows that under static expectations, negative output gaps are associated with falling inflation, while under rational expectations they are associated with rising inflation. If we agree that between 2009 and today we have had a series of negative output gaps, we just need to ask whether underlying inflation is now higher or lower than in 2009. 
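A quick numerical sketch may help fix ideas. The slope, target and output gap path below are illustrative assumptions, not estimates; the point is only the sign of the change in inflation under each reference case.

```python
# Illustrative simulation of the two reference cases.
# All numbers are assumptions for illustration, not estimates.
a = 0.5            # slope on the output gap
target = 2.0       # credible inflation target (per cent)

# A Great-Recession-style path: a sharp negative output gap that slowly closes.
gaps = [-4.0, -3.0, -2.0, -1.5, -1.0, -0.5]

# Static expectations: E[p(t+1)] = p(t-1), so p(t) = p(t-1) + a*y(t).
p_static = [target]     # start at target in the year before the shock
for y in gaps:
    p_static.append(p_static[-1] + a * y)

# Anchored expectations: agents always expect a return to target,
# so p(t) = target + a*y(t).
p_anchored = [target + a * y for y in gaps]

print(p_static[1:])   # falls further every year the gap stays negative
print(p_anchored)     # lowest in the worst year, then rises back toward target
```

Under static expectations inflation keeps falling for as long as the gap is negative; under anchored expectations it is rising back towards target over the same period, which is exactly the contrast the test exploits.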



Monday, 14 July 2014

Has the Great Recession killed the traditional Phillips Curve?

Before the New Classical revolution there was the Friedman/Phelps Phillips Curve (FPPC), which said that current inflation depended on some measure of the output/unemployment gap and the expected value of current inflation (with a unit coefficient). Expectations of inflation were modelled as some function of past inflation (e.g. adaptive expectations) - at its simplest just one lag in inflation. Therefore in practice inflation depended on lagged inflation and the output gap.

After the New Classical revolution came the New Keynesian Phillips Curve (NKPC), which had current inflation depending on some measure of the output/unemployment gap and the expected value of inflation in the next period. If this was combined with adaptive expectations, it would amount to much the same thing as the FPPC, but instead it was normally combined with rational expectations, where agents made their best guess at what inflation would be next period using all relevant information. This would include past inflation, but it would include other things as well, like prospects for output and any official inflation target.

Which better describes the data? The great attraction of the FPPC is that it can describe stagflation. We have a boom, which while it lasts steadily raises inflation. When the boom comes to an end, inflation stabilises, but at a much higher level than it began. So policy has to engineer a recession to get inflation back down again: a period in which above average unemployment is accompanied by above average inflation, which we call stagflation. If over this period we had had credible independent central banks setting inflation targets, the NKPC would not give us stagflation: when the boom came to an end, inflation would return to target. (For more explanation, see this post.) The trouble is we did not have inflation targeting during this period, so it is difficult to tell whether stagflation is evidence against the NKPC. (As an example of this ambiguity, see this survey of the empirical evidence by Nason and Smith. This enabled Robert Gordon to be quite supportive of the FPPC in 2009.)

The Great Recession could provide a much better test. In some countries output fell sharply in 2009, but has since seen a slow but steady recovery, such that the output gap today is less than it was in 2009. With the FPPC, inflation should have been steadily falling over this period, reaching its lowest level today. So if we plotted the output gap (x axis) and inflation (y axis) together, we should see a line pointing South East. With the NKPC, we can consider two polar cases. In the first, agents fully anticipate that the recovery will be slow, so we will get a sharp immediate fall in inflation, but subsequently inflation will rise towards the target. That will give us a line pointing North East. In the second, agents keep thinking inflation will return to target next year. That also gives us a line pointing North East, but it is flatter. [Postscript - I should have added that this last gives us what Krugman calls the Neo-paleo-Keynesian Phillips curve.]
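The direction-of-the-line argument can be sketched numerically. Everything below (slope, target, gap path) is an illustrative assumption rather than data; it just shows why a slow recovery traces a South East line under the adaptive expectations FPPC and a North East line under an anchored expectations NKPC.

```python
# Illustrative sketch: which way does the (output gap, inflation) line point?
# All numbers are assumptions for illustration, not estimates or data.
a, target = 0.5, 2.0
gaps = [-4.0, -3.0, -2.0, -1.0]   # slow recovery: the gap shrinks over time

# FPPC with adaptive expectations: p(t) = p(t-1) + a*y(t),
# so inflation keeps falling while the gap is negative.
fppc = []
p = target
for y in gaps:
    p += a * y
    fppc.append(p)

# NKPC with anchored expectations: p(t) = target + a*y(t),
# so inflation recovers along with the gap.
nkpc = [target + a * y for y in gaps]

# Plotting gap (x axis) against inflation (y axis) in time order:
# FPPC points run down and to the right (South East),
# NKPC points run up and to the right (North East).
print(list(zip(gaps, fppc)))
print(list(zip(gaps, nkpc)))
```

So with the gap on the x axis, the FPPC predicts final-year inflation below its 2009 level, while the NKPC predicts it above: a sign test that survives most of the specification uncertainty discussed earlier.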

Here is this plot for four countries, using OECD estimates for the output gap on the horizontal axis, consumer price inflation less 2% on the vertical axis, and OECD forecasts for 2014 and 2015. I’ve chosen these countries simply because in most of Europe we had a double dip recession, which is a more complicated experiment. If you do not like the idea of including forecasts, just ignore the last two points for each country.



The lines point North East, not South East. This gives more support to the rational expectations NKPC than the adaptive expectations FPPC. To take just one example, US inflation in 2013 is higher than it was in 2009, which is consistent with the NKPC. The traditional FPPC, on the other hand, would suggest that after a string of negative output gaps, US inflation should be a lot lower in 2013 than it was in 2009.

OK, now the caveats. Commodity prices will have an important influence on the CPI, and these are not part of either simple Phillips curve story. They may help explain the blip in inflation around 2011 in some countries, but they also helped depress inflation in 2009. Exchange rate changes will also matter. The simple Phillips curve also takes no account of non-linearity caused by a reluctance to cut nominal wages. And of course estimates of the output gap may be wrong.

In the case of Japan, we also had a recent increase in the inflation target. This may explain the forecast upward shift in inflation in 2014/5. If it does, that is clear evidence in favour of rational rather than adaptive expectations. 

All these caveats point to the need to do more empirical analysis. Nevertheless we can see why some more elaborate studies (like this for the US) can claim that recent inflation experience is consistent with the NKPC. It seems much more difficult to square this experience with the traditional adaptive expectations Phillips curve. As I suggested at the beginning, this is really a test of whether rational expectations is a better description of reality than adaptive expectations. But I know the conclusion I draw from the data will upset some people, so I look forward to a more sophisticated empirical analysis showing why I’m wrong.


Friday, 9 May 2014

Economists and methodology

Where I argue that mainstream economics should think about the methodology of their subject more, but that to study this methodology it is much better to look at what economists actually do than to look at their (occasional) writing on the subject.

Methodology? Why should I worry about that? It’s what all those heterodox people do - lots of ‘isms’ and ‘ologies’ that are totally incomprehensible! Unlike those guys, I get on with doing real economics. After all, doctors do not spend large amounts of their time worrying about the methodology of medicine. So why should economists?

This is a caricature, but not far off the mark for many economists. (When I refer to just economists/economics from now on, I mean mainstream.) Perhaps more of a concern is that very few economists write much about methodology. This would be understandable if economics was just like some other discipline where methodological discussion was routine. This is not the case. Economics is not like the physical sciences for well known reasons. Yet economics is not like most other social sciences either: it is highly deductive, highly abstractive (in the non-philosophical sense) and rarely holistic. This is all nicely expressed in the title of what I think is one of the best books written on economic methodology: Dan Hausman’s ‘The inexact and separate science of economics’.

This is a long winded way of saying that the methodology used by economics is interesting because it is unusual. Yet, as I say, you will generally not find economists writing about methodology. One reason for this is the one implied by my opening paragraph: a feeling that the methodology being used is unproblematic, and therefore requires little discussion.

I cannot help giving the example of macroeconomics to show that this view is quite wrong. The methodology of macroeconomics in the 1960s was heavily evidence based. Microeconomics was used to suggest aggregate relationships, but not to determine them. Consistency with the data (using some chosen set of econometric criteria) often governed what was or was not allowed in a parameterised (numerical) model, or even a theoretical model. It was a methodology that some interpreted as Popperian. The methodology of macroeconomics now is very different. Consistency with microeconomic theory governs what is in a DSGE model, and evidence plays a much more indirect role. Now I have only a limited knowledge of the philosophy of science, and have only published one paper on methodology, but I know enough to recognise this as an important methodological change. Yet I find many macroeconomists just assume that their methodology is unproblematic, because it is what everyone mainstream currently does.  

This reluctance by economists to investigate their own methodology has a consequence which is the main subject of this post. It occurred to me when I recently re-read a methodology paper entitled “Two Responses to the Failings of Modern Economics: the Instrumentalist and the Realist” by Tony Lawson. The paper, written in 2001, starts on the first page with “There is little doubt that the modern discipline of economics is in a state of some disarray.” This is a strong claim. For example, I have previously written that the influence of economists within the UK government at that time may have been at an all time high, and as this account (pdf) shows, economics remains very influential within the civil service. Where is the evidence for the claim about disarray? The answer in this paper is a selection of quotes from economists writing about aspects of their subject. Now any economist would immediately wonder how representative these quotes were. But more fundamentally, are expressions of concern within a discipline equivalent to it being ‘in disarray’? (For example, see the first quote from a physicist here. Would this be a good basis for a paper that asserts that physics is in disarray?)

Even if we ignore these concerns, given the unfamiliarity of most economists with methodological discussion, it may be unwise to use what economists write about their discipline as evidence about what economists actually do. The classic example of an economist writing about methodology is Friedman’s Essays in Positive Economics. This puts forward an instrumentalist view: the idea that the realism of assumptions does not matter; it is results that count.

Yet does instrumentalism describe Friedman’s major contributions to macroeconomics? Well one of those was the expectations augmented Phillips curve. Before his famous 1968 presidential lecture, the Phillips curve had related wage inflation to unemployment, and if expectations about inflation were included (in some way), the coefficient on this expectations term was often empirically determined (see above) and was often less than one. Friedman argued that the coefficient on expected inflation should be one. His main reason for doing so was not that such an adaptation predicted better, but because it was based on better assumptions about what workers were interested in: real rather than nominal wages. In other words, it was based on more realistic assumptions. (For a good discussion of the history of the ‘expectations critique’, see this paper by James Forder.)

Economists do not think enough about their own methodology. This means economists are often not familiar with methodological discussion, which implies that using what they write on the subject as evidence about what they do can be misleading. Yet most methodological discussion of economics is (and should be) about what economists do, rather than what they think they do. That is why I find that the more interesting and accurate methodological writing on economics looks at the models and methods economists actually use, rather than relying on selected quotations.

Afterthought

There is a nice self-confirming element to this post. Someone is bound to tell me that, in my comments on Friedman, I do not really understand what instrumentalism means. And that, of course, just goes to make my point that you should not rely on what economists say about their own methodology!


Wednesday, 21 August 2013

Left, Right and Centre: some recent observations

I’m sure many political scientists hate the way descriptions in politics so often amount to a position on a straight line. It is one-dimensional. There is the obvious aggregation problem: should a person or political party that is left of centre on issue X, and right of centre on issue Y, be described as generally in the middle of the political spectrum? How do we weight the importance of issues X and Y? But there is also a problem about whether positions are relative or absolute. This matters in part because the perception among many is that being near the middle is good (‘moderate’), and being away from the centre is bad (‘extreme’).

Three recent posts made me think about this. The first, by Noah Smith, is part of a current economics blog topic on Milton Friedman. I happen to pretty much agree with everything Noah says, but have absolutely no expertise on this - on matters of who thought what decades ago, I am curious but not interested enough to do any work. (Much better to leave it to David Glasner or Brad DeLong.) However it did strike me as obviously relevant to what has happened to the political centre, at least in the US.

One of Paul Krugman’s frequent complaints is that political commentators define the political centre as being somewhere between the Democrats and Republicans, regardless of the positions that each side take. He argues that the Republicans today are much more right wing than a generation ago, so that under this definition the centre today becomes what was right wing back then. This matters in part because the presumption is that the centre is the place for commentators to be.

Now one reaction might be: well he would say that, wouldn’t he. He is just trying to make his own views, which are ‘obviously’ to the left, sound more centrist than they actually are. But on economics at least, how politicians see Milton Friedman’s views provides some sort of objective yardstick. As Noah points out, some of Friedman’s positions would now be regarded as dangerously left wing by a good part of today’s Republican Party, whereas they were not so regarded 30 years ago.

The second post was my own, and the comments on it. It was about the increase in support for parties away from the centre in the UK and Netherlands, which I thought could be related to the recessions and austerity there, and more particularly to falling real wages. (Incidentally Robert Reich wrote a post on the same day making a similar argument about US politics.) I received many interesting comments on my post, and I want to thank everyone involved. A persistent theme was that I was wrong to call UKIP and the Freedom Party ‘far-right’, and imply any kind of equivalence to fascism.

I deliberately did not use the term fascist. Nor did I intend to imply that UKIP or the Freedom Party was fascist, or indeed that they were comparable - except to the extent that they are to the right of their respective and longer established mainstream right-of-centre parties. I used the term ‘far-right’ to denote this, as commentators often do, but I appreciate that many people read that as short for ‘furthest-right’ rather than the ‘farther-right’ that I had in mind.

I think many of these comments raised important issues. For example, would it make more sense to characterise UKIP and perhaps others not as a point on a left/right spectrum, but instead as specific issue parties? But the comments also revealed how sensitive people are to where the party they may support or sympathise with is placed on the political spectrum, and the obvious reason why. The endpoints of the political spectrum are typically defined by fascism and communism, and therefore the farther away you appear to be from those extremes, the better. Whether that is a deficiency or an advantage of this simple left/right model is an interesting question.

Why this may have a more substantial importance is illustrated by the third post, which involves think tanks in the UK. The right of centre think tank, the Centre for Policy Studies (CPS), had publicised its study into BBC bias, based in part on how the BBC uses different think tanks. [1] Part of their argument is that the BBC often calls left-of-centre think tanks ‘independent’, but mentions the ideological position of right wing think tanks. One of the think tanks it defines as left-of-centre is the Social Market Foundation (SMF). Yet, as this post from SMF complains, the SMF do not think of themselves as left-of-centre, and they provide evidence about why that description is wrong.

Now I have worried in the past about whether some think tanks are in the business of producing propaganda instead of being in the business of thinking. So I cannot resist quoting the end of SMF’s post. “Especially on a significant issue of public debate - ie. public service broadcasting - think tanks owe a duty to follow the evidence. Or are CPS doing something slightly different than the normal work of a think tank? Without more evidence, I won't stick any other name on them for now.” The post is both short and amusing (unless you work for the CPS), so please have a look. [2]

Yet putting the thinking versus propaganda issue aside, this little tiff does illustrate why these issues can have immediate relevance. An organisation like the BBC tries very hard to be balanced. How you achieve balance depends in many cases on a judgement about where positions or organisations are on the left/right spectrum. The spectrum becomes like a balance scale, with the pivot right in the middle. So if you can persuade an organisation like the BBC that the mid-point is not where they thought it was, you can significantly change the content of their reporting and coverage. Or, even more seriously, if you can convince others that the BBC’s judgement is wrong, you can threaten their future.

If you think I’m being alarmist in this respect, here is how the director of the CPS ends his comment on their own research. “The most important [question] is why should everyone in the UK be forced to pay a poll tax to support an institution which has so conspicuously failed for so long to obey its founding principle of impartiality?” A serious charge if true, but is it true? It is clear that governments (of whatever colour) put a lot of pressure on the BBC, although measuring its effect is very difficult (although sometimes the circumstantial evidence is strong).

However some simple things can be measured, like how much coverage different political parties get. Of course coverage always tends to be biased towards the party in power. But, as Justin Lewis of Cardiff University’s School of Journalism notes, one study suggests that whereas in 2007 the margin between the Labour government and Conservative opposition was less than 2 to 1, the margin in 2012 favoured Cameron over Miliband by more than 3 to 1, with a ratio of more than 4 to 1 between Government and Shadow Ministers. So on this count, it is Labour, not the Conservatives or the right, who should be claiming that the BBC is biased. Are we in danger of entering that state of affairs where everyone just ‘knows’ that the BBC is biased to the left, just as everyone ‘knows’ that there is a liberal bias in the US media, without bothering with that annoying stuff called evidence?

Now one response to this emerging state of affairs is to ask why the left does not bang on about media bias the way the right does. Although, with a coverage ratio of 1 to 4, perhaps they do, and we just do not get to hear about it.


[1] The publicity appeared to predate publication of the report, which seemed like a strange thing to do.


[2] The blog response from the CPS is also worth reading. As far as I can see, their reason for characterising the SMF as left of centre is that their objective is to “champion policy ideas which marry markets with social justice and take a pro-market rather than free-market approach.” So social justice in the context of a pro-market approach is left wing! One rather telling comment on the SMF post suggested that the CPS used transparency of funding sources as their guide to who was left or right wing.