So I had a look at the FT’s coverage of the inflation report, and it brought up one of my pet peeves about the commentary on the crisis, namely, measures of economic slack.
The Bank of England apparently thinks that there is around 1.5% of slack in the economy. My first pet hate is that “slack” is not adequately defined in this context. Does this mean that Carney thinks RGDP will settle onto a new trend line only 1.5% higher than the current level, growing at around 2% a year? Or does it simply mean that, at the current moment, companies estimate that with their existing personnel and facilities they could only increase production by 1.5%?
This is important because there are two types of “slack”: the measure of “could we do more right now”, and the measure of “given greater demand, could firms deploy more employees, capital, and technology to increase supply”. I am pretty sure the BOE’s estimate is about the former, but the macroeconomically important version is the latter. In other words, supply side constraints could stay soft for a while if technological improvements since 2008 have enabled greater production but simply haven’t been deployed due to weak demand.
Let’s take a quick look at RGDP (constant prices):
Does this look like an economy with only 1.5% slack? Is it not possible that in the medium term we might recover nearly all of our lost productivity? Might we not find that as demand grows aggregate supply just continues to quietly grow so that the BOE’s measure remains stuck at 1.5% while GDP grows 4-5% a year for a number of years?
Stolen from an ONS publication: does this look like an economy with only 1.5% slack? Are you telling me that, after 7 years of technological progress, the BOE expects slack in GDP to be exhausted before productivity per worker has even regained its 2007 high?
Are we really to expect that some (minor) institutional differences will mean that UK workers will not, over the medium term, enjoy the same advances that have allowed US productivity to grow 20% over the same period? The UK is structurally similar to the US, but the structural differences might mean that in the UK we get more hiring first and productivity growth later, while in the US they get more productivity first and then more hiring. That certainly seems more plausible to me than that our institutional differences will prevent the UK from benefiting from the same advances as the US in the long run. In that case, the technology already exists for a 20% rise in the productivity of UK workers.
The real problem here is that the BOE is using a short-term definition of slack, one which assumes nothing about the future investment, employment, and capital improvement programmes of enterprises. It just asks about capacity utilisation right now, which, immediately following a recession, is a sensible enough thing to do. The problem is that we have now moved into the medium term. There has been technological progress in that time, and the fact that it is waiting in the wings means that we should not regard current utilisation as in any way a meaningful indicator of “slack”.
I predict that productivity per worker will recover its century-long trend line of 2% a year growth. In that case, “slack” in the economy is more like 15% than 1.5%. Capital spending, training workers on new technology, and the rise of robotics and machine learning will do the heavy lifting over the next 3-5 years, and we can expect strong growth alongside benign inflation until we approach this productivity trend line.
Abenomics is having a huge effect on output, which is somewhat masked by demographic decline in the working population.
So when will the ECB realise that QE is an effective policy tool which increases output during a depression, and get on with saving the EZ?
It didn’t exist. No, seriously. There was no boom in housing construction.
Construction of US housing was exactly what was needed to maintain the housing-population ratio in the face of increased population growth. You cannot have an “unsustainable boom” without oversupply. If you are building exactly the amount that you need, and prices are rising anyway, that is the very definition of a sustainable boom.
It’s true that housing prices rose and then fell, but they fell by exactly the amount you would expect given the peak-to-trough fall in national income. What does that prove, except that when people are poorer they will pay less for housing? A change in prices is never the cause of anything; it is always the result of something. The correct way to reason is not “why did an asset crash cause a recession” but “what caused asset prices to crash”.
Aggregate demand is the only story. The world’s central banks let it fall off a cliff in 2008, when they could have prevented it, purely because they were focused on inflation and forgot about nominal income. They assumed that because, historically, inflation and NGDP had been pretty well correlated, controlling inflation would automatically keep aggregate demand at a healthy level. They, in fact, made exactly the mistake that others accuse the commercial banks of having made – they showed too much trust in their theory and didn’t have enough prudence. If the central banks had been watching aggregate demand, this recession could have been a non-event.
I always feel that these hearings are usually nothing more than a chance for political leaders to grandstand and display their ignorance. It is almost always easy for a technical expert to come up with an answer which sounds impressive while not really saying much, and which most people will be unable to decipher. However, there was one great question, from Klobuchar, which deserved a follow-up (this was well covered by Yglesias):
[Mr Bernanke]… what would you have done differently under a single mandate to target inflation
and Ben Bernanke hummed and hawed and said, essentially, that he would have done nothing differently. This seems a pretty damning indictment of BB’s policy: if, with unemployment high and inflation consistently below target, you are not going to take the opportunity to do extra stimulus to lower unemployment, what exactly is the point of a dual mandate?
The dual mandate exists because central bankers have known since forever that a little extra demand, which usually creates a little extra inflation, is helpful in lowering unemployment. This is the wisdom of the Phillips curve, which, for all its flaws, at least underlines the truth that very low inflation is nearly always correlated with high unemployment.
Of course, perhaps BB was aiming an under-the-radar shot at his FOMC hawks. With the board that he has, perhaps he is unable to force through policy expansionary enough for him to feel he is fulfilling his legal mandate. But he could have chosen to put forward the hawks’ argument, which would be that excess stimulus would not help unemployment fall any faster. This is a nice argument for this type of hearing, as there is clearly some empirical limit on how fast unemployment can fall – no matter how expansionary, the Fed could not restore full employment in one day. I am completely certain that it could do a hell of a lot better than it is doing, but I accept that falls in unemployment of more than 2% a year are fairly implausible, and that attempting them really might cause some inflation.
Nevertheless, I cannot help but think that BB’s decision to say nothing was itself an overtly political statement about his feelings on the matter. He is required to go and defend Fed policy, even if he was on the losing side of the FOMC consensus – just as Mervyn King is forced to talk about how the committee feels that the “costs and risks” of further QE outweigh the benefits, even though we know he voted for more QE.
Finally, we can speculate on some nice follow-up questions that I might have asked:
BB, do you believe that a more expansionary policy would cause unemployment to drop faster, while keeping inflation expectations steady?
He would almost be forced to answer yes, and now he would be in a really tight spot. Then we could ask:
BB, given that you believe that expansionary policy would bring down unemployment faster, while maintaining inflation expectations, do you believe that your policy is legal?
Or we could ask this instead:
BB, it is my understanding that the dual mandate exists because of the well-known relationship between higher inflation and lower unemployment. Thus, logically, a dual mandate must lead to higher inflation than a single mandate whenever unemployment is significantly above its natural rate. Do you agree?
Or how about this one:
BB, which members of the FOMC do we need to impeach to enable you to legally fulfil your mandate?
Wouldn’t that have made great TV!
So I did not follow these hearings in great detail, but one thing did catch my ear. Senator McCain accused Apple of pernicious practices which violated the “spirit of the law”. I really have no time for this argument. If Apple is interpreting the law in a manner which the courts uphold as justified, then it is in the clear. In fact, I would say this is an excuse that politicians use for writing law so riddled with contradictions and conflicts as to be essentially garbage.
It is literally the job of legislators to write laws which are clear, concise, and unambiguous. It is a basic requirement of competent government that the powers that be should be capable of writing law that does what they intend it to do. Claiming that a company “violated the spirit if not the letter” is just throwing a tantrum because you aren’t willing to admit your own incompetence. It’s childish and disingenuous.
Sometimes it appears that the EU is run by children. The rest of the time it is obviously true. This is one of the latter occurrences. Bundesbank President Jens Weidmann, the anti-hero of the European Depression, has thrown his toys out of the pram. After being out-voted 22-1 on the ECB governing council over instituting the ECB’s bond-buying programme, he stamped his feet and yelled “I’m telling Mummy”. ‘Mummy’, in this case, being the German Constitutional Court.
The OMT programme is the one truly successful programme the ECB has produced. The mere threat of its existence, announced last June, has calmed sovereign bond markets. Jens Weidmann apparently believes that it constitutes ‘direct funding of sovereigns’ and has asked the German Constitutional Court to rule on whether the ECB is violating its own mandate.
This all happened last summer, so why am I bringing it up now? Because the German court is finally ready to make a ruling. If the court rules against the ECB, what does that even mean? Is a supra-national organisation to be held hostage to the legal rulings of each and every member state? What is Draghi to do? What will Merkel do? How will the ECB be held to account? Will it merely ask the Cypriot High Court for a dissenting opinion (I am sure they would have no trouble sticking one in Germany’s eye) and then claim that, in the absence of a consensus legal opinion, it intends to ignore everyone?
Will Germany threaten to stop funding the ECB? A comical threat against an institution that holds the printing presses and, in fact, legally owns all of the currency in circulation. Will Germany boycott ECB rate meetings? That I would regard as very good news, as it would almost certainly lead to more policy easing.
There is going to be one very bad outcome if this court case goes against the ECB: the return of massive uncertainty and volatility to peripheral sovereign bonds. Does anyone want to see Germany attempt to put another millstone around Spain’s neck?
There is nothing good to come out of this, just extra pain for a Eurozone that is continuing to implode in the absence of any serious policy easing.
Pretty much ever since the crisis of 2008, there have been those arguing that the sharp fall in NGDP/demand was the cause of the financial crisis, rather than the other way around. Further evidence has now come to light in both the US and the UK, courtesy of David Beckworth and Britmouse, suggesting that the causes of the crisis go back at least as far as 2005, largely vindicating the view that the fall in nominal incomes destroyed banks’ balance sheets. I strongly urge you to read the Beckworth post.
In particular, in the US, expectations of nominal income growth started falling in 2005:
whereas in the UK, nominal income growth has been below trend since 2001:
Interpreting why this happened is obviously tough, but I lay the blame on inflation targeting. If you wish to target the price level of outputs in the face of real cost pressure on your inputs, you must restrict nominal wage growth. Since someone’s income is someone else’s expenditure, this means you are restricting NGDP growth.
Ever since monetarism was put forward as a theory by Friedman (and others), monetarists have emphasised that there are two important nominal variables: income growth and inflation. The first matters because income ultimately determines demand, and the second because nominal wage stickiness makes it hard for employers to adjust wages competitively. The first effort at monetarism was targeting the monetary growth rate; the rationale was that both incomes and inflation were strongly correlated with monetary growth. In the US and the UK this turned out to be a transient correlation, and money growth targeting failed (although it largely succeeded in Germany and Switzerland for nearly thirty years, under a slightly different model – you can get a quick overview from this Mishkin paper).
The next logical step was inflation targeting. Why target some intermediate variable when you can target the final variable directly? This works just great as long as the two remain strongly correlated. We targeted inflation and got stable income growth for free. :). Of course, this means that had we targeted nominal income/NGDP, we would have got stable inflation ‘for free’.
The graphs above demonstrate that this correlation started to break down, probably as a result of falling world inequality. We outsourced a lot of our labour-intensive manufacturing to countries where labour was cheaper. Now incomes in those countries are rising, and we in the developed world must therefore pay more for the basic raw materials that we use as inputs. Thus, by targeting two percent inflation under rising real costs, we crushed income growth. Over the long term this makes it harder for households to meet their debt repayments. Perhaps the small fall in the income growth rate in the UK was stable enough not to contribute materially, but the much sharper fall in the US was probably the cause of the financial crisis.
The good news is that central banks can do a lot about this type of nominal problem. By expanding income growth we can put the unemployed back to work. On the other hand, rising wages in the developing world are not going away, so real cost pressures will remain, and higher inflation (in the UK) might be needed in order to keep incomes/demand on a stable trajectory. To my mind, a slightly higher price level is a small cost compared to millions sitting unemployed, so we should pay it.
This picture also makes sense of the UK’s productivity puzzle. If raw material costs increase 4%, productivity increases 2%, and wage growth is 0%, we get roughly two percent inflation. However, it’s not clear that this type of cost inflation is really captured by the productivity statistics. If productivity is measured by output, then it is negative in the above example, since we are producing the same amount of stuff at a higher cost, despite the underlying productivity increases.
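The arithmetic above can be sketched in a couple of lines. This is a deliberately crude model of my own, assuming material costs and productivity-adjusted wage costs feed one-for-one into output prices; the function name and additive pass-through are illustrative assumptions, not anything from official statistics.

```python
# Back-of-envelope output-price inflation: material cost growth passes
# through one-for-one, while labour cost per unit of output grows by
# wage growth minus productivity growth. A deliberate simplification.

def output_inflation(material_growth: float, wage_growth: float,
                     productivity_growth: float) -> float:
    unit_labour_cost_growth = wage_growth - productivity_growth
    return material_growth + unit_labour_cost_growth

# The post's example: materials +4%, wages 0%, productivity +2%.
print(round(output_inflation(0.04, 0.00, 0.02), 4))  # 0.02, i.e. ~2%
```

Under this toy decomposition, the productivity gain offsets half the input-cost pressure, leaving the two percent inflation described above.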
A mixed bag for the UK then: we must increase nominal wage growth to increase spending, and suffer slightly higher inflation as a result, since the central bank cannot overcome the real cost pressures that result from the rising living standards of much of the rest of the world.
The fund management industry has been getting a lot of bad press recently, firstly because so many funds fail to beat simple strategies like investing in a FTSE 100 index tracker. Of course, in aggregate the fund universe is the stock market, so it’s clearly impossible for funds to beat the stock market in aggregate. Ironically, if all the money controlled by active fund managers were placed in passive funds, it would create exactly the situation needed for the fund management industry to generate alpha and thrive.
However, the underperformance is especially hard to understand because almost any other passive strategy, e.g. buying an equal-weighted portfolio instead of a cap-weighted portfolio like the FTSE 100, will massively outperform a cap-weighted index. The Nasdaq-Apple phenomenon illustrates precisely the danger of cap weighting. At one point Apple comprised nearly 40% of the index (it was later limited to 20% by official decree), which means that you are not getting the benefits of diversification that you would expect from a portfolio, since most of your value is in a handful of positions. The top ten positions of the S&P 500 hold 20% of the value; you need just 32 companies to cover half the index by value; the bottom fifty companies make up just 1% of the market cap. Is it really worth the trading costs to rebalance 50 positions worth just 0.02% each? It’s trivial to see that if every company has an identical risk profile, the minimum variance portfolio is the equally weighted one.
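One way to put a number on that concentration is the Herfindahl-based “effective number of positions”, 1/Σw². The sketch below is my own illustration: the cap-weighted index is a made-up 1/rank power law chosen to mimic a top-heavy index, not real S&P 500 data; only the equal-weight case is exact.

```python
# "Effective number of positions" = 1 / sum(w_i^2): equals n for an
# equal-weighted portfolio of n names, and shrinks as weight concentrates
# in a handful of positions.

def effective_n(weights):
    return 1.0 / sum(w * w for w in weights)

n = 500
equal_weights = [1.0 / n] * n

# Toy cap-weighted index: weight proportional to 1/rank (a crude power law,
# invented here purely for illustration).
raw = [1.0 / rank for rank in range(1, n + 1)]
total = sum(raw)
cap_weights = [w / total for w in raw]

print(round(effective_n(equal_weights)))  # 500
print(round(effective_n(cap_weights)))    # only a few dozen effective names
```

Even this mild toy concentration collapses 500 holdings to a few dozen effective ones, which is the diversification loss described above.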
We have not even talked about fee structures. Many funds have total expense ratios exceeding two percent. Truly awful. Investment trusts as a group seem to do better, averaging about 1%. Still, in all this, I came across a surprising graph in the small cap sector:
I have no idea how this can even be true. Perhaps the FT is selecting only the best performers. Perhaps many or most small cap funds are actually only partially invested in small caps? Here is the graph for flex cap equity:
These graphs were quietly snipped from the FT website, but again, the FTSE All-Share has indeed managed just 2.5% annualised, so it seems plausible that their ‘flex cap’ index might have done this badly. But how can every fund have outperformed so massively? A quick check on my Morningstar-driven research platform finds 28 small cap funds, but only 9 of them are more than three years old.
Does the fund industry routinely close and reopen funds in order to erase the memory of bad performance? This is really the only explanation I can see. No wonder funds look attractive to retail investors when fund managers continually erase funds that underperform the index, so that only those that look good survive.
If that is what is happening, the regulators ought to step in, as these graphs are deeply misleading, despite being superficially accurate.
However, there is a slight upside in this for the retail investor. If funds are killed off for relative underperformance (and your money returned), then sooner or later you will end up in a fund that is beating the index. Of course, if that outperformance is essentially random, then you would be better off indexing and saving your fees, since it is impossible to generate useful work from a random process*. On the other hand, if manager outperformance is persistent – and there is reasonable, though not conclusive, evidence that it is – then sooner or later you are likely to end up in a winner. This is interesting enough that I would like to model it. Does anyone have access to fund data that includes closed funds? Ten years would be enough.
*Yes, I did just apply the second law of thermodynamics to finance. It really works too. Try this brain teaser: suppose you wish for a population to have more boys than girls; is there any strategy which can produce this result? Strategies like “I will keep having children till I get a boy” or “I will continue having children till I have more boys than girls”, etc. Strategies involving killing people are not allowed.
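The footnote’s brain teaser can be checked numerically. The simulation below is my own sketch: since each birth is an independent fair coin flip, a stopping rule can decide when a family stops, but cannot bias the aggregate sex ratio.

```python
# Try the classic strategy "keep having children until the first boy"
# across many families. Each birth is an independent 50/50 draw, so no
# stopping rule can push the population ratio away from one half.
import random

def stop_at_first_boy(n_families, rng):
    boys = girls = 0
    for _ in range(n_families):
        while rng.random() >= 0.5:  # girl: keep trying
            girls += 1
        boys += 1                   # first boy: family stops
    return boys, girls

rng = random.Random(42)
boys, girls = stop_at_first_boy(200_000, rng)
print(boys / (boys + girls))  # hovers around 0.5 despite the strategy
```

Every family ends with exactly one boy, but the number of girls born along the way averages one per family as well, so nothing is gained.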
The Black-Scholes analysis underpins much of modern finance. Today I am going to take an axe to one of its primary underpinnings. The Black-Scholes analysis depends, among other things, on the idea that volatility can be represented by a single number which is independent of the time horizon over which you measure it.
Suppose that we have a random walk made up of random samples from a probability distribution, plus a general tendency to move in some direction: $dS = \mu\,dt + \sigma\,dX$, where $dX$ is drawn from a distribution with unit variance. Since the variances of independent samples add, the variance of the walk is time dependent. If the volatility is to be represented by a single parameter, then sampling over a time that is twice as long (i.e. two samples from the distribution added together) gives $\sigma^2(2t) = 2\sigma^2(t)$, and hence $\sigma(2t) = \sqrt{2}\,\sigma(t)$, so that $\sigma \propto \sqrt{t}$, obtaining the scaling at the heart of Ito’s Lemma by inspection.
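The √t scaling is easy to sanity-check numerically. This little simulation, my own illustration, compares the standard deviation of single draws against sums of two independent draws.

```python
# Variances of independent samples add, so a two-step random walk should
# have sqrt(2) ~ 1.414 times the standard deviation of a one-step walk.
import random
import statistics

rng = random.Random(1)
one_step = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
two_step = [rng.gauss(0.0, 1.0) + rng.gauss(0.0, 1.0) for _ in range(100_000)]

ratio = statistics.stdev(two_step) / statistics.stdev(one_step)
print(round(ratio, 2))  # close to sqrt(2)
```

Any persistent deviation of the market’s measured ratio from √2 is exactly the term structure the strategy below tries to exploit.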
Now if the market were perfectly efficient, we would know that this must be the case, since if it were not, we could obtain an arbitrage opportunity in the following way (today we ignore all frictions and transaction costs). Suppose that I have £1,000,000 in cash, and I decide to take a short position worth £1,000,000 in some index, say the FTSE 100, while simultaneously taking a long position in the same index. The only difference is that I am going to rebalance the long position at the open and the close every day to be worth exactly £1,000,000, using my cash to buy or sell as needed, whereas I will rebalance the short position only every thirty days. The idea is that if volatility is greater on a day-to-day basis than on a thirty-day basis, then by buying low and selling high I can make an incremental profit while, in the main, being hedged against market moves. The open and close data for the FTSE 100 is widely available. Ideally I would like tick data so I could look at rebalancing on shorter timescales; sadly that data appears to be very expensive :(.
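As a sketch of the mechanics (not a back-test: the prices below are a synthetic random walk, and the function name and parameters are my own), the two-leg strategy looks roughly like this. Resetting the long leg to a fixed notional each period sells after rises and buys after falls; the short leg is reset only every 30 periods.

```python
# Two offsetting index positions: a long leg rebalanced to a fixed notional
# every period, against a short leg rebalanced only every `slow_period`
# periods. Each rebalance banks that leg's P&L into cash, so the net cash
# is sensitive to the gap between short- and long-horizon volatility.
import random

def term_structure_pnl(prices, notional=1_000_000.0, slow_period=30):
    cash = 0.0
    long_units = notional / prices[0]    # rebalanced every step
    short_units = notional / prices[0]   # rebalanced slowly
    for t in range(1, len(prices)):
        p = prices[t]
        cash += long_units * p - notional     # bank long-leg P&L, reset size
        long_units = notional / p
        if t % slow_period == 0:
            cash -= short_units * p - notional  # short leg loses as p rises
            short_units = notional / p
    return cash

# Synthetic driftless random walk, for illustration only.
rng = random.Random(0)
prices = [100.0]
for _ in range(2_500):
    prices.append(prices[-1] * (1.0 + rng.gauss(0.0, 0.01)))

print(round(term_structure_pnl(prices), 2))
```

On uncorrelated synthetic returns the expected profit is zero; the claim in the post is precisely that real post-2000 index returns are not like that.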
Anyway, here is the result of the strategy on the FTSE100 since 1984.
I find this graph fascinating. We are looking at the evolution of the market in action. In essence, from 2000 we see the emergence of a term structure, after which the strategy routinely makes money. Note that as we are very close to beta-neutral, this strategy can be levered up, and would need to be, as the absolute returns are not exactly great. However, the volatility of the individual underlying instruments is likely to be much greater than that of the composite index, so the returns would be greater if we applied this strategy to each of the constituents individually. That would mean much more work for me to back-test it. 🙂
I suspect that the change in term structure is related to the rise of algorithmic trading. In essence, this strategy is the opposite of trend following. If you trade based on trends, you make the trends shorter and steeper and hence increase volatility. It’s plausible that this results in higher volatility on short timescales compared to longer ones. An alternative explanation is that options trading has calmed longer-term volatility.
Another interesting feature of this term structure applies to options pricing. Since options are priced via Black-Scholes based on volatility, and since measuring volatility on a monthly rather than a daily basis gives you a different number, the two measures will also give you different prices. This is already known in finance, where the implied volatility of longer-term options is often less than that of shorter-term options.
Still, it’s fascinating to me to watch how the evolution of the market and of trading ideas has created an inefficiency which did not appear to exist in the past. Probably some quant-based hedge funds are already trading on this idea, and as it becomes more heavily traded it will disappear.
So I am considering a long position in Apple. This post will serve as my investment case, an attempt to apply the mindset of Graham and Dodd. I have a shares ISA, in which I hold a portion of my savings. I try to find undervalued companies and sectors, and aim to hold for a longish time period. My current holdings are BG Group, Man Group, Lamprell, Schroder Real Estate Investment Trust, a hedged Nikkei position through the MSCI Sterling Hedged ETF, Cazenove Smaller Companies Fund, and Standard Life Equity Unconstrained.
Firstly, I will set out how I think about valuing companies in the abstract. As a shareholder, you live in a gray area of the investing landscape. You have a share in the assets and liabilities of the company, and also a share in current and future profits. I like to think about how a share price reflects a company as a going concern, which means I need to strip out of the market capitalisation those assets and liabilities that are independent of daily operations. Most commonly this involves bond issues. Bonds are more senior than equity, so they get first bite at the profits and also, in the event of bankruptcy, first claim on the assets. Thus, to understand how the market is valuing a company, we need to add its liabilities to its market cap. Similarly, in the unusual event of a company having a very strong net cash position, we should subtract it from the market capitalisation.
We can imagine, as it were, that the “market value of a business” = market cap + liabilities − non-productive assets. We should not include things such as factories or offices, or those assets that are necessary for the daily operations of the company. The reasons for this are laid out in Graham and Dodd’s weighty tome “Security Analysis”. When one comes to value a productive asset, its value is usually how much it produces. A company usually only sells such assets when they are no longer profitable, in which case there is no reason to think another company could profitably employ your factory, so its value drops to its land value. As an investor, we cannot know for certain how a company is valuing its productive assets, so in the interests of safety, assume you will get nothing from them that is not already factored into the profit expectations.
Apple produces an interesting test case for this way of thinking, as its strong cash position materially affects its market cap, despite it having almost no debt. According to its most recent 10-Q (interim statement, for UK readers), it has around $68.7bn in liabilities and around $158.6bn of non-productive assets, mostly in long-dated securities and cash. The enterprise value of Apple is therefore its $400bn market cap, less roughly $90bn in net non-productive assets, for around $310bn. This is the value that should be used for PE ratios. Last year’s Apple earnings were around $41bn; consensus estimates suggest around $45bn this year and $50bn the year after. This gives an effective PE ratio of around 8. To put this in perspective, Google is valued at around 24 times earnings, and Microsoft and Oracle at around 15 times earnings. (In fairness, these are just the values from the FT website, not the adjusted values I laid out above for Apple.) This suggests to me that Apple is pretty cheap for a tech company. Mean reversion is the most powerful force in market dynamics, so I would expect to see this valuation gap resolve itself one way or another.
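The adjustment can be written out explicitly. This uses only the 10-Q figures quoted above (all in $bn); the variable names are mine.

```python
# Enterprise value per the definition in this post:
#   EV = market cap - (non-productive assets - liabilities)
# All figures in $bn, taken from the 10-Q numbers quoted in the text.
market_cap = 400.0
liabilities = 68.7
non_productive_assets = 158.6   # cash plus long-dated securities
trailing_earnings = 41.0

net_non_productive = non_productive_assets - liabilities
enterprise_value = market_cap - net_non_productive
effective_pe = enterprise_value / trailing_earnings

print(round(net_non_productive, 1))  # ~90
print(round(enterprise_value, 1))    # ~310
print(round(effective_pe, 1))        # ~7.6, i.e. "around 8"
```

On next year’s consensus $45bn the same calculation gives an effective forward PE below 7, which is the gap to Google’s 24 times and Microsoft’s 15 times discussed above.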
As is often pointed out, the stock market is a forward-looking vehicle, so let us ask what this stock price is predicting. To me it suggests that the market is expecting Apple’s earnings to halve: if they did, Apple would be fairly valued as a tech company at around 15 times earnings. This does not seem plausible to me. It is true that as smartphones have become a commodity, margins are likely to come under pressure, and that Google trades at a huge premium because the market is very excited about driverless cars and Google Glass. But Apple has always been extremely secretive about product development; are we certain that it does not have another game-changing device up its sleeve? Apple has a fantastic brand and huge brand loyalty. Smartphones are a fast-growing market, so we could easily see Apple maintaining earnings even with falling margins. Moreover, the corporate culture of excellence that Steve Jobs built is likely to persist.
So we have set out a value case for Apple. However, market sentiment is a strange beast, and it’s clear that Apple is in a monster downtrend. Do you fight the trend? Do you catch a falling knife? Well, this is an area in which it helps to look at the big picture.
So the question is: are we going to break through the support at $420? If we do, we are likely to see another big move lower, down to $380 or so. At the moment the chart looks squeezed between the top of its downtrend and the support line. Over the next few days it will break out either higher or lower. The risk of buying now is that it might go lower; the risk of not buying is that it might break out to the upside by some 20% or so.
In such situations my inclination is not to bother too much with the charts. I intend to hold for a long time (two years at least), so a temporary downwards movement is not such a big deal. Apple is a strongly cash-generative company whose price has disconnected from its fundamentals (in a good way for buyers), so I see a buying opportunity. If your plan is to ride a short move, you have been warned.
Disclosure: I will probably initiate a long in Apple later today.