When looking at new strategies, I try to judge the quality of an idea by looking at it from different angles. A combination of methods estimates the range of possible outcomes a strategy could have and defines its place in the portfolio – one could call it common-sense backtesting.
I always start with a strategy hypothesis, rather than letting a great piece of software run through millions of permutations of how to trade on historical data. Data-mining in such a way will produce a curve-fit system that shows incredible hypothetical performance in the past, but will most likely yield vastly inferior results going forward.
To narrow down to a universe of viable strategies, I looked at a possible top-down approach in the previous post.
For myself, I have decided to settle on a universe of well-known sources of return in the market – beta (the risk premia of different assets) and alternative beta (factors, e.g. value and momentum; strategic sources of return, e.g. trend; or unconventional risk premia like volatility). Added value comes from learning to understand the characteristics of these return sources and combining them intelligently in a diversified portfolio that yields higher risk-adjusted returns than each source on its own. Nuggets are found in the detailed implementation of strategies that use these return sources in a systematic way, which can lead to very different results from a traditional buy-and-hold approach. By adding concepts like trend or volatility regimes as an overlay to adapt the exposure of different strategies cyclically, I try to enhance performance further.
An evaluation triangle visualizes the way I illuminate an idea from different angles:
Quantitative analysis balanced by common sense and based on economic principles and human behavior.
Depending on your background, the term quantitative analysis may sound like the holy grail (punch in some numbers, write a clever algorithm and out comes money) or scary (quant – what???). Both views are unreasonable, and as I am neither a programming specialist (I run a photography business) nor completely uninformed, the case I present here represents something of a middle ground.
To me quantitative analysis equals learning from history.
The genesis of an idea comes from the way it has played out in the past – what can be found in the historical data. An idea's historical backtest, spanning several decades, will have to show long-term positive results with tolerable bad periods to be considered suitable – this is evidence-based investing. To be able to compare strategies across different sources, I look at the industry standard for measuring risk-adjusted returns, the Sharpe ratio – absolute returns play a minor role, as they can be scaled through the use of leverage.
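As a quick illustration of that last point, here is a minimal Python sketch (the return series is simulated, purely an assumption for illustration) showing how an annualized Sharpe ratio is computed and why leverage scales absolute returns without changing the ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly returns for illustration only (not real data):
# 50 years of months with some assumed mean and volatility
monthly = rng.normal(0.006, 0.03, 600)
rf_monthly = 0.002  # assumed monthly risk-free rate

excess = monthly - rf_monthly

# Annualized Sharpe ratio: mean excess return over its volatility
sharpe = excess.mean() / excess.std(ddof=1) * np.sqrt(12)

# Leverage scales mean and volatility together, leaving the ratio unchanged
lev = 2.0
sharpe_lev = (lev * excess).mean() / (lev * excess).std(ddof=1) * np.sqrt(12)
```

In reality leverage comes with financing costs and constraints, so this invariance is an idealization – but it is why the Sharpe ratio, not the absolute return, is the useful comparison across strategies.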
Not only the magnitude, but also the timing of bad performance should be looked at closely. Strategies with good performance in bad times are the greatest diversifiers, but hard to stick to during good times, when they tend to underperform.
The worst drawdown is, by definition, always in the future – AQR's Cliff Asness has the following rule of thumb: "take the maximum drawdown and double it".
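A maximum drawdown is easy to compute from an equity curve; the sketch below uses a made-up curve, and the doubling at the end is simply Asness's rule of thumb applied:

```python
import numpy as np

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a (negative) fraction."""
    equity = np.asarray(equity, dtype=float)
    running_peak = np.maximum.accumulate(equity)  # highest value seen so far
    drawdowns = equity / running_peak - 1.0       # decline from that peak
    return drawdowns.min()

# Hypothetical equity curve, purely for illustration
curve = [100, 120, 90, 110, 80, 130]
mdd = max_drawdown(curve)   # trough 80 after peak 120 -> -1/3
doubled = 2 * mdd           # rule-of-thumb planning figure
```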
Alpha and beta decay are an issue and can be detected by looking at the long-term performance trend. If it points down across several cycles of a strategy's over- and underperformance, that is a bad sign. Often this is indistinguishable from long-term fluctuations in established risk premia, so I use it mostly when looking at new strategies with a shorter performance history – these often show great performance when first implemented but deteriorate quickly, while still boasting misleadingly high performance measures.
Skewness – the asymmetry of the return distribution relative to a normal distribution – should be taken into account, as it is not reflected in the Sharpe ratio but is an essential property to consider when constructing a diversified portfolio.
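As a sketch, skewness can be estimated as the third standardized moment of a return series; the two toy series below are assumptions chosen to mimic a positively skewed profile (many small losses, rare large gains, as in trend following) and a negatively skewed one (the reverse, as in short volatility):

```python
import numpy as np

def skewness(returns):
    """Sample skewness: the third standardized moment of the return series."""
    r = np.asarray(returns, dtype=float)
    z = (r - r.mean()) / r.std(ddof=0)  # standardize the returns
    return (z ** 3).mean()

# Toy return series, made up purely for illustration
positive_skew = [-1, -1, -1, -1, 8]   # many small losses, one large gain
negative_skew = [1, 1, 1, 1, -8]      # many small gains, one large loss
```

Two strategies can share the same Sharpe ratio while sitting on opposite ends of this measure, which matters a great deal for how their bad periods feel and how they combine in a portfolio.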
In case there are no seriously bad times at all in the backtest, something must be wrong (the Madoff effect) – returns are paired with risk, which necessarily materializes as losses sooner or later.
You might have stumbled upon great value if you find a consistently bad backtest – try to implement the opposite.
This negative screen – Charlie Munger's famous inverse thinking – can be applied to many investment research results that are usually used to discourage bad practice, but can instead be reversed for a positive edge in the market. For example:
- Individual investors underperform the market by over 4% per year on average (Dalbar study) by piling in near highs and panic-selling near lows – someone has to earn those 4%, as the market is a zero-sum game with regard to over- and underperforming the average. Figuring out where the over-performance can be found might be the basis of a great strategy.
- More than 80 percent of day traders lose money, and only 1 percent of them can be called predictably profitable. Overtrading in general leads to even worse underperformance than the average investor already suffers, supporting the view that patience and discipline might be a true, sustainable edge in the marketplace.
- Protective puts to reduce the downside risk exposure of an existing equity portfolio have a price – the Volatility Risk Premium – that is so high it may eliminate the entire upside. Option selling can capture that premium.
Use and evaluate different sources for ideas.
The way you gather information about these ideas is quite irrelevant, as long as you apply sound judgement to your sources. Whether you crunch the data yourself, read academic papers, trading or investment books, listen to podcasts* (some amazingly high-quality information exists) or scour the internet to come up with ideas and their historical performance does not really matter.
Availability of information is not the problem – the hard part is working through the sheer mass and weeding out the useless: information that is plain wrong, merely anecdotal or already outdated.
These are some of the basic criteria I use to find good ideas:
The strategy needs to have exact, systematic and testable rules – otherwise there won't be a verifiable historical performance record to evaluate. Many strategies you can read about are hard to quantify, and many don't produce any positive returns in real life – all of those are disqualified. Most specific technical patterns and indicators fall into this category for me – I only trust solid concepts.
I separate each strategy into its basic concept and the details of implementation. Most viable strategies boil down to one of a few solid basic concepts – for example trend following, harvesting different risk premia, or factors – and this basic concept needs to be sound; everything else is secondary.
I search for several different sources analyzing the historical returns of the idea's basic premise and the reason for its existence. To judge the quality of a source, I use rules of thumb: a peer-reviewed Ph.D. thesis counts for more than a day trader's Twitter feed. A paper put out by one of the most successful hedge funds is more practical than that of a pure academic, because it reflects knowledge of the practical implementation of ideas (because these are my highest-ranked sources, you will find many links to resources written by successful practitioners with an academic background throughout the blog).
There is even a book that is essentially a comprehensive catalogue of the basic sources of return underlying the majority of the strategies I use in my portfolio: "Expected Returns" by Antti Ilmanen.
Uncertainty is everywhere.
While it is by no means certain that a strategy, a factor or a simple risk premium with, say, 50 years of risk-adjusted return data will have the same Sharpe ratio over the next 50 years, we can be quite certain that it will vary widely along the way. Fifty years, or even a decade, are not investment horizons one can deal with while they unfold in real time – emotionally we have to deal with much shorter time frames, even if our goal is to follow a long-term path.
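A quick simulation makes this concrete (normally distributed monthly returns and a "true" annualized Sharpe ratio of 0.5 are pure assumptions): even with the long-run ratio held fixed, realized Sharpe ratios over individual decades scatter widely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume a "true" annualized Sharpe ratio of 0.5 and 3% monthly volatility
vol = 0.03
mu = 0.5 / np.sqrt(12) * vol  # monthly mean return implied by that Sharpe

# Simulate many independent decades (120 months) of returns
n_sims, months = 2000, 120
returns = rng.normal(mu, vol, (n_sims, months))

# Realized annualized Sharpe ratio of each simulated decade
realized = returns.mean(axis=1) / returns.std(axis=1, ddof=1) * np.sqrt(12)

center = realized.mean()   # close to the assumed 0.5
spread = realized.std()    # on the order of 0.3 - a decade tells you little
```

A standard deviation of roughly 0.3 around a true Sharpe of 0.5 means a genuinely good strategy can easily look mediocre, or even negative, over a full decade – which is exactly why short windows of lived experience are so hard to interpret.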
Trusting in numbers too much leads to a sense of false precision – investing is not a science. Data is useless without common sense – as Warren Buffett has famously said, "I would rather be approximately right than precisely wrong."
The most important part of a strategy is its basic foundation, but the generation of its rules (like entries and exits) has nuances that can make a difference over time. Unfortunately, it is uncertain which nuances will make a positive difference in the future and which will have a negative effect.
All in all, I don't think the exact methodology one uses makes a difference that can be known in advance. More important is sticking to the basic principles of robustness and simplicity. The strategy should work across many different markets and asset classes and use very few parameters that work over a wide range of values. For example, a trend-following system can use different entry concepts – moving averages, breakouts or simply the price difference to a specific point in time – applied to all assets; all of these methods will catch the trend with a similar result and show similar characteristics, which is the sign of a robust strategy. Simply choose the method that intuitively makes the most sense to you and that you trust most, so that you are most likely to stick with it. Don't optimize; select a parameter in the middle of the range that has worked historically, or diversify over different parameters and time frames.
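The robustness point can be sketched in Python: three common entry concepts – a moving-average crossover, a breakout, and simple momentum (all lookback parameters here are arbitrary assumptions) – agree on the same clearly trending, synthetic price series.

```python
import numpy as np

def ma_signal(prices, fast=10, slow=40):
    """Long when the fast moving average is above the slow one."""
    p = np.asarray(prices, dtype=float)
    fast_ma = np.convolve(p, np.ones(fast) / fast, mode="valid")[-1]
    slow_ma = np.convolve(p, np.ones(slow) / slow, mode="valid")[-1]
    return fast_ma > slow_ma

def breakout_signal(prices, lookback=40):
    """Long when the latest price makes a new lookback-period high."""
    p = np.asarray(prices, dtype=float)
    return p[-1] >= p[-lookback:].max()

def momentum_signal(prices, lookback=40):
    """Long when price is above its level `lookback` bars ago."""
    p = np.asarray(prices, dtype=float)
    return p[-1] > p[-lookback - 1]

# A steadily rising synthetic series, purely for illustration
uptrend = np.linspace(100, 150, 120)
signals = [f(uptrend) for f in (ma_signal, breakout_signal, momentum_signal)]
```

On real, noisy data the three will disagree at the margins, but a robust trend concept should produce broadly similar results under any of them – if a strategy only works with one very specific entry definition, that is a warning sign.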
Before putting money into a strategy, I try to visualize its emotional impact on myself by backtesting it trade by trade by hand, living through an approximation of the real trading experience – a low-tech dumbing down of the idea to approximate practical implementation.
Performance chasing is a serious concern. Most often, all other statistics being equal, the recently underperforming strategy will likely outperform in the future, and vice versa. Regression to the mean is powerful but not very intuitive, and following the common consensus of the market exerts a strong pull. Strategies, like asset classes, are cyclical in nature: outperformance builds momentum (which makes timing very hard, because the length of this process is unknowable and can deviate considerably from rational fundamentals), leads to overcrowding and eventually turns into underperformance, yielding the net performance you can see in the backtest (and of course the future may play out quite differently than the past). In reality it is very hard to select a new strategy that is currently going through a period of underperformance. A practical approach is to construct a well-balanced portfolio from a set of strategies across diverse assets and rebalance the exposure regularly – I also use a trend overlay or volatility regime filter on all my assets and strategies to adapt exposure to market conditions.
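The rebalancing part of that approach can be sketched minimally (the sleeve values and the equal-weight target below are made up for illustration): mechanically restoring target weights sells the recent outperformer and buys the laggard, which is a systematic tilt toward mean reversion rather than performance chasing.

```python
import numpy as np

def rebalance_trades(values, targets):
    """Trades (in currency units) needed to restore target portfolio weights."""
    v = np.asarray(values, dtype=float)
    t = np.asarray(targets, dtype=float)
    total = v.sum()
    return t * total - v  # positive = buy, negative = sell

# Three strategy sleeves have drifted away from an equal-weight target
sleeve_values = [130.0, 100.0, 70.0]
targets = [1/3, 1/3, 1/3]
trades = rebalance_trades(sleeve_values, targets)
# Sells 30 of the outperformer, buys 30 of the underperformer
```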
The next post will go into the details of my short volatility strategy implementation to give a practical example of the process from idea to trading.
*My favorite podcasts:
I prefer podcasts by investment professionals, as they tend to do a very good job of presenting reliable information. Some less mainstream approaches fall through the cracks, and I enjoy listening to the last two on the list for more off-the-radar ideas.