This is not a political commentary. I’ll spare you my thoughts on the U.S. election content. Instead, I’d like to point out something that many already have: Trump has unexpectedly led the Republican field. In fact, it is not just Trump and Republicans. Bernie Sanders has been able to hang on far longer than expected in the Democratic primary despite going up against what can reasonably be described as a Clinton machine.
Is this about the glowing personality of Trump? The sound policies of Sanders? Vice versa? Probably not. Instead, I’d argue that the electorate has changed its priorities—in this case, very much looking for an alternative to whatever it was that brought us to this moment in political time. Only now are many pundits offering that story, or some other, as the reason for these surprise successes. It would have been far more useful for prediction to know it before the event. And that is my point: the world has been surprised by these successes. Very surprised. Especially the professional predictors.
Amongst the best of these predictors are Nate Silver and fivethirtyeight.com. I highly recommend Silver’s book, “The Signal and the Noise.” Yet that organization got it wrong—and not just a little wrong.
“Trump has a better chance of cameoing in another ‘Home Alone’ movie with Macaulay Culkin—or playing in the NBA Finals—than winning the Republican nomination.” That’s not merely an incorrect prediction; that is saying an event is essentially an impossibility. Harry Enten revisited that quote in this article. Enten should be credited for acknowledging the mistake and for looking at how to avoid similar mistakes going forward. These are the good predictors—the people and organizations we should be learning from about prediction. So what happened?
It would be easy enough to describe it as a fluke event: the long-shot hit. That’s not good enough. What happened is that the forecasters were relying on statistics. Statistics only help you when something has happened before, and ideally with some regularity. In other words, statistics are good at describing normal events. That leaves you vulnerable to what in finance is termed a “regime change”; no pun intended. This election is a real-world example of the fundamentals changing before the stats do. In fact, they have to.
Until the fundamentals change, the stats don’t. Until then, the old model of the world still holds. And that is how science works, too. We use the old model until proven wrong, and then we improve it. That works well enough in the physical world where, at the fundamental physics level, there is no regime change (technically—and in the spirit of this article—there has not yet been a regime change in the laws of physics). On the other hand, we are in shifting waters in the complex systems world of the social sciences.
Here, I’d argue that the election predictors did not account for the appetite of the electorate for actual regime change. Whether I’m right about the electorate does not matter. What does matter is that a new variable was introduced, one that the old model(s) did not include. And even if they had, a statistical model has no way of measuring the magnitude of the impact. Statistics merely give a sense of how things behaved in the past.
These kinds of events happen all the time in the markets. It is not even so much that no one sees it coming. It is that the fundamental changes are occurring under the surface so that one usually needs to be something of an expert or think a little differently in order to note it. It also takes far longer to reach a Gladwellian tipping point that translates into actual effects than any of the original analysts anticipate. In fact, the prediction of the tipping point usually enforces the old regime as those predictions end up being too early. The predictors are discredited and life goes on…until, of course, the regime change occurs.
“All models are wrong. Some models are useful.”
The Great Recession was one of those events. I remember participating in Internet discussions about how the housing market was unsustainable. All were in agreement that things were getting out of hand. Not all had a similar view of the outcome, of course. Yet these discussions were beginning in 2005. It took, realistically, until 2007 for the real cracks to show enough to get into the mainstream view (one could argue that the Yen’s crack in ‘06 was something of the beginning). That’s an eternity for financial types who are incentivized to show quarterly returns. The world does not have time for that kind of talk when there is money to be made. Of course, it should make time.
There has been much talk lately about the decline of human judgment. It is not phrased like that, but that is the upshot. For instance, the rise of smart beta and the decline of the hedge fund manager. There was just an article asking whether the Fed chair could be replaced with an algorithm. The biggest has been the ubiquity of media coverage of machine learning. As I write this, the current cover of Wired magazine is titled “The End of Code,” because software is “soon” simply going to learn how to write itself.
I know that machine learning has made huge strides and is a terrific tool in many disciplines. Yet, it is statistical in nature; it has to be. For that matter, so is the brain. But the brain has had far more time to develop. The brain has not only a strongly developed associative memory, but also ways to make associations jump in unpredictable ways. We call this creativity. I’ve yet to see the model that incorporates what we would call creativity—finding parallels between nature and markets in their fractal behavior, for instance.
“The race is not to the swift nor the battle to the strong…but that is the way to bet.”
When I re-introduced TradeCo in my first blog post, I explained that I’ve based our operation on quantitative analysis. That is, I’ve based my business model on statistics and scientific rigor. Part of that rigor is knowing that something will go awry; that the models will one day stop working. It means understanding that just because the biggest drawdown in the backtest was X does not mean it will stay X. Past performance is no guarantee of future results. Or so I’m told. That strategies lack guarantees does not make them bad businesses.
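To make the drawdown point concrete, here is a minimal sketch of how a backtest’s maximum drawdown is computed. The equity curve below is invented purely for illustration; the takeaway is that the resulting number is a sample statistic from one history, not a ceiling on future losses.

```python
# Maximum drawdown of an equity curve: the largest peak-to-trough
# decline relative to the running peak. A backtest samples only one
# history, so the realized figure is a floor on pain, not a ceiling.

def max_drawdown(equity):
    """Return the worst peak-to-trough drop as a fraction of the peak."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)                  # running high-water mark
        worst = max(worst, (peak - value) / peak)
    return worst

# Hypothetical account values over time:
curve = [100, 110, 105, 120, 90, 95, 130]
print(max_drawdown(curve))  # 0.25: the 120 -> 90 drop
```

Nothing stops the next regime from producing a 0.40 drawdown on a strategy whose backtest never exceeded 0.25; the statistic describes the past sample only.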
The issue is that the models of market behavior are missing a key variable that has yet to become significant. Addressing that possibility is called risk management.
There are things that can be done to offset these risks. I look at the world as a contest between optimization and robustness. Optimization, rather simply, is organizing to be maximally efficient in a certain context. Robustness, at the other end of the spectrum, is organizing to perform well across many contexts, but maximally efficiently in none. That means that optimizers will do better for some period of time, but there will also be a day of reckoning at some point (presumably).
Traders focused on robustness know money is being left on the table temporarily, but over the course of a career they count on the grinding nature of compound interest. Think of it as getting rich slowly vs. betting on red 17. There is a lot to be said about optimization vs. robustness, but that’s too much for this article.
That’s great, but how does that help us as traders? First, don’t get rid of the models. Rather, don’t get rid of scientific inquiry and rigorous analysis. That is always going to be useful, but how to use it may change. Everyone (not literally) has learned co-integration, risk parity, passive investing and factor investing. There always has been, and always will be, value in finding the missing variable that is snowballing in importance but is off most people’s radar. Use those models, but understand the context and use good risk management.
If you have persistent, real edge/alpha, then it pays to be robust. If the 1,000-year flood occurs, it is important to still be operating so that the edge can be milked again. Some specifics for being robust:
- Do everything you can to avoid firm-killing trades like the one that took down Knight Capital. For instance, don’t have only net trade limits; have gross trade limits, too.
- Put in safeties to pull, or at least limit, trading under extreme duress.
- In TradeCo, we will not run our algos without human oversight.
- Trade smaller than optimal; there are most likely hidden fat tails in your trading.
- Diversify amongst assets and strategy types, e.g., mean reversion with momentum.
- Constantly evaluate your models. Are your actual results matching forward testing? Is there a deviation from the model value? Check for missing variables/regime change. (Another example of a missing variable in the mid-2000s was the role of shadow banking in money creation.)
- Leaving orders in the market is a form of selling liquidity to the market. Is it possible to avoid leaving orders in the market? Sometimes this may work; sometimes this may take away all of the edge.
- And remember, this also means that firms can adopt machine learning algos, but that there will still be a role for creativity, i.e., humans, when new variables need to be introduced to the models.
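The first item on the list, net vs. gross limits, can be sketched as a pre-trade check. A net-only limit can be fooled by offsetting positions: long $20M and short $18M nets to $2M but is $38M of gross risk. The limits, symbols, and book below are hypothetical.

```python
# Pre-trade risk check enforcing both a net and a gross exposure limit.
# All figures are hypothetical signed notionals in dollars.

NET_LIMIT = 10_000_000    # cap on |sum of signed exposures|
GROSS_LIMIT = 50_000_000  # cap on sum of absolute exposures

def order_allowed(positions, symbol, notional):
    """Would adding this signed notional keep the book inside both limits?"""
    proposed = dict(positions)
    proposed[symbol] = proposed.get(symbol, 0) + notional
    net = abs(sum(proposed.values()))
    gross = sum(abs(v) for v in proposed.values())
    return net <= NET_LIMIT and gross <= GROSS_LIMIT

book = {"ES": 20_000_000, "NQ": -18_000_000}   # net $2M, gross $38M
print(order_allowed(book, "CL", 5_000_000))    # True
print(order_allowed(book, "CL", -15_000_000))  # False: net $13M, gross $53M
```

A real system would layer this with per-symbol limits, kill switches, and the human oversight mentioned above; the point is simply that the gross check catches what the net check misses.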
This is just a starter kit, of course. But it is wise to avoid having the market “Trump” your strategy. Good luck.