By Dr. Joshua Vogelstein, Professor at Johns Hopkins & Jonathan Caplis, Managing Principal at PivotalPath
According to a recent Barclay report [link here], 62% of systematic hedge fund managers use machine learning techniques within the investment process. Given the significant increase in computing power (and its reduced cost), along with the wide availability of off-the-shelf statistical computing packages, the barriers to implementing machine learning techniques have fallen dramatically, while the difficulty of using these techniques successfully remains. Indeed, fund managers who don't fully grasp the upside potential and risk parameters of their strategy can quickly implode, even if the investment thesis is fundamentally sound; the problem is further magnified by the expanding complexity of a machine learning framework.
Machine learning, at its core, is a misnomer. The term suggests that one can throw data at a machine, and out pops accurate predictions. In fact, to set machine learning up to solve a problem, a team of humans is required to choose the data sources, the algorithms, the hardware, and the parameters. Each of these choices requires a complementary set of skills, backgrounds, and intuition, ranging from finance, to statistics, to computer science.
In our experience, the teams that most successfully utilize machine learning are those that treat it like a glass box rather than a black box. Whereas a black box provides no insight into what occurs between input and output, an intelligently constructed glass box offers visibility and the ability to test a clear hypothesis. As a general rule, if someone can't clearly explain to you what they are doing, they probably don't understand it themselves.
So, how can an investor separate the wheat from the chaff?
When evaluating machine learning strategies, here are some of the questions investors should be asking to help separate black boxes from glass boxes.
- What is your investment thesis?
Don't let salespeople change the topic by citing sophisticated algorithms in response to this question. If they do, ask two simple follow-ups: why do you make money, and what is the market inefficiency on which you are capitalizing? If they can't answer these basic questions, or if the team prefers to advertise the value of their tool and its pattern-finding ability over their understanding of the fundamental principles underlying statistical science, there is cause for concern.
- Are you a data analyst using a machine learning engineer’s product or are you a machine learning engineer?
Just as an investor scrutinizes the qualifications and experience of an investment team to determine whether they are appropriately skilled, the same level of scrutiny should apply to the quantitative team – are they:
- M.B.A.s with a concentration in statistics
- Professionals with graduate-level degrees in data science, or
- Ph.D.s in the applied sciences, such as biomedical engineering or applied neuroscience?
You should look for the latter. Fully understanding the nuances of machine learning requires the proper training and experience to use these techniques effectively; simply borrowing tools from off-the-shelf machine learning packages in Matlab is insufficient.
- How do you handle the trade-off between leveraging more powerful tools and the ability to understand the model?
In data science speak, this concept is known as the bias-variance trade-off. More parameters introduce greater variance and less bias, increasing the likelihood of overfitting: describing noise in the data rather than identifying underlying relationships and predictive patterns. For example, linear regressions come with significant bias (they work well only when the relationship is linear, and cannot capture nonlinear effects such as seasonal variability), but their variance is small. One must always consider whether the additional variance a more flexible model introduces is worth the reduction in bias. This subtle point was lost on much of the neural network community for decades, so it is little wonder that it is still absent in large communities applying mathematical tools to forecast financial markets. The more variables and parameters (degrees of freedom) you feed into a model, the more you lose the ability to reason about what happens between the input and the output. This is directly related to the next question.
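The trade-off above can be seen in a minimal sketch using synthetic (not market) data: fitting polynomials of increasing degree to a noisy seasonal signal. The low-degree fit has high bias (it cannot represent the seasonality); the high-degree fit has high variance (it chases the noise, so its in-sample error keeps falling while its out-of-sample error grows).

```python
# Illustrative sketch of the bias-variance trade-off on synthetic data.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# A "seasonal" signal (one sine period) observed with noise.
x_train = np.linspace(0, 2 * np.pi, 40)
x_test = np.linspace(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.4, x_train.size)
y_test = np.sin(x_test)  # noise-free truth, for out-of-sample error

def mse(degree):
    """Train and test mean squared error for a degree-d polynomial fit."""
    p = Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((p(x_train) - y_train) ** 2)
    test_err = np.mean((p(x_test) - y_test) ** 2)
    return train_err, test_err

for d in (1, 5, 15):
    tr, te = mse(d)
    print(f"degree {d:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Training error falls monotonically as degrees of freedom are added, which is exactly why in-sample fit alone can never tell you a model is too flexible; only the out-of-sample error reveals where added variance stops paying for reduced bias.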
- How do you determine when a model is broken before it is too late?
A primary difficulty in utilizing machine learning algorithms in finance is precisely their power. Modern algorithms can find patterns in almost any dataset, much as a novice stargazer can find patterns in the night sky. But finding a historical pattern is not the same as forecasting a profitable trade, just as seeing a constellation shaped like a lion does not foretell a day of courage. Without significant expertise in the subtleties of evaluating these powerful algorithms, detecting when and why a model loses its predictive ability, and making the necessary changes, becomes almost impossible.
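One common way teams operationalize this (a hypothetical sketch, not any particular fund's method) is to monitor a model's recent out-of-sample hit rate against the baseline it achieved in validation, and flag degradation before losses compound. The window size, baseline, and tolerance below are illustrative assumptions.

```python
# Hypothetical model-health monitor: flag a model whose recent hit rate
# has fallen well below its validated baseline. Thresholds are illustrative.
from collections import deque

class ModelMonitor:
    def __init__(self, window=50, baseline_hit_rate=0.55, tolerance=0.10):
        self.window = window
        self.baseline = baseline_hit_rate   # hit rate observed in validation
        self.tolerance = tolerance          # degradation allowed before flagging
        self.outcomes = deque(maxlen=window)

    def record(self, predicted_up, actual_up):
        """Record one prediction/outcome pair (True = market moved up)."""
        self.outcomes.append(predicted_up == actual_up)

    def is_broken(self):
        """Flag only once a full window of evidence underperforms the baseline."""
        if len(self.outcomes) < self.window:
            return False  # not enough out-of-sample evidence yet
        recent_hit_rate = sum(self.outcomes) / len(self.outcomes)
        return recent_hit_rate < self.baseline - self.tolerance

# Usage: a model that has decayed to roughly a one-in-three hit rate.
monitor = ModelMonitor(window=50, baseline_hit_rate=0.55, tolerance=0.10)
for i in range(50):
    monitor.record(predicted_up=True, actual_up=(i % 3 == 0))
print(monitor.is_broken())  # True
```

The design choice worth noting is that the monitor waits for a full window before flagging: declaring a model broken on a handful of bad trades is itself a form of overfitting, this time to recent noise.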
The most successful financial teams will continue to be those that have a deep understanding of what can go awry, how to avoid it, and how to cope when it happens, no matter which tools they have at their disposal.
The increasing ubiquity of powerful computing technologies will likely continue to grow the number of machine learning users in the financial community. Ultimately, "machine learning" is far from a blanket description of a strategy; it is something investors should question, especially given the ongoing hype and media attention. As with other strategies across their portfolios, investors should look for the best investment teams, with deep domain knowledge of the markets they trade and the underlying data.