Voodoo Spectrum of Machine Learning and Data Sets

I used to be very gung-ho about machine learning approaches to trading, but I’m less so now. You have to understand that there is a spectrum of alpha sources, from very specific structured arbitrage opportunities -> to stat arb -> to just voodoo nonsense.

As history goes on, hedge funds and other large players are absorbing the alpha from left to right. Having squeezed the pure arbs (ADR vs. underlying, ETF vs. components, mergers, currency triangles, etc.), they became hungry again and moved on to stat arb (momentum, correlated pairs, regression analysis, news sentiment, etc.). But now even the big stat arb strategies are running dry, so people push further out, chasing mirages (nonlinear regression, causality inference in large data sets, etc.).
In modeling the market, it’s best to start with as much structure as possible before moving on to more amorphous statistical strategies. If you have to use statistical machine learning, encode as much trading domain knowledge as possible with specific distance/neighborhood metrics, linearity, variable importance weightings, hierarchy, low-dimensional factors, etc.
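
To make that concrete, here is a minimal sketch (not from the original post) of what encoding domain knowledge can look like: a nearest-neighbor return predictor whose distance metric is weighted by importance priors on each feature. The feature names, weights, data, and the `predict_knn` helper are all hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: rows are days; columns are hypothetical features
# (momentum, value, news_sentiment). Entirely synthetic.
X = rng.standard_normal((500, 3))
# Synthetic next-day returns driven mostly by the first feature,
# so the example has some recoverable signal.
y = 0.8 * X[:, 0] + 0.1 * rng.standard_normal(500)

# Domain-informed importance weights: the encoded prior that momentum
# matters most here. This weighting IS the structure; a fully generic
# learner would have to rediscover it from noisy data.
w = np.array([3.0, 1.0, 0.5])

def predict_knn(x_new, k=20):
    """Average the k nearest neighbors under a weighted Euclidean
    distance, so 'nearby' reflects trading priors rather than
    treating all features as equally informative."""
    d = np.sqrt((w * (X - x_new) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return y[nearest].mean()

print(predict_knn(np.array([1.0, 0.0, 0.0])))
```

The same idea carries to any of the tools below: the more of the metric, the linearity, and the variable weighting you fix by hand, the less the data has to decide, and the less room noise has to fool you.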
It’s good to have a heuristic feel for the danger/flexibility/noise sensitivity (synonyms) of each statistical learning tool. I roughly have this spectrum in my head:
Very specific, structured, safe
Optimize 1 parameter, requires cross-validation
Optimize 2 parameters, requires cross-validation
Optimize parameters with too little data, requires regularization (see the sketch after this list)
Extrapolation
Nonlinear methods (SVM, tree bagging, etc.)
Higher-order variable dependencies
Variable selection
Structure learning
Very general, dangerous in noise, voodoo
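
As a concrete instance of the "optimize parameters with too little data" rung, here is a minimal sketch on purely synthetic data (the sample sizes, coefficients, and penalty value are invented for illustration): with only 30 noisy observations of 10 features, ordinary least squares chases noise, while a ridge penalty shrinks the fit toward a safer answer. Ridge typically wins out of sample here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, p = 30, 500, 10   # few observations relative to parameters

beta_true = np.zeros(p)
beta_true[0] = 0.5                  # only one feature truly matters

X_tr = rng.standard_normal((n_train, p))
y_tr = X_tr @ beta_true + rng.standard_normal(n_train)
X_te = rng.standard_normal((n_test, p))
y_te = X_te @ beta_true + rng.standard_normal(n_test)

def fit(X, y, lam):
    """Ridge solution; lam=0 recovers ordinary least squares."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (0.0, 10.0):
    beta = fit(X_tr, y_tr, lam)
    mse = np.mean((y_te - X_te @ beta) ** 2)
    print(f"lambda={lam:5.1f}  test MSE={mse:.3f}")
```

The penalty term is a crude but explicit prior, which is the whole argument of this post in miniature: in noise, an explicit prior beats unconstrained flexibility.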
This spectrum is worth expanding. If anyone has suggestions, please leave them.
