Weirdness is the “Curse of Dimensionality”

I read a well-written section in “The Elements of Statistical Learning” by Hastie, Tibshirani, and Friedman. The curse of dimensionality it describes is profound. I’m assuming you are familiar with the k-nearest neighbors classifier, which the book uses to introduce the idea.

This sparked ideas in two contexts: 1) human personalities and 2) trading.
1) If you think of a human personality as a combination of real-valued variables (e.g. introversion-extroversion, affectionate-cold, optimistic-depressed, driven-apathetic, etc.), then this basically says that everyone is weird. Suppose there were only 10 personality traits. Following the unit 10-D cube example, the central sub-cube that captures 10% of a uniform population must have side length 0.1^(1/10) ≈ 0.79, so 90% of people sit more than roughly 79% of the way from the center toward the fringe along at least one trait (the first sketch below checks this by simulation).
One caveat: this assumes personality traits are uniformly distributed, but due to peer pressure that is probably not the case.
2) You can’t look into the past for a setup identical to the one you are seeing now. Also, the more data streams you feed into a system, the more (depending on the learner you are using, e.g. k-NN) every time slice looks absolutely unique, and the harder it becomes to gather a historical data set large enough to teach it any trend (the second sketch below illustrates this).
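
Here is a minimal simulation checking the 10-D cube figure from point 1; the trait count, sample size, and the uniform-traits assumption are illustrative choices of mine, not numbers from the book:

```python
# A quick check of the 10-D cube figure above. The trait count, sample size,
# and the uniform-distribution-of-traits assumption are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p = 10                                   # personality traits / dimensions
n = 500_000                              # simulated "people"
points = rng.uniform(0.0, 1.0, size=(n, p))

# How far each person sits from the center (0.5, ..., 0.5) toward the nearest
# face, measured along their most extreme trait: 1.0 means on the boundary.
frac_to_edge = np.max(np.abs(points - 0.5), axis=1) / 0.5

# Theory: the central sub-cube holding 10% of the volume has side 0.1**(1/p),
# so 90% of points should lie beyond ~79% of the way to the edge.
threshold = 0.1 ** (1 / p)               # ~0.794
print(f"theoretical threshold: {threshold:.3f}")
print(f"fraction beyond it:    {np.mean(frac_to_edge > threshold):.3f}")  # ~0.90
```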
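
And a minimal sketch of point 2, assuming hypothetical uniform features in place of real market data: with a fixed amount of history, the nearest past time slice ends up barely closer than a random one once enough data streams are added, which is why a k-NN-style learner runs out of usable neighbors:

```python
# A sketch of the trading point: as the number of data streams p grows, the
# nearest historical time slice is barely closer than a random one, so a
# k-NN-style learner sees every slice as unique. Uniform features stand in
# for real market data; the sizes here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_history = 5_000                        # historical time slices available

for p in (1, 2, 5, 10, 20, 50):          # number of data streams / features
    history = rng.uniform(size=(n_history, p))
    query = rng.uniform(size=p)          # "today's" time slice
    dists = np.linalg.norm(history - query, axis=1)
    nearest, typical = dists.min(), np.median(dists)
    print(f"p={p:3d}  nearest={nearest:.3f}  median={typical:.3f}  "
          f"ratio={nearest / typical:.2f}")
```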
Feel free to add your thoughts; this seems to be a very important result, so I’m sure there are more conclusions to be drawn.
