Speed up backtesting with parallel computing

April 5, 2009

The video from last month’s high-performance backtesting webinar is now available for replay. It’s well worth checking out, especially for the demonstration at the end (from our own Bryan Lewis).

Backtesting financial models is almost always a time-consuming task. Running the model over a sequence of historical time periods can be a burden both because the model itself may be expensive to compute and because the number of time periods must be large to capture, with sufficient resolution, the trends over time and the deviations from actual results.

With a multiprocessor computer, or with a simple cluster of machines running R, you can reduce the time required, scaling with the number of processors available. Bryan gives a very neat example of using the new foreach function in ParallelR 2.0 to turn an ordinary for loop into a parallelized one with minimal changes, cutting the time required for the backtesting calculation by a factor of nearly four on a quad-core machine.
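To give a flavor of the pattern, here is a minimal sketch (not the code from the webinar) of how a backtest loop over historical windows can be parallelized with foreach. The `run_model` function and the window construction are hypothetical placeholders, and the `doParallel` backend is one common choice of parallel backend rather than the one used in the ParallelR 2.0 demo:

```r
library(foreach)
library(doParallel)

# Register a parallel backend; here, four workers on a quad-core machine.
registerDoParallel(cores = 4)

# Hypothetical helper: evaluate the model over one historical window.
run_model <- function(window) sum(window)  # placeholder computation

# Hypothetical data: split a price series into ten backtest windows.
windows <- split(1:100, rep(1:10, each = 10))

# A sequential loop becomes parallel by writing it as foreach + %dopar%;
# .combine = c collects the per-window results into a single vector.
results <- foreach(w = windows, .combine = c) %dopar% run_model(w)
```

Because each historical window is evaluated independently, this kind of loop is embarrassingly parallel, which is why the speedup tracks the number of cores so closely.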