R/Finance 2010 … and unicorns

April 23, 2010

At the Information Management blogs, Steve Miller has posted a great roundup of last weekend’s R/Finance 2010 conference in Chicago. Here’s Steve’s overall take:

This year’s conference was even better than the 2009 inaugural, the in-excess-of-200 participants consumed by more than 20 consecutive high-powered presentations over the fast-paced day and a half. And while I’m a quantitative finance welterweight at best, there was plenty to pique my interest, including the latest developments to scale R for size and performance.

Check out the rest of Steve’s post for a great review of the other talks at the conference.

As Steve mentions, analysis of large data sets was a big focus of the conference with at least six presentations on the topic, including my own. I talked about a research project we’ve been working on at REvolution for a while, to make data processing and statistical analysis techniques for huge data sets available in REvolution R, breaking the bottlenecks of single-CPU processing, slow disk I/O processing, and being limited to RAM on just one machine. I deviated from the pre-advertised title, and the title in the slides, “A Herd of Unicorns” (download as PDF),

At the Information Management blogs, Steve Miller has posted a great roundup of last weekend’s R/Finance 2010 conference in Chicago. Here’s Steve’s overall take:

This year’s conference was even better than the 2009 inaugural, the in-excess-of-200 participants consumed by more than 20 consecutive high-powered presentations over the fast-paced day and a half. And while I’m a quantitative finance welterweight at best, there was plenty to pique my interest, including the latest developments to scale R for size and performance.

Check of the rest of Steve’s post for a great review of the other talks at the conference.

As Steve mentions, analysis of large data sets was a big focus of the conference, with at least six presentations on the topic, including my own. I talked about a research project we’ve been working on at REvolution for a while: making data processing and statistical analysis techniques for huge data sets available in REvolution R, breaking through the bottlenecks of single-CPU processing, slow disk I/O, and the RAM limits of a single machine. I deviated from the pre-advertised title, and the title in the slides, “A Herd of Unicorns” (download as PDF), may require a little explanation out of context. The “unicorn” here is something powerful and (at least today) mythical: the combination of analytic algorithms for really large data sets and a flexible programming environment that enables modern statistical analysis: exploration, data manipulation, visualization, model evaluation. In other words, the R environment. And if you had the freedom to do large-scale data analysis in R, while making use of the power of multiple machines in a cluster or in the cloud, then that would be, well, a herd of unicorns. We’re working hard to make that fantasy a reality, soon.
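
To give a taste of the multi-CPU part of that picture, here is a minimal sketch using the open-source foreach package with a snow-based backend. It only addresses the single-CPU bottleneck (not bigger-than-RAM data or the new project’s interface), and the four-worker cluster and mtcars bootstrap are purely illustrative assumptions:

```r
# Sketch only: parallelize independent replicates with foreach + doSNOW.
library(foreach)
library(doSNOW)

cl <- makeCluster(4, type = "SOCK")   # four local worker processes (illustrative)
registerDoSNOW(cl)

# Bootstrap the coefficients of a small regression, one replicate per task
boot_coefs <- foreach(i = 1:1000, .combine = rbind) %dopar% {
  idx <- sample(nrow(mtcars), replace = TRUE)
  coef(lm(mpg ~ wt + hp, data = mtcars[idx, ]))
}

stopCluster(cl)
apply(boot_coefs, 2, sd)   # bootstrap standard errors of the coefficients
```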

Information Management: R/Finance 2010: Applied Finance with R
