Using ‘Faked’ Data is Key to Allaying Big Data Privacy Concerns

MIT is out of the blocks first once again with a technological development designed to fix some of the privacy issues associated with big data.

May 20, 2017

In a world where data analytics and machine learning are at the forefront of technological advancement, big data has become a necessary linchpin of that process. However, most organisations do not have the in-house expertise for algorithm development and so have to outsource their data analytics, which raises serious concerns about handing sensitive information to outsiders.

The researchers at MIT have come up with a novel solution to these privacy issues. Their machine learning system creates “synthetic data” modelled on an original data set: it contains no real records, so it can be distributed safely to outsiders for development and education purposes.

The synthetic data is a structural and statistical analogue of the original data set but contains no real information about the organisation. Because it behaves similarly under analytical and stress testing, it makes an ideal substrate for developing algorithms and testing designs in data science.

How it works

The MIT researchers, led by Kalyan Veeramachaneni, proposed a concept they call the Synthetic Data Vault (SDV): a machine learning system that creates artificial data from an original data set. The goal is to be able to test algorithms and analytical models on that data without any link back to the organisation involved. As Veeramachaneni succinctly puts it, “In a way, we are using machine learning to enable machine learning.”

The SDV achieves this using a machine learning algorithm called “recursive conditional parameter aggregation” which exploits the hierarchical organisation of the data and captures the correlations between multiple fields to produce a multivariate model of the data. The system learns the model and subsequently produces an entire database of synthetic data.
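As a rough illustration of the single-table part of this idea, the sketch below fits per-column distributions plus a cross-column correlation structure to a numeric table, then samples new rows from that model. It is a simplified stand-in, not the SDV’s actual recursive conditional parameter aggregation (which also models the relationships between tables in a database); the function names `fit_table_model` and `sample_table` are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

def fit_table_model(df: pd.DataFrame) -> dict:
    """Learn per-column empirical distributions and cross-column correlations."""
    # Map each column to standard-normal scores via its empirical CDF,
    # then measure correlations between those scores (a Gaussian-copula-style model).
    u = df.rank(pct=True).clip(1e-6, 1 - 1e-6).values
    normal_scores = stats.norm.ppf(u)
    return {
        "columns": list(df.columns),
        "correlation": np.corrcoef(normal_scores, rowvar=False),
        "sorted_values": {c: np.sort(df[c].values) for c in df.columns},
    }

def sample_table(model: dict, n_rows: int) -> pd.DataFrame:
    """Draw synthetic rows that mimic the learned marginals and correlations."""
    cols = model["columns"]
    z = np.random.multivariate_normal(np.zeros(len(cols)), model["correlation"], size=n_rows)
    u = stats.norm.cdf(z)  # correlated uniform values, one column per field
    synthetic = {}
    for i, c in enumerate(cols):
        # Invert each column's empirical CDF so the marginal distributions are preserved
        vals = model["sorted_values"][c]
        synthetic[c] = vals[(u[:, i] * (len(vals) - 1)).astype(int)]
    return pd.DataFrame(synthetic)

# Example: fit a model to a small numeric "real" table and emit synthetic rows
real = pd.DataFrame({"age": np.random.normal(40, 10, 500),
                     "income": np.random.normal(50_000, 12_000, 500)})
model = fit_table_model(real)
fake = sample_table(model, n_rows=1000)
```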

To test the SDV, the researchers generated synthetic versions of five public datasets and hired 39 freelance data scientists to build predictive models on them, to determine whether working from synthesised data produced significantly different results than working from the real data. The answer was largely no: in 11 of the 15 tests, there was no significant difference between the predictive modelling solutions built on real and synthetic data.
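The kind of comparison involved can be sketched as follows: train the same predictive model once on real data and once on synthetic data, then score both against the same held-out real test set. This is only illustrative of the general approach, not the researchers’ actual evaluation pipeline, and it assumes a scikit-learn classifier for concreteness.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def compare_real_vs_synthetic(X_real, y_real, X_synth, y_synth):
    """Score the same model class trained on real vs. synthetic data, on real held-out data."""
    X_train, X_test, y_train, y_test = train_test_split(
        X_real, y_real, test_size=0.3, random_state=0)

    model_real = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    model_synth = RandomForestClassifier(random_state=0).fit(X_synth, y_synth)

    return {
        "trained_on_real": accuracy_score(y_test, model_real.predict(X_test)),
        "trained_on_synthetic": accuracy_score(y_test, model_synth.predict(X_test)),
    }
```

If the synthetic data preserves the structure that matters, the two scores should come out close, which is the pattern the MIT team reported in most of their tests.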

The beauty of the SDV is that it can replicate the “noise” within the dataset, as well as any missing data, so that the synthetic data set is statistically equivalent to the original. Furthermore, because the artificial data is generated from a model rather than copied, it can be scaled to any size required, making it versatile.
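Continuing the earlier sketch, and again only as an assumption about how such a system might behave rather than the SDV’s actual API, scaling and missingness replication could look like this: request as many synthetic rows as needed, then blank out cells at each column’s observed null rate so the synthetic table has realistic gaps.

```python
import numpy as np

def add_missingness(synthetic_df, real_df):
    """Blank out synthetic cells at each column's observed missing-value rate."""
    out = synthetic_df.copy()
    for col in real_df.columns:
        null_rate = real_df[col].isna().mean()
        if null_rate > 0:
            mask = np.random.rand(len(out)) < null_rate
            out.loc[mask, col] = np.nan
    return out

# Scale up: ten times as many rows as the original, with matching missingness
bigger_fake = add_missingness(sample_table(model, n_rows=10 * len(real)), real)
```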

The solution we’ve been looking for?

The inferences drawn from the analysis are that real data can be successfully replaced by synthetic data in software testing without the security ramifications and that the SDV is a viable solution for synthetic data generation.

Recognised as the next big thing by Tableau’s 2017 whitepaper, big data is front and centre in the hi-tech game. Accordingly, the need to be able to work safely and securely with the data is becoming increasingly important. MIT seems to have sidestepped these privacy issues quite neatly with the SDV, ensuring that data scientists can design and test approaches without invading the privacy of real people.

This prototype has the potential to become a valuable educational tool, with no concern about student exposure to sensitive information. With this generative modelling method, the stage is set to teach the next generation of data scientists in an effective way, by facilitating learning by doing.

MIT’s model seems to have everything going for it, especially given the success of its testing, and in theory it makes perfect sense. The researchers claim it will speed up the rate of innovation by removing the “privacy bottleneck”. In practice, that remains to be seen.
