
Using ‘Faked’ Data is Key to Allaying Big Data Privacy Concerns

Steve Jones
Last updated: 2017/05/20 at 8:48 PM
5 Min Read

MIT is out of the blocks first once again with a technological development designed to fix some of the privacy issues associated with big data.

Contents
  • How it works
  • The solution we’ve been looking for?

In a world where data analytics and machine learning are at the forefront of technological advancement, big data has become a necessary linchpin of that process. However, most organisations lack the internal expertise to develop such algorithms themselves and so have to outsource their data analytics, which raises serious concerns about handing sensitive information to outsiders.

The researchers at MIT have come up with a novel solution to these privacy issues: a machine learning system that creates “synthetic data” modelled on an original data set. The synthetic version contains no real records and can be distributed safely to outsiders for development and education purposes.

The synthetic data is a structural and statistical analogue of the original data set but contains none of the organisation’s real information. Because it behaves similarly under analysis and stress testing, it is an ideal substrate for developing algorithms and testing designs in a data science setting.


How it works

The MIT researchers, led by Kalyan Veeramachaneni, proposed a concept they call the Synthetic Data Vault (SDV): a machine learning system that creates artificial data from an original data set. The goal is to be able to test algorithms and analytical models on the artificial data without any link back to the organisation involved. As Veeramachaneni succinctly puts it, “In a way, we are using machine learning to enable machine learning.”

The SDV achieves this with a machine learning algorithm called “recursive conditional parameter aggregation”, which exploits the hierarchical organisation of the data and captures the correlations between fields to produce a multivariate model of it. The system learns this model and then generates an entire database of synthetic data.
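To make the idea concrete, here is a minimal, single-table sketch of copula-style synthetic data generation in Python. It is not the SDV’s actual implementation (the real system handles relational hierarchies via recursive conditional parameter aggregation); it only illustrates the core trick of modelling per-column distributions plus cross-column correlations and then sampling fresh rows. The example table and its column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

def fit_gaussian_copula(df: pd.DataFrame):
    """Fit a simple Gaussian-copula model to a numeric table:
    per-column empirical distributions plus a joint correlation matrix."""
    # Map each column to standard-normal scores via its empirical CDF (rank transform).
    ranks = df.rank(method="average") / (len(df) + 1)
    z = stats.norm.ppf(ranks)
    corr = np.corrcoef(z, rowvar=False)                     # cross-column correlations
    marginals = {c: np.sort(df[c].to_numpy()) for c in df}  # empirical marginals
    return corr, marginals

def sample_synthetic(corr, marginals, n_rows, seed=0):
    """Draw correlated normal samples and map them back through each column's
    empirical quantiles, producing rows that mimic the original statistics
    without copying any real record."""
    rng = np.random.default_rng(seed)
    cols = list(marginals)
    z = rng.multivariate_normal(np.zeros(len(cols)), corr, size=n_rows)
    u = stats.norm.cdf(z)                                   # back to uniforms in (0, 1)
    data = {c: np.quantile(marginals[c], u[:, i]) for i, c in enumerate(cols)}
    return pd.DataFrame(data)

# Hypothetical example: a small table with two loosely correlated columns.
rng = np.random.default_rng(1)
age = rng.normal(40, 10, 500)
income = 2_000 * age + rng.normal(0, 10_000, 500)
real = pd.DataFrame({"age": age, "income": income})

corr, marginals = fit_gaussian_copula(real)
synthetic = sample_synthetic(corr, marginals, n_rows=500)
print(synthetic.describe())                                 # similar summary statistics
```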

To test the SDV, the researchers generated synthetic versions of five different public datasets and hired thirty-nine freelance data scientists to build predictive models on them, to determine whether there is a significant difference between models built on the synthesised data and models built on the real data. The answer was largely no: eleven of the fifteen comparisons showed no significant difference between the predictive modelling solutions obtained from real and synthetic data.
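As a rough illustration of that kind of comparison (not the researchers’ actual protocol), one could train the same model on real rows and on synthetic rows and score both against the same held-out slice of real data; similar scores suggest the synthetic stand-in is adequate. The feature and target column names here are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def compare_real_vs_synthetic(real: pd.DataFrame, synthetic: pd.DataFrame,
                              features: list, target: str, seed: int = 0):
    """Train identical models on real and synthetic rows, then score both
    against the same held-out slice of real data."""
    train, test = train_test_split(real, test_size=0.3, random_state=seed)

    scores = {}
    for name, source in {"real": train, "synthetic": synthetic}.items():
        model = LogisticRegression(max_iter=1000)
        model.fit(source[features], source[target])
        preds = model.predict_proba(test[features])[:, 1]
        scores[name] = roc_auc_score(test[target], preds)
    return scores

# Hypothetical usage with made-up column names:
# scores = compare_real_vs_synthetic(real_df, synthetic_df,
#                                    features=["age", "income"], target="churned")
# print(scores)  # similar AUCs suggest the synthetic data is a usable stand-in
```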

The beauty of the SDV is that it replicates the “noise” within the dataset, as well as any missing data, so that the synthetic data set remains statistically faithful to the original. Furthermore, the artificial data can be scaled up or down as required, making it versatile.
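A crude way to mimic the missing-data aspect, purely as a sketch, is to blank out synthetic cells at the same per-column rate observed in the real table; the SDV itself models missingness jointly with the values rather than masking after the fact.

```python
import numpy as np
import pandas as pd

def apply_missingness(synthetic: pd.DataFrame, real: pd.DataFrame, seed: int = 0):
    """Blank out synthetic cells at the per-column rate observed in the real
    data, so downstream code sees realistic gaps. (A simplistic approximation,
    not the SDV's approach to missing values.)"""
    rng = np.random.default_rng(seed)
    out = synthetic.copy()
    for col in real.columns:
        rate = real[col].isna().mean()           # observed missing-value rate
        if rate > 0:
            mask = rng.random(len(out)) < rate   # reproduce that rate at random
            out.loc[mask, col] = np.nan
    return out
```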

The solution we’ve been looking for?

The inference drawn from this analysis is that real data can be successfully replaced by synthetic data in software testing, without the associated security ramifications, and that the SDV is a viable way of generating that synthetic data.

Recognised as the next big thing in Tableau’s 2017 whitepaper, big data is front and centre in the hi-tech game. Accordingly, the ability to work with that data safely and securely is becoming increasingly important. MIT appears to have sidestepped these privacy issues quite neatly with the SDV, ensuring that data scientists can design and test approaches without invading the privacy of real people.

This prototype has the potential to become a valuable educational tool, with no concern about student exposure to sensitive information. With this generative modelling method, the stage is set to teach the next generation of data scientists in an effective way, by facilitating learning by doing.

MIT’s model seems to have everything going for it, especially given the success of its testing, and in theory it makes perfect sense. The researchers claim it will speed up the rate of innovation by removing the “privacy bottleneck”. In practice, that remains to be seen.

TAGGED: data privacy, data protection
Steve Jones May 13, 2017