Analytics | Big Data

Using ‘Faked’ Data is Key to Allaying Big Data Privacy Concerns

Steve Jones
5 Min Read

MIT is out of the blocks first once again with a technological development designed to fix some of the privacy issues associated with big data.

Contents
  • How it works
  • The solution we’ve been looking for?

In a world where data analytics and machine learning are at the forefront of technological advancement, big data is becoming a necessary lynchpin of that process. However, most organisations do not have the internal expertise to handle algorithm development and thus have to outsource their data analytics. This raises many concerns about disseminating sensitive information to outsiders.

The researchers at MIT have come up with a novel solution to these privacy issues. Their machine learning system can create “synthetic data” modelled on an original data set: it contains no real records and can be distributed safely to outsiders for development and education purposes.

The synthetic data is a structural and statistical analogue of the original data set, but it contains no real information about the organisation. However, it performs similarly in data analysis and stress testing, making it an ideal substrate for developing algorithms and testing designs in the data science milieu.
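To make the idea concrete, the minimal sketch below (a toy illustration in Python, not MIT’s code) fits a simple multivariate Gaussian to a small “real” table and samples a synthetic table from it; the synthetic rows mirror the original means and correlations without reusing any real record.

```python
# Toy illustration: fit a multivariate Gaussian to a small "real" table and
# sample a synthetic analogue. Not the SDV itself; just the basic idea.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy "real" data: age, income, monthly spend (correlated columns).
real = pd.DataFrame(
    rng.multivariate_normal(
        mean=[40, 55_000, 1_200],
        cov=[[100, 30_000, 900],
             [30_000, 2.5e7, 300_000],
             [900, 300_000, 90_000]],
        size=500,
    ),
    columns=["age", "income", "monthly_spend"],
)

# "Learn" the model: column means and the covariance of the real table.
mu = real.mean().to_numpy()
sigma = np.cov(real.to_numpy(), rowvar=False)

# Sample a synthetic table of the same shape from the fitted model.
synthetic = pd.DataFrame(
    rng.multivariate_normal(mu, sigma, size=len(real)),
    columns=real.columns,
)

# The synthetic table is a statistical analogue: similar moments and
# correlations, but no row corresponds to a real individual.
print(real.corr().round(2))
print(synthetic.corr().round(2))
```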


How it works

The MIT researchers, led by Kalyan Veeramachaneni, proposed a concept they call the Synthetic Data Vault (SDV): a machine learning system that creates artificial data from an original data set. The goal is to be able to use that data to test algorithms and analytical models without any link to the organisation involved. As Veeramachaneni succinctly puts it, “In a way, we are using machine learning to enable machine learning.”

The SDV achieves this using a machine learning algorithm called “recursive conditional parameter aggregation” which exploits the hierarchical organisation of the data and captures the correlations between multiple fields to produce a multivariate model of the data. The system learns the model and subsequently produces an entire database of synthetic data.
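A rough way to picture that aggregation step is sketched below. This is a deliberate simplification for illustration, not the SDV’s actual algorithm: each parent row’s child records are summarised into a few model parameters, those parameters are attached to the parent table, the extended table is modelled as one multivariate distribution, and synthetic child rows are then regenerated from each sampled parent’s parameters.

```python
# Conceptual sketch of parameter aggregation across linked tables
# (a simplification, not MIT's actual implementation).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Parent table: customers. Child table: orders, linked by customer_id.
customers = pd.DataFrame({
    "customer_id": range(100),
    "age": rng.normal(40, 10, 100),
})
orders = pd.DataFrame({
    "customer_id": rng.integers(0, 100, 1000),
    "amount": rng.gamma(shape=2.0, scale=50.0, size=1000),
})

# Step 1: summarise each customer's orders into a few parameters
# (mean, std, count) and attach them to the parent row.
params = orders.groupby("customer_id")["amount"].agg(
    order_mean="mean", order_std="std", n_orders="count"
).fillna(0.0)
extended = customers.join(params, on="customer_id").fillna(0.0)

# Step 2: model the extended parent table as one multivariate distribution.
cols = ["age", "order_mean", "order_std", "n_orders"]
mu = extended[cols].mean().to_numpy()
sigma = np.cov(extended[cols].to_numpy(), rowvar=False)

# Step 3: sample synthetic parents, then regenerate synthetic child rows
# from each sampled parent's own (synthetic) order parameters.
synth_parents = pd.DataFrame(
    rng.multivariate_normal(mu, sigma, size=len(extended)), columns=cols
)
synth_orders = [
    rng.normal(row.order_mean, max(row.order_std, 1e-6),
               size=max(int(round(row.n_orders)), 0))
    for row in synth_parents.itertuples()
]
print(synth_parents.head())
```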

To test the SDV, the researchers generated synthetic data for five different public datasets. Thirty-nine freelance data scientists were hired to develop predictive models on the data, to ascertain whether a significant difference exists between the synthesised data and the real data. The result was a conclusive no: 11 out of the 15 tests displayed no significant difference in the predictive modelling solutions built on real and synthetic data.
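That comparison can be pictured with a small experiment along these lines (an assumed workflow, not the study’s exact protocol): train the same classifier once on real data and once on synthetic data sampled from a naive model of it, then score both on held-out real data.

```python
# Illustrative real-vs-synthetic comparison (assumed workflow, simplified).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Naive synthesiser: per class, sample from a Gaussian fitted to the real data.
X_synth, y_synth = [], []
for label in np.unique(y_train):
    real_cls = X_train[y_train == label]
    mu, sigma = real_cls.mean(axis=0), np.cov(real_cls, rowvar=False)
    X_synth.append(rng.multivariate_normal(mu, sigma, size=len(real_cls)))
    y_synth.append(np.full(len(real_cls), label))
X_synth, y_synth = np.vstack(X_synth), np.concatenate(y_synth)

# Same model class, two training sets; evaluation always on real held-out data.
clf_real = RandomForestClassifier(random_state=0).fit(X_train, y_train)
clf_synth = RandomForestClassifier(random_state=0).fit(X_synth, y_synth)

print("AUC, trained on real data:     ",
      round(roc_auc_score(y_test, clf_real.predict_proba(X_test)[:, 1]), 3))
print("AUC, trained on synthetic data:",
      round(roc_auc_score(y_test, clf_synth.predict_proba(X_test)[:, 1]), 3))
```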

The beauty of the SDV is that it can replicate the “noise” within the dataset, as well as any missing data, so that the synthetic data set is statistically the same. Furthermore, the artificial data can be easily scaled as required, making it versatile.
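Reproducing missing data can be as simple as measuring each column’s missing-value rate in the real table and injecting the same rate into the synthetic one, as in this simplified sketch (the SDV’s actual handling is more sophisticated).

```python
# Simplified sketch: carry each column's missing-value rate over to the
# synthetic table so missingness patterns are also reproduced.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

real = pd.DataFrame({"income": [55_000, np.nan, 48_000, 61_000, np.nan, 52_000]})
synthetic = pd.DataFrame({"income": rng.normal(54_000, 5_000, size=6)})

for col in real.columns:
    missing_rate = real[col].isna().mean()            # e.g. 2/6 here
    mask = rng.random(len(synthetic)) < missing_rate  # blank out the same share
    synthetic.loc[mask, col] = np.nan

print(f"real missing rate:      {real['income'].isna().mean():.2f}")
print(f"synthetic missing rate: {synthetic['income'].isna().mean():.2f}")
```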

The solution we’ve been looking for?

The inferences drawn from the analysis are that real data can be successfully replaced by synthetic data in software testing without the security ramifications and that the SDV is a viable solution for synthetic data generation.

Recognised as the next big thing by Tableau’s 2017 whitepaper, big data is front and centre in the hi-tech game. Accordingly, the need to be able to work safely and securely with the data is becoming increasingly important. MIT seems to have sidestepped these privacy issues quite neatly with the SDV, ensuring that data scientists can design and test approaches without invading the privacy of real people.

This prototype has the potential to become a valuable educational tool, with no concern about student exposure to sensitive information. With this generative modelling method, the stage is set to teach the next generation of data scientists in an effective way, by facilitating learning by doing.

MIT’s model seems to have everything going for it, especially considering the success of the paradigm testing, and in theory it makes perfect sense. Researchers claim that it will speed up the rate of innovation by negating the “privacy bottleneck”. In practice, that remains to be seen.

Tagged: data privacy, data protection