© 2008-25 SmartData Collective. All Rights Reserved.
Data Mining, Predictive Analytics

PAW: High-Performance Scoring of Healthcare Data

James Taylor
6 Min Read

Live from Predictive Analytics World


Natasha Balac from UC San Diego and Michael Zeller from Zementis (their product was blogged here and their support for the Amazon compute cloud was discussed here) presented on the use of Medicare and Medicaid data to detect and prevent fraud. The San Diego Supercomputer Center (SDSC) at UC San Diego has a huge data infrastructure (3PB+ of disk, for instance) and substantial computing resources. It hosts large volumes of data and runs all sorts of data mining on that data across a variety of public and private projects.

One of these projects is for the Centers for Medicare and Medicaid Services (CMS), which has been chartered to detect and eliminate Medicare/Medicaid provider fraud. CMS is bringing together data from all 50 states and all sorts of stakeholders to try and find fraud. The project handles 30TB of data (subject to HIPAA and FISMA regulations, among others), and SDSC is providing the platform for it. With 50 states come many flavors of fraud. The claims data is being mined using rules for known fraud (several states already had these), but fraud changes all the time, so the project also needs predictive analytics to find new and unknown kinds of fraud. SDSC focuses on profiling and detecting fraud (and errors) using data from insurance claims, pharmacies, doctors and more. The project needed not only to handle large amounts of data and lots of transactions but also to do so flexibly.
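The two-layer approach described here, known-fraud rules plus a statistical model for anomalies, can be sketched in a few lines. This is purely illustrative and not the CMS system: the rules, field names and threshold are all invented, and a simple z-score on claim amounts stands in for the real predictive models.

```python
# Illustrative sketch: combine known-fraud rules with a simple statistical
# outlier score on claim amounts. All names and rules here are hypothetical.
from statistics import mean, stdev

# Hypothetical known-fraud rules, one per pattern a state already tracks.
RULES = [
    ("duplicate_claim", lambda c, seen: (c["provider"], c["code"], c["date"]) in seen),
    ("impossible_units", lambda c, seen: c["units"] > 24),  # e.g. >24 hours billed in a day
]

def z_score(amount, amounts):
    mu, sigma = mean(amounts), stdev(amounts)
    return 0.0 if sigma == 0 else (amount - mu) / sigma

def flag_claims(claims, z_threshold=3.0):
    """Return (claim, reasons) pairs for claims that trip a rule or look anomalous."""
    amounts = [c["amount"] for c in claims]
    seen, flagged = set(), []
    for c in claims:
        reasons = [name for name, rule in RULES if rule(c, seen)]
        if abs(z_score(c["amount"], amounts)) > z_threshold:
            reasons.append("amount_outlier")
        seen.add((c["provider"], c["code"], c["date"]))
        if reasons:
            flagged.append((c, reasons))
    return flagged
```

The point of the split is the one the talk makes: the rule list encodes fraud patterns the states already know, while the score catches claims that merely look unusual.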

The project uses R to build regression trees and neural networks and then exports these models using PMML (Predictive Model Markup Language, an XML syntax for predictive models maintained by the Data Mining Group) to the ADAPA engine from Zementis. Zementis ADAPA is a decision engine that supports rules and analytic models and is strongly focused on open standards like PMML. It uses an open source rules engine (JBoss Drools), supports JSR-73 for data mining, and deploys as a standard decision service so that any system can call the service and get questions answered.
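The appeal of this setup is that the model is just a document, and the scoring engine is generic. As a toy illustration (not ADAPA, and not full PMML — real PMML adds namespaces, a DataDictionary and a MiningSchema), here is a hand-written, simplified linear-regression "model" and a tiny stand-in engine that evaluates it without knowing anything about how it was trained:

```python
# Toy illustration of the PMML idea: the model lives in an XML document, and a
# generic scorer evaluates it. The model, fields and coefficients are invented.
import xml.etree.ElementTree as ET

PMML_DOC = """
<PMML version="4.1">
  <RegressionModel functionName="regression">
    <RegressionTable intercept="0.5">
      <NumericPredictor name="claim_amount" coefficient="0.001"/>
      <NumericPredictor name="visits_per_month" coefficient="0.02"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

def score(pmml_text, record):
    """Evaluate a simplified PMML RegressionTable against a dict of inputs."""
    root = ET.fromstring(pmml_text)
    table = root.find("./RegressionModel/RegressionTable")
    total = float(table.get("intercept"))
    for pred in table.findall("NumericPredictor"):
        total += float(pred.get("coefficient")) * record[pred.get("name")]
    return total
```

Swapping in a retrained model means swapping the XML document; the scoring side never changes, which is exactly the development/deployment separation the article credits to PMML.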

Using R to build the models made model development open source and standards-based as well. R is an integrated suite of software for data manipulation, calculation, visualization and data mining. Zementis has been contributing code to the R project so it can support PMML export, and many other companies integrate with and contribute to R.

PMML is not as well known or widely used as it should be. It is a robust and usable syntax for describing predictive models, and it provides a clear separation between model development and model deployment, letting analytics people focus on building the model and IT people on deploying it. Many vendors support PMML (IBM, Oracle, SAP, SAS, SPSS, Fair Isaac, Teradata, MicroStrategy, KXEN). PMML also supports the transformations a model needs: all the pre- and post-processing can be described in PMML as well, addressing one of the big issues when models are implemented.
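To make that last point concrete, pre-processing can travel with the model as derived fields. The fragment below is a hypothetical, simplified sketch of that mechanism (the field names are invented, and details vary across PMML versions): a log-transformed input is declared once in the document, so every deployment applies the same transformation the modeler used.

```xml
<TransformationDictionary>
  <DerivedField name="log_claim_amount" optype="continuous" dataType="double">
    <Apply function="ln">
      <FieldRef field="claim_amount"/>
    </Apply>
  </DerivedField>
</TransformationDictionary>
```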

The project found that deploying models (from R to ADAPA) was almost instant, with little or no delay. The models were built against 300,000-500,000 rows and showed roughly 90% accuracy. The approach proved scalable and fast while delivering the flexibility needed to support the different states and their rules. The team also scores data as part of its Extract-Transform-Load (ETL) process, allowing it to detect fraud before loading into the database. The use of PMML also gave them the flexibility to mix commercial and open source products.
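Scoring inside the ETL flow can be sketched as a pipeline where each claim is scored during the Transform step and suspect rows are diverted to a review queue instead of being loaded. This is an assumption-laden sketch, not the project's code: the record fields, the toy scoring rule and the threshold are all made up, and in the real system the `score` call would go to a deployed model.

```python
# Sketch of score-during-ETL: suspect rows are held back before the Load step.
# All field names, the rule and the threshold are hypothetical.

def transform(raw_rows):
    # Normalize raw input into typed records.
    for row in raw_rows:
        yield {"provider": row["provider"], "amount": float(row["amount"])}

def score(record):
    # Stand-in for a call to a deployed model (e.g. a PMML scoring engine).
    return 1.0 if record["amount"] > 10_000 else 0.0

def etl(raw_rows, warehouse, review_queue, threshold=0.5):
    for record in transform(raw_rows):
        if score(record) >= threshold:
            review_queue.append(record)   # diverted for fraud review
        else:
            warehouse.append(record)      # loaded as usual
```

The design point is the one the article makes: by the time a claim reaches the database, it has already been screened, rather than being loaded first and audited later.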

More posts and white papers on predictive analytics and decision management at decisionmanagementsolutions.com/paw

TAGGED: data mining, fraud, healthcare, paw, pmml, predictive analytics, predictive analytics world, r, zementis