Business Intelligence | Commentary

BI’s Dirty Secrets – The Unfortunate Domination of Manually-Coded Extracts

Rick Sherman

Manually-coded extracts are another dirty secret of the BI world. I've been seeing them for years, in both large and small companies. They grow haphazardly and are never documented, which practically guarantees that they will become an IT nightmare.

How have manually-coded extracts become so prevalent? It’s not as if there aren’t enough data integration tools around, including ETL tools. Even large enterprises that use the correct tools to load their enterprise data warehouses will often resort to manually-coded extracts to load their downstream BI data sources such as data marts, OLAP cubes, reporting databases and spreadsheets.
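
To make this concrete, a "manually-coded extract" is usually nothing more elaborate than a small one-off script that queries a source system and dumps the result somewhere a data mart or spreadsheet can pick it up. The sketch below is purely illustrative (the database, table, columns, and file names are hypothetical), but it is representative of the kind of code that quietly multiplies:

# Hypothetical one-off extract: pull yesterday's orders and dump them to a CSV
# that a spreadsheet or data-mart load picks up. Typical of hand-coded extracts:
# no logging, no error handling, no documentation beyond the file name.
import csv
import sqlite3
from datetime import date, timedelta

SOURCE_DB = "orders.db"            # assumed source database
OUTPUT_CSV = "orders_extract.csv"  # file the downstream "spreadmart" reads

yesterday = (date.today() - timedelta(days=1)).isoformat()

conn = sqlite3.connect(SOURCE_DB)
rows = conn.execute(
    """
    SELECT order_id, customer_id, order_date, amount  -- columns are illustrative
    FROM orders
    WHERE order_date = ?
    """,
    (yesterday,),
)

with open(OUTPUT_CSV, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order_id", "customer_id", "order_date", "amount"])
    writer.writerows(rows)

conn.close()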

After seeing this problem in enough client companies, I’ve got a few theories as to why it happens:

  • Money: The top-tier tools are expensive. They are out of reach for SMBs and can be too expensive even for large enterprises to expand their use from the EDW to downstream BI data sources. There are data integration tools across the price spectrum that would do a great job, but for the most part nobody knows about them. And when they are used, they are misused (see below), so their reputation for producing a solid business ROI suffers.
  • Stretched resources: In large enterprises, the centralized data warehouse team likely has data integration experience, but its backlog of work means that the people creating BI data sources are on their own. So they end up hand-coding. In SMBs, the IT staff is too small to dedicate anyone to data integration, so no one becomes an expert.
  • Data never sleeps: Regardless of the state of data integration expertise and investment at an enterprise, business people still have to run and manage the business. This requires data. If the data has not been integrated for them, they'll figure out some other way to get it, even if it means cutting and pasting data from spreadsheet queries or getting IT to "crank out" SQL scripts. This is why data shadow systems, or spreadmarts, get started and then become so prevalent.
  • You don't know what you don't know: Even when enterprises use data integration or ETL tools, they often don't use them well. The biggest reason people misuse these tools is that they don't have a firm grasp of data integration processes and advanced dimensional modeling. Tools are easy; concepts are harder. Anyone can start coding; it's a lot harder to actually architect and design. Tool vendors don't help when they promote tools that "solve world hunger" and limit training to the tool itself rather than the concepts.

So, here's what happens: instead of following data integration best practices, people design their ETL tool processes the same way they would write a sequential series of SQL scripts to integrate data. In fact, many an ETL process simply executes stored procedures or SQL scripts (see the sketch after this list). Why use the tool at all if you're not going to use its capabilities? When this happens, IT concludes it was a waste of time to use the ETL tool in the first place and that the investment had no ROI. This becomes a self-reinforcing loop that lets IT justify (or rationalize) manual coding.

  • Coding is easier than thinking: There is an inherent bias for IT staff to generate SQL code. They know it (just as business people know spreadsheets), they can crank something out quickly, and it doesn't cost anything extra. The typical scenario: an IT person creates a SQL script or stored procedure to pull data from one source, and things are fine. But several hundred SQL scripts or stored procedures later, this hodgepodge of undocumented pseudo-ETL processes has become the standing method for loading the data warehouse or BI data sources. Each change to that body of code takes longer and longer, and maintaining it consumes more and more resource time. When new data needs to be integrated, another IT person starts the next hodgepodge of undocumented code with yet another simple SQL script.
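
To make the last two points concrete, here is a minimal, hypothetical sketch of what these pseudo-ETL processes tend to look like once they accumulate: an "ETL job" that is really just a runner executing a directory full of undocumented SQL scripts in whatever order their file names imply. The database and script names are invented for illustration and do not reflect any particular tool.

# Hypothetical "ETL job" that is really just a SQL-script runner: it walks a
# directory of hand-written scripts (load_01_customers.sql, load_02_orders.sql,
# and so on) and executes them in file-name order against the warehouse.
# Dependencies and documentation exist only in the original author's head.
import sqlite3
from pathlib import Path

WAREHOUSE_DB = "warehouse.db"      # assumed target database
SCRIPT_DIR = Path("sql_scripts")   # assumed pile of hand-coded extracts

conn = sqlite3.connect(WAREHOUSE_DB)
for script in sorted(SCRIPT_DIR.glob("*.sql")):  # run order implied by file names
    conn.executescript(script.read_text())       # no lineage, no restartability
conn.commit()
conn.close()

When an ETL tool is used only as a scheduler for runners like this, its metadata, lineage, and restart capabilities go unused, which is exactly why the investment appears to produce no ROI.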

How do we get out of this mess? Stay tuned for a future blog post.
