Data Mining

Web Crawling Automation

Editor SDC
3 Min Read

Apart from the various ways you can use Perl or other scripting languages for automated web crawling, this is a relatively low-technology solution for people who want to download web pages or web data. Some people also call this web scraping.

 

The first method uses the RCurl package (found via the R-help archives).

The R-help list archive is here: http://www.nabble.com/R-help-f13820.html.

 

> library(RCurl)
> my.url <- "http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2"
> getURL(my.url)   # returns the raw HTML of the page as a character string

A variation is the following line of code, which also follows any HTTP redirects:

> getURL(my.url, followlocation = TRUE)   # follow 3xx redirects to the final page

To watch the information being sent from R to the server, and received back by R, turn on verbose output:

> getURL(my.url, verbose = TRUE)   # prints the request and response headers
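
Once the HTML is in R, you still need to pull the data out of it. Here is a minimal sketch of one way to do that, assuming the XML package is installed (that package is my addition, not part of the original recipe):

> library(XML)
> page <- getURL(my.url, followlocation = TRUE)
> doc <- htmlParse(page, asText = TRUE)   # parse the HTML string into a document tree
> xpathSApply(doc, "//p", xmlValue)       # extract the text of every <p> element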

The second method uses the RDCOMClient package in R (Windows only, since it drives Internet Explorer through COM):

> library(RDCOMClient)
> my.url <- "http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2"
> ie <- COMCreate("InternetExplorer.Application")   # launch Internet Explorer via COM
> txt <- list()
> ie$Navigate(my.url)
NULL
> while (ie[["Busy"]]) Sys.sleep(1)   # wait until the page has finished loading
> txt[[my.url]] <- ie[["document"]][["body"]][["innerText"]]   # grab the rendered page text
> txt
$`http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2`

[1] "Skip to article Try Electronic Edition Log …
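
Two small follow-up lines (my addition, not part of the original session) to save that text to a file and close the browser; the file name is just an example:

> writeLines(txt[[my.url]], con = "07program.txt")   # write the extracted text to disk
> ie$Quit()                                          # shut down the IE instance when done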

The third way (a personal favorite) is the Firefox add-on iMacros from www.iopus.com, if you need to extract huge amounts of data and copy and paste it into text and Excel files. The add-on works much the same way as the Record Macro feature in Excel, with the difference that it records all the clicks, downloads, URLs, and so on from the browser.

It can even automate website testing and data-entry tasks.

While the Firefox add-on is free, the Internet Explorer version costs 49 USD.

www.decisionstats.com

Tags: R, automation, web scraping, web crawler

 
