Web Crawling Automation

Editor SDC
3 Min Read

Apart from the various ways you can use Perl or other scripting languages for automated web crawling, this is a relatively low-tech solution for people who want to download web pages or web data. Some people also call it web scraping.

 

The first method uses the RCurl package (from the R-Help archives).

The R-help list can also be found here: http://www.nabble.com/R-help-f13820.html.

 

> library(RCurl)
> my.url <- "http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2"
> getURL(my.url)

A variation is the following line of code, which tells getURL to follow HTTP redirects:

getURL(my.url, followlocation = TRUE)

To inspect the information being sent from R to the server and received back, use the verbose option:

getURL(my.url, verbose = TRUE)
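
Since the point is automation, the same call can be looped over several pages. The sketch below is not from the original post: the URL list and file names are placeholders, and it assumes each page is small enough to hold in memory.

> library(RCurl)
> # hypothetical URL list -- substitute the pages you actually want to crawl
> urls <- c("http://www.example.com/page1.html",
+           "http://www.example.com/page2.html")
> pages <- list()
> for (i in seq_along(urls)) {
+   # followlocation = TRUE handles redirects, as shown above
+   pages[[urls[i]]] <- getURL(urls[i], followlocation = TRUE)
+   # write each page to disk for later processing
+   writeLines(pages[[urls[i]]], paste("page", i, ".html", sep = ""))
+   Sys.sleep(1)  # pause between requests to be polite to the server
+ }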

The second method uses the RDCOMClient package in R, which drives an Internet Explorer instance through COM:

> library(RDCOMClient)
> my.url <- "http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2"
> ie <- COMCreate("InternetExplorer.Application")
> txt <- list()
> ie$Navigate(my.url)
NULL
> while(ie[["Busy"]]) Sys.sleep(1)
> txt[[my.url]] <- ie[["document"]][["body"]][["innerText"]]
> txt
$`http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2`

[1] "Skip to article Try Electronic Edition Log …
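
Since the Busy flag could in principle stay set forever, the wait loop is worth guarding with a timeout. The helper below is a sketch rather than part of the original post: the function name and the 30-second limit are arbitrary, and RDCOMClient, like Internet Explorer itself, works only on Windows.

> # hypothetical helper: fetch a page via IE, giving up after a timeout
> fetchWithIE <- function(url, timeout = 30) {
+   ie <- COMCreate("InternetExplorer.Application")
+   ie$Navigate(url)
+   waited <- 0
+   while (ie[["Busy"]] && waited < timeout) {
+     Sys.sleep(1)
+     waited <- waited + 1
+   }
+   if (ie[["Busy"]]) stop("page did not finish loading within the timeout")
+   page.text <- ie[["document"]][["body"]][["innerText"]]
+   ie$Quit()  # close the browser instance when done
+   page.text
+ }
> txt <- fetchWithIE(my.url)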

The third way (a personal favorite) is the Firefox add-on iMacros from www.iopus.com, which helps if you need to extract large amounts of data and copy and paste it into text and Excel files. The add-on works almost the same way as the Record Macro feature in Excel, with the difference that it records all the clicks, downloads, URLs, and so on from the browser.

It can even automate website testing and data-entry tasks.

While the Firefox add-on is free, the Internet Explorer version costs 49 USD.

www.decisionstats.com

Tags: R, automation, web scraping, web crawler

 
