Web Crawling Automation

3 Min Read

Apart from the various ways you can use Perl or other scripting languages for automated web crawling, this is a relatively low-technology solution for people who want to download web pages or web data. Some people also call this web scraping.

 

The first method uses the RCurl package (the example comes from the R-Help archives).

The R-Help list archives can be found at http://www.nabble.com/R-help-f13820.html.

 

> library(RCurl)
> my.url <- "http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2"
> getURL(my.url)

A variation is the following line of code, which also follows any HTTP redirects:

getURL(my.url, followlocation = TRUE)

To see the information being sent from R to the server, and what R receives back, add the verbose option:

getURL(my.url, verbose = TRUE)
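
If you need to pull down several pages in one run, the same getURL() call can be wrapped in a loop. The short sketch below is only an illustration; the URL vector and the page1.html, page2.html file names are made up for the example:

library(RCurl)

# illustrative list of pages to download
my.urls <- c("http://www.r-project.org/",
             "http://www.nytimes.com/")

for (i in seq_along(my.urls)) {
  # fetch each page, following any redirects
  page <- getURL(my.urls[i], followlocation = TRUE)
  # save the raw HTML as page1.html, page2.html, ... for later scraping
  writeLines(page, paste("page", i, ".html", sep = ""))
}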

The second method uses the RDCOMClient package in R, which drives Internet Explorer through COM.

> library(RDCOMClient)
> my.url <- "http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2"
> ie <- COMCreate("InternetExplorer.Application")
> txt <- list()
> ie$Navigate(my.url)
NULL
> while(ie[["Busy"]]) Sys.sleep(1)
> txt[[my.url]] <- ie[["document"]][["body"]][["innerText"]]
> txt
$`http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html?_r=2`

[1] “Skip to article Try Electronic Edition Log …
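
The same Internet Explorer instance can also be reused to walk through a whole list of pages and then shut down. This is only a sketch, with an illustrative vector of URLs; the Navigate, Busy, and innerText calls are the same as above, and Quit() is the standard method on the InternetExplorer.Application COM object:

library(RDCOMClient)

# illustrative list of pages to crawl
urls <- c("http://www.r-project.org/",
          "http://cran.r-project.org/")

ie  <- COMCreate("InternetExplorer.Application")
txt <- list()

for (u in urls) {
  ie$Navigate(u)
  # wait until the browser has finished loading the page
  while (ie[["Busy"]]) Sys.sleep(1)
  txt[[u]] <- ie[["document"]][["body"]][["innerText"]]
}

ie$Quit()   # close the browser when done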

The third way (a personal favorite) is to use the iMacros add-on for Firefox, from www.iopus.com, if you need to extract large amounts of data and copy and paste it into text and Excel files. The add-on works much like the Record Macro feature in Excel, with the difference that it records all the clicks, downloads, URLs, and so on from the browser.

It can even automate website testing and data-entry tasks.

While the Firefox add-on is free, the Internet Explorer version costs 49 USD.

www.decisionstats.com

Tags: R, automation, web scraping, web crawler

 
