Wikipedia Page Traffic Statistics Dataset

Editor SDC
7 Min Read

Contents
  • Wikistats
  • Wikilinks (1.1G)
  • Wikidump (29G)

I’ve published a Wikipedia Page Traffic Data Set containing a 320 GB sample of the data used to power trendingtopics.org (I’ll talk about Trending Topics more in an upcoming post). The EBS snapshot includes 7 months of hourly page traffic statistics for over 8 million Wikipedia articles (~1 TB uncompressed), along with the associated Wikipedia content, linkgraph, and metadata. The English Wikipedia subset contains ~2.5 million articles.

It only takes a couple of minutes to sign up for an Amazon EC2 account and set up access to the data as an EBS volume from the AWS Management Console.

If you want to work entirely from the command line, you will need to complete the steps in the Getting Started Guide. When you are set up to use EC2, launch a small EC2 Ubuntu instance from your local machine:

    $ ec2-run-instances ami-5394733a -k gsg-keypair -z us-east-1a

Once it is running and you have the instance ID, create and attach an EBS volume using the public snapshot snap-753dfc1c (make sure the volume is created in the same availability zone as the EC2 instance):

    $ ec2-create-volume --snapshot snap-753dfc1c -z us-east-1a
    $ ec2-attach-volume vol-ec06ea85 -i i-df396cb6 -d /dev/sdf

Next, ssh into the instance and mount the volume:

    $ ssh root@ec2-12-xx-xx-xx.z-1.compute-1.amazonaws.com
    root@domU-12-xx-xx-xx-75-81:/mnt# mkdir /mnt/wikidata
    root@domU-12-xx-xx-xx-75-81:/mnt# mount /dev/sdf /mnt/wikidata

See the README files in each subdirectory for more details on these datasets…

Wikistats

The good stuff is sitting in 5000 files in /mnt/wikidata/wikistats/pagecounts/

    /mnt/wikidata/wikistats/pagecounts# ls -l | wc -l
    5068
    /mnt/wikidata/wikistats/pagecounts# ls -lh | head
    total 260G
    -rw-r--r-- 1 root root  49M 2009-02-26 13:34 pagecounts-20081001-000000.gz
    -rw-r--r-- 1 root root  46M 2009-02-26 13:34 pagecounts-20081001-010000.gz
    -rw-r--r-- 1 root root  47M 2009-02-26 13:34 pagecounts-20081001-020000.gz
    -rw-r--r-- 1 root root  44M 2009-02-26 13:34 pagecounts-20081001-030000.gz
    -rw-r--r-- 1 root root  45M 2009-02-26 13:34 pagecounts-20081001-040000.gz
    -rw-r--r-- 1 root root  47M 2009-02-26 13:35 pagecounts-20081001-050001.gz
    -rw-r--r-- 1 root root  45M 2009-02-26 13:35 pagecounts-20081001-060000.gz
    -rw-r--r-- 1 root root  50M 2009-02-26 13:35 pagecounts-20081001-070000.gz
    -rw-r--r-- 1 root root  51M 2009-02-26 13:35 pagecounts-20081001-080000.gz
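
To get a quick feel for the raw data, you can peek at one of the hourly files straight off the mounted volume; for example (any file name from the listing above will do):

    $ zcat pagecounts-20081001-000000.gz | head -5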

This directory contains hourly Wikipedia article traffic logs covering the 7-month period from October 1, 2008 to April 30, 2009. The data is logged regularly from the Wikipedia Squid proxies by Domas Mituzas.

Each log file is named with the date and time of collection: pagecounts-20090430-230000.gz

Each line has 4 fields:

projectcode, pagename, pageviews, bytes
    en Barack_Obama 997 123091092
    en Barack_Obama%27s_first_100_days 8 850127
    en Barack_Obama,_Jr 1 144103
    en Barack_Obama,_Sr. 37 938821
    en Barack_Obama_%22HOPE%22_poster 4 81005
    en Barack_Obama_%22Hope%22_poster 5 102081
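
Since the fields are whitespace-delimited, a one-line awk pass is enough to aggregate them. As a rough sketch (the article name is just an example), this totals one article's English page views across a full day of hourly files:

    $ zcat pagecounts-20081001-*.gz | \
        awk '$1 == "en" && $2 == "Barack_Obama" {sum += $3} END {print sum}'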

Wikilinks (1.1G)

Contains a Wikipedia linkgraph dataset provided by Henry Haselgrove.

These files contain all links between proper English-language Wikipedia pages, that is, pages in “namespace 0”. This includes disambiguation pages and redirect pages.

In links-simple-sorted.txt, there is one line for each page that has links from it. The format of the lines is ready for processing by Hadoop:

    from1: to11 to12 to13 ...
    from2: to21 to22 to23 ...
    ...

where from1 is an integer labelling a page that has links from it, and to11 to12 to13 … are integers labelling all the pages that the page links to. To find the page title that corresponds to integer n, just look up the n-th line in the file titles-sorted.txt.
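
As a rough sketch of how the two files fit together (run from whichever directory on the volume holds them; the ID 5300 is made up for illustration):

    $ sed -n '5300p' titles-sorted.txt          # title of the page with ID 5300
    $ grep '^5300: ' links-simple-sorted.txt    # every page that page 5300 links to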

Wikidump (29G)

Contains the raw Wikipedia dumps from March along with some processed versions of the data. One of the useful files I created provides a direct lookup table for Wikipedia article redirects in page_lookup_redirects.txt, which can be useful for name standardization and search.

Here is a sample query run when the file is loaded into MySQL:

    mysql> select redirect_title, true_title from page_lookups
        -> where page_id = 534366;
    +------------------------------------------------+--------------+
    | redirect_title                                 | true_title   |
    +------------------------------------------------+--------------+
    | Barack_Obama                                   | Barack Obama |
    | Barak_Obama                                    | Barack Obama |
    | 44th_President_of_the_United_States            | Barack Obama |
    | Barach_Obama                                   | Barack Obama |
    | Senator_Barack_Obama                           | Barack Obama |
    | .....                                          | .....        |
    | Rocco_Bama                                     | Barack Obama |
    | Barack_Obama's                                 | Barack Obama |
    | B._Obama                                       | Barack Obama |
    +------------------------------------------------+--------------+
    110 rows in set (11.15 sec)
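
The same table also works in the other direction for name standardization: given a redirect title, fetch the canonical one. A minimal sketch, assuming the file has already been loaded into the page_lookups table shown above (wikidump is just a placeholder database name):

    $ mysql wikidump -e "SELECT true_title FROM page_lookups WHERE redirect_title = 'Barak_Obama'"   # 'wikidump' is a placeholder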
