Pipeline for downloading, parsing and aggregating static page view dumps from Wikipedia.
If you need annual pageview counts for specific Wikipedia pages from before 2015, you unfortunately cannot rely on the API (at least not at the time of writing), as it only covers newer records (post-2015). However, a collection of static dumps is available.
This pipeline was made in order to:
- Fetch the names of all files to be downloaded
- Download the needed files (parallelized)
- Parse the downloaded files (parallelized)
- Aggregate the files for each year to get the annual number of views for the selected pages
The following scripts need to be run in the order listed below:
First, we need to get the names of the files we want to download. A set of files is available for each year, so we also need to specify which years we are interested in.
The script generates a csv file containing file names, file sizes and URLs from which the files should be downloaded. Script parameters:
- year_start - first year to be downloaded
- year_end - last year to be downloaded (all years in between are downloaded)
- output_dir - directory where files for each year will be stored
python fetch_file_names.py [year_start] [year_end] [output_dir]
file | size | url |
---|---|---|
pagecounts-20140101-000000.gz | 82 | https://.. |
pagecounts-20140201-000000.gz | 81 | https://.. |
... | ... | ... |
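A minimal sketch of how such a file list could be collected, assuming the dumps are listed as monthly HTML index pages under https://dumps.wikimedia.org/other/pagecounts-raw/ (the directory layout, the BeautifulSoup dependency and the omission of the size column are assumptions; the actual fetch_file_names.py may work differently):

```python
import csv
import sys

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://dumps.wikimedia.org/other/pagecounts-raw"  # assumed dump location

def list_files(year_start, year_end):
    """Yield (file_name, url) for every pagecounts archive listed in the monthly index pages."""
    for year in range(year_start, year_end + 1):
        for month in range(1, 13):
            index_url = f"{BASE_URL}/{year}/{year}-{month:02d}/"
            resp = requests.get(index_url)
            if resp.status_code != 200:
                continue  # month not available
            soup = BeautifulSoup(resp.text, "html.parser")
            for link in soup.find_all("a"):
                name = link.get("href", "")
                if name.startswith("pagecounts-") and name.endswith(".gz"):
                    yield name, index_url + name

if __name__ == "__main__":
    year_start, year_end, output_dir = int(sys.argv[1]), int(sys.argv[2]), sys.argv[3]
    with open(f"{output_dir}/file.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "url"])
        for name, url in list_files(year_start, year_end):
            writer.writerow([name, url])
```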
Now that we have the file names and URLs, we can download the files!
This script concurrently downloads the Wikipedia pagecount dumps (gzip). The previously generated file.csv contains the URLs of the files to download. path_save refers to the directory where the downloaded files should be saved.
python downloader.py [file.csv] [path_save] [thread_number]
THE SERVER CURRENTLY BLOCKS DOWNLOADS WHEN USING MORE THAN 3 THREADS
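A sketch of how such a concurrent download could look with a thread pool; the url column name and the streaming/naming details are assumptions, and the real downloader.py may handle retries and errors differently:

```python
import csv
import os
import sys
from concurrent.futures import ThreadPoolExecutor

import requests

def download(url, path_save):
    """Stream one dump file to disk, keeping its original file name."""
    local_path = os.path.join(path_save, url.rsplit("/", 1)[-1])
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open(local_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return local_path

if __name__ == "__main__":
    file_csv, path_save, thread_number = sys.argv[1], sys.argv[2], int(sys.argv[3])
    with open(file_csv, newline="") as f:
        urls = [row["url"] for row in csv.DictReader(f)]
    # Keep thread_number <= 3, see the warning above.
    with ThreadPoolExecutor(max_workers=thread_number) as pool:
        for path in pool.map(download, urls, [path_save] * len(urls)):
            print("downloaded", path)
```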
As the files contain information on every Wikipedia page that was accessed within the hour specified in the file name, we should remove the page names that we do not need.
For parsing, a csv file containing wikipedia page names has to be provided in the following format:
names_u | names_q |
---|---|
Barack_Obama | Barack_Obama |
René_Konen | Ren%C3%A9_Konen |
Zoran_Đinđić | Zoran_%C4%90in%C4%91i%C4%87 |
... | ... |
The column names_u is the standard utf-8 (unquoted) representation; however, the dump files use URL-quoted page names, so we also need names_q, the 'quoted' representation. Both quoting and unquoting can be done with urllib.
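For example, with urllib.parse the quoted column can be derived from the unquoted one and vice versa:

```python
from urllib.parse import quote, unquote

print(quote("Zoran_Đinđić"))       # Zoran_%C4%90in%C4%91i%C4%87
print(unquote("Ren%C3%A9_Konen"))  # René_Konen
```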
Opens the files in files_dir, filters them by the names in page_names_file and by project_name ("en" for English Wikipedia, "de" for German, etc.), and saves the filtered files in save_dir using the specified num_threads.
python parser.py [page_names_file] [files_dir] [save_dir] [project_name] [num_threads]
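A minimal, single-threaded sketch of this filtering step, assuming the standard pagecounts line format project page_title count_views bytes_transferred (the real parser.py also parallelizes over num_threads):

```python
import gzip
import os
import sys

import pandas as pd

def filter_file(path, names, project_name, save_dir):
    """Keep only lines of the given project whose quoted title appears in names['names_q']."""
    wanted_q = set(names["names_q"])
    kept = []
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            parts = line.split(" ")
            if len(parts) < 3:
                continue
            project, title, views = parts[0], parts[1], parts[2]
            if project == project_name and title in wanted_q:
                kept.append((title, int(views)))
    out = pd.DataFrame(kept, columns=["names_q", "views"]).merge(names, on="names_q")
    out.to_csv(os.path.join(save_dir, os.path.basename(path) + ".csv"), index=False)

if __name__ == "__main__":
    page_names_file, files_dir, save_dir, project_name = sys.argv[1:5]
    names = pd.read_csv(page_names_file)  # columns: names_u, names_q
    for name in sorted(os.listdir(files_dir)):
        if name.endswith(".gz"):
            filter_file(os.path.join(files_dir, name), names, project_name, save_dir)
```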
After parsing the files, it is time to aggregate the page views!
Loads the files from file_dir as pandas dataframes, concatenates them, aggregates the views, and saves the result as a csv at save_path.
python groupby.py [file_dir] [save_path]
names_u | names_q | views |
---|---|---|
Barack_Obama | Barack_Obama | 3562998 |
René_Konen | Ren%C3%A9_Konen | 156456 |
Zoran_Đinđić | Zoran_%C4%90in%C4%91i%C4%87 | 96846 |
... | ... | ... |
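A sketch of this aggregation step with pandas, assuming the parsed files are csvs with names_u, names_q and views columns (the real groupby.py may differ in details):

```python
import glob
import os
import sys

import pandas as pd

if __name__ == "__main__":
    file_dir, save_path = sys.argv[1], sys.argv[2]
    frames = [pd.read_csv(p) for p in glob.glob(os.path.join(file_dir, "*.csv"))]
    combined = pd.concat(frames, ignore_index=True)
    # Sum the hourly counts per page to get the total number of views.
    totals = combined.groupby(["names_u", "names_q"], as_index=False)["views"].sum()
    totals.to_csv(save_path, index=False)
```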
#todo requirements.txt