---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, echo = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "README-"
)
```
# threadpool
The `threadpool` package implements a simple parallel programming backend using a thread pool model that can be dynamically resized during operation. The package builds clusters by setting up a queue of input jobs and having the individual cluster nodes retrieve jobs from the queue one at a time. The queue is stored on-disk and nodes communicate with each other via the filesystem.
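To make the model concrete, here is a toy sketch of a pull-based, on-disk task queue in plain base R. This is purely illustrative and is *not* `threadpool`'s actual implementation: each task is a file in a shared directory, and a worker claims a task by renaming the file, which is atomic on most filesystems (a failed rename means another worker got there first).

```{r queue-model, eval = FALSE}
## Toy illustration of the pull-based queue model (not threadpool's internals)
queue_dir <- tempfile("jobqueue")
dir.create(queue_dir)

## Enqueue five tasks, one file per task
for (i in 1:5) {
    saveRDS(i, file.path(queue_dir, sprintf("task-%03d.rds", i)))
}

## A worker claims tasks one at a time until the queue is empty
worker <- function(id) {
    repeat {
        tasks <- list.files(queue_dir, pattern = "^task-", full.names = TRUE)
        if (length(tasks) == 0L)
            break
        claimed <- paste0(tasks[1L], ".", id)
        if (file.rename(tasks[1L], claimed)) {  ## failed rename = lost the race
            x <- readRDS(claimed)
            file.remove(claimed)
            message("worker ", id, ": processed input ", x)
        }
    }
}
worker("w1")
```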
A few benefits of this model of parallel computing are:
* *Dynamic resource allocation*: You can modify the number of resources dedicated to a job while it is running, as they become available or unavailable (see the sketch after this list).
* *Automatic load balancing*: Tasks are not prescheduled to specific nodes of the cluster and so long-running jobs do not prevent faster-running jobs from executing.
* *Fault tolerance and restartability*: Tasks can continue executing as nodes come in and out of the cluster. In particular, if the entire cluster goes down, the job can be automatically restarted where it left off.
* *Seamless switching to parallel mode*: It's easy to start and debug a job in single-processor mode and then ramp up to parallel computing without needing to start the job over again.
* *Overall job resizing*: If, in the middle of a run, you decide you want to add more tasks, you can simply add them without having to start the whole job over again.
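For example, because the queue lives on disk, extra workers can in principle be started in separate R sessions and pointed at the same cluster. The sketch below assumes a `cluster_join()` function corresponding to the "join the cluster" step described in the Example section; the exact function name is an assumption, so check the package documentation.

```{r resize-sketch, eval = FALSE}
## Hypothetical sketch: run in a *second* R session while the cluster is
## executing. `cluster_join()` is assumed to attach to an existing on-disk
## cluster by name; consult the package docs for the actual joining API.
library(threadpool)
cl <- cluster_join("my_cluster")  ## attach to the running cluster
cluster_run(cl)                   ## pull tasks from the same shared queue
```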
This package will be most useful in situations where the jobs being run
* are embarrassingly parallel, and so don't require complex communication between nodes;
* are relatively long-running, on the order of hours or days;
* are being run on a batch/queued system where the availability of processors varies over time;
* have individual tasks that are heterogeneous in their runtimes.
## Installation
You can install `threadpool` from GitHub with:
```{r gh-installation, eval = FALSE}
# install.packages("remotes")
remotes::install_github("rdpeng/threadpool")
```
The `threadpool` package makes use of the `thor` package by Rich FitzJohn, which can be installed from GitHub with:
```{r gh-installation-thor, eval = FALSE}
remotes::install_github("richfitz/thor")
```
In addition, you need to install the `queue` package from GitHub with:
```{r gh-installation-queue, eval = FALSE}
remotes::install_github("rdpeng/queue")
```
## Example
This is a basic example of how you might invoke the `threadpool` package. The basic approach is to initialize the cluster (which also adds the tasks to it), join the cluster, and then run it to start executing jobs. Once the cluster has finished running, you can retrieve the results and, finally, delete the cluster.
```{r example, eval = TRUE}
library(threadpool)
data(airquality)
x <- as.list(airquality)
f <- function(x) {
mean(x, na.rm = TRUE)
}
## Initialize the cluster
cl <- cluster_initialize("my_cluster", x, f)
## Run jobs
cl <- cluster_run(cl)
## Gather the output
r <- cluster_reduce(cl)
r[1:3]
## Clean up
delete_cluster("my_cluster")
```
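
Because the task queue is stored on disk, an interrupted run should also be resumable. The sketch below assumes, per the fault-tolerance description above, that calling `cluster_run()` again continues with whatever tasks remain in the queue; it is a sketch of the intended behavior, not a verified recipe.

```{r restart-sketch, eval = FALSE}
## Sketch of restartability (assumes cluster_run() resumes with the tasks
## remaining in the on-disk queue, per the package description above)
cl <- cluster_initialize("my_cluster", x, f)
cl <- cluster_run(cl)   ## suppose this run is interrupted part-way through
cl <- cluster_run(cl)   ## should pick up where the previous run left off
r <- cluster_reduce(cl)
delete_cluster("my_cluster")
```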