---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```
# torch <a href='https://torch.mlverse.org'><img src='man/figures/torch.png' align="right" height="139" /></a>
[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://www.tidyverse.org/lifecycle/#experimental)
![R build status](https://github.com/mlverse/torch/workflows/Test/badge.svg)
[![CRAN status](https://www.r-pkg.org/badges/version/torch)](https://CRAN.R-project.org/package=torch)
[![](https://cranlogs.r-pkg.org/badges/torch)](https://cran.r-project.org/package=torch)
## Installation
Run:
```r
remotes::install_github("mlverse/torch")
```
When the package is first loaded, additional software will be downloaded and installed.
## Example
Currently, this package is a proof of concept: you can create a torch Tensor from an R object, and convert it back from a torch Tensor to an R object.
```{r}
library(torch)
x <- array(runif(8), dim = c(2, 2, 2))
y <- torch_tensor(x, dtype = torch_float64())
y
identical(x, as_array(y))
```
### Simple Autograd Example
In the following snippet we let torch, using its autograd feature, calculate the derivatives of `y = w*x + b` with respect to `x`, `w`, and `b`:
```{r}
x <- torch_tensor(1, requires_grad = TRUE)
w <- torch_tensor(2, requires_grad = TRUE)
b <- torch_tensor(3, requires_grad = TRUE)
y <- w * x + b
y$backward()
x$grad
w$grad
b$grad
```
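Since `y = w*x + b`, the expected gradients can be checked by hand: `dy/dx = w = 2`, `dy/dw = x = 1`, and `dy/db = 1`. As a sketch, the same calculation in base R (no torch required):

```r
# Analytic derivatives of y = w * x + b, evaluated at the values above
x <- 1; w <- 2; b <- 3
grad_x <- w  # dy/dx = w
grad_w <- x  # dy/dw = x
grad_b <- 1  # dy/db = 1
c(grad_x, grad_w, grad_b)  # 2 1 1
```

These values should match what `x$grad`, `w$grad`, and `b$grad` report above.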
### Linear Regression
In the following example we fit a linear regression from scratch using torch's
autograd.
**Note**: all methods that end with `_` (e.g. `sub_`) modify their tensors in
place.
```{r, eval=TRUE}
x <- torch_randn(100, 2)
y <- 0.1 + 0.5*x[,1] - 0.7*x[,2]
w <- torch_randn(2, 1, requires_grad = TRUE)
b <- torch_zeros(1, requires_grad = TRUE)
lr <- 0.5
for (i in 1:100) {
  y_hat <- torch_mm(x, w) + b
  loss <- torch_mean((y - y_hat$squeeze())^2)
  loss$backward()
  with_no_grad({
    w$sub_(w$grad * lr)
    b$sub_(b$grad * lr)
    w$grad$zero_()
    b$grad$zero_()
  })
}
print(w)
print(b)
```
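As a sanity check, base R's closed-form `lm()` fit recovers the true coefficients for data generated the same way. A minimal sketch (with freshly generated data; the recovery is exact because the model is noise-free):

```r
set.seed(42)                       # for reproducibility
x <- matrix(rnorm(200), ncol = 2)  # 100 observations, 2 predictors
y <- 0.1 + 0.5 * x[, 1] - 0.7 * x[, 2]
fit <- lm(y ~ x)
coef(fit)  # intercept 0.1, slopes 0.5 and -0.7
```

The torch estimates for `b` and `w` should converge toward these values as the gradient-descent loop runs.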
## Contributing
No matter your current skill level, it's possible to contribute to `torch` development.
See the contributing guide for more information.