A comparative study of TensorFlow vs PyTorch.
This repository provides a comparative analysis of TensorFlow and PyTorch, for those who want to learn TensorFlow while already being familiar with PyTorch, or vice versa.
TensorFlow
Eager Execution (Oct 17, 2018)
TensorFlow has also launched a dynamic graph framework, eager execution, which enables define-by-run programming.
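For example, a minimal sketch of enabling eager execution with the TF 1.x API (the era of the announcement above):

```python
import tensorflow as tf

# Eager execution must be enabled once, at program startup (TF 1.x API).
tf.enable_eager_execution()

# Operations now run immediately and return concrete values,
# instead of adding nodes to a static graph.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y)  # a tf.Tensor holding the actual result; no Session needed
```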
PyTorch
PyTorch 0.4.0 Migration (Apr 22, 2018)
Variable is merged into Tensor. Since 0.4.0, torch.autograd.Variable returns a torch.Tensor, and torch.Tensor can do everything the old torch.autograd.Variable did.
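A minimal sketch of what the merge means in practice (the tensor values here are only illustrative):

```python
import torch
from torch.autograd import Variable

# Since 0.4.0, torch.Tensor itself carries requires_grad,
# so no Variable wrapper is needed for autograd.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()
y.backward()
print(x.grad)  # gradients live directly on the tensor

# Variable is kept for backward compatibility, but it now
# simply returns a plain torch.Tensor.
v = Variable(torch.ones(2, 2))
print(type(v))  # <class 'torch.Tensor'>
```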
| | TensorFlow | PyTorch |
|---|---|---|
| Numpy to tensor | `tf.convert_to_tensor(numpy_array, np.float32)` | `torch.from_numpy(numpy_array)` |
| Tensor to Numpy | `tensorflow_tensor.eval()` | `torch_for_numpy.numpy()` |
| Dimension check | `my_image.shape` (`.shape` variable), `tf.rank(my_image)` (`tf.rank` function) | `torch_for_numpy.shape` (dimensions are also displayed automatically) |
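A runnable sketch of the conversions in the table above, assuming the TF 1.x Session API (`numpy_array` and the other variable names are illustrative):

```python
import numpy as np
import tensorflow as tf
import torch

numpy_array = np.array([[1.0, 2.0], [3.0, 4.0]])

# NumPy -> tensor
tf_tensor = tf.convert_to_tensor(numpy_array, np.float32)
torch_tensor = torch.from_numpy(numpy_array)

# Tensor -> NumPy
with tf.Session() as sess:
    tf_back = tf_tensor.eval()     # .eval() needs an active session in TF 1.x
torch_back = torch_tensor.numpy()  # shares memory with numpy_array

# Dimension check
print(tf_tensor.shape)     # static shape: (2, 2)
print(tf.rank(tf_tensor))  # rank is itself a tf.Tensor in graph mode
print(torch_tensor.shape)  # torch.Size([2, 2])
```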
1. The Concept of Tensor
[TensorFlow] - Tensors and special type of tensors
(1) What is a TensorFlow "Tensor"?
(2) Special types of Tensors
(3) Convention for Tensor dimension
(4) Numpy to tf.Variable
(5) Direct declaration
(6) Difference Between Special Tensors and tf.Variable (TensorFlow)
[PyTorch] - Torch Tensor and torch.autograd.Variable
Basics of PyTorch Tensors.
(1) PyTorch Tensor
(2) PyTorch's dynamic graph feature
(3) What does torch.autograd.Variable contain?
(4) Backpropagation with dynamic graph
[TensorFlow] tf.convert_to_tensor or .eval()
[PyTorch] .numpy() or torch.from_numpy()
[TensorFlow] .shape or tf.rank() followed by .eval()
Automatically Displayed PyTorch Tensor Dimension
.shape variable in PyTorch
Reshape tf.Tensor with tf.reshape
Handling the Rest of Dimension with "-1"
Reshape PyTorch Tensor with .view()
Handling the Rest of Dimension with "-1"
Copy the Dimension of Another PyTorch Tensor with .view_as()
2. Creating a Variable
[TensorFlow]
[PyTorch] Creating PyTorch Variable - torch.autograd.Variable
1. TensorFlow vs. PyTorch Comparison
2. Dynamic Graph and Static Graph
- There are a few distinct differences between TensorFlow and PyTorch when it comes to data computation.
| | TensorFlow | PyTorch |
|---|---|---|
| Framework | Define-and-run | Define-by-run |
| Graph | Static | Dynamic |
| Debug | Non-native debugger (tfdbg) | pdb (ipdb), the Python debugger |
How "Graph" is defined in each framework?
TensorFlow:

- Static graph.
- Define a computational graph once, then execute the same graph repeatedly (a code sketch follows this list).
- Pros:
  (1) Optimizes the graph up front, which enables better distributed computation.
  (2) Repeated computation incurs no additional graph-building cost.
- Cons:
  (1) Difficult to perform a different computation for each data point.
  (2) The structure is more complicated and harder to debug than a dynamic graph.
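A minimal define-and-run sketch in TF 1.x, illustrating the static-graph workflow described above:

```python
import tensorflow as tf

# Define the graph once...
a = tf.placeholder(tf.float32, shape=())
b = tf.placeholder(tf.float32, shape=())
c = a * b

# ...then execute the same graph repeatedly with different inputs.
with tf.Session() as sess:
    for x, y in [(2.0, 3.0), (4.0, 5.0)]:
        print(sess.run(c, feed_dict={a: x, b: y}))  # 6.0, then 20.0
```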
PyTorch:

- Dynamic graph.
- Does not define a graph in advance; every forward pass builds a new computational graph (a code sketch follows this list).
- Pros:
  (1) Debugging is easier than with a static graph.
  (2) Keeps the whole structure concise and intuitive.
  (3) A different computation can be performed for each data point and at each time step.
- Cons:
  (1) Rebuilding the graph on repeated computation can slow execution down.
  (2) Difficult to distribute the workload at the beginning of training.
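A minimal define-by-run sketch, illustrating the per-data-point control flow described above (`forward` is just an illustrative function name):

```python
import torch

# The graph is rebuilt on every forward pass, so plain Python
# control flow can route each data point differently.
def forward(x):
    if x.sum() > 0:              # data-dependent branch, decided at run time
        return (x * 2).sum()
    return (x ** 2).sum()

for data in (torch.randn(3, requires_grad=True),
             torch.randn(3, requires_grad=True)):
    out = forward(data)
    out.backward()               # backprop through whichever graph was built
    print(data.grad)
```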