# tensorflow-vs-pytorch

A comparative study of TensorFlow vs. PyTorch.

This repository provides a comparative analysis of TensorFlow and PyTorch for those who want to learn TensorFlow while already being familiar with PyTorch, or vice versa.

## Important Updates

### TensorFlow

**Eager Execution (Oct 17, 2018)**
TensorFlow has also launched a dynamic graph framework, eager execution, which enables define-by-run programming.
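A minimal sketch of eager execution (assuming TensorFlow 1.x, where it must be enabled explicitly; in TensorFlow 2.x it is on by default):

```python
import tensorflow as tf

# TensorFlow 1.x: eager execution must be enabled before any other op runs.
tf.enable_eager_execution()

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

# Define-by-run: the result is available immediately, no tf.Session needed.
print(y.numpy())
```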

### PyTorch

**PyTorch 0.4.0 Migration (Apr 22, 2018)**
Variable is merged into Tensor: from 0.4.0 on, torch.autograd.Variable returns a torch.Tensor, and torch.Tensor can do everything the old torch.autograd.Variable did.
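A minimal sketch of the post-0.4.0 behavior (assuming PyTorch >= 0.4.0):

```python
import torch
from torch.autograd import Variable

# A plain torch.Tensor now tracks gradients itself; no Variable wrapper needed.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()
y.backward()
print(x.grad)      # gradients live directly on the tensor

# The old wrapper still works, but it simply returns a torch.Tensor.
v = Variable(torch.ones(2, 2), requires_grad=True)
print(type(v))     # <class 'torch.Tensor'>
```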

## vs. Table

|                 | TensorFlow | PyTorch |
|-----------------|------------|---------|
| Numpy to tensor | Numpy to `tf.Tensor`<br>`tf.convert_to_tensor(numpy_array, np.float32)` | Numpy to `torch.Tensor`<br>`torch.from_numpy(numpy_array)` |
| Tensor to Numpy | `tf.Tensor` to Numpy<br>`tensorflow_tensor.eval()` | `torch.Tensor` to Numpy<br>`torch_for_numpy.numpy()` |
| Dimension check | `.shape` attribute, `tf.rank` function<br>`my_image.shape`, `tf.rank(my_image)` | Automatically displayed dim., `.shape` attribute<br>`torch_for_numpy.shape` |
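The table above condenses into the following sketch (assuming TensorFlow 1.x graph mode, where `.eval()` needs an active `tf.Session`; the variable names are illustrative):

```python
import numpy as np
import tensorflow as tf
import torch

numpy_array = np.ones((2, 3), dtype=np.float32)

# TensorFlow (1.x graph mode)
tf_tensor = tf.convert_to_tensor(numpy_array, np.float32)   # NumPy -> tf.Tensor
with tf.Session() as sess:
    back_to_numpy = tf_tensor.eval()                         # tf.Tensor -> NumPy
    rank = tf.rank(tf_tensor).eval()                         # dimension check
print(tf_tensor.shape, rank)

# PyTorch
torch_for_numpy = torch.from_numpy(numpy_array)              # NumPy -> torch.Tensor
back_to_numpy = torch_for_numpy.numpy()                      # torch.Tensor -> NumPy
print(torch_for_numpy.shape)                                 # dimension check
```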

## Table of Contents

### 01. Tensor

1. The Concept of Tensor
   - [TensorFlow] Tensors and special types of tensors
     - (1) What is a TensorFlow "Tensor"?
     - (2) Special type Tensors
     - (3) Convention for Tensor dimension
     - (4) Numpy to tf.Variable
     - (5) Direct declaration
     - (6) Difference between special Tensors and tf.Variable (TensorFlow)
   - [PyTorch] Torch tensor and torch.Variable: basics of PyTorch tensors
     - (1) PyTorch Tensor
     - (2) PyTorch's dynamic graph feature
     - (3) What does torch.autograd.Variable contain?
     - (4) Backpropagation with dynamic graph
2. Tensor-Numpy Conversion
   - [TensorFlow] tf.convert_to_tensor or .eval()
     - Numpy to tf.Tensor
     - tf.Tensor to Numpy
   - [PyTorch] .numpy() or torch.from_numpy()
     - Numpy to torch.Tensor
     - torch.Tensor to Numpy
3. Identifying the Dimension
   - [TensorFlow] .shape or tf.rank() followed by .eval()
     - .shape variable in TensorFlow
     - tf.rank function
   - [PyTorch] .shape or .size()
     - Automatically displayed PyTorch Tensor dimension
     - .shape variable in PyTorch
4. Shaping the Tensor Variables
   - [TensorFlow] tf.reshape
     - Reshape tf.Tensor with tf.reshape
     - Handling the rest of the dimensions with "-1"
   - [PyTorch] .view() function
     - Reshape PyTorch Tensor with .view()
     - Handling the rest of the dimensions with "-1"
     - Copy the dimensions of another PyTorch Tensor with .view_as()
5. Shaping the Tensor Variables
6. Datatype Conversion
7. Printing Variables

### 02. Variable

1. Creating a Variable
   - [TensorFlow]
     - Method 1: tf.get_variable()
     - Method 2: tf.Variable
   - [PyTorch] Creating a PyTorch Variable: torch.autograd.Variable
     - The concept of a PyTorch Variable

### 03. Computation of data

1. TensorFlow vs. PyTorch Comparison
2. Dynamic Graph and Static Graph

There are a few distinct differences between TensorFlow and PyTorch when it comes to data computation.

|           | TensorFlow                  | PyTorch                      |
|-----------|-----------------------------|------------------------------|
| Framework | Define-and-run              | Define-by-run                |
| Graph     | Static                      | Dynamic                      |
| Debug     | Non-native debugger (tfdbg) | Python debugger (pdb, ipdb)  |

How "Graph" is defined in each framework?

**TensorFlow:**

- Static graph.

- Define a computational graph once and execute the same graph repeatedly (see the sketch below).

- Pros:

  (1) The graph can be optimized up front, which also makes distributed computation easier.

  (2) Repeated execution adds no graph-construction overhead.

- Cons:

  (1) It is difficult to perform a different computation for each data point.

  (2) The structure is more complicated and harder to debug than a dynamic graph.
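A minimal sketch of define-and-run in TensorFlow 1.x graph mode (variable names are illustrative):

```python
import numpy as np
import tensorflow as tf

# Define the graph once; nothing is computed at this point.
x = tf.placeholder(tf.float32, shape=(None, 3))
w = tf.Variable(tf.ones((3, 1)))
y = tf.matmul(x, w)

# Execute the same graph repeatedly inside a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        batch = np.random.rand(4, 3).astype(np.float32)
        print(sess.run(y, feed_dict={x: batch}).shape)
```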

**PyTorch:**

- Dynamic graph.

- Does not define the graph in advance; every forward pass builds a new computational graph (see the sketch below).

- Pros:

  (1) Debugging is easier than with a static graph.

  (2) The overall structure stays concise and intuitive.

  (3) A different computation can be performed for each data point and at each step.

- Cons:

  (1) Repeated computation can be slower, since the graph is rebuilt on every forward pass.

  (2) It is harder to distribute the workload at the beginning of training.
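And a corresponding sketch of define-by-run in PyTorch (variable names are illustrative):

```python
import torch

w = torch.ones(3, 1, requires_grad=True)

# No graph is defined in advance: each forward pass builds a fresh graph,
# so the computation can differ per data point or per iteration.
for step in range(3):
    x = torch.rand(4, 3)
    y = x @ w if step % 2 == 0 else (2 * x) @ w  # data-dependent control flow
    loss = y.sum()
    loss.backward()                              # backprop through this pass's graph
    print(step, w.grad.norm().item())
    w.grad.zero_()
```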
