We should expect our users to try slapping `torch.jit.compile` on the top level of an RNN, which means that however far they unroll the RNN, the compiler is going to run on the entire unrolled trace. That means that, on real-world RNNs, compiler performance may actually be a problem.
For example, I took bnlstm (https://github.com/pytorch/benchmark/blob/master/benchmarks/bnlstm.py) and ran compile on the top-level model. Because bnlstm is fed the MNIST data set pixel by pixel, its unrolled trace is 784 steps deep. It took 17s to compile the trace (and this is a conservative estimate, because for other reasons compilation failed midway through).
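To make the failure mode concrete, here's a minimal sketch of the pixel-by-pixel pattern. `PixelLSTM` is a made-up stand-in for bnlstm (the real model also has batch norm), and I use `torch.jit.trace` to illustrate, since, like the compile path discussed here, it records the Python loop fully unrolled:

```python
import torch
import torch.nn as nn

class PixelLSTM(nn.Module):
    """Toy stand-in for bnlstm: an LSTM fed one MNIST pixel per step."""
    def __init__(self, hidden_size=128):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(1, hidden_size)
        self.fc = nn.Linear(hidden_size, 10)

    def forward(self, x):
        # x: (batch, 784, 1) -- one scalar pixel per timestep
        h = x.new_zeros(x.size(0), self.hidden_size)
        c = x.new_zeros(x.size(0), self.hidden_size)
        for t in range(x.size(1)):
            # Each iteration is inlined into the trace, so the recorded
            # graph ends up with 784 copies of the cell body.
            h, c = self.cell(x[:, t], (h, c))
        return self.fc(h)

model = PixelLSTM()
example = torch.randn(4, 784, 1)
# The Python loop is flattened at trace time, so graph size (and hence
# compile time) grows linearly with sequence length.
traced = torch.jit.trace(model, example)
```

The point is that the compiler's input scales with unroll length, not model source size, so even a small model can produce a very large trace.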
We don't necessarily have to fix this, but it will be important to communicate to users.