This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

GPU RNN to use TempSpace resource for workspace. #15056

Merged
merged 5 commits on May 25, 2019

Conversation

DickJC123
Contributor

Description

This PR eliminates the test flakiness described in #15034 (non-deterministic, low-probability test_operator_gpu.py:test_rnntanh_bidirectional failures on P40).

The fix moves the cuDNN workspace from a permanent per-op-instance allocation to the shared TempSpace resource. Note that for temporary additional workspace, operators typically request space from the shared global TempSpace resource that is maintained for each GPU worker. This was in fact the approach taken by cudnn_rnn-inl.h before it was combined with rnn-inl.h by #14476, so in a sense this PR simply reverts the portion of #14476 that correlates with the flakiness, a much better result than having to revert the entire PR.
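The difference between the two allocation strategies can be sketched as follows. This is an illustrative Python sketch only: the class and method names (`TempSpace`, `get_space`, `RNNOpShared`) are hypothetical stand-ins, not MXNet's actual C++ resource API.

```python
# Hypothetical sketch (not MXNet's actual API): contrasts a permanent
# per-op-instance workspace with a shared per-worker temp space that
# grows to the largest request and is reused by every operator.

class TempSpace:
    """One scratch buffer per worker, grown on demand and shared by all ops."""

    def __init__(self):
        self._buf = bytearray()

    def get_space(self, nbytes):
        # Grow only when a request exceeds current capacity; otherwise
        # hand back a view of the existing allocation.
        if nbytes > len(self._buf):
            self._buf = bytearray(nbytes)
        return memoryview(self._buf)[:nbytes]

    @property
    def capacity(self):
        return len(self._buf)


class RNNOpPerInstance:
    """Before the fix: each op instance holds its own permanent workspace."""

    def __init__(self, workspace_bytes):
        self._workspace = bytearray(workspace_bytes)  # lives as long as the op

    def forward(self):
        return len(self._workspace)


class RNNOpShared:
    """After the fix: the op requests workspace from the shared pool per call."""

    def __init__(self, temp_space):
        self._temp_space = temp_space

    def forward(self, workspace_bytes):
        ws = self._temp_space.get_space(workspace_bytes)
        return len(ws)
```

With the shared pool, N op instances on one worker consume a single allocation sized to the largest request, rather than N permanent allocations that persist for the lifetime of each op.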

Some minor code cleanup was performed around now-unused RNNOp data members dropout_states_ and dropout_bytes_.

@szha @lihaofd @ptrendx

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • [ X ] Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • [ X ] Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • [ X ] To the best of my knowledge, examples are either not affected by this change or have been fixed to be compatible with this change


@karan6181
Contributor

@mxnet-label-bot add [Operator, RNN, pr-awaiting-review]

@marcoabreu added the Operator, pr-awaiting-review (PR is waiting for code review), and RNN labels on May 24, 2019
Contributor

@pengzhao-intel pengzhao-intel left a comment


Thanks for the fix.
We have verified this PR together with the latest RNN PR #14713 locally, and found no performance or functionality issues.
I have merged #14713 to avoid delaying release 1.5 (we are into the weekend now). Please rebase the code again. Sorry for the inconvenience :(

@DickJC123
Contributor Author

DickJC123 commented May 25, 2019

After battling with the CI and a merge of master (or two), I feel this PR is ready to merge, with approvals from both NVIDIA and Intel reviewers. @szha @eric-haibin-lin @ptrendx?

@szha szha merged commit 136a5df into apache:master May 25, 2019
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
* GPU RNN to use TempSpace resource for workspace.

* Trigger CI.

* Fix syntax error after merge.