This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Multi_sum_sq review, AtomicAdd removal #17002

Merged: 7 commits merged into apache:master on Dec 14, 2019

Conversation

@MoisesHer (Contributor) commented on Dec 7, 2019

Description

Modified the multi_sum_sq operator to avoid nondeterministic behavior, which was potentially caused by the AtomicAdd operation in the GPU kernel.
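For background: the nondeterminism comes from floating-point addition not being associative, while atomicAdd lets the contributions from different CUDA blocks arrive in a different order on each run, so the rounded total can vary. A minimal host-side C++ illustration (not from this PR):

```cpp
#include <cstdio>

int main() {
  // Floating-point addition is not associative: the grouping (and thus
  // the order in which atomicAdd contributions land) changes the result.
  float a = 1e8f, b = -1e8f, c = 1e-3f;
  printf("(a + b) + c = %.10f\n", (a + b) + c);  // 0.0010000000
  printf("a + (b + c) = %.10f\n", a + (b + c));  // 0.0000000000: c is absorbed
  return 0;
}
```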

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage: a specific test for the multi_sum_sq operator has been included
  • To the best of my knowledge, examples are either not affected by this change or have been fixed to be compatible with it

Changes

  • Modified the GPU implementation of the multi_sum_sq operator. The operator launches several CUDA blocks per tensor. In the previous version, an atomic operation was used to reduce the partial sums of squares coming from different blocks. In this new version, each block instead writes its reduction to a temporary storage buffer, and a second kernel is launched to reduce those partial results (see the sketch below).
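A minimal sketch of the two-pass idea for a single tensor. This is illustrative only: the names (ChunkSumSqKernel, part_sums, FinalSumSqKernel) are not the actual MXNet identifiers, and both kernels assume a power-of-two block size of 256 threads.

```cpp
// Pass 1: each block reduces its share of the tensor and writes ONE
// partial result to its own slot -- no atomicAdd involved.
__global__ void ChunkSumSqKernel(const float* data, int n, float* part_sums) {
  __shared__ float smem[256];
  float acc = 0.f;
  // Grid-stride loop: each thread accumulates a private sum of squares.
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
       i += gridDim.x * blockDim.x) {
    acc += data[i] * data[i];
  }
  smem[threadIdx.x] = acc;
  __syncthreads();
  // Fixed-order tree reduction in shared memory.
  for (int s = blockDim.x / 2; s > 0; s >>= 1) {
    if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x + s];
    __syncthreads();
  }
  if (threadIdx.x == 0) part_sums[blockIdx.x] = smem[0];
}

// Pass 2: a single block folds the per-block partials in a fixed order,
// so every run performs exactly the same sequence of rounded additions.
__global__ void FinalSumSqKernel(const float* part_sums, int num_parts,
                                 float* out) {
  __shared__ float smem[256];
  float acc = 0.f;
  for (int i = threadIdx.x; i < num_parts; i += blockDim.x) {
    acc += part_sums[i];
  }
  smem[threadIdx.x] = acc;
  __syncthreads();
  for (int s = blockDim.x / 2; s > 0; s >>= 1) {
    if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x + s];
    __syncthreads();
  }
  if (threadIdx.x == 0) *out = smem[0];
}
```

Launched as ChunkSumSqKernel<<<num_blocks, 256>>>(...) followed by FinalSumSqKernel<<<1, 256>>>(part_sums, num_blocks, out), the additions happen in the same order on every run, so the result is bitwise reproducible; the atomicAdd version depends on the arrival order of blocks.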

@eric-haibin-lin (Member) left a comment:


MXNet uses CamelCase for functions and snake_case for variables. Would you mind updating the var names below? Thanks. Otherwise looks good to me
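For illustration, the convention being requested looks like this (hypothetical declarations, not the PR's actual code, though block_reductions and max_chunks_per_tensor appear in the excerpt further below):

```cpp
// Functions: CamelCase
__global__ void MultiSumSqKernel(const float* data, float* block_reductions);

// Variables: snake_case
int max_chunks_per_tensor = 0;
```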

Resolved review threads on:
  • tests/python/gpu/test_operator_gpu.py
  • src/operator/contrib/multi_sum_sq.cu (two threads)
@MoisesHer (Contributor, Author) replied:

> MXNet uses CamelCase for functions and snake_case for variables. Would you mind updating the var names below? Thanks. Otherwise looks good to me

Thanks! I think I corrected all of them.

@eric-haibin-lin merged commit bbdc1c3 into apache:master on Dec 14, 2019
@eric-haibin-lin mentioned this pull request on Dec 15, 2019
ptrendx pushed a commit to ptrendx/mxnet that referenced this pull request Dec 20, 2019
* Update multi_sum_sq to avoid AtomicAdd

* Add specific test for multi_sum_sq

* Add a determinism test and fix lint issues

* Better test for checking that the op is deterministic

* Follow MXNet letter-case conventions

* Reduce dimensions of tensors in the test
ptrendx added a commit that referenced this pull request Dec 20, 2019
* Improve the speed of the pointwise fusion graph pass (#17114)

* Debug the long startup time

* Optimize backward fusion

* Figure out why the fusion pass is called twice

* Cleaning

* Small optimization

* [BUGFIX] Fix trainer param order (#17068)

* fix trainer param order

* Update trainer.py

* Update trainer.py

* Update trainer.py

* [reproducibility] multi_sum_sq review, AtomicAdd removal (#17002)

* Update multi_sum_sq to avoid AtomicAdd

* Add specific test for multi_sum_sq

* Add a determinism test and fix lint issues

* Better test for checking that the op is deterministic

* Follow MXNet letter-case conventions

* Reduce dimensions of tensors in the test

Co-authored-by: Haibin Lin <[email protected]>
Co-authored-by: MoisesHer <[email protected]>

The review thread below refers to this excerpt from the new kernel:

```cpp
if (threadIdx.x == 0) {
  block_reductions[(start_tensor_id + tensor_loc) * param.max_chunks_per_tensor +
                   param.block_to_chunk[blockIdx.x]] = final;
}
```
A contributor commented on the lines above:

Maybe we should change the variable name here? In C++, `final` specifies that a virtual function cannot be overridden in a derived class, so using it as a variable name is confusing.
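To illustrate the point (a generic C++ snippet, not from the PR): `final` is only a contextual keyword, so the assignment above does compile, but the name collides with its special meaning:

```cpp
struct Base {
  // Here 'final' has its reserved meaning: derived classes
  // cannot override Reduce().
  virtual float Reduce() const final { return 0.f; }
};

int main() {
  // Legal, because 'final' is only a contextual keyword -- but confusing.
  float final = 1.5f;
  return static_cast<int>(final);
}
```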
