This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Add Large tensor vector test cases #15941

Merged

merged 31 commits into apache:master from lts_vector on Sep 4, 2019

Conversation

ChaiBapchya
Contributor

@ChaiBapchya ChaiBapchya commented Aug 19, 2019

Description

Added tests for large vectors for the following ops:

  1. Binary arithmetic - add, sub, rsub, neg, mul, div, rdiv, mod, rmod, imod, pow
  2. Neural Network ops - LayerNorm and BatchNorm
  3. Sequence ops - sequence_last, sequence_reverse, sequence_mask
  4. Exponent & Log - exp, expm1, log, log2, log10, log1p
  5. Power - sqrt, rsqrt, cbrt, rcbrt, square, reciprocal
  6. Random ops - random.exponential, random.gamma, random.generalized_negative_binomial, random.multinomial, random.negative_binomial, random.normal, random.poisson, random.randn

Minor changes to the large tensor array tests.

Fixed an indexing issue for sequence_last and sequence_reverse.
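The tests described above broadly follow a create-operate-verify pattern. A minimal illustrative sketch of that pattern in NumPy, checking one binary arithmetic op (the names `LARGE_X` and `check_binary_add` are stand-ins, not the PR's actual code; the real nightly tests use mx.nd vectors longer than 2**32 elements to exercise int64 indexing, which is far too large to allocate here):

```python
import numpy as np

# Hypothetical stand-in size; the actual nightly tests use > 2**32 elements.
LARGE_X = 100

def check_binary_add():
    # Create large input vectors, apply the op, then verify an element
    # and the output shape. Indexing the last element matters because
    # with > 2**32 elements that index overflows 32-bit indexing,
    # which is exactly what these tests guard against.
    a = np.full((LARGE_X,), 3.0)
    b = np.ones((LARGE_X,))
    c = a + b
    assert c[LARGE_X - 1] == 4.0
    assert c.shape == (LARGE_X,)
    return c

check_binary_add()
```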

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Code is well-documented:
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

@access2rohit
Contributor

@ChaiBapchya this PR only adds test cases to verify large vector support. Please update the description and title accordingly.

@ChaiBapchya changed the title from "Add Large tensor vector support" to "Add Large tensor vector test cases" on Aug 19, 2019
@ChaiBapchya
Contributor Author

ChaiBapchya commented Aug 19, 2019

Updated. @access2rohit

```diff
@@ -742,7 +761,7 @@ def test_activation():
     # Hyperbolic tangent (tanh)
     # y = (exp(x)-exp(-x))/(exp(x)+exp(-x))
     a = mx.nd.Activation(a, act_type="tanh")
-    tanh_x = (np.exp(-2)-np.exp(2))/(np.exp(-2)+np.exp(2))
+    tanh_x = (np.exp(test_x)-np.exp(-test_x))/(np.exp(test_x)+np.exp(-test_x))
```
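The removed line hard-coded the value at x = 2, and with the signs flipped it actually computed -tanh(2); the new line evaluates tanh from its exponential definition for whatever `test_x` is. A quick standalone sanity check of that identity (`test_x = 0.5` is an arbitrary stand-in value, not the test's actual input):

```python
import numpy as np

test_x = 0.5  # arbitrary stand-in value
# tanh(x) = (e^x - e^-x) / (e^x + e^-x)
tanh_x = (np.exp(test_x) - np.exp(-test_x)) / (np.exp(test_x) + np.exp(-test_x))
assert np.isclose(tanh_x, np.tanh(test_x))
```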
Contributor

nit: add space around operators. Do this across the entire file.

Contributor Author

Neither PEP8 nor pylint raised an error for this. I'll fix it, but is there another linting tool besides PEP8 and pylint, perhaps one for the Google style guide? Is this the way to go?
https://stackoverflow.com/questions/29597618/is-there-a-tool-to-lint-python-based-on-the-google-style-guide

Contributor

@apeforest apeforest left a comment

Thanks! LGTM. Since the current CI nightly pipeline is broken, please run through the tests using nosetests and paste the output here.

@ChaiBapchya
Contributor Author

$ MXNET_TEST_COUNT=1 nosetests --logging-level=DEBUG --verbose -s tests/nightly/test_large_vector.py
test_large_vector.test_slice ... ok
test_large_vector.test_ndarray_zeros ... ok
test_large_vector.test_ndarray_ones ... ok
test_large_vector.test_ndarray_random_uniform ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1917427195 to reproduce.
ok
test_large_vector.test_ndarray_random_randint ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=2050648240 to reproduce.
ok
test_large_vector.test_ndarray_empty ... ok
test_large_vector.test_elementwise ... ok
test_large_vector.test_reduce ... ok
test_large_vector.test_clip ... ok
test_large_vector.test_argmin ... ok
test_large_vector.test_take ... ok
test_large_vector.test_slice_assign ... ok
test_large_vector.test_expand_dims ... ok
test_large_vector.test_squeeze ... ok
test_large_vector.test_broadcast_div ... ok
test_large_vector.test_Dense ... ok
test_large_vector.test_argsort ... ok
test_large_vector.test_sort ... ok
test_large_vector.test_topk ... ok
test_large_vector.test_ndarray_random_exponential ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=773632298 to reproduce.
ok
test_large_vector.test_ndarray_random_gamma ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=363992520 to reproduce.
ok
test_large_vector.test_ndarray_random_generalized_negative_binomial ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=231587666 to reproduce.
ok
test_large_vector.test_ndarray_random_multinomial ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1448118810 to reproduce.
ok
test_large_vector.test_ndarray_random_negative_binomial ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1850229415 to reproduce.
ok
test_large_vector.test_ndarray_random_normal ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1371597908 to reproduce.
ok
test_large_vector.test_ndarray_random_poisson ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1847825566 to reproduce.
ok
test_large_vector.test_ndarray_random_randn ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=954526493 to reproduce.
ok
test_large_vector.test_ndarray_random_shuffle ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1909959272 to reproduce.
ok
test_large_vector.test_exponent_logarithm_operators ... ok
test_large_vector.test_power_operators ... ok
test_large_vector.test_sequence_mask ... ok
test_large_vector.test_sequence_reverse ... ok
test_large_vector.test_sequence_last ... ok
test_large_vector.test_layer_norm ... ok
test_large_vector.test_batchnorm ... ok
test_large_vector.test_add ... ok
test_large_vector.test_sub ... ok
test_large_vector.test_rsub ... ok
test_large_vector.test_neg ... ok
test_large_vector.test_mul ... ok
test_large_vector.test_div ... ok
test_large_vector.test_rdiv ... ok
test_large_vector.test_mod ... ok
test_large_vector.test_rmod ... ok
test_large_vector.test_pow ... ok
test_large_vector.test_rpow ... ok
test_large_vector.test_shape ... ok
test_large_vector.test_size ... ok
test_large_vector.test_copy ... ok
test_large_vector.test_copy_to ... ok
test_large_vector.test_zeros_like ... ok
test_large_vector.test_ones_like ... ok
test_large_vector.test_concat ... ok
test_large_vector.test_sum ... ok
test_large_vector.test_prod ... ERROR
test_large_vector.test_min ... ERROR
test_large_vector.test_max ... ok
test_large_vector.test_argmax ... ok
test_large_vector.test_iadd ... ok
test_large_vector.test_isub ... ok
test_large_vector.test_imul ... ok
test_large_vector.test_idiv ... ok
test_large_vector.test_imod ... ok
test_large_vector.test_eq ... ok
test_large_vector.test_neq ... ok
test_large_vector.test_lt ... ok
test_large_vector.test_lte ... ok
test_large_vector.test_gt ... ok
test_large_vector.test_gte ... ok
test_large_vector.test_slice_like ... ok
test_large_vector.test_slice_axis ... ERROR
test_large_vector.test_full ... ok
test_large_vector.test_one_hot ... ERROR

@apeforest apeforest merged commit 6122dfc into apache:master Sep 4, 2019
@ChaiBapchya ChaiBapchya deleted the lts_vector branch September 6, 2019 18:28
gyshi pushed a commit to gyshi/incubator-mxnet that referenced this pull request Sep 7, 2019
* add random ops

* add shuffle to test large array

* shape evaluation after value check

* add log, exponent, power ops

* fix sequence reverse issue in test_large_array and add sequence ops to test_large_vector

* add binary arithmetic

* fix lint, minor mistakes in large_array; add nn op to tensor

* Trigger notification coz of test_operator.test_laop_6 error

* Trigger notification coz of test_operator.test_laop_6 error

* Trigger notification bcoz R failures

* address comments

* normal distribution assert statement fix; randint dtype check

* correct layernorm and shuffle

* layer norm numpy flaky hence removed, dropout shape fix

* comment not working ops

* fix multi

* Trigger notification

* fix seq reverse, uncomment seq mask as it works

* index fix and uncomment test

* index fix

* seq_reverse index fix

* uncomment seq reverse test and handle static typecasts

* removing commented ops

* resolve merge conflict

* teardown, lint, remove redundant functions

* fix shape assertions and randint low,high

* remove waits, add teardown to large_array, change randint assert in large array
gyshi pushed a commit to gyshi/incubator-mxnet that referenced this pull request Sep 7, 2019
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request Sep 25, 2019
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request Sep 25, 2019
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request Sep 25, 2019