This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

added support for large tensors for Dropout operator and tests to verify support for more operators #16409

Merged
merged 3 commits into apache:master from new_dgl_ops on Oct 18, 2019

Conversation

access2rohit
Contributor

@access2rohit access2rohit commented Oct 9, 2019

Description

Tests added for following operators:
astype
cast
repeat
ceil
fix
floor
rint
round
trunc
arccos
arcsin
arctan
cos
degrees
radians
sin
tan
L2Normalization
InstanceNormalization
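To give a sense of what these tests verify: large tensor support means operators must handle arrays whose element count exceeds the int32 index range. The sketch below is illustrative only, not the PR's actual code; the `LARGE_X`/`SMALL_Y` constants mirror the shape convention used by `tests/nightly/test_large_array.py`, and a small stand-in array is used for the operator check, since allocating billions of elements is impractical outside the nightly run.

```python
import numpy as np

# The nightly large-tensor tests create arrays whose total element count
# exceeds INT32_MAX, which is why int64 indexing support is required.
LARGE_X, SMALL_Y = 100_000_000, 50
INT32_MAX = 2**31 - 1
assert LARGE_X * SMALL_Y > INT32_MAX   # ~5e9 elements: needs int64 indexing

# Scaled-down stand-in for one operator check (e.g. trunc); the real test
# applies the operator to the large array and spot-checks a few values.
a = np.array([[-1.7, 0.2], [2.9, -0.5]], dtype=np.float32)
out = np.trunc(a)
assert out[0, 0] == -1.0 and out[1, 0] == 2.0
```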

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • To my best knowledge, examples are either not affected by this change or have been fixed to be compatible with it

Testing

Will update results of test run here.

@access2rohit access2rohit force-pushed the new_dgl_ops branch 2 times, most recently from 11b23b2 to 083c6b5 on October 11, 2019
@access2rohit access2rohit changed the title from "[WIP]adding tests to verify large tensor support for more operators" to "adding tests to verify large tensor support for more operators" Oct 14, 2019
@access2rohit
Contributor Author

@mxnet-label-bot add [pr-awaiting-review]

@lanking520 lanking520 added the pr-awaiting-review PR is waiting for code review label Oct 14, 2019
@access2rohit
Contributor Author

@anirudh2290 @ChaiBapchya @zheng-da this PR is ready for review

@access2rohit
Contributor Author

test_large_array.test_gluon_embedding ... ok
test_large_array.test_ndarray_zeros ... ok
test_large_array.test_ndarray_ones ... ok
test_large_array.test_ndarray_convert ... ok
test_large_array.test_ndarray_random_uniform ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=583459780 to reproduce.
ok
test_large_array.test_ndarray_random_randint ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1561187177 to reproduce.
ok
test_large_array.test_ndarray_random_exponential ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=488723450 to reproduce.
ok
test_large_array.test_ndarray_random_gamma ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1246538300 to reproduce.
ok
test_large_array.test_ndarray_random_multinomial ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=279077978 to reproduce.
ok
test_large_array.test_ndarray_random_generalized_negative_binomial ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=582809580 to reproduce.
ok
test_large_array.test_ndarray_random_negative_binomial ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1682432459 to reproduce.
ok
test_large_array.test_ndarray_random_normal ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=721628740 to reproduce.
ok
test_large_array.test_ndarray_random_poisson ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1614267541 to reproduce.
ok
test_large_array.test_ndarray_random_randn ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1931064888 to reproduce.
ok
test_large_array.test_ndarray_random_shuffle ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=887032379 to reproduce.
ok
test_large_array.test_ndarray_empty ... ok
test_large_array.test_elementwise ... ok
test_large_array.test_reduce ... ok
test_large_array.test_dot ... ok
test_large_array.test_FullyConnected ... ok
test_large_array.test_broadcast ... ok
test_large_array.test_clip ... ok
test_large_array.test_split ... ok
test_large_array.test_argmin ... ok
test_large_array.test_tile ... ok
test_large_array.test_take ... ok
test_large_array.test_slice ... ok
test_large_array.test_slice_assign ... ok
test_large_array.test_expand_dims ... ok
Helper function that cleans up memory by releasing it from memory pool ... ok
test_large_array.test_squeeze ... ok
test_large_array.test_broadcast_div ... ok
test_large_array.test_Dense ... ok
test_large_array.test_where ... ok
test_large_array.test_pick ... ok
test_large_array.test_depthtospace ... ok
test_large_array.test_spacetodepth ... ok
test_large_array.test_diag ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1737100945 to reproduce.
ok
test_large_array.test_ravel_multi_index ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=1577038190 to reproduce.
ok
test_large_array.test_unravel_index ... [DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=754939684 to reproduce.
ok
test_large_array.test_transpose ... ok
test_large_array.test_swapaxes ... ok
test_large_array.test_flip ... ok

test_large_array.test_softmax ... ok
test_large_array.test_argsort ... ok
test_large_array.test_sort ... ok
test_large_array.test_topk ... ok
test_large_array.test_exponent_logarithm_operators ... ok
test_large_array.test_power_operators ... ok

test_large_array.test_sequence_mask ... ok
test_large_array.test_sequence_reverse ... ok
test_large_array.test_sequence_last ... ok
test_large_array.test_softmax_cross_entropy ... ok
test_large_array.test_index_copy ... ok
test_large_array.testSoftmaxOutput ... ok
test_large_array.test_leaky_relu ... ok
test_large_array.test_pooling ... ok
test_large_array.test_layer_norm ... ok

test_large_array.test_dropout ... ok
test_large_array.test_activation ... ok
test_large_array.test_batchnorm ... ok
test_large_array.test_add ... ok
test_large_array.test_sub ... ok
test_large_array.test_rsub ... ok
test_large_array.test_neg ... ok
test_large_array.test_mul ... ok
test_large_array.test_div ... ok
test_large_array.test_rdiv ... ok
test_large_array.test_mod ... ok
test_large_array.test_rmod ... ok
test_large_array.test_imod ... ok
test_large_array.test_pow ... ok
test_large_array.test_rpow ... ok
test_large_array.test_shape ... ok
test_large_array.test_size ... ok
test_large_array.test_copy ... ok
test_large_array.test_copy_to ... ok
test_large_array.test_zeros_like ... ok
test_large_array.test_ones_like ... ok
test_large_array.test_reshape_like ... ok
test_large_array.test_flatten ... ok
test_large_array.test_concat ... ok
test_large_array.test_stack ... ok
test_large_array.test_broadcast_axes ... ok
test_large_array.test_sum ... ok
test_large_array.test_prod ... ok
test_large_array.test_mean ... ok
test_large_array.test_min ... ok
test_large_array.test_max ... ok
test_large_array.test_norm ... ok
test_large_array.test_argmax ... ok
test_large_array.test_relu ... ok
test_large_array.test_sigmoid ... ok
test_large_array.test_log_softmax ... ok
test_large_array.test_iadd ... ok
test_large_array.test_isub ... ok
test_large_array.test_imul ... ok
test_large_array.test_idiv ... ok
test_large_array.test_eq ... ok
test_large_array.test_neq ... ok
test_large_array.test_lt ... ok
test_large_array.test_lte ... ok
test_large_array.test_gt ... ok
test_large_array.test_gte ... ok
test_large_array.test_slice_like ... ok
test_large_array.test_slice_axis ... ok
test_large_array.test_one_hot ... ok
test_large_array.test_full ... ok
test_large_array.test_astype ... ok
test_large_array.test_cast ... ok
test_large_array.test_repeat ... ok
test_large_array.test_ceil ... ok
test_large_array.test_fix ... ok
test_large_array.test_floor ... ok
test_large_array.test_rint ... ok
test_large_array.test_round ... ok
test_large_array.test_trunc ... ok
test_large_array.test_arcsin ... ok
test_large_array.test_arccos ... ok
test_large_array.test_arctan ... ok
test_large_array.test_sin ... ok
test_large_array.test_cos ... ok
test_large_array.test_tan ... ok
test_large_array.test_radians ... ok
test_large_array.test_degrees ... ok
test_large_array.test_L2Normalization ... ok
test_large_array.test_instance_norm ... ok

@access2rohit access2rohit changed the title from "adding tests to verify large tensor support for more operators" to "added support for large tensors in Dropout tests to verify support for more operators" Oct 16, 2019
Comment on lines +1417 to +1422
def npy_instance_norm(data, gamma, beta, axis, eps=1E-5):
    # NumPy reference used to verify the InstanceNorm output on large tensors.
    # Note: `dtype` comes from the enclosing test scope in test_large_array.py.
    if axis < 0:
        axis += data.ndim
    broadcast_shape = [1 for _ in range(data.ndim)]
    broadcast_shape[axis] = data.shape[axis]
    mean = data.mean(axis=axis, keepdims=True).astype(dtype)
    var = data.var(axis=axis, keepdims=True).astype(dtype)
    std = np.sqrt(var + dtype(eps)).astype(dtype)
    out = gamma * (data - mean) / std + beta
    return out
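For context, here is a self-contained sketch of the reference above, with `dtype` passed explicitly rather than taken from the enclosing test scope so the snippet runs standalone. This is an illustration, not the PR's code; the random input shape and the scalar `gamma`/`beta` values are arbitrary.

```python
import numpy as np

def npy_instance_norm(data, gamma, beta, axis, dtype=np.float32, eps=1e-5):
    # Same math as the reference in the PR, with dtype made an argument.
    if axis < 0:
        axis += data.ndim
    mean = data.mean(axis=axis, keepdims=True).astype(dtype)
    var = data.var(axis=axis, keepdims=True).astype(dtype)
    std = np.sqrt(var + dtype(eps)).astype(dtype)
    return gamma * (data - mean) / std + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4)).astype(np.float32)
out = npy_instance_norm(x, gamma=1.0, beta=0.0, axis=-1)
# With gamma=1 and beta=0, normalizing over the last axis gives
# approximately zero mean per slice.
assert np.allclose(out.mean(axis=-1), 0.0, atol=1e-5)
```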
Contributor Author

@access2rohit access2rohit Oct 16, 2019


@access2rohit
Contributor Author

test_large_array.test_rounding_ops ... ok
test_large_array.test_trigonometric_ops ... ok

Member

@anirudh2290 anirudh2290 left a comment


Is there a comma missing in the title?

1. added support for large tensors in Dropout
2. tests to verify support for more operators?

Review comments on tests/nightly/test_large_array.py: resolved.
@access2rohit
Contributor Author

@sxjscience @apeforest @anirudh2290 @zheng-da @pengzhao-intel This PR is ready for review

@access2rohit access2rohit changed the title from "added support for large tensors in Dropout tests to verify support for more operators" to "added support for large tensors for Dropout operator and tests to verify support for more operators" Oct 17, 2019
@access2rohit
Contributor Author

access2rohit commented Oct 17, 2019

Is there a comma missing in the title?

1. added support for large tensors in Dropout
2. tests to verify support for more operators?

Aah, corrected it now. Thanks for pointing that out.

Member

@anirudh2290 anirudh2290 left a comment


LGTM, please add the TODO.

@access2rohit
Contributor Author

@mxnet-label-bot add [pr-awaiting-merge]

@lanking520 lanking520 added the pr-awaiting-merge Review and CI is complete. Ready to Merge label Oct 17, 2019
@rahul003 rahul003 merged commit f2ed1d4 into apache:master Oct 18, 2019