This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Add power, exponent, log ops large tensor support #15794

Merged · 8 commits · Aug 16, 2019
66 changes: 66 additions & 0 deletions tests/nightly/test_large_array.py
@@ -434,6 +434,72 @@ def test_topk():
    assert l.sum() == np.sum(np.arange(0, SMALL_Y))


def test_exponent_logarithm_operators():
    a = 2*nd.ones(shape=(LARGE_X, SMALL_Y))
Contributor:
reuse create_2d_tensor?

Contributor Author:
a. create_2d_tensor uses 114G (htop reading) to create an nd array, while the same thing with MXNet nd directly takes around 40G.
b. np.arange is not really necessary. All we need to do is test whether the function works for large arrays.
What do you think?

Contributor:

Maybe we should change create_2d_tensor to make it more efficient.

Contributor Author:

Yes, but I'll address that in a separate PR, if that's fine.

Contributor:

Yeah, let's remove unnecessary numpy APIs from this test file if they eat up too much memory.
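For reference, a leaner helper along these lines could build the tensor entirely with MXNet nd ops and skip the NumPy intermediate. This is only a sketch; the actual create_2d_tensor in this file may have a different signature and fill pattern.

import mxnet.ndarray as nd
import numpy as np

def create_2d_tensor(rows, columns, dtype=np.int64):
    # Build a (rows, 1) column directly as an MXNet ndarray, then
    # broadcast it to (rows, columns); no NumPy copy is materialized.
    col = nd.arange(0, rows, dtype=dtype).reshape((rows, 1))
    return nd.broadcast_to(col, shape=(rows, columns))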

    # exponent
    result = nd.exp(a)
    assert result[0][-1] == 7.389056
    assert result.shape == a.shape

    # exponent minus 1
    result = nd.expm1(a)
    assert result[0][-1] == 6.389056
    assert result.shape == a.shape

    # log2
    result = nd.log2(a)
    assert result[0][-1] == 1
    assert result.shape == a.shape

    # log10
    result = nd.log10(a)
    assert result[0][-1] == 0.30103
    assert result.shape == a.shape

    # log1p
    result = nd.log1p(a)
    assert result[0][-1] == 1.0986123
    assert result.shape == a.shape

    # log
    result = nd.log(a)
    assert result[0][-1] == 0.6931472
    assert result.shape == a.shape
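The expected constants above are just the float32 values of each function at x = 2. A quick cross-check (NumPy is used here only to derive the reference values; it is not part of the test):

import numpy as np

for fn in (np.exp, np.expm1, np.log2, np.log10, np.log1p, np.log):
    # Prints: exp 7.389056, expm1 6.389056, log2 1.0,
    # log10 0.30103, log1p 1.0986123, log 0.6931472
    print(fn.__name__, np.float32(fn(2.0)))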


def test_power_operators():
    a = 2*nd.ones(shape=(LARGE_X, SMALL_Y))
    # sqrt
    result = nd.sqrt(a)
    assert result[0][-1] == 1.4142135
    assert result.shape == a.shape

    # rsqrt
    result = nd.rsqrt(a)
    assert result[0][-1] == 0.70710677
    assert result.shape == a.shape

    # cbrt
    result = nd.cbrt(a)
    assert result[0][-1] == 1.2599211
    assert result.shape == a.shape

    # rcbrt
    result = nd.rcbrt(a)
    assert result[0][-1] == 0.7937005
    assert result.shape == a.shape

    # square
    result = nd.square(a)
    assert result[0][-1] == 4
    assert result.shape == a.shape

    # reciprocal
    result = nd.reciprocal(a)
    assert result[0][-1] == 0.5
    assert result.shape == a.shape
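The same pattern holds for the power ops above: each expected value is the float32 result at x = 2, and rsqrt/rcbrt are the reciprocals of sqrt/cbrt. A NumPy cross-check of those identities (again, reference values only):

import numpy as np

x = np.float32(2.0)
print(np.sqrt(x), 1 / np.sqrt(x))        # 1.4142135 0.70710677
print(np.cbrt(x), 1 / np.cbrt(x))        # 1.2599211 0.7937005
print(np.square(x), np.reciprocal(x))    # 4.0 0.5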


def test_add():
    a = nd.ones(shape=(LARGE_X, SMALL_Y))
    b = nd.ones(shape=(LARGE_X, SMALL_Y))