
test_operator.test_l2_normalization failure in CI #12417

Closed
aaronmarkham opened this issue Aug 30, 2018 · 4 comments · Fixed by #12429

Comments

aaronmarkham (Contributor) commented Aug 30, 2018

The PR is unrelated to anything that could have caused this test to fail...

http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-12413/1/pipeline

======================================================================
FAIL: test_operator.test_l2_normalization
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Anaconda3\envs\py2\lib\site-packages\nose\case.py", line 197, in runTest
    self.test(*self.arg)
  File "C:\jenkins_slave\workspace\ut-python-gpu\tests\python\unittest\common.py", line 172, in test_new
    orig_test(*args, **kwargs)
  File "C:\jenkins_slave\workspace\ut-python-gpu\tests\python\unittest\test_operator.py", line 3134, in test_l2_normalization
    check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)
  File "C:\jenkins_slave\workspace\ut-python-gpu\tests\python\unittest\test_operator.py", line 3120, in check_l2_normalization
    check_numeric_gradient(out, [in_data], numeric_eps=1e-3, rtol=1e-2, atol=1e-3)
  File "C:\jenkins_slave\workspace\ut-python-gpu\windows_package\python\mxnet\test_utils.py", line 912, in check_numeric_gradient
    ("NUMERICAL_%s"%name, "BACKWARD_%s"%name))
  File "C:\jenkins_slave\workspace\ut-python-gpu\windows_package\python\mxnet\test_utils.py", line 491, in assert_almost_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
Error 1.721498 exceeds tolerance rtol=0.010000, atol=0.001000.  Location of maximum error: (1, 0, 2, 1), a=-0.194527, b=-0.199686
 NUMERICAL_data: array([[[[  1.49086118,   0.15851855,   0.26511028, ...,   0.04867464,
            1.2716279 ,  -0.13297796],
         [  0.62356889,  -0.54586679,   0.78976154, ...,   1.01109409,...
 BACKWARD_data: array([[[[  1.49098313,   0.15851827,   0.26509097, ...,   0.04858097,
            1.27164507,  -0.13300258],
         [  0.62362021,  -0.54588604,   0.78978229, ...,   1.01112676,...
-------------------- >> begin captured logging << --------------------
common: INFO: Setting test np/mx/python random seeds, use MXNET_TEST_SEED=2040851487 to reproduce.
--------------------- >> end captured logging << ---------------------
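
For context on what failed here: check_numeric_gradient estimates the gradient by central finite differences and compares it element-wise against the operator's backward pass. Below is a minimal NumPy sketch of that comparison for instance-mode L2 normalization; it is a simplification (the real helper perturbs the inputs of an arbitrary MXNet symbol), and the flat input shape, seed, and helper names are illustrative only.

    import numpy as np

    def l2_normalize(x):
        # Forward pass: x / ||x||_2
        return x / np.linalg.norm(x)

    def backward_grad(x, g):
        # Analytic gradient of sum(g * l2_normalize(x)) w.r.t. x
        n = np.linalg.norm(x)
        return g / n - x * np.dot(x, g) / n**3

    def numeric_grad(x, g, eps=1e-3):
        # Central-difference estimate, perturbing one element at a time
        grad = np.empty_like(x)
        for i in range(x.size):
            xp, xm = x.copy(), x.copy()
            xp[i] += eps
            xm[i] -= eps
            grad[i] = np.dot(g, l2_normalize(xp) - l2_normalize(xm)) / (2 * eps)
        return grad

    rng = np.random.RandomState(0)        # seed is illustrative
    x = rng.randn(16).astype(np.float32)  # float32, as in the failing test
    g = rng.randn(16).astype(np.float32)
    print(np.abs(numeric_grad(x, g) - backward_grad(x, g)).max())

With float32 inputs and numeric_eps=1e-3, the finite-difference estimate itself carries rounding and truncation noise on the order of the atol used here, so an unlucky seed can occasionally push a single element past a tight tolerance.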

aaronmarkham (Contributor, Author)

@mxnet-label-bot [Flaky, Test]

haojin2 (Contributor) commented Sep 1, 2018

This looks like a tolerance problem, since the numerical and backward gradients are quite close; I'll submit a PR soon to bump up the tolerance a bit.
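
For reference, the comparison in assert_almost_equal appears to be the usual numpy.isclose-style test, failing when |a - b| > atol + rtol * |b|, with the reported "Error" being the ratio between the two sides. Plugging in the values from the log above reproduces the figure:

    # Values at the location of maximum error, taken from the traceback above
    a, b = -0.194527, -0.199686
    rtol, atol = 1e-2, 1e-3

    # Fails when |a - b| > atol + rtol * |b|; "Error" is the ratio of the sides
    err = abs(a - b) / (atol + rtol * abs(b))
    print(err)  # ~1.7215, matching "Error 1.721498 exceeds tolerance"

At this magnitude the allowed band is only about 0.003, so a finite-difference wobble of about 0.005 fails the check even though the two gradients agree to roughly three decimal places.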

haojin2 (Contributor) commented Sep 1, 2018

Fix in #12429
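
For readers following along: bumping the tolerance means loosening the rtol/atol passed to check_numeric_gradient inside check_l2_normalization. The line below is only a hypothetical illustration of that kind of change, not the actual diff in #12429:

    # Hypothetical values only -- see PR #12429 for the real change
    check_numeric_gradient(out, [in_data], numeric_eps=1e-3, rtol=2e-2, atol=5e-3)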

perdasilva (Contributor) commented May 20, 2019

@aaronmarkham could we please re-open this issue? I'll open a new ticket...
