This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Cherrypick Fix reshape interoperability test (#17155) #17345

Closed
wants to merge 813 commits

Conversation

ChaiBapchya
Contributor

  • fix reshape interoperability test

  • fix for scipy import

Description

Cherrypick this PR into 1.5.x so that the “numpy.decorator” issue gets resolved.
This will ensure #17286 passes CI.

@frankfliu @haojin2
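
For context, below is a minimal, hypothetical sketch of the guarded-SciPy-import pattern that the "fix for scipy import" commit points at; the actual module and symbols touched in #17155 may differ.

```python
# Hypothetical sketch only: guard an optional SciPy dependency in a test
# module so the suite still imports when SciPy (or a compatible version)
# is missing. Names here are illustrative, not the actual #17155 change.
try:
    import scipy.stats as ss
except ImportError:
    ss = None  # SciPy is an optional test-time dependency

def test_needs_scipy():
    if ss is None:
        return  # effectively skip when SciPy is unavailable
    assert ss.norm.cdf(0.0) == 0.5
```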

xinyu-intel and others added 30 commits November 4, 2019 18:11
* support mkldnn gelu and convgelu fusion

* remove slope for gelu
* add deformable conv v2

* fix lint and compiler warning

* fix lint

* fix pylint

* fix clang and lint

* fix base class, add test case

* fix gluon impl, add cpu forward

* address comment

* fix duplicate kernel name

* fix cpplint

* continue fixing cpplint

* address comments

* fix mask scale

* make initial mask centered at 1 rather than 0.5

* fix submodule
* Adding second NMS op

* NMS kernel

* Removing second sort

* Optimization

* Adding out-of-place ability to SortByKey

* Optimization pt2

* Optimizations pt3

* Do not recompute other boxes area every time

* Sort only topk results during second sorting

* Cleaning

* Fixes from rebase

* Fix lint and more fixes from rebase

* Fix typo

* Early exit in Triangle kernel

* Fixes

* Fix sort

* Fix from rebase

* Fix for the mixed naming convention

* Fix the index_t with int comparison
* support mixed-precision binary operations

* improvements to documentation and error messages
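
As a rough illustration of the mixed-precision binary-op support above; the dtype-promotion semantics are assumed to follow official NumPy, and the snippet requires an MXNet build with the numpy interface enabled.

```python
# Sketch of mixed-precision binary ops (promotion semantics assumed to
# mirror official NumPy).
from mxnet import np, npx

npx.set_np()  # enable NumPy-compatible semantics

a = np.ones((2, 3), dtype='float16')
b = np.ones((2, 3), dtype='float32')
c = a + b        # operands with different dtypes are promoted
print(c.dtype)   # expected: float32
```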
* add MXNet Ops for fast multihead attention

* add cutlass as 3rdparty dependency

* add cutlass to compilation flags

* remove all cutlass stuff

* add better error message and description and remove cutlass from compilation flags

* change credit for the approach since the code has changed

* fix typos

* correct another typo

* Add all the cuda/cublas helper functions

* remove tests using kAddTo

* only use cublasStridedBatchedGemm if CUDA >= 9.1

* add equivalent mxnet code in description of mha ops

* remove a wrong copy-paste

* add _contrib for namespace and add GPU only on description

* add warning in bwd_ignore_zero_init description, also test with fp32

* add error return if bwd_ignore_zero_init is used without MXNET_EXEC_ENABLE_ADDTO

* remove std::move for clang

* remove bwd_ignore_zero_init flag

* remove bwd_ignore_zero_init in test_operator_gpu.py

* fix typo

* fix another typo
…taloader (apache#16233)

* fix dataloader

* add unittest

* fix test
* Fix int8 convolution bias overflow

* Fix when data is too small

* Fix CI

* Fix

* Add fc fix

* Add round
…e#16737)

* use dim_t instead of int

* fix same issue in pooling

* rebase code

* trigger CI
* Refactor elemwise_op_common and change SliceChannel InferType

* Add gluoncv models

* Comment Faster RCNN models
…ng="utf-8", which fixes some encoding errors on Chinese Windows systems. (apache#16738)
* Add NumPy support for inv

* fix CUDA float64 memory alignment bug

* make test_mixed_precision more tolerant
* Fix rtrue_divide_scalar

* More tests
…de (apache#16728)

* support pure boolean elemwise/broadcast binary op

* switch to unique_ptr

* fix the test error
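
A short sketch of what the pure-boolean support above enables; the exact semantics are assumed to mirror official NumPy, where binary ops on bool operands stay bool.

```python
# Sketch of pure-boolean elementwise/broadcast binary ops (assumed
# NumPy-compatible behavior).
from mxnet import np, npx

npx.set_np()

a = np.array([True, False, True])
b = np.array([[True], [False]])  # broadcasts against a -> shape (2, 3)
print(a * b)                     # elementwise product of bool operands
print((a * b).dtype)             # expected: bool, as in official NumPy
```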
* add frontend interface for bernoulli

* bernoulli backend done

* frontend done, test to be added

* finish tests, fix indicator initialization bug

* test with native numpy

* fix indent, change test name

* resolve comments

* add raise test

* modify raise test
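
A hedged usage sketch for the bernoulli frontend above; the entry point npx.random.bernoulli and its keyword names are assumptions read off the commit titles, not confirmed API.

```python
# Assumed entry point: npx.random.bernoulli (inferred from the commit
# titles above; the real frontend may differ).
from mxnet import np, npx

npx.set_np()

prob = np.full((3, 4), 0.5)                # per-element success probability
samples = npx.random.bernoulli(prob=prob)  # 0/1 indicator samples
print(samples.shape)
```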
* Add align_corners parameter to bilinear resize op.

* Add align_corners parameter to bilinear resize op.

add BilinearResize2D test

fix BilinearResize2D align_corners test

fix code layout

fix resize gpu and cpu consistency

fix bilinear resize test bug

close transforms.resize test

* optimize BilinearResize2D backward performance; improve fp16 performance

* optimize BilinearResize2D forward kernel

* lint

* fix forceinline and add

* lint

* remove commented out code and remove transforms.Resize gpu test

* Dtype to float in cpu implementation

* retrigger CI

* retrigger CI

* Update resize.cu

* retrigger CI

* Update resize.cu

fix resize.cu
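
A usage sketch for the BilinearResize2D work above; it assumes the new align_corners parameter is exposed as a boolean keyword on the existing contrib op.

```python
# Sketch: resize an NCHW tensor with the contrib bilinear-resize op.
# align_corners=True is the new flag from this commit group (assumed
# keyword name).
import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 8, 8))
y = mx.nd.contrib.BilinearResize2D(x, height=16, width=16,
                                   align_corners=True)
print(y.shape)  # (1, 3, 16, 16)
```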
…nputs contain inf or nan (apache#16234)

* fix meansum nan

* remove print in testcase

* update to avoid assignment

* update

* fix argmin and argmax, update julia unittest

* update argmin/argmax docs in julia bindings

* debug

* update

* update test

* fix sum merge

* update testcase

* update including sign

* fix allclose

* ci

* use constants

* fix build for isinf and isnan

* ci

* ci
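
A small sketch of the NaN behavior these argmin/argmax fixes target; the intended semantics are assumed to match official NumPy, where NaN wins comparisons in argmax.

```python
# Assumed target behavior: argmax propagates NaN the way numpy.argmax does.
from mxnet import np, npx

npx.set_np()

x = np.array([1.0, float('nan'), 3.0])
print(np.argmax(x))  # expected: 1, i.e. the index of the NaN
```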
* a new round of link fixes

* fixing merge conflict

* Nudging test.

* Update julia/docs/src/tutorial/char-lstm.md

* fixing mkldnn version to match upstream/master
…operators (apache#16022)

* Added (CuDNN)BatchNorm operator to the list of mirrored operators

* ci

* Enable the auxiliary state locking only in the backward mirroring mode

* retrigger CI
* fix mean output type for integer inputs

* enable for windows
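
A sketch of the dtype fix above: mean over integer input is assumed to return a floating-point result, as in official NumPy.

```python
# Assumed target behavior: integer inputs to mean yield a float result.
from mxnet import np, npx

npx.set_np()

x = np.array([1, 2, 3, 4], dtype='int32')
m = np.mean(x)
print(m, m.dtype)  # expected: 2.5 with a float dtype
```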
…ache#16716)

* fix zero_grad

* Update parameter.py

* add test

* fix
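
A usage sketch around the zero_grad fix above, using the standard Gluon API for clearing accumulated gradients between backward passes.

```python
# Clear gradients on a Gluon block's parameters after a backward pass.
import mxnet as mx
from mxnet import autograd, gluon

net = gluon.nn.Dense(4)
net.initialize()
with autograd.record():
    loss = net(mx.nd.ones((2, 8))).sum()
loss.backward()
net.collect_params().zero_grad()  # reset every parameter's grad in place
```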
kshitij12345 and others added 18 commits January 13, 2020 23:10
…5476)

* support rsqrt, rcbrt for higher order grad

* add relevant tests

* update comments
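
A sketch of the higher-order gradient path the rsqrt/rcbrt commits above exercise, using MXNet's usual double-differentiation pattern with autograd.grad.

```python
# Second derivative of rsqrt via autograd (standard MXNet pattern for
# higher-order gradients).
import mxnet as mx
from mxnet import autograd

x = mx.nd.array([1.0, 4.0, 9.0])
x.attach_grad()
with autograd.record():
    y = mx.nd.rsqrt(x)  # y = x^(-1/2)
    x_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
x_grad.backward()
print(x.grad)  # d2y/dx2 = (3/4) * x^(-5/2)
```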
When built, the auto-grad links do not behave as expected. Adding full links to fix the images on this page.
…apache#17271)

* Fix apache#17267, add expected and got datatype when a non-uniform dtype is found in concat

* Add op name and input index to concat type error msg

* Fix lint error
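
A sketch that triggers the improved Concat error message above: mixing input dtypes should now report the op name, the offending input index, and the expected versus got dtypes (exact wording assumed).

```python
# Provoke the non-uniform-dtype error in Concat.
import mxnet as mx

a = mx.nd.ones((2, 2), dtype='float32')
b = mx.nd.ones((2, 2), dtype='float16')
try:
    mx.nd.concat(a, b, dim=0)
except mx.base.MXNetError as err:
    print(err)  # should name Concat, the input index, and both dtypes
```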
…pache#17228)

* Support R-package with cmake build and fix installation instructions

* Fix typo

* Fix callback.R

* Clarify creation of personal R library

* Fix generation of R-package documentation

* Remove unused USE_BLAS variable on CI for make rpkg

* Add cmake build command for get_started/linux/r/*.md pages

* Fix zzz.R swallowing error messages

* Fix R-package/src/Makevars for OpenCV dependency

See apache#17282
* Fix CosineEmbeddingLoss when the symbol API is used

Fixes apache#17275

* Update CONTRIBUTORS.md
…ache#17321)

* use failed seed and verify first order

* replace grad_op with equivalent expression

* remove fixed seed for tanh

* add relax tolerance for tanh first order
* Add a check for number of inputs

* Fix num inputs for backward_Deconvolution

* Fix number of inputs to backward ROIAlign

* Fix number of inputs to backward_SoftmaxOutput

* Fix more operators lying about their number of inputs

* Fix input number of backward NMS

* Fixes

* Fix dropout, RNN and upsampling backward number of inputs

* Fix LeakyRelu number of inputs

* Actually fix LeakyRelu

* Fix pooling and concat

* Fix Concat (attempt 2)

* Fix from review

* Incorporate Dick's changes

* Add guard to MakeNonlossGradNode

* Fix

* Fix backward of SoftmaxActivation

* Fix backward of np_prod and norm
* Add multi-tensor lamb Op

* Fix compilation issue

* Optimize GPU kernels

* Stable version (included optimizer tests)

* fix lint errors

* fix lint errors

* fix pylint errors

* fix pylint errors

* Remove extra state for temporal_g (now using requested workspace)

* change default value of bounds and bias

* Fix bugs related to the removal of extra state

* Reuse existing LAMB optimizer

* Fix pylint errors

* Fix pylint errors

* Fix pylint errors

* Remove large tensors from test (memory issues when checking)

* Fix bug: needs to allocate memory for MultiSumSq

* Fix index bug and allows different lrs/wds for each tensor

* Template data type for lrs/wds

* Match single-tensor LAMB, and allow passing a list (AGGREGATION=1) to single-tensor LAMB

* Follow MXNet case/format

* Clean-up code and follow MXNet case/format

* fix lint issues

* Fix linking problem

* pylint issue
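
A usage sketch for the multi-tensor LAMB work above; it assumes the optimizer stays reachable through the existing 'lamb' registration, with multi-tensor execution as an internal fast path rather than a separate frontend.

```python
# Train with the LAMB optimizer through the normal Gluon Trainer route.
import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(10)
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'lamb',
                        {'learning_rate': 1e-3, 'wd': 0.01})
```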
* fix reshape interoperability test

* fix for scipy import