Force-pushed from 7a3e387 to 780eb54.
};

template<int req>
struct around_forwardint {
Get rid of this kernel after you switch to identity below.
Done.
    && param.decimals > 0) {
  MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
      Kernel<around_forwardint<req_type>, xpu>::Launch(
Simply use the identity kernel instead of your new kernel.
Done
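The reviewer's point can be illustrated with plain NumPy (a minimal sketch, not the MXNet kernel itself): for integer inputs with non-negative `decimals`, rounding is the identity, so a dedicated integer rounding kernel adds nothing over a plain identity/copy kernel.

```python
import numpy as np

# For integer dtypes and decimals >= 0, np.around returns the input
# unchanged, which is why an identity kernel suffices in this branch.
x = np.array([1, -5, 123], dtype=np.int64)
assert np.array_equal(np.around(x, decimals=3), x)
assert np.around(x, decimals=0).dtype == np.int64
```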
for hybridize in [True, False]:
    for oneType in types:
        rtol=1e-3
        atol=1e-5
rtol, atol = 1e-3, 1e-5
Done.
        return F.np.around(x, self.decimals)

shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test_shapes, remember to include zero-dim shape and zero-size shapes
types = ['int32', 'int64', 'float32', 'double']
types = ['int32', 'int64', 'float32', 'float64']
Done.
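For reference, in NumPy itself `'double'` is an alias for `'float64'`, so the requested change is about using the canonical dtype name rather than a semantic difference (a quick NumPy-only check; MXNet's own dtype lookup may be stricter about the spelling):

```python
import numpy as np

# 'double' and 'float64' name the same NumPy dtype; the review asks for
# the canonical 'float64' spelling.
assert np.dtype('double') == np.dtype('float64')
for name in ['int32', 'int64', 'float32', 'float64']:
    assert np.dtype(name).name == name  # canonical names round-trip
```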
def hybrid_forward(self, F, x):
    return F.np.around(x, self.decimals)

shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test_shapes, remember to include zero-dim shape and zero-size shapes
shapes = [(), (1, 2, 3), (1, 0)]
Done.
rtol=1e-3
atol=1e-5
for shape in shapes:
    for d in range(-10, 11):
Too many cases for d; simply reduce it to something like range(-5, 6).
Done.
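The reduced test matrix can be sketched with plain NumPy as the reference (a hedged stand-in: the real test compares `mx.np.around` against `np.around`, while here NumPy plays both roles so the sketch stays self-contained):

```python
import numpy as np

rtol, atol = 1e-3, 1e-5
shapes = [(), (1, 2, 3), (1, 0)]        # zero-dim and zero-size included
for shape in shapes:
    for d in range(-5, 6):              # reduced from range(-10, 11)
        x = np.random.uniform(-100.0, 100.0, size=shape).astype('float32')
        expected = np.around(x, d)
        # stand-in for the mx.np.around(...) call in the real test
        result = np.around(x, d)
        assert np.allclose(result, expected, rtol=rtol, atol=atol)
```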
Force-pushed from 848ce30 to 81649d1.
LGTM
LGTM
Force-pushed from 006ec32 to 1c2d15c.
* change the name of argument
* add doc in three files and fix some bug
* change the data type in .h and add test function; cancel optimization when abs(temp) < 0.5; modify test on cpu and add test on gpu; do not support float16; edit testcase on gpu and add 'Do not support float16' to doc
* edit doc: support scalar
* adjust the format
* add license
* fix format error
* delete gpu test
* move around to np_elemwise_unary_op_basic
* edit AroundOpType
* replace int kernel with identity_with_cast and fix format error
* delete unused req_type
Force-pushed from 1c2d15c to 4a1a595.
Create a new branch and move around to np_elemwise_unary_op_basic.
@haojin2