
[numpy] Implement NumPy operators #14327

Closed
reminisce opened this issue Mar 4, 2019 · 5 comments


reminisce commented Mar 4, 2019

I have made a list of operators that can be implemented using NumPy APIs. Most of them are extracted from the D2L book, and I think we should prioritize these operators to benefit the book's wide audience. Some operators, such as one_hot, do not have a counterpart in NumPy; we can consider registering those under namespaces other than mxnet.numpy.

I will revert some of the changes in this commit to restore the mxnet.numpy namespace.

| MXNet | NumPy (1.16) | Assignee | Status |
| --- | --- | --- | --- |
| add | np.add | @reminisce | #14758 |
| multiply | np.multiply | @reminisce | #14758 |
| sub | np.subtract | @reminisce | #14758 |
| div | np.divide / np.true_divide | @reminisce | #14758 |
| mod | np.mod | @reminisce | #14758 |
| power | np.power | @reminisce | #14758 |
| maximum | np.maximum | @reminisce | #14924 |
| minimum | np.minimum | @reminisce | #14924 |
| hypot | np.hypot | | |
| equal | np.equal | @reminisce | |
| greater_equal | np.greater_equal | @reminisce | |
| lesser | np.less | @reminisce | |
| lesser_equal | np.less_equal | @reminisce | |
| logical_or | np.logical_or | @reminisce | |
| logical_and | np.logical_and | @reminisce | |
| nd.dot | np.dot | @haojin2 | #14831 |
| nd.random.normal | np.random.normal | @reminisce | #15086 |
| nd.random.uniform | np.random.uniform | @reminisce | #15086 |
| nd.zeros | np.zeros | @reminisce | Done |
| nd.arange | np.arange | @reminisce | |
| nd.array | np.array | | Done |
| nd.argmax | np.argmax | @reminisce | |
| nd.split | np.split | @haojin2 | WIP |
| nd.stack | np.stack | @haojin2 | WIP |
| nd.concat | np.concatenate | @haojin2 | WIP |
| nd.sum | np.sum | @haojin2 | Done |
| nd.ones_like | np.ones_like | @reminisce | #14989 |
| nd.zeros_like | np.zeros_like | @reminisce | #14989 |
| nd.full | np.full | @reminisce | |
| nd.max | np.amax | @stu1130 | |
| nd.random.multinomial | np.random.multinomial | @stu1130 | |
| nd.linspace | np.linspace | @stu1130 | |
| nd.clip | np.clip | @haojin2 | #14754 |
| nd.random.shuffle | np.random.shuffle | @reminisce | |
| nd.reshape | np.reshape | @reminisce | Done |
| nd.topk | np.argsort | @mikemwx | |
| nd.batch_dot | np.tensordot | @haojin2 | WIP |
| nd.mean | np.mean | @haojin2 | Done |
| nd.histogram | np.histogram | @haojin2 | |
| nd.flatten | np.flatten (different definition) | @reminisce | |
| N/A | np.cumsum | @haojin2 | |
| N/A | np.trace | @hzfan | |
| nd.eye | np.eye | @stu1130 | |
| nd.Custom | np.ext.custom | | |
| nd.relu | npx.relu | @reminisce | Done |
| nd.transpose | np.transpose | @reminisce | Done |
| nd.array | np.array | @reminisce | Done |
| nd.empty | np.empty | @reminisce | Done |
| nd.ones | np.ones | @reminisce | Done |
| nd.abs | np.abs | @haojin2 | #15010 |
| nd.cbrt | np.cbrt | @haojin2 | #15010 |
| nd.ceil | np.ceil | @haojin2 | #15010 |
| nd.exp | np.exp | @haojin2 | #15010 |
| nd.expm1 | np.expm1 | @haojin2 | #15010 |
| nd.fix | np.fix | @haojin2 | #15010 |
| nd.floor | np.floor | @haojin2 | #15010 |
| nd.log | np.log | @haojin2 | #15010 |
| nd.log10 | np.log10 | @haojin2 | #15010 |
| nd.log1p | np.log1p | @haojin2 | #15010 |
| nd.log2 | np.log2 | @haojin2 | #15010 |
| nd.logical_not | np.logical_not | @haojin2 | #15010 |
| nd.negative | np.negative | @haojin2 | #15010 |
| nd.reciprocal | np.reciprocal | @haojin2 | #15010 |
| nd.rint | np.rint | @haojin2 | #15010 |
| nd.sign | np.sign | @haojin2 | #15010 |
| nd.sqrt | np.sqrt | @haojin2 | #15010 |
| nd.square | np.square | @haojin2 | #15010 |
| nd.trunc | np.trunc | @haojin2 | #15010 |
| nd.sin | np.sin | @haojin2 | #15010 |
| nd.cos | np.cos | @haojin2 | #15010 |
| nd.tan | np.tan | @haojin2 | #15010 |
| nd.arcsin | np.arcsin | @haojin2 | #15010 |
| nd.arccos | np.arccos | @haojin2 | #15010 |
| nd.arctan | np.arctan | @haojin2 | #15010 |
| nd.degrees | np.degrees | @haojin2 | #15010 |
| nd.radians | np.radians | @haojin2 | #15010 |
| nd.sinh | np.sinh | @haojin2 | #15010 |
| nd.cosh | np.cosh | @haojin2 | #15010 |
| nd.tanh | np.tanh | @haojin2 | #15010 |
| nd.arcsinh | np.arcsinh | @haojin2 | #15010 |
| nd.arccosh | np.arccosh | @haojin2 | #15010 |
| nd.arctanh | np.arctanh | @haojin2 | #15010 |
| mx.random.seed | np.random.seed | | |
| mx.nd.one_hot | npx.one_hot | @reminisce | |
| mx.nd.gamma | np.random.gamma | @reminisce | |
| | np.tensordot | @ckt624 | |
| mx.nd.prod | np.prod | @reminisce | |
| N/A | np.std | @haojin2 | |
| nd.broadcast_to | np.broadcast_to | @reminisce | |
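
For context, here is a minimal usage sketch of how these operators are meant to feel once they land under mxnet.numpy. This is illustrative only: it assumes the restored mxnet.numpy namespace described above, and each operator is only usable once its Status column says so.

```python
# Illustrative sketch only: assumes the mxnet.numpy namespace described above
# and that np.array / np.ones_like / np.add have already landed (see Status).
from mxnet import numpy as np  # MXNet's NumPy-compatible namespace

a = np.array([1.0, 2.0, 3.0])
b = np.ones_like(a)            # same shape and dtype as `a`, filled with ones
c = np.add(a, b)               # element-wise add, mirroring numpy.add
print(c)                       # [2. 3. 4.], an mxnet.numpy ndarray
```
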
@mxnet-label-bot

Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it.
Here are my recommended labels: Feature, Performance

@piyushghai

@mxnet-label-bot Add [Feature request, Numpy]


wkcn commented Mar 5, 2019

Great! I have the following questions.

Could you please provide an example to explain how to accept NDArray and Symbol inputs?

Specifically, how is the type of the inputs determined?

Slicing and views (e.g. reshape) are important features in NumPy; however, current MXNet doesn't support non-contiguous tensors, and MXNet operators always return a copy. How will this problem be addressed? For non-contiguous tensors, I think we can use DLPack, which supports strides.

@reminisce

@wkcn

> Could you please provide an example to explain how to accept NDArray and Symbol inputs?

This can be achieved in much the same way forward is defined in HybridBlock: by checking the input argument's type. However, per our discussion, we are not going to expose numpy operator APIs that accept symbolic inputs (at least for the next three months), since symbolic APIs are something we want to deprecate in 2.0.
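
For illustration, here is a rough sketch of that kind of type-based dispatch. The `add` wrapper below is hypothetical, not the actual registration code; it only shows the pattern of routing on the input type, similar in spirit to how HybridBlock.forward handles both modes.

```python
# Hypothetical front-end wrapper, for illustration only: dispatch on the
# input type, much as HybridBlock.forward works for both NDArray and Symbol.
from mxnet import nd, sym

def add(a, b):
    if isinstance(a, nd.NDArray):        # imperative input -> NDArray op
        return nd.elemwise_add(a, b)
    if isinstance(a, sym.Symbol):        # symbolic input -> Symbol op
        return sym.elemwise_add(a, b)
    raise TypeError("unsupported input type: {}".format(type(a)))
```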

> Slicing and views (e.g. reshape) are important features in NumPy; however, current MXNet doesn't support non-contiguous tensors, and MXNet operators always return a copy. How will this problem be addressed? For non-contiguous tensors, I think we can use DLPack, which supports strides.

In fact, MXNet tensors can support views if we add a strides implementation; it's just a matter of how elements are accessed. But that also means we would need to re-implement all the kernels to support strides, which is an enormous amount of work with no performance gain. Although supporting views is a must-have for the long-term goal of NumPy compatibility, it's not the current focus of improving model-training usability, because it introduces quite a few problems that would lead to a negative user experience. For example, a view cannot be the target of an assignment under autograd because in-place assignment is not allowed, and non-contiguous memory breaks the data locality that is critical for devising high-performance kernels. We will have more freedom to take this into consideration in 2.0.
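
As a small illustration of the strided-access point above (plain NumPy, not MXNet code): a view reuses the same buffer and only changes how element offsets are computed, which is exactly what every kernel would have to learn to do.

```python
# Plain-NumPy illustration of strided access; not MXNet code.
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(3, 4)
v = x.T                         # non-contiguous view: no data is copied
print(x.strides, v.strides)     # (16, 4) vs (4, 16) bytes for float32

def element(buf, strides, itemsize, idx):
    """Read one element of a strided view from its flat buffer."""
    offset = sum(i * s for i, s in zip(idx, strides)) // itemsize
    return buf[offset]

flat = x.ravel()                # the shared underlying buffer
assert element(flat, v.strides, v.itemsize, (1, 2)) == v[1, 2]
```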


wkcn commented Mar 5, 2019

@reminisce I see. Thank you!
