Conversation
dc5313d to 15572d1
@mxnet-label-bot Add [Backend, pr-awaiting-review]
@@ -188,13 +188,29 @@ int MXNDArrayCreate(const mx_uint *shape,
  API_END();
}

int MXNDArrayCreateExInt64(const mx_int64 *shape,
If the internal implementation is the same and only differs by data type, we can create an internal template to do this?
@@ -550,6 +566,34 @@ int MXNDArrayGetShapeEx(NDArrayHandle handle,
  API_END();
}

int MXNDArrayGetShapeExInt64(NDArrayHandle handle,
If the internal implementation is the same and only differs by data type, we can create an internal template to do this?
@@ -130,6 +134,30 @@ struct MXAPIThreadLocalEntry {
}
}
}

inline static void SetupShapeArrayReturnWithBufferExInt64(
If the internal implementation is the same and only differs by data type, we can create an internal template to do this?
@@ -658,6 +658,85 @@ int MXSymbolInferShapeEx(SymbolHandle sym,
  API_END();
}

int MXSymbolInferShapeExInt64(SymbolHandle sym,
If the internal implementation is the same and only differs by data type, we can create an internal template to do this?
@@ -708,6 +787,31 @@ int MXSymbolInferShapePartialEx(SymbolHandle sym,
                                &succ);
}

int MXSymbolInferShapePartialExInt64(SymbolHandle sym,
If the internal implementation is the same and only differs by data type, we can create an internal template to do this?
const int **aux_shape_ndim,
const int64_t ***aux_shape_data,
int *complete) {
int succ;
Initialize the value of succ, and it would be better to rename it to success.
@@ -0,0 +1,37 @@
# Licensed to the Apache Software Foundation (ASF) under one
Will this test pass with just your Part-1 PR? If not, please check it in together with your Part-2 so it does not cause a nightly test failure.
python/mxnet/ndarray/ndarray.py
pdata = ctypes.POINTER(mx_int)()
check_call(_LIB.MXNDArrayGetShapeEx(
    self.handle, ctypes.byref(ndim), ctypes.byref(pdata)))
if Features().is_enabled('INT64_TENSOR_SIZE') and sys.version_info[0] > 2:
I think calling Features().is_enabled() here may add extra overhead on every shape query.
Feature already merged in: #15593
Description
New int64 C APIs.
Changes related to using index_t in the slice operator.
A new test file for testing very large indices.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Large Index Support APIs, Part 1