
Large Index Support: Part 1 #15728

Closed
wants to merge 6 commits

Conversation

@access2rohit (Contributor) commented on Aug 2, 2019

Description

New int64 C APIs.
Changes related to using index_t in the slice operator.
New test file for testing very large indices.
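
For context, here is a tiny standalone illustration (not code from this PR) of why shape and index arithmetic needs a 64-bit type such as index_t once a tensor exceeds 2^31 - 1 elements:

```cpp
#include <cstdint>
#include <iostream>

int main() {
  // 50000 x 50000 = 2.5e9 elements, which is more than INT32_MAX (2^31 - 1).
  const std::int64_t rows = 50000, cols = 50000;
  const std::int64_t elems64 = rows * cols;                         // exact: 2500000000
  const std::int32_t elems32 = static_cast<std::int32_t>(elems64);  // wraps on conversion
  std::cout << "64-bit element count: " << elems64 << "\n";
  std::cout << "32-bit element count: " << elems32 << "\n";         // prints a negative number
  return 0;
}
```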

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

Large Index Support APIs, Part 1

@access2rohit changed the title from [WIP]LIS slice 6 to Large Index Support: Part 1 on Aug 2, 2019
@piyushghai (Contributor) commented:

@mxnet-label-bot Add [Backend, pr-awaiting-review]

@marcoabreu added the labels Backend (Issues related to the backend of MXNet) and pr-awaiting-review (PR is waiting for code review) on Aug 2, 2019
@@ -188,13 +188,29 @@ int MXNDArrayCreate(const mx_uint *shape,
API_END();
}

int MXNDArrayCreateExInt64(const mx_int64 *shape,

Reviewer (Contributor):
If the internal implementation is the same and only differs by data type, could we create an internal template to do this?
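
A minimal sketch of the kind of internal template being suggested; the wrapper names, reduced argument lists, and type aliases below are illustrative assumptions, not the actual MXNet C-API signatures:

```cpp
#include <cstdint>
#include <vector>

// Illustrative type aliases standing in for MXNet's mx_uint / mx_int64.
using mx_uint = std::uint32_t;
using mx_int64 = std::int64_t;

namespace {
// One shared implementation, parameterized on the index type of the shape.
template <typename IndexT>
int CreateNDArrayImpl(const IndexT *shape, int ndim /*, other args ... */) {
  std::vector<std::int64_t> dims(shape, shape + ndim);  // widen to a common 64-bit type
  // ... construct the NDArray from `dims` here ...
  (void)dims;
  return 0;  // 0 == success, mirroring the C-API return convention
}
}  // namespace

// Thin public wrappers that differ only in the shape pointer type.
extern "C" int ExampleNDArrayCreateEx(const mx_uint *shape, int ndim) {
  return CreateNDArrayImpl(shape, ndim);
}

extern "C" int ExampleNDArrayCreateExInt64(const mx_int64 *shape, int ndim) {
  return CreateNDArrayImpl(shape, ndim);
}
```

With such a split, each Int64 entry point stays a thin wrapper and the type-specific logic is not duplicated.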

@@ -550,6 +566,34 @@ int MXNDArrayGetShapeEx(NDArrayHandle handle,
API_END();
}

int MXNDArrayGetShapeExInt64(NDArrayHandle handle,

Reviewer (Contributor):
If the internal implementation is the same and only differs by data type, could we create an internal template to do this?

@@ -130,6 +134,30 @@ struct MXAPIThreadLocalEntry {
}
}
}

inline static void SetupShapeArrayReturnWithBufferExInt64(

Reviewer (Contributor):
If the internal implementation is the same and only differs by data type, could we create an internal template to do this?

@@ -658,6 +658,85 @@ int MXSymbolInferShapeEx(SymbolHandle sym,
API_END();
}

int MXSymbolInferShapeExInt64(SymbolHandle sym,

Reviewer (Contributor):
If the internal implementation is the same and only differs by data type, could we create an internal template to do this?

@@ -708,6 +787,31 @@ int MXSymbolInferShapePartialEx(SymbolHandle sym,
&succ);
}

int MXSymbolInferShapePartialExInt64(SymbolHandle sym,

Reviewer (Contributor):
If the internal implementation is the same and only differs by data type, could we create an internal template to do this?

const int **aux_shape_ndim,
const int64_t ***aux_shape_data,
int *complete) {
int succ;

Reviewer (Contributor):
Initialize the value of succ; it would also be better to rename it to success.
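
A small sketch of that suggestion with hypothetical names (not the exact code in this PR): the flag gets an initial value and a clearer name before it is written to the out-parameter.

```cpp
// Hypothetical wrapper body illustrating the reviewer's suggestion: the
// completion flag is initialized and renamed, so the out-parameter never
// reads an indeterminate value.
void ExampleInferShapePartial(int *complete) {
  int success = 0;   // initialized up front instead of left uninitialized
  // ... run partial shape inference and set `success = 1` when it finishes ...
  success = 1;
  *complete = success;
}
```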

@@ -0,0 +1,37 @@
# Licensed to the Apache Software Foundation (ASF) under one

Reviewer (Contributor):
Will this test pass with your Part-1 PR? If not, please check it in together with your Part-2 so it does not cause a nightly test failure.

pdata = ctypes.POINTER(mx_int)()
check_call(_LIB.MXNDArrayGetShapeEx(
self.handle, ctypes.byref(ndim), ctypes.byref(pdata)))
if Features().is_enabled('INT64_TENSOR_SIZE') and sys.version_info[0] > 2:

Reviewer (Contributor):
I think calling Features().is_enabled() here may take extra time.

@access2rohit (Contributor, Author) commented:

Feature already merged in: #15593
