This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Support for XLA devices #16916

Open
guoquan opened this issue Nov 26, 2019 · 2 comments
guoquan commented Nov 26, 2019

Description

XLA is an abstraction layer over the computation graph that promises better efficiency, consistency, and portability, among other benefits. Its most significant feature, however, is that it enables access to Google's TPUs and other custom accelerator hardware built on the same abstraction.

Sample code to use an XLA device could look like this (mxnet_xla is a hypothetical integration package proposed here, not an existing one):

from mxnet import nd
# hypothetical package exposing XLA device contexts to MXNet
from mxnet_xla import xla

# allocate a 4x5 array of ones directly on the XLA device
x = nd.ones((4, 5), ctx=xla.xla_device())
print(x)


pengzhao-intel (Contributor) commented

Good idea! We've been evaluating the possibility of XLA support recently :)

cjolivier01 (Member) commented Nov 27, 2019

As a start, it would be good to simply be able to generate an HloModuleProto protobuf file.
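For reference, other XLA frontends already expose this kind of artifact. A minimal sketch of how JAX (an existing XLA frontend, shown here only as an illustration of what an MXNet integration might provide) lowers a function and emits its HLO/StableHLO text; the exact API (`jax.jit(...).lower(...)` and `as_text()`) assumes a reasonably recent JAX version:

```python
import jax
import jax.numpy as jnp

def scale_and_sin(x):
    # a toy computation to be lowered through XLA
    return jnp.sin(x) * 2.0

# Lower the jitted function for a concrete input shape and dtype,
# then render the lowered XLA module as text (HLO or StableHLO,
# depending on the JAX version).
lowered = jax.jit(scale_and_sin).lower(jnp.ones((4, 5)))
hlo_text = lowered.as_text()
print(hlo_text)
```

Serializing the lowered module to an actual HloModuleProto file would be an additional step on top of this, and the MXNet integration proposed in this issue would need an analogous export path.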
