This repository has been archived by the owner on Jul 1, 2023. It is now read-only.

Eager Tensors should be placed on the device they are assigned #1103

Open
BradLarson opened this issue Oct 9, 2020 · 0 comments
Labels
bug Something isn't working

Comments

@BradLarson
Contributor

The Tensor initializer lets you manually specify a device for placement. For example, you should be able to explicitly place a Tensor on the first CPU using the eager-mode backend with the following:

let cpu0 = Device(kind: .CPU, ordinal: 0, backend: .TF_EAGER)
let cpuTensor1 = Tensor([0.0, 1.0, 2.0], on: cpu0)

This does not work for the .TF_EAGER backend: the Tensor is created on the default accelerator, regardless of the device specified. If a GPU is present, attempting to force a Tensor onto the CPU has no effect.
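A minimal sketch of how the misplacement can be observed, assuming the `Device` and `Tensor.device` APIs shown above (the variable names are illustrative):

```swift
import TensorFlow

// Request explicit placement on the first CPU under the eager backend.
let cpu0 = Device(kind: .CPU, ordinal: 0, backend: .TF_EAGER)
let t = Tensor([0.0, 1.0, 2.0], on: cpu0)

// On a machine with a GPU, inspecting the tensor's device shows it was
// created on the default accelerator rather than the requested CPU.
print(t.device)
```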

As a workaround, wrapping eager Tensor operations in withDevice(.cpu) { ... } forces the eager Tensors within that closure to run on the first device of the specified class.
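The workaround can be sketched as follows; the closure body is illustrative, assuming the `withDevice(_:perform:)` API from swift-apis:

```swift
import TensorFlow

// Pin eager operations to the first CPU-class device by scoping them
// inside a withDevice closure.
withDevice(.cpu) {
    let a = Tensor([0.0, 1.0, 2.0])
    let b = a + a  // runs on the first CPU device within this scope
    print(b)
}
```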

For X10, explicit placement does work and allows for correct manual device placement. Eager-mode Tensors should be modified to support this in the same way as X10 Tensors.
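For comparison, a sketch of the same placement under the X10 backend, where it is honored (assuming `.XLA` is the X10 backend case of `Device.Backend`):

```swift
import TensorFlow

// With the X10 backend, explicit placement works: the tensor is actually
// created on the requested device.
let x10Cpu0 = Device(kind: .CPU, ordinal: 0, backend: .XLA)
let t = Tensor([0.0, 1.0, 2.0], on: x10Cpu0)
print(t.device)  // reports the X10 CPU device that was requested
```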

This is associated with issue #524 on tensorflow/swift.

@dan-zheng dan-zheng added the bug Something isn't working label Dec 24, 2020