
Changing devices in Fbank #999

Merged 1 commit on Mar 16, 2023
Conversation

@Tomiinek (Contributor)

No description provided.

Comment on lines +79 to +81:

    def to(self, device: str):
        self.config.device = device
        self.extractor.to(device)

@Tomiinek (Contributor, Author) commented on Mar 16, 2023:

Because I would like to instantiate multiple extractors on different devices via `.from_yaml(...)` with the same config file.
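The pattern added here can be sketched in isolation: `to()` updates both the stored config and the underlying extractor, so one saved config can seed extractors that are later moved to different devices. The `FbankConfig`/`Extractor` classes below are simplified stand-ins, not the real lhotse implementations.

```python
from dataclasses import dataclass

# Simplified stand-ins for the real Fbank config/extractor classes;
# the point is the to() pattern from this PR, which keeps the config's
# device field and the extractor's actual device in sync.
@dataclass
class FbankConfig:
    device: str = "cpu"

class Extractor:
    def __init__(self) -> None:
        self.device = "cpu"

    def to(self, device: str) -> None:
        self.device = device

class Fbank:
    def __init__(self, config: FbankConfig) -> None:
        self.config = config
        self.extractor = Extractor()
        self.extractor.to(config.device)

    def to(self, device: str) -> None:
        # Update both the config and the extractor, as in the PR.
        self.config.device = device
        self.extractor.to(device)

fbank = Fbank(FbankConfig())
fbank.to("cuda:1")
print(fbank.config.device, fbank.extractor.device)  # cuda:1 cuda:1
```

With this in place, two `Fbank` instances built from the same config can be moved to `cuda:0` and `cuda:1` independently after construction.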

@@ -385,7 +389,6 @@ def _extract_batch(
        samples = [samples.reshape(1, -1)]

    if any(isinstance(x, torch.Tensor) for x in samples):
        samples = [x.numpy() for x in samples]
@Tomiinek (Contributor, Author) commented:

This seems to be useless. It fails for CUDA tensors, and the numpy arrays are converted back to tensors three lines below anyway.

@Tomiinek (Contributor, Author) commented:

I think a use case where the user inputs a list mixing numpy arrays and tensors is paranoid 😄
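A minimal illustration of why the removed conversion was redundant on CPU and broken on GPU: `Tensor.numpy()` only works for CPU tensors, and the resulting arrays were turned back into tensors a few lines later anyway.

```python
import torch

# What the removed code did: tensor -> numpy -> (later) tensor again.
x = torch.randn(2, 16000)                # a CPU tensor
roundtrip = torch.from_numpy(x.numpy())  # the removed round-trip
assert torch.equal(x, roundtrip)         # a no-op for CPU tensors

# For CUDA tensors the same call raises, which is the bug this PR fixes.
if torch.cuda.is_available():
    try:
        x.cuda().numpy()
    except TypeError:
        print("CUDA tensor -> numpy fails without .cpu() first")
```

So dropping the round-trip both simplifies the CPU path and unblocks GPU inputs.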

@@ -403,7 +406,9 @@ def _extract_batch(
    samples = torch.nn.utils.rnn.pad_sequence(samples, batch_first=True)

    # Perform feature extraction
    feats = extractor(samples.to(device)).cpu()
@Tomiinek (Contributor, Author) commented on Mar 16, 2023:

Since I would expect the output tensor to be on the same device as the input tensor.
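The expected behavior can be sketched as follows. This is a hypothetical simplification, not the actual lhotse `_extract_batch` code: the features are returned on the input's device rather than forced to CPU with a hard-coded `.cpu()`.

```python
import torch

# Hypothetical sketch: move features back to the device of the input
# samples, so GPU callers get GPU features and CPU callers get CPU features.
def extract_on_input_device(extractor, samples: torch.Tensor) -> torch.Tensor:
    device = samples.device
    feats = extractor(samples.to(device))
    return feats.to(device)  # follow the input's device, don't force .cpu()

conv = torch.nn.Conv1d(1, 4, kernel_size=3)  # stand-in for a feature extractor
x = torch.randn(1, 1, 100)                   # (batch, channels, time) on CPU
feats = extract_on_input_device(conv, x)
assert feats.device == x.device
```

The same call with `x` on `cuda:0` would then yield features on `cuda:0`, matching the expectation stated in the comment.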

@pzelasko (Collaborator) left a comment:

LGTM, thanks!

@pzelasko pzelasko added this to the v1.13 milestone Mar 16, 2023
@pzelasko pzelasko merged commit 20eb1bb into lhotse-speech:master Mar 16, 2023