TensorRT PyTorch Hub inference fix #7560
Merged
glenn-jocher merged 1 commit into master Apr 24, 2022
Conversation
Solution proposed in #7128 to TRT PyTorch Hub CUDA illegal memory errors.
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request on Aug 26, 2022: Solution proposed in ultralytics#7128 to TRT PyTorch Hub CUDA illegal memory errors.
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request on Sep 8, 2022: Solution proposed in ultralytics#7128 to TRT PyTorch Hub CUDA illegal memory errors.

🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Improved device compatibility for AMP inference within YOLOv5.
📊 Key Changes
Places `p` on the same device as `self.model` when `self.pt` is `False`.
🎯 Purpose & Impact
Prevents device-mismatch errors when `self.pt` is `False`, ensuring that users running inference without a `.pt` model (PyTorch format) do not face device-related issues.
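A minimal sketch of the device handling this PR describes. The `BackendSketch` class and its attribute names are illustrative stand-ins, not the actual YOLOv5 code; the sketch assumes the wrapped backend exposes a `device` attribute, and shows why the placeholder tensor must be created explicitly on the model's device for non-`.pt` backends such as TensorRT engines, which hold no PyTorch parameters.

```python
import torch
import torch.nn as nn


class BackendSketch:
    """Hypothetical minimal stand-in for a multi-backend model wrapper."""

    def __init__(self, model, pt, device):
        self.model = model
        self.pt = pt          # True only for native PyTorch (.pt) checkpoints
        self.device = device  # device the backend runs on

    def reference_param(self):
        # .pt models: borrow a real parameter to learn the device/dtype.
        # Exported backends (e.g. TensorRT) expose no parameters, so build a
        # placeholder tensor explicitly on the model's device instead; if the
        # placeholder landed on the wrong device, downstream ops could hit
        # CUDA illegal memory errors, the failure this PR addresses.
        if self.pt:
            return next(self.model.parameters())
        return torch.empty(1, device=self.device)


# Usage: a .pt-style backend yields a real parameter; a TensorRT-style
# backend yields a one-element placeholder on the requested device.
torch_model = nn.Linear(2, 2)
pt_backend = BackendSketch(torch_model, pt=True, device=torch.device("cpu"))
trt_backend = BackendSketch(torch_model, pt=False, device=torch.device("cpu"))
print(pt_backend.reference_param().shape)   # parameter tensor shape
print(trt_backend.reference_param().device)  # placeholder on the model device
```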