feat(machine-learning): support cuda 12 #7569
Conversation
Deploying with Cloudflare Pages
Latest commit: 1a8ddde
Status: ✅ Deploy successful!
Preview URL: https://05e22c0c.immich.pages.dev
Branch Preview URL: https://feat-support-cuda12.immich.pages.dev
I'm not sure we actually need to keep 11. In practice, 525 is the minimum driver version for ONNX Runtime based on issues I've seen, and the minimum compute capability for it is 5.2. I think anyone who's using CUDA right now is already in an environment where 12 will just work.
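For anyone who wants to verify that before the CUDA 11 image goes away, a quick check along these lines should tell you whether ONNX Runtime can actually use the GPU. This is a rough sketch, not part of the PR; the `compute_cap` query field assumes a reasonably recent driver/nvidia-smi.

```python
# Rough sanity check (not part of this PR): confirm the driver and GPU meet
# the thresholds mentioned above and that the CUDA execution provider loads.
import subprocess

import onnxruntime as ort

# Driver version and compute capability via nvidia-smi.
# Note: "compute_cap" assumes a fairly recent driver/nvidia-smi; only the
# first GPU's line is inspected here for simplicity.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version,compute_cap", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
driver_version, compute_cap = [s.strip() for s in out.splitlines()[0].split(",")]

print(f"Driver: {driver_version} (>= 525 expected for the CUDA 12 build)")
print(f"Compute capability: {compute_cap} (>= 5.2 expected)")

# If onnxruntime-gpu is installed against a matching CUDA runtime,
# CUDAExecutionProvider should show up in the available providers.
print("CUDAExecutionProvider available:",
      "CUDAExecutionProvider" in ort.get_available_providers())
```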
The original goal was only to support CUDA 12, but if you want to move away from CUDA 11 completely, I'm all in 😄
What do you think about still supporting CUDA 11?
I'm not sure if Poetry can handle different flavors of the same package, since the optional dependencies are all squashed into the same lock file. It didn't work for the CPU/GPU variants of PyTorch at least; only one of them actually gets resolved.
But that might have changed since I tried it. You can test and check if it can actually differentiate the cuda-11 and cuda-12 flavors.
Yeah, you're right, we can't: python-poetry/poetry#7748
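For context, a rough pyproject.toml sketch of what a dual-flavor setup would want is below. It is hypothetical (versions, names, and the index URL are placeholders, not the actual immich config); the point is that Poetry resolves everything into a single poetry.lock, so the two extras cannot end up pinned to two different builds of the same package.

```toml
# Hypothetical sketch, not the actual pyproject.toml. The intent would be to
# resolve a different onnxruntime-gpu build per extra, but Poetry squashes all
# extras into a single poetry.lock, so only one build can ever be locked.
[tool.poetry.dependencies]
python = ">=3.10,<3.12"
onnxruntime-gpu = { version = "^1.17.0", optional = true }

[tool.poetry.extras]
# Both extras necessarily point at the same locked build of the package;
# there is no way to say "cuda12 pulls the wheel from the CUDA 12 index".
cuda11 = ["onnxruntime-gpu"]
cuda12 = ["onnxruntime-gpu"]

[[tool.poetry.source]]
# A dedicated index for the CUDA 12 wheels (placeholder URL) can be declared,
# but the package is still resolved once for the whole lock file.
name = "onnxruntime-cuda12"
url = "https://example.invalid/onnxruntime-cuda-12/simple/"
priority = "explicit"
```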
Add support for cuda 12: