Issues: microsoft/onnxruntime
[DO NOT UNPIN] onnxruntime-gpu v1.10.0 PyPI Removal Notice
#22747
opened Nov 6, 2024 by
sophies927
Issues list
[Performance] How to set the number of threads when using the TensorRT EP (a hedged sketch follows this entry)
ep:TensorRT
issues related to TensorRT execution provider
performance
issues related to performance regressions
platform:jetson
issues related to the NVIDIA Jetson platform
#22913
opened Nov 21, 2024 by
noahzn
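For #22913, a minimal sketch of how thread counts are usually controlled: ONNX Runtime's own CPU thread pools are configured through SessionOptions, while the TensorRT EP executes the engine on the GPU regardless. The model path, input assumptions, and thread counts below are placeholders, not values from the issue.

```python
import onnxruntime as ort

# ONNX Runtime's thread pools are set on SessionOptions; nodes that fall back
# to the CPU EP (anything TensorRT cannot take) still use these pools.
so = ort.SessionOptions()
so.intra_op_num_threads = 4   # threads used inside a single operator
so.inter_op_num_threads = 1   # threads used to run independent operators

session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    sess_options=so,
    providers=["TensorrtExecutionProvider",
               "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)
```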
Segmentation fault when following the Phi-3 tutorial for DirectML
ep:DML
issues related to the DirectML execution provider
#22910
opened Nov 21, 2024 by
matt200-ok
[Performance] Binary operators using SSE on AVX systems
performance
issues related to performance regressions
#22905
opened Nov 20, 2024 by
eralmual
[Feature Request] Add official support for onnxruntime-gpu on ARM64/aarch64 platforms
feature request
request for unsupported feature or enhancement
#22903
opened Nov 20, 2024 by
abhishek-iitmadras
How to release GPU memory when using onnxruntime with FastAPI (a hedged sketch follows this entry)
api
issues related to all other APIs: C, C++, Python, etc.
#22899
opened Nov 20, 2024 by
SZ-ing
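For #22899, a sketch of two common workarounds, assuming the CUDA EP and a placeholder model/input name: ask the CUDA memory arena to shrink after each run via a RunOptions config entry, and note that fully returning memory requires dropping the session itself, since the arena is owned by it. The endpoint names and payload shape are illustrative only.

```python
import gc
import numpy as np
import onnxruntime as ort
from fastapi import FastAPI

app = FastAPI()
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Ask the CUDA arena to shrink back after each run instead of holding its
# high-water mark (run-option config key from ONNX Runtime).
run_opts = ort.RunOptions()
run_opts.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:0")

@app.post("/infer")
def infer(payload: dict):
    x = np.asarray(payload["data"], dtype=np.float32)  # hypothetical input
    outputs = session.run(None, {"input": x}, run_options=run_opts)
    return {"output": outputs[0].tolist()}

@app.post("/release")
def release():
    # The arena belongs to the session; deleting the session (and collecting)
    # is what actually returns its GPU memory. Subsequent /infer calls would
    # need to recreate the session.
    global session
    del session
    gc.collect()
    return {"status": "released"}
```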
T5-Small different output for decoder inference with CPU and DirectML EPs
ep:DML
issues related to the DirectML execution provider
#22896
opened Nov 19, 2024 by
r4ghu
Enabling profiler with empty Optional causes segmentation fault
core runtime
issues related to core runtime
#22890
opened Nov 19, 2024 by
amancini-N
[Web] Failed to load model: Error: no available backend found. ERR: [webgpu] backend not found
ep:WebGPU
ort-web webgpu provider
platform:web
issues related to ONNX Runtime web; typically submitted using template
#22885
opened Nov 19, 2024 by
mozeqiu123
[Build] Build Error
api:Java
issues related to the Java API
build
build issues; typically submitted using template
platform:mobile
issues related to ONNX Runtime mobile; typically submitted using template
#22882
opened Nov 19, 2024 by
Lutan701
[Mobile] Using ORT on Android 6.0, an error occurs
api:Java
issues related to the Java API
platform:mobile
issues related to ONNX Runtime mobile; typically submitted using template
#22881
opened Nov 19, 2024 by
Lutan701
[cuda] [npm/nodejs] Failed to download the binaries: 404 Not Found
ep:CUDA
issues related to the CUDA execution provider
platform:web
issues related to ONNX Runtime web; typically submitted using template
#22877
opened Nov 18, 2024 by
lucyknada
CUDA memory increasing and process freezes [Performance]
ep:CUDA
issues related to the CUDA execution provider
performance
issues related to performance regressions
#22872
opened Nov 18, 2024 by
kkluonaitis
[Build] Issues with Multithreading in the New Versions of onnxruntime-directml
build
build issues; typically submitted using template
ep:DML
issues related to the DirectML execution provider
#22867
opened Nov 18, 2024 by
lianshiye0
Inference output differs between CPU and CUDA [1.19.2] (a hedged comparison sketch follows this entry)
ep:CUDA
issues related to the CUDA execution provider
#22866
opened Nov 17, 2024 by
lschaupp
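For #22866, a sketch of how such mismatches are typically checked: small float differences between EPs are expected, so the comparison is done with a tolerance rather than exact equality. The model path, input name, and shape are placeholders.

```python
import numpy as np
import onnxruntime as ort

# Run the same input through CPU and CUDA sessions and compare with a loose
# fp32 tolerance to separate a genuine bug from normal floating-point noise.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # hypothetical shape

cpu = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
gpu = ort.InferenceSession("model.onnx",
                           providers=["CUDAExecutionProvider",
                                      "CPUExecutionProvider"])

out_cpu = cpu.run(None, {"input": x})[0]
out_gpu = gpu.run(None, {"input": x})[0]

print("max abs diff:", np.abs(out_cpu - out_gpu).max())
np.testing.assert_allclose(out_cpu, out_gpu, rtol=1e-3, atol=1e-4)
```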
[Build] ONNX Runtime build fails OOM (v1.20.0)
build
build issues; typically submitted using template
#22859
opened Nov 15, 2024 by
mc-nv
[Build] linking against header-only library cudnn_frontend
build
build issues; typically submitted using template
#22855
opened Nov 15, 2024 by
ConnorBaker
[Dev] "./onnxruntime_test_all --help" gives segmentation fault
core runtime
issues related to core runtime
#22838
opened Nov 14, 2024 by
amancini-N
[Build] v1.20.0 min GCC version can't build on armv8
build
build issues; typically submitted using template
contributions welcome
lower priority issues for the core ORT teams
#22837
opened Nov 14, 2024 by
AxelZi
Error when running inference in C++: [onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node.
ep:CUDA
issues related to the CUDA execution provider
#22836
opened Nov 14, 2024 by
pcycccccc
[TensorRT EP] How can I disable cache generation when using the TensorRT execution provider? (a hedged sketch follows this entry)
ep:TensorRT
issues related to TensorRT execution provider
#22822
opened Nov 13, 2024 by
noahzn
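For #22822, a sketch under the assumption that the cache files in question are the TensorRT engine and timing caches: those are written only when the corresponding provider options are enabled, so passing them explicitly as False should stop the TensorRT EP from generating cache artifacts. The model path is a placeholder.

```python
import onnxruntime as ort

# Disable on-disk cache generation for the TensorRT EP via provider options.
trt_options = {
    "trt_engine_cache_enable": False,   # do not serialize TRT engines to disk
    "trt_timing_cache_enable": False,   # do not write a kernel timing cache
}

session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=[("TensorrtExecutionProvider", trt_options),
               "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)
```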
[Performance] Non-deterministic on Windows v1.19.2 but deterministic on Linux
performance
issues related to performance regressions
platform:windows
issues related to the Windows platform
#22818
opened Nov 13, 2024 by
parkkyungjun
InferenceSession - Catastrophic Error or Unspecified Error is thrown
api:CSharp
issues related to the C# API
ep:DML
issues related to the DirectML execution provider
#22815
opened Nov 12, 2024 by
saddam213
[Feature Request] ONNX model file decryption I/O hooks (a hedged workaround sketch follows this entry)
feature request
request for unsupported feature or enhancement
#22813
opened Nov 12, 2024 by
vadimkantorov
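For #22813, a sketch of the usual workaround in the absence of dedicated decryption hooks: decrypt the model in memory and pass the raw bytes to InferenceSession, which accepts a serialized model as bytes instead of a file path. The cipher choice (Fernet), file name, and key handling below are illustrative assumptions.

```python
import onnxruntime as ort
from cryptography.fernet import Fernet  # hypothetical cipher choice

def load_encrypted_model(path: str, key: bytes) -> ort.InferenceSession:
    # Decrypt the model entirely in memory so plaintext never touches disk.
    with open(path, "rb") as f:
        encrypted = f.read()
    model_bytes = Fernet(key).decrypt(encrypted)
    return ort.InferenceSession(model_bytes,
                                providers=["CPUExecutionProvider"])

# Usage (key management is out of scope here):
# session = load_encrypted_model("model.onnx.enc", key)
```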
Although using onnxruntime-19.0, still can't use cuDNN 9.x
platform:jetson
issues related to the NVIDIA Jetson platform
#22809
opened Nov 12, 2024 by
wcycqjy
[Performance] What is the root cause of onnxruntime-genai being much faster than onnxruntime?
performance
issues related to performance regressions
#22805
opened Nov 11, 2024 by
idruker-cerence