Add Hardware Compatibility Check for FP8 Quantization
Issue Summary
In our current implementation, we provide three APIs for model computation in FP8 format. However, for dynamic activation quantization, these FP8 computations are only supported on NVIDIA GPUs with SM89 and SM90 architectures. When models are quantized to FP8 on unsupported hardware, errors only surface at runtime, which can lead to confusion and wasted resources.
Proposed Solution
Check at the model quantization stage whether the target hardware supports FP8 computation, and raise an error if it does not. This way, users are informed immediately that their hardware cannot handle FP8 quantization, rather than discovering it at runtime. The error message could also point users to weight-only quantization, which is more broadly supported.
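A minimal sketch of what such a check could look like. The function names and the exact error wording here are illustrative, not the library's actual API; the only assumption carried over from the issue is that FP8 dynamic activation quantization requires SM89 (Ada) or SM90 (Hopper):

```python
# Compute capabilities assumed to support FP8 dynamic activation quantization,
# per the issue description: SM89 (Ada) and SM90 (Hopper).
_FP8_SUPPORTED_CAPABILITIES = {(8, 9), (9, 0)}


def supports_fp8_dynamic_quant(capability):
    """Return True if a (major, minor) compute capability supports FP8."""
    return tuple(capability) in _FP8_SUPPORTED_CAPABILITIES


def check_fp8_support():
    """Raise at quantization time instead of failing later at runtime.

    Hypothetical helper: intended to be called at the start of the FP8
    quantization APIs, before any weights are converted.
    """
    import torch  # local import keeps the pure capability check testable

    if not torch.cuda.is_available():
        raise RuntimeError(
            "FP8 dynamic activation quantization requires a CUDA GPU with "
            "SM89 or SM90. Consider weight-only quantization instead."
        )
    cap = torch.cuda.get_device_capability()
    if not supports_fp8_dynamic_quant(cap):
        raise RuntimeError(
            f"GPU compute capability {cap[0]}.{cap[1]} does not support FP8 "
            "dynamic activation quantization (SM89/SM90 required). "
            "Consider weight-only quantization, which is more broadly supported."
        )
```

Keeping the capability test separate from the CUDA query makes the policy easy to unit-test on machines without a GPU, and easy to extend if future architectures gain FP8 support.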
Where to add the error checks: