
Conversation

@arsenm
Contributor

@arsenm arsenm commented Jan 13, 2026

Keep bf16/f16 values encoded as the low half of a 32-bit register,
instead of promoting to float. This avoids unwanted FP effects
from the fpext/fptrunc pair, which should not be implied by
merely passing an argument. This also fixes the ABI divergence
between SelectionDAG and GlobalISel.
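
For a concrete illustration, here is a minimal sketch (the function name
is hypothetical, not taken from the patch's tests). Compiled with
something like `llc -mtriple=amdgcn -mcpu=tahiti`, the half value now
crosses the call boundary bit-exact in the low half of v0 instead of
round-tripping through f32:

```llvm
; Minimal sketch of the boundary this patch changes, assuming a target
; without 16-bit instructions (gfx6/gfx7, e.g. -mcpu=tahiti).
define half @pass_through(half %x) {
  ; Previously the ABI implied an fpext to float on entry and an fptrunc
  ; on return; now the raw 16-bit encoding is forwarded unchanged.
  ret half %x
}
```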

I've wanted to make this change for ages, and failed the last
few times. The main complication was the hack to return
shader integer types in SGPRs, which now needs to inspect
the underlying IR type.
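
To make that concrete, here is a sketch of how RetCC_SI_Shader now
splits two same-width shader returns by their underlying IR type
(function names are hypothetical). The new `v_mov_b32_e32 v0, s0` in the
GFX6 `fma_v2s16_uniform` test below is this effect in action: a uniform
16-bit FP result is now copied into the VGPR return path instead of
being returned in an SGPR.

```llvm
; Hypothetical sketch of the RetCC_SI_Shader distinction after this change.
define amdgpu_ps i32 @ret_int() {
  ret i32 42      ; integer IR type: still returned in an SGPR
}

define amdgpu_ps half @ret_half() {
  ret half 1.0    ; half IR type: now returned in a VGPR
}
```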

@llvmbot
Member

llvmbot commented Jan 13, 2026

@llvm/pr-subscribers-llvm-globalisel

Author: Matt Arsenault (arsenm)

Changes

Keep bf16/f16 values encoded as the low half of a 32-bit register,
instead of promoting to float. This avoids unwanted FP effects
from the fpext/fptrunc pair, which should not be implied by
merely passing an argument. This also fixes the ABI divergence
between SelectionDAG and GlobalISel.

I've wanted to make this change for ages, and failed the last
few times. The main complication was the hack to return
shader integer types in SGPRs, which now needs to inspect
the underlying IR type.


Patch is 882.27 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/175795.diff

87 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/AMDGPUCallingConv.td (+8-3)
  • (modified) llvm/lib/Target/AMDGPU/SIISelLowering.cpp (+3)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/fma.ll (+1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.16bit.ll (+48-46)
  • (modified) llvm/test/CodeGen/AMDGPU/bf16.ll (+247-430)
  • (modified) llvm/test/CodeGen/AMDGPU/buffer-fat-pointer-atomicrmw-fadd.ll (+96-90)
  • (modified) llvm/test/CodeGen/AMDGPU/buffer-fat-pointer-atomicrmw-fmax.ll (+96-90)
  • (modified) llvm/test/CodeGen/AMDGPU/buffer-fat-pointer-atomicrmw-fmin.ll (+96-90)
  • (modified) llvm/test/CodeGen/AMDGPU/call-argument-types.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/calling-conventions.ll (+8-14)
  • (modified) llvm/test/CodeGen/AMDGPU/clamp.ll (+4-6)
  • (modified) llvm/test/CodeGen/AMDGPU/constant-address-space-32bit.ll (+138-138)
  • (modified) llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll (+30-71)
  • (modified) llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll (+72-130)
  • (modified) llvm/test/CodeGen/AMDGPU/divergence-driven-buildvector.ll (+2-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll (+13-23)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.bf16.ll (+116-174)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll (+140-114)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.f32.ll (+7-11)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.f64.ll (+11-17)
  • (modified) llvm/test/CodeGen/AMDGPU/fdiv.f16.ll (+3-5)
  • (modified) llvm/test/CodeGen/AMDGPU/flat-atomicrmw-fadd.ll (+109-99)
  • (modified) llvm/test/CodeGen/AMDGPU/flat-atomicrmw-fmax.ll (+110-100)
  • (modified) llvm/test/CodeGen/AMDGPU/flat-atomicrmw-fmin.ll (+110-100)
  • (modified) llvm/test/CodeGen/AMDGPU/flat-atomicrmw-fsub.ll (+109-99)
  • (modified) llvm/test/CodeGen/AMDGPU/fmax3-maximumnum.ll (+14-18)
  • (modified) llvm/test/CodeGen/AMDGPU/fmax_legacy.f16.ll (+2-4)
  • (modified) llvm/test/CodeGen/AMDGPU/fmed3-cast-combine.ll (+87-98)
  • (modified) llvm/test/CodeGen/AMDGPU/fmed3.bf16.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fmed3.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fmin3-minimumnum.ll (+14-18)
  • (modified) llvm/test/CodeGen/AMDGPU/fmin_legacy.f16.ll (+2-4)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll (+372-419)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.ll (+2-4)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll (+52-79)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-modifier-casting.ll (+12-16)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg.ll (+1)
  • (modified) llvm/test/CodeGen/AMDGPU/fpow.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fract-match.ll (+28-28)
  • (modified) llvm/test/CodeGen/AMDGPU/function-args.ll (+26-50)
  • (modified) llvm/test/CodeGen/AMDGPU/function-returns.ll (+8-18)
  • (modified) llvm/test/CodeGen/AMDGPU/global-atomicrmw-fadd.ll (+126-102)
  • (modified) llvm/test/CodeGen/AMDGPU/global-atomicrmw-fmax.ll (+128-104)
  • (modified) llvm/test/CodeGen/AMDGPU/global-atomicrmw-fmin.ll (+128-104)
  • (modified) llvm/test/CodeGen/AMDGPU/global-atomicrmw-fsub.ll (+126-102)
  • (modified) llvm/test/CodeGen/AMDGPU/i1-to-bf16.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/integer-canonicalizing-src-modifiers.ll (-4)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.raw.ptr.buffer.load.bf16.ll (-1)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.raw.ptr.buffer.store.bf16.ll (-2)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.exp.ll (+26-51)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.exp10.ll (+26-51)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.exp2.ll (+28-57)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.frexp.ll (+18-20)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.is.fpclass.bf16.ll (+52-99)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.is.fpclass.f16.ll (+67-114)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.ldexp.ll (+30-64)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.log.ll (+28-60)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.log10.ll (+28-60)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.log2.ll (+27-56)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.maximum.f16.ll (+12-20)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.minnum.f16.ll (+1-3)
  • (modified) llvm/test/CodeGen/AMDGPU/local-atomicrmw-fadd.ll (+20-30)
  • (modified) llvm/test/CodeGen/AMDGPU/local-atomicrmw-fmax.ll (+21-31)
  • (modified) llvm/test/CodeGen/AMDGPU/local-atomicrmw-fmin.ll (+21-31)
  • (modified) llvm/test/CodeGen/AMDGPU/local-atomicrmw-fsub.ll (+20-30)
  • (modified) llvm/test/CodeGen/AMDGPU/mad-mix-hi.ll (+27-3)
  • (modified) llvm/test/CodeGen/AMDGPU/mad-mix-lo.ll (+13-10)
  • (modified) llvm/test/CodeGen/AMDGPU/mad-mix.ll (+77-19)
  • (modified) llvm/test/CodeGen/AMDGPU/maximumnum.bf16.ll (+10-14)
  • (modified) llvm/test/CodeGen/AMDGPU/maximumnum.ll (+65-135)
  • (modified) llvm/test/CodeGen/AMDGPU/minimumnum.bf16.ll (+10-14)
  • (modified) llvm/test/CodeGen/AMDGPU/minimumnum.ll (+57-117)
  • (modified) llvm/test/CodeGen/AMDGPU/omod.ll (+3-6)
  • (modified) llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/repeated-divisor.ll (+2-9)
  • (modified) llvm/test/CodeGen/AMDGPU/roundeven.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll (+112-153)
  • (modified) llvm/test/CodeGen/AMDGPU/select-flags-to-fmin-fmax.ll (+8-16)
  • (modified) llvm/test/CodeGen/AMDGPU/strict_fpext.ll (+2-11)
  • (modified) llvm/test/CodeGen/AMDGPU/strict_fptrunc.ll (+3-1)
  • (modified) llvm/test/CodeGen/AMDGPU/strictfp_f16_abi_promote.ll (+5-11)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-fadd.ll (+10-10)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-fmax.ll (+5)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-fmaximum.ll (+5)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-fmin.ll (+5)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-fminimum.ll (+5)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-reduce-fmul.ll (+10-10)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCallingConv.td b/llvm/lib/Target/AMDGPU/AMDGPUCallingConv.td
index e891fdba4e03e..2932bbf0e7bbd 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCallingConv.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCallingConv.td
@@ -14,6 +14,10 @@
 class CCIfNotInReg<CCAction A> : CCIf<"!ArgFlags.isInReg()", A> {}
 class CCIfExtend<CCAction A>
   : CCIf<"ArgFlags.isSExt() || ArgFlags.isZExt()", A>;
+class CCIfOrigTypeShaderCCIsSGPR<CCAction A>
+  : CCIf<[{(!OrigTy->getScalarType()->isFloatTy() &&
+            !OrigTy->getScalarType()->isHalfTy()) }], A>;
+
 
 // Calling convention for SI
 def CC_SI_Gfx : CallingConv<[
@@ -56,14 +60,15 @@ def CC_SI_SHADER : CallingConv<[
   >>>
 ]>;
 
+
 def RetCC_SI_Shader : CallingConv<[
   CCIfType<[i1, i16], CCIfExtend<CCPromoteToType<i32>>>,
-  CCIfType<[i32, i16, v2i16] , CCAssignToReg<
+  CCIfType<[i32, i16, v2i16], CCIfOrigTypeShaderCCIsSGPR<CCAssignToReg<
     !foreach(i, !range(0, 44), !cast<Register>("SGPR"#i))  // SGPR0-43
-  >>,
+  >>>,
 
   // 32*4 + 4 is the minimum for a fetch shader with 32 outputs.
-  CCIfType<[f32, f16, v2f16, bf16, v2bf16] , CCAssignToReg<
+  CCIfType<[f32, f16, v2f16, bf16, v2bf16, i32, i16, v2i16] , CCAssignToReg<
     !foreach(i, !range(0, 136), !cast<Register>("VGPR"#i))  // VGPR0-135
   >>
 ]>;
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index 49f5d514071e2..5dd3e929941eb 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -1121,6 +1121,9 @@ MVT SITargetLowering::getRegisterTypeForCallingConv(LLVMContext &Context,
     return Size == 32 ? ScalarVT.getSimpleVT() : MVT::i32;
   }
 
+  if (!Subtarget->has16BitInsts() && VT.getSizeInBits() == 16)
+    return MVT::i32;
+
   if (VT.getSizeInBits() > 32)
     return MVT::i32;
 
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/fma.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/fma.ll
index f48c72688533a..97fb83e0b6f45 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/fma.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/fma.ll
@@ -1664,6 +1664,7 @@ define amdgpu_ps <2 x half> @fma_v2s16_uniform(<2 x half> inreg %a, <2 x half> i
 ; GFX6-NEXT:    s_and_b32 s1, 0xffff, s1
 ; GFX6-NEXT:    s_lshl_b32 s1, s1, 16
 ; GFX6-NEXT:    s_or_b32 s0, s0, s1
+; GFX6-NEXT:    v_mov_b32_e32 v0, s0
 ; GFX6-NEXT:    ; return to shader part epilog
 ;
 ; GFX8-LABEL: fma_v2s16_uniform:
diff --git a/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.16bit.ll b/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.16bit.ll
index 5344095e99217..ed44b1c0b294a 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.16bit.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.16bit.ll
@@ -15,22 +15,17 @@ define half @bitcast_i16_to_f16(i16 %a, i32 %b) {
 ; SI-NEXT:    ; implicit-def: $vgpr0
 ; SI-NEXT:    s_and_saveexec_b64 s[4:5], vcc
 ; SI-NEXT:    s_xor_b64 s[4:5], exec, s[4:5]
-; SI-NEXT:    s_cbranch_execnz .LBB0_3
-; SI-NEXT:  ; %bb.1: ; %Flow
-; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
-; SI-NEXT:    s_cbranch_execnz .LBB0_4
-; SI-NEXT:  .LBB0_2: ; %end
-; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
-; SI-NEXT:    s_setpc_b64 s[30:31]
-; SI-NEXT:  .LBB0_3: ; %cmp.false
+; SI-NEXT:  ; %bb.1: ; %cmp.false
 ; SI-NEXT:    v_cvt_f32_f16_e32 v0, v2
 ; SI-NEXT:    ; implicit-def: $vgpr2
+; SI-NEXT:  ; %bb.2: ; %Flow
 ; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
-; SI-NEXT:    s_cbranch_execz .LBB0_2
-; SI-NEXT:  .LBB0_4: ; %cmp.true
+; SI-NEXT:  ; %bb.3: ; %cmp.true
 ; SI-NEXT:    v_add_i32_e32 v0, vcc, 3, v2
 ; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
+; SI-NEXT:  ; %bb.4: ; %end
 ; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; VI-LABEL: bitcast_i16_to_f16:
@@ -125,6 +120,7 @@ define inreg half @bitcast_i16_to_f16_scalar(i16 inreg %a, i32 inreg %b) {
 ; SI-NEXT:    s_add_i32 s6, s6, 3
 ; SI-NEXT:    v_cvt_f32_f16_e32 v0, s6
 ; SI-NEXT:  .LBB1_3: ; %end
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ; SI-NEXT:  .LBB1_4:
 ; SI-NEXT:    ; implicit-def: $vgpr0
@@ -199,8 +195,9 @@ define i16 @bitcast_f16_to_i16(half %a, i32 %b) {
 ; SI-LABEL: bitcast_f16_to_i16:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
+; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
 ; SI-NEXT:    v_cmp_ne_u32_e32 vcc, 0, v1
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    s_and_saveexec_b64 s[4:5], vcc
 ; SI-NEXT:    s_xor_b64 s[4:5], exec, s[4:5]
 ; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
@@ -294,8 +291,9 @@ define inreg i16 @bitcast_f16_to_i16_scalar(half inreg %a, i32 inreg %b) {
 ; SI-LABEL: bitcast_f16_to_i16_scalar:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; SI-NEXT:    v_cvt_f16_f32_e32 v0, s16
+; SI-NEXT:    v_cvt_f32_f16_e32 v0, s16
 ; SI-NEXT:    s_cmp_lg_u32 s17, 0
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    s_cbranch_scc0 .LBB3_4
 ; SI-NEXT:  ; %bb.1: ; %cmp.false
 ; SI-NEXT:    s_cbranch_execnz .LBB3_3
@@ -408,6 +406,8 @@ define bfloat @bitcast_i16_to_bf16(i16 %a, i32 %b) {
 ; SI-NEXT:    v_add_i32_e32 v0, vcc, 0x30000, v0
 ; SI-NEXT:  ; %bb.2: ; %end
 ; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
+; SI-NEXT:    v_mul_f32_e32 v0, 1.0, v0
+; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; VI-LABEL: bitcast_i16_to_bf16:
@@ -502,7 +502,8 @@ define inreg bfloat @bitcast_i16_to_bf16_scalar(i16 inreg %a, i32 inreg %b) {
 ; SI-NEXT:    s_lshl_b32 s4, s6, 16
 ; SI-NEXT:    s_add_i32 s7, s4, 0x30000
 ; SI-NEXT:  .LBB5_3: ; %end
-; SI-NEXT:    v_mov_b32_e32 v0, s7
+; SI-NEXT:    v_mul_f32_e64 v0, 1.0, s7
+; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ; SI-NEXT:  .LBB5_4:
 ; SI-NEXT:    ; implicit-def: $sgpr7
@@ -577,6 +578,7 @@ define i16 @bitcast_bf16_to_i16(bfloat %a, i32 %b) {
 ; SI-LABEL: bitcast_bf16_to_i16:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; SI-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    v_cmp_ne_u32_e32 vcc, 0, v1
 ; SI-NEXT:    v_mul_f32_e32 v1, 1.0, v0
 ; SI-NEXT:    ; implicit-def: $vgpr0
@@ -720,8 +722,9 @@ define inreg i16 @bitcast_bf16_to_i16_scalar(bfloat inreg %a, i32 inreg %b) {
 ; SI-LABEL: bitcast_bf16_to_i16_scalar:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; SI-NEXT:    s_lshl_b32 s4, s16, 16
 ; SI-NEXT:    s_cmp_lg_u32 s17, 0
-; SI-NEXT:    v_mul_f32_e64 v1, 1.0, s16
+; SI-NEXT:    v_mul_f32_e64 v1, 1.0, s4
 ; SI-NEXT:    s_cbranch_scc0 .LBB7_4
 ; SI-NEXT:  ; %bb.1: ; %cmp.false
 ; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v1
@@ -835,29 +838,27 @@ define bfloat @bitcast_f16_to_bf16(half %a, i32 %b) {
 ; SI-LABEL: bitcast_f16_to_bf16:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; SI-NEXT:    v_cvt_f16_f32_e32 v2, v0
+; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
 ; SI-NEXT:    v_cmp_ne_u32_e32 vcc, 0, v1
+; SI-NEXT:    v_cvt_f16_f32_e32 v2, v0
 ; SI-NEXT:    ; implicit-def: $vgpr0
 ; SI-NEXT:    s_and_saveexec_b64 s[4:5], vcc
 ; SI-NEXT:    s_xor_b64 s[4:5], exec, s[4:5]
-; SI-NEXT:    s_cbranch_execnz .LBB8_3
-; SI-NEXT:  ; %bb.1: ; %Flow
-; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
-; SI-NEXT:    s_cbranch_execnz .LBB8_4
-; SI-NEXT:  .LBB8_2: ; %end
-; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
-; SI-NEXT:    s_setpc_b64 s[30:31]
-; SI-NEXT:  .LBB8_3: ; %cmp.false
+; SI-NEXT:  ; %bb.1: ; %cmp.false
 ; SI-NEXT:    v_lshlrev_b32_e32 v0, 16, v2
 ; SI-NEXT:    ; implicit-def: $vgpr2
+; SI-NEXT:  ; %bb.2: ; %Flow
 ; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
-; SI-NEXT:    s_cbranch_execz .LBB8_2
-; SI-NEXT:  .LBB8_4: ; %cmp.true
+; SI-NEXT:    s_cbranch_execz .LBB8_4
+; SI-NEXT:  ; %bb.3: ; %cmp.true
 ; SI-NEXT:    v_cvt_f32_f16_e32 v0, v2
 ; SI-NEXT:    v_add_f32_e32 v0, 0x38000000, v0
 ; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
+; SI-NEXT:  .LBB8_4: ; %end
 ; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
+; SI-NEXT:    v_mul_f32_e32 v0, 1.0, v0
+; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; VI-LABEL: bitcast_f16_to_bf16:
@@ -942,21 +943,24 @@ define inreg bfloat @bitcast_f16_to_bf16_scalar(half inreg %a, i32 inreg %b) {
 ; SI-LABEL: bitcast_f16_to_bf16_scalar:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; SI-NEXT:    v_cvt_f16_f32_e32 v1, s16
+; SI-NEXT:    v_cvt_f32_f16_e32 v0, s16
 ; SI-NEXT:    s_cmp_lg_u32 s17, 0
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    s_cbranch_scc0 .LBB9_4
 ; SI-NEXT:  ; %bb.1: ; %cmp.false
-; SI-NEXT:    v_lshlrev_b32_e32 v0, 16, v1
+; SI-NEXT:    v_lshlrev_b32_e32 v1, 16, v0
 ; SI-NEXT:    s_cbranch_execnz .LBB9_3
 ; SI-NEXT:  .LBB9_2: ; %cmp.true
-; SI-NEXT:    v_cvt_f32_f16_e32 v0, v1
+; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
 ; SI-NEXT:    v_add_f32_e32 v0, 0x38000000, v0
 ; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
-; SI-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
+; SI-NEXT:    v_lshlrev_b32_e32 v1, 16, v0
 ; SI-NEXT:  .LBB9_3: ; %end
+; SI-NEXT:    v_mul_f32_e32 v0, 1.0, v1
+; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ; SI-NEXT:  .LBB9_4:
-; SI-NEXT:    ; implicit-def: $vgpr0
+; SI-NEXT:    ; implicit-def: $vgpr1
 ; SI-NEXT:    s_branch .LBB9_2
 ;
 ; VI-LABEL: bitcast_f16_to_bf16_scalar:
@@ -1049,30 +1053,26 @@ define half @bitcast_bf16_to_f16(bfloat %a, i32 %b) {
 ; SI-LABEL: bitcast_bf16_to_f16:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; SI-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    v_cmp_ne_u32_e32 vcc, 0, v1
 ; SI-NEXT:    v_mul_f32_e32 v1, 1.0, v0
 ; SI-NEXT:    ; implicit-def: $vgpr0
 ; SI-NEXT:    s_and_saveexec_b64 s[4:5], vcc
 ; SI-NEXT:    s_xor_b64 s[4:5], exec, s[4:5]
-; SI-NEXT:    s_cbranch_execnz .LBB10_3
-; SI-NEXT:  ; %bb.1: ; %Flow
-; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
-; SI-NEXT:    s_cbranch_execnz .LBB10_4
-; SI-NEXT:  .LBB10_2: ; %end
-; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
-; SI-NEXT:    s_setpc_b64 s[30:31]
-; SI-NEXT:  .LBB10_3: ; %cmp.false
+; SI-NEXT:  ; %bb.1: ; %cmp.false
 ; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v1
 ; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
 ; SI-NEXT:    ; implicit-def: $vgpr1
+; SI-NEXT:  ; %bb.2: ; %Flow
 ; SI-NEXT:    s_andn2_saveexec_b64 s[4:5], s[4:5]
-; SI-NEXT:    s_cbranch_execz .LBB10_2
-; SI-NEXT:  .LBB10_4: ; %cmp.true
+; SI-NEXT:  ; %bb.3: ; %cmp.true
 ; SI-NEXT:    v_and_b32_e32 v0, 0xffff0000, v1
 ; SI-NEXT:    v_add_f32_e32 v0, 0x40c00000, v0
 ; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
+; SI-NEXT:  ; %bb.4: ; %end
 ; SI-NEXT:    s_or_b64 exec, exec, s[4:5]
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v0
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; VI-LABEL: bitcast_bf16_to_f16:
@@ -1194,22 +1194,24 @@ define inreg half @bitcast_bf16_to_f16_scalar(bfloat inreg %a, i32 inreg %b) {
 ; SI-LABEL: bitcast_bf16_to_f16_scalar:
 ; SI:       ; %bb.0:
 ; SI-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; SI-NEXT:    s_lshl_b32 s4, s16, 16
 ; SI-NEXT:    s_cmp_lg_u32 s17, 0
-; SI-NEXT:    v_mul_f32_e64 v1, 1.0, s16
+; SI-NEXT:    v_mul_f32_e64 v0, 1.0, s4
 ; SI-NEXT:    s_cbranch_scc0 .LBB11_4
 ; SI-NEXT:  ; %bb.1: ; %cmp.false
-; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v1
-; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
+; SI-NEXT:    v_lshrrev_b32_e32 v1, 16, v0
+; SI-NEXT:    v_cvt_f32_f16_e32 v1, v1
 ; SI-NEXT:    s_cbranch_execnz .LBB11_3
 ; SI-NEXT:  .LBB11_2: ; %cmp.true
-; SI-NEXT:    v_and_b32_e32 v0, 0xffff0000, v1
+; SI-NEXT:    v_and_b32_e32 v0, 0xffff0000, v0
 ; SI-NEXT:    v_add_f32_e32 v0, 0x40c00000, v0
 ; SI-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
-; SI-NEXT:    v_cvt_f32_f16_e32 v0, v0
+; SI-NEXT:    v_cvt_f32_f16_e32 v1, v0
 ; SI-NEXT:  .LBB11_3: ; %end
+; SI-NEXT:    v_cvt_f16_f32_e32 v0, v1
 ; SI-NEXT:    s_setpc_b64 s[30:31]
 ; SI-NEXT:  .LBB11_4:
-; SI-NEXT:    ; implicit-def: $vgpr0
+; SI-NEXT:    ; implicit-def: $vgpr1
 ; SI-NEXT:    s_branch .LBB11_2
 ;
 ; VI-LABEL: bitcast_bf16_to_f16_scalar:
diff --git a/llvm/test/CodeGen/AMDGPU/bf16.ll b/llvm/test/CodeGen/AMDGPU/bf16.ll
index daa771a843ee6..0394ed7f89633 100644
--- a/llvm/test/CodeGen/AMDGPU/bf16.ll
+++ b/llvm/test/CodeGen/AMDGPU/bf16.ll
@@ -2280,12 +2280,10 @@ define void @test_arg_store(bfloat %in, ptr addrspace(1) %out) {
 ; GCN-LABEL: test_arg_store:
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT:    s_mov_b32 s7, 0xf000
 ; GCN-NEXT:    s_mov_b32 s6, 0
-; GCN-NEXT:    v_mul_f32_e32 v0, 1.0, v0
+; GCN-NEXT:    s_mov_b32 s7, 0xf000
 ; GCN-NEXT:    s_mov_b32 s4, s6
 ; GCN-NEXT:    s_mov_b32 s5, s6
-; GCN-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; GCN-NEXT:    buffer_store_short v0, v[1:2], s[4:7], 0 addr64
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
@@ -2294,11 +2292,9 @@ define void @test_arg_store(bfloat %in, ptr addrspace(1) %out) {
 ; GFX7:       ; %bb.0:
 ; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX7-NEXT:    s_mov_b32 s6, 0
-; GFX7-NEXT:    v_mul_f32_e32 v0, 1.0, v0
 ; GFX7-NEXT:    s_mov_b32 s7, 0xf000
 ; GFX7-NEXT:    s_mov_b32 s4, s6
 ; GFX7-NEXT:    s_mov_b32 s5, s6
-; GFX7-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; GFX7-NEXT:    buffer_store_short v0, v[1:2], s[4:7], 0 addr64
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    s_setpc_b64 s[30:31]
@@ -2681,12 +2677,11 @@ define amdgpu_gfx void @test_inreg_arg_store(bfloat inreg %in, ptr addrspace(1)
 ; GCN-LABEL: test_inreg_arg_store:
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT:    s_mov_b32 s39, 0xf000
 ; GCN-NEXT:    s_mov_b32 s38, 0
-; GCN-NEXT:    v_mul_f32_e64 v2, 1.0, s4
+; GCN-NEXT:    s_mov_b32 s39, 0xf000
+; GCN-NEXT:    v_mov_b32_e32 v2, s4
 ; GCN-NEXT:    s_mov_b32 s36, s38
 ; GCN-NEXT:    s_mov_b32 s37, s38
-; GCN-NEXT:    v_lshrrev_b32_e32 v2, 16, v2
 ; GCN-NEXT:    buffer_store_short v2, v[0:1], s[36:39], 0 addr64
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
@@ -2695,11 +2690,10 @@ define amdgpu_gfx void @test_inreg_arg_store(bfloat inreg %in, ptr addrspace(1)
 ; GFX7:       ; %bb.0:
 ; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX7-NEXT:    s_mov_b32 s38, 0
-; GFX7-NEXT:    v_mul_f32_e64 v2, 1.0, s4
 ; GFX7-NEXT:    s_mov_b32 s39, 0xf000
 ; GFX7-NEXT:    s_mov_b32 s36, s38
 ; GFX7-NEXT:    s_mov_b32 s37, s38
-; GFX7-NEXT:    v_lshrrev_b32_e32 v2, 16, v2
+; GFX7-NEXT:    v_mov_b32_e32 v2, s4
 ; GFX7-NEXT:    buffer_store_short v2, v[0:1], s[36:39], 0 addr64
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    s_setpc_b64 s[30:31]
@@ -2764,18 +2758,14 @@ define bfloat @test_byval(ptr addrspace(5) byval(bfloat) %bv, bfloat %val) {
 ; GCN-LABEL: test_byval:
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT:    v_mul_f32_e32 v1, 1.0, v0
-; GCN-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
-; GCN-NEXT:    buffer_store_short v1, off, s[0:3], s32
+; GCN-NEXT:    buffer_store_short v0, off, s[0:3], s32
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX7-LABEL: test_byval:
 ; GFX7:       ; %bb.0:
 ; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX7-NEXT:    v_mul_f32_e32 v1, 1.0, v0
-; GFX7-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
-; GFX7-NEXT:    buffer_store_short v1, off, s[0:3], s32
+; GFX7-NEXT:    buffer_store_short v0, off, s[0:3], s32
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -2842,8 +2832,6 @@ define void @test_sret(ptr addrspace(5) sret(bfloat) %sret, bfloat %val) {
 ; GCN-LABEL: test_sret:
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT:    v_mul_f32_e32 v1, 1.0, v1
-; GCN-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
 ; GCN-NEXT:    buffer_store_short v1, v0, s[0:3], 0 offen
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
@@ -2851,8 +2839,6 @@ define void @test_sret(ptr addrspace(5) sret(bfloat) %sret, bfloat %val) {
 ; GFX7-LABEL: test_sret:
 ; GFX7:       ; %bb.0:
 ; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX7-NEXT:    v_mul_f32_e32 v1, 1.0, v1
-; GFX7-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
 ; GFX7-NEXT:    buffer_store_short v1, v0, s[0:3], 0 offen
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    s_setpc_b64 s[30:31]
@@ -3327,8 +3313,6 @@ define void @test_call(bfloat %in, ptr addrspace(5) %out) {
 ; GCN-NEXT:    s_load_dwordx2 s[16:17], s[16:17], 0x0
 ; GCN-NEXT:    s_waitcnt lgkmcnt(0)
 ; GCN-NEXT:    s_swappc_b64 s[30:31], s[16:17]
-; GCN-NEXT:    v_mul_f32_e32 v0, 1.0, v0
-; GCN-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; GCN-NEXT:    buffer_store_short v0, v1, s[0:3], 0 offen
 ; GCN-NEXT:    s_waitcnt vmcnt(0)
 ; GCN-NEXT:    v_readlane_b32 s31, v2, 1
@@ -3358,8 +3342,6 @@ define void @test_call(bfloat %in, ptr addrspace(5) %out) {
 ; GFX7-NEXT:    v_writelane_b32 v2, s31, 1
 ; GFX7-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX7-NEXT:    s_swappc_b64 s[30:31], s[16:17]
-; GFX7-NEXT:    v_mul_f32_e32 v0, 1.0, v0
-; GFX7-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; GFX7-NEXT:    buffer_store_short v0, v1, s[0:3], 0 offen
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    v_readlane_b32 s31, v2, 1
@@ -5068,25 +5050,19 @@ define bfloat @test_alloca_load_store_ret(bfloat %in) {
 ; GCN-LABEL: test_alloca_load_store_ret:
 ; GCN:       ; %bb.0: ; %entry
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT:    v_mul_f32_e32 v0, 1.0, v0
-; GCN-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; GCN-NEXT:    buffer_store_short v0, off, s[0:3], s32
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0)
 ; GCN-NEXT:    buffer_load_ushort v0, off, s[0:3], s32 glc
 ; GCN-NEXT:    s_waitcnt vmcnt(0)
-; GCN-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX7-LABEL: test_alloca_load_store_ret:
 ; GFX7:       ; %bb.0: ; %entry
 ; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX7-NEXT:    v_mul_f32_e32 v0, 1.0, v0
-; GFX7-NEXT:    v_lshrrev_b32_e32 v0, 16, v0
 ; GFX7-NEXT:    buffer_store_short v0, off, s[0:3], s32
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    buffer_load_ushort v0, off, s[0:3], s32 glc
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
-; GFX7-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
 ; GFX7-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX8-LABEL: test_alloca_load_store_ret:
@@ -5199,7 +5175,6 @@ define { <32 x i32>, bfloat } @test_overflow_stack(bfloat %a, <32 x i32> %b) {
 ; GCN-NEXT:    buffer_store_dword v26, v29, s[0:3], 0 offen
 ; GCN-NEXT:    s_waitcnt expcnt(0)
 ; GCN-NEXT:    v_add_i32_e32 v26, vcc, 0x48, v0
-; GCN-NEXT:    v_mul_f32_e32 v1, 1.0, v1
 ; GCN-NEXT:    v_add_i32_e32 v29, vcc, 0x44, v0
 ; GCN-NEXT:    buffer_store_dword v25, v31, s[0:3], 0 offen
 ; GCN-NEXT:    s_waitcnt expcnt(0)
@@ -5234,7 +5209,6 @@ define { <32 x i32>, bfloat } @test_overflow_stack(bfloat %a, <32 x i32> %b) {
 ; GCN-NEXT:    v_add_i32_e32 v18, vcc, 8, v0
 ; GCN-NEXT:    v_add_i32_e32 v25, vcc, 4, v0
 ; GCN-NEXT:    v_add_i32_e32 v0, vcc, 0x80, v0
-; GCN-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
 ; GCN-NEXT:    buffer_store_dword v17, v31, s[0:3], 0 offen
 ; GCN-NEXT:    buffer_store_dword v16, v2, s[0:3], 0 offen
 ; GCN-NEXT:    buffer_store_dword v15, v24, s[0:3], 0 offen
@@ -5260,8 +5234,6 @@ define { <32 x i32>, bfloat } @test_overflow_stack(bfloat %a, <32 x i32> %b) {
 ; GFX7-NEXT:    buffer_store_dword v2, v0, s[0:3], 0 offen
 ; GFX7-NEXT:    buffer_load_dword v2, off, s[0:3], s32 offset:8
 ; GFX7-NEXT:    v_add_i32_e32 v31, vcc, 0x7c, v0
-; GFX7-NEXT:    v_mul_f32_e32 v1, 1.0, v1
-; GFX7-NEXT:    v_lshrrev_b32_e32 v1, 16, v1
 ; GFX7-NEXT:    s_waitcnt vmcnt(0)
 ; GFX7-NEXT:    buffer_store_dword v2, v31, s[0:3], 0 offen
 ; GFX7-NEXT:    buffer_load_dword v2, off, s[0:3], s32 offset:4
@@ -9509,23 +9481,19 @@ define bfloat @v_fadd_bf16(bfloat %a, bfloat %b) {
 ; GCN-LABEL: v_fadd_bf16:
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT:    v_mul_f32_e32 v0, 1.0, v0
-; GCN-NEXT:    v_mul_f32_e32 v1, 1.0, v1
-; GCN-NEXT:    v_and_b32_e32 v1, 0xffff0000, v1
-; GCN-NEXT:    v_and_b32_e32 v0, 0xffff0000, v0
+; GCN-NEXT:    v_lshlrev_b32_e32 v1, 16, v1
+; GCN-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
 ; GCN-NEXT:    v_add_f32_e32 v0, v0, v1
-; GCN-NEXT:    v_and_b32_e32 v0, 0xffff0000, v0
+; GCN-NEXT:    v_lshrrev_b32_e32 v0,...
[truncated]

@llvmbot
Member

llvmbot commented Jan 13, 2026

@llvm/pr-subscribers-backend-amdgpu


@arsenm arsenm force-pushed the users/arsenm/amdgpu/change-abi-16-bit-gfx6-gfx7 branch from 71cb2b0 to e802568 on January 13, 2026 20:09
@arsenm arsenm force-pushed the users/arsenm/amdgpu/use-packed-16bit-vector-abi-gfx6-gfx7 branch from ca4c52a to a756489 on January 13, 2026 20:09
@arsenm arsenm force-pushed the users/arsenm/amdgpu/change-abi-16-bit-gfx6-gfx7 branch from e802568 to 9f464f1 on January 13, 2026 20:10
@arsenm arsenm force-pushed the users/arsenm/amdgpu/use-packed-16bit-vector-abi-gfx6-gfx7 branch from a756489 to 06a1b06 on January 13, 2026 20:10
@arsenm arsenm marked this pull request as ready for review January 13, 2026 20:11
@arsenm arsenm force-pushed the users/arsenm/amdgpu/use-packed-16bit-vector-abi-gfx6-gfx7 branch from 06a1b06 to 8d2c40c on January 22, 2026 14:23
@arsenm arsenm force-pushed the users/arsenm/amdgpu/change-abi-16-bit-gfx6-gfx7 branch from 9f464f1 to 81ec09c on January 22, 2026 14:23
Base automatically changed from users/arsenm/amdgpu/use-packed-16bit-vector-abi-gfx6-gfx7 to main January 22, 2026 16:24
@arsenm arsenm merged commit 056e5a3 into main Jan 22, 2026
8 of 10 checks passed
@arsenm arsenm deleted the users/arsenm/amdgpu/change-abi-16-bit-gfx6-gfx7 branch January 22, 2026 18:34
Harrish92 pushed a commit to Harrish92/llvm-project that referenced this pull request Jan 23, 2026
Harrish92 pushed a commit to Harrish92/llvm-project that referenced this pull request Jan 24, 2026