
Conversation

@andjo403
Contributor

Remove the vector check so this fold is always performed.

proof: https://alive2.llvm.org/ce/z/oabD6J
closes #172888
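
The fold removed from the scalar-only restriction here rewrites `icmp ne (and X, 1), 0` into `trunc X to i1`. Since truncating an integer to `i1` keeps only its lowest bit, both forms agree for every `X`. The small script below is a sanity-check sketch of that equivalence (it models the semantics in Python; the formal proof is the Alive2 link above):

```python
# Model both sides of the fold:
#   icmp ne (and X, 1), 0  -->  trunc X to i1

def icmp_ne_and_1(x: int) -> bool:
    """Original form: (X & 1) != 0."""
    return (x & 1) != 0

def trunc_to_i1(x: int) -> bool:
    """Folded form: trunc X to i1, i.e. the low bit of X."""
    return (x & 1) == 1

# Exhaustively compare over a sample range, including negatives
# (two's-complement low bit is unaffected by sign).
for x in range(-1024, 1024):
    assert icmp_ne_and_1(x) == trunc_to_i1(x)
print("fold holds on the sampled range")
```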

@andjo403 andjo403 requested a review from dtcxzyw January 30, 2026 22:15
@andjo403 andjo403 requested a review from nikic as a code owner January 30, 2026 22:15
@llvmbot llvmbot added PGO Profile Guided Optimizations llvm:instcombine Covers the InstCombine, InstSimplify and AggressiveInstCombine passes llvm:analysis Includes value tracking, cost tables and constant folding llvm:transforms labels Jan 30, 2026
@llvmbot
Member

llvmbot commented Jan 30, 2026

@llvm/pr-subscribers-pgo

Author: Andreas Jonson (andjo403)

Changes

Remove the vector check so this fold is always performed.

proof: https://alive2.llvm.org/ce/z/oabD6J
closes #172888


Patch is 77.35 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/178977.diff

21 Files Affected:

  • (modified) llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp (+2-5)
  • (modified) llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll (+4-8)
  • (modified) llvm/test/Transforms/InstCombine/and-or-icmps.ll (+18-25)
  • (modified) llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll (+16-16)
  • (modified) llvm/test/Transforms/InstCombine/cmp-intrinsic.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/exact.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/icmp-and-shift.ll (+12-17)
  • (modified) llvm/test/Transforms/InstCombine/icmp-binop.ll (+4-8)
  • (modified) llvm/test/Transforms/InstCombine/icmp-mul-and.ll (+3-6)
  • (modified) llvm/test/Transforms/InstCombine/icmp-mul.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/icmp-ne-pow2.ll (+2-3)
  • (modified) llvm/test/Transforms/InstCombine/icmp.ll (+7-10)
  • (modified) llvm/test/Transforms/InstCombine/load-cmp.ll (+12-18)
  • (modified) llvm/test/Transforms/InstCombine/or.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/shift-amount-reassociation-in-bittest-with-truncation-lshr.ll (+2-4)
  • (modified) llvm/test/Transforms/InstCombine/shift-amount-reassociation-in-bittest-with-truncation-shl.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/shift-amount-reassociation-in-bittest.ll (+4-6)
  • (modified) llvm/test/Transforms/LoopUnroll/WebAssembly/basic-unrolling.ll (+1-1)
  • (modified) llvm/test/Transforms/PGOProfile/chr.ll (+98-105)
  • (modified) llvm/test/Transforms/PGOProfile/chr_coro.ll (+23-11)
  • (modified) llvm/test/Transforms/PhaseOrdering/AArch64/extra-unroll-simplifications.ll (+2-2)
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
index aa762753130b0..3c6d5affd6b36 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
@@ -1786,11 +1786,8 @@ Instruction *InstCombinerImpl::foldICmpAndConstConst(ICmpInst &Cmp,
                                                      const APInt &C1) {
   bool isICMP_NE = Cmp.getPredicate() == ICmpInst::ICMP_NE;
 
-  // For vectors: icmp ne (and X, 1), 0 --> trunc X to N x i1
-  // TODO: We canonicalize to the longer form for scalars because we have
-  // better analysis/folds for icmp, and codegen may be better with icmp.
-  if (isICMP_NE && Cmp.getType()->isVectorTy() && C1.isZero() &&
-      match(And->getOperand(1), m_One()))
+  // icmp ne (and X, 1), 0 --> trunc X to i1
+  if (isICMP_NE && C1.isZero() && match(And->getOperand(1), m_One()))
     return new TruncInst(And->getOperand(0), Cmp.getType());
 
   const APInt *C2;
diff --git a/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll b/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll
index 663de281f19ba..868e340c266ad 100644
--- a/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll
+++ b/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll
@@ -221,8 +221,7 @@ define i1 @blsmsk_gt_is_false_assume(i32 %x) {
 
 define i32 @blsmsk_add_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsmsk_add_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 33
 ;
@@ -261,8 +260,7 @@ define <2 x i32> @blsmsk_add_eval_assume_vec(<2 x i32> %x) {
 
 define i32 @blsmsk_sub_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsmsk_sub_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 -31
 ;
@@ -277,8 +275,7 @@ define i32 @blsmsk_sub_eval_assume(i32 %x) {
 
 define i32 @blsmsk_or_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsmsk_or_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 33
 ;
@@ -545,8 +542,7 @@ define <2 x i1> @blsi_cmp_eq_diff_bits_vec(<2 x i32> %x) {
 
 define i32 @blsi_xor_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsi_xor_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 33
 ;
diff --git a/llvm/test/Transforms/InstCombine/and-or-icmps.ll b/llvm/test/Transforms/InstCombine/and-or-icmps.ll
index 975d3a072bcd3..4cd089ca524c2 100644
--- a/llvm/test/Transforms/InstCombine/and-or-icmps.ll
+++ b/llvm/test/Transforms/InstCombine/and-or-icmps.ll
@@ -1418,7 +1418,7 @@ define i1 @bitwise_and_bitwise_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-NEXT:    [[TMP1:%.*]] = or i8 [[Z_SHIFT]], 1
 ; CHECK-NEXT:    [[TMP2:%.*]] = and i8 [[X:%.*]], [[TMP1]]
 ; CHECK-NEXT:    [[TMP3:%.*]] = icmp eq i8 [[TMP2]], [[TMP1]]
-; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[TMP3]], [[C1]]
+; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[C1]], [[TMP3]]
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
   %c1 = icmp eq i8 %y, 42
@@ -1439,7 +1439,7 @@ define i1 @bitwise_and_bitwise_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 ; CHECK-NEXT:    [[TMP1:%.*]] = or i8 [[Z_SHIFT]], 1
 ; CHECK-NEXT:    [[TMP2:%.*]] = and i8 [[X:%.*]], [[TMP1]]
 ; CHECK-NEXT:    [[TMP3:%.*]] = icmp eq i8 [[TMP2]], [[TMP1]]
-; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[TMP3]], [[C1]]
+; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[C1]], [[TMP3]]
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
   %c1 = icmp eq i8 %y, 42
@@ -1540,10 +1540,9 @@ define i1 @bitwise_and_logical_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
@@ -1563,10 +1562,9 @@ define i1 @logical_and_bitwise_and_icmps(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps_comm1(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[C3]], i1 [[AND1]], i1 false
@@ -1586,12 +1584,11 @@ define i1 @logical_and_bitwise_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps_comm2(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
-; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C2]], [[C1]]
+; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
@@ -1609,12 +1606,11 @@ define i1 @logical_and_bitwise_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps_comm3(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
-; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C2]], [[C1]]
+; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[C3]], i1 [[AND1]], i1 false
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
@@ -1632,10 +1628,9 @@ define i1 @logical_and_bitwise_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = select i1 [[C1]], i1 [[C2]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
@@ -1655,10 +1650,9 @@ define i1 @logical_and_logical_and_icmps(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps_comm1(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[TMP1:%.*]] = select i1 [[C3]], i1 [[C1]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[TMP1]], i1 [[C2]], i1 false
@@ -1678,10 +1672,9 @@ define i1 @logical_and_logical_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps_comm2(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = select i1 [[C2]], i1 [[C1]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
diff --git a/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll b/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
index 5883c089119c4..f8db9e3b7f0d1 100644
--- a/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
+++ b/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
@@ -7,24 +7,24 @@ declare void @use1(i1)
 ; Basic case - all good.
 define i8 @p0(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @p0(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]], !prof [[PROF0:![0-9]+]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   %r = select i1 %t1, i8 %v0, i8 %v1, !prof !0
   ret i8 %r
 }
 define i8 @p1(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @p1(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
+  %t0 = and i8 %x, 2
   %t1 = icmp ne i8 %t0, 0
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
@@ -33,14 +33,14 @@ define i8 @p1(i8 %x, i8 %v0, i8 %v1) {
 ; Can't invert all users of original condition
 define i8 @n2(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @n2(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1:%.*]] = icmp ne i8 [[T0]], 0
 ; CHECK-NEXT:    call void @use1(i1 [[T1]])
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1]], i8 [[V0:%.*]], i8 [[V1:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   call void @use1(i1 %t1) ; condition has un-invertable use
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
@@ -50,7 +50,7 @@ define i8 @n2(i8 %x, i8 %v0, i8 %v1) {
 define i8 @t3(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out, i1 %c) {
 ; CHECK-LABEL: @t3(
 ; CHECK-NEXT:  bb0:
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    br i1 [[C:%.*]], label [[BB1:%.*]], label [[BB2:%.*]]
 ; CHECK:       bb1:
@@ -62,8 +62,8 @@ define i8 @t3(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out, i1 %c) {
 ; CHECK-NEXT:    ret i8 [[R1]]
 ;
 bb0:
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   br i1 %c, label %bb1, label %bb2
 bb1:
   %r0 = select i1 %t1, i8 %v0, i8 %v1
@@ -75,14 +75,14 @@ bb2:
 }
 define i8 @t4(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out) {
 ; CHECK-LABEL: @t4(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R0:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]]
 ; CHECK-NEXT:    store i8 [[R0]], ptr [[OUT:%.*]], align 1
 ; CHECK-NEXT:    [[R1:%.*]] = select i1 [[T1_NOT]], i8 [[V3:%.*]], i8 [[V2:%.*]]
 ; CHECK-NEXT:    ret i8 [[R1]]
 ;
-  %t0 = and i8 %x, 1
+  %t0 = and i8 %x, 2
   %t1 = icmp ne i8 %t0, 0
   %r0 = select i1 %t1, i8 %v0, i8 %v1
   store i8 %r0, ptr %out
@@ -111,13 +111,13 @@ define i8 @n6(i8 %x, i8 %v0, i8 %v1) {
 }
 define i8 @n7(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @n7(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT_NOT]], i8 [[V0:%.*]], i8 [[V1:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp ne i8 %t0, 1 ; not checking that it's zero
+  %t0 = and i8 %x, 2
+  %t1 = icmp ne i8 %t0, 2 ; not checking that it's zero
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
 }
diff --git a/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll b/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
index 19c4cc979d4ba..12c18e2ec0302 100644
--- a/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
+++ b/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
@@ -274,8 +274,7 @@ define <2 x i1> @cttz_eq_bitwidth_v2i32(<2 x i32> %a) {
 
 define i1 @cttz_eq_zero_i33(i33 %x) {
 ; CHECK-LABEL: @cttz_eq_zero_i33(
-; CHECK-NEXT:    [[TMP1:%.*]] = and i33 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i33 [[TMP1]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i33 [[X:%.*]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %tz = tail call i33 @llvm.cttz.i33(i33 %x, i1 false)
diff --git a/llvm/test/Transforms/InstCombine/exact.ll b/llvm/test/Transforms/InstCombine/exact.ll
index 819e8fbb89b5f..d8bbcaa949660 100644
--- a/llvm/test/Transforms/InstCombine/exact.ll
+++ b/llvm/test/Transforms/InstCombine/exact.ll
@@ -150,8 +150,7 @@ define <2 x i1> @ashr_icmp2_vec(<2 x i64> %X) {
 ; Make sure we don't transform the ashr here into an sdiv
 define i1 @pr9998(i32 %V) {
 ; CHECK-LABEL: @pr9998(
-; CHECK-NEXT:    [[TMP1:%.*]] = and i32 [[V:%.*]], 1
-; CHECK-NEXT:    [[Z:%.*]] = icmp ne i32 [[TMP1]], 0
+; CHECK-NEXT:    [[Z:%.*]] = trunc i32 [[V:%.*]] to i1
 ; CHECK-NEXT:    ret i1 [[Z]]
 ;
   %W = shl i32 %V, 31
diff --git a/llvm/test/Transforms/InstCombine/icmp-and-shift.ll b/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
index 78f1bc7d7379d..2973bb979181d 100644
--- a/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
+++ b/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
@@ -608,9 +608,8 @@ define i1 @fold_ne_rhs_fail_shift_not_1s(i8 %x, i8 %yy) {
 
 define i1 @test_shr_and_1_ne_0(i32 %a, i32 %b) {
 ; CHECK-LABEL: @test_shr_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[A:%.*]], [[TMP1]]
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 [[A:%.*]], [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 %a, %b
@@ -621,9 +620,8 @@ define i1 @test_shr_and_1_ne_0(i32 %a, i32 %b) {
 
 define i1 @test_shr_and_1_ne_0_samesign(i32 %a, i32 %b) {
 ; CHECK-LABEL: @test_shr_and_1_ne_0_samesign(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[A:%.*]], [[TMP1]]
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 [[A:%.*]], [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 %a, %b
@@ -634,9 +632,8 @@ define i1 @test_shr_and_1_ne_0_samesign(i32 %a, i32 %b) {
 
 define i1 @test_const_shr_and_1_ne_0(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[TMP1]], 42
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 42, %b
@@ -660,9 +657,8 @@ define i1 @test_not_const_shr_and_1_ne_0(i32 %b) {
 
 define i1 @test_const_shr_exact_and_1_ne_0(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_exact_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[TMP1]], 42
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr exact i32 42, [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr exact i32 42, %b
@@ -721,10 +717,9 @@ define i1 @test_const_shr_and_1_ne_0_i1_negative(i1 %b) {
 define i1 @test_const_shr_and_1_ne_0_multi_use_lshr_negative(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0_multi_use_lshr_negative(
 ; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
-; CHECK-NEXT:    [[AND:%.*]] = and i32 [[SHR]], 1
-; CHECK-NEXT:    [[CMP1:%.*]] = icmp ne i32 [[AND]], 0
+; CHECK-NEXT:    [[CMP1:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    [[CMP2:%.*]] = icmp eq i32 [[B]], [[SHR]]
-; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP1]], [[CMP2]]
+; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP2]], [[CMP1]]
 ; CHECK-NEXT:    ret i1 [[RET]]
 ;
   %shr = lshr i32 42, %b
@@ -739,9 +734,9 @@ define i1 @test_const_shr_and_1_ne_0_multi_use_and_negative(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0_multi_use_and_negative(
 ; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
 ; CHECK-NEXT:    [[AND:%.*]] = and i32 [[SHR]], 1
-; CHECK-NEXT:    [[CMP1:%.*]] = icmp ne i32 [[AND]], 0
+; CHECK-NEXT:    [[CMP1:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    [[CMP2:%.*]] = icmp eq i32 [[B]], [[AND]]
-; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP1]], [[CMP2]]
+; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP2]], [[CMP1]]
 ; CHECK-NEXT:    ret i1 [[RET]]
 ;
   %shr = lshr i32 42, %b
diff --git a/llvm/test/Transforms/InstCombine/icmp-binop.ll b/llvm/test/Transforms/InstCombine/icmp-binop.ll
index 3b4eca3ba69b3..4c7eccbde9f2f 100644
--- a/llvm/test/Transforms/InstCombine/icmp-binop.ll
+++ b/llvm/test/Transforms/InstCombine/icmp-binop.ll
@@ -36,11 +36,9 @@ define <2 x i1> @mul_unkV_oddC_ne_vec(<2 x i64> %v) {
 
 define i1 @mul_assumeoddV_asumeoddV_eq(i16 %v, i16 %v2) {
 ; CHECK-LABEL: @mul_assumeoddV_asumeoddV_eq(
-; CHECK-NEXT:    [[LB:%.*]] = and i16 [[V:%.*]], 1
-; CHECK-NEXT:    [[ODD:%.*]] = icmp ne i16 [[LB]], 0
+; CHECK-NEXT:    [[ODD:%.*]] = trunc i16 [[V:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD]])
-; CHECK-NEXT:    [[LB2:%.*]] = and i16 [[V2:%.*]], 1
-; CHECK-NEXT:    [[ODD2:%.*]] = icmp ne i16 [[LB2]], 0
+; CHECK-NEXT:    [[ODD2:%.*]] = trunc i16 [[V2:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD2]])
 ; CHECK-NEXT:    ret i1 true
 ;
@@ -81,8 +79,7 @@ define i1 @mul_reused_unkV_oddC_ne(i64 %v) {
 
 define i1 @mul_assumeoddV_unkV_eq(i16 %v, i16 %v2) {
 ; CHECK-LABEL: @mul_assumeoddV_unkV_eq(
-; CHECK-NEXT:    [[LB:%.*]] = and i16 [[V2:%.*]], 1
-; CHECK-NEXT:    [[ODD:%.*]] = icmp ne i16 [[LB]], 0
+; CHECK-NEXT:    [[ODD:%.*]] = trunc i16 [[V2:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD]])
 ; CHECK-NEXT:    [[CMP:%.*]] = icmp eq i16 [[V:%.*]], 0
 ; CHECK-NEXT:    ret i1 [[CMP]]
@@ -97,8 +94,7 @@ define i1 @mul_assumeoddV_unkV_eq(i16 %v, ...
[truncated]

@llvmbot
Member

llvmbot commented Jan 30, 2026

@llvm/pr-subscribers-llvm-analysis

(Same patch description and diff as in the comment above.)
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[TMP1:%.*]] = select i1 [[C3]], i1 [[C1]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[TMP1]], i1 [[C2]], i1 false
@@ -1678,10 +1672,9 @@ define i1 @logical_and_logical_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps_comm2(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = select i1 [[C2]], i1 [[C1]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
diff --git a/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll b/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
index 5883c089119c4..f8db9e3b7f0d1 100644
--- a/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
+++ b/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
@@ -7,24 +7,24 @@ declare void @use1(i1)
 ; Basic case - all good.
 define i8 @p0(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @p0(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]], !prof [[PROF0:![0-9]+]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   %r = select i1 %t1, i8 %v0, i8 %v1, !prof !0
   ret i8 %r
 }
 define i8 @p1(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @p1(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
+  %t0 = and i8 %x, 2
   %t1 = icmp ne i8 %t0, 0
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
@@ -33,14 +33,14 @@ define i8 @p1(i8 %x, i8 %v0, i8 %v1) {
 ; Can't invert all users of original condition
 define i8 @n2(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @n2(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1:%.*]] = icmp ne i8 [[T0]], 0
 ; CHECK-NEXT:    call void @use1(i1 [[T1]])
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1]], i8 [[V0:%.*]], i8 [[V1:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   call void @use1(i1 %t1) ; condition has un-invertable use
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
@@ -50,7 +50,7 @@ define i8 @n2(i8 %x, i8 %v0, i8 %v1) {
 define i8 @t3(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out, i1 %c) {
 ; CHECK-LABEL: @t3(
 ; CHECK-NEXT:  bb0:
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    br i1 [[C:%.*]], label [[BB1:%.*]], label [[BB2:%.*]]
 ; CHECK:       bb1:
@@ -62,8 +62,8 @@ define i8 @t3(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out, i1 %c) {
 ; CHECK-NEXT:    ret i8 [[R1]]
 ;
 bb0:
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   br i1 %c, label %bb1, label %bb2
 bb1:
   %r0 = select i1 %t1, i8 %v0, i8 %v1
@@ -75,14 +75,14 @@ bb2:
 }
 define i8 @t4(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out) {
 ; CHECK-LABEL: @t4(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R0:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]]
 ; CHECK-NEXT:    store i8 [[R0]], ptr [[OUT:%.*]], align 1
 ; CHECK-NEXT:    [[R1:%.*]] = select i1 [[T1_NOT]], i8 [[V3:%.*]], i8 [[V2:%.*]]
 ; CHECK-NEXT:    ret i8 [[R1]]
 ;
-  %t0 = and i8 %x, 1
+  %t0 = and i8 %x, 2
   %t1 = icmp ne i8 %t0, 0
   %r0 = select i1 %t1, i8 %v0, i8 %v1
   store i8 %r0, ptr %out
@@ -111,13 +111,13 @@ define i8 @n6(i8 %x, i8 %v0, i8 %v1) {
 }
 define i8 @n7(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @n7(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT_NOT]], i8 [[V0:%.*]], i8 [[V1:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp ne i8 %t0, 1 ; not checking that it's zero
+  %t0 = and i8 %x, 2
+  %t1 = icmp ne i8 %t0, 2 ; not checking that it's zero
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
 }
diff --git a/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll b/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
index 19c4cc979d4ba..12c18e2ec0302 100644
--- a/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
+++ b/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
@@ -274,8 +274,7 @@ define <2 x i1> @cttz_eq_bitwidth_v2i32(<2 x i32> %a) {
 
 define i1 @cttz_eq_zero_i33(i33 %x) {
 ; CHECK-LABEL: @cttz_eq_zero_i33(
-; CHECK-NEXT:    [[TMP1:%.*]] = and i33 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i33 [[TMP1]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i33 [[X:%.*]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %tz = tail call i33 @llvm.cttz.i33(i33 %x, i1 false)
diff --git a/llvm/test/Transforms/InstCombine/exact.ll b/llvm/test/Transforms/InstCombine/exact.ll
index 819e8fbb89b5f..d8bbcaa949660 100644
--- a/llvm/test/Transforms/InstCombine/exact.ll
+++ b/llvm/test/Transforms/InstCombine/exact.ll
@@ -150,8 +150,7 @@ define <2 x i1> @ashr_icmp2_vec(<2 x i64> %X) {
 ; Make sure we don't transform the ashr here into an sdiv
 define i1 @pr9998(i32 %V) {
 ; CHECK-LABEL: @pr9998(
-; CHECK-NEXT:    [[TMP1:%.*]] = and i32 [[V:%.*]], 1
-; CHECK-NEXT:    [[Z:%.*]] = icmp ne i32 [[TMP1]], 0
+; CHECK-NEXT:    [[Z:%.*]] = trunc i32 [[V:%.*]] to i1
 ; CHECK-NEXT:    ret i1 [[Z]]
 ;
   %W = shl i32 %V, 31
diff --git a/llvm/test/Transforms/InstCombine/icmp-and-shift.ll b/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
index 78f1bc7d7379d..2973bb979181d 100644
--- a/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
+++ b/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
@@ -608,9 +608,8 @@ define i1 @fold_ne_rhs_fail_shift_not_1s(i8 %x, i8 %yy) {
 
 define i1 @test_shr_and_1_ne_0(i32 %a, i32 %b) {
 ; CHECK-LABEL: @test_shr_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[A:%.*]], [[TMP1]]
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 [[A:%.*]], [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 %a, %b
@@ -621,9 +620,8 @@ define i1 @test_shr_and_1_ne_0(i32 %a, i32 %b) {
 
 define i1 @test_shr_and_1_ne_0_samesign(i32 %a, i32 %b) {
 ; CHECK-LABEL: @test_shr_and_1_ne_0_samesign(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[A:%.*]], [[TMP1]]
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 [[A:%.*]], [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 %a, %b
@@ -634,9 +632,8 @@ define i1 @test_shr_and_1_ne_0_samesign(i32 %a, i32 %b) {
 
 define i1 @test_const_shr_and_1_ne_0(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[TMP1]], 42
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 42, %b
@@ -660,9 +657,8 @@ define i1 @test_not_const_shr_and_1_ne_0(i32 %b) {
 
 define i1 @test_const_shr_exact_and_1_ne_0(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_exact_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[TMP1]], 42
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr exact i32 42, [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr exact i32 42, %b
@@ -721,10 +717,9 @@ define i1 @test_const_shr_and_1_ne_0_i1_negative(i1 %b) {
 define i1 @test_const_shr_and_1_ne_0_multi_use_lshr_negative(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0_multi_use_lshr_negative(
 ; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
-; CHECK-NEXT:    [[AND:%.*]] = and i32 [[SHR]], 1
-; CHECK-NEXT:    [[CMP1:%.*]] = icmp ne i32 [[AND]], 0
+; CHECK-NEXT:    [[CMP1:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    [[CMP2:%.*]] = icmp eq i32 [[B]], [[SHR]]
-; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP1]], [[CMP2]]
+; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP2]], [[CMP1]]
 ; CHECK-NEXT:    ret i1 [[RET]]
 ;
   %shr = lshr i32 42, %b
@@ -739,9 +734,9 @@ define i1 @test_const_shr_and_1_ne_0_multi_use_and_negative(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0_multi_use_and_negative(
 ; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
 ; CHECK-NEXT:    [[AND:%.*]] = and i32 [[SHR]], 1
-; CHECK-NEXT:    [[CMP1:%.*]] = icmp ne i32 [[AND]], 0
+; CHECK-NEXT:    [[CMP1:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    [[CMP2:%.*]] = icmp eq i32 [[B]], [[AND]]
-; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP1]], [[CMP2]]
+; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP2]], [[CMP1]]
 ; CHECK-NEXT:    ret i1 [[RET]]
 ;
   %shr = lshr i32 42, %b
diff --git a/llvm/test/Transforms/InstCombine/icmp-binop.ll b/llvm/test/Transforms/InstCombine/icmp-binop.ll
index 3b4eca3ba69b3..4c7eccbde9f2f 100644
--- a/llvm/test/Transforms/InstCombine/icmp-binop.ll
+++ b/llvm/test/Transforms/InstCombine/icmp-binop.ll
@@ -36,11 +36,9 @@ define <2 x i1> @mul_unkV_oddC_ne_vec(<2 x i64> %v) {
 
 define i1 @mul_assumeoddV_asumeoddV_eq(i16 %v, i16 %v2) {
 ; CHECK-LABEL: @mul_assumeoddV_asumeoddV_eq(
-; CHECK-NEXT:    [[LB:%.*]] = and i16 [[V:%.*]], 1
-; CHECK-NEXT:    [[ODD:%.*]] = icmp ne i16 [[LB]], 0
+; CHECK-NEXT:    [[ODD:%.*]] = trunc i16 [[V:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD]])
-; CHECK-NEXT:    [[LB2:%.*]] = and i16 [[V2:%.*]], 1
-; CHECK-NEXT:    [[ODD2:%.*]] = icmp ne i16 [[LB2]], 0
+; CHECK-NEXT:    [[ODD2:%.*]] = trunc i16 [[V2:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD2]])
 ; CHECK-NEXT:    ret i1 true
 ;
@@ -81,8 +79,7 @@ define i1 @mul_reused_unkV_oddC_ne(i64 %v) {
 
 define i1 @mul_assumeoddV_unkV_eq(i16 %v, i16 %v2) {
 ; CHECK-LABEL: @mul_assumeoddV_unkV_eq(
-; CHECK-NEXT:    [[LB:%.*]] = and i16 [[V2:%.*]], 1
-; CHECK-NEXT:    [[ODD:%.*]] = icmp ne i16 [[LB]], 0
+; CHECK-NEXT:    [[ODD:%.*]] = trunc i16 [[V2:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD]])
 ; CHECK-NEXT:    [[CMP:%.*]] = icmp eq i16 [[V:%.*]], 0
 ; CHECK-NEXT:    ret i1 [[CMP]]
@@ -97,8 +94,7 @@ define i1 @mul_assumeoddV_unkV_eq(i16 %v, ...
[truncated]

@llvmbot
Member

llvmbot commented Jan 30, 2026

@llvm/pr-subscribers-llvm-transforms

Author: Andreas Jonson (andjo403)

Changes

Remove the vector check so this fold is always done.

proof: https://alive2.llvm.org/ce/z/oabD6J
closes #172888
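As a quick sanity check independent of the alive2 proof linked above, the scalar equivalence `icmp ne (and X, 1), 0 --> trunc X to i1` can be brute-forced over i8 in a few lines of Python (the function names below are illustrative, not from the patch; the patch itself applies to any integer width):

```python
def icmp_ne_and_1(x: int) -> bool:
    """Original pattern: (x & 1) != 0."""
    return (x & 1) != 0

def trunc_to_i1(x: int) -> bool:
    """Replacement: keep only the lowest bit, i.e. trunc iN -> i1."""
    return (x & 1) == 1

# Exhaustive check over all i8 bit patterns.
assert all(icmp_ne_and_1(x) == trunc_to_i1(x) for x in range(256))
print("fold verified for all i8 values")
```

Since both sides depend only on the lowest bit of each lane, the same argument carries over element-wise to the vector case the old code already handled.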


Patch is 77.35 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/178977.diff

21 Files Affected:

  • (modified) llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp (+2-5)
  • (modified) llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll (+4-8)
  • (modified) llvm/test/Transforms/InstCombine/and-or-icmps.ll (+18-25)
  • (modified) llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll (+16-16)
  • (modified) llvm/test/Transforms/InstCombine/cmp-intrinsic.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/exact.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/icmp-and-shift.ll (+12-17)
  • (modified) llvm/test/Transforms/InstCombine/icmp-binop.ll (+4-8)
  • (modified) llvm/test/Transforms/InstCombine/icmp-mul-and.ll (+3-6)
  • (modified) llvm/test/Transforms/InstCombine/icmp-mul.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/icmp-ne-pow2.ll (+2-3)
  • (modified) llvm/test/Transforms/InstCombine/icmp.ll (+7-10)
  • (modified) llvm/test/Transforms/InstCombine/load-cmp.ll (+12-18)
  • (modified) llvm/test/Transforms/InstCombine/or.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/shift-amount-reassociation-in-bittest-with-truncation-lshr.ll (+2-4)
  • (modified) llvm/test/Transforms/InstCombine/shift-amount-reassociation-in-bittest-with-truncation-shl.ll (+1-2)
  • (modified) llvm/test/Transforms/InstCombine/shift-amount-reassociation-in-bittest.ll (+4-6)
  • (modified) llvm/test/Transforms/LoopUnroll/WebAssembly/basic-unrolling.ll (+1-1)
  • (modified) llvm/test/Transforms/PGOProfile/chr.ll (+98-105)
  • (modified) llvm/test/Transforms/PGOProfile/chr_coro.ll (+23-11)
  • (modified) llvm/test/Transforms/PhaseOrdering/AArch64/extra-unroll-simplifications.ll (+2-2)
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
index aa762753130b0..3c6d5affd6b36 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
@@ -1786,11 +1786,8 @@ Instruction *InstCombinerImpl::foldICmpAndConstConst(ICmpInst &Cmp,
                                                      const APInt &C1) {
   bool isICMP_NE = Cmp.getPredicate() == ICmpInst::ICMP_NE;
 
-  // For vectors: icmp ne (and X, 1), 0 --> trunc X to N x i1
-  // TODO: We canonicalize to the longer form for scalars because we have
-  // better analysis/folds for icmp, and codegen may be better with icmp.
-  if (isICMP_NE && Cmp.getType()->isVectorTy() && C1.isZero() &&
-      match(And->getOperand(1), m_One()))
+  // icmp ne (and X, 1), 0 --> trunc X to i1
+  if (isICMP_NE && C1.isZero() && match(And->getOperand(1), m_One()))
     return new TruncInst(And->getOperand(0), Cmp.getType());
 
   const APInt *C2;
diff --git a/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll b/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll
index 663de281f19ba..868e340c266ad 100644
--- a/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll
+++ b/llvm/test/Analysis/ValueTracking/knownbits-bmi-pattern.ll
@@ -221,8 +221,7 @@ define i1 @blsmsk_gt_is_false_assume(i32 %x) {
 
 define i32 @blsmsk_add_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsmsk_add_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 33
 ;
@@ -261,8 +260,7 @@ define <2 x i32> @blsmsk_add_eval_assume_vec(<2 x i32> %x) {
 
 define i32 @blsmsk_sub_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsmsk_sub_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 -31
 ;
@@ -277,8 +275,7 @@ define i32 @blsmsk_sub_eval_assume(i32 %x) {
 
 define i32 @blsmsk_or_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsmsk_or_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 33
 ;
@@ -545,8 +542,7 @@ define <2 x i1> @blsi_cmp_eq_diff_bits_vec(<2 x i32> %x) {
 
 define i32 @blsi_xor_eval_assume(i32 %x) {
 ; CHECK-LABEL: @blsi_xor_eval_assume(
-; CHECK-NEXT:    [[LB:%.*]] = and i32 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[LB]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[X:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[CMP]])
 ; CHECK-NEXT:    ret i32 33
 ;
diff --git a/llvm/test/Transforms/InstCombine/and-or-icmps.ll b/llvm/test/Transforms/InstCombine/and-or-icmps.ll
index 975d3a072bcd3..4cd089ca524c2 100644
--- a/llvm/test/Transforms/InstCombine/and-or-icmps.ll
+++ b/llvm/test/Transforms/InstCombine/and-or-icmps.ll
@@ -1418,7 +1418,7 @@ define i1 @bitwise_and_bitwise_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-NEXT:    [[TMP1:%.*]] = or i8 [[Z_SHIFT]], 1
 ; CHECK-NEXT:    [[TMP2:%.*]] = and i8 [[X:%.*]], [[TMP1]]
 ; CHECK-NEXT:    [[TMP3:%.*]] = icmp eq i8 [[TMP2]], [[TMP1]]
-; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[TMP3]], [[C1]]
+; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[C1]], [[TMP3]]
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
   %c1 = icmp eq i8 %y, 42
@@ -1439,7 +1439,7 @@ define i1 @bitwise_and_bitwise_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 ; CHECK-NEXT:    [[TMP1:%.*]] = or i8 [[Z_SHIFT]], 1
 ; CHECK-NEXT:    [[TMP2:%.*]] = and i8 [[X:%.*]], [[TMP1]]
 ; CHECK-NEXT:    [[TMP3:%.*]] = icmp eq i8 [[TMP2]], [[TMP1]]
-; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[TMP3]], [[C1]]
+; CHECK-NEXT:    [[AND2:%.*]] = and i1 [[C1]], [[TMP3]]
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
   %c1 = icmp eq i8 %y, 42
@@ -1540,10 +1540,9 @@ define i1 @bitwise_and_logical_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
@@ -1563,10 +1562,9 @@ define i1 @logical_and_bitwise_and_icmps(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps_comm1(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[C3]], i1 [[AND1]], i1 false
@@ -1586,12 +1584,11 @@ define i1 @logical_and_bitwise_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps_comm2(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
-; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C2]], [[C1]]
+; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
@@ -1609,12 +1606,11 @@ define i1 @logical_and_bitwise_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_bitwise_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_bitwise_and_icmps_comm3(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
-; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C2]], [[C1]]
+; CHECK-NEXT:    [[AND1:%.*]] = and i1 [[C1]], [[C2]]
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[C3]], i1 [[AND1]], i1 false
 ; CHECK-NEXT:    ret i1 [[AND2]]
 ;
@@ -1632,10 +1628,9 @@ define i1 @logical_and_bitwise_and_icmps_comm3(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = select i1 [[C1]], i1 [[C2]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
@@ -1655,10 +1650,9 @@ define i1 @logical_and_logical_and_icmps(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps_comm1(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[TMP1:%.*]] = select i1 [[C3]], i1 [[C1]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[TMP1]], i1 [[C2]], i1 false
@@ -1678,10 +1672,9 @@ define i1 @logical_and_logical_and_icmps_comm1(i8 %x, i8 %y, i8 %z) {
 define i1 @logical_and_logical_and_icmps_comm2(i8 %x, i8 %y, i8 %z) {
 ; CHECK-LABEL: @logical_and_logical_and_icmps_comm2(
 ; CHECK-NEXT:    [[C1:%.*]] = icmp eq i8 [[Y:%.*]], 42
-; CHECK-NEXT:    [[X_M1:%.*]] = and i8 [[X:%.*]], 1
 ; CHECK-NEXT:    [[Z_SHIFT:%.*]] = shl nuw i8 1, [[Z:%.*]]
-; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X]], [[Z_SHIFT]]
-; CHECK-NEXT:    [[C2:%.*]] = icmp ne i8 [[X_M1]], 0
+; CHECK-NEXT:    [[X_M2:%.*]] = and i8 [[X:%.*]], [[Z_SHIFT]]
+; CHECK-NEXT:    [[C2:%.*]] = trunc i8 [[X]] to i1
 ; CHECK-NEXT:    [[C3:%.*]] = icmp ne i8 [[X_M2]], 0
 ; CHECK-NEXT:    [[AND1:%.*]] = select i1 [[C2]], i1 [[C1]], i1 false
 ; CHECK-NEXT:    [[AND2:%.*]] = select i1 [[AND1]], i1 [[C3]], i1 false
diff --git a/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll b/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
index 5883c089119c4..f8db9e3b7f0d1 100644
--- a/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
+++ b/llvm/test/Transforms/InstCombine/canonicalize-selects-icmp-condition-bittest.ll
@@ -7,24 +7,24 @@ declare void @use1(i1)
 ; Basic case - all good.
 define i8 @p0(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @p0(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]], !prof [[PROF0:![0-9]+]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   %r = select i1 %t1, i8 %v0, i8 %v1, !prof !0
   ret i8 %r
 }
 define i8 @p1(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @p1(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
+  %t0 = and i8 %x, 2
   %t1 = icmp ne i8 %t0, 0
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
@@ -33,14 +33,14 @@ define i8 @p1(i8 %x, i8 %v0, i8 %v1) {
 ; Can't invert all users of original condition
 define i8 @n2(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @n2(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1:%.*]] = icmp ne i8 [[T0]], 0
 ; CHECK-NEXT:    call void @use1(i1 [[T1]])
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1]], i8 [[V0:%.*]], i8 [[V1:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   call void @use1(i1 %t1) ; condition has un-invertable use
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
@@ -50,7 +50,7 @@ define i8 @n2(i8 %x, i8 %v0, i8 %v1) {
 define i8 @t3(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out, i1 %c) {
 ; CHECK-LABEL: @t3(
 ; CHECK-NEXT:  bb0:
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    br i1 [[C:%.*]], label [[BB1:%.*]], label [[BB2:%.*]]
 ; CHECK:       bb1:
@@ -62,8 +62,8 @@ define i8 @t3(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out, i1 %c) {
 ; CHECK-NEXT:    ret i8 [[R1]]
 ;
 bb0:
-  %t0 = and i8 %x, 1
-  %t1 = icmp eq i8 %t0, 1
+  %t0 = and i8 %x, 2
+  %t1 = icmp eq i8 %t0, 2
   br i1 %c, label %bb1, label %bb2
 bb1:
   %r0 = select i1 %t1, i8 %v0, i8 %v1
@@ -75,14 +75,14 @@ bb2:
 }
 define i8 @t4(i8 %x, i8 %v0, i8 %v1, i8 %v2, i8 %v3, ptr %out) {
 ; CHECK-LABEL: @t4(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R0:%.*]] = select i1 [[T1_NOT]], i8 [[V1:%.*]], i8 [[V0:%.*]]
 ; CHECK-NEXT:    store i8 [[R0]], ptr [[OUT:%.*]], align 1
 ; CHECK-NEXT:    [[R1:%.*]] = select i1 [[T1_NOT]], i8 [[V3:%.*]], i8 [[V2:%.*]]
 ; CHECK-NEXT:    ret i8 [[R1]]
 ;
-  %t0 = and i8 %x, 1
+  %t0 = and i8 %x, 2
   %t1 = icmp ne i8 %t0, 0
   %r0 = select i1 %t1, i8 %v0, i8 %v1
   store i8 %r0, ptr %out
@@ -111,13 +111,13 @@ define i8 @n6(i8 %x, i8 %v0, i8 %v1) {
 }
 define i8 @n7(i8 %x, i8 %v0, i8 %v1) {
 ; CHECK-LABEL: @n7(
-; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 1
+; CHECK-NEXT:    [[T0:%.*]] = and i8 [[X:%.*]], 2
 ; CHECK-NEXT:    [[T1_NOT_NOT:%.*]] = icmp eq i8 [[T0]], 0
 ; CHECK-NEXT:    [[R:%.*]] = select i1 [[T1_NOT_NOT]], i8 [[V0:%.*]], i8 [[V1:%.*]]
 ; CHECK-NEXT:    ret i8 [[R]]
 ;
-  %t0 = and i8 %x, 1
-  %t1 = icmp ne i8 %t0, 1 ; not checking that it's zero
+  %t0 = and i8 %x, 2
+  %t1 = icmp ne i8 %t0, 2 ; not checking that it's zero
   %r = select i1 %t1, i8 %v0, i8 %v1
   ret i8 %r
 }
diff --git a/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll b/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
index 19c4cc979d4ba..12c18e2ec0302 100644
--- a/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
+++ b/llvm/test/Transforms/InstCombine/cmp-intrinsic.ll
@@ -274,8 +274,7 @@ define <2 x i1> @cttz_eq_bitwidth_v2i32(<2 x i32> %a) {
 
 define i1 @cttz_eq_zero_i33(i33 %x) {
 ; CHECK-LABEL: @cttz_eq_zero_i33(
-; CHECK-NEXT:    [[TMP1:%.*]] = and i33 [[X:%.*]], 1
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i33 [[TMP1]], 0
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i33 [[X:%.*]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %tz = tail call i33 @llvm.cttz.i33(i33 %x, i1 false)
diff --git a/llvm/test/Transforms/InstCombine/exact.ll b/llvm/test/Transforms/InstCombine/exact.ll
index 819e8fbb89b5f..d8bbcaa949660 100644
--- a/llvm/test/Transforms/InstCombine/exact.ll
+++ b/llvm/test/Transforms/InstCombine/exact.ll
@@ -150,8 +150,7 @@ define <2 x i1> @ashr_icmp2_vec(<2 x i64> %X) {
 ; Make sure we don't transform the ashr here into an sdiv
 define i1 @pr9998(i32 %V) {
 ; CHECK-LABEL: @pr9998(
-; CHECK-NEXT:    [[TMP1:%.*]] = and i32 [[V:%.*]], 1
-; CHECK-NEXT:    [[Z:%.*]] = icmp ne i32 [[TMP1]], 0
+; CHECK-NEXT:    [[Z:%.*]] = trunc i32 [[V:%.*]] to i1
 ; CHECK-NEXT:    ret i1 [[Z]]
 ;
   %W = shl i32 %V, 31
diff --git a/llvm/test/Transforms/InstCombine/icmp-and-shift.ll b/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
index 78f1bc7d7379d..2973bb979181d 100644
--- a/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
+++ b/llvm/test/Transforms/InstCombine/icmp-and-shift.ll
@@ -608,9 +608,8 @@ define i1 @fold_ne_rhs_fail_shift_not_1s(i8 %x, i8 %yy) {
 
 define i1 @test_shr_and_1_ne_0(i32 %a, i32 %b) {
 ; CHECK-LABEL: @test_shr_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[A:%.*]], [[TMP1]]
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 [[A:%.*]], [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 %a, %b
@@ -621,9 +620,8 @@ define i1 @test_shr_and_1_ne_0(i32 %a, i32 %b) {
 
 define i1 @test_shr_and_1_ne_0_samesign(i32 %a, i32 %b) {
 ; CHECK-LABEL: @test_shr_and_1_ne_0_samesign(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[A:%.*]], [[TMP1]]
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 [[A:%.*]], [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 %a, %b
@@ -634,9 +632,8 @@ define i1 @test_shr_and_1_ne_0_samesign(i32 %a, i32 %b) {
 
 define i1 @test_const_shr_and_1_ne_0(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[TMP1]], 42
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr i32 42, %b
@@ -660,9 +657,8 @@ define i1 @test_not_const_shr_and_1_ne_0(i32 %b) {
 
 define i1 @test_const_shr_exact_and_1_ne_0(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_exact_and_1_ne_0(
-; CHECK-NEXT:    [[TMP1:%.*]] = shl nuw i32 1, [[B:%.*]]
-; CHECK-NEXT:    [[TMP2:%.*]] = and i32 [[TMP1]], 42
-; CHECK-NEXT:    [[CMP:%.*]] = icmp ne i32 [[TMP2]], 0
+; CHECK-NEXT:    [[SHR:%.*]] = lshr exact i32 42, [[B:%.*]]
+; CHECK-NEXT:    [[CMP:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    ret i1 [[CMP]]
 ;
   %shr = lshr exact i32 42, %b
@@ -721,10 +717,9 @@ define i1 @test_const_shr_and_1_ne_0_i1_negative(i1 %b) {
 define i1 @test_const_shr_and_1_ne_0_multi_use_lshr_negative(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0_multi_use_lshr_negative(
 ; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
-; CHECK-NEXT:    [[AND:%.*]] = and i32 [[SHR]], 1
-; CHECK-NEXT:    [[CMP1:%.*]] = icmp ne i32 [[AND]], 0
+; CHECK-NEXT:    [[CMP1:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    [[CMP2:%.*]] = icmp eq i32 [[B]], [[SHR]]
-; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP1]], [[CMP2]]
+; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP2]], [[CMP1]]
 ; CHECK-NEXT:    ret i1 [[RET]]
 ;
   %shr = lshr i32 42, %b
@@ -739,9 +734,9 @@ define i1 @test_const_shr_and_1_ne_0_multi_use_and_negative(i32 %b) {
 ; CHECK-LABEL: @test_const_shr_and_1_ne_0_multi_use_and_negative(
 ; CHECK-NEXT:    [[SHR:%.*]] = lshr i32 42, [[B:%.*]]
 ; CHECK-NEXT:    [[AND:%.*]] = and i32 [[SHR]], 1
-; CHECK-NEXT:    [[CMP1:%.*]] = icmp ne i32 [[AND]], 0
+; CHECK-NEXT:    [[CMP1:%.*]] = trunc i32 [[SHR]] to i1
 ; CHECK-NEXT:    [[CMP2:%.*]] = icmp eq i32 [[B]], [[AND]]
-; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP1]], [[CMP2]]
+; CHECK-NEXT:    [[RET:%.*]] = and i1 [[CMP2]], [[CMP1]]
 ; CHECK-NEXT:    ret i1 [[RET]]
 ;
   %shr = lshr i32 42, %b
diff --git a/llvm/test/Transforms/InstCombine/icmp-binop.ll b/llvm/test/Transforms/InstCombine/icmp-binop.ll
index 3b4eca3ba69b3..4c7eccbde9f2f 100644
--- a/llvm/test/Transforms/InstCombine/icmp-binop.ll
+++ b/llvm/test/Transforms/InstCombine/icmp-binop.ll
@@ -36,11 +36,9 @@ define <2 x i1> @mul_unkV_oddC_ne_vec(<2 x i64> %v) {
 
 define i1 @mul_assumeoddV_asumeoddV_eq(i16 %v, i16 %v2) {
 ; CHECK-LABEL: @mul_assumeoddV_asumeoddV_eq(
-; CHECK-NEXT:    [[LB:%.*]] = and i16 [[V:%.*]], 1
-; CHECK-NEXT:    [[ODD:%.*]] = icmp ne i16 [[LB]], 0
+; CHECK-NEXT:    [[ODD:%.*]] = trunc i16 [[V:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD]])
-; CHECK-NEXT:    [[LB2:%.*]] = and i16 [[V2:%.*]], 1
-; CHECK-NEXT:    [[ODD2:%.*]] = icmp ne i16 [[LB2]], 0
+; CHECK-NEXT:    [[ODD2:%.*]] = trunc i16 [[V2:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD2]])
 ; CHECK-NEXT:    ret i1 true
 ;
@@ -81,8 +79,7 @@ define i1 @mul_reused_unkV_oddC_ne(i64 %v) {
 
 define i1 @mul_assumeoddV_unkV_eq(i16 %v, i16 %v2) {
 ; CHECK-LABEL: @mul_assumeoddV_unkV_eq(
-; CHECK-NEXT:    [[LB:%.*]] = and i16 [[V2:%.*]], 1
-; CHECK-NEXT:    [[ODD:%.*]] = icmp ne i16 [[LB]], 0
+; CHECK-NEXT:    [[ODD:%.*]] = trunc i16 [[V2:%.*]] to i1
 ; CHECK-NEXT:    call void @llvm.assume(i1 [[ODD]])
 ; CHECK-NEXT:    [[CMP:%.*]] = icmp eq i16 [[V:%.*]], 0
 ; CHECK-NEXT:    ret i1 [[CMP]]
@@ -97,8 +94,7 @@ define i1 @mul_assumeoddV_unkV_eq(i16 %v, ...
[truncated]

@andjo403
Contributor Author

I think I have managed to solve most of the regressions that this fold was causing.
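For context, the fold this PR generalizes rewrites `icmp ne (and X, 1), 0` into `trunc X to i1`: both expressions reduce to the low bit of `X`, and the shifted variants in the updated tests similarly turn `(lshr X, B) & 1 != 0` into `trunc (lshr X, B) to i1`. The formal proof is the Alive2 link in the description; the snippet below is only an informal exhaustive sanity check of the same low-bit semantics over small integers (Python ints are unbounded, so this models the bit arithmetic rather than real i32 wrapping, which doesn't matter for these bit-extraction patterns):

```python
# Check: icmp ne (and x, 1), 0  <=>  trunc x to i1 (keep only bit 0).
for x in range(256):
    icmp_form = (x & 1) != 0       # and + icmp-ne-zero pattern
    trunc_form = bool(x & 0x1)     # trunc-to-i1 keeps just the low bit
    assert icmp_form == trunc_form

# Shifted variant from the tests:
#   (lshr x, b) & 1 != 0  <=>  trunc (lshr x, b) to i1
for x in range(256):
    for b in range(8):
        assert (((x >> b) & 1) != 0) == bool((x >> b) & 1)

print("ok")
```

This is why dropping the vector-only restriction is safe: the equivalence is purely bitwise and holds lane-by-lane regardless of type shape.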

@github-actions

github-actions bot commented Jan 30, 2026

🐧 Linux x64 Test Results

  • 189544 tests passed
  • 5046 tests skipped

✅ The build succeeded and all tests passed.

Member

@dtcxzyw dtcxzyw left a comment

LGTM. Looks like there are no more significant regressions. As we are at the early stage of the LLVM 23 cycle, we have enough time to tune the results.

Contributor

@nikic nikic left a comment

LGTM, thanks for working through this.

@andjo403 andjo403 merged commit faa4b97 into llvm:main Feb 3, 2026
8 of 9 checks passed
@andjo403 andjo403 deleted the icmpToTrunc branch February 3, 2026 18:20
moar55 pushed a commit to moar55/llvm-project that referenced this pull request Feb 3, 2026
rishabhmadan19 pushed a commit to rishabhmadan19/llvm-project that referenced this pull request Feb 9, 2026

Labels

  • llvm:analysis: includes value tracking, cost tables and constant folding
  • llvm:instcombine: covers the InstCombine, InstSimplify and AggressiveInstCombine passes
  • llvm:transforms
  • PGO: Profile Guided Optimizations

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[InstCombine] Missed icmp_ne(and(x,1),0) -> trunc(x) fold

4 participants