
[VPlan] Introduce recipes for VP loads and stores. #87816

Merged (7 commits, Apr 19, 2024)

Conversation

fhahn
Contributor

@fhahn fhahn commented Apr 5, 2024

Introduce new subclasses of VPWidenMemoryRecipe for VP
(vector-predicated) loads and stores to address multiple TODOs from
#76172

Note that the introduction of the new recipes also improves code-gen for
VP gather/scatters by removing the redundant header mask. With the new
approach, it is not sufficient to look at users of the widened canonical
IV to find all uses of the header mask.

In some cases, a widened IV is used instead of separately widening the
canonical IV. To handle those cases, iterate over all recipes in the
vector loop region to make sure all widened memory recipes are
processed.

Depends on #87411.
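
In code, the transform described above boils down to walking the widened memory recipes and swapping each one for its VP counterpart, dropping the header mask where it has become redundant. A rough sketch, pieced together from the snippets quoted in the review below; the VPWidenVPLoadRecipe/VPWidenVPStoreRecipe names were later renamed to the EVL variants, the constructor shapes are assumptions, and vectorLoopBlocks() is a placeholder for whatever traversal the final patch uses:

// Sketch only; see the diff and review threads below for the real code.
for (VPBasicBlock *VPBB : vectorLoopBlocks(Plan)) {          // assumed helper
  for (VPRecipeBase &R : make_early_inc_range(*VPBB)) {
    auto *MemR = dyn_cast<VPWidenMemoryRecipe>(&R);
    if (!MemR)
      continue;
    VPValue *OrigMask = MemR->getMask();
    assert(OrigMask && "Unmasked widen memory recipe when folding tail");
    // Once EVL limits the active lanes, the header mask is redundant.
    VPValue *Mask = IsHeaderMask(OrigMask) ? nullptr : OrigMask;
    if (auto *L = dyn_cast<VPWidenLoadRecipe>(MemR)) {
      auto *N = new VPWidenVPLoadRecipe(*L, *VPEVL, Mask);   // assumed ctor shape
      N->insertBefore(L);
      L->replaceAllUsesWith(N);
      L->eraseFromParent();
    } else if (auto *S = dyn_cast<VPWidenStoreRecipe>(MemR)) {
      auto *N = new VPWidenVPStoreRecipe(*S, *VPEVL, Mask);  // assumed ctor shape
      N->insertBefore(S);
      S->eraseFromParent();
    }
  }
}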

@llvmbot
Member

llvmbot commented Apr 5, 2024

@llvm/pr-subscribers-llvm-transforms

@llvm/pr-subscribers-backend-risc-v

Author: Florian Hahn (fhahn)

Changes


Patch is 77.02 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/87816.diff

23 Files Affected:

  • (modified) llvm/lib/Transforms/Vectorize/LoopVectorize.cpp (+155-140)
  • (modified) llvm/lib/Transforms/Vectorize/VPRecipeBuilder.h (+3-3)
  • (modified) llvm/lib/Transforms/Vectorize/VPlan.h (+211-63)
  • (modified) llvm/lib/Transforms/Vectorize/VPlanAnalysis.cpp (+4-5)
  • (modified) llvm/lib/Transforms/Vectorize/VPlanAnalysis.h (+2-2)
  • (modified) llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp (+35-21)
  • (modified) llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp (+53-23)
  • (modified) llvm/lib/Transforms/Vectorize/VPlanValue.h (+4-3)
  • (modified) llvm/lib/Transforms/Vectorize/VPlanVerifier.cpp (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-tail-folding-forced.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-widen-gep.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/synthesize-mask-for-call.ll (+6-6)
  • (modified) llvm/test/Transforms/LoopVectorize/RISCV/riscv-vector-reverse.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/RISCV/vectorize-force-tail-with-evl-gather-scatter.ll (+5-9)
  • (modified) llvm/test/Transforms/LoopVectorize/RISCV/vplan-vp-intrinsics.ll (+5-5)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/vplan-vp-intrinsics.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/first-order-recurrence-chains-vplan.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/vplan-dot-printing.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/vplan-iv-transforms.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/vplan-printing-before-execute.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/vplan-printing.ll (+9-9)
  • (modified) llvm/unittests/Transforms/Vectorize/VPlanHCFGTest.cpp (+2-2)
  • (modified) llvm/unittests/Transforms/Vectorize/VPlanTest.cpp (+4-5)
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index 49bacb5ae6cc4e..10d41e829e88b3 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -8095,7 +8095,7 @@ void VPRecipeBuilder::createBlockInMask(BasicBlock *BB) {
   BlockMaskCache[BB] = BlockMask;
 }
 
-VPWidenMemoryInstructionRecipe *
+VPWidenMemoryRecipe *
 VPRecipeBuilder::tryToWidenMemory(Instruction *I, ArrayRef<VPValue *> Operands,
                                   VFRange &Range) {
   assert((isa<LoadInst>(I) || isa<StoreInst>(I)) &&
@@ -8140,12 +8140,12 @@ VPRecipeBuilder::tryToWidenMemory(Instruction *I, ArrayRef<VPValue *> Operands,
     Ptr = VectorPtr;
   }
   if (LoadInst *Load = dyn_cast<LoadInst>(I))
-    return new VPWidenMemoryInstructionRecipe(*Load, Ptr, Mask, Consecutive,
-                                              Reverse, I->getDebugLoc());
+    return new VPWidenLoadRecipe(*Load, Ptr, Mask, Consecutive, Reverse,
+                                 I->getDebugLoc());
 
   StoreInst *Store = cast<StoreInst>(I);
-  return new VPWidenMemoryInstructionRecipe(
-      *Store, Ptr, Operands[0], Mask, Consecutive, Reverse, I->getDebugLoc());
+  return new VPWidenStoreRecipe(*Store, Operands[0], Ptr, Mask, Consecutive,
+                                Reverse, I->getDebugLoc());
 }
 
 /// Creates a VPWidenIntOrFpInductionRecpipe for \p Phi. If needed, it will also
@@ -8780,13 +8780,12 @@ LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes(VFRange &Range) {
   // for this VPlan, replace the Recipes widening its memory instructions with a
   // single VPInterleaveRecipe at its insertion point.
   for (const auto *IG : InterleaveGroups) {
-    auto *Recipe = cast<VPWidenMemoryInstructionRecipe>(
-        RecipeBuilder.getRecipe(IG->getInsertPos()));
+    auto *Recipe =
+        cast<VPWidenMemoryRecipe>(RecipeBuilder.getRecipe(IG->getInsertPos()));
     SmallVector<VPValue *, 4> StoredValues;
     for (unsigned i = 0; i < IG->getFactor(); ++i)
       if (auto *SI = dyn_cast_or_null<StoreInst>(IG->getMember(i))) {
-        auto *StoreR =
-            cast<VPWidenMemoryInstructionRecipe>(RecipeBuilder.getRecipe(SI));
+        auto *StoreR = cast<VPWidenStoreRecipe>(RecipeBuilder.getRecipe(SI));
         StoredValues.push_back(StoreR->getStoredValue());
       }
 
@@ -9418,73 +9417,19 @@ void VPReplicateRecipe::execute(VPTransformState &State) {
       State.ILV->scalarizeInstruction(UI, this, VPIteration(Part, Lane), State);
 }
 
-/// Creates either vp_store or vp_scatter intrinsics calls to represent
-/// predicated store/scatter.
-static Instruction *
-lowerStoreUsingVectorIntrinsics(IRBuilderBase &Builder, Value *Addr,
-                                Value *StoredVal, bool IsScatter, Value *Mask,
-                                Value *EVL, const Align &Alignment) {
-  CallInst *Call;
-  if (IsScatter) {
-    Call = Builder.CreateIntrinsic(Type::getVoidTy(EVL->getContext()),
-                                   Intrinsic::vp_scatter,
-                                   {StoredVal, Addr, Mask, EVL});
-  } else {
-    VectorBuilder VBuilder(Builder);
-    VBuilder.setEVL(EVL).setMask(Mask);
-    Call = cast<CallInst>(VBuilder.createVectorInstruction(
-        Instruction::Store, Type::getVoidTy(EVL->getContext()),
-        {StoredVal, Addr}));
-  }
-  Call->addParamAttr(
-      1, Attribute::getWithAlignment(Call->getContext(), Alignment));
-  return Call;
-}
-
-/// Creates either vp_load or vp_gather intrinsics calls to represent
-/// predicated load/gather.
-static Instruction *lowerLoadUsingVectorIntrinsics(IRBuilderBase &Builder,
-                                                   VectorType *DataTy,
-                                                   Value *Addr, bool IsGather,
-                                                   Value *Mask, Value *EVL,
-                                                   const Align &Alignment) {
-  CallInst *Call;
-  if (IsGather) {
-    Call =
-        Builder.CreateIntrinsic(DataTy, Intrinsic::vp_gather, {Addr, Mask, EVL},
-                                nullptr, "wide.masked.gather");
-  } else {
-    VectorBuilder VBuilder(Builder);
-    VBuilder.setEVL(EVL).setMask(Mask);
-    Call = cast<CallInst>(VBuilder.createVectorInstruction(
-        Instruction::Load, DataTy, Addr, "vp.op.load"));
-  }
-  Call->addParamAttr(
-      0, Attribute::getWithAlignment(Call->getContext(), Alignment));
-  return Call;
-}
-
-void VPWidenMemoryInstructionRecipe::execute(VPTransformState &State) {
-  VPValue *StoredValue = isStore() ? getStoredValue() : nullptr;
-
+void VPWidenLoadRecipe::execute(VPTransformState &State) {
   // Attempt to issue a wide load.
-  LoadInst *LI = dyn_cast<LoadInst>(&Ingredient);
-  StoreInst *SI = dyn_cast<StoreInst>(&Ingredient);
-
-  assert((LI || SI) && "Invalid Load/Store instruction");
-  assert((!SI || StoredValue) && "No stored value provided for widened store");
-  assert((!LI || !StoredValue) && "Stored value provided for widened load");
+  auto *LI = cast<LoadInst>(&Ingredient);
 
   Type *ScalarDataTy = getLoadStoreType(&Ingredient);
-
   auto *DataTy = VectorType::get(ScalarDataTy, State.VF);
   const Align Alignment = getLoadStoreAlignment(&Ingredient);
-  bool CreateGatherScatter = !isConsecutive();
+  bool CreateGather = !isConsecutive();
 
   auto &Builder = State.Builder;
   InnerLoopVectorizer::VectorParts BlockInMaskParts(State.UF);
-  bool isMaskRequired = getMask();
-  if (isMaskRequired) {
+  bool IsMaskRequired = getMask();
+  if (IsMaskRequired) {
     // Mask reversal is only needed for non-all-one (null) masks, as reverse of
     // a null all-one mask is a null mask.
     for (unsigned Part = 0; Part < State.UF; ++Part) {
@@ -9495,88 +9440,20 @@ void VPWidenMemoryInstructionRecipe::execute(VPTransformState &State) {
     }
   }
 
-  // Handle Stores:
-  if (SI) {
-    State.setDebugLocFrom(getDebugLoc());
-
-    for (unsigned Part = 0; Part < State.UF; ++Part) {
-      Instruction *NewSI = nullptr;
-      Value *StoredVal = State.get(StoredValue, Part);
-      // TODO: split this into several classes for better design.
-      if (State.EVL) {
-        assert(State.UF == 1 && "Expected only UF == 1 when vectorizing with "
-                                "explicit vector length.");
-        assert(cast<VPInstruction>(State.EVL)->getOpcode() ==
-                   VPInstruction::ExplicitVectorLength &&
-               "EVL must be VPInstruction::ExplicitVectorLength.");
-        Value *EVL = State.get(State.EVL, VPIteration(0, 0));
-        // If EVL is not nullptr, then EVL must be a valid value set during plan
-        // creation, possibly default value = whole vector register length. EVL
-        // is created only if TTI prefers predicated vectorization, thus if EVL
-        // is not nullptr it also implies preference for predicated
-        // vectorization.
-        // FIXME: Support reverse store after vp_reverse is added.
-        Value *MaskPart = isMaskRequired ? BlockInMaskParts[Part] : nullptr;
-        NewSI = lowerStoreUsingVectorIntrinsics(
-            Builder, State.get(getAddr(), Part, !CreateGatherScatter),
-            StoredVal, CreateGatherScatter, MaskPart, EVL, Alignment);
-      } else if (CreateGatherScatter) {
-        Value *MaskPart = isMaskRequired ? BlockInMaskParts[Part] : nullptr;
-        Value *VectorGep = State.get(getAddr(), Part);
-        NewSI = Builder.CreateMaskedScatter(StoredVal, VectorGep, Alignment,
-                                            MaskPart);
-      } else {
-        if (isReverse()) {
-          // If we store to reverse consecutive memory locations, then we need
-          // to reverse the order of elements in the stored value.
-          StoredVal = Builder.CreateVectorReverse(StoredVal, "reverse");
-          // We don't want to update the value in the map as it might be used in
-          // another expression. So don't call resetVectorValue(StoredVal).
-        }
-        auto *VecPtr = State.get(getAddr(), Part, /*IsScalar*/ true);
-        if (isMaskRequired)
-          NewSI = Builder.CreateMaskedStore(StoredVal, VecPtr, Alignment,
-                                            BlockInMaskParts[Part]);
-        else
-          NewSI = Builder.CreateAlignedStore(StoredVal, VecPtr, Alignment);
-      }
-      State.addMetadata(NewSI, SI);
-    }
-    return;
-  }
-
   // Handle loads.
   assert(LI && "Must have a load instruction");
   State.setDebugLocFrom(getDebugLoc());
   for (unsigned Part = 0; Part < State.UF; ++Part) {
     Value *NewLI;
-    // TODO: split this into several classes for better design.
-    if (State.EVL) {
-      assert(State.UF == 1 && "Expected only UF == 1 when vectorizing with "
-                              "explicit vector length.");
-      assert(cast<VPInstruction>(State.EVL)->getOpcode() ==
-                 VPInstruction::ExplicitVectorLength &&
-             "EVL must be VPInstruction::ExplicitVectorLength.");
-      Value *EVL = State.get(State.EVL, VPIteration(0, 0));
-      // If EVL is not nullptr, then EVL must be a valid value set during plan
-      // creation, possibly default value = whole vector register length. EVL
-      // is created only if TTI prefers predicated vectorization, thus if EVL
-      // is not nullptr it also implies preference for predicated
-      // vectorization.
-      // FIXME: Support reverse loading after vp_reverse is added.
-      Value *MaskPart = isMaskRequired ? BlockInMaskParts[Part] : nullptr;
-      NewLI = lowerLoadUsingVectorIntrinsics(
-          Builder, DataTy, State.get(getAddr(), Part, !CreateGatherScatter),
-          CreateGatherScatter, MaskPart, EVL, Alignment);
-    } else if (CreateGatherScatter) {
-      Value *MaskPart = isMaskRequired ? BlockInMaskParts[Part] : nullptr;
+    if (CreateGather) {
+      Value *MaskPart = IsMaskRequired ? BlockInMaskParts[Part] : nullptr;
       Value *VectorGep = State.get(getAddr(), Part);
       NewLI = Builder.CreateMaskedGather(DataTy, VectorGep, Alignment, MaskPart,
                                          nullptr, "wide.masked.gather");
       State.addMetadata(NewLI, LI);
     } else {
       auto *VecPtr = State.get(getAddr(), Part, /*IsScalar*/ true);
-      if (isMaskRequired)
+      if (IsMaskRequired)
         NewLI = Builder.CreateMaskedLoad(
             DataTy, VecPtr, Alignment, BlockInMaskParts[Part],
             PoisonValue::get(DataTy), "wide.masked.load");
@@ -9590,10 +9467,148 @@ void VPWidenMemoryInstructionRecipe::execute(VPTransformState &State) {
         NewLI = Builder.CreateVectorReverse(NewLI, "reverse");
     }
 
-    State.set(getVPSingleValue(), NewLI, Part);
+    State.set(this, NewLI, Part);
+  }
+}
+
+void VPWidenVPLoadRecipe::execute(VPTransformState &State) {
+  assert(State.UF == 1 && "Expected only UF == 1 when vectorizing with "
+                          "explicit vector length.");
+  // FIXME: Support reverse loading after vp_reverse is added.
+  assert(!isReverse() && "Reverse loads are not implemented yet.");
+
+  // Attempt to issue a wide load.
+  auto *LI = cast<LoadInst>(&Ingredient);
+
+  Type *ScalarDataTy = getLoadStoreType(&Ingredient);
+  auto *DataTy = VectorType::get(ScalarDataTy, State.VF);
+  const Align Alignment = getLoadStoreAlignment(&Ingredient);
+  bool CreateGather = !isConsecutive();
+
+  auto &Builder = State.Builder;
+  // Handle loads.
+  assert(LI && "Must have a load instruction");
+  State.setDebugLocFrom(getDebugLoc());
+  for (unsigned Part = 0; Part < State.UF; ++Part) {
+    CallInst *NewLI;
+    Value *EVL = State.get(getEVL(), VPIteration(0, 0));
+    Value *Addr = State.get(getAddr(), Part, !CreateGather);
+    Value *Mask =
+        getMask()
+            ? State.get(getMask(), Part)
+            : Builder.CreateVectorSplat(State.VF, Builder.getTrue());
+    if (CreateGather) {
+      NewLI = Builder.CreateIntrinsic(DataTy, Intrinsic::vp_gather,
+                                      {Addr, Mask, EVL}, nullptr,
+                                      "wide.masked.gather");
+    } else {
+      VectorBuilder VBuilder(Builder);
+      VBuilder.setEVL(EVL).setMask(Mask);
+      NewLI = cast<CallInst>(VBuilder.createVectorInstruction(
+          Instruction::Load, DataTy, Addr, "vp.op.load"));
+    }
+    NewLI->addParamAttr(
+        0, Attribute::getWithAlignment(NewLI->getContext(), Alignment));
+
+    // Add metadata to the load.
+    State.addMetadata(NewLI, LI);
+    State.set(this, NewLI, Part);
+  }
+}
+
+void VPWidenStoreRecipe::execute(VPTransformState &State) {
+  auto *SI = cast<StoreInst>(&Ingredient);
+
+  VPValue *StoredValue = getStoredValue();
+  bool CreateScatter = !isConsecutive();
+  const Align Alignment = getLoadStoreAlignment(&Ingredient);
+
+  auto &Builder = State.Builder;
+  InnerLoopVectorizer::VectorParts BlockInMaskParts(State.UF);
+  bool IsMaskRequired = getMask();
+  if (IsMaskRequired) {
+    // Mask reversal is only needed for non-all-one (null) masks, as reverse of
+    // a null all-one mask is a null mask.
+    for (unsigned Part = 0; Part < State.UF; ++Part) {
+      Value *Mask = State.get(getMask(), Part);
+      if (isReverse())
+        Mask = Builder.CreateVectorReverse(Mask, "reverse");
+      BlockInMaskParts[Part] = Mask;
+    }
+  }
+
+  State.setDebugLocFrom(getDebugLoc());
+
+  for (unsigned Part = 0; Part < State.UF; ++Part) {
+    Instruction *NewSI = nullptr;
+    Value *StoredVal = State.get(StoredValue, Part);
+    // TODO: split this into several classes for better design.
+    if (CreateScatter) {
+      Value *MaskPart = IsMaskRequired ? BlockInMaskParts[Part] : nullptr;
+      Value *VectorGep = State.get(getAddr(), Part);
+      NewSI = Builder.CreateMaskedScatter(StoredVal, VectorGep, Alignment,
+                                          MaskPart);
+    } else {
+      if (isReverse()) {
+        // If we store to reverse consecutive memory locations, then we need
+        // to reverse the order of elements in the stored value.
+        StoredVal = Builder.CreateVectorReverse(StoredVal, "reverse");
+        // We don't want to update the value in the map as it might be used in
+        // another expression. So don't call resetVectorValue(StoredVal).
+      }
+      auto *VecPtr = State.get(getAddr(), Part, /*IsScalar*/ true);
+      if (IsMaskRequired)
+        NewSI = Builder.CreateMaskedStore(StoredVal, VecPtr, Alignment,
+                                          BlockInMaskParts[Part]);
+      else
+        NewSI = Builder.CreateAlignedStore(StoredVal, VecPtr, Alignment);
+    }
+    State.addMetadata(NewSI, SI);
   }
 }
 
+void VPWidenVPStoreRecipe::execute(VPTransformState &State) {
+  assert(State.UF == 1 && "Expected only UF == 1 when vectorizing with "
+                          "explicit vector length.");
+  // FIXME: Support reverse store after vp_reverse is added.
+  assert(!isReverse() && "Reverse stores are not implemented yet.");
+
+  auto *SI = cast<StoreInst>(&Ingredient);
+
+  VPValue *StoredValue = getStoredValue();
+  bool CreateScatter = !isConsecutive();
+  const Align Alignment = getLoadStoreAlignment(&Ingredient);
+
+  auto &Builder = State.Builder;
+  State.setDebugLocFrom(getDebugLoc());
+
+  for (unsigned Part = 0; Part < State.UF; ++Part) {
+    CallInst *NewSI = nullptr;
+    Value *StoredVal = State.get(StoredValue, Part);
+    Value *EVL = State.get(getEVL(), VPIteration(0, 0));
+    // FIXME: Support reverse store after vp_reverse is added.
+    Value *Mask =
+        getMask()
+            ? State.get(getMask(), Part)
+            : Builder.CreateVectorSplat(State.VF, Builder.getTrue());
+    Value *Addr = State.get(getAddr(), Part, !CreateScatter);
+    if (CreateScatter) {
+      NewSI = Builder.CreateIntrinsic(Type::getVoidTy(EVL->getContext()),
+                                      Intrinsic::vp_scatter,
+                                      {StoredVal, Addr, Mask, EVL});
+    } else {
+      VectorBuilder VBuilder(Builder);
+      VBuilder.setEVL(EVL).setMask(Mask);
+      NewSI = cast<CallInst>(VBuilder.createVectorInstruction(
+          Instruction::Store, Type::getVoidTy(EVL->getContext()),
+          {StoredVal, Addr}));
+    }
+    NewSI->addParamAttr(
+        1, Attribute::getWithAlignment(NewSI->getContext(), Alignment));
+
+    State.addMetadata(NewSI, SI);
+  }
+}
 // Determine how to lower the scalar epilogue, which depends on 1) optimising
 // for minimum code-size, 2) predicate compiler options, 3) loop hints forcing
 // predication, and 4) a TTI hook that analyses whether the loop is suitable
diff --git a/llvm/lib/Transforms/Vectorize/VPRecipeBuilder.h b/llvm/lib/Transforms/Vectorize/VPRecipeBuilder.h
index 605b47fa0a46b8..b4c7ab02f928f0 100644
--- a/llvm/lib/Transforms/Vectorize/VPRecipeBuilder.h
+++ b/llvm/lib/Transforms/Vectorize/VPRecipeBuilder.h
@@ -69,9 +69,9 @@ class VPRecipeBuilder {
   /// Check if the load or store instruction \p I should widened for \p
   /// Range.Start and potentially masked. Such instructions are handled by a
   /// recipe that takes an additional VPInstruction for the mask.
-  VPWidenMemoryInstructionRecipe *tryToWidenMemory(Instruction *I,
-                                                   ArrayRef<VPValue *> Operands,
-                                                   VFRange &Range);
+  VPWidenMemoryRecipe *tryToWidenMemory(Instruction *I,
+                                        ArrayRef<VPValue *> Operands,
+                                        VFRange &Range);
 
   /// Check if an induction recipe should be constructed for \p Phi. If so build
   /// and return it. If not, return null.
diff --git a/llvm/lib/Transforms/Vectorize/VPlan.h b/llvm/lib/Transforms/Vectorize/VPlan.h
index 77577b516ae274..cbe015874d16a3 100644
--- a/llvm/lib/Transforms/Vectorize/VPlan.h
+++ b/llvm/lib/Transforms/Vectorize/VPlan.h
@@ -242,15 +242,6 @@ struct VPTransformState {
   ElementCount VF;
   unsigned UF;
 
-  /// If EVL (Explicit Vector Length) is not nullptr, then EVL must be a valid
-  /// value set during plan transformation, possibly a default value = whole
-  /// vector register length. EVL is created only if TTI prefers predicated
-  /// vectorization, thus if EVL is not nullptr it also implies preference for
-  /// predicated vectorization.
-  /// TODO: this is a temporarily solution, the EVL must be explicitly used by
-  /// the recipes and must be removed here.
-  VPValue *EVL = nullptr;
-
   /// Hold the indices to generate specific scalar instructions. Null indicates
   /// that all instances are to be generated, using either scalar or vector
   /// instructions.
@@ -875,7 +866,8 @@ class VPSingleDefRecipe : public VPRecipeBase, public VPValue {
       return true;
     case VPRecipeBase::VPInterleaveSC:
     case VPRecipeBase::VPBranchOnMaskSC:
-    case VPRecipeBase::VPWidenMemoryInstructionSC:
+    case VPRecipeBase::VPWidenLoadSC:
+    case VPRecipeBase::VPWidenStoreSC:
       // TODO: Widened stores don't define a value, but widened loads do. Split
       // the recipes to be able to make widened loads VPSingleDefRecipes.
       return false;
@@ -2273,19 +2265,16 @@ class VPPredInstPHIRecipe : public VPSingleDefRecipe {
   }
 };
 
-/// A Recipe for widening load/store operations.
-/// The recipe uses the following VPValues:
-/// - For load: Address, optional mask
-/// - For store: Address, stored value, optional mask
-/// TODO: We currently execute only per-part unless a specific instance is
-/// provided.
-class VPWidenMemoryInstructionRecipe : public VPRecipeBase {
+/// A common base class for widening memory operations. An optional mask can be
+/// provided as the last operand.
+class VPWidenMemoryRecipe : public VPRecipeBase {
+protected:
   Instruction &Ingredient;
 
-  // Whether the loaded-from / stored-to addresses are consecutive.
+  /// Whether the loaded-from / stored-to addresses are consecutive.
   bool Consecutive;
 
-  // Whether the consecutive loaded/stored addresses are in reverse order.
+  /// Whether the consecutive loaded/stored addresses are in reverse order.
   bool Reverse;
 
   void setMask(VPValue *Mask) {
@@ -2294,48 +2283,66 @@ class VPWidenMemoryInstructionRecipe : public VPRecipeBase {
     addOperand(Mask);
   }
 
-  bool isMasked() const {
-    return isStore() ? getNumOperands() == 3 : getNumOperands() == 2;
+  VPWidenMemoryRecipe(const char unsigned ...
[truncated]
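
Since the patch is truncated above, here is the rough shape of the recipe hierarchy it introduces, reconstructed from the visible hunks and the review snippets below; member lists and bases are abbreviated and not authoritative:

class VPWidenMemoryRecipe : public VPRecipeBase {
protected:
  Instruction &Ingredient; // scalar load/store being widened
  bool Consecutive;        // loaded-from / stored-to addresses are consecutive
  bool Reverse;            // consecutive addresses in reverse order
  // Operands: address [, stored value] [, mask].
};

// Widened load; produces a value, so it is also a VPValue.
struct VPWidenLoadRecipe final : public VPWidenMemoryRecipe, public VPValue { /* ... */ };

// Widened store; uses stored value and address, defines no value.
struct VPWidenStoreRecipe final : public VPWidenMemoryRecipe { /* ... */ };

// Vector-predicated variants with an extra EVL operand, lowered to llvm.vp.*
// intrinsics; renamed to VPWidenLoadEVLRecipe / VPWidenStoreEVLRecipe later in
// the review.
struct VPWidenVPLoadRecipe final : public VPWidenMemoryRecipe, public VPValue { /* ... */ };
struct VPWidenVPStoreRecipe final : public VPWidenMemoryRecipe { /* ... */ };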

@fhahn
Contributor Author

fhahn commented Apr 5, 2024

Note that this still contains the commit for #87411 as that PR is based on a branch not in llvm-project. Will update once #87411 is merged.

// Handle loads.
assert(LI && "Must have a load instruction");
State.setDebugLocFrom(getDebugLoc());
for (unsigned Part = 0; Part < State.UF; ++Part) {
Member

No need for the loop here, since expected that State.UF == 1

Contributor Author

Removed loop, thanks!

auto &Builder = State.Builder;
State.setDebugLocFrom(getDebugLoc());

for (unsigned Part = 0; Part < State.UF; ++Part) {
Member

Same here

Contributor Author

Removed, thanks!

Comment on lines 2414 to 2416
return new VPWidenVPLoadRecipe(cast<LoadInst>(Ingredient), getAddr(),
getEVL(), getMask(), isConsecutive(),
getDebugLoc());
Member

Do we need clone implementation?

Collaborator

+1
Currently clone() is called only for epilog vectorization, so this is unreachable.
Can define it unreachable at the VPWidenMemoryRecipe parent, instead of pure virtual.

Contributor Author

moved to VPWidenMemoryRecipe::clone, thanks!
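A minimal sketch of that resolution, assuming clone() on the common base class overrides the virtual VPRecipeBase::clone(); the exact wording in the final patch may differ:

class VPWidenMemoryRecipe : public VPRecipeBase {
public:
  // clone() is only reached via epilogue vectorization, which does not use
  // these recipes yet, so the shared base can simply trap for now.
  VPWidenMemoryRecipe *clone() override {
    llvm_unreachable("cloning not implemented yet");
  }
};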

Comment on lines 1341 to 1342
VPValue *BTC = Plan.getOrCreateBackedgeTakenCount();
auto IsHeaderMask = [BTC](VPValue *V) {
Member

Suggested change
-VPValue *BTC = Plan.getOrCreateBackedgeTakenCount();
-auto IsHeaderMask = [BTC](VPValue *V) {
+auto IsHeaderMask = [BTC = Plan.getOrCreateBackedgeTakenCount()](VPValue *V) {

Contributor Author

Reworked, the lambda is gone now, thanks!

Collaborator

@ayalz ayalz left a comment

Thanks for taking care of this clean-up!

// FIXME: Support reverse loading after vp_reverse is added.
assert(!isReverse() && "Reverse loads are not implemented yet.");

// Attempt to issue a wide load.
Collaborator

Suggested change
-// Attempt to issue a wide load.

Contributor Author

Removed, thanks!

Comment on lines 9489 to 9490
// Handle loads.
assert(LI && "Must have a load instruction");
Collaborator

Suggested change
-// Handle loads.
-assert(LI && "Must have a load instruction");

Contributor Author

Removed, thanks!

NewLI->addParamAttr(
0, Attribute::getWithAlignment(NewLI->getContext(), Alignment));

// Add metadata to the load.
Collaborator

Does this comment add any information?

Contributor Author

Removed, thanks!

@@ -875,7 +866,8 @@ class VPSingleDefRecipe : public VPRecipeBase, public VPValue {
return true;
case VPRecipeBase::VPInterleaveSC:
case VPRecipeBase::VPBranchOnMaskSC:
case VPRecipeBase::VPWidenMemoryInstructionSC:
case VPRecipeBase::VPWidenLoadSC:
case VPRecipeBase::VPWidenStoreSC:
Collaborator

Add the two VP variants?

Contributor Author

Added, thanks!

}

VP_CLASSOF_IMPL(VPDef::VPWidenMemoryInstructionSC)
/// Returns true if the recipe is masked.
bool isMasked() const {
Collaborator

Further incentive to maintain a bool IsMasked indicator?

Contributor Author

Updated to use bool

return CompareToReplace &&
CompareToReplace->getOpcode() == Instruction::ICmp &&
CompareToReplace->getPredicate() == CmpInst::ICMP_ULE &&
CompareToReplace->getOperand(1) == BTC;
Collaborator

... With the new
approach, it is not sufficient to look at users of the widened canonical
IV to find all uses of the header mask.

In some cases, a widened IV is used instead of separately widening the
canonical IV. To handle those cases, iterate over all recipes in the
vector loop region to make sure all widened memory recipes are
processed.

Must every ICMP_ULE of BTC be the header mask, regardless of its other operand?
Should/can the header mask(s) continue to be identified by looking for compares of VPWidenCanonicalIVRecipe that in turn use the canonical IV, before calling CanonicalIVPHI->replaceAllUsesWith(EVLPhi)?

Tail folding (an integral part of initial stripmining step according to roadmap) should arguably introduce a single abstract HeaderMask VPValue, retrievable from the loop region (as with its canonical IV), to be later materialized/legalized/lowered into concrete bump/widening/compare/active-lane-mask/no-mask-with-EVL recipes placed inside blocks?

Contributor Author

Must every ICMP_ULE of BTC be the header mask, regardless of its other operand?

At the moment yes, but I reworked this to walk users of header-masks.

The current placement of addExplicitVectorLength unfortunately means that VPWidenCanonicalIVRecipe may have been removed, if there's a matching VPWidenIntOrFpInductionRecipe (see TODO from the initial EVL PR; the current placement is to avoid introducing new uses of the canonical IV after introducing EVL)

So we also need to walk the users of VPWidenIntOrFpInductionRecipe that are canonical.

Tail folding (an integral part of initial stripmining step according to roadmap) should arguably introduce a single abstract HeaderMask VPValue, retrievable from the loop region (as with its canonical IV), to be later materialized/legalized/lowered into concrete bump/widening/compare/active-lane-mask/no-mask-with-EVL recipes placed inside blocks?

Sounds good to me, pushing towards more gradual lowering (as follow-up)
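
For reference, the header-mask check being discussed is essentially a pattern match on ICMP_ULE(wide canonical IV, backedge-taken count). A sketch based on the snippet quoted at the top of this thread, with the first operand pinned down as the reviewer suggests; names are illustrative, not the final helper:

static bool isHeaderMask(VPValue *V, VPValue *WideCanonicalIV, VPValue *BTC) {
  auto *Cmp = dyn_cast_or_null<VPInstruction>(V);
  return Cmp && Cmp->getOpcode() == Instruction::ICmp &&
         Cmp->getPredicate() == CmpInst::ICMP_ULE &&
         Cmp->getOperand(0) == WideCanonicalIV && Cmp->getOperand(1) == BTC;
}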

if (!MemR)
continue;
VPValue *OrigMask = MemR->getMask();
if (!OrigMask)
Collaborator

Are unmasked loads/stores permitted when tail-folding, moreover with EVL - can they remain unmodified and independent of EVL, alongside EVL VP intrinsics?

Contributor Author

Should not be null at this point, updated, thanks!

Comment on lines 1357 to 1362
VPValue *OrigMask = MemR->getMask();
if (!OrigMask)
continue;
assert(!MemR->isReverse() &&
"Reversed memory operations not supported yet.");
VPValue *Mask = IsHeaderMask(OrigMask) ? nullptr : OrigMask;
Collaborator

Suggested change
-VPValue *OrigMask = MemR->getMask();
-if (!OrigMask)
-continue;
-assert(!MemR->isReverse() &&
-"Reversed memory operations not supported yet.");
-VPValue *Mask = IsHeaderMask(OrigMask) ? nullptr : OrigMask;
+assert(!MemR->isReverse() &&
+"Reversed memory operations not supported yet.");
+VPValue *OrigMask = MemR->getMask();
+assert(OrigMask && "Unmasked widen memory recipe when folding tail");
+VPValue *Mask = IsHeaderMask(OrigMask) ? nullptr : OrigMask;

worth slightly reordering, if OrigMask must be non-null.

Contributor Author

Reordered, thanks!

"Reversed memory operations not supported yet.");
VPValue *Mask = IsHeaderMask(OrigMask) ? nullptr : OrigMask;
if (auto *L = dyn_cast<VPWidenLoadRecipe>(&R)) {
auto *N = new VPWidenVPLoadRecipe(cast<LoadInst>(L->getIngredient()),
Collaborator

nit: perhaps the constructor of VPWidenVPLoadRecipe should take a VPWidenLoadRecipe, VPEVL, and mask as parameters.

Contributor Author

Done, thanks!

L->replaceAllUsesWith(N);
L->eraseFromParent();
} else if (auto *S = dyn_cast<VPWidenStoreRecipe>(&R)) {
auto *N = new VPWidenVPStoreRecipe(
Collaborator

nit: perhaps the constructor of VPWidenVPStoreRecipe should take a VPWidenStoreRecipe, VPEVL, and mask as parameters.

Contributor Author

Done, thanks!

/// A recipe for widening load operations with vector-predication intrinsics,
/// using the address to load from, the explicit vector length and an optional
/// mask.
struct VPWidenVPLoadRecipe final : public VPWidenMemoryRecipe, public VPValue {
Contributor

My understanding is that you want to have dedicated recipes that emit vp-intrinsics. Since they're derived from VPRecipe, is it expected that they will be fully supported by VPlanTransforms?

Contributor Author

Yes, any recipe must be fully supported; all existing code dealing with memory ops should already do this, either by directly dealing with the common base class or various may-read/may-write helpers.

fhahn added a commit that referenced this pull request Apr 17, 2024
Factor out logic to collect all users recursively to be re-used
in #87816.
Introduce new subclasses of VPWidenMemoryRecipe for VP
(vector-predicated) loads and stores to address multiple TODOs from
llvm#76172

Note that the introduction of the new recipes also improves code-gen for
VP gather/scatters by removing the redundant header mask. With the new
approach, it is not sufficient to look at users of the widened canonical
IV to find all uses of the header mask.

In some cases, a widened IV is used instead of separately widening the
canonical IV. To handle those cases, iterate over all recipes in the
vector loop region to make sure all widened memory recipes are
processed.

Depends on llvm#87411.
@fhahn fhahn force-pushed the vplan-split-vpmemory branch from 6dcd584 to 4533d92 on April 17, 2024 15:11
Contributor Author

@fhahn fhahn left a comment

Rebased after parent commits landed & comments addressed. Apologies for the force-push


StoreInst *Store = cast<StoreInst>(I);
return new VPWidenMemoryInstructionRecipe(
*Store, Ptr, Operands[0], Mask, Consecutive, Reverse, I->getDebugLoc());
return new VPWidenStoreRecipe(*Store, Operands[0], Ptr, Mask, Consecutive,
Collaborator

nit: better retain parameters in their order as operands?

void VPWidenMemoryInstructionRecipe::execute(VPTransformState &State) {
VPValue *StoredValue = isStore() ? getStoredValue() : nullptr;

void VPWidenLoadRecipe::execute(VPTransformState &State) {
// Attempt to issue a wide load.
Collaborator

nit: redundant?

Builder, DataTy, State.get(getAddr(), Part, !CreateGather),
CreateGather, Mask, EVL, Alignment);
} else if (CreateGather) {
if (CreateGather) {
Collaborator

Suggested change
if (CreateGather) {
Value *Addr = State.get(getAddr(), Part, !CreateGather);
if (CreateGather)
NewLI = Builder.CreateMaskedGather(DataTy, Addr, Alignment, Mask,
nullptr, "wide.masked.gather");
else if (Mask)
NewLI = Builder.CreateMaskedLoad(DataTy, Addr, Alignment, Mask,
PoisonValue::get(DataTy),
"wide.masked.load");
else
NewLI =
Builder.CreateAlignedLoad(DataTy, Addr, Alignment, "wide.load");
// Add metadata to the load, but setVectorValue to possibly reversed shuffle.
State.addMetadata(NewLI, LI);
if (Reverse)
NewLI = Builder.CreateVectorReverse(NewLI, "reverse");
State.set(this, NewLI, Part);

nit: could be simplified a bit, consistent with VPWidenVPLoadRecipe::execute()?

Contributor Author

Updated, thanks!

Collaborator

Thanks! The brackets could be removed.

Contributor Author

Kept the braces as there are multi-line statements (for which I think it is recommended to use braces)

Collaborator

ok, but let the store case below be consistent with the load case here?

Builder, State.get(getAddr(), Part, !CreateScatter), StoredVal,
CreateScatter, Mask, EVL, Alignment);
} else if (CreateScatter) {
if (CreateScatter) {
Collaborator

Suggested change
if (CreateScatter) {
Value *Addr = State.get(getAddr(), Part, !CreateScatter);
if (CreateScatter)
NewSI =
Builder.CreateMaskedScatter(StoredVal, Addr, Alignment, Mask);
else if (Mask)
NewSI = Builder.CreateMaskedStore(StoredVal, Addr, Alignment, Mask);
else
NewSI = Builder.CreateAlignedStore(StoredVal, Addr, Alignment);

nit: can be simplified a bit, consistent with VPWidenVPStoreRecipe::execute()?

Contributor Author

Done, thanks!

Collaborator

Thanks! The brackets can be removed, along with inlining else if.

Contributor Author

Done, thanks!

@@ -9448,31 +9423,14 @@ void VPWidenStoreRecipe::execute(VPTransformState &State) {

Value *StoredVal = State.get(StoredVPValue, Part);
if (isReverse()) {
assert(!State.EVL && "reversing not yet implemented with EVL");
// If we store to reverse consecutive memory locations, then we need
// to reverse the order of elements in the stored value.
StoredVal = Builder.CreateVectorReverse(StoredVal, "reverse");
// We don't want to update the value in the map as it might be used in
// another expression. So don't call resetVectorValue(StoredVal).
}
// TODO: split this into several classes for better design.
Collaborator

Is this TODO still relevant?

Contributor Author

Removed, thanks!

Comment on lines 1229 to 1231
// Walk users of wide canonical IVs and replace all compares of the form
// (ICMP_ULE, WideCanonicalIV, backedge-taken-count) with
// the given idiom VPValue.
Collaborator

Suggested change
-// Walk users of wide canonical IVs and replace all compares of the form
-// (ICMP_ULE, WideCanonicalIV, backedge-taken-count) with
-// the given idiom VPValue.
+// Walk users of wide canonical IVs and apply Fn to all compares of the form
+// (ICMP_ULE, WideCanonicalIV, backedge-taken-count).

Contributor Author

Updated, thanks!

@@ -27,14 +27,14 @@ define void @foo(ptr noalias %a, ptr noalias %b, ptr noalias %c, i64 %N) {
; IF-EVL-NEXT: vp<[[ST:%[0-9]+]]> = SCALAR-STEPS vp<[[EVL_PHI]]>, ir<1>
; IF-EVL-NEXT: CLONE ir<[[GEP1:%.+]]> = getelementptr inbounds ir<%b>, vp<[[ST]]>
; IF-EVL-NEXT: vp<[[PTR1:%[0-9]+]]> = vector-pointer ir<[[GEP1]]>
; IF-EVL-NEXT: WIDEN ir<[[LD1:%.+]]> = load vp<[[PTR1]]>, ir<true>
; IF-EVL-NEXT: WIDEN ir<[[LD1:%.+]]> = vp.load vp<[[PTR1]]>, vp<[[EVL]]>
Collaborator

nit (unrelated to this patch): may be helpful to add the name of the underlying IR LoadInst when printing WIDEN [vp.]load recipes, to clarify what is being widened.

Contributor Author

The name of the original IR instruction should be available as the IR name of the recipe (matched using a pattern here ir<[[LD1:%.+]]>)

Collaborator

Ah, of course.

; IF-EVL-NEXT: [[TMP20:%.*]] = getelementptr inbounds i32, ptr [[INDEX:%.*]], <vscale x 2 x i64> [[VEC_IND]]
; IF-EVL-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 2 x i64> @llvm.vp.gather.nxv2i64.nxv2p0(<vscale x 2 x ptr> align 8 [[TMP20]], <vscale x 2 x i1> [[TMP19]], i32 [[TMP18]])
; IF-EVL-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 2 x i64> @llvm.vp.gather.nxv2i64.nxv2p0(<vscale x 2 x ptr> align 8 [[TMP20]], <vscale x 2 x i1> shufflevector (<vscale x 2 x i1> insertelement (<vscale x 2 x i1> poison, i1 true, i64 0), <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer), i32 [[TMP18]])
Collaborator

nit: would be clearer if this shufflevector ( ... zeroinitializer) operand representing a full-mask was outlined; as some "oneinitializer".

Contributor Author

Hmm, at the moment this creates a constant, but maybe there's an easy way to create an equivalent shuffle vector instruction. Can check separately.

Collaborator

ok
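
For context on the constant in this CHECK line: the recipe builds the all-true mask with IRBuilder::CreateVectorSplat (as in the diff above), and for a scalable vector a constant splat is printed as exactly this insertelement/shufflevector expression. A small illustration, assuming a <vscale x 2 x i1> mask type:

// From VPWidenVPLoadRecipe::execute when no mask operand is present:
Value *AllTrue = Builder.CreateVectorSplat(State.VF, Builder.getTrue());
// For a scalable VF this folds to the constant expression
//   shufflevector (insertelement (<vscale x 2 x i1> poison, i1 true, i64 0),
//                  <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer)
// which is what appears in the test output.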

@@ -1336,6 +1332,30 @@ void VPlanTransforms::addExplicitVectorLength(VPlan &Plan) {
NextEVLIV->insertBefore(CanonicalIVIncrement);
EVLPhi->addOperand(NextEVLIV);

forAllHeaderPredicates(Plan, [VPEVL](VPInstruction &Mask) {
Collaborator

An alternative approach would be to scan the recipes inside the loop, replacing every widen memory recipe with its vp counterpart, where header masks are replaced with null. Before doing so, collect all header masks into a set, possibly via forAllHeaderPredicates(), which could be simplified into collectAllHeaderPredicates(), until a designated recipe can be used directly.

Where is collectUsersRecursively() defined?

Contributor Author

An alternative approach would be to scan the recipes inside the loop, replacing every widen memory recipe with its vp counterpart, where header masks are replaced with null. Before doing so, collect all header masks into a set, possibly via forAllHeaderPredicates(), which could be simplified into collectAllHeaderPredicates(), until a designated recipe can be used directly.

Updated to use collectAllHeaderPredicates (collectAllHeaderMasks), but kept walking those users for now. Can also revert back to iterating over the whole vector loop region as was done in earlier versions of the patch if preferred.

@@ -9425,6 +9362,44 @@ void VPWidenLoadRecipe::execute(VPTransformState &State) {
}
}

void VPWidenVPLoadRecipe::execute(VPTransformState &State) {
Collaborator

A general naming comment: VPWidenVP... sounds a bit confusing. The original non-EVL/non-VP masked vector loads and stores are also already "vector predicated", and the VP prefix is already prevalently used to denote VPlan. The distinction lies with the additional EVL operand, currently employed to relieve the mask of conveying the tail. How about VPWidenEVLLoadRecipe and VPWidenEVLStoreRecipe?

Contributor Author

Updated, thanks!

Contributor Author

@fhahn fhahn left a comment

Thanks for taking a look, latest comments should be addressed.

Comment on lines 2390 to 2392
// Widened, consecutive memory operations only demand the first lane of
// their address, unless the same operand is also stored. That latter can
// happen with opaque pointers.
Contributor Author

Done, not sure what happened there.

/// WideCanonicalIV, backedge-taken-count) pattern
static void forAllHeaderPredicates(VPlan &Plan,
function_ref<void(VPInstruction &)> Fn) {
SmallVector<VPValue *> WideCanonicalIVs;
auto *FoundWidenCanonicalIVUser =
find_if(Plan.getCanonicalIV()->users(),
Contributor Author

At most one WidenCanonicalIVRecipe user is expected (worth asserting?), or simply iterate over all users, possibly pushing multiple candidates into the worklist, as with widen int or fp induction recipes below?

Yes, added an assert.

Instead of a pattern matching search for header masks, introduce an abstract HeaderMask recipe, which is later (i.e., here) lowered into an ICMP_ULE, ActiveLaneMask, EVL, whatnot?

Sounds good, OK as follow-up? Added a TODO for now?

Comment on lines 1229 to 1231
// Walk users of wide canonical IVs and replace all compares of the form
// (ICMP_ULE, WideCanonicalIV, backedge-taken-count) with
// the given idiom VPValue.
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Updated, thanks!

@@ -27,14 +27,14 @@ define void @foo(ptr noalias %a, ptr noalias %b, ptr noalias %c, i64 %N) {
; IF-EVL-NEXT: vp<[[ST:%[0-9]+]]> = SCALAR-STEPS vp<[[EVL_PHI]]>, ir<1>
; IF-EVL-NEXT: CLONE ir<[[GEP1:%.+]]> = getelementptr inbounds ir<%b>, vp<[[ST]]>
; IF-EVL-NEXT: vp<[[PTR1:%[0-9]+]]> = vector-pointer ir<[[GEP1]]>
; IF-EVL-NEXT: WIDEN ir<[[LD1:%.+]]> = load vp<[[PTR1]]>, ir<true>
; IF-EVL-NEXT: WIDEN ir<[[LD1:%.+]]> = vp.load vp<[[PTR1]]>, vp<[[EVL]]>
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The name of the original IR instruction should be available as the IR name of the recipe (matched using a pattern here ir<[[LD1:%.+]]>)

@@ -1336,6 +1332,30 @@ void VPlanTransforms::addExplicitVectorLength(VPlan &Plan) {
NextEVLIV->insertBefore(CanonicalIVIncrement);
EVLPhi->addOperand(NextEVLIV);

forAllHeaderPredicates(Plan, [VPEVL](VPInstruction &Mask) {
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

An alternative approach would be to scan the recipes inside the loop, replacing every widen memory recipe with its vp counterpart, where header masks are replaced with null. Before doing so, collect all header masks into a set, possibly via forAllHeaderPredicates(), which could be simplified into collectAllHeaderPredicates(), until a designated recipe can be use directly.

Updated to use collectAllHeaderPredicates (collectAllHeaderMasks), but kept walking those users for now. Can also revert to iterating over the whole vector loop region, as was done in earlier versions of the patch, if preferred.
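For illustration, a hedged sketch of what such a collection helper can look like; the helper name comes from this discussion and the matching below is approximate (a real implementation would also check the ULE predicate), so this is not lifted verbatim from the patch:

// Sketch: find the (at most one) VPWidenCanonicalIVRecipe user of the
// canonical IV and collect its (ICMP_ULE, WideCanonicalIV,
// backedge-taken-count) compare users as header masks.
static SmallVector<VPValue *> collectAllHeaderMasks(VPlan &Plan) {
  SmallVector<VPValue *> HeaderMasks;
  auto *IVUser = find_if(Plan.getCanonicalIV()->users(),
                         [](VPUser *U) { return isa<VPWidenCanonicalIVRecipe>(U); });
  if (IVUser == Plan.getCanonicalIV()->users().end())
    return HeaderMasks;
  auto *WideCanonicalIV = cast<VPWidenCanonicalIVRecipe>(*IVUser);
  for (VPUser *U : WideCanonicalIV->users()) {
    auto *Cmp = dyn_cast<VPInstruction>(U);
    if (Cmp && Cmp->getOpcode() == Instruction::ICmp &&
        Cmp->getOperand(0) == WideCanonicalIV &&
        Cmp->getOperand(1) == Plan.getOrCreateBackedgeTakenCount())
      HeaderMasks.push_back(Cmp);
  }
  return HeaderMasks;
}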

; IF-EVL-NEXT: [[TMP20:%.*]] = getelementptr inbounds i32, ptr [[INDEX:%.*]], <vscale x 2 x i64> [[VEC_IND]]
; IF-EVL-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 2 x i64> @llvm.vp.gather.nxv2i64.nxv2p0(<vscale x 2 x ptr> align 8 [[TMP20]], <vscale x 2 x i1> [[TMP19]], i32 [[TMP18]])
; IF-EVL-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 2 x i64> @llvm.vp.gather.nxv2i64.nxv2p0(<vscale x 2 x ptr> align 8 [[TMP20]], <vscale x 2 x i1> shufflevector (<vscale x 2 x i1> insertelement (<vscale x 2 x i1> poison, i1 true, i64 0), <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer), i32 [[TMP18]])
Contributor Author

Hmm, at the moment this creates a constant, but maybe there's an easy way to create an equivalent shufflevector instruction. Can check separately.
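For reference, the same splat can also be requested through IRBuilder; note it may still fold back to the constant form shown above, so this is only a sketch of the alternative being considered, not what the patch emits:

// Sketch: build an all-true mask for a scalable <vscale x 2 x i1> vector.
// CreateVectorSplat emits insertelement/shufflevector, constant-folding them
// when the splatted value is a constant.
Value *AllTrueMask =
    Builder.CreateVectorSplat(ElementCount::getScalable(2), Builder.getTrue());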

Comment on lines 50 to 51
case VPWidenStoreSC:
case VPWidenEVLStoreSC:
Collaborator

Suggested change
case VPWidenStoreSC:
case VPWidenEVLStoreSC:
case VPWidenEVLStoreSC:
case VPWidenStoreSC:

nit: lex order

There are many such out-of-order cases below, where the EVL and non-EVL versions end up listed apart. Perhaps better to call them VPWidenLoadEVL[SC] and VPWidenStoreEVL[SC], for better alignment?

Contributor Author

Renamed including the recipe names (for consistency), thanks!

Comment on lines 66 to 69
case VPWidenIntOrFpInductionSC:
case VPWidenLoadSC:
case VPWidenPHISC:
case VPWidenEVLLoadSC:
Collaborator

Suggested change
case VPWidenIntOrFpInductionSC:
case VPWidenLoadSC:
case VPWidenPHISC:
case VPWidenEVLLoadSC:
case VPWidenEVLLoadSC:
case VPWidenIntOrFpInductionSC:
case VPWidenLoadSC:
case VPWidenPHISC:

nit: lex order.

Contributor Author

Updated with new naming.

Comment on lines 86 to 87
case VPWidenLoadSC:
case VPWidenEVLLoadSC:
Collaborator

Suggested change
case VPWidenLoadSC:
case VPWidenEVLLoadSC:
case VPWidenEVLLoadSC:
case VPWidenLoadSC:

Contributor Author

Updated with new naming

static void
replaceHeaderPredicateWith(VPlan &Plan, VPValue &Idiom,
                           function_ref<bool(VPUser &, unsigned)> Cond = {}) {
/// Collet all VPValues representing a header mask through the (ICMP_ULE,
Collaborator

Suggested change
/// Collet all VPValues representing a header mask through the (ICMP_ULE,
/// Collect all VPValues representing a header mask through the (ICMP_ULE,

Contributor Author

Fixed, thanks!

Comment on lines 1234 to 1235
// Walk users of wide canonical IVs and apply Fn to all compares of the form
// (ICMP_ULE, WideCanonicalIV, backedge-taken-count).
Collaborator

Suggested change
// Walk users of wide canonical IVs and apply Fn to all compares of the form
// (ICMP_ULE, WideCanonicalIV, backedge-taken-count).
// Walk users of wide canonical IVs and collect all compares of the form
// (ICMP_ULE, WideCanonicalIV, backedge-taken-count).

Contributor Author

Done, thanks!

@@ -157,6 +161,8 @@ bool VPRecipeBase::mayHaveSideEffects() const {
return mayWriteToMemory();
case VPWidenLoadSC:
case VPWidenStoreSC:
case VPWidenEVLLoadSC:
case VPWidenEVLStoreSC:
Collaborator

lex order.

Contributor Author

updated with new naming

Comment on lines 1240 to 1241
// TODO: Introduce explicit recipe for header-mask instead of searching
// for the header-mask pattern manually.
Collaborator

This TODO can be placed at the definition of collectAllHeaderMasks(), which would be removed as a whole.

Contributor Author

Moved, thanks!

auto *N = new VPWidenEVLLoadRecipe(L, VPEVL, NewMask);
N->insertBefore(L);
L->replaceAllUsesWith(N);
L->eraseFromParent();
Collaborator

Suggested change
L->eraseFromParent();

nit: could the following recursivelyDeleteDeadRecipes() take care of erasing Load recipes from their parents? The former is needed due to no subsequent VPlan dce.
Collecting dead Stores is more challenging.

Contributor Author

At the moment recursivelyDeleteDeadRecipes works upwards, recursing from uses to defs, so recursivelyDeleteDeadRecipes(HeaderMask) won't clean up the load.
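In other words, roughly (a simplified sketch of the deletion direction; the actual implementation uses a worklist rather than recursion):

// Sketch: deletion starts at a value and walks to the recipes defining its
// operands; users of HeaderMask (such as the already-replaced load) are never
// visited, so the load must be erased explicitly.
static void recursivelyDeleteDeadRecipesSketch(VPValue *V) {
  VPRecipeBase *R = V->getDefiningRecipe();
  if (!R || R->mayHaveSideEffects() || V->getNumUsers() != 0)
    return;
  SmallVector<VPValue *> Ops(R->op_begin(), R->op_end());
  R->eraseFromParent();
  for (VPValue *Op : Ops)
    recursivelyDeleteDeadRecipesSketch(Op);
}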

@@ -358,6 +358,8 @@ class VPDef {
VPWidenGEPSC,
VPWidenLoadSC,
VPWidenStoreSC,
VPWidenEVLLoadSC,
VPWidenEVLStoreSC,
Collaborator

lex order.

Contributor Author

reordered with new naming, thanks!

; IF-EVL-NEXT: [[TMP20:%.*]] = getelementptr inbounds i32, ptr [[INDEX:%.*]], <vscale x 2 x i64> [[VEC_IND]]
; IF-EVL-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 2 x i64> @llvm.vp.gather.nxv2i64.nxv2p0(<vscale x 2 x ptr> align 8 [[TMP20]], <vscale x 2 x i1> [[TMP19]], i32 [[TMP18]])
; IF-EVL-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 2 x i64> @llvm.vp.gather.nxv2i64.nxv2p0(<vscale x 2 x ptr> align 8 [[TMP20]], <vscale x 2 x i1> shufflevector (<vscale x 2 x i1> insertelement (<vscale x 2 x i1> poison, i1 true, i64 0), <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer), i32 [[TMP18]])
Collaborator

ok

@fhahn fhahn left a comment

Addressed comments, thanks!

struct VPWidenEVLLoadRecipe final : public VPWidenMemoryRecipe, public VPValue {
VPWidenEVLLoadRecipe(VPWidenLoadRecipe *L, VPValue *EVL, VPValue *Mask)
: VPWidenMemoryRecipe(
VPDef::VPWidenEVLLoadSC, *cast<LoadInst>(&L->getIngredient()),
Contributor Author

Ah no, removed, thanks!

struct VPWidenEVLStoreRecipe final : public VPWidenMemoryRecipe {
VPWidenEVLStoreRecipe(VPWidenStoreRecipe *S, VPValue *EVL, VPValue *Mask)
: VPWidenMemoryRecipe(VPDef::VPWidenEVLStoreSC,
*cast<StoreInst>(&S->getIngredient()),
Contributor Author

Not needed, removed, thanks!

Comment on lines 50 to 51
case VPWidenStoreSC:
case VPWidenEVLStoreSC:
Contributor Author

Renamed including the recipe names (for consistency), thanks!

Comment on lines 66 to 69
case VPWidenIntOrFpInductionSC:
case VPWidenLoadSC:
case VPWidenPHISC:
case VPWidenEVLLoadSC:
Contributor Author

Updated with new naming.

Comment on lines 86 to 87
case VPWidenLoadSC:
case VPWidenEVLLoadSC:
Contributor Author

Updated with new naming

if (FoundWidenCanonicalIVUser != Plan.getCanonicalIV()->users().end()) {
auto *WideCanonicalIV =
cast<VPWidenCanonicalIVRecipe>(*FoundWidenCanonicalIVUser);
assert(all_of(Plan.getCanonicalIV()->users(),
Contributor Author

Done, thanks!

Comment on lines 1234 to 1235
// Walk users of wide canonical IVs and apply Fn to all compares of the form
// (ICMP_ULE, WideCanonicalIV, backedge-taken-count).
Contributor Author

Done, thanks!

Comment on lines 1240 to 1241
// TODO: Introduce explicit recipe for header-mask instead of searching
// for the header-mask pattern manually.
Contributor Author

Moved, thanks!

auto *N = new VPWidenEVLLoadRecipe(L, VPEVL, NewMask);
N->insertBefore(L);
L->replaceAllUsesWith(N);
L->eraseFromParent();
Contributor Author

At the moment recursivelyDeleteDeadRecipes works upwards, recursing from uses to defs, so recursivelyDeleteDeadRecipes(HeaderMask) won't clean up the load.

@@ -358,6 +358,8 @@ class VPDef {
VPWidenGEPSC,
VPWidenLoadSC,
VPWidenStoreSC,
VPWidenEVLLoadSC,
VPWidenEVLStoreSC,
Contributor Author

reordered with new naming, thanks!

@ayalz ayalz left a comment

This LGTM, thanks for accommodating!
Adding last minor nits.

Builder, DataTy, State.get(getAddr(), Part, !CreateGather),
CreateGather, Mask, EVL, Alignment);
} else if (CreateGather) {
if (CreateGather) {
Collaborator

Thanks! The brackets could be removed.

Builder, State.get(getAddr(), Part, !CreateScatter), StoredVal,
CreateScatter, Mask, EVL, Alignment);
} else if (CreateScatter) {
if (CreateScatter) {
Collaborator

Thanks! The brackets can be removed, along with inlining else if.

struct VPWidenLoadEVLRecipe final : public VPWidenMemoryRecipe, public VPValue {
VPWidenLoadEVLRecipe(VPWidenLoadRecipe *L, VPValue *EVL, VPValue *Mask)
: VPWidenMemoryRecipe(
VPDef::VPWidenLoadEVLSC, *cast<LoadInst>(&L->getIngredient()),
Collaborator

Still a redundant cast?

Contributor Author

removed, thanks!

struct VPWidenLoadEVLRecipe final : public VPWidenMemoryRecipe, public VPValue {
VPWidenLoadEVLRecipe(VPWidenLoadRecipe *L, VPValue *EVL, VPValue *Mask)
: VPWidenMemoryRecipe(
VPDef::VPWidenLoadEVLSC, *cast<LoadInst>(&L->getIngredient()),
Collaborator

Suggested change
VPDef::VPWidenLoadEVLSC, *cast<LoadInst>(&L->getIngredient()),
VPDef::VPWidenLoadEVLSC, L->getIngredient(),

Still a redundant cast?

Contributor Author

removed, thanks!
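For completeness, a sketch of the load constructor once the cast is dropped; the operand list and flags below approximate the final patch rather than copying it verbatim:

VPWidenLoadEVLRecipe(VPWidenLoadRecipe *L, VPValue *EVL, VPValue *Mask)
    : VPWidenMemoryRecipe(VPDef::VPWidenLoadEVLSC, L->getIngredient(),
                          {L->getAddr(), EVL}, L->isConsecutive(),
                          L->isReverse(), L->getDebugLoc()),
      VPValue(this, &getIngredient()) {
  setMask(Mask);  // the EVL recipe keeps the (possibly updated) mask
}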

replaceHeaderPredicateWith(Plan, *LaneMask);
for (VPValue *HeaderMask : collectAllHeaderMasks(Plan)) {
HeaderMask->replaceAllUsesWith(LaneMask);
recursivelyDeleteDeadRecipes(HeaderMask);
Collaborator

Suggested change
recursivelyDeleteDeadRecipes(HeaderMask);

Wonder if this is redundant due to subsequent VPlan dce.

Contributor Author

Ah yes, removed, thanks! (Originally thought this comment was for addExplicitVectorLength, where it wouldn't apply.)
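With the erase dropped, the replacement reduces to something like the following sketch, leaving the now-dead header-mask compares for the later VPlan DCE to remove:

for (VPValue *HeaderMask : collectAllHeaderMasks(Plan))
  HeaderMask->replaceAllUsesWith(LaneMask);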

assert(count_if(Plan.getCanonicalIV()->users(),
                [](VPUser *U) { return isa<VPWidenCanonicalIVRecipe>(U); }) <=
           1 &&
       "Must at most one VPWideCanonicalIVRecipe");
Collaborator

Suggested change
"Must at most one VPWideCanonicalIVRecipe");
"Must be at most one VPWideCanonicalIVRecipe");

Contributor Author

Updated to use "Must have at most", thanks!

@fhahn fhahn merged commit e2a72fa into llvm:main Apr 19, 2024
3 of 4 checks passed
@fhahn fhahn deleted the vplan-split-vpmemory branch April 19, 2024 08:44
aniplcc pushed a commit to aniplcc/llvm-project that referenced this pull request Apr 21, 2024
Introduce new subclasses of VPWidenMemoryRecipe for VP
(vector-predicated) loads and stores to address multiple TODOs from
llvm#76172

Note that the introduction of the new recipes also improves code-gen for
VP gather/scatters by removing the redundant header mask. With the new
approach, it is not sufficient to look at users of the widened canonical
IV to find all uses of the header mask.

In some cases, a widened IV is used instead of separately widening the
canonical IV. To handle that, first collect all VPValues representing header
masks (by looking at users of both the canonical IV and widened inductions
that are canonical) and then checking all users (recursively) of those header
masks.

Depends on llvm#87411.

PR: llvm#87816

fhahn commented Apr 22, 2024

Just shared a PR to introduce an abstract header mask early on: #89603

Mel-Chen added a commit that referenced this pull request Jul 16, 2024
)

Following from #87816, add VPReductionEVLRecipe to describe vector
predication reduction.

Address one of the TODOs from #76172.
sayhaan pushed a commit to sayhaan/llvm-project that referenced this pull request Jul 16, 2024
…m#90184)

Summary:
Following from llvm#87816, add VPReductionEVLRecipe to describe vector
predication reduction.

Address one of the TODOs from llvm#76172.


Differential Revision: https://phabricator.intern.facebook.com/D59822470
yuxuanchen1997 pushed a commit that referenced this pull request Jul 25, 2024
)

Summary:
Following from #87816, add VPReductionEVLRecipe to describe vector
predication reduction.

Address one of the TODOs from #76172.


Differential Revision: https://phabricator.intern.facebook.com/D60251485