SROA: Recognize llvm.protected.field.ptr intrinsics. #151650


Open · wants to merge 2 commits into base: users/pcc/spr/main.sroa-recognize-llvmprotectedfieldptr-intrinsics

Conversation

pcc
Contributor

@pcc pcc commented Aug 1, 2025

When an alloca slice's users include llvm.protected.field.ptr intrinsics
and their discriminators are consistent, drop the intrinsics in order
to avoid unnecessary pointer sign and auth operations.
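
For illustration, a minimal before/after sketch of the intended rewrite (hypothetical IR, not taken from the patch; the intrinsic signature matches the tests below):

  ; Before: every access to the slot goes through the intrinsic with the
  ; same discriminator (i64 1), so each access implies a sign or auth
  ; operation on the stored pointer.
  %alloca = alloca ptr
  %prot1 = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
  store ptr %v, ptr %prot1
  %prot2 = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
  %v2 = load ptr, ptr %prot2

  ; After SROA: the discriminators agree, so each intrinsic is RAUW'd to
  ; its pointer operand, the alloca is promoted, and %v2 folds to %v with
  ; no sign/auth traffic left.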

Created using spr 1.3.6-beta.1
@llvmbot added the llvm:analysis and llvm:transforms labels Aug 1, 2025
@llvmbot
Member

llvmbot commented Aug 1, 2025

@llvm/pr-subscribers-llvm-analysis

Author: Peter Collingbourne (pcc)

Changes

When an alloca slice's users include llvm.protected.field.ptr intrinsics
and their discriminators are consistent, drop the intrinsics in order
to avoid unnecessary pointer sign and auth operations.


Full diff: https://github.com/llvm/llvm-project/pull/151650.diff

4 Files Affected:

  • (modified) llvm/include/llvm/Analysis/PtrUseVisitor.h (+15)
  • (modified) llvm/lib/Analysis/PtrUseVisitor.cpp (+2-1)
  • (modified) llvm/lib/Transforms/Scalar/SROA.cpp (+56-5)
  • (added) llvm/test/Transforms/SROA/protected-field-pointer.ll (+41)
diff --git a/llvm/include/llvm/Analysis/PtrUseVisitor.h b/llvm/include/llvm/Analysis/PtrUseVisitor.h
index 0858d8aee2186..a39f6881f24f3 100644
--- a/llvm/include/llvm/Analysis/PtrUseVisitor.h
+++ b/llvm/include/llvm/Analysis/PtrUseVisitor.h
@@ -134,6 +134,7 @@ class PtrUseVisitorBase {
 
     UseAndIsOffsetKnownPair UseAndIsOffsetKnown;
     APInt Offset;
+    Value *ProtectedFieldDisc;
   };
 
   /// The worklist of to-visit uses.
@@ -158,6 +159,10 @@ class PtrUseVisitorBase {
   /// The constant offset of the use if that is known.
   APInt Offset;
 
+  // When this access is via an llvm.protected.field.ptr intrinsic, contains
+  // the second argument to the intrinsic, the discriminator.
+  Value *ProtectedFieldDisc;
+
   /// @}
 
   /// Note that the constructor is protected because this class must be a base
@@ -230,6 +235,7 @@ class PtrUseVisitor : protected InstVisitor<DerivedT>,
     IntegerType *IntIdxTy = cast<IntegerType>(DL.getIndexType(I.getType()));
     IsOffsetKnown = true;
     Offset = APInt(IntIdxTy->getBitWidth(), 0);
+    ProtectedFieldDisc = nullptr;
     PI.reset();
 
     // Enqueue the uses of this pointer.
@@ -242,6 +248,7 @@ class PtrUseVisitor : protected InstVisitor<DerivedT>,
       IsOffsetKnown = ToVisit.UseAndIsOffsetKnown.getInt();
       if (IsOffsetKnown)
         Offset = std::move(ToVisit.Offset);
+      ProtectedFieldDisc = ToVisit.ProtectedFieldDisc;
 
       Instruction *I = cast<Instruction>(U->getUser());
       static_cast<DerivedT*>(this)->visit(I);
@@ -300,6 +307,14 @@ class PtrUseVisitor : protected InstVisitor<DerivedT>,
     case Intrinsic::lifetime_start:
     case Intrinsic::lifetime_end:
       return; // No-op intrinsics.
+
+    case Intrinsic::protected_field_ptr: {
+      if (!IsOffsetKnown)
+        return Base::visitIntrinsicInst(II);
+      ProtectedFieldDisc = II.getArgOperand(1);
+      enqueueUsers(II);
+      break;
+    }
     }
   }
 
diff --git a/llvm/lib/Analysis/PtrUseVisitor.cpp b/llvm/lib/Analysis/PtrUseVisitor.cpp
index 9c79546f491ef..59a09c4ea8721 100644
--- a/llvm/lib/Analysis/PtrUseVisitor.cpp
+++ b/llvm/lib/Analysis/PtrUseVisitor.cpp
@@ -22,7 +22,8 @@ void detail::PtrUseVisitorBase::enqueueUsers(Value &I) {
     if (VisitedUses.insert(&U).second) {
       UseToVisit NewU = {
         UseToVisit::UseAndIsOffsetKnownPair(&U, IsOffsetKnown),
-        Offset
+        Offset,
+        ProtectedFieldDisc,
       };
       Worklist.push_back(std::move(NewU));
     }
diff --git a/llvm/lib/Transforms/Scalar/SROA.cpp b/llvm/lib/Transforms/Scalar/SROA.cpp
index 23256cf2acbd2..c212c4f45dc37 100644
--- a/llvm/lib/Transforms/Scalar/SROA.cpp
+++ b/llvm/lib/Transforms/Scalar/SROA.cpp
@@ -62,6 +62,7 @@
 #include "llvm/IR/Instruction.h"
 #include "llvm/IR/Instructions.h"
 #include "llvm/IR/IntrinsicInst.h"
+#include "llvm/IR/Intrinsics.h"
 #include "llvm/IR/LLVMContext.h"
 #include "llvm/IR/Metadata.h"
 #include "llvm/IR/Module.h"
@@ -523,9 +524,10 @@ class Slice {
 public:
   Slice() = default;
 
-  Slice(uint64_t BeginOffset, uint64_t EndOffset, Use *U, bool IsSplittable)
+  Slice(uint64_t BeginOffset, uint64_t EndOffset, Use *U, bool IsSplittable,
+        Value *ProtectedFieldDisc)
       : BeginOffset(BeginOffset), EndOffset(EndOffset),
-        UseAndIsSplittable(U, IsSplittable) {}
+        UseAndIsSplittable(U, IsSplittable), ProtectedFieldDisc(ProtectedFieldDisc) {}
 
   uint64_t beginOffset() const { return BeginOffset; }
   uint64_t endOffset() const { return EndOffset; }
@@ -538,6 +540,10 @@ class Slice {
   bool isDead() const { return getUse() == nullptr; }
   void kill() { UseAndIsSplittable.setPointer(nullptr); }
 
+  // When this access is via an llvm.protected.field.ptr intrinsic, contains
+  // the second argument to the intrinsic, the discriminator.
+  Value *ProtectedFieldDisc;
+
   /// Support for ordering ranges.
   ///
   /// This provides an ordering over ranges such that start offsets are
@@ -631,6 +637,9 @@ class AllocaSlices {
   /// Access the dead users for this alloca.
   ArrayRef<Instruction *> getDeadUsers() const { return DeadUsers; }
 
+  /// Access the PFP users for this alloca.
+  ArrayRef<IntrinsicInst *> getPFPUsers() const { return PFPUsers; }
+
   /// Access Uses that should be dropped if the alloca is promotable.
   ArrayRef<Use *> getDeadUsesIfPromotable() const {
     return DeadUseIfPromotable;
@@ -691,6 +700,10 @@ class AllocaSlices {
   /// they come from outside of the allocated space.
   SmallVector<Instruction *, 8> DeadUsers;
 
+  /// Users that are llvm.protected.field.ptr intrinsics. These will be RAUW'd
+  /// to their first argument if we rewrite the alloca.
+  SmallVector<IntrinsicInst *, 0> PFPUsers;
+
   /// Uses which will become dead if can promote the alloca.
   SmallVector<Use *, 8> DeadUseIfPromotable;
 
@@ -1064,7 +1077,8 @@ class AllocaSlices::SliceBuilder : public PtrUseVisitor<SliceBuilder> {
       EndOffset = AllocSize;
     }
 
-    AS.Slices.push_back(Slice(BeginOffset, EndOffset, U, IsSplittable));
+    AS.Slices.push_back(
+        Slice(BeginOffset, EndOffset, U, IsSplittable, ProtectedFieldDisc));
   }
 
   void visitBitCastInst(BitCastInst &BC) {
@@ -1274,6 +1288,9 @@ class AllocaSlices::SliceBuilder : public PtrUseVisitor<SliceBuilder> {
       return;
     }
 
+    if (II.getIntrinsicID() == Intrinsic::protected_field_ptr)
+      AS.PFPUsers.push_back(&II);
+
     Base::visitIntrinsicInst(II);
   }
 
@@ -4682,7 +4699,7 @@ bool SROA::presplitLoadsAndStores(AllocaInst &AI, AllocaSlices &AS) {
       NewSlices.push_back(
           Slice(BaseOffset + PartOffset, BaseOffset + PartOffset + PartSize,
                 &PLoad->getOperandUse(PLoad->getPointerOperandIndex()),
-                /*IsSplittable*/ false));
+                /*IsSplittable*/ false, nullptr));
       LLVM_DEBUG(dbgs() << "    new slice [" << NewSlices.back().beginOffset()
                         << ", " << NewSlices.back().endOffset()
                         << "): " << *PLoad << "\n");
@@ -4838,10 +4855,12 @@ bool SROA::presplitLoadsAndStores(AllocaInst &AI, AllocaSlices &AS) {
                                  LLVMContext::MD_access_group});
 
       // Now build a new slice for the alloca.
+      // ProtectedFieldDisc==nullptr is a lie, but it doesn't matter because we
+      // already determined that all accesses are consistent.
       NewSlices.push_back(
           Slice(BaseOffset + PartOffset, BaseOffset + PartOffset + PartSize,
                 &PStore->getOperandUse(PStore->getPointerOperandIndex()),
-                /*IsSplittable*/ false));
+                /*IsSplittable*/ false, nullptr));
       LLVM_DEBUG(dbgs() << "    new slice [" << NewSlices.back().beginOffset()
                         << ", " << NewSlices.back().endOffset()
                         << "): " << *PStore << "\n");
@@ -5618,6 +5637,32 @@ SROA::runOnAlloca(AllocaInst &AI) {
     return {Changed, CFGChanged};
   }
 
+  for (auto &P : AS.partitions()) {
+    std::optional<Value *> ProtectedFieldDisc;
+    // For now, we can't split if a field is accessed both via protected
+    // field and not.
+    for (Slice &S : P) {
+      if (auto *II = dyn_cast<IntrinsicInst>(S.getUse()->getUser()))
+        if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
+            II->getIntrinsicID() == Intrinsic::lifetime_end)
+          continue;
+      if (!ProtectedFieldDisc)
+        ProtectedFieldDisc = S.ProtectedFieldDisc;
+      if (*ProtectedFieldDisc != S.ProtectedFieldDisc)
+        return {Changed, CFGChanged};
+    }
+    for (Slice *S : P.splitSliceTails()) {
+      if (auto *II = dyn_cast<IntrinsicInst>(S->getUse()->getUser()))
+        if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
+            II->getIntrinsicID() == Intrinsic::lifetime_end)
+          continue;
+      if (!ProtectedFieldDisc)
+        ProtectedFieldDisc = S->ProtectedFieldDisc;
+      if (*ProtectedFieldDisc != S->ProtectedFieldDisc)
+        return {Changed, CFGChanged};
+    }
+  }
+
   // Delete all the dead users of this alloca before splitting and rewriting it.
   for (Instruction *DeadUser : AS.getDeadUsers()) {
     // Free up everything used by this instruction.
@@ -5635,6 +5680,12 @@ SROA::runOnAlloca(AllocaInst &AI) {
     clobberUse(*DeadOp);
     Changed = true;
   }
+  for (IntrinsicInst *PFPUser : AS.getPFPUsers()) {
+    PFPUser->replaceAllUsesWith(PFPUser->getArgOperand(0));
+
+    DeadInsts.push_back(PFPUser);
+    Changed = true;
+  }
 
   // No slices to split. Leave the dead alloca for a later pass to clean up.
   if (AS.begin() == AS.end())
diff --git a/llvm/test/Transforms/SROA/protected-field-pointer.ll b/llvm/test/Transforms/SROA/protected-field-pointer.ll
new file mode 100644
index 0000000000000..49a88cbe35629
--- /dev/null
+++ b/llvm/test/Transforms/SROA/protected-field-pointer.ll
@@ -0,0 +1,41 @@
+; RUN: opt -passes=sroa -S < %s | FileCheck %s
+
+target triple = "aarch64-unknown-linux-gnu"
+
+; CHECK: define void @slice
+define void @slice(ptr %ptr1, ptr %ptr2, ptr %out1, ptr %out2) {
+  %alloca = alloca { ptr, ptr }
+
+  %protptrptr1.1 = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
+  store ptr %ptr1, ptr %protptrptr1.1
+  %protptrptr1.2 = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
+  %ptr1a = load ptr, ptr %protptrptr1.2
+
+  %gep = getelementptr { ptr, ptr }, ptr %alloca, i64 0, i32 1
+  %protptrptr2.1 = call ptr @llvm.protected.field.ptr(ptr %gep, i64 2, i1 true)
+  store ptr %ptr2, ptr %protptrptr2.1
+  %protptrptr2.2 = call ptr @llvm.protected.field.ptr(ptr %gep, i64 2, i1 true)
+  %ptr2a = load ptr, ptr %protptrptr2.2
+
+  ; CHECK-NEXT: store ptr %ptr1, ptr %out1, align 8
+  store ptr %ptr1a, ptr %out1
+  ; CHECK-NEXT: store ptr %ptr2, ptr %out2, align 8
+  store ptr %ptr2a, ptr %out2
+  ret void
+}
+
+; CHECK: define ptr @mixed
+define ptr @mixed(ptr %ptr) {
+  ; CHECK-NEXT: %alloca = alloca ptr, align 8
+  %alloca = alloca ptr
+
+  ; CHECK-NEXT: store ptr %ptr, ptr %alloca, align 8
+  store ptr %ptr, ptr %alloca
+  ; CHECK-NEXT: %protptrptr1.2 = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
+  %protptrptr1.2 = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
+  ; CHECK-NEXT: %ptr1a = load ptr, ptr %protptrptr1.2, align 8
+  %ptr1a = load ptr, ptr %protptrptr1.2
+
+  ; CHECK-NEXT: ret ptr %ptr
+  ret ptr %ptr
+}
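
For contrast, a sketch of a case the new consistency check in runOnAlloca rejects (hypothetical IR, not part of the patch): when slices of the same partition carry different discriminators, SROA returns early and leaves the alloca and its intrinsics untouched, much as in the @mixed test above.

  %alloca = alloca ptr
  %prot.a = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 1, i1 true)
  store ptr %v, ptr %prot.a
  %prot.b = call ptr @llvm.protected.field.ptr(ptr %alloca, i64 2, i1 true)
  %w = load ptr, ptr %prot.b
  ; Discriminators 1 and 2 disagree within one partition, so the rewrite
  ; is skipped and both intrinsics survive.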

@llvmbot
Member

llvmbot commented Aug 1, 2025

@llvm/pr-subscribers-llvm-transforms


github-actions bot commented Aug 1, 2025

⚠️ C/C++ code formatter, clang-format found issues in your code. ⚠️

You can test this locally with the following command:
git-clang-format --diff HEAD~1 HEAD --extensions h,cpp -- llvm/include/llvm/Analysis/PtrUseVisitor.h llvm/lib/Analysis/PtrUseVisitor.cpp llvm/lib/Transforms/Scalar/SROA.cpp
View the diff from clang-format here.
diff --git a/llvm/lib/Analysis/PtrUseVisitor.cpp b/llvm/lib/Analysis/PtrUseVisitor.cpp
index 59a09c4ea..0a79f8419 100644
--- a/llvm/lib/Analysis/PtrUseVisitor.cpp
+++ b/llvm/lib/Analysis/PtrUseVisitor.cpp
@@ -21,9 +21,9 @@ void detail::PtrUseVisitorBase::enqueueUsers(Value &I) {
   for (Use &U : I.uses()) {
     if (VisitedUses.insert(&U).second) {
       UseToVisit NewU = {
-        UseToVisit::UseAndIsOffsetKnownPair(&U, IsOffsetKnown),
-        Offset,
-        ProtectedFieldDisc,
+          UseToVisit::UseAndIsOffsetKnownPair(&U, IsOffsetKnown),
+          Offset,
+          ProtectedFieldDisc,
       };
       Worklist.push_back(std::move(NewU));
     }
diff --git a/llvm/lib/Transforms/Scalar/SROA.cpp b/llvm/lib/Transforms/Scalar/SROA.cpp
index c212c4f45..54c6edbf1 100644
--- a/llvm/lib/Transforms/Scalar/SROA.cpp
+++ b/llvm/lib/Transforms/Scalar/SROA.cpp
@@ -527,7 +527,8 @@ public:
   Slice(uint64_t BeginOffset, uint64_t EndOffset, Use *U, bool IsSplittable,
         Value *ProtectedFieldDisc)
       : BeginOffset(BeginOffset), EndOffset(EndOffset),
-        UseAndIsSplittable(U, IsSplittable), ProtectedFieldDisc(ProtectedFieldDisc) {}
+        UseAndIsSplittable(U, IsSplittable),
+        ProtectedFieldDisc(ProtectedFieldDisc) {}
 
   uint64_t beginOffset() const { return BeginOffset; }
   uint64_t endOffset() const { return EndOffset; }

pcc added a commit to pcc/llvm-project that referenced this pull request Aug 1, 2025
When an alloca slice's users include llvm.protected.field.ptr intrinsics
and their discriminators are consistent, drop the intrinsics in order
to avoid unnecessary pointer sign and auth operations.

Pull Request: llvm#151650
Contributor

Use update_test_checks.py.

Please also add a test where the alloca is split but not promoted.

Contributor Author

Done

@@ -0,0 +1,41 @@
; RUN: opt -passes=sroa -S < %s | FileCheck %s

target triple = "aarch64-unknown-linux-gnu"
Contributor

Is this triple necessary? If not, drop it, otherwise add REQUIRES.

Contributor Author

Removed

Created using spr 1.3.6-beta.1
Labels: llvm:analysis, llvm:transforms
3 participants