[llvm-branch-commits] [llvm] e075123 - [CodeGen] Add "noreturn" attribute to _Unwind_Resume

2020-12-24 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-24T18:14:18+07:00
New Revision: e0751234ef0df733032b777ed0d993a490121855

URL: 
https://github.com/llvm/llvm-project/commit/e0751234ef0df733032b777ed0d993a490121855
DIFF: 
https://github.com/llvm/llvm-project/commit/e0751234ef0df733032b777ed0d993a490121855.diff

LOG: [CodeGen] Add "noreturn" attribute to _Unwind_Resume

Currently 'resume' is lowered to a call to _Unwind_Resume without the "noreturn"
attribute. Semantically, the _Unwind_Resume library call is expected to never
return and should be marked as such. Although I didn't find any change in the
behavior of existing tests, there will be a difference once
https://reviews.llvm.org/D79485 lands.
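
For illustration, a minimal C++ sketch of the pattern used here: emit the
libcall, mark it as non-returning, and terminate the block with 'unreachable'
(a non-returning call is not a terminator by itself). The helper name and the
IRBuilder plumbing are made up for the example; only setDoesNotReturn() mirrors
the actual change.

#include "llvm/IR/IRBuilder.h"

// Hypothetical helper: emit a call to _Unwind_Resume that is known to never
// return. Assumes the caller already declared the runtime function and
// positioned the builder at the end of the unwind block.
llvm::CallInst *emitUnwindResume(llvm::IRBuilder<> &Builder,
                                 llvm::FunctionCallee UnwindResumeFn,
                                 llvm::Value *ExnObj) {
  llvm::CallInst *CI = Builder.CreateCall(UnwindResumeFn, {ExnObj});
  CI->setDoesNotReturn();      // semantically _Unwind_Resume never returns
  Builder.CreateUnreachable(); // so the block must end in 'unreachable'
  return CI;
}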

I was not able to come up with a test case that does anything better than check
for the presence of the "noreturn" attribute. Please let me know if there is a
better way to test the change.

Reviewed By: xbolva00

Differential Revision: https://reviews.llvm.org/D93682

Added: 
llvm/test/CodeGen/Generic/dwarf_eh_resume.ll

Modified: 
llvm/lib/CodeGen/DwarfEHPrepare.cpp
llvm/test/CodeGen/AArch64/GlobalISel/irtranslator-invoke-probabilities.ll

Removed: 




diff  --git a/llvm/lib/CodeGen/DwarfEHPrepare.cpp 
b/llvm/lib/CodeGen/DwarfEHPrepare.cpp
index c75c957bff8a..c7048337cdf2 100644
--- a/llvm/lib/CodeGen/DwarfEHPrepare.cpp
+++ b/llvm/lib/CodeGen/DwarfEHPrepare.cpp
@@ -235,6 +235,7 @@ bool DwarfEHPrepare::InsertUnwindResumeCalls(Function &Fn) {
 CI->setCallingConv(TLI->getLibcallCallingConv(RTLIB::UNWIND_RESUME));
 
 // We never expect _Unwind_Resume to return.
+CI->setDoesNotReturn();
 new UnreachableInst(Ctx, UnwindBB);
 return true;
   }
@@ -260,6 +261,7 @@ bool DwarfEHPrepare::InsertUnwindResumeCalls(Function &Fn) {
   CI->setCallingConv(TLI->getLibcallCallingConv(RTLIB::UNWIND_RESUME));
 
   // We never expect _Unwind_Resume to return.
+  CI->setDoesNotReturn();
   new UnreachableInst(Ctx, UnwindBB);
   return true;
 }

diff  --git 
a/llvm/test/CodeGen/AArch64/GlobalISel/irtranslator-invoke-probabilities.ll 
b/llvm/test/CodeGen/AArch64/GlobalISel/irtranslator-invoke-probabilities.ll
index a10148f1ffdc..473216e9f170 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/irtranslator-invoke-probabilities.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/irtranslator-invoke-probabilities.ll
@@ -11,7 +11,7 @@ declare i32 @hoge(...)
 define void @pluto() align 2 personality i8* bitcast (i32 (...)* @hoge to i8*) 
{
 ; CHECK-LABEL: @pluto
 ; CHECK: bb.1.bb
-; CHECK: successors: %bb.2(0x40000000), %bb.3(0x40000000)
+; CHECK: successors: %bb.2(0x00000000), %bb.3(0x80000000)
 ; CHECK: EH_LABEL 
 ; CHECK: G_BR %bb.2
 

diff  --git a/llvm/test/CodeGen/Generic/dwarf_eh_resume.ll 
b/llvm/test/CodeGen/Generic/dwarf_eh_resume.ll
new file mode 100644
index 000000000000..b7bc17c46963
--- /dev/null
+++ b/llvm/test/CodeGen/Generic/dwarf_eh_resume.ll
@@ -0,0 +1,23 @@
+; RUN: llc %s -stop-after=irtranslator -o - | FileCheck %s
+
+declare i32 @hoge(...)
+
+; Check that 'resume' is lowered to _Unwind_Resume which is marked as 'noreturn'
+define void @pluto() align 2 personality i8* bitcast (i32 (...)* @hoge to i8*) 
{
+;CHECK: call void @_Unwind_Resume(i8* %exn.obj) [[A:#.*]]
+;CHECK: attributes [[A]] = { noreturn }
+bb:
+  invoke void @spam()
+  to label %bb1 unwind label %bb2
+
+bb1:  ; preds = %bb
+  ret void
+
+bb2:  ; preds = %bb
+  %tmp = landingpad { i8*, i32 }
+  cleanup
+  resume { i8*, i32 } %tmp
+
+}
+
+declare void @spam()





[llvm-branch-commits] [llvm] ce4413e - Moved dwarf_eh_resume.ll from Generic to X86 folder

2020-12-24 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-24T20:08:50+07:00
New Revision: ce4413e48941eeb85769e35b1a31112f39d9cc4c

URL: 
https://github.com/llvm/llvm-project/commit/ce4413e48941eeb85769e35b1a31112f39d9cc4c
DIFF: 
https://github.com/llvm/llvm-project/commit/ce4413e48941eeb85769e35b1a31112f39d9cc4c.diff

LOG: Moved dwarf_eh_resume.ll from Generic to X86 folder

Make test case x86 specific.

Reviewed By: xbolva00

Differential Revision: https://reviews.llvm.org/D93803

Added: 
llvm/test/CodeGen/X86/dwarf_eh_resume.ll

Modified: 


Removed: 
llvm/test/CodeGen/Generic/dwarf_eh_resume.ll



diff  --git a/llvm/test/CodeGen/Generic/dwarf_eh_resume.ll 
b/llvm/test/CodeGen/X86/dwarf_eh_resume.ll
similarity index 88%
rename from llvm/test/CodeGen/Generic/dwarf_eh_resume.ll
rename to llvm/test/CodeGen/X86/dwarf_eh_resume.ll
index b7bc17c46963..f44f0bc1e503 100644
--- a/llvm/test/CodeGen/Generic/dwarf_eh_resume.ll
+++ b/llvm/test/CodeGen/X86/dwarf_eh_resume.ll
@@ -1,4 +1,4 @@
-; RUN: llc %s -stop-after=irtranslator -o - | FileCheck %s
+; RUN: opt -mtriple=x86_64-linux-gnu -dwarfehprepare -S %s | FileCheck %s
 
 declare i32 @hoge(...)
 





[llvm-branch-commits] [llvm] 485cd4c - [NFC][Tests] Auto generate checks for llvm/test/Transforms/NaryReassociate/pr24301.ll using update_test_checks.py

2020-12-03 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-03T18:22:14+07:00
New Revision: 485cd4c52ed7662f844703fff273995b3b156327

URL: 
https://github.com/llvm/llvm-project/commit/485cd4c52ed7662f844703fff273995b3b156327
DIFF: 
https://github.com/llvm/llvm-project/commit/485cd4c52ed7662f844703fff273995b3b156327.diff

LOG: [NFC][Tests] Auto generate checks for 
llvm/test/Transforms/NaryReassociate/pr24301.ll using update_test_checks.py

Generate checks with update_test_checks.py in order to simplify upcoming 
updates.

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D92561

Added: 


Modified: 
llvm/test/Transforms/NaryReassociate/pr24301.ll

Removed: 




diff  --git a/llvm/test/Transforms/NaryReassociate/pr24301.ll 
b/llvm/test/Transforms/NaryReassociate/pr24301.ll
index 49a4d2863e13..2bcbc937d18f 100644
--- a/llvm/test/Transforms/NaryReassociate/pr24301.ll
+++ b/llvm/test/Transforms/NaryReassociate/pr24301.ll
@@ -1,8 +1,16 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
 ; RUN: opt < %s -nary-reassociate -S | FileCheck %s
 ; RUN: opt < %s -passes='nary-reassociate' -S | FileCheck %s
 
 define i32 @foo(i32 %tmp4) {
 ; CHECK-LABEL: @foo(
+; CHECK-NEXT:  entry:
+; CHECK-NEXT:[[TMP5:%.*]] = add i32 [[TMP4:%.*]], 8
+; CHECK-NEXT:[[TMP14:%.*]] = add i32 [[TMP5]], -128
+; CHECK-NEXT:[[TMP21:%.*]] = add i32 119, [[TMP4]]
+; CHECK-NEXT:[[TMP23:%.*]] = add i32 [[TMP21]], -128
+; CHECK-NEXT:ret i32 [[TMP23]]
+;
 entry:
   %tmp5 = add i32 %tmp4, 8
   %tmp13 = add i32 %tmp4, -128  ; deleted
@@ -10,6 +18,5 @@ entry:
   %tmp21 = add i32 119, %tmp4
   ; do not rewrite %tmp23 against %tmp13 because %tmp13 is already deleted
   %tmp23 = add i32 %tmp21, -128
-; CHECK: %tmp23 = add i32 %tmp21, -128
   ret i32 %tmp23
 }





[llvm-branch-commits] [llvm] 5c6dc7b - [NFC][Tests] Added one additional test case for the NaryReassociate pass.

2020-12-03 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-03T19:11:08+07:00
New Revision: 5c6dc7b882be747746211c021e9ebc41e8ceb2b0

URL: 
https://github.com/llvm/llvm-project/commit/5c6dc7b882be747746211c021e9ebc41e8ceb2b0
DIFF: 
https://github.com/llvm/llvm-project/commit/5c6dc7b882be747746211c021e9ebc41e8ceb2b0.diff

LOG: [NFC][Tests] Added one additional test case for the NaryReassociate pass.

A new test case is added. Variable names are changed to avoid the following
warning from update_test_checks.py:

WARNING: Change IR value name 'tmp5' to prevent possible conflict with scripted
FileCheck name.

Reviewed By: ebrevnov

Differential Revision: https://reviews.llvm.org/D92566

Added: 


Modified: 
llvm/test/Transforms/NaryReassociate/pr24301.ll

Removed: 




diff  --git a/llvm/test/Transforms/NaryReassociate/pr24301.ll 
b/llvm/test/Transforms/NaryReassociate/pr24301.ll
index 2bcbc937d18f..2eef29cb19a1 100644
--- a/llvm/test/Transforms/NaryReassociate/pr24301.ll
+++ b/llvm/test/Transforms/NaryReassociate/pr24301.ll
@@ -2,21 +2,42 @@
 ; RUN: opt < %s -nary-reassociate -S | FileCheck %s
 ; RUN: opt < %s -passes='nary-reassociate' -S | FileCheck %s
 
-define i32 @foo(i32 %tmp4) {
+define i32 @foo(i32 %t4) {
 ; CHECK-LABEL: @foo(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:[[TMP5:%.*]] = add i32 [[TMP4:%.*]], 8
-; CHECK-NEXT:[[TMP14:%.*]] = add i32 [[TMP5]], -128
-; CHECK-NEXT:[[TMP21:%.*]] = add i32 119, [[TMP4]]
-; CHECK-NEXT:[[TMP23:%.*]] = add i32 [[TMP21]], -128
-; CHECK-NEXT:ret i32 [[TMP23]]
+; CHECK-NEXT:[[T5:%.*]] = add i32 [[T4:%.*]], 8
+; CHECK-NEXT:[[T14:%.*]] = add i32 [[T5]], -128
+; CHECK-NEXT:[[T21:%.*]] = add i32 119, [[T4]]
+; CHECK-NEXT:[[T23:%.*]] = add i32 [[T21]], -128
+; CHECK-NEXT:ret i32 [[T23]]
 ;
 entry:
-  %tmp5 = add i32 %tmp4, 8
-  %tmp13 = add i32 %tmp4, -128  ; deleted
-  %tmp14 = add i32 %tmp13, 8; => %tmp5 + -128
-  %tmp21 = add i32 119, %tmp4
-  ; do not rewrite %tmp23 against %tmp13 because %tmp13 is already deleted
-  %tmp23 = add i32 %tmp21, -128
-  ret i32 %tmp23
+  %t5 = add i32 %t4, 8
+  %t13 = add i32 %t4, -128  ; deleted
+  %t14 = add i32 %t13, 8; => %t5 + -128
+  %t21 = add i32 119, %t4
+  ; do not rewrite %t23 against %t13 because %t13 is already deleted
+  %t23 = add i32 %t21, -128
+  ret i32 %t23
+}
+
+; This is essentially the same test as the previous one but the intermediate result (t14) has a use.
+define i32 @foo2(i32 %t4) {
+; CHECK-LABEL: @foo2(
+; CHECK-NEXT:  entry:
+; CHECK-NEXT:[[T5:%.*]] = add i32 [[T4:%.*]], 8
+; CHECK-NEXT:[[T14:%.*]] = add i32 [[T5]], -128
+; CHECK-NEXT:[[T21:%.*]] = add i32 119, [[T4]]
+; CHECK-NEXT:[[T23:%.*]] = add i32 [[T21]], -128
+; CHECK-NEXT:[[RES:%.*]] = add i32 [[T14]], [[T23]]
+; CHECK-NEXT:ret i32 [[RES]]
+;
+entry:
+  %t5 = add i32 %t4, 8
+  %t13 = add i32 %t4, -128  ; deleted
+  %t14 = add i32 %t13, 8; => %t5 + -128
+  %t21 = add i32 119, %t4
+  %t23 = add i32 %t21, -128
+  %res = add i32 %t14, %t23
+  ret i32 %res
 }





[llvm-branch-commits] [llvm] f61c29b - [NARY-REASSOCIATE] Simplify traversal logic by post deleting dead instructions

2020-12-04 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-04T16:17:50+07:00
New Revision: f61c29b3a725a620c67355a519788a96be5d5651

URL: 
https://github.com/llvm/llvm-project/commit/f61c29b3a725a620c67355a519788a96be5d5651
DIFF: 
https://github.com/llvm/llvm-project/commit/f61c29b3a725a620c67355a519788a96be5d5651.diff

LOG: [NARY-REASSOCIATE] Simplify traversal logic by post deleting dead 
instructions

Currently we delete optimized instructions as we go. That has several negative
consequences. First, it complicates the traversal logic itself. Second, if a
newly generated instruction has been deleted, the traversal is repeated from
scratch.

But the real motivation for the change is the upcoming support for min/max
reassociation. There we employ the SCEV expander to generate code. As a result,
newly generated instructions may not be inserted right before the original
instruction (because the SCEV expander may do hoisting), and there is no way to
know the 'next' instruction.
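
As a rough sketch of the collect-then-delete pattern described above (the
helper name is hypothetical; 'SE', 'TLI' and 'DeadInsts' stand for the usual
pass members), the batched deletion can let ScalarEvolution forget each value
right before it is erased:

#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Transforms/Utils/Local.h"

// Delete everything recorded during the traversal in one batch afterwards,
// instead of erasing instructions while still iterating over the block.
void flushDeadInsts(llvm::SmallVectorImpl<llvm::WeakTrackingVH> &DeadInsts,
                    llvm::ScalarEvolution *SE,
                    const llvm::TargetLibraryInfo *TLI) {
  llvm::RecursivelyDeleteTriviallyDeadInstructionsPermissive(
      DeadInsts, TLI, /*MSSAU=*/nullptr,
      [SE](llvm::Value *V) { SE->forgetValue(V); });
}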

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D88285

Added: 


Modified: 
llvm/lib/Transforms/Scalar/NaryReassociate.cpp
llvm/test/Transforms/NaryReassociate/pr24301.ll

Removed: 




diff  --git a/llvm/lib/Transforms/Scalar/NaryReassociate.cpp 
b/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
index adfd05020bc5..b1bfc03c1e73 100644
--- a/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
+++ b/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
@@ -231,36 +231,32 @@ bool NaryReassociatePass::doOneIteration(Function &F) {
   // Process the basic blocks in a depth first traversal of the dominator
   // tree. This order ensures that all bases of a candidate are in Candidates
   // when we process it.
+  SmallVector DeadInsts;
   for (const auto Node : depth_first(DT)) {
 BasicBlock *BB = Node->getBlock();
 for (auto I = BB->begin(); I != BB->end(); ++I) {
-  if (SE->isSCEVable(I->getType()) && isPotentiallyNaryReassociable(&*I)) {
-const SCEV *OldSCEV = SE->getSCEV(&*I);
-if (Instruction *NewI = tryReassociate(&*I)) {
-  Changed = true;
-  SE->forgetValue(&*I);
-  I->replaceAllUsesWith(NewI);
-  WeakVH NewIExist = NewI;
-  // If SeenExprs/NewIExist contains I's WeakTrackingVH/WeakVH, that
-  // entry will be replaced with nullptr if deleted.
-  RecursivelyDeleteTriviallyDeadInstructions(&*I, TLI);
-  if (!NewIExist) {
-// Rare occation where the new instruction (NewI) have been 
removed,
-// probably due to parts of the input code was dead from the
-// beginning, reset the iterator and start over from the beginning
-I = BB->begin();
-continue;
-  }
-  I = NewI->getIterator();
-}
-// Add the rewritten instruction to SeenExprs; the original instruction
-// is deleted.
-const SCEV *NewSCEV = SE->getSCEV(&*I);
-SeenExprs[NewSCEV].push_back(WeakTrackingVH(&*I));
+  Instruction *OrigI = &*I;
+
+  if (!SE->isSCEVable(OrigI->getType()) ||
+  !isPotentiallyNaryReassociable(OrigI))
+continue;
+
+  const SCEV *OrigSCEV = SE->getSCEV(OrigI);
+  if (Instruction *NewI = tryReassociate(OrigI)) {
+Changed = true;
+OrigI->replaceAllUsesWith(NewI);
+
+// Add 'OrigI' to the list of dead instructions.
+DeadInsts.push_back(WeakTrackingVH(OrigI));
+// Add the rewritten instruction to SeenExprs; the original
+// instruction is deleted.
+const SCEV *NewSCEV = SE->getSCEV(NewI);
+SeenExprs[NewSCEV].push_back(WeakTrackingVH(NewI));
+
 // Ideally, NewSCEV should equal OldSCEV because tryReassociate(I)
 // is equivalent to I. However, ScalarEvolution::getSCEV may
-// weaken nsw causing NewSCEV not to equal OldSCEV. For example, 
suppose
-// we reassociate
+// weaken nsw causing NewSCEV not to equal OldSCEV. For example,
+// suppose we reassociate
 //   I = &a[sext(i +nsw j)] // assuming sizeof(a[0]) = 4
 // to
 //   NewI = &a[sext(i)] + sext(j).
@@ -274,12 +270,19 @@ bool NaryReassociatePass::doOneIteration(Function &F) {
 // equivalence, we add I to SeenExprs[OldSCEV] as well so that we can
 // map both SCEV before and after tryReassociate(I) to I.
 //
-// This improvement is exercised in @reassociate_gep_nsw in 
nary-gep.ll.
-if (NewSCEV != OldSCEV)
-  SeenExprs[OldSCEV].push_back(WeakTrackingVH(&*I));
-  }
+// This improvement is exercised in @reassociate_gep_nsw in
+// nary-gep.ll.
+if (NewSCEV != OrigSCEV)
+  SeenExprs[OrigSCEV].push_back(WeakTrackingVH(NewI));
+  } else
+SeenExprs[OrigSCEV].push_back(WeakTrackingVH(OrigI));
 }
   }
+  // Delete all dead instructions from 'DeadInsts'.
+  // Please note ScalarEvolution is updat

[llvm-branch-commits] [llvm] 061cebb - [NFC][NARY-REASSOCIATE] Restructure code to avoid isPotentiallyReassociatable

2020-12-04 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-04T16:19:43+07:00
New Revision: 061cebb46f6c484322719f906a16ebc7a1bc5fff

URL: 
https://github.com/llvm/llvm-project/commit/061cebb46f6c484322719f906a16ebc7a1bc5fff
DIFF: 
https://github.com/llvm/llvm-project/commit/061cebb46f6c484322719f906a16ebc7a1bc5fff.diff

LOG: [NFC][NARY-REASSOCIATE] Restructure code to avoid 
isPotentiallyReassociatable

Currently we have to duplicate the same checks in isPotentiallyReassociatable
and tryReassociate. With simple patterns like add/mul this may not be a big
deal. But the situation gets much worse when I try to add support for min/max.
Min/max may be represented by several instructions and can take different
forms. In order to reduce the complexity of the upcoming min/max support, we
need to restructure the code a bit and avoid the mentioned code duplication.
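
A condensed, hypothetical sketch of the resulting single-dispatch shape (not
the actual pass code; the opcode-specific rewrites are elided): the opcode
filter and the SCEV query live only inside tryReassociate, which reports the
original SCEV through an out-parameter instead of requiring a separate
isPotentiallyNaryReassociable pre-check at the call site.

#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/IR/Instruction.h"

llvm::Instruction *tryReassociateSketch(llvm::Instruction *I,
                                        llvm::ScalarEvolution &SE,
                                        const llvm::SCEV *&OrigSCEV) {
  OrigSCEV = nullptr;
  if (!SE.isSCEVable(I->getType()))
    return nullptr;
  switch (I->getOpcode()) {
  case llvm::Instruction::Add:
  case llvm::Instruction::Mul:
  case llvm::Instruction::GetElementPtr:
    OrigSCEV = SE.getSCEV(I);
    // ... opcode-specific reassociation would go here; return the new
    // instruction on success ...
    return nullptr;
  default:
    // Anything else is filtered out right here; OrigSCEV stays null, so the
    // caller knows not to record the instruction in SeenExprs.
    return nullptr;
  }
}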

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D88286

Added: 


Modified: 
llvm/include/llvm/Transforms/Scalar/NaryReassociate.h
llvm/lib/Transforms/Scalar/NaryReassociate.cpp

Removed: 




diff  --git a/llvm/include/llvm/Transforms/Scalar/NaryReassociate.h 
b/llvm/include/llvm/Transforms/Scalar/NaryReassociate.h
index 26f5fe185dd5..5fa7427b2603 100644
--- a/llvm/include/llvm/Transforms/Scalar/NaryReassociate.h
+++ b/llvm/include/llvm/Transforms/Scalar/NaryReassociate.h
@@ -114,7 +114,7 @@ class NaryReassociatePass : public 
PassInfoMixin<NaryReassociatePass> {
   bool doOneIteration(Function &F);
 
   // Reassociates I for better CSE.
-  Instruction *tryReassociate(Instruction *I);
+  Instruction *tryReassociate(Instruction *I, const SCEV *&OrigSCEV);
 
   // Reassociate GEP for better CSE.
   Instruction *tryReassociateGEP(GetElementPtrInst *GEP);

diff  --git a/llvm/lib/Transforms/Scalar/NaryReassociate.cpp 
b/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
index b1bfc03c1e73..bc1b58611dd1 100644
--- a/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
+++ b/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
@@ -213,18 +213,6 @@ bool NaryReassociatePass::runImpl(Function &F, 
AssumptionCache *AC_,
   return Changed;
 }
 
-// Explicitly list the instruction types NaryReassociate handles for now.
-static bool isPotentiallyNaryReassociable(Instruction *I) {
-  switch (I->getOpcode()) {
-  case Instruction::Add:
-  case Instruction::GetElementPtr:
-  case Instruction::Mul:
-return true;
-  default:
-return false;
-  }
-}
-
 bool NaryReassociatePass::doOneIteration(Function &F) {
   bool Changed = false;
   SeenExprs.clear();
@@ -236,13 +224,8 @@ bool NaryReassociatePass::doOneIteration(Function &F) {
 BasicBlock *BB = Node->getBlock();
 for (auto I = BB->begin(); I != BB->end(); ++I) {
   Instruction *OrigI = &*I;
-
-  if (!SE->isSCEVable(OrigI->getType()) ||
-  !isPotentiallyNaryReassociable(OrigI))
-continue;
-
-  const SCEV *OrigSCEV = SE->getSCEV(OrigI);
-  if (Instruction *NewI = tryReassociate(OrigI)) {
+  const SCEV *OrigSCEV = nullptr;
+  if (Instruction *NewI = tryReassociate(OrigI, OrigSCEV)) {
 Changed = true;
 OrigI->replaceAllUsesWith(NewI);
 
@@ -274,7 +257,7 @@ bool NaryReassociatePass::doOneIteration(Function &F) {
 // nary-gep.ll.
 if (NewSCEV != OrigSCEV)
   SeenExprs[OrigSCEV].push_back(WeakTrackingVH(NewI));
-  } else
+  } else if (OrigSCEV)
 SeenExprs[OrigSCEV].push_back(WeakTrackingVH(OrigI));
 }
   }
@@ -286,16 +269,26 @@ bool NaryReassociatePass::doOneIteration(Function &F) {
   return Changed;
 }
 
-Instruction *NaryReassociatePass::tryReassociate(Instruction *I) {
+Instruction *NaryReassociatePass::tryReassociate(Instruction * I,
+ const SCEV *&OrigSCEV) {
+
+  if (!SE->isSCEVable(I->getType()))
+return nullptr;
+
   switch (I->getOpcode()) {
   case Instruction::Add:
   case Instruction::Mul:
+OrigSCEV = SE->getSCEV(I);
 return tryReassociateBinaryOp(cast<BinaryOperator>(I));
   case Instruction::GetElementPtr:
+OrigSCEV = SE->getSCEV(I);
 return tryReassociateGEP(cast<GetElementPtrInst>(I));
   default:
-llvm_unreachable("should be filtered out by 
isPotentiallyNaryReassociable");
+return nullptr;
   }
+
+  llvm_unreachable("should not be reached");
+  return nullptr;
 }
 
 static bool isGEPFoldable(GetElementPtrInst *GEP,





[llvm-branch-commits] [llvm] 2d1b024 - [DSE][NFC] Need to be careful mixing signed and unsigned types

2020-12-08 Thread Evgeniy Brevnov via llvm-branch-commits

Author: Evgeniy Brevnov
Date: 2020-12-08T16:53:37+07:00
New Revision: 2d1b024d06b2a81f1c35193a998a864be09e2ae8

URL: 
https://github.com/llvm/llvm-project/commit/2d1b024d06b2a81f1c35193a998a864be09e2ae8
DIFF: 
https://github.com/llvm/llvm-project/commit/2d1b024d06b2a81f1c35193a998a864be09e2ae8.diff

LOG: [DSE][NFC] Need to be careful mixing signed and unsigned types

Currently, in some places we use a signed type to represent the size of an
access and put in explicit casts from unsigned to signed.
For example: int64_t EarlierSize = int64_t(Loc.Size.getValue());

Even though it doesn't lose bits (immediately), it may overflow and we end up
with a negative size. Potentially that causes later code to work incorrectly. A
simple example is a check that the size is not negative.

I think it would be safer and clearer if we used an unsigned type for the size
and handled it appropriately.
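
A tiny standalone example of the pitfall (the value is made up): casting a size
above INT64_MAX to a signed 64-bit type yields a negative number on typical
two's-complement targets, so a later "size is non-negative" style check
silently misbehaves.

#include <cstdint>
#include <iostream>

int main() {
  uint64_t LocSize = 0x9000000000000000ULL; // hypothetical access size > INT64_MAX
  int64_t EarlierSize = int64_t(LocSize);   // wraps around to a negative value
  std::cout << "EarlierSize = " << EarlierSize << '\n';
  std::cout << "negative? " << (EarlierSize < 0) << '\n'; // prints 1
  return 0;
}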

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D92648

Added: 


Modified: 
llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp

Removed: 




diff  --git a/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp 
b/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
index 9499e892871f..e4b8980b01e8 100644
--- a/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
+++ b/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
@@ -1071,8 +1071,8 @@ static bool handleEndBlock(BasicBlock &BB, AliasAnalysis 
*AA,
 }
 
 static bool tryToShorten(Instruction *EarlierWrite, int64_t &EarlierOffset,
- int64_t &EarlierSize, int64_t LaterOffset,
- int64_t LaterSize, bool IsOverwriteEnd) {
+ uint64_t &EarlierSize, int64_t LaterOffset,
+ uint64_t LaterSize, bool IsOverwriteEnd) {
   // TODO: base this on the target vector size so that if the earlier
   // store was too small to get vector writes anyway then its likely
   // a good idea to shorten it
@@ -1127,16 +1127,23 @@ static bool tryToShorten(Instruction *EarlierWrite, 
int64_t &EarlierOffset,
 
 static bool tryToShortenEnd(Instruction *EarlierWrite,
 OverlapIntervalsTy &IntervalMap,
-int64_t &EarlierStart, int64_t &EarlierSize) {
+int64_t &EarlierStart, uint64_t &EarlierSize) {
   if (IntervalMap.empty() || !isShortenableAtTheEnd(EarlierWrite))
 return false;
 
   OverlapIntervalsTy::iterator OII = --IntervalMap.end();
   int64_t LaterStart = OII->second;
-  int64_t LaterSize = OII->first - LaterStart;
+  uint64_t LaterSize = OII->first - LaterStart;
 
-  if (LaterStart > EarlierStart && LaterStart < EarlierStart + EarlierSize &&
-  LaterStart + LaterSize >= EarlierStart + EarlierSize) {
+  assert(OII->first - LaterStart >= 0 && "Size expected to be positive");
+
+  if (LaterStart > EarlierStart &&
+  // Note: "LaterStart - EarlierStart" is known to be positive due to
+  // preceding check.
+  (uint64_t)(LaterStart - EarlierStart) < EarlierSize &&
+  // Note: "EarlierSize - (uint64_t)(LaterStart - EarlierStart)" is known 
to
+  // be non negative due to preceding checks.
+  LaterSize >= EarlierSize - (uint64_t)(LaterStart - EarlierStart)) {
 if (tryToShorten(EarlierWrite, EarlierStart, EarlierSize, LaterStart,
  LaterSize, true)) {
   IntervalMap.erase(OII);
@@ -1148,16 +1155,23 @@ static bool tryToShortenEnd(Instruction *EarlierWrite,
 
 static bool tryToShortenBegin(Instruction *EarlierWrite,
   OverlapIntervalsTy &IntervalMap,
-  int64_t &EarlierStart, int64_t &EarlierSize) {
+  int64_t &EarlierStart, uint64_t &EarlierSize) {
   if (IntervalMap.empty() || !isShortenableAtTheBeginning(EarlierWrite))
 return false;
 
   OverlapIntervalsTy::iterator OII = IntervalMap.begin();
   int64_t LaterStart = OII->second;
-  int64_t LaterSize = OII->first - LaterStart;
+  uint64_t LaterSize = OII->first - LaterStart;
+
+  assert(OII->first - LaterStart >= 0 && "Size expected to be positive");
 
-  if (LaterStart <= EarlierStart && LaterStart + LaterSize > EarlierStart) {
-assert(LaterStart + LaterSize < EarlierStart + EarlierSize &&
+  if (LaterStart <= EarlierStart &&
+  // Note: "EarlierStart - LaterStart" is known to be non negative due to
+  // preceding check.
+  LaterSize > (uint64_t)(EarlierStart - LaterStart)) {
+// Note: "LaterSize - (uint64_t)(EarlierStart - LaterStart)" is known to be
+// positive due to preceding checks.
+assert(LaterSize - (uint64_t)(EarlierStart - LaterStart) < EarlierSize &&
"Should have been handled as OW_Complete");
 if (tryToShorten(EarlierWrite, EarlierStart, EarlierSize, LaterStart,
  LaterSize, false)) {
@@ -1179,7 +1193,7 @@ static bool removePartiallyOverlappedStores(const 
DataLayout &DL