From: Chandler Carruth
Date: Tue, 8 Aug 2017 10:13:23 +0000 (+0000)
Subject: [PM] Fix a likely more critical infloop bug in the CGSCC pass manager.
X-Git-Tag: llvmorg-6.0.0-rc1~10635
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=6e35c31d2d2c1853113ca0526ceead85e9c8e1ac;p=platform%2Fupstream%2Fllvm.git

[PM] Fix a likely more critical infloop bug in the CGSCC pass manager.

This was just a bad oversight on my part. The code in question should
never have worked without this fix. But it turns out, there are
relatively few places that involve libfunctions that participate in a
single SCC, and unless they do, this happens to not matter. The effect
of not having this correct is that each time through this routine, the
edge from write_wrapper to write was toggled between a call edge and a
ref edge. First time through, it becomes a demoted call edge and is
turned into a ref edge. Next time it is a promoted call edge from a ref
edge. On, and on it goes forever.

I've added the asserts which should have always been here to catch
silly mistakes like this in the future as well as a test case that will
actually infloop without the fix.

The other (much scarier) infinite-inlining issue I think didn't
actually occur in practice, and I simply misdiagnosed this minor issue
as that much more scary issue. The other issue *is* still a real issue,
but I'm somewhat relieved that so far it hasn't happened in real-world
code yet...

llvm-svn: 310342
---

diff --git a/llvm/lib/Analysis/CGSCCPassManager.cpp b/llvm/lib/Analysis/CGSCCPassManager.cpp
index cb6d139c..b810695 100644
--- a/llvm/lib/Analysis/CGSCCPassManager.cpp
+++ b/llvm/lib/Analysis/CGSCCPassManager.cpp
@@ -421,7 +421,9 @@ LazyCallGraph::SCC &llvm::updateCGAndAnalysisManagerForFunctionPass(
     assert(E && "No function transformations should introduce *new* "
                 "call edges! Any new calls should be modeled as "
                 "promoted existing ref edges!");
-    RetainedEdges.insert(&CalleeN);
+    bool Inserted = RetainedEdges.insert(&CalleeN).second;
+    (void)Inserted;
+    assert(Inserted && "We should never visit a function twice.");
     if (!E->isCall())
       PromotedRefTargets.insert(&CalleeN);
   }
@@ -441,7 +443,9 @@ LazyCallGraph::SCC &llvm::updateCGAndAnalysisManagerForFunctionPass(
     assert(E && "No function transformations should introduce *new* ref "
                 "edges! Any new ref edges would require IPO which "
                 "function passes aren't allowed to do!");
-    RetainedEdges.insert(&RefereeN);
+    bool Inserted = RetainedEdges.insert(&RefereeN).second;
+    (void)Inserted;
+    assert(Inserted && "We should never visit a function twice.");
     if (E->isCall())
       DemotedCallTargets.insert(&RefereeN);
   };
@@ -449,7 +453,10 @@ LazyCallGraph::SCC &llvm::updateCGAndAnalysisManagerForFunctionPass(

   // Include synthetic reference edges to known, defined lib functions.
   for (auto *F : G.getLibFunctions())
-    VisitRef(*F);
+    // While the list of lib functions doesn't have repeats, don't re-visit
+    // anything handled above.
+    if (!Visited.count(F))
+      VisitRef(*F);

   // First remove all of the edges that are no longer present in this function.
   // We have to build a list of dead targets first and then remove them as the
diff --git a/llvm/test/Other/cgscc-libcall-update.ll b/llvm/test/Other/cgscc-libcall-update.ll
index e0833ca..072eec4 100644
--- a/llvm/test/Other/cgscc-libcall-update.ll
+++ b/llvm/test/Other/cgscc-libcall-update.ll
@@ -3,7 +3,10 @@
 ;
 ; Also check that it can handle inlining *removing* a libcall entirely.
 ;
-; RUN: opt -passes='cgscc(inline,function(instcombine))' -S < %s | FileCheck %s
+; Finally, we include some recursive patterns and forced analysis invalidation
+; that can trigger infinite CGSCC refinement if not handled correctly.
+;
+; RUN: opt -passes='cgscc(inline,function(instcombine,invalidate))' -S < %s | FileCheck %s

 define i8* @wibble(i8* %arg1, i8* %arg2) {
 ; CHECK-LABEL: define i8* @wibble(
@@ -59,3 +62,15 @@ entry:
   %shr = and i32 %and2, 65280
   ret i32 %shr
 }
+
+define i64 @write(i32 %i, i8* %p, i64 %j) {
+entry:
+  %val = call i64 @write_wrapper(i32 %i, i8* %p, i64 %j) noinline
+  ret i64 %val
+}
+
+define i64 @write_wrapper(i32 %i, i8* %p, i64 %j) {
+entry:
+  %val = call i64 @write(i32 %i, i8* %p, i64 %j) noinline
+  ret i64 %val
+}