From: Quentin Colombet
Date: Mon, 20 Oct 2014 23:13:30 +0000 (+0000)
Subject: [X86] Fix a bug in the lowering of the mask of VSELECT.
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=06355199f153a62c390d1e49d96cec66bc9056d2;p=platform%2Fupstream%2Fllvm.git

[X86] Fix a bug in the lowering of the mask of VSELECT.

The X86 code that lowers VSELECT modifies the bits set in the mask of the
VSELECT when it knows the node can be lowered into a BLEND: only the high
bit of each mask element is relevant for a blend, so the mask is optimized
accordingly. However, when the mask is a compile-time constant, the
lowering is handled by the generic optimizer instead, and those
modifications produce a mask the generic optimizer does not expect, which
results in bad code.

This patch fixes that by preventing the optimization when the VSELECT will
be handled by the generic optimizer.

llvm-svn: 220242
---

diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index a8afe81..543a2fd 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -22598,7 +22598,12 @@ static SDValue PerformSELECTCombine(SDNode *N, SelectionDAG &DAG,
     TargetLowering::TargetLoweringOpt TLO(DAG, DCI.isBeforeLegalize(),
                                           DCI.isBeforeLegalizeOps());
     if (TLO.ShrinkDemandedConstant(Cond, DemandedMask) ||
-        TLI.SimplifyDemandedBits(Cond, DemandedMask, KnownZero, KnownOne, TLO))
+        (TLI.SimplifyDemandedBits(Cond, DemandedMask, KnownZero, KnownOne,
+                                  TLO) &&
+         // Don't optimize vector of constants. Those are handled by
+         // the generic code and all the bits must be properly set for
+         // the generic optimizer.
+         !ISD::isBuildVectorOfConstantSDNodes(TLO.New.getNode())))
       DCI.CommitTargetLoweringOpt(TLO);
   }

diff --git a/llvm/test/CodeGen/X86/vselect-avx.ll b/llvm/test/CodeGen/X86/vselect-avx.ll
new file mode 100644
index 0000000..2d7ccf3
--- /dev/null
+++ b/llvm/test/CodeGen/X86/vselect-avx.ll
@@ -0,0 +1,27 @@
+; RUN: llc %s -o - -mattr=+avx | FileCheck %s
+target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"
+target triple = "x86_64-apple-macosx"
+
+; For this test we used to optimize the
+; mask into because we thought
+; we would lower that into a blend where only the high bit is relevant.
+; However, since the whole mask is constant, this is simplified incorrectly
+; by the generic code, because it was expecting -1 in place of 2147483648.
+;
+; The problem does not occur without AVX, because vselect of v4i32 is not legal
+; nor custom.
+;
+;
+
+; CHECK-LABEL: test:
+; CHECK: vmovdqa {{.*#+}} xmm1 = [65533,124,125,14807]
+; CHECK: vmovdqa {{.*#+}} xmm1 = [65535,0,0,65535]
+; CHECK: ret
+define void @test(<4 x i16>* %a, <4 x i16>* %b) {
+body:
+  %predphi = select <4 x i1> , <4 x i16> , <4 x i16>
+  %predphi42 = select <4 x i1> , <4 x i16> , <4 x i16> zeroinitializer
+  store <4 x i16> %predphi, <4 x i16>* %a, align 8
+  store <4 x i16> %predphi42, <4 x i16>* %b, align 8
+  ret void
+}