author    Craig Topper <craig.topper@intel.com>
          Fri, 2 Aug 2019 23:43:53 +0000 (23:43 +0000)
committer Craig Topper <craig.topper@intel.com>
          Fri, 2 Aug 2019 23:43:53 +0000 (23:43 +0000)
commit    b1cfcd1a5667c55fbcd96fd4bd49db70ce393856
tree      1382a310a40e089287e463cd80c38b1392aee635
parent    52e6d52f10dcc2c7750f8c37d2a408219bda611b
[ScalarizeMaskedMemIntrin] Bitcast the mask to the scalar domain and use scalar bit tests for the branches for expandload/compressstore.

Same as what was done for gather/scatter/load/store in r367489.
Expandload/compressstore were delayed due to the lack of constant
mask handling, which has since been fixed.

llvm-svn: 367738
llvm/lib/CodeGen/ScalarizeMaskedMemIntrin.cpp
llvm/test/CodeGen/X86/masked_compressstore.ll
llvm/test/CodeGen/X86/masked_expandload.ll
llvm/test/CodeGen/X86/pr39666.ll
llvm/test/Transforms/ScalarizeMaskedMemIntrin/X86/expand-masked-compressstore.ll
llvm/test/Transforms/ScalarizeMaskedMemIntrin/X86/expand-masked-expandload.ll