[X86] When lowering v1i1/v2i1/v4i1/v8i1 load/store with avx512f, but not avx512dq...
author    Craig Topper <craig.topper@intel.com>
          Sat, 12 Jan 2019 02:22:10 +0000 (02:22 +0000)
committer Craig Topper <craig.topper@intel.com>
          Sat, 12 Jan 2019 02:22:10 +0000 (02:22 +0000)
commit    bf61525e8c99de876c8d9c0a295d2e9319a39a42
tree      4bbc1856ea062f8709ece3d401c8e8e1101923f3
parent    8695e6dfc43d839b270ab95b57c1548d23b74a5f
[X86] When lowering v1i1/v2i1/v4i1/v8i1 load/store with avx512f, but not avx512dq, use v16i1 as the intermediate mask type instead of v8i1.

We still use i8 for the load/store type, so we need to convert to/from i16 around the mask type.
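
As an illustration only (not the patch itself), here is a minimal SelectionDAG-style sketch of the store direction. The helper name lowerMaskStoreSketch is hypothetical; the node kinds and SelectionDAG calls are real:

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using namespace llvm;

  // Hypothetical sketch: store a v8i1 mask through a v16i1 intermediate.
  // The mask is widened to v16i1, bitcast to i16, then truncated to the
  // i8 type actually used for the memory access.
  static SDValue lowerMaskStoreSketch(StoreSDNode *St, SelectionDAG &DAG) {
    SDLoc DL(St);
    SDValue Mask = St->getValue(); // v8i1 (narrower mask types widened first)
    // Widen v8i1 -> v16i1 by inserting into an undef vector at index 0.
    SDValue Wide = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, MVT::v16i1,
                               DAG.getUNDEF(MVT::v16i1), Mask,
                               DAG.getIntPtrConstant(0, DL));
    // Bitcast v16i1 -> i16, then truncate to the i8 memory type.
    SDValue Bits = DAG.getNode(ISD::TRUNCATE, DL, MVT::i8,
                               DAG.getBitcast(MVT::i16, Wide));
    return DAG.getStore(St->getChain(), DL, Bits, St->getBasePtr(),
                        St->getMemOperand());
  }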

By doing this we get an i8->i16 extload, which we can then pattern match to a KMOVW if the access is aligned.
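
For the load direction, a similarly hedged sketch (same hypothetical naming and includes as the store sketch above; the i8->i16 extload is the part that later matches KMOVW when the access is aligned):

  // Hypothetical sketch: load an i8 from memory, extend in-register to
  // i16, and reinterpret the bits as a v16i1 mask.
  static SDValue lowerMaskLoadSketch(LoadSDNode *Ld, SelectionDAG &DAG) {
    SDLoc DL(Ld);
    // i8 memory type, any-extended to i16; with an aligned access this
    // i8->i16 extload is what pattern matches to KMOVW.
    SDValue NewLd = DAG.getExtLoad(ISD::EXTLOAD, DL, MVT::i16,
                                   Ld->getChain(), Ld->getBasePtr(),
                                   MVT::i8, Ld->getMemOperand());
    // Bitcast the 16 bits to a v16i1 mask and take the low v8i1.
    // (Chain plumbing for the new load is omitted for brevity.)
    return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, MVT::v8i1,
                       DAG.getBitcast(MVT::v16i1, NewLd),
                       DAG.getIntPtrConstant(0, DL));
  }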

llvm-svn: 350989
llvm/lib/Target/X86/X86ISelLowering.cpp
llvm/lib/Target/X86/X86InstrAVX512.td
llvm/test/CodeGen/X86/avx512-extract-subvector-load-store.ll
llvm/test/CodeGen/X86/avx512-intrinsics-upgrade.ll
llvm/test/CodeGen/X86/avx512-mask-op.ll
llvm/test/CodeGen/X86/avx512-select.ll
llvm/test/CodeGen/X86/vector-sext-widen.ll
llvm/test/CodeGen/X86/vector-sext.ll