GlobalISel: Reduce G_SHL width if source is extension
authorMatt Arsenault <Matthew.Arsenault@amd.com>
Sat, 15 Aug 2020 17:38:29 +0000 (13:38 -0400)
committerMatt Arsenault <Matthew.Arsenault@amd.com>
Mon, 24 Aug 2020 13:42:40 +0000 (09:42 -0400)
commite1644a377996565e119aa178f40c567b986a6203
tree7e4b9bac63ac60441938be2f7dc839dcdaa2bb61
parentc8d2b065b98fa91139cc7bb1fd1407f032ef252e
GlobalISel: Reduce G_SHL width if source is extension

shl ([sza]ext x), y => zext (shl x, y).

Turns expensive 64-bit shifts into 32-bit shifts if the shift does not
overflow the source type.

This is a port of an AMDGPU DAG combine added in
5fa289f0d8ff85b9e14d2f814a90761378ab54ae. InstCombine does this
already, but we need to do it again here to apply it to shifts
introduced for lowered getelementptrs. This will help matching
addressing modes that use 32-bit offsets in a future patch.

TableGen annoyingly assumes only a single match data operand, so
introduce a reusable struct. However, this still requires defining a
separate GIMatchData for every combine, which is still annoying.

Adds a morally equivalent function to the existing
getShiftAmountTy. Without this, we would have to repeatedly query the
legalizer info and guess at what type to use for the shift.
llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h
llvm/include/llvm/CodeGen/TargetLowering.h
llvm/include/llvm/Target/GlobalISel/Combine.td
llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
llvm/lib/Target/AMDGPU/SIISelLowering.cpp
llvm/lib/Target/AMDGPU/SIISelLowering.h
llvm/test/CodeGen/AMDGPU/GlobalISel/combine-shl-from-extend-narrow.postlegal.mir [new file with mode: 0644]
llvm/test/CodeGen/AMDGPU/GlobalISel/combine-shl-from-extend-narrow.prelegal.mir [new file with mode: 0644]
llvm/test/CodeGen/AMDGPU/GlobalISel/shl-ext-reduce.ll [new file with mode: 0644]