[AMDGPU] Optimize waitcnt insertion for flat memory operations
author     Tony <Tony.Tye@amd.com>
           Fri, 16 Oct 2020 07:09:38 +0000 (07:09 +0000)
committer  Tony <Tony.Tye@amd.com>
           Tue, 20 Oct 2020 22:55:12 +0000 (22:55 +0000)
commit     1bc7bfffdbabffcdb43cc2829c551c33aed57742
tree       9c721a505b7e063ccaccd60e79deeba106b2094f
parent     1298252f80fec0cd77aabef5cb133e7b030852e4
[AMDGPU] Optimize waitcnt insertion for flat memory operations

Change waitcnt insertion to check the memory operands of flat memory
operations to determine whether they access VMEM, in the same way it
already checks whether they access LDS. This avoids inserting waits on
counters for address spaces that the operation does not access.
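
A minimal sketch of the per-operand check, written as a free function for
illustration; the helper name and includes are assumptions, while the
MachineMemOperand and AMDGPUAS APIs are standard LLVM:

  // Sketch: decide whether a FLAT instruction may access VMEM by looking
  // at the address spaces recorded on its memory operands.
  #include "AMDGPU.h"        // AMDGPUAS address-space enumerators
  #include "SIInstrInfo.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineMemOperand.h"
  #include <cassert>

  using namespace llvm;

  static bool mayAccessVMEMThroughFlat(const MachineInstr &MI) {
    assert(SIInstrInfo::isFLAT(MI));

    // No memory operands: conservatively assume VMEM may be accessed.
    if (MI.memoperands_empty())
      return true;

    // Only the LDS (LOCAL) and GDS (REGION) address spaces avoid VMEM;
    // anything else (FLAT, GLOBAL, PRIVATE, ...) implies a VMEM access.
    for (const MachineMemOperand *MemOp : MI.memoperands()) {
      unsigned AS = MemOp->getAddrSpace();
      if (AS != AMDGPUAS::LOCAL_ADDRESS && AS != AMDGPUAS::REGION_ADDRESS)
        return true;
    }
    return false;
  }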

In addition, only generate the pessimistic waitcnt 0 if a flat memory
operation appears to access both VMEM and LDS.
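
The sketch below shows how the two checks could be combined when scoring a
flat instruction; the companion helper mayAccessLDSThroughFlat and the
PendingFlat flag are illustrative names under assumed pass structure, not
necessarily the committed code:

  // Companion check (sketch): a flat op may access LDS if any memory
  // operand is in the LOCAL or generic FLAT address space, or if it has
  // no memory operands at all.
  static bool mayAccessLDSThroughFlat(const MachineInstr &MI);

  // Illustrative combiner: the pessimistic treatment (forcing a later
  // waitcnt 0) is retained only when both address spaces may be hit.
  static void updateFlatCounters(const MachineInstr &MI, bool &PendingFlat) {
    int FlatASCount = 0;
    if (mayAccessVMEMThroughFlat(MI))
      ++FlatASCount;   // VMEM side: vm_cnt / vs_cnt events
    if (mayAccessLDSThroughFlat(MI))
      ++FlatASCount;   // LDS side: lgkm_cnt event
    // Accessing both VMEM and LDS is the only case that still needs the
    // "wait for everything" behaviour.
    PendingFlat = FlatASCount > 1;
  }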

This benefits flat memory operations that explicitly specify the
address space as GLOBAL or LOCAL.

Differential Revision: https://reviews.llvm.org/D89618
46 files changed:
llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp
llvm/test/CodeGen/AMDGPU/GlobalISel/cvt_f32_ubyte.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/extractelement.i128.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/extractelement.i16.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/extractelement.i8.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/extractelement.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/fmed3.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/frem.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/insertelement.i16.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/insertelement.i8.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/lds-global-non-entry-func.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.atomic.dec.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.atomic.inc.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.div.fmas.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.div.scale.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.update.dpp.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/shl-ext-reduce.ll
llvm/test/CodeGen/AMDGPU/GlobalISel/zextload.ll
llvm/test/CodeGen/AMDGPU/bitreverse.ll
llvm/test/CodeGen/AMDGPU/copy-illegal-type.ll
llvm/test/CodeGen/AMDGPU/ctlz.ll
llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.global.ll
llvm/test/CodeGen/AMDGPU/fmax_legacy.f64.ll
llvm/test/CodeGen/AMDGPU/fmin_legacy.f64.ll
llvm/test/CodeGen/AMDGPU/frem.ll
llvm/test/CodeGen/AMDGPU/idot2.ll
llvm/test/CodeGen/AMDGPU/imm16.ll
llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll
llvm/test/CodeGen/AMDGPU/llvm.amdgcn.cvt.pkrtz.ll
llvm/test/CodeGen/AMDGPU/llvm.amdgcn.image.sample.d16.dim.ll
llvm/test/CodeGen/AMDGPU/llvm.cos.f16.ll
llvm/test/CodeGen/AMDGPU/llvm.sin.f16.ll
llvm/test/CodeGen/AMDGPU/load-lo16.ll
llvm/test/CodeGen/AMDGPU/lshr.v2i16.ll
llvm/test/CodeGen/AMDGPU/max.i16.ll
llvm/test/CodeGen/AMDGPU/saddo.ll
llvm/test/CodeGen/AMDGPU/shl.v2i16.ll
llvm/test/CodeGen/AMDGPU/shrink-add-sub-constant.ll
llvm/test/CodeGen/AMDGPU/sub.v2i16.ll
llvm/test/CodeGen/AMDGPU/trunc-combine.ll
llvm/test/CodeGen/AMDGPU/waitcnt-back-edge-loop.mir
llvm/test/CodeGen/AMDGPU/waitcnt-looptest.ll
llvm/test/CodeGen/AMDGPU/waitcnt-vscnt.ll
llvm/test/CodeGen/AMDGPU/waitcnt.mir
llvm/test/CodeGen/AMDGPU/widen-smrd-loads.ll