[RISCV] Add IR intrinsics for vmsge(u).vv/vx/vi.
author    Craig Topper <craig.topper@sifive.com>
          Thu, 22 Apr 2021 17:29:36 +0000 (10:29 -0700)
committer Craig Topper <craig.topper@sifive.com>
          Thu, 22 Apr 2021 17:44:38 +0000 (10:44 -0700)
commit    e01c419ecdf5511c550c3ec9d9c9dd132b480e88
tree      447ca04841acab4831f3cdeab1db6d019472e546
parent    d77d56acfd48e8253a35d885db8daac78793313f

These instructions don't really exist in the V extension, but we have
ways to emulate them.

The .vv form swaps the operands and uses vmsle(u).vv. The .vi form
adjusts the immediate and uses vmsgt(u).vi when possible. For .vx we
need to use one of the multi-instruction sequences from the V
extension spec.
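The .vv and .vi rewrites rest on two comparison identities. A minimal sketch (hypothetical helper names, not from the patch) checks them on scalars:

```python
def vmsge_via_swap(a, b):
    # a >= b  <=>  b <= a  (the .vv case: swap operands, use vmsle)
    return b <= a

def vmsge_via_imm(a, imm):
    # a >= imm  <=>  a > imm - 1  (the .vi case: decrement the immediate;
    # only usable "when possible", i.e. when imm - 1 still encodes)
    return a > imm - 1

for a in range(-4, 5):
    for b in range(-4, 5):
        assert vmsge_via_swap(a, b) == (a >= b)
        assert vmsge_via_imm(a, b) == (a >= b)
```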

For unmasked vmsge(u).vx we use:
  vmslt{u}.vx vd, va, x; vmnand.mm vd, vd, vd
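The unmasked sequence works because nand-ing a mask with itself is a bitwise not, so vd ends up holding !(va < x) == (va >= x) per lane. A sketch modeling mask registers as bit lists (illustrative only, not the patch's code):

```python
def vmslt_vx(va, x):
    # one mask bit per element: 1 where element < x
    return [int(e < x) for e in va]

def vmnand_mm(a, b):
    return [1 - (p & q) for p, q in zip(a, b)]

va = [3, -1, 7, 5]
x = 5
vd = vmslt_vx(va, x)
vd = vmnand_mm(vd, vd)          # nand with itself == bitwise not
assert vd == [int(e >= x) for e in va]
```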

When the mask and maskedoff are the same value, we effectively have
vmsge{u}.vx v0, va, x, v0.t. This is the vd==v0 case, which requires
a temporary register, so we use:
  vmslt{u}.vx vt, va, x; vmandnot.mm vd, vd, vt
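In this sequence vmandnot computes v0 & ~(va < x): active lanes (v0 bit set) get va >= x, and inactive lanes get 0, which matches maskedoff since maskedoff == v0. A bit-list sketch (hypothetical helpers):

```python
def vmslt_vx(va, x):
    return [int(e < x) for e in va]

def vmandnot_mm(a, b):
    # a & ~b per bit
    return [p & (1 - q) for p, q in zip(a, b)]

va = [3, -1, 7, 5]
x  = 5
v0 = [1, 0, 1, 1]               # mask; also the maskedoff value here
vt = vmslt_vx(va, x)            # temporary, unmasked compare
vd = vmandnot_mm(v0, vt)        # v0 & ~(va < x): active lanes get va >= x
expected = [m & int(e >= x) for m, e in zip(v0, va)]
assert vd == expected
```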

For other masked cases we use this sequence:
  vmslt{u}.vx vd, va, x, v0.t; vmxor.mm vd, vd, v0
We trust that register allocation will prevent vd in vmslt{u}.vx
from being v0 since v0 is still needed by the vmxor.
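Here the masked vmslt writes (va < x) only in active lanes and leaves inactive lanes of vd (the maskedoff bits) alone; xor-ing with v0 then flips exactly the active lanes, turning va < x into va >= x while inactive lanes xor with 0 and stay untouched. A bit-list sketch (hypothetical helpers, assuming mask-undisturbed behavior for inactive lanes):

```python
def vmslt_vx_masked(vd_old, va, x, v0):
    # active lanes get (e < x); inactive lanes keep the old vd bit
    return [int(e < x) if m else old for old, e, m in zip(vd_old, va, v0)]

def vmxor_mm(a, b):
    return [p ^ q for p, q in zip(a, b)]

va        = [3, -1, 7, 5]
x         = 5
v0        = [1, 0, 1, 1]        # mask register
maskedoff = [0, 1, 0, 1]        # prior contents of vd (vd != v0 here)
vd = vmslt_vx_masked(maskedoff, va, x, v0)
vd = vmxor_mm(vd, v0)           # flip only the active lanes
expected = [int(e >= x) if m else old
            for old, e, m in zip(maskedoff, va, v0)]
assert vd == expected
```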

Differential Revision: https://reviews.llvm.org/D100925
llvm/include/llvm/IR/IntrinsicsRISCV.td
llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
llvm/test/CodeGen/RISCV/rvv/vmsge-rv32.ll [new file with mode: 0644]
llvm/test/CodeGen/RISCV/rvv/vmsge-rv64.ll [new file with mode: 0644]
llvm/test/CodeGen/RISCV/rvv/vmsgeu-rv32.ll [new file with mode: 0644]
llvm/test/CodeGen/RISCV/rvv/vmsgeu-rv64.ll [new file with mode: 0644]