[KnownBits] Make shl/lshr/ashr implementations optimal
author Nikita Popov <npopov@redhat.com>
Mon, 15 May 2023 15:38:49 +0000 (17:38 +0200)
committer Nikita Popov <npopov@redhat.com>
Tue, 16 May 2023 07:44:26 +0000 (09:44 +0200)
commit 9d73a8bdc66496b673c11e991fd9cf0cba0a1bff
tree 2238ca700a6f862ca5fd40ca343cc27ccbc69cc2
parent d187ceee3b400b8b235630c6cddf64bf517620c7
[KnownBits] Make shl/lshr/ashr implementations optimal

The shift implementations were suboptimal in the case where the
maximum shift amount was >= the bit width. In that case we should
still use the usual code, with the maximum shift amount clamped to
BitWidth-1, rather than giving up entirely.
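
A minimal sketch of the clamping idea (KB and shiftKnownBits are
hypothetical stand-ins, not LLVM's actual KnownBits API; the real code
also only considers shift amounts consistent with the known bits of
the shift operand, rather than a contiguous range):

    #include <algorithm>
    #include <cstdint>

    // Simplified model: Zero/One are masks of bits known to be 0/1.
    struct KB {
      uint64_t Zero = 0;
      uint64_t One = 0;
    };

    // Intersect the known bits computed for every feasible shift
    // amount. Rather than giving up when MaxShift >= BitWidth, clamp
    // it to BitWidth-1; larger amounts produce poison and can be
    // ignored.
    KB shiftKnownBits(KB LHS, unsigned MinShift, unsigned MaxShift,
                      unsigned BitWidth,
                      KB (*ShiftByConst)(KB, unsigned)) {
      MaxShift = std::min(MaxShift, BitWidth - 1);
      KB Result;
      Result.Zero = ~0ULL; // start from "everything known"...
      Result.One = ~0ULL;  // ...and intersect away per amount
      for (unsigned Amt = MinShift; Amt <= MaxShift; ++Amt) {
        KB K = ShiftByConst(LHS, Amt);
        Result.Zero &= K.Zero; // keep only facts true for every amount
        Result.One &= K.One;
      }
      return Result;
    }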

Additionally, the shl/lshr implementations had a bug where the
known-zero bits introduced by each individual shift amount were not
set. I think after these changes, we'll be able to drop some of the
code in ValueTracking which *also* evaluates all possible shift
amounts and has been papering over this issue.
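
In the same simplified model as above, a sketch of the per-amount
helper for shl (shlByConst is a hypothetical name); the fix is that
the low bits vacated by the shift are now also marked known-zero:

    // Known bits of (X << Amt) given the known bits of X, for
    // Amt < 64 in this 64-bit model.
    KB shlByConst(KB LHS, unsigned Amt) {
      KB K;
      K.Zero = LHS.Zero << Amt;
      K.One = LHS.One << Amt;
      // The previously missing part: the low Amt bits vacated by the
      // shift are always zero, so they are known-zero for this amount.
      if (Amt != 0)
        K.Zero |= (1ULL << Amt) - 1;
      return K;
    }

The lshr case is analogous, with the vacated high bits marked
known-zero instead.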

For the "all poison" case I've opted to return an unknown value for
now. It would be better to return zero, but this has fairly
substantial test fallout, so I figured it's best to not mix it into
this change. (The "correct" return value would be a conflict, but
given that a lot of our APIs assert conflict-freedom, that's probably
not the best idea to actually return.)
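
In the same simplified model, a sketch of the all-poison handling:
when even the minimum shift amount is >= the bit width, the loop in
the earlier sketch would not execute at all, so this wrapper
(hypothetical name) returns an unknown value instead:

    // If every feasible shift amount is >= BitWidth, the result is
    // poison for all of them; return an unknown value rather than the
    // conflicting "everything known" state the bare loop would give.
    KB shiftKnownBitsChecked(KB LHS, unsigned MinShift,
                             unsigned MaxShift, unsigned BitWidth,
                             KB (*ShiftByConst)(KB, unsigned)) {
      if (MinShift >= BitWidth)
        return KB{}; // nothing known
      return shiftKnownBits(LHS, MinShift, MaxShift, BitWidth,
                            ShiftByConst);
    }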

Differential Revision: https://reviews.llvm.org/D150587
llvm/lib/Support/KnownBits.cpp
llvm/test/CodeGen/AMDGPU/amdgpu.private-memory.ll
llvm/test/Transforms/InstCombine/not-add.ll
llvm/unittests/Support/KnownBitsTest.cpp