Set nonzero bits from bitwise and operator in range-ops.
Now that nonzero bits are first-class citizens in the range, we can
keep better track of them in range-ops, especially the bitwise and
operator.
This patch sets the nonzero mask for the trivial case. In doing so,
I've removed some old dead code that was an attempt to keep better
track of masks.
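To illustrate the underlying bit arithmetic (not the actual range-op
code): a bit can only be set in the result of an AND if it may be set
in both operands, so the operands' nonzero-bits masks simply get ANDed
together.  A minimal standalone sketch, using plain integers and a
hypothetical and_nonzero_bits helper rather than GCC's wide_int and
irange types:

  #include <cstdint>
  #include <cassert>

  // Hypothetical helper: combine the nonzero-bits masks of the two
  // operands of a bitwise AND into a mask valid for the result.  Any
  // bit that is clear in either operand's mask must be clear in the
  // result, so the result mask is just the AND of the operand masks.
  static uint64_t
  and_nonzero_bits (uint64_t lhs_mask, uint64_t rhs_mask)
  {
    return lhs_mask & rhs_mask;
  }

  int
  main ()
  {
    // The trivial case: x & 0xff.  Even knowing nothing about x (mask
    // of -1, i.e. every bit may be nonzero), the result can only have
    // its low 8 bits set.
    uint64_t unknown = ~UINT64_C (0);
    assert (and_nonzero_bits (unknown, 0xff) == 0xff);
    return 0;
  }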
I'm sure there are tons of optimizations throughout range-ops that
could be implemented, especially in the op1_range methods, but those
always make my head hurt. I'll leave them to the smarter hackers
out there.
I've removed the restriction that nonzero bits can't be queried in
legacy mode.  That restriction was causing special-casing all over
the place, and lifting it cannot generate incorrect code: we just
silently drop the nonzero bits to -1 in some of the legacy code.
The end result is that VRP1, and other users of legacy mode, may
not benefit from these improvements.
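For context, a nonzero-bits mask of -1 (all bits set) just means that
every bit may be nonzero, i.e. no information, so dropping to it can
never be wrong; it only loses precision.  A rough standalone sketch of
that convention, using a hypothetical bit_known_zero_p helper rather
than the legacy code itself:

  #include <cstdint>
  #include <cassert>

  // A bit is known to be zero only when it is clear in the mask.
  static bool
  bit_known_zero_p (uint64_t nonzero_mask, unsigned bit)
  {
    return (nonzero_mask & (UINT64_C (1) << bit)) == 0;
  }

  int
  main ()
  {
    uint64_t precise = 0xff;            // Only the low 8 bits may be set.
    uint64_t dropped = ~UINT64_C (0);   // Dropped to -1: no information.

    assert (bit_known_zero_p (precise, 32));    // Bit 32 proven zero.
    assert (!bit_known_zero_p (dropped, 32));   // Nothing proven, which is
                                                // safe: we only miss the
                                                // optimization.
    return 0;
  }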
Tested and benchmarked on x86-64 Linux.
gcc/ChangeLog:

	* range-op.cc (unsigned_singleton_p): Remove.
	(operator_bitwise_and::remove_impossible_ranges): Remove.
	(operator_bitwise_and::fold_range): Set nonzero bits.
	* value-range.cc (irange::get_nonzero_bits): Remove
	legacy_mode_p assert.
	(irange::dump_bitmasks): Remove legacy_mode_p check.