string: Improve generic memchr
author Adhemerval Zanella <adhemerval.zanella@linaro.org>
Tue, 10 Jan 2023 21:00:59 +0000 (18:00 -0300)
committer Adhemerval Zanella <adhemerval.zanella@linaro.org>
Mon, 6 Feb 2023 19:19:35 +0000 (16:19 -0300)
commit 2a8867a17ffe5c5a4251fd40bf6c73a3fd426062
tree 2b533dae74065199f0d1eaa8be5b385685dfce5e
parent 3709ed904770b440d68385f3da259008cdf642a6
string: Improve generic memchr

The new algorithm reads the first aligned word and masks off the
unwanted bytes (a strategy similar to the arch-specific
implementations used on powerpc, sparc, and sh).
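
A minimal sketch of the head-masking idea, assuming a little-endian
machine.  The names op_t, repeat_bytes, has_eq, and first_word_matches
here are simplified stand-ins for illustration, not the actual glibc
internal helpers:

    #include <stdint.h>
    #include <stddef.h>

    typedef uintptr_t op_t;          /* one machine word */

    /* Replicate byte C into every byte position of a word.  */
    static op_t
    repeat_bytes (unsigned char c)
    {
      return ((op_t) -1 / 0xff) * c;
    }

    /* Nonzero iff some byte of X equals the byte replicated in PATTERN
       (the classic "has zero byte" trick applied to X ^ PATTERN); the
       lowest matching byte has bit 7 set in the result.  */
    static op_t
    has_eq (op_t x, op_t pattern)
    {
      op_t v = x ^ pattern;
      return (v - repeat_bytes (0x01)) & ~v & repeat_bytes (0x80);
    }

    /* Read the aligned word that contains S and return its match bits,
       discarding matches that fall before S (on little-endian these
       occupy the low-order byte positions).  */
    static op_t
    first_word_matches (const unsigned char *s, unsigned char c,
                        const op_t **word_ptr)
    {
      uintptr_t offset = (uintptr_t) s % sizeof (op_t);
      *word_ptr = (const op_t *) ((uintptr_t) s - offset);
      return has_eq (**word_ptr, repeat_bytes (c))
             & ((op_t) -1 << (offset * 8));
    }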

The loop now reads one word-aligned address per iteration and checks
it with the has_eq macro, as sketched below.
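
A correspondingly simplified main loop, reusing op_t, repeat_bytes,
has_eq, and first_word_matches from the sketch above.  Again this is
little-endian only and an approximation for illustration, not the
actual glibc code; memchr_sketch is a hypothetical name:

    /* Word-at-a-time scan: one aligned load per iteration, inspecting
       individual bytes only after has_eq reports a hit.  */
    static void *
    memchr_sketch (const void *src, int c_in, size_t n)
    {
      const unsigned char *s = src;
      const unsigned char *end = s + n;
      const op_t *word_ptr;
      op_t pattern = repeat_bytes ((unsigned char) c_in);
      op_t match;

      if (n == 0)
        return NULL;

      /* First, possibly partial, word (see the head-masking sketch).  */
      match = first_word_matches (s, (unsigned char) c_in, &word_ptr);

      while (match == 0)
        {
          if ((const unsigned char *) ++word_ptr >= end)
            return NULL;
          match = has_eq (*word_ptr, pattern);
        }

      /* On little-endian the lowest flagged byte is the first match.  */
      size_t i = 0;
      while ((match & 0x80) == 0)
        {
          match >>= 8;
          i++;
        }
      const unsigned char *p = (const unsigned char *) word_ptr + i;
      return p < end ? (void *) p : NULL;
    }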

Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu,
and powerpc64-linux-gnu by removing the arch-specific assembly
implementation and disabling multi-arch (this covers both little-
and big-endian, in 64-bit and 32-bit configurations).

Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
string/memchr.c
sysdeps/powerpc/powerpc32/power4/multiarch/memchr-ppc32.c
sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c