Use AVX unaligned memcpy only if AVX2 is available
author    H.J. Lu <hjl.tools@gmail.com>
          Fri, 30 Jan 2015 14:50:20 +0000 (06:50 -0800)
committer Dongkyun, Son <dongkyun.s@samsung.com>
          Sun, 22 May 2016 13:26:59 +0000 (22:26 +0900)
commit    d3180f61af8fc705605d9a51b07ae41b949c7d8a
tree      dd71ed2c07db46cc7c958cbfa9a495b6585b2234
parent    3b0df84015836d4c1123231d447afb8076e31b13
Use AVX unaligned memcpy only if AVX2 is available

Unaligned 256-bit AVX register loads/stores used by memcpy are slow on
older processors like Sandy Bridge.  This patch adds
bit_AVX_Fast_Unaligned_Load and sets it only when AVX2 is available.

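As a hedged illustration of the pattern this patch follows (not glibc's
actual code), the sketch below probes CPUID once at startup, derives a
bit_AVX_Fast_Unaligned_Load feature bit from AVX2 support, and tests the
derived bit afterwards.  The bit position and helper names are
illustrative, and __get_cpuid_count needs a reasonably recent GCC or
Clang <cpuid.h>.  A second sketch after the ChangeLog entries below
shows the dispatcher side.

    #include <cpuid.h>
    #include <stdio.h>

    /* Illustrative bit position; glibc assigns its own value.  */
    #define bit_AVX_Fast_Unaligned_Load (1 << 0)

    static unsigned int cpu_features;

    static void
    init_cpu_features (void)
    {
      unsigned int eax, ebx, ecx, edx;

      /* AVX2 is reported in CPUID leaf 7, subleaf 0, EBX bit 5.  Tie
         the fast-unaligned-load bit to AVX2 rather than AVX, since
         AVX-only processors such as Sandy Bridge have slow unaligned
         256-bit loads/stores.  */
      if (__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx)
          && (ebx & (1 << 5)) != 0)
        cpu_features |= bit_AVX_Fast_Unaligned_Load;
    }

    #define HAS_AVX_FAST_UNALIGNED_LOAD \
      ((cpu_features & bit_AVX_Fast_Unaligned_Load) != 0)

    int
    main (void)
    {
      init_cpu_features ();
      puts (HAS_AVX_FAST_UNALIGNED_LOAD
            ? "would dispatch to an AVX unaligned memcpy"
            : "would dispatch to a non-AVX memcpy");
      return 0;
    }
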
[BZ #17801]
* sysdeps/x86_64/multiarch/init-arch.c (__init_cpu_features):
Set the bit_AVX_Fast_Unaligned_Load bit for AVX2.
* sysdeps/x86_64/multiarch/init-arch.h (bit_AVX_Fast_Unaligned_Load):
New.
(index_AVX_Fast_Unaligned_Load): Likewise.
(HAS_AVX_FAST_UNALIGNED_LOAD): Likewise.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check the
bit_AVX_Fast_Unaligned_Load bit instead of the bit_AVX_Usable bit.
* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
* sysdeps/x86_64/multiarch/memmove.c (__libc_memmove): Replace
HAS_AVX with HAS_AVX_FAST_UNALIGNED_LOAD.
* sysdeps/x86_64/multiarch/memmove_chk.c (__memmove_chk): Likewise.

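A hedged sketch of the dispatcher side of the change listed above: the
resolver returns the AVX unaligned variant only when AVX2 is reported,
rather than whenever AVX is usable, so AVX-only processors such as
Sandy Bridge fall back to the SSE2 path.  The variant functions are
stand-in stubs (glibc's are assembly implementations selected through
IFUNC resolvers), and the GCC/Clang builtin __builtin_cpu_supports
stands in for glibc's internal feature check.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef void *(*memcpy_fn) (void *, const void *, size_t);

    /* Stand-ins for the multiarch variants; glibc implements these
       in assembly.  */
    static void *
    memcpy_avx_unaligned (void *dst, const void *src, size_t n)
    {
      return memcpy (dst, src, n);
    }

    static void *
    memcpy_sse2_unaligned (void *dst, const void *src, size_t n)
    {
      return memcpy (dst, src, n);
    }

    static memcpy_fn
    resolve_memcpy (void)
    {
      /* Before this patch the test was effectively "is AVX usable";
         after it, the AVX path is taken only when AVX2 indicates
         fast unaligned 256-bit access.  */
      if (__builtin_cpu_supports ("avx2"))
        return memcpy_avx_unaligned;
      return memcpy_sse2_unaligned;
    }

    int
    main (void)
    {
      char src[16] = "hello, world";
      char dst[16];

      resolve_memcpy () (dst, src, sizeof src);
      puts (dst);
      return 0;
    }
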
(cherry picked from commit 5f3d0b78e011d2a72f9e88b0e9ef5bc081d18f97)

Conflicts:
ChangeLog
NEWS
sysdeps/x86_64/multiarch/init-arch.c
sysdeps/x86_64/multiarch/init-arch.h
sysdeps/x86_64/multiarch/memcpy.S
sysdeps/x86_64/multiarch/memcpy_chk.S
sysdeps/x86_64/multiarch/memmove.c
sysdeps/x86_64/multiarch/memmove_chk.c
sysdeps/x86_64/multiarch/mempcpy.S
sysdeps/x86_64/multiarch/mempcpy_chk.S