x86-64: Handle byte-wise tail copying in memcpy() without a loop
author	Jan Beulich <JBeulich@suse.com>
	Thu, 26 Jan 2012 15:55:32 +0000 (15:55 +0000)
committer	Ingo Molnar <mingo@elte.hu>
	Thu, 26 Jan 2012 20:19:20 +0000 (21:19 +0100)
commit	9d8e22777e66f420e46490e9fc6f8cb7e0e2222b
tree	dd0ec6122dda1409206dda70f6ae4fd3c9a2cd35
parent	2ab560911a427fdc73bfd3a7d2944d8ee0ca6db8
x86-64: Handle byte-wise tail copying in memcpy() without a loop

While the effect is hard to measure, reducing the number of
possibly/likely mis-predicted branches can generally be expected
to yield a slight performance improvement.

Contrary to what might appear at first glance, this also doesn't
grow the function size (the alignment gap to the next function
just gets smaller).
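The idea can be sketched in C (a hypothetical illustration only; the
actual change is in the hand-written assembly of
arch/x86/lib/memcpy_64.S). Instead of a byte-at-a-time loop whose
trip count depends on the data, the 0..7 tail bytes left over after
the 8-byte copies are handled by testing the individual bits of the
remaining count, so control flow takes a fixed, small number of
forward branches:

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch of loop-free tail copying: 'n' is assumed to
 * be the 0..7 bytes remaining after the bulk 8-byte copies.  Each
 * bit of 'n' selects one fixed-size copy, so no loop (and no
 * data-dependent backward branch) is needed.
 */
static void copy_tail(unsigned char *d, const unsigned char *s, size_t n)
{
	if (n & 4) {		/* copy 4 bytes if bit 2 is set */
		memcpy(d, s, 4);
		d += 4;
		s += 4;
	}
	if (n & 2) {		/* copy 2 bytes if bit 1 is set */
		memcpy(d, s, 2);
		d += 2;
		s += 2;
	}
	if (n & 1)		/* copy the final byte if bit 0 is set */
		*d = *s;
}
```

Any tail length below 8 is thus covered by at most three conditional
copies rather than up to seven loop iterations.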

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/4F218584020000780006F422@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
arch/x86/lib/memcpy_64.S