From: Chris Wilson
Date: Fri, 8 Jan 2016 09:55:33 +0000 (+0000)
Subject: x86/mm: Micro-optimise clflush_cache_range()
X-Git-Tag: v4.14-rc1~4081^2
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=1f1a89ac05f6e88aa341e86e57435fdbb1177c0c;p=platform%2Fkernel%2Flinux-rpi.git

x86/mm: Micro-optimise clflush_cache_range()

Whilst inspecting the asm for clflush_cache_range() and some perf
profiles that required extensive flushing of single cachelines (from
part of the intel-gpu-tools GPU benchmarks), we noticed that gcc was
reloading boot_cpu_data.x86_clflush_size on every iteration of the
loop. We can manually hoist that read which perf regarded as taking
~25% of the function time for a single cacheline flush.

Signed-off-by: Chris Wilson
Reviewed-by: Ross Zwisler
Acked-by: "H. Peter Anvin"
Cc: Toshi Kani
Cc: Borislav Petkov
Cc: Luis R. Rodriguez
Cc: Stephen Rothwell
Cc: Sai Praneeth
Link: http://lkml.kernel.org/r/1452246933-10890-1-git-send-email-chris@chris-wilson.co.uk
Signed-off-by: Thomas Gleixner
---

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index a3137a4..6000ad7 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -129,14 +129,16 @@ within(unsigned long addr, unsigned long start, unsigned long end)
  */
 void clflush_cache_range(void *vaddr, unsigned int size)
 {
-	unsigned long clflush_mask = boot_cpu_data.x86_clflush_size - 1;
+	const unsigned long clflush_size = boot_cpu_data.x86_clflush_size;
+	void *p = (void *)((unsigned long)vaddr & ~(clflush_size - 1));
 	void *vend = vaddr + size;
-	void *p;
+
+	if (p >= vend)
+		return;
 
 	mb();
-	for (p = (void *)((unsigned long)vaddr & ~clflush_mask);
-	     p < vend; p += boot_cpu_data.x86_clflush_size)
+	for (; p < vend; p += clflush_size)
 		clflushopt(p);
 
 	mb();