(parent: 29430fa)
powerpc: Allow 64bit VDSO __kernel_sync_dicache to work across ranges >4GB
author	Alastair D'Silva <alastair@d-silva.org>	Mon, 4 Nov 2019 02:32:54 +0000 (13:32 +1100)
committer	Michael Ellerman <mpe@ellerman.id.au>	Thu, 7 Nov 2019 11:48:34 +0000 (22:48 +1100)
When calling __kernel_sync_dicache with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32-bit shifts with 64-bit ones, so that
the full size is accounted for.
Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191104023305.9581-3-alastair@au1.ibm.com
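For illustration only (not part of the patch): in 64-bit mode the PowerPC
srw instruction shifts just the low-order 32 bits of the source register,
while srd shifts the full 64 bits. A minimal C sketch of the resulting
truncation, assuming a 128-byte cache block (log block size 7):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t len = 5ULL << 30;	/* 5 GiB flush range */
		unsigned int logblocksz = 7;	/* assumed 128-byte cache blocks */

		/* srw.-style: only the low 32 bits of the length survive the shift */
		uint64_t lines_srw = (uint32_t)len >> logblocksz;
		/* srd.-style: the full 64-bit length is shifted */
		uint64_t lines_srd = len >> logblocksz;

		printf("srw-style line count: %llu\n", (unsigned long long)lines_srw);
		printf("srd-style line count: %llu\n", (unsigned long long)lines_srd);
		return 0;
	}

With a 5 GiB range, the 32-bit shift yields 8388608 lines (covering only
the low 1 GiB of the length), while the 64-bit shift yields the correct
41943040 lines.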
arch/powerpc/kernel/vdso64/cacheflush.S
diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S b/arch/powerpc/kernel/vdso64/cacheflush.S
index 3f92561a64c4d485e41633ac9af1f6ab1f2a551a..526f5ba2593e22eed7240e5aed782d7b77785326 100644
--- a/arch/powerpc/kernel/vdso64/cacheflush.S
+++ b/arch/powerpc/kernel/vdso64/cacheflush.S
@@ -35,7 +35,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
 	subf	r8,r6,r4		/* compute length */
 	add	r8,r8,r5		/* ensure we get enough */
 	lwz	r9,CFG_DCACHE_LOGBLOCKSZ(r10)
-	srw.	r8,r8,r9		/* compute line count */
+	srd.	r8,r8,r9		/* compute line count */
 	crclr	cr0*4+so
 	beqlr				/* nothing to do? */
 	mtctr	r8
@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
 	subf	r8,r6,r4		/* compute length */
 	add	r8,r8,r5
 	lwz	r9,CFG_ICACHE_LOGBLOCKSZ(r10)
-	srw.	r8,r8,r9		/* compute line count */
+	srd.	r8,r8,r9		/* compute line count */
 	crclr	cr0*4+so
 	beqlr				/* nothing to do? */
 	mtctr	r8