From: Peter Zijlstra
Date: Thu, 24 Feb 2011 10:47:32 +0000 (+0000)
Subject: powerpc/mm: Make hpte_need_flush() safe for preemption
X-Git-Tag: v3.0~1712^2~2
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=f342552b917a18a7a1fa2c10625df85fac828c36;p=platform%2Fkernel%2Flinux-amlogic.git

powerpc/mm: Make hpte_need_flush() safe for preemption

hpte_need_flush() might be called outside of a preempt section when
manipulating the kernel page tables, so we need to use the appropriate
variants of the per-cpu variable accesses. There should be no risk of
being in the middle of a batch, and a context switch will flush any
pending batch.

[Patch extracted from a larger patch in Peter's preemptible mmu_gather
series]

Signed-off-by: Peter Zijlstra
Signed-off-by: Hugh Dickins
Signed-off-by: Benjamin Herrenschmidt
---

diff --git a/arch/powerpc/mm/tlb_hash64.c b/arch/powerpc/mm/tlb_hash64.c
index 1ec0657..c14d09f 100644
--- a/arch/powerpc/mm/tlb_hash64.c
+++ b/arch/powerpc/mm/tlb_hash64.c
@@ -38,13 +38,11 @@ DEFINE_PER_CPU(struct ppc64_tlb_batch, ppc64_tlb_batch);
  * neesd to be flushed. This function will either perform the flush
  * immediately or will batch it up if the current CPU has an active
  * batch on it.
- *
- * Must be called from within some kind of spinlock/non-preempt region...
  */
 void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
                      pte_t *ptep, unsigned long pte, int huge)
 {
-        struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
+        struct ppc64_tlb_batch *batch = &get_cpu_var(ppc64_tlb_batch);
         unsigned long vsid, vaddr;
         unsigned int psize;
         int ssize;
@@ -99,6 +97,7 @@ void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
          */
         if (!batch->active) {
                 flush_hash_page(vaddr, rpte, psize, ssize, 0);
+                put_cpu_var(ppc64_tlb_batch);
                 return;
         }
 
@@ -127,6 +126,7 @@ void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
         batch->index = ++i;
         if (i >= PPC64_TLB_BATCH_NR)
                 __flush_tlb_pending(batch);
+        put_cpu_var(ppc64_tlb_batch);
 }
 
 /*
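
For readers less familiar with the per-cpu accessors involved, the sketch
below shows the pattern this patch moves to: get_cpu_var() implicitly
disables preemption while the per-CPU batch is in use, so put_cpu_var()
must be issued on every exit path, which is why the patch adds it both
before the early return and at the end of hpte_need_flush(). This is only
a minimal illustrative sketch, not code from the patch: struct demo_batch,
demo_need_flush(), flush_one_entry() and queue_entry() are hypothetical
stand-ins, while DEFINE_PER_CPU(), get_cpu_var() and put_cpu_var() are the
real kernel primitives.

#include <linux/percpu.h>

/* Hypothetical per-CPU batch, loosely mirroring struct ppc64_tlb_batch. */
struct demo_batch {
        int active;                     /* non-zero while a batch is open */
        unsigned long index;            /* number of queued entries */
};
static DEFINE_PER_CPU(struct demo_batch, demo_batch);

/* Hypothetical stand-ins for flush_hash_page() / batch queuing. */
static void flush_one_entry(unsigned long vaddr) { }
static void queue_entry(struct demo_batch *b, unsigned long vaddr)
{
        b->index++;
}

void demo_need_flush(unsigned long vaddr)
{
        /* get_cpu_var() disables preemption and returns this CPU's batch. */
        struct demo_batch *batch = &get_cpu_var(demo_batch);

        if (!batch->active) {
                /* No batch open: flush immediately, early-return path. */
                flush_one_entry(vaddr);
                put_cpu_var(demo_batch);        /* re-enable preemption */
                return;
        }

        /* Batch active: defer the flush into the per-CPU batch. */
        queue_entry(batch, vaddr);
        put_cpu_var(demo_batch);                /* re-enable preemption */
}

The design point is that __get_cpu_var() assumed the caller had already
disabled preemption; switching to the get_cpu_var()/put_cpu_var() pair
makes the function safe to call from preemptible context, as the commit
message notes.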