From: Paul Mundt
Date: Tue, 13 Oct 2009 02:18:34 +0000 (+0900)
Subject: sh: force dcache flush if dcache_dirty bit set.
X-Git-Tag: 2.1b_release~10620^2~6
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=964f7e5a56814b32c727821de77d22bd7ef782bc;p=platform%2Fkernel%2Fkernel-mfld-blackbay.git

sh: force dcache flush if dcache_dirty bit set.

This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.

This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.

Reported-by: Nitin Gupta
Reported-by: Hugh Dickins
Signed-off-by: Paul Mundt
---

diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
index 35c37b7..5e1091b 100644
--- a/arch/sh/mm/cache.c
+++ b/arch/sh/mm/cache.c
@@ -128,7 +128,7 @@ void __update_cache(struct vm_area_struct *vma,
 		return;
 
 	page = pfn_to_page(pfn);
-	if (pfn_valid(pfn) && page_mapping(page)) {
+	if (pfn_valid(pfn)) {
 		int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
 		if (dirty) {
 			unsigned long addr = (unsigned long)page_address(page);
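
For readers unfamiliar with the lazy D-cache writeback scheme the message refers
to, the standalone userspace sketch below illustrates the deferral pattern and
the race being closed. It is not kernel code: struct mock_page, dcache_purge(),
mock_flush_dcache_page() and mock_update_cache() are invented stand-ins for the
real page struct, the cache purge primitive, flush_dcache_page() and
__update_cache(), under the simplifying assumption that a single flags word
carries the dirty bit.

/*
 * Minimal userspace sketch of lazy D-cache writeback.  All names are
 * illustrative stand-ins, not the actual kernel API.
 */
#include <stdio.h>
#include <stdbool.h>

#define PG_dcache_dirty 0		/* bit index in page->flags */

struct mock_page {
	unsigned long flags;
	void *mapping;			/* NULL once the mapping is torn down */
	void *vaddr;			/* kernel virtual address of the data */
};

/* Stand-in for the architecture's cache purge primitive. */
static void dcache_purge(void *addr)
{
	printf("purging D-cache lines for %p\n", addr);
}

/* Defer the flush: only mark the page dirty in the D-cache. */
static void mock_flush_dcache_page(struct mock_page *page)
{
	page->flags |= (1UL << PG_dcache_dirty);
}

static bool test_and_clear_dirty(struct mock_page *page)
{
	bool dirty = page->flags & (1UL << PG_dcache_dirty);

	page->flags &= ~(1UL << PG_dcache_dirty);
	return dirty;
}

/*
 * Resolve the deferred flush when the page is (re)mapped.  Testing only
 * the dirty bit -- and not the mapping -- means the flush still happens
 * even if the mapping vanished in the meantime, which is the race the
 * patch closes.
 */
static void mock_update_cache(struct mock_page *page)
{
	if (test_and_clear_dirty(page))
		dcache_purge(page->vaddr);
}

int main(void)
{
	char data[64];
	int dummy_mapping = 0;
	struct mock_page page = {
		.flags = 0,
		.mapping = &dummy_mapping,
		.vaddr = data,
	};

	mock_flush_dcache_page(&page);	/* flush deferred, dirty bit set */
	page.mapping = NULL;		/* e.g. swap cache freed early */
	mock_update_cache(&page);	/* dirty bit alone forces the purge */
	return 0;
}

Compiled and run, the sketch still prints the purge message even though the
mapping pointer was cleared before mock_update_cache() ran, mirroring why
testing PG_dcache_dirty alone, rather than page_mapping(), is sufficient in
__update_cache().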