The VM_EXEC check in update_mmu_cache() was getting optimized away
because of a stupid error in the definition of the macro
addr_not_cache_congruent().
The intention was to have the equivalent of the following:

        if (a || (1 ? b : 0))

but we ended up with the following:

        if (a || 1 ? b : 0)
And because the precedence of '||' is higher than that of '?:', the
condition actually parses as

        if ((a || 1) ? b : 0)

Since (a || 1) is always true, the whole condition reduces to just <b>,
so gcc was optimizing away the evaluation of <a>.
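
To see the difference concretely, here is a tiny stand-alone user-space
program (illustration only; 'a', 'b' and the literal 1 are stand-ins for
the VM_EXEC test, the cache-color comparison and cache_is_vipt_aliasing()):

        /* demo of the operator-precedence pitfall, not kernel code */
        #include <stdio.h>

        int main(void)
        {
                int a, b;

                for (a = 0; a <= 1; a++)
                        for (b = 0; b <= 1; b++)
                                printf("a=%d b=%d intended=%d buggy=%d\n",
                                       a, b,
                                       a || (1 ? b : 0),  /* intended parse */
                                       a || 1 ? b : 0);   /* parses as (a || 1) ? b : 0 */
                return 0;
        }

For a=1, b=0 the intended form yields 1 but the buggy form yields 0,
i.e. <a> no longer influences the result.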
Nasty Repercussions:
1. For non-aliasing configs, it would mean some extraneous dcache flushes
   for non-code pages if U/K mappings were not congruent.
2. For aliasing configs, some needed dcache flushes for code pages might
   be missed if U/K mappings were congruent.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  * checks if two addresses (after page aligning) index into same cache set
  */
 #define addr_not_cache_congruent(addr1, addr2) \
+({ \
         cache_is_vipt_aliasing() ? \
-                (CACHE_COLOR(addr1) != CACHE_COLOR(addr2)) : 0 \
+                (CACHE_COLOR(addr1) != CACHE_COLOR(addr2)) : 0; \
+})

 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 do { \
          * so userspace sees the right data.
          * (Avoids the flush for Non-exec + congruent mapping case)
          */
-        if (vma->vm_flags & VM_EXEC || addr_not_cache_congruent(paddr, vaddr)) {
+        if ((vma->vm_flags & VM_EXEC) ||
+             addr_not_cache_congruent(paddr, vaddr)) {
                 struct page *page = pfn_to_page(pte_pfn(*ptep));
                 int dirty = test_and_clear_bit(PG_arch_1, &page->flags);
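
For reference, a minimal user-space sketch of why the ({ ... }) form fixes
this; PAGE_SHIFT, CACHE_COLOR and cache_is_vipt_aliasing() below are
simplified stand-ins for demonstration, not the kernel definitions:

        #include <stdio.h>

        /* simplified stand-ins, not the kernel macros */
        #define PAGE_SHIFT                      13
        #define CACHE_COLOR(addr)               (((unsigned long)(addr) >> PAGE_SHIFT) & 1)
        #define cache_is_vipt_aliasing()        1       /* pretend: aliasing config */

        /* old form: the bare ternary splices into the caller's || expression */
        #define not_congruent_old(a1, a2) \
                cache_is_vipt_aliasing() ? (CACHE_COLOR(a1) != CACHE_COLOR(a2)) : 0

        /* fixed form: gcc statement expression, evaluated as one self-contained value */
        #define not_congruent_new(a1, a2) \
        ({ \
                cache_is_vipt_aliasing() ? (CACHE_COLOR(a1) != CACHE_COLOR(a2)) : 0; \
        })

        int main(void)
        {
                unsigned long paddr = 0x2000, vaddr = 0x2000;   /* congruent mapping */
                int vm_exec = 1;                                /* "code page" */

                /* old: prints 0 - the VM_EXEC term is swallowed by the ternary */
                printf("old: %d\n", vm_exec || not_congruent_old(paddr, vaddr));
                /* new: prints 1 - flush correctly requested for the exec page */
                printf("new: %d\n", vm_exec || not_congruent_new(paddr, vaddr));
                return 0;
        }

Since a statement expression yields the value of its last statement, the
caller's '||' now sees the whole macro result instead of splicing into the
ternary; plain parentheses around the ternary would also have isolated it.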