Merge tag 'x86_mm_for_6.2_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
author		Linus Torvalds <torvalds@linux-foundation.org>
		Sat, 17 Dec 2022 20:06:53 +0000 (14:06 -0600)
committer	Linus Torvalds <torvalds@linux-foundation.org>
		Sat, 17 Dec 2022 20:06:53 +0000 (14:06 -0600)
Pull x86 mm updates from Dave Hansen:
 "New Feature:

   - Randomize the per-cpu entry areas (see the sketch after this message)

  Cleanups:

   - Have CR3_ADDR_MASK use PHYSICAL_PAGE_MASK instead of open coding it
     (shown below)

   - Move to "native" set_memory_rox() helper (see the ftrace/kprobes
     hunks and the sketches below)

   - Clean up pmd_get_atomic() and i386-PAE (see the sketch after the
     commit list)

   - Remove some unused page table size macros"
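
As a hedged illustration of the headline feature: under KASLR, each CPU's
entry-area slot is drawn at random from the available map space instead of
being slot == cpu number.  A condensed sketch of the idea, not the exact
in-tree code (which lives in arch/x86/mm/cpu_entry_area.c); 'cea_slot' is
a hypothetical per-cpu variable standing in for the real offset
bookkeeping:

	static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, cea_slot); /* hypothetical */

	static __init void init_cea_offsets_sketch(void)
	{
		unsigned int max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) /
				       CPU_ENTRY_AREA_SIZE;
		unsigned int i, j, cea;

		for_each_possible_cpu(i) {
	again:
			cea = get_random_u32_below(max_cea);

			/* reject a slot an earlier CPU already claimed */
			for_each_possible_cpu(j) {
				if (j >= i)
					break;
				if (per_cpu(cea_slot, j) == cea)
					goto again;
			}
			per_cpu(cea_slot, i) = cea;
		}
	}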

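The CR3_ADDR_MASK cleanup is small enough to show inline.  Approximately
(the shape of the change from the series, not the exact context lines):

	/* before: the CR3 physical-address mask was open coded */
	#define CR3_ADDR_MASK	__sme_clr(0x7FFFFFFFFFFFF000ull)

	/* after: derive it from the mask the rest of the mm code uses */
	#define CR3_ADDR_MASK	__sme_clr(PHYSICAL_PAGE_MASK)
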
* tag 'x86_mm_for_6.2_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (35 commits)
  x86/mm: Ensure forced page table splitting
  x86/kasan: Populate shadow for shared chunk of the CPU entry area
  x86/kasan: Add helpers to align shadow addresses up and down
  x86/kasan: Rename local CPU_ENTRY_AREA variables to shorten names
  x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area
  x86/mm: Recompute physical address for every page of per-CPU CEA mapping
  x86/mm: Rename __change_page_attr_set_clr(.checkalias)
  x86/mm: Inhibit _PAGE_NX changes from cpa_process_alias()
  x86/mm: Untangle __change_page_attr_set_clr(.checkalias)
  x86/mm: Add a few comments
  x86/mm: Fix CR3_ADDR_MASK
  x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros
  mm: Convert __HAVE_ARCH_P..P_GET to the new style
  mm: Remove pointless barrier() after pmdp_get_lockless()
  x86/mm/pae: Get rid of set_64bit()
  x86_64: Remove pointless set_64bit() usage
  x86/mm/pae: Be consistent with pXXp_get_and_clear()
  x86/mm/pae: Use WRITE_ONCE()
  x86/mm/pae: Don't (ab)use atomic64
  mm/gup: Fix the lockless PMD access
  ...

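Several of the PAE/pmdp commits above circle one contract, sketched here
(hedged, caller side only; gup_pmd_range() in mm/gup.c is the real
consumer, and handle_huge()/handle_ptes() are hypothetical helpers):

	static int handle_huge(pmd_t pmd);	/* hypothetical */
	static int handle_ptes(pmd_t pmd);	/* hypothetical */

	/*
	 * Take one tearing-free snapshot of the PMD -- pmdp_get_lockless()
	 * covers the 64-bit-entry-on-32-bit-PAE case -- then test only the
	 * local copy, never *pmdp again.
	 */
	static int walk_one_pmd_sketch(pmd_t *pmdp)
	{
		pmd_t pmd = pmdp_get_lockless(pmdp);

		if (!pmd_present(pmd))
			return 0;	/* bail out to the slow path */

		if (pmd_trans_huge(pmd))
			return handle_huge(pmd);

		return handle_ptes(pmd);
	}
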
21 files changed:
arch/x86/Kconfig
arch/x86/kernel/alternative.c
arch/x86/kernel/ftrace.c
arch/x86/kernel/kprobes/core.c
arch/x86/mm/cpu_entry_area.c
arch/x86/mm/pat/set_memory.c
drivers/iommu/intel/irq_remapping.c
include/linux/filter.h
include/linux/pgtable.h
init/main.c
kernel/bpf/core.c
kernel/bpf/trampoline.c
kernel/events/core.c
kernel/fork.c
mm/Kconfig
mm/gup.c
mm/khugepaged.c
mm/mprotect.c
mm/userfaultfd.c
mm/vmscan.c
net/bpf/bpf_dummy_struct_ops.c

diff --cc arch/x86/Kconfig
Simple merge
diff --cc arch/x86/kernel/alternative.c
Simple merge
diff --cc arch/x86/kernel/ftrace.c
@@@ -423,9 -413,9 +423,7 @@@ create_trampoline(struct ftrace_ops *op
        /* ALLOC_TRAMP flags lets us know we created it */
        ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
  
-       if (likely(system_state != SYSTEM_BOOTING))
-               set_memory_ro((unsigned long)trampoline, npages);
-       set_memory_x((unsigned long)trampoline, npages);
 -      set_vm_flush_reset_perms(trampoline);
 -
+       set_memory_rox((unsigned long)trampoline, npages);
        return (unsigned long)trampoline;
  fail:
        tramp_free(trampoline);
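
For reference, the generic fallback for the new helper is roughly the two
old calls fused in the safe order (read-only before executable, so the
mapping is never W+X in between); the x86-native version this tree
switches to folds both permission changes into a single pass over the
page tables.  Approximate shape, per include/linux/set_memory.h:

	static inline int set_memory_rox(unsigned long addr, int numpages)
	{
		int ret = set_memory_ro(addr, numpages);

		if (ret)
			return ret;
		return set_memory_x(addr, numpages);
	}
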
diff --cc arch/x86/kernel/kprobes/core.c
@@@ -414,13 -414,9 +414,7 @@@ void *alloc_insn_page(void
        if (!page)
                return NULL;
  
 -      set_vm_flush_reset_perms(page);
 -
        /*
-        * First make the page read-only, and only then make it executable to
-        * prevent it from being W+X in between.
-        */
-       set_memory_ro((unsigned long)page, 1);
-       /*
         * TODO: Once additional kernel code protection mechanisms are set, ensure
         * that the page was not maliciously altered and it is still zeroed.
         */
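
Both hunks converge on the same caller pattern.  A minimal sketch,
assuming module_alloc() as the allocator (as the kprobes path uses), with
hypothetical 'insns'/'insn_len' and error handling elided:

	void *page = module_alloc(PAGE_SIZE);	/* mapped RW, not yet X */

	if (page) {
		/* write the instructions while the page is still writable */
		memcpy(page, insns, insn_len);
		/* then flip RO+X in one call: never writable+executable */
		set_memory_rox((unsigned long)page, 1);
	}
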
diff --cc arch/x86/mm/cpu_entry_area.c
Simple merge
diff --cc arch/x86/mm/pat/set_memory.c
Simple merge
diff --cc drivers/iommu/intel/irq_remapping.c
Simple merge
diff --cc include/linux/filter.h
Simple merge
diff --cc include/linux/pgtable.h
Simple merge
diff --cc init/main.c
Simple merge
diff --cc kernel/bpf/core.c
Simple merge
diff --cc kernel/bpf/trampoline.c
Simple merge
diff --cc kernel/events/core.c
Simple merge
diff --cc kernel/fork.c
Simple merge
diff --cc mm/Kconfig
Simple merge
diff --cc mm/gup.c
Simple merge
diff --cc mm/khugepaged.c
Simple merge
diff --cc mm/mprotect.c
Simple merge
diff --cc mm/userfaultfd.c
Simple merge
diff --cc mm/vmscan.c
Simple merge
diff --cc net/bpf/bpf_dummy_struct_ops.c
Simple merge