1 ==================================
2 Cache and TLB Flushing Under Linux
3 ==================================
5 :Author: David S. Miller <davem@redhat.com>
7 This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and states what side effects are
expected after the interface is invoked.
12 The side effects described below are stated for a uniprocessor
13 implementation, and what is to happen on that single processor. The
14 SMP cases are a simple extension, in that you just extend the
15 definition such that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/tlb flushing must be inefficient; this is in
fact an area where many optimizations are possible.  For example,
19 if it can be proven that a user address space has never executed
20 on a cpu (see mm_cpumask()), one need not perform a flush
21 for this address space on that cpu.
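
For example, schematically (a sketch only, not any particular
port's code; local_flush_tlb_mm() and the IPI helper below are
hypothetical per-port primitives, not part of the interface)::

	static void ipi_flush_tlb_mm(void *info)
	{
		local_flush_tlb_mm((struct mm_struct *)info);
	}

	void flush_tlb_mm(struct mm_struct *mm)
	{
		preempt_disable();
		/* Flush this cpu first... */
		local_flush_tlb_mm(mm);
		/* ...then IPI only the cpus which have ever run 'mm'. */
		smp_call_function_many(mm_cpumask(mm), ipi_flush_tlb_mm,
				       mm, 1);
		preempt_enable();
	}
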
23 First, the TLB flushing interfaces, since they are the simplest. The
24 "TLB" is abstracted under Linux as something the cpu uses to cache
25 virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it is
27 possible for stale translations to exist in this "TLB" cache.
28 Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:
32 1) ``void flush_tlb_all(void)``
34 The most severe flush of all. After this interface runs,
any previous page table modification whatsoever will be
visible to the cpu.
38 This is usually invoked when the kernel page tables are
39 changed, since such translations are "global" in nature.
41 2) ``void flush_tlb_mm(struct mm_struct *mm)``
43 This interface flushes an entire user address space from
44 the TLB. After running, this interface must make sure that
45 any previous page table modifications for the address space
46 'mm' will be visible to the cpu. That is, after running,
47 there will be no entries in the TLB for 'mm'.
49 This interface is used to handle whole address space
page table operations such as what happens during
fork, and exec.
53 3) ``void flush_tlb_range(struct vm_area_struct *vma,
54 unsigned long start, unsigned long end)``
56 Here we are flushing a specific range of (user) virtual
57 address translations from the TLB. After running, this
58 interface must make sure that any previous page table
59 modifications for the address space 'vma->vm_mm' in the range
60 'start' to 'end-1' will be visible to the cpu. That is, after
61 running, there will be no entries in the TLB for 'mm' for
62 virtual addresses in the range 'start' to 'end-1'.
64 The "vma" is the backing store being used for the region.
65 Primarily, this is used for munmap() type operations.
67 The interface is provided in hopes that the port can find
68 a suitably efficient method for removing multiple page
69 sized translations from the TLB, instead of having the kernel
call flush_tlb_page (see below) for each entry which may be
modified.
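
For instance, a port with no ranged invalidate instruction might,
as a sketch, fall back to a loop over the per-page interface::

	void flush_tlb_range(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
	{
		unsigned long addr;

		/* One PAGE_SIZE'd invalidation per page in the range. */
		for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
			flush_tlb_page(vma, addr);
	}
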
73 4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``
75 This time we need to remove the PAGE_SIZE sized translation
76 from the TLB. The 'vma' is the backing structure used by
77 Linux to keep track of mmap'd regions for a process, the
78 address space is available via vma->vm_mm. Also, one may
79 test (vma->vm_flags & VM_EXEC) to see if this region is
80 executable (and thus could be in the 'instruction TLB' in
81 split-tlb type setups).
83 After running, this interface must make sure that any previous
84 page table modification for address space 'vma->vm_mm' for
85 user virtual address 'addr' will be visible to the cpu. That
86 is, after running, there will be no entries in the TLB for
87 'vma->vm_mm' for virtual address 'addr'.
89 This is used primarily during fault processing.
91 5) ``void update_mmu_cache(struct vm_area_struct *vma,
92 unsigned long address, pte_t *ptep)``
94 At the end of every page fault, this routine is invoked to
95 tell the architecture specific code that a translation
96 now exists at virtual address "address" for address space
97 "vma->vm_mm", in the software page tables.
99 A port may use this information in any way it so chooses.
100 For example, it could use this event to pre-load TLB
101 translations for software managed TLB configurations.
102 The sparc64 port currently does this.
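
A schematic pre-load might look like the following, where
tlb_insert_entry() stands in for a hypothetical software-TLB
primitive::

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t *ptep)
	{
		pte_t pte = *ptep;

		/* Install the brand new translation so that the return
		 * from the fault does not immediately TLB-miss again.
		 */
		if (pte_present(pte))
			tlb_insert_entry(vma->vm_mm, address, pte);
	}
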
104 Next, we have the cache flushing interfaces. In general, when Linux
105 is changing an existing virtual-->physical mapping to a new value,
106 the sequence will be in one of the following forms::
108 1) flush_cache_mm(mm);
change_all_page_tables_of(mm);
flush_tlb_mm(mm);
112 2) flush_cache_range(vma, start, end);
113 change_range_of_page_tables(mm, start, end);
114 flush_tlb_range(vma, start, end);
116 3) flush_cache_page(vma, addr, pfn);
117 set_pte(pte_pointer, new_pte_val);
118 flush_tlb_page(vma, addr);
120 The cache level flush will always be first, because this allows
121 us to properly handle systems whose caches are strict and require
122 a virtual-->physical translation to exist for a virtual address
123 when that virtual address is flushed from the cache. The HyperSparc
124 cpu is one such cpu with this attribute.
126 The cache flushing routines below need only deal with cache flushing
127 to the extent that it is necessary for a particular cpu. Mostly,
128 these routines must be implemented for cpus which have virtually
129 indexed caches which must be flushed when virtual-->physical
130 translations are changed or removed. So, for example, the physically
131 indexed physically tagged caches of IA32 processors have no need to
132 implement these interfaces since the caches are fully synchronized
133 and have no dependency on translation information.
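
On such a cpu, the routines below can simply be defined away; as a
sketch, in the style of the asm-generic versions::

	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_dup_mm(mm)			do { } while (0)
	#define flush_cache_range(vma, start, end)	do { } while (0)
	#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
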
135 Here are the routines, one by one:
137 1) ``void flush_cache_mm(struct mm_struct *mm)``
139 This interface flushes an entire user address space from
140 the caches. That is, after running, there will be no cache
141 lines associated with 'mm'.
143 This interface is used to handle whole address space
144 page table operations such as what happens during exit and exec.
146 2) ``void flush_cache_dup_mm(struct mm_struct *mm)``
148 This interface flushes an entire user address space from
149 the caches. That is, after running, there will be no cache
150 lines associated with 'mm'.
152 This interface is used to handle whole address space
153 page table operations such as what happens during fork.
155 This option is separate from flush_cache_mm to allow some
156 optimizations for VIPT caches.
158 3) ``void flush_cache_range(struct vm_area_struct *vma,
159 unsigned long start, unsigned long end)``
161 Here we are flushing a specific range of (user) virtual
162 addresses from the cache. After running, there will be no
163 entries in the cache for 'vma->vm_mm' for virtual addresses in
164 the range 'start' to 'end-1'.
166 The "vma" is the backing store being used for the region.
167 Primarily, this is used for munmap() type operations.
169 The interface is provided in hopes that the port can find
170 a suitably efficient method for removing multiple page
171 sized regions from the cache, instead of having the kernel
call flush_cache_page (see below) for each entry which may be
modified.
175 4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``
177 This time we need to remove a PAGE_SIZE sized range
178 from the cache. The 'vma' is the backing structure used by
179 Linux to keep track of mmap'd regions for a process, the
180 address space is available via vma->vm_mm. Also, one may
181 test (vma->vm_flags & VM_EXEC) to see if this region is
182 executable (and thus could be in the 'instruction cache' in
183 "Harvard" type cache layouts).
185 The 'pfn' indicates the physical page frame (shift this value
186 left by PAGE_SHIFT to get the physical address) that 'addr'
translates to.  It is this mapping which should be removed from
the cache.
190 After running, there will be no entries in the cache for
'vma->vm_mm' for virtual address 'addr' which translates
to 'pfn'.
194 This is used primarily during fault processing.
196 5) ``void flush_cache_kmaps(void)``
198 This routine need only be implemented if the platform utilizes
highmem.  It will be called right before all of the kmaps
are flushed.
202 After running, there will be no entries in the cache for
203 the kernel virtual address range PKMAP_ADDR(0) to
204 PKMAP_ADDR(LAST_PKMAP).
This routine should be implemented in asm/highmem.h
208 6) ``void flush_cache_vmap(unsigned long start, unsigned long end)``
209 ``void flush_cache_vunmap(unsigned long start, unsigned long end)``
211 Here in these two interfaces we are flushing a specific range
212 of (kernel) virtual addresses from the cache. After running,
213 there will be no entries in the cache for the kernel address
214 space for virtual addresses in the range 'start' to 'end-1'.
216 The first of these two routines is invoked after vmap_range()
217 has installed the page table entries. The second is invoked
218 before vunmap_range() deletes the page table entries.
220 There exists another whole class of cpu cache issues which currently
221 require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.
225 Is your port susceptible to virtual aliasing in its D-cache?
226 Well, if your D-cache is virtually indexed, is larger in size than
227 PAGE_SIZE, and does not prevent multiple cache lines for the same
228 physical address from existing at once, you have this problem.
230 If your D-cache has this problem, first define asm/shmparam.h SHMLBA
231 properly, it should essentially be the size of your virtually
232 addressed D-cache (or if the size is variable, the largest possible
233 size). This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

.. note::

   This does not fix shared mmaps, check out the sparc64 port for
   one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
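
As a worked example of the SHMLBA rule above (hypothetical numbers):
a 16KB virtually indexed, direct-mapped D-cache with 4KB pages has
16KB / 4KB = 4 page "colors", and two virtual mappings of the same
physical page land on the same cache lines only if their addresses
agree modulo 16KB.  Such a port would define::

	#define SHMLBA	0x4000	/* 16KB, the virtually indexed cache size */
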
242 Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
244 mapped into some user address space, there is always at least one more
245 mapping, that of the kernel in its linear mapping starting at
246 PAGE_OFFSET. So immediately, once the first user maps a given
247 physical page into its address space, by implication the D-cache
248 aliasing problem has the potential to exist since the kernel already
249 maps this page at its virtual address.
251 ``void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)``
252 ``void clear_user_page(void *to, unsigned long addr, struct page *page)``
254 These two routines store data in user anonymous or COW
pages.  They allow a port to efficiently avoid D-cache alias
256 issues between userspace and the kernel.
258 For example, a port may temporarily map 'from' and 'to' to
259 kernel virtual addresses during the copy. The virtual address
260 for these two pages is chosen in such a way that the kernel
261 load/store instructions happen to virtual addresses which are
262 of the same "color" as the user mapping of the page. Sparc64
263 for example, uses this technique.
265 The 'addr' parameter tells the virtual address where the
266 user will ultimately have this page mapped, and the 'page'
267 parameter gives a pointer to the struct page of the target.
269 If D-cache aliasing is not an issue, these two routines may
270 simply call memcpy/memset directly and do nothing more.
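
As a sketch, a port with no D-cache aliasing might use just::

	static inline void copy_user_page(void *to, void *from,
					  unsigned long addr,
					  struct page *page)
	{
		copy_page(to, from);	/* plain page-sized copy */
	}

	static inline void clear_user_page(void *to, unsigned long addr,
					   struct page *page)
	{
		clear_page(to);		/* plain page-sized clear */
	}
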
272 ``void flush_dcache_page(struct page *page)``
This routine must be called when:

a) the kernel did write to a page that is in the page cache
   and / or in high memory
278 b) the kernel is about to read from a page cache page and user space
279 shared/writable mappings of this page potentially exist. Note
280 that {get,pin}_user_pages{_fast} already call flush_dcache_page
281 on any page found in the user address space and thus driver
282 code rarely needs to take this into account.
.. note::

   This routine need only be called for page cache pages
   which can potentially ever be mapped into the address
   space of a user process.  So for example, VFS layer code
   handling vfs symlinks in the page cache need not call
   this interface at all.
292 The phrase "kernel writes to a page cache page" means, specifically,
293 that the kernel executes store instructions that dirty data in that
294 page at the page->virtual mapping of that page. It is important to
295 flush here to handle D-cache aliasing, to make sure these kernel stores
296 are visible to user space mappings of that page.
The corollary case is just as important: if there are users which have
299 shared+writable mappings of this file, we must make sure that kernel
300 reads of these pages will see the most recent stores done by the user.
302 If D-cache aliasing is not an issue, this routine may simply be defined
303 as a nop on that architecture.
305 There is a bit set aside in page->flags (PG_arch_1) as "architecture
306 private". The kernel guarantees that, for pagecache pages, it will
307 clear this bit when such a page first enters the pagecache.
309 This allows these interfaces to be implemented much more efficiently.
310 It allows one to "defer" (perhaps indefinitely) the actual flush if
311 there are currently no user processes mapping this page. See sparc64's
312 flush_dcache_page and update_mmu_cache implementations for an example
313 of how to go about doing this.
315 The idea is, first at flush_dcache_page() time, if page_file_mapping()
316 returns a mapping, and mapping_mapped on that mapping returns %false,
317 just mark the architecture private page flag bit. Later, in
318 update_mmu_cache(), a check is made of this flag bit, and if set the
319 flush is done and the flag bit is cleared.
.. note::

   It is often important, if you defer the flush,
   that the actual flush occurs on the same CPU
   that performed the stores into the page to make it
   dirty.  Again, see sparc64 for examples of how
   to deal with this.
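
Schematically, the deferred flush described above might look like
the following, where __flush_dcache_phys_page() stands in for a
hypothetical arch-private flush primitive (a real implementation
would also validate the pte before dereferencing it)::

	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping = page_file_mapping(page);

		/* No user mappings yet: mark the page and defer. */
		if (mapping && !mapping_mapped(mapping)) {
			set_bit(PG_arch_1, &page->flags);
			return;
		}
		__flush_dcache_phys_page(page);
	}

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t *ptep)
	{
		struct page *page = pte_page(*ptep);

		/* The page is now visible to user space: perform the
		 * deferred flush, if one is pending.
		 */
		if (test_and_clear_bit(PG_arch_1, &page->flags))
			__flush_dcache_phys_page(page);
	}
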
329 ``void flush_dcache_folio(struct folio *folio)``
330 This function is called under the same circumstances as
331 flush_dcache_page(). It allows the architecture to
332 optimise for flushing the entire folio of pages instead
333 of flushing one page at a time.
335 ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
336 unsigned long user_vaddr, void *dst, void *src, int len)``
337 ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
338 unsigned long user_vaddr, void *dst, void *src, int len)``
340 When the kernel needs to copy arbitrary data in and out
of arbitrary user pages (e.g. for ptrace()) it will use
these two routines.
344 Any necessary cache flushing or other coherency operations
345 that need to occur should happen here. If the processor's
346 instruction cache does not snoop cpu stores, it is very
347 likely that you will need to flush the instruction cache
348 for copy_to_user_page().
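
As a sketch, a port whose I-cache does not snoop stores might do::

	void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
			       unsigned long user_vaddr, void *dst,
			       void *src, int len)
	{
		memcpy(dst, src, len);

		/* The user may execute what we just wrote (ptrace
		 * breakpoints, for example), so push it into the I-cache.
		 */
		if (vma->vm_flags & VM_EXEC)
			flush_icache_range((unsigned long)dst,
					   (unsigned long)dst + len);
	}
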
350 ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
351 unsigned long vmaddr)``
353 When the kernel needs to access the contents of an anonymous
354 page, it calls this function (currently only
355 get_user_pages()). Note: flush_dcache_page() deliberately
356 doesn't work for an anonymous page. The default
357 implementation is a nop (and should remain so for all coherent
358 architectures). For incoherent architectures, it should flush
359 the cache of the page at vmaddr.
361 ``void flush_icache_range(unsigned long start, unsigned long end)``
363 When the kernel stores into addresses that it will execute
out of (e.g. when loading modules), this function is called.

If the icache does not snoop stores then this routine will need
to flush it.
369 ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
371 All the functionality of flush_icache_page can be implemented in
372 flush_dcache_page and update_mmu_cache. In the future, the hope
373 is to remove this interface completely.
375 The final category of APIs is for I/O to deliberately aliased address
376 ranges inside the kernel. Such aliases are set up by use of the
377 vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
378 subsystem assumes that the user mapping and kernel offset mapping are
379 the only aliases. This isn't true for vmap aliases, so anything in
380 the kernel trying to do I/O to vmap areas must manually manage
381 coherency. It must do this by flushing the vmap range before doing
382 I/O and invalidating it after the I/O returns.
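
The pairing is schematically as follows, where do_device_io() stands
in for a hypothetical driver I/O operation on the underlying physical
pages::

	flush_kernel_vmap_range(vaddr, size);      /* write back dirty data */
	do_device_io(vaddr, size);                 /* device does the I/O   */
	invalidate_kernel_vmap_range(vaddr, size); /* drop speculatively
						      filled lines          */
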
384 ``void flush_kernel_vmap_range(void *vaddr, int size)``
386 flushes the kernel cache for a given virtual address range in
387 the vmap area. This is to make sure that any data the kernel
388 modified in the vmap range is made visible to the physical
389 page. The design is to make this area safe to perform I/O on.
Note that this API does *not* also flush the offset map alias
of the area.
``void invalidate_kernel_vmap_range(void *vaddr, int size)`` invalidates
395 the cache for a given virtual address range in the vmap area
396 which prevents the processor from making the cache stale by
397 speculatively reading data while the I/O was occurring to the
physical pages.  This is only necessary for data reads into the
vmap area.