Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO

Current best practice is to reuse the name of the function as a define to
indicate that the function is implemented by the architecture.

Link: https://lkml.kernel.org/r/20230802151406.3735276-6-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

authored by Matthew Wilcox (Oracle) and committed by Andrew Morton
29d26f12 bc60abfb

3 files changed, 12 insertions(+), 18 deletions(-)

Documentation/core-api/cachetlb.rst (+9 -15)

--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -273,7 +273,7 @@
 	If D-cache aliasing is not an issue, these two routines may
 	simply call memcpy/memset directly and do nothing more.
 
-  ``void flush_dcache_page(struct page *page)``
+  ``void flush_dcache_folio(struct folio *folio)``
 
 	This routines must be called when:
 
@@ -281,7 +281,7 @@
 	     and / or in high memory
 	  b) the kernel is about to read from a page cache page and user space
 	     shared/writable mappings of this page potentially exist.  Note
-	     that {get,pin}_user_pages{_fast} already call flush_dcache_page
+	     that {get,pin}_user_pages{_fast} already call flush_dcache_folio
 	     on any page found in the user address space and thus driver
 	     code rarely needs to take this into account.
@@ -295,7 +295,7 @@
 
 	The phrase "kernel writes to a page cache page" means, specifically,
 	that the kernel executes store instructions that dirty data in that
-	page at the page->virtual mapping of that page.  It is important to
+	page at the kernel virtual mapping of that page.  It is important to
 	flush here to handle D-cache aliasing, to make sure these kernel stores
 	are visible to user space mappings of that page.
@@ -306,18 +306,18 @@
 	If D-cache aliasing is not an issue, this routine may simply be defined
 	as a nop on that architecture.
 
-	There is a bit set aside in page->flags (PG_arch_1) as "architecture
+	There is a bit set aside in folio->flags (PG_arch_1) as "architecture
 	private".  The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.
 
 	This allows these interfaces to be implemented much more
 	efficiently.  It allows one to "defer" (perhaps indefinitely) the
 	actual flush if there are currently no user processes mapping this
-	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
 	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if
-	page_file_mapping() returns a mapping, and mapping_mapped on that
+	The idea is, first at flush_dcache_folio() time, if
+	folio_flush_mapping() returns a mapping, and mapping_mapped() on that
 	mapping returns %false, just mark the architecture private page
 	flag bit.  Later, in update_mmu_cache_range(), a check is made
 	of this flag bit, and if set the flush is done and the flag bit
@@ -330,12 +330,6 @@
 	as did the cpu stores into the page to make it
 	dirty.  Again, see sparc64 for examples of how
 	to deal with this.
-
-  ``void flush_dcache_folio(struct folio *folio)``
-	This function is called under the same circumstances as
-	flush_dcache_page().  It allows the architecture to
-	optimise for flushing the entire folio of pages instead
-	of flushing one page at a time.
 
   ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 			   unsigned long user_vaddr, void *dst, void *src, int len)``
@@ -357,7 +351,7 @@
 
 	When the kernel needs to access the contents of an anonymous
 	page, it calls this function (currently only
-	get_user_pages()).  Note: flush_dcache_page() deliberately
+	get_user_pages()).  Note: flush_dcache_folio() deliberately
 	doesn't work for an anonymous page.  The default
 	implementation is a nop (and should remain so for all coherent
 	architectures).  For incoherent architectures, it should flush
@@ -374,7 +368,7 @@
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
+	flush_dcache_folio and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address
include/linux/cacheflush.h (+2 -2)

--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@
 struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #endif /* _LINUX_CACHEFLUSH_H */
mm/util.c (+1 -1)

--- a/mm/util.c
+++ b/mm/util.c
@@ -1119,7 +1119,7 @@
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);