Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: remove arch_flush_tlb_batched_pending() arch helper

Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
TLB entry remains valid"), all arches that use tlbbatch for reclaim
(arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
flush_tlb_mm().

So let's simplify by removing the unnecessary abstraction and doing the
flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
arch_flush_tlb_batched_pending()").

Link: https://lkml.kernel.org/r/20250609103132.447370-1-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Suggested-by: Will Deacon <will@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Ryan Roberts, committed by Andrew Morton
a9e056de 441413d2

5 files changed, 1 insertion(+), 23 deletions(-)

arch/arm64/include/asm/tlbflush.h (-11)

@@ -323,17 +323,6 @@
 }
 
 /*
- * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
- * all the previously issued TLBIs targeting mm have completed. But since we
- * can be executing on a remote CPU, a DSB cannot guarantee this like it can
- * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
- */
-static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
-/*
  * To support TLB batched flush for multiple pages unmapping, we only send
  * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
  * completion at the end in arch_tlbbatch_flush(). Since we've already issued
arch/riscv/include/asm/tlbflush.h (-1)

@@ -63,7 +63,6 @@
 bool arch_tlbbatch_should_defer(struct mm_struct *mm);
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 		struct mm_struct *mm, unsigned long start, unsigned long end);
-void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 extern unsigned long tlb_flush_all_threshold;
arch/riscv/mm/tlbflush.c (-5)

@@ -234,11 +234,6 @@
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
-void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	__flush_tlb_range(NULL, &batch->cpumask,
arch/x86/include/asm/tlbflush.h (-5)

@@ -356,11 +356,6 @@
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }
 
-static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
mm/rmap.c (+1 -1)

@@ -746,7 +746,7 @@
 	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
 
 	if (pending != flushed) {
-		arch_flush_tlb_batched_pending(mm);
+		flush_tlb_mm(mm);
 		/*
 		 * If the new TLB flushing is pending during flushing, leave
 		 * mm->tlb_flush_batched as is, to avoid losing flushing.