Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

io-mapping: don't disable preempt on RT in io_mapping_map_atomic_wc().

io_mapping_map_atomic_wc() disables preemption and pagefaults for
historical reasons. The conversion to io_mapping_map_local_wc(), which
only disables migration, cannot be done wholesale because quite a few
call sites need to be updated to accommodate the changed semantics.

On PREEMPT_RT enabled kernels the io_mapping_map_atomic_wc() semantics are
problematic due to the implicit disabling of preemption which makes it
impossible to acquire 'sleeping' spinlocks within the mapped atomic
sections.
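
As a hypothetical illustration (this caller, its names and its lock are invented for the example and are not a call site touched by this patch), the problematic pattern on PREEMPT_RT looks like this: spinlock_t is a sleeping lock there, so acquiring one inside the implicitly preempt-disabled mapping section is invalid:

	/* Hypothetical caller, for illustration only -- not from this patch. */
	static void copy_to_fb(struct io_mapping *map, spinlock_t *lock,
			       unsigned long offset, u32 val)
	{
		u32 __iomem *vaddr;

		vaddr = io_mapping_map_atomic_wc(map, offset);
		/*
		 * On PREEMPT_RT, spin_lock() may sleep, but the mapping above
		 * ran preempt_disable(), so this triggers a "sleeping function
		 * called from invalid context" splat.
		 */
		spin_lock(lock);
		writel(val, vaddr);
		spin_unlock(lock);
		io_mapping_unmap_atomic(vaddr);
	}

With the patch applied, the RT kernel only runs migrate_disable() here, so the spin_lock() above becomes legal while the mapping stays CPU-local.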

PREEMPT_RT has replaced the preempt_disable() with a migrate_disable() for
more than a decade. It could be argued that this is a justification to do
this unconditionally, but PREEMPT_RT covers only a limited number of
architectures and it disables some functionality which limits the coverage
further.

Limit the replacement to PREEMPT_RT for now. This is also done for
kmap_atomic().

Link: https://lkml.kernel.org/r/20230310162905.O57Pj7hh@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reported-by: Richard Weinberger <richard.weinberger@gmail.com>
Link: https://lore.kernel.org/CAFLxGvw0WMxaMqYqJ5WgvVSbKHq2D2xcXTOgMCpgq9nDC-MWTQ@mail.gmail.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

authored by Sebastian Andrzej Siewior, committed by Andrew Morton
7eb16f23 2c6efe9c

+16 -4
include/linux/io-mapping.h
@@ -69,7 +69,10 @@
 
 	BUG_ON(offset >= mapping->size);
 	phys_addr = mapping->base + offset;
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
+	else
+		migrate_disable();
 	pagefault_disable();
 	return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
 }
@@ -79,7 +82,10 @@
 {
 	kunmap_local_indexed((void __force *)vaddr);
 	pagefault_enable();
-	preempt_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable();
+	else
+		migrate_enable();
 }
 
 static inline void __iomem *
@@ -162,7 +168,10 @@
 io_mapping_map_atomic_wc(struct io_mapping *mapping,
			 unsigned long offset)
 {
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
+	else
+		migrate_disable();
 	pagefault_disable();
 	return io_mapping_map_wc(mapping, offset, PAGE_SIZE);
 }
@@ -172,7 +181,10 @@
 {
 	io_mapping_unmap(vaddr);
 	pagefault_enable();
-	preempt_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable();
+	else
+		migrate_enable();
 }
 
 static inline void __iomem *