Merge tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull VM_FAULT_RETRY fixes from Al Viro:
"Some of the page fault handlers do not deal with the following case
correctly:

- handle_mm_fault() has returned VM_FAULT_RETRY

- there is a pending fatal signal

- fault had happened in kernel mode

The correct action in such a case is not to "return unconditionally" -
fatal signals are handled only upon return to userland, so something
like copy_to_user() would end up retrying the faulting instruction and
triggering the same fault again and again.

What we need to do in such a case is to make the caller treat it as a
failed uaccess attempt - handle the exception if there is an exception
handler for the faulting instruction, or oops if there isn't one.

Over the years some architectures have been fixed and now handle that
case properly; some still do not. This series should fix the
remaining ones.

Status:

- m68k, riscv, hexagon, parisc: tested/acked by maintainers.

- alpha, sparc32, sparc64: tested locally - the bug has been reproduced
on the unpatched kernel and verified to be fixed by this series.

- ia64, microblaze, nios2, openrisc: build, but otherwise completely
untested"
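
For context, the check these handlers rely on is fault_signal_pending(),
which reports a pending signal only together with VM_FAULT_RETRY. A sketch
of the helper, based on its definition in include/linux/sched/signal.h
(exact details may differ between kernel versions):

	/* Sketch of the generic helper the handlers below test; based on
	 * include/linux/sched/signal.h, details may vary by version. */
	static inline bool fault_signal_pending(vm_fault_t fault_flags,
						struct pt_regs *regs)
	{
		/* Only relevant when the fault must be retried: a fatal
		 * signal always aborts the retry, and for user-mode faults
		 * any pending signal does. */
		return unlikely((fault_flags & VM_FAULT_RETRY) &&
				(fatal_signal_pending(current) ||
				 (user_mode(regs) && signal_pending(current))));
	}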
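
The common shape of the fix, modulo per-architecture spellings of
"kernel mode" and "failed uaccess" (no_context, bad_page_fault(),
handle_kernel_fault, ...), is:

	fault = handle_mm_fault(vma, address, flags, regs);

	if (fault_signal_pending(fault, regs)) {
		/* Kernel-mode fault: take the failed-uaccess path -
		 * exception table fixup if the faulting instruction
		 * has a handler, oops otherwise. */
		if (!user_mode(regs))
			goto no_context;
		/* User-mode fault: the fatal signal is delivered on
		 * the way back to userland, so a bare return is fine. */
		return;
	}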

* tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
openrisc: fix livelock in uaccess
nios2: fix livelock in uaccess
microblaze: fix livelock in uaccess
ia64: fix livelock in uaccess
sparc: fix livelock in uaccess
alpha: fix livelock in uaccess
parisc: fix livelock in uaccess
hexagon: fix livelock in uaccess
riscv: fix livelock in uaccess
m68k: fix livelock in uaccess

Changed files (+48 -11):

 arch/alpha/mm/fault.c      |  5 ++++-
 arch/hexagon/mm/vm_fault.c |  5 ++++-
 arch/ia64/mm/fault.c       |  5 ++++-
 arch/m68k/mm/fault.c       |  5 ++++-
 arch/microblaze/mm/fault.c |  5 ++++-
 arch/nios2/mm/fault.c      |  5 ++++-
 arch/openrisc/mm/fault.c   |  5 ++++-
 arch/parisc/mm/fault.c     |  7 ++++++-
 arch/riscv/mm/fault.c      |  5 ++++-
 arch/sparc/mm/fault_32.c   |  5 ++++-
 arch/sparc/mm/fault_64.c   |  7 ++++++-
 11 files changed, 48 insertions(+), 11 deletions(-)

--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -152,8 +152,11 @@
 	   the fault.  */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -93,8 +93,11 @@
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -136,8 +136,11 @@
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -138,8 +138,11 @@
 	fault = handle_mm_fault(vma, address, flags, regs);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return 0;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -219,8 +219,11 @@
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			bad_page_fault(regs, address, SIGBUS);
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -136,8 +136,11 @@
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -162,8 +162,11 @@
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -308,8 +308,13 @@
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs)) {
+			msg = "Page fault: fault signal on kernel memory";
+			goto no_context;
+		}
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -326,8 +326,11 @@
 	 * signal first. We do not need to release the mmap_lock because it
 	 * would already be released in __lock_page_or_retry in mm/filemap.c.
 	 */
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			no_context(regs, addr);
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -187,8 +187,11 @@
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!from_user)
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)

--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -424,8 +424,13 @@
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (regs->tstate & TSTATE_PRIV) {
+			insn = get_fault_insn(regs, insn);
+			goto handle_kernel_fault;
+		}
 		goto exit_exception;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)