
MIPS: Fix MSA ld unaligned failure cases

Copying the content of an MSA vector from user memory may involve TLB
faults & mapping in pages. This will fail when preemption is disabled
due to an inability to acquire mmap_sem from do_page_fault, which meant
such vector loads to unmapped pages would always fail to be emulated.
Fix this by disabling preemption later, only around the update of the
vector register state.

This change does however introduce a race between performing the load
into thread context & the thread being preempted, saving its current
live context & clobbering the loaded value. This should be a rare
occurrence, so optimise for the fast path by simply repeating the load
if we are preempted.

Additionally if the copy failed then the failure path was taken with
preemption left disabled, leading to the kernel typically encountering
further issues around sleeping whilst atomic. The change to where
preemption is disabled avoids this issue.

Fixes: e4aa1f153add ("MIPS: MSA unaligned memory access support")
Reported-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.3
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12345/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


+29 -20
arch/mips/kernel/unaligned.c
--- a/arch/mips/kernel/unaligned.c
+++ b/arch/mips/kernel/unaligned.c
@@ -885,7 +885,7 @@
 {
 	union mips_instruction insn;
 	unsigned long value;
-	unsigned int res;
+	unsigned int res, preempted;
 	unsigned long origpc;
 	unsigned long orig31;
 	void __user *fault_addr = NULL;
@@ -1226,27 +1226,36 @@
 			if (!access_ok(VERIFY_READ, addr, sizeof(*fpr)))
 				goto sigbus;
 
-			/*
-			 * Disable preemption to avoid a race between copying
-			 * state from userland, migrating to another CPU and
-			 * updating the hardware vector register below.
-			 */
-			preempt_disable();
+			do {
+				/*
+				 * If we have live MSA context keep track of
+				 * whether we get preempted in order to avoid
+				 * the register context we load being clobbered
+				 * by the live context as it's saved during
+				 * preemption. If we don't have live context
+				 * then it can't be saved to clobber the value
+				 * we load.
+				 */
+				preempted = test_thread_flag(TIF_USEDMSA);
 
-			res = __copy_from_user_inatomic(fpr, addr,
-							sizeof(*fpr));
-			if (res)
-				goto fault;
+				res = __copy_from_user_inatomic(fpr, addr,
+								sizeof(*fpr));
+				if (res)
+					goto fault;
 
-			/*
-			 * Update the hardware register if it is in use by the
-			 * task in this quantum, in order to avoid having to
-			 * save & restore the whole vector context.
-			 */
-			if (test_thread_flag(TIF_USEDMSA))
-				write_msa_wr(wd, fpr, df);
-
-			preempt_enable();
+				/*
+				 * Update the hardware register if it is in use
+				 * by the task in this quantum, in order to
+				 * avoid having to save & restore the whole
+				 * vector context.
+				 */
+				preempt_disable();
+				if (test_thread_flag(TIF_USEDMSA)) {
+					write_msa_wr(wd, fpr, df);
+					preempted = 0;
+				}
+				preempt_enable();
+			} while (preempted);
 			break;
 
 		case msa_st_op: