ARM: 9179/1: uaccess: avoid alignment faults in copy_[from|to]_kernel_nofault

The helpers that are used to implement copy_from_kernel_nofault() and
copy_to_kernel_nofault() cast a void* to a pointer to a wider type,
which may result in alignment faults on ARM if the compiler decides to
use double-word or multiple-word load/store instructions.
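As a rough illustration (not from the patch; the helper names are hypothetical), the difference between the faulting pattern and a byte-safe load looks like this:

/*
 * Sketch only: a cast to a wider type lets the compiler assume natural
 * alignment, so the 8-byte load may be emitted as LDRD/LDM on ARM and
 * fault on a byte-aligned pointer.  A memcpy() of the same size never
 * assumes alignment.
 */
#include <stdint.h>
#include <string.h>

static uint64_t load_u64_cast(const void *src)
{
	return *(const uint64_t *)src;	/* may fault if src is misaligned */
}

static uint64_t load_u64_bytewise(const void *src)
{
	uint64_t val;

	memcpy(&val, src, sizeof(val));	/* always alignment-safe */
	return val;
}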

Only configurations that define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
are affected, given that commit 2423de2e6f4d ("ARM: 9115/1: mm/maccess:
fix unaligned copy_{from,to}_kernel_nofault") ensures that dst and src
are sufficiently aligned otherwise.

So use the unaligned accessors for accessing dst and src in cases where
they may be misaligned.
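A minimal sketch of that pattern, using a hypothetical helper rather than the actual __get_kernel_nofault()/__put_kernel_nofault() macros:

#include <linux/kernel.h>
#include <asm/unaligned.h>

/* Sketch only: store a value through a possibly misaligned pointer. */
static inline void store_u32(u32 val, void *dst)
{
	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
		put_unaligned(val, (u32 *)dst);	/* byte-safe accessor */
	else
		*(u32 *)dst = val;		/* dst aligned by caller */
}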

Cc: <stable@vger.kernel.org> # depends on 2423de2e6f4d
Fixes: 2df4c9a741a0 ("ARM: 9112/1: uaccess: add __{get,put}_kernel_nofault")
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


+8 -2
arch/arm/include/asm/uaccess.h
···
 #include <linux/string.h>
 #include <asm/memory.h>
 #include <asm/domain.h>
+#include <asm/unaligned.h>
 #include <asm/unified.h>
 #include <asm/compiler.h>

···
 	} \
 	default: __err = __get_user_bad(); break; \
 	} \
-	*(type *)(dst) = __val; \
+	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) \
+		put_unaligned(__val, (type *)(dst)); \
+	else \
+		*(type *)(dst) = __val; /* aligned by caller */ \
 	if (__err) \
 		goto err_label; \
 } while (0)
···
 	const type *__pk_ptr = (dst); \
 	unsigned long __dst = (unsigned long)__pk_ptr; \
 	int __err = 0; \
-	type __val = *(type *)src; \
+	type __val = IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) \
+		     ? get_unaligned((type *)(src)) \
+		     : *(type *)(src); /* aligned by caller */ \
 	switch (sizeof(type)) { \
 	case 1: __put_user_asm_byte(__val, __dst, __err, ""); break; \
 	case 2: __put_user_asm_half(__val, __dst, __err, ""); break; \
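For context, a caller such as a debug probe may hand copy_from_kernel_nofault() a source pointer with arbitrary alignment, which is the case the accessors above must tolerate. A hedged usage sketch (the helper name is made up):

#include <linux/uaccess.h>

/* Sketch only: read a u32 from a possibly misaligned, possibly bad address. */
static u32 peek_kernel_u32(const void *addr)
{
	u32 val;

	if (copy_from_kernel_nofault(&val, addr, sizeof(val)))
		return 0;	/* non-zero return means the access faulted */
	return val;
}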