x86, pmem: Fix cache flushing for iovec write < 8 bytes

Commit 11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write. But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Cc: <stable@vger.kernel.org>
Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>


+1 -1
arch/x86/include/asm/pmem.h
@@ -98,7 +98,7 @@

 	if (bytes < 8) {
 		if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-			arch_wb_cache_pmem(addr, 1);
+			arch_wb_cache_pmem(addr, bytes);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);