Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ARCv2: lib: memset: fix doing prefetchw outside of buffer

The ARCv2 optimized memset uses the PREFETCHW instruction to prefetch the
next cache line, but doesn't ensure that the line is not past the end of
the buffer. PREFETCHW changes the line's ownership and marks it dirty,
which can cause issues in SMP configs when the next line was already owned
by another core. Fix the issue by avoiding the PREFETCHW.

Some more details:

The current code has 3 logical loops (ignoring the unaligned part):
(a) Big loop doing aligned 64 bytes per iteration with PREALLOC
(b) Loop for 32 x 2 bytes with PREFETCHW
(c) Any leftover bytes

Loop (a) was already eliding the last 64 bytes, so PREALLOC there was
safe. The fix was removing PREFETCHW from loop (b).

Another potential issue (applicable to configs with a 32 or 128 byte L1
cache line) is that PREALLOC assumes a 64 byte cache line and may not do
the right thing, especially for 32 bytes. While it would be easy to adapt,
there are no known configs with those line sizes, so for now just
compile out PREALLOC in such cases.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog, used asm .macro vs. "C" macro]

Authored by Eugeniy Paltsev, committed by Vineet Gupta
e6a72b7d 4d447455

+32 -8
arch/arc/lib/memset-archs.S
···
  */

 #include <linux/linkage.h>
+#include <asm/cache.h>

-#undef PREALLOC_NOT_AVAIL
+/*
+ * The memset implementation below is optimized to use prefetchw and prealloc
+ * instruction in case of CPU with 64B L1 data cache line (L1_CACHE_SHIFT == 6)
+ * If you want to implement optimized memset for other possible L1 data cache
+ * line lengths (32B and 128B) you should rewrite code carefully checking
+ * we don't call any prefetchw/prealloc instruction for L1 cache lines which
+ * don't belongs to memset area.
+ */
+
+#if L1_CACHE_SHIFT == 6
+
+.macro PREALLOC_INSTR	reg, off
+	prealloc	[\reg, \off]
+.endm
+
+.macro PREFETCHW_INSTR	reg, off
+	prefetchw	[\reg, \off]
+.endm
+
+#else
+
+.macro PREALLOC_INSTR
+.endm
+
+.macro PREFETCHW_INSTR
+.endm
+
+#endif

 ENTRY_CFI(memset)
-	prefetchw [r0]		; Prefetch the write location
+	PREFETCHW_INSTR	r0, 0	; Prefetch the first write location
	mov.f	0, r2
 ;;; if size is zero
	jz.d	[blink]
···
	lpnz	@.Lset64bytes
	;; LOOP START
-#ifdef PREALLOC_NOT_AVAIL
-	prefetchw [r3, 64]	;Prefetch the next write location
-#else
-	prealloc  [r3, 64]
-#endif
+	PREALLOC_INSTR	r3, 64	; alloc next line w/o fetching
+
 #ifdef CONFIG_ARC_HAS_LL64
	std.ab	r4, [r3, 8]
	std.ab	r4, [r3, 8]
···
	lsr.f	lp_count, r2, 5 ;Last remaining max 124 bytes
	lpnz	.Lset32bytes
	;; LOOP START
-	prefetchw [r3, 32]	;Prefetch the next write location
 #ifdef CONFIG_ARC_HAS_LL64
	std.ab	r4, [r3, 8]
	std.ab	r4, [r3, 8]