
powerpc/lib: Adjust .balign inside string functions for PPC32

commit 87a156fb18fe1 ("Align hot loops of some string functions")
degraded the performance of string functions by adding useless
nops.

A simple benchmark on an 8xx calling memchr() 100000 times on a
match at the first byte runs in 41668 TB ticks before this patch
and in 35986 TB ticks after it. That is an improvement of 5682 TB
ticks, roughly 14%.

Another benchmark doing the same with a memchr() matching the 128th
byte runs in 1011365 TB ticks before this patch and 1005682 TB ticks
after it. So regardless of the number of loop iterations, removing
those useless nops saves the same constant ~5683 TB ticks per test.
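
The harness itself is not part of the patch. A minimal sketch of such a
benchmark in C, assuming a powerpc target where the timebase is readable
with mftb; the buffer size, match byte, and overall structure are
illustrative assumptions, not the author's original code:

#include <stdio.h>
#include <string.h>

/* Read the lower 32 bits of the PowerPC timebase (SPR 268). */
static inline unsigned long mftb(void)
{
	unsigned long tb;
	asm volatile("mftb %0" : "=r"(tb));
	return tb;
}

int main(void)
{
	static char buf[128];
	volatile const void *sink;	/* keeps the calls from being optimized away */
	unsigned long t0, t1;
	int i;

	buf[0] = 'x';			/* match on the very first byte */

	t0 = mftb();
	for (i = 0; i < 100000; i++)
		sink = memchr(buf, 'x', sizeof(buf));
	t1 = mftb();

	(void)sink;
	printf("%lu TB ticks\n", t1 - t0);
	return 0;
}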

Fixes: 87a156fb18fe1 ("Align hot loops of some string functions")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Authored by Christophe Leroy, committed by Michael Ellerman
1128bb78 56b04d56

+7 -3 (2 files changed)

arch/powerpc/include/asm/cache.h (+3 -0)
···
 #if defined(CONFIG_PPC_8xx) || defined(CONFIG_403GCX)
 #define L1_CACHE_SHIFT		4
 #define MAX_COPY_PREFETCH	1
+#define IFETCH_ALIGN_SHIFT	2
 #elif defined(CONFIG_PPC_E500MC)
 #define L1_CACHE_SHIFT		6
 #define MAX_COPY_PREFETCH	4
+#define IFETCH_ALIGN_SHIFT	3
 #elif defined(CONFIG_PPC32)
 #define MAX_COPY_PREFETCH	4
+#define IFETCH_ALIGN_SHIFT	3	/* 603 fetches 2 insn at a time */
 #if defined(CONFIG_PPC_47x)
 #define L1_CACHE_SHIFT		7
 #else
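
The new shifts are consumed through IFETCH_ALIGN_BYTES, which this header
derives from the shift outside the hunk shown above; restated minimally
here for reference (the per-platform values in the comments follow the
definitions added by this patch):

/* Derivation done elsewhere in asm/cache.h, restated for reference. */
#define IFETCH_ALIGN_BYTES	(1 << IFETCH_ALIGN_SHIFT)
/* 8xx:        shift 2 -> .balign 4, one 4-byte insn, i.e. no padding */
/* e500mc/603: shift 3 -> .balign 8, two insns per fetch              */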
arch/powerpc/lib/string.S (+4 -3)
···
 #include <asm/errno.h>
 #include <asm/ppc_asm.h>
 #include <asm/export.h>
+#include <asm/cache.h>

 	.text
···
 	mtctr	r5
 	addi	r6,r3,-1
 	addi	r4,r4,-1
-	.balign 16
+	.balign IFETCH_ALIGN_BYTES
 1:	lbzu	r0,1(r4)
 	cmpwi	0,r0,0
 	stbu	r0,1(r6)
···
 	mtctr	r5
 	addi	r5,r3,-1
 	addi	r4,r4,-1
-	.balign 16
+	.balign IFETCH_ALIGN_BYTES
 1:	lbzu	r3,1(r5)
 	cmpwi	1,r3,0
 	lbzu	r0,1(r4)
···
 	beq-	2f
 	mtctr	r5
 	addi	r3,r3,-1
-	.balign 16
+	.balign IFETCH_ALIGN_BYTES
 1:	lbzu	r0,1(r3)
 	cmpw	0,r0,r4
 	bdnzf	2,1b
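
Why the old .balign 16 cost time: whatever filler the assembler inserts
before the 1: label is a run of nops that executes on fall-through entry
into the loop, once per call. A self-contained sketch of the padding
arithmetic in C (the 0x24 entry address is a made-up example):

#include <stdio.h>

/* Bytes of filler .balign would emit at 'addr' for a given alignment. */
static unsigned int balign_pad(unsigned int addr, unsigned int align)
{
	return (align - (addr & (align - 1))) & (align - 1);
}

int main(void)
{
	unsigned int addr = 0x24;	/* hypothetical address of the loop label */

	printf(".balign 16 -> %u pad bytes (%u nops)\n",
	       balign_pad(addr, 16), balign_pad(addr, 16) / 4);
	printf(".balign 4  -> %u pad bytes\n", balign_pad(addr, 4));
	return 0;
}

With IFETCH_ALIGN_BYTES of 4 on the 8xx, every 4-byte instruction is
already aligned, so no nops are emitted at all, consistent with the
constant ~5683-tick saving measured above.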