Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/hweight: Force inlining of __arch_hweight{32,64}()

With this config:

http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os

gcc-4.7.2 generates many copies of these tiny functions:

__arch_hweight32 (35 copies):
55                      push   %rbp
e8 66 9b 4a 00          callq  __sw_hweight32
48 89 e5                mov    %rsp,%rbp
5d                      pop    %rbp
c3                      retq

__arch_hweight64 (8 copies):
55                      push   %rbp
e8 5e c2 8a 00          callq  __sw_hweight64
48 89 e5                mov    %rsp,%rbp
5d                      pop    %rbp
c3                      retq

See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122

This patch fixes this via s/inline/__always_inline/.

To avoid touching the 32-bit case, where such a change was not
tested to be a win, reformat __arch_hweight64() to have completely
disjoint 32-bit and 64-bit implementations. IOW: use #ifdef to
select an entire 32-bit or 64-bit function body instead of having
#ifdef / #else / #endif inside a single function body. Only the
64-bit __arch_hweight64() is __always_inline'd.

    text     data      bss       dec  filename
86971120 17195912 36659200 140826232  vmlinux.before
86970954 17195912 36659200 140826066  vmlinux

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1438697716-28121-2-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

authored by Denys Vlasenko and committed by Ingo Molnar
d14edb16 1a1d48a4

+9 -6
arch/x86/include/asm/arch_hweight.h
@@ -21,7 +21,7 @@
  * ARCH_HWEIGHT_CFLAGS in <arch/x86/Kconfig> for the respective
  * compiler switches.
  */
-static inline unsigned int __arch_hweight32(unsigned int w)
+static __always_inline unsigned int __arch_hweight32(unsigned int w)
 {
 	unsigned int res = 0;
 
@@ -42,20 +42,23 @@
 	return __arch_hweight32(w & 0xff);
 }
 
+#ifdef CONFIG_X86_32
 static inline unsigned long __arch_hweight64(__u64 w)
+{
+	return __arch_hweight32((u32)w) +
+	       __arch_hweight32((u32)(w >> 32));
+}
+#else
+static __always_inline unsigned long __arch_hweight64(__u64 w)
 {
 	unsigned long res = 0;
 
-#ifdef CONFIG_X86_32
-	return __arch_hweight32((u32)w) +
-	       __arch_hweight32((u32)(w >> 32));
-#else
 	asm (ALTERNATIVE("call __sw_hweight64", POPCNT64, X86_FEATURE_POPCNT)
 	     : "="REG_OUT (res)
 	     : REG_IN (w));
-#endif /* CONFIG_X86_32 */
 
 	return res;
 }
+#endif /* CONFIG_X86_32 */
 
 #endif