Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

cache: add __cacheline_group_{begin, end}_aligned() (+ couple more)

__cacheline_group_begin(), unfortunately, doesn't align the group in
any way. If alignment is wanted, you need to write something like

__cacheline_group_begin(grp) __aligned(ALIGN)

which is neither convenient nor compact.
Add the _aligned() counterparts, which align the groups automatically
to either the specified alignment (optional argument) or
``SMP_CACHE_BYTES``.
Note that the actual struct layout will then be (on x86_64 with a
64-byte cacheline):

struct x {
	u32 y;				// offset 0, size 4, padding 56
	__cacheline_group_begin__grp;	// offset 64, size 0
	u32 z;				// offset 64, size 4, padding 4
	__cacheline_group_end__grp;	// offset 72, size 0
	__cacheline_group_pad__grp;	// offset 72, size 0, padding 56
	u32 w;				// offset 128
};

The end marker is aligned to sizeof(long), so that the struct size can
be asserted more strictly, while the offset of the next field in the
structure is aligned to the group alignment, so that a field never
falls into a group it doesn't belong to.

Add the __LARGEST_ALIGN definition and the LARGEST_ALIGN() macro.
__LARGEST_ALIGN is the value to which the compiler aligns fields when
__aligned_largest is specified; sometimes this value is needed outside
of variable definitions. LARGEST_ALIGN() is a macro which simply
aligns a value to __LARGEST_ALIGN.
Also add SMP_CACHE_ALIGN(), similar to L1_CACHE_ALIGN() but using
``SMP_CACHE_BYTES`` instead of ``L1_CACHE_BYTES``, as the former also
accounts for L2, which is needed in some cases.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2cb13dec ce2f84eb
+59
include/linux/cache.h
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -13,6 +13,32 @@
 #define SMP_CACHE_BYTES L1_CACHE_BYTES
 #endif
 
+/**
+ * SMP_CACHE_ALIGN - align a value to the L2 cacheline size
+ * @x: value to align
+ *
+ * On some architectures, L2 ("SMP") CL size is bigger than L1, and sometimes,
+ * this needs to be accounted.
+ *
+ * Return: aligned value.
+ */
+#ifndef SMP_CACHE_ALIGN
+#define SMP_CACHE_ALIGN(x)	ALIGN(x, SMP_CACHE_BYTES)
+#endif
+
+/*
+ * ``__aligned_largest`` aligns a field to the value most optimal for the
+ * target architecture to perform memory operations. Get the actual value
+ * to be able to use it anywhere else.
+ */
+#ifndef __LARGEST_ALIGN
+#define __LARGEST_ALIGN		sizeof(struct { long x; } __aligned_largest)
+#endif
+
+#ifndef LARGEST_ALIGN
+#define LARGEST_ALIGN(x)	ALIGN(x, __LARGEST_ALIGN)
+#endif
+
 /*
  * __read_mostly is used to keep rarely changing variables out of frequently
  * updated cachelines. Its use should be reserved for data that is used
@@ -94,6 +120,39 @@
 #define __cacheline_group_end(GROUP) \
 	__u8 __cacheline_group_end__##GROUP[0]
 #endif
+
+/**
+ * __cacheline_group_begin_aligned - declare an aligned group start
+ * @GROUP: name of the group
+ * @...: optional group alignment
+ *
+ * The following block inside a struct:
+ *
+ *	__cacheline_group_begin_aligned(grp);
+ *	field a;
+ *	field b;
+ *	__cacheline_group_end_aligned(grp);
+ *
+ * will always be aligned to either the specified alignment or
+ * ``SMP_CACHE_BYTES``.
+ */
+#define __cacheline_group_begin_aligned(GROUP, ...)		\
+	__cacheline_group_begin(GROUP)				\
+	__aligned((__VA_ARGS__ + 0) ? : SMP_CACHE_BYTES)
+
+/**
+ * __cacheline_group_end_aligned - declare an aligned group end
+ * @GROUP: name of the group
+ * @...: optional alignment (same as was in __cacheline_group_begin_aligned())
+ *
+ * Note that the end marker is aligned to sizeof(long) to allow more precise
+ * size assertion. It also declares a padding at the end to avoid next field
+ * falling into this cacheline.
+ */
+#define __cacheline_group_end_aligned(GROUP, ...)		\
+	__cacheline_group_end(GROUP) __aligned(sizeof(long));	\
+	struct { } __cacheline_group_pad__##GROUP		\
+		__aligned((__VA_ARGS__ + 0) ? : SMP_CACHE_BYTES)
 
 #ifndef CACHELINE_ASSERT_GROUP_MEMBER
 #define CACHELINE_ASSERT_GROUP_MEMBER(TYPE, GROUP, MEMBER) \