Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

lib/stackdepot: various comments clean-ups

Clean up comments in include/linux/stackdepot.h and lib/stackdepot.c:

1. Rework the initialization comment in stackdepot.h.
2. Rework the header comment in stackdepot.c.
3. Various clean-ups for other comments.

Also adjust whitespace for the find_stack and depot_alloc_stack call sites.

No functional changes.

Link: https://lkml.kernel.org/r/5836231b7954355e2311fc9b5870f697ea8e1f7d.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Andrey Konovalov, committed by Andrew Morton
b232b999 beb3c23c

+78 -78
+19 -17
include/linux/stackdepot.h
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 /*
- * A generic stack depot implementation
+ * Stack depot - a stack trace storage that avoids duplication.
  *
  * Author: Alexander Potapenko <glider@google.com>
  * Copyright (C) 2016 Google, Inc.
  *
- * Based on code by Dmitry Chernenkov.
+ * Based on the code by Dmitry Chernenkov.
  */
 
 #ifndef _LINUX_STACKDEPOT_H
···
 
 /*
  * Number of bits in the handle that stack depot doesn't use. Users may store
- * information in them.
+ * information in them via stack_depot_set/get_extra_bits.
  */
 #define STACK_DEPOT_EXTRA_BITS 5
 
 /*
- * Every user of stack depot has to call stack_depot_init() during its own init
- * when it's decided that it will be calling stack_depot_save() later. This is
- * recommended for e.g. modules initialized later in the boot process, when
- * slab_is_available() is true.
+ * Using stack depot requires its initialization, which can be done in 3 ways:
  *
- * The alternative is to select STACKDEPOT_ALWAYS_INIT to have stack depot
- * enabled as part of mm_init(), for subsystems where it's known at compile time
- * that stack depot will be used.
+ * 1. Selecting CONFIG_STACKDEPOT_ALWAYS_INIT. This option is suitable in
+ *    scenarios where it's known at compile time that stack depot will be used.
+ *    Enabling this config makes the kernel initialize stack depot in mm_init().
  *
- * Another alternative is to call stack_depot_request_early_init(), when the
- * decision to use stack depot is taken e.g. when evaluating kernel boot
- * parameters, which precedes the enablement point in mm_init().
+ * 2. Calling stack_depot_request_early_init() during early boot, before
+ *    stack_depot_early_init() in mm_init() completes. For example, this can
+ *    be done when evaluating kernel boot parameters.
+ *
+ * 3. Calling stack_depot_init(). Possible after boot is complete. This option
+ *    is recommended for modules initialized later in the boot process, after
+ *    mm_init() completes.
  *
  * stack_depot_init() and stack_depot_request_early_init() can be called
- * regardless of CONFIG_STACKDEPOT and are no-op when disabled. The actual
- * save/fetch/print functions should only be called from code that makes sure
- * CONFIG_STACKDEPOT is enabled.
+ * regardless of whether CONFIG_STACKDEPOT is enabled and are no-op when this
+ * config is disabled. The save/fetch/print stack depot functions can only be
+ * called from the code that makes sure CONFIG_STACKDEPOT is enabled _and_
+ * initializes stack depot via one of the ways listed above.
  */
 #ifdef CONFIG_STACKDEPOT
 int stack_depot_init(void);
 
 void __init stack_depot_request_early_init(void);
 
-/* This is supposed to be called only from mm_init() */
+/* Must be only called from mm_init(). */
 int __init stack_depot_early_init(void);
 #else
 static inline int stack_depot_init(void) { return 0; }
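As a sketch of option 3 from the reworked initialization comment, a module initialized late in boot might set up stack depot like this (a hedged illustration, not part of the patch; my_module_init is a hypothetical function name):

```c
#include <linux/module.h>
#include <linux/stackdepot.h>

static int __init my_module_init(void)
{
	/* Option 3: initialize stack depot after mm_init() completes. */
	int ret = stack_depot_init();

	if (ret)
		return ret;

	/* The module can now call stack_depot_save() and friends. */
	return 0;
}
module_init(my_module_init);
```

stack_depot_init() is safe to call even when CONFIG_STACKDEPOT is disabled; it is a no-op returning 0 in that case, per the comment above.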
+59 -61
lib/stackdepot.c
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Generic stack depot for storing stack traces.
+ * Stack depot - a stack trace storage that avoids duplication.
  *
- * Some debugging tools need to save stack traces of certain events which can
- * be later presented to the user. For example, KASAN needs to safe alloc and
- * free stacks for each object, but storing two stack traces per object
- * requires too much memory (e.g. SLUB_DEBUG needs 256 bytes per object for
- * that).
+ * Stack depot is intended to be used by subsystems that need to store and
+ * later retrieve many potentially duplicated stack traces without wasting
+ * memory.
  *
- * Instead, stack depot maintains a hashtable of unique stacktraces. Since alloc
- * and free stacks repeat a lot, we save about 100x space.
- * Stacks are never removed from depot, so we store them contiguously one after
- * another in a contiguous memory allocation.
+ * For example, KASAN needs to save allocation and free stack traces for each
+ * object. Storing two stack traces per object requires a lot of memory (e.g.
+ * SLUB_DEBUG needs 256 bytes per object for that). Since allocation and free
+ * stack traces often repeat, using stack depot allows to save about 100x space.
+ *
+ * Internally, stack depot maintains a hash table of unique stacktraces. The
+ * stack traces themselves are stored contiguously one after another in a set
+ * of separate page allocations.
+ *
+ * Stack traces are never removed from stack depot.
  *
  * Author: Alexander Potapenko <glider@google.com>
  * Copyright (C) 2016 Google, Inc.
  *
- * Based on code by Dmitry Chernenkov.
+ * Based on the code by Dmitry Chernenkov.
  */
 
 #define pr_fmt(fmt) "stackdepot: " fmt
···
 	(((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
 	(1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP)
 
-/* The compact structure to store the reference to stacks. */
+/* Compact structure that stores a reference to a stack. */
 union handle_parts {
 	depot_stack_handle_t handle;
 	struct {
···
 };
 
 struct stack_record {
-	struct stack_record *next;	/* Link in the hashtable */
-	u32 hash;			/* Hash in the hastable */
-	u32 size;			/* Number of frames in the stack */
+	struct stack_record *next;	/* Link in the hash table */
+	u32 hash;			/* Hash in the hash table */
+	u32 size;			/* Number of stored frames */
 	union handle_parts handle;
-	unsigned long entries[];	/* Variable-sized array of entries. */
+	unsigned long entries[];	/* Variable-sized array of frames */
 };
 
 static bool stack_depot_disabled;
···
 	return stack;
 }
 
-/* Calculate hash for a stack */
+/* Calculates the hash for a stack. */
 static inline u32 hash_stack(unsigned long *entries, unsigned int size)
 {
 	return jhash2((u32 *)entries,
···
 		      STACK_HASH_SEED);
 }
 
-/* Use our own, non-instrumented version of memcmp().
- *
- * We actually don't care about the order, just the equality.
+/*
+ * Non-instrumented version of memcmp().
+ * Does not check the lexicographical order, only the equality.
  */
 static inline
 int stackdepot_memcmp(const unsigned long *u1, const unsigned long *u2,
···
 	return 0;
 }
 
-/* Find a stack that is equal to the one stored in entries in the hash */
+/* Finds a stack in a bucket of the hash table. */
 static inline struct stack_record *find_stack(struct stack_record *bucket,
 					unsigned long *entries, int size,
 					u32 hash)
···
 }
 
 /**
- * __stack_depot_save - Save a stack trace from an array
+ * __stack_depot_save - Save a stack trace to stack depot
  *
- * @entries:		Pointer to storage array
- * @nr_entries:		Size of the storage array
- * @alloc_flags:	Allocation gfp flags
+ * @entries:		Pointer to the stack trace
+ * @nr_entries:		Number of frames in the stack
+ * @alloc_flags:	Allocation GFP flags
  * @can_alloc:		Allocate stack pools (increased chance of failure if false)
  *
  * Saves a stack trace from @entries array of size @nr_entries. If @can_alloc is
- * %true, is allowed to replenish the stack pool in case no space is left
+ * %true, stack depot can replenish the stack pools in case no space is left
  * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
- * any allocations and will fail if no space is left to store the stack trace.
+ * any allocations and fails if no space is left to store the stack trace.
  *
- * If the stack trace in @entries is from an interrupt, only the portion up to
- * interrupt entry is saved.
+ * If the provided stack trace comes from the interrupt context, only the part
+ * up to the interrupt entry is saved.
  *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
- *          this is the case from contexts where neither %GFP_ATOMIC nor
+ *          this is the case for contexts where neither %GFP_ATOMIC nor
  *          %GFP_NOWAIT can be used (NMI, raw_spin_lock).
  *
- * Return: The handle of the stack struct stored in depot, 0 on failure.
+ * Return: Handle of the stack struct stored in depot, 0 on failure
  */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
···
 
 	/*
 	 * If this stack trace is from an interrupt, including anything before
-	 * interrupt entry usually leads to unbounded stackdepot growth.
+	 * interrupt entry usually leads to unbounded stack depot growth.
 	 *
-	 * Because use of filter_irq_stacks() is a requirement to ensure
-	 * stackdepot can efficiently deduplicate interrupt stacks, always
-	 * filter_irq_stacks() to simplify all callers' use of stackdepot.
+	 * Since use of filter_irq_stacks() is a requirement to ensure stack
+	 * depot can efficiently deduplicate interrupt stacks, always
+	 * filter_irq_stacks() to simplify all callers' use of stack depot.
 	 */
 	nr_entries = filter_irq_stacks(entries, nr_entries);
 
···
 	 * The smp_load_acquire() here pairs with smp_store_release() to
 	 * |bucket| below.
 	 */
-	found = find_stack(smp_load_acquire(bucket), entries,
-			   nr_entries, hash);
+	found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
 	if (found)
 		goto exit;
 
···
 
 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
-		struct stack_record *new = depot_alloc_stack(entries, nr_entries, hash, &prealloc);
+		struct stack_record *new =
+			depot_alloc_stack(entries, nr_entries, hash, &prealloc);
 
 		if (new) {
 			new->next = *bucket;
···
 		}
 	} else if (prealloc) {
 		/*
-		 * We didn't need to store this stack trace, but let's keep
-		 * the preallocated memory for the future.
+		 * Stack depot already contains this stack trace, but let's
+		 * keep the preallocated memory for the future.
 		 */
 		depot_init_pool(&prealloc);
 	}
···
 	raw_spin_unlock_irqrestore(&pool_lock, flags);
 exit:
 	if (prealloc) {
-		/* Nobody used this memory, ok to free it. */
+		/* Stack depot didn't use this memory, free it. */
 		free_pages((unsigned long)prealloc, DEPOT_POOL_ORDER);
 	}
 	if (found)
···
 EXPORT_SYMBOL_GPL(__stack_depot_save);
 
 /**
- * stack_depot_save - Save a stack trace from an array
+ * stack_depot_save - Save a stack trace to stack depot
  *
- * @entries:		Pointer to storage array
- * @nr_entries:		Size of the storage array
- * @alloc_flags:	Allocation gfp flags
+ * @entries:		Pointer to the stack trace
+ * @nr_entries:		Number of frames in the stack
+ * @alloc_flags:	Allocation GFP flags
  *
  * Context: Contexts where allocations via alloc_pages() are allowed.
  *          See __stack_depot_save() for more details.
  *
- * Return: The handle of the stack struct stored in depot, 0 on failure.
+ * Return: Handle of the stack trace stored in depot, 0 on failure
  */
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
···
 EXPORT_SYMBOL_GPL(stack_depot_save);
 
 /**
- * stack_depot_fetch - Fetch stack entries from a depot
+ * stack_depot_fetch - Fetch a stack trace from stack depot
  *
- * @handle:		Stack depot handle which was returned from
- *			stack_depot_save().
- * @entries:		Pointer to store the entries address
+ * @handle:	Stack depot handle returned from stack_depot_save()
+ * @entries:	Pointer to store the address of the stack trace
  *
- * Return: The number of trace entries for this depot.
+ * Return: Number of frames for the fetched stack
  */
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
···
 EXPORT_SYMBOL_GPL(stack_depot_fetch);
 
 /**
- * stack_depot_print - print stack entries from a depot
+ * stack_depot_print - Print a stack trace from stack depot
  *
- * @stack:		Stack depot handle which was returned from
- *			stack_depot_save().
- *
+ * @stack:	Stack depot handle returned from stack_depot_save()
  */
 void stack_depot_print(depot_stack_handle_t stack)
 {
···
 EXPORT_SYMBOL_GPL(stack_depot_print);
 
 /**
- * stack_depot_snprint - print stack entries from a depot into a buffer
+ * stack_depot_snprint - Print a stack trace from stack depot into a buffer
  *
- * @handle:	Stack depot handle which was returned from
- *		stack_depot_save().
+ * @handle:	Stack depot handle returned from stack_depot_save()
  * @buf:	Pointer to the print buffer
- *
  * @size:	Size of the print buffer
- *
  * @spaces:	Number of leading spaces to print
  *
- * Return: Number of bytes printed.
+ * Return: Number of bytes printed
  */
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 			int spaces)
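To illustrate the save/fetch API documented above, a caller might capture, deduplicate, and later print the current stack trace roughly as follows (a kernel-context sketch, not part of the patch; save_and_print_stack is a hypothetical helper, and the snippet assumes stack depot was initialized via one of the three ways described in stackdepot.h):

```c
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

static void save_and_print_stack(void)
{
	unsigned long entries[64];
	unsigned long *saved;
	unsigned int nr_entries;
	depot_stack_handle_t handle;

	/* Capture the current stack trace, skipping this frame. */
	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);

	/* Store it in stack depot; returns 0 if no space is left. */
	handle = stack_depot_save(entries, nr_entries, GFP_KERNEL);
	if (!handle)
		return;

	/* Fetch a pointer into the depot's own storage and print it. */
	nr_entries = stack_depot_fetch(handle, &saved);
	stack_trace_print(saved, nr_entries, 0);
}
```

Because identical stack traces map to the same handle, callers that record the same path many times (as KASAN does for allocation and free stacks) pay the storage cost only once.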