Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

dm vdo: fix kerneldoc warnings

Fix kerneldoc warnings across the dm-vdo target. Also
remove some unhelpful or inaccurate doc comments, and fix
some format inconsistencies that did not produce warnings.

No functional changes.

Suggested-by: Sunday Adelodun <adelodunolaoluwa@yahoo.com>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

Authored by Matthew Sakai and committed by Mikulas Patocka
4efe85b0 d0ac06ae

+298 -169
+1 -1
drivers/md/dm-vdo/action-manager.c
···
  * @actions: The two action slots.
  * @current_action: The current action slot.
  * @zones: The number of zones in which an action is to be applied.
- * @Scheduler: A function to schedule a default next action.
+ * @scheduler: A function to schedule a default next action.
  * @get_zone_thread_id: A function to get the id of the thread on which to apply an action to a
  *                      zone.
  * @initiator_thread_id: The ID of the thread on which actions may be initiated.
+50 -25
drivers/md/dm-vdo/admin-state.c
···
 /**
  * get_next_state() - Determine the state which should be set after a given operation completes
  *                    based on the operation and the current state.
- * @operation The operation to be started.
+ * @state: The current admin state.
+ * @operation: The operation to be started.
  *
  * Return: The state to set when the operation completes or NULL if the operation can not be
  * started in the current state.
···
 /**
  * vdo_finish_operation() - Finish the current operation.
+ * @state: The current admin state.
+ * @result: The result of the operation.
  *
  * Will notify the operation waiter if there is one. This method should be used for operations
  * started with vdo_start_operation(). For operations which were started with vdo_start_draining(),
···
 /**
  * begin_operation() - Begin an operation if it may be started given the current state.
- * @waiter A completion to notify when the operation is complete; may be NULL.
- * @initiator The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
+ * @state: The current admin state.
+ * @operation: The operation to be started.
+ * @waiter: A completion to notify when the operation is complete; may be NULL.
+ * @initiator: The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
  *
  * Return: VDO_SUCCESS or an error.
  */
···
 /**
  * start_operation() - Start an operation if it may be started given the current state.
- * @waiter A completion to notify when the operation is complete.
- * @initiator The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
+ * @state: The current admin state.
+ * @operation: The operation to be started.
+ * @waiter: A completion to notify when the operation is complete; may be NULL.
+ * @initiator: The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
  *
  * Return: true if the operation was started.
  */
···
 /**
  * check_code() - Check the result of a state validation.
- * @valid true if the code is of an appropriate type.
- * @code The code which failed to be of the correct type.
- * @what What the code failed to be, for logging.
- * @waiter The completion to notify of the error; may be NULL.
+ * @valid: True if the code is of an appropriate type.
+ * @code: The code which failed to be of the correct type.
+ * @what: What the code failed to be, for logging.
+ * @waiter: The completion to notify of the error; may be NULL.
  *
  * If the result failed, log an invalid state error and, if there is a waiter, notify it.
  *
···
 /**
  * assert_vdo_drain_operation() - Check that an operation is a drain.
- * @waiter The completion to finish with an error if the operation is not a drain.
+ * @operation: The operation to check.
+ * @waiter: The completion to finish with an error if the operation is not a drain.
  *
  * Return: true if the specified operation is a drain.
  */
···
 /**
  * vdo_start_draining() - Initiate a drain operation if the current state permits it.
- * @operation The type of drain to initiate.
- * @waiter The completion to notify when the drain is complete.
- * @initiator The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
+ * @state: The current admin state.
+ * @operation: The type of drain to initiate.
+ * @waiter: The completion to notify when the drain is complete.
+ * @initiator: The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
  *
  * Return: true if the drain was initiated, if not the waiter will be notified.
  */
···
 /**
  * vdo_finish_draining() - Finish a drain operation if one was in progress.
+ * @state: The current admin state.
  *
  * Return: true if the state was draining; will notify the waiter if so.
  */
···
 /**
  * vdo_finish_draining_with_result() - Finish a drain operation with a status code.
+ * @state: The current admin state.
+ * @result: The result of the drain operation.
  *
  * Return: true if the state was draining; will notify the waiter if so.
  */
···
 /**
  * vdo_assert_load_operation() - Check that an operation is a load.
- * @waiter The completion to finish with an error if the operation is not a load.
+ * @operation: The operation to check.
+ * @waiter: The completion to finish with an error if the operation is not a load.
  *
  * Return: true if the specified operation is a load.
  */
···
 /**
  * vdo_start_loading() - Initiate a load operation if the current state permits it.
- * @operation The type of load to initiate.
- * @waiter The completion to notify when the load is complete (may be NULL).
- * @initiator The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
+ * @state: The current admin state.
+ * @operation: The type of load to initiate.
+ * @waiter: The completion to notify when the load is complete; may be NULL.
+ * @initiator: The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
  *
  * Return: true if the load was initiated, if not the waiter will be notified.
  */
···
 /**
  * vdo_finish_loading() - Finish a load operation if one was in progress.
+ * @state: The current admin state.
  *
  * Return: true if the state was loading; will notify the waiter if so.
  */
···
 /**
  * vdo_finish_loading_with_result() - Finish a load operation with a status code.
- * @result The result of the load operation.
+ * @state: The current admin state.
+ * @result: The result of the load operation.
  *
  * Return: true if the state was loading; will notify the waiter if so.
  */
···
 /**
  * assert_vdo_resume_operation() - Check whether an admin_state_code is a resume operation.
- * @waiter The completion to notify if the operation is not a resume operation; may be NULL.
+ * @operation: The operation to check.
+ * @waiter: The completion to notify if the operation is not a resume operation; may be NULL.
  *
  * Return: true if the code is a resume operation.
  */
···
 /**
  * vdo_start_resuming() - Initiate a resume operation if the current state permits it.
- * @operation The type of resume to start.
- * @waiter The completion to notify when the resume is complete (may be NULL).
- * @initiator The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
+ * @state: The current admin state.
+ * @operation: The type of resume to start.
+ * @waiter: The completion to notify when the resume is complete; may be NULL.
+ * @initiator: The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
  *
  * Return: true if the resume was initiated, if not the waiter will be notified.
  */
···
 /**
  * vdo_finish_resuming() - Finish a resume operation if one was in progress.
+ * @state: The current admin state.
  *
  * Return: true if the state was resuming; will notify the waiter if so.
  */
···
 /**
  * vdo_finish_resuming_with_result() - Finish a resume operation with a status code.
- * @result The result of the resume operation.
+ * @state: The current admin state.
+ * @result: The result of the resume operation.
  *
  * Return: true if the state was resuming; will notify the waiter if so.
  */
···
 /**
  * vdo_resume_if_quiescent() - Change the state to normal operation if the current state is
  *                             quiescent.
+ * @state: The current admin state.
  *
  * Return: VDO_SUCCESS if the state resumed, VDO_INVALID_ADMIN_STATE otherwise.
  */
···
 /**
  * vdo_start_operation() - Attempt to start an operation.
+ * @state: The current admin state.
+ * @operation: The operation to attempt to start.
  *
  * Return: VDO_SUCCESS if the operation was started, VDO_INVALID_ADMIN_STATE if not
  */
···
 /**
  * vdo_start_operation_with_waiter() - Attempt to start an operation.
- * @waiter the completion to notify when the operation completes or fails to start; may be NULL.
- * @initiator The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
+ * @state: The current admin state.
+ * @operation: The operation to attempt to start.
+ * @waiter: The completion to notify when the operation completes or fails to start; may be NULL.
+ * @initiator: The vdo_admin_initiator_fn to call if the operation may begin; may be NULL.
  *
  * Return: VDO_SUCCESS if the operation was started, VDO_INVALID_ADMIN_STATE if not
  */
+43 -8
drivers/md/dm-vdo/block-map.c
···
 /**
  * initialize_info() - Initialize all page info structures and put them on the free list.
+ * @cache: The page cache.
  *
  * Return: VDO_SUCCESS or an error.
  */
···
 /**
  * allocate_cache_components() - Allocate components of the cache which require their own
  *                               allocation.
+ * @cache: The page cache.
  *
  * The caller is responsible for all clean up on errors.
  *
···
 /**
  * assert_on_cache_thread() - Assert that a function has been called on the VDO page cache's
  *                            thread.
+ * @cache: The page cache.
+ * @function_name: The function name to report if the assertion fails.
  */
 static inline void assert_on_cache_thread(struct vdo_page_cache *cache,
					   const char *function_name)
···
 /**
  * get_page_state_name() - Return the name of a page state.
+ * @state: The page state to describe.
  *
  * If the page state is invalid a static string is returned and the invalid state is logged.
  *
···
 /**
  * set_info_state() - Set the state of a page_info and put it on the right list, adjusting
  *                    counters.
+ * @info: The page info to update.
+ * @new_state: The new state to set.
  */
 static void set_info_state(struct page_info *info, enum vdo_page_buffer_state new_state)
 {
···
 /**
  * find_free_page() - Find a free page.
+ * @cache: The page cache.
  *
  * Return: A pointer to the page info structure (if found), NULL otherwise.
  */
···
 /**
  * find_page() - Find the page info (if any) associated with a given pbn.
+ * @cache: The page cache.
  * @pbn: The absolute physical block number of the page.
  *
  * Return: The page info for the page if available, or NULL if not.
···
 /**
  * select_lru_page() - Determine which page is least recently used.
+ * @cache: The page cache.
  *
  * Picks the least recently used from among the non-busy entries at the front of each of the lru
  * list. Since whenever we mark a page busy we also put it to the end of the list it is unlikely
···
 /**
  * distribute_page_over_waitq() - Complete a waitq of VDO page completions with a page result.
+ * @info: The loaded page info.
+ * @waitq: The list of waiting data_vios.
  *
  * Upon completion the waitq will be empty.
  *
···
 /**
  * set_persistent_error() - Set a persistent error which all requests will receive in the future.
+ * @cache: The page cache.
  * @context: A string describing what triggered the error.
+ * @result: The error result to set on the cache.
  *
  * Once triggered, all enqueued completions will get this error. Any future requests will result in
  * this error as well.
···
 /**
  * validate_completed_page() - Check that a page completion which is being freed to the cache
  *                             referred to a valid page and is in a valid state.
+ * @completion: The page completion to check.
  * @writable: Whether a writable page is required.
  *
  * Return: VDO_SUCCESS if the page was valid, otherwise as error
···
 /**
  * launch_page_load() - Begin the process of loading a page.
+ * @info: The page info to launch.
+ * @pbn: The absolute physical block number of the page to load.
  *
  * Return: VDO_SUCCESS or an error code.
  */
···
 /**
  * schedule_page_save() - Add a page to the outgoing list of pages waiting to be saved.
+ * @info: The page info to save.
  *
  * Once in the list, a page may not be used until it has been written out.
  */
···
 /**
  * launch_page_save() - Add a page to outgoing pages waiting to be saved, and then start saving
  *                      pages if another save is not in progress.
+ * @info: The page info to save.
  */
 static void launch_page_save(struct page_info *info)
 {
···
 /**
  * completion_needs_page() - Determine whether a given vdo_page_completion (as a waiter) is
  *                           requesting a given page number.
+ * @waiter: The page completion waiter to check.
  * @context: A pointer to the pbn of the desired page.
  *
  * Implements waiter_match_fn.
···
 /**
  * allocate_free_page() - Allocate a free page to the first completion in the waiting queue, and
  *                        any other completions that match it in page number.
+ * @info: The page info to allocate a page for.
  */
 static void allocate_free_page(struct page_info *info)
 {
···
 /**
  * discard_a_page() - Begin the process of discarding a page.
+ * @cache: The page cache.
  *
  * If no page is discardable, increments a count of deferred frees so that the next release of a
  * page which is no longer busy will kick off another discard cycle. This is an indication that the
···
 	launch_page_save(info);
 }
 
-/**
- * discard_page_for_completion() - Helper used to trigger a discard so that the completion can get
- *                                 a different page.
- */
 static void discard_page_for_completion(struct vdo_page_completion *vdo_page_comp)
 {
 	struct vdo_page_cache *cache = vdo_page_comp->cache;
···
 /**
  * vdo_release_page_completion() - Release a VDO Page Completion.
+ * @completion: The page completion to release.
  *
  * The page referenced by this completion (if any) will no longer be held busy by this completion.
  * If a page becomes discardable and there are completions awaiting free pages then a new round of
···
 	}
 }
 
-/**
- * load_page_for_completion() - Helper function to load a page as described by a VDO Page
- *                              Completion.
- */
 static void load_page_for_completion(struct page_info *info,
				      struct vdo_page_completion *vdo_page_comp)
 {
···
 /**
  * vdo_invalidate_page_cache() - Invalidate all entries in the VDO page cache.
+ * @cache: The page cache.
  *
  * There must not be any dirty pages in the cache.
  *
···
 /**
  * get_tree_page_by_index() - Get the tree page for a given height and page index.
+ * @forest: The block map forest.
+ * @root_index: The root index of the tree to search.
+ * @height: The height in the tree.
+ * @page_index: The page index.
  *
  * Return: The requested page.
  */
···
 /**
  * vdo_find_block_map_slot() - Find the block map slot in which the block map entry for a data_vio
  *                             resides and cache that result in the data_vio.
+ * @data_vio: The data vio.
  *
  * All ancestors in the tree will be allocated or loaded, as needed.
  */
···
 /**
  * make_forest() - Make a collection of trees for a block_map, expanding the existing forest if
  *                 there is one.
+ * @map: The block map.
  * @entries: The number of entries the block map will hold.
  *
  * Return: VDO_SUCCESS or an error.
···
 /**
  * replace_forest() - Replace a block_map's forest with the already-prepared larger forest.
+ * @map: The block map.
  */
 static void replace_forest(struct block_map *map)
 {
···
 /**
  * finish_cursor() - Finish the traversal of a single tree. If it was the last cursor, finish the
  *                   traversal.
+ * @cursor: The cursor to complete.
  */
 static void finish_cursor(struct cursor *cursor)
 {
···
 /**
  * traverse() - Traverse a single block map tree.
+ * @cursor: A cursor tracking traversal progress.
  *
  * This is the recursive heart of the traversal process.
  */
···
 /**
  * launch_cursor() - Start traversing a single block map tree now that the cursor has a VIO with
  *                   which to load pages.
+ * @waiter: The parent of the cursor to launch.
  * @context: The pooled_vio just acquired.
  *
  * Implements waiter_callback_fn.
···
 /**
  * compute_boundary() - Compute the number of pages used at each level of the given root's tree.
+ * @map: The block map.
+ * @root_index: The tree root index.
  *
  * Return: The list of page counts as a boundary structure.
  */
···
 /**
  * vdo_traverse_forest() - Walk the entire forest of a block map.
+ * @map: The block map.
  * @callback: A function to call with the pbn of each allocated node in the forest.
  * @completion: The completion to notify on each traversed PBN, and when traversal completes.
  */
···
 /**
  * initialize_block_map_zone() - Initialize the per-zone portions of the block map.
+ * @map: The block map.
+ * @zone_number: The zone to initialize.
+ * @cache_size: The total block map cache size.
  * @maximum_age: The number of journal blocks before a dirtied page is considered old and must be
  *               written out.
  */
···
 /**
  * clear_mapped_location() - Clear a data_vio's mapped block location, setting it to be unmapped.
+ * @data_vio: The data vio.
  *
  * This indicates the block map entry for the logical block is either unmapped or corrupted.
  */
···
 /**
  * set_mapped_location() - Decode and validate a block map entry, and set the mapped location of a
  *                         data_vio.
+ * @data_vio: The data vio.
+ * @entry: The new mapped entry to set.
  *
  * Return: VDO_SUCCESS or VDO_BAD_MAPPING if the map entry is invalid or an error code for any
  * other failure
+5
drivers/md/dm-vdo/completion.c
···
 /**
  * vdo_set_completion_result() - Set the result of a completion.
+ * @completion: The completion to update.
+ * @result: The result to set.
  *
  * Older errors will not be masked.
  */
···
 /**
  * vdo_launch_completion_with_priority() - Run or enqueue a completion.
+ * @completion: The completion to launch.
  * @priority: The priority at which to enqueue the completion.
  *
  * If called on the correct thread (i.e. the one specified in the completion's callback_thread_id
···
 /**
  * vdo_requeue_completion_if_needed() - Requeue a completion if not called on the specified thread.
+ * @completion: The completion to requeue.
+ * @callback_thread_id: The thread on which to requeue the completion.
  *
  * Return: True if the completion was requeued; callers may not access the completion in this case.
  */
+32 -2
drivers/md/dm-vdo/data-vio.c
···
 /**
  * check_for_drain_complete_locked() - Check whether a data_vio_pool has no outstanding data_vios
  *                                     or waiters while holding the pool's lock.
+ * @pool: The data_vio pool.
  */
 static bool check_for_drain_complete_locked(struct data_vio_pool *pool)
 {
···
 /**
  * cancel_data_vio_compression() - Prevent this data_vio from being compressed or packed.
+ * @data_vio: The data_vio.
  *
  * Return: true if the data_vio is in the packer and the caller was the first caller to cancel it.
  */
···
 /**
  * launch_data_vio() - (Re)initialize a data_vio to have a new logical block number, keeping the
  *                     same parent and other state and send it on its way.
+ * @data_vio: The data_vio to launch.
+ * @lbn: The logical block number.
  */
 static void launch_data_vio(struct data_vio *data_vio, logical_block_number_t lbn)
 {
···
 /**
  * schedule_releases() - Ensure that release processing is scheduled.
+ * @pool: The data_vio pool.
  *
  * If this call switches the state to processing, enqueue. Otherwise, some other thread has already
  * done so.
···
 /**
  * initialize_data_vio() - Allocate the components of a data_vio.
+ * @data_vio: The data_vio to initialize.
+ * @vdo: The vdo containing the data_vio.
  *
  * The caller is responsible for cleaning up the data_vio on error.
  *
···
 /**
  * free_data_vio_pool() - Free a data_vio_pool and the data_vios in it.
+ * @pool: The data_vio pool to free.
  *
  * All data_vios must be returned to the pool before calling this function.
  */
···
 /**
  * vdo_launch_bio() - Acquire a data_vio from the pool, assign the bio to it, and launch it.
+ * @pool: The data_vio pool.
+ * @bio: The bio to launch.
  *
  * This will block if data_vios or discard permits are not available.
  */
···
 /**
  * drain_data_vio_pool() - Wait asynchronously for all data_vios to be returned to the pool.
+ * @pool: The data_vio pool.
  * @completion: The completion to notify when the pool has drained.
  */
 void drain_data_vio_pool(struct data_vio_pool *pool, struct vdo_completion *completion)
···
 /**
  * resume_data_vio_pool() - Resume a data_vio pool.
+ * @pool: The data_vio pool.
  * @completion: The completion to notify when the pool has resumed.
  */
 void resume_data_vio_pool(struct data_vio_pool *pool, struct vdo_completion *completion)
···
 /**
  * dump_data_vio_pool() - Dump a data_vio pool to the log.
+ * @pool: The data_vio pool.
  * @dump_vios: Whether to dump the details of each busy data_vio as well.
  */
 void dump_data_vio_pool(struct data_vio_pool *pool, bool dump_vios)
···
 /**
  * release_allocated_lock() - Release the PBN lock and/or the reference on the allocated block at
  *                            the end of processing a data_vio.
+ * @completion: The data_vio holding the lock.
  */
 static void release_allocated_lock(struct vdo_completion *completion)
 {
···
 /**
  * release_logical_lock() - Release the logical block lock and flush generation lock at the end of
  *                          processing a data_vio.
+ * @completion: The data_vio holding the lock.
  */
 static void release_logical_lock(struct vdo_completion *completion)
 {
···
 /**
  * finish_cleanup() - Make some assertions about a data_vio which has finished cleaning up.
+ * @data_vio: The data_vio.
  *
  * If it is part of a multi-block discard, starts on the next block, otherwise, returns it to the
  * pool.
···
 /**
  * get_data_vio_operation_name() - Get the name of the last asynchronous operation performed on a
  *                                 data_vio.
+ * @data_vio: The data_vio.
  */
 const char *get_data_vio_operation_name(struct data_vio *data_vio)
 {
···
 /**
  * data_vio_allocate_data_block() - Allocate a data block.
- *
+ * @data_vio: The data_vio.
  * @write_lock_type: The type of write lock to obtain on the block.
  * @callback: The callback which will attempt an allocation in the current zone and continue if it
  *            succeeds.
···
 /**
  * release_data_vio_allocation_lock() - Release the PBN lock on a data_vio's allocated block.
+ * @data_vio: The data_vio.
  * @reset: If true, the allocation will be reset (i.e. any allocated pbn will be forgotten).
  *
  * If the reference to the locked block is still provisional, it will be released as well.
···
 /**
  * uncompress_data_vio() - Uncompress the data a data_vio has just read.
+ * @data_vio: The data_vio.
  * @mapping_state: The mapping state indicating which fragment to decompress.
  * @buffer: The buffer to receive the uncompressed data.
  */
···
 /**
  * read_block() - Read a block asynchronously.
+ * @completion: The data_vio doing the read.
  *
  * This is the callback registered in read_block_mapping().
  */
···
 /**
  * read_old_block_mapping() - Get the previous PBN/LBN mapping of an in-progress write.
+ * @completion: The data_vio doing the read.
  *
  * Gets the previous PBN mapped to this LBN from the block map, so as to make an appropriate
  * journal entry referencing the removal of this LBN->PBN mapping.
···
 /**
  * pack_compressed_data() - Attempt to pack the compressed data_vio into a block.
+ * @completion: The data_vio.
  *
  * This is the callback registered in launch_compress_data_vio().
  */
···
 /**
  * compress_data_vio() - Do the actual work of compressing the data on a CPU queue.
+ * @completion: The data_vio.
  *
  * This callback is registered in launch_compress_data_vio().
  */
···
 /**
  * launch_compress_data_vio() - Continue a write by attempting to compress the data.
+ * @data_vio: The data_vio.
  *
  * This is a re-entry point to vio_write used by hash locks.
  */
···
 /**
  * hash_data_vio() - Hash the data in a data_vio and set the hash zone (which also flags the record
  *                   name as set).
-
+ * @completion: The data_vio.
+ *
  * This callback is registered in prepare_for_dedupe().
  */
 static void hash_data_vio(struct vdo_completion *completion)
···
 /**
  * write_bio_finished() - This is the bio_end_io function registered in write_block() to be called
  *                        when a data_vio's write to the underlying storage has completed.
+ * @bio: The bio to update.
  */
 static void write_bio_finished(struct bio *bio)
 {
···
 /**
  * acknowledge_write_callback() - Acknowledge a write to the requestor.
+ * @completion: The data_vio.
  *
  * This callback is registered in allocate_block() and continue_write_with_block_map_slot().
  */
···
 /**
  * allocate_block() - Attempt to allocate a block in the current allocation zone.
+ * @completion: The data_vio.
  *
  * This callback is registered in continue_write_with_block_map_slot().
  */
···
 /**
  * handle_allocation_error() - Handle an error attempting to allocate a block.
+ * @completion: The data_vio.
  *
  * This error handler is registered in continue_write_with_block_map_slot().
  */
···
 /**
  * continue_data_vio_with_block_map_slot() - Read the data_vio's mapping from the block map.
+ * @completion: The data_vio to continue.
  *
  * This callback is registered in launch_read_data_vio().
  */
+18 -24
drivers/md/dm-vdo/dedupe.c
··· 917 917 918 918 /** 919 919 * enter_forked_lock() - Bind the data_vio to a new hash lock. 920 + * @waiter: The data_vio's waiter link. 921 + * @context: The new hash lock. 920 922 * 921 923 * Implements waiter_callback_fn. Binds the data_vio that was waiting to a new hash lock and waits 922 924 * on that lock. ··· 973 971 * path. 974 972 * @lock: The hash lock. 975 973 * @data_vio: The data_vio to deduplicate using the hash lock. 976 - * @has_claim: true if the data_vio already has claimed an increment from the duplicate lock. 974 + * @has_claim: True if the data_vio already has claimed an increment from the duplicate lock. 977 975 * 978 976 * If no increments are available, this will roll over to a new hash lock and launch the data_vio 979 977 * as the writing agent for that lock. ··· 998 996 * true copy of their data on disk. 999 997 * @lock: The hash lock. 1000 998 * @agent: The data_vio acting as the agent for the lock. 1001 - * @agent_is_done: true only if the agent has already written or deduplicated against its data. 999 + * @agent_is_done: True only if the agent has already written or deduplicated against its data. 1002 1000 * 1003 1001 * If the agent itself needs to deduplicate, an increment for it must already have been claimed 1004 1002 * from the duplicate lock, ensuring the hash lock will still have a data_vio holding it. ··· 2148 2146 /** 2149 2147 * report_dedupe_timeouts() - Record and eventually report that some dedupe requests reached their 2150 2148 * expiration time without getting answers, so we timed them out. 2151 - * @zones: the hash zones. 2152 - * @timeouts: the number of newly timed out requests. 2149 + * @zones: The hash zones. 2150 + * @timeouts: The number of newly timed out requests. 2153 2151 */ 2154 2152 static void report_dedupe_timeouts(struct hash_zones *zones, unsigned int timeouts) 2155 2153 { ··· 2511 2509 2512 2510 /** 2513 2511 * suspend_index() - Suspend the UDS index prior to draining hash zones. 
2512 + * @context: Not used. 2513 + * @completion: The completion for the suspend operation. 2514 2514 * 2515 2515 * Implements vdo_action_preamble_fn 2516 2516 */ ··· 2525 2521 initiate_suspend_index); 2526 2522 } 2527 2523 2528 - /** 2529 - * initiate_drain() - Initiate a drain. 2530 - * 2531 - * Implements vdo_admin_initiator_fn. 2532 - */ 2524 + /** Implements vdo_admin_initiator_fn. */ 2533 2525 static void initiate_drain(struct admin_state *state) 2534 2526 { 2535 2527 check_for_drain_complete(container_of(state, struct hash_zone, state)); 2536 2528 } 2537 2529 2538 - /** 2539 - * drain_hash_zone() - Drain a hash zone. 2540 - * 2541 - * Implements vdo_zone_action_fn. 2542 - */ 2530 + /** Implements vdo_zone_action_fn. */ 2543 2531 static void drain_hash_zone(void *context, zone_count_t zone_number, 2544 2532 struct vdo_completion *parent) 2545 2533 { ··· 2568 2572 2569 2573 /** 2570 2574 * resume_index() - Resume the UDS index prior to resuming hash zones. 2575 + * @context: Not used. 2576 + * @parent: The completion for the resume operation. 2571 2577 * 2572 2578 * Implements vdo_action_preamble_fn 2573 2579 */ ··· 2600 2602 vdo_finish_completion(parent); 2601 2603 } 2602 2604 2603 - /** 2604 - * resume_hash_zone() - Resume a hash zone. 2605 - * 2606 - * Implements vdo_zone_action_fn. 2607 - */ 2605 + /** Implements vdo_zone_action_fn. */ 2608 2606 static void resume_hash_zone(void *context, zone_count_t zone_number, 2609 2607 struct vdo_completion *parent) 2610 2608 { ··· 2628 2634 /** 2629 2635 * get_hash_zone_statistics() - Add the statistics for this hash zone to the tally for all zones. 2630 2636 * @zone: The hash zone to query. 2631 - * @tally: The tally 2637 + * @tally: The tally. 2632 2638 */ 2633 2639 static void get_hash_zone_statistics(const struct hash_zone *zone, 2634 2640 struct hash_lock_statistics *tally) ··· 2674 2680 2675 2681 /** 2676 2682 * vdo_get_dedupe_statistics() - Tally the statistics from all the hash zones and the UDS index. 
2677 - * @zones: The hash zones to query 2678 - * @stats: A structure to store the statistics 2683 + * @zones: The hash zones to query. 2684 + * @stats: A structure to store the statistics. 2679 2685 * 2680 2686 * Return: The sum of the hash lock statistics from all hash zones plus the statistics from the UDS 2681 2687 * index ··· 2850 2856 2851 2857 /** 2852 2858 * acquire_context() - Acquire a dedupe context from a hash_zone if any are available. 2853 - * @zone: the hash zone 2859 + * @zone: The hash zone. 2854 2860 * 2855 - * Return: A dedupe_context or NULL if none are available 2861 + * Return: A dedupe_context or NULL if none are available. 2856 2862 */ 2857 2863 static struct dedupe_context * __must_check acquire_context(struct hash_zone *zone) 2858 2864 {
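Several of the dedupe.c callbacks documented above (enter_forked_lock, the drain and resume initiators) receive only an embedded link or admin_state pointer and recover the enclosing structure with container_of. A minimal userspace sketch of that pattern follows; the structure and callback names here are illustrative stand-ins, not the actual vdo definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace equivalent of the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct waiter {
	struct waiter *next;
};

/* Stand-in for a structure like data_vio with an embedded waiter link. */
struct data_vio_like {
	int id;
	struct waiter waiter;
};

static int callback_saw_id;

/* A waiter_callback_fn-style callback: it is handed the embedded link and
 * recovers the containing structure before acting on it. */
static void example_callback(struct waiter *w, void *context)
{
	struct data_vio_like *d = container_of(w, struct data_vio_like, waiter);

	(void)context;
	callback_saw_id = d->id;
}
```

This is why the @waiter kerneldoc entries added in this patch describe the parameter as "the data_vio's waiter link" rather than the data_vio itself: the callback's first argument is the embedded member, not the container.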
+3 -2
drivers/md/dm-vdo/dm-vdo-target.c
··· 1144 1144 /** 1145 1145 * get_thread_id_for_phase() - Get the thread id for the current phase of the admin operation in 1146 1146 * progress. 1147 + * @vdo: The vdo. 1147 1148 */ 1148 1149 static thread_id_t __must_check get_thread_id_for_phase(struct vdo *vdo) 1149 1150 { ··· 1189 1188 /** 1190 1189 * advance_phase() - Increment the phase of the current admin operation and prepare the admin 1191 1190 * completion to run on the thread for the next phase. 1192 - * @vdo: The on which an admin operation is being performed 1191 + * @vdo: The vdo on which an admin operation is being performed. 1193 1192 * 1194 - * Return: The current phase 1193 + * Return: The current phase. 1195 1194 */ 1196 1195 static u32 advance_phase(struct vdo *vdo) 1197 1196 {
+24 -2
drivers/md/dm-vdo/encodings.c
··· 432 432 /** 433 433 * vdo_compute_new_forest_pages() - Compute the number of pages which must be allocated at each 434 434 * level in order to grow the forest to a new number of entries. 435 + * @root_count: The number of block map roots. 436 + * @old_sizes: The sizes of the old tree segments. 435 437 * @entries: The new number of entries the block map must address. 438 + * @new_sizes: The sizes of the new tree segments. 436 439 * 437 440 * Return: The total number of non-leaf pages required. 438 441 */ ··· 465 462 466 463 /** 467 464 * encode_recovery_journal_state_7_0() - Encode the state of a recovery journal. 465 + * @buffer: A buffer to store the encoding. 466 + * @offset: The offset in the buffer at which to encode. 467 + * @state: The recovery journal state to encode. 468 468 * 469 469 * Return: VDO_SUCCESS or an error code. 470 470 */ ··· 490 484 /** 491 485 * decode_recovery_journal_state_7_0() - Decode the state of a recovery journal saved in a buffer. 492 486 * @buffer: The buffer containing the saved state. 487 + * @offset: The offset to start decoding from. 493 488 * @state: A pointer to a recovery journal state to hold the result of a successful decode. 494 489 * 495 490 * Return: VDO_SUCCESS or an error code. ··· 551 544 552 545 /** 553 546 * encode_slab_depot_state_2_0() - Encode the state of a slab depot into a buffer. 547 + * @buffer: A buffer to store the encoding. 548 + * @offset: The offset in the buffer at which to encode. 549 + * @state: The slab depot state to encode. 554 550 */ 555 551 static void encode_slab_depot_state_2_0(u8 *buffer, size_t *offset, 556 552 struct slab_depot_state_2_0 state) ··· 580 570 581 571 /** 582 572 * decode_slab_depot_state_2_0() - Decode slab depot component state version 2.0 from a buffer. 573 + * @buffer: The buffer being decoded. 574 + * @offset: The offset to start decoding from. 575 + * @state: A pointer to a slab depot state to hold the decoded result. 
583 576 * 584 577 * Return: VDO_SUCCESS or an error code. 585 578 */ ··· 1169 1156 1170 1157 /** 1171 1158 * decode_vdo_component() - Decode the component data for the vdo itself out of the super block. 1159 + * @buffer: The buffer being decoded. 1160 + * @offset: The offset to start decoding from. 1161 + * @component: The vdo component structure to decode into. 1172 1162 * 1173 1163 * Return: VDO_SUCCESS or an error. 1174 1164 */ ··· 1306 1290 * understand. 1307 1291 * @buffer: The buffer being decoded. 1308 1292 * @offset: The offset to start decoding from. 1309 - * @geometry: The vdo geometry 1293 + * @geometry: The vdo geometry. 1310 1294 * @states: An object to hold the successfully decoded state. 1311 1295 * 1312 1296 * Return: VDO_SUCCESS or an error. ··· 1345 1329 /** 1346 1330 * vdo_decode_component_states() - Decode the payload of a super block. 1347 1331 * @buffer: The buffer containing the encoded super block contents. 1348 - * @geometry: The vdo geometry 1332 + * @geometry: The vdo geometry. 1349 1333 * @states: A pointer to hold the decoded states. 1350 1334 * 1351 1335 * Return: VDO_SUCCESS or an error. ··· 1399 1383 1400 1384 /** 1401 1385 * vdo_encode_component_states() - Encode the state of all vdo components in the super block. 1386 + * @buffer: A buffer to store the encoding. 1387 + * @offset: The offset into the buffer to start the encoding. 1388 + * @states: The component states to encode. 1402 1389 */ 1403 1390 static void vdo_encode_component_states(u8 *buffer, size_t *offset, 1404 1391 const struct vdo_component_states *states) ··· 1421 1402 1422 1403 /** 1423 1404 * vdo_encode_super_block() - Encode a super block into its on-disk representation. 1405 + * @buffer: A buffer to store the encoding. 1406 + * @states: The component states to encode. 
1424 1407 */ 1425 1408 void vdo_encode_super_block(u8 *buffer, struct vdo_component_states *states) 1426 1409 { ··· 1447 1426 1448 1427 /** 1449 1428 * vdo_decode_super_block() - Decode a super block from its on-disk representation. 1429 + * @buffer: The buffer to decode from. 1450 1430 */ 1451 1431 int vdo_decode_super_block(u8 *buffer) 1452 1432 {
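The @buffer/@offset pairs documented throughout encodings.c follow one convention: each encoder writes at *offset and advances it, so encoders can be chained, and decoders mirror the same cursor. A self-contained sketch of that convention, using hypothetical helper names rather than the vdo encoders:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical encoder following the buffer/offset convention: write a
 * little-endian u64 at *offset, then advance *offset past it. */
static void encode_u64_le(uint8_t *buffer, size_t *offset, uint64_t value)
{
	for (int i = 0; i < 8; i++)
		buffer[(*offset)++] = (uint8_t)(value >> (8 * i));
}

/* Matching decoder: read at *offset and advance it, so a sequence of
 * decode calls walks the buffer exactly as the encode calls did. */
static uint64_t decode_u64_le(const uint8_t *buffer, size_t *offset)
{
	uint64_t value = 0;

	for (int i = 0; i < 8; i++)
		value |= (uint64_t)buffer[(*offset)++] << (8 * i);
	return value;
}
```

Chaining two encodes and two decodes round-trips the values and leaves both cursors at the same final offset, which is the property the encode_*/decode_* pairs above rely on.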
+1 -5
drivers/md/dm-vdo/flush.c
··· 522 522 vdo_enqueue_completion(completion, BIO_Q_FLUSH_PRIORITY); 523 523 } 524 524 525 - /** 526 - * initiate_drain() - Initiate a drain. 527 - * 528 - * Implements vdo_admin_initiator_fn. 529 - */ 525 + /** Implements vdo_admin_initiator_fn. */ 530 526 static void initiate_drain(struct admin_state *state) 531 527 { 532 528 check_for_drain_complete(container_of(state, struct flusher, state));
+7
drivers/md/dm-vdo/funnel-workqueue.c
··· 372 372 /** 373 373 * vdo_make_work_queue() - Create a work queue; if multiple threads are requested, completions will 374 374 * be distributed to them in round-robin fashion. 375 + * @thread_name_prefix: A prefix for the thread names to identify them as a vdo thread. 376 + * @name: A base name to identify this queue. 377 + * @owner: The vdo_thread structure to manage this queue. 378 + * @type: The type of queue to create. 379 + * @thread_count: The number of actual threads handling this queue. 380 + * @thread_privates: An array of private contexts, one for each thread; may be NULL. 381 + * @queue_ptr: A pointer to return the new work queue. 375 382 * 376 383 * Each queue is associated with a struct vdo_thread which has a single vdo thread id. Regardless 377 384 * of the actual number of queues and threads allocated here, code outside of the queue
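The vdo_make_work_queue() comment above says completions are distributed to multiple threads round-robin. A toy model of that dispatch policy, kept deliberately simple; the real funnel work queue's internals differ, this only illustrates the rotor idea:

```c
#include <assert.h>

/* Illustrative stand-in for a multi-threaded queue: a rotor remembers
 * which thread should receive the next completion. */
struct rr_queue {
	unsigned int thread_count;
	unsigned int rotor;
};

/* Pick the next thread in round-robin order and advance the rotor. */
static unsigned int rr_pick_thread(struct rr_queue *queue)
{
	unsigned int thread = queue->rotor;

	queue->rotor = (queue->rotor + 1) % queue->thread_count;
	return thread;
}
```

Regardless of how many threads back the queue, callers see a single vdo_thread with one thread id, which is why the kerneldoc stresses that @thread_count is the number of *actual* threads handling the queue.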
+14 -12
drivers/md/dm-vdo/io-submitter.c
··· 118 118 /** 119 119 * vdo_submit_vio() - Submits a vio's bio to the underlying block device. May block if the device 120 120 * is busy. This callback should be used by vios which did not attempt to merge. 121 + * @completion: The vio to submit. 121 122 */ 122 123 void vdo_submit_vio(struct vdo_completion *completion) 123 124 { ··· 134 133 * The list will always contain at least one entry (the bio for the vio on which it is called), but 135 134 * other bios may have been merged with it as well. 136 135 * 137 - * Return: bio The head of the bio list to submit. 136 + * Return: The head of the bio list to submit. 138 137 */ 139 138 static struct bio *get_bio_list(struct vio *vio) 140 139 { ··· 159 158 /** 160 159 * submit_data_vio() - Submit a data_vio's bio to the storage below along with 161 160 * any bios that have been merged with it. 161 + * @completion: The vio to submit. 162 162 * 163 163 * Context: This call may block and so should only be called from a bio thread. 164 164 */ ··· 186 184 * There are two types of merging possible, forward and backward, which are distinguished by a flag 187 185 * that uses kernel elevator terminology. 188 186 * 189 - * Return: the vio to merge to, NULL if no merging is possible. 187 + * Return: The vio to merge to, NULL if no merging is possible. 190 188 */ 191 189 static struct vio *get_mergeable_locked(struct int_map *map, struct vio *vio, 192 190 bool back_merge) ··· 264 262 * 265 263 * Currently this is only used for data_vios, but is broken out for future use with metadata vios. 266 264 * 267 - * Return: whether or not the vio was merged. 265 + * Return: Whether or not the vio was merged. 268 266 */ 269 267 static bool try_bio_map_merge(struct vio *vio) 270 268 { ··· 308 306 309 307 /** 310 308 * vdo_submit_data_vio() - Submit I/O for a data_vio. 311 - * @data_vio: the data_vio for which to issue I/O. 309 + * @data_vio: The data_vio for which to issue I/O. 
312 310 * 313 311 * If possible, this I/O will be merged with other pending I/Os. Otherwise, the data_vio will be sent to 314 312 * the appropriate bio zone directly. ··· 323 321 324 322 /** 325 323 * __submit_metadata_vio() - Submit I/O for a metadata vio. 326 - * @vio: the vio for which to issue I/O 327 - * @physical: the physical block number to read or write 328 - * @callback: the bio endio function which will be called after the I/O completes 329 - * @error_handler: the handler for submission or I/O errors (may be NULL) 330 - * @operation: the type of I/O to perform 331 - * @data: the buffer to read or write (may be NULL) 332 - * @size: the I/O amount in bytes 324 + * @vio: The vio for which to issue I/O. 325 + * @physical: The physical block number to read or write. 326 + * @callback: The bio endio function which will be called after the I/O completes. 327 + * @error_handler: The handler for submission or I/O errors; may be NULL. 328 + * @operation: The type of I/O to perform. 329 + * @data: The buffer to read or write; may be NULL. 330 + * @size: The I/O amount in bytes. 333 331 * 334 332 * The vio is enqueued on a vdo bio queue so that bio submission (which may block) does not block 335 333 * other vdo threads. ··· 443 441 444 442 /** 445 443 * vdo_cleanup_io_submitter() - Tear down the io_submitter fields as needed for a physical layer. 446 - * @io_submitter: The I/O submitter data to tear down (may be NULL). 444 + * @io_submitter: The I/O submitter data to tear down; may be NULL. 447 445 */ 448 446 void vdo_cleanup_io_submitter(struct io_submitter *io_submitter) 449 447 {
+4 -16
drivers/md/dm-vdo/logical-zone.c
··· 159 159 vdo_finish_draining(&zone->state); 160 160 } 161 161 162 - /** 163 - * initiate_drain() - Initiate a drain. 164 - * 165 - * Implements vdo_admin_initiator_fn. 166 - */ 162 + /** Implements vdo_admin_initiator_fn. */ 167 163 static void initiate_drain(struct admin_state *state) 168 164 { 169 165 check_for_drain_complete(container_of(state, struct logical_zone, state)); 170 166 } 171 167 172 - /** 173 - * drain_logical_zone() - Drain a logical zone. 174 - * 175 - * Implements vdo_zone_action_fn. 176 - */ 168 + /** Implements vdo_zone_action_fn. */ 177 169 static void drain_logical_zone(void *context, zone_count_t zone_number, 178 170 struct vdo_completion *parent) 179 171 { ··· 184 192 parent); 185 193 } 186 194 187 - /** 188 - * resume_logical_zone() - Resume a logical zone. 189 - * 190 - * Implements vdo_zone_action_fn. 191 - */ 195 + /** Implements vdo_zone_action_fn. */ 192 196 static void resume_logical_zone(void *context, zone_count_t zone_number, 193 197 struct vdo_completion *parent) 194 198 { ··· 344 356 345 357 /** 346 358 * vdo_dump_logical_zone() - Dump information about a logical zone to the log for debugging. 347 - * @zone: The zone to dump 359 + * @zone: The zone to dump. 348 360 * 349 361 * Context: the information is dumped in a thread-unsafe fashion. 350 362 *
+6 -9
drivers/md/dm-vdo/packer.c
··· 35 35 /** 36 36 * vdo_get_compressed_block_fragment() - Get a reference to a compressed fragment from a compressed 37 37 * block. 38 - * @mapping_state [in] The mapping state for the look up. 39 - * @compressed_block [in] The compressed block that was read from disk. 40 - * @fragment_offset [out] The offset of the fragment within a compressed block. 41 - * @fragment_size [out] The size of the fragment. 38 + * @mapping_state: The mapping state describing the fragment. 39 + * @block: The compressed block that was read from disk. 40 + * @fragment_offset: The offset of the fragment within the compressed block. 41 + * @fragment_size: The size of the fragment. 42 42 * 43 43 * Return: If a valid compressed fragment is found, VDO_SUCCESS; otherwise, VDO_INVALID_FRAGMENT if 44 44 * the fragment is invalid. ··· 382 382 * @compression: The agent's compression_state to pack in to. 383 383 * @data_vio: The data_vio to pack. 384 384 * @offset: The offset into the compressed block at which to pack the fragment. 385 + * @slot: The slot number in the compressed block. 385 386 * @block: The compressed block which will be written out when batch is fully packed. 386 387 * 387 388 * Return: The new amount of space used. ··· 706 705 vdo_flush_packer(packer); 707 706 } 708 707 709 - /** 710 - * initiate_drain() - Initiate a drain. 711 - * 712 - * Implements vdo_admin_initiator_fn. 713 - */ 708 + /** Implements vdo_admin_initiator_fn. */ 714 709 static void initiate_drain(struct admin_state *state) 715 710 { 716 711 struct packer *packer = container_of(state, struct packer, state);
+3 -2
drivers/md/dm-vdo/physical-zone.c
··· 60 60 * vdo_is_pbn_read_lock() - Check whether a pbn_lock is a read lock. 61 61 * @lock: The lock to check. 62 62 * 63 - * Return: true if the lock is a read lock. 63 + * Return: True if the lock is a read lock. 64 64 */ 65 65 bool vdo_is_pbn_read_lock(const struct pbn_lock *lock) 66 66 { ··· 75 75 /** 76 76 * vdo_downgrade_pbn_write_lock() - Downgrade a PBN write lock to a PBN read lock. 77 77 * @lock: The PBN write lock to downgrade. 78 + * @compressed_write: True if the written block was a compressed block. 78 79 * 79 80 * The lock holder count is cleared and the caller is responsible for setting the new count. 80 81 */ ··· 583 582 * that fails try the next if possible. 584 583 * @data_vio: The data_vio needing an allocation. 585 584 * 586 - * Return: true if a block was allocated, if not the data_vio will have been dispatched so the 585 + * Return: True if a block was allocated, if not the data_vio will have been dispatched so the 587 586 * caller must not touch it. 588 587 */ 589 588 bool vdo_allocate_block_in_zone(struct data_vio *data_vio)
+17 -13
drivers/md/dm-vdo/recovery-journal.c
··· 109 109 * @journal: The recovery journal. 110 110 * @lock_number: The lock to check. 111 111 * 112 - * Return: true if the journal zone is locked. 112 + * Return: True if the journal zone is locked. 113 113 */ 114 114 static bool is_journal_zone_locked(struct recovery_journal *journal, 115 115 block_count_t lock_number) ··· 217 217 * Indicates it has any uncommitted entries, which includes both entries not written and entries 218 218 * written but not yet acknowledged. 219 219 * 220 - * Return: true if the block has any uncommitted entries. 220 + * Return: True if the block has any uncommitted entries. 221 221 */ 222 222 static inline bool __must_check is_block_dirty(const struct recovery_journal_block *block) 223 223 { ··· 228 228 * is_block_empty() - Check whether a journal block is empty. 229 229 * @block: The block to check. 230 230 * 231 - * Return: true if the block has no entries. 231 + * Return: True if the block has no entries. 232 232 */ 233 233 static inline bool __must_check is_block_empty(const struct recovery_journal_block *block) 234 234 { ··· 239 239 * is_block_full() - Check whether a journal block is full. 240 240 * @block: The block to check. 241 241 * 242 - * Return: true if the block is full. 242 + * Return: True if the block is full. 243 243 */ 244 244 static inline bool __must_check is_block_full(const struct recovery_journal_block *block) 245 245 { ··· 260 260 261 261 /** 262 262 * continue_waiter() - Release a data_vio from the journal. 263 + * @waiter: The data_vio waiting on journal activity. 264 + * @context: The result of the journal operation. 263 265 * 264 266 * Invoked whenever a data_vio is to be released from the journal, either because its entry was 265 267 * committed to disk, or because there was an error. Implements waiter_callback_fn. ··· 275 273 * has_block_waiters() - Check whether the journal has any waiters on any blocks. 276 274 * @journal: The journal in question. 
277 275 * 278 - * Return: true if any block has a waiter. 276 + * Return: True if any block has a waiter. 279 277 */ 280 278 static inline bool has_block_waiters(struct recovery_journal *journal) 281 279 { ··· 298 296 * suspend_lock_counter() - Prevent the lock counter from notifying. 299 297 * @counter: The counter. 300 298 * 301 - * Return: true if the lock counter was not notifying and hence the suspend was efficacious. 299 + * Return: True if the lock counter was not notifying and hence the suspend was efficacious. 302 300 */ 303 301 static bool suspend_lock_counter(struct lock_counter *counter) 304 302 { ··· 418 416 * 419 417 * The head is the lowest sequence number of the block map head and the slab journal head. 420 418 * 421 - * Return: the head of the journal. 419 + * Return: The head of the journal. 422 420 */ 423 421 static inline sequence_number_t get_recovery_journal_head(const struct recovery_journal *journal) 424 422 { ··· 537 535 * vdo_get_recovery_journal_length() - Get the number of usable recovery journal blocks. 538 536 * @journal_size: The size of the recovery journal in blocks. 539 537 * 540 - * Return: the number of recovery journal blocks usable for entries. 538 + * Return: The number of recovery journal blocks usable for entries. 541 539 */ 542 540 block_count_t vdo_get_recovery_journal_length(block_count_t journal_size) 543 541 { ··· 1080 1078 1081 1079 /** 1082 1080 * assign_entry() - Assign an entry waiter to the active block. 1081 + * @waiter: The data_vio. 1082 + * @context: The recovery journal block. 1083 1083 * 1084 1084 * Implements waiter_callback_fn. 1085 1085 */ ··· 1169 1165 /** 1170 1166 * continue_committed_waiter() - invoked whenever a VIO is to be released from the journal because 1171 1167 * its entry was committed to disk. 1168 + * @waiter: The data_vio waiting on a journal write. 1169 + * @context: A pointer to the recovery journal. 1172 1170 * 1173 1171 * Implements waiter_callback_fn. 
1174 1172 */ ··· 1368 1362 1369 1363 /** 1370 1364 * write_block() - Issue a block for writing. 1365 + * @waiter: The recovery journal block to write. 1366 + * @context: Not used. 1371 1367 * 1372 1368 * Implements waiter_callback_fn. 1373 1369 */ ··· 1619 1611 smp_mb__after_atomic(); 1620 1612 } 1621 1613 1622 - /** 1623 - * initiate_drain() - Initiate a drain. 1624 - * 1625 - * Implements vdo_admin_initiator_fn. 1626 - */ 1614 + /** Implements vdo_admin_initiator_fn. */ 1627 1615 static void initiate_drain(struct admin_state *state) 1628 1616 { 1629 1617 check_for_drain_complete(container_of(state, struct recovery_journal, state));
+55 -41
drivers/md/dm-vdo/slab-depot.c
··· 40 40 41 41 /** 42 42 * get_lock() - Get the lock object for a slab journal block by sequence number. 43 - * @journal: vdo_slab journal to retrieve from. 43 + * @journal: The vdo_slab journal to retrieve from. 44 44 * @sequence_number: Sequence number of the block. 45 45 * 46 46 * Return: The lock object for the given sequence number. ··· 110 110 * block_is_full() - Check whether a journal block is full. 111 111 * @journal: The slab journal for the block. 112 112 * 113 - * Return: true if the tail block is full. 113 + * Return: True if the tail block is full. 114 114 */ 115 115 static bool __must_check block_is_full(struct slab_journal *journal) 116 116 { ··· 127 127 128 128 /** 129 129 * is_slab_journal_blank() - Check whether a slab's journal is blank. 130 + * @slab: The slab to check. 130 131 * 131 132 * A slab journal is blank if it has never had any entries recorded in it. 132 133 * 133 - * Return: true if the slab's journal has never been modified. 134 + * Return: True if the slab's journal has never been modified. 134 135 */ 135 136 static bool is_slab_journal_blank(const struct vdo_slab *slab) 136 137 { ··· 228 227 229 228 /** 230 229 * check_summary_drain_complete() - Check whether an allocators summary has finished draining. 230 + * @allocator: The allocator to check. 231 231 */ 232 232 static void check_summary_drain_complete(struct block_allocator *allocator) 233 233 { ··· 351 349 352 350 /** 353 351 * update_slab_summary_entry() - Update the entry for a slab. 354 - * @slab: The slab whose entry is to be updated 352 + * @slab: The slab whose entry is to be updated. 355 353 * @waiter: The waiter that is updating the summary. 356 354 * @tail_block_offset: The offset of the slab journal's tail block. 357 355 * @load_ref_counts: Whether the reference counts must be loaded from disk on the vdo load. ··· 656 654 657 655 /** 658 656 * reopen_slab_journal() - Reopen a slab's journal by emptying it and then adding pending entries. 
657 + * @slab: The slab to reopen. 659 658 */ 660 659 static void reopen_slab_journal(struct vdo_slab *slab) 661 660 { ··· 842 839 * @sbn: The slab block number of the entry to encode. 843 840 * @operation: The type of the entry. 844 841 * @increment: True if this is an increment. 845 - * 846 - * Exposed for unit tests. 847 842 */ 848 843 static void encode_slab_journal_entry(struct slab_journal_block_header *tail_header, 849 844 slab_journal_payload *payload, ··· 952 951 * @parent: The completion to notify when there is space to add the entry if the entry could not be 953 952 * added immediately. 954 953 * 955 - * Return: true if the entry was added immediately. 954 + * Return: True if the entry was added immediately. 956 955 */ 957 956 bool vdo_attempt_replay_into_slab(struct vdo_slab *slab, physical_block_number_t pbn, 958 957 enum journal_operation operation, bool increment, ··· 1004 1003 * requires_reaping() - Check whether the journal must be reaped before adding new entries. 1005 1004 * @journal: The journal to check. 1006 1005 * 1007 - * Return: true if the journal must be reaped. 1006 + * Return: True if the journal must be reaped. 1008 1007 */ 1009 1008 static bool requires_reaping(const struct slab_journal *journal) 1010 1009 { ··· 1276 1275 1277 1276 /** 1278 1277 * get_reference_block() - Get the reference block that covers the given block index. 1278 + * @slab: The slab containing the references. 1279 + * @index: The index of the physical block. 1279 1280 */ 1280 1281 static struct reference_block * __must_check get_reference_block(struct vdo_slab *slab, 1281 1282 slab_block_number index) ··· 1382 1379 1383 1380 /** 1384 1381 * adjust_free_block_count() - Adjust the free block count and (if needed) reprioritize the slab. 1385 - * @incremented: true if the free block count went up. 1382 + * @slab: The slab. 1383 + * @incremented: True if the free block count went up. 
1386 1384 static void adjust_free_block_count(struct vdo_slab *slab, bool incremented) 1387 1385 { ··· 1889 1885 /** 1890 1886 * reset_search_cursor() - Reset the free block search back to the first reference counter in the 1891 1887 * first reference block of a slab. 1888 + * @slab: The slab. 1892 1889 */ 1893 1890 static void reset_search_cursor(struct vdo_slab *slab) 1894 1891 { ··· 1897 1892 1898 1893 cursor->block = cursor->first_block; 1899 1894 cursor->index = 0; 1900 - /* Unit tests have slabs with only one reference block (and it's a runt). */ 1901 1895 cursor->end_index = min_t(u32, COUNTS_PER_BLOCK, slab->block_count); 1902 1896 } 1903 1897 1904 1898 /** 1905 1899 * advance_search_cursor() - Advance the search cursor to the start of the next reference block in 1906 - * a slab, 1900 + * a slab. 1901 + * @slab: The slab. 1907 1902 * 1908 1903 * Wraps around to the first reference block if the current block is the last reference block. 1909 1904 * 1910 - * Return: true unless the cursor was at the last reference block. 1905 + * Return: True unless the cursor was at the last reference block. 1911 1906 */ 1912 1907 static bool advance_search_cursor(struct vdo_slab *slab) 1913 1908 { ··· 1938 1933 1939 1934 /** 1940 1935 * vdo_adjust_reference_count_for_rebuild() - Adjust the reference count of a block during rebuild. 1936 + * @depot: The slab depot. 1937 + * @pbn: The physical block number to adjust. 1938 + * @operation: The type of operation. 1941 1939 * 1942 1940 * Return: VDO_SUCCESS or an error. 1943 1941 */ ··· 2046 2038 * @slab: The slab counters to scan. 2047 2039 * @index_ptr: A pointer to hold the array index of the free block. 2048 2040 * 2049 - * Exposed for unit testing. 2050 - * 2051 - * Return: true if a free block was found in the specified range. 2041 + * Return: True if a free block was found in the specified range.
2052 2042 */ 2053 2043 static bool find_free_block(const struct vdo_slab *slab, slab_block_number *index_ptr) 2054 2044 { ··· 2103 2097 * @slab: The slab to search. 2104 2098 * @free_index_ptr: A pointer to receive the array index of the zero reference count. 2105 2099 * 2106 - * Return: true if an unreferenced counter was found. 2100 + * Return: True if an unreferenced counter was found. 2107 2101 */ 2108 2102 static bool search_current_reference_block(const struct vdo_slab *slab, 2109 2103 slab_block_number *free_index_ptr) ··· 2122 2116 * counter index saved in the search cursor and searching up to the end of the last reference 2123 2117 * block. The search does not wrap. 2124 2118 * 2125 - * Return: true if an unreferenced counter was found. 2119 + * Return: True if an unreferenced counter was found. 2126 2120 */ 2127 2121 static bool search_reference_blocks(struct vdo_slab *slab, 2128 2122 slab_block_number *free_index_ptr) ··· 2142 2136 2143 2137 /** 2144 2138 * make_provisional_reference() - Do the bookkeeping for making a provisional reference. 2139 + * @slab: The slab. 2140 + * @block_number: The index for the physical block to reference. 2145 2141 */ 2146 2142 static void make_provisional_reference(struct vdo_slab *slab, 2147 2143 slab_block_number block_number) ··· 2163 2155 2164 2156 /** 2165 2157 * dirty_all_reference_blocks() - Mark all reference count blocks in a slab as dirty. 2158 + * @slab: The slab. 2166 2159 */ 2167 2160 static void dirty_all_reference_blocks(struct vdo_slab *slab) 2168 2161 { ··· 2182 2173 2183 2174 /** 2184 2175 * match_bytes() - Check an 8-byte word for bytes matching the value specified 2185 - * @input: A word to examine the bytes of 2186 - * @match: The byte value sought 2176 + * @input: A word to examine the bytes of. 2177 + * @match: The byte value sought. 
2187 2178 *
2188 - * Return: 1 in each byte when the corresponding input byte matched, 0 otherwise
2179 + * Return: 1 in each byte when the corresponding input byte matched, 0 otherwise.
2189 2180 */
2190 2181 static inline u64 match_bytes(u64 input, u8 match)
2191 2182 {
··· 2200 2191
2201 2192 /**
2202 2193 * count_valid_references() - Process a newly loaded refcount array
2203 - * @counters: the array of counters from a metadata block
2194 + * @counters: The array of counters from a metadata block.
2204 2195 *
2205 - * Scan a 8-byte-aligned array of counters, fixing up any "provisional" values that weren't
2206 - * cleaned up at shutdown, changing them internally to "empty".
2196 + * Scan an 8-byte-aligned array of counters, fixing up any provisional values that
2197 + * weren't cleaned up at shutdown, changing them internally to zero.
2207 2198 *
2208 - * Return: the number of blocks that are referenced (counters not "empty")
2199 + * Return: The number of blocks with a non-zero reference count.
2209 2200 */
2210 2201 static unsigned int count_valid_references(vdo_refcount_t *counters)
2211 2202 {
··· 2360 2351 /**
2361 2352 * load_reference_blocks() - Load a slab's reference blocks from the underlying storage into a
2362 2353 * pre-allocated reference counter.
2354 + * @slab: The slab.
2363 2355 */
2364 2356 static void load_reference_blocks(struct vdo_slab *slab)
2365 2357 {
··· 2385 2375
2386 2376 /**
2387 2377 * drain_slab() - Drain all reference count I/O.
2378 + * @slab: The slab.
2388 2379 *
2389 2380 * Depending upon the type of drain being performed (as recorded in the ref_count's vdo_slab), the
2390 2381 * reference blocks may be loaded from disk or dirty reference blocks may be written out.
··· 2575 2564
2576 2565 /**
2577 2566 * load_slab_journal() - Load a slab's journal by reading the journal's tail.
2567 + * @slab: The slab.
2578 2568 */
2579 2569 static void load_slab_journal(struct vdo_slab *slab)
2580 2570 {
··· 2675 2663 prioritize_slab(slab);
2676 2664 }
2677 2665
2678 - /**
2679 - * initiate_slab_action() - Initiate a slab action.
2680 - *
2681 - * Implements vdo_admin_initiator_fn.
2682 - */
2666 + /** Implements vdo_admin_initiator_fn. */
2683 2667 static void initiate_slab_action(struct admin_state *state)
2684 2668 {
2685 2669 struct vdo_slab *slab = container_of(state, struct vdo_slab, state);
··· 2728 2720 * has_slabs_to_scrub() - Check whether a scrubber has slabs to scrub.
2729 2721 * @scrubber: The scrubber to check.
2730 2722 *
2731 - * Return: true if the scrubber has slabs to scrub.
2723 + * Return: True if the scrubber has slabs to scrub.
2732 2724 */
2733 2725 static inline bool __must_check has_slabs_to_scrub(struct slab_scrubber *scrubber)
2734 2726 {
··· 2749 2741 * finish_scrubbing() - Stop scrubbing, either because there are no more slabs to scrub or because
2750 2742 * there's been an error.
2751 2743 * @scrubber: The scrubber.
2744 + * @result: The result of the scrubbing operation.
2752 2745 */
2753 2746 static void finish_scrubbing(struct slab_scrubber *scrubber, int result)
2754 2747 {
··· 3141 3132
3142 3133 /**
3143 3134 * abort_waiter() - Abort vios waiting to make journal entries when read-only.
3135 + * @waiter: A waiting data_vio.
3136 + * @context: Not used.
3144 3137 *
3145 3138 * This callback is invoked on all vios waiting to make slab journal entries after the VDO has gone
3146 3139 * into read-only mode. Implements waiter_callback_fn.
3147 3140 */
3148 - static void abort_waiter(struct vdo_waiter *waiter, void *context __always_unused)
3141 + static void abort_waiter(struct vdo_waiter *waiter, void __always_unused *context)
3149 3142 {
3150 3143 struct reference_updater *updater =
3151 3144 container_of(waiter, struct reference_updater, waiter);
··· 3547 3536 /**
3548 3537 * vdo_notify_slab_journals_are_recovered() - Inform a block allocator that its slab journals have
3549 3538 * been recovered from the recovery journal.
3550 - * @completion The allocator completion
3539 + * @completion: The allocator completion.
3551 3540 */
3552 3541 void vdo_notify_slab_journals_are_recovered(struct vdo_completion *completion)
3553 3542 {
··· 3786 3775 * in the slab.
3787 3776 * @allocator: The block allocator to which the slab belongs.
3788 3777 * @slab_number: The slab number of the slab.
3789 - * @is_new: true if this slab is being allocated as part of a resize.
3778 + * @is_new: True if this slab is being allocated as part of a resize.
3790 3779 * @slab_ptr: A pointer to receive the new slab.
3791 3780 *
3792 3781 * Return: VDO_SUCCESS or an error code.
··· 3905 3894 vdo_free(vdo_forget(depot->new_slabs));
3906 3895 }
3907 3896
3908 - /**
3909 - * get_allocator_thread_id() - Get the ID of the thread on which a given allocator operates.
3910 - *
3911 - * Implements vdo_zone_thread_getter_fn.
3912 - */
3897 + /** Implements vdo_zone_thread_getter_fn. */
3913 3898 static thread_id_t get_allocator_thread_id(void *context, zone_count_t zone_number)
3914 3899 {
3915 3900 return ((struct slab_depot *) context)->allocators[zone_number].thread_id;
··· 3918 3911 * @recovery_lock: The sequence number of the recovery journal block whose locks should be
3919 3912 * released.
3920 3913 *
3921 - * Return: true if the journal does hold a lock on the specified block (which it will release).
3914 + * Return: True if the journal released a lock on the specified block.
3922 3915 static bool __must_check release_recovery_journal_lock(struct slab_journal *journal,
3923 3916 sequence_number_t recovery_lock)
··· 3962 3955
3963 3956 /**
3964 3957 * prepare_for_tail_block_commit() - Prepare to commit oldest tail blocks.
3958 + * @context: The slab depot.
3959 + * @parent: The parent operation.
3965 3960 *
3966 3961 * Implements vdo_action_preamble_fn.
3967 3962 */
··· 3977 3968
3978 3969 /**
3979 3970 * schedule_tail_block_commit() - Schedule a tail block commit if necessary.
3971 + * @context: The slab depot.
3980 3972 *
3981 3973 * This method should not be called directly. Rather, call vdo_schedule_default_action() on the
3982 3974 * depot's action manager.
··· 4371 4361
4372 4362 /**
4373 4363 * vdo_allocate_reference_counters() - Allocate the reference counters for all slabs in the depot.
4364 + * @depot: The slab depot.
4374 4365 *
4375 4366 * Context: This method may be called only before entering normal operation from the load thread.
4376 4367 *
··· 4626 4615 }
4627 4616
4628 4617 /**
4629 - * load_slab_summary() - The preamble of a load operation.
4618 + * load_slab_summary() - Load the slab summary before the slab data.
4619 + * @context: The slab depot.
4620 + * @parent: The load operation.
4630 4621 *
4631 4622 * Implements vdo_action_preamble_fn.
4632 4623 */
··· 4744 4731 * vdo_prepare_to_grow_slab_depot() - Allocate new memory needed for a resize of a slab depot to
4745 4732 * the given size.
4746 4733 * @depot: The depot to prepare to resize.
4747 - * @partition: The new depot partition
4734 + * @partition: The new depot partition.
4748 4735 *
4749 4736 * Return: VDO_SUCCESS or an error.
4750 4737 */
··· 4794 4781 /**
4795 4782 * finish_registration() - Finish registering new slabs now that all of the allocators have
4796 4783 * received their new slabs.
4784 + * @context: The slab depot.
4797 4785 *
4798 4786 * Implements vdo_action_conclusion_fn.
4799 4787 */
+6 -3
drivers/md/dm-vdo/vdo.c
··· 181 181 182 182 /** 183 183 * initialize_thread_config() - Initialize the thread mapping 184 + * @counts: The number and types of threads to create. 185 + * @config: The thread_config to initialize. 184 186 * 185 187 * If the logical, physical, and hash zone counts are all 0, a single thread will be shared by all 186 188 * three plus the packer and recovery journal. Otherwise, there must be at least one of each type, ··· 886 884 887 885 /** 888 886 * record_vdo() - Record the state of the VDO for encoding in the super block. 887 + * @vdo: The vdo. 889 888 */ 890 889 static void record_vdo(struct vdo *vdo) 891 890 { ··· 1280 1277 * vdo_is_read_only() - Check whether the VDO is read-only. 1281 1278 * @vdo: The vdo. 1282 1279 * 1283 - * Return: true if the vdo is read-only. 1280 + * Return: True if the vdo is read-only. 1284 1281 * 1285 1282 * This method may be called from any thread, as opposed to examining the VDO's state field which 1286 1283 * is only safe to check from the admin thread. ··· 1294 1291 * vdo_in_read_only_mode() - Check whether a vdo is in read-only mode. 1295 1292 * @vdo: The vdo to query. 1296 1293 * 1297 - * Return: true if the vdo is in read-only mode. 1294 + * Return: True if the vdo is in read-only mode. 1298 1295 */ 1299 1296 bool vdo_in_read_only_mode(const struct vdo *vdo) 1300 1297 { ··· 1305 1302 * vdo_in_recovery_mode() - Check whether the vdo is in recovery mode. 1306 1303 * @vdo: The vdo to query. 1307 1304 * 1308 - * Return: true if the vdo is in recovery mode. 1305 + * Return: True if the vdo is in recovery mode. 1309 1306 */ 1310 1307 bool vdo_in_recovery_mode(const struct vdo *vdo) 1311 1308 {
+3 -1
drivers/md/dm-vdo/vdo.h
··· 279 279 280 280 /** 281 281 * typedef vdo_filter_fn - Method type for vdo matching methods. 282 + * @vdo: The vdo to match. 283 + * @context: A parameter for the filter to use. 282 284 * 283 - * A filter function returns false if the vdo doesn't match. 285 + * Return: True if the vdo matches the filter criteria, false if it doesn't. 284 286 */ 285 287 typedef bool (*vdo_filter_fn)(struct vdo *vdo, const void *context); 286 288
+2 -1
drivers/md/dm-vdo/vio.c
··· 398 398 399 399 /** 400 400 * is_vio_pool_busy() - Check whether an vio pool has outstanding entries. 401 + * @pool: The vio pool. 401 402 * 402 - * Return: true if the pool is busy. 403 + * Return: True if the pool is busy. 403 404 */ 404 405 bool is_vio_pool_busy(struct vio_pool *pool) 405 406 {
+4 -2
drivers/md/dm-vdo/vio.h
··· 156 156 /** 157 157 * continue_vio() - Enqueue a vio to run its next callback. 158 158 * @vio: The vio to continue. 159 - * 160 - * Return: The result of the current operation. 159 + * @result: The result of the current operation. 161 160 */ 162 161 static inline void continue_vio(struct vio *vio, int result) 163 162 { ··· 171 172 172 173 /** 173 174 * continue_vio_after_io() - Continue a vio now that its I/O has returned. 175 + * @vio: The vio to continue. 176 + * @callback: The next operation for this vio. 177 + * @thread: Which thread to run the next operation on. 174 178 */ 175 179 static inline void continue_vio_after_io(struct vio *vio, vdo_action_fn callback, 176 180 thread_id_t thread)