Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

libbpf: Fix some typos in comments

Fix some spelling errors in the code comments of libbpf:

betwen -> between
paremeters -> parameters
knowning -> knowing
definiton -> definition
compatiblity -> compatibility
overriden -> overridden
occured -> occurred
proccess -> process
managment -> management
nessary -> necessary

Signed-off-by: Yusheng Zheng <yunwei356@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240909225952.30324-1-yunwei356@gmail.com

Authored by Yusheng Zheng, committed by Andrii Nakryiko
41d0c467 72d8508e

+13 -13
+1 -1
tools/lib/bpf/bpf_helpers.h
@@ -341,7 +341,7 @@
  * I.e., it looks almost like high-level for each loop in other languages,
  * supports continue/break, and is verifiable by BPF verifier.
  *
- * For iterating integers, the difference betwen bpf_for_each(num, i, N, M)
+ * For iterating integers, the difference between bpf_for_each(num, i, N, M)
  * and bpf_for(i, N, M) is in that bpf_for() provides additional proof to
  * verifier that i is in [N, M) range, and in bpf_for_each() case i is `int
  * *`, not just `int`. So for integers bpf_for() is more convenient.
+1 -1
tools/lib/bpf/bpf_tracing.h
@@ -808,7 +808,7 @@
  * tp_btf/fentry/fexit BPF programs. It hides the underlying platform-specific
  * low-level way of getting kprobe input arguments from struct pt_regs, and
  * provides a familiar typed and named function arguments syntax and
- * semantics of accessing kprobe input paremeters.
+ * semantics of accessing kprobe input parameters.
  *
  * Original struct pt_regs* context is preserved as 'ctx' argument. This might
  * be necessary when using BPF helpers like bpf_perf_event_output().
+1 -1
tools/lib/bpf/btf.c
@@ -4230,7 +4230,7 @@
  * consists of portions of the graph that come from multiple compilation units.
  * This is due to the fact that types within single compilation unit are always
  * deduplicated and FWDs are already resolved, if referenced struct/union
- * definiton is available. So, if we had unresolved FWD and found corresponding
+ * definition is available. So, if we had unresolved FWD and found corresponding
  * STRUCT/UNION, they will be from different compilation units. This
  * consequently means that when we "link" FWD to corresponding STRUCT/UNION,
  * type graph will likely have at least two different BTF types that describe
+1 -1
tools/lib/bpf/btf.h
@@ -286,7 +286,7 @@
 LIBBPF_API int btf_dump__dump_type(struct btf_dump *d, __u32 id);
 
 struct btf_dump_emit_type_decl_opts {
-	/* size of this struct, for forward/backward compatiblity */
+	/* size of this struct, for forward/backward compatibility */
 	size_t sz;
 	/* optional field name for type declaration, e.g.:
 	 * - struct my_struct <FNAME>
+1 -1
tools/lib/bpf/btf_dump.c
@@ -304,7 +304,7 @@
  * definition, in which case they have to be declared inline as part of field
  * type declaration; or as a top-level anonymous enum, typically used for
  * declaring global constants. It's impossible to distinguish between two
- * without knowning whether given enum type was referenced from other type:
+ * without knowing whether given enum type was referenced from other type:
  * top-level anonymous enum won't be referenced by anything, while embedded
  * one will.
  */
+5 -5
tools/lib/bpf/libbpf.h
@@ -152,7 +152,7 @@
  * log_buf and log_level settings.
  *
  * If specified, this log buffer will be passed for:
- *   - each BPF progral load (BPF_PROG_LOAD) attempt, unless overriden
+ *   - each BPF progral load (BPF_PROG_LOAD) attempt, unless overridden
  *     with bpf_program__set_log() on per-program level, to get
  *     BPF verifier log output.
  *   - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
@@ -455,7 +455,7 @@
 /**
  * @brief **bpf_program__attach()** is a generic function for attaching
  * a BPF program based on auto-detection of program type, attach type,
- * and extra paremeters, where applicable.
+ * and extra parameters, where applicable.
  *
  * @param prog BPF program to attach
  * @return Reference to the newly created BPF link; or NULL is returned on error,
@@ -679,7 +679,7 @@
 /**
  * @brief **bpf_program__attach_uprobe()** attaches a BPF program
  * to the userspace function which is found by binary path and
- * offset. You can optionally specify a particular proccess to attach
+ * offset. You can optionally specify a particular process to attach
  * to. You can also optionally attach the program to the function
  * exit instead of entry.
  *
@@ -1593,11 +1593,11 @@
  * memory region of the ring buffer.
  * This ring buffer can be used to implement a custom events consumer.
  * The ring buffer starts with the *struct perf_event_mmap_page*, which
- * holds the ring buffer managment fields, when accessing the header
+ * holds the ring buffer management fields, when accessing the header
  * structure it's important to be SMP aware.
  * You can refer to *perf_event_read_simple* for a simple example.
  * @param pb the perf buffer structure
- * @param buf_idx the buffer index to retreive
+ * @param buf_idx the buffer index to retrieve
  * @param buf (out) gets the base pointer of the mmap()'ed memory
  * @param buf_size (out) gets the size of the mmap()'ed region
  * @return 0 on success, negative error code for failure
+2 -2
tools/lib/bpf/libbpf_legacy.h
@@ -76,7 +76,7 @@
  * first BPF program or map creation operation. This is done only if
  * kernel is too old to support memcg-based memory accounting for BPF
  * subsystem. By default, RLIMIT_MEMLOCK limit is set to RLIM_INFINITY,
- * but it can be overriden with libbpf_set_memlock_rlim() API.
+ * but it can be overridden with libbpf_set_memlock_rlim() API.
  * Note that libbpf_set_memlock_rlim() needs to be called before
  * the very first bpf_prog_load(), bpf_map_create() or bpf_object__load()
  * operation.
@@ -97,7 +97,7 @@
  * @brief **libbpf_get_error()** extracts the error code from the passed
  * pointer
  * @param ptr pointer returned from libbpf API function
- * @return error code; or 0 if no error occured
+ * @return error code; or 0 if no error occurred
  *
  * Note, as of libbpf 1.0 this function is not necessary and not recommended
  * to be used. Libbpf doesn't return error code embedded into the pointer
+1 -1
tools/lib/bpf/skel_internal.h
@@ -107,7 +107,7 @@
  * The loader program will perform probe_read_kernel() from maps.rodata.initial_value.
  * skel_finalize_map_data() sets skel->rodata to point to actual value in a bpf map and
  * does maps.rodata.initial_value = ~0ULL to signal skel_free_map_data() that kvfree
- * is not nessary.
+ * is not necessary.
  *
  * For user space:
  * skel_prep_map_data() mmaps anon memory into skel->rodata that can be accessed directly.