Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: fix spelling mistakes in header files

Fix some spelling mistakes in comments:
successfull ==> successful
potentialy ==> potentially
alloced ==> allocated
indicies ==> indices
wont ==> won't
resposible ==> responsible
dirtyness ==> dirtiness
droppped ==> dropped
alread ==> already
occured ==> occurred
interupts ==> interrupts
extention ==> extension
slighly ==> slightly
Dont't ==> Don't

Link: https://lkml.kernel.org/r/20210531034849.9549-2-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Zhen Lei and committed by Linus Torvalds
06c88398 76fe17ef

+16 -16
+2 -2
include/linux/compaction.h
@@ -35,12 +35,12 @@
 	COMPACT_CONTINUE,
 
 	/*
-	 * The full zone was compacted scanned but wasn't successfull to compact
+	 * The full zone was compacted scanned but wasn't successful to compact
 	 * suitable pages.
 	 */
 	COMPACT_COMPLETE,
 	/*
-	 * direct compaction has scanned part of the zone but wasn't successfull
+	 * direct compaction has scanned part of the zone but wasn't successful
 	 * to compact suitable pages.
 	 */
 	COMPACT_PARTIAL_SKIPPED,
+1 -1
include/linux/hmm.h
@@ -113,7 +113,7 @@
  * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
  *
  * When waiting for mmu notifiers we need some kind of time out otherwise we
- * could potentialy wait for ever, 1000ms ie 1s sounds like a long time to
+ * could potentially wait for ever, 1000ms ie 1s sounds like a long time to
  * wait already.
  */
 #define HMM_RANGE_DEFAULT_TIMEOUT 1000
+3 -3
include/linux/hugetlb.h
@@ -51,7 +51,7 @@
 	long count;
 	long max_hpages;	/* Maximum huge pages or -1 if no maximum. */
 	long used_hpages;	/* Used count against maximum, includes */
-				/* both alloced and reserved pages. */
+				/* both allocated and reserved pages. */
 	struct hstate *hstate;
 	long min_hpages;	/* Minimum huge pages or -1 if no minimum. */
 	long rsv_hpages;	/* Pages reserved against global pool to */
@@ -85,7 +85,7 @@
  * by a resv_map's lock. The set of regions within the resv_map represent
  * reservations for huge pages, or huge pages that have already been
  * instantiated within the map. The from and to elements are huge page
- * indicies into the associated mapping. from indicates the starting index
+ * indices into the associated mapping. from indicates the starting index
  * of the region. to represents the first index past the end of the region.
  *
  * For example, a file region structure with from == 0 and to == 4 represents
@@ -797,7 +797,7 @@
	 * It determines whether or not a huge page should be placed on
	 * movable zone or not. Movability of any huge page should be
	 * required only if huge page size is supported for migration.
-	 * There wont be any reason for the huge page to be movable if
+	 * There won't be any reason for the huge page to be movable if
	 * it is not migratable to start with. Also the size of the huge
	 * page should be large enough to be placed under a movable zone
	 * and still feasible enough to be migratable. Just the presence
+2 -2
include/linux/list_lru.h
@@ -146,7 +146,7 @@
  * @lru: the lru pointer.
  * @nid: the node id to scan from.
  * @memcg: the cgroup to scan from.
- * @isolate: callback function that is resposible for deciding what to do with
+ * @isolate: callback function that is responsible for deciding what to do with
  *	the item currently being scanned
  * @cb_arg: opaque type that will be passed to @isolate
  * @nr_to_walk: how many items to scan.
@@ -172,7 +172,7 @@
  * @lru: the lru pointer.
  * @nid: the node id to scan from.
  * @memcg: the cgroup to scan from.
- * @isolate: callback function that is resposible for deciding what to do with
+ * @isolate: callback function that is responsible for deciding what to do with
  *	the item currently being scanned
  * @cb_arg: opaque type that will be passed to @isolate
  * @nr_to_walk: how many items to scan.
+4 -4
include/linux/mmu_notifier.h
@@ -33,7 +33,7 @@
  *
  * @MMU_NOTIFY_SOFT_DIRTY: soft dirty accounting (still same page and same
  * access flags). User should soft dirty the page in the end callback to make
- * sure that anyone relying on soft dirtyness catch pages that might be written
+ * sure that anyone relying on soft dirtiness catch pages that might be written
  * through non CPU mappings.
  *
  * @MMU_NOTIFY_RELEASE: used during mmu_interval_notifier invalidate to signal
@@ -167,7 +167,7 @@
  *      decrease the refcount. If the refcount is decreased on
  *      invalidate_range_start() then the VM can free pages as page
  *      table entries are removed. If the refcount is only
- *      droppped on invalidate_range_end() then the driver itself
+ *      dropped on invalidate_range_end() then the driver itself
  *      will drop the last refcount but it must take care to flush
  *      any secondary tlb before doing the final free on the
  *      page. Pages will no longer be referenced by the linux
@@ -196,7 +196,7 @@
  *      If invalidate_range() is used to manage a non-CPU TLB with
  *      shared page-tables, it not necessary to implement the
  *      invalidate_range_start()/end() notifiers, as
- *      invalidate_range() alread catches the points in time when an
+ *      invalidate_range() already catches the points in time when an
  *      external TLB range needs to be flushed. For more in depth
  *      discussion on this see Documentation/vm/mmu_notifier.rst
  *
@@ -369,7 +369,7 @@
  * mmu_interval_read_retry() will return true.
  *
  * False is not reliable and only suggests a collision may not have
- * occured. It can be called many times and does not have to hold the user
+ * occurred. It can be called many times and does not have to hold the user
  * provided lock.
  *
  * This call can be used as part of loops and other expensive operations to
+1 -1
include/linux/percpu-defs.h
@@ -412,7 +412,7 @@
  * instead.
  *
  * If there is no other protection through preempt disable and/or disabling
- * interupts then one of these RMW operations can show unexpected behavior
+ * interrupts then one of these RMW operations can show unexpected behavior
  * because the execution thread was rescheduled on another processor or an
  * interrupt occurred and the same percpu variable was modified from the
  * interrupt context.
+1 -1
include/linux/shrinker.h
@@ -4,7 +4,7 @@
 
 /*
  * This struct is used to pass information from page reclaim to the shrinkers.
- * We consolidate the values for easier extention later.
+ * We consolidate the values for easier extension later.
  *
  * The 'gfpmask' refers to the allocation we are currently trying to
  * fulfil.
+2 -2
include/linux/vmalloc.h
··· 29 29 #define VM_NO_HUGE_VMAP 0x00000400 /* force PAGE_SIZE pte mapping */ 30 30 31 31 /* 32 - * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC. 32 + * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC. 33 33 * 34 34 * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after 35 35 * shadow memory has been mapped. It's used to handle allocation errors so that ··· 247 247 extern long vread(char *buf, char *addr, unsigned long count); 248 248 249 249 /* 250 - * Internals. Dont't use.. 250 + * Internals. Don't use.. 251 251 */ 252 252 extern struct list_head vmap_area_list; 253 253 extern __init void vm_area_add_early(struct vm_struct *vm);