Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: fix typos in comments

Fix ~94 single-word typos in locking code comments, plus a few
very obvious grammar mistakes.

Link: https://lkml.kernel.org/r/20210322212624.GA1963421@gmail.com
Link: https://lore.kernel.org/r/20210322205203.GB1959563@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Ingo Molnar, committed by Linus Torvalds
f0953a1b fa60ce2c

+83 -83
+1 -1
include/linux/mm.h
···
106 106    * embedding these tags into addresses that point to these memory regions, and
107 107    * checking that the memory and the pointer tags match on memory accesses)
108 108    * redefine this macro to strip tags from pointers.
109     -  * It's defined as noop for arcitectures that don't support memory tagging.
    109 +  * It's defined as noop for architectures that don't support memory tagging.
110 110    */
111 111   #ifndef untagged_addr
112 112   #define untagged_addr(addr) (addr)
+2 -2
include/linux/vmalloc.h
···
33 33      *
34 34      * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
35 35      * shadow memory has been mapped. It's used to handle allocation errors so that
36      -  * we don't try to poision shadow on free if it was never allocated.
    36 +  * we don't try to poison shadow on free if it was never allocated.
37 37      *
38 38      * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
39 39      * determine which allocations need the module shadow freed.
···
43 43
44 44     /*
45 45      * Maximum alignment for ioremap() regions.
46      -  * Can be overriden by arch-specific value.
    46 +  * Can be overridden by arch-specific value.
47 47      */
48 48     #ifndef IOREMAP_MAX_ORDER
49 49     #define IOREMAP_MAX_ORDER (7 + PAGE_SHIFT) /* 128 pages */
+2 -2
mm/balloon_compaction.c
···
58 58     /**
59 59      * balloon_page_list_dequeue() - removes pages from balloon's page list and
60 60      *			          returns a list of the pages.
61      -  * @b_dev_info: balloon device decriptor where we will grab a page from.
    61 +  * @b_dev_info: balloon device descriptor where we will grab a page from.
62 62      * @pages: pointer to the list of pages that would be returned to the caller.
63 63      * @n_req_pages: number of requested pages.
64 64      *
···
157 157   /*
158 158    * balloon_page_dequeue - removes a page from balloon's page list and returns
159 159    *			  its address to allow the driver to release the page.
160     -  * @b_dev_info: balloon device decriptor where we will grab a page from.
    160 +  * @b_dev_info: balloon device descriptor where we will grab a page from.
161 161    *
162 162    * Driver must call this function to properly dequeue a previously enqueued page
163 163    * before definitively releasing it back to the guest system.
+2 -2
mm/compaction.c
···
2012 2012   unsigned int wmark_low;
2013 2013
2014 2014   /*
2015      -   * Cap the low watermak to avoid excessive compaction
2016      -   * activity in case a user sets the proactivess tunable
     2015 +   * Cap the low watermark to avoid excessive compaction
     2016 +   * activity in case a user sets the proactiveness tunable
2017 2017    * close to 100 (maximum).
2018 2018    */
2019 2019   wmark_low = max(100U - sysctl_compaction_proactiveness, 5U);
+1 -1
mm/filemap.c
···
2755 2755   * entirely memory-based such as tmpfs, and filesystems which support
2756 2756   * unwritten extents.
2757 2757   *
2758      -  * Return: The requested offset on successs, or -ENXIO if @whence specifies
     2758 +  * Return: The requested offset on success, or -ENXIO if @whence specifies
2759 2759   * SEEK_DATA and there is no data after @start. There is an implicit hole
2760 2760   * after @end - 1, so SEEK_HOLE returns @end if all the bytes between @start
2761 2761   * and @end contain data.
+1 -1
mm/gup.c
···
1575 1575   * Returns NULL on any kind of failure - a hole must then be inserted into
1576 1576   * the corefile, to preserve alignment with its headers; and also returns
1577 1577   * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
1578      -  * allowing a hole to be left in the corefile to save diskspace.
     1578 +  * allowing a hole to be left in the corefile to save disk space.
1579 1579   *
1580 1580   * Called without mmap_lock (takes and releases the mmap_lock by itself).
1581 1581   */
+1 -1
mm/highmem.c
···
519 519
520 520    /*
521 521     * Disable migration so resulting virtual address is stable
522     -   * accross preemption.
    522 +   * across preemption.
523 523     */
524 524    migrate_disable();
525 525    preempt_disable();
+3 -3
mm/huge_memory.c
···
1792 1792   /*
1793 1793    * Returns
1794 1794    *  - 0 if PMD could not be locked
1795      -  *  - 1 if PMD was locked but protections unchange and TLB flush unnecessary
1796      -  *  - HPAGE_PMD_NR is protections changed and TLB flush necessary
     1795 +  *  - 1 if PMD was locked but protections unchanged and TLB flush unnecessary
     1796 +  *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
1797 1797    */
1798 1798   int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
1799 1799   		unsigned long addr, pgprot_t newprot, unsigned long cp_flags)
···
2469 2469   	xa_lock(&swap_cache->i_pages);
2470 2470   }
2471 2471
2472      -  /* lock lru list/PageCompound, ref freezed by page_ref_freeze */
     2472 +  /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
2473 2473   lruvec = lock_page_lruvec(head);
2474 2474
2475 2475   for (i = nr - 1; i >= 1; i--) {
+3 -3
mm/hugetlb.c
···
466 466    		resv->region_cache_count;
467 467
468 468    /* At this point, we should have enough entries in the cache
469     -   * for all the existings adds_in_progress. We should only be
    469 +   * for all the existing adds_in_progress. We should only be
470 470     * needing to allocate for regions_needed.
471 471     */
472 472    VM_BUG_ON(resv->region_cache_count < resv->adds_in_progress);
···
5536 5536   v_end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
5537 5537
5538 5538   /*
5539      -  * vma need span at least one aligned PUD size and the start,end range
5540      -  * must at least partialy within it.
     5539 +  * vma needs to span at least one aligned PUD size, and the range
     5540 +  * must be at least partially within it.
5541 5541   */
5542 5542   if (!(vma->vm_flags & VM_MAYSHARE) || !(v_end > v_start) ||
5543 5543   	(*end <= v_start) || (*start >= v_end))
+1 -1
mm/internal.h
···
334 334   }
335 335
336 336   /*
337     -  * Stack area - atomatically grows in one direction
    337 +  * Stack area - automatically grows in one direction
338 338    *
339 339    * VM_GROWSUP / VM_GROWSDOWN VMAs are always private anonymous:
340 340    * do_mmap() forbids all other combinations.
+4 -4
mm/kasan/kasan.h
···
55 55     #define KASAN_TAG_MAX		0xFD /* maximum value for random tags */
56 56
57 57     #ifdef CONFIG_KASAN_HW_TAGS
58      -  #define KASAN_TAG_MIN		0xF0 /* mimimum value for random tags */
    58 +  #define KASAN_TAG_MIN		0xF0 /* minimum value for random tags */
59 59     #else
60      -  #define KASAN_TAG_MIN		0x00 /* mimimum value for random tags */
    60 +  #define KASAN_TAG_MIN		0x00 /* minimum value for random tags */
61 61     #endif
62 62
63 63     #ifdef CONFIG_KASAN_GENERIC
···
403 403   #else /* CONFIG_KASAN_HW_TAGS */
404 404
405 405   /**
406     -  * kasan_poison - mark the memory range as unaccessible
    406 +  * kasan_poison - mark the memory range as inaccessible
407 407    * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
408 408    * @size - range size, must be aligned to KASAN_GRANULE_SIZE
409 409    * @value - value that's written to metadata for the range
···
434 434
435 435   /**
436 436    * kasan_poison_last_granule - mark the last granule of the memory range as
437     -  *                            unaccessible
    437 +  *                            inaccessible
438 438    * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
439 439    * @size - range size
440 440    *
+2 -2
mm/kasan/quarantine.c
···
27 27     /* Data structure and operations for quarantine queues. */
28 28
29 29     /*
30      -  * Each queue is a signle-linked list, which also stores the total size of
    30 +  * Each queue is a single-linked list, which also stores the total size of
31 31      * objects inside of it.
32 32      */
33 33     struct qlist_head {
···
138 138   	local_irq_save(flags);
139 139
140 140   	/*
141     -  	 * As the object now gets freed from the quaratine, assume that its
    141 +  	 * As the object now gets freed from the quarantine, assume that its
142 142   	 * free track is no longer valid.
143 143   	 */
144 144   	*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
+2 -2
mm/kasan/shadow.c
···
316 316    * // rest of vmalloc process		<data dependency>
317 317    * STORE p, a				LOAD shadow(x+99)
318 318    *
319     -  * If there is no barrier between the end of unpoisioning the shadow
    319 +  * If there is no barrier between the end of unpoisoning the shadow
320 320    * and the store of the result to p, the stores could be committed
321 321    * in a different order by CPU#0, and CPU#1 could erroneously observe
322 322    * poison in the shadow.
···
384 384    * How does this work?
385 385    * -------------------
386 386    *
387     -  * We have a region that is page aligned, labelled as A.
    387 +  * We have a region that is page aligned, labeled as A.
388 388    * That might not map onto the shadow in a way that is page-aligned:
389 389    *
390 390    *                    start                     end
+1 -1
mm/kfence/report.c
···
263 263   	if (panic_on_warn)
264 264   		panic("panic_on_warn set ...\n");
265 265
266     -  	/* We encountered a memory unsafety error, taint the kernel! */
    266 +  	/* We encountered a memory safety error, taint the kernel! */
267 267   	add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
268 268   }
+1 -1
mm/khugepaged.c
···
667 667    *
668 668    * The page table that maps the page has been already unlinked
669 669    * from the page table tree and this process cannot get
670     -  * an additinal pin on the page.
    670 +  * an additional pin on the page.
671 671    *
672 672    * New pins can come later if the page is shared across fork,
673 673    * but not from this process. The other process cannot write to
+2 -2
mm/ksm.c
···
1065 1065   /*
1066 1066    * Ok this is tricky, when get_user_pages_fast() run it doesn't
1067 1067    * take any lock, therefore the check that we are going to make
1068      -  * with the pagecount against the mapcount is racey and
     1068 +  * with the pagecount against the mapcount is racy and
1069 1069    * O_DIRECT can happen right after the check.
1070 1070    * So we clear the pte and flush the tlb before the check
1071 1071    * this assure us that no O_DIRECT can happen after the check
···
1435 1435   	 */
1436 1436   	*_stable_node = found;
1437 1437   	/*
1438      -  	 * Just for robustneess as stable_node is
     1438 +  	 * Just for robustness, as stable_node is
1439 1439   	 * otherwise left as a stable pointer, the
1440 1440   	 * compiler shall optimize it away at build
1441 1441   	 * time.
+2 -2
mm/madvise.c
···
799 799    	if (end > vma->vm_end) {
800 800    		/*
801 801    		 * Don't fail if end > vma->vm_end. If the old
802     -   		 * vma was splitted while the mmap_lock was
    802 +   		 * vma was split while the mmap_lock was
803 803    		 * released the effect of the concurrent
804 804    		 * operation may not cause madvise() to
805 805    		 * have an undefined result. There may be an
···
1039 1039   *  MADV_DODUMP - cancel MADV_DONTDUMP: no longer exclude from core dump.
1040 1040   *  MADV_COLD - the application is not expected to use this memory soon,
1041 1041   *		deactivate pages in this range so that they can be reclaimed
1042      -  *		easily if memory pressure hanppens.
     1042 +  *		easily if memory pressure happens.
1043 1043   *  MADV_PAGEOUT - the application is not expected to use this memory soon,
1044 1044   *		page out the pages in this range immediately.
1045 1045   *
+9 -9
mm/memcontrol.c
···
215 215   #define MEMFILE_PRIVATE(x, val)	((x) << 16 | (val))
216 216   #define MEMFILE_TYPE(val)	((val) >> 16 & 0xffff)
217 217   #define MEMFILE_ATTR(val)	((val) & 0xffff)
218     -  /* Used for OOM nofiier */
    218 +  /* Used for OOM notifier */
219 219   #define OOM_CONTROL		(0)
220 220
221 221   /*
···
786 786    * __count_memcg_events - account VM events in a cgroup
787 787    * @memcg: the memory cgroup
788 788    * @idx: the event item
789     -  * @count: the number of events that occured
    789 +  * @count: the number of events that occurred
790 790    */
791 791   void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
792 792   			  unsigned long count)
···
904 904   	rcu_read_lock();
905 905   	do {
906 906   		/*
907     -  		 * Page cache insertions can happen withou an
    907 +  		 * Page cache insertions can happen without an
908 908   		 * actual mm context, e.g. during disk probing
909 909   		 * on boot, loopback IO, acct() writes etc.
910 910   		 */
···
1712 1712   	struct mem_cgroup *iter;
1713 1713
1714 1714   	/*
1715      -  	 * Be careful about under_oom underflows becase a child memcg
     1715 +  	 * Be careful about under_oom underflows because a child memcg
1716 1716   	 * could have been added after mem_cgroup_mark_under_oom.
1717 1717   	 */
1718 1718   	spin_lock(&memcg_oom_lock);
···
1884 1884   		/*
1885 1885   		 * There is no guarantee that an OOM-lock contender
1886 1886   		 * sees the wakeups triggered by the OOM kill
1887      -  		 * uncharges. Wake any sleepers explicitely.
     1887 +  		 * uncharges. Wake any sleepers explicitly.
1888 1888   		 */
1889 1889   		memcg_oom_recover(memcg);
1890 1890   	}
···
4364 4364   * Foreign dirty flushing
4365 4365   *
4366 4366   * There's an inherent mismatch between memcg and writeback. The former
4367      -  * trackes ownership per-page while the latter per-inode. This was a
     4367 +  * tracks ownership per-page while the latter per-inode. This was a
4368 4368   * deliberate design decision because honoring per-page ownership in the
4369 4369   * writeback path is complicated, may lead to higher CPU and IO overheads
4370 4370   * and deemed unnecessary given that write-sharing an inode across
···
4379 4379   * triggering background writeback. A will be slowed down without a way to
4380 4380   * make writeback of the dirty pages happen.
4381 4381   *
4382      -  * Conditions like the above can lead to a cgroup getting repatedly and
     4382 +  * Conditions like the above can lead to a cgroup getting repeatedly and
4383 4383   * severely throttled after making some progress after each
4384      -  * dirty_expire_interval while the underyling IO device is almost
     4384 +  * dirty_expire_interval while the underlying IO device is almost
4385 4385   * completely idle.
4386 4386   *
4387 4387   * Solving this problem completely requires matching the ownership tracking
···
5774 5774   		return 0;
5775 5775
5776 5776   	/*
5777      -  	 * We are now commited to this value whatever it is. Changes in this
     5777 +  	 * We are now committed to this value whatever it is. Changes in this
5778 5778   	 * tunable will only affect upcoming migrations, not the current one.
5779 5779   	 * So we need to save it, and keep it going.
5780 5780   	 */
+1 -1
mm/memory-failure.c
···
75 75     	if (dissolve_free_huge_page(page) || !take_page_off_buddy(page))
76 76     		/*
77 77     		 * We could fail to take off the target page from buddy
78      -  		 * for example due to racy page allocaiton, but that's
    78 +  		 * for example due to racy page allocation, but that's
79 79     		 * acceptable because soft-offlined page is not broken
80 80     		 * and if someone really want to use it, they should
81 81     		 * take it.
+5 -5
mm/memory.c
···
3727 3727   		return ret;
3728 3728
3729 3729   	/*
3730      -  	 * Archs like ppc64 need additonal space to store information
     3730 +  	 * Archs like ppc64 need additional space to store information
3731 3731   	 * related to pte entry. Use the preallocated table for that.
3732 3732   	 */
3733 3733   	if (arch_needs_pgtable_deposit() && !vmf->prealloc_pte) {
···
4503 4503   }
4504 4504
4505 4505   /**
4506      -  * mm_account_fault - Do page fault accountings
     4506 +  * mm_account_fault - Do page fault accounting
4507 4507   *
4508 4508   * @regs: the pt_regs struct pointer.  When set to NULL, will skip accounting
4509 4509   *        of perf event counters, but we'll still do the per-task accounting to
···
4512 4512   * @flags: the fault flags.
4513 4513   * @ret: the fault retcode.
4514 4514   *
4515      -  * This will take care of most of the page fault accountings.  Meanwhile, it
     4515 +  * This will take care of most of the page fault accounting.  Meanwhile, it
4516 4516   * will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf counter
4517      -  * updates.  However note that the handling of PERF_COUNT_SW_PAGE_FAULTS should
     4517 +  * updates.  However, note that the handling of PERF_COUNT_SW_PAGE_FAULTS should
4518 4518   * still be in per-arch page fault handlers at the entry of page fault.
4519 4519   */
···
4848 4848   /**
4849 4849    * generic_access_phys - generic implementation for iomem mmap access
4850 4850    * @vma: the vma to access
4851      -  * @addr: userspace addres, not relative offset within @vma
     4851 +  * @addr: userspace address, not relative offset within @vma
4852 4852    * @buf: buffer to read/write
4853 4853    * @len: length of transfer
4854 4854    * @write: set to FOLL_WRITE when writing, otherwise reading
+2 -2
mm/mempolicy.c
···
1867 1867   	 * we apply policy when gfp_zone(gfp) = ZONE_MOVABLE only.
1868 1868   	 *
1869 1869   	 * policy->v.nodes is intersect with node_states[N_MEMORY].
1870      -  	 * so if the following test faile, it implies
     1870 +  	 * so if the following test fails, it implies
1871 1871   	 * policy->v.nodes has movable memory only.
1872 1872   	 */
1873 1873   	if (!nodes_intersects(policy->v.nodes, node_states[N_HIGH_MEMORY]))
···
2098 2098   *
2099 2099   * If tsk's mempolicy is "default" [NULL], return 'true' to indicate default
2100 2100   * policy.  Otherwise, check for intersection between mask and the policy
2101      -  * nodemask for 'bind' or 'interleave' policy.  For 'perferred' or 'local'
     2101 +  * nodemask for 'bind' or 'interleave' policy.  For 'preferred' or 'local'
2102 2102   * policy, always return true since it may allocate elsewhere on fallback.
2103 2103   *
2104 2104   * Takes task_lock(tsk) to prevent freeing of its mempolicy.
+4 -4
mm/migrate.c
···
2779 2779   *
2780 2780   * For empty entries inside CPU page table (pte_none() or pmd_none() is true) we
2781 2781   * do set MIGRATE_PFN_MIGRATE flag inside the corresponding source array thus
2782      -  * allowing the caller to allocate device memory for those unback virtual
2783      -  * address. For this the caller simply has to allocate device memory and
     2782 +  * allowing the caller to allocate device memory for those unbacked virtual
     2783 +  * addresses. For this the caller simply has to allocate device memory and
2784 2784   * properly set the destination entry like for regular migration. Note that
2785      -  * this can still fails and thus inside the device driver must check if the
2786      -  * migration was successful for those entries after calling migrate_vma_pages()
     2785 +  * this can still fail, and thus inside the device driver you must check if the
     2786 +  * migration was successful for those entries after calling migrate_vma_pages(),
2787 2787   * just like for regular migration.
2788 2788   *
2789 2789   * After that, the callers must call migrate_vma_pages() to go over each entry
+2 -2
mm/mmap.c
···
612 612    	unsigned long nr_pages = 0;
613 613    	struct vm_area_struct *vma;
614 614
615     -   	/* Find first overlaping mapping */
    615 +   	/* Find first overlapping mapping */
616 616    	vma = find_vma_intersection(mm, addr, end);
617 617    	if (!vma)
618 618    		return 0;
···
2875 2875   	if (unlikely(uf)) {
2876 2876   		/*
2877 2877   		 * If userfaultfd_unmap_prep returns an error the vmas
2878      -  		 * will remain splitted, but userland will get a
     2878 +  		 * will remain split, but userland will get a
2879 2879   		 * highly unexpected error anyway. This is no
2880 2880   		 * different than the case where the first of the two
2881 2881   		 * __split_vma fails, but we don't undo the first
+1 -1
mm/mprotect.c
···
699 699    	mmap_write_unlock(current->mm);
700 700
701 701    	/*
702     -   	 * We could provie warnings or errors if any VMA still
    702 +   	 * We could provide warnings or errors if any VMA still
703 703    	 * has the pkey set here.
704 704    	 */
705 705    	return ret;
+1 -1
mm/mremap.c
···
730 730    	 * So, to avoid such scenario we can pre-compute if the whole
731 731    	 * operation has high chances to success map-wise.
732 732    	 * Worst-scenario case is when both vma's (new_addr and old_addr) get
733     -   	 * split in 3 before unmaping it.
    733 +   	 * split in 3 before unmapping it.
734 734    	 * That means 2 more maps (1 for each) to the ones we already hold.
735 735    	 * Check whether current map count plus 2 still leads us to 4 maps below
736 736    	 * the threshold, otherwise return -ENOMEM here to be more safe.
+1 -1
mm/oom_kill.c
···
74 74
75 75     #ifdef CONFIG_NUMA
76 76     /**
77      -  * oom_cpuset_eligible() - check task eligiblity for kill
    77 +  * oom_cpuset_eligible() - check task eligibility for kill
78 78      * @start: task struct of which task to consider
79 79      * @oc: pointer to struct oom_control
80 80      *
+2 -2
mm/page-writeback.c
···
1806 1806   			break;
1807 1807
1808 1808   		/*
1809      -  		 * In the case of an unresponding NFS server and the NFS dirty
     1809 +  		 * In the case of an unresponsive NFS server and the NFS dirty
1810 1810   		 * pages exceeds dirty_thresh, give the other good wb's a pipe
1811 1811   		 * to go through, so that tasks on them still remain responsive.
1812 1812   		 *
···
2216 2216   		 * Page truncated or invalidated. We can freely skip it
2217 2217   		 * then, even for data integrity operations: the page
2218 2218   		 * has disappeared concurrently, so there could be no
2219      -  		 * real expectation of this data interity operation
     2219 +  		 * real expectation of this data integrity operation
2220 2220   		 * even if there is now a new, dirty page at the same
2221 2221   		 * pagecache address.
2222 2222   		 */
+7 -7
mm/page_alloc.c
···
893 893    		return false;
894 894
895 895    	/*
896     -   	 * Do not let lower order allocations polluate a movable pageblock.
    896 +   	 * Do not let lower order allocations pollute a movable pageblock.
897 897    	 * This might let an unmovable request use a reclaimable pageblock
898 898    	 * and vice-versa but no more than normal fallback logic which can
899 899    	 * have trouble finding a high-order free page.
···
2776 2776   		/*
2777 2777   		 * In page freeing path, migratetype change is racy so
2778 2778   		 * we can counter several free pages in a pageblock
2779      -  		 * in this loop althoug we changed the pageblock type
     2779 +  		 * in this loop although we changed the pageblock type
2780 2780   		 * from highatomic to ac->migratetype. So we should
2781 2781   		 * adjust the count once.
2782 2782   		 */
···
3080 3080   	 * drain_all_pages doesn't use proper cpu hotplug protection so
3081 3081   	 * we can race with cpu offline when the WQ can move this from
3082 3082   	 * a cpu pinned worker to an unbound one. We can operate on a different
3083      -  	 * cpu which is allright but we also have to make sure to not move to
     3083 +  	 * cpu which is alright but we also have to make sure to not move to
3084 3084   	 * a different one.
3085 3085   	 */
3086 3086   	preempt_disable();
···
5929 5929   static int __parse_numa_zonelist_order(char *s)
5930 5930   {
5931 5931   	/*
5932      -  	 * We used to support different zonlists modes but they turned
     5932 +  	 * We used to support different zonelists modes but they turned
5933 5933   	 * out to be just not useful. Let's keep the warning in place
5934 5934   	 * if somebody still use the cmd line parameter so that we do
5935 5935   	 * not fail it silently
···
7670 7670   }
7671 7671
7672 7672   /*
7673      -  * Some architecturs, e.g. ARC may have ZONE_HIGHMEM below ZONE_NORMAL. For
     7673 +  * Some architectures, e.g. ARC may have ZONE_HIGHMEM below ZONE_NORMAL. For
7674 7674   * such cases we allow max_zone_pfn sorted in the descending order
7675 7675   */
7676 7676   bool __weak arch_has_descending_max_zone_pfns(void)
···
8728 8728   * alloc_contig_range() -- tries to allocate given range of pages
8729 8729   * @start:	start PFN to allocate
8730 8730   * @end:	one-past-the-last PFN to allocate
8731      -  * @migratetype:	migratetype of the underlaying pageblocks (either
     8731 +  * @migratetype:	migratetype of the underlying pageblocks (either
8732 8732   *			#MIGRATE_MOVABLE or #MIGRATE_CMA).  All pageblocks
8733 8733   *			in range must have the same migratetype and it must
8734 8734   *			be either of the two.
···
8988 8988
8989 8989   /*
8990 8990    * The zone indicated has a new number of managed_pages; batch sizes and percpu
8991      -  * page high values need to be recalulated.
     8991 +  * page high values need to be recalculated.
8992 8992    */
8993 8993   void __meminit zone_pcp_update(struct zone *zone)
8994 8994   {
+1 -1
mm/page_owner.c
···
233 233    	/*
234 234    	 * We don't clear the bit on the oldpage as it's going to be freed
235 235    	 * after migration. Until then, the info can be useful in case of
236     -   	 * a bug, and the overal stats will be off a bit only temporarily.
    236 +   	 * a bug, and the overall stats will be off a bit only temporarily.
237 237    	 * Also, migrate_misplaced_transhuge_page() can still fail the
238 238    	 * migration and then we want the oldpage to retain the info. But
239 239    	 * in that case we also don't need to explicitly clear the info from
+1 -1
mm/percpu-internal.h
···
170 170    	u64	nr_max_alloc;	/* max # of live allocations */
171 171    	u32	nr_chunks;	/* current # of live chunks */
172 172    	u32	nr_max_chunks;	/* max # of live chunks */
173     -   	size_t	min_alloc_size;	/* min allocaiton size */
    173 +   	size_t	min_alloc_size;	/* min allocation size */
174 174    	size_t	max_alloc_size;	/* max allocation size */
175 175    };
176 176
+1 -1
mm/percpu.c
···
1862 1862   			pr_info("limit reached, disable warning\n");
1863 1863   	}
1864 1864   	if (is_atomic) {
1865      -  		/* see the flag handling in pcpu_blance_workfn() */
     1865 +  		/* see the flag handling in pcpu_balance_workfn() */
1866 1866   		pcpu_atomic_alloc_failed = true;
1867 1867   		pcpu_schedule_balance_work();
1868 1868   	} else {
+3 -3
mm/pgalloc-track.h
···
1 1      /* SPDX-License-Identifier: GPL-2.0 */
2     -  #ifndef _LINUX_PGALLLC_TRACK_H
3     -  #define _LINUX_PGALLLC_TRACK_H
    2 +  #ifndef _LINUX_PGALLOC_TRACK_H
    3 +  #define _LINUX_PGALLOC_TRACK_H
4 4
5 5      #if defined(CONFIG_MMU)
6 6      static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
···
48 48     	(__pte_alloc_kernel(pmd) || ({*(mask)|=PGTBL_PMD_MODIFIED;0;})))?\
49 49     		NULL: pte_offset_kernel(pmd, address))
50 50
51      -  #endif /* _LINUX_PGALLLC_TRACK_H */
    51 +  #endif /* _LINUX_PGALLOC_TRACK_H */
+3 -3
mm/slab.c
···
259 259
260 260    #define BATCHREFILL_LIMIT	16
261 261    /*
262     -   * Optimization question: fewer reaps means less probability for unnessary
    262 +   * Optimization question: fewer reaps means less probability for unnecessary
263 263     * cpucache drain/refill cycles.
264 264     *
265 265     * OTOH the cpuarrays can contain lots of objects,
···
2381 2381   };
2382 2382
2383 2383   /*
2384      -  * Initialize the state based on the randomization methode available.
2385      -  * return true if the pre-computed list is available, false otherwize.
     2384 +  * Initialize the state based on the randomization method available.
     2385 +  * return true if the pre-computed list is available, false otherwise.
2386 2386   */
2387 2387   static bool freelist_state_initialize(union freelist_init_state *state,
2388 2388   				struct kmem_cache *cachep,
+1 -1
mm/slub.c
···
3391 3391   */
3392 3392
3393 3393   /*
3394      -  * Mininum / Maximum order of slab pages. This influences locking overhead
     3394 +  * Minimum / Maximum order of slab pages. This influences locking overhead
3395 3395   * and slab fragmentation. A higher order reduces the number of partial slabs
3396 3396   * and increases the number of allocations possible without having to
3397 3397   * take the list_lock.
+1 -1
mm/swap_slots.c
···
16 16     * to local caches without needing to acquire swap_info
17 17     * lock.  We do not reuse the returned slots directly but
18 18     * move them back to the global pool in a batch.  This
19      -  * allows the slots to coaellesce and reduce fragmentation.
    19 +  * allows the slots to coalesce and reduce fragmentation.
20 20     *
21 21     * The swap entry allocated is marked with SWAP_HAS_CACHE
22 22     * flag in map_count that prevents it from being allocated
+3 -3
mm/vmalloc.c
···
1583 1583   static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
1584 1584
1585 1585   /*
1586      -  * Serialize vmap purging.  There is no actual criticial section protected
     1586 +  * Serialize vmap purging.  There is no actual critical section protected
1587 1587   * by this look, but we want to avoid concurrent calls for performance
1588 1588   * reasons and to make the pcpu_get_vm_areas more deterministic.
1589 1589   */
···
2628 2628   * May sleep if called *not* from interrupt context.
2629 2629   * Must not be called in NMI context (strictly speaking, it could be
2630 2630   * if we have CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, but making the calling
2631      -  * conventions for vfree() arch-depenedent would be a really bad idea).
     2631 +  * conventions for vfree() arch-dependent would be a really bad idea).
2632 2632   */
2633 2633   void vfree(const void *addr)
2634 2634   {
···
3141 3141   		/*
3142 3142   		 * To do safe access to this _mapped_ area, we need
3143 3143   		 * lock. But adding lock here means that we need to add
3144      -  		 * overhead of vmalloc()/vfree() calles for this _debug_
     3144 +  		 * overhead of vmalloc()/vfree() calls for this _debug_
3145 3145   		 * interface, rarely used. Instead of that, we'll use
3146 3146   		 * kmap() and get small overhead in this access function.
3147 3147   		 */
+1 -1
mm/vmstat.c
···
934 934
935 935    /*
936 936     * this is only called if !populated_zone(zone), which implies no other users of
937     -   * pset->vm_stat_diff[] exsist.
    937 +   * pset->vm_stat_diff[] exist.
938 938     */
939 939    void drain_zonestat(struct zone *zone, struct per_cpu_pageset *pset)
940 940    {
+1 -1
mm/zpool.c
···
336 336    * This may hold locks, disable interrupts, and/or preemption,
337 337    * and the zpool_unmap_handle() must be called to undo those
338 338    * actions.  The code that uses the mapped handle should complete
339     -  * its operatons on the mapped handle memory quickly and unmap
    339 +  * its operations on the mapped handle memory quickly and unmap
340 340    * as soon as possible.  As the implementation may use per-cpu
341 341    * data, multiple handles should not be mapped concurrently on
342 342    * any cpu.
+1 -1
mm/zsmalloc.c
···
1227 1227   * zs_map_object - get address of allocated object from handle.
1228 1228   * @pool: pool from which the object was allocated
1229 1229   * @handle: handle returned from zs_malloc
1230      -  * @mm: maping mode to use
     1230 +  * @mm: mapping mode to use
1231 1231   *
1232 1232   * Before using an object allocated from zs_malloc, it must be mapped using
1233 1233   * this function. When done with the object, it must be unmapped using