
x86: Fix various typos in comments, take #2

Fix another ~42 single-word typos in arch/x86/ code comments,
which were missed in the first pass, in particular in .S files.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: linux-kernel@vger.kernel.org

+42 -42
+1 -1
arch/x86/boot/compressed/efi_thunk_64.S
···
5 5 * Early support for invoking 32-bit EFI services from a 64-bit kernel.
6 6 *
7 7 * Because this thunking occurs before ExitBootServices() we have to
8   - * restore the firmware's 32-bit GDT before we make EFI serivce calls,
  8 + * restore the firmware's 32-bit GDT before we make EFI service calls,
9 9 * since the firmware's 32-bit IDT is still currently installed and it
10 10 * needs to be able to service interrupts.
11 11 *
+1 -1
arch/x86/boot/compressed/head_64.S
···
231 231 /*
232 232 * Setup for the jump to 64bit mode
233 233 *
234   - * When the jump is performend we will be in long mode but
  234 + * When the jump is performed we will be in long mode but
235 235 * in 32bit compatibility mode with EFER.LME = 1, CS.L = 0, CS.D = 1
236 236 * (and in turn EFER.LMA = 1). To jump into 64bit mode we use
237 237 * the new gdt/idt that has __KERNEL_CS with CS.L = 1.
+1 -1
arch/x86/crypto/crc32-pclmul_glue.c
···
24 24 /*
25 25 * Copyright 2012 Xyratex Technology Limited
26 26 *
27   - * Wrappers for kernel crypto shash api to pclmulqdq crc32 imlementation.
  27 + * Wrappers for kernel crypto shash api to pclmulqdq crc32 implementation.
28 28 */
29 29 #include <linux/init.h>
30 30 #include <linux/module.h>
+1 -1
arch/x86/crypto/twofish-x86_64-asm_64-3way.S
···
88 88
89 89 /*
90 90 * Combined G1 & G2 function. Reordered with help of rotates to have moves
91   - * at begining.
  91 + * at beginning.
92 92 */
93 93 #define g1g2_3(ab, cd, Tx0, Tx1, Tx2, Tx3, Ty0, Ty1, Ty2, Ty3, x, y) \
94 94 /* G1,1 && G2,1 */ \
+1 -1
arch/x86/entry/entry_32.S
···
209 209 *
210 210 * Lets build a 5 entry IRET frame after that, such that struct pt_regs
211 211 * is complete and in particular regs->sp is correct. This gives us
212   - * the original 6 enties as gap:
  212 + * the original 6 entries as gap:
213 213 *
214 214 * 14*4(%esp) - <previous context>
215 215 * 13*4(%esp) - gap / flags
+1 -1
arch/x86/entry/entry_64.S
···
511 511 /*
512 512 * No need to switch back to the IST stack. The current stack is either
513 513 * identical to the stack in the IRET frame or the VC fall-back stack,
514   - * so it is definitly mapped even with PTI enabled.
  514 + * so it is definitely mapped even with PTI enabled.
515 515 */
516 516 jmp paranoid_exit
517 517
+1 -1
arch/x86/entry/vdso/vdso2c.c
···
218 218
219 219 /*
220 220 * Figure out the struct name. If we're writing to a .so file,
221   - * generate raw output insted.
  221 + * generate raw output instead.
222 222 */
223 223 name = strdup(argv[3]);
224 224 namelen = strlen(name);
+1 -1
arch/x86/entry/vdso/vdso32/system_call.S
···
29 29 * anyone with an AMD CPU, for example). Nonetheless, we try to keep
30 30 * it working approximately as well as it ever worked.
31 31 *
32   - * This link may eludicate some of the history:
  32 + * This link may elucidate some of the history:
33 33 * https://android-review.googlesource.com/#/q/Iac3295376d61ef83e713ac9b528f3b50aa780cd7
34 34 * personally, I find it hard to understand what's going on there.
35 35 *
+1 -1
arch/x86/entry/vdso/vma.c
···
358 358 mmap_write_lock(mm);
359 359 /*
360 360 * Check if we have already mapped vdso blob - fail to prevent
361   - * abusing from userspace install_speciall_mapping, which may
  361 + * abusing from userspace install_special_mapping, which may
362 362 * not do accounting and rlimit right.
363 363 * We could search vma near context.vdso, but it's a slowpath,
364 364 * so let's explicitly check all VMAs to be completely sure.
+1 -1
arch/x86/entry/vdso/vsgx.S
···
137 137
138 138 /*
139 139 * If the return from callback is zero or negative, return immediately,
140   - * else re-execute ENCLU with the postive return value interpreted as
  140 + * else re-execute ENCLU with the positive return value interpreted as
141 141 * the requested ENCLU function.
142 142 */
143 143 cmp $0, %eax
+1 -1
arch/x86/events/intel/bts.c
···
594 594 * we cannot use the user mapping since it will not be available
595 595 * if we're not running the owning process.
596 596 *
597   - * With PTI we can't use the kernal map either, because its not
  597 + * With PTI we can't use the kernel map either, because its not
598 598 * there when we run userspace.
599 599 *
600 600 * For now, disable this driver when using PTI.
+1 -1
arch/x86/events/intel/core.c
···
2776 2776 * processing loop coming after that the function, otherwise
2777 2777 * phony regular samples may be generated in the sampling buffer
2778 2778 * not marked with the EXACT tag. Another possibility is to have
2779   - * one PEBS event and at least one non-PEBS event whic hoverflows
  2779 + * one PEBS event and at least one non-PEBS event which overflows
2780 2780 * while PEBS has armed. In this case, bit 62 of GLOBAL_STATUS will
2781 2781 * not be set, yet the overflow status bit for the PEBS counter will
2782 2782 * be on Skylake.
+1 -1
arch/x86/events/intel/p4.c
···
1313 1313 .get_event_constraints = x86_get_event_constraints,
1314 1314 /*
1315 1315 * IF HT disabled we may need to use all
1316   - * ARCH_P4_MAX_CCCR counters simulaneously
  1316 + * ARCH_P4_MAX_CCCR counters simultaneously
1317 1317 * though leave it restricted at moment assuming
1318 1318 * HT is on
1319 1319 */
+1 -1
arch/x86/include/asm/agp.h
···
9 9 * Functions to keep the agpgart mappings coherent with the MMU. The
10 10 * GART gives the CPU a physical alias of pages in memory. The alias
11 11 * region is mapped uncacheable. Make sure there are no conflicting
12   - * mappings with different cachability attributes for the same
  12 + * mappings with different cacheability attributes for the same
13 13 * page. This avoids data corruption on some CPUs.
14 14 */
15 15
+1 -1
arch/x86/include/asm/intel_pt.h
···
3 3 #define _ASM_X86_INTEL_PT_H
4 4
5 5 #define PT_CPUID_LEAVES 2
6   - #define PT_CPUID_REGS_NUM 4 /* number of regsters (eax, ebx, ecx, edx) */
  6 + #define PT_CPUID_REGS_NUM 4 /* number of registers (eax, ebx, ecx, edx) */
7 7
8 8 enum pt_capabilities {
9 9 PT_CAP_max_subleaf = 0,
+1 -1
arch/x86/include/asm/set_memory.h
···
8 8 /*
9 9 * The set_memory_* API can be used to change various attributes of a virtual
10 10 * address range. The attributes include:
11   - * Cachability : UnCached, WriteCombining, WriteThrough, WriteBack
  11 + * Cacheability : UnCached, WriteCombining, WriteThrough, WriteBack
12 12 * Executability : eXecutable, NoteXecutable
13 13 * Read/Write : ReadOnly, ReadWrite
14 14 * Presence : NotPresent
+1 -1
arch/x86/kernel/amd_nb.c
···
1 1 // SPDX-License-Identifier: GPL-2.0-only
2 2 /*
3   - * Shared support code for AMD K8 northbridges and derivates.
  3 + * Shared support code for AMD K8 northbridges and derivatives.
4 4 * Copyright 2006 Andi Kleen, SUSE Labs.
5 5 */
6 6
+1 -1
arch/x86/kernel/apm_32.c
···
1025 1025 * status which gives the rough battery status, and current power
1026 1026 * source. The bat value returned give an estimate as a percentage
1027 1027 * of life and a status value for the battery. The estimated life
1028   - * if reported is a lifetime in secodnds/minutes at current power
  1028 + * if reported is a lifetime in seconds/minutes at current power
1029 1029 * consumption.
1030 1030 */
1031 1031
+1 -1
arch/x86/kernel/cpu/intel.c
···
301 301 * The operating system must reload CR3 to cause the TLB to be flushed"
302 302 *
303 303 * As a result, boot_cpu_has(X86_FEATURE_PGE) in arch/x86/include/asm/tlbflush.h
304   - * should be false so that __flush_tlb_all() causes CR3 insted of CR4.PGE
  304 + * should be false so that __flush_tlb_all() causes CR3 instead of CR4.PGE
305 305 * to be modified.
306 306 */
307 307 if (c->x86 == 5 && c->x86_model == 9) {
+1 -1
arch/x86/kernel/cpu/mce/severity.c
···
142 142 MASK(MCI_STATUS_OVER|MCI_UC_SAR, MCI_STATUS_UC|MCI_STATUS_AR)
143 143 ),
144 144 MCESEV(
145   - KEEP, "Non signalled machine check",
  145 + KEEP, "Non signaled machine check",
146 146 SER, BITCLR(MCI_STATUS_S)
147 147 ),
148 148
+1 -1
arch/x86/kernel/cpu/mtrr/mtrr.c
···
799 799 *
800 800 * This routine is called in two cases:
801 801 *
802   - * 1. very earily time of software resume, when there absolutely
  802 + * 1. very early time of software resume, when there absolutely
803 803 * isn't mtrr entry changes;
804 804 *
805 805 * 2. cpu hotadd time. We let mtrr_add/del_page hold cpuhotplug
+2 -2
arch/x86/kernel/cpu/resctrl/monitor.c
···
397 397 * timer. Having 1s interval makes the calculation of bandwidth simpler.
398 398 *
399 399 * Although MBA's goal is to restrict the bandwidth to a maximum, there may
400   - * be a need to increase the bandwidth to avoid uncecessarily restricting
  400 + * be a need to increase the bandwidth to avoid unnecessarily restricting
401 401 * the L2 <-> L3 traffic.
402 402 *
403 403 * Since MBA controls the L2 external bandwidth where as MBM measures the
···
480 480
481 481 /*
482 482 * Delta values are updated dynamically package wise for each
483   - * rdtgrp everytime the throttle MSR changes value.
  483 + * rdtgrp every time the throttle MSR changes value.
484 484 *
485 485 * This is because (1)the increase in bandwidth is not perfectly
486 486 * linear and only "approximately" linear even when the hardware
+1 -1
arch/x86/kernel/cpu/resctrl/rdtgroup.c
···
2555 2555 /*
2556 2556 * This creates a directory mon_data which contains the monitored data.
2557 2557 *
2558   - * mon_data has one directory for each domain whic are named
  2558 + * mon_data has one directory for each domain which are named
2559 2559 * in the format mon_<domain_name>_<domain_id>. For ex: A mon_data
2560 2560 * with L3 domain looks as below:
2561 2561 * ./mon_data:
+1 -1
arch/x86/kernel/relocate_kernel_32.S
···
107 107 * - Write protect disabled
108 108 * - No task switch
109 109 * - Don't do FP software emulation.
110   - * - Proctected mode enabled
  110 + * - Protected mode enabled
111 111 */
112 112 movl %cr0, %eax
113 113 andl $~(X86_CR0_PG | X86_CR0_AM | X86_CR0_WP | X86_CR0_TS | X86_CR0_EM), %eax
+1 -1
arch/x86/kernel/relocate_kernel_64.S
···
121 121 * - Write protect disabled
122 122 * - No task switch
123 123 * - Don't do FP software emulation.
124   - * - Proctected mode enabled
  124 + * - Protected mode enabled
125 125 */
126 126 movq %cr0, %rax
127 127 andq $~(X86_CR0_AM | X86_CR0_WP | X86_CR0_TS | X86_CR0_EM), %rax
+1 -1
arch/x86/kernel/smp.c
···
204 204 }
205 205 /*
206 206 * Don't wait longer than 10 ms if the caller didn't
207   - * reqeust it. If wait is true, the machine hangs here if
  207 + * request it. If wait is true, the machine hangs here if
208 208 * one or more CPUs do not reach shutdown state.
209 209 */
210 210 timeout = USEC_PER_MSEC * 10;
+1 -1
arch/x86/kernel/tsc_sync.c
···
472 472 /*
473 473 * Add the result to the previous adjustment value.
474 474 *
475   - * The adjustement value is slightly off by the overhead of the
  475 + * The adjustment value is slightly off by the overhead of the
476 476 * sync mechanism (observed values are ~200 TSC cycles), but this
477 477 * really depends on CPU, node distance and frequency. So
478 478 * compensating for this is hard to get right. Experiments show
+1 -1
arch/x86/kernel/umip.c
···
272 272 * by whether the operand is a register or a memory location.
273 273 * If operand is a register, return as many bytes as the operand
274 274 * size. If operand is memory, return only the two least
275   - * siginificant bytes.
  275 + * significant bytes.
276 276 */
277 277 if (X86_MODRM_MOD(insn->modrm.value) == 3)
278 278 *data_size = insn->opnd_bytes;
+1 -1
arch/x86/kvm/svm/avic.c
···
727 727 struct amd_svm_iommu_ir *ir;
728 728
729 729 /**
730   - * In some cases, the existing irte is updaed and re-set,
  730 + * In some cases, the existing irte is updated and re-set,
731 731 * so we need to check here if it's already been * added
732 732 * to the ir_list.
733 733 */
+1 -1
arch/x86/kvm/vmx/nested.c
···
3537 3537 * snapshot restore (migration).
3538 3538 *
3539 3539 * In this flow, it is assumed that vmcs12 cache was
3540   - * trasferred as part of captured nVMX state and should
  3540 + * transferred as part of captured nVMX state and should
3541 3541 * therefore not be read from guest memory (which may not
3542 3542 * exist on destination host yet).
3543 3543 */
+1 -1
arch/x86/math-emu/reg_ld_str.c
···
964 964 /* The return value (in eax) is zero if the result is exact,
965 965 if bits are changed due to rounding, truncation, etc, then
966 966 a non-zero value is returned */
967   - /* Overflow is signalled by a non-zero return value (in eax).
  967 + /* Overflow is signaled by a non-zero return value (in eax).
968 968 In the case of overflow, the returned significand always has the
969 969 largest possible value */
970 970 int FPU_round_to_int(FPU_REG *r, u_char tag)
+1 -1
arch/x86/math-emu/reg_round.S
···
575 575 #ifdef PECULIAR_486
576 576 /*
577 577 * This implements a special feature of 80486 behaviour.
578   - * Underflow will be signalled even if the number is
  578 + * Underflow will be signaled even if the number is
579 579 * not a denormal after rounding.
580 580 * This difference occurs only for masked underflow, and not
581 581 * in the unmasked case.
+1 -1
arch/x86/mm/fault.c
···
1497 1497 * userspace task is trying to access some valid (from guest's point of
1498 1498 * view) memory which is not currently mapped by the host (e.g. the
1499 1499 * memory is swapped out). Note, the corresponding "page ready" event
1500   - * which is injected when the memory becomes available, is delived via
  1500 + * which is injected when the memory becomes available, is delivered via
1501 1501 * an interrupt mechanism and not a #PF exception
1502 1502 * (see arch/x86/kernel/kvm.c: sysvec_kvm_asyncpf_interrupt()).
1503 1503 *
+1 -1
arch/x86/mm/init.c
···
756 756
757 757 #ifdef CONFIG_X86_64
758 758 if (max_pfn > max_low_pfn) {
759   - /* can we preseve max_low_pfn ?*/
  759 + /* can we preserve max_low_pfn ?*/
760 760 max_low_pfn = max_pfn;
761 761 }
762 762 #else
+1 -1
arch/x86/mm/pkeys.c
···
128 128 /*
129 129 * Called from the FPU code when creating a fresh set of FPU
130 130 * registers. This is called from a very specific context where
131   - * we know the FPU regstiers are safe for use and we can use PKRU
  131 + * we know the FPU registers are safe for use and we can use PKRU
132 132 * directly.
133 133 */
134 134 void copy_init_pkru_to_fpregs(void)
+1 -1
arch/x86/platform/efi/quirks.c
···
441 441 * 1.4.4 with SGX enabled booting Linux via Fedora 24's
442 442 * grub2-efi on a hard disk. (And no, I don't know why
443 443 * this happened, but Linux should still try to boot rather
444   - * panicing early.)
  444 + * panicking early.)
445 445 */
446 446 rm_size = real_mode_size_needed();
447 447 if (rm_size && (start + rm_size) < (1<<20) && size >= rm_size) {
+1 -1
arch/x86/platform/olpc/olpc-xo15-sci.c
···
27 27 * wake-on-close. This is implemented as standard by the XO-1.5 DSDT.
28 28 *
29 29 * We provide here a sysfs attribute that will additionally enable
30   - * wake-on-close behavior. This is useful (e.g.) when we oportunistically
  30 + * wake-on-close behavior. This is useful (e.g.) when we opportunistically
31 31 * suspend with the display running; if the lid is then closed, we want to
32 32 * wake up to turn the display off.
33 33 *
+1 -1
arch/x86/platform/olpc/olpc_dt.c
···
131 131 const size_t chunk_size = max(PAGE_SIZE, size);
132 132
133 133 /*
134   - * To mimimize the number of allocations, grab at least
  134 + * To minimize the number of allocations, grab at least
135 135 * PAGE_SIZE of memory (that's an arbitrary choice that's
136 136 * fast enough on the platforms we care about while minimizing
137 137 * wasted bootmem) and hand off chunks of it to callers.
+1 -1
arch/x86/power/cpu.c
···
321 321
322 322 /*
323 323 * When bsp_check() is called in hibernate and suspend, cpu hotplug
324   - * is disabled already. So it's unnessary to handle race condition between
  324 + * is disabled already. So it's unnecessary to handle race condition between
325 325 * cpumask query and cpu hotplug.
326 326 */
327 327 static int bsp_check(void)
+1 -1
arch/x86/realmode/init.c
···
103 103 *ptr += phys_base;
104 104 }
105 105
106   - /* Must be perfomed *after* relocation. */
  106 + /* Must be performed *after* relocation. */
107 107 trampoline_header = (struct trampoline_header *)
108 108 __va(real_mode_header->trampoline_header);
109 109
+1 -1
arch/x86/xen/mmu_pv.c
···
2410 2410 rmd.prot = prot;
2411 2411 /*
2412 2412 * We use the err_ptr to indicate if there we are doing a contiguous
2413   - * mapping or a discontigious mapping.
  2413 + * mapping or a discontiguous mapping.
2414 2414 */
2415 2415 rmd.contiguous = !err_ptr;
2416 2416 rmd.no_translate = no_translate;