Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86: Fix various typos in comments

Go over arch/x86/ and fix common typos in comments,
and a typo in an actual function argument name.

No change in functionality intended.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

+28 -28
+1 -1
arch/x86/crypto/cast5_avx_glue.c
@@ -1,5 +1,5 @@
 /*
- * Glue Code for the AVX assembler implemention of the Cast5 Cipher
+ * Glue Code for the AVX assembler implementation of the Cast5 Cipher
  *
  * Copyright (C) 2012 Johannes Goetzfried
  * <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
+1 -1
arch/x86/crypto/cast6_avx_glue.c
@@ -1,5 +1,5 @@
 /*
- * Glue Code for the AVX assembler implemention of the Cast6 Cipher
+ * Glue Code for the AVX assembler implementation of the Cast6 Cipher
  *
  * Copyright (C) 2012 Johannes Goetzfried
  * <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
+1 -1
arch/x86/entry/common.c
@@ -140,7 +140,7 @@
 /*
  * In order to return to user mode, we need to have IRQs off with
  * none of EXIT_TO_USERMODE_LOOP_FLAGS set. Several of these flags
- * can be set at any time on preemptable kernels if we have IRQs on,
+ * can be set at any time on preemptible kernels if we have IRQs on,
  * so we need to loop. Disabling preemption wouldn't help: doing the
  * work to clear some of the flags can sleep.
  */
+1 -1
arch/x86/entry/vdso/vma.c
@@ -261,7 +261,7 @@
  * abusing from userspace install_speciall_mapping, which may
  * not do accounting and rlimit right.
  * We could search vma near context.vdso, but it's a slowpath,
- * so let's explicitely check all VMAs to be completely sure.
+ * so let's explicitly check all VMAs to be completely sure.
  */
 for (vma = mm->mmap; vma; vma = vma->vm_next) {
 	if (vma_is_special_mapping(vma, &vdso_mapping) ||
+1 -1
arch/x86/events/intel/bts.c
@@ -589,7 +589,7 @@
  * the AUX buffer.
  *
  * However, since this driver supports per-CPU and per-task inherit
- * we cannot use the user mapping since it will not be availble
+ * we cannot use the user mapping since it will not be available
  * if we're not running the owning process.
  *
  * With PTI we can't use the kernal map either, because its not
+1 -1
arch/x86/events/intel/core.c
@@ -1930,7 +1930,7 @@
  * in sequence on the same PMC or on different PMCs.
  *
  * In practise it appears some of these events do in fact count, and
- * we need to programm all 4 events.
+ * we need to program all 4 events.
  */
 static void intel_pmu_nhm_workaround(void)
 {
+1 -1
arch/x86/events/intel/ds.c
@@ -1199,7 +1199,7 @@
 /*
  * We must however always use iregs for the unwinder to stay sane; the
  * record BP,SP,IP can point into thin air when the record is from a
- * previous PMI context or an (I)RET happend between the record and
+ * previous PMI context or an (I)RET happened between the record and
  * PMI.
  */
 if (sample_type & PERF_SAMPLE_CALLCHAIN)
+1 -1
arch/x86/events/intel/p4.c
@@ -1259,7 +1259,7 @@
 }
 /*
  * Perf does test runs to see if a whole group can be assigned
- * together succesfully. There can be multiple rounds of this.
+ * together successfully. There can be multiple rounds of this.
  * Unfortunately, p4_pmu_swap_config_ts touches the hwc->config
  * bits, such that the next round of group assignments will
  * cause the above p4_should_swap_ts to pass instead of fail.
+1 -1
arch/x86/include/asm/alternative.h
@@ -167,7 +167,7 @@
 /*
  * Alternative inline assembly with input.
  *
- * Pecularities:
+ * Peculiarities:
  * No memory clobber here.
  * Argument numbers start with 1.
  * Best is to use constraints that are fixed size (like (%1) ... "r")
+1 -1
arch/x86/include/asm/cmpxchg.h
@@ -7,7 +7,7 @@
 #include <asm/alternative.h> /* Provides LOCK_PREFIX */
 
 /*
- * Non-existant functions to indicate usage errors at link time
+ * Non-existent functions to indicate usage errors at link time
  * (or compile-time if the compiler implements __compiletime_error().
  */
 extern void __xchg_wrong_size(void)
+1 -1
arch/x86/include/asm/efi.h
@@ -19,7 +19,7 @@
  * This is the main reason why we're doing stable VA mappings for RT
  * services.
  *
- * This flag is used in conjuction with a chicken bit called
+ * This flag is used in conjunction with a chicken bit called
  * "efi=old_map" which can be used as a fallback to the old runtime
  * services mapping method in case there's some b0rkage with a
  * particular EFI implementation (haha, it is hard to hold up the
+1 -1
arch/x86/kernel/acpi/boot.c
@@ -848,7 +848,7 @@
 /**
  * acpi_ioapic_registered - Check whether IOAPIC assoicatied with @gsi_base
  * has been registered
- * @handle: ACPI handle of the IOAPIC deivce
+ * @handle: ACPI handle of the IOAPIC device
  * @gsi_base: GSI base associated with the IOAPIC
  *
  * Assume caller holds some type of lock to serialize acpi_ioapic_registered()
+1 -1
arch/x86/kernel/cpu/mcheck/mce.c
@@ -686,7 +686,7 @@
  * errors here. However this would be quite problematic --
  * we would need to reimplement the Monarch handling and
  * it would mess up the exclusion between exception handler
- * and poll hander -- * so we skip this for now.
+ * and poll handler -- * so we skip this for now.
  * These cases should not happen anyways, or only when the CPU
  * is already totally * confused. In this case it's likely it will
  * not fully execute the machine check handler either.
+1 -1
arch/x86/kernel/crash_dump_64.c
@@ -62,7 +62,7 @@
 
 /**
  * copy_oldmem_page_encrypted - same as copy_oldmem_page() above but ioremap the
- * memory with the encryption mask set to accomodate kdump on SME-enabled
+ * memory with the encryption mask set to accommodate kdump on SME-enabled
  * machines.
  */
 ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+1 -1
arch/x86/kernel/process_64.c
@@ -684,7 +684,7 @@
 /* TBD: overwrites user setup. Should have two bits.
    But 64bit processes have always behaved this way,
    so it's not too bad. The main problem is just that
-   32bit childs are affected again. */
+   32bit children are affected again. */
 current->personality &= ~READ_IMPLIES_EXEC;
 }
 
+2 -2
arch/x86/kvm/vmx.c
@@ -485,7 +485,7 @@
 /*
  * To allow migration of L1 (complete with its L2 guests) between
  * machines of different natural widths (32 or 64 bit), we cannot have
- * unsigned long fields with no explict size. We use u64 (aliased
+ * unsigned long fields with no explicit size. We use u64 (aliased
  * natural_width) instead. Luckily, x86 is little-endian.
  */
 natural_width cr0_guest_host_mask;
@@ -4936,7 +4936,7 @@
  * vmcs->revision_id to KVM_EVMCS_VERSION instead of
  * revision_id reported by MSR_IA32_VMX_BASIC.
  *
- * However, even though not explictly documented by
+ * However, even though not explicitly documented by
  * TLFS, VMXArea passed as VMXON argument should
  * still be marked with revision_id reported by
  * physical CPU.
+1 -1
arch/x86/kvm/x86.c
@@ -9280,7 +9280,7 @@
  * with dirty logging disabled in order to eliminate unnecessary GPA
  * logging in PML buffer (and potential PML buffer full VMEXT). This
  * guarantees leaving PML enabled during guest's lifetime won't have
- * any additonal overhead from PML when guest is running with dirty
+ * any additional overhead from PML when guest is running with dirty
  * logging disabled for memory slots.
  *
  * kvm_x86_ops->slot_enable_log_dirty is called when switching new slot
+1 -1
arch/x86/mm/pageattr.c
@@ -1704,7 +1704,7 @@
 } else if (!(in_flag & CPA_PAGES_ARRAY)) {
 	/*
 	 * in_flag of CPA_PAGES_ARRAY implies it is aligned.
-	 * No need to cehck in that case
+	 * No need to check in that case
 	 */
 	if (*addr & ~PAGE_MASK) {
 		*addr &= PAGE_MASK;
+2 -2
arch/x86/platform/ce4100/ce4100.c
@@ -84,7 +84,7 @@
 }
 
 static void ce4100_serial_fixup(int port, struct uart_port *up,
-				u32 *capabilites)
+				u32 *capabilities)
 {
 #ifdef CONFIG_EARLY_PRINTK
 	/*
@@ -111,7 +111,7 @@
 	up->serial_in = ce4100_mem_serial_in;
 	up->serial_out = ce4100_mem_serial_out;
 
-	*capabilites |= (1 << 12);
+	*capabilities |= (1 << 12);
 }
 
 static __init void sdv_serial_fixup(void)
+1 -1
arch/x86/platform/intel-mid/device_libs/platform_bcm43xx.c
@@ -1,5 +1,5 @@
 /*
- * platform_bcm43xx.c: bcm43xx platform data initilization file
+ * platform_bcm43xx.c: bcm43xx platform data initialization file
  *
  * (C) Copyright 2016 Intel Corporation
  * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+1 -1
arch/x86/platform/intel-mid/device_libs/platform_mrfld_spidev.c
@@ -1,5 +1,5 @@
 /*
- * spidev platform data initilization file
+ * spidev platform data initialization file
  *
  * (C) Copyright 2014, 2016 Intel Corporation
  * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+1 -1
arch/x86/platform/intel-mid/device_libs/platform_pcal9555a.c
@@ -1,5 +1,5 @@
 /*
- * PCAL9555a platform data initilization file
+ * PCAL9555a platform data initialization file
  *
  * Copyright (C) 2016, Intel Corporation
  *
+1 -1
arch/x86/platform/intel/iosf_mbi.c
@@ -13,7 +13,7 @@
  *
  *
  * The IOSF-SB is a fabric bus available on Atom based SOC's that uses a
- * mailbox interface (MBI) to communicate with mutiple devices. This
+ * mailbox interface (MBI) to communicate with multiple devices. This
  * driver implements access to this interface for those platforms that can
  * enumerate the device using PCI.
  */
+1 -1
arch/x86/platform/olpc/olpc-xo1-sci.c
@@ -109,7 +109,7 @@
  * the edge detector hookup on the gpio inputs on the geode is
  * odd, to say the least. See http://dev.laptop.org/ticket/5703
  * for details, but in a nutshell: we don't use the edge
- * detectors. instead, we make use of an anomoly: with the both
+ * detectors. instead, we make use of an anomaly: with the both
  * edge detectors turned off, we still get an edge event on a
  * positive edge transition. to take advantage of this, we use the
  * front-end inverter to ensure that that's the edge we're always
+1 -1
arch/x86/platform/uv/uv_nmi.c
@@ -560,7 +560,7 @@
 	}
 }
 
-/* Ping non-responding CPU's attemping to force them into the NMI handler */
+/* Ping non-responding CPU's attempting to force them into the NMI handler */
 static void uv_nmi_nr_cpus_ping(void)
 {
 	int cpu;
+1 -1
arch/x86/xen/setup.c
@@ -493,7 +493,7 @@
  * The remap information (which mfn remap to which pfn) is contained in the
  * to be remapped memory itself in a linked list anchored at xen_remap_mfn.
  * This scheme allows to remap the different chunks in arbitrary order while
- * the resulting mapping will be independant from the order.
+ * the resulting mapping will be independent from the order.
  */
 void __init xen_remap_memory(void)
 {