Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86: Fix leftover comment typos

Signed-off-by: Ingo Molnar <mingo@kernel.org>

+6 -6
+1 -1
arch/x86/hyperv/hv_init.c
@@ -623,7 +623,7 @@
	 * output parameter to the hypercall below and so it should be
	 * compatible with 'virt_to_phys'. Which means, it's address should be
	 * directly mapped. Use 'static' to keep it compatible; stack variables
-	 * can be virtually mapped, making them imcompatible with
+	 * can be virtually mapped, making them incompatible with
	 * 'virt_to_phys'.
	 * Hypercall input/output addresses should also be 8-byte aligned.
	 */
+1 -1
arch/x86/include/asm/sgx.h
@@ -13,7 +13,7 @@
 /*
  * This file contains both data structures defined by SGX architecture and Linux
  * defined software data structures and functions. The two should not be mixed
- * together for better readibility. The architectural definitions come first.
+ * together for better readability. The architectural definitions come first.
  */

 /* The SGX specific CPUID function. */
+1 -1
arch/x86/include/asm/stackprotector.h
@@ -11,7 +11,7 @@
 * The same segment is shared by percpu area and stack canary. On
 * x86_64, percpu symbols are zero based and %gs (64-bit) points to the
 * base of percpu area. The first occupant of the percpu area is always
-* fixed_percpu_data which contains stack_canary at the approproate
+* fixed_percpu_data which contains stack_canary at the appropriate
 * offset. On x86_32, the stack canary is just a regular percpu
 * variable.
 *
+1 -1
arch/x86/kernel/kprobes/core.c
@@ -674,7 +674,7 @@
		break;

	if (insn->addr_bytes != sizeof(unsigned long))
-		return -EOPNOTSUPP;	/* Don't support differnt size */
+		return -EOPNOTSUPP;	/* Don't support different size */
	if (X86_MODRM_MOD(opcode) != 3)
		return -EOPNOTSUPP;	/* TODO: support memory addressing */

+1 -1
arch/x86/kvm/mmu/mmu.c
@@ -2374,7 +2374,7 @@
	 * page is available, while the caller may end up allocating as many as
	 * four pages, e.g. for PAE roots or for 5-level paging. Temporarily
	 * exceeding the (arbitrary by default) limit will not harm the host,
-	 * being too agressive may unnecessarily kill the guest, and getting an
+	 * being too aggressive may unnecessarily kill the guest, and getting an
	 * exact count is far more trouble than it's worth, especially in the
	 * page fault paths.
	 */
+1 -1
arch/x86/kvm/mmu/tdp_mmu.c
@@ -1017,7 +1017,7 @@

	if (!is_shadow_present_pte(iter.old_spte)) {
		/*
-		 * If SPTE has been forzen by another thread, just
+		 * If SPTE has been frozen by another thread, just
		 * give up and retry, avoiding unnecessary page table
		 * allocation and free.
		 */