Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arm64: Fix typos and spelling errors in comments

This patch corrects several minor typographical and spelling errors
in comments across multiple arm64 source files.

No functional changes.

Signed-off-by: mrigendrachaubey <mrigendra.chaubey@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Authored by mrigendrachaubey, committed by Catalin Marinas
96ac403e 3a866087

+22 -22
+2 -2
arch/arm64/include/asm/assembler.h
@@ -371,7 +371,7 @@
  * [start, end) with dcache line size explicitly provided.
  *
  * op:		operation passed to dc instruction
- * domain:	domain used in dsb instruciton
+ * domain:	domain used in dsb instruction
  * start:	starting virtual address of the region
  * end:		end virtual address of the region
  * linesz:	dcache line size
@@ -412,7 +412,7 @@
  * [start, end)
  *
  * op:		operation passed to dc instruction
- * domain:	domain used in dsb instruciton
+ * domain:	domain used in dsb instruction
  * start:	starting virtual address of the region
  * end:		end virtual address of the region
  * fixup:	optional label to branch to on user fault
+2 -2
arch/arm64/include/asm/cpufeature.h
@@ -199,7 +199,7 @@
  * registers (e.g, SCTLR, TCR etc.) or patching the kernel via
  * alternatives. The kernel patching is batched and performed at later
  * point. The actions are always initiated only after the capability
- * is finalised. This is usally denoted by "enabling" the capability.
+ * is finalised. This is usually denoted by "enabling" the capability.
  * The actions are initiated as follows :
  * a) Action is triggered on all online CPUs, after the capability is
  *    finalised, invoked within the stop_machine() context from
@@ -251,7 +251,7 @@
 #define ARM64_CPUCAP_SCOPE_LOCAL_CPU	((u16)BIT(0))
 #define ARM64_CPUCAP_SCOPE_SYSTEM	((u16)BIT(1))
 /*
- * The capabilitiy is detected on the Boot CPU and is used by kernel
+ * The capability is detected on the Boot CPU and is used by kernel
  * during early boot. i.e, the capability should be "detected" and
  * "enabled" as early as possibly on all booting CPUs.
  */
+1 -1
arch/arm64/include/asm/el2_setup.h
@@ -28,7 +28,7 @@
  * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
  * don't advertise it (they predate this relaxation).
  *
- * Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
+ * Initialize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
  * indicating whether the CPU is running in E2H mode.
  */
	mrs_s	x1, SYS_ID_AA64MMFR4_EL1
+2 -2
arch/arm64/include/asm/pgtable.h
@@ -432,7 +432,7 @@
  *   1      0    |  1           0          1
  *   1      1    |  0           1          x
  *
- * When hardware DBM is not present, the sofware PTE_DIRTY bit is updated via
+ * When hardware DBM is not present, the software PTE_DIRTY bit is updated via
  * the page fault mechanism. Checking the dirty status of a pte becomes:
  *
  *   PTE_DIRTY || (PTE_WRITE && !PTE_RDONLY)
@@ -598,7 +598,7 @@
 /*
  * pte_present_invalid() tells us that the pte is invalid from HW
  * perspective but present from SW perspective, so the fields are to be
- * interpretted as per the HW layout. The second 2 checks are the unique
+ * interpreted as per the HW layout. The second 2 checks are the unique
  * encoding that we use for PROT_NONE. It is insufficient to only use
  * the first check because we share the same encoding scheme with pmds
  * which support pmd_mkinvalid(), so can be present-invalid without
+1 -1
arch/arm64/include/asm/suspend.h
@@ -23,7 +23,7 @@
  * __cpu_suspend_enter()'s caller, and populated by __cpu_suspend_enter().
  * This data must survive until cpu_resume() is called.
  *
- * This struct desribes the size and the layout of the saved cpu state.
+ * This struct describes the size and the layout of the saved cpu state.
  * The layout of the callee_saved_regs is defined by the implementation
  * of __cpu_suspend_enter(), and cpu_resume(). This struct must be passed
  * in by the caller as __cpu_suspend_enter()'s stack-frame is gone once it
+1 -1
arch/arm64/kernel/acpi.c
@@ -133,7 +133,7 @@
 
	/*
	 * FADT is required on arm64; retrieve it to check its presence
-	 * and carry out revision and ACPI HW reduced compliancy tests
+	 * and carry out revision and ACPI HW reduced compliance tests
	 */
	status = acpi_get_table(ACPI_SIG_FADT, 0, &table);
	if (ACPI_FAILURE(status)) {
+1 -1
arch/arm64/kernel/cpufeature.c
@@ -1002,7 +1002,7 @@
 
 /*
  * Initialise the CPU feature register from Boot CPU values.
- * Also initiliases the strict_mask for the register.
+ * Also initialises the strict_mask for the register.
  * Any bits that are not covered by an arm64_ftr_bits entry are considered
  * RES0 for the system-wide value, and must strictly match.
  */
+1 -1
arch/arm64/kernel/ftrace.c
@@ -492,7 +492,7 @@
		return ret;
 
	/*
-	 * When using mcount, callsites in modules may have been initalized to
+	 * When using mcount, callsites in modules may have been initialized to
	 * call an arbitrary module PLT (which redirects to the _mcount stub)
	 * rather than the ftrace PLT we'll use at runtime (which redirects to
	 * the ftrace trampoline). We can ignore the old PLT when initializing
+1 -1
arch/arm64/kernel/machine_kexec.c
@@ -251,7 +251,7 @@
  * marked as Reserved as memory was allocated via memblock_reserve().
  *
  * In hibernation, the pages which are Reserved and yet "nosave" are excluded
- * from the hibernation iamge. crash_is_nosave() does thich check for crash
+ * from the hibernation image. crash_is_nosave() does thich check for crash
  * dump kernel and will reduce the total size of hibernation image.
  */
 
+1 -1
arch/arm64/kernel/probes/uprobes.c
@@ -131,7 +131,7 @@
	struct uprobe_task *utask = current->utask;
 
	/*
-	 * Task has received a fatal signal, so reset back to probbed
+	 * Task has received a fatal signal, so reset back to probed
	 * address.
	 */
	instruction_pointer_set(regs, utask->vaddr);
+1 -1
arch/arm64/kernel/sdei.c
@@ -202,7 +202,7 @@
 /*
  * do_sdei_event() returns one of:
  *  SDEI_EV_HANDLED -  success, return to the interrupted context.
- *  SDEI_EV_FAILED  -  failure, return this error code to firmare.
+ *  SDEI_EV_FAILED  -  failure, return this error code to firmware.
  *  virtual-address -  success, return to this address.
  */
 unsigned long __kprobes do_sdei_event(struct pt_regs *regs,
+2 -2
arch/arm64/kernel/smp.c
@@ -350,7 +350,7 @@
 
	/*
	 * Now that the dying CPU is beyond the point of no return w.r.t.
-	 * in-kernel synchronisation, try to get the firwmare to help us to
+	 * in-kernel synchronisation, try to get the firmware to help us to
	 * verify that it has really left the kernel before we consider
	 * clobbering anything it might still be using.
	 */
@@ -523,7 +523,7 @@
 
	/*
	 * Availability of the acpi handle is sufficient to establish
-	 * that _STA has aleady been checked. No need to recheck here.
+	 * that _STA has already been checked. No need to recheck here.
	 */
	c->hotpluggable = arch_cpu_is_hotpluggable(cpu);
 
+1 -1
arch/arm64/kernel/traps.c
@@ -922,7 +922,7 @@
	__show_regs(regs);
 
	/*
-	 * We use nmi_panic to limit the potential for recusive overflows, and
+	 * We use nmi_panic to limit the potential for recursive overflows, and
	 * to get a better stack trace.
	 */
	nmi_panic(NULL, "kernel stack overflow");
+1 -1
arch/arm64/kvm/arch_timer.c
@@ -815,7 +815,7 @@
		tpt = tpc = true;
 
	/*
-	 * For the poor sods that could not correctly substract one value
+	 * For the poor sods that could not correctly subtract one value
	 * from another, trap the full virtual timer and counter.
	 */
	if (has_broken_cntvoff() && timer_get_offset(map->direct_vtimer))
+1 -1
arch/arm64/kvm/hyp/nvhe/ffa.c
@@ -115,7 +115,7 @@
  *
  * FFA-1.3 introduces 64-bit variants of the CPU cycle management
  * interfaces. Moreover, FF-A 1.3 clarifies that SMC32 direct requests
- * complete with SMC32 direct reponses which *should* allow us use the
+ * complete with SMC32 direct responses which *should* allow us use the
  * function ID sent by the caller to determine whether to return x8-x17.
  *
  * Note that we also cannot rely on function IDs in the response.
+1 -1
arch/arm64/kvm/mmu.c
@@ -1755,7 +1755,7 @@
 
	/*
	 * Check if this is non-struct page memory PFN, and cannot support
-	 * CMOs. It could potentially be unsafe to access as cachable.
+	 * CMOs. It could potentially be unsafe to access as cacheable.
	 */
	if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && !pfn_is_map_memory(pfn)) {
		if (is_vma_cacheable) {
+1 -1
arch/arm64/kvm/nested.c
@@ -85,7 +85,7 @@
	/*
	 * Let's treat memory allocation failures as benign: If we fail to
	 * allocate anything, return an error and keep the allocated array
-	 * alive. Userspace may try to recover by intializing the vcpu
+	 * alive. Userspace may try to recover by initializing the vcpu
	 * again, and there is no reason to affect the whole VM for this.
	 */
	num_mmus = atomic_read(&kvm->online_vcpus) * S2_MMU_PER_VCPU;
+1 -1
arch/arm64/net/bpf_jit_comp.c
@@ -3053,7 +3053,7 @@
	/* We unwind through both kernel frames starting from within bpf_throw
	 * call and BPF frames. Therefore we require FP unwinder to be enabled
	 * to walk kernel frames and reach BPF frames in the stack trace.
-	 * ARM64 kernel is aways compiled with CONFIG_FRAME_POINTER=y
+	 * ARM64 kernel is always compiled with CONFIG_FRAME_POINTER=y
	 */
	return true;
 }