Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'riscv-for-linus-6.3-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Palmer Dabbelt:
"There's a bunch of fixes/cleanups throughout the tree as usual, but we
also have a handful of new features:

- Various improvements to the extension detection and alternative
patching infrastructure

- Zbb-optimized string routines

- Support for cpu-capacity in the RISC-V DT bindings

- Zicbom no longer depends on toolchain support

- Some performance and code size improvements to ftrace

- Support for ARCH_WANT_LD_ORPHAN_WARN

- Oops now contain the faulting instruction"

* tag 'riscv-for-linus-6.3-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (67 commits)
RISC-V: add a spin_shadow_stack declaration
riscv: mm: hugetlb: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
riscv: Add header include guards to insn.h
riscv: alternative: proceed one more instruction for auipc/jalr pair
riscv: Avoid enabling interrupts in die()
riscv, mm: Perform BPF exhandler fixup on page fault
RISC-V: take text_mutex during alternative patching
riscv: hwcap: Don't alphabetize ISA extension IDs
RISC-V: fix ordering of Zbb extension
riscv: jump_label: Fixup unaligned arch_static_branch function
RISC-V: Only provide the single-letter extensions in HWCAP
riscv: mm: fix regression due to update_mmu_cache change
scripts/decodecode: Add support for RISC-V
riscv: Add instruction dump to RISC-V splats
riscv: select ARCH_WANT_LD_ORPHAN_WARN for !XIP_KERNEL
riscv: vmlinux.lds.S: explicitly catch .init.bss sections from EFI stub
riscv: vmlinux.lds.S: explicitly catch .riscv.attributes sections
riscv: vmlinux.lds.S: explicitly catch .rela.dyn symbols
riscv: lds: define RUNTIME_DISCARD_EXIT
RISC-V: move some stray __RISCV_INSN_FUNCS definitions from kprobes
...

+1467 -630
+2 -2
Documentation/devicetree/bindings/arm/cpu-capacity.txt → Documentation/devicetree/bindings/cpu/cpu-capacity.txt
··· 1 1 ========================================== 2 - ARM CPUs capacity bindings 2 + CPU capacity bindings 3 3 ========================================== 4 4 5 5 ========================================== 6 6 1 - Introduction 7 7 ========================================== 8 8 9 - ARM systems may be configured to have cpus with different power/performance 9 + Some systems may be configured to have cpus with different power/performance 10 10 characteristics within the same chip. In this case, additional information has 11 11 to be made available to the kernel for it to be aware of such differences and 12 12 take decisions accordingly.
+1 -1
Documentation/devicetree/bindings/arm/cpus.yaml
··· 259 259 260 260 capacity-dmips-mhz: 261 261 description: 262 - u32 value representing CPU capacity (see ./cpu-capacity.txt) in 262 + u32 value representing CPU capacity (see ../cpu/cpu-capacity.txt) in 263 263 DMIPS/MHz, relative to highest capacity-dmips-mhz 264 264 in the system. 265 265
+6
Documentation/devicetree/bindings/riscv/cpus.yaml
··· 114 114 List of phandles to idle state nodes supported 115 115 by this hart (see ./idle-states.yaml). 116 116 117 + capacity-dmips-mhz: 118 + description: 119 + u32 value representing CPU capacity (see ../cpu/cpu-capacity.txt) in 120 + DMIPS/MHz, relative to highest capacity-dmips-mhz 121 + in the system. 122 + 117 123 required: 118 124 - riscv,isa 119 125 - interrupt-controller
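With the binding above, RISC-V harts can now carry the same `capacity-dmips-mhz` property that arm/arm64 use for asymmetric-performance systems. A hypothetical two-hart fragment (hart layout, ISA strings, and capacity values are illustrative, not from the patch):

```dts
/ {
	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		/* Hypothetical "big" hart: highest capacity in the system. */
		cpu@0 {
			device_type = "cpu";
			compatible = "riscv";
			reg = <0>;
			riscv,isa = "rv64imafdc";
			capacity-dmips-mhz = <1024>;
		};

		/* Hypothetical "little" hart, roughly half the capacity. */
		cpu@1 {
			device_type = "cpu";
			compatible = "riscv";
			reg = <1>;
			riscv,isa = "rv64imafdc";
			capacity-dmips-mhz = <512>;
		};
	};
};
```

The values are relative: the scheduler only cares about each hart's ratio to the highest `capacity-dmips-mhz` in the system.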
+42
Documentation/riscv/uabi.rst
··· 3 3 RISC-V Linux User ABI 4 4 ===================== 5 5 6 + ISA string ordering in /proc/cpuinfo 7 + ------------------------------------ 8 + 9 + The canonical order of ISA extension names in the ISA string is defined in 10 + chapter 27 of the unprivileged specification. 11 + The specification uses vague wording, such as should, when it comes to ordering, 12 + so for our purposes the following rules apply: 13 + 14 + #. Single-letter extensions come first, in canonical order. 15 + The canonical order is "IMAFDQLCBKJTPVH". 16 + 17 + #. All multi-letter extensions will be separated from other extensions by an 18 + underscore. 19 + 20 + #. Additional standard extensions (starting with 'Z') will be sorted after 21 + single-letter extensions and before any higher-privileged extensions. 22 + 23 + #. For additional standard extensions, the first letter following the 'Z' 24 + conventionally indicates the most closely related alphabetical 25 + extension category. If multiple 'Z' extensions are named, they will be 26 + ordered first by category, in canonical order, as listed above, then 27 + alphabetically within a category. 28 + 29 + #. Standard supervisor-level extensions (starting with 'S') will be listed 30 + after standard unprivileged extensions. If multiple supervisor-level 31 + extensions are listed, they will be ordered alphabetically. 32 + 33 + #. Standard machine-level extensions (starting with 'Zxm') will be listed 34 + after any lower-privileged, standard extensions. If multiple machine-level 35 + extensions are listed, they will be ordered alphabetically. 36 + 37 + #. Non-standard extensions (starting with 'X') will be listed after all standard 38 + extensions. If multiple non-standard extensions are listed, they will be 39 + ordered alphabetically. 
40 + 41 + An example string following the order is:: 42 + 43 + rv64imadc_zifoo_zigoo_zafoo_sbar_scar_zxmbaz_xqux_xrux 44 + 45 + Misaligned accesses 46 + ------------------- 47 + 6 48 Misaligned accesses are supported in userspace, but they may perform poorly.
+1 -1
Documentation/scheduler/sched-capacity.rst
··· 260 260 261 261 The arm and arm64 architectures directly map this to the arch_topology driver 262 262 CPU scaling data, which is derived from the capacity-dmips-mhz CPU binding; see 263 - Documentation/devicetree/bindings/arm/cpu-capacity.txt. 263 + Documentation/devicetree/bindings/cpu/cpu-capacity.txt. 264 264 265 265 3.2 Frequency invariance 266 266 ------------------------
+1 -1
Documentation/translations/zh_CN/scheduler/sched-capacity.rst
··· 233 233 234 234 arm和arm64架构直接把这个信息映射到arch_topology驱动的CPU scaling数据中(译注:参考 235 235 arch_topology.h的percpu变量cpu_scale),它是从capacity-dmips-mhz CPU binding中衍生计算 236 - 出来的。参见Documentation/devicetree/bindings/arm/cpu-capacity.txt。 236 + 出来的。参见Documentation/devicetree/bindings/cpu/cpu-capacity.txt。 237 237 238 238 3.2 频率不变性 239 239 --------------
+50 -32
arch/riscv/Kconfig
··· 14 14 def_bool y 15 15 select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION 16 16 select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2 17 + select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE 17 18 select ARCH_HAS_BINFMT_FLAT 18 19 select ARCH_HAS_CURRENT_STACK_POINTER 19 - select ARCH_HAS_DEBUG_VM_PGTABLE 20 20 select ARCH_HAS_DEBUG_VIRTUAL if MMU 21 + select ARCH_HAS_DEBUG_VM_PGTABLE 21 22 select ARCH_HAS_DEBUG_WX 22 23 select ARCH_HAS_FORTIFY_SOURCE 23 24 select ARCH_HAS_GCOV_PROFILE_ALL ··· 45 44 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU 46 45 select ARCH_WANT_FRAME_POINTERS 47 46 select ARCH_WANT_GENERAL_HUGETLB 47 + select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP 48 48 select ARCH_WANT_HUGE_PMD_SHARE if 64BIT 49 + select ARCH_WANT_LD_ORPHAN_WARN if !XIP_KERNEL 49 50 select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE 50 51 select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU 51 52 select BUILDTIME_TABLE_SORT if MMU 52 - select CLONE_BACKWARDS 53 53 select CLINT_TIMER if !MMU 54 + select CLONE_BACKWARDS 54 55 select COMMON_CLK 55 56 select CPU_PM if CPU_IDLE 56 57 select EDAC_SUPPORT ··· 87 84 select HAVE_ARCH_MMAP_RND_BITS if MMU 88 85 select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT 89 86 select HAVE_ARCH_SECCOMP_FILTER 87 + select HAVE_ARCH_THREAD_STRUCT_WHITELIST 90 88 select HAVE_ARCH_TRACEHOOK 91 89 select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT && MMU 92 - select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE 93 - select HAVE_ARCH_THREAD_STRUCT_WHITELIST 94 90 select HAVE_ARCH_VMAP_STACK if MMU && 64BIT 95 91 select HAVE_ASM_MODVERSIONS 96 92 select HAVE_CONTEXT_TRACKING_USER 97 93 select HAVE_DEBUG_KMEMLEAK 98 94 select HAVE_DMA_CONTIGUOUS if MMU 99 95 select HAVE_EBPF_JIT if MMU 96 + select HAVE_FUNCTION_ARG_ACCESS_API 100 97 select HAVE_FUNCTION_ERROR_INJECTION 101 98 select HAVE_GCC_PLUGINS 102 99 select HAVE_GENERIC_VDSO if MMU && 64BIT ··· 113 110 select HAVE_PERF_USER_STACK_DUMP 114 111 select 
HAVE_POSIX_CPU_TIMERS_TASK_WORK 115 112 select HAVE_REGS_AND_STACK_ACCESS_API 116 - select HAVE_FUNCTION_ARG_ACCESS_API 113 + select HAVE_RSEQ 117 114 select HAVE_STACKPROTECTOR 118 115 select HAVE_SYSCALL_TRACEPOINTS 119 - select HAVE_RSEQ 120 116 select IRQ_DOMAIN 121 117 select IRQ_FORCED_THREADING 122 118 select MODULES_USE_ELF_RELA if MODULES ··· 139 137 select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE 140 138 select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL 141 139 select HAVE_FUNCTION_GRAPH_TRACER 142 - select HAVE_FUNCTION_TRACER if !XIP_KERNEL 140 + select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION 143 141 144 142 config ARCH_MMAP_RND_BITS_MIN 145 143 default 18 if 64BIT ··· 236 234 config RISCV_DMA_NONCOHERENT 237 235 bool 238 236 select ARCH_HAS_DMA_PREP_COHERENT 239 - select ARCH_HAS_SYNC_DMA_FOR_DEVICE 240 - select ARCH_HAS_SYNC_DMA_FOR_CPU 241 237 select ARCH_HAS_SETUP_DMA_OPS 238 + select ARCH_HAS_SYNC_DMA_FOR_CPU 239 + select ARCH_HAS_SYNC_DMA_FOR_DEVICE 242 240 select DMA_DIRECT_REMAP 243 241 244 242 config AS_HAS_INSN ··· 353 351 config NUMA 354 352 bool "NUMA Memory Allocation and Scheduler Support" 355 353 depends on SMP && MMU 356 - select GENERIC_ARCH_NUMA 357 - select OF_NUMA 358 354 select ARCH_SUPPORTS_NUMA_BALANCING 359 - select USE_PERCPU_NUMA_NODE_ID 355 + select GENERIC_ARCH_NUMA 360 356 select NEED_PER_CPU_EMBED_FIRST_CHUNK 357 + select OF_NUMA 358 + select USE_PERCPU_NUMA_NODE_ID 361 359 help 362 360 Enable NUMA (Non-Uniform Memory Access) support. 363 361 ··· 402 400 bool "SVPBMT extension support" 403 401 depends on 64BIT && MMU 404 402 depends on !XIP_KERNEL 405 - select RISCV_ALTERNATIVE 406 403 default y 404 + select RISCV_ALTERNATIVE 407 405 help 408 406 Adds support to dynamically detect the presence of the SVPBMT 409 407 ISA-extension (Supervisor-mode: page-based memory types) and ··· 417 415 418 416 If you don't know what to do here, say Y. 
419 417 420 - config TOOLCHAIN_HAS_ZICBOM 418 + config TOOLCHAIN_HAS_ZBB 421 419 bool 422 420 default y 423 - depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zicbom) 424 - depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom) 425 - depends on LLD_VERSION >= 150000 || LD_VERSION >= 23800 421 + depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zbb) 422 + depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zbb) 423 + depends on LLD_VERSION >= 150000 || LD_VERSION >= 23900 424 + depends on AS_IS_GNU 425 + 426 + config RISCV_ISA_ZBB 427 + bool "Zbb extension support for bit manipulation instructions" 428 + depends on TOOLCHAIN_HAS_ZBB 429 + depends on !XIP_KERNEL && MMU 430 + select RISCV_ALTERNATIVE 431 + default y 432 + help 433 + Adds support to dynamically detect the presence of the ZBB 434 + extension (basic bit manipulation) and enable its usage. 435 + 436 + The Zbb extension provides instructions to accelerate a number 437 + of bit-specific operations (count bit population, sign extending, 438 + bitrotation, etc). 439 + 440 + If you don't know what to do here, say Y. 426 441 427 442 config RISCV_ISA_ZICBOM 428 443 bool "Zicbom extension support for non-coherent DMA operation" 429 - depends on TOOLCHAIN_HAS_ZICBOM 430 444 depends on !XIP_KERNEL && MMU 431 - select RISCV_DMA_NONCOHERENT 432 - select RISCV_ALTERNATIVE 433 445 default y 446 + select RISCV_ALTERNATIVE 447 + select RISCV_DMA_NONCOHERENT 434 448 help 435 449 Adds support to dynamically detect the presence of the ZICBOM 436 450 extension (Cache Block Management Operations) and enable its ··· 508 490 509 491 config KEXEC 510 492 bool "Kexec system call" 511 - select KEXEC_CORE 512 - select HOTPLUG_CPU if SMP 513 493 depends on MMU 494 + select HOTPLUG_CPU if SMP 495 + select KEXEC_CORE 514 496 help 515 497 kexec is a system call that implements the ability to shutdown your 516 498 current kernel, and to start another kernel. 
It is like a reboot ··· 521 503 522 504 config KEXEC_FILE 523 505 bool "kexec file based systmem call" 506 + depends on 64BIT && MMU 507 + select HAVE_IMA_KEXEC if IMA 524 508 select KEXEC_CORE 525 509 select KEXEC_ELF 526 - select HAVE_IMA_KEXEC if IMA 527 - depends on 64BIT && MMU 528 510 help 529 511 This is new version of kexec system call. This system call is 530 512 file based and takes file descriptors as system call argument ··· 613 595 config EFI 614 596 bool "UEFI runtime support" 615 597 depends on OF && !XIP_KERNEL 616 - select LIBFDT 617 - select UCS2_STRING 618 - select EFI_PARAMS_FROM_FDT 619 - select EFI_STUB 620 - select EFI_GENERIC_STUB 621 - select EFI_RUNTIME_WRAPPERS 622 - select RISCV_ISA_C 623 598 depends on MMU 624 599 default y 600 + select EFI_GENERIC_STUB 601 + select EFI_PARAMS_FROM_FDT 602 + select EFI_RUNTIME_WRAPPERS 603 + select EFI_STUB 604 + select LIBFDT 605 + select RISCV_ISA_C 606 + select UCS2_STRING 625 607 help 626 608 This option provides support for runtime services provided 627 609 by UEFI firmware (such as non-volatile variables, realtime ··· 700 682 bool 701 683 default !NONPORTABLE 702 684 select EFI 703 - select OF 704 685 select MMU 686 + select OF 705 687 706 688 menu "Power management options" 707 689
+3 -2
arch/riscv/Kconfig.socs
··· 43 43 44 44 config ARCH_VIRT 45 45 def_bool SOC_VIRT 46 - 46 + 47 47 config SOC_VIRT 48 48 bool "QEMU Virt Machine" 49 49 select CLINT_TIMER if RISCV_M_MODE ··· 88 88 If unsure, say Y. 89 89 90 90 config ARCH_CANAAN_K210_DTB_SOURCE 91 - def_bool SOC_CANAAN_K210_DTB_SOURCE 91 + string 92 + default SOC_CANAAN_K210_DTB_SOURCE 92 93 93 94 config SOC_CANAAN_K210_DTB_SOURCE 94 95 string "Source file for the Canaan Kendryte K210 builtin DTB"
+5 -4
arch/riscv/Makefile
··· 11 11 ifeq ($(CONFIG_DYNAMIC_FTRACE),y) 12 12 LDFLAGS_vmlinux := --no-relax 13 13 KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY 14 - CC_FLAGS_FTRACE := -fpatchable-function-entry=8 14 + ifeq ($(CONFIG_RISCV_ISA_C),y) 15 + CC_FLAGS_FTRACE := -fpatchable-function-entry=4 16 + else 17 + CC_FLAGS_FTRACE := -fpatchable-function-entry=2 18 + endif 15 19 endif 16 20 17 21 ifeq ($(CONFIG_CMODEL_MEDLOW),y) ··· 61 57 # instructions from the I extension to the Zicsr and Zifencei extensions. 62 58 toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei) 63 59 riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei 64 - 65 - # Check if the toolchain supports Zicbom extension 66 - riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZICBOM) := $(riscv-march-y)_zicbom 67 60 68 61 # Check if the toolchain supports Zihintpause extension 69 62 riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause
+5 -1
arch/riscv/errata/sifive/errata.c
··· 4 4 */ 5 5 6 6 #include <linux/kernel.h> 7 + #include <linux/memory.h> 7 8 #include <linux/module.h> 8 9 #include <linux/string.h> 9 10 #include <linux/bug.h> ··· 108 107 109 108 tmp = (1U << alt->errata_id); 110 109 if (cpu_req_errata & tmp) { 111 - patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); 110 + mutex_lock(&text_mutex); 111 + patch_text_nosync(ALT_OLD_PTR(alt), ALT_ALT_PTR(alt), 112 + alt->alt_len); 113 + mutex_unlock(&text_mutex); 112 114 cpu_apply_errata |= tmp; 113 115 } 114 116 }
+12 -5
arch/riscv/errata/thead/errata.c
··· 5 5 6 6 #include <linux/bug.h> 7 7 #include <linux/kernel.h> 8 + #include <linux/memory.h> 8 9 #include <linux/module.h> 9 10 #include <linux/string.h> 10 11 #include <linux/uaccess.h> ··· 88 87 struct alt_entry *alt; 89 88 u32 cpu_req_errata = thead_errata_probe(stage, archid, impid); 90 89 u32 tmp; 90 + void *oldptr, *altptr; 91 91 92 92 for (alt = begin; alt < end; alt++) { 93 93 if (alt->vendor_id != THEAD_VENDOR_ID) ··· 98 96 99 97 tmp = (1U << alt->errata_id); 100 98 if (cpu_req_errata & tmp) { 99 + oldptr = ALT_OLD_PTR(alt); 100 + altptr = ALT_ALT_PTR(alt); 101 + 101 102 /* On vm-alternatives, the mmu isn't running yet */ 102 - if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) 103 - memcpy((void *)__pa_symbol(alt->old_ptr), 104 - (void *)__pa_symbol(alt->alt_ptr), alt->alt_len); 105 - else 106 - patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); 103 + if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) { 104 + memcpy(oldptr, altptr, alt->alt_len); 105 + } else { 106 + mutex_lock(&text_mutex); 107 + patch_text_nosync(oldptr, altptr, alt->alt_len); 108 + mutex_unlock(&text_mutex); 109 + } 107 110 } 108 111 } 109 112
+10 -10
arch/riscv/include/asm/alternative-macros.h
··· 7 7 #ifdef __ASSEMBLY__ 8 8 9 9 .macro ALT_ENTRY oldptr newptr vendor_id errata_id new_len 10 - RISCV_PTR \oldptr 11 - RISCV_PTR \newptr 12 - REG_ASM \vendor_id 13 - REG_ASM \new_len 14 - .word \errata_id 10 + .4byte \oldptr - . 11 + .4byte \newptr - . 12 + .2byte \vendor_id 13 + .2byte \new_len 14 + .4byte \errata_id 15 15 .endm 16 16 17 17 .macro ALT_NEW_CONTENT vendor_id, errata_id, enable = 1, new_c : vararg ··· 59 59 #include <linux/stringify.h> 60 60 61 61 #define ALT_ENTRY(oldptr, newptr, vendor_id, errata_id, newlen) \ 62 - RISCV_PTR " " oldptr "\n" \ 63 - RISCV_PTR " " newptr "\n" \ 64 - REG_ASM " " vendor_id "\n" \ 65 - REG_ASM " " newlen "\n" \ 66 - ".word " errata_id "\n" 62 + ".4byte ((" oldptr ") - .) \n" \ 63 + ".4byte ((" newptr ") - .) \n" \ 64 + ".2byte " vendor_id "\n" \ 65 + ".2byte " newlen "\n" \ 66 + ".4byte " errata_id "\n" 67 67 68 68 #define ALT_NEW_CONTENT(vendor_id, errata_id, enable, new_c) \ 69 69 ".if " __stringify(enable) " == 1\n" \
+14 -6
arch/riscv/include/asm/alternative.h
··· 23 23 #define RISCV_ALTERNATIVES_MODULE 1 /* alternatives applied during module-init */ 24 24 #define RISCV_ALTERNATIVES_EARLY_BOOT 2 /* alternatives applied before mmu start */ 25 25 26 + /* add the relative offset to the address of the offset to get the absolute address */ 27 + #define __ALT_PTR(a, f) ((void *)&(a)->f + (a)->f) 28 + #define ALT_OLD_PTR(a) __ALT_PTR(a, old_offset) 29 + #define ALT_ALT_PTR(a) __ALT_PTR(a, alt_offset) 30 + 26 31 void __init apply_boot_alternatives(void); 27 32 void __init apply_early_boot_alternatives(void); 28 33 void apply_module_alternatives(void *start, size_t length); 29 34 35 + void riscv_alternative_fix_offsets(void *alt_ptr, unsigned int len, 36 + int patch_offset); 37 + 30 38 struct alt_entry { 31 - void *old_ptr; /* address of original instruciton or data */ 32 - void *alt_ptr; /* address of replacement instruction or data */ 33 - unsigned long vendor_id; /* cpu vendor id */ 34 - unsigned long alt_len; /* The replacement size */ 35 - unsigned int errata_id; /* The errata id */ 36 - } __packed; 39 + s32 old_offset; /* offset relative to original instruction or data */ 40 + s32 alt_offset; /* offset relative to replacement instruction or data */ 41 + u16 vendor_id; /* cpu vendor id */ 42 + u16 alt_len; /* The replacement size */ 43 + u32 errata_id; /* The errata id */ 44 + }; 37 45 38 46 struct errata_checkfunc_id { 39 47 unsigned long vendor_id;
+6 -4
arch/riscv/include/asm/elf.h
··· 14 14 #include <asm/auxvec.h> 15 15 #include <asm/byteorder.h> 16 16 #include <asm/cacheinfo.h> 17 + #include <asm/hwcap.h> 17 18 18 19 /* 19 20 * These are used to set parameters in the core dumps. ··· 60 59 #define STACK_RND_MASK (0x3ffff >> (PAGE_SHIFT - 12)) 61 60 #endif 62 61 #endif 62 + 63 63 /* 64 - * This yields a mask that user programs can use to figure out what 65 - * instruction set this CPU supports. This could be done in user space, 66 - * but it's not easy, and we've already done it here. 64 + * Provides information on the available set of ISA extensions to userspace, 65 + * via a bitmap that corresponds to each single-letter ISA extension. This is 66 + * essentially defunct, but will remain for compatibility with userspace. 67 67 */ 68 - #define ELF_HWCAP (elf_hwcap) 68 + #define ELF_HWCAP (elf_hwcap & ((1UL << RISCV_ISA_EXT_BASE) - 1)) 69 69 extern unsigned long elf_hwcap; 70 70 71 71 /*
+5 -7
arch/riscv/include/asm/errata_list.h
··· 7 7 8 8 #include <asm/alternative.h> 9 9 #include <asm/csr.h> 10 + #include <asm/insn-def.h> 11 + #include <asm/hwcap.h> 10 12 #include <asm/vendorid_list.h> 11 13 12 14 #ifdef CONFIG_ERRATA_SIFIVE ··· 23 21 #define ERRATA_THEAD_PMU 2 24 22 #define ERRATA_THEAD_NUMBER 3 25 23 #endif 26 - 27 - #define CPUFEATURE_SVPBMT 0 28 - #define CPUFEATURE_ZICBOM 1 29 - #define CPUFEATURE_NUMBER 2 30 24 31 25 #ifdef __ASSEMBLY__ 32 26 ··· 53 55 #define ALT_SVPBMT(_val, prot) \ 54 56 asm(ALTERNATIVE_2("li %0, 0\t\nnop", \ 55 57 "li %0, %1\t\nslli %0,%0,%3", 0, \ 56 - CPUFEATURE_SVPBMT, CONFIG_RISCV_ISA_SVPBMT, \ 58 + RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT, \ 57 59 "li %0, %2\t\nslli %0,%0,%4", THEAD_VENDOR_ID, \ 58 60 ERRATA_THEAD_PBMT, CONFIG_ERRATA_THEAD_PBMT) \ 59 61 : "=r"(_val) \ ··· 123 125 "mv a0, %1\n\t" \ 124 126 "j 2f\n\t" \ 125 127 "3:\n\t" \ 126 - "cbo." __stringify(_op) " (a0)\n\t" \ 128 + CBO_##_op(a0) \ 127 129 "add a0, a0, %0\n\t" \ 128 130 "2:\n\t" \ 129 131 "bltu a0, %2, 3b\n\t" \ 130 - "nop", 0, CPUFEATURE_ZICBOM, CONFIG_RISCV_ISA_ZICBOM, \ 132 + "nop", 0, RISCV_ISA_EXT_ZICBOM, CONFIG_RISCV_ISA_ZICBOM, \ 131 133 "mv a0, %1\n\t" \ 132 134 "j 2f\n\t" \ 133 135 "3:\n\t" \
+38 -12
arch/riscv/include/asm/ftrace.h
··· 42 42 * 2) jalr: setting low-12 offset to ra, jump to ra, and set ra to 43 43 * return address (original pc + 4) 44 44 * 45 + *<ftrace enable>: 46 + * 0: auipc t0/ra, 0x? 47 + * 4: jalr t0/ra, ?(t0/ra) 48 + * 49 + *<ftrace disable>: 50 + * 0: nop 51 + * 4: nop 52 + * 45 53 * Dynamic ftrace generates probes to call sites, so we must deal with 46 54 * both auipc and jalr at the same time. 47 55 */ ··· 60 52 #define AUIPC_OFFSET_MASK (0xfffff000) 61 53 #define AUIPC_PAD (0x00001000) 62 54 #define JALR_SHIFT 20 63 - #define JALR_BASIC (0x000080e7) 64 - #define AUIPC_BASIC (0x00000097) 55 + #define JALR_RA (0x000080e7) 56 + #define AUIPC_RA (0x00000097) 57 + #define JALR_T0 (0x000282e7) 58 + #define AUIPC_T0 (0x00000297) 65 59 #define NOP4 (0x00000013) 66 60 67 - #define make_call(caller, callee, call) \ 61 + #define to_jalr_t0(offset) \ 62 + (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_T0) 63 + 64 + #define to_auipc_t0(offset) \ 65 + ((offset & JALR_SIGN_MASK) ? \ 66 + (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_T0) : \ 67 + ((offset & AUIPC_OFFSET_MASK) | AUIPC_T0)) 68 + 69 + #define make_call_t0(caller, callee, call) \ 68 70 do { \ 69 - call[0] = to_auipc_insn((unsigned int)((unsigned long)callee - \ 70 - (unsigned long)caller)); \ 71 - call[1] = to_jalr_insn((unsigned int)((unsigned long)callee - \ 72 - (unsigned long)caller)); \ 71 + unsigned int offset = \ 72 + (unsigned long) callee - (unsigned long) caller; \ 73 + call[0] = to_auipc_t0(offset); \ 74 + call[1] = to_jalr_t0(offset); \ 73 75 } while (0) 74 76 75 - #define to_jalr_insn(offset) \ 76 - (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC) 77 + #define to_jalr_ra(offset) \ 78 + (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_RA) 77 79 78 - #define to_auipc_insn(offset) \ 80 + #define to_auipc_ra(offset) \ 79 81 ((offset & JALR_SIGN_MASK) ? 
\ 80 - (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_BASIC) : \ 81 - ((offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC)) 82 + (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_RA) : \ 83 + ((offset & AUIPC_OFFSET_MASK) | AUIPC_RA)) 84 + 85 + #define make_call_ra(caller, callee, call) \ 86 + do { \ 87 + unsigned int offset = \ 88 + (unsigned long) callee - (unsigned long) caller; \ 89 + call[0] = to_auipc_ra(offset); \ 90 + call[1] = to_jalr_ra(offset); \ 91 + } while (0) 82 92 83 93 /* 84 94 * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
+55 -59
arch/riscv/include/asm/hwcap.h
··· 8 8 #ifndef _ASM_RISCV_HWCAP_H 9 9 #define _ASM_RISCV_HWCAP_H 10 10 11 + #include <asm/alternative-macros.h> 11 12 #include <asm/errno.h> 12 13 #include <linux/bits.h> 13 14 #include <uapi/asm/hwcap.h> 14 - 15 - #ifndef __ASSEMBLY__ 16 - #include <linux/jump_label.h> 17 - /* 18 - * This yields a mask that user programs can use to figure out what 19 - * instruction set this cpu supports. 20 - */ 21 - #define ELF_HWCAP (elf_hwcap) 22 - 23 - enum { 24 - CAP_HWCAP = 1, 25 - }; 26 - 27 - extern unsigned long elf_hwcap; 28 15 29 16 #define RISCV_ISA_EXT_a ('a' - 'a') 30 17 #define RISCV_ISA_EXT_c ('c' - 'a') ··· 24 37 #define RISCV_ISA_EXT_u ('u' - 'a') 25 38 26 39 /* 27 - * Increse this to higher value as kernel support more ISA extensions. 40 + * These macros represent the logical IDs of each multi-letter RISC-V ISA 41 + * extension and are used in the ISA bitmap. The logical IDs start from 42 + * RISCV_ISA_EXT_BASE, which allows the 0-25 range to be reserved for single 43 + * letter extensions. The maximum, RISCV_ISA_EXT_MAX, is defined in order 44 + * to allocate the bitmap and may be increased when necessary. 45 + * 46 + * New extensions should just be added to the bottom, rather than added 47 + * alphabetically, in order to avoid unnecessary shuffling. 28 48 */ 29 - #define RISCV_ISA_EXT_MAX 64 30 - #define RISCV_ISA_EXT_NAME_LEN_MAX 32 49 + #define RISCV_ISA_EXT_BASE 26 31 50 32 - /* The base ID for multi-letter ISA extensions */ 33 - #define RISCV_ISA_EXT_BASE 26 51 + #define RISCV_ISA_EXT_SSCOFPMF 26 52 + #define RISCV_ISA_EXT_SSTC 27 53 + #define RISCV_ISA_EXT_SVINVAL 28 54 + #define RISCV_ISA_EXT_SVPBMT 29 55 + #define RISCV_ISA_EXT_ZBB 30 56 + #define RISCV_ISA_EXT_ZICBOM 31 57 + #define RISCV_ISA_EXT_ZIHINTPAUSE 32 34 58 35 - /* 36 - * This enum represent the logical ID for each multi-letter RISC-V ISA extension. 37 - * The logical ID should start from RISCV_ISA_EXT_BASE and must not exceed 38 - * RISCV_ISA_EXT_MAX. 
0-25 range is reserved for single letter 39 - * extensions while all the multi-letter extensions should define the next 40 - * available logical extension id. 41 - */ 42 - enum riscv_isa_ext_id { 43 - RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE, 44 - RISCV_ISA_EXT_SVPBMT, 45 - RISCV_ISA_EXT_ZICBOM, 46 - RISCV_ISA_EXT_ZIHINTPAUSE, 47 - RISCV_ISA_EXT_SSTC, 48 - RISCV_ISA_EXT_SVINVAL, 49 - RISCV_ISA_EXT_ID_MAX 50 - }; 51 - static_assert(RISCV_ISA_EXT_ID_MAX <= RISCV_ISA_EXT_MAX); 59 + #define RISCV_ISA_EXT_MAX 64 60 + #define RISCV_ISA_EXT_NAME_LEN_MAX 32 52 61 53 - /* 54 - * This enum represents the logical ID for each RISC-V ISA extension static 55 - * keys. We can use static key to optimize code path if some ISA extensions 56 - * are available. 57 - */ 58 - enum riscv_isa_ext_key { 59 - RISCV_ISA_EXT_KEY_FPU, /* For 'F' and 'D' */ 60 - RISCV_ISA_EXT_KEY_SVINVAL, 61 - RISCV_ISA_EXT_KEY_MAX, 62 - }; 62 + #ifndef __ASSEMBLY__ 63 + 64 + #include <linux/jump_label.h> 63 65 64 66 struct riscv_isa_ext_data { 65 67 /* Name of the extension displayed to userspace via /proc/cpuinfo */ ··· 57 81 unsigned int isa_ext_id; 58 82 }; 59 83 60 - extern struct static_key_false riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_MAX]; 61 - 62 - static __always_inline int riscv_isa_ext2key(int num) 84 + static __always_inline bool 85 + riscv_has_extension_likely(const unsigned long ext) 63 86 { 64 - switch (num) { 65 - case RISCV_ISA_EXT_f: 66 - return RISCV_ISA_EXT_KEY_FPU; 67 - case RISCV_ISA_EXT_d: 68 - return RISCV_ISA_EXT_KEY_FPU; 69 - case RISCV_ISA_EXT_SVINVAL: 70 - return RISCV_ISA_EXT_KEY_SVINVAL; 71 - default: 72 - return -EINVAL; 73 - } 87 + compiletime_assert(ext < RISCV_ISA_EXT_MAX, 88 + "ext must be < RISCV_ISA_EXT_MAX"); 89 + 90 + asm_volatile_goto( 91 + ALTERNATIVE("j %l[l_no]", "nop", 0, %[ext], 1) 92 + : 93 + : [ext] "i" (ext) 94 + : 95 + : l_no); 96 + 97 + return true; 98 + l_no: 99 + return false; 100 + } 101 + 102 + static __always_inline bool 103 + 
riscv_has_extension_unlikely(const unsigned long ext) 104 + { 105 + compiletime_assert(ext < RISCV_ISA_EXT_MAX, 106 + "ext must be < RISCV_ISA_EXT_MAX"); 107 + 108 + asm_volatile_goto( 109 + ALTERNATIVE("nop", "j %l[l_yes]", 0, %[ext], 1) 110 + : 111 + : [ext] "i" (ext) 112 + : 113 + : l_yes); 114 + 115 + return false; 116 + l_yes: 117 + return true; 74 118 } 75 119 76 120 unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
+58
arch/riscv/include/asm/insn-def.h
··· 12 12 #define INSN_R_RD_SHIFT 7 13 13 #define INSN_R_OPCODE_SHIFT 0 14 14 15 + #define INSN_I_SIMM12_SHIFT 20 16 + #define INSN_I_RS1_SHIFT 15 17 + #define INSN_I_FUNC3_SHIFT 12 18 + #define INSN_I_RD_SHIFT 7 19 + #define INSN_I_OPCODE_SHIFT 0 20 + 15 21 #ifdef __ASSEMBLY__ 16 22 17 23 #ifdef CONFIG_AS_HAS_INSN 18 24 19 25 .macro insn_r, opcode, func3, func7, rd, rs1, rs2 20 26 .insn r \opcode, \func3, \func7, \rd, \rs1, \rs2 27 + .endm 28 + 29 + .macro insn_i, opcode, func3, rd, rs1, simm12 30 + .insn i \opcode, \func3, \rd, \rs1, \simm12 21 31 .endm 22 32 23 33 #else ··· 43 33 (.L__gpr_num_\rs2 << INSN_R_RS2_SHIFT)) 44 34 .endm 45 35 36 + .macro insn_i, opcode, func3, rd, rs1, simm12 37 + .4byte ((\opcode << INSN_I_OPCODE_SHIFT) | \ 38 + (\func3 << INSN_I_FUNC3_SHIFT) | \ 39 + (.L__gpr_num_\rd << INSN_I_RD_SHIFT) | \ 40 + (.L__gpr_num_\rs1 << INSN_I_RS1_SHIFT) | \ 41 + (\simm12 << INSN_I_SIMM12_SHIFT)) 42 + .endm 43 + 46 44 #endif 47 45 48 46 #define __INSN_R(...) insn_r __VA_ARGS__ 47 + #define __INSN_I(...) insn_i __VA_ARGS__ 49 48 50 49 #else /* ! 
__ASSEMBLY__ */ 51 50 ··· 62 43 63 44 #define __INSN_R(opcode, func3, func7, rd, rs1, rs2) \ 64 45 ".insn r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n" 46 + 47 + #define __INSN_I(opcode, func3, rd, rs1, simm12) \ 48 + ".insn i " opcode ", " func3 ", " rd ", " rs1 ", " simm12 "\n" 65 49 66 50 #else 67 51 ··· 82 60 " (.L__gpr_num_\\rs2 << " __stringify(INSN_R_RS2_SHIFT) "))\n" \ 83 61 " .endm\n" 84 62 63 + #define DEFINE_INSN_I \ 64 + __DEFINE_ASM_GPR_NUMS \ 65 + " .macro insn_i, opcode, func3, rd, rs1, simm12\n" \ 66 + " .4byte ((\\opcode << " __stringify(INSN_I_OPCODE_SHIFT) ") |" \ 67 + " (\\func3 << " __stringify(INSN_I_FUNC3_SHIFT) ") |" \ 68 + " (.L__gpr_num_\\rd << " __stringify(INSN_I_RD_SHIFT) ") |" \ 69 + " (.L__gpr_num_\\rs1 << " __stringify(INSN_I_RS1_SHIFT) ") |" \ 70 + " (\\simm12 << " __stringify(INSN_I_SIMM12_SHIFT) "))\n" \ 71 + " .endm\n" 72 + 85 73 #define UNDEFINE_INSN_R \ 86 74 " .purgem insn_r\n" 75 + 76 + #define UNDEFINE_INSN_I \ 77 + " .purgem insn_i\n" 87 78 88 79 #define __INSN_R(opcode, func3, func7, rd, rs1, rs2) \ 89 80 DEFINE_INSN_R \ 90 81 "insn_r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n" \ 91 82 UNDEFINE_INSN_R 83 + 84 + #define __INSN_I(opcode, func3, rd, rs1, simm12) \ 85 + DEFINE_INSN_I \ 86 + "insn_i " opcode ", " func3 ", " rd ", " rs1 ", " simm12 "\n" \ 87 + UNDEFINE_INSN_I 92 88 93 89 #endif 94 90 ··· 116 76 __INSN_R(RV_##opcode, RV_##func3, RV_##func7, \ 117 77 RV_##rd, RV_##rs1, RV_##rs2) 118 78 79 + #define INSN_I(opcode, func3, rd, rs1, simm12) \ 80 + __INSN_I(RV_##opcode, RV_##func3, RV_##rd, \ 81 + RV_##rs1, RV_##simm12) 82 + 119 83 #define RV_OPCODE(v) __ASM_STR(v) 120 84 #define RV_FUNC3(v) __ASM_STR(v) 121 85 #define RV_FUNC7(v) __ASM_STR(v) 86 + #define RV_SIMM12(v) __ASM_STR(v) 122 87 #define RV_RD(v) __ASM_STR(v) 123 88 #define RV_RS1(v) __ASM_STR(v) 124 89 #define RV_RS2(v) __ASM_STR(v) ··· 132 87 #define RV___RS1(v) __RV_REG(v) 133 88 #define RV___RS2(v) __RV_REG(v) 134 89 90 
+ #define RV_OPCODE_MISC_MEM RV_OPCODE(15) 135 91 #define RV_OPCODE_SYSTEM RV_OPCODE(115) 136 92 137 93 #define HFENCE_VVMA(vaddr, asid) \ ··· 179 133 #define HINVAL_GVMA(gaddr, vmid) \ 180 134 INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(51), \ 181 135 __RD(0), RS1(gaddr), RS2(vmid)) 136 + 137 + #define CBO_inval(base) \ 138 + INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \ 139 + RS1(base), SIMM12(0)) 140 + 141 + #define CBO_clean(base) \ 142 + INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \ 143 + RS1(base), SIMM12(1)) 144 + 145 + #define CBO_flush(base) \ 146 + INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \ 147 + RS1(base), SIMM12(2)) 182 148 183 149 #endif /* __ASM_INSN_DEF_H */
+381
arch/riscv/include/asm/insn.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 SiFive
+ */
+
+#ifndef _ASM_RISCV_INSN_H
+#define _ASM_RISCV_INSN_H
+
+#include <linux/bits.h>
+
+#define RV_INSN_FUNCT3_MASK	GENMASK(14, 12)
+#define RV_INSN_FUNCT3_OPOFF	12
+#define RV_INSN_OPCODE_MASK	GENMASK(6, 0)
+#define RV_INSN_OPCODE_OPOFF	0
+#define RV_INSN_FUNCT12_OPOFF	20
+
+#define RV_ENCODE_FUNCT3(f_)	(RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF)
+#define RV_ENCODE_FUNCT12(f_)	(RVG_FUNCT12_##f_ << RV_INSN_FUNCT12_OPOFF)
+
+/* The bit field of immediate value in I-type instruction */
+#define RV_I_IMM_SIGN_OPOFF	31
+#define RV_I_IMM_11_0_OPOFF	20
+#define RV_I_IMM_SIGN_OFF	12
+#define RV_I_IMM_11_0_OFF	0
+#define RV_I_IMM_11_0_MASK	GENMASK(11, 0)
+
+/* The bit field of immediate value in J-type instruction */
+#define RV_J_IMM_SIGN_OPOFF	31
+#define RV_J_IMM_10_1_OPOFF	21
+#define RV_J_IMM_11_OPOFF	20
+#define RV_J_IMM_19_12_OPOFF	12
+#define RV_J_IMM_SIGN_OFF	20
+#define RV_J_IMM_10_1_OFF	1
+#define RV_J_IMM_11_OFF	11
+#define RV_J_IMM_19_12_OFF	12
+#define RV_J_IMM_10_1_MASK	GENMASK(9, 0)
+#define RV_J_IMM_11_MASK	GENMASK(0, 0)
+#define RV_J_IMM_19_12_MASK	GENMASK(7, 0)
+
+/*
+ * U-type IMMs contain the upper 20 bits [31:12] of an immediate with
+ * the rest filled in by zeros, so no shifting is required. Similarly,
+ * bit 31 contains the sign, so no sign extension is necessary.
+ */
+#define RV_U_IMM_SIGN_OPOFF	31
+#define RV_U_IMM_31_12_OPOFF	0
+#define RV_U_IMM_31_12_MASK	GENMASK(31, 12)
+
+/* The bit field of immediate value in B-type instruction */
+#define RV_B_IMM_SIGN_OPOFF	31
+#define RV_B_IMM_10_5_OPOFF	25
+#define RV_B_IMM_4_1_OPOFF	8
+#define RV_B_IMM_11_OPOFF	7
+#define RV_B_IMM_SIGN_OFF	12
+#define RV_B_IMM_10_5_OFF	5
+#define RV_B_IMM_4_1_OFF	1
+#define RV_B_IMM_11_OFF	11
+#define RV_B_IMM_10_5_MASK	GENMASK(5, 0)
+#define RV_B_IMM_4_1_MASK	GENMASK(3, 0)
+#define RV_B_IMM_11_MASK	GENMASK(0, 0)
+
+/* The register offset in RVG instruction */
+#define RVG_RS1_OPOFF	15
+#define RVG_RS2_OPOFF	20
+#define RVG_RD_OPOFF	7
+#define RVG_RD_MASK	GENMASK(4, 0)
+
+/* The bit field of immediate value in RVC J instruction */
+#define RVC_J_IMM_SIGN_OPOFF	12
+#define RVC_J_IMM_4_OPOFF	11
+#define RVC_J_IMM_9_8_OPOFF	9
+#define RVC_J_IMM_10_OPOFF	8
+#define RVC_J_IMM_6_OPOFF	7
+#define RVC_J_IMM_7_OPOFF	6
+#define RVC_J_IMM_3_1_OPOFF	3
+#define RVC_J_IMM_5_OPOFF	2
+#define RVC_J_IMM_SIGN_OFF	11
+#define RVC_J_IMM_4_OFF	4
+#define RVC_J_IMM_9_8_OFF	8
+#define RVC_J_IMM_10_OFF	10
+#define RVC_J_IMM_6_OFF	6
+#define RVC_J_IMM_7_OFF	7
+#define RVC_J_IMM_3_1_OFF	1
+#define RVC_J_IMM_5_OFF	5
+#define RVC_J_IMM_4_MASK	GENMASK(0, 0)
+#define RVC_J_IMM_9_8_MASK	GENMASK(1, 0)
+#define RVC_J_IMM_10_MASK	GENMASK(0, 0)
+#define RVC_J_IMM_6_MASK	GENMASK(0, 0)
+#define RVC_J_IMM_7_MASK	GENMASK(0, 0)
+#define RVC_J_IMM_3_1_MASK	GENMASK(2, 0)
+#define RVC_J_IMM_5_MASK	GENMASK(0, 0)
+
+/* The bit field of immediate value in RVC B instruction */
+#define RVC_B_IMM_SIGN_OPOFF	12
+#define RVC_B_IMM_4_3_OPOFF	10
+#define RVC_B_IMM_7_6_OPOFF	5
+#define RVC_B_IMM_2_1_OPOFF	3
+#define RVC_B_IMM_5_OPOFF	2
+#define RVC_B_IMM_SIGN_OFF	8
+#define RVC_B_IMM_4_3_OFF	3
+#define RVC_B_IMM_7_6_OFF	6
+#define RVC_B_IMM_2_1_OFF	1
+#define RVC_B_IMM_5_OFF	5
+#define RVC_B_IMM_4_3_MASK	GENMASK(1, 0)
+#define RVC_B_IMM_7_6_MASK	GENMASK(1, 0)
+#define RVC_B_IMM_2_1_MASK	GENMASK(1, 0)
+#define RVC_B_IMM_5_MASK	GENMASK(0, 0)
+
+#define RVC_INSN_FUNCT4_MASK	GENMASK(15, 12)
+#define RVC_INSN_FUNCT4_OPOFF	12
+#define RVC_INSN_FUNCT3_MASK	GENMASK(15, 13)
+#define RVC_INSN_FUNCT3_OPOFF	13
+#define RVC_INSN_J_RS2_MASK	GENMASK(6, 2)
+#define RVC_INSN_OPCODE_MASK	GENMASK(1, 0)
+#define RVC_ENCODE_FUNCT3(f_)	(RVC_FUNCT3_##f_ << RVC_INSN_FUNCT3_OPOFF)
+#define RVC_ENCODE_FUNCT4(f_)	(RVC_FUNCT4_##f_ << RVC_INSN_FUNCT4_OPOFF)
+
+/* The register offset in RVC op=C0 instruction */
+#define RVC_C0_RS1_OPOFF	7
+#define RVC_C0_RS2_OPOFF	2
+#define RVC_C0_RD_OPOFF	2
+
+/* The register offset in RVC op=C1 instruction */
+#define RVC_C1_RS1_OPOFF	7
+#define RVC_C1_RS2_OPOFF	2
+#define RVC_C1_RD_OPOFF	7
+
+/* The register offset in RVC op=C2 instruction */
+#define RVC_C2_RS1_OPOFF	7
+#define RVC_C2_RS2_OPOFF	2
+#define RVC_C2_RD_OPOFF	7
+
+/* parts of opcode for RVG */
+#define RVG_OPCODE_FENCE	0x0f
+#define RVG_OPCODE_AUIPC	0x17
+#define RVG_OPCODE_BRANCH	0x63
+#define RVG_OPCODE_JALR	0x67
+#define RVG_OPCODE_JAL	0x6f
+#define RVG_OPCODE_SYSTEM	0x73
+
+/* parts of opcode for RVC */
+#define RVC_OPCODE_C0	0x0
+#define RVC_OPCODE_C1	0x1
+#define RVC_OPCODE_C2	0x2
+
+/* parts of funct3 code for I, M, A extension */
+#define RVG_FUNCT3_JALR	0x0
+#define RVG_FUNCT3_BEQ	0x0
+#define RVG_FUNCT3_BNE	0x1
+#define RVG_FUNCT3_BLT	0x4
+#define RVG_FUNCT3_BGE	0x5
+#define RVG_FUNCT3_BLTU	0x6
+#define RVG_FUNCT3_BGEU	0x7
+
+/* parts of funct3 code for C extension */
+#define RVC_FUNCT3_C_BEQZ	0x6
+#define RVC_FUNCT3_C_BNEZ	0x7
+#define RVC_FUNCT3_C_J	0x5
+#define RVC_FUNCT3_C_JAL	0x1
+#define RVC_FUNCT4_C_JR	0x8
+#define RVC_FUNCT4_C_JALR	0x9
+#define RVC_FUNCT4_C_EBREAK	0x9
+
+#define RVG_FUNCT12_EBREAK	0x1
+#define RVG_FUNCT12_SRET	0x102
+
+#define RVG_MATCH_AUIPC	(RVG_OPCODE_AUIPC)
+#define RVG_MATCH_JALR	(RV_ENCODE_FUNCT3(JALR) | RVG_OPCODE_JALR)
+#define RVG_MATCH_JAL	(RVG_OPCODE_JAL)
+#define RVG_MATCH_FENCE	(RVG_OPCODE_FENCE)
+#define RVG_MATCH_BEQ	(RV_ENCODE_FUNCT3(BEQ) | RVG_OPCODE_BRANCH)
+#define RVG_MATCH_BNE	(RV_ENCODE_FUNCT3(BNE) | RVG_OPCODE_BRANCH)
+#define RVG_MATCH_BLT	(RV_ENCODE_FUNCT3(BLT) | RVG_OPCODE_BRANCH)
+#define RVG_MATCH_BGE	(RV_ENCODE_FUNCT3(BGE) | RVG_OPCODE_BRANCH)
+#define RVG_MATCH_BLTU	(RV_ENCODE_FUNCT3(BLTU) | RVG_OPCODE_BRANCH)
+#define RVG_MATCH_BGEU	(RV_ENCODE_FUNCT3(BGEU) | RVG_OPCODE_BRANCH)
+#define RVG_MATCH_EBREAK	(RV_ENCODE_FUNCT12(EBREAK) | RVG_OPCODE_SYSTEM)
+#define RVG_MATCH_SRET	(RV_ENCODE_FUNCT12(SRET) | RVG_OPCODE_SYSTEM)
+#define RVC_MATCH_C_BEQZ	(RVC_ENCODE_FUNCT3(C_BEQZ) | RVC_OPCODE_C1)
+#define RVC_MATCH_C_BNEZ	(RVC_ENCODE_FUNCT3(C_BNEZ) | RVC_OPCODE_C1)
+#define RVC_MATCH_C_J	(RVC_ENCODE_FUNCT3(C_J) | RVC_OPCODE_C1)
+#define RVC_MATCH_C_JAL	(RVC_ENCODE_FUNCT3(C_JAL) | RVC_OPCODE_C1)
+#define RVC_MATCH_C_JR	(RVC_ENCODE_FUNCT4(C_JR) | RVC_OPCODE_C2)
+#define RVC_MATCH_C_JALR	(RVC_ENCODE_FUNCT4(C_JALR) | RVC_OPCODE_C2)
+#define RVC_MATCH_C_EBREAK	(RVC_ENCODE_FUNCT4(C_EBREAK) | RVC_OPCODE_C2)
+
+#define RVG_MASK_AUIPC	(RV_INSN_OPCODE_MASK)
+#define RVG_MASK_JALR	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVG_MASK_JAL	(RV_INSN_OPCODE_MASK)
+#define RVG_MASK_FENCE	(RV_INSN_OPCODE_MASK)
+#define RVC_MASK_C_JALR	(RVC_INSN_FUNCT4_MASK | RVC_INSN_J_RS2_MASK | RVC_INSN_OPCODE_MASK)
+#define RVC_MASK_C_JR	(RVC_INSN_FUNCT4_MASK | RVC_INSN_J_RS2_MASK | RVC_INSN_OPCODE_MASK)
+#define RVC_MASK_C_JAL	(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
+#define RVC_MASK_C_J	(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
+#define RVG_MASK_BEQ	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVG_MASK_BNE	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVG_MASK_BLT	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVG_MASK_BGE	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVG_MASK_BLTU	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVG_MASK_BGEU	(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
+#define RVC_MASK_C_BEQZ	(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
+#define RVC_MASK_C_BNEZ	(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
+#define RVC_MASK_C_EBREAK	0xffff
+#define RVG_MASK_EBREAK	0xffffffff
+#define RVG_MASK_SRET	0xffffffff
+
+#define __INSN_LENGTH_MASK	_UL(0x3)
+#define __INSN_LENGTH_GE_32	_UL(0x3)
+#define __INSN_OPCODE_MASK	_UL(0x7F)
+#define __INSN_BRANCH_OPCODE	_UL(RVG_OPCODE_BRANCH)
+
+#define __RISCV_INSN_FUNCS(name, mask, val)				\
+static __always_inline bool riscv_insn_is_##name(u32 code)		\
+{									\
+	BUILD_BUG_ON(~(mask) & (val));					\
+	return (code & (mask)) == (val);				\
+}									\
+
+#if __riscv_xlen == 32
+/* C.JAL is an RV32C-only instruction */
+__RISCV_INSN_FUNCS(c_jal, RVC_MASK_C_JAL, RVC_MATCH_C_JAL)
+#else
+#define riscv_insn_is_c_jal(opcode) 0
+#endif
+__RISCV_INSN_FUNCS(auipc, RVG_MASK_AUIPC, RVG_MATCH_AUIPC)
+__RISCV_INSN_FUNCS(jalr, RVG_MASK_JALR, RVG_MATCH_JALR)
+__RISCV_INSN_FUNCS(jal, RVG_MASK_JAL, RVG_MATCH_JAL)
+__RISCV_INSN_FUNCS(c_jr, RVC_MASK_C_JR, RVC_MATCH_C_JR)
+__RISCV_INSN_FUNCS(c_jalr, RVC_MASK_C_JALR, RVC_MATCH_C_JALR)
+__RISCV_INSN_FUNCS(c_j, RVC_MASK_C_J, RVC_MATCH_C_J)
+__RISCV_INSN_FUNCS(beq, RVG_MASK_BEQ, RVG_MATCH_BEQ)
+__RISCV_INSN_FUNCS(bne, RVG_MASK_BNE, RVG_MATCH_BNE)
+__RISCV_INSN_FUNCS(blt, RVG_MASK_BLT, RVG_MATCH_BLT)
+__RISCV_INSN_FUNCS(bge, RVG_MASK_BGE, RVG_MATCH_BGE)
+__RISCV_INSN_FUNCS(bltu, RVG_MASK_BLTU, RVG_MATCH_BLTU)
+__RISCV_INSN_FUNCS(bgeu, RVG_MASK_BGEU, RVG_MATCH_BGEU)
+__RISCV_INSN_FUNCS(c_beqz, RVC_MASK_C_BEQZ, RVC_MATCH_C_BEQZ)
+__RISCV_INSN_FUNCS(c_bnez, RVC_MASK_C_BNEZ, RVC_MATCH_C_BNEZ)
+__RISCV_INSN_FUNCS(c_ebreak, RVC_MASK_C_EBREAK, RVC_MATCH_C_EBREAK)
+__RISCV_INSN_FUNCS(ebreak, RVG_MASK_EBREAK, RVG_MATCH_EBREAK)
+__RISCV_INSN_FUNCS(sret, RVG_MASK_SRET, RVG_MATCH_SRET)
+__RISCV_INSN_FUNCS(fence, RVG_MASK_FENCE, RVG_MATCH_FENCE);
+
+/* special case to catch _any_ system instruction */
+static __always_inline bool riscv_insn_is_system(u32 code)
+{
+	return (code & RV_INSN_OPCODE_MASK) == RVG_OPCODE_SYSTEM;
+}
+
+/* special case to catch _any_ branch instruction */
+static __always_inline bool riscv_insn_is_branch(u32 code)
+{
+	return (code & RV_INSN_OPCODE_MASK) == RVG_OPCODE_BRANCH;
+}
+
+#define RV_IMM_SIGN(x)	(-(((x) >> 31) & 1))
+#define RVC_IMM_SIGN(x)	(-(((x) >> 12) & 1))
+#define RV_X(X, s, mask)	(((X) >> (s)) & (mask))
+#define RVC_X(X, s, mask)	RV_X(X, s, mask)
+
+#define RV_EXTRACT_RD_REG(x) \
+	({typeof(x) x_ = (x); \
+	(RV_X(x_, RVG_RD_OPOFF, RVG_RD_MASK)); })
+
+#define RV_EXTRACT_UTYPE_IMM(x) \
+	({typeof(x) x_ = (x); \
+	(RV_X(x_, RV_U_IMM_31_12_OPOFF, RV_U_IMM_31_12_MASK)); })
+
+#define RV_EXTRACT_JTYPE_IMM(x) \
+	({typeof(x) x_ = (x); \
+	(RV_X(x_, RV_J_IMM_10_1_OPOFF, RV_J_IMM_10_1_MASK) << RV_J_IMM_10_1_OFF) | \
+	(RV_X(x_, RV_J_IMM_11_OPOFF, RV_J_IMM_11_MASK) << RV_J_IMM_11_OFF) | \
+	(RV_X(x_, RV_J_IMM_19_12_OPOFF, RV_J_IMM_19_12_MASK) << RV_J_IMM_19_12_OFF) | \
+	(RV_IMM_SIGN(x_) << RV_J_IMM_SIGN_OFF); })
+
+#define RV_EXTRACT_ITYPE_IMM(x) \
+	({typeof(x) x_ = (x); \
+	(RV_X(x_, RV_I_IMM_11_0_OPOFF, RV_I_IMM_11_0_MASK)) | \
+	(RV_IMM_SIGN(x_) << RV_I_IMM_SIGN_OFF); })
+
+#define RV_EXTRACT_BTYPE_IMM(x) \
+	({typeof(x) x_ = (x); \
+	(RV_X(x_, RV_B_IMM_4_1_OPOFF, RV_B_IMM_4_1_MASK) << RV_B_IMM_4_1_OFF) | \
+	(RV_X(x_, RV_B_IMM_10_5_OPOFF, RV_B_IMM_10_5_MASK) << RV_B_IMM_10_5_OFF) | \
+	(RV_X(x_, RV_B_IMM_11_OPOFF, RV_B_IMM_11_MASK) << RV_B_IMM_11_OFF) | \
+	(RV_IMM_SIGN(x_) << RV_B_IMM_SIGN_OFF); })
+
+#define RVC_EXTRACT_JTYPE_IMM(x) \
+	({typeof(x) x_ = (x); \
+	(RVC_X(x_, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_MASK) << RVC_J_IMM_3_1_OFF) | \
+	(RVC_X(x_, RVC_J_IMM_4_OPOFF, RVC_J_IMM_4_MASK) << RVC_J_IMM_4_OFF) | \
+	(RVC_X(x_, RVC_J_IMM_5_OPOFF, RVC_J_IMM_5_MASK) << RVC_J_IMM_5_OFF) | \
+	(RVC_X(x_, RVC_J_IMM_6_OPOFF, RVC_J_IMM_6_MASK) << RVC_J_IMM_6_OFF) | \
+	(RVC_X(x_, RVC_J_IMM_7_OPOFF, RVC_J_IMM_7_MASK) << RVC_J_IMM_7_OFF) | \
+	(RVC_X(x_, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_MASK) << RVC_J_IMM_9_8_OFF) | \
+	(RVC_X(x_, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_MASK) << RVC_J_IMM_10_OFF) | \
+	(RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); })
+
+#define RVC_EXTRACT_BTYPE_IMM(x) \
+	({typeof(x) x_ = (x); \
+	(RVC_X(x_, RVC_B_IMM_2_1_OPOFF, RVC_B_IMM_2_1_MASK) << RVC_B_IMM_2_1_OFF) | \
+	(RVC_X(x_, RVC_B_IMM_4_3_OPOFF, RVC_B_IMM_4_3_MASK) << RVC_B_IMM_4_3_OFF) | \
+	(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
+	(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
+	(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })
+
+/*
+ * Get the immediate from a J-type instruction.
+ *
+ * @insn: instruction to process
+ * Return: immediate
+ */
+static inline s32 riscv_insn_extract_jtype_imm(u32 insn)
+{
+	return RV_EXTRACT_JTYPE_IMM(insn);
+}
+
+/*
+ * Update a J-type instruction with an immediate value.
+ *
+ * @insn: pointer to the jtype instruction
+ * @imm: the immediate to insert into the instruction
+ */
+static inline void riscv_insn_insert_jtype_imm(u32 *insn, s32 imm)
+{
+	/* drop the old IMMs; all jal IMM bits sit at 31:12 */
+	*insn &= ~GENMASK(31, 12);
+	*insn |= (RV_X(imm, RV_J_IMM_10_1_OFF, RV_J_IMM_10_1_MASK) << RV_J_IMM_10_1_OPOFF) |
+		 (RV_X(imm, RV_J_IMM_11_OFF, RV_J_IMM_11_MASK) << RV_J_IMM_11_OPOFF) |
+		 (RV_X(imm, RV_J_IMM_19_12_OFF, RV_J_IMM_19_12_MASK) << RV_J_IMM_19_12_OPOFF) |
+		 (RV_X(imm, RV_J_IMM_SIGN_OFF, 1) << RV_J_IMM_SIGN_OPOFF);
+}
+
+/*
+ * Put together one immediate from a U-type and I-type instruction pair.
+ *
+ * The U-type contains an upper immediate, meaning bits [31:12] with [11:0]
+ * being zero, while the I-type contains a 12-bit immediate.
+ * Combined, these can encode larger 32-bit values and are used for example
+ * in auipc + jalr pairs to allow larger jumps.
+ *
+ * @utype_insn: instruction containing the upper immediate
+ * @itype_insn: instruction containing the lower immediate
+ * Return: combined immediate
+ */
+static inline s32 riscv_insn_extract_utype_itype_imm(u32 utype_insn, u32 itype_insn)
+{
+	s32 imm;
+
+	imm = RV_EXTRACT_UTYPE_IMM(utype_insn);
+	imm += RV_EXTRACT_ITYPE_IMM(itype_insn);
+
+	return imm;
+}
+
+/*
+ * Update a set of two instructions (U-type + I-type) with an immediate value.
+ *
+ * Used, for example, in auipc + jalr pairs: the U-type instruction contains
+ * a 20-bit upper immediate representing bits [31:12], while the I-type
+ * instruction contains a 12-bit immediate representing bits [11:0].
+ *
+ * This also takes into account that both separate immediates are
+ * considered as signed values, so if the I-type immediate becomes
+ * negative (BIT(11) set) the U-type part gets adjusted.
+ *
+ * @utype_insn: pointer to the utype instruction of the pair
+ * @itype_insn: pointer to the itype instruction of the pair
+ * @imm: the immediate to insert into the two instructions
+ */
+static inline void riscv_insn_insert_utype_itype_imm(u32 *utype_insn, u32 *itype_insn, s32 imm)
+{
+	/* drop possible old IMM values */
+	*utype_insn &= ~(RV_U_IMM_31_12_MASK);
+	*itype_insn &= ~(RV_I_IMM_11_0_MASK << RV_I_IMM_11_0_OPOFF);
+
+	/* add the adapted IMMs */
+	*utype_insn |= (imm & RV_U_IMM_31_12_MASK) + ((imm & BIT(11)) << 1);
+	*itype_insn |= ((imm & RV_I_IMM_11_0_MASK) << RV_I_IMM_11_0_OPOFF);
+}
+#endif /* _ASM_RISCV_INSN_H */
+2
arch/riscv/include/asm/jump_label.h
···
			      const bool branch)
{
	asm_volatile_goto(
+		"	.align		2			\n\t"
		"	.option push				\n\t"
		"	.option norelax				\n\t"
		"	.option norvc				\n\t"
···
			      const bool branch)
{
	asm_volatile_goto(
+		"	.align		2			\n\t"
		"	.option push				\n\t"
		"	.option norelax				\n\t"
		"	.option norvc				\n\t"
+16
arch/riscv/include/asm/module.h
···
#define _ASM_RISCV_MODULE_H

#include <asm-generic/module.h>
+#include <linux/elf.h>

struct module;
unsigned long module_emit_got_entry(struct module *mod, unsigned long val);
···
}

#endif /* CONFIG_MODULE_SECTIONS */
+
+static inline const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
+					   const Elf_Shdr *sechdrs,
+					   const char *name)
+{
+	const Elf_Shdr *s, *se;
+	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
+		if (strcmp(name, secstrs + s->sh_name) == 0)
+			return s;
+	}
+
+	return NULL;
+}

#endif /* _ASM_RISCV_MODULE_H */
-219
arch/riscv/include/asm/parse_asm.h
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2020 SiFive
- */
-
-#include <linux/bits.h>
-
-/* The bit field of immediate value in I-type instruction */
-#define I_IMM_SIGN_OPOFF	31
-#define I_IMM_11_0_OPOFF	20
-#define I_IMM_SIGN_OFF	12
-#define I_IMM_11_0_OFF	0
-#define I_IMM_11_0_MASK	GENMASK(11, 0)
-
-/* The bit field of immediate value in J-type instruction */
-#define J_IMM_SIGN_OPOFF	31
-#define J_IMM_10_1_OPOFF	21
-#define J_IMM_11_OPOFF	20
-#define J_IMM_19_12_OPOFF	12
-#define J_IMM_SIGN_OFF	20
-#define J_IMM_10_1_OFF	1
-#define J_IMM_11_OFF	11
-#define J_IMM_19_12_OFF	12
-#define J_IMM_10_1_MASK	GENMASK(9, 0)
-#define J_IMM_11_MASK	GENMASK(0, 0)
-#define J_IMM_19_12_MASK	GENMASK(7, 0)
-
-/* The bit field of immediate value in B-type instruction */
-#define B_IMM_SIGN_OPOFF	31
-#define B_IMM_10_5_OPOFF	25
-#define B_IMM_4_1_OPOFF	8
-#define B_IMM_11_OPOFF	7
-#define B_IMM_SIGN_OFF	12
-#define B_IMM_10_5_OFF	5
-#define B_IMM_4_1_OFF	1
-#define B_IMM_11_OFF	11
-#define B_IMM_10_5_MASK	GENMASK(5, 0)
-#define B_IMM_4_1_MASK	GENMASK(3, 0)
-#define B_IMM_11_MASK	GENMASK(0, 0)
-
-/* The register offset in RVG instruction */
-#define RVG_RS1_OPOFF	15
-#define RVG_RS2_OPOFF	20
-#define RVG_RD_OPOFF	7
-
-/* The bit field of immediate value in RVC J instruction */
-#define RVC_J_IMM_SIGN_OPOFF	12
-#define RVC_J_IMM_4_OPOFF	11
-#define RVC_J_IMM_9_8_OPOFF	9
-#define RVC_J_IMM_10_OPOFF	8
-#define RVC_J_IMM_6_OPOFF	7
-#define RVC_J_IMM_7_OPOFF	6
-#define RVC_J_IMM_3_1_OPOFF	3
-#define RVC_J_IMM_5_OPOFF	2
-#define RVC_J_IMM_SIGN_OFF	11
-#define RVC_J_IMM_4_OFF	4
-#define RVC_J_IMM_9_8_OFF	8
-#define RVC_J_IMM_10_OFF	10
-#define RVC_J_IMM_6_OFF	6
-#define RVC_J_IMM_7_OFF	7
-#define RVC_J_IMM_3_1_OFF	1
-#define RVC_J_IMM_5_OFF	5
-#define RVC_J_IMM_4_MASK	GENMASK(0, 0)
-#define RVC_J_IMM_9_8_MASK	GENMASK(1, 0)
-#define RVC_J_IMM_10_MASK	GENMASK(0, 0)
-#define RVC_J_IMM_6_MASK	GENMASK(0, 0)
-#define RVC_J_IMM_7_MASK	GENMASK(0, 0)
-#define RVC_J_IMM_3_1_MASK	GENMASK(2, 0)
-#define RVC_J_IMM_5_MASK	GENMASK(0, 0)
-
-/* The bit field of immediate value in RVC B instruction */
-#define RVC_B_IMM_SIGN_OPOFF	12
-#define RVC_B_IMM_4_3_OPOFF	10
-#define RVC_B_IMM_7_6_OPOFF	5
-#define RVC_B_IMM_2_1_OPOFF	3
-#define RVC_B_IMM_5_OPOFF	2
-#define RVC_B_IMM_SIGN_OFF	8
-#define RVC_B_IMM_4_3_OFF	3
-#define RVC_B_IMM_7_6_OFF	6
-#define RVC_B_IMM_2_1_OFF	1
-#define RVC_B_IMM_5_OFF	5
-#define RVC_B_IMM_4_3_MASK	GENMASK(1, 0)
-#define RVC_B_IMM_7_6_MASK	GENMASK(1, 0)
-#define RVC_B_IMM_2_1_MASK	GENMASK(1, 0)
-#define RVC_B_IMM_5_MASK	GENMASK(0, 0)
-
-/* The register offset in RVC op=C0 instruction */
-#define RVC_C0_RS1_OPOFF	7
-#define RVC_C0_RS2_OPOFF	2
-#define RVC_C0_RD_OPOFF	2
-
-/* The register offset in RVC op=C1 instruction */
-#define RVC_C1_RS1_OPOFF	7
-#define RVC_C1_RS2_OPOFF	2
-#define RVC_C1_RD_OPOFF	7
-
-/* The register offset in RVC op=C2 instruction */
-#define RVC_C2_RS1_OPOFF	7
-#define RVC_C2_RS2_OPOFF	2
-#define RVC_C2_RD_OPOFF	7
-
-/* parts of opcode for RVG */
-#define OPCODE_BRANCH	0x63
-#define OPCODE_JALR	0x67
-#define OPCODE_JAL	0x6f
-#define OPCODE_SYSTEM	0x73
-
-/* parts of opcode for RVC */
-#define OPCODE_C_0	0x0
-#define OPCODE_C_1	0x1
-#define OPCODE_C_2	0x2
-
-/* parts of funct3 code for I, M, A extension */
-#define FUNCT3_JALR	0x0
-#define FUNCT3_BEQ	0x0
-#define FUNCT3_BNE	0x1000
-#define FUNCT3_BLT	0x4000
-#define FUNCT3_BGE	0x5000
-#define FUNCT3_BLTU	0x6000
-#define FUNCT3_BGEU	0x7000
-
-/* parts of funct3 code for C extension */
-#define FUNCT3_C_BEQZ	0xc000
-#define FUNCT3_C_BNEZ	0xe000
-#define FUNCT3_C_J	0xa000
-#define FUNCT3_C_JAL	0x2000
-#define FUNCT4_C_JR	0x8000
-#define FUNCT4_C_JALR	0xf000
-
-#define FUNCT12_SRET	0x10200000
-
-#define MATCH_JALR	(FUNCT3_JALR | OPCODE_JALR)
-#define MATCH_JAL	(OPCODE_JAL)
-#define MATCH_BEQ	(FUNCT3_BEQ | OPCODE_BRANCH)
-#define MATCH_BNE	(FUNCT3_BNE | OPCODE_BRANCH)
-#define MATCH_BLT	(FUNCT3_BLT | OPCODE_BRANCH)
-#define MATCH_BGE	(FUNCT3_BGE | OPCODE_BRANCH)
-#define MATCH_BLTU	(FUNCT3_BLTU | OPCODE_BRANCH)
-#define MATCH_BGEU	(FUNCT3_BGEU | OPCODE_BRANCH)
-#define MATCH_SRET	(FUNCT12_SRET | OPCODE_SYSTEM)
-#define MATCH_C_BEQZ	(FUNCT3_C_BEQZ | OPCODE_C_1)
-#define MATCH_C_BNEZ	(FUNCT3_C_BNEZ | OPCODE_C_1)
-#define MATCH_C_J	(FUNCT3_C_J | OPCODE_C_1)
-#define MATCH_C_JAL	(FUNCT3_C_JAL | OPCODE_C_1)
-#define MATCH_C_JR	(FUNCT4_C_JR | OPCODE_C_2)
-#define MATCH_C_JALR	(FUNCT4_C_JALR | OPCODE_C_2)
-
-#define MASK_JALR	0x707f
-#define MASK_JAL	0x7f
-#define MASK_C_JALR	0xf07f
-#define MASK_C_JR	0xf07f
-#define MASK_C_JAL	0xe003
-#define MASK_C_J	0xe003
-#define MASK_BEQ	0x707f
-#define MASK_BNE	0x707f
-#define MASK_BLT	0x707f
-#define MASK_BGE	0x707f
-#define MASK_BLTU	0x707f
-#define MASK_BGEU	0x707f
-#define MASK_C_BEQZ	0xe003
-#define MASK_C_BNEZ	0xe003
-#define MASK_SRET	0xffffffff
-
-#define __INSN_LENGTH_MASK	_UL(0x3)
-#define __INSN_LENGTH_GE_32	_UL(0x3)
-#define __INSN_OPCODE_MASK	_UL(0x7F)
-#define __INSN_BRANCH_OPCODE	_UL(OPCODE_BRANCH)
-
-/* Define a series of is_XXX_insn functions to check if the value INSN
- * is an instance of instruction XXX.
- */
-#define DECLARE_INSN(INSN_NAME, INSN_MATCH, INSN_MASK) \
-static inline bool is_ ## INSN_NAME ## _insn(long insn) \
-{ \
-	return (insn & (INSN_MASK)) == (INSN_MATCH); \
-}
-
-#define RV_IMM_SIGN(x)	(-(((x) >> 31) & 1))
-#define RVC_IMM_SIGN(x)	(-(((x) >> 12) & 1))
-#define RV_X(X, s, mask)	(((X) >> (s)) & (mask))
-#define RVC_X(X, s, mask)	RV_X(X, s, mask)
-
-#define EXTRACT_JTYPE_IMM(x) \
-	({typeof(x) x_ = (x); \
-	(RV_X(x_, J_IMM_10_1_OPOFF, J_IMM_10_1_MASK) << J_IMM_10_1_OFF) | \
-	(RV_X(x_, J_IMM_11_OPOFF, J_IMM_11_MASK) << J_IMM_11_OFF) | \
-	(RV_X(x_, J_IMM_19_12_OPOFF, J_IMM_19_12_MASK) << J_IMM_19_12_OFF) | \
-	(RV_IMM_SIGN(x_) << J_IMM_SIGN_OFF); })
-
-#define EXTRACT_ITYPE_IMM(x) \
-	({typeof(x) x_ = (x); \
-	(RV_X(x_, I_IMM_11_0_OPOFF, I_IMM_11_0_MASK)) | \
-	(RV_IMM_SIGN(x_) << I_IMM_SIGN_OFF); })
-
-#define EXTRACT_BTYPE_IMM(x) \
-	({typeof(x) x_ = (x); \
-	(RV_X(x_, B_IMM_4_1_OPOFF, B_IMM_4_1_MASK) << B_IMM_4_1_OFF) | \
-	(RV_X(x_, B_IMM_10_5_OPOFF, B_IMM_10_5_MASK) << B_IMM_10_5_OFF) | \
-	(RV_X(x_, B_IMM_11_OPOFF, B_IMM_11_MASK) << B_IMM_11_OFF) | \
-	(RV_IMM_SIGN(x_) << B_IMM_SIGN_OFF); })
-
-#define EXTRACT_RVC_J_IMM(x) \
-	({typeof(x) x_ = (x); \
-	(RVC_X(x_, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_MASK) << RVC_J_IMM_3_1_OFF) | \
-	(RVC_X(x_, RVC_J_IMM_4_OPOFF, RVC_J_IMM_4_MASK) << RVC_J_IMM_4_OFF) | \
-	(RVC_X(x_, RVC_J_IMM_5_OPOFF, RVC_J_IMM_5_MASK) << RVC_J_IMM_5_OFF) | \
-	(RVC_X(x_, RVC_J_IMM_6_OPOFF, RVC_J_IMM_6_MASK) << RVC_J_IMM_6_OFF) | \
-	(RVC_X(x_, RVC_J_IMM_7_OPOFF, RVC_J_IMM_7_MASK) << RVC_J_IMM_7_OFF) | \
-	(RVC_X(x_, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_MASK) << RVC_J_IMM_9_8_OFF) | \
-	(RVC_X(x_, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_MASK) << RVC_J_IMM_10_OFF) | \
-	(RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); })
-
-#define EXTRACT_RVC_B_IMM(x) \
-	({typeof(x) x_ = (x); \
-	(RVC_X(x_, RVC_B_IMM_2_1_OPOFF, RVC_B_IMM_2_1_MASK) << RVC_B_IMM_2_1_OFF) | \
-	(RVC_X(x_, RVC_B_IMM_4_3_OPOFF, RVC_B_IMM_4_3_MASK) << RVC_B_IMM_4_3_OFF) | \
-	(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
-	(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
-	(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })
+2 -2
arch/riscv/include/asm/pgtable.h
···
#define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))

/*
- * Half of the kernel address space (half of the entries of the page global
+ * Half of the kernel address space (1/4 of the entries of the page global
 * directory) is for the direct mapping.
 */
#define KERN_VIRT_SIZE	((PTRS_PER_PGD / 2 * PGDIR_SIZE) / 2)
···
	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
	 * the extra traps reduce performance. So, eagerly SFENCE.VMA.
	 */
-	flush_tlb_page(vma, address);
+	local_flush_tlb_page(address);
}

#define __HAVE_ARCH_UPDATE_MMU_TLB
+1 -1
arch/riscv/include/asm/signal.h
···
#include <uapi/asm/ptrace.h>

asmlinkage __visible
-void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags);
+void do_work_pending(struct pt_regs *regs, unsigned long thread_info_flags);

#endif
+10
arch/riscv/include/asm/string.h
···
#define __HAVE_ARCH_MEMMOVE
extern asmlinkage void *memmove(void *, const void *, size_t);
extern asmlinkage void *__memmove(void *, const void *, size_t);
+
+#define __HAVE_ARCH_STRCMP
+extern asmlinkage int strcmp(const char *cs, const char *ct);
+
+#define __HAVE_ARCH_STRLEN
+extern asmlinkage __kernel_size_t strlen(const char *);
+
+#define __HAVE_ARCH_STRNCMP
+extern asmlinkage int strncmp(const char *cs, const char *ct, size_t count);
+
/* For those files which don't want to check by kasan. */
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
#define memcpy(dst, src, len) __memcpy(dst, src, len)
+2 -1
arch/riscv/include/asm/switch_to.h
···

static __always_inline bool has_fpu(void)
{
-	return static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_FPU]);
+	return riscv_has_extension_likely(RISCV_ISA_EXT_f) ||
+	       riscv_has_extension_likely(RISCV_ISA_EXT_d);
}
#else
static __always_inline bool has_fpu(void) { return false; }
+1
arch/riscv/include/asm/thread_info.h
···
#ifndef __ASSEMBLY__

extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
+extern unsigned long spin_shadow_stack;

#include <asm/processor.h>
#include <asm/csr.h>
+4
arch/riscv/include/asm/vdso.h
···
#define COMPAT_VDSO_SYMBOL(base, name)	\
	(void __user *)((unsigned long)(base) + compat__vdso_##name##_offset)

+extern char compat_vdso_start[], compat_vdso_end[];
+
#endif /* CONFIG_COMPAT */
+
+extern char vdso_start[], vdso_end[];

#endif /* !__ASSEMBLY__ */

+113
arch/riscv/kernel/alternative.c
···
#include <linux/cpu.h>
#include <linux/uaccess.h>
#include <asm/alternative.h>
+#include <asm/module.h>
#include <asm/sections.h>
+#include <asm/vdso.h>
#include <asm/vendorid_list.h>
#include <asm/sbi.h>
#include <asm/csr.h>
+#include <asm/insn.h>
+#include <asm/patch.h>

struct cpu_manufacturer_info_t {
	unsigned long vendor_id;
···
	}
}

+static u32 riscv_instruction_at(void *p)
+{
+	u16 *parcel = p;
+
+	return (u32)parcel[0] | (u32)parcel[1] << 16;
+}
+
+static void riscv_alternative_fix_auipc_jalr(void *ptr, u32 auipc_insn,
+					     u32 jalr_insn, int patch_offset)
+{
+	u32 call[2] = { auipc_insn, jalr_insn };
+	s32 imm;
+
+	/* get and adjust new target address */
+	imm = riscv_insn_extract_utype_itype_imm(auipc_insn, jalr_insn);
+	imm -= patch_offset;
+
+	/* update instructions */
+	riscv_insn_insert_utype_itype_imm(&call[0], &call[1], imm);
+
+	/* patch the call place again */
+	patch_text_nosync(ptr, call, sizeof(u32) * 2);
+}
+
+static void riscv_alternative_fix_jal(void *ptr, u32 jal_insn, int patch_offset)
+{
+	s32 imm;
+
+	/* get and adjust new target address */
+	imm = riscv_insn_extract_jtype_imm(jal_insn);
+	imm -= patch_offset;
+
+	/* update instruction */
+	riscv_insn_insert_jtype_imm(&jal_insn, imm);
+
+	/* patch the call place again */
+	patch_text_nosync(ptr, &jal_insn, sizeof(u32));
+}
+
+void riscv_alternative_fix_offsets(void *alt_ptr, unsigned int len,
+				   int patch_offset)
+{
+	int num_insn = len / sizeof(u32);
+	int i;
+
+	for (i = 0; i < num_insn; i++) {
+		u32 insn = riscv_instruction_at(alt_ptr + i * sizeof(u32));
+
+		/*
+		 * May be the start of an auipc + jalr pair.
+		 * Need to check that at least one more instruction
+		 * is in the list.
+		 */
+		if (riscv_insn_is_auipc(insn) && i < num_insn - 1) {
+			u32 insn2 = riscv_instruction_at(alt_ptr + (i + 1) * sizeof(u32));
+
+			if (!riscv_insn_is_jalr(insn2))
+				continue;
+
+			/* if the instruction pair is a call, it will use the ra register */
+			if (RV_EXTRACT_RD_REG(insn) != 1)
+				continue;
+
+			riscv_alternative_fix_auipc_jalr(alt_ptr + i * sizeof(u32),
+							 insn, insn2, patch_offset);
+			i++;
+		}
+
+		if (riscv_insn_is_jal(insn)) {
+			s32 imm = riscv_insn_extract_jtype_imm(insn);
+
+			/* Don't modify jumps inside the alternative block */
+			if ((alt_ptr + i * sizeof(u32) + imm) >= alt_ptr &&
+			    (alt_ptr + i * sizeof(u32) + imm) < (alt_ptr + len))
+				continue;
+
+			riscv_alternative_fix_jal(alt_ptr + i * sizeof(u32),
+						  insn, patch_offset);
+		}
+	}
+}
+
/*
 * This is called very early in the boot process (directly after we run
 * a feature detect on the boot CPU). No need to worry about other CPUs
···
		stage);
}

+#ifdef CONFIG_MMU
+static void __init apply_vdso_alternatives(void)
+{
+	const Elf_Ehdr *hdr;
+	const Elf_Shdr *shdr;
+	const Elf_Shdr *alt;
+	struct alt_entry *begin, *end;
+
+	hdr = (Elf_Ehdr *)vdso_start;
+	shdr = (void *)hdr + hdr->e_shoff;
+	alt = find_section(hdr, shdr, ".alternative");
+	if (!alt)
+		return;
+
+	begin = (void *)hdr + alt->sh_offset;
+	end = (void *)hdr + alt->sh_offset + alt->sh_size;
+
+	_apply_alternatives((struct alt_entry *)begin,
+			    (struct alt_entry *)end,
+			    RISCV_ALTERNATIVES_BOOT);
+}
+#else
+static void __init apply_vdso_alternatives(void) { }
+#endif
+
void __init apply_boot_alternatives(void)
{
	/* If called on non-boot cpu things could go wrong */
···
	_apply_alternatives((struct alt_entry *)__alt_start,
			    (struct alt_entry *)__alt_end,
			    RISCV_ALTERNATIVES_BOOT);
+
+	apply_vdso_alternatives();
}

/*
+39 -15
arch/riscv/kernel/cpu.c
··· 144 144 .uprop = #UPROP, \ 145 145 .isa_ext_id = EXTID, \ 146 146 } 147 + 147 148 /* 148 - * Here are the ordering rules of extension naming defined by RISC-V 149 - * specification : 150 - * 1. All extensions should be separated from other multi-letter extensions 151 - * by an underscore. 152 - * 2. The first letter following the 'Z' conventionally indicates the most 149 + * The canonical order of ISA extension names in the ISA string is defined in 150 + * chapter 27 of the unprivileged specification. 151 + * 152 + * Ordinarily, for in-kernel data structures, this order is unimportant but 153 + * isa_ext_arr defines the order of the ISA string in /proc/cpuinfo. 154 + * 155 + * The specification uses vague wording, such as should, when it comes to 156 + * ordering, so for our purposes the following rules apply: 157 + * 158 + * 1. All multi-letter extensions must be separated from other extensions by an 159 + * underscore. 160 + * 161 + * 2. Additional standard extensions (starting with 'Z') must be sorted after 162 + * single-letter extensions and before any higher-privileged extensions. 163 + * 164 + * 3. The first letter following the 'Z' conventionally indicates the most 153 165 * closely related alphabetical extension category, IMAFDQLCBKJTPVH. 154 - * If multiple 'Z' extensions are named, they should be ordered first 155 - * by category, then alphabetically within a category. 156 - * 3. Standard supervisor-level extensions (starts with 'S') should be 157 - * listed after standard unprivileged extensions. If multiple 158 - * supervisor-level extensions are listed, they should be ordered 166 + * If multiple 'Z' extensions are named, they must be ordered first by 167 + * category, then alphabetically within a category. 168 + * 169 + * 4. Standard supervisor-level extensions (starting with 'S') must be listed 170 + * after standard unprivileged extensions. If multiple supervisor-level 171 + * extensions are listed, they must be ordered alphabetically. 
172 + * 173 + * 5. Standard machine-level extensions (starting with 'Zxm') must be listed 174 + * after any lower-privileged, standard extensions. If multiple 175 + * machine-level extensions are listed, they must be ordered 159 176 * alphabetically. 160 - * 4. Non-standard extensions (starts with 'X') must be listed after all 161 - * standard extensions. They must be separated from other multi-letter 162 - * extensions by an underscore. 177 + * 178 + * 6. Non-standard extensions (starting with 'X') must be listed after all 179 + * standard extensions. If multiple non-standard extensions are listed, they 180 + * must be ordered alphabetically. 181 + * 182 + * An example string following the order is: 183 + * rv64imadc_zifoo_zigoo_zafoo_sbar_scar_zxmbaz_xqux_xrux 184 + * 185 + * New entries to this struct should follow the ordering rules described above. 163 186 */ 164 187 static struct riscv_isa_ext_data isa_ext_arr[] = { 188 + __RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM), 189 + __RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE), 190 + __RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB), 165 191 __RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF), 166 192 __RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC), 167 193 __RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL), 168 194 __RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT), 169 - __RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM), 170 - __RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE), 171 195 __RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX), 172 196 };
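The ordering rules in that comment boil down to a coarse category ranking, with alphabetical order inside each category. A hypothetical helper (names invented here, not kernel code) makes the intended /proc/cpuinfo order testable:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical sketch of the category ranking implied by the comment
 * above: base single-letter extensions first, then additional 'Z'
 * extensions, then supervisor 'S' extensions, then machine-level 'Zxm',
 * then non-standard vendor 'X' extensions last.
 */
static int ext_category_rank(const char *ext)
{
	if (strncmp(ext, "zxm", 3) == 0)
		return 3;	/* machine-level */
	switch (ext[0]) {
	case 'z':
		return 1;	/* additional standard */
	case 's':
		return 2;	/* supervisor-level */
	case 'x':
		return 4;	/* non-standard */
	default:
		return 0;	/* single-letter base extension */
	}
}
```

Tokens with equal rank are then sorted alphabetically within their group, which is how the example string in the comment is laid out.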
+22 -65
arch/riscv/kernel/cpufeature.c
··· 10 10 #include <linux/ctype.h> 11 11 #include <linux/libfdt.h> 12 12 #include <linux/log2.h> 13 + #include <linux/memory.h> 13 14 #include <linux/module.h> 14 15 #include <linux/of.h> 15 16 #include <asm/alternative.h> ··· 29 28 30 29 /* Host ISA bitmap */ 31 30 static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly; 32 - 33 - DEFINE_STATIC_KEY_ARRAY_FALSE(riscv_isa_ext_keys, RISCV_ISA_EXT_KEY_MAX); 34 - EXPORT_SYMBOL(riscv_isa_ext_keys); 35 31 36 32 /** 37 33 * riscv_isa_extension_base() - Get base extension word ··· 220 222 set_bit(nr, this_isa); 221 223 } 222 224 } else { 225 + /* sorted alphabetically */ 223 226 SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF); 224 - SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT); 225 - SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM); 226 - SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE); 227 227 SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC); 228 228 SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL); 229 + SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT); 230 + SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB); 231 + SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM); 232 + SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE); 229 233 } 230 234 #undef SET_ISA_EXT_MAP 231 235 } ··· 266 266 if (elf_hwcap & BIT_MASK(i)) 267 267 print_str[j++] = (char)('a' + i); 268 268 pr_info("riscv: ELF capabilities %s\n", print_str); 269 - 270 - for_each_set_bit(i, riscv_isa, RISCV_ISA_EXT_MAX) { 271 - j = riscv_isa_ext2key(i); 272 - if (j >= 0) 273 - static_branch_enable(&riscv_isa_ext_keys[j]); 274 - } 275 269 } 276 270 277 271 #ifdef CONFIG_RISCV_ALTERNATIVE 278 - static bool __init_or_module cpufeature_probe_svpbmt(unsigned int stage) 279 - { 280 - if (!IS_ENABLED(CONFIG_RISCV_ISA_SVPBMT)) 281 - return false; 282 - 283 - if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) 284 - return false; 285 - 286 - return riscv_isa_extension_available(NULL, SVPBMT); 287 - } 288 - 289 - static bool __init_or_module 
cpufeature_probe_zicbom(unsigned int stage) 290 - { 291 - if (!IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM)) 292 - return false; 293 - 294 - if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) 295 - return false; 296 - 297 - if (!riscv_isa_extension_available(NULL, ZICBOM)) 298 - return false; 299 - 300 - riscv_noncoherent_supported(); 301 - return true; 302 - } 303 - 304 - /* 305 - * Probe presence of individual extensions. 306 - * 307 - * This code may also be executed before kernel relocation, so we cannot use 308 - * addresses generated by the address-of operator as they won't be valid in 309 - * this context. 310 - */ 311 - static u32 __init_or_module cpufeature_probe(unsigned int stage) 312 - { 313 - u32 cpu_req_feature = 0; 314 - 315 - if (cpufeature_probe_svpbmt(stage)) 316 - cpu_req_feature |= BIT(CPUFEATURE_SVPBMT); 317 - 318 - if (cpufeature_probe_zicbom(stage)) 319 - cpu_req_feature |= BIT(CPUFEATURE_ZICBOM); 320 - 321 - return cpu_req_feature; 322 - } 323 - 324 272 void __init_or_module riscv_cpufeature_patch_func(struct alt_entry *begin, 325 273 struct alt_entry *end, 326 274 unsigned int stage) 327 275 { 328 - u32 cpu_req_feature = cpufeature_probe(stage); 329 276 struct alt_entry *alt; 330 - u32 tmp; 277 + void *oldptr, *altptr; 278 + 279 + if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) 280 + return; 331 281 332 282 for (alt = begin; alt < end; alt++) { 333 283 if (alt->vendor_id != 0) 334 284 continue; 335 - if (alt->errata_id >= CPUFEATURE_NUMBER) { 336 - WARN(1, "This feature id:%d is not in kernel cpufeature list", 285 + if (alt->errata_id >= RISCV_ISA_EXT_MAX) { 286 + WARN(1, "This extension id:%d is not in ISA extension list", 337 287 alt->errata_id); 338 288 continue; 339 289 } 340 290 341 - tmp = (1U << alt->errata_id); 342 - if (cpu_req_feature & tmp) 343 - patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); 291 + if (!__riscv_isa_extension_available(NULL, alt->errata_id)) 292 + continue; 293 + 294 + oldptr = ALT_OLD_PTR(alt); 295 + altptr = 
ALT_ALT_PTR(alt); 296 + 297 + mutex_lock(&text_mutex); 298 + patch_text_nosync(oldptr, altptr, alt->alt_len); 299 + riscv_alternative_fix_offsets(oldptr, alt->alt_len, oldptr - altptr); 300 + mutex_unlock(&text_mutex); 344 301 } 345 302 } 346 303 #endif
+19 -46
arch/riscv/kernel/ftrace.c
··· 55 55 } 56 56 57 57 static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target, 58 - bool enable) 58 + bool enable, bool ra) 59 59 { 60 60 unsigned int call[2]; 61 61 unsigned int nops[2] = {NOP4, NOP4}; 62 62 63 - make_call(hook_pos, target, call); 63 + if (ra) 64 + make_call_ra(hook_pos, target, call); 65 + else 66 + make_call_t0(hook_pos, target, call); 64 67 65 68 /* Replace the auipc-jalr pair at once. Return -EPERM on write error. */ 66 69 if (patch_text_nosync ··· 73 70 return 0; 74 71 } 75 72 76 - /* 77 - * Put 5 instructions with 16 bytes at the front of function within 78 - * patchable function entry nops' area. 79 - * 80 - * 0: REG_S ra, -SZREG(sp) 81 - * 1: auipc ra, 0x? 82 - * 2: jalr -?(ra) 83 - * 3: REG_L ra, -SZREG(sp) 84 - * 85 - * So the opcodes is: 86 - * 0: 0xfe113c23 (sd)/0xfe112e23 (sw) 87 - * 1: 0x???????? -> auipc 88 - * 2: 0x???????? -> jalr 89 - * 3: 0xff813083 (ld)/0xffc12083 (lw) 90 - */ 91 - #if __riscv_xlen == 64 92 - #define INSN0 0xfe113c23 93 - #define INSN3 0xff813083 94 - #elif __riscv_xlen == 32 95 - #define INSN0 0xfe112e23 96 - #define INSN3 0xffc12083 97 - #endif 98 - 99 - #define FUNC_ENTRY_SIZE 16 100 - #define FUNC_ENTRY_JMP 4 101 - 102 73 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 103 74 { 104 - unsigned int call[4] = {INSN0, 0, 0, INSN3}; 105 - unsigned long target = addr; 106 - unsigned long caller = rec->ip + FUNC_ENTRY_JMP; 75 + unsigned int call[2]; 107 76 108 - call[1] = to_auipc_insn((unsigned int)(target - caller)); 109 - call[2] = to_jalr_insn((unsigned int)(target - caller)); 77 + make_call_t0(rec->ip, addr, call); 110 78 111 - if (patch_text_nosync((void *)rec->ip, call, FUNC_ENTRY_SIZE)) 79 + if (patch_text_nosync((void *)rec->ip, call, MCOUNT_INSN_SIZE)) 112 80 return -EPERM; 113 81 114 82 return 0; ··· 88 114 int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, 89 115 unsigned long addr) 90 116 { 91 - unsigned int nops[4] = {NOP4, NOP4, NOP4, NOP4}; 117 + 
unsigned int nops[2] = {NOP4, NOP4}; 92 118 93 - if (patch_text_nosync((void *)rec->ip, nops, FUNC_ENTRY_SIZE)) 119 + if (patch_text_nosync((void *)rec->ip, nops, MCOUNT_INSN_SIZE)) 94 120 return -EPERM; 95 121 96 122 return 0; 97 123 } 98 - 99 124 100 125 /* 101 126 * This is called early on, and isn't wrapped by ··· 117 144 int ftrace_update_ftrace_func(ftrace_func_t func) 118 145 { 119 146 int ret = __ftrace_modify_call((unsigned long)&ftrace_call, 120 - (unsigned long)func, true); 147 + (unsigned long)func, true, true); 121 148 if (!ret) { 122 149 ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call, 123 - (unsigned long)func, true); 150 + (unsigned long)func, true, true); 124 151 } 125 152 126 153 return ret; ··· 132 159 unsigned long addr) 133 160 { 134 161 unsigned int call[2]; 135 - unsigned long caller = rec->ip + FUNC_ENTRY_JMP; 162 + unsigned long caller = rec->ip; 136 163 int ret; 137 164 138 - make_call(caller, old_addr, call); 165 + make_call_t0(caller, old_addr, call); 139 166 ret = ftrace_check_current_call(caller, call); 140 167 141 168 if (ret) 142 169 return ret; 143 170 144 - return __ftrace_modify_call(caller, addr, true); 171 + return __ftrace_modify_call(caller, addr, true, false); 145 172 } 146 173 #endif 147 174 ··· 176 203 int ret; 177 204 178 205 ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call, 179 - (unsigned long)&prepare_ftrace_return, true); 206 + (unsigned long)&prepare_ftrace_return, true, true); 180 207 if (ret) 181 208 return ret; 182 209 183 210 return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call, 184 - (unsigned long)&prepare_ftrace_return, true); 211 + (unsigned long)&prepare_ftrace_return, true, true); 185 212 } 186 213 187 214 int ftrace_disable_ftrace_graph_caller(void) ··· 189 216 int ret; 190 217 191 218 ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call, 192 - (unsigned long)&prepare_ftrace_return, false); 219 + (unsigned long)&prepare_ftrace_return, false, true); 193 220 if 
(ret) 194 221 return ret; 195 222 196 223 return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call, 197 - (unsigned long)&prepare_ftrace_return, false); 224 + (unsigned long)&prepare_ftrace_return, false, true); 198 225 } 199 226 #endif /* CONFIG_DYNAMIC_FTRACE */ 200 227 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+22 -41
arch/riscv/kernel/kgdb.c
··· 11 11 #include <linux/string.h> 12 12 #include <asm/cacheflush.h> 13 13 #include <asm/gdb_xml.h> 14 - #include <asm/parse_asm.h> 14 + #include <asm/insn.h> 15 15 16 16 enum { 17 17 NOT_KGDB_BREAK = 0, ··· 22 22 23 23 static unsigned long stepped_address; 24 24 static unsigned int stepped_opcode; 25 - 26 - #if __riscv_xlen == 32 27 - /* C.JAL is an RV32C-only instruction */ 28 - DECLARE_INSN(c_jal, MATCH_C_JAL, MASK_C_JAL) 29 - #else 30 - #define is_c_jal_insn(opcode) 0 31 - #endif 32 - DECLARE_INSN(jalr, MATCH_JALR, MASK_JALR) 33 - DECLARE_INSN(jal, MATCH_JAL, MASK_JAL) 34 - DECLARE_INSN(c_jr, MATCH_C_JR, MASK_C_JR) 35 - DECLARE_INSN(c_jalr, MATCH_C_JALR, MASK_C_JALR) 36 - DECLARE_INSN(c_j, MATCH_C_J, MASK_C_J) 37 - DECLARE_INSN(beq, MATCH_BEQ, MASK_BEQ) 38 - DECLARE_INSN(bne, MATCH_BNE, MASK_BNE) 39 - DECLARE_INSN(blt, MATCH_BLT, MASK_BLT) 40 - DECLARE_INSN(bge, MATCH_BGE, MASK_BGE) 41 - DECLARE_INSN(bltu, MATCH_BLTU, MASK_BLTU) 42 - DECLARE_INSN(bgeu, MATCH_BGEU, MASK_BGEU) 43 - DECLARE_INSN(c_beqz, MATCH_C_BEQZ, MASK_C_BEQZ) 44 - DECLARE_INSN(c_bnez, MATCH_C_BNEZ, MASK_C_BNEZ) 45 - DECLARE_INSN(sret, MATCH_SRET, MASK_SRET) 46 25 47 26 static int decode_register_index(unsigned long opcode, int offset) 48 27 { ··· 44 65 if (get_kernel_nofault(op_code, (void *)pc)) 45 66 return -EINVAL; 46 67 if ((op_code & __INSN_LENGTH_MASK) != __INSN_LENGTH_GE_32) { 47 - if (is_c_jalr_insn(op_code) || is_c_jr_insn(op_code)) { 68 + if (riscv_insn_is_c_jalr(op_code) || 69 + riscv_insn_is_c_jr(op_code)) { 48 70 rs1_num = decode_register_index(op_code, RVC_C2_RS1_OPOFF); 49 71 *next_addr = regs_ptr[rs1_num]; 50 - } else if (is_c_j_insn(op_code) || is_c_jal_insn(op_code)) { 51 - *next_addr = EXTRACT_RVC_J_IMM(op_code) + pc; 52 - } else if (is_c_beqz_insn(op_code)) { 72 + } else if (riscv_insn_is_c_j(op_code) || 73 + riscv_insn_is_c_jal(op_code)) { 74 + *next_addr = RVC_EXTRACT_JTYPE_IMM(op_code) + pc; 75 + } else if (riscv_insn_is_c_beqz(op_code)) { 53 76 rs1_num = 
decode_register_index_short(op_code, 54 77 RVC_C1_RS1_OPOFF); 55 78 if (!rs1_num || regs_ptr[rs1_num] == 0) 56 - *next_addr = EXTRACT_RVC_B_IMM(op_code) + pc; 79 + *next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc; 57 80 else 58 81 *next_addr = pc + 2; 59 - } else if (is_c_bnez_insn(op_code)) { 82 + } else if (riscv_insn_is_c_bnez(op_code)) { 60 83 rs1_num = 61 84 decode_register_index_short(op_code, RVC_C1_RS1_OPOFF); 62 85 if (rs1_num && regs_ptr[rs1_num] != 0) 63 - *next_addr = EXTRACT_RVC_B_IMM(op_code) + pc; 86 + *next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc; 64 87 else 65 88 *next_addr = pc + 2; 66 89 } else { ··· 71 90 } else { 72 91 if ((op_code & __INSN_OPCODE_MASK) == __INSN_BRANCH_OPCODE) { 73 92 bool result = false; 74 - long imm = EXTRACT_BTYPE_IMM(op_code); 93 + long imm = RV_EXTRACT_BTYPE_IMM(op_code); 75 94 unsigned long rs1_val = 0, rs2_val = 0; 76 95 77 96 rs1_num = decode_register_index(op_code, RVG_RS1_OPOFF); ··· 81 100 if (rs2_num) 82 101 rs2_val = regs_ptr[rs2_num]; 83 102 84 - if (is_beq_insn(op_code)) 103 + if (riscv_insn_is_beq(op_code)) 85 104 result = (rs1_val == rs2_val) ? true : false; 86 - else if (is_bne_insn(op_code)) 105 + else if (riscv_insn_is_bne(op_code)) 87 106 result = (rs1_val != rs2_val) ? true : false; 88 - else if (is_blt_insn(op_code)) 107 + else if (riscv_insn_is_blt(op_code)) 89 108 result = 90 109 ((long)rs1_val < 91 110 (long)rs2_val) ? true : false; 92 - else if (is_bge_insn(op_code)) 111 + else if (riscv_insn_is_bge(op_code)) 93 112 result = 94 113 ((long)rs1_val >= 95 114 (long)rs2_val) ? true : false; 96 - else if (is_bltu_insn(op_code)) 115 + else if (riscv_insn_is_bltu(op_code)) 97 116 result = (rs1_val < rs2_val) ? true : false; 98 - else if (is_bgeu_insn(op_code)) 117 + else if (riscv_insn_is_bgeu(op_code)) 99 118 result = (rs1_val >= rs2_val) ? 
true : false; 100 119 if (result) 101 120 *next_addr = imm + pc; 102 121 else 103 122 *next_addr = pc + 4; 104 - } else if (is_jal_insn(op_code)) { 105 - *next_addr = EXTRACT_JTYPE_IMM(op_code) + pc; 106 - } else if (is_jalr_insn(op_code)) { 123 + } else if (riscv_insn_is_jal(op_code)) { 124 + *next_addr = RV_EXTRACT_JTYPE_IMM(op_code) + pc; 125 + } else if (riscv_insn_is_jalr(op_code)) { 107 126 rs1_num = decode_register_index(op_code, RVG_RS1_OPOFF); 108 127 if (rs1_num) 109 128 *next_addr = ((unsigned long *)regs)[rs1_num]; 110 - *next_addr += EXTRACT_ITYPE_IMM(op_code); 111 - } else if (is_sret_insn(op_code)) { 129 + *next_addr += RV_EXTRACT_ITYPE_IMM(op_code); 130 + } else if (riscv_insn_is_sret(op_code)) { 112 131 *next_addr = pc; 113 132 } else { 114 133 *next_addr = pc + 4;
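The extractors used above undo the RISC-V B-type immediate bit shuffle (imm[12] in bit 31, imm[10:5] in bits 30:25, imm[4:1] in bits 11:8, imm[11] in bit 7). A self-contained sketch of that shuffle, with a pack helper invented here purely so the extraction can be round-trip tested:

```c
#include <assert.h>
#include <stdint.h>

/* Scatter an (even, 13-bit signed) branch offset into B-type fields. */
static uint32_t btype_pack(int32_t imm)
{
	uint32_t u = (uint32_t)imm;

	return (((u >> 12) & 0x1) << 31) |
	       (((u >> 5) & 0x3f) << 25) |
	       (((u >> 1) & 0xf) << 8) |
	       (((u >> 11) & 0x1) << 7);
}

/* Gather the fields back and sign-extend from bit 12. */
static int32_t btype_extract(uint32_t insn)
{
	uint32_t imm = (((insn >> 31) & 0x1) << 12) |
		       (((insn >> 25) & 0x3f) << 5) |
		       (((insn >> 8) & 0xf) << 1) |
		       (((insn >> 7) & 0x1) << 11);

	if (imm & 0x1000)
		imm |= 0xffffe000;	/* sign bit set: extend to 32 bits */
	return (int32_t)imm;
}
```

The odd field placement exists so the sign bit and register fields sit in the same positions across instruction formats; single-stepping only needs the gather direction.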
+16 -26
arch/riscv/kernel/mcount-dyn.S
··· 13 13 14 14 .text 15 15 16 - #define FENTRY_RA_OFFSET 12 17 - #define ABI_SIZE_ON_STACK 72 16 + #define FENTRY_RA_OFFSET 8 17 + #define ABI_SIZE_ON_STACK 80 18 18 #define ABI_A0 0 19 19 #define ABI_A1 8 20 20 #define ABI_A2 16 ··· 23 23 #define ABI_A5 40 24 24 #define ABI_A6 48 25 25 #define ABI_A7 56 26 - #define ABI_RA 64 26 + #define ABI_T0 64 27 + #define ABI_RA 72 27 28 28 29 .macro SAVE_ABI 29 - addi sp, sp, -SZREG 30 30 addi sp, sp, -ABI_SIZE_ON_STACK 31 31 32 32 REG_S a0, ABI_A0(sp) ··· 37 37 REG_S a5, ABI_A5(sp) 38 38 REG_S a6, ABI_A6(sp) 39 39 REG_S a7, ABI_A7(sp) 40 + REG_S t0, ABI_T0(sp) 40 41 REG_S ra, ABI_RA(sp) 41 42 .endm 42 43 ··· 50 49 REG_L a5, ABI_A5(sp) 51 50 REG_L a6, ABI_A6(sp) 52 51 REG_L a7, ABI_A7(sp) 52 + REG_L t0, ABI_T0(sp) 53 53 REG_L ra, ABI_RA(sp) 54 54 55 55 addi sp, sp, ABI_SIZE_ON_STACK 56 - addi sp, sp, SZREG 57 56 .endm 58 57 59 58 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 60 59 .macro SAVE_ALL 61 - addi sp, sp, -SZREG 62 60 addi sp, sp, -PT_SIZE_ON_STACK 63 61 64 - REG_S x1, PT_EPC(sp) 65 - addi sp, sp, PT_SIZE_ON_STACK 66 - REG_L x1, (sp) 67 - addi sp, sp, -PT_SIZE_ON_STACK 62 + REG_S t0, PT_EPC(sp) 68 63 REG_S x1, PT_RA(sp) 69 - REG_L x1, PT_EPC(sp) 70 - 71 64 REG_S x2, PT_SP(sp) 72 65 REG_S x3, PT_GP(sp) 73 66 REG_S x4, PT_TP(sp) ··· 95 100 .endm 96 101 97 102 .macro RESTORE_ALL 103 + REG_L t0, PT_EPC(sp) 98 104 REG_L x1, PT_RA(sp) 99 - addi sp, sp, PT_SIZE_ON_STACK 100 - REG_S x1, (sp) 101 - addi sp, sp, -PT_SIZE_ON_STACK 102 - REG_L x1, PT_EPC(sp) 103 105 REG_L x2, PT_SP(sp) 104 106 REG_L x3, PT_GP(sp) 105 107 REG_L x4, PT_TP(sp) 106 - REG_L x5, PT_T0(sp) 107 108 REG_L x6, PT_T1(sp) 108 109 REG_L x7, PT_T2(sp) 109 110 REG_L x8, PT_S0(sp) ··· 128 137 REG_L x31, PT_T6(sp) 129 138 130 139 addi sp, sp, PT_SIZE_ON_STACK 131 - addi sp, sp, SZREG 132 140 .endm 133 141 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ 134 142 135 143 ENTRY(ftrace_caller) 136 144 SAVE_ABI 137 145 138 - addi a0, ra, -FENTRY_RA_OFFSET 146 + addi a0, t0, 
-FENTRY_RA_OFFSET 139 147 la a1, function_trace_op 140 148 REG_L a2, 0(a1) 141 - REG_L a1, ABI_SIZE_ON_STACK(sp) 149 + mv a1, ra 142 150 mv a3, sp 143 151 144 152 ftrace_call: ··· 145 155 call ftrace_stub 146 156 147 157 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 148 - addi a0, sp, ABI_SIZE_ON_STACK 149 - REG_L a1, ABI_RA(sp) 158 + addi a0, sp, ABI_RA 159 + REG_L a1, ABI_T0(sp) 150 160 addi a1, a1, -FENTRY_RA_OFFSET 151 161 #ifdef HAVE_FUNCTION_GRAPH_FP_TEST 152 162 mv a2, s0 ··· 156 166 call ftrace_stub 157 167 #endif 158 168 RESTORE_ABI 159 - ret 169 + jr t0 160 170 ENDPROC(ftrace_caller) 161 171 162 172 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 163 173 ENTRY(ftrace_regs_caller) 164 174 SAVE_ALL 165 175 166 - addi a0, ra, -FENTRY_RA_OFFSET 176 + addi a0, t0, -FENTRY_RA_OFFSET 167 177 la a1, function_trace_op 168 178 REG_L a2, 0(a1) 169 - REG_L a1, PT_SIZE_ON_STACK(sp) 179 + mv a1, ra 170 180 mv a3, sp 171 181 172 182 ftrace_regs_call: ··· 186 196 #endif 187 197 188 198 RESTORE_ALL 189 - ret 199 + jr t0 190 200 ENDPROC(ftrace_regs_caller) 191 201 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+16 -15
arch/riscv/kernel/module.c
··· 268 268 return -EINVAL; 269 269 } 270 270 271 + static int apply_r_riscv_add16_rela(struct module *me, u32 *location, 272 + Elf_Addr v) 273 + { 274 + *(u16 *)location += (u16)v; 275 + return 0; 276 + } 277 + 271 278 static int apply_r_riscv_add32_rela(struct module *me, u32 *location, 272 279 Elf_Addr v) 273 280 { ··· 286 279 Elf_Addr v) 287 280 { 288 281 *(u64 *)location += (u64)v; 282 + return 0; 283 + } 284 + 285 + static int apply_r_riscv_sub16_rela(struct module *me, u32 *location, 286 + Elf_Addr v) 287 + { 288 + *(u16 *)location -= (u16)v; 289 289 return 0; 290 290 } 291 291 ··· 329 315 [R_RISCV_CALL] = apply_r_riscv_call_rela, 330 316 [R_RISCV_RELAX] = apply_r_riscv_relax_rela, 331 317 [R_RISCV_ALIGN] = apply_r_riscv_align_rela, 318 + [R_RISCV_ADD16] = apply_r_riscv_add16_rela, 332 319 [R_RISCV_ADD32] = apply_r_riscv_add32_rela, 333 320 [R_RISCV_ADD64] = apply_r_riscv_add64_rela, 321 + [R_RISCV_SUB16] = apply_r_riscv_sub16_rela, 334 322 [R_RISCV_SUB32] = apply_r_riscv_sub32_rela, 335 323 [R_RISCV_SUB64] = apply_r_riscv_sub64_rela, 336 324 }; ··· 444 428 __builtin_return_address(0)); 445 429 } 446 430 #endif 447 - 448 - static const Elf_Shdr *find_section(const Elf_Ehdr *hdr, 449 - const Elf_Shdr *sechdrs, 450 - const char *name) 451 - { 452 - const Elf_Shdr *s, *se; 453 - const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; 454 - 455 - for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) { 456 - if (strcmp(name, secstrs + s->sh_name) == 0) 457 - return s; 458 - } 459 - 460 - return NULL; 461 - } 462 431 463 432 int module_finalize(const Elf_Ehdr *hdr, 464 433 const Elf_Shdr *sechdrs,
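ADD and SUB relocations like the 16-bit pair added here are emitted together so the loader can materialize a label difference (for example `end - start` in a 16-bit table slot) without knowing either absolute address at assembly time. A minimal stand-in for the pairing, with hypothetical addresses:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for the ADD16/SUB16 handlers added above. */
static void add16(uint16_t *loc, uint64_t v)
{
	*loc += (uint16_t)v;	/* R_RISCV_ADD16 */
}

static void sub16(uint16_t *loc, uint64_t v)
{
	*loc -= (uint16_t)v;	/* R_RISCV_SUB16 */
}
```

Applying ADD16 with the address of `end` and SUB16 with the address of `start` leaves `end - start` in the slot, regardless of where the module was loaded.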
+6 -13
arch/riscv/kernel/probes/simulate-insn.c
··· 136 136 #define branch_offset(opcode) \ 137 137 sign_extend32((branch_imm(opcode)), 12) 138 138 139 - #define BRANCH_BEQ 0x0 140 - #define BRANCH_BNE 0x1 141 - #define BRANCH_BLT 0x4 142 - #define BRANCH_BGE 0x5 143 - #define BRANCH_BLTU 0x6 144 - #define BRANCH_BGEU 0x7 145 - 146 139 bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs) 147 140 { 148 141 /* ··· 162 169 163 170 offset_tmp = branch_offset(opcode); 164 171 switch (branch_funct3(opcode)) { 165 - case BRANCH_BEQ: 172 + case RVG_FUNCT3_BEQ: 166 173 offset = (rs1_val == rs2_val) ? offset_tmp : 4; 167 174 break; 168 - case BRANCH_BNE: 175 + case RVG_FUNCT3_BNE: 169 176 offset = (rs1_val != rs2_val) ? offset_tmp : 4; 170 177 break; 171 - case BRANCH_BLT: 178 + case RVG_FUNCT3_BLT: 172 179 offset = ((long)rs1_val < (long)rs2_val) ? offset_tmp : 4; 173 180 break; 174 - case BRANCH_BGE: 181 + case RVG_FUNCT3_BGE: 175 182 offset = ((long)rs1_val >= (long)rs2_val) ? offset_tmp : 4; 176 183 break; 177 - case BRANCH_BLTU: 184 + case RVG_FUNCT3_BLTU: 178 185 offset = (rs1_val < rs2_val) ? offset_tmp : 4; 179 186 break; 180 - case BRANCH_BGEU: 187 + case RVG_FUNCT3_BGEU: 181 188 offset = (rs1_val >= rs2_val) ? offset_tmp : 4; 182 189 break; 183 190 default:
+5 -24
arch/riscv/kernel/probes/simulate-insn.h
··· 3 3 #ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H 4 4 #define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H 5 5 6 - #define __RISCV_INSN_FUNCS(name, mask, val) \ 7 - static __always_inline bool riscv_insn_is_##name(probe_opcode_t code) \ 8 - { \ 9 - BUILD_BUG_ON(~(mask) & (val)); \ 10 - return (code & (mask)) == (val); \ 11 - } \ 12 - bool simulate_##name(u32 opcode, unsigned long addr, \ 13 - struct pt_regs *regs) 6 + #include <asm/insn.h> 14 7 15 8 #define RISCV_INSN_REJECTED(name, code) \ 16 9 do { \ ··· 11 18 return INSN_REJECTED; \ 12 19 } \ 13 20 } while (0) 14 - 15 - __RISCV_INSN_FUNCS(system, 0x7f, 0x73); 16 - __RISCV_INSN_FUNCS(fence, 0x7f, 0x0f); 17 21 18 22 #define RISCV_INSN_SET_SIMULATE(name, code) \ 19 23 do { \ ··· 20 30 } \ 21 31 } while (0) 22 32 23 - __RISCV_INSN_FUNCS(c_j, 0xe003, 0xa001); 24 - __RISCV_INSN_FUNCS(c_jr, 0xf07f, 0x8002); 25 - __RISCV_INSN_FUNCS(c_jal, 0xe003, 0x2001); 26 - __RISCV_INSN_FUNCS(c_jalr, 0xf07f, 0x9002); 27 - __RISCV_INSN_FUNCS(c_beqz, 0xe003, 0xc001); 28 - __RISCV_INSN_FUNCS(c_bnez, 0xe003, 0xe001); 29 - __RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002); 30 - 31 - __RISCV_INSN_FUNCS(auipc, 0x7f, 0x17); 32 - __RISCV_INSN_FUNCS(branch, 0x7f, 0x63); 33 - 34 - __RISCV_INSN_FUNCS(jal, 0x7f, 0x6f); 35 - __RISCV_INSN_FUNCS(jalr, 0x707f, 0x67); 33 + bool simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *regs); 34 + bool simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs); 35 + bool simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs); 36 + bool simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs); 36 37 37 38 #endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
+3
arch/riscv/kernel/riscv_ksyms.c
··· 12 12 EXPORT_SYMBOL(memset); 13 13 EXPORT_SYMBOL(memcpy); 14 14 EXPORT_SYMBOL(memmove); 15 + EXPORT_SYMBOL(strcmp); 16 + EXPORT_SYMBOL(strlen); 17 + EXPORT_SYMBOL(strncmp); 15 18 EXPORT_SYMBOL(__memset); 16 19 EXPORT_SYMBOL(__memcpy); 17 20 EXPORT_SYMBOL(__memmove);
+3
arch/riscv/kernel/setup.c
··· 300 300 riscv_init_cbom_blocksize(); 301 301 riscv_fill_hwcap(); 302 302 apply_boot_alternatives(); 303 + if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) && 304 + riscv_isa_extension_available(NULL, ZICBOM)) 305 + riscv_noncoherent_supported(); 303 306 } 304 307 305 308 static int __init topology_init(void)
+27 -3
arch/riscv/kernel/traps.c
··· 29 29 30 30 static DEFINE_SPINLOCK(die_lock); 31 31 32 + static void dump_kernel_instr(const char *loglvl, struct pt_regs *regs) 33 + { 34 + char str[sizeof("0000 ") * 12 + 2 + 1], *p = str; 35 + const u16 *insns = (u16 *)instruction_pointer(regs); 36 + long bad; 37 + u16 val; 38 + int i; 39 + 40 + for (i = -10; i < 2; i++) { 41 + bad = get_kernel_nofault(val, &insns[i]); 42 + if (!bad) { 43 + p += sprintf(p, i == 0 ? "(%04hx) " : "%04hx ", val); 44 + } else { 45 + printk("%sCode: Unable to access instruction at 0x%px.\n", 46 + loglvl, &insns[i]); 47 + return; 48 + } 49 + } 50 + printk("%sCode: %s\n", loglvl, str); 51 + } 52 + 32 53 void die(struct pt_regs *regs, const char *str) 33 54 { 34 55 static int die_counter; 35 56 int ret; 36 57 long cause; 58 + unsigned long flags; 37 59 38 60 oops_enter(); 39 61 40 - spin_lock_irq(&die_lock); 62 + spin_lock_irqsave(&die_lock, flags); 41 63 console_verbose(); 42 64 bust_spinlocks(1); 43 65 44 66 pr_emerg("%s [#%d]\n", str, ++die_counter); 45 67 print_modules(); 46 - if (regs) 68 + if (regs) { 47 69 show_regs(regs); 70 + dump_kernel_instr(KERN_EMERG, regs); 71 + } 48 72 49 73 cause = regs ? regs->cause : -1; 50 74 ret = notify_die(DIE_OOPS, str, regs, 0, cause, SIGSEGV); ··· 78 54 79 55 bust_spinlocks(0); 80 56 add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE); 81 - spin_unlock_irq(&die_lock); 57 + spin_unlock_irqrestore(&die_lock, flags); 82 58 oops_exit(); 83 59 84 60 if (in_interrupt())
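dump_kernel_instr() walks the text in 16-bit parcels because RISC-V encodes instruction length in the low bits of the first parcel (the `__INSN_LENGTH_MASK` check seen elsewhere in this series). A minimal sketch of that length rule for the base and compressed encodings:

```c
#include <assert.h>
#include <stdint.h>

/*
 * For the currently ratified encodings: if the two low bits of the first
 * 16-bit parcel are both 1, the instruction is 32 bits wide; otherwise
 * it is a 16-bit compressed instruction.
 */
static int insn_length(uint16_t parcel)
{
	return (parcel & 0x3) == 0x3 ? 4 : 2;
}
```

This is why the Code: line prints 16-bit groups rather than fixed 32-bit words, and why the faulting parcel can be bracketed exactly.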
-5
arch/riscv/kernel/vdso.c
··· 22 22 }; 23 23 #endif 24 24 25 - extern char vdso_start[], vdso_end[]; 26 - #ifdef CONFIG_COMPAT 27 - extern char compat_vdso_start[], compat_vdso_end[]; 28 - #endif 29 - 30 25 enum vvar_pages { 31 26 VVAR_DATA_PAGE_OFFSET, 32 27 VVAR_TIMENS_PAGE_OFFSET,
+7
arch/riscv/kernel/vdso/vdso.lds.S
··· 40 40 . = 0x800; 41 41 .text : { *(.text .text.*) } :text 42 42 43 + . = ALIGN(4); 44 + .alternative : { 45 + __alt_start = .; 46 + *(.alternative) 47 + __alt_end = .; 48 + } 49 + 43 50 .data : { 44 51 *(.got.plt) *(.got) 45 52 *(.data .data.* .gnu.linkonce.d.*)
+9
arch/riscv/kernel/vmlinux.lds.S
··· 5 5 */ 6 6 7 7 #define RO_EXCEPTION_TABLE_ALIGN 4 8 + #define RUNTIME_DISCARD_EXIT 8 9 9 10 #ifdef CONFIG_XIP_KERNEL 10 11 #include "vmlinux-xip.lds.S" ··· 86 85 /* Start of init data section */ 87 86 __init_data_begin = .; 88 87 INIT_DATA_SECTION(16) 88 + .init.bss : { 89 + *(.init.bss) /* from the EFI stub */ 90 + } 89 91 .exit.data : 90 92 { 91 93 EXIT_DATA ··· 97 93 98 94 .rel.dyn : { 99 95 *(.rel.dyn*) 96 + } 97 + 98 + .rela.dyn : { 99 + *(.rela*) 100 100 } 101 101 102 102 __init_data_end = .; ··· 148 140 STABS_DEBUG 149 141 DWARF_DEBUG 150 142 ELF_DETAILS 143 + .riscv.attributes 0 : { *(.riscv.attributes) } 151 144 152 145 DISCARDS 153 146 }
+1 -2
arch/riscv/kvm/tlb.c
··· 15 15 #include <asm/hwcap.h> 16 16 #include <asm/insn-def.h> 17 17 18 - #define has_svinval() \ 19 - static_branch_unlikely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_SVINVAL]) 18 + #define has_svinval() riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL) 20 19 21 20 void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid, 22 21 gpa_t gpa, gpa_t gpsz,
+3
arch/riscv/lib/Makefile
··· 3 3 lib-y += memcpy.o 4 4 lib-y += memset.o 5 5 lib-y += memmove.o 6 + lib-y += strcmp.o 7 + lib-y += strlen.o 8 + lib-y += strncmp.o 6 9 lib-$(CONFIG_MMU) += uaccess.o 7 10 lib-$(CONFIG_64BIT) += tishift.o 8 11
+121
arch/riscv/lib/strcmp.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #include <linux/linkage.h> 4 + #include <asm/asm.h> 5 + #include <asm-generic/export.h> 6 + #include <asm/alternative-macros.h> 7 + #include <asm/errata_list.h> 8 + 9 + /* int strcmp(const char *cs, const char *ct) */ 10 + SYM_FUNC_START(strcmp) 11 + 12 + ALTERNATIVE("nop", "j strcmp_zbb", 0, RISCV_ISA_EXT_ZBB, CONFIG_RISCV_ISA_ZBB) 13 + 14 + /* 15 + * Returns 16 + * a0 - comparison result, value like strcmp 17 + * 18 + * Parameters 19 + * a0 - string1 20 + * a1 - string2 21 + * 22 + * Clobbers 23 + * t0, t1 24 + */ 25 + 1: 26 + lbu t0, 0(a0) 27 + lbu t1, 0(a1) 28 + addi a0, a0, 1 29 + addi a1, a1, 1 30 + bne t0, t1, 2f 31 + bnez t0, 1b 32 + li a0, 0 33 + ret 34 + 2: 35 + /* 36 + * strcmp only needs to return (< 0, 0, > 0) values 37 + * not necessarily -1, 0, +1 38 + */ 39 + sub a0, t0, t1 40 + ret 41 + 42 + /* 43 + * Variant of strcmp using the ZBB extension if available 44 + */ 45 + #ifdef CONFIG_RISCV_ISA_ZBB 46 + strcmp_zbb: 47 + 48 + .option push 49 + .option arch,+zbb 50 + 51 + /* 52 + * Returns 53 + * a0 - comparison result, value like strcmp 54 + * 55 + * Parameters 56 + * a0 - string1 57 + * a1 - string2 58 + * 59 + * Clobbers 60 + * t0, t1, t2, t3, t4, t5 61 + */ 62 + 63 + or t2, a0, a1 64 + li t4, -1 65 + and t2, t2, SZREG-1 66 + bnez t2, 3f 67 + 68 + /* Main loop for aligned string. */ 69 + .p2align 3 70 + 1: 71 + REG_L t0, 0(a0) 72 + REG_L t1, 0(a1) 73 + orc.b t3, t0 74 + bne t3, t4, 2f 75 + addi a0, a0, SZREG 76 + addi a1, a1, SZREG 77 + beq t0, t1, 1b 78 + 79 + /* 80 + * Words don't match, and no null byte in the first 81 + * word. Get bytes in big-endian order and compare. 82 + */ 83 + #ifndef CONFIG_CPU_BIG_ENDIAN 84 + rev8 t0, t0 85 + rev8 t1, t1 86 + #endif 87 + 88 + /* Synthesize (t0 >= t1) ? 1 : -1 in a branchless sequence. */ 89 + sltu a0, t0, t1 90 + neg a0, a0 91 + ori a0, a0, 1 92 + ret 93 + 94 + 2: 95 + /* 96 + * Found a null byte. 
97 + * If words don't match, fall back to simple loop. 98 + */ 99 + bne t0, t1, 3f 100 + 101 + /* Otherwise, strings are equal. */ 102 + li a0, 0 103 + ret 104 + 105 + /* Simple loop for misaligned strings. */ 106 + .p2align 3 107 + 3: 108 + lbu t0, 0(a0) 109 + lbu t1, 0(a1) 110 + addi a0, a0, 1 111 + addi a1, a1, 1 112 + bne t0, t1, 4f 113 + bnez t0, 3b 114 + 115 + 4: 116 + sub a0, t0, t1 117 + ret 118 + 119 + .option pop 120 + #endif 121 + SYM_FUNC_END(strcmp)
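The fast path above leans on Zbb's orc.b, which maps every non-zero byte to 0xff and every zero byte to 0x00, so "does this word contain a NUL?" becomes a single compare against all-ones. An illustrative C emulation of that trick (not how the kernel implements it):

```c
#include <assert.h>
#include <stdint.h>

/* Emulate Zbb orc.b on a 64-bit word: 0xff per non-zero byte. */
static uint64_t orc_b(uint64_t x)
{
	uint64_t r = 0;
	int i;

	for (i = 0; i < 8; i++)
		if ((x >> (8 * i)) & 0xff)
			r |= (uint64_t)0xff << (8 * i);
	return r;
}

/* The strcmp loop's exit test: a word holds a NUL iff orc.b != ~0. */
static int word_has_nul(uint64_t w)
{
	return orc_b(w) != ~(uint64_t)0;
}
```

That single branch per word is what lets the aligned loop above compare SZREG bytes per iteration instead of one.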
+133
arch/riscv/lib/strlen.S
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm-generic/export.h>
+#include <asm/alternative-macros.h>
+#include <asm/errata_list.h>
+
+/* int strlen(const char *s) */
+SYM_FUNC_START(strlen)
+
+	ALTERNATIVE("nop", "j strlen_zbb", 0, RISCV_ISA_EXT_ZBB, CONFIG_RISCV_ISA_ZBB)
+
+	/*
+	 * Returns
+	 *   a0 - string length
+	 *
+	 * Parameters
+	 *   a0 - String to measure
+	 *
+	 * Clobbers:
+	 *   t0, t1
+	 */
+	mv	t1, a0
+1:
+	lbu	t0, 0(t1)
+	beqz	t0, 2f
+	addi	t1, t1, 1
+	j	1b
+2:
+	sub	a0, t1, a0
+	ret
+
+/*
+ * Variant of strlen using the ZBB extension if available
+ */
+#ifdef CONFIG_RISCV_ISA_ZBB
+strlen_zbb:
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+# define CZ	clz
+# define SHIFT	sll
+#else
+# define CZ	ctz
+# define SHIFT	srl
+#endif
+
+	.option push
+	.option arch,+zbb
+
+	/*
+	 * Returns
+	 *   a0 - string length
+	 *
+	 * Parameters
+	 *   a0 - String to measure
+	 *
+	 * Clobbers
+	 *   t0, t1, t2, t3
+	 */
+
+	/* Number of irrelevant bytes in the first word. */
+	andi	t2, a0, SZREG-1
+
+	/* Align pointer. */
+	andi	t0, a0, -SZREG
+
+	li	t3, SZREG
+	sub	t3, t3, t2
+	slli	t2, t2, 3
+
+	/* Get the first word. */
+	REG_L	t1, 0(t0)
+
+	/*
+	 * Shift away the partial data we loaded to remove the irrelevant bytes
+	 * preceding the string with the effect of adding NUL bytes at the
+	 * end of the string's first word.
+	 */
+	SHIFT	t1, t1, t2
+
+	/* Convert non-NUL into 0xff and NUL into 0x00. */
+	orc.b	t1, t1
+
+	/* Convert non-NUL into 0x00 and NUL into 0xff. */
+	not	t1, t1
+
+	/*
+	 * Search for the first set bit (corresponding to a NUL byte in the
+	 * original chunk).
+	 */
+	CZ	t1, t1
+
+	/*
+	 * The first chunk is special: compare against the number
+	 * of valid bytes in this chunk.
+	 */
+	srli	a0, t1, 3
+	bgtu	t3, a0, 3f
+
+	/* Prepare for the word comparison loop. */
+	addi	t2, t0, SZREG
+	li	t3, -1
+
+	/*
+	 * Our critical loop is 4 instructions and processes data in
+	 * 4 byte or 8 byte chunks.
+	 */
+	.p2align 3
+1:
+	REG_L	t1, SZREG(t0)
+	addi	t0, t0, SZREG
+	orc.b	t1, t1
+	beq	t1, t3, 1b
+2:
+	not	t1, t1
+	CZ	t1, t1
+
+	/* Get number of processed words. */
+	sub	t2, t0, t2
+
+	/* Add number of characters in the first word. */
+	add	a0, a0, t2
+	srli	t1, t1, 3
+
+	/* Add number of characters in the last word. */
+	add	a0, a0, t1
+3:
+	ret
+
+	.option pop
+#endif
+SYM_FUNC_END(strlen)
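The core trick of the Zbb variant — `orc.b` to smear non-NUL bytes to 0xff, `not` to flag NUL bytes, then a count-trailing-zeros to find the first one — can be sketched in portable C. This is a little-endian illustration with the misaligned-head masking omitted; `orc_b()` and `strlen_wordwise()` are hypothetical helper names, not kernel symbols:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* C emulation of the Zbb orc.b instruction: each byte of the word
 * becomes 0xff if it was non-zero and 0x00 if it was zero. */
static uint64_t orc_b(uint64_t x)
{
	uint64_t out = 0;

	for (int i = 0; i < 8; i++)
		if ((x >> (8 * i)) & 0xff)
			out |= (uint64_t)0xff << (8 * i);
	return out;
}

/* Word-at-a-time strlen sketch.  Like the assembly's main loop, it may
 * read a few bytes past the NUL, so callers must pass a buffer padded
 * to a word boundary (the kernel routine handles this via alignment). */
static size_t strlen_wordwise(const char *s)
{
	const char *p = s;
	uint64_t w, m;

	for (;;) {
		memcpy(&w, p, sizeof(w));	/* REG_L t1, 0(t0)  */
		m = ~orc_b(w);			/* NUL bytes -> 0xff */
		if (m)				/* ctz/8 = index of  */
			return (size_t)(p - s) + (__builtin_ctzll(m) >> 3);
		p += sizeof(w);
	}
}
```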
+139
arch/riscv/lib/strncmp.S
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm-generic/export.h>
+#include <asm/alternative-macros.h>
+#include <asm/errata_list.h>
+
+/* int strncmp(const char *cs, const char *ct, size_t count) */
+SYM_FUNC_START(strncmp)
+
+	ALTERNATIVE("nop", "j strncmp_zbb", 0, RISCV_ISA_EXT_ZBB, CONFIG_RISCV_ISA_ZBB)
+
+	/*
+	 * Returns
+	 *   a0 - comparison result, value like strncmp
+	 *
+	 * Parameters
+	 *   a0 - string1
+	 *   a1 - string2
+	 *   a2 - number of characters to compare
+	 *
+	 * Clobbers
+	 *   t0, t1, t2
+	 */
+	li	t2, 0
+1:
+	beq	a2, t2, 2f
+	lbu	t0, 0(a0)
+	lbu	t1, 0(a1)
+	addi	a0, a0, 1
+	addi	a1, a1, 1
+	bne	t0, t1, 3f
+	addi	t2, t2, 1
+	bnez	t0, 1b
+2:
+	li	a0, 0
+	ret
+3:
+	/*
+	 * strncmp only needs to return (< 0, 0, > 0) values
+	 * not necessarily -1, 0, +1
+	 */
+	sub	a0, t0, t1
+	ret
+
+/*
+ * Variant of strncmp using the ZBB extension if available
+ */
+#ifdef CONFIG_RISCV_ISA_ZBB
+strncmp_zbb:
+
+	.option push
+	.option arch,+zbb
+
+	/*
+	 * Returns
+	 *   a0 - comparison result, like strncmp
+	 *
+	 * Parameters
+	 *   a0 - string1
+	 *   a1 - string2
+	 *   a2 - number of characters to compare
+	 *
+	 * Clobbers
+	 *   t0, t1, t2, t3, t4, t5, t6
+	 */
+
+	or	t2, a0, a1
+	li	t5, -1
+	and	t2, t2, SZREG-1
+	add	t4, a0, a2
+	bnez	t2, 4f
+
+	/* Adjust limit for fast-path. */
+	andi	t6, t4, -SZREG
+
+	/* Main loop for aligned string. */
+	.p2align 3
+1:
+	bgt	a0, t6, 3f
+	REG_L	t0, 0(a0)
+	REG_L	t1, 0(a1)
+	orc.b	t3, t0
+	bne	t3, t5, 2f
+	addi	a0, a0, SZREG
+	addi	a1, a1, SZREG
+	beq	t0, t1, 1b
+
+	/*
+	 * Words don't match, and no null byte in the first
+	 * word. Get bytes in big-endian order and compare.
+	 */
+#ifndef CONFIG_CPU_BIG_ENDIAN
+	rev8	t0, t0
+	rev8	t1, t1
+#endif
+
+	/* Synthesize (t0 >= t1) ? 1 : -1 in a branchless sequence. */
+	sltu	a0, t0, t1
+	neg	a0, a0
+	ori	a0, a0, 1
+	ret
+
+2:
+	/*
+	 * Found a null byte.
+	 * If words don't match, fall back to simple loop.
+	 */
+	bne	t0, t1, 3f
+
+	/* Otherwise, strings are equal. */
+	li	a0, 0
+	ret
+
+	/* Simple loop for misaligned strings. */
+3:
+	/* Restore limit for slow-path. */
+	.p2align 3
+4:
+	bge	a0, t4, 6f
+	lbu	t0, 0(a0)
+	lbu	t1, 0(a1)
+	addi	a0, a0, 1
+	addi	a1, a1, 1
+	bne	t0, t1, 5f
+	bnez	t0, 4b
+
+5:
+	sub	a0, t0, t1
+	ret
+
+6:
+	li	a0, 0
+	ret
+
+	.option pop
+#endif
+SYM_FUNC_END(strncmp)
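The `rev8` plus `sltu`/`neg`/`ori` tail above can be rendered in C to show why it works. A sketch using `__builtin_bswap64` as a stand-in for `rev8` (the function name is illustrative); note the routine only reaches this path for unequal words, so the "equal yields +1" case never occurs in practice:

```c
#include <stdint.h>

/* Branchless word ordering, mirroring the assembly tail: rev8 puts each
 * word's bytes into big-endian order so a plain unsigned compare ranks
 * them in memcmp order; sltu/neg/ori then turn that into -1 or +1. */
static int word_cmp_sign(uint64_t a, uint64_t b)
{
	long r;

	a = __builtin_bswap64(a);	/* rev8 t0, t0 */
	b = __builtin_bswap64(b);	/* rev8 t1, t1 */

	r = (a < b);			/* sltu a0, t0, t1 */
	r = -r;				/* neg  a0, a0     */
	r |= 1;				/* ori  a0, a0, 1  */
	return (int)r;			/* -1 if a < b, else +1 */
}
```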
+6 -4
arch/riscv/mm/fault.c
···
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 
-	if (!user_mode(regs) && addr < TASK_SIZE &&
-	    unlikely(!(regs->status & SR_SUM)))
-		die_kernel_fault("access to user memory without uaccess routines",
-				 addr, regs);
+	if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) {
+		if (fixup_exception(regs))
+			return;
+
+		die_kernel_fault("access to user memory without uaccess routines", addr, regs);
+	}
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
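The change above gives the exception table a chance to handle the fault (as BPF exception handlers need) before declaring the access fatal. Conceptually, `fixup_exception()` looks up the faulting PC in a table of instruction/fixup pairs; the sketch below uses a flat illustrative layout, not the kernel's real PC-relative entry format:

```c
#include <stddef.h>

/* Illustrative model of an exception-table lookup: each entry pairs the
 * address of an instruction that may legitimately fault with the address
 * of a fixup stub to resume at.  The real kernel stores relative offsets
 * plus handler-type data and searches a sorted section. */
struct ex_entry {
	unsigned long insn;	/* potentially-faulting instruction */
	unsigned long fixup;	/* resume address on a fault */
};

static unsigned long lookup_fixup(const struct ex_entry *tbl, size_t n,
				  unsigned long pc)
{
	for (size_t i = 0; i < n; i++)
		if (tbl[i].insn == pc)
			return tbl[i].fixup;
	return 0;	/* no entry: caller falls through to die_kernel_fault() */
}
```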
+13
arch/riscv/purgatory/Makefile
···
 OBJECT_FILES_NON_STANDARD := y
 
 purgatory-y := purgatory.o sha256.o entry.o string.o ctype.o memcpy.o memset.o
+purgatory-y += strcmp.o strlen.o strncmp.o
 
 targets += $(purgatory-y)
 PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
···
 	$(call if_changed_rule,as_o_S)
 
 $(obj)/memset.o: $(srctree)/arch/riscv/lib/memset.S FORCE
+	$(call if_changed_rule,as_o_S)
+
+$(obj)/strcmp.o: $(srctree)/arch/riscv/lib/strcmp.S FORCE
+	$(call if_changed_rule,as_o_S)
+
+$(obj)/strlen.o: $(srctree)/arch/riscv/lib/strlen.S FORCE
+	$(call if_changed_rule,as_o_S)
+
+$(obj)/strncmp.o: $(srctree)/arch/riscv/lib/strncmp.S FORCE
 	$(call if_changed_rule,as_o_S)
 
 $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE
···
 AFLAGS_REMOVE_entry.o	+= -Wa,-gdwarf-2
 AFLAGS_REMOVE_memcpy.o	+= -Wa,-gdwarf-2
 AFLAGS_REMOVE_memset.o	+= -Wa,-gdwarf-2
+AFLAGS_REMOVE_strcmp.o	+= -Wa,-gdwarf-2
+AFLAGS_REMOVE_strlen.o	+= -Wa,-gdwarf-2
+AFLAGS_REMOVE_strncmp.o	+= -Wa,-gdwarf-2
 
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
 	$(call if_changed,ld)
+11 -1
scripts/decodecode
···
 	${CROSS_COMPILE}strip $t.o
 fi
 
+if [ "$ARCH" = "riscv" ]; then
+	OBJDUMPFLAGS="-M no-aliases --section=.text -D"
+	${CROSS_COMPILE}strip $t.o
+fi
+
 if [ $pc_sub -ne 0 ]; then
 	if [ $PC ]; then
 		adj_vma=$(( $PC - $pc_sub ))
···
 do
 	substr+="$opc"
 
+	opcode="$substr"
+	if [ "$ARCH" = "riscv" ]; then
+		opcode=$(echo $opcode | tr ' ' '\n' | tac | tr -d '\n')
+	fi
+
 	# return if opcode bytes do not match @opline anymore
-	if ! echo $opline | grep -q "$substr";
+	if ! echo $opline | grep -q "$opcode";
 	then
 		break
 	fi
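The `tr`/`tac`/`tr` pipeline added above exists because objdump prints RISC-V opcode bytes in little-endian memory order, while the disassembly column shows the instruction word big-endian. A standalone demonstration (requires GNU `tac`):

```shell
#!/bin/sh
# Reassemble objdump's little-endian byte listing into the natural
# big-endian reading of the instruction word, as decodecode now does
# for ARCH=riscv: split on spaces, reverse the lines, rejoin.
bytes="13 05 a0 02"	# li a0, 42 encodes as 0x02a00513
opcode=$(echo "$bytes" | tr ' ' '\n' | tac | tr -d '\n')
echo "$opcode"		# prints 02a00513
```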