Merge tag 'riscv-for-linus-6.10-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Palmer Dabbelt:

- Add byte/half-word compare-and-exchange (and exchange), emulated via LR/SC loops on the containing aligned 32-bit word

- Support for Rust (riscv64 only)

- Support for Zihintpause in hwprobe

- Add PR_RISCV_SET_ICACHE_FLUSH_CTX prctl()

- Support lockless lockrefs (ARCH_USE_CMPXCHG_LOCKREF, 64-bit only)

* tag 'riscv-for-linus-6.10-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (42 commits)
riscv: defconfig: Enable CONFIG_CLK_SOPHGO_CV1800
riscv: select ARCH_HAS_FAST_MULTIPLIER
riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required
riscv: Annotate pgtable_l{4,5}_enabled with __ro_after_init
riscv: Remove redundant CONFIG_64BIT from pgtable_l{4,5}_enabled
riscv: mm: Always use an ASID to flush mm contexts
riscv: mm: Preserve global TLB entries when switching contexts
riscv: mm: Make asid_bits a local variable
riscv: mm: Use a fixed layout for the MM context ID
riscv: mm: Introduce cntx2asid/cntx2version helper macros
riscv: Avoid TLB flush loops when affected by SiFive CIP-1200
riscv: Apply SiFive CIP-1200 workaround to single-ASID sfence.vma
riscv: mm: Combine the SMP and UP TLB flush code
riscv: Only send remote fences when some other CPU is online
riscv: mm: Broadcast kernel TLB flushes only when needed
riscv: Use IPIs for remote cache/TLB flushes by default
riscv: Factor out page table TLB synchronization
riscv: Flush the instruction cache during SMP bringup
riscv: hwprobe: export Zihintpause ISA extension
riscv: misaligned: remove CONFIG_RISCV_M_MODE specific code
...

+755 -668
+98
Documentation/arch/riscv/cmodx.rst
···
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================================================================== 4 + Concurrent Modification and Execution of Instructions (CMODX) for RISC-V Linux 5 + ============================================================================== 6 + 7 + CMODX is a programming technique where a program executes instructions that were 8 + modified by the program itself. Instruction storage and the instruction cache 9 + (icache) are not guaranteed to be synchronized on RISC-V hardware. Therefore, the 10 + program must enforce its own synchronization with the unprivileged fence.i 11 + instruction. 12 + 13 + However, the default Linux ABI prohibits the use of fence.i in userspace 14 + applications. At any point the scheduler may migrate a task onto a new hart. If 15 + migration occurs after the userspace synchronized the icache and instruction 16 + storage with fence.i, the icache on the new hart will no longer be clean. This 17 + is due to the behavior of fence.i only affecting the hart that it is called on. 18 + Thus, the hart that the task has been migrated to may not have synchronized 19 + instruction storage and icache. 20 + 21 + There are two ways to solve this problem: use the riscv_flush_icache() syscall, 22 + or use the ``PR_RISCV_SET_ICACHE_FLUSH_CTX`` prctl() and emit fence.i in 23 + userspace. The syscall performs a one-off icache flushing operation. The prctl 24 + changes the Linux ABI to allow userspace to emit icache flushing operations. 25 + 26 + As an aside, "deferred" icache flushes can sometimes be triggered in the kernel. 27 + At the time of writing, this only occurs during the riscv_flush_icache() syscall 28 + and when the kernel uses copy_to_user_page(). These deferred flushes happen only 29 + when the memory map being used by a hart changes. If the prctl() context caused 30 + an icache flush, this deferred icache flush will be skipped as it is redundant. 31 + Therefore, there will be no additional flush when using the riscv_flush_icache() 32 + syscall inside of the prctl() context. 33 + 34 + prctl() Interface 35 + --------------------- 36 + 37 + Call prctl() with ``PR_RISCV_SET_ICACHE_FLUSH_CTX`` as the first argument. The 38 + remaining arguments will be delegated to the riscv_set_icache_flush_ctx 39 + function detailed below. 40 + 41 + .. kernel-doc:: arch/riscv/mm/cacheflush.c 42 + :identifiers: riscv_set_icache_flush_ctx 43 + 44 + Example usage: 45 + 46 + The following files are meant to be compiled and linked with each other. The 47 + modify_instruction() function replaces an add with 0 with an add with one, 48 + causing the instruction sequence in get_value() to change from returning a zero 49 + to returning a one. 
50 + 51 + cmodx.c:: 52 + 53 + #include <stdio.h> 54 + #include <sys/prctl.h> 55 + 56 + extern int get_value(); 57 + extern void modify_instruction(); 58 + 59 + int main() 60 + { 61 + int value = get_value(); 62 + printf("Value before cmodx: %d\n", value); 63 + 64 + // Call prctl before first fence.i is called inside modify_instruction 65 + prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX_ON, PR_RISCV_CTX_SW_FENCEI, PR_RISCV_SCOPE_PER_PROCESS); 66 + modify_instruction(); 67 + // Call prctl after final fence.i is called in process 68 + prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX_OFF, PR_RISCV_CTX_SW_FENCEI, PR_RISCV_SCOPE_PER_PROCESS); 69 + 70 + value = get_value(); 71 + printf("Value after cmodx: %d\n", value); 72 + return 0; 73 + } 74 + 75 + cmodx.S:: 76 + 77 + .option norvc 78 + 79 + .text 80 + .global modify_instruction 81 + modify_instruction: 82 + lw a0, new_insn 83 + lui a5,%hi(old_insn) 84 + sw a0,%lo(old_insn)(a5) 85 + fence.i 86 + ret 87 + 88 + .section modifiable, "awx" 89 + .global get_value 90 + get_value: 91 + li a0, 0 92 + old_insn: 93 + addi a0, a0, 0 94 + ret 95 + 96 + .data 97 + new_insn: 98 + addi a0, a0, 1
+4
Documentation/arch/riscv/hwprobe.rst
··· 188 manual starting from commit 95cf1f9 ("Add changes requested by Ved 189 during signoff") 190 191 * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance 192 information about the selected set of processors. 193
··· 188 manual starting from commit 95cf1f9 ("Add changes requested by Ved 189 during signoff") 190 191 + * :c:macro:`RISCV_HWPROBE_EXT_ZIHINTPAUSE`: The Zihintpause extension is 192 + supported as defined in the RISC-V ISA manual starting from commit 193 + d8ab5c78c207 ("Zihintpause is ratified"). 194 + 195 * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance 196 information about the selected set of processors. 197
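As a quick illustration of the new hwprobe bit documented above, here is a minimal userspace sketch (not part of this series; the file layout and error handling are illustrative assumptions) that tests for Zihintpause through the riscv_hwprobe() syscall. It assumes uapi headers recent enough to provide __NR_riscv_hwprobe and RISCV_HWPROBE_EXT_ZIHINTPAUSE:

    /* Illustrative only: query RISCV_HWPROBE_KEY_IMA_EXT_0 for Zihintpause. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>        /* struct riscv_hwprobe, RISCV_HWPROBE_* */

    int main(void)
    {
            struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };
            cpu_set_t cpus;

            /* Probe only the harts this task is currently allowed to run on. */
            if (sched_getaffinity(0, sizeof(cpus), &cpus))
                    return 1;

            if (syscall(__NR_riscv_hwprobe, &pair, 1, sizeof(cpus), &cpus, 0))
                    return 1;

            printf("Zihintpause: %s\n",
                   (pair.value & RISCV_HWPROBE_EXT_ZIHINTPAUSE) ? "supported" : "not supported");
            return 0;
    }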
+1
Documentation/arch/riscv/index.rst
··· 13 patch-acceptance 14 uabi 15 vector 16 17 features 18
··· 13 patch-acceptance 14 uabi 15 vector 16 + cmodx 17 18 features 19
+1
Documentation/rust/arch-support.rst
··· 17 ============= ================ ============================================== 18 ``arm64`` Maintained Little Endian only. 19 ``loongarch`` Maintained \- 20 ``um`` Maintained ``x86_64`` only. 21 ``x86`` Maintained ``x86_64`` only. 22 ============= ================ ==============================================
··· 17 ============= ================ ============================================== 18 ``arm64`` Maintained Little Endian only. 19 ``loongarch`` Maintained \- 20 + ``riscv`` Maintained ``riscv64`` only. 21 ``um`` Maintained ``x86_64`` only. 22 ``x86`` Maintained ``x86_64`` only. 23 ============= ================ ==============================================
+14 -8
arch/riscv/Kconfig
··· 23 select ARCH_HAS_DEBUG_VIRTUAL if MMU 24 select ARCH_HAS_DEBUG_VM_PGTABLE 25 select ARCH_HAS_DEBUG_WX 26 select ARCH_HAS_FORTIFY_SOURCE 27 select ARCH_HAS_GCOV_PROFILE_ALL 28 select ARCH_HAS_GIGANTIC_PAGE ··· 58 select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU 59 select ARCH_SUPPORTS_PER_VMA_LOCK if MMU 60 select ARCH_SUPPORTS_SHADOW_CALL_STACK if HAVE_SHADOW_CALL_STACK 61 select ARCH_USE_MEMTEST 62 select ARCH_USE_QUEUED_RWLOCKS 63 select ARCH_USES_CFI_TRAPS if CFI_CLANG 64 - select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if SMP && MMU 65 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU 66 select ARCH_WANT_FRAME_POINTERS 67 select ARCH_WANT_GENERAL_HUGETLB if !RISCV_ISA_SVNAPOT ··· 73 select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE 74 select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU 75 select BUILDTIME_TABLE_SORT if MMU 76 - select CLINT_TIMER if !MMU 77 select CLONE_BACKWARDS 78 select COMMON_CLK 79 select CPU_PM if CPU_IDLE || HIBERNATION || SUSPEND ··· 157 select HAVE_REGS_AND_STACK_ACCESS_API 158 select HAVE_RETHOOK if !XIP_KERNEL 159 select HAVE_RSEQ 160 select HAVE_SAMPLE_FTRACE_DIRECT 161 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI 162 select HAVE_STACKPROTECTOR ··· 234 235 # set if we run in machine mode, cleared if we run in supervisor mode 236 config RISCV_M_MODE 237 - bool 238 - default !MMU 239 240 # set if we are running in S-mode and can use SBI calls 241 config RISCV_SBI ··· 256 257 config PAGE_OFFSET 258 hex 259 - default 0xC0000000 if 32BIT && MMU 260 - default 0x80000000 if !MMU 261 default 0xff60000000000000 if 64BIT 262 263 config KASAN_SHADOW_OFFSET ··· 606 config RISCV_ISA_ZBB 607 bool "Zbb extension support for bit manipulation instructions" 608 depends on TOOLCHAIN_HAS_ZBB 609 - depends on MMU 610 depends on RISCV_ALTERNATIVE 611 default y 612 help ··· 637 638 config RISCV_ISA_ZICBOZ 639 bool "Zicboz extension support for faster zeroing of memory" 640 - depends on MMU 641 depends on RISCV_ALTERNATIVE 642 default y 643 help
··· 23 select ARCH_HAS_DEBUG_VIRTUAL if MMU 24 select ARCH_HAS_DEBUG_VM_PGTABLE 25 select ARCH_HAS_DEBUG_WX 26 + select ARCH_HAS_FAST_MULTIPLIER 27 select ARCH_HAS_FORTIFY_SOURCE 28 select ARCH_HAS_GCOV_PROFILE_ALL 29 select ARCH_HAS_GIGANTIC_PAGE ··· 57 select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU 58 select ARCH_SUPPORTS_PER_VMA_LOCK if MMU 59 select ARCH_SUPPORTS_SHADOW_CALL_STACK if HAVE_SHADOW_CALL_STACK 60 + select ARCH_USE_CMPXCHG_LOCKREF if 64BIT 61 select ARCH_USE_MEMTEST 62 select ARCH_USE_QUEUED_RWLOCKS 63 select ARCH_USES_CFI_TRAPS if CFI_CLANG 64 + select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if MMU 65 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU 66 select ARCH_WANT_FRAME_POINTERS 67 select ARCH_WANT_GENERAL_HUGETLB if !RISCV_ISA_SVNAPOT ··· 71 select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE 72 select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU 73 select BUILDTIME_TABLE_SORT if MMU 74 + select CLINT_TIMER if RISCV_M_MODE 75 select CLONE_BACKWARDS 76 select COMMON_CLK 77 select CPU_PM if CPU_IDLE || HIBERNATION || SUSPEND ··· 155 select HAVE_REGS_AND_STACK_ACCESS_API 156 select HAVE_RETHOOK if !XIP_KERNEL 157 select HAVE_RSEQ 158 + select HAVE_RUST if 64BIT 159 select HAVE_SAMPLE_FTRACE_DIRECT 160 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI 161 select HAVE_STACKPROTECTOR ··· 231 232 # set if we run in machine mode, cleared if we run in supervisor mode 233 config RISCV_M_MODE 234 + bool "Build a kernel that runs in machine mode" 235 + depends on !MMU 236 + default y 237 + help 238 + Select this option if you want to run the kernel in M-mode, 239 + without the assistance of any other firmware. 240 241 # set if we are running in S-mode and can use SBI calls 242 config RISCV_SBI ··· 249 250 config PAGE_OFFSET 251 hex 252 + default 0x80000000 if !MMU && RISCV_M_MODE 253 + default 0x80200000 if !MMU 254 + default 0xc0000000 if 32BIT 255 default 0xff60000000000000 if 64BIT 256 257 config KASAN_SHADOW_OFFSET ··· 598 config RISCV_ISA_ZBB 599 bool "Zbb extension support for bit manipulation instructions" 600 depends on TOOLCHAIN_HAS_ZBB 601 depends on RISCV_ALTERNATIVE 602 default y 603 help ··· 630 631 config RISCV_ISA_ZICBOZ 632 bool "Zicboz extension support for faster zeroing of memory" 633 depends on RISCV_ALTERNATIVE 634 default y 635 help
+15 -11
arch/riscv/Makefile
··· 34 KBUILD_AFLAGS += -mabi=lp64 35 36 KBUILD_LDFLAGS += -melf64lriscv 37 else 38 BITS := 32 39 UTS_MACHINE := riscv32 ··· 70 riscv-march-$(CONFIG_FPU) := $(riscv-march-y)fd 71 riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c 72 riscv-march-$(CONFIG_RISCV_ISA_V) := $(riscv-march-y)v 73 74 ifdef CONFIG_TOOLCHAIN_NEEDS_OLD_ISA_SPEC 75 KBUILD_CFLAGS += -Wa,-misa-spec=2.2 ··· 140 ifeq ($(CONFIG_XIP_KERNEL),y) 141 KBUILD_IMAGE := $(boot)/xipImage 142 else 143 KBUILD_IMAGE := $(boot)/Image.gz 144 endif 145 146 libs-y += arch/riscv/lib/ ··· 168 vdso-install-y += arch/riscv/kernel/vdso/vdso.so.dbg 169 vdso-install-$(CONFIG_COMPAT) += arch/riscv/kernel/compat_vdso/compat_vdso.so.dbg 170 171 - ifneq ($(CONFIG_XIP_KERNEL),y) 172 - ifeq ($(CONFIG_RISCV_M_MODE)$(CONFIG_SOC_CANAAN_K210),yy) 173 - KBUILD_IMAGE := $(boot)/loader.bin 174 - else 175 - ifeq ($(CONFIG_EFI_ZBOOT),) 176 - KBUILD_IMAGE := $(boot)/Image.gz 177 - else 178 - KBUILD_IMAGE := $(boot)/vmlinuz.efi 179 - endif 180 - endif 181 - endif 182 BOOT_TARGETS := Image Image.gz loader loader.bin xipImage vmlinuz.efi 183 184 all: $(notdir $(KBUILD_IMAGE))
··· 34 KBUILD_AFLAGS += -mabi=lp64 35 36 KBUILD_LDFLAGS += -melf64lriscv 37 + 38 + KBUILD_RUSTFLAGS += -Ctarget-cpu=generic-rv64 --target=riscv64imac-unknown-none-elf \ 39 + -Cno-redzone 40 else 41 BITS := 32 42 UTS_MACHINE := riscv32 ··· 67 riscv-march-$(CONFIG_FPU) := $(riscv-march-y)fd 68 riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c 69 riscv-march-$(CONFIG_RISCV_ISA_V) := $(riscv-march-y)v 70 + 71 + ifneq ($(CONFIG_RISCV_ISA_C),y) 72 + KBUILD_RUSTFLAGS += -Ctarget-feature=-c 73 + endif 74 75 ifdef CONFIG_TOOLCHAIN_NEEDS_OLD_ISA_SPEC 76 KBUILD_CFLAGS += -Wa,-misa-spec=2.2 ··· 133 ifeq ($(CONFIG_XIP_KERNEL),y) 134 KBUILD_IMAGE := $(boot)/xipImage 135 else 136 + ifeq ($(CONFIG_RISCV_M_MODE)$(CONFIG_SOC_CANAAN_K210),yy) 137 + KBUILD_IMAGE := $(boot)/loader.bin 138 + else 139 + ifeq ($(CONFIG_EFI_ZBOOT),) 140 KBUILD_IMAGE := $(boot)/Image.gz 141 + else 142 + KBUILD_IMAGE := $(boot)/vmlinuz.efi 143 + endif 144 + endif 145 endif 146 147 libs-y += arch/riscv/lib/ ··· 153 vdso-install-y += arch/riscv/kernel/vdso/vdso.so.dbg 154 vdso-install-$(CONFIG_COMPAT) += arch/riscv/kernel/compat_vdso/compat_vdso.so.dbg 155 156 BOOT_TARGETS := Image Image.gz loader loader.bin xipImage vmlinuz.efi 157 158 all: $(notdir $(KBUILD_IMAGE))
+1
arch/riscv/configs/defconfig
··· 234 CONFIG_VIRTIO_INPUT=y 235 CONFIG_VIRTIO_MMIO=y 236 CONFIG_RENESAS_OSTM=y 237 CONFIG_SUN8I_DE2_CCU=m 238 CONFIG_SUN50I_IOMMU=y 239 CONFIG_RPMSG_CHAR=y
··· 234 CONFIG_VIRTIO_INPUT=y 235 CONFIG_VIRTIO_MMIO=y 236 CONFIG_RENESAS_OSTM=y 237 + CONFIG_CLK_SOPHGO_CV1800=y 238 CONFIG_SUN8I_DE2_CCU=m 239 CONFIG_SUN50I_IOMMU=y 240 CONFIG_RPMSG_CHAR=y
+5
arch/riscv/errata/sifive/errata.c
··· 42 return false; 43 if ((impid & 0xffffff) > 0x200630 || impid == 0x1200626) 44 return false; 45 return true; 46 } 47
··· 42 return false; 43 if ((impid & 0xffffff) > 0x200630 || impid == 0x1200626) 44 return false; 45 + 46 + #ifdef CONFIG_MMU 47 + tlb_flush_all_threshold = 0; 48 + #endif 49 + 50 return true; 51 } 52
+76 -88
arch/riscv/include/asm/atomic.h
··· 195 #undef ATOMIC_FETCH_OP 196 #undef ATOMIC_OP_RETURN 197 198 /* This is required to provide a full barrier on success. */ 199 static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) 200 { 201 int prev, rc; 202 203 - __asm__ __volatile__ ( 204 - "0: lr.w %[p], %[c]\n" 205 - " beq %[p], %[u], 1f\n" 206 - " add %[rc], %[p], %[a]\n" 207 - " sc.w.rl %[rc], %[rc], %[c]\n" 208 - " bnez %[rc], 0b\n" 209 - RISCV_FULL_BARRIER 210 - "1:\n" 211 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 212 - : [a]"r" (a), [u]"r" (u) 213 - : "memory"); 214 return prev; 215 } 216 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless ··· 227 s64 prev; 228 long rc; 229 230 - __asm__ __volatile__ ( 231 - "0: lr.d %[p], %[c]\n" 232 - " beq %[p], %[u], 1f\n" 233 - " add %[rc], %[p], %[a]\n" 234 - " sc.d.rl %[rc], %[rc], %[c]\n" 235 - " bnez %[rc], 0b\n" 236 - RISCV_FULL_BARRIER 237 - "1:\n" 238 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 239 - : [a]"r" (a), [u]"r" (u) 240 - : "memory"); 241 return prev; 242 } 243 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless 244 #endif 245 246 static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v) 247 { 248 int prev, rc; 249 250 - __asm__ __volatile__ ( 251 - "0: lr.w %[p], %[c]\n" 252 - " bltz %[p], 1f\n" 253 - " addi %[rc], %[p], 1\n" 254 - " sc.w.rl %[rc], %[rc], %[c]\n" 255 - " bnez %[rc], 0b\n" 256 - RISCV_FULL_BARRIER 257 - "1:\n" 258 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 259 - : 260 - : "memory"); 261 return !(prev < 0); 262 } 263 264 #define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative 265 266 static __always_inline bool arch_atomic_dec_unless_positive(atomic_t *v) 267 { 268 int prev, rc; 269 270 - __asm__ __volatile__ ( 271 - "0: lr.w %[p], %[c]\n" 272 - " bgtz %[p], 1f\n" 273 - " addi %[rc], %[p], -1\n" 274 - " sc.w.rl %[rc], %[rc], %[c]\n" 275 - " bnez %[rc], 0b\n" 276 - RISCV_FULL_BARRIER 277 - "1:\n" 278 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 279 - : 280 - : "memory"); 281 return !(prev > 0); 282 } 283 284 #define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive 285 286 static __always_inline int arch_atomic_dec_if_positive(atomic_t *v) 287 { 288 int prev, rc; 289 290 - __asm__ __volatile__ ( 291 - "0: lr.w %[p], %[c]\n" 292 - " addi %[rc], %[p], -1\n" 293 - " bltz %[rc], 1f\n" 294 - " sc.w.rl %[rc], %[rc], %[c]\n" 295 - " bnez %[rc], 0b\n" 296 - RISCV_FULL_BARRIER 297 - "1:\n" 298 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 299 - : 300 - : "memory"); 301 return prev - 1; 302 } 303 ··· 318 s64 prev; 319 long rc; 320 321 - __asm__ __volatile__ ( 322 - "0: lr.d %[p], %[c]\n" 323 - " bltz %[p], 1f\n" 324 - " addi %[rc], %[p], 1\n" 325 - " sc.d.rl %[rc], %[rc], %[c]\n" 326 - " bnez %[rc], 0b\n" 327 - RISCV_FULL_BARRIER 328 - "1:\n" 329 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 330 - : 331 - : "memory"); 332 return !(prev < 0); 333 } 334 ··· 330 s64 prev; 331 long rc; 332 333 - __asm__ __volatile__ ( 334 - "0: lr.d %[p], %[c]\n" 335 - " bgtz %[p], 1f\n" 336 - " addi %[rc], %[p], -1\n" 337 - " sc.d.rl %[rc], %[rc], %[c]\n" 338 - " bnez %[rc], 0b\n" 339 - RISCV_FULL_BARRIER 340 - "1:\n" 341 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 342 - : 343 - : "memory"); 344 return !(prev > 0); 345 } 346 ··· 342 s64 prev; 343 long rc; 344 345 - __asm__ __volatile__ ( 346 - "0: lr.d %[p], %[c]\n" 347 - " addi %[rc], %[p], -1\n" 348 - " bltz %[rc], 1f\n" 349 - " 
sc.d.rl %[rc], %[rc], %[c]\n" 350 - " bnez %[rc], 0b\n" 351 - RISCV_FULL_BARRIER 352 - "1:\n" 353 - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) 354 - : 355 - : "memory"); 356 return prev - 1; 357 } 358
··· 195 #undef ATOMIC_FETCH_OP 196 #undef ATOMIC_OP_RETURN 197 198 + #define _arch_atomic_fetch_add_unless(_prev, _rc, counter, _a, _u, sfx) \ 199 + ({ \ 200 + __asm__ __volatile__ ( \ 201 + "0: lr." sfx " %[p], %[c]\n" \ 202 + " beq %[p], %[u], 1f\n" \ 203 + " add %[rc], %[p], %[a]\n" \ 204 + " sc." sfx ".rl %[rc], %[rc], %[c]\n" \ 205 + " bnez %[rc], 0b\n" \ 206 + " fence rw, rw\n" \ 207 + "1:\n" \ 208 + : [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \ 209 + : [a]"r" (_a), [u]"r" (_u) \ 210 + : "memory"); \ 211 + }) 212 + 213 /* This is required to provide a full barrier on success. */ 214 static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) 215 { 216 int prev, rc; 217 218 + _arch_atomic_fetch_add_unless(prev, rc, v->counter, a, u, "w"); 219 + 220 return prev; 221 } 222 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless ··· 221 s64 prev; 222 long rc; 223 224 + _arch_atomic_fetch_add_unless(prev, rc, v->counter, a, u, "d"); 225 + 226 return prev; 227 } 228 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless 229 #endif 230 231 + #define _arch_atomic_inc_unless_negative(_prev, _rc, counter, sfx) \ 232 + ({ \ 233 + __asm__ __volatile__ ( \ 234 + "0: lr." sfx " %[p], %[c]\n" \ 235 + " bltz %[p], 1f\n" \ 236 + " addi %[rc], %[p], 1\n" \ 237 + " sc." sfx ".rl %[rc], %[rc], %[c]\n" \ 238 + " bnez %[rc], 0b\n" \ 239 + " fence rw, rw\n" \ 240 + "1:\n" \ 241 + : [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \ 242 + : \ 243 + : "memory"); \ 244 + }) 245 + 246 static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v) 247 { 248 int prev, rc; 249 250 + _arch_atomic_inc_unless_negative(prev, rc, v->counter, "w"); 251 + 252 return !(prev < 0); 253 } 254 255 #define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative 256 257 + #define _arch_atomic_dec_unless_positive(_prev, _rc, counter, sfx) \ 258 + ({ \ 259 + __asm__ __volatile__ ( \ 260 + "0: lr." sfx " %[p], %[c]\n" \ 261 + " bgtz %[p], 1f\n" \ 262 + " addi %[rc], %[p], -1\n" \ 263 + " sc." sfx ".rl %[rc], %[rc], %[c]\n" \ 264 + " bnez %[rc], 0b\n" \ 265 + " fence rw, rw\n" \ 266 + "1:\n" \ 267 + : [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \ 268 + : \ 269 + : "memory"); \ 270 + }) 271 + 272 static __always_inline bool arch_atomic_dec_unless_positive(atomic_t *v) 273 { 274 int prev, rc; 275 276 + _arch_atomic_dec_unless_positive(prev, rc, v->counter, "w"); 277 + 278 return !(prev > 0); 279 } 280 281 #define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive 282 283 + #define _arch_atomic_dec_if_positive(_prev, _rc, counter, sfx) \ 284 + ({ \ 285 + __asm__ __volatile__ ( \ 286 + "0: lr." sfx " %[p], %[c]\n" \ 287 + " addi %[rc], %[p], -1\n" \ 288 + " bltz %[rc], 1f\n" \ 289 + " sc." 
sfx ".rl %[rc], %[rc], %[c]\n" \ 290 + " bnez %[rc], 0b\n" \ 291 + " fence rw, rw\n" \ 292 + "1:\n" \ 293 + : [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \ 294 + : \ 295 + : "memory"); \ 296 + }) 297 + 298 static __always_inline int arch_atomic_dec_if_positive(atomic_t *v) 299 { 300 int prev, rc; 301 302 + _arch_atomic_dec_if_positive(prev, rc, v->counter, "w"); 303 + 304 return prev - 1; 305 } 306 ··· 303 s64 prev; 304 long rc; 305 306 + _arch_atomic_inc_unless_negative(prev, rc, v->counter, "d"); 307 + 308 return !(prev < 0); 309 } 310 ··· 324 s64 prev; 325 long rc; 326 327 + _arch_atomic_dec_unless_positive(prev, rc, v->counter, "d"); 328 + 329 return !(prev > 0); 330 } 331 ··· 345 s64 prev; 346 long rc; 347 348 + _arch_atomic_dec_if_positive(prev, rc, v->counter, "d"); 349 + 350 return prev - 1; 351 } 352
+1 -1
arch/riscv/include/asm/cache.h
··· 26 27 #ifndef __ASSEMBLY__ 28 29 - #ifdef CONFIG_RISCV_DMA_NONCOHERENT 30 extern int dma_cache_alignment; 31 #define dma_get_cache_alignment dma_get_cache_alignment 32 static inline int dma_get_cache_alignment(void) 33 {
··· 26 27 #ifndef __ASSEMBLY__ 28 29 extern int dma_cache_alignment; 30 + #ifdef CONFIG_RISCV_DMA_NONCOHERENT 31 #define dma_get_cache_alignment dma_get_cache_alignment 32 static inline int dma_get_cache_alignment(void) 33 {
+5 -2
arch/riscv/include/asm/cacheflush.h
··· 33 * so instead we just flush the whole thing. 34 */ 35 #define flush_icache_range(start, end) flush_icache_all() 36 - #define flush_icache_user_page(vma, pg, addr, len) \ 37 - flush_icache_mm(vma->vm_mm, 0) 38 39 #ifdef CONFIG_64BIT 40 #define flush_cache_vmap(start, end) flush_tlb_kernel_range(start, end)
··· 33 * so instead we just flush the whole thing. 34 */ 35 #define flush_icache_range(start, end) flush_icache_all() 36 + #define flush_icache_user_page(vma, pg, addr, len) \ 37 + do { \ 38 + if (vma->vm_flags & VM_EXEC) \ 39 + flush_icache_mm(vma->vm_mm, 0); \ 40 + } while (0) 41 42 #ifdef CONFIG_64BIT 43 #define flush_cache_vmap(start, end) flush_tlb_kernel_range(start, end)
+142 -280
arch/riscv/include/asm/cmpxchg.h
··· 10 11 #include <asm/fence.h> 12 13 - #define __xchg_relaxed(ptr, new, size) \ 14 ({ \ 15 __typeof__(ptr) __ptr = (ptr); \ 16 - __typeof__(new) __new = (new); \ 17 - __typeof__(*(ptr)) __ret; \ 18 - switch (size) { \ 19 case 4: \ 20 - __asm__ __volatile__ ( \ 21 - " amoswap.w %0, %2, %1\n" \ 22 - : "=r" (__ret), "+A" (*__ptr) \ 23 - : "r" (__new) \ 24 - : "memory"); \ 25 break; \ 26 case 8: \ 27 - __asm__ __volatile__ ( \ 28 - " amoswap.d %0, %2, %1\n" \ 29 - : "=r" (__ret), "+A" (*__ptr) \ 30 - : "r" (__new) \ 31 - : "memory"); \ 32 break; \ 33 default: \ 34 BUILD_BUG(); \ 35 } \ 36 - __ret; \ 37 }) 38 39 #define arch_xchg_relaxed(ptr, x) \ 40 - ({ \ 41 - __typeof__(*(ptr)) _x_ = (x); \ 42 - (__typeof__(*(ptr))) __xchg_relaxed((ptr), \ 43 - _x_, sizeof(*(ptr))); \ 44 - }) 45 - 46 - #define __xchg_acquire(ptr, new, size) \ 47 - ({ \ 48 - __typeof__(ptr) __ptr = (ptr); \ 49 - __typeof__(new) __new = (new); \ 50 - __typeof__(*(ptr)) __ret; \ 51 - switch (size) { \ 52 - case 4: \ 53 - __asm__ __volatile__ ( \ 54 - " amoswap.w %0, %2, %1\n" \ 55 - RISCV_ACQUIRE_BARRIER \ 56 - : "=r" (__ret), "+A" (*__ptr) \ 57 - : "r" (__new) \ 58 - : "memory"); \ 59 - break; \ 60 - case 8: \ 61 - __asm__ __volatile__ ( \ 62 - " amoswap.d %0, %2, %1\n" \ 63 - RISCV_ACQUIRE_BARRIER \ 64 - : "=r" (__ret), "+A" (*__ptr) \ 65 - : "r" (__new) \ 66 - : "memory"); \ 67 - break; \ 68 - default: \ 69 - BUILD_BUG(); \ 70 - } \ 71 - __ret; \ 72 - }) 73 74 #define arch_xchg_acquire(ptr, x) \ 75 - ({ \ 76 - __typeof__(*(ptr)) _x_ = (x); \ 77 - (__typeof__(*(ptr))) __xchg_acquire((ptr), \ 78 - _x_, sizeof(*(ptr))); \ 79 - }) 80 - 81 - #define __xchg_release(ptr, new, size) \ 82 - ({ \ 83 - __typeof__(ptr) __ptr = (ptr); \ 84 - __typeof__(new) __new = (new); \ 85 - __typeof__(*(ptr)) __ret; \ 86 - switch (size) { \ 87 - case 4: \ 88 - __asm__ __volatile__ ( \ 89 - RISCV_RELEASE_BARRIER \ 90 - " amoswap.w %0, %2, %1\n" \ 91 - : "=r" (__ret), "+A" (*__ptr) \ 92 - : "r" (__new) \ 93 - : "memory"); \ 94 - break; \ 95 - case 8: \ 96 - __asm__ __volatile__ ( \ 97 - RISCV_RELEASE_BARRIER \ 98 - " amoswap.d %0, %2, %1\n" \ 99 - : "=r" (__ret), "+A" (*__ptr) \ 100 - : "r" (__new) \ 101 - : "memory"); \ 102 - break; \ 103 - default: \ 104 - BUILD_BUG(); \ 105 - } \ 106 - __ret; \ 107 - }) 108 109 #define arch_xchg_release(ptr, x) \ 110 - ({ \ 111 - __typeof__(*(ptr)) _x_ = (x); \ 112 - (__typeof__(*(ptr))) __xchg_release((ptr), \ 113 - _x_, sizeof(*(ptr))); \ 114 - }) 115 - 116 - #define __arch_xchg(ptr, new, size) \ 117 - ({ \ 118 - __typeof__(ptr) __ptr = (ptr); \ 119 - __typeof__(new) __new = (new); \ 120 - __typeof__(*(ptr)) __ret; \ 121 - switch (size) { \ 122 - case 4: \ 123 - __asm__ __volatile__ ( \ 124 - " amoswap.w.aqrl %0, %2, %1\n" \ 125 - : "=r" (__ret), "+A" (*__ptr) \ 126 - : "r" (__new) \ 127 - : "memory"); \ 128 - break; \ 129 - case 8: \ 130 - __asm__ __volatile__ ( \ 131 - " amoswap.d.aqrl %0, %2, %1\n" \ 132 - : "=r" (__ret), "+A" (*__ptr) \ 133 - : "r" (__new) \ 134 - : "memory"); \ 135 - break; \ 136 - default: \ 137 - BUILD_BUG(); \ 138 - } \ 139 - __ret; \ 140 - }) 141 142 #define arch_xchg(ptr, x) \ 143 - ({ \ 144 - __typeof__(*(ptr)) _x_ = (x); \ 145 - (__typeof__(*(ptr))) __arch_xchg((ptr), _x_, sizeof(*(ptr))); \ 146 - }) 147 148 #define xchg32(ptr, x) \ 149 ({ \ ··· 101 * store NEW in MEM. Return the initial value in MEM. Success is 102 * indicated by comparing RETURN with OLD. 
103 */ 104 - #define __cmpxchg_relaxed(ptr, old, new, size) \ 105 ({ \ 106 __typeof__(ptr) __ptr = (ptr); \ 107 - __typeof__(*(ptr)) __old = (old); \ 108 - __typeof__(*(ptr)) __new = (new); \ 109 - __typeof__(*(ptr)) __ret; \ 110 - register unsigned int __rc; \ 111 - switch (size) { \ 112 case 4: \ 113 - __asm__ __volatile__ ( \ 114 - "0: lr.w %0, %2\n" \ 115 - " bne %0, %z3, 1f\n" \ 116 - " sc.w %1, %z4, %2\n" \ 117 - " bnez %1, 0b\n" \ 118 - "1:\n" \ 119 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 120 - : "rJ" ((long)__old), "rJ" (__new) \ 121 - : "memory"); \ 122 break; \ 123 case 8: \ 124 - __asm__ __volatile__ ( \ 125 - "0: lr.d %0, %2\n" \ 126 - " bne %0, %z3, 1f\n" \ 127 - " sc.d %1, %z4, %2\n" \ 128 - " bnez %1, 0b\n" \ 129 - "1:\n" \ 130 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 131 - : "rJ" (__old), "rJ" (__new) \ 132 - : "memory"); \ 133 break; \ 134 default: \ 135 BUILD_BUG(); \ 136 } \ 137 - __ret; \ 138 }) 139 140 #define arch_cmpxchg_relaxed(ptr, o, n) \ 141 - ({ \ 142 - __typeof__(*(ptr)) _o_ = (o); \ 143 - __typeof__(*(ptr)) _n_ = (n); \ 144 - (__typeof__(*(ptr))) __cmpxchg_relaxed((ptr), \ 145 - _o_, _n_, sizeof(*(ptr))); \ 146 - }) 147 - 148 - #define __cmpxchg_acquire(ptr, old, new, size) \ 149 - ({ \ 150 - __typeof__(ptr) __ptr = (ptr); \ 151 - __typeof__(*(ptr)) __old = (old); \ 152 - __typeof__(*(ptr)) __new = (new); \ 153 - __typeof__(*(ptr)) __ret; \ 154 - register unsigned int __rc; \ 155 - switch (size) { \ 156 - case 4: \ 157 - __asm__ __volatile__ ( \ 158 - "0: lr.w %0, %2\n" \ 159 - " bne %0, %z3, 1f\n" \ 160 - " sc.w %1, %z4, %2\n" \ 161 - " bnez %1, 0b\n" \ 162 - RISCV_ACQUIRE_BARRIER \ 163 - "1:\n" \ 164 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 165 - : "rJ" ((long)__old), "rJ" (__new) \ 166 - : "memory"); \ 167 - break; \ 168 - case 8: \ 169 - __asm__ __volatile__ ( \ 170 - "0: lr.d %0, %2\n" \ 171 - " bne %0, %z3, 1f\n" \ 172 - " sc.d %1, %z4, %2\n" \ 173 - " bnez %1, 0b\n" \ 174 - RISCV_ACQUIRE_BARRIER \ 175 - "1:\n" \ 176 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 177 - : "rJ" (__old), "rJ" (__new) \ 178 - : "memory"); \ 179 - break; \ 180 - default: \ 181 - BUILD_BUG(); \ 182 - } \ 183 - __ret; \ 184 - }) 185 186 #define arch_cmpxchg_acquire(ptr, o, n) \ 187 - ({ \ 188 - __typeof__(*(ptr)) _o_ = (o); \ 189 - __typeof__(*(ptr)) _n_ = (n); \ 190 - (__typeof__(*(ptr))) __cmpxchg_acquire((ptr), \ 191 - _o_, _n_, sizeof(*(ptr))); \ 192 - }) 193 - 194 - #define __cmpxchg_release(ptr, old, new, size) \ 195 - ({ \ 196 - __typeof__(ptr) __ptr = (ptr); \ 197 - __typeof__(*(ptr)) __old = (old); \ 198 - __typeof__(*(ptr)) __new = (new); \ 199 - __typeof__(*(ptr)) __ret; \ 200 - register unsigned int __rc; \ 201 - switch (size) { \ 202 - case 4: \ 203 - __asm__ __volatile__ ( \ 204 - RISCV_RELEASE_BARRIER \ 205 - "0: lr.w %0, %2\n" \ 206 - " bne %0, %z3, 1f\n" \ 207 - " sc.w %1, %z4, %2\n" \ 208 - " bnez %1, 0b\n" \ 209 - "1:\n" \ 210 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 211 - : "rJ" ((long)__old), "rJ" (__new) \ 212 - : "memory"); \ 213 - break; \ 214 - case 8: \ 215 - __asm__ __volatile__ ( \ 216 - RISCV_RELEASE_BARRIER \ 217 - "0: lr.d %0, %2\n" \ 218 - " bne %0, %z3, 1f\n" \ 219 - " sc.d %1, %z4, %2\n" \ 220 - " bnez %1, 0b\n" \ 221 - "1:\n" \ 222 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 223 - : "rJ" (__old), "rJ" (__new) \ 224 - : "memory"); \ 225 - break; \ 226 - default: \ 227 - BUILD_BUG(); \ 228 - } \ 229 - __ret; \ 230 - }) 231 232 #define arch_cmpxchg_release(ptr, o, n) \ 233 - ({ \ 234 - __typeof__(*(ptr)) 
_o_ = (o); \ 235 - __typeof__(*(ptr)) _n_ = (n); \ 236 - (__typeof__(*(ptr))) __cmpxchg_release((ptr), \ 237 - _o_, _n_, sizeof(*(ptr))); \ 238 - }) 239 - 240 - #define __cmpxchg(ptr, old, new, size) \ 241 - ({ \ 242 - __typeof__(ptr) __ptr = (ptr); \ 243 - __typeof__(*(ptr)) __old = (old); \ 244 - __typeof__(*(ptr)) __new = (new); \ 245 - __typeof__(*(ptr)) __ret; \ 246 - register unsigned int __rc; \ 247 - switch (size) { \ 248 - case 4: \ 249 - __asm__ __volatile__ ( \ 250 - "0: lr.w %0, %2\n" \ 251 - " bne %0, %z3, 1f\n" \ 252 - " sc.w.rl %1, %z4, %2\n" \ 253 - " bnez %1, 0b\n" \ 254 - RISCV_FULL_BARRIER \ 255 - "1:\n" \ 256 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 257 - : "rJ" ((long)__old), "rJ" (__new) \ 258 - : "memory"); \ 259 - break; \ 260 - case 8: \ 261 - __asm__ __volatile__ ( \ 262 - "0: lr.d %0, %2\n" \ 263 - " bne %0, %z3, 1f\n" \ 264 - " sc.d.rl %1, %z4, %2\n" \ 265 - " bnez %1, 0b\n" \ 266 - RISCV_FULL_BARRIER \ 267 - "1:\n" \ 268 - : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \ 269 - : "rJ" (__old), "rJ" (__new) \ 270 - : "memory"); \ 271 - break; \ 272 - default: \ 273 - BUILD_BUG(); \ 274 - } \ 275 - __ret; \ 276 - }) 277 278 #define arch_cmpxchg(ptr, o, n) \ 279 - ({ \ 280 - __typeof__(*(ptr)) _o_ = (o); \ 281 - __typeof__(*(ptr)) _n_ = (n); \ 282 - (__typeof__(*(ptr))) __cmpxchg((ptr), \ 283 - _o_, _n_, sizeof(*(ptr))); \ 284 - }) 285 286 #define arch_cmpxchg_local(ptr, o, n) \ 287 - (__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr)))) 288 289 #define arch_cmpxchg64(ptr, o, n) \ 290 ({ \ ··· 201 ({ \ 202 BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ 203 arch_cmpxchg_relaxed((ptr), (o), (n)); \ 204 }) 205 206 #endif /* _ASM_RISCV_CMPXCHG_H */
··· 10 11 #include <asm/fence.h> 12 13 + #define __arch_xchg_masked(prepend, append, r, p, n) \ 14 + ({ \ 15 + u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \ 16 + ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \ 17 + ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \ 18 + << __s; \ 19 + ulong __newx = (ulong)(n) << __s; \ 20 + ulong __retx; \ 21 + ulong __rc; \ 22 + \ 23 + __asm__ __volatile__ ( \ 24 + prepend \ 25 + "0: lr.w %0, %2\n" \ 26 + " and %1, %0, %z4\n" \ 27 + " or %1, %1, %z3\n" \ 28 + " sc.w %1, %1, %2\n" \ 29 + " bnez %1, 0b\n" \ 30 + append \ 31 + : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \ 32 + : "rJ" (__newx), "rJ" (~__mask) \ 33 + : "memory"); \ 34 + \ 35 + r = (__typeof__(*(p)))((__retx & __mask) >> __s); \ 36 + }) 37 + 38 + #define __arch_xchg(sfx, prepend, append, r, p, n) \ 39 + ({ \ 40 + __asm__ __volatile__ ( \ 41 + prepend \ 42 + " amoswap" sfx " %0, %2, %1\n" \ 43 + append \ 44 + : "=r" (r), "+A" (*(p)) \ 45 + : "r" (n) \ 46 + : "memory"); \ 47 + }) 48 + 49 + #define _arch_xchg(ptr, new, sfx, prepend, append) \ 50 ({ \ 51 __typeof__(ptr) __ptr = (ptr); \ 52 + __typeof__(*(__ptr)) __new = (new); \ 53 + __typeof__(*(__ptr)) __ret; \ 54 + \ 55 + switch (sizeof(*__ptr)) { \ 56 + case 1: \ 57 + case 2: \ 58 + __arch_xchg_masked(prepend, append, \ 59 + __ret, __ptr, __new); \ 60 + break; \ 61 case 4: \ 62 + __arch_xchg(".w" sfx, prepend, append, \ 63 + __ret, __ptr, __new); \ 64 break; \ 65 case 8: \ 66 + __arch_xchg(".d" sfx, prepend, append, \ 67 + __ret, __ptr, __new); \ 68 break; \ 69 default: \ 70 BUILD_BUG(); \ 71 } \ 72 + (__typeof__(*(__ptr)))__ret; \ 73 }) 74 75 #define arch_xchg_relaxed(ptr, x) \ 76 + _arch_xchg(ptr, x, "", "", "") 77 78 #define arch_xchg_acquire(ptr, x) \ 79 + _arch_xchg(ptr, x, "", "", RISCV_ACQUIRE_BARRIER) 80 81 #define arch_xchg_release(ptr, x) \ 82 + _arch_xchg(ptr, x, "", RISCV_RELEASE_BARRIER, "") 83 84 #define arch_xchg(ptr, x) \ 85 + _arch_xchg(ptr, x, ".aqrl", "", "") 86 87 #define xchg32(ptr, x) \ 88 ({ \ ··· 162 * store NEW in MEM. Return the initial value in MEM. Success is 163 * indicated by comparing RETURN with OLD. 
164 */ 165 + 166 + #define __arch_cmpxchg_masked(sc_sfx, prepend, append, r, p, o, n) \ 167 + ({ \ 168 + u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \ 169 + ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \ 170 + ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \ 171 + << __s; \ 172 + ulong __newx = (ulong)(n) << __s; \ 173 + ulong __oldx = (ulong)(o) << __s; \ 174 + ulong __retx; \ 175 + ulong __rc; \ 176 + \ 177 + __asm__ __volatile__ ( \ 178 + prepend \ 179 + "0: lr.w %0, %2\n" \ 180 + " and %1, %0, %z5\n" \ 181 + " bne %1, %z3, 1f\n" \ 182 + " and %1, %0, %z6\n" \ 183 + " or %1, %1, %z4\n" \ 184 + " sc.w" sc_sfx " %1, %1, %2\n" \ 185 + " bnez %1, 0b\n" \ 186 + append \ 187 + "1:\n" \ 188 + : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \ 189 + : "rJ" ((long)__oldx), "rJ" (__newx), \ 190 + "rJ" (__mask), "rJ" (~__mask) \ 191 + : "memory"); \ 192 + \ 193 + r = (__typeof__(*(p)))((__retx & __mask) >> __s); \ 194 + }) 195 + 196 + #define __arch_cmpxchg(lr_sfx, sc_sfx, prepend, append, r, p, co, o, n) \ 197 + ({ \ 198 + register unsigned int __rc; \ 199 + \ 200 + __asm__ __volatile__ ( \ 201 + prepend \ 202 + "0: lr" lr_sfx " %0, %2\n" \ 203 + " bne %0, %z3, 1f\n" \ 204 + " sc" sc_sfx " %1, %z4, %2\n" \ 205 + " bnez %1, 0b\n" \ 206 + append \ 207 + "1:\n" \ 208 + : "=&r" (r), "=&r" (__rc), "+A" (*(p)) \ 209 + : "rJ" (co o), "rJ" (n) \ 210 + : "memory"); \ 211 + }) 212 + 213 + #define _arch_cmpxchg(ptr, old, new, sc_sfx, prepend, append) \ 214 ({ \ 215 __typeof__(ptr) __ptr = (ptr); \ 216 + __typeof__(*(__ptr)) __old = (old); \ 217 + __typeof__(*(__ptr)) __new = (new); \ 218 + __typeof__(*(__ptr)) __ret; \ 219 + \ 220 + switch (sizeof(*__ptr)) { \ 221 + case 1: \ 222 + case 2: \ 223 + __arch_cmpxchg_masked(sc_sfx, prepend, append, \ 224 + __ret, __ptr, __old, __new); \ 225 + break; \ 226 case 4: \ 227 + __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append, \ 228 + __ret, __ptr, (long), __old, __new); \ 229 break; \ 230 case 8: \ 231 + __arch_cmpxchg(".d", ".d" sc_sfx, prepend, append, \ 232 + __ret, __ptr, /**/, __old, __new); \ 233 break; \ 234 default: \ 235 BUILD_BUG(); \ 236 } \ 237 + (__typeof__(*(__ptr)))__ret; \ 238 }) 239 240 #define arch_cmpxchg_relaxed(ptr, o, n) \ 241 + _arch_cmpxchg((ptr), (o), (n), "", "", "") 242 243 #define arch_cmpxchg_acquire(ptr, o, n) \ 244 + _arch_cmpxchg((ptr), (o), (n), "", "", RISCV_ACQUIRE_BARRIER) 245 246 #define arch_cmpxchg_release(ptr, o, n) \ 247 + _arch_cmpxchg((ptr), (o), (n), "", RISCV_RELEASE_BARRIER, "") 248 249 #define arch_cmpxchg(ptr, o, n) \ 250 + _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n") 251 252 #define arch_cmpxchg_local(ptr, o, n) \ 253 + arch_cmpxchg_relaxed((ptr), (o), (n)) 254 255 #define arch_cmpxchg64(ptr, o, n) \ 256 ({ \ ··· 357 ({ \ 358 BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ 359 arch_cmpxchg_relaxed((ptr), (o), (n)); \ 360 + }) 361 + 362 + #define arch_cmpxchg64_relaxed(ptr, o, n) \ 363 + ({ \ 364 + BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ 365 + arch_cmpxchg_relaxed((ptr), (o), (n)); \ 366 + }) 367 + 368 + #define arch_cmpxchg64_acquire(ptr, o, n) \ 369 + ({ \ 370 + BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ 371 + arch_cmpxchg_acquire((ptr), (o), (n)); \ 372 + }) 373 + 374 + #define arch_cmpxchg64_release(ptr, o, n) \ 375 + ({ \ 376 + BUILD_BUG_ON(sizeof(*(ptr)) != 8); \ 377 + arch_cmpxchg_release((ptr), (o), (n)); \ 378 }) 379 380 #endif /* _ASM_RISCV_CMPXCHG_H */
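To make the new byte/half-word support easier to follow, here is a hedged, standalone C sketch of the masking technique used by __arch_cmpxchg_masked() above (illustration only, not kernel code): a byte-wide compare-and-exchange is emulated by shifting the old/new values into the right lane of the containing naturally aligned 32-bit word and retrying a word-wide CAS, which is the role the lr.w/sc.w loop plays in the kernel. Compiler __atomic builtins stand in for the assembly:

    #include <stdint.h>
    #include <stdbool.h>

    /* Emulate cmpxchg on a u8 with a CAS on the aligned 32-bit word around it. */
    static uint8_t cmpxchg_u8(uint8_t *p, uint8_t old, uint8_t new)
    {
            uint32_t *aligned = (uint32_t *)((uintptr_t)p & ~(uintptr_t)0x3);
            unsigned int shift = ((uintptr_t)p & 0x3) * 8;  /* little-endian byte lane */
            uint32_t mask = 0xffu << shift;
            uint32_t oldw, neww;

            oldw = __atomic_load_n(aligned, __ATOMIC_RELAXED);
            do {
                    if (((oldw & mask) >> shift) != old)
                            return (oldw & mask) >> shift;  /* comparison failed */
                    /* Splice the new byte into the unchanged bytes of the word. */
                    neww = (oldw & ~mask) | ((uint32_t)new << shift);
            } while (!__atomic_compare_exchange_n(aligned, &oldw, neww, false,
                                                  __ATOMIC_SEQ_CST, __ATOMIC_RELAXED));

            return old;     /* success: the previous value equalled 'old' */
    }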
+11 -1
arch/riscv/include/asm/errata_list.h
··· 43 CONFIG_ERRATA_SIFIVE_CIP_453) 44 #else /* !__ASSEMBLY__ */ 45 46 - #define ALT_FLUSH_TLB_PAGE(x) \ 47 asm(ALTERNATIVE("sfence.vma %0", "sfence.vma", SIFIVE_VENDOR_ID, \ 48 ERRATA_SIFIVE_CIP_1200, CONFIG_ERRATA_SIFIVE_CIP_1200) \ 49 : : "r" (addr) : "memory") 50 51 /* 52 * _val is marked as "will be overwritten", so need to set it to 0
··· 43 CONFIG_ERRATA_SIFIVE_CIP_453) 44 #else /* !__ASSEMBLY__ */ 45 46 + #define ALT_SFENCE_VMA_ASID(asid) \ 47 + asm(ALTERNATIVE("sfence.vma x0, %0", "sfence.vma", SIFIVE_VENDOR_ID, \ 48 + ERRATA_SIFIVE_CIP_1200, CONFIG_ERRATA_SIFIVE_CIP_1200) \ 49 + : : "r" (asid) : "memory") 50 + 51 + #define ALT_SFENCE_VMA_ADDR(addr) \ 52 asm(ALTERNATIVE("sfence.vma %0", "sfence.vma", SIFIVE_VENDOR_ID, \ 53 ERRATA_SIFIVE_CIP_1200, CONFIG_ERRATA_SIFIVE_CIP_1200) \ 54 : : "r" (addr) : "memory") 55 + 56 + #define ALT_SFENCE_VMA_ADDR_ASID(addr, asid) \ 57 + asm(ALTERNATIVE("sfence.vma %0, %1", "sfence.vma", SIFIVE_VENDOR_ID, \ 58 + ERRATA_SIFIVE_CIP_1200, CONFIG_ERRATA_SIFIVE_CIP_1200) \ 59 + : : "r" (addr), "r" (asid) : "memory") 60 61 /* 62 * _val is marked as "will be overwritten", so need to set it to 0
-1
arch/riscv/include/asm/irqflags.h
··· 7 #ifndef _ASM_RISCV_IRQFLAGS_H 8 #define _ASM_RISCV_IRQFLAGS_H 9 10 - #include <asm/processor.h> 11 #include <asm/csr.h> 12 13 /* read interrupt enabled status */
··· 7 #ifndef _ASM_RISCV_IRQFLAGS_H 8 #define _ASM_RISCV_IRQFLAGS_H 9 10 #include <asm/csr.h> 11 12 /* read interrupt enabled status */
+5
arch/riscv/include/asm/mmu.h
··· 19 #ifdef CONFIG_SMP 20 /* A local icache flush is needed before user execution can resume. */ 21 cpumask_t icache_stale_mask; 22 #endif 23 #ifdef CONFIG_BINFMT_ELF_FDPIC 24 unsigned long exec_fdpic_loadmap; 25 unsigned long interp_fdpic_loadmap; 26 #endif 27 } mm_context_t; 28 29 void __init create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, 30 phys_addr_t sz, pgprot_t prot);
··· 19 #ifdef CONFIG_SMP 20 /* A local icache flush is needed before user execution can resume. */ 21 cpumask_t icache_stale_mask; 22 + /* Force local icache flush on all migrations. */ 23 + bool force_icache_flush; 24 #endif 25 #ifdef CONFIG_BINFMT_ELF_FDPIC 26 unsigned long exec_fdpic_loadmap; 27 unsigned long interp_fdpic_loadmap; 28 #endif 29 } mm_context_t; 30 + 31 + #define cntx2asid(cntx) ((cntx) & SATP_ASID_MASK) 32 + #define cntx2version(cntx) ((cntx) & ~SATP_ASID_MASK) 33 34 void __init create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, 35 phys_addr_t sz, pgprot_t prot);
+1
arch/riscv/include/asm/patch.h
··· 6 #ifndef _ASM_RISCV_PATCH_H 7 #define _ASM_RISCV_PATCH_H 8 9 int patch_text_nosync(void *addr, const void *insns, size_t len); 10 int patch_text_set_nosync(void *addr, u8 c, size_t len); 11 int patch_text(void *addr, u32 *insns, int ninsns);
··· 6 #ifndef _ASM_RISCV_PATCH_H 7 #define _ASM_RISCV_PATCH_H 8 9 + int patch_insn_write(void *addr, const void *insn, size_t len); 10 int patch_text_nosync(void *addr, const void *insns, size_t len); 11 int patch_text_set_nosync(void *addr, u8 c, size_t len); 12 int patch_text(void *addr, u32 *insns, int ninsns);
+14 -18
arch/riscv/include/asm/pgalloc.h
··· 8 #define _ASM_RISCV_PGALLOC_H 9 10 #include <linux/mm.h> 11 #include <asm/tlb.h> 12 13 #ifdef CONFIG_MMU 14 #define __HAVE_ARCH_PUD_ALLOC_ONE 15 #define __HAVE_ARCH_PUD_FREE 16 #include <asm-generic/pgalloc.h> 17 18 static inline void pmd_populate_kernel(struct mm_struct *mm, 19 pmd_t *pmd, pte_t *pte) ··· 111 struct ptdesc *ptdesc = virt_to_ptdesc(pud); 112 113 pagetable_pud_dtor(ptdesc); 114 - if (riscv_use_ipi_for_rfence()) 115 - tlb_remove_page_ptdesc(tlb, ptdesc); 116 - else 117 - tlb_remove_ptdesc(tlb, ptdesc); 118 } 119 } 120 ··· 145 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, 146 unsigned long addr) 147 { 148 - if (pgtable_l5_enabled) { 149 - if (riscv_use_ipi_for_rfence()) 150 - tlb_remove_page_ptdesc(tlb, virt_to_ptdesc(p4d)); 151 - else 152 - tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); 153 - } 154 } 155 #endif /* __PAGETABLE_PMD_FOLDED */ 156 ··· 178 struct ptdesc *ptdesc = virt_to_ptdesc(pmd); 179 180 pagetable_pmd_dtor(ptdesc); 181 - if (riscv_use_ipi_for_rfence()) 182 - tlb_remove_page_ptdesc(tlb, ptdesc); 183 - else 184 - tlb_remove_ptdesc(tlb, ptdesc); 185 } 186 187 #endif /* __PAGETABLE_PMD_FOLDED */ ··· 189 struct ptdesc *ptdesc = page_ptdesc(pte); 190 191 pagetable_pte_dtor(ptdesc); 192 - if (riscv_use_ipi_for_rfence()) 193 - tlb_remove_page_ptdesc(tlb, ptdesc); 194 - else 195 - tlb_remove_ptdesc(tlb, ptdesc); 196 } 197 #endif /* CONFIG_MMU */ 198
··· 8 #define _ASM_RISCV_PGALLOC_H 9 10 #include <linux/mm.h> 11 + #include <asm/sbi.h> 12 #include <asm/tlb.h> 13 14 #ifdef CONFIG_MMU 15 #define __HAVE_ARCH_PUD_ALLOC_ONE 16 #define __HAVE_ARCH_PUD_FREE 17 #include <asm-generic/pgalloc.h> 18 + 19 + static inline void riscv_tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) 20 + { 21 + if (riscv_use_sbi_for_rfence()) 22 + tlb_remove_ptdesc(tlb, pt); 23 + else 24 + tlb_remove_page_ptdesc(tlb, pt); 25 + } 26 27 static inline void pmd_populate_kernel(struct mm_struct *mm, 28 pmd_t *pmd, pte_t *pte) ··· 102 struct ptdesc *ptdesc = virt_to_ptdesc(pud); 103 104 pagetable_pud_dtor(ptdesc); 105 + riscv_tlb_remove_ptdesc(tlb, ptdesc); 106 } 107 } 108 ··· 139 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, 140 unsigned long addr) 141 { 142 + if (pgtable_l5_enabled) 143 + riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); 144 } 145 #endif /* __PAGETABLE_PMD_FOLDED */ 146 ··· 176 struct ptdesc *ptdesc = virt_to_ptdesc(pmd); 177 178 pagetable_pmd_dtor(ptdesc); 179 + riscv_tlb_remove_ptdesc(tlb, ptdesc); 180 } 181 182 #endif /* __PAGETABLE_PMD_FOLDED */ ··· 190 struct ptdesc *ptdesc = page_ptdesc(pte); 191 192 pagetable_pte_dtor(ptdesc); 193 + riscv_tlb_remove_ptdesc(tlb, ptdesc); 194 } 195 #endif /* CONFIG_MMU */ 196
+10
arch/riscv/include/asm/processor.h
··· 68 #endif 69 70 #ifndef __ASSEMBLY__ 71 72 struct task_struct; 73 struct pt_regs; ··· 123 struct __riscv_v_ext_state vstate; 124 unsigned long align_ctl; 125 struct __riscv_v_ext_state kernel_vstate; 126 }; 127 128 /* Whitelist the fstate from the task_struct for hardened usercopy */ ··· 189 190 #define GET_UNALIGN_CTL(tsk, addr) get_unalign_ctl((tsk), (addr)) 191 #define SET_UNALIGN_CTL(tsk, val) set_unalign_ctl((tsk), (val)) 192 193 #endif /* __ASSEMBLY__ */ 194
··· 68 #endif 69 70 #ifndef __ASSEMBLY__ 71 + #include <linux/cpumask.h> 72 73 struct task_struct; 74 struct pt_regs; ··· 122 struct __riscv_v_ext_state vstate; 123 unsigned long align_ctl; 124 struct __riscv_v_ext_state kernel_vstate; 125 + #ifdef CONFIG_SMP 126 + /* Flush the icache on migration */ 127 + bool force_icache_flush; 128 + /* A forced icache flush is not needed if migrating to the previous cpu. */ 129 + unsigned int prev_cpu; 130 + #endif 131 }; 132 133 /* Whitelist the fstate from the task_struct for hardened usercopy */ ··· 182 183 #define GET_UNALIGN_CTL(tsk, addr) get_unalign_ctl((tsk), (addr)) 184 #define SET_UNALIGN_CTL(tsk, val) set_unalign_ctl((tsk), (val)) 185 + 186 + #define RISCV_SET_ICACHE_FLUSH_CTX(arg1, arg2) riscv_set_icache_flush_ctx(arg1, arg2) 187 + extern int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long per_thread); 188 189 #endif /* __ASSEMBLY__ */ 190
+4
arch/riscv/include/asm/sbi.h
··· 387 unsigned long riscv_cached_mimpid(unsigned int cpu_id); 388 389 #if IS_ENABLED(CONFIG_SMP) && IS_ENABLED(CONFIG_RISCV_SBI) 390 void sbi_ipi_init(void); 391 #else 392 static inline void sbi_ipi_init(void) { } 393 #endif 394
··· 387 unsigned long riscv_cached_mimpid(unsigned int cpu_id); 388 389 #if IS_ENABLED(CONFIG_SMP) && IS_ENABLED(CONFIG_RISCV_SBI) 390 + DECLARE_STATIC_KEY_FALSE(riscv_sbi_for_rfence); 391 + #define riscv_use_sbi_for_rfence() \ 392 + static_branch_unlikely(&riscv_sbi_for_rfence) 393 void sbi_ipi_init(void); 394 #else 395 + static inline bool riscv_use_sbi_for_rfence(void) { return false; } 396 static inline void sbi_ipi_init(void) { } 397 #endif 398
-12
arch/riscv/include/asm/signal.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - 3 - #ifndef __ASM_SIGNAL_H 4 - #define __ASM_SIGNAL_H 5 - 6 - #include <uapi/asm/signal.h> 7 - #include <uapi/asm/ptrace.h> 8 - 9 - asmlinkage __visible 10 - void do_work_pending(struct pt_regs *regs, unsigned long thread_info_flags); 11 - 12 - #endif
···
+2 -13
arch/riscv/include/asm/smp.h
··· 49 bool riscv_ipi_have_virq_range(void); 50 51 /* Set the IPI interrupt numbers for arch (called by irqchip drivers) */ 52 - void riscv_ipi_set_virq_range(int virq, int nr, bool use_for_rfence); 53 - 54 - /* Check if we can use IPIs for remote FENCEs */ 55 - DECLARE_STATIC_KEY_FALSE(riscv_ipi_for_rfence); 56 - #define riscv_use_ipi_for_rfence() \ 57 - static_branch_unlikely(&riscv_ipi_for_rfence) 58 59 /* Check other CPUs stop or not */ 60 bool smp_crash_stop_failed(void); ··· 99 return false; 100 } 101 102 - static inline void riscv_ipi_set_virq_range(int virq, int nr, 103 - bool use_for_rfence) 104 { 105 - } 106 - 107 - static inline bool riscv_use_ipi_for_rfence(void) 108 - { 109 - return false; 110 } 111 112 #endif /* CONFIG_SMP */
··· 49 bool riscv_ipi_have_virq_range(void); 50 51 /* Set the IPI interrupt numbers for arch (called by irqchip drivers) */ 52 + void riscv_ipi_set_virq_range(int virq, int nr); 53 54 /* Check other CPUs stop or not */ 55 bool smp_crash_stop_failed(void); ··· 104 return false; 105 } 106 107 + static inline void riscv_ipi_set_virq_range(int virq, int nr) 108 { 109 } 110 111 #endif /* CONFIG_SMP */
-1
arch/riscv/include/asm/suspend.h
··· 13 /* Saved and restored by low-level functions */ 14 struct pt_regs regs; 15 /* Saved and restored by high-level functions */ 16 - unsigned long scratch; 17 unsigned long envcfg; 18 unsigned long tvec; 19 unsigned long ie;
··· 13 /* Saved and restored by low-level functions */ 14 struct pt_regs regs; 15 /* Saved and restored by high-level functions */ 16 unsigned long envcfg; 17 unsigned long tvec; 18 unsigned long ie;
+23
arch/riscv/include/asm/switch_to.h
··· 8 9 #include <linux/jump_label.h> 10 #include <linux/sched/task_stack.h> 11 #include <asm/vector.h> 12 #include <asm/cpufeature.h> 13 #include <asm/processor.h> ··· 73 extern struct task_struct *__switch_to(struct task_struct *, 74 struct task_struct *); 75 76 #define switch_to(prev, next, last) \ 77 do { \ 78 struct task_struct *__prev = (prev); \ 79 struct task_struct *__next = (next); \ 80 if (has_fpu()) \ 81 __switch_to_fpu(__prev, __next); \ 82 if (has_vector()) \ 83 __switch_to_vector(__prev, __next); \ 84 ((last) = __switch_to(__prev, __next)); \ 85 } while (0) 86
··· 8 9 #include <linux/jump_label.h> 10 #include <linux/sched/task_stack.h> 11 + #include <linux/mm_types.h> 12 #include <asm/vector.h> 13 #include <asm/cpufeature.h> 14 #include <asm/processor.h> ··· 72 extern struct task_struct *__switch_to(struct task_struct *, 73 struct task_struct *); 74 75 + static inline bool switch_to_should_flush_icache(struct task_struct *task) 76 + { 77 + #ifdef CONFIG_SMP 78 + bool stale_mm = task->mm && task->mm->context.force_icache_flush; 79 + bool stale_thread = task->thread.force_icache_flush; 80 + bool thread_migrated = smp_processor_id() != task->thread.prev_cpu; 81 + 82 + return thread_migrated && (stale_mm || stale_thread); 83 + #else 84 + return false; 85 + #endif 86 + } 87 + 88 + #ifdef CONFIG_SMP 89 + #define __set_prev_cpu(thread) ((thread).prev_cpu = smp_processor_id()) 90 + #else 91 + #define __set_prev_cpu(thread) 92 + #endif 93 + 94 #define switch_to(prev, next, last) \ 95 do { \ 96 struct task_struct *__prev = (prev); \ 97 struct task_struct *__next = (next); \ 98 + __set_prev_cpu(__prev->thread); \ 99 if (has_fpu()) \ 100 __switch_to_fpu(__prev, __next); \ 101 if (has_vector()) \ 102 __switch_to_vector(__prev, __next); \ 103 + if (switch_to_should_flush_icache(__next)) \ 104 + local_flush_icache_all(); \ 105 ((last) = __switch_to(__prev, __next)); \ 106 } while (0) 107
+22 -30
arch/riscv/include/asm/tlbflush.h
··· 15 #define FLUSH_TLB_NO_ASID ((unsigned long)-1) 16 17 #ifdef CONFIG_MMU 18 - extern unsigned long asid_mask; 19 - 20 static inline void local_flush_tlb_all(void) 21 { 22 __asm__ __volatile__ ("sfence.vma" : : : "memory"); 23 } 24 25 /* Flush one page from local TLB */ 26 static inline void local_flush_tlb_page(unsigned long addr) 27 { 28 - ALT_FLUSH_TLB_PAGE(__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory")); 29 } 30 - #else /* CONFIG_MMU */ 31 - #define local_flush_tlb_all() do { } while (0) 32 - #define local_flush_tlb_page(addr) do { } while (0) 33 - #endif /* CONFIG_MMU */ 34 35 - #if defined(CONFIG_SMP) && defined(CONFIG_MMU) 36 void flush_tlb_all(void); 37 void flush_tlb_mm(struct mm_struct *mm); 38 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, ··· 65 void arch_flush_tlb_batched_pending(struct mm_struct *mm); 66 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch); 67 68 - #else /* CONFIG_SMP && CONFIG_MMU */ 69 - 70 - #define flush_tlb_all() local_flush_tlb_all() 71 - #define flush_tlb_page(vma, addr) local_flush_tlb_page(addr) 72 - 73 - static inline void flush_tlb_range(struct vm_area_struct *vma, 74 - unsigned long start, unsigned long end) 75 - { 76 - local_flush_tlb_all(); 77 - } 78 - 79 - /* Flush a range of kernel pages */ 80 - static inline void flush_tlb_kernel_range(unsigned long start, 81 - unsigned long end) 82 - { 83 - local_flush_tlb_all(); 84 - } 85 - 86 - #define flush_tlb_mm(mm) flush_tlb_all() 87 - #define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all() 88 - #define local_flush_tlb_kernel_range(start, end) flush_tlb_all() 89 - #endif /* !CONFIG_SMP || !CONFIG_MMU */ 90 91 #endif /* _ASM_RISCV_TLBFLUSH_H */
··· 15 #define FLUSH_TLB_NO_ASID ((unsigned long)-1) 16 17 #ifdef CONFIG_MMU 18 static inline void local_flush_tlb_all(void) 19 { 20 __asm__ __volatile__ ("sfence.vma" : : : "memory"); 21 } 22 23 + static inline void local_flush_tlb_all_asid(unsigned long asid) 24 + { 25 + if (asid != FLUSH_TLB_NO_ASID) 26 + ALT_SFENCE_VMA_ASID(asid); 27 + else 28 + local_flush_tlb_all(); 29 + } 30 + 31 /* Flush one page from local TLB */ 32 static inline void local_flush_tlb_page(unsigned long addr) 33 { 34 + ALT_SFENCE_VMA_ADDR(addr); 35 } 36 37 + static inline void local_flush_tlb_page_asid(unsigned long addr, 38 + unsigned long asid) 39 + { 40 + if (asid != FLUSH_TLB_NO_ASID) 41 + ALT_SFENCE_VMA_ADDR_ASID(addr, asid); 42 + else 43 + local_flush_tlb_page(addr); 44 + } 45 + 46 void flush_tlb_all(void); 47 void flush_tlb_mm(struct mm_struct *mm); 48 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, ··· 55 void arch_flush_tlb_batched_pending(struct mm_struct *mm); 56 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch); 57 58 + extern unsigned long tlb_flush_all_threshold; 59 + #else /* CONFIG_MMU */ 60 + #define local_flush_tlb_all() do { } while (0) 61 + #endif /* CONFIG_MMU */ 62 63 #endif /* _ASM_RISCV_TLBFLUSH_H */
+1
arch/riscv/include/uapi/asm/hwprobe.h
··· 59 #define RISCV_HWPROBE_EXT_ZTSO (1ULL << 33) 60 #define RISCV_HWPROBE_EXT_ZACAS (1ULL << 34) 61 #define RISCV_HWPROBE_EXT_ZICOND (1ULL << 35) 62 #define RISCV_HWPROBE_KEY_CPUPERF_0 5 63 #define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0) 64 #define RISCV_HWPROBE_MISALIGNED_EMULATED (1 << 0)
··· 59 #define RISCV_HWPROBE_EXT_ZTSO (1ULL << 33) 60 #define RISCV_HWPROBE_EXT_ZACAS (1ULL << 34) 61 #define RISCV_HWPROBE_EXT_ZICOND (1ULL << 35) 62 + #define RISCV_HWPROBE_EXT_ZIHINTPAUSE (1ULL << 36) 63 #define RISCV_HWPROBE_KEY_CPUPERF_0 5 64 #define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0) 65 #define RISCV_HWPROBE_MISALIGNED_EMULATED (1 << 0)
+40 -4
arch/riscv/kernel/ftrace.c
··· 8 #include <linux/ftrace.h> 9 #include <linux/uaccess.h> 10 #include <linux/memory.h> 11 #include <asm/cacheflush.h> 12 #include <asm/patch.h> 13 ··· 76 make_call_t0(hook_pos, target, call); 77 78 /* Replace the auipc-jalr pair at once. Return -EPERM on write error. */ 79 - if (patch_text_nosync 80 - ((void *)hook_pos, enable ? call : nops, MCOUNT_INSN_SIZE)) 81 return -EPERM; 82 83 return 0; ··· 88 89 make_call_t0(rec->ip, addr, call); 90 91 - if (patch_text_nosync((void *)rec->ip, call, MCOUNT_INSN_SIZE)) 92 return -EPERM; 93 94 return 0; ··· 99 { 100 unsigned int nops[2] = {NOP4, NOP4}; 101 102 - if (patch_text_nosync((void *)rec->ip, nops, MCOUNT_INSN_SIZE)) 103 return -EPERM; 104 105 return 0; ··· 133 } 134 135 return ret; 136 } 137 #endif 138
··· 8 #include <linux/ftrace.h> 9 #include <linux/uaccess.h> 10 #include <linux/memory.h> 11 + #include <linux/stop_machine.h> 12 #include <asm/cacheflush.h> 13 #include <asm/patch.h> 14 ··· 75 make_call_t0(hook_pos, target, call); 76 77 /* Replace the auipc-jalr pair at once. Return -EPERM on write error. */ 78 + if (patch_insn_write((void *)hook_pos, enable ? call : nops, MCOUNT_INSN_SIZE)) 79 return -EPERM; 80 81 return 0; ··· 88 89 make_call_t0(rec->ip, addr, call); 90 91 + if (patch_insn_write((void *)rec->ip, call, MCOUNT_INSN_SIZE)) 92 return -EPERM; 93 94 return 0; ··· 99 { 100 unsigned int nops[2] = {NOP4, NOP4}; 101 102 + if (patch_insn_write((void *)rec->ip, nops, MCOUNT_INSN_SIZE)) 103 return -EPERM; 104 105 return 0; ··· 133 } 134 135 return ret; 136 + } 137 + 138 + struct ftrace_modify_param { 139 + int command; 140 + atomic_t cpu_count; 141 + }; 142 + 143 + static int __ftrace_modify_code(void *data) 144 + { 145 + struct ftrace_modify_param *param = data; 146 + 147 + if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) { 148 + ftrace_modify_all_code(param->command); 149 + /* 150 + * Make sure the patching store is effective *before* we 151 + * increment the counter which releases all waiting CPUs 152 + * by using the release variant of atomic increment. The 153 + * release pairs with the call to local_flush_icache_all() 154 + * on the waiting CPU. 155 + */ 156 + atomic_inc_return_release(&param->cpu_count); 157 + } else { 158 + while (atomic_read(&param->cpu_count) <= num_online_cpus()) 159 + cpu_relax(); 160 + } 161 + 162 + local_flush_icache_all(); 163 + 164 + return 0; 165 + } 166 + 167 + void arch_ftrace_update_code(int command) 168 + { 169 + struct ftrace_modify_param param = { command, ATOMIC_INIT(0) }; 170 + 171 + stop_machine(__ftrace_modify_code, &param, cpu_online_mask); 172 } 173 #endif 174
+12 -5
arch/riscv/kernel/patch.c
··· 196 } 197 NOKPROBE_SYMBOL(patch_text_set_nosync); 198 199 - static int patch_insn_write(void *addr, const void *insn, size_t len) 200 { 201 size_t patched = 0; 202 size_t size; ··· 240 if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) { 241 for (i = 0; ret == 0 && i < patch->ninsns; i++) { 242 len = GET_INSN_LENGTH(patch->insns[i]); 243 - ret = patch_text_nosync(patch->addr + i * len, 244 - &patch->insns[i], len); 245 } 246 - atomic_inc(&patch->cpu_count); 247 } else { 248 while (atomic_read(&patch->cpu_count) <= num_online_cpus()) 249 cpu_relax(); 250 - smp_mb(); 251 } 252 253 return ret; 254 }
··· 196 } 197 NOKPROBE_SYMBOL(patch_text_set_nosync); 198 199 + int patch_insn_write(void *addr, const void *insn, size_t len) 200 { 201 size_t patched = 0; 202 size_t size; ··· 240 if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) { 241 for (i = 0; ret == 0 && i < patch->ninsns; i++) { 242 len = GET_INSN_LENGTH(patch->insns[i]); 243 + ret = patch_insn_write(patch->addr + i * len, &patch->insns[i], len); 244 } 245 + /* 246 + * Make sure the patching store is effective *before* we 247 + * increment the counter which releases all waiting CPUs 248 + * by using the release variant of atomic increment. The 249 + * release pairs with the call to local_flush_icache_all() 250 + * on the waiting CPU. 251 + */ 252 + atomic_inc_return_release(&patch->cpu_count); 253 } else { 254 while (atomic_read(&patch->cpu_count) <= num_online_cpus()) 255 cpu_relax(); 256 } 257 + 258 + local_flush_icache_all(); 259 260 return ret; 261 }
+10 -1
arch/riscv/kernel/sbi-ipi.c
··· 13 #include <linux/irqdomain.h> 14 #include <asm/sbi.h> 15 16 static int sbi_ipi_virq; 17 18 static void sbi_ipi_handle(struct irq_desc *desc) ··· 75 "irqchip/sbi-ipi:starting", 76 sbi_ipi_starting_cpu, NULL); 77 78 - riscv_ipi_set_virq_range(virq, BITS_PER_BYTE, false); 79 pr_info("providing IPIs using SBI IPI extension\n"); 80 }
··· 13 #include <linux/irqdomain.h> 14 #include <asm/sbi.h> 15 16 + DEFINE_STATIC_KEY_FALSE(riscv_sbi_for_rfence); 17 + EXPORT_SYMBOL_GPL(riscv_sbi_for_rfence); 18 + 19 static int sbi_ipi_virq; 20 21 static void sbi_ipi_handle(struct irq_desc *desc) ··· 72 "irqchip/sbi-ipi:starting", 73 sbi_ipi_starting_cpu, NULL); 74 75 + riscv_ipi_set_virq_range(virq, BITS_PER_BYTE); 76 pr_info("providing IPIs using SBI IPI extension\n"); 77 + 78 + /* 79 + * Use the SBI remote fence extension to avoid 80 + * the extra context switch needed to handle IPIs. 81 + */ 82 + static_branch_enable(&riscv_sbi_for_rfence); 83 }
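The riscv_sbi_for_rfence static key defined here is read elsewhere in the series through the riscv_use_sbi_for_rfence() helper. That helper is not part of this hunk; its presumed shape (the header location and exact spelling are assumptions) is roughly:

    /* Presumed consumer-side helper, e.g. in arch/riscv/include/asm/smp.h: */
    DECLARE_STATIC_KEY_FALSE(riscv_sbi_for_rfence);
    #define riscv_use_sbi_for_rfence() \
            static_branch_unlikely(&riscv_sbi_for_rfence)

with a stub that presumably evaluates to false when CONFIG_RISCV_SBI is disabled, so the IPI path remains the default.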
+1 -10
arch/riscv/kernel/smp.c
··· 171 return (ipi_virq_base) ? true : false; 172 } 173 174 - DEFINE_STATIC_KEY_FALSE(riscv_ipi_for_rfence); 175 - EXPORT_SYMBOL_GPL(riscv_ipi_for_rfence); 176 - 177 - void riscv_ipi_set_virq_range(int virq, int nr, bool use_for_rfence) 178 { 179 int i, err; 180 ··· 194 195 /* Enabled IPIs for boot CPU immediately */ 196 riscv_ipi_enable(); 197 - 198 - /* Update RFENCE static key */ 199 - if (use_for_rfence) 200 - static_branch_enable(&riscv_ipi_for_rfence); 201 - else 202 - static_branch_disable(&riscv_ipi_for_rfence); 203 } 204 205 static const char * const ipi_names[] = {
··· 171 return (ipi_virq_base) ? true : false; 172 } 173 174 + void riscv_ipi_set_virq_range(int virq, int nr) 175 { 176 int i, err; 177 ··· 197 198 /* Enabled IPIs for boot CPU immediately */ 199 riscv_ipi_enable(); 200 } 201 202 static const char * const ipi_names[] = {
+4 -3
arch/riscv/kernel/smpboot.c
··· 26 #include <linux/sched/task_stack.h> 27 #include <linux/sched/mm.h> 28 29 - #include <asm/cpufeature.h> 30 #include <asm/cpu_ops.h> 31 #include <asm/irq.h> 32 #include <asm/mmu_context.h> ··· 234 riscv_user_isa_enable(); 235 236 /* 237 - * Remote TLB flushes are ignored while the CPU is offline, so emit 238 - * a local TLB flush right now just in case. 239 */ 240 local_flush_tlb_all(); 241 complete(&cpu_running); 242 /*
··· 26 #include <linux/sched/task_stack.h> 27 #include <linux/sched/mm.h> 28 29 + #include <asm/cacheflush.h> 30 #include <asm/cpu_ops.h> 31 #include <asm/irq.h> 32 #include <asm/mmu_context.h> ··· 234 riscv_user_isa_enable(); 235 236 /* 237 + * Remote cache and TLB flushes are ignored while the CPU is offline, 238 + * so flush them both right now just in case. 239 */ 240 + local_flush_icache_all(); 241 local_flush_tlb_all(); 242 complete(&cpu_running); 243 /*
+1 -2
arch/riscv/kernel/suspend.c
··· 14 15 void suspend_save_csrs(struct suspend_context *context) 16 { 17 - context->scratch = csr_read(CSR_SCRATCH); 18 if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_XLINUXENVCFG)) 19 context->envcfg = csr_read(CSR_ENVCFG); 20 context->tvec = csr_read(CSR_TVEC); ··· 36 37 void suspend_restore_csrs(struct suspend_context *context) 38 { 39 - csr_write(CSR_SCRATCH, context->scratch); 40 if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_XLINUXENVCFG)) 41 csr_write(CSR_ENVCFG, context->envcfg); 42 csr_write(CSR_TVEC, context->tvec);
··· 14 15 void suspend_save_csrs(struct suspend_context *context) 16 { 17 if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_XLINUXENVCFG)) 18 context->envcfg = csr_read(CSR_ENVCFG); 19 context->tvec = csr_read(CSR_TVEC); ··· 37 38 void suspend_restore_csrs(struct suspend_context *context) 39 { 40 + csr_write(CSR_SCRATCH, 0); 41 if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_XLINUXENVCFG)) 42 csr_write(CSR_ENVCFG, context->envcfg); 43 csr_write(CSR_TVEC, context->tvec);
+1
arch/riscv/kernel/sys_hwprobe.c
··· 111 EXT_KEY(ZTSO); 112 EXT_KEY(ZACAS); 113 EXT_KEY(ZICOND); 114 115 if (has_vector()) { 116 EXT_KEY(ZVBB);
··· 111 EXT_KEY(ZTSO); 112 EXT_KEY(ZACAS); 113 EXT_KEY(ZICOND); 114 + EXT_KEY(ZIHINTPAUSE); 115 116 if (has_vector()) { 117 EXT_KEY(ZVBB);
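With Zihintpause now reported through hwprobe, user space can check for it before leaning on the pause hint in spin loops. A hedged sketch of such a query follows; it assumes RISC-V Linux with 6.10 uapi headers installed and uses the raw syscall so no libc wrapper is required.

    /* Query RISCV_HWPROBE_KEY_IMA_EXT_0 and test the Zihintpause bit. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>          /* struct riscv_hwprobe and key/bit macros */

    int main(void)
    {
            struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

            /* pairs, pair_count, cpusetsize, cpus, flags: 0/NULL means all online CPUs */
            if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
                    return 1;

            if (pair.value & RISCV_HWPROBE_EXT_ZIHINTPAUSE)
                    puts("Zihintpause reported: \"pause\" is a real hint here");
            else
                    puts("Zihintpause not reported");
            return 0;
    }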
-1
arch/riscv/kernel/sys_riscv.c
··· 7 8 #include <linux/syscalls.h> 9 #include <asm/cacheflush.h> 10 - #include <asm-generic/mman-common.h> 11 12 static long riscv_sys_mmap(unsigned long addr, unsigned long len, 13 unsigned long prot, unsigned long flags,
··· 7 8 #include <linux/syscalls.h> 9 #include <asm/cacheflush.h> 10 11 static long riscv_sys_mmap(unsigned long addr, unsigned long len, 12 unsigned long prot, unsigned long flags,
+17 -89
arch/riscv/kernel/traps_misaligned.c
··· 264 #define GET_F32_RS2C(insn, regs) (get_f32_rs(insn, 2, regs)) 265 #define GET_F32_RS2S(insn, regs) (get_f32_rs(RVC_RS2S(insn), 0, regs)) 266 267 - #ifdef CONFIG_RISCV_M_MODE 268 - static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val) 269 - { 270 - u8 val; 271 - 272 - asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*addr)); 273 - *r_val = val; 274 - 275 - return 0; 276 - } 277 - 278 - static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val) 279 - { 280 - asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*addr)); 281 - 282 - return 0; 283 - } 284 - 285 - static inline int get_insn(struct pt_regs *regs, ulong mepc, ulong *r_insn) 286 - { 287 - register ulong __mepc asm ("a2") = mepc; 288 - ulong val, rvc_mask = 3, tmp; 289 - 290 - asm ("and %[tmp], %[addr], 2\n" 291 - "bnez %[tmp], 1f\n" 292 - #if defined(CONFIG_64BIT) 293 - __stringify(LWU) " %[insn], (%[addr])\n" 294 - #else 295 - __stringify(LW) " %[insn], (%[addr])\n" 296 - #endif 297 - "and %[tmp], %[insn], %[rvc_mask]\n" 298 - "beq %[tmp], %[rvc_mask], 2f\n" 299 - "sll %[insn], %[insn], %[xlen_minus_16]\n" 300 - "srl %[insn], %[insn], %[xlen_minus_16]\n" 301 - "j 2f\n" 302 - "1:\n" 303 - "lhu %[insn], (%[addr])\n" 304 - "and %[tmp], %[insn], %[rvc_mask]\n" 305 - "bne %[tmp], %[rvc_mask], 2f\n" 306 - "lhu %[tmp], 2(%[addr])\n" 307 - "sll %[tmp], %[tmp], 16\n" 308 - "add %[insn], %[insn], %[tmp]\n" 309 - "2:" 310 - : [insn] "=&r" (val), [tmp] "=&r" (tmp) 311 - : [addr] "r" (__mepc), [rvc_mask] "r" (rvc_mask), 312 - [xlen_minus_16] "i" (XLEN_MINUS_16)); 313 - 314 - *r_insn = val; 315 - 316 - return 0; 317 - } 318 - #else 319 - static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val) 320 - { 321 - if (user_mode(regs)) { 322 - return __get_user(*r_val, (u8 __user *)addr); 323 - } else { 324 - *r_val = *addr; 325 - return 0; 326 - } 327 - } 328 - 329 - static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val) 330 - { 331 - if (user_mode(regs)) { 332 - return __put_user(val, (u8 __user *)addr); 333 - } else { 334 - *addr = val; 335 - return 0; 336 - } 337 - } 338 - 339 - #define __read_insn(regs, insn, insn_addr) \ 340 ({ \ 341 int __ret; \ 342 \ 343 if (user_mode(regs)) { \ 344 - __ret = __get_user(insn, insn_addr); \ 345 } else { \ 346 - insn = *(__force u16 *)insn_addr; \ 347 __ret = 0; \ 348 } \ 349 \ ··· 284 285 if (epc & 0x2) { 286 ulong tmp = 0; 287 - u16 __user *insn_addr = (u16 __user *)epc; 288 289 - if (__read_insn(regs, insn, insn_addr)) 290 return -EFAULT; 291 /* __get_user() uses regular "lw" which sign extend the loaded 292 * value make sure to clear higher order bits in case we "or" it ··· 296 *r_insn = insn; 297 return 0; 298 } 299 - insn_addr++; 300 - if (__read_insn(regs, tmp, insn_addr)) 301 return -EFAULT; 302 *r_insn = (tmp << 16) | insn; 303 304 return 0; 305 } else { 306 - u32 __user *insn_addr = (u32 __user *)epc; 307 - 308 - if (__read_insn(regs, insn, insn_addr)) 309 return -EFAULT; 310 if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) { 311 *r_insn = insn; ··· 315 return 0; 316 } 317 } 318 - #endif 319 320 union reg_data { 321 u8 data_bytes[8]; ··· 333 unsigned long epc = regs->epc; 334 unsigned long insn; 335 unsigned long addr = regs->badaddr; 336 - int i, fp = 0, shift = 0, len = 0; 337 338 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr); 339 ··· 416 return -EOPNOTSUPP; 417 418 val.data_u64 = 0; 419 - for (i = 0; i < len; i++) { 420 - if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i])) 421 return -1; 422 } 423 424 if (!fp) ··· 
441 unsigned long epc = regs->epc; 442 unsigned long insn; 443 unsigned long addr = regs->badaddr; 444 - int i, len = 0, fp = 0; 445 446 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr); 447 ··· 514 if (!IS_ENABLED(CONFIG_FPU) && fp) 515 return -EOPNOTSUPP; 516 517 - for (i = 0; i < len; i++) { 518 - if (store_u8(regs, (void *)(addr + i), val.data_bytes[i])) 519 return -1; 520 } 521 522 regs->epc = epc + INSN_LEN(insn);
··· 264 #define GET_F32_RS2C(insn, regs) (get_f32_rs(insn, 2, regs)) 265 #define GET_F32_RS2S(insn, regs) (get_f32_rs(RVC_RS2S(insn), 0, regs)) 266 267 + #define __read_insn(regs, insn, insn_addr, type) \ 268 ({ \ 269 int __ret; \ 270 \ 271 if (user_mode(regs)) { \ 272 + __ret = __get_user(insn, (type __user *) insn_addr); \ 273 } else { \ 274 + insn = *(type *)insn_addr; \ 275 __ret = 0; \ 276 } \ 277 \ ··· 356 357 if (epc & 0x2) { 358 ulong tmp = 0; 359 360 + if (__read_insn(regs, insn, epc, u16)) 361 return -EFAULT; 362 /* __get_user() uses regular "lw" which sign extend the loaded 363 * value make sure to clear higher order bits in case we "or" it ··· 369 *r_insn = insn; 370 return 0; 371 } 372 + epc += sizeof(u16); 373 + if (__read_insn(regs, tmp, epc, u16)) 374 return -EFAULT; 375 *r_insn = (tmp << 16) | insn; 376 377 return 0; 378 } else { 379 + if (__read_insn(regs, insn, epc, u32)) 380 return -EFAULT; 381 if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) { 382 *r_insn = insn; ··· 390 return 0; 391 } 392 } 393 394 union reg_data { 395 u8 data_bytes[8]; ··· 409 unsigned long epc = regs->epc; 410 unsigned long insn; 411 unsigned long addr = regs->badaddr; 412 + int fp = 0, shift = 0, len = 0; 413 414 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr); 415 ··· 492 return -EOPNOTSUPP; 493 494 val.data_u64 = 0; 495 + if (user_mode(regs)) { 496 + if (raw_copy_from_user(&val, (u8 __user *)addr, len)) 497 return -1; 498 + } else { 499 + memcpy(&val, (u8 *)addr, len); 500 } 501 502 if (!fp) ··· 515 unsigned long epc = regs->epc; 516 unsigned long insn; 517 unsigned long addr = regs->badaddr; 518 + int len = 0, fp = 0; 519 520 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr); 521 ··· 588 if (!IS_ENABLED(CONFIG_FPU) && fp) 589 return -EOPNOTSUPP; 590 591 + if (user_mode(regs)) { 592 + if (raw_copy_to_user((u8 __user *)addr, &val, len)) 593 return -1; 594 + } else { 595 + memcpy((u8 *)addr, &val, len); 596 } 597 598 regs->epc = epc + INSN_LEN(insn);
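The split handling above (epc & 0x2, then the __INSN_LENGTH_MASK check) follows from how RISC-V encodes instruction length: if bits [1:0] of the first halfword are 0b11 the instruction is at least 32 bits wide, otherwise it is a 16-bit compressed one. A stand-alone illustration of that rule (plain user-space C, not kernel code):

    #include <stdint.h>
    #include <stdio.h>

    /* Length in bytes of a RISC-V instruction, given its first halfword. */
    static unsigned int insn_len(uint16_t first_halfword)
    {
            /* Bits [1:0] == 0b11 -> 32 bits or more; anything else is compressed. */
            return (first_halfword & 0x3) == 0x3 ? 4 : 2;
    }

    int main(void)
    {
            uint16_t c_nop  = 0x0001;  /* c.nop (compressed) */
            uint16_t nop_lo = 0x0013;  /* low halfword of "addi x0, x0, 0" */

            printf("c.nop -> %u bytes\n", insn_len(c_nop));   /* 2 */
            printf("nop   -> %u bytes\n", insn_len(nop_lo));  /* 4 */
            return 0;
    }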
+1 -4
arch/riscv/mm/Makefile
··· 13 KCOV_INSTRUMENT_init.o := n 14 15 obj-y += init.o 16 - obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o pgtable.o 17 obj-y += cacheflush.o 18 obj-y += context.o 19 obj-y += pmem.o 20 21 - ifeq ($(CONFIG_MMU),y) 22 - obj-$(CONFIG_SMP) += tlbflush.o 23 - endif 24 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 25 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o 26 obj-$(CONFIG_KASAN) += kasan_init.o
··· 13 KCOV_INSTRUMENT_init.o := n 14 15 obj-y += init.o 16 + obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o pgtable.o tlbflush.o 17 obj-y += cacheflush.o 18 obj-y += context.o 19 obj-y += pmem.o 20 21 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 22 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o 23 obj-$(CONFIG_KASAN) += kasan_init.o
+117 -3
arch/riscv/mm/cacheflush.c
··· 5 6 #include <linux/acpi.h> 7 #include <linux/of.h> 8 #include <asm/acpi.h> 9 #include <asm/cacheflush.h> 10 ··· 22 { 23 local_flush_icache_all(); 24 25 - if (IS_ENABLED(CONFIG_RISCV_SBI) && !riscv_use_ipi_for_rfence()) 26 sbi_remote_fence_i(NULL); 27 else 28 on_each_cpu(ipi_remote_fence_i, NULL, 1); ··· 72 * with flush_icache_deferred(). 73 */ 74 smp_mb(); 75 - } else if (IS_ENABLED(CONFIG_RISCV_SBI) && 76 - !riscv_use_ipi_for_rfence()) { 77 sbi_remote_fence_i(&others); 78 } else { 79 on_each_cpu_mask(&others, ipi_remote_fence_i, NULL, 1); ··· 153 154 if (cboz_block_size) 155 riscv_cboz_block_size = cboz_block_size; 156 }
··· 5 6 #include <linux/acpi.h> 7 #include <linux/of.h> 8 + #include <linux/prctl.h> 9 #include <asm/acpi.h> 10 #include <asm/cacheflush.h> 11 ··· 21 { 22 local_flush_icache_all(); 23 24 + if (num_online_cpus() < 2) 25 + return; 26 + else if (riscv_use_sbi_for_rfence()) 27 sbi_remote_fence_i(NULL); 28 else 29 on_each_cpu(ipi_remote_fence_i, NULL, 1); ··· 69 * with flush_icache_deferred(). 70 */ 71 smp_mb(); 72 + } else if (riscv_use_sbi_for_rfence()) { 73 sbi_remote_fence_i(&others); 74 } else { 75 on_each_cpu_mask(&others, ipi_remote_fence_i, NULL, 1); ··· 151 152 if (cboz_block_size) 153 riscv_cboz_block_size = cboz_block_size; 154 + } 155 + 156 + #ifdef CONFIG_SMP 157 + static void set_icache_stale_mask(void) 158 + { 159 + cpumask_t *mask; 160 + bool stale_cpu; 161 + 162 + /* 163 + * Mark every other hart's icache as needing a flush for 164 + * this MM. Maintain the previous value of the current 165 + * cpu to handle the case when this function is called 166 + * concurrently on different harts. 167 + */ 168 + mask = &current->mm->context.icache_stale_mask; 169 + stale_cpu = cpumask_test_cpu(smp_processor_id(), mask); 170 + 171 + cpumask_setall(mask); 172 + cpumask_assign_cpu(smp_processor_id(), mask, stale_cpu); 173 + } 174 + #endif 175 + 176 + /** 177 + * riscv_set_icache_flush_ctx() - Enable/disable icache flushing instructions in 178 + * userspace. 179 + * @ctx: Set the type of icache flushing instructions permitted/prohibited in 180 + * userspace. Supported values described below. 181 + * 182 + * Supported values for ctx: 183 + * 184 + * * %PR_RISCV_CTX_SW_FENCEI_ON: Allow fence.i in user space. 185 + * 186 + * * %PR_RISCV_CTX_SW_FENCEI_OFF: Disallow fence.i in user space. All threads in 187 + * a process will be affected when ``scope == PR_RISCV_SCOPE_PER_PROCESS``. 188 + * Therefore, caution must be taken; use this flag only when you can guarantee 189 + * that no thread in the process will emit fence.i from this point onward. 190 + * 191 + * @scope: Set scope of where icache flushing instructions are allowed to be 192 + * emitted. Supported values described below. 193 + * 194 + * Supported values for scope: 195 + * 196 + * * %PR_RISCV_SCOPE_PER_PROCESS: Ensure the icache of any thread in this process 197 + * is coherent with instruction storage upon 198 + * migration. 199 + * 200 + * * %PR_RISCV_SCOPE_PER_THREAD: Ensure the icache of the current thread is 201 + * coherent with instruction storage upon 202 + * migration. 203 + * 204 + * When ``scope == PR_RISCV_SCOPE_PER_PROCESS``, all threads in the process are 205 + * permitted to emit icache flushing instructions. Whenever any thread in the 206 + * process is migrated, the corresponding hart's icache will be guaranteed to be 207 + * consistent with instruction storage. This does not enforce any guarantees 208 + * outside of migration. If a thread modifies an instruction that another thread 209 + * may attempt to execute, the other thread must still emit an icache flushing 210 + * instruction before attempting to execute the potentially modified 211 + * instruction. This must be performed by the user-space program. 212 + * 213 + * In per-thread context (eg. ``scope == PR_RISCV_SCOPE_PER_THREAD``) only the 214 + * thread calling this function is permitted to emit icache flushing 215 + * instructions. When the thread is migrated, the corresponding hart's icache 216 + * will be guaranteed to be consistent with instruction storage. 
217 + * 218 + * On kernels configured without SMP, this function is a nop as migrations 219 + * across harts will not occur. 220 + */ 221 + int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope) 222 + { 223 + #ifdef CONFIG_SMP 224 + switch (ctx) { 225 + case PR_RISCV_CTX_SW_FENCEI_ON: 226 + switch (scope) { 227 + case PR_RISCV_SCOPE_PER_PROCESS: 228 + current->mm->context.force_icache_flush = true; 229 + break; 230 + case PR_RISCV_SCOPE_PER_THREAD: 231 + current->thread.force_icache_flush = true; 232 + break; 233 + default: 234 + return -EINVAL; 235 + } 236 + break; 237 + case PR_RISCV_CTX_SW_FENCEI_OFF: 238 + switch (scope) { 239 + case PR_RISCV_SCOPE_PER_PROCESS: 240 + current->mm->context.force_icache_flush = false; 241 + 242 + set_icache_stale_mask(); 243 + break; 244 + case PR_RISCV_SCOPE_PER_THREAD: 245 + current->thread.force_icache_flush = false; 246 + 247 + set_icache_stale_mask(); 248 + break; 249 + default: 250 + return -EINVAL; 251 + } 252 + break; 253 + default: 254 + return -EINVAL; 255 + } 256 + return 0; 257 + #else 258 + switch (ctx) { 259 + case PR_RISCV_CTX_SW_FENCEI_ON: 260 + case PR_RISCV_CTX_SW_FENCEI_OFF: 261 + return 0; 262 + default: 263 + return -EINVAL; 264 + } 265 + #endif 266 }
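A minimal sketch of how a user-space JIT might use the new prctl() before relying on fence.i directly; the constants come from the uapi hunk further down, and the actual code generation is elided:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>   /* PR_RISCV_SET_ICACHE_FLUSH_CTX and friends */

    int main(void)
    {
            /* Opt the whole process in to user-space fence.i. */
            if (prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON,
                      PR_RISCV_SCOPE_PER_PROCESS)) {
                    perror("prctl");
                    return 1;
            }

            /* ... generate or patch instructions here ... */

            /* A plain fence.i now suffices; migration is handled by the kernel. */
            __asm__ __volatile__ ("fence.i" ::: "memory");

            puts("instruction storage and icache synchronized");
            return 0;
    }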
+21 -21
arch/riscv/mm/context.c
··· 15 #include <asm/tlbflush.h> 16 #include <asm/cacheflush.h> 17 #include <asm/mmu_context.h> 18 19 #ifdef CONFIG_MMU 20 21 DEFINE_STATIC_KEY_FALSE(use_asid_allocator); 22 23 - static unsigned long asid_bits; 24 static unsigned long num_asids; 25 - unsigned long asid_mask; 26 27 static atomic_long_t current_version; 28 ··· 80 if (cntx == 0) 81 cntx = per_cpu(reserved_context, i); 82 83 - __set_bit(cntx & asid_mask, context_asid_map); 84 per_cpu(reserved_context, i) = cntx; 85 } 86 ··· 101 lockdep_assert_held(&context_lock); 102 103 if (cntx != 0) { 104 - unsigned long newcntx = ver | (cntx & asid_mask); 105 106 /* 107 * If our current CONTEXT was active during a rollover, we ··· 114 * We had a valid CONTEXT in a previous life, so try to 115 * re-use it if possible. 116 */ 117 - if (!__test_and_set_bit(cntx & asid_mask, context_asid_map)) 118 return newcntx; 119 } 120 ··· 127 goto set_asid; 128 129 /* We're out of ASIDs, so increment current_version */ 130 - ver = atomic_long_add_return_relaxed(num_asids, &current_version); 131 132 /* Flush everything */ 133 __flush_context(); ··· 167 */ 168 old_active_cntx = atomic_long_read(&per_cpu(active_context, cpu)); 169 if (old_active_cntx && 170 - ((cntx & ~asid_mask) == atomic_long_read(&current_version)) && 171 atomic_long_cmpxchg_relaxed(&per_cpu(active_context, cpu), 172 old_active_cntx, cntx)) 173 goto switch_mm_fast; ··· 176 177 /* Check that our ASID belongs to the current_version. */ 178 cntx = atomic_long_read(&mm->context.id); 179 - if ((cntx & ~asid_mask) != atomic_long_read(&current_version)) { 180 cntx = __new_context(mm); 181 atomic_long_set(&mm->context.id, cntx); 182 } ··· 190 191 switch_mm_fast: 192 csr_write(CSR_SATP, virt_to_pfn(mm->pgd) | 193 - ((cntx & asid_mask) << SATP_ASID_SHIFT) | 194 satp_mode); 195 196 if (need_flush_tlb) ··· 201 { 202 /* Switch the page table and blindly nuke entire local TLB */ 203 csr_write(CSR_SATP, virt_to_pfn(mm->pgd) | satp_mode); 204 - local_flush_tlb_all(); 205 } 206 207 static inline void set_mm(struct mm_struct *prev, ··· 226 227 static int __init asids_init(void) 228 { 229 - unsigned long old; 230 231 /* Figure-out number of ASID bits in HW */ 232 old = csr_read(CSR_SATP); ··· 246 /* Pre-compute ASID details */ 247 if (asid_bits) { 248 num_asids = 1 << asid_bits; 249 - asid_mask = num_asids - 1; 250 } 251 252 /* ··· 253 * at-least twice more than CPUs 254 */ 255 if (num_asids > (2 * num_possible_cpus())) { 256 - atomic_long_set(&current_version, num_asids); 257 258 context_asid_map = bitmap_zalloc(num_asids, GFP_KERNEL); 259 if (!context_asid_map) ··· 295 * 296 * The "cpu" argument must be the current local CPU number. 297 */ 298 - static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu) 299 { 300 #ifdef CONFIG_SMP 301 - cpumask_t *mask = &mm->context.icache_stale_mask; 302 - 303 - if (cpumask_test_cpu(cpu, mask)) { 304 - cpumask_clear_cpu(cpu, mask); 305 /* 306 * Ensure the remote hart's writes are visible to this hart. 307 * This pairs with a barrier in flush_icache_mm. 308 */ 309 smp_mb(); 310 - local_flush_icache_all(); 311 - } 312 313 #endif 314 } 315 ··· 334 335 set_mm(prev, next, cpu); 336 337 - flush_icache_deferred(next, cpu); 338 }
··· 15 #include <asm/tlbflush.h> 16 #include <asm/cacheflush.h> 17 #include <asm/mmu_context.h> 18 + #include <asm/switch_to.h> 19 20 #ifdef CONFIG_MMU 21 22 DEFINE_STATIC_KEY_FALSE(use_asid_allocator); 23 24 static unsigned long num_asids; 25 26 static atomic_long_t current_version; 27 ··· 81 if (cntx == 0) 82 cntx = per_cpu(reserved_context, i); 83 84 + __set_bit(cntx2asid(cntx), context_asid_map); 85 per_cpu(reserved_context, i) = cntx; 86 } 87 ··· 102 lockdep_assert_held(&context_lock); 103 104 if (cntx != 0) { 105 + unsigned long newcntx = ver | cntx2asid(cntx); 106 107 /* 108 * If our current CONTEXT was active during a rollover, we ··· 115 * We had a valid CONTEXT in a previous life, so try to 116 * re-use it if possible. 117 */ 118 + if (!__test_and_set_bit(cntx2asid(cntx), context_asid_map)) 119 return newcntx; 120 } 121 ··· 128 goto set_asid; 129 130 /* We're out of ASIDs, so increment current_version */ 131 + ver = atomic_long_add_return_relaxed(BIT(SATP_ASID_BITS), &current_version); 132 133 /* Flush everything */ 134 __flush_context(); ··· 168 */ 169 old_active_cntx = atomic_long_read(&per_cpu(active_context, cpu)); 170 if (old_active_cntx && 171 + (cntx2version(cntx) == atomic_long_read(&current_version)) && 172 atomic_long_cmpxchg_relaxed(&per_cpu(active_context, cpu), 173 old_active_cntx, cntx)) 174 goto switch_mm_fast; ··· 177 178 /* Check that our ASID belongs to the current_version. */ 179 cntx = atomic_long_read(&mm->context.id); 180 + if (cntx2version(cntx) != atomic_long_read(&current_version)) { 181 cntx = __new_context(mm); 182 atomic_long_set(&mm->context.id, cntx); 183 } ··· 191 192 switch_mm_fast: 193 csr_write(CSR_SATP, virt_to_pfn(mm->pgd) | 194 + (cntx2asid(cntx) << SATP_ASID_SHIFT) | 195 satp_mode); 196 197 if (need_flush_tlb) ··· 202 { 203 /* Switch the page table and blindly nuke entire local TLB */ 204 csr_write(CSR_SATP, virt_to_pfn(mm->pgd) | satp_mode); 205 + local_flush_tlb_all_asid(0); 206 } 207 208 static inline void set_mm(struct mm_struct *prev, ··· 227 228 static int __init asids_init(void) 229 { 230 + unsigned long asid_bits, old; 231 232 /* Figure-out number of ASID bits in HW */ 233 old = csr_read(CSR_SATP); ··· 247 /* Pre-compute ASID details */ 248 if (asid_bits) { 249 num_asids = 1 << asid_bits; 250 } 251 252 /* ··· 255 * at-least twice more than CPUs 256 */ 257 if (num_asids > (2 * num_possible_cpus())) { 258 + atomic_long_set(&current_version, BIT(SATP_ASID_BITS)); 259 260 context_asid_map = bitmap_zalloc(num_asids, GFP_KERNEL); 261 if (!context_asid_map) ··· 297 * 298 * The "cpu" argument must be the current local CPU number. 299 */ 300 + static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu, 301 + struct task_struct *task) 302 { 303 #ifdef CONFIG_SMP 304 + if (cpumask_test_and_clear_cpu(cpu, &mm->context.icache_stale_mask)) { 305 /* 306 * Ensure the remote hart's writes are visible to this hart. 307 * This pairs with a barrier in flush_icache_mm. 308 */ 309 smp_mb(); 310 311 + /* 312 + * If cache will be flushed in switch_to, no need to flush here. 313 + */ 314 + if (!(task && switch_to_should_flush_icache(task))) 315 + local_flush_icache_all(); 316 + } 317 #endif 318 } 319 ··· 334 335 set_mm(prev, next, cpu); 336 337 + flush_icache_deferred(next, cpu, task); 338 }
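The cntx2asid()/cntx2version() helpers used above encode the new fixed layout of the context ID: the ASID sits in the low SATP_ASID_BITS and the version in the bits above it. Their definitions are not in this hunk; the series presumably adds something along these lines (the header location is an assumption):

    /* Presumed helpers, e.g. in arch/riscv/include/asm/mmu.h: */
    #define cntx2asid(cntx)        ((cntx) & SATP_ASID_MASK)
    #define cntx2version(cntx)     ((cntx) & ~SATP_ASID_MASK)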
+17 -3
arch/riscv/mm/init.c
··· 50 EXPORT_SYMBOL(satp_mode); 51 52 #ifdef CONFIG_64BIT 53 - bool pgtable_l4_enabled = IS_ENABLED(CONFIG_64BIT) && !IS_ENABLED(CONFIG_XIP_KERNEL); 54 - bool pgtable_l5_enabled = IS_ENABLED(CONFIG_64BIT) && !IS_ENABLED(CONFIG_XIP_KERNEL); 55 EXPORT_SYMBOL(pgtable_l4_enabled); 56 EXPORT_SYMBOL(pgtable_l5_enabled); 57 #endif ··· 162 163 void __init mem_init(void) 164 { 165 #ifdef CONFIG_FLATMEM 166 BUG_ON(!mem_map); 167 #endif /* CONFIG_FLATMEM */ 168 169 - swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE); 170 memblock_free_all(); 171 172 print_vm_layout();
··· 50 EXPORT_SYMBOL(satp_mode); 51 52 #ifdef CONFIG_64BIT 53 + bool pgtable_l4_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL); 54 + bool pgtable_l5_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL); 55 EXPORT_SYMBOL(pgtable_l4_enabled); 56 EXPORT_SYMBOL(pgtable_l5_enabled); 57 #endif ··· 162 163 void __init mem_init(void) 164 { 165 + bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); 166 #ifdef CONFIG_FLATMEM 167 BUG_ON(!mem_map); 168 #endif /* CONFIG_FLATMEM */ 169 170 + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb && 171 + dma_cache_alignment != 1) { 172 + /* 173 + * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb 174 + * buffer per 1GB of RAM for kmalloc() bouncing on 175 + * non-coherent platforms. 176 + */ 177 + unsigned long size = 178 + DIV_ROUND_UP(memblock_phys_mem_size(), 1024); 179 + swiotlb_adjust_size(min(swiotlb_size_or_default(), size)); 180 + swiotlb = true; 181 + } 182 + 183 + swiotlb_init(swiotlb, SWIOTLB_VERBOSE); 184 memblock_free_all(); 185 186 print_vm_layout();
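As a worked example of the new sizing rule: a non-coherent board with 2 GiB of RAM, all of it below the 32-bit DMA limit, previously skipped swiotlb entirely and left kmalloc() bouncing without a buffer; now it gets DIV_ROUND_UP(2 GiB, 1024) = 2 MiB of bounce buffer, capped by swiotlb_size_or_default() (64 MiB unless overridden on the command line; treat that default as an assumption for your configuration). The arithmetic, as plain C:

    #include <stdio.h>

    int main(void)
    {
            unsigned long long ram    = 2ULL << 30;            /* 2 GiB of RAM */
            unsigned long long dflt   = 64ULL << 20;           /* assumed swiotlb default */
            unsigned long long size   = (ram + 1023) / 1024;   /* DIV_ROUND_UP(ram, 1024) */
            unsigned long long chosen = size < dflt ? size : dflt;

            printf("kmalloc() bounce buffer: %llu MiB\n", chosen >> 20);  /* 2 */
            return 0;
    }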
+22 -55
arch/riscv/mm/tlbflush.c
··· 7 #include <asm/sbi.h> 8 #include <asm/mmu_context.h> 9 10 - static inline void local_flush_tlb_all_asid(unsigned long asid) 11 - { 12 - if (asid != FLUSH_TLB_NO_ASID) 13 - __asm__ __volatile__ ("sfence.vma x0, %0" 14 - : 15 - : "r" (asid) 16 - : "memory"); 17 - else 18 - local_flush_tlb_all(); 19 - } 20 - 21 - static inline void local_flush_tlb_page_asid(unsigned long addr, 22 - unsigned long asid) 23 - { 24 - if (asid != FLUSH_TLB_NO_ASID) 25 - __asm__ __volatile__ ("sfence.vma %0, %1" 26 - : 27 - : "r" (addr), "r" (asid) 28 - : "memory"); 29 - else 30 - local_flush_tlb_page(addr); 31 - } 32 - 33 /* 34 * Flush entire TLB if number of entries to be flushed is greater 35 * than the threshold below. 36 */ 37 - static unsigned long tlb_flush_all_threshold __read_mostly = 64; 38 39 static void local_flush_tlb_range_threshold_asid(unsigned long start, 40 unsigned long size, ··· 56 57 void flush_tlb_all(void) 58 { 59 - if (riscv_use_ipi_for_rfence()) 60 - on_each_cpu(__ipi_flush_tlb_all, NULL, 1); 61 - else 62 sbi_remote_sfence_vma_asid(NULL, 0, FLUSH_TLB_MAX_SIZE, FLUSH_TLB_NO_ASID); 63 } 64 65 struct flush_tlb_range_data { ··· 82 unsigned long start, unsigned long size, 83 unsigned long stride) 84 { 85 - struct flush_tlb_range_data ftd; 86 - bool broadcast; 87 88 if (cpumask_empty(cmask)) 89 return; 90 91 - if (cmask != cpu_online_mask) { 92 - unsigned int cpuid; 93 94 - cpuid = get_cpu(); 95 - /* check if the tlbflush needs to be sent to other CPUs */ 96 - broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids; 97 - } else { 98 - broadcast = true; 99 - } 100 - 101 - if (broadcast) { 102 - if (riscv_use_ipi_for_rfence()) { 103 - ftd.asid = asid; 104 - ftd.start = start; 105 - ftd.size = size; 106 - ftd.stride = stride; 107 - on_each_cpu_mask(cmask, 108 - __ipi_flush_tlb_range_asid, 109 - &ftd, 1); 110 - } else 111 - sbi_remote_sfence_vma_asid(cmask, 112 - start, size, asid); 113 - } else { 114 local_flush_tlb_range_asid(start, size, stride, asid); 115 } 116 117 - if (cmask != cpu_online_mask) 118 - put_cpu(); 119 } 120 121 static inline unsigned long get_mm_asid(struct mm_struct *mm) 122 { 123 - return static_branch_unlikely(&use_asid_allocator) ? 124 - atomic_long_read(&mm->context.id) & asid_mask : FLUSH_TLB_NO_ASID; 125 } 126 127 void flush_tlb_mm(struct mm_struct *mm)
··· 7 #include <asm/sbi.h> 8 #include <asm/mmu_context.h> 9 10 /* 11 * Flush entire TLB if number of entries to be flushed is greater 12 * than the threshold below. 13 */ 14 + unsigned long tlb_flush_all_threshold __read_mostly = 64; 15 16 static void local_flush_tlb_range_threshold_asid(unsigned long start, 17 unsigned long size, ··· 79 80 void flush_tlb_all(void) 81 { 82 + if (num_online_cpus() < 2) 83 + local_flush_tlb_all(); 84 + else if (riscv_use_sbi_for_rfence()) 85 sbi_remote_sfence_vma_asid(NULL, 0, FLUSH_TLB_MAX_SIZE, FLUSH_TLB_NO_ASID); 86 + else 87 + on_each_cpu(__ipi_flush_tlb_all, NULL, 1); 88 } 89 90 struct flush_tlb_range_data { ··· 103 unsigned long start, unsigned long size, 104 unsigned long stride) 105 { 106 + unsigned int cpu; 107 108 if (cpumask_empty(cmask)) 109 return; 110 111 + cpu = get_cpu(); 112 113 + /* Check if the TLB flush needs to be sent to other CPUs. */ 114 + if (cpumask_any_but(cmask, cpu) >= nr_cpu_ids) { 115 local_flush_tlb_range_asid(start, size, stride, asid); 116 + } else if (riscv_use_sbi_for_rfence()) { 117 + sbi_remote_sfence_vma_asid(cmask, start, size, asid); 118 + } else { 119 + struct flush_tlb_range_data ftd; 120 + 121 + ftd.asid = asid; 122 + ftd.start = start; 123 + ftd.size = size; 124 + ftd.stride = stride; 125 + on_each_cpu_mask(cmask, __ipi_flush_tlb_range_asid, &ftd, 1); 126 } 127 128 + put_cpu(); 129 } 130 131 static inline unsigned long get_mm_asid(struct mm_struct *mm) 132 { 133 + return cntx2asid(atomic_long_read(&mm->context.id)); 134 } 135 136 void flush_tlb_mm(struct mm_struct *mm)
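For scale: with 4 KiB base pages, the tlb_flush_all_threshold of 64 entries means a range flush covering more than about 256 KiB gives up on per-page sfence.vma and presumably flushes everything for that ASID instead; tearing down a 1 MiB mapping, for instance, would otherwise cost 256 individual flushes.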
+1 -1
drivers/clocksource/timer-clint.c
··· 251 } 252 253 irq_set_chained_handler(clint_ipi_irq, clint_ipi_interrupt); 254 - riscv_ipi_set_virq_range(rc, BITS_PER_BYTE, true); 255 clint_clear_ipi(); 256 #endif 257
··· 251 } 252 253 irq_set_chained_handler(clint_ipi_irq, clint_ipi_interrupt); 254 + riscv_ipi_set_virq_range(rc, BITS_PER_BYTE); 255 clint_clear_ipi(); 256 #endif 257
+16
include/linux/cpumask.h
··· 544 } 545 546 /** 547 * cpumask_test_cpu - test for a cpu in a cpumask 548 * @cpu: cpu number (< nr_cpu_ids) 549 * @cpumask: the cpumask pointer
··· 544 } 545 546 /** 547 + * cpumask_assign_cpu - assign a cpu in a cpumask 548 + * @cpu: cpu number (< nr_cpu_ids) 549 + * @dstp: the cpumask pointer 550 + * @value: the value to assign 551 + */ 552 + static __always_inline void cpumask_assign_cpu(int cpu, struct cpumask *dstp, bool value) 553 + { 554 + assign_bit(cpumask_check(cpu), cpumask_bits(dstp), value); 555 + } 556 + 557 + static __always_inline void __cpumask_assign_cpu(int cpu, struct cpumask *dstp, bool value) 558 + { 559 + __assign_bit(cpumask_check(cpu), cpumask_bits(dstp), value); 560 + } 561 + 562 + /** 563 * cpumask_test_cpu - test for a cpu in a cpumask 564 * @cpu: cpu number (< nr_cpu_ids) 565 * @cpumask: the cpumask pointer
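cpumask_assign_cpu() is a thin wrapper around assign_bit(), which is what the icache-stale bookkeeping in cacheflush.c above relies on. Semantically it is the set-or-clear pair spelled as one call; an illustrative (not verbatim) equivalent:

    /* Illustrative equivalence only, not the kernel's implementation. */
    static inline void cpumask_assign_cpu_equiv(int cpu, struct cpumask *dstp, bool value)
    {
            if (value)
                    cpumask_set_cpu(cpu, dstp);    /* atomic, like assign_bit() */
            else
                    cpumask_clear_cpu(cpu, dstp);
            /* __cpumask_assign_cpu() is the non-atomic (__assign_bit) variant. */
    }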
+6
include/uapi/linux/prctl.h
··· 306 # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc 307 # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f 308 309 /* PowerPC Dynamic Execution Control Register (DEXCR) controls */ 310 #define PR_PPC_GET_DEXCR 72 311 #define PR_PPC_SET_DEXCR 73
··· 306 # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc 307 # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f 308 309 + #define PR_RISCV_SET_ICACHE_FLUSH_CTX 71 310 + # define PR_RISCV_CTX_SW_FENCEI_ON 0 311 + # define PR_RISCV_CTX_SW_FENCEI_OFF 1 312 + # define PR_RISCV_SCOPE_PER_PROCESS 0 313 + # define PR_RISCV_SCOPE_PER_THREAD 1 314 + 315 /* PowerPC Dynamic Execution Control Register (DEXCR) controls */ 316 #define PR_PPC_GET_DEXCR 72 317 #define PR_PPC_SET_DEXCR 73
+6
kernel/sys.c
··· 146 #ifndef RISCV_V_GET_CONTROL 147 # define RISCV_V_GET_CONTROL() (-EINVAL) 148 #endif 149 #ifndef PPC_GET_DEXCR_ASPECT 150 # define PPC_GET_DEXCR_ASPECT(a, b) (-EINVAL) 151 #endif ··· 2778 break; 2779 case PR_RISCV_V_GET_CONTROL: 2780 error = RISCV_V_GET_CONTROL(); 2781 break; 2782 default: 2783 error = -EINVAL;
··· 146 #ifndef RISCV_V_GET_CONTROL 147 # define RISCV_V_GET_CONTROL() (-EINVAL) 148 #endif 149 + #ifndef RISCV_SET_ICACHE_FLUSH_CTX 150 + # define RISCV_SET_ICACHE_FLUSH_CTX(a, b) (-EINVAL) 151 + #endif 152 #ifndef PPC_GET_DEXCR_ASPECT 153 # define PPC_GET_DEXCR_ASPECT(a, b) (-EINVAL) 154 #endif ··· 2775 break; 2776 case PR_RISCV_V_GET_CONTROL: 2777 error = RISCV_V_GET_CONTROL(); 2778 + break; 2779 + case PR_RISCV_SET_ICACHE_FLUSH_CTX: 2780 + error = RISCV_SET_ICACHE_FLUSH_CTX(arg2, arg3); 2781 break; 2782 default: 2783 error = -EINVAL;
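The RISCV_SET_ICACHE_FLUSH_CTX() fallback added here is only the generic side; the architecture is expected to override the macro from its own headers so the prctl() reaches riscv_set_icache_flush_ctx(). The override is not part of this hunk, but presumably looks roughly like:

    /* Presumed arch override, e.g. in arch/riscv/include/asm/processor.h: */
    #define RISCV_SET_ICACHE_FLUSH_CTX(ctx, scope) \
            riscv_set_icache_flush_ctx(ctx, scope)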
+6
scripts/generate_rust_target.rs
··· 150 // `llvm-target`s are taken from `scripts/Makefile.clang`. 151 if cfg.has("ARM64") { 152 panic!("arm64 uses the builtin rustc aarch64-unknown-none target"); 153 } else if cfg.has("X86_64") { 154 ts.push("arch", "x86_64"); 155 ts.push(
··· 150 // `llvm-target`s are taken from `scripts/Makefile.clang`. 151 if cfg.has("ARM64") { 152 panic!("arm64 uses the builtin rustc aarch64-unknown-none target"); 153 + } else if cfg.has("RISCV") { 154 + if cfg.has("64BIT") { 155 + panic!("64-bit RISC-V uses the builtin rustc riscv64-unknown-none-elf target"); 156 + } else { 157 + panic!("32-bit RISC-V is an unsupported architecture"); 158 + } 159 } else if cfg.has("X86_64") { 160 ts.push("arch", "x86_64"); 161 ts.push(