Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'riscv-for-linus-6.8-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Palmer Dabbelt:

- Support for many new extensions in hwprobe, along with a handful of
cleanups

- Various cleanups to our page table handling code, so we always use
{READ,WRITE}_ONCE

- Support for the which-cpus flavor of hwprobe

- Support for XIP kernels has been resurrected
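The {READ,WRITE}_ONCE page-table cleanup above can be sketched in a few lines of userspace C. This is only an illustration of the accessor pattern, not the kernel code: the macro definitions are simplified stand-ins for the kernel's compiler.h versions, and `pte_t` here is an invented placeholder type.

```c
#include <stdint.h>

/* Simplified userspace stand-ins for the kernel's READ_ONCE/WRITE_ONCE:
 * the volatile cast forces a single access that the compiler cannot
 * tear, merge, or re-read behind our back. */
#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

/* Hypothetical stand-in for a page-table entry type. */
typedef struct { uint64_t val; } pte_t;

/* Before the cleanup, set_pte() did a plain "*ptep = pteval;", which
 * the compiler may legally split into multiple stores. After: */
static inline void set_pte(pte_t *ptep, pte_t pteval)
{
	WRITE_ONCE(*ptep, pteval);
}

/* The matching read side, mirroring the new ptep_get()-style helpers. */
static inline pte_t ptep_get(pte_t *ptep)
{
	return READ_ONCE(*ptep);
}
```

In the kernel the same shape shows up as `set_pte()`, `set_pmd()`, `set_pud()` and friends switching to WRITE_ONCE, and readers switching to `ptep_get()`/`pgdp_get()`.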

* tag 'riscv-for-linus-6.8-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (52 commits)
riscv: hwprobe: export Zicond extension
riscv: hwprobe: export Zacas ISA extension
riscv: add ISA extension parsing for Zacas
dt-bindings: riscv: add Zacas ISA extension description
riscv: hwprobe: export Ztso ISA extension
riscv: add ISA extension parsing for Ztso
use linux/export.h rather than asm-generic/export.h
riscv: Remove SHADOW_OVERFLOW_STACK_SIZE macro
riscv: fix __user annotation in save_v_state()
riscv: fix __user annotation in traps_misaligned.c
riscv: Select ARCH_WANTS_NO_INSTR
riscv: Remove obsolete rv32_defconfig file
riscv: Allow disabling of BUILTIN_DTB for XIP
riscv: Fixed wrong register in XIP_FIXUP_FLASH_OFFSET macro
riscv: Make XIP bootable again
riscv: Fix set_direct_map_default_noflush() to reset _PAGE_EXEC
riscv: Fix module_alloc() that did not reset the linear mapping permissions
riscv: Fix wrong usage of lm_alias() when splitting a huge linear mapping
riscv: Check if the code to patch lies in the exit section
riscv: Use the same CPU operations for all CPUs
...

+1471 -726
+115 -7
Documentation/arch/riscv/hwprobe.rst
···
 };
 
 long sys_riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count,
-                       size_t cpu_count, cpu_set_t *cpus,
+                       size_t cpusetsize, cpu_set_t *cpus,
                        unsigned int flags);
 
 The arguments are split into three groups: an array of key-value pairs, a CPU
···
 must prepopulate the key field for each element, and the kernel will fill in the
 value if the key is recognized. If a key is unknown to the kernel, its key field
 will be cleared to -1, and its value set to 0. The CPU set is defined by
-CPU_SET(3). For value-like keys (eg. vendor/arch/impl), the returned value will
-be only be valid if all CPUs in the given set have the same value. Otherwise -1
-will be returned. For boolean-like keys, the value returned will be a logical
-AND of the values for the specified CPUs. Usermode can supply NULL for cpus and
-0 for cpu_count as a shortcut for all online CPUs. There are currently no flags,
-this value must be zero for future compatibility.
+CPU_SET(3) with size ``cpusetsize`` bytes. For value-like keys (eg. vendor,
+arch, impl), the returned value will only be valid if all CPUs in the given set
+have the same value. Otherwise -1 will be returned. For boolean-like keys, the
+value returned will be a logical AND of the values for the specified CPUs.
+Usermode can supply NULL for ``cpus`` and 0 for ``cpusetsize`` as a shortcut for
+all online CPUs. The currently supported flags are:
+
+* :c:macro:`RISCV_HWPROBE_WHICH_CPUS`: This flag basically reverses the behavior
+  of sys_riscv_hwprobe(). Instead of populating the values of keys for a given
+  set of CPUs, the values of each key are given and the set of CPUs is reduced
+  by sys_riscv_hwprobe() to only those which match each of the key-value pairs.
+  How matching is done depends on the key type. For value-like keys, matching
+  means to be the exact same as the value. For boolean-like keys, matching
+  means the result of a logical AND of the pair's value with the CPU's value is
+  exactly the same as the pair's value. Additionally, when ``cpus`` is an empty
+  set, then it is initialized to all online CPUs which fit within it, i.e. the
+  CPU set returned is the reduction of all the online CPUs which can be
+  represented with a CPU set of size ``cpusetsize``.
+
+All other flags are reserved for future compatibility and must be zero.
 
 On success 0 is returned, on failure a negative error code is returned.
···
 
 * :c:macro:`RISCV_HWPROBE_EXT_ZICBOZ`: The Zicboz extension is supported, as
   ratified in commit 3dd606f ("Create cmobase-v1.0.pdf") of riscv-CMOs.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBC` The Zbc extension is supported, as defined
+  in version 1.0 of the Bit-Manipulation ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBKB` The Zbkb extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBKC` The Zbkc extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBKX` The Zbkx extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZKND` The Zknd extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZKNE` The Zkne extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZKNH` The Zknh extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZKSED` The Zksed extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZKSH` The Zksh extension is supported, as
+  defined in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZKT` The Zkt extension is supported, as defined
+  in version 1.0 of the Scalar Crypto ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVBB`: The Zvbb extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVBC`: The Zvbc extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKB`: The Zvkb extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKG`: The Zvkg extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKNED`: The Zvkned extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKNHA`: The Zvknha extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKNHB`: The Zvknhb extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKSED`: The Zvksed extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKSH`: The Zvksh extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVKT`: The Zvkt extension is supported as
+  defined in version 1.0 of the RISC-V Cryptography Extensions Volume II.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZFH`: The Zfh extension version 1.0 is supported
+  as defined in the RISC-V ISA manual.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZFHMIN`: The Zfhmin extension version 1.0 is
+  supported as defined in the RISC-V ISA manual.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZIHINTNTL`: The Zihintntl extension version 1.0
+  is supported as defined in the RISC-V ISA manual.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVFH`: The Zvfh extension is supported as
+  defined in the RISC-V Vector manual starting from commit e2ccd0548d6c
+  ("Remove draft warnings from Zvfh[min]").
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZVFHMIN`: The Zvfhmin extension is supported as
+  defined in the RISC-V Vector manual starting from commit e2ccd0548d6c
+  ("Remove draft warnings from Zvfh[min]").
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZFA`: The Zfa extension is supported as
+  defined in the RISC-V ISA manual starting from commit 056b6ff467c7
+  ("Zfa is ratified").
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZTSO`: The Ztso extension is supported as
+  defined in the RISC-V ISA manual starting from commit 5618fb5a216b
+  ("Ztso is now ratified.")
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZACAS`: The Zacas extension is supported as
+  defined in the Atomic Compare-and-Swap (CAS) instructions manual starting
+  from commit 5059e0ca641c ("update to ratified").
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZICOND`: The Zicond extension is supported as
+  defined in the RISC-V Integer Conditional (Zicond) operations extension
+  manual starting from commit 95cf1f9 ("Add changes requested by Ved
+  during signoff")
 
 * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance
   information about the selected set of processors.
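The RISCV_HWPROBE_WHICH_CPUS matching rules documented in hwprobe.rst can be sketched in userspace C. This is only an illustration of the documented semantics with invented helper names and a toy 64-bit CPU bitmap; it is not the kernel's implementation and does not call the real syscall.

```c
#include <stdbool.h>
#include <stdint.h>

/* Value-like keys (e.g. mvendorid): a CPU matches only if its value is
 * exactly the pair's value. */
static bool match_value_key(uint64_t pair_val, uint64_t cpu_val)
{
	return cpu_val == pair_val;
}

/* Boolean-like keys (bitmasks such as IMA_EXT_0): a CPU matches if
 * ANDing the pair's value with the CPU's value gives back the pair's
 * value, i.e. every requested bit is set on that CPU. */
static bool match_bool_key(uint64_t pair_val, uint64_t cpu_val)
{
	return (pair_val & cpu_val) == pair_val;
}

/* Reduce a CPU bitmap to the CPUs whose extension mask satisfies the
 * request, mirroring how WHICH_CPUS narrows the "cpus" argument. */
static uint64_t reduce_cpus(uint64_t cpus, const uint64_t *cpu_ext_mask,
			    int ncpus, uint64_t requested)
{
	for (int cpu = 0; cpu < ncpus; cpu++)
		if ((cpus & (1ULL << cpu)) &&
		    !match_bool_key(requested, cpu_ext_mask[cpu]))
			cpus &= ~(1ULL << cpu);
	return cpus;
}
```

For example, requesting extension bits 0x5 against three CPUs with masks { 0xF, 0x1, 0x5 } drops only the middle CPU, which lacks bit 2.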
+1
Documentation/devicetree/bindings/riscv/cpus.yaml
···
     oneOf:
       - items:
           - enum:
+              - amd,mbv32
               - andestech,ax45mp
               - canaan,k210
               - sifive,bullet0
+219
Documentation/devicetree/bindings/riscv/extensions.yaml
··· 171 171 memory types as ratified in the 20191213 version of the privileged 172 172 ISA specification. 173 173 174 + - const: zacas 175 + description: | 176 + The Zacas extension for Atomic Compare-and-Swap (CAS) instructions 177 + is supported as ratified at commit 5059e0ca641c ("update to 178 + ratified") of the riscv-zacas. 179 + 174 180 - const: zba 175 181 description: | 176 182 The standard Zba bit-manipulation extension for address generation ··· 196 190 multiplication as ratified at commit 6d33919 ("Merge pull request 197 191 #158 from hirooih/clmul-fix-loop-end-condition") of riscv-bitmanip. 198 192 193 + - const: zbkb 194 + description: 195 + The standard Zbkb bitmanip instructions for cryptography as ratified 196 + in version 1.0 of RISC-V Cryptography Extensions Volume I 197 + specification. 198 + 199 + - const: zbkc 200 + description: 201 + The standard Zbkc carry-less multiply instructions as ratified 202 + in version 1.0 of RISC-V Cryptography Extensions Volume I 203 + specification. 204 + 205 + - const: zbkx 206 + description: 207 + The standard Zbkx crossbar permutation instructions as ratified 208 + in version 1.0 of RISC-V Cryptography Extensions Volume I 209 + specification. 210 + 199 211 - const: zbs 200 212 description: | 201 213 The standard Zbs bit-manipulation extension for single-bit 202 214 instructions as ratified at commit 6d33919 ("Merge pull request #158 203 215 from hirooih/clmul-fix-loop-end-condition") of riscv-bitmanip. 216 + 217 + - const: zfa 218 + description: 219 + The standard Zfa extension for additional floating point 220 + instructions, as ratified in commit 056b6ff ("Zfa is ratified") of 221 + riscv-isa-manual. 222 + 223 + - const: zfh 224 + description: 225 + The standard Zfh extension for 16-bit half-precision binary 226 + floating-point instructions, as ratified in commit 64074bc ("Update 227 + version numbers for Zfh/Zfinx") of riscv-isa-manual. 
228 + 229 + - const: zfhmin 230 + description: 231 + The standard Zfhmin extension which provides minimal support for 232 + 16-bit half-precision binary floating-point instructions, as ratified 233 + in commit 64074bc ("Update version numbers for Zfh/Zfinx") of 234 + riscv-isa-manual. 235 + 236 + - const: zk 237 + description: 238 + The standard Zk Standard Scalar cryptography extension as ratified 239 + in version 1.0 of RISC-V Cryptography Extensions Volume I 240 + specification. 241 + 242 + - const: zkn 243 + description: 244 + The standard Zkn NIST algorithm suite extensions as ratified in 245 + version 1.0 of RISC-V Cryptography Extensions Volume I 246 + specification. 247 + 248 + - const: zknd 249 + description: | 250 + The standard Zknd for NIST suite: AES decryption instructions as 251 + ratified in version 1.0 of RISC-V Cryptography Extensions Volume I 252 + specification. 253 + 254 + - const: zkne 255 + description: | 256 + The standard Zkne for NIST suite: AES encryption instructions as 257 + ratified in version 1.0 of RISC-V Cryptography Extensions Volume I 258 + specification. 259 + 260 + - const: zknh 261 + description: | 262 + The standard Zknh for NIST suite: hash function instructions as 263 + ratified in version 1.0 of RISC-V Cryptography Extensions Volume I 264 + specification. 265 + 266 + - const: zkr 267 + description: 268 + The standard Zkr entropy source extension as ratified in version 269 + 1.0 of RISC-V Cryptography Extensions Volume I specification. 270 + This string being present means that the CSR associated to this 271 + extension is accessible at the privilege level to which that 272 + device-tree has been provided. 273 + 274 + - const: zks 275 + description: 276 + The standard Zks ShangMi algorithm suite extensions as ratified in 277 + version 1.0 of RISC-V Cryptography Extensions Volume I 278 + specification. 
279 + 280 + - const: zksed 281 + description: | 282 + The standard Zksed for ShangMi suite: SM4 block cipher instructions 283 + as ratified in version 1.0 of RISC-V Cryptography Extensions 284 + Volume I specification. 285 + 286 + - const: zksh 287 + description: | 288 + The standard Zksh for ShangMi suite: SM3 hash function instructions 289 + as ratified in version 1.0 of RISC-V Cryptography Extensions 290 + Volume I specification. 291 + 292 + - const: zkt 293 + description: 294 + The standard Zkt for data independent execution latency as ratified 295 + in version 1.0 of RISC-V Cryptography Extensions Volume I 296 + specification. 204 297 205 298 - const: zicbom 206 299 description: ··· 351 246 The standard Zihintpause extension for pause hints, as ratified in 352 247 commit d8ab5c7 ("Zihintpause is ratified") of the riscv-isa-manual. 353 248 249 + - const: zihintntl 250 + description: 251 + The standard Zihintntl extension for non-temporal locality hints, as 252 + ratified in commit 0dc91f5 ("Zihintntl is ratified") of the 253 + riscv-isa-manual. 254 + 354 255 - const: zihpm 355 256 description: 356 257 The standard Zihpm extension for hardware performance counters, as ··· 368 257 The standard Ztso extension for total store ordering, as ratified 369 258 in commit 2e5236 ("Ztso is now ratified.") of the 370 259 riscv-isa-manual. 260 + 261 + - const: zvbb 262 + description: 263 + The standard Zvbb extension for vectored basic bit-manipulation 264 + instructions, as ratified in commit 56ed795 ("Update 265 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 266 + 267 + - const: zvbc 268 + description: 269 + The standard Zvbc extension for vectored carryless multiplication 270 + instructions, as ratified in commit 56ed795 ("Update 271 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 
272 + 273 + - const: zvfh 274 + description: 275 + The standard Zvfh extension for vectored half-precision 276 + floating-point instructions, as ratified in commit e2ccd05 277 + ("Remove draft warnings from Zvfh[min]") of riscv-v-spec. 278 + 279 + - const: zvfhmin 280 + description: 281 + The standard Zvfhmin extension for vectored minimal half-precision 282 + floating-point instructions, as ratified in commit e2ccd05 283 + ("Remove draft warnings from Zvfh[min]") of riscv-v-spec. 284 + 285 + - const: zvkb 286 + description: 287 + The standard Zvkb extension for vector cryptography bit-manipulation 288 + instructions, as ratified in commit 56ed795 ("Update 289 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 290 + 291 + - const: zvkg 292 + description: 293 + The standard Zvkg extension for vector GCM/GMAC instructions, as 294 + ratified in commit 56ed795 ("Update riscv-crypto-spec-vector.adoc") 295 + of riscv-crypto. 296 + 297 + - const: zvkn 298 + description: 299 + The standard Zvkn extension for NIST algorithm suite instructions, as 300 + ratified in commit 56ed795 ("Update riscv-crypto-spec-vector.adoc") 301 + of riscv-crypto. 302 + 303 + - const: zvknc 304 + description: 305 + The standard Zvknc extension for NIST algorithm suite with carryless 306 + multiply instructions, as ratified in commit 56ed795 ("Update 307 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 308 + 309 + - const: zvkned 310 + description: 311 + The standard Zvkned extension for Vector AES block cipher 312 + instructions, as ratified in commit 56ed795 ("Update 313 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 314 + 315 + - const: zvkng 316 + description: 317 + The standard Zvkng extension for NIST algorithm suite with GCM 318 + instructions, as ratified in commit 56ed795 ("Update 319 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 
320 + 321 + - const: zvknha 322 + description: | 323 + The standard Zvknha extension for NIST suite: vector SHA-2 secure, 324 + hash (SHA-256 only) instructions, as ratified in commit 325 + 56ed795 ("Update riscv-crypto-spec-vector.adoc") of riscv-crypto. 326 + 327 + - const: zvknhb 328 + description: | 329 + The standard Zvknhb extension for NIST suite: vector SHA-2 secure, 330 + hash (SHA-256 and SHA-512) instructions, as ratified in commit 331 + 56ed795 ("Update riscv-crypto-spec-vector.adoc") of riscv-crypto. 332 + 333 + - const: zvks 334 + description: 335 + The standard Zvks extension for ShangMi algorithm suite 336 + instructions, as ratified in commit 56ed795 ("Update 337 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 338 + 339 + - const: zvksc 340 + description: 341 + The standard Zvksc extension for ShangMi algorithm suite with 342 + carryless multiplication instructions, as ratified in commit 56ed795 343 + ("Update riscv-crypto-spec-vector.adoc") of riscv-crypto. 344 + 345 + - const: zvksed 346 + description: | 347 + The standard Zvksed extension for ShangMi suite: SM4 block cipher 348 + instructions, as ratified in commit 56ed795 ("Update 349 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 350 + 351 + - const: zvksh 352 + description: | 353 + The standard Zvksh extension for ShangMi suite: SM3 secure hash 354 + instructions, as ratified in commit 56ed795 ("Update 355 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 356 + 357 + - const: zvksg 358 + description: 359 + The standard Zvksg extension for ShangMi algorithm suite with GCM 360 + instructions, as ratified in commit 56ed795 ("Update 361 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 362 + 363 + - const: zvkt 364 + description: 365 + The standard Zvkt extension for vector data-independent execution 366 + latency, as ratified in commit 56ed795 ("Update 367 + riscv-crypto-spec-vector.adoc") of riscv-crypto. 371 368 372 369 additionalProperties: true 373 370 ...
+2
arch/arm/include/asm/pgtable.h
···
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 
+#define pgdp_get(pgpd)	READ_ONCE(*pgdp)
+
 #define pud_page(pud)	pmd_page(__pmd(pud_val(pud)))
 #define pud_write(pud)	pmd_write(__pmd(pud_val(pud)))
 
+4 -3
arch/riscv/Kconfig
···
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
 	select ARCH_WANT_LD_ORPHAN_WARN if !XIP_KERNEL
 	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
+	select ARCH_WANTS_NO_INSTR
 	select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
 	select BUILDTIME_TABLE_SORT if MMU
···
 	  on the replacement properties, "riscv,isa-base" and
 	  "riscv,isa-extensions".
 
-endmenu # "Boot options"
-
 config BUILTIN_DTB
-	bool
+	bool "Built-in device tree"
 	depends on OF && NONPORTABLE
 	default y if XIP_KERNEL
+
+endmenu # "Boot options"
 
 config PORTABLE
 	bool
-139
arch/riscv/configs/rv32_defconfig
··· 1 - CONFIG_SYSVIPC=y 2 - CONFIG_POSIX_MQUEUE=y 3 - CONFIG_NO_HZ_IDLE=y 4 - CONFIG_HIGH_RES_TIMERS=y 5 - CONFIG_BPF_SYSCALL=y 6 - CONFIG_IKCONFIG=y 7 - CONFIG_IKCONFIG_PROC=y 8 - CONFIG_CGROUPS=y 9 - CONFIG_CGROUP_SCHED=y 10 - CONFIG_CFS_BANDWIDTH=y 11 - CONFIG_CGROUP_BPF=y 12 - CONFIG_NAMESPACES=y 13 - CONFIG_USER_NS=y 14 - CONFIG_CHECKPOINT_RESTORE=y 15 - CONFIG_BLK_DEV_INITRD=y 16 - CONFIG_EXPERT=y 17 - # CONFIG_SYSFS_SYSCALL is not set 18 - CONFIG_PROFILING=y 19 - CONFIG_SOC_SIFIVE=y 20 - CONFIG_SOC_VIRT=y 21 - CONFIG_NONPORTABLE=y 22 - CONFIG_ARCH_RV32I=y 23 - CONFIG_SMP=y 24 - CONFIG_HOTPLUG_CPU=y 25 - CONFIG_PM=y 26 - CONFIG_CPU_IDLE=y 27 - CONFIG_VIRTUALIZATION=y 28 - CONFIG_KVM=m 29 - CONFIG_JUMP_LABEL=y 30 - CONFIG_MODULES=y 31 - CONFIG_MODULE_UNLOAD=y 32 - CONFIG_NET=y 33 - CONFIG_PACKET=y 34 - CONFIG_UNIX=y 35 - CONFIG_INET=y 36 - CONFIG_IP_MULTICAST=y 37 - CONFIG_IP_ADVANCED_ROUTER=y 38 - CONFIG_IP_PNP=y 39 - CONFIG_IP_PNP_DHCP=y 40 - CONFIG_IP_PNP_BOOTP=y 41 - CONFIG_IP_PNP_RARP=y 42 - CONFIG_NETLINK_DIAG=y 43 - CONFIG_NET_9P=y 44 - CONFIG_NET_9P_VIRTIO=y 45 - CONFIG_PCI=y 46 - CONFIG_PCIEPORTBUS=y 47 - CONFIG_PCI_HOST_GENERIC=y 48 - CONFIG_PCIE_XILINX=y 49 - CONFIG_DEVTMPFS=y 50 - CONFIG_DEVTMPFS_MOUNT=y 51 - CONFIG_BLK_DEV_LOOP=y 52 - CONFIG_VIRTIO_BLK=y 53 - CONFIG_BLK_DEV_SD=y 54 - CONFIG_BLK_DEV_SR=y 55 - CONFIG_SCSI_VIRTIO=y 56 - CONFIG_ATA=y 57 - CONFIG_SATA_AHCI=y 58 - CONFIG_SATA_AHCI_PLATFORM=y 59 - CONFIG_NETDEVICES=y 60 - CONFIG_VIRTIO_NET=y 61 - CONFIG_MACB=y 62 - CONFIG_E1000E=y 63 - CONFIG_R8169=y 64 - CONFIG_MICROSEMI_PHY=y 65 - CONFIG_INPUT_MOUSEDEV=y 66 - CONFIG_SERIAL_8250=y 67 - CONFIG_SERIAL_8250_CONSOLE=y 68 - CONFIG_SERIAL_OF_PLATFORM=y 69 - CONFIG_VIRTIO_CONSOLE=y 70 - CONFIG_HW_RANDOM=y 71 - CONFIG_HW_RANDOM_VIRTIO=y 72 - CONFIG_SPI=y 73 - CONFIG_SPI_SIFIVE=y 74 - # CONFIG_PTP_1588_CLOCK is not set 75 - CONFIG_DRM=y 76 - CONFIG_DRM_RADEON=y 77 - CONFIG_DRM_VIRTIO_GPU=y 78 - CONFIG_FB=y 79 - CONFIG_FRAMEBUFFER_CONSOLE=y 80 - 
CONFIG_USB=y 81 - CONFIG_USB_XHCI_HCD=y 82 - CONFIG_USB_XHCI_PLATFORM=y 83 - CONFIG_USB_EHCI_HCD=y 84 - CONFIG_USB_EHCI_HCD_PLATFORM=y 85 - CONFIG_USB_OHCI_HCD=y 86 - CONFIG_USB_OHCI_HCD_PLATFORM=y 87 - CONFIG_USB_STORAGE=y 88 - CONFIG_USB_UAS=y 89 - CONFIG_MMC=y 90 - CONFIG_MMC_SPI=y 91 - CONFIG_RTC_CLASS=y 92 - CONFIG_VIRTIO_PCI=y 93 - CONFIG_VIRTIO_BALLOON=y 94 - CONFIG_VIRTIO_INPUT=y 95 - CONFIG_VIRTIO_MMIO=y 96 - CONFIG_RPMSG_CHAR=y 97 - CONFIG_RPMSG_CTRL=y 98 - CONFIG_RPMSG_VIRTIO=y 99 - CONFIG_EXT4_FS=y 100 - CONFIG_EXT4_FS_POSIX_ACL=y 101 - CONFIG_AUTOFS_FS=y 102 - CONFIG_MSDOS_FS=y 103 - CONFIG_VFAT_FS=y 104 - CONFIG_TMPFS=y 105 - CONFIG_TMPFS_POSIX_ACL=y 106 - CONFIG_HUGETLBFS=y 107 - CONFIG_NFS_FS=y 108 - CONFIG_NFS_V4=y 109 - CONFIG_NFS_V4_1=y 110 - CONFIG_NFS_V4_2=y 111 - CONFIG_ROOT_NFS=y 112 - CONFIG_9P_FS=y 113 - CONFIG_CRYPTO_USER_API_HASH=y 114 - CONFIG_CRYPTO_DEV_VIRTIO=y 115 - CONFIG_PRINTK_TIME=y 116 - CONFIG_DEBUG_FS=y 117 - CONFIG_DEBUG_PAGEALLOC=y 118 - CONFIG_SCHED_STACK_END_CHECK=y 119 - CONFIG_DEBUG_VM=y 120 - CONFIG_DEBUG_VM_PGFLAGS=y 121 - CONFIG_DEBUG_MEMORY_INIT=y 122 - CONFIG_DEBUG_PER_CPU_MAPS=y 123 - CONFIG_SOFTLOCKUP_DETECTOR=y 124 - CONFIG_WQ_WATCHDOG=y 125 - CONFIG_DEBUG_TIMEKEEPING=y 126 - CONFIG_DEBUG_RT_MUTEXES=y 127 - CONFIG_DEBUG_SPINLOCK=y 128 - CONFIG_DEBUG_MUTEXES=y 129 - CONFIG_DEBUG_RWSEMS=y 130 - CONFIG_DEBUG_ATOMIC_SLEEP=y 131 - CONFIG_STACKTRACE=y 132 - CONFIG_DEBUG_LIST=y 133 - CONFIG_DEBUG_PLIST=y 134 - CONFIG_DEBUG_SG=y 135 - # CONFIG_RCU_TRACE is not set 136 - CONFIG_RCU_EQS_DEBUG=y 137 - # CONFIG_FTRACE is not set 138 - # CONFIG_RUNTIME_TESTING_MENU is not set 139 - CONFIG_MEMTEST=y
+2 -12
arch/riscv/include/asm/cpu_ops.h
···
 /**
  * struct cpu_operations - Callback operations for hotplugging CPUs.
  *
- * @name:		Name of the boot protocol.
- * @cpu_prepare:	Early one-time preparation step for a cpu. If there
- *			is a mechanism for doing so, tests whether it is
- *			possible to boot the given HART.
  * @cpu_start:		Boots a cpu into the kernel.
- * @cpu_disable:	Prepares a cpu to die. May fail for some
- *			mechanism-specific reason, which will cause the hot
- *			unplug to be aborted. Called from the cpu to be killed.
  * @cpu_stop:		Makes a cpu leave the kernel. Must not fail. Called from
  *			the cpu being stopped.
  * @cpu_is_stopped:	Ensures a cpu has left the kernel. Called from another
  *			cpu.
  */
 struct cpu_operations {
-	const char	*name;
-	int		(*cpu_prepare)(unsigned int cpu);
 	int		(*cpu_start)(unsigned int cpu,
 				     struct task_struct *tidle);
 #ifdef CONFIG_HOTPLUG_CPU
-	int		(*cpu_disable)(unsigned int cpu);
 	void		(*cpu_stop)(void);
 	int		(*cpu_is_stopped)(unsigned int cpu);
 #endif
 };
 
 extern const struct cpu_operations cpu_ops_spinwait;
-extern const struct cpu_operations *cpu_ops[NR_CPUS];
-void __init cpu_set_ops(int cpu);
+extern const struct cpu_operations *cpu_ops;
+void __init cpu_set_ops(void);
 
 #endif /* ifndef __ASM_CPU_OPS_H */
+3 -1
arch/riscv/include/asm/cpufeature.h
···
 	const unsigned int id;
 	const char *name;
 	const char *property;
+	const unsigned int *subset_ext_ids;
+	const unsigned int subset_ext_size;
 };
 
 extern const struct riscv_isa_ext_data riscv_isa_ext[];
···
 
 unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
 
-bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit);
+bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, unsigned int bit);
 #define riscv_isa_extension_available(isa_bitmap, ext)	\
 	__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
 
+31 -7
arch/riscv/include/asm/hwcap.h
···
 #include <uapi/asm/hwcap.h>
 
 #define RISCV_ISA_EXT_a		('a' - 'a')
-#define RISCV_ISA_EXT_b		('b' - 'a')
 #define RISCV_ISA_EXT_c		('c' - 'a')
 #define RISCV_ISA_EXT_d		('d' - 'a')
 #define RISCV_ISA_EXT_f		('f' - 'a')
 #define RISCV_ISA_EXT_h		('h' - 'a')
 #define RISCV_ISA_EXT_i		('i' - 'a')
-#define RISCV_ISA_EXT_j		('j' - 'a')
-#define RISCV_ISA_EXT_k		('k' - 'a')
 #define RISCV_ISA_EXT_m		('m' - 'a')
-#define RISCV_ISA_EXT_p		('p' - 'a')
 #define RISCV_ISA_EXT_q		('q' - 'a')
-#define RISCV_ISA_EXT_s		('s' - 'a')
-#define RISCV_ISA_EXT_u		('u' - 'a')
 #define RISCV_ISA_EXT_v		('v' - 'a')
 
 /*
···
 #define RISCV_ISA_EXT_ZIHPM		42
 #define RISCV_ISA_EXT_SMSTATEEN		43
 #define RISCV_ISA_EXT_ZICOND		44
+#define RISCV_ISA_EXT_ZBC		45
+#define RISCV_ISA_EXT_ZBKB		46
+#define RISCV_ISA_EXT_ZBKC		47
+#define RISCV_ISA_EXT_ZBKX		48
+#define RISCV_ISA_EXT_ZKND		49
+#define RISCV_ISA_EXT_ZKNE		50
+#define RISCV_ISA_EXT_ZKNH		51
+#define RISCV_ISA_EXT_ZKR		52
+#define RISCV_ISA_EXT_ZKSED		53
+#define RISCV_ISA_EXT_ZKSH		54
+#define RISCV_ISA_EXT_ZKT		55
+#define RISCV_ISA_EXT_ZVBB		56
+#define RISCV_ISA_EXT_ZVBC		57
+#define RISCV_ISA_EXT_ZVKB		58
+#define RISCV_ISA_EXT_ZVKG		59
+#define RISCV_ISA_EXT_ZVKNED		60
+#define RISCV_ISA_EXT_ZVKNHA		61
+#define RISCV_ISA_EXT_ZVKNHB		62
+#define RISCV_ISA_EXT_ZVKSED		63
+#define RISCV_ISA_EXT_ZVKSH		64
+#define RISCV_ISA_EXT_ZVKT		65
+#define RISCV_ISA_EXT_ZFH		66
+#define RISCV_ISA_EXT_ZFHMIN		67
+#define RISCV_ISA_EXT_ZIHINTNTL		68
+#define RISCV_ISA_EXT_ZVFH		69
+#define RISCV_ISA_EXT_ZVFHMIN		70
+#define RISCV_ISA_EXT_ZFA		71
+#define RISCV_ISA_EXT_ZTSO		72
+#define RISCV_ISA_EXT_ZACAS		73
 
-#define RISCV_ISA_EXT_MAX		64
+#define RISCV_ISA_EXT_MAX		128
+#define RISCV_ISA_EXT_INVALID		U32_MAX
 
 #ifdef CONFIG_RISCV_M_MODE
 #define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SMAIA
+24
arch/riscv/include/asm/hwprobe.h
···
 	return key >= 0 && key <= RISCV_HWPROBE_MAX_KEY;
 }
 
+static inline bool hwprobe_key_is_bitmask(__s64 key)
+{
+	switch (key) {
+	case RISCV_HWPROBE_KEY_BASE_BEHAVIOR:
+	case RISCV_HWPROBE_KEY_IMA_EXT_0:
+	case RISCV_HWPROBE_KEY_CPUPERF_0:
+		return true;
+	}
+
+	return false;
+}
+
+static inline bool riscv_hwprobe_pair_cmp(struct riscv_hwprobe *pair,
+					  struct riscv_hwprobe *other_pair)
+{
+	if (pair->key != other_pair->key)
+		return false;
+
+	if (hwprobe_key_is_bitmask(pair->key))
+		return (pair->value & other_pair->value) == other_pair->value;
+
+	return pair->value == other_pair->value;
+}
+
 #endif
+2 -2
arch/riscv/include/asm/kfence.h
···
 	pte_t *pte = virt_to_kpte(addr);
 
 	if (protect)
-		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+		set_pte(pte, __pte(pte_val(ptep_get(pte)) & ~_PAGE_PRESENT));
 	else
-		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+		set_pte(pte, __pte(pte_val(ptep_get(pte)) | _PAGE_PRESENT));
 
 	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 
+5 -17
arch/riscv/include/asm/pgtable-64.h
···
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
-	*pudp = pud;
+	WRITE_ONCE(*pudp, pud);
 }
 
 static inline void pud_clear(pud_t *pudp)
···
 static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
 	if (pgtable_l4_enabled)
-		*p4dp = p4d;
+		WRITE_ONCE(*p4dp, p4d);
 	else
 		set_pud((pud_t *)p4dp, (pud_t){ p4d_val(p4d) });
 }
···
 #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
 
 #define pud_offset pud_offset
-static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
-{
-	if (pgtable_l4_enabled)
-		return p4d_pgtable(*p4d) + pud_index(address);
-
-	return (pud_t *)p4d;
-}
+pud_t *pud_offset(p4d_t *p4d, unsigned long address);
 
 static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
 	if (pgtable_l5_enabled)
-		*pgdp = pgd;
+		WRITE_ONCE(*pgdp, pgd);
 	else
 		set_p4d((p4d_t *)pgdp, (p4d_t){ pgd_val(pgd) });
 }
···
 #define p4d_index(addr) (((addr) >> P4D_SHIFT) & (PTRS_PER_P4D - 1))
 
 #define p4d_offset p4d_offset
-static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
-{
-	if (pgtable_l5_enabled)
-		return pgd_pgtable(*pgd) + p4d_index(address);
-
-	return (p4d_t *)pgd;
-}
+p4d_t *p4d_offset(pgd_t *pgd, unsigned long address);
 
 #endif /* _ASM_RISCV_PGTABLE_64_H */
+8 -25
arch/riscv/include/asm/pgtable.h
···
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
-	*pmdp = pmd;
+	WRITE_ONCE(*pmdp, pmd);
 }
 
 static inline void pmd_clear(pmd_t *pmdp)
···
  */
 static inline void set_pte(pte_t *ptep, pte_t pteval)
 {
-	*ptep = pteval;
+	WRITE_ONCE(*ptep, pteval);
 }
 
 void flush_icache_pte(pte_t pte);
···
 	__set_pte_at(ptep, __pte(0));
 }
 
-#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-static inline int ptep_set_access_flags(struct vm_area_struct *vma,
-					unsigned long address, pte_t *ptep,
-					pte_t entry, int dirty)
-{
-	if (!pte_same(*ptep, entry))
-		__set_pte_at(ptep, entry);
-	/*
-	 * update_mmu_cache will unconditionally execute, handling both
-	 * the case that the PTE changed and the spurious fault case.
-	 */
-	return true;
-}
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS	/* defined in mm/pgtable.c */
+extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+				 pte_t *ptep, pte_t entry, int dirty);
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG	/* defined in mm/pgtable.c */
+extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long address,
+				     pte_t *ptep);
 
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
···
 	page_table_check_pte_clear(mm, pte);
 
 	return pte;
-}
-
-#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
-					    unsigned long address,
-					    pte_t *ptep)
-{
-	if (!pte_young(*ptep))
-		return 0;
-	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
 }
 
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
+1
arch/riscv/include/asm/sections.h
··· 13 13 extern char __init_data_begin[], __init_data_end[]; 14 14 extern char __init_text_begin[], __init_text_end[]; 15 15 extern char __alt_start[], __alt_end[]; 16 + extern char __exittext_begin[], __exittext_end[]; 16 17 17 18 static inline bool is_va_kernel_text(uintptr_t va) 18 19 {
-1
arch/riscv/include/asm/thread_info.h
··· 28 28 29 29 #define THREAD_SHIFT (PAGE_SHIFT + THREAD_SIZE_ORDER) 30 30 #define OVERFLOW_STACK_SIZE SZ_4K 31 - #define SHADOW_OVERFLOW_STACK_SIZE (1024) 32 31 33 32 #define IRQ_STACK_SIZE THREAD_SIZE 34 33
+1 -1
arch/riscv/include/asm/xip_fixup.h
··· 13 13 add \reg, \reg, t0 14 14 .endm 15 15 .macro XIP_FIXUP_FLASH_OFFSET reg 16 - la t1, __data_loc 16 + la t0, __data_loc 17 17 REG_L t1, _xip_phys_offset 18 18 sub \reg, \reg, t1 19 19 add \reg, \reg, t0
+32
arch/riscv/include/uapi/asm/hwprobe.h
··· 30 30 #define RISCV_HWPROBE_EXT_ZBB (1 << 4) 31 31 #define RISCV_HWPROBE_EXT_ZBS (1 << 5) 32 32 #define RISCV_HWPROBE_EXT_ZICBOZ (1 << 6) 33 + #define RISCV_HWPROBE_EXT_ZBC (1 << 7) 34 + #define RISCV_HWPROBE_EXT_ZBKB (1 << 8) 35 + #define RISCV_HWPROBE_EXT_ZBKC (1 << 9) 36 + #define RISCV_HWPROBE_EXT_ZBKX (1 << 10) 37 + #define RISCV_HWPROBE_EXT_ZKND (1 << 11) 38 + #define RISCV_HWPROBE_EXT_ZKNE (1 << 12) 39 + #define RISCV_HWPROBE_EXT_ZKNH (1 << 13) 40 + #define RISCV_HWPROBE_EXT_ZKSED (1 << 14) 41 + #define RISCV_HWPROBE_EXT_ZKSH (1 << 15) 42 + #define RISCV_HWPROBE_EXT_ZKT (1 << 16) 43 + #define RISCV_HWPROBE_EXT_ZVBB (1 << 17) 44 + #define RISCV_HWPROBE_EXT_ZVBC (1 << 18) 45 + #define RISCV_HWPROBE_EXT_ZVKB (1 << 19) 46 + #define RISCV_HWPROBE_EXT_ZVKG (1 << 20) 47 + #define RISCV_HWPROBE_EXT_ZVKNED (1 << 21) 48 + #define RISCV_HWPROBE_EXT_ZVKNHA (1 << 22) 49 + #define RISCV_HWPROBE_EXT_ZVKNHB (1 << 23) 50 + #define RISCV_HWPROBE_EXT_ZVKSED (1 << 24) 51 + #define RISCV_HWPROBE_EXT_ZVKSH (1 << 25) 52 + #define RISCV_HWPROBE_EXT_ZVKT (1 << 26) 53 + #define RISCV_HWPROBE_EXT_ZFH (1 << 27) 54 + #define RISCV_HWPROBE_EXT_ZFHMIN (1 << 28) 55 + #define RISCV_HWPROBE_EXT_ZIHINTNTL (1 << 29) 56 + #define RISCV_HWPROBE_EXT_ZVFH (1 << 30) 57 + #define RISCV_HWPROBE_EXT_ZVFHMIN (1 << 31) 58 + #define RISCV_HWPROBE_EXT_ZFA (1ULL << 32) 59 + #define RISCV_HWPROBE_EXT_ZTSO (1ULL << 33) 60 + #define RISCV_HWPROBE_EXT_ZACAS (1ULL << 34) 61 + #define RISCV_HWPROBE_EXT_ZICOND (1ULL << 35) 33 62 #define RISCV_HWPROBE_KEY_CPUPERF_0 5 34 63 #define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0) 35 64 #define RISCV_HWPROBE_MISALIGNED_EMULATED (1 << 0) ··· 68 39 #define RISCV_HWPROBE_MISALIGNED_MASK (7 << 0) 69 40 #define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE 6 70 41 /* Increase RISCV_HWPROBE_MAX_KEY when adding items. */ 42 + 43 + /* Flags */ 44 + #define RISCV_HWPROBE_WHICH_CPUS (1 << 0) 71 45 72 46 #endif
+1
arch/riscv/kernel/Makefile
··· 50 50 obj-y += signal.o 51 51 obj-y += syscall_table.o 52 52 obj-y += sys_riscv.o 53 + obj-y += sys_hwprobe.o 53 54 obj-y += time.o 54 55 obj-y += traps.o 55 56 obj-y += riscv_ksyms.o
+6 -13
arch/riscv/kernel/cpu-hotplug.c
··· 18 18 19 19 bool cpu_has_hotplug(unsigned int cpu) 20 20 { 21 - if (cpu_ops[cpu]->cpu_stop) 21 + if (cpu_ops->cpu_stop) 22 22 return true; 23 23 24 24 return false; ··· 29 29 */ 30 30 int __cpu_disable(void) 31 31 { 32 - int ret = 0; 33 32 unsigned int cpu = smp_processor_id(); 34 33 35 - if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_stop) 34 + if (!cpu_ops->cpu_stop) 36 35 return -EOPNOTSUPP; 37 - 38 - if (cpu_ops[cpu]->cpu_disable) 39 - ret = cpu_ops[cpu]->cpu_disable(cpu); 40 - 41 - if (ret) 42 - return ret; 43 36 44 37 remove_cpu_topology(cpu); 45 38 numa_remove_cpu(cpu); ··· 40 47 riscv_ipi_disable(); 41 48 irq_migrate_all_off_this_cpu(); 42 49 43 - return ret; 50 + return 0; 44 51 } 45 52 46 53 #ifdef CONFIG_HOTPLUG_CPU ··· 55 62 pr_notice("CPU%u: off\n", cpu); 56 63 57 64 /* Verify from the firmware if the cpu is really stopped*/ 58 - if (cpu_ops[cpu]->cpu_is_stopped) 59 - ret = cpu_ops[cpu]->cpu_is_stopped(cpu); 65 + if (cpu_ops->cpu_is_stopped) 66 + ret = cpu_ops->cpu_is_stopped(cpu); 60 67 if (ret) 61 68 pr_warn("CPU%d may not have stopped: %d\n", cpu, ret); 62 69 } ··· 70 77 71 78 cpuhp_ap_report_dead(); 72 79 73 - cpu_ops[smp_processor_id()]->cpu_stop(); 80 + cpu_ops->cpu_stop(); 74 81 /* It should never reach here */ 75 82 BUG(); 76 83 }
+5 -9
arch/riscv/kernel/cpu_ops.c
··· 13 13 #include <asm/sbi.h> 14 14 #include <asm/smp.h> 15 15 16 - const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init; 16 + const struct cpu_operations *cpu_ops __ro_after_init = &cpu_ops_spinwait; 17 17 18 18 extern const struct cpu_operations cpu_ops_sbi; 19 19 #ifndef CONFIG_RISCV_BOOT_SPINWAIT 20 20 const struct cpu_operations cpu_ops_spinwait = { 21 - .name = "", 22 - .cpu_prepare = NULL, 23 21 .cpu_start = NULL, 24 22 }; 25 23 #endif 26 24 27 - void __init cpu_set_ops(int cpuid) 25 + void __init cpu_set_ops(void) 28 26 { 29 27 #if IS_ENABLED(CONFIG_RISCV_SBI) 30 28 if (sbi_probe_extension(SBI_EXT_HSM)) { 31 - if (!cpuid) 32 - pr_info("SBI HSM extension detected\n"); 33 - cpu_ops[cpuid] = &cpu_ops_sbi; 34 - } else 29 + pr_info("SBI HSM extension detected\n"); 30 + cpu_ops = &cpu_ops_sbi; 31 + } 35 32 #endif 36 - cpu_ops[cpuid] = &cpu_ops_spinwait; 37 33 }
-19
arch/riscv/kernel/cpu_ops_sbi.c
··· 79 79 return sbi_hsm_hart_start(hartid, boot_addr, hsm_data); 80 80 } 81 81 82 - static int sbi_cpu_prepare(unsigned int cpuid) 83 - { 84 - if (!cpu_ops_sbi.cpu_start) { 85 - pr_err("cpu start method not defined for CPU [%d]\n", cpuid); 86 - return -ENODEV; 87 - } 88 - return 0; 89 - } 90 - 91 82 #ifdef CONFIG_HOTPLUG_CPU 92 - static int sbi_cpu_disable(unsigned int cpuid) 93 - { 94 - if (!cpu_ops_sbi.cpu_stop) 95 - return -EOPNOTSUPP; 96 - return 0; 97 - } 98 - 99 83 static void sbi_cpu_stop(void) 100 84 { 101 85 int ret; ··· 102 118 #endif 103 119 104 120 const struct cpu_operations cpu_ops_sbi = { 105 - .name = "sbi", 106 - .cpu_prepare = sbi_cpu_prepare, 107 121 .cpu_start = sbi_cpu_start, 108 122 #ifdef CONFIG_HOTPLUG_CPU 109 - .cpu_disable = sbi_cpu_disable, 110 123 .cpu_stop = sbi_cpu_stop, 111 124 .cpu_is_stopped = sbi_cpu_is_stopped, 112 125 #endif
-11
arch/riscv/kernel/cpu_ops_spinwait.c
··· 39 39 WRITE_ONCE(__cpu_spinwait_task_pointer[hartid], tidle); 40 40 } 41 41 42 - static int spinwait_cpu_prepare(unsigned int cpuid) 43 - { 44 - if (!cpu_ops_spinwait.cpu_start) { 45 - pr_err("cpu start method not defined for CPU [%d]\n", cpuid); 46 - return -ENODEV; 47 - } 48 - return 0; 49 - } 50 - 51 42 static int spinwait_cpu_start(unsigned int cpuid, struct task_struct *tidle) 52 43 { 53 44 /* ··· 55 64 } 56 65 57 66 const struct cpu_operations cpu_ops_spinwait = { 58 - .name = "spinwait", 59 - .cpu_prepare = spinwait_cpu_prepare, 60 67 .cpu_start = spinwait_cpu_start, 61 68 };
+168 -27
arch/riscv/kernel/cpufeature.c
··· 70 70 * 71 71 * NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used. 72 72 */ 73 - bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit) 73 + bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, unsigned int bit) 74 74 { 75 75 const unsigned long *bmap = (isa_bitmap) ? isa_bitmap : riscv_isa; 76 76 ··· 102 102 return false; 103 103 } 104 104 return true; 105 + case RISCV_ISA_EXT_INVALID: 106 + return false; 105 107 } 106 108 107 109 return true; 108 110 } 109 111 110 - #define __RISCV_ISA_EXT_DATA(_name, _id) { \ 111 - .name = #_name, \ 112 - .property = #_name, \ 113 - .id = _id, \ 112 + #define _RISCV_ISA_EXT_DATA(_name, _id, _subset_exts, _subset_exts_size) { \ 113 + .name = #_name, \ 114 + .property = #_name, \ 115 + .id = _id, \ 116 + .subset_ext_ids = _subset_exts, \ 117 + .subset_ext_size = _subset_exts_size \ 114 118 } 119 + 120 + #define __RISCV_ISA_EXT_DATA(_name, _id) _RISCV_ISA_EXT_DATA(_name, _id, NULL, 0) 121 + 122 + /* Used to declare pure "lasso" extension (Zk for instance) */ 123 + #define __RISCV_ISA_EXT_BUNDLE(_name, _bundled_exts) \ 124 + _RISCV_ISA_EXT_DATA(_name, RISCV_ISA_EXT_INVALID, _bundled_exts, ARRAY_SIZE(_bundled_exts)) 125 + 126 + /* Used to declare extensions that are a superset of other extensions (Zvbb for instance) */ 127 + #define __RISCV_ISA_EXT_SUPERSET(_name, _id, _sub_exts) \ 128 + _RISCV_ISA_EXT_DATA(_name, _id, _sub_exts, ARRAY_SIZE(_sub_exts)) 129 + 130 + static const unsigned int riscv_zk_bundled_exts[] = { 131 + RISCV_ISA_EXT_ZBKB, 132 + RISCV_ISA_EXT_ZBKC, 133 + RISCV_ISA_EXT_ZBKX, 134 + RISCV_ISA_EXT_ZKND, 135 + RISCV_ISA_EXT_ZKNE, 136 + RISCV_ISA_EXT_ZKR, 137 + RISCV_ISA_EXT_ZKT, 138 + }; 139 + 140 + static const unsigned int riscv_zkn_bundled_exts[] = { 141 + RISCV_ISA_EXT_ZBKB, 142 + RISCV_ISA_EXT_ZBKC, 143 + RISCV_ISA_EXT_ZBKX, 144 + RISCV_ISA_EXT_ZKND, 145 + RISCV_ISA_EXT_ZKNE, 146 + RISCV_ISA_EXT_ZKNH, 147 + }; 148 + 149 + static const unsigned int 
riscv_zks_bundled_exts[] = { 150 + RISCV_ISA_EXT_ZBKB, 151 + RISCV_ISA_EXT_ZBKC, 152 + RISCV_ISA_EXT_ZKSED, 153 + RISCV_ISA_EXT_ZKSH 154 + }; 155 + 156 + #define RISCV_ISA_EXT_ZVKN \ 157 + RISCV_ISA_EXT_ZVKNED, \ 158 + RISCV_ISA_EXT_ZVKNHB, \ 159 + RISCV_ISA_EXT_ZVKB, \ 160 + RISCV_ISA_EXT_ZVKT 161 + 162 + static const unsigned int riscv_zvkn_bundled_exts[] = { 163 + RISCV_ISA_EXT_ZVKN 164 + }; 165 + 166 + static const unsigned int riscv_zvknc_bundled_exts[] = { 167 + RISCV_ISA_EXT_ZVKN, 168 + RISCV_ISA_EXT_ZVBC 169 + }; 170 + 171 + static const unsigned int riscv_zvkng_bundled_exts[] = { 172 + RISCV_ISA_EXT_ZVKN, 173 + RISCV_ISA_EXT_ZVKG 174 + }; 175 + 176 + #define RISCV_ISA_EXT_ZVKS \ 177 + RISCV_ISA_EXT_ZVKSED, \ 178 + RISCV_ISA_EXT_ZVKSH, \ 179 + RISCV_ISA_EXT_ZVKB, \ 180 + RISCV_ISA_EXT_ZVKT 181 + 182 + static const unsigned int riscv_zvks_bundled_exts[] = { 183 + RISCV_ISA_EXT_ZVKS 184 + }; 185 + 186 + static const unsigned int riscv_zvksc_bundled_exts[] = { 187 + RISCV_ISA_EXT_ZVKS, 188 + RISCV_ISA_EXT_ZVBC 189 + }; 190 + 191 + static const unsigned int riscv_zvksg_bundled_exts[] = { 192 + RISCV_ISA_EXT_ZVKS, 193 + RISCV_ISA_EXT_ZVKG 194 + }; 195 + 196 + static const unsigned int riscv_zvbb_exts[] = { 197 + RISCV_ISA_EXT_ZVKB 198 + }; 115 199 116 200 /* 117 201 * The canonical order of ISA extension names in the ISA string is defined in ··· 244 160 __RISCV_ISA_EXT_DATA(d, RISCV_ISA_EXT_d), 245 161 __RISCV_ISA_EXT_DATA(q, RISCV_ISA_EXT_q), 246 162 __RISCV_ISA_EXT_DATA(c, RISCV_ISA_EXT_c), 247 - __RISCV_ISA_EXT_DATA(b, RISCV_ISA_EXT_b), 248 - __RISCV_ISA_EXT_DATA(k, RISCV_ISA_EXT_k), 249 - __RISCV_ISA_EXT_DATA(j, RISCV_ISA_EXT_j), 250 - __RISCV_ISA_EXT_DATA(p, RISCV_ISA_EXT_p), 251 163 __RISCV_ISA_EXT_DATA(v, RISCV_ISA_EXT_v), 252 164 __RISCV_ISA_EXT_DATA(h, RISCV_ISA_EXT_h), 253 165 __RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM), ··· 252 172 __RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND), 253 173 __RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR), 254 
174 __RISCV_ISA_EXT_DATA(zifencei, RISCV_ISA_EXT_ZIFENCEI), 175 + __RISCV_ISA_EXT_DATA(zihintntl, RISCV_ISA_EXT_ZIHINTNTL), 255 176 __RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE), 256 177 __RISCV_ISA_EXT_DATA(zihpm, RISCV_ISA_EXT_ZIHPM), 178 + __RISCV_ISA_EXT_DATA(zacas, RISCV_ISA_EXT_ZACAS), 179 + __RISCV_ISA_EXT_DATA(zfa, RISCV_ISA_EXT_ZFA), 180 + __RISCV_ISA_EXT_DATA(zfh, RISCV_ISA_EXT_ZFH), 181 + __RISCV_ISA_EXT_DATA(zfhmin, RISCV_ISA_EXT_ZFHMIN), 257 182 __RISCV_ISA_EXT_DATA(zba, RISCV_ISA_EXT_ZBA), 258 183 __RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB), 184 + __RISCV_ISA_EXT_DATA(zbc, RISCV_ISA_EXT_ZBC), 185 + __RISCV_ISA_EXT_DATA(zbkb, RISCV_ISA_EXT_ZBKB), 186 + __RISCV_ISA_EXT_DATA(zbkc, RISCV_ISA_EXT_ZBKC), 187 + __RISCV_ISA_EXT_DATA(zbkx, RISCV_ISA_EXT_ZBKX), 259 188 __RISCV_ISA_EXT_DATA(zbs, RISCV_ISA_EXT_ZBS), 189 + __RISCV_ISA_EXT_BUNDLE(zk, riscv_zk_bundled_exts), 190 + __RISCV_ISA_EXT_BUNDLE(zkn, riscv_zkn_bundled_exts), 191 + __RISCV_ISA_EXT_DATA(zknd, RISCV_ISA_EXT_ZKND), 192 + __RISCV_ISA_EXT_DATA(zkne, RISCV_ISA_EXT_ZKNE), 193 + __RISCV_ISA_EXT_DATA(zknh, RISCV_ISA_EXT_ZKNH), 194 + __RISCV_ISA_EXT_DATA(zkr, RISCV_ISA_EXT_ZKR), 195 + __RISCV_ISA_EXT_BUNDLE(zks, riscv_zks_bundled_exts), 196 + __RISCV_ISA_EXT_DATA(zkt, RISCV_ISA_EXT_ZKT), 197 + __RISCV_ISA_EXT_DATA(zksed, RISCV_ISA_EXT_ZKSED), 198 + __RISCV_ISA_EXT_DATA(zksh, RISCV_ISA_EXT_ZKSH), 199 + __RISCV_ISA_EXT_DATA(ztso, RISCV_ISA_EXT_ZTSO), 200 + __RISCV_ISA_EXT_SUPERSET(zvbb, RISCV_ISA_EXT_ZVBB, riscv_zvbb_exts), 201 + __RISCV_ISA_EXT_DATA(zvbc, RISCV_ISA_EXT_ZVBC), 202 + __RISCV_ISA_EXT_DATA(zvfh, RISCV_ISA_EXT_ZVFH), 203 + __RISCV_ISA_EXT_DATA(zvfhmin, RISCV_ISA_EXT_ZVFHMIN), 204 + __RISCV_ISA_EXT_DATA(zvkb, RISCV_ISA_EXT_ZVKB), 205 + __RISCV_ISA_EXT_DATA(zvkg, RISCV_ISA_EXT_ZVKG), 206 + __RISCV_ISA_EXT_BUNDLE(zvkn, riscv_zvkn_bundled_exts), 207 + __RISCV_ISA_EXT_BUNDLE(zvknc, riscv_zvknc_bundled_exts), 208 + __RISCV_ISA_EXT_DATA(zvkned, RISCV_ISA_EXT_ZVKNED), 209 + 
__RISCV_ISA_EXT_BUNDLE(zvkng, riscv_zvkng_bundled_exts), 210 + __RISCV_ISA_EXT_DATA(zvknha, RISCV_ISA_EXT_ZVKNHA), 211 + __RISCV_ISA_EXT_DATA(zvknhb, RISCV_ISA_EXT_ZVKNHB), 212 + __RISCV_ISA_EXT_BUNDLE(zvks, riscv_zvks_bundled_exts), 213 + __RISCV_ISA_EXT_BUNDLE(zvksc, riscv_zvksc_bundled_exts), 214 + __RISCV_ISA_EXT_DATA(zvksed, RISCV_ISA_EXT_ZVKSED), 215 + __RISCV_ISA_EXT_DATA(zvksh, RISCV_ISA_EXT_ZVKSH), 216 + __RISCV_ISA_EXT_BUNDLE(zvksg, riscv_zvksg_bundled_exts), 217 + __RISCV_ISA_EXT_DATA(zvkt, RISCV_ISA_EXT_ZVKT), 260 218 __RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA), 261 219 __RISCV_ISA_EXT_DATA(smstateen, RISCV_ISA_EXT_SMSTATEEN), 262 220 __RISCV_ISA_EXT_DATA(ssaia, RISCV_ISA_EXT_SSAIA), ··· 306 188 }; 307 189 308 190 const size_t riscv_isa_ext_count = ARRAY_SIZE(riscv_isa_ext); 191 + 192 + static void __init match_isa_ext(const struct riscv_isa_ext_data *ext, const char *name, 193 + const char *name_end, struct riscv_isainfo *isainfo) 194 + { 195 + if ((name_end - name == strlen(ext->name)) && 196 + !strncasecmp(name, ext->name, name_end - name)) { 197 + /* 198 + * If this is a bundle, enable all the ISA extensions that 199 + * comprise the bundle. 200 + */ 201 + if (ext->subset_ext_size) { 202 + for (int i = 0; i < ext->subset_ext_size; i++) { 203 + if (riscv_isa_extension_check(ext->subset_ext_ids[i])) 204 + set_bit(ext->subset_ext_ids[i], isainfo->isa); 205 + } 206 + } 207 + 208 + /* 209 + * This is valid even for bundle extensions which uses the RISCV_ISA_EXT_INVALID id 210 + * (rejected by riscv_isa_extension_check()). 
211 + */ 212 + if (riscv_isa_extension_check(ext->id)) 213 + set_bit(ext->id, isainfo->isa); 214 + } 215 + } 309 216 310 217 static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct riscv_isainfo *isainfo, 311 218 unsigned long *isa2hwcap, const char *isa) ··· 464 321 if (*isa == '_') 465 322 ++isa; 466 323 467 - #define SET_ISA_EXT_MAP(name, bit) \ 468 - do { \ 469 - if ((ext_end - ext == strlen(name)) && \ 470 - !strncasecmp(ext, name, strlen(name)) && \ 471 - riscv_isa_extension_check(bit)) \ 472 - set_bit(bit, isainfo->isa); \ 473 - } while (false) \ 474 - 475 324 if (unlikely(ext_err)) 476 325 continue; 477 326 if (!ext_long) { ··· 475 340 } 476 341 } else { 477 342 for (int i = 0; i < riscv_isa_ext_count; i++) 478 - SET_ISA_EXT_MAP(riscv_isa_ext[i].name, 479 - riscv_isa_ext[i].id); 343 + match_isa_ext(&riscv_isa_ext[i], ext, ext_end, isainfo); 480 344 } 481 - #undef SET_ISA_EXT_MAP 482 345 } 483 346 } 484 347 ··· 575 442 } 576 443 577 444 for (int i = 0; i < riscv_isa_ext_count; i++) { 445 + const struct riscv_isa_ext_data *ext = &riscv_isa_ext[i]; 446 + 578 447 if (of_property_match_string(cpu_node, "riscv,isa-extensions", 579 - riscv_isa_ext[i].property) < 0) 448 + ext->property) < 0) 580 449 continue; 581 450 582 - if (!riscv_isa_extension_check(riscv_isa_ext[i].id)) 583 - continue; 451 + if (ext->subset_ext_size) { 452 + for (int j = 0; j < ext->subset_ext_size; j++) { 453 + if (riscv_isa_extension_check(ext->subset_ext_ids[i])) 454 + set_bit(ext->subset_ext_ids[j], isainfo->isa); 455 + } 456 + } 584 457 585 - /* Only single letter extensions get set in hwcap */ 586 - if (strnlen(riscv_isa_ext[i].name, 2) == 1) 587 - this_hwcap |= isa2hwcap[riscv_isa_ext[i].id]; 458 + if (riscv_isa_extension_check(ext->id)) { 459 + set_bit(ext->id, isainfo->isa); 588 460 589 - set_bit(riscv_isa_ext[i].id, isainfo->isa); 461 + /* Only single letter extensions get set in hwcap */ 462 + if (strnlen(riscv_isa_ext[i].name, 2) == 1) 463 + this_hwcap |= 
isa2hwcap[riscv_isa_ext[i].id]; 464 + } 590 465 } 591 466 592 467 of_node_put(cpu_node);
+1 -1
arch/riscv/kernel/efi.c
··· 60 60 static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data) 61 61 { 62 62 efi_memory_desc_t *md = data; 63 - pte_t pte = READ_ONCE(*ptep); 63 + pte_t pte = ptep_get(ptep); 64 64 unsigned long val; 65 65 66 66 if (md->attribute & EFI_MEMORY_RO) {
+4 -2
arch/riscv/kernel/head.S
··· 11 11 #include <asm/page.h> 12 12 #include <asm/pgtable.h> 13 13 #include <asm/csr.h> 14 - #include <asm/cpu_ops_sbi.h> 15 14 #include <asm/hwcap.h> 16 15 #include <asm/image.h> 17 16 #include <asm/scs.h> ··· 88 89 /* Compute satp for kernel page tables, but don't load it yet */ 89 90 srl a2, a0, PAGE_SHIFT 90 91 la a1, satp_mode 92 + XIP_FIXUP_OFFSET a1 91 93 REG_L a1, 0(a1) 92 94 or a2, a2, a1 93 95 ··· 265 265 la sp, _end + THREAD_SIZE 266 266 XIP_FIXUP_OFFSET sp 267 267 mv s0, a0 268 + mv s1, a1 268 269 call __copy_data 269 270 270 - /* Restore a0 copy */ 271 + /* Restore a0 & a1 copy */ 271 272 mv a0, s0 273 + mv a1, s1 272 274 #endif 273 275 274 276 #ifndef CONFIG_XIP_KERNEL
+1 -1
arch/riscv/kernel/mcount-dyn.S
··· 3 3 4 4 #include <linux/init.h> 5 5 #include <linux/linkage.h> 6 + #include <linux/export.h> 6 7 #include <asm/asm.h> 7 8 #include <asm/csr.h> 8 9 #include <asm/unistd.h> 9 10 #include <asm/thread_info.h> 10 11 #include <asm/asm-offsets.h> 11 - #include <asm-generic/export.h> 12 12 #include <asm/ftrace.h> 13 13 14 14 .text
+1 -1
arch/riscv/kernel/mcount.S
··· 4 4 #include <linux/init.h> 5 5 #include <linux/linkage.h> 6 6 #include <linux/cfi_types.h> 7 + #include <linux/export.h> 7 8 #include <asm/asm.h> 8 9 #include <asm/csr.h> 9 10 #include <asm/unistd.h> 10 11 #include <asm/thread_info.h> 11 12 #include <asm/asm-offsets.h> 12 - #include <asm-generic/export.h> 13 13 #include <asm/ftrace.h> 14 14 15 15 .text
+2 -1
arch/riscv/kernel/module.c
··· 894 894 { 895 895 return __vmalloc_node_range(size, 1, MODULES_VADDR, 896 896 MODULES_END, GFP_KERNEL, 897 - PAGE_KERNEL, 0, NUMA_NO_NODE, 897 + PAGE_KERNEL, VM_FLUSH_RESET_PERMS, 898 + NUMA_NO_NODE, 898 899 __builtin_return_address(0)); 899 900 } 900 901 #endif
+10 -1
arch/riscv/kernel/patch.c
··· 14 14 #include <asm/fixmap.h> 15 15 #include <asm/ftrace.h> 16 16 #include <asm/patch.h> 17 + #include <asm/sections.h> 17 18 18 19 struct patch_insn { 19 20 void *addr; ··· 26 25 int riscv_patch_in_stop_machine = false; 27 26 28 27 #ifdef CONFIG_MMU 28 + 29 + static inline bool is_kernel_exittext(uintptr_t addr) 30 + { 31 + return system_state < SYSTEM_RUNNING && 32 + addr >= (uintptr_t)__exittext_begin && 33 + addr < (uintptr_t)__exittext_end; 34 + } 35 + 29 36 /* 30 37 * The fix_to_virt(, idx) needs a const value (not a dynamic variable of 31 38 * reg-a0) or BUILD_BUG_ON failed with "idx >= __end_of_fixed_addresses". ··· 44 35 uintptr_t uintaddr = (uintptr_t) addr; 45 36 struct page *page; 46 37 47 - if (core_kernel_text(uintaddr)) 38 + if (core_kernel_text(uintaddr) || is_kernel_exittext(uintaddr)) 48 39 page = phys_to_page(__pa_symbol(addr)); 49 40 else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) 50 41 page = vmalloc_to_page(addr);
-1
arch/riscv/kernel/setup.c
··· 26 26 #include <asm/alternative.h> 27 27 #include <asm/cacheflush.h> 28 28 #include <asm/cpufeature.h> 29 - #include <asm/cpu_ops.h> 30 29 #include <asm/early_ioremap.h> 31 30 #include <asm/pgtable.h> 32 31 #include <asm/setup.h>
+1 -1
arch/riscv/kernel/signal.c
··· 91 91 err = __copy_to_user(&state->v_state, &current->thread.vstate, 92 92 offsetof(struct __riscv_v_ext_state, datap)); 93 93 /* Copy the pointer datap itself. */ 94 - err |= __put_user(datap, &state->v_state.datap); 94 + err |= __put_user((__force void *)datap, &state->v_state.datap); 95 95 /* Copy the whole vector content to user space datap. */ 96 96 err |= __copy_to_user(datap, current->thread.vstate.datap, riscv_v_vsize); 97 97 /* Copy magic to the user space after saving all vector conetext */
+1 -1
arch/riscv/kernel/smp.c
··· 81 81 82 82 #ifdef CONFIG_HOTPLUG_CPU 83 83 if (cpu_has_hotplug(cpu)) 84 - cpu_ops[cpu]->cpu_stop(); 84 + cpu_ops->cpu_stop(); 85 85 #endif 86 86 87 87 for(;;)
+10 -28
arch/riscv/kernel/smpboot.c
··· 49 49 void __init smp_prepare_cpus(unsigned int max_cpus) 50 50 { 51 51 int cpuid; 52 - int ret; 53 52 unsigned int curr_cpuid; 54 53 55 54 init_cpu_topology(); ··· 65 66 for_each_possible_cpu(cpuid) { 66 67 if (cpuid == curr_cpuid) 67 68 continue; 68 - if (cpu_ops[cpuid]->cpu_prepare) { 69 - ret = cpu_ops[cpuid]->cpu_prepare(cpuid); 70 - if (ret) 71 - continue; 72 - } 73 69 set_cpu_present(cpuid, true); 74 70 numa_store_cpu_info(cpuid); 75 71 } ··· 119 125 120 126 static void __init acpi_parse_and_init_cpus(void) 121 127 { 122 - int cpuid; 123 - 124 - cpu_set_ops(0); 125 - 126 128 acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC, acpi_parse_rintc, 0); 127 - 128 - for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) { 129 - if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID) { 130 - cpu_set_ops(cpuid); 131 - set_cpu_possible(cpuid, true); 132 - } 133 - } 134 129 } 135 130 #else 136 131 #define acpi_parse_and_init_cpus(...) do { } while (0) ··· 132 149 bool found_boot_cpu = false; 133 150 int cpuid = 1; 134 151 int rc; 135 - 136 - cpu_set_ops(0); 137 152 138 153 for_each_of_cpu_node(dn) { 139 154 rc = riscv_early_of_processor_hartid(dn, &hart); ··· 160 179 if (cpuid > nr_cpu_ids) 161 180 pr_warn("Total number of cpus [%d] is greater than nr_cpus option value [%d]\n", 162 181 cpuid, nr_cpu_ids); 163 - 164 - for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) { 165 - if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID) { 166 - cpu_set_ops(cpuid); 167 - set_cpu_possible(cpuid, true); 168 - } 169 - } 170 182 } 171 183 172 184 void __init setup_smp(void) 173 185 { 186 + int cpuid; 187 + 188 + cpu_set_ops(); 189 + 174 190 if (acpi_disabled) 175 191 of_parse_and_init_cpus(); 176 192 else 177 193 acpi_parse_and_init_cpus(); 194 + 195 + for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) 196 + if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID) 197 + set_cpu_possible(cpuid, true); 178 198 } 179 199 180 200 static int start_secondary_cpu(int cpu, struct task_struct *tidle) 181 201 { 182 - if 
(cpu_ops[cpu]->cpu_start) 183 - return cpu_ops[cpu]->cpu_start(cpu, tidle); 202 + if (cpu_ops->cpu_start) 203 + return cpu_ops->cpu_start(cpu, tidle); 184 204 185 205 return -EOPNOTSUPP; 186 206 }
+411
arch/riscv/kernel/sys_hwprobe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * The hwprobe interface, for allowing userspace to probe to see which features 4 + * are supported by the hardware. See Documentation/arch/riscv/hwprobe.rst for 5 + * more details. 6 + */ 7 + #include <linux/syscalls.h> 8 + #include <asm/cacheflush.h> 9 + #include <asm/cpufeature.h> 10 + #include <asm/hwprobe.h> 11 + #include <asm/sbi.h> 12 + #include <asm/switch_to.h> 13 + #include <asm/uaccess.h> 14 + #include <asm/unistd.h> 15 + #include <asm/vector.h> 16 + #include <vdso/vsyscall.h> 17 + 18 + 19 + static void hwprobe_arch_id(struct riscv_hwprobe *pair, 20 + const struct cpumask *cpus) 21 + { 22 + u64 id = -1ULL; 23 + bool first = true; 24 + int cpu; 25 + 26 + for_each_cpu(cpu, cpus) { 27 + u64 cpu_id; 28 + 29 + switch (pair->key) { 30 + case RISCV_HWPROBE_KEY_MVENDORID: 31 + cpu_id = riscv_cached_mvendorid(cpu); 32 + break; 33 + case RISCV_HWPROBE_KEY_MIMPID: 34 + cpu_id = riscv_cached_mimpid(cpu); 35 + break; 36 + case RISCV_HWPROBE_KEY_MARCHID: 37 + cpu_id = riscv_cached_marchid(cpu); 38 + break; 39 + } 40 + 41 + if (first) { 42 + id = cpu_id; 43 + first = false; 44 + } 45 + 46 + /* 47 + * If there's a mismatch for the given set, return -1 in the 48 + * value. 49 + */ 50 + if (id != cpu_id) { 51 + id = -1ULL; 52 + break; 53 + } 54 + } 55 + 56 + pair->value = id; 57 + } 58 + 59 + static void hwprobe_isa_ext0(struct riscv_hwprobe *pair, 60 + const struct cpumask *cpus) 61 + { 62 + int cpu; 63 + u64 missing = 0; 64 + 65 + pair->value = 0; 66 + if (has_fpu()) 67 + pair->value |= RISCV_HWPROBE_IMA_FD; 68 + 69 + if (riscv_isa_extension_available(NULL, c)) 70 + pair->value |= RISCV_HWPROBE_IMA_C; 71 + 72 + if (has_vector()) 73 + pair->value |= RISCV_HWPROBE_IMA_V; 74 + 75 + /* 76 + * Loop through and record extensions that 1) anyone has, and 2) anyone 77 + * doesn't have. 
78 + */ 79 + for_each_cpu(cpu, cpus) { 80 + struct riscv_isainfo *isainfo = &hart_isa[cpu]; 81 + 82 + #define EXT_KEY(ext) \ 83 + do { \ 84 + if (__riscv_isa_extension_available(isainfo->isa, RISCV_ISA_EXT_##ext)) \ 85 + pair->value |= RISCV_HWPROBE_EXT_##ext; \ 86 + else \ 87 + missing |= RISCV_HWPROBE_EXT_##ext; \ 88 + } while (false) 89 + 90 + /* 91 + * Only use EXT_KEY() for extensions which can be exposed to userspace, 92 + * regardless of the kernel's configuration, as no other checks, besides 93 + * presence in the hart_isa bitmap, are made. 94 + */ 95 + EXT_KEY(ZBA); 96 + EXT_KEY(ZBB); 97 + EXT_KEY(ZBS); 98 + EXT_KEY(ZICBOZ); 99 + EXT_KEY(ZBC); 100 + 101 + EXT_KEY(ZBKB); 102 + EXT_KEY(ZBKC); 103 + EXT_KEY(ZBKX); 104 + EXT_KEY(ZKND); 105 + EXT_KEY(ZKNE); 106 + EXT_KEY(ZKNH); 107 + EXT_KEY(ZKSED); 108 + EXT_KEY(ZKSH); 109 + EXT_KEY(ZKT); 110 + EXT_KEY(ZIHINTNTL); 111 + EXT_KEY(ZTSO); 112 + EXT_KEY(ZACAS); 113 + EXT_KEY(ZICOND); 114 + 115 + if (has_vector()) { 116 + EXT_KEY(ZVBB); 117 + EXT_KEY(ZVBC); 118 + EXT_KEY(ZVKB); 119 + EXT_KEY(ZVKG); 120 + EXT_KEY(ZVKNED); 121 + EXT_KEY(ZVKNHA); 122 + EXT_KEY(ZVKNHB); 123 + EXT_KEY(ZVKSED); 124 + EXT_KEY(ZVKSH); 125 + EXT_KEY(ZVKT); 126 + EXT_KEY(ZVFH); 127 + EXT_KEY(ZVFHMIN); 128 + } 129 + 130 + if (has_fpu()) { 131 + EXT_KEY(ZFH); 132 + EXT_KEY(ZFHMIN); 133 + EXT_KEY(ZFA); 134 + } 135 + #undef EXT_KEY 136 + } 137 + 138 + /* Now turn off reporting features if any CPU is missing it. 
*/ 139 + pair->value &= ~missing; 140 + } 141 + 142 + static bool hwprobe_ext0_has(const struct cpumask *cpus, unsigned long ext) 143 + { 144 + struct riscv_hwprobe pair; 145 + 146 + hwprobe_isa_ext0(&pair, cpus); 147 + return (pair.value & ext); 148 + } 149 + 150 + static u64 hwprobe_misaligned(const struct cpumask *cpus) 151 + { 152 + int cpu; 153 + u64 perf = -1ULL; 154 + 155 + for_each_cpu(cpu, cpus) { 156 + int this_perf = per_cpu(misaligned_access_speed, cpu); 157 + 158 + if (perf == -1ULL) 159 + perf = this_perf; 160 + 161 + if (perf != this_perf) { 162 + perf = RISCV_HWPROBE_MISALIGNED_UNKNOWN; 163 + break; 164 + } 165 + } 166 + 167 + if (perf == -1ULL) 168 + return RISCV_HWPROBE_MISALIGNED_UNKNOWN; 169 + 170 + return perf; 171 + } 172 + 173 + static void hwprobe_one_pair(struct riscv_hwprobe *pair, 174 + const struct cpumask *cpus) 175 + { 176 + switch (pair->key) { 177 + case RISCV_HWPROBE_KEY_MVENDORID: 178 + case RISCV_HWPROBE_KEY_MARCHID: 179 + case RISCV_HWPROBE_KEY_MIMPID: 180 + hwprobe_arch_id(pair, cpus); 181 + break; 182 + /* 183 + * The kernel already assumes that the base single-letter ISA 184 + * extensions are supported on all harts, and only supports the 185 + * IMA base, so just cheat a bit here and tell that to 186 + * userspace. 187 + */ 188 + case RISCV_HWPROBE_KEY_BASE_BEHAVIOR: 189 + pair->value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA; 190 + break; 191 + 192 + case RISCV_HWPROBE_KEY_IMA_EXT_0: 193 + hwprobe_isa_ext0(pair, cpus); 194 + break; 195 + 196 + case RISCV_HWPROBE_KEY_CPUPERF_0: 197 + pair->value = hwprobe_misaligned(cpus); 198 + break; 199 + 200 + case RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE: 201 + pair->value = 0; 202 + if (hwprobe_ext0_has(cpus, RISCV_HWPROBE_EXT_ZICBOZ)) 203 + pair->value = riscv_cboz_block_size; 204 + break; 205 + 206 + /* 207 + * For forward compatibility, unknown keys don't fail the whole 208 + * call, but get their element key set to -1 and value set to 0 209 + * indicating they're unrecognized. 
210 + */ 211 + default: 212 + pair->key = -1; 213 + pair->value = 0; 214 + break; 215 + } 216 + } 217 + 218 + static int hwprobe_get_values(struct riscv_hwprobe __user *pairs, 219 + size_t pair_count, size_t cpusetsize, 220 + unsigned long __user *cpus_user, 221 + unsigned int flags) 222 + { 223 + size_t out; 224 + int ret; 225 + cpumask_t cpus; 226 + 227 + /* Check the reserved flags. */ 228 + if (flags != 0) 229 + return -EINVAL; 230 + 231 + /* 232 + * The interface supports taking in a CPU mask, and returns values that 233 + * are consistent across that mask. Allow userspace to specify NULL and 234 + * 0 as a shortcut to all online CPUs. 235 + */ 236 + cpumask_clear(&cpus); 237 + if (!cpusetsize && !cpus_user) { 238 + cpumask_copy(&cpus, cpu_online_mask); 239 + } else { 240 + if (cpusetsize > cpumask_size()) 241 + cpusetsize = cpumask_size(); 242 + 243 + ret = copy_from_user(&cpus, cpus_user, cpusetsize); 244 + if (ret) 245 + return -EFAULT; 246 + 247 + /* 248 + * Userspace must provide at least one online CPU, without that 249 + * there's no way to define what is supported. 
250 + */ 251 + cpumask_and(&cpus, &cpus, cpu_online_mask); 252 + if (cpumask_empty(&cpus)) 253 + return -EINVAL; 254 + } 255 + 256 + for (out = 0; out < pair_count; out++, pairs++) { 257 + struct riscv_hwprobe pair; 258 + 259 + if (get_user(pair.key, &pairs->key)) 260 + return -EFAULT; 261 + 262 + pair.value = 0; 263 + hwprobe_one_pair(&pair, &cpus); 264 + ret = put_user(pair.key, &pairs->key); 265 + if (ret == 0) 266 + ret = put_user(pair.value, &pairs->value); 267 + 268 + if (ret) 269 + return -EFAULT; 270 + } 271 + 272 + return 0; 273 + } 274 + 275 + static int hwprobe_get_cpus(struct riscv_hwprobe __user *pairs, 276 + size_t pair_count, size_t cpusetsize, 277 + unsigned long __user *cpus_user, 278 + unsigned int flags) 279 + { 280 + cpumask_t cpus, one_cpu; 281 + bool clear_all = false; 282 + size_t i; 283 + int ret; 284 + 285 + if (flags != RISCV_HWPROBE_WHICH_CPUS) 286 + return -EINVAL; 287 + 288 + if (!cpusetsize || !cpus_user) 289 + return -EINVAL; 290 + 291 + if (cpusetsize > cpumask_size()) 292 + cpusetsize = cpumask_size(); 293 + 294 + ret = copy_from_user(&cpus, cpus_user, cpusetsize); 295 + if (ret) 296 + return -EFAULT; 297 + 298 + if (cpumask_empty(&cpus)) 299 + cpumask_copy(&cpus, cpu_online_mask); 300 + 301 + cpumask_and(&cpus, &cpus, cpu_online_mask); 302 + 303 + cpumask_clear(&one_cpu); 304 + 305 + for (i = 0; i < pair_count; i++) { 306 + struct riscv_hwprobe pair, tmp; 307 + int cpu; 308 + 309 + ret = copy_from_user(&pair, &pairs[i], sizeof(pair)); 310 + if (ret) 311 + return -EFAULT; 312 + 313 + if (!riscv_hwprobe_key_is_valid(pair.key)) { 314 + clear_all = true; 315 + pair = (struct riscv_hwprobe){ .key = -1, }; 316 + ret = copy_to_user(&pairs[i], &pair, sizeof(pair)); 317 + if (ret) 318 + return -EFAULT; 319 + } 320 + 321 + if (clear_all) 322 + continue; 323 + 324 + tmp = (struct riscv_hwprobe){ .key = pair.key, }; 325 + 326 + for_each_cpu(cpu, &cpus) { 327 + cpumask_set_cpu(cpu, &one_cpu); 328 + 329 + hwprobe_one_pair(&tmp, &one_cpu); 330 + 
331 + if (!riscv_hwprobe_pair_cmp(&tmp, &pair)) 332 + cpumask_clear_cpu(cpu, &cpus); 333 + 334 + cpumask_clear_cpu(cpu, &one_cpu); 335 + } 336 + } 337 + 338 + if (clear_all) 339 + cpumask_clear(&cpus); 340 + 341 + ret = copy_to_user(cpus_user, &cpus, cpusetsize); 342 + if (ret) 343 + return -EFAULT; 344 + 345 + return 0; 346 + } 347 + 348 + static int do_riscv_hwprobe(struct riscv_hwprobe __user *pairs, 349 + size_t pair_count, size_t cpusetsize, 350 + unsigned long __user *cpus_user, 351 + unsigned int flags) 352 + { 353 + if (flags & RISCV_HWPROBE_WHICH_CPUS) 354 + return hwprobe_get_cpus(pairs, pair_count, cpusetsize, 355 + cpus_user, flags); 356 + 357 + return hwprobe_get_values(pairs, pair_count, cpusetsize, 358 + cpus_user, flags); 359 + } 360 + 361 + #ifdef CONFIG_MMU 362 + 363 + static int __init init_hwprobe_vdso_data(void) 364 + { 365 + struct vdso_data *vd = __arch_get_k_vdso_data(); 366 + struct arch_vdso_data *avd = &vd->arch_data; 367 + u64 id_bitsmash = 0; 368 + struct riscv_hwprobe pair; 369 + int key; 370 + 371 + /* 372 + * Initialize vDSO data with the answers for the "all CPUs" case, to 373 + * save a syscall in the common case. 374 + */ 375 + for (key = 0; key <= RISCV_HWPROBE_MAX_KEY; key++) { 376 + pair.key = key; 377 + hwprobe_one_pair(&pair, cpu_online_mask); 378 + 379 + WARN_ON_ONCE(pair.key < 0); 380 + 381 + avd->all_cpu_hwprobe_values[key] = pair.value; 382 + /* 383 + * Smash together the vendor, arch, and impl IDs to see if 384 + * they're all 0 or any negative. 385 + */ 386 + if (key <= RISCV_HWPROBE_KEY_MIMPID) 387 + id_bitsmash |= pair.value; 388 + } 389 + 390 + /* 391 + * If the arch, vendor, and implementation ID are all the same across 392 + * all harts, then assume all CPUs are the same, and allow the vDSO to 393 + * answer queries for arbitrary masks. However if all values are 0 (not 394 + * populated) or any value returns -1 (varies across CPUs), then the 395 + * vDSO should defer to the kernel for exotic cpu masks. 
396 + */ 397 + avd->homogeneous_cpus = id_bitsmash != 0 && id_bitsmash != -1; 398 + return 0; 399 + } 400 + 401 + arch_initcall_sync(init_hwprobe_vdso_data); 402 + 403 + #endif /* CONFIG_MMU */ 404 + 405 + SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs, 406 + size_t, pair_count, size_t, cpusetsize, unsigned long __user *, 407 + cpus, unsigned int, flags) 408 + { 409 + return do_riscv_hwprobe(pairs, pair_count, cpusetsize, 410 + cpus, flags); 411 + }
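The new hwprobe_get_cpus() path above inverts the query: each requested key/value pair prunes the caller's CPU set down to the CPUs that match. A minimal sketch of that reduction in plain C, using a 64-bit bitmap as a toy cpumask and an illustrative per-CPU value table (names here are not kernel API):

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Toy model of the RISCV_HWPROBE_WHICH_CPUS reduction: `cpus` is a
 * bitmap of candidate CPUs, values[cpu] is that CPU's answer for one
 * hwprobe key, and any CPU whose answer differs from `wanted` is
 * dropped, mirroring the one_cpu loop in hwprobe_get_cpus().
 */
static uint64_t reduce_cpus(uint64_t cpus, const uint64_t *values,
			    size_t ncpus, uint64_t wanted)
{
	for (size_t cpu = 0; cpu < ncpus; cpu++) {
		if ((cpus & (1ULL << cpu)) && values[cpu] != wanted)
			cpus &= ~(1ULL << cpu);
	}
	return cpus;
}
```

In the real syscall this runs once per key/value pair, so the surviving set is the intersection of the per-pair matches.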
-285
arch/riscv/kernel/sys_riscv.c
··· 7 7 8 8 #include <linux/syscalls.h> 9 9 #include <asm/cacheflush.h> 10 - #include <asm/cpufeature.h> 11 - #include <asm/hwprobe.h> 12 - #include <asm/sbi.h> 13 - #include <asm/vector.h> 14 - #include <asm/switch_to.h> 15 - #include <asm/uaccess.h> 16 - #include <asm/unistd.h> 17 10 #include <asm-generic/mman-common.h> 18 - #include <vdso/vsyscall.h> 19 11 20 12 static long riscv_sys_mmap(unsigned long addr, unsigned long len, 21 13 unsigned long prot, unsigned long flags, ··· 67 75 flush_icache_mm(current->mm, flags & SYS_RISCV_FLUSH_ICACHE_LOCAL); 68 76 69 77 return 0; 70 - } 71 - 72 - /* 73 - * The hwprobe interface, for allowing userspace to probe to see which features 74 - * are supported by the hardware. See Documentation/arch/riscv/hwprobe.rst for more 75 - * details. 76 - */ 77 - static void hwprobe_arch_id(struct riscv_hwprobe *pair, 78 - const struct cpumask *cpus) 79 - { 80 - u64 id = -1ULL; 81 - bool first = true; 82 - int cpu; 83 - 84 - for_each_cpu(cpu, cpus) { 85 - u64 cpu_id; 86 - 87 - switch (pair->key) { 88 - case RISCV_HWPROBE_KEY_MVENDORID: 89 - cpu_id = riscv_cached_mvendorid(cpu); 90 - break; 91 - case RISCV_HWPROBE_KEY_MIMPID: 92 - cpu_id = riscv_cached_mimpid(cpu); 93 - break; 94 - case RISCV_HWPROBE_KEY_MARCHID: 95 - cpu_id = riscv_cached_marchid(cpu); 96 - break; 97 - } 98 - 99 - if (first) { 100 - id = cpu_id; 101 - first = false; 102 - } 103 - 104 - /* 105 - * If there's a mismatch for the given set, return -1 in the 106 - * value. 
107 - */ 108 - if (id != cpu_id) { 109 - id = -1ULL; 110 - break; 111 - } 112 - } 113 - 114 - pair->value = id; 115 - } 116 - 117 - static void hwprobe_isa_ext0(struct riscv_hwprobe *pair, 118 - const struct cpumask *cpus) 119 - { 120 - int cpu; 121 - u64 missing = 0; 122 - 123 - pair->value = 0; 124 - if (has_fpu()) 125 - pair->value |= RISCV_HWPROBE_IMA_FD; 126 - 127 - if (riscv_isa_extension_available(NULL, c)) 128 - pair->value |= RISCV_HWPROBE_IMA_C; 129 - 130 - if (has_vector()) 131 - pair->value |= RISCV_HWPROBE_IMA_V; 132 - 133 - /* 134 - * Loop through and record extensions that 1) anyone has, and 2) anyone 135 - * doesn't have. 136 - */ 137 - for_each_cpu(cpu, cpus) { 138 - struct riscv_isainfo *isainfo = &hart_isa[cpu]; 139 - 140 - #define EXT_KEY(ext) \ 141 - do { \ 142 - if (__riscv_isa_extension_available(isainfo->isa, RISCV_ISA_EXT_##ext)) \ 143 - pair->value |= RISCV_HWPROBE_EXT_##ext; \ 144 - else \ 145 - missing |= RISCV_HWPROBE_EXT_##ext; \ 146 - } while (false) 147 - 148 - /* 149 - * Only use EXT_KEY() for extensions which can be exposed to userspace, 150 - * regardless of the kernel's configuration, as no other checks, besides 151 - * presence in the hart_isa bitmap, are made. 152 - */ 153 - EXT_KEY(ZBA); 154 - EXT_KEY(ZBB); 155 - EXT_KEY(ZBS); 156 - EXT_KEY(ZICBOZ); 157 - #undef EXT_KEY 158 - } 159 - 160 - /* Now turn off reporting features if any CPU is missing it. 
*/ 161 - pair->value &= ~missing; 162 - } 163 - 164 - static bool hwprobe_ext0_has(const struct cpumask *cpus, u64 ext) 165 - { 166 - struct riscv_hwprobe pair; 167 - 168 - hwprobe_isa_ext0(&pair, cpus); 169 - return (pair.value & ext); 170 - } 171 - 172 - static u64 hwprobe_misaligned(const struct cpumask *cpus) 173 - { 174 - int cpu; 175 - u64 perf = -1ULL; 176 - 177 - for_each_cpu(cpu, cpus) { 178 - int this_perf = per_cpu(misaligned_access_speed, cpu); 179 - 180 - if (perf == -1ULL) 181 - perf = this_perf; 182 - 183 - if (perf != this_perf) { 184 - perf = RISCV_HWPROBE_MISALIGNED_UNKNOWN; 185 - break; 186 - } 187 - } 188 - 189 - if (perf == -1ULL) 190 - return RISCV_HWPROBE_MISALIGNED_UNKNOWN; 191 - 192 - return perf; 193 - } 194 - 195 - static void hwprobe_one_pair(struct riscv_hwprobe *pair, 196 - const struct cpumask *cpus) 197 - { 198 - switch (pair->key) { 199 - case RISCV_HWPROBE_KEY_MVENDORID: 200 - case RISCV_HWPROBE_KEY_MARCHID: 201 - case RISCV_HWPROBE_KEY_MIMPID: 202 - hwprobe_arch_id(pair, cpus); 203 - break; 204 - /* 205 - * The kernel already assumes that the base single-letter ISA 206 - * extensions are supported on all harts, and only supports the 207 - * IMA base, so just cheat a bit here and tell that to 208 - * userspace. 209 - */ 210 - case RISCV_HWPROBE_KEY_BASE_BEHAVIOR: 211 - pair->value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA; 212 - break; 213 - 214 - case RISCV_HWPROBE_KEY_IMA_EXT_0: 215 - hwprobe_isa_ext0(pair, cpus); 216 - break; 217 - 218 - case RISCV_HWPROBE_KEY_CPUPERF_0: 219 - pair->value = hwprobe_misaligned(cpus); 220 - break; 221 - 222 - case RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE: 223 - pair->value = 0; 224 - if (hwprobe_ext0_has(cpus, RISCV_HWPROBE_EXT_ZICBOZ)) 225 - pair->value = riscv_cboz_block_size; 226 - break; 227 - 228 - /* 229 - * For forward compatibility, unknown keys don't fail the whole 230 - * call, but get their element key set to -1 and value set to 0 231 - * indicating they're unrecognized. 
232 - */ 233 - default: 234 - pair->key = -1; 235 - pair->value = 0; 236 - break; 237 - } 238 - } 239 - 240 - static int do_riscv_hwprobe(struct riscv_hwprobe __user *pairs, 241 - size_t pair_count, size_t cpu_count, 242 - unsigned long __user *cpus_user, 243 - unsigned int flags) 244 - { 245 - size_t out; 246 - int ret; 247 - cpumask_t cpus; 248 - 249 - /* Check the reserved flags. */ 250 - if (flags != 0) 251 - return -EINVAL; 252 - 253 - /* 254 - * The interface supports taking in a CPU mask, and returns values that 255 - * are consistent across that mask. Allow userspace to specify NULL and 256 - * 0 as a shortcut to all online CPUs. 257 - */ 258 - cpumask_clear(&cpus); 259 - if (!cpu_count && !cpus_user) { 260 - cpumask_copy(&cpus, cpu_online_mask); 261 - } else { 262 - if (cpu_count > cpumask_size()) 263 - cpu_count = cpumask_size(); 264 - 265 - ret = copy_from_user(&cpus, cpus_user, cpu_count); 266 - if (ret) 267 - return -EFAULT; 268 - 269 - /* 270 - * Userspace must provide at least one online CPU, without that 271 - * there's no way to define what is supported. 
272 - */ 273 - cpumask_and(&cpus, &cpus, cpu_online_mask); 274 - if (cpumask_empty(&cpus)) 275 - return -EINVAL; 276 - } 277 - 278 - for (out = 0; out < pair_count; out++, pairs++) { 279 - struct riscv_hwprobe pair; 280 - 281 - if (get_user(pair.key, &pairs->key)) 282 - return -EFAULT; 283 - 284 - pair.value = 0; 285 - hwprobe_one_pair(&pair, &cpus); 286 - ret = put_user(pair.key, &pairs->key); 287 - if (ret == 0) 288 - ret = put_user(pair.value, &pairs->value); 289 - 290 - if (ret) 291 - return -EFAULT; 292 - } 293 - 294 - return 0; 295 - } 296 - 297 - #ifdef CONFIG_MMU 298 - 299 - static int __init init_hwprobe_vdso_data(void) 300 - { 301 - struct vdso_data *vd = __arch_get_k_vdso_data(); 302 - struct arch_vdso_data *avd = &vd->arch_data; 303 - u64 id_bitsmash = 0; 304 - struct riscv_hwprobe pair; 305 - int key; 306 - 307 - /* 308 - * Initialize vDSO data with the answers for the "all CPUs" case, to 309 - * save a syscall in the common case. 310 - */ 311 - for (key = 0; key <= RISCV_HWPROBE_MAX_KEY; key++) { 312 - pair.key = key; 313 - hwprobe_one_pair(&pair, cpu_online_mask); 314 - 315 - WARN_ON_ONCE(pair.key < 0); 316 - 317 - avd->all_cpu_hwprobe_values[key] = pair.value; 318 - /* 319 - * Smash together the vendor, arch, and impl IDs to see if 320 - * they're all 0 or any negative. 321 - */ 322 - if (key <= RISCV_HWPROBE_KEY_MIMPID) 323 - id_bitsmash |= pair.value; 324 - } 325 - 326 - /* 327 - * If the arch, vendor, and implementation ID are all the same across 328 - * all harts, then assume all CPUs are the same, and allow the vDSO to 329 - * answer queries for arbitrary masks. However if all values are 0 (not 330 - * populated) or any value returns -1 (varies across CPUs), then the 331 - * vDSO should defer to the kernel for exotic cpu masks. 
332 - */ 333 - avd->homogeneous_cpus = id_bitsmash != 0 && id_bitsmash != -1; 334 - return 0; 335 - } 336 - 337 - arch_initcall_sync(init_hwprobe_vdso_data); 338 - 339 - #endif /* CONFIG_MMU */ 340 - 341 - SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs, 342 - size_t, pair_count, size_t, cpu_count, unsigned long __user *, 343 - cpus, unsigned int, flags) 344 - { 345 - return do_riscv_hwprobe(pairs, pair_count, cpu_count, 346 - cpus, flags); 347 78 } 348 79 349 80 /* Not defined using SYSCALL_DEFINE0 to avoid error injection */
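The code removed here (it moved to sys_hwprobe.c) implements the two reduction rules the hwprobe ABI documents: value-like keys return a value only if every CPU in the set agrees, otherwise -1; boolean-like keys are the logical AND of per-CPU bitmaps, computed by also tracking which bits any CPU is missing. A self-contained sketch of both rules:

```c
#include <stdint.h>
#include <stddef.h>

/* Value-like key (mvendorid/marchid/mimpid): consensus or -1ULL. */
static uint64_t consensus(const uint64_t *vals, size_t n)
{
	uint64_t id = vals[0];

	for (size_t i = 1; i < n; i++)
		if (vals[i] != id)
			return -1ULL;
	return id;
}

/*
 * Boolean-like key (IMA_EXT_0): OR in what each CPU has, record in
 * `missing` any bit of `all` some CPU lacks, then mask those off —
 * the same trick as the EXT_KEY()/missing loop in hwprobe_isa_ext0().
 */
static uint64_t common_exts(const uint64_t *isa, size_t n, uint64_t all)
{
	uint64_t value = 0, missing = 0;

	for (size_t i = 0; i < n; i++) {
		value   |= isa[i];
		missing |= all & ~isa[i];
	}
	return value & ~missing;
}
```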
+3 -3
arch/riscv/kernel/traps_misaligned.c
··· 319 319 static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val) 320 320 { 321 321 if (user_mode(regs)) { 322 - return __get_user(*r_val, addr); 322 + return __get_user(*r_val, (u8 __user *)addr); 323 323 } else { 324 324 *r_val = *addr; 325 325 return 0; ··· 329 329 static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val) 330 330 { 331 331 if (user_mode(regs)) { 332 - return __put_user(val, addr); 332 + return __put_user(val, (u8 __user *)addr); 333 333 } else { 334 334 *addr = val; 335 335 return 0; ··· 343 343 if (user_mode(regs)) { \ 344 344 __ret = __get_user(insn, insn_addr); \ 345 345 } else { \ 346 - insn = *insn_addr; \ 346 + insn = *(__force u16 *)insn_addr; \ 347 347 __ret = 0; \ 348 348 } \ 349 349 \
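The `__user` casts above only fix sparse annotations; the underlying technique in traps_misaligned.c is unchanged: the handler emulates a misaligned access by moving one byte at a time (load_u8()/store_u8()) so it never performs a misaligned access itself. The byte-assembly step can be sketched without any uaccess machinery:

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Emulate a little-endian load of `len` bytes from an arbitrarily
 * aligned address by reading single bytes and shifting them into
 * place, as the misaligned-trap handler does with load_u8().
 */
static uint64_t emulate_load(const uint8_t *addr, size_t len)
{
	uint64_t val = 0;

	for (size_t i = 0; i < len; i++)
		val |= (uint64_t)addr[i] << (8 * i);
	return val;
}
```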
+75 -11
arch/riscv/kernel/vdso/hwprobe.c
··· 3 3 * Copyright 2023 Rivos, Inc 4 4 */ 5 5 6 + #include <linux/string.h> 6 7 #include <linux/types.h> 7 8 #include <vdso/datapage.h> 8 9 #include <vdso/helpers.h> 9 10 10 11 extern int riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count, 11 - size_t cpu_count, unsigned long *cpus, 12 + size_t cpusetsize, unsigned long *cpus, 12 13 unsigned int flags); 13 14 14 - /* Add a prototype to avoid -Wmissing-prototypes warning. */ 15 - int __vdso_riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count, 16 - size_t cpu_count, unsigned long *cpus, 17 - unsigned int flags); 18 - 19 - int __vdso_riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count, 20 - size_t cpu_count, unsigned long *cpus, 21 - unsigned int flags) 15 + static int riscv_vdso_get_values(struct riscv_hwprobe *pairs, size_t pair_count, 16 + size_t cpusetsize, unsigned long *cpus, 17 + unsigned int flags) 22 18 { 23 19 const struct vdso_data *vd = __arch_get_vdso_data(); 24 20 const struct arch_vdso_data *avd = &vd->arch_data; 25 - bool all_cpus = !cpu_count && !cpus; 21 + bool all_cpus = !cpusetsize && !cpus; 26 22 struct riscv_hwprobe *p = pairs; 27 23 struct riscv_hwprobe *end = pairs + pair_count; 28 24 ··· 29 33 * masks. 30 34 */ 31 35 if ((flags != 0) || (!all_cpus && !avd->homogeneous_cpus)) 32 - return riscv_hwprobe(pairs, pair_count, cpu_count, cpus, flags); 36 + return riscv_hwprobe(pairs, pair_count, cpusetsize, cpus, flags); 33 37 34 38 /* This is something we can handle, fill out the pairs. 
*/ 35 39 while (p < end) { ··· 45 49 } 46 50 47 51 return 0; 52 + } 53 + 54 + static int riscv_vdso_get_cpus(struct riscv_hwprobe *pairs, size_t pair_count, 55 + size_t cpusetsize, unsigned long *cpus, 56 + unsigned int flags) 57 + { 58 + const struct vdso_data *vd = __arch_get_vdso_data(); 59 + const struct arch_vdso_data *avd = &vd->arch_data; 60 + struct riscv_hwprobe *p = pairs; 61 + struct riscv_hwprobe *end = pairs + pair_count; 62 + unsigned char *c = (unsigned char *)cpus; 63 + bool empty_cpus = true; 64 + bool clear_all = false; 65 + int i; 66 + 67 + if (!cpusetsize || !cpus) 68 + return -EINVAL; 69 + 70 + for (i = 0; i < cpusetsize; i++) { 71 + if (c[i]) { 72 + empty_cpus = false; 73 + break; 74 + } 75 + } 76 + 77 + if (empty_cpus || flags != RISCV_HWPROBE_WHICH_CPUS || !avd->homogeneous_cpus) 78 + return riscv_hwprobe(pairs, pair_count, cpusetsize, cpus, flags); 79 + 80 + while (p < end) { 81 + if (riscv_hwprobe_key_is_valid(p->key)) { 82 + struct riscv_hwprobe t = { 83 + .key = p->key, 84 + .value = avd->all_cpu_hwprobe_values[p->key], 85 + }; 86 + 87 + if (!riscv_hwprobe_pair_cmp(&t, p)) 88 + clear_all = true; 89 + } else { 90 + clear_all = true; 91 + p->key = -1; 92 + p->value = 0; 93 + } 94 + p++; 95 + } 96 + 97 + if (clear_all) { 98 + for (i = 0; i < cpusetsize; i++) 99 + c[i] = 0; 100 + } 101 + 102 + return 0; 103 + } 104 + 105 + /* Add a prototype to avoid -Wmissing-prototypes warning. */ 106 + int __vdso_riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count, 107 + size_t cpusetsize, unsigned long *cpus, 108 + unsigned int flags); 109 + 110 + int __vdso_riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count, 111 + size_t cpusetsize, unsigned long *cpus, 112 + unsigned int flags) 113 + { 114 + if (flags & RISCV_HWPROBE_WHICH_CPUS) 115 + return riscv_vdso_get_cpus(pairs, pair_count, cpusetsize, 116 + cpus, flags); 117 + 118 + return riscv_vdso_get_values(pairs, pair_count, cpusetsize, 119 + cpus, flags); 48 120 }
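Both vDSO paths above answer from cached data only when `avd->homogeneous_cpus` says it is safe, and otherwise fall back to the real syscall. That flag comes from init_hwprobe_vdso_data(): the vendor, arch, and impl IDs are OR-ed together, and the system counts as homogeneous only if the result is neither 0 (unpopulated) nor -1 (some ID varied across CPUs). The heuristic in isolation:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of the id_bitsmash check: id_values[] holds the "all CPUs"
 * answers for the ID keys, where -1 marks a key that differed between
 * CPUs. All-zero or any -1 forces the vDSO to defer to the kernel.
 */
static bool homogeneous(const uint64_t *id_values, size_t n)
{
	uint64_t bitsmash = 0;

	for (size_t i = 0; i < n; i++)
		bitsmash |= id_values[i];
	return bitsmash != 0 && bitsmash != (uint64_t)-1;
}
```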
+2
arch/riscv/kernel/vmlinux-xip.lds.S
··· 29 29 HEAD_TEXT_SECTION 30 30 INIT_TEXT_SECTION(PAGE_SIZE) 31 31 /* we have to discard exit text and such at runtime, not link time */ 32 + __exittext_begin = .; 32 33 .exit.text : 33 34 { 34 35 EXIT_TEXT 35 36 } 37 + __exittext_end = .; 36 38 37 39 .text : { 38 40 _text = .;
+2
arch/riscv/kernel/vmlinux.lds.S
··· 69 69 __soc_builtin_dtb_table_end = .; 70 70 } 71 71 /* we have to discard exit text and such at runtime, not link time */ 72 + __exittext_begin = .; 72 73 .exit.text : 73 74 { 74 75 EXIT_TEXT 75 76 } 77 + __exittext_end = .; 76 78 77 79 __init_text_end = .; 78 80 . = ALIGN(SECTION_ALIGN);
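The `__exittext_begin`/`__exittext_end` markers added to both linker scripts exist so the code-patching path ("riscv: Check if the code to patch lies in the exit section") can refuse to touch text that is discarded at runtime. The check itself is a half-open range test; a sketch with assumed sample addresses standing in for the linker-provided symbols:

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * In the kernel these come from the linker script; the values here
 * are purely illustrative.
 */
static const uintptr_t exittext_begin = 0xffffffff81200000UL;
static const uintptr_t exittext_end   = 0xffffffff81204000UL;

/* Half-open interval: begin is inside, end is the first byte past. */
static bool is_exit_text(uintptr_t addr)
{
	return addr >= exittext_begin && addr < exittext_end;
}
```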
+11 -11
arch/riscv/kvm/mmu.c
··· 103 103 *ptep_level = current_level; 104 104 ptep = (pte_t *)kvm->arch.pgd; 105 105 ptep = &ptep[gstage_pte_index(addr, current_level)]; 106 - while (ptep && pte_val(*ptep)) { 106 + while (ptep && pte_val(ptep_get(ptep))) { 107 107 if (gstage_pte_leaf(ptep)) { 108 108 *ptep_level = current_level; 109 109 *ptepp = ptep; ··· 113 113 if (current_level) { 114 114 current_level--; 115 115 *ptep_level = current_level; 116 - ptep = (pte_t *)gstage_pte_page_vaddr(*ptep); 116 + ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); 117 117 ptep = &ptep[gstage_pte_index(addr, current_level)]; 118 118 } else { 119 119 ptep = NULL; ··· 149 149 if (gstage_pte_leaf(ptep)) 150 150 return -EEXIST; 151 151 152 - if (!pte_val(*ptep)) { 152 + if (!pte_val(ptep_get(ptep))) { 153 153 if (!pcache) 154 154 return -ENOMEM; 155 155 next_ptep = kvm_mmu_memory_cache_alloc(pcache); 156 156 if (!next_ptep) 157 157 return -ENOMEM; 158 - *ptep = pfn_pte(PFN_DOWN(__pa(next_ptep)), 159 - __pgprot(_PAGE_TABLE)); 158 + set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)), 159 + __pgprot(_PAGE_TABLE))); 160 160 } else { 161 161 if (gstage_pte_leaf(ptep)) 162 162 return -EEXIST; 163 - next_ptep = (pte_t *)gstage_pte_page_vaddr(*ptep); 163 + next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); 164 164 } 165 165 166 166 current_level--; 167 167 ptep = &next_ptep[gstage_pte_index(addr, current_level)]; 168 168 } 169 169 170 - *ptep = *new_pte; 170 + set_pte(ptep, *new_pte); 171 171 if (gstage_pte_leaf(ptep)) 172 172 gstage_remote_tlb_flush(kvm, current_level, addr); 173 173 ··· 239 239 240 240 BUG_ON(addr & (page_size - 1)); 241 241 242 - if (!pte_val(*ptep)) 242 + if (!pte_val(ptep_get(ptep))) 243 243 return; 244 244 245 245 if (ptep_level && !gstage_pte_leaf(ptep)) { 246 - next_ptep = (pte_t *)gstage_pte_page_vaddr(*ptep); 246 + next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); 247 247 next_ptep_level = ptep_level - 1; 248 248 ret = gstage_level_to_page_size(next_ptep_level, 249 249 
&next_page_size); ··· 261 261 if (op == GSTAGE_OP_CLEAR) 262 262 set_pte(ptep, __pte(0)); 263 263 else if (op == GSTAGE_OP_WP) 264 - set_pte(ptep, __pte(pte_val(*ptep) & ~_PAGE_WRITE)); 264 + set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE)); 265 265 gstage_remote_tlb_flush(kvm, ptep_level, addr); 266 266 } 267 267 } ··· 603 603 &ptep, &ptep_level)) 604 604 return false; 605 605 606 - return pte_young(*ptep); 606 + return pte_young(ptep_get(ptep)); 607 607 } 608 608 609 609 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
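The `*ptep` to `ptep_get(ptep)` conversions in this file (and throughout the mm changes below) are part of the series that routes every page-table read through accessors built on {READ,WRITE}_ONCE, so the compiler cannot tear or silently re-read a live entry. A toy model of the pattern, using a volatile read as a stand-in for READ_ONCE():

```c
#include <stdint.h>

typedef struct { uint64_t pte; } toy_pte_t;

/*
 * Toy ptep_get(): one volatile read standing in for READ_ONCE().
 * Callers then operate on the returned snapshot instead of
 * dereferencing *ptep repeatedly.
 */
static toy_pte_t toy_ptep_get(const toy_pte_t *ptep)
{
	toy_pte_t v;

	v.pte = *(const volatile uint64_t *)&ptep->pte;
	return v;
}

static uint64_t toy_pte_val(toy_pte_t p)
{
	return p.pte;
}
```

The kernel's real accessor additionally has per-level variants (pgdp_get(), p4dp_get(), pudp_get(), pmdp_get()), which is what the rest of this diff switches to.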
+1 -1
arch/riscv/lib/clear_page.S
··· 4 4 */ 5 5 6 6 #include <linux/linkage.h> 7 + #include <linux/export.h> 7 8 #include <asm/asm.h> 8 9 #include <asm/alternative-macros.h> 9 - #include <asm-generic/export.h> 10 10 #include <asm/hwcap.h> 11 11 #include <asm/insn-def.h> 12 12 #include <asm/page.h>
+1 -1
arch/riscv/lib/tishift.S
··· 4 4 */ 5 5 6 6 #include <linux/linkage.h> 7 - #include <asm-generic/export.h> 7 + #include <linux/export.h> 8 8 9 9 SYM_FUNC_START(__lshrti3) 10 10 beqz a2, .L1
+1 -1
arch/riscv/lib/uaccess.S
··· 1 1 #include <linux/linkage.h> 2 - #include <asm-generic/export.h> 2 + #include <linux/export.h> 3 3 #include <asm/asm.h> 4 4 #include <asm/asm-extable.h> 5 5 #include <asm/csr.h>
+1 -2
arch/riscv/mm/Makefile
··· 13 13 KCOV_INSTRUMENT_init.o := n 14 14 15 15 obj-y += init.o 16 - obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o 16 + obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o pgtable.o 17 17 obj-y += cacheflush.o 18 18 obj-y += context.o 19 - obj-y += pgtable.o 20 19 obj-y += pmem.o 21 20 22 21 ifeq ($(CONFIG_MMU),y)
+8 -8
arch/riscv/mm/fault.c
··· 136 136 pgd = (pgd_t *)pfn_to_virt(pfn) + index; 137 137 pgd_k = init_mm.pgd + index; 138 138 139 - if (!pgd_present(*pgd_k)) { 139 + if (!pgd_present(pgdp_get(pgd_k))) { 140 140 no_context(regs, addr); 141 141 return; 142 142 } 143 - set_pgd(pgd, *pgd_k); 143 + set_pgd(pgd, pgdp_get(pgd_k)); 144 144 145 145 p4d_k = p4d_offset(pgd_k, addr); 146 - if (!p4d_present(*p4d_k)) { 146 + if (!p4d_present(p4dp_get(p4d_k))) { 147 147 no_context(regs, addr); 148 148 return; 149 149 } 150 150 151 151 pud_k = pud_offset(p4d_k, addr); 152 - if (!pud_present(*pud_k)) { 152 + if (!pud_present(pudp_get(pud_k))) { 153 153 no_context(regs, addr); 154 154 return; 155 155 } 156 - if (pud_leaf(*pud_k)) 156 + if (pud_leaf(pudp_get(pud_k))) 157 157 goto flush_tlb; 158 158 159 159 /* ··· 161 161 * to copy individual PTEs 162 162 */ 163 163 pmd_k = pmd_offset(pud_k, addr); 164 - if (!pmd_present(*pmd_k)) { 164 + if (!pmd_present(pmdp_get(pmd_k))) { 165 165 no_context(regs, addr); 166 166 return; 167 167 } 168 - if (pmd_leaf(*pmd_k)) 168 + if (pmd_leaf(pmdp_get(pmd_k))) 169 169 goto flush_tlb; 170 170 171 171 /* ··· 175 175 * silently loop forever. 176 176 */ 177 177 pte_k = pte_offset_kernel(pmd_k, addr); 178 - if (!pte_present(*pte_k)) { 178 + if (!pte_present(ptep_get(pte_k))) { 179 179 no_context(regs, addr); 180 180 return; 181 181 }
+6 -6
arch/riscv/mm/hugetlbpage.c
··· 54 54 } 55 55 56 56 if (sz == PMD_SIZE) { 57 - if (want_pmd_share(vma, addr) && pud_none(*pud)) 57 + if (want_pmd_share(vma, addr) && pud_none(pudp_get(pud))) 58 58 pte = huge_pmd_share(mm, vma, addr, pud); 59 59 else 60 60 pte = (pte_t *)pmd_alloc(mm, pud, addr); ··· 93 93 pmd_t *pmd; 94 94 95 95 pgd = pgd_offset(mm, addr); 96 - if (!pgd_present(*pgd)) 96 + if (!pgd_present(pgdp_get(pgd))) 97 97 return NULL; 98 98 99 99 p4d = p4d_offset(pgd, addr); 100 - if (!p4d_present(*p4d)) 100 + if (!p4d_present(p4dp_get(p4d))) 101 101 return NULL; 102 102 103 103 pud = pud_offset(p4d, addr); ··· 105 105 /* must be pud huge, non-present or none */ 106 106 return (pte_t *)pud; 107 107 108 - if (!pud_present(*pud)) 108 + if (!pud_present(pudp_get(pud))) 109 109 return NULL; 110 110 111 111 pmd = pmd_offset(pud, addr); ··· 113 113 /* must be pmd huge, non-present or none */ 114 114 return (pte_t *)pmd; 115 115 116 - if (!pmd_present(*pmd)) 116 + if (!pmd_present(pmdp_get(pmd))) 117 117 return NULL; 118 118 119 119 for_each_napot_order(order) { ··· 293 293 pte_t *ptep, 294 294 unsigned long sz) 295 295 { 296 - pte_t pte = READ_ONCE(*ptep); 296 + pte_t pte = ptep_get(ptep); 297 297 int i, pte_num; 298 298 299 299 if (!pte_napot(pte)) {
+6 -2
arch/riscv/mm/init.c
··· 174 174 175 175 /* Limit the memory size via mem. */ 176 176 static phys_addr_t memory_limit; 177 + #ifdef CONFIG_XIP_KERNEL 178 + #define memory_limit (*(phys_addr_t *)XIP_FIXUP(&memory_limit)) 179 + #endif /* CONFIG_XIP_KERNEL */ 177 180 178 181 static int __init early_mem(char *p) 179 182 { ··· 955 952 * setup_vm_final installs the linear mapping. For 32-bit kernel, as the 956 953 * kernel is mapped in the linear mapping, that makes no difference. 957 954 */ 958 - dtb_early_va = kernel_mapping_pa_to_va(XIP_FIXUP(dtb_pa)); 955 + dtb_early_va = kernel_mapping_pa_to_va(dtb_pa); 959 956 #endif 960 957 961 958 dtb_early_pa = dtb_pa; ··· 1058 1055 #endif 1059 1056 1060 1057 kernel_map.virt_addr = KERNEL_LINK_ADDR + kernel_map.virt_offset; 1061 - kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL); 1062 1058 1063 1059 #ifdef CONFIG_XIP_KERNEL 1060 + kernel_map.page_offset = PAGE_OFFSET_L3; 1064 1061 kernel_map.xiprom = (uintptr_t)CONFIG_XIP_PHYS_ADDR; 1065 1062 kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom); 1066 1063 ··· 1070 1067 1071 1068 kernel_map.va_kernel_xip_pa_offset = kernel_map.virt_addr - kernel_map.xiprom; 1072 1069 #else 1070 + kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL); 1073 1071 kernel_map.phys_addr = (uintptr_t)(&_start); 1074 1072 kernel_map.size = (uintptr_t)(&_end) - kernel_map.phys_addr; 1075 1073 #endif
+24 -21
arch/riscv/mm/kasan_init.c
··· 31 31 phys_addr_t phys_addr; 32 32 pte_t *ptep, *p; 33 33 34 - if (pmd_none(*pmd)) { 34 + if (pmd_none(pmdp_get(pmd))) { 35 35 p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE); 36 36 set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE)); 37 37 } ··· 39 39 ptep = pte_offset_kernel(pmd, vaddr); 40 40 41 41 do { 42 - if (pte_none(*ptep)) { 42 + if (pte_none(ptep_get(ptep))) { 43 43 phys_addr = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE); 44 44 set_pte(ptep, pfn_pte(PFN_DOWN(phys_addr), PAGE_KERNEL)); 45 45 memset(__va(phys_addr), KASAN_SHADOW_INIT, PAGE_SIZE); ··· 53 53 pmd_t *pmdp, *p; 54 54 unsigned long next; 55 55 56 - if (pud_none(*pud)) { 56 + if (pud_none(pudp_get(pud))) { 57 57 p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE); 58 58 set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE)); 59 59 } ··· 63 63 do { 64 64 next = pmd_addr_end(vaddr, end); 65 65 66 - if (pmd_none(*pmdp) && IS_ALIGNED(vaddr, PMD_SIZE) && (next - vaddr) >= PMD_SIZE) { 66 + if (pmd_none(pmdp_get(pmdp)) && IS_ALIGNED(vaddr, PMD_SIZE) && 67 + (next - vaddr) >= PMD_SIZE) { 67 68 phys_addr = memblock_phys_alloc(PMD_SIZE, PMD_SIZE); 68 69 if (phys_addr) { 69 70 set_pmd(pmdp, pfn_pmd(PFN_DOWN(phys_addr), PAGE_KERNEL)); ··· 84 83 pud_t *pudp, *p; 85 84 unsigned long next; 86 85 87 - if (p4d_none(*p4d)) { 86 + if (p4d_none(p4dp_get(p4d))) { 88 87 p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); 89 88 set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE)); 90 89 } ··· 94 93 do { 95 94 next = pud_addr_end(vaddr, end); 96 95 97 - if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) { 96 + if (pud_none(pudp_get(pudp)) && IS_ALIGNED(vaddr, PUD_SIZE) && 97 + (next - vaddr) >= PUD_SIZE) { 98 98 phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE); 99 99 if (phys_addr) { 100 100 set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL)); ··· 115 113 p4d_t *p4dp, *p; 116 114 unsigned long next; 117 115 118 - if (pgd_none(*pgd)) { 116 + if 
(pgd_none(pgdp_get(pgd))) { 119 117 p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE); 120 118 set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE)); 121 119 } ··· 125 123 do { 126 124 next = p4d_addr_end(vaddr, end); 127 125 128 - if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) { 126 + if (p4d_none(p4dp_get(p4dp)) && IS_ALIGNED(vaddr, P4D_SIZE) && 127 + (next - vaddr) >= P4D_SIZE) { 129 128 phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE); 130 129 if (phys_addr) { 131 130 set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL)); ··· 148 145 do { 149 146 next = pgd_addr_end(vaddr, end); 150 147 151 - if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) && 148 + if (pgd_none(pgdp_get(pgdp)) && IS_ALIGNED(vaddr, PGDIR_SIZE) && 152 149 (next - vaddr) >= PGDIR_SIZE) { 153 150 phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE); 154 151 if (phys_addr) { ··· 171 168 if (!pgtable_l4_enabled) { 172 169 pudp = (pud_t *)p4dp; 173 170 } else { 174 - base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp))); 171 + base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(p4dp_get(p4dp)))); 175 172 pudp = base_pud + pud_index(vaddr); 176 173 } 177 174 ··· 196 193 if (!pgtable_l5_enabled) { 197 194 p4dp = (p4d_t *)pgdp; 198 195 } else { 199 - base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp))); 196 + base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(pgdp_get(pgdp)))); 200 197 p4dp = base_p4d + p4d_index(vaddr); 201 198 } 202 199 ··· 242 239 if (!pgtable_l4_enabled) { 243 240 pudp = (pud_t *)p4dp; 244 241 } else { 245 - base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp))); 242 + base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(p4dp_get(p4dp)))); 246 243 pudp = base_pud + pud_index(vaddr); 247 244 } 248 245 249 246 do { 250 247 next = pud_addr_end(vaddr, end); 251 248 252 - if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && 249 + if (pud_none(pudp_get(pudp)) && IS_ALIGNED(vaddr, PUD_SIZE) && 253 250 (next - 
vaddr) >= PUD_SIZE) { 254 251 phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd); 255 252 set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE)); ··· 280 277 if (!pgtable_l5_enabled) { 281 278 p4dp = (p4d_t *)pgdp; 282 279 } else { 283 - base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp))); 280 + base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(pgdp_get(pgdp)))); 284 281 p4dp = base_p4d + p4d_index(vaddr); 285 282 } 286 283 287 284 do { 288 285 next = p4d_addr_end(vaddr, end); 289 286 290 - if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && 287 + if (p4d_none(p4dp_get(p4dp)) && IS_ALIGNED(vaddr, P4D_SIZE) && 291 288 (next - vaddr) >= P4D_SIZE) { 292 289 phys_addr = __pa((uintptr_t)kasan_early_shadow_pud); 293 290 set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE)); ··· 308 305 do { 309 306 next = pgd_addr_end(vaddr, end); 310 307 311 - if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) && 308 + if (pgd_none(pgdp_get(pgdp)) && IS_ALIGNED(vaddr, PGDIR_SIZE) && 312 309 (next - vaddr) >= PGDIR_SIZE) { 313 310 phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d); 314 311 set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE)); ··· 384 381 do { 385 382 next = pud_addr_end(vaddr, end); 386 383 387 - if (pud_none(*pud_k)) { 384 + if (pud_none(pudp_get(pud_k))) { 388 385 p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); 389 386 set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE)); 390 387 continue; ··· 404 401 do { 405 402 next = p4d_addr_end(vaddr, end); 406 403 407 - if (p4d_none(*p4d_k)) { 404 + if (p4d_none(p4dp_get(p4d_k))) { 408 405 p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); 409 406 set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE)); 410 407 continue; ··· 423 420 do { 424 421 next = pgd_addr_end(vaddr, end); 425 422 426 - if (pgd_none(*pgd_k)) { 423 + if (pgd_none(pgdp_get(pgd_k))) { 427 424 p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); 428 425 set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE)); 429 426 continue; ··· 454 451 455 452 /* Copy the last 
p4d since it is shared with the kernel mapping. */ 456 453 if (pgtable_l5_enabled) { 457 - ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END)); 454 + ptr = (p4d_t *)pgd_page_vaddr(pgdp_get(pgd_offset_k(KASAN_SHADOW_END))); 458 455 memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D); 459 456 set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)], 460 457 pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE)); ··· 465 462 466 463 /* Copy the last pud since it is shared with the kernel mapping. */ 467 464 if (pgtable_l4_enabled) { 468 - ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END))); 465 + ptr = (pud_t *)p4d_page_vaddr(p4dp_get(base_p4d + p4d_index(KASAN_SHADOW_END))); 469 466 memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD); 470 467 set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)], 471 468 pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE));
+30 -25
arch/riscv/mm/pageattr.c
··· 29 29 static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr, 30 30 unsigned long next, struct mm_walk *walk) 31 31 { 32 - p4d_t val = READ_ONCE(*p4d); 32 + p4d_t val = p4dp_get(p4d); 33 33 34 34 if (p4d_leaf(val)) { 35 35 val = __p4d(set_pageattr_masks(p4d_val(val), walk)); ··· 42 42 static int pageattr_pud_entry(pud_t *pud, unsigned long addr, 43 43 unsigned long next, struct mm_walk *walk) 44 44 { 45 - pud_t val = READ_ONCE(*pud); 45 + pud_t val = pudp_get(pud); 46 46 47 47 if (pud_leaf(val)) { 48 48 val = __pud(set_pageattr_masks(pud_val(val), walk)); ··· 55 55 static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr, 56 56 unsigned long next, struct mm_walk *walk) 57 57 { 58 - pmd_t val = READ_ONCE(*pmd); 58 + pmd_t val = pmdp_get(pmd); 59 59 60 60 if (pmd_leaf(val)) { 61 61 val = __pmd(set_pageattr_masks(pmd_val(val), walk)); ··· 68 68 static int pageattr_pte_entry(pte_t *pte, unsigned long addr, 69 69 unsigned long next, struct mm_walk *walk) 70 70 { 71 - pte_t val = READ_ONCE(*pte); 71 + pte_t val = ptep_get(pte); 72 72 73 73 val = __pte(set_pageattr_masks(pte_val(val), walk)); 74 74 set_pte(pte, val); ··· 108 108 vaddr <= (vaddr & PMD_MASK) && end >= next) 109 109 continue; 110 110 111 - if (pmd_leaf(*pmdp)) { 111 + if (pmd_leaf(pmdp_get(pmdp))) { 112 112 struct page *pte_page; 113 - unsigned long pfn = _pmd_pfn(*pmdp); 114 - pgprot_t prot = __pgprot(pmd_val(*pmdp) & ~_PAGE_PFN_MASK); 113 + unsigned long pfn = _pmd_pfn(pmdp_get(pmdp)); 114 + pgprot_t prot = __pgprot(pmd_val(pmdp_get(pmdp)) & ~_PAGE_PFN_MASK); 115 115 pte_t *ptep_new; 116 116 int i; 117 117 ··· 148 148 vaddr <= (vaddr & PUD_MASK) && end >= next) 149 149 continue; 150 150 151 - if (pud_leaf(*pudp)) { 151 + if (pud_leaf(pudp_get(pudp))) { 152 152 struct page *pmd_page; 153 - unsigned long pfn = _pud_pfn(*pudp); 154 - pgprot_t prot = __pgprot(pud_val(*pudp) & ~_PAGE_PFN_MASK); 153 + unsigned long pfn = _pud_pfn(pudp_get(pudp)); 154 + pgprot_t prot = 
 		__pgprot(pud_val(pudp_get(pudp)) & ~_PAGE_PFN_MASK);
 	pmd_t *pmdp_new;
 	int i;
···
 		    vaddr <= (vaddr & P4D_MASK) && end >= next)
 			continue;

-	if (p4d_leaf(*p4dp)) {
+	if (p4d_leaf(p4dp_get(p4dp))) {
 		struct page *pud_page;
-		unsigned long pfn = _p4d_pfn(*p4dp);
-		pgprot_t prot = __pgprot(p4d_val(*p4dp) & ~_PAGE_PFN_MASK);
+		unsigned long pfn = _p4d_pfn(p4dp_get(p4dp));
+		pgprot_t prot = __pgprot(p4d_val(p4dp_get(p4dp)) & ~_PAGE_PFN_MASK);
 		pud_t *pudp_new;
 		int i;
···
 			goto unlock;
 		}
 	} else if (is_kernel_mapping(start) || is_linear_mapping(start)) {
-		lm_start = (unsigned long)lm_alias(start);
-		lm_end = (unsigned long)lm_alias(end);
+		if (is_kernel_mapping(start)) {
+			lm_start = (unsigned long)lm_alias(start);
+			lm_end = (unsigned long)lm_alias(end);
+		} else {
+			lm_start = start;
+			lm_end = end;
+		}

 		ret = split_linear_mapping(lm_start, lm_end);
 		if (ret)
···
 int set_direct_map_default_noflush(struct page *page)
 {
 	return __set_memory((unsigned long)page_address(page), 1,
-			    PAGE_KERNEL, __pgprot(0));
+			    PAGE_KERNEL, __pgprot(_PAGE_EXEC));
 }

 #ifdef CONFIG_DEBUG_PAGEALLOC
···
 	pte_t *pte;

 	pgd = pgd_offset_k(addr);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return false;
-	if (pgd_leaf(*pgd))
+	if (pgd_leaf(pgdp_get(pgd)))
 		return true;

 	p4d = p4d_offset(pgd, addr);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		return false;
-	if (p4d_leaf(*p4d))
+	if (p4d_leaf(p4dp_get(p4d)))
 		return true;

 	pud = pud_offset(p4d, addr);
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		return false;
-	if (pud_leaf(*pud))
+	if (pud_leaf(pudp_get(pud)))
 		return true;

 	pmd = pmd_offset(pud, addr);
-	if (!pmd_present(*pmd))
+	if (!pmd_present(pmdp_get(pmd)))
 		return false;
-	if (pmd_leaf(*pmd))
+	if (pmd_leaf(pmdp_get(pmd)))
 		return true;

 	pte = pte_offset_kernel(pmd, addr);
-	return pte_present(*pte);
+	return pte_present(ptep_get(pte));
 }
+46 -5
arch/riscv/mm/pgtable.c
···
 #include <linux/kernel.h>
 #include <linux/pgtable.h>

+int ptep_set_access_flags(struct vm_area_struct *vma,
+			  unsigned long address, pte_t *ptep,
+			  pte_t entry, int dirty)
+{
+	if (!pte_same(ptep_get(ptep), entry))
+		__set_pte_at(ptep, entry);
+	/*
+	 * update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
+int ptep_test_and_clear_young(struct vm_area_struct *vma,
+			      unsigned long address,
+			      pte_t *ptep)
+{
+	if (!pte_young(ptep_get(ptep)))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+EXPORT_SYMBOL_GPL(ptep_test_and_clear_young);
+
+#ifdef CONFIG_64BIT
+pud_t *pud_offset(p4d_t *p4d, unsigned long address)
+{
+	if (pgtable_l4_enabled)
+		return p4d_pgtable(p4dp_get(p4d)) + pud_index(address);
+
+	return (pud_t *)p4d;
+}
+
+p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+{
+	if (pgtable_l5_enabled)
+		return pgd_pgtable(pgdp_get(pgd)) + p4d_index(address);
+
+	return (p4d_t *)pgd;
+}
+#endif
+
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
 int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
 {
···
 int pud_clear_huge(pud_t *pud)
 {
-	if (!pud_leaf(READ_ONCE(*pud)))
+	if (!pud_leaf(pudp_get(pud)))
 		return 0;
 	pud_clear(pud);
 	return 1;
···
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd = pud_pgtable(*pud);
+	pmd_t *pmd = pud_pgtable(pudp_get(pud));
 	int i;

 	pud_clear(pud);
···
 int pmd_clear_huge(pmd_t *pmd)
 {
-	if (!pmd_leaf(READ_ONCE(*pmd)))
+	if (!pmd_leaf(pmdp_get(pmd)))
 		return 0;
 	pmd_clear(pmd);
 	return 1;
···
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 {
-	pte_t *pte = (pte_t *)pmd_page_vaddr(*pmd);
+	pte_t *pte = (pte_t *)pmd_page_vaddr(pmdp_get(pmd));

 	pmd_clear(pmd);
···
 	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
-	VM_BUG_ON(pmd_trans_huge(*pmdp));
+	VM_BUG_ON(pmd_trans_huge(pmdp_get(pmdp)));
 	/*
 	 * When leaf PTE entries (regular pages) are collapsed into a leaf
 	 * PMD entry (huge page), a valid non-leaf PTE is converted into a
+21
include/linux/pgtable.h
···
 }
 #endif

+#ifndef pudp_get
+static inline pud_t pudp_get(pud_t *pudp)
+{
+	return READ_ONCE(*pudp);
+}
+#endif
+
+#ifndef p4dp_get
+static inline p4d_t p4dp_get(p4d_t *p4dp)
+{
+	return READ_ONCE(*p4dp);
+}
+#endif
+
+#ifndef pgdp_get
+static inline pgd_t pgdp_get(pgd_t *pgdp)
+{
+	return READ_ONCE(*pgdp);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
+4 -1
tools/testing/selftests/riscv/hwprobe/Makefile
···
 CFLAGS += -I$(top_srcdir)/tools/include

-TEST_GEN_PROGS := hwprobe cbo
+TEST_GEN_PROGS := hwprobe cbo which-cpus

 include ../../lib.mk
···
 	$(CC) -static -o$@ $(CFLAGS) $(LDFLAGS) $^

 $(OUTPUT)/cbo: cbo.c sys_hwprobe.S
+	$(CC) -static -o$@ $(CFLAGS) $(LDFLAGS) $^
+
+$(OUTPUT)/which-cpus: which-cpus.c sys_hwprobe.S
 	$(CC) -static -o$@ $(CFLAGS) $(LDFLAGS) $^
+1 -1
tools/testing/selftests/riscv/hwprobe/hwprobe.c
···
 	ksft_test_result(out != 0, "Bad CPU set\n");

 	out = riscv_hwprobe(pairs, 8, 1, 0, 0);
-	ksft_test_result(out != 0, "NULL CPU set with non-zero count\n");
+	ksft_test_result(out != 0, "NULL CPU set with non-zero size\n");

 	pairs[0].key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR;
 	out = riscv_hwprobe(pairs, 1, 1, &cpus, 0);
+1 -1
tools/testing/selftests/riscv/hwprobe/hwprobe.h
···
  * contain the call.
  */
 long riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count,
-		   size_t cpu_count, unsigned long *cpus, unsigned int flags);
+		   size_t cpusetsize, unsigned long *cpus, unsigned int flags);

 #endif
+154
tools/testing/selftests/riscv/hwprobe/which-cpus.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2023 Ventana Micro Systems Inc.
+ *
+ * Test the RISCV_HWPROBE_WHICH_CPUS flag of hwprobe. Also provides a command
+ * line interface to get the cpu list for arbitrary hwprobe pairs.
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sched.h>
+#include <unistd.h>
+#include <assert.h>
+
+#include "hwprobe.h"
+#include "../../kselftest.h"
+
+static void help(void)
+{
+	printf("\n"
+	       "which-cpus: [-h] [<key=value> [<key=value> ...]]\n\n"
+	       "  Without parameters, tests the RISCV_HWPROBE_WHICH_CPUS flag of hwprobe.\n"
+	       "  With parameters, where each parameter is a hwprobe pair written as\n"
+	       "  <key=value>, outputs the cpulist for cpus which all match the given set\n"
+	       "  of pairs. 'key' and 'value' should be in numeric form, e.g. 4=0x3b\n");
+}
+
+static void print_cpulist(cpu_set_t *cpus)
+{
+	int start = 0, end = 0;
+
+	if (!CPU_COUNT(cpus)) {
+		printf("cpus: None\n");
+		return;
+	}
+
+	printf("cpus:");
+	for (int i = 0, c = 0; i < CPU_COUNT(cpus); i++, c++) {
+		if (start != end && !CPU_ISSET(c, cpus))
+			printf("-%d", end);
+
+		while (!CPU_ISSET(c, cpus))
+			++c;
+
+		if (i != 0 && c == end + 1) {
+			end = c;
+			continue;
+		}
+
+		printf("%c%d", i == 0 ? ' ' : ',', c);
+		start = end = c;
+	}
+	if (start != end)
+		printf("-%d", end);
+	printf("\n");
+}
+
+static void do_which_cpus(int argc, char **argv, cpu_set_t *cpus)
+{
+	struct riscv_hwprobe *pairs;
+	int nr_pairs = argc - 1;
+	char *start, *end;
+	int rc;
+
+	pairs = malloc(nr_pairs * sizeof(struct riscv_hwprobe));
+	assert(pairs);
+
+	for (int i = 0; i < nr_pairs; i++) {
+		start = argv[i + 1];
+		pairs[i].key = strtol(start, &end, 0);
+		assert(end != start && *end == '=');
+		start = end + 1;
+		pairs[i].value = strtoul(start, &end, 0);
+		assert(end != start && *end == '\0');
+	}
+
+	rc = riscv_hwprobe(pairs, nr_pairs, sizeof(cpu_set_t), (unsigned long *)cpus, RISCV_HWPROBE_WHICH_CPUS);
+	assert(rc == 0);
+	print_cpulist(cpus);
+	free(pairs);
+}
+
+int main(int argc, char **argv)
+{
+	struct riscv_hwprobe pairs[2];
+	cpu_set_t cpus_aff, cpus;
+	__u64 ext0_all;
+	long rc;
+
+	rc = sched_getaffinity(0, sizeof(cpu_set_t), &cpus_aff);
+	assert(rc == 0);
+
+	if (argc > 1) {
+		if (!strcmp(argv[1], "-h"))
+			help();
+		else
+			do_which_cpus(argc, argv, &cpus_aff);
+		return 0;
+	}
+
+	ksft_print_header();
+	ksft_set_plan(7);
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, };
+	rc = riscv_hwprobe(pairs, 1, 0, NULL, 0);
+	assert(rc == 0 && pairs[0].key == RISCV_HWPROBE_KEY_BASE_BEHAVIOR &&
+	       pairs[0].value == RISCV_HWPROBE_BASE_BEHAVIOR_IMA);
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_IMA_EXT_0, };
+	rc = riscv_hwprobe(pairs, 1, 0, NULL, 0);
+	assert(rc == 0 && pairs[0].key == RISCV_HWPROBE_KEY_IMA_EXT_0);
+	ext0_all = pairs[0].value;
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	CPU_ZERO(&cpus);
+	rc = riscv_hwprobe(pairs, 1, 0, (unsigned long *)&cpus, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == -EINVAL, "no cpusetsize\n");
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	rc = riscv_hwprobe(pairs, 1, sizeof(cpu_set_t), NULL, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == -EINVAL, "NULL cpus\n");
+
+	pairs[0] = (struct riscv_hwprobe){ .key = 0xbadc0de, };
+	CPU_ZERO(&cpus);
+	rc = riscv_hwprobe(pairs, 1, sizeof(cpu_set_t), (unsigned long *)&cpus, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == 0 && CPU_COUNT(&cpus) == 0, "unknown key\n");
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	pairs[1] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	CPU_ZERO(&cpus);
+	rc = riscv_hwprobe(pairs, 2, sizeof(cpu_set_t), (unsigned long *)&cpus, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == 0, "duplicate keys\n");
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	pairs[1] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_IMA_EXT_0, .value = ext0_all, };
+	CPU_ZERO(&cpus);
+	rc = riscv_hwprobe(pairs, 2, sizeof(cpu_set_t), (unsigned long *)&cpus, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == 0 && CPU_COUNT(&cpus) == sysconf(_SC_NPROCESSORS_ONLN), "set all cpus\n");
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	pairs[1] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_IMA_EXT_0, .value = ext0_all, };
+	memcpy(&cpus, &cpus_aff, sizeof(cpu_set_t));
+	rc = riscv_hwprobe(pairs, 2, sizeof(cpu_set_t), (unsigned long *)&cpus, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == 0 && CPU_EQUAL(&cpus, &cpus_aff), "set all affinity cpus\n");
+
+	pairs[0] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR, .value = RISCV_HWPROBE_BASE_BEHAVIOR_IMA, };
+	pairs[1] = (struct riscv_hwprobe){ .key = RISCV_HWPROBE_KEY_IMA_EXT_0, .value = ~ext0_all, };
+	memcpy(&cpus, &cpus_aff, sizeof(cpu_set_t));
+	rc = riscv_hwprobe(pairs, 2, sizeof(cpu_set_t), (unsigned long *)&cpus, RISCV_HWPROBE_WHICH_CPUS);
+	ksft_test_result(rc == 0 && CPU_COUNT(&cpus) == 0, "clear all cpus\n");
+
+	ksft_finished();
+}
+1 -9
tools/testing/selftests/riscv/vector/vstate_prctl.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 #include <sys/prctl.h>
 #include <unistd.h>
-#include <asm/hwprobe.h>
 #include <errno.h>
 #include <sys/wait.h>

+#include "../hwprobe/hwprobe.h"
 #include "../../kselftest.h"
-
-/*
- * Rather than relying on having a new enough libc to define this, just do it
- * ourselves. This way we don't need to be coupled to a new-enough libc to
- * contain the call.
- */
-long riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count,
-		   size_t cpu_count, unsigned long *cpus, unsigned int flags);

 #define NEXT_PROGRAM "./vstate_exec_nolibc"
 static int launch_test(int test_inherit)