Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

RISC-V CPU Idle Support

This series adds RISC-V CPU Idle support using the SBI HSM suspend function.
The RISC-V SBI CPU idle driver added by this series is heavily inspired
by the ARM PSCI CPU idle driver.

Special thanks to Sandeep Tripathy for providing early feedback on SBI HSM
support in all the above projects (RISC-V SBI specification, OpenSBI, and
Linux RISC-V).

* palmer/riscv-idle:
RISC-V: Enable RISC-V SBI CPU Idle driver for QEMU virt machine
dt-bindings: Add common bindings for ARM and RISC-V idle states
cpuidle: Add RISC-V SBI CPU idle driver
cpuidle: Factor-out power domain related code from PSCI domain driver
RISC-V: Add SBI HSM suspend related defines
RISC-V: Add arch functions for non-retentive suspend entry/exit
RISC-V: Rename relocate() and make it global
RISC-V: Enable CPU_IDLE drivers

+1458 -178
+211 -17
Documentation/devicetree/bindings/arm/idle-states.yaml Documentation/devicetree/bindings/cpu/idle-states.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 %YAML 1.2 3 3 --- 4 - $id: http://devicetree.org/schemas/arm/idle-states.yaml# 4 + $id: http://devicetree.org/schemas/cpu/idle-states.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 - title: ARM idle states binding description 7 + title: Idle states binding description 8 8 9 9 maintainers: 10 10 - Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> 11 + - Anup Patel <anup@brainfault.org> 11 12 12 13 description: |+ 13 14 ========================================== 14 15 1 - Introduction 15 16 ========================================== 16 17 17 - ARM systems contain HW capable of managing power consumption dynamically, 18 - where cores can be put in different low-power states (ranging from simple wfi 19 - to power gating) according to OS PM policies. The CPU states representing the 20 - range of dynamic idle states that a processor can enter at run-time, can be 21 - specified through device tree bindings representing the parameters required to 22 - enter/exit specific idle states on a given processor. 18 + ARM and RISC-V systems contain HW capable of managing power consumption 19 + dynamically, where cores can be put in different low-power states (ranging 20 + from simple wfi to power gating) according to OS PM policies. The CPU states 21 + representing the range of dynamic idle states that a processor can enter at 22 + run-time, can be specified through device tree bindings representing the 23 + parameters required to enter/exit specific idle states on a given processor. 24 + 25 + ========================================== 26 + 2 - ARM idle states 27 + ========================================== 23 28 24 29 According to the Server Base System Architecture document (SBSA, [3]), the 25 30 power states an ARM CPU can be put into are identified by the following list: ··· 48 43 The device tree binding definition for ARM idle states is the subject of this 49 44 document. 
50 45 46 + ========================================== 47 + 3 - RISC-V idle states 48 + ========================================== 49 + 50 + On RISC-V systems, the HARTs (or CPUs) [6] can be put in platform specific 51 + suspend (or idle) states (ranging from simple WFI to power gating). The 52 + RISC-V SBI v0.3 (or higher) [7] hart state management extension provides a 53 + standard mechanism for the OS to request HART state transitions. 54 + 55 + The platform specific suspend (or idle) states of a hart can be either 56 + retentive or non-retentive in nature. A retentive suspend state will 57 + preserve HART registers and CSR values for all privilege modes whereas 58 + a non-retentive suspend state will not preserve HART registers and CSR 59 + values. 60 + 51 61 =========================================== 52 - 2 - idle-states definitions 62 + 4 - idle-states definitions 53 63 =========================================== 54 64 55 65 Idle states are characterized for a specific system through a set of ··· 231 211 properties specification that is the subject of the following sections. 232 212 233 213 =========================================== 234 - 3 - idle-states node 214 + 5 - idle-states node 235 215 =========================================== 236 216 237 - ARM processor idle states are defined within the idle-states node, which is 217 + The processor idle states are defined within the idle-states node, which is 238 218 a direct child of the cpus node [1] and provides a container where the 239 219 processor idle states, defined as device tree nodes, are listed. ··· 243 223 just supports idle_standby, an idle-states node is not required.
244 224 245 225 =========================================== 246 - 4 - References 226 + 6 - References 247 227 =========================================== 248 228 249 229 [1] ARM Linux Kernel documentation - CPUs bindings ··· 258 238 [4] ARM Architecture Reference Manuals 259 239 http://infocenter.arm.com/help/index.jsp 260 240 261 - [6] ARM Linux Kernel documentation - Booting AArch64 Linux 241 + [5] ARM Linux Kernel documentation - Booting AArch64 Linux 262 242 Documentation/arm64/booting.rst 243 + 244 + [6] RISC-V Linux Kernel documentation - CPUs bindings 245 + Documentation/devicetree/bindings/riscv/cpus.yaml 246 + 247 + [7] RISC-V Supervisor Binary Interface (SBI) 248 + http://github.com/riscv/riscv-sbi-doc/riscv-sbi.adoc 263 249 264 250 properties: 265 251 $nodename: ··· 279 253 On ARM 32-bit systems this property is optional 280 254 281 255 This assumes that the "enable-method" property is set to "psci" in the cpu 282 - node[6] that is responsible for setting up CPU idle management in the OS 256 + node[5] that is responsible for setting up CPU idle management in the OS 283 257 implementation. 284 258 const: psci 285 259 ··· 291 265 as follows. 292 266 293 267 The idle state entered by executing the wfi instruction (idle_standby 294 - SBSA,[3][4]) is considered standard on all ARM platforms and therefore 295 - must not be listed. 268 + SBSA,[3][4]) is considered standard on all ARM and RISC-V platforms and 269 + therefore must not be listed. 296 270 297 271 In addition to the properties listed above, a state node may require 298 272 additional properties specific to the entry-method defined in the ··· 301 275 302 276 properties: 303 277 compatible: 304 - const: arm,idle-state 278 + enum: 279 + - arm,idle-state 280 + - riscv,idle-state 281 + 282 + arm,psci-suspend-param: 283 + $ref: /schemas/types.yaml#/definitions/uint32 284 + description: | 285 + power_state parameter to pass to the ARM PSCI suspend call. 
286 + 287 + Device tree nodes that require usage of PSCI CPU_SUSPEND function 288 + (i.e. idle states node with entry-method property is set to "psci") 289 + must specify this property. 290 + 291 + riscv,sbi-suspend-param: 292 + $ref: /schemas/types.yaml#/definitions/uint32 293 + description: | 294 + suspend_type parameter to pass to the RISC-V SBI HSM suspend call. 295 + 296 + This property is required in idle state nodes of device tree meant 297 + for RISC-V systems. For more details on the suspend_type parameter 298 + refer to the SBI specification v0.3 (or higher) [7]. 305 299 306 300 local-timer-stop: 307 301 description: ··· 362 316 $ref: /schemas/types.yaml#/definitions/string 363 317 description: 364 318 A string used as a descriptive name for the idle state. 319 + 320 + additionalProperties: false 365 321 366 322 required: 367 323 - compatible ··· 702 654 exit-latency-us = <2000>; 703 655 min-residency-us = <6500>; 704 656 wakeup-latency-us = <2300>; 657 + }; 658 + }; 659 + }; 660 + 661 + - | 662 + // Example 3 (RISC-V 64-bit, 4-cpu systems, two clusters): 663 + 664 + cpus { 665 + #size-cells = <0>; 666 + #address-cells = <1>; 667 + 668 + cpu@0 { 669 + device_type = "cpu"; 670 + compatible = "riscv"; 671 + reg = <0x0>; 672 + riscv,isa = "rv64imafdc"; 673 + mmu-type = "riscv,sv48"; 674 + cpu-idle-states = <&CPU_RET_0_0 &CPU_NONRET_0_0 675 + &CLUSTER_RET_0 &CLUSTER_NONRET_0>; 676 + 677 + cpu_intc0: interrupt-controller { 678 + #interrupt-cells = <1>; 679 + compatible = "riscv,cpu-intc"; 680 + interrupt-controller; 681 + }; 682 + }; 683 + 684 + cpu@1 { 685 + device_type = "cpu"; 686 + compatible = "riscv"; 687 + reg = <0x1>; 688 + riscv,isa = "rv64imafdc"; 689 + mmu-type = "riscv,sv48"; 690 + cpu-idle-states = <&CPU_RET_0_0 &CPU_NONRET_0_0 691 + &CLUSTER_RET_0 &CLUSTER_NONRET_0>; 692 + 693 + cpu_intc1: interrupt-controller { 694 + #interrupt-cells = <1>; 695 + compatible = "riscv,cpu-intc"; 696 + interrupt-controller; 697 + }; 698 + }; 699 + 700 + cpu@10 { 701 +
device_type = "cpu"; 702 + compatible = "riscv"; 703 + reg = <0x10>; 704 + riscv,isa = "rv64imafdc"; 705 + mmu-type = "riscv,sv48"; 706 + cpu-idle-states = <&CPU_RET_1_0 &CPU_NONRET_1_0 707 + &CLUSTER_RET_1 &CLUSTER_NONRET_1>; 708 + 709 + cpu_intc10: interrupt-controller { 710 + #interrupt-cells = <1>; 711 + compatible = "riscv,cpu-intc"; 712 + interrupt-controller; 713 + }; 714 + }; 715 + 716 + cpu@11 { 717 + device_type = "cpu"; 718 + compatible = "riscv"; 719 + reg = <0x11>; 720 + riscv,isa = "rv64imafdc"; 721 + mmu-type = "riscv,sv48"; 722 + cpu-idle-states = <&CPU_RET_1_0 &CPU_NONRET_1_0 723 + &CLUSTER_RET_1 &CLUSTER_NONRET_1>; 724 + 725 + cpu_intc11: interrupt-controller { 726 + #interrupt-cells = <1>; 727 + compatible = "riscv,cpu-intc"; 728 + interrupt-controller; 729 + }; 730 + }; 731 + 732 + idle-states { 733 + CPU_RET_0_0: cpu-retentive-0-0 { 734 + compatible = "riscv,idle-state"; 735 + riscv,sbi-suspend-param = <0x10000000>; 736 + entry-latency-us = <20>; 737 + exit-latency-us = <40>; 738 + min-residency-us = <80>; 739 + }; 740 + 741 + CPU_NONRET_0_0: cpu-nonretentive-0-0 { 742 + compatible = "riscv,idle-state"; 743 + riscv,sbi-suspend-param = <0x90000000>; 744 + entry-latency-us = <250>; 745 + exit-latency-us = <500>; 746 + min-residency-us = <950>; 747 + }; 748 + 749 + CLUSTER_RET_0: cluster-retentive-0 { 750 + compatible = "riscv,idle-state"; 751 + riscv,sbi-suspend-param = <0x11000000>; 752 + local-timer-stop; 753 + entry-latency-us = <50>; 754 + exit-latency-us = <100>; 755 + min-residency-us = <250>; 756 + wakeup-latency-us = <130>; 757 + }; 758 + 759 + CLUSTER_NONRET_0: cluster-nonretentive-0 { 760 + compatible = "riscv,idle-state"; 761 + riscv,sbi-suspend-param = <0x91000000>; 762 + local-timer-stop; 763 + entry-latency-us = <600>; 764 + exit-latency-us = <1100>; 765 + min-residency-us = <2700>; 766 + wakeup-latency-us = <1500>; 767 + }; 768 + 769 + CPU_RET_1_0: cpu-retentive-1-0 { 770 + compatible = "riscv,idle-state"; 771 + 
riscv,sbi-suspend-param = <0x10000010>; 772 + entry-latency-us = <20>; 773 + exit-latency-us = <40>; 774 + min-residency-us = <80>; 775 + }; 776 + 777 + CPU_NONRET_1_0: cpu-nonretentive-1-0 { 778 + compatible = "riscv,idle-state"; 779 + riscv,sbi-suspend-param = <0x90000010>; 780 + entry-latency-us = <250>; 781 + exit-latency-us = <500>; 782 + min-residency-us = <950>; 783 + }; 784 + 785 + CLUSTER_RET_1: cluster-retentive-1 { 786 + compatible = "riscv,idle-state"; 787 + riscv,sbi-suspend-param = <0x11000010>; 788 + local-timer-stop; 789 + entry-latency-us = <50>; 790 + exit-latency-us = <100>; 791 + min-residency-us = <250>; 792 + wakeup-latency-us = <130>; 793 + }; 794 + 795 + CLUSTER_NONRET_1: cluster-nonretentive-1 { 796 + compatible = "riscv,idle-state"; 797 + riscv,sbi-suspend-param = <0x91000010>; 798 + local-timer-stop; 799 + entry-latency-us = <600>; 800 + exit-latency-us = <1100>; 801 + min-residency-us = <2700>; 802 + wakeup-latency-us = <1500>; 705 803 }; 706 804 }; 707 805 };
+1 -1
Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt
··· 81 81 }; 82 82 }; 83 83 84 - [1]. Documentation/devicetree/bindings/arm/idle-states.yaml 84 + [1]. Documentation/devicetree/bindings/cpu/idle-states.yaml
+1 -1
Documentation/devicetree/bindings/arm/psci.yaml
··· 101 101 bindings in [1]) must specify this property. 102 102 103 103 [1] Kernel documentation - ARM idle states bindings 104 - Documentation/devicetree/bindings/arm/idle-states.yaml 104 + Documentation/devicetree/bindings/cpu/idle-states.yaml 105 105 106 106 patternProperties: 107 107 "^power-domain-":
+6
Documentation/devicetree/bindings/riscv/cpus.yaml
··· 99 99 - compatible 100 100 - interrupt-controller 101 101 102 + cpu-idle-states: 103 + $ref: '/schemas/types.yaml#/definitions/phandle-array' 104 + description: | 105 + List of phandles to idle state nodes supported 106 + by this hart (see ./idle-states.yaml). 107 + 102 108 required: 103 109 - riscv,isa 104 110 - interrupt-controller
+14
MAINTAINERS
··· 5069 5069 F: drivers/cpuidle/cpuidle-psci.h 5070 5070 F: drivers/cpuidle/cpuidle-psci-domain.c 5071 5071 5072 + CPUIDLE DRIVER - DT IDLE PM DOMAIN 5073 + M: Ulf Hansson <ulf.hansson@linaro.org> 5074 + L: linux-pm@vger.kernel.org 5075 + S: Supported 5076 + F: drivers/cpuidle/dt_idle_genpd.c 5077 + F: drivers/cpuidle/dt_idle_genpd.h 5078 + 5079 + CPUIDLE DRIVER - RISC-V SBI 5080 + M: Anup Patel <anup@brainfault.org> 5081 + L: linux-pm@vger.kernel.org 5082 + L: linux-riscv@lists.infradead.org 5083 + S: Maintained 5084 + F: drivers/cpuidle/cpuidle-riscv-sbi.c 5085 + 5072 5086 CRAMFS FILESYSTEM 5073 5087 M: Nicolas Pitre <nico@fluxnic.net> 5074 5088 S: Maintained
+7
arch/riscv/Kconfig
··· 48 48 select CLONE_BACKWARDS 49 49 select CLINT_TIMER if !MMU 50 50 select COMMON_CLK 51 + select CPU_PM if CPU_IDLE 51 52 select EDAC_SUPPORT 52 53 select GENERIC_ARCH_TOPOLOGY if SMP 53 54 select GENERIC_ATOMIC64 if !64BIT ··· 532 531 menu "Power management options" 533 532 534 533 source "kernel/power/Kconfig" 534 + 535 + endmenu 536 + 537 + menu "CPU Power Management" 538 + 539 + source "drivers/cpuidle/Kconfig" 535 540 536 541 endmenu 537 542
+3
arch/riscv/Kconfig.socs
··· 36 36 select GOLDFISH 37 37 select RTC_DRV_GOLDFISH if RTC_CLASS 38 38 select SIFIVE_PLIC 39 + select PM_GENERIC_DOMAINS if PM 40 + select PM_GENERIC_DOMAINS_OF if PM && OF 41 + select RISCV_SBI_CPUIDLE if CPU_IDLE 39 42 help 40 43 This enables support for QEMU Virt Machine. 41 44
+2
arch/riscv/configs/defconfig
··· 20 20 CONFIG_SOC_VIRT=y 21 21 CONFIG_SMP=y 22 22 CONFIG_HOTPLUG_CPU=y 23 + CONFIG_PM=y 24 + CONFIG_CPU_IDLE=y 23 25 CONFIG_VIRTUALIZATION=y 24 26 CONFIG_KVM=m 25 27 CONFIG_JUMP_LABEL=y
+2
arch/riscv/configs/rv32_defconfig
··· 20 20 CONFIG_ARCH_RV32I=y 21 21 CONFIG_SMP=y 22 22 CONFIG_HOTPLUG_CPU=y 23 + CONFIG_PM=y 24 + CONFIG_CPU_IDLE=y 23 25 CONFIG_VIRTUALIZATION=y 24 26 CONFIG_KVM=m 25 27 CONFIG_JUMP_LABEL=y
+26
arch/riscv/include/asm/asm.h
··· 67 67 #error "Unexpected __SIZEOF_SHORT__" 68 68 #endif 69 69 70 + #ifdef __ASSEMBLY__ 71 + 72 + /* Common assembly source macros */ 73 + 74 + #ifdef CONFIG_XIP_KERNEL 75 + .macro XIP_FIXUP_OFFSET reg 76 + REG_L t0, _xip_fixup 77 + add \reg, \reg, t0 78 + .endm 79 + .macro XIP_FIXUP_FLASH_OFFSET reg 80 + la t1, __data_loc 81 + REG_L t1, _xip_phys_offset 82 + sub \reg, \reg, t1 83 + add \reg, \reg, t0 84 + .endm 85 + _xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET 86 + _xip_phys_offset: .dword CONFIG_XIP_PHYS_ADDR + XIP_OFFSET 87 + #else 88 + .macro XIP_FIXUP_OFFSET reg 89 + .endm 90 + .macro XIP_FIXUP_FLASH_OFFSET reg 91 + .endm 92 + #endif /* CONFIG_XIP_KERNEL */ 93 + 94 + #endif /* __ASSEMBLY__ */ 95 + 70 96 #endif /* _ASM_RISCV_ASM_H */
+24
arch/riscv/include/asm/cpuidle.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2021 Allwinner Ltd 4 + * Copyright (C) 2021 Western Digital Corporation or its affiliates. 5 + */ 6 + 7 + #ifndef _ASM_RISCV_CPUIDLE_H 8 + #define _ASM_RISCV_CPUIDLE_H 9 + 10 + #include <asm/barrier.h> 11 + #include <asm/processor.h> 12 + 13 + static inline void cpu_do_idle(void) 14 + { 15 + /* 16 + * Add mb() here to ensure that all 17 + * IO/MEM accesses are completed prior 18 + * to entering WFI. 19 + */ 20 + mb(); 21 + wait_for_interrupt(); 22 + } 23 + 24 + #endif
+36
arch/riscv/include/asm/suspend.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (c) 2021 Western Digital Corporation or its affiliates. 4 + * Copyright (c) 2022 Ventana Micro Systems Inc. 5 + */ 6 + 7 + #ifndef _ASM_RISCV_SUSPEND_H 8 + #define _ASM_RISCV_SUSPEND_H 9 + 10 + #include <asm/ptrace.h> 11 + 12 + struct suspend_context { 13 + /* Saved and restored by low-level functions */ 14 + struct pt_regs regs; 15 + /* Saved and restored by high-level functions */ 16 + unsigned long scratch; 17 + unsigned long tvec; 18 + unsigned long ie; 19 + #ifdef CONFIG_MMU 20 + unsigned long satp; 21 + #endif 22 + }; 23 + 24 + /* Low-level CPU suspend entry function */ 25 + int __cpu_suspend_enter(struct suspend_context *context); 26 + 27 + /* High-level CPU suspend which will save context and call finish() */ 28 + int cpu_suspend(unsigned long arg, 29 + int (*finish)(unsigned long arg, 30 + unsigned long entry, 31 + unsigned long context)); 32 + 33 + /* Low-level CPU resume entry function */ 34 + int __cpu_resume_enter(unsigned long hartid, unsigned long context); 35 + 36 + #endif
+2
arch/riscv/kernel/Makefile
··· 48 48 obj-$(CONFIG_MODULES) += module.o 49 49 obj-$(CONFIG_MODULE_SECTIONS) += module-sections.o 50 50 51 + obj-$(CONFIG_CPU_PM) += suspend_entry.o suspend.o 52 + 51 53 obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o 52 54 obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o 53 55
+3
arch/riscv/kernel/asm-offsets.c
··· 13 13 #include <asm/thread_info.h> 14 14 #include <asm/ptrace.h> 15 15 #include <asm/cpu_ops_sbi.h> 16 + #include <asm/suspend.h> 16 17 17 18 void asm_offsets(void); 18 19 ··· 113 112 OFFSET(PT_STATUS, pt_regs, status); 114 113 OFFSET(PT_BADADDR, pt_regs, badaddr); 115 114 OFFSET(PT_CAUSE, pt_regs, cause); 115 + 116 + OFFSET(SUSPEND_CONTEXT_REGS, suspend_context, regs); 116 117 117 118 OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero); 118 119 OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
+4 -23
arch/riscv/kernel/head.S
··· 16 16 #include <asm/image.h> 17 17 #include "efi-header.S" 18 18 19 - #ifdef CONFIG_XIP_KERNEL 20 - .macro XIP_FIXUP_OFFSET reg 21 - REG_L t0, _xip_fixup 22 - add \reg, \reg, t0 23 - .endm 24 - .macro XIP_FIXUP_FLASH_OFFSET reg 25 - la t0, __data_loc 26 - REG_L t1, _xip_phys_offset 27 - sub \reg, \reg, t1 28 - add \reg, \reg, t0 29 - .endm 30 - _xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET 31 - _xip_phys_offset: .dword CONFIG_XIP_PHYS_ADDR + XIP_OFFSET 32 - #else 33 - .macro XIP_FIXUP_OFFSET reg 34 - .endm 35 - .macro XIP_FIXUP_FLASH_OFFSET reg 36 - .endm 37 - #endif /* CONFIG_XIP_KERNEL */ 38 - 39 19 __HEAD 40 20 ENTRY(_start) 41 21 /* ··· 69 89 70 90 .align 2 71 91 #ifdef CONFIG_MMU 72 - relocate: 92 + .global relocate_enable_mmu 93 + relocate_enable_mmu: 73 94 /* Relocate return address */ 74 95 la a1, kernel_map 75 96 XIP_FIXUP_OFFSET a1 ··· 165 184 /* Enable virtual memory and relocate to virtual address */ 166 185 la a0, swapper_pg_dir 167 186 XIP_FIXUP_OFFSET a0 168 - call relocate 187 + call relocate_enable_mmu 169 188 #endif 170 189 call setup_trap_vector 171 190 tail smp_callin ··· 309 328 #ifdef CONFIG_MMU 310 329 la a0, early_pg_dir 311 330 XIP_FIXUP_OFFSET a0 312 - call relocate 331 + call relocate_enable_mmu 313 332 #endif /* CONFIG_MMU */ 314 333 315 334 call setup_trap_vector
+2 -1
arch/riscv/kernel/process.c
··· 23 23 #include <asm/string.h> 24 24 #include <asm/switch_to.h> 25 25 #include <asm/thread_info.h> 26 + #include <asm/cpuidle.h> 26 27 27 28 register unsigned long gp_in_global __asm__("gp"); 28 29 ··· 38 37 39 38 void arch_cpu_idle(void) 40 39 { 41 - wait_for_interrupt(); 40 + cpu_do_idle(); 42 41 raw_local_irq_enable(); 43 42 } 44 43
+87
arch/riscv/kernel/suspend.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2021 Western Digital Corporation or its affiliates. 4 + * Copyright (c) 2022 Ventana Micro Systems Inc. 5 + */ 6 + 7 + #include <linux/ftrace.h> 8 + #include <asm/csr.h> 9 + #include <asm/suspend.h> 10 + 11 + static void suspend_save_csrs(struct suspend_context *context) 12 + { 13 + context->scratch = csr_read(CSR_SCRATCH); 14 + context->tvec = csr_read(CSR_TVEC); 15 + context->ie = csr_read(CSR_IE); 16 + 17 + /* 18 + * No need to save/restore IP CSR (i.e. MIP or SIP) because: 19 + * 20 + * 1. For no-MMU (M-mode) kernel, the bits in MIP are set by 21 + * external devices (such as interrupt controller, timer, etc). 22 + * 2. For MMU (S-mode) kernel, the bits in SIP are set by 23 + * M-mode firmware and external devices (such as interrupt 24 + * controller, etc). 25 + */ 26 + 27 + #ifdef CONFIG_MMU 28 + context->satp = csr_read(CSR_SATP); 29 + #endif 30 + } 31 + 32 + static void suspend_restore_csrs(struct suspend_context *context) 33 + { 34 + csr_write(CSR_SCRATCH, context->scratch); 35 + csr_write(CSR_TVEC, context->tvec); 36 + csr_write(CSR_IE, context->ie); 37 + 38 + #ifdef CONFIG_MMU 39 + csr_write(CSR_SATP, context->satp); 40 + #endif 41 + } 42 + 43 + int cpu_suspend(unsigned long arg, 44 + int (*finish)(unsigned long arg, 45 + unsigned long entry, 46 + unsigned long context)) 47 + { 48 + int rc = 0; 49 + struct suspend_context context = { 0 }; 50 + 51 + /* Finisher should be non-NULL */ 52 + if (!finish) 53 + return -EINVAL; 54 + 55 + /* Save additional CSRs */ 56 + suspend_save_csrs(&context); 57 + 58 + /* 59 + * Function graph tracer state gets inconsistent when the kernel 60 + * calls functions that never return (aka finishers) hence disable 61 + * graph tracing during their execution.
62 + */ 63 + pause_graph_tracing(); 64 + 65 + /* Save context on stack */ 66 + if (__cpu_suspend_enter(&context)) { 67 + /* Call the finisher */ 68 + rc = finish(arg, __pa_symbol(__cpu_resume_enter), 69 + (ulong)&context); 70 + 71 + /* 72 + * Should never reach here, unless the suspend finisher 73 + * fails. Successful cpu_suspend() should return from 74 + * __cpu_resume_enter() 75 + */ 76 + if (!rc) 77 + rc = -EOPNOTSUPP; 78 + } 79 + 80 + /* Enable function graph tracer */ 81 + unpause_graph_tracing(); 82 + 83 + /* Restore additional CSRs */ 84 + suspend_restore_csrs(&context); 85 + 86 + return rc; 87 + }
+124
arch/riscv/kernel/suspend_entry.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (c) 2021 Western Digital Corporation or its affiliates. 4 + * Copyright (c) 2022 Ventana Micro Systems Inc. 5 + */ 6 + 7 + #include <linux/linkage.h> 8 + #include <asm/asm.h> 9 + #include <asm/asm-offsets.h> 10 + #include <asm/csr.h> 11 + 12 + .text 13 + .altmacro 14 + .option norelax 15 + 16 + ENTRY(__cpu_suspend_enter) 17 + /* Save registers (except A0 and T0-T6) */ 18 + REG_S ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0) 19 + REG_S sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0) 20 + REG_S gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0) 21 + REG_S tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0) 22 + REG_S s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0) 23 + REG_S s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0) 24 + REG_S a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0) 25 + REG_S a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0) 26 + REG_S a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0) 27 + REG_S a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0) 28 + REG_S a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0) 29 + REG_S a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0) 30 + REG_S a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0) 31 + REG_S s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0) 32 + REG_S s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0) 33 + REG_S s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0) 34 + REG_S s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0) 35 + REG_S s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0) 36 + REG_S s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0) 37 + REG_S s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0) 38 + REG_S s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0) 39 + REG_S s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0) 40 + REG_S s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0) 41 + 42 + /* Save CSRs */ 43 + csrr t0, CSR_EPC 44 + REG_S t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0) 45 + csrr t0, CSR_STATUS 46 + REG_S t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0) 47 + csrr t0, CSR_TVAL 48 + REG_S t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0) 49 + csrr t0, CSR_CAUSE 50 + REG_S t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0) 51 + 52 + /* Return non-zero value */ 53 + li a0, 1 54 + 55 + /* 
Return to C code */ 56 + ret 57 + END(__cpu_suspend_enter) 58 + 59 + ENTRY(__cpu_resume_enter) 60 + /* Load the global pointer */ 61 + .option push 62 + .option norelax 63 + la gp, __global_pointer$ 64 + .option pop 65 + 66 + #ifdef CONFIG_MMU 67 + /* Save A0 and A1 */ 68 + add t0, a0, zero 69 + add t1, a1, zero 70 + 71 + /* Enable MMU */ 72 + la a0, swapper_pg_dir 73 + XIP_FIXUP_OFFSET a0 74 + call relocate_enable_mmu 75 + 76 + /* Restore A0 and A1 */ 77 + add a0, t0, zero 78 + add a1, t1, zero 79 + #endif 80 + 81 + /* Make A0 point to suspend context */ 82 + add a0, a1, zero 83 + 84 + /* Restore CSRs */ 85 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0) 86 + csrw CSR_EPC, t0 87 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0) 88 + csrw CSR_STATUS, t0 89 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0) 90 + csrw CSR_TVAL, t0 91 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0) 92 + csrw CSR_CAUSE, t0 93 + 94 + /* Restore registers (except A0 and T0-T6) */ 95 + REG_L ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0) 96 + REG_L sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0) 97 + REG_L gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0) 98 + REG_L tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0) 99 + REG_L s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0) 100 + REG_L s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0) 101 + REG_L a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0) 102 + REG_L a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0) 103 + REG_L a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0) 104 + REG_L a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0) 105 + REG_L a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0) 106 + REG_L a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0) 107 + REG_L a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0) 108 + REG_L s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0) 109 + REG_L s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0) 110 + REG_L s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0) 111 + REG_L s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0) 112 + REG_L s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0) 113 + REG_L s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0) 114 + REG_L s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0) 115 + 
REG_L s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0) 116 + REG_L s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0) 117 + REG_L s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0) 118 + 119 + /* Return zero value */ 120 + add a0, zero, zero 121 + 122 + /* Return to C code */ 123 + ret 124 + END(__cpu_resume_enter)
+9
drivers/cpuidle/Kconfig
··· 47 47 config DT_IDLE_STATES 48 48 bool 49 49 50 + config DT_IDLE_GENPD 51 + depends on PM_GENERIC_DOMAINS_OF 52 + bool 53 + 50 54 menu "ARM CPU Idle Drivers" 51 55 depends on ARM || ARM64 52 56 source "drivers/cpuidle/Kconfig.arm" ··· 64 60 menu "POWERPC CPU Idle Drivers" 65 61 depends on PPC 66 62 source "drivers/cpuidle/Kconfig.powerpc" 63 + endmenu 64 + 65 + menu "RISC-V CPU Idle Drivers" 66 + depends on RISCV 67 + source "drivers/cpuidle/Kconfig.riscv" 67 68 endmenu 68 69 69 70 config HALTPOLL_CPUIDLE
+1
drivers/cpuidle/Kconfig.arm
··· 27 27 bool "PSCI CPU idle Domain" 28 28 depends on ARM_PSCI_CPUIDLE 29 29 depends on PM_GENERIC_DOMAINS_OF 30 + select DT_IDLE_GENPD 30 31 default y 31 32 help 32 33 Select this to enable the PSCI based CPUidle driver to use PM domains,
+15
drivers/cpuidle/Kconfig.riscv
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + # 3 + # RISC-V CPU Idle drivers 4 + # 5 + 6 + config RISCV_SBI_CPUIDLE 7 + bool "RISC-V SBI CPU idle Driver" 8 + depends on RISCV_SBI 9 + select DT_IDLE_STATES 10 + select CPU_IDLE_MULTIPLE_DRIVERS 11 + select DT_IDLE_GENPD if PM_GENERIC_DOMAINS_OF 12 + help 13 + Select this option to enable RISC-V SBI firmware based CPU idle 14 + driver for RISC-V systems. This driver also supports hierarchical 15 + DT based layout of the idle states.
+5
drivers/cpuidle/Makefile
··· 6 6 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/ 7 7 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o 8 8 obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o 9 + obj-$(CONFIG_DT_IDLE_GENPD) += dt_idle_genpd.o 9 10 obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o 10 11 obj-$(CONFIG_HALTPOLL_CPUIDLE) += cpuidle-haltpoll.o 11 12 ··· 35 34 # POWERPC drivers 36 35 obj-$(CONFIG_PSERIES_CPUIDLE) += cpuidle-pseries.o 37 36 obj-$(CONFIG_POWERNV_CPUIDLE) += cpuidle-powernv.o 37 + 38 + ############################################################################### 39 + # RISC-V drivers 40 + obj-$(CONFIG_RISCV_SBI_CPUIDLE) += cpuidle-riscv-sbi.o
+5 -133
drivers/cpuidle/cpuidle-psci-domain.c
··· 47 47 return 0; 48 48 } 49 49 50 - static int psci_pd_parse_state_nodes(struct genpd_power_state *states, 51 - int state_count) 52 - { 53 - int i, ret; 54 - u32 psci_state, *psci_state_buf; 55 - 56 - for (i = 0; i < state_count; i++) { 57 - ret = psci_dt_parse_state_node(to_of_node(states[i].fwnode), 58 - &psci_state); 59 - if (ret) 60 - goto free_state; 61 - 62 - psci_state_buf = kmalloc(sizeof(u32), GFP_KERNEL); 63 - if (!psci_state_buf) { 64 - ret = -ENOMEM; 65 - goto free_state; 66 - } 67 - *psci_state_buf = psci_state; 68 - states[i].data = psci_state_buf; 69 - } 70 - 71 - return 0; 72 - 73 - free_state: 74 - i--; 75 - for (; i >= 0; i--) 76 - kfree(states[i].data); 77 - return ret; 78 - } 79 - 80 - static int psci_pd_parse_states(struct device_node *np, 81 - struct genpd_power_state **states, int *state_count) 82 - { 83 - int ret; 84 - 85 - /* Parse the domain idle states. */ 86 - ret = of_genpd_parse_idle_states(np, states, state_count); 87 - if (ret) 88 - return ret; 89 - 90 - /* Fill out the PSCI specifics for each found state. 
*/ 91 - ret = psci_pd_parse_state_nodes(*states, *state_count); 92 - if (ret) 93 - kfree(*states); 94 - 95 - return ret; 96 - } 97 - 98 - static void psci_pd_free_states(struct genpd_power_state *states, 99 - unsigned int state_count) 100 - { 101 - int i; 102 - 103 - for (i = 0; i < state_count; i++) 104 - kfree(states[i].data); 105 - kfree(states); 106 - } 107 - 108 50 static int psci_pd_init(struct device_node *np, bool use_osi) 109 51 { 110 52 struct generic_pm_domain *pd; 111 53 struct psci_pd_provider *pd_provider; 112 54 struct dev_power_governor *pd_gov; 113 - struct genpd_power_state *states = NULL; 114 55 int ret = -ENOMEM, state_count = 0; 115 56 116 - pd = kzalloc(sizeof(*pd), GFP_KERNEL); 57 + pd = dt_idle_pd_alloc(np, psci_dt_parse_state_node); 117 58 if (!pd) 118 59 goto out; 119 60 ··· 62 121 if (!pd_provider) 63 122 goto free_pd; 64 123 65 - pd->name = kasprintf(GFP_KERNEL, "%pOF", np); 66 - if (!pd->name) 67 - goto free_pd_prov; 68 - 69 - /* 70 - * Parse the domain idle states and let genpd manage the state selection 71 - * for those being compatible with "domain-idle-state". 72 - */ 73 - ret = psci_pd_parse_states(np, &states, &state_count); 74 - if (ret) 75 - goto free_name; 76 - 77 - pd->free_states = psci_pd_free_states; 78 - pd->name = kbasename(pd->name); 79 - pd->states = states; 80 - pd->state_count = state_count; 81 124 pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN; 82 125 83 126 /* Allow power off when OSI has been successfully enabled. */ ··· 74 149 pd_gov = state_count > 0 ? 
&pm_domain_cpu_gov : NULL; 75 150 76 151 ret = pm_genpd_init(pd, pd_gov, false); 77 - if (ret) { 78 - psci_pd_free_states(states, state_count); 79 - goto free_name; 80 - } 152 + if (ret) 153 + goto free_pd_prov; 81 154 82 155 ret = of_genpd_add_provider_simple(np, pd); 83 156 if (ret) ··· 89 166 90 167 remove_pd: 91 168 pm_genpd_remove(pd); 92 - free_name: 93 - kfree(pd->name); 94 169 free_pd_prov: 95 170 kfree(pd_provider); 96 171 free_pd: 97 - kfree(pd); 172 + dt_idle_pd_free(pd); 98 173 out: 99 174 pr_err("failed to init PM domain ret=%d %pOF\n", ret, np); 100 175 return ret; ··· 114 193 list_del(&pd_provider->link); 115 194 kfree(pd_provider); 116 195 } 117 - } 118 - 119 - static int psci_pd_init_topology(struct device_node *np) 120 - { 121 - struct device_node *node; 122 - struct of_phandle_args child, parent; 123 - int ret; 124 - 125 - for_each_child_of_node(np, node) { 126 - if (of_parse_phandle_with_args(node, "power-domains", 127 - "#power-domain-cells", 0, &parent)) 128 - continue; 129 - 130 - child.np = node; 131 - child.args_count = 0; 132 - ret = of_genpd_add_subdomain(&parent, &child); 133 - of_node_put(parent.np); 134 - if (ret) { 135 - of_node_put(node); 136 - return ret; 137 - } 138 - } 139 - 140 - return 0; 141 196 } 142 197 143 198 static bool psci_pd_try_set_osi_mode(void) ··· 179 282 goto no_pd; 180 283 181 284 /* Link genpd masters/subdomains to model the CPU topology. 
*/ 182 - ret = psci_pd_init_topology(np); 285 + ret = dt_idle_pd_init_topology(np); 183 286 if (ret) 184 287 goto remove_pd; 185 288 ··· 211 314 return platform_driver_register(&psci_cpuidle_domain_driver); 212 315 } 213 316 subsys_initcall(psci_idle_init_domains); 214 - 215 - struct device *psci_dt_attach_cpu(int cpu) 216 - { 217 - struct device *dev; 218 - 219 - dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci"); 220 - if (IS_ERR_OR_NULL(dev)) 221 - return dev; 222 - 223 - pm_runtime_irq_safe(dev); 224 - if (cpu_online(cpu)) 225 - pm_runtime_get_sync(dev); 226 - 227 - dev_pm_syscore_device(dev, true); 228 - 229 - return dev; 230 - } 231 - 232 - void psci_dt_detach_cpu(struct device *dev) 233 - { 234 - if (IS_ERR_OR_NULL(dev)) 235 - return; 236 - 237 - dev_pm_domain_detach(dev, false); 238 - }
+13 -2
drivers/cpuidle/cpuidle-psci.h
··· 10 10 int psci_dt_parse_state_node(struct device_node *np, u32 *state); 11 11 12 12 #ifdef CONFIG_ARM_PSCI_CPUIDLE_DOMAIN 13 - struct device *psci_dt_attach_cpu(int cpu); 14 - void psci_dt_detach_cpu(struct device *dev); 13 + 14 + #include "dt_idle_genpd.h" 15 + 16 + static inline struct device *psci_dt_attach_cpu(int cpu) 17 + { 18 + return dt_idle_attach_cpu(cpu, "psci"); 19 + } 20 + 21 + static inline void psci_dt_detach_cpu(struct device *dev) 22 + { 23 + dt_idle_detach_cpu(dev); 24 + } 25 + 15 26 #else 16 27 static inline struct device *psci_dt_attach_cpu(int cpu) { return NULL; } 17 28 static inline void psci_dt_detach_cpu(struct device *dev) { }
+627
drivers/cpuidle/cpuidle-riscv-sbi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * RISC-V SBI CPU idle driver. 4 + * 5 + * Copyright (c) 2021 Western Digital Corporation or its affiliates. 6 + * Copyright (c) 2022 Ventana Micro Systems Inc. 7 + */ 8 + 9 + #define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt 10 + 11 + #include <linux/cpuidle.h> 12 + #include <linux/cpumask.h> 13 + #include <linux/cpu_pm.h> 14 + #include <linux/cpu_cooling.h> 15 + #include <linux/kernel.h> 16 + #include <linux/module.h> 17 + #include <linux/of.h> 18 + #include <linux/of_device.h> 19 + #include <linux/slab.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/pm_domain.h> 22 + #include <linux/pm_runtime.h> 23 + #include <asm/cpuidle.h> 24 + #include <asm/sbi.h> 25 + #include <asm/suspend.h> 26 + 27 + #include "dt_idle_states.h" 28 + #include "dt_idle_genpd.h" 29 + 30 + struct sbi_cpuidle_data { 31 + u32 *states; 32 + struct device *dev; 33 + }; 34 + 35 + struct sbi_domain_state { 36 + bool available; 37 + u32 state; 38 + }; 39 + 40 + static DEFINE_PER_CPU_READ_MOSTLY(struct sbi_cpuidle_data, sbi_cpuidle_data); 41 + static DEFINE_PER_CPU(struct sbi_domain_state, domain_state); 42 + static bool sbi_cpuidle_use_osi; 43 + static bool sbi_cpuidle_use_cpuhp; 44 + static bool sbi_cpuidle_pd_allow_domain_state; 45 + 46 + static inline void sbi_set_domain_state(u32 state) 47 + { 48 + struct sbi_domain_state *data = this_cpu_ptr(&domain_state); 49 + 50 + data->available = true; 51 + data->state = state; 52 + } 53 + 54 + static inline u32 sbi_get_domain_state(void) 55 + { 56 + struct sbi_domain_state *data = this_cpu_ptr(&domain_state); 57 + 58 + return data->state; 59 + } 60 + 61 + static inline void sbi_clear_domain_state(void) 62 + { 63 + struct sbi_domain_state *data = this_cpu_ptr(&domain_state); 64 + 65 + data->available = false; 66 + } 67 + 68 + static inline bool sbi_is_domain_state_available(void) 69 + { 70 + struct sbi_domain_state *data = this_cpu_ptr(&domain_state); 71 + 72 + return data->available; 
73 + }
74 + 
75 + static int sbi_suspend_finisher(unsigned long suspend_type,
76 + unsigned long resume_addr,
77 + unsigned long opaque)
78 + {
79 + struct sbiret ret;
80 + 
81 + ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
82 + suspend_type, resume_addr, opaque, 0, 0, 0);
83 + 
84 + return (ret.error) ? sbi_err_map_linux_errno(ret.error) : 0;
85 + }
86 + 
87 + static int sbi_suspend(u32 state)
88 + {
89 + if (state & SBI_HSM_SUSP_NON_RET_BIT)
90 + return cpu_suspend(state, sbi_suspend_finisher);
91 + else
92 + return sbi_suspend_finisher(state, 0, 0);
93 + }
94 + 
95 + static int sbi_cpuidle_enter_state(struct cpuidle_device *dev,
96 + struct cpuidle_driver *drv, int idx)
97 + {
98 + u32 *states = __this_cpu_read(sbi_cpuidle_data.states);
99 + 
100 + return CPU_PM_CPU_IDLE_ENTER_PARAM(sbi_suspend, idx, states[idx]);
101 + }
102 + 
103 + static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev,
104 + struct cpuidle_driver *drv, int idx,
105 + bool s2idle)
106 + {
107 + struct sbi_cpuidle_data *data = this_cpu_ptr(&sbi_cpuidle_data);
108 + u32 *states = data->states;
109 + struct device *pd_dev = data->dev;
110 + u32 state;
111 + int ret;
112 + 
113 + ret = cpu_pm_enter();
114 + if (ret)
115 + return -1;
116 + 
117 + /* Do runtime PM to manage a hierarchical CPU topology. */
118 + rcu_irq_enter_irqson();
119 + if (s2idle)
120 + dev_pm_genpd_suspend(pd_dev);
121 + else
122 + pm_runtime_put_sync_suspend(pd_dev);
123 + rcu_irq_exit_irqson();
124 + 
125 + if (sbi_is_domain_state_available())
126 + state = sbi_get_domain_state();
127 + else
128 + state = states[idx];
129 + 
130 + ret = sbi_suspend(state) ? -1 : idx;
131 + 
132 + rcu_irq_enter_irqson();
133 + if (s2idle)
134 + dev_pm_genpd_resume(pd_dev);
135 + else
136 + pm_runtime_get_sync(pd_dev);
137 + rcu_irq_exit_irqson();
138 + 
139 + cpu_pm_exit();
140 + 
141 + /* Clear the domain state to start fresh when back from idle.
*/
142 + sbi_clear_domain_state();
143 + return ret;
144 + }
145 + 
146 + static int sbi_enter_domain_idle_state(struct cpuidle_device *dev,
147 + struct cpuidle_driver *drv, int idx)
148 + {
149 + return __sbi_enter_domain_idle_state(dev, drv, idx, false);
150 + }
151 + 
152 + static int sbi_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
153 + struct cpuidle_driver *drv,
154 + int idx)
155 + {
156 + return __sbi_enter_domain_idle_state(dev, drv, idx, true);
157 + }
158 + 
159 + static int sbi_cpuidle_cpuhp_up(unsigned int cpu)
160 + {
161 + struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
162 + 
163 + if (pd_dev)
164 + pm_runtime_get_sync(pd_dev);
165 + 
166 + return 0;
167 + }
168 + 
169 + static int sbi_cpuidle_cpuhp_down(unsigned int cpu)
170 + {
171 + struct device *pd_dev = __this_cpu_read(sbi_cpuidle_data.dev);
172 + 
173 + if (pd_dev) {
174 + pm_runtime_put_sync(pd_dev);
175 + /* Clear domain state to start fresh at next online. */
176 + sbi_clear_domain_state();
177 + }
178 + 
179 + return 0;
180 + }
181 + 
182 + static void sbi_idle_init_cpuhp(void)
183 + {
184 + int err;
185 + 
186 + if (!sbi_cpuidle_use_cpuhp)
187 + return;
188 + 
189 + err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
190 + "cpuidle/sbi:online",
191 + sbi_cpuidle_cpuhp_up,
192 + sbi_cpuidle_cpuhp_down);
193 + if (err)
194 + pr_warn("Failed %d while setting up cpuhp state\n", err);
195 + }
196 + 
197 + static const struct of_device_id sbi_cpuidle_state_match[] = {
198 + { .compatible = "riscv,idle-state",
199 + .data = sbi_cpuidle_enter_state },
200 + { },
201 + };
202 + 
203 + static bool sbi_suspend_state_is_valid(u32 state)
204 + {
205 + if (state > SBI_HSM_SUSPEND_RET_DEFAULT &&
206 + state < SBI_HSM_SUSPEND_RET_PLATFORM)
207 + return false;
208 + if (state > SBI_HSM_SUSPEND_NON_RET_DEFAULT &&
209 + state < SBI_HSM_SUSPEND_NON_RET_PLATFORM)
210 + return false;
211 + return true;
212 + }
213 + 
214 + static int sbi_dt_parse_state_node(struct device_node *np, u32
*state) 215 + { 216 + int err = of_property_read_u32(np, "riscv,sbi-suspend-param", state); 217 + 218 + if (err) { 219 + pr_warn("%pOF missing riscv,sbi-suspend-param property\n", np); 220 + return err; 221 + } 222 + 223 + if (!sbi_suspend_state_is_valid(*state)) { 224 + pr_warn("Invalid SBI suspend state %#x\n", *state); 225 + return -EINVAL; 226 + } 227 + 228 + return 0; 229 + } 230 + 231 + static int sbi_dt_cpu_init_topology(struct cpuidle_driver *drv, 232 + struct sbi_cpuidle_data *data, 233 + unsigned int state_count, int cpu) 234 + { 235 + /* Currently limit the hierarchical topology to be used in OSI mode. */ 236 + if (!sbi_cpuidle_use_osi) 237 + return 0; 238 + 239 + data->dev = dt_idle_attach_cpu(cpu, "sbi"); 240 + if (IS_ERR_OR_NULL(data->dev)) 241 + return PTR_ERR_OR_ZERO(data->dev); 242 + 243 + /* 244 + * Using the deepest state for the CPU to trigger a potential selection 245 + * of a shared state for the domain, assumes the domain states are all 246 + * deeper states. 247 + */ 248 + drv->states[state_count - 1].enter = sbi_enter_domain_idle_state; 249 + drv->states[state_count - 1].enter_s2idle = 250 + sbi_enter_s2idle_domain_idle_state; 251 + sbi_cpuidle_use_cpuhp = true; 252 + 253 + return 0; 254 + } 255 + 256 + static int sbi_cpuidle_dt_init_states(struct device *dev, 257 + struct cpuidle_driver *drv, 258 + unsigned int cpu, 259 + unsigned int state_count) 260 + { 261 + struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu); 262 + struct device_node *state_node; 263 + struct device_node *cpu_node; 264 + u32 *states; 265 + int i, ret; 266 + 267 + cpu_node = of_cpu_device_node_get(cpu); 268 + if (!cpu_node) 269 + return -ENODEV; 270 + 271 + states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL); 272 + if (!states) { 273 + ret = -ENOMEM; 274 + goto fail; 275 + } 276 + 277 + /* Parse SBI specific details from state DT nodes */ 278 + for (i = 1; i < state_count; i++) { 279 + state_node = of_get_cpu_state_node(cpu_node, i - 1); 
280 + if (!state_node)
281 + break;
282 + 
283 + ret = sbi_dt_parse_state_node(state_node, &states[i]);
284 + of_node_put(state_node);
285 + 
286 + if (ret)
287 + goto fail;
288 + 
289 + pr_debug("sbi-state %#x index %d\n", states[i], i);
290 + }
291 + if (i != state_count) {
292 + ret = -ENODEV;
293 + goto fail;
294 + }
295 + 
296 + /* Initialize optional data, used for the hierarchical topology. */
297 + ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
298 + if (ret < 0)
299 + goto fail;
300 + 
301 + /* Store states in the per-cpu struct. */
302 + data->states = states;
303 + 
304 + fail:
305 + of_node_put(cpu_node);
306 + 
307 + return ret;
308 + }
309 + 
310 + static void sbi_cpuidle_deinit_cpu(int cpu)
311 + {
312 + struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
313 + 
314 + dt_idle_detach_cpu(data->dev);
315 + sbi_cpuidle_use_cpuhp = false;
316 + }
317 + 
318 + static int sbi_cpuidle_init_cpu(struct device *dev, int cpu)
319 + {
320 + struct cpuidle_driver *drv;
321 + unsigned int state_count = 0;
322 + int ret = 0;
323 + 
324 + drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL);
325 + if (!drv)
326 + return -ENOMEM;
327 + 
328 + drv->name = "sbi_cpuidle";
329 + drv->owner = THIS_MODULE;
330 + drv->cpumask = (struct cpumask *)cpumask_of(cpu);
331 + 
332 + /* RISC-V architectural WFI to be represented as state index 0. */
333 + drv->states[0].enter = sbi_cpuidle_enter_state;
334 + drv->states[0].exit_latency = 1;
335 + drv->states[0].target_residency = 1;
336 + drv->states[0].power_usage = UINT_MAX;
337 + strcpy(drv->states[0].name, "WFI");
338 + strcpy(drv->states[0].desc, "RISC-V WFI");
339 + 
340 + /* 
341 + * If no DT idle states are detected (ret == 0) let the driver
342 + * initialization fail accordingly since there is no reason to
343 + * initialize the idle driver if only wfi is supported; the
344 + * default architectural back-end already executes wfi
345 + * on idle entry.
346 + */ 347 + ret = dt_init_idle_driver(drv, sbi_cpuidle_state_match, 1); 348 + if (ret <= 0) { 349 + pr_debug("HART%ld: failed to parse DT idle states\n", 350 + cpuid_to_hartid_map(cpu)); 351 + return ret ? : -ENODEV; 352 + } 353 + state_count = ret + 1; /* Include WFI state as well */ 354 + 355 + /* Initialize idle states from DT. */ 356 + ret = sbi_cpuidle_dt_init_states(dev, drv, cpu, state_count); 357 + if (ret) { 358 + pr_err("HART%ld: failed to init idle states\n", 359 + cpuid_to_hartid_map(cpu)); 360 + return ret; 361 + } 362 + 363 + ret = cpuidle_register(drv, NULL); 364 + if (ret) 365 + goto deinit; 366 + 367 + cpuidle_cooling_register(drv); 368 + 369 + return 0; 370 + deinit: 371 + sbi_cpuidle_deinit_cpu(cpu); 372 + return ret; 373 + } 374 + 375 + static void sbi_cpuidle_domain_sync_state(struct device *dev) 376 + { 377 + /* 378 + * All devices have now been attached/probed to the PM domain 379 + * topology, hence it's fine to allow domain states to be picked. 380 + */ 381 + sbi_cpuidle_pd_allow_domain_state = true; 382 + } 383 + 384 + #ifdef CONFIG_DT_IDLE_GENPD 385 + 386 + static int sbi_cpuidle_pd_power_off(struct generic_pm_domain *pd) 387 + { 388 + struct genpd_power_state *state = &pd->states[pd->state_idx]; 389 + u32 *pd_state; 390 + 391 + if (!state->data) 392 + return 0; 393 + 394 + if (!sbi_cpuidle_pd_allow_domain_state) 395 + return -EBUSY; 396 + 397 + /* OSI mode is enabled, set the corresponding domain state. 
*/ 398 + pd_state = state->data; 399 + sbi_set_domain_state(*pd_state); 400 + 401 + return 0; 402 + } 403 + 404 + struct sbi_pd_provider { 405 + struct list_head link; 406 + struct device_node *node; 407 + }; 408 + 409 + static LIST_HEAD(sbi_pd_providers); 410 + 411 + static int sbi_pd_init(struct device_node *np) 412 + { 413 + struct generic_pm_domain *pd; 414 + struct sbi_pd_provider *pd_provider; 415 + struct dev_power_governor *pd_gov; 416 + int ret = -ENOMEM, state_count = 0; 417 + 418 + pd = dt_idle_pd_alloc(np, sbi_dt_parse_state_node); 419 + if (!pd) 420 + goto out; 421 + 422 + pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL); 423 + if (!pd_provider) 424 + goto free_pd; 425 + 426 + pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN; 427 + 428 + /* Allow power off when OSI is available. */ 429 + if (sbi_cpuidle_use_osi) 430 + pd->power_off = sbi_cpuidle_pd_power_off; 431 + else 432 + pd->flags |= GENPD_FLAG_ALWAYS_ON; 433 + 434 + /* Use governor for CPU PM domains if it has some states to manage. */ 435 + pd_gov = state_count > 0 ? 
&pm_domain_cpu_gov : NULL; 436 + 437 + ret = pm_genpd_init(pd, pd_gov, false); 438 + if (ret) 439 + goto free_pd_prov; 440 + 441 + ret = of_genpd_add_provider_simple(np, pd); 442 + if (ret) 443 + goto remove_pd; 444 + 445 + pd_provider->node = of_node_get(np); 446 + list_add(&pd_provider->link, &sbi_pd_providers); 447 + 448 + pr_debug("init PM domain %s\n", pd->name); 449 + return 0; 450 + 451 + remove_pd: 452 + pm_genpd_remove(pd); 453 + free_pd_prov: 454 + kfree(pd_provider); 455 + free_pd: 456 + dt_idle_pd_free(pd); 457 + out: 458 + pr_err("failed to init PM domain ret=%d %pOF\n", ret, np); 459 + return ret; 460 + } 461 + 462 + static void sbi_pd_remove(void) 463 + { 464 + struct sbi_pd_provider *pd_provider, *it; 465 + struct generic_pm_domain *genpd; 466 + 467 + list_for_each_entry_safe(pd_provider, it, &sbi_pd_providers, link) { 468 + of_genpd_del_provider(pd_provider->node); 469 + 470 + genpd = of_genpd_remove_last(pd_provider->node); 471 + if (!IS_ERR(genpd)) 472 + kfree(genpd); 473 + 474 + of_node_put(pd_provider->node); 475 + list_del(&pd_provider->link); 476 + kfree(pd_provider); 477 + } 478 + } 479 + 480 + static int sbi_genpd_probe(struct device_node *np) 481 + { 482 + struct device_node *node; 483 + int ret = 0, pd_count = 0; 484 + 485 + if (!np) 486 + return -ENODEV; 487 + 488 + /* 489 + * Parse child nodes for the "#power-domain-cells" property and 490 + * initialize a genpd/genpd-of-provider pair when it's found. 491 + */ 492 + for_each_child_of_node(np, node) { 493 + if (!of_find_property(node, "#power-domain-cells", NULL)) 494 + continue; 495 + 496 + ret = sbi_pd_init(node); 497 + if (ret) 498 + goto put_node; 499 + 500 + pd_count++; 501 + } 502 + 503 + /* Bail out if not using the hierarchical CPU topology. */ 504 + if (!pd_count) 505 + goto no_pd; 506 + 507 + /* Link genpd masters/subdomains to model the CPU topology. 
*/
508 + ret = dt_idle_pd_init_topology(np);
509 + if (ret)
510 + goto remove_pd;
511 + 
512 + return 0;
513 + 
514 + put_node:
515 + of_node_put(node);
516 + remove_pd:
517 + sbi_pd_remove();
518 + pr_err("failed to create CPU PM domains ret=%d\n", ret);
519 + no_pd:
520 + return ret;
521 + }
522 + 
523 + #else
524 + 
525 + static inline int sbi_genpd_probe(struct device_node *np)
526 + {
527 + return 0;
528 + }
529 + 
530 + #endif
531 + 
532 + static int sbi_cpuidle_probe(struct platform_device *pdev)
533 + {
534 + int cpu, ret;
535 + struct cpuidle_driver *drv;
536 + struct cpuidle_device *dev;
537 + struct device_node *np, *pds_node;
538 + 
539 + /* Detect OSI support based on CPU DT nodes */
540 + sbi_cpuidle_use_osi = true;
541 + for_each_possible_cpu(cpu) {
542 + np = of_cpu_device_node_get(cpu);
543 + if (np &&
544 + of_find_property(np, "power-domains", NULL) &&
545 + of_find_property(np, "power-domain-names", NULL)) {
546 + continue;
547 + } else {
548 + sbi_cpuidle_use_osi = false;
549 + break;
550 + }
551 + }
552 + 
553 + /* Populate generic power domains from DT nodes */
554 + pds_node = of_find_node_by_path("/cpus/power-domains");
555 + if (pds_node) {
556 + ret = sbi_genpd_probe(pds_node);
557 + of_node_put(pds_node);
558 + if (ret)
559 + return ret;
560 + }
561 + 
562 + /* Initialize CPU idle driver for each CPU */
563 + for_each_possible_cpu(cpu) {
564 + ret = sbi_cpuidle_init_cpu(&pdev->dev, cpu);
565 + if (ret) {
566 + pr_debug("HART%ld: idle driver init failed\n",
567 + cpuid_to_hartid_map(cpu));
568 + goto out_fail;
569 + }
570 + }
571 + 
572 + /* Set up CPU hotplug notifiers */
573 + sbi_idle_init_cpuhp();
574 + 
575 + pr_info("idle driver registered for all CPUs\n");
576 + 
577 + return 0;
578 + 
579 + out_fail:
580 + while (--cpu >= 0) {
581 + dev = per_cpu(cpuidle_devices, cpu);
582 + drv = cpuidle_get_cpu_driver(dev);
583 + cpuidle_unregister(drv);
584 + sbi_cpuidle_deinit_cpu(cpu);
585 + }
586 + 
587 + return ret;
588 + }
589 + 
590 + static struct
platform_driver sbi_cpuidle_driver = { 591 + .probe = sbi_cpuidle_probe, 592 + .driver = { 593 + .name = "sbi-cpuidle", 594 + .sync_state = sbi_cpuidle_domain_sync_state, 595 + }, 596 + }; 597 + 598 + static int __init sbi_cpuidle_init(void) 599 + { 600 + int ret; 601 + struct platform_device *pdev; 602 + 603 + /* 604 + * The SBI HSM suspend function is only available when: 605 + * 1) SBI version is 0.3 or higher 606 + * 2) SBI HSM extension is available 607 + */ 608 + if ((sbi_spec_version < sbi_mk_version(0, 3)) || 609 + sbi_probe_extension(SBI_EXT_HSM) <= 0) { 610 + pr_info("HSM suspend not available\n"); 611 + return 0; 612 + } 613 + 614 + ret = platform_driver_register(&sbi_cpuidle_driver); 615 + if (ret) 616 + return ret; 617 + 618 + pdev = platform_device_register_simple("sbi-cpuidle", 619 + -1, NULL, 0); 620 + if (IS_ERR(pdev)) { 621 + platform_driver_unregister(&sbi_cpuidle_driver); 622 + return PTR_ERR(pdev); 623 + } 624 + 625 + return 0; 626 + } 627 + device_initcall(sbi_cpuidle_init);
+178
drivers/cpuidle/dt_idle_genpd.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * PM domains for CPUs via genpd. 4 + * 5 + * Copyright (C) 2019 Linaro Ltd. 6 + * Author: Ulf Hansson <ulf.hansson@linaro.org> 7 + * 8 + * Copyright (c) 2021 Western Digital Corporation or its affiliates. 9 + * Copyright (c) 2022 Ventana Micro Systems Inc. 10 + */ 11 + 12 + #define pr_fmt(fmt) "dt-idle-genpd: " fmt 13 + 14 + #include <linux/cpu.h> 15 + #include <linux/device.h> 16 + #include <linux/kernel.h> 17 + #include <linux/pm_domain.h> 18 + #include <linux/pm_runtime.h> 19 + #include <linux/slab.h> 20 + #include <linux/string.h> 21 + 22 + #include "dt_idle_genpd.h" 23 + 24 + static int pd_parse_state_nodes( 25 + int (*parse_state)(struct device_node *, u32 *), 26 + struct genpd_power_state *states, int state_count) 27 + { 28 + int i, ret; 29 + u32 state, *state_buf; 30 + 31 + for (i = 0; i < state_count; i++) { 32 + ret = parse_state(to_of_node(states[i].fwnode), &state); 33 + if (ret) 34 + goto free_state; 35 + 36 + state_buf = kmalloc(sizeof(u32), GFP_KERNEL); 37 + if (!state_buf) { 38 + ret = -ENOMEM; 39 + goto free_state; 40 + } 41 + *state_buf = state; 42 + states[i].data = state_buf; 43 + } 44 + 45 + return 0; 46 + 47 + free_state: 48 + i--; 49 + for (; i >= 0; i--) 50 + kfree(states[i].data); 51 + return ret; 52 + } 53 + 54 + static int pd_parse_states(struct device_node *np, 55 + int (*parse_state)(struct device_node *, u32 *), 56 + struct genpd_power_state **states, 57 + int *state_count) 58 + { 59 + int ret; 60 + 61 + /* Parse the domain idle states. */ 62 + ret = of_genpd_parse_idle_states(np, states, state_count); 63 + if (ret) 64 + return ret; 65 + 66 + /* Fill out the dt specifics for each found state. 
*/ 67 + ret = pd_parse_state_nodes(parse_state, *states, *state_count); 68 + if (ret) 69 + kfree(*states); 70 + 71 + return ret; 72 + } 73 + 74 + static void pd_free_states(struct genpd_power_state *states, 75 + unsigned int state_count) 76 + { 77 + int i; 78 + 79 + for (i = 0; i < state_count; i++) 80 + kfree(states[i].data); 81 + kfree(states); 82 + } 83 + 84 + void dt_idle_pd_free(struct generic_pm_domain *pd) 85 + { 86 + pd_free_states(pd->states, pd->state_count); 87 + kfree(pd->name); 88 + kfree(pd); 89 + } 90 + 91 + struct generic_pm_domain *dt_idle_pd_alloc(struct device_node *np, 92 + int (*parse_state)(struct device_node *, u32 *)) 93 + { 94 + struct generic_pm_domain *pd; 95 + struct genpd_power_state *states = NULL; 96 + int ret, state_count = 0; 97 + 98 + pd = kzalloc(sizeof(*pd), GFP_KERNEL); 99 + if (!pd) 100 + goto out; 101 + 102 + pd->name = kasprintf(GFP_KERNEL, "%pOF", np); 103 + if (!pd->name) 104 + goto free_pd; 105 + 106 + /* 107 + * Parse the domain idle states and let genpd manage the state selection 108 + * for those being compatible with "domain-idle-state". 
109 + */ 110 + ret = pd_parse_states(np, parse_state, &states, &state_count); 111 + if (ret) 112 + goto free_name; 113 + 114 + pd->free_states = pd_free_states; 115 + pd->name = kbasename(pd->name); 116 + pd->states = states; 117 + pd->state_count = state_count; 118 + 119 + pr_debug("alloc PM domain %s\n", pd->name); 120 + return pd; 121 + 122 + free_name: 123 + kfree(pd->name); 124 + free_pd: 125 + kfree(pd); 126 + out: 127 + pr_err("failed to alloc PM domain %pOF\n", np); 128 + return NULL; 129 + } 130 + 131 + int dt_idle_pd_init_topology(struct device_node *np) 132 + { 133 + struct device_node *node; 134 + struct of_phandle_args child, parent; 135 + int ret; 136 + 137 + for_each_child_of_node(np, node) { 138 + if (of_parse_phandle_with_args(node, "power-domains", 139 + "#power-domain-cells", 0, &parent)) 140 + continue; 141 + 142 + child.np = node; 143 + child.args_count = 0; 144 + ret = of_genpd_add_subdomain(&parent, &child); 145 + of_node_put(parent.np); 146 + if (ret) { 147 + of_node_put(node); 148 + return ret; 149 + } 150 + } 151 + 152 + return 0; 153 + } 154 + 155 + struct device *dt_idle_attach_cpu(int cpu, const char *name) 156 + { 157 + struct device *dev; 158 + 159 + dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), name); 160 + if (IS_ERR_OR_NULL(dev)) 161 + return dev; 162 + 163 + pm_runtime_irq_safe(dev); 164 + if (cpu_online(cpu)) 165 + pm_runtime_get_sync(dev); 166 + 167 + dev_pm_syscore_device(dev, true); 168 + 169 + return dev; 170 + } 171 + 172 + void dt_idle_detach_cpu(struct device *dev) 173 + { 174 + if (IS_ERR_OR_NULL(dev)) 175 + return; 176 + 177 + dev_pm_domain_detach(dev, false); 178 + }
+50
drivers/cpuidle/dt_idle_genpd.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __DT_IDLE_GENPD 3 + #define __DT_IDLE_GENPD 4 + 5 + struct device_node; 6 + struct generic_pm_domain; 7 + 8 + #ifdef CONFIG_DT_IDLE_GENPD 9 + 10 + void dt_idle_pd_free(struct generic_pm_domain *pd); 11 + 12 + struct generic_pm_domain *dt_idle_pd_alloc(struct device_node *np, 13 + int (*parse_state)(struct device_node *, u32 *)); 14 + 15 + int dt_idle_pd_init_topology(struct device_node *np); 16 + 17 + struct device *dt_idle_attach_cpu(int cpu, const char *name); 18 + 19 + void dt_idle_detach_cpu(struct device *dev); 20 + 21 + #else 22 + 23 + static inline void dt_idle_pd_free(struct generic_pm_domain *pd) 24 + { 25 + } 26 + 27 + static inline struct generic_pm_domain *dt_idle_pd_alloc( 28 + struct device_node *np, 29 + int (*parse_state)(struct device_node *, u32 *)) 30 + { 31 + return NULL; 32 + } 33 + 34 + static inline int dt_idle_pd_init_topology(struct device_node *np) 35 + { 36 + return 0; 37 + } 38 + 39 + static inline struct device *dt_idle_attach_cpu(int cpu, const char *name) 40 + { 41 + return NULL; 42 + } 43 + 44 + static inline void dt_idle_detach_cpu(struct device *dev) 45 + { 46 + } 47 + 48 + #endif 49 + 50 + #endif