
Merge tag 'pm+acpi-3.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael Wysocki:
"Features-wise, to me the most important this time is a rework of
wakeup interrupts handling in the core that makes them work
consistently across all of the available sleep states, including
suspend-to-idle. Many thanks to Thomas Gleixner for his help with
this work.

Second is an update of the generic PM domains code that has been in
need of some care for quite a while. Unused code is being removed, DT
support is being added and domains are now going to be attached to
devices in bus type code in analogy with the ACPI PM domain. The
majority of work here was done by Ulf Hansson who also has been the
most active developer this time.

Apart from this we have a traditional ACPICA update, this time to
upstream version 20140828 and a few ACPI wakeup interrupts handling
patches on top of the general rework mentioned above. There also are
several cpufreq commits including renaming the cpufreq-cpu0 driver to
cpufreq-dt, as this is what implements generic DT-based cpufreq
support, and a new DT-based idle states infrastructure for cpuidle.

In addition to that, the ACPI LPSS driver is updated, ACPI support for
Apple machines is improved, a few bugs are fixed and a few cleanups
are made all over.

Finally, the Adaptive Voltage Scaling (AVS) subsystem now has a tree
maintained by Kevin Hilman that will be merged through the PM tree.

Numbers-wise, the generic PM domains update takes the lead this time
with 32 non-merge commits, second is cpufreq (15 commits) and the 3rd
place goes to the wakeup interrupts handling rework (13 commits).

Specifics:

- Rework the handling of wakeup IRQs by the IRQ core such that all of
them will be switched over to "wakeup" mode in suspend_device_irqs()
and in that mode the first interrupt will abort system suspend in
progress or wake up the system if already in suspend-to-idle (or
equivalent) without executing any interrupt handlers. Among other
things that eliminates the wakeup-related motivation to use the
IRQF_NO_SUSPEND interrupt flag with interrupts which don't really
need it and should not use it (Thomas Gleixner and Rafael Wysocki)
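The semantics of the rework described above can be sketched as a small user-space model in plain C. Nothing here is kernel code: `irq_desc`, `IRQF_NO_SUSPEND` and the two function names only mimic their kernel counterparts, and the logic is a deliberate simplification of what the IRQ core actually does.

```c
#include <assert.h>
#include <stdbool.h>

#define IRQF_NO_SUSPEND (1 << 0)   /* toy stand-in for the kernel flag */

struct irq_desc {
    unsigned int flags;
    bool wake_enabled;   /* what enable_irq_wake() would set */
    bool masked;
    bool pending;
    int handler_runs;
};

bool irqs_suspended;
bool wakeup_event;       /* "abort suspend / wake the system" signal */

/* Model of suspend_device_irqs(): IRQF_NO_SUSPEND lines stay fully
 * enabled, wake-configured lines stay armed, everything else is masked. */
void suspend_device_irqs_model(struct irq_desc *irqs, int n)
{
    irqs_suspended = true;
    for (int i = 0; i < n; i++) {
        if (irqs[i].flags & IRQF_NO_SUSPEND)
            continue;
        irqs[i].masked = !irqs[i].wake_enabled;
    }
}

/* An interrupt arriving after suspend_device_irqs(): handlers run only
 * for IRQF_NO_SUSPEND lines; an armed wakeup line is masked, marked
 * pending and reported as a wakeup event, without running any handler. */
void handle_irq_model(struct irq_desc *irq)
{
    if (irq->masked)
        return;
    if (irqs_suspended && !(irq->flags & IRQF_NO_SUSPEND)) {
        irq->masked = true;
        irq->pending = true;
        wakeup_event = true;
        return;
    }
    irq->handler_runs++;
}
```

In this model the pending, masked wakeup IRQ would be re-enabled and replayed by the resume-side counterpart of `suspend_device_irqs()`, which is exactly the point of the rework: no handler runs while devices are suspended, yet no wakeup edge is lost.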

- Switch over ACPI to handling wakeup interrupts with the help of the
new mechanism introduced by the above IRQ core rework (Rafael Wysocki)

- Rework the core generic PM domains code to eliminate code that's
not used, add DT support and add a generic mechanism by which
devices can be added to PM domains automatically during enumeration
(Ulf Hansson, Geert Uytterhoeven and Tomasz Figa).

- Add debugfs-based mechanics for debugging generic PM domains
(Maciej Matraszek).

- ACPICA update to upstream version 20140828. Included are updates
related to the SRAT and GTDT tables and the _PSx methods are in the
METHOD_NAME list now (Bob Moore and Hanjun Guo).

- Add _OSI("Darwin") support to the ACPI core (unfortunately, that
can't really be done in a straightforward way) to prevent
Thunderbolt from being turned off on Apple systems after boot (or
after resume from system suspend) and rework the ACPI Smart Battery
Subsystem (SBS) driver to work correctly with Apple platforms
(Matthew Garrett and Andreas Noever).

- ACPI LPSS (Low-Power Subsystem) driver update cleaning up the code,
adding support for 133MHz I2C source clock on Intel Baytrail to it
and making it avoid using UART RTS override with Auto Flow Control
(Heikki Krogerus).

- ACPI backlight updates removing the video_set_use_native_backlight
quirk which is not necessary any more, making the code check the
list of output devices returned by the _DOD method to avoid
creating acpi_video interfaces that won't work and adding a quirk
for Lenovo Ideapad Z570 (Hans de Goede, Aaron Lu and Stepan Bujnak)

- New Win8 ACPI OSI quirks for some Dell laptops (Edward Lin)

- Assorted ACPI code cleanups (Fabian Frederick, Rasmus Villemoes,
Sudip Mukherjee, Yijing Wang, and Zhang Rui)

- cpufreq core updates and cleanups (Viresh Kumar, Preeti U Murthy,
Rasmus Villemoes)

- cpufreq driver updates: cpufreq-cpu0/cpufreq-dt (driver name change
among other things), ppc-corenet, powernv (Viresh Kumar, Preeti U
Murthy, Shilpasri G Bhat, Lucas Stach)

- cpuidle support for DT-based idle states infrastructure, new ARM64
cpuidle driver, cpuidle core cleanups (Lorenzo Pieralisi, Rasmus
Villemoes)

- ARM big.LITTLE cpuidle driver updates: support for DT-based
initialization and Exynos5800 compatible string (Lorenzo Pieralisi,
Kevin Hilman)

- Rework of the test_suspend kernel command line argument and a new
trace event for console resume (Srinivas Pandruvada, Todd E Brandt)

- Second attempt to optimize swsusp_free() (hibernation core) to make
it avoid going through all PFNs which may be way too slow on some
systems (Joerg Roedel)
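The idea behind that optimization, visiting only the set bits of the "pages to free" bitmap instead of testing every PFN one by one, can be sketched in plain C. These are toy bitmap helpers for illustration, not the kernel's memory-bitmap implementation.

```c
#include <limits.h>
#include <stddef.h>

#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

/* Return the index of the next set bit at or after 'from', or 'nbits'
 * if there is none. Whole zero words are skipped in one step, which is
 * where the win over a per-PFN scan comes from. */
size_t next_set_bit(const unsigned long *map, size_t nbits, size_t from)
{
    for (size_t i = from; i < nbits; ) {
        unsigned long word = map[i / BITS_PER_WORD] >> (i % BITS_PER_WORD);
        if (word == 0) {
            i = (i / BITS_PER_WORD + 1) * BITS_PER_WORD;
            continue;
        }
        if (word & 1UL)
            return i;
        i++;
    }
    return nbits;
}

/* Visit each marked page exactly once; returns how many were "freed". */
int free_marked_pages(const unsigned long *map, size_t nbits)
{
    int freed = 0;
    for (size_t pfn = next_set_bit(map, nbits, 0);
         pfn < nbits;
         pfn = next_set_bit(map, nbits, pfn + 1))
        freed++;
    return freed;
}
```

With sparse bitmaps over huge address spaces, the loop's cost tracks the number of set bits rather than the total PFN count, which is the property the swsusp_free() rework is after.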

- devfreq updates (Paul Bolle, Punit Agrawal, Örjan Eide).

- rockchip-io Adaptive Voltage Scaling (AVS) driver and AVS entry
update in MAINTAINERS (Heiko Stübner, Kevin Hilman)

- PM core fix related to clock management (Geert Uytterhoeven)

- PM core's sysfs code cleanup (Johannes Berg)"

* tag 'pm+acpi-3.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (105 commits)
ACPI / fan: printk replacement
PM / clk: Fix crash in clocks management code if !CONFIG_PM_RUNTIME
PM / Domains: Rename cpu_data to cpuidle_data
cpufreq: cpufreq-dt: fix potential double put of cpu OF node
cpufreq: cpu0: rename driver and internals to 'cpufreq_dt'
PM / hibernate: Iterate over set bits instead of PFNs in swsusp_free()
cpufreq: ppc-corenet: remove duplicate update of cpu_data
ACPI / sleep: Rework the handling of ACPI GPE wakeup from suspend-to-idle
PM / sleep: Rename platform suspend/resume functions in suspend.c
PM / sleep: Export dpm_suspend_late/noirq() and dpm_resume_early/noirq()
ACPICA: Introduce acpi_enable_all_wakeup_gpes()
ACPICA: Clear all non-wakeup GPEs in acpi_hw_enable_wakeup_gpe_block()
ACPI / video: check _DOD list when creating backlight devices
PM / Domains: Move dev_pm_domain_attach|detach() to pm_domain.h
cpufreq: Replace strnicmp with strncasecmp
cpufreq: powernv: Set the cpus to nominal frequency during reboot/kexec
cpufreq: powernv: Set the pstate of the last hotplugged out cpu in policy->cpus to minimum
cpufreq: Allow stop CPU callback to be used by all cpufreq drivers
PM / devfreq: exynos: Enable building exynos PPMU as module
PM / devfreq: Export helper functions for drivers
...

+2975 -1487
+6 -7
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
··· 8 8 * samsung,exynos4210-pd - for exynos4210 type power domain. 9 9 - reg: physical base address of the controller and length of memory mapped 10 10 region. 11 + - #power-domain-cells: number of cells in power domain specifier; 12 + must be 0. 11 13 12 14 Optional Properties: 13 15 - clocks: List of clock handles. The parent clocks of the input clocks to the ··· 31 29 lcd0: power-domain-lcd0 { 32 30 compatible = "samsung,exynos4210-pd"; 33 31 reg = <0x10023C00 0x10>; 32 + #power-domain-cells = <0>; 34 33 }; 35 34 36 35 mfc_pd: power-domain@10044060 { ··· 40 37 clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, 41 38 <&clock CLK_MOUT_USER_ACLK333>; 42 39 clock-names = "oscclk", "pclk0", "clk0"; 40 + #power-domain-cells = <0>; 43 41 }; 44 42 45 - Example of the node using power domain: 46 - 47 - node { 48 - /* ... */ 49 - samsung,power-domain = <&lcd0>; 50 - /* ... */ 51 - }; 43 + See Documentation/devicetree/bindings/power/power_domain.txt for description 44 + of consumer-side bindings.
+4 -4
Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt Documentation/devicetree/bindings/cpufreq/cpufreq-dt.txt
··· 1 - Generic CPU0 cpufreq driver 1 + Generic cpufreq driver 2 2 3 - It is a generic cpufreq driver for CPU0 frequency management. It 4 - supports both uniprocessor (UP) and symmetric multiprocessor (SMP) 5 - systems which share clock and voltage across all CPUs. 3 + It is a generic DT based cpufreq driver for frequency management. It supports 4 + both uniprocessor (UP) and symmetric multiprocessor (SMP) systems which share 5 + clock and voltage across all CPUs. 6 6 7 7 Both required and optional properties listed below must be defined 8 8 under node /cpus/cpu@0.
+49
Documentation/devicetree/bindings/power/power_domain.txt
··· 1 + * Generic PM domains 2 + 3 + System on chip designs are often divided into multiple PM domains that can be 4 + used for power gating of selected IP blocks for power saving by reduced leakage 5 + current. 6 + 7 + This device tree binding can be used to bind PM domain consumer devices with 8 + their PM domains provided by PM domain providers. A PM domain provider can be 9 + represented by any node in the device tree and can provide one or more PM 10 + domains. A consumer node can refer to the provider by a phandle and a set of 11 + phandle arguments (so called PM domain specifiers) of length specified by the 12 + #power-domain-cells property in the PM domain provider node. 13 + 14 + ==PM domain providers== 15 + 16 + Required properties: 17 + - #power-domain-cells : Number of cells in a PM domain specifier; 18 + Typically 0 for nodes representing a single PM domain and 1 for nodes 19 + providing multiple PM domains (e.g. power controllers), but can be any value 20 + as specified by device tree binding documentation of particular provider. 21 + 22 + Example: 23 + 24 + power: power-controller@12340000 { 25 + compatible = "foo,power-controller"; 26 + reg = <0x12340000 0x1000>; 27 + #power-domain-cells = <1>; 28 + }; 29 + 30 + The node above defines a power controller that is a PM domain provider and 31 + expects one cell as its phandle argument. 32 + 33 + ==PM domain consumers== 34 + 35 + Required properties: 36 + - power-domains : A phandle and PM domain specifier as defined by bindings of 37 + the power controller specified by phandle. 38 + 39 + Example: 40 + 41 + leaky-device@12350000 { 42 + compatible = "foo,i-leak-current"; 43 + reg = <0x12350000 0x1000>; 44 + power-domains = <&power 0>; 45 + }; 46 + 47 + The node above defines a typical PM domain consumer device, which is located 48 + inside a PM domain with index 0 of a power controller represented by a node 49 + with the label "power".
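How a consumer's power-domains specifier resolves against a provider can be sketched with a toy in-memory representation. None of these structs exist in the kernel (the real lookup goes through the of_genpd_* provider infrastructure); this only illustrates the role of #power-domain-cells in validating and indexing the specifier.

```c
#include <stddef.h>

struct pm_domain { const char *name; };

struct pd_provider {
    int power_domain_cells;     /* the provider's #power-domain-cells */
    struct pm_domain *domains;
    int ndomains;
};

/* Resolve a "power-domains = <&provider args...>" reference: the number
 * of specifier cells must match the provider's #power-domain-cells, and
 * with one cell the argument selects the domain index. */
struct pm_domain *resolve_power_domain(const struct pd_provider *p,
                                       const unsigned int *args, int nargs)
{
    if (nargs != p->power_domain_cells)
        return NULL;            /* malformed specifier */
    int idx = p->power_domain_cells > 0 ? (int)args[0] : 0;
    if (idx >= p->ndomains)
        return NULL;
    return &p->domains[idx];
}
```

A single-domain provider (`#power-domain-cells = <0>`) is then referenced with a bare phandle, while a power controller with several domains takes one index cell, matching the two cases the binding text describes.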
+83
Documentation/devicetree/bindings/power/rockchip-io-domain.txt
··· 1 + Rockchip SRAM for IO Voltage Domains: 2 + ------------------------------------- 3 + 4 + IO domain voltages on some Rockchip SoCs are variable but need to be 5 + kept in sync between the regulators and the SoC using a special 6 + register. 7 + 8 + A specific example using rk3288: 9 + - If the regulator hooked up to a pin like SDMMC0_VDD is 3.3V then 10 + bit 7 of GRF_IO_VSEL needs to be 0. If the regulator hooked up to 11 + that same pin is 1.8V then bit 7 of GRF_IO_VSEL needs to be 1. 12 + 13 + Said another way, this driver simply handles keeping bits in the SoC's 14 + general register file (GRF) in sync with the actual value of a voltage 15 + hooked up to the pins. 16 + 17 + Note that this driver specifically doesn't include: 18 + - any logic for deciding what voltage we should set regulators to 19 + - any logic for deciding whether regulators (or internal SoC blocks) 20 + should have power or not have power 21 + 22 + If there were some other software that had the smarts of making 23 + decisions about regulators, it would work in conjunction with this 24 + driver. When that other software adjusted a regulator's voltage then 25 + this driver would handle telling the SoC about it. A good example is 26 + vqmmc for SD. In that case the dw_mmc driver simply is told about a 27 + regulator. It changes the regulator between 3.3V and 1.8V at the 28 + right time. This driver notices the change and makes sure that the 29 + SoC is on the same page. 30 + 31 + 32 + Required properties: 33 + - compatible: should be one of: 34 + - "rockchip,rk3188-io-voltage-domain" for rk3188 35 + - "rockchip,rk3288-io-voltage-domain" for rk3288 36 + - rockchip,grf: phandle to the syscon managing the "general register files" 37 + 38 + 39 + You specify supplies using the standard regulator bindings by including 40 + a phandle to the relevant regulator. All specified supplies must be able 41 + to report their voltage. 
The IO Voltage Domain for any non-specified 42 + supplies will not be touched. 43 + 44 + Possible supplies for rk3188: 45 + - ap0-supply: The supply connected to AP0_VCC. 46 + - ap1-supply: The supply connected to AP1_VCC. 47 + - cif-supply: The supply connected to CIF_VCC. 48 + - flash-supply: The supply connected to FLASH_VCC. 49 + - lcdc0-supply: The supply connected to LCD0_VCC. 50 + - lcdc1-supply: The supply connected to LCD1_VCC. 51 + - vccio0-supply: The supply connected to VCCIO0. 52 + - vccio1-supply: The supply connected to VCCIO1. 53 + Sometimes also labeled VCCIO1 and VCCIO2. 54 + 55 + Possible supplies for rk3288: 56 + - audio-supply: The supply connected to APIO4_VDD. 57 + - bb-supply: The supply connected to APIO5_VDD. 58 + - dvp-supply: The supply connected to DVPIO_VDD. 59 + - flash0-supply: The supply connected to FLASH0_VDD. Typically for eMMC 60 + - flash1-supply: The supply connected to FLASH1_VDD. Also known as SDIO1. 61 + - gpio30-supply: The supply connected to APIO1_VDD. 62 + - gpio1830-supply: The supply connected to APIO2_VDD. 63 + - lcdc-supply: The supply connected to LCDC_VDD. 64 + - sdcard-supply: The supply connected to SDMMC0_VDD. 65 + - wifi-supply: The supply connected to APIO3_VDD. Also known as SDIO0. 66 + 67 + 68 + Example: 69 + 70 + io-domains { 71 + compatible = "rockchip,rk3288-io-voltage-domain"; 72 + rockchip,grf = <&grf>; 73 + 74 + audio-supply = <&vcc18_codec>; 75 + bb-supply = <&vcc33_io>; 76 + dvp-supply = <&vcc_18>; 77 + flash0-supply = <&vcc18_flashio>; 78 + gpio1830-supply = <&vcc33_io>; 79 + gpio30-supply = <&vcc33_pmuio>; 80 + lcdc-supply = <&vcc33_lcd>; 81 + sdcard-supply = <&vccio_sd>; 82 + wifi-supply = <&vcc18_wl>; 83 + };
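The synchronization this binding describes can be modeled as a pure function from supply voltage to GRF bit value, per the GRF_IO_VSEL example above (1 selects the 1.8V range, 0 the 3.3V range). This is toy illustration code: the 2.5V cutoff between the two ranges and the function names are assumptions for this sketch, not values or identifiers taken from the driver.

```c
#include <stdint.h>

/* 1 for the 1.8V IO range, 0 for the 3.3V range; the 2500000 uV
 * threshold is an illustrative assumption, not the driver's value. */
int vsel_bit_for_uV(int uV)
{
    return uV < 2500000 ? 1 : 0;
}

/* Return the GRF register value with the given VSEL bit brought in
 * sync with the supply voltage, everything else untouched. */
uint32_t grf_sync_bit(uint32_t grf, unsigned int bit, int uV)
{
    uint32_t mask = (uint32_t)1 << bit;
    return vsel_bit_for_uV(uV) ? (grf | mask) : (grf & ~mask);
}
```

In the real driver the trigger would be a regulator voltage-change notification (e.g. dw_mmc switching vqmmc between 3.3V and 1.8V), after which the corresponding GRF bit is rewritten.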
+6 -4
Documentation/kernel-parameters.txt
··· 3321 3321 3322 3322 tdfx= [HW,DRM] 3323 3323 3324 - test_suspend= [SUSPEND] 3324 + test_suspend= [SUSPEND][,N] 3325 3325 Specify "mem" (for Suspend-to-RAM) or "standby" (for 3326 - standby suspend) as the system sleep state to briefly 3327 - enter during system startup. The system is woken from 3328 - this state using a wakeup-capable RTC alarm. 3326 + standby suspend) or "freeze" (for suspend type freeze) 3327 + as the system sleep state during system startup with 3328 + the optional capability to repeat N number of times. 3329 + The system is woken from this state using a 3330 + wakeup-capable RTC alarm. 3329 3331 3330 3332 thash_entries= [KNL,NET] 3331 3333 Set number of hash buckets for TCP connection
+123
Documentation/power/suspend-and-interrupts.txt
··· 1 + System Suspend and Device Interrupts 2 + 3 + Copyright (C) 2014 Intel Corp. 4 + Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 5 + 6 + 7 + Suspending and Resuming Device IRQs 8 + ----------------------------------- 9 + 10 + Device interrupt request lines (IRQs) are generally disabled during system 11 + suspend after the "late" phase of suspending devices (that is, after all of the 12 + ->prepare, ->suspend and ->suspend_late callbacks have been executed for all 13 + devices). That is done by suspend_device_irqs(). 14 + 15 + The rationale for doing so is that after the "late" phase of device suspend 16 + there is no legitimate reason why any interrupts from suspended devices should 17 + trigger and if any devices have not been suspended properly yet, it is better to 18 + block interrupts from them anyway. Also, in the past we had problems with 19 + interrupt handlers for shared IRQs that device drivers implementing them were 20 + not prepared for interrupts triggering after their devices had been suspended. 21 + In some cases they would attempt to access, for example, memory address spaces 22 + of suspended devices and cause unpredictable behavior to ensue as a result. 23 + Unfortunately, such problems are very difficult to debug and the introduction 24 + of suspend_device_irqs(), along with the "noirq" phase of device suspend and 25 + resume, was the only practical way to mitigate them. 26 + 27 + Device IRQs are re-enabled during system resume, right before the "early" phase 28 + of resuming devices (that is, before starting to execute ->resume_early 29 + callbacks for devices). The function doing that is resume_device_irqs(). 
30 + 31 + 32 + The IRQF_NO_SUSPEND Flag 33 + ------------------------ 34 + 35 + There are interrupts that can legitimately trigger during the entire system 36 + suspend-resume cycle, including the "noirq" phases of suspending and resuming 37 + devices as well as during the time when nonboot CPUs are taken offline and 38 + brought back online. That applies to timer interrupts in the first place, 39 + but also to IPIs and to some other special-purpose interrupts. 40 + 41 + The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when 42 + requesting a special-purpose interrupt. It causes suspend_device_irqs() to 43 + leave the corresponding IRQ enabled so as to allow the interrupt to work all 44 + the time as expected. 45 + 46 + Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one 47 + user of it. Thus, if the IRQ is shared, all of the interrupt handlers installed 48 + for it will be executed as usual after suspend_device_irqs(), even if the 49 + IRQF_NO_SUSPEND flag was not passed to request_irq() (or equivalent) by some of 50 + the IRQ's users. For this reason, using IRQF_NO_SUSPEND and IRQF_SHARED at the 51 + same time should be avoided. 52 + 53 + 54 + System Wakeup Interrupts, enable_irq_wake() and disable_irq_wake() 55 + ------------------------------------------------------------------ 56 + 57 + System wakeup interrupts generally need to be configured to wake up the system 58 + from sleep states, especially if they are used for different purposes (e.g. as 59 + I/O interrupts) in the working state. 60 + 61 + That may involve turning on a special signal handling logic within the platform 62 + (such as an SoC) so that signals from a given line are routed in a different way 63 + during system sleep so as to trigger a system wakeup when needed. For example, 64 + the platform may include a dedicated interrupt controller used specifically for 65 + handling system wakeup events. 
Then, if a given interrupt line is supposed to 66 + wake up the system from sleep states, the corresponding input of that interrupt 67 + controller needs to be enabled to receive signals from the line in question. 68 + After wakeup, it generally is better to disable that input to prevent the 69 + dedicated controller from triggering interrupts unnecessarily. 70 + 71 + The IRQ subsystem provides two helper functions to be used by device drivers for 72 + those purposes. Namely, enable_irq_wake() turns on the platform's logic for 73 + handling the given IRQ as a system wakeup interrupt line and disable_irq_wake() 74 + turns that logic off. 75 + 76 + Calling enable_irq_wake() causes suspend_device_irqs() to treat the given IRQ 77 + in a special way. Namely, the IRQ remains enabled, but on the first interrupt 78 + it will be disabled, marked as pending and "suspended" so that it will be 79 + re-enabled by resume_device_irqs() during the subsequent system resume. Also 80 + the PM core is notified about the event which causes the system suspend in 81 + progress to be aborted (that doesn't have to happen immediately, but at one 82 + of the points where the suspend thread looks for pending wakeup events). 83 + 84 + This way every interrupt from a wakeup interrupt source will either cause the 85 + system suspend currently in progress to be aborted or wake up the system if 86 + already suspended. However, after suspend_device_irqs() interrupt handlers are 87 + not executed for system wakeup IRQs. They are only executed for IRQF_NO_SUSPEND 88 + IRQs at that time, but those IRQs should not be configured for system wakeup 89 + using enable_irq_wake(). 90 + 91 + 92 + Interrupts and Suspend-to-Idle 93 + ------------------------------ 94 + 95 + Suspend-to-idle (also known as the "freeze" sleep state) is a relatively new 96 + system sleep state that works by idling all of the processors and waiting for 97 + interrupts right after the "noirq" phase of suspending devices. 
98 + 99 + Of course, this means that all of the interrupts with the IRQF_NO_SUSPEND flag 100 + set will bring CPUs out of idle while in that state, but they will not cause the 101 + IRQ subsystem to trigger a system wakeup. 102 + 103 + System wakeup interrupts, in turn, will trigger wakeup from suspend-to-idle in 104 + analogy with what they do in the full system suspend case. The only difference 105 + is that the wakeup from suspend-to-idle is signaled using the usual working 106 + state interrupt delivery mechanisms and doesn't require the platform to use 107 + any special interrupt handling logic for it to work. 108 + 109 + 110 + IRQF_NO_SUSPEND and enable_irq_wake() 111 + ------------------------------------- 112 + 113 + There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND 114 + flag on the same IRQ. 115 + 116 + First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND 117 + interrupts (interrupt handlers are invoked after suspend_device_irqs()) are 118 + directly at odds with the rules for handling system wakeup interrupts (interrupt 119 + handlers are not invoked after suspend_device_irqs()). 120 + 121 + Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not 122 + to individual interrupt handlers, so sharing an IRQ between a system wakeup 123 + interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
+2 -2
MAINTAINERS
··· 8490 8490 F: Documentation/security/Smack.txt 8491 8491 F: security/smack/ 8492 8492 8493 - SMARTREFLEX DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS) 8493 + DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS) 8494 8494 M: Kevin Hilman <khilman@kernel.org> 8495 8495 M: Nishanth Menon <nm@ti.com> 8496 8496 S: Maintained 8497 - F: drivers/power/avs/smartreflex.c 8497 + F: drivers/power/avs/ 8498 8498 F: include/linux/power/smartreflex.h 8499 8499 L: linux-pm@vger.kernel.org 8500 8500
+23
arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
··· 38 38 compatible = "arm,cortex-a15"; 39 39 reg = <0>; 40 40 cci-control-port = <&cci_control1>; 41 + cpu-idle-states = <&CLUSTER_SLEEP_BIG>; 41 42 }; 42 43 43 44 cpu1: cpu@1 { ··· 46 45 compatible = "arm,cortex-a15"; 47 46 reg = <1>; 48 47 cci-control-port = <&cci_control1>; 48 + cpu-idle-states = <&CLUSTER_SLEEP_BIG>; 49 49 }; 50 50 51 51 cpu2: cpu@2 { ··· 54 52 compatible = "arm,cortex-a7"; 55 53 reg = <0x100>; 56 54 cci-control-port = <&cci_control2>; 55 + cpu-idle-states = <&CLUSTER_SLEEP_LITTLE>; 57 56 }; 58 57 59 58 cpu3: cpu@3 { ··· 62 59 compatible = "arm,cortex-a7"; 63 60 reg = <0x101>; 64 61 cci-control-port = <&cci_control2>; 62 + cpu-idle-states = <&CLUSTER_SLEEP_LITTLE>; 65 63 }; 66 64 67 65 cpu4: cpu@4 { ··· 70 66 compatible = "arm,cortex-a7"; 71 67 reg = <0x102>; 72 68 cci-control-port = <&cci_control2>; 69 + cpu-idle-states = <&CLUSTER_SLEEP_LITTLE>; 70 + }; 71 + 72 + idle-states { 73 + CLUSTER_SLEEP_BIG: cluster-sleep-big { 74 + compatible = "arm,idle-state"; 75 + local-timer-stop; 76 + entry-latency-us = <1000>; 77 + exit-latency-us = <700>; 78 + min-residency-us = <2000>; 79 + }; 80 + 81 + CLUSTER_SLEEP_LITTLE: cluster-sleep-little { 82 + compatible = "arm,idle-state"; 83 + local-timer-stop; 84 + entry-latency-us = <1000>; 85 + exit-latency-us = <500>; 86 + min-residency-us = <2500>; 87 + }; 73 88 }; 74 89 }; 75 90
-1
arch/arm/mach-exynos/exynos.c
··· 193 193 /* to be supported later */ 194 194 return; 195 195 196 - pm_genpd_poweroff_unused(); 197 196 exynos_pm_init(); 198 197 } 199 198
+1 -77
arch/arm/mach-exynos/pm_domains.c
··· 105 105 return exynos_pd_power(domain, false); 106 106 } 107 107 108 - static void exynos_add_device_to_domain(struct exynos_pm_domain *pd, 109 - struct device *dev) 110 - { 111 - int ret; 112 - 113 - dev_dbg(dev, "adding to power domain %s\n", pd->pd.name); 114 - 115 - while (1) { 116 - ret = pm_genpd_add_device(&pd->pd, dev); 117 - if (ret != -EAGAIN) 118 - break; 119 - cond_resched(); 120 - } 121 - 122 - pm_genpd_dev_need_restore(dev, true); 123 - } 124 - 125 - static void exynos_remove_device_from_domain(struct device *dev) 126 - { 127 - struct generic_pm_domain *genpd = dev_to_genpd(dev); 128 - int ret; 129 - 130 - dev_dbg(dev, "removing from power domain %s\n", genpd->name); 131 - 132 - while (1) { 133 - ret = pm_genpd_remove_device(genpd, dev); 134 - if (ret != -EAGAIN) 135 - break; 136 - cond_resched(); 137 - } 138 - } 139 - 140 - static void exynos_read_domain_from_dt(struct device *dev) 141 - { 142 - struct platform_device *pd_pdev; 143 - struct exynos_pm_domain *pd; 144 - struct device_node *node; 145 - 146 - node = of_parse_phandle(dev->of_node, "samsung,power-domain", 0); 147 - if (!node) 148 - return; 149 - pd_pdev = of_find_device_by_node(node); 150 - if (!pd_pdev) 151 - return; 152 - pd = platform_get_drvdata(pd_pdev); 153 - exynos_add_device_to_domain(pd, dev); 154 - } 155 - 156 - static int exynos_pm_notifier_call(struct notifier_block *nb, 157 - unsigned long event, void *data) 158 - { 159 - struct device *dev = data; 160 - 161 - switch (event) { 162 - case BUS_NOTIFY_BIND_DRIVER: 163 - if (dev->of_node) 164 - exynos_read_domain_from_dt(dev); 165 - 166 - break; 167 - 168 - case BUS_NOTIFY_UNBOUND_DRIVER: 169 - exynos_remove_device_from_domain(dev); 170 - 171 - break; 172 - } 173 - return NOTIFY_DONE; 174 - } 175 - 176 - static struct notifier_block platform_nb = { 177 - .notifier_call = exynos_pm_notifier_call, 178 - }; 179 - 180 108 static __init int exynos4_pm_init_power_domain(void) 181 109 { 182 110 struct platform_device *pdev; ··· 130 
202 pd->base = of_iomap(np, 0); 131 203 pd->pd.power_off = exynos_pd_power_off; 132 204 pd->pd.power_on = exynos_pd_power_on; 133 - pd->pd.of_node = np; 134 205 135 206 pd->oscclk = clk_get(dev, "oscclk"); 136 207 if (IS_ERR(pd->oscclk)) ··· 155 228 clk_put(pd->oscclk); 156 229 157 230 no_clk: 158 - platform_set_drvdata(pdev, pd); 159 - 160 231 on = __raw_readl(pd->base + 0x4) & INT_LOCAL_PWR_EN; 161 232 162 233 pm_genpd_init(&pd->pd, NULL, !on); 234 + of_genpd_add_provider_simple(np, &pd->pd); 163 235 } 164 - 165 - bus_register_notifier(&platform_bus_type, &platform_nb); 166 236 167 237 return 0; 168 238 }
+1 -1
arch/arm/mach-imx/imx27-dt.c
··· 20 20 21 21 static void __init imx27_dt_init(void) 22 22 { 23 - struct platform_device_info devinfo = { .name = "cpufreq-cpu0", }; 23 + struct platform_device_info devinfo = { .name = "cpufreq-dt", }; 24 24 25 25 mxc_arch_reset_init_dt(); 26 26
+1 -1
arch/arm/mach-imx/mach-imx51.c
··· 51 51 52 52 static void __init imx51_dt_init(void) 53 53 { 54 - struct platform_device_info devinfo = { .name = "cpufreq-cpu0", }; 54 + struct platform_device_info devinfo = { .name = "cpufreq-dt", }; 55 55 56 56 mxc_arch_reset_init_dt(); 57 57 imx51_ipu_mipi_setup();
+1 -1
arch/arm/mach-mvebu/pmsu.c
··· 644 644 } 645 645 } 646 646 647 - platform_device_register_simple("cpufreq-generic", -1, NULL, 0); 647 + platform_device_register_simple("cpufreq-dt", -1, NULL, 0); 648 648 return 0; 649 649 } 650 650
+1 -1
arch/arm/mach-omap2/pm.c
··· 282 282 if (!of_have_populated_dt()) 283 283 devinfo.name = "omap-cpufreq"; 284 284 else 285 - devinfo.name = "cpufreq-cpu0"; 285 + devinfo.name = "cpufreq-dt"; 286 286 platform_device_register_full(&devinfo); 287 287 } 288 288
-5
arch/arm/mach-s3c64xx/common.c
··· 440 440 /* if all else fails, or mode was for soft, jump to 0 */ 441 441 soft_restart(0); 442 442 } 443 - 444 - void __init s3c64xx_init_late(void) 445 - { 446 - s3c64xx_pm_late_initcall(); 447 - }
-7
arch/arm/mach-s3c64xx/common.h
··· 23 23 void s3c64xx_init_io(struct map_desc *mach_desc, int size); 24 24 25 25 void s3c64xx_restart(enum reboot_mode mode, const char *cmd); 26 - void s3c64xx_init_late(void); 27 26 28 27 void s3c64xx_clk_init(struct device_node *np, unsigned long xtal_f, 29 28 unsigned long xusbxti_f, bool is_s3c6400, void __iomem *reg_base); ··· 49 50 #else 50 51 #define s3c6410_map_io NULL 51 52 #define s3c6410_init NULL 52 - #endif 53 - 54 - #ifdef CONFIG_PM 55 - int __init s3c64xx_pm_late_initcall(void); 56 - #else 57 - static inline int s3c64xx_pm_late_initcall(void) { return 0; } 58 53 #endif 59 54 60 55 #ifdef CONFIG_S3C64XX_PL080
-1
arch/arm/mach-s3c64xx/mach-anw6410.c
··· 233 233 .init_irq = s3c6410_init_irq, 234 234 .map_io = anw6410_map_io, 235 235 .init_machine = anw6410_machine_init, 236 - .init_late = s3c64xx_init_late, 237 236 .init_time = samsung_timer_init, 238 237 .restart = s3c64xx_restart, 239 238 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-crag6410.c
··· 857 857 .init_irq = s3c6410_init_irq, 858 858 .map_io = crag6410_map_io, 859 859 .init_machine = crag6410_machine_init, 860 - .init_late = s3c64xx_init_late, 861 860 .init_time = samsung_timer_init, 862 861 .restart = s3c64xx_restart, 863 862 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-hmt.c
··· 277 277 .init_irq = s3c6410_init_irq, 278 278 .map_io = hmt_map_io, 279 279 .init_machine = hmt_machine_init, 280 - .init_late = s3c64xx_init_late, 281 280 .init_time = samsung_timer_init, 282 281 .restart = s3c64xx_restart, 283 282 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-mini6410.c
··· 366 366 .init_irq = s3c6410_init_irq, 367 367 .map_io = mini6410_map_io, 368 368 .init_machine = mini6410_machine_init, 369 - .init_late = s3c64xx_init_late, 370 369 .init_time = samsung_timer_init, 371 370 .restart = s3c64xx_restart, 372 371 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-ncp.c
··· 103 103 .init_irq = s3c6410_init_irq, 104 104 .map_io = ncp_map_io, 105 105 .init_machine = ncp_machine_init, 106 - .init_late = s3c64xx_init_late, 107 106 .init_time = samsung_timer_init, 108 107 .restart = s3c64xx_restart, 109 108 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-real6410.c
··· 335 335 .init_irq = s3c6410_init_irq, 336 336 .map_io = real6410_map_io, 337 337 .init_machine = real6410_machine_init, 338 - .init_late = s3c64xx_init_late, 339 338 .init_time = samsung_timer_init, 340 339 .restart = s3c64xx_restart, 341 340 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-smartq5.c
··· 156 156 .init_irq = s3c6410_init_irq, 157 157 .map_io = smartq_map_io, 158 158 .init_machine = smartq5_machine_init, 159 - .init_late = s3c64xx_init_late, 160 159 .init_time = samsung_timer_init, 161 160 .restart = s3c64xx_restart, 162 161 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-smartq7.c
··· 172 172 .init_irq = s3c6410_init_irq, 173 173 .map_io = smartq_map_io, 174 174 .init_machine = smartq7_machine_init, 175 - .init_late = s3c64xx_init_late, 176 175 .init_time = samsung_timer_init, 177 176 .restart = s3c64xx_restart, 178 177 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-smdk6400.c
··· 92 92 .init_irq = s3c6400_init_irq, 93 93 .map_io = smdk6400_map_io, 94 94 .init_machine = smdk6400_machine_init, 95 - .init_late = s3c64xx_init_late, 96 95 .init_time = samsung_timer_init, 97 96 .restart = s3c64xx_restart, 98 97 MACHINE_END
-1
arch/arm/mach-s3c64xx/mach-smdk6410.c
··· 705 705 .init_irq = s3c6410_init_irq, 706 706 .map_io = smdk6410_map_io, 707 707 .init_machine = smdk6410_machine_init, 708 - .init_late = s3c64xx_init_late, 709 708 .init_time = samsung_timer_init, 710 709 .restart = s3c64xx_restart, 711 710 MACHINE_END
-7
arch/arm/mach-s3c64xx/pm.c
··· 347 347 return 0; 348 348 } 349 349 arch_initcall(s3c64xx_pm_initcall); 350 - 351 - int __init s3c64xx_pm_late_initcall(void) 352 - { 353 - pm_genpd_poweroff_unused(); 354 - 355 - return 0; 356 - }
+1 -1
arch/arm/mach-shmobile/cpufreq.c
··· 12 12 13 13 int __init shmobile_cpufreq_init(void) 14 14 { 15 - platform_device_register_simple("cpufreq-cpu0", -1, NULL, 0); 15 + platform_device_register_simple("cpufreq-dt", -1, NULL, 0); 16 16 return 0; 17 17 }
arch/arm/mach-shmobile/pm-r8a7779.c (-1)
···
 	genpd->dev_ops.stop = pm_clk_suspend;
 	genpd->dev_ops.start = pm_clk_resume;
 	genpd->dev_ops.active_wakeup = pd_active_wakeup;
-	genpd->dev_irq_safe = true;
 	genpd->power_off = pd_power_down;
 	genpd->power_on = pd_power_up;
 
arch/arm/mach-shmobile/pm-rmobile.c (-1)
···
 	genpd->dev_ops.stop = pm_clk_suspend;
 	genpd->dev_ops.start = pm_clk_resume;
 	genpd->dev_ops.active_wakeup = rmobile_pd_active_wakeup;
-	genpd->dev_irq_safe = true;
 	genpd->power_off = rmobile_pd_power_down;
 	genpd->power_on = rmobile_pd_power_up;
 	__rmobile_pd_power_up(rmobile_pd, false);
arch/arm/mach-zynq/common.c (+1 -1)
···
  */
 static void __init zynq_init_machine(void)
 {
-	struct platform_device_info devinfo = { .name = "cpufreq-cpu0", };
+	struct platform_device_info devinfo = { .name = "cpufreq-dt", };
 	struct soc_device_attribute *soc_dev_attr;
 	struct soc_device *soc_dev;
 	struct device *parent = NULL;
arch/x86/kernel/apic/io_apic.c (+5)
···
 	.irq_eoi = ack_apic_level,
 	.irq_set_affinity = native_ioapic_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 static inline void init_IO_APIC_traps(void)
···
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = msi_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
···
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = dmar_msi_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int arch_setup_dmar_msi(unsigned int irq)
···
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = hpet_msi_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int default_setup_hpet_msi(unsigned int irq, unsigned int id)
···
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = ht_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)
drivers/acpi/acpi_lpss.c (+57 -108)
···
 
 #define LPSS_PRV_REG_COUNT 9
 
-struct lpss_shared_clock {
-	const char *name;
-	unsigned long rate;
-	struct clk *clk;
-};
+/* LPSS Flags */
+#define LPSS_CLK		BIT(0)
+#define LPSS_CLK_GATE		BIT(1)
+#define LPSS_CLK_DIVIDER	BIT(2)
+#define LPSS_LTR		BIT(3)
+#define LPSS_SAVE_CTX		BIT(4)
 
 struct lpss_private_data;
 
 struct lpss_device_desc {
-	bool clk_required;
-	const char *clkdev_name;
-	bool ltr_required;
+	unsigned int flags;
 	unsigned int prv_offset;
 	size_t prv_size_override;
-	bool clk_divider;
-	bool clk_gate;
-	bool save_ctx;
-	struct lpss_shared_clock *shared_clock;
 	void (*setup)(struct lpss_private_data *pdata);
 };
 
 static struct lpss_device_desc lpss_dma_desc = {
-	.clk_required = true,
-	.clkdev_name = "hclk",
+	.flags = LPSS_CLK,
 };
 
 struct lpss_private_data {
 	void __iomem *mmio_base;
 	resource_size_t mmio_size;
+	unsigned int fixed_clk_rate;
 	struct clk *clk;
 	const struct lpss_device_desc *dev_desc;
 	u32 prv_reg_ctx[LPSS_PRV_REG_COUNT];
 };
 
+/* UART Component Parameter Register */
+#define LPSS_UART_CPR		0xF4
+#define LPSS_UART_CPR_AFCE	BIT(4)
+
 static void lpss_uart_setup(struct lpss_private_data *pdata)
 {
 	unsigned int offset;
-	u32 reg;
+	u32 val;
 
 	offset = pdata->dev_desc->prv_offset + LPSS_TX_INT;
-	reg = readl(pdata->mmio_base + offset);
-	writel(reg | LPSS_TX_INT_MASK, pdata->mmio_base + offset);
+	val = readl(pdata->mmio_base + offset);
+	writel(val | LPSS_TX_INT_MASK, pdata->mmio_base + offset);
 
-	offset = pdata->dev_desc->prv_offset + LPSS_GENERAL;
-	reg = readl(pdata->mmio_base + offset);
-	writel(reg | LPSS_GENERAL_UART_RTS_OVRD, pdata->mmio_base + offset);
+	val = readl(pdata->mmio_base + LPSS_UART_CPR);
+	if (!(val & LPSS_UART_CPR_AFCE)) {
+		offset = pdata->dev_desc->prv_offset + LPSS_GENERAL;
+		val = readl(pdata->mmio_base + offset);
+		val |= LPSS_GENERAL_UART_RTS_OVRD;
+		writel(val, pdata->mmio_base + offset);
+	}
 }
 
-static void lpss_i2c_setup(struct lpss_private_data *pdata)
+static void byt_i2c_setup(struct lpss_private_data *pdata)
 {
 	unsigned int offset;
 	u32 val;
···
 	val = readl(pdata->mmio_base + offset);
 	val |= LPSS_RESETS_RESET_APB | LPSS_RESETS_RESET_FUNC;
 	writel(val, pdata->mmio_base + offset);
+
+	if (readl(pdata->mmio_base + pdata->dev_desc->prv_offset))
+		pdata->fixed_clk_rate = 133000000;
 }
 
-static struct lpss_device_desc wpt_dev_desc = {
-	.clk_required = true,
-	.prv_offset = 0x800,
-	.ltr_required = true,
-	.clk_divider = true,
-	.clk_gate = true,
-};
-
 static struct lpss_device_desc lpt_dev_desc = {
-	.clk_required = true,
+	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
 	.prv_offset = 0x800,
-	.ltr_required = true,
-	.clk_divider = true,
-	.clk_gate = true,
 };
 
 static struct lpss_device_desc lpt_i2c_dev_desc = {
-	.clk_required = true,
+	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR,
 	.prv_offset = 0x800,
-	.ltr_required = true,
-	.clk_gate = true,
 };
 
 static struct lpss_device_desc lpt_uart_dev_desc = {
-	.clk_required = true,
+	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
 	.prv_offset = 0x800,
-	.ltr_required = true,
-	.clk_divider = true,
-	.clk_gate = true,
 	.setup = lpss_uart_setup,
 };
 
 static struct lpss_device_desc lpt_sdio_dev_desc = {
+	.flags = LPSS_LTR,
 	.prv_offset = 0x1000,
 	.prv_size_override = 0x1018,
-	.ltr_required = true,
-};
-
-static struct lpss_shared_clock pwm_clock = {
-	.name = "pwm_clk",
-	.rate = 25000000,
 };
 
 static struct lpss_device_desc byt_pwm_dev_desc = {
-	.clk_required = true,
-	.save_ctx = true,
-	.shared_clock = &pwm_clock,
+	.flags = LPSS_SAVE_CTX,
 };
 
 static struct lpss_device_desc byt_uart_dev_desc = {
-	.clk_required = true,
+	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
 	.prv_offset = 0x800,
-	.clk_divider = true,
-	.clk_gate = true,
-	.save_ctx = true,
 	.setup = lpss_uart_setup,
 };
 
 static struct lpss_device_desc byt_spi_dev_desc = {
-	.clk_required = true,
+	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
 	.prv_offset = 0x400,
-	.clk_divider = true,
-	.clk_gate = true,
-	.save_ctx = true,
 };
 
 static struct lpss_device_desc byt_sdio_dev_desc = {
-	.clk_required = true,
-};
-
-static struct lpss_shared_clock i2c_clock = {
-	.name = "i2c_clk",
-	.rate = 100000000,
+	.flags = LPSS_CLK,
 };
 
 static struct lpss_device_desc byt_i2c_dev_desc = {
-	.clk_required = true,
+	.flags = LPSS_CLK | LPSS_SAVE_CTX,
 	.prv_offset = 0x800,
-	.save_ctx = true,
-	.shared_clock = &i2c_clock,
-	.setup = lpss_i2c_setup,
-};
-
-static struct lpss_shared_clock bsw_pwm_clock = {
-	.name = "pwm_clk",
-	.rate = 19200000,
-};
-
-static struct lpss_device_desc bsw_pwm_dev_desc = {
-	.clk_required = true,
-	.save_ctx = true,
-	.shared_clock = &bsw_pwm_clock,
+	.setup = byt_i2c_setup,
 };
 
 #else
···
 	{ "INT33FC", },
 
 	/* Braswell LPSS devices */
-	{ "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+	{ "80862288", LPSS_ADDR(byt_pwm_dev_desc) },
 	{ "8086228A", LPSS_ADDR(byt_uart_dev_desc) },
 	{ "8086228E", LPSS_ADDR(byt_spi_dev_desc) },
 	{ "808622C1", LPSS_ADDR(byt_i2c_dev_desc) },
···
 	{ "INT3436", LPSS_ADDR(lpt_sdio_dev_desc) },
 	{ "INT3437", },
 
-	{ "INT3438", LPSS_ADDR(wpt_dev_desc) },
+	/* Wildcat Point LPSS devices */
+	{ "INT3438", LPSS_ADDR(lpt_dev_desc) },
 
 	{ }
 };
···
 				struct lpss_private_data *pdata)
 {
 	const struct lpss_device_desc *dev_desc = pdata->dev_desc;
-	struct lpss_shared_clock *shared_clock = dev_desc->shared_clock;
 	const char *devname = dev_name(&adev->dev);
 	struct clk *clk = ERR_PTR(-ENODEV);
 	struct lpss_clk_data *clk_data;
···
 	clk_data = platform_get_drvdata(lpss_clk_dev);
 	if (!clk_data)
 		return -ENODEV;
-
-	if (dev_desc->clkdev_name) {
-		clk_register_clkdev(clk_data->clk, dev_desc->clkdev_name,
-				    devname);
-		return 0;
-	}
+	clk = clk_data->clk;
 
 	if (!pdata->mmio_base
 	    || pdata->mmio_size < dev_desc->prv_offset + LPSS_CLK_SIZE)
···
 	parent = clk_data->name;
 	prv_base = pdata->mmio_base + dev_desc->prv_offset;
 
-	if (shared_clock) {
-		clk = shared_clock->clk;
-		if (!clk) {
-			clk = clk_register_fixed_rate(NULL, shared_clock->name,
-						      "lpss_clk", 0,
-						      shared_clock->rate);
-			shared_clock->clk = clk;
-		}
-		parent = shared_clock->name;
+	if (pdata->fixed_clk_rate) {
+		clk = clk_register_fixed_rate(NULL, devname, parent, 0,
+					      pdata->fixed_clk_rate);
+		goto out;
 	}
 
-	if (dev_desc->clk_gate) {
+	if (dev_desc->flags & LPSS_CLK_GATE) {
 		clk = clk_register_gate(NULL, devname, parent, 0,
 					prv_base, 0, 0, NULL);
 		parent = devname;
 	}
 
-	if (dev_desc->clk_divider) {
+	if (dev_desc->flags & LPSS_CLK_DIVIDER) {
 		/* Prevent division by zero */
 		if (!readl(prv_base))
 			writel(LPSS_CLK_DIVIDER_DEF_MASK, prv_base);
···
 		kfree(parent);
 		kfree(clk_name);
 	}
-
+out:
 	if (IS_ERR(clk))
 		return PTR_ERR(clk);
···
 
 	pdata->dev_desc = dev_desc;
 
-	if (dev_desc->clk_required) {
+	if (dev_desc->setup)
+		dev_desc->setup(pdata);
+
+	if (dev_desc->flags & LPSS_CLK) {
 		ret = register_device_clock(adev, pdata);
 		if (ret) {
 			/* Skip the device, but continue the namespace scan. */
···
 			ret = 0;
 			goto err_out;
 		}
-
-	if (dev_desc->setup)
-		dev_desc->setup(pdata);
 
 	adev->driver_data = pdata;
 	pdev = acpi_create_platform_device(adev);
···
 
 	switch (action) {
 	case BUS_NOTIFY_BOUND_DRIVER:
-		if (pdata->dev_desc->save_ctx)
+		if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 			pdev->dev.pm_domain = &acpi_lpss_pm_domain;
 		break;
 	case BUS_NOTIFY_UNBOUND_DRIVER:
-		if (pdata->dev_desc->save_ctx)
+		if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 			pdev->dev.pm_domain = NULL;
 		break;
 	case BUS_NOTIFY_ADD_DEVICE:
-		if (pdata->dev_desc->ltr_required)
+		if (pdata->dev_desc->flags & LPSS_LTR)
 			return sysfs_create_group(&pdev->dev.kobj,
 						  &lpss_attr_group);
 	case BUS_NOTIFY_DEL_DEVICE:
-		if (pdata->dev_desc->ltr_required)
+		if (pdata->dev_desc->flags & LPSS_LTR)
 			sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
 	default:
 		break;
···
 {
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 
-	if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+	if (!pdata || !pdata->mmio_base || !(pdata->dev_desc->flags & LPSS_LTR))
 		return;
 
 	if (pdata->mmio_size >= pdata->dev_desc->prv_offset + LPSS_LTR_SIZE)
drivers/acpi/acpi_pnp.c (-4)
···
 	{"PNP0401"},	/* ECP Printer Port */
 	/* apple-gmux */
 	{"APP000B"},
-	/* fujitsu-laptop.c */
-	{"FUJ02bf"},
-	{"FUJ02B1"},
-	{"FUJ02E3"},
 	/* system */
 	{"PNP0c02"},	/* General ID for reserving resources */
 	{"PNP0c01"},	/* memory controller */
drivers/acpi/acpica/evxfgpe.c (+32)
···
 
 ACPI_EXPORT_SYMBOL(acpi_enable_all_runtime_gpes)
 
+/******************************************************************************
+ *
+ * FUNCTION:    acpi_enable_all_wakeup_gpes
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Enable all "wakeup" GPEs and disable all of the other GPEs, in
+ *              all GPE blocks.
+ *
+ ******************************************************************************/
+
+acpi_status acpi_enable_all_wakeup_gpes(void)
+{
+	acpi_status status;
+
+	ACPI_FUNCTION_TRACE(acpi_enable_all_wakeup_gpes);
+
+	status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
+	if (ACPI_FAILURE(status)) {
+		return_ACPI_STATUS(status);
+	}
+
+	status = acpi_hw_enable_all_wakeup_gpes();
+	(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
+
+	return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_enable_all_wakeup_gpes)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_install_gpe_block
drivers/acpi/acpica/hwgpe.c (+4 -4)
···
 	/* Examine each GPE Register within the block */
 
 	for (i = 0; i < gpe_block->register_count; i++) {
-		if (!gpe_block->register_info[i].enable_for_wake) {
-			continue;
-		}
 
-		/* Enable all "wake" GPEs in this register */
+		/*
+		 * Enable all "wake" GPEs in this register and disable the
+		 * remaining ones.
+		 */
 
 		status =
 		    acpi_hw_write(gpe_block->register_info[i].enable_for_wake,
drivers/acpi/acpica/utresrc.c (+3 -1)
···
 
 const char *acpi_gbl_ll_decode[] = {
 	"ActiveHigh",
-	"ActiveLow"
+	"ActiveLow",
+	"ActiveBoth",
+	"Reserved"
 };
 
 const char *acpi_gbl_max_decode[] = {
drivers/acpi/battery.c (+1 -1)
···
 	if (battery->power_unit && dmi_name_in_vendors("LENOVO")) {
 		const char *s;
 		s = dmi_get_system_info(DMI_PRODUCT_VERSION);
-		if (s && !strnicmp(s, "ThinkPad", 8)) {
+		if (s && !strncasecmp(s, "ThinkPad", 8)) {
 			dmi_walk(find_battery, battery);
 			if (test_bit(ACPI_BATTERY_QUIRK_THINKPAD_MAH,
 				     &battery->flags) &&
drivers/acpi/blacklist.c (+34 -2)
···
 	},
 
 	/*
-	 * These machines will power on immediately after shutdown when
-	 * reporting the Windows 2012 OSI.
+	 * The wireless hotkey does not work on those machines when
+	 * returning true for _OSI("Windows 2012")
 	 */
 	{
 	.callback = dmi_disable_osi_win8,
···
 	.matches = {
 		    DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
 		    DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7737"),
+		},
+	},
+	{
+	.callback = dmi_disable_osi_win8,
+	.ident = "Dell Inspiron 7537",
+	.matches = {
+		    DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+		    DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7537"),
+		},
+	},
+	{
+	.callback = dmi_disable_osi_win8,
+	.ident = "Dell Inspiron 5437",
+	.matches = {
+		    DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+		    DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5437"),
+		},
+	},
+	{
+	.callback = dmi_disable_osi_win8,
+	.ident = "Dell Inspiron 3437",
+	.matches = {
+		    DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+		    DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 3437"),
+		},
+	},
+	{
+	.callback = dmi_disable_osi_win8,
+	.ident = "Dell Vostro 3446",
+	.matches = {
+		    DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+		    DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3446"),
 		},
 	},
 
drivers/acpi/device_pm.c (+36 -35)
···
 };
 
 /**
+ * acpi_dev_pm_detach - Remove ACPI power management from the device.
+ * @dev: Device to take care of.
+ * @power_off: Whether or not to try to remove power from the device.
+ *
+ * Remove the device from the general ACPI PM domain and remove its wakeup
+ * notifier. If @power_off is set, additionally remove power from the device if
+ * possible.
+ *
+ * Callers must ensure proper synchronization of this function with power
+ * management callbacks.
+ */
+static void acpi_dev_pm_detach(struct device *dev, bool power_off)
+{
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+
+	if (adev && dev->pm_domain == &acpi_general_pm_domain) {
+		dev->pm_domain = NULL;
+		acpi_remove_pm_notifier(adev);
+		if (power_off) {
+			/*
+			 * If the device's PM QoS resume latency limit or flags
+			 * have been exposed to user space, they have to be
+			 * hidden at this point, so that they don't affect the
+			 * choice of the low-power state to put the device into.
+			 */
+			dev_pm_qos_hide_latency_limit(dev);
+			dev_pm_qos_hide_flags(dev);
+			acpi_device_wakeup(adev, ACPI_STATE_S0, false);
+			acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
+		}
+	}
+}
+
+/**
  * acpi_dev_pm_attach - Prepare device for ACPI power management.
  * @dev: Device to prepare.
  * @power_on: Whether or not to power on the device.
···
 		acpi_dev_pm_full_power(adev);
 		acpi_device_wakeup(adev, ACPI_STATE_S0, false);
 	}
+
+	dev->pm_domain->detach = acpi_dev_pm_detach;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(acpi_dev_pm_attach);
-
-/**
- * acpi_dev_pm_detach - Remove ACPI power management from the device.
- * @dev: Device to take care of.
- * @power_off: Whether or not to try to remove power from the device.
- *
- * Remove the device from the general ACPI PM domain and remove its wakeup
- * notifier. If @power_off is set, additionally remove power from the device if
- * possible.
- *
- * Callers must ensure proper synchronization of this function with power
- * management callbacks.
- */
-void acpi_dev_pm_detach(struct device *dev, bool power_off)
-{
-	struct acpi_device *adev = ACPI_COMPANION(dev);
-
-	if (adev && dev->pm_domain == &acpi_general_pm_domain) {
-		dev->pm_domain = NULL;
-		acpi_remove_pm_notifier(adev);
-		if (power_off) {
-			/*
-			 * If the device's PM QoS resume latency limit or flags
-			 * have been exposed to user space, they have to be
-			 * hidden at this point, so that they don't affect the
-			 * choice of the low-power state to put the device into.
-			 */
-			dev_pm_qos_hide_latency_limit(dev);
-			dev_pm_qos_hide_flags(dev);
-			acpi_device_wakeup(adev, ACPI_STATE_S0, false);
-			acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
-		}
-	}
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_detach);
 #endif /* CONFIG_PM */
drivers/acpi/fan.c (+8 -10)
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
 #include <linux/thermal.h>
 #include <linux/acpi.h>
-
-#define PREFIX "ACPI: "
 
 #define ACPI_FAN_CLASS "fan"
 #define ACPI_FAN_FILE_STATE "state"
···
 };
 
 /* --------------------------------------------------------------------------
-   Driver Interface
-   -------------------------------------------------------------------------- */
+ * Driver Interface
+ * --------------------------------------------------------------------------
+ */
 
 static int acpi_fan_add(struct acpi_device *device)
 {
···
 
 	result = acpi_bus_update_power(device->handle, NULL);
 	if (result) {
-		printk(KERN_ERR PREFIX "Setting initial power state\n");
+		dev_err(&device->dev, "Setting initial power state\n");
 		goto end;
 	}
 
···
 					  &device->dev.kobj,
 					  "device");
 	if (result)
-		dev_err(&device->dev, "Failed to create sysfs link "
-			"'device'\n");
+		dev_err(&device->dev, "Failed to create sysfs link 'device'\n");
 
-	printk(KERN_INFO PREFIX "%s [%s] (%s)\n",
+	dev_info(&device->dev, "ACPI: %s [%s] (%s)\n",
 	       acpi_device_name(device), acpi_device_bid(device),
 	       !device->power.state ? "on" : "off");
 
···
 
 	result = acpi_bus_update_power(to_acpi_device(dev)->handle, NULL);
 	if (result)
-		printk(KERN_ERR PREFIX "Error updating fan power state\n");
+		dev_err(dev, "Error updating fan power state\n");
 
 	return result;
 }
drivers/acpi/osl.c (+11 -1)
···
 			       osi_linux.dmi ? " via DMI" : "");
 	}
 
+	if (!strcmp("Darwin", interface)) {
+		/*
+		 * Apple firmware will behave poorly if it receives positive
+		 * answers to "Darwin" and any other OS. Respond positively
+		 * to Darwin and then disable all other vendor strings.
+		 */
+		acpi_update_interfaces(ACPI_DISABLE_ALL_VENDOR_STRINGS);
+		supported = ACPI_UINT32_MAX;
+	}
+
 	return supported;
 }
···
 
 	acpi_irq_handler = handler;
 	acpi_irq_context = context;
-	if (request_irq(irq, acpi_irq, IRQF_SHARED | IRQF_NO_SUSPEND, "acpi", acpi_irq)) {
+	if (request_irq(irq, acpi_irq, IRQF_SHARED, "acpi", acpi_irq)) {
 		printk(KERN_ERR PREFIX "SCI (IRQ%d) allocation failed\n", irq);
 		acpi_irq_handler = NULL;
 		return AE_NOT_ACQUIRED;
drivers/acpi/pci_root.c (+14)
···
 #include <linux/pci-aspm.h>
 #include <linux/acpi.h>
 #include <linux/slab.h>
+#include <linux/dmi.h>
 #include <acpi/apei.h>	/* for acpi_hest_init() */
 
 #include "internal.h"
···
 	acpi_status status;
 	struct acpi_device *device = root->device;
 	acpi_handle handle = device->handle;
+
+	/*
+	 * Apple always return failure on _OSC calls when _OSI("Darwin") has
+	 * been called successfully. We know the feature set supported by the
+	 * platform, so avoid calling _OSC at all
+	 */
+
+	if (dmi_match(DMI_SYS_VENDOR, "Apple Inc.")) {
+		root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL;
+		decode_osc_control(root, "OS assumes control of",
+				   root->osc_control_set);
+		return;
+	}
 
 	/*
 	 * All supported architectures that use ACPI have support for
drivers/acpi/processor_core.c (+3 -3)
···
 			  u32 acpi_id, int *apic_id)
 {
 	struct acpi_madt_local_apic *lapic =
-		(struct acpi_madt_local_apic *)entry;
+		container_of(entry, struct acpi_madt_local_apic, header);
 
 	if (!(lapic->lapic_flags & ACPI_MADT_ENABLED))
 		return -ENODEV;
···
 			    int device_declaration, u32 acpi_id, int *apic_id)
 {
 	struct acpi_madt_local_x2apic *apic =
-		(struct acpi_madt_local_x2apic *)entry;
+		container_of(entry, struct acpi_madt_local_x2apic, header);
 
 	if (!(apic->lapic_flags & ACPI_MADT_ENABLED))
 		return -ENODEV;
···
 			  int device_declaration, u32 acpi_id, int *apic_id)
 {
 	struct acpi_madt_local_sapic *lsapic =
-		(struct acpi_madt_local_sapic *)entry;
+		container_of(entry, struct acpi_madt_local_sapic, header);
 
 	if (!(lsapic->lapic_flags & ACPI_MADT_ENABLED))
 		return -ENODEV;
drivers/acpi/sbs.c (+65 -15)
···
 #include <linux/jiffies.h>
 #include <linux/delay.h>
 #include <linux/power_supply.h>
+#include <linux/dmi.h>
 
 #include "sbshc.h"
 #include "battery.h"
···
 static unsigned int cache_time = 1000;
 module_param(cache_time, uint, 0644);
 MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
+
+static bool sbs_manager_broken;
 
 #define MAX_SBS_BAT 4
 #define ACPI_SBS_BLOCK_MAX 32
···
 	u8 batteries_supported:4;
 	u8 manager_present:1;
 	u8 charger_present:1;
+	u8 charger_exists:1;
 
 #define to_acpi_sbs(x) container_of(x, struct acpi_sbs, charger)
···
 
 	result = acpi_smbus_read(sbs->hc, SMBUS_READ_WORD, ACPI_SBS_CHARGER,
 				 0x13, (u8 *) & status);
-	if (!result)
-		sbs->charger_present = (status >> 15) & 0x1;
-	return result;
+
+	if (result)
+		return result;
+
+	/*
+	 * The spec requires that bit 4 always be 1. If it's not set, assume
+	 * that the implementation doesn't support an SBS charger
+	 */
+	if (!((status >> 4) & 0x1))
+		return -ENODEV;
+
+	sbs->charger_present = (status >> 15) & 0x1;
+	return 0;
 }
 
 static ssize_t acpi_battery_alarm_show(struct device *dev,
···
 			ACPI_SBS_MANAGER, 0x01, (u8 *)&state, 2);
 	} else if (battery->id == 0)
 		battery->present = 1;
+
 	if (result || !battery->present)
 		return result;
 
 	if (saved_present != battery->present) {
 		battery->update_time = 0;
 		result = acpi_battery_get_info(battery);
-		if (result)
+		if (result) {
+			battery->present = 0;
 			return result;
+		}
 	}
 	result = acpi_battery_get_state(battery);
+	if (result)
+		battery->present = 0;
 	return result;
 }
···
 	result = power_supply_register(&sbs->device->dev, &battery->bat);
 	if (result)
 		goto end;
+
 	result = device_create_file(battery->bat.dev, &alarm_attr);
 	if (result)
 		goto end;
···
 	if (result)
 		goto end;
 
+	sbs->charger_exists = 1;
 	sbs->charger.name = "sbs-charger";
 	sbs->charger.type = POWER_SUPPLY_TYPE_MAINS;
 	sbs->charger.properties = sbs_ac_props;
···
 	struct acpi_battery *bat;
 	u8 saved_charger_state = sbs->charger_present;
 	u8 saved_battery_state;
-	acpi_ac_get_present(sbs);
-	if (sbs->charger_present != saved_charger_state)
-		kobject_uevent(&sbs->charger.dev->kobj, KOBJ_CHANGE);
+
+	if (sbs->charger_exists) {
+		acpi_ac_get_present(sbs);
+		if (sbs->charger_present != saved_charger_state)
+			kobject_uevent(&sbs->charger.dev->kobj, KOBJ_CHANGE);
+	}
 
 	if (sbs->manager_present) {
 		for (id = 0; id < MAX_SBS_BAT; ++id) {
···
 	}
 }
 
+static int disable_sbs_manager(const struct dmi_system_id *d)
+{
+	sbs_manager_broken = true;
+	return 0;
+}
+
+static struct dmi_system_id acpi_sbs_dmi_table[] = {
+	{
+		.callback = disable_sbs_manager,
+		.ident = "Apple",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc.")
+		},
+	},
+	{ },
+};
+
 static int acpi_sbs_add(struct acpi_device *device)
 {
 	struct acpi_sbs *sbs;
 	int result = 0;
 	int id;
+
+	dmi_check_system(acpi_sbs_dmi_table);
 
 	sbs = kzalloc(sizeof(struct acpi_sbs), GFP_KERNEL);
 	if (!sbs) {
···
 	device->driver_data = sbs;
 
 	result = acpi_charger_add(sbs);
-	if (result)
+	if (result && result != -ENODEV)
 		goto end;
 
-	result = acpi_manager_get_info(sbs);
-	if (!result) {
-		sbs->manager_present = 1;
-		for (id = 0; id < MAX_SBS_BAT; ++id)
-			if ((sbs->batteries_supported & (1 << id)))
-				acpi_battery_add(sbs, id);
-	} else
+	result = 0;
+
+	if (!sbs_manager_broken) {
+		result = acpi_manager_get_info(sbs);
+		if (!result) {
+			sbs->manager_present = 0;
+			for (id = 0; id < MAX_SBS_BAT; ++id)
+				if ((sbs->batteries_supported & (1 << id)))
+					acpi_battery_add(sbs, id);
+		}
+	}
+
+	if (!sbs->manager_present)
 		acpi_battery_add(sbs, 0);
+
 	acpi_smbus_register_callback(sbs->hc, acpi_sbs_callback, sbs);
 end:
 	if (result)
drivers/acpi/sleep.c (+16)
···
 #include <linux/irq.h>
 #include <linux/dmi.h>
 #include <linux/device.h>
+#include <linux/interrupt.h>
 #include <linux/suspend.h>
 #include <linux/reboot.h>
 #include <linux/acpi.h>
···
 	return 0;
 }
 
+static int acpi_freeze_prepare(void)
+{
+	acpi_enable_all_wakeup_gpes();
+	enable_irq_wake(acpi_gbl_FADT.sci_interrupt);
+	return 0;
+}
+
+static void acpi_freeze_restore(void)
+{
+	disable_irq_wake(acpi_gbl_FADT.sci_interrupt);
+	acpi_enable_all_runtime_gpes();
+}
+
 static void acpi_freeze_end(void)
 {
 	acpi_scan_lock_release();
···
 
 static const struct platform_freeze_ops acpi_freeze_ops = {
 	.begin = acpi_freeze_begin,
+	.prepare = acpi_freeze_prepare,
+	.restore = acpi_freeze_restore,
 	.end = acpi_freeze_end,
 };
 
drivers/acpi/utils.c (-1)
···
  * @uuid: UUID of requested functions, should be 16 bytes at least
  * @rev: revision number of requested functions
  * @funcs: bitmap of requested functions
- * @exclude: excluding special value, used to support i915 and nouveau
  *
  * Evaluate device's _DSM method to check whether it supports requested
  * functions. Currently only support 64 functions at maximum, should be
drivers/acpi/video.c (+26 -265)
···
 	return 0;
 }
 
-static int __init video_set_use_native_backlight(const struct dmi_system_id *d)
-{
-	use_native_backlight_dmi = true;
-	return 0;
-}
-
 static int __init video_disable_native_backlight(const struct dmi_system_id *d)
 {
 	use_native_backlight_dmi = false;
···
 	.matches = {
 		DMI_MATCH(DMI_BOARD_VENDOR, "Acer"),
 		DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7720"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad X230",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X230"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad T430 and T430s",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T430"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad T430",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "2349D15"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad T431s",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "20AACTO1WW"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad Edge E530",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "3259A2G"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad Edge E530",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "3259CTO"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad Edge E530",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "3259HJG"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad W530",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ThinkPad X1 Carbon",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X1 Carbon"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Lenovo Yoga 13",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo IdeaPad Yoga 13"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Lenovo Yoga 2 11",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Yoga 2 11"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Thinkpad Helix",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad Helix"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Dell Inspiron 7520",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
-		DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7520"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire 5733Z",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-		DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5733Z"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire 5742G",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-		DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5742G"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire V5-171",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-		DMI_MATCH(DMI_PRODUCT_NAME, "V5-171"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire V5-431",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-		DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-431"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire V5-471G",
-	.matches = {
-		DMI_MATCH(DMI_BOARD_VENDOR, "Acer"),
-		DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-471G"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer TravelMate B113",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-		DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire V5-572G",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer Aspire"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "V5-572G/Dazzle_CX"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "Acer Aspire V5-573G",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Acer Aspire"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "V5-573G/Dazzle_HW"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "ASUS Zenbook Prime UX31A",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
-		DMI_MATCH(DMI_PRODUCT_NAME, "UX31A"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "HP ProBook 4340s",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4340s"),
-		},
-	},
-	{
-	.callback = video_set_use_native_backlight,
-	.ident = "HP ProBook 4540s",
-	.matches = {
-		DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-		DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4540s"),
-		},
-	},
-	{
657 - .callback = video_set_use_native_backlight, 658 - .ident = "HP ProBook 2013 models", 659 - .matches = { 660 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 661 - DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook "), 662 - DMI_MATCH(DMI_PRODUCT_NAME, " G1"), 663 - }, 664 - }, 665 - { 666 - .callback = video_set_use_native_backlight, 667 - .ident = "HP EliteBook 2013 models", 668 - .matches = { 669 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 670 - DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook "), 671 - DMI_MATCH(DMI_PRODUCT_NAME, " G1"), 672 - }, 673 - }, 674 - { 675 - .callback = video_set_use_native_backlight, 676 - .ident = "HP EliteBook 2014 models", 677 - .matches = { 678 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 679 - DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook "), 680 - DMI_MATCH(DMI_PRODUCT_NAME, " G2"), 681 - }, 682 - }, 683 - { 684 - .callback = video_set_use_native_backlight, 685 - .ident = "HP ZBook 14", 686 - .matches = { 687 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 688 - DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 14"), 689 - }, 690 - }, 691 - { 692 - .callback = video_set_use_native_backlight, 693 - .ident = "HP ZBook 15", 694 - .matches = { 695 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 696 - DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 15"), 697 - }, 698 - }, 699 - { 700 - .callback = video_set_use_native_backlight, 701 - .ident = "HP ZBook 17", 702 - .matches = { 703 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 704 - DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 17"), 705 - }, 706 - }, 707 - { 708 - .callback = video_set_use_native_backlight, 709 - .ident = "HP EliteBook 8470p", 710 - .matches = { 711 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 712 - DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8470p"), 713 - }, 714 - }, 715 - { 716 - .callback = video_set_use_native_backlight, 717 - .ident = "HP EliteBook 8780w", 718 - .matches = { 719 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 720 - DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8780w"), 721 468 }, 722 
469 }, 723 470 ··· 1154 1419 } 1155 1420 } 1156 1421 1422 + static bool acpi_video_device_in_dod(struct acpi_video_device *device) 1423 + { 1424 + struct acpi_video_bus *video = device->video; 1425 + int i; 1426 + 1427 + /* If we have a broken _DOD, no need to test */ 1428 + if (!video->attached_count) 1429 + return true; 1430 + 1431 + for (i = 0; i < video->attached_count; i++) { 1432 + if (video->attached_array[i].bind_info == device) 1433 + return true; 1434 + } 1435 + 1436 + return false; 1437 + } 1438 + 1157 1439 /* 1158 1440 * Arg: 1159 1441 * video : video bus device ··· 1609 1857 int result; 1610 1858 static int count; 1611 1859 char *name; 1860 + 1861 + /* 1862 + * Do not create backlight device for video output 1863 + * device that is not in the enumerated list. 1864 + */ 1865 + if (!acpi_video_device_in_dod(device)) { 1866 + dev_dbg(&device->dev->dev, "not in _DOD list, ignore\n"); 1867 + return; 1868 + } 1612 1869 1613 1870 result = acpi_video_init_brightness(device); 1614 1871 if (result)
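The membership test added above is simple enough to model in plain userspace C. The sketch below uses hypothetical stand-in types (`video_bus`, `attachment`) rather than the kernel's `acpi_video_*` structures, but preserves the two rules: an empty (broken) _DOD list accepts every device, and otherwise the device must appear in `attached_array`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures. */
struct video_device {
	int id;
};

struct attachment {
	struct video_device *bind_info;
};

struct video_bus {
	int attached_count;		/* 0 models a broken _DOD */
	struct attachment *attached_array;
};

/* Mirrors the logic of acpi_video_device_in_dod(): accept everything when
 * the enumerated list is empty, otherwise require list membership. */
bool device_in_dod(struct video_bus *bus, struct video_device *dev)
{
	int i;

	if (!bus->attached_count)
		return true;

	for (i = 0; i < bus->attached_count; i++)
		if (bus->attached_array[i].bind_info == dev)
			return true;

	return false;
}
```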
drivers/acpi/video_detect.c  (+8)

···
 		DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"),
 		},
 	},
+	{
+	 .callback = video_detect_force_vendor,
+	 .ident = "Lenovo IdeaPad Z570",
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+		DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
+		},
+	},
 	{ },
 };
 
drivers/amba/bus.c  (+11 -2)

···
 #include <linux/io.h>
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/amba/bus.h>
 #include <linux/sizes.h>
 
···
 	int ret;
 
 	do {
-		ret = amba_get_enable_pclk(pcdev);
-		if (ret)
+		ret = dev_pm_domain_attach(dev, true);
+		if (ret == -EPROBE_DEFER)
 			break;
+
+		ret = amba_get_enable_pclk(pcdev);
+		if (ret) {
+			dev_pm_domain_detach(dev, true);
+			break;
+		}
 
 		pm_runtime_get_noresume(dev);
 		pm_runtime_set_active(dev);
···
 		pm_runtime_put_noidle(dev);
 
 		amba_put_disable_pclk(pcdev);
+		dev_pm_domain_detach(dev, true);
 	} while (0);
 
 	return ret;
···
 	pm_runtime_put_noidle(dev);
 
 	amba_put_disable_pclk(pcdev);
+	dev_pm_domain_detach(dev, true);
 
 	return ret;
 }
drivers/base/platform.c  (+9 -7)

···
 #include <linux/err.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/idr.h>
 #include <linux/acpi.h>
 #include <linux/clk/clk-conf.h>
···
 	if (ret < 0)
 		return ret;
 
-	acpi_dev_pm_attach(_dev, true);
-
-	ret = drv->probe(dev);
-	if (ret)
-		acpi_dev_pm_detach(_dev, true);
+	ret = dev_pm_domain_attach(_dev, true);
+	if (ret != -EPROBE_DEFER) {
+		ret = drv->probe(dev);
+		if (ret)
+			dev_pm_domain_detach(_dev, true);
+	}
 
 	if (drv->prevent_deferred_probe && ret == -EPROBE_DEFER) {
 		dev_warn(_dev, "probe deferral not supported\n");
···
 	int ret;
 
 	ret = drv->remove(dev);
-	acpi_dev_pm_detach(_dev, true);
+	dev_pm_domain_detach(_dev, true);
 
 	return ret;
 }
···
 	struct platform_device *dev = to_platform_device(_dev);
 
 	drv->shutdown(dev);
-	acpi_dev_pm_detach(_dev, true);
+	dev_pm_domain_detach(_dev, true);
 }
 
 /**
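The new probe ordering in platform_drv_probe() is worth spelling out: attach the PM domain first, skip the driver's probe entirely if attachment must be deferred, and detach again if probe fails. The userspace model below is a hedged sketch with hypothetical stand-ins (`fake_domain_attach`, `probe_ret`) for the kernel calls; it only demonstrates the control flow, not the real APIs.

```c
#include <assert.h>
#include <stdbool.h>

#define EPROBE_DEFER 517	/* same numeric value the kernel uses */

/* Hypothetical device model: tracks whether a PM domain is attached. */
struct device {
	bool attached;
	int attach_ret;		/* what dev_pm_domain_attach() would return */
	int probe_ret;		/* what the driver's ->probe() would return */
};

int fake_domain_attach(struct device *dev)
{
	if (dev->attach_ret == 0)
		dev->attached = true;
	return dev->attach_ret;
}

void fake_domain_detach(struct device *dev)
{
	dev->attached = false;
}

/* Mirrors the reworked platform_drv_probe() ordering: probing is skipped
 * when attaching must be deferred, and the domain is detached again if
 * ->probe() fails so nothing is leaked. */
int probe_flow(struct device *dev)
{
	int ret = fake_domain_attach(dev);

	if (ret != -EPROBE_DEFER) {
		ret = dev->probe_ret;	/* stands in for drv->probe(dev) */
		if (ret)
			fake_domain_detach(dev);
	}
	return ret;
}
```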
drivers/base/power/clock_ops.c  (+15 -4)

···
 
 	spin_lock_irqsave(&psd->lock, flags);
 
-	list_for_each_entry_reverse(ce, &psd->clock_list, node)
-		clk_disable(ce->clk);
+	list_for_each_entry_reverse(ce, &psd->clock_list, node) {
+		if (ce->status < PCE_STATUS_ERROR) {
+			if (ce->status == PCE_STATUS_ENABLED)
+				clk_disable(ce->clk);
+			ce->status = PCE_STATUS_ACQUIRED;
+		}
+	}
 
 	spin_unlock_irqrestore(&psd->lock, flags);
 
···
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
···
 
 	spin_lock_irqsave(&psd->lock, flags);
 
-	list_for_each_entry(ce, &psd->clock_list, node)
-		__pm_clk_enable(dev, ce->clk);
+	list_for_each_entry(ce, &psd->clock_list, node) {
+		if (ce->status < PCE_STATUS_ERROR) {
+			ret = __pm_clk_enable(dev, ce->clk);
+			if (!ret)
+				ce->status = PCE_STATUS_ENABLED;
+		}
+	}
 
 	spin_unlock_irqrestore(&psd->lock, flags);
 
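The point of the change above is that the suspend/resume paths now consult each clock entry's status instead of blindly calling clk_disable()/clk_enable(), so clocks that were never successfully enabled (or hit an error) are left alone and the enable count stays balanced. A minimal userspace model of that state machine, using hypothetical names mirroring the kernel's PCE_STATUS_* values:

```c
#include <assert.h>

/* Hypothetical mirror of the pm_clock_entry status values. */
enum pce_status {
	PCE_STATUS_NONE = 0,
	PCE_STATUS_ACQUIRED,
	PCE_STATUS_ENABLED,
	PCE_STATUS_ERROR,
};

struct clk_entry {
	enum pce_status status;
	int enable_count;	/* models the clk_enable()/clk_disable() balance */
};

/* Suspend path: only entries currently ENABLED get a clk_disable(), and
 * every non-ERROR entry drops back to ACQUIRED, so a later resume knows
 * exactly which clocks it owns. */
void suspend_entry(struct clk_entry *ce)
{
	if (ce->status < PCE_STATUS_ERROR) {
		if (ce->status == PCE_STATUS_ENABLED)
			ce->enable_count--;
		ce->status = PCE_STATUS_ACQUIRED;
	}
}

/* Resume path: try to re-enable only non-ERROR entries; mark them ENABLED
 * only when the (modeled) clk_enable() result is success. */
void resume_entry(struct clk_entry *ce, int enable_ret)
{
	if (ce->status < PCE_STATUS_ERROR) {
		if (!enable_ret) {
			ce->enable_count++;
			ce->status = PCE_STATUS_ENABLED;
		}
	}
}
```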
drivers/base/power/common.c  (+52)

···
 #include <linux/export.h>
 #include <linux/slab.h>
 #include <linux/pm_clock.h>
+#include <linux/acpi.h>
+#include <linux/pm_domain.h>
 
 /**
  * dev_pm_get_subsys_data - Create or refcount power.subsys_data for device.
···
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_put_subsys_data);
+
+/**
+ * dev_pm_domain_attach - Attach a device to its PM domain.
+ * @dev: Device to attach.
+ * @power_on: Used to indicate whether we should power on the device.
+ *
+ * The @dev may only be attached to a single PM domain. By iterating through
+ * the available alternatives we try to find a valid PM domain for the device.
+ * As attachment succeeds, the ->detach() callback in the struct dev_pm_domain
+ * should be assigned by the corresponding attach function.
+ *
+ * This function should typically be invoked from subsystem level code during
+ * the probe phase, especially by subsystems holding devices that require
+ * power management through PM domains.
+ *
+ * Callers must ensure proper synchronization of this function with power
+ * management callbacks.
+ *
+ * Returns 0 on successfully attached PM domain or a negative error code.
+ */
+int dev_pm_domain_attach(struct device *dev, bool power_on)
+{
+	int ret;
+
+	ret = acpi_dev_pm_attach(dev, power_on);
+	if (ret)
+		ret = genpd_dev_pm_attach(dev);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_domain_attach);
+
+/**
+ * dev_pm_domain_detach - Detach a device from its PM domain.
+ * @dev: Device to detach.
+ * @power_off: Used to indicate whether we should power off the device.
+ *
+ * This function will reverse the actions from dev_pm_domain_attach() and thus
+ * try to detach the @dev from its PM domain. Typically it should be invoked
+ * from subsystem level code during the remove phase.
+ *
+ * Callers must ensure proper synchronization of this function with power
+ * management callbacks.
+ */
+void dev_pm_domain_detach(struct device *dev, bool power_off)
+{
+	if (dev->pm_domain && dev->pm_domain->detach)
+		dev->pm_domain->detach(dev, power_off);
+}
+EXPORT_SYMBOL_GPL(dev_pm_domain_detach);
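The helper pair added above encodes two ideas: attach tries the ACPI PM domain first and only falls back to a generic (DT-based) PM domain if ACPI declines, and detach is delegated to whichever domain attached, via the ->detach() callback it installed. A hedged userspace model (the `acpi_ret`/`genpd_ret` fields and `record_detach` helper are hypothetical stand-ins for the real kernel paths):

```c
#include <assert.h>
#include <stddef.h>

struct device;

/* Hypothetical stand-in for struct dev_pm_domain with a ->detach() hook. */
struct pm_domain {
	void (*detach)(struct device *dev, int power_off);
};

struct device {
	struct pm_domain *pm_domain;
	int acpi_ret;	/* result the ACPI attach path would return */
	int genpd_ret;	/* result the genpd attach path would return */
};

/* Mirrors dev_pm_domain_attach(): try the ACPI PM domain first and fall
 * back to a generic PM domain only if the ACPI path reports an error. */
int domain_attach(struct device *dev)
{
	int ret = dev->acpi_ret;	/* acpi_dev_pm_attach(dev, power_on) */

	if (ret)
		ret = dev->genpd_ret;	/* genpd_dev_pm_attach(dev) */
	return ret;
}

/* Mirrors dev_pm_domain_detach(): only the domain that attached knows how
 * to undo it, so detach simply invokes its installed callback, if any. */
void domain_detach(struct device *dev, int power_off)
{
	if (dev->pm_domain && dev->pm_domain->detach)
		dev->pm_domain->detach(dev, power_off);
}

int detach_calls;

void record_detach(struct device *dev, int power_off)
{
	(void)dev;
	(void)power_off;
	detach_calls++;
}
```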
drivers/base/power/domain.c  (+514 -351)
···
 #include <linux/kernel.h>
 #include <linux/io.h>
+#include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_qos.h>
···
 	__routine = genpd->dev_ops.callback;			\
 	if (__routine) {					\
 		__ret = __routine(dev);				\
-	} else {						\
-		__routine = dev_gpd_data(dev)->ops.callback;	\
-		if (__routine)					\
-			__ret = __routine(dev);			\
 	}							\
 	__ret;							\
 })
···
 	mutex_unlock(&gpd_list_lock);
 	return genpd;
 }
-
-#ifdef CONFIG_PM
 
 struct generic_pm_domain *dev_to_genpd(struct device *dev)
 {
···
 {
 	s64 usecs64;
 
-	if (!genpd->cpu_data)
+	if (!genpd->cpuidle_data)
 		return;
 
 	usecs64 = genpd->power_on_latency_ns;
 	do_div(usecs64, NSEC_PER_USEC);
-	usecs64 += genpd->cpu_data->saved_exit_latency;
-	genpd->cpu_data->idle_state->exit_latency = usecs64;
+	usecs64 += genpd->cpuidle_data->saved_exit_latency;
+	genpd->cpuidle_data->idle_state->exit_latency = usecs64;
 }
 
 /**
···
 		return 0;
 	}
 
-	if (genpd->cpu_data) {
+	if (genpd->cpuidle_data) {
 		cpuidle_pause_and_lock();
-		genpd->cpu_data->idle_state->disabled = true;
+		genpd->cpuidle_data->idle_state->disabled = true;
 		cpuidle_resume_and_unlock();
 		goto out;
 	}
···
 	genpd = pm_genpd_lookup_name(domain_name);
 	return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
 }
-
-#endif /* CONFIG_PM */
 
 #ifdef CONFIG_PM_RUNTIME
···
 * Queue up the execution of pm_genpd_poweroff() unless it's already been done
 * before.
 */
-void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
+static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
 {
 	queue_work(pm_wq, &genpd->power_off_work);
 }
···
 		}
 	}
 
-	if (genpd->cpu_data) {
+	if (genpd->cpuidle_data) {
 		/*
-		 * If cpu_data is set, cpuidle should turn the domain off when
-		 * the CPU in it is idle. In that case we don't decrement the
-		 * subdomain counts of the master domains, so that power is not
-		 * removed from the current domain prematurely as a result of
-		 * cutting off the masters' power.
+		 * If cpuidle_data is set, cpuidle should turn the domain off
+		 * when the CPU in it is idle. In that case we don't decrement
+		 * the subdomain counts of the master domains, so that power is
+		 * not removed from the current domain prematurely as a result
+		 * of cutting off the masters' power.
 		 */
 		genpd->status = GPD_STATE_POWER_OFF;
 		cpuidle_pause_and_lock();
-		genpd->cpu_data->idle_state->disabled = false;
+		genpd->cpuidle_data->idle_state->disabled = false;
 		cpuidle_resume_and_unlock();
 		goto out;
 	}
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	might_sleep_if(!genpd->dev_irq_safe);
-
 	stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
 	if (stop_ok && !stop_ok(dev))
 		return -EBUSY;
···
 	genpd = dev_to_genpd(dev);
 	if (IS_ERR(genpd))
 		return -EINVAL;
-
-	might_sleep_if(!genpd->dev_irq_safe);
 
 	/* If power.irq_safe, the PM domain is never powered off. */
 	if (dev->power.irq_safe)
···
 	mutex_unlock(&gpd_list_lock);
 }
 
+static int __init genpd_poweroff_unused(void)
+{
+	pm_genpd_poweroff_unused();
+	return 0;
+}
+late_initcall(genpd_poweroff_unused);
+
 #else
 
 static inline int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
···
 {
 	return NOTIFY_DONE;
 }
+
+static inline void
+genpd_queue_power_off_work(struct generic_pm_domain *genpd) {}
 
 static inline void genpd_power_off_work_fn(struct work_struct *work) {}
···
 					struct device *dev)
 {
 	return GENPD_DEV_CALLBACK(genpd, bool, active_wakeup, dev);
-}
-
-static int genpd_suspend_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, suspend, dev);
-}
-
-static int genpd_suspend_late(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, suspend_late, dev);
-}
-
-static int genpd_resume_early(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, resume_early, dev);
-}
-
-static int genpd_resume_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, resume, dev);
-}
-
-static int genpd_freeze_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, freeze, dev);
-}
-
-static int genpd_freeze_late(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, freeze_late, dev);
-}
-
-static int genpd_thaw_early(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, thaw_early, dev);
-}
-
-static int genpd_thaw_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, thaw, dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_suspend_dev(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_suspend(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_suspend_late(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_suspend_late(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_resume_early(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_resume_early(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_resume_dev(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_resume(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_freeze_dev(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_freeze(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_freeze_late(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_freeze_late(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_thaw_early(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_thaw_early(dev);
 }
 
 /**
···
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	return genpd->suspend_power_off ? 0 : genpd_thaw_dev(genpd, dev);
+	return genpd->suspend_power_off ? 0 : pm_generic_thaw(dev);
 }
 
 /**
···
 }
 
 /**
- * pm_genpd_syscore_switch - Switch power during system core suspend or resume.
+ * genpd_syscore_switch - Switch power during system core suspend or resume.
  * @dev: Device that normally is marked as "always on" to switch power for.
  *
  * This routine may only be called during the system core (syscore) suspend or
  * resume phase for devices whose "always on" flags are set.
  */
-void pm_genpd_syscore_switch(struct device *dev, bool suspend)
+static void genpd_syscore_switch(struct device *dev, bool suspend)
 {
 	struct generic_pm_domain *genpd;
···
 		genpd->suspended_count--;
 	}
 }
-EXPORT_SYMBOL_GPL(pm_genpd_syscore_switch);
+
+void pm_genpd_syscore_poweroff(struct device *dev)
+{
+	genpd_syscore_switch(dev, true);
+}
+EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweroff);
+
+void pm_genpd_syscore_poweron(struct device *dev)
+{
+	genpd_syscore_switch(dev, false);
+}
+EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
 
 #else
···
 
 	spin_unlock_irq(&dev->power.lock);
 
+	if (genpd->attach_dev)
+		genpd->attach_dev(dev);
+
 	mutex_lock(&gpd_data->lock);
 	gpd_data->base.dev = dev;
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
···
 
 	return ret;
 }
-
-/**
- * __pm_genpd_of_add_device - Add a device to an I/O PM domain.
- * @genpd_node: Device tree node pointer representing a PM domain to which the
- * the device is added to.
- * @dev: Device to be added.
- * @td: Set of PM QoS timing parameters to attach to the device.
- */
-int __pm_genpd_of_add_device(struct device_node *genpd_node, struct device *dev,
-			     struct gpd_timing_data *td)
-{
-	struct generic_pm_domain *genpd = NULL, *gpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	if (IS_ERR_OR_NULL(genpd_node) || IS_ERR_OR_NULL(dev))
-		return -EINVAL;
-
-	mutex_lock(&gpd_list_lock);
-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-		if (gpd->of_node == genpd_node) {
-			genpd = gpd;
-			break;
-		}
-	}
-	mutex_unlock(&gpd_list_lock);
-
-	if (!genpd)
-		return -EINVAL;
-
-	return __pm_genpd_add_device(genpd, dev, td);
-}
 
 /**
  * __pm_genpd_name_add_device - Find I/O PM domain and add a device to it.
···
 
 	genpd->device_count--;
 	genpd->max_off_time_changed = true;
+
+	if (genpd->detach_dev)
+		genpd->detach_dev(dev);
 
 	spin_lock_irq(&dev->power.lock);
···
 }
 
 /**
- * pm_genpd_add_callbacks - Add PM domain callbacks to a given device.
- * @dev: Device to add the callbacks to.
- * @ops: Set of callbacks to add.
- * @td: Timing data to add to the device along with the callbacks (optional).
- *
- * Every call to this routine should be balanced with a call to
- * __pm_genpd_remove_callbacks() and they must not be nested.
- */
-int pm_genpd_add_callbacks(struct device *dev, struct gpd_dev_ops *ops,
-			   struct gpd_timing_data *td)
-{
-	struct generic_pm_domain_data *gpd_data_new, *gpd_data = NULL;
-	int ret = 0;
-
-	if (!(dev && ops))
-		return -EINVAL;
-
-	gpd_data_new = __pm_genpd_alloc_dev_data(dev);
-	if (!gpd_data_new)
-		return -ENOMEM;
-
-	pm_runtime_disable(dev);
-	device_pm_lock();
-
-	ret = dev_pm_get_subsys_data(dev);
-	if (ret)
-		goto out;
-
-	spin_lock_irq(&dev->power.lock);
-
-	if (dev->power.subsys_data->domain_data) {
-		gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
-	} else {
-		gpd_data = gpd_data_new;
-		dev->power.subsys_data->domain_data = &gpd_data->base;
-	}
-	gpd_data->refcount++;
-	gpd_data->ops = *ops;
-	if (td)
-		gpd_data->td = *td;
-
-	spin_unlock_irq(&dev->power.lock);
-
- out:
-	device_pm_unlock();
-	pm_runtime_enable(dev);
-
-	if (gpd_data != gpd_data_new)
-		__pm_genpd_free_dev_data(dev, gpd_data_new);
-
-	return ret;
-}
-EXPORT_SYMBOL_GPL(pm_genpd_add_callbacks);
-
-/**
- * __pm_genpd_remove_callbacks - Remove PM domain callbacks from a given device.
- * @dev: Device to remove the callbacks from.
- * @clear_td: If set, clear the device's timing data too.
- *
- * This routine can only be called after pm_genpd_add_callbacks().
- */
-int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td)
-{
-	struct generic_pm_domain_data *gpd_data = NULL;
-	bool remove = false;
-	int ret = 0;
-
-	if (!(dev && dev->power.subsys_data))
-		return -EINVAL;
-
-	pm_runtime_disable(dev);
-	device_pm_lock();
-
-	spin_lock_irq(&dev->power.lock);
-
-	if (dev->power.subsys_data->domain_data) {
-		gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
-		gpd_data->ops = (struct gpd_dev_ops){ NULL };
-		if (clear_td)
-			gpd_data->td = (struct gpd_timing_data){ 0 };
-
-		if (--gpd_data->refcount == 0) {
-			dev->power.subsys_data->domain_data = NULL;
-			remove = true;
-		}
-	} else {
-		ret = -EINVAL;
-	}
-
-	spin_unlock_irq(&dev->power.lock);
-
-	device_pm_unlock();
-	pm_runtime_enable(dev);
-
-	if (ret)
-		return ret;
-
-	dev_pm_put_subsys_data(dev);
-	if (remove)
-		__pm_genpd_free_dev_data(dev, gpd_data);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(__pm_genpd_remove_callbacks);
-
-/**
  * pm_genpd_attach_cpuidle - Connect the given PM domain with cpuidle.
  * @genpd: PM domain to be connected with cpuidle.
  * @state: cpuidle state this domain can disable/enable.
···
 int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
 {
 	struct cpuidle_driver *cpuidle_drv;
-	struct gpd_cpu_data *cpu_data;
+	struct gpd_cpuidle_data *cpuidle_data;
 	struct cpuidle_state *idle_state;
 	int ret = 0;
···
 
 	genpd_acquire_lock(genpd);
 
-	if (genpd->cpu_data) {
+	if (genpd->cpuidle_data) {
 		ret = -EEXIST;
 		goto out;
 	}
-	cpu_data = kzalloc(sizeof(*cpu_data), GFP_KERNEL);
-	if (!cpu_data) {
+	cpuidle_data = kzalloc(sizeof(*cpuidle_data), GFP_KERNEL);
+	if (!cpuidle_data) {
 		ret = -ENOMEM;
 		goto out;
 	}
···
 		ret = -EAGAIN;
 		goto err;
 	}
-	cpu_data->idle_state = idle_state;
-	cpu_data->saved_exit_latency = idle_state->exit_latency;
-	genpd->cpu_data = cpu_data;
+	cpuidle_data->idle_state = idle_state;
+	cpuidle_data->saved_exit_latency = idle_state->exit_latency;
+	genpd->cpuidle_data = cpuidle_data;
 	genpd_recalc_cpu_exit_latency(genpd);
 
 out:
···
 	cpuidle_driver_unref();
 
 err_drv:
-	kfree(cpu_data);
+	kfree(cpuidle_data);
 	goto out;
 }
···
 */
 int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
 {
-	struct gpd_cpu_data *cpu_data;
+	struct gpd_cpuidle_data *cpuidle_data;
 	struct cpuidle_state *idle_state;
 	int ret = 0;
···
 
 	genpd_acquire_lock(genpd);
 
-	cpu_data = genpd->cpu_data;
-	if (!cpu_data) {
+	cpuidle_data = genpd->cpuidle_data;
+	if (!cpuidle_data) {
 		ret = -ENODEV;
 		goto out;
 	}
-	idle_state = cpu_data->idle_state;
+	idle_state = cpuidle_data->idle_state;
 	if (!idle_state->disabled) {
 		ret = -EAGAIN;
 		goto out;
 	}
-	idle_state->exit_latency = cpu_data->saved_exit_latency;
+	idle_state->exit_latency = cpuidle_data->saved_exit_latency;
 	cpuidle_driver_unref();
-	genpd->cpu_data = NULL;
-	kfree(cpu_data);
+	genpd->cpuidle_data = NULL;
+	kfree(cpuidle_data);
 
 out:
 	genpd_release_lock(genpd);
···
 /* Default device callbacks for generic PM domains. */
 
 /**
- * pm_genpd_default_save_state - Default "save device state" for PM domians.
+ * pm_genpd_default_save_state - Default "save device state" for PM domains.
  * @dev: Device to handle.
  */
 static int pm_genpd_default_save_state(struct device *dev)
 {
 	int (*cb)(struct device *__dev);
-
-	cb = dev_gpd_data(dev)->ops.save_state;
-	if (cb)
-		return cb(dev);
 
 	if (dev->type && dev->type->pm)
 		cb = dev->type->pm->runtime_suspend;
···
 }
 
 /**
- * pm_genpd_default_restore_state - Default PM domians "restore device state".
+ * pm_genpd_default_restore_state - Default PM domains "restore device state".
  * @dev: Device to handle.
  */
 static int pm_genpd_default_restore_state(struct device *dev)
 {
 	int (*cb)(struct device *__dev);
-
-	cb = dev_gpd_data(dev)->ops.restore_state;
-	if (cb)
-		return cb(dev);
 
 	if (dev->type && dev->type->pm)
 		cb = dev->type->pm->runtime_resume;
···
 
 	return cb ? cb(dev) : 0;
 }
-
-#ifdef CONFIG_PM_SLEEP
-
-/**
- * pm_genpd_default_suspend - Default "device suspend" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_suspend(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.suspend;
-
-	return cb ? cb(dev) : pm_generic_suspend(dev);
-}
-
-/**
- * pm_genpd_default_suspend_late - Default "late device suspend" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_suspend_late(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.suspend_late;
-
-	return cb ? cb(dev) : pm_generic_suspend_late(dev);
-}
-
-/**
- * pm_genpd_default_resume_early - Default "early device resume" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_resume_early(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.resume_early;
-
-	return cb ? cb(dev) : pm_generic_resume_early(dev);
-}
-
-/**
- * pm_genpd_default_resume - Default "device resume" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_resume(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.resume;
-
-	return cb ? cb(dev) : pm_generic_resume(dev);
-}
-
-/**
- * pm_genpd_default_freeze - Default "device freeze" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_freeze(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.freeze;
-
-	return cb ? cb(dev) : pm_generic_freeze(dev);
-}
-
-/**
- * pm_genpd_default_freeze_late - Default "late device freeze" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_freeze_late(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.freeze_late;
-
-	return cb ? cb(dev) : pm_generic_freeze_late(dev);
-}
-
-/**
- * pm_genpd_default_thaw_early - Default "early device thaw" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_thaw_early(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.thaw_early;
-
-	return cb ? cb(dev) : pm_generic_thaw_early(dev);
-}
-
-/**
- * pm_genpd_default_thaw - Default "device thaw" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_thaw(struct device *dev)
-{
-	int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.thaw;
-
-	return cb ? cb(dev) : pm_generic_thaw(dev);
-}
-
-#else /* !CONFIG_PM_SLEEP */
-
-#define pm_genpd_default_suspend	NULL
-#define pm_genpd_default_suspend_late	NULL
-#define pm_genpd_default_resume_early	NULL
-#define pm_genpd_default_resume		NULL
-#define pm_genpd_default_freeze		NULL
-#define pm_genpd_default_freeze_late	NULL
-#define pm_genpd_default_thaw_early	NULL
-#define pm_genpd_default_thaw		NULL
-
-#endif /* !CONFIG_PM_SLEEP */
 
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
···
 	genpd->domain.ops.complete = pm_genpd_complete;
 	genpd->dev_ops.save_state = pm_genpd_default_save_state;
 	genpd->dev_ops.restore_state = pm_genpd_default_restore_state;
-	genpd->dev_ops.suspend = pm_genpd_default_suspend;
-	genpd->dev_ops.suspend_late = pm_genpd_default_suspend_late;
-	genpd->dev_ops.resume_early = pm_genpd_default_resume_early;
-	genpd->dev_ops.resume = pm_genpd_default_resume;
-	genpd->dev_ops.freeze = pm_genpd_default_freeze;
-	genpd->dev_ops.freeze_late = pm_genpd_default_freeze_late;
-	genpd->dev_ops.thaw_early = pm_genpd_default_thaw_early;
-	genpd->dev_ops.thaw = pm_genpd_default_thaw;
 	mutex_lock(&gpd_list_lock);
 	list_add(&genpd->gpd_list_node, &gpd_list);
 	mutex_unlock(&gpd_list_lock);
 }
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+/*
+ * Device Tree based PM domain providers.
+ *
+ * The code below implements generic device tree based PM domain providers that
+ * bind device tree nodes with generic PM domains registered in the system.
+ *
+ * Any driver that registers generic PM domains and needs to support binding of
+ * devices to these domains is supposed to register a PM domain provider, which
+ * maps a PM domain specifier retrieved from the device tree to a PM domain.
+ *
+ * Two simple mapping functions have been provided for convenience:
+ *  - __of_genpd_xlate_simple() for 1:1 device tree node to PM domain mapping.
+ *  - __of_genpd_xlate_onecell() for mapping of multiple PM domains per node by
+ *    index.
+ */
+
+/**
+ * struct of_genpd_provider - PM domain provider registration structure
+ * @link: Entry in global list of PM domain providers
+ * @node: Pointer to device tree node of PM domain provider
+ * @xlate: Provider-specific xlate callback mapping a set of specifier cells
+ *         into a PM domain.
+ * @data: context pointer to be passed into @xlate callback
+ */
+struct of_genpd_provider {
+	struct list_head link;
+	struct device_node *node;
+	genpd_xlate_t xlate;
+	void *data;
+};
+
+/* List of registered PM domain providers. */
+static LIST_HEAD(of_genpd_providers);
+/* Mutex to protect the list above. */
+static DEFINE_MUTEX(of_genpd_mutex);
+
+/**
+ * __of_genpd_xlate_simple() - Xlate function for direct node-domain mapping
+ * @genpdspec: OF phandle args to map into a PM domain
+ * @data: xlate function private data - pointer to struct generic_pm_domain
+ *
+ * This is a generic xlate function that can be used to model PM domains that
+ * have their own device tree nodes. The private data of xlate function needs
+ * to be a valid pointer to struct generic_pm_domain.
+ */
+struct generic_pm_domain *__of_genpd_xlate_simple(
+					struct of_phandle_args *genpdspec,
+					void *data)
+{
+	if (genpdspec->args_count != 0)
+		return ERR_PTR(-EINVAL);
+	return data;
+}
+EXPORT_SYMBOL_GPL(__of_genpd_xlate_simple);
+
+/**
+ * __of_genpd_xlate_onecell() - Xlate function using a single index.
+ * @genpdspec: OF phandle args to map into a PM domain
+ * @data: xlate function private data - pointer to struct genpd_onecell_data
+ *
+ * This is a generic xlate function that can be used to model simple PM domain
+ * controllers that have one device tree node and provide multiple PM domains.
+ * A single cell is used as an index into an array of PM domains specified in
+ * the genpd_onecell_data struct when registering the provider.
+ */
+struct generic_pm_domain *__of_genpd_xlate_onecell(
+					struct of_phandle_args *genpdspec,
+					void *data)
+{
+	struct genpd_onecell_data *genpd_data = data;
+	unsigned int idx = genpdspec->args[0];
+
+	if (genpdspec->args_count != 1)
+		return ERR_PTR(-EINVAL);
+
+	if (idx >= genpd_data->num_domains) {
+		pr_err("%s: invalid domain index %u\n", __func__, idx);
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (!genpd_data->domains[idx])
+		return ERR_PTR(-ENOENT);
+
+	return genpd_data->domains[idx];
+}
+EXPORT_SYMBOL_GPL(__of_genpd_xlate_onecell);
+
+/**
+ * __of_genpd_add_provider() - Register a PM domain provider for a node
+ * @np: Device node pointer associated with the PM domain provider.
+ * @xlate: Callback for decoding PM domain from phandle arguments.
+ * @data: Context pointer for @xlate callback.
+ */
+int __of_genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
+			void *data)
+{
+	struct of_genpd_provider *cp;
+
+	cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+	if (!cp)
+		return -ENOMEM;
+
+	cp->node = of_node_get(np);
+	cp->data = data;
+	cp->xlate = xlate;
+
+	mutex_lock(&of_genpd_mutex);
+	list_add(&cp->link, &of_genpd_providers);
+	mutex_unlock(&of_genpd_mutex);
+	pr_debug("Added domain provider from %s\n", np->full_name);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__of_genpd_add_provider);
+
+/**
+ * of_genpd_del_provider() - Remove a previously registered PM domain provider
+ * @np: Device node pointer associated with the PM domain provider
+ */
+void of_genpd_del_provider(struct device_node *np)
+{
+	struct of_genpd_provider *cp;
+
+	mutex_lock(&of_genpd_mutex);
+	list_for_each_entry(cp, &of_genpd_providers, link) {
+		if (cp->node == np) {
+ list_del(&cp->link); 2313 + of_node_put(cp->node); 2314 + kfree(cp); 2315 + break; 2316 + } 2317 + } 2318 + mutex_unlock(&of_genpd_mutex); 2319 + } 2320 + EXPORT_SYMBOL_GPL(of_genpd_del_provider); 2321 + 2322 + /** 2323 + * of_genpd_get_from_provider() - Look-up PM domain 2324 + * @genpdspec: OF phandle args to use for look-up 2325 + * 2326 + * Looks for a PM domain provider under the node specified by @genpdspec and if 2327 + * found, uses xlate function of the provider to map phandle args to a PM 2328 + * domain. 2329 + * 2330 + * Returns a valid pointer to struct generic_pm_domain on success or ERR_PTR() 2331 + * on failure. 2332 + */ 2333 + static struct generic_pm_domain *of_genpd_get_from_provider( 2334 + struct of_phandle_args *genpdspec) 2335 + { 2336 + struct generic_pm_domain *genpd = ERR_PTR(-ENOENT); 2337 + struct of_genpd_provider *provider; 2338 + 2339 + mutex_lock(&of_genpd_mutex); 2340 + 2341 + /* Check if we have such a provider in our array */ 2342 + list_for_each_entry(provider, &of_genpd_providers, link) { 2343 + if (provider->node == genpdspec->np) 2344 + genpd = provider->xlate(genpdspec, provider->data); 2345 + if (!IS_ERR(genpd)) 2346 + break; 2347 + } 2348 + 2349 + mutex_unlock(&of_genpd_mutex); 2350 + 2351 + return genpd; 2352 + } 2353 + 2354 + /** 2355 + * genpd_dev_pm_detach - Detach a device from its PM domain. 2356 + * @dev: Device to attach. 2357 + * @power_off: Currently not used 2358 + * 2359 + * Try to locate a corresponding generic PM domain, which the device was 2360 + * attached to previously. If such is found, the device is detached from it. 
2361 + */ 2362 + static void genpd_dev_pm_detach(struct device *dev, bool power_off) 2363 + { 2364 + struct generic_pm_domain *pd = NULL, *gpd; 2365 + int ret = 0; 2366 + 2367 + if (!dev->pm_domain) 2368 + return; 2369 + 2370 + mutex_lock(&gpd_list_lock); 2371 + list_for_each_entry(gpd, &gpd_list, gpd_list_node) { 2372 + if (&gpd->domain == dev->pm_domain) { 2373 + pd = gpd; 2374 + break; 2375 + } 2376 + } 2377 + mutex_unlock(&gpd_list_lock); 2378 + 2379 + if (!pd) 2380 + return; 2381 + 2382 + dev_dbg(dev, "removing from PM domain %s\n", pd->name); 2383 + 2384 + while (1) { 2385 + ret = pm_genpd_remove_device(pd, dev); 2386 + if (ret != -EAGAIN) 2387 + break; 2388 + cond_resched(); 2389 + } 2390 + 2391 + if (ret < 0) { 2392 + dev_err(dev, "failed to remove from PM domain %s: %d", 2393 + pd->name, ret); 2394 + return; 2395 + } 2396 + 2397 + /* Check if PM domain can be powered off after removing this device. */ 2398 + genpd_queue_power_off_work(pd); 2399 + } 2400 + 2401 + /** 2402 + * genpd_dev_pm_attach - Attach a device to its PM domain using DT. 2403 + * @dev: Device to attach. 2404 + * 2405 + * Parse device's OF node to find a PM domain specifier. If such is found, 2406 + * attaches the device to retrieved pm_domain ops. 2407 + * 2408 + * Both generic and legacy Samsung-specific DT bindings are supported to keep 2409 + * backwards compatibility with existing DTBs. 2410 + * 2411 + * Returns 0 on successfully attached PM domain or negative error code. 
2412 + */ 2413 + int genpd_dev_pm_attach(struct device *dev) 2414 + { 2415 + struct of_phandle_args pd_args; 2416 + struct generic_pm_domain *pd; 2417 + int ret; 2418 + 2419 + if (!dev->of_node) 2420 + return -ENODEV; 2421 + 2422 + if (dev->pm_domain) 2423 + return -EEXIST; 2424 + 2425 + ret = of_parse_phandle_with_args(dev->of_node, "power-domains", 2426 + "#power-domain-cells", 0, &pd_args); 2427 + if (ret < 0) { 2428 + if (ret != -ENOENT) 2429 + return ret; 2430 + 2431 + /* 2432 + * Try legacy Samsung-specific bindings 2433 + * (for backwards compatibility of DT ABI) 2434 + */ 2435 + pd_args.args_count = 0; 2436 + pd_args.np = of_parse_phandle(dev->of_node, 2437 + "samsung,power-domain", 0); 2438 + if (!pd_args.np) 2439 + return -ENOENT; 2440 + } 2441 + 2442 + pd = of_genpd_get_from_provider(&pd_args); 2443 + if (IS_ERR(pd)) { 2444 + dev_dbg(dev, "%s() failed to find PM domain: %ld\n", 2445 + __func__, PTR_ERR(pd)); 2446 + of_node_put(dev->of_node); 2447 + return PTR_ERR(pd); 2448 + } 2449 + 2450 + dev_dbg(dev, "adding to PM domain %s\n", pd->name); 2451 + 2452 + while (1) { 2453 + ret = pm_genpd_add_device(pd, dev); 2454 + if (ret != -EAGAIN) 2455 + break; 2456 + cond_resched(); 2457 + } 2458 + 2459 + if (ret < 0) { 2460 + dev_err(dev, "failed to add to PM domain %s: %d", 2461 + pd->name, ret); 2462 + of_node_put(dev->of_node); 2463 + return ret; 2464 + } 2465 + 2466 + dev->pm_domain->detach = genpd_dev_pm_detach; 2467 + 2468 + return 0; 2469 + } 2470 + EXPORT_SYMBOL_GPL(genpd_dev_pm_attach); 2471 + #endif 2472 + 2473 + 2474 + /*** debugfs support ***/ 2475 + 2476 + #ifdef CONFIG_PM_ADVANCED_DEBUG 2477 + #include <linux/pm.h> 2478 + #include <linux/device.h> 2479 + #include <linux/debugfs.h> 2480 + #include <linux/seq_file.h> 2481 + #include <linux/init.h> 2482 + #include <linux/kobject.h> 2483 + static struct dentry *pm_genpd_debugfs_dir; 2484 + 2485 + /* 2486 + * TODO: This function is a slightly modified version of rtpm_status_show 2487 + * from sysfs.c, but 
dependencies between PM_GENERIC_DOMAINS and PM_RUNTIME 2488 + * are too loose to generalize it. 2489 + */ 2490 + #ifdef CONFIG_PM_RUNTIME 2491 + static void rtpm_status_str(struct seq_file *s, struct device *dev) 2492 + { 2493 + static const char * const status_lookup[] = { 2494 + [RPM_ACTIVE] = "active", 2495 + [RPM_RESUMING] = "resuming", 2496 + [RPM_SUSPENDED] = "suspended", 2497 + [RPM_SUSPENDING] = "suspending" 2498 + }; 2499 + const char *p = ""; 2500 + 2501 + if (dev->power.runtime_error) 2502 + p = "error"; 2503 + else if (dev->power.disable_depth) 2504 + p = "unsupported"; 2505 + else if (dev->power.runtime_status < ARRAY_SIZE(status_lookup)) 2506 + p = status_lookup[dev->power.runtime_status]; 2507 + else 2508 + WARN_ON(1); 2509 + 2510 + seq_puts(s, p); 2511 + } 2512 + #else 2513 + static void rtpm_status_str(struct seq_file *s, struct device *dev) 2514 + { 2515 + seq_puts(s, "active"); 2516 + } 2517 + #endif 2518 + 2519 + static int pm_genpd_summary_one(struct seq_file *s, 2520 + struct generic_pm_domain *gpd) 2521 + { 2522 + static const char * const status_lookup[] = { 2523 + [GPD_STATE_ACTIVE] = "on", 2524 + [GPD_STATE_WAIT_MASTER] = "wait-master", 2525 + [GPD_STATE_BUSY] = "busy", 2526 + [GPD_STATE_REPEAT] = "off-in-progress", 2527 + [GPD_STATE_POWER_OFF] = "off" 2528 + }; 2529 + struct pm_domain_data *pm_data; 2530 + const char *kobj_path; 2531 + struct gpd_link *link; 2532 + int ret; 2533 + 2534 + ret = mutex_lock_interruptible(&gpd->lock); 2535 + if (ret) 2536 + return -ERESTARTSYS; 2537 + 2538 + if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup))) 2539 + goto exit; 2540 + seq_printf(s, "%-30s %-15s ", gpd->name, status_lookup[gpd->status]); 2541 + 2542 + /* 2543 + * Modifications on the list require holding locks on both 2544 + * master and slave, so we are safe. 2545 + * Also gpd->name is immutable. 
2546 + */ 2547 + list_for_each_entry(link, &gpd->master_links, master_node) { 2548 + seq_printf(s, "%s", link->slave->name); 2549 + if (!list_is_last(&link->master_node, &gpd->master_links)) 2550 + seq_puts(s, ", "); 2551 + } 2552 + 2553 + list_for_each_entry(pm_data, &gpd->dev_list, list_node) { 2554 + kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL); 2555 + if (kobj_path == NULL) 2556 + continue; 2557 + 2558 + seq_printf(s, "\n %-50s ", kobj_path); 2559 + rtpm_status_str(s, pm_data->dev); 2560 + kfree(kobj_path); 2561 + } 2562 + 2563 + seq_puts(s, "\n"); 2564 + exit: 2565 + mutex_unlock(&gpd->lock); 2566 + 2567 + return 0; 2568 + } 2569 + 2570 + static int pm_genpd_summary_show(struct seq_file *s, void *data) 2571 + { 2572 + struct generic_pm_domain *gpd; 2573 + int ret = 0; 2574 + 2575 + seq_puts(s, " domain status slaves\n"); 2576 + seq_puts(s, " /device runtime status\n"); 2577 + seq_puts(s, "----------------------------------------------------------------------\n"); 2578 + 2579 + ret = mutex_lock_interruptible(&gpd_list_lock); 2580 + if (ret) 2581 + return -ERESTARTSYS; 2582 + 2583 + list_for_each_entry(gpd, &gpd_list, gpd_list_node) { 2584 + ret = pm_genpd_summary_one(s, gpd); 2585 + if (ret) 2586 + break; 2587 + } 2588 + mutex_unlock(&gpd_list_lock); 2589 + 2590 + return ret; 2591 + } 2592 + 2593 + static int pm_genpd_summary_open(struct inode *inode, struct file *file) 2594 + { 2595 + return single_open(file, pm_genpd_summary_show, NULL); 2596 + } 2597 + 2598 + static const struct file_operations pm_genpd_summary_fops = { 2599 + .open = pm_genpd_summary_open, 2600 + .read = seq_read, 2601 + .llseek = seq_lseek, 2602 + .release = single_release, 2603 + }; 2604 + 2605 + static int __init pm_genpd_debug_init(void) 2606 + { 2607 + struct dentry *d; 2608 + 2609 + pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL); 2610 + 2611 + if (!pm_genpd_debugfs_dir) 2612 + return -ENOMEM; 2613 + 2614 + d = debugfs_create_file("pm_genpd_summary", 
S_IRUGO, 2615 + pm_genpd_debugfs_dir, NULL, &pm_genpd_summary_fops); 2616 + if (!d) 2617 + return -ENOMEM; 2618 + 2619 + return 0; 2620 + } 2621 + late_initcall(pm_genpd_debug_init); 2622 + 2623 + static void __exit pm_genpd_debug_exit(void) 2624 + { 2625 + debugfs_remove_recursive(pm_genpd_debugfs_dir); 2626 + } 2627 + __exitcall(pm_genpd_debug_exit); 2628 + #endif /* CONFIG_PM_ADVANCED_DEBUG */
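The one-cell xlate added above reduces to a bounds-checked array lookup: one specifier cell indexes the provider's domain array, out-of-range indices are rejected, and holes in the array mean "no such domain". A minimal userspace sketch of that rule (the names `xlate_onecell` and `struct onecell_data` are hypothetical stand-ins for `__of_genpd_xlate_onecell()` and `struct genpd_onecell_data`; an errno out-parameter stands in for the kernel's `ERR_PTR()` encoding):

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the genpd types. */
struct pm_domain { const char *name; };

struct onecell_data {
	struct pm_domain **domains;
	unsigned int num_domains;
};

/* Mirrors the __of_genpd_xlate_onecell() checks: exactly one cell,
 * index in range, and a non-NULL slot in the provider's array. */
static struct pm_domain *xlate_onecell(struct onecell_data *data,
				       const unsigned int *args,
				       int args_count, int *err)
{
	*err = 0;
	if (args_count != 1) {
		*err = -EINVAL;	/* provider expects a single cell */
		return NULL;
	}
	if (args[0] >= data->num_domains) {
		*err = -EINVAL;	/* index past the provider's array */
		return NULL;
	}
	if (!data->domains[args[0]]) {
		*err = -ENOENT;	/* hole in the array: no such domain */
		return NULL;
	}
	return data->domains[args[0]];
}
```

The simple xlate is the degenerate case of the same idea: zero cells, and the provider's private data already *is* the domain pointer.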
+2 -5
drivers/base/power/domain_governor.c
··· 42 42 * default_stop_ok - Default PM domain governor routine for stopping devices. 43 43 * @dev: Device to check. 44 44 */ 45 - bool default_stop_ok(struct device *dev) 45 + static bool default_stop_ok(struct device *dev) 46 46 { 47 47 struct gpd_timing_data *td = &dev_gpd_data(dev)->td; 48 48 unsigned long flags; ··· 229 229 230 230 #else /* !CONFIG_PM_RUNTIME */ 231 231 232 - bool default_stop_ok(struct device *dev) 233 - { 234 - return false; 235 - } 232 + static inline bool default_stop_ok(struct device *dev) { return false; } 236 233 237 234 #define default_power_down_ok NULL 238 235 #define always_on_power_down_ok NULL
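Now that `default_stop_ok()` is file-local, the `!CONFIG_PM_RUNTIME` branch can shrink from an out-of-line stub to a `static inline` that the compiler folds away. The same config-gated stub pattern, sketched standalone (the `HAVE_RUNTIME_PM` macro and `stop_ok()` are stand-ins, not kernel names):

```c
#include <stdbool.h>

/* Stand-in for CONFIG_PM_RUNTIME; define it to compile the real predicate. */
/* #define HAVE_RUNTIME_PM 1 */

#ifdef HAVE_RUNTIME_PM
/* The real governor logic would live here, as default_stop_ok() does. */
static bool stop_ok(long effective_constraint_ns)
{
	return effective_constraint_ns >= 0;
}
#else
/* Without runtime PM the governor can never approve a stop, so the
 * whole function collapses to a constant-returning static inline. */
static inline bool stop_ok(long effective_constraint_ns)
{
	(void)effective_constraint_ns;
	return false;
}
#endif
```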
+4 -4
drivers/base/power/main.c
··· 540 540 * Call the "noirq" resume handlers for all devices in dpm_noirq_list and 541 541 * enable device drivers to receive interrupts. 542 542 */ 543 - static void dpm_resume_noirq(pm_message_t state) 543 + void dpm_resume_noirq(pm_message_t state) 544 544 { 545 545 struct device *dev; 546 546 ktime_t starttime = ktime_get(); ··· 662 662 * dpm_resume_early - Execute "early resume" callbacks for all devices. 663 663 * @state: PM transition of the system being carried out. 664 664 */ 665 - static void dpm_resume_early(pm_message_t state) 665 + void dpm_resume_early(pm_message_t state) 666 666 { 667 667 struct device *dev; 668 668 ktime_t starttime = ktime_get(); ··· 1093 1093 * Prevent device drivers from receiving interrupts and call the "noirq" suspend 1094 1094 * handlers for all non-sysdev devices. 1095 1095 */ 1096 - static int dpm_suspend_noirq(pm_message_t state) 1096 + int dpm_suspend_noirq(pm_message_t state) 1097 1097 { 1098 1098 ktime_t starttime = ktime_get(); 1099 1099 int error = 0; ··· 1232 1232 * dpm_suspend_late - Execute "late suspend" callbacks for all devices. 1233 1233 * @state: PM transition of the system being carried out. 1234 1234 */ 1235 - static int dpm_suspend_late(pm_message_t state) 1235 + int dpm_suspend_late(pm_message_t state) 1236 1236 { 1237 1237 ktime_t starttime = ktime_get(); 1238 1238 int error = 0;
+13 -11
drivers/base/power/sysfs.c
··· 92 92 * wakeup_count - Report the number of wakeup events related to the device 93 93 */ 94 94 95 - static const char enabled[] = "enabled"; 96 - static const char disabled[] = "disabled"; 97 - 98 95 const char power_group_name[] = "power"; 99 96 EXPORT_SYMBOL_GPL(power_group_name); 100 97 ··· 333 336 #endif /* CONFIG_PM_RUNTIME */ 334 337 335 338 #ifdef CONFIG_PM_SLEEP 339 + static const char _enabled[] = "enabled"; 340 + static const char _disabled[] = "disabled"; 341 + 336 342 static ssize_t 337 343 wake_show(struct device * dev, struct device_attribute *attr, char * buf) 338 344 { 339 345 return sprintf(buf, "%s\n", device_can_wakeup(dev) 340 - ? (device_may_wakeup(dev) ? enabled : disabled) 346 + ? (device_may_wakeup(dev) ? _enabled : _disabled) 341 347 : ""); 342 348 } 343 349 ··· 357 357 cp = memchr(buf, '\n', n); 358 358 if (cp) 359 359 len = cp - buf; 360 - if (len == sizeof enabled - 1 361 - && strncmp(buf, enabled, sizeof enabled - 1) == 0) 360 + if (len == sizeof _enabled - 1 361 + && strncmp(buf, _enabled, sizeof _enabled - 1) == 0) 362 362 device_set_wakeup_enable(dev, 1); 363 - else if (len == sizeof disabled - 1 364 - && strncmp(buf, disabled, sizeof disabled - 1) == 0) 363 + else if (len == sizeof _disabled - 1 364 + && strncmp(buf, _disabled, sizeof _disabled - 1) == 0) 365 365 device_set_wakeup_enable(dev, 0); 366 366 else 367 367 return -EINVAL; ··· 570 570 char *buf) 571 571 { 572 572 return sprintf(buf, "%s\n", 573 - device_async_suspend_enabled(dev) ? enabled : disabled); 573 + device_async_suspend_enabled(dev) ? 
574 + _enabled : _disabled); 574 575 } 575 576 576 577 static ssize_t async_store(struct device *dev, struct device_attribute *attr, ··· 583 582 cp = memchr(buf, '\n', n); 584 583 if (cp) 585 584 len = cp - buf; 586 - if (len == sizeof enabled - 1 && strncmp(buf, enabled, len) == 0) 585 + if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0) 587 586 device_enable_async_suspend(dev); 588 - else if (len == sizeof disabled - 1 && strncmp(buf, disabled, len) == 0) 587 + else if (len == sizeof _disabled - 1 && 588 + strncmp(buf, _disabled, len) == 0) 589 589 device_disable_async_suspend(dev); 590 590 else 591 591 return -EINVAL;
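The `wake_store()`/`async_store()` parsing convention the hunk above preserves — trim at the first newline, then demand an exact-length match against `"enabled"` or `"disabled"` — can be exercised outside the kernel. A sketch with a hypothetical `parse_enable()` helper:

```c
#include <errno.h>
#include <string.h>

/* Mirrors the sysfs store logic: sysfs hands us a length-counted buffer
 * that may or may not end in '\n'; anything but an exact keyword match
 * is rejected with -EINVAL. */
static int parse_enable(const char *buf, size_t n, int *enable)
{
	const char *cp = memchr(buf, '\n', n);
	size_t len = cp ? (size_t)(cp - buf) : n;

	if (len == sizeof("enabled") - 1 &&
	    strncmp(buf, "enabled", len) == 0)
		*enable = 1;
	else if (len == sizeof("disabled") - 1 &&
		 strncmp(buf, "disabled", len) == 0)
		*enable = 0;
	else
		return -EINVAL;
	return 0;
}
```

The length check before `strncmp()` is what makes the match exact: without it, `"enable"` or `"enabledX"` truncated at `len` could compare equal to a prefix.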
+15 -1
drivers/base/power/wakeup.c
··· 24 24 */ 25 25 bool events_check_enabled __read_mostly; 26 26 27 + /* If set and the system is suspending, terminate the suspend. */ 28 + static bool pm_abort_suspend __read_mostly; 29 + 27 30 /* 28 31 * Combined counters of registered wakeup events and wakeup events in progress. 29 32 * They need to be modified together atomically, so it's better to use one ··· 722 719 pm_print_active_wakeup_sources(); 723 720 } 724 721 725 - return ret; 722 + return ret || pm_abort_suspend; 723 + } 724 + 725 + void pm_system_wakeup(void) 726 + { 727 + pm_abort_suspend = true; 728 + freeze_wake(); 729 + } 730 + 731 + void pm_wakeup_clear(void) 732 + { 733 + pm_abort_suspend = false; 726 734 } 727 735 728 736 /**
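The new `pm_abort_suspend` flag is just a sticky boolean ORed into the final wakeup check: a wakeup interrupt sets it at any point during suspend, the core clears it before the next attempt, and `pm_wakeup_pending()` reports it alongside the usual wakeup-count comparison. A toy model (hypothetical names mirroring `pm_system_wakeup()`, `pm_wakeup_clear()` and `pm_wakeup_pending()`):

```c
#include <stdbool.h>

/* Sticky abort flag, as pm_abort_suspend works. */
static bool abort_suspend;

/* A wakeup interrupt fired after suspend_device_irqs(): remember it. */
static void system_wakeup(void)
{
	abort_suspend = true;
}

/* Reset before starting a fresh suspend attempt. */
static void wakeup_clear(void)
{
	abort_suspend = false;
}

/* The final gate: either the wakeup-source counters moved, or an
 * abort was latched since the last clear. */
static bool wakeup_pending(bool counters_say_pending)
{
	return counters_say_pending || abort_suspend;
}
```

Because the flag is latched rather than sampled, a wakeup that races with the last stages of suspend is never lost — which is exactly what lets syscore.c below replace `check_wakeup_irqs()` with a plain `pm_wakeup_pending()` call.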
+3 -4
drivers/base/syscore.c
··· 9 9 #include <linux/syscore_ops.h> 10 10 #include <linux/mutex.h> 11 11 #include <linux/module.h> 12 - #include <linux/interrupt.h> 12 + #include <linux/suspend.h> 13 13 #include <trace/events/power.h> 14 14 15 15 static LIST_HEAD(syscore_ops_list); ··· 54 54 pr_debug("Checking wakeup interrupts\n"); 55 55 56 56 /* Return error code if there are any wakeup interrupts pending. */ 57 - ret = check_wakeup_irqs(); 58 - if (ret) 59 - return ret; 57 + if (pm_wakeup_pending()) 58 + return -EBUSY; 60 59 61 60 WARN_ONCE(!irqs_disabled(), 62 61 "Interrupts enabled before system core suspend.\n");
+4 -4
drivers/cpufreq/Kconfig
··· 183 183 184 184 If in doubt, say N. 185 185 186 - config GENERIC_CPUFREQ_CPU0 187 - tristate "Generic CPU0 cpufreq driver" 186 + config CPUFREQ_DT 187 + tristate "Generic DT based cpufreq driver" 188 188 depends on HAVE_CLK && OF 189 - # if CPU_THERMAL is on and THERMAL=m, CPU0 cannot be =y: 189 + # if CPU_THERMAL is on and THERMAL=m, CPUFREQ_DT cannot be =y: 190 190 depends on !CPU_THERMAL || THERMAL 191 191 select PM_OPP 192 192 help 193 - This adds a generic cpufreq driver for CPU0 frequency management. 193 + This adds a generic DT based cpufreq driver for frequency management. 194 194 It supports both uniprocessor (UP) and symmetric multiprocessor (SMP) 195 195 systems which share clock and voltage across all CPUs. 196 196
+1 -1
drivers/cpufreq/Kconfig.arm
··· 92 92 93 93 config ARM_HIGHBANK_CPUFREQ 94 94 tristate "Calxeda Highbank-based" 95 - depends on ARCH_HIGHBANK && GENERIC_CPUFREQ_CPU0 && REGULATOR 95 + depends on ARCH_HIGHBANK && CPUFREQ_DT && REGULATOR 96 96 default m 97 97 help 98 98 This adds the CPUFreq driver for Calxeda Highbank SoC
+1 -1
drivers/cpufreq/Makefile
··· 13 13 obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o 14 14 obj-$(CONFIG_CPU_FREQ_GOV_COMMON) += cpufreq_governor.o 15 15 16 - obj-$(CONFIG_GENERIC_CPUFREQ_CPU0) += cpufreq-cpu0.o 16 + obj-$(CONFIG_CPUFREQ_DT) += cpufreq-dt.o 17 17 18 18 ################################################################################## 19 19 # x86 drivers.
-248
drivers/cpufreq/cpufreq-cpu0.c
··· 1 - /* 2 - * Copyright (C) 2012 Freescale Semiconductor, Inc. 3 - * 4 - * The OPP code in function cpu0_set_target() is reused from 5 - * drivers/cpufreq/omap-cpufreq.c 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 - 14 - #include <linux/clk.h> 15 - #include <linux/cpu.h> 16 - #include <linux/cpu_cooling.h> 17 - #include <linux/cpufreq.h> 18 - #include <linux/cpumask.h> 19 - #include <linux/err.h> 20 - #include <linux/module.h> 21 - #include <linux/of.h> 22 - #include <linux/pm_opp.h> 23 - #include <linux/platform_device.h> 24 - #include <linux/regulator/consumer.h> 25 - #include <linux/slab.h> 26 - #include <linux/thermal.h> 27 - 28 - static unsigned int transition_latency; 29 - static unsigned int voltage_tolerance; /* in percentage */ 30 - 31 - static struct device *cpu_dev; 32 - static struct clk *cpu_clk; 33 - static struct regulator *cpu_reg; 34 - static struct cpufreq_frequency_table *freq_table; 35 - static struct thermal_cooling_device *cdev; 36 - 37 - static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index) 38 - { 39 - struct dev_pm_opp *opp; 40 - unsigned long volt = 0, volt_old = 0, tol = 0; 41 - unsigned int old_freq, new_freq; 42 - long freq_Hz, freq_exact; 43 - int ret; 44 - 45 - freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000); 46 - if (freq_Hz <= 0) 47 - freq_Hz = freq_table[index].frequency * 1000; 48 - 49 - freq_exact = freq_Hz; 50 - new_freq = freq_Hz / 1000; 51 - old_freq = clk_get_rate(cpu_clk) / 1000; 52 - 53 - if (!IS_ERR(cpu_reg)) { 54 - rcu_read_lock(); 55 - opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_Hz); 56 - if (IS_ERR(opp)) { 57 - rcu_read_unlock(); 58 - pr_err("failed to find OPP for %ld\n", freq_Hz); 59 - return PTR_ERR(opp); 60 - } 61 - volt = 
dev_pm_opp_get_voltage(opp); 62 - rcu_read_unlock(); 63 - tol = volt * voltage_tolerance / 100; 64 - volt_old = regulator_get_voltage(cpu_reg); 65 - } 66 - 67 - pr_debug("%u MHz, %ld mV --> %u MHz, %ld mV\n", 68 - old_freq / 1000, volt_old ? volt_old / 1000 : -1, 69 - new_freq / 1000, volt ? volt / 1000 : -1); 70 - 71 - /* scaling up? scale voltage before frequency */ 72 - if (!IS_ERR(cpu_reg) && new_freq > old_freq) { 73 - ret = regulator_set_voltage_tol(cpu_reg, volt, tol); 74 - if (ret) { 75 - pr_err("failed to scale voltage up: %d\n", ret); 76 - return ret; 77 - } 78 - } 79 - 80 - ret = clk_set_rate(cpu_clk, freq_exact); 81 - if (ret) { 82 - pr_err("failed to set clock rate: %d\n", ret); 83 - if (!IS_ERR(cpu_reg)) 84 - regulator_set_voltage_tol(cpu_reg, volt_old, tol); 85 - return ret; 86 - } 87 - 88 - /* scaling down? scale voltage after frequency */ 89 - if (!IS_ERR(cpu_reg) && new_freq < old_freq) { 90 - ret = regulator_set_voltage_tol(cpu_reg, volt, tol); 91 - if (ret) { 92 - pr_err("failed to scale voltage down: %d\n", ret); 93 - clk_set_rate(cpu_clk, old_freq * 1000); 94 - } 95 - } 96 - 97 - return ret; 98 - } 99 - 100 - static int cpu0_cpufreq_init(struct cpufreq_policy *policy) 101 - { 102 - policy->clk = cpu_clk; 103 - return cpufreq_generic_init(policy, freq_table, transition_latency); 104 - } 105 - 106 - static struct cpufreq_driver cpu0_cpufreq_driver = { 107 - .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 108 - .verify = cpufreq_generic_frequency_table_verify, 109 - .target_index = cpu0_set_target, 110 - .get = cpufreq_generic_get, 111 - .init = cpu0_cpufreq_init, 112 - .name = "generic_cpu0", 113 - .attr = cpufreq_generic_attr, 114 - }; 115 - 116 - static int cpu0_cpufreq_probe(struct platform_device *pdev) 117 - { 118 - struct device_node *np; 119 - int ret; 120 - 121 - cpu_dev = get_cpu_device(0); 122 - if (!cpu_dev) { 123 - pr_err("failed to get cpu0 device\n"); 124 - return -ENODEV; 125 - } 126 - 127 - np = 
of_node_get(cpu_dev->of_node); 128 - if (!np) { 129 - pr_err("failed to find cpu0 node\n"); 130 - return -ENOENT; 131 - } 132 - 133 - cpu_reg = regulator_get_optional(cpu_dev, "cpu0"); 134 - if (IS_ERR(cpu_reg)) { 135 - /* 136 - * If cpu0 regulator supply node is present, but regulator is 137 - * not yet registered, we should try defering probe. 138 - */ 139 - if (PTR_ERR(cpu_reg) == -EPROBE_DEFER) { 140 - dev_dbg(cpu_dev, "cpu0 regulator not ready, retry\n"); 141 - ret = -EPROBE_DEFER; 142 - goto out_put_node; 143 - } 144 - pr_warn("failed to get cpu0 regulator: %ld\n", 145 - PTR_ERR(cpu_reg)); 146 - } 147 - 148 - cpu_clk = clk_get(cpu_dev, NULL); 149 - if (IS_ERR(cpu_clk)) { 150 - ret = PTR_ERR(cpu_clk); 151 - pr_err("failed to get cpu0 clock: %d\n", ret); 152 - goto out_put_reg; 153 - } 154 - 155 - /* OPPs might be populated at runtime, don't check for error here */ 156 - of_init_opp_table(cpu_dev); 157 - 158 - ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); 159 - if (ret) { 160 - pr_err("failed to init cpufreq table: %d\n", ret); 161 - goto out_put_clk; 162 - } 163 - 164 - of_property_read_u32(np, "voltage-tolerance", &voltage_tolerance); 165 - 166 - if (of_property_read_u32(np, "clock-latency", &transition_latency)) 167 - transition_latency = CPUFREQ_ETERNAL; 168 - 169 - if (!IS_ERR(cpu_reg)) { 170 - struct dev_pm_opp *opp; 171 - unsigned long min_uV, max_uV; 172 - int i; 173 - 174 - /* 175 - * OPP is maintained in order of increasing frequency, and 176 - * freq_table initialised from OPP is therefore sorted in the 177 - * same order. 
178 - */ 179 - for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) 180 - ; 181 - rcu_read_lock(); 182 - opp = dev_pm_opp_find_freq_exact(cpu_dev, 183 - freq_table[0].frequency * 1000, true); 184 - min_uV = dev_pm_opp_get_voltage(opp); 185 - opp = dev_pm_opp_find_freq_exact(cpu_dev, 186 - freq_table[i-1].frequency * 1000, true); 187 - max_uV = dev_pm_opp_get_voltage(opp); 188 - rcu_read_unlock(); 189 - ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV); 190 - if (ret > 0) 191 - transition_latency += ret * 1000; 192 - } 193 - 194 - ret = cpufreq_register_driver(&cpu0_cpufreq_driver); 195 - if (ret) { 196 - pr_err("failed register driver: %d\n", ret); 197 - goto out_free_table; 198 - } 199 - 200 - /* 201 - * For now, just loading the cooling device; 202 - * thermal DT code takes care of matching them. 203 - */ 204 - if (of_find_property(np, "#cooling-cells", NULL)) { 205 - cdev = of_cpufreq_cooling_register(np, cpu_present_mask); 206 - if (IS_ERR(cdev)) 207 - pr_err("running cpufreq without cooling device: %ld\n", 208 - PTR_ERR(cdev)); 209 - } 210 - 211 - of_node_put(np); 212 - return 0; 213 - 214 - out_free_table: 215 - dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 216 - out_put_clk: 217 - if (!IS_ERR(cpu_clk)) 218 - clk_put(cpu_clk); 219 - out_put_reg: 220 - if (!IS_ERR(cpu_reg)) 221 - regulator_put(cpu_reg); 222 - out_put_node: 223 - of_node_put(np); 224 - return ret; 225 - } 226 - 227 - static int cpu0_cpufreq_remove(struct platform_device *pdev) 228 - { 229 - cpufreq_cooling_unregister(cdev); 230 - cpufreq_unregister_driver(&cpu0_cpufreq_driver); 231 - dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 232 - 233 - return 0; 234 - } 235 - 236 - static struct platform_driver cpu0_cpufreq_platdrv = { 237 - .driver = { 238 - .name = "cpufreq-cpu0", 239 - .owner = THIS_MODULE, 240 - }, 241 - .probe = cpu0_cpufreq_probe, 242 - .remove = cpu0_cpufreq_remove, 243 - }; 244 - module_platform_driver(cpu0_cpufreq_platdrv); 245 - 246 - 
MODULE_AUTHOR("Shawn Guo <shawn.guo@linaro.org>"); 247 - MODULE_DESCRIPTION("Generic CPU0 cpufreq driver"); 248 - MODULE_LICENSE("GPL");
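Both the removed driver and its cpufreq-dt replacement keep the same DVFS ordering rule in their target callbacks: when raising the clock the rail must already be at the higher voltage, so voltage goes first; when lowering it the old (higher) voltage is still safe, so voltage goes last. A standalone sketch of just that rule (the `transition()` helper and its operation log are hypothetical, for illustration only):

```c
/* Log of operations so the ordering can be checked. */
static char order[2];
static int n_ops;

static void set_voltage(void)   { order[n_ops++] = 'V'; }
static void set_frequency(void) { order[n_ops++] = 'F'; }

/* Mirrors the ordering in cpu0_set_target()/set_target():
 * scaling up   -> voltage, then frequency;
 * scaling down -> frequency, then voltage;
 * same rate    -> frequency only (no regulator touch needed). */
static void transition(unsigned int old_khz, unsigned int new_khz)
{
	n_ops = 0;
	if (new_khz > old_khz)
		set_voltage();	/* volts before hertz on the way up */
	set_frequency();
	if (new_khz < old_khz)
		set_voltage();	/* hertz before volts on the way down */
}
```

The error paths in the real driver follow from the same invariant: if `clk_set_rate()` fails after the voltage was raised, the voltage is rolled back; if the post-frequency voltage drop fails, the clock is restored.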
+364
drivers/cpufreq/cpufreq-dt.c
··· 1 + /* 2 + * Copyright (C) 2012 Freescale Semiconductor, Inc. 3 + * 4 + * Copyright (C) 2014 Linaro. 5 + * Viresh Kumar <viresh.kumar@linaro.org> 6 + * 7 + * The OPP code in function set_target() is reused from 8 + * drivers/cpufreq/omap-cpufreq.c 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 16 + 17 + #include <linux/clk.h> 18 + #include <linux/cpu.h> 19 + #include <linux/cpu_cooling.h> 20 + #include <linux/cpufreq.h> 21 + #include <linux/cpumask.h> 22 + #include <linux/err.h> 23 + #include <linux/module.h> 24 + #include <linux/of.h> 25 + #include <linux/pm_opp.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/regulator/consumer.h> 28 + #include <linux/slab.h> 29 + #include <linux/thermal.h> 30 + 31 + struct private_data { 32 + struct device *cpu_dev; 33 + struct regulator *cpu_reg; 34 + struct thermal_cooling_device *cdev; 35 + unsigned int voltage_tolerance; /* in percentage */ 36 + }; 37 + 38 + static int set_target(struct cpufreq_policy *policy, unsigned int index) 39 + { 40 + struct dev_pm_opp *opp; 41 + struct cpufreq_frequency_table *freq_table = policy->freq_table; 42 + struct clk *cpu_clk = policy->clk; 43 + struct private_data *priv = policy->driver_data; 44 + struct device *cpu_dev = priv->cpu_dev; 45 + struct regulator *cpu_reg = priv->cpu_reg; 46 + unsigned long volt = 0, volt_old = 0, tol = 0; 47 + unsigned int old_freq, new_freq; 48 + long freq_Hz, freq_exact; 49 + int ret; 50 + 51 + freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000); 52 + if (freq_Hz <= 0) 53 + freq_Hz = freq_table[index].frequency * 1000; 54 + 55 + freq_exact = freq_Hz; 56 + new_freq = freq_Hz / 1000; 57 + old_freq = clk_get_rate(cpu_clk) / 1000; 58 + 59 + if (!IS_ERR(cpu_reg)) { 60 + rcu_read_lock(); 61 + opp = 
+ 		dev_pm_opp_find_freq_ceil(cpu_dev, &freq_Hz);
+ 		if (IS_ERR(opp)) {
+ 			rcu_read_unlock();
+ 			dev_err(cpu_dev, "failed to find OPP for %ld\n",
+ 				freq_Hz);
+ 			return PTR_ERR(opp);
+ 		}
+ 		volt = dev_pm_opp_get_voltage(opp);
+ 		rcu_read_unlock();
+ 		tol = volt * priv->voltage_tolerance / 100;
+ 		volt_old = regulator_get_voltage(cpu_reg);
+ 	}
+ 
+ 	dev_dbg(cpu_dev, "%u MHz, %ld mV --> %u MHz, %ld mV\n",
+ 		old_freq / 1000, volt_old ? volt_old / 1000 : -1,
+ 		new_freq / 1000, volt ? volt / 1000 : -1);
+ 
+ 	/* scaling up? scale voltage before frequency */
+ 	if (!IS_ERR(cpu_reg) && new_freq > old_freq) {
+ 		ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
+ 		if (ret) {
+ 			dev_err(cpu_dev, "failed to scale voltage up: %d\n",
+ 				ret);
+ 			return ret;
+ 		}
+ 	}
+ 
+ 	ret = clk_set_rate(cpu_clk, freq_exact);
+ 	if (ret) {
+ 		dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
+ 		if (!IS_ERR(cpu_reg))
+ 			regulator_set_voltage_tol(cpu_reg, volt_old, tol);
+ 		return ret;
+ 	}
+ 
+ 	/* scaling down? scale voltage after frequency */
+ 	if (!IS_ERR(cpu_reg) && new_freq < old_freq) {
+ 		ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
+ 		if (ret) {
+ 			dev_err(cpu_dev, "failed to scale voltage down: %d\n",
+ 				ret);
+ 			clk_set_rate(cpu_clk, old_freq * 1000);
+ 		}
+ 	}
+ 
+ 	return ret;
+ }
+ 
+ static int allocate_resources(int cpu, struct device **cdev,
+ 			      struct regulator **creg, struct clk **cclk)
+ {
+ 	struct device *cpu_dev;
+ 	struct regulator *cpu_reg;
+ 	struct clk *cpu_clk;
+ 	int ret = 0;
+ 	char *reg_cpu0 = "cpu0", *reg_cpu = "cpu", *reg;
+ 
+ 	cpu_dev = get_cpu_device(cpu);
+ 	if (!cpu_dev) {
+ 		pr_err("failed to get cpu%d device\n", cpu);
+ 		return -ENODEV;
+ 	}
+ 
+ 	/* Try "cpu0" for older DTs */
+ 	if (!cpu)
+ 		reg = reg_cpu0;
+ 	else
+ 		reg = reg_cpu;
+ 
+ try_again:
+ 	cpu_reg = regulator_get_optional(cpu_dev, reg);
+ 	if (IS_ERR(cpu_reg)) {
+ 		/*
+ 		 * If cpu's regulator supply node is present, but regulator is
+ 		 * not yet registered, we should try deferring probe.
+ 		 */
+ 		if (PTR_ERR(cpu_reg) == -EPROBE_DEFER) {
+ 			dev_dbg(cpu_dev, "cpu%d regulator not ready, retry\n",
+ 				cpu);
+ 			return -EPROBE_DEFER;
+ 		}
+ 
+ 		/* Try with "cpu-supply" */
+ 		if (reg == reg_cpu0) {
+ 			reg = reg_cpu;
+ 			goto try_again;
+ 		}
+ 
+ 		dev_warn(cpu_dev, "failed to get cpu%d regulator: %ld\n",
+ 			 cpu, PTR_ERR(cpu_reg));
+ 	}
+ 
+ 	cpu_clk = clk_get(cpu_dev, NULL);
+ 	if (IS_ERR(cpu_clk)) {
+ 		/* put regulator */
+ 		if (!IS_ERR(cpu_reg))
+ 			regulator_put(cpu_reg);
+ 
+ 		ret = PTR_ERR(cpu_clk);
+ 
+ 		/*
+ 		 * If cpu's clk node is present, but clock is not yet
+ 		 * registered, we should try deferring probe.
+ 		 */
+ 		if (ret == -EPROBE_DEFER)
+ 			dev_dbg(cpu_dev, "cpu%d clock not ready, retry\n", cpu);
+ 		else
+ 			dev_err(cpu_dev, "failed to get cpu%d clock: %d\n",
+ 				cpu, ret);
+ 	} else {
+ 		*cdev = cpu_dev;
+ 		*creg = cpu_reg;
+ 		*cclk = cpu_clk;
+ 	}
+ 
+ 	return ret;
+ }
+ 
+ static int cpufreq_init(struct cpufreq_policy *policy)
+ {
+ 	struct cpufreq_frequency_table *freq_table;
+ 	struct thermal_cooling_device *cdev;
+ 	struct device_node *np;
+ 	struct private_data *priv;
+ 	struct device *cpu_dev;
+ 	struct regulator *cpu_reg;
+ 	struct clk *cpu_clk;
+ 	unsigned int transition_latency;
+ 	int ret;
+ 
+ 	ret = allocate_resources(policy->cpu, &cpu_dev, &cpu_reg, &cpu_clk);
+ 	if (ret) {
+ 		pr_err("%s: Failed to allocate resources: %d\n", __func__, ret);
+ 		return ret;
+ 	}
+ 
+ 	np = of_node_get(cpu_dev->of_node);
+ 	if (!np) {
+ 		dev_err(cpu_dev, "failed to find cpu%d node\n", policy->cpu);
+ 		ret = -ENOENT;
+ 		goto out_put_reg_clk;
+ 	}
+ 
+ 	/* OPPs might be populated at runtime, don't check for error here */
+ 	of_init_opp_table(cpu_dev);
+ 
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+ 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+ 		goto out_put_node;
+ 	}
+ 
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ 	if (!priv) {
+ 		ret = -ENOMEM;
+ 		goto out_free_table;
+ 	}
+ 
+ 	of_property_read_u32(np, "voltage-tolerance", &priv->voltage_tolerance);
+ 
+ 	if (of_property_read_u32(np, "clock-latency", &transition_latency))
+ 		transition_latency = CPUFREQ_ETERNAL;
+ 
+ 	if (!IS_ERR(cpu_reg)) {
+ 		struct dev_pm_opp *opp;
+ 		unsigned long min_uV, max_uV;
+ 		int i;
+ 
+ 		/*
+ 		 * OPP is maintained in order of increasing frequency, and
+ 		 * freq_table initialised from OPP is therefore sorted in the
+ 		 * same order.
+ 		 */
+ 		for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
+ 			;
+ 		rcu_read_lock();
+ 		opp = dev_pm_opp_find_freq_exact(cpu_dev,
+ 				freq_table[0].frequency * 1000, true);
+ 		min_uV = dev_pm_opp_get_voltage(opp);
+ 		opp = dev_pm_opp_find_freq_exact(cpu_dev,
+ 				freq_table[i-1].frequency * 1000, true);
+ 		max_uV = dev_pm_opp_get_voltage(opp);
+ 		rcu_read_unlock();
+ 		ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
+ 		if (ret > 0)
+ 			transition_latency += ret * 1000;
+ 	}
+ 
+ 	/*
+ 	 * For now, just loading the cooling device;
+ 	 * thermal DT code takes care of matching them.
+ 	 */
+ 	if (of_find_property(np, "#cooling-cells", NULL)) {
+ 		cdev = of_cpufreq_cooling_register(np, cpu_present_mask);
+ 		if (IS_ERR(cdev))
+ 			dev_err(cpu_dev,
+ 				"running cpufreq without cooling device: %ld\n",
+ 				PTR_ERR(cdev));
+ 		else
+ 			priv->cdev = cdev;
+ 	}
+ 
+ 	priv->cpu_dev = cpu_dev;
+ 	priv->cpu_reg = cpu_reg;
+ 	policy->driver_data = priv;
+ 
+ 	policy->clk = cpu_clk;
+ 	ret = cpufreq_generic_init(policy, freq_table, transition_latency);
+ 	if (ret)
+ 		goto out_cooling_unregister;
+ 
+ 	of_node_put(np);
+ 
+ 	return 0;
+ 
+ out_cooling_unregister:
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	kfree(priv);
+ out_free_table:
+ 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+ out_put_node:
+ 	of_node_put(np);
+ out_put_reg_clk:
+ 	clk_put(cpu_clk);
+ 	if (!IS_ERR(cpu_reg))
+ 		regulator_put(cpu_reg);
+ 
+ 	return ret;
+ }
+ 
+ static int cpufreq_exit(struct cpufreq_policy *policy)
+ {
+ 	struct private_data *priv = policy->driver_data;
+ 
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+ 	clk_put(policy->clk);
+ 	if (!IS_ERR(priv->cpu_reg))
+ 		regulator_put(priv->cpu_reg);
+ 	kfree(priv);
+ 
+ 	return 0;
+ }
+ 
+ static struct cpufreq_driver dt_cpufreq_driver = {
+ 	.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+ 	.verify = cpufreq_generic_frequency_table_verify,
+ 	.target_index = set_target,
+ 	.get = cpufreq_generic_get,
+ 	.init = cpufreq_init,
+ 	.exit = cpufreq_exit,
+ 	.name = "cpufreq-dt",
+ 	.attr = cpufreq_generic_attr,
+ };
+ 
+ static int dt_cpufreq_probe(struct platform_device *pdev)
+ {
+ 	struct device *cpu_dev;
+ 	struct regulator *cpu_reg;
+ 	struct clk *cpu_clk;
+ 	int ret;
+ 
+ 	/*
+ 	 * All per-cluster (CPUs sharing clock/voltages) initialization is done
+ 	 * from ->init(). In probe(), we just need to make sure that clk and
+ 	 * regulators are available. Else defer probe and retry.
+ 	 *
+ 	 * FIXME: Is checking this only for CPU0 sufficient ?
+ 	 */
+ 	ret = allocate_resources(0, &cpu_dev, &cpu_reg, &cpu_clk);
+ 	if (ret)
+ 		return ret;
+ 
+ 	clk_put(cpu_clk);
+ 	if (!IS_ERR(cpu_reg))
+ 		regulator_put(cpu_reg);
+ 
+ 	ret = cpufreq_register_driver(&dt_cpufreq_driver);
+ 	if (ret)
+ 		dev_err(cpu_dev, "failed register driver: %d\n", ret);
+ 
+ 	return ret;
+ }
+ 
+ static int dt_cpufreq_remove(struct platform_device *pdev)
+ {
+ 	cpufreq_unregister_driver(&dt_cpufreq_driver);
+ 	return 0;
+ }
+ 
+ static struct platform_driver dt_cpufreq_platdrv = {
+ 	.driver = {
+ 		.name = "cpufreq-dt",
+ 		.owner = THIS_MODULE,
+ 	},
+ 	.probe = dt_cpufreq_probe,
+ 	.remove = dt_cpufreq_remove,
+ };
+ module_platform_driver(dt_cpufreq_platdrv);
+ 
+ MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
+ MODULE_AUTHOR("Shawn Guo <shawn.guo@linaro.org>");
+ MODULE_DESCRIPTION("Generic cpufreq driver");
+ MODULE_LICENSE("GPL");
+4 -4
drivers/cpufreq/cpufreq.c
···
  	struct cpufreq_governor *t;
 
  	list_for_each_entry(t, &cpufreq_governor_list, governor_list)
- 		if (!strnicmp(str_governor, t->name, CPUFREQ_NAME_LEN))
+ 		if (!strncasecmp(str_governor, t->name, CPUFREQ_NAME_LEN))
  			return t;
 
  	return NULL;
···
  		goto out;
 
  	if (cpufreq_driver->setpolicy) {
- 		if (!strnicmp(str_governor, "performance", CPUFREQ_NAME_LEN)) {
+ 		if (!strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN)) {
  			*policy = CPUFREQ_POLICY_PERFORMANCE;
  			err = 0;
- 		} else if (!strnicmp(str_governor, "powersave",
+ 		} else if (!strncasecmp(str_governor, "powersave",
  				CPUFREQ_NAME_LEN)) {
  			*policy = CPUFREQ_POLICY_POWERSAVE;
  			err = 0;
···
  		if (!cpufreq_suspended)
  			pr_debug("%s: policy Kobject moved to cpu: %d from: %d\n",
  				 __func__, new_cpu, cpu);
- 	} else if (cpufreq_driver->stop_cpu && cpufreq_driver->setpolicy) {
+ 	} else if (cpufreq_driver->stop_cpu) {
  		cpufreq_driver->stop_cpu(policy);
  	}
+1 -1
drivers/cpufreq/exynos4210-cpufreq.c
···
  	 * dependencies on platform headers. It is necessary to enable
  	 * Exynos multi-platform support and will be removed together with
  	 * this whole driver as soon as Exynos gets migrated to use
- 	 * cpufreq-cpu0 driver.
+ 	 * cpufreq-dt driver.
  	 */
  	np = of_find_compatible_node(NULL, NULL, "samsung,exynos4210-clock");
  	if (!np) {
+1 -1
drivers/cpufreq/exynos4x12-cpufreq.c
···
  	 * dependencies on platform headers. It is necessary to enable
  	 * Exynos multi-platform support and will be removed together with
  	 * this whole driver as soon as Exynos gets migrated to use
- 	 * cpufreq-cpu0 driver.
+ 	 * cpufreq-dt driver.
  	 */
  	np = of_find_compatible_node(NULL, NULL, "samsung,exynos4412-clock");
  	if (!np) {
+1 -1
drivers/cpufreq/exynos5250-cpufreq.c
···
  	 * dependencies on platform headers. It is necessary to enable
  	 * Exynos multi-platform support and will be removed together with
  	 * this whole driver as soon as Exynos gets migrated to use
- 	 * cpufreq-cpu0 driver.
+ 	 * cpufreq-dt driver.
  	 */
  	np = of_find_compatible_node(NULL, NULL, "samsung,exynos5250-clock");
  	if (!np) {
+3 -3
drivers/cpufreq/highbank-cpufreq.c
···
   * published by the Free Software Foundation.
   *
   * This driver provides the clk notifier callbacks that are used when
-  * the cpufreq-cpu0 driver changes to frequency to alert the highbank
+  * the cpufreq-dt driver changes to frequency to alert the highbank
   * EnergyCore Management Engine (ECME) about the need to change
   * voltage. The ECME interfaces with the actual voltage regulators.
   */
···
  static int hb_cpufreq_driver_init(void)
  {
- 	struct platform_device_info devinfo = { .name = "cpufreq-cpu0", };
+ 	struct platform_device_info devinfo = { .name = "cpufreq-dt", };
  	struct device *cpu_dev;
  	struct clk *cpu_clk;
  	struct device_node *np;
···
  		goto out_put_node;
  	}
 
- 	/* Instantiate cpufreq-cpu0 */
+ 	/* Instantiate cpufreq-dt */
  	platform_device_register_full(&devinfo);
 
  out_put_node:
+44
drivers/cpufreq/powernv-cpufreq.c
···
  #include <linux/cpufreq.h>
  #include <linux/smp.h>
  #include <linux/of.h>
+ #include <linux/reboot.h>
 
  #include <asm/cputhreads.h>
  #include <asm/firmware.h>
···
  #define POWERNV_MAX_PSTATES	256
 
  static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1];
+ static bool rebooting;
 
  /*
   * Note: The set of pstates consists of contiguous integers, the
···
  }
 
  /*
+  * get_nominal_index: Returns the index corresponding to the nominal
+  * pstate in the cpufreq table
+  */
+ static inline unsigned int get_nominal_index(void)
+ {
+ 	return powernv_pstate_info.max - powernv_pstate_info.nominal;
+ }
+ 
+ /*
   * powernv_cpufreq_target_index: Sets the frequency corresponding to
   * the cpufreq table entry indexed by new_index on the cpus in the
   * mask policy->cpus
···
  		unsigned int new_index)
  {
  	struct powernv_smp_call_data freq_data;
+ 
+ 	if (unlikely(rebooting) && new_index != get_nominal_index())
+ 		return 0;
 
  	freq_data.pstate_id = powernv_freqs[new_index].driver_data;
···
  	return cpufreq_table_validate_and_show(policy, powernv_freqs);
  }
 
+ static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
+ 				unsigned long action, void *unused)
+ {
+ 	int cpu;
+ 	struct cpufreq_policy cpu_policy;
+ 
+ 	rebooting = true;
+ 	for_each_online_cpu(cpu) {
+ 		cpufreq_get_policy(&cpu_policy, cpu);
+ 		powernv_cpufreq_target_index(&cpu_policy, get_nominal_index());
+ 	}
+ 
+ 	return NOTIFY_DONE;
+ }
+ 
+ static struct notifier_block powernv_cpufreq_reboot_nb = {
+ 	.notifier_call = powernv_cpufreq_reboot_notifier,
+ };
+ 
+ static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
+ {
+ 	struct powernv_smp_call_data freq_data;
+ 
+ 	freq_data.pstate_id = powernv_pstate_info.min;
+ 	smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
+ }
+ 
  static struct cpufreq_driver powernv_cpufreq_driver = {
  	.name		= "powernv-cpufreq",
  	.flags		= CPUFREQ_CONST_LOOPS,
···
  	.verify		= cpufreq_generic_frequency_table_verify,
  	.target_index	= powernv_cpufreq_target_index,
  	.get		= powernv_cpufreq_get,
+ 	.stop_cpu	= powernv_cpufreq_stop_cpu,
  	.attr		= powernv_cpu_freq_attr,
  };
···
  		return rc;
  	}
 
+ 	register_reboot_notifier(&powernv_cpufreq_reboot_nb);
  	return cpufreq_register_driver(&powernv_cpufreq_driver);
  }
  module_init(powernv_cpufreq_init);
 
  static void __exit powernv_cpufreq_exit(void)
  {
+ 	unregister_reboot_notifier(&powernv_cpufreq_reboot_nb);
  	cpufreq_unregister_driver(&powernv_cpufreq_driver);
  }
  module_exit(powernv_cpufreq_exit);
-1
drivers/cpufreq/ppc-corenet-cpufreq.c
···
  	}
 
  	data->table = table;
- 	per_cpu(cpu_data, cpu) = data;
 
  	/* update ->cpus if we have cluster, no harm if not */
  	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
+1 -1
drivers/cpufreq/s5pv210-cpufreq.c
···
  	 * and dependencies on platform headers. It is necessary to enable
  	 * S5PV210 multi-platform support and will be removed together with
  	 * this whole driver as soon as S5PV210 gets migrated to use
- 	 * cpufreq-cpu0 driver.
+ 	 * cpufreq-dt driver.
  	 */
  	np = of_find_compatible_node(NULL, NULL, "samsung,s5pv210-clock");
  	if (!np) {
+8
drivers/cpuidle/Kconfig
···
  	bool "Menu governor (for tickless system)"
  	default y
 
+ config DT_IDLE_STATES
+ 	bool
+ 
  menu "ARM CPU Idle Drivers"
  depends on ARM
  source "drivers/cpuidle/Kconfig.arm"
+ endmenu
+ 
+ menu "ARM64 CPU Idle Drivers"
+ depends on ARM64
+ source "drivers/cpuidle/Kconfig.arm64"
  endmenu
 
  menu "MIPS CPU Idle Drivers"
+1
drivers/cpuidle/Kconfig.arm
···
  	depends on MCPM
  	select ARM_CPU_SUSPEND
  	select CPU_IDLE_MULTIPLE_DRIVERS
+ 	select DT_IDLE_STATES
  	help
  	  Select this option to enable CPU idle driver for big.LITTLE based
  	  ARM systems. Driver manages CPUs coordination through MCPM and
+14
drivers/cpuidle/Kconfig.arm64
···
+ #
+ # ARM64 CPU Idle drivers
+ #
+ 
+ config ARM64_CPUIDLE
+ 	bool "Generic ARM64 CPU idle Driver"
+ 	select ARM64_CPU_SUSPEND
+ 	select DT_IDLE_STATES
+ 	help
+ 	  Select this to enable generic cpuidle driver for ARM64.
+ 	  It provides a generic idle driver whose idle states are configured
+ 	  at run-time through DT nodes. The CPUidle suspend backend is
+ 	  initialized by calling the CPU operations init idle hook
+ 	  provided by architecture code.
+5
drivers/cpuidle/Makefile
···
  obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
  obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
+ obj-$(CONFIG_DT_IDLE_STATES)		  += dt_idle_states.o
 
  ##################################################################################
  # ARM SoC drivers
···
  ###############################################################################
  # MIPS drivers
  obj-$(CONFIG_MIPS_CPS_CPUIDLE) += cpuidle-cps.o
+ 
+ ###############################################################################
+ # ARM64 drivers
+ obj-$(CONFIG_ARM64_CPUIDLE)		+= cpuidle-arm64.o
 
  ###############################################################################
  # POWERPC drivers
+133
drivers/cpuidle/cpuidle-arm64.c
···
+ /*
+  * ARM64 generic CPU idle driver.
+  *
+  * Copyright (C) 2014 ARM Ltd.
+  * Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
+ 
+ #define pr_fmt(fmt) "CPUidle arm64: " fmt
+ 
+ #include <linux/cpuidle.h>
+ #include <linux/cpumask.h>
+ #include <linux/cpu_pm.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ 
+ #include <asm/cpuidle.h>
+ #include <asm/suspend.h>
+ 
+ #include "dt_idle_states.h"
+ 
+ /*
+  * arm64_enter_idle_state - Programs CPU to enter the specified state
+  *
+  * dev: cpuidle device
+  * drv: cpuidle driver
+  * idx: state index
+  *
+  * Called from the CPUidle framework to program the device to the
+  * specified target state selected by the governor.
+  */
+ static int arm64_enter_idle_state(struct cpuidle_device *dev,
+ 				  struct cpuidle_driver *drv, int idx)
+ {
+ 	int ret;
+ 
+ 	if (!idx) {
+ 		cpu_do_idle();
+ 		return idx;
+ 	}
+ 
+ 	ret = cpu_pm_enter();
+ 	if (!ret) {
+ 		/*
+ 		 * Pass idle state index to cpu_suspend which in turn will
+ 		 * call the CPU ops suspend protocol with idle index as a
+ 		 * parameter.
+ 		 */
+ 		ret = cpu_suspend(idx);
+ 
+ 		cpu_pm_exit();
+ 	}
+ 
+ 	return ret ? -1 : idx;
+ }
+ 
+ static struct cpuidle_driver arm64_idle_driver = {
+ 	.name = "arm64_idle",
+ 	.owner = THIS_MODULE,
+ 	/*
+ 	 * State at index 0 is standby wfi and considered standard
+ 	 * on all ARM platforms. If in some platforms simple wfi
+ 	 * can't be used as "state 0", DT bindings must be implemented
+ 	 * to work around this issue and allow installing a special
+ 	 * handler for idle state index 0.
+ 	 */
+ 	.states[0] = {
+ 		.enter = arm64_enter_idle_state,
+ 		.exit_latency = 1,
+ 		.target_residency = 1,
+ 		.power_usage = UINT_MAX,
+ 		.flags = CPUIDLE_FLAG_TIME_VALID,
+ 		.name = "WFI",
+ 		.desc = "ARM64 WFI",
+ 	}
+ };
+ 
+ static const struct of_device_id arm64_idle_state_match[] __initconst = {
+ 	{ .compatible = "arm,idle-state",
+ 	  .data = arm64_enter_idle_state },
+ 	{ },
+ };
+ 
+ /*
+  * arm64_idle_init
+  *
+  * Registers the arm64 specific cpuidle driver with the cpuidle
+  * framework. It relies on core code to parse the idle states
+  * and initialize them using driver data structures accordingly.
+  */
+ static int __init arm64_idle_init(void)
+ {
+ 	int cpu, ret;
+ 	struct cpuidle_driver *drv = &arm64_idle_driver;
+ 
+ 	/*
+ 	 * Initialize idle states data, starting at index 1.
+ 	 * This driver is DT only, if no DT idle states are detected (ret == 0)
+ 	 * let the driver initialization fail accordingly since there is no
+ 	 * reason to initialize the idle driver if only wfi is supported.
+ 	 */
+ 	ret = dt_init_idle_driver(drv, arm64_idle_state_match, 1);
+ 	if (ret <= 0) {
+ 		if (ret)
+ 			pr_err("failed to initialize idle states\n");
+ 		return ret ? : -ENODEV;
+ 	}
+ 
+ 	/*
+ 	 * Call arch CPU operations in order to initialize
+ 	 * idle states suspend back-end specific data
+ 	 */
+ 	for_each_possible_cpu(cpu) {
+ 		ret = cpu_init_idle(cpu);
+ 		if (ret) {
+ 			pr_err("CPU %d failed to init idle CPU ops\n", cpu);
+ 			return ret;
+ 		}
+ 	}
+ 
+ 	ret = cpuidle_register(drv, NULL);
+ 	if (ret) {
+ 		pr_err("failed to register cpuidle driver\n");
+ 		return ret;
+ 	}
+ 
+ 	return 0;
+ }
+ device_initcall(arm64_idle_init);
+20
drivers/cpuidle/cpuidle-big_little.c
···
  #include <asm/smp_plat.h>
  #include <asm/suspend.h>
 
+ #include "dt_idle_states.h"
+ 
  static int bl_enter_powerdown(struct cpuidle_device *dev,
  			struct cpuidle_driver *drv, int idx);
···
  		.desc = "ARM little-cluster power down",
  	},
  	.state_count = 2,
+ };
+ 
+ static const struct of_device_id bl_idle_state_match[] __initconst = {
+ 	{ .compatible = "arm,idle-state",
+ 	  .data = bl_enter_powerdown },
+ 	{ },
  };
 
  static struct cpuidle_driver bl_idle_big_driver = {
···
  static const struct of_device_id compatible_machine_match[] = {
  	{ .compatible = "arm,vexpress,v2p-ca15_a7" },
  	{ .compatible = "samsung,exynos5420" },
+ 	{ .compatible = "samsung,exynos5800" },
  	{},
  };
···
  	ret = bl_idle_driver_init(&bl_idle_big_driver, ARM_CPU_PART_CORTEX_A15);
  	if (ret)
  		goto out_uninit_little;
+ 
+ 	/* Start at index 1, index 0 standard WFI */
+ 	ret = dt_init_idle_driver(&bl_idle_big_driver, bl_idle_state_match, 1);
+ 	if (ret < 0)
+ 		goto out_uninit_big;
+ 
+ 	/* Start at index 1, index 0 standard WFI */
+ 	ret = dt_init_idle_driver(&bl_idle_little_driver,
+ 				  bl_idle_state_match, 1);
+ 	if (ret < 0)
+ 		goto out_uninit_big;
 
  	ret = cpuidle_register(&bl_idle_little_driver, NULL);
  	if (ret)
+213
drivers/cpuidle/dt_idle_states.c
···
+ /*
+  * DT idle states parsing code.
+  *
+  * Copyright (C) 2014 ARM Ltd.
+  * Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
+ 
+ #define pr_fmt(fmt) "DT idle-states: " fmt
+ 
+ #include <linux/cpuidle.h>
+ #include <linux/cpumask.h>
+ #include <linux/errno.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ 
+ #include "dt_idle_states.h"
+ 
+ static int init_state_node(struct cpuidle_state *idle_state,
+ 			   const struct of_device_id *matches,
+ 			   struct device_node *state_node)
+ {
+ 	int err;
+ 	const struct of_device_id *match_id;
+ 
+ 	match_id = of_match_node(matches, state_node);
+ 	if (!match_id)
+ 		return -ENODEV;
+ 	/*
+ 	 * CPUidle drivers are expected to initialize the const void *data
+ 	 * pointer of the passed in struct of_device_id array to the idle
+ 	 * state enter function.
+ 	 */
+ 	idle_state->enter = match_id->data;
+ 
+ 	err = of_property_read_u32(state_node, "wakeup-latency-us",
+ 				   &idle_state->exit_latency);
+ 	if (err) {
+ 		u32 entry_latency, exit_latency;
+ 
+ 		err = of_property_read_u32(state_node, "entry-latency-us",
+ 					   &entry_latency);
+ 		if (err) {
+ 			pr_debug(" * %s missing entry-latency-us property\n",
+ 				 state_node->full_name);
+ 			return -EINVAL;
+ 		}
+ 
+ 		err = of_property_read_u32(state_node, "exit-latency-us",
+ 					   &exit_latency);
+ 		if (err) {
+ 			pr_debug(" * %s missing exit-latency-us property\n",
+ 				 state_node->full_name);
+ 			return -EINVAL;
+ 		}
+ 		/*
+ 		 * If wakeup-latency-us is missing, default to entry+exit
+ 		 * latencies as defined in idle states bindings
+ 		 */
+ 		idle_state->exit_latency = entry_latency + exit_latency;
+ 	}
+ 
+ 	err = of_property_read_u32(state_node, "min-residency-us",
+ 				   &idle_state->target_residency);
+ 	if (err) {
+ 		pr_debug(" * %s missing min-residency-us property\n",
+ 			 state_node->full_name);
+ 		return -EINVAL;
+ 	}
+ 
+ 	idle_state->flags = CPUIDLE_FLAG_TIME_VALID;
+ 	if (of_property_read_bool(state_node, "local-timer-stop"))
+ 		idle_state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+ 	/*
+ 	 * TODO:
+ 	 *	replace with kstrdup and pointer assignment when name
+ 	 *	and desc become string pointers
+ 	 */
+ 	strncpy(idle_state->name, state_node->name, CPUIDLE_NAME_LEN - 1);
+ 	strncpy(idle_state->desc, state_node->name, CPUIDLE_DESC_LEN - 1);
+ 	return 0;
+ }
+ 
+ /*
+  * Check that the idle state is uniform across all CPUs in the CPUidle driver
+  * cpumask
+  */
+ static bool idle_state_valid(struct device_node *state_node, unsigned int idx,
+ 			     const cpumask_t *cpumask)
+ {
+ 	int cpu;
+ 	struct device_node *cpu_node, *curr_state_node;
+ 	bool valid = true;
+ 
+ 	/*
+ 	 * Compare idle state phandles for index idx on all CPUs in the
+ 	 * CPUidle driver cpumask. Start from next logical cpu following
+ 	 * cpumask_first(cpumask) since that's the CPU state_node was
+ 	 * retrieved from. If a mismatch is found bail out straight
+ 	 * away since we certainly hit a firmware misconfiguration.
+ 	 */
+ 	for (cpu = cpumask_next(cpumask_first(cpumask), cpumask);
+ 	     cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpumask)) {
+ 		cpu_node = of_cpu_device_node_get(cpu);
+ 		curr_state_node = of_parse_phandle(cpu_node, "cpu-idle-states",
+ 						   idx);
+ 		if (state_node != curr_state_node)
+ 			valid = false;
+ 
+ 		of_node_put(curr_state_node);
+ 		of_node_put(cpu_node);
+ 		if (!valid)
+ 			break;
+ 	}
+ 
+ 	return valid;
+ }
+ 
+ /**
+  * dt_init_idle_driver() - Parse the DT idle states and initialize the
+  *			   idle driver states array
+  * @drv:	  Pointer to CPU idle driver to be initialized
+  * @matches:	  Array of of_device_id match structures to search in for
+  *		  compatible idle state nodes. The data pointer for each valid
+  *		  struct of_device_id entry in the matches array must point to
+  *		  a function with the following signature, that corresponds to
+  *		  the CPUidle state enter function signature:
+  *
+  *		  int (*)(struct cpuidle_device *dev,
+  *			  struct cpuidle_driver *drv,
+  *			  int index);
+  *
+  * @start_idx:    First idle state index to be initialized
+  *
+  * If DT idle states are detected and are valid the state count and states
+  * array entries in the cpuidle driver are initialized accordingly starting
+  * from index start_idx.
+  *
+  * Return: number of valid DT idle states parsed, <0 on failure
+  */
+ int dt_init_idle_driver(struct cpuidle_driver *drv,
+ 			const struct of_device_id *matches,
+ 			unsigned int start_idx)
+ {
+ 	struct cpuidle_state *idle_state;
+ 	struct device_node *state_node, *cpu_node;
+ 	int i, err = 0;
+ 	const cpumask_t *cpumask;
+ 	unsigned int state_idx = start_idx;
+ 
+ 	if (state_idx >= CPUIDLE_STATE_MAX)
+ 		return -EINVAL;
+ 	/*
+ 	 * We get the idle states for the first logical cpu in the
+ 	 * driver mask (or cpu_possible_mask if the driver cpumask is not set)
+ 	 * and we check through idle_state_valid() if they are uniform
+ 	 * across CPUs, otherwise we hit a firmware misconfiguration.
+ 	 */
+ 	cpumask = drv->cpumask ? : cpu_possible_mask;
+ 	cpu_node = of_cpu_device_node_get(cpumask_first(cpumask));
+ 
+ 	for (i = 0; ; i++) {
+ 		state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i);
+ 		if (!state_node)
+ 			break;
+ 
+ 		if (!idle_state_valid(state_node, i, cpumask)) {
+ 			pr_warn("%s idle state not valid, bailing out\n",
+ 				state_node->full_name);
+ 			err = -EINVAL;
+ 			break;
+ 		}
+ 
+ 		if (state_idx == CPUIDLE_STATE_MAX) {
+ 			pr_warn("State index reached static CPU idle driver states array size\n");
+ 			break;
+ 		}
+ 
+ 		idle_state = &drv->states[state_idx++];
+ 		err = init_state_node(idle_state, matches, state_node);
+ 		if (err) {
+ 			pr_err("Parsing idle state node %s failed with err %d\n",
+ 			       state_node->full_name, err);
+ 			err = -EINVAL;
+ 			break;
+ 		}
+ 		of_node_put(state_node);
+ 	}
+ 
+ 	of_node_put(state_node);
+ 	of_node_put(cpu_node);
+ 	if (err)
+ 		return err;
+ 	/*
+ 	 * Update the driver state count only if some valid DT idle states
+ 	 * were detected
+ 	 */
+ 	if (i)
+ 		drv->state_count = state_idx;
+ 
+ 	/*
+ 	 * Return the number of present and valid DT idle states, which can
+ 	 * also be 0 on platforms with missing DT idle states or legacy DT
+ 	 * configuration predating the DT idle states bindings.
+ 	 */
+ 	return i;
+ }
+ EXPORT_SYMBOL_GPL(dt_init_idle_driver);
+7
drivers/cpuidle/dt_idle_states.h
···
+ #ifndef __DT_IDLE_STATES
+ #define __DT_IDLE_STATES
+ 
+ int dt_init_idle_driver(struct cpuidle_driver *drv,
+ 			const struct of_device_id *matches,
+ 			unsigned int start_idx);
+ #endif
+1 -1
drivers/cpuidle/governor.c
···
  	struct cpuidle_governor *gov;
 
  	list_for_each_entry(gov, &cpuidle_governors, governor_list)
- 		if (!strnicmp(str, gov->name, CPUIDLE_NAME_LEN))
+ 		if (!strncasecmp(str, gov->name, CPUIDLE_NAME_LEN))
  			return gov;
 
  	return NULL;
+1 -2
drivers/devfreq/Kconfig
···
  	  This does not yet operate with optimal voltages.
 
  config ARM_EXYNOS5_BUS_DEVFREQ
- 	bool "ARM Exynos5250 Bus DEVFREQ Driver"
+ 	tristate "ARM Exynos5250 Bus DEVFREQ Driver"
  	depends on SOC_EXYNOS5250
- 	select ARCH_HAS_OPP
  	select DEVFREQ_GOV_SIMPLE_ONDEMAND
  	select PM_OPP
  	help
+3
drivers/devfreq/devfreq.c
···
 
  	return opp;
  }
+ EXPORT_SYMBOL(devfreq_recommended_opp);
 
  /**
   * devfreq_register_opp_notifier() - Helper function to get devfreq notified
···
 
  	return ret;
  }
+ EXPORT_SYMBOL(devfreq_register_opp_notifier);
 
  /**
   * devfreq_unregister_opp_notifier() - Helper function to stop getting devfreq
···
 
  	return ret;
  }
+ EXPORT_SYMBOL(devfreq_unregister_opp_notifier);
 
  static void devm_devfreq_opp_release(struct device *dev, void *res)
  {
+3
drivers/devfreq/exynos/exynos_ppmu.c
···
  		exynos_ppmu_start(ppmu_base);
  	}
  }
+ EXPORT_SYMBOL(busfreq_mon_reset);
 
  void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data)
  {
···
 
  	busfreq_mon_reset(ppmu_data);
  }
+ EXPORT_SYMBOL(exynos_read_ppmu);
 
  int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data)
  {
···
 
  	return busy;
  }
+ EXPORT_SYMBOL(exynos_get_busier_ppmu);
+9 -5
drivers/i2c/i2c-core.c
···
  #include <linux/irqflags.h>
  #include <linux/rwsem.h>
  #include <linux/pm_runtime.h>
+ #include <linux/pm_domain.h>
  #include <linux/acpi.h>
  #include <linux/jump_label.h>
  #include <asm/uaccess.h>
···
  	if (status < 0)
  		return status;
 
- 	acpi_dev_pm_attach(&client->dev, true);
- 	status = driver->probe(client, i2c_match_id(driver->id_table, client));
- 	if (status)
- 		acpi_dev_pm_detach(&client->dev, true);
+ 	status = dev_pm_domain_attach(&client->dev, true);
+ 	if (status != -EPROBE_DEFER) {
+ 		status = driver->probe(client, i2c_match_id(driver->id_table,
+ 					client));
+ 		if (status)
+ 			dev_pm_domain_detach(&client->dev, true);
+ 	}
 
  	return status;
  }
···
  		status = driver->remove(client);
  	}
 
- 	acpi_dev_pm_detach(&client->dev, true);
+ 	dev_pm_domain_detach(&client->dev, true);
  	return status;
  }
+3 -2
drivers/mmc/core/sdio_bus.c
···
  #include <linux/export.h>
  #include <linux/slab.h>
  #include <linux/pm_runtime.h>
+ #include <linux/pm_domain.h>
  #include <linux/acpi.h>
 
  #include <linux/mmc/card.h>
···
  	ret = device_add(&func->dev);
  	if (ret == 0) {
  		sdio_func_set_present(func);
- 		acpi_dev_pm_attach(&func->dev, false);
+ 		dev_pm_domain_attach(&func->dev, false);
  	}
 
  	return ret;
···
  	if (!sdio_func_present(func))
  		return;
 
- 	acpi_dev_pm_detach(&func->dev, false);
+ 	dev_pm_domain_detach(&func->dev, false);
  	device_del(&func->dev);
  	put_device(&func->dev);
  }
+51 -10
drivers/pci/pcie/pme.c
···
  }
  __setup("pcie_pme=", pcie_pme_setup);
 
+ enum pme_suspend_level {
+ 	PME_SUSPEND_NONE = 0,
+ 	PME_SUSPEND_WAKEUP,
+ 	PME_SUSPEND_NOIRQ,
+ };
+ 
  struct pcie_pme_service_data {
  	spinlock_t lock;
  	struct pcie_device *srv;
  	struct work_struct work;
- 	bool noirq; /* Don't enable the PME interrupt used by this service. */
+ 	enum pme_suspend_level suspend_level;
  };
···
  	spin_lock_irq(&data->lock);
 
  	for (;;) {
- 		if (data->noirq)
+ 		if (data->suspend_level != PME_SUSPEND_NONE)
  			break;
 
  		pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);
···
  		spin_lock_irq(&data->lock);
  	}
 
- 	if (!data->noirq)
+ 	if (data->suspend_level == PME_SUSPEND_NONE)
  		pcie_pme_interrupt_enable(port, true);
 
  	spin_unlock_irq(&data->lock);
···
  	return ret;
  }
 
+ static bool pcie_pme_check_wakeup(struct pci_bus *bus)
+ {
+ 	struct pci_dev *dev;
+ 
+ 	if (!bus)
+ 		return false;
+ 
+ 	list_for_each_entry(dev, &bus->devices, bus_list)
+ 		if (device_may_wakeup(&dev->dev)
+ 		    || pcie_pme_check_wakeup(dev->subordinate))
+ 			return true;
+ 
+ 	return false;
+ }
+ 
  /**
   * pcie_pme_suspend - Suspend PCIe PME service device.
   * @srv: PCIe service device to suspend.
···
  {
  	struct pcie_pme_service_data *data = get_service_data(srv);
  	struct pci_dev *port = srv->port;
+ 	bool wakeup;
 
+ 	if (device_may_wakeup(&port->dev)) {
+ 		wakeup = true;
+ 	} else {
+ 		down_read(&pci_bus_sem);
+ 		wakeup = pcie_pme_check_wakeup(port->subordinate);
+ 		up_read(&pci_bus_sem);
+ 	}
  	spin_lock_irq(&data->lock);
- 	pcie_pme_interrupt_enable(port, false);
- 	pcie_clear_root_pme_status(port);
- 	data->noirq = true;
+ 	if (wakeup) {
+ 		enable_irq_wake(srv->irq);
+ 		data->suspend_level = PME_SUSPEND_WAKEUP;
+ 	} else {
+ 		struct pci_dev *port = srv->port;
+ 
+ 		pcie_pme_interrupt_enable(port, false);
+ 		pcie_clear_root_pme_status(port);
+ 		data->suspend_level = PME_SUSPEND_NOIRQ;
+ 	}
  	spin_unlock_irq(&data->lock);
 
  	synchronize_irq(srv->irq);
···
  static int pcie_pme_resume(struct pcie_device *srv)
  {
  	struct pcie_pme_service_data *data = get_service_data(srv);
- 	struct pci_dev *port = srv->port;
 
  	spin_lock_irq(&data->lock);
- 	data->noirq = false;
- 	pcie_clear_root_pme_status(port);
- 	pcie_pme_interrupt_enable(port, true);
+ 	if (data->suspend_level == PME_SUSPEND_NOIRQ) {
+ 		struct pci_dev *port = srv->port;
+ 
+ 		pcie_clear_root_pme_status(port);
+ 		pcie_pme_interrupt_enable(port, true);
+ 	} else {
+ 		disable_irq_wake(srv->irq);
+ 	}
+ 	data->suspend_level = PME_SUSPEND_NONE;
  	spin_unlock_irq(&data->lock);
 
  	return 0;
+7 -9
drivers/platform/x86/fujitsu-laptop.c
··· 1050 1050 }, 1051 1051 }; 1052 1052 1053 + static const struct acpi_device_id fujitsu_ids[] __used = { 1054 + {ACPI_FUJITSU_HID, 0}, 1055 + {ACPI_FUJITSU_HOTKEY_HID, 0}, 1056 + {"", 0} 1057 + }; 1058 + MODULE_DEVICE_TABLE(acpi, fujitsu_ids); 1059 + 1053 1060 static int __init fujitsu_init(void) 1054 1061 { 1055 1062 int ret, result, max_brightness; ··· 1215 1208 MODULE_ALIAS("dmi:*:svnFUJITSUSIEMENS:*:pvr:rvnFUJITSU:rnFJNB1D3:*:cvrS6410:*"); 1216 1209 MODULE_ALIAS("dmi:*:svnFUJITSUSIEMENS:*:pvr:rvnFUJITSU:rnFJNB1E6:*:cvrS6420:*"); 1217 1210 MODULE_ALIAS("dmi:*:svnFUJITSU:*:pvr:rvnFUJITSU:rnFJNB19C:*:cvrS7020:*"); 1218 - 1219 - static struct pnp_device_id pnp_ids[] __used = { 1220 - {.id = "FUJ02bf"}, 1221 - {.id = "FUJ02B1"}, 1222 - {.id = "FUJ02E3"}, 1223 - {.id = ""} 1224 - }; 1225 - 1226 - MODULE_DEVICE_TABLE(pnp, pnp_ids);
+8
drivers/power/avs/Kconfig
··· 10 10 AVS is also called SmartReflex on OMAP devices. 11 11 12 12 Say Y here to enable Adaptive Voltage Scaling class support. 13 + 14 + config ROCKCHIP_IODOMAIN 15 + tristate "Rockchip IO domain support" 16 + depends on ARCH_ROCKCHIP && OF 17 + help 18 + Say y here to enable support for IO domains on Rockchip SoCs. It is 19 + necessary for the IO domain setting of the SoC to match the 20 + voltage supplied by the regulators.
+1
drivers/power/avs/Makefile
··· 1 1 obj-$(CONFIG_POWER_AVS_OMAP) += smartreflex.o 2 + obj-$(CONFIG_ROCKCHIP_IODOMAIN) += rockchip-io-domain.o
+351
drivers/power/avs/rockchip-io-domain.c
··· 1 + /* 2 + * Rockchip IO Voltage Domain driver 3 + * 4 + * Copyright 2014 MundoReader S.L. 5 + * Copyright 2014 Google, Inc. 6 + * 7 + * This software is licensed under the terms of the GNU General Public 8 + * License version 2, as published by the Free Software Foundation, and 9 + * may be copied, distributed, and modified under those terms. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + */ 16 + 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/err.h> 20 + #include <linux/mfd/syscon.h> 21 + #include <linux/of.h> 22 + #include <linux/platform_device.h> 23 + #include <linux/regmap.h> 24 + #include <linux/regulator/consumer.h> 25 + 26 + #define MAX_SUPPLIES 16 27 + 28 + /* 29 + * The max voltage for 1.8V and 3.3V come from the Rockchip datasheet under 30 + * "Recommended Operating Conditions" for "Digital GPIO". When the typical 31 + * is 3.3V the max is 3.6V. When the typical is 1.8V the max is 1.98V. 32 + * 33 + * They are used like this: 34 + * - If the voltage on a rail is above the "1.8" voltage (1.98V) we'll tell the 35 + * SoC we're at 3.3. 36 + * - If the voltage on a rail is above the "3.3" voltage (3.6V) we'll consider 37 + * that to be an error. 38 + */ 39 + #define MAX_VOLTAGE_1_8 1980000 40 + #define MAX_VOLTAGE_3_3 3600000 41 + 42 + #define RK3288_SOC_CON2 0x24c 43 + #define RK3288_SOC_CON2_FLASH0 BIT(7) 44 + #define RK3288_SOC_FLASH_SUPPLY_NUM 2 45 + 46 + struct rockchip_iodomain; 47 + 48 + /** 49 + * @supplies: voltage settings matching the register bits. 
50 + */ 51 + struct rockchip_iodomain_soc_data { 52 + int grf_offset; 53 + const char *supply_names[MAX_SUPPLIES]; 54 + void (*init)(struct rockchip_iodomain *iod); 55 + }; 56 + 57 + struct rockchip_iodomain_supply { 58 + struct rockchip_iodomain *iod; 59 + struct regulator *reg; 60 + struct notifier_block nb; 61 + int idx; 62 + }; 63 + 64 + struct rockchip_iodomain { 65 + struct device *dev; 66 + struct regmap *grf; 67 + struct rockchip_iodomain_soc_data *soc_data; 68 + struct rockchip_iodomain_supply supplies[MAX_SUPPLIES]; 69 + }; 70 + 71 + static int rockchip_iodomain_write(struct rockchip_iodomain_supply *supply, 72 + int uV) 73 + { 74 + struct rockchip_iodomain *iod = supply->iod; 75 + u32 val; 76 + int ret; 77 + 78 + /* set value bit */ 79 + val = (uV > MAX_VOLTAGE_1_8) ? 0 : 1; 80 + val <<= supply->idx; 81 + 82 + /* apply hiword-mask */ 83 + val |= (BIT(supply->idx) << 16); 84 + 85 + ret = regmap_write(iod->grf, iod->soc_data->grf_offset, val); 86 + if (ret) 87 + dev_err(iod->dev, "Couldn't write to GRF\n"); 88 + 89 + return ret; 90 + } 91 + 92 + static int rockchip_iodomain_notify(struct notifier_block *nb, 93 + unsigned long event, 94 + void *data) 95 + { 96 + struct rockchip_iodomain_supply *supply = 97 + container_of(nb, struct rockchip_iodomain_supply, nb); 98 + int uV; 99 + int ret; 100 + 101 + /* 102 + * According to Rockchip it's important to keep the SoC IO domain 103 + * higher than (or equal to) the external voltage. That means we need 104 + * to change it before external voltage changes happen in the case 105 + * of an increase. 106 + * 107 + * Note that in the "pre" change we pick the max possible voltage that 108 + * the regulator might end up at (the client requests a range and we 109 + * don't know for certain the exact voltage). Right now we rely on the 110 + * slop in MAX_VOLTAGE_1_8 and MAX_VOLTAGE_3_3 to save us if clients 111 + * request something like a max of 3.6V when they really want 3.3V. 
112 + * We could attempt to come up with better rules if this fails. 113 + */ 114 + if (event & REGULATOR_EVENT_PRE_VOLTAGE_CHANGE) { 115 + struct pre_voltage_change_data *pvc_data = data; 116 + 117 + uV = max_t(unsigned long, pvc_data->old_uV, pvc_data->max_uV); 118 + } else if (event & (REGULATOR_EVENT_VOLTAGE_CHANGE | 119 + REGULATOR_EVENT_ABORT_VOLTAGE_CHANGE)) { 120 + uV = (unsigned long)data; 121 + } else { 122 + return NOTIFY_OK; 123 + } 124 + 125 + dev_dbg(supply->iod->dev, "Setting to %d\n", uV); 126 + 127 + if (uV > MAX_VOLTAGE_3_3) { 128 + dev_err(supply->iod->dev, "Voltage too high: %d\n", uV); 129 + 130 + if (event == REGULATOR_EVENT_PRE_VOLTAGE_CHANGE) 131 + return NOTIFY_BAD; 132 + } 133 + 134 + ret = rockchip_iodomain_write(supply, uV); 135 + if (ret && event == REGULATOR_EVENT_PRE_VOLTAGE_CHANGE) 136 + return NOTIFY_BAD; 137 + 138 + dev_info(supply->iod->dev, "Setting to %d done\n", uV); 139 + return NOTIFY_OK; 140 + } 141 + 142 + static void rk3288_iodomain_init(struct rockchip_iodomain *iod) 143 + { 144 + int ret; 145 + u32 val; 146 + 147 + /* if no flash supply we should leave things alone */ 148 + if (!iod->supplies[RK3288_SOC_FLASH_SUPPLY_NUM].reg) 149 + return; 150 + 151 + /* 152 + * set flash0 iodomain to also use this framework 153 + * instead of a special gpio. 154 + */ 155 + val = RK3288_SOC_CON2_FLASH0 | (RK3288_SOC_CON2_FLASH0 << 16); 156 + ret = regmap_write(iod->grf, RK3288_SOC_CON2, val); 157 + if (ret < 0) 158 + dev_warn(iod->dev, "couldn't update flash0 ctrl\n"); 159 + } 160 + 161 + /* 162 + * On the rk3188 the io-domains are handled by a shared register with the 163 + * lower 8 bits still being drive-strength settings.
164 + */ 165 + static const struct rockchip_iodomain_soc_data soc_data_rk3188 = { 166 + .grf_offset = 0x104, 167 + .supply_names = { 168 + NULL, 169 + NULL, 170 + NULL, 171 + NULL, 172 + NULL, 173 + NULL, 174 + NULL, 175 + NULL, 176 + "ap0", 177 + "ap1", 178 + "cif", 179 + "flash", 180 + "vccio0", 181 + "vccio1", 182 + "lcdc0", 183 + "lcdc1", 184 + }, 185 + }; 186 + 187 + static const struct rockchip_iodomain_soc_data soc_data_rk3288 = { 188 + .grf_offset = 0x380, 189 + .supply_names = { 190 + "lcdc", /* LCDC_VDD */ 191 + "dvp", /* DVPIO_VDD */ 192 + "flash0", /* FLASH0_VDD (emmc) */ 193 + "flash1", /* FLASH1_VDD (sdio1) */ 194 + "wifi", /* APIO3_VDD (sdio0) */ 195 + "bb", /* APIO5_VDD */ 196 + "audio", /* APIO4_VDD */ 197 + "sdcard", /* SDMMC0_VDD (sdmmc) */ 198 + "gpio30", /* APIO1_VDD */ 199 + "gpio1830", /* APIO2_VDD */ 200 + }, 201 + .init = rk3288_iodomain_init, 202 + }; 203 + 204 + static const struct of_device_id rockchip_iodomain_match[] = { 205 + { 206 + .compatible = "rockchip,rk3188-io-voltage-domain", 207 + .data = (void *)&soc_data_rk3188 208 + }, 209 + { 210 + .compatible = "rockchip,rk3288-io-voltage-domain", 211 + .data = (void *)&soc_data_rk3288 212 + }, 213 + { /* sentinel */ }, 214 + }; 215 + 216 + static int rockchip_iodomain_probe(struct platform_device *pdev) 217 + { 218 + struct device_node *np = pdev->dev.of_node; 219 + const struct of_device_id *match; 220 + struct rockchip_iodomain *iod; 221 + int i, ret = 0; 222 + 223 + if (!np) 224 + return -ENODEV; 225 + 226 + iod = devm_kzalloc(&pdev->dev, sizeof(*iod), GFP_KERNEL); 227 + if (!iod) 228 + return -ENOMEM; 229 + 230 + iod->dev = &pdev->dev; 231 + platform_set_drvdata(pdev, iod); 232 + 233 + match = of_match_node(rockchip_iodomain_match, np); 234 + iod->soc_data = (struct rockchip_iodomain_soc_data *)match->data; 235 + 236 + iod->grf = syscon_regmap_lookup_by_phandle(np, "rockchip,grf"); 237 + if (IS_ERR(iod->grf)) { 238 + dev_err(&pdev->dev, "couldn't find grf regmap\n"); 239 + return 
PTR_ERR(iod->grf); 240 + } 241 + 242 + for (i = 0; i < MAX_SUPPLIES; i++) { 243 + const char *supply_name = iod->soc_data->supply_names[i]; 244 + struct rockchip_iodomain_supply *supply = &iod->supplies[i]; 245 + struct regulator *reg; 246 + int uV; 247 + 248 + if (!supply_name) 249 + continue; 250 + 251 + reg = devm_regulator_get_optional(iod->dev, supply_name); 252 + if (IS_ERR(reg)) { 253 + ret = PTR_ERR(reg); 254 + 255 + /* If a supply wasn't specified, that's OK */ 256 + if (ret == -ENODEV) 257 + continue; 258 + else if (ret != -EPROBE_DEFER) 259 + dev_err(iod->dev, "couldn't get regulator %s\n", 260 + supply_name); 261 + goto unreg_notify; 262 + } 263 + 264 + /* set initial correct value */ 265 + uV = regulator_get_voltage(reg); 266 + 267 + /* must be a regulator we can get the voltage of */ 268 + if (uV < 0) { 269 + dev_err(iod->dev, "Can't determine voltage: %s\n", 270 + supply_name); 271 + goto unreg_notify; 272 + } 273 + 274 + if (uV > MAX_VOLTAGE_3_3) { 275 + dev_crit(iod->dev, 276 + "%d uV is too high. 
May damage SoC!\n", 277 + uV); 278 + ret = -EINVAL; 279 + goto unreg_notify; 280 + } 281 + 282 + /* setup our supply */ 283 + supply->idx = i; 284 + supply->iod = iod; 285 + supply->reg = reg; 286 + supply->nb.notifier_call = rockchip_iodomain_notify; 287 + 288 + ret = rockchip_iodomain_write(supply, uV); 289 + if (ret) { 290 + supply->reg = NULL; 291 + goto unreg_notify; 292 + } 293 + 294 + /* register regulator notifier */ 295 + ret = regulator_register_notifier(reg, &supply->nb); 296 + if (ret) { 297 + dev_err(&pdev->dev, 298 + "regulator notifier request failed\n"); 299 + supply->reg = NULL; 300 + goto unreg_notify; 301 + } 302 + } 303 + 304 + if (iod->soc_data->init) 305 + iod->soc_data->init(iod); 306 + 307 + return 0; 308 + 309 + unreg_notify: 310 + for (i = MAX_SUPPLIES - 1; i >= 0; i--) { 311 + struct rockchip_iodomain_supply *io_supply = &iod->supplies[i]; 312 + 313 + if (io_supply->reg) 314 + regulator_unregister_notifier(io_supply->reg, 315 + &io_supply->nb); 316 + } 317 + 318 + return ret; 319 + } 320 + 321 + static int rockchip_iodomain_remove(struct platform_device *pdev) 322 + { 323 + struct rockchip_iodomain *iod = platform_get_drvdata(pdev); 324 + int i; 325 + 326 + for (i = MAX_SUPPLIES - 1; i >= 0; i--) { 327 + struct rockchip_iodomain_supply *io_supply = &iod->supplies[i]; 328 + 329 + if (io_supply->reg) 330 + regulator_unregister_notifier(io_supply->reg, 331 + &io_supply->nb); 332 + } 333 + 334 + return 0; 335 + } 336 + 337 + static struct platform_driver rockchip_iodomain_driver = { 338 + .probe = rockchip_iodomain_probe, 339 + .remove = rockchip_iodomain_remove, 340 + .driver = { 341 + .name = "rockchip-iodomain", 342 + .of_match_table = rockchip_iodomain_match, 343 + }, 344 + }; 345 + 346 + module_platform_driver(rockchip_iodomain_driver); 347 + 348 + MODULE_DESCRIPTION("Rockchip IO-domain driver"); 349 + MODULE_AUTHOR("Heiko Stuebner <heiko@sntech.de>"); 350 + MODULE_AUTHOR("Doug Anderson <dianders@chromium.org>"); 351 + 
MODULE_LICENSE("GPL v2");
-11
drivers/sh/pm_runtime.c
··· 75 75 .con_ids = { NULL, }, 76 76 }; 77 77 78 - static bool default_pm_on; 79 - 80 78 static int __init sh_pm_runtime_init(void) 81 79 { 82 80 if (IS_ENABLED(CONFIG_ARCH_SHMOBILE_MULTI)) { ··· 94 96 return 0; 95 97 } 96 98 97 - default_pm_on = true; 98 99 pm_clk_add_notifier(&platform_bus_type, &platform_bus_notifier); 99 100 return 0; 100 101 } 101 102 core_initcall(sh_pm_runtime_init); 102 - 103 - static int __init sh_pm_runtime_late_init(void) 104 - { 105 - if (default_pm_on) 106 - pm_genpd_poweroff_unused(); 107 - return 0; 108 - } 109 - late_initcall(sh_pm_runtime_late_init);
+8 -5
drivers/spi/spi.c
··· 35 35 #include <linux/spi/spi.h> 36 36 #include <linux/of_gpio.h> 37 37 #include <linux/pm_runtime.h> 38 + #include <linux/pm_domain.h> 38 39 #include <linux/export.h> 39 40 #include <linux/sched/rt.h> 40 41 #include <linux/delay.h> ··· 265 264 if (ret) 266 265 return ret; 267 266 268 - acpi_dev_pm_attach(dev, true); 269 - ret = sdrv->probe(to_spi_device(dev)); 270 - if (ret) 271 - acpi_dev_pm_detach(dev, true); 267 + ret = dev_pm_domain_attach(dev, true); 268 + if (ret != -EPROBE_DEFER) { 269 + ret = sdrv->probe(to_spi_device(dev)); 270 + if (ret) 271 + dev_pm_domain_detach(dev, true); 272 + } 272 273 273 274 return ret; 274 275 } ··· 281 278 int ret; 282 279 283 280 ret = sdrv->remove(to_spi_device(dev)); 284 - acpi_dev_pm_detach(dev, true); 281 + dev_pm_domain_detach(dev, true); 285 282 286 283 return ret; 287 284 }
+4
include/acpi/acnames.h
··· 59 59 #define METHOD_NAME__PRS "_PRS" 60 60 #define METHOD_NAME__PRT "_PRT" 61 61 #define METHOD_NAME__PRW "_PRW" 62 + #define METHOD_NAME__PS0 "_PS0" 63 + #define METHOD_NAME__PS1 "_PS1" 64 + #define METHOD_NAME__PS2 "_PS2" 65 + #define METHOD_NAME__PS3 "_PS3" 62 66 #define METHOD_NAME__REG "_REG" 63 67 #define METHOD_NAME__SB_ "_SB_" 64 68 #define METHOD_NAME__SEG "_SEG"
+2 -1
include/acpi/acpixf.h
··· 46 46 47 47 /* Current ACPICA subsystem version in YYYYMMDD format */ 48 48 49 - #define ACPI_CA_VERSION 0x20140724 49 + #define ACPI_CA_VERSION 0x20140828 50 50 51 51 #include <acpi/acconfig.h> 52 52 #include <acpi/actypes.h> ··· 692 692 *event_status)) 693 693 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void)) 694 694 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void)) 695 + ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void)) 695 696 696 697 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status 697 698 acpi_get_gpe_device(u32 gpe_index,
+17 -2
include/acpi/actbl1.h
··· 952 952 ACPI_SRAT_TYPE_CPU_AFFINITY = 0, 953 953 ACPI_SRAT_TYPE_MEMORY_AFFINITY = 1, 954 954 ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY = 2, 955 - ACPI_SRAT_TYPE_RESERVED = 3 /* 3 and greater are reserved */ 955 + ACPI_SRAT_TYPE_GICC_AFFINITY = 3, 956 + ACPI_SRAT_TYPE_RESERVED = 4 /* 4 and greater are reserved */ 956 957 }; 957 958 958 959 /* ··· 969 968 u32 flags; 970 969 u8 local_sapic_eid; 971 970 u8 proximity_domain_hi[3]; 972 - u32 reserved; /* Reserved, must be zero */ 971 + u32 clock_domain; 973 972 }; 974 973 975 974 /* Flags */ ··· 1010 1009 /* Flags for struct acpi_srat_cpu_affinity and struct acpi_srat_x2apic_cpu_affinity */ 1011 1010 1012 1011 #define ACPI_SRAT_CPU_ENABLED (1) /* 00: Use affinity structure */ 1012 + 1013 + /* 3: GICC Affinity (ACPI 5.1) */ 1014 + 1015 + struct acpi_srat_gicc_affinity { 1016 + struct acpi_subtable_header header; 1017 + u32 proximity_domain; 1018 + u32 acpi_processor_uid; 1019 + u32 flags; 1020 + u32 clock_domain; 1021 + }; 1022 + 1023 + /* Flags for struct acpi_srat_gicc_affinity */ 1024 + 1025 + #define ACPI_SRAT_GICC_ENABLED (1) /* 00: Use affinity structure */ 1013 1026 1014 1027 /* Reset to default packing */ 1015 1028
+7 -2
include/acpi/actbl3.h
··· 310 310 u32 common_flags; 311 311 }; 312 312 313 + /* Flag Definitions: timer_flags and virtual_timer_flags above */ 314 + 315 + #define ACPI_GTDT_GT_IRQ_MODE (1) 316 + #define ACPI_GTDT_GT_IRQ_POLARITY (1<<1) 317 + 313 318 /* Flag Definitions: common_flags above */ 314 319 315 - #define ACPI_GTDT_GT_IS_SECURE_TIMER (1) 316 - #define ACPI_GTDT_GT_ALWAYS_ON (1<<1) 320 + #define ACPI_GTDT_GT_IS_SECURE_TIMER (1) 321 + #define ACPI_GTDT_GT_ALWAYS_ON (1<<1) 317 322 318 323 /* 1: SBSA Generic Watchdog Structure */ 319 324
-2
include/linux/acpi.h
··· 587 587 #if defined(CONFIG_ACPI) && defined(CONFIG_PM) 588 588 struct acpi_device *acpi_dev_pm_get_node(struct device *dev); 589 589 int acpi_dev_pm_attach(struct device *dev, bool power_on); 590 - void acpi_dev_pm_detach(struct device *dev, bool power_off); 591 590 #else 592 591 static inline struct acpi_device *acpi_dev_pm_get_node(struct device *dev) 593 592 { ··· 596 597 { 597 598 return -ENODEV; 598 599 } 599 - static inline void acpi_dev_pm_detach(struct device *dev, bool power_off) {} 600 600 #endif 601 601 602 602 #ifdef CONFIG_ACPI
+3
include/linux/cpufreq.h
··· 112 112 spinlock_t transition_lock; 113 113 wait_queue_head_t transition_wait; 114 114 struct task_struct *transition_task; /* Task which is doing the transition */ 115 + 116 + /* For cpufreq driver's internal use */ 117 + void *driver_data; 115 118 }; 116 119 117 120 /* Only for ACPI */
-5
include/linux/interrupt.h
··· 193 193 /* The following three functions are for the core kernel use only. */ 194 194 extern void suspend_device_irqs(void); 195 195 extern void resume_device_irqs(void); 196 - #ifdef CONFIG_PM_SLEEP 197 - extern int check_wakeup_irqs(void); 198 - #else 199 - static inline int check_wakeup_irqs(void) { return 0; } 200 - #endif 201 196 202 197 /** 203 198 * struct irq_affinity_notify - context for notification of IRQ affinity changes
+8
include/linux/irq.h
··· 173 173 * IRQD_IRQ_DISABLED - Disabled state of the interrupt 174 174 * IRQD_IRQ_MASKED - Masked state of the interrupt 175 175 * IRQD_IRQ_INPROGRESS - In progress state of the interrupt 176 + * IRQD_WAKEUP_ARMED - Wakeup mode armed 176 177 */ 177 178 enum { 178 179 IRQD_TRIGGER_MASK = 0xf, ··· 187 186 IRQD_IRQ_DISABLED = (1 << 16), 188 187 IRQD_IRQ_MASKED = (1 << 17), 189 188 IRQD_IRQ_INPROGRESS = (1 << 18), 189 + IRQD_WAKEUP_ARMED = (1 << 19), 190 190 }; 191 191 192 192 static inline bool irqd_is_setaffinity_pending(struct irq_data *d) ··· 258 256 { 259 257 return d->state_use_accessors & IRQD_IRQ_INPROGRESS; 260 258 } 259 + 260 + static inline bool irqd_is_wakeup_armed(struct irq_data *d) 261 + { 262 + return d->state_use_accessors & IRQD_WAKEUP_ARMED; 263 + } 264 + 261 265 262 266 /* 263 267 * Functions for chained handlers which can be enabled/disabled by the
+10
include/linux/irqdesc.h
··· 38 38 * @threads_oneshot: bitfield to handle shared oneshot threads 39 39 * @threads_active: number of irqaction threads currently running 40 40 * @wait_for_threads: wait queue for sync_irq to wait for threaded handlers 41 + * @nr_actions: number of installed actions on this descriptor 42 + * @no_suspend_depth: number of irqactions on a irq descriptor with 43 + * IRQF_NO_SUSPEND set 44 + * @force_resume_depth: number of irqactions on a irq descriptor with 45 + * IRQF_FORCE_RESUME set 41 46 * @dir: /proc/irq/ procfs entry 42 47 * @name: flow handler name for /proc/interrupts output 43 48 */ ··· 75 70 unsigned long threads_oneshot; 76 71 atomic_t threads_active; 77 72 wait_queue_head_t wait_for_threads; 73 + #ifdef CONFIG_PM_SLEEP 74 + unsigned int nr_actions; 75 + unsigned int no_suspend_depth; 76 + unsigned int force_resume_depth; 77 + #endif 78 78 #ifdef CONFIG_PROC_FS 79 79 struct proc_dir_entry *dir; 80 80 #endif
+5
include/linux/pm.h
··· 619 619 */ 620 620 struct dev_pm_domain { 621 621 struct dev_pm_ops ops; 622 + void (*detach)(struct device *dev, bool power_off); 622 623 }; 623 624 624 625 /* ··· 680 679 extern void device_pm_lock(void); 681 680 extern void dpm_resume_start(pm_message_t state); 682 681 extern void dpm_resume_end(pm_message_t state); 682 + extern void dpm_resume_noirq(pm_message_t state); 683 + extern void dpm_resume_early(pm_message_t state); 683 684 extern void dpm_resume(pm_message_t state); 684 685 extern void dpm_complete(pm_message_t state); 685 686 686 687 extern void device_pm_unlock(void); 687 688 extern int dpm_suspend_end(pm_message_t state); 688 689 extern int dpm_suspend_start(pm_message_t state); 690 + extern int dpm_suspend_noirq(pm_message_t state); 691 + extern int dpm_suspend_late(pm_message_t state); 689 692 extern int dpm_suspend(pm_message_t state); 690 693 extern int dpm_prepare(pm_message_t state); 691 694
+66 -64
include/linux/pm_domain.h
··· 35 35 int (*stop)(struct device *dev); 36 36 int (*save_state)(struct device *dev); 37 37 int (*restore_state)(struct device *dev); 38 - int (*suspend)(struct device *dev); 39 - int (*suspend_late)(struct device *dev); 40 - int (*resume_early)(struct device *dev); 41 - int (*resume)(struct device *dev); 42 - int (*freeze)(struct device *dev); 43 - int (*freeze_late)(struct device *dev); 44 - int (*thaw_early)(struct device *dev); 45 - int (*thaw)(struct device *dev); 46 38 bool (*active_wakeup)(struct device *dev); 47 39 }; 48 40 49 - struct gpd_cpu_data { 41 + struct gpd_cpuidle_data { 50 42 unsigned int saved_exit_latency; 51 43 struct cpuidle_state *idle_state; 52 44 }; ··· 63 71 unsigned int suspended_count; /* System suspend device counter */ 64 72 unsigned int prepared_count; /* Suspend counter of prepared devices */ 65 73 bool suspend_power_off; /* Power status before system suspend */ 66 - bool dev_irq_safe; /* Device callbacks are IRQ-safe */ 67 74 int (*power_off)(struct generic_pm_domain *domain); 68 75 s64 power_off_latency_ns; 69 76 int (*power_on)(struct generic_pm_domain *domain); ··· 71 80 s64 max_off_time_ns; /* Maximum allowed "suspended" time. 
*/ 72 81 bool max_off_time_changed; 73 82 bool cached_power_down_ok; 74 - struct device_node *of_node; /* Node in device tree */ 75 - struct gpd_cpu_data *cpu_data; 83 + struct gpd_cpuidle_data *cpuidle_data; 84 + void (*attach_dev)(struct device *dev); 85 + void (*detach_dev)(struct device *dev); 76 86 }; 77 87 78 88 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd) ··· 100 108 101 109 struct generic_pm_domain_data { 102 110 struct pm_domain_data base; 103 - struct gpd_dev_ops ops; 104 111 struct gpd_timing_data td; 105 112 struct notifier_block nb; 106 113 struct mutex lock; ··· 118 127 return to_gpd_data(dev->power.subsys_data->domain_data); 119 128 } 120 129 121 - extern struct dev_power_governor simple_qos_governor; 122 - 123 130 extern struct generic_pm_domain *dev_to_genpd(struct device *dev); 124 131 extern int __pm_genpd_add_device(struct generic_pm_domain *genpd, 125 132 struct device *dev, 126 133 struct gpd_timing_data *td); 127 - 128 - extern int __pm_genpd_of_add_device(struct device_node *genpd_node, 129 - struct device *dev, 130 - struct gpd_timing_data *td); 131 134 132 135 extern int __pm_genpd_name_add_device(const char *domain_name, 133 136 struct device *dev, ··· 136 151 const char *subdomain_name); 137 152 extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, 138 153 struct generic_pm_domain *target); 139 - extern int pm_genpd_add_callbacks(struct device *dev, 140 - struct gpd_dev_ops *ops, 141 - struct gpd_timing_data *td); 142 - extern int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td); 143 154 extern int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state); 144 155 extern int pm_genpd_name_attach_cpuidle(const char *name, int state); 145 156 extern int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd); ··· 146 165 extern int pm_genpd_poweron(struct generic_pm_domain *genpd); 147 166 extern int pm_genpd_name_poweron(const char *domain_name); 148 167 149 - 
extern bool default_stop_ok(struct device *dev); 150 - 168 + extern struct dev_power_governor simple_qos_governor; 151 169 extern struct dev_power_governor pm_domain_always_on_gov; 152 170 #else 153 171 ··· 161 181 static inline int __pm_genpd_add_device(struct generic_pm_domain *genpd, 162 182 struct device *dev, 163 183 struct gpd_timing_data *td) 164 - { 165 - return -ENOSYS; 166 - } 167 - static inline int __pm_genpd_of_add_device(struct device_node *genpd_node, 168 - struct device *dev, 169 - struct gpd_timing_data *td) 170 184 { 171 185 return -ENOSYS; 172 186 } ··· 188 214 } 189 215 static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, 190 216 struct generic_pm_domain *target) 191 - { 192 - return -ENOSYS; 193 - } 194 - static inline int pm_genpd_add_callbacks(struct device *dev, 195 - struct gpd_dev_ops *ops, 196 - struct gpd_timing_data *td) 197 - { 198 - return -ENOSYS; 199 - } 200 - static inline int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td) 201 217 { 202 218 return -ENOSYS; 203 219 } ··· 219 255 { 220 256 return -ENOSYS; 221 257 } 222 - static inline bool default_stop_ok(struct device *dev) 223 - { 224 - return false; 225 - } 226 258 #define simple_qos_governor NULL 227 259 #define pm_domain_always_on_gov NULL 228 260 #endif ··· 229 269 return __pm_genpd_add_device(genpd, dev, NULL); 230 270 } 231 271 232 - static inline int pm_genpd_of_add_device(struct device_node *genpd_node, 233 - struct device *dev) 234 - { 235 - return __pm_genpd_of_add_device(genpd_node, dev, NULL); 236 - } 237 - 238 272 static inline int pm_genpd_name_add_device(const char *domain_name, 239 273 struct device *dev) 240 274 { 241 275 return __pm_genpd_name_add_device(domain_name, dev, NULL); 242 276 } 243 277 244 - static inline int pm_genpd_remove_callbacks(struct device *dev) 245 - { 246 - return __pm_genpd_remove_callbacks(dev, true); 247 - } 248 - 249 278 #ifdef CONFIG_PM_GENERIC_DOMAINS_RUNTIME 250 - extern void 
genpd_queue_power_off_work(struct generic_pm_domain *genpd); 251 279 extern void pm_genpd_poweroff_unused(void); 252 280 #else 253 - static inline void genpd_queue_power_off_work(struct generic_pm_domain *gpd) {} 254 281 static inline void pm_genpd_poweroff_unused(void) {} 255 282 #endif 256 283 257 284 #ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP 258 - extern void pm_genpd_syscore_switch(struct device *dev, bool suspend); 285 + extern void pm_genpd_syscore_poweroff(struct device *dev); 286 + extern void pm_genpd_syscore_poweron(struct device *dev); 259 287 #else 260 - static inline void pm_genpd_syscore_switch(struct device *dev, bool suspend) {} 288 + static inline void pm_genpd_syscore_poweroff(struct device *dev) {} 289 + static inline void pm_genpd_syscore_poweron(struct device *dev) {} 261 290 #endif 262 291 263 - static inline void pm_genpd_syscore_poweroff(struct device *dev) 292 + /* OF PM domain providers */ 293 + struct of_device_id; 294 + 295 + struct genpd_onecell_data { 296 + struct generic_pm_domain **domains; 297 + unsigned int num_domains; 298 + }; 299 + 300 + typedef struct generic_pm_domain *(*genpd_xlate_t)(struct of_phandle_args *args, 301 + void *data); 302 + 303 + #ifdef CONFIG_PM_GENERIC_DOMAINS_OF 304 + int __of_genpd_add_provider(struct device_node *np, genpd_xlate_t xlate, 305 + void *data); 306 + void of_genpd_del_provider(struct device_node *np); 307 + 308 + struct generic_pm_domain *__of_genpd_xlate_simple( 309 + struct of_phandle_args *genpdspec, 310 + void *data); 311 + struct generic_pm_domain *__of_genpd_xlate_onecell( 312 + struct of_phandle_args *genpdspec, 313 + void *data); 314 + 315 + int genpd_dev_pm_attach(struct device *dev); 316 + #else /* !CONFIG_PM_GENERIC_DOMAINS_OF */ 317 + static inline int __of_genpd_add_provider(struct device_node *np, 318 + genpd_xlate_t xlate, void *data) 264 319 { 265 - pm_genpd_syscore_switch(dev, true); 320 + return 0; 321 + } 322 + static inline void of_genpd_del_provider(struct device_node *np) {} 
323 + 324 + #define __of_genpd_xlate_simple NULL 325 + #define __of_genpd_xlate_onecell NULL 326 + 327 + static inline int genpd_dev_pm_attach(struct device *dev) 328 + { 329 + return -ENODEV; 330 + } 331 + #endif /* CONFIG_PM_GENERIC_DOMAINS_OF */ 332 + 333 + static inline int of_genpd_add_provider_simple(struct device_node *np, 334 + struct generic_pm_domain *genpd) 335 + { 336 + return __of_genpd_add_provider(np, __of_genpd_xlate_simple, genpd); 337 + } 338 + static inline int of_genpd_add_provider_onecell(struct device_node *np, 339 + struct genpd_onecell_data *data) 340 + { 341 + return __of_genpd_add_provider(np, __of_genpd_xlate_onecell, data); 266 342 } 267 343 268 - static inline void pm_genpd_syscore_poweron(struct device *dev) 344 + #ifdef CONFIG_PM 345 + extern int dev_pm_domain_attach(struct device *dev, bool power_on); 346 + extern void dev_pm_domain_detach(struct device *dev, bool power_off); 347 + #else 348 + static inline int dev_pm_domain_attach(struct device *dev, bool power_on) 269 349 { 270 - pm_genpd_syscore_switch(dev, false); 350 + return -ENODEV; 271 351 } 352 + static inline void dev_pm_domain_detach(struct device *dev, bool power_off) {} 353 + #endif 272 354 273 355 #endif /* _LINUX_PM_DOMAIN_H */
+6
include/linux/suspend.h
··· 189 189 190 190 struct platform_freeze_ops { 191 191 int (*begin)(void); 192 + int (*prepare)(void); 193 + void (*restore)(void); 192 194 void (*end)(void); 193 195 }; 194 196 ··· 373 371 extern bool events_check_enabled; 374 372 375 373 extern bool pm_wakeup_pending(void); 374 + extern void pm_system_wakeup(void); 375 + extern void pm_wakeup_clear(void); 376 376 extern bool pm_get_wakeup_count(unsigned int *count, bool block); 377 377 extern bool pm_save_wakeup_count(unsigned int count); 378 378 extern void pm_wakep_autosleep_enabled(bool set); ··· 422 418 #define pm_notifier(fn, pri) do { (void)(fn); } while (0) 423 419 424 420 static inline bool pm_wakeup_pending(void) { return false; } 421 + static inline void pm_system_wakeup(void) {} 422 + static inline void pm_wakeup_clear(void) {} 425 423 426 424 static inline void lock_system_sleep(void) {} 427 425 static inline void unlock_system_sleep(void) {}
+61 -32
kernel/irq/chip.c
···
 	return irq_wait_for_poll(desc);
 }
 
+static bool irq_may_run(struct irq_desc *desc)
+{
+	unsigned int mask = IRQD_IRQ_INPROGRESS | IRQD_WAKEUP_ARMED;
+
+	/*
+	 * If the interrupt is not in progress and is not an armed
+	 * wakeup interrupt, proceed.
+	 */
+	if (!irqd_has_set(&desc->irq_data, mask))
+		return true;
+
+	/*
+	 * If the interrupt is an armed wakeup source, mark it pending
+	 * and suspended, disable it and notify the pm core about the
+	 * event.
+	 */
+	if (irq_pm_check_wakeup(desc))
+		return false;
+
+	/*
+	 * Handle a potential concurrent poll on a different core.
+	 */
+	return irq_check_poll(desc);
+}
+
 /**
  * handle_simple_irq - Simple and software-decoded IRQs.
  * @irq: the interrupt number
···
 {
 	raw_spin_lock(&desc->lock);
 
-	if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-		if (!irq_check_poll(desc))
-			goto out_unlock;
+	if (!irq_may_run(desc))
+		goto out_unlock;
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
 	kstat_incr_irqs_this_cpu(irq, desc);
···
 	raw_spin_lock(&desc->lock);
 	mask_ack_irq(desc);
 
-	if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-		if (!irq_check_poll(desc))
-			goto out_unlock;
+	if (!irq_may_run(desc))
+		goto out_unlock;
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
 	kstat_incr_irqs_this_cpu(irq, desc);
···
 
 	raw_spin_lock(&desc->lock);
 
-	if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-		if (!irq_check_poll(desc))
-			goto out;
+	if (!irq_may_run(desc))
+		goto out;
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
 	kstat_incr_irqs_this_cpu(irq, desc);
···
 	raw_spin_lock(&desc->lock);
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
-	/*
-	 * If we're currently running this IRQ, or its disabled,
-	 * we shouldn't process the IRQ. Mark it pending, handle
-	 * the necessary masking and go out
-	 */
-	if (unlikely(irqd_irq_disabled(&desc->irq_data) ||
-		     irqd_irq_inprogress(&desc->irq_data) || !desc->action)) {
-		if (!irq_check_poll(desc)) {
-			desc->istate |= IRQS_PENDING;
-			mask_ack_irq(desc);
-			goto out_unlock;
-		}
+
+	if (!irq_may_run(desc)) {
+		desc->istate |= IRQS_PENDING;
+		mask_ack_irq(desc);
+		goto out_unlock;
 	}
+
+	/*
+	 * If its disabled or no action available then mask it and get
+	 * out of here.
+	 */
+	if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
+		desc->istate |= IRQS_PENDING;
+		mask_ack_irq(desc);
+		goto out_unlock;
+	}
+
 	kstat_incr_irqs_this_cpu(irq, desc);
 
 	/* Start handling the irq */
···
 	raw_spin_lock(&desc->lock);
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
-	/*
-	 * If we're currently running this IRQ, or its disabled,
-	 * we shouldn't process the IRQ. Mark it pending, handle
-	 * the necessary masking and go out
-	 */
-	if (unlikely(irqd_irq_disabled(&desc->irq_data) ||
-		     irqd_irq_inprogress(&desc->irq_data) || !desc->action)) {
-		if (!irq_check_poll(desc)) {
-			desc->istate |= IRQS_PENDING;
-			goto out_eoi;
-		}
+
+	if (!irq_may_run(desc)) {
+		desc->istate |= IRQS_PENDING;
+		goto out_eoi;
 	}
+
+	/*
+	 * If its disabled or no action available then mask it and get
+	 * out of here.
+	 */
+	if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
+		desc->istate |= IRQS_PENDING;
+		goto out_eoi;
+	}
+
 	kstat_incr_irqs_this_cpu(irq, desc);
 
 	do {
+14 -2
kernel/irq/internals.h
···
 
 extern int __irq_set_trigger(struct irq_desc *desc, unsigned int irq,
 		unsigned long flags);
-extern void __disable_irq(struct irq_desc *desc, unsigned int irq, bool susp);
-extern void __enable_irq(struct irq_desc *desc, unsigned int irq, bool resume);
+extern void __disable_irq(struct irq_desc *desc, unsigned int irq);
+extern void __enable_irq(struct irq_desc *desc, unsigned int irq);
 
 extern int irq_startup(struct irq_desc *desc, bool resend);
 extern void irq_shutdown(struct irq_desc *desc);
···
 	__this_cpu_inc(*desc->kstat_irqs);
 	__this_cpu_inc(kstat.irqs_sum);
 }
+
+#ifdef CONFIG_PM_SLEEP
+bool irq_pm_check_wakeup(struct irq_desc *desc);
+void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action);
+void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action);
+#else
+static inline bool irq_pm_check_wakeup(struct irq_desc *desc) { return false; }
+static inline void
+irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
+static inline void
+irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
+#endif
+9 -23
kernel/irq/manage.c
···
 }
 #endif
 
-void __disable_irq(struct irq_desc *desc, unsigned int irq, bool suspend)
+void __disable_irq(struct irq_desc *desc, unsigned int irq)
 {
-	if (suspend) {
-		if (!desc->action || (desc->action->flags & IRQF_NO_SUSPEND))
-			return;
-		desc->istate |= IRQS_SUSPENDED;
-	}
-
 	if (!desc->depth++)
 		irq_disable(desc);
 }
···
 
 	if (!desc)
 		return -EINVAL;
-	__disable_irq(desc, irq, false);
+	__disable_irq(desc, irq);
 	irq_put_desc_busunlock(desc, flags);
 	return 0;
 }
···
 }
 EXPORT_SYMBOL(disable_irq);
 
-void __enable_irq(struct irq_desc *desc, unsigned int irq, bool resume)
+void __enable_irq(struct irq_desc *desc, unsigned int irq)
 {
-	if (resume) {
-		if (!(desc->istate & IRQS_SUSPENDED)) {
-			if (!desc->action)
-				return;
-			if (!(desc->action->flags & IRQF_FORCE_RESUME))
-				return;
-			/* Pretend that it got disabled ! */
-			desc->depth++;
-		}
-		desc->istate &= ~IRQS_SUSPENDED;
-	}
-
 	switch (desc->depth) {
 	case 0:
  err_out:
···
 		   KERN_ERR "enable_irq before setup/request_irq: irq %u\n", irq))
 		goto out;
 
-	__enable_irq(desc, irq, false);
+	__enable_irq(desc, irq);
 out:
 	irq_put_desc_busunlock(desc, flags);
 }
···
 	new->irq = irq;
 	*old_ptr = new;
 
+	irq_pm_install_action(desc, new);
+
 	/* Reset broken irq detection when installing new handler */
 	desc->irq_count = 0;
 	desc->irqs_unhandled = 0;
···
 	 */
 	if (shared && (desc->istate & IRQS_SPURIOUS_DISABLED)) {
 		desc->istate &= ~IRQS_SPURIOUS_DISABLED;
-		__enable_irq(desc, irq, false);
+		__enable_irq(desc, irq);
 	}
 
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
···
 
 	/* Found it - now remove it from the list of entries: */
 	*action_ptr = action->next;
+
+	irq_pm_remove_action(desc, action);
 
 	/* If this was the last handler, shut down the IRQ line: */
 	if (!desc->action) {
+115 -44
kernel/irq/pm.c
···
 #include <linux/irq.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
+#include <linux/suspend.h>
 #include <linux/syscore_ops.h>
 
 #include "internals.h"
 
+bool irq_pm_check_wakeup(struct irq_desc *desc)
+{
+	if (irqd_is_wakeup_armed(&desc->irq_data)) {
+		irqd_clear(&desc->irq_data, IRQD_WAKEUP_ARMED);
+		desc->istate |= IRQS_SUSPENDED | IRQS_PENDING;
+		desc->depth++;
+		irq_disable(desc);
+		pm_system_wakeup();
+		return true;
+	}
+	return false;
+}
+
+/*
+ * Called from __setup_irq() with desc->lock held after @action has
+ * been installed in the action chain.
+ */
+void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action)
+{
+	desc->nr_actions++;
+
+	if (action->flags & IRQF_FORCE_RESUME)
+		desc->force_resume_depth++;
+
+	WARN_ON_ONCE(desc->force_resume_depth &&
+		     desc->force_resume_depth != desc->nr_actions);
+
+	if (action->flags & IRQF_NO_SUSPEND)
+		desc->no_suspend_depth++;
+
+	WARN_ON_ONCE(desc->no_suspend_depth &&
+		     desc->no_suspend_depth != desc->nr_actions);
+}
+
+/*
+ * Called from __free_irq() with desc->lock held after @action has
+ * been removed from the action chain.
+ */
+void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action)
+{
+	desc->nr_actions--;
+
+	if (action->flags & IRQF_FORCE_RESUME)
+		desc->force_resume_depth--;
+
+	if (action->flags & IRQF_NO_SUSPEND)
+		desc->no_suspend_depth--;
+}
+
+static bool suspend_device_irq(struct irq_desc *desc, int irq)
+{
+	if (!desc->action || desc->no_suspend_depth)
+		return false;
+
+	if (irqd_is_wakeup_set(&desc->irq_data)) {
+		irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
+		/*
+		 * We return true here to force the caller to issue
+		 * synchronize_irq(). We need to make sure that the
+		 * IRQD_WAKEUP_ARMED is visible before we return from
+		 * suspend_device_irqs().
+		 */
+		return true;
+	}
+
+	desc->istate |= IRQS_SUSPENDED;
+	__disable_irq(desc, irq);
+
+	/*
+	 * Hardware which has no wakeup source configuration facility
+	 * requires that the non wakeup interrupts are masked at the
+	 * chip level. The chip implementation indicates that with
+	 * IRQCHIP_MASK_ON_SUSPEND.
+	 */
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+		mask_irq(desc);
+	return true;
+}
+
 /**
  * suspend_device_irqs - disable all currently enabled interrupt lines
  *
- * During system-wide suspend or hibernation device drivers need to be prevented
- * from receiving interrupts and this function is provided for this purpose.
- * It marks all interrupt lines in use, except for the timer ones, as disabled
- * and sets the IRQS_SUSPENDED flag for each of them.
+ * During system-wide suspend or hibernation device drivers need to be
+ * prevented from receiving interrupts and this function is provided
+ * for this purpose.
+ *
+ * So we disable all interrupts and mark them IRQS_SUSPENDED except
+ * for those which are unused, those which are marked as not
+ * suspendable via an interrupt request with the flag IRQF_NO_SUSPEND
+ * set and those which are marked as active wakeup sources.
+ *
+ * The active wakeup sources are handled by the flow handler entry
+ * code which checks for the IRQD_WAKEUP_ARMED flag, suspends the
+ * interrupt and notifies the pm core about the wakeup.
 */
 void suspend_device_irqs(void)
 {
···
 
 	for_each_irq_desc(irq, desc) {
 		unsigned long flags;
+		bool sync;
 
 		raw_spin_lock_irqsave(&desc->lock, flags);
-		__disable_irq(desc, irq, true);
+		sync = suspend_device_irq(desc, irq);
 		raw_spin_unlock_irqrestore(&desc->lock, flags);
-	}
 
-	for_each_irq_desc(irq, desc)
-		if (desc->istate & IRQS_SUSPENDED)
+		if (sync)
 			synchronize_irq(irq);
+	}
 }
 EXPORT_SYMBOL_GPL(suspend_device_irqs);
+
+static void resume_irq(struct irq_desc *desc, int irq)
+{
+	irqd_clear(&desc->irq_data, IRQD_WAKEUP_ARMED);
+
+	if (desc->istate & IRQS_SUSPENDED)
+		goto resume;
+
+	/* Force resume the interrupt? */
+	if (!desc->force_resume_depth)
+		return;
+
+	/* Pretend that it got disabled ! */
+	desc->depth++;
+resume:
+	desc->istate &= ~IRQS_SUSPENDED;
+	__enable_irq(desc, irq);
+}
 
 static void resume_irqs(bool want_early)
 {
···
 			continue;
 
 		raw_spin_lock_irqsave(&desc->lock, flags);
-		__enable_irq(desc, irq, true);
+		resume_irq(desc, irq);
 		raw_spin_unlock_irqrestore(&desc->lock, flags);
 	}
 }
···
 	resume_irqs(false);
 }
 EXPORT_SYMBOL_GPL(resume_device_irqs);
-
-/**
- * check_wakeup_irqs - check if any wake-up interrupts are pending
- */
-int check_wakeup_irqs(void)
-{
-	struct irq_desc *desc;
-	int irq;
-
-	for_each_irq_desc(irq, desc) {
-		/*
-		 * Only interrupts which are marked as wakeup source
-		 * and have not been disabled before the suspend check
-		 * can abort suspend.
-		 */
-		if (irqd_is_wakeup_set(&desc->irq_data)) {
-			if (desc->depth == 1 && desc->istate & IRQS_PENDING)
-				return -EBUSY;
-			continue;
-		}
-		/*
-		 * Check the non wakeup interrupts whether they need
-		 * to be masked before finally going into suspend
-		 * state. That's for hardware which has no wakeup
-		 * source configuration facility. The chip
-		 * implementation indicates that with
-		 * IRQCHIP_MASK_ON_SUSPEND.
-		 */
-		if (desc->istate & IRQS_SUSPENDED &&
-		    irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-			mask_irq(desc);
-	}
-
-	return 0;
-}
+4
kernel/power/Kconfig
···
 	def_bool y
 	depends on PM_RUNTIME && PM_GENERIC_DOMAINS
 
+config PM_GENERIC_DOMAINS_OF
+	def_bool y
+	depends on PM_GENERIC_DOMAINS && OF
+
 config CPU_PM
 	bool
 	depends on SUSPEND || CPU_IDLE
+1
kernel/power/process.c
···
 	if (!pm_freezing)
 		atomic_inc(&system_freezing_cnt);
 
+	pm_wakeup_clear();
 	printk("Freezing user space processes ... ");
 	pm_freezing = true;
 	error = try_to_freeze_tasks(true);
+38 -14
kernel/power/snapshot.c
···
 	clear_bit(bit, addr);
 }
 
+static void memory_bm_clear_current(struct memory_bitmap *bm)
+{
+	int bit;
+
+	bit = max(bm->cur.node_bit - 1, 0);
+	clear_bit(bit, bm->cur.node->data);
+}
+
 static int memory_bm_test_bit(struct memory_bitmap *bm, unsigned long pfn)
 {
 	void *addr;
···
 
 void swsusp_free(void)
 {
-	struct zone *zone;
-	unsigned long pfn, max_zone_pfn;
+	unsigned long fb_pfn, fr_pfn;
 
-	for_each_populated_zone(zone) {
-		max_zone_pfn = zone_end_pfn(zone);
-		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
-			if (pfn_valid(pfn)) {
-				struct page *page = pfn_to_page(pfn);
+	if (!forbidden_pages_map || !free_pages_map)
+		goto out;
 
-				if (swsusp_page_is_forbidden(page) &&
-				    swsusp_page_is_free(page)) {
-					swsusp_unset_page_forbidden(page);
-					swsusp_unset_page_free(page);
-					__free_page(page);
-				}
-			}
+	memory_bm_position_reset(forbidden_pages_map);
+	memory_bm_position_reset(free_pages_map);
+
+loop:
+	fr_pfn = memory_bm_next_pfn(free_pages_map);
+	fb_pfn = memory_bm_next_pfn(forbidden_pages_map);
+
+	/*
+	 * Find the next bit set in both bitmaps. This is guaranteed to
+	 * terminate when fb_pfn == fr_pfn == BM_END_OF_MAP.
+	 */
+	do {
+		if (fb_pfn < fr_pfn)
+			fb_pfn = memory_bm_next_pfn(forbidden_pages_map);
+		if (fr_pfn < fb_pfn)
+			fr_pfn = memory_bm_next_pfn(free_pages_map);
+	} while (fb_pfn != fr_pfn);
+
+	if (fr_pfn != BM_END_OF_MAP && pfn_valid(fr_pfn)) {
+		struct page *page = pfn_to_page(fr_pfn);
+
+		memory_bm_clear_current(forbidden_pages_map);
+		memory_bm_clear_current(free_pages_map);
+		__free_page(page);
+		goto loop;
 	}
+
+out:
 	nr_copy_pages = 0;
 	nr_meta_pages = 0;
 	restore_pblist = NULL;
+40 -11
kernel/power/suspend.c
···
 
 static int platform_suspend_prepare_late(suspend_state_t state)
 {
+	return state == PM_SUSPEND_FREEZE && freeze_ops->prepare ?
+		freeze_ops->prepare() : 0;
+}
+
+static int platform_suspend_prepare_noirq(suspend_state_t state)
+{
 	return state != PM_SUSPEND_FREEZE && suspend_ops->prepare_late ?
 		suspend_ops->prepare_late() : 0;
 }
 
-static void platform_suspend_wake(suspend_state_t state)
+static void platform_resume_noirq(suspend_state_t state)
 {
 	if (state != PM_SUSPEND_FREEZE && suspend_ops->wake)
 		suspend_ops->wake();
 }
 
-static void platform_suspend_finish(suspend_state_t state)
+static void platform_resume_early(suspend_state_t state)
+{
+	if (state == PM_SUSPEND_FREEZE && freeze_ops->restore)
+		freeze_ops->restore();
+}
+
+static void platform_resume_finish(suspend_state_t state)
 {
 	if (state != PM_SUSPEND_FREEZE && suspend_ops->finish)
 		suspend_ops->finish();
···
 	return 0;
 }
 
-static void platform_suspend_end(suspend_state_t state)
+static void platform_resume_end(suspend_state_t state)
 {
 	if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end)
 		freeze_ops->end();
···
 		suspend_ops->end();
 }
 
-static void platform_suspend_recover(suspend_state_t state)
+static void platform_recover(suspend_state_t state)
 {
 	if (state != PM_SUSPEND_FREEZE && suspend_ops->recover)
 		suspend_ops->recover();
···
 	if (error)
 		goto Platform_finish;
 
-	error = dpm_suspend_end(PMSG_SUSPEND);
+	error = dpm_suspend_late(PMSG_SUSPEND);
 	if (error) {
-		printk(KERN_ERR "PM: Some devices failed to power down\n");
+		printk(KERN_ERR "PM: late suspend of devices failed\n");
 		goto Platform_finish;
 	}
 	error = platform_suspend_prepare_late(state);
+	if (error)
+		goto Devices_early_resume;
+
+	error = dpm_suspend_noirq(PMSG_SUSPEND);
+	if (error) {
+		printk(KERN_ERR "PM: noirq suspend of devices failed\n");
+		goto Platform_early_resume;
+	}
+	error = platform_suspend_prepare_noirq(state);
 	if (error)
 		goto Platform_wake;
 
···
 	enable_nonboot_cpus();
 
  Platform_wake:
-	platform_suspend_wake(state);
-	dpm_resume_start(PMSG_RESUME);
+	platform_resume_noirq(state);
+	dpm_resume_noirq(PMSG_RESUME);
+
+ Platform_early_resume:
+	platform_resume_early(state);
+
+ Devices_early_resume:
+	dpm_resume_early(PMSG_RESUME);
 
  Platform_finish:
-	platform_suspend_finish(state);
+	platform_resume_finish(state);
 	return error;
 }
 
···
 	suspend_test_start();
 	dpm_resume_end(PMSG_RESUME);
 	suspend_test_finish("resume devices");
+	trace_suspend_resume(TPS("resume_console"), state, true);
 	resume_console();
+	trace_suspend_resume(TPS("resume_console"), state, false);
 
  Close:
-	platform_suspend_end(state);
+	platform_resume_end(state);
 	return error;
 
  Recover_platform:
-	platform_suspend_recover(state);
+	platform_recover(state);
 	goto Resume_devices;
 }
+29 -3
kernel/power/suspend_test.c
···
 #define TEST_SUSPEND_SECONDS	10
 
 static unsigned long suspend_test_start_time;
+static u32 test_repeat_count_max = 1;
+static u32 test_repeat_count_current;
 
 void suspend_test_start(void)
 {
···
 	int status;
 
 	/* this may fail if the RTC hasn't been initialized */
+repeat:
 	status = rtc_read_time(rtc, &alm.time);
 	if (status < 0) {
 		printk(err_readtime, dev_name(&rtc->dev), status);
···
 	if (state == PM_SUSPEND_STANDBY) {
 		printk(info_test, pm_states[state]);
 		status = pm_suspend(state);
+		if (status < 0)
+			state = PM_SUSPEND_FREEZE;
 	}
+	if (state == PM_SUSPEND_FREEZE) {
+		printk(info_test, pm_states[state]);
+		status = pm_suspend(state);
+	}
+
 	if (status < 0)
 		printk(err_suspend, status);
+
+	test_repeat_count_current++;
+	if (test_repeat_count_current < test_repeat_count_max)
+		goto repeat;
 
 	/* Some platforms can't detect that the alarm triggered the
 	 * wakeup, or (accordingly) disable it after it afterwards.
···
 static int __init setup_test_suspend(char *value)
 {
 	int i;
+	char *repeat;
+	char *suspend_type;
 
-	/* "=mem" ==> "mem" */
+	/* example : "=mem[,N]" ==> "mem[,N]" */
 	value++;
+	suspend_type = strsep(&value, ",");
+	if (!suspend_type)
+		return 0;
+
+	repeat = strsep(&value, ",");
+	if (repeat) {
+		if (kstrtou32(repeat, 0, &test_repeat_count_max))
+			return 0;
+	}
+
 	for (i = 0; pm_labels[i]; i++)
-		if (!strcmp(pm_labels[i], value)) {
+		if (!strcmp(pm_labels[i], suspend_type)) {
 			test_state_label = pm_labels[i];
 			return 0;
 		}
 
-	printk(warn_bad_state, value);
+	printk(warn_bad_state, suspend_type);
 	return 0;
 }
 __setup("test_suspend", setup_test_suspend);