Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

* 'pm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (76 commits)
PM / Hibernate: Implement compat_ioctl for /dev/snapshot
PM / Freezer: fix return value of freezable_schedule_timeout_killable()
PM / shmobile: Allow the A4R domain to be turned off at run time
PM / input / touchscreen: Make st1232 use device PM QoS constraints
PM / QoS: Introduce dev_pm_qos_add_ancestor_request()
PM / shmobile: Remove the stay_on flag from SH7372's PM domains
PM / shmobile: Don't include SH7372's INTCS in syscore suspend/resume
PM / shmobile: Add support for the sh7372 A4S power domain / sleep mode
PM: Drop generic_subsys_pm_ops
PM / Sleep: Remove forward-only callbacks from AMBA bus type
PM / Sleep: Remove forward-only callbacks from platform bus type
PM: Run the driver callback directly if the subsystem one is not there
PM / Sleep: Make pm_op() and pm_noirq_op() return callback pointers
PM/Devfreq: Add Exynos4-bus device DVFS driver for Exynos4210/4212/4412.
PM / Sleep: Merge internal functions in generic_ops.c
PM / Sleep: Simplify generic system suspend callbacks
PM / Hibernate: Remove deprecated hibernation snapshot ioctls
PM / Sleep: Fix freezer failures due to racy usermodehelper_is_disabled()
ARM: S3C64XX: Implement basic power domain support
PM / shmobile: Use common always on power domain governor
...

Fix up trivial conflict in fs/xfs/xfs_buf.c due to removal of unused
XBT_FORCE_SLEEP bit

+3252 -1533
-11
Documentation/feature-removal-schedule.txt
··· 85 85 86 86 --------------------------- 87 87 88 - What: Deprecated snapshot ioctls 89 - When: 2.6.36 90 - 91 - Why: The ioctls in kernel/power/user.c were marked as deprecated long time 92 - ago. Now they notify users about that so that they need to replace 93 - their userspace. After some more time, remove them completely. 94 - 95 - Who: Jiri Slaby <jirislaby@gmail.com> 96 - 97 - --------------------------- 98 - 99 88 What: The ieee80211_regdom module parameter 100 89 When: March 2010 / desktop catchup 101 90
+21 -16
Documentation/power/devices.txt
··· 126 126 pointed to by the ops member of struct dev_pm_domain, or by the pm member of 127 127 struct bus_type, struct device_type and struct class. They are mostly of 128 128 interest to the people writing infrastructure for platforms and buses, like PCI 129 - or USB, or device type and device class drivers. 129 + or USB, or device type and device class drivers. They also are relevant to the 130 + writers of device drivers whose subsystems (PM domains, device types, device 131 + classes and bus types) don't provide all power management methods. 130 132 131 133 Bus drivers implement these methods as appropriate for the hardware and the 132 134 drivers using it; PCI works differently from USB, and so on. Not many people ··· 270 268 unfrozen. Furthermore, the *_noirq phases run at a time when IRQ handlers have 271 269 been disabled (except for those marked with the IRQF_NO_SUSPEND flag). 272 270 273 - All phases use PM domain, bus, type, or class callbacks (that is, methods 274 - defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm). 275 - These callbacks are regarded by the PM core as mutually exclusive. Moreover, 276 - PM domain callbacks always take precedence over bus, type and class callbacks, 277 - while type callbacks take precedence over bus and class callbacks, and class 278 - callbacks take precedence over bus callbacks. To be precise, the following 279 - rules are used to determine which callback to execute in the given phase: 271 + All phases use PM domain, bus, type, class or driver callbacks (that is, methods 272 + defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, dev->class->pm or 273 + dev->driver->pm). These callbacks are regarded by the PM core as mutually 274 + exclusive. Moreover, PM domain callbacks always take precedence over all of the 275 + other callbacks and, for example, type callbacks take precedence over bus, class 276 + and driver callbacks. 
To be precise, the following rules are used to determine 277 + which callback to execute in the given phase: 280 278 281 - 1. If dev->pm_domain is present, the PM core will attempt to execute the 282 - callback included in dev->pm_domain->ops. If that callback is not 283 - present, no action will be carried out for the given device. 279 + 1. If dev->pm_domain is present, the PM core will choose the callback 280 + included in dev->pm_domain->ops for execution 284 281 285 282 2. Otherwise, if both dev->type and dev->type->pm are present, the callback 286 - included in dev->type->pm will be executed. 283 + included in dev->type->pm will be chosen for execution. 287 284 288 285 3. Otherwise, if both dev->class and dev->class->pm are present, the 289 - callback included in dev->class->pm will be executed. 286 + callback included in dev->class->pm will be chosen for execution. 290 287 291 288 4. Otherwise, if both dev->bus and dev->bus->pm are present, the callback 292 - included in dev->bus->pm will be executed. 289 + included in dev->bus->pm will be chosen for execution. 293 290 294 291 This allows PM domains and device types to override callbacks provided by bus 295 292 types or device classes if necessary. 296 293 297 - These callbacks may in turn invoke device- or driver-specific methods stored in 298 - dev->driver->pm, but they don't have to. 294 + The PM domain, type, class and bus callbacks may in turn invoke device- or 295 + driver-specific methods stored in dev->driver->pm, but they don't have to do 296 + that. 297 + 298 + If the subsystem callback chosen for execution is not present, the PM core will 299 + execute the corresponding method from dev->driver->pm instead if there is one. 299 300 300 301 301 302 Entering System Suspend
+32 -7
Documentation/power/freezing-of-tasks.txt
··· 21 21 try_to_freeze_tasks() that sets TIF_FREEZE for all of the freezable tasks and 22 22 either wakes them up, if they are kernel threads, or sends fake signals to them, 23 23 if they are user space processes. A task that has TIF_FREEZE set, should react 24 - to it by calling the function called refrigerator() (defined in 24 + to it by calling the function called __refrigerator() (defined in 25 25 kernel/freezer.c), which sets the task's PF_FROZEN flag, changes its state 26 26 to TASK_UNINTERRUPTIBLE and makes it loop until PF_FROZEN is cleared for it. 27 27 Then, we say that the task is 'frozen' and therefore the set of functions ··· 29 29 defined in kernel/power/process.c, kernel/freezer.c & include/linux/freezer.h). 30 30 User space processes are generally frozen before kernel threads. 31 31 32 - It is not recommended to call refrigerator() directly. Instead, it is 33 - recommended to use the try_to_freeze() function (defined in 34 - include/linux/freezer.h), that checks the task's TIF_FREEZE flag and makes the 35 - task enter refrigerator() if the flag is set. 32 + __refrigerator() must not be called directly. Instead, use the 33 + try_to_freeze() function (defined in include/linux/freezer.h), that checks 34 + the task's TIF_FREEZE flag and makes the task enter __refrigerator() if the 35 + flag is set. 36 36 37 37 For user space processes try_to_freeze() is called automatically from the 38 38 signal-handling code, but the freezable kernel threads need to call it ··· 61 61 After the system memory state has been restored from a hibernation image and 62 62 devices have been reinitialized, the function thaw_processes() is called in 63 63 order to clear the PF_FROZEN flag for each frozen task. Then, the tasks that 64 - have been frozen leave refrigerator() and continue running. 64 + have been frozen leave __refrigerator() and continue running. 65 65 66 66 III. Which kernel threads are freezable? 67 67 68 68 Kernel threads are not freezable by default. 
However, a kernel thread may clear 69 69 PF_NOFREEZE for itself by calling set_freezable() (the resetting of PF_NOFREEZE 70 - directly is strongly discouraged). From this point it is regarded as freezable 70 + directly is not allowed). From this point it is regarded as freezable 71 71 and must call try_to_freeze() in a suitable place. 72 72 73 73 IV. Why do we do that? ··· 176 176 A driver must have all firmwares it may need in RAM before suspend() is called. 177 177 If keeping them is not practical, for example due to their size, they must be 178 178 requested early enough using the suspend notifier API described in notifiers.txt. 179 + 180 + VI. Are there any precautions to be taken to prevent freezing failures? 181 + 182 + Yes, there are. 183 + 184 + First of all, grabbing the 'pm_mutex' lock to mutually exclude a piece of code 185 + from system-wide sleep such as suspend/hibernation is not encouraged. 186 + If possible, that piece of code must instead hook onto the suspend/hibernation 187 + notifiers to achieve mutual exclusion. Look at the CPU-Hotplug code 188 + (kernel/cpu.c) for an example. 189 + 190 + However, if that is not feasible, and grabbing 'pm_mutex' is deemed necessary, 191 + it is strongly discouraged to directly call mutex_[un]lock(&pm_mutex) since 192 + that could lead to freezing failures, because if the suspend/hibernate code 193 + successfully acquired the 'pm_mutex' lock, and hence that other entity failed 194 + to acquire the lock, then that task would get blocked in TASK_UNINTERRUPTIBLE 195 + state. As a consequence, the freezer would not be able to freeze that task, 196 + leading to freezing failure. 197 + 198 + However, the [un]lock_system_sleep() APIs are safe to use in this scenario, 199 + since they ask the freezer to skip freezing this task, since it is anyway 200 + "frozen enough" as it is blocked on 'pm_mutex', which will be released 201 + only after the entire suspend/hibernation sequence is complete. 
202 + So, to summarize, use [un]lock_system_sleep() instead of directly using 203 + mutex_[un]lock(&pm_mutex). That would prevent freezing failures.
+60 -54
Documentation/power/runtime_pm.txt
··· 57 57 58 58 4. Bus type of the device, if both dev->bus and dev->bus->pm are present. 59 59 60 + If the subsystem chosen by applying the above rules doesn't provide the relevant 61 + callback, the PM core will invoke the corresponding driver callback stored in 62 + dev->driver->pm directly (if present). 63 + 60 64 The PM core always checks which callback to use in the order given above, so the 61 65 priority order of callbacks from high to low is: PM domain, device type, class 62 66 and bus type. Moreover, the high-priority one will always take precedence over ··· 68 64 are referred to as subsystem-level callbacks in what follows. 69 65 70 66 By default, the callbacks are always invoked in process context with interrupts 71 - enabled. However, subsystems can use the pm_runtime_irq_safe() helper function 72 - to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and 73 - ->runtime_idle() callbacks may be invoked in atomic context with interrupts 74 - disabled for a given device. This implies that the callback routines in 75 - question must not block or sleep, but it also means that the synchronous helper 76 - functions listed at the end of Section 4 may be used for that device within an 77 - interrupt handler or generally in an atomic context. 67 + enabled. However, the pm_runtime_irq_safe() helper function can be used to tell 68 + the PM core that it is safe to run the ->runtime_suspend(), ->runtime_resume() 69 + and ->runtime_idle() callbacks for the given device in atomic context with 70 + interrupts disabled. This implies that the callback routines in question must 71 + not block or sleep, but it also means that the synchronous helper functions 72 + listed at the end of Section 4 may be used for that device within an interrupt 73 + handler or generally in an atomic context. 
78 74 79 - The subsystem-level suspend callback is _entirely_ _responsible_ for handling 80 - the suspend of the device as appropriate, which may, but need not include 81 - executing the device driver's own ->runtime_suspend() callback (from the 75 + The subsystem-level suspend callback, if present, is _entirely_ _responsible_ 76 + for handling the suspend of the device as appropriate, which may, but need not 77 + include executing the device driver's own ->runtime_suspend() callback (from the 82 78 PM core's point of view it is not necessary to implement a ->runtime_suspend() 83 79 callback in a device driver as long as the subsystem-level suspend callback 84 80 knows what to do to handle the device). 85 81 86 - * Once the subsystem-level suspend callback has completed successfully 87 - for given device, the PM core regards the device as suspended, which need 88 - not mean that the device has been put into a low power state. It is 89 - supposed to mean, however, that the device will not process data and will 90 - not communicate with the CPU(s) and RAM until the subsystem-level resume 91 - callback is executed for it. The runtime PM status of a device after 92 - successful execution of the subsystem-level suspend callback is 'suspended'. 82 + * Once the subsystem-level suspend callback (or the driver suspend callback, 83 + if invoked directly) has completed successfully for the given device, the PM 84 + core regards the device as suspended, which need not mean that it has been 85 + put into a low power state. It is supposed to mean, however, that the 86 + device will not process data and will not communicate with the CPU(s) and 87 + RAM until the appropriate resume callback is executed for it. The runtime 88 + PM status of a device after successful execution of the suspend callback is 89 + 'suspended'. 
93 90 94 - * If the subsystem-level suspend callback returns -EBUSY or -EAGAIN, 95 - the device's runtime PM status is 'active', which means that the device 96 - _must_ be fully operational afterwards. 91 + * If the suspend callback returns -EBUSY or -EAGAIN, the device's runtime PM 92 + status remains 'active', which means that the device _must_ be fully 93 + operational afterwards. 97 94 98 - * If the subsystem-level suspend callback returns an error code different 99 - from -EBUSY or -EAGAIN, the PM core regards this as a fatal error and will 100 - refuse to run the helper functions described in Section 4 for the device, 101 - until the status of it is directly set either to 'active', or to 'suspended' 102 - (the PM core provides special helper functions for this purpose). 95 + * If the suspend callback returns an error code different from -EBUSY and 96 + -EAGAIN, the PM core regards this as a fatal error and will refuse to run 97 + the helper functions described in Section 4 for the device until its status 98 + is directly set to either 'active', or 'suspended' (the PM core provides 99 + special helper functions for this purpose). 103 100 104 - In particular, if the driver requires remote wake-up capability (i.e. hardware 101 + In particular, if the driver requires remote wakeup capability (i.e. hardware 105 102 mechanism allowing the device to request a change of its power state, such as 106 103 PCI PME) for proper functioning and device_run_wake() returns 'false' for the 107 104 device, then ->runtime_suspend() should return -EBUSY. On the other hand, if 108 - device_run_wake() returns 'true' for the device and the device is put into a low 109 - power state during the execution of the subsystem-level suspend callback, it is 110 - expected that remote wake-up will be enabled for the device. Generally, remote 111 - wake-up should be enabled for all input devices put into a low power state at 112 - run time. 
105 + device_run_wake() returns 'true' for the device and the device is put into a 106 + low-power state during the execution of the suspend callback, it is expected 107 + that remote wakeup will be enabled for the device. Generally, remote wakeup 108 + should be enabled for all input devices put into low-power states at run time. 113 109 114 - The subsystem-level resume callback is _entirely_ _responsible_ for handling the 115 - resume of the device as appropriate, which may, but need not include executing 116 - the device driver's own ->runtime_resume() callback (from the PM core's point of 117 - view it is not necessary to implement a ->runtime_resume() callback in a device 118 - driver as long as the subsystem-level resume callback knows what to do to handle 119 - the device). 110 + The subsystem-level resume callback, if present, is _entirely_ _responsible_ for 111 + handling the resume of the device as appropriate, which may, but need not 112 + include executing the device driver's own ->runtime_resume() callback (from the 113 + PM core's point of view it is not necessary to implement a ->runtime_resume() 114 + callback in a device driver as long as the subsystem-level resume callback knows 115 + what to do to handle the device). 120 116 121 - * Once the subsystem-level resume callback has completed successfully, the PM 122 - core regards the device as fully operational, which means that the device 123 - _must_ be able to complete I/O operations as needed. The runtime PM status 124 - of the device is then 'active'. 117 + * Once the subsystem-level resume callback (or the driver resume callback, if 118 + invoked directly) has completed successfully, the PM core regards the device 119 + as fully operational, which means that the device _must_ be able to complete 120 + I/O operations as needed. The runtime PM status of the device is then 121 + 'active'. 
125 122 126 - * If the subsystem-level resume callback returns an error code, the PM core 127 - regards this as a fatal error and will refuse to run the helper functions 128 - described in Section 4 for the device, until its status is directly set 129 - either to 'active' or to 'suspended' (the PM core provides special helper 130 - functions for this purpose). 123 + * If the resume callback returns an error code, the PM core regards this as a 124 + fatal error and will refuse to run the helper functions described in Section 125 + 4 for the device, until its status is directly set to either 'active', or 126 + 'suspended' (by means of special helper functions provided by the PM core 127 + for this purpose). 131 128 132 - The subsystem-level idle callback is executed by the PM core whenever the device 133 - appears to be idle, which is indicated to the PM core by two counters, the 134 - device's usage counter and the counter of 'active' children of the device. 129 + The idle callback (a subsystem-level one, if present, or the driver one) is 130 + executed by the PM core whenever the device appears to be idle, which is 131 + indicated to the PM core by two counters, the device's usage counter and the 132 + counter of 'active' children of the device. 135 133 136 134 * If any of these counters is decreased using a helper function provided by 137 135 the PM core and it turns out to be equal to zero, the other counter is 138 136 checked. If that counter also is equal to zero, the PM core executes the 139 - subsystem-level idle callback with the device as an argument. 137 + idle callback with the device as its argument. 
140 138 141 - The action performed by a subsystem-level idle callback is totally dependent on 142 - the subsystem in question, but the expected and recommended action is to check 139 + The action performed by the idle callback is totally dependent on the subsystem 140 + (or driver) in question, but the expected and recommended action is to check 143 141 if the device can be suspended (i.e. if all of the conditions necessary for 144 142 suspending the device are satisfied) and to queue up a suspend request for the 145 143 device in that case. The value returned by this callback is ignored by the PM 146 144 core. 147 145 148 146 The helper functions provided by the PM core, described in Section 4, guarantee 149 - that the following constraints are met with respect to the bus type's runtime 150 - PM callbacks: 147 + that the following constraints are met with respect to runtime PM callbacks for 148 + one device: 151 149 152 150 (1) The callbacks are mutually exclusive (e.g. it is forbidden to execute 153 151 ->runtime_suspend() in parallel with ->runtime_resume() or with another
-2
arch/alpha/include/asm/thread_info.h
··· 79 79 #define TIF_UAC_SIGBUS 12 /* ! userspace part of 'osf_sysinfo' */ 80 80 #define TIF_MEMDIE 13 /* is terminating due to OOM killer */ 81 81 #define TIF_RESTORE_SIGMASK 14 /* restore signal mask in do_signal */ 82 - #define TIF_FREEZE 16 /* is freezing for suspend */ 83 82 84 83 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) 85 84 #define _TIF_SIGPENDING (1<<TIF_SIGPENDING) ··· 86 87 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 87 88 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 88 89 #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) 89 - #define _TIF_FREEZE (1<<TIF_FREEZE) 90 90 91 91 /* Work to do on interrupt/exception return. */ 92 92 #define _TIF_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
-2
arch/arm/include/asm/thread_info.h
··· 142 142 #define TIF_POLLING_NRFLAG 16 143 143 #define TIF_USING_IWMMXT 17 144 144 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ 145 - #define TIF_FREEZE 19 146 145 #define TIF_RESTORE_SIGMASK 20 147 146 #define TIF_SECCOMP 21 148 147 ··· 151 152 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 152 153 #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 153 154 #define _TIF_USING_IWMMXT (1 << TIF_USING_IWMMXT) 154 - #define _TIF_FREEZE (1 << TIF_FREEZE) 155 155 #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 156 156 #define _TIF_SECCOMP (1 << TIF_SECCOMP) 157 157
+1
arch/arm/mach-s3c64xx/Kconfig
··· 8 8 bool 9 9 depends on ARCH_S3C64XX 10 10 select SAMSUNG_WAKEMASK 11 + select PM_GENERIC_DOMAINS 11 12 default y 12 13 help 13 14 Base platform code for any Samsung S3C64XX device
+1 -1
arch/arm/mach-s3c64xx/mach-crag6410.c
··· 706 706 707 707 regulator_has_full_constraints(); 708 708 709 - s3c_pm_init(); 709 + s3c64xx_pm_init(); 710 710 } 711 711 712 712 MACHINE_START(WLF_CRAGG_6410, "Wolfson Cragganmore 6410")
+174 -2
arch/arm/mach-s3c64xx/pm.c
··· 17 17 #include <linux/serial_core.h> 18 18 #include <linux/io.h> 19 19 #include <linux/gpio.h> 20 + #include <linux/pm_domain.h> 20 21 21 22 #include <mach/map.h> 22 23 #include <mach/irqs.h> 23 24 25 + #include <plat/devs.h> 24 26 #include <plat/pm.h> 25 27 #include <plat/wakeup-mask.h> 26 28 ··· 32 30 #include <mach/regs-syscon-power.h> 33 31 #include <mach/regs-gpio-memport.h> 34 32 #include <mach/regs-modem.h> 33 + 34 + struct s3c64xx_pm_domain { 35 + char *const name; 36 + u32 ena; 37 + u32 pwr_stat; 38 + struct generic_pm_domain pd; 39 + }; 40 + 41 + static int s3c64xx_pd_off(struct generic_pm_domain *domain) 42 + { 43 + struct s3c64xx_pm_domain *pd; 44 + u32 val; 45 + 46 + pd = container_of(domain, struct s3c64xx_pm_domain, pd); 47 + 48 + val = __raw_readl(S3C64XX_NORMAL_CFG); 49 + val &= ~(pd->ena); 50 + __raw_writel(val, S3C64XX_NORMAL_CFG); 51 + 52 + return 0; 53 + } 54 + 55 + static int s3c64xx_pd_on(struct generic_pm_domain *domain) 56 + { 57 + struct s3c64xx_pm_domain *pd; 58 + u32 val; 59 + long retry = 1000000L; 60 + 61 + pd = container_of(domain, struct s3c64xx_pm_domain, pd); 62 + 63 + val = __raw_readl(S3C64XX_NORMAL_CFG); 64 + val |= pd->ena; 65 + __raw_writel(val, S3C64XX_NORMAL_CFG); 66 + 67 + /* Not all domains provide power status readback */ 68 + if (pd->pwr_stat) { 69 + do { 70 + cpu_relax(); 71 + if (__raw_readl(S3C64XX_BLK_PWR_STAT) & pd->pwr_stat) 72 + break; 73 + } while (retry--); 74 + 75 + if (!retry) { 76 + pr_err("Failed to start domain %s\n", pd->name); 77 + return -EBUSY; 78 + } 79 + } 80 + 81 + return 0; 82 + } 83 + 84 + static struct s3c64xx_pm_domain s3c64xx_pm_irom = { 85 + .name = "IROM", 86 + .ena = S3C64XX_NORMALCFG_IROM_ON, 87 + .pd = { 88 + .power_off = s3c64xx_pd_off, 89 + .power_on = s3c64xx_pd_on, 90 + }, 91 + }; 92 + 93 + static struct s3c64xx_pm_domain s3c64xx_pm_etm = { 94 + .name = "ETM", 95 + .ena = S3C64XX_NORMALCFG_DOMAIN_ETM_ON, 96 + .pwr_stat = S3C64XX_BLKPWRSTAT_ETM, 97 + .pd = { 98 + .power_off = 
s3c64xx_pd_off, 99 + .power_on = s3c64xx_pd_on, 100 + }, 101 + }; 102 + 103 + static struct s3c64xx_pm_domain s3c64xx_pm_s = { 104 + .name = "S", 105 + .ena = S3C64XX_NORMALCFG_DOMAIN_S_ON, 106 + .pwr_stat = S3C64XX_BLKPWRSTAT_S, 107 + .pd = { 108 + .power_off = s3c64xx_pd_off, 109 + .power_on = s3c64xx_pd_on, 110 + }, 111 + }; 112 + 113 + static struct s3c64xx_pm_domain s3c64xx_pm_f = { 114 + .name = "F", 115 + .ena = S3C64XX_NORMALCFG_DOMAIN_F_ON, 116 + .pwr_stat = S3C64XX_BLKPWRSTAT_F, 117 + .pd = { 118 + .power_off = s3c64xx_pd_off, 119 + .power_on = s3c64xx_pd_on, 120 + }, 121 + }; 122 + 123 + static struct s3c64xx_pm_domain s3c64xx_pm_p = { 124 + .name = "P", 125 + .ena = S3C64XX_NORMALCFG_DOMAIN_P_ON, 126 + .pwr_stat = S3C64XX_BLKPWRSTAT_P, 127 + .pd = { 128 + .power_off = s3c64xx_pd_off, 129 + .power_on = s3c64xx_pd_on, 130 + }, 131 + }; 132 + 133 + static struct s3c64xx_pm_domain s3c64xx_pm_i = { 134 + .name = "I", 135 + .ena = S3C64XX_NORMALCFG_DOMAIN_I_ON, 136 + .pwr_stat = S3C64XX_BLKPWRSTAT_I, 137 + .pd = { 138 + .power_off = s3c64xx_pd_off, 139 + .power_on = s3c64xx_pd_on, 140 + }, 141 + }; 142 + 143 + static struct s3c64xx_pm_domain s3c64xx_pm_g = { 144 + .name = "G", 145 + .ena = S3C64XX_NORMALCFG_DOMAIN_G_ON, 146 + .pd = { 147 + .power_off = s3c64xx_pd_off, 148 + .power_on = s3c64xx_pd_on, 149 + }, 150 + }; 151 + 152 + static struct s3c64xx_pm_domain s3c64xx_pm_v = { 153 + .name = "V", 154 + .ena = S3C64XX_NORMALCFG_DOMAIN_V_ON, 155 + .pwr_stat = S3C64XX_BLKPWRSTAT_V, 156 + .pd = { 157 + .power_off = s3c64xx_pd_off, 158 + .power_on = s3c64xx_pd_on, 159 + }, 160 + }; 161 + 162 + static struct s3c64xx_pm_domain *s3c64xx_always_on_pm_domains[] = { 163 + &s3c64xx_pm_irom, 164 + }; 165 + 166 + static struct s3c64xx_pm_domain *s3c64xx_pm_domains[] = { 167 + &s3c64xx_pm_etm, 168 + &s3c64xx_pm_g, 169 + &s3c64xx_pm_v, 170 + &s3c64xx_pm_i, 171 + &s3c64xx_pm_p, 172 + &s3c64xx_pm_s, 173 + &s3c64xx_pm_f, 174 + }; 35 175 36 176 #ifdef 
CONFIG_S3C_PM_DEBUG_LED_SMDK 37 177 void s3c_pm_debug_smdkled(u32 set, u32 clear) ··· 233 89 234 90 SAVE_ITEM(S3C64XX_SDMA_SEL), 235 91 SAVE_ITEM(S3C64XX_MODEM_MIFPCON), 92 + 93 + SAVE_ITEM(S3C64XX_NORMAL_CFG), 236 94 }; 237 95 238 96 void s3c_pm_configure_extint(void) ··· 325 179 __raw_writel(__raw_readl(S3C64XX_WAKEUP_STAT), S3C64XX_WAKEUP_STAT); 326 180 } 327 181 328 - static int s3c64xx_pm_init(void) 182 + int __init s3c64xx_pm_init(void) 183 + { 184 + int i; 185 + 186 + s3c_pm_init(); 187 + 188 + for (i = 0; i < ARRAY_SIZE(s3c64xx_always_on_pm_domains); i++) 189 + pm_genpd_init(&s3c64xx_always_on_pm_domains[i]->pd, 190 + &pm_domain_always_on_gov, false); 191 + 192 + for (i = 0; i < ARRAY_SIZE(s3c64xx_pm_domains); i++) 193 + pm_genpd_init(&s3c64xx_pm_domains[i]->pd, NULL, false); 194 + 195 + if (dev_get_platdata(&s3c_device_fb.dev)) 196 + pm_genpd_add_device(&s3c64xx_pm_f.pd, &s3c_device_fb.dev); 197 + 198 + return 0; 199 + } 200 + 201 + static __init int s3c64xx_pm_initcall(void) 329 202 { 330 203 pm_cpu_prep = s3c64xx_pm_prepare; 331 204 pm_cpu_sleep = s3c64xx_cpu_suspend; ··· 363 198 364 199 return 0; 365 200 } 201 + arch_initcall(s3c64xx_pm_initcall); 366 202 367 - arch_initcall(s3c64xx_pm_init); 203 + static __init int s3c64xx_pm_late_initcall(void) 204 + { 205 + pm_genpd_poweroff_unused(); 206 + 207 + return 0; 208 + } 209 + late_initcall(s3c64xx_pm_late_initcall);
+2 -2
arch/arm/mach-shmobile/include/mach/common.h
··· 34 34 extern void sh7372_clock_init(void); 35 35 extern void sh7372_pinmux_init(void); 36 36 extern void sh7372_pm_init(void); 37 - extern void sh7372_resume_core_standby_a3sm(void); 38 - extern int sh7372_do_idle_a3sm(unsigned long unused); 37 + extern void sh7372_resume_core_standby_sysc(void); 38 + extern int sh7372_do_idle_sysc(unsigned long sleep_mode); 39 39 extern struct clk sh7372_extal1_clk; 40 40 extern struct clk sh7372_extal2_clk; 41 41
+4 -2
arch/arm/mach-shmobile/include/mach/sh7372.h
··· 480 480 struct sh7372_pm_domain { 481 481 struct generic_pm_domain genpd; 482 482 struct dev_power_governor *gov; 483 - void (*suspend)(void); 483 + int (*suspend)(void); 484 484 void (*resume)(void); 485 485 unsigned int bit_shift; 486 486 bool no_debug; 487 - bool stay_on; 488 487 }; 489 488 490 489 static inline struct sh7372_pm_domain *to_sh7372_pd(struct generic_pm_domain *d) ··· 498 499 extern struct sh7372_pm_domain sh7372_a4r; 499 500 extern struct sh7372_pm_domain sh7372_a3rv; 500 501 extern struct sh7372_pm_domain sh7372_a3ri; 502 + extern struct sh7372_pm_domain sh7372_a4s; 501 503 extern struct sh7372_pm_domain sh7372_a3sp; 502 504 extern struct sh7372_pm_domain sh7372_a3sg; 503 505 ··· 515 515 516 516 extern void sh7372_intcs_suspend(void); 517 517 extern void sh7372_intcs_resume(void); 518 + extern void sh7372_intca_suspend(void); 519 + extern void sh7372_intca_resume(void); 518 520 519 521 #endif /* __ASM_SH7372_H__ */
+50
arch/arm/mach-shmobile/intc-sh7372.c
··· 535 535 static struct intc_desc intcs_desc __initdata = { 536 536 .name = "sh7372-intcs", 537 537 .force_enable = ENABLED_INTCS, 538 + .skip_syscore_suspend = true, 538 539 .resource = intcs_resources, 539 540 .num_resources = ARRAY_SIZE(intcs_resources), 540 541 .hw = INTC_HW_DESC(intcs_vectors, intcs_groups, intcs_mask_registers, ··· 611 610 612 611 for (k = 0x80; k <= 0x9c; k += 4) 613 612 __raw_writeb(ffd5[k], intcs_ffd5 + k); 613 + } 614 + 615 + static unsigned short e694[0x200]; 616 + static unsigned short e695[0x200]; 617 + 618 + void sh7372_intca_suspend(void) 619 + { 620 + int k; 621 + 622 + for (k = 0x00; k <= 0x38; k += 4) 623 + e694[k] = __raw_readw(0xe6940000 + k); 624 + 625 + for (k = 0x80; k <= 0xb4; k += 4) 626 + e694[k] = __raw_readb(0xe6940000 + k); 627 + 628 + for (k = 0x180; k <= 0x1b4; k += 4) 629 + e694[k] = __raw_readb(0xe6940000 + k); 630 + 631 + for (k = 0x00; k <= 0x50; k += 4) 632 + e695[k] = __raw_readw(0xe6950000 + k); 633 + 634 + for (k = 0x80; k <= 0xa8; k += 4) 635 + e695[k] = __raw_readb(0xe6950000 + k); 636 + 637 + for (k = 0x180; k <= 0x1a8; k += 4) 638 + e695[k] = __raw_readb(0xe6950000 + k); 639 + } 640 + 641 + void sh7372_intca_resume(void) 642 + { 643 + int k; 644 + 645 + for (k = 0x00; k <= 0x38; k += 4) 646 + __raw_writew(e694[k], 0xe6940000 + k); 647 + 648 + for (k = 0x80; k <= 0xb4; k += 4) 649 + __raw_writeb(e694[k], 0xe6940000 + k); 650 + 651 + for (k = 0x180; k <= 0x1b4; k += 4) 652 + __raw_writeb(e694[k], 0xe6940000 + k); 653 + 654 + for (k = 0x00; k <= 0x50; k += 4) 655 + __raw_writew(e695[k], 0xe6950000 + k); 656 + 657 + for (k = 0x80; k <= 0xa8; k += 4) 658 + __raw_writeb(e695[k], 0xe6950000 + k); 659 + 660 + for (k = 0x180; k <= 0x1a8; k += 4) 661 + __raw_writeb(e695[k], 0xe6950000 + k); 614 662 }
+149 -53
arch/arm/mach-shmobile/pm-sh7372.c
··· 82 82 struct sh7372_pm_domain *sh7372_pd = to_sh7372_pd(genpd);
83 83 unsigned int mask = 1 << sh7372_pd->bit_shift;
84 84
85 - if (sh7372_pd->suspend)
86 - sh7372_pd->suspend();
85 + if (sh7372_pd->suspend) {
86 + int ret = sh7372_pd->suspend();
87 87
88 - if (sh7372_pd->stay_on)
89 - return 0;
88 + if (ret)
89 + return ret;
90 + }
90 91
91 92 if (__raw_readl(PSTR) & mask) {
92 93 unsigned int retry_count;
··· 102 101 }
103 102
104 103 if (!sh7372_pd->no_debug)
105 - pr_debug("sh7372 power domain down 0x%08x -> PSTR = 0x%08x\n",
106 - mask, __raw_readl(PSTR));
104 + pr_debug("%s: Power off, 0x%08x -> PSTR = 0x%08x\n",
105 + genpd->name, mask, __raw_readl(PSTR));
107 106
108 107 return 0;
109 108 }
··· 113 112 unsigned int mask = 1 << sh7372_pd->bit_shift;
114 113 unsigned int retry_count;
115 114 int ret = 0;
116 -
117 - if (sh7372_pd->stay_on)
118 - goto out;
119 115
120 116 if (__raw_readl(PSTR) & mask)
121 117 goto out;
··· 131 133 ret = -EIO;
132 134
133 135 if (!sh7372_pd->no_debug)
134 - pr_debug("sh7372 power domain up 0x%08x -> PSTR = 0x%08x\n",
135 - mask, __raw_readl(PSTR));
136 + pr_debug("%s: Power on, 0x%08x -> PSTR = 0x%08x\n",
137 + sh7372_pd->genpd.name, mask, __raw_readl(PSTR));
136 138
137 139 out:
138 140 if (ret == 0 && sh7372_pd->resume && do_resume)
··· 146 148 return __pd_power_up(to_sh7372_pd(genpd), true);
147 149 }
148 150
149 - static void sh7372_a4r_suspend(void)
151 + static int sh7372_a4r_suspend(void)
150 152 {
151 153 sh7372_intcs_suspend();
152 154 __raw_writel(0x300fffff, WUPRMSK); /* avoid wakeup */
155 + return 0;
153 156 }
154 157
155 158 static bool pd_active_wakeup(struct device *dev)
156 159 {
157 - return true;
160 + bool (*active_wakeup)(struct device *dev);
161 +
162 + active_wakeup = dev_gpd_data(dev)->ops.active_wakeup;
163 + return active_wakeup ? active_wakeup(dev) : true;
158 164 }
159 165
160 - static bool sh7372_power_down_forbidden(struct dev_pm_domain *domain)
166 + static int sh7372_stop_dev(struct device *dev)
161 167 {
162 - return false;
168 + int (*stop)(struct device *dev);
169 +
170 + stop = dev_gpd_data(dev)->ops.stop;
171 + if (stop) {
172 + int ret = stop(dev);
173 + if (ret)
174 + return ret;
175 + }
176 + return pm_clk_suspend(dev);
163 177 }
164 178
165 - struct dev_power_governor sh7372_always_on_gov = {
166 - .power_down_ok = sh7372_power_down_forbidden,
167 - };
179 + static int sh7372_start_dev(struct device *dev)
180 + {
181 + int (*start)(struct device *dev);
182 + int ret;
183 +
184 + ret = pm_clk_resume(dev);
185 + if (ret)
186 + return ret;
187 +
188 + start = dev_gpd_data(dev)->ops.start;
189 + if (start)
190 + ret = start(dev);
191 +
192 + return ret;
193 + }
168 194
169 195 void sh7372_init_pm_domain(struct sh7372_pm_domain *sh7372_pd)
170 196 {
171 197 struct generic_pm_domain *genpd = &sh7372_pd->genpd;
198 + struct dev_power_governor *gov = sh7372_pd->gov;
172 199
173 - pm_genpd_init(genpd, sh7372_pd->gov, false);
174 - genpd->stop_device = pm_clk_suspend;
175 - genpd->start_device = pm_clk_resume;
200 + pm_genpd_init(genpd, gov ? : &simple_qos_governor, false);
201 + genpd->dev_ops.stop = sh7372_stop_dev;
202 + genpd->dev_ops.start = sh7372_start_dev;
203 + genpd->dev_ops.active_wakeup = pd_active_wakeup;
176 204 genpd->dev_irq_safe = true;
177 - genpd->active_wakeup = pd_active_wakeup;
178 205 genpd->power_off = pd_power_down;
179 206 genpd->power_on = pd_power_up;
180 207 __pd_power_up(sh7372_pd, false);
··· 222 199 }
223 200
224 201 struct sh7372_pm_domain sh7372_a4lc = {
202 + .genpd.name = "A4LC",
225 203 .bit_shift = 1,
226 204 };
227 205
228 206 struct sh7372_pm_domain sh7372_a4mp = {
207 + .genpd.name = "A4MP",
229 208 .bit_shift = 2,
230 209 };
231 210
232 211 struct sh7372_pm_domain sh7372_d4 = {
212 + .genpd.name = "D4",
233 213 .bit_shift = 3,
234 214 };
235 215
236 216 struct sh7372_pm_domain sh7372_a4r = {
217 + .genpd.name = "A4R",
237 218 .bit_shift = 5,
238 - .gov = &sh7372_always_on_gov,
239 219 .suspend = sh7372_a4r_suspend,
240 220 .resume = sh7372_intcs_resume,
241 - .stay_on = true,
242 221 };
243 222
244 223 struct sh7372_pm_domain sh7372_a3rv = {
224 + .genpd.name = "A3RV",
245 225 .bit_shift = 6,
246 226 };
247 227
248 228 struct sh7372_pm_domain sh7372_a3ri = {
229 + .genpd.name = "A3RI",
249 230 .bit_shift = 8,
250 231 };
251 232
252 - struct sh7372_pm_domain sh7372_a3sp = {
253 - .bit_shift = 11,
254 - .gov = &sh7372_always_on_gov,
255 - .no_debug = true,
256 - };
257 -
258 - static void sh7372_a3sp_init(void)
233 + static int sh7372_a4s_suspend(void)
259 234 {
260 - /* serial consoles make use of SCIF hardware located in A3SP,
261 - * keep such power domain on if "no_console_suspend" is set.
235 + /*
236 + * The A4S domain contains the CPU core and therefore it should
237 + * only be turned off if the CPU is not in use.
262 238 */
263 - sh7372_a3sp.stay_on = !console_suspend_enabled;
239 + return -EBUSY;
264 240 }
265 241
242 + struct sh7372_pm_domain sh7372_a4s = {
243 + .genpd.name = "A4S",
244 + .bit_shift = 10,
245 + .gov = &pm_domain_always_on_gov,
246 + .no_debug = true,
247 + .suspend = sh7372_a4s_suspend,
248 + };
249 +
250 + static int sh7372_a3sp_suspend(void)
251 + {
252 + /*
253 + * Serial consoles make use of SCIF hardware located in A3SP,
254 + * keep such power domain on if "no_console_suspend" is set.
255 + */
256 + return console_suspend_enabled ? 0 : -EBUSY;
257 + }
258 +
259 + struct sh7372_pm_domain sh7372_a3sp = {
260 + .genpd.name = "A3SP",
261 + .bit_shift = 11,
262 + .gov = &pm_domain_always_on_gov,
263 + .no_debug = true,
264 + .suspend = sh7372_a3sp_suspend,
265 + };
266 +
266 267 struct sh7372_pm_domain sh7372_a3sg = {
268 + .genpd.name = "A3SG",
267 269 .bit_shift = 13,
268 270 };
269 271
··· 305 257 return 0;
306 258 }
307 259
308 - static void sh7372_enter_core_standby(void)
260 + static void sh7372_set_reset_vector(unsigned long address)
309 261 {
310 262 /* set reset vector, translate 4k */
311 - __raw_writel(__pa(sh7372_resume_core_standby_a3sm), SBAR);
263 + __raw_writel(address, SBAR);
312 264 __raw_writel(0, APARMBAREA);
265 + }
266 +
267 + static void sh7372_enter_core_standby(void)
268 + {
269 + sh7372_set_reset_vector(__pa(sh7372_resume_core_standby_sysc));
313 270
314 271 /* enter sleep mode with SYSTBCR to 0x10 */
315 272 __raw_writel(0x10, SYSTBCR);
··· 327 274 #endif
328 275
329 276 #ifdef CONFIG_SUSPEND
330 - static void sh7372_enter_a3sm_common(int pllc0_on)
277 + static void sh7372_enter_sysc(int pllc0_on, unsigned long sleep_mode)
331 278 {
332 - /* set reset vector, translate 4k */
333 - __raw_writel(__pa(sh7372_resume_core_standby_a3sm), SBAR);
334 - __raw_writel(0, APARMBAREA);
335 -
336 279 if (pllc0_on)
337 280 __raw_writel(0, PLLC01STPCR);
338 281 else
339 282 __raw_writel(1 << 28, PLLC01STPCR);
340 283
341 - __raw_writel(0, PDNSEL); /* power-down A3SM only, not A4S */
342 284 __raw_readl(WUPSFAC); /* read wakeup int. factor before sleep */
343 - cpu_suspend(0, sh7372_do_idle_a3sm);
285 + cpu_suspend(sleep_mode, sh7372_do_idle_sysc);
344 286 __raw_readl(WUPSFAC); /* read wakeup int. factor after wakeup */
345 287
346 288 /* disable reset vector translation */
347 289 __raw_writel(0, SBAR);
348 290 }
349 291
350 - static int sh7372_a3sm_valid(unsigned long *mskp, unsigned long *msk2p)
292 + static int sh7372_sysc_valid(unsigned long *mskp, unsigned long *msk2p)
351 293 {
352 294 unsigned long mstpsr0, mstpsr1, mstpsr2, mstpsr3, mstpsr4;
353 295 unsigned long msk, msk2;
··· 430 382 *irqcr2p = irqcr2;
431 383 }
432 384
433 - static void sh7372_setup_a3sm(unsigned long msk, unsigned long msk2)
385 + static void sh7372_setup_sysc(unsigned long msk, unsigned long msk2)
434 386 {
435 387 u16 irqcrx_low, irqcrx_high, irqcry_low, irqcry_high;
436 388 unsigned long tmp;
··· 463 415 __raw_writel((irqcrx_high << 16) | irqcrx_low, IRQCR3);
464 416 __raw_writel((irqcry_high << 16) | irqcry_low, IRQCR4);
465 417 }
418 +
419 + static void sh7372_enter_a3sm_common(int pllc0_on)
420 + {
421 + sh7372_set_reset_vector(__pa(sh7372_resume_core_standby_sysc));
422 + sh7372_enter_sysc(pllc0_on, 1 << 12);
423 + }
424 +
425 + static void sh7372_enter_a4s_common(int pllc0_on)
426 + {
427 + sh7372_intca_suspend();
428 + memcpy((void *)SMFRAM, sh7372_resume_core_standby_sysc, 0x100);
429 + sh7372_set_reset_vector(SMFRAM);
430 + sh7372_enter_sysc(pllc0_on, 1 << 10);
431 + sh7372_intca_resume();
432 + }
433 +
466 434 #endif
467 435
468 436 #ifdef CONFIG_CPU_IDLE
··· 512 448 unsigned long msk, msk2;
513 449
514 450 /* check active clocks to determine potential wakeup sources */
515 - if (sh7372_a3sm_valid(&msk, &msk2)) {
516 -
451 + if (sh7372_sysc_valid(&msk, &msk2)) {
517 452 /* convert INTC mask and sense to SYSC mask and sense */
518 - sh7372_setup_a3sm(msk, msk2);
453 + sh7372_setup_sysc(msk, msk2);
519 454
520 - /* enter A3SM sleep with PLLC0 off */
521 - pr_debug("entering A3SM\n");
522 - sh7372_enter_a3sm_common(0);
455 + if (!console_suspend_enabled &&
456 + sh7372_a4s.genpd.status == GPD_STATE_POWER_OFF) {
457 + /* enter A4S sleep with PLLC0 off */
458 + pr_debug("entering A4S\n");
459 + sh7372_enter_a4s_common(0);
460 + } else {
461 + /* enter A3SM sleep with PLLC0 off */
462 + pr_debug("entering A3SM\n");
463 + sh7372_enter_a3sm_common(0);
464 + }
523 465 } else {
524 466 /* default to Core Standby that supports all wakeup sources */
525 467 pr_debug("entering Core Standby\n");
··· 534 464 return 0;
535 465 }
536 466
467 + /**
468 + * sh7372_pm_notifier_fn - SH7372 PM notifier routine.
469 + * @notifier: Unused.
470 + * @pm_event: Event being handled.
471 + * @unused: Unused.
472 + */
473 + static int sh7372_pm_notifier_fn(struct notifier_block *notifier,
474 + unsigned long pm_event, void *unused)
475 + {
476 + switch (pm_event) {
477 + case PM_SUSPEND_PREPARE:
478 + /*
479 + * This is necessary, because the A4R domain has to be "on"
480 + * when suspend_device_irqs() and resume_device_irqs() are
481 + * executed during system suspend and resume, respectively, so
482 + * that those functions don't crash while accessing the INTCS.
483 + */
484 + pm_genpd_poweron(&sh7372_a4r.genpd);
485 + break;
486 + case PM_POST_SUSPEND:
487 + pm_genpd_poweroff_unused();
488 + break;
489 + }
490 +
491 + return NOTIFY_DONE;
492 + }
493 +
537 494 static void sh7372_suspend_init(void)
538 495 {
539 496 shmobile_suspend_ops.enter = sh7372_enter_suspend;
497 + pm_notifier(sh7372_pm_notifier_fn, 0);
540 498 }
541 499 #else
542 500 static void sh7372_suspend_init(void) {}
··· 579 481
580 482 /* do not convert A3SM, A3SP, A3SG, A4R power down into A4S */
581 483 __raw_writel(0, PDNSEL);
582 -
583 - sh7372_a3sp_init();
584 484
585 485 sh7372_suspend_init();
586 486 sh7372_cpuidle_init();
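In the pm-sh7372.c rework above, the boolean stay_on flag is gone: a domain's ->suspend callback now returns an int, and a non-zero value (such as the -EBUSY from sh7372_a4s_suspend()) vetoes the power-down. A standalone sketch of that veto pattern, with all names illustrative rather than the real kernel types:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative stand-in for struct sh7372_pm_domain: the ->suspend hook
 * returns an int, and a non-zero return vetoes the power-down. */
struct pm_domain {
	const char *name;
	int powered;
	int (*suspend)(void);
};

/* Set while the domain's hardware is still needed (e.g. an active
 * serial console); the flag name is made up for this sketch. */
static int domain_busy;

static int busy_domain_suspend(void)
{
	return domain_busy ? -EBUSY : 0;
}

/* Mirrors the reworked pd_power_down(): run the optional ->suspend hook
 * first and abort on error, instead of consulting a stay_on flag. */
static int pd_power_down(struct pm_domain *pd)
{
	if (pd->suspend) {
		int ret = pd->suspend();

		if (ret)
			return ret;
	}
	pd->powered = 0;
	return 0;
}
```

Compared with the old stay_on flag, the callback can decide at power-down time, which is why the a3sp_init()-style setup hook could be dropped.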
+5 -1
arch/arm/mach-shmobile/setup-sh7372.c
··· 994 994 sh7372_init_pm_domain(&sh7372_a4r); 995 995 sh7372_init_pm_domain(&sh7372_a3rv); 996 996 sh7372_init_pm_domain(&sh7372_a3ri); 997 - sh7372_init_pm_domain(&sh7372_a3sg); 997 + sh7372_init_pm_domain(&sh7372_a4s); 998 998 sh7372_init_pm_domain(&sh7372_a3sp); 999 + sh7372_init_pm_domain(&sh7372_a3sg); 999 1000 1000 1001 sh7372_pm_add_subdomain(&sh7372_a4lc, &sh7372_a3rv); 1001 1002 sh7372_pm_add_subdomain(&sh7372_a4r, &sh7372_a4lc); 1003 + 1004 + sh7372_pm_add_subdomain(&sh7372_a4s, &sh7372_a3sg); 1005 + sh7372_pm_add_subdomain(&sh7372_a4s, &sh7372_a3sp); 1002 1006 1003 1007 platform_add_devices(sh7372_early_devices, 1004 1008 ARRAY_SIZE(sh7372_early_devices));
+11 -10
arch/arm/mach-shmobile/sleep-sh7372.S
··· 37 37 #if defined(CONFIG_SUSPEND) || defined(CONFIG_CPU_IDLE) 38 38 .align 12 39 39 .text 40 - .global sh7372_resume_core_standby_a3sm 41 - sh7372_resume_core_standby_a3sm: 40 + .global sh7372_resume_core_standby_sysc 41 + sh7372_resume_core_standby_sysc: 42 42 ldr pc, 1f 43 43 1: .long cpu_resume - PAGE_OFFSET + PLAT_PHYS_OFFSET 44 44 45 - .global sh7372_do_idle_a3sm 46 - sh7372_do_idle_a3sm: 45 + #define SPDCR 0xe6180008 46 + 47 + /* A3SM & A4S power down */ 48 + .global sh7372_do_idle_sysc 49 + sh7372_do_idle_sysc: 50 + mov r8, r0 /* sleep mode passed in r0 */ 51 + 47 52 /* 48 53 * Clear the SCTLR.C bit to prevent further data cache 49 54 * allocation. Clearing SCTLR.C would make all the data accesses ··· 85 80 dsb 86 81 dmb 87 82 88 - #define SPDCR 0xe6180008 89 - #define A3SM (1 << 12) 90 - 91 - /* A3SM power down */ 83 + /* SYSC power down */ 92 84 ldr r0, =SPDCR 93 - ldr r1, =A3SM 94 - str r1, [r0] 85 + str r8, [r0] 95 86 1: 96 87 b 1b 97 88
+6
arch/arm/plat-samsung/include/plat/pm.h
··· 22 22 #ifdef CONFIG_PM 23 23 24 24 extern __init int s3c_pm_init(void); 25 + extern __init int s3c64xx_pm_init(void); 25 26 26 27 #else 27 28 28 29 static inline int s3c_pm_init(void) 30 + { 31 + return 0; 32 + } 33 + 34 + static inline int s3c64xx_pm_init(void) 29 35 { 30 36 return 0; 31 37 }
-2
arch/avr32/include/asm/thread_info.h
··· 85 85 #define TIF_RESTORE_SIGMASK 7 /* restore signal mask in do_signal */ 86 86 #define TIF_CPU_GOING_TO_SLEEP 8 /* CPU is entering sleep 0 mode */ 87 87 #define TIF_NOTIFY_RESUME 9 /* callback before returning to user */ 88 - #define TIF_FREEZE 29 89 88 #define TIF_DEBUG 30 /* debugging enabled */ 90 89 #define TIF_USERSPACE 31 /* true if FS sets userspace */ 91 90 ··· 97 98 #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 98 99 #define _TIF_CPU_GOING_TO_SLEEP (1 << TIF_CPU_GOING_TO_SLEEP) 99 100 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 100 - #define _TIF_FREEZE (1 << TIF_FREEZE) 101 101 102 102 /* Note: The masks below must never span more than 16 bits! */ 103 103
-2
arch/blackfin/include/asm/thread_info.h
··· 100 100 TIF_NEED_RESCHED */ 101 101 #define TIF_MEMDIE 4 /* is terminating due to OOM killer */ 102 102 #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ 103 - #define TIF_FREEZE 6 /* is freezing for suspend */ 104 103 #define TIF_IRQ_SYNC 7 /* sync pipeline stage */ 105 104 #define TIF_NOTIFY_RESUME 8 /* callback before returning to user */ 106 105 #define TIF_SINGLESTEP 9 ··· 110 111 #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED) 111 112 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 112 113 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 113 - #define _TIF_FREEZE (1<<TIF_FREEZE) 114 114 #define _TIF_IRQ_SYNC (1<<TIF_IRQ_SYNC) 115 115 #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) 116 116 #define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
-2
arch/cris/include/asm/thread_info.h
··· 86 86 #define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */ 87 87 #define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 88 88 #define TIF_MEMDIE 17 /* is terminating due to OOM killer */ 89 - #define TIF_FREEZE 18 /* is freezing for suspend */ 90 89 91 90 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) 92 91 #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) ··· 93 94 #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED) 94 95 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 95 96 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 96 - #define _TIF_FREEZE (1<<TIF_FREEZE) 97 97 98 98 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 99 99 #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
-2
arch/frv/include/asm/thread_info.h
··· 111 111 #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ 112 112 #define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 113 113 #define TIF_MEMDIE 17 /* is terminating due to OOM killer */ 114 - #define TIF_FREEZE 18 /* freezing for suspend */ 115 114 116 115 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 117 116 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) ··· 119 120 #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) 120 121 #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 121 122 #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 122 - #define _TIF_FREEZE (1 << TIF_FREEZE) 123 123 124 124 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 125 125 #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
-2
arch/h8300/include/asm/thread_info.h
··· 90 90 #define TIF_MEMDIE 4 /* is terminating due to OOM killer */ 91 91 #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ 92 92 #define TIF_NOTIFY_RESUME 6 /* callback before returning to user */ 93 - #define TIF_FREEZE 16 /* is freezing for suspend */ 94 93 95 94 /* as above, but as bit values */ 96 95 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) ··· 98 99 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 99 100 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 100 101 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 101 - #define _TIF_FREEZE (1<<TIF_FREEZE) 102 102 103 103 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 104 104
-2
arch/ia64/include/asm/thread_info.h
··· 113 113 #define TIF_MEMDIE 17 /* is terminating due to OOM killer */ 114 114 #define TIF_MCA_INIT 18 /* this task is processing MCA or INIT */ 115 115 #define TIF_DB_DISABLED 19 /* debug trap disabled for fsyscall */ 116 - #define TIF_FREEZE 20 /* is freezing for suspend */ 117 116 #define TIF_RESTORE_RSE 21 /* user RBS is newer than kernel RBS */ 118 117 119 118 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) ··· 125 126 #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 126 127 #define _TIF_MCA_INIT (1 << TIF_MCA_INIT) 127 128 #define _TIF_DB_DISABLED (1 << TIF_DB_DISABLED) 128 - #define _TIF_FREEZE (1 << TIF_FREEZE) 129 129 #define _TIF_RESTORE_RSE (1 << TIF_RESTORE_RSE) 130 130 131 131 /* "work to do on user-return" bits */
-2
arch/m32r/include/asm/thread_info.h
··· 138 138 #define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */ 139 139 #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 140 140 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ 141 - #define TIF_FREEZE 19 /* is freezing for suspend */ 142 141 143 142 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) 144 143 #define _TIF_SIGPENDING (1<<TIF_SIGPENDING) ··· 148 149 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 149 150 #define _TIF_USEDFPU (1<<TIF_USEDFPU) 150 151 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 151 - #define _TIF_FREEZE (1<<TIF_FREEZE) 152 152 153 153 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 154 154 #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
-1
arch/m68k/include/asm/thread_info.h
··· 76 76 #define TIF_DELAYED_TRACE 14 /* single step a syscall */ 77 77 #define TIF_SYSCALL_TRACE 15 /* syscall trace active */ 78 78 #define TIF_MEMDIE 16 /* is terminating due to OOM killer */ 79 - #define TIF_FREEZE 17 /* thread is freezing for suspend */ 80 79 #define TIF_RESTORE_SIGMASK 18 /* restore signal mask in do_signal */ 81 80 82 81 #endif /* _ASM_M68K_THREAD_INFO_H */
-2
arch/microblaze/include/asm/thread_info.h
··· 125 125 #define TIF_MEMDIE 6 /* is terminating due to OOM killer */ 126 126 #define TIF_SYSCALL_AUDIT 9 /* syscall auditing active */ 127 127 #define TIF_SECCOMP 10 /* secure computing */ 128 - #define TIF_FREEZE 14 /* Freezing for suspend */ 129 128 130 129 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 131 130 #define TIF_POLLING_NRFLAG 16 ··· 136 137 #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) 137 138 #define _TIF_IRET (1 << TIF_IRET) 138 139 #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 139 - #define _TIF_FREEZE (1 << TIF_FREEZE) 140 140 #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) 141 141 #define _TIF_SECCOMP (1 << TIF_SECCOMP) 142 142
-2
arch/mips/include/asm/thread_info.h
··· 117 117 #define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */ 118 118 #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 119 119 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ 120 - #define TIF_FREEZE 19 121 120 #define TIF_FIXADE 20 /* Fix address errors in software */ 122 121 #define TIF_LOGADE 21 /* Log address errors to syslog */ 123 122 #define TIF_32BIT_REGS 22 /* also implies 16/32 fprs */ ··· 140 141 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 141 142 #define _TIF_USEDFPU (1<<TIF_USEDFPU) 142 143 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 143 - #define _TIF_FREEZE (1<<TIF_FREEZE) 144 144 #define _TIF_FIXADE (1<<TIF_FIXADE) 145 145 #define _TIF_LOGADE (1<<TIF_LOGADE) 146 146 #define _TIF_32BIT_REGS (1<<TIF_32BIT_REGS)
-2
arch/mn10300/include/asm/thread_info.h
··· 165 165 #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ 166 166 #define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 167 167 #define TIF_MEMDIE 17 /* is terminating due to OOM killer */ 168 - #define TIF_FREEZE 18 /* freezing for suspend */ 169 168 170 169 #define _TIF_SYSCALL_TRACE +(1 << TIF_SYSCALL_TRACE) 171 170 #define _TIF_NOTIFY_RESUME +(1 << TIF_NOTIFY_RESUME) ··· 173 174 #define _TIF_SINGLESTEP +(1 << TIF_SINGLESTEP) 174 175 #define _TIF_RESTORE_SIGMASK +(1 << TIF_RESTORE_SIGMASK) 175 176 #define _TIF_POLLING_NRFLAG +(1 << TIF_POLLING_NRFLAG) 176 - #define _TIF_FREEZE +(1 << TIF_FREEZE) 177 177 178 178 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 179 179 #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
-2
arch/parisc/include/asm/thread_info.h
··· 58 58 #define TIF_32BIT 4 /* 32 bit binary */ 59 59 #define TIF_MEMDIE 5 /* is terminating due to OOM killer */ 60 60 #define TIF_RESTORE_SIGMASK 6 /* restore saved signal mask */ 61 - #define TIF_FREEZE 7 /* is freezing for suspend */ 62 61 #define TIF_NOTIFY_RESUME 8 /* callback before returning to user */ 63 62 #define TIF_SINGLESTEP 9 /* single stepping? */ 64 63 #define TIF_BLOCKSTEP 10 /* branch stepping? */ ··· 68 69 #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 69 70 #define _TIF_32BIT (1 << TIF_32BIT) 70 71 #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 71 - #define _TIF_FREEZE (1 << TIF_FREEZE) 72 72 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 73 73 #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) 74 74 #define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
-2
arch/powerpc/include/asm/thread_info.h
··· 109 109 #define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */ 110 110 #define TIF_NOERROR 12 /* Force successful syscall return */ 111 111 #define TIF_NOTIFY_RESUME 13 /* callback before returning to user */ 112 - #define TIF_FREEZE 14 /* Freezing for suspend */ 113 112 #define TIF_SYSCALL_TRACEPOINT 15 /* syscall tracepoint instrumentation */ 114 113 #define TIF_RUNLATCH 16 /* Is the runlatch enabled? */ 115 114 ··· 126 127 #define _TIF_RESTOREALL (1<<TIF_RESTOREALL) 127 128 #define _TIF_NOERROR (1<<TIF_NOERROR) 128 129 #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) 129 - #define _TIF_FREEZE (1<<TIF_FREEZE) 130 130 #define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT) 131 131 #define _TIF_RUNLATCH (1<<TIF_RUNLATCH) 132 132 #define _TIF_SYSCALL_T_OR_A (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
-1
arch/powerpc/kernel/vio.c
··· 1406 1406 .match = vio_bus_match, 1407 1407 .probe = vio_bus_probe, 1408 1408 .remove = vio_bus_remove, 1409 - .pm = GENERIC_SUBSYS_PM_OPS, 1410 1409 }; 1411 1410 1412 1411 /**
-2
arch/s390/include/asm/thread_info.h
··· 102 102 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ 103 103 #define TIF_RESTORE_SIGMASK 19 /* restore signal mask in do_signal() */ 104 104 #define TIF_SINGLE_STEP 20 /* This task is single stepped */ 105 - #define TIF_FREEZE 21 /* thread is freezing for suspend */ 106 105 107 106 #define _TIF_SYSCALL (1<<TIF_SYSCALL) 108 107 #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) ··· 118 119 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 119 120 #define _TIF_31BIT (1<<TIF_31BIT) 120 121 #define _TIF_SINGLE_STEP (1<<TIF_SINGLE_STEP) 121 - #define _TIF_FREEZE (1<<TIF_FREEZE) 122 122 123 123 #ifdef CONFIG_64BIT 124 124 #define is_32bit_task() (test_thread_flag(TIF_31BIT))
-2
arch/sh/include/asm/thread_info.h
··· 122 122 #define TIF_SYSCALL_TRACEPOINT 8 /* for ftrace syscall instrumentation */ 123 123 #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 124 124 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ 125 - #define TIF_FREEZE 19 /* Freezing for suspend */ 126 125 127 126 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 128 127 #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) ··· 132 133 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 133 134 #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT) 134 135 #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 135 - #define _TIF_FREEZE (1 << TIF_FREEZE) 136 136 137 137 /* 138 138 * _TIF_ALLWORK_MASK and _TIF_WORK_MASK need to fit within 2 bytes, or we
-2
arch/sparc/include/asm/thread_info_32.h
··· 133 133 #define TIF_POLLING_NRFLAG 9 /* true if poll_idle() is polling 134 134 * TIF_NEED_RESCHED */ 135 135 #define TIF_MEMDIE 10 /* is terminating due to OOM killer */ 136 - #define TIF_FREEZE 11 /* is freezing for suspend */ 137 136 138 137 /* as above, but as bit values */ 139 138 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) ··· 146 147 #define _TIF_DO_NOTIFY_RESUME_MASK (_TIF_NOTIFY_RESUME | \ 147 148 _TIF_SIGPENDING | \ 148 149 _TIF_RESTORE_SIGMASK) 149 - #define _TIF_FREEZE (1<<TIF_FREEZE) 150 150 151 151 #endif /* __KERNEL__ */ 152 152
-2
arch/sparc/include/asm/thread_info_64.h
··· 225 225 /* flag bit 12 is available */ 226 226 #define TIF_MEMDIE 13 /* is terminating due to OOM killer */ 227 227 #define TIF_POLLING_NRFLAG 14 228 - #define TIF_FREEZE 15 /* is freezing for suspend */ 229 228 230 229 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) 231 230 #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) ··· 236 237 #define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT) 237 238 #define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT) 238 239 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 239 - #define _TIF_FREEZE (1<<TIF_FREEZE) 240 240 241 241 #define _TIF_USER_WORK_MASK ((0xff << TI_FLAG_WSAVED_SHIFT) | \ 242 242 _TIF_DO_NOTIFY_RESUME_MASK | \
-2
arch/um/include/asm/thread_info.h
··· 71 71 #define TIF_MEMDIE 5 /* is terminating due to OOM killer */ 72 72 #define TIF_SYSCALL_AUDIT 6 73 73 #define TIF_RESTORE_SIGMASK 7 74 - #define TIF_FREEZE 16 /* is freezing for suspend */ 75 74 76 75 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 77 76 #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) ··· 79 80 #define _TIF_MEMDIE (1 << TIF_MEMDIE) 80 81 #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) 81 82 #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 82 - #define _TIF_FREEZE (1 << TIF_FREEZE) 83 83 84 84 #endif
-2
arch/unicore32/include/asm/thread_info.h
··· 135 135 #define TIF_NOTIFY_RESUME 2 /* callback before returning to user */ 136 136 #define TIF_SYSCALL_TRACE 8 137 137 #define TIF_MEMDIE 18 138 - #define TIF_FREEZE 19 139 138 #define TIF_RESTORE_SIGMASK 20 140 139 141 140 #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) 142 141 #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) 143 142 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 144 143 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 145 - #define _TIF_FREEZE (1 << TIF_FREEZE) 146 144 #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 147 145 148 146 /*
-2
arch/x86/include/asm/thread_info.h
··· 91 91 #define TIF_MEMDIE 20 /* is terminating due to OOM killer */ 92 92 #define TIF_DEBUG 21 /* uses debug registers */ 93 93 #define TIF_IO_BITMAP 22 /* uses I/O bitmap */ 94 - #define TIF_FREEZE 23 /* is freezing for suspend */ 95 94 #define TIF_FORCED_TF 24 /* true if TF in eflags artificially */ 96 95 #define TIF_BLOCKSTEP 25 /* set when we want DEBUGCTLMSR_BTF */ 97 96 #define TIF_LAZY_MMU_UPDATES 27 /* task is updating the mmu lazily */ ··· 112 113 #define _TIF_FORK (1 << TIF_FORK) 113 114 #define _TIF_DEBUG (1 << TIF_DEBUG) 114 115 #define _TIF_IO_BITMAP (1 << TIF_IO_BITMAP) 115 - #define _TIF_FREEZE (1 << TIF_FREEZE) 116 116 #define _TIF_FORCED_TF (1 << TIF_FORCED_TF) 117 117 #define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP) 118 118 #define _TIF_LAZY_MMU_UPDATES (1 << TIF_LAZY_MMU_UPDATES)
-2
arch/xtensa/include/asm/thread_info.h
··· 132 132 #define TIF_MEMDIE 5 /* is terminating due to OOM killer */ 133 133 #define TIF_RESTORE_SIGMASK 6 /* restore signal mask in do_signal() */ 134 134 #define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 135 - #define TIF_FREEZE 17 /* is freezing for suspend */ 136 135 137 136 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) 138 137 #define _TIF_SIGPENDING (1<<TIF_SIGPENDING) ··· 140 141 #define _TIF_IRET (1<<TIF_IRET) 141 142 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 142 143 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 143 - #define _TIF_FREEZE (1<<TIF_FREEZE) 144 144 145 145 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 146 146 #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
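All of the thread_info.h hunks above delete TIF_FREEZE the same way because every architecture follows one convention: a TIF_* bit index per condition plus a derived _TIF_* mask OR-ed into the per-task flags word, with the freezer series in this merge no longer needing a dedicated bit. A condensed sketch of that convention with made-up indices:

```c
#include <assert.h>

/* One bit index per condition, one derived mask, as in the headers above
 * (the indices here are illustrative, not any architecture's real ones). */
#define TIF_SIGPENDING		1	/* signal pending */
#define TIF_NEED_RESCHED	2	/* rescheduling necessary */

#define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)

/* work to do on return to user space, as in the headers above */
#define _TIF_WORK_MASK		0x0000FFFE

static unsigned long task_flags;

static void set_ti_flag(int flag)
{
	task_flags |= 1UL << flag;
}

static void clear_ti_flag(int flag)
{
	task_flags &= ~(1UL << flag);
}

static int work_pending(void)
{
	return (task_flags & _TIF_WORK_MASK) != 0;
}
```

Dropping a flag is therefore purely mechanical (index plus mask), which is why the removal touches so many architectures identically.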
+16
drivers/acpi/sleep.c
··· 476 476 DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FW520F"), 477 477 }, 478 478 }, 479 + { 480 + .callback = init_nvs_nosave, 481 + .ident = "Asus K54C", 482 + .matches = { 483 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."), 484 + DMI_MATCH(DMI_PRODUCT_NAME, "K54C"), 485 + }, 486 + }, 487 + { 488 + .callback = init_nvs_nosave, 489 + .ident = "Asus K54HR", 490 + .matches = { 491 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."), 492 + DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"), 493 + }, 494 + }, 479 495 {}, 480 496 }; 481 497 #endif /* CONFIG_SUSPEND */
+1 -135
drivers/amba/bus.c
··· 113 113 return ret;
114 114 }
115 115
116 - static int amba_pm_prepare(struct device *dev)
117 - {
118 - struct device_driver *drv = dev->driver;
119 - int ret = 0;
120 -
121 - if (drv && drv->pm && drv->pm->prepare)
122 - ret = drv->pm->prepare(dev);
123 -
124 - return ret;
125 - }
126 -
127 - static void amba_pm_complete(struct device *dev)
128 - {
129 - struct device_driver *drv = dev->driver;
130 -
131 - if (drv && drv->pm && drv->pm->complete)
132 - drv->pm->complete(dev);
133 - }
134 -
135 - #else /* !CONFIG_PM_SLEEP */
136 -
137 - #define amba_pm_prepare NULL
138 - #define amba_pm_complete NULL
139 -
140 - #endif /* !CONFIG_PM_SLEEP */
116 + #endif /* CONFIG_PM_SLEEP */
141 117
142 118 #ifdef CONFIG_SUSPEND
143 119
··· 130 154 ret = drv->pm->suspend(dev);
131 155 } else {
132 156 ret = amba_legacy_suspend(dev, PMSG_SUSPEND);
133 - }
134 -
135 - return ret;
136 - }
137 -
138 - static int amba_pm_suspend_noirq(struct device *dev)
139 - {
140 - struct device_driver *drv = dev->driver;
141 - int ret = 0;
142 -
143 - if (!drv)
144 - return 0;
145 -
146 - if (drv->pm) {
147 - if (drv->pm->suspend_noirq)
148 - ret = drv->pm->suspend_noirq(dev);
149 157 }
150 158
151 159 return ret;
··· 153 193 return ret;
154 194 }
155 195
156 - static int amba_pm_resume_noirq(struct device *dev)
157 - {
158 - struct device_driver *drv = dev->driver;
159 - int ret = 0;
160 -
161 - if (!drv)
162 - return 0;
163 -
164 - if (drv->pm) {
165 - if (drv->pm->resume_noirq)
166 - ret = drv->pm->resume_noirq(dev);
167 - }
168 -
169 - return ret;
170 - }
171 -
172 196 #else /* !CONFIG_SUSPEND */
173 197
174 198 #define amba_pm_suspend NULL
175 199 #define amba_pm_resume NULL
176 - #define amba_pm_suspend_noirq NULL
177 - #define amba_pm_resume_noirq NULL
178 200
179 201 #endif /* !CONFIG_SUSPEND */
180 202
··· 180 238 return ret;
181 239 }
182 240
183 - static int amba_pm_freeze_noirq(struct device *dev)
184 - {
185 - struct device_driver *drv = dev->driver;
186 - int ret = 0;
187 -
188 - if (!drv)
189 - return 0;
190 -
191 - if (drv->pm) {
192 - if (drv->pm->freeze_noirq)
193 - ret = drv->pm->freeze_noirq(dev);
194 - }
195 -
196 - return ret;
197 - }
198 -
199 241 static int amba_pm_thaw(struct device *dev)
200 242 {
201 243 struct device_driver *drv = dev->driver;
··· 193 267 ret = drv->pm->thaw(dev);
194 268 } else {
195 269 ret = amba_legacy_resume(dev);
196 - }
197 -
198 - return ret;
199 - }
200 -
201 - static int amba_pm_thaw_noirq(struct device *dev)
202 - {
203 - struct device_driver *drv = dev->driver;
204 - int ret = 0;
205 -
206 - if (!drv)
207 - return 0;
208 -
209 - if (drv->pm) {
210 - if (drv->pm->thaw_noirq)
211 - ret = drv->pm->thaw_noirq(dev);
212 270 }
213 271
214 272 return ret;
··· 216 306 return ret;
217 307 }
218 308
219 - static int amba_pm_poweroff_noirq(struct device *dev)
220 - {
221 - struct device_driver *drv = dev->driver;
222 - int ret = 0;
223 -
224 - if (!drv)
225 - return 0;
226 -
227 - if (drv->pm) {
228 - if (drv->pm->poweroff_noirq)
229 - ret = drv->pm->poweroff_noirq(dev);
230 - }
231 -
232 - return ret;
233 - }
234 -
235 309 static int amba_pm_restore(struct device *dev)
236 310 {
237 311 struct device_driver *drv = dev->driver;
··· 234 340 return ret;
235 341 }
236 342
237 - static int amba_pm_restore_noirq(struct device *dev)
238 - {
239 - struct device_driver *drv = dev->driver;
240 - int ret = 0;
241 -
242 - if (!drv)
243 - return 0;
244 -
245 - if (drv->pm) {
246 - if (drv->pm->restore_noirq)
247 - ret = drv->pm->restore_noirq(dev);
248 - }
249 -
250 - return ret;
251 - }
252 -
253 343 #else /* !CONFIG_HIBERNATE_CALLBACKS */
254 344
255 345 #define amba_pm_freeze NULL
256 346 #define amba_pm_thaw NULL
257 347 #define amba_pm_poweroff NULL
258 348 #define amba_pm_restore NULL
259 - #define amba_pm_freeze_noirq NULL
260 - #define amba_pm_thaw_noirq NULL
261 - #define amba_pm_poweroff_noirq NULL
262 - #define amba_pm_restore_noirq NULL
263 349
264 350 #endif /* !CONFIG_HIBERNATE_CALLBACKS */
265 351
··· 280 406 #ifdef CONFIG_PM
281 407
282 408 static const struct dev_pm_ops amba_pm = {
283 - .prepare = amba_pm_prepare,
284 - .complete = amba_pm_complete,
285 409 .suspend = amba_pm_suspend,
286 410 .resume = amba_pm_resume,
287 411 .freeze = amba_pm_freeze,
288 412 .thaw = amba_pm_thaw,
289 413 .poweroff = amba_pm_poweroff,
290 414 .restore = amba_pm_restore,
291 - .suspend_noirq = amba_pm_suspend_noirq,
292 - .resume_noirq = amba_pm_resume_noirq,
293 - .freeze_noirq = amba_pm_freeze_noirq,
294 - .thaw_noirq = amba_pm_thaw_noirq,
295 - .poweroff_noirq = amba_pm_poweroff_noirq,
296 - .restore_noirq = amba_pm_restore_noirq,
297 415 SET_RUNTIME_PM_OPS(
298 416 amba_pm_runtime_suspend,
299 417 amba_pm_runtime_resume,
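The amba_pm_*_noirq(), prepare and complete wrappers deleted above did nothing but forward to the driver's own dev_pm_ops; with this merge's "PM: Run the driver callback directly if the subsystem one is not there", leaving the bus-level pointer NULL now yields the same behavior. A minimal sketch of that dispatch order, using illustrative types rather than the real PM core ones:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative device model: both the bus type and the driver may
 * provide a suspend callback. */
struct dev_pm_ops_sketch {
	int (*suspend)(void *dev);
};

struct device_sketch {
	const struct dev_pm_ops_sketch *bus_pm;	/* subsystem callbacks */
	const struct dev_pm_ops_sketch *drv_pm;	/* driver callbacks */
};

static int driver_suspend(void *dev)
{
	(void)dev;
	return 42;	/* marker so the caller can see who ran */
}

/* Fallback dispatch: prefer the subsystem callback, otherwise run the
 * driver callback directly, otherwise do nothing. */
static int pm_op(struct device_sketch *dev)
{
	int (*cb)(void *) = NULL;

	if (dev->bus_pm && dev->bus_pm->suspend)
		cb = dev->bus_pm->suspend;
	else if (dev->drv_pm && dev->drv_pm->suspend)
		cb = dev->drv_pm->suspend;

	return cb ? cb(dev) : 0;
}
```

With the fallback in the core, per-bus forward-only wrappers become dead weight, which is what the amba and platform bus deletions in this merge exploit.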
+4
drivers/base/firmware_class.c
··· 534 534 return 0; 535 535 } 536 536 537 + read_lock_usermodehelper(); 538 + 537 539 if (WARN_ON(usermodehelper_is_disabled())) { 538 540 dev_err(device, "firmware: %s will not be loaded\n", name); 539 541 retval = -EBUSY; ··· 574 572 fw_destroy_instance(fw_priv); 575 573 576 574 out: 575 + read_unlock_usermodehelper(); 576 + 577 577 if (retval) { 578 578 release_firmware(firmware); 579 579 *firmware_p = NULL;
-115
drivers/base/platform.c
··· 700 700 return ret; 701 701 } 702 702 703 - int platform_pm_prepare(struct device *dev) 704 - { 705 - struct device_driver *drv = dev->driver; 706 - int ret = 0; 707 - 708 - if (drv && drv->pm && drv->pm->prepare) 709 - ret = drv->pm->prepare(dev); 710 - 711 - return ret; 712 - } 713 - 714 - void platform_pm_complete(struct device *dev) 715 - { 716 - struct device_driver *drv = dev->driver; 717 - 718 - if (drv && drv->pm && drv->pm->complete) 719 - drv->pm->complete(dev); 720 - } 721 - 722 703 #endif /* CONFIG_PM_SLEEP */ 723 704 724 705 #ifdef CONFIG_SUSPEND ··· 722 741 return ret; 723 742 } 724 743 725 - int platform_pm_suspend_noirq(struct device *dev) 726 - { 727 - struct device_driver *drv = dev->driver; 728 - int ret = 0; 729 - 730 - if (!drv) 731 - return 0; 732 - 733 - if (drv->pm) { 734 - if (drv->pm->suspend_noirq) 735 - ret = drv->pm->suspend_noirq(dev); 736 - } 737 - 738 - return ret; 739 - } 740 - 741 744 int platform_pm_resume(struct device *dev) 742 745 { 743 746 struct device_driver *drv = dev->driver; ··· 735 770 ret = drv->pm->resume(dev); 736 771 } else { 737 772 ret = platform_legacy_resume(dev); 738 - } 739 - 740 - return ret; 741 - } 742 - 743 - int platform_pm_resume_noirq(struct device *dev) 744 - { 745 - struct device_driver *drv = dev->driver; 746 - int ret = 0; 747 - 748 - if (!drv) 749 - return 0; 750 - 751 - if (drv->pm) { 752 - if (drv->pm->resume_noirq) 753 - ret = drv->pm->resume_noirq(dev); 754 773 } 755 774 756 775 return ret; ··· 762 813 return ret; 763 814 } 764 815 765 - int platform_pm_freeze_noirq(struct device *dev) 766 - { 767 - struct device_driver *drv = dev->driver; 768 - int ret = 0; 769 - 770 - if (!drv) 771 - return 0; 772 - 773 - if (drv->pm) { 774 - if (drv->pm->freeze_noirq) 775 - ret = drv->pm->freeze_noirq(dev); 776 - } 777 - 778 - return ret; 779 - } 780 - 781 816 int platform_pm_thaw(struct device *dev) 782 817 { 783 818 struct device_driver *drv = dev->driver; ··· 775 842 ret = drv->pm->thaw(dev); 776 843 } 
else { 777 844 ret = platform_legacy_resume(dev); 778 - } 779 - 780 - return ret; 781 - } 782 - 783 - int platform_pm_thaw_noirq(struct device *dev) 784 - { 785 - struct device_driver *drv = dev->driver; 786 - int ret = 0; 787 - 788 - if (!drv) 789 - return 0; 790 - 791 - if (drv->pm) { 792 - if (drv->pm->thaw_noirq) 793 - ret = drv->pm->thaw_noirq(dev); 794 845 } 795 846 796 847 return ret; ··· 798 881 return ret; 799 882 } 800 883 801 - int platform_pm_poweroff_noirq(struct device *dev) 802 - { 803 - struct device_driver *drv = dev->driver; 804 - int ret = 0; 805 - 806 - if (!drv) 807 - return 0; 808 - 809 - if (drv->pm) { 810 - if (drv->pm->poweroff_noirq) 811 - ret = drv->pm->poweroff_noirq(dev); 812 - } 813 - 814 - return ret; 815 - } 816 - 817 884 int platform_pm_restore(struct device *dev) 818 885 { 819 886 struct device_driver *drv = dev->driver; ··· 811 910 ret = drv->pm->restore(dev); 812 911 } else { 813 912 ret = platform_legacy_resume(dev); 814 - } 815 - 816 - return ret; 817 - } 818 - 819 - int platform_pm_restore_noirq(struct device *dev) 820 - { 821 - struct device_driver *drv = dev->driver; 822 - int ret = 0; 823 - 824 - if (!drv) 825 - return 0; 826 - 827 - if (drv->pm) { 828 - if (drv->pm->restore_noirq) 829 - ret = drv->pm->restore_noirq(dev); 830 913 } 831 914 832 915 return ret;
+1 -1
drivers/base/power/Makefile
··· 3 3 obj-$(CONFIG_PM_RUNTIME) += runtime.o 4 4 obj-$(CONFIG_PM_TRACE_RTC) += trace.o 5 5 obj-$(CONFIG_PM_OPP) += opp.o 6 - obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o 6 + obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o 7 7 obj-$(CONFIG_HAVE_CLK) += clock_ops.o 8 8 9 9 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
+394 -145
drivers/base/power/domain.c
··· 15 15 #include <linux/err.h> 16 16 #include <linux/sched.h> 17 17 #include <linux/suspend.h> 18 + #include <linux/export.h> 19 + 20 + #define GENPD_DEV_CALLBACK(genpd, type, callback, dev) \ 21 + ({ \ 22 + type (*__routine)(struct device *__d); \ 23 + type __ret = (type)0; \ 24 + \ 25 + __routine = genpd->dev_ops.callback; \ 26 + if (__routine) { \ 27 + __ret = __routine(dev); \ 28 + } else { \ 29 + __routine = dev_gpd_data(dev)->ops.callback; \ 30 + if (__routine) \ 31 + __ret = __routine(dev); \ 32 + } \ 33 + __ret; \ 34 + }) 35 + 36 + #define GENPD_DEV_TIMED_CALLBACK(genpd, type, callback, dev, field, name) \ 37 + ({ \ 38 + ktime_t __start = ktime_get(); \ 39 + type __retval = GENPD_DEV_CALLBACK(genpd, type, callback, dev); \ 40 + s64 __elapsed = ktime_to_ns(ktime_sub(ktime_get(), __start)); \ 41 + struct generic_pm_domain_data *__gpd_data = dev_gpd_data(dev); \ 42 + if (__elapsed > __gpd_data->td.field) { \ 43 + __gpd_data->td.field = __elapsed; \ 44 + dev_warn(dev, name " latency exceeded, new value %lld ns\n", \ 45 + __elapsed); \ 46 + } \ 47 + __retval; \ 48 + }) 18 49 19 50 static LIST_HEAD(gpd_list); 20 51 static DEFINE_MUTEX(gpd_list_lock); 21 52 22 53 #ifdef CONFIG_PM 23 54 24 - static struct generic_pm_domain *dev_to_genpd(struct device *dev) 55 + struct generic_pm_domain *dev_to_genpd(struct device *dev) 25 56 { 26 57 if (IS_ERR_OR_NULL(dev->pm_domain)) 27 58 return ERR_PTR(-EINVAL); 28 59 29 60 return pd_to_genpd(dev->pm_domain); 61 + } 62 + 63 + static int genpd_stop_dev(struct generic_pm_domain *genpd, struct device *dev) 64 + { 65 + return GENPD_DEV_TIMED_CALLBACK(genpd, int, stop, dev, 66 + stop_latency_ns, "stop"); 67 + } 68 + 69 + static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev) 70 + { 71 + return GENPD_DEV_TIMED_CALLBACK(genpd, int, start, dev, 72 + start_latency_ns, "start"); 73 + } 74 + 75 + static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev) 76 + { 77 + return 
GENPD_DEV_TIMED_CALLBACK(genpd, int, save_state, dev, 78 + save_state_latency_ns, "state save"); 79 + } 80 + 81 + static int genpd_restore_dev(struct generic_pm_domain *genpd, struct device *dev) 82 + { 83 + return GENPD_DEV_TIMED_CALLBACK(genpd, int, restore_state, dev, 84 + restore_state_latency_ns, 85 + "state restore"); 30 86 } 31 87 32 88 static bool genpd_sd_counter_dec(struct generic_pm_domain *genpd) ··· 201 145 } 202 146 203 147 if (genpd->power_on) { 148 + ktime_t time_start = ktime_get(); 149 + s64 elapsed_ns; 150 + 204 151 ret = genpd->power_on(genpd); 205 152 if (ret) 206 153 goto err; 154 + 155 + elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start)); 156 + if (elapsed_ns > genpd->power_on_latency_ns) { 157 + genpd->power_on_latency_ns = elapsed_ns; 158 + if (genpd->name) 159 + pr_warning("%s: Power-on latency exceeded, " 160 + "new value %lld ns\n", genpd->name, 161 + elapsed_ns); 162 + } 207 163 } 208 164 209 165 genpd_set_active(genpd); ··· 258 190 { 259 191 struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd); 260 192 struct device *dev = pdd->dev; 261 - struct device_driver *drv = dev->driver; 262 193 int ret = 0; 263 194 264 195 if (gpd_data->need_restore) ··· 265 198 266 199 mutex_unlock(&genpd->lock); 267 200 268 - if (drv && drv->pm && drv->pm->runtime_suspend) { 269 - if (genpd->start_device) 270 - genpd->start_device(dev); 271 - 272 - ret = drv->pm->runtime_suspend(dev); 273 - 274 - if (genpd->stop_device) 275 - genpd->stop_device(dev); 276 - } 201 + genpd_start_dev(genpd, dev); 202 + ret = genpd_save_dev(genpd, dev); 203 + genpd_stop_dev(genpd, dev); 277 204 278 205 mutex_lock(&genpd->lock); 279 206 ··· 288 227 { 289 228 struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd); 290 229 struct device *dev = pdd->dev; 291 - struct device_driver *drv = dev->driver; 292 230 293 231 if (!gpd_data->need_restore) 294 232 return; 295 233 296 234 mutex_unlock(&genpd->lock); 297 235 298 - if (drv && drv->pm && drv->pm->runtime_resume) 
{ 299 - if (genpd->start_device) 300 - genpd->start_device(dev); 301 - 302 - drv->pm->runtime_resume(dev); 303 - 304 - if (genpd->stop_device) 305 - genpd->stop_device(dev); 306 - } 236 + genpd_start_dev(genpd, dev); 237 + genpd_restore_dev(genpd, dev); 238 + genpd_stop_dev(genpd, dev); 307 239 308 240 mutex_lock(&genpd->lock); 309 241 ··· 408 354 } 409 355 410 356 if (genpd->power_off) { 357 + ktime_t time_start; 358 + s64 elapsed_ns; 359 + 411 360 if (atomic_read(&genpd->sd_count) > 0) { 412 361 ret = -EBUSY; 413 362 goto out; 414 363 } 364 + 365 + time_start = ktime_get(); 415 366 416 367 /* 417 368 * If sd_count > 0 at this point, one of the subdomains hasn't ··· 431 372 genpd_set_active(genpd); 432 373 goto out; 433 374 } 375 + 376 + elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start)); 377 + if (elapsed_ns > genpd->power_off_latency_ns) { 378 + genpd->power_off_latency_ns = elapsed_ns; 379 + if (genpd->name) 380 + pr_warning("%s: Power-off latency exceeded, " 381 + "new value %lld ns\n", genpd->name, 382 + elapsed_ns); 383 + } 434 384 } 435 385 436 386 genpd->status = GPD_STATE_POWER_OFF; 387 + genpd->power_off_time = ktime_get(); 388 + 389 + /* Update PM QoS information for devices in the domain. 
*/ 390 + list_for_each_entry_reverse(pdd, &genpd->dev_list, list_node) { 391 + struct gpd_timing_data *td = &to_gpd_data(pdd)->td; 392 + 393 + pm_runtime_update_max_time_suspended(pdd->dev, 394 + td->start_latency_ns + 395 + td->restore_state_latency_ns + 396 + genpd->power_on_latency_ns); 397 + } 437 398 438 399 list_for_each_entry(link, &genpd->slave_links, slave_node) { 439 400 genpd_sd_counter_dec(link->master); ··· 492 413 static int pm_genpd_runtime_suspend(struct device *dev) 493 414 { 494 415 struct generic_pm_domain *genpd; 416 + bool (*stop_ok)(struct device *__dev); 417 + int ret; 495 418 496 419 dev_dbg(dev, "%s()\n", __func__); 497 420 ··· 503 422 504 423 might_sleep_if(!genpd->dev_irq_safe); 505 424 506 - if (genpd->stop_device) { 507 - int ret = genpd->stop_device(dev); 508 - if (ret) 509 - return ret; 510 - } 425 + stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL; 426 + if (stop_ok && !stop_ok(dev)) 427 + return -EBUSY; 428 + 429 + ret = genpd_stop_dev(genpd, dev); 430 + if (ret) 431 + return ret; 432 + 433 + pm_runtime_update_max_time_suspended(dev, 434 + dev_gpd_data(dev)->td.start_latency_ns); 511 435 512 436 /* 513 437 * If power.irq_safe is set, this routine will be run with interrupts ··· 588 502 mutex_unlock(&genpd->lock); 589 503 590 504 out: 591 - if (genpd->start_device) 592 - genpd->start_device(dev); 505 + genpd_start_dev(genpd, dev); 593 506 594 507 return 0; 595 508 } ··· 618 533 #endif /* CONFIG_PM_RUNTIME */ 619 534 620 535 #ifdef CONFIG_PM_SLEEP 536 + 537 + static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd, 538 + struct device *dev) 539 + { 540 + return GENPD_DEV_CALLBACK(genpd, bool, active_wakeup, dev); 541 + } 542 + 543 + static int genpd_suspend_dev(struct generic_pm_domain *genpd, struct device *dev) 544 + { 545 + return GENPD_DEV_CALLBACK(genpd, int, suspend, dev); 546 + } 547 + 548 + static int genpd_suspend_late(struct generic_pm_domain *genpd, struct device *dev) 549 + { 550 + return 
GENPD_DEV_CALLBACK(genpd, int, suspend_late, dev); 551 + } 552 + 553 + static int genpd_resume_early(struct generic_pm_domain *genpd, struct device *dev) 554 + { 555 + return GENPD_DEV_CALLBACK(genpd, int, resume_early, dev); 556 + } 557 + 558 + static int genpd_resume_dev(struct generic_pm_domain *genpd, struct device *dev) 559 + { 560 + return GENPD_DEV_CALLBACK(genpd, int, resume, dev); 561 + } 562 + 563 + static int genpd_freeze_dev(struct generic_pm_domain *genpd, struct device *dev) 564 + { 565 + return GENPD_DEV_CALLBACK(genpd, int, freeze, dev); 566 + } 567 + 568 + static int genpd_freeze_late(struct generic_pm_domain *genpd, struct device *dev) 569 + { 570 + return GENPD_DEV_CALLBACK(genpd, int, freeze_late, dev); 571 + } 572 + 573 + static int genpd_thaw_early(struct generic_pm_domain *genpd, struct device *dev) 574 + { 575 + return GENPD_DEV_CALLBACK(genpd, int, thaw_early, dev); 576 + } 577 + 578 + static int genpd_thaw_dev(struct generic_pm_domain *genpd, struct device *dev) 579 + { 580 + return GENPD_DEV_CALLBACK(genpd, int, thaw, dev); 581 + } 621 582 622 583 /** 623 584 * pm_genpd_sync_poweroff - Synchronously power off a PM domain and its masters. ··· 721 590 if (!device_can_wakeup(dev)) 722 591 return false; 723 592 724 - active_wakeup = genpd->active_wakeup && genpd->active_wakeup(dev); 593 + active_wakeup = genpd_dev_active_wakeup(genpd, dev); 725 594 return device_may_wakeup(dev) ? active_wakeup : !active_wakeup; 726 595 } 727 596 ··· 777 646 /* 778 647 * The PM domain must be in the GPD_STATE_ACTIVE state at this point, 779 648 * so pm_genpd_poweron() will return immediately, but if the device 780 - * is suspended (e.g. it's been stopped by .stop_device()), we need 649 + * is suspended (e.g. it's been stopped by genpd_stop_dev()), we need 781 650 * to make it operational. 782 651 */ 783 652 pm_runtime_resume(dev); ··· 816 685 if (IS_ERR(genpd)) 817 686 return -EINVAL; 818 687 819 - return genpd->suspend_power_off ? 
0 : pm_generic_suspend(dev); 688 + return genpd->suspend_power_off ? 0 : genpd_suspend_dev(genpd, dev); 820 689 } 821 690 822 691 /** ··· 841 710 if (genpd->suspend_power_off) 842 711 return 0; 843 712 844 - ret = pm_generic_suspend_noirq(dev); 713 + ret = genpd_suspend_late(genpd, dev); 845 714 if (ret) 846 715 return ret; 847 716 848 - if (dev->power.wakeup_path 849 - && genpd->active_wakeup && genpd->active_wakeup(dev)) 717 + if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)) 850 718 return 0; 851 719 852 - if (genpd->stop_device) 853 - genpd->stop_device(dev); 720 + genpd_stop_dev(genpd, dev); 854 721 855 722 /* 856 723 * Since all of the "noirq" callbacks are executed sequentially, it is ··· 890 761 */ 891 762 pm_genpd_poweron(genpd); 892 763 genpd->suspended_count--; 893 - if (genpd->start_device) 894 - genpd->start_device(dev); 764 + genpd_start_dev(genpd, dev); 895 765 896 - return pm_generic_resume_noirq(dev); 766 + return genpd_resume_early(genpd, dev); 897 767 } 898 768 899 769 /** ··· 913 785 if (IS_ERR(genpd)) 914 786 return -EINVAL; 915 787 916 - return genpd->suspend_power_off ? 0 : pm_generic_resume(dev); 788 + return genpd->suspend_power_off ? 0 : genpd_resume_dev(genpd, dev); 917 789 } 918 790 919 791 /** ··· 934 806 if (IS_ERR(genpd)) 935 807 return -EINVAL; 936 808 937 - return genpd->suspend_power_off ? 0 : pm_generic_freeze(dev); 809 + return genpd->suspend_power_off ? 
0 : genpd_freeze_dev(genpd, dev); 938 810 } 939 811 940 812 /** ··· 960 832 if (genpd->suspend_power_off) 961 833 return 0; 962 834 963 - ret = pm_generic_freeze_noirq(dev); 835 + ret = genpd_freeze_late(genpd, dev); 964 836 if (ret) 965 837 return ret; 966 838 967 - if (genpd->stop_device) 968 - genpd->stop_device(dev); 839 + genpd_stop_dev(genpd, dev); 969 840 970 841 return 0; 971 842 } ··· 991 864 if (genpd->suspend_power_off) 992 865 return 0; 993 866 994 - if (genpd->start_device) 995 - genpd->start_device(dev); 867 + genpd_start_dev(genpd, dev); 996 868 997 - return pm_generic_thaw_noirq(dev); 869 + return genpd_thaw_early(genpd, dev); 998 870 } 999 871 1000 872 /** ··· 1014 888 if (IS_ERR(genpd)) 1015 889 return -EINVAL; 1016 890 1017 - return genpd->suspend_power_off ? 0 : pm_generic_thaw(dev); 1018 - } 1019 - 1020 - /** 1021 - * pm_genpd_dev_poweroff - Power off a device belonging to an I/O PM domain. 1022 - * @dev: Device to suspend. 1023 - * 1024 - * Power off a device under the assumption that its pm_domain field points to 1025 - * the domain member of an object of type struct generic_pm_domain representing 1026 - * a PM domain consisting of I/O devices. 1027 - */ 1028 - static int pm_genpd_dev_poweroff(struct device *dev) 1029 - { 1030 - struct generic_pm_domain *genpd; 1031 - 1032 - dev_dbg(dev, "%s()\n", __func__); 1033 - 1034 - genpd = dev_to_genpd(dev); 1035 - if (IS_ERR(genpd)) 1036 - return -EINVAL; 1037 - 1038 - return genpd->suspend_power_off ? 0 : pm_generic_poweroff(dev); 1039 - } 1040 - 1041 - /** 1042 - * pm_genpd_dev_poweroff_noirq - Late power off of a device from a PM domain. 1043 - * @dev: Device to suspend. 1044 - * 1045 - * Carry out a late powering off of a device under the assumption that its 1046 - * pm_domain field points to the domain member of an object of type 1047 - * struct generic_pm_domain representing a PM domain consisting of I/O devices. 
1048 - */ 1049 - static int pm_genpd_dev_poweroff_noirq(struct device *dev) 1050 - { 1051 - struct generic_pm_domain *genpd; 1052 - int ret; 1053 - 1054 - dev_dbg(dev, "%s()\n", __func__); 1055 - 1056 - genpd = dev_to_genpd(dev); 1057 - if (IS_ERR(genpd)) 1058 - return -EINVAL; 1059 - 1060 - if (genpd->suspend_power_off) 1061 - return 0; 1062 - 1063 - ret = pm_generic_poweroff_noirq(dev); 1064 - if (ret) 1065 - return ret; 1066 - 1067 - if (dev->power.wakeup_path 1068 - && genpd->active_wakeup && genpd->active_wakeup(dev)) 1069 - return 0; 1070 - 1071 - if (genpd->stop_device) 1072 - genpd->stop_device(dev); 1073 - 1074 - /* 1075 - * Since all of the "noirq" callbacks are executed sequentially, it is 1076 - * guaranteed that this function will never run twice in parallel for 1077 - * the same PM domain, so it is not necessary to use locking here. 1078 - */ 1079 - genpd->suspended_count++; 1080 - pm_genpd_sync_poweroff(genpd); 1081 - 1082 - return 0; 891 + return genpd->suspend_power_off ? 0 : genpd_thaw_dev(genpd, dev); 1083 892 } 1084 893 1085 894 /** ··· 1054 993 1055 994 pm_genpd_poweron(genpd); 1056 995 genpd->suspended_count--; 1057 - if (genpd->start_device) 1058 - genpd->start_device(dev); 996 + genpd_start_dev(genpd, dev); 1059 997 1060 - return pm_generic_restore_noirq(dev); 1061 - } 1062 - 1063 - /** 1064 - * pm_genpd_restore - Restore a device belonging to an I/O power domain. 1065 - * @dev: Device to resume. 1066 - * 1067 - * Restore a device under the assumption that its pm_domain field points to the 1068 - * domain member of an object of type struct generic_pm_domain representing 1069 - * a power domain consisting of I/O devices. 1070 - */ 1071 - static int pm_genpd_restore(struct device *dev) 1072 - { 1073 - struct generic_pm_domain *genpd; 1074 - 1075 - dev_dbg(dev, "%s()\n", __func__); 1076 - 1077 - genpd = dev_to_genpd(dev); 1078 - if (IS_ERR(genpd)) 1079 - return -EINVAL; 1080 - 1081 - return genpd->suspend_power_off ? 
0 : pm_generic_restore(dev); 998 + return genpd_resume_early(genpd, dev); 1082 999 } 1083 1000 1084 1001 /** ··· 1106 1067 #define pm_genpd_freeze_noirq NULL 1107 1068 #define pm_genpd_thaw_noirq NULL 1108 1069 #define pm_genpd_thaw NULL 1109 - #define pm_genpd_dev_poweroff_noirq NULL 1110 - #define pm_genpd_dev_poweroff NULL 1111 1070 #define pm_genpd_restore_noirq NULL 1112 - #define pm_genpd_restore NULL 1113 1071 #define pm_genpd_complete NULL 1114 1072 1115 1073 #endif /* CONFIG_PM_SLEEP */ 1116 1074 1117 1075 /** 1118 - * pm_genpd_add_device - Add a device to an I/O PM domain. 1076 + * __pm_genpd_add_device - Add a device to an I/O PM domain. 1119 1077 * @genpd: PM domain to add the device to. 1120 1078 * @dev: Device to be added. 1079 + * @td: Set of PM QoS timing parameters to attach to the device. 1121 1080 */ 1122 - int pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev) 1081 + int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev, 1082 + struct gpd_timing_data *td) 1123 1083 { 1124 1084 struct generic_pm_domain_data *gpd_data; 1125 1085 struct pm_domain_data *pdd; ··· 1161 1123 gpd_data->base.dev = dev; 1162 1124 gpd_data->need_restore = false; 1163 1125 list_add_tail(&gpd_data->base.list_node, &genpd->dev_list); 1126 + if (td) 1127 + gpd_data->td = *td; 1164 1128 1165 1129 out: 1166 1130 genpd_release_lock(genpd); ··· 1320 1280 } 1321 1281 1322 1282 /** 1283 + * pm_genpd_add_callbacks - Add PM domain callbacks to a given device. 1284 + * @dev: Device to add the callbacks to. 1285 + * @ops: Set of callbacks to add. 1286 + * @td: Timing data to add to the device along with the callbacks (optional). 
1287 + */ 1288 + int pm_genpd_add_callbacks(struct device *dev, struct gpd_dev_ops *ops, 1289 + struct gpd_timing_data *td) 1290 + { 1291 + struct pm_domain_data *pdd; 1292 + int ret = 0; 1293 + 1294 + if (!(dev && dev->power.subsys_data && ops)) 1295 + return -EINVAL; 1296 + 1297 + pm_runtime_disable(dev); 1298 + device_pm_lock(); 1299 + 1300 + pdd = dev->power.subsys_data->domain_data; 1301 + if (pdd) { 1302 + struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd); 1303 + 1304 + gpd_data->ops = *ops; 1305 + if (td) 1306 + gpd_data->td = *td; 1307 + } else { 1308 + ret = -EINVAL; 1309 + } 1310 + 1311 + device_pm_unlock(); 1312 + pm_runtime_enable(dev); 1313 + 1314 + return ret; 1315 + } 1316 + EXPORT_SYMBOL_GPL(pm_genpd_add_callbacks); 1317 + 1318 + /** 1319 + * __pm_genpd_remove_callbacks - Remove PM domain callbacks from a given device. 1320 + * @dev: Device to remove the callbacks from. 1321 + * @clear_td: If set, clear the device's timing data too. 1322 + */ 1323 + int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td) 1324 + { 1325 + struct pm_domain_data *pdd; 1326 + int ret = 0; 1327 + 1328 + if (!(dev && dev->power.subsys_data)) 1329 + return -EINVAL; 1330 + 1331 + pm_runtime_disable(dev); 1332 + device_pm_lock(); 1333 + 1334 + pdd = dev->power.subsys_data->domain_data; 1335 + if (pdd) { 1336 + struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd); 1337 + 1338 + gpd_data->ops = (struct gpd_dev_ops){ 0 }; 1339 + if (clear_td) 1340 + gpd_data->td = (struct gpd_timing_data){ 0 }; 1341 + } else { 1342 + ret = -EINVAL; 1343 + } 1344 + 1345 + device_pm_unlock(); 1346 + pm_runtime_enable(dev); 1347 + 1348 + return ret; 1349 + } 1350 + EXPORT_SYMBOL_GPL(__pm_genpd_remove_callbacks); 1351 + 1352 + /* Default device callbacks for generic PM domains. */ 1353 + 1354 + /** 1355 + * pm_genpd_default_save_state - Default "save device state" for PM domians. 1356 + * @dev: Device to handle. 
1357 + */ 1358 + static int pm_genpd_default_save_state(struct device *dev) 1359 + { 1360 + int (*cb)(struct device *__dev); 1361 + struct device_driver *drv = dev->driver; 1362 + 1363 + cb = dev_gpd_data(dev)->ops.save_state; 1364 + if (cb) 1365 + return cb(dev); 1366 + 1367 + if (drv && drv->pm && drv->pm->runtime_suspend) 1368 + return drv->pm->runtime_suspend(dev); 1369 + 1370 + return 0; 1371 + } 1372 + 1373 + /** 1374 + * pm_genpd_default_restore_state - Default PM domians "restore device state". 1375 + * @dev: Device to handle. 1376 + */ 1377 + static int pm_genpd_default_restore_state(struct device *dev) 1378 + { 1379 + int (*cb)(struct device *__dev); 1380 + struct device_driver *drv = dev->driver; 1381 + 1382 + cb = dev_gpd_data(dev)->ops.restore_state; 1383 + if (cb) 1384 + return cb(dev); 1385 + 1386 + if (drv && drv->pm && drv->pm->runtime_resume) 1387 + return drv->pm->runtime_resume(dev); 1388 + 1389 + return 0; 1390 + } 1391 + 1392 + /** 1393 + * pm_genpd_default_suspend - Default "device suspend" for PM domians. 1394 + * @dev: Device to handle. 1395 + */ 1396 + static int pm_genpd_default_suspend(struct device *dev) 1397 + { 1398 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.suspend; 1399 + 1400 + return cb ? cb(dev) : pm_generic_suspend(dev); 1401 + } 1402 + 1403 + /** 1404 + * pm_genpd_default_suspend_late - Default "late device suspend" for PM domians. 1405 + * @dev: Device to handle. 1406 + */ 1407 + static int pm_genpd_default_suspend_late(struct device *dev) 1408 + { 1409 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.suspend_late; 1410 + 1411 + return cb ? cb(dev) : pm_generic_suspend_noirq(dev); 1412 + } 1413 + 1414 + /** 1415 + * pm_genpd_default_resume_early - Default "early device resume" for PM domians. 1416 + * @dev: Device to handle. 
1417 + */ 1418 + static int pm_genpd_default_resume_early(struct device *dev) 1419 + { 1420 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.resume_early; 1421 + 1422 + return cb ? cb(dev) : pm_generic_resume_noirq(dev); 1423 + } 1424 + 1425 + /** 1426 + * pm_genpd_default_resume - Default "device resume" for PM domians. 1427 + * @dev: Device to handle. 1428 + */ 1429 + static int pm_genpd_default_resume(struct device *dev) 1430 + { 1431 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.resume; 1432 + 1433 + return cb ? cb(dev) : pm_generic_resume(dev); 1434 + } 1435 + 1436 + /** 1437 + * pm_genpd_default_freeze - Default "device freeze" for PM domians. 1438 + * @dev: Device to handle. 1439 + */ 1440 + static int pm_genpd_default_freeze(struct device *dev) 1441 + { 1442 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.freeze; 1443 + 1444 + return cb ? cb(dev) : pm_generic_freeze(dev); 1445 + } 1446 + 1447 + /** 1448 + * pm_genpd_default_freeze_late - Default "late device freeze" for PM domians. 1449 + * @dev: Device to handle. 1450 + */ 1451 + static int pm_genpd_default_freeze_late(struct device *dev) 1452 + { 1453 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.freeze_late; 1454 + 1455 + return cb ? cb(dev) : pm_generic_freeze_noirq(dev); 1456 + } 1457 + 1458 + /** 1459 + * pm_genpd_default_thaw_early - Default "early device thaw" for PM domians. 1460 + * @dev: Device to handle. 1461 + */ 1462 + static int pm_genpd_default_thaw_early(struct device *dev) 1463 + { 1464 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.thaw_early; 1465 + 1466 + return cb ? cb(dev) : pm_generic_thaw_noirq(dev); 1467 + } 1468 + 1469 + /** 1470 + * pm_genpd_default_thaw - Default "device thaw" for PM domians. 1471 + * @dev: Device to handle. 1472 + */ 1473 + static int pm_genpd_default_thaw(struct device *dev) 1474 + { 1475 + int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.thaw; 1476 + 1477 + return cb ? 
cb(dev) : pm_generic_thaw(dev); 1478 + } 1479 + 1480 + /** 1323 1481 * pm_genpd_init - Initialize a generic I/O PM domain object. 1324 1482 * @genpd: PM domain object to initialize. 1325 1483 * @gov: PM domain governor to associate with the domain (may be NULL). ··· 1543 1305 genpd->resume_count = 0; 1544 1306 genpd->device_count = 0; 1545 1307 genpd->suspended_count = 0; 1308 + genpd->max_off_time_ns = -1; 1546 1309 genpd->domain.ops.runtime_suspend = pm_genpd_runtime_suspend; 1547 1310 genpd->domain.ops.runtime_resume = pm_genpd_runtime_resume; 1548 1311 genpd->domain.ops.runtime_idle = pm_generic_runtime_idle; ··· 1556 1317 genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq; 1557 1318 genpd->domain.ops.thaw_noirq = pm_genpd_thaw_noirq; 1558 1319 genpd->domain.ops.thaw = pm_genpd_thaw; 1559 - genpd->domain.ops.poweroff = pm_genpd_dev_poweroff; 1560 - genpd->domain.ops.poweroff_noirq = pm_genpd_dev_poweroff_noirq; 1320 + genpd->domain.ops.poweroff = pm_genpd_suspend; 1321 + genpd->domain.ops.poweroff_noirq = pm_genpd_suspend_noirq; 1561 1322 genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq; 1562 - genpd->domain.ops.restore = pm_genpd_restore; 1323 + genpd->domain.ops.restore = pm_genpd_resume; 1563 1324 genpd->domain.ops.complete = pm_genpd_complete; 1325 + genpd->dev_ops.save_state = pm_genpd_default_save_state; 1326 + genpd->dev_ops.restore_state = pm_genpd_default_restore_state; 1327 + genpd->dev_ops.suspend = pm_genpd_default_suspend; 1328 + genpd->dev_ops.suspend_late = pm_genpd_default_suspend_late; 1329 + genpd->dev_ops.resume_early = pm_genpd_default_resume_early; 1330 + genpd->dev_ops.resume = pm_genpd_default_resume; 1331 + genpd->dev_ops.freeze = pm_genpd_default_freeze; 1332 + genpd->dev_ops.freeze_late = pm_genpd_default_freeze_late; 1333 + genpd->dev_ops.thaw_early = pm_genpd_default_thaw_early; 1334 + genpd->dev_ops.thaw = pm_genpd_default_thaw; 1564 1335 mutex_lock(&gpd_list_lock); 1565 1336 list_add(&genpd->gpd_list_node, &gpd_list); 
1566 1337 mutex_unlock(&gpd_list_lock);
+156
drivers/base/power/domain_governor.c
··· 1 + /* 2 + * drivers/base/power/domain_governor.c - Governors for device PM domains. 3 + * 4 + * Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Renesas Electronics Corp. 5 + * 6 + * This file is released under the GPLv2. 7 + */ 8 + 9 + #include <linux/init.h> 10 + #include <linux/kernel.h> 11 + #include <linux/pm_domain.h> 12 + #include <linux/pm_qos.h> 13 + #include <linux/hrtimer.h> 14 + 15 + /** 16 + * default_stop_ok - Default PM domain governor routine for stopping devices. 17 + * @dev: Device to check. 18 + */ 19 + bool default_stop_ok(struct device *dev) 20 + { 21 + struct gpd_timing_data *td = &dev_gpd_data(dev)->td; 22 + 23 + dev_dbg(dev, "%s()\n", __func__); 24 + 25 + if (dev->power.max_time_suspended_ns < 0 || td->break_even_ns == 0) 26 + return true; 27 + 28 + return td->stop_latency_ns + td->start_latency_ns < td->break_even_ns 29 + && td->break_even_ns < dev->power.max_time_suspended_ns; 30 + } 31 + 32 + /** 33 + * default_power_down_ok - Default generic PM domain power off governor routine. 34 + * @pd: PM domain to check. 35 + * 36 + * This routine must be executed under the PM domain's lock. 37 + */ 38 + static bool default_power_down_ok(struct dev_pm_domain *pd) 39 + { 40 + struct generic_pm_domain *genpd = pd_to_genpd(pd); 41 + struct gpd_link *link; 42 + struct pm_domain_data *pdd; 43 + s64 min_dev_off_time_ns; 44 + s64 off_on_time_ns; 45 + ktime_t time_now = ktime_get(); 46 + 47 + off_on_time_ns = genpd->power_off_latency_ns + 48 + genpd->power_on_latency_ns; 49 + /* 50 + * It doesn't make sense to remove power from the domain if saving 51 + * the state of all devices in it and the power off/power on operations 52 + * take too much time. 53 + * 54 + * All devices in this domain have been stopped already at this point. 
55 + */ 56 + list_for_each_entry(pdd, &genpd->dev_list, list_node) { 57 + if (pdd->dev->driver) 58 + off_on_time_ns += 59 + to_gpd_data(pdd)->td.save_state_latency_ns; 60 + } 61 + 62 + /* 63 + * Check if subdomains can be off for enough time. 64 + * 65 + * All subdomains have been powered off already at this point. 66 + */ 67 + list_for_each_entry(link, &genpd->master_links, master_node) { 68 + struct generic_pm_domain *sd = link->slave; 69 + s64 sd_max_off_ns = sd->max_off_time_ns; 70 + 71 + if (sd_max_off_ns < 0) 72 + continue; 73 + 74 + sd_max_off_ns -= ktime_to_ns(ktime_sub(time_now, 75 + sd->power_off_time)); 76 + /* 77 + * Check if the subdomain is allowed to be off long enough for 78 + * the current domain to turn off and on (that's how much time 79 + * it will have to wait worst case). 80 + */ 81 + if (sd_max_off_ns <= off_on_time_ns) 82 + return false; 83 + } 84 + 85 + /* 86 + * Check if the devices in the domain can be off enough time. 87 + */ 88 + min_dev_off_time_ns = -1; 89 + list_for_each_entry(pdd, &genpd->dev_list, list_node) { 90 + struct gpd_timing_data *td; 91 + struct device *dev = pdd->dev; 92 + s64 dev_off_time_ns; 93 + 94 + if (!dev->driver || dev->power.max_time_suspended_ns < 0) 95 + continue; 96 + 97 + td = &to_gpd_data(pdd)->td; 98 + dev_off_time_ns = dev->power.max_time_suspended_ns - 99 + (td->start_latency_ns + td->restore_state_latency_ns + 100 + ktime_to_ns(ktime_sub(time_now, 101 + dev->power.suspend_time))); 102 + if (dev_off_time_ns <= off_on_time_ns) 103 + return false; 104 + 105 + if (min_dev_off_time_ns > dev_off_time_ns 106 + || min_dev_off_time_ns < 0) 107 + min_dev_off_time_ns = dev_off_time_ns; 108 + } 109 + 110 + if (min_dev_off_time_ns < 0) { 111 + /* 112 + * There are no latency constraints, so the domain can spend 113 + * arbitrary time in the "off" state. 
114 + */ 115 + genpd->max_off_time_ns = -1; 116 + return true; 117 + } 118 + 119 + /* 120 + * The difference between the computed minimum delta and the time needed 121 + * to turn the domain on is the maximum theoretical time this domain can 122 + * spend in the "off" state. 123 + */ 124 + min_dev_off_time_ns -= genpd->power_on_latency_ns; 125 + 126 + /* 127 + * If the difference between the computed minimum delta and the time 128 + * needed to turn the domain off and back on on is smaller than the 129 + * domain's power break even time, removing power from the domain is not 130 + * worth it. 131 + */ 132 + if (genpd->break_even_ns > 133 + min_dev_off_time_ns - genpd->power_off_latency_ns) 134 + return false; 135 + 136 + genpd->max_off_time_ns = min_dev_off_time_ns; 137 + return true; 138 + } 139 + 140 + struct dev_power_governor simple_qos_governor = { 141 + .stop_ok = default_stop_ok, 142 + .power_down_ok = default_power_down_ok, 143 + }; 144 + 145 + static bool always_on_power_down_ok(struct dev_pm_domain *domain) 146 + { 147 + return false; 148 + } 149 + 150 + /** 151 + * pm_genpd_gov_always_on - A governor implementing an always-on policy 152 + */ 153 + struct dev_power_governor pm_domain_always_on_gov = { 154 + .power_down_ok = always_on_power_down_ok, 155 + .stop_ok = default_stop_ok, 156 + };
+14 -77
drivers/base/power/generic_ops.c
··· 97 97 * @event: PM transition of the system under way. 98 98 * @bool: Whether or not this is the "noirq" stage. 99 99 * 100 - * If the device has not been suspended at run time, execute the 101 - * suspend/freeze/poweroff/thaw callback provided by its driver, if defined, and 102 - * return its error code. Otherwise, return zero. 100 + * Execute the PM callback corresponding to @event provided by the driver of 101 + * @dev, if defined, and return its error code. Return 0 if the callback is 102 + * not present. 103 103 */ 104 104 static int __pm_generic_call(struct device *dev, int event, bool noirq) 105 105 { 106 106 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 107 107 int (*callback)(struct device *); 108 108 109 - if (!pm || pm_runtime_suspended(dev)) 109 + if (!pm) 110 110 return 0; 111 111 112 112 switch (event) { ··· 119 119 case PM_EVENT_HIBERNATE: 120 120 callback = noirq ? pm->poweroff_noirq : pm->poweroff; 121 121 break; 122 + case PM_EVENT_RESUME: 123 + callback = noirq ? pm->resume_noirq : pm->resume; 124 + break; 122 125 case PM_EVENT_THAW: 123 126 callback = noirq ? pm->thaw_noirq : pm->thaw; 127 + break; 128 + case PM_EVENT_RESTORE: 129 + callback = noirq ? pm->restore_noirq : pm->restore; 124 130 break; 125 131 default: 126 132 callback = NULL; ··· 217 211 EXPORT_SYMBOL_GPL(pm_generic_thaw); 218 212 219 213 /** 220 - * __pm_generic_resume - Generic resume/restore callback for subsystems. 221 - * @dev: Device to handle. 222 - * @event: PM transition of the system under way. 223 - * @bool: Whether or not this is the "noirq" stage. 224 - * 225 - * Execute the resume/resotre callback provided by the @dev's driver, if 226 - * defined. If it returns 0, change the device's runtime PM status to 'active'. 227 - * Return the callback's error code. 228 - */ 229 - static int __pm_generic_resume(struct device *dev, int event, bool noirq) 230 - { 231 - const struct dev_pm_ops *pm = dev->driver ? 
dev->driver->pm : NULL; 232 - int (*callback)(struct device *); 233 - int ret; 234 - 235 - if (!pm) 236 - return 0; 237 - 238 - switch (event) { 239 - case PM_EVENT_RESUME: 240 - callback = noirq ? pm->resume_noirq : pm->resume; 241 - break; 242 - case PM_EVENT_RESTORE: 243 - callback = noirq ? pm->restore_noirq : pm->restore; 244 - break; 245 - default: 246 - callback = NULL; 247 - break; 248 - } 249 - 250 - if (!callback) 251 - return 0; 252 - 253 - ret = callback(dev); 254 - if (!ret && !noirq && pm_runtime_enabled(dev)) { 255 - pm_runtime_disable(dev); 256 - pm_runtime_set_active(dev); 257 - pm_runtime_enable(dev); 258 - } 259 - 260 - return ret; 261 - } 262 - 263 - /** 264 214 * pm_generic_resume_noirq - Generic resume_noirq callback for subsystems. 265 215 * @dev: Device to resume. 266 216 */ 267 217 int pm_generic_resume_noirq(struct device *dev) 268 218 { 269 - return __pm_generic_resume(dev, PM_EVENT_RESUME, true); 219 + return __pm_generic_call(dev, PM_EVENT_RESUME, true); 270 220 } 271 221 EXPORT_SYMBOL_GPL(pm_generic_resume_noirq); 272 222 ··· 232 270 */ 233 271 int pm_generic_resume(struct device *dev) 234 272 { 235 - return __pm_generic_resume(dev, PM_EVENT_RESUME, false); 273 + return __pm_generic_call(dev, PM_EVENT_RESUME, false); 236 274 } 237 275 EXPORT_SYMBOL_GPL(pm_generic_resume); 238 276 ··· 242 280 */ 243 281 int pm_generic_restore_noirq(struct device *dev) 244 282 { 245 - return __pm_generic_resume(dev, PM_EVENT_RESTORE, true); 283 + return __pm_generic_call(dev, PM_EVENT_RESTORE, true); 246 284 } 247 285 EXPORT_SYMBOL_GPL(pm_generic_restore_noirq); 248 286 ··· 252 290 */ 253 291 int pm_generic_restore(struct device *dev) 254 292 { 255 - return __pm_generic_resume(dev, PM_EVENT_RESTORE, false); 293 + return __pm_generic_call(dev, PM_EVENT_RESTORE, false); 256 294 } 257 295 EXPORT_SYMBOL_GPL(pm_generic_restore); 258 296 ··· 276 314 pm_runtime_idle(dev); 277 315 } 278 316 #endif /* CONFIG_PM_SLEEP */ 279 - 280 - struct dev_pm_ops 
generic_subsys_pm_ops = { 281 - #ifdef CONFIG_PM_SLEEP 282 - .prepare = pm_generic_prepare, 283 - .suspend = pm_generic_suspend, 284 - .suspend_noirq = pm_generic_suspend_noirq, 285 - .resume = pm_generic_resume, 286 - .resume_noirq = pm_generic_resume_noirq, 287 - .freeze = pm_generic_freeze, 288 - .freeze_noirq = pm_generic_freeze_noirq, 289 - .thaw = pm_generic_thaw, 290 - .thaw_noirq = pm_generic_thaw_noirq, 291 - .poweroff = pm_generic_poweroff, 292 - .poweroff_noirq = pm_generic_poweroff_noirq, 293 - .restore = pm_generic_restore, 294 - .restore_noirq = pm_generic_restore_noirq, 295 - .complete = pm_generic_complete, 296 - #endif 297 - #ifdef CONFIG_PM_RUNTIME 298 - .runtime_suspend = pm_generic_runtime_suspend, 299 - .runtime_resume = pm_generic_runtime_resume, 300 - .runtime_idle = pm_generic_runtime_idle, 301 - #endif 302 - }; 303 - EXPORT_SYMBOL_GPL(generic_subsys_pm_ops);
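The generic_ops.c rework above folds the resume/restore path into __pm_generic_call() by making one helper map a PM event to the driver's callback and treating a missing callback as success. The shape of that dispatch, in a minimal userspace sketch (the types are cut-down stand-ins, not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

struct device;	/* opaque stand-in */

struct dev_pm_ops {
	int (*suspend)(struct device *);
	int (*resume)(struct device *);
	int (*freeze)(struct device *);
	int (*thaw)(struct device *);
};

enum pm_event {
	PM_EVENT_SUSPEND, PM_EVENT_RESUME, PM_EVENT_FREEZE, PM_EVENT_THAW
};

/*
 * The pattern of the reworked __pm_generic_call(): pick the callback
 * for the event, then make "no ops" or "no callback" mean success, so
 * one helper covers suspend, resume, freeze, thaw, and the rest.
 */
static int pm_generic_call(struct device *dev, const struct dev_pm_ops *pm,
			   enum pm_event event)
{
	int (*callback)(struct device *) = NULL;

	if (!pm)
		return 0;

	switch (event) {
	case PM_EVENT_SUSPEND: callback = pm->suspend; break;
	case PM_EVENT_RESUME:  callback = pm->resume;  break;
	case PM_EVENT_FREEZE:  callback = pm->freeze;  break;
	case PM_EVENT_THAW:    callback = pm->thaw;    break;
	}

	return callback ? callback(dev) : 0;
}

/* Demo driver providing only a resume callback. */
static int demo_resume(struct device *dev)
{
	(void)dev;
	return 42;
}

static const struct dev_pm_ops demo_ops = { .resume = demo_resume };
```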
+165 -210
drivers/base/power/main.c
··· 32 32 #include "../base.h" 33 33 #include "power.h" 34 34 35 + typedef int (*pm_callback_t)(struct device *); 36 + 35 37 /* 36 38 * The entries in the dpm_list list are in a depth first order, simply 37 39 * because children are guaranteed to be discovered after parents, and ··· 166 164 ktime_t calltime = ktime_set(0, 0); 167 165 168 166 if (initcall_debug) { 169 - pr_info("calling %s+ @ %i\n", 170 - dev_name(dev), task_pid_nr(current)); 167 + pr_info("calling %s+ @ %i, parent: %s\n", 168 + dev_name(dev), task_pid_nr(current), 169 + dev->parent ? dev_name(dev->parent) : "none"); 171 170 calltime = ktime_get(); 172 171 } 173 172 ··· 214 211 } 215 212 216 213 /** 217 - * pm_op - Execute the PM operation appropriate for given PM event. 218 - * @dev: Device to handle. 214 + * pm_op - Return the PM operation appropriate for given PM event. 219 215 * @ops: PM operations to choose from. 220 216 * @state: PM transition of the system being carried out. 221 217 */ 222 - static int pm_op(struct device *dev, 223 - const struct dev_pm_ops *ops, 224 - pm_message_t state) 218 + static pm_callback_t pm_op(const struct dev_pm_ops *ops, pm_message_t state) 225 219 { 226 - int error = 0; 227 - ktime_t calltime; 228 - 229 - calltime = initcall_debug_start(dev); 230 - 231 220 switch (state.event) { 232 221 #ifdef CONFIG_SUSPEND 233 222 case PM_EVENT_SUSPEND: 234 - if (ops->suspend) { 235 - error = ops->suspend(dev); 236 - suspend_report_result(ops->suspend, error); 237 - } 238 - break; 223 + return ops->suspend; 239 224 case PM_EVENT_RESUME: 240 - if (ops->resume) { 241 - error = ops->resume(dev); 242 - suspend_report_result(ops->resume, error); 243 - } 244 - break; 225 + return ops->resume; 245 226 #endif /* CONFIG_SUSPEND */ 246 227 #ifdef CONFIG_HIBERNATE_CALLBACKS 247 228 case PM_EVENT_FREEZE: 248 229 case PM_EVENT_QUIESCE: 249 - if (ops->freeze) { 250 - error = ops->freeze(dev); 251 - suspend_report_result(ops->freeze, error); 252 - } 253 - break; 230 + return ops->freeze; 254 
231 case PM_EVENT_HIBERNATE: 255 - if (ops->poweroff) { 256 - error = ops->poweroff(dev); 257 - suspend_report_result(ops->poweroff, error); 258 - } 259 - break; 232 + return ops->poweroff; 260 233 case PM_EVENT_THAW: 261 234 case PM_EVENT_RECOVER: 262 - if (ops->thaw) { 263 - error = ops->thaw(dev); 264 - suspend_report_result(ops->thaw, error); 265 - } 235 + return ops->thaw; 266 236 break; 267 237 case PM_EVENT_RESTORE: 268 - if (ops->restore) { 269 - error = ops->restore(dev); 270 - suspend_report_result(ops->restore, error); 271 - } 272 - break; 238 + return ops->restore; 273 239 #endif /* CONFIG_HIBERNATE_CALLBACKS */ 274 - default: 275 - error = -EINVAL; 276 240 } 277 241 278 - initcall_debug_report(dev, calltime, error); 279 - 280 - return error; 242 + return NULL; 281 243 } 282 244 283 245 /** 284 - * pm_noirq_op - Execute the PM operation appropriate for given PM event. 285 - * @dev: Device to handle. 246 + * pm_noirq_op - Return the PM operation appropriate for given PM event. 286 247 * @ops: PM operations to choose from. 287 248 * @state: PM transition of the system being carried out. 288 249 * 289 250 * The driver of @dev will not receive interrupts while this function is being 290 251 * executed. 291 252 */ 292 - static int pm_noirq_op(struct device *dev, 293 - const struct dev_pm_ops *ops, 294 - pm_message_t state) 253 + static pm_callback_t pm_noirq_op(const struct dev_pm_ops *ops, pm_message_t state) 295 254 { 296 - int error = 0; 297 - ktime_t calltime = ktime_set(0, 0), delta, rettime; 298 - 299 - if (initcall_debug) { 300 - pr_info("calling %s+ @ %i, parent: %s\n", 301 - dev_name(dev), task_pid_nr(current), 302 - dev->parent ? 
dev_name(dev->parent) : "none"); 303 - calltime = ktime_get(); 304 - } 305 - 306 255 switch (state.event) { 307 256 #ifdef CONFIG_SUSPEND 308 257 case PM_EVENT_SUSPEND: 309 - if (ops->suspend_noirq) { 310 - error = ops->suspend_noirq(dev); 311 - suspend_report_result(ops->suspend_noirq, error); 312 - } 313 - break; 258 + return ops->suspend_noirq; 314 259 case PM_EVENT_RESUME: 315 - if (ops->resume_noirq) { 316 - error = ops->resume_noirq(dev); 317 - suspend_report_result(ops->resume_noirq, error); 318 - } 319 - break; 260 + return ops->resume_noirq; 320 261 #endif /* CONFIG_SUSPEND */ 321 262 #ifdef CONFIG_HIBERNATE_CALLBACKS 322 263 case PM_EVENT_FREEZE: 323 264 case PM_EVENT_QUIESCE: 324 - if (ops->freeze_noirq) { 325 - error = ops->freeze_noirq(dev); 326 - suspend_report_result(ops->freeze_noirq, error); 327 - } 328 - break; 265 + return ops->freeze_noirq; 329 266 case PM_EVENT_HIBERNATE: 330 - if (ops->poweroff_noirq) { 331 - error = ops->poweroff_noirq(dev); 332 - suspend_report_result(ops->poweroff_noirq, error); 333 - } 334 - break; 267 + return ops->poweroff_noirq; 335 268 case PM_EVENT_THAW: 336 269 case PM_EVENT_RECOVER: 337 - if (ops->thaw_noirq) { 338 - error = ops->thaw_noirq(dev); 339 - suspend_report_result(ops->thaw_noirq, error); 340 - } 341 - break; 270 + return ops->thaw_noirq; 342 271 case PM_EVENT_RESTORE: 343 - if (ops->restore_noirq) { 344 - error = ops->restore_noirq(dev); 345 - suspend_report_result(ops->restore_noirq, error); 346 - } 347 - break; 272 + return ops->restore_noirq; 348 273 #endif /* CONFIG_HIBERNATE_CALLBACKS */ 349 - default: 350 - error = -EINVAL; 351 274 } 352 275 353 - if (initcall_debug) { 354 - rettime = ktime_get(); 355 - delta = ktime_sub(rettime, calltime); 356 - printk("initcall %s_i+ returned %d after %Ld usecs\n", 357 - dev_name(dev), error, 358 - (unsigned long long)ktime_to_ns(delta) >> 10); 359 - } 360 - 361 - return error; 276 + return NULL; 362 277 } 363 278 364 279 static char *pm_verb(int event) ··· 334 
413 usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC); 335 414 } 336 415 416 + static int dpm_run_callback(pm_callback_t cb, struct device *dev, 417 + pm_message_t state, char *info) 418 + { 419 + ktime_t calltime; 420 + int error; 421 + 422 + if (!cb) 423 + return 0; 424 + 425 + calltime = initcall_debug_start(dev); 426 + 427 + pm_dev_dbg(dev, state, info); 428 + error = cb(dev); 429 + suspend_report_result(cb, error); 430 + 431 + initcall_debug_report(dev, calltime, error); 432 + 433 + return error; 434 + } 435 + 337 436 /*------------------------- Resume routines -------------------------*/ 338 437 339 438 /** ··· 366 425 */ 367 426 static int device_resume_noirq(struct device *dev, pm_message_t state) 368 427 { 428 + pm_callback_t callback = NULL; 429 + char *info = NULL; 369 430 int error = 0; 370 431 371 432 TRACE_DEVICE(dev); 372 433 TRACE_RESUME(0); 373 434 374 435 if (dev->pm_domain) { 375 - pm_dev_dbg(dev, state, "EARLY power domain "); 376 - error = pm_noirq_op(dev, &dev->pm_domain->ops, state); 436 + info = "EARLY power domain "; 437 + callback = pm_noirq_op(&dev->pm_domain->ops, state); 377 438 } else if (dev->type && dev->type->pm) { 378 - pm_dev_dbg(dev, state, "EARLY type "); 379 - error = pm_noirq_op(dev, dev->type->pm, state); 439 + info = "EARLY type "; 440 + callback = pm_noirq_op(dev->type->pm, state); 380 441 } else if (dev->class && dev->class->pm) { 381 - pm_dev_dbg(dev, state, "EARLY class "); 382 - error = pm_noirq_op(dev, dev->class->pm, state); 442 + info = "EARLY class "; 443 + callback = pm_noirq_op(dev->class->pm, state); 383 444 } else if (dev->bus && dev->bus->pm) { 384 - pm_dev_dbg(dev, state, "EARLY "); 385 - error = pm_noirq_op(dev, dev->bus->pm, state); 445 + info = "EARLY bus "; 446 + callback = pm_noirq_op(dev->bus->pm, state); 386 447 } 448 + 449 + if (!callback && dev->driver && dev->driver->pm) { 450 + info = "EARLY driver "; 451 + callback = pm_noirq_op(dev->driver->pm, state); 452 + } 453 + 454 + error = 
dpm_run_callback(callback, dev, state, info); 387 455 388 456 TRACE_RESUME(error); 389 457 return error; ··· 436 486 EXPORT_SYMBOL_GPL(dpm_resume_noirq); 437 487 438 488 /** 439 - * legacy_resume - Execute a legacy (bus or class) resume callback for device. 440 - * @dev: Device to resume. 441 - * @cb: Resume callback to execute. 442 - */ 443 - static int legacy_resume(struct device *dev, int (*cb)(struct device *dev)) 444 - { 445 - int error; 446 - ktime_t calltime; 447 - 448 - calltime = initcall_debug_start(dev); 449 - 450 - error = cb(dev); 451 - suspend_report_result(cb, error); 452 - 453 - initcall_debug_report(dev, calltime, error); 454 - 455 - return error; 456 - } 457 - 458 - /** 459 489 * device_resume - Execute "resume" callbacks for given device. 460 490 * @dev: Device to handle. 461 491 * @state: PM transition of the system being carried out. ··· 443 513 */ 444 514 static int device_resume(struct device *dev, pm_message_t state, bool async) 445 515 { 516 + pm_callback_t callback = NULL; 517 + char *info = NULL; 446 518 int error = 0; 447 519 bool put = false; 448 520 ··· 467 535 put = true; 468 536 469 537 if (dev->pm_domain) { 470 - pm_dev_dbg(dev, state, "power domain "); 471 - error = pm_op(dev, &dev->pm_domain->ops, state); 472 - goto End; 538 + info = "power domain "; 539 + callback = pm_op(&dev->pm_domain->ops, state); 540 + goto Driver; 473 541 } 474 542 475 543 if (dev->type && dev->type->pm) { 476 - pm_dev_dbg(dev, state, "type "); 477 - error = pm_op(dev, dev->type->pm, state); 478 - goto End; 544 + info = "type "; 545 + callback = pm_op(dev->type->pm, state); 546 + goto Driver; 479 547 } 480 548 481 549 if (dev->class) { 482 550 if (dev->class->pm) { 483 - pm_dev_dbg(dev, state, "class "); 484 - error = pm_op(dev, dev->class->pm, state); 485 - goto End; 551 + info = "class "; 552 + callback = pm_op(dev->class->pm, state); 553 + goto Driver; 486 554 } else if (dev->class->resume) { 487 - pm_dev_dbg(dev, state, "legacy class "); 488 - error = 
legacy_resume(dev, dev->class->resume); 555 + info = "legacy class "; 556 + callback = dev->class->resume; 489 557 goto End; 490 558 } 491 559 } 492 560 493 561 if (dev->bus) { 494 562 if (dev->bus->pm) { 495 - pm_dev_dbg(dev, state, ""); 496 - error = pm_op(dev, dev->bus->pm, state); 563 + info = "bus "; 564 + callback = pm_op(dev->bus->pm, state); 497 565 } else if (dev->bus->resume) { 498 - pm_dev_dbg(dev, state, "legacy "); 499 - error = legacy_resume(dev, dev->bus->resume); 566 + info = "legacy bus "; 567 + callback = dev->bus->resume; 568 + goto End; 500 569 } 501 570 } 502 571 572 + Driver: 573 + if (!callback && dev->driver && dev->driver->pm) { 574 + info = "driver "; 575 + callback = pm_op(dev->driver->pm, state); 576 + } 577 + 503 578 End: 579 + error = dpm_run_callback(callback, dev, state, info); 504 580 dev->power.is_suspended = false; 505 581 506 582 Unlock: ··· 600 660 */ 601 661 static void device_complete(struct device *dev, pm_message_t state) 602 662 { 663 + void (*callback)(struct device *) = NULL; 664 + char *info = NULL; 665 + 603 666 device_lock(dev); 604 667 605 668 if (dev->pm_domain) { 606 - pm_dev_dbg(dev, state, "completing power domain "); 607 - if (dev->pm_domain->ops.complete) 608 - dev->pm_domain->ops.complete(dev); 669 + info = "completing power domain "; 670 + callback = dev->pm_domain->ops.complete; 609 671 } else if (dev->type && dev->type->pm) { 610 - pm_dev_dbg(dev, state, "completing type "); 611 - if (dev->type->pm->complete) 612 - dev->type->pm->complete(dev); 672 + info = "completing type "; 673 + callback = dev->type->pm->complete; 613 674 } else if (dev->class && dev->class->pm) { 614 - pm_dev_dbg(dev, state, "completing class "); 615 - if (dev->class->pm->complete) 616 - dev->class->pm->complete(dev); 675 + info = "completing class "; 676 + callback = dev->class->pm->complete; 617 677 } else if (dev->bus && dev->bus->pm) { 618 - pm_dev_dbg(dev, state, "completing "); 619 - if (dev->bus->pm->complete) 620 - 
dev->bus->pm->complete(dev); 678 + info = "completing bus "; 679 + callback = dev->bus->pm->complete; 680 + } 681 + 682 + if (!callback && dev->driver && dev->driver->pm) { 683 + info = "completing driver "; 684 + callback = dev->driver->pm->complete; 685 + } 686 + 687 + if (callback) { 688 + pm_dev_dbg(dev, state, info); 689 + callback(dev); 621 690 } 622 691 623 692 device_unlock(dev); ··· 712 763 */ 713 764 static int device_suspend_noirq(struct device *dev, pm_message_t state) 714 765 { 715 - int error; 766 + pm_callback_t callback = NULL; 767 + char *info = NULL; 716 768 717 769 if (dev->pm_domain) { 718 - pm_dev_dbg(dev, state, "LATE power domain "); 719 - error = pm_noirq_op(dev, &dev->pm_domain->ops, state); 720 - if (error) 721 - return error; 770 + info = "LATE power domain "; 771 + callback = pm_noirq_op(&dev->pm_domain->ops, state); 722 772 } else if (dev->type && dev->type->pm) { 723 - pm_dev_dbg(dev, state, "LATE type "); 724 - error = pm_noirq_op(dev, dev->type->pm, state); 725 - if (error) 726 - return error; 773 + info = "LATE type "; 774 + callback = pm_noirq_op(dev->type->pm, state); 727 775 } else if (dev->class && dev->class->pm) { 728 - pm_dev_dbg(dev, state, "LATE class "); 729 - error = pm_noirq_op(dev, dev->class->pm, state); 730 - if (error) 731 - return error; 776 + info = "LATE class "; 777 + callback = pm_noirq_op(dev->class->pm, state); 732 778 } else if (dev->bus && dev->bus->pm) { 733 - pm_dev_dbg(dev, state, "LATE "); 734 - error = pm_noirq_op(dev, dev->bus->pm, state); 735 - if (error) 736 - return error; 779 + info = "LATE bus "; 780 + callback = pm_noirq_op(dev->bus->pm, state); 737 781 } 738 782 739 - return 0; 783 + if (!callback && dev->driver && dev->driver->pm) { 784 + info = "LATE driver "; 785 + callback = pm_noirq_op(dev->driver->pm, state); 786 + } 787 + 788 + return dpm_run_callback(callback, dev, state, info); 740 789 } 741 790 742 791 /** ··· 811 864 */ 812 865 static int __device_suspend(struct device *dev, 
pm_message_t state, bool async) 813 866 { 867 + pm_callback_t callback = NULL; 868 + char *info = NULL; 814 869 int error = 0; 815 870 816 871 dpm_wait_for_children(dev, async); ··· 833 884 device_lock(dev); 834 885 835 886 if (dev->pm_domain) { 836 - pm_dev_dbg(dev, state, "power domain "); 837 - error = pm_op(dev, &dev->pm_domain->ops, state); 838 - goto End; 887 + info = "power domain "; 888 + callback = pm_op(&dev->pm_domain->ops, state); 889 + goto Run; 839 890 } 840 891 841 892 if (dev->type && dev->type->pm) { 842 - pm_dev_dbg(dev, state, "type "); 843 - error = pm_op(dev, dev->type->pm, state); 844 - goto End; 893 + info = "type "; 894 + callback = pm_op(dev->type->pm, state); 895 + goto Run; 845 896 } 846 897 847 898 if (dev->class) { 848 899 if (dev->class->pm) { 849 - pm_dev_dbg(dev, state, "class "); 850 - error = pm_op(dev, dev->class->pm, state); 851 - goto End; 900 + info = "class "; 901 + callback = pm_op(dev->class->pm, state); 902 + goto Run; 852 903 } else if (dev->class->suspend) { 853 904 pm_dev_dbg(dev, state, "legacy class "); 854 905 error = legacy_suspend(dev, state, dev->class->suspend); ··· 858 909 859 910 if (dev->bus) { 860 911 if (dev->bus->pm) { 861 - pm_dev_dbg(dev, state, ""); 862 - error = pm_op(dev, dev->bus->pm, state); 912 + info = "bus "; 913 + callback = pm_op(dev->bus->pm, state); 863 914 } else if (dev->bus->suspend) { 864 - pm_dev_dbg(dev, state, "legacy "); 915 + pm_dev_dbg(dev, state, "legacy bus "); 865 916 error = legacy_suspend(dev, state, dev->bus->suspend); 917 + goto End; 866 918 } 867 919 } 920 + 921 + Run: 922 + if (!callback && dev->driver && dev->driver->pm) { 923 + info = "driver "; 924 + callback = pm_op(dev->driver->pm, state); 925 + } 926 + 927 + error = dpm_run_callback(callback, dev, state, info); 868 928 869 929 End: 870 930 if (!error) { ··· 980 1022 */ 981 1023 static int device_prepare(struct device *dev, pm_message_t state) 982 1024 { 1025 + int (*callback)(struct device *) = NULL; 1026 + char *info = 
NULL; 983 1027 int error = 0; 984 1028 985 1029 device_lock(dev); ··· 989 1029 dev->power.wakeup_path = device_may_wakeup(dev); 990 1030 991 1031 if (dev->pm_domain) { 992 - pm_dev_dbg(dev, state, "preparing power domain "); 993 - if (dev->pm_domain->ops.prepare) 994 - error = dev->pm_domain->ops.prepare(dev); 995 - suspend_report_result(dev->pm_domain->ops.prepare, error); 996 - if (error) 997 - goto End; 1032 + info = "preparing power domain "; 1033 + callback = dev->pm_domain->ops.prepare; 998 1034 } else if (dev->type && dev->type->pm) { 999 - pm_dev_dbg(dev, state, "preparing type "); 1000 - if (dev->type->pm->prepare) 1001 - error = dev->type->pm->prepare(dev); 1002 - suspend_report_result(dev->type->pm->prepare, error); 1003 - if (error) 1004 - goto End; 1035 + info = "preparing type "; 1036 + callback = dev->type->pm->prepare; 1005 1037 } else if (dev->class && dev->class->pm) { 1006 - pm_dev_dbg(dev, state, "preparing class "); 1007 - if (dev->class->pm->prepare) 1008 - error = dev->class->pm->prepare(dev); 1009 - suspend_report_result(dev->class->pm->prepare, error); 1010 - if (error) 1011 - goto End; 1038 + info = "preparing class "; 1039 + callback = dev->class->pm->prepare; 1012 1040 } else if (dev->bus && dev->bus->pm) { 1013 - pm_dev_dbg(dev, state, "preparing "); 1014 - if (dev->bus->pm->prepare) 1015 - error = dev->bus->pm->prepare(dev); 1016 - suspend_report_result(dev->bus->pm->prepare, error); 1041 + info = "preparing bus "; 1042 + callback = dev->bus->pm->prepare; 1017 1043 } 1018 1044 1019 - End: 1045 + if (!callback && dev->driver && dev->driver->pm) { 1046 + info = "preparing driver "; 1047 + callback = dev->driver->pm->prepare; 1048 + } 1049 + 1050 + if (callback) { 1051 + error = callback(dev); 1052 + suspend_report_result(callback, error); 1053 + } 1054 + 1020 1055 device_unlock(dev); 1021 1056 1022 1057 return error;
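The main.c changes above all follow one refactor: each subsystem branch now *selects* a callback pointer instead of executing it, the driver's own callback becomes a fallback when the chosen subsystem level provides none, and a single dpm_run_callback() does the debug print, invocation, and error reporting. A simplified sketch of that selection order (the struct is an invented stand-in for the relevant `struct device` fields):

```c
#include <assert.h>
#include <stddef.h>

struct device;	/* opaque stand-in */
typedef int (*pm_callback_t)(struct device *);

/* Invented flattening of dev->pm_domain / dev->bus->pm / dev->driver->pm. */
struct demo_dev {
	int has_domain;
	pm_callback_t domain_cb;
	int has_bus;
	pm_callback_t bus_cb;
	pm_callback_t driver_cb;
};

/* A missing callback counts as success, exactly once, in one place. */
static int dpm_run_callback_sketch(pm_callback_t cb, struct device *dev)
{
	return cb ? cb(dev) : 0;
}

/*
 * Selection order of the reworked suspend/resume paths: the first
 * subsystem level that exists is consulted; only if it supplies no
 * callback does the driver's callback get used.
 */
static int demo_suspend(struct demo_dev *d)
{
	pm_callback_t callback = NULL;

	if (d->has_domain)
		callback = d->domain_cb;
	else if (d->has_bus)
		callback = d->bus_cb;

	if (!callback)
		callback = d->driver_cb;

	return dpm_run_callback_sketch(callback, NULL);
}

static int demo_ret_one(struct device *dev) { (void)dev; return 1; }
static int demo_ret_two(struct device *dev) { (void)dev; return 2; }
```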
+41 -8
drivers/base/power/qos.c
··· 47 47 static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers); 48 48 49 49 /** 50 - * dev_pm_qos_read_value - Get PM QoS constraint for a given device. 50 + * __dev_pm_qos_read_value - Get PM QoS constraint for a given device. 51 + * @dev: Device to get the PM QoS constraint value for. 52 + * 53 + * This routine must be called with dev->power.lock held. 54 + */ 55 + s32 __dev_pm_qos_read_value(struct device *dev) 56 + { 57 + struct pm_qos_constraints *c = dev->power.constraints; 58 + 59 + return c ? pm_qos_read_value(c) : 0; 60 + } 61 + 62 + /** 63 + * dev_pm_qos_read_value - Get PM QoS constraint for a given device (locked). 51 64 * @dev: Device to get the PM QoS constraint value for. 52 65 */ 53 66 s32 dev_pm_qos_read_value(struct device *dev) 54 67 { 55 - struct pm_qos_constraints *c; 56 68 unsigned long flags; 57 - s32 ret = 0; 69 + s32 ret; 58 70 59 71 spin_lock_irqsave(&dev->power.lock, flags); 60 - 61 - c = dev->power.constraints; 62 - if (c) 63 - ret = pm_qos_read_value(c); 64 - 72 + ret = __dev_pm_qos_read_value(dev); 65 73 spin_unlock_irqrestore(&dev->power.lock, flags); 66 74 67 75 return ret; ··· 420 412 return blocking_notifier_chain_unregister(&dev_pm_notifiers, notifier); 421 413 } 422 414 EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier); 415 + 416 + /** 417 + * dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor. 418 + * @dev: Device whose ancestor to add the request for. 419 + * @req: Pointer to the preallocated handle. 420 + * @value: Constraint latency value. 
421 + */ 422 + int dev_pm_qos_add_ancestor_request(struct device *dev, 423 + struct dev_pm_qos_request *req, s32 value) 424 + { 425 + struct device *ancestor = dev->parent; 426 + int error = -ENODEV; 427 + 428 + while (ancestor && !ancestor->power.ignore_children) 429 + ancestor = ancestor->parent; 430 + 431 + if (ancestor) 432 + error = dev_pm_qos_add_request(ancestor, req, value); 433 + 434 + if (error) 435 + req->dev = NULL; 436 + 437 + return error; 438 + } 439 + EXPORT_SYMBOL_GPL(dev_pm_qos_add_ancestor_request);
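The core of the new dev_pm_qos_add_ancestor_request() above is the parent walk: climb the device tree until the first ancestor whose `power.ignore_children` is set (that is where the constraint gets attached), or give up with -ENODEV. The walk in isolation, using an invented cut-down node type:

```c
#include <assert.h>
#include <stddef.h>

/* Invented device-tree node; only the fields the walk needs. */
struct demo_device {
	struct demo_device *parent;
	int ignore_children;	/* stands in for power.ignore_children */
};

/*
 * Climb the parent chain past every ancestor that does NOT ignore its
 * children's runtime PM status; the first one that does is where the
 * QoS request belongs.  NULL means no suitable ancestor exists.
 */
static struct demo_device *find_qos_ancestor(struct demo_device *dev)
{
	struct demo_device *ancestor = dev->parent;

	while (ancestor && !ancestor->ignore_children)
		ancestor = ancestor->parent;

	return ancestor;
}
```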
+138 -21
drivers/base/power/runtime.c
··· 250 250 else 251 251 callback = NULL; 252 252 253 + if (!callback && dev->driver && dev->driver->pm) 254 + callback = dev->driver->pm->runtime_idle; 255 + 253 256 if (callback) 254 257 __rpm_callback(callback, dev); 255 258 ··· 282 279 return retval != -EACCES ? retval : -EIO; 283 280 } 284 281 282 + struct rpm_qos_data { 283 + ktime_t time_now; 284 + s64 constraint_ns; 285 + }; 286 + 287 + /** 288 + * rpm_update_qos_constraint - Update a given PM QoS constraint data. 289 + * @dev: Device whose timing data to use. 290 + * @data: PM QoS constraint data to update. 291 + * 292 + * Use the suspend timing data of @dev to update PM QoS constraint data pointed 293 + * to by @data. 294 + */ 295 + static int rpm_update_qos_constraint(struct device *dev, void *data) 296 + { 297 + struct rpm_qos_data *qos = data; 298 + unsigned long flags; 299 + s64 delta_ns; 300 + int ret = 0; 301 + 302 + spin_lock_irqsave(&dev->power.lock, flags); 303 + 304 + if (dev->power.max_time_suspended_ns < 0) 305 + goto out; 306 + 307 + delta_ns = dev->power.max_time_suspended_ns - 308 + ktime_to_ns(ktime_sub(qos->time_now, dev->power.suspend_time)); 309 + if (delta_ns <= 0) { 310 + ret = -EBUSY; 311 + goto out; 312 + } 313 + 314 + if (qos->constraint_ns > delta_ns || qos->constraint_ns == 0) 315 + qos->constraint_ns = delta_ns; 316 + 317 + out: 318 + spin_unlock_irqrestore(&dev->power.lock, flags); 319 + 320 + return ret; 321 + } 322 + 285 323 /** 286 324 * rpm_suspend - Carry out runtime suspend of given device. 287 325 * @dev: Device to suspend. ··· 349 305 { 350 306 int (*callback)(struct device *); 351 307 struct device *parent = NULL; 308 + struct rpm_qos_data qos; 352 309 int retval; 353 310 354 311 trace_rpm_suspend(dev, rpmflags); ··· 445 400 goto out; 446 401 } 447 402 403 + qos.constraint_ns = __dev_pm_qos_read_value(dev); 404 + if (qos.constraint_ns < 0) { 405 + /* Negative constraint means "never suspend". 
*/ 406 + retval = -EPERM; 407 + goto out; 408 + } 409 + qos.constraint_ns *= NSEC_PER_USEC; 410 + qos.time_now = ktime_get(); 411 + 448 412 __update_runtime_status(dev, RPM_SUSPENDING); 413 + 414 + if (!dev->power.ignore_children) { 415 + if (dev->power.irq_safe) 416 + spin_unlock(&dev->power.lock); 417 + else 418 + spin_unlock_irq(&dev->power.lock); 419 + 420 + retval = device_for_each_child(dev, &qos, 421 + rpm_update_qos_constraint); 422 + 423 + if (dev->power.irq_safe) 424 + spin_lock(&dev->power.lock); 425 + else 426 + spin_lock_irq(&dev->power.lock); 427 + 428 + if (retval) 429 + goto fail; 430 + } 431 + 432 + dev->power.suspend_time = qos.time_now; 433 + dev->power.max_time_suspended_ns = qos.constraint_ns ? : -1; 449 434 450 435 if (dev->pm_domain) 451 436 callback = dev->pm_domain->ops.runtime_suspend; ··· 488 413 else 489 414 callback = NULL; 490 415 491 - retval = rpm_callback(callback, dev); 492 - if (retval) { 493 - __update_runtime_status(dev, RPM_ACTIVE); 494 - dev->power.deferred_resume = false; 495 - if (retval == -EAGAIN || retval == -EBUSY) { 496 - dev->power.runtime_error = 0; 416 + if (!callback && dev->driver && dev->driver->pm) 417 + callback = dev->driver->pm->runtime_suspend; 497 418 498 - /* 499 - * If the callback routine failed an autosuspend, and 500 - * if the last_busy time has been updated so that there 501 - * is a new autosuspend expiration time, automatically 502 - * reschedule another autosuspend. 
503 - */ 504 - if ((rpmflags & RPM_AUTO) && 505 - pm_runtime_autosuspend_expiration(dev) != 0) 506 - goto repeat; 507 - } else { 508 - pm_runtime_cancel_pending(dev); 509 - } 510 - wake_up_all(&dev->power.wait_queue); 511 - goto out; 512 - } 419 + retval = rpm_callback(callback, dev); 420 + if (retval) 421 + goto fail; 422 + 513 423 no_callback: 514 424 __update_runtime_status(dev, RPM_SUSPENDED); 515 425 pm_runtime_deactivate_timer(dev); ··· 526 466 trace_rpm_return_int(dev, _THIS_IP_, retval); 527 467 528 468 return retval; 469 + 470 + fail: 471 + __update_runtime_status(dev, RPM_ACTIVE); 472 + dev->power.suspend_time = ktime_set(0, 0); 473 + dev->power.max_time_suspended_ns = -1; 474 + dev->power.deferred_resume = false; 475 + if (retval == -EAGAIN || retval == -EBUSY) { 476 + dev->power.runtime_error = 0; 477 + 478 + /* 479 + * If the callback routine failed an autosuspend, and 480 + * if the last_busy time has been updated so that there 481 + * is a new autosuspend expiration time, automatically 482 + * reschedule another autosuspend. 483 + */ 484 + if ((rpmflags & RPM_AUTO) && 485 + pm_runtime_autosuspend_expiration(dev) != 0) 486 + goto repeat; 487 + } else { 488 + pm_runtime_cancel_pending(dev); 489 + } 490 + wake_up_all(&dev->power.wait_queue); 491 + goto out; 529 492 } 530 493 531 494 /** ··· 703 620 if (dev->power.no_callbacks) 704 621 goto no_callback; /* Assume success. 
*/ 705 622 623 + dev->power.suspend_time = ktime_set(0, 0); 624 + dev->power.max_time_suspended_ns = -1; 625 + 706 626 __update_runtime_status(dev, RPM_RESUMING); 707 627 708 628 if (dev->pm_domain) ··· 718 632 callback = dev->bus->pm->runtime_resume; 719 633 else 720 634 callback = NULL; 635 + 636 + if (!callback && dev->driver && dev->driver->pm) 637 + callback = dev->driver->pm->runtime_resume; 721 638 722 639 retval = rpm_callback(callback, dev); 723 640 if (retval) { ··· 1368 1279 setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn, 1369 1280 (unsigned long)dev); 1370 1281 1282 + dev->power.suspend_time = ktime_set(0, 0); 1283 + dev->power.max_time_suspended_ns = -1; 1284 + 1371 1285 init_waitqueue_head(&dev->power.wait_queue); 1372 1286 } 1373 1287 ··· 1387 1295 pm_runtime_set_suspended(dev); 1388 1296 if (dev->power.irq_safe && dev->parent) 1389 1297 pm_runtime_put_sync(dev->parent); 1298 + } 1299 + 1300 + /** 1301 + * pm_runtime_update_max_time_suspended - Update device's suspend time data. 1302 + * @dev: Device to handle. 1303 + * @delta_ns: Value to subtract from the device's max_time_suspended_ns field. 1304 + * 1305 + * Update the device's power.max_time_suspended_ns field by subtracting 1306 + * @delta_ns from it. The resulting value of power.max_time_suspended_ns is 1307 + * never negative. 1308 + */ 1309 + void pm_runtime_update_max_time_suspended(struct device *dev, s64 delta_ns) 1310 + { 1311 + unsigned long flags; 1312 + 1313 + spin_lock_irqsave(&dev->power.lock, flags); 1314 + 1315 + if (delta_ns > 0 && dev->power.max_time_suspended_ns > 0) { 1316 + if (dev->power.max_time_suspended_ns > delta_ns) 1317 + dev->power.max_time_suspended_ns -= delta_ns; 1318 + else 1319 + dev->power.max_time_suspended_ns = 0; 1320 + } 1321 + 1322 + spin_unlock_irqrestore(&dev->power.lock, flags); 1390 1323 }
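The new pm_runtime_update_max_time_suspended() above is pure bookkeeping: subtract the elapsed delta from the remaining suspend-time budget, clamp at zero, and leave a negative value (meaning "no constraint") untouched. The arithmetic on its own, outside the locking:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t s64;

/*
 * Sketch of the update in pm_runtime_update_max_time_suspended():
 * a positive budget shrinks by delta_ns but never goes negative;
 * a negative budget means "no constraint" and stays as-is.
 */
static s64 update_max_time_suspended(s64 max_time_suspended_ns, s64 delta_ns)
{
	if (delta_ns > 0 && max_time_suspended_ns > 0) {
		if (max_time_suspended_ns > delta_ns)
			max_time_suspended_ns -= delta_ns;
		else
			max_time_suspended_ns = 0;
	}

	return max_time_suspended_ns;
}
```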
-2
drivers/bluetooth/btmrvl_main.c
··· 475 475 476 476 init_waitqueue_entry(&wait, current); 477 477 478 - current->flags |= PF_NOFREEZE; 479 - 480 478 for (;;) { 481 479 add_wait_queue(&thread->wait_q, &wait); 482 480
+13
drivers/devfreq/Kconfig
··· 65 65 66 66 comment "DEVFREQ Drivers" 67 67 68 + config ARM_EXYNOS4_BUS_DEVFREQ 69 + bool "ARM Exynos4210/4212/4412 Memory Bus DEVFREQ Driver" 70 + depends on CPU_EXYNOS4210 || CPU_EXYNOS4212 || CPU_EXYNOS4412 71 + select ARCH_HAS_OPP 72 + select DEVFREQ_GOV_SIMPLE_ONDEMAND 73 + help 74 + This adds the DEVFREQ driver for Exynos4210 memory bus (vdd_int) 75 + and Exynos4212/4412 memory interface and bus (vdd_mif + vdd_int). 76 + It reads PPMU counters of memory controllers and adjusts 77 + the operating frequencies and voltages with OPP support. 78 + To operate with optimal voltages, ASV support is required 79 + (CONFIG_EXYNOS_ASV). 80 + 68 81 endif # PM_DEVFREQ
+3
drivers/devfreq/Makefile
··· 3 3 obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE) += governor_performance.o 4 4 obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE) += governor_powersave.o 5 5 obj-$(CONFIG_DEVFREQ_GOV_USERSPACE) += governor_userspace.o 6 + 7 + # DEVFREQ Drivers 8 + obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos4_bus.o
+7 -8
drivers/devfreq/devfreq.c
··· 347 347 if (!IS_ERR(devfreq)) { 348 348 dev_err(dev, "%s: Unable to create devfreq for the device. It already has one.\n", __func__); 349 349 err = -EINVAL; 350 - goto out; 350 + goto err_out; 351 351 } 352 352 } 353 353 ··· 356 356 dev_err(dev, "%s: Unable to create devfreq for the device\n", 357 357 __func__); 358 358 err = -ENOMEM; 359 - goto out; 359 + goto err_out; 360 360 } 361 361 362 362 mutex_init(&devfreq->lock); ··· 399 399 devfreq->next_polling); 400 400 } 401 401 mutex_unlock(&devfreq_list_lock); 402 - goto out; 402 + out: 403 + return devfreq; 404 + 403 405 err_init: 404 406 device_unregister(&devfreq->dev); 405 407 err_dev: 406 408 mutex_unlock(&devfreq->lock); 407 409 kfree(devfreq); 408 - out: 409 - if (err) 410 - return ERR_PTR(err); 411 - else 412 - return devfreq; 410 + err_out: 411 + return ERR_PTR(err); 413 412 } 414 413 415 414 /**
+1135
drivers/devfreq/exynos4_bus.c
··· 1 + /* drivers/devfreq/exynos4210_memorybus.c 2 + * 3 + * Copyright (c) 2011 Samsung Electronics Co., Ltd. 4 + * http://www.samsung.com/ 5 + * MyungJoo Ham <myungjoo.ham@samsung.com> 6 + * 7 + * EXYNOS4 - Memory/Bus clock frequency scaling support in DEVFREQ framework 8 + * This version supports EXYNOS4210 only. This changes bus frequencies 9 + * and vddint voltages. Exynos4412/4212 should be able to be supported 10 + * with minor modifications. 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License version 2 as 14 + * published by the Free Software Foundation. 15 + * 16 + */ 17 + 18 + #include <linux/io.h> 19 + #include <linux/slab.h> 20 + #include <linux/mutex.h> 21 + #include <linux/suspend.h> 22 + #include <linux/opp.h> 23 + #include <linux/devfreq.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/regulator/consumer.h> 26 + #include <linux/module.h> 27 + 28 + /* Exynos4 ASV has been in the mailing list, but not upstreamed, yet. 
*/ 29 + #ifdef CONFIG_EXYNOS_ASV 30 + extern unsigned int exynos_result_of_asv; 31 + #endif 32 + 33 + #include <mach/regs-clock.h> 34 + 35 + #include <plat/map-s5p.h> 36 + 37 + #define MAX_SAFEVOLT 1200000 /* 1.2V */ 38 + 39 + enum exynos4_busf_type { 40 + TYPE_BUSF_EXYNOS4210, 41 + TYPE_BUSF_EXYNOS4x12, 42 + }; 43 + 44 + /* Assume that the bus is saturated if the utilization is 40% */ 45 + #define BUS_SATURATION_RATIO 40 46 + 47 + enum ppmu_counter { 48 + PPMU_PMNCNT0 = 0, 49 + PPMU_PMCCNT1, 50 + PPMU_PMNCNT2, 51 + PPMU_PMNCNT3, 52 + PPMU_PMNCNT_MAX, 53 + }; 54 + struct exynos4_ppmu { 55 + void __iomem *hw_base; 56 + unsigned int ccnt; 57 + unsigned int event; 58 + unsigned int count[PPMU_PMNCNT_MAX]; 59 + bool ccnt_overflow; 60 + bool count_overflow[PPMU_PMNCNT_MAX]; 61 + }; 62 + 63 + enum busclk_level_idx { 64 + LV_0 = 0, 65 + LV_1, 66 + LV_2, 67 + LV_3, 68 + LV_4, 69 + _LV_END 70 + }; 71 + #define EX4210_LV_MAX LV_2 72 + #define EX4x12_LV_MAX LV_4 73 + #define EX4210_LV_NUM (LV_2 + 1) 74 + #define EX4x12_LV_NUM (LV_4 + 1) 75 + 76 + struct busfreq_data { 77 + enum exynos4_busf_type type; 78 + struct device *dev; 79 + struct devfreq *devfreq; 80 + bool disabled; 81 + struct regulator *vdd_int; 82 + struct regulator *vdd_mif; /* Exynos4412/4212 only */ 83 + struct opp *curr_opp; 84 + struct exynos4_ppmu dmc[2]; 85 + 86 + struct notifier_block pm_notifier; 87 + struct mutex lock; 88 + 89 + /* Dividers calculated at boot/probe-time */ 90 + unsigned int dmc_divtable[_LV_END]; /* DMC0 */ 91 + unsigned int top_divtable[_LV_END]; 92 + }; 93 + 94 + struct bus_opp_table { 95 + unsigned int idx; 96 + unsigned long clk; 97 + unsigned long volt; 98 + }; 99 + 100 + /* 4210 controls clock of mif and voltage of int */ 101 + static struct bus_opp_table exynos4210_busclk_table[] = { 102 + {LV_0, 400000, 1150000}, 103 + {LV_1, 267000, 1050000}, 104 + {LV_2, 133000, 1025000}, 105 + {0, 0, 0}, 106 + }; 107 + 108 + /* 109 + * MIF is the main control knob clock for exynox4x12 MIF/INT 
110 + * clock and voltage of both mif/int are controlled. 111 + */ 112 + static struct bus_opp_table exynos4x12_mifclk_table[] = { 113 + {LV_0, 400000, 1100000}, 114 + {LV_1, 267000, 1000000}, 115 + {LV_2, 160000, 950000}, 116 + {LV_3, 133000, 950000}, 117 + {LV_4, 100000, 950000}, 118 + {0, 0, 0}, 119 + }; 120 + 121 + /* 122 + * INT is not the control knob of 4x12. LV_x is not meant to represent 123 + * the current performance. (MIF does) 124 + */ 125 + static struct bus_opp_table exynos4x12_intclk_table[] = { 126 + {LV_0, 200000, 1000000}, 127 + {LV_1, 160000, 950000}, 128 + {LV_2, 133000, 925000}, 129 + {LV_3, 100000, 900000}, 130 + {0, 0, 0}, 131 + }; 132 + 133 + /* TODO: asv volt definitions are "__initdata"? */ 134 + /* Some chips have different operating voltages */ 135 + static unsigned int exynos4210_asv_volt[][EX4210_LV_NUM] = { 136 + {1150000, 1050000, 1050000}, 137 + {1125000, 1025000, 1025000}, 138 + {1100000, 1000000, 1000000}, 139 + {1075000, 975000, 975000}, 140 + {1050000, 950000, 950000}, 141 + }; 142 + 143 + static unsigned int exynos4x12_mif_step_50[][EX4x12_LV_NUM] = { 144 + /* 400 267 160 133 100 */ 145 + {1050000, 950000, 900000, 900000, 900000}, /* ASV0 */ 146 + {1050000, 950000, 900000, 900000, 900000}, /* ASV1 */ 147 + {1050000, 950000, 900000, 900000, 900000}, /* ASV2 */ 148 + {1050000, 900000, 900000, 900000, 900000}, /* ASV3 */ 149 + {1050000, 900000, 900000, 900000, 850000}, /* ASV4 */ 150 + {1050000, 900000, 900000, 850000, 850000}, /* ASV5 */ 151 + {1050000, 900000, 850000, 850000, 850000}, /* ASV6 */ 152 + {1050000, 900000, 850000, 850000, 850000}, /* ASV7 */ 153 + {1050000, 900000, 850000, 850000, 850000}, /* ASV8 */ 154 + }; 155 + 156 + static unsigned int exynos4x12_int_volt[][EX4x12_LV_NUM] = { 157 + /* 200 160 133 100 */ 158 + {1000000, 950000, 925000, 900000}, /* ASV0 */ 159 + {975000, 925000, 925000, 900000}, /* ASV1 */ 160 + {950000, 925000, 900000, 875000}, /* ASV2 */ 161 + {950000, 900000, 900000, 875000}, /* ASV3 */ 162 + 
{925000, 875000, 875000, 875000}, /* ASV4 */ 163 + {900000, 850000, 850000, 850000}, /* ASV5 */ 164 + {900000, 850000, 850000, 850000}, /* ASV6 */ 165 + {900000, 850000, 850000, 850000}, /* ASV7 */ 166 + {900000, 850000, 850000, 850000}, /* ASV8 */ 167 + }; 168 + 169 + /*** Clock Divider Data for Exynos4210 ***/ 170 + static unsigned int exynos4210_clkdiv_dmc0[][8] = { 171 + /* 172 + * Clock divider value for following 173 + * { DIVACP, DIVACP_PCLK, DIVDPHY, DIVDMC, DIVDMCD 174 + * DIVDMCP, DIVCOPY2, DIVCORE_TIMERS } 175 + */ 176 + 177 + /* DMC L0: 400MHz */ 178 + { 3, 1, 1, 1, 1, 1, 3, 1 }, 179 + /* DMC L1: 266.7MHz */ 180 + { 4, 1, 1, 2, 1, 1, 3, 1 }, 181 + /* DMC L2: 133MHz */ 182 + { 5, 1, 1, 5, 1, 1, 3, 1 }, 183 + }; 184 + static unsigned int exynos4210_clkdiv_top[][5] = { 185 + /* 186 + * Clock divider value for following 187 + * { DIVACLK200, DIVACLK100, DIVACLK160, DIVACLK133, DIVONENAND } 188 + */ 189 + /* ACLK200 L0: 200MHz */ 190 + { 3, 7, 4, 5, 1 }, 191 + /* ACLK200 L1: 160MHz */ 192 + { 4, 7, 5, 6, 1 }, 193 + /* ACLK200 L2: 133MHz */ 194 + { 5, 7, 7, 7, 1 }, 195 + }; 196 + static unsigned int exynos4210_clkdiv_lr_bus[][2] = { 197 + /* 198 + * Clock divider value for following 199 + * { DIVGDL/R, DIVGPL/R } 200 + */ 201 + /* ACLK_GDL/R L1: 200MHz */ 202 + { 3, 1 }, 203 + /* ACLK_GDL/R L2: 160MHz */ 204 + { 4, 1 }, 205 + /* ACLK_GDL/R L3: 133MHz */ 206 + { 5, 1 }, 207 + }; 208 + 209 + /*** Clock Divider Data for Exynos4212/4412 ***/ 210 + static unsigned int exynos4x12_clkdiv_dmc0[][6] = { 211 + /* 212 + * Clock divider value for following 213 + * { DIVACP, DIVACP_PCLK, DIVDPHY, DIVDMC, DIVDMCD 214 + * DIVDMCP} 215 + */ 216 + 217 + /* DMC L0: 400MHz */ 218 + {3, 1, 1, 1, 1, 1}, 219 + /* DMC L1: 266.7MHz */ 220 + {4, 1, 1, 2, 1, 1}, 221 + /* DMC L2: 160MHz */ 222 + {5, 1, 1, 4, 1, 1}, 223 + /* DMC L3: 133MHz */ 224 + {5, 1, 1, 5, 1, 1}, 225 + /* DMC L4: 100MHz */ 226 + {7, 1, 1, 7, 1, 1}, 227 + }; 228 + static unsigned int exynos4x12_clkdiv_dmc1[][6] = { 
229 + /* 230 + * Clock divider value for following 231 + * { G2DACP, DIVC2C, DIVC2C_ACLK } 232 + */ 233 + 234 + /* DMC L0: 400MHz */ 235 + {3, 1, 1}, 236 + /* DMC L1: 266.7MHz */ 237 + {4, 2, 1}, 238 + /* DMC L2: 160MHz */ 239 + {5, 4, 1}, 240 + /* DMC L3: 133MHz */ 241 + {5, 5, 1}, 242 + /* DMC L4: 100MHz */ 243 + {7, 7, 1}, 244 + }; 245 + static unsigned int exynos4x12_clkdiv_top[][5] = { 246 + /* 247 + * Clock divider value for following 248 + * { DIVACLK266_GPS, DIVACLK100, DIVACLK160, 249 + DIVACLK133, DIVONENAND } 250 + */ 251 + 252 + /* ACLK_GDL/R L0: 200MHz */ 253 + {2, 7, 4, 5, 1}, 254 + /* ACLK_GDL/R L1: 200MHz */ 255 + {2, 7, 4, 5, 1}, 256 + /* ACLK_GDL/R L2: 160MHz */ 257 + {4, 7, 5, 7, 1}, 258 + /* ACLK_GDL/R L3: 133MHz */ 259 + {4, 7, 5, 7, 1}, 260 + /* ACLK_GDL/R L4: 100MHz */ 261 + {7, 7, 7, 7, 1}, 262 + }; 263 + static unsigned int exynos4x12_clkdiv_lr_bus[][2] = { 264 + /* 265 + * Clock divider value for following 266 + * { DIVGDL/R, DIVGPL/R } 267 + */ 268 + 269 + /* ACLK_GDL/R L0: 200MHz */ 270 + {3, 1}, 271 + /* ACLK_GDL/R L1: 200MHz */ 272 + {3, 1}, 273 + /* ACLK_GDL/R L2: 160MHz */ 274 + {4, 1}, 275 + /* ACLK_GDL/R L3: 133MHz */ 276 + {5, 1}, 277 + /* ACLK_GDL/R L4: 100MHz */ 278 + {7, 1}, 279 + }; 280 + static unsigned int exynos4x12_clkdiv_sclkip[][3] = { 281 + /* 282 + * Clock divider value for following 283 + * { DIVMFC, DIVJPEG, DIVFIMC0~3} 284 + */ 285 + 286 + /* SCLK_MFC: 200MHz */ 287 + {3, 3, 4}, 288 + /* SCLK_MFC: 200MHz */ 289 + {3, 3, 4}, 290 + /* SCLK_MFC: 160MHz */ 291 + {4, 4, 5}, 292 + /* SCLK_MFC: 133MHz */ 293 + {5, 5, 5}, 294 + /* SCLK_MFC: 100MHz */ 295 + {7, 7, 7}, 296 + }; 297 + 298 + 299 + static int exynos4210_set_busclk(struct busfreq_data *data, struct opp *opp) 300 + { 301 + unsigned int index; 302 + unsigned int tmp; 303 + 304 + for (index = LV_0; index < EX4210_LV_NUM; index++) 305 + if (opp_get_freq(opp) == exynos4210_busclk_table[index].clk) 306 + break; 307 + 308 + if (index == EX4210_LV_NUM) 309 + return 
-EINVAL; 310 + 311 + /* Change Divider - DMC0 */ 312 + tmp = data->dmc_divtable[index]; 313 + 314 + __raw_writel(tmp, S5P_CLKDIV_DMC0); 315 + 316 + do { 317 + tmp = __raw_readl(S5P_CLKDIV_STAT_DMC0); 318 + } while (tmp & 0x11111111); 319 + 320 + /* Change Divider - TOP */ 321 + tmp = data->top_divtable[index]; 322 + 323 + __raw_writel(tmp, S5P_CLKDIV_TOP); 324 + 325 + do { 326 + tmp = __raw_readl(S5P_CLKDIV_STAT_TOP); 327 + } while (tmp & 0x11111); 328 + 329 + /* Change Divider - LEFTBUS */ 330 + tmp = __raw_readl(S5P_CLKDIV_LEFTBUS); 331 + 332 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 333 + 334 + tmp |= ((exynos4210_clkdiv_lr_bus[index][0] << 335 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 336 + (exynos4210_clkdiv_lr_bus[index][1] << 337 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 338 + 339 + __raw_writel(tmp, S5P_CLKDIV_LEFTBUS); 340 + 341 + do { 342 + tmp = __raw_readl(S5P_CLKDIV_STAT_LEFTBUS); 343 + } while (tmp & 0x11); 344 + 345 + /* Change Divider - RIGHTBUS */ 346 + tmp = __raw_readl(S5P_CLKDIV_RIGHTBUS); 347 + 348 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 349 + 350 + tmp |= ((exynos4210_clkdiv_lr_bus[index][0] << 351 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 352 + (exynos4210_clkdiv_lr_bus[index][1] << 353 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 354 + 355 + __raw_writel(tmp, S5P_CLKDIV_RIGHTBUS); 356 + 357 + do { 358 + tmp = __raw_readl(S5P_CLKDIV_STAT_RIGHTBUS); 359 + } while (tmp & 0x11); 360 + 361 + return 0; 362 + } 363 + 364 + static int exynos4x12_set_busclk(struct busfreq_data *data, struct opp *opp) 365 + { 366 + unsigned int index; 367 + unsigned int tmp; 368 + 369 + for (index = LV_0; index < EX4x12_LV_NUM; index++) 370 + if (opp_get_freq(opp) == exynos4x12_mifclk_table[index].clk) 371 + break; 372 + 373 + if (index == EX4x12_LV_NUM) 374 + return -EINVAL; 375 + 376 + /* Change Divider - DMC0 */ 377 + tmp = data->dmc_divtable[index]; 378 + 379 + __raw_writel(tmp, S5P_CLKDIV_DMC0); 380 + 381 + do { 382 + tmp = __raw_readl(S5P_CLKDIV_STAT_DMC0); 383 
+ } while (tmp & 0x11111111); 384 + 385 + /* Change Divider - DMC1 */ 386 + tmp = __raw_readl(S5P_CLKDIV_DMC1); 387 + 388 + tmp &= ~(S5P_CLKDIV_DMC1_G2D_ACP_MASK | 389 + S5P_CLKDIV_DMC1_C2C_MASK | 390 + S5P_CLKDIV_DMC1_C2CACLK_MASK); 391 + 392 + tmp |= ((exynos4x12_clkdiv_dmc1[index][0] << 393 + S5P_CLKDIV_DMC1_G2D_ACP_SHIFT) | 394 + (exynos4x12_clkdiv_dmc1[index][1] << 395 + S5P_CLKDIV_DMC1_C2C_SHIFT) | 396 + (exynos4x12_clkdiv_dmc1[index][2] << 397 + S5P_CLKDIV_DMC1_C2CACLK_SHIFT)); 398 + 399 + __raw_writel(tmp, S5P_CLKDIV_DMC1); 400 + 401 + do { 402 + tmp = __raw_readl(S5P_CLKDIV_STAT_DMC1); 403 + } while (tmp & 0x111111); 404 + 405 + /* Change Divider - TOP */ 406 + tmp = __raw_readl(S5P_CLKDIV_TOP); 407 + 408 + tmp &= ~(S5P_CLKDIV_TOP_ACLK266_GPS_MASK | 409 + S5P_CLKDIV_TOP_ACLK100_MASK | 410 + S5P_CLKDIV_TOP_ACLK160_MASK | 411 + S5P_CLKDIV_TOP_ACLK133_MASK | 412 + S5P_CLKDIV_TOP_ONENAND_MASK); 413 + 414 + tmp |= ((exynos4x12_clkdiv_top[index][0] << 415 + S5P_CLKDIV_TOP_ACLK266_GPS_SHIFT) | 416 + (exynos4x12_clkdiv_top[index][1] << 417 + S5P_CLKDIV_TOP_ACLK100_SHIFT) | 418 + (exynos4x12_clkdiv_top[index][2] << 419 + S5P_CLKDIV_TOP_ACLK160_SHIFT) | 420 + (exynos4x12_clkdiv_top[index][3] << 421 + S5P_CLKDIV_TOP_ACLK133_SHIFT) | 422 + (exynos4x12_clkdiv_top[index][4] << 423 + S5P_CLKDIV_TOP_ONENAND_SHIFT)); 424 + 425 + __raw_writel(tmp, S5P_CLKDIV_TOP); 426 + 427 + do { 428 + tmp = __raw_readl(S5P_CLKDIV_STAT_TOP); 429 + } while (tmp & 0x11111); 430 + 431 + /* Change Divider - LEFTBUS */ 432 + tmp = __raw_readl(S5P_CLKDIV_LEFTBUS); 433 + 434 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 435 + 436 + tmp |= ((exynos4x12_clkdiv_lr_bus[index][0] << 437 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 438 + (exynos4x12_clkdiv_lr_bus[index][1] << 439 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 440 + 441 + __raw_writel(tmp, S5P_CLKDIV_LEFTBUS); 442 + 443 + do { 444 + tmp = __raw_readl(S5P_CLKDIV_STAT_LEFTBUS); 445 + } while (tmp & 0x11); 446 + 447 + /* Change Divider - RIGHTBUS */ 
448 + tmp = __raw_readl(S5P_CLKDIV_RIGHTBUS); 449 + 450 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 451 + 452 + tmp |= ((exynos4x12_clkdiv_lr_bus[index][0] << 453 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 454 + (exynos4x12_clkdiv_lr_bus[index][1] << 455 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 456 + 457 + __raw_writel(tmp, S5P_CLKDIV_RIGHTBUS); 458 + 459 + do { 460 + tmp = __raw_readl(S5P_CLKDIV_STAT_RIGHTBUS); 461 + } while (tmp & 0x11); 462 + 463 + /* Change Divider - MFC */ 464 + tmp = __raw_readl(S5P_CLKDIV_MFC); 465 + 466 + tmp &= ~(S5P_CLKDIV_MFC_MASK); 467 + 468 + tmp |= ((exynos4x12_clkdiv_sclkip[index][0] << 469 + S5P_CLKDIV_MFC_SHIFT)); 470 + 471 + __raw_writel(tmp, S5P_CLKDIV_MFC); 472 + 473 + do { 474 + tmp = __raw_readl(S5P_CLKDIV_STAT_MFC); 475 + } while (tmp & 0x1); 476 + 477 + /* Change Divider - JPEG */ 478 + tmp = __raw_readl(S5P_CLKDIV_CAM1); 479 + 480 + tmp &= ~(S5P_CLKDIV_CAM1_JPEG_MASK); 481 + 482 + tmp |= ((exynos4x12_clkdiv_sclkip[index][1] << 483 + S5P_CLKDIV_CAM1_JPEG_SHIFT)); 484 + 485 + __raw_writel(tmp, S5P_CLKDIV_CAM1); 486 + 487 + do { 488 + tmp = __raw_readl(S5P_CLKDIV_STAT_CAM1); 489 + } while (tmp & 0x1); 490 + 491 + /* Change Divider - FIMC0~3 */ 492 + tmp = __raw_readl(S5P_CLKDIV_CAM); 493 + 494 + tmp &= ~(S5P_CLKDIV_CAM_FIMC0_MASK | S5P_CLKDIV_CAM_FIMC1_MASK | 495 + S5P_CLKDIV_CAM_FIMC2_MASK | S5P_CLKDIV_CAM_FIMC3_MASK); 496 + 497 + tmp |= ((exynos4x12_clkdiv_sclkip[index][2] << 498 + S5P_CLKDIV_CAM_FIMC0_SHIFT) | 499 + (exynos4x12_clkdiv_sclkip[index][2] << 500 + S5P_CLKDIV_CAM_FIMC1_SHIFT) | 501 + (exynos4x12_clkdiv_sclkip[index][2] << 502 + S5P_CLKDIV_CAM_FIMC2_SHIFT) | 503 + (exynos4x12_clkdiv_sclkip[index][2] << 504 + S5P_CLKDIV_CAM_FIMC3_SHIFT)); 505 + 506 + __raw_writel(tmp, S5P_CLKDIV_CAM); 507 + 508 + do { 509 + tmp = __raw_readl(S5P_CLKDIV_STAT_CAM1); 510 + } while (tmp & 0x1111); 511 + 512 + return 0; 513 + } 514 + 515 + 516 + static void busfreq_mon_reset(struct busfreq_data *data) 517 + { 518 + unsigned int i; 519 + 
520 + for (i = 0; i < 2; i++) { 521 + void __iomem *ppmu_base = data->dmc[i].hw_base; 522 + 523 + /* Reset PPMU */ 524 + __raw_writel(0x8000000f, ppmu_base + 0xf010); 525 + __raw_writel(0x8000000f, ppmu_base + 0xf050); 526 + __raw_writel(0x6, ppmu_base + 0xf000); 527 + __raw_writel(0x0, ppmu_base + 0xf100); 528 + 529 + /* Set PPMU Event */ 530 + data->dmc[i].event = 0x6; 531 + __raw_writel(((data->dmc[i].event << 12) | 0x1), 532 + ppmu_base + 0xfc); 533 + 534 + /* Start PPMU */ 535 + __raw_writel(0x1, ppmu_base + 0xf000); 536 + } 537 + } 538 + 539 + static void exynos4_read_ppmu(struct busfreq_data *data) 540 + { 541 + int i, j; 542 + 543 + for (i = 0; i < 2; i++) { 544 + void __iomem *ppmu_base = data->dmc[i].hw_base; 545 + u32 overflow; 546 + 547 + /* Stop PPMU */ 548 + __raw_writel(0x0, ppmu_base + 0xf000); 549 + 550 + /* Update local data from PPMU */ 551 + overflow = __raw_readl(ppmu_base + 0xf050); 552 + 553 + data->dmc[i].ccnt = __raw_readl(ppmu_base + 0xf100); 554 + data->dmc[i].ccnt_overflow = overflow & (1 << 31); 555 + 556 + for (j = 0; j < PPMU_PMNCNT_MAX; j++) { 557 + data->dmc[i].count[j] = __raw_readl( 558 + ppmu_base + (0xf110 + (0x10 * j))); 559 + data->dmc[i].count_overflow[j] = overflow & (1 << j); 560 + } 561 + } 562 + 563 + busfreq_mon_reset(data); 564 + } 565 + 566 + static int exynos4x12_get_intspec(unsigned long mifclk) 567 + { 568 + int i = 0; 569 + 570 + while (exynos4x12_intclk_table[i].clk) { 571 + if (exynos4x12_intclk_table[i].clk <= mifclk) 572 + return i; 573 + i++; 574 + } 575 + 576 + return -EINVAL; 577 + } 578 + 579 + static int exynos4_bus_setvolt(struct busfreq_data *data, struct opp *opp, 580 + struct opp *oldopp) 581 + { 582 + int err = 0, tmp; 583 + unsigned long volt = opp_get_voltage(opp); 584 + 585 + switch (data->type) { 586 + case TYPE_BUSF_EXYNOS4210: 587 + /* OPP represents DMC clock + INT voltage */ 588 + err = regulator_set_voltage(data->vdd_int, volt, 589 + MAX_SAFEVOLT); 590 + break; 591 + case 
TYPE_BUSF_EXYNOS4x12: 592 + /* OPP represents MIF clock + MIF voltage */ 593 + err = regulator_set_voltage(data->vdd_mif, volt, 594 + MAX_SAFEVOLT); 595 + if (err) 596 + break; 597 + 598 + tmp = exynos4x12_get_intspec(opp_get_freq(opp)); 599 + if (tmp < 0) { 600 + err = tmp; 601 + regulator_set_voltage(data->vdd_mif, 602 + opp_get_voltage(oldopp), 603 + MAX_SAFEVOLT); 604 + break; 605 + } 606 + err = regulator_set_voltage(data->vdd_int, 607 + exynos4x12_intclk_table[tmp].volt, 608 + MAX_SAFEVOLT); 609 + /* Try to recover */ 610 + if (err) 611 + regulator_set_voltage(data->vdd_mif, 612 + opp_get_voltage(oldopp), 613 + MAX_SAFEVOLT); 614 + break; 615 + default: 616 + err = -EINVAL; 617 + } 618 + 619 + return err; 620 + } 621 + 622 + static int exynos4_bus_target(struct device *dev, unsigned long *_freq) 623 + { 624 + int err = 0; 625 + struct platform_device *pdev = container_of(dev, struct platform_device, 626 + dev); 627 + struct busfreq_data *data = platform_get_drvdata(pdev); 628 + struct opp *opp = devfreq_recommended_opp(dev, _freq); 629 + unsigned long old_freq = opp_get_freq(data->curr_opp); 630 + unsigned long freq = opp_get_freq(opp); 631 + 632 + if (old_freq == freq) 633 + return 0; 634 + 635 + dev_dbg(dev, "targetting %lukHz %luuV\n", freq, opp_get_voltage(opp)); 636 + 637 + mutex_lock(&data->lock); 638 + 639 + if (data->disabled) 640 + goto out; 641 + 642 + if (old_freq < freq) 643 + err = exynos4_bus_setvolt(data, opp, data->curr_opp); 644 + if (err) 645 + goto out; 646 + 647 + if (old_freq != freq) { 648 + switch (data->type) { 649 + case TYPE_BUSF_EXYNOS4210: 650 + err = exynos4210_set_busclk(data, opp); 651 + break; 652 + case TYPE_BUSF_EXYNOS4x12: 653 + err = exynos4x12_set_busclk(data, opp); 654 + break; 655 + default: 656 + err = -EINVAL; 657 + } 658 + } 659 + if (err) 660 + goto out; 661 + 662 + if (old_freq > freq) 663 + err = exynos4_bus_setvolt(data, opp, data->curr_opp); 664 + if (err) 665 + goto out; 666 + 667 + data->curr_opp = opp; 668 + 
out: 669 + mutex_unlock(&data->lock); 670 + return err; 671 + } 672 + 673 + static int exynos4_get_busier_dmc(struct busfreq_data *data) 674 + { 675 + u64 p0 = data->dmc[0].count[0]; 676 + u64 p1 = data->dmc[1].count[0]; 677 + 678 + p0 *= data->dmc[1].ccnt; 679 + p1 *= data->dmc[0].ccnt; 680 + 681 + if (data->dmc[1].ccnt == 0) 682 + return 0; 683 + 684 + if (p0 > p1) 685 + return 0; 686 + return 1; 687 + } 688 + 689 + static int exynos4_bus_get_dev_status(struct device *dev, 690 + struct devfreq_dev_status *stat) 691 + { 692 + struct platform_device *pdev = container_of(dev, struct platform_device, 693 + dev); 694 + struct busfreq_data *data = platform_get_drvdata(pdev); 695 + int busier_dmc; 696 + int cycles_x2 = 2; /* 2 x cycles */ 697 + void __iomem *addr; 698 + u32 timing; 699 + u32 memctrl; 700 + 701 + exynos4_read_ppmu(data); 702 + busier_dmc = exynos4_get_busier_dmc(data); 703 + stat->current_frequency = opp_get_freq(data->curr_opp); 704 + 705 + if (busier_dmc) 706 + addr = S5P_VA_DMC1; 707 + else 708 + addr = S5P_VA_DMC0; 709 + 710 + memctrl = __raw_readl(addr + 0x04); /* one of DDR2/3/LPDDR2 */ 711 + timing = __raw_readl(addr + 0x38); /* CL or WL/RL values */ 712 + 713 + switch ((memctrl >> 8) & 0xf) { 714 + case 0x4: /* DDR2 */ 715 + cycles_x2 = ((timing >> 16) & 0xf) * 2; 716 + break; 717 + case 0x5: /* LPDDR2 */ 718 + case 0x6: /* DDR3 */ 719 + cycles_x2 = ((timing >> 8) & 0xf) + ((timing >> 0) & 0xf); 720 + break; 721 + default: 722 + pr_err("%s: Unknown Memory Type(%d).\n", __func__, 723 + (memctrl >> 8) & 0xf); 724 + return -EINVAL; 725 + } 726 + 727 + /* Number of cycles spent on memory access */ 728 + stat->busy_time = data->dmc[busier_dmc].count[0] / 2 * (cycles_x2 + 2); 729 + stat->busy_time *= 100 / BUS_SATURATION_RATIO; 730 + stat->total_time = data->dmc[busier_dmc].ccnt; 731 + 732 + /* If the counters have overflown, retry */ 733 + if (data->dmc[busier_dmc].ccnt_overflow || 734 + data->dmc[busier_dmc].count_overflow[0]) 735 + return -EAGAIN; 
736 + 737 + return 0; 738 + } 739 + 740 + static void exynos4_bus_exit(struct device *dev) 741 + { 742 + struct platform_device *pdev = container_of(dev, struct platform_device, 743 + dev); 744 + struct busfreq_data *data = platform_get_drvdata(pdev); 745 + 746 + devfreq_unregister_opp_notifier(dev, data->devfreq); 747 + } 748 + 749 + static struct devfreq_dev_profile exynos4_devfreq_profile = { 750 + .initial_freq = 400000, 751 + .polling_ms = 50, 752 + .target = exynos4_bus_target, 753 + .get_dev_status = exynos4_bus_get_dev_status, 754 + .exit = exynos4_bus_exit, 755 + }; 756 + 757 + static int exynos4210_init_tables(struct busfreq_data *data) 758 + { 759 + u32 tmp; 760 + int mgrp; 761 + int i, err = 0; 762 + 763 + tmp = __raw_readl(S5P_CLKDIV_DMC0); 764 + for (i = LV_0; i < EX4210_LV_NUM; i++) { 765 + tmp &= ~(S5P_CLKDIV_DMC0_ACP_MASK | 766 + S5P_CLKDIV_DMC0_ACPPCLK_MASK | 767 + S5P_CLKDIV_DMC0_DPHY_MASK | 768 + S5P_CLKDIV_DMC0_DMC_MASK | 769 + S5P_CLKDIV_DMC0_DMCD_MASK | 770 + S5P_CLKDIV_DMC0_DMCP_MASK | 771 + S5P_CLKDIV_DMC0_COPY2_MASK | 772 + S5P_CLKDIV_DMC0_CORETI_MASK); 773 + 774 + tmp |= ((exynos4210_clkdiv_dmc0[i][0] << 775 + S5P_CLKDIV_DMC0_ACP_SHIFT) | 776 + (exynos4210_clkdiv_dmc0[i][1] << 777 + S5P_CLKDIV_DMC0_ACPPCLK_SHIFT) | 778 + (exynos4210_clkdiv_dmc0[i][2] << 779 + S5P_CLKDIV_DMC0_DPHY_SHIFT) | 780 + (exynos4210_clkdiv_dmc0[i][3] << 781 + S5P_CLKDIV_DMC0_DMC_SHIFT) | 782 + (exynos4210_clkdiv_dmc0[i][4] << 783 + S5P_CLKDIV_DMC0_DMCD_SHIFT) | 784 + (exynos4210_clkdiv_dmc0[i][5] << 785 + S5P_CLKDIV_DMC0_DMCP_SHIFT) | 786 + (exynos4210_clkdiv_dmc0[i][6] << 787 + S5P_CLKDIV_DMC0_COPY2_SHIFT) | 788 + (exynos4210_clkdiv_dmc0[i][7] << 789 + S5P_CLKDIV_DMC0_CORETI_SHIFT)); 790 + 791 + data->dmc_divtable[i] = tmp; 792 + } 793 + 794 + tmp = __raw_readl(S5P_CLKDIV_TOP); 795 + for (i = LV_0; i < EX4210_LV_NUM; i++) { 796 + tmp &= ~(S5P_CLKDIV_TOP_ACLK200_MASK | 797 + S5P_CLKDIV_TOP_ACLK100_MASK | 798 + S5P_CLKDIV_TOP_ACLK160_MASK | 799 + 
S5P_CLKDIV_TOP_ACLK133_MASK | 800 + S5P_CLKDIV_TOP_ONENAND_MASK); 801 + 802 + tmp |= ((exynos4210_clkdiv_top[i][0] << 803 + S5P_CLKDIV_TOP_ACLK200_SHIFT) | 804 + (exynos4210_clkdiv_top[i][1] << 805 + S5P_CLKDIV_TOP_ACLK100_SHIFT) | 806 + (exynos4210_clkdiv_top[i][2] << 807 + S5P_CLKDIV_TOP_ACLK160_SHIFT) | 808 + (exynos4210_clkdiv_top[i][3] << 809 + S5P_CLKDIV_TOP_ACLK133_SHIFT) | 810 + (exynos4210_clkdiv_top[i][4] << 811 + S5P_CLKDIV_TOP_ONENAND_SHIFT)); 812 + 813 + data->top_divtable[i] = tmp; 814 + } 815 + 816 + #ifdef CONFIG_EXYNOS_ASV 817 + tmp = exynos4_result_of_asv; 818 + #else 819 + tmp = 0; /* Max voltages for the reliability of the unknown */ 820 + #endif 821 + 822 + pr_debug("ASV Group of Exynos4 is %d\n", tmp); 823 + /* Use merged grouping for voltage */ 824 + switch (tmp) { 825 + case 0: 826 + mgrp = 0; 827 + break; 828 + case 1: 829 + case 2: 830 + mgrp = 1; 831 + break; 832 + case 3: 833 + case 4: 834 + mgrp = 2; 835 + break; 836 + case 5: 837 + case 6: 838 + mgrp = 3; 839 + break; 840 + case 7: 841 + mgrp = 4; 842 + break; 843 + default: 844 + pr_warn("Unknown ASV Group. 
Use max voltage.\n"); 845 + mgrp = 0; 846 + } 847 + 848 + for (i = LV_0; i < EX4210_LV_NUM; i++) 849 + exynos4210_busclk_table[i].volt = exynos4210_asv_volt[mgrp][i]; 850 + 851 + for (i = LV_0; i < EX4210_LV_NUM; i++) { 852 + err = opp_add(data->dev, exynos4210_busclk_table[i].clk, 853 + exynos4210_busclk_table[i].volt); 854 + if (err) { 855 + dev_err(data->dev, "Cannot add opp entries.\n"); 856 + return err; 857 + } 858 + } 859 + 860 + 861 + return 0; 862 + } 863 + 864 + static int exynos4x12_init_tables(struct busfreq_data *data) 865 + { 866 + unsigned int i; 867 + unsigned int tmp; 868 + int ret; 869 + 870 + /* Enable pause function for DREX2 DVFS */ 871 + tmp = __raw_readl(S5P_DMC_PAUSE_CTRL); 872 + tmp |= DMC_PAUSE_ENABLE; 873 + __raw_writel(tmp, S5P_DMC_PAUSE_CTRL); 874 + 875 + tmp = __raw_readl(S5P_CLKDIV_DMC0); 876 + 877 + for (i = 0; i < EX4x12_LV_NUM; i++) { 878 + tmp &= ~(S5P_CLKDIV_DMC0_ACP_MASK | 879 + S5P_CLKDIV_DMC0_ACPPCLK_MASK | 880 + S5P_CLKDIV_DMC0_DPHY_MASK | 881 + S5P_CLKDIV_DMC0_DMC_MASK | 882 + S5P_CLKDIV_DMC0_DMCD_MASK | 883 + S5P_CLKDIV_DMC0_DMCP_MASK); 884 + 885 + tmp |= ((exynos4x12_clkdiv_dmc0[i][0] << 886 + S5P_CLKDIV_DMC0_ACP_SHIFT) | 887 + (exynos4x12_clkdiv_dmc0[i][1] << 888 + S5P_CLKDIV_DMC0_ACPPCLK_SHIFT) | 889 + (exynos4x12_clkdiv_dmc0[i][2] << 890 + S5P_CLKDIV_DMC0_DPHY_SHIFT) | 891 + (exynos4x12_clkdiv_dmc0[i][3] << 892 + S5P_CLKDIV_DMC0_DMC_SHIFT) | 893 + (exynos4x12_clkdiv_dmc0[i][4] << 894 + S5P_CLKDIV_DMC0_DMCD_SHIFT) | 895 + (exynos4x12_clkdiv_dmc0[i][5] << 896 + S5P_CLKDIV_DMC0_DMCP_SHIFT)); 897 + 898 + data->dmc_divtable[i] = tmp; 899 + } 900 + 901 + #ifdef CONFIG_EXYNOS_ASV 902 + tmp = exynos4_result_of_asv; 903 + #else 904 + tmp = 0; /* Max voltages for the reliability of the unknown */ 905 + #endif 906 + 907 + if (tmp > 8) 908 + tmp = 0; 909 + pr_debug("ASV Group of Exynos4x12 is %d\n", tmp); 910 + 911 + for (i = 0; i < EX4x12_LV_NUM; i++) { 912 + exynos4x12_mifclk_table[i].volt = 913 + exynos4x12_mif_step_50[tmp][i]; 
914 + exynos4x12_intclk_table[i].volt = 915 + exynos4x12_int_volt[tmp][i]; 916 + } 917 + 918 + for (i = 0; i < EX4x12_LV_NUM; i++) { 919 + ret = opp_add(data->dev, exynos4x12_mifclk_table[i].clk, 920 + exynos4x12_mifclk_table[i].volt); 921 + if (ret) { 922 + dev_err(data->dev, "Fail to add opp entries.\n"); 923 + return ret; 924 + } 925 + } 926 + 927 + return 0; 928 + } 929 + 930 + static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this, 931 + unsigned long event, void *ptr) 932 + { 933 + struct busfreq_data *data = container_of(this, struct busfreq_data, 934 + pm_notifier); 935 + struct opp *opp; 936 + unsigned long maxfreq = ULONG_MAX; 937 + int err = 0; 938 + 939 + switch (event) { 940 + case PM_SUSPEND_PREPARE: 941 + /* Set Fastest and Deactivate DVFS */ 942 + mutex_lock(&data->lock); 943 + 944 + data->disabled = true; 945 + 946 + opp = opp_find_freq_floor(data->dev, &maxfreq); 947 + 948 + err = exynos4_bus_setvolt(data, opp, data->curr_opp); 949 + if (err) 950 + goto unlock; 951 + 952 + switch (data->type) { 953 + case TYPE_BUSF_EXYNOS4210: 954 + err = exynos4210_set_busclk(data, opp); 955 + break; 956 + case TYPE_BUSF_EXYNOS4x12: 957 + err = exynos4x12_set_busclk(data, opp); 958 + break; 959 + default: 960 + err = -EINVAL; 961 + } 962 + if (err) 963 + goto unlock; 964 + 965 + data->curr_opp = opp; 966 + unlock: 967 + mutex_unlock(&data->lock); 968 + if (err) 969 + return err; 970 + return NOTIFY_OK; 971 + case PM_POST_RESTORE: 972 + case PM_POST_SUSPEND: 973 + /* Reactivate */ 974 + mutex_lock(&data->lock); 975 + data->disabled = false; 976 + mutex_unlock(&data->lock); 977 + return NOTIFY_OK; 978 + } 979 + 980 + return NOTIFY_DONE; 981 + } 982 + 983 + static __devinit int exynos4_busfreq_probe(struct platform_device *pdev) 984 + { 985 + struct busfreq_data *data; 986 + struct opp *opp; 987 + struct device *dev = &pdev->dev; 988 + int err = 0; 989 + 990 + data = kzalloc(sizeof(struct busfreq_data), GFP_KERNEL); 991 + if (data == NULL) { 992 + 
dev_err(dev, "Cannot allocate memory.\n"); 993 + return -ENOMEM; 994 + } 995 + 996 + data->type = pdev->id_entry->driver_data; 997 + data->dmc[0].hw_base = S5P_VA_DMC0; 998 + data->dmc[1].hw_base = S5P_VA_DMC1; 999 + data->pm_notifier.notifier_call = exynos4_busfreq_pm_notifier_event; 1000 + data->dev = dev; 1001 + mutex_init(&data->lock); 1002 + 1003 + switch (data->type) { 1004 + case TYPE_BUSF_EXYNOS4210: 1005 + err = exynos4210_init_tables(data); 1006 + break; 1007 + case TYPE_BUSF_EXYNOS4x12: 1008 + err = exynos4x12_init_tables(data); 1009 + break; 1010 + default: 1011 + dev_err(dev, "Cannot determine the device id %d\n", data->type); 1012 + err = -EINVAL; 1013 + } 1014 + if (err) 1015 + goto err_regulator; 1016 + 1017 + data->vdd_int = regulator_get(dev, "vdd_int"); 1018 + if (IS_ERR(data->vdd_int)) { 1019 + dev_err(dev, "Cannot get the regulator \"vdd_int\"\n"); 1020 + err = PTR_ERR(data->vdd_int); 1021 + goto err_regulator; 1022 + } 1023 + if (data->type == TYPE_BUSF_EXYNOS4x12) { 1024 + data->vdd_mif = regulator_get(dev, "vdd_mif"); 1025 + if (IS_ERR(data->vdd_mif)) { 1026 + dev_err(dev, "Cannot get the regulator \"vdd_mif\"\n"); 1027 + err = PTR_ERR(data->vdd_mif); 1028 + regulator_put(data->vdd_int); 1029 + goto err_regulator; 1030 + 1031 + } 1032 + } 1033 + 1034 + opp = opp_find_freq_floor(dev, &exynos4_devfreq_profile.initial_freq); 1035 + if (IS_ERR(opp)) { 1036 + dev_err(dev, "Invalid initial frequency %lu kHz.\n", 1037 + exynos4_devfreq_profile.initial_freq); 1038 + err = PTR_ERR(opp); 1039 + goto err_opp_add; 1040 + } 1041 + data->curr_opp = opp; 1042 + 1043 + platform_set_drvdata(pdev, data); 1044 + 1045 + busfreq_mon_reset(data); 1046 + 1047 + data->devfreq = devfreq_add_device(dev, &exynos4_devfreq_profile, 1048 + &devfreq_simple_ondemand, NULL); 1049 + if (IS_ERR(data->devfreq)) { 1050 + err = PTR_ERR(data->devfreq); 1051 + goto err_opp_add; 1052 + } 1053 + 1054 + devfreq_register_opp_notifier(dev, data->devfreq); 1055 + 1056 + err = 
register_pm_notifier(&data->pm_notifier); 1057 + if (err) { 1058 + dev_err(dev, "Failed to setup pm notifier\n"); 1059 + goto err_devfreq_add; 1060 + } 1061 + 1062 + return 0; 1063 + err_devfreq_add: 1064 + devfreq_remove_device(data->devfreq); 1065 + err_opp_add: 1066 + if (data->vdd_mif) 1067 + regulator_put(data->vdd_mif); 1068 + regulator_put(data->vdd_int); 1069 + err_regulator: 1070 + kfree(data); 1071 + return err; 1072 + } 1073 + 1074 + static __devexit int exynos4_busfreq_remove(struct platform_device *pdev) 1075 + { 1076 + struct busfreq_data *data = platform_get_drvdata(pdev); 1077 + 1078 + unregister_pm_notifier(&data->pm_notifier); 1079 + devfreq_remove_device(data->devfreq); 1080 + regulator_put(data->vdd_int); 1081 + if (data->vdd_mif) 1082 + regulator_put(data->vdd_mif); 1083 + kfree(data); 1084 + 1085 + return 0; 1086 + } 1087 + 1088 + static int exynos4_busfreq_resume(struct device *dev) 1089 + { 1090 + struct platform_device *pdev = container_of(dev, struct platform_device, 1091 + dev); 1092 + struct busfreq_data *data = platform_get_drvdata(pdev); 1093 + 1094 + busfreq_mon_reset(data); 1095 + return 0; 1096 + } 1097 + 1098 + static const struct dev_pm_ops exynos4_busfreq_pm = { 1099 + .resume = exynos4_busfreq_resume, 1100 + }; 1101 + 1102 + static const struct platform_device_id exynos4_busfreq_id[] = { 1103 + { "exynos4210-busfreq", TYPE_BUSF_EXYNOS4210 }, 1104 + { "exynos4412-busfreq", TYPE_BUSF_EXYNOS4x12 }, 1105 + { "exynos4212-busfreq", TYPE_BUSF_EXYNOS4x12 }, 1106 + { }, 1107 + }; 1108 + 1109 + static struct platform_driver exynos4_busfreq_driver = { 1110 + .probe = exynos4_busfreq_probe, 1111 + .remove = __devexit_p(exynos4_busfreq_remove), 1112 + .id_table = exynos4_busfreq_id, 1113 + .driver = { 1114 + .name = "exynos4-busfreq", 1115 + .owner = THIS_MODULE, 1116 + .pm = &exynos4_busfreq_pm, 1117 + }, 1118 + }; 1119 + 1120 + static int __init exynos4_busfreq_init(void) 1121 + { 1122 + return 
platform_driver_register(&exynos4_busfreq_driver); 1123 + } 1124 + late_initcall(exynos4_busfreq_init); 1125 + 1126 + static void __exit exynos4_busfreq_exit(void) 1127 + { 1128 + platform_driver_unregister(&exynos4_busfreq_driver); 1129 + } 1130 + module_exit(exynos4_busfreq_exit); 1131 + 1132 + MODULE_LICENSE("GPL"); 1133 + MODULE_DESCRIPTION("EXYNOS4 busfreq driver with devfreq framework"); 1134 + MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>"); 1135 + MODULE_ALIAS("exynos4-busfreq");
+27 -19
drivers/dma/dmatest.c
··· 214 214 return error_count; 215 215 } 216 216 217 - static void dmatest_callback(void *completion) 217 + /* poor man's completion - we want to use wait_event_freezable() on it */ 218 + struct dmatest_done { 219 + bool done; 220 + wait_queue_head_t *wait; 221 + }; 222 + 223 + static void dmatest_callback(void *arg) 218 224 { 219 - complete(completion); 225 + struct dmatest_done *done = arg; 226 + 227 + done->done = true; 228 + wake_up_all(done->wait); 220 229 } 221 230 222 231 /* ··· 244 235 */ 245 236 static int dmatest_func(void *data) 246 237 { 238 + DECLARE_WAIT_QUEUE_HEAD_ONSTACK(done_wait); 247 239 struct dmatest_thread *thread = data; 240 + struct dmatest_done done = { .wait = &done_wait }; 248 241 struct dma_chan *chan; 249 242 const char *thread_name; 250 243 unsigned int src_off, dst_off, len; ··· 263 252 int i; 264 253 265 254 thread_name = current->comm; 266 - set_freezable_with_signal(); 255 + set_freezable(); 267 256 268 257 ret = -ENOMEM; 269 258 ··· 317 306 struct dma_async_tx_descriptor *tx = NULL; 318 307 dma_addr_t dma_srcs[src_cnt]; 319 308 dma_addr_t dma_dsts[dst_cnt]; 320 - struct completion cmp; 321 - unsigned long start, tmo, end = 0 /* compiler... 
*/; 322 - bool reload = true; 323 309 u8 align = 0; 324 310 325 311 total_tests++; ··· 399 391 continue; 400 392 } 401 393 402 - init_completion(&cmp); 394 + done.done = false; 403 395 tx->callback = dmatest_callback; 404 - tx->callback_param = &cmp; 396 + tx->callback_param = &done; 405 397 cookie = tx->tx_submit(tx); 406 398 407 399 if (dma_submit_error(cookie)) { ··· 415 407 } 416 408 dma_async_issue_pending(chan); 417 409 418 - do { 419 - start = jiffies; 420 - if (reload) 421 - end = start + msecs_to_jiffies(timeout); 422 - else if (end <= start) 423 - end = start + 1; 424 - tmo = wait_for_completion_interruptible_timeout(&cmp, 425 - end - start); 426 - reload = try_to_freeze(); 427 - } while (tmo == -ERESTARTSYS); 410 + wait_event_freezable_timeout(done_wait, done.done, 411 + msecs_to_jiffies(timeout)); 428 412 429 413 status = dma_async_is_tx_complete(chan, cookie, NULL, NULL); 430 414 431 - if (tmo == 0) { 415 + if (!done.done) { 416 + /* 417 + * We're leaving the timed out dma operation with 418 + * dangling pointer to done_wait. To make this 419 + * correct, we'll need to allocate wait_done for 420 + * each test iteration and perform "who's gonna 421 + * free it this time?" dancing. For now, just 422 + * leave it dangling. 423 + */ 432 424 pr_warning("%s: #%u: test timed out\n", 433 425 thread_name, total_tests - 1); 434 426 failed_tests++;
+12 -1
drivers/input/touchscreen/st1232.c
··· 23 23 #include <linux/input.h> 24 24 #include <linux/interrupt.h> 25 25 #include <linux/module.h> 26 + #include <linux/pm_qos.h> 26 27 #include <linux/slab.h> 27 28 #include <linux/types.h> 28 29 ··· 47 46 struct i2c_client *client; 48 47 struct input_dev *input_dev; 49 48 struct st1232_ts_finger finger[MAX_FINGERS]; 49 + struct dev_pm_qos_request low_latency_req; 50 50 }; 51 51 52 52 static int st1232_ts_read_data(struct st1232_ts_data *ts) ··· 120 118 } 121 119 122 120 /* SYN_MT_REPORT only if no contact */ 123 - if (!count) 121 + if (!count) { 124 122 input_mt_sync(input_dev); 123 + if (ts->low_latency_req.dev) { 124 + dev_pm_qos_remove_request(&ts->low_latency_req); 125 + ts->low_latency_req.dev = NULL; 126 + } 127 + } else if (!ts->low_latency_req.dev) { 128 + /* First contact, request 100 us latency. */ 129 + dev_pm_qos_add_ancestor_request(&ts->client->dev, 130 + &ts->low_latency_req, 100); 131 + } 125 132 126 133 /* SYN_REPORT */ 127 134 input_sync(input_dev);
-2
drivers/mfd/twl6030-irq.c
··· 138 138 static const unsigned max_i2c_errors = 100; 139 139 int ret; 140 140 141 - current->flags |= PF_NOFREEZE; 142 - 143 141 while (!kthread_should_stop()) { 144 142 int i; 145 143 union {
+1 -1
drivers/net/irda/stir4200.c
··· 750 750 751 751 write_reg(stir, REG_CTRL1, CTRL1_TXPWD|CTRL1_RXPWD); 752 752 753 - refrigerator(); 753 + try_to_freeze(); 754 754 755 755 if (change_speed(stir, stir->speed)) 756 756 break;
+6 -9
drivers/platform/x86/thinkpad_acpi.c
··· 2456 2456 u32 poll_mask, event_mask; 2457 2457 unsigned int si, so; 2458 2458 unsigned long t; 2459 - unsigned int change_detector, must_reset; 2459 + unsigned int change_detector; 2460 2460 unsigned int poll_freq; 2461 + bool was_frozen; 2461 2462 2462 2463 mutex_lock(&hotkey_thread_mutex); 2463 2464 ··· 2489 2488 t = 100; /* should never happen... */ 2490 2489 } 2491 2490 t = msleep_interruptible(t); 2492 - if (unlikely(kthread_should_stop())) 2491 + if (unlikely(kthread_freezable_should_stop(&was_frozen))) 2493 2492 break; 2494 - must_reset = try_to_freeze(); 2495 - if (t > 0 && !must_reset) 2493 + 2494 + if (t > 0 && !was_frozen) 2496 2495 continue; 2497 2496 2498 2497 mutex_lock(&hotkey_thread_data_mutex); 2499 - if (must_reset || hotkey_config_change != change_detector) { 2498 + if (was_frozen || hotkey_config_change != change_detector) { 2500 2499 /* forget old state on thaw or config change */ 2501 2500 si = so; 2502 2501 t = 0; ··· 2529 2528 static void hotkey_poll_stop_sync(void) 2530 2529 { 2531 2530 if (tpacpi_hotkey_task) { 2532 - if (frozen(tpacpi_hotkey_task) || 2533 - freezing(tpacpi_hotkey_task)) 2534 - thaw_process(tpacpi_hotkey_task); 2535 - 2536 2531 kthread_stop(tpacpi_hotkey_task); 2537 2532 tpacpi_hotkey_task = NULL; 2538 2533 mutex_lock(&hotkey_thread_mutex);
+8
drivers/sh/intc/core.c
··· 354 354 if (desc->force_enable) 355 355 intc_enable_disable_enum(desc, d, desc->force_enable, 1); 356 356 357 + d->skip_suspend = desc->skip_syscore_suspend; 358 + 357 359 nr_intc_controllers++; 358 360 359 361 return 0; ··· 388 386 list_for_each_entry(d, &intc_list, list) { 389 387 int irq; 390 388 389 + if (d->skip_suspend) 390 + continue; 391 + 391 392 /* enable wakeup irqs belonging to this intc controller */ 392 393 for_each_active_irq(irq) { 393 394 struct irq_data *data; ··· 413 408 414 409 list_for_each_entry(d, &intc_list, list) { 415 410 int irq; 411 + 412 + if (d->skip_suspend) 413 + continue; 416 414 417 415 for_each_active_irq(irq) { 418 416 struct irq_data *data;
+1
drivers/sh/intc/internals.h
··· 67 67 struct intc_window *window; 68 68 unsigned int nr_windows; 69 69 struct irq_chip chip; 70 + bool skip_suspend; 70 71 }; 71 72 72 73
-2
drivers/staging/rts_pstor/rtsx.c
··· 466 466 struct rtsx_chip *chip = dev->chip; 467 467 struct Scsi_Host *host = rtsx_to_host(dev); 468 468 469 - current->flags |= PF_NOFREEZE; 470 - 471 469 for (;;) { 472 470 if (wait_for_completion_interruptible(&dev->cmnd_ready)) 473 471 break;
+7 -6
drivers/usb/storage/usb.c
··· 831 831 832 832 dev_dbg(dev, "device found\n"); 833 833 834 - set_freezable_with_signal(); 834 + set_freezable(); 835 + 835 836 /* 836 837 * Wait for the timeout to expire or for a disconnect 837 838 * ··· 840 839 * fail to freeze, but we can't be non-freezable either. Nor can 841 840 * khubd freeze while waiting for scanning to complete as it may 842 841 * hold the device lock, causing a hang when suspending devices. 843 - * So we request a fake signal when freezing and use 844 - * interruptible sleep to kick us out of our wait early when 845 - * freezing happens. 842 + * So instead of using wait_event_freezable(), explicitly test 843 + * for (DONT_SCAN || freezing) in interruptible wait and proceed 844 + * if any of DONT_SCAN, freezing or timeout has happened. 846 845 */ 847 846 if (delay_use > 0) { 848 847 dev_dbg(dev, "waiting for device to settle " 849 848 "before scanning\n"); 850 849 wait_event_interruptible_timeout(us->delay_wait, 851 - test_bit(US_FLIDX_DONT_SCAN, &us->dflags), 852 - delay_use * HZ); 850 + test_bit(US_FLIDX_DONT_SCAN, &us->dflags) || 851 + freezing(current), delay_use * HZ); 853 852 } 854 853 855 854 /* If the device is still connected, perform the scanning */
+1 -1
fs/btrfs/async-thread.c
··· 334 334 if (freezing(current)) { 335 335 worker->working = 0; 336 336 spin_unlock_irq(&worker->lock); 337 - refrigerator(); 337 + try_to_freeze(); 338 338 } else { 339 339 spin_unlock_irq(&worker->lock); 340 340 if (!kthread_should_stop()) {
+2 -6
fs/btrfs/disk-io.c
··· 1579 1579 btrfs_run_defrag_inodes(root->fs_info); 1580 1580 } 1581 1581 1582 - if (freezing(current)) { 1583 - refrigerator(); 1584 - } else { 1582 + if (!try_to_freeze()) { 1585 1583 set_current_state(TASK_INTERRUPTIBLE); 1586 1584 if (!kthread_should_stop()) 1587 1585 schedule(); ··· 1633 1635 wake_up_process(root->fs_info->cleaner_kthread); 1634 1636 mutex_unlock(&root->fs_info->transaction_kthread_mutex); 1635 1637 1636 - if (freezing(current)) { 1637 - refrigerator(); 1638 - } else { 1638 + if (!try_to_freeze()) { 1639 1639 set_current_state(TASK_INTERRUPTIBLE); 1640 1640 if (!kthread_should_stop() && 1641 1641 !btrfs_transaction_blocked(root->fs_info))
+1 -2
fs/ext4/super.c
··· 2882 2882 } 2883 2883 mutex_unlock(&eli->li_list_mtx); 2884 2884 2885 - if (freezing(current)) 2886 - refrigerator(); 2885 + try_to_freeze(); 2887 2886 2888 2887 cur = jiffies; 2889 2888 if ((time_after_eq(cur, next_wakeup)) ||
+1 -3
fs/fs-writeback.c
··· 936 936 937 937 trace_writeback_thread_start(bdi); 938 938 939 - while (!kthread_should_stop()) { 939 + while (!kthread_freezable_should_stop(NULL)) { 940 940 /* 941 941 * Remove own delayed wake-up timer, since we are already awake 942 942 * and we'll take care of the periodic write-back. ··· 966 966 */ 967 967 schedule(); 968 968 } 969 - 970 - try_to_freeze(); 971 969 } 972 970 973 971 /* Flush any work that raced with us exiting */

+2 -2
fs/gfs2/log.c
··· 951 951 wake_up(&sdp->sd_log_waitq); 952 952 953 953 t = gfs2_tune_get(sdp, gt_logd_secs) * HZ; 954 - if (freezing(current)) 955 - refrigerator(); 954 + 955 + try_to_freeze(); 956 956 957 957 do { 958 958 prepare_to_wait(&sdp->sd_logd_waitq, &wait,
+2 -2
fs/gfs2/quota.c
··· 1417 1417 /* Check for & recover partially truncated inodes */ 1418 1418 quotad_check_trunc_list(sdp); 1419 1419 1420 - if (freezing(current)) 1421 - refrigerator(); 1420 + try_to_freeze(); 1421 + 1422 1422 t = min(quotad_timeo, statfs_timeo); 1423 1423 1424 1424 prepare_to_wait(&sdp->sd_quota_wait, &wait, TASK_INTERRUPTIBLE);
+1 -1
fs/jbd/journal.c
··· 166 166 */ 167 167 jbd_debug(1, "Now suspending kjournald\n"); 168 168 spin_unlock(&journal->j_state_lock); 169 - refrigerator(); 169 + try_to_freeze(); 170 170 spin_lock(&journal->j_state_lock); 171 171 } else { 172 172 /*
+1 -1
fs/jbd2/journal.c
··· 173 173 */ 174 174 jbd_debug(1, "Now suspending kjournald2\n"); 175 175 write_unlock(&journal->j_state_lock); 176 - refrigerator(); 176 + try_to_freeze(); 177 177 write_lock(&journal->j_state_lock); 178 178 } else { 179 179 /*
+1 -1
fs/jfs/jfs_logmgr.c
··· 2349 2349 2350 2350 if (freezing(current)) { 2351 2351 spin_unlock_irq(&log_redrive_lock); 2352 - refrigerator(); 2352 + try_to_freeze(); 2353 2353 } else { 2354 2354 set_current_state(TASK_INTERRUPTIBLE); 2355 2355 spin_unlock_irq(&log_redrive_lock);
+2 -2
fs/jfs/jfs_txnmgr.c
··· 2800 2800 2801 2801 if (freezing(current)) { 2802 2802 LAZY_UNLOCK(flags); 2803 - refrigerator(); 2803 + try_to_freeze(); 2804 2804 } else { 2805 2805 DECLARE_WAITQUEUE(wq, current); 2806 2806 ··· 2994 2994 2995 2995 if (freezing(current)) { 2996 2996 TXN_UNLOCK(); 2997 - refrigerator(); 2997 + try_to_freeze(); 2998 2998 } else { 2999 2999 set_current_state(TASK_INTERRUPTIBLE); 3000 3000 TXN_UNLOCK();
+2 -1
fs/nfs/inode.c
··· 38 38 #include <linux/nfs_xdr.h> 39 39 #include <linux/slab.h> 40 40 #include <linux/compat.h> 41 + #include <linux/freezer.h> 41 42 42 43 #include <asm/system.h> 43 44 #include <asm/uaccess.h> ··· 78 77 { 79 78 if (fatal_signal_pending(current)) 80 79 return -ERESTARTSYS; 81 - schedule(); 80 + freezable_schedule(); 82 81 return 0; 83 82 } 84 83
+2 -1
fs/nfs/nfs3proc.c
··· 17 17 #include <linux/nfs_page.h> 18 18 #include <linux/lockd/bind.h> 19 19 #include <linux/nfs_mount.h> 20 + #include <linux/freezer.h> 20 21 21 22 #include "iostat.h" 22 23 #include "internal.h" ··· 33 32 res = rpc_call_sync(clnt, msg, flags); 34 33 if (res != -EJUKEBOX && res != -EKEYEXPIRED) 35 34 break; 36 - schedule_timeout_killable(NFS_JUKEBOX_RETRY_TIME); 35 + freezable_schedule_timeout_killable(NFS_JUKEBOX_RETRY_TIME); 37 36 res = -ERESTARTSYS; 38 37 } while (!fatal_signal_pending(current)); 39 38 return res;
+3 -2
fs/nfs/nfs4proc.c
··· 55 55 #include <linux/sunrpc/bc_xprt.h> 56 56 #include <linux/xattr.h> 57 57 #include <linux/utsname.h> 58 + #include <linux/freezer.h> 58 59 59 60 #include "nfs4_fs.h" 60 61 #include "delegation.h" ··· 244 243 *timeout = NFS4_POLL_RETRY_MIN; 245 244 if (*timeout > NFS4_POLL_RETRY_MAX) 246 245 *timeout = NFS4_POLL_RETRY_MAX; 247 - schedule_timeout_killable(*timeout); 246 + freezable_schedule_timeout_killable(*timeout); 248 247 if (fatal_signal_pending(current)) 249 248 res = -ERESTARTSYS; 250 249 *timeout <<= 1; ··· 3959 3958 static unsigned long 3960 3959 nfs4_set_lock_task_retry(unsigned long timeout) 3961 3960 { 3962 - schedule_timeout_killable(timeout); 3961 + freezable_schedule_timeout_killable(timeout); 3963 3962 timeout <<= 1; 3964 3963 if (timeout > NFS4_LOCK_MAXTIMEOUT) 3965 3964 return NFS4_LOCK_MAXTIMEOUT;
+2 -1
fs/nfs/proc.c
··· 41 41 #include <linux/nfs_fs.h> 42 42 #include <linux/nfs_page.h> 43 43 #include <linux/lockd/bind.h> 44 + #include <linux/freezer.h> 44 45 #include "internal.h" 45 46 46 47 #define NFSDBG_FACILITY NFSDBG_PROC ··· 60 59 res = rpc_call_sync(clnt, msg, flags); 61 60 if (res != -EKEYEXPIRED) 62 61 break; 63 - schedule_timeout_killable(NFS_JUKEBOX_RETRY_TIME); 62 + freezable_schedule_timeout_killable(NFS_JUKEBOX_RETRY_TIME); 64 63 res = -ERESTARTSYS; 65 64 } while (!fatal_signal_pending(current)); 66 65 return res;
+1 -1
fs/nilfs2/segment.c
··· 2470 2470 2471 2471 if (freezing(current)) { 2472 2472 spin_unlock(&sci->sc_state_lock); 2473 - refrigerator(); 2473 + try_to_freeze(); 2474 2474 spin_lock(&sci->sc_state_lock); 2475 2475 } else { 2476 2476 DEFINE_WAIT(wait);
+1 -1
fs/xfs/xfs_buf.c
··· 1702 1702 struct blk_plug plug; 1703 1703 1704 1704 if (unlikely(freezing(current))) 1705 - refrigerator(); 1705 + try_to_freeze(); 1706 1706 1707 1707 /* sleep for a long time if there is nothing to do. */ 1708 1708 if (list_empty(&target->bt_delwri_queue))
+77 -84
include/linux/freezer.h
··· 5 5 6 6 #include <linux/sched.h> 7 7 #include <linux/wait.h> 8 + #include <linux/atomic.h> 8 9 9 10 #ifdef CONFIG_FREEZER 11 + extern atomic_t system_freezing_cnt; /* nr of freezing conds in effect */ 12 + extern bool pm_freezing; /* PM freezing in effect */ 13 + extern bool pm_nosig_freezing; /* PM nosig freezing in effect */ 14 + 10 15 /* 11 16 * Check if a process has been frozen 12 17 */ 13 - static inline int frozen(struct task_struct *p) 18 + static inline bool frozen(struct task_struct *p) 14 19 { 15 20 return p->flags & PF_FROZEN; 16 21 } 17 22 23 + extern bool freezing_slow_path(struct task_struct *p); 24 + 18 25 /* 19 26 * Check if there is a request to freeze a process 20 27 */ 21 - static inline int freezing(struct task_struct *p) 28 + static inline bool freezing(struct task_struct *p) 22 29 { 23 - return test_tsk_thread_flag(p, TIF_FREEZE); 24 - } 25 - 26 - /* 27 - * Request that a process be frozen 28 - */ 29 - static inline void set_freeze_flag(struct task_struct *p) 30 - { 31 - set_tsk_thread_flag(p, TIF_FREEZE); 32 - } 33 - 34 - /* 35 - * Sometimes we may need to cancel the previous 'freeze' request 36 - */ 37 - static inline void clear_freeze_flag(struct task_struct *p) 38 - { 39 - clear_tsk_thread_flag(p, TIF_FREEZE); 40 - } 41 - 42 - static inline bool should_send_signal(struct task_struct *p) 43 - { 44 - return !(p->flags & PF_FREEZER_NOSIG); 30 + if (likely(!atomic_read(&system_freezing_cnt))) 31 + return false; 32 + return freezing_slow_path(p); 45 33 } 46 34 47 35 /* Takes and releases task alloc lock using task_lock() */ 48 - extern int thaw_process(struct task_struct *p); 36 + extern void __thaw_task(struct task_struct *t); 49 37 50 - extern void refrigerator(void); 38 + extern bool __refrigerator(bool check_kthr_stop); 51 39 extern int freeze_processes(void); 52 40 extern int freeze_kernel_threads(void); 53 41 extern void thaw_processes(void); 54 42 55 - static inline int try_to_freeze(void) 43 + static inline bool try_to_freeze(void) 
56 44 { 57 - if (freezing(current)) { 58 - refrigerator(); 59 - return 1; 60 - } else 61 - return 0; 45 + might_sleep(); 46 + if (likely(!freezing(current))) 47 + return false; 48 + return __refrigerator(false); 62 49 } 63 50 64 - extern bool freeze_task(struct task_struct *p, bool sig_only); 65 - extern void cancel_freezing(struct task_struct *p); 51 + extern bool freeze_task(struct task_struct *p); 52 + extern bool set_freezable(void); 66 53 67 54 #ifdef CONFIG_CGROUP_FREEZER 68 - extern int cgroup_freezing_or_frozen(struct task_struct *task); 55 + extern bool cgroup_freezing(struct task_struct *task); 69 56 #else /* !CONFIG_CGROUP_FREEZER */ 70 - static inline int cgroup_freezing_or_frozen(struct task_struct *task) 57 + static inline bool cgroup_freezing(struct task_struct *task) 71 58 { 72 - return 0; 59 + return false; 73 60 } 74 61 #endif /* !CONFIG_CGROUP_FREEZER */ 75 62 ··· 67 80 * appropriately in case the child has exited before the freezing of tasks is 68 81 * complete. However, we don't want kernel threads to be frozen in unexpected 69 82 * places, so we allow them to block freeze_processes() instead or to set 70 - * PF_NOFREEZE if needed and PF_FREEZER_SKIP is only set for userland vfork 71 - * parents. Fortunately, in the ____call_usermodehelper() case the parent won't 72 - * really block freeze_processes(), since ____call_usermodehelper() (the child) 73 - * does a little before exec/exit and it can't be frozen before waking up the 74 - * parent. 83 + * PF_NOFREEZE if needed. Fortunately, in the ____call_usermodehelper() case the 84 + * parent won't really block freeze_processes(), since ____call_usermodehelper() 85 + * (the child) does a little before exec/exit and it can't be frozen before 86 + * waking up the parent. 75 87 */ 76 88 77 - /* 78 - * If the current task is a user space one, tell the freezer not to count it as 79 - * freezable. 80 - */ 89 + 90 + /* Tell the freezer not to count the current task as freezable. 
*/ 81 91 static inline void freezer_do_not_count(void) 82 92 { 83 - if (current->mm) 84 - current->flags |= PF_FREEZER_SKIP; 93 + current->flags |= PF_FREEZER_SKIP; 85 94 } 86 95 87 96 /* 88 - * If the current task is a user space one, tell the freezer to count it as 89 - * freezable again and try to freeze it. 97 + * Tell the freezer to count the current task as freezable again and try to 98 + * freeze it. 90 99 */ 91 100 static inline void freezer_count(void) 92 101 { 93 - if (current->mm) { 94 - current->flags &= ~PF_FREEZER_SKIP; 95 - try_to_freeze(); 96 - } 102 + current->flags &= ~PF_FREEZER_SKIP; 103 + try_to_freeze(); 97 104 } 98 105 99 106 /* ··· 99 118 } 100 119 101 120 /* 102 - * Tell the freezer that the current task should be frozen by it 121 + * These macros are intended to be used whenever you want allow a task that's 122 + * sleeping in TASK_UNINTERRUPTIBLE or TASK_KILLABLE state to be frozen. Note 123 + * that neither return any clear indication of whether a freeze event happened 124 + * while in this function. 103 125 */ 104 - static inline void set_freezable(void) 105 - { 106 - current->flags &= ~PF_NOFREEZE; 107 - } 108 126 109 - /* 110 - * Tell the freezer that the current task should be frozen by it and that it 111 - * should send a fake signal to the task to freeze it. 112 - */ 113 - static inline void set_freezable_with_signal(void) 114 - { 115 - current->flags &= ~(PF_NOFREEZE | PF_FREEZER_NOSIG); 116 - } 127 + /* Like schedule(), but should not block the freezer. */ 128 + #define freezable_schedule() \ 129 + ({ \ 130 + freezer_do_not_count(); \ 131 + schedule(); \ 132 + freezer_count(); \ 133 + }) 134 + 135 + /* Like schedule_timeout_killable(), but should not block the freezer. 
*/ 136 + #define freezable_schedule_timeout_killable(timeout) \ 137 + ({ \ 138 + long __retval; \ 139 + freezer_do_not_count(); \ 140 + __retval = schedule_timeout_killable(timeout); \ 141 + freezer_count(); \ 142 + __retval; \ 143 + }) 117 144 118 145 /* 119 146 * Freezer-friendly wrappers around wait_event_interruptible(), ··· 141 152 #define wait_event_freezable(wq, condition) \ 142 153 ({ \ 143 154 int __retval; \ 144 - do { \ 155 + for (;;) { \ 145 156 __retval = wait_event_interruptible(wq, \ 146 157 (condition) || freezing(current)); \ 147 - if (__retval && !freezing(current)) \ 158 + if (__retval || (condition)) \ 148 159 break; \ 149 - else if (!(condition)) \ 150 - __retval = -ERESTARTSYS; \ 151 - } while (try_to_freeze()); \ 160 + try_to_freeze(); \ 161 + } \ 152 162 __retval; \ 153 163 }) 154 - 155 164 156 165 #define wait_event_freezable_timeout(wq, condition, timeout) \ 157 166 ({ \ 158 167 long __retval = timeout; \ 159 - do { \ 168 + for (;;) { \ 160 169 __retval = wait_event_interruptible_timeout(wq, \ 161 170 (condition) || freezing(current), \ 162 171 __retval); \ 163 - } while (try_to_freeze()); \ 172 + if (__retval <= 0 || (condition)) \ 173 + break; \ 174 + try_to_freeze(); \ 175 + } \ 164 176 __retval; \ 165 177 }) 166 - #else /* !CONFIG_FREEZER */ 167 - static inline int frozen(struct task_struct *p) { return 0; } 168 - static inline int freezing(struct task_struct *p) { return 0; } 169 - static inline void set_freeze_flag(struct task_struct *p) {} 170 - static inline void clear_freeze_flag(struct task_struct *p) {} 171 - static inline int thaw_process(struct task_struct *p) { return 1; } 172 178 173 - static inline void refrigerator(void) {} 179 + #else /* !CONFIG_FREEZER */ 180 + static inline bool frozen(struct task_struct *p) { return false; } 181 + static inline bool freezing(struct task_struct *p) { return false; } 182 + static inline void __thaw_task(struct task_struct *t) {} 183 + 184 + static inline bool __refrigerator(bool 
check_kthr_stop) { return false; } 174 185 static inline int freeze_processes(void) { return -ENOSYS; } 175 186 static inline int freeze_kernel_threads(void) { return -ENOSYS; } 176 187 static inline void thaw_processes(void) {} 177 188 178 - static inline int try_to_freeze(void) { return 0; } 189 + static inline bool try_to_freeze(void) { return false; } 179 190 180 191 static inline void freezer_do_not_count(void) {} 181 192 static inline void freezer_count(void) {} 182 193 static inline int freezer_should_skip(struct task_struct *p) { return 0; } 183 194 static inline void set_freezable(void) {} 184 - static inline void set_freezable_with_signal(void) {} 195 + 196 + #define freezable_schedule() schedule() 197 + 198 + #define freezable_schedule_timeout_killable(timeout) \ 199 + schedule_timeout_killable(timeout) 185 200 186 201 #define wait_event_freezable(wq, condition) \ 187 202 wait_event_interruptible(wq, condition)
+2
include/linux/kmod.h
··· 117 117 extern int usermodehelper_disable(void); 118 118 extern void usermodehelper_enable(void); 119 119 extern bool usermodehelper_is_disabled(void); 120 + extern void read_lock_usermodehelper(void); 121 + extern void read_unlock_usermodehelper(void); 120 122 121 123 #endif /* __LINUX_KMOD_H__ */
+1
include/linux/kthread.h
··· 35 35 void kthread_bind(struct task_struct *k, unsigned int cpu); 36 36 int kthread_stop(struct task_struct *k); 37 37 int kthread_should_stop(void); 38 + bool kthread_freezable_should_stop(bool *was_frozen); 38 39 void *kthread_data(struct task_struct *k); 39 40 40 41 int kthreadd(void *unused);
+1 -29
include/linux/platform_device.h
··· 256 256 } 257 257 #endif /* MODULE */ 258 258 259 - #ifdef CONFIG_PM_SLEEP 260 - extern int platform_pm_prepare(struct device *dev); 261 - extern void platform_pm_complete(struct device *dev); 262 - #else 263 - #define platform_pm_prepare NULL 264 - #define platform_pm_complete NULL 265 - #endif 266 - 267 259 #ifdef CONFIG_SUSPEND 268 260 extern int platform_pm_suspend(struct device *dev); 269 - extern int platform_pm_suspend_noirq(struct device *dev); 270 261 extern int platform_pm_resume(struct device *dev); 271 - extern int platform_pm_resume_noirq(struct device *dev); 272 262 #else 273 263 #define platform_pm_suspend NULL 274 264 #define platform_pm_resume NULL 275 - #define platform_pm_suspend_noirq NULL 276 - #define platform_pm_resume_noirq NULL 277 265 #endif 278 266 279 267 #ifdef CONFIG_HIBERNATE_CALLBACKS 280 268 extern int platform_pm_freeze(struct device *dev); 281 - extern int platform_pm_freeze_noirq(struct device *dev); 282 269 extern int platform_pm_thaw(struct device *dev); 283 - extern int platform_pm_thaw_noirq(struct device *dev); 284 270 extern int platform_pm_poweroff(struct device *dev); 285 - extern int platform_pm_poweroff_noirq(struct device *dev); 286 271 extern int platform_pm_restore(struct device *dev); 287 - extern int platform_pm_restore_noirq(struct device *dev); 288 272 #else 289 273 #define platform_pm_freeze NULL 290 274 #define platform_pm_thaw NULL 291 275 #define platform_pm_poweroff NULL 292 276 #define platform_pm_restore NULL 293 - #define platform_pm_freeze_noirq NULL 294 - #define platform_pm_thaw_noirq NULL 295 - #define platform_pm_poweroff_noirq NULL 296 - #define platform_pm_restore_noirq NULL 297 277 #endif 298 278 299 279 #ifdef CONFIG_PM_SLEEP 300 280 #define USE_PLATFORM_PM_SLEEP_OPS \ 301 - .prepare = platform_pm_prepare, \ 302 - .complete = platform_pm_complete, \ 303 281 .suspend = platform_pm_suspend, \ 304 282 .resume = platform_pm_resume, \ 305 283 .freeze = platform_pm_freeze, \ 306 284 .thaw = 
platform_pm_thaw, \ 307 285 .poweroff = platform_pm_poweroff, \ 308 - .restore = platform_pm_restore, \ 309 - .suspend_noirq = platform_pm_suspend_noirq, \ 310 - .resume_noirq = platform_pm_resume_noirq, \ 311 - .freeze_noirq = platform_pm_freeze_noirq, \ 312 - .thaw_noirq = platform_pm_thaw_noirq, \ 313 - .poweroff_noirq = platform_pm_poweroff_noirq, \ 314 - .restore_noirq = platform_pm_restore_noirq, 286 + .restore = platform_pm_restore, 315 287 #else 316 288 #define USE_PLATFORM_PM_SLEEP_OPS 317 289 #endif
+2 -13
include/linux/pm.h
··· 300 300 SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \ 301 301 } 302 302 303 - /* 304 - * Use this for subsystems (bus types, device types, device classes) that don't 305 - * need any special suspend/resume handling in addition to invoking the PM 306 - * callbacks provided by device drivers supporting both the system sleep PM and 307 - * runtime PM, make the pm member point to generic_subsys_pm_ops. 308 - */ 309 - #ifdef CONFIG_PM 310 - extern struct dev_pm_ops generic_subsys_pm_ops; 311 - #define GENERIC_SUBSYS_PM_OPS (&generic_subsys_pm_ops) 312 - #else 313 - #define GENERIC_SUBSYS_PM_OPS NULL 314 - #endif 315 - 316 303 /** 317 304 * PM_EVENT_ messages 318 305 * ··· 508 521 unsigned long active_jiffies; 509 522 unsigned long suspended_jiffies; 510 523 unsigned long accounting_timestamp; 524 + ktime_t suspend_time; 525 + s64 max_time_suspended_ns; 511 526 #endif 512 527 struct pm_subsys_data *subsys_data; /* Owned by the subsystem. */ 513 528 struct pm_qos_constraints *constraints;
+96 -7
include/linux/pm_domain.h
··· 10 10 #define _LINUX_PM_DOMAIN_H 11 11 12 12 #include <linux/device.h> 13 + #include <linux/err.h> 13 14 14 15 enum gpd_status { 15 16 GPD_STATE_ACTIVE = 0, /* PM domain is active */ ··· 22 21 23 22 struct dev_power_governor { 24 23 bool (*power_down_ok)(struct dev_pm_domain *domain); 24 + bool (*stop_ok)(struct device *dev); 25 + }; 26 + 27 + struct gpd_dev_ops { 28 + int (*start)(struct device *dev); 29 + int (*stop)(struct device *dev); 30 + int (*save_state)(struct device *dev); 31 + int (*restore_state)(struct device *dev); 32 + int (*suspend)(struct device *dev); 33 + int (*suspend_late)(struct device *dev); 34 + int (*resume_early)(struct device *dev); 35 + int (*resume)(struct device *dev); 36 + int (*freeze)(struct device *dev); 37 + int (*freeze_late)(struct device *dev); 38 + int (*thaw_early)(struct device *dev); 39 + int (*thaw)(struct device *dev); 40 + bool (*active_wakeup)(struct device *dev); 25 41 }; 26 42 27 43 struct generic_pm_domain { ··· 50 32 struct mutex lock; 51 33 struct dev_power_governor *gov; 52 34 struct work_struct power_off_work; 35 + char *name; 53 36 unsigned int in_progress; /* Number of devices being suspended now */ 54 37 atomic_t sd_count; /* Number of subdomains with power "on" */ 55 38 enum gpd_status status; /* Current state of the domain */ ··· 63 44 bool suspend_power_off; /* Power status before system suspend */ 64 45 bool dev_irq_safe; /* Device callbacks are IRQ-safe */ 65 46 int (*power_off)(struct generic_pm_domain *domain); 47 + s64 power_off_latency_ns; 66 48 int (*power_on)(struct generic_pm_domain *domain); 67 - int (*start_device)(struct device *dev); 68 - int (*stop_device)(struct device *dev); 69 - bool (*active_wakeup)(struct device *dev); 49 + s64 power_on_latency_ns; 50 + struct gpd_dev_ops dev_ops; 51 + s64 break_even_ns; /* Power break even for the entire domain. */ 52 + s64 max_off_time_ns; /* Maximum allowed "suspended" time. 
*/ 53 + ktime_t power_off_time; 70 54 }; 71 55 72 56 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd) ··· 84 62 struct list_head slave_node; 85 63 }; 86 64 65 + struct gpd_timing_data { 66 + s64 stop_latency_ns; 67 + s64 start_latency_ns; 68 + s64 save_state_latency_ns; 69 + s64 restore_state_latency_ns; 70 + s64 break_even_ns; 71 + }; 72 + 87 73 struct generic_pm_domain_data { 88 74 struct pm_domain_data base; 75 + struct gpd_dev_ops ops; 76 + struct gpd_timing_data td; 89 77 bool need_restore; 90 78 }; 91 79 ··· 105 73 } 106 74 107 75 #ifdef CONFIG_PM_GENERIC_DOMAINS 108 - extern int pm_genpd_add_device(struct generic_pm_domain *genpd, 109 - struct device *dev); 76 + static inline struct generic_pm_domain_data *dev_gpd_data(struct device *dev) 77 + { 78 + return to_gpd_data(dev->power.subsys_data->domain_data); 79 + } 80 + 81 + extern struct dev_power_governor simple_qos_governor; 82 + 83 + extern struct generic_pm_domain *dev_to_genpd(struct device *dev); 84 + extern int __pm_genpd_add_device(struct generic_pm_domain *genpd, 85 + struct device *dev, 86 + struct gpd_timing_data *td); 87 + 88 + static inline int pm_genpd_add_device(struct generic_pm_domain *genpd, 89 + struct device *dev) 90 + { 91 + return __pm_genpd_add_device(genpd, dev, NULL); 92 + } 93 + 110 94 extern int pm_genpd_remove_device(struct generic_pm_domain *genpd, 111 95 struct device *dev); 112 96 extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, 113 97 struct generic_pm_domain *new_subdomain); 114 98 extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, 115 99 struct generic_pm_domain *target); 100 + extern int pm_genpd_add_callbacks(struct device *dev, 101 + struct gpd_dev_ops *ops, 102 + struct gpd_timing_data *td); 103 + extern int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td); 116 104 extern void pm_genpd_init(struct generic_pm_domain *genpd, 117 105 struct dev_power_governor *gov, bool is_off); 106 + 118 107 
extern int pm_genpd_poweron(struct generic_pm_domain *genpd); 108 + 109 + extern bool default_stop_ok(struct device *dev); 110 + 111 + extern struct dev_power_governor pm_domain_always_on_gov; 119 112 #else 113 + 114 + static inline struct generic_pm_domain *dev_to_genpd(struct device *dev) 115 + { 116 + return ERR_PTR(-ENOSYS); 117 + } 118 + static inline int __pm_genpd_add_device(struct generic_pm_domain *genpd, 119 + struct device *dev, 120 + struct gpd_timing_data *td) 121 + { 122 + return -ENOSYS; 123 + } 120 124 static inline int pm_genpd_add_device(struct generic_pm_domain *genpd, 121 125 struct device *dev) 122 126 { ··· 173 105 { 174 106 return -ENOSYS; 175 107 } 176 - static inline void pm_genpd_init(struct generic_pm_domain *genpd, 177 - struct dev_power_governor *gov, bool is_off) {} 108 + static inline int pm_genpd_add_callbacks(struct device *dev, 109 + struct gpd_dev_ops *ops, 110 + struct gpd_timing_data *td) 111 + { 112 + return -ENOSYS; 113 + } 114 + static inline int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td) 115 + { 116 + return -ENOSYS; 117 + } 118 + static inline void pm_genpd_init(struct generic_pm_domain *genpd, bool is_off) 119 + { 120 + } 178 121 static inline int pm_genpd_poweron(struct generic_pm_domain *genpd) 179 122 { 180 123 return -ENOSYS; 181 124 } 125 + static inline bool default_stop_ok(struct device *dev) 126 + { 127 + return false; 128 + } 129 + #define pm_domain_always_on_gov NULL 182 130 #endif 131 + 132 + static inline int pm_genpd_remove_callbacks(struct device *dev) 133 + { 134 + return __pm_genpd_remove_callbacks(dev, true); 135 + } 183 136 184 137 #ifdef CONFIG_PM_GENERIC_DOMAINS_RUNTIME 185 138 extern void genpd_queue_power_off_work(struct generic_pm_domain *genpd);
+8
include/linux/pm_qos.h
··· 78 78 int pm_qos_request_active(struct pm_qos_request *req); 79 79 s32 pm_qos_read_value(struct pm_qos_constraints *c); 80 80 81 + s32 __dev_pm_qos_read_value(struct device *dev); 81 82 s32 dev_pm_qos_read_value(struct device *dev); 82 83 int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req, 83 84 s32 value); ··· 92 91 int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier); 93 92 void dev_pm_qos_constraints_init(struct device *dev); 94 93 void dev_pm_qos_constraints_destroy(struct device *dev); 94 + int dev_pm_qos_add_ancestor_request(struct device *dev, 95 + struct dev_pm_qos_request *req, s32 value); 95 96 #else 96 97 static inline int pm_qos_update_target(struct pm_qos_constraints *c, 97 98 struct plist_node *node, ··· 122 119 static inline s32 pm_qos_read_value(struct pm_qos_constraints *c) 123 120 { return 0; } 124 121 122 + static inline s32 __dev_pm_qos_read_value(struct device *dev) 123 + { return 0; } 125 124 static inline s32 dev_pm_qos_read_value(struct device *dev) 126 125 { return 0; } 127 126 static inline int dev_pm_qos_add_request(struct device *dev, ··· 155 150 { 156 151 dev->power.power_state = PMSG_INVALID; 157 152 } 153 + static inline int dev_pm_qos_add_ancestor_request(struct device *dev, 154 + struct dev_pm_qos_request *req, s32 value) 155 + { return 0; } 158 156 #endif 159 157 160 158 #endif
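Editor's note: `dev_pm_qos_add_ancestor_request()`, added above, lets a driver on a child device (per the merge summary, the st1232 touchscreen on I2C is the first user) constrain the nearest ancestor that accepts device PM QoS requests, typically its bus controller. A hedged sketch of the intended usage pattern — the request variable and the 100 µs figure are illustrative, not taken from the series:

```c
#include <linux/pm_qos.h>

static struct dev_pm_qos_request my_qos_req;

static void my_input_event(struct device *dev)
{
	/*
	 * Walk up from dev to the first ancestor with a constraints
	 * object and ask it to keep resume latency low while input
	 * events are flowing.
	 */
	dev_pm_qos_add_ancestor_request(dev, &my_qos_req, 100);
}
```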
+5
include/linux/pm_runtime.h
··· 45 45 extern void __pm_runtime_use_autosuspend(struct device *dev, bool use); 46 46 extern void pm_runtime_set_autosuspend_delay(struct device *dev, int delay); 47 47 extern unsigned long pm_runtime_autosuspend_expiration(struct device *dev); 48 + extern void pm_runtime_update_max_time_suspended(struct device *dev, 49 + s64 delta_ns); 48 50 49 51 static inline bool pm_children_suspended(struct device *dev) 50 52 { ··· 149 147 int delay) {} 150 148 static inline unsigned long pm_runtime_autosuspend_expiration( 151 149 struct device *dev) { return 0; } 150 + 151 + static inline void pm_runtime_update_max_time_suspended(struct device *dev, 152 + s64 delta_ns) {} 152 153 153 154 #endif /* !CONFIG_PM_RUNTIME */ 154 155
+1 -3
include/linux/sched.h
··· 220 220 ((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0) 221 221 #define task_contributes_to_load(task) \ 222 222 ((task->state & TASK_UNINTERRUPTIBLE) != 0 && \ 223 - (task->flags & PF_FREEZING) == 0) 223 + (task->flags & PF_FROZEN) == 0) 224 224 225 225 #define __set_task_state(tsk, state_value) \ 226 226 do { (tsk)->state = (state_value); } while (0) ··· 1787 1787 #define PF_MEMALLOC 0x00000800 /* Allocating memory */ 1788 1788 #define PF_NPROC_EXCEEDED 0x00001000 /* set_user noticed that RLIMIT_NPROC was exceeded */ 1789 1789 #define PF_USED_MATH 0x00002000 /* if unset the fpu must be initialized before use */ 1790 - #define PF_FREEZING 0x00004000 /* freeze in progress. do not account to load */ 1791 1790 #define PF_NOFREEZE 0x00008000 /* this thread should not be frozen */ 1792 1791 #define PF_FROZEN 0x00010000 /* frozen for system suspend */ 1793 1792 #define PF_FSTRANS 0x00020000 /* inside a filesystem transaction */ ··· 1802 1803 #define PF_MEMPOLICY 0x10000000 /* Non-default NUMA mempolicy */ 1803 1804 #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */ 1804 1805 #define PF_FREEZER_SKIP 0x40000000 /* Freezer should not count it as freezable */ 1805 - #define PF_FREEZER_NOSIG 0x80000000 /* Freezer won't send signals to it */ 1806 1806 1807 1807 /* 1808 1808 * Only the _current_ task can read/write to tsk->flags, but other
+1
include/linux/sh_intc.h
··· 95 95 unsigned int num_resources; 96 96 intc_enum force_enable; 97 97 intc_enum force_disable; 98 + bool skip_syscore_suspend; 98 99 struct intc_hw_desc hw; 99 100 }; 100 101
+17 -18
include/linux/suspend.h
··· 6 6 #include <linux/init.h> 7 7 #include <linux/pm.h> 8 8 #include <linux/mm.h> 9 + #include <linux/freezer.h> 9 10 #include <asm/errno.h> 10 11 11 12 #ifdef CONFIG_VT ··· 332 331 #define PM_RESTORE_PREPARE 0x0005 /* Going to restore a saved image */ 333 332 #define PM_POST_RESTORE 0x0006 /* Restore failed */ 334 333 334 + extern struct mutex pm_mutex; 335 + 335 336 #ifdef CONFIG_PM_SLEEP 336 337 void save_processor_state(void); 337 338 void restore_processor_state(void); ··· 354 351 extern bool pm_wakeup_pending(void); 355 352 extern bool pm_get_wakeup_count(unsigned int *count); 356 353 extern bool pm_save_wakeup_count(unsigned int count); 354 + 355 + static inline void lock_system_sleep(void) 356 + { 357 + freezer_do_not_count(); 358 + mutex_lock(&pm_mutex); 359 + } 360 + 361 + static inline void unlock_system_sleep(void) 362 + { 363 + mutex_unlock(&pm_mutex); 364 + freezer_count(); 365 + } 366 + 357 367 #else /* !CONFIG_PM_SLEEP */ 358 368 359 369 static inline int register_pm_notifier(struct notifier_block *nb) ··· 382 366 #define pm_notifier(fn, pri) do { (void)(fn); } while (0) 383 367 384 368 static inline bool pm_wakeup_pending(void) { return false; } 385 - #endif /* !CONFIG_PM_SLEEP */ 386 369 387 - extern struct mutex pm_mutex; 388 - 389 - #ifndef CONFIG_HIBERNATE_CALLBACKS 390 370 static inline void lock_system_sleep(void) {} 391 371 static inline void unlock_system_sleep(void) {} 392 372 393 - #else 394 - 395 - /* Let some subsystems like memory hotadd exclude hibernation */ 396 - 397 - static inline void lock_system_sleep(void) 398 - { 399 - mutex_lock(&pm_mutex); 400 - } 401 - 402 - static inline void unlock_system_sleep(void) 403 - { 404 - mutex_unlock(&pm_mutex); 405 - } 406 - #endif 373 + #endif /* !CONFIG_PM_SLEEP */ 407 374 408 375 #ifdef CONFIG_ARCH_SAVE_PAGE_KEYS 409 376 /*
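Editor's note: the suspend.h hunk above moves `lock_system_sleep()`/`unlock_system_sleep()` under `CONFIG_PM_SLEEP` and wraps the `pm_mutex` acquisition in `freezer_do_not_count()`/`freezer_count()`, so a task blocked on the mutex is exempt from freezing and cannot deadlock a concurrent freeze. The caller-side pattern is unchanged; subsystems that must exclude system sleep (memory hotplug being the classic example) still just bracket their critical section:

```c
#include <linux/suspend.h>

lock_system_sleep();
/* ... operations that must not race with suspend/hibernation ... */
unlock_system_sleep();
```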
+28 -35
kernel/cgroup_freezer.c
··· 48 48 struct freezer, css); 49 49 } 50 50 51 - static inline int __cgroup_freezing_or_frozen(struct task_struct *task) 51 + bool cgroup_freezing(struct task_struct *task) 52 52 { 53 - enum freezer_state state = task_freezer(task)->state; 54 - return (state == CGROUP_FREEZING) || (state == CGROUP_FROZEN); 55 - } 53 + enum freezer_state state; 54 + bool ret; 56 55 57 - int cgroup_freezing_or_frozen(struct task_struct *task) 58 - { 59 - int result; 60 - task_lock(task); 61 - result = __cgroup_freezing_or_frozen(task); 62 - task_unlock(task); 63 - return result; 56 + rcu_read_lock(); 57 + state = task_freezer(task)->state; 58 + ret = state == CGROUP_FREEZING || state == CGROUP_FROZEN; 59 + rcu_read_unlock(); 60 + 61 + return ret; 64 62 } 65 63 66 64 /* ··· 100 102 * freezer_can_attach(): 101 103 * cgroup_mutex (held by caller of can_attach) 102 104 * 103 - * cgroup_freezing_or_frozen(): 104 - * task->alloc_lock (to get task's cgroup) 105 - * 106 105 * freezer_fork() (preserving fork() performance means can't take cgroup_mutex): 107 106 * freezer->lock 108 107 * sighand->siglock (if the cgroup is freezing) ··· 125 130 * write_lock css_set_lock (cgroup iterator start) 126 131 * task->alloc_lock 127 132 * read_lock css_set_lock (cgroup iterator start) 128 - * task->alloc_lock (inside thaw_process(), prevents race with refrigerator()) 133 + * task->alloc_lock (inside __thaw_task(), prevents race with refrigerator()) 129 134 * sighand->siglock 130 135 */ 131 136 static struct cgroup_subsys_state *freezer_create(struct cgroup_subsys *ss, ··· 145 150 static void freezer_destroy(struct cgroup_subsys *ss, 146 151 struct cgroup *cgroup) 147 152 { 148 - kfree(cgroup_freezer(cgroup)); 153 + struct freezer *freezer = cgroup_freezer(cgroup); 154 + 155 + if (freezer->state != CGROUP_THAWED) 156 + atomic_dec(&system_freezing_cnt); 157 + kfree(freezer); 149 158 } 150 159 151 160 /* task is frozen or will freeze immediately when next it gets woken */ ··· 183 184 184 185 static int 
freezer_can_attach_task(struct cgroup *cgrp, struct task_struct *tsk) 185 186 { 186 - rcu_read_lock(); 187 - if (__cgroup_freezing_or_frozen(tsk)) { 188 - rcu_read_unlock(); 189 - return -EBUSY; 190 - } 191 - rcu_read_unlock(); 192 - return 0; 187 + return cgroup_freezing(tsk) ? -EBUSY : 0; 193 188 } 194 189 195 190 static void freezer_fork(struct cgroup_subsys *ss, struct task_struct *task) ··· 213 220 214 221 /* Locking avoids race with FREEZING -> THAWED transitions. */ 215 222 if (freezer->state == CGROUP_FREEZING) 216 - freeze_task(task, true); 223 + freeze_task(task); 217 224 spin_unlock_irq(&freezer->lock); 218 225 } 219 226 ··· 231 238 cgroup_iter_start(cgroup, &it); 232 239 while ((task = cgroup_iter_next(cgroup, &it))) { 233 240 ntotal++; 234 - if (is_task_frozen_enough(task)) 241 + if (freezing(task) && is_task_frozen_enough(task)) 235 242 nfrozen++; 236 243 } 237 244 ··· 279 286 struct task_struct *task; 280 287 unsigned int num_cant_freeze_now = 0; 281 288 282 - freezer->state = CGROUP_FREEZING; 283 289 cgroup_iter_start(cgroup, &it); 284 290 while ((task = cgroup_iter_next(cgroup, &it))) { 285 - if (!freeze_task(task, true)) 291 + if (!freeze_task(task)) 286 292 continue; 287 293 if (is_task_frozen_enough(task)) 288 294 continue; ··· 299 307 struct task_struct *task; 300 308 301 309 cgroup_iter_start(cgroup, &it); 302 - while ((task = cgroup_iter_next(cgroup, &it))) { 303 - thaw_process(task); 304 - } 310 + while ((task = cgroup_iter_next(cgroup, &it))) 311 + __thaw_task(task); 305 312 cgroup_iter_end(cgroup, &it); 306 - 307 - freezer->state = CGROUP_THAWED; 308 313 } 309 314 310 315 static int freezer_change_state(struct cgroup *cgroup, ··· 315 326 spin_lock_irq(&freezer->lock); 316 327 317 328 update_if_frozen(cgroup, freezer); 318 - if (goal_state == freezer->state) 319 - goto out; 320 329 321 330 switch (goal_state) { 322 331 case CGROUP_THAWED: 332 + if (freezer->state != CGROUP_THAWED) 333 + atomic_dec(&system_freezing_cnt); 334 + freezer->state 
= CGROUP_THAWED; 323 335 unfreeze_cgroup(cgroup, freezer); 324 336 break; 325 337 case CGROUP_FROZEN: 338 + if (freezer->state == CGROUP_THAWED) 339 + atomic_inc(&system_freezing_cnt); 340 + freezer->state = CGROUP_FREEZING; 326 341 retval = try_to_freeze_cgroup(cgroup, freezer); 327 342 break; 328 343 default: 329 344 BUG(); 330 345 } 331 - out: 346 + 332 347 spin_unlock_irq(&freezer->lock); 333 348 334 349 return retval;
+2 -2
kernel/cpu.c
··· 470 470 cpu_maps_update_done(); 471 471 } 472 472 473 - static int alloc_frozen_cpus(void) 473 + static int __init alloc_frozen_cpus(void) 474 474 { 475 475 if (!alloc_cpumask_var(&frozen_cpus, GFP_KERNEL|__GFP_ZERO)) 476 476 return -ENOMEM; ··· 543 543 } 544 544 545 545 546 - int cpu_hotplug_pm_sync_init(void) 546 + static int __init cpu_hotplug_pm_sync_init(void) 547 547 { 548 548 pm_notifier(cpu_hotplug_pm_callback, 0); 549 549 return 0;
+1 -2
kernel/exit.c
··· 679 679 tsk->mm = NULL; 680 680 up_read(&mm->mmap_sem); 681 681 enter_lazy_tlb(mm, current); 682 - /* We don't want this task to be frozen prematurely */ 683 - clear_freeze_flag(tsk); 684 682 task_unlock(tsk); 685 683 mm_update_next_owner(mm); 686 684 mmput(mm); ··· 1038 1040 exit_rcu(); 1039 1041 /* causes final put_task_struct in finish_task_switch(). */ 1040 1042 tsk->state = TASK_DEAD; 1043 + tsk->flags |= PF_NOFREEZE; /* tell freezer to ignore us */ 1041 1044 schedule(); 1042 1045 BUG(); 1043 1046 /* Avoid "noreturn function does return". */
-1
kernel/fork.c
··· 992 992 new_flags |= PF_FORKNOEXEC; 993 993 new_flags |= PF_STARTING; 994 994 p->flags = new_flags; 995 - clear_freeze_flag(p); 996 995 } 997 996 998 997 SYSCALL_DEFINE1(set_tid_address, int __user *, tidptr)
+107 -102
kernel/freezer.c
··· 9 9 #include <linux/export.h> 10 10 #include <linux/syscalls.h> 11 11 #include <linux/freezer.h> 12 + #include <linux/kthread.h> 12 13 13 - /* 14 - * freezing is complete, mark current process as frozen 14 + /* total number of freezing conditions in effect */ 15 + atomic_t system_freezing_cnt = ATOMIC_INIT(0); 16 + EXPORT_SYMBOL(system_freezing_cnt); 17 + 18 + /* indicate whether PM freezing is in effect, protected by pm_mutex */ 19 + bool pm_freezing; 20 + bool pm_nosig_freezing; 21 + 22 + /* protects freezing and frozen transitions */ 23 + static DEFINE_SPINLOCK(freezer_lock); 24 + 25 + /** 26 + * freezing_slow_path - slow path for testing whether a task needs to be frozen 27 + * @p: task to be tested 28 + * 29 + * This function is called by freezing() if system_freezing_cnt isn't zero 30 + * and tests whether @p needs to enter and stay in frozen state. Can be 31 + * called under any context. The freezers are responsible for ensuring the 32 + * target tasks see the updated state. 15 33 */ 16 - static inline void frozen_process(void) 34 + bool freezing_slow_path(struct task_struct *p) 17 35 { 18 - if (!unlikely(current->flags & PF_NOFREEZE)) { 19 - current->flags |= PF_FROZEN; 20 - smp_wmb(); 21 - } 22 - clear_freeze_flag(current); 36 + if (p->flags & PF_NOFREEZE) 37 + return false; 38 + 39 + if (pm_nosig_freezing || cgroup_freezing(p)) 40 + return true; 41 + 42 + if (pm_freezing && !(p->flags & PF_KTHREAD)) 43 + return true; 44 + 45 + return false; 23 46 } 47 + EXPORT_SYMBOL(freezing_slow_path); 24 48 25 49 /* Refrigerator is place where frozen processes are stored :-). */ 26 - void refrigerator(void) 50 + bool __refrigerator(bool check_kthr_stop) 27 51 { 28 52 /* Hmm, should we be allowed to suspend when there are realtime 29 53 processes around? 
*/ 30 - long save; 54 + bool was_frozen = false; 55 + long save = current->state; 31 56 32 - task_lock(current); 33 - if (freezing(current)) { 34 - frozen_process(); 35 - task_unlock(current); 36 - } else { 37 - task_unlock(current); 38 - return; 39 - } 40 - save = current->state; 41 57 pr_debug("%s entered refrigerator\n", current->comm); 42 - 43 - spin_lock_irq(&current->sighand->siglock); 44 - recalc_sigpending(); /* We sent fake signal, clean it up */ 45 - spin_unlock_irq(&current->sighand->siglock); 46 - 47 - /* prevent accounting of that task to load */ 48 - current->flags |= PF_FREEZING; 49 58 50 59 for (;;) { 51 60 set_current_state(TASK_UNINTERRUPTIBLE); 52 - if (!frozen(current)) 61 + 62 + spin_lock_irq(&freezer_lock); 63 + current->flags |= PF_FROZEN; 64 + if (!freezing(current) || 65 + (check_kthr_stop && kthread_should_stop())) 66 + current->flags &= ~PF_FROZEN; 67 + spin_unlock_irq(&freezer_lock); 68 + 69 + if (!(current->flags & PF_FROZEN)) 53 70 break; 71 + was_frozen = true; 54 72 schedule(); 55 73 } 56 74 57 - /* Remove the accounting blocker */ 58 - current->flags &= ~PF_FREEZING; 59 - 60 75 pr_debug("%s left refrigerator\n", current->comm); 61 - __set_current_state(save); 76 + 77 + /* 78 + * Restore saved task state before returning. The mb'd version 79 + * needs to be used; otherwise, it might silently break 80 + * synchronization which depends on ordered task state change. 
81 + */ 82 + set_current_state(save); 83 + 84 + return was_frozen; 62 85 } 63 - EXPORT_SYMBOL(refrigerator); 86 + EXPORT_SYMBOL(__refrigerator); 64 87 65 88 static void fake_signal_wake_up(struct task_struct *p) 66 89 { 67 90 unsigned long flags; 68 91 69 - spin_lock_irqsave(&p->sighand->siglock, flags); 70 - signal_wake_up(p, 0); 71 - spin_unlock_irqrestore(&p->sighand->siglock, flags); 92 + if (lock_task_sighand(p, &flags)) { 93 + signal_wake_up(p, 0); 94 + unlock_task_sighand(p, &flags); 95 + } 72 96 } 73 97 74 98 /** 75 - * freeze_task - send a freeze request to given task 76 - * @p: task to send the request to 77 - * @sig_only: if set, the request will only be sent if the task has the 78 - * PF_FREEZER_NOSIG flag unset 79 - * Return value: 'false', if @sig_only is set and the task has 80 - * PF_FREEZER_NOSIG set or the task is frozen, 'true', otherwise 99 + * freeze_task - send a freeze request to given task 100 + * @p: task to send the request to 81 101 * 82 - * The freeze request is sent by setting the tasks's TIF_FREEZE flag and 83 - * either sending a fake signal to it or waking it up, depending on whether 84 - * or not it has PF_FREEZER_NOSIG set. If @sig_only is set and the task 85 - * has PF_FREEZER_NOSIG set (ie. it is a typical kernel thread), its 86 - * TIF_FREEZE flag will not be set. 102 + * If @p is freezing, the freeze request is sent by setting %TIF_FREEZE 103 + * flag and either sending a fake signal to it or waking it up, depending 104 + * on whether it has %PF_FREEZER_NOSIG set. 105 + * 106 + * RETURNS: 107 + * %false, if @p is not freezing or already frozen; %true, otherwise 87 108 */ 88 - bool freeze_task(struct task_struct *p, bool sig_only) 109 + bool freeze_task(struct task_struct *p) 89 110 { 90 - /* 91 - * We first check if the task is freezing and next if it has already 92 - * been frozen to avoid the race with frozen_process() which first marks 93 - * the task as frozen and next clears its TIF_FREEZE. 
94 - */ 95 - if (!freezing(p)) { 96 - smp_rmb(); 97 - if (frozen(p)) 98 - return false; 111 + unsigned long flags; 99 112 100 - if (!sig_only || should_send_signal(p)) 101 - set_freeze_flag(p); 102 - else 103 - return false; 113 + spin_lock_irqsave(&freezer_lock, flags); 114 + if (!freezing(p) || frozen(p)) { 115 + spin_unlock_irqrestore(&freezer_lock, flags); 116 + return false; 104 117 } 105 118 106 - if (should_send_signal(p)) { 119 + if (!(p->flags & PF_KTHREAD)) { 107 120 fake_signal_wake_up(p); 108 121 /* 109 122 * fake_signal_wake_up() goes through p's scheduler ··· 124 111 * TASK_RUNNING transition can't race with task state 125 112 * testing in try_to_freeze_tasks(). 126 113 */ 127 - } else if (sig_only) { 128 - return false; 129 114 } else { 130 115 wake_up_state(p, TASK_INTERRUPTIBLE); 131 116 } 132 117 118 + spin_unlock_irqrestore(&freezer_lock, flags); 133 119 return true; 134 120 } 135 121 136 - void cancel_freezing(struct task_struct *p) 122 + void __thaw_task(struct task_struct *p) 137 123 { 138 124 unsigned long flags; 139 125 140 - if (freezing(p)) { 141 - pr_debug(" clean up: %s\n", p->comm); 142 - clear_freeze_flag(p); 143 - spin_lock_irqsave(&p->sighand->siglock, flags); 144 - recalc_sigpending_and_wake(p); 145 - spin_unlock_irqrestore(&p->sighand->siglock, flags); 146 - } 147 - } 148 - 149 - static int __thaw_process(struct task_struct *p) 150 - { 151 - if (frozen(p)) { 152 - p->flags &= ~PF_FROZEN; 153 - return 1; 154 - } 155 - clear_freeze_flag(p); 156 - return 0; 157 - } 158 - 159 - /* 160 - * Wake up a frozen process 161 - * 162 - * task_lock() is needed to prevent the race with refrigerator() which may 163 - * occur if the freezing of tasks fails. Namely, without the lock, if the 164 - * freezing of tasks failed, thaw_tasks() might have run before a task in 165 - * refrigerator() could call frozen_process(), in which case the task would be 166 - * frozen and no one would thaw it. 
167 - */ 168 - int thaw_process(struct task_struct *p) 169 - { 170 - task_lock(p); 171 - if (__thaw_process(p) == 1) { 172 - task_unlock(p); 126 + /* 127 + * Clear freezing and kick @p if FROZEN. Clearing is guaranteed to 128 + * be visible to @p as waking up implies wmb. Waking up inside 129 + * freezer_lock also prevents wakeups from leaking outside 130 + * refrigerator. 131 + */ 132 + spin_lock_irqsave(&freezer_lock, flags); 133 + if (frozen(p)) 173 134 wake_up_process(p); 174 - return 1; 175 - } 176 - task_unlock(p); 177 - return 0; 135 + spin_unlock_irqrestore(&freezer_lock, flags); 178 136 } 179 - EXPORT_SYMBOL(thaw_process); 137 + 138 + /** 139 + * set_freezable - make %current freezable 140 + * 141 + * Mark %current freezable and enter refrigerator if necessary. 142 + */ 143 + bool set_freezable(void) 144 + { 145 + might_sleep(); 146 + 147 + /* 148 + * Modify flags while holding freezer_lock. This ensures the 149 + * freezer notices that we aren't frozen yet or the freezing 150 + * condition is visible to try_to_freeze() below. 151 + */ 152 + spin_lock_irq(&freezer_lock); 153 + current->flags &= ~PF_NOFREEZE; 154 + spin_unlock_irq(&freezer_lock); 155 + 156 + return try_to_freeze(); 157 + } 158 + EXPORT_SYMBOL(set_freezable);
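Editor's note: the freezer.c rewrite above makes `freezing_slow_path()` the out-of-line half of the freezing test, gated on the new `system_freezing_cnt`. The fast-path caller lives in include/linux/freezer.h and is not part of this hunk, but it is essentially the following (shown here as a hedged reconstruction, so the role of `system_freezing_cnt` is clear):

```c
/* include/linux/freezer.h (reconstructed, not from this hunk) */
static inline bool freezing(struct task_struct *p)
{
	/* Common case: no freezing condition anywhere, stay lock-free. */
	if (likely(!atomic_read(&system_freezing_cnt)))
		return false;
	return freezing_slow_path(p);
}
```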
+2 -2
kernel/kexec.c
··· 1523 1523 1524 1524 #ifdef CONFIG_KEXEC_JUMP 1525 1525 if (kexec_image->preserve_context) { 1526 - mutex_lock(&pm_mutex); 1526 + lock_system_sleep(); 1527 1527 pm_prepare_console(); 1528 1528 error = freeze_processes(); 1529 1529 if (error) { ··· 1576 1576 thaw_processes(); 1577 1577 Restore_console: 1578 1578 pm_restore_console(); 1579 - mutex_unlock(&pm_mutex); 1579 + unlock_system_sleep(); 1580 1580 } 1581 1581 #endif 1582 1582
+24 -3
kernel/kmod.c
··· 36 36 #include <linux/resource.h> 37 37 #include <linux/notifier.h> 38 38 #include <linux/suspend.h> 39 + #include <linux/rwsem.h> 39 40 #include <asm/uaccess.h> 40 41 41 42 #include <trace/events/module.h> ··· 51 50 static kernel_cap_t usermodehelper_bset = CAP_FULL_SET; 52 51 static kernel_cap_t usermodehelper_inheritable = CAP_FULL_SET; 53 52 static DEFINE_SPINLOCK(umh_sysctl_lock); 53 + static DECLARE_RWSEM(umhelper_sem); 54 54 55 55 #ifdef CONFIG_MODULES 56 56 ··· 277 275 * If set, call_usermodehelper_exec() will exit immediately returning -EBUSY 278 276 * (used for preventing user land processes from being created after the user 279 277 * land has been frozen during a system-wide hibernation or suspend operation). 278 + * Should always be manipulated under umhelper_sem acquired for write. 280 279 */ 281 280 static int usermodehelper_disabled = 1; 282 281 ··· 285 282 static atomic_t running_helpers = ATOMIC_INIT(0); 286 283 287 284 /* 288 - * Wait queue head used by usermodehelper_pm_callback() to wait for all running 285 + * Wait queue head used by usermodehelper_disable() to wait for all running 289 286 * helpers to finish. 
290 287 */ 291 288 static DECLARE_WAIT_QUEUE_HEAD(running_helpers_waitq); 292 289 293 290 /* 294 291 * Time to wait for running_helpers to become zero before the setting of 295 - * usermodehelper_disabled in usermodehelper_pm_callback() fails 292 + * usermodehelper_disabled in usermodehelper_disable() fails 296 293 */ 297 294 #define RUNNING_HELPERS_TIMEOUT (5 * HZ) 295 + 296 + void read_lock_usermodehelper(void) 297 + { 298 + down_read(&umhelper_sem); 299 + } 300 + EXPORT_SYMBOL_GPL(read_lock_usermodehelper); 301 + 302 + void read_unlock_usermodehelper(void) 303 + { 304 + up_read(&umhelper_sem); 305 + } 306 + EXPORT_SYMBOL_GPL(read_unlock_usermodehelper); 298 307 299 308 /** 300 309 * usermodehelper_disable - prevent new helpers from being started ··· 315 300 { 316 301 long retval; 317 302 303 + down_write(&umhelper_sem); 318 304 usermodehelper_disabled = 1; 319 - smp_mb(); 305 + up_write(&umhelper_sem); 306 + 320 307 /* 321 308 * From now on call_usermodehelper_exec() won't start any new 322 309 * helpers, so it is sufficient if running_helpers turns out to ··· 331 314 if (retval) 332 315 return 0; 333 316 317 + down_write(&umhelper_sem); 334 318 usermodehelper_disabled = 0; 319 + up_write(&umhelper_sem); 335 320 return -EAGAIN; 336 321 } 337 322 ··· 342 323 */ 343 324 void usermodehelper_enable(void) 344 325 { 326 + down_write(&umhelper_sem); 345 327 usermodehelper_disabled = 0; 328 + up_write(&umhelper_sem); 346 329 } 347 330 348 331 /**
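Editor's note: the kmod.c hunk above replaces the `smp_mb()`-based publication of `usermodehelper_disabled` with `umhelper_sem`, exporting `read_lock_usermodehelper()`/`read_unlock_usermodehelper()` for callers (this is the "racy usermodehelper_is_disabled()" fix from the merge summary). The pattern it enables is checking the flag and acting on it atomically with respect to `usermodehelper_disable()`; a sketch, with path/argv/envp assumed to be set up by the caller:

```c
#include <linux/kmod.h>

read_lock_usermodehelper();
if (!usermodehelper_is_disabled())
	call_usermodehelper(path, argv, envp, UMH_WAIT_EXEC);
read_unlock_usermodehelper();
```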
+26 -1
kernel/kthread.c
··· 59 59 EXPORT_SYMBOL(kthread_should_stop); 60 60 61 61 /** 62 + * kthread_freezable_should_stop - should this freezable kthread return now? 63 + * @was_frozen: optional out parameter, indicates whether %current was frozen 64 + * 65 + * kthread_should_stop() for freezable kthreads, which will enter 66 + * refrigerator if necessary. This function is safe from kthread_stop() / 67 + * freezer deadlock and freezable kthreads should use this function instead 68 + * of calling try_to_freeze() directly. 69 + */ 70 + bool kthread_freezable_should_stop(bool *was_frozen) 71 + { 72 + bool frozen = false; 73 + 74 + might_sleep(); 75 + 76 + if (unlikely(freezing(current))) 77 + frozen = __refrigerator(true); 78 + 79 + if (was_frozen) 80 + *was_frozen = frozen; 81 + 82 + return kthread_should_stop(); 83 + } 84 + EXPORT_SYMBOL_GPL(kthread_freezable_should_stop); 85 + 86 + /** 62 87 * kthread_data - return data value specified on kthread creation 63 88 * @task: kthread task in question 64 89 * ··· 282 257 set_cpus_allowed_ptr(tsk, cpu_all_mask); 283 258 set_mems_allowed(node_states[N_HIGH_MEMORY]); 284 259 285 - current->flags |= PF_NOFREEZE | PF_FREEZER_NOSIG; 260 + current->flags |= PF_NOFREEZE; 286 261 287 262 for (;;) { 288 263 set_current_state(TASK_INTERRUPTIBLE);
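Editor's note: with `kthread_freezable_should_stop()` added above, a freezable kthread's main loop enters the refrigerator inside the should-stop test itself, which is what makes it safe against the kthread_stop()/freezer deadlock the kerneldoc mentions. A minimal sketch of the canonical loop — `do_some_work()` is a hypothetical work item:

```c
#include <linux/kthread.h>
#include <linux/freezer.h>

static int my_thread_fn(void *data)
{
	set_freezable();	/* clear PF_NOFREEZE set by kthreadd */

	while (!kthread_freezable_should_stop(NULL)) {
		do_some_work();			/* hypothetical */
		schedule_timeout_interruptible(HZ);
	}
	return 0;
}
```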
+27 -65
kernel/power/hibernate.c
··· 43 43 enum { 44 44 HIBERNATION_INVALID, 45 45 HIBERNATION_PLATFORM, 46 - HIBERNATION_TEST, 47 - HIBERNATION_TESTPROC, 48 46 HIBERNATION_SHUTDOWN, 49 47 HIBERNATION_REBOOT, 50 48 /* keep last */ ··· 53 55 54 56 static int hibernation_mode = HIBERNATION_SHUTDOWN; 55 57 56 - static bool freezer_test_done; 58 + bool freezer_test_done; 57 59 58 60 static const struct platform_hibernation_ops *hibernation_ops; 59 61 ··· 69 71 WARN_ON(1); 70 72 return; 71 73 } 72 - mutex_lock(&pm_mutex); 74 + lock_system_sleep(); 73 75 hibernation_ops = ops; 74 76 if (ops) 75 77 hibernation_mode = HIBERNATION_PLATFORM; 76 78 else if (hibernation_mode == HIBERNATION_PLATFORM) 77 79 hibernation_mode = HIBERNATION_SHUTDOWN; 78 80 79 - mutex_unlock(&pm_mutex); 81 + unlock_system_sleep(); 80 82 } 81 83 82 84 static bool entering_platform_hibernation; ··· 94 96 mdelay(5000); 95 97 } 96 98 97 - static int hibernation_testmode(int mode) 98 - { 99 - if (hibernation_mode == mode) { 100 - hibernation_debug_sleep(); 101 - return 1; 102 - } 103 - return 0; 104 - } 105 - 106 99 static int hibernation_test(int level) 107 100 { 108 101 if (pm_test_level == level) { ··· 103 114 return 0; 104 115 } 105 116 #else /* !CONFIG_PM_DEBUG */ 106 - static int hibernation_testmode(int mode) { return 0; } 107 117 static int hibernation_test(int level) { return 0; } 108 118 #endif /* !CONFIG_PM_DEBUG */ 109 119 ··· 266 278 goto Platform_finish; 267 279 268 280 error = disable_nonboot_cpus(); 269 - if (error || hibernation_test(TEST_CPUS) 270 - || hibernation_testmode(HIBERNATION_TEST)) 281 + if (error || hibernation_test(TEST_CPUS)) 271 282 goto Enable_cpus; 272 283 273 284 local_irq_disable(); ··· 320 333 */ 321 334 int hibernation_snapshot(int platform_mode) 322 335 { 323 - pm_message_t msg = PMSG_RECOVER; 336 + pm_message_t msg; 324 337 int error; 325 338 326 339 error = platform_begin(platform_mode); ··· 336 349 if (error) 337 350 goto Cleanup; 338 351 339 - if (hibernation_test(TEST_FREEZER) || 340 - 
hibernation_testmode(HIBERNATION_TESTPROC)) { 352 + if (hibernation_test(TEST_FREEZER)) { 341 353 342 354 /* 343 355 * Indicate to the caller that we are returning due to a ··· 348 362 349 363 error = dpm_prepare(PMSG_FREEZE); 350 364 if (error) { 351 - dpm_complete(msg); 365 + dpm_complete(PMSG_RECOVER); 352 366 goto Cleanup; 353 367 } 354 368 355 369 suspend_console(); 356 370 pm_restrict_gfp_mask(); 371 + 357 372 error = dpm_suspend(PMSG_FREEZE); 358 - if (error) 359 - goto Recover_platform; 360 373 361 - if (hibernation_test(TEST_DEVICES)) 362 - goto Recover_platform; 374 + if (error || hibernation_test(TEST_DEVICES)) 375 + platform_recover(platform_mode); 376 + else 377 + error = create_image(platform_mode); 363 378 364 - error = create_image(platform_mode); 365 379 /* 366 - * Control returns here (1) after the image has been created or the 380 + * In the case that we call create_image() above, the control 381 + * returns here (1) after the image has been created or the 367 382 * image creation has failed and (2) after a successful restore. 368 383 */ 369 384 370 - Resume_devices: 371 385 /* We may need to release the preallocated image pages here. 
*/ 372 386 if (error || !in_suspend) 373 387 swsusp_free(); ··· 384 398 Close: 385 399 platform_end(platform_mode); 386 400 return error; 387 - 388 - Recover_platform: 389 - platform_recover(platform_mode); 390 - goto Resume_devices; 391 401 392 402 Cleanup: 393 403 swsusp_free(); ··· 572 590 static void power_down(void) 573 591 { 574 592 switch (hibernation_mode) { 575 - case HIBERNATION_TEST: 576 - case HIBERNATION_TESTPROC: 577 - break; 578 593 case HIBERNATION_REBOOT: 579 594 kernel_restart(NULL); 580 595 break; ··· 590 611 while(1); 591 612 } 592 613 593 - static int prepare_processes(void) 594 - { 595 - int error = 0; 596 - 597 - if (freeze_processes()) { 598 - error = -EBUSY; 599 - thaw_processes(); 600 - } 601 - return error; 602 - } 603 - 604 614 /** 605 615 * hibernate - Carry out system hibernation, including saving the image. 606 616 */ ··· 597 629 { 598 630 int error; 599 631 600 - mutex_lock(&pm_mutex); 632 + lock_system_sleep(); 601 633 /* The snapshot device should not be opened while we're running */ 602 634 if (!atomic_add_unless(&snapshot_device_available, -1, 0)) { 603 635 error = -EBUSY; ··· 622 654 sys_sync(); 623 655 printk("done.\n"); 624 656 625 - error = prepare_processes(); 657 + error = freeze_processes(); 626 658 if (error) 627 659 goto Finish; 628 660 ··· 665 697 pm_restore_console(); 666 698 atomic_inc(&snapshot_device_available); 667 699 Unlock: 668 - mutex_unlock(&pm_mutex); 700 + unlock_system_sleep(); 669 701 return error; 670 702 } 671 703 ··· 779 811 goto close_finish; 780 812 781 813 error = create_basic_memory_bitmaps(); 782 - if (error) 814 + if (error) { 815 + usermodehelper_enable(); 783 816 goto close_finish; 817 + } 784 818 785 819 pr_debug("PM: Preparing processes for restore.\n"); 786 - error = prepare_processes(); 820 + error = freeze_processes(); 787 821 if (error) { 788 822 swsusp_close(FMODE_READ); 789 823 goto Done; ··· 825 855 [HIBERNATION_PLATFORM] = "platform", 826 856 [HIBERNATION_SHUTDOWN] = "shutdown", 827 
857 [HIBERNATION_REBOOT] = "reboot", 828 - [HIBERNATION_TEST] = "test", 829 - [HIBERNATION_TESTPROC] = "testproc", 830 858 }; 831 859 832 860 /* ··· 833 865 * Hibernation can be handled in several ways. There are a few different ways 834 866 * to put the system into the sleep state: using the platform driver (e.g. ACPI 835 867 * or other hibernation_ops), powering it off or rebooting it (for testing 836 - * mostly), or using one of the two available test modes. 868 + * mostly). 837 869 * 838 870 * The sysfs file /sys/power/disk provides an interface for selecting the 839 871 * hibernation mode to use. Reading from this file causes the available modes 840 - * to be printed. There are 5 modes that can be supported: 872 + * to be printed. There are 3 modes that can be supported: 841 873 * 842 874 * 'platform' 843 875 * 'shutdown' 844 876 * 'reboot' 845 - * 'test' 846 - * 'testproc' 847 877 * 848 878 * If a platform hibernation driver is in use, 'platform' will be supported 849 879 * and will be used by default. Otherwise, 'shutdown' will be used by default. ··· 865 899 switch (i) { 866 900 case HIBERNATION_SHUTDOWN: 867 901 case HIBERNATION_REBOOT: 868 - case HIBERNATION_TEST: 869 - case HIBERNATION_TESTPROC: 870 902 break; 871 903 case HIBERNATION_PLATFORM: 872 904 if (hibernation_ops) ··· 893 929 p = memchr(buf, '\n', n); 894 930 len = p ? 
p - buf : n; 895 931 896 - mutex_lock(&pm_mutex); 932 + lock_system_sleep(); 897 933 for (i = HIBERNATION_FIRST; i <= HIBERNATION_MAX; i++) { 898 934 if (len == strlen(hibernation_modes[i]) 899 935 && !strncmp(buf, hibernation_modes[i], len)) { ··· 905 941 switch (mode) { 906 942 case HIBERNATION_SHUTDOWN: 907 943 case HIBERNATION_REBOOT: 908 - case HIBERNATION_TEST: 909 - case HIBERNATION_TESTPROC: 910 944 hibernation_mode = mode; 911 945 break; 912 946 case HIBERNATION_PLATFORM: ··· 919 957 if (!error) 920 958 pr_debug("PM: Hibernation mode set to '%s'\n", 921 959 hibernation_modes[mode]); 922 - mutex_unlock(&pm_mutex); 960 + unlock_system_sleep(); 923 961 return error ? error : n; 924 962 } 925 963 ··· 946 984 if (maj != MAJOR(res) || min != MINOR(res)) 947 985 goto out; 948 986 949 - mutex_lock(&pm_mutex); 987 + lock_system_sleep(); 950 988 swsusp_resume_device = res; 951 - mutex_unlock(&pm_mutex); 989 + unlock_system_sleep(); 952 990 printk(KERN_INFO "PM: Starting manual resume from disk\n"); 953 991 noresume = 0; 954 992 software_resume();
+5 -5
kernel/power/main.c
··· 3 3 * 4 4 * Copyright (c) 2003 Patrick Mochel 5 5 * Copyright (c) 2003 Open Source Development Lab 6 - * 6 + * 7 7 * This file is released under the GPLv2 8 8 * 9 9 */ ··· 116 116 p = memchr(buf, '\n', n); 117 117 len = p ? p - buf : n; 118 118 119 - mutex_lock(&pm_mutex); 119 + lock_system_sleep(); 120 120 121 121 level = TEST_FIRST; 122 122 for (s = &pm_tests[level]; level <= TEST_MAX; s++, level++) ··· 126 126 break; 127 127 } 128 128 129 - mutex_unlock(&pm_mutex); 129 + unlock_system_sleep(); 130 130 131 131 return error ? error : n; 132 132 } ··· 240 240 * 'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 241 241 * 'disk' (Suspend-to-Disk). 242 242 * 243 - * store() accepts one of those strings, translates it into the 243 + * store() accepts one of those strings, translates it into the 244 244 * proper enumerated value, and initiates a suspend transition. 245 245 */ 246 246 static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr, ··· 282 282 /* First, check if we are requested to hibernate */ 283 283 if (len == 4 && !strncmp(buf, "disk", len)) { 284 284 error = hibernate(); 285 - goto Exit; 285 + goto Exit; 286 286 } 287 287 288 288 #ifdef CONFIG_SUSPEND
+2
kernel/power/power.h
··· 50 50 #define SPARE_PAGES ((1024 * 1024) >> PAGE_SHIFT) 51 51 52 52 /* kernel/power/hibernate.c */ 53 + extern bool freezer_test_done; 54 + 53 55 extern int hibernation_snapshot(int platform_mode); 54 56 extern int hibernation_restore(int platform_mode); 55 57 extern int hibernation_platform_enter(void);
+34 -49
kernel/power/process.c
··· 22 22 */ 23 23 #define TIMEOUT (20 * HZ) 24 24 25 - static inline int freezable(struct task_struct * p) 26 - { 27 - if ((p == current) || 28 - (p->flags & PF_NOFREEZE) || 29 - (p->exit_state != 0)) 30 - return 0; 31 - return 1; 32 - } 33 - 34 - static int try_to_freeze_tasks(bool sig_only) 25 + static int try_to_freeze_tasks(bool user_only) 35 26 { 36 27 struct task_struct *g, *p; 37 28 unsigned long end_time; ··· 37 46 38 47 end_time = jiffies + TIMEOUT; 39 48 40 - if (!sig_only) 49 + if (!user_only) 41 50 freeze_workqueues_begin(); 42 51 43 52 while (true) { 44 53 todo = 0; 45 54 read_lock(&tasklist_lock); 46 55 do_each_thread(g, p) { 47 - if (frozen(p) || !freezable(p)) 48 - continue; 49 - 50 - if (!freeze_task(p, sig_only)) 56 + if (p == current || !freeze_task(p)) 51 57 continue; 52 58 53 59 /* ··· 65 77 } while_each_thread(g, p); 66 78 read_unlock(&tasklist_lock); 67 79 68 - if (!sig_only) { 80 + if (!user_only) { 69 81 wq_busy = freeze_workqueues_busy(); 70 82 todo += wq_busy; 71 83 } ··· 91 103 elapsed_csecs = elapsed_csecs64; 92 104 93 105 if (todo) { 94 - /* This does not unfreeze processes that are already frozen 95 - * (we have slightly ugly calling convention in that respect, 96 - * and caller must call thaw_processes() if something fails), 97 - * but it cleans up leftover PF_FREEZE requests. 
98 - */ 99 106 printk("\n"); 100 107 printk(KERN_ERR "Freezing of tasks %s after %d.%02d seconds " 101 108 "(%d tasks refusing to freeze, wq_busy=%d):\n", ··· 98 115 elapsed_csecs / 100, elapsed_csecs % 100, 99 116 todo - wq_busy, wq_busy); 100 117 101 - thaw_workqueues(); 102 - 103 118 read_lock(&tasklist_lock); 104 119 do_each_thread(g, p) { 105 - task_lock(p); 106 - if (!wakeup && freezing(p) && !freezer_should_skip(p)) 120 + if (!wakeup && !freezer_should_skip(p) && 121 + p != current && freezing(p) && !frozen(p)) 107 122 sched_show_task(p); 108 - cancel_freezing(p); 109 - task_unlock(p); 110 123 } while_each_thread(g, p); 111 124 read_unlock(&tasklist_lock); 112 125 } else { ··· 115 136 116 137 /** 117 138 * freeze_processes - Signal user space processes to enter the refrigerator. 139 + * 140 + * On success, returns 0. On failure, -errno and system is fully thawed. 118 141 */ 119 142 int freeze_processes(void) 120 143 { 121 144 int error; 122 145 146 + if (!pm_freezing) 147 + atomic_inc(&system_freezing_cnt); 148 + 123 149 printk("Freezing user space processes ... "); 150 + pm_freezing = true; 124 151 error = try_to_freeze_tasks(true); 125 152 if (!error) { 126 153 printk("done."); ··· 135 150 printk("\n"); 136 151 BUG_ON(in_atomic()); 137 152 153 + if (error) 154 + thaw_processes(); 138 155 return error; 139 156 } 140 157 141 158 /** 142 159 * freeze_kernel_threads - Make freezable kernel threads go to the refrigerator. 160 + * 161 + * On success, returns 0. On failure, -errno and system is fully thawed. 143 162 */ 144 163 int freeze_kernel_threads(void) 145 164 { 146 165 int error; 147 166 148 167 printk("Freezing remaining freezable tasks ... 
"); 168 + pm_nosig_freezing = true; 149 169 error = try_to_freeze_tasks(false); 150 170 if (!error) 151 171 printk("done."); ··· 158 168 printk("\n"); 159 169 BUG_ON(in_atomic()); 160 170 171 + if (error) 172 + thaw_processes(); 161 173 return error; 162 - } 163 - 164 - static void thaw_tasks(bool nosig_only) 165 - { 166 - struct task_struct *g, *p; 167 - 168 - read_lock(&tasklist_lock); 169 - do_each_thread(g, p) { 170 - if (!freezable(p)) 171 - continue; 172 - 173 - if (nosig_only && should_send_signal(p)) 174 - continue; 175 - 176 - if (cgroup_freezing_or_frozen(p)) 177 - continue; 178 - 179 - thaw_process(p); 180 - } while_each_thread(g, p); 181 - read_unlock(&tasklist_lock); 182 174 } 183 175 184 176 void thaw_processes(void) 185 177 { 178 + struct task_struct *g, *p; 179 + 180 + if (pm_freezing) 181 + atomic_dec(&system_freezing_cnt); 182 + pm_freezing = false; 183 + pm_nosig_freezing = false; 184 + 186 185 oom_killer_enable(); 187 186 188 187 printk("Restarting tasks ... "); 188 + 189 189 thaw_workqueues(); 190 - thaw_tasks(true); 191 - thaw_tasks(false); 190 + 191 + read_lock(&tasklist_lock); 192 + do_each_thread(g, p) { 193 + __thaw_task(p); 194 + } while_each_thread(g, p); 195 + read_unlock(&tasklist_lock); 196 + 192 197 schedule(); 193 198 printk("done.\n"); 194 199 }
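The process.c rewrite above replaces per-task freeze bookkeeping with a single global gate: freeze_processes() and freeze_kernel_threads() each bump system_freezing_cnt at most once (guarded by pm_freezing / pm_nosig_freezing), so the hot-path freezing() test reduces to one atomic read, and thaw_processes() simply drops the count and thaws every task unconditionally. A runnable userspace model of that counter discipline (names mirror the kernel's, but this is a sketch, not the in-kernel implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Model of the system_freezing_cnt gate: each freeze stage increments
 * the counter once, and freezing() is a single atomic read. */
static atomic_int system_freezing_cnt;
static bool pm_freezing;        /* user space frozen? */
static bool pm_nosig_freezing;  /* kernel threads frozen too? */

static bool freezing(void)
{
    return atomic_load(&system_freezing_cnt) > 0;
}

static void freeze_processes_model(void)
{
    if (!pm_freezing)  /* bump at most once per freeze stage */
        atomic_fetch_add(&system_freezing_cnt, 1);
    pm_freezing = true;
}

static void thaw_processes_model(void)
{
    if (pm_freezing)
        atomic_fetch_sub(&system_freezing_cnt, 1);
    pm_freezing = false;
    pm_nosig_freezing = false;
}
```

Note how this matches the diff: repeated freeze requests are idempotent, and a single thaw_processes() restores the counter regardless of which stage failed, which is what lets freeze_processes() and freeze_kernel_threads() above call thaw_processes() directly on error.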
+5 -7
kernel/power/suspend.c
··· 42 42 */ 43 43 void suspend_set_ops(const struct platform_suspend_ops *ops) 44 44 { 45 - mutex_lock(&pm_mutex); 45 + lock_system_sleep(); 46 46 suspend_ops = ops; 47 - mutex_unlock(&pm_mutex); 47 + unlock_system_sleep(); 48 48 } 49 49 EXPORT_SYMBOL_GPL(suspend_set_ops); 50 50 ··· 106 106 goto Finish; 107 107 108 108 error = suspend_freeze_processes(); 109 - if (error) { 110 - suspend_stats.failed_freeze++; 111 - dpm_save_failed_step(SUSPEND_FREEZE); 112 - } else 109 + if (!error) 113 110 return 0; 114 111 115 - suspend_thaw_processes(); 112 + suspend_stats.failed_freeze++; 113 + dpm_save_failed_step(SUSPEND_FREEZE); 116 114 usermodehelper_enable(); 117 115 Finish: 118 116 pm_notifier_call_chain(PM_POST_SUSPEND);
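The suspend.c hunk restructures suspend_prepare() so the success case returns early and all failure bookkeeping falls through in one place; the explicit suspend_thaw_processes() call disappears because, per the process.c changes above, freeze failure now leaves the system fully thawed. A sketch of that control-flow shape, with stub helpers standing in for the kernel calls (the -16 return is an illustrative errno, not the kernel's exact value):

```c
#include <stdbool.h>

/* Stubs standing in for the kernel calls in suspend_prepare(). */
static int freeze_should_fail;
static bool failure_recorded, helpers_enabled, notified_end;

static int notify_begin(void) { return 0; }          /* pm_notifier_call_chain() */
static int freeze(void) { return freeze_should_fail ? -16 : 0; }
static void record_freeze_failure(void) { failure_recorded = true; }
static void enable_helpers(void) { helpers_enabled = true; }
static void notify_end(void) { notified_end = true; } /* PM_POST_SUSPEND */

int prepare(void)
{
    int error = notify_begin();
    if (error)
        goto Finish;

    error = freeze();
    if (!error)
        return 0;           /* success: ready to suspend, skip cleanup */

    record_freeze_failure(); /* suspend_stats / dpm_save_failed_step() */
    enable_helpers();        /* usermodehelper_enable() */
Finish:
    notify_end();
    return error;
}
```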
+82 -102
kernel/power/user.c
··· 21 21 #include <linux/swapops.h> 22 22 #include <linux/pm.h> 23 23 #include <linux/fs.h> 24 + #include <linux/compat.h> 24 25 #include <linux/console.h> 25 26 #include <linux/cpu.h> 26 27 #include <linux/freezer.h> ··· 30 29 #include <asm/uaccess.h> 31 30 32 31 #include "power.h" 33 - 34 - /* 35 - * NOTE: The SNAPSHOT_SET_SWAP_FILE and SNAPSHOT_PMOPS ioctls are obsolete and 36 - * will be removed in the future. They are only preserved here for 37 - * compatibility with existing userland utilities. 38 - */ 39 - #define SNAPSHOT_SET_SWAP_FILE _IOW(SNAPSHOT_IOC_MAGIC, 10, unsigned int) 40 - #define SNAPSHOT_PMOPS _IOW(SNAPSHOT_IOC_MAGIC, 12, unsigned int) 41 - 42 - #define PMOPS_PREPARE 1 43 - #define PMOPS_ENTER 2 44 - #define PMOPS_FINISH 3 45 - 46 - /* 47 - * NOTE: The following ioctl definitions are wrong and have been replaced with 48 - * correct ones. They are only preserved here for compatibility with existing 49 - * userland utilities and will be removed in the future. 50 - */ 51 - #define SNAPSHOT_ATOMIC_SNAPSHOT _IOW(SNAPSHOT_IOC_MAGIC, 3, void *) 52 - #define SNAPSHOT_SET_IMAGE_SIZE _IOW(SNAPSHOT_IOC_MAGIC, 6, unsigned long) 53 - #define SNAPSHOT_AVAIL_SWAP _IOR(SNAPSHOT_IOC_MAGIC, 7, void *) 54 - #define SNAPSHOT_GET_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 8, void *) 55 32 56 33 57 34 #define SNAPSHOT_MINOR 231 ··· 50 71 struct snapshot_data *data; 51 72 int error; 52 73 53 - mutex_lock(&pm_mutex); 74 + lock_system_sleep(); 54 75 55 76 if (!atomic_add_unless(&snapshot_device_available, -1, 0)) { 56 77 error = -EBUSY; ··· 102 123 data->platform_support = 0; 103 124 104 125 Unlock: 105 - mutex_unlock(&pm_mutex); 126 + unlock_system_sleep(); 106 127 107 128 return error; 108 129 } ··· 111 132 { 112 133 struct snapshot_data *data; 113 134 114 - mutex_lock(&pm_mutex); 135 + lock_system_sleep(); 115 136 116 137 swsusp_free(); 117 138 free_basic_memory_bitmaps(); ··· 125 146 PM_POST_HIBERNATION : PM_POST_RESTORE); 126 147 atomic_inc(&snapshot_device_available); 
127 148 128 - mutex_unlock(&pm_mutex); 149 + unlock_system_sleep(); 129 150 130 151 return 0; 131 152 } ··· 137 158 ssize_t res; 138 159 loff_t pg_offp = *offp & ~PAGE_MASK; 139 160 140 - mutex_lock(&pm_mutex); 161 + lock_system_sleep(); 141 162 142 163 data = filp->private_data; 143 164 if (!data->ready) { ··· 158 179 *offp += res; 159 180 160 181 Unlock: 161 - mutex_unlock(&pm_mutex); 182 + unlock_system_sleep(); 162 183 163 184 return res; 164 185 } ··· 170 191 ssize_t res; 171 192 loff_t pg_offp = *offp & ~PAGE_MASK; 172 193 173 - mutex_lock(&pm_mutex); 194 + lock_system_sleep(); 174 195 175 196 data = filp->private_data; 176 197 ··· 187 208 if (res > 0) 188 209 *offp += res; 189 210 unlock: 190 - mutex_unlock(&pm_mutex); 211 + unlock_system_sleep(); 191 212 192 213 return res; 193 - } 194 - 195 - static void snapshot_deprecated_ioctl(unsigned int cmd) 196 - { 197 - if (printk_ratelimit()) 198 - printk(KERN_NOTICE "%pf: ioctl '%.8x' is deprecated and will " 199 - "be removed soon, update your suspend-to-disk " 200 - "utilities\n", 201 - __builtin_return_address(0), cmd); 202 214 } 203 215 204 216 static long snapshot_ioctl(struct file *filp, unsigned int cmd, ··· 227 257 break; 228 258 229 259 error = freeze_processes(); 230 - if (error) { 231 - thaw_processes(); 260 + if (error) 232 261 usermodehelper_enable(); 233 - } 234 - if (!error) 262 + else 235 263 data->frozen = 1; 236 264 break; 237 265 ··· 242 274 data->frozen = 0; 243 275 break; 244 276 245 - case SNAPSHOT_ATOMIC_SNAPSHOT: 246 - snapshot_deprecated_ioctl(cmd); 247 277 case SNAPSHOT_CREATE_IMAGE: 248 278 if (data->mode != O_RDONLY || !data->frozen || data->ready) { 249 279 error = -EPERM; ··· 249 283 } 250 284 pm_restore_gfp_mask(); 251 285 error = hibernation_snapshot(data->platform_support); 252 - if (!error) 286 + if (!error) { 253 287 error = put_user(in_suspend, (int __user *)arg); 254 - if (!error) 255 - data->ready = 1; 288 + if (!error && !freezer_test_done) 289 + data->ready = 1; 290 + if 
(freezer_test_done) { 291 + freezer_test_done = false; 292 + thaw_processes(); 293 + } 294 + } 256 295 break; 257 296 258 297 case SNAPSHOT_ATOMIC_RESTORE: ··· 276 305 data->ready = 0; 277 306 break; 278 307 279 - case SNAPSHOT_SET_IMAGE_SIZE: 280 - snapshot_deprecated_ioctl(cmd); 281 308 case SNAPSHOT_PREF_IMAGE_SIZE: 282 309 image_size = arg; 283 310 break; ··· 290 321 error = put_user(size, (loff_t __user *)arg); 291 322 break; 292 323 293 - case SNAPSHOT_AVAIL_SWAP: 294 - snapshot_deprecated_ioctl(cmd); 295 324 case SNAPSHOT_AVAIL_SWAP_SIZE: 296 325 size = count_swap_pages(data->swap, 1); 297 326 size <<= PAGE_SHIFT; 298 327 error = put_user(size, (loff_t __user *)arg); 299 328 break; 300 329 301 - case SNAPSHOT_GET_SWAP_PAGE: 302 - snapshot_deprecated_ioctl(cmd); 303 330 case SNAPSHOT_ALLOC_SWAP_PAGE: 304 331 if (data->swap < 0 || data->swap >= MAX_SWAPFILES) { 305 332 error = -ENODEV; ··· 318 353 free_all_swap_pages(data->swap); 319 354 break; 320 355 321 - case SNAPSHOT_SET_SWAP_FILE: /* This ioctl is deprecated */ 322 - snapshot_deprecated_ioctl(cmd); 323 - if (!swsusp_swap_in_use()) { 324 - /* 325 - * User space encodes device types as two-byte values, 326 - * so we need to recode them 327 - */ 328 - if (old_decode_dev(arg)) { 329 - data->swap = swap_type_of(old_decode_dev(arg), 330 - 0, NULL); 331 - if (data->swap < 0) 332 - error = -ENODEV; 333 - } else { 334 - data->swap = -1; 335 - error = -EINVAL; 336 - } 337 - } else { 338 - error = -EPERM; 339 - } 340 - break; 341 - 342 356 case SNAPSHOT_S2RAM: 343 357 if (!data->frozen) { 344 358 error = -EPERM; ··· 338 394 case SNAPSHOT_POWER_OFF: 339 395 if (data->platform_support) 340 396 error = hibernation_platform_enter(); 341 - break; 342 - 343 - case SNAPSHOT_PMOPS: /* This ioctl is deprecated */ 344 - snapshot_deprecated_ioctl(cmd); 345 - error = -EINVAL; 346 - 347 - switch (arg) { 348 - 349 - case PMOPS_PREPARE: 350 - data->platform_support = 1; 351 - error = 0; 352 - break; 353 - 354 - case PMOPS_ENTER: 
355 - if (data->platform_support) 356 - error = hibernation_platform_enter(); 357 - break; 358 - 359 - case PMOPS_FINISH: 360 - if (data->platform_support) 361 - error = 0; 362 - break; 363 - 364 - default: 365 - printk(KERN_ERR "SNAPSHOT_PMOPS: invalid argument %ld\n", arg); 366 - 367 - } 368 397 break; 369 398 370 399 case SNAPSHOT_SET_SWAP_AREA: ··· 381 464 return error; 382 465 } 383 466 467 + #ifdef CONFIG_COMPAT 468 + 469 + struct compat_resume_swap_area { 470 + compat_loff_t offset; 471 + u32 dev; 472 + } __packed; 473 + 474 + static long 475 + snapshot_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) 476 + { 477 + BUILD_BUG_ON(sizeof(loff_t) != sizeof(compat_loff_t)); 478 + 479 + switch (cmd) { 480 + case SNAPSHOT_GET_IMAGE_SIZE: 481 + case SNAPSHOT_AVAIL_SWAP_SIZE: 482 + case SNAPSHOT_ALLOC_SWAP_PAGE: { 483 + compat_loff_t __user *uoffset = compat_ptr(arg); 484 + loff_t offset; 485 + mm_segment_t old_fs; 486 + int err; 487 + 488 + old_fs = get_fs(); 489 + set_fs(KERNEL_DS); 490 + err = snapshot_ioctl(file, cmd, (unsigned long) &offset); 491 + set_fs(old_fs); 492 + if (!err && put_user(offset, uoffset)) 493 + err = -EFAULT; 494 + return err; 495 + } 496 + 497 + case SNAPSHOT_CREATE_IMAGE: 498 + return snapshot_ioctl(file, cmd, 499 + (unsigned long) compat_ptr(arg)); 500 + 501 + case SNAPSHOT_SET_SWAP_AREA: { 502 + struct compat_resume_swap_area __user *u_swap_area = 503 + compat_ptr(arg); 504 + struct resume_swap_area swap_area; 505 + mm_segment_t old_fs; 506 + int err; 507 + 508 + err = get_user(swap_area.offset, &u_swap_area->offset); 509 + err |= get_user(swap_area.dev, &u_swap_area->dev); 510 + if (err) 511 + return -EFAULT; 512 + old_fs = get_fs(); 513 + set_fs(KERNEL_DS); 514 + err = snapshot_ioctl(file, SNAPSHOT_SET_SWAP_AREA, 515 + (unsigned long) &swap_area); 516 + set_fs(old_fs); 517 + return err; 518 + } 519 + 520 + default: 521 + return snapshot_ioctl(file, cmd, arg); 522 + } 523 + } 524 + 525 + #endif /* CONFIG_COMPAT */ 526 
+ 384 527 static const struct file_operations snapshot_fops = { 385 528 .open = snapshot_open, 386 529 .release = snapshot_release, ··· 448 471 .write = snapshot_write, 449 472 .llseek = no_llseek, 450 473 .unlocked_ioctl = snapshot_ioctl, 474 + #ifdef CONFIG_COMPAT 475 + .compat_ioctl = snapshot_compat_ioctl, 476 + #endif 451 477 }; 452 478 453 479 static struct miscdevice snapshot_device = {
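The new snapshot_compat_ioctl() above exists because a 32-bit process lays out struct resume_swap_area without the tail padding a 64-bit kernel's ABI inserts after the trailing u32, so SNAPSHOT_SET_SWAP_AREA must re-read the fields through a __packed compat mirror. This can be demonstrated in userspace (types model compat_loff_t as int64_t and u32 as uint32_t; this is an illustration of the layout issue, not kernel code):

```c
#include <stdint.h>
#include <stddef.h>

/* 32-bit userspace layout: no tail padding, 12 bytes total. */
struct compat_resume_swap_area {
    int64_t offset;   /* compat_loff_t */
    uint32_t dev;     /* u32 */
} __attribute__((packed));

/* Native layout: the 8-byte alignment of loff_t typically pads the
 * struct out past the trailing u32 on 64-bit ABIs. */
struct resume_swap_area {
    int64_t offset;   /* loff_t */
    uint32_t dev;     /* __u32 */
};
```

Because only the total size differs (the field offsets agree), the shim can copy the two fields with get_user() and then reissue the ioctl against a kernel-side struct under KERNEL_DS, exactly as the SNAPSHOT_SET_SWAP_AREA case above does.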
+2 -6
mm/backing-dev.c
··· 600 600 601 601 /* 602 602 * Finally, kill the kernel thread. We don't need to be RCU 603 - * safe anymore, since the bdi is gone from visibility. Force 604 - * unfreeze of the thread before calling kthread_stop(), otherwise 605 - * it would never exet if it is currently stuck in the refrigerator. 603 + * safe anymore, since the bdi is gone from visibility. 606 604 */ 607 - if (bdi->wb.task) { 608 - thaw_process(bdi->wb.task); 605 + if (bdi->wb.task) 609 606 kthread_stop(bdi->wb.task); 610 - } 611 607 } 612 608 613 609 /*
+1 -1
mm/oom_kill.c
··· 328 328 */ 329 329 if (test_tsk_thread_flag(p, TIF_MEMDIE)) { 330 330 if (unlikely(frozen(p))) 331 - thaw_process(p); 331 + __thaw_task(p); 332 332 return ERR_PTR(-1UL); 333 333 } 334 334 if (!p->mm)
+2 -1
net/sunrpc/sched.c
··· 18 18 #include <linux/smp.h> 19 19 #include <linux/spinlock.h> 20 20 #include <linux/mutex.h> 21 + #include <linux/freezer.h> 21 22 22 23 #include <linux/sunrpc/clnt.h> 23 24 ··· 232 231 { 233 232 if (fatal_signal_pending(current)) 234 233 return -ERESTARTSYS; 235 - schedule(); 234 + freezable_schedule(); 236 235 return 0; 237 236 } 238 237
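The sunrpc hunk swaps schedule() for freezable_schedule(), so a task blocked waiting on an RPC reply no longer stalls a freeze: it declares itself skippable to the freezer for the duration of the block, then freezes on wakeup if a freeze arrived meanwhile. A userspace model of that wrapper's semantics (block() stands in for schedule(); the flags model PF_FREEZER_SKIP and the freezing state; this is a sketch, not the kernel's implementation):

```c
#include <stdbool.h>

static bool freezer_skip;          /* models PF_FREEZER_SKIP */
static bool freeze_requested;      /* models freezing(current) */
static bool entered_refrigerator;

static void block(void) { /* would call schedule(); no-op here */ }

static void try_to_freeze_model(void)
{
    if (freeze_requested)
        entered_refrigerator = true;
}

static void freezable_schedule_model(void)
{
    freezer_skip = true;       /* freezer_do_not_count(): skip me */
    block();                   /* schedule() */
    freezer_skip = false;      /* freezer_count() clears the skip... */
    try_to_freeze_model();     /* ...and freezes if a freeze arrived */
}
```

The freezer can therefore count a blocked RPC waiter as already frozen, while correctness is preserved because the task checks for a pending freeze as soon as it resumes.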