Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-4.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI fixes from Rafael Wysocki:
"These are fixes for recent regressions (ACPI resources management,
suspend-to-idle), stable-candidate fixes (ACPI backlight), fixes
related to the wakeup IRQ management changes made in v3.18, other
fixes (suspend-to-idle, cpufreq ppc driver) and a couple of cleanups
(suspend-to-idle, generic power domains, ACPI backlight).

Specifics:

- Fix ACPI resources management problems introduced by the recent
rework of the code in question (Jiang Liu) and a build issue
introduced by those changes (Joachim Nilsson).

- Fix a recent suspend-to-idle regression on systems where entering
idle states causes local timers to stop, prevent suspend-to-idle
from crashing in restricted configurations (no cpuidle driver,
cpuidle disabled etc.) and clean up the idle loop somewhat while at
it (Rafael J Wysocki).

- Fix build problem in the cpufreq ppc driver (Geert Uytterhoeven).

- Allow the ACPI backlight driver module to be loaded if ACPI is
disabled which helps the i915 driver in those configurations
(stable-candidate) and change the code to help debug unusual use
cases (Chris Wilson).

- Wakeup IRQ management changes in v3.18 caused some drivers on the
at91 platform to trigger a warning from the IRQ core related to an
unexpected combination of interrupt action handler flags. However,
on at91 a timer IRQ is shared with some other devices (including
system wakeup ones) and that leads to the unusual combination of
flags in question.

To make it possible to avoid the warning, introduce a new interrupt
action handler flag (which can be used by drivers to indicate the
special case to the core) and rework the problematic at91 drivers
to use it and work as expected during system suspend/resume. From
Boris Brezillon, Rafael J Wysocki and Mark Rutland.

- Clean up the generic power domains subsystem's debugfs interface
(Kevin Hilman)"

* tag 'pm+acpi-4.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
genirq / PM: describe IRQF_COND_SUSPEND
tty: serial: atmel: rework interrupt and wakeup handling
watchdog: at91sam9: request the irq with IRQF_NO_SUSPEND
cpuidle / sleep: Use broadcast timer for states that stop local timer
clk: at91: implement suspend/resume for the PMC irqchip
rtc: at91rm9200: rework wakeup and interrupt handling
rtc: at91sam9: rework wakeup and interrupt handling
PM / wakeup: export pm_system_wakeup symbol
genirq / PM: Add flag for shared NO_SUSPEND interrupt lines
ACPI / video: Propagate the error code for acpi_video_register
ACPI / video: Load the module even if ACPI is disabled
PM / Domains: cleanup: rename gpd -> genpd in debugfs interface
cpufreq: ppc: Add missing #include <asm/smp.h>
x86/PCI/ACPI: Relax ACPI resource descriptor checks to work around BIOS bugs
x86/PCI/ACPI: Ignore resources consumed by host bridge itself
cpuidle: Clean up fallback handling in cpuidle_idle_call()
cpuidle / sleep: Do sanity checks in cpuidle_enter_freeze() too
idle / sleep: Avoid excessive disabling and enabling interrupts
PCI: versatile: Update for list_for_each_entry() API change
genirq / PM: better describe IRQF_NO_SUSPEND semantics

+334 -122
+17 -5
Documentation/power/suspend-and-interrupts.txt
@@ -40 +40 @@
 The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
 requesting a special-purpose interrupt.  It causes suspend_device_irqs() to
-leave the corresponding IRQ enabled so as to allow the interrupt to work all
-the time as expected.
+leave the corresponding IRQ enabled so as to allow the interrupt to work as
+expected during the suspend-resume cycle, but does not guarantee that the
+interrupt will wake the system from a suspended state -- for such cases it is
+necessary to use enable_irq_wake().
 
 Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
 user of it.  Thus, if the IRQ is shared, all of the interrupt handlers installed
@@ -112 +110 @@
 IRQF_NO_SUSPEND and enable_irq_wake()
 -------------------------------------
 
-There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
-flag on the same IRQ.
+There are very few valid reasons to use both enable_irq_wake() and the
+IRQF_NO_SUSPEND flag on the same IRQ, and it is never valid to use both for the
+same device.
 
 First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
 interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
@@ -123 +120 @@
 Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
 to individual interrupt handlers, so sharing an IRQ between a system wakeup
-interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
+interrupt source and an IRQF_NO_SUSPEND interrupt source does not generally
+make sense.
+
+In rare cases an IRQ can be shared between a wakeup device driver and an
+IRQF_NO_SUSPEND user.  In order for this to be safe, the wakeup device driver
+must be able to discern spurious IRQs from genuine wakeup events (signalling
+the latter to the core with pm_system_wakeup()), must use enable_irq_wake() to
+ensure that the IRQ will function as a wakeup source, and must request the IRQ
+with IRQF_COND_SUSPEND to tell the core that it meets these requirements.  If
+these requirements are not met, it is not valid to use IRQF_COND_SUSPEND.
+8 -3
arch/x86/pci/acpi.c
@@ -331 +331 @@
 				    struct list_head *list)
 {
 	int ret;
-	struct resource_entry *entry;
+	struct resource_entry *entry, *tmp;
 
 	sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
 	info->bridge = device;
@@ -345 +345 @@
 		dev_dbg(&device->dev,
 			"no IO and memory resources present in _CRS\n");
 	else
-		resource_list_for_each_entry(entry, list)
-			entry->res->name = info->name;
+		resource_list_for_each_entry_safe(entry, tmp, list) {
+			if ((entry->res->flags & IORESOURCE_WINDOW) == 0 ||
+			    (entry->res->flags & IORESOURCE_DISABLED))
+				resource_list_destroy_entry(entry);
+			else
+				entry->res->name = info->name;
+		}
 }
 
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+3 -1
drivers/acpi/resource.c
@@ -42 +42 @@
 	 * CHECKME: len might be required to check versus a minimum
 	 * length as well. 1 for io is fine, but for memory it does
 	 * not make any sense at all.
+	 * Note: some BIOSes report incorrect length for ACPI address space
+	 * descriptor, so remove check of 'reslen == len' to avoid regression.
 	 */
-	if (len && reslen && reslen == len && start <= end)
+	if (len && reslen && start <= end)
 		return true;
 
 	pr_debug("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
+16 -4
drivers/acpi/video.c
@@ -2110 +2110 @@
 
 int acpi_video_register(void)
 {
-	int result = 0;
+	int ret;
+
 	if (register_count) {
 		/*
 		 * if the function of acpi_video_register is already called,
@@ -2123 +2122 @@
 	mutex_init(&video_list_lock);
 	INIT_LIST_HEAD(&video_bus_head);
 
-	result = acpi_bus_register_driver(&acpi_video_bus);
-	if (result < 0)
-		return -ENODEV;
+	ret = acpi_bus_register_driver(&acpi_video_bus);
+	if (ret)
+		return ret;
 
 	/*
 	 * When the acpi_video_bus is loaded successfully, increase
@@ -2177 +2176 @@
 
 static int __init acpi_video_init(void)
 {
+	/*
+	 * Let the module load even if ACPI is disabled (e.g. due to
+	 * a broken BIOS) so that i915.ko can still be loaded on such
+	 * old systems without an AcpiOpRegion.
+	 *
+	 * acpi_video_register() will report -ENODEV later as well due
+	 * to acpi_disabled when i915.ko tries to register itself afterwards.
+	 */
+	if (acpi_disabled)
+		return 0;
+
 	dmi_check_system(video_dmi_table);
 
 	if (intel_opregion_present())
+12 -12
drivers/base/power/domain.c
@@ -2242 +2242 @@
 }
 
 static int pm_genpd_summary_one(struct seq_file *s,
-		struct generic_pm_domain *gpd)
+		struct generic_pm_domain *genpd)
 {
 	static const char * const status_lookup[] = {
 		[GPD_STATE_ACTIVE] = "on",
@@ -2256 +2256 @@
 	struct gpd_link *link;
 	int ret;
 
-	ret = mutex_lock_interruptible(&gpd->lock);
+	ret = mutex_lock_interruptible(&genpd->lock);
 	if (ret)
 		return -ERESTARTSYS;
 
-	if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup)))
+	if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup)))
 		goto exit;
-	seq_printf(s, "%-30s  %-15s ", gpd->name, status_lookup[gpd->status]);
+	seq_printf(s, "%-30s  %-15s ", genpd->name, status_lookup[genpd->status]);
 
 	/*
 	 * Modifications on the list require holding locks on both
 	 * master and slave, so we are safe.
-	 * Also gpd->name is immutable.
+	 * Also genpd->name is immutable.
 	 */
-	list_for_each_entry(link, &gpd->master_links, master_node) {
+	list_for_each_entry(link, &genpd->master_links, master_node) {
 		seq_printf(s, "%s", link->slave->name);
-		if (!list_is_last(&link->master_node, &gpd->master_links))
+		if (!list_is_last(&link->master_node, &genpd->master_links))
 			seq_puts(s, ", ");
 	}
 
-	list_for_each_entry(pm_data, &gpd->dev_list, list_node) {
+	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
 		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
@@ -2287 +2287 @@
 
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&gpd->lock);
+	mutex_unlock(&genpd->lock);
 
 	return 0;
 }
 
 static int pm_genpd_summary_show(struct seq_file *s, void *data)
 {
-	struct generic_pm_domain *gpd;
+	struct generic_pm_domain *genpd;
 	int ret = 0;
 
 	seq_puts(s, "    domain                      status         slaves\n");
@@ -2305 +2305 @@
 	if (ret)
 		return -ERESTARTSYS;
 
-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-		ret = pm_genpd_summary_one(s, gpd);
+	list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
+		ret = pm_genpd_summary_one(s, genpd);
 		if (ret)
 			break;
 	}
+1
drivers/base/power/wakeup.c
@@ -730 +730 @@
 	pm_abort_suspend = true;
 	freeze_wake();
 }
+EXPORT_SYMBOL_GPL(pm_system_wakeup);
 
 void pm_wakeup_clear(void)
 {
+19 -1
drivers/clk/at91/pmc.c
@@ -89 +89 @@
 	return 0;
 }
 
+static void pmc_irq_suspend(struct irq_data *d)
+{
+	struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
+
+	pmc->imr = pmc_read(pmc, AT91_PMC_IMR);
+	pmc_write(pmc, AT91_PMC_IDR, pmc->imr);
+}
+
+static void pmc_irq_resume(struct irq_data *d)
+{
+	struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
+
+	pmc_write(pmc, AT91_PMC_IER, pmc->imr);
+}
+
 static struct irq_chip pmc_irq = {
 	.name = "PMC",
 	.irq_disable = pmc_irq_mask,
 	.irq_mask = pmc_irq_mask,
 	.irq_unmask = pmc_irq_unmask,
 	.irq_set_type = pmc_irq_set_type,
+	.irq_suspend = pmc_irq_suspend,
+	.irq_resume = pmc_irq_resume,
 };
 
 static struct lock_class_key pmc_lock_class;
@@ -241 +224 @@
 		goto out_free_pmc;
 
 	pmc_write(pmc, AT91_PMC_IDR, 0xffffffff);
-	if (request_irq(pmc->virq, pmc_irq_handler, IRQF_SHARED, "pmc", pmc))
+	if (request_irq(pmc->virq, pmc_irq_handler,
+			IRQF_SHARED | IRQF_COND_SUSPEND, "pmc", pmc))
 		goto out_remove_irqdomain;
 
 	return pmc;
+1
drivers/clk/at91/pmc.h
@@ -33 +33 @@
 	spinlock_t lock;
 	const struct at91_pmc_caps *caps;
 	struct irq_domain *irqdomain;
+	u32 imr;
 };
 
 static inline void pmc_lock(struct at91_pmc *pmc)
+2
drivers/cpufreq/ppc-corenet-cpufreq.c
@@ -22 +22 @@
 #include <linux/smp.h>
 #include <sysdev/fsl_soc.h>
 
+#include <asm/smp.h>	/* for get_hard_smp_processor_id() in UP configs */
+
 /**
  * struct cpu_data - per CPU data struct
  * @parent: the parent node of cpu clock
+26 -35
drivers/cpuidle/cpuidle.c
@@ -44 +44 @@
 	off = 1;
 }
 
+bool cpuidle_not_available(struct cpuidle_driver *drv,
+			   struct cpuidle_device *dev)
+{
+	return off || !initialized || !drv || !dev || !dev->enabled;
+}
+
 /**
  * cpuidle_play_dead - cpu off-lining
  *
@@ -72 +66 @@
 	return -ENODEV;
 }
 
-/**
- * cpuidle_find_deepest_state - Find deepest state meeting specific conditions.
- * @drv: cpuidle driver for the given CPU.
- * @dev: cpuidle device for the given CPU.
- * @freeze: Whether or not the state should be suitable for suspend-to-idle.
- */
-static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
-				      struct cpuidle_device *dev, bool freeze)
+static int find_deepest_state(struct cpuidle_driver *drv,
+			      struct cpuidle_device *dev, bool freeze)
 {
 	unsigned int latency_req = 0;
 	int i, ret = freeze ? -1 : CPUIDLE_DRIVER_STATE_START - 1;
@@ -90 +90 @@
 		ret = i;
 	}
 	return ret;
+}
+
+/**
+ * cpuidle_find_deepest_state - Find the deepest available idle state.
+ * @drv: cpuidle driver for the given CPU.
+ * @dev: cpuidle device for the given CPU.
+ */
+int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+			       struct cpuidle_device *dev)
+{
+	return find_deepest_state(drv, dev, false);
 }
 
 static void enter_freeze_proper(struct cpuidle_driver *drv,
@@ -124 +113 @@
 
 /**
  * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.
+ * @drv: cpuidle driver for the given CPU.
+ * @dev: cpuidle device for the given CPU.
  *
  * If there are states with the ->enter_freeze callback, find the deepest of
- * them and enter it with frozen tick.  Otherwise, find the deepest state
- * available and enter it normally.
+ * them and enter it with frozen tick.
  */
-void cpuidle_enter_freeze(void)
+int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
-	struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
-	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
 	int index;
 
 	/*
@@ -139 +129 @@
 	 * that interrupts won't be enabled when it exits and allows the tick to
 	 * be frozen safely.
 	 */
-	index = cpuidle_find_deepest_state(drv, dev, true);
-	if (index >= 0) {
-		enter_freeze_proper(drv, dev, index);
-		return;
-	}
-
-	/*
-	 * It is not safe to freeze the tick, find the deepest state available
-	 * at all and try to enter it normally.
-	 */
-	index = cpuidle_find_deepest_state(drv, dev, false);
+	index = find_deepest_state(drv, dev, true);
 	if (index >= 0)
-		cpuidle_enter(drv, dev, index);
-	else
-		arch_cpu_idle();
+		enter_freeze_proper(drv, dev, index);
 
-	/* Interrupts are enabled again here. */
-	local_irq_disable();
+	return index;
 }
 
 /**
@@ -202 +205 @@
  */
 int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
-	if (off || !initialized)
-		return -ENODEV;
-
-	if (!drv || !dev || !dev->enabled)
-		return -EBUSY;
-
 	return cpuidle_curr_governor->select(drv, dev);
 }
+1 -1
drivers/pci/host/pci-versatile.c
@@ -80 +80 @@
 	if (err)
 		return err;
 
-	resource_list_for_each_entry(win, res, list) {
+	resource_list_for_each_entry(win, res) {
 		struct resource *parent, *res = win->res;
 
 		switch (resource_type(res)) {
+48 -14
drivers/rtc/rtc-at91rm9200.c
@@ -31 +31 @@
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/suspend.h>
 #include <linux/uaccess.h>
 
 #include "rtc-at91rm9200.h"
@@ -55 +54 @@
 static int irq;
 static DEFINE_SPINLOCK(at91_rtc_lock);
 static u32 at91_rtc_shadow_imr;
+static bool suspended;
+static DEFINE_SPINLOCK(suspended_lock);
+static unsigned long cached_events;
+static u32 at91_rtc_imr;
 
 static void at91_rtc_write_ier(u32 mask)
 {
@@ -295 +290 @@
 	struct rtc_device *rtc = platform_get_drvdata(pdev);
 	unsigned int rtsr;
 	unsigned long events = 0;
+	int ret = IRQ_NONE;
 
+	spin_lock(&suspended_lock);
 	rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr();
 	if (rtsr) {		/* this interrupt is shared!  Is it ours? */
 		if (rtsr & AT91_RTC_ALARM)
@@ -311 +304 @@
 
 		at91_rtc_write(AT91_RTC_SCCR, rtsr);	/* clear status reg */
 
-		rtc_update_irq(rtc, 1, events);
+		if (!suspended) {
+			rtc_update_irq(rtc, 1, events);
 
-		dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", __func__,
-			events >> 8, events & 0x000000FF);
+			dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n",
+				__func__, events >> 8, events & 0x000000FF);
+		} else {
+			cached_events |= events;
+			at91_rtc_write_idr(at91_rtc_imr);
+			pm_system_wakeup();
+		}
 
-		return IRQ_HANDLED;
+		ret = IRQ_HANDLED;
 	}
-	return IRQ_NONE;		/* not handled */
+	spin_unlock(&suspended_lock);
+
+	return ret;
 }
 
 static const struct at91_rtc_config at91rm9200_config = {
@@ -416 +401 @@
 			AT91_RTC_CALEV);
 
 	ret = devm_request_irq(&pdev->dev, irq, at91_rtc_interrupt,
-			       IRQF_SHARED,
-			       "at91_rtc", pdev);
+			       IRQF_SHARED | IRQF_COND_SUSPEND,
+			       "at91_rtc", pdev);
 	if (ret) {
 		dev_err(&pdev->dev, "IRQ %d already in use.\n", irq);
 		return ret;
@@ -469 +454 @@
 
 /* AT91RM9200 RTC Power management control */
 
-static u32 at91_rtc_imr;
-
 static int at91_rtc_suspend(struct device *dev)
 {
 	/* this IRQ is shared with DBGU and other hardware which isn't
@@ -477 +464 @@
 	at91_rtc_imr = at91_rtc_read_imr()
 			& (AT91_RTC_ALARM|AT91_RTC_SECEV);
 	if (at91_rtc_imr) {
-		if (device_may_wakeup(dev))
+		if (device_may_wakeup(dev)) {
+			unsigned long flags;
+
 			enable_irq_wake(irq);
-		else
+
+			spin_lock_irqsave(&suspended_lock, flags);
+			suspended = true;
+			spin_unlock_irqrestore(&suspended_lock, flags);
+		} else {
 			at91_rtc_write_idr(at91_rtc_imr);
+		}
 	}
 	return 0;
 }
 
 static int at91_rtc_resume(struct device *dev)
 {
+	struct rtc_device *rtc = dev_get_drvdata(dev);
+
 	if (at91_rtc_imr) {
-		if (device_may_wakeup(dev))
+		if (device_may_wakeup(dev)) {
+			unsigned long flags;
+
+			spin_lock_irqsave(&suspended_lock, flags);
+
+			if (cached_events) {
+				rtc_update_irq(rtc, 1, cached_events);
+				cached_events = 0;
+			}
+
+			suspended = false;
+			spin_unlock_irqrestore(&suspended_lock, flags);
+
 			disable_irq_wake(irq);
-		else
-			at91_rtc_write_ier(at91_rtc_imr);
+		}
+		at91_rtc_write_ier(at91_rtc_imr);
 	}
 	return 0;
 }
+63 -14
drivers/rtc/rtc-at91sam9.c
@@ -23 +23 @@
 #include <linux/io.h>
 #include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
+#include <linux/suspend.h>
 #include <linux/clk.h>
 
 /*
@@ -78 +77 @@
 	unsigned int		gpbr_offset;
 	int			irq;
 	struct clk		*sclk;
+	bool			suspended;
+	unsigned long		events;
+	spinlock_t		lock;
 };
 
 #define rtt_readl(rtc, field) \
@@ -275 +271 @@
 	return 0;
 }
 
-/*
- * IRQ handler for the RTC
- */
-static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
+static irqreturn_t at91_rtc_cache_events(struct sam9_rtc *rtc)
 {
-	struct sam9_rtc *rtc = _rtc;
 	u32 sr, mr;
-	unsigned long events = 0;
 
 	/* Shared interrupt may be for another device.  Note: reading
 	 * SR clears it, so we must only read it in this irq handler!
@@ -289 +290 @@
 
 	/* alarm status */
 	if (sr & AT91_RTT_ALMS)
-		events |= (RTC_AF | RTC_IRQF);
+		rtc->events |= (RTC_AF | RTC_IRQF);
 
 	/* timer update/increment */
 	if (sr & AT91_RTT_RTTINC)
-		events |= (RTC_UF | RTC_IRQF);
-
-	rtc_update_irq(rtc->rtcdev, 1, events);
-
-	pr_debug("%s: num=%ld, events=0x%02lx\n", __func__,
-		events >> 8, events & 0x000000FF);
+		rtc->events |= (RTC_UF | RTC_IRQF);
 
 	return IRQ_HANDLED;
+}
+
+static void at91_rtc_flush_events(struct sam9_rtc *rtc)
+{
+	if (!rtc->events)
+		return;
+
+	rtc_update_irq(rtc->rtcdev, 1, rtc->events);
+	rtc->events = 0;
+
+	pr_debug("%s: num=%ld, events=0x%02lx\n", __func__,
+		rtc->events >> 8, rtc->events & 0x000000FF);
+}
+
+/*
+ * IRQ handler for the RTC
+ */
+static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
+{
+	struct sam9_rtc *rtc = _rtc;
+	int ret;
+
+	spin_lock(&rtc->lock);
+
+	ret = at91_rtc_cache_events(rtc);
+
+	/* We're called in suspended state */
+	if (rtc->suspended) {
+		/* Mask irqs coming from this peripheral */
+		rtt_writel(rtc, MR,
+			   rtt_readl(rtc, MR) &
+			   ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN));
+		/* Trigger a system wakeup */
+		pm_system_wakeup();
+	} else {
+		at91_rtc_flush_events(rtc);
+	}
+
+	spin_unlock(&rtc->lock);
+
+	return ret;
 }
 
 static const struct rtc_class_ops at91_rtc_ops = {
@@ -456 +421 @@
 
 	/* register irq handler after we know what name we'll use */
 	ret = devm_request_irq(&pdev->dev, rtc->irq, at91_rtc_interrupt,
-			       IRQF_SHARED, dev_name(&rtc->rtcdev->dev), rtc);
+			       IRQF_SHARED | IRQF_COND_SUSPEND,
+			       dev_name(&rtc->rtcdev->dev), rtc);
 	if (ret) {
 		dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq);
 		return ret;
@@ -518 +482 @@
 	rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN);
 	if (rtc->imr) {
 		if (device_may_wakeup(dev) && (mr & AT91_RTT_ALMIEN)) {
+			unsigned long flags;
+
 			enable_irq_wake(rtc->irq);
+			spin_lock_irqsave(&rtc->lock, flags);
+			rtc->suspended = true;
+			spin_unlock_irqrestore(&rtc->lock, flags);
 			/* don't let RTTINC cause wakeups */
 			if (mr & AT91_RTT_RTTINCIEN)
 				rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN);
@@ -540 +499 @@
 	u32 mr;
 
 	if (rtc->imr) {
+		unsigned long flags;
+
 		if (device_may_wakeup(dev))
 			disable_irq_wake(rtc->irq);
 		mr = rtt_readl(rtc, MR);
 		rtt_writel(rtc, MR, mr | rtc->imr);
+
+		spin_lock_irqsave(&rtc->lock, flags);
+		rtc->suspended = false;
+		at91_rtc_cache_events(rtc);
+		at91_rtc_flush_events(rtc);
+		spin_unlock_irqrestore(&rtc->lock, flags);
 	}
 
 	return 0;
+45 -4
drivers/tty/serial/atmel_serial.c
@@ -47 +47 @@
 #include <linux/gpio/consumer.h>
 #include <linux/err.h>
 #include <linux/irq.h>
+#include <linux/suspend.h>
 
 #include <asm/io.h>
 #include <asm/ioctls.h>
@@ -174 +173 @@
 	bool			ms_irq_enabled;
 	bool			is_usart;	/* usart or uart */
 	struct timer_list	uart_timer;	/* uart timer */
+
+	bool			suspended;
+	unsigned int		pending;
+	unsigned int		pending_status;
+	spinlock_t		lock_suspended;
+
 	int (*prepare_rx)(struct uart_port *port);
 	int (*prepare_tx)(struct uart_port *port);
 	void (*schedule_rx)(struct uart_port *port);
@@ -1186 +1179 @@
 {
 	struct uart_port *port = dev_id;
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
-	unsigned int status, pending, pass_counter = 0;
+	unsigned int status, pending, mask, pass_counter = 0;
 	bool gpio_handled = false;
+
+	spin_lock(&atmel_port->lock_suspended);
 
 	do {
 		status = atmel_get_lines_status(port);
-		pending = status & UART_GET_IMR(port);
+		mask = UART_GET_IMR(port);
+		pending = status & mask;
 		if (!gpio_handled) {
 			/*
 			 * Dealing with GPIO interrupt
@@ -1216 +1206 @@
 		if (!pending)
 			break;
 
+		if (atmel_port->suspended) {
+			atmel_port->pending |= pending;
+			atmel_port->pending_status = status;
+			UART_PUT_IDR(port, mask);
+			pm_system_wakeup();
+			break;
+		}
+
 		atmel_handle_receive(port, pending);
 		atmel_handle_status(port, pending, status);
 		atmel_handle_transmit(port, pending);
 	} while (pass_counter++ < ATMEL_ISR_PASS_LIMIT);
+
+	spin_unlock(&atmel_port->lock_suspended);
 
 	return pass_counter ? IRQ_HANDLED : IRQ_NONE;
 }
@@ -1762 +1742 @@
 	/*
 	 * Allocate the IRQ
 	 */
-	retval = request_irq(port->irq, atmel_interrupt, IRQF_SHARED,
-			tty ? tty->name : "atmel_serial", port);
+	retval = request_irq(port->irq, atmel_interrupt,
+			     IRQF_SHARED | IRQF_COND_SUSPEND,
+			     tty ? tty->name : "atmel_serial", port);
 	if (retval) {
 		dev_err(port->dev, "atmel_startup - Can't get irq\n");
@@ -2534 +2513 @@
 
 	/* we can not wake up if we're running on slow clock */
 	atmel_port->may_wakeup = device_may_wakeup(&pdev->dev);
-	if (atmel_serial_clk_will_stop())
+	if (atmel_serial_clk_will_stop()) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&atmel_port->lock_suspended, flags);
+		atmel_port->suspended = true;
+		spin_unlock_irqrestore(&atmel_port->lock_suspended, flags);
 		device_set_wakeup_enable(&pdev->dev, 0);
+	}
 
 	uart_suspend_port(&atmel_uart, port);
 
@@ -2552 +2525 @@
 {
 	struct uart_port *port = platform_get_drvdata(pdev);
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	unsigned long flags;
+
+	spin_lock_irqsave(&atmel_port->lock_suspended, flags);
+	if (atmel_port->pending) {
+		atmel_handle_receive(port, atmel_port->pending);
+		atmel_handle_status(port, atmel_port->pending,
+				    atmel_port->pending_status);
+		atmel_handle_transmit(port, atmel_port->pending);
+		atmel_port->pending = 0;
+	}
+	atmel_port->suspended = false;
+	spin_unlock_irqrestore(&atmel_port->lock_suspended, flags);
 
 	uart_resume_port(&atmel_uart, port);
 	device_set_wakeup_enable(&pdev->dev, atmel_port->may_wakeup);
@@ -2631 +2592 @@
 	port = &atmel_ports[ret];
 	port->backup_imr = 0;
 	port->uart.line = ret;
+
+	spin_lock_init(&port->lock_suspended);
 
 	ret = atmel_init_gpios(port, &pdev->dev);
 	if (ret < 0)
+2 -1
drivers/watchdog/at91sam9_wdt.c
@@ -208 +208 @@
 
 	if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) {
 		err = request_irq(wdt->irq, wdt_interrupt,
-				  IRQF_SHARED | IRQF_IRQPOLL,
+				  IRQF_SHARED | IRQF_IRQPOLL |
+				  IRQF_NO_SUSPEND,
 				  pdev->name, wdt);
 		if (err)
 			return err;
+15 -2
include/linux/cpuidle.h
@@ -126 +126 @@
 
 #ifdef CONFIG_CPU_IDLE
 extern void disable_cpuidle(void);
+extern bool cpuidle_not_available(struct cpuidle_driver *drv,
+				  struct cpuidle_device *dev);
 
 extern int cpuidle_select(struct cpuidle_driver *drv,
 			  struct cpuidle_device *dev);
@@ -152 +150 @@
 extern int cpuidle_enable_device(struct cpuidle_device *dev);
 extern void cpuidle_disable_device(struct cpuidle_device *dev);
 extern int cpuidle_play_dead(void);
-extern void cpuidle_enter_freeze(void);
+extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+				      struct cpuidle_device *dev);
+extern int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+				struct cpuidle_device *dev);
 
 extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev);
 #else
 static inline void disable_cpuidle(void) { }
+static inline bool cpuidle_not_available(struct cpuidle_driver *drv,
+					 struct cpuidle_device *dev)
+{return true; }
 static inline int cpuidle_select(struct cpuidle_driver *drv,
 				 struct cpuidle_device *dev)
 {return -ENODEV; }
@@ -191 +183 @@
 {return -ENODEV; }
 static inline void cpuidle_disable_device(struct cpuidle_device *dev) { }
 static inline int cpuidle_play_dead(void) {return -ENODEV; }
-static inline void cpuidle_enter_freeze(void) { }
+static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+					     struct cpuidle_device *dev)
+{return -ENODEV; }
+static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+				       struct cpuidle_device *dev)
+{return -ENODEV; }
 static inline struct cpuidle_driver *cpuidle_get_cpu_driver(
 			struct cpuidle_device *dev) {return NULL; }
 #endif
+8 -1
include/linux/interrupt.h
@@ -52 +52 @@
 * IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished.
 *                Used by threaded interrupts which need to keep the
 *                irq line disabled until the threaded handler has been run.
-* IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
+* IRQF_NO_SUSPEND - Do not disable this IRQ during suspend.  Does not guarantee
+*                   that this interrupt will wake the system from a suspended
+*                   state.  See Documentation/power/suspend-and-interrupts.txt
 * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
 * IRQF_NO_THREAD - Interrupt cannot be threaded
 * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
 *                resume time.
+* IRQF_COND_SUSPEND - If the IRQ is shared with a NO_SUSPEND user, execute this
+*                interrupt handler after suspending interrupts. For system
+*                wakeup devices users need to implement wakeup detection in
+*                their interrupt handlers.
 */
 #define IRQF_DISABLED		0x00000020
 #define IRQF_SHARED		0x00000080
@@ -76 +70 @@
 #define IRQF_FORCE_RESUME	0x00008000
 #define IRQF_NO_THREAD		0x00010000
 #define IRQF_EARLY_RESUME	0x00020000
+#define IRQF_COND_SUSPEND	0x00040000
 
 #define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
+1
include/linux/irqdesc.h
@@ -78 +78 @@
 #ifdef CONFIG_PM_SLEEP
 	unsigned int		nr_actions;
 	unsigned int		no_suspend_depth;
+	unsigned int		cond_suspend_depth;
 	unsigned int		force_resume_depth;
 #endif
 #ifdef CONFIG_PROC_FS
+6 -1
kernel/irq/manage.c
@@ -1474 +1474 @@
 	 * otherwise we'll have trouble later trying to figure out
 	 * which interrupt is which (messes up the interrupt freeing
 	 * logic etc).
+	 *
+	 * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and
+	 * it cannot be set along with IRQF_NO_SUSPEND.
 	 */
-	if ((irqflags & IRQF_SHARED) && !dev_id)
+	if (((irqflags & IRQF_SHARED) && !dev_id) ||
+	    (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) ||
+	    ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND)))
 		return -EINVAL;
 
 	desc = irq_to_desc(irq);
+6 -1
kernel/irq/pm.c
@@ -43 +43 @@
 
 	if (action->flags & IRQF_NO_SUSPEND)
 		desc->no_suspend_depth++;
+	else if (action->flags & IRQF_COND_SUSPEND)
+		desc->cond_suspend_depth++;
 
 	WARN_ON_ONCE(desc->no_suspend_depth &&
-		     desc->no_suspend_depth != desc->nr_actions);
+		     (desc->no_suspend_depth +
+		      desc->cond_suspend_depth) != desc->nr_actions);
 }
 
 /*
@@ -64 +61 @@
 
 	if (action->flags & IRQF_NO_SUSPEND)
 		desc->no_suspend_depth--;
+	else if (action->flags & IRQF_COND_SUSPEND)
+		desc->cond_suspend_depth--;
 }
 
 static bool suspend_device_irq(struct irq_desc *desc, int irq)
+34 -22
kernel/sched/idle.c
@@ -82 +82 @@
 	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
 	int next_state, entered_state;
 	unsigned int broadcast;
+	bool reflect;
 
 	/*
 	 * Check if the idle task must be rescheduled. If it is the
@@ -106 +105 @@
 	 */
 	rcu_idle_enter();
 
+	if (cpuidle_not_available(drv, dev))
+		goto use_default;
+
 	/*
 	 * Suspend-to-idle ("freeze") is a system state in which all user space
 	 * has been frozen, all I/O devices have been suspended and the only
@@ -119 +115 @@
 	 * until a proper wakeup interrupt happens.
 	 */
 	if (idle_should_freeze()) {
-		cpuidle_enter_freeze();
-		local_irq_enable();
-		goto exit_idle;
-	}
-
-	/*
-	 * Ask the cpuidle framework to choose a convenient idle state.
-	 * Fall back to the default arch idle method on errors.
-	 */
-	next_state = cpuidle_select(drv, dev);
-	if (next_state < 0) {
-use_default:
-		/*
-		 * We can't use the cpuidle framework, let's use the default
-		 * idle routine.
-		 */
-		if (current_clr_polling_and_test())
+		entered_state = cpuidle_enter_freeze(drv, dev);
+		if (entered_state >= 0) {
 			local_irq_enable();
-		else
-			arch_cpu_idle();
+			goto exit_idle;
+		}
 
-		goto exit_idle;
+		reflect = false;
+		next_state = cpuidle_find_deepest_state(drv, dev);
+	} else {
+		reflect = true;
+		/*
+		 * Ask the cpuidle framework to choose a convenient idle state.
+		 */
+		next_state = cpuidle_select(drv, dev);
 	}
-
+	/* Fall back to the default arch idle method on errors. */
+	if (next_state < 0)
+		goto use_default;
 
 	/*
 	 * The idle task must be scheduled, it is pointless to
@@ -181 +183 @@
 	/*
 	 * Give the governor an opportunity to reflect on the outcome
 	 */
-	cpuidle_reflect(dev, entered_state);
+	if (reflect)
+		cpuidle_reflect(dev, entered_state);
 
 exit_idle:
 	__current_set_polling();
@@ -195 +196 @@
 
 	rcu_idle_exit();
 	start_critical_timings();
+	return;
+
+use_default:
+	/*
+	 * We can't use the cpuidle framework, let's use the default
+	 * idle routine.
+	 */
+	if (current_clr_polling_and_test())
+		local_irq_enable();
+	else
+		arch_cpu_idle();
+
+	goto exit_idle;
 }
 
 /*