Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (aka "Andrew's patch-bomb")

Andrew elucidates:
- First installment of MM. We have a HUGE number of MM patches this
time. It's crazy.
- MAINTAINERS updates
- backlight updates
- leds
- checkpatch updates
- misc ELF stuff
- rtc updates
- reiserfs
- procfs
- some misc other bits

* akpm: (124 commits)
user namespace: make signal.c respect user namespaces
workqueue: make alloc_workqueue() take printf fmt and args for name
procfs: add hidepid= and gid= mount options
procfs: parse mount options
procfs: introduce the /proc/<pid>/map_files/ directory
procfs: make proc_get_link to use dentry instead of inode
signal: add block_sigmask() for adding sigmask to current->blocked
sparc: make SA_NOMASK a synonym of SA_NODEFER
reiserfs: don't lock root inode searching
reiserfs: don't lock journal_init()
reiserfs: delay reiserfs lock until journal initialization
reiserfs: delete comments referring to the BKL
drivers/rtc/interface.c: fix alarm rollover when day or month is out-of-range
drivers/rtc/rtc-twl.c: add DT support for RTC inside twl4030/twl6030
drivers/rtc/: remove redundant spi driver bus initialization
drivers/rtc/rtc-jz4740.c: make jz4740_rtc_driver static
drivers/rtc/rtc-mc13xxx.c: make mc13xxx_rtc_idtable static
rtc: convert drivers/rtc/* to use module_platform_driver()
drivers/rtc/rtc-wm831x.c: convert to devm_kzalloc()
drivers/rtc/rtc-wm831x.c: remove unused period IRQ handler
...

+3045 -1876
+12
Documentation/ABI/testing/sysfs-class-rtc-rtc0-device-rtc_calibration
···
 1 + What:		Attribute for calibrating ST-Ericsson AB8500 Real Time Clock
 2 + Date:		Oct 2011
 3 + KernelVersion:	3.0
 4 + Contact:	Mark Godfrey <mark.godfrey@stericsson.com>
 5 + Description:	The rtc_calibration attribute allows the userspace to
 6 +		calibrate the AB8500.s 32KHz Real Time Clock.
 7 +		Every 60 seconds the AB8500 will correct the RTC's value
 8 +		by adding to it the value of this attribute.
 9 +		The range of the attribute is -127 to +127 in units of
10 +		30.5 micro-seconds (half-parts-per-million of the 32KHz clock)
11 + Users:		The /vendor/st-ericsson/base_utilities/core/rtc_calibration
12 +		daemon uses this interface.
+12
Documentation/devicetree/bindings/rtc/twl-rtc.txt
···
 1 + * TI twl RTC
 2 +
 3 + The TWL family (twl4030/6030) contains a RTC.
 4 +
 5 + Required properties:
 6 + - compatible : Should be twl4030-rtc
 7 +
 8 + Examples:
 9 +
10 + rtc@0 {
11 +     compatible = "ti,twl4030-rtc";
12 + };
+39
Documentation/filesystems/proc.txt
···
  41   41  3.5	/proc/<pid>/mountinfo - Information about mounts
  42   42  3.6	/proc/<pid>/comm & /proc/<pid>/task/<tid>/comm
  43   43
       44  + 4	Configuring procfs
       45  + 4.1	Mount options
  44   46
  45   47  ------------------------------------------------------------------------------
  46   48  Preface
···
1544 1542  is limited in size compared to the cmdline value, so writing anything longer
1545 1543  then the kernel's TASK_COMM_LEN (currently 16 chars) will result in a truncated
1546 1544  comm value.
     1545 +
     1546 +
     1547 + ------------------------------------------------------------------------------
     1548 + Configuring procfs
     1549 + ------------------------------------------------------------------------------
     1550 +
     1551 + 4.1	Mount options
     1552 + ---------------------
     1553 +
     1554 + The following mount options are supported:
     1555 +
     1556 +	hidepid=	Set /proc/<pid>/ access mode.
     1557 +	gid=		Set the group authorized to learn processes information.
     1558 +
     1559 + hidepid=0 means classic mode - everybody may access all /proc/<pid>/ directories
     1560 + (default).
     1561 +
     1562 + hidepid=1 means users may not access any /proc/<pid>/ directories but their
     1563 + own. Sensitive files like cmdline, sched*, status are now protected against
     1564 + other users. This makes it impossible to learn whether any user runs
     1565 + specific program (given the program doesn't reveal itself by its behaviour).
     1566 + As an additional bonus, as /proc/<pid>/cmdline is unaccessible for other users,
     1567 + poorly written programs passing sensitive information via program arguments are
     1568 + now protected against local eavesdroppers.
     1569 +
     1570 + hidepid=2 means hidepid=1 plus all /proc/<pid>/ will be fully invisible to other
     1571 + users. It doesn't mean that it hides a fact whether a process with a specific
     1572 + pid value exists (it can be learned by other means, e.g. by "kill -0 $PID"),
     1573 + but it hides process' uid and gid, which may be learned by stat()'ing
     1574 + /proc/<pid>/ otherwise. It greatly complicates an intruder's task of gathering
     1575 + information about running processes, whether some daemon runs with elevated
     1576 + privileges, whether other user runs some sensitive program, whether other users
     1577 + run any program at all, etc.
     1578 +
     1579 + gid= defines a group authorized to learn processes information otherwise
     1580 + prohibited by hidepid=. If you use some daemon like identd which needs to learn
     1581 + information about processes information, just add identd to this group.
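For reference, the new options can be applied to an already-mounted /proc or made persistent via fstab. A minimal sketch; the gid value 101 is a placeholder for whatever local group (e.g. for identd) is chosen as exempt:

```
# Remount the live /proc with the new options (requires root):
#   mount -o remount,hidepid=2,gid=101 /proc
#
# Equivalent persistent /etc/fstab entry:
proc	/proc	proc	defaults,hidepid=2,gid=101	0	0
```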
+19
Documentation/kernel-parameters.txt
···
628 628  	no_debug_objects
629 629  			[KNL] Disable object debugging
630 630
    631 +	debug_guardpage_minorder=
    632 +			[KNL] When CONFIG_DEBUG_PAGEALLOC is set, this
    633 +			parameter allows control of the order of pages that will
    634 +			be intentionally kept free (and hence protected) by the
    635 +			buddy allocator. Bigger value increase the probability
    636 +			of catching random memory corruption, but reduce the
    637 +			amount of memory for normal system use. The maximum
    638 +			possible value is MAX_ORDER/2. Setting this parameter
    639 +			to 1 or 2 should be enough to identify most random
    640 +			memory corruption problems caused by bugs in kernel or
    641 +			driver code when a CPU writes to (or reads from) a
    642 +			random memory location. Note that there exists a class
    643 +			of memory corruptions problems caused by buggy H/W or
    644 +			F/W or by drivers badly programing DMA (basically when
    645 +			memory is written at bus level and the CPU MMU is
    646 +			bypassed) which are not detectable by
    647 +			CONFIG_DEBUG_PAGEALLOC, hence this option will not help
    648 +			tracking down these problems.
    649 +
631 650  	debugpat	[X86] Enable PAT debugging
632 651
633 652  	decnet.addr=	[HW,NET]
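As a usage sketch, the parameter goes on the kernel command line; the bootloader file path and regeneration step below are distro-specific illustrations, and it only takes effect on kernels built with CONFIG_DEBUG_PAGEALLOC=y:

```
# /etc/default/grub (illustrative):
GRUB_CMDLINE_LINUX="debug_guardpage_minorder=1"
# Then regenerate the bootloader config, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
```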
+6 -6
Documentation/trace/events-kmem.txt
···
40 40  ==================
41 41  mm_page_alloc		  page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s
42 42  mm_page_alloc_zone_locked page=%p pfn=%lu order=%u migratetype=%d cpu=%d percpu_refill=%d
43  -  mm_page_free_direct	  page=%p pfn=%lu order=%d
44  -  mm_pagevec_free		  page=%p pfn=%lu order=%d cold=%d
43  +  mm_page_free		  page=%p pfn=%lu order=%d
44  +  mm_page_free_batched	  page=%p pfn=%lu order=%d cold=%d
45 45
46 46  These four events deal with page allocation and freeing. mm_page_alloc is
47 47  a simple indicator of page allocator activity. Pages may be allocated from
···
53 53  impairs performance by disabling interrupts, dirtying cache lines between
54 54  CPUs and serialising many CPUs.
55 55
56  -  When a page is freed directly by the caller, the mm_page_free_direct event
56  +  When a page is freed directly by the caller, the only mm_page_free event
57 57  is triggered. Significant amounts of activity here could indicate that the
58 58  callers should be batching their activities.
59 59
60  -  When pages are freed using a pagevec, the mm_pagevec_free is
61  -  triggered. Broadly speaking, pages are taken off the LRU lock in bulk and
62  -  freed in batch with a pagevec. Significant amounts of activity here could
60  +  When pages are freed in batch, the also mm_page_free_batched is triggered.
61  +  Broadly speaking, pages are taken off the LRU lock in bulk and
62  +  freed in batch with a page list. Significant amounts of activity here could
63 63  indicate that the system is under memory pressure and can also indicate
64 64  contention on the zone->lru_lock.
65 65
+10 -10
Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
···
 17  17
 18  18  # Tracepoint events
 19  19  use constant MM_PAGE_ALLOC		=> 1;
 20   -  use constant MM_PAGE_FREE_DIRECT	=> 2;
 21   -  use constant MM_PAGEVEC_FREE		=> 3;
 20   +  use constant MM_PAGE_FREE		=> 2;
 21   +  use constant MM_PAGE_FREE_BATCHED	=> 3;
 22  22  use constant MM_PAGE_PCPU_DRAIN		=> 4;
 23  23  use constant MM_PAGE_ALLOC_ZONE_LOCKED	=> 5;
 24  24  use constant MM_PAGE_ALLOC_EXTFRAG	=> 6;
···
223 223  	# Perl Switch() sucks majorly
224 224  	if ($tracepoint eq "mm_page_alloc") {
225 225  		$perprocesspid{$process_pid}->{MM_PAGE_ALLOC}++;
226   -  	} elsif ($tracepoint eq "mm_page_free_direct") {
227   -  		$perprocesspid{$process_pid}->{MM_PAGE_FREE_DIRECT}++;
228   -  	} elsif ($tracepoint eq "mm_pagevec_free") {
229   -  		$perprocesspid{$process_pid}->{MM_PAGEVEC_FREE}++;
226   +  	} elsif ($tracepoint eq "mm_page_free") {
227   +  		$perprocesspid{$process_pid}->{MM_PAGE_FREE}++
228   +  	} elsif ($tracepoint eq "mm_page_free_batched") {
229   +  		$perprocesspid{$process_pid}->{MM_PAGE_FREE_BATCHED}++;
230 230  	} elsif ($tracepoint eq "mm_page_pcpu_drain") {
231 231  		$perprocesspid{$process_pid}->{MM_PAGE_PCPU_DRAIN}++;
232 232  		$perprocesspid{$process_pid}->{STATE_PCPU_PAGES_DRAINED}++;
···
336 336  			$process_pid,
337 337  			$stats{$process_pid}->{MM_PAGE_ALLOC},
338 338  			$stats{$process_pid}->{MM_PAGE_ALLOC_ZONE_LOCKED},
339   -  			$stats{$process_pid}->{MM_PAGE_FREE_DIRECT},
340   -  			$stats{$process_pid}->{MM_PAGEVEC_FREE},
339   +  			$stats{$process_pid}->{MM_PAGE_FREE},
340   +  			$stats{$process_pid}->{MM_PAGE_FREE_BATCHED},
341 341  			$stats{$process_pid}->{MM_PAGE_PCPU_DRAIN},
342 342  			$stats{$process_pid}->{HIGH_PCPU_DRAINS},
343 343  			$stats{$process_pid}->{HIGH_PCPU_REFILLS},
···
364 364
365 365  		$perprocess{$process}->{MM_PAGE_ALLOC} += $perprocesspid{$process_pid}->{MM_PAGE_ALLOC};
366 366  		$perprocess{$process}->{MM_PAGE_ALLOC_ZONE_LOCKED} += $perprocesspid{$process_pid}->{MM_PAGE_ALLOC_ZONE_LOCKED};
367   -  		$perprocess{$process}->{MM_PAGE_FREE_DIRECT} += $perprocesspid{$process_pid}->{MM_PAGE_FREE_DIRECT};
368   -  		$perprocess{$process}->{MM_PAGEVEC_FREE} += $perprocesspid{$process_pid}->{MM_PAGEVEC_FREE};
367   +  		$perprocess{$process}->{MM_PAGE_FREE} += $perprocesspid{$process_pid}->{MM_PAGE_FREE};
368   +  		$perprocess{$process}->{MM_PAGE_FREE_BATCHED} += $perprocesspid{$process_pid}->{MM_PAGE_FREE_BATCHED};
369 369  		$perprocess{$process}->{MM_PAGE_PCPU_DRAIN} += $perprocesspid{$process_pid}->{MM_PAGE_PCPU_DRAIN};
370 370  		$perprocess{$process}->{HIGH_PCPU_DRAINS} += $perprocesspid{$process_pid}->{HIGH_PCPU_DRAINS};
371 371  		$perprocess{$process}->{HIGH_PCPU_REFILLS} += $perprocesspid{$process_pid}->{HIGH_PCPU_REFILLS};
+20 -20
Documentation/trace/tracepoint-analysis.txt
···
 93  93  for a duration of time can be examined.
 94  94
 95  95    $ perf stat -a \
 96   -  	-e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
 97   -  	-e kmem:mm_pagevec_free \
 96   +  	-e kmem:mm_page_alloc -e kmem:mm_page_free \
 97   +  	-e kmem:mm_page_free_batched \
 98  98  	sleep 10
 99  99    Performance counter stats for 'sleep 10':
100 100
101 101              9630  kmem:mm_page_alloc
102   -            2143  kmem:mm_page_free_direct
103   -            7424  kmem:mm_pagevec_free
102   +            2143  kmem:mm_page_free
103   +            7424  kmem:mm_page_free_batched
104 104
105 105     10.002577764  seconds time elapsed
106 106
···
119 119  Events can be activated and tracked for the duration of a process on a local
120 120  basis using PCL such as follows.
121 121
122   -    $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free_direct
123   -  		-e kmem:mm_pagevec_free ./hackbench 10
122   +    $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free
123   +  		-e kmem:mm_page_free_batched ./hackbench 10
124 124    Time: 0.909
125 125
126 126    Performance counter stats for './hackbench 10':
127 127
128 128            17803  kmem:mm_page_alloc
129   -          12398  kmem:mm_page_free_direct
130   -           4827  kmem:mm_pagevec_free
129   +          12398  kmem:mm_page_free
130   +           4827  kmem:mm_page_free_batched
131 131
132 132      0.973913387  seconds time elapsed
133 133
···
146 146  performance analyst to do it by hand. In the event that the discrete event
147 147  occurrences are useful to the performance analyst, then perf can be used.
148 148
149   -    $ perf stat --repeat 5 -e kmem:mm_page_alloc -e kmem:mm_page_free_direct
150   -  		-e kmem:mm_pagevec_free ./hackbench 10
149   +    $ perf stat --repeat 5 -e kmem:mm_page_alloc -e kmem:mm_page_free
150   +  		-e kmem:mm_page_free_batched ./hackbench 10
151 151    Time: 0.890
152 152    Time: 0.895
153 153    Time: 0.915
···
157 157    Performance counter stats for './hackbench 10' (5 runs):
158 158
159 159            16630  kmem:mm_page_alloc         ( +-   3.542% )
160   -          11486  kmem:mm_page_free_direct   ( +-   4.771% )
161   -           4730  kmem:mm_pagevec_free       ( +-   2.325% )
160   +          11486  kmem:mm_page_free          ( +-   4.771% )
161   +           4730  kmem:mm_page_free_batched  ( +-   2.325% )
162 162
163 163     0.982653002  seconds time elapsed   ( +-   1.448% )
164 164
···
168 168  Using --repeat, it is also possible to view how events are fluctuating over
169 169  time on a system-wide basis using -a and sleep.
170 170
171   -    $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
172   -  	-e kmem:mm_pagevec_free \
171   +    $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free \
172   +  	-e kmem:mm_page_free_batched \
173 173  	-a --repeat 10 \
174 174  	sleep 1
175 175    Performance counter stats for 'sleep 1' (10 runs):
176 176
177 177             1066  kmem:mm_page_alloc         ( +-  26.148% )
178   -            182  kmem:mm_page_free_direct   ( +-   5.464% )
179   -            890  kmem:mm_pagevec_free       ( +-  30.079% )
178   +            182  kmem:mm_page_free          ( +-   5.464% )
179   +            890  kmem:mm_page_free_batched  ( +-  30.079% )
180 180
181 181      1.002251757  seconds time elapsed   ( +-   0.005% )
182 182
···
220 220  data must be recorded. At the time of writing, this required root:
221 221
222 222    $ perf record -c 1 \
223   -  	-e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
224   -  	-e kmem:mm_pagevec_free \
223   +  	-e kmem:mm_page_alloc -e kmem:mm_page_free \
224   +  	-e kmem:mm_page_free_batched \
225 225  	./hackbench 10
226 226    Time: 0.894
227 227    [ perf record: Captured and wrote 0.733 MB perf.data (~32010 samples) ]
···
260 260  at it:
261 261
262 262    $ perf record -c 1 -f \
263   -  	-e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
264   -  	-e kmem:mm_pagevec_free \
263   +  	-e kmem:mm_page_alloc -e kmem:mm_page_free \
264   +  	-e kmem:mm_page_free_batched \
265 265  	-p `pidof X`
266 266
267 267  This was interrupted after a few seconds and
+25 -24
MAINTAINERS
···
 342  342  F:	drivers/mfd/adp5520.c
 343  343  F:	drivers/video/backlight/adp5520_bl.c
 344  344  F:	drivers/leds/leds-adp5520.c
 345    -  F:	drivers/gpio/adp5520-gpio.c
 345    +  F:	drivers/gpio/gpio-adp5520.c
 346  346  F:	drivers/input/keyboard/adp5520-keys.c
 347  347
 348  348  ADP5588 QWERTY KEYPAD AND IO EXPANDER DRIVER (ADP5588/ADP5587)
···
 351  351  W:	http://wiki.analog.com/ADP5588
 352  352  S:	Supported
 353  353  F:	drivers/input/keyboard/adp5588-keys.c
 354    -  F:	drivers/gpio/adp5588-gpio.c
 354    +  F:	drivers/gpio/gpio-adp5588.c
 355  355
 356  356  ADP8860 BACKLIGHT DRIVER (ADP8860/ADP8861/ADP8863)
 357  357  M:	Michael Hennerich <michael.hennerich@analog.com>
···
 914  914  M:	Nicolas Pitre <nico@fluxnic.net>
 915  915  L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 916  916  S:	Odd Fixes
 917    -  F:	arch/arm/mach-loki/
 918  917  F:	arch/arm/mach-kirkwood/
 919  918  F:	arch/arm/mach-mv78xx0/
 920  919  F:	arch/arm/mach-orion5x/
···
1075 1076  S:	Maintained
1076 1077  F:	arch/arm/mach-s5pv210/mach-aquila.c
1077 1078  F:	arch/arm/mach-s5pv210/mach-goni.c
1078    -  F:	arch/arm/mach-exynos4/mach-universal_c210.c
1079    -  F:	arch/arm/mach-exynos4/mach-nuri.c
1079    +  F:	arch/arm/mach-exynos/mach-universal_c210.c
1080    +  F:	arch/arm/mach-exynos/mach-nuri.c
1080 1081
1081 1082  ARM/SAMSUNG S5P SERIES FIMC SUPPORT
1082 1083  M:	Kyungmin Park <kyungmin.park@samsung.com>
···
1104 1105  L:	linux-arm-kernel@lists.infradead.org
1105 1106  L:	linux-media@vger.kernel.org
1106 1107  S:	Maintained
1107    -  F:	arch/arm/plat-s5p/dev-tv.c
1108 1108  F:	drivers/media/video/s5p-tv/
1109 1109
1110 1110  ARM/SHMOBILE ARM ARCHITECTURE
···
1138 1140  W:	http://www.mcuos.com
1139 1141  S:	Maintained
1140 1142  F:	arch/arm/mach-w90x900/
1141    -  F:	arch/arm/mach-nuc93x/
1142 1143  F:	drivers/input/keyboard/w90p910_keypad.c
1143 1144  F:	drivers/input/touchscreen/w90p910_ts.c
1144 1145  F:	drivers/watchdog/nuc900_wdt.c
1145 1146  F:	drivers/net/ethernet/nuvoton/w90p910_ether.c
1146 1147  F:	drivers/mtd/nand/nuc900_nand.c
1147 1148  F:	drivers/rtc/rtc-nuc900.c
1148    -  F:	drivers/spi/spi_nuc900.c
1149    +  F:	drivers/spi/spi-nuc900.c
1149 1150  F:	drivers/usb/host/ehci-w90x900.c
1150 1151  F:	drivers/video/nuc900fb.c
1151 1152
···
1169 1172  S:	Maintained
1170 1173  F:	arch/arm/mach-ux500/
1171 1174  F:	drivers/dma/ste_dma40*
1172    -  F:	drivers/mfd/ab3550*
1173 1175  F:	drivers/mfd/abx500*
1174 1176  F:	drivers/mfd/ab8500*
1175 1177  F:	drivers/mfd/stmpe*
···
1348 1352  ATMEL SPI DRIVER
1349 1353  M:	Nicolas Ferre <nicolas.ferre@atmel.com>
1350 1354  S:	Supported
1351    -  F:	drivers/spi/atmel_spi.*
1355    +  F:	drivers/spi/spi-atmel.*
1352 1356
1353 1357  ATMEL USBA UDC DRIVER
1354 1358  M:	Nicolas Ferre <nicolas.ferre@atmel.com>
···
1487 1491  L:	uclinux-dist-devel@blackfin.uclinux.org
1488 1492  W:	http://blackfin.uclinux.org
1489 1493  S:	Supported
1490    -  F:	drivers/tty/serial/bfin_5xx.c
1494    +  F:	drivers/tty/serial/bfin_uart.c
1491 1495
1492 1496  BLACKFIN WATCHDOG DRIVER
1493 1497  M:	Mike Frysinger <vapier.adi@gmail.com>
···
1617 1621  M:	Michael Buesch <m@bues.ch>
1618 1622  W:	http://bu3sch.de/btgpio.php
1619 1623  S:	Maintained
1620    -  F:	drivers/gpio/bt8xxgpio.c
1624    +  F:	drivers/gpio/gpio-bt8xx.c
1621 1625
1622 1626  BTRFS FILE SYSTEM
1623 1627  M:	Chris Mason <chris.mason@oracle.com>
···
1658 1662  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
1659 1663  S:	Maintained
1660 1664  F:	Documentation/video4linux/cafe_ccic
1661    -  F:	drivers/media/video/cafe_ccic*
1665    +  F:	drivers/media/video/marvell-ccic/
1662 1666
1663 1667  CAIF NETWORK LAYER
1664 1668  M:	Sjur Braendeland <sjur.brandeland@stericsson.com>
···
2096 2100  L:	netdev@vger.kernel.org
2097 2101  S:	Orphan
2098 2102  F:	Documentation/networking/dmfe.txt
2099    -  F:	drivers/net/ethernet/tulip/dmfe.c
2103    +  F:	drivers/net/ethernet/dec/tulip/dmfe.c
2100 2104
2101 2105  DC390/AM53C974 SCSI driver
2102 2106  M:	Kurt Garloff <garloff@suse.de>
···
2168 2172  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
2169 2173  S:	Maintained
2170 2174  F:	drivers/usb/dwc3/
     2175 +
     2176 + DEVICE FREQUENCY (DEVFREQ)
     2177 + M:	MyungJoo Ham <myungjoo.ham@samsung.com>
     2178 + M:	Kyungmin Park <kyungmin.park@samsung.com>
     2179 + L:	linux-kernel@vger.kernel.org
     2180 + S:	Maintained
     2181 + F:	drivers/devfreq/
2171 2182
2172 2183  DEVICE NUMBER REGISTRY
2173 2184  M:	Torben Mathiasen <device@lanana.org>
···
2913 2910  M:	Kristoffer Glembo <kristoffer@gaisler.com>
2914 2911  L:	netdev@vger.kernel.org
2915 2912  S:	Maintained
2916    -  F:	drivers/net/greth*
2913    +  F:	drivers/net/ethernet/aeroflex/
2917 2914
2918 2915  GSPCA FINEPIX SUBDRIVER
2919 2916  M:	Frank Zago <frank@zago.net>
···
3863 3860  S:	Supported
3864 3861  F:	Documentation/security/keys-trusted-encrypted.txt
3865 3862  F:	include/keys/encrypted-type.h
3866    -  F:	security/keys/encrypted.c
3867    -  F:	security/keys/encrypted.h
3863    +  F:	security/keys/encrypted-keys/
3868 3864
3869 3865  KGDB / KDB /debug_core
3870 3866  M:	Jason Wessel <jason.wessel@windriver.com>
···
5315 5313  S:	Maintained
5316 5314  F:	arch/arm/mach-pxa/
5317 5315  F:	drivers/pcmcia/pxa2xx*
5318    -  F:	drivers/spi/pxa2xx*
5316    +  F:	drivers/spi/spi-pxa2xx*
5319 5317  F:	drivers/usb/gadget/pxa2*
5320 5318  F:	include/sound/pxa2xx-lib.h
5321 5319  F:	sound/arm/pxa*
···
5797 5795  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc.git
5798 5796  S:	Maintained
5799 5797  F:	drivers/mmc/host/sdhci.*
     5798 + F:	drivers/mmc/host/sdhci-pltfm.[ch]
5800 5799
5801 5800  SECURE DIGITAL HOST CONTROLLER INTERFACE, OPEN FIRMWARE BINDINGS (SDHCI-OF)
5802 5801  M:	Anton Vorontsov <avorontsov@ru.mvista.com>
5803 5802  L:	linuxppc-dev@lists.ozlabs.org
5804 5803  L:	linux-mmc@vger.kernel.org
5805 5804  S:	Maintained
5806    -  F:	drivers/mmc/host/sdhci-of.*
5805    +  F:	drivers/mmc/host/sdhci-pltfm.[ch]
5807 5806
5808 5807  SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) SAMSUNG DRIVER
5809 5808  M:	Ben Dooks <ben-linux@fluff.org>
···
6183 6180  W:	http://www.st.com/spear
6184 6181  S:	Maintained
6185 6182  F:	arch/arm/mach-spear*/clock.c
6186    -  F:	arch/arm/mach-spear*/include/mach/clkdev.h
6187 6183  F:	arch/arm/plat-spear/clock.c
6188    -  F:	arch/arm/plat-spear/include/plat/clkdev.h
6189 6184  F:	arch/arm/plat-spear/include/plat/clock.h
6190 6185
6191 6186  SPEAR PAD MULTIPLEXING SUPPORT
···
6307 6306  M:	Jarod Wilson <jarod@wilsonet.com>
6308 6307  W:	http://www.lirc.org/
6309 6308  S:	Odd Fixes
6310    -  F:	drivers/staging/lirc/
6309    +  F:	drivers/staging/media/lirc/
6311 6310
6312 6311  STAGING - NVIDIA COMPLIANT EMBEDDED CONTROLLER INTERFACE (nvec)
6313 6312  M:	Julian Andres Klode <jak@jak-linux.org>
···
6343 6342  STAGING - SOFTLOGIC 6x10 MPEG CODEC
6344 6343  M:	Ben Collins <bcollins@bluecherry.net>
6345 6344  S:	Odd Fixes
6346    -  F:	drivers/staging/solo6x10/
6345    +  F:	drivers/staging/media/solo6x10/
6347 6346
6348 6347  STAGING - SPEAKUP CONSOLE SPEECH DRIVER
6349 6348  M:	William Hubbs <w.d.hubbs@gmail.com>
···
6646 6645  M:	Grant Grundler <grundler@parisc-linux.org>
6647 6646  L:	netdev@vger.kernel.org
6648 6647  S:	Maintained
6649    -  F:	drivers/net/ethernet/tulip/
6648    +  F:	drivers/net/ethernet/dec/tulip/
6650 6649
6651 6650  TUN/TAP driver
6652 6651  M:	Maxim Krasnyansky <maxk@qualcomm.com>
+1
arch/arm/Kconfig
···
16 16  	select HAVE_FTRACE_MCOUNT_RECORD if (!XIP_KERNEL)
17 17  	select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL)
18 18  	select HAVE_FUNCTION_GRAPH_TRACER if (!THUMB2_KERNEL)
   19 +	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
19 20  	select HAVE_GENERIC_DMA_COHERENT
20 21  	select HAVE_KERNEL_GZIP
21 22  	select HAVE_KERNEL_LZO
+1
arch/mips/Kconfig
···
16 16  	select HAVE_FUNCTION_GRAPH_TRACER
17 17  	select HAVE_KPROBES
18 18  	select HAVE_KRETPROBES
   19 +	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
19 20  	select RTC_LIB if !MACH_LOONGSON
20 21  	select GENERIC_ATOMIC64 if !64BIT
21 22  	select HAVE_DMA_ATTRS
+2 -1
arch/sparc/include/asm/signal.h
···
143 143  #define SA_ONSTACK	_SV_SSTACK
144 144  #define SA_RESTART	_SV_INTR
145 145  #define SA_ONESHOT	_SV_RESET
146   -  #define SA_NOMASK	0x20u
146   +  #define SA_NODEFER	0x20u
147 147  #define SA_NOCLDWAIT	0x100u
148 148  #define SA_SIGINFO	0x200u
149 149
    150 + #define SA_NOMASK	SA_NODEFER
150 151
151 152  #define SIG_BLOCK	0x01	/* for blocking signals */
152 153  #define SIG_UNBLOCK	0x02	/* for unblocking signals */
+1
arch/x86/Kconfig
···
62 62  	select ANON_INODES
63 63  	select HAVE_ARCH_KMEMCHECK
64 64  	select HAVE_USER_RETURN_NOTIFIER
   65 +	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
65 66  	select HAVE_ARCH_JUMP_LABEL
66 67  	select HAVE_TEXT_POKE_SMP
67 68  	select HAVE_GENERIC_HARDIRQS
+1 -5
arch/x86/kernel/signal.c
···
682 682  handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
683 683  		struct pt_regs *regs)
684 684  {
685   -  	sigset_t blocked;
686 685  	int ret;
687 686
688 687  	/* Are we from a system call? */
···
732 733  	 */
733 734  	regs->flags &= ~X86_EFLAGS_TF;
734 735
735   -  	sigorsets(&blocked, &current->blocked, &ka->sa.sa_mask);
736   -  	if (!(ka->sa.sa_flags & SA_NODEFER))
737   -  		sigaddset(&blocked, sig);
738   -  	set_current_blocked(&blocked);
736   +  	block_sigmask(ka, sig);
739 737
740 738  	tracehook_signal_handler(sig, info, ka, regs,
741 739  				 test_thread_flag(TIF_SINGLESTEP));
+8
drivers/leds/Kconfig
···
388 388  	  pin function. The latter to support brightness control.
389 389  	  Brightness control is supported but hardware blinking is not.
390 390
    391 + config LEDS_TCA6507
    392 +	tristate "LED Support for TCA6507 I2C chip"
    393 +	depends on LEDS_CLASS && I2C
    394 +	help
    395 +	  This option enables support for LEDs connected to TC6507
    396 +	  LED driver chips accessed via the I2C bus.
    397 +	  Driver support brightness control and hardware-assisted blinking.
    398 +
391 399  config LEDS_TRIGGERS
392 400  	bool "LED Trigger support"
393 401  	depends on LEDS_CLASS
+1
drivers/leds/Makefile
···
25 25  obj-$(CONFIG_LEDS_LP3944)		+= leds-lp3944.o
26 26  obj-$(CONFIG_LEDS_LP5521)		+= leds-lp5521.o
27 27  obj-$(CONFIG_LEDS_LP5523)		+= leds-lp5523.o
   28 + obj-$(CONFIG_LEDS_TCA6507)		+= leds-tca6507.o
28 29  obj-$(CONFIG_LEDS_CLEVO_MAIL)		+= leds-clevo-mail.o
29 30  obj-$(CONFIG_LEDS_HP6XX)		+= leds-hp6xx.o
30 31  obj-$(CONFIG_LEDS_FSG)			+= leds-fsg.o
+1 -11
drivers/leds/leds-88pm860x.c
···
238 238  	.remove	= pm860x_led_remove,
239 239  };
240 240
241   -  static int __devinit pm860x_led_init(void)
242   -  {
243   -  	return platform_driver_register(&pm860x_led_driver);
244   -  }
245   -  module_init(pm860x_led_init);
246   -
247   -  static void __devexit pm860x_led_exit(void)
248   -  {
249   -  	platform_driver_unregister(&pm860x_led_driver);
250   -  }
251   -  module_exit(pm860x_led_exit);
241   +  module_platform_driver(pm860x_led_driver);
252 242
253 243  MODULE_DESCRIPTION("LED driver for Marvell PM860x");
254 244  MODULE_AUTHOR("Haojian Zhuang <haojian.zhuang@marvell.com>");
+1 -11
drivers/leds/leds-adp5520.c
···
213 213  	.remove		= __devexit_p(adp5520_led_remove),
214 214  };
215 215
216   -  static int __init adp5520_led_init(void)
217   -  {
218   -  	return platform_driver_register(&adp5520_led_driver);
219   -  }
220   -  module_init(adp5520_led_init);
221   -
222   -  static void __exit adp5520_led_exit(void)
223   -  {
224   -  	platform_driver_unregister(&adp5520_led_driver);
225   -  }
226   -  module_exit(adp5520_led_exit);
216   +  module_platform_driver(adp5520_led_driver);
227 217
228 218  MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>");
229 219  MODULE_DESCRIPTION("LEDS ADP5520(01) Driver");
+1 -12
drivers/leds/leds-ams-delta.c
···
118 118  	},
119 119  };
120 120
121   -  static int __init ams_delta_led_init(void)
122   -  {
123   -  	return platform_driver_register(&ams_delta_led_driver);
124   -  }
125   -
126   -  static void __exit ams_delta_led_exit(void)
127   -  {
128   -  	platform_driver_unregister(&ams_delta_led_driver);
129   -  }
130   -
131   -  module_init(ams_delta_led_init);
132   -  module_exit(ams_delta_led_exit);
121   +  module_platform_driver(ams_delta_led_driver);
133 122
134 123  MODULE_AUTHOR("Jonathan McDowell <noodles@earth.li>");
135 124  MODULE_DESCRIPTION("Amstrad Delta LED driver");
+2 -14
drivers/leds/leds-asic3.c
···
179 179  	},
180 180  };
181 181
182   -  MODULE_ALIAS("platform:leds-asic3");
183   -
184   -  static int __init asic3_led_init(void)
185   -  {
186   -  	return platform_driver_register(&asic3_led_driver);
187   -  }
188   -
189   -  static void __exit asic3_led_exit(void)
190   -  {
191   -  	platform_driver_unregister(&asic3_led_driver);
192   -  }
193   -
194   -  module_init(asic3_led_init);
195   -  module_exit(asic3_led_exit);
182   +  module_platform_driver(asic3_led_driver);
196 183
197 184  MODULE_AUTHOR("Paul Parsons <lost.distance@yahoo.com>");
198 185  MODULE_DESCRIPTION("HTC ASIC3 LED driver");
199 186  MODULE_LICENSE("GPL");
    187 + MODULE_ALIAS("platform:leds-asic3");
+3 -14
drivers/leds/leds-atmel-pwm.c
···
134 134  	return 0;
135 135  }
136 136
137   -  /* work with hotplug and coldplug */
138   -  MODULE_ALIAS("platform:leds-atmel-pwm");
139   -
140 137  static struct platform_driver pwmled_driver = {
141 138  	.driver = {
142 139  		.name =		"leds-atmel-pwm",
143 140  		.owner =	THIS_MODULE,
144 141  	},
145 142  	/* REVISIT add suspend() and resume() methods */
    143 +	.probe =	pwmled_probe,
146 144  	.remove =	__exit_p(pwmled_remove),
147 145  };
148 146
149   -  static int __init modinit(void)
150   -  {
151   -  	return platform_driver_probe(&pwmled_driver, pwmled_probe);
152   -  }
153   -  module_init(modinit);
154   -
155   -  static void __exit modexit(void)
156   -  {
157   -  	platform_driver_unregister(&pwmled_driver);
158   -  }
159   -  module_exit(modexit);
147   +  module_platform_driver(pwmled_driver);
160 148
161 149  MODULE_DESCRIPTION("Driver for LEDs with PWM-controlled brightness");
162 150  MODULE_LICENSE("GPL");
    151 + MODULE_ALIAS("platform:leds-atmel-pwm");
+2 -13
drivers/leds/leds-bd2802.c
···
688 688  	i2c_set_clientdata(client, led);
689 689
690 690  	/* Configure RESET GPIO (L: RESET, H: RESET cancel) */
691   -  	gpio_request(pdata->reset_gpio, "RGB_RESETB");
692   -  	gpio_direction_output(pdata->reset_gpio, 1);
691   +  	gpio_request_one(pdata->reset_gpio, GPIOF_OUT_INIT_HIGH, "RGB_RESETB");
693 692
694 693  	/* Tacss = min 0.1ms */
695 694  	udelay(100);
···
812 813  	.id_table	= bd2802_id,
813 814  };
814 815
815   -  static int __init bd2802_init(void)
816   -  {
817   -  	return i2c_add_driver(&bd2802_i2c_driver);
818   -  }
819   -  module_init(bd2802_init);
820   -
821   -  static void __exit bd2802_exit(void)
822   -  {
823   -  	i2c_del_driver(&bd2802_i2c_driver);
824   -  }
825   -  module_exit(bd2802_exit);
816   +  module_i2c_driver(bd2802_i2c_driver);
826 817
827 818  MODULE_AUTHOR("Kim Kyuwon <q1.kim@samsung.com>");
828 819  MODULE_DESCRIPTION("BD2802 LED driver");
+2 -15
drivers/leds/leds-cobalt-qube.c
···
75 75  	return 0;
76 76  }
77 77
78  -  /* work with hotplug and coldplug */
79  -  MODULE_ALIAS("platform:cobalt-qube-leds");
80  -
81 78  static struct platform_driver cobalt_qube_led_driver = {
82 79  	.probe	= cobalt_qube_led_probe,
83 80  	.remove	= __devexit_p(cobalt_qube_led_remove),
···
84 87  	},
85 88  };
86 89
87  -  static int __init cobalt_qube_led_init(void)
88  -  {
89  -  	return platform_driver_register(&cobalt_qube_led_driver);
90  -  }
91  -
92  -  static void __exit cobalt_qube_led_exit(void)
93  -  {
94  -  	platform_driver_unregister(&cobalt_qube_led_driver);
95  -  }
96  -
97  -  module_init(cobalt_qube_led_init);
98  -  module_exit(cobalt_qube_led_exit);
90  +  module_platform_driver(cobalt_qube_led_driver);
99 91
100 92  MODULE_LICENSE("GPL");
101 93  MODULE_DESCRIPTION("Front LED support for Cobalt Server");
102 94  MODULE_AUTHOR("Florian Fainelli <florian@openwrt.org>");
    95 + MODULE_ALIAS("platform:cobalt-qube-leds");
+1 -11
drivers/leds/leds-da903x.c
···
158 158  	.remove		= __devexit_p(da903x_led_remove),
159 159  };
160 160
161   -  static int __init da903x_led_init(void)
162   -  {
163   -  	return platform_driver_register(&da903x_led_driver);
164   -  }
165   -  module_init(da903x_led_init);
166   -
167   -  static void __exit da903x_led_exit(void)
168   -  {
169   -  	platform_driver_unregister(&da903x_led_driver);
170   -  }
171   -  module_exit(da903x_led_exit);
161   +  module_platform_driver(da903x_led_driver);
172 162
173 163  MODULE_DESCRIPTION("LEDs driver for Dialog Semiconductor DA9030/DA9034");
174 164  MODULE_AUTHOR("Eric Miao <eric.miao@marvell.com>"
+1 -12
drivers/leds/leds-dac124s085.c
···
131 131  	},
132 132  };
133 133
134   -  static int __init dac124s085_leds_init(void)
135   -  {
136   -  	return spi_register_driver(&dac124s085_driver);
137   -  }
138   -
139   -  static void __exit dac124s085_leds_exit(void)
140   -  {
141   -  	spi_unregister_driver(&dac124s085_driver);
142   -  }
143   -
144   -  module_init(dac124s085_leds_init);
145   -  module_exit(dac124s085_leds_exit);
134   +  module_spi_driver(dac124s085_driver);
146 135
147 136  MODULE_AUTHOR("Guennadi Liakhovetski <lg@denx.de>");
148 137  MODULE_DESCRIPTION("DAC124S085 LED driver");
+1 -14
drivers/leds/leds-fsg.c
···
224 224  	},
225 225  };
226 226
227   -
228   -  static int __init fsg_led_init(void)
229   -  {
230   -  	return platform_driver_register(&fsg_led_driver);
231   -  }
232   -
233   -  static void __exit fsg_led_exit(void)
234   -  {
235   -  	platform_driver_unregister(&fsg_led_driver);
236   -  }
237   -
238   -
239   -  module_init(fsg_led_init);
240   -  module_exit(fsg_led_exit);
227   +  module_platform_driver(fsg_led_driver);
241 228
242 229  MODULE_AUTHOR("Rod Whitby <rod@whitby.id.au>");
243 230  MODULE_DESCRIPTION("Freecom FSG-3 LED driver");
+2 -14
drivers/leds/leds-gpio.c
···
293 293  	},
294 294  };
295 295
296   -  MODULE_ALIAS("platform:leds-gpio");
297   -
298   -  static int __init gpio_led_init(void)
299   -  {
300   -  	return platform_driver_register(&gpio_led_driver);
301   -  }
302   -
303   -  static void __exit gpio_led_exit(void)
304   -  {
305   -  	platform_driver_unregister(&gpio_led_driver);
306   -  }
307   -
308   -  module_init(gpio_led_init);
309   -  module_exit(gpio_led_exit);
296   +  module_platform_driver(gpio_led_driver);
310 297
311 298  MODULE_AUTHOR("Raphael Assenat <raph@8d.com>, Trent Piepho <tpiepho@freescale.com>");
312 299  MODULE_DESCRIPTION("GPIO LED driver");
313 300  MODULE_LICENSE("GPL");
    301 + MODULE_ALIAS("platform:leds-gpio");
+2 -15
drivers/leds/leds-hp6xx.c
··· 79 79 return 0; 80 80 } 81 81 82 - /* work with hotplug and coldplug */ 83 - MODULE_ALIAS("platform:hp6xx-led"); 84 - 85 82 static struct platform_driver hp6xxled_driver = { 86 83 .probe = hp6xxled_probe, 87 84 .remove = hp6xxled_remove, ··· 88 91 }, 89 92 }; 90 93 91 - static int __init hp6xxled_init(void) 92 - { 93 - return platform_driver_register(&hp6xxled_driver); 94 - } 95 - 96 - static void __exit hp6xxled_exit(void) 97 - { 98 - platform_driver_unregister(&hp6xxled_driver); 99 - } 100 - 101 - module_init(hp6xxled_init); 102 - module_exit(hp6xxled_exit); 94 + module_platform_driver(hp6xxled_driver); 103 95 104 96 MODULE_AUTHOR("Kristoffer Ericson <kristoffer.ericson@gmail.com>"); 105 97 MODULE_DESCRIPTION("HP Jornada 6xx LED driver"); 106 98 MODULE_LICENSE("GPL"); 99 + MODULE_ALIAS("platform:hp6xx-led");
+1 -12
drivers/leds/leds-lm3530.c
··· 457 457 }, 458 458 }; 459 459 460 - static int __init lm3530_init(void) 461 - { 462 - return i2c_add_driver(&lm3530_i2c_driver); 463 - } 464 - 465 - static void __exit lm3530_exit(void) 466 - { 467 - i2c_del_driver(&lm3530_i2c_driver); 468 - } 469 - 470 - module_init(lm3530_init); 471 - module_exit(lm3530_exit); 460 + module_i2c_driver(lm3530_i2c_driver); 472 461 473 462 MODULE_DESCRIPTION("Back Light driver for LM3530"); 474 463 MODULE_LICENSE("GPL v2");
+1 -12
drivers/leds/leds-lp3944.c
··· 453 453 .id_table = lp3944_id, 454 454 }; 455 455 456 - static int __init lp3944_module_init(void) 457 - { 458 - return i2c_add_driver(&lp3944_driver); 459 - } 460 - 461 - static void __exit lp3944_module_exit(void) 462 - { 463 - i2c_del_driver(&lp3944_driver); 464 - } 465 - 466 - module_init(lp3944_module_init); 467 - module_exit(lp3944_module_exit); 456 + module_i2c_driver(lp3944_driver); 468 457 469 458 MODULE_AUTHOR("Antonio Ospite <ospite@studenti.unina.it>"); 470 459 MODULE_DESCRIPTION("LP3944 Fun Light Chip");
+1 -19
drivers/leds/leds-lp5521.c
··· 797 797 .id_table = lp5521_id, 798 798 }; 799 799 800 - static int __init lp5521_init(void) 801 - { 802 - int ret; 803 - 804 - ret = i2c_add_driver(&lp5521_driver); 805 - 806 - if (ret < 0) 807 - printk(KERN_ALERT "Adding lp5521 driver failed\n"); 808 - 809 - return ret; 810 - } 811 - 812 - static void __exit lp5521_exit(void) 813 - { 814 - i2c_del_driver(&lp5521_driver); 815 - } 816 - 817 - module_init(lp5521_init); 818 - module_exit(lp5521_exit); 800 + module_i2c_driver(lp5521_driver); 819 801 820 802 MODULE_AUTHOR("Mathias Nyman, Yuri Zaporozhets, Samu Onkalo"); 821 803 MODULE_DESCRIPTION("LP5521 LED engine");
+1 -21
drivers/leds/leds-lp5523.c
··· 870 870 return 0; 871 871 } 872 872 873 - static struct i2c_driver lp5523_driver; 874 - 875 873 static int __devinit lp5523_probe(struct i2c_client *client, 876 874 const struct i2c_device_id *id) 877 875 { ··· 1019 1021 .id_table = lp5523_id, 1020 1022 }; 1021 1023 1022 - static int __init lp5523_init(void) 1023 - { 1024 - int ret; 1025 - 1026 - ret = i2c_add_driver(&lp5523_driver); 1027 - 1028 - if (ret < 0) 1029 - printk(KERN_ALERT "Adding lp5523 driver failed\n"); 1030 - 1031 - return ret; 1032 - } 1033 - 1034 - static void __exit lp5523_exit(void) 1035 - { 1036 - i2c_del_driver(&lp5523_driver); 1037 - } 1038 - 1039 - module_init(lp5523_init); 1040 - module_exit(lp5523_exit); 1024 + module_i2c_driver(lp5523_driver); 1041 1025 1042 1026 MODULE_AUTHOR("Mathias Nyman <mathias.nyman@nokia.com>"); 1043 1027 MODULE_DESCRIPTION("LP5523 LED engine");
+2 -14
drivers/leds/leds-lt3593.c
··· 199 199 }, 200 200 }; 201 201 202 - MODULE_ALIAS("platform:leds-lt3593"); 203 - 204 - static int __init lt3593_led_init(void) 205 - { 206 - return platform_driver_register(&lt3593_led_driver); 207 - } 208 - 209 - static void __exit lt3593_led_exit(void) 210 - { 211 - platform_driver_unregister(&lt3593_led_driver); 212 - } 213 - 214 - module_init(lt3593_led_init); 215 - module_exit(lt3593_led_exit); 202 + module_platform_driver(lt3593_led_driver); 216 203 217 204 MODULE_AUTHOR("Daniel Mack <daniel@caiaq.de>"); 218 205 MODULE_DESCRIPTION("LED driver for LT3593 controllers"); 219 206 MODULE_LICENSE("GPL"); 207 + MODULE_ALIAS("platform:leds-lt3593");
+2 -12
drivers/leds/leds-mc13783.c
··· 275 275 return -ENODEV; 276 276 } 277 277 278 - if (pdata->num_leds < 1 || pdata->num_leds > MC13783_LED_MAX) { 278 + if (pdata->num_leds < 1 || pdata->num_leds > (MC13783_LED_MAX + 1)) { 279 279 dev_err(&pdev->dev, "Invalid led count %d\n", pdata->num_leds); 280 280 return -EINVAL; 281 281 } ··· 385 385 .remove = __devexit_p(mc13783_led_remove), 386 386 }; 387 387 388 - static int __init mc13783_led_init(void) 389 - { 390 - return platform_driver_register(&mc13783_led_driver); 391 - } 392 - module_init(mc13783_led_init); 393 - 394 - static void __exit mc13783_led_exit(void) 395 - { 396 - platform_driver_unregister(&mc13783_led_driver); 397 - } 398 - module_exit(mc13783_led_exit); 388 + module_platform_driver(mc13783_led_driver); 399 389 400 390 MODULE_DESCRIPTION("LEDs driver for Freescale MC13783 PMIC"); 401 391 MODULE_AUTHOR("Philippe Retornaz <philippe.retornaz@epfl.ch>");
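The mc13783 hunk above also carries a small off-by-one fix: MC13783_LED_MAX names the highest valid LED *index*, so a board wiring up every LED supplies num_leds == MC13783_LED_MAX + 1, which the old upper-bound test wrongly rejected. A sketch of the two checks (the constant's value here is illustrative, not taken from mc13783.h):

```c
/* Illustrative value only: the highest valid LED index. */
#define MC13783_LED_MAX 11

/* Old check: rejects platform data that uses every available LED. */
static int old_num_leds_ok(int num_leds)
{
	return !(num_leds < 1 || num_leds > MC13783_LED_MAX);
}

/* Fixed check from this patch: valid counts run 1..MC13783_LED_MAX + 1. */
static int new_num_leds_ok(int num_leds)
{
	return !(num_leds < 1 || num_leds > (MC13783_LED_MAX + 1));
}
```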
+8 -31
drivers/leds/leds-netxbig.c
··· 81 81
82 82 /* Configure address GPIOs. */
83 83 for (i = 0; i < gpio_ext->num_addr; i++) {
84 - err = gpio_request(gpio_ext->addr[i], "GPIO extension addr");
84 + err = gpio_request_one(gpio_ext->addr[i], GPIOF_OUT_INIT_LOW,
85 + "GPIO extension addr");
85 86 if (err)
86 87 goto err_free_addr;
87 - err = gpio_direction_output(gpio_ext->addr[i], 0);
88 - if (err) {
89 - gpio_free(gpio_ext->addr[i]);
90 - goto err_free_addr;
91 - }
92 88 }
93 89 /* Configure data GPIOs. */
94 90 for (i = 0; i < gpio_ext->num_data; i++) {
95 - err = gpio_request(gpio_ext->data[i], "GPIO extension data");
91 + err = gpio_request_one(gpio_ext->data[i], GPIOF_OUT_INIT_LOW,
92 + "GPIO extension data");
96 93 if (err)
97 94 goto err_free_data;
98 - err = gpio_direction_output(gpio_ext->data[i], 0);
99 - if (err) {
100 - gpio_free(gpio_ext->data[i]);
101 - goto err_free_data;
102 - }
103 95 }
104 96 /* Configure "enable select" GPIO. */
105 - err = gpio_request(gpio_ext->enable, "GPIO extension enable");
97 + err = gpio_request_one(gpio_ext->enable, GPIOF_OUT_INIT_LOW,
98 + "GPIO extension enable");
106 99 if (err)
107 100 goto err_free_data;
108 - err = gpio_direction_output(gpio_ext->enable, 0);
109 - if (err) {
110 - gpio_free(gpio_ext->enable);
111 - goto err_free_data;
112 - }
113 101
114 102 return 0;
115 103
··· 417 429 .owner = THIS_MODULE,
418 430 },
419 431 };
420 - MODULE_ALIAS("platform:leds-netxbig");
421 432
422 - static int __init netxbig_led_init(void)
423 - {
424 - return platform_driver_register(&netxbig_led_driver);
425 - }
426 -
427 - static void __exit netxbig_led_exit(void)
428 - {
429 - platform_driver_unregister(&netxbig_led_driver);
430 - }
431 -
432 - module_init(netxbig_led_init);
433 - module_exit(netxbig_led_exit);
433 + module_platform_driver(netxbig_led_driver);
434 434
435 435 MODULE_AUTHOR("Simon Guinot <sguinot@lacie.com>");
436 436 MODULE_DESCRIPTION("LED driver for LaCie xBig Network boards");
437 437 MODULE_LICENSE("GPL");
438 + MODULE_ALIAS("platform:leds-netxbig");
+2 -13
drivers/leds/leds-ns2.c
··· 323 323 .owner = THIS_MODULE, 324 324 }, 325 325 }; 326 - MODULE_ALIAS("platform:leds-ns2"); 327 326 328 - static int __init ns2_led_init(void) 329 - { 330 - return platform_driver_register(&ns2_led_driver); 331 - } 332 - 333 - static void __exit ns2_led_exit(void) 334 - { 335 - platform_driver_unregister(&ns2_led_driver); 336 - } 337 - 338 - module_init(ns2_led_init); 339 - module_exit(ns2_led_exit); 327 + module_platform_driver(ns2_led_driver); 340 328 341 329 MODULE_AUTHOR("Simon Guinot <sguinot@lacie.com>"); 342 330 MODULE_DESCRIPTION("Network Space v2 LED driver"); 343 331 MODULE_LICENSE("GPL"); 332 + MODULE_ALIAS("platform:leds-ns2");
+1 -13
drivers/leds/leds-pca9532.c
··· 489 489 return 0; 490 490 } 491 491 492 - static int __init pca9532_init(void) 493 - { 494 - return i2c_add_driver(&pca9532_driver); 495 - } 496 - 497 - static void __exit pca9532_exit(void) 498 - { 499 - i2c_del_driver(&pca9532_driver); 500 - } 492 + module_i2c_driver(pca9532_driver); 501 493 502 494 MODULE_AUTHOR("Riku Voipio"); 503 495 MODULE_LICENSE("GPL"); 504 496 MODULE_DESCRIPTION("PCA 9532 LED dimmer"); 505 - 506 - module_init(pca9532_init); 507 - module_exit(pca9532_exit); 508 -
+1 -12
drivers/leds/leds-pca955x.c
··· 371 371 .id_table = pca955x_id, 372 372 }; 373 373 374 - static int __init pca955x_leds_init(void) 375 - { 376 - return i2c_add_driver(&pca955x_driver); 377 - } 378 - 379 - static void __exit pca955x_leds_exit(void) 380 - { 381 - i2c_del_driver(&pca955x_driver); 382 - } 383 - 384 - module_init(pca955x_leds_init); 385 - module_exit(pca955x_leds_exit); 374 + module_i2c_driver(pca955x_driver); 386 375 387 376 MODULE_AUTHOR("Nate Case <ncase@xes-inc.com>"); 388 377 MODULE_DESCRIPTION("PCA955x LED driver");
+1 -12
drivers/leds/leds-pwm.c
··· 135 135 }, 136 136 }; 137 137 138 - static int __init led_pwm_init(void) 139 - { 140 - return platform_driver_register(&led_pwm_driver); 141 - } 142 - 143 - static void __exit led_pwm_exit(void) 144 - { 145 - platform_driver_unregister(&led_pwm_driver); 146 - } 147 - 148 - module_init(led_pwm_init); 149 - module_exit(led_pwm_exit); 138 + module_platform_driver(led_pwm_driver); 150 139 151 140 MODULE_AUTHOR("Luotao Fu <l.fu@pengutronix.de>"); 152 141 MODULE_DESCRIPTION("PWM LED driver for PXA");
+2 -14
drivers/leds/leds-rb532.c
··· 57 57 }, 58 58 }; 59 59 60 - static int __init rb532_led_init(void) 61 - { 62 - return platform_driver_register(&rb532_led_driver); 63 - } 64 - 65 - static void __exit rb532_led_exit(void) 66 - { 67 - platform_driver_unregister(&rb532_led_driver); 68 - } 69 - 70 - module_init(rb532_led_init); 71 - module_exit(rb532_led_exit); 72 - 73 - MODULE_ALIAS("platform:rb532-led"); 60 + module_platform_driver(rb532_led_driver); 74 61 75 62 MODULE_LICENSE("GPL"); 76 63 MODULE_DESCRIPTION("User LED support for Routerboard532"); 77 64 MODULE_AUTHOR("Phil Sutter <n0-1@freewrt.org>"); 65 + MODULE_ALIAS("platform:rb532-led");
+1 -11
drivers/leds/leds-regulator.c
··· 229 229 .remove = __devexit_p(regulator_led_remove), 230 230 }; 231 231 232 - static int __init regulator_led_init(void) 233 - { 234 - return platform_driver_register(&regulator_led_driver); 235 - } 236 - module_init(regulator_led_init); 237 - 238 - static void __exit regulator_led_exit(void) 239 - { 240 - platform_driver_unregister(&regulator_led_driver); 241 - } 242 - module_exit(regulator_led_exit); 232 + module_platform_driver(regulator_led_driver); 243 233 244 234 MODULE_AUTHOR("Antonio Ospite <ospite@studenti.unina.it>"); 245 235 MODULE_DESCRIPTION("Regulator driven LED driver");
+1 -12
drivers/leds/leds-renesas-tpu.c
··· 339 339 } 340 340 }; 341 341 342 - static int __init r_tpu_init(void) 343 - { 344 - return platform_driver_register(&r_tpu_device_driver); 345 - } 346 - 347 - static void __exit r_tpu_exit(void) 348 - { 349 - platform_driver_unregister(&r_tpu_device_driver); 350 - } 351 - 352 - module_init(r_tpu_init); 353 - module_exit(r_tpu_exit); 342 + module_platform_driver(r_tpu_device_driver); 354 343 355 344 MODULE_AUTHOR("Magnus Damm"); 356 345 MODULE_DESCRIPTION("Renesas TPU LED Driver");
+1 -12
drivers/leds/leds-s3c24xx.c
··· 121 121 }, 122 122 }; 123 123 124 - static int __init s3c24xx_led_init(void) 125 - { 126 - return platform_driver_register(&s3c24xx_led_driver); 127 - } 128 - 129 - static void __exit s3c24xx_led_exit(void) 130 - { 131 - platform_driver_unregister(&s3c24xx_led_driver); 132 - } 133 - 134 - module_init(s3c24xx_led_init); 135 - module_exit(s3c24xx_led_exit); 124 + module_platform_driver(s3c24xx_led_driver); 136 125 137 126 MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>"); 138 127 MODULE_DESCRIPTION("S3C24XX LED driver");
+779
drivers/leds/leds-tca6507.c
··· 1 + /* 2 + * leds-tca6507 3 + * 4 + * The TCA6507 is a programmable LED controller that can drive 7 5 + * separate lines either by holding them low, or by pulsing them 6 + * with modulated width. 7 + * The modulation can be varied in a simple pattern to produce a blink or 8 + * double-blink. 9 + * 10 + * This driver can configure each line either as a 'GPIO' which is out-only 11 + * (no pull-up) or as an LED with variable brightness and hardware-assisted 12 + * blinking. 13 + * 14 + * Apart from OFF and ON there are three programmable brightness levels which 15 + * can be programmed from 0 to 15 and indicate how many 500usec intervals in 16 + * each 8msec that the led is 'on'. The levels are named MASTER, BANK0 and 17 + * BANK1. 18 + * 19 + * There are two different blink rates that can be programmed, each with 20 + * separate time for rise, on, fall, off and second-off. Thus if 3 or more 21 + * different non-trivial rates are required, software must be used for the extra 22 + * rates. The two different blink rates must align with the two levels BANK0 and 23 + * BANK1. 24 + * This driver does not support double-blink so 'second-off' always matches 25 + * 'off'. 26 + * 27 + * Only 16 different times can be programmed in a roughly logarithmic scale from 28 + * 64ms to 16320ms. To be precise the possible times are: 29 + * 0, 64, 128, 192, 256, 384, 512, 768, 30 + * 1024, 1536, 2048, 3072, 4096, 5760, 8128, 16320 31 + * 32 + * Times that cannot be closely matched with these must be 33 + * handled in software. This driver allows 12.5% error in matching. 34 + * 35 + * This driver does not allow rise/fall rates to be set explicitly. When trying 36 + * to match a given 'on' or 'off' period, an appropriate pair of 'change' and 37 + * 'hold' times are chosen to get a close match. If the target delay is even, 38 + * the 'change' number will be the smaller; if odd, the 'hold' number will be 39 + * the smaller. 
40 +
41 + * Choosing pairs of delays with 12.5% errors allows us to match delays in the
42 + * ranges: 56-72, 112-144, 168-216, 224-27504, 28560-36720.
43 + * 26% of the achievable sums can be matched by multiple pairings. For example
44 + * 1536 == 1536+0, 1024+512, or 768+768. This driver will always choose the
45 + * pairing with the least maximum - 768+768 in this case. Other pairings are
46 + * not available.
47 + *
48 + * Access to the 3 levels and 2 blinks are on a first-come, first-served basis.
49 + * Access can be shared by multiple leds if they have the same level and
50 + * either same blink rates, or some don't blink.
51 + * When a led changes, it relinquishes access and tries again, so it might
52 + * lose access to hardware blink.
53 + * If a blink engine cannot be allocated, software blink is used.
54 + * If the desired brightness cannot be allocated, the closest available non-zero
55 + * brightness is used. As 'full' is always available, the worst case would be
56 + * to have two different blink rates at '1', with Max at '2', then other leds
57 + * will have to choose between '2' and '16'. Hopefully this is not likely.
58 + *
59 + * Each bank (BANK0 and BANK1) has two usage counts - LEDs using the brightness
60 + * and LEDs using the blink. It can only be reprogrammed when the appropriate
61 + * counter is zero. The MASTER level has a single usage count.
62 + *
63 + * Each Led has programmable 'on' and 'off' time as milliseconds. With each
64 + * there is a flag saying if it was explicitly requested or defaulted.
65 + * Similarly the banks know if each time was explicit or a default. Defaults
66 + * are permitted to be changed freely - they are not recognised when matching.
67 + *
68 + *
69 + * An led-tca6507 device must be provided with platform data. This data
70 + * lists for each output: the name, default trigger, and whether the signal
71 + * is being used as a GPIO rather than an led. 'struct led_platform_data'
72 + * is used for this.
If 'name' is NULL, the output isn't used. If 'flags'
73 + * is TCA6507_MAKE_GPIO, the output is a GPO.
74 + * The "struct led_platform_data" can be embedded in a
75 + * "struct tca6507_platform_data" which adds a 'gpio_base' for the GPIOs,
76 + * and a 'setup' callback which is called once the GPIOs are available.
77 + *
78 + */
79 +
80 + #include <linux/module.h>
81 + #include <linux/slab.h>
82 + #include <linux/leds.h>
83 + #include <linux/err.h>
84 + #include <linux/i2c.h>
85 + #include <linux/gpio.h>
86 + #include <linux/workqueue.h>
87 + #include <linux/leds-tca6507.h>
88 +
89 + /* LED select registers determine the source that drives LED outputs */
90 + #define TCA6507_LS_LED_OFF 0x0 /* Output HI-Z (off) */
91 + #define TCA6507_LS_LED_OFF1 0x1 /* Output HI-Z (off) - not used */
92 + #define TCA6507_LS_LED_PWM0 0x2 /* Output LOW with Bank0 rate */
93 + #define TCA6507_LS_LED_PWM1 0x3 /* Output LOW with Bank1 rate */
94 + #define TCA6507_LS_LED_ON 0x4 /* Output LOW (on) */
95 + #define TCA6507_LS_LED_MIR 0x5 /* Output LOW with Master Intensity */
96 + #define TCA6507_LS_BLINK0 0x6 /* Blink at Bank0 rate */
97 + #define TCA6507_LS_BLINK1 0x7 /* Blink at Bank1 rate */
98 +
99 + enum {
100 + BANK0,
101 + BANK1,
102 + MASTER,
103 + };
104 + static int bank_source[3] = {
105 + TCA6507_LS_LED_PWM0,
106 + TCA6507_LS_LED_PWM1,
107 + TCA6507_LS_LED_MIR,
108 + };
109 + static int blink_source[2] = {
110 + TCA6507_LS_BLINK0,
111 + TCA6507_LS_BLINK1,
112 + };
113 +
114 + /* PWM registers */
115 + #define TCA6507_REG_CNT 11
116 +
117 + /*
118 + * 0x00, 0x01, 0x02 encode the TCA6507_LS_* values, each output
119 + * owns one bit in each register
120 + */
121 + #define TCA6507_FADE_ON 0x03
122 + #define TCA6507_FULL_ON 0x04
123 + #define TCA6507_FADE_OFF 0x05
124 + #define TCA6507_FIRST_OFF 0x06
125 + #define TCA6507_SECOND_OFF 0x07
126 + #define TCA6507_MAX_INTENSITY 0x08
127 + #define TCA6507_MASTER_INTENSITY 0x09
128 + #define TCA6507_INITIALIZE 0x0A
129 +
130 + #define INIT_CODE 0x8
131 +
132 + #define TIMECODES 16
133 + static int time_codes[TIMECODES] = {
134 + 0, 64, 128, 192, 256, 384, 512, 768,
135 + 1024, 1536, 2048, 3072, 4096, 5760, 8128, 16320
136 + };
137 +
138 + /* Convert an led.brightness level (0..255) to a TCA6507 level (0..15) */
139 + static inline int TO_LEVEL(int brightness)
140 + {
141 + return brightness >> 4;
142 + }
143 +
144 + /* ...and convert back */
145 + static inline int TO_BRIGHT(int level)
146 + {
147 + if (level)
148 + return (level << 4) | 0xf;
149 + return 0;
150 + }
151 +
152 + #define NUM_LEDS 7
153 + struct tca6507_chip {
154 + int reg_set; /* One bit per register where
155 + * a '1' means the register
156 + * should be written */
157 + u8 reg_file[TCA6507_REG_CNT];
158 + /* Bank 2 is Master Intensity and doesn't use times */
159 + struct bank {
160 + int level;
161 + int ontime, offtime;
162 + int on_dflt, off_dflt;
163 + int time_use, level_use;
164 + } bank[3];
165 + struct i2c_client *client;
166 + struct work_struct work;
167 + spinlock_t lock;
168 +
169 + struct tca6507_led {
170 + struct tca6507_chip *chip;
171 + struct led_classdev led_cdev;
172 + int num;
173 + int ontime, offtime;
174 + int on_dflt, off_dflt;
175 + int bank; /* Bank used, or -1 */
176 + int blink; /* Set if hardware-blinking */
177 + } leds[NUM_LEDS];
178 + #ifdef CONFIG_GPIOLIB
179 + struct gpio_chip gpio;
180 + const char *gpio_name[NUM_LEDS];
181 + int gpio_map[NUM_LEDS];
182 + #endif
183 + };
184 +
185 + static const struct i2c_device_id tca6507_id[] = {
186 + { "tca6507" },
187 + { }
188 + };
189 + MODULE_DEVICE_TABLE(i2c, tca6507_id);
190 +
191 + static int choose_times(int msec, int *c1p, int *c2p)
192 + {
193 + /*
194 + * Choose two timecodes which add to 'msec' as near as possible.
195 + * The first returned is the 'on' or 'off' time. The second is to be
196 + * used as a 'fade-on' or 'fade-off' time. If 'msec' is even,
197 + * the first will not be smaller than the second.
If 'msec' is odd, 198 + * the first will not be larger than the second. 199 + * If we cannot get a sum within 1/8 of 'msec' fail with -EINVAL, 200 + * otherwise return the sum that was achieved, plus 1 if the first is 201 + * smaller. 202 + * If two possibilities are equally good (e.g. 512+0, 256+256), choose 203 + * the first pair so there is more change-time visible (i.e. it is 204 + * softer). 205 + */ 206 + int c1, c2; 207 + int tmax = msec * 9 / 8; 208 + int tmin = msec * 7 / 8; 209 + int diff = 65536; 210 + 211 + /* We start at '1' to ensure we never even think of choosing a 212 + * total time of '0'. 213 + */ 214 + for (c1 = 1; c1 < TIMECODES; c1++) { 215 + int t = time_codes[c1]; 216 + if (t*2 < tmin) 217 + continue; 218 + if (t > tmax) 219 + break; 220 + for (c2 = 0; c2 <= c1; c2++) { 221 + int tt = t + time_codes[c2]; 222 + int d; 223 + if (tt < tmin) 224 + continue; 225 + if (tt > tmax) 226 + break; 227 + /* This works! */ 228 + d = abs(msec - tt); 229 + if (d >= diff) 230 + continue; 231 + /* Best yet */ 232 + *c1p = c1; 233 + *c2p = c2; 234 + diff = d; 235 + if (d == 0) 236 + return msec; 237 + } 238 + } 239 + if (diff < 65536) { 240 + int actual; 241 + if (msec & 1) { 242 + c1 = *c2p; 243 + *c2p = *c1p; 244 + *c1p = c1; 245 + } 246 + actual = time_codes[*c1p] + time_codes[*c2p]; 247 + if (*c1p < *c2p) 248 + return actual + 1; 249 + else 250 + return actual; 251 + } 252 + /* No close match */ 253 + return -EINVAL; 254 + } 255 + 256 + /* 257 + * Update the register file with the appropriate 3-bit state for 258 + * the given led. 
259 + */
260 + static void set_select(struct tca6507_chip *tca, int led, int val)
261 + {
262 + int mask = (1 << led);
263 + int bit;
264 +
265 + for (bit = 0; bit < 3; bit++) {
266 + int n = tca->reg_file[bit] & ~mask;
267 + if (val & (1 << bit))
268 + n |= mask;
269 + if (tca->reg_file[bit] != n) {
270 + tca->reg_file[bit] = n;
271 + tca->reg_set |= (1 << bit);
272 + }
273 + }
274 + }
275 +
276 + /* Update the register file with the appropriate 4-bit code for
277 + * one bank or other. This can be used for timers, for levels, or
278 + * for initialisation.
279 + */
280 + static void set_code(struct tca6507_chip *tca, int reg, int bank, int new)
281 + {
282 + int mask = 0xF;
283 + int n;
284 + if (bank) {
285 + mask <<= 4;
286 + new <<= 4;
287 + }
288 + n = tca->reg_file[reg] & ~mask;
289 + n |= new;
290 + if (tca->reg_file[reg] != n) {
291 + tca->reg_file[reg] = n;
292 + tca->reg_set |= 1 << reg;
293 + }
294 + }
295 +
296 + /* Update brightness level. */
297 + static void set_level(struct tca6507_chip *tca, int bank, int level)
298 + {
299 + switch (bank) {
300 + case BANK0:
301 + case BANK1:
302 + set_code(tca, TCA6507_MAX_INTENSITY, bank, level);
303 + break;
304 + case MASTER:
305 + set_code(tca, TCA6507_MASTER_INTENSITY, 0, level);
306 + break;
307 + }
308 + tca->bank[bank].level = level;
309 + }
310 +
311 + /* Record all relevant time code for a given bank */
312 + static void set_times(struct tca6507_chip *tca, int bank)
313 + {
314 + int c1, c2;
315 + int result;
316 +
317 + result = choose_times(tca->bank[bank].ontime, &c1, &c2);
318 + dev_dbg(&tca->client->dev,
319 + "Chose on times %d(%d) %d(%d) for %dms\n", c1, time_codes[c1],
320 + c2, time_codes[c2], tca->bank[bank].ontime);
321 + set_code(tca, TCA6507_FADE_ON, bank, c2);
322 + set_code(tca, TCA6507_FULL_ON, bank, c1);
323 + tca->bank[bank].ontime = result;
324 +
325 + result = choose_times(tca->bank[bank].offtime, &c1, &c2);
326 + dev_dbg(&tca->client->dev,
327 + "Chose off times %d(%d) %d(%d) for %dms\n", c1, time_codes[c1],
328 + c2, time_codes[c2], tca->bank[bank].offtime);
329 + set_code(tca, TCA6507_FADE_OFF, bank, c2);
330 + set_code(tca, TCA6507_FIRST_OFF, bank, c1);
331 + set_code(tca, TCA6507_SECOND_OFF, bank, c1);
332 + tca->bank[bank].offtime = result;
333 +
334 + set_code(tca, TCA6507_INITIALIZE, bank, INIT_CODE);
335 + }
336 +
337 + /* Write all needed register of tca6507 */
338 +
339 + static void tca6507_work(struct work_struct *work)
340 + {
341 + struct tca6507_chip *tca = container_of(work, struct tca6507_chip,
342 + work);
343 + struct i2c_client *cl = tca->client;
344 + int set;
345 + u8 file[TCA6507_REG_CNT];
346 + int r;
347 +
348 + spin_lock_irq(&tca->lock);
349 + set = tca->reg_set;
350 + memcpy(file, tca->reg_file, TCA6507_REG_CNT);
351 + tca->reg_set = 0;
352 + spin_unlock_irq(&tca->lock);
353 +
354 + for (r = 0; r < TCA6507_REG_CNT; r++)
355 + if (set & (1<<r))
356 + i2c_smbus_write_byte_data(cl, r, file[r]);
357 + }
358 +
359 + static void led_release(struct tca6507_led *led)
360 + {
361 + /* If led owns any resource, release it. */
362 + struct tca6507_chip *tca = led->chip;
363 + if (led->bank >= 0) {
364 + struct bank *b = tca->bank + led->bank;
365 + if (led->blink)
366 + b->time_use--;
367 + b->level_use--;
368 + }
369 + led->blink = 0;
370 + led->bank = -1;
371 + }
372 +
373 + static int led_prepare(struct tca6507_led *led)
374 + {
375 + /* Assign this led to a bank, configuring that bank if necessary. */
376 + int level = TO_LEVEL(led->led_cdev.brightness);
377 + struct tca6507_chip *tca = led->chip;
378 + int c1, c2;
379 + int i;
380 + struct bank *b;
381 + int need_init = 0;
382 +
383 + led->led_cdev.brightness = TO_BRIGHT(level);
384 + if (level == 0) {
385 + set_select(tca, led->num, TCA6507_LS_LED_OFF);
386 + return 0;
387 + }
388 +
389 + if (led->ontime == 0 || led->offtime == 0) {
390 + /*
391 + * Just set the brightness, choosing first usable bank.
392 + * If none perfect, choose best.
393 + * Count backwards so we check MASTER bank first 394 + * to avoid wasting a timer. 395 + */ 396 + int best = -1;/* full-on */ 397 + int diff = 15-level; 398 + 399 + if (level == 15) { 400 + set_select(tca, led->num, TCA6507_LS_LED_ON); 401 + return 0; 402 + } 403 + 404 + for (i = MASTER; i >= BANK0; i--) { 405 + int d; 406 + if (tca->bank[i].level == level || 407 + tca->bank[i].level_use == 0) { 408 + best = i; 409 + break; 410 + } 411 + d = abs(level - tca->bank[i].level); 412 + if (d < diff) { 413 + diff = d; 414 + best = i; 415 + } 416 + } 417 + if (best == -1) { 418 + /* Best brightness is full-on */ 419 + set_select(tca, led->num, TCA6507_LS_LED_ON); 420 + led->led_cdev.brightness = LED_FULL; 421 + return 0; 422 + } 423 + 424 + if (!tca->bank[best].level_use) 425 + set_level(tca, best, level); 426 + 427 + tca->bank[best].level_use++; 428 + led->bank = best; 429 + set_select(tca, led->num, bank_source[best]); 430 + led->led_cdev.brightness = TO_BRIGHT(tca->bank[best].level); 431 + return 0; 432 + } 433 + 434 + /* 435 + * We have on/off time so we need to try to allocate a timing bank. 436 + * First check if times are compatible with hardware and give up if 437 + * not. 438 + */ 439 + if (choose_times(led->ontime, &c1, &c2) < 0) 440 + return -EINVAL; 441 + if (choose_times(led->offtime, &c1, &c2) < 0) 442 + return -EINVAL; 443 + 444 + for (i = BANK0; i <= BANK1; i++) { 445 + if (tca->bank[i].level_use == 0) 446 + /* not in use - it is ours! */ 447 + break; 448 + if (tca->bank[i].level != level) 449 + /* Incompatible level - skip */ 450 + /* FIX: if timer matches we maybe should consider 451 + * this anyway... 
452 + */ 453 + continue; 454 + 455 + if (tca->bank[i].time_use == 0) 456 + /* Timer not in use, and level matches - use it */ 457 + break; 458 + 459 + if (!(tca->bank[i].on_dflt || 460 + led->on_dflt || 461 + tca->bank[i].ontime == led->ontime)) 462 + /* on time is incompatible */ 463 + continue; 464 + 465 + if (!(tca->bank[i].off_dflt || 466 + led->off_dflt || 467 + tca->bank[i].offtime == led->offtime)) 468 + /* off time is incompatible */ 469 + continue; 470 + 471 + /* looks like a suitable match */ 472 + break; 473 + } 474 + 475 + if (i > BANK1) 476 + /* Nothing matches - how sad */ 477 + return -EINVAL; 478 + 479 + b = &tca->bank[i]; 480 + if (b->level_use == 0) 481 + set_level(tca, i, level); 482 + b->level_use++; 483 + led->bank = i; 484 + 485 + if (b->on_dflt || 486 + !led->on_dflt || 487 + b->time_use == 0) { 488 + b->ontime = led->ontime; 489 + b->on_dflt = led->on_dflt; 490 + need_init = 1; 491 + } 492 + 493 + if (b->off_dflt || 494 + !led->off_dflt || 495 + b->time_use == 0) { 496 + b->offtime = led->offtime; 497 + b->off_dflt = led->off_dflt; 498 + need_init = 1; 499 + } 500 + 501 + if (need_init) 502 + set_times(tca, i); 503 + 504 + led->ontime = b->ontime; 505 + led->offtime = b->offtime; 506 + 507 + b->time_use++; 508 + led->blink = 1; 509 + led->led_cdev.brightness = TO_BRIGHT(b->level); 510 + set_select(tca, led->num, blink_source[i]); 511 + return 0; 512 + } 513 + 514 + static int led_assign(struct tca6507_led *led) 515 + { 516 + struct tca6507_chip *tca = led->chip; 517 + int err; 518 + unsigned long flags; 519 + 520 + spin_lock_irqsave(&tca->lock, flags); 521 + led_release(led); 522 + err = led_prepare(led); 523 + if (err) { 524 + /* 525 + * Can only fail on timer setup. In that case we need to 526 + * re-establish as steady level. 
527 + */ 528 + led->ontime = 0; 529 + led->offtime = 0; 530 + led_prepare(led); 531 + } 532 + spin_unlock_irqrestore(&tca->lock, flags); 533 + 534 + if (tca->reg_set) 535 + schedule_work(&tca->work); 536 + return err; 537 + } 538 + 539 + static void tca6507_brightness_set(struct led_classdev *led_cdev, 540 + enum led_brightness brightness) 541 + { 542 + struct tca6507_led *led = container_of(led_cdev, struct tca6507_led, 543 + led_cdev); 544 + led->led_cdev.brightness = brightness; 545 + led->ontime = 0; 546 + led->offtime = 0; 547 + led_assign(led); 548 + } 549 + 550 + static int tca6507_blink_set(struct led_classdev *led_cdev, 551 + unsigned long *delay_on, 552 + unsigned long *delay_off) 553 + { 554 + struct tca6507_led *led = container_of(led_cdev, struct tca6507_led, 555 + led_cdev); 556 + 557 + if (*delay_on == 0) 558 + led->on_dflt = 1; 559 + else if (delay_on != &led_cdev->blink_delay_on) 560 + led->on_dflt = 0; 561 + led->ontime = *delay_on; 562 + 563 + if (*delay_off == 0) 564 + led->off_dflt = 1; 565 + else if (delay_off != &led_cdev->blink_delay_off) 566 + led->off_dflt = 0; 567 + led->offtime = *delay_off; 568 + 569 + if (led->ontime == 0) 570 + led->ontime = 512; 571 + if (led->offtime == 0) 572 + led->offtime = 512; 573 + 574 + if (led->led_cdev.brightness == LED_OFF) 575 + led->led_cdev.brightness = LED_FULL; 576 + if (led_assign(led) < 0) { 577 + led->ontime = 0; 578 + led->offtime = 0; 579 + led->led_cdev.brightness = LED_OFF; 580 + return -EINVAL; 581 + } 582 + *delay_on = led->ontime; 583 + *delay_off = led->offtime; 584 + return 0; 585 + } 586 + 587 + #ifdef CONFIG_GPIOLIB 588 + static void tca6507_gpio_set_value(struct gpio_chip *gc, 589 + unsigned offset, int val) 590 + { 591 + struct tca6507_chip *tca = container_of(gc, struct tca6507_chip, gpio); 592 + unsigned long flags; 593 + 594 + spin_lock_irqsave(&tca->lock, flags); 595 + /* 596 + * 'OFF' is floating high, and 'ON' is pulled down, so it has the 597 + * inverse sense of 'val'. 
598 + */ 599 + set_select(tca, tca->gpio_map[offset], 600 + val ? TCA6507_LS_LED_OFF : TCA6507_LS_LED_ON); 601 + spin_unlock_irqrestore(&tca->lock, flags); 602 + if (tca->reg_set) 603 + schedule_work(&tca->work); 604 + } 605 + 606 + static int tca6507_gpio_direction_output(struct gpio_chip *gc, 607 + unsigned offset, int val) 608 + { 609 + tca6507_gpio_set_value(gc, offset, val); 610 + return 0; 611 + } 612 + 613 + static int tca6507_probe_gpios(struct i2c_client *client, 614 + struct tca6507_chip *tca, 615 + struct tca6507_platform_data *pdata) 616 + { 617 + int err; 618 + int i = 0; 619 + int gpios = 0; 620 + 621 + for (i = 0; i < NUM_LEDS; i++) 622 + if (pdata->leds.leds[i].name && pdata->leds.leds[i].flags) { 623 + /* Configure as a gpio */ 624 + tca->gpio_name[gpios] = pdata->leds.leds[i].name; 625 + tca->gpio_map[gpios] = i; 626 + gpios++; 627 + } 628 + 629 + if (!gpios) 630 + return 0; 631 + 632 + tca->gpio.label = "gpio-tca6507"; 633 + tca->gpio.names = tca->gpio_name; 634 + tca->gpio.ngpio = gpios; 635 + tca->gpio.base = pdata->gpio_base; 636 + tca->gpio.owner = THIS_MODULE; 637 + tca->gpio.direction_output = tca6507_gpio_direction_output; 638 + tca->gpio.set = tca6507_gpio_set_value; 639 + tca->gpio.dev = &client->dev; 640 + err = gpiochip_add(&tca->gpio); 641 + if (err) { 642 + tca->gpio.ngpio = 0; 643 + return err; 644 + } 645 + if (pdata->setup) 646 + pdata->setup(tca->gpio.base, tca->gpio.ngpio); 647 + return 0; 648 + } 649 + 650 + static void tca6507_remove_gpio(struct tca6507_chip *tca) 651 + { 652 + if (tca->gpio.ngpio) { 653 + int err = gpiochip_remove(&tca->gpio); 654 + dev_err(&tca->client->dev, "%s failed, %d\n", 655 + "gpiochip_remove()", err); 656 + } 657 + } 658 + #else /* CONFIG_GPIOLIB */ 659 + static int tca6507_probe_gpios(struct i2c_client *client, 660 + struct tca6507_chip *tca, 661 + struct tca6507_platform_data *pdata) 662 + { 663 + return 0; 664 + } 665 + static void tca6507_remove_gpio(struct tca6507_chip *tca) 666 + { 667 + } 668 
+ #endif /* CONFIG_GPIOLIB */ 669 + 670 + static int __devinit tca6507_probe(struct i2c_client *client, 671 + const struct i2c_device_id *id) 672 + { 673 + struct tca6507_chip *tca; 674 + struct i2c_adapter *adapter; 675 + struct tca6507_platform_data *pdata; 676 + int err; 677 + int i = 0; 678 + 679 + adapter = to_i2c_adapter(client->dev.parent); 680 + pdata = client->dev.platform_data; 681 + 682 + if (!i2c_check_functionality(adapter, I2C_FUNC_I2C)) 683 + return -EIO; 684 + 685 + if (!pdata || pdata->leds.num_leds != NUM_LEDS) { 686 + dev_err(&client->dev, "Need %d entries in platform-data list\n", 687 + NUM_LEDS); 688 + return -ENODEV; 689 + } 690 + err = -ENOMEM; 691 + tca = kzalloc(sizeof(*tca), GFP_KERNEL); 692 + if (!tca) 693 + goto exit; 694 + 695 + tca->client = client; 696 + INIT_WORK(&tca->work, tca6507_work); 697 + spin_lock_init(&tca->lock); 698 + i2c_set_clientdata(client, tca); 699 + 700 + for (i = 0; i < NUM_LEDS; i++) { 701 + struct tca6507_led *l = tca->leds + i; 702 + 703 + l->chip = tca; 704 + l->num = i; 705 + if (pdata->leds.leds[i].name && !pdata->leds.leds[i].flags) { 706 + l->led_cdev.name = pdata->leds.leds[i].name; 707 + l->led_cdev.default_trigger 708 + = pdata->leds.leds[i].default_trigger; 709 + l->led_cdev.brightness_set = tca6507_brightness_set; 710 + l->led_cdev.blink_set = tca6507_blink_set; 711 + l->bank = -1; 712 + err = led_classdev_register(&client->dev, 713 + &l->led_cdev); 714 + if (err < 0) 715 + goto exit; 716 + } 717 + } 718 + err = tca6507_probe_gpios(client, tca, pdata); 719 + if (err) 720 + goto exit; 721 + /* set all registers to known state - zero */ 722 + tca->reg_set = 0x7f; 723 + schedule_work(&tca->work); 724 + 725 + return 0; 726 + exit: 727 + while (i--) 728 + if (tca->leds[i].led_cdev.name) 729 + led_classdev_unregister(&tca->leds[i].led_cdev); 730 + cancel_work_sync(&tca->work); 731 + i2c_set_clientdata(client, NULL); 732 + kfree(tca); 733 + return err; 734 + } 735 + 736 + static int __devexit 
tca6507_remove(struct i2c_client *client) 737 + { 738 + int i; 739 + struct tca6507_chip *tca = i2c_get_clientdata(client); 740 + struct tca6507_led *tca_leds = tca->leds; 741 + 742 + for (i = 0; i < NUM_LEDS; i++) { 743 + if (tca_leds[i].led_cdev.name) 744 + led_classdev_unregister(&tca_leds[i].led_cdev); 745 + } 746 + tca6507_remove_gpio(tca); 747 + cancel_work_sync(&tca->work); 748 + i2c_set_clientdata(client, NULL); 749 + kfree(tca); 750 + 751 + return 0; 752 + } 753 + 754 + static struct i2c_driver tca6507_driver = { 755 + .driver = { 756 + .name = "leds-tca6507", 757 + .owner = THIS_MODULE, 758 + }, 759 + .probe = tca6507_probe, 760 + .remove = __devexit_p(tca6507_remove), 761 + .id_table = tca6507_id, 762 + }; 763 + 764 + static int __init tca6507_leds_init(void) 765 + { 766 + return i2c_add_driver(&tca6507_driver); 767 + } 768 + 769 + static void __exit tca6507_leds_exit(void) 770 + { 771 + i2c_del_driver(&tca6507_driver); 772 + } 773 + 774 + module_init(tca6507_leds_init); 775 + module_exit(tca6507_leds_exit); 776 + 777 + MODULE_AUTHOR("NeilBrown <neilb@suse.de>"); 778 + MODULE_DESCRIPTION("TCA6507 LED/GPO driver"); 779 + MODULE_LICENSE("GPL v2");
+3 -14
drivers/leds/leds-wm831x-status.c
··· 237 237 goto err; 238 238 } 239 239 240 - drvdata = kzalloc(sizeof(struct wm831x_status), GFP_KERNEL); 240 + drvdata = devm_kzalloc(&pdev->dev, sizeof(struct wm831x_status), 241 + GFP_KERNEL); 241 242 if (!drvdata) 242 243 return -ENOMEM; 243 244 dev_set_drvdata(&pdev->dev, drvdata); ··· 301 300 302 301 err_led: 303 302 led_classdev_unregister(&drvdata->cdev); 304 - kfree(drvdata); 305 303 err: 306 304 return ret; 307 305 } ··· 311 311 312 312 device_remove_file(drvdata->cdev.dev, &dev_attr_src); 313 313 led_classdev_unregister(&drvdata->cdev); 314 - kfree(drvdata); 315 314 316 315 return 0; 317 316 } ··· 324 325 .remove = wm831x_status_remove, 325 326 }; 326 327 327 - static int __devinit wm831x_status_init(void) 328 - { 329 - return platform_driver_register(&wm831x_status_driver); 330 - } 331 - module_init(wm831x_status_init); 332 - 333 - static void wm831x_status_exit(void) 334 - { 335 - platform_driver_unregister(&wm831x_status_driver); 336 - } 337 - module_exit(wm831x_status_exit); 328 + module_platform_driver(wm831x_status_driver); 338 329 339 330 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 340 331 MODULE_DESCRIPTION("WM831x status LED driver");
+3 -16
drivers/leds/leds-wm8350.c
··· 227 227 goto err_isink; 228 228 } 229 229 230 - led = kzalloc(sizeof(*led), GFP_KERNEL); 230 + led = devm_kzalloc(&pdev->dev, sizeof(*led), GFP_KERNEL); 231 231 if (led == NULL) { 232 232 ret = -ENOMEM; 233 233 goto err_dcdc; ··· 259 259 260 260 ret = led_classdev_register(&pdev->dev, &led->cdev); 261 261 if (ret < 0) 262 - goto err_led; 262 + goto err_dcdc; 263 263 264 264 return 0; 265 265 266 - err_led: 267 - kfree(led); 268 266 err_dcdc: 269 267 regulator_put(dcdc); 270 268 err_isink: ··· 279 281 wm8350_led_disable(led); 280 282 regulator_put(led->dcdc); 281 283 regulator_put(led->isink); 282 - kfree(led); 283 284 return 0; 284 285 } 285 286 ··· 292 295 .shutdown = wm8350_led_shutdown, 293 296 }; 294 297 295 - static int __devinit wm8350_led_init(void) 296 - { 297 - return platform_driver_register(&wm8350_led_driver); 298 - } 299 - module_init(wm8350_led_init); 300 - 301 - static void wm8350_led_exit(void) 302 - { 303 - platform_driver_unregister(&wm8350_led_driver); 304 - } 305 - module_exit(wm8350_led_exit); 298 + module_platform_driver(wm8350_led_driver); 306 299 307 300 MODULE_AUTHOR("Mark Brown"); 308 301 MODULE_DESCRIPTION("WM8350 LED driver");
+2 -2
drivers/rtc/interface.c
··· 228 228 alarm->time.tm_hour = now.tm_hour; 229 229 230 230 /* For simplicity, only support date rollover for now */ 231 - if (alarm->time.tm_mday == -1) { 231 + if (alarm->time.tm_mday < 1 || alarm->time.tm_mday > 31) { 232 232 alarm->time.tm_mday = now.tm_mday; 233 233 missing = day; 234 234 } 235 - if (alarm->time.tm_mon == -1) { 235 + if ((unsigned)alarm->time.tm_mon >= 12) { 236 236 alarm->time.tm_mon = now.tm_mon; 237 237 if (missing == none) 238 238 missing = month;
+1 -11
drivers/rtc/rtc-88pm860x.c
··· 410 410 .remove = __devexit_p(pm860x_rtc_remove), 411 411 }; 412 412 413 - static int __init pm860x_rtc_init(void) 414 - { 415 - return platform_driver_register(&pm860x_rtc_driver); 416 - } 417 - module_init(pm860x_rtc_init); 418 - 419 - static void __exit pm860x_rtc_exit(void) 420 - { 421 - platform_driver_unregister(&pm860x_rtc_driver); 422 - } 423 - module_exit(pm860x_rtc_exit); 413 + module_platform_driver(pm860x_rtc_driver); 424 414 425 415 MODULE_DESCRIPTION("Marvell 88PM860x RTC driver"); 426 416 MODULE_AUTHOR("Haojian Zhuang <haojian.zhuang@marvell.com>");
+120 -16
drivers/rtc/rtc-ab8500.c
··· 90 90 91 91 /* Early AB8500 chips will not clear the rtc read request bit */ 92 92 if (abx500_get_chip_id(dev) == 0) { 93 - msleep(1); 93 + usleep_range(1000, 1000); 94 94 } else { 95 95 /* Wait for some cycles after enabling the rtc read in ab8500 */ 96 96 while (time_before(jiffies, timeout)) { ··· 102 102 if (!(value & RTC_READ_REQUEST)) 103 103 break; 104 104 105 - msleep(1); 105 + usleep_range(1000, 5000); 106 106 } 107 107 } 108 108 ··· 258 258 return ab8500_rtc_irq_enable(dev, alarm->enabled); 259 259 } 260 260 261 + 262 + static int ab8500_rtc_set_calibration(struct device *dev, int calibration) 263 + { 264 + int retval; 265 + u8 rtccal = 0; 266 + 267 + /* 268 + * Check that the calibration value (which is in units of 0.5 269 + * parts-per-million) is in the AB8500's range for RtcCalibration 270 + * register. -128 (0x80) is not permitted because the AB8500 uses 271 + * a sign-bit rather than two's complement, so 0x80 is just another 272 + * representation of zero. 273 + */ 274 + if ((calibration < -127) || (calibration > 127)) { 275 + dev_err(dev, "RtcCalibration value outside permitted range\n"); 276 + return -EINVAL; 277 + } 278 + 279 + /* 280 + * The AB8500 uses sign (in bit7) and magnitude (in bits0-7) 281 + * so need to convert to this sort of representation before writing 282 + * into RtcCalibration register... 
283 + */ 284 + if (calibration >= 0) 285 + rtccal = 0x7F & calibration; 286 + else 287 + rtccal = ~(calibration - 1) | 0x80; 288 + 289 + retval = abx500_set_register_interruptible(dev, AB8500_RTC, 290 + AB8500_RTC_CALIB_REG, rtccal); 291 + 292 + return retval; 293 + } 294 + 295 + static int ab8500_rtc_get_calibration(struct device *dev, int *calibration) 296 + { 297 + int retval; 298 + u8 rtccal = 0; 299 + 300 + retval = abx500_get_register_interruptible(dev, AB8500_RTC, 301 + AB8500_RTC_CALIB_REG, &rtccal); 302 + if (retval >= 0) { 303 + /* 304 + * The AB8500 uses sign (in bit7) and magnitude (in bits0-7) 305 + * so need to convert value from RtcCalibration register into 306 + * a two's complement signed value... 307 + */ 308 + if (rtccal & 0x80) 309 + *calibration = 0 - (rtccal & 0x7F); 310 + else 311 + *calibration = 0x7F & rtccal; 312 + } 313 + 314 + return retval; 315 + } 316 + 317 + static ssize_t ab8500_sysfs_store_rtc_calibration(struct device *dev, 318 + struct device_attribute *attr, 319 + const char *buf, size_t count) 320 + { 321 + int retval; 322 + int calibration = 0; 323 + 324 + if (sscanf(buf, " %i ", &calibration) != 1) { 325 + dev_err(dev, "Failed to store RTC calibration attribute\n"); 326 + return -EINVAL; 327 + } 328 + 329 + retval = ab8500_rtc_set_calibration(dev, calibration); 330 + 331 + return retval ? 
retval : count; 332 + } 333 + 334 + static ssize_t ab8500_sysfs_show_rtc_calibration(struct device *dev, 335 + struct device_attribute *attr, char *buf) 336 + { 337 + int retval = 0; 338 + int calibration = 0; 339 + 340 + retval = ab8500_rtc_get_calibration(dev, &calibration); 341 + if (retval < 0) { 342 + dev_err(dev, "Failed to read RTC calibration attribute\n"); 343 + sprintf(buf, "0\n"); 344 + return retval; 345 + } 346 + 347 + return sprintf(buf, "%d\n", calibration); 348 + } 349 + 350 + static DEVICE_ATTR(rtc_calibration, S_IRUGO | S_IWUSR, 351 + ab8500_sysfs_show_rtc_calibration, 352 + ab8500_sysfs_store_rtc_calibration); 353 + 354 + static int ab8500_sysfs_rtc_register(struct device *dev) 355 + { 356 + return device_create_file(dev, &dev_attr_rtc_calibration); 357 + } 358 + 359 + static void ab8500_sysfs_rtc_unregister(struct device *dev) 360 + { 361 + device_remove_file(dev, &dev_attr_rtc_calibration); 362 + } 363 + 261 364 static irqreturn_t rtc_alarm_handler(int irq, void *data) 262 365 { 263 366 struct rtc_device *rtc = data; ··· 398 295 return err; 399 296 400 297 /* Wait for reset by the PorRtc */ 401 - msleep(1); 298 + usleep_range(1000, 5000); 402 299 403 300 err = abx500_get_register_interruptible(&pdev->dev, AB8500_RTC, 404 301 AB8500_RTC_STAT_REG, &rtc_ctrl); ··· 411 308 return -ENODEV; 412 309 } 413 310 311 + device_init_wakeup(&pdev->dev, true); 312 + 414 313 rtc = rtc_device_register("ab8500-rtc", &pdev->dev, &ab8500_rtc_ops, 415 314 THIS_MODULE); 416 315 if (IS_ERR(rtc)) { ··· 421 316 return err; 422 317 } 423 318 424 - err = request_threaded_irq(irq, NULL, rtc_alarm_handler, 0, 425 - "ab8500-rtc", rtc); 319 + err = request_threaded_irq(irq, NULL, rtc_alarm_handler, 320 + IRQF_NO_SUSPEND, "ab8500-rtc", rtc); 426 321 if (err < 0) { 427 322 rtc_device_unregister(rtc); 428 323 return err; 429 324 } 430 325 431 326 platform_set_drvdata(pdev, rtc); 327 + 328 + 329 + err = ab8500_sysfs_rtc_register(&pdev->dev); 330 + if (err) { 331 + 
dev_err(&pdev->dev, "sysfs RTC failed to register\n"); 332 + return err; 333 + } 432 334 433 335 return 0; 434 336 } ··· 444 332 { 445 333 struct rtc_device *rtc = platform_get_drvdata(pdev); 446 334 int irq = platform_get_irq_byname(pdev, "ALARM"); 335 + 336 + ab8500_sysfs_rtc_unregister(&pdev->dev); 447 337 448 338 free_irq(irq, rtc); 449 339 rtc_device_unregister(rtc); ··· 463 349 .remove = __devexit_p(ab8500_rtc_remove), 464 350 }; 465 351 466 - static int __init ab8500_rtc_init(void) 467 - { 468 - return platform_driver_register(&ab8500_rtc_driver); 469 - } 352 + module_platform_driver(ab8500_rtc_driver); 470 353 471 - static void __exit ab8500_rtc_exit(void) 472 - { 473 - platform_driver_unregister(&ab8500_rtc_driver); 474 - } 475 - 476 - module_init(ab8500_rtc_init); 477 - module_exit(ab8500_rtc_exit); 478 354 MODULE_AUTHOR("Virupax Sadashivpetimath <virupax.sadashivpetimath@stericsson.com>"); 479 355 MODULE_DESCRIPTION("AB8500 RTC Driver"); 480 356 MODULE_LICENSE("GPL v2");
+1 -12
drivers/rtc/rtc-bfin.c
··· 456 456 .resume = bfin_rtc_resume, 457 457 }; 458 458 459 - static int __init bfin_rtc_init(void) 460 - { 461 - return platform_driver_register(&bfin_rtc_driver); 462 - } 463 - 464 - static void __exit bfin_rtc_exit(void) 465 - { 466 - platform_driver_unregister(&bfin_rtc_driver); 467 - } 468 - 469 - module_init(bfin_rtc_init); 470 - module_exit(bfin_rtc_exit); 459 + module_platform_driver(bfin_rtc_driver); 471 460 472 461 MODULE_DESCRIPTION("Blackfin On-Chip Real Time Clock Driver"); 473 462 MODULE_AUTHOR("Mike Frysinger <vapier@gentoo.org>");
+1 -12
drivers/rtc/rtc-bq4802.c
··· 218 218 .remove = __devexit_p(bq4802_remove), 219 219 }; 220 220 221 - static int __init bq4802_init(void) 222 - { 223 - return platform_driver_register(&bq4802_driver); 224 - } 225 - 226 - static void __exit bq4802_exit(void) 227 - { 228 - platform_driver_unregister(&bq4802_driver); 229 - } 230 - 231 - module_init(bq4802_init); 232 - module_exit(bq4802_exit); 221 + module_platform_driver(bq4802_driver);
+1 -1
drivers/rtc/rtc-cmos.c
··· 164 164 static inline void cmos_write_bank2(unsigned char val, unsigned char addr) 165 165 { 166 166 outb(addr, RTC_PORT(2)); 167 - outb(val, RTC_PORT(2)); 167 + outb(val, RTC_PORT(3)); 168 168 } 169 169 170 170 #else
+1 -11
drivers/rtc/rtc-dm355evm.c
··· 161 161 }, 162 162 }; 163 163 164 - static int __init dm355evm_rtc_init(void) 165 - { 166 - return platform_driver_register(&rtc_dm355evm_driver); 167 - } 168 - module_init(dm355evm_rtc_init); 169 - 170 - static void __exit dm355evm_rtc_exit(void) 171 - { 172 - platform_driver_unregister(&rtc_dm355evm_driver); 173 - } 174 - module_exit(dm355evm_rtc_exit); 164 + module_platform_driver(rtc_dm355evm_driver); 175 165 176 166 MODULE_LICENSE("GPL");
+1 -12
drivers/rtc/rtc-ds1286.c
··· 396 396 .remove = __devexit_p(ds1286_remove), 397 397 }; 398 398 399 - static int __init ds1286_init(void) 400 - { 401 - return platform_driver_register(&ds1286_platform_driver); 402 - } 403 - 404 - static void __exit ds1286_exit(void) 405 - { 406 - platform_driver_unregister(&ds1286_platform_driver); 407 - } 399 + module_platform_driver(ds1286_platform_driver); 408 400 409 401 MODULE_AUTHOR("Thomas Bogendoerfer <tsbogend@alpha.franken.de>"); 410 402 MODULE_DESCRIPTION("DS1286 RTC driver"); 411 403 MODULE_LICENSE("GPL"); 412 404 MODULE_VERSION(DRV_VERSION); 413 405 MODULE_ALIAS("platform:rtc-ds1286"); 414 - 415 - module_init(ds1286_init); 416 - module_exit(ds1286_exit);
+1 -14
drivers/rtc/rtc-ds1511.c
··· 580 580 }, 581 581 }; 582 582 583 - static int __init 584 - ds1511_rtc_init(void) 585 - { 586 - return platform_driver_register(&ds1511_rtc_driver); 587 - } 588 - 589 - static void __exit 590 - ds1511_rtc_exit(void) 591 - { 592 - platform_driver_unregister(&ds1511_rtc_driver); 593 - } 594 - 595 - module_init(ds1511_rtc_init); 596 - module_exit(ds1511_rtc_exit); 583 + module_platform_driver(ds1511_rtc_driver); 597 584 598 585 MODULE_AUTHOR("Andrew Sharp <andy.sharp@lsi.com>"); 599 586 MODULE_DESCRIPTION("Dallas DS1511 RTC driver");
+1 -12
drivers/rtc/rtc-ds1553.c
··· 361 361 }, 362 362 }; 363 363 364 - static __init int ds1553_init(void) 365 - { 366 - return platform_driver_register(&ds1553_rtc_driver); 367 - } 368 - 369 - static __exit void ds1553_exit(void) 370 - { 371 - platform_driver_unregister(&ds1553_rtc_driver); 372 - } 373 - 374 - module_init(ds1553_init); 375 - module_exit(ds1553_exit); 364 + module_platform_driver(ds1553_rtc_driver); 376 365 377 366 MODULE_AUTHOR("Atsushi Nemoto <anemo@mba.ocn.ne.jp>"); 378 367 MODULE_DESCRIPTION("Dallas DS1553 RTC driver");
+1 -12
drivers/rtc/rtc-ds1742.c
··· 240 240 }, 241 241 }; 242 242 243 - static __init int ds1742_init(void) 244 - { 245 - return platform_driver_register(&ds1742_rtc_driver); 246 - } 247 - 248 - static __exit void ds1742_exit(void) 249 - { 250 - platform_driver_unregister(&ds1742_rtc_driver); 251 - } 252 - 253 - module_init(ds1742_init); 254 - module_exit(ds1742_exit); 243 + module_platform_driver(ds1742_rtc_driver); 255 244 256 245 MODULE_AUTHOR("Atsushi Nemoto <anemo@mba.ocn.ne.jp>"); 257 246 MODULE_DESCRIPTION("Dallas DS1742 RTC driver");
+2 -12
drivers/rtc/rtc-jz4740.c
··· 345 345 #define JZ4740_RTC_PM_OPS NULL 346 346 #endif /* CONFIG_PM */ 347 347 348 - struct platform_driver jz4740_rtc_driver = { 348 + static struct platform_driver jz4740_rtc_driver = { 349 349 .probe = jz4740_rtc_probe, 350 350 .remove = __devexit_p(jz4740_rtc_remove), 351 351 .driver = { ··· 355 355 }, 356 356 }; 357 357 358 - static int __init jz4740_rtc_init(void) 359 - { 360 - return platform_driver_register(&jz4740_rtc_driver); 361 - } 362 - module_init(jz4740_rtc_init); 363 - 364 - static void __exit jz4740_rtc_exit(void) 365 - { 366 - platform_driver_unregister(&jz4740_rtc_driver); 367 - } 368 - module_exit(jz4740_rtc_exit); 358 + module_platform_driver(jz4740_rtc_driver); 369 359 370 360 MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>"); 371 361 MODULE_LICENSE("GPL");
+1 -11
drivers/rtc/rtc-lpc32xx.c
··· 396 396 }, 397 397 }; 398 398 399 - static int __init lpc32xx_rtc_init(void) 400 - { 401 - return platform_driver_register(&lpc32xx_rtc_driver); 402 - } 403 - module_init(lpc32xx_rtc_init); 404 - 405 - static void __exit lpc32xx_rtc_exit(void) 406 - { 407 - platform_driver_unregister(&lpc32xx_rtc_driver); 408 - } 409 - module_exit(lpc32xx_rtc_exit); 399 + module_platform_driver(lpc32xx_rtc_driver); 410 400 411 401 MODULE_AUTHOR("Kevin Wells <wellsk40@gmail.com"); 412 402 MODULE_DESCRIPTION("RTC driver for the LPC32xx SoC");
-1
drivers/rtc/rtc-m41t93.c
··· 200 200 static struct spi_driver m41t93_driver = { 201 201 .driver = { 202 202 .name = "rtc-m41t93", 203 - .bus = &spi_bus_type, 204 203 .owner = THIS_MODULE, 205 204 }, 206 205 .probe = m41t93_probe,
-1
drivers/rtc/rtc-m41t94.c
··· 147 147 static struct spi_driver m41t94_driver = { 148 148 .driver = { 149 149 .name = "rtc-m41t94", 150 - .bus = &spi_bus_type, 151 150 .owner = THIS_MODULE, 152 151 }, 153 152 .probe = m41t94_probe,
+1 -12
drivers/rtc/rtc-m48t35.c
··· 216 216 .remove = __devexit_p(m48t35_remove), 217 217 }; 218 218 219 - static int __init m48t35_init(void) 220 - { 221 - return platform_driver_register(&m48t35_platform_driver); 222 - } 223 - 224 - static void __exit m48t35_exit(void) 225 - { 226 - platform_driver_unregister(&m48t35_platform_driver); 227 - } 219 + module_platform_driver(m48t35_platform_driver); 228 220 229 221 MODULE_AUTHOR("Thomas Bogendoerfer <tsbogend@alpha.franken.de>"); 230 222 MODULE_DESCRIPTION("M48T35 RTC driver"); 231 223 MODULE_LICENSE("GPL"); 232 224 MODULE_VERSION(DRV_VERSION); 233 225 MODULE_ALIAS("platform:rtc-m48t35"); 234 - 235 - module_init(m48t35_init); 236 - module_exit(m48t35_exit);
+1 -12
drivers/rtc/rtc-m48t59.c
··· 530 530 .remove = __devexit_p(m48t59_rtc_remove), 531 531 }; 532 532 533 - static int __init m48t59_rtc_init(void) 534 - { 535 - return platform_driver_register(&m48t59_rtc_driver); 536 - } 537 - 538 - static void __exit m48t59_rtc_exit(void) 539 - { 540 - platform_driver_unregister(&m48t59_rtc_driver); 541 - } 542 - 543 - module_init(m48t59_rtc_init); 544 - module_exit(m48t59_rtc_exit); 533 + module_platform_driver(m48t59_rtc_driver); 545 534 546 535 MODULE_AUTHOR("Mark Zhan <rongkai.zhan@windriver.com>"); 547 536 MODULE_DESCRIPTION("M48T59/M48T02/M48T08 RTC driver");
+1 -12
drivers/rtc/rtc-m48t86.c
··· 185 185 .remove = __devexit_p(m48t86_rtc_remove), 186 186 }; 187 187 188 - static int __init m48t86_rtc_init(void) 189 - { 190 - return platform_driver_register(&m48t86_rtc_platform_driver); 191 - } 192 - 193 - static void __exit m48t86_rtc_exit(void) 194 - { 195 - platform_driver_unregister(&m48t86_rtc_platform_driver); 196 - } 188 + module_platform_driver(m48t86_rtc_platform_driver); 197 189 198 190 MODULE_AUTHOR("Alessandro Zummo <a.zummo@towertech.it>"); 199 191 MODULE_DESCRIPTION("M48T86 RTC driver"); 200 192 MODULE_LICENSE("GPL"); 201 193 MODULE_VERSION(DRV_VERSION); 202 194 MODULE_ALIAS("platform:rtc-m48t86"); 203 - 204 - module_init(m48t86_rtc_init); 205 - module_exit(m48t86_rtc_exit);
-1
drivers/rtc/rtc-max6902.c
··· 154 154 static struct spi_driver max6902_driver = { 155 155 .driver = { 156 156 .name = "rtc-max6902", 157 - .bus = &spi_bus_type, 158 157 .owner = THIS_MODULE, 159 158 }, 160 159 .probe = max6902_probe,
+1 -11
drivers/rtc/rtc-max8925.c
··· 299 299 .remove = __devexit_p(max8925_rtc_remove), 300 300 }; 301 301 302 - static int __init max8925_rtc_init(void) 303 - { 304 - return platform_driver_register(&max8925_rtc_driver); 305 - } 306 - module_init(max8925_rtc_init); 307 - 308 - static void __exit max8925_rtc_exit(void) 309 - { 310 - platform_driver_unregister(&max8925_rtc_driver); 311 - } 312 - module_exit(max8925_rtc_exit); 302 + module_platform_driver(max8925_rtc_driver); 313 303 314 304 MODULE_DESCRIPTION("Maxim MAX8925 RTC driver"); 315 305 MODULE_AUTHOR("Haojian Zhuang <haojian.zhuang@marvell.com>");
+1 -11
drivers/rtc/rtc-max8998.c
··· 327 327 .id_table = max8998_rtc_id, 328 328 }; 329 329 330 - static int __init max8998_rtc_init(void) 331 - { 332 - return platform_driver_register(&max8998_rtc_driver); 333 - } 334 - module_init(max8998_rtc_init); 335 - 336 - static void __exit max8998_rtc_exit(void) 337 - { 338 - platform_driver_unregister(&max8998_rtc_driver); 339 - } 340 - module_exit(max8998_rtc_exit); 330 + module_platform_driver(max8998_rtc_driver); 341 331 342 332 MODULE_AUTHOR("Minkyu Kang <mk7.kang@samsung.com>"); 343 333 MODULE_AUTHOR("Joonyoung Shim <jy0922.shim@samsung.com>");
+1 -1
drivers/rtc/rtc-mc13xxx.c
··· 399 399 return 0; 400 400 } 401 401 402 - const struct platform_device_id mc13xxx_rtc_idtable[] = { 402 + static const struct platform_device_id mc13xxx_rtc_idtable[] = { 403 403 { 404 404 .name = "mc13783-rtc", 405 405 }, {
+1 -11
drivers/rtc/rtc-mpc5121.c
··· 418 418 .remove = __devexit_p(mpc5121_rtc_remove), 419 419 }; 420 420 421 - static int __init mpc5121_rtc_init(void) 422 - { 423 - return platform_driver_register(&mpc5121_rtc_driver); 424 - } 425 - module_init(mpc5121_rtc_init); 426 - 427 - static void __exit mpc5121_rtc_exit(void) 428 - { 429 - platform_driver_unregister(&mpc5121_rtc_driver); 430 - } 431 - module_exit(mpc5121_rtc_exit); 421 + module_platform_driver(mpc5121_rtc_driver); 432 422 433 423 MODULE_LICENSE("GPL"); 434 424 MODULE_AUTHOR("John Rigby <jcrigby@gmail.com>");
+1 -12
drivers/rtc/rtc-mrst.c
··· 537 537 } 538 538 }; 539 539 540 - static int __init vrtc_mrst_init(void) 541 - { 542 - return platform_driver_register(&vrtc_mrst_platform_driver); 543 - } 544 - 545 - static void __exit vrtc_mrst_exit(void) 546 - { 547 - platform_driver_unregister(&vrtc_mrst_platform_driver); 548 - } 549 - 550 - module_init(vrtc_mrst_init); 551 - module_exit(vrtc_mrst_exit); 540 + module_platform_driver(vrtc_mrst_platform_driver); 552 541 553 542 MODULE_AUTHOR("Jacob Pan; Feng Tang"); 554 543 MODULE_DESCRIPTION("Driver for Moorestown virtual RTC");
+70 -53
drivers/rtc/rtc-mxc.c
··· 155 155 { 156 156 struct rtc_time alarm_tm, now_tm; 157 157 unsigned long now, time; 158 - int ret; 159 158 struct platform_device *pdev = to_platform_device(dev); 160 159 struct rtc_plat_data *pdata = platform_get_drvdata(pdev); 161 160 void __iomem *ioaddr = pdata->ioaddr; ··· 167 168 alarm_tm.tm_hour = alrm->tm_hour; 168 169 alarm_tm.tm_min = alrm->tm_min; 169 170 alarm_tm.tm_sec = alrm->tm_sec; 170 - rtc_tm_to_time(&now_tm, &now); 171 171 rtc_tm_to_time(&alarm_tm, &time); 172 - 173 - if (time < now) { 174 - time += 60 * 60 * 24; 175 - rtc_time_to_tm(time, &alarm_tm); 176 - } 177 - 178 - ret = rtc_tm_to_time(&alarm_tm, &time); 179 172 180 173 /* clear all the interrupt status bits */ 181 174 writew(readw(ioaddr + RTC_RTCISR), ioaddr + RTC_RTCISR); 182 175 set_alarm_or_time(dev, MXC_RTC_ALARM, time); 183 176 184 - return ret; 177 + return 0; 178 + } 179 + 180 + static void mxc_rtc_irq_enable(struct device *dev, unsigned int bit, 181 + unsigned int enabled) 182 + { 183 + struct platform_device *pdev = to_platform_device(dev); 184 + struct rtc_plat_data *pdata = platform_get_drvdata(pdev); 185 + void __iomem *ioaddr = pdata->ioaddr; 186 + u32 reg; 187 + 188 + spin_lock_irq(&pdata->rtc->irq_lock); 189 + reg = readw(ioaddr + RTC_RTCIENR); 190 + 191 + if (enabled) 192 + reg |= bit; 193 + else 194 + reg &= ~bit; 195 + 196 + writew(reg, ioaddr + RTC_RTCIENR); 197 + spin_unlock_irq(&pdata->rtc->irq_lock); 185 198 } 186 199 187 200 /* This function is the RTC interrupt service routine. 
*/ ··· 210 199 /* clear interrupt sources */ 211 200 writew(status, ioaddr + RTC_RTCISR); 212 201 213 - /* clear alarm interrupt if it has occurred */ 214 - if (status & RTC_ALM_BIT) 215 - status &= ~RTC_ALM_BIT; 216 - 217 202 /* update irq data & counter */ 218 - if (status & RTC_ALM_BIT) 203 + if (status & RTC_ALM_BIT) { 219 204 events |= (RTC_AF | RTC_IRQF); 205 + /* RTC alarm should be one-shot */ 206 + mxc_rtc_irq_enable(&pdev->dev, RTC_ALM_BIT, 0); 207 + } 220 208 221 209 if (status & RTC_1HZ_BIT) 222 210 events |= (RTC_UF | RTC_IRQF); 223 211 224 212 if (status & PIT_ALL_ON) 225 213 events |= (RTC_PF | RTC_IRQF); 226 - 227 - if ((status & RTC_ALM_BIT) && rtc_valid_tm(&pdata->g_rtc_alarm)) 228 - rtc_update_alarm(&pdev->dev, &pdata->g_rtc_alarm); 229 214 230 215 rtc_update_irq(pdata->rtc, 1, events); 231 216 spin_unlock_irq(&pdata->rtc->irq_lock); ··· 246 239 /* Clear all interrupt status */ 247 240 writew(0xffffffff, ioaddr + RTC_RTCISR); 248 241 249 - spin_unlock_irq(&pdata->rtc->irq_lock); 250 - } 251 - 252 - static void mxc_rtc_irq_enable(struct device *dev, unsigned int bit, 253 - unsigned int enabled) 254 - { 255 - struct platform_device *pdev = to_platform_device(dev); 256 - struct rtc_plat_data *pdata = platform_get_drvdata(pdev); 257 - void __iomem *ioaddr = pdata->ioaddr; 258 - u32 reg; 259 - 260 - spin_lock_irq(&pdata->rtc->irq_lock); 261 - reg = readw(ioaddr + RTC_RTCIENR); 262 - 263 - if (enabled) 264 - reg |= bit; 265 - else 266 - reg &= ~bit; 267 - 268 - writew(reg, ioaddr + RTC_RTCIENR); 269 242 spin_unlock_irq(&pdata->rtc->irq_lock); 270 243 } 271 244 ··· 277 290 */ 278 291 static int mxc_rtc_set_mmss(struct device *dev, unsigned long time) 279 292 { 293 + /* 294 + * TTC_DAYR register is 9-bit in MX1 SoC, save time and day of year only 295 + */ 296 + if (cpu_is_mx1()) { 297 + struct rtc_time tm; 298 + 299 + rtc_time_to_tm(time, &tm); 300 + tm.tm_year = 70; 301 + rtc_tm_to_time(&tm, &time); 302 + } 303 + 280 304 /* Avoid roll-over from reading 
the different registers */ 281 305 do { 282 306 set_alarm_or_time(dev, MXC_RTC_TIME, time); ··· 322 324 struct rtc_plat_data *pdata = platform_get_drvdata(pdev); 323 325 int ret; 324 326 325 - if (rtc_valid_tm(&alrm->time)) { 326 - if (alrm->time.tm_sec > 59 || 327 - alrm->time.tm_hour > 23 || 328 - alrm->time.tm_min > 59) 329 - return -EINVAL; 330 - 331 - ret = rtc_update_alarm(dev, &alrm->time); 332 - } else { 333 - ret = rtc_valid_tm(&alrm->time); 334 - if (ret) 335 - return ret; 336 - 337 - ret = rtc_update_alarm(dev, &alrm->time); 338 - } 339 - 327 + ret = rtc_update_alarm(dev, &alrm->time); 340 328 if (ret) 341 329 return ret; 342 330 ··· 408 424 pdata->irq = -1; 409 425 } 410 426 427 + if (pdata->irq >=0) 428 + device_init_wakeup(&pdev->dev, 1); 429 + 411 430 rtc = rtc_device_register(pdev->name, &pdev->dev, &mxc_rtc_ops, 412 431 THIS_MODULE); 413 432 if (IS_ERR(rtc)) { ··· 446 459 return 0; 447 460 } 448 461 462 + #ifdef CONFIG_PM 463 + static int mxc_rtc_suspend(struct device *dev) 464 + { 465 + struct rtc_plat_data *pdata = dev_get_drvdata(dev); 466 + 467 + if (device_may_wakeup(dev)) 468 + enable_irq_wake(pdata->irq); 469 + 470 + return 0; 471 + } 472 + 473 + static int mxc_rtc_resume(struct device *dev) 474 + { 475 + struct rtc_plat_data *pdata = dev_get_drvdata(dev); 476 + 477 + if (device_may_wakeup(dev)) 478 + disable_irq_wake(pdata->irq); 479 + 480 + return 0; 481 + } 482 + 483 + static struct dev_pm_ops mxc_rtc_pm_ops = { 484 + .suspend = mxc_rtc_suspend, 485 + .resume = mxc_rtc_resume, 486 + }; 487 + #endif 488 + 449 489 static struct platform_driver mxc_rtc_driver = { 450 490 .driver = { 451 491 .name = "mxc_rtc", 492 + #ifdef CONFIG_PM 493 + .pm = &mxc_rtc_pm_ops, 494 + #endif 452 495 .owner = THIS_MODULE, 453 496 }, 454 497 .remove = __exit_p(mxc_rtc_remove),
-1
drivers/rtc/rtc-pcf2123.c
··· 340 340 static struct spi_driver pcf2123_driver = { 341 341 .driver = { 342 342 .name = "rtc-pcf2123", 343 - .bus = &spi_bus_type, 344 343 .owner = THIS_MODULE, 345 344 }, 346 345 .probe = pcf2123_probe,
+1 -11
drivers/rtc/rtc-pcf50633.c
··· 294 294 .remove = __devexit_p(pcf50633_rtc_remove), 295 295 }; 296 296 297 - static int __init pcf50633_rtc_init(void) 298 - { 299 - return platform_driver_register(&pcf50633_rtc_driver); 300 - } 301 - module_init(pcf50633_rtc_init); 302 - 303 - static void __exit pcf50633_rtc_exit(void) 304 - { 305 - platform_driver_unregister(&pcf50633_rtc_driver); 306 - } 307 - module_exit(pcf50633_rtc_exit); 297 + module_platform_driver(pcf50633_rtc_driver); 308 298 309 299 MODULE_DESCRIPTION("PCF50633 RTC driver"); 310 300 MODULE_AUTHOR("Balaji Rao <balajirrao@openmoko.org>");
+1 -11
drivers/rtc/rtc-pm8xxx.c
··· 532 532 }, 533 533 }; 534 534 535 - static int __init pm8xxx_rtc_init(void) 536 - { 537 - return platform_driver_register(&pm8xxx_rtc_driver); 538 - } 539 - module_init(pm8xxx_rtc_init); 540 - 541 - static void __exit pm8xxx_rtc_exit(void) 542 - { 543 - platform_driver_unregister(&pm8xxx_rtc_driver); 544 - } 545 - module_exit(pm8xxx_rtc_exit); 535 + module_platform_driver(pm8xxx_rtc_driver); 546 536 547 537 MODULE_ALIAS("platform:rtc-pm8xxx"); 548 538 MODULE_DESCRIPTION("PMIC8xxx RTC driver");
-1
drivers/rtc/rtc-rs5c348.c
··· 229 229 static struct spi_driver rs5c348_driver = { 230 230 .driver = { 231 231 .name = "rtc-rs5c348", 232 - .bus = &spi_bus_type, 233 232 .owner = THIS_MODULE, 234 233 }, 235 234 .probe = rs5c348_probe,
+1 -15
drivers/rtc/rtc-s3c.c
··· 673 673 }, 674 674 }; 675 675 676 - static char __initdata banner[] = "S3C24XX RTC, (c) 2004,2006 Simtec Electronics\n"; 677 - 678 - static int __init s3c_rtc_init(void) 679 - { 680 - printk(banner); 681 - return platform_driver_register(&s3c_rtc_driver); 682 - } 683 - 684 - static void __exit s3c_rtc_exit(void) 685 - { 686 - platform_driver_unregister(&s3c_rtc_driver); 687 - } 688 - 689 - module_init(s3c_rtc_init); 690 - module_exit(s3c_rtc_exit); 676 + module_platform_driver(s3c_rtc_driver); 691 677 692 678 MODULE_DESCRIPTION("Samsung S3C RTC Driver"); 693 679 MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>");
+1 -12
drivers/rtc/rtc-sa1100.c
··· 435 435 }, 436 436 }; 437 437 438 - static int __init sa1100_rtc_init(void) 439 - { 440 - return platform_driver_register(&sa1100_rtc_driver); 441 - } 442 - 443 - static void __exit sa1100_rtc_exit(void) 444 - { 445 - platform_driver_unregister(&sa1100_rtc_driver); 446 - } 447 - 448 - module_init(sa1100_rtc_init); 449 - module_exit(sa1100_rtc_exit); 438 + module_platform_driver(sa1100_rtc_driver); 450 439 451 440 MODULE_AUTHOR("Richard Purdie <rpurdie@rpsys.net>"); 452 441 MODULE_DESCRIPTION("SA11x0/PXA2xx Realtime Clock Driver (RTC)");
+1 -11
drivers/rtc/rtc-spear.c
··· 516 516 }, 517 517 }; 518 518 519 - static int __init rtc_init(void) 520 - { 521 - return platform_driver_register(&spear_rtc_driver); 522 - } 523 - module_init(rtc_init); 524 - 525 - static void __exit rtc_exit(void) 526 - { 527 - platform_driver_unregister(&spear_rtc_driver); 528 - } 529 - module_exit(rtc_exit); 519 + module_platform_driver(spear_rtc_driver); 530 520 531 521 MODULE_ALIAS("platform:rtc-spear"); 532 522 MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
+1 -12
drivers/rtc/rtc-stk17ta8.c
··· 370 370 }, 371 371 }; 372 372 373 - static __init int stk17ta8_init(void) 374 - { 375 - return platform_driver_register(&stk17ta8_rtc_driver); 376 - } 377 - 378 - static __exit void stk17ta8_exit(void) 379 - { 380 - platform_driver_unregister(&stk17ta8_rtc_driver); 381 - } 382 - 383 - module_init(stk17ta8_init); 384 - module_exit(stk17ta8_exit); 373 + module_platform_driver(stk17ta8_rtc_driver); 385 374 386 375 MODULE_AUTHOR("Thomas Hommel <thomas.hommel@ge.com>"); 387 376 MODULE_DESCRIPTION("Simtek STK17TA8 RTC driver");
+1 -12
drivers/rtc/rtc-stmp3xxx.c
··· 276 276 }, 277 277 }; 278 278 279 - static int __init stmp3xxx_rtc_init(void) 280 - { 281 - return platform_driver_register(&stmp3xxx_rtcdrv); 282 - } 283 - 284 - static void __exit stmp3xxx_rtc_exit(void) 285 - { 286 - platform_driver_unregister(&stmp3xxx_rtcdrv); 287 - } 288 - 289 - module_init(stmp3xxx_rtc_init); 290 - module_exit(stmp3xxx_rtc_exit); 279 + module_platform_driver(stmp3xxx_rtcdrv); 291 280 292 281 MODULE_DESCRIPTION("STMP3xxx RTC Driver"); 293 282 MODULE_AUTHOR("dmitry pervushin <dpervushin@embeddedalley.com> and "
+8 -2
drivers/rtc/rtc-twl.c
··· 550 550 #define twl_rtc_resume NULL 551 551 #endif 552 552 553 + static const struct of_device_id twl_rtc_of_match[] = { 554 + {.compatible = "ti,twl4030-rtc", }, 555 + { }, 556 + }; 557 + MODULE_DEVICE_TABLE(of, twl_rtc_of_match); 553 558 MODULE_ALIAS("platform:twl_rtc"); 554 559 555 560 static struct platform_driver twl4030rtc_driver = { ··· 564 559 .suspend = twl_rtc_suspend, 565 560 .resume = twl_rtc_resume, 566 561 .driver = { 567 - .owner = THIS_MODULE, 568 - .name = "twl_rtc", 562 + .owner = THIS_MODULE, 563 + .name = "twl_rtc", 564 + .of_match_table = twl_rtc_of_match, 569 565 }, 570 566 }; 571 567
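The twl hunk wires the driver into device tree probing: the OF core walks the driver's `of_match_table` and compares each entry's `compatible` string against the device node, stopping at the empty sentinel entry. A simplified userspace sketch of that table walk (the struct and matching loop are stand-ins; the real `of_device_id` also matches on name and type):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct of_device_id. */
struct of_device_id { const char *compatible; };

static const struct of_device_id twl_rtc_of_match[] = {
	{ .compatible = "ti,twl4030-rtc", },
	{ NULL },			/* sentinel terminates the table */
};

/* Minimal model of the core's match loop: return the matching entry
 * or NULL, stopping at the sentinel. */
static const struct of_device_id *
of_match_compatible(const struct of_device_id *table, const char *compat)
{
	for (; table->compatible; table++)
		if (strcmp(table->compatible, compat) == 0)
			return table;
	return NULL;
}
```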
+1 -12
drivers/rtc/rtc-v3020.c
··· 393 393 }, 394 394 }; 395 395 396 - static __init int v3020_init(void) 397 - { 398 - return platform_driver_register(&rtc_device_driver); 399 - } 400 - 401 - static __exit void v3020_exit(void) 402 - { 403 - platform_driver_unregister(&rtc_device_driver); 404 - } 405 - 406 - module_init(v3020_init); 407 - module_exit(v3020_exit); 396 + module_platform_driver(rtc_device_driver); 408 397 409 398 MODULE_DESCRIPTION("V3020 RTC"); 410 399 MODULE_AUTHOR("Raphael Assenat");
+1 -12
drivers/rtc/rtc-vr41xx.c
··· 405 405 }, 406 406 }; 407 407 408 - static int __init vr41xx_rtc_init(void) 409 - { 410 - return platform_driver_register(&rtc_platform_driver); 411 - } 412 - 413 - static void __exit vr41xx_rtc_exit(void) 414 - { 415 - platform_driver_unregister(&rtc_platform_driver); 416 - } 417 - 418 - module_init(vr41xx_rtc_init); 419 - module_exit(vr41xx_rtc_exit); 408 + module_platform_driver(rtc_platform_driver);
+1 -11
drivers/rtc/rtc-vt8500.c
··· 311 311 }, 312 312 }; 313 313 314 - static int __init vt8500_rtc_init(void) 315 - { 316 - return platform_driver_register(&vt8500_rtc_driver); 317 - } 318 - module_init(vt8500_rtc_init); 319 - 320 - static void __exit vt8500_rtc_exit(void) 321 - { 322 - platform_driver_unregister(&vt8500_rtc_driver); 323 - } 324 - module_exit(vt8500_rtc_exit); 314 + module_platform_driver(vt8500_rtc_driver); 325 315 326 316 MODULE_AUTHOR("Alexey Charkov <alchark@gmail.com>"); 327 317 MODULE_DESCRIPTION("VIA VT8500 SoC Realtime Clock Driver (RTC)");
+2 -34
drivers/rtc/rtc-wm831x.c
··· 324 324 return IRQ_HANDLED; 325 325 } 326 326 327 - static irqreturn_t wm831x_per_irq(int irq, void *data) 328 - { 329 - struct wm831x_rtc *wm831x_rtc = data; 330 - 331 - rtc_update_irq(wm831x_rtc->rtc, 1, RTC_IRQF | RTC_UF); 332 - 333 - return IRQ_HANDLED; 334 - } 335 - 336 327 static const struct rtc_class_ops wm831x_rtc_ops = { 337 328 .read_time = wm831x_rtc_readtime, 338 329 .set_mmss = wm831x_rtc_set_mmss, ··· 396 405 { 397 406 struct wm831x *wm831x = dev_get_drvdata(pdev->dev.parent); 398 407 struct wm831x_rtc *wm831x_rtc; 399 - int per_irq = platform_get_irq_byname(pdev, "PER"); 400 408 int alm_irq = platform_get_irq_byname(pdev, "ALM"); 401 409 int ret = 0; 402 410 403 - wm831x_rtc = kzalloc(sizeof(*wm831x_rtc), GFP_KERNEL); 411 + wm831x_rtc = devm_kzalloc(&pdev->dev, sizeof(*wm831x_rtc), GFP_KERNEL); 404 412 if (wm831x_rtc == NULL) 405 413 return -ENOMEM; 406 414 ··· 423 433 goto err; 424 434 } 425 435 426 - ret = request_threaded_irq(per_irq, NULL, wm831x_per_irq, 427 - IRQF_TRIGGER_RISING, "RTC period", 428 - wm831x_rtc); 429 - if (ret != 0) { 430 - dev_err(&pdev->dev, "Failed to request periodic IRQ %d: %d\n", 431 - per_irq, ret); 432 - } 433 - 434 436 ret = request_threaded_irq(alm_irq, NULL, wm831x_alm_irq, 435 437 IRQF_TRIGGER_RISING, "RTC alarm", 436 438 wm831x_rtc); ··· 434 452 return 0; 435 453 436 454 err: 437 - kfree(wm831x_rtc); 438 455 return ret; 439 456 } 440 457 441 458 static int __devexit wm831x_rtc_remove(struct platform_device *pdev) 442 459 { 443 460 struct wm831x_rtc *wm831x_rtc = platform_get_drvdata(pdev); 444 - int per_irq = platform_get_irq_byname(pdev, "PER"); 445 461 int alm_irq = platform_get_irq_byname(pdev, "ALM"); 446 462 447 463 free_irq(alm_irq, wm831x_rtc); 448 - free_irq(per_irq, wm831x_rtc); 449 464 rtc_device_unregister(wm831x_rtc->rtc); 450 - kfree(wm831x_rtc); 451 465 452 466 return 0; 453 467 } ··· 468 490 }, 469 491 }; 470 492 471 - static int __init wm831x_rtc_init(void) 472 - { 473 - return 
platform_driver_register(&wm831x_rtc_driver); 474 - } 475 - module_init(wm831x_rtc_init); 476 - 477 - static void __exit wm831x_rtc_exit(void) 478 - { 479 - platform_driver_unregister(&wm831x_rtc_driver); 480 - } 481 - module_exit(wm831x_rtc_exit); 493 + module_platform_driver(wm831x_rtc_driver); 482 494 483 495 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 484 496 MODULE_DESCRIPTION("RTC driver for the WM831x series PMICs");
+1 -11
drivers/rtc/rtc-wm8350.c
··· 486 486 }, 487 487 }; 488 488 489 - static int __init wm8350_rtc_init(void) 490 - { 491 - return platform_driver_register(&wm8350_rtc_driver); 492 - } 493 - module_init(wm8350_rtc_init); 494 - 495 - static void __exit wm8350_rtc_exit(void) 496 - { 497 - platform_driver_unregister(&wm8350_rtc_driver); 498 - } 499 - module_exit(wm8350_rtc_exit); 489 + module_platform_driver(wm8350_rtc_driver); 500 490 501 491 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 502 492 MODULE_DESCRIPTION("RTC driver for the WM8350");
+1 -11
drivers/video/backlight/88pm860x_bl.c
··· 270 270 .remove = pm860x_backlight_remove, 271 271 }; 272 272 273 - static int __init pm860x_backlight_init(void) 274 - { 275 - return platform_driver_register(&pm860x_backlight_driver); 276 - } 277 - module_init(pm860x_backlight_init); 278 - 279 - static void __exit pm860x_backlight_exit(void) 280 - { 281 - platform_driver_unregister(&pm860x_backlight_driver); 282 - } 283 - module_exit(pm860x_backlight_exit); 273 + module_platform_driver(pm860x_backlight_driver); 284 274 285 275 MODULE_DESCRIPTION("Backlight Driver for Marvell Semiconductor 88PM8606"); 286 276 MODULE_AUTHOR("Haojian Zhuang <haojian.zhuang@marvell.com>");
-8
drivers/video/backlight/Kconfig
··· 280 280 If you have a backlight driven by the ISINK and DCDC of a 281 281 WM831x PMIC say y to enable the backlight driver for it. 282 282 283 - config BACKLIGHT_ADX 284 - tristate "Avionic Design Xanthos Backlight Driver" 285 - depends on ARCH_PXA_ADX 286 - default y 287 - help 288 - Say Y to enable the backlight driver on Avionic Design Xanthos-based 289 - boards. 290 - 291 283 config BACKLIGHT_ADP5520 292 284 tristate "Backlight Driver for ADP5520/ADP5501 using WLED" 293 285 depends on PMIC_ADP5520
-1
drivers/video/backlight/Makefile
··· 32 32 obj-$(CONFIG_BACKLIGHT_TOSA) += tosa_bl.o 33 33 obj-$(CONFIG_BACKLIGHT_SAHARA) += kb3886_bl.o 34 34 obj-$(CONFIG_BACKLIGHT_WM831X) += wm831x_bl.o 35 - obj-$(CONFIG_BACKLIGHT_ADX) += adx_bl.o 36 35 obj-$(CONFIG_BACKLIGHT_ADP5520) += adp5520_bl.o 37 36 obj-$(CONFIG_BACKLIGHT_ADP8860) += adp8860_bl.o 38 37 obj-$(CONFIG_BACKLIGHT_ADP8870) += adp8870_bl.o
+1 -11
drivers/video/backlight/adp5520_bl.c
··· 384 384 .resume = adp5520_bl_resume, 385 385 }; 386 386 387 - static int __init adp5520_bl_init(void) 388 - { 389 - return platform_driver_register(&adp5520_bl_driver); 390 - } 391 - module_init(adp5520_bl_init); 392 - 393 - static void __exit adp5520_bl_exit(void) 394 - { 395 - platform_driver_unregister(&adp5520_bl_driver); 396 - } 397 - module_exit(adp5520_bl_exit); 387 + module_platform_driver(adp5520_bl_driver); 398 388 399 389 MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>"); 400 390 MODULE_DESCRIPTION("ADP5520(01) Backlight Driver");
-182
drivers/video/backlight/adx_bl.c
··· 1 - /* 2 - * linux/drivers/video/backlight/adx.c 3 - * 4 - * Copyright (C) 2009 Avionic Design GmbH 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 - * 10 - * Written by Thierry Reding <thierry.reding@avionic-design.de> 11 - */ 12 - 13 - #include <linux/backlight.h> 14 - #include <linux/fb.h> 15 - #include <linux/gfp.h> 16 - #include <linux/io.h> 17 - #include <linux/module.h> 18 - #include <linux/platform_device.h> 19 - 20 - /* register definitions */ 21 - #define ADX_BACKLIGHT_CONTROL 0x00 22 - #define ADX_BACKLIGHT_CONTROL_ENABLE (1 << 0) 23 - #define ADX_BACKLIGHT_BRIGHTNESS 0x08 24 - #define ADX_BACKLIGHT_STATUS 0x10 25 - #define ADX_BACKLIGHT_ERROR 0x18 26 - 27 - struct adxbl { 28 - void __iomem *base; 29 - }; 30 - 31 - static int adx_backlight_update_status(struct backlight_device *bldev) 32 - { 33 - struct adxbl *bl = bl_get_data(bldev); 34 - u32 value; 35 - 36 - value = bldev->props.brightness; 37 - writel(value, bl->base + ADX_BACKLIGHT_BRIGHTNESS); 38 - 39 - value = readl(bl->base + ADX_BACKLIGHT_CONTROL); 40 - 41 - if (bldev->props.state & BL_CORE_FBBLANK) 42 - value &= ~ADX_BACKLIGHT_CONTROL_ENABLE; 43 - else 44 - value |= ADX_BACKLIGHT_CONTROL_ENABLE; 45 - 46 - writel(value, bl->base + ADX_BACKLIGHT_CONTROL); 47 - 48 - return 0; 49 - } 50 - 51 - static int adx_backlight_get_brightness(struct backlight_device *bldev) 52 - { 53 - struct adxbl *bl = bl_get_data(bldev); 54 - u32 brightness; 55 - 56 - brightness = readl(bl->base + ADX_BACKLIGHT_BRIGHTNESS); 57 - return brightness & 0xff; 58 - } 59 - 60 - static int adx_backlight_check_fb(struct backlight_device *bldev, struct fb_info *fb) 61 - { 62 - return 1; 63 - } 64 - 65 - static const struct backlight_ops adx_backlight_ops = { 66 - .options = 0, 67 - .update_status = adx_backlight_update_status, 68 - .get_brightness = 
adx_backlight_get_brightness, 69 - .check_fb = adx_backlight_check_fb, 70 - }; 71 - 72 - static int __devinit adx_backlight_probe(struct platform_device *pdev) 73 - { 74 - struct backlight_properties props; 75 - struct backlight_device *bldev; 76 - struct resource *res; 77 - struct adxbl *bl; 78 - int ret = 0; 79 - 80 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 81 - if (!res) { 82 - ret = -ENXIO; 83 - goto out; 84 - } 85 - 86 - res = devm_request_mem_region(&pdev->dev, res->start, 87 - resource_size(res), res->name); 88 - if (!res) { 89 - ret = -ENXIO; 90 - goto out; 91 - } 92 - 93 - bl = devm_kzalloc(&pdev->dev, sizeof(*bl), GFP_KERNEL); 94 - if (!bl) { 95 - ret = -ENOMEM; 96 - goto out; 97 - } 98 - 99 - bl->base = devm_ioremap_nocache(&pdev->dev, res->start, 100 - resource_size(res)); 101 - if (!bl->base) { 102 - ret = -ENXIO; 103 - goto out; 104 - } 105 - 106 - memset(&props, 0, sizeof(struct backlight_properties)); 107 - props.type = BACKLIGHT_RAW; 108 - props.max_brightness = 0xff; 109 - bldev = backlight_device_register(dev_name(&pdev->dev), &pdev->dev, 110 - bl, &adx_backlight_ops, &props); 111 - if (IS_ERR(bldev)) { 112 - ret = PTR_ERR(bldev); 113 - goto out; 114 - } 115 - 116 - bldev->props.brightness = 0xff; 117 - bldev->props.power = FB_BLANK_UNBLANK; 118 - 119 - platform_set_drvdata(pdev, bldev); 120 - 121 - out: 122 - return ret; 123 - } 124 - 125 - static int __devexit adx_backlight_remove(struct platform_device *pdev) 126 - { 127 - struct backlight_device *bldev; 128 - int ret = 0; 129 - 130 - bldev = platform_get_drvdata(pdev); 131 - bldev->props.power = FB_BLANK_UNBLANK; 132 - bldev->props.brightness = 0xff; 133 - backlight_update_status(bldev); 134 - backlight_device_unregister(bldev); 135 - platform_set_drvdata(pdev, NULL); 136 - 137 - return ret; 138 - } 139 - 140 - #ifdef CONFIG_PM 141 - static int adx_backlight_suspend(struct platform_device *pdev, 142 - pm_message_t state) 143 - { 144 - return 0; 145 - } 146 - 147 - static int 
adx_backlight_resume(struct platform_device *pdev) 148 - { 149 - return 0; 150 - } 151 - #else 152 - #define adx_backlight_suspend NULL 153 - #define adx_backlight_resume NULL 154 - #endif 155 - 156 - static struct platform_driver adx_backlight_driver = { 157 - .probe = adx_backlight_probe, 158 - .remove = __devexit_p(adx_backlight_remove), 159 - .suspend = adx_backlight_suspend, 160 - .resume = adx_backlight_resume, 161 - .driver = { 162 - .name = "adx-backlight", 163 - .owner = THIS_MODULE, 164 - }, 165 - }; 166 - 167 - static int __init adx_backlight_init(void) 168 - { 169 - return platform_driver_register(&adx_backlight_driver); 170 - } 171 - 172 - static void __exit adx_backlight_exit(void) 173 - { 174 - platform_driver_unregister(&adx_backlight_driver); 175 - } 176 - 177 - module_init(adx_backlight_init); 178 - module_exit(adx_backlight_exit); 179 - 180 - MODULE_AUTHOR("Thierry Reding <thierry.reding@avionic-design.de>"); 181 - MODULE_DESCRIPTION("Avionic Design Xanthos Backlight Driver"); 182 - MODULE_LICENSE("GPL v2");
+3 -3
drivers/video/backlight/backlight.c
··· 102 102 } 103 103 104 104 static ssize_t backlight_show_power(struct device *dev, 105 - struct device_attribute *attr,char *buf) 105 + struct device_attribute *attr, char *buf) 106 106 { 107 107 struct backlight_device *bd = to_backlight_device(dev); 108 108 ··· 116 116 struct backlight_device *bd = to_backlight_device(dev); 117 117 unsigned long power; 118 118 119 - rc = strict_strtoul(buf, 0, &power); 119 + rc = kstrtoul(buf, 0, &power); 120 120 if (rc) 121 121 return rc; 122 122 ··· 150 150 struct backlight_device *bd = to_backlight_device(dev); 151 151 unsigned long brightness; 152 152 153 - rc = strict_strtoul(buf, 0, &brightness); 153 + rc = kstrtoul(buf, 0, &brightness); 154 154 if (rc) 155 155 return rc; 156 156
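The backlight.c hunk swaps `strict_strtoul()` for its newer name `kstrtoul()`; the semantics are the same strict parse, where the whole sysfs buffer, bar an optional trailing newline, must be a valid number. A userspace approximation of that behavior on top of `strtoul()` (the function name is hypothetical; the real `kstrtoul` is stricter still, e.g. about leading whitespace):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Userspace approximation of kernel kstrtoul(): reject empty input,
 * trailing junk (one trailing '\n' is allowed, since sysfs writes
 * usually end with one), and out-of-range values. */
static int kstrtoul_approx(const char *s, int base, unsigned long *res)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(s, &end, base);
	if (end == s)
		return -EINVAL;		/* no digits at all */
	if (*end == '\n')
		end++;			/* tolerate the sysfs newline */
	if (*end != '\0')
		return -EINVAL;		/* trailing garbage */
	if (errno == ERANGE)
		return -ERANGE;
	*res = val;
	return 0;
}
```

The old `simple_strtoul()` callers being removed elsewhere in this series had to hand-roll exactly this endpointer bookkeeping, and often got it subtly wrong.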
+1 -11
drivers/video/backlight/da903x_bl.c
··· 199 199 .remove = da903x_backlight_remove, 200 200 }; 201 201 202 - static int __init da903x_backlight_init(void) 203 - { 204 - return platform_driver_register(&da903x_backlight_driver); 205 - } 206 - module_init(da903x_backlight_init); 207 - 208 - static void __exit da903x_backlight_exit(void) 209 - { 210 - platform_driver_unregister(&da903x_backlight_driver); 211 - } 212 - module_exit(da903x_backlight_exit); 202 + module_platform_driver(da903x_backlight_driver); 213 203 214 204 MODULE_DESCRIPTION("Backlight Driver for Dialog Semiconductor DA9030/DA9034"); 215 205 MODULE_AUTHOR("Eric Miao <eric.miao@marvell.com>"
+1 -12
drivers/video/backlight/ep93xx_bl.c
··· 13 13 14 14 #include <linux/module.h> 15 15 #include <linux/platform_device.h> 16 - #include <linux/module.h> 17 16 #include <linux/io.h> 18 17 #include <linux/fb.h> 19 18 #include <linux/backlight.h> ··· 143 144 .resume = ep93xxbl_resume, 144 145 }; 145 146 146 - static int __init ep93xxbl_init(void) 147 - { 148 - return platform_driver_register(&ep93xxbl_driver); 149 - } 150 - module_init(ep93xxbl_init); 151 - 152 - static void __exit ep93xxbl_exit(void) 153 - { 154 - platform_driver_unregister(&ep93xxbl_driver); 155 - } 156 - module_exit(ep93xxbl_exit); 147 + module_platform_driver(ep93xxbl_driver); 157 148 158 149 MODULE_DESCRIPTION("EP93xx Backlight Driver"); 159 150 MODULE_AUTHOR("H Hartley Sweeten <hsweeten@visionengravers.com>");
+1 -12
drivers/video/backlight/generic_bl.c
··· 132 132 }, 133 133 }; 134 134 135 - static int __init genericbl_init(void) 136 - { 137 - return platform_driver_register(&genericbl_driver); 138 - } 139 - 140 - static void __exit genericbl_exit(void) 141 - { 142 - platform_driver_unregister(&genericbl_driver); 143 - } 144 - 145 - module_init(genericbl_init); 146 - module_exit(genericbl_exit); 135 + module_platform_driver(genericbl_driver); 147 136 148 137 MODULE_AUTHOR("Richard Purdie <rpurdie@rpsys.net>"); 149 138 MODULE_DESCRIPTION("Generic Backlight Driver");
+1 -12
drivers/video/backlight/jornada720_bl.c
··· 147 147 }, 148 148 }; 149 149 150 - static int __init jornada_bl_init(void) 151 - { 152 - return platform_driver_register(&jornada_bl_driver); 153 - } 154 - 155 - static void __exit jornada_bl_exit(void) 156 - { 157 - platform_driver_unregister(&jornada_bl_driver); 158 - } 150 + module_platform_driver(jornada_bl_driver); 159 151 160 152 MODULE_AUTHOR("Kristoffer Ericson <kristoffer.ericson>"); 161 153 MODULE_DESCRIPTION("HP Jornada 710/720/728 Backlight driver"); 162 154 MODULE_LICENSE("GPL"); 163 - 164 - module_init(jornada_bl_init); 165 - module_exit(jornada_bl_exit);
+1 -12
drivers/video/backlight/jornada720_lcd.c
··· 135 135 }, 136 136 }; 137 137 138 - static int __init jornada_lcd_init(void) 139 - { 140 - return platform_driver_register(&jornada_lcd_driver); 141 - } 142 - 143 - static void __exit jornada_lcd_exit(void) 144 - { 145 - platform_driver_unregister(&jornada_lcd_driver); 146 - } 138 + module_platform_driver(jornada_lcd_driver); 147 139 148 140 MODULE_AUTHOR("Kristoffer Ericson <kristoffer.ericson@gmail.com>"); 149 141 MODULE_DESCRIPTION("HP Jornada 710/720/728 LCD driver"); 150 142 MODULE_LICENSE("GPL"); 151 - 152 - module_init(jornada_lcd_init); 153 - module_exit(jornada_lcd_exit);
+10 -16
drivers/video/backlight/lcd.c
··· 97 97 struct device_attribute *attr, const char *buf, size_t count) 98 98 { 99 99 int rc = -ENXIO; 100 - char *endp; 101 100 struct lcd_device *ld = to_lcd_device(dev); 102 - int power = simple_strtoul(buf, &endp, 0); 103 - size_t size = endp - buf; 101 + unsigned long power; 104 102 105 - if (isspace(*endp)) 106 - size++; 107 - if (size != count) 108 - return -EINVAL; 103 + rc = kstrtoul(buf, 0, &power); 104 + if (rc) 105 + return rc; 109 106 110 107 mutex_lock(&ld->ops_lock); 111 108 if (ld->ops && ld->ops->set_power) { 112 - pr_debug("lcd: set power to %d\n", power); 109 + pr_debug("lcd: set power to %lu\n", power); 113 110 ld->ops->set_power(ld, power); 114 111 rc = count; 115 112 } ··· 133 136 struct device_attribute *attr, const char *buf, size_t count) 134 137 { 135 138 int rc = -ENXIO; 136 - char *endp; 137 139 struct lcd_device *ld = to_lcd_device(dev); 138 - int contrast = simple_strtoul(buf, &endp, 0); 139 - size_t size = endp - buf; 140 + unsigned long contrast; 140 141 141 - if (isspace(*endp)) 142 - size++; 143 - if (size != count) 144 - return -EINVAL; 142 + rc = kstrtoul(buf, 0, &contrast); 143 + if (rc) 144 + return rc; 145 145 146 146 mutex_lock(&ld->ops_lock); 147 147 if (ld->ops && ld->ops->set_contrast) { 148 - pr_debug("lcd: set contrast to %d\n", contrast); 148 + pr_debug("lcd: set contrast to %lu\n", contrast); 149 149 ld->ops->set_contrast(ld, contrast); 150 150 rc = count; 151 151 }
+59 -12
drivers/video/backlight/ld9040.c
··· 31 31 #include <linux/lcd.h> 32 32 #include <linux/backlight.h> 33 33 #include <linux/module.h> 34 + #include <linux/regulator/consumer.h> 34 35 35 36 #include "ld9040_gamma.h" 36 37 ··· 54 53 struct lcd_device *ld; 55 54 struct backlight_device *bd; 56 55 struct lcd_platform_data *lcd_pd; 56 + 57 + struct mutex lock; 58 + bool enabled; 57 59 }; 60 + 61 + static struct regulator_bulk_data supplies[] = { 62 + { .supply = "vdd3", }, 63 + { .supply = "vci", }, 64 + }; 65 + 66 + static void ld9040_regulator_enable(struct ld9040 *lcd) 67 + { 68 + int ret = 0; 69 + struct lcd_platform_data *pd = NULL; 70 + 71 + pd = lcd->lcd_pd; 72 + mutex_lock(&lcd->lock); 73 + if (!lcd->enabled) { 74 + ret = regulator_bulk_enable(ARRAY_SIZE(supplies), supplies); 75 + if (ret) 76 + goto out; 77 + 78 + lcd->enabled = true; 79 + } 80 + mdelay(pd->power_on_delay); 81 + out: 82 + mutex_unlock(&lcd->lock); 83 + } 84 + 85 + static void ld9040_regulator_disable(struct ld9040 *lcd) 86 + { 87 + int ret = 0; 88 + 89 + mutex_lock(&lcd->lock); 90 + if (lcd->enabled) { 91 + ret = regulator_bulk_disable(ARRAY_SIZE(supplies), supplies); 92 + if (ret) 93 + goto out; 94 + 95 + lcd->enabled = false; 96 + } 97 + out: 98 + mutex_unlock(&lcd->lock); 99 + } 58 100 59 101 static const unsigned short seq_swreset[] = { 60 102 0x01, COMMAND_ONLY, ··· 576 532 return -EFAULT; 577 533 } 578 534 579 - if (!pd->power_on) { 580 - dev_err(lcd->dev, "power_on is NULL.\n"); 581 - return -EFAULT; 582 - } else { 583 - pd->power_on(lcd->ld, 1); 584 - mdelay(pd->power_on_delay); 585 - } 535 + /* lcd power on */ 536 + ld9040_regulator_enable(lcd); 586 537 587 538 if (!pd->reset) { 588 539 dev_err(lcd->dev, "reset is NULL.\n"); ··· 621 582 622 583 mdelay(pd->power_off_delay); 623 584 624 - if (!pd->power_on) { 625 - dev_err(lcd->dev, "power_on is NULL.\n"); 626 - return -EFAULT; 627 - } else 628 - pd->power_on(lcd->ld, 0); 585 + /* lcd power off */ 586 + ld9040_regulator_disable(lcd); 629 587 630 588 return 0; 631 589 } 
··· 729 693 goto out_free_lcd; 730 694 } 731 695 696 + mutex_init(&lcd->lock); 697 + 698 + ret = regulator_bulk_get(lcd->dev, ARRAY_SIZE(supplies), supplies); 699 + if (ret) { 700 + dev_err(lcd->dev, "Failed to get regulators: %d\n", ret); 701 + goto out_free_lcd; 702 + } 703 + 732 704 ld = lcd_device_register("ld9040", &spi->dev, lcd, &ld9040_lcd_ops); 733 705 if (IS_ERR(ld)) { 734 706 ret = PTR_ERR(ld); ··· 783 739 out_unregister_lcd: 784 740 lcd_device_unregister(lcd->ld); 785 741 out_free_lcd: 742 + regulator_bulk_free(ARRAY_SIZE(supplies), supplies); 743 + 786 744 kfree(lcd); 787 745 return ret; 788 746 } ··· 796 750 ld9040_power(lcd, FB_BLANK_POWERDOWN); 797 751 backlight_device_unregister(lcd->bd); 798 752 lcd_device_unregister(lcd->ld); 753 + regulator_bulk_free(ARRAY_SIZE(supplies), supplies); 799 754 kfree(lcd); 800 755 801 756 return 0;
+1 -11
drivers/video/backlight/max8925_bl.c
··· 188 188 .remove = __devexit_p(max8925_backlight_remove), 189 189 }; 190 190 191 - static int __init max8925_backlight_init(void) 192 - { 193 - return platform_driver_register(&max8925_backlight_driver); 194 - } 195 - module_init(max8925_backlight_init); 196 - 197 - static void __exit max8925_backlight_exit(void) 198 - { 199 - platform_driver_unregister(&max8925_backlight_driver); 200 - }; 201 - module_exit(max8925_backlight_exit); 191 + module_platform_driver(max8925_backlight_driver); 202 192 203 193 MODULE_DESCRIPTION("Backlight Driver for Maxim MAX8925"); 204 194 MODULE_AUTHOR("Haojian Zhuang <haojian.zhuang@marvell.com>");
+1 -12
drivers/video/backlight/omap1_bl.c
··· 195 195 }, 196 196 }; 197 197 198 - static int __init omapbl_init(void) 199 - { 200 - return platform_driver_register(&omapbl_driver); 201 - } 202 - 203 - static void __exit omapbl_exit(void) 204 - { 205 - platform_driver_unregister(&omapbl_driver); 206 - } 207 - 208 - module_init(omapbl_init); 209 - module_exit(omapbl_exit); 198 + module_platform_driver(omapbl_driver); 210 199 211 200 MODULE_AUTHOR("Andrzej Zaborowski <balrog@zabor.org>"); 212 201 MODULE_DESCRIPTION("OMAP LCD Backlight driver");
+1 -11
drivers/video/backlight/pcf50633-backlight.c
··· 173 173 }, 174 174 }; 175 175 176 - static int __init pcf50633_bl_init(void) 177 - { 178 - return platform_driver_register(&pcf50633_bl_driver); 179 - } 180 - module_init(pcf50633_bl_init); 181 - 182 - static void __exit pcf50633_bl_exit(void) 183 - { 184 - platform_driver_unregister(&pcf50633_bl_driver); 185 - } 186 - module_exit(pcf50633_bl_exit); 176 + module_platform_driver(pcf50633_bl_driver); 187 177 188 178 MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>"); 189 179 MODULE_DESCRIPTION("PCF50633 backlight driver");
+5 -17
drivers/video/backlight/platform_lcd.c
··· 85 85 return -EINVAL; 86 86 } 87 87 88 - plcd = kzalloc(sizeof(struct platform_lcd), GFP_KERNEL); 88 + plcd = devm_kzalloc(&pdev->dev, sizeof(struct platform_lcd), 89 + GFP_KERNEL); 89 90 if (!plcd) { 90 91 dev_err(dev, "no memory for state\n"); 91 92 return -ENOMEM; ··· 99 98 if (IS_ERR(plcd->lcd)) { 100 99 dev_err(dev, "cannot register lcd device\n"); 101 100 err = PTR_ERR(plcd->lcd); 102 - goto err_mem; 101 + goto err; 103 102 } 104 103 105 104 platform_set_drvdata(pdev, plcd); ··· 107 106 108 107 return 0; 109 108 110 - err_mem: 111 - kfree(plcd); 109 + err: 112 110 return err; 113 111 } 114 112 ··· 116 116 struct platform_lcd *plcd = platform_get_drvdata(pdev); 117 117 118 118 lcd_device_unregister(plcd->lcd); 119 - kfree(plcd); 120 119 121 120 return 0; 122 121 } ··· 156 157 .resume = platform_lcd_resume, 157 158 }; 158 159 159 - static int __init platform_lcd_init(void) 160 - { 161 - return platform_driver_register(&platform_lcd_driver); 162 - } 163 - 164 - static void __exit platform_lcd_cleanup(void) 165 - { 166 - platform_driver_unregister(&platform_lcd_driver); 167 - } 168 - 169 - module_init(platform_lcd_init); 170 - module_exit(platform_lcd_cleanup); 160 + module_platform_driver(platform_lcd_driver); 171 161 172 162 MODULE_AUTHOR("Ben Dooks <ben-linux@fluff.org>"); 173 163 MODULE_LICENSE("GPL v2");
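The platform_lcd hunk is a `devm_kzalloc()` conversion: device-managed allocations are tracked on a per-device list and released automatically when probe fails or the device unbinds, which is why the explicit `kfree()` calls disappear from both the error path and `remove()`. A toy model of the mechanism, with hypothetical `_sim` names standing in for the kernel's devres machinery:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy devres model: each "device" keeps a list of its allocations
 * and frees them all in one sweep, like the kernel does on unbind. */
struct devres { void *ptr; struct devres *next; };
struct device { struct devres *res; };

static void *devm_kzalloc_sim(struct device *dev, size_t size)
{
	struct devres *dr = malloc(sizeof(*dr));

	if (!dr)
		return NULL;
	dr->ptr = calloc(1, size);	/* zeroed, like kzalloc */
	dr->next = dev->res;
	dev->res = dr;
	return dr->ptr;
}

/* Returns how many allocations were released. */
static int devm_release_all(struct device *dev)
{
	int freed = 0;

	while (dev->res) {
		struct devres *dr = dev->res;

		dev->res = dr->next;
		free(dr->ptr);
		free(dr);
		freed++;
	}
	return freed;
}
```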
+12 -21
drivers/video/backlight/pwm_bl.c
··· 169 169 } 170 170 171 171 #ifdef CONFIG_PM 172 - static int pwm_backlight_suspend(struct platform_device *pdev, 173 - pm_message_t state) 172 + static int pwm_backlight_suspend(struct device *dev) 174 173 { 175 - struct backlight_device *bl = platform_get_drvdata(pdev); 174 + struct backlight_device *bl = dev_get_drvdata(dev); 176 175 struct pwm_bl_data *pb = dev_get_drvdata(&bl->dev); 177 176 178 177 if (pb->notify) ··· 183 184 return 0; 184 185 } 185 186 186 - static int pwm_backlight_resume(struct platform_device *pdev) 187 + static int pwm_backlight_resume(struct device *dev) 187 188 { 188 - struct backlight_device *bl = platform_get_drvdata(pdev); 189 + struct backlight_device *bl = dev_get_drvdata(dev); 189 190 190 191 backlight_update_status(bl); 191 192 return 0; 192 193 } 193 - #else 194 - #define pwm_backlight_suspend NULL 195 - #define pwm_backlight_resume NULL 194 + 195 + static SIMPLE_DEV_PM_OPS(pwm_backlight_pm_ops, pwm_backlight_suspend, 196 + pwm_backlight_resume); 197 + 196 198 #endif 197 199 198 200 static struct platform_driver pwm_backlight_driver = { 199 201 .driver = { 200 202 .name = "pwm-backlight", 201 203 .owner = THIS_MODULE, 204 + #ifdef CONFIG_PM 205 + .pm = &pwm_backlight_pm_ops, 206 + #endif 202 207 }, 203 208 .probe = pwm_backlight_probe, 204 209 .remove = pwm_backlight_remove, 205 - .suspend = pwm_backlight_suspend, 206 - .resume = pwm_backlight_resume, 207 210 }; 208 211 209 - static int __init pwm_backlight_init(void) 210 - { 211 - return platform_driver_register(&pwm_backlight_driver); 212 - } 213 - module_init(pwm_backlight_init); 214 - 215 - static void __exit pwm_backlight_exit(void) 216 - { 217 - platform_driver_unregister(&pwm_backlight_driver); 218 - } 219 - module_exit(pwm_backlight_exit); 212 + module_platform_driver(pwm_backlight_driver); 220 213 221 214 MODULE_DESCRIPTION("PWM based Backlight Driver"); 222 215 MODULE_LICENSE("GPL");
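The pwm_bl hunk moves from the legacy `platform_driver.suspend/.resume` hooks to `dev_pm_ops`, and `SIMPLE_DEV_PM_OPS()` fills every system-sleep slot from a single suspend/resume pair. A trimmed-down sketch of that macro shape (the `_sim` struct is a stand-in; the real `struct dev_pm_ops` has many more callbacks, including the runtime-PM set):

```c
#include <assert.h>
#include <stddef.h>

/* Trimmed-down model of struct dev_pm_ops. */
struct dev_pm_ops_sim {
	int (*suspend)(void *dev);
	int (*resume)(void *dev);
	int (*freeze)(void *dev);
	int (*restore)(void *dev);
};

/* Mirrors the shape of SIMPLE_DEV_PM_OPS(): one suspend/resume pair
 * reused for every system-sleep transition. */
#define SIMPLE_DEV_PM_OPS_SIM(name, suspend_fn, resume_fn) \
	static const struct dev_pm_ops_sim name = { \
		.suspend = suspend_fn, \
		.resume = resume_fn, \
		.freeze = suspend_fn, \
		.restore = resume_fn, \
	}

static int demo_suspend(void *dev) { (void)dev; return 0; }
static int demo_resume(void *dev) { (void)dev; return 0; }

SIMPLE_DEV_PM_OPS_SIM(demo_pm_ops, demo_suspend, demo_resume);
```

Note the hunk also gains an `#ifdef CONFIG_PM` around `.pm = &pwm_backlight_pm_ops`, since the ops structure only exists when power management is built in.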
+1 -11
drivers/video/backlight/wm831x_bl.c
··· 236 236 .remove = wm831x_backlight_remove, 237 237 }; 238 238 239 - static int __init wm831x_backlight_init(void) 240 - { 241 - return platform_driver_register(&wm831x_backlight_driver); 242 - } 243 - module_init(wm831x_backlight_init); 244 - 245 - static void __exit wm831x_backlight_exit(void) 246 - { 247 - platform_driver_unregister(&wm831x_backlight_driver); 248 - } 249 - module_exit(wm831x_backlight_exit); 239 + module_platform_driver(wm831x_backlight_driver); 250 240 251 241 MODULE_DESCRIPTION("Backlight Driver for WM831x PMICs"); 252 242 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com");
+3
fs/Kconfig.binfmt
··· 27 27 bool 28 28 depends on COMPAT && BINFMT_ELF 29 29 30 + config ARCH_BINFMT_ELF_RANDOMIZE_PIE 31 + bool 32 + 30 33 config BINFMT_ELF_FDPIC 31 34 bool "Kernel support for FDPIC ELF binaries" 32 35 default y
+1 -1
fs/binfmt_elf.c
··· 794 794 * default mmap base, as well as whatever program they 795 795 * might try to exec. This is because the brk will 796 796 * follow the loader, and is not movable. */ 797 - #if defined(CONFIG_X86) || defined(CONFIG_ARM) 797 + #ifdef CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE 798 798 /* Memory randomization might have been switched off 799 799 * in runtime via sysctl. 800 800 * If that is the case, retain the original non-zero
+1 -1
fs/btrfs/file.c
··· 1081 1081 again: 1082 1082 for (i = 0; i < num_pages; i++) { 1083 1083 pages[i] = find_or_create_page(inode->i_mapping, index + i, 1084 - mask); 1084 + mask | __GFP_WRITE); 1085 1085 if (!pages[i]) { 1086 1086 faili = i - 1; 1087 1087 err = -ENOMEM;
+4
fs/exec.c
··· 59 59 #include <asm/uaccess.h> 60 60 #include <asm/mmu_context.h> 61 61 #include <asm/tlb.h> 62 + 63 + #include <trace/events/task.h> 62 64 #include "internal.h" 63 65 64 66 int core_uses_pid; ··· 1055 1053 void set_task_comm(struct task_struct *tsk, char *buf) 1056 1054 { 1057 1055 task_lock(tsk); 1056 + 1057 + trace_task_rename(tsk, buf); 1058 1058 1059 1059 /* 1060 1060 * Threads may access current->comm without holding
+2
fs/inode.c
··· 776 776 else 777 777 __count_vm_events(PGINODESTEAL, reap); 778 778 spin_unlock(&sb->s_inode_lru_lock); 779 + if (current->reclaim_state) 780 + current->reclaim_state->reclaimed_slab += reap; 779 781 780 782 dispose_list(&freeable); 781 783 }
+436 -11
fs/proc/base.c
··· 83 83 #include <linux/pid_namespace.h> 84 84 #include <linux/fs_struct.h> 85 85 #include <linux/slab.h> 86 + #include <linux/flex_array.h> 86 87 #ifdef CONFIG_HARDWALL 87 88 #include <asm/hardwall.h> 88 89 #endif 90 + #include <trace/events/oom.h> 89 91 #include "internal.h" 90 92 91 93 /* NOTE: ··· 135 133 NULL, &proc_single_file_operations, \ 136 134 { .proc_show = show } ) 137 135 136 + static int proc_fd_permission(struct inode *inode, int mask); 137 + 138 138 /* 139 139 * Count the number of hardlinks for the pid_entry table, excluding the . 140 140 * and .. links. ··· 169 165 return result; 170 166 } 171 167 172 - static int proc_cwd_link(struct inode *inode, struct path *path) 168 + static int proc_cwd_link(struct dentry *dentry, struct path *path) 173 169 { 174 - struct task_struct *task = get_proc_task(inode); 170 + struct task_struct *task = get_proc_task(dentry->d_inode); 175 171 int result = -ENOENT; 176 172 177 173 if (task) { ··· 186 182 return result; 187 183 } 188 184 189 - static int proc_root_link(struct inode *inode, struct path *path) 185 + static int proc_root_link(struct dentry *dentry, struct path *path) 190 186 { 191 - struct task_struct *task = get_proc_task(inode); 187 + struct task_struct *task = get_proc_task(dentry->d_inode); 192 188 int result = -ENOENT; 193 189 194 190 if (task) { ··· 631 627 return 0; 632 628 } 633 629 630 + /* 631 + * May current process learn task's sched/cmdline info (for hide_pid_min=1) 632 + * or euid/egid (for hide_pid_min=2)? 
633 + */ 634 + static bool has_pid_permissions(struct pid_namespace *pid, 635 + struct task_struct *task, 636 + int hide_pid_min) 637 + { 638 + if (pid->hide_pid < hide_pid_min) 639 + return true; 640 + if (in_group_p(pid->pid_gid)) 641 + return true; 642 + return ptrace_may_access(task, PTRACE_MODE_READ); 643 + } 644 + 645 + 646 + static int proc_pid_permission(struct inode *inode, int mask) 647 + { 648 + struct pid_namespace *pid = inode->i_sb->s_fs_info; 649 + struct task_struct *task; 650 + bool has_perms; 651 + 652 + task = get_proc_task(inode); 653 + has_perms = has_pid_permissions(pid, task, 1); 654 + put_task_struct(task); 655 + 656 + if (!has_perms) { 657 + if (pid->hide_pid == 2) { 658 + /* 659 + * Let's make getdents(), stat(), and open() 660 + * consistent with each other. If a process 661 + * may not stat() a file, it shouldn't be seen 662 + * in procfs at all. 663 + */ 664 + return -ENOENT; 665 + } 666 + 667 + return -EPERM; 668 + } 669 + return generic_permission(inode, mask); 670 + } 671 + 672 + 673 + 634 674 static const struct inode_operations proc_def_inode_operations = { 635 675 .setattr = proc_setattr, 636 676 }; ··· 1058 1010 else 1059 1011 task->signal->oom_score_adj = (oom_adjust * OOM_SCORE_ADJ_MAX) / 1060 1012 -OOM_DISABLE; 1013 + trace_oom_score_adj_update(task); 1061 1014 err_sighand: 1062 1015 unlock_task_sighand(task, &flags); 1063 1016 err_task_lock: ··· 1146 1097 task->signal->oom_score_adj = oom_score_adj; 1147 1098 if (has_capability_noaudit(current, CAP_SYS_RESOURCE)) 1148 1099 task->signal->oom_score_adj_min = oom_score_adj; 1100 + trace_oom_score_adj_update(task); 1149 1101 /* 1150 1102 * Scale /proc/pid/oom_adj appropriately ensuring that OOM_DISABLE is 1151 1103 * always attainable. 
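The `has_pid_permissions()` logic added above decides per-task visibility for the new `hidepid=`/`gid=` mount options: access is granted if the namespace's `hide_pid` is below the required threshold, if the caller is in the configured `gid=` group, or if the caller could ptrace the task anyway. A standalone sketch of that decision, with the two per-caller predicates passed in as booleans (the `_sim` names are hypothetical stand-ins for the kernel state):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the namespace-wide hidepid= setting. */
struct pidns_sim { int hide_pid; };

static bool has_pid_permissions_sim(const struct pidns_sim *ns,
				    int hide_pid_min,
				    bool in_gid_group,
				    bool ptrace_may_access)
{
	if (ns->hide_pid < hide_pid_min)
		return true;		/* hidepid= too low to hide this */
	if (in_gid_group)
		return true;		/* caller is in the gid= group */
	return ptrace_may_access;	/* e.g. root, or the task itself */
}
```

With `hidepid=2` the hunk returns -ENOENT rather than -EPERM on failure, so `stat()`, `open()`, and `getdents()` all agree that the entry simply does not exist.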
··· 1503 1453 .release = single_release, 1504 1454 }; 1505 1455 1506 - static int proc_exe_link(struct inode *inode, struct path *exe_path) 1456 + static int proc_exe_link(struct dentry *dentry, struct path *exe_path) 1507 1457 { 1508 1458 struct task_struct *task; 1509 1459 struct mm_struct *mm; 1510 1460 struct file *exe_file; 1511 1461 1512 - task = get_proc_task(inode); 1462 + task = get_proc_task(dentry->d_inode); 1513 1463 if (!task) 1514 1464 return -ENOENT; 1515 1465 mm = get_task_mm(task); ··· 1539 1489 if (!proc_fd_access_allowed(inode)) 1540 1490 goto out; 1541 1491 1542 - error = PROC_I(inode)->op.proc_get_link(inode, &nd->path); 1492 + error = PROC_I(inode)->op.proc_get_link(dentry, &nd->path); 1543 1493 out: 1544 1494 return ERR_PTR(error); 1545 1495 } ··· 1578 1528 if (!proc_fd_access_allowed(inode)) 1579 1529 goto out; 1580 1530 1581 - error = PROC_I(inode)->op.proc_get_link(inode, &path); 1531 + error = PROC_I(inode)->op.proc_get_link(dentry, &path); 1582 1532 if (error) 1583 1533 goto out; 1584 1534 ··· 1659 1609 struct inode *inode = dentry->d_inode; 1660 1610 struct task_struct *task; 1661 1611 const struct cred *cred; 1612 + struct pid_namespace *pid = dentry->d_sb->s_fs_info; 1662 1613 1663 1614 generic_fillattr(inode, stat); 1664 1615 ··· 1668 1617 stat->gid = 0; 1669 1618 task = pid_task(proc_pid(inode), PIDTYPE_PID); 1670 1619 if (task) { 1620 + if (!has_pid_permissions(pid, task, 2)) { 1621 + rcu_read_unlock(); 1622 + /* 1623 + * This doesn't prevent learning whether PID exists, 1624 + * it only makes getattr() consistent with readdir(). 
1625 + */ 1626 + return -ENOENT; 1627 + } 1671 1628 if ((inode->i_mode == (S_IFDIR|S_IRUGO|S_IXUGO)) || 1672 1629 task_dumpable(task)) { 1673 1630 cred = __task_cred(task); ··· 1879 1820 return -ENOENT; 1880 1821 } 1881 1822 1882 - static int proc_fd_link(struct inode *inode, struct path *path) 1823 + static int proc_fd_link(struct dentry *dentry, struct path *path) 1883 1824 { 1884 - return proc_fd_info(inode, path, NULL); 1825 + return proc_fd_info(dentry->d_inode, path, NULL); 1885 1826 } 1886 1827 1887 1828 static int tid_fd_revalidate(struct dentry *dentry, struct nameidata *nd) ··· 2101 2042 .readdir = proc_readfd, 2102 2043 .llseek = default_llseek, 2103 2044 }; 2045 + 2046 + #ifdef CONFIG_CHECKPOINT_RESTORE 2047 + 2048 + /* 2049 + * dname_to_vma_addr - maps a dentry name into two unsigned longs 2050 + * which represent vma start and end addresses. 2051 + */ 2052 + static int dname_to_vma_addr(struct dentry *dentry, 2053 + unsigned long *start, unsigned long *end) 2054 + { 2055 + if (sscanf(dentry->d_name.name, "%lx-%lx", start, end) != 2) 2056 + return -EINVAL; 2057 + 2058 + return 0; 2059 + } 2060 + 2061 + static int map_files_d_revalidate(struct dentry *dentry, struct nameidata *nd) 2062 + { 2063 + unsigned long vm_start, vm_end; 2064 + bool exact_vma_exists = false; 2065 + struct mm_struct *mm = NULL; 2066 + struct task_struct *task; 2067 + const struct cred *cred; 2068 + struct inode *inode; 2069 + int status = 0; 2070 + 2071 + if (nd && nd->flags & LOOKUP_RCU) 2072 + return -ECHILD; 2073 + 2074 + if (!capable(CAP_SYS_ADMIN)) { 2075 + status = -EACCES; 2076 + goto out_notask; 2077 + } 2078 + 2079 + inode = dentry->d_inode; 2080 + task = get_proc_task(inode); 2081 + if (!task) 2082 + goto out_notask; 2083 + 2084 + if (!ptrace_may_access(task, PTRACE_MODE_READ)) 2085 + goto out; 2086 + 2087 + mm = get_task_mm(task); 2088 + if (!mm) 2089 + goto out; 2090 + 2091 + if (!dname_to_vma_addr(dentry, &vm_start, &vm_end)) { 2092 + down_read(&mm->mmap_sem); 2093 + 
exact_vma_exists = !!find_exact_vma(mm, vm_start, vm_end); 2094 + up_read(&mm->mmap_sem); 2095 + } 2096 + 2097 + mmput(mm); 2098 + 2099 + if (exact_vma_exists) { 2100 + if (task_dumpable(task)) { 2101 + rcu_read_lock(); 2102 + cred = __task_cred(task); 2103 + inode->i_uid = cred->euid; 2104 + inode->i_gid = cred->egid; 2105 + rcu_read_unlock(); 2106 + } else { 2107 + inode->i_uid = 0; 2108 + inode->i_gid = 0; 2109 + } 2110 + security_task_to_inode(task, inode); 2111 + status = 1; 2112 + } 2113 + 2114 + out: 2115 + put_task_struct(task); 2116 + 2117 + out_notask: 2118 + if (status <= 0) 2119 + d_drop(dentry); 2120 + 2121 + return status; 2122 + } 2123 + 2124 + static const struct dentry_operations tid_map_files_dentry_operations = { 2125 + .d_revalidate = map_files_d_revalidate, 2126 + .d_delete = pid_delete_dentry, 2127 + }; 2128 + 2129 + static int proc_map_files_get_link(struct dentry *dentry, struct path *path) 2130 + { 2131 + unsigned long vm_start, vm_end; 2132 + struct vm_area_struct *vma; 2133 + struct task_struct *task; 2134 + struct mm_struct *mm; 2135 + int rc; 2136 + 2137 + rc = -ENOENT; 2138 + task = get_proc_task(dentry->d_inode); 2139 + if (!task) 2140 + goto out; 2141 + 2142 + mm = get_task_mm(task); 2143 + put_task_struct(task); 2144 + if (!mm) 2145 + goto out; 2146 + 2147 + rc = dname_to_vma_addr(dentry, &vm_start, &vm_end); 2148 + if (rc) 2149 + goto out_mmput; 2150 + 2151 + down_read(&mm->mmap_sem); 2152 + vma = find_exact_vma(mm, vm_start, vm_end); 2153 + if (vma && vma->vm_file) { 2154 + *path = vma->vm_file->f_path; 2155 + path_get(path); 2156 + rc = 0; 2157 + } 2158 + up_read(&mm->mmap_sem); 2159 + 2160 + out_mmput: 2161 + mmput(mm); 2162 + out: 2163 + return rc; 2164 + } 2165 + 2166 + struct map_files_info { 2167 + struct file *file; 2168 + unsigned long len; 2169 + unsigned char name[4*sizeof(long)+2]; /* max: %lx-%lx\0 */ 2170 + }; 2171 + 2172 + static struct dentry * 2173 + proc_map_files_instantiate(struct inode *dir, struct dentry 
*dentry, 2174 + struct task_struct *task, const void *ptr) 2175 + { 2176 + const struct file *file = ptr; 2177 + struct proc_inode *ei; 2178 + struct inode *inode; 2179 + 2180 + if (!file) 2181 + return ERR_PTR(-ENOENT); 2182 + 2183 + inode = proc_pid_make_inode(dir->i_sb, task); 2184 + if (!inode) 2185 + return ERR_PTR(-ENOENT); 2186 + 2187 + ei = PROC_I(inode); 2188 + ei->op.proc_get_link = proc_map_files_get_link; 2189 + 2190 + inode->i_op = &proc_pid_link_inode_operations; 2191 + inode->i_size = 64; 2192 + inode->i_mode = S_IFLNK; 2193 + 2194 + if (file->f_mode & FMODE_READ) 2195 + inode->i_mode |= S_IRUSR; 2196 + if (file->f_mode & FMODE_WRITE) 2197 + inode->i_mode |= S_IWUSR; 2198 + 2199 + d_set_d_op(dentry, &tid_map_files_dentry_operations); 2200 + d_add(dentry, inode); 2201 + 2202 + return NULL; 2203 + } 2204 + 2205 + static struct dentry *proc_map_files_lookup(struct inode *dir, 2206 + struct dentry *dentry, struct nameidata *nd) 2207 + { 2208 + unsigned long vm_start, vm_end; 2209 + struct vm_area_struct *vma; 2210 + struct task_struct *task; 2211 + struct dentry *result; 2212 + struct mm_struct *mm; 2213 + 2214 + result = ERR_PTR(-EACCES); 2215 + if (!capable(CAP_SYS_ADMIN)) 2216 + goto out; 2217 + 2218 + result = ERR_PTR(-ENOENT); 2219 + task = get_proc_task(dir); 2220 + if (!task) 2221 + goto out; 2222 + 2223 + result = ERR_PTR(-EACCES); 2224 + if (lock_trace(task)) 2225 + goto out_put_task; 2226 + 2227 + result = ERR_PTR(-ENOENT); 2228 + if (dname_to_vma_addr(dentry, &vm_start, &vm_end)) 2229 + goto out_unlock; 2230 + 2231 + mm = get_task_mm(task); 2232 + if (!mm) 2233 + goto out_unlock; 2234 + 2235 + down_read(&mm->mmap_sem); 2236 + vma = find_exact_vma(mm, vm_start, vm_end); 2237 + if (!vma) 2238 + goto out_no_vma; 2239 + 2240 + result = proc_map_files_instantiate(dir, dentry, task, vma->vm_file); 2241 + 2242 + out_no_vma: 2243 + up_read(&mm->mmap_sem); 2244 + mmput(mm); 2245 + out_unlock: 2246 + unlock_trace(task); 2247 + out_put_task: 2248 + 
put_task_struct(task); 2249 + out: 2250 + return result; 2251 + } 2252 + 2253 + static const struct inode_operations proc_map_files_inode_operations = { 2254 + .lookup = proc_map_files_lookup, 2255 + .permission = proc_fd_permission, 2256 + .setattr = proc_setattr, 2257 + }; 2258 + 2259 + static int 2260 + proc_map_files_readdir(struct file *filp, void *dirent, filldir_t filldir) 2261 + { 2262 + struct dentry *dentry = filp->f_path.dentry; 2263 + struct inode *inode = dentry->d_inode; 2264 + struct vm_area_struct *vma; 2265 + struct task_struct *task; 2266 + struct mm_struct *mm; 2267 + ino_t ino; 2268 + int ret; 2269 + 2270 + ret = -EACCES; 2271 + if (!capable(CAP_SYS_ADMIN)) 2272 + goto out; 2273 + 2274 + ret = -ENOENT; 2275 + task = get_proc_task(inode); 2276 + if (!task) 2277 + goto out; 2278 + 2279 + ret = -EACCES; 2280 + if (lock_trace(task)) 2281 + goto out_put_task; 2282 + 2283 + ret = 0; 2284 + switch (filp->f_pos) { 2285 + case 0: 2286 + ino = inode->i_ino; 2287 + if (filldir(dirent, ".", 1, 0, ino, DT_DIR) < 0) 2288 + goto out_unlock; 2289 + filp->f_pos++; 2290 + case 1: 2291 + ino = parent_ino(dentry); 2292 + if (filldir(dirent, "..", 2, 1, ino, DT_DIR) < 0) 2293 + goto out_unlock; 2294 + filp->f_pos++; 2295 + default: 2296 + { 2297 + unsigned long nr_files, pos, i; 2298 + struct flex_array *fa = NULL; 2299 + struct map_files_info info; 2300 + struct map_files_info *p; 2301 + 2302 + mm = get_task_mm(task); 2303 + if (!mm) 2304 + goto out_unlock; 2305 + down_read(&mm->mmap_sem); 2306 + 2307 + nr_files = 0; 2308 + 2309 + /* 2310 + * We need two passes here: 2311 + * 2312 + * 1) Collect vmas of mapped files with mmap_sem taken 2313 + * 2) Release mmap_sem and instantiate entries 2314 + * 2315 + * otherwise we get lockdep complained, since filldir() 2316 + * routine might require mmap_sem taken in might_fault(). 
2317 + */ 2318 + 2319 + for (vma = mm->mmap, pos = 2; vma; vma = vma->vm_next) { 2320 + if (vma->vm_file && ++pos > filp->f_pos) 2321 + nr_files++; 2322 + } 2323 + 2324 + if (nr_files) { 2325 + fa = flex_array_alloc(sizeof(info), nr_files, 2326 + GFP_KERNEL); 2327 + if (!fa || flex_array_prealloc(fa, 0, nr_files, 2328 + GFP_KERNEL)) { 2329 + ret = -ENOMEM; 2330 + if (fa) 2331 + flex_array_free(fa); 2332 + up_read(&mm->mmap_sem); 2333 + mmput(mm); 2334 + goto out_unlock; 2335 + } 2336 + for (i = 0, vma = mm->mmap, pos = 2; vma; 2337 + vma = vma->vm_next) { 2338 + if (!vma->vm_file) 2339 + continue; 2340 + if (++pos <= filp->f_pos) 2341 + continue; 2342 + 2343 + get_file(vma->vm_file); 2344 + info.file = vma->vm_file; 2345 + info.len = snprintf(info.name, 2346 + sizeof(info.name), "%lx-%lx", 2347 + vma->vm_start, vma->vm_end); 2348 + if (flex_array_put(fa, i++, &info, GFP_KERNEL)) 2349 + BUG(); 2350 + } 2351 + } 2352 + up_read(&mm->mmap_sem); 2353 + 2354 + for (i = 0; i < nr_files; i++) { 2355 + p = flex_array_get(fa, i); 2356 + ret = proc_fill_cache(filp, dirent, filldir, 2357 + p->name, p->len, 2358 + proc_map_files_instantiate, 2359 + task, p->file); 2360 + if (ret) 2361 + break; 2362 + filp->f_pos++; 2363 + fput(p->file); 2364 + } 2365 + for (; i < nr_files; i++) { 2366 + /* 2367 + * In case of error don't forget 2368 + * to put rest of file refs. 
2369 + */ 2370 + p = flex_array_get(fa, i); 2371 + fput(p->file); 2372 + } 2373 + if (fa) 2374 + flex_array_free(fa); 2375 + mmput(mm); 2376 + } 2377 + } 2378 + 2379 + out_unlock: 2380 + unlock_trace(task); 2381 + out_put_task: 2382 + put_task_struct(task); 2383 + out: 2384 + return ret; 2385 + } 2386 + 2387 + static const struct file_operations proc_map_files_operations = { 2388 + .read = generic_read_dir, 2389 + .readdir = proc_map_files_readdir, 2390 + .llseek = default_llseek, 2391 + }; 2392 + 2393 + #endif /* CONFIG_CHECKPOINT_RESTORE */ 2104 2394 2105 2395 /* 2106 2396 * /proc/pid/fd needs a special permission handler so that a process can still ··· 3066 2658 static const struct pid_entry tgid_base_stuff[] = { 3067 2659 DIR("task", S_IRUGO|S_IXUGO, proc_task_inode_operations, proc_task_operations), 3068 2660 DIR("fd", S_IRUSR|S_IXUSR, proc_fd_inode_operations, proc_fd_operations), 2661 + #ifdef CONFIG_CHECKPOINT_RESTORE 2662 + DIR("map_files", S_IRUSR|S_IXUSR, proc_map_files_inode_operations, proc_map_files_operations), 2663 + #endif 3069 2664 DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fdinfo_operations), 3070 2665 DIR("ns", S_IRUSR|S_IXUGO, proc_ns_dir_inode_operations, proc_ns_dir_operations), 3071 2666 #ifdef CONFIG_NET ··· 3172 2761 .lookup = proc_tgid_base_lookup, 3173 2762 .getattr = pid_getattr, 3174 2763 .setattr = proc_setattr, 2764 + .permission = proc_pid_permission, 3175 2765 }; 3176 2766 3177 2767 static void proc_flush_task_mnt(struct vfsmount *mnt, pid_t pid, pid_t tgid) ··· 3376 2964 proc_pid_instantiate, iter.task, NULL); 3377 2965 } 3378 2966 2967 + static int fake_filldir(void *buf, const char *name, int namelen, 2968 + loff_t offset, u64 ino, unsigned d_type) 2969 + { 2970 + return 0; 2971 + } 2972 + 3379 2973 /* for the /proc/ directory itself, after non-process stuff has been done */ 3380 2974 int proc_pid_readdir(struct file * filp, void * dirent, filldir_t filldir) 3381 2975 { ··· 3389 2971 struct task_struct 
*reaper; 3390 2972 struct tgid_iter iter; 3391 2973 struct pid_namespace *ns; 2974 + filldir_t __filldir; 3392 2975 3393 2976 if (filp->f_pos >= PID_MAX_LIMIT + TGID_OFFSET) 3394 2977 goto out_no_task; ··· 3411 2992 for (iter = next_tgid(ns, iter); 3412 2993 iter.task; 3413 2994 iter.tgid += 1, iter = next_tgid(ns, iter)) { 2995 + if (has_pid_permissions(ns, iter.task, 2)) 2996 + __filldir = filldir; 2997 + else 2998 + __filldir = fake_filldir; 2999 + 3414 3000 filp->f_pos = iter.tgid + TGID_OFFSET; 3415 - if (proc_pid_fill_cache(filp, dirent, filldir, iter) < 0) { 3001 + if (proc_pid_fill_cache(filp, dirent, __filldir, iter) < 0) { 3416 3002 put_task_struct(iter.task); 3417 3003 goto out; 3418 3004 } ··· 3752 3328 .lookup = proc_task_lookup, 3753 3329 .getattr = proc_task_getattr, 3754 3330 .setattr = proc_setattr, 3331 + .permission = proc_pid_permission, 3755 3332 }; 3756 3333 3757 3334 static const struct file_operations proc_task_operations = {
+18
fs/proc/inode.c
··· 7 7 #include <linux/time.h> 8 8 #include <linux/proc_fs.h> 9 9 #include <linux/kernel.h> 10 + #include <linux/pid_namespace.h> 10 11 #include <linux/mm.h> 11 12 #include <linux/string.h> 12 13 #include <linux/stat.h> ··· 18 17 #include <linux/init.h> 19 18 #include <linux/module.h> 20 19 #include <linux/sysctl.h> 20 + #include <linux/seq_file.h> 21 21 #include <linux/slab.h> 22 + #include <linux/mount.h> 22 23 23 24 #include <asm/system.h> 24 25 #include <asm/uaccess.h> ··· 104 101 init_once); 105 102 } 106 103 104 + static int proc_show_options(struct seq_file *seq, struct dentry *root) 105 + { 106 + struct super_block *sb = root->d_sb; 107 + struct pid_namespace *pid = sb->s_fs_info; 108 + 109 + if (pid->pid_gid) 110 + seq_printf(seq, ",gid=%lu", (unsigned long)pid->pid_gid); 111 + if (pid->hide_pid != 0) 112 + seq_printf(seq, ",hidepid=%u", pid->hide_pid); 113 + 114 + return 0; 115 + } 116 + 107 117 static const struct super_operations proc_sops = { 108 118 .alloc_inode = proc_alloc_inode, 109 119 .destroy_inode = proc_destroy_inode, 110 120 .drop_inode = generic_delete_inode, 111 121 .evict_inode = proc_evict_inode, 112 122 .statfs = simple_statfs, 123 + .remount_fs = proc_remount, 124 + .show_options = proc_show_options, 113 125 }; 114 126 115 127 static void __pde_users_dec(struct proc_dir_entry *pde)
+1
fs/proc/internal.h
··· 117 117 118 118 int proc_fill_super(struct super_block *); 119 119 struct inode *proc_get_inode(struct super_block *, struct proc_dir_entry *); 120 + int proc_remount(struct super_block *sb, int *flags, char *data); 120 121 121 122 /* 122 123 * These are generic /proc routines that use the internal
+68 -2
fs/proc/root.c
··· 18 18 #include <linux/bitops.h> 19 19 #include <linux/mount.h> 20 20 #include <linux/pid_namespace.h> 21 + #include <linux/parser.h> 21 22 22 23 #include "internal.h" 23 24 ··· 37 36 return err; 38 37 } 39 38 39 + enum { 40 + Opt_gid, Opt_hidepid, Opt_err, 41 + }; 42 + 43 + static const match_table_t tokens = { 44 + {Opt_hidepid, "hidepid=%u"}, 45 + {Opt_gid, "gid=%u"}, 46 + {Opt_err, NULL}, 47 + }; 48 + 49 + static int proc_parse_options(char *options, struct pid_namespace *pid) 50 + { 51 + char *p; 52 + substring_t args[MAX_OPT_ARGS]; 53 + int option; 54 + 55 + if (!options) 56 + return 1; 57 + 58 + while ((p = strsep(&options, ",")) != NULL) { 59 + int token; 60 + if (!*p) 61 + continue; 62 + 63 + args[0].to = args[0].from = 0; 64 + token = match_token(p, tokens, args); 65 + switch (token) { 66 + case Opt_gid: 67 + if (match_int(&args[0], &option)) 68 + return 0; 69 + pid->pid_gid = option; 70 + break; 71 + case Opt_hidepid: 72 + if (match_int(&args[0], &option)) 73 + return 0; 74 + if (option < 0 || option > 2) { 75 + pr_err("proc: hidepid value must be between 0 and 2.\n"); 76 + return 0; 77 + } 78 + pid->hide_pid = option; 79 + break; 80 + default: 81 + pr_err("proc: unrecognized mount option \"%s\" " 82 + "or missing value\n", p); 83 + return 0; 84 + } 85 + } 86 + 87 + return 1; 88 + } 89 + 90 + int proc_remount(struct super_block *sb, int *flags, char *data) 91 + { 92 + struct pid_namespace *pid = sb->s_fs_info; 93 + return !proc_parse_options(data, pid); 94 + } 95 + 40 96 static struct dentry *proc_mount(struct file_system_type *fs_type, 41 97 int flags, const char *dev_name, void *data) 42 98 { ··· 101 43 struct super_block *sb; 102 44 struct pid_namespace *ns; 103 45 struct proc_inode *ei; 46 + char *options; 104 47 105 - if (flags & MS_KERNMOUNT) 48 + if (flags & MS_KERNMOUNT) { 106 49 ns = (struct pid_namespace *)data; 107 - else 50 + options = NULL; 51 + } else { 108 52 ns = current->nsproxy->pid_ns; 53 + options = data; 54 + } 109 55 110 56 sb = 
sget(fs_type, proc_test_super, proc_set_super, ns); 111 57 if (IS_ERR(sb)) ··· 117 55 118 56 if (!sb->s_root) { 119 57 sb->s_flags = flags; 58 + if (!proc_parse_options(options, ns)) { 59 + deactivate_locked_super(sb); 60 + return ERR_PTR(-EINVAL); 61 + } 120 62 err = proc_fill_super(sb); 121 63 if (err) { 122 64 deactivate_locked_super(sb);
-3
fs/reiserfs/bitmap.c
··· 1364 1364 struct reiserfs_bitmap_info *bitmap; 1365 1365 unsigned int bmap_nr = reiserfs_bmap_count(sb); 1366 1366 1367 - /* Avoid lock recursion in fault case */ 1368 - reiserfs_write_unlock(sb); 1369 1367 bitmap = vmalloc(sizeof(*bitmap) * bmap_nr); 1370 - reiserfs_write_lock(sb); 1371 1368 if (bitmap == NULL) 1372 1369 return -ENOMEM; 1373 1370
+24 -40
fs/reiserfs/journal.c
··· 2678 2678 char b[BDEVNAME_SIZE]; 2679 2679 int ret; 2680 2680 2681 - /* 2682 - * Unlock here to avoid various RECLAIM-FS-ON <-> IN-RECLAIM-FS 2683 - * dependency inversion warnings. 2684 - */ 2685 - reiserfs_write_unlock(sb); 2686 2681 journal = SB_JOURNAL(sb) = vzalloc(sizeof(struct reiserfs_journal)); 2687 2682 if (!journal) { 2688 2683 reiserfs_warning(sb, "journal-1256", 2689 2684 "unable to get memory for journal structure"); 2690 - reiserfs_write_lock(sb); 2691 2685 return 1; 2692 2686 } 2693 2687 INIT_LIST_HEAD(&journal->j_bitmap_nodes); ··· 2689 2695 INIT_LIST_HEAD(&journal->j_working_list); 2690 2696 INIT_LIST_HEAD(&journal->j_journal_list); 2691 2697 journal->j_persistent_trans = 0; 2692 - ret = reiserfs_allocate_list_bitmaps(sb, journal->j_list_bitmap, 2693 - reiserfs_bmap_count(sb)); 2694 - reiserfs_write_lock(sb); 2695 - if (ret) 2698 + if (reiserfs_allocate_list_bitmaps(sb, journal->j_list_bitmap, 2699 + reiserfs_bmap_count(sb))) 2696 2700 goto free_and_return; 2697 2701 2698 2702 allocate_bitmap_nodes(sb); ··· 2719 2727 goto free_and_return; 2720 2728 } 2721 2729 2722 - /* 2723 - * We need to unlock here to avoid creating the following 2724 - * dependency: 2725 - * reiserfs_lock -> sysfs_mutex 2726 - * Because the reiserfs mmap path creates the following dependency: 2727 - * mm->mmap -> reiserfs_lock, hence we have 2728 - * mm->mmap -> reiserfs_lock ->sysfs_mutex 2729 - * This would ends up in a circular dependency with sysfs readdir path 2730 - * which does sysfs_mutex -> mm->mmap_sem 2731 - * This is fine because the reiserfs lock is useless in mount path, 2732 - * at least until we call journal_begin. We keep it for paranoid 2733 - * reasons. 
2734 - */ 2735 - reiserfs_write_unlock(sb); 2736 2730 if (journal_init_dev(sb, journal, j_dev_name) != 0) { 2737 - reiserfs_write_lock(sb); 2738 2731 reiserfs_warning(sb, "sh-462", 2739 2732 "unable to initialize jornal device"); 2740 2733 goto free_and_return; 2741 2734 } 2742 - reiserfs_write_lock(sb); 2743 2735 2744 2736 rs = SB_DISK_SUPER_BLOCK(sb); 2745 2737 ··· 2805 2829 journal->j_mount_id = 10; 2806 2830 journal->j_state = 0; 2807 2831 atomic_set(&(journal->j_jlock), 0); 2808 - reiserfs_write_unlock(sb); 2809 2832 journal->j_cnode_free_list = allocate_cnodes(num_cnodes); 2810 - reiserfs_write_lock(sb); 2811 2833 journal->j_cnode_free_orig = journal->j_cnode_free_list; 2812 2834 journal->j_cnode_free = journal->j_cnode_free_list ? num_cnodes : 0; 2813 2835 journal->j_cnode_used = 0; ··· 2822 2848 2823 2849 init_journal_hash(sb); 2824 2850 jl = journal->j_current_jl; 2851 + 2852 + /* 2853 + * get_list_bitmap() may call flush_commit_list() which 2854 + * requires the lock. Calling flush_commit_list() shouldn't happen 2855 + * this early but I like to be paranoid. 2856 + */ 2857 + reiserfs_write_lock(sb); 2825 2858 jl->j_list_bitmap = get_list_bitmap(sb, jl); 2859 + reiserfs_write_unlock(sb); 2826 2860 if (!jl->j_list_bitmap) { 2827 2861 reiserfs_warning(sb, "journal-2005", 2828 2862 "get_list_bitmap failed for journal list 0"); 2829 2863 goto free_and_return; 2830 2864 } 2831 - if (journal_read(sb) < 0) { 2865 + 2866 + /* 2867 + * Journal_read needs to be inspected in order to push down 2868 + * the lock further inside (or even remove it). 
2869 + */ 2870 + reiserfs_write_lock(sb); 2871 + ret = journal_read(sb); 2872 + reiserfs_write_unlock(sb); 2873 + if (ret < 0) { 2832 2874 reiserfs_warning(sb, "reiserfs-2006", 2833 2875 "Replay Failure, unable to mount"); 2834 2876 goto free_and_return; 2835 2877 } 2836 2878 2837 2879 reiserfs_mounted_fs_count++; 2838 - if (reiserfs_mounted_fs_count <= 1) { 2839 - reiserfs_write_unlock(sb); 2880 + if (reiserfs_mounted_fs_count <= 1) 2840 2881 commit_wq = alloc_workqueue("reiserfs", WQ_MEM_RECLAIM, 0); 2841 - reiserfs_write_lock(sb); 2842 - } 2843 2882 2844 2883 INIT_DELAYED_WORK(&journal->j_work, flush_async_commits); 2845 2884 journal->j_work_sb = sb; ··· 2883 2896 journal->j_cnode_free < (journal->j_trans_max * 3)) { 2884 2897 return 1; 2885 2898 } 2886 - /* protected by the BKL here */ 2899 + 2887 2900 journal->j_len_alloc += new_alloc; 2888 2901 th->t_blocks_allocated += new_alloc ; 2889 2902 return 0; 2890 2903 } 2891 2904 2892 - /* this must be called inside a transaction, and requires the 2893 - ** kernel_lock to be held 2905 + /* this must be called inside a transaction 2894 2906 */ 2895 2907 void reiserfs_block_writes(struct reiserfs_transaction_handle *th) 2896 2908 { ··· 2900 2914 return; 2901 2915 } 2902 2916 2903 - /* this must be called without a transaction started, and does not 2904 - ** require BKL 2917 + /* this must be called without a transaction started 2905 2918 */ 2906 2919 void reiserfs_allow_writes(struct super_block *s) 2907 2920 { ··· 2909 2924 wake_up(&journal->j_join_wait); 2910 2925 } 2911 2926 2912 - /* this must be called without a transaction started, and does not 2913 - ** require BKL 2927 + /* this must be called without a transaction started 2914 2928 */ 2915 2929 void reiserfs_wait_on_write_block(struct super_block *s) 2916 2930 {
+30 -26
fs/reiserfs/super.c
··· 1519 1519 static int reread_meta_blocks(struct super_block *s) 1520 1520 { 1521 1521 ll_rw_block(READ, 1, &(SB_BUFFER_WITH_SB(s))); 1522 - reiserfs_write_unlock(s); 1523 1522 wait_on_buffer(SB_BUFFER_WITH_SB(s)); 1524 - reiserfs_write_lock(s); 1525 1523 if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) { 1526 1524 reiserfs_warning(s, "reiserfs-2504", "error reading the super"); 1527 1525 return 1; ··· 1744 1746 mutex_init(&REISERFS_SB(s)->lock); 1745 1747 REISERFS_SB(s)->lock_depth = -1; 1746 1748 1747 - /* 1748 - * This function is called with the bkl, which also was the old 1749 - * locking used here. 1750 - * do_journal_begin() will soon check if we hold the lock (ie: was the 1751 - * bkl). This is likely because do_journal_begin() has several another 1752 - * callers because at this time, it doesn't seem to be necessary to 1753 - * protect against anything. 1754 - * Anyway, let's be conservative and lock for now. 1755 - */ 1756 - reiserfs_write_lock(s); 1757 - 1758 1749 jdev_name = NULL; 1759 1750 if (reiserfs_parse_options 1760 1751 (s, (char *)data, &(sbi->s_mount_opt), &blocks, &jdev_name, 1761 1752 &commit_max_age, qf_names, &qfmt) == 0) { 1762 - goto error; 1753 + goto error_unlocked; 1763 1754 } 1764 1755 if (jdev_name && jdev_name[0]) { 1765 1756 REISERFS_SB(s)->s_jdev = kstrdup(jdev_name, GFP_KERNEL); ··· 1764 1777 1765 1778 if (blocks) { 1766 1779 SWARN(silent, s, "jmacd-7", "resize option for remount only"); 1767 - goto error; 1780 + goto error_unlocked; 1768 1781 } 1769 1782 1770 1783 /* try old format (undistributed bitmap, super block in 8-th 1k block of a device) */ ··· 1774 1787 else if (read_super_block(s, REISERFS_DISK_OFFSET_IN_BYTES)) { 1775 1788 SWARN(silent, s, "sh-2021", "can not find reiserfs on %s", 1776 1789 reiserfs_bdevname(s)); 1777 - goto error; 1790 + goto error_unlocked; 1778 1791 } 1779 1792 1780 1793 rs = SB_DISK_SUPER_BLOCK(s); ··· 1790 1803 "or increase size of your LVM partition"); 1791 1804 SWARN(silent, s, "", "Or may be you 
forgot to " 1792 1805 "reboot after fdisk when it told you to"); 1793 - goto error; 1806 + goto error_unlocked; 1794 1807 } 1795 1808 1796 1809 sbi->s_mount_state = SB_REISERFS_STATE(s); ··· 1798 1811 1799 1812 if ((errval = reiserfs_init_bitmap_cache(s))) { 1800 1813 SWARN(silent, s, "jmacd-8", "unable to read bitmap"); 1801 - goto error; 1814 + goto error_unlocked; 1802 1815 } 1816 + 1803 1817 errval = -EINVAL; 1804 1818 #ifdef CONFIG_REISERFS_CHECK 1805 1819 SWARN(silent, s, "", "CONFIG_REISERFS_CHECK is set ON"); ··· 1823 1835 if (reiserfs_barrier_flush(s)) { 1824 1836 printk("reiserfs: using flush barriers\n"); 1825 1837 } 1838 + 1826 1839 // set_device_ro(s->s_dev, 1) ; 1827 1840 if (journal_init(s, jdev_name, old_format, commit_max_age)) { 1828 1841 SWARN(silent, s, "sh-2022", 1829 1842 "unable to initialize journal space"); 1830 - goto error; 1843 + goto error_unlocked; 1831 1844 } else { 1832 1845 jinit_done = 1; /* once this is set, journal_release must be called 1833 1846 ** if we error out of the mount 1834 1847 */ 1835 1848 } 1849 + 1836 1850 if (reread_meta_blocks(s)) { 1837 1851 SWARN(silent, s, "jmacd-9", 1838 1852 "unable to reread meta blocks after journal init"); 1839 - goto error; 1853 + goto error_unlocked; 1840 1854 } 1841 1855 1842 1856 if (replay_only(s)) 1843 - goto error; 1857 + goto error_unlocked; 1844 1858 1845 1859 if (bdev_read_only(s->s_bdev) && !(s->s_flags & MS_RDONLY)) { 1846 1860 SWARN(silent, s, "clm-7000", ··· 1856 1866 reiserfs_init_locked_inode, (void *)(&args)); 1857 1867 if (!root_inode) { 1858 1868 SWARN(silent, s, "jmacd-10", "get root inode failed"); 1859 - goto error; 1869 + goto error_unlocked; 1860 1870 } 1871 + 1872 + /* 1873 + * This path assumed to be called with the BKL in the old times. 1874 + * Now we have inherited the big reiserfs lock from it and many 1875 + * reiserfs helpers called in the mount path and elsewhere require 1876 + * this lock to be held even if it's not always necessary. 
Let's be 1877 + * conservative and hold it early. The window can be reduced after 1878 + * careful review of the code. 1879 + */ 1880 + reiserfs_write_lock(s); 1861 1881 1862 1882 if (root_inode->i_state & I_NEW) { 1863 1883 reiserfs_read_locked_inode(root_inode, &args); ··· 1995 1995 return (0); 1996 1996 1997 1997 error: 1998 - if (jinit_done) { /* kill the commit thread, free journal ram */ 1999 - journal_release_error(NULL, s); 2000 - } 2001 - 2002 1998 reiserfs_write_unlock(s); 1999 + 2000 + error_unlocked: 2001 + /* kill the commit thread, free journal ram */ 2002 + if (jinit_done) { 2003 + reiserfs_write_lock(s); 2004 + journal_release_error(NULL, s); 2005 + reiserfs_write_unlock(s); 2006 + } 2003 2007 2004 2008 reiserfs_free_bitmap_cache(s); 2005 2009 if (SB_BUFFER_WITH_SB(s))
+1
include/linux/compiler-gcc4.h
··· 29 29 the kernel context */ 30 30 #define __cold __attribute__((__cold__)) 31 31 32 + #define __linktime_error(message) __attribute__((__error__(message))) 32 33 33 34 #if __GNUC_MINOR__ >= 5 34 35 /*
+3 -1
include/linux/compiler.h
··· 293 293 #ifndef __compiletime_error 294 294 # define __compiletime_error(message) 295 295 #endif 296 - 296 + #ifndef __linktime_error 297 + # define __linktime_error(message) 298 + #endif 297 299 /* 298 300 * Prevent the compiler from merging or refetching accesses. The compiler 299 301 * is also forbidden from reordering successive instances of ACCESS_ONCE(),
+21 -2
include/linux/gfp.h
··· 36 36 #endif 37 37 #define ___GFP_NO_KSWAPD 0x400000u 38 38 #define ___GFP_OTHER_NODE 0x800000u 39 + #define ___GFP_WRITE 0x1000000u 39 40 40 41 /* 41 42 * GFP bitmasks.. ··· 86 85 87 86 #define __GFP_NO_KSWAPD ((__force gfp_t)___GFP_NO_KSWAPD) 88 87 #define __GFP_OTHER_NODE ((__force gfp_t)___GFP_OTHER_NODE) /* On behalf of other node */ 88 + #define __GFP_WRITE ((__force gfp_t)___GFP_WRITE) /* Allocator intends to dirty page */ 89 89 90 90 /* 91 91 * This may seem redundant, but it's a way of annotating false positives vs. ··· 94 92 */ 95 93 #define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK) 96 94 97 - #define __GFP_BITS_SHIFT 24 /* Room for N __GFP_FOO bits */ 95 + #define __GFP_BITS_SHIFT 25 /* Room for N __GFP_FOO bits */ 98 96 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) 99 97 100 98 /* This equals 0, but use constants in case they ever change */ ··· 315 313 static inline struct page *alloc_pages_exact_node(int nid, gfp_t gfp_mask, 316 314 unsigned int order) 317 315 { 318 - VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES); 316 + VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid)); 319 317 320 318 return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask)); 321 319 } ··· 360 358 extern void __free_pages(struct page *page, unsigned int order); 361 359 extern void free_pages(unsigned long addr, unsigned int order); 362 360 extern void free_hot_cold_page(struct page *page, int cold); 361 + extern void free_hot_cold_page_list(struct list_head *list, int cold); 363 362 364 363 #define __free_page(page) __free_pages((page), 0) 365 364 #define free_page(addr) free_pages((addr), 0) ··· 370 367 void drain_all_pages(void); 371 368 void drain_local_pages(void *dummy); 372 369 370 + /* 371 + * gfp_allowed_mask is set to GFP_BOOT_MASK during early boot to restrict what 372 + * GFP flags are used before interrupts are enabled. Once interrupts are 373 + * enabled, it is set to __GFP_BITS_MASK while the system is running. 
During 374 + * hibernation, it is used by PM to avoid I/O during memory allocation while 375 + * devices are suspended. 376 + */ 373 377 extern gfp_t gfp_allowed_mask; 374 378 375 379 extern void pm_restrict_gfp_mask(void); 376 380 extern void pm_restore_gfp_mask(void); 381 + 382 + #ifdef CONFIG_PM_SLEEP 383 + extern bool pm_suspended_storage(void); 384 + #else 385 + static inline bool pm_suspended_storage(void) 386 + { 387 + return false; 388 + } 389 + #endif /* CONFIG_PM_SLEEP */ 377 390 378 391 #endif /* __LINUX_GFP_H */
+16
include/linux/kernel.h
··· 665 665 #define BUILD_BUG_ON_ZERO(e) (0) 666 666 #define BUILD_BUG_ON_NULL(e) ((void*)0) 667 667 #define BUILD_BUG_ON(condition) 668 + #define BUILD_BUG() (0) 668 669 #else /* __CHECKER__ */ 669 670 670 671 /* Force a compilation error if a constant expression is not a power of 2 */ ··· 704 703 if (condition) __build_bug_on_failed = 1; \ 705 704 } while(0) 706 705 #endif 706 + 707 + /** 708 + * BUILD_BUG - break compile if used. 709 + * 710 + * If you have some code that you expect the compiler to eliminate at 711 + * build time, you should use BUILD_BUG to detect if it is 712 + * unexpectedly used. 713 + */ 714 + #define BUILD_BUG() \ 715 + do { \ 716 + extern void __build_bug_failed(void) \ 717 + __linktime_error("BUILD_BUG failed"); \ 718 + __build_bug_failed(); \ 719 + } while (0) 720 + 707 721 #endif /* __CHECKER__ */ 708 722 709 723 /* Trap pasters of __FUNCTION__ at compile-time */
+34
include/linux/leds-tca6507.h
··· 1 + /* 2 + * TCA6507 LED chip driver. 3 + * 4 + * Copyright (C) 2011 Neil Brown <neil@brown.name> 5 + * 6 + * This program is free software; you can redistribute it and/or 7 + * modify it under the terms of the GNU General Public License 8 + * version 2 as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, but 11 + * WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 + * General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program; if not, write to the Free Software 17 + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 18 + * 02110-1301 USA 19 + */ 20 + 21 + #ifndef __LINUX_TCA6507_H 22 + #define __LINUX_TCA6507_H 23 + #include <linux/leds.h> 24 + 25 + struct tca6507_platform_data { 26 + struct led_platform_data leds; 27 + #ifdef CONFIG_GPIOLIB 28 + int gpio_base; 29 + void (*setup)(unsigned gpio_base, unsigned ngpio); 30 + #endif 31 + }; 32 + 33 + #define TCA6507_MAKE_GPIO 1 34 + #endif /* __LINUX_TCA6507_H*/
+5 -5
include/linux/mempolicy.h
··· 164 164 atomic_inc(&pol->refcnt); 165 165 } 166 166 167 - extern int __mpol_equal(struct mempolicy *a, struct mempolicy *b); 168 - static inline int mpol_equal(struct mempolicy *a, struct mempolicy *b) 167 + extern bool __mpol_equal(struct mempolicy *a, struct mempolicy *b); 168 + static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b) 169 169 { 170 170 if (a == b) 171 - return 1; 171 + return true; 172 172 return __mpol_equal(a, b); 173 173 } 174 174 ··· 257 257 258 258 struct mempolicy {}; 259 259 260 - static inline int mpol_equal(struct mempolicy *a, struct mempolicy *b) 260 + static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b) 261 261 { 262 - return 1; 262 + return true; 263 263 } 264 264 265 265 static inline void mpol_put(struct mempolicy *p)
+29
include/linux/mm.h
··· 1482 1482 return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 1483 1483 } 1484 1484 1485 + /* Look up the first VMA which exactly matches the interval vm_start ... vm_end */ 1486 + static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm, 1487 + unsigned long vm_start, unsigned long vm_end) 1488 + { 1489 + struct vm_area_struct *vma = find_vma(mm, vm_start); 1490 + 1491 + if (vma && (vma->vm_start != vm_start || vma->vm_end != vm_end)) 1492 + vma = NULL; 1493 + 1494 + return vma; 1495 + } 1496 + 1485 1497 #ifdef CONFIG_MMU 1486 1498 pgprot_t vm_get_page_prot(unsigned long vm_flags); 1487 1499 #else ··· 1629 1617 unsigned long addr, struct vm_area_struct *vma, 1630 1618 unsigned int pages_per_huge_page); 1631 1619 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */ 1620 + 1621 + #ifdef CONFIG_DEBUG_PAGEALLOC 1622 + extern unsigned int _debug_guardpage_minorder; 1623 + 1624 + static inline unsigned int debug_guardpage_minorder(void) 1625 + { 1626 + return _debug_guardpage_minorder; 1627 + } 1628 + 1629 + static inline bool page_is_guard(struct page *page) 1630 + { 1631 + return test_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags); 1632 + } 1633 + #else 1634 + static inline unsigned int debug_guardpage_minorder(void) { return 0; } 1635 + static inline bool page_is_guard(struct page *page) { return false; } 1636 + #endif /* CONFIG_DEBUG_PAGEALLOC */ 1632 1637 1633 1638 #endif /* __KERNEL__ */ 1634 1639 #endif /* _LINUX_MM_H */

+6
include/linux/mmzone.h
··· 317 317 */ 318 318 unsigned long lowmem_reserve[MAX_NR_ZONES]; 319 319 320 + /* 321 + * This is a per-zone reserve of pages that should not be 322 + * considered dirtyable memory. 323 + */ 324 + unsigned long dirty_balance_reserve; 325 + 320 326 #ifdef CONFIG_NUMA 321 327 int node; 322 328 /*
+3 -1
include/linux/page-debug-flags.h
··· 13 13 14 14 enum page_debug_flags { 15 15 PAGE_DEBUG_FLAG_POISON, /* Page is poisoned */ 16 + PAGE_DEBUG_FLAG_GUARD, 16 17 }; 17 18 18 19 /* ··· 22 21 */ 23 22 24 23 #ifdef CONFIG_WANT_PAGE_DEBUG_FLAGS 25 - #if !defined(CONFIG_PAGE_POISONING) \ 24 + #if !defined(CONFIG_PAGE_POISONING) && \ 25 + !defined(CONFIG_PAGE_GUARD) \ 26 26 /* && !defined(CONFIG_PAGE_DEBUG_SOMETHING_ELSE) && ... */ 27 27 #error WANT_PAGE_DEBUG_FLAGS is turned on with no debug features! 28 28 #endif
-7
include/linux/pagevec.h
··· 21 21 }; 22 22 23 23 void __pagevec_release(struct pagevec *pvec); 24 - void __pagevec_free(struct pagevec *pvec); 25 24 void ____pagevec_lru_add(struct pagevec *pvec, enum lru_list lru); 26 25 void pagevec_strip(struct pagevec *pvec); 27 26 unsigned pagevec_lookup(struct pagevec *pvec, struct address_space *mapping, ··· 64 65 { 65 66 if (pagevec_count(pvec)) 66 67 __pagevec_release(pvec); 67 - } 68 - 69 - static inline void pagevec_free(struct pagevec *pvec) 70 - { 71 - if (pagevec_count(pvec)) 72 - __pagevec_free(pvec); 73 68 } 74 69 75 70 static inline void __pagevec_lru_add_anon(struct pagevec *pvec)
+2
include/linux/pid_namespace.h
··· 30 30 #ifdef CONFIG_BSD_PROCESS_ACCT 31 31 struct bsd_acct_struct *bacct; 32 32 #endif 33 + gid_t pid_gid; 34 + int hide_pid; 33 35 }; 34 36 35 37 extern struct pid_namespace init_pid_ns;
+1 -1
include/linux/proc_fs.h
··· 253 253 extern const struct proc_ns_operations ipcns_operations; 254 254 255 255 union proc_op { 256 - int (*proc_get_link)(struct inode *, struct path *); 256 + int (*proc_get_link)(struct dentry *, struct path *); 257 257 int (*proc_read)(struct task_struct *task, char *page); 258 258 int (*proc_show)(struct seq_file *m, 259 259 struct pid_namespace *ns, struct pid *pid,
+1
include/linux/rmap.h
··· 120 120 int anon_vma_prepare(struct vm_area_struct *); 121 121 void unlink_anon_vmas(struct vm_area_struct *); 122 122 int anon_vma_clone(struct vm_area_struct *, struct vm_area_struct *); 123 + void anon_vma_moveto_tail(struct vm_area_struct *); 123 124 int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *); 124 125 void __anon_vma_link(struct vm_area_struct *); 125 126
+1
include/linux/signal.h
··· 254 254 extern int show_unhandled_signals; 255 255 256 256 extern int get_signal_to_deliver(siginfo_t *info, struct k_sigaction *return_ka, struct pt_regs *regs, void *cookie); 257 + extern void block_sigmask(struct k_sigaction *ka, int signr); 257 258 extern void exit_signals(struct task_struct *tsk); 258 259 259 260 extern struct kmem_cache *sighand_cachep;
+1
include/linux/swap.h
··· 207 207 /* linux/mm/page_alloc.c */ 208 208 extern unsigned long totalram_pages; 209 209 extern unsigned long totalreserve_pages; 210 + extern unsigned long dirty_balance_reserve; 210 211 extern unsigned int nr_free_buffer_pages(void); 211 212 extern unsigned int nr_free_pagecache_pages(void); 212 213
+31 -16
include/linux/workqueue.h
··· 297 297 extern struct workqueue_struct *system_freezable_wq; 298 298 299 299 extern struct workqueue_struct * 300 - __alloc_workqueue_key(const char *name, unsigned int flags, int max_active, 301 - struct lock_class_key *key, const char *lock_name); 300 + __alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active, 301 + struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6); 302 302 303 + /** 304 + * alloc_workqueue - allocate a workqueue 305 + * @fmt: printf format for the name of the workqueue 306 + * @flags: WQ_* flags 307 + * @max_active: max in-flight work items, 0 for default 308 + * @args: args for @fmt 309 + * 310 + * Allocate a workqueue with the specified parameters. For detailed 311 + * information on WQ_* flags, please refer to Documentation/workqueue.txt. 312 + * 313 + * The __lock_name macro dance is to guarantee that single lock_class_key 314 + * doesn't end up with different names, which isn't allowed by lockdep. 315 + * 316 + * RETURNS: 317 + * Pointer to the allocated workqueue on success, %NULL on failure. 318 + */ 303 319 #ifdef CONFIG_LOCKDEP 304 - #define alloc_workqueue(name, flags, max_active) \ 320 + #define alloc_workqueue(fmt, flags, max_active, args...) \ 305 321 ({ \ 306 322 static struct lock_class_key __key; \ 307 323 const char *__lock_name; \ 308 324 \ 309 - if (__builtin_constant_p(name)) \ 310 - __lock_name = (name); \ 325 + if (__builtin_constant_p(fmt)) \ 326 + __lock_name = (fmt); \ 311 327 else \ 312 - __lock_name = #name; \ 328 + __lock_name = #fmt; \ 313 329 \ 314 - __alloc_workqueue_key((name), (flags), (max_active), \ 315 - &__key, __lock_name); \ 330 + __alloc_workqueue_key((fmt), (flags), (max_active), \ 331 + &__key, __lock_name, ##args); \ 316 332 }) 317 333 #else 318 - #define alloc_workqueue(name, flags, max_active) \ 319 - __alloc_workqueue_key((name), (flags), (max_active), NULL, NULL) 334 + #define alloc_workqueue(fmt, flags, max_active, args...) 
\ 335 + __alloc_workqueue_key((fmt), (flags), (max_active), \ 336 + NULL, NULL, ##args) 320 337 #endif 321 338 322 339 /** 323 340 * alloc_ordered_workqueue - allocate an ordered workqueue 324 - * @name: name of the workqueue 341 + * @fmt: printf format for the name of the workqueue 325 342 * @flags: WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful) 343 + * @args: args for @fmt 326 344 * 327 345 * Allocate an ordered workqueue. An ordered workqueue executes at 328 346 * most one work item at any given time in the queued order. They are ··· 349 331 * RETURNS: 350 332 * Pointer to the allocated workqueue on success, %NULL on failure. 351 333 */ 352 - static inline struct workqueue_struct * 353 - alloc_ordered_workqueue(const char *name, unsigned int flags) 354 - { 355 - return alloc_workqueue(name, WQ_UNBOUND | flags, 1); 356 - } 334 + #define alloc_ordered_workqueue(fmt, flags, args...) \ 335 + alloc_workqueue(fmt, WQ_UNBOUND | (flags), 1, ##args) 357 336 358 337 #define create_workqueue(name) \ 359 338 alloc_workqueue((name), WQ_MEM_RECLAIM, 1)
+1 -2
include/linux/writeback.h
··· 124 124 static inline void laptop_sync_completion(void) { } 125 125 #endif 126 126 void throttle_vm_writeout(gfp_t gfp_mask); 127 + bool zone_dirty_ok(struct zone *zone); 127 128 128 129 extern unsigned long global_dirty_limit; 129 130 ··· 138 137 extern int vm_highmem_is_dirtyable; 139 138 extern int block_dump; 140 139 extern int laptop_mode; 141 - 142 - extern unsigned long determine_dirtyable_memory(void); 143 140 144 141 extern int dirty_background_ratio_handler(struct ctl_table *table, int write, 145 142 void __user *buffer, size_t *lenp,
+2 -2
include/trace/events/kmem.h
··· 147 147 TP_ARGS(call_site, ptr) 148 148 ); 149 149 150 - TRACE_EVENT(mm_page_free_direct, 150 + TRACE_EVENT(mm_page_free, 151 151 152 152 TP_PROTO(struct page *page, unsigned int order), 153 153 ··· 169 169 __entry->order) 170 170 ); 171 171 172 - TRACE_EVENT(mm_pagevec_free, 172 + TRACE_EVENT(mm_page_free_batched, 173 173 174 174 TP_PROTO(struct page *page, int cold), 175 175
+33
include/trace/events/oom.h
··· 1 + #undef TRACE_SYSTEM 2 + #define TRACE_SYSTEM oom 3 + 4 + #if !defined(_TRACE_OOM_H) || defined(TRACE_HEADER_MULTI_READ) 5 + #define _TRACE_OOM_H 6 + #include <linux/tracepoint.h> 7 + 8 + TRACE_EVENT(oom_score_adj_update, 9 + 10 + TP_PROTO(struct task_struct *task), 11 + 12 + TP_ARGS(task), 13 + 14 + TP_STRUCT__entry( 15 + __field( pid_t, pid) 16 + __array( char, comm, TASK_COMM_LEN ) 17 + __field( int, oom_score_adj) 18 + ), 19 + 20 + TP_fast_assign( 21 + __entry->pid = task->pid; 22 + memcpy(__entry->comm, task->comm, TASK_COMM_LEN); 23 + __entry->oom_score_adj = task->signal->oom_score_adj; 24 + ), 25 + 26 + TP_printk("pid=%d comm=%s oom_score_adj=%d", 27 + __entry->pid, __entry->comm, __entry->oom_score_adj) 28 + ); 29 + 30 + #endif 31 + 32 + /* This part must be outside protection */ 33 + #include <trace/define_trace.h>
+61
include/trace/events/task.h
··· 1 + #undef TRACE_SYSTEM 2 + #define TRACE_SYSTEM task 3 + 4 + #if !defined(_TRACE_TASK_H) || defined(TRACE_HEADER_MULTI_READ) 5 + #define _TRACE_TASK_H 6 + #include <linux/tracepoint.h> 7 + 8 + TRACE_EVENT(task_newtask, 9 + 10 + TP_PROTO(struct task_struct *task, unsigned long clone_flags), 11 + 12 + TP_ARGS(task, clone_flags), 13 + 14 + TP_STRUCT__entry( 15 + __field( pid_t, pid) 16 + __array( char, comm, TASK_COMM_LEN) 17 + __field( unsigned long, clone_flags) 18 + __field( int, oom_score_adj) 19 + ), 20 + 21 + TP_fast_assign( 22 + __entry->pid = task->pid; 23 + memcpy(__entry->comm, task->comm, TASK_COMM_LEN); 24 + __entry->clone_flags = clone_flags; 25 + __entry->oom_score_adj = task->signal->oom_score_adj; 26 + ), 27 + 28 + TP_printk("pid=%d comm=%s clone_flags=%lx oom_score_adj=%d", 29 + __entry->pid, __entry->comm, 30 + __entry->clone_flags, __entry->oom_score_adj) 31 + ); 32 + 33 + TRACE_EVENT(task_rename, 34 + 35 + TP_PROTO(struct task_struct *task, char *comm), 36 + 37 + TP_ARGS(task, comm), 38 + 39 + TP_STRUCT__entry( 40 + __field( pid_t, pid) 41 + __array( char, oldcomm, TASK_COMM_LEN) 42 + __array( char, newcomm, TASK_COMM_LEN) 43 + __field( int, oom_score_adj) 44 + ), 45 + 46 + TP_fast_assign( 47 + __entry->pid = task->pid; 48 + memcpy(entry->oldcomm, task->comm, TASK_COMM_LEN); 49 + memcpy(entry->newcomm, comm, TASK_COMM_LEN); 50 + __entry->oom_score_adj = task->signal->oom_score_adj; 51 + ), 52 + 53 + TP_printk("pid=%d oldcomm=%s newcomm=%s oom_score_adj=%d", 54 + __entry->pid, __entry->oldcomm, 55 + __entry->newcomm, __entry->oom_score_adj) 56 + ); 57 + 58 + #endif 59 + 60 + /* This part must be outside protection */ 61 + #include <trace/define_trace.h>
+6 -1
ipc/mqueue.c
··· 32 32 #include <linux/nsproxy.h> 33 33 #include <linux/pid.h> 34 34 #include <linux/ipc_namespace.h> 35 + #include <linux/user_namespace.h> 35 36 #include <linux/slab.h> 36 37 37 38 #include <net/sock.h> ··· 543 542 sig_i.si_errno = 0; 544 543 sig_i.si_code = SI_MESGQ; 545 544 sig_i.si_value = info->notify.sigev_value; 545 + /* map current pid/uid into info->owner's namespaces */ 546 + rcu_read_lock(); 546 547 sig_i.si_pid = task_tgid_nr_ns(current, 547 548 ns_of_pid(info->notify_owner)); 548 - sig_i.si_uid = current_uid(); 549 + sig_i.si_uid = user_ns_map_uid(info->user->user_ns, 550 + current_cred(), current_uid()); 551 + rcu_read_unlock(); 549 552 550 553 kill_pid_info(info->notify.sigev_signo, 551 554 &sig_i, info->notify_owner);
+6
kernel/fork.c
··· 76 76 77 77 #include <trace/events/sched.h> 78 78 79 + #define CREATE_TRACE_POINTS 80 + #include <trace/events/task.h> 81 + 79 82 /* 80 83 * Protected counters by write_lock_irq(&tasklist_lock) 81 84 */ ··· 1373 1370 if (clone_flags & CLONE_THREAD) 1374 1371 threadgroup_change_end(current); 1375 1372 perf_event_fork(p); 1373 + 1374 + trace_task_newtask(p, clone_flags); 1375 + 1376 1376 return p; 1377 1377 1378 1378 bad_fork_free_pid:
+6
kernel/power/snapshot.c
··· 858 858 PageReserved(page)) 859 859 return NULL; 860 860 861 + if (page_is_guard(page)) 862 + return NULL; 863 + 861 864 return page; 862 865 } 863 866 ··· 921 918 922 919 if (PageReserved(page) 923 920 && (!kernel_page_present(page) || pfn_is_nosave(pfn))) 921 + return NULL; 922 + 923 + if (page_is_guard(page)) 924 924 return NULL; 925 925 926 926 return page;
+61 -3
kernel/signal.c
··· 28 28 #include <linux/freezer.h> 29 29 #include <linux/pid_namespace.h> 30 30 #include <linux/nsproxy.h> 31 + #include <linux/user_namespace.h> 31 32 #define CREATE_TRACE_POINTS 32 33 #include <trace/events/signal.h> 33 34 ··· 1020 1019 return (sig < SIGRTMIN) && sigismember(&signals->signal, sig); 1021 1020 } 1022 1021 1022 + /* 1023 + * map the uid in struct cred into user namespace *ns 1024 + */ 1025 + static inline uid_t map_cred_ns(const struct cred *cred, 1026 + struct user_namespace *ns) 1027 + { 1028 + return user_ns_map_uid(ns, cred, cred->uid); 1029 + } 1030 + 1031 + #ifdef CONFIG_USER_NS 1032 + static inline void userns_fixup_signal_uid(struct siginfo *info, struct task_struct *t) 1033 + { 1034 + if (current_user_ns() == task_cred_xxx(t, user_ns)) 1035 + return; 1036 + 1037 + if (SI_FROMKERNEL(info)) 1038 + return; 1039 + 1040 + info->si_uid = user_ns_map_uid(task_cred_xxx(t, user_ns), 1041 + current_cred(), info->si_uid); 1042 + } 1043 + #else 1044 + static inline void userns_fixup_signal_uid(struct siginfo *info, struct task_struct *t) 1045 + { 1046 + return; 1047 + } 1048 + #endif 1049 + 1023 1050 static int __send_signal(int sig, struct siginfo *info, struct task_struct *t, 1024 1051 int group, int from_ancestor_ns) 1025 1052 { ··· 1117 1088 q->info.si_pid = 0; 1118 1089 break; 1119 1090 } 1091 + 1092 + userns_fixup_signal_uid(&q->info, t); 1093 + 1120 1094 } else if (!is_si_special(info)) { 1121 1095 if (sig >= SIGRTMIN && info->si_code != SI_USER) { 1122 1096 /* ··· 1658 1626 */ 1659 1627 rcu_read_lock(); 1660 1628 info.si_pid = task_pid_nr_ns(tsk, tsk->parent->nsproxy->pid_ns); 1661 - info.si_uid = __task_cred(tsk)->uid; 1629 + info.si_uid = map_cred_ns(__task_cred(tsk), 1630 + task_cred_xxx(tsk->parent, user_ns)); 1662 1631 rcu_read_unlock(); 1663 1632 1664 1633 info.si_utime = cputime_to_clock_t(tsk->utime + tsk->signal->utime); ··· 1742 1709 */ 1743 1710 rcu_read_lock(); 1744 1711 info.si_pid = task_pid_nr_ns(tsk, parent->nsproxy->pid_ns); 
1745 - info.si_uid = __task_cred(tsk)->uid; 1712 + info.si_uid = map_cred_ns(__task_cred(tsk), 1713 + task_cred_xxx(parent, user_ns)); 1746 1714 rcu_read_unlock(); 1747 1715 1748 1716 info.si_utime = cputime_to_clock_t(tsk->utime); ··· 2159 2125 info->si_signo = signr; 2160 2126 info->si_errno = 0; 2161 2127 info->si_code = SI_USER; 2128 + rcu_read_lock(); 2162 2129 info->si_pid = task_pid_vnr(current->parent); 2163 - info->si_uid = task_uid(current->parent); 2130 + info->si_uid = map_cred_ns(__task_cred(current->parent), 2131 + current_user_ns()); 2132 + rcu_read_unlock(); 2164 2133 } 2165 2134 2166 2135 /* If the (new) signal is now blocked, requeue it. */ ··· 2353 2316 } 2354 2317 spin_unlock_irq(&sighand->siglock); 2355 2318 return signr; 2319 + } 2320 + 2321 + /** 2322 + * block_sigmask - add @ka's signal mask to current->blocked 2323 + * @ka: action for @signr 2324 + * @signr: signal that has been successfully delivered 2325 + * 2326 + * This function should be called when a signal has successfully been 2327 + * delivered. It adds the mask of signals for @ka to current->blocked 2328 + * so that they are blocked during the execution of the signal 2329 + * handler. In addition, @signr will be blocked unless %SA_NODEFER is 2330 + * set in @ka->sa.sa_flags. 2331 + */ 2332 + void block_sigmask(struct k_sigaction *ka, int signr) 2333 + { 2334 + sigset_t blocked; 2335 + 2336 + sigorsets(&blocked, &current->blocked, &ka->sa.sa_mask); 2337 + if (!(ka->sa.sa_flags & SA_NODEFER)) 2338 + sigaddset(&blocked, signr); 2339 + set_current_blocked(&blocked); 2356 2340 } 2357 2341 2358 2342 /*
+22 -10
kernel/workqueue.c
··· 242 242 243 243 int nr_drainers; /* W: drain in progress */ 244 244 int saved_max_active; /* W: saved cwq max_active */ 245 - const char *name; /* I: workqueue name */ 246 245 #ifdef CONFIG_LOCKDEP 247 246 struct lockdep_map lockdep_map; 248 247 #endif 248 + char name[]; /* I: workqueue name */ 249 249 }; 250 250 251 251 struct workqueue_struct *system_wq __read_mostly; ··· 2954 2954 return clamp_val(max_active, 1, lim); 2955 2955 } 2956 2956 2957 - struct workqueue_struct *__alloc_workqueue_key(const char *name, 2957 + struct workqueue_struct *__alloc_workqueue_key(const char *fmt, 2958 2958 unsigned int flags, 2959 2959 int max_active, 2960 2960 struct lock_class_key *key, 2961 - const char *lock_name) 2961 + const char *lock_name, ...) 2962 2962 { 2963 + va_list args, args1; 2963 2964 struct workqueue_struct *wq; 2964 2965 unsigned int cpu; 2966 + size_t namelen; 2967 + 2968 + /* determine namelen, allocate wq and format name */ 2969 + va_start(args, lock_name); 2970 + va_copy(args1, args); 2971 + namelen = vsnprintf(NULL, 0, fmt, args) + 1; 2972 + 2973 + wq = kzalloc(sizeof(*wq) + namelen, GFP_KERNEL); 2974 + if (!wq) 2975 + goto err; 2976 + 2977 + vsnprintf(wq->name, namelen, fmt, args1); 2978 + va_end(args); 2979 + va_end(args1); 2965 2980 2966 2981 /* 2967 2982 * Workqueues which may be used during memory reclaim should ··· 2993 2978 flags |= WQ_HIGHPRI; 2994 2979 2995 2980 max_active = max_active ?: WQ_DFL_ACTIVE; 2996 - max_active = wq_clamp_max_active(max_active, flags, name); 2981 + max_active = wq_clamp_max_active(max_active, flags, wq->name); 2997 2982 2998 - wq = kzalloc(sizeof(*wq), GFP_KERNEL); 2999 - if (!wq) 3000 - goto err; 3001 - 2983 + /* init wq */ 3002 2984 wq->flags = flags; 3003 2985 wq->saved_max_active = max_active; 3004 2986 mutex_init(&wq->flush_mutex); ··· 3003 2991 INIT_LIST_HEAD(&wq->flusher_queue); 3004 2992 INIT_LIST_HEAD(&wq->flusher_overflow); 3005 2993 3006 - wq->name = name; 3007 2994 lockdep_init_map(&wq->lockdep_map, 
lock_name, key, 0); 3008 2995 INIT_LIST_HEAD(&wq->list); 3009 2996 ··· 3031 3020 if (!rescuer) 3032 3021 goto err; 3033 3022 3034 - rescuer->task = kthread_create(rescuer_thread, wq, "%s", name); 3023 + rescuer->task = kthread_create(rescuer_thread, wq, "%s", 3024 + wq->name); 3035 3025 if (IS_ERR(rescuer->task)) 3036 3026 goto err; 3037 3027
+1
lib/btree.c
··· 357 357 } 358 358 return NULL; 359 359 } 360 + EXPORT_SYMBOL_GPL(btree_get_prev); 360 361 361 362 static int getpos(struct btree_geo *geo, unsigned long *node, 362 363 unsigned long *key)
+11 -10
lib/crc32.c
··· 51 51 crc32_body(u32 crc, unsigned char const *buf, size_t len, const u32 (*tab)[256]) 52 52 { 53 53 # ifdef __LITTLE_ENDIAN 54 - # define DO_CRC(x) crc = tab[0][(crc ^ (x)) & 255] ^ (crc >> 8) 55 - # define DO_CRC4 crc = tab[3][(crc) & 255] ^ \ 56 - tab[2][(crc >> 8) & 255] ^ \ 57 - tab[1][(crc >> 16) & 255] ^ \ 58 - tab[0][(crc >> 24) & 255] 54 + # define DO_CRC(x) crc = t0[(crc ^ (x)) & 255] ^ (crc >> 8) 55 + # define DO_CRC4 crc = t3[(crc) & 255] ^ \ 56 + t2[(crc >> 8) & 255] ^ \ 57 + t1[(crc >> 16) & 255] ^ \ 58 + t0[(crc >> 24) & 255] 59 59 # else 60 - # define DO_CRC(x) crc = tab[0][((crc >> 24) ^ (x)) & 255] ^ (crc << 8) 61 - # define DO_CRC4 crc = tab[0][(crc) & 255] ^ \ 62 - tab[1][(crc >> 8) & 255] ^ \ 63 - tab[2][(crc >> 16) & 255] ^ \ 64 - tab[3][(crc >> 24) & 255] 60 + # define DO_CRC(x) crc = t0[((crc >> 24) ^ (x)) & 255] ^ (crc << 8) 61 + # define DO_CRC4 crc = t0[(crc) & 255] ^ \ 62 + t1[(crc >> 8) & 255] ^ \ 63 + t2[(crc >> 16) & 255] ^ \ 64 + t3[(crc >> 24) & 255] 65 65 # endif 66 66 const u32 *b; 67 67 size_t rem_len; 68 + const u32 *t0=tab[0], *t1=tab[1], *t2=tab[2], *t3=tab[3]; 68 69 69 70 /* Align it */ 70 71 if (unlikely((long)buf & 3 && len)) {
+5
mm/Kconfig.debug
··· 4 4 depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC 5 5 depends on !KMEMCHECK 6 6 select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC 7 + select PAGE_GUARD if ARCH_SUPPORTS_DEBUG_PAGEALLOC 7 8 ---help--- 8 9 Unmap pages from the kernel linear mapping after free_pages(). 9 10 This results in a large slowdown, but helps to find certain types ··· 21 20 bool 22 21 23 22 config PAGE_POISONING 23 + bool 24 + select WANT_PAGE_DEBUG_FLAGS 25 + 26 + config PAGE_GUARD 24 27 bool 25 28 select WANT_PAGE_DEBUG_FLAGS
+11 -13
mm/bootmem.c
··· 56 56 57 57 static unsigned long __init bootmap_bytes(unsigned long pages) 58 58 { 59 - unsigned long bytes = (pages + 7) / 8; 59 + unsigned long bytes = DIV_ROUND_UP(pages, 8); 60 60 61 61 return ALIGN(bytes, sizeof(long)); 62 62 } ··· 171 171 172 172 static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata) 173 173 { 174 - int aligned; 175 174 struct page *page; 176 175 unsigned long start, end, pages, count = 0; 177 176 ··· 180 181 start = bdata->node_min_pfn; 181 182 end = bdata->node_low_pfn; 182 183 183 - /* 184 - * If the start is aligned to the machines wordsize, we might 185 - * be able to free pages in bulks of that order. 186 - */ 187 - aligned = !(start & (BITS_PER_LONG - 1)); 188 - 189 - bdebug("nid=%td start=%lx end=%lx aligned=%d\n", 190 - bdata - bootmem_node_data, start, end, aligned); 184 + bdebug("nid=%td start=%lx end=%lx\n", 185 + bdata - bootmem_node_data, start, end); 191 186 192 187 while (start < end) { 193 188 unsigned long *map, idx, vec; ··· 189 196 map = bdata->node_bootmem_map; 190 197 idx = start - bdata->node_min_pfn; 191 198 vec = ~map[idx / BITS_PER_LONG]; 192 - 193 - if (aligned && vec == ~0UL && start + BITS_PER_LONG < end) { 199 + /* 200 + * If we have a properly aligned and fully unreserved 201 + * BITS_PER_LONG block of pages in front of us, free 202 + * it in one go. 203 + */ 204 + if (IS_ALIGNED(start, BITS_PER_LONG) && vec == ~0UL) { 194 205 int order = ilog2(BITS_PER_LONG); 195 206 196 207 __free_pages_bootmem(pfn_to_page(start), order); 197 208 count += BITS_PER_LONG; 209 + start += BITS_PER_LONG; 198 210 } else { 199 211 unsigned long off = 0; 200 212 ··· 212 214 vec >>= 1; 213 215 off++; 214 216 } 217 + start = ALIGN(start + 1, BITS_PER_LONG); 215 218 } 216 - start += BITS_PER_LONG; 217 219 } 218 220 219 221 page = virt_to_page(bdata->node_bootmem_map);
+3 -1
mm/compaction.c
··· 365 365 nr_isolated++; 366 366 367 367 /* Avoid isolating too much */ 368 - if (cc->nr_migratepages == COMPACT_CLUSTER_MAX) 368 + if (cc->nr_migratepages == COMPACT_CLUSTER_MAX) { 369 + ++low_pfn; 369 370 break; 371 + } 370 372 } 371 373 372 374 acct_isolated(zone, cc);
+2 -1
mm/fadvise.c
··· 117 117 break; 118 118 case POSIX_FADV_DONTNEED: 119 119 if (!bdi_write_congested(mapping->backing_dev_info)) 120 - filemap_flush(mapping); 120 + __filemap_fdatawrite_range(mapping, offset, endbyte, 121 + WB_SYNC_NONE); 121 122 122 123 /* First and last FULL page! */ 123 124 start_index = (offset+(PAGE_CACHE_SIZE-1)) >> PAGE_CACHE_SHIFT;
+4 -1
mm/filemap.c
··· 2351 2351 pgoff_t index, unsigned flags) 2352 2352 { 2353 2353 int status; 2354 + gfp_t gfp_mask; 2354 2355 struct page *page; 2355 2356 gfp_t gfp_notmask = 0; 2357 + 2358 + gfp_mask = mapping_gfp_mask(mapping) | __GFP_WRITE; 2356 2359 if (flags & AOP_FLAG_NOFS) 2357 2360 gfp_notmask = __GFP_FS; 2358 2361 repeat: ··· 2363 2360 if (page) 2364 2361 goto found; 2365 2362 2366 - page = __page_cache_alloc(mapping_gfp_mask(mapping) & ~gfp_notmask); 2363 + page = __page_cache_alloc(gfp_mask & ~gfp_notmask); 2367 2364 if (!page) 2368 2365 return NULL; 2369 2366 status = add_to_page_cache_lru(page, mapping, index,
+15 -4
mm/hugetlb.c
··· 800 800 801 801 if (page && arch_prepare_hugepage(page)) { 802 802 __free_pages(page, huge_page_order(h)); 803 - return NULL; 803 + page = NULL; 804 804 } 805 805 806 806 spin_lock(&hugetlb_lock); ··· 2315 2315 * from page cache lookup which is in HPAGE_SIZE units. 2316 2316 */ 2317 2317 address = address & huge_page_mask(h); 2318 - pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) 2319 - + (vma->vm_pgoff >> PAGE_SHIFT); 2318 + pgoff = vma_hugecache_offset(h, vma, address); 2320 2319 mapping = (struct address_space *)page_private(page); 2321 2320 2322 2321 /* ··· 2348 2349 2349 2350 /* 2350 2351 * Hugetlb_cow() should be called with page lock of the original hugepage held. 2352 + * Called with hugetlb_instantiation_mutex held and pte_page locked so we 2353 + * cannot race with other handlers or page migration. 2354 + * Keep the pte_same checks anyway to make transition from the mutex easier. 2351 2355 */ 2352 2356 static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma, 2353 2357 unsigned long address, pte_t *ptep, pte_t pte, ··· 2410 2408 BUG_ON(page_count(old_page) != 1); 2411 2409 BUG_ON(huge_pte_none(pte)); 2412 2410 spin_lock(&mm->page_table_lock); 2413 - goto retry_avoidcopy; 2411 + ptep = huge_pte_offset(mm, address & huge_page_mask(h)); 2412 + if (likely(pte_same(huge_ptep_get(ptep), pte))) 2413 + goto retry_avoidcopy; 2414 + /* 2415 + * race occurs while re-acquiring page_table_lock, and 2416 + * our job is done. 2417 + */ 2418 + return 0; 2414 2419 } 2415 2420 WARN_ON_ONCE(1); 2416 2421 } ··· 2638 2629 struct page *pagecache_page = NULL; 2639 2630 static DEFINE_MUTEX(hugetlb_instantiation_mutex); 2640 2631 struct hstate *h = hstate_vma(vma); 2632 + 2633 + address &= huge_page_mask(h); 2641 2634 2642 2635 ptep = huge_pte_offset(mm, address); 2643 2636 if (ptep) {
+7 -7
mm/mempolicy.c
··· 1983 1983 } 1984 1984 1985 1985 /* Slow path of a mempolicy comparison */ 1986 - int __mpol_equal(struct mempolicy *a, struct mempolicy *b) 1986 + bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) 1987 1987 { 1988 1988 if (!a || !b) 1989 - return 0; 1989 + return false; 1990 1990 if (a->mode != b->mode) 1991 - return 0; 1991 + return false; 1992 1992 if (a->flags != b->flags) 1993 - return 0; 1993 + return false; 1994 1994 if (mpol_store_user_nodemask(a)) 1995 1995 if (!nodes_equal(a->w.user_nodemask, b->w.user_nodemask)) 1996 - return 0; 1996 + return false; 1997 1997 1998 1998 switch (a->mode) { 1999 1999 case MPOL_BIND: 2000 2000 /* Fall through */ 2001 2001 case MPOL_INTERLEAVE: 2002 - return nodes_equal(a->v.nodes, b->v.nodes); 2002 + return !!nodes_equal(a->v.nodes, b->v.nodes); 2003 2003 case MPOL_PREFERRED: 2004 2004 return a->v.preferred_node == b->v.preferred_node; 2005 2005 default: 2006 2006 BUG(); 2007 - return 0; 2007 + return false; 2008 2008 } 2009 2009 } 2010 2010
+70 -34
mm/mempool.c
··· 27 27 return pool->elements[--pool->curr_nr]; 28 28 } 29 29 30 - static void free_pool(mempool_t *pool) 30 + /** 31 + * mempool_destroy - deallocate a memory pool 32 + * @pool: pointer to the memory pool which was allocated via 33 + * mempool_create(). 34 + * 35 + * Free all reserved elements in @pool and @pool itself. This function 36 + * only sleeps if the free_fn() function sleeps. 37 + */ 38 + void mempool_destroy(mempool_t *pool) 31 39 { 32 40 while (pool->curr_nr) { 33 41 void *element = remove_element(pool); ··· 44 36 kfree(pool->elements); 45 37 kfree(pool); 46 38 } 39 + EXPORT_SYMBOL(mempool_destroy); 47 40 48 41 /** 49 42 * mempool_create - create a memory pool ··· 95 86 96 87 element = pool->alloc(GFP_KERNEL, pool->pool_data); 97 88 if (unlikely(!element)) { 98 - free_pool(pool); 89 + mempool_destroy(pool); 99 90 return NULL; 100 91 } 101 92 add_element(pool, element); ··· 181 172 EXPORT_SYMBOL(mempool_resize); 182 173 183 174 /** 184 - * mempool_destroy - deallocate a memory pool 185 - * @pool: pointer to the memory pool which was allocated via 186 - * mempool_create(). 187 - * 188 - * this function only sleeps if the free_fn() function sleeps. The caller 189 - * has to guarantee that all elements have been returned to the pool (ie: 190 - * freed) prior to calling mempool_destroy(). 191 - */ 192 - void mempool_destroy(mempool_t *pool) 193 - { 194 - /* Check for outstanding elements */ 195 - BUG_ON(pool->curr_nr != pool->min_nr); 196 - free_pool(pool); 197 - } 198 - EXPORT_SYMBOL(mempool_destroy); 199 - 200 - /** 201 175 * mempool_alloc - allocate an element from a specific memory pool 202 176 * @pool: pointer to the memory pool which was allocated via 203 177 * mempool_create(). 
··· 216 224 if (likely(pool->curr_nr)) { 217 225 element = remove_element(pool); 218 226 spin_unlock_irqrestore(&pool->lock, flags); 227 + /* paired with rmb in mempool_free(), read comment there */ 228 + smp_wmb(); 219 229 return element; 220 230 } 221 - spin_unlock_irqrestore(&pool->lock, flags); 222 231 223 - /* We must not sleep in the GFP_ATOMIC case */ 224 - if (!(gfp_mask & __GFP_WAIT)) 232 + /* 233 + * We use gfp mask w/o __GFP_WAIT or IO for the first round. If 234 + * alloc failed with that and @pool was empty, retry immediately. 235 + */ 236 + if (gfp_temp != gfp_mask) { 237 + spin_unlock_irqrestore(&pool->lock, flags); 238 + gfp_temp = gfp_mask; 239 + goto repeat_alloc; 240 + } 241 + 242 + /* We must not sleep if !__GFP_WAIT */ 243 + if (!(gfp_mask & __GFP_WAIT)) { 244 + spin_unlock_irqrestore(&pool->lock, flags); 225 245 return NULL; 246 + } 226 247 227 - /* Now start performing page reclaim */ 228 - gfp_temp = gfp_mask; 248 + /* Let's wait for someone else to return an element to @pool */ 229 249 init_wait(&wait); 230 250 prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE); 231 - smp_mb(); 232 - if (!pool->curr_nr) { 233 - /* 234 - * FIXME: this should be io_schedule(). The timeout is there 235 - * as a workaround for some DM problems in 2.6.18. 236 - */ 237 - io_schedule_timeout(5*HZ); 238 - } 239 - finish_wait(&pool->wait, &wait); 240 251 252 + spin_unlock_irqrestore(&pool->lock, flags); 253 + 254 + /* 255 + * FIXME: this should be io_schedule(). The timeout is there as a 256 + * workaround for some DM problems in 2.6.18. 257 + */ 258 + io_schedule_timeout(5*HZ); 259 + 260 + finish_wait(&pool->wait, &wait); 241 261 goto repeat_alloc; 242 262 } 243 263 EXPORT_SYMBOL(mempool_alloc); ··· 269 265 if (unlikely(element == NULL)) 270 266 return; 271 267 272 - smp_mb(); 268 + /* 269 + * Paired with the wmb in mempool_alloc(). The preceding read is 270 + * for @element and the following @pool->curr_nr. 
This ensures 271 + * that the visible value of @pool->curr_nr is from after the 272 + * allocation of @element. This is necessary for fringe cases 273 + * where @element was passed to this task without going through 274 + * barriers. 275 + * 276 + * For example, assume @p is %NULL at the beginning and one task 277 + * performs "p = mempool_alloc(...);" while another task is doing 278 + * "while (!p) cpu_relax(); mempool_free(p, ...);". This function 279 + * may end up using curr_nr value which is from before allocation 280 + * of @p without the following rmb. 281 + */ 282 + smp_rmb(); 283 + 284 + /* 285 + * For correctness, we need a test which is guaranteed to trigger 286 + * if curr_nr + #allocated == min_nr. Testing curr_nr < min_nr 287 + * without locking achieves that and refilling as soon as possible 288 + * is desirable. 289 + * 290 + * Because curr_nr visible here is always a value after the 291 + * allocation of @element, any task which decremented curr_nr below 292 + * min_nr is guaranteed to see curr_nr < min_nr unless curr_nr gets 293 + * incremented to min_nr afterwards. If curr_nr gets incremented 294 + * to min_nr after the allocation of @element, the elements 295 + * allocated after that are subject to the same guarantee. 296 + * 297 + * Waiters happen iff curr_nr is 0 and the above guarantee also 298 + * ensures that there will be frees which return elements to the 299 + * pool waking up the waiters. 300 + */ 273 301 if (pool->curr_nr < pool->min_nr) { 274 302 spin_lock_irqsave(&pool->lock, flags); 275 303 if (pool->curr_nr < pool->min_nr) {
+4 -10
mm/migrate.c
··· 39 39 40 40 #include "internal.h" 41 41 42 - #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru)) 43 - 44 42 /* 45 43 * migrate_prep() needs to be called before we start compiling a list of pages 46 44 * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is ··· 179 181 * Something used the pte of a page under migration. We need to 180 182 * get to the page and wait until migration is finished. 181 183 * When we return from this function the fault will be retried. 182 - * 183 - * This function is called from do_swap_page(). 184 184 */ 185 185 void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 186 186 unsigned long address) ··· 265 269 266 270 radix_tree_replace_slot(pslot, newpage); 267 271 268 - page_unfreeze_refs(page, expected_count); 269 272 /* 270 - * Drop cache reference from old page. 273 + * Drop cache reference from old page by unfreezing 274 + * to one less reference. 271 275 * We know this isn't the last reference. 272 276 */ 273 - __put_page(page); 277 + page_unfreeze_refs(page, expected_count - 1); 274 278 275 279 /* 276 280 * If moved to a different zone then also account ··· 330 334 331 335 radix_tree_replace_slot(pslot, newpage); 332 336 333 - page_unfreeze_refs(page, expected_count); 334 - 335 - __put_page(page); 337 + page_unfreeze_refs(page, expected_count - 1); 336 338 337 339 spin_unlock_irq(&mapping->tree_lock); 338 340 return 0;
+29 -31
mm/mmap.c
··· 1603 1603 1604 1604 EXPORT_SYMBOL(find_vma); 1605 1605 1606 - /* Same as find_vma, but also return a pointer to the previous VMA in *pprev. */ 1606 + /* 1607 + * Same as find_vma, but also return a pointer to the previous VMA in *pprev. 1608 + * Note: pprev is set to NULL when return value is NULL. 1609 + */ 1607 1610 struct vm_area_struct * 1608 1611 find_vma_prev(struct mm_struct *mm, unsigned long addr, 1609 1612 struct vm_area_struct **pprev) 1610 1613 { 1611 - struct vm_area_struct *vma = NULL, *prev = NULL; 1612 - struct rb_node *rb_node; 1613 - if (!mm) 1614 - goto out; 1614 + struct vm_area_struct *vma; 1615 1615 1616 - /* Guard against addr being lower than the first VMA */ 1617 - vma = mm->mmap; 1618 - 1619 - /* Go through the RB tree quickly. */ 1620 - rb_node = mm->mm_rb.rb_node; 1621 - 1622 - while (rb_node) { 1623 - struct vm_area_struct *vma_tmp; 1624 - vma_tmp = rb_entry(rb_node, struct vm_area_struct, vm_rb); 1625 - 1626 - if (addr < vma_tmp->vm_end) { 1627 - rb_node = rb_node->rb_left; 1628 - } else { 1629 - prev = vma_tmp; 1630 - if (!prev->vm_next || (addr < prev->vm_next->vm_end)) 1631 - break; 1632 - rb_node = rb_node->rb_right; 1633 - } 1634 - } 1635 - 1636 - out: 1637 - *pprev = prev; 1638 - return prev ? prev->vm_next : vma; 1616 + vma = find_vma(mm, addr); 1617 + *pprev = vma ? vma->vm_prev : NULL; 1618 + return vma; 1639 1619 } 1640 1620 1641 1621 /* ··· 2302 2322 struct vm_area_struct *new_vma, *prev; 2303 2323 struct rb_node **rb_link, *rb_parent; 2304 2324 struct mempolicy *pol; 2325 + bool faulted_in_anon_vma = true; 2305 2326 2306 2327 /* 2307 2328 * If anonymous vma has not yet been faulted, update new pgoff 2308 2329 * to match new location, to increase its chance of merging. 
2309 2330 */ 2310 - if (!vma->vm_file && !vma->anon_vma) 2331 + if (unlikely(!vma->vm_file && !vma->anon_vma)) { 2311 2332 pgoff = addr >> PAGE_SHIFT; 2333 + faulted_in_anon_vma = false; 2334 + } 2312 2335 2313 2336 find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent); 2314 2337 new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags, ··· 2320 2337 /* 2321 2338 * Source vma may have been merged into new_vma 2322 2339 */ 2323 - if (vma_start >= new_vma->vm_start && 2324 - vma_start < new_vma->vm_end) 2340 + if (unlikely(vma_start >= new_vma->vm_start && 2341 + vma_start < new_vma->vm_end)) { 2342 + /* 2343 + * The only way we can get a vma_merge with 2344 + * self during an mremap is if the vma hasn't 2345 + * been faulted in yet and we were allowed to 2346 + * reset the dst vma->vm_pgoff to the 2347 + * destination address of the mremap to allow 2348 + * the merge to happen. mremap must change the 2349 + * vm_pgoff linearity between src and dst vmas 2350 + * (in turn preventing a vma_merge) to be 2351 + * safe. It is only safe to keep the vm_pgoff 2352 + * linear if there are no pages mapped yet. 2353 + */ 2354 + VM_BUG_ON(faulted_in_anon_vma); 2325 2355 *vmap = new_vma; 2356 + } else 2357 + anon_vma_moveto_tail(new_vma); 2326 2358 } else { 2327 2359 new_vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL); 2328 2360 if (new_vma) {
+9
mm/mremap.c
··· 221 221 moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len); 222 222 if (moved_len < old_len) { 223 223 /* 224 + * Before moving the page tables from the new vma to 225 + * the old vma, we need to be sure the old vma is 226 + * queued after new vma in the same_anon_vma list to 227 + * prevent SMP races with rmap_walk (that could lead 228 + * rmap_walk to miss some page table). 229 + */ 230 + anon_vma_moveto_tail(vma); 231 + 232 + /* 224 233 * On error, move entries back from new area to old, 225 234 * which will succeed since page tables still there, 226 235 * and then proceed to unmap new area instead of old.
+6
mm/oom_kill.c
··· 33 33 #include <linux/security.h> 34 34 #include <linux/ptrace.h> 35 35 #include <linux/freezer.h> 36 + #include <linux/ftrace.h> 37 + 38 + #define CREATE_TRACE_POINTS 39 + #include <trace/events/oom.h> 36 40 37 41 int sysctl_panic_on_oom; 38 42 int sysctl_oom_kill_allocating_task; ··· 59 55 spin_lock_irq(&sighand->siglock); 60 56 if (current->signal->oom_score_adj == old_val) 61 57 current->signal->oom_score_adj = new_val; 58 + trace_oom_score_adj_update(current); 62 59 spin_unlock_irq(&sighand->siglock); 63 60 } 64 61 ··· 79 74 spin_lock_irq(&sighand->siglock); 80 75 old_val = current->signal->oom_score_adj; 81 76 current->signal->oom_score_adj = new_val; 77 + trace_oom_score_adj_update(current); 82 78 spin_unlock_irq(&sighand->siglock); 83 79 84 80 return old_val;
+186 -104
mm/page-writeback.c
··· 130 130 static struct prop_descriptor vm_completions; 131 131 132 132 /* 133 + * Work out the current dirty-memory clamping and background writeout 134 + * thresholds. 135 + * 136 + * The main aim here is to lower them aggressively if there is a lot of mapped 137 + * memory around. To avoid stressing page reclaim with lots of unreclaimable 138 + * pages. It is better to clamp down on writers than to start swapping, and 139 + * performing lots of scanning. 140 + * 141 + * We only allow 1/2 of the currently-unmapped memory to be dirtied. 142 + * 143 + * We don't permit the clamping level to fall below 5% - that is getting rather 144 + * excessive. 145 + * 146 + * We make sure that the background writeout level is below the adjusted 147 + * clamping level. 148 + */ 149 + 150 + /* 151 + * In a memory zone, there is a certain amount of pages we consider 152 + * available for the page cache, which is essentially the number of 153 + * free and reclaimable pages, minus some zone reserves to protect 154 + * lowmem and the ability to uphold the zone's watermarks without 155 + * requiring writeback. 156 + * 157 + * This number of dirtyable pages is the base value of which the 158 + * user-configurable dirty ratio is the effective number of pages that 159 + * are allowed to be actually dirtied. Per individual zone, or 160 + * globally by using the sum of dirtyable pages over all zones. 161 + * 162 + * Because the user is allowed to specify the dirty limit globally as 163 + * absolute number of bytes, calculating the per-zone dirty limit can 164 + * require translating the configured limit into a percentage of 165 + * global dirtyable memory first. 
166 + */ 167 + 168 + static unsigned long highmem_dirtyable_memory(unsigned long total) 169 + { 170 + #ifdef CONFIG_HIGHMEM 171 + int node; 172 + unsigned long x = 0; 173 + 174 + for_each_node_state(node, N_HIGH_MEMORY) { 175 + struct zone *z = 176 + &NODE_DATA(node)->node_zones[ZONE_HIGHMEM]; 177 + 178 + x += zone_page_state(z, NR_FREE_PAGES) + 179 + zone_reclaimable_pages(z) - z->dirty_balance_reserve; 180 + } 181 + /* 182 + * Make sure that the number of highmem pages is never larger 183 + * than the number of the total dirtyable memory. This can only 184 + * occur in very strange VM situations but we want to make sure 185 + * that this does not occur. 186 + */ 187 + return min(x, total); 188 + #else 189 + return 0; 190 + #endif 191 + } 192 + 193 + /** 194 + * global_dirtyable_memory - number of globally dirtyable pages 195 + * 196 + * Returns the global number of pages potentially available for dirty 197 + * page cache. This is the base value for the global dirty limits. 198 + */ 199 + unsigned long global_dirtyable_memory(void) 200 + { 201 + unsigned long x; 202 + 203 + x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() - 204 + dirty_balance_reserve; 205 + 206 + if (!vm_highmem_is_dirtyable) 207 + x -= highmem_dirtyable_memory(x); 208 + 209 + return x + 1; /* Ensure that we never return 0 */ 210 + } 211 + 212 + /* 213 + * global_dirty_limits - background-writeback and dirty-throttling thresholds 214 + * 215 + * Calculate the dirty thresholds based on sysctl parameters 216 + * - vm.dirty_background_ratio or vm.dirty_background_bytes 217 + * - vm.dirty_ratio or vm.dirty_bytes 218 + * The dirty limits will be lifted by 1/4 for PF_LESS_THROTTLE (ie. nfsd) and 219 + * real-time tasks. 
220 + */ 221 + void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty) 222 + { 223 + unsigned long background; 224 + unsigned long dirty; 225 + unsigned long uninitialized_var(available_memory); 226 + struct task_struct *tsk; 227 + 228 + if (!vm_dirty_bytes || !dirty_background_bytes) 229 + available_memory = global_dirtyable_memory(); 230 + 231 + if (vm_dirty_bytes) 232 + dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE); 233 + else 234 + dirty = (vm_dirty_ratio * available_memory) / 100; 235 + 236 + if (dirty_background_bytes) 237 + background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE); 238 + else 239 + background = (dirty_background_ratio * available_memory) / 100; 240 + 241 + if (background >= dirty) 242 + background = dirty / 2; 243 + tsk = current; 244 + if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) { 245 + background += background / 4; 246 + dirty += dirty / 4; 247 + } 248 + *pbackground = background; 249 + *pdirty = dirty; 250 + trace_global_dirty_state(background, dirty); 251 + } 252 + 253 + /** 254 + * zone_dirtyable_memory - number of dirtyable pages in a zone 255 + * @zone: the zone 256 + * 257 + * Returns the zone's number of pages potentially available for dirty 258 + * page cache. This is the base value for the per-zone dirty limits. 259 + */ 260 + static unsigned long zone_dirtyable_memory(struct zone *zone) 261 + { 262 + /* 263 + * The effective global number of dirtyable pages may exclude 264 + * highmem as a big-picture measure to keep the ratio between 265 + * dirty memory and lowmem reasonable. 266 + * 267 + * But this function is purely about the individual zone and a 268 + * highmem zone can hold its share of dirty pages, so we don't 269 + * care about vm_highmem_is_dirtyable here. 
270 + */ 271 + return zone_page_state(zone, NR_FREE_PAGES) + 272 + zone_reclaimable_pages(zone) - 273 + zone->dirty_balance_reserve; 274 + } 275 + 276 + /** 277 + * zone_dirty_limit - maximum number of dirty pages allowed in a zone 278 + * @zone: the zone 279 + * 280 + * Returns the maximum number of dirty pages allowed in a zone, based 281 + * on the zone's dirtyable memory. 282 + */ 283 + static unsigned long zone_dirty_limit(struct zone *zone) 284 + { 285 + unsigned long zone_memory = zone_dirtyable_memory(zone); 286 + struct task_struct *tsk = current; 287 + unsigned long dirty; 288 + 289 + if (vm_dirty_bytes) 290 + dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE) * 291 + zone_memory / global_dirtyable_memory(); 292 + else 293 + dirty = vm_dirty_ratio * zone_memory / 100; 294 + 295 + if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) 296 + dirty += dirty / 4; 297 + 298 + return dirty; 299 + } 300 + 301 + /** 302 + * zone_dirty_ok - tells whether a zone is within its dirty limits 303 + * @zone: the zone to check 304 + * 305 + * Returns %true when the dirty pages in @zone are within the zone's 306 + * dirty limit, %false if the limit is exceeded. 
307 + */ 308 + bool zone_dirty_ok(struct zone *zone) 309 + { 310 + unsigned long limit = zone_dirty_limit(zone); 311 + 312 + return zone_page_state(zone, NR_FILE_DIRTY) + 313 + zone_page_state(zone, NR_UNSTABLE_NFS) + 314 + zone_page_state(zone, NR_WRITEBACK) <= limit; 315 + } 316 + 317 + /* 133 318 * couple the period to the dirty_ratio: 134 319 * 135 320 * period/2 ~ roundup_pow_of_two(dirty limit) ··· 326 141 if (vm_dirty_bytes) 327 142 dirty_total = vm_dirty_bytes / PAGE_SIZE; 328 143 else 329 - dirty_total = (vm_dirty_ratio * determine_dirtyable_memory()) / 144 + dirty_total = (vm_dirty_ratio * global_dirtyable_memory()) / 330 145 100; 331 146 return 2 + ilog2(dirty_total - 1); 332 147 } ··· 380 195 } 381 196 return ret; 382 197 } 383 - 384 198 385 199 int dirty_bytes_handler(struct ctl_table *table, int write, 386 200 void __user *buffer, size_t *lenp, ··· 475 291 } 476 292 EXPORT_SYMBOL(bdi_set_max_ratio); 477 293 478 - /* 479 - * Work out the current dirty-memory clamping and background writeout 480 - * thresholds. 481 - * 482 - * The main aim here is to lower them aggressively if there is a lot of mapped 483 - * memory around. To avoid stressing page reclaim with lots of unreclaimable 484 - * pages. It is better to clamp down on writers than to start swapping, and 485 - * performing lots of scanning. 486 - * 487 - * We only allow 1/2 of the currently-unmapped memory to be dirtied. 488 - * 489 - * We don't permit the clamping level to fall below 5% - that is getting rather 490 - * excessive. 491 - * 492 - * We make sure that the background writeout level is below the adjusted 493 - * clamping level. 
494 - */ 495 - 496 - static unsigned long highmem_dirtyable_memory(unsigned long total) 497 - { 498 - #ifdef CONFIG_HIGHMEM 499 - int node; 500 - unsigned long x = 0; 501 - 502 - for_each_node_state(node, N_HIGH_MEMORY) { 503 - struct zone *z = 504 - &NODE_DATA(node)->node_zones[ZONE_HIGHMEM]; 505 - 506 - x += zone_page_state(z, NR_FREE_PAGES) + 507 - zone_reclaimable_pages(z); 508 - } 509 - /* 510 - * Make sure that the number of highmem pages is never larger 511 - * than the number of the total dirtyable memory. This can only 512 - * occur in very strange VM situations but we want to make sure 513 - * that this does not occur. 514 - */ 515 - return min(x, total); 516 - #else 517 - return 0; 518 - #endif 519 - } 520 - 521 - /** 522 - * determine_dirtyable_memory - amount of memory that may be used 523 - * 524 - * Returns the numebr of pages that can currently be freed and used 525 - * by the kernel for direct mappings. 526 - */ 527 - unsigned long determine_dirtyable_memory(void) 528 - { 529 - unsigned long x; 530 - 531 - x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages(); 532 - 533 - if (!vm_highmem_is_dirtyable) 534 - x -= highmem_dirtyable_memory(x); 535 - 536 - return x + 1; /* Ensure that we never return 0 */ 537 - } 538 - 539 294 static unsigned long dirty_freerun_ceiling(unsigned long thresh, 540 295 unsigned long bg_thresh) 541 296 { ··· 484 361 static unsigned long hard_dirty_limit(unsigned long thresh) 485 362 { 486 363 return max(thresh, global_dirty_limit); 487 - } 488 - 489 - /* 490 - * global_dirty_limits - background-writeback and dirty-throttling thresholds 491 - * 492 - * Calculate the dirty thresholds based on sysctl parameters 493 - * - vm.dirty_background_ratio or vm.dirty_background_bytes 494 - * - vm.dirty_ratio or vm.dirty_bytes 495 - * The dirty limits will be lifted by 1/4 for PF_LESS_THROTTLE (ie. nfsd) and 496 - * real-time tasks. 
497 - */ 498 - void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty) 499 - { 500 - unsigned long background; 501 - unsigned long dirty; 502 - unsigned long uninitialized_var(available_memory); 503 - struct task_struct *tsk; 504 - 505 - if (!vm_dirty_bytes || !dirty_background_bytes) 506 - available_memory = determine_dirtyable_memory(); 507 - 508 - if (vm_dirty_bytes) 509 - dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE); 510 - else 511 - dirty = (vm_dirty_ratio * available_memory) / 100; 512 - 513 - if (dirty_background_bytes) 514 - background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE); 515 - else 516 - background = (dirty_background_ratio * available_memory) / 100; 517 - 518 - if (background >= dirty) 519 - background = dirty / 2; 520 - tsk = current; 521 - if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) { 522 - background += background / 4; 523 - dirty += dirty / 4; 524 - } 525 - *pbackground = background; 526 - *pdirty = dirty; 527 - trace_global_dirty_state(background, dirty); 528 364 } 529 365 530 366 /**
+190 -63
mm/page_alloc.c
··· 57 57 #include <linux/ftrace_event.h> 58 58 #include <linux/memcontrol.h> 59 59 #include <linux/prefetch.h> 60 + #include <linux/page-debug-flags.h> 60 61 61 62 #include <asm/tlbflush.h> 62 63 #include <asm/div64.h> ··· 97 96 98 97 unsigned long totalram_pages __read_mostly; 99 98 unsigned long totalreserve_pages __read_mostly; 99 + /* 100 + * When calculating the number of globally allowed dirty pages, there 101 + * is a certain number of per-zone reserves that should not be 102 + * considered dirtyable memory. This is the sum of those reserves 103 + * over all existing zones that contribute dirtyable memory. 104 + */ 105 + unsigned long dirty_balance_reserve __read_mostly; 106 + 100 107 int percpu_pagelist_fraction; 101 108 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK; 102 109 ··· 135 126 WARN_ON(saved_gfp_mask); 136 127 saved_gfp_mask = gfp_allowed_mask; 137 128 gfp_allowed_mask &= ~GFP_IOFS; 129 + } 130 + 131 + bool pm_suspended_storage(void) 132 + { 133 + if ((gfp_allowed_mask & GFP_IOFS) == GFP_IOFS) 134 + return false; 135 + return true; 138 136 } 139 137 #endif /* CONFIG_PM_SLEEP */ 140 138 ··· 397 381 clear_highpage(page + i); 398 382 } 399 383 384 + #ifdef CONFIG_DEBUG_PAGEALLOC 385 + unsigned int _debug_guardpage_minorder; 386 + 387 + static int __init debug_guardpage_minorder_setup(char *buf) 388 + { 389 + unsigned long res; 390 + 391 + if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) { 392 + printk(KERN_ERR "Bad debug_guardpage_minorder value\n"); 393 + return 0; 394 + } 395 + _debug_guardpage_minorder = res; 396 + printk(KERN_INFO "Setting debug_guardpage_minorder to %lu\n", res); 397 + return 0; 398 + } 399 + __setup("debug_guardpage_minorder=", debug_guardpage_minorder_setup); 400 + 401 + static inline void set_page_guard_flag(struct page *page) 402 + { 403 + __set_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags); 404 + } 405 + 406 + static inline void clear_page_guard_flag(struct page *page) 407 + { 408 + 
__clear_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags); 409 + } 410 + #else 411 + static inline void set_page_guard_flag(struct page *page) { } 412 + static inline void clear_page_guard_flag(struct page *page) { } 413 + #endif 414 + 400 415 static inline void set_page_order(struct page *page, int order) 401 416 { 402 417 set_page_private(page, order); ··· 485 438 if (page_zone_id(page) != page_zone_id(buddy)) 486 439 return 0; 487 440 441 + if (page_is_guard(buddy) && page_order(buddy) == order) { 442 + VM_BUG_ON(page_count(buddy) != 0); 443 + return 1; 444 + } 445 + 488 446 if (PageBuddy(buddy) && page_order(buddy) == order) { 489 447 VM_BUG_ON(page_count(buddy) != 0); 490 448 return 1; ··· 546 494 buddy = page + (buddy_idx - page_idx); 547 495 if (!page_is_buddy(page, buddy, order)) 548 496 break; 549 - 550 - /* Our buddy is free, merge with it and move up one order. */ 551 - list_del(&buddy->lru); 552 - zone->free_area[order].nr_free--; 553 - rmv_page_order(buddy); 497 + /* 498 + * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page, 499 + * merge with it and move up one order. 
500 + */ 501 + if (page_is_guard(buddy)) { 502 + clear_page_guard_flag(buddy); 503 + set_page_private(page, 0); 504 + __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order); 505 + } else { 506 + list_del(&buddy->lru); 507 + zone->free_area[order].nr_free--; 508 + rmv_page_order(buddy); 509 + } 554 510 combined_idx = buddy_idx & page_idx; 555 511 page = page + (combined_idx - page_idx); 556 512 page_idx = combined_idx; ··· 692 632 int i; 693 633 int bad = 0; 694 634 695 - trace_mm_page_free_direct(page, order); 635 + trace_mm_page_free(page, order); 696 636 kmemcheck_free_shadow(page, order); 697 637 698 638 if (PageAnon(page)) ··· 730 670 local_irq_restore(flags); 731 671 } 732 672 733 - /* 734 - * permit the bootmem allocator to evade page validation on high-order frees 735 - */ 736 673 void __meminit __free_pages_bootmem(struct page *page, unsigned int order) 737 674 { 738 - if (order == 0) { 739 - __ClearPageReserved(page); 740 - set_page_count(page, 0); 741 - set_page_refcounted(page); 742 - __free_page(page); 743 - } else { 744 - int loop; 675 + unsigned int nr_pages = 1 << order; 676 + unsigned int loop; 745 677 746 - prefetchw(page); 747 - for (loop = 0; loop < (1 << order); loop++) { 748 - struct page *p = &page[loop]; 678 + prefetchw(page); 679 + for (loop = 0; loop < nr_pages; loop++) { 680 + struct page *p = &page[loop]; 749 681 750 - if (loop + 1 < (1 << order)) 751 - prefetchw(p + 1); 752 - __ClearPageReserved(p); 753 - set_page_count(p, 0); 754 - } 755 - 756 - set_page_refcounted(page); 757 - __free_pages(page, order); 682 + if (loop + 1 < nr_pages) 683 + prefetchw(p + 1); 684 + __ClearPageReserved(p); 685 + set_page_count(p, 0); 758 686 } 687 + 688 + set_page_refcounted(page); 689 + __free_pages(page, order); 759 690 } 760 691 761 692 ··· 775 724 high--; 776 725 size >>= 1; 777 726 VM_BUG_ON(bad_range(zone, &page[size])); 727 + 728 + #ifdef CONFIG_DEBUG_PAGEALLOC 729 + if (high < debug_guardpage_minorder()) { 730 + /* 731 + * Mark as guard pages (or 
page), that will allow to 732 + * merge back to allocator when buddy will be freed. 733 + * Corresponding page table entries will not be touched, 734 + * pages will stay not present in virtual address space 735 + */ 736 + INIT_LIST_HEAD(&page[size].lru); 737 + set_page_guard_flag(&page[size]); 738 + set_page_private(&page[size], high); 739 + /* Guard pages are not available for any usage */ 740 + __mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << high)); 741 + continue; 742 + } 743 + #endif 778 744 list_add(&page[size].lru, &area->free_list[migratetype]); 779 745 area->nr_free++; 780 746 set_page_order(&page[size], high); ··· 1257 1189 } 1258 1190 1259 1191 /* 1192 + * Free a list of 0-order pages 1193 + */ 1194 + void free_hot_cold_page_list(struct list_head *list, int cold) 1195 + { 1196 + struct page *page, *next; 1197 + 1198 + list_for_each_entry_safe(page, next, list, lru) { 1199 + trace_mm_page_free_batched(page, cold); 1200 + free_hot_cold_page(page, cold); 1201 + } 1202 + } 1203 + 1204 + /* 1260 1205 * split_page takes a non-compound higher-order page, and splits it into 1261 1206 * n (1<<order) sub-pages: page[0..n] 1262 1207 * Each sub-page must be freed individually. ··· 1516 1435 long min = mark; 1517 1436 int o; 1518 1437 1519 - free_pages -= (1 << order) + 1; 1438 + free_pages -= (1 << order) - 1; 1520 1439 if (alloc_flags & ALLOC_HIGH) 1521 1440 min -= min / 2; 1522 1441 if (alloc_flags & ALLOC_HARDER) ··· 1726 1645 if ((alloc_flags & ALLOC_CPUSET) && 1727 1646 !cpuset_zone_allowed_softwall(zone, gfp_mask)) 1728 1647 continue; 1648 + /* 1649 + * When allocating a page cache page for writing, we 1650 + * want to get it from a zone that is within its dirty 1651 + * limit, such that no single zone holds more than its 1652 + * proportional share of globally allowed dirty pages. 
1653 + * The dirty limits take into account the zone's 1654 + * lowmem reserves and high watermark so that kswapd 1655 + * should be able to balance it without having to 1656 + * write pages from its LRU list. 1657 + * 1658 + * This may look like it could increase pressure on 1659 + * lower zones by failing allocations in higher zones 1660 + * before they are full. But the pages that do spill 1661 + * over are limited as the lower zones are protected 1662 + * by this very same mechanism. It should not become 1663 + * a practical burden to them. 1664 + * 1665 + * XXX: For now, allow allocations to potentially 1666 + * exceed the per-zone dirty limit in the slowpath 1667 + * (ALLOC_WMARK_LOW unset) before going into reclaim, 1668 + * which is important when on a NUMA setup the allowed 1669 + * zones are together not big enough to reach the 1670 + * global limit. The proper fix for these situations 1671 + * will require awareness of zones in the 1672 + * dirty-throttling and the flusher threads. 
1673 + */ 1674 + if ((alloc_flags & ALLOC_WMARK_LOW) && 1675 + (gfp_mask & __GFP_WRITE) && !zone_dirty_ok(zone)) 1676 + goto this_zone_full; 1729 1677 1730 1678 BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK); 1731 1679 if (!(alloc_flags & ALLOC_NO_WATERMARKS)) { ··· 1844 1734 { 1845 1735 unsigned int filter = SHOW_MEM_FILTER_NODES; 1846 1736 1847 - if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs)) 1737 + if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs) || 1738 + debug_guardpage_minorder() > 0) 1848 1739 return; 1849 1740 1850 1741 /* ··· 1884 1773 1885 1774 static inline int 1886 1775 should_alloc_retry(gfp_t gfp_mask, unsigned int order, 1776 + unsigned long did_some_progress, 1887 1777 unsigned long pages_reclaimed) 1888 1778 { 1889 1779 /* Do not loop if specifically requested */ 1890 1780 if (gfp_mask & __GFP_NORETRY) 1781 + return 0; 1782 + 1783 + /* Always retry if specifically requested */ 1784 + if (gfp_mask & __GFP_NOFAIL) 1785 + return 1; 1786 + 1787 + /* 1788 + * Suspend converts GFP_KERNEL to __GFP_WAIT which can prevent reclaim 1789 + * making forward progress without invoking OOM. Suspend also disables 1790 + * storage devices so kswapd will not help. Bail if we are suspending. 1791 + */ 1792 + if (!did_some_progress && pm_suspended_storage()) 1891 1793 return 0; 1892 1794 1893 1795 /* ··· 1919 1795 * allocation still fails, we stop retrying. 1920 1796 */ 1921 1797 if (gfp_mask & __GFP_REPEAT && pages_reclaimed < (1 << order)) 1922 - return 1; 1923 - 1924 - /* 1925 - * Don't let big-order allocations loop unless the caller 1926 - * explicitly requests that. 
1927 - */ 1928 - if (gfp_mask & __GFP_NOFAIL) 1929 1798 return 1; 1930 1799 1931 1800 return 0; ··· 2313 2196 2314 2197 /* Check if we should retry the allocation */ 2315 2198 pages_reclaimed += did_some_progress; 2316 - if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) { 2199 + if (should_alloc_retry(gfp_mask, order, did_some_progress, 2200 + pages_reclaimed)) { 2317 2201 /* Wait for some write requests to complete then retry */ 2318 2202 wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50); 2319 2203 goto rebalance; ··· 2423 2305 return __get_free_pages(gfp_mask | __GFP_ZERO, 0); 2424 2306 } 2425 2307 EXPORT_SYMBOL(get_zeroed_page); 2426 - 2427 - void __pagevec_free(struct pagevec *pvec) 2428 - { 2429 - int i = pagevec_count(pvec); 2430 - 2431 - while (--i >= 0) { 2432 - trace_mm_pagevec_free(pvec->pages[i], pvec->cold); 2433 - free_hot_cold_page(pvec->pages[i], pvec->cold); 2434 - } 2435 - } 2436 2308 2437 2309 void __free_pages(struct page *page, unsigned int order) 2438 2310 { ··· 3493 3385 if (page_to_nid(page) != zone_to_nid(zone)) 3494 3386 continue; 3495 3387 3496 - /* Blocks with reserved pages will never free, skip them. */ 3497 - block_end_pfn = min(pfn + pageblock_nr_pages, end_pfn); 3498 - if (pageblock_is_reserved(pfn, block_end_pfn)) 3499 - continue; 3500 - 3501 3388 block_migratetype = get_pageblock_migratetype(page); 3502 3389 3503 - /* If this block is reserved, account for it */ 3504 - if (reserve > 0 && block_migratetype == MIGRATE_RESERVE) { 3505 - reserve--; 3506 - continue; 3507 - } 3390 + /* Only test what is necessary when the reserves are not met */ 3391 + if (reserve > 0) { 3392 + /* 3393 + * Blocks with reserved pages will never free, skip 3394 + * them. 
3395 + */ 3396 + block_end_pfn = min(pfn + pageblock_nr_pages, end_pfn); 3397 + if (pageblock_is_reserved(pfn, block_end_pfn)) 3398 + continue; 3508 3399 3509 - /* Suitable for reserving if this block is movable */ 3510 - if (reserve > 0 && block_migratetype == MIGRATE_MOVABLE) { 3511 - set_pageblock_migratetype(page, MIGRATE_RESERVE); 3512 - move_freepages_block(zone, page, MIGRATE_RESERVE); 3513 - reserve--; 3514 - continue; 3400 + /* If this block is reserved, account for it */ 3401 + if (block_migratetype == MIGRATE_RESERVE) { 3402 + reserve--; 3403 + continue; 3404 + } 3405 + 3406 + /* Suitable for reserving if this block is movable */ 3407 + if (block_migratetype == MIGRATE_MOVABLE) { 3408 + set_pageblock_migratetype(page, 3409 + MIGRATE_RESERVE); 3410 + move_freepages_block(zone, page, 3411 + MIGRATE_RESERVE); 3412 + reserve--; 3413 + continue; 3414 + } 3515 3415 } 3516 3416 3517 3417 /* ··· 4850 4734 if (max > zone->present_pages) 4851 4735 max = zone->present_pages; 4852 4736 reserve_pages += max; 4737 + /* 4738 + * Lowmem reserves are not available to 4739 + * GFP_HIGHUSER page cache allocations and 4740 + * kswapd tries to balance zones to their high 4741 + * watermark. As a result, neither should be 4742 + * regarded as dirtyable memory, to prevent a 4743 + * situation where reclaim has to clean pages 4744 + * in order to balance the zones. 4745 + */ 4746 + zone->dirty_balance_reserve = max; 4853 4747 } 4854 4748 } 4749 + dirty_balance_reserve = reserve_pages; 4855 4750 totalreserve_pages = reserve_pages; 4856 4751 } 4857 4752
+45
mm/rmap.c
··· 272 272 } 273 273 274 274 /* 275 + * Some rmap walk that needs to find all ptes/hugepmds without false 276 + * negatives (like migrate and split_huge_page) running concurrent 277 + * with operations that copy or move pagetables (like mremap() and 278 + * fork()) to be safe. They depend on the anon_vma "same_anon_vma" 279 + * list to be in a certain order: the dst_vma must be placed after the 280 + * src_vma in the list. This is always guaranteed by fork() but 281 + * mremap() needs to call this function to enforce it in case the 282 + * dst_vma isn't newly allocated and chained with the anon_vma_clone() 283 + * function but just an extension of a pre-existing vma through 284 + * vma_merge. 285 + * 286 + * NOTE: the same_anon_vma list can still be changed by other 287 + * processes while mremap runs because mremap doesn't hold the 288 + * anon_vma mutex to prevent modifications to the list while it 289 + * runs. All we need to enforce is that the relative order of this 290 + * process vmas isn't changing (we don't care about other vmas 291 + * order). Each vma corresponds to an anon_vma_chain structure so 292 + * there's no risk that other processes calling anon_vma_moveto_tail() 293 + * and changing the same_anon_vma list under mremap() will screw with 294 + * the relative order of this process vmas in the list, because 295 + * they can't alter the order of any vma that belongs to this 296 + * process. And there can't be another anon_vma_moveto_tail() running 297 + * concurrently with mremap() coming from this process because we hold 298 + * the mmap_sem for the whole mremap(). fork() ordering dependency 299 + * also shouldn't be affected because fork() only cares that the 300 + * parent vmas are placed in the list before the child vmas and 301 + * anon_vma_moveto_tail() won't reorder vmas from either the fork() 302 + * parent or child. 
303 + */ 304 + void anon_vma_moveto_tail(struct vm_area_struct *dst) 305 + { 306 + struct anon_vma_chain *pavc; 307 + struct anon_vma *root = NULL; 308 + 309 + list_for_each_entry_reverse(pavc, &dst->anon_vma_chain, same_vma) { 310 + struct anon_vma *anon_vma = pavc->anon_vma; 311 + VM_BUG_ON(pavc->vma != dst); 312 + root = lock_anon_vma_root(root, anon_vma); 313 + list_del(&pavc->same_anon_vma); 314 + list_add_tail(&pavc->same_anon_vma, &anon_vma->head); 315 + } 316 + unlock_anon_vma_root(root); 317 + } 318 + 319 + /* 275 320 * Attach vma to its own anon_vma, as well as to the anon_vmas that 276 321 * the corresponding VMA in the parent process is attached to. 277 322 * Returns 0 on success, non-zero on failure.
+3
mm/slub.c
··· 3654 3654 struct kmem_cache *temp_kmem_cache_node;
3655 3655 unsigned long kmalloc_size;
3656 3656
3657 + if (debug_guardpage_minorder())
3658 + slub_max_order = 0;
3659 +
3657 3660 kmem_size = offsetof(struct kmem_cache, node) +
3658 3661 nr_node_ids * sizeof(struct kmem_cache_node *);
3659 3662
+3 -11
mm/swap.c
··· 585 585 void release_pages(struct page **pages, int nr, int cold)
586 586 {
587 587 int i;
588 - struct pagevec pages_to_free;
588 + LIST_HEAD(pages_to_free);
589 589 struct zone *zone = NULL;
590 590 unsigned long uninitialized_var(flags);
591 591
592 - pagevec_init(&pages_to_free, cold);
593 592 for (i = 0; i < nr; i++) {
594 593 struct page *page = pages[i];
595 594
··· 619 620 del_page_from_lru(zone, page);
620 621 }
621 622
622 - if (!pagevec_add(&pages_to_free, page)) {
623 - if (zone) {
624 - spin_unlock_irqrestore(&zone->lru_lock, flags);
625 - zone = NULL;
626 - }
627 - __pagevec_free(&pages_to_free);
628 - pagevec_reinit(&pages_to_free);
629 - }
623 + list_add(&page->lru, &pages_to_free);
630 624 }
631 625 if (zone)
632 626 spin_unlock_irqrestore(&zone->lru_lock, flags);
633 627
634 - pagevec_free(&pages_to_free);
628 + free_hot_cold_page_list(&pages_to_free, cold);
635 629 }
636 630 EXPORT_SYMBOL(release_pages);
637 631
+3 -3
mm/swapfile.c
··· 667 667 * original page might be freed under memory pressure, then
668 668 * later read back in from swap, now with the wrong data.
669 669 *
670 - * Hibernation clears bits from gfp_allowed_mask to prevent
671 - * memory reclaim from writing to disk, so check that here.
670 + * Hibernation suspends storage while it is writing the image
671 + * to disk, so check that here.
672 672 */
673 - if (!(gfp_allowed_mask & __GFP_IO))
673 + if (pm_suspended_storage())
674 674 return 0;
675 675
676 676 delete_from_swap_cache(page);
+4 -4
mm/vmalloc.c
··· 256 256 struct rb_node rb_node; /* address sorted rbtree */
257 257 struct list_head list; /* address sorted list */
258 258 struct list_head purge_list; /* "lazy purge" list */
259 - void *private;
259 + struct vm_struct *vm;
260 260 struct rcu_head rcu_head;
261 261 };
262 262
··· 1285 1285 vm->addr = (void *)va->va_start;
1286 1286 vm->size = va->va_end - va->va_start;
1287 1287 vm->caller = caller;
1288 - va->private = vm;
1288 + va->vm = vm;
1289 1289 va->flags |= VM_VM_AREA;
1290 1290 }
1291 1291
··· 1408 1408
1409 1409 va = find_vmap_area((unsigned long)addr);
1410 1410 if (va && va->flags & VM_VM_AREA)
1411 - return va->private;
1411 + return va->vm;
1412 1412
1413 1413 return NULL;
1414 1414 }
··· 1427 1427
1428 1428 va = find_vmap_area((unsigned long)addr);
1429 1429 if (va && va->flags & VM_VM_AREA) {
1430 - struct vm_struct *vm = va->private;
1430 + struct vm_struct *vm = va->vm;
1431 1431
1432 1432 if (!(vm->flags & VM_UNLIST)) {
1433 1433 struct vm_struct *tmp, **p;
+16 -26
mm/vmscan.c
··· 715 715 */
716 716 SetPageReferenced(page);
717 717
718 - if (referenced_page)
718 + if (referenced_page || referenced_ptes > 1)
719 + return PAGEREF_ACTIVATE;
720 +
721 + /*
722 + * Activate file-backed executable pages after first usage.
723 + */
724 + if (vm_flags & VM_EXEC)
719 725 return PAGEREF_ACTIVATE;
720 726
721 727 return PAGEREF_KEEP;
··· 732 726 return PAGEREF_RECLAIM_CLEAN;
733 727
734 728 return PAGEREF_RECLAIM;
735 - }
736 -
737 - static noinline_for_stack void free_page_list(struct list_head *free_pages)
738 - {
739 - struct pagevec freed_pvec;
740 - struct page *page, *tmp;
741 -
742 - pagevec_init(&freed_pvec, 1);
743 -
744 - list_for_each_entry_safe(page, tmp, free_pages, lru) {
745 - list_del(&page->lru);
746 - if (!pagevec_add(&freed_pvec, page)) {
747 - __pagevec_free(&freed_pvec);
748 - pagevec_reinit(&freed_pvec);
749 - }
750 - }
751 -
752 - pagevec_free(&freed_pvec);
753 729 }
754 730
755 731 /*
··· 997 1009 if (nr_dirty && nr_dirty == nr_congested && scanning_global_lru(sc))
998 1010 zone_set_flag(zone, ZONE_CONGESTED);
999 1011
1000 - free_page_list(&free_pages);
1012 + free_hot_cold_page_list(&free_pages, 1);
1001 1013
1002 1014 list_splice(&ret_pages, page_list);
1003 1015 count_vm_events(PGACTIVATE, pgactivate);
··· 1166 1178 * anon page which don't already have a swap slot is
1167 1179 * pointless.
1168 1180 */
1169 - if (nr_swap_pages <= 0 && PageAnon(cursor_page) &&
1181 + if (nr_swap_pages <= 0 && PageSwapBacked(cursor_page) &&
1170 1182 !PageSwapCache(cursor_page))
1171 1183 break;
1172 1184
1173 1185 if (__isolate_lru_page(cursor_page, mode, file) == 0) {
1174 1186 list_move(&cursor_page->lru, dst);
1175 1187 mem_cgroup_del_lru(cursor_page);
1176 - nr_taken += hpage_nr_pages(page);
1188 + nr_taken += hpage_nr_pages(cursor_page);
1177 1189 nr_lumpy_taken++;
1178 1190 if (PageDirty(cursor_page))
1179 1191 nr_lumpy_dirty++;
··· 2000 2012 * inactive lists are large enough, continue reclaiming
2001 2013 */
2002 2014 pages_for_compaction = (2UL << sc->order);
2003 - inactive_lru_pages = zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON) +
2004 - zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
2015 + inactive_lru_pages = zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
2016 + if (nr_swap_pages > 0)
2017 + inactive_lru_pages += zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON);
2005 2018 if (sc->nr_reclaimed < pages_for_compaction &&
2006 2019 inactive_lru_pages > pages_for_compaction)
2007 2020 return true;
··· 3437 3448 static void warn_scan_unevictable_pages(void)
3438 3449 {
3439 3450 printk_once(KERN_WARNING
3440 - "The scan_unevictable_pages sysctl/node-interface has been "
3451 + "%s: The scan_unevictable_pages sysctl/node-interface has been "
3441 3452 "disabled for lack of a legitimate use case. If you have "
3442 - "one, please send an email to linux-mm@kvack.org.\n");
3453 + "one, please send an email to linux-mm@kvack.org.\n",
3454 + current->comm);
3443 3455 }
3444 3456
3445 3457 /*
+138 -98
scripts/checkpatch.pl
··· 227 227 our $Member = qr{->$Ident|\.$Ident|\[[^]]*\]};
228 228 our $Lval = qr{$Ident(?:$Member)*};
229 229
230 - our $Constant = qr{(?:[0-9]+|0x[0-9a-fA-F]+)[UL]*};
230 + our $Constant = qr{(?i:(?:[0-9]+|0x[0-9a-f]+)[ul]*)};
231 231 our $Assignment = qr{(?:\*\=|/=|%=|\+=|-=|<<=|>>=|&=|\^=|\|=|=)};
232 232 our $Compare = qr{<=|>=|==|!=|<|>};
233 233 our $Operators = qr{
··· 315 315 $NonptrType = qr{
316 316 (?:$Modifier\s+|const\s+)*
317 317 (?:
318 - (?:typeof|__typeof__)\s*\(\s*\**\s*$Ident\s*\)|
318 + (?:typeof|__typeof__)\s*\([^\)]*\)|
319 319 (?:$typeTypedefs\b)|
320 320 (?:${all}\b)
321 321 )
··· 334 334
335 335 our $Typecast = qr{\s*(\(\s*$NonptrType\s*\)){0,1}\s*};
336 336 our $LvalOrFunc = qr{($Lval)\s*($match_balanced_parentheses{0,1})\s*};
337 + our $FuncArg = qr{$Typecast{0,1}($LvalOrFunc|$Constant)};
337 338
338 339 sub deparenthesize {
339 340 my ($string) = @_;
··· 677 676 if ($off >= $len) {
678 677 last;
679 678 }
679 + if ($level == 0 && substr($blk, $off) =~ /^.\s*#\s*define/) {
680 + $level++;
681 + $type = '#';
682 + }
680 683 }
681 684 $p = $c;
682 685 $c = substr($blk, $off, 1);
··· 742 737 }
743 738 last;
744 739 }
740 + }
741 + # Preprocessor commands end at the newline unless escaped.
742 + if ($type eq '#' && $c eq "\n" && $p ne "\\") {
743 + $level--;
744 + $type = '';
745 + $off++;
746 + last;
745 747 }
746 748 $off++;
747 749 }
··· 1032 1020 } elsif ($cur =~ /^(\(\s*$Type\s*)\)/ && $av_pending eq '_') {
1033 1021 print "CAST($1)\n" if ($dbg_values > 1);
1034 1022 push(@av_paren_type, $type);
1035 - $type = 'C';
1023 + $type = 'c';
1036 1024
1037 1025 } elsif ($cur =~ /^($Type)\s*(?:$Ident|,|\)|\(|\s*$)/) {
1038 1026 print "DECLARE($1)\n" if ($dbg_values > 1);
··· 1224 1212 case|
1225 1213 else|
1226 1214 asm|__asm__|
1227 - do
1215 + do|
1216 + \#|
1217 + \#\#|
1228 1218 )(?:\s|$)|
1229 1219 ^(?:typedef|struct|enum)\b
1230 1220 )}x;
··· 1373 1359 my %suppress_ifbraces;
1374 1360 my %suppress_whiletrailers;
1375 1361 my %suppress_export;
1362 + my $suppress_statement = 0;
1376 1363
1377 1364 # Pre-scan the patch sanitizing the lines.
1378 1365 # Pre-scan the patch looking for any __setup documentation.
··· 1483 1468 %suppress_ifbraces = ();
1484 1469 %suppress_whiletrailers = ();
1485 1470 %suppress_export = ();
1471 + $suppress_statement = 0;
1486 1472 next;
1487 1473
1488 1474 # track the line number as we move through the hunk, note that
··· 1520 1504 if ($line =~ /^diff --git.*?(\S+)$/) {
1521 1505 $realfile = $1;
1522 1506 $realfile =~ s@^([^/]*)/@@;
1507 + $in_commit_log = 0;
1523 1508 } elsif ($line =~ /^\+\+\+\s+(\S+)/) {
1524 1509 $realfile = $1;
1525 1510 $realfile =~ s@^([^/]*)/@@;
1511 + $in_commit_log = 0;
1526 1512
1527 1513 $p1_prefix = $1;
1528 1514 if (!$file && $tree && $p1_prefix ne '' &&
··· 1564 1546 }
1565 1547
1566 1548 # Check signature styles
1567 - if ($line =~ /^(\s*)($signature_tags)(\s*)(.*)/) {
1549 + if (!$in_header_lines &&
1550 + $line =~ /^(\s*)($signature_tags)(\s*)(.*)/) {
1568 1551 my $space_before = $1;
1569 1552 my $sign_off = $2;
1570 1553 my $space_after = $3;
··· 1642 1623 # Check if it's the start of a commit log
1643 1624 # (not a header line and we haven't seen the patch filename)
1644 1625 if ($in_header_lines && $realfile =~ /^$/ &&
1645 - $rawline !~ /^(commit\b|from\b|\w+:).+$/i) {
1626 + $rawline !~ /^(commit\b|from\b|[\w-]+:).+$/i) {
1646 1627 $in_header_lines = 0;
1647 1628 $in_commit_log = 1;
1648 1629 }
··· 1674 1655 # Only applies when adding the entry originally, after that we do not have
1675 1656 # sufficient context to determine whether it is indeed long enough.
1676 1657 if ($realfile =~ /Kconfig/ &&
1677 - $line =~ /\+\s*(?:---)?help(?:---)?$/) {
1658 + $line =~ /.\s*config\s+/) {
1678 1659 my $length = 0;
1679 1660 my $cnt = $realcnt;
1680 1661 my $ln = $linenr + 1;
1681 1662 my $f;
1663 + my $is_start = 0;
1682 1664 my $is_end = 0;
1683 - while ($cnt > 0 && defined $lines[$ln - 1]) {
1665 + for (; $cnt > 0 && defined $lines[$ln - 1]; $ln++) {
1684 1666 $f = $lines[$ln - 1];
1685 1667 $cnt-- if ($lines[$ln - 1] !~ /^-/);
1686 1668 $is_end = $lines[$ln - 1] =~ /^\+/;
1687 - $ln++;
1688 1669
1689 1670 next if ($f =~ /^-/);
1671 +
1672 + if ($lines[$ln - 1] =~ /.\s*(?:bool|tristate)\s*\"/) {
1673 + $is_start = 1;
1674 + } elsif ($lines[$ln - 1] =~ /.\s*(?:---)?help(?:---)?$/) {
1675 + $length = -1;
1676 + }
1677 +
1690 1678 $f =~ s/^.//;
1691 1679 $f =~ s/#.*//;
1692 1680 $f =~ s/^\s+//;
··· 1705 1679 $length++;
1706 1680 }
1707 1681 WARN("CONFIG_DESCRIPTION",
1708 - "please write a paragraph that describes the config symbol fully\n" . $herecurr) if ($is_end && $length < 4);
1709 - #print "is_end<$is_end> length<$length>\n";
1682 + "please write a paragraph that describes the config symbol fully\n" . $herecurr) if ($is_start && $is_end && $length < 4);
1683 + #print "is_start<$is_start> is_end<$is_end> length<$length>\n";
1710 1684 }
1711 1685
1712 1686 if (($realfile =~ /Makefile.*/ || $realfile =~ /Kbuild.*/) &&
··· 1818 1792 # Check for potential 'bare' types
1819 1793 my ($stat, $cond, $line_nr_next, $remain_next, $off_next,
1820 1794 $realline_next);
1821 - if ($realcnt && $line =~ /.\s*\S/) {
1795 + #print "LINE<$line>\n";
1796 + if ($linenr >= $suppress_statement &&
1797 + $realcnt && $line =~ /.\s*\S/) {
1822 1798 ($stat, $cond, $line_nr_next, $remain_next, $off_next) =
1823 1799 ctx_statement_block($linenr, $realcnt, 0);
1824 1800 $stat =~ s/\n./\n /g;
1825 1801 $cond =~ s/\n./\n /g;
1802 +
1803 + #print "linenr<$linenr> <$stat>\n";
1804 + # If this statement has no statement boundaries within
1805 + # it there is no point in retrying a statement scan
1806 + # until we hit end of it.
1807 + my $frag = $stat; $frag =~ s/;+\s*$//;
1808 + if ($frag !~ /(?:{|;)/) {
1809 + #print "skip<$line_nr_next>\n";
1810 + $suppress_statement = $line_nr_next;
1811 + }
1826 1812
1827 1813 # Find the real next line.
1828 1814 $realline_next = $line_nr_next;
··· 1961 1923
1962 1924 # Check relative indent for conditionals and blocks.
1963 1925 if ($line =~ /\b(?:(?:if|while|for)\s*\(|do\b)/ && $line !~ /^.\s*#/ && $line !~ /\}\s*while\s*/) {
1926 + ($stat, $cond, $line_nr_next, $remain_next, $off_next) =
1927 + ctx_statement_block($linenr, $realcnt, 0)
1928 + if (!defined $stat);
1964 1929 my ($s, $c) = ($stat, $cond);
1965 1930
1966 1931 substr($s, 0, length($c), '');
··· 2131 2090 # XXX(foo);
2132 2091 # EXPORT_SYMBOL(something_foo);
2133 2092 my $name = $1;
2134 - if ($stat =~ /^.([A-Z_]+)\s*\(\s*($Ident)/ &&
2093 + if ($stat =~ /^(?:.\s*}\s*\n)?.([A-Z_]+)\s*\(\s*($Ident)/ &&
2135 2094 $name =~ /^${Ident}_$2/) {
2136 2095 #print "FOO C name<$name>\n";
2137 2096 $suppress_export{$realline_next} = 1;
··· 2209 2168
2210 2169 # * goes on variable not on type
2211 2170 # (char*[ const])
2212 - if ($line =~ m{\($NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)\)}) {
2213 - my ($from, $to) = ($1, $1);
2171 + while ($line =~ m{(\($NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)\))}g) {
2172 + #print "AA<$1>\n";
2173 + my ($from, $to) = ($2, $2);
2214 2174
2215 2175 # Should start with a space.
2216 2176 $to =~ s/^(\S)/ $1/;
··· 2226 2184 ERROR("POINTER_LOCATION",
2227 2185 "\"(foo$from)\" should be \"(foo$to)\"\n" . $herecurr);
2228 2186 }
2229 - } elsif ($line =~ m{\b$NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)($Ident)}) {
2230 - my ($from, $to, $ident) = ($1, $1, $2);
2187 + }
2188 + while ($line =~ m{(\b$NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)($Ident))}g) {
2189 + #print "BB<$1>\n";
2190 + my ($from, $to, $ident) = ($2, $2, $3);
2231 2191
2232 2192 # Should start with a space.
2233 2193 $to =~ s/^(\S)/ $1/;
··· 2612 2568 # Flatten any parentheses
2613 2569 $value =~ s/\(/ \(/g;
2614 2570 $value =~ s/\)/\) /g;
2615 - while ($value =~ s/\[[^\{\}]*\]/1/ ||
2571 + while ($value =~ s/\[[^\[\]]*\]/1/ ||
2616 2572 $value !~ /(?:$Ident|-?$Constant)\s*
2617 2573 $Compare\s*
2618 2574 (?:$Ident|-?$Constant)/x &&
··· 2637 2593 }
2638 2594 }
2639 2595
2640 - # typecasts on min/max could be min_t/max_t
2641 - if ($line =~ /^\+(?:.*?)\b(min|max)\s*\($Typecast{0,1}($LvalOrFunc)\s*,\s*$Typecast{0,1}($LvalOrFunc)\s*\)/) {
2642 - if (defined $2 || defined $8) {
2643 - my $call = $1;
2644 - my $cast1 = deparenthesize($2);
2645 - my $arg1 = $3;
2646 - my $cast2 = deparenthesize($8);
2647 - my $arg2 = $9;
2648 - my $cast;
2649 -
2650 - if ($cast1 ne "" && $cast2 ne "") {
2651 - $cast = "$cast1 or $cast2";
2652 - } elsif ($cast1 ne "") {
2653 - $cast = $cast1;
2654 - } else {
2655 - $cast = $cast2;
2656 - }
2657 - WARN("MINMAX",
2658 - "$call() should probably be ${call}_t($cast, $arg1, $arg2)\n" . $herecurr);
2659 - }
2660 - }
2661 -
2662 2596 # Need a space before open parenthesis after if, while etc
2663 2597 if ($line=~/\b(if|while|for|switch)\(/) {
2664 2598 ERROR("SPACING", "space required before the open parenthesis '('\n" . $herecurr);
··· 2645 2623 # Check for illegal assignment in if conditional -- and check for trailing
2646 2624 # statements after the conditional.
2647 2625 if ($line =~ /do\s*(?!{)/) {
2626 + ($stat, $cond, $line_nr_next, $remain_next, $off_next) =
2627 + ctx_statement_block($linenr, $realcnt, 0)
2628 + if (!defined $stat);
2648 2629 my ($stat_next) = ctx_statement_block($line_nr_next,
2649 2630 $remain_next, $off_next);
2650 2631 $stat_next =~ s/\n./\n /g;
··· 2803 2778 my $cnt = $realcnt;
2804 2779 my ($off, $dstat, $dcond, $rest);
2805 2780 my $ctx = '';
2806 -
2807 - my $args = defined($1);
2808 -
2809 - # Find the end of the macro and limit our statement
2810 - # search to that.
2811 - while ($cnt > 0 && defined $lines[$ln - 1] &&
2812 - $lines[$ln - 1] =~ /^(?:-|..*\\$)/)
2813 - {
2814 - $ctx .= $rawlines[$ln - 1] . "\n";
2815 - $cnt-- if ($lines[$ln - 1] !~ /^-/);
2816 - $ln++;
2817 - }
2818 - $ctx .= $rawlines[$ln - 1];
2819 -
2820 2781 ($dstat, $dcond, $ln, $cnt, $off) =
2821 - ctx_statement_block($linenr, $ln - $linenr + 1, 0);
2782 + ctx_statement_block($linenr, $realcnt, 0);
2783 + $ctx = $dstat;
2822 2784 #print "dstat<$dstat> dcond<$dcond> cnt<$cnt> off<$off>\n";
2823 2785 #print "LINE<$lines[$ln-1]> len<" . length($lines[$ln-1]) . "\n";
2824 2786
2825 - # Extract the remainder of the define (if any) and
2826 - # rip off surrounding spaces, and trailing \'s.
2827 - $rest = '';
2828 - while ($off != 0 || ($cnt > 0 && $rest =~ /\\\s*$/)) {
2829 - #print "ADDING cnt<$cnt> $off <" . substr($lines[$ln - 1], $off) . "> rest<$rest>\n";
2830 - if ($off != 0 || $lines[$ln - 1] !~ /^-/) {
2831 - $rest .= substr($lines[$ln - 1], $off) . "\n";
2832 - $cnt--;
2833 - }
2834 - $ln++;
2835 - $off = 0;
2836 - }
2837 - $rest =~ s/\\\n.//g;
2838 - $rest =~ s/^\s*//s;
2839 - $rest =~ s/\s*$//s;
2840 -
2841 - # Clean up the original statement.
2842 - if ($args) {
2843 - substr($dstat, 0, length($dcond), '');
2844 - } else {
2845 - $dstat =~ s/^.\s*\#\s*define\s+$Ident\s*//;
2846 - }
2787 + $dstat =~ s/^.\s*\#\s*define\s+$Ident(?:\([^\)]*\))?\s*//;
2847 2788 $dstat =~ s/$;//g;
2848 2789 $dstat =~ s/\\\n.//g;
2849 2790 $dstat =~ s/^\s*//s;
··· 2818 2827 # Flatten any parentheses and braces
2819 2828 while ($dstat =~ s/\([^\(\)]*\)/1/ ||
2820 2829 $dstat =~ s/\{[^\{\}]*\}/1/ ||
2821 - $dstat =~ s/\[[^\{\}]*\]/1/)
2830 + $dstat =~ s/\[[^\[\]]*\]/1/)
2822 2831 {
2823 2832 }
2824 2833
··· 2835 2844 ^\"|\"$
2836 2845 }x;
2837 2846 #print "REST<$rest> dstat<$dstat> ctx<$ctx>\n";
2838 - if ($rest ne '' && $rest ne ',') {
2839 - if ($rest !~ /while\s*\(/ &&
2840 - $dstat !~ /$exceptions/)
2841 - {
2842 - ERROR("MULTISTATEMENT_MACRO_USE_DO_WHILE",
2843 - "Macros with multiple statements should be enclosed in a do - while loop\n" . "$here\n$ctx\n");
2847 + if ($dstat ne '' &&
2848 + $dstat !~ /^(?:$Ident|-?$Constant),$/ && # 10, // foo(),
2849 + $dstat !~ /^(?:$Ident|-?$Constant);$/ && # foo();
2850 + $dstat !~ /^(?:$Ident|-?$Constant)$/ && # 10 // foo()
2851 + $dstat !~ /$exceptions/ &&
2852 + $dstat !~ /^\.$Ident\s*=/ && # .foo =
2853 + $dstat !~ /^do\s*$Constant\s*while\s*$Constant;?$/ && # do {...} while (...); // do {...} while (...)
2854 + $dstat !~ /^for\s*$Constant$/ && # for (...)
2855 + $dstat !~ /^for\s*$Constant\s+(?:$Ident|-?$Constant)$/ && # for (...) bar()
2856 + $dstat !~ /^do\s*{/ && # do {...
2857 + $dstat !~ /^\({/) # ({...
2858 + {
2859 + $ctx =~ s/\n*$//;
2860 + my $herectx = $here . "\n";
2861 + my $cnt = statement_rawlines($ctx);
2862 +
2863 + for (my $n = 0; $n < $cnt; $n++) {
2864 + $herectx .= raw_line($linenr, $n) . "\n";
2844 2865 }
2845 2866
2846 - } elsif ($ctx !~ /;/) {
2847 - if ($dstat ne '' &&
2848 - $dstat !~ /^(?:$Ident|-?$Constant)$/ &&
2849 - $dstat !~ /$exceptions/ &&
2850 - $dstat !~ /^\.$Ident\s*=/ &&
2851 - $dstat =~ /$Operators/)
2852 - {
2867 + if ($dstat =~ /;/) {
2868 + ERROR("MULTISTATEMENT_MACRO_USE_DO_WHILE",
2869 + "Macros with multiple statements should be enclosed in a do - while loop\n" . "$herectx");
2870 + } else {
2853 2871 ERROR("COMPLEX_MACRO",
2854 - "Macros with complex values should be enclosed in parenthesis\n" . "$here\n$ctx\n");
2872 + "Macros with complex values should be enclosed in parenthesis\n" . "$herectx");
2855 2873 }
2856 2874 }
2857 2875 }
··· 3111 3111 "__aligned(size) is preferred over __attribute__((aligned(size)))\n" . $herecurr);
3112 3112 }
3113 3113
3114 + # Check for __attribute__ format(printf, prefer __printf
3115 + if ($line =~ /\b__attribute__\s*\(\s*\(\s*format\s*\(\s*printf/) {
3116 + WARN("PREFER_PRINTF",
3117 + "__printf(string-index, first-to-check) is preferred over __attribute__((format(printf, string-index, first-to-check)))\n" . $herecurr);
3118 + }
3119 +
3114 3120 # check for sizeof(&)
3115 3121 if ($line =~ /\bsizeof\s*\(\s*\&/) {
3116 3122 WARN("SIZEOF_ADDRESS",
··· 3127 3121 if ($rawline =~ /\\$/ && $rawline =~ tr/"/"/ % 2) {
3128 3122 WARN("LINE_CONTINUATIONS",
3129 3123 "Avoid line continuations in quoted strings\n" . $herecurr);
3124 + }
3125 +
3126 + # Check for misused memsets
3127 + if (defined $stat &&
3128 + $stat =~ /^\+(?:.*?)\bmemset\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\,\s*$FuncArg\s*\)/s) {
3129 +
3130 + my $ms_addr = $2;
3131 + my $ms_val = $8;
3132 + my $ms_size = $14;
3133 +
3134 + if ($ms_size =~ /^(0x|)0$/i) {
3135 + ERROR("MEMSET",
3136 + "memset to 0's uses 0 as the 2nd argument, not the 3rd\n" . "$here\n$stat\n");
3137 + } elsif ($ms_size =~ /^(0x|)1$/i) {
3138 + WARN("MEMSET",
3139 + "single byte memset is suspicious. Swapped 2nd/3rd argument?\n" . "$here\n$stat\n");
3140 + }
3141 + }
3142 +
3143 + # typecasts on min/max could be min_t/max_t
3144 + if (defined $stat &&
3145 + $stat =~ /^\+(?:.*?)\b(min|max)\s*\(\s*$FuncArg\s*,\s*$FuncArg\s*\)/) {
3146 + if (defined $2 || defined $8) {
3147 + my $call = $1;
3148 + my $cast1 = deparenthesize($2);
3149 + my $arg1 = $3;
3150 + my $cast2 = deparenthesize($8);
3151 + my $arg2 = $9;
3152 + my $cast;
3153 +
3154 + if ($cast1 ne "" && $cast2 ne "") {
3155 + $cast = "$cast1 or $cast2";
3156 + } elsif ($cast1 ne "") {
3157 + $cast = $cast1;
3158 + } else {
3159 + $cast = $cast2;
3160 + }
3161 + WARN("MINMAX",
3162 + "$call() should probably be ${call}_t($cast, $arg1, $arg2)\n" . "$here\n$stat\n");
3163 + }
3130 3164 }
3131 3165
3132 3166 # check for new externs in .c files.
··· 3339 3293 $line =~ /DEVICE_ATTR.*S_IWUGO/ ) {
3340 3294 WARN("EXPORTED_WORLD_WRITABLE",
3341 3295 "Exporting world writable files is usually an error. Consider more restrictive permissions.\n" . $herecurr);
3342 - }
3343 -
3344 - # Check for memset with swapped arguments
3345 - if ($line =~ /memset.*\,(\ |)(0x|)0(\ |0|)\);/) {
3346 - ERROR("MEMSET",
3347 - "memset size is 3rd argument, not the second.\n" . $herecurr);
3348 3296 }
3349 3298
+1 -1
scripts/get_maintainer.pl
··· 95 95 "execute_cmd" => \&git_execute_cmd, 96 96 "available" => '(which("git") ne "") && (-d ".git")', 97 97 "find_signers_cmd" => 98 - "git log --no-color --since=\$email_git_since " . 98 + "git log --no-color --follow --since=\$email_git_since " . 99 99 '--format="GitCommit: %H%n' . 100 100 'GitAuthor: %an <%ae>%n' . 101 101 'GitDate: %aD%n' .
+17 -17
tools/perf/Documentation/examples.txt
··· 17 17 kmem:kmem_cache_alloc_node [Tracepoint event]
18 18 kmem:kfree [Tracepoint event]
19 19 kmem:kmem_cache_free [Tracepoint event]
20 - kmem:mm_page_free_direct [Tracepoint event]
21 - kmem:mm_pagevec_free [Tracepoint event]
20 + kmem:mm_page_free [Tracepoint event]
21 + kmem:mm_page_free_batched [Tracepoint event]
22 22 kmem:mm_page_alloc [Tracepoint event]
23 23 kmem:mm_page_alloc_zone_locked [Tracepoint event]
24 24 kmem:mm_page_pcpu_drain [Tracepoint event]
··· 29 29 run' are:
30 30
31 31 titan:~> perf stat -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc
32 - -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct ./hackbench 10
32 + -e kmem:mm_page_free_batched -e kmem:mm_page_free ./hackbench 10
33 33 Time: 0.575
34 34
35 35 Performance counter stats for './hackbench 10':
36 36
37 37 13857 kmem:mm_page_pcpu_drain
38 38 27576 kmem:mm_page_alloc
39 - 6025 kmem:mm_pagevec_free
40 - 20934 kmem:mm_page_free_direct
39 + 6025 kmem:mm_page_free_batched
40 + 20934 kmem:mm_page_free
41 41
42 42 0.613972165 seconds time elapsed
43 43
··· 45 45 'repeat the workload N times' feature of perf stat:
46 46
47 47 titan:~> perf stat --repeat 5 -e kmem:mm_page_pcpu_drain -e
48 - kmem:mm_page_alloc -e kmem:mm_pagevec_free -e
49 - kmem:mm_page_free_direct ./hackbench 10
48 + kmem:mm_page_alloc -e kmem:mm_page_free_batched -e
49 + kmem:mm_page_free ./hackbench 10
50 50 Time: 0.627
51 51 Time: 0.644
52 52 Time: 0.564
··· 57 57
58 58 12920 kmem:mm_page_pcpu_drain ( +- 3.359% )
59 59 25035 kmem:mm_page_alloc ( +- 3.783% )
60 - 6104 kmem:mm_pagevec_free ( +- 0.934% )
61 - 18376 kmem:mm_page_free_direct ( +- 4.941% )
60 + 6104 kmem:mm_page_free_batched ( +- 0.934% )
61 + 18376 kmem:mm_page_free ( +- 4.941% )
62 62
63 63 0.643954516 seconds time elapsed ( +- 2.363% )
64 64
··· 158 158 seconds:
159 159
160 160 titan:~/git> perf stat -a -e kmem:mm_page_pcpu_drain -e
161 - kmem:mm_page_alloc -e kmem:mm_pagevec_free -e
162 - kmem:mm_page_free_direct sleep 10
161 + kmem:mm_page_alloc -e kmem:mm_page_free_batched -e
162 + kmem:mm_page_free sleep 10
163 163
164 164 Performance counter stats for 'sleep 10':
165 165
166 166 171585 kmem:mm_page_pcpu_drain
167 167 322114 kmem:mm_page_alloc
168 - 73623 kmem:mm_pagevec_free
169 - 254115 kmem:mm_page_free_direct
168 + 73623 kmem:mm_page_free_batched
169 + 254115 kmem:mm_page_free
170 170
171 171 10.000591410 seconds time elapsed
172 172
··· 174 174 analysis done over ten 1-second intervals:
175 175
176 176 titan:~/git> perf stat --repeat 10 -a -e kmem:mm_page_pcpu_drain -e
177 - kmem:mm_page_alloc -e kmem:mm_pagevec_free -e
178 - kmem:mm_page_free_direct sleep 1
177 + kmem:mm_page_alloc -e kmem:mm_page_free_batched -e
178 + kmem:mm_page_free sleep 1
179 179
180 180 Performance counter stats for 'sleep 1' (10 runs):
181 181
182 182 17254 kmem:mm_page_pcpu_drain ( +- 3.709% )
183 183 34394 kmem:mm_page_alloc ( +- 4.617% )
184 - 7509 kmem:mm_pagevec_free ( +- 4.820% )
185 - 25653 kmem:mm_page_free_direct ( +- 3.672% )
184 + 7509 kmem:mm_page_free_batched ( +- 4.820% )
185 + 25653 kmem:mm_page_free ( +- 3.672% )
186 186
187 187 1.058135029 seconds time elapsed ( +- 3.089% )
188 188