Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge commit 'tracing/core' into tracing/kprobes

Conflicts:
kernel/trace/trace_export.c
kernel/trace/trace_kprobe.c

Merge reason: This topic branch lacks an important
build fix in tracing/core:

0dd7b74787eaf7858c6c573353a83c3e2766e674:
tracing: Fix double CPP substitution in TRACE_EVENT_FN

which prevents crashes caused by including tracepoint headers multiple times.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>

+7576 -4327
+3
Documentation/filesystems/9p.txt
··· 123 123 There are user and developer mailing lists available through the v9fs project 124 124 on sourceforge (http://sourceforge.net/projects/v9fs). 125 125 126 + A stand-alone version of the module (which should build for any 2.6 kernel) 127 + is available via (http://github.com/ericvh/9p-sac/tree/master) 128 + 126 129 News and other information is maintained on SWiK (http://swik.net/v9fs). 127 130 128 131 Bug reports may be issued through the kernel.org bugzilla
+12 -14
Documentation/filesystems/afs.txt
··· 23 23 24 24 (*) Security (currently only AFS kaserver and KerberosIV tickets). 25 25 26 - (*) File reading. 26 + (*) File reading and writing. 27 27 28 28 (*) Automounting. 29 29 30 + (*) Local caching (via fscache). 31 + 30 32 It does not yet support the following AFS features: 31 - 32 - (*) Write support. 33 - 34 - (*) Local caching. 35 33 36 34 (*) pioctl() system call. 37 35 ··· 54 56 the masks in the following files: 55 57 56 58 /sys/module/af_rxrpc/parameters/debug 57 - /sys/module/afs/parameters/debug 59 + /sys/module/kafs/parameters/debug 58 60 59 61 60 62 ===== ··· 64 66 When inserting the driver modules the root cell must be specified along with a 65 67 list of volume location server IP addresses: 66 68 67 - insmod af_rxrpc.o 68 - insmod rxkad.o 69 - insmod kafs.o rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91 69 + modprobe af_rxrpc 70 + modprobe rxkad 71 + modprobe kafs rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91 70 72 71 73 The first module is the AF_RXRPC network protocol driver. This provides the 72 74 RxRPC remote operation protocol and may also be accessed from userspace. See: ··· 79 81 Once the module has been loaded, more modules can be added by the following 80 82 procedure: 81 83 82 - echo add grand.central.org 18.7.14.88:128.2.191.224 >/proc/fs/afs/cells 84 + echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 >/proc/fs/afs/cells 83 85 84 86 Where the parameters to the "add" command are the name of a cell and a list of 85 87 volume location servers within that cell, with the latter separated by colons. ··· 99 101 specify connection to only volumes of those types. 100 102 101 103 The name of the cell is optional, and if not given during a mount, then the 102 - named volume will be looked up in the cell specified during insmod. 104 + named volume will be looked up in the cell specified during modprobe. 103 105 104 106 Additional cells can be added through /proc (see later section). 
105 107 ··· 161 163 162 164 The filesystem maintains an internal database of all the cells it knows and the 163 165 IP addresses of the volume location servers for those cells. The cell to which 164 - the system belongs is added to the database when insmod is performed by the 166 + the system belongs is added to the database when modprobe is performed by the 165 167 "rootcell=" argument or, if compiled in, using a "kafs.rootcell=" argument on 166 168 the kernel command line. 167 169 168 170 Further cells can be added by commands similar to the following: 169 171 170 172 echo add CELLNAME VLADDR[:VLADDR][:VLADDR]... >/proc/fs/afs/cells 171 - echo add grand.central.org 18.7.14.88:128.2.191.224 >/proc/fs/afs/cells 173 + echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 >/proc/fs/afs/cells 172 174 173 175 No other cell database operations are available at this time. 174 176 ··· 231 233 mount -t afs \%root.afs. /afs 232 234 mount -t afs \%cambridge.redhat.com:root.cell. /afs/cambridge.redhat.com/ 233 235 234 - echo add grand.central.org 18.7.14.88:128.2.191.224 > /proc/fs/afs/cells 236 + echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 > /proc/fs/afs/cells 235 237 mount -t afs "#grand.central.org:root.cell." /afs/grand.central.org/ 236 238 mount -t afs "#grand.central.org:root.archive." /afs/grand.central.org/archive 237 239 mount -t afs "#grand.central.org:root.contrib." /afs/grand.central.org/contrib
+5 -10
Documentation/filesystems/proc.txt
··· 1167 1167 3.1 /proc/<pid>/oom_adj - Adjust the oom-killer score 1168 1168 ------------------------------------------------------ 1169 1169 1170 - This file can be used to adjust the score used to select which processes should 1171 - be killed in an out-of-memory situation. The oom_adj value is a characteristic 1172 - of the task's mm, so all threads that share an mm with pid will have the same 1173 - oom_adj value. A high value will increase the likelihood of this process being 1174 - killed by the oom-killer. Valid values are in the range -16 to +15 as 1175 - explained below and a special value of -17, which disables oom-killing 1176 - altogether for threads sharing pid's mm. 1170 + This file can be used to adjust the score used to select which processes 1171 + should be killed in an out-of-memory situation. Giving it a high score will 1172 + increase the likelihood of this process being killed by the oom-killer. Valid 1173 + values are in the range -16 to +15, plus the special value -17, which disables 1174 + oom-killing altogether for this process. 1177 1175 1178 1176 The process to be killed in an out-of-memory situation is selected among all others 1179 1177 based on its badness score. This value equals the original memory size of the process ··· 1184 1186 the parent's score if they do not share the same memory. Thus forking servers 1185 1187 are the prime candidates to be killed. Having only one 'hungry' child will make 1186 1188 parent less preferable than the child. 1187 - 1188 - /proc/<pid>/oom_adj cannot be changed for kthreads since they are immune from 1189 - oom-killing already. 1190 1189 1191 1190 /proc/<pid>/oom_score shows process' current badness score. 1192 1191
+4
Documentation/kernel-parameters.txt
··· 1115 1115 libata.dma=4 Compact Flash DMA only 1116 1116 Combinations also work, so libata.dma=3 enables DMA 1117 1117 for disks and CDROMs, but not CFs. 1118 + 1119 + libata.ignore_hpa= [LIBATA] Ignore HPA limit 1120 + libata.ignore_hpa=0 keep BIOS limits (default) 1121 + libata.ignore_hpa=1 ignore limits, using full disk 1118 1122 1119 1123 libata.noacpi [LIBATA] Disables use of ACPI in libata suspend/resume 1120 1124 when set.
+36 -32
Documentation/trace/ftrace.txt
··· 85 85 This file holds the output of the trace in a human 86 86 readable format (described below). 87 87 88 - latency_trace: 89 - 90 - This file shows the same trace but the information 91 - is organized more to display possible latencies 92 - in the system (described below). 93 - 94 88 trace_pipe: 95 89 96 90 The output is the same as the "trace" file but this 97 91 file is meant to be streamed with live tracing. 98 - Reads from this file will block until new data 99 - is retrieved. Unlike the "trace" and "latency_trace" 100 - files, this file is a consumer. This means reading 101 - from this file causes sequential reads to display 102 - more current data. Once data is read from this 103 - file, it is consumed, and will not be read 104 - again with a sequential read. The "trace" and 105 - "latency_trace" files are static, and if the 106 - tracer is not adding more data, they will display 107 - the same information every time they are read. 92 + Reads from this file will block until new data is 93 + retrieved. Unlike the "trace" file, this file is a 94 + consumer. This means reading from this file causes 95 + sequential reads to display more current data. Once 96 + data is read from this file, it is consumed, and 97 + will not be read again with a sequential read. The 98 + "trace" file is static, and if the tracer is not 99 + adding more data,they will display the same 100 + information every time they are read. 108 101 109 102 trace_options: 110 103 ··· 110 117 Some of the tracers record the max latency. 111 118 For example, the time interrupts are disabled. 112 119 This time is saved in this file. The max trace 113 - will also be stored, and displayed by either 114 - "trace" or "latency_trace". A new max trace will 115 - only be recorded if the latency is greater than 116 - the value in this file. (in microseconds) 120 + will also be stored, and displayed by "trace". 
121 + A new max trace will only be recorded if the 122 + latency is greater than the value in this 123 + file. (in microseconds) 117 124 118 125 buffer_size_kb: 119 126 ··· 203 210 the trace with the longest max latency. 204 211 See tracing_max_latency. When a new max is recorded, 205 212 it replaces the old trace. It is best to view this 206 - trace via the latency_trace file. 213 + trace with the latency-format option enabled. 207 214 208 215 "preemptoff" 209 216 ··· 300 307 Latency trace format 301 308 -------------------- 302 309 303 - For traces that display latency times, the latency_trace file 304 - gives somewhat more information to see why a latency happened. 310 + When the latency-format option is enabled, the trace file gives 311 + somewhat more information to see why a latency happened. 305 312 Here is a typical trace. 306 313 307 314 # tracer: irqsoff ··· 373 380 374 381 The above is mostly meaningful for kernel developers. 375 382 376 - time: This differs from the trace file output. The trace file output 377 - includes an absolute timestamp. The timestamp used by the 378 - latency_trace file is relative to the start of the trace. 383 + time: When the latency-format option is enabled, the trace file 384 + output includes a timestamp relative to the start of the 385 + trace. This differs from the output when latency-format 386 + is disabled, which includes an absolute timestamp. 379 387 380 388 delay: This is just to help catch your eye a bit better. And 381 389 needs to be fixed to be only relative to the same CPU. ··· 434 440 sym-addr: 435 441 bash-4000 [01] 1477.606694: simple_strtoul <c0339346> 436 442 437 - verbose - This deals with the latency_trace file. 443 + verbose - This deals with the trace file when the 444 + latency-format option is enabled. 
438 445 439 446 bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \ 440 447 (+0.000ms): simple_strtoul (strict_strtoul) ··· 467 472 the app is no longer running 468 473 469 474 The lookup is performed when you read 470 - trace,trace_pipe,latency_trace. Example: 475 + trace,trace_pipe. Example: 471 476 472 477 a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0 473 478 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6] ··· 476 481 every scheduling event. Will add overhead if 477 482 there's a lot of tasks running at once. 478 483 484 + latency-format - This option changes the trace. When 485 + it is enabled, the trace displays 486 + additional information about the 487 + latencies, as described in "Latency 488 + trace format". 479 489 480 490 sched_switch 481 491 ------------ ··· 596 596 an example: 597 597 598 598 # echo irqsoff > current_tracer 599 + # echo latency-format > trace_options 599 600 # echo 0 > tracing_max_latency 600 601 # echo 1 > tracing_enabled 601 602 # ls -ltr 602 603 [...] 603 604 # echo 0 > tracing_enabled 604 - # cat latency_trace 605 + # cat trace 605 606 # tracer: irqsoff 606 607 # 607 608 irqsoff latency trace v1.1.5 on 2.6.26 ··· 704 703 is much like the irqsoff tracer. 705 704 706 705 # echo preemptoff > current_tracer 706 + # echo latency-format > trace_options 707 707 # echo 0 > tracing_max_latency 708 708 # echo 1 > tracing_enabled 709 709 # ls -ltr 710 710 [...] 711 711 # echo 0 > tracing_enabled 712 - # cat latency_trace 712 + # cat trace 713 713 # tracer: preemptoff 714 714 # 715 715 preemptoff latency trace v1.1.5 on 2.6.26-rc8 ··· 852 850 tracers. 853 851 854 852 # echo preemptirqsoff > current_tracer 853 + # echo latency-format > trace_options 855 854 # echo 0 > tracing_max_latency 856 855 # echo 1 > tracing_enabled 857 856 # ls -ltr 858 857 [...] 
859 858 # echo 0 > tracing_enabled 860 - # cat latency_trace 859 + # cat trace 861 860 # tracer: preemptirqsoff 862 861 # 863 862 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8 ··· 1015 1012 'chrt' which changes the priority of the task. 1016 1013 1017 1014 # echo wakeup > current_tracer 1015 + # echo latency-format > trace_options 1018 1016 # echo 0 > tracing_max_latency 1019 1017 # echo 1 > tracing_enabled 1020 1018 # chrt -f 5 sleep 1 1021 1019 # echo 0 > tracing_enabled 1022 - # cat latency_trace 1020 + # cat trace 1023 1021 # tracer: wakeup 1024 1022 # 1025 1023 wakeup latency trace v1.1.5 on 2.6.26-rc8
+1 -1
Documentation/video4linux/CARDLIST.em28xx
··· 1 1 0 -> Unknown EM2800 video grabber (em2800) [eb1a:2800] 2 - 1 -> Unknown EM2750/28xx video grabber (em2820/em2840) [eb1a:2820,eb1a:2821,eb1a:2860,eb1a:2861,eb1a:2870,eb1a:2881,eb1a:2883] 2 + 1 -> Unknown EM2750/28xx video grabber (em2820/em2840) [eb1a:2710,eb1a:2820,eb1a:2821,eb1a:2860,eb1a:2861,eb1a:2870,eb1a:2881,eb1a:2883] 3 3 2 -> Terratec Cinergy 250 USB (em2820/em2840) [0ccd:0036] 4 4 3 -> Pinnacle PCTV USB 2 (em2820/em2840) [2304:0208] 5 5 4 -> Hauppauge WinTV USB 2 (em2820/em2840) [2040:4200,2040:4201]
+2 -2
Documentation/video4linux/CARDLIST.saa7134
··· 153 153 152 -> Asus Tiger Rev:1.00 [1043:4857] 154 154 153 -> Kworld Plus TV Analog Lite PCI [17de:7128] 155 155 154 -> Avermedia AVerTV GO 007 FM Plus [1461:f31d] 156 - 155 -> Hauppauge WinTV-HVR1120 ATSC/QAM-Hybrid [0070:6706,0070:6708] 157 - 156 -> Hauppauge WinTV-HVR1110r3 DVB-T/Hybrid [0070:6707,0070:6709,0070:670a] 156 + 155 -> Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid [0070:6706,0070:6708] 157 + 156 -> Hauppauge WinTV-HVR1120 DVB-T/Hybrid [0070:6707,0070:6709,0070:670a] 158 158 157 -> Avermedia AVerTV Studio 507UA [1461:a11b] 159 159 158 -> AVerMedia Cardbus TV/Radio (E501R) [1461:b7e9] 160 160 159 -> Beholder BeholdTV 505 RDS [0000:505B]
+15 -2
MAINTAINERS
··· 904 904 905 905 ATLX ETHERNET DRIVERS 906 906 M: Jay Cliburn <jcliburn@gmail.com> 907 - M: Chris Snook <csnook@redhat.com> 907 + M: Chris Snook <chris.snook@gmail.com> 908 908 M: Jie Yang <jie.yang@atheros.com> 909 909 L: atl1-devel@lists.sourceforge.net 910 910 W: http://sourceforge.net/projects/atl1 ··· 2238 2238 S: Maintained 2239 2239 F: drivers/media/video/gspca/pac207.c 2240 2240 2241 + GSPCA SN9C20X SUBDRIVER 2242 + M: Brian Johnson <brijohn@gmail.com> 2243 + L: linux-media@vger.kernel.org 2244 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git 2245 + S: Maintained 2246 + F: drivers/media/video/gspca/sn9c20x.c 2247 + 2241 2248 GSPCA T613 SUBDRIVER 2242 2249 M: Leandro Costantino <lcostantino@gmail.com> 2243 2250 L: linux-media@vger.kernel.org ··· 3428 3421 3429 3422 MULTIMEDIA CARD (MMC), SECURE DIGITAL (SD) AND SDIO SUBSYSTEM 3430 3423 S: Orphan 3424 + L: linux-mmc@vger.kernel.org 3431 3425 F: drivers/mmc/ 3432 3426 F: include/linux/mmc/ 3433 3427 ··· 3563 3555 S: Maintained 3564 3556 F: net/ 3565 3557 F: include/net/ 3558 + F: include/linux/in.h 3559 + F: include/linux/net.h 3560 + F: include/linux/netdevice.h 3566 3561 3567 3562 NETWORKING [IPv4/IPv6] 3568 3563 M: "David S. Miller" <davem@davemloft.net> ··· 3601 3590 T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.git 3602 3591 S: Odd Fixes 3603 3592 F: drivers/net/ 3593 + F: include/linux/if_* 3594 + F: include/linux/*device.h 3604 3595 3605 3596 NETXEN (1/10) GbE SUPPORT 3606 3597 M: Dhananjay Phadke <dhananjay@netxen.com> ··· 3809 3796 T: git git://git.open-osd.org/open-osd.git 3810 3797 S: Maintained 3811 3798 F: drivers/scsi/osd/ 3812 - F: drivers/include/scsi/osd_* 3799 + F: include/scsi/osd_* 3813 3800 F: fs/exofs/ 3814 3801 3815 3802 P54 WIRELESS DRIVER
+1 -1
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 31 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc9 5 5 NAME = Man-Eating Seals of Antiquity 6 6 7 7 # *DOCUMENTATION*
+4 -1
REPORTING-BUGS
··· 15 15 to the person responsible for the code relevant to what you were doing. 16 16 If it occurs repeatably try and describe how to recreate it. That is 17 17 worth even more than the oops itself. The list of maintainers and 18 - mailing lists is in the MAINTAINERS file in this directory. 18 + mailing lists is in the MAINTAINERS file in this directory. If you 19 + know the file name that causes the problem you can use the following 20 + command in this directory to find some of the maintainers of that file: 21 + perl scripts/get_maintainer.pl -f <filename> 19 22 20 23 If it is a security bug, please copy the Security Contact listed 21 24 in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
+1 -1
arch/arm/configs/kirkwood_defconfig
··· 629 629 CONFIG_ATA=y 630 630 # CONFIG_ATA_NONSTANDARD is not set 631 631 CONFIG_SATA_PMP=y 632 - # CONFIG_SATA_AHCI is not set 632 + CONFIG_SATA_AHCI=y 633 633 # CONFIG_SATA_SIL24 is not set 634 634 CONFIG_ATA_SFF=y 635 635 # CONFIG_SATA_SVW is not set
+4 -3
arch/arm/configs/rx51_defconfig
··· 282 282 # 283 283 CONFIG_ZBOOT_ROM_TEXT=0x0 284 284 CONFIG_ZBOOT_ROM_BSS=0x0 285 - CONFIG_CMDLINE="init=/sbin/preinit ubi.mtd=rootfs root=ubi0:rootfs rootfstype=ubifs rootflags=bulk_read,no_chk_data_crc rw console=ttyMTD,log console=tty0" 285 + CONFIG_CMDLINE="init=/sbin/preinit ubi.mtd=rootfs root=ubi0:rootfs rootfstype=ubifs rootflags=bulk_read,no_chk_data_crc rw console=ttyMTD,log console=tty0 console=ttyS2,115200n8" 286 286 # CONFIG_XIP_KERNEL is not set 287 287 # CONFIG_KEXEC is not set 288 288 ··· 1354 1354 # CONFIG_USB_GPIO_VBUS is not set 1355 1355 # CONFIG_ISP1301_OMAP is not set 1356 1356 CONFIG_TWL4030_USB=y 1357 - CONFIG_MMC=m 1357 + CONFIG_MMC=y 1358 1358 # CONFIG_MMC_DEBUG is not set 1359 1359 # CONFIG_MMC_UNSAFE_RESUME is not set 1360 1360 ··· 1449 1449 # on-CPU RTC drivers 1450 1450 # 1451 1451 # CONFIG_DMADEVICES is not set 1452 - # CONFIG_REGULATOR is not set 1452 + CONFIG_REGULATOR=y 1453 + CONFIG_REGULATOR_TWL4030=y 1453 1454 # CONFIG_UIO is not set 1454 1455 # CONFIG_STAGING is not set 1455 1456
+2 -1
arch/arm/include/asm/setup.h
··· 201 201 struct membank { 202 202 unsigned long start; 203 203 unsigned long size; 204 - int node; 204 + unsigned short node; 205 + unsigned short highmem; 205 206 }; 206 207 207 208 struct meminfo {
+1 -1
arch/arm/mach-ixp4xx/include/mach/io.h
··· 17 17 18 18 #include <mach/hardware.h> 19 19 20 - #define IO_SPACE_LIMIT 0xffff0000 20 + #define IO_SPACE_LIMIT 0x0000ffff 21 21 22 22 extern int (*ixp4xx_pci_read)(u32 addr, u32 cmd, u32* data); 23 23 extern int ixp4xx_pci_write(u32 addr, u32 cmd, u32 data);
+9
arch/arm/mach-kirkwood/ts219-setup.c
··· 206 206 207 207 } 208 208 209 + static int __init ts219_pci_init(void) 210 + { 211 + if (machine_is_ts219()) 212 + kirkwood_pcie_init(); 213 + 214 + return 0; 215 + } 216 + subsys_initcall(ts219_pci_init); 217 + 209 218 MACHINE_START(TS219, "QNAP TS-119/TS-219") 210 219 /* Maintainer: Martin Michlmayr <tbm@cyrius.com> */ 211 220 .phys_io = KIRKWOOD_REGS_PHYS_BASE,
+1 -1
arch/arm/mach-mx3/mx31moboard-devboard.c
··· 63 63 64 64 static int devboard_sdhc2_get_ro(struct device *dev) 65 65 { 66 - return gpio_get_value(SDHC2_WP); 66 + return !gpio_get_value(SDHC2_WP); 67 67 } 68 68 69 69 static int devboard_sdhc2_init(struct device *dev, irq_handler_t detect_irq,
+1 -1
arch/arm/mach-mx3/mx31moboard-marxbot.c
··· 67 67 68 68 static int marxbot_sdhc2_get_ro(struct device *dev) 69 69 { 70 - return gpio_get_value(SDHC2_WP); 70 + return !gpio_get_value(SDHC2_WP); 71 71 } 72 72 73 73 static int marxbot_sdhc2_init(struct device *dev, irq_handler_t detect_irq,
+1 -1
arch/arm/mach-mx3/mx31moboard.c
··· 94 94 95 95 static int moboard_sdhc1_get_ro(struct device *dev) 96 96 { 97 - return gpio_get_value(SDHC1_WP); 97 + return !gpio_get_value(SDHC1_WP); 98 98 } 99 99 100 100 static int moboard_sdhc1_init(struct device *dev, irq_handler_t detect_irq,
-9
arch/arm/mach-mx3/pcm037_eet.c
··· 24 24 #include "devices.h" 25 25 26 26 static unsigned int pcm037_eet_pins[] = { 27 - /* SPI #1 */ 28 - MX31_PIN_CSPI1_MISO__MISO, 29 - MX31_PIN_CSPI1_MOSI__MOSI, 30 - MX31_PIN_CSPI1_SCLK__SCLK, 31 - MX31_PIN_CSPI1_SPI_RDY__SPI_RDY, 32 - MX31_PIN_CSPI1_SS0__SS0, 33 - MX31_PIN_CSPI1_SS1__SS1, 34 - MX31_PIN_CSPI1_SS2__SS2, 35 - 36 27 /* Reserve and hardwire GPIO 57 high - S6E63D6 chipselect */ 37 28 IOMUX_MODE(MX31_PIN_KEY_COL7, IOMUX_CONFIG_GPIO), 38 29 /* GPIO keys */
+1 -1
arch/arm/mach-omap2/board-2430sdp.c
··· 141 141 142 142 static void __init omap_2430sdp_init_irq(void) 143 143 { 144 - omap2_init_common_hw(NULL); 144 + omap2_init_common_hw(NULL, NULL); 145 145 omap_init_irq(); 146 146 omap_gpio_init(); 147 147 }
+1 -1
arch/arm/mach-omap2/board-3430sdp.c
··· 169 169 170 170 static void __init omap_3430sdp_init_irq(void) 171 171 { 172 - omap2_init_common_hw(hyb18m512160af6_sdrc_params); 172 + omap2_init_common_hw(hyb18m512160af6_sdrc_params, NULL); 173 173 omap_init_irq(); 174 174 omap_gpio_init(); 175 175 }
+1 -1
arch/arm/mach-omap2/board-4430sdp.c
··· 59 59 60 60 static void __init omap_4430sdp_init_irq(void) 61 61 { 62 - omap2_init_common_hw(NULL); 62 + omap2_init_common_hw(NULL, NULL); 63 63 #ifdef CONFIG_OMAP_32K_TIMER 64 64 omap2_gp_clockevent_set_gptimer(1); 65 65 #endif
+1 -1
arch/arm/mach-omap2/board-apollon.c
··· 250 250 251 251 static void __init omap_apollon_init_irq(void) 252 252 { 253 - omap2_init_common_hw(NULL); 253 + omap2_init_common_hw(NULL, NULL); 254 254 omap_init_irq(); 255 255 omap_gpio_init(); 256 256 apollon_init_smc91x();
+1 -1
arch/arm/mach-omap2/board-generic.c
··· 33 33 34 34 static void __init omap_generic_init_irq(void) 35 35 { 36 - omap2_init_common_hw(NULL); 36 + omap2_init_common_hw(NULL, NULL); 37 37 omap_init_irq(); 38 38 } 39 39
+1 -1
arch/arm/mach-omap2/board-h4.c
··· 270 270 271 271 static void __init omap_h4_init_irq(void) 272 272 { 273 - omap2_init_common_hw(NULL); 273 + omap2_init_common_hw(NULL, NULL); 274 274 omap_init_irq(); 275 275 omap_gpio_init(); 276 276 h4_init_flash();
+1 -1
arch/arm/mach-omap2/board-ldp.c
··· 270 270 271 271 static void __init omap_ldp_init_irq(void) 272 272 { 273 - omap2_init_common_hw(NULL); 273 + omap2_init_common_hw(NULL, NULL); 274 274 omap_init_irq(); 275 275 omap_gpio_init(); 276 276 ldp_init_smsc911x();
+6 -1
arch/arm/mach-omap2/board-omap3beagle.c
··· 282 282 283 283 static void __init omap3_beagle_init_irq(void) 284 284 { 285 - omap2_init_common_hw(mt46h32m32lf6_sdrc_params); 285 + omap2_init_common_hw(mt46h32m32lf6_sdrc_params, 286 + mt46h32m32lf6_sdrc_params); 286 287 omap_init_irq(); 287 288 #ifdef CONFIG_OMAP_32K_TIMER 288 289 omap2_gp_clockevent_set_gptimer(12); ··· 409 408 410 409 usb_musb_init(); 411 410 omap3beagle_flash_init(); 411 + 412 + /* Ensure SDRC pins are mux'd for self-refresh */ 413 + omap_cfg_reg(H16_34XX_SDRC_CKE0); 414 + omap_cfg_reg(H17_34XX_SDRC_CKE1); 412 415 } 413 416 414 417 static void __init omap3_beagle_map_io(void)
+1 -1
arch/arm/mach-omap2/board-omap3evm.c
··· 280 280 281 281 static void __init omap3_evm_init_irq(void) 282 282 { 283 - omap2_init_common_hw(mt46h32m32lf6_sdrc_params); 283 + omap2_init_common_hw(mt46h32m32lf6_sdrc_params, NULL); 284 284 omap_init_irq(); 285 285 omap_gpio_init(); 286 286 omap3evm_init_smc911x();
+7 -1
arch/arm/mach-omap2/board-omap3pandora.c
··· 40 40 #include <mach/mcspi.h> 41 41 #include <mach/usb.h> 42 42 #include <mach/keypad.h> 43 + #include <mach/mux.h> 43 44 44 45 #include "sdram-micron-mt46h32m32lf-6.h" 45 46 #include "mmc-twl4030.h" ··· 311 310 312 311 static void __init omap3pandora_init_irq(void) 313 312 { 314 - omap2_init_common_hw(mt46h32m32lf6_sdrc_params); 313 + omap2_init_common_hw(mt46h32m32lf6_sdrc_params, 314 + mt46h32m32lf6_sdrc_params); 315 315 omap_init_irq(); 316 316 omap_gpio_init(); 317 317 } ··· 399 397 omap3pandora_ads7846_init(); 400 398 pandora_keys_gpio_init(); 401 399 usb_musb_init(); 400 + 401 + /* Ensure SDRC pins are mux'd for self-refresh */ 402 + omap_cfg_reg(H16_34XX_SDRC_CKE0); 403 + omap_cfg_reg(H17_34XX_SDRC_CKE1); 402 404 } 403 405 404 406 static void __init omap3pandora_map_io(void)
+9 -2
arch/arm/mach-omap2/board-overo.c
··· 44 44 #include <mach/gpmc.h> 45 45 #include <mach/hardware.h> 46 46 #include <mach/nand.h> 47 + #include <mach/mux.h> 47 48 #include <mach/usb.h> 48 49 49 50 #include "sdram-micron-mt46h32m32lf-6.h" ··· 52 51 53 52 #define OVERO_GPIO_BT_XGATE 15 54 53 #define OVERO_GPIO_W2W_NRESET 16 54 + #define OVERO_GPIO_PENDOWN 114 55 55 #define OVERO_GPIO_BT_NRESET 164 56 56 #define OVERO_GPIO_USBH_CPEN 168 57 57 #define OVERO_GPIO_USBH_NRESET 183 ··· 148 146 .name = "smsc911x", 149 147 .id = -1, 150 148 .num_resources = ARRAY_SIZE(overo_smsc911x_resources), 151 - .resource = &overo_smsc911x_resources, 149 + .resource = overo_smsc911x_resources, 152 150 .dev = { 153 151 .platform_data = &overo_smsc911x_config, 154 152 }, ··· 362 360 363 361 static void __init overo_init_irq(void) 364 362 { 365 - omap2_init_common_hw(mt46h32m32lf6_sdrc_params); 363 + omap2_init_common_hw(mt46h32m32lf6_sdrc_params, 364 + mt46h32m32lf6_sdrc_params); 366 365 omap_init_irq(); 367 366 omap_gpio_init(); 368 367 } ··· 397 394 usb_musb_init(); 398 395 overo_ads7846_init(); 399 396 overo_init_smsc911x(); 397 + 398 + /* Ensure SDRC pins are mux'd for self-refresh */ 399 + omap_cfg_reg(H16_34XX_SDRC_CKE0); 400 + omap_cfg_reg(H17_34XX_SDRC_CKE1); 400 401 401 402 if ((gpio_request(OVERO_GPIO_W2W_NRESET, 402 403 "OVERO_GPIO_W2W_NRESET") == 0) &&
+5
arch/arm/mach-omap2/board-rx51-peripherals.c
··· 278 278 .setup = rx51_twlgpio_setup, 279 279 }; 280 280 281 + static struct twl4030_usb_data rx51_usb_data = { 282 + .usb_mode = T2_USB_MODE_ULPI, 283 + }; 284 + 281 285 static struct twl4030_platform_data rx51_twldata = { 282 286 .irq_base = TWL4030_IRQ_BASE, 283 287 .irq_end = TWL4030_IRQ_END, ··· 290 286 .gpio = &rx51_gpio_data, 291 287 .keypad = &rx51_kp_data, 292 288 .madc = &rx51_madc_data, 289 + .usb = &rx51_usb_data, 293 290 294 291 .vaux1 = &rx51_vaux1, 295 292 .vaux2 = &rx51_vaux2,
+5 -1
arch/arm/mach-omap2/board-rx51.c
··· 61 61 62 62 static void __init rx51_init_irq(void) 63 63 { 64 - omap2_init_common_hw(NULL); 64 + omap2_init_common_hw(NULL, NULL); 65 65 omap_init_irq(); 66 66 omap_gpio_init(); 67 67 } ··· 75 75 omap_serial_init(); 76 76 usb_musb_init(); 77 77 rx51_peripherals_init(); 78 + 79 + /* Ensure SDRC pins are mux'd for self-refresh */ 80 + omap_cfg_reg(H16_34XX_SDRC_CKE0); 81 + omap_cfg_reg(H17_34XX_SDRC_CKE1); 78 82 } 79 83 80 84 static void __init rx51_map_io(void)
+1 -1
arch/arm/mach-omap2/board-zoom2.c
··· 25 25 26 26 static void __init omap_zoom2_init_irq(void) 27 27 { 28 - omap2_init_common_hw(NULL); 28 + omap2_init_common_hw(NULL, NULL); 29 29 omap_init_irq(); 30 30 omap_gpio_init(); 31 31 }
+83 -81
arch/arm/mach-omap2/clock.c
··· 27 27 #include <mach/clock.h> 28 28 #include <mach/clockdomain.h> 29 29 #include <mach/cpu.h> 30 + #include <mach/prcm.h> 30 31 #include <asm/div64.h> 31 32 32 33 #include <mach/sdrc.h> ··· 38 37 #include "cm.h" 39 38 #include "cm-regbits-24xx.h" 40 39 #include "cm-regbits-34xx.h" 41 - 42 - #define MAX_CLOCK_ENABLE_WAIT 100000 43 40 44 41 /* DPLL rate rounding: minimum DPLL multiplier, divider values */ 45 42 #define DPLL_MIN_MULTIPLIER 1 ··· 273 274 } 274 275 275 276 /** 276 - * omap2_wait_clock_ready - wait for clock to enable 277 - * @reg: physical address of clock IDLEST register 278 - * @mask: value to mask against to determine if the clock is active 279 - * @name: name of the clock (for printk) 277 + * omap2_clk_dflt_find_companion - find companion clock to @clk 278 + * @clk: struct clk * to find the companion clock of 279 + * @other_reg: void __iomem ** to return the companion clock CM_*CLKEN va in 280 + * @other_bit: u8 ** to return the companion clock bit shift in 280 281 * 281 - * Returns 1 if the clock enabled in time, or 0 if it failed to enable 282 - * in roughly MAX_CLOCK_ENABLE_WAIT microseconds. 283 - */ 284 - int omap2_wait_clock_ready(void __iomem *reg, u32 mask, const char *name) 285 - { 286 - int i = 0; 287 - int ena = 0; 288 - 289 - /* 290 - * 24xx uses 0 to indicate not ready, and 1 to indicate ready. 291 - * 34xx reverses this, just to keep us on our toes 292 - */ 293 - if (cpu_mask & (RATE_IN_242X | RATE_IN_243X)) 294 - ena = mask; 295 - else if (cpu_mask & RATE_IN_343X) 296 - ena = 0; 297 - 298 - /* Wait for lock */ 299 - while (((__raw_readl(reg) & mask) != ena) && 300 - (i++ < MAX_CLOCK_ENABLE_WAIT)) { 301 - udelay(1); 302 - } 303 - 304 - if (i <= MAX_CLOCK_ENABLE_WAIT) 305 - pr_debug("Clock %s stable after %d loops\n", name, i); 306 - else 307 - printk(KERN_ERR "Clock %s didn't enable in %d tries\n", 308 - name, MAX_CLOCK_ENABLE_WAIT); 309 - 310 - 311 - return (i < MAX_CLOCK_ENABLE_WAIT) ? 
1 : 0; 312 - }; 313 - 314 - 315 - /* 316 - * Note: We don't need special code here for INVERT_ENABLE 317 - * for the time being since INVERT_ENABLE only applies to clocks enabled by 282 + * Note: We don't need special code here for INVERT_ENABLE for the 283 + * time being since INVERT_ENABLE only applies to clocks enabled by 318 284 * CM_CLKEN_PLL 285 + * 286 + * Convert CM_ICLKEN* <-> CM_FCLKEN*. This conversion assumes it's 287 + * just a matter of XORing the bits. 288 + * 289 + * Some clocks don't have companion clocks. For example, modules with 290 + * only an interface clock (such as MAILBOXES) don't have a companion 291 + * clock. Right now, this code relies on the hardware exporting a bit 292 + * in the correct companion register that indicates that the 293 + * nonexistent 'companion clock' is active. Future patches will 294 + * associate this type of code with per-module data structures to 295 + * avoid this issue, and remove the casts. No return value. 319 296 */ 320 - static void omap2_clk_wait_ready(struct clk *clk) 297 + void omap2_clk_dflt_find_companion(struct clk *clk, void __iomem **other_reg, 298 + u8 *other_bit) 321 299 { 322 - void __iomem *reg, *other_reg, *st_reg; 323 - u32 bit; 324 - 325 - /* 326 - * REVISIT: This code is pretty ugly. It would be nice to generalize 327 - * it and pull it into struct clk itself somehow. 328 - */ 329 - reg = clk->enable_reg; 300 + u32 r; 330 301 331 302 /* 332 303 * Convert CM_ICLKEN* <-> CM_FCLKEN*. This conversion assumes 333 304 * it's just a matter of XORing the bits. 334 305 */ 335 - other_reg = (void __iomem *)((u32)reg ^ (CM_FCLKEN ^ CM_ICLKEN)); 306 + r = ((__force u32)clk->enable_reg ^ (CM_FCLKEN ^ CM_ICLKEN)); 336 307 337 - /* Check if both functional and interface clocks 338 - * are running. 
*/ 339 - bit = 1 << clk->enable_bit; 340 - if (!(__raw_readl(other_reg) & bit)) 341 - return; 342 - st_reg = (void __iomem *)(((u32)other_reg & ~0xf0) | 0x20); /* CM_IDLEST* */ 343 - 344 - omap2_wait_clock_ready(st_reg, bit, clk->name); 308 + *other_reg = (__force void __iomem *)r; 309 + *other_bit = clk->enable_bit; 345 310 } 346 311 347 - static int omap2_dflt_clk_enable(struct clk *clk) 312 + /** 313 + * omap2_clk_dflt_find_idlest - find CM_IDLEST reg va, bit shift for @clk 314 + * @clk: struct clk * to find IDLEST info for 315 + * @idlest_reg: void __iomem ** to return the CM_IDLEST va in 316 + * @idlest_bit: u8 ** to return the CM_IDLEST bit shift in 317 + * 318 + * Return the CM_IDLEST register address and bit shift corresponding 319 + * to the module that "owns" this clock. This default code assumes 320 + * that the CM_IDLEST bit shift is the CM_*CLKEN bit shift, and that 321 + * the IDLEST register address ID corresponds to the CM_*CLKEN 322 + * register address ID (e.g., that CM_FCLKEN2 corresponds to 323 + * CM_IDLEST2). This is not true for all modules. No return value. 324 + */ 325 + void omap2_clk_dflt_find_idlest(struct clk *clk, void __iomem **idlest_reg, 326 + u8 *idlest_bit) 327 + { 328 + u32 r; 329 + 330 + r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20); 331 + *idlest_reg = (__force void __iomem *)r; 332 + *idlest_bit = clk->enable_bit; 333 + } 334 + 335 + /** 336 + * omap2_module_wait_ready - wait for an OMAP module to leave IDLE 337 + * @clk: struct clk * belonging to the module 338 + * 339 + * If the necessary clocks for the OMAP hardware IP block that 340 + * corresponds to clock @clk are enabled, then wait for the module to 341 + * indicate readiness (i.e., to leave IDLE). This code does not 342 + * belong in the clock code and will be moved in the medium term to 343 + * module-dependent code. No return value. 
344 + */ 345 + static void omap2_module_wait_ready(struct clk *clk) 346 + { 347 + void __iomem *companion_reg, *idlest_reg; 348 + u8 other_bit, idlest_bit; 349 + 350 + /* Not all modules have multiple clocks that their IDLEST depends on */ 351 + if (clk->ops->find_companion) { 352 + clk->ops->find_companion(clk, &companion_reg, &other_bit); 353 + if (!(__raw_readl(companion_reg) & (1 << other_bit))) 354 + return; 355 + } 356 + 357 + clk->ops->find_idlest(clk, &idlest_reg, &idlest_bit); 358 + 359 + omap2_cm_wait_idlest(idlest_reg, (1 << idlest_bit), clk->name); 360 + } 361 + 362 + int omap2_dflt_clk_enable(struct clk *clk) 348 363 { 349 364 u32 v; 350 365 351 366 if (unlikely(clk->enable_reg == NULL)) { 352 - printk(KERN_ERR "clock.c: Enable for %s without enable code\n", 367 + pr_err("clock.c: Enable for %s without enable code\n", 353 368 clk->name); 354 369 return 0; /* REVISIT: -EINVAL */ 355 370 } ··· 376 363 __raw_writel(v, clk->enable_reg); 377 364 v = __raw_readl(clk->enable_reg); /* OCP barrier */ 378 365 366 + if (clk->ops->find_idlest) 367 + omap2_module_wait_ready(clk); 368 + 379 369 return 0; 380 370 } 381 371 382 - static int omap2_dflt_clk_enable_wait(struct clk *clk) 383 - { 384 - int ret; 385 - 386 - if (!clk->enable_reg) { 387 - printk(KERN_ERR "clock.c: Enable for %s without enable code\n", 388 - clk->name); 389 - return 0; /* REVISIT: -EINVAL */ 390 - } 391 - 392 - ret = omap2_dflt_clk_enable(clk); 393 - if (ret == 0) 394 - omap2_clk_wait_ready(clk); 395 - return ret; 396 - } 397 - 398 - static void omap2_dflt_clk_disable(struct clk *clk) 372 + void omap2_dflt_clk_disable(struct clk *clk) 399 373 { 400 374 u32 v; 401 375 ··· 406 406 } 407 407 408 408 const struct clkops clkops_omap2_dflt_wait = { 409 - .enable = omap2_dflt_clk_enable_wait, 409 + .enable = omap2_dflt_clk_enable, 410 410 .disable = omap2_dflt_clk_disable, 411 + .find_companion = omap2_clk_dflt_find_companion, 412 + .find_idlest = omap2_clk_dflt_find_idlest, 411 413 }; 412 414 413 
415 const struct clkops clkops_omap2_dflt = {
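The clock.c refactor above splits "wait for the module" into two overridable lookups, and both defaults lean on PRCM address regularity: a CM_FCLKEN* register and its CM_ICLKEN* companion differ by a fixed XOR, and CM_IDLEST* sits at a fixed low offset within each register bank. A user-space sketch of that address arithmetic (the register values below are illustrative, not the real OMAP memory map):

```c
#include <stdint.h>

/* Illustrative register offsets; the real CM_FCLKEN/CM_ICLKEN values are
 * SoC-specific constants from the OMAP PRCM headers. */
#define CM_FCLKEN  0x00u
#define CM_ICLKEN  0x10u

struct clk {
    uint32_t enable_reg;   /* stands in for the kernel's void __iomem * */
    uint8_t  enable_bit;
};

/* Companion lookup: CM_FCLKEN* and CM_ICLKEN* live at addresses that
 * differ only in the bits of (CM_FCLKEN ^ CM_ICLKEN), so one XOR flips a
 * functional-clock enable register into its interface-clock twin, as in
 * omap2_clk_dflt_find_companion(). */
void find_companion(const struct clk *clk, uint32_t *other_reg,
                    uint8_t *other_bit)
{
    *other_reg = clk->enable_reg ^ (CM_FCLKEN ^ CM_ICLKEN);
    *other_bit = clk->enable_bit;
}

/* IDLEST lookup: clear the offset bits covered by 0xf0 and force the
 * CM_IDLEST offset (0x20), mirroring the patch's
 * ((u32)clk->enable_reg & ~0xf0) | 0x20 in omap2_clk_dflt_find_idlest(). */
void find_idlest(const struct clk *clk, uint32_t *idlest_reg,
                 uint8_t *idlest_bit)
{
    *idlest_reg = (clk->enable_reg & ~0xf0u) | 0x20u;
    *idlest_bit = clk->enable_bit;
}
```

Because the bit shift is passed back alongside the address, omap2_module_wait_ready() never has to know which module it is polling.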
+6
arch/arm/mach-omap2/clock.h
··· 65 65 u32 omap2_get_dpll_rate(struct clk *clk); 66 66 int omap2_wait_clock_ready(void __iomem *reg, u32 cval, const char *name); 67 67 void omap2_clk_prepare_for_reboot(void); 68 + int omap2_dflt_clk_enable(struct clk *clk); 69 + void omap2_dflt_clk_disable(struct clk *clk); 70 + void omap2_clk_dflt_find_companion(struct clk *clk, void __iomem **other_reg, 71 + u8 *other_bit); 72 + void omap2_clk_dflt_find_idlest(struct clk *clk, void __iomem **idlest_reg, 73 + u8 *idlest_bit); 68 74 69 75 extern const struct clkops clkops_omap2_dflt_wait; 70 76 extern const struct clkops clkops_omap2_dflt;
+35 -2
arch/arm/mach-omap2/clock24xx.c
··· 30 30 31 31 #include <mach/clock.h> 32 32 #include <mach/sram.h> 33 + #include <mach/prcm.h> 33 34 #include <asm/div64.h> 34 35 #include <asm/clkdev.h> 35 36 ··· 43 42 44 43 static const struct clkops clkops_oscck; 45 44 static const struct clkops clkops_fixed; 45 + 46 + static void omap2430_clk_i2chs_find_idlest(struct clk *clk, 47 + void __iomem **idlest_reg, 48 + u8 *idlest_bit); 49 + 50 + /* 2430 I2CHS has non-standard IDLEST register */ 51 + static const struct clkops clkops_omap2430_i2chs_wait = { 52 + .enable = omap2_dflt_clk_enable, 53 + .disable = omap2_dflt_clk_disable, 54 + .find_idlest = omap2430_clk_i2chs_find_idlest, 55 + .find_companion = omap2_clk_dflt_find_companion, 56 + }; 46 57 47 58 #include "clock24xx.h" 48 59 ··· 253 240 *-------------------------------------------------------------------------*/ 254 241 255 242 /** 243 + * omap2430_clk_i2chs_find_idlest - return CM_IDLEST info for 2430 I2CHS 244 + * @clk: struct clk * being enabled 245 + * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into 246 + * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into 247 + * 248 + * OMAP2430 I2CHS CM_IDLEST bits are in CM_IDLEST1_CORE, but the 249 + * CM_*CLKEN bits are in CM_{I,F}CLKEN2_CORE. This custom function 250 + * passes back the correct CM_IDLEST register address for I2CHS 251 + * modules. No return value. 
252 + */ 253 + static void omap2430_clk_i2chs_find_idlest(struct clk *clk, 254 + void __iomem **idlest_reg, 255 + u8 *idlest_bit) 256 + { 257 + *idlest_reg = OMAP_CM_REGADDR(CORE_MOD, CM_IDLEST); 258 + *idlest_bit = clk->enable_bit; 259 + } 260 + 261 + 262 + /** 256 263 * omap2xxx_clk_get_core_rate - return the CORE_CLK rate 257 264 * @clk: pointer to the combined dpll_ck + core_ck (currently "dpll_ck") 258 265 * ··· 358 325 else if (clk == &apll54_ck) 359 326 cval = OMAP24XX_ST_54M_APLL; 360 327 361 - omap2_wait_clock_ready(OMAP_CM_REGADDR(PLL_MOD, CM_IDLEST), cval, 362 - clk->name); 328 + omap2_cm_wait_idlest(OMAP_CM_REGADDR(PLL_MOD, CM_IDLEST), cval, 329 + clk->name); 363 330 364 331 /* 365 332 * REVISIT: Should we return an error code if omap2_wait_clock_ready()
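The point of making find_idlest a clkops hook is visible here: the 2430 I2CHS module keeps its enable bits in the *CLKEN2_CORE registers but reports readiness in CM_IDLEST1_CORE, so it supplies its own lookup while reusing the common enable/disable path. A hedged sketch of that dispatch (addresses and names below are placeholders, not the real PRCM layout):

```c
#include <stdint.h>

/* Hypothetical address standing in for OMAP_CM_REGADDR(CORE_MOD, CM_IDLEST). */
#define CM_IDLEST1_CORE 0x48008220u

struct clk;

struct clkops {
    void (*find_idlest)(const struct clk *clk, uint32_t *reg, uint8_t *bit);
};

struct clk {
    const struct clkops *ops;
    uint32_t enable_reg;
    uint8_t  enable_bit;
};

/* Default: derive the IDLEST register from the enable register. */
void dflt_find_idlest(const struct clk *clk, uint32_t *reg, uint8_t *bit)
{
    *reg = (clk->enable_reg & ~0xf0u) | 0x20u;
    *bit = clk->enable_bit;
}

/* 2430-I2CHS-style quirk: the ready bit is in CM_IDLEST1_CORE, so this
 * hook ignores enable_reg entirely, as the patch's custom function does. */
void i2chs_find_idlest(const struct clk *clk, uint32_t *reg, uint8_t *bit)
{
    *reg = CM_IDLEST1_CORE;
    *bit = clk->enable_bit;
}

const struct clkops dflt_ops  = { dflt_find_idlest };
const struct clkops i2chs_ops = { i2chs_find_idlest };

/* Callers never special-case modules; they just call through the hook. */
uint32_t idlest_reg_for(const struct clk *clk)
{
    uint32_t reg;
    uint8_t bit;

    clk->ops->find_idlest(clk, &reg, &bit);
    (void)bit;
    return reg;
}
```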
+2 -2
arch/arm/mach-omap2/clock24xx.h
··· 2337 2337 2338 2338 static struct clk i2chs2_fck = { 2339 2339 .name = "i2c_fck", 2340 - .ops = &clkops_omap2_dflt_wait, 2340 + .ops = &clkops_omap2430_i2chs_wait, 2341 2341 .id = 2, 2342 2342 .parent = &func_96m_ck, 2343 2343 .clkdm_name = "core_l4_clkdm", ··· 2370 2370 2371 2371 static struct clk i2chs1_fck = { 2372 2372 .name = "i2c_fck", 2373 - .ops = &clkops_omap2_dflt_wait, 2373 + .ops = &clkops_omap2430_i2chs_wait, 2374 2374 .id = 1, 2375 2375 .parent = &func_96m_ck, 2376 2376 .clkdm_name = "core_l4_clkdm",
+138 -15
arch/arm/mach-omap2/clock34xx.c
··· 2 2 * OMAP3-specific clock framework functions 3 3 * 4 4 * Copyright (C) 2007-2008 Texas Instruments, Inc. 5 - * Copyright (C) 2007-2008 Nokia Corporation 5 + * Copyright (C) 2007-2009 Nokia Corporation 6 6 * 7 7 * Written by Paul Walmsley 8 8 * Testing and integration fixes by Jouni Högander ··· 40 40 #include "cm-regbits-34xx.h" 41 41 42 42 static const struct clkops clkops_noncore_dpll_ops; 43 + 44 + static void omap3430es2_clk_ssi_find_idlest(struct clk *clk, 45 + void __iomem **idlest_reg, 46 + u8 *idlest_bit); 47 + static void omap3430es2_clk_hsotgusb_find_idlest(struct clk *clk, 48 + void __iomem **idlest_reg, 49 + u8 *idlest_bit); 50 + static void omap3430es2_clk_dss_usbhost_find_idlest(struct clk *clk, 51 + void __iomem **idlest_reg, 52 + u8 *idlest_bit); 53 + 54 + static const struct clkops clkops_omap3430es2_ssi_wait = { 55 + .enable = omap2_dflt_clk_enable, 56 + .disable = omap2_dflt_clk_disable, 57 + .find_idlest = omap3430es2_clk_ssi_find_idlest, 58 + .find_companion = omap2_clk_dflt_find_companion, 59 + }; 60 + 61 + static const struct clkops clkops_omap3430es2_hsotgusb_wait = { 62 + .enable = omap2_dflt_clk_enable, 63 + .disable = omap2_dflt_clk_disable, 64 + .find_idlest = omap3430es2_clk_hsotgusb_find_idlest, 65 + .find_companion = omap2_clk_dflt_find_companion, 66 + }; 67 + 68 + static const struct clkops clkops_omap3430es2_dss_usbhost_wait = { 69 + .enable = omap2_dflt_clk_enable, 70 + .disable = omap2_dflt_clk_disable, 71 + .find_idlest = omap3430es2_clk_dss_usbhost_find_idlest, 72 + .find_companion = omap2_clk_dflt_find_companion, 73 + }; 43 74 44 75 #include "clock34xx.h" 45 76 ··· 188 157 CLK(NULL, "fshostusb_fck", &fshostusb_fck, CK_3430ES1), 189 158 CLK(NULL, "core_12m_fck", &core_12m_fck, CK_343X), 190 159 CLK("omap_hdq.0", "fck", &hdq_fck, CK_343X), 191 - CLK(NULL, "ssi_ssr_fck", &ssi_ssr_fck, CK_343X), 192 - CLK(NULL, "ssi_sst_fck", &ssi_sst_fck, CK_343X), 160 + CLK(NULL, "ssi_ssr_fck", &ssi_ssr_fck_3430es1, CK_3430ES1), 161 + 
CLK(NULL, "ssi_ssr_fck", &ssi_ssr_fck_3430es2, CK_3430ES2), 162 + CLK(NULL, "ssi_sst_fck", &ssi_sst_fck_3430es1, CK_3430ES1), 163 + CLK(NULL, "ssi_sst_fck", &ssi_sst_fck_3430es2, CK_3430ES2), 193 164 CLK(NULL, "core_l3_ick", &core_l3_ick, CK_343X), 194 - CLK("musb_hdrc", "ick", &hsotgusb_ick, CK_343X), 165 + CLK("musb_hdrc", "ick", &hsotgusb_ick_3430es1, CK_3430ES1), 166 + CLK("musb_hdrc", "ick", &hsotgusb_ick_3430es2, CK_3430ES2), 195 167 CLK(NULL, "sdrc_ick", &sdrc_ick, CK_343X), 196 168 CLK(NULL, "gpmc_fck", &gpmc_fck, CK_343X), 197 169 CLK(NULL, "security_l3_ick", &security_l3_ick, CK_343X), ··· 227 193 CLK(NULL, "mailboxes_ick", &mailboxes_ick, CK_343X), 228 194 CLK(NULL, "omapctrl_ick", &omapctrl_ick, CK_343X), 229 195 CLK(NULL, "ssi_l4_ick", &ssi_l4_ick, CK_343X), 230 - CLK(NULL, "ssi_ick", &ssi_ick, CK_343X), 196 + CLK(NULL, "ssi_ick", &ssi_ick_3430es1, CK_3430ES1), 197 + CLK(NULL, "ssi_ick", &ssi_ick_3430es2, CK_3430ES2), 231 198 CLK(NULL, "usb_l4_ick", &usb_l4_ick, CK_3430ES1), 232 199 CLK(NULL, "security_l4_ick2", &security_l4_ick2, CK_343X), 233 200 CLK(NULL, "aes1_ick", &aes1_ick, CK_343X), 234 201 CLK("omap_rng", "ick", &rng_ick, CK_343X), 235 202 CLK(NULL, "sha11_ick", &sha11_ick, CK_343X), 236 203 CLK(NULL, "des1_ick", &des1_ick, CK_343X), 237 - CLK("omapfb", "dss1_fck", &dss1_alwon_fck, CK_343X), 204 + CLK("omapfb", "dss1_fck", &dss1_alwon_fck_3430es1, CK_3430ES1), 205 + CLK("omapfb", "dss1_fck", &dss1_alwon_fck_3430es2, CK_3430ES2), 238 206 CLK("omapfb", "tv_fck", &dss_tv_fck, CK_343X), 239 207 CLK("omapfb", "video_fck", &dss_96m_fck, CK_343X), 240 208 CLK("omapfb", "dss2_fck", &dss2_alwon_fck, CK_343X), 241 - CLK("omapfb", "ick", &dss_ick, CK_343X), 209 + CLK("omapfb", "ick", &dss_ick_3430es1, CK_3430ES1), 210 + CLK("omapfb", "ick", &dss_ick_3430es2, CK_3430ES2), 242 211 CLK(NULL, "cam_mclk", &cam_mclk, CK_343X), 243 212 CLK(NULL, "cam_ick", &cam_ick, CK_343X), 244 213 CLK(NULL, "csi2_96m_fck", &csi2_96m_fck, CK_343X), ··· 336 299 * 
2^MPURATE_BASE_SHIFT MHz for SDRC to stabilize 337 300 */ 338 301 #define SDRC_MPURATE_LOOPS 96 302 + 303 + /** 304 + * omap3430es2_clk_ssi_find_idlest - return CM_IDLEST info for SSI 305 + * @clk: struct clk * being enabled 306 + * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into 307 + * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into 308 + * 309 + * The OMAP3430ES2 SSI target CM_IDLEST bit is at a different shift 310 + * from the CM_{I,F}CLKEN bit. Pass back the correct info via 311 + * @idlest_reg and @idlest_bit. No return value. 312 + */ 313 + static void omap3430es2_clk_ssi_find_idlest(struct clk *clk, 314 + void __iomem **idlest_reg, 315 + u8 *idlest_bit) 316 + { 317 + u32 r; 318 + 319 + r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20); 320 + *idlest_reg = (__force void __iomem *)r; 321 + *idlest_bit = OMAP3430ES2_ST_SSI_IDLE_SHIFT; 322 + } 323 + 324 + /** 325 + * omap3430es2_clk_dss_usbhost_find_idlest - CM_IDLEST info for DSS, USBHOST 326 + * @clk: struct clk * being enabled 327 + * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into 328 + * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into 329 + * 330 + * Some OMAP modules on OMAP3 ES2+ chips have both initiator and 331 + * target IDLEST bits. For our purposes, we are concerned with the 332 + * target IDLEST bits, which exist at a different bit position than 333 + * the *CLKEN bit position for these modules (DSS and USBHOST) (The 334 + * default find_idlest code assumes that they are at the same 335 + * position.) No return value. 
336 + */ 337 + static void omap3430es2_clk_dss_usbhost_find_idlest(struct clk *clk, 338 + void __iomem **idlest_reg, 339 + u8 *idlest_bit) 340 + { 341 + u32 r; 342 + 343 + r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20); 344 + *idlest_reg = (__force void __iomem *)r; 345 + /* USBHOST_IDLE has same shift */ 346 + *idlest_bit = OMAP3430ES2_ST_DSS_IDLE_SHIFT; 347 + } 348 + 349 + /** 350 + * omap3430es2_clk_hsotgusb_find_idlest - return CM_IDLEST info for HSOTGUSB 351 + * @clk: struct clk * being enabled 352 + * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into 353 + * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into 354 + * 355 + * The OMAP3430ES2 HSOTGUSB target CM_IDLEST bit is at a different 356 + * shift from the CM_{I,F}CLKEN bit. Pass back the correct info via 357 + * @idlest_reg and @idlest_bit. No return value. 358 + */ 359 + static void omap3430es2_clk_hsotgusb_find_idlest(struct clk *clk, 360 + void __iomem **idlest_reg, 361 + u8 *idlest_bit) 362 + { 363 + u32 r; 364 + 365 + r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20); 366 + *idlest_reg = (__force void __iomem *)r; 367 + *idlest_bit = OMAP3430ES2_ST_HSOTGUSB_IDLE_SHIFT; 368 + } 339 369 340 370 /** 341 371 * omap3_dpll_recalc - recalculate DPLL rate ··· 829 725 u32 unlock_dll = 0; 830 726 u32 c; 831 727 unsigned long validrate, sdrcrate, mpurate; 832 - struct omap_sdrc_params *sp; 728 + struct omap_sdrc_params *sdrc_cs0; 729 + struct omap_sdrc_params *sdrc_cs1; 730 + int ret; 833 731 834 732 if (!clk || !rate) 835 733 return -EINVAL; ··· 849 743 else 850 744 sdrcrate >>= ((clk->rate / rate) >> 1); 851 745 852 - sp = omap2_sdrc_get_params(sdrcrate); 853 - if (!sp) 746 + ret = omap2_sdrc_get_params(sdrcrate, &sdrc_cs0, &sdrc_cs1); 747 + if (ret) 854 748 return -EINVAL; 855 749 856 750 if (sdrcrate < MIN_SDRC_DLL_LOCK_FREQ) { ··· 871 765 872 766 pr_debug("clock: changing CORE DPLL rate from %lu to %lu\n", clk->rate, 873 767 validrate); 874 - pr_debug("clock: SDRC 
timing params used: %08x %08x %08x\n", 875 - sp->rfr_ctrl, sp->actim_ctrla, sp->actim_ctrlb); 768 + pr_debug("clock: SDRC CS0 timing params used:" 769 + " RFR %08x CTRLA %08x CTRLB %08x MR %08x\n", 770 + sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla, 771 + sdrc_cs0->actim_ctrlb, sdrc_cs0->mr); 772 + if (sdrc_cs1) 773 + pr_debug("clock: SDRC CS1 timing params used: " 774 + " RFR %08x CTRLA %08x CTRLB %08x MR %08x\n", 775 + sdrc_cs1->rfr_ctrl, sdrc_cs1->actim_ctrla, 776 + sdrc_cs1->actim_ctrlb, sdrc_cs1->mr); 876 777 877 - omap3_configure_core_dpll(sp->rfr_ctrl, sp->actim_ctrla, 878 - sp->actim_ctrlb, new_div, unlock_dll, c, 879 - sp->mr, rate > clk->rate); 778 + if (sdrc_cs1) 779 + omap3_configure_core_dpll( 780 + new_div, unlock_dll, c, rate > clk->rate, 781 + sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla, 782 + sdrc_cs0->actim_ctrlb, sdrc_cs0->mr, 783 + sdrc_cs1->rfr_ctrl, sdrc_cs1->actim_ctrla, 784 + sdrc_cs1->actim_ctrlb, sdrc_cs1->mr); 785 + else 786 + omap3_configure_core_dpll( 787 + new_div, unlock_dll, c, rate > clk->rate, 788 + sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla, 789 + sdrc_cs0->actim_ctrlb, sdrc_cs0->mr, 790 + 0, 0, 0, 0); 880 791 881 792 return 0; 882 793 }
+74 -11
arch/arm/mach-omap2/clock34xx.h
··· 1568 1568 { .parent = NULL } 1569 1569 }; 1570 1570 1571 - static struct clk ssi_ssr_fck = { 1571 + static struct clk ssi_ssr_fck_3430es1 = { 1572 1572 .name = "ssi_ssr_fck", 1573 1573 .ops = &clkops_omap2_dflt, 1574 1574 .init = &omap2_init_clksel_parent, ··· 1581 1581 .recalc = &omap2_clksel_recalc, 1582 1582 }; 1583 1583 1584 - static struct clk ssi_sst_fck = { 1584 + static struct clk ssi_ssr_fck_3430es2 = { 1585 + .name = "ssi_ssr_fck", 1586 + .ops = &clkops_omap3430es2_ssi_wait, 1587 + .init = &omap2_init_clksel_parent, 1588 + .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_FCLKEN1), 1589 + .enable_bit = OMAP3430_EN_SSI_SHIFT, 1590 + .clksel_reg = OMAP_CM_REGADDR(CORE_MOD, CM_CLKSEL), 1591 + .clksel_mask = OMAP3430_CLKSEL_SSI_MASK, 1592 + .clksel = ssi_ssr_clksel, 1593 + .clkdm_name = "core_l4_clkdm", 1594 + .recalc = &omap2_clksel_recalc, 1595 + }; 1596 + 1597 + static struct clk ssi_sst_fck_3430es1 = { 1585 1598 .name = "ssi_sst_fck", 1586 1599 .ops = &clkops_null, 1587 - .parent = &ssi_ssr_fck, 1600 + .parent = &ssi_ssr_fck_3430es1, 1601 + .fixed_div = 2, 1602 + .recalc = &omap2_fixed_divisor_recalc, 1603 + }; 1604 + 1605 + static struct clk ssi_sst_fck_3430es2 = { 1606 + .name = "ssi_sst_fck", 1607 + .ops = &clkops_null, 1608 + .parent = &ssi_ssr_fck_3430es2, 1588 1609 .fixed_div = 2, 1589 1610 .recalc = &omap2_fixed_divisor_recalc, 1590 1611 }; ··· 1627 1606 .recalc = &followparent_recalc, 1628 1607 }; 1629 1608 1630 - static struct clk hsotgusb_ick = { 1609 + static struct clk hsotgusb_ick_3430es1 = { 1631 1610 .name = "hsotgusb_ick", 1632 - .ops = &clkops_omap2_dflt_wait, 1611 + .ops = &clkops_omap2_dflt, 1612 + .parent = &core_l3_ick, 1613 + .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1), 1614 + .enable_bit = OMAP3430_EN_HSOTGUSB_SHIFT, 1615 + .clkdm_name = "core_l3_clkdm", 1616 + .recalc = &followparent_recalc, 1617 + }; 1618 + 1619 + static struct clk hsotgusb_ick_3430es2 = { 1620 + .name = "hsotgusb_ick", 1621 + .ops = 
&clkops_omap3430es2_hsotgusb_wait, 1633 1622 .parent = &core_l3_ick, 1634 1623 .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1), 1635 1624 .enable_bit = OMAP3430_EN_HSOTGUSB_SHIFT, ··· 1978 1947 .recalc = &followparent_recalc, 1979 1948 }; 1980 1949 1981 - static struct clk ssi_ick = { 1950 + static struct clk ssi_ick_3430es1 = { 1982 1951 .name = "ssi_ick", 1983 1952 .ops = &clkops_omap2_dflt, 1953 + .parent = &ssi_l4_ick, 1954 + .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1), 1955 + .enable_bit = OMAP3430_EN_SSI_SHIFT, 1956 + .clkdm_name = "core_l4_clkdm", 1957 + .recalc = &followparent_recalc, 1958 + }; 1959 + 1960 + static struct clk ssi_ick_3430es2 = { 1961 + .name = "ssi_ick", 1962 + .ops = &clkops_omap3430es2_ssi_wait, 1984 1963 .parent = &ssi_l4_ick, 1985 1964 .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1), 1986 1965 .enable_bit = OMAP3430_EN_SSI_SHIFT, ··· 2065 2024 }; 2066 2025 2067 2026 /* DSS */ 2068 - static struct clk dss1_alwon_fck = { 2027 + static struct clk dss1_alwon_fck_3430es1 = { 2069 2028 .name = "dss1_alwon_fck", 2070 2029 .ops = &clkops_omap2_dflt, 2030 + .parent = &dpll4_m4x2_ck, 2031 + .enable_reg = OMAP_CM_REGADDR(OMAP3430_DSS_MOD, CM_FCLKEN), 2032 + .enable_bit = OMAP3430_EN_DSS1_SHIFT, 2033 + .clkdm_name = "dss_clkdm", 2034 + .recalc = &followparent_recalc, 2035 + }; 2036 + 2037 + static struct clk dss1_alwon_fck_3430es2 = { 2038 + .name = "dss1_alwon_fck", 2039 + .ops = &clkops_omap3430es2_dss_usbhost_wait, 2071 2040 .parent = &dpll4_m4x2_ck, 2072 2041 .enable_reg = OMAP_CM_REGADDR(OMAP3430_DSS_MOD, CM_FCLKEN), 2073 2042 .enable_bit = OMAP3430_EN_DSS1_SHIFT, ··· 2118 2067 .recalc = &followparent_recalc, 2119 2068 }; 2120 2069 2121 - static struct clk dss_ick = { 2070 + static struct clk dss_ick_3430es1 = { 2122 2071 /* Handles both L3 and L4 clocks */ 2123 2072 .name = "dss_ick", 2124 2073 .ops = &clkops_omap2_dflt, 2074 + .parent = &l4_ick, 2075 + .init = &omap2_init_clk_clkdm, 2076 + .enable_reg = 
OMAP_CM_REGADDR(OMAP3430_DSS_MOD, CM_ICLKEN), 2077 + .enable_bit = OMAP3430_CM_ICLKEN_DSS_EN_DSS_SHIFT, 2078 + .clkdm_name = "dss_clkdm", 2079 + .recalc = &followparent_recalc, 2080 + }; 2081 + 2082 + static struct clk dss_ick_3430es2 = { 2083 + /* Handles both L3 and L4 clocks */ 2084 + .name = "dss_ick", 2085 + .ops = &clkops_omap3430es2_dss_usbhost_wait, 2125 2086 .parent = &l4_ick, 2126 2087 .init = &omap2_init_clk_clkdm, 2127 2088 .enable_reg = OMAP_CM_REGADDR(OMAP3430_DSS_MOD, CM_ICLKEN), ··· 2181 2118 2182 2119 static struct clk usbhost_120m_fck = { 2183 2120 .name = "usbhost_120m_fck", 2184 - .ops = &clkops_omap2_dflt_wait, 2121 + .ops = &clkops_omap2_dflt, 2185 2122 .parent = &dpll5_m2_ck, 2186 2123 .init = &omap2_init_clk_clkdm, 2187 2124 .enable_reg = OMAP_CM_REGADDR(OMAP3430ES2_USBHOST_MOD, CM_FCLKEN), ··· 2192 2129 2193 2130 static struct clk usbhost_48m_fck = { 2194 2131 .name = "usbhost_48m_fck", 2195 - .ops = &clkops_omap2_dflt_wait, 2132 + .ops = &clkops_omap3430es2_dss_usbhost_wait, 2196 2133 .parent = &omap_48m_fck, 2197 2134 .init = &omap2_init_clk_clkdm, 2198 2135 .enable_reg = OMAP_CM_REGADDR(OMAP3430ES2_USBHOST_MOD, CM_FCLKEN), ··· 2204 2141 static struct clk usbhost_ick = { 2205 2142 /* Handles both L3 and L4 clocks */ 2206 2143 .name = "usbhost_ick", 2207 - .ops = &clkops_omap2_dflt_wait, 2144 + .ops = &clkops_omap3430es2_dss_usbhost_wait, 2208 2145 .parent = &l4_ick, 2209 2146 .init = &omap2_init_clk_clkdm, 2210 2147 .enable_reg = OMAP_CM_REGADDR(OMAP3430ES2_USBHOST_MOD, CM_ICLKEN),
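The paired *_3430es1/*_3430es2 structs above coexist because the clock table in clock34xx.c tags each row with a chip flag (CK_3430ES1, CK_3430ES2, or CK_343X for both), and registration keeps only the rows matching the running silicon. A minimal sketch of that selection, assuming a simplified flag-matching table:

```c
#include <stddef.h>
#include <string.h>

/* Flag scheme sketched after the CK_* chip flags in the clock tables. */
#define CK_3430ES1 (1u << 0)
#define CK_3430ES2 (1u << 1)
#define CK_343X    (CK_3430ES1 | CK_3430ES2)

struct clk_entry {
    const char  *name;
    const char  *ops;    /* a label here; really a struct clkops * */
    unsigned int flags;
};

/* Two rows may share one clock name as long as their chip flags are
 * disjoint; registration picks only the rows matching the running chip. */
const struct clk_entry clocks[] = {
    { "ssi_ick",  "dflt",         CK_3430ES1 },
    { "ssi_ick",  "es2_ssi_wait", CK_3430ES2 },
    { "sdrc_ick", "dflt",         CK_343X    },
};

/* Return the ops label selected for `name` on a chip advertising
 * `chip_flags`, or NULL when no row matches. */
const char *ops_for(const char *name, unsigned int chip_flags)
{
    size_t i;

    for (i = 0; i < sizeof(clocks) / sizeof(clocks[0]); i++)
        if ((clocks[i].flags & chip_flags) && !strcmp(clocks[i].name, name))
            return clocks[i].ops;
    return NULL;
}
```

This is why the ES1 variants can keep the non-waiting ops while the ES2+ variants pick up the new *_wait ops with their shifted IDLEST bits.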
+3 -3
arch/arm/mach-omap2/cm.h
··· 29 29 * These registers appear once per CM module. 30 30 */ 31 31 32 - #define OMAP3430_CM_REVISION OMAP_CM_REGADDR(OCP_MOD, 0x0000) 33 - #define OMAP3430_CM_SYSCONFIG OMAP_CM_REGADDR(OCP_MOD, 0x0010) 34 - #define OMAP3430_CM_POLCTRL OMAP_CM_REGADDR(OCP_MOD, 0x009c) 32 + #define OMAP3430_CM_REVISION OMAP34XX_CM_REGADDR(OCP_MOD, 0x0000) 33 + #define OMAP3430_CM_SYSCONFIG OMAP34XX_CM_REGADDR(OCP_MOD, 0x0010) 34 + #define OMAP3430_CM_POLCTRL OMAP34XX_CM_REGADDR(OCP_MOD, 0x009c) 35 35 36 36 #define OMAP3_CM_CLKOUT_CTRL_OFFSET 0x0070 37 37 #define OMAP3430_CM_CLKOUT_CTRL OMAP_CM_REGADDR(OMAP3430_CCR_MOD, 0x0070)
+3 -2
arch/arm/mach-omap2/io.c
··· 276 276 return v; 277 277 } 278 278 279 - void __init omap2_init_common_hw(struct omap_sdrc_params *sp) 279 + void __init omap2_init_common_hw(struct omap_sdrc_params *sdrc_cs0, 280 + struct omap_sdrc_params *sdrc_cs1) 280 281 { 281 282 omap2_mux_init(); 282 283 #ifndef CONFIG_ARCH_OMAP4 /* FIXME: Remove this once the clkdev is ready */ 283 284 pwrdm_init(powerdomains_omap); 284 285 clkdm_init(clockdomains_omap, clkdm_pwrdm_autodeps); 285 286 omap2_clk_init(); 286 - omap2_sdrc_init(sp); 287 + omap2_sdrc_init(sdrc_cs0, sdrc_cs1); 287 288 _omap2_init_reprogram_sdrc(); 288 289 #endif 289 290 gpmc_init();
+6
arch/arm/mach-omap2/mmc-twl4030.c
··· 119 119 if (i != 0) 120 120 break; 121 121 ret = PTR_ERR(reg); 122 + hsmmc[i].vcc = NULL; 122 123 goto err; 123 124 } 124 125 hsmmc[i].vcc = reg; ··· 166 165 static void twl_mmc_cleanup(struct device *dev) 167 166 { 168 167 struct omap_mmc_platform_data *mmc = dev->platform_data; 168 + int i; 169 169 170 170 gpio_free(mmc->slots[0].switch_pin); 171 + for(i = 0; i < ARRAY_SIZE(hsmmc); i++) { 172 + regulator_put(hsmmc[i].vcc); 173 + regulator_put(hsmmc[i].vcc_aux); 174 + } 171 175 } 172 176 173 177 #ifdef CONFIG_PM
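The mmc-twl4030.c hunk works because regulator_put() is a no-op for a NULL handle, which is why the error path now resets hsmmc[i].vcc to NULL before jumping to err: cleanup can then blanket-iterate every slot without tracking how far probing got. A toy model of that NULL-tolerant teardown (types here are stand-ins, not the regulator API):

```c
#include <stddef.h>

/* Toy resource standing in for struct regulator *. */
struct regulator { int live; };

int puts_called;

/* Modeled on the kernel's regulator_put(), which tolerates NULL. */
void regulator_put(struct regulator *r)
{
    if (r) {
        r->live = 0;
        puts_called++;
    }
}

struct slot { struct regulator *vcc, *vcc_aux; };

/* Mirrors twl_mmc_cleanup() in the patch: iterate every slot and put both
 * handles; a slot whose probe failed holds NULL and is skipped harmlessly. */
void cleanup(struct slot *slots, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        regulator_put(slots[i].vcc);
        regulator_put(slots[i].vcc_aux);
    }
}
```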
+6
arch/arm/mach-omap2/mux.c
··· 486 486 OMAP34XX_MUX_MODE4 | OMAP34XX_PIN_OUTPUT) 487 487 MUX_CFG_34XX("J25_34XX_GPIO170", 0x1c6, 488 488 OMAP34XX_MUX_MODE4 | OMAP34XX_PIN_INPUT) 489 + 490 + /* OMAP3 SDRC CKE signals to SDR/DDR ram chips */ 491 + MUX_CFG_34XX("H16_34XX_SDRC_CKE0", 0x262, 492 + OMAP34XX_MUX_MODE0 | OMAP34XX_PIN_OUTPUT) 493 + MUX_CFG_34XX("H17_34XX_SDRC_CKE1", 0x264, 494 + OMAP34XX_MUX_MODE0 | OMAP34XX_PIN_OUTPUT) 489 495 }; 490 496 491 497 #define OMAP34XX_PINS_SZ ARRAY_SIZE(omap34xx_pins)
-3
arch/arm/mach-omap2/pm.h
··· 11 11 #ifndef __ARCH_ARM_MACH_OMAP2_PM_H 12 12 #define __ARCH_ARM_MACH_OMAP2_PM_H 13 13 14 - extern int omap2_pm_init(void); 15 - extern int omap3_pm_init(void); 16 - 17 14 #ifdef CONFIG_PM_DEBUG 18 15 extern void omap2_pm_dump(int mode, int resume, unsigned int us); 19 16 extern int omap2_pm_debug;
+1 -1
arch/arm/mach-omap2/pm24xx.c
··· 470 470 WKUP_MOD, PM_WKEN); 471 471 } 472 472 473 - int __init omap2_pm_init(void) 473 + static int __init omap2_pm_init(void) 474 474 { 475 475 u32 l; 476 476
+47 -4
arch/arm/mach-omap2/pm34xx.c
··· 39 39 struct power_state { 40 40 struct powerdomain *pwrdm; 41 41 u32 next_state; 42 + #ifdef CONFIG_SUSPEND 42 43 u32 saved_state; 44 + #endif 43 45 struct list_head node; 44 46 }; 45 47 ··· 295 293 local_irq_enable(); 296 294 } 297 295 296 + #ifdef CONFIG_SUSPEND 297 + static suspend_state_t suspend_state; 298 + 298 299 static int omap3_pm_prepare(void) 299 300 { 300 301 disable_hlt(); ··· 326 321 restore: 327 322 /* Restore next_pwrsts */ 328 323 list_for_each_entry(pwrst, &pwrst_list, node) { 329 - set_pwrdm_state(pwrst->pwrdm, pwrst->saved_state); 330 324 state = pwrdm_read_prev_pwrst(pwrst->pwrdm); 331 325 if (state > pwrst->next_state) { 332 326 printk(KERN_INFO "Powerdomain (%s) didn't enter " ··· 333 329 pwrst->pwrdm->name, pwrst->next_state); 334 330 ret = -1; 335 331 } 332 + set_pwrdm_state(pwrst->pwrdm, pwrst->saved_state); 336 333 } 337 334 if (ret) 338 335 printk(KERN_ERR "Could not enter target state in pm_suspend\n"); ··· 344 339 return ret; 345 340 } 346 341 347 - static int omap3_pm_enter(suspend_state_t state) 342 + static int omap3_pm_enter(suspend_state_t unused) 348 343 { 349 344 int ret = 0; 350 345 351 - switch (state) { 346 + switch (suspend_state) { 352 347 case PM_SUSPEND_STANDBY: 353 348 case PM_SUSPEND_MEM: 354 349 ret = omap3_pm_suspend(); ··· 365 360 enable_hlt(); 366 361 } 367 362 363 + /* Hooks to enable / disable UART interrupts during suspend */ 364 + static int omap3_pm_begin(suspend_state_t state) 365 + { 366 + suspend_state = state; 367 + omap_uart_enable_irqs(0); 368 + return 0; 369 + } 370 + 371 + static void omap3_pm_end(void) 372 + { 373 + suspend_state = PM_SUSPEND_ON; 374 + omap_uart_enable_irqs(1); 375 + return; 376 + } 377 + 368 378 static struct platform_suspend_ops omap_pm_ops = { 379 + .begin = omap3_pm_begin, 380 + .end = omap3_pm_end, 369 381 .prepare = omap3_pm_prepare, 370 382 .enter = omap3_pm_enter, 371 383 .finish = omap3_pm_finish, 372 384 .valid = suspend_valid_only_mem, 373 385 }; 386 + #endif /* 
CONFIG_SUSPEND */ 374 387 375 388 376 389 /** ··· 636 613 /* Clear any pending PRCM interrupts */ 637 614 prm_write_mod_reg(0, OCP_MOD, OMAP3_PRM_IRQSTATUS_MPU_OFFSET); 638 615 616 + /* Don't attach IVA interrupts */ 617 + prm_write_mod_reg(0, WKUP_MOD, OMAP3430_PM_IVAGRPSEL); 618 + prm_write_mod_reg(0, CORE_MOD, OMAP3430_PM_IVAGRPSEL1); 619 + prm_write_mod_reg(0, CORE_MOD, OMAP3430ES2_PM_IVAGRPSEL3); 620 + prm_write_mod_reg(0, OMAP3430_PER_MOD, OMAP3430_PM_IVAGRPSEL); 621 + 622 + /* Clear any pending 'reset' flags */ 623 + prm_write_mod_reg(0xffffffff, MPU_MOD, RM_RSTST); 624 + prm_write_mod_reg(0xffffffff, CORE_MOD, RM_RSTST); 625 + prm_write_mod_reg(0xffffffff, OMAP3430_PER_MOD, RM_RSTST); 626 + prm_write_mod_reg(0xffffffff, OMAP3430_EMU_MOD, RM_RSTST); 627 + prm_write_mod_reg(0xffffffff, OMAP3430_NEON_MOD, RM_RSTST); 628 + prm_write_mod_reg(0xffffffff, OMAP3430_DSS_MOD, RM_RSTST); 629 + prm_write_mod_reg(0xffffffff, OMAP3430ES2_USBHOST_MOD, RM_RSTST); 630 + 631 + /* Clear any pending PRCM interrupts */ 632 + prm_write_mod_reg(0, OCP_MOD, OMAP3_PRM_IRQSTATUS_MPU_OFFSET); 633 + 639 634 omap3_iva_idle(); 640 635 omap3_d2d_idle(); 641 636 } ··· 693 652 return 0; 694 653 } 695 654 696 - int __init omap3_pm_init(void) 655 + static int __init omap3_pm_init(void) 697 656 { 698 657 struct power_state *pwrst, *tmp; 699 658 int ret; ··· 733 692 _omap_sram_idle = omap_sram_push(omap34xx_cpu_suspend, 734 693 omap34xx_cpu_suspend_sz); 735 694 695 + #ifdef CONFIG_SUSPEND 736 696 suspend_set_ops(&omap_pm_ops); 697 + #endif /* CONFIG_SUSPEND */ 737 698 738 699 pm_idle = omap3_pm_idle; 739 700
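The new .begin/.end hooks in pm34xx.c bracket the whole suspend transaction: begin records the target state and masks UART interrupts, enter then consults that recorded state instead of its own argument (hence the rename to `unused`), and end restores both. A compact user-space model of that ordering (state names and helpers are stand-ins for the kernel API):

```c
/* Minimal stand-ins for the suspend states used by the patch. */
enum { PM_SUSPEND_ON, PM_SUSPEND_STANDBY, PM_SUSPEND_MEM };

int suspend_state = PM_SUSPEND_ON;
int uart_irqs_enabled = 1;
int entered_state = -1;

/* .begin: remember the target state and quiesce UART interrupts, as
 * omap3_pm_begin() does via omap_uart_enable_irqs(0). */
int pm_begin(int state)
{
    suspend_state = state;
    uart_irqs_enabled = 0;
    return 0;
}

/* .enter: the argument from the core is ignored; the state captured in
 * .begin drives the decision. */
int pm_enter(int unused)
{
    (void)unused;
    switch (suspend_state) {
    case PM_SUSPEND_STANDBY:
    case PM_SUSPEND_MEM:
        entered_state = suspend_state;
        return 0;
    default:
        return -1;
    }
}

/* .end: restore UART interrupts and forget the target state. */
void pm_end(void)
{
    suspend_state = PM_SUSPEND_ON;
    uart_irqs_enabled = 1;
}
```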
+43
arch/arm/mach-omap2/prcm.c
··· 17 17 #include <linux/init.h> 18 18 #include <linux/clk.h> 19 19 #include <linux/io.h> 20 + #include <linux/delay.h> 20 21 21 22 #include <mach/common.h> 22 23 #include <mach/prcm.h> ··· 28 27 29 28 static void __iomem *prm_base; 30 29 static void __iomem *cm_base; 30 + 31 + #define MAX_MODULE_ENABLE_WAIT 100000 31 32 32 33 u32 omap_prcm_get_reset_sources(void) 33 34 { ··· 122 119 return v; 123 120 } 124 121 EXPORT_SYMBOL(cm_rmw_mod_reg_bits); 122 + 123 + /** 124 + * omap2_cm_wait_idlest - wait for IDLEST bit to indicate module readiness 125 + * @reg: physical address of module IDLEST register 126 + * @mask: value to mask against to determine if the module is active 127 + * @name: name of the clock (for printk) 128 + * 129 + * Returns 1 if the module indicated readiness in time, or 0 if it 130 + * failed to enable in roughly MAX_MODULE_ENABLE_WAIT microseconds. 131 + */ 132 + int omap2_cm_wait_idlest(void __iomem *reg, u32 mask, const char *name) 133 + { 134 + int i = 0; 135 + int ena = 0; 136 + 137 + /* 138 + * 24xx uses 0 to indicate not ready, and 1 to indicate ready. 139 + * 34xx reverses this, just to keep us on our toes 140 + */ 141 + if (cpu_is_omap24xx()) 142 + ena = mask; 143 + else if (cpu_is_omap34xx()) 144 + ena = 0; 145 + else 146 + BUG(); 147 + 148 + /* Wait for lock */ 149 + while (((__raw_readl(reg) & mask) != ena) && 150 + (i++ < MAX_MODULE_ENABLE_WAIT)) 151 + udelay(1); 152 + 153 + if (i < MAX_MODULE_ENABLE_WAIT) 154 + pr_debug("cm: Module associated with clock %s ready after %d " 155 + "loops\n", name, i); 156 + else 157 + pr_err("cm: Module associated with clock %s didn't enable in " 158 + "%d tries\n", name, MAX_MODULE_ENABLE_WAIT); 159 + 160 + return (i < MAX_MODULE_ENABLE_WAIT) ? 1 : 0; 161 + }; 125 162 126 163 void __init omap2_set_globals_prcm(struct omap_globals *omap2_globals) 127 164 {
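omap2_cm_wait_idlest() above is a bounded busy-wait whose ready polarity flips between generations: on 24xx a set IDLEST bit means ready, on 34xx a clear bit does, so the caller's expected value `ena` is either the mask or zero. A self-contained sketch with a simulated register (the udelay(1) per loop is elided):

```c
#include <stdint.h>

#define MAX_MODULE_ENABLE_WAIT 100000

/* Simulated MMIO read: the module leaves idle after a few polls. */
uint32_t reg_value;
int reads_until_ready;

uint32_t read_reg(void)
{
    if (reads_until_ready > 0 && --reads_until_ready == 0)
        reg_value ^= 0xffffffffu;   /* flip all bits: idle <-> ready */
    return reg_value;
}

/* Bounded poll in the style of omap2_cm_wait_idlest(): `ena` is the value
 * (reg & mask) takes once the module is ready - `mask` on 24xx, 0 on 34xx.
 * Returns 1 on success, 0 on timeout. */
int wait_idlest(uint32_t mask, uint32_t ena)
{
    int i = 0;

    while (((read_reg() & mask) != ena) && (i++ < MAX_MODULE_ENABLE_WAIT))
        ;   /* the kernel udelay(1)s here */

    return (i < MAX_MODULE_ENABLE_WAIT) ? 1 : 0;
}
```

Keeping the iteration bound and the polarity decision in one helper is what lets every clkops variant share the same wait path.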
+45 -23
arch/arm/mach-omap2/sdrc.c
··· 32 32 #include <mach/sdrc.h> 33 33 #include "sdrc.h" 34 34 35 - static struct omap_sdrc_params *sdrc_init_params; 35 + static struct omap_sdrc_params *sdrc_init_params_cs0, *sdrc_init_params_cs1; 36 36 37 37 void __iomem *omap2_sdrc_base; 38 38 void __iomem *omap2_sms_base; ··· 45 45 /** 46 46 * omap2_sdrc_get_params - return SDRC register values for a given clock rate 47 47 * @r: SDRC clock rate (in Hz) 48 + * @sdrc_cs0: chip select 0 ram timings ** 49 + * @sdrc_cs1: chip select 1 ram timings ** 48 50 * 49 51 * Return pre-calculated values for the SDRC_ACTIM_CTRLA, 50 - * SDRC_ACTIM_CTRLB, SDRC_RFR_CTRL, and SDRC_MR registers, for a given 51 - * SDRC clock rate 'r'. These parameters control various timing 52 - * delays in the SDRAM controller that are expressed in terms of the 53 - * number of SDRC clock cycles to wait; hence the clock rate 54 - * dependency. Note that sdrc_init_params must be sorted rate 55 - * descending. Also assumes that both chip-selects use the same 56 - * timing parameters. Returns a struct omap_sdrc_params * upon 57 - * success, or NULL upon failure. 52 + * SDRC_ACTIM_CTRLB, SDRC_RFR_CTRL and SDRC_MR registers in sdrc_cs[01] 53 + * structs,for a given SDRC clock rate 'r'. 54 + * These parameters control various timing delays in the SDRAM controller 55 + * that are expressed in terms of the number of SDRC clock cycles to 56 + * wait; hence the clock rate dependency. 57 + * 58 + * Supports 2 different timing parameters for both chip selects. 59 + * 60 + * Note 1: the sdrc_init_params_cs[01] must be sorted rate descending. 61 + * Note 2: If sdrc_init_params_cs_1 is not NULL it must be of same size 62 + * as sdrc_init_params_cs_0. 63 + * 64 + * Fills in the struct omap_sdrc_params * for each chip select. 65 + * Returns 0 upon success or -1 upon failure. 
58 66 */ 59 - struct omap_sdrc_params *omap2_sdrc_get_params(unsigned long r) 67 + int omap2_sdrc_get_params(unsigned long r, 68 + struct omap_sdrc_params **sdrc_cs0, 69 + struct omap_sdrc_params **sdrc_cs1) 60 70 { 61 - struct omap_sdrc_params *sp; 71 + struct omap_sdrc_params *sp0, *sp1; 62 72 63 - if (!sdrc_init_params) 64 - return NULL; 73 + if (!sdrc_init_params_cs0) 74 + return -1; 65 75 66 - sp = sdrc_init_params; 76 + sp0 = sdrc_init_params_cs0; 77 + sp1 = sdrc_init_params_cs1; 67 78 68 - while (sp->rate && sp->rate != r) 69 - sp++; 79 + while (sp0->rate && sp0->rate != r) { 80 + sp0++; 81 + if (sdrc_init_params_cs1) 82 + sp1++; 83 + } 70 84 71 - if (!sp->rate) 72 - return NULL; 85 + if (!sp0->rate) 86 + return -1; 73 87 74 - return sp; 88 + *sdrc_cs0 = sp0; 89 + *sdrc_cs1 = sp1; 90 + return 0; 75 91 } 76 92 77 93 ··· 99 83 100 84 /** 101 85 * omap2_sdrc_init - initialize SMS, SDRC devices on boot 102 - * @sp: pointer to a null-terminated list of struct omap_sdrc_params 86 + * @sdrc_cs[01]: pointers to a null-terminated list of struct omap_sdrc_params 87 + * Support for 2 chip selects timings 103 88 * 104 89 * Turn on smart idle modes for SDRAM scheduler and controller. 105 90 * Program a known-good configuration for the SDRC to deal with buggy 106 91 * bootloaders. 107 92 */ 108 - void __init omap2_sdrc_init(struct omap_sdrc_params *sp) 93 + void __init omap2_sdrc_init(struct omap_sdrc_params *sdrc_cs0, 94 + struct omap_sdrc_params *sdrc_cs1) 109 95 { 110 96 u32 l; 111 97 ··· 121 103 l |= (0x2 << 3); 122 104 sdrc_write_reg(l, SDRC_SYSCONFIG); 123 105 124 - sdrc_init_params = sp; 106 + sdrc_init_params_cs0 = sdrc_cs0; 107 + sdrc_init_params_cs1 = sdrc_cs1; 125 108 126 109 /* XXX Enable SRFRONIDLEREQ here also? 
*/ 110 + /* 111 + * PWDENA should not be set due to 34xx erratum 1.150 - PWDENA 112 + * can cause random memory corruption 113 + */ 127 114 l = (1 << SDRC_POWER_EXTCLKDIS_SHIFT) | 128 - (1 << SDRC_POWER_PWDENA_SHIFT) | 129 115 (1 << SDRC_POWER_PAGEPOLICY_SHIFT); 130 116 sdrc_write_reg(l, SDRC_POWER); 131 117 }
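The reworked omap2_sdrc_get_params() walks the CS0 table and the optional CS1 table with a single index, which is exactly why the kerneldoc demands both tables be the same length and sorted rate-descending. A standalone sketch of that parallel lookup:

```c
#include <stddef.h>

struct sdrc_params {
    unsigned long rate;      /* 0 terminates the table */
    unsigned int  rfr_ctrl;
};

/* Parallel walk over the CS0 and optional CS1 tables, as in the reworked
 * omap2_sdrc_get_params(): one index serves both. Returns 0 and fills the
 * out pointers on a hit, -1 otherwise. */
int get_params(struct sdrc_params *cs0_tbl, struct sdrc_params *cs1_tbl,
               unsigned long r,
               struct sdrc_params **cs0, struct sdrc_params **cs1)
{
    struct sdrc_params *sp0 = cs0_tbl, *sp1 = cs1_tbl;

    if (!sp0)
        return -1;

    while (sp0->rate && sp0->rate != r) {
        sp0++;
        if (sp1)
            sp1++;
    }

    if (!sp0->rate)
        return -1;

    *cs0 = sp0;
    *cs1 = sp1;    /* stays NULL when there is no CS1 table */
    return 0;
}
```

A caller with only one chip select simply passes a NULL CS1 table and gets NULL back, which is what drives the zeros-fallback call to omap3_configure_core_dpll() in clock34xx.c.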
+135 -64
arch/arm/mach-omap2/serial.c
··· 54 54 55 55 struct plat_serial8250_port *p; 56 56 struct list_head node; 57 + struct platform_device pdev; 57 58 58 59 #if defined(CONFIG_ARCH_OMAP3) && defined(CONFIG_PM) 59 60 int context_valid; ··· 69 68 #endif 70 69 }; 71 70 72 - static struct omap_uart_state omap_uart[OMAP_MAX_NR_PORTS]; 73 71 static LIST_HEAD(uart_list); 74 72 75 - static struct plat_serial8250_port serial_platform_data[] = { 73 + static struct plat_serial8250_port serial_platform_data0[] = { 76 74 { 77 75 .membase = IO_ADDRESS(OMAP_UART1_BASE), 78 76 .mapbase = OMAP_UART1_BASE, ··· 81 81 .regshift = 2, 82 82 .uartclk = OMAP24XX_BASE_BAUD * 16, 83 83 }, { 84 + .flags = 0 85 + } 86 + }; 87 + 88 + static struct plat_serial8250_port serial_platform_data1[] = { 89 + { 84 90 .membase = IO_ADDRESS(OMAP_UART2_BASE), 85 91 .mapbase = OMAP_UART2_BASE, 86 92 .irq = 73, ··· 95 89 .regshift = 2, 96 90 .uartclk = OMAP24XX_BASE_BAUD * 16, 97 91 }, { 92 + .flags = 0 93 + } 94 + }; 95 + 96 + static struct plat_serial8250_port serial_platform_data2[] = { 97 + { 98 98 .membase = IO_ADDRESS(OMAP_UART3_BASE), 99 99 .mapbase = OMAP_UART3_BASE, 100 100 .irq = 74, ··· 229 217 clk_disable(uart->fck); 230 218 } 231 219 220 + static void omap_uart_enable_wakeup(struct omap_uart_state *uart) 221 + { 222 + /* Set wake-enable bit */ 223 + if (uart->wk_en && uart->wk_mask) { 224 + u32 v = __raw_readl(uart->wk_en); 225 + v |= uart->wk_mask; 226 + __raw_writel(v, uart->wk_en); 227 + } 228 + 229 + /* Ensure IOPAD wake-enables are set */ 230 + if (cpu_is_omap34xx() && uart->padconf) { 231 + u16 v = omap_ctrl_readw(uart->padconf); 232 + v |= OMAP3_PADCONF_WAKEUPENABLE0; 233 + omap_ctrl_writew(v, uart->padconf); 234 + } 235 + } 236 + 237 + static void omap_uart_disable_wakeup(struct omap_uart_state *uart) 238 + { 239 + /* Clear wake-enable bit */ 240 + if (uart->wk_en && uart->wk_mask) { 241 + u32 v = __raw_readl(uart->wk_en); 242 + v &= ~uart->wk_mask; 243 + __raw_writel(v, uart->wk_en); 244 + } 245 + 246 + /* Ensure IOPAD 
wake-enables are cleared */ 247 + if (cpu_is_omap34xx() && uart->padconf) { 248 + u16 v = omap_ctrl_readw(uart->padconf); 249 + v &= ~OMAP3_PADCONF_WAKEUPENABLE0; 250 + omap_ctrl_writew(v, uart->padconf); 251 + } 252 + } 253 + 232 254 static void omap_uart_smart_idle_enable(struct omap_uart_state *uart, 233 255 int enable) 234 256 { ··· 292 246 293 247 static void omap_uart_allow_sleep(struct omap_uart_state *uart) 294 248 { 249 + if (device_may_wakeup(&uart->pdev.dev)) 250 + omap_uart_enable_wakeup(uart); 251 + else 252 + omap_uart_disable_wakeup(uart); 253 + 295 254 if (!uart->clocked) 296 255 return; 297 256 ··· 343 292 /* Check for normal UART wakeup */ 344 293 if (__raw_readl(uart->wk_st) & uart->wk_mask) 345 294 omap_uart_block_sleep(uart); 346 - 347 295 return; 348 296 } 349 297 } ··· 396 346 return IRQ_NONE; 397 347 } 398 348 399 - static u32 sleep_timeout = DEFAULT_TIMEOUT; 400 - 401 349 static void omap_uart_idle_init(struct omap_uart_state *uart) 402 350 { 403 - u32 v; 404 351 struct plat_serial8250_port *p = uart->p; 405 352 int ret; 406 353 407 354 uart->can_sleep = 0; 408 - uart->timeout = sleep_timeout; 355 + uart->timeout = DEFAULT_TIMEOUT; 409 356 setup_timer(&uart->timer, omap_uart_idle_timer, 410 357 (unsigned long) uart); 411 358 mod_timer(&uart->timer, jiffies + uart->timeout); ··· 460 413 uart->padconf = 0; 461 414 } 462 415 463 - /* Set wake-enable bit */ 464 - if (uart->wk_en && uart->wk_mask) { 465 - v = __raw_readl(uart->wk_en); 466 - v |= uart->wk_mask; 467 - __raw_writel(v, uart->wk_en); 468 - } 469 - 470 - /* Ensure IOPAD wake-enables are set */ 471 - if (cpu_is_omap34xx() && uart->padconf) { 472 - u16 v; 473 - 474 - v = omap_ctrl_readw(uart->padconf); 475 - v |= OMAP3_PADCONF_WAKEUPENABLE0; 476 - omap_ctrl_writew(v, uart->padconf); 477 - } 478 - 479 416 p->flags |= UPF_SHARE_IRQ; 480 417 ret = request_irq(p->irq, omap_uart_interrupt, IRQF_SHARED, 481 418 "serial idle", (void *)uart); 482 419 WARN_ON(ret); 483 420 } 484 421 485 - static 
ssize_t sleep_timeout_show(struct kobject *kobj, 486 - struct kobj_attribute *attr, 487 - char *buf) 422 + void omap_uart_enable_irqs(int enable) 488 423 { 489 - return sprintf(buf, "%u\n", sleep_timeout / HZ); 424 + int ret; 425 + struct omap_uart_state *uart; 426 + 427 + list_for_each_entry(uart, &uart_list, node) { 428 + if (enable) 429 + ret = request_irq(uart->p->irq, omap_uart_interrupt, 430 + IRQF_SHARED, "serial idle", (void *)uart); 431 + else 432 + free_irq(uart->p->irq, (void *)uart); 433 + } 490 434 } 491 435 492 - static ssize_t sleep_timeout_store(struct kobject *kobj, 493 - struct kobj_attribute *attr, 436 + static ssize_t sleep_timeout_show(struct device *dev, 437 + struct device_attribute *attr, 438 + char *buf) 439 + { 440 + struct platform_device *pdev = container_of(dev, 441 + struct platform_device, dev); 442 + struct omap_uart_state *uart = container_of(pdev, 443 + struct omap_uart_state, pdev); 444 + 445 + return sprintf(buf, "%u\n", uart->timeout / HZ); 446 + } 447 + 448 + static ssize_t sleep_timeout_store(struct device *dev, 449 + struct device_attribute *attr, 494 450 const char *buf, size_t n) 495 451 { 496 - struct omap_uart_state *uart; 452 + struct platform_device *pdev = container_of(dev, 453 + struct platform_device, dev); 454 + struct omap_uart_state *uart = container_of(pdev, 455 + struct omap_uart_state, pdev); 497 456 unsigned int value; 498 457 499 458 if (sscanf(buf, "%u", &value) != 1) { 500 459 printk(KERN_ERR "sleep_timeout_store: Invalid value\n"); 501 460 return -EINVAL; 502 461 } 503 - sleep_timeout = value * HZ; 504 - list_for_each_entry(uart, &uart_list, node) { 505 - uart->timeout = sleep_timeout; 506 - if (uart->timeout) 507 - mod_timer(&uart->timer, jiffies + uart->timeout); 508 - else 509 - /* A zero value means disable timeout feature */ 510 - omap_uart_block_sleep(uart); 511 - } 462 + 463 + uart->timeout = value * HZ; 464 + if (uart->timeout) 465 + mod_timer(&uart->timer, jiffies + uart->timeout); 466 + else 467 
+ /* A zero value means disable timeout feature */ 468 + omap_uart_block_sleep(uart); 469 + 512 470 return n; 513 471 } 514 472 515 - static struct kobj_attribute sleep_timeout_attr = 516 - __ATTR(sleep_timeout, 0644, sleep_timeout_show, sleep_timeout_store); 517 - 473 + DEVICE_ATTR(sleep_timeout, 0644, sleep_timeout_show, sleep_timeout_store); 474 + #define DEV_CREATE_FILE(dev, attr) WARN_ON(device_create_file(dev, attr)) 518 475 #else 519 476 static inline void omap_uart_idle_init(struct omap_uart_state *uart) {} 477 + #define DEV_CREATE_FILE(dev, attr) 520 478 #endif /* CONFIG_PM */ 521 479 522 - static struct platform_device serial_device = { 523 - .name = "serial8250", 524 - .id = PLAT8250_DEV_PLATFORM, 525 - .dev = { 526 - .platform_data = serial_platform_data, 480 + static struct omap_uart_state omap_uart[OMAP_MAX_NR_PORTS] = { 481 + { 482 + .pdev = { 483 + .name = "serial8250", 484 + .id = PLAT8250_DEV_PLATFORM, 485 + .dev = { 486 + .platform_data = serial_platform_data0, 487 + }, 488 + }, 489 + }, { 490 + .pdev = { 491 + .name = "serial8250", 492 + .id = PLAT8250_DEV_PLATFORM1, 493 + .dev = { 494 + .platform_data = serial_platform_data1, 495 + }, 496 + }, 497 + }, { 498 + .pdev = { 499 + .name = "serial8250", 500 + .id = PLAT8250_DEV_PLATFORM2, 501 + .dev = { 502 + .platform_data = serial_platform_data2, 503 + }, 504 + }, 527 505 }, 528 506 }; 529 507 530 508 void __init omap_serial_init(void) 531 509 { 532 - int i, err; 510 + int i; 533 511 const struct omap_uart_config *info; 534 512 char name[16]; 535 513 ··· 568 496 569 497 if (info == NULL) 570 498 return; 571 - if (cpu_is_omap44xx()) { 572 - for (i = 0; i < OMAP_MAX_NR_PORTS; i++) 573 - serial_platform_data[i].irq += 32; 574 - } 575 499 576 500 for (i = 0; i < OMAP_MAX_NR_PORTS; i++) { 577 - struct plat_serial8250_port *p = serial_platform_data + i; 578 501 struct omap_uart_state *uart = &omap_uart[i]; 502 + struct platform_device *pdev = &uart->pdev; 503 + struct device *dev = &pdev->dev; 504 + 
struct plat_serial8250_port *p = dev->platform_data; 579 505 580 506 if (!(info->enabled_uarts & (1 << i))) { 581 507 p->membase = NULL; ··· 601 531 uart->num = i; 602 532 p->private_data = uart; 603 533 uart->p = p; 604 - list_add(&uart->node, &uart_list); 534 + list_add_tail(&uart->node, &uart_list); 535 + 536 + if (cpu_is_omap44xx()) 537 + p->irq += 32; 605 538 606 539 omap_uart_enable_clocks(uart); 607 540 omap_uart_reset(uart); 608 541 omap_uart_idle_init(uart); 542 + 543 + if (WARN_ON(platform_device_register(pdev))) 544 + continue; 545 + if ((cpu_is_omap34xx() && uart->padconf) || 546 + (uart->wk_en && uart->wk_mask)) { 547 + device_init_wakeup(dev, true); 548 + DEV_CREATE_FILE(dev, &dev_attr_sleep_timeout); 549 + } 609 550 } 610 - 611 - err = platform_device_register(&serial_device); 612 - 613 - #ifdef CONFIG_PM 614 - if (!err) 615 - err = sysfs_create_file(&serial_device.dev.kobj, 616 - &sleep_timeout_attr.attr); 617 - #endif 618 - 619 551 } 620 -
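The new per-port sysfs handlers above recover the UART state by applying container_of() twice: device → platform_device → omap_uart_state, which works because each omap_uart_state now embeds its platform_device. A hedged userspace illustration of that pointer walk, with invented demo structs standing in for the kernel ones:

```c
#include <stddef.h>

/* Minimal container_of, equivalent to the kernel's <linux/kernel.h> macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct demo_device { int id; };
struct demo_platform_device { char name[16]; struct demo_device dev; };
struct demo_uart_state { unsigned int timeout; struct demo_platform_device pdev; };

/* Given only the innermost struct demo_device, climb back to the UART state. */
static struct demo_uart_state *uart_from_dev(struct demo_device *dev)
{
	struct demo_platform_device *pdev =
		container_of(dev, struct demo_platform_device, dev);
	return container_of(pdev, struct demo_uart_state, pdev);
}
```

This is also why the diff switches from one shared kobject attribute to DEVICE_ATTR: with the device embedded per-port, each sleep_timeout file can reach its own port's state instead of a single global timeout.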
+111 -36
arch/arm/mach-omap2/sram34xx.S
··· 36 36 37 37 .text 38 38 39 - /* r4 parameters */ 39 + /* r1 parameters */ 40 40 #define SDRC_NO_UNLOCK_DLL 0x0 41 41 #define SDRC_UNLOCK_DLL 0x1 42 42 ··· 58 58 59 59 /* SDRC_POWER bit settings */ 60 60 #define SRFRONIDLEREQ_MASK 0x40 61 - #define PWDENA_MASK 0x4 62 61 63 62 /* CM_IDLEST1_CORE bit settings */ 64 63 #define ST_SDRC_MASK 0x2 ··· 70 71 71 72 /* 72 73 * omap3_sram_configure_core_dpll - change DPLL3 M2 divider 73 - * r0 = new SDRC_RFR_CTRL register contents 74 - * r1 = new SDRC_ACTIM_CTRLA register contents 75 - * r2 = new SDRC_ACTIM_CTRLB register contents 76 - * r3 = new M2 divider setting (only 1 and 2 supported right now) 77 - * r4 = unlock SDRC DLL? (1 = yes, 0 = no). Only unlock DLL for 78 - * SDRC rates < 83MHz 79 - * r5 = number of MPU cycles to wait for SDRC to stabilize after 80 - * reprogramming the SDRC when switching to a slower MPU speed 81 - * r6 = new SDRC_MR_0 register value 82 - * r7 = increasing SDRC rate? (1 = yes, 0 = no) 83 74 * 75 + * Params passed in registers: 76 + * r0 = new M2 divider setting (only 1 and 2 supported right now) 77 + * r1 = unlock SDRC DLL? (1 = yes, 0 = no). Only unlock DLL for 78 + * SDRC rates < 83MHz 79 + * r2 = number of MPU cycles to wait for SDRC to stabilize after 80 + * reprogramming the SDRC when switching to a slower MPU speed 81 + * r3 = increasing SDRC rate? (1 = yes, 0 = no) 82 + * 83 + * Params passed via the stack. 
The needed params will be copied in SRAM 84 + * before use by the code in SRAM (SDRAM is not accessible during SDRC 85 + * reconfiguration): 86 + * new SDRC_RFR_CTRL_0 register contents 87 + * new SDRC_ACTIM_CTRL_A_0 register contents 88 + * new SDRC_ACTIM_CTRL_B_0 register contents 89 + * new SDRC_MR_0 register value 90 + * new SDRC_RFR_CTRL_1 register contents 91 + * new SDRC_ACTIM_CTRL_A_1 register contents 92 + * new SDRC_ACTIM_CTRL_B_1 register contents 93 + * new SDRC_MR_1 register value 94 + * 95 + * If the param SDRC_RFR_CTRL_1 is 0, the parameters 96 + * are not programmed into the SDRC CS1 registers 84 97 */ 85 98 ENTRY(omap3_sram_configure_core_dpll) 86 99 stmfd sp!, {r1-r12, lr} @ store regs to stack 87 - ldr r4, [sp, #52] @ pull extra args off the stack 88 - ldr r5, [sp, #56] @ load extra args from the stack 89 - ldr r6, [sp, #60] @ load extra args from the stack 90 - ldr r7, [sp, #64] @ load extra args from the stack 100 + 101 + @ pull the extra args off the stack 102 + @ and store them in SRAM 103 + ldr r4, [sp, #52] 104 + str r4, omap_sdrc_rfr_ctrl_0_val 105 + ldr r4, [sp, #56] 106 + str r4, omap_sdrc_actim_ctrl_a_0_val 107 + ldr r4, [sp, #60] 108 + str r4, omap_sdrc_actim_ctrl_b_0_val 109 + ldr r4, [sp, #64] 110 + str r4, omap_sdrc_mr_0_val 111 + ldr r4, [sp, #68] 112 + str r4, omap_sdrc_rfr_ctrl_1_val 113 + cmp r4, #0 @ if SDRC_RFR_CTRL_1 is 0, 114 + beq skip_cs1_params @ do not use cs1 params 115 + ldr r4, [sp, #72] 116 + str r4, omap_sdrc_actim_ctrl_a_1_val 117 + ldr r4, [sp, #76] 118 + str r4, omap_sdrc_actim_ctrl_b_1_val 119 + ldr r4, [sp, #80] 120 + str r4, omap_sdrc_mr_1_val 121 + skip_cs1_params: 91 122 dsb @ flush buffered writes to interconnect 92 - cmp r7, #1 @ if increasing SDRC clk rate, 123 + 124 + cmp r3, #1 @ if increasing SDRC clk rate, 93 125 bleq configure_sdrc @ program the SDRC regs early (for RFR) 94 - cmp r4, #SDRC_UNLOCK_DLL @ set the intended DLL state 126 + cmp r1, #SDRC_UNLOCK_DLL @ set the intended DLL state 95 127 bleq 
unlock_dll 96 128 blne lock_dll 97 129 bl sdram_in_selfrefresh @ put SDRAM in self refresh, idle SDRC 98 130 bl configure_core_dpll @ change the DPLL3 M2 divider 131 + mov r12, r2 132 + bl wait_clk_stable @ wait for SDRC to stabilize 99 133 bl enable_sdrc @ take SDRC out of idle 100 - cmp r4, #SDRC_UNLOCK_DLL @ wait for DLL status to change 134 + cmp r1, #SDRC_UNLOCK_DLL @ wait for DLL status to change 101 135 bleq wait_dll_unlock 102 136 blne wait_dll_lock 103 - cmp r7, #1 @ if increasing SDRC clk rate, 137 + cmp r3, #1 @ if increasing SDRC clk rate, 104 138 beq return_to_sdram @ return to SDRAM code, otherwise, 105 139 bl configure_sdrc @ reprogram SDRC regs now 106 - mov r12, r5 107 - bl wait_clk_stable @ wait for SDRC to stabilize 108 140 return_to_sdram: 109 141 isb @ prevent speculative exec past here 110 142 mov r0, #0 @ return value ··· 143 113 unlock_dll: 144 114 ldr r11, omap3_sdrc_dlla_ctrl 145 115 ldr r12, [r11] 146 - and r12, r12, #FIXEDDELAY_MASK 116 + bic r12, r12, #FIXEDDELAY_MASK 147 117 orr r12, r12, #FIXEDDELAY_DEFAULT 148 118 orr r12, r12, #DLLIDLE_MASK 149 119 str r12, [r11] @ (no OCP barrier needed) ··· 159 129 ldr r12, [r11] @ read the contents of SDRC_POWER 160 130 mov r9, r12 @ keep a copy of SDRC_POWER bits 161 131 orr r12, r12, #SRFRONIDLEREQ_MASK @ enable self refresh on idle 162 - bic r12, r12, #PWDENA_MASK @ clear PWDENA 163 132 str r12, [r11] @ write back to SDRC_POWER register 164 133 ldr r12, [r11] @ posted-write barrier for SDRC 165 134 idle_sdrc: ··· 178 149 ldr r12, [r11] 179 150 ldr r10, core_m2_mask_val @ modify m2 for core dpll 180 151 and r12, r12, r10 181 - orr r12, r12, r3, lsl #CORE_DPLL_CLKOUT_DIV_SHIFT 152 + orr r12, r12, r0, lsl #CORE_DPLL_CLKOUT_DIV_SHIFT 182 153 str r12, [r11] 183 154 ldr r12, [r11] @ posted-write barrier for CM 184 155 bx lr ··· 216 187 bne wait_dll_unlock 217 188 bx lr 218 189 configure_sdrc: 219 - ldr r11, omap3_sdrc_rfr_ctrl 220 - str r0, [r11] 221 - ldr r11, omap3_sdrc_actim_ctrla 222 - str r1, 
[r11] 223 - ldr r11, omap3_sdrc_actim_ctrlb 224 - str r2, [r11] 190 + ldr r12, omap_sdrc_rfr_ctrl_0_val @ fetch value from SRAM 191 + ldr r11, omap3_sdrc_rfr_ctrl_0 @ fetch addr from SRAM 192 + str r12, [r11] @ store 193 + ldr r12, omap_sdrc_actim_ctrl_a_0_val 194 + ldr r11, omap3_sdrc_actim_ctrl_a_0 195 + str r12, [r11] 196 + ldr r12, omap_sdrc_actim_ctrl_b_0_val 197 + ldr r11, omap3_sdrc_actim_ctrl_b_0 198 + str r12, [r11] 199 + ldr r12, omap_sdrc_mr_0_val 225 200 ldr r11, omap3_sdrc_mr_0 226 - str r6, [r11] 227 - ldr r6, [r11] @ posted-write barrier for SDRC 201 + str r12, [r11] 202 + ldr r12, omap_sdrc_rfr_ctrl_1_val 203 + cmp r12, #0 @ if SDRC_RFR_CTRL_1 is 0, 204 + beq skip_cs1_prog @ do not program cs1 params 205 + ldr r11, omap3_sdrc_rfr_ctrl_1 206 + str r12, [r11] 207 + ldr r12, omap_sdrc_actim_ctrl_a_1_val 208 + ldr r11, omap3_sdrc_actim_ctrl_a_1 209 + str r12, [r11] 210 + ldr r12, omap_sdrc_actim_ctrl_b_1_val 211 + ldr r11, omap3_sdrc_actim_ctrl_b_1 212 + str r12, [r11] 213 + ldr r12, omap_sdrc_mr_1_val 214 + ldr r11, omap3_sdrc_mr_1 215 + str r12, [r11] 216 + skip_cs1_prog: 217 + ldr r12, [r11] @ posted-write barrier for SDRC 228 218 bx lr 229 219 230 220 omap3_sdrc_power: ··· 254 206 .word OMAP34XX_CM_REGADDR(CORE_MOD, CM_IDLEST) 255 207 omap3_cm_iclken1_core: 256 208 .word OMAP34XX_CM_REGADDR(CORE_MOD, CM_ICLKEN1) 257 - omap3_sdrc_rfr_ctrl: 209 + 210 + omap3_sdrc_rfr_ctrl_0: 258 211 .word OMAP34XX_SDRC_REGADDR(SDRC_RFR_CTRL_0) 259 - omap3_sdrc_actim_ctrla: 212 + omap3_sdrc_rfr_ctrl_1: 213 + .word OMAP34XX_SDRC_REGADDR(SDRC_RFR_CTRL_1) 214 + omap3_sdrc_actim_ctrl_a_0: 260 215 .word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_A_0) 261 - omap3_sdrc_actim_ctrlb: 216 + omap3_sdrc_actim_ctrl_a_1: 217 + .word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_A_1) 218 + omap3_sdrc_actim_ctrl_b_0: 262 219 .word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_B_0) 220 + omap3_sdrc_actim_ctrl_b_1: 221 + .word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_B_1) 263 222 omap3_sdrc_mr_0: 264 223 .word 
OMAP34XX_SDRC_REGADDR(SDRC_MR_0) 224 + omap3_sdrc_mr_1: 225 + .word OMAP34XX_SDRC_REGADDR(SDRC_MR_1) 226 + omap_sdrc_rfr_ctrl_0_val: 227 + .word 0xDEADBEEF 228 + omap_sdrc_rfr_ctrl_1_val: 229 + .word 0xDEADBEEF 230 + omap_sdrc_actim_ctrl_a_0_val: 231 + .word 0xDEADBEEF 232 + omap_sdrc_actim_ctrl_a_1_val: 233 + .word 0xDEADBEEF 234 + omap_sdrc_actim_ctrl_b_0_val: 235 + .word 0xDEADBEEF 236 + omap_sdrc_actim_ctrl_b_1_val: 237 + .word 0xDEADBEEF 238 + omap_sdrc_mr_0_val: 239 + .word 0xDEADBEEF 240 + omap_sdrc_mr_1_val: 241 + .word 0xDEADBEEF 242 + 265 243 omap3_sdrc_dlla_status: 266 244 .word OMAP34XX_SDRC_REGADDR(SDRC_DLLA_STATUS) 267 245 omap3_sdrc_dlla_ctrl: ··· 297 223 298 224 ENTRY(omap3_sram_configure_core_dpll_sz) 299 225 .word . - omap3_sram_configure_core_dpll 226 +
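The `ldr r4, [sp, #52]` offsets above follow from the ARM procedure call standard (AAPCS): only the first four arguments travel in r0-r3, the fifth and later arrive on the caller's stack, and the `stmfd sp!, {r1-r12, lr}` prologue pushes 13 words (52 bytes) below them. A small arithmetic sketch of how those offsets line up (pure illustration, not kernel code):

```c
/* 13 registers saved by "stmfd sp!, {r1-r12, lr}", 4 bytes each. */
#define SAVED_REGS	13
#define FRAME_BYTES	(SAVED_REGS * 4)

/*
 * Offset from the post-prologue sp to the nth stack-passed argument
 * (n == 0 is the fifth C argument overall).
 */
static int stack_arg_offset(int n)
{
	return FRAME_BYTES + n * 4;
}
```

The eight SDRC values in the new omap3_sram_configure_core_dpll() prototype are exactly the eight stack arguments, which is why the loads run from `[sp, #52]` through `[sp, #80]`.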
+1 -1
arch/arm/mach-u300/core.c
··· 510 510 } 511 511 }; 512 512 513 - static void u300_init_check_chip(void) 513 + static void __init u300_init_check_chip(void) 514 514 { 515 515 516 516 u16 val;
+73 -45
arch/arm/mm/init.c
··· 120 120 printk("%d pages swap cached\n", cached); 121 121 } 122 122 123 + static void __init find_node_limits(int node, struct meminfo *mi, 124 + unsigned long *min, unsigned long *max_low, unsigned long *max_high) 125 + { 126 + int i; 127 + 128 + *min = -1UL; 129 + *max_low = *max_high = 0; 130 + 131 + for_each_nodebank(i, mi, node) { 132 + struct membank *bank = &mi->bank[i]; 133 + unsigned long start, end; 134 + 135 + start = bank_pfn_start(bank); 136 + end = bank_pfn_end(bank); 137 + 138 + if (*min > start) 139 + *min = start; 140 + if (*max_high < end) 141 + *max_high = end; 142 + if (bank->highmem) 143 + continue; 144 + if (*max_low < end) 145 + *max_low = end; 146 + } 147 + } 148 + 123 149 /* 124 150 * FIXME: We really want to avoid allocating the bootmap bitmap 125 151 * over the top of the initrd. Hopefully, this is located towards ··· 236 210 #endif 237 211 } 238 212 239 - static unsigned long __init bootmem_init_node(int node, struct meminfo *mi) 213 + static void __init bootmem_init_node(int node, struct meminfo *mi, 214 + unsigned long start_pfn, unsigned long end_pfn) 240 215 { 241 - unsigned long start_pfn, end_pfn, boot_pfn; 216 + unsigned long boot_pfn; 242 217 unsigned int boot_pages; 243 218 pg_data_t *pgdat; 244 219 int i; 245 220 246 - start_pfn = -1UL; 247 - end_pfn = 0; 248 - 249 221 /* 250 - * Calculate the pfn range, and map the memory banks for this node. 222 + * Map the memory banks for this node. 251 223 */ 252 224 for_each_nodebank(i, mi, node) { 253 225 struct membank *bank = &mi->bank[i]; 254 - unsigned long start, end; 255 226 256 - start = bank_pfn_start(bank); 257 - end = bank_pfn_end(bank); 258 - 259 - if (start_pfn > start) 260 - start_pfn = start; 261 - if (end_pfn < end) 262 - end_pfn = end; 263 - 264 - map_memory_bank(bank); 227 + if (!bank->highmem) 228 + map_memory_bank(bank); 265 229 } 266 - 267 - /* 268 - * If there is no memory in this node, ignore it. 
269 - */ 270 - if (end_pfn == 0) 271 - return end_pfn; 272 230 273 231 /* 274 232 * Allocate the bootmem bitmap page. ··· 270 260 271 261 for_each_nodebank(i, mi, node) { 272 262 struct membank *bank = &mi->bank[i]; 273 - free_bootmem_node(pgdat, bank_phys_start(bank), bank_phys_size(bank)); 263 + if (!bank->highmem) 264 + free_bootmem_node(pgdat, bank_phys_start(bank), bank_phys_size(bank)); 274 265 memory_present(node, bank_pfn_start(bank), bank_pfn_end(bank)); 275 266 } 276 267 ··· 280 269 */ 281 270 reserve_bootmem_node(pgdat, boot_pfn << PAGE_SHIFT, 282 271 boot_pages << PAGE_SHIFT, BOOTMEM_DEFAULT); 283 - 284 - return end_pfn; 285 272 } 286 273 287 274 static void __init bootmem_reserve_initrd(int node) ··· 306 297 static void __init bootmem_free_node(int node, struct meminfo *mi) 307 298 { 308 299 unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES]; 309 - unsigned long start_pfn, end_pfn; 310 - pg_data_t *pgdat = NODE_DATA(node); 300 + unsigned long min, max_low, max_high; 311 301 int i; 312 302 313 - start_pfn = pgdat->bdata->node_min_pfn; 314 - end_pfn = pgdat->bdata->node_low_pfn; 303 + find_node_limits(node, mi, &min, &max_low, &max_high); 315 304 316 305 /* 317 306 * initialise the zones within this node. 318 307 */ 319 308 memset(zone_size, 0, sizeof(zone_size)); 320 - memset(zhole_size, 0, sizeof(zhole_size)); 321 309 322 310 /* 323 311 * The size of this node has already been determined. If we need 324 312 * to do anything fancy with the allocation of this memory to the 325 313 * zones, now is the time to do it. 326 314 */ 327 - zone_size[0] = end_pfn - start_pfn; 315 + zone_size[0] = max_low - min; 316 + #ifdef CONFIG_HIGHMEM 317 + zone_size[ZONE_HIGHMEM] = max_high - max_low; 318 + #endif 328 319 329 320 /* 330 321 * For each bank in this node, calculate the size of the holes. 
331 322 * holes = node_size - sum(bank_sizes_in_node) 332 323 */ 333 - zhole_size[0] = zone_size[0]; 334 - for_each_nodebank(i, mi, node) 335 - zhole_size[0] -= bank_pfn_size(&mi->bank[i]); 324 + memcpy(zhole_size, zone_size, sizeof(zhole_size)); 325 + for_each_nodebank(i, mi, node) { 326 + int idx = 0; 327 + #ifdef CONFIG_HIGHMEM 328 + if (mi->bank[i].highmem) 329 + idx = ZONE_HIGHMEM; 330 + #endif 331 + zhole_size[idx] -= bank_pfn_size(&mi->bank[i]); 332 + } 336 333 337 334 /* 338 335 * Adjust the sizes according to any special requirements for ··· 346 331 */ 347 332 arch_adjust_zones(node, zone_size, zhole_size); 348 333 349 - free_area_init_node(node, zone_size, start_pfn, zhole_size); 334 + free_area_init_node(node, zone_size, min, zhole_size); 350 335 } 351 336 352 337 void __init bootmem_init(void) 353 338 { 354 339 struct meminfo *mi = &meminfo; 355 - unsigned long memend_pfn = 0; 340 + unsigned long min, max_low, max_high; 356 341 int node, initrd_node; 357 342 358 343 /* ··· 360 345 */ 361 346 initrd_node = check_initrd(mi); 362 347 348 + max_low = max_high = 0; 349 + 363 350 /* 364 351 * Run through each node initialising the bootmem allocator. 365 352 */ 366 353 for_each_node(node) { 367 - unsigned long end_pfn = bootmem_init_node(node, mi); 354 + unsigned long node_low, node_high; 355 + 356 + find_node_limits(node, mi, &min, &node_low, &node_high); 357 + 358 + if (node_low > max_low) 359 + max_low = node_low; 360 + if (node_high > max_high) 361 + max_high = node_high; 362 + 363 + /* 364 + * If there is no memory in this node, ignore it. 365 + * (We can't have nodes which have no lowmem) 366 + */ 367 + if (node_low == 0) 368 + continue; 369 + 370 + bootmem_init_node(node, mi, min, node_low); 368 371 369 372 /* 370 373 * Reserve any special node zero regions. ··· 395 362 */ 396 363 if (node == initrd_node) 397 364 bootmem_reserve_initrd(node); 398 - 399 - /* 400 - * Remember the highest memory PFN. 
401 - */ 402 - if (end_pfn > memend_pfn) 403 - memend_pfn = end_pfn; 404 365 } 405 366 406 367 /* ··· 410 383 for_each_node(node) 411 384 bootmem_free_node(node, mi); 412 385 413 - high_memory = __va((memend_pfn << PAGE_SHIFT) - 1) + 1; 386 + high_memory = __va((max_low << PAGE_SHIFT) - 1) + 1; 414 387 415 388 /* 416 389 * This doesn't seem to be used by the Linux memory manager any ··· 420 393 * Note: max_low_pfn and max_pfn reflect the number of _pages_ in 421 394 * the system, not the maximum PFN. 422 395 */ 423 - max_pfn = max_low_pfn = memend_pfn - PHYS_PFN_OFFSET; 396 + max_low_pfn = max_low - PHYS_PFN_OFFSET; 397 + max_pfn = max_high - PHYS_PFN_OFFSET; 424 398 } 425 399 426 400 static inline int free_area(unsigned long pfn, unsigned long end, char *s)
+8 -1
arch/arm/mm/mmu.c
··· 687 687 688 688 static void __init sanity_check_meminfo(void) 689 689 { 690 - int i, j; 690 + int i, j, highmem = 0; 691 691 692 692 for (i = 0, j = 0; i < meminfo.nr_banks; i++) { 693 693 struct membank *bank = &meminfo.bank[j]; 694 694 *bank = meminfo.bank[i]; 695 695 696 696 #ifdef CONFIG_HIGHMEM 697 + if (__va(bank->start) > VMALLOC_MIN || 698 + __va(bank->start) < (void *)PAGE_OFFSET) 699 + highmem = 1; 700 + 701 + bank->highmem = highmem; 702 + 697 703 /* 698 704 * Split those memory banks which are partially overlapping 699 705 * the vmalloc area greatly simplifying things later. ··· 720 714 i++; 721 715 bank[1].size -= VMALLOC_MIN - __va(bank->start); 722 716 bank[1].start = __pa(VMALLOC_MIN - 1) + 1; 717 + bank[1].highmem = highmem = 1; 723 718 j++; 724 719 } 725 720 bank->size = VMALLOC_MIN - __va(bank->start);
+4 -4
arch/arm/plat-omap/cpu-omap.c
··· 78 78 79 79 /* Ensure desired rate is within allowed range. Some governors 80 80 * (ondemand) will just pass target_freq=0 to get the minimum. */ 81 - if (target_freq < policy->cpuinfo.min_freq) 82 - target_freq = policy->cpuinfo.min_freq; 83 - if (target_freq > policy->cpuinfo.max_freq) 84 - target_freq = policy->cpuinfo.max_freq; 81 + if (target_freq < policy->min) 82 + target_freq = policy->min; 83 + if (target_freq > policy->max) 84 + target_freq = policy->max; 85 85 86 86 freqs.old = omap_getspeed(0); 87 87 freqs.new = clk_round_rate(mpu_clk, target_freq * 1000) / 1000;
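The cpu-omap.c change above clamps the request against the run-time policy limits (policy->min/max) rather than the hardware limits (cpuinfo.min_freq/max_freq), so constraints lowered by userspace are honoured. The clamp itself reduces to:

```c
/* Clamp a requested frequency (kHz) into the currently allowed policy range. */
static unsigned int clamp_target_freq(unsigned int target,
				      unsigned int policy_min,
				      unsigned int policy_max)
{
	if (target < policy_min)
		target = policy_min;
	if (target > policy_max)
		target = policy_max;
	return target;
}
```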
+3 -1
arch/arm/plat-omap/dma.c
··· 946 946 947 947 cur_lch = next_lch; 948 948 } while (next_lch != -1); 949 - } else if (cpu_class_is_omap2()) { 949 + } else if (cpu_is_omap242x() || 950 + (cpu_is_omap243x() && omap_type() <= OMAP2430_REV_ES1_0)) { 951 + 950 952 /* Errata: Need to write lch even if not using chaining */ 951 953 dma_write(lch, CLNK_CTRL(lch)); 952 954 }
+95 -32
arch/arm/plat-omap/gpio.c
··· 476 476 __raw_writel(l, reg); 477 477 } 478 478 479 - static int __omap_get_gpio_datain(int gpio) 479 + static int _get_gpio_datain(struct gpio_bank *bank, int gpio) 480 480 { 481 - struct gpio_bank *bank; 482 481 void __iomem *reg; 483 482 484 483 if (check_gpio(gpio) < 0) 485 484 return -EINVAL; 486 - bank = get_gpio_bank(gpio); 487 485 reg = bank->base; 488 486 switch (bank->method) { 489 487 #ifdef CONFIG_ARCH_OMAP1 ··· 520 522 } 521 523 return (__raw_readl(reg) 522 524 & (1 << get_gpio_index(gpio))) != 0; 525 + } 526 + 527 + static int _get_gpio_dataout(struct gpio_bank *bank, int gpio) 528 + { 529 + void __iomem *reg; 530 + 531 + if (check_gpio(gpio) < 0) 532 + return -EINVAL; 533 + reg = bank->base; 534 + 535 + switch (bank->method) { 536 + #ifdef CONFIG_ARCH_OMAP1 537 + case METHOD_MPUIO: 538 + reg += OMAP_MPUIO_OUTPUT; 539 + break; 540 + #endif 541 + #ifdef CONFIG_ARCH_OMAP15XX 542 + case METHOD_GPIO_1510: 543 + reg += OMAP1510_GPIO_DATA_OUTPUT; 544 + break; 545 + #endif 546 + #ifdef CONFIG_ARCH_OMAP16XX 547 + case METHOD_GPIO_1610: 548 + reg += OMAP1610_GPIO_DATAOUT; 549 + break; 550 + #endif 551 + #ifdef CONFIG_ARCH_OMAP730 552 + case METHOD_GPIO_730: 553 + reg += OMAP730_GPIO_DATA_OUTPUT; 554 + break; 555 + #endif 556 + #ifdef CONFIG_ARCH_OMAP850 557 + case METHOD_GPIO_850: 558 + reg += OMAP850_GPIO_DATA_OUTPUT; 559 + break; 560 + #endif 561 + #if defined(CONFIG_ARCH_OMAP24XX) || defined(CONFIG_ARCH_OMAP34XX) || \ 562 + defined(CONFIG_ARCH_OMAP4) 563 + case METHOD_GPIO_24XX: 564 + reg += OMAP24XX_GPIO_DATAOUT; 565 + break; 566 + #endif 567 + default: 568 + return -EINVAL; 569 + } 570 + 571 + return (__raw_readl(reg) & (1 << get_gpio_index(gpio))) != 0; 523 572 } 524 573 525 574 #define MOD_REG_BIT(reg, bit_mask, set) \ ··· 1234 1189 struct gpio_bank *bank = get_irq_chip_data(irq); 1235 1190 1236 1191 _set_gpio_irqenable(bank, gpio, 0); 1192 + _set_gpio_triggering(bank, get_gpio_index(gpio), IRQ_TYPE_NONE); 1237 1193 } 1238 1194 1239 1195 static void 
gpio_unmask_irq(unsigned int irq) ··· 1242 1196 unsigned int gpio = irq - IH_GPIO_BASE; 1243 1197 struct gpio_bank *bank = get_irq_chip_data(irq); 1244 1198 unsigned int irq_mask = 1 << get_gpio_index(gpio); 1199 + struct irq_desc *desc = irq_to_desc(irq); 1200 + u32 trigger = desc->status & IRQ_TYPE_SENSE_MASK; 1201 + 1202 + if (trigger) 1203 + _set_gpio_triggering(bank, get_gpio_index(gpio), trigger); 1245 1204 1246 1205 /* For level-triggered GPIOs, the clearing must be done after 1247 1206 * the HW source is cleared, thus after the handler has run */ ··· 1401 1350 return 0; 1402 1351 } 1403 1352 1353 + static int gpio_is_input(struct gpio_bank *bank, int mask) 1354 + { 1355 + void __iomem *reg = bank->base; 1356 + 1357 + switch (bank->method) { 1358 + case METHOD_MPUIO: 1359 + reg += OMAP_MPUIO_IO_CNTL; 1360 + break; 1361 + case METHOD_GPIO_1510: 1362 + reg += OMAP1510_GPIO_DIR_CONTROL; 1363 + break; 1364 + case METHOD_GPIO_1610: 1365 + reg += OMAP1610_GPIO_DIRECTION; 1366 + break; 1367 + case METHOD_GPIO_730: 1368 + reg += OMAP730_GPIO_DIR_CONTROL; 1369 + break; 1370 + case METHOD_GPIO_850: 1371 + reg += OMAP850_GPIO_DIR_CONTROL; 1372 + break; 1373 + case METHOD_GPIO_24XX: 1374 + reg += OMAP24XX_GPIO_OE; 1375 + break; 1376 + } 1377 + return __raw_readl(reg) & mask; 1378 + } 1379 + 1404 1380 static int gpio_get(struct gpio_chip *chip, unsigned offset) 1405 1381 { 1406 - return __omap_get_gpio_datain(chip->base + offset); 1382 + struct gpio_bank *bank; 1383 + void __iomem *reg; 1384 + int gpio; 1385 + u32 mask; 1386 + 1387 + gpio = chip->base + offset; 1388 + bank = get_gpio_bank(gpio); 1389 + reg = bank->base; 1390 + mask = 1 << get_gpio_index(gpio); 1391 + 1392 + if (gpio_is_input(bank, mask)) 1393 + return _get_gpio_datain(bank, gpio); 1394 + else 1395 + return _get_gpio_dataout(bank, gpio); 1407 1396 } 1408 1397 1409 1398 static int gpio_output(struct gpio_chip *chip, unsigned offset, int value) ··· 1976 1885 1977 1886 #include <linux/debugfs.h> 1978 1887 
#include <linux/seq_file.h> 1979 - 1980 - static int gpio_is_input(struct gpio_bank *bank, int mask) 1981 - { 1982 - void __iomem *reg = bank->base; 1983 - 1984 - switch (bank->method) { 1985 - case METHOD_MPUIO: 1986 - reg += OMAP_MPUIO_IO_CNTL; 1987 - break; 1988 - case METHOD_GPIO_1510: 1989 - reg += OMAP1510_GPIO_DIR_CONTROL; 1990 - break; 1991 - case METHOD_GPIO_1610: 1992 - reg += OMAP1610_GPIO_DIRECTION; 1993 - break; 1994 - case METHOD_GPIO_730: 1995 - reg += OMAP730_GPIO_DIR_CONTROL; 1996 - break; 1997 - case METHOD_GPIO_850: 1998 - reg += OMAP850_GPIO_DIR_CONTROL; 1999 - break; 2000 - case METHOD_GPIO_24XX: 2001 - reg += OMAP24XX_GPIO_OE; 2002 - break; 2003 - } 2004 - return __raw_readl(reg) & mask; 2005 - } 2006 - 2007 1888 2008 1889 static int dbg_gpio_show(struct seq_file *s, void *unused) 2009 1890 {
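gpio_get() above now consults the direction register and reads DATAOUT for pins configured as outputs, so reading back an output returns the driven value rather than the (possibly floating) input latch. A register-free sketch of that dispatch — the demo register layout is invented, though the "OE bit set means input" convention matches the OMAP24xx style checked by gpio_is_input():

```c
/* Fake register file standing in for one GPIO bank. */
struct demo_gpio_bank {
	unsigned int oe;	/* 1 bit per pin: 1 = input (OMAP24xx OE style) */
	unsigned int datain;
	unsigned int dataout;
};

static int demo_gpio_get(const struct demo_gpio_bank *bank, int pin)
{
	unsigned int mask = 1u << pin;

	/* Input pin: sample DATAIN; output pin: read back DATAOUT. */
	if (bank->oe & mask)
		return (bank->datain & mask) != 0;
	return (bank->dataout & mask) != 0;
}
```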
+2
arch/arm/plat-omap/include/mach/clock.h
··· 20 20 struct clkops { 21 21 int (*enable)(struct clk *); 22 22 void (*disable)(struct clk *); 23 + void (*find_idlest)(struct clk *, void __iomem **, u8 *); 24 + void (*find_companion)(struct clk *, void __iomem **, u8 *); 23 25 }; 24 26 25 27 #if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) || \
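The two new clkops hooks let a clock override how its idle-status and companion registers are located, with callers expected to fall back to a default when a hook is NULL. A generic sketch of that optional-hook pattern (all names here are illustrative, not the kernel's):

```c
#include <stddef.h>

struct demo_clk;

struct demo_clkops {
	int (*enable)(struct demo_clk *);
	/* Optional: override how the idle-status location is found. */
	int (*find_idlest)(struct demo_clk *, int *reg, unsigned char *bit);
};

struct demo_clk {
	const struct demo_clkops *ops;
	int idlest_reg;
};

static int default_find_idlest(struct demo_clk *clk, int *reg, unsigned char *bit)
{
	*reg = clk->idlest_reg;
	*bit = 0;
	return 0;
}

/* A clock with unusual register layout supplies its own hook. */
static int custom_find_idlest(struct demo_clk *clk, int *reg, unsigned char *bit)
{
	(void)clk;
	*reg = 42;
	*bit = 5;
	return 0;
}

static const struct demo_clkops custom_ops = { .find_idlest = custom_find_idlest };

/* Use the per-clock hook when present, the default otherwise. */
static int clk_find_idlest(struct demo_clk *clk, int *reg, unsigned char *bit)
{
	if (clk->ops && clk->ops->find_idlest)
		return clk->ops->find_idlest(clk, reg, bit);
	return default_find_idlest(clk, reg, bit);
}
```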
-5
arch/arm/plat-omap/include/mach/cpu.h
··· 378 378 #define cpu_class_is_omap2() (cpu_is_omap24xx() || cpu_is_omap34xx() || \ 379 379 cpu_is_omap44xx()) 380 380 381 - #if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) || \ 382 - defined(CONFIG_ARCH_OMAP4) 383 - 384 381 /* Various silicon revisions for omap2 */ 385 382 #define OMAP242X_CLASS 0x24200024 386 383 #define OMAP2420_REV_ES1_0 0x24200024 ··· 433 436 434 437 int omap_chip_is(struct omap_chip_id oci); 435 438 void omap2_check_revision(void); 436 - 437 - #endif /* defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) */
+2 -1
arch/arm/plat-omap/include/mach/io.h
··· 228 228 extern void omap1_init_common_hw(void); 229 229 230 230 extern void omap2_map_common_io(void); 231 - extern void omap2_init_common_hw(struct omap_sdrc_params *sp); 231 + extern void omap2_init_common_hw(struct omap_sdrc_params *sdrc_cs0, 232 + struct omap_sdrc_params *sdrc_cs1); 232 233 233 234 #define __arch_ioremap(p,s,t) omap_ioremap(p,s,t) 234 235 #define __arch_iounmap(v) omap_iounmap(v)
+4
arch/arm/plat-omap/include/mach/mux.h
··· 853 853 AE5_34XX_GPIO143, 854 854 H19_34XX_GPIO164_OUT, 855 855 J25_34XX_GPIO170, 856 + 857 + /* OMAP3 SDRC CKE signals to SDR/DDR ram chips */ 858 + H16_34XX_SDRC_CKE0, 859 + H17_34XX_SDRC_CKE1, 856 860 }; 857 861 858 862 struct omap_mux_cfg {
+1
arch/arm/plat-omap/include/mach/prcm.h
··· 25 25 26 26 u32 omap_prcm_get_reset_sources(void); 27 27 void omap_prcm_arch_reset(char mode); 28 + int omap2_cm_wait_idlest(void __iomem *reg, u32 mask, const char *name); 28 29 29 30 #endif 30 31
+9 -2
arch/arm/plat-omap/include/mach/sdrc.h
··· 30 30 #define SDRC_ACTIM_CTRL_A_0 0x09c 31 31 #define SDRC_ACTIM_CTRL_B_0 0x0a0 32 32 #define SDRC_RFR_CTRL_0 0x0a4 33 + #define SDRC_MR_1 0x0B4 34 + #define SDRC_ACTIM_CTRL_A_1 0x0C4 35 + #define SDRC_ACTIM_CTRL_B_1 0x0C8 36 + #define SDRC_RFR_CTRL_1 0x0D4 33 37 34 38 /* 35 39 * These values represent the number of memory clock cycles between ··· 106 102 u32 mr; 107 103 }; 108 104 109 - void __init omap2_sdrc_init(struct omap_sdrc_params *sp); 110 - struct omap_sdrc_params *omap2_sdrc_get_params(unsigned long r); 105 + void __init omap2_sdrc_init(struct omap_sdrc_params *sdrc_cs0, 106 + struct omap_sdrc_params *sdrc_cs1); 107 + int omap2_sdrc_get_params(unsigned long r, 108 + struct omap_sdrc_params **sdrc_cs0, 109 + struct omap_sdrc_params **sdrc_cs1); 111 110 112 111 #ifdef CONFIG_ARCH_OMAP2 113 112
+1
arch/arm/plat-omap/include/mach/serial.h
··· 59 59 extern void omap_uart_prepare_suspend(void); 60 60 extern void omap_uart_prepare_idle(int num); 61 61 extern void omap_uart_resume_idle(int num); 62 + extern void omap_uart_enable_irqs(int enable); 62 63 #endif 63 64 64 65 #endif
+12 -11
arch/arm/plat-omap/include/mach/sram.h
··· 21 21 u32 mem_type); 22 22 extern u32 omap2_set_prcm(u32 dpll_ctrl_val, u32 sdrc_rfr_val, int bypass); 23 23 24 - extern u32 omap3_configure_core_dpll(u32 sdrc_rfr_ctrl, 25 - u32 sdrc_actim_ctrla, 26 - u32 sdrc_actim_ctrlb, u32 m2, 27 - u32 unlock_dll, u32 f, u32 sdrc_mr, 28 - u32 inc); 24 + extern u32 omap3_configure_core_dpll( 25 + u32 m2, u32 unlock_dll, u32 f, u32 inc, 26 + u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0, 27 + u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0, 28 + u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1, 29 + u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1); 29 30 30 31 /* Do not use these */ 31 32 extern void omap1_sram_reprogram_clock(u32 ckctl, u32 dpllctl); ··· 60 59 u32 mem_type); 61 60 extern unsigned long omap243x_sram_reprogram_sdrc_sz; 62 61 63 - 64 - extern u32 omap3_sram_configure_core_dpll(u32 sdrc_rfr_ctrl, 65 - u32 sdrc_actim_ctrla, 66 - u32 sdrc_actim_ctrlb, u32 m2, 67 - u32 unlock_dll, u32 f, u32 sdrc_mr, 68 - u32 inc); 62 + extern u32 omap3_sram_configure_core_dpll( 63 + u32 m2, u32 unlock_dll, u32 f, u32 inc, 64 + u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0, 65 + u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0, 66 + u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1, 67 + u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1); 69 68 extern unsigned long omap3_sram_configure_core_dpll_sz; 70 69 71 70 #endif
+20 -14
arch/arm/plat-omap/sram.c
··· 44 44 #define OMAP2_SRAM_VA 0xe3000000 45 45 #define OMAP2_SRAM_PUB_VA (OMAP2_SRAM_VA + 0x800) 46 46 #define OMAP3_SRAM_PA 0x40200000 47 - #define OMAP3_SRAM_VA 0xd7000000 47 + #define OMAP3_SRAM_VA 0xe3000000 48 48 #define OMAP3_SRAM_PUB_PA 0x40208000 49 - #define OMAP3_SRAM_PUB_VA 0xd7008000 49 + #define OMAP3_SRAM_PUB_VA (OMAP3_SRAM_VA + 0x8000) 50 50 #define OMAP4_SRAM_PA 0x40200000 /*0x402f0000*/ 51 51 #define OMAP4_SRAM_VA 0xd7000000 /*0xd70f0000*/ 52 52 ··· 373 373 374 374 #ifdef CONFIG_ARCH_OMAP3 375 375 376 - static u32 (*_omap3_sram_configure_core_dpll)(u32 sdrc_rfr_ctrl, 377 - u32 sdrc_actim_ctrla, 378 - u32 sdrc_actim_ctrlb, 379 - u32 m2, u32 unlock_dll, 380 - u32 f, u32 sdrc_mr, u32 inc); 381 - u32 omap3_configure_core_dpll(u32 sdrc_rfr_ctrl, u32 sdrc_actim_ctrla, 382 - u32 sdrc_actim_ctrlb, u32 m2, u32 unlock_dll, 383 - u32 f, u32 sdrc_mr, u32 inc) 376 + static u32 (*_omap3_sram_configure_core_dpll)( 377 + u32 m2, u32 unlock_dll, u32 f, u32 inc, 378 + u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0, 379 + u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0, 380 + u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1, 381 + u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1); 382 + 383 + u32 omap3_configure_core_dpll(u32 m2, u32 unlock_dll, u32 f, u32 inc, 384 + u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0, 385 + u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0, 386 + u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1, 387 + u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1) 384 388 { 385 389 BUG_ON(!_omap3_sram_configure_core_dpll); 386 - return _omap3_sram_configure_core_dpll(sdrc_rfr_ctrl, 387 - sdrc_actim_ctrla, 388 - sdrc_actim_ctrlb, m2, 389 - unlock_dll, f, sdrc_mr, inc); 390 + return _omap3_sram_configure_core_dpll( 391 + m2, unlock_dll, f, inc, 392 + sdrc_rfr_ctrl_0, sdrc_actim_ctrl_a_0, 393 + sdrc_actim_ctrl_b_0, sdrc_mr_0, 394 + sdrc_rfr_ctrl_1, sdrc_actim_ctrl_a_1, 395 + sdrc_actim_ctrl_b_1, sdrc_mr_1); 390 396 } 391 397 392 398 /* REVISIT: Should this be same as omap34xx_sram_init() after off-idle? */
+2
arch/arm/plat-orion/include/plat/gpio.h
··· 11 11 #ifndef __PLAT_GPIO_H 12 12 #define __PLAT_GPIO_H 13 13 14 + #include <linux/init.h> 15 + 14 16 /* 15 17 * GENERIC_GPIO primitives. 16 18 */
+1 -1
arch/arm/plat-s3c24xx/clock-dclk.c
··· 129 129 130 130 /* calculate the MISCCR setting for the clock */ 131 131 132 - if (parent == &clk_xtal) 132 + if (parent == &clk_mpll) 133 133 source = S3C2410_MISCCR_CLK0_MPLL; 134 134 else if (parent == &clk_upll) 135 135 source = S3C2410_MISCCR_CLK0_UPLL;
+4
arch/avr32/boards/favr-32/setup.c
··· 72 72 .debounce_max = 20, 73 73 .debounce_rep = 4, 74 74 .debounce_tol = 5, 75 + 76 + .keep_vref_on = true, 77 + .settle_delay_usecs = 500, 78 + .penirq_recheck_delay_usecs = 100, 75 79 }; 76 80 77 81 static struct spi_board_info __initdata spi1_board_info[] = {
+13 -3
arch/avr32/lib/memcpy.S
··· 24 24 brne 1f 25 25 26 26 /* At this point, "from" is word-aligned */ 27 - 2: sub r10, 4 28 - mov r9, r12 27 + 2: mov r9, r12 28 + 5: sub r10, 4 29 29 brlt 4f 30 30 31 31 3: ld.w r8, r11++ ··· 49 49 50 50 /* Handle unaligned "from" pointer */ 51 51 1: sub r10, 4 52 + movlt r9, r12 52 53 brlt 4b 53 54 add r10, r9 54 55 lsl r9, 2 ··· 60 59 st.b r12++, r8 61 60 ld.ub r8, r11++ 62 61 st.b r12++, r8 63 - rjmp 2b 62 + mov r8, r12 63 + add pc, pc, r9 64 + sub r8, 1 65 + nop 66 + sub r8, 1 67 + nop 68 + sub r8, 1 69 + nop 70 + mov r9, r8 71 + rjmp 5b
-5
arch/ia64/Makefile
··· 41 41 ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz) 42 42 endif 43 43 44 - ifeq ($(call cc-version),0304) 45 - cflags-$(CONFIG_ITANIUM) += -mtune=merced 46 - cflags-$(CONFIG_MCKINLEY) += -mtune=mckinley 47 - endif 48 - 49 44 KBUILD_CFLAGS += $(cflags-y) 50 45 head-y := arch/ia64/kernel/head.o arch/ia64/kernel/init_task.o 51 46
+1 -1
arch/ia64/include/asm/bitops.h
··· 286 286 { 287 287 __u32 *p = (__u32 *) addr + (nr >> 5); 288 288 __u32 m = 1 << (nr & 31); 289 - int oldbitset = *p & m; 289 + int oldbitset = (*p & m) != 0; 290 290 291 291 *p &= ~m; 292 292 return oldbitset;
-1
arch/ia64/include/asm/pgtable.h
··· 155 155 #include <linux/bitops.h> 156 156 #include <asm/cacheflush.h> 157 157 #include <asm/mmu_context.h> 158 - #include <asm/processor.h> 159 158 160 159 /* 161 160 * Next come the mappings that determine how mmap() protection bits
+3 -1
arch/ia64/kernel/dma-mapping.c
··· 10 10 11 11 static int __init dma_init(void) 12 12 { 13 - dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES); 13 + dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES); 14 + 15 + return 0; 14 16 } 15 17 fs_initcall(dma_init); 16 18
+1 -3
arch/ia64/kernel/ia64_ksyms.c
··· 21 21 22 22 #include <asm/page.h> 23 23 EXPORT_SYMBOL(clear_page); 24 + EXPORT_SYMBOL(copy_page); 24 25 25 26 #ifdef CONFIG_VIRTUAL_MEM_MAP 26 27 #include <linux/bootmem.h> ··· 60 59 EXPORT_SYMBOL(__udivdi3); 61 60 EXPORT_SYMBOL(__moddi3); 62 61 EXPORT_SYMBOL(__umoddi3); 63 - 64 - #include <asm/page.h> 65 - EXPORT_SYMBOL(copy_page); 66 62 67 63 #if defined(CONFIG_MD_RAID456) || defined(CONFIG_MD_RAID456_MODULE) 68 64 extern void xor_ia64_2(void);
+4
arch/ia64/kernel/iosapic.c
··· 1072 1072 } 1073 1073 1074 1074 addr = ioremap(phys_addr, 0); 1075 + if (addr == NULL) { 1076 + spin_unlock_irqrestore(&iosapic_lock, flags); 1077 + return -ENOMEM; 1078 + } 1075 1079 ver = iosapic_version(addr); 1076 1080 if ((err = iosapic_check_gsi_range(gsi_base, ver))) { 1077 1081 iounmap(addr);
-5
arch/ia64/kernel/pci-dma.c
··· 69 69 70 70 int iommu_dma_supported(struct device *dev, u64 mask) 71 71 { 72 - struct dma_map_ops *ops = platform_dma_get_ops(dev); 73 - 74 - if (ops->dma_supported) 75 - return ops->dma_supported(dev, mask); 76 - 77 72 /* Copied from i386. Doesn't make much sense, because it will 78 73 only work for pci_alloc_coherent. 79 74 The caller just has to use GFP_DMA in this case. */
+5 -1
arch/ia64/kernel/topology.c
··· 372 372 retval = kobject_init_and_add(&all_cpu_cache_info[cpu].kobj, 373 373 &cache_ktype_percpu_entry, &sys_dev->kobj, 374 374 "%s", "cache"); 375 + if (unlikely(retval < 0)) { 376 + cpu_cache_sysfs_exit(cpu); 377 + return retval; 378 + } 375 379 376 380 for (i = 0; i < all_cpu_cache_info[cpu].num_cache_leaves; i++) { 377 381 this_object = LEAF_KOBJECT_PTR(cpu,i); ··· 389 385 } 390 386 kobject_put(&all_cpu_cache_info[cpu].kobj); 391 387 cpu_cache_sysfs_exit(cpu); 392 - break; 388 + return retval; 393 389 } 394 390 kobject_uevent(&(this_object->kobj), KOBJ_ADD); 395 391 }
+5 -3
arch/ia64/lib/ip_fast_csum.S
··· 96 96 GLOBAL_ENTRY(csum_ipv6_magic) 97 97 ld4 r20=[in0],4 98 98 ld4 r21=[in1],4 99 - dep r15=in3,in2,32,16 99 + zxt4 in2=in2 100 100 ;; 101 101 ld4 r22=[in0],4 102 102 ld4 r23=[in1],4 103 - mux1 r15=r15,@rev 103 + dep r15=in3,in2,32,16 104 104 ;; 105 105 ld4 r24=[in0],4 106 106 ld4 r25=[in1],4 107 - shr.u r15=r15,16 107 + mux1 r15=r15,@rev 108 108 add r16=r20,r21 109 109 add r17=r22,r23 110 + zxt4 in4=in4 110 111 ;; 111 112 ld4 r26=[in0],4 112 113 ld4 r27=[in1],4 114 + shr.u r15=r15,16 113 115 add r18=r24,r25 114 116 add r8=r16,r17 115 117 ;;
+4 -2
arch/m68k/amiga/config.c
··· 574 574 575 575 tod_2000.cntrl1 = TOD2000_CNTRL1_HOLD; 576 576 577 - while ((tod_2000.cntrl1 & TOD2000_CNTRL1_BUSY) && cnt--) { 577 + while ((tod_2000.cntrl1 & TOD2000_CNTRL1_BUSY) && cnt) { 578 578 tod_2000.cntrl1 &= ~TOD2000_CNTRL1_HOLD; 579 579 udelay(70); 580 580 tod_2000.cntrl1 |= TOD2000_CNTRL1_HOLD; 581 + --cnt; 581 582 } 582 583 583 584 if (!cnt) ··· 650 649 651 650 tod_2000.cntrl1 |= TOD2000_CNTRL1_HOLD; 652 651 653 - while ((tod_2000.cntrl1 & TOD2000_CNTRL1_BUSY) && cnt--) { 652 + while ((tod_2000.cntrl1 & TOD2000_CNTRL1_BUSY) && cnt) { 654 653 tod_2000.cntrl1 &= ~TOD2000_CNTRL1_HOLD; 655 654 udelay(70); 656 655 tod_2000.cntrl1 |= TOD2000_CNTRL1_HOLD; 656 + --cnt; 657 657 } 658 658 659 659 if (!cnt)
+4 -6
arch/m68k/include/asm/motorola_pgalloc.h
··· 36 36 return NULL; 37 37 38 38 pte = kmap(page); 39 - if (pte) { 40 - __flush_page_to_ram(pte); 41 - flush_tlb_kernel_page(pte); 42 - nocache_page(pte); 43 - } 44 - kunmap(pte); 39 + __flush_page_to_ram(pte); 40 + flush_tlb_kernel_page(pte); 41 + nocache_page(pte); 42 + kunmap(page); 45 43 pgtable_page_ctor(page); 46 44 return page; 47 45 }
+1 -2
arch/m68k/include/asm/pgtable_mm.h
··· 135 135 #endif 136 136 137 137 #ifndef __ASSEMBLY__ 138 - #include <asm-generic/pgtable.h> 139 - 140 138 /* 141 139 * Macro to mark a page protection value as "uncacheable". 142 140 */ ··· 152 154 ? (__pgprot((pgprot_val(prot) & _CACHEMASK040) | _PAGE_NOCACHE_S)) \ 153 155 : (prot))) 154 156 157 + #include <asm-generic/pgtable.h> 155 158 #endif /* !__ASSEMBLY__ */ 156 159 157 160 /*
+3 -1
arch/m68k/include/asm/unistd.h
··· 334 334 #define __NR_inotify_init1 328 335 335 #define __NR_preadv 329 336 336 #define __NR_pwritev 330 337 + #define __NR_rt_tgsigqueueinfo 331 338 + #define __NR_perf_counter_open 332 337 339 338 340 #ifdef __KERNEL__ 339 341 340 - #define NR_syscalls 331 342 + #define NR_syscalls 333 341 343 342 344 #define __ARCH_WANT_IPC_PARSE_VERSION 343 345 #define __ARCH_WANT_OLD_READDIR
+2
arch/m68k/kernel/entry.S
··· 755 755 .long sys_inotify_init1 756 756 .long sys_preadv 757 757 .long sys_pwritev /* 330 */ 758 + .long sys_rt_tgsigqueueinfo 759 + .long sys_perf_counter_open 758 760
+2
arch/m68knommu/kernel/syscalltable.S
··· 349 349 .long sys_inotify_init1 350 350 .long sys_preadv 351 351 .long sys_pwritev /* 330 */ 352 + .long sys_rt_tgsigqueueinfo 353 + .long sys_perf_counter_open 352 354 353 355 .rept NR_syscalls-(.-sys_call_table)/4 354 356 .long sys_ni_syscall
+36 -34
arch/microblaze/configs/mmu_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.30-rc6 4 - # Fri May 22 10:02:33 2009 3 + # Linux kernel version: 2.6.31-rc6 4 + # Tue Aug 18 11:00:02 2009 5 5 # 6 6 CONFIG_MICROBLAZE=y 7 7 # CONFIG_SWAP is not set ··· 18 18 CONFIG_GENERIC_CLOCKEVENTS=y 19 19 CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y 20 20 CONFIG_GENERIC_GPIO=y 21 + CONFIG_GENERIC_CSUM=y 22 + # CONFIG_PCI is not set 23 + CONFIG_NO_DMA=y 21 24 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 25 + CONFIG_CONSTRUCTORS=y 22 26 23 27 # 24 28 # General setup ··· 63 59 CONFIG_RD_GZIP=y 64 60 # CONFIG_RD_BZIP2 is not set 65 61 # CONFIG_RD_LZMA is not set 66 - CONFIG_INITRAMFS_COMPRESSION_NONE=y 67 - # CONFIG_INITRAMFS_COMPRESSION_GZIP is not set 62 + # CONFIG_INITRAMFS_COMPRESSION_NONE is not set 63 + CONFIG_INITRAMFS_COMPRESSION_GZIP=y 68 64 # CONFIG_INITRAMFS_COMPRESSION_BZIP2 is not set 69 65 # CONFIG_INITRAMFS_COMPRESSION_LZMA is not set 70 66 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set ··· 75 71 CONFIG_KALLSYMS=y 76 72 CONFIG_KALLSYMS_ALL=y 77 73 CONFIG_KALLSYMS_EXTRA_PASS=y 78 - # CONFIG_STRIP_ASM_SYMS is not set 79 74 # CONFIG_HOTPLUG is not set 80 75 CONFIG_PRINTK=y 81 76 CONFIG_BUG=y ··· 87 84 CONFIG_EVENTFD=y 88 85 # CONFIG_SHMEM is not set 89 86 CONFIG_AIO=y 87 + 88 + # 89 + # Performance Counters 90 + # 90 91 CONFIG_VM_EVENT_COUNTERS=y 92 + # CONFIG_STRIP_ASM_SYMS is not set 91 93 CONFIG_COMPAT_BRK=y 92 94 CONFIG_SLAB=y 93 95 # CONFIG_SLUB is not set 94 96 # CONFIG_SLOB is not set 95 97 # CONFIG_PROFILING is not set 96 98 # CONFIG_MARKERS is not set 99 + 100 + # 101 + # GCOV-based kernel profiling 102 + # 97 103 # CONFIG_SLOW_WORK is not set 98 104 # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 99 105 CONFIG_SLABINFO=y ··· 114 102 # CONFIG_MODVERSIONS is not set 115 103 # CONFIG_MODULE_SRCVERSION_ALL is not set 116 104 CONFIG_BLOCK=y 117 - # CONFIG_LBD is not set 105 + CONFIG_LBDAF=y 118 106 # CONFIG_BLK_DEV_BSG is not set 119 107 # CONFIG_BLK_DEV_INTEGRITY is not set 120 108 ··· 206 194 # CONFIG_PHYS_ADDR_T_64BIT is not set 207 195 CONFIG_ZONE_DMA_FLAG=0 208 196 CONFIG_VIRT_TO_BUS=y 209 - CONFIG_UNEVICTABLE_LRU=y 210 197 CONFIG_HAVE_MLOCK=y 211 198 CONFIG_HAVE_MLOCKED_PAGE_BIT=y 199 + CONFIG_DEFAULT_MMAP_MIN_ADDR=4096 212 200 213 201 # 214 202 # Exectuable file formats ··· 274 262 # CONFIG_ECONET is not set 275 263 # CONFIG_WAN_ROUTER is not set 276 264 # CONFIG_PHONET is not set 265 + # CONFIG_IEEE802154 is not set 277 266 # CONFIG_NET_SCHED is not set 278 267 # CONFIG_DCB is not set ··· 338 325 # CONFIG_ATA is not set 339 326 # CONFIG_MD is not set 340 327 CONFIG_NETDEVICES=y 341 - CONFIG_COMPAT_NET_DEV_OPS=y 342 328 # CONFIG_DUMMY is not set 343 329 # CONFIG_BONDING is not set 344 330 # CONFIG_MACVLAN is not set ··· 356 344 # CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set 357 345 # CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set 358 346 # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set 359 - # CONFIG_B44 is not set 347 + # CONFIG_KS8842 is not set 360 348 CONFIG_NETDEV_1000=y 361 349 CONFIG_NETDEV_10000=y ··· 422 410 # CONFIG_TCG_TPM is not set 423 411 # CONFIG_I2C is not set 424 412 # CONFIG_SPI is not set 413 + 414 + # 415 + # PPS support 416 + # 417 + # CONFIG_PPS is not set 425 418 CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y 426 419 # CONFIG_GPIOLIB is not set 427 420 # CONFIG_W1 is not set ··· 435 418 # CONFIG_THERMAL is not set 436 419 # CONFIG_THERMAL_HWMON is not set 437 420 # CONFIG_WATCHDOG is not set 438 - CONFIG_SSB_POSSIBLE=y 439 - 440 - # 441 - # Sonics Silicon Backplane 442 - # 443 - # CONFIG_SSB is not set 444 421 445 422 # 446 423 # Multifunction device drivers ··· 444 433 # CONFIG_HTC_PASIC3 is not set 445 434 # CONFIG_MFD_TMIO is not set 446 435 # CONFIG_REGULATOR is not set 447 - 448 - # 449 - # Multimedia devices 450 - # 451 - 452 - # 453 - # Multimedia core support 454 - # 455 - # CONFIG_VIDEO_DEV is not set 456 - # CONFIG_DVB_CORE is not set 457 - # CONFIG_VIDEO_MEDIA is not set 458 - 459 - # 460 - # Multimedia drivers 461 - # 462 - # CONFIG_DAB is not set 436 + # CONFIG_MEDIA_SUPPORT is not set 463 437 464 438 # 465 439 # Graphics support ··· 465 469 # CONFIG_NEW_LEDS is not set 466 470 # CONFIG_ACCESSIBILITY is not set 467 471 # CONFIG_RTC_CLASS is not set 468 - # CONFIG_DMADEVICES is not set 469 472 # CONFIG_AUXDISPLAY is not set 470 473 # CONFIG_UIO is not set 474 + 475 + # 476 + # TI VLYNQ 477 + # 471 478 # CONFIG_STAGING is not set 472 479 # ··· 484 485 # CONFIG_REISERFS_FS is not set 485 486 # CONFIG_JFS_FS is not set 486 487 # CONFIG_FS_POSIX_ACL is not set 487 - CONFIG_FILE_LOCKING=y 488 488 # CONFIG_XFS_FS is not set 489 + # CONFIG_GFS2_FS is not set 489 490 # CONFIG_OCFS2_FS is not set 490 491 # CONFIG_BTRFS_FS is not set 492 + CONFIG_FILE_LOCKING=y 493 + CONFIG_FSNOTIFY=y 491 494 # CONFIG_DNOTIFY is not set 492 495 # CONFIG_INOTIFY is not set 496 + CONFIG_INOTIFY_USER=y 493 497 # CONFIG_QUOTA is not set 494 498 # CONFIG_AUTOFS_FS is not set 495 499 # CONFIG_AUTOFS4_FS is not set ··· 680 678 # CONFIG_SYSCTL_SYSCALL_CHECK is not set 681 679 # CONFIG_PAGE_POISONING is not set 682 680 # CONFIG_SAMPLES is not set 681 + # CONFIG_KMEMCHECK is not set 683 682 CONFIG_EARLY_PRINTK=y 684 683 CONFIG_HEART_BEAT=y 685 684 CONFIG_DEBUG_BOOTMEM=y ··· 796 793 CONFIG_DECOMPRESS_GZIP=y 797 794 CONFIG_HAS_IOMEM=y 798 795 CONFIG_HAS_IOPORT=y 799 - CONFIG_HAS_DMA=y 800 796 CONFIG_HAVE_LMB=y 801 797 CONFIG_NLATTR=y
+55 -36
arch/microblaze/configs/nommu_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.30-rc5 4 - # Mon May 11 09:01:02 2009 3 + # Linux kernel version: 2.6.31-rc6 4 + # Tue Aug 18 10:35:30 2009 5 5 # 6 6 CONFIG_MICROBLAZE=y 7 7 # CONFIG_SWAP is not set ··· 17 17 # CONFIG_GENERIC_TIME_VSYSCALL is not set 18 18 CONFIG_GENERIC_CLOCKEVENTS=y 19 19 CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y 20 + CONFIG_GENERIC_GPIO=y 21 + CONFIG_GENERIC_CSUM=y 20 22 # CONFIG_PCI is not set 21 - # CONFIG_NO_DMA is not set 23 + CONFIG_NO_DMA=y 22 24 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 25 + CONFIG_CONSTRUCTORS=y 23 26 24 27 # 25 28 # General setup ··· 67 64 CONFIG_KALLSYMS=y 68 65 CONFIG_KALLSYMS_ALL=y 69 66 CONFIG_KALLSYMS_EXTRA_PASS=y 70 - # CONFIG_STRIP_ASM_SYMS is not set 71 67 # CONFIG_HOTPLUG is not set 72 68 CONFIG_PRINTK=y 73 69 CONFIG_BUG=y ··· 78 76 CONFIG_TIMERFD=y 79 77 CONFIG_EVENTFD=y 80 78 CONFIG_AIO=y 79 + 80 + # 81 + # Performance Counters 82 + # 81 83 CONFIG_VM_EVENT_COUNTERS=y 84 + # CONFIG_STRIP_ASM_SYMS is not set 82 85 CONFIG_COMPAT_BRK=y 83 86 CONFIG_SLAB=y 84 87 # CONFIG_SLUB is not set 85 88 # CONFIG_SLOB is not set 86 89 # CONFIG_PROFILING is not set 87 90 # CONFIG_MARKERS is not set 91 + 92 + # 93 + # GCOV-based kernel profiling 94 + # 95 + # CONFIG_GCOV_KERNEL is not set 88 96 # CONFIG_SLOW_WORK is not set 89 97 # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 90 98 CONFIG_SLABINFO=y ··· 107 95 # CONFIG_MODVERSIONS is not set 108 96 # CONFIG_MODULE_SRCVERSION_ALL is not set 109 97 CONFIG_BLOCK=y 110 - # CONFIG_LBD is not set 98 + CONFIG_LBDAF=y 111 99 # CONFIG_BLK_DEV_BSG is not set 112 100 # CONFIG_BLK_DEV_INTEGRITY is not set 113 101 ··· 168 156 CONFIG_CMDLINE="console=ttyUL0,115200" 169 157 # CONFIG_CMDLINE_FORCE is not set 170 158 CONFIG_OF=y 171 - CONFIG_OF_DEVICE=y 172 159 CONFIG_PROC_DEVICETREE=y 160 + 161 + # 162 + # Advanced setup 163 + # 164 + 165 + # 166 + # Default settings for advanced configuration options are used 167 + # 
168 + CONFIG_KERNEL_START=0x90000000 173 169 CONFIG_SELECT_MEMORY_MODEL=y 174 170 CONFIG_FLATMEM_MANUAL=y 175 171 # CONFIG_DISCONTIGMEM_MANUAL is not set ··· 189 169 # CONFIG_PHYS_ADDR_T_64BIT is not set 190 170 CONFIG_ZONE_DMA_FLAG=0 191 171 CONFIG_VIRT_TO_BUS=y 192 - CONFIG_UNEVICTABLE_LRU=y 172 + CONFIG_DEFAULT_MMAP_MIN_ADDR=4096 193 173 CONFIG_NOMMU_INITIAL_TRIM_EXCESS=1 194 174 195 175 # ··· 257 237 # CONFIG_ECONET is not set 258 238 # CONFIG_WAN_ROUTER is not set 259 239 # CONFIG_PHONET is not set 240 + # CONFIG_IEEE802154 is not set 260 241 # CONFIG_NET_SCHED is not set 261 242 # CONFIG_DCB is not set 262 243 ··· 275 254 CONFIG_WIRELESS_OLD_REGULATORY=y 276 255 # CONFIG_WIRELESS_EXT is not set 277 256 # CONFIG_LIB80211 is not set 278 - # CONFIG_MAC80211 is not set 257 + 258 + # 259 + # CFG80211 needs to be enabled for MAC80211 260 + # 261 + CONFIG_MAC80211_DEFAULT_PS_VALUE=0 279 262 # CONFIG_WIMAX is not set 280 263 # CONFIG_RFKILL is not set 281 264 # CONFIG_NET_9P is not set ··· 378 353 # UBI - Unsorted block images 379 354 # 380 355 # CONFIG_MTD_UBI is not set 356 + CONFIG_OF_DEVICE=y 381 357 # CONFIG_PARPORT is not set 382 358 CONFIG_BLK_DEV=y 383 359 # CONFIG_BLK_DEV_COW_COMMON is not set ··· 390 364 # CONFIG_BLK_DEV_XIP is not set 391 365 # CONFIG_CDROM_PKTCDVD is not set 392 366 # CONFIG_ATA_OVER_ETH is not set 367 + # CONFIG_XILINX_SYSACE is not set 393 368 CONFIG_MISC_DEVICES=y 394 369 # CONFIG_ENCLOSURE_SERVICES is not set 395 370 # CONFIG_C2PORT is not set ··· 410 383 # CONFIG_ATA is not set 411 384 # CONFIG_MD is not set 412 385 CONFIG_NETDEVICES=y 413 - CONFIG_COMPAT_NET_DEV_OPS=y 414 386 # CONFIG_DUMMY is not set 415 387 # CONFIG_BONDING is not set 416 388 # CONFIG_MACVLAN is not set ··· 428 402 # CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set 429 403 # CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set 430 404 # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set 431 - # CONFIG_B44 is not set 405 + # CONFIG_KS8842 is not set 432 406 CONFIG_NETDEV_1000=y 
433 407 CONFIG_NETDEV_10000=y 434 408 ··· 489 463 # CONFIG_HW_RANDOM_TIMERIOMEM is not set 490 464 # CONFIG_RTC is not set 491 465 # CONFIG_GEN_RTC is not set 466 + # CONFIG_XILINX_HWICAP is not set 492 467 # CONFIG_R3964 is not set 493 468 # CONFIG_RAW_DRIVER is not set 494 469 # CONFIG_TCG_TPM is not set 495 470 # CONFIG_I2C is not set 496 471 # CONFIG_SPI is not set 472 + 473 + # 474 + # PPS support 475 + # 476 + # CONFIG_PPS is not set 477 + CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y 478 + # CONFIG_GPIOLIB is not set 497 479 # CONFIG_W1 is not set 498 480 # CONFIG_POWER_SUPPLY is not set 499 481 # CONFIG_HWMON is not set 500 482 # CONFIG_THERMAL is not set 501 483 # CONFIG_THERMAL_HWMON is not set 502 484 # CONFIG_WATCHDOG is not set 503 - CONFIG_SSB_POSSIBLE=y 504 - 505 - # 506 - # Sonics Silicon Backplane 507 - # 508 - # CONFIG_SSB is not set 509 485 510 486 # 511 487 # Multifunction device drivers ··· 517 489 # CONFIG_HTC_PASIC3 is not set 518 490 # CONFIG_MFD_TMIO is not set 519 491 # CONFIG_REGULATOR is not set 520 - 521 - # 522 - # Multimedia devices 523 - # 524 - 525 - # 526 - # Multimedia core support 527 - # 528 - # CONFIG_VIDEO_DEV is not set 529 - # CONFIG_DVB_CORE is not set 530 - # CONFIG_VIDEO_MEDIA is not set 531 - 532 - # 533 - # Multimedia drivers 534 - # 535 - CONFIG_DAB=y 492 + # CONFIG_MEDIA_SUPPORT is not set 536 493 537 494 # 538 495 # Graphics support ··· 533 520 # CONFIG_DISPLAY_SUPPORT is not set 534 521 # CONFIG_SOUND is not set 535 522 CONFIG_USB_SUPPORT=y 536 - # CONFIG_USB_ARCH_HAS_HCD is not set 523 + CONFIG_USB_ARCH_HAS_HCD=y 537 524 # CONFIG_USB_ARCH_HAS_OHCI is not set 538 525 # CONFIG_USB_ARCH_HAS_EHCI is not set 526 + # CONFIG_USB is not set 539 527 # CONFIG_USB_OTG_WHITELIST is not set 540 528 # CONFIG_USB_OTG_BLACKLIST_HUB is not set 541 529 ··· 557 543 # CONFIG_NEW_LEDS is not set 558 544 # CONFIG_ACCESSIBILITY is not set 559 545 # CONFIG_RTC_CLASS is not set 560 - # CONFIG_DMADEVICES is not set 561 546 # CONFIG_AUXDISPLAY is not set 562 547 # CONFIG_UIO is not set 548 + 549 + # 550 + # TI VLYNQ 551 + # 563 552 # CONFIG_STAGING is not set 564 553 565 554 # ··· 575 558 # CONFIG_REISERFS_FS is not set 576 559 # CONFIG_JFS_FS is not set 577 560 CONFIG_FS_POSIX_ACL=y 578 - CONFIG_FILE_LOCKING=y 579 561 # CONFIG_XFS_FS is not set 562 + # CONFIG_GFS2_FS is not set 580 563 # CONFIG_OCFS2_FS is not set 581 564 # CONFIG_BTRFS_FS is not set 565 + CONFIG_FILE_LOCKING=y 566 + CONFIG_FSNOTIFY=y 582 567 # CONFIG_DNOTIFY is not set 583 568 # CONFIG_INOTIFY is not set 569 + CONFIG_INOTIFY_USER=y 584 570 # CONFIG_QUOTA is not set 585 571 # CONFIG_AUTOFS_FS is not set 586 572 # CONFIG_AUTOFS4_FS is not set ··· 833 813 CONFIG_ZLIB_INFLATE=y 834 814 CONFIG_HAS_IOMEM=y 835 815 CONFIG_HAS_IOPORT=y 836 - CONFIG_HAS_DMA=y 837 816 CONFIG_HAVE_LMB=y 838 817 CONFIG_NLATTR=y
-2
arch/microblaze/include/asm/hardirq.h
··· 12 12 /* should be defined in each interrupt controller driver */ 13 13 extern unsigned int get_irq(struct pt_regs *regs); 14 14 15 - #define ack_bad_irq ack_bad_irq 16 - void ack_bad_irq(unsigned int irq); 17 15 #include <asm-generic/hardirq.h> 18 16 19 17 #endif /* _ASM_MICROBLAZE_HARDIRQ_H */
+2
arch/microblaze/kernel/intc.c
··· 12 12 #include <linux/irq.h> 13 13 #include <asm/page.h> 14 14 #include <linux/io.h> 15 + #include <linux/bug.h> 15 16 16 17 #include <asm/prom.h> 17 18 #include <asm/irq.h> ··· 131 130 if (intc) 132 131 break; 133 132 } 133 + BUG_ON(!intc); 134 134 135 135 intc_baseaddr = *(int *) of_get_property(intc, "reg", NULL); 136 136 intc_baseaddr = (unsigned long) ioremap(intc_baseaddr, PAGE_SIZE);
-9
arch/microblaze/kernel/irq.c
··· 30 30 } 31 31 EXPORT_SYMBOL_GPL(irq_of_parse_and_map); 32 32 33 - /* 34 - * 'what should we do if we get a hw irq event on an illegal vector'. 35 - * each architecture has to answer this themselves. 36 - */ 37 - void ack_bad_irq(unsigned int irq) 38 - { 39 - printk(KERN_WARNING "unexpected IRQ trap at vector %02x\n", irq); 40 - } 41 - 42 33 static u32 concurrent_irq; 43 34 44 35 void do_IRQ(struct pt_regs *regs)
+1 -1
arch/microblaze/kernel/syscall_table.S
··· 313 313 .long sys_fchmodat 314 314 .long sys_faccessat 315 315 .long sys_ni_syscall /* pselect6 */ 316 - .long sys_ni_syscall /* sys_ppoll */ 316 + .long sys_ppoll 317 317 .long sys_unshare /* 310 */ 318 318 .long sys_set_robust_list 319 319 .long sys_get_robust_list
+2
arch/microblaze/kernel/timer.c
··· 22 22 #include <linux/clocksource.h> 23 23 #include <linux/clockchips.h> 24 24 #include <linux/io.h> 25 + #include <linux/bug.h> 25 26 #include <asm/cpuinfo.h> 26 27 #include <asm/setup.h> 27 28 #include <asm/prom.h> ··· 235 234 if (timer) 236 235 break; 237 236 } 237 + BUG_ON(!timer); 238 238 239 239 timer_baseaddr = *(int *) of_get_property(timer, "reg", NULL); 240 240 timer_baseaddr = (unsigned long) ioremap(timer_baseaddr, PAGE_SIZE);
+3 -3
arch/microblaze/mm/init.c
··· 130 130 * (in case the address isn't page-aligned). 131 131 */ 132 132 #ifndef CONFIG_MMU 133 - map_size = init_bootmem_node(NODE_DATA(0), PFN_UP(TOPHYS((u32)_end)), 133 + map_size = init_bootmem_node(NODE_DATA(0), PFN_UP(TOPHYS((u32)klimit)), 134 134 min_low_pfn, max_low_pfn); 135 135 #else 136 136 map_size = init_bootmem_node(&contig_page_data, 137 - PFN_UP(TOPHYS((u32)_end)), min_low_pfn, max_low_pfn); 137 + PFN_UP(TOPHYS((u32)klimit)), min_low_pfn, max_low_pfn); 138 138 #endif 139 - lmb_reserve(PFN_UP(TOPHYS((u32)_end)) << PAGE_SHIFT, map_size); 139 + lmb_reserve(PFN_UP(TOPHYS((u32)klimit)) << PAGE_SHIFT, map_size); 140 140 141 141 /* free bootmem is whole main memory */ 142 142 free_bootmem(memory_start, memory_size);
+2
arch/mips/include/asm/page.h
··· 32 32 #define PAGE_SIZE (1UL << PAGE_SHIFT) 33 33 #define PAGE_MASK (~((1 << PAGE_SHIFT) - 1)) 34 34 35 + #ifdef CONFIG_HUGETLB_PAGE 35 36 #define HPAGE_SHIFT (PAGE_SHIFT + PAGE_SHIFT - 3) 36 37 #define HPAGE_SIZE ((1UL) << HPAGE_SHIFT) 37 38 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 38 39 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) 40 + #endif /* CONFIG_HUGETLB_PAGE */ 39 41 40 42 #ifndef __ASSEMBLY__ 41 43
+1 -1
arch/parisc/kernel/traps.c
··· 532 532 /* Kill the user process later */ 533 533 regs->iaoq[0] = 0 | 3; 534 534 regs->iaoq[1] = regs->iaoq[0] + 4; 535 - regs->iasq[0] = regs->iasq[0] = regs->sr[7]; 535 + regs->iasq[0] = regs->iasq[1] = regs->sr[7]; 536 536 regs->gr[0] &= ~PSW_B; 537 537 return; 538 538 }
+75 -136
arch/powerpc/configs/ps3_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.30-rc5 4 - # Fri May 15 10:37:00 2009 3 + # Linux kernel version: 2.6.31-rc7 4 + # Mon Aug 24 17:38:50 2009 5 5 # 6 6 CONFIG_PPC64=y 7 7 8 8 # 9 9 # Processor support 10 10 # 11 + CONFIG_PPC_BOOK3S_64=y 11 12 CONFIG_PPC_BOOK3S=y 12 13 # CONFIG_POWER4_ONLY is not set 13 14 CONFIG_POWER3=y ··· 21 20 CONFIG_PPC_STD_MMU_64=y 22 21 CONFIG_PPC_MM_SLICES=y 23 22 CONFIG_VIRT_CPU_ACCOUNTING=y 23 + CONFIG_PPC_HAVE_PMU_SUPPORT=y 24 24 CONFIG_SMP=y 25 25 CONFIG_NR_CPUS=2 26 26 CONFIG_64BIT=y ··· 33 31 CONFIG_GENERIC_TIME_VSYSCALL=y 34 32 CONFIG_GENERIC_CLOCKEVENTS=y 35 33 CONFIG_GENERIC_HARDIRQS=y 34 + CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y 36 35 CONFIG_HAVE_SETUP_PER_CPU_AREA=y 37 36 CONFIG_IRQ_PER_CPU=y 38 37 CONFIG_STACKTRACE_SUPPORT=y ··· 44 41 CONFIG_ARCH_HAS_ILOG2_U32=y 45 42 CONFIG_ARCH_HAS_ILOG2_U64=y 46 43 CONFIG_GENERIC_HWEIGHT=y 47 - CONFIG_GENERIC_CALIBRATE_DELAY=y 48 44 CONFIG_GENERIC_FIND_NEXT_BIT=y 49 45 CONFIG_ARCH_NO_VIRT_TO_BUS=y 50 46 CONFIG_PPC=y ··· 64 62 # CONFIG_PPC_DCR_MMIO is not set 65 63 CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y 66 64 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 65 + CONFIG_CONSTRUCTORS=y 67 66 68 67 # 69 68 # General setup ··· 116 113 CONFIG_KALLSYMS=y 117 114 CONFIG_KALLSYMS_ALL=y 118 115 CONFIG_KALLSYMS_EXTRA_PASS=y 119 - # CONFIG_STRIP_ASM_SYMS is not set 120 116 CONFIG_HOTPLUG=y 121 117 CONFIG_PRINTK=y 122 118 CONFIG_BUG=y ··· 128 126 CONFIG_EVENTFD=y 129 127 CONFIG_SHMEM=y 130 128 CONFIG_AIO=y 129 + CONFIG_HAVE_PERF_COUNTERS=y 130 + 131 + # 132 + # Performance Counters 133 + # 134 + # CONFIG_PERF_COUNTERS is not set 131 135 CONFIG_VM_EVENT_COUNTERS=y 136 + # CONFIG_STRIP_ASM_SYMS is not set 132 137 # CONFIG_COMPAT_BRK is not set 133 138 CONFIG_SLAB=y 134 139 # CONFIG_SLUB is not set ··· 154 145 CONFIG_HAVE_ARCH_TRACEHOOK=y 155 146 CONFIG_HAVE_DMA_ATTRS=y 156 147 CONFIG_USE_GENERIC_SMP_HELPERS=y 148 + 149 + # 150 + # GCOV-based kernel profiling 151 + # 152 + # CONFIG_GCOV_KERNEL is not set 157 153 # CONFIG_SLOW_WORK is not set 158 154 # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 159 155 CONFIG_SLABINFO=y ··· 224 210 # 225 211 # Cell Broadband Engine options 226 212 # 227 - CONFIG_SPU_FS=y 213 + CONFIG_SPU_FS=m 228 214 CONFIG_SPU_FS_64K_LS=y 229 215 # CONFIG_SPU_TRACE is not set 230 216 CONFIG_SPU_BASE=y ··· 269 255 CONFIG_HUGETLB_PAGE_SIZE_VARIABLE=y 270 256 # CONFIG_IOMMU_VMERGE is not set 271 257 CONFIG_IOMMU_HELPER=y 258 + # CONFIG_SWIOTLB is not set 272 259 CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y 273 260 CONFIG_ARCH_HAS_WALK_MEMORY=y 274 261 CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y ··· 300 285 CONFIG_PHYS_ADDR_T_64BIT=y 301 286 CONFIG_ZONE_DMA_FLAG=1 302 287 CONFIG_BOUNCE=y 303 - CONFIG_UNEVICTABLE_LRU=y 304 288 CONFIG_HAVE_MLOCK=y 305 289 CONFIG_HAVE_MLOCKED_PAGE_BIT=y 290 + CONFIG_DEFAULT_MMAP_MIN_ADDR=4096 306 291 CONFIG_ARCH_MEMORY_PROBE=y 307 292 CONFIG_PPC_HAS_HASH_64K=y 308 293 CONFIG_PPC_4K_PAGES=y ··· 414 399 # CONFIG_ECONET is not set 415 400 # CONFIG_WAN_ROUTER is not set 416 401 # CONFIG_PHONET is not set 402 + # CONFIG_IEEE802154 is not set 417 403 # CONFIG_NET_SCHED is not set 418 404 # CONFIG_DCB is not set ··· 449 433 CONFIG_WIRELESS=y 450 434 CONFIG_CFG80211=m 451 435 # CONFIG_CFG80211_REG_DEBUG is not set 436 + # CONFIG_CFG80211_DEBUGFS is not set 452 437 # CONFIG_WIRELESS_OLD_REGULATORY is not set 453 438 CONFIG_WIRELESS_EXT=y 454 439 # CONFIG_WIRELESS_EXT_SYSFS is not set 455 440 # CONFIG_LIB80211 is not set 456 441 CONFIG_MAC80211=m 442 + CONFIG_MAC80211_DEFAULT_PS=y 443 + CONFIG_MAC80211_DEFAULT_PS_VALUE=1 457 444 458 445 # 459 446 # Rate control algorithm selection ··· 466 447 CONFIG_MAC80211_RC_DEFAULT_PID=y 467 448 # CONFIG_MAC80211_RC_DEFAULT_MINSTREL is not set 468 449 CONFIG_MAC80211_RC_DEFAULT="pid" 469 - # CONFIG_MAC80211_MESH is not set 470 450 # CONFIG_MAC80211_LEDS is not set 471 451 # CONFIG_MAC80211_DEBUGFS is not set 472 452 # CONFIG_MAC80211_DEBUG_MENU is not set ··· 490 472 # CONFIG_DEBUG_DEVRES is not set 491 473 # CONFIG_SYS_HYPERVISOR is not set 492 474 # CONFIG_CONNECTOR is not set 493 - CONFIG_MTD=y 494 - CONFIG_MTD_DEBUG=y 495 - CONFIG_MTD_DEBUG_VERBOSE=0 496 - # CONFIG_MTD_CONCAT is not set 497 - # CONFIG_MTD_PARTITIONS is not set 498 - # CONFIG_MTD_TESTS is not set 499 - 500 - # 501 - # User Modules And Translation Layers 502 - # 503 - # CONFIG_MTD_CHAR is not set 504 - CONFIG_MTD_BLKDEVS=y 505 - CONFIG_MTD_BLOCK=y 506 - # CONFIG_FTL is not set 507 - # CONFIG_NFTL is not set 508 - # CONFIG_INFTL is not set 509 - # CONFIG_RFD_FTL is not set 510 - # CONFIG_SSFDC is not set 511 - # CONFIG_MTD_OOPS is not set 512 - 513 - # 514 - # RAM/ROM/Flash chip drivers 515 - # 516 - # CONFIG_MTD_CFI is not set 517 - # CONFIG_MTD_JEDECPROBE is not set 518 - CONFIG_MTD_MAP_BANK_WIDTH_1=y 519 - CONFIG_MTD_MAP_BANK_WIDTH_2=y 520 - CONFIG_MTD_MAP_BANK_WIDTH_4=y 521 - # CONFIG_MTD_MAP_BANK_WIDTH_8 is not set 522 - # CONFIG_MTD_MAP_BANK_WIDTH_16 is not set 523 - # CONFIG_MTD_MAP_BANK_WIDTH_32 is not set 524 - CONFIG_MTD_CFI_I1=y 525 - CONFIG_MTD_CFI_I2=y 526 - # CONFIG_MTD_CFI_I4 is not set 527 - # CONFIG_MTD_CFI_I8 is not set 528 - # CONFIG_MTD_RAM is not set 529 - # CONFIG_MTD_ROM is not set 530 - # CONFIG_MTD_ABSENT is not set 531 - 532 - # 533 - # Mapping drivers for chip access 534 - # 535 - # CONFIG_MTD_COMPLEX_MAPPINGS is not set 536 - # CONFIG_MTD_PLATRAM is not set 537 - 538 - # 539 - # Self-contained MTD device drivers 540 - # 541 - # CONFIG_MTD_SLRAM is not set 542 - # CONFIG_MTD_PHRAM is not set 543 - # CONFIG_MTD_MTDRAM is not set 544 - # CONFIG_MTD_BLOCK2MTD is not set 545 - 546 - # 547 - # Disk-On-Chip Device Drivers 548 - # 549 - # CONFIG_MTD_DOC2000 is not set 550 - # CONFIG_MTD_DOC2001 is not set 551 - # CONFIG_MTD_DOC2001PLUS is not set 552 - # CONFIG_MTD_NAND is not set 553 - # CONFIG_MTD_ONENAND is not set 554 - 555 - # 556 - # LPDDR flash memory drivers 557 - # 558 - # CONFIG_MTD_LPDDR is not set 559 - 560 - # 561 - # UBI - Unsorted block images 562 - # 563 - # CONFIG_MTD_UBI is not set 475 + # CONFIG_MTD is not set 564 476 CONFIG_OF_DEVICE=y 565 477 # CONFIG_PARPORT is not set 566 478 CONFIG_BLK_DEV=y ··· 538 590 # CONFIG_BLK_DEV_SR_VENDOR is not set 539 591 CONFIG_CHR_DEV_SG=m 540 592 # CONFIG_CHR_DEV_SCH is not set 541 - 542 - # 543 - # Some SCSI devices (e.g. CD jukebox) support multiple LUNs 544 - # 545 593 CONFIG_SCSI_MULTI_LUN=y 546 594 # CONFIG_SCSI_CONSTANTS is not set 547 595 # CONFIG_SCSI_LOGGING is not set ··· 570 626 # CONFIG_DM_UEVENT is not set 571 627 # CONFIG_MACINTOSH_DRIVERS is not set 572 628 CONFIG_NETDEVICES=y 573 - CONFIG_COMPAT_NET_DEV_OPS=y 574 629 # CONFIG_DUMMY is not set 575 630 # CONFIG_BONDING is not set 576 631 # CONFIG_MACVLAN is not set ··· 589 646 # CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set 590 647 # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set 591 648 # CONFIG_B44 is not set 649 + # CONFIG_KS8842 is not set 592 650 CONFIG_NETDEV_1000=y 593 651 CONFIG_GELIC_NET=y 594 652 CONFIG_GELIC_WIRELESS=y 595 - CONFIG_GELIC_WIRELESS_OLD_PSK_INTERFACE=y 653 + # CONFIG_GELIC_WIRELESS_OLD_PSK_INTERFACE is not set 596 654 # CONFIG_NETDEV_10000 is not set 597 655 598 656 # ··· 613 669 # CONFIG_HOSTAP is not set 614 670 # CONFIG_B43 is not set 615 671 # CONFIG_B43LEGACY is not set 616 - CONFIG_ZD1211RW=m 617 - # CONFIG_ZD1211RW_DEBUG is not set 672 + # CONFIG_ZD1211RW is not set 618 673 # CONFIG_RT2X00 is not set 619 674 620 675 # ··· 625 682 # 626 683 # CONFIG_USB_CATC is not set 627 684 # CONFIG_USB_KAWETH is not set 628 - CONFIG_USB_PEGASUS=m 685 + # CONFIG_USB_PEGASUS is not set 629 686 # CONFIG_USB_RTL8150 is not set 630 687 CONFIG_USB_USBNET=m 631 688 CONFIG_USB_NET_AX8817X=m ··· 636 693 # CONFIG_USB_NET_GL620A is not set 637 694 # CONFIG_USB_NET_NET1080 is not set 638 695 # CONFIG_USB_NET_PLUSB is not set 639 - CONFIG_USB_NET_MCS7830=m 696 + # CONFIG_USB_NET_MCS7830 is not set 640 697 #
CONFIG_USB_NET_RNDIS_HOST is not set 641 698 # CONFIG_USB_NET_CDC_SUBSET is not set 642 699 # CONFIG_USB_NET_ZAURUS is not set 700 + # CONFIG_USB_NET_INT51X1 is not set 643 701 # CONFIG_WAN is not set 644 702 CONFIG_PPP=m 645 703 CONFIG_PPP_MULTILINK=y ··· 715 771 # 716 772 CONFIG_UNIX98_PTYS=y 717 773 # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set 718 - CONFIG_LEGACY_PTYS=y 719 - CONFIG_LEGACY_PTY_COUNT=16 774 + # CONFIG_LEGACY_PTYS is not set 720 775 # CONFIG_HVC_UDBG is not set 721 776 # CONFIG_IPMI_HANDLER is not set 722 777 # CONFIG_HW_RANDOM is not set ··· 725 782 # CONFIG_TCG_TPM is not set 726 783 # CONFIG_I2C is not set 727 784 # CONFIG_SPI is not set 785 + 786 + # 787 + # PPS support 788 + # 789 + # CONFIG_PPS is not set 728 790 CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y 729 791 # CONFIG_GPIOLIB is not set 730 792 # CONFIG_W1 is not set ··· 753 805 # CONFIG_HTC_PASIC3 is not set 754 806 # CONFIG_MFD_TMIO is not set 755 807 # CONFIG_REGULATOR is not set 756 - 757 - # 758 - # Multimedia devices 759 - # 760 - 761 - # 762 - # Multimedia core support 763 - # 764 - # CONFIG_VIDEO_DEV is not set 765 - # CONFIG_DVB_CORE is not set 766 - # CONFIG_VIDEO_MEDIA is not set 767 - 768 - # 769 - # Multimedia drivers 770 - # 771 - # CONFIG_DAB is not set 808 + # CONFIG_MEDIA_SUPPORT is not set 772 809 773 810 # 774 811 # Graphics support ··· 831 898 CONFIG_SND_VERBOSE_PROCFS=y 832 899 # CONFIG_SND_VERBOSE_PRINTK is not set 833 900 # CONFIG_SND_DEBUG is not set 901 + # CONFIG_SND_RAWMIDI_SEQ is not set 902 + # CONFIG_SND_OPL3_LIB_SEQ is not set 903 + # CONFIG_SND_OPL4_LIB_SEQ is not set 904 + # CONFIG_SND_SBAWE_SEQ is not set 905 + # CONFIG_SND_EMU10K1_SEQ is not set 834 906 # CONFIG_SND_DRIVERS is not set 835 907 CONFIG_SND_PPC=y 836 908 CONFIG_SND_PS3=m ··· 868 930 # Special HID drivers 869 931 # 870 932 # CONFIG_HID_A4TECH is not set 871 - # CONFIG_HID_APPLE is not set 872 - # CONFIG_HID_BELKIN is not set 873 - # CONFIG_HID_CHERRY is not set 933 + CONFIG_HID_APPLE=m 934 + 
CONFIG_HID_BELKIN=m 935 + CONFIG_HID_CHERRY=m 874 936 # CONFIG_HID_CHICONY is not set 875 937 # CONFIG_HID_CYPRESS is not set 876 - # CONFIG_DRAGONRISE_FF is not set 877 - # CONFIG_HID_EZKEY is not set 938 + # CONFIG_HID_DRAGONRISE is not set 939 + CONFIG_HID_EZKEY=m 878 940 # CONFIG_HID_KYE is not set 879 941 # CONFIG_HID_GYRATION is not set 880 942 # CONFIG_HID_KENSINGTON is not set 881 - # CONFIG_HID_LOGITECH is not set 882 - # CONFIG_HID_MICROSOFT is not set 943 + CONFIG_HID_LOGITECH=m 944 + # CONFIG_LOGITECH_FF is not set 945 + # CONFIG_LOGIRUMBLEPAD2_FF is not set 946 + CONFIG_HID_MICROSOFT=m 883 947 # CONFIG_HID_MONTEREY is not set 884 948 # CONFIG_HID_NTRIG is not set 885 949 # CONFIG_HID_PANTHERLORD is not set 886 950 # CONFIG_HID_PETALYNX is not set 887 951 # CONFIG_HID_SAMSUNG is not set 888 952 CONFIG_HID_SONY=m 889 - # CONFIG_HID_SUNPLUS is not set 890 - # CONFIG_GREENASIA_FF is not set 953 + CONFIG_HID_SUNPLUS=m 954 + # CONFIG_HID_GREENASIA is not set 955 + CONFIG_HID_SMARTJOYPLUS=m 956 + # CONFIG_SMARTJOYPLUS_FF is not set 891 957 # CONFIG_HID_TOPSEED is not set 892 - # CONFIG_THRUSTMASTER_FF is not set 893 - # CONFIG_ZEROPLUS_FF is not set 958 + # CONFIG_HID_THRUSTMASTER is not set 959 + # CONFIG_HID_WACOM is not set 960 + # CONFIG_HID_ZEROPLUS is not set 894 961 CONFIG_USB_SUPPORT=y 895 962 CONFIG_USB_ARCH_HAS_HCD=y 896 963 CONFIG_USB_ARCH_HAS_OHCI=y ··· 931 988 # CONFIG_USB_ISP116X_HCD is not set 932 989 # CONFIG_USB_ISP1760_HCD is not set 933 990 CONFIG_USB_OHCI_HCD=m 991 + # CONFIG_USB_OHCI_HCD_PPC_OF_BE is not set 992 + # CONFIG_USB_OHCI_HCD_PPC_OF_LE is not set 934 993 # CONFIG_USB_OHCI_HCD_PPC_OF is not set 935 994 # CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set 936 995 CONFIG_USB_OHCI_BIG_ENDIAN_MMIO=y ··· 1060 1115 # CONFIG_DMADEVICES is not set 1061 1116 # CONFIG_AUXDISPLAY is not set 1062 1117 # CONFIG_UIO is not set 1118 + 1119 + # 1120 + # TI VLYNQ 1121 + # 1063 1122 # CONFIG_STAGING is not set 1064 1123 1065 1124 # ··· 1090 1141 # 
CONFIG_REISERFS_FS is not set 1091 1142 # CONFIG_JFS_FS is not set 1092 1143 # CONFIG_FS_POSIX_ACL is not set 1093 - CONFIG_FILE_LOCKING=y 1094 1144 # CONFIG_XFS_FS is not set 1095 1145 # CONFIG_GFS2_FS is not set 1096 1146 # CONFIG_OCFS2_FS is not set 1097 1147 # CONFIG_BTRFS_FS is not set 1148 + CONFIG_FILE_LOCKING=y 1149 + CONFIG_FSNOTIFY=y 1098 1150 CONFIG_DNOTIFY=y 1099 1151 CONFIG_INOTIFY=y 1100 1152 CONFIG_INOTIFY_USER=y ··· 1155 1205 # CONFIG_BEFS_FS is not set 1156 1206 # CONFIG_BFS_FS is not set 1157 1207 # CONFIG_EFS_FS is not set 1158 - # CONFIG_JFFS2_FS is not set 1159 1208 # CONFIG_CRAMFS is not set 1160 1209 # CONFIG_SQUASHFS is not set 1161 1210 # CONFIG_VXFS_FS is not set ··· 1171 1222 CONFIG_NFS_V3=y 1172 1223 # CONFIG_NFS_V3_ACL is not set 1173 1224 CONFIG_NFS_V4=y 1225 + # CONFIG_NFS_V4_1 is not set 1174 1226 CONFIG_ROOT_NFS=y 1175 1227 # CONFIG_NFSD is not set 1176 1228 CONFIG_LOCKD=y ··· 1309 1359 CONFIG_DEBUG_LIST=y 1310 1360 # CONFIG_DEBUG_SG is not set 1311 1361 # CONFIG_DEBUG_NOTIFIERS is not set 1312 - # CONFIG_BOOT_PRINTK_DELAY is not set 1313 1362 # CONFIG_RCU_TORTURE_TEST is not set 1314 1363 # CONFIG_RCU_CPU_STALL_DETECTOR is not set 1315 1364 # CONFIG_BACKTRACE_SELF_TEST is not set ··· 1323 1374 CONFIG_HAVE_DYNAMIC_FTRACE=y 1324 1375 CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y 1325 1376 CONFIG_RING_BUFFER=y 1377 + CONFIG_EVENT_TRACING=y 1378 + CONFIG_CONTEXT_SWITCH_TRACER=y 1326 1379 CONFIG_TRACING=y 1327 1380 CONFIG_TRACING_SUPPORT=y 1328 - 1329 - # 1330 - # Tracers 1331 - # 1332 - # CONFIG_FUNCTION_TRACER is not set 1333 - # CONFIG_IRQSOFF_TRACER is not set 1334 - # CONFIG_SCHED_TRACER is not set 1335 - # CONFIG_CONTEXT_SWITCH_TRACER is not set 1336 - # CONFIG_EVENT_TRACER is not set 1337 - # CONFIG_BOOT_TRACER is not set 1338 - # CONFIG_TRACE_BRANCH_PROFILING is not set 1339 - # CONFIG_STACK_TRACER is not set 1340 - # CONFIG_KMEMTRACE is not set 1341 - # CONFIG_WORKQUEUE_TRACER is not set 1342 - # CONFIG_BLK_DEV_IO_TRACE is not set 1343 - 
# CONFIG_FTRACE_STARTUP_TEST is not set 1381 + # CONFIG_FTRACE is not set 1344 1382 # CONFIG_DYNAMIC_DEBUG is not set 1345 1383 # CONFIG_SAMPLES is not set 1346 1384 CONFIG_HAVE_ARCH_KGDB=y 1347 1385 # CONFIG_KGDB is not set 1386 + # CONFIG_PPC_DISABLE_WERROR is not set 1387 + CONFIG_PPC_WERROR=y 1348 1388 CONFIG_PRINT_STACK_DEPTH=64 1349 1389 CONFIG_DEBUG_STACKOVERFLOW=y 1350 1390 # CONFIG_DEBUG_STACK_USAGE is not set 1391 + # CONFIG_PPC_EMULATED_STATS is not set 1351 1392 # CONFIG_CODE_PATCHING_SELFTEST is not set 1352 1393 # CONFIG_FTR_FIXUP_SELFTEST is not set 1353 1394 # CONFIG_MSI_BITMAP_SELFTEST is not set
+3 -3
arch/powerpc/kernel/power7-pmu.c
··· 317 317 */ 318 318 static int power7_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = { 319 319 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */ 320 - [C(OP_READ)] = { 0x400f0, 0xc880 }, 320 + [C(OP_READ)] = { 0xc880, 0x400f0 }, 321 321 [C(OP_WRITE)] = { 0, 0x300f0 }, 322 322 [C(OP_PREFETCH)] = { 0xd8b8, 0 }, 323 323 }, ··· 327 327 [C(OP_PREFETCH)] = { 0x408a, 0 }, 328 328 }, 329 329 [C(LL)] = { /* RESULT_ACCESS RESULT_MISS */ 330 - [C(OP_READ)] = { 0x6080, 0x6084 }, 331 - [C(OP_WRITE)] = { 0x6082, 0x6086 }, 330 + [C(OP_READ)] = { 0x16080, 0x26080 }, 331 + [C(OP_WRITE)] = { 0x16082, 0x26082 }, 332 332 [C(OP_PREFETCH)] = { 0, 0 }, 333 333 }, 334 334 [C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
+4
arch/powerpc/platforms/ps3/time.c
··· 21 21 #include <linux/kernel.h> 22 22 #include <linux/platform_device.h> 23 23 24 + #include <asm/firmware.h> 24 25 #include <asm/rtc.h> 25 26 #include <asm/lv1call.h> 26 27 #include <asm/ps3.h> ··· 84 83 static int __init ps3_rtc_init(void) 85 84 { 86 85 struct platform_device *pdev; 86 + 87 + if (!firmware_has_feature(FW_FEATURE_PS3_LV1)) 88 + return -ENODEV; 87 89 88 90 pdev = platform_device_register_simple("rtc-ps3", -1, NULL, 0); 89 91 if (IS_ERR(pdev))
-1
arch/powerpc/sysdev/xilinx_intc.c
··· 234 234 generic_handle_irq(cascade_irq); 235 235 236 236 /* Let xilinx_intc end the interrupt */ 237 - desc->chip->ack(irq); 238 237 desc->chip->unmask(irq); 239 238 } 240 239
+27 -9
arch/s390/kernel/ftrace.c
··· 220 220 return syscalls_metadata[nr]; 221 221 } 222 222 223 + int syscall_name_to_nr(char *name) 224 + { 225 + int i; 226 + 227 + if (!syscalls_metadata) 228 + return -1; 229 + for (i = 0; i < NR_syscalls; i++) 230 + if (syscalls_metadata[i]) 231 + if (!strcmp(syscalls_metadata[i]->name, name)) 232 + return i; 233 + return -1; 234 + } 235 + 236 + void set_syscall_enter_id(int num, int id) 237 + { 238 + syscalls_metadata[num]->enter_id = id; 239 + } 240 + 241 + void set_syscall_exit_id(int num, int id) 242 + { 243 + syscalls_metadata[num]->exit_id = id; 244 + } 245 + 223 246 static struct syscall_metadata *find_syscall_meta(unsigned long syscall) 224 247 { 225 248 struct syscall_metadata *start; ··· 260 237 return NULL; 261 238 } 262 239 263 - void arch_init_ftrace_syscalls(void) 240 + static int __init arch_init_ftrace_syscalls(void) 264 241 { 265 242 struct syscall_metadata *meta; 266 243 int i; 267 - static atomic_t refs; 268 - 269 - if (atomic_inc_return(&refs) != 1) 270 - goto out; 271 244 syscalls_metadata = kzalloc(sizeof(*syscalls_metadata) * NR_syscalls, 272 245 GFP_KERNEL); 273 246 if (!syscalls_metadata) 274 - goto out; 247 + return -ENOMEM; 275 248 for (i = 0; i < NR_syscalls; i++) { 276 249 meta = find_syscall_meta((unsigned long)sys_call_table[i]); 277 250 syscalls_metadata[i] = meta; 278 251 } 279 - return; 280 - out: 281 - atomic_dec(&refs); 252 + return 0; 282 253 } 254 + arch_initcall(arch_init_ftrace_syscalls); 283 255 #endif
+18 -7
arch/s390/kernel/setup.c
··· 154 154 155 155 __setup("condev=", condev_setup); 156 156 157 + static void __init set_preferred_console(void) 158 + { 159 + if (MACHINE_IS_KVM) { 160 + add_preferred_console("hvc", 0, NULL); 161 + s390_virtio_console_init(); 162 + return; 163 + } 164 + 165 + if (CONSOLE_IS_3215 || CONSOLE_IS_SCLP) 166 + add_preferred_console("ttyS", 0, NULL); 167 + if (CONSOLE_IS_3270) 168 + add_preferred_console("tty3270", 0, NULL); 169 + } 170 + 157 171 static int __init conmode_setup(char *str) 158 172 { 159 173 #if defined(CONFIG_SCLP_CONSOLE) || defined(CONFIG_SCLP_VT220_CONSOLE) ··· 182 168 if (strncmp(str, "3270", 5) == 0) 183 169 SET_CONSOLE_3270; 184 170 #endif 171 + set_preferred_console(); 185 172 return 1; 186 173 } 187 174 ··· 795 780 void __init 796 781 setup_arch(char **cmdline_p) 797 782 { 798 - /* set up preferred console */ 799 - add_preferred_console("ttyS", 0, NULL); 800 - 801 783 /* 802 784 * print what head.S has found out about the machine 803 785 */ ··· 814 802 if (MACHINE_IS_VM) 815 803 pr_info("Linux is running as a z/VM " 816 804 "guest operating system in 64-bit mode\n"); 817 - else if (MACHINE_IS_KVM) { 805 + else if (MACHINE_IS_KVM) 818 806 pr_info("Linux is running under KVM in 64-bit mode\n"); 819 - add_preferred_console("hvc", 0, NULL); 820 - s390_virtio_console_init(); 821 - } else 807 + else 822 808 pr_info("Linux is running natively in 64-bit mode\n"); 823 809 #endif /* CONFIG_64BIT */ 824 810 ··· 861 851 862 852 /* Setup default console */ 863 853 conmode_default(); 854 + set_preferred_console(); 864 855 865 856 /* Setup zfcpdump support */ 866 857 setup_zfcpdump(console_devno);
+1 -1
arch/sh/boards/board-ap325rxa.c
··· 547 547 return platform_add_devices(ap325rxa_devices, 548 548 ARRAY_SIZE(ap325rxa_devices)); 549 549 } 550 - device_initcall(ap325rxa_devices_setup); 550 + arch_initcall(ap325rxa_devices_setup); 551 551 552 552 /* Return the board specific boot mode pin configuration */ 553 553 static int ap325rxa_mode_pins(void)
+1 -1
arch/sh/boards/mach-migor/setup.c
··· 608 608 609 609 return platform_add_devices(migor_devices, ARRAY_SIZE(migor_devices)); 610 610 } 611 - __initcall(migor_devices_setup); 611 + arch_initcall(migor_devices_setup); 612 612 613 613 /* Return the board specific boot mode pin configuration */ 614 614 static int migor_mode_pins(void)
+5 -4
arch/sh/boards/mach-se/7724/setup.c
··· 238 238 }, 239 239 }; 240 240 241 - /* KEYSC */ 241 + /* KEYSC in SoC (Needs SW33-2 set to ON) */ 242 242 static struct sh_keysc_info keysc_info = { 243 243 .mode = SH_KEYSC_MODE_1, 244 244 .scan_timing = 10, ··· 255 255 256 256 static struct resource keysc_resources[] = { 257 257 [0] = { 258 - .start = 0x1a204000, 259 - .end = 0x1a20400f, 258 + .name = "KEYSC", 259 + .start = 0x044b0000, 260 + .end = 0x044b000f, 260 261 .flags = IORESOURCE_MEM, 261 262 }, 262 263 [1] = { 263 - .start = IRQ0_KEY, 264 + .start = 79, 264 265 .flags = IORESOURCE_IRQ, 265 266 }, 266 267 };
+1 -1
arch/sh/kernel/cpu/sh2/setup-sh7619.c
··· 187 187 return platform_add_devices(sh7619_devices, 188 188 ARRAY_SIZE(sh7619_devices)); 189 189 } 190 - __initcall(sh7619_devices_setup); 190 + arch_initcall(sh7619_devices_setup); 191 191 192 192 void __init plat_irq_setup(void) 193 193 {
+1 -1
arch/sh/kernel/cpu/sh2a/setup-mxg.c
··· 238 238 return platform_add_devices(mxg_devices, 239 239 ARRAY_SIZE(mxg_devices)); 240 240 } 241 - __initcall(mxg_devices_setup); 241 + arch_initcall(mxg_devices_setup); 242 242 243 243 void __init plat_irq_setup(void) 244 244 {
+1 -1
arch/sh/kernel/cpu/sh2a/setup-sh7201.c
··· 357 357 return platform_add_devices(sh7201_devices, 358 358 ARRAY_SIZE(sh7201_devices)); 359 359 } 360 - __initcall(sh7201_devices_setup); 360 + arch_initcall(sh7201_devices_setup); 361 361 362 362 void __init plat_irq_setup(void) 363 363 {
+1 -1
arch/sh/kernel/cpu/sh2a/setup-sh7203.c
··· 367 367 return platform_add_devices(sh7203_devices, 368 368 ARRAY_SIZE(sh7203_devices)); 369 369 } 370 - __initcall(sh7203_devices_setup); 370 + arch_initcall(sh7203_devices_setup); 371 371 372 372 void __init plat_irq_setup(void) 373 373 {
+1 -1
arch/sh/kernel/cpu/sh2a/setup-sh7206.c
··· 338 338 return platform_add_devices(sh7206_devices, 339 339 ARRAY_SIZE(sh7206_devices)); 340 340 } 341 - __initcall(sh7206_devices_setup); 341 + arch_initcall(sh7206_devices_setup); 342 342 343 343 void __init plat_irq_setup(void) 344 344 {
+1 -1
arch/sh/kernel/cpu/sh3/setup-sh7705.c
··· 222 222 return platform_add_devices(sh7705_devices, 223 223 ARRAY_SIZE(sh7705_devices)); 224 224 } 225 - __initcall(sh7705_devices_setup); 225 + arch_initcall(sh7705_devices_setup); 226 226 227 227 static struct platform_device *sh7705_early_devices[] __initdata = { 228 228 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh3/setup-sh770x.c
··· 250 250 return platform_add_devices(sh770x_devices, 251 251 ARRAY_SIZE(sh770x_devices)); 252 252 } 253 - __initcall(sh770x_devices_setup); 253 + arch_initcall(sh770x_devices_setup); 254 254 255 255 static struct platform_device *sh770x_early_devices[] __initdata = { 256 256 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh3/setup-sh7710.c
··· 226 226 return platform_add_devices(sh7710_devices, 227 227 ARRAY_SIZE(sh7710_devices)); 228 228 } 229 - __initcall(sh7710_devices_setup); 229 + arch_initcall(sh7710_devices_setup); 230 230 231 231 static struct platform_device *sh7710_early_devices[] __initdata = { 232 232 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh3/setup-sh7720.c
··· 388 388 return platform_add_devices(sh7720_devices, 389 389 ARRAY_SIZE(sh7720_devices)); 390 390 } 391 - __initcall(sh7720_devices_setup); 391 + arch_initcall(sh7720_devices_setup); 392 392 393 393 static struct platform_device *sh7720_early_devices[] __initdata = { 394 394 &cmt0_device,
+1 -1
arch/sh/kernel/cpu/sh4/setup-sh4-202.c
··· 138 138 return platform_add_devices(sh4202_devices, 139 139 ARRAY_SIZE(sh4202_devices)); 140 140 } 141 - __initcall(sh4202_devices_setup); 141 + arch_initcall(sh4202_devices_setup); 142 142 143 143 static struct platform_device *sh4202_early_devices[] __initdata = { 144 144 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4/setup-sh7750.c
··· 239 239 return platform_add_devices(sh7750_devices, 240 240 ARRAY_SIZE(sh7750_devices)); 241 241 } 242 - __initcall(sh7750_devices_setup); 242 + arch_initcall(sh7750_devices_setup); 243 243 244 244 static struct platform_device *sh7750_early_devices[] __initdata = { 245 245 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4/setup-sh7760.c
··· 265 265 return platform_add_devices(sh7760_devices, 266 266 ARRAY_SIZE(sh7760_devices)); 267 267 } 268 - __initcall(sh7760_devices_setup); 268 + arch_initcall(sh7760_devices_setup); 269 269 270 270 static struct platform_device *sh7760_early_devices[] __initdata = { 271 271 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7343.c
··· 325 325 return platform_add_devices(sh7343_devices, 326 326 ARRAY_SIZE(sh7343_devices)); 327 327 } 328 - __initcall(sh7343_devices_setup); 328 + arch_initcall(sh7343_devices_setup); 329 329 330 330 static struct platform_device *sh7343_early_devices[] __initdata = { 331 331 &cmt_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7366.c
··· 318 318 return platform_add_devices(sh7366_devices, 319 319 ARRAY_SIZE(sh7366_devices)); 320 320 } 321 - __initcall(sh7366_devices_setup); 321 + arch_initcall(sh7366_devices_setup); 322 322 323 323 static struct platform_device *sh7366_early_devices[] __initdata = { 324 324 &cmt_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7722.c
··· 359 359 return platform_add_devices(sh7722_devices, 360 360 ARRAY_SIZE(sh7722_devices)); 361 361 } 362 - __initcall(sh7722_devices_setup); 362 + arch_initcall(sh7722_devices_setup); 363 363 364 364 static struct platform_device *sh7722_early_devices[] __initdata = { 365 365 &cmt_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7723.c
··· 473 473 return platform_add_devices(sh7723_devices, 474 474 ARRAY_SIZE(sh7723_devices)); 475 475 } 476 - __initcall(sh7723_devices_setup); 476 + arch_initcall(sh7723_devices_setup); 477 477 478 478 static struct platform_device *sh7723_early_devices[] __initdata = { 479 479 &cmt_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7724.c
··· 508 508 return platform_add_devices(sh7724_devices, 509 509 ARRAY_SIZE(sh7724_devices)); 510 510 } 511 - device_initcall(sh7724_devices_setup); 511 + arch_initcall(sh7724_devices_setup); 512 512 513 513 static struct platform_device *sh7724_early_devices[] __initdata = { 514 514 &cmt_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7763.c
··· 314 314 return platform_add_devices(sh7763_devices, 315 315 ARRAY_SIZE(sh7763_devices)); 316 316 } 317 - __initcall(sh7763_devices_setup); 317 + arch_initcall(sh7763_devices_setup); 318 318 319 319 static struct platform_device *sh7763_early_devices[] __initdata = { 320 320 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7770.c
··· 368 368 return platform_add_devices(sh7770_devices, 369 369 ARRAY_SIZE(sh7770_devices)); 370 370 } 371 - __initcall(sh7770_devices_setup); 371 + arch_initcall(sh7770_devices_setup); 372 372 373 373 static struct platform_device *sh7770_early_devices[] __initdata = { 374 374 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7780.c
··· 256 256 return platform_add_devices(sh7780_devices, 257 257 ARRAY_SIZE(sh7780_devices)); 258 258 } 259 - __initcall(sh7780_devices_setup); 259 + arch_initcall(sh7780_devices_setup); 260 260 261 261 static struct platform_device *sh7780_early_devices[] __initdata = { 262 262 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7785.c
··· 263 263 return platform_add_devices(sh7785_devices, 264 264 ARRAY_SIZE(sh7785_devices)); 265 265 } 266 - __initcall(sh7785_devices_setup); 266 + arch_initcall(sh7785_devices_setup); 267 267 268 268 static struct platform_device *sh7785_early_devices[] __initdata = { 269 269 &tmu0_device,
+1 -1
arch/sh/kernel/cpu/sh4a/setup-sh7786.c
··· 547 547 return platform_add_devices(sh7786_devices, 548 548 ARRAY_SIZE(sh7786_devices)); 549 549 } 550 - device_initcall(sh7786_devices_setup); 550 + arch_initcall(sh7786_devices_setup); 551 551 552 552 void __init plat_early_device_setup(void) 553 553 {
+1 -1
arch/sh/kernel/cpu/sh4a/setup-shx3.c
··· 256 256 return platform_add_devices(shx3_devices, 257 257 ARRAY_SIZE(shx3_devices)); 258 258 } 259 - __initcall(shx3_devices_setup); 259 + arch_initcall(shx3_devices_setup); 260 260 261 261 void __init plat_early_device_setup(void) 262 262 {
+1 -1
arch/sh/kernel/cpu/sh5/setup-sh5.c
··· 186 186 return platform_add_devices(sh5_devices, 187 187 ARRAY_SIZE(sh5_devices)); 188 188 } 189 - __initcall(sh5_devices_setup); 189 + arch_initcall(sh5_devices_setup); 190 190 191 191 void __init plat_early_device_setup(void) 192 192 {
+68 -2
arch/sh/kernel/cpu/shmobile/sleep.S
··· 26 26 27 27 tst #SUSP_SH_SF, r0 28 28 bt skip_set_sf 29 + #ifdef CONFIG_CPU_SUBTYPE_SH7724 30 + /* DBSC: put memory in self-refresh mode */ 29 31 30 - /* SDRAM: disable power down and put in self-refresh mode */ 32 + mov.l dben_reg, r4 33 + mov.l dben_data0, r1 34 + mov.l r1, @r4 35 + 36 + mov.l dbrfpdn0_reg, r4 37 + mov.l dbrfpdn0_data0, r1 38 + mov.l r1, @r4 39 + 40 + mov.l dbcmdcnt_reg, r4 41 + mov.l dbcmdcnt_data0, r1 42 + mov.l r1, @r4 43 + 44 + mov.l dbcmdcnt_reg, r4 45 + mov.l dbcmdcnt_data1, r1 46 + mov.l r1, @r4 47 + 48 + mov.l dbrfpdn0_reg, r4 49 + mov.l dbrfpdn0_data1, r1 50 + mov.l r1, @r4 51 + #else 52 + /* SBSC: disable power down and put in self-refresh mode */ 31 53 mov.l 1f, r4 32 54 mov.l 2f, r1 33 55 mov.l @r4, r2 ··· 57 35 mov.l 3f, r3 58 36 and r3, r2 59 37 mov.l r2, @r4 38 + #endif 60 39 61 40 skip_set_sf: 62 41 tst #SUSP_SH_SLEEP, r0 ··· 107 84 tst #SUSP_SH_SF, r0 108 85 bt skip_restore_sf 109 86 110 - /* SDRAM: set auto-refresh mode */ 87 + #ifdef CONFIG_CPU_SUBTYPE_SH7724 88 + /* DBSC: put memory in auto-refresh mode */ 89 + 90 + mov.l dbrfpdn0_reg, r4 91 + mov.l dbrfpdn0_data0, r1 92 + mov.l r1, @r4 93 + 94 + /* sleep 140 ns */ 95 + nop 96 + nop 97 + nop 98 + nop 99 + 100 + mov.l dbcmdcnt_reg, r4 101 + mov.l dbcmdcnt_data0, r1 102 + mov.l r1, @r4 103 + 104 + mov.l dbcmdcnt_reg, r4 105 + mov.l dbcmdcnt_data1, r1 106 + mov.l r1, @r4 107 + 108 + mov.l dben_reg, r4 109 + mov.l dben_data1, r1 110 + mov.l r1, @r4 111 + 112 + mov.l dbrfpdn0_reg, r4 113 + mov.l dbrfpdn0_data2, r1 114 + mov.l r1, @r4 115 + #else 116 + /* SBSC: set auto-refresh mode */ 111 117 mov.l 1f, r4 112 118 mov.l @r4, r2 113 119 mov.l 4f, r3 ··· 150 98 add r4, r3 151 99 or r2, r3 152 100 mov.l r3, @r1 101 + #endif 153 102 skip_restore_sf: 154 103 rts 155 104 nop 156 105 157 106 .balign 4 107 + #ifdef CONFIG_CPU_SUBTYPE_SH7724 108 + dben_reg: .long 0xfd000010 /* DBEN */ 109 + dben_data0: .long 0 110 + dben_data1: .long 1 111 + dbrfpdn0_reg: .long 0xfd000040 /* DBRFPDN0 */ 
112 + dbrfpdn0_data0: .long 0 113 + dbrfpdn0_data1: .long 1 114 + dbrfpdn0_data2: .long 0x00010000 115 + dbcmdcnt_reg: .long 0xfd000014 /* DBCMDCNT */ 116 + dbcmdcnt_data0: .long 2 117 + dbcmdcnt_data1: .long 4 118 + #else 158 119 1: .long 0xfe400008 /* SDCR0 */ 159 120 2: .long 0x00000400 160 121 3: .long 0xffff7fff 161 122 4: .long 0xfffffbff 123 + #endif 162 124 5: .long 0xa4150020 /* STBCR */ 163 125 6: .long 0xfe40001c /* RTCOR */ 164 126 7: .long 0xfe400018 /* RTCNT */
+44 -30
arch/sparc/configs/sparc32_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.30-rc2 4 - # Fri Apr 17 04:04:46 2009 3 + # Linux kernel version: 2.6.31-rc1 4 + # Tue Aug 18 23:45:52 2009 5 5 # 6 6 # CONFIG_64BIT is not set 7 7 CONFIG_SPARC=y ··· 17 17 CONFIG_ARCH_NO_VIRT_TO_BUS=y 18 18 CONFIG_OF=y 19 19 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 20 + CONFIG_CONSTRUCTORS=y 20 21 21 22 # 22 23 # General setup ··· 75 74 CONFIG_KALLSYMS=y 76 75 # CONFIG_KALLSYMS_ALL is not set 77 76 # CONFIG_KALLSYMS_EXTRA_PASS is not set 78 - # CONFIG_STRIP_ASM_SYMS is not set 79 77 CONFIG_HOTPLUG=y 80 78 CONFIG_PRINTK=y 81 79 CONFIG_BUG=y ··· 87 87 CONFIG_EVENTFD=y 88 88 CONFIG_SHMEM=y 89 89 CONFIG_AIO=y 90 + 91 + # 92 + # Performance Counters 93 + # 90 94 CONFIG_VM_EVENT_COUNTERS=y 91 95 CONFIG_PCI_QUIRKS=y 96 + # CONFIG_STRIP_ASM_SYMS is not set 92 97 CONFIG_COMPAT_BRK=y 93 98 CONFIG_SLAB=y 94 99 # CONFIG_SLUB is not set ··· 102 97 # CONFIG_MARKERS is not set 103 98 CONFIG_HAVE_OPROFILE=y 104 99 CONFIG_HAVE_ARCH_TRACEHOOK=y 100 + 101 + # 102 + # GCOV-based kernel profiling 103 + # 105 104 # CONFIG_SLOW_WORK is not set 106 105 # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 107 106 CONFIG_SLABINFO=y ··· 118 109 # CONFIG_MODVERSIONS is not set 119 110 # CONFIG_MODULE_SRCVERSION_ALL is not set 120 111 CONFIG_BLOCK=y 121 - # CONFIG_LBD is not set 112 + CONFIG_LBDAF=y 122 113 # CONFIG_BLK_DEV_BSG is not set 123 114 # CONFIG_BLK_DEV_INTEGRITY is not set 124 115 ··· 163 154 # CONFIG_PHYS_ADDR_T_64BIT is not set 164 155 CONFIG_ZONE_DMA_FLAG=1 165 156 CONFIG_BOUNCE=y 166 - CONFIG_UNEVICTABLE_LRU=y 167 157 CONFIG_HAVE_MLOCK=y 168 158 CONFIG_HAVE_MLOCKED_PAGE_BIT=y 159 + CONFIG_DEFAULT_MMAP_MIN_ADDR=4096 169 160 CONFIG_SUN_PM=y 170 161 # CONFIG_SPARC_LED is not set 171 162 CONFIG_SERIAL_CONSOLE=y ··· 273 264 # CONFIG_ECONET is not set 274 265 # CONFIG_WAN_ROUTER is not set 275 266 # CONFIG_PHONET is not set 267 + # CONFIG_IEEE802154 is not set 276 268 # 
CONFIG_NET_SCHED is not set 277 269 # CONFIG_DCB is not set 278 270 ··· 291 281 CONFIG_WIRELESS_OLD_REGULATORY=y 292 282 # CONFIG_WIRELESS_EXT is not set 293 283 # CONFIG_LIB80211 is not set 294 - # CONFIG_MAC80211 is not set 284 + 285 + # 286 + # CFG80211 needs to be enabled for MAC80211 287 + # 288 + CONFIG_MAC80211_DEFAULT_PS_VALUE=0 295 289 # CONFIG_WIMAX is not set 296 290 # CONFIG_RFKILL is not set 297 291 # CONFIG_NET_9P is not set ··· 349 335 # EEPROM support 350 336 # 351 337 # CONFIG_EEPROM_93CX6 is not set 338 + # CONFIG_CB710_CORE is not set 352 339 CONFIG_HAVE_IDE=y 353 340 # CONFIG_IDE is not set 354 341 ··· 373 358 # CONFIG_BLK_DEV_SR_VENDOR is not set 374 359 CONFIG_CHR_DEV_SG=m 375 360 # CONFIG_CHR_DEV_SCH is not set 376 - 377 - # 378 - # Some SCSI devices (e.g. CD jukebox) support multiple LUNs 379 - # 380 361 # CONFIG_SCSI_MULTI_LUN is not set 381 362 # CONFIG_SCSI_CONSTANTS is not set 382 363 # CONFIG_SCSI_LOGGING is not set ··· 390 379 CONFIG_SCSI_LOWLEVEL=y 391 380 # CONFIG_ISCSI_TCP is not set 392 381 # CONFIG_SCSI_CXGB3_ISCSI is not set 382 + # CONFIG_SCSI_BNX2_ISCSI is not set 393 383 # CONFIG_BLK_DEV_3W_XXXX_RAID is not set 394 384 # CONFIG_SCSI_3W_9XXX is not set 395 385 # CONFIG_SCSI_ACARD is not set ··· 399 387 # CONFIG_SCSI_AIC7XXX_OLD is not set 400 388 # CONFIG_SCSI_AIC79XX is not set 401 389 # CONFIG_SCSI_AIC94XX is not set 390 + # CONFIG_SCSI_MVSAS is not set 402 391 # CONFIG_SCSI_ARCMSR is not set 403 392 # CONFIG_MEGARAID_NEWGEN is not set 404 393 # CONFIG_MEGARAID_LEGACY is not set ··· 414 401 # CONFIG_SCSI_IPS is not set 415 402 # CONFIG_SCSI_INITIO is not set 416 403 # CONFIG_SCSI_INIA100 is not set 417 - # CONFIG_SCSI_MVSAS is not set 418 404 # CONFIG_SCSI_STEX is not set 419 405 # CONFIG_SCSI_SYM53C8XX_2 is not set 420 406 # CONFIG_SCSI_QLOGIC_1280 is not set ··· 438 426 # 439 427 440 428 # 441 - # Enable only one of the two stacks, unless you know what you are doing 429 + # You can enable one or both FireWire driver stacks. 
430 + # 431 + 432 + # 433 + # See the help texts for more information. 442 434 # 443 435 # CONFIG_FIREWIRE is not set 444 436 # CONFIG_IEEE1394 is not set 445 437 # CONFIG_I2O is not set 446 438 CONFIG_NETDEVICES=y 447 - CONFIG_COMPAT_NET_DEV_OPS=y 448 439 CONFIG_DUMMY=m 449 440 # CONFIG_BONDING is not set 450 441 # CONFIG_MACVLAN is not set ··· 478 463 # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set 479 464 # CONFIG_NET_PCI is not set 480 465 # CONFIG_B44 is not set 466 + # CONFIG_KS8842 is not set 481 467 # CONFIG_ATL2 is not set 482 468 CONFIG_NETDEV_1000=y 483 469 # CONFIG_ACENIC is not set ··· 498 482 # CONFIG_VIA_VELOCITY is not set 499 483 # CONFIG_TIGON3 is not set 500 484 # CONFIG_BNX2 is not set 485 + # CONFIG_CNIC is not set 501 486 # CONFIG_QLA3XXX is not set 502 487 # CONFIG_ATL1 is not set 503 488 # CONFIG_ATL1E is not set ··· 646 629 CONFIG_DEVPORT=y 647 630 # CONFIG_I2C is not set 648 631 # CONFIG_SPI is not set 632 + 633 + # 634 + # PPS support 635 + # 636 + # CONFIG_PPS is not set 649 637 CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y 650 638 # CONFIG_GPIOLIB is not set 651 639 # CONFIG_W1 is not set ··· 690 668 # CONFIG_HTC_PASIC3 is not set 691 669 # CONFIG_MFD_TMIO is not set 692 670 # CONFIG_REGULATOR is not set 693 - 694 - # 695 - # Multimedia devices 696 - # 697 - 698 - # 699 - # Multimedia core support 700 - # 701 - # CONFIG_VIDEO_DEV is not set 702 - # CONFIG_DVB_CORE is not set 703 - # CONFIG_VIDEO_MEDIA is not set 704 - 705 - # 706 - # Multimedia drivers 707 - # 708 - # CONFIG_DAB is not set 671 + # CONFIG_MEDIA_SUPPORT is not set 709 672 710 673 # 711 674 # Graphics support ··· 783 776 # CONFIG_DMADEVICES is not set 784 777 # CONFIG_AUXDISPLAY is not set 785 778 # CONFIG_UIO is not set 779 + 780 + # 781 + # TI VLYNQ 782 + # 786 783 # CONFIG_STAGING is not set 787 784 788 785 # ··· 810 799 # CONFIG_REISERFS_FS is not set 811 800 # CONFIG_JFS_FS is not set 812 801 CONFIG_FS_POSIX_ACL=y 813 - CONFIG_FILE_LOCKING=y 814 802 # CONFIG_XFS_FS is not set 
803 + # CONFIG_GFS2_FS is not set 815 804 # CONFIG_OCFS2_FS is not set 816 805 # CONFIG_BTRFS_FS is not set 806 + CONFIG_FILE_LOCKING=y 807 + CONFIG_FSNOTIFY=y 817 808 CONFIG_DNOTIFY=y 818 809 CONFIG_INOTIFY=y 819 810 CONFIG_INOTIFY_USER=y ··· 998 985 CONFIG_KGDB_SERIAL_CONSOLE=y 999 986 CONFIG_KGDB_TESTS=y 1000 987 # CONFIG_KGDB_TESTS_ON_BOOT is not set 988 + # CONFIG_KMEMCHECK is not set 1001 989 # CONFIG_DEBUG_STACK_USAGE is not set 1002 990 # CONFIG_STACK_DEBUG is not set 1003 991
+34 -25
arch/sparc/configs/sparc64_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.30 4 - # Tue Jun 16 04:59:36 2009 3 + # Linux kernel version: 2.6.31-rc1 4 + # Tue Aug 18 23:56:02 2009 5 5 # 6 6 CONFIG_64BIT=y 7 7 CONFIG_SPARC=y ··· 26 26 CONFIG_OF=y 27 27 CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y 28 28 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 29 + CONFIG_CONSTRUCTORS=y 29 30 30 31 # 31 32 # General setup ··· 120 119 CONFIG_HAVE_KRETPROBES=y 121 120 CONFIG_HAVE_ARCH_TRACEHOOK=y 122 121 CONFIG_USE_GENERIC_SMP_HELPERS=y 122 + 123 + # 124 + # GCOV-based kernel profiling 125 + # 126 + # CONFIG_GCOV_KERNEL is not set 123 127 # CONFIG_SLOW_WORK is not set 124 128 # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 125 129 CONFIG_SLABINFO=y ··· 210 204 CONFIG_PHYS_ADDR_T_64BIT=y 211 205 CONFIG_ZONE_DMA_FLAG=0 212 206 CONFIG_NR_QUICK=1 213 - CONFIG_UNEVICTABLE_LRU=y 214 207 CONFIG_HAVE_MLOCK=y 215 208 CONFIG_HAVE_MLOCKED_PAGE_BIT=y 216 209 CONFIG_DEFAULT_MMAP_MIN_ADDR=8192 ··· 415 410 # 416 411 # CONFIG_EEPROM_AT24 is not set 417 412 # CONFIG_EEPROM_LEGACY is not set 413 + # CONFIG_EEPROM_MAX6875 is not set 418 414 # CONFIG_EEPROM_93CX6 is not set 419 415 # CONFIG_CB710_CORE is not set 420 416 CONFIG_HAVE_IDE=y ··· 568 562 CONFIG_DM_CRYPT=m 569 563 CONFIG_DM_SNAPSHOT=m 570 564 CONFIG_DM_MIRROR=m 565 + # CONFIG_DM_LOG_USERSPACE is not set 571 566 CONFIG_DM_ZERO=m 572 567 # CONFIG_DM_MULTIPATH is not set 573 568 # CONFIG_DM_DELAY is not set ··· 580 573 # 581 574 582 575 # 583 - # Enable only one of the two stacks, unless you know what you are doing 576 + # You can enable one or both FireWire driver stacks. 577 + # 578 + 579 + # 580 + # See the help texts for more information. 
584 581 # 585 582 # CONFIG_FIREWIRE is not set 586 583 # CONFIG_IEEE1394 is not set ··· 678 667 # CONFIG_VIA_VELOCITY is not set 679 668 CONFIG_TIGON3=m 680 669 CONFIG_BNX2=m 670 + # CONFIG_CNIC is not set 681 671 # CONFIG_QLA3XXX is not set 682 672 # CONFIG_ATL1 is not set 683 673 # CONFIG_ATL1E is not set ··· 785 773 # CONFIG_MOUSE_APPLETOUCH is not set 786 774 # CONFIG_MOUSE_BCM5974 is not set 787 775 # CONFIG_MOUSE_VSXXXAA is not set 776 + # CONFIG_MOUSE_SYNAPTICS_I2C is not set 788 777 # CONFIG_INPUT_JOYSTICK is not set 789 778 # CONFIG_INPUT_TABLET is not set 790 779 # CONFIG_INPUT_TOUCHSCREEN is not set ··· 883 870 # 884 871 # I2C system bus drivers (mostly embedded / system-on-chip) 885 872 # 873 + # CONFIG_I2C_DESIGNWARE is not set 886 874 # CONFIG_I2C_OCORES is not set 887 875 # CONFIG_I2C_SIMTEC is not set 888 876 ··· 912 898 # CONFIG_SENSORS_PCF8574 is not set 913 899 # CONFIG_PCF8575 is not set 914 900 # CONFIG_SENSORS_PCA9539 is not set 915 - # CONFIG_SENSORS_MAX6875 is not set 916 901 # CONFIG_SENSORS_TSL2550 is not set 917 902 # CONFIG_I2C_DEBUG_CORE is not set 918 903 # CONFIG_I2C_DEBUG_ALGO is not set 919 904 # CONFIG_I2C_DEBUG_BUS is not set 920 905 # CONFIG_I2C_DEBUG_CHIP is not set 921 906 # CONFIG_SPI is not set 907 + 908 + # 909 + # PPS support 910 + # 911 + # CONFIG_PPS is not set 922 912 CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y 923 913 # CONFIG_GPIOLIB is not set 924 914 # CONFIG_W1 is not set ··· 977 959 # CONFIG_SENSORS_SMSC47B397 is not set 978 960 # CONFIG_SENSORS_ADS7828 is not set 979 961 # CONFIG_SENSORS_THMC50 is not set 962 + # CONFIG_SENSORS_TMP401 is not set 980 963 # CONFIG_SENSORS_VIA686A is not set 981 964 # CONFIG_SENSORS_VT1211 is not set 982 965 # CONFIG_SENSORS_VT8231 is not set ··· 1013 994 # CONFIG_MFD_WM8400 is not set 1014 995 # CONFIG_MFD_WM8350_I2C is not set 1015 996 # CONFIG_MFD_PCF50633 is not set 997 + # CONFIG_AB3100_CORE is not set 1016 998 # CONFIG_REGULATOR is not set 1017 - 1018 - # 1019 - # Multimedia devices 
1020 - # 1021 - 1022 - # 1023 - # Multimedia core support 1024 - # 1025 - # CONFIG_VIDEO_DEV is not set 1026 - # CONFIG_DVB_CORE is not set 1027 - # CONFIG_VIDEO_MEDIA is not set 1028 - 1029 - # 1030 - # Multimedia drivers 1031 - # 1032 - # CONFIG_DAB is not set 999 + # CONFIG_MEDIA_SUPPORT is not set 1033 1000 1034 1001 # 1035 1002 # Graphics support ··· 1289 1284 # 1290 1285 # Miscellaneous USB options 1291 1286 # 1292 - CONFIG_USB_DEVICEFS=y 1293 1287 # CONFIG_USB_DEVICE_CLASS is not set 1294 1288 # CONFIG_USB_DYNAMIC_MINORS is not set 1295 1289 # CONFIG_USB_OTG is not set ··· 1300 1296 # USB Host Controller Drivers 1301 1297 # 1302 1298 # CONFIG_USB_C67X00_HCD is not set 1299 + # CONFIG_USB_XHCI_HCD is not set 1303 1300 CONFIG_USB_EHCI_HCD=m 1304 1301 # CONFIG_USB_EHCI_ROOT_HUB_TT is not set 1305 1302 # CONFIG_USB_EHCI_TT_NEWSCHED is not set ··· 1379 1374 # CONFIG_USB_LD is not set 1380 1375 # CONFIG_USB_TRANCEVIBRATOR is not set 1381 1376 # CONFIG_USB_IOWARRIOR is not set 1382 - # CONFIG_USB_TEST is not set 1383 1377 # CONFIG_USB_ISIGHTFW is not set 1384 1378 # CONFIG_USB_VST is not set 1385 1379 # CONFIG_USB_GADGET is not set ··· 1424 1420 # CONFIG_RTC_DRV_S35390A is not set 1425 1421 # CONFIG_RTC_DRV_FM3130 is not set 1426 1422 # CONFIG_RTC_DRV_RX8581 is not set 1423 + # CONFIG_RTC_DRV_RX8025 is not set 1427 1424 1428 1425 # 1429 1426 # SPI RTC drivers ··· 1453 1448 # CONFIG_DMADEVICES is not set 1454 1449 # CONFIG_AUXDISPLAY is not set 1455 1450 # CONFIG_UIO is not set 1451 + 1452 + # 1453 + # TI VLYNQ 1454 + # 1456 1455 # CONFIG_STAGING is not set 1457 1456 1458 1457 # ··· 1489 1480 # CONFIG_REISERFS_FS is not set 1490 1481 # CONFIG_JFS_FS is not set 1491 1482 CONFIG_FS_POSIX_ACL=y 1492 - CONFIG_FILE_LOCKING=y 1493 1483 # CONFIG_XFS_FS is not set 1494 1484 # CONFIG_GFS2_FS is not set 1495 1485 # CONFIG_OCFS2_FS is not set 1496 1486 # CONFIG_BTRFS_FS is not set 1487 + CONFIG_FILE_LOCKING=y 1497 1488 CONFIG_FSNOTIFY=y 1498 1489 CONFIG_DNOTIFY=y 1499 1490 
CONFIG_INOTIFY=y ··· 1569 1560 # CONFIG_PARTITION_ADVANCED is not set 1570 1561 CONFIG_MSDOS_PARTITION=y 1571 1562 CONFIG_SUN_PARTITION=y 1572 - CONFIG_NLS=m 1563 + CONFIG_NLS=y 1573 1564 CONFIG_NLS_DEFAULT="iso8859-1" 1574 1565 # CONFIG_NLS_CODEPAGE_437 is not set 1575 1566 # CONFIG_NLS_CODEPAGE_737 is not set
+9 -3
arch/sparc/include/asm/pgtable_64.h
··· 726 726 extern pte_t pgoff_to_pte(unsigned long); 727 727 #define PTE_FILE_MAX_BITS (64UL - PAGE_SHIFT - 1UL) 728 728 729 - extern unsigned long *sparc64_valid_addr_bitmap; 729 + extern unsigned long sparc64_valid_addr_bitmap[]; 730 730 731 731 /* Needs to be defined here and not in linux/mm.h, as it is arch dependent */ 732 - #define kern_addr_valid(addr) \ 733 - (test_bit(__pa((unsigned long)(addr))>>22, sparc64_valid_addr_bitmap)) 732 + static inline bool kern_addr_valid(unsigned long addr) 733 + { 734 + unsigned long paddr = __pa(addr); 735 + 736 + if ((paddr >> 41UL) != 0UL) 737 + return false; 738 + return test_bit(paddr >> 22, sparc64_valid_addr_bitmap); 739 + } 734 740 735 741 extern int page_in_phys_avail(unsigned long paddr); 736 742
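The hunk above replaces the `kern_addr_valid()` macro with a static inline that first rejects physical addresses at or above the 41-bit maximum, and only then consults the 4MB-granularity validity bitmap. A standalone C sketch of the same two-step check (the constants and helpers are illustrative, not the kernel's own `test_bit()` API):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_PHYS_BITS 41UL   /* matches the new MAX_PHYS_ADDRESS of 2^41 */
#define CHUNK_SHIFT   22UL   /* one bitmap bit per 4MB physical chunk */
#define BITMAP_WORDS  ((1UL << (MAX_PHYS_BITS - CHUNK_SHIFT)) / 64)

static unsigned long valid_bitmap[BITMAP_WORDS];

/* Mark the 4MB chunk containing paddr as valid RAM. */
static void mark_valid(unsigned long paddr)
{
    unsigned long bit = paddr >> CHUNK_SHIFT;

    valid_bitmap[bit / 64] |= 1UL << (bit % 64);
}

/* Shape of the new kern_addr_valid(): bounds check, then bitmap test. */
static bool addr_valid(unsigned long paddr)
{
    unsigned long bit;

    if (paddr >> MAX_PHYS_BITS)   /* beyond 2^41: cannot be RAM */
        return false;
    bit = paddr >> CHUNK_SHIFT;
    return valid_bitmap[bit / 64] & (1UL << (bit % 64));
}
```

The bounds check is the point of the inline conversion: the old macro would index past the end of the bitmap for a wild address, while the inline returns false before touching memory.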
+1 -1
arch/sparc/kernel/irq_64.c
··· 886 886 * Therefore you cannot make any OBP calls, not even prom_printf, 887 887 * from these two routines. 888 888 */ 889 - static void __cpuinit register_one_mondo(unsigned long paddr, unsigned long type, unsigned long qmask) 889 + static void __cpuinit notrace register_one_mondo(unsigned long paddr, unsigned long type, unsigned long qmask) 890 890 { 891 891 unsigned long num_entries = (qmask + 1) / 64; 892 892 unsigned long status;
+38 -4
arch/sparc/kernel/ktlb.S
··· 151 151 * Must preserve %g1 and %g6 (TAG). 152 152 */ 153 153 kvmap_dtlb_tsb4m_miss: 154 - sethi %hi(kpte_linear_bitmap), %g2 154 + /* Clear the PAGE_OFFSET top virtual bits, shift 155 + * down to get PFN, and make sure PFN is in range. 156 + */ 157 + sllx %g4, 21, %g5 158 + 159 + /* Check to see if we know about valid memory at the 4MB 160 + * chunk this physical address will reside within. 161 + */ 162 + srlx %g5, 21 + 41, %g2 163 + brnz,pn %g2, kvmap_dtlb_longpath 164 + nop 165 + 166 + /* This unconditional branch and delay-slot nop gets patched 167 + * by the sethi sequence once the bitmap is properly setup. 168 + */ 169 + .globl valid_addr_bitmap_insn 170 + valid_addr_bitmap_insn: 171 + ba,pt %xcc, 2f 172 + nop 173 + .subsection 2 174 + .globl valid_addr_bitmap_patch 175 + valid_addr_bitmap_patch: 176 + sethi %hi(sparc64_valid_addr_bitmap), %g7 177 + or %g7, %lo(sparc64_valid_addr_bitmap), %g7 178 + .previous 179 + 180 + srlx %g5, 21 + 22, %g2 181 + srlx %g2, 6, %g5 182 + and %g2, 63, %g2 183 + sllx %g5, 3, %g5 184 + ldx [%g7 + %g5], %g5 185 + mov 1, %g7 186 + sllx %g7, %g2, %g7 187 + andcc %g5, %g7, %g0 188 + be,pn %xcc, kvmap_dtlb_longpath 189 + 190 + 2: sethi %hi(kpte_linear_bitmap), %g2 155 191 or %g2, %lo(kpte_linear_bitmap), %g2 156 192 157 - /* Clear the PAGE_OFFSET top virtual bits, then shift 158 - * down to get a 256MB physical address index. 159 - */ 193 + /* Get the 256MB physical address index. */ 160 194 sllx %g4, 21, %g5 161 195 mov 1, %g7 162 196 srlx %g5, 21 + 28, %g5
+1 -1
arch/sparc/kernel/nmi.c
··· 103 103 } 104 104 if (!touched && __get_cpu_var(last_irq_sum) == sum) { 105 105 local_inc(&__get_cpu_var(alert_counter)); 106 - if (local_read(&__get_cpu_var(alert_counter)) == 5 * nmi_hz) 106 + if (local_read(&__get_cpu_var(alert_counter)) == 30 * nmi_hz) 107 107 die_nmi("BUG: NMI Watchdog detected LOCKUP", 108 108 regs, panic_on_timeout); 109 109 } else {
+2 -2
arch/sparc/kernel/smp_64.c
··· 1499 1499 dyn_size = pcpur_size - static_size - PERCPU_MODULE_RESERVE; 1500 1500 1501 1501 1502 - ptrs_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpur_ptrs[0])); 1502 + ptrs_size = PFN_ALIGN(nr_cpu_ids * sizeof(pcpur_ptrs[0])); 1503 1503 pcpur_ptrs = alloc_bootmem(ptrs_size); 1504 1504 1505 1505 for_each_possible_cpu(cpu) { ··· 1514 1514 1515 1515 /* allocate address and map */ 1516 1516 vm.flags = VM_ALLOC; 1517 - vm.size = num_possible_cpus() * PCPU_CHUNK_SIZE; 1517 + vm.size = nr_cpu_ids * PCPU_CHUNK_SIZE; 1518 1518 vm_area_register_early(&vm, PCPU_CHUNK_SIZE); 1519 1519 1520 1520 for_each_possible_cpu(cpu) {
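This hunk sizes the per-cpu arrays by `nr_cpu_ids` (highest possible CPU id plus one) rather than `num_possible_cpus()` (the population count of the possible mask). The distinction matters because the arrays are indexed by CPU id, and ids can be sparse. A toy illustration with a plain bitmask instead of the kernel's cpumask API:

```c
#include <assert.h>

/* Toy possible-CPU mask: CPUs 0, 1 and 5 are possible (sparse ids). */
static unsigned long possible_mask = (1UL << 0) | (1UL << 1) | (1UL << 5);

/* Analogue of num_possible_cpus(): how many CPUs exist. */
static int toy_num_possible_cpus(void)
{
    return __builtin_popcountl(possible_mask);
}

/* Analogue of nr_cpu_ids: highest set bit + 1, i.e. the array size
 * needed for anything indexed by CPU id. */
static int toy_nr_cpu_ids(void)
{
    return 64 - __builtin_clzl(possible_mask);
}
```

With ids {0, 1, 5} only three CPUs exist, but an id-indexed array needs six slots; allocating `num_possible_cpus()` entries would overrun on CPU 5.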
-22
arch/sparc/kernel/sun4d_smp.c
··· 162 162 */ 163 163 164 164 extern struct linux_prom_registers smp_penguin_ctable; 165 - extern unsigned long trapbase_cpu1[]; 166 - extern unsigned long trapbase_cpu2[]; 167 - extern unsigned long trapbase_cpu3[]; 168 165 169 166 void __init smp4d_boot_cpus(void) 170 167 { ··· 231 234 } 232 235 *prev = first; 233 236 local_flush_cache_all(); 234 - 235 - /* Free unneeded trap tables */ 236 - ClearPageReserved(virt_to_page(trapbase_cpu1)); 237 - init_page_count(virt_to_page(trapbase_cpu1)); 238 - free_page((unsigned long)trapbase_cpu1); 239 - totalram_pages++; 240 - num_physpages++; 241 - 242 - ClearPageReserved(virt_to_page(trapbase_cpu2)); 243 - init_page_count(virt_to_page(trapbase_cpu2)); 244 - free_page((unsigned long)trapbase_cpu2); 245 - totalram_pages++; 246 - num_physpages++; 247 - 248 - ClearPageReserved(virt_to_page(trapbase_cpu3)); 249 - init_page_count(virt_to_page(trapbase_cpu3)); 250 - free_page((unsigned long)trapbase_cpu3); 251 - totalram_pages++; 252 - num_physpages++; 253 237 254 238 /* Ok, they are spinning and ready to go. */ 255 239 smp_processors_ready = 1;
-26
arch/sparc/kernel/sun4m_smp.c
··· 121 121 */ 122 122 123 123 extern struct linux_prom_registers smp_penguin_ctable; 124 - extern unsigned long trapbase_cpu1[]; 125 - extern unsigned long trapbase_cpu2[]; 126 - extern unsigned long trapbase_cpu3[]; 127 124 128 125 void __init smp4m_boot_cpus(void) 129 126 { ··· 189 192 } 190 193 *prev = first; 191 194 local_flush_cache_all(); 192 - 193 - /* Free unneeded trap tables */ 194 - if (!cpu_isset(1, cpu_present_map)) { 195 - ClearPageReserved(virt_to_page(trapbase_cpu1)); 196 - init_page_count(virt_to_page(trapbase_cpu1)); 197 - free_page((unsigned long)trapbase_cpu1); 198 - totalram_pages++; 199 - num_physpages++; 200 - } 201 - if (!cpu_isset(2, cpu_present_map)) { 202 - ClearPageReserved(virt_to_page(trapbase_cpu2)); 203 - init_page_count(virt_to_page(trapbase_cpu2)); 204 - free_page((unsigned long)trapbase_cpu2); 205 - totalram_pages++; 206 - num_physpages++; 207 - } 208 - if (!cpu_isset(3, cpu_present_map)) { 209 - ClearPageReserved(virt_to_page(trapbase_cpu3)); 210 - init_page_count(virt_to_page(trapbase_cpu3)); 211 - free_page((unsigned long)trapbase_cpu3); 212 - totalram_pages++; 213 - num_physpages++; 214 - } 215 195 216 196 /* Ok, they are spinning and ready to go. */ 217 197 }
+3 -1
arch/sparc/kernel/sys32.S
··· 134 134 SIGN1(sys32_getsockname, sys_getsockname, %o0) 135 135 SIGN2(sys32_ioprio_get, sys_ioprio_get, %o0, %o1) 136 136 SIGN3(sys32_ioprio_set, sys_ioprio_set, %o0, %o1, %o2) 137 - SIGN2(sys32_splice, sys_splice, %o0, %o1) 137 + SIGN2(sys32_splice, sys_splice, %o0, %o2) 138 138 SIGN2(sys32_sync_file_range, compat_sync_file_range, %o0, %o5) 139 139 SIGN2(sys32_tee, sys_tee, %o0, %o1) 140 140 SIGN1(sys32_vmsplice, compat_sys_vmsplice, %o0) 141 + SIGN1(sys32_truncate, sys_truncate, %o1) 142 + SIGN1(sys32_ftruncate, sys_ftruncate, %o1) 141 143 142 144 .globl sys32_mmap2 143 145 sys32_mmap2:
+2 -2
arch/sparc/kernel/systbls_64.S
··· 43 43 /*110*/ .word sys_setresgid, sys_getresgid, sys_setregid, sys_nis_syscall, sys_nis_syscall 44 44 .word sys32_getgroups, compat_sys_gettimeofday, sys32_getrusage, sys_nis_syscall, sys_getcwd 45 45 /*120*/ .word compat_sys_readv, compat_sys_writev, compat_sys_settimeofday, sys_fchown16, sys_fchmod 46 - .word sys_nis_syscall, sys_setreuid16, sys_setregid16, sys_rename, sys_truncate 47 - /*130*/ .word sys_ftruncate, sys_flock, compat_sys_lstat64, sys_nis_syscall, sys_nis_syscall 46 + .word sys_nis_syscall, sys_setreuid16, sys_setregid16, sys_rename, sys32_truncate 47 + /*130*/ .word sys32_ftruncate, sys_flock, compat_sys_lstat64, sys_nis_syscall, sys_nis_syscall 48 48 .word sys_nis_syscall, sys32_mkdir, sys_rmdir, compat_sys_utimes, compat_sys_stat64 49 49 /*140*/ .word sys32_sendfile64, sys_nis_syscall, sys32_futex, sys_gettid, compat_sys_getrlimit 50 50 .word compat_sys_setrlimit, sys_pivot_root, sys32_prctl, sys_pciconfig_read, sys_pciconfig_write
+4 -3
arch/sparc/mm/fault_32.c
··· 319 319 */ 320 320 out_of_memory: 321 321 up_read(&mm->mmap_sem); 322 - printk("VM: killing process %s\n", tsk->comm); 323 - if (from_user) 324 - do_group_exit(SIGKILL); 322 + if (from_user) { 323 + pagefault_out_of_memory(); 324 + return; 325 + } 325 326 goto no_context; 326 327 327 328 do_sigbus:
+4 -3
arch/sparc/mm/fault_64.c
··· 447 447 out_of_memory: 448 448 insn = get_fault_insn(regs, insn); 449 449 up_read(&mm->mmap_sem); 450 - printk("VM: killing process %s\n", current->comm); 451 - if (!(regs->tstate & TSTATE_PRIV)) 452 - do_group_exit(SIGKILL); 450 + if (!(regs->tstate & TSTATE_PRIV)) { 451 + pagefault_out_of_memory(); 452 + return; 453 + } 453 454 goto handle_kernel_fault; 454 455 455 456 intr_or_no_mm:
+24 -19
arch/sparc/mm/init_64.c
··· 145 145 cmp_p64, NULL); 146 146 } 147 147 148 - unsigned long *sparc64_valid_addr_bitmap __read_mostly; 148 + unsigned long sparc64_valid_addr_bitmap[VALID_ADDR_BITMAP_BYTES / 149 + sizeof(unsigned long)]; 149 150 EXPORT_SYMBOL(sparc64_valid_addr_bitmap); 150 151 151 152 /* Kernel physical address base and size in bytes. */ ··· 1875 1874 * memory list again, and make sure it provides at least as much 1876 1875 * memory as 'pavail' does. 1877 1876 */ 1878 - static void __init setup_valid_addr_bitmap_from_pavail(void) 1877 + static void __init setup_valid_addr_bitmap_from_pavail(unsigned long *bitmap) 1879 1878 { 1880 1879 int i; 1881 1880 ··· 1898 1897 1899 1898 if (new_start <= old_start && 1900 1899 new_end >= (old_start + PAGE_SIZE)) { 1901 - set_bit(old_start >> 22, 1902 - sparc64_valid_addr_bitmap); 1900 + set_bit(old_start >> 22, bitmap); 1903 1901 goto do_next_page; 1904 1902 } 1905 1903 } ··· 1919 1919 } 1920 1920 } 1921 1921 1922 + static void __init patch_tlb_miss_handler_bitmap(void) 1923 + { 1924 + extern unsigned int valid_addr_bitmap_insn[]; 1925 + extern unsigned int valid_addr_bitmap_patch[]; 1926 + 1927 + valid_addr_bitmap_insn[1] = valid_addr_bitmap_patch[1]; 1928 + mb(); 1929 + valid_addr_bitmap_insn[0] = valid_addr_bitmap_patch[0]; 1930 + flushi(&valid_addr_bitmap_insn[0]); 1931 + } 1932 + 1922 1933 void __init mem_init(void) 1923 1934 { 1924 1935 unsigned long codepages, datapages, initpages; 1925 1936 unsigned long addr, last; 1926 - int i; 1927 - 1928 - i = last_valid_pfn >> ((22 - PAGE_SHIFT) + 6); 1929 - i += 1; 1930 - sparc64_valid_addr_bitmap = (unsigned long *) alloc_bootmem(i << 3); 1931 - if (sparc64_valid_addr_bitmap == NULL) { 1932 - prom_printf("mem_init: Cannot alloc valid_addr_bitmap.\n"); 1933 - prom_halt(); 1934 - } 1935 - memset(sparc64_valid_addr_bitmap, 0, i << 3); 1936 1937 1937 1938 addr = PAGE_OFFSET + kern_base; 1938 1939 last = PAGE_ALIGN(kern_size) + addr; ··· 1942 1941 addr += PAGE_SIZE; 1943 1942 } 1944 1943 1945 - 
setup_valid_addr_bitmap_from_pavail(); 1944 + setup_valid_addr_bitmap_from_pavail(sparc64_valid_addr_bitmap); 1945 + patch_tlb_miss_handler_bitmap(); 1946 1946 1947 1947 high_memory = __va(last_valid_pfn << PAGE_SHIFT); 1948 1948 1949 1949 #ifdef CONFIG_NEED_MULTIPLE_NODES 1950 - for_each_online_node(i) { 1951 - if (NODE_DATA(i)->node_spanned_pages != 0) { 1952 - totalram_pages += 1953 - free_all_bootmem_node(NODE_DATA(i)); 1950 + { 1951 + int i; 1952 + for_each_online_node(i) { 1953 + if (NODE_DATA(i)->node_spanned_pages != 0) { 1954 + totalram_pages += 1955 + free_all_bootmem_node(NODE_DATA(i)); 1956 + } 1954 1957 } 1955 1958 } 1956 1959 #else
+5 -2
arch/sparc/mm/init_64.h
··· 5 5 * marked non-static so that assembler code can get at them. 6 6 */ 7 7 8 - #define MAX_PHYS_ADDRESS (1UL << 42UL) 9 - #define KPTE_BITMAP_CHUNK_SZ (256UL * 1024UL * 1024UL) 8 + #define MAX_PHYS_ADDRESS (1UL << 41UL) 9 + #define KPTE_BITMAP_CHUNK_SZ (256UL * 1024UL * 1024UL) 10 10 #define KPTE_BITMAP_BYTES \ 11 11 ((MAX_PHYS_ADDRESS / KPTE_BITMAP_CHUNK_SZ) / 8) 12 + #define VALID_ADDR_BITMAP_CHUNK_SZ (4UL * 1024UL * 1024UL) 13 + #define VALID_ADDR_BITMAP_BYTES \ 14 + ((MAX_PHYS_ADDRESS / VALID_ADDR_BITMAP_CHUNK_SZ) / 8) 12 15 13 16 extern unsigned long kern_linear_pte_xor[2]; 14 17 extern unsigned long kpte_linear_bitmap[KPTE_BITMAP_BYTES / sizeof(unsigned long)];
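The new `VALID_ADDR_BITMAP_BYTES` follows the same pattern as `KPTE_BITMAP_BYTES`: one bit per chunk across the (now 41-bit) physical address space, divided by 8 to get bytes. The arithmetic, checked in a small sketch mirroring the header's definitions:

```c
#include <assert.h>

#define MAX_PHYS_ADDRESS           (1UL << 41)
#define KPTE_BITMAP_CHUNK_SZ       (256UL * 1024UL * 1024UL)  /* 256MB */
#define VALID_ADDR_BITMAP_CHUNK_SZ (4UL * 1024UL * 1024UL)    /* 4MB */

/* One bit per chunk, eight bits per byte. */
#define KPTE_BITMAP_BYTES \
        ((MAX_PHYS_ADDRESS / KPTE_BITMAP_CHUNK_SZ) / 8)
#define VALID_ADDR_BITMAP_BYTES \
        ((MAX_PHYS_ADDRESS / VALID_ADDR_BITMAP_CHUNK_SZ) / 8)
```

So the kpte bitmap costs 1KB (2^41 / 2^28 = 8192 bits) and the valid-address bitmap 64KB (2^41 / 2^22 = 524288 bits), which is why the init_64.c hunk can now declare it as a fixed-size static array instead of allocating it from bootmem.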
+1 -1
arch/sparc/prom/misc_64.c
··· 88 88 /* Drop into the prom, but completely terminate the program. 89 89 * No chance of continuing. 90 90 */ 91 - void prom_halt(void) 91 + void notrace prom_halt(void) 92 92 { 93 93 #ifdef CONFIG_SUN_LDOMS 94 94 if (ldom_domaining_enabled)
+3 -4
arch/sparc/prom/printf.c
··· 14 14 */ 15 15 16 16 #include <linux/kernel.h> 17 + #include <linux/compiler.h> 17 18 18 19 #include <asm/openprom.h> 19 20 #include <asm/oplib.h> 20 21 21 22 static char ppbuf[1024]; 22 23 23 - void 24 - prom_write(const char *buf, unsigned int n) 24 + void notrace prom_write(const char *buf, unsigned int n) 25 25 { 26 26 char ch; 27 27 ··· 33 33 } 34 34 } 35 35 36 - void 37 - prom_printf(const char *fmt, ...) 36 + void notrace prom_printf(const char *fmt, ...) 38 37 { 39 38 va_list args; 40 39 int i;
+1 -1
arch/x86/Kconfig
··· 24 24 select HAVE_UNSTABLE_SCHED_CLOCK 25 25 select HAVE_IDE 26 26 select HAVE_OPROFILE 27 + select HAVE_PERF_COUNTERS if (!M386 && !M486) 27 28 select HAVE_IOREMAP_PROT 28 29 select HAVE_KPROBES 29 30 select ARCH_WANT_OPTIONAL_GPIOLIB ··· 743 742 config X86_LOCAL_APIC 744 743 def_bool y 745 744 depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC 746 - select HAVE_PERF_COUNTERS if (!M386 && !M486) 747 745 748 746 config X86_IO_APIC 749 747 def_bool y
+1 -1
arch/x86/boot/compressed/Makefile
··· 4 4 # create a compressed vmlinux image from the original vmlinux 5 5 # 6 6 7 - targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o 7 + targets := vmlinux.lds vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o 8 8 9 9 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2 10 10 KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
-7
arch/x86/include/asm/ftrace.h
··· 28 28 29 29 #endif 30 30 31 - /* FIXME: I don't want to stay hardcoded */ 32 - #ifdef CONFIG_X86_64 33 - # define FTRACE_SYSCALL_MAX 299 34 - #else 35 - # define FTRACE_SYSCALL_MAX 337 36 - #endif 37 - 38 31 #ifdef CONFIG_FUNCTION_TRACER 39 32 #define MCOUNT_ADDR ((long)(mcount)) 40 33 #define MCOUNT_INSN_SIZE 5 /* sizeof mcount call */
+10 -2
arch/x86/include/asm/pgtable.h
··· 2 2 #define _ASM_X86_PGTABLE_H 3 3 4 4 #include <asm/page.h> 5 + #include <asm/e820.h> 5 6 6 7 #include <asm/pgtable_types.h> 7 8 ··· 270 269 271 270 #define canon_pgprot(p) __pgprot(massage_pgprot(p)) 272 271 273 - static inline int is_new_memtype_allowed(unsigned long flags, 274 - unsigned long new_flags) 272 + static inline int is_new_memtype_allowed(u64 paddr, unsigned long size, 273 + unsigned long flags, 274 + unsigned long new_flags) 275 275 { 276 + /* 277 + * PAT type is always WB for ISA. So no need to check. 278 + */ 279 + if (is_ISA_range(paddr, paddr + size - 1)) 280 + return 1; 281 + 276 282 /* 277 283 * Certain new memtypes are not allowed with certain 278 284 * requested memtype:
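The added early return skips the memtype compatibility matrix entirely for the legacy ISA window, which is always mapped write-back on x86. A sketch of the shape of that short-circuit, assuming the conventional 1MB ISA bound (the real check uses `is_ISA_range()` from `asm/e820.h`):

```c
#include <assert.h>
#include <stdint.h>

#define ISA_END_ADDRESS 0x100000ULL   /* first 1MB: legacy ISA window */

/* True when [paddr, paddr + size) lies entirely inside the ISA window. */
static int in_isa_range(uint64_t paddr, uint64_t size)
{
    return paddr + size - 1 < ISA_END_ADDRESS;
}

/* Simplified memtype check: ISA addresses are always allowed; anything
 * else falls through to the real compatibility matrix, stubbed here as
 * a caller-supplied verdict. */
static int memtype_allowed(uint64_t paddr, uint64_t size, int matrix_verdict)
{
    if (in_isa_range(paddr, size))
        return 1;
    return matrix_verdict;
}
```

Passing the range into the helper is what forced the signature change in the hunk: the old two-argument `is_new_memtype_allowed(flags, new_flags)` had no address to test.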
+2
arch/x86/include/asm/unistd_32.h
··· 345 345 346 346 #ifdef __KERNEL__ 347 347 348 + #define NR_syscalls 337 349 + 348 350 #define __ARCH_WANT_IPC_PARSE_VERSION 349 351 #define __ARCH_WANT_OLD_READDIR 350 352 #define __ARCH_WANT_OLD_STAT
+6
arch/x86/include/asm/unistd_64.h
··· 688 688 #endif /* __NO_STUBS */ 689 689 690 690 #ifdef __KERNEL__ 691 + 692 + #ifndef COMPILE_OFFSETS 693 + #include <asm/asm-offsets.h> 694 + #define NR_syscalls (__NR_syscall_max + 1) 695 + #endif 696 + 691 697 /* 692 698 * "Conditional" syscalls 693 699 *
+1 -1
arch/x86/include/asm/uv/uv_bau.h
··· 133 133 * see table 4.2.3.0.1 in broacast_assist spec. 134 134 */ 135 135 struct bau_msg_header { 136 - unsigned int dest_subnodeid:6; /* must be zero */ 136 + unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */ 137 137 /* bits 5:0 */ 138 138 unsigned int base_dest_nodeid:15; /* nasid>>1 (pnode) of */ 139 139 /* bits 20:6 */ /* first bit in node_map */
+3
arch/x86/kernel/apic/ipi.c
··· 106 106 unsigned long mask = cpumask_bits(cpumask)[0]; 107 107 unsigned long flags; 108 108 109 + if (WARN_ONCE(!mask, "empty IPI mask")) 110 + return; 111 + 109 112 local_irq_save(flags); 110 113 WARN_ON(mask & ~cpumask_bits(cpu_online_mask)[0]); 111 114 __default_send_IPI_dest_field(mask, vector, apic->dest_logical);
+10
arch/x86/kernel/apic/probe_64.c
··· 44 44 NULL, 45 45 }; 46 46 47 + static int apicid_phys_pkg_id(int initial_apic_id, int index_msb) 48 + { 49 + return hard_smp_processor_id() >> index_msb; 50 + } 51 + 47 52 /* 48 53 * Check the APIC IDs in bios_cpu_apicid and choose the APIC mode. 49 54 */ ··· 72 67 if (max_physical_apicid >= 8) 73 68 apic = &apic_physflat; 74 69 printk(KERN_INFO "Setting APIC routing to %s\n", apic->name); 70 + } 71 + 72 + if (is_vsmp_box()) { 73 + /* need to update phys_pkg_id */ 74 + apic->phys_pkg_id = apicid_phys_pkg_id; 75 75 } 76 76 77 77 /*
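The vSMP override derives the physical package id by shifting the APIC id right by `index_msb`, the number of id bits consumed by the cores and threads below the package level. A minimal sketch of that extraction:

```c
#include <assert.h>

/* Package id = APIC id with the per-package (core/thread) bits
 * shifted out; index_msb is how many low bits those levels occupy. */
static int phys_pkg_id(int apicid, int index_msb)
{
    return apicid >> index_msb;
}
```

For example, APIC id 0x0d with two core/thread bits (index_msb = 2) belongs to package 3.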
+2 -2
arch/x86/kernel/apic/x2apic_uv_x.c
··· 46 46 return node_id.s.node_id; 47 47 } 48 48 49 - static int uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id) 49 + static int __init uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id) 50 50 { 51 51 if (!strcmp(oem_id, "SGI")) { 52 52 if (!strcmp(oem_table_id, "UVL")) ··· 253 253 apic_write(APIC_SELF_IPI, vector); 254 254 } 255 255 256 - struct apic apic_x2apic_uv_x = { 256 + struct apic __refdata apic_x2apic_uv_x = { 257 257 258 258 .name = "UV large system", 259 259 .probe = NULL,
+1
arch/x86/kernel/asm-offsets_64.c
··· 3 3 * This code generates raw asm output which is post-processed to extract 4 4 * and format the required data. 5 5 */ 6 + #define COMPILE_OFFSETS 6 7 7 8 #include <linux/crypto.h> 8 9 #include <linux/sched.h>
+4
arch/x86/kernel/cpu/Makefile
··· 7 7 CFLAGS_REMOVE_common.o = -pg 8 8 endif 9 9 10 + # Make sure load_percpu_segment has no stackprotector 11 + nostackp := $(call cc-option, -fno-stack-protector) 12 + CFLAGS_common.o := $(nostackp) 13 + 10 14 obj-y := intel_cacheinfo.o addon_cpuid_features.o 11 15 obj-y += proc.o capflags.o powerflags.o common.o 12 16 obj-y += vmware.o hypervisor.o
+7
arch/x86/kernel/cpu/amd.c
··· 400 400 level = cpuid_eax(1); 401 401 if((level >= 0x0f48 && level < 0x0f50) || level >= 0x0f58) 402 402 set_cpu_cap(c, X86_FEATURE_REP_GOOD); 403 + 404 + /* 405 + * Some BIOSes incorrectly force this feature, but only K8 406 + * revision D (model = 0x14) and later actually support it. 407 + */ 408 + if (c->x86_model < 0x14) 409 + clear_cpu_cap(c, X86_FEATURE_LAHF_LM); 403 410 } 404 411 if (c->x86 == 0x10 || c->x86 == 0x11) 405 412 set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+24 -24
arch/x86/kernel/cpu/common.c
··· 59 59 alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask); 60 60 } 61 61 62 - static const struct cpu_dev *this_cpu __cpuinitdata; 62 + static void __cpuinit default_init(struct cpuinfo_x86 *c) 63 + { 64 + #ifdef CONFIG_X86_64 65 + display_cacheinfo(c); 66 + #else 67 + /* Not much we can do here... */ 68 + /* Check if at least it has cpuid */ 69 + if (c->cpuid_level == -1) { 70 + /* No cpuid. It must be an ancient CPU */ 71 + if (c->x86 == 4) 72 + strcpy(c->x86_model_id, "486"); 73 + else if (c->x86 == 3) 74 + strcpy(c->x86_model_id, "386"); 75 + } 76 + #endif 77 + } 78 + 79 + static const struct cpu_dev __cpuinitconst default_cpu = { 80 + .c_init = default_init, 81 + .c_vendor = "Unknown", 82 + .c_x86_vendor = X86_VENDOR_UNKNOWN, 83 + }; 84 + 85 + static const struct cpu_dev *this_cpu __cpuinitdata = &default_cpu; 63 86 64 87 DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = { 65 88 #ifdef CONFIG_X86_64 ··· 354 331 } 355 332 356 333 static const struct cpu_dev *__cpuinitdata cpu_devs[X86_VENDOR_NUM] = {}; 357 - 358 - static void __cpuinit default_init(struct cpuinfo_x86 *c) 359 - { 360 - #ifdef CONFIG_X86_64 361 - display_cacheinfo(c); 362 - #else 363 - /* Not much we can do here... */ 364 - /* Check if at least it has cpuid */ 365 - if (c->cpuid_level == -1) { 366 - /* No cpuid. It must be an ancient CPU */ 367 - if (c->x86 == 4) 368 - strcpy(c->x86_model_id, "486"); 369 - else if (c->x86 == 3) 370 - strcpy(c->x86_model_id, "386"); 371 - } 372 - #endif 373 - } 374 - 375 - static const struct cpu_dev __cpuinitconst default_cpu = { 376 - .c_init = default_init, 377 - .c_vendor = "Unknown", 378 - .c_x86_vendor = X86_VENDOR_UNKNOWN, 379 - }; 380 334 381 335 static void __cpuinit get_model_name(struct cpuinfo_x86 *c) 382 336 {
+16 -3
arch/x86/kernel/cpu/mcheck/mce.c
··· 1226 1226 } 1227 1227 1228 1228 /* Add per CPU specific workarounds here */ 1229 - static void mce_cpu_quirks(struct cpuinfo_x86 *c) 1229 + static int mce_cpu_quirks(struct cpuinfo_x86 *c) 1230 1230 { 1231 + if (c->x86_vendor == X86_VENDOR_UNKNOWN) { 1232 + pr_info("MCE: unknown CPU type - not enabling MCE support.\n"); 1233 + return -EOPNOTSUPP; 1234 + } 1235 + 1231 1236 /* This should be disabled by the BIOS, but isn't always */ 1232 1237 if (c->x86_vendor == X86_VENDOR_AMD) { 1233 1238 if (c->x86 == 15 && banks > 4) { ··· 1278 1273 if ((c->x86 > 6 || (c->x86 == 6 && c->x86_model >= 0xe)) && 1279 1274 monarch_timeout < 0) 1280 1275 monarch_timeout = USEC_PER_SEC; 1276 + 1277 + /* 1278 + * There are also broken BIOSes on some Pentium M and 1279 + * earlier systems: 1280 + */ 1281 + if (c->x86 == 6 && c->x86_model <= 13 && mce_bootlog < 0) 1282 + mce_bootlog = 0; 1281 1283 } 1282 1284 if (monarch_timeout < 0) 1283 1285 monarch_timeout = 0; 1284 1286 if (mce_bootlog != 0) 1285 1287 mce_panic_timeout = 30; 1288 + 1289 + return 0; 1286 1290 } 1287 1291 1288 1292 static void __cpuinit mce_ancient_init(struct cpuinfo_x86 *c) ··· 1352 1338 if (!mce_available(c)) 1353 1339 return; 1354 1340 1355 - if (mce_cap_init() < 0) { 1341 + if (mce_cap_init() < 0 || mce_cpu_quirks(c) < 0) { 1356 1342 mce_disabled = 1; 1357 1343 return; 1358 1344 } 1359 - mce_cpu_quirks(c); 1360 1345 1361 1346 machine_check_vector = do_machine_check; 1362 1347
+15 -8
arch/x86/kernel/cpu/mcheck/therm_throt.c
··· 36 36 37 37 static DEFINE_PER_CPU(__u64, next_check) = INITIAL_JIFFIES; 38 38 static DEFINE_PER_CPU(unsigned long, thermal_throttle_count); 39 + static DEFINE_PER_CPU(bool, thermal_throttle_active); 39 40 40 41 static atomic_t therm_throt_en = ATOMIC_INIT(0); 41 42 ··· 97 96 { 98 97 unsigned int cpu = smp_processor_id(); 99 98 __u64 tmp_jiffs = get_jiffies_64(); 99 + bool was_throttled = __get_cpu_var(thermal_throttle_active); 100 + bool is_throttled = __get_cpu_var(thermal_throttle_active) = curr; 100 101 101 - if (curr) 102 + if (is_throttled) 102 103 __get_cpu_var(thermal_throttle_count)++; 103 104 104 - if (time_before64(tmp_jiffs, __get_cpu_var(next_check))) 105 + if (!(was_throttled ^ is_throttled) && 106 + time_before64(tmp_jiffs, __get_cpu_var(next_check))) 105 107 return 0; 106 108 107 109 __get_cpu_var(next_check) = tmp_jiffs + CHECK_INTERVAL; 108 110 109 111 /* if we just entered the thermal event */ 110 - if (curr) { 112 + if (is_throttled) { 111 113 printk(KERN_CRIT "CPU%d: Temperature above threshold, " 112 - "cpu clock throttled (total events = %lu)\n", cpu, 113 - __get_cpu_var(thermal_throttle_count)); 114 + "cpu clock throttled (total events = %lu)\n", 115 + cpu, __get_cpu_var(thermal_throttle_count)); 114 116 115 117 add_taint(TAINT_MACHINE_CHECK); 116 - } else { 117 - printk(KERN_CRIT "CPU%d: Temperature/speed normal\n", cpu); 118 + return 1; 119 + } 120 + if (was_throttled) { 121 + printk(KERN_INFO "CPU%d: Temperature/speed normal\n", cpu); 122 + return 1; 118 123 } 119 124 120 - return 1; 125 + return 0; 121 126 } 122 127 123 128 #ifdef CONFIG_SYSFS
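The reworked therm_throt logic logs only on state transitions: the `was_throttled`/`is_throttled` pair detects the edge where throttling starts (logged at CRIT) or stops (logged at INFO), and steady state stays quiet subject to the rate limit. A standalone sketch of just the edge detector (one global flag here stands in for the kernel's per-CPU variable, and the CHECK_INTERVAL rate limiting is elided):

```c
#include <stdbool.h>

static bool throttle_state;   /* per-CPU in the kernel; one flag here */

/* Returns 1 when a message should be logged, i.e. only on the edges
 * where throttling starts or stops, mirroring the patch's
 * was_throttled/is_throttled comparison. */
static int therm_event(bool curr)
{
    bool was = throttle_state;

    throttle_state = curr;
    if (curr && !was)
        return 1;   /* entered throttling: would log at KERN_CRIT */
    if (!curr && was)
        return 1;   /* left throttling: would log at KERN_INFO */
    return 0;       /* no transition: stay quiet */
}
```

This is what fixes the log spam the old code produced: previously every interrupt while throttled could print, and the "normal again" message fired even if the CPU had never been throttled.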
+34 -8
arch/x86/kernel/cpu/perf_counter.c
··· 55 55 int num_counters_fixed; 56 56 int counter_bits; 57 57 u64 counter_mask; 58 + int apic; 58 59 u64 max_period; 59 60 u64 intel_ctrl; 60 61 }; ··· 73 72 { 74 73 [PERF_COUNT_HW_CPU_CYCLES] = 0x0079, 75 74 [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0, 76 - [PERF_COUNT_HW_CACHE_REFERENCES] = 0x0000, 77 - [PERF_COUNT_HW_CACHE_MISSES] = 0x0000, 75 + [PERF_COUNT_HW_CACHE_REFERENCES] = 0x0f2e, 76 + [PERF_COUNT_HW_CACHE_MISSES] = 0x012e, 78 77 [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x00c4, 79 78 [PERF_COUNT_HW_BRANCH_MISSES] = 0x00c5, 80 79 [PERF_COUNT_HW_BUS_CYCLES] = 0x0062, ··· 614 613 615 614 static bool reserve_pmc_hardware(void) 616 615 { 616 + #ifdef CONFIG_X86_LOCAL_APIC 617 617 int i; 618 618 619 619 if (nmi_watchdog == NMI_LOCAL_APIC) ··· 629 627 if (!reserve_evntsel_nmi(x86_pmu.eventsel + i)) 630 628 goto eventsel_fail; 631 629 } 630 + #endif 632 631 633 632 return true; 634 633 634 + #ifdef CONFIG_X86_LOCAL_APIC 635 635 eventsel_fail: 636 636 for (i--; i >= 0; i--) 637 637 release_evntsel_nmi(x86_pmu.eventsel + i); ··· 648 644 enable_lapic_nmi_watchdog(); 649 645 650 646 return false; 647 + #endif 651 648 } 652 649 653 650 static void release_pmc_hardware(void) 654 651 { 652 + #ifdef CONFIG_X86_LOCAL_APIC 655 653 int i; 656 654 657 655 for (i = 0; i < x86_pmu.num_counters; i++) { ··· 663 657 664 658 if (nmi_watchdog == NMI_LOCAL_APIC) 665 659 enable_lapic_nmi_watchdog(); 660 + #endif 666 661 } 667 662 668 663 static void hw_perf_counter_destroy(struct perf_counter *counter) ··· 755 748 hwc->sample_period = x86_pmu.max_period; 756 749 hwc->last_period = hwc->sample_period; 757 750 atomic64_set(&hwc->period_left, hwc->sample_period); 751 + } else { 752 + /* 753 + * If we have a PMU initialized but no APIC 754 + * interrupts, we cannot sample hardware 755 + * counters (user-space has to fall back and 756 + * sample via a hrtimer based software counter): 757 + */ 758 + if (!x86_pmu.apic) 759 + return -EOPNOTSUPP; 758 760 } 759 761 760 762 counter->destroy = 
hw_perf_counter_destroy; ··· 1465 1449 1466 1450 void set_perf_counter_pending(void) 1467 1451 { 1452 + #ifdef CONFIG_X86_LOCAL_APIC 1468 1453 apic->send_IPI_self(LOCAL_PENDING_VECTOR); 1454 + #endif 1469 1455 } 1470 1456 1471 1457 void perf_counters_lapic_init(void) 1472 1458 { 1473 - if (!x86_pmu_initialized()) 1459 + #ifdef CONFIG_X86_LOCAL_APIC 1460 + if (!x86_pmu.apic || !x86_pmu_initialized()) 1474 1461 return; 1475 1462 1476 1463 /* 1477 1464 * Always use NMI for PMU 1478 1465 */ 1479 1466 apic_write(APIC_LVTPC, APIC_DM_NMI); 1467 + #endif 1480 1468 } 1481 1469 1482 1470 static int __kprobes ··· 1504 1484 1505 1485 regs = args->regs; 1506 1486 1487 + #ifdef CONFIG_X86_LOCAL_APIC 1507 1488 apic_write(APIC_LVTPC, APIC_DM_NMI); 1489 + #endif 1508 1490 /* 1509 1491 * Can't rely on the handled return value to say it was our NMI, two 1510 1492 * counters could trigger 'simultaneously' raising two back-to-back NMIs. ··· 1537 1515 .event_map = p6_pmu_event_map, 1538 1516 .raw_event = p6_pmu_raw_event, 1539 1517 .max_events = ARRAY_SIZE(p6_perfmon_event_map), 1518 + .apic = 1, 1540 1519 .max_period = (1ULL << 31) - 1, 1541 1520 .version = 0, 1542 1521 .num_counters = 2, ··· 1564 1541 .event_map = intel_pmu_event_map, 1565 1542 .raw_event = intel_pmu_raw_event, 1566 1543 .max_events = ARRAY_SIZE(intel_perfmon_event_map), 1544 + .apic = 1, 1567 1545 /* 1568 1546 * Intel PMCs cannot be accessed sanely above 32 bit width, 1569 1547 * so we install an artificial 1<<31 period regardless of ··· 1588 1564 .num_counters = 4, 1589 1565 .counter_bits = 48, 1590 1566 .counter_mask = (1ULL << 48) - 1, 1567 + .apic = 1, 1591 1568 /* use highest bit to detect overflow */ 1592 1569 .max_period = (1ULL << 47) - 1, 1593 1570 }; ··· 1614 1589 return -ENODEV; 1615 1590 } 1616 1591 1617 - if (!cpu_has_apic) { 1618 - pr_info("no Local APIC, try rebooting with lapic"); 1619 - return -ENODEV; 1620 - } 1592 + x86_pmu = p6_pmu; 1621 1593 1622 - x86_pmu = p6_pmu; 1594 + if (!cpu_has_apic) { 
1595 + pr_info("no APIC, boot with the \"lapic\" boot parameter to force-enable it.\n"); 1596 + pr_info("no hardware sampling interrupt available.\n"); 1597 + x86_pmu.apic = 0; 1598 + } 1623 1599 1624 1600 return 0; 1625 1601 }
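The perf_counter.c hunks above thread a new `x86_pmu.apic` capability flag through the driver: on CPUs without a usable local APIC, plain counting still works, but period-based sampling is refused with `-EOPNOTSUPP` so user space can fall back to hrtimer-based software sampling. A minimal sketch of that gating pattern (struct and function names here are illustrative, not the kernel's):

```c
#include <errno.h>

/* Illustrative PMU descriptor: 'apic' records whether a hardware
 * sampling interrupt (PMI) can actually be delivered. */
struct pmu_desc {
	int apic;		/* 1 if a local APIC is available */
	int num_counters;
};

/* Refuse sampling setup when no interrupt source exists; plain
 * counting remains allowed. The caller is expected to fall back
 * to a software (hrtimer-based) sampling counter. */
static int pmu_event_init(const struct pmu_desc *pmu, int wants_sampling)
{
	if (wants_sampling && !pmu->apic)
		return -EOPNOTSUPP;
	return 0;
}
```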
+4 -4
arch/x86/kernel/ftrace.c
··· 494 494 495 495 struct syscall_metadata *syscall_nr_to_meta(int nr) 496 496 { 497 - if (!syscalls_metadata || nr >= FTRACE_SYSCALL_MAX || nr < 0) 497 + if (!syscalls_metadata || nr >= NR_syscalls || nr < 0) 498 498 return NULL; 499 499 500 500 return syscalls_metadata[nr]; ··· 507 507 if (!syscalls_metadata) 508 508 return -1; 509 509 510 - for (i = 0; i < FTRACE_SYSCALL_MAX; i++) { 510 + for (i = 0; i < NR_syscalls; i++) { 511 511 if (syscalls_metadata[i]) { 512 512 if (!strcmp(syscalls_metadata[i]->name, name)) 513 513 return i; ··· 533 533 unsigned long **psys_syscall_table = &sys_call_table; 534 534 535 535 syscalls_metadata = kzalloc(sizeof(*syscalls_metadata) * 536 - FTRACE_SYSCALL_MAX, GFP_KERNEL); 536 + NR_syscalls, GFP_KERNEL); 537 537 if (!syscalls_metadata) { 538 538 WARN_ON(1); 539 539 return -ENOMEM; 540 540 } 541 541 542 - for (i = 0; i < FTRACE_SYSCALL_MAX; i++) { 542 + for (i = 0; i < NR_syscalls; i++) { 543 543 meta = find_syscall_meta(psys_syscall_table[i]); 544 544 syscalls_metadata[i] = meta; 545 545 }
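The ftrace.c change above sizes the syscall metadata table by `NR_syscalls` instead of a private `FTRACE_SYSCALL_MAX`, so the allocation, the lookup bound, and the population loop always agree. The bounds-checked lookup pattern looks like this (standalone sketch with a toy table, not kernel code):

```c
#include <stddef.h>
#include <string.h>

#define NR_SYSCALLS 4	/* stand-in for the arch's NR_syscalls */

struct syscall_metadata { const char *name; };

static struct syscall_metadata read_meta = { "sys_read" };
static struct syscall_metadata *table[NR_SYSCALLS] = { [0] = &read_meta };

/* Reject out-of-range syscall numbers before indexing the table. */
static struct syscall_metadata *syscall_nr_to_meta(int nr)
{
	if (nr < 0 || nr >= NR_SYSCALLS)
		return NULL;
	return table[nr];
}

/* Reverse lookup; slots without metadata are NULL and are skipped. */
static int syscall_name_to_nr(const char *name)
{
	for (int i = 0; i < NR_SYSCALLS; i++)
		if (table[i] && !strcmp(table[i]->name, name))
			return i;
	return -1;
}
```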
+1 -7
arch/x86/kernel/head_32.S
··· 261 261 * which will be freed later 262 262 */ 263 263 264 - #ifndef CONFIG_HOTPLUG_CPU 265 - .section .init.text,"ax",@progbits 266 - #endif 264 + __CPUINIT 267 265 268 266 #ifdef CONFIG_SMP 269 267 ENTRY(startup_32_smp) ··· 600 602 #endif 601 603 iret 602 604 603 - #ifndef CONFIG_HOTPLUG_CPU 604 - __CPUINITDATA 605 - #else 606 605 __REFDATA 607 - #endif 608 606 .align 4 609 607 ENTRY(initial_code) 610 608 .long i386_start_kernel
+1 -5
arch/x86/kernel/process.c
··· 519 519 if (!cpumask_test_cpu(cpu, c1e_mask)) { 520 520 cpumask_set_cpu(cpu, c1e_mask); 521 521 /* 522 - * Force broadcast so ACPI can not interfere. Needs 523 - * to run with interrupts enabled as it uses 524 - * smp_function_call. 522 + * Force broadcast so ACPI can not interfere. 525 523 */ 526 - local_irq_enable(); 527 524 clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_FORCE, 528 525 &cpu); 529 526 printk(KERN_INFO "Switch to broadcast mode on CPU%d\n", 530 527 cpu); 531 - local_irq_disable(); 532 528 } 533 529 clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu); 534 530
+6 -6
arch/x86/kernel/reboot.c
··· 418 418 } 419 419 420 420 static struct dmi_system_id __initdata pci_reboot_dmi_table[] = { 421 - { /* Handle problems with rebooting on Apple MacBook5,2 */ 421 + { /* Handle problems with rebooting on Apple MacBook5 */ 422 422 .callback = set_pci_reboot, 423 - .ident = "Apple MacBook", 423 + .ident = "Apple MacBook5", 424 424 .matches = { 425 425 DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), 426 - DMI_MATCH(DMI_PRODUCT_NAME, "MacBook5,2"), 426 + DMI_MATCH(DMI_PRODUCT_NAME, "MacBook5"), 427 427 }, 428 428 }, 429 - { /* Handle problems with rebooting on Apple MacBookPro5,1 */ 429 + { /* Handle problems with rebooting on Apple MacBookPro5 */ 430 430 .callback = set_pci_reboot, 431 - .ident = "Apple MacBookPro5,1", 431 + .ident = "Apple MacBookPro5", 432 432 .matches = { 433 433 DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), 434 - DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro5,1"), 434 + DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro5"), 435 435 }, 436 436 }, 437 437 { }
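The reboot.c hunk works because `DMI_MATCH` comparisons are substring tests: widening the product-name pattern from "MacBook5,2" to "MacBook5" covers every MacBook5,x variant with one table entry. A sketch of that matching behavior (assuming substring semantics, as the kernel's DMI matcher uses `strstr`):

```c
#include <string.h>

/* DMI_MATCH-style comparison: the pattern matches if it occurs
 * anywhere in the firmware-provided field. */
static int dmi_field_matches(const char *field, const char *pattern)
{
	return strstr(field, pattern) != NULL;
}
```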
+7 -7
arch/x86/kernel/setup_percpu.c
··· 165 165 166 166 if (!chosen) { 167 167 size_t vm_size = VMALLOC_END - VMALLOC_START; 168 - size_t tot_size = num_possible_cpus() * PMD_SIZE; 168 + size_t tot_size = nr_cpu_ids * PMD_SIZE; 169 169 170 170 /* on non-NUMA, embedding is better */ 171 171 if (!pcpu_need_numa()) ··· 199 199 dyn_size = pcpul_size - static_size - PERCPU_FIRST_CHUNK_RESERVE; 200 200 201 201 /* allocate pointer array and alloc large pages */ 202 - map_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpul_map[0])); 202 + map_size = PFN_ALIGN(nr_cpu_ids * sizeof(pcpul_map[0])); 203 203 pcpul_map = alloc_bootmem(map_size); 204 204 205 205 for_each_possible_cpu(cpu) { ··· 228 228 229 229 /* allocate address and map */ 230 230 pcpul_vm.flags = VM_ALLOC; 231 - pcpul_vm.size = num_possible_cpus() * PMD_SIZE; 231 + pcpul_vm.size = nr_cpu_ids * PMD_SIZE; 232 232 vm_area_register_early(&pcpul_vm, PMD_SIZE); 233 233 234 234 for_each_possible_cpu(cpu) { ··· 250 250 PMD_SIZE, pcpul_vm.addr, NULL); 251 251 252 252 /* sort pcpul_map array for pcpu_lpage_remapped() */ 253 - for (i = 0; i < num_possible_cpus() - 1; i++) 254 - for (j = i + 1; j < num_possible_cpus(); j++) 253 + for (i = 0; i < nr_cpu_ids - 1; i++) 254 + for (j = i + 1; j < nr_cpu_ids; j++) 255 255 if (pcpul_map[i].ptr > pcpul_map[j].ptr) { 256 256 struct pcpul_ent tmp = pcpul_map[i]; 257 257 pcpul_map[i] = pcpul_map[j]; ··· 288 288 { 289 289 void *pmd_addr = (void *)((unsigned long)kaddr & PMD_MASK); 290 290 unsigned long offset = (unsigned long)kaddr & ~PMD_MASK; 291 - int left = 0, right = num_possible_cpus() - 1; 291 + int left = 0, right = nr_cpu_ids - 1; 292 292 int pos; 293 293 294 294 /* pcpul in use at all? 
*/ ··· 377 377 pcpu4k_nr_static_pages = PFN_UP(static_size); 378 378 379 379 /* unaligned allocations can't be freed, round up to page size */ 380 - pages_size = PFN_ALIGN(pcpu4k_nr_static_pages * num_possible_cpus() 380 + pages_size = PFN_ALIGN(pcpu4k_nr_static_pages * nr_cpu_ids 381 381 * sizeof(pcpu4k_pages[0])); 382 382 pcpu4k_pages = alloc_bootmem(pages_size); 383 383
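The setup_percpu.c conversions above matter because `nr_cpu_ids` is the highest possible CPU id plus one, while `num_possible_cpus()` is the population count of the possible mask; with sparse CPU numbering the former can exceed the latter, and any array indexed by CPU id must be sized by it. A toy illustration with a 32-bit "possible" mask:

```c
/* Count of possible CPUs: population count of the mask. */
static int num_possible_cpus_toy(unsigned mask)
{
	int n = 0;
	for (; mask; mask &= mask - 1)	/* clear lowest set bit */
		n++;
	return n;
}

/* nr_cpu_ids equivalent: highest possible CPU id + 1.  Arrays
 * indexed by CPU id need this many slots, not the count above. */
static int nr_cpu_ids_toy(unsigned mask)
{
	int top = 0;
	for (int i = 0; i < 32; i++)
		if (mask & (1u << i))
			top = i + 1;
	return top;
}
```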
+1
arch/x86/kernel/tlb_uv.c
··· 744 744 * note that base_dest_nodeid is actually a nasid. 745 745 */ 746 746 ad2->header.base_dest_nodeid = uv_partition_base_pnode << 1; 747 + ad2->header.dest_subnodeid = 0x10; /* the LB */ 747 748 ad2->header.command = UV_NET_ENDPOINT_INTD; 748 749 ad2->header.int_both = 1; 749 750 /*
+54 -86
arch/x86/kernel/vmlinux.lds.S
··· 46 46 data PT_LOAD FLAGS(7); /* RWE */ 47 47 #ifdef CONFIG_X86_64 48 48 user PT_LOAD FLAGS(7); /* RWE */ 49 - data.init PT_LOAD FLAGS(7); /* RWE */ 50 49 #ifdef CONFIG_SMP 51 50 percpu PT_LOAD FLAGS(7); /* RWE */ 52 51 #endif 53 - data.init2 PT_LOAD FLAGS(7); /* RWE */ 52 + init PT_LOAD FLAGS(7); /* RWE */ 54 53 #endif 55 54 note PT_NOTE FLAGS(0); /* ___ */ 56 55 } ··· 102 103 __stop___ex_table = .; 103 104 } :text = 0x9090 104 105 105 - RODATA 106 + RO_DATA(PAGE_SIZE) 106 107 107 108 /* Data */ 108 - . = ALIGN(PAGE_SIZE); 109 109 .data : AT(ADDR(.data) - LOAD_OFFSET) { 110 110 /* Start of data section */ 111 111 _sdata = .; 112 + 113 + /* init_task */ 114 + INIT_TASK_DATA(THREAD_SIZE) 115 + 116 + #ifdef CONFIG_X86_32 117 + /* 32 bit has nosave before _edata */ 118 + NOSAVE_DATA 119 + #endif 120 + 121 + PAGE_ALIGNED_DATA(PAGE_SIZE) 122 + *(.data.idt) 123 + 124 + CACHELINE_ALIGNED_DATA(CONFIG_X86_L1_CACHE_BYTES) 125 + 112 126 DATA_DATA 113 127 CONSTRUCTORS 114 - } :data 115 128 116 - #ifdef CONFIG_X86_32 117 - /* 32 bit has nosave before _edata */ 118 - . = ALIGN(PAGE_SIZE); 119 - .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { 120 - __nosave_begin = .; 121 - *(.data.nosave) 122 - . = ALIGN(PAGE_SIZE); 123 - __nosave_end = .; 124 - } 125 - #endif 126 - 127 - . = ALIGN(PAGE_SIZE); 128 - .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) { 129 - *(.data.page_aligned) 130 - *(.data.idt) 131 - } 132 - 133 - #ifdef CONFIG_X86_32 134 - . = ALIGN(32); 135 - #else 136 - . = ALIGN(PAGE_SIZE); 137 - . = ALIGN(CONFIG_X86_L1_CACHE_BYTES); 138 - #endif 139 - .data.cacheline_aligned : 140 - AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) { 141 - *(.data.cacheline_aligned) 142 - } 143 - 144 - /* rarely changed data like cpu maps */ 145 - #ifdef CONFIG_X86_32 146 - . = ALIGN(32); 147 - #else 148 - . 
= ALIGN(CONFIG_X86_INTERNODE_CACHE_BYTES); 149 - #endif 150 - .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) { 151 - *(.data.read_mostly) 129 + /* rarely changed data like cpu maps */ 130 + READ_MOSTLY_DATA(CONFIG_X86_INTERNODE_CACHE_BYTES) 152 131 153 132 /* End of data section */ 154 133 _edata = .; 155 - } 134 + } :data 156 135 157 136 #ifdef CONFIG_X86_64 158 137 159 138 #define VSYSCALL_ADDR (-10*1024*1024) 160 - #define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + \ 161 - SIZEOF(.data.read_mostly) + 4095) & ~(4095)) 162 - #define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + \ 163 - SIZEOF(.data.read_mostly) + 4095) & ~(4095)) 139 + #define VSYSCALL_PHYS_ADDR ((LOADADDR(.data) + SIZEOF(.data) + \ 140 + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)) 141 + #define VSYSCALL_VIRT_ADDR ((ADDR(.data) + SIZEOF(.data) + \ 142 + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)) 164 143 165 144 #define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR) 166 145 #define VLOAD(x) (ADDR(x) - VLOAD_OFFSET) ··· 204 227 205 228 #endif /* CONFIG_X86_64 */ 206 229 207 - /* init_task */ 208 - . = ALIGN(THREAD_SIZE); 209 - .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) { 210 - *(.data.init_task) 211 - } 212 - #ifdef CONFIG_X86_64 213 - :data.init 214 - #endif 215 - 216 - /* 217 - * smp_locks might be freed after init 218 - * start/end must be page aligned 219 - */ 220 - . = ALIGN(PAGE_SIZE); 221 - .smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) { 222 - __smp_locks = .; 223 - *(.smp_locks) 224 - __smp_locks_end = .; 225 - . = ALIGN(PAGE_SIZE); 226 - } 227 - 228 230 /* Init code and data - will be freed after init */ 229 231 . = ALIGN(PAGE_SIZE); 230 - .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) { 232 + .init.begin : AT(ADDR(.init.begin) - LOAD_OFFSET) { 231 233 __init_begin = .; /* paired with __init_end */ 234 + } 235 + 236 + #if defined(CONFIG_X86_64) && defined(CONFIG_SMP) 237 + /* 238 + * percpu offsets are zero-based on SMP. 
PERCPU_VADDR() changes the 239 + * output PHDR, so the next output section - .init.text - should 240 + * start another segment - init. 241 + */ 242 + PERCPU_VADDR(0, :percpu) 243 + #endif 244 + 245 + .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) { 232 246 _sinittext = .; 233 247 INIT_TEXT 234 248 _einittext = .; 235 249 } 250 + #ifdef CONFIG_X86_64 251 + :init 252 + #endif 236 253 237 254 .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { 238 255 INIT_DATA ··· 297 326 } 298 327 #endif 299 328 300 - #if defined(CONFIG_X86_64) && defined(CONFIG_SMP) 301 - /* 302 - * percpu offsets are zero-based on SMP. PERCPU_VADDR() changes the 303 - * output PHDR, so the next output section - __data_nosave - should 304 - * start another section data.init2. Also, pda should be at the head of 305 - * percpu area. Preallocate it and define the percpu offset symbol 306 - * so that it can be accessed as a percpu variable. 307 - */ 308 - . = ALIGN(PAGE_SIZE); 309 - PERCPU_VADDR(0, :percpu) 310 - #else 329 + #if !defined(CONFIG_X86_64) || !defined(CONFIG_SMP) 311 330 PERCPU(PAGE_SIZE) 312 331 #endif 313 332 ··· 308 347 __init_end = .; 309 348 } 310 349 350 + /* 351 + * smp_locks might be freed after init 352 + * start/end must be page aligned 353 + */ 354 + . = ALIGN(PAGE_SIZE); 355 + .smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) { 356 + __smp_locks = .; 357 + *(.smp_locks) 358 + __smp_locks_end = .; 359 + . = ALIGN(PAGE_SIZE); 360 + } 361 + 311 362 #ifdef CONFIG_X86_64 312 363 .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { 313 - . = ALIGN(PAGE_SIZE); 314 - __nosave_begin = .; 315 - *(.data.nosave) 316 - . = ALIGN(PAGE_SIZE); 317 - __nosave_end = .; 318 - } :data.init2 319 - /* use another section data.init2, see PERCPU_VADDR() above */ 364 + NOSAVE_DATA 365 + } 320 366 #endif 321 367 322 368 /* BSS */
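The reworked `VSYSCALL_PHYS_ADDR` / `VSYSCALL_VIRT_ADDR` macros round the end of `.data` up to the next page with the standard power-of-two align-up idiom, `(x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)`, replacing the hard-coded 4095 constants. The same computation in plain C:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Round addr up to the next PAGE_SIZE boundary.  Correct only
 * because PAGE_SIZE is a power of two, so the mask clears exactly
 * the low log2(PAGE_SIZE) bits. */
static uint64_t page_align_up(uint64_t addr)
{
	return (addr + PAGE_SIZE - 1) & ~(uint64_t)(PAGE_SIZE - 1);
}
```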
+1 -1
arch/x86/mm/init_64.c
··· 796 796 return ret; 797 797 798 798 #else 799 - reserve_bootmem(phys, len, BOOTMEM_DEFAULT); 799 + reserve_bootmem(phys, len, flags); 800 800 #endif 801 801 802 802 if (phys+len <= MAX_DMA_PFN*PAGE_SIZE) {
+2 -1
arch/x86/mm/pat.c
··· 623 623 return ret; 624 624 625 625 if (flags != want_flags) { 626 - if (strict_prot || !is_new_memtype_allowed(want_flags, flags)) { 626 + if (strict_prot || 627 + !is_new_memtype_allowed(paddr, size, want_flags, flags)) { 627 628 free_memtype(paddr, paddr + size); 628 629 printk(KERN_ERR "%s:%d map pfn expected mapping type %s" 629 630 " for %Lx-%Lx, got %s\n",
+10 -11
arch/x86/mm/tlb.c
··· 183 183 184 184 f->flush_mm = mm; 185 185 f->flush_va = va; 186 - cpumask_andnot(to_cpumask(f->flush_cpumask), 187 - cpumask, cpumask_of(smp_processor_id())); 186 + if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) { 187 + /* 188 + * We have to send the IPI only to 189 + * CPUs affected. 190 + */ 191 + apic->send_IPI_mask(to_cpumask(f->flush_cpumask), 192 + INVALIDATE_TLB_VECTOR_START + sender); 188 193 189 - /* 190 - * We have to send the IPI only to 191 - * CPUs affected. 192 - */ 193 - apic->send_IPI_mask(to_cpumask(f->flush_cpumask), 194 - INVALIDATE_TLB_VECTOR_START + sender); 195 - 196 - while (!cpumask_empty(to_cpumask(f->flush_cpumask))) 197 - cpu_relax(); 194 + while (!cpumask_empty(to_cpumask(f->flush_cpumask))) 195 + cpu_relax(); 196 + } 198 197 199 198 f->flush_mm = NULL; 200 199 f->flush_va = 0;
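The tlb.c hunk relies on `cpumask_andnot()` returning whether the destination mask is non-empty, so the IPI send and the completion spin are skipped entirely when the current CPU was the only one affected. The control flow, sketched with a toy 64-bit mask:

```c
#include <stdint.h>

/* Compute the set of CPUs that still need the flush (requested minus
 * ourselves) and "send the IPI" only if that set is non-empty --
 * mirroring the cpumask_andnot() return-value check above. */
static int maybe_send_ipi(uint64_t requested, uint64_t self)
{
	uint64_t targets = requested & ~self;

	if (!targets)
		return 0;	/* flush was purely local; no IPI needed */

	/* apic->send_IPI_mask(targets, ...) would go here */
	return 1;
}
```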
+4
arch/x86/xen/Makefile
··· 5 5 CFLAGS_REMOVE_irq.o = -pg 6 6 endif 7 7 8 + # Make sure early boot has no stackprotector 9 + nostackp := $(call cc-option, -fno-stack-protector) 10 + CFLAGS_enlighten.o := $(nostackp) 11 + 8 12 obj-y := enlighten.o setup.o multicalls.o mmu.o irq.o \ 9 13 time.o xen-asm.o xen-asm_$(BITS).o \ 10 14 grant-table.o suspend.o
+12 -12
arch/x86/xen/enlighten.c
··· 215 215 (1 << X86_FEATURE_ACPI)); /* disable ACPI */ 216 216 217 217 ax = 1; 218 + cx = 0; 218 219 xen_cpuid(&ax, &bx, &cx, &dx); 219 220 220 221 /* cpuid claims we support xsave; try enabling it to see what happens */ ··· 975 974 976 975 xen_domain_type = XEN_PV_DOMAIN; 977 976 978 - BUG_ON(memcmp(xen_start_info->magic, "xen-3", 5) != 0); 979 - 980 - xen_setup_features(); 981 - 982 977 /* Install Xen paravirt ops */ 983 978 pv_info = xen_info; 984 979 pv_init_ops = xen_init_ops; ··· 983 986 pv_apic_ops = xen_apic_ops; 984 987 pv_mmu_ops = xen_mmu_ops; 985 988 986 - xen_init_irq_ops(); 989 + #ifdef CONFIG_X86_64 990 + /* 991 + * Setup percpu state. We only need to do this for 64-bit 992 + * because 32-bit already has %fs set properly. 993 + */ 994 + load_percpu_segment(0); 995 + #endif 987 996 997 + xen_init_irq_ops(); 988 998 xen_init_cpuid_mask(); 989 999 990 1000 #ifdef CONFIG_X86_LOCAL_APIC ··· 1001 997 set_xen_basic_apic_ops(); 1002 998 #endif 1003 999 1000 + xen_setup_features(); 1001 + 1004 1002 if (xen_feature(XENFEAT_mmu_pt_update_preserve_ad)) { 1005 1003 pv_mmu_ops.ptep_modify_prot_start = xen_ptep_modify_prot_start; 1006 1004 pv_mmu_ops.ptep_modify_prot_commit = xen_ptep_modify_prot_commit; ··· 1010 1004 1011 1005 machine_ops = xen_machine_ops; 1012 1006 1013 - #ifdef CONFIG_X86_64 1014 - /* 1015 - * Setup percpu state. We only need to do this for 64-bit 1016 - * because 32-bit already has %fs set properly. 1017 - */ 1018 - load_percpu_segment(0); 1019 - #endif 1020 1007 /* 1021 1008 * The only reliable way to retain the initial address of the 1022 1009 * percpu gdt_page is to remember it here, so we can go and ··· 1060 1061 /* set up basic CPUID stuff */ 1061 1062 cpu_detect(&new_cpu_data); 1062 1063 new_cpu_data.hard_math = 1; 1064 + new_cpu_data.wp_works_ok = 1; 1063 1065 new_cpu_data.x86_capability[0] = cpuid_edx(1); 1064 1066 #endif 1065 1067
+1 -1
block/blk-sysfs.c
··· 133 133 return -EINVAL; 134 134 135 135 spin_lock_irq(q->queue_lock); 136 - blk_queue_max_sectors(q, max_sectors_kb << 1); 136 + q->limits.max_sectors = max_sectors_kb << 1; 137 137 spin_unlock_irq(q->queue_lock); 138 138 139 139 return ret;
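The blk-sysfs.c change keeps the `max_sectors_kb << 1` conversion: queue limits are stored in 512-byte sectors while sysfs exposes KiB, and 1 KiB is exactly two sectors. The unit conversion, spelled out:

```c
/* Block-queue limits are kept in 512-byte sectors; the sysfs
 * interface speaks KiB.  1 KiB = 2 sectors, hence shift by one. */
static unsigned int kb_to_sectors(unsigned int kb)
{
	return kb << 1;
}

static unsigned int sectors_to_kb(unsigned int sectors)
{
	return sectors >> 1;
}
```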
+9 -2
crypto/algapi.c
··· 692 692 } 693 693 EXPORT_SYMBOL_GPL(crypto_enqueue_request); 694 694 695 - struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue) 695 + void *__crypto_dequeue_request(struct crypto_queue *queue, unsigned int offset) 696 696 { 697 697 struct list_head *request; 698 698 ··· 707 707 request = queue->list.next; 708 708 list_del(request); 709 709 710 - return list_entry(request, struct crypto_async_request, list); 710 + return (char *)list_entry(request, struct crypto_async_request, list) - 711 + offset; 712 + } 713 + EXPORT_SYMBOL_GPL(__crypto_dequeue_request); 714 + 715 + struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue) 716 + { 717 + return __crypto_dequeue_request(queue, 0); 711 718 } 712 719 EXPORT_SYMBOL_GPL(crypto_dequeue_request); 713 720
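The algapi.c hunk generalizes dequeue into `__crypto_dequeue_request(queue, offset)`: pop the embedded `crypto_async_request`, then step back `offset` bytes to recover the containing request object, a runtime-offset variant of `container_of`. A self-contained sketch of the trick (type names here are illustrative):

```c
#include <stddef.h>

struct async_request { struct async_request *next; };

/* A larger request type embedding the common struct at a nonzero offset. */
struct big_request {
	int flags;
	struct async_request base;
};

/* Pop the head, then subtract 'offset' to get back the container --
 * the same arithmetic as __crypto_dequeue_request(queue, offset). */
static void *dequeue_with_offset(struct async_request **head, size_t offset)
{
	struct async_request *req = *head;

	if (!req)
		return NULL;
	*head = req->next;
	return (char *)req - offset;
}

/* Round-trip check: enqueue the embedded member, recover the container. */
static int demo_roundtrip(void)
{
	struct big_request r = { .flags = 42 };
	struct async_request *head = &r.base;
	struct big_request *back =
		dequeue_with_offset(&head, offsetof(struct big_request, base));

	return back == &r && back->flags == 42 && head == NULL;
}
```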
+12
drivers/acpi/acpica/exstorob.c
··· 70 70 71 71 ACPI_FUNCTION_TRACE_PTR(ex_store_buffer_to_buffer, source_desc); 72 72 73 + /* If Source and Target are the same, just return */ 74 + 75 + if (source_desc == target_desc) { 76 + return_ACPI_STATUS(AE_OK); 77 + } 78 + 73 79 /* We know that source_desc is a buffer by now */ 74 80 75 81 buffer = ACPI_CAST_PTR(u8, source_desc->buffer.pointer); ··· 166 160 u8 *buffer; 167 161 168 162 ACPI_FUNCTION_TRACE_PTR(ex_store_string_to_string, source_desc); 163 + 164 + /* If Source and Target are the same, just return */ 165 + 166 + if (source_desc == target_desc) { 167 + return_ACPI_STATUS(AE_OK); 168 + } 169 169 170 170 /* We know that source_desc is a string by now */ 171 171
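The exstorob.c hunks add an early return when source and target are the same object, which both skips pointless work and sidesteps copying a buffer onto itself. The guard pattern in miniature:

```c
#include <string.h>

struct buf_obj { unsigned char data[16]; unsigned len; };

/* Store one buffer object into another, returning immediately on a
 * self-store -- the same short-circuit the ACPICA hunk adds. */
static int store_buffer(struct buf_obj *dst, const struct buf_obj *src)
{
	if (dst == src)
		return 0;	/* self-store: nothing to do */
	memcpy(dst->data, src->data, src->len);
	dst->len = src->len;
	return 1;
}

static int demo_self_store(void)
{
	struct buf_obj a = { { 'x' }, 1 };

	/* Self-store must be a no-op that leaves the data intact. */
	return store_buffer(&a, &a) == 0 && a.data[0] == 'x';
}
```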
+6
drivers/acpi/processor_core.c
··· 1151 1151 { 1152 1152 int result = 0; 1153 1153 1154 + if (acpi_disabled) 1155 + return 0; 1156 + 1154 1157 memset(&errata, 0, sizeof(errata)); 1155 1158 1156 1159 #ifdef CONFIG_SMP ··· 1200 1197 1201 1198 static void __exit acpi_processor_exit(void) 1202 1199 { 1200 + if (acpi_disabled) 1201 + return; 1202 + 1203 1203 acpi_processor_ppc_exit(); 1204 1204 1205 1205 acpi_thermal_cpufreq_exit();
+4 -2
drivers/acpi/processor_idle.c
··· 162 162 pr->power.timer_broadcast_on_state = state; 163 163 } 164 164 165 - static void lapic_timer_propagate_broadcast(struct acpi_processor *pr) 165 + static void lapic_timer_propagate_broadcast(void *arg) 166 166 { 167 + struct acpi_processor *pr = (struct acpi_processor *) arg; 167 168 unsigned long reason; 168 169 169 170 reason = pr->power.timer_broadcast_on_state < INT_MAX ? ··· 636 635 working++; 637 636 } 638 637 639 - lapic_timer_propagate_broadcast(pr); 638 + smp_call_function_single(pr->id, lapic_timer_propagate_broadcast, 639 + pr, 1); 640 640 641 641 return (working); 642 642 }
+3 -3
drivers/acpi/processor_thermal.c
··· 66 66 if (pr->limit.thermal.tx > tx) 67 67 tx = pr->limit.thermal.tx; 68 68 69 - result = acpi_processor_set_throttling(pr, tx); 69 + result = acpi_processor_set_throttling(pr, tx, false); 70 70 if (result) 71 71 goto end; 72 72 } ··· 421 421 422 422 if (state <= max_pstate) { 423 423 if (pr->flags.throttling && pr->throttling.state) 424 - result = acpi_processor_set_throttling(pr, 0); 424 + result = acpi_processor_set_throttling(pr, 0, false); 425 425 cpufreq_set_cur_state(pr->id, state); 426 426 } else { 427 427 cpufreq_set_cur_state(pr->id, max_pstate); 428 428 result = acpi_processor_set_throttling(pr, 429 - state - max_pstate); 429 + state - max_pstate, false); 430 430 } 431 431 return result; 432 432 }
+16 -14
drivers/acpi/processor_throttling.c
··· 62 62 #define THROTTLING_POSTCHANGE (2) 63 63 64 64 static int acpi_processor_get_throttling(struct acpi_processor *pr); 65 - int acpi_processor_set_throttling(struct acpi_processor *pr, int state); 65 + int acpi_processor_set_throttling(struct acpi_processor *pr, 66 + int state, bool force); 66 67 67 68 static int acpi_processor_update_tsd_coord(void) 68 69 { ··· 362 361 */ 363 362 target_state = throttling_limit; 364 363 } 365 - return acpi_processor_set_throttling(pr, target_state); 364 + return acpi_processor_set_throttling(pr, target_state, false); 366 365 } 367 366 368 367 /* ··· 840 839 if (ret >= 0) { 841 840 state = acpi_get_throttling_state(pr, value); 842 841 if (state == -1) { 843 - ACPI_WARNING((AE_INFO, 844 - "Invalid throttling state, reset")); 842 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, 843 + "Invalid throttling state, reset\n")); 845 844 state = 0; 846 - ret = acpi_processor_set_throttling(pr, state); 845 + ret = acpi_processor_set_throttling(pr, state, true); 847 846 if (ret) 848 847 return ret; 849 848 } ··· 916 915 } 917 916 918 917 static int acpi_processor_set_throttling_fadt(struct acpi_processor *pr, 919 - int state) 918 + int state, bool force) 920 919 { 921 920 u32 value = 0; 922 921 u32 duty_mask = 0; ··· 931 930 if (!pr->flags.throttling) 932 931 return -ENODEV; 933 932 934 - if (state == pr->throttling.state) 933 + if (!force && (state == pr->throttling.state)) 935 934 return 0; 936 935 937 936 if (state < pr->throttling_platform_limit) ··· 989 988 } 990 989 991 990 static int acpi_processor_set_throttling_ptc(struct acpi_processor *pr, 992 - int state) 991 + int state, bool force) 993 992 { 994 993 int ret; 995 994 acpi_integer value; ··· 1003 1002 if (!pr->flags.throttling) 1004 1003 return -ENODEV; 1005 1004 1006 - if (state == pr->throttling.state) 1005 + if (!force && (state == pr->throttling.state)) 1007 1006 return 0; 1008 1007 1009 1008 if (state < pr->throttling_platform_limit) ··· 1019 1018 return 0; 1020 1019 } 1021 1020 1022 - 
int acpi_processor_set_throttling(struct acpi_processor *pr, int state) 1021 + int acpi_processor_set_throttling(struct acpi_processor *pr, 1022 + int state, bool force) 1023 1023 { 1024 1024 cpumask_var_t saved_mask; 1025 1025 int ret = 0; ··· 1072 1070 /* FIXME: use work_on_cpu() */ 1073 1071 set_cpus_allowed_ptr(current, cpumask_of(pr->id)); 1074 1072 ret = p_throttling->acpi_processor_set_throttling(pr, 1075 - t_state.target_state); 1073 + t_state.target_state, force); 1076 1074 } else { 1077 1075 /* 1078 1076 * When the T-state coordination is SW_ALL or HW_ALL, ··· 1105 1103 set_cpus_allowed_ptr(current, cpumask_of(i)); 1106 1104 ret = match_pr->throttling. 1107 1105 acpi_processor_set_throttling( 1108 - match_pr, t_state.target_state); 1106 + match_pr, t_state.target_state, force); 1109 1107 } 1110 1108 } 1111 1109 /* ··· 1203 1201 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 1204 1202 "Disabling throttling (was T%d)\n", 1205 1203 pr->throttling.state)); 1206 - result = acpi_processor_set_throttling(pr, 0); 1204 + result = acpi_processor_set_throttling(pr, 0, false); 1207 1205 if (result) 1208 1206 goto end; 1209 1207 } ··· 1309 1307 if (strcmp(tmpbuf, charp) != 0) 1310 1308 return -EINVAL; 1311 1309 1312 - result = acpi_processor_set_throttling(pr, state_val); 1310 + result = acpi_processor_set_throttling(pr, state_val, false); 1313 1311 if (result) 1314 1312 return result; 1315 1313
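The throttling patch adds a `force` flag because the setter normally short-circuits when the requested T-state equals the cached one; after detecting an invalid hardware state, the write must be pushed through even though the cached state already "matches". A small sketch of that semantics (field names are illustrative):

```c
struct throttling { int state; int writes; };

/* Normally a no-op when the state is unchanged; 'force' bypasses the
 * check so an out-of-sync hardware register gets rewritten anyway. */
static int set_throttling(struct throttling *t, int state, int force)
{
	if (!force && state == t->state)
		return 0;
	t->state = state;
	t->writes++;	/* stands in for the actual register write */
	return 0;
}

static int demo_force(void)
{
	struct throttling t = { 0, 0 };

	set_throttling(&t, 0, 0);	/* skipped: state unchanged */
	set_throttling(&t, 0, 1);	/* forced through */
	return t.writes;
}
```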
+5 -2
drivers/acpi/video.c
··· 2004 2004 status = acpi_remove_notify_handler(device->dev->handle, 2005 2005 ACPI_DEVICE_NOTIFY, 2006 2006 acpi_video_device_notify); 2007 - sysfs_remove_link(&device->backlight->dev.kobj, "device"); 2008 - backlight_device_unregister(device->backlight); 2007 + if (device->backlight) { 2008 + sysfs_remove_link(&device->backlight->dev.kobj, "device"); 2009 + backlight_device_unregister(device->backlight); 2010 + device->backlight = NULL; 2011 + } 2009 2012 if (device->cdev) { 2010 2013 sysfs_remove_link(&device->dev->dev.kobj, 2011 2014 "thermal_cooling");
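The video.c fix is the classic guarded-teardown pattern: only unregister the backlight when one was registered, and NULL the pointer afterwards so the path is safe even if it runs twice. Sketched in userspace terms (with `free()` standing in for the unregister call):

```c
#include <stdlib.h>

struct toy_device { void *backlight; };

/* Tear down the backlight if present; idempotent thanks to the
 * NULL check and the pointer reset. */
static int put_backlight(struct toy_device *d)
{
	if (!d->backlight)
		return 0;	/* never registered, or already torn down */
	free(d->backlight);	/* stands in for backlight_device_unregister() */
	d->backlight = NULL;
	return 1;
}

static int demo_teardown(void)
{
	struct toy_device d = { malloc(8) };
	int first = put_backlight(&d);
	int second = put_backlight(&d);	/* must be a harmless no-op */

	return first == 1 && second == 0 && d.backlight == NULL;
}
```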
+74 -5
drivers/ata/ahci.c
··· 219 219 AHCI_HFLAG_SECT255 = (1 << 8), /* max 255 sectors */ 220 220 AHCI_HFLAG_YES_NCQ = (1 << 9), /* force NCQ cap on */ 221 221 AHCI_HFLAG_NO_SUSPEND = (1 << 10), /* don't suspend */ 222 + AHCI_HFLAG_SRST_TOUT_IS_OFFLINE = (1 << 11), /* treat SRST timeout as 223 + link offline */ 222 224 223 225 /* ap->flags bits */ 224 226 ··· 1665 1663 int (*check_ready)(struct ata_link *link)) 1666 1664 { 1667 1665 struct ata_port *ap = link->ap; 1666 + struct ahci_host_priv *hpriv = ap->host->private_data; 1668 1667 const char *reason = NULL; 1669 1668 unsigned long now, msecs; 1670 1669 struct ata_taskfile tf; ··· 1704 1701 1705 1702 /* wait for link to become ready */ 1706 1703 rc = ata_wait_after_reset(link, deadline, check_ready); 1707 - /* link occupied, -ENODEV too is an error */ 1708 - if (rc) { 1704 + if (rc == -EBUSY && hpriv->flags & AHCI_HFLAG_SRST_TOUT_IS_OFFLINE) { 1705 + /* 1706 + * Workaround for cases where link online status can't 1707 + * be trusted. Treat device readiness timeout as link 1708 + * offline. 
1709 + */ 1710 + ata_link_printk(link, KERN_INFO, 1711 + "device not ready, treating as offline\n"); 1712 + *class = ATA_DEV_NONE; 1713 + } else if (rc) { 1714 + /* link occupied, -ENODEV too is an error */ 1709 1715 reason = "device not ready"; 1710 1716 goto fail; 1711 - } 1712 - *class = ahci_dev_classify(ap); 1717 + } else 1718 + *class = ahci_dev_classify(ap); 1713 1719 1714 1720 DPRINTK("EXIT, class=%u\n", *class); 1715 1721 return 0; ··· 1785 1773 irq_sts = readl(port_mmio + PORT_IRQ_STAT); 1786 1774 if (irq_sts & PORT_IRQ_BAD_PMP) { 1787 1775 ata_link_printk(link, KERN_WARNING, 1788 - "failed due to HW bug, retry pmp=0\n"); 1776 + "applying SB600 PMP SRST workaround " 1777 + "and retrying\n"); 1789 1778 rc = ahci_do_softreset(link, class, 0, deadline, 1790 1779 ahci_check_ready); 1791 1780 } ··· 2739 2726 return !ver || strcmp(ver, dmi->driver_data) < 0; 2740 2727 } 2741 2728 2729 + static bool ahci_broken_online(struct pci_dev *pdev) 2730 + { 2731 + #define ENCODE_BUSDEVFN(bus, slot, func) \ 2732 + (void *)(unsigned long)(((bus) << 8) | PCI_DEVFN((slot), (func))) 2733 + static const struct dmi_system_id sysids[] = { 2734 + /* 2735 + * There are several gigabyte boards which use 2736 + * SIMG5723s configured as hardware RAID. Certain 2737 + * 5723 firmware revisions shipped there keep the link 2738 + * online but fail to answer properly to SRST or 2739 + * IDENTIFY when no device is attached downstream 2740 + * causing libata to retry quite a few times leading 2741 + * to excessive detection delay. 2742 + * 2743 + * As these firmwares respond to the second reset try 2744 + * with invalid device signature, considering unknown 2745 + * sig as offline works around the problem acceptably. 
2746 + */ 2747 + { 2748 + .ident = "EP45-DQ6", 2749 + .matches = { 2750 + DMI_MATCH(DMI_BOARD_VENDOR, 2751 + "Gigabyte Technology Co., Ltd."), 2752 + DMI_MATCH(DMI_BOARD_NAME, "EP45-DQ6"), 2753 + }, 2754 + .driver_data = ENCODE_BUSDEVFN(0x0a, 0x00, 0), 2755 + }, 2756 + { 2757 + .ident = "EP45-DS5", 2758 + .matches = { 2759 + DMI_MATCH(DMI_BOARD_VENDOR, 2760 + "Gigabyte Technology Co., Ltd."), 2761 + DMI_MATCH(DMI_BOARD_NAME, "EP45-DS5"), 2762 + }, 2763 + .driver_data = ENCODE_BUSDEVFN(0x03, 0x00, 0), 2764 + }, 2765 + { } /* terminate list */ 2766 + }; 2767 + #undef ENCODE_BUSDEVFN 2768 + const struct dmi_system_id *dmi = dmi_first_match(sysids); 2769 + unsigned int val; 2770 + 2771 + if (!dmi) 2772 + return false; 2773 + 2774 + val = (unsigned long)dmi->driver_data; 2775 + 2776 + return pdev->bus->number == (val >> 8) && pdev->devfn == (val & 0xff); 2777 + } 2778 + 2742 2779 static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 2743 2780 { 2744 2781 static int printed_version; ··· 2902 2839 hpriv->flags |= AHCI_HFLAG_NO_SUSPEND; 2903 2840 dev_printk(KERN_WARNING, &pdev->dev, 2904 2841 "BIOS update required for suspend/resume\n"); 2842 + } 2843 + 2844 + if (ahci_broken_online(pdev)) { 2845 + hpriv->flags |= AHCI_HFLAG_SRST_TOUT_IS_OFFLINE; 2846 + dev_info(&pdev->dev, 2847 + "online status unreliable, applying workaround\n"); 2905 2848 } 2906 2849 2907 2850 /* CAP.NP sometimes indicate the index of the last enabled
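The ahci.c workaround stashes a PCI location in the DMI table's `driver_data` by packing bus, slot, and function into one integer (`PCI_DEVFN` packs slot and function as `slot << 3 | func`); the decode side then compares against `pdev->bus->number` and `pdev->devfn`. The encode/decode arithmetic, extracted:

```c
/* Same packing as the kernel's PCI_DEVFN and the patch's
 * ENCODE_BUSDEVFN: bus in the high byte, devfn in the low byte. */
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define ENCODE_BUSDEVFN(bus, slot, func) \
	(((bus) << 8) | PCI_DEVFN((slot), (func)))

/* Decode side: does this packed value name the given bus/devfn? */
static int matches_location(unsigned val, unsigned bus, unsigned devfn)
{
	return bus == (val >> 8) && devfn == (val & 0xff);
}
```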
+13 -1
drivers/ata/ata_piix.c
··· 664 664 return ata_sff_prereset(link, deadline); 665 665 } 666 666 667 + static DEFINE_SPINLOCK(piix_lock); 668 + 667 669 /** 668 670 * piix_set_piomode - Initialize host controller PATA PIO timings 669 671 * @ap: Port whose timings we are configuring ··· 679 677 680 678 static void piix_set_piomode(struct ata_port *ap, struct ata_device *adev) 681 679 { 682 - unsigned int pio = adev->pio_mode - XFER_PIO_0; 683 680 struct pci_dev *dev = to_pci_dev(ap->host->dev); 681 + unsigned long flags; 682 + unsigned int pio = adev->pio_mode - XFER_PIO_0; 684 683 unsigned int is_slave = (adev->devno != 0); 685 684 unsigned int master_port= ap->port_no ? 0x42 : 0x40; 686 685 unsigned int slave_port = 0x44; ··· 710 707 /* Intel specifies that the PPE functionality is for disk only */ 711 708 if (adev->class == ATA_DEV_ATA) 712 709 control |= 4; /* PPE enable */ 710 + 711 + spin_lock_irqsave(&piix_lock, flags); 713 712 714 713 /* PIO configuration clears DTE unconditionally. It will be 715 714 * programmed in set_dmamode which is guaranteed to be called ··· 752 747 udma_enable &= ~(1 << (2 * ap->port_no + adev->devno)); 753 748 pci_write_config_byte(dev, 0x48, udma_enable); 754 749 } 750 + 751 + spin_unlock_irqrestore(&piix_lock, flags); 755 752 } 756 753 757 754 /** ··· 771 764 static void do_pata_set_dmamode(struct ata_port *ap, struct ata_device *adev, int isich) 772 765 { 773 766 struct pci_dev *dev = to_pci_dev(ap->host->dev); 767 + unsigned long flags; 774 768 u8 master_port = ap->port_no ? 
0x42 : 0x40; 775 769 u16 master_data; 776 770 u8 speed = adev->dma_mode; ··· 784 776 { 1, 0 }, 785 777 { 2, 1 }, 786 778 { 2, 3 }, }; 779 + 780 + spin_lock_irqsave(&piix_lock, flags); 787 781 788 782 pci_read_config_word(dev, master_port, &master_data); 789 783 if (ap->udma_mask) ··· 877 867 /* Don't scribble on 0x48 if the controller does not support UDMA */ 878 868 if (ap->udma_mask) 879 869 pci_write_config_byte(dev, 0x48, udma_enable); 870 + 871 + spin_unlock_irqrestore(&piix_lock, flags); 880 872 } 881 873 882 874 /**
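The ata_piix.c hunks introduce `piix_lock` because both channels' mode-setting paths read-modify-write the same shared PCI configuration registers; without serialization, concurrent updates can lose bits. The pattern, sketched in userspace with a mutex standing in for the spinlock (register and function names are illustrative):

```c
#include <pthread.h>

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned short cfg_reg;	/* stands in for a shared PCI config word */

/* Serialize the whole read-modify-write so two ports updating
 * different bit fields of the same register cannot interleave. */
static void set_timing_bits(unsigned short set_mask, unsigned short clear_mask)
{
	pthread_mutex_lock(&cfg_lock);
	unsigned short v = cfg_reg;	/* pci_read_config_word(...) */
	v &= ~clear_mask;
	v |= set_mask;
	cfg_reg = v;			/* pci_write_config_word(...) */
	pthread_mutex_unlock(&cfg_lock);
}

static unsigned short demo(void)
{
	cfg_reg = 0;
	set_timing_bits(0x00f0, 0);		/* program one field */
	set_timing_bits(0x0003, 0x00f0);	/* reprogram, clearing the old */
	return cfg_reg;
}
```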
+3
drivers/ata/libata-core.c
··· 4302 4302 { "WDC WD2500JD-00HBB0", "WD-WMAL71490727", ATA_HORKAGE_BROKEN_HPA }, 4303 4303 { "MAXTOR 6L080L4", "A93.0500", ATA_HORKAGE_BROKEN_HPA }, 4304 4304 4305 + /* this one allows HPA unlocking but fails IOs on the area */ 4306 + { "OCZ-VERTEX", "1.30", ATA_HORKAGE_BROKEN_HPA }, 4307 + 4305 4308 /* Devices which report 1 sector over size HPA */ 4306 4309 { "ST340823A", NULL, ATA_HORKAGE_HPA_SIZE, }, 4307 4310 { "ST320413A", NULL, ATA_HORKAGE_HPA_SIZE, },
+4 -13
drivers/ata/pata_at91.c
··· 250 250 ata_port_desc(ap, "no IRQ, using PIO polling"); 251 251 } 252 252 253 - info = kzalloc(sizeof(*info), GFP_KERNEL); 253 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 254 254 255 255 if (!info) { 256 256 dev_err(dev, "failed to allocate memory for private data\n"); ··· 275 275 if (!info->ide_addr) { 276 276 dev_err(dev, "failed to map IO base\n"); 277 277 ret = -ENOMEM; 278 - goto err_ide_ioremap; 278 + goto err_put; 279 279 } 280 280 281 281 info->alt_addr = devm_ioremap(dev, ··· 284 284 if (!info->alt_addr) { 285 285 dev_err(dev, "failed to map CTL base\n"); 286 286 ret = -ENOMEM; 287 - goto err_alt_ioremap; 287 + goto err_put; 288 288 } 289 289 290 290 ap->ioaddr.cmd_addr = info->ide_addr; ··· 303 303 irq ? ata_sff_interrupt : NULL, 304 304 irq_flags, &pata_at91_sht); 305 305 306 - err_alt_ioremap: 307 - devm_iounmap(dev, info->ide_addr); 308 - 309 - err_ide_ioremap: 306 + err_put: 310 307 clk_put(info->mck); 311 - kfree(info); 312 - 313 308 return ret; 314 309 } 315 310 ··· 312 317 { 313 318 struct ata_host *host = dev_get_drvdata(&pdev->dev); 314 319 struct at91_ide_info *info; 315 - struct device *dev = &pdev->dev; 316 320 317 321 if (!host) 318 322 return 0; ··· 322 328 if (!info) 323 329 return 0; 324 330 325 - devm_iounmap(dev, info->ide_addr); 326 - devm_iounmap(dev, info->alt_addr); 327 331 clk_put(info->mck); 328 332 329 - kfree(info); 330 333 return 0; 331 334 } 332 335
+10 -9
drivers/ata/pata_atiixp.c
··· 1 1 /* 2 2 * pata_atiixp.c - ATI PATA for new ATA layer 3 3 * (C) 2005 Red Hat Inc 4 + * (C) 2009 Bartlomiej Zolnierkiewicz 4 5 * 5 6 * Based on 6 7 * ··· 62 61 63 62 struct pci_dev *pdev = to_pci_dev(ap->host->dev); 64 63 int dn = 2 * ap->port_no + adev->devno; 65 - 66 - /* Check this is correct - the order is odd in both drivers */ 67 64 int timing_shift = (16 * ap->port_no) + 8 * (adev->devno ^ 1); 68 - u16 pio_mode_data, pio_timing_data; 65 + u32 pio_timing_data; 66 + u16 pio_mode_data; 69 67 70 68 pci_read_config_word(pdev, ATIIXP_IDE_PIO_MODE, &pio_mode_data); 71 69 pio_mode_data &= ~(0x7 << (4 * dn)); 72 70 pio_mode_data |= pio << (4 * dn); 73 71 pci_write_config_word(pdev, ATIIXP_IDE_PIO_MODE, pio_mode_data); 74 72 75 - pci_read_config_word(pdev, ATIIXP_IDE_PIO_TIMING, &pio_timing_data); 73 + pci_read_config_dword(pdev, ATIIXP_IDE_PIO_TIMING, &pio_timing_data); 76 74 pio_timing_data &= ~(0xFF << timing_shift); 77 75 pio_timing_data |= (pio_timings[pio] << timing_shift); 78 - pci_write_config_word(pdev, ATIIXP_IDE_PIO_TIMING, pio_timing_data); 76 + pci_write_config_dword(pdev, ATIIXP_IDE_PIO_TIMING, pio_timing_data); 79 77 } 80 78 81 79 /** ··· 119 119 udma_mode_data |= dma << (4 * dn); 120 120 pci_write_config_word(pdev, ATIIXP_IDE_UDMA_MODE, udma_mode_data); 121 121 } else { 122 - u16 mwdma_timing_data; 123 - /* Check this is correct - the order is odd in both drivers */ 124 122 int timing_shift = (16 * ap->port_no) + 8 * (adev->devno ^ 1); 123 + u32 mwdma_timing_data; 125 124 126 125 dma -= XFER_MW_DMA_0; 127 126 128 - pci_read_config_word(pdev, ATIIXP_IDE_MWDMA_TIMING, &mwdma_timing_data); 127 + pci_read_config_dword(pdev, ATIIXP_IDE_MWDMA_TIMING, 128 + &mwdma_timing_data); 129 129 mwdma_timing_data &= ~(0xFF << timing_shift); 130 130 mwdma_timing_data |= (mwdma_timings[dma] << timing_shift); 131 - pci_write_config_word(pdev, ATIIXP_IDE_MWDMA_TIMING, mwdma_timing_data); 131 + pci_write_config_dword(pdev, ATIIXP_IDE_MWDMA_TIMING, 132 + 
mwdma_timing_data); 132 133 } 133 134 /* 134 135 * We must now look at the PIO mode situation. We may need to
+8
drivers/ata/sata_nv.c
··· 602 602 603 603 static int adma_enabled; 604 604 static int swncq_enabled = 1; 605 + static int msi_enabled; 605 606 606 607 static void nv_adma_register_mode(struct ata_port *ap) 607 608 { ··· 2460 2459 } else if (type == SWNCQ) 2461 2460 nv_swncq_host_init(host); 2462 2461 2462 + if (msi_enabled) { 2463 + dev_printk(KERN_NOTICE, &pdev->dev, "Using MSI\n"); 2464 + pci_enable_msi(pdev); 2465 + } 2466 + 2463 2467 pci_set_master(pdev); 2464 2468 return ata_host_activate(host, pdev->irq, ipriv->irq_handler, 2465 2469 IRQF_SHARED, ipriv->sht); ··· 2564 2558 MODULE_PARM_DESC(adma, "Enable use of ADMA (Default: false)"); 2565 2559 module_param_named(swncq, swncq_enabled, bool, 0444); 2566 2560 MODULE_PARM_DESC(swncq, "Enable use of SWNCQ (Default: true)"); 2561 + module_param_named(msi, msi_enabled, bool, 0444); 2562 + MODULE_PARM_DESC(msi, "Enable use of MSI (Default: false)"); 2567 2563
-3
drivers/base/platform.c
··· 483 483 drv->driver.remove = platform_drv_remove; 484 484 if (drv->shutdown) 485 485 drv->driver.shutdown = platform_drv_shutdown; 486 - if (drv->suspend || drv->resume) 487 - pr_warning("Platform driver '%s' needs updating - please use " 488 - "dev_pm_ops\n", drv->driver.name); 489 486 490 487 return driver_register(&drv->driver); 491 488 }
+1 -2
drivers/char/n_tty.c
··· 300 300 if (space < 2) 301 301 return -1; 302 302 tty->canon_column = tty->column = 0; 303 - tty_put_char(tty, '\r'); 304 - tty_put_char(tty, c); 303 + tty->ops->write(tty, "\r\n", 2); 305 304 return 2; 306 305 } 307 306 tty->canon_column = tty->column;
+1 -9
drivers/char/pty.c
··· 109 109 * the other side of the pty/tty pair. 110 110 */ 111 111 112 - static int pty_write(struct tty_struct *tty, const unsigned char *buf, 113 - int count) 112 + static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c) 114 113 { 115 114 struct tty_struct *to = tty->link; 116 - int c; 117 115 118 116 if (tty->stopped) 119 117 return 0; 120 118 121 - /* This isn't locked but our 8K is quite sloppy so no 122 - big deal */ 123 - 124 - c = pty_space(to); 125 - if (c > count) 126 - c = count; 127 119 if (c > 0) { 128 120 /* Stuff the data into the input queue of the other end */ 129 121 c = tty_insert_flip_string(to, buf, c);
+7 -3
drivers/char/tty_ldisc.c
··· 508 508 * be obtained while the delayed work queue halt ensures that no more 509 509 * data is fed to the ldisc. 510 510 * 511 - * In order to wait for any existing references to complete see 512 - * tty_ldisc_wait_idle. 511 + * You need to do a 'flush_scheduled_work()' (outside the ldisc_mutex) 512 + * in order to make sure any currently executing ldisc work is also 513 + * flushed. 513 514 */ 514 515 515 516 static int tty_ldisc_halt(struct tty_struct *tty) ··· 754 753 * N_TTY. 755 754 */ 756 755 if (tty->driver->flags & TTY_DRIVER_RESET_TERMIOS) { 756 + /* Make sure the old ldisc is quiescent */ 757 + tty_ldisc_halt(tty); 758 + flush_scheduled_work(); 759 + 757 760 /* Avoid racing set_ldisc or tty_ldisc_release */ 758 761 mutex_lock(&tty->ldisc_mutex); 759 762 if (tty->ldisc) { /* Not yet closed */ 760 763 /* Switch back to N_TTY */ 761 - tty_ldisc_halt(tty); 762 764 tty_ldisc_reinit(tty); 763 765 /* At this point we have a closed ldisc and we want to 764 766 reopen it. We could defer this to the next open but
+28
drivers/clocksource/sh_cmt.c
··· 40 40 struct platform_device *pdev; 41 41 42 42 unsigned long flags; 43 + unsigned long flags_suspend; 43 44 unsigned long match_value; 44 45 unsigned long next_match_value; 45 46 unsigned long max_match_value; ··· 668 667 return -EBUSY; /* cannot unregister clockevent and clocksource */ 669 668 } 670 669 670 + static int sh_cmt_suspend(struct device *dev) 671 + { 672 + struct platform_device *pdev = to_platform_device(dev); 673 + struct sh_cmt_priv *p = platform_get_drvdata(pdev); 674 + 675 + /* save flag state and stop CMT channel */ 676 + p->flags_suspend = p->flags; 677 + sh_cmt_stop(p, p->flags); 678 + return 0; 679 + } 680 + 681 + static int sh_cmt_resume(struct device *dev) 682 + { 683 + struct platform_device *pdev = to_platform_device(dev); 684 + struct sh_cmt_priv *p = platform_get_drvdata(pdev); 685 + 686 + /* start CMT channel from saved state */ 687 + sh_cmt_start(p, p->flags_suspend); 688 + return 0; 689 + } 690 + 691 + static struct dev_pm_ops sh_cmt_dev_pm_ops = { 692 + .suspend = sh_cmt_suspend, 693 + .resume = sh_cmt_resume, 694 + }; 695 + 671 696 static struct platform_driver sh_cmt_device_driver = { 672 697 .probe = sh_cmt_probe, 673 698 .remove = __devexit_p(sh_cmt_remove), 674 699 .driver = { 675 700 .name = "sh_cmt", 701 + .pm = &sh_cmt_dev_pm_ops, 676 702 } 677 703 }; 678 704
+7 -88
drivers/cpufreq/cpufreq.c
··· 1250 1250 { 1251 1251 int ret = 0; 1252 1252 1253 - #ifdef __powerpc__ 1254 1253 int cpu = sysdev->id; 1255 - unsigned int cur_freq = 0; 1256 1254 struct cpufreq_policy *cpu_policy; 1257 1255 1258 1256 dprintk("suspending cpu %u\n", cpu); 1259 - 1260 - /* 1261 - * This whole bogosity is here because Powerbooks are made of fail. 1262 - * No sane platform should need any of the code below to be run. 1263 - * (it's entirely the wrong thing to do, as driver->get may 1264 - * reenable interrupts on some architectures). 1265 - */ 1266 1257 1267 1258 if (!cpu_online(cpu)) 1268 1259 return 0; ··· 1273 1282 1274 1283 if (cpufreq_driver->suspend) { 1275 1284 ret = cpufreq_driver->suspend(cpu_policy, pmsg); 1276 - if (ret) { 1285 + if (ret) 1277 1286 printk(KERN_ERR "cpufreq: suspend failed in ->suspend " 1278 1287 "step on CPU %u\n", cpu_policy->cpu); 1279 - goto out; 1280 - } 1281 - } 1282 - 1283 - if (cpufreq_driver->flags & CPUFREQ_CONST_LOOPS) 1284 - goto out; 1285 - 1286 - if (cpufreq_driver->get) 1287 - cur_freq = cpufreq_driver->get(cpu_policy->cpu); 1288 - 1289 - if (!cur_freq || !cpu_policy->cur) { 1290 - printk(KERN_ERR "cpufreq: suspend failed to assert current " 1291 - "frequency is what timing core thinks it is.\n"); 1292 - goto out; 1293 - } 1294 - 1295 - if (unlikely(cur_freq != cpu_policy->cur)) { 1296 - struct cpufreq_freqs freqs; 1297 - 1298 - if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 1299 - dprintk("Warning: CPU frequency is %u, " 1300 - "cpufreq assumed %u kHz.\n", 1301 - cur_freq, cpu_policy->cur); 1302 - 1303 - freqs.cpu = cpu; 1304 - freqs.old = cpu_policy->cur; 1305 - freqs.new = cur_freq; 1306 - 1307 - srcu_notifier_call_chain(&cpufreq_transition_notifier_list, 1308 - CPUFREQ_SUSPENDCHANGE, &freqs); 1309 - adjust_jiffies(CPUFREQ_SUSPENDCHANGE, &freqs); 1310 - 1311 - cpu_policy->cur = cur_freq; 1312 1288 } 1313 1289 1314 1290 out: 1315 1291 cpufreq_cpu_put(cpu_policy); 1316 - #endif /* __powerpc__ */ 1317 1292 return ret; 1318 1293 } 1319 
1294 ··· 1287 1330 * cpufreq_resume - restore proper CPU frequency handling after resume 1288 1331 * 1289 1332 * 1.) resume CPUfreq hardware support (cpufreq_driver->resume()) 1290 - * 2.) if ->target and !CPUFREQ_CONST_LOOPS: verify we're in sync 1291 - * 3.) schedule call cpufreq_update_policy() ASAP as interrupts are 1292 - * restored. 1333 + * 2.) schedule call cpufreq_update_policy() ASAP as interrupts are 1334 + * restored. It will verify that the current freq is in sync with 1335 + * what we believe it to be. This is a bit later than when it 1336 + * should be, but nonethteless it's better than calling 1337 + * cpufreq_driver->get() here which might re-enable interrupts... 1293 1338 */ 1294 1339 static int cpufreq_resume(struct sys_device *sysdev) 1295 1340 { 1296 1341 int ret = 0; 1297 1342 1298 - #ifdef __powerpc__ 1299 1343 int cpu = sysdev->id; 1300 1344 struct cpufreq_policy *cpu_policy; 1301 1345 1302 1346 dprintk("resuming cpu %u\n", cpu); 1303 - 1304 - /* As with the ->suspend method, all the code below is 1305 - * only necessary because Powerbooks suck. 1306 - * See commit 42d4dc3f4e1e for jokes. 
*/ 1307 1347 1308 1348 if (!cpu_online(cpu)) 1309 1349 return 0; ··· 1327 1373 } 1328 1374 } 1329 1375 1330 - if (!(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { 1331 - unsigned int cur_freq = 0; 1332 - 1333 - if (cpufreq_driver->get) 1334 - cur_freq = cpufreq_driver->get(cpu_policy->cpu); 1335 - 1336 - if (!cur_freq || !cpu_policy->cur) { 1337 - printk(KERN_ERR "cpufreq: resume failed to assert " 1338 - "current frequency is what timing core " 1339 - "thinks it is.\n"); 1340 - goto out; 1341 - } 1342 - 1343 - if (unlikely(cur_freq != cpu_policy->cur)) { 1344 - struct cpufreq_freqs freqs; 1345 - 1346 - if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 1347 - dprintk("Warning: CPU frequency " 1348 - "is %u, cpufreq assumed %u kHz.\n", 1349 - cur_freq, cpu_policy->cur); 1350 - 1351 - freqs.cpu = cpu; 1352 - freqs.old = cpu_policy->cur; 1353 - freqs.new = cur_freq; 1354 - 1355 - srcu_notifier_call_chain( 1356 - &cpufreq_transition_notifier_list, 1357 - CPUFREQ_RESUMECHANGE, &freqs); 1358 - adjust_jiffies(CPUFREQ_RESUMECHANGE, &freqs); 1359 - 1360 - cpu_policy->cur = cur_freq; 1361 - } 1362 - } 1363 - 1364 - out: 1365 1376 schedule_work(&cpu_policy->update); 1377 + 1366 1378 fail: 1367 1379 cpufreq_cpu_put(cpu_policy); 1368 - #endif /* __powerpc__ */ 1369 1380 return ret; 1370 1381 } 1371 1382
+2 -2
drivers/firewire/core-iso.c
··· 196 196 switch (fw_run_transaction(card, TCODE_LOCK_COMPARE_SWAP, 197 197 irm_id, generation, SCODE_100, 198 198 CSR_REGISTER_BASE + CSR_BANDWIDTH_AVAILABLE, 199 - data, sizeof(data))) { 199 + data, 8)) { 200 200 case RCODE_GENERATION: 201 201 /* A generation change frees all bandwidth. */ 202 202 return allocate ? -EAGAIN : bandwidth; ··· 233 233 data[1] = old ^ c; 234 234 switch (fw_run_transaction(card, TCODE_LOCK_COMPARE_SWAP, 235 235 irm_id, generation, SCODE_100, 236 - offset, data, sizeof(data))) { 236 + offset, data, 8)) { 237 237 case RCODE_GENERATION: 238 238 /* A generation change frees all channels. */ 239 239 return allocate ? -EAGAIN : i;
+14
drivers/firewire/ohci.c
··· 34 34 #include <linux/module.h> 35 35 #include <linux/moduleparam.h> 36 36 #include <linux/pci.h> 37 + #include <linux/pci_ids.h> 37 38 #include <linux/spinlock.h> 38 39 #include <linux/string.h> 39 40 ··· 2373 2372 #define ohci_pmac_off(dev) 2374 2373 #endif /* CONFIG_PPC_PMAC */ 2375 2374 2375 + #define PCI_VENDOR_ID_AGERE PCI_VENDOR_ID_ATT 2376 + #define PCI_DEVICE_ID_AGERE_FW643 0x5901 2377 + 2376 2378 static int __devinit pci_probe(struct pci_dev *dev, 2377 2379 const struct pci_device_id *ent) 2378 2380 { ··· 2425 2421 2426 2422 version = reg_read(ohci, OHCI1394_Version) & 0x00ff00ff; 2427 2423 ohci->use_dualbuffer = version >= OHCI_VERSION_1_1; 2424 + 2425 + /* dual-buffer mode is broken if more than one IR context is active */ 2426 + if (dev->vendor == PCI_VENDOR_ID_AGERE && 2427 + dev->device == PCI_DEVICE_ID_AGERE_FW643) 2428 + ohci->use_dualbuffer = false; 2429 + 2430 + /* dual-buffer mode is broken */ 2431 + if (dev->vendor == PCI_VENDOR_ID_RICOH && 2432 + dev->device == PCI_DEVICE_ID_RICOH_R5C832) 2433 + ohci->use_dualbuffer = false; 2428 2434 2429 2435 /* x86-32 currently doesn't use highmem for dma_alloc_coherent */ 2430 2436 #if !defined(CONFIG_X86_32)
+4 -4
drivers/firewire/sbp2.c
··· 456 456 } 457 457 spin_unlock_irqrestore(&card->lock, flags); 458 458 459 - if (&orb->link != &lu->orb_list) 459 + if (&orb->link != &lu->orb_list) { 460 460 orb->callback(orb, &status); 461 - else 461 + kref_put(&orb->kref, free_orb); 462 + } else { 462 463 fw_error("status write for unknown orb\n"); 463 - 464 - kref_put(&orb->kref, free_orb); 464 + } 465 465 466 466 fw_send_response(card, request, RCODE_COMPLETE); 467 467 }
+12 -28
drivers/gpu/drm/drm_crtc.c
··· 258 258 EXPORT_SYMBOL(drm_mode_object_find); 259 259 260 260 /** 261 - * drm_crtc_from_fb - find the CRTC structure associated with an fb 262 - * @dev: DRM device 263 - * @fb: framebuffer in question 264 - * 265 - * LOCKING: 266 - * Caller must hold mode_config lock. 267 - * 268 - * Find CRTC in the mode_config structure that matches @fb. 269 - * 270 - * RETURNS: 271 - * Pointer to the CRTC or NULL if it wasn't found. 272 - */ 273 - struct drm_crtc *drm_crtc_from_fb(struct drm_device *dev, 274 - struct drm_framebuffer *fb) 275 - { 276 - struct drm_crtc *crtc; 277 - 278 - list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 279 - if (crtc->fb == fb) 280 - return crtc; 281 - } 282 - return NULL; 283 - } 284 - 285 - /** 286 261 * drm_framebuffer_init - initialize a framebuffer 287 262 * @dev: DRM device 288 263 * ··· 303 328 { 304 329 struct drm_device *dev = fb->dev; 305 330 struct drm_crtc *crtc; 331 + struct drm_mode_set set; 332 + int ret; 306 333 307 334 /* remove from any CRTC */ 308 335 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 309 - if (crtc->fb == fb) 310 - crtc->fb = NULL; 336 + if (crtc->fb == fb) { 337 + /* should turn off the crtc */ 338 + memset(&set, 0, sizeof(struct drm_mode_set)); 339 + set.crtc = crtc; 340 + set.fb = NULL; 341 + ret = crtc->funcs->set_config(&set); 342 + if (ret) 343 + DRM_ERROR("failed to reset crtc %p when fb was deleted\n", crtc); 344 + } 311 345 } 312 346 313 347 drm_mode_object_put(dev, &fb->base); ··· 1495 1511 set.mode = mode; 1496 1512 set.connectors = connector_set; 1497 1513 set.num_connectors = crtc_req->count_connectors; 1498 - set.fb =fb; 1514 + set.fb = fb; 1499 1515 ret = crtc->funcs->set_config(&set); 1500 1516 1501 1517 out:
+33 -37
drivers/gpu/drm/drm_edid.c
··· 502 502 struct detailed_non_pixel *data = &timing->data.other_data; 503 503 struct drm_display_mode *newmode; 504 504 505 - /* EDID up to and including 1.2 may put monitor info here */ 506 - if (edid->version == 1 && edid->revision < 3) 507 - continue; 505 + /* X server check is version 1.1 or higher */ 506 + if (edid->version == 1 && edid->revision >= 1 && 507 + !timing->pixel_clock) { 508 + /* Other timing or info */ 509 + switch (data->type) { 510 + case EDID_DETAIL_MONITOR_SERIAL: 511 + break; 512 + case EDID_DETAIL_MONITOR_STRING: 513 + break; 514 + case EDID_DETAIL_MONITOR_RANGE: 515 + /* Get monitor range data */ 516 + break; 517 + case EDID_DETAIL_MONITOR_NAME: 518 + break; 519 + case EDID_DETAIL_MONITOR_CPDATA: 520 + break; 521 + case EDID_DETAIL_STD_MODES: 522 + /* Five modes per detailed section */ 523 + for (j = 0; j < 5; i++) { 524 + struct std_timing *std; 525 + struct drm_display_mode *newmode; 508 526 509 - /* Detailed mode timing */ 510 - if (timing->pixel_clock) { 527 + std = &data->data.timings[j]; 528 + newmode = drm_mode_std(dev, std); 529 + if (newmode) { 530 + drm_mode_probed_add(connector, newmode); 531 + modes++; 532 + } 533 + } 534 + break; 535 + default: 536 + break; 537 + } 538 + } else { 511 539 newmode = drm_mode_detailed(dev, edid, timing, quirks); 512 540 if (!newmode) 513 541 continue; ··· 546 518 drm_mode_probed_add(connector, newmode); 547 519 548 520 modes++; 549 - continue; 550 - } 551 - 552 - /* Other timing or info */ 553 - switch (data->type) { 554 - case EDID_DETAIL_MONITOR_SERIAL: 555 - break; 556 - case EDID_DETAIL_MONITOR_STRING: 557 - break; 558 - case EDID_DETAIL_MONITOR_RANGE: 559 - /* Get monitor range data */ 560 - break; 561 - case EDID_DETAIL_MONITOR_NAME: 562 - break; 563 - case EDID_DETAIL_MONITOR_CPDATA: 564 - break; 565 - case EDID_DETAIL_STD_MODES: 566 - /* Five modes per detailed section */ 567 - for (j = 0; j < 5; i++) { 568 - struct std_timing *std; 569 - struct drm_display_mode *newmode; 570 - 571 - 
std = &data->data.timings[j]; 572 - newmode = drm_mode_std(dev, std); 573 - if (newmode) { 574 - drm_mode_probed_add(connector, newmode); 575 - modes++; 576 - } 577 - } 578 - break; 579 - default: 580 - break; 581 521 } 582 522 } 583 523
+27 -20
drivers/gpu/drm/drm_sysfs.c
··· 22 22 #define to_drm_minor(d) container_of(d, struct drm_minor, kdev) 23 23 #define to_drm_connector(d) container_of(d, struct drm_connector, kdev) 24 24 25 + static struct device_type drm_sysfs_device_minor = { 26 + .name = "drm_minor" 27 + }; 28 + 25 29 /** 26 - * drm_sysfs_suspend - DRM class suspend hook 30 + * drm_class_suspend - DRM class suspend hook 27 31 * @dev: Linux device to suspend 28 32 * @state: power state to enter 29 33 * 30 34 * Just figures out what the actual struct drm_device associated with 31 35 * @dev is and calls its suspend hook, if present. 32 36 */ 33 - static int drm_sysfs_suspend(struct device *dev, pm_message_t state) 37 + static int drm_class_suspend(struct device *dev, pm_message_t state) 34 38 { 35 - struct drm_minor *drm_minor = to_drm_minor(dev); 36 - struct drm_device *drm_dev = drm_minor->dev; 39 + if (dev->type == &drm_sysfs_device_minor) { 40 + struct drm_minor *drm_minor = to_drm_minor(dev); 41 + struct drm_device *drm_dev = drm_minor->dev; 37 42 38 - if (drm_minor->type == DRM_MINOR_LEGACY && 39 - !drm_core_check_feature(drm_dev, DRIVER_MODESET) && 40 - drm_dev->driver->suspend) 41 - return drm_dev->driver->suspend(drm_dev, state); 42 - 43 + if (drm_minor->type == DRM_MINOR_LEGACY && 44 + !drm_core_check_feature(drm_dev, DRIVER_MODESET) && 45 + drm_dev->driver->suspend) 46 + return drm_dev->driver->suspend(drm_dev, state); 47 + } 43 48 return 0; 44 49 } 45 50 46 51 /** 47 - * drm_sysfs_resume - DRM class resume hook 52 + * drm_class_resume - DRM class resume hook 48 53 * @dev: Linux device to resume 49 54 * 50 55 * Just figures out what the actual struct drm_device associated with 51 56 * @dev is and calls its resume hook, if present. 
52 57 */ 53 - static int drm_sysfs_resume(struct device *dev) 58 + static int drm_class_resume(struct device *dev) 54 59 { 55 - struct drm_minor *drm_minor = to_drm_minor(dev); 56 - struct drm_device *drm_dev = drm_minor->dev; 60 + if (dev->type == &drm_sysfs_device_minor) { 61 + struct drm_minor *drm_minor = to_drm_minor(dev); 62 + struct drm_device *drm_dev = drm_minor->dev; 57 63 58 - if (drm_minor->type == DRM_MINOR_LEGACY && 59 - !drm_core_check_feature(drm_dev, DRIVER_MODESET) && 60 - drm_dev->driver->resume) 61 - return drm_dev->driver->resume(drm_dev); 62 - 64 + if (drm_minor->type == DRM_MINOR_LEGACY && 65 + !drm_core_check_feature(drm_dev, DRIVER_MODESET) && 66 + drm_dev->driver->resume) 67 + return drm_dev->driver->resume(drm_dev); 68 + } 63 69 return 0; 64 70 } 65 71 ··· 105 99 goto err_out; 106 100 } 107 101 108 - class->suspend = drm_sysfs_suspend; 109 - class->resume = drm_sysfs_resume; 102 + class->suspend = drm_class_suspend; 103 + class->resume = drm_class_resume; 110 104 111 105 err = class_create_file(class, &class_attr_version); 112 106 if (err) ··· 486 480 minor->kdev.class = drm_class; 487 481 minor->kdev.release = drm_sysfs_device_release; 488 482 minor->kdev.devt = minor->device; 483 + minor->kdev.type = &drm_sysfs_device_minor; 489 484 if (minor->type == DRM_MINOR_CONTROL) 490 485 minor_str = "controlD%d"; 491 486 else if (minor->type == DRM_MINOR_RENDER)
+7
drivers/gpu/drm/i915/i915_drv.h
··· 222 222 unsigned int edp_support:1; 223 223 int lvds_ssc_freq; 224 224 225 + int crt_ddc_bus; /* -1 = unknown, else GPIO to use for CRT DDC */ 225 226 struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */ 226 227 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */ 227 228 int num_fence_regs; /* 8 on pre-965, 16 otherwise */ ··· 385 384 */ 386 385 struct list_head inactive_list; 387 386 387 + /** LRU list of objects with fence regs on them. */ 388 + struct list_head fence_list; 389 + 388 390 /** 389 391 * List of breadcrumbs associated with GPU requests currently 390 392 * outstanding. ··· 454 450 455 451 /** This object's place on the active/flushing/inactive lists */ 456 452 struct list_head list; 453 + 454 + /** This object's place on the fenced object LRU */ 455 + struct list_head fence_list; 457 456 458 457 /** 459 458 * This is set if the object is on the active or flushing lists
+48 -38
drivers/gpu/drm/i915/i915_gem.c
··· 978 978 i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, 979 979 struct drm_file *file_priv) 980 980 { 981 + struct drm_i915_private *dev_priv = dev->dev_private; 981 982 struct drm_i915_gem_set_domain *args = data; 982 983 struct drm_gem_object *obj; 983 984 uint32_t read_domains = args->read_domains; ··· 1011 1010 obj, obj->size, read_domains, write_domain); 1012 1011 #endif 1013 1012 if (read_domains & I915_GEM_DOMAIN_GTT) { 1013 + struct drm_i915_gem_object *obj_priv = obj->driver_private; 1014 + 1014 1015 ret = i915_gem_object_set_to_gtt_domain(obj, write_domain != 0); 1016 + 1017 + /* Update the LRU on the fence for the CPU access that's 1018 + * about to occur. 1019 + */ 1020 + if (obj_priv->fence_reg != I915_FENCE_REG_NONE) { 1021 + list_move_tail(&obj_priv->fence_list, 1022 + &dev_priv->mm.fence_list); 1023 + } 1015 1024 1016 1025 /* Silently promote "you're not bound, there was nothing to do" 1017 1026 * to success, since the client was just asking us to ··· 1166 1155 } 1167 1156 1168 1157 /* Need a new fence register? */ 1169 - if (obj_priv->fence_reg == I915_FENCE_REG_NONE && 1170 - obj_priv->tiling_mode != I915_TILING_NONE) { 1158 + if (obj_priv->tiling_mode != I915_TILING_NONE) { 1171 1159 ret = i915_gem_object_get_fence_reg(obj); 1172 1160 if (ret) { 1173 1161 mutex_unlock(&dev->struct_mutex); ··· 2218 2208 struct drm_i915_gem_object *old_obj_priv = NULL; 2219 2209 int i, ret, avail; 2220 2210 2211 + /* Just update our place in the LRU if our fence is getting used. 
*/ 2212 + if (obj_priv->fence_reg != I915_FENCE_REG_NONE) { 2213 + list_move_tail(&obj_priv->fence_list, &dev_priv->mm.fence_list); 2214 + return 0; 2215 + } 2216 + 2221 2217 switch (obj_priv->tiling_mode) { 2222 2218 case I915_TILING_NONE: 2223 2219 WARN(1, "allocating a fence for non-tiled object?\n"); ··· 2245 2229 } 2246 2230 2247 2231 /* First try to find a free reg */ 2248 - try_again: 2249 2232 avail = 0; 2250 2233 for (i = dev_priv->fence_reg_start; i < dev_priv->num_fence_regs; i++) { 2251 2234 reg = &dev_priv->fence_regs[i]; ··· 2258 2243 2259 2244 /* None available, try to steal one or wait for a user to finish */ 2260 2245 if (i == dev_priv->num_fence_regs) { 2261 - uint32_t seqno = dev_priv->mm.next_gem_seqno; 2246 + struct drm_gem_object *old_obj = NULL; 2262 2247 2263 2248 if (avail == 0) 2264 2249 return -ENOSPC; 2265 2250 2266 - for (i = dev_priv->fence_reg_start; 2267 - i < dev_priv->num_fence_regs; i++) { 2268 - uint32_t this_seqno; 2251 + list_for_each_entry(old_obj_priv, &dev_priv->mm.fence_list, 2252 + fence_list) { 2253 + old_obj = old_obj_priv->obj; 2269 2254 2270 - reg = &dev_priv->fence_regs[i]; 2271 - old_obj_priv = reg->obj->driver_private; 2255 + reg = &dev_priv->fence_regs[old_obj_priv->fence_reg]; 2272 2256 2273 2257 if (old_obj_priv->pin_count) 2274 2258 continue; 2259 + 2260 + /* Take a reference, as otherwise the wait_rendering 2261 + * below may cause the object to get freed out from 2262 + * under us. 2263 + */ 2264 + drm_gem_object_reference(old_obj); 2275 2265 2276 2266 /* i915 uses fences for GPU access to tiled buffers */ 2277 2267 if (IS_I965G(dev) || !old_obj_priv->active) 2278 2268 break; 2279 2269 2280 - /* find the seqno of the first available fence */ 2281 - this_seqno = old_obj_priv->last_rendering_seqno; 2282 - if (this_seqno != 0 && 2283 - reg->obj->write_domain == 0 && 2284 - i915_seqno_passed(seqno, this_seqno)) 2285 - seqno = this_seqno; 2286 - } 2287 - 2288 - /* 2289 - * Now things get ugly... 
we have to wait for one of the 2290 - * objects to finish before trying again. 2291 - */ 2292 - if (i == dev_priv->num_fence_regs) { 2293 - if (seqno == dev_priv->mm.next_gem_seqno) { 2294 - i915_gem_flush(dev, 2295 - I915_GEM_GPU_DOMAINS, 2296 - I915_GEM_GPU_DOMAINS); 2297 - seqno = i915_add_request(dev, NULL, 2298 - I915_GEM_GPU_DOMAINS); 2299 - if (seqno == 0) 2300 - return -ENOMEM; 2301 - } 2302 - 2303 - ret = i915_wait_request(dev, seqno); 2304 - if (ret) 2270 + /* This brings the object to the head of the LRU if it 2271 + * had been written to. The only way this should 2272 + * result in us waiting longer than the expected 2273 + * optimal amount of time is if there was a 2274 + * fence-using buffer later that was read-only. 2275 + */ 2276 + i915_gem_object_flush_gpu_write_domain(old_obj); 2277 + ret = i915_gem_object_wait_rendering(old_obj); 2278 + if (ret != 0) 2305 2279 return ret; 2306 - goto try_again; 2280 + break; 2307 2281 } 2308 2282 2309 2283 /* ··· 2300 2296 * for this object next time we need it. 2301 2297 */ 2302 2298 i915_gem_release_mmap(reg->obj); 2299 + i = old_obj_priv->fence_reg; 2303 2300 old_obj_priv->fence_reg = I915_FENCE_REG_NONE; 2301 + list_del_init(&old_obj_priv->fence_list); 2302 + drm_gem_object_unreference(old_obj); 2304 2303 } 2305 2304 2306 2305 obj_priv->fence_reg = i; 2306 + list_add_tail(&obj_priv->fence_list, &dev_priv->mm.fence_list); 2307 + 2307 2308 reg->obj = obj; 2308 2309 2309 2310 if (IS_I965G(dev)) ··· 2351 2342 2352 2343 dev_priv->fence_regs[obj_priv->fence_reg].obj = NULL; 2353 2344 obj_priv->fence_reg = I915_FENCE_REG_NONE; 2345 + list_del_init(&obj_priv->fence_list); 2354 2346 } 2355 2347 2356 2348 /** ··· 3605 3595 * Pre-965 chips need a fence register set up in order to 3606 3596 * properly handle tiled surfaces. 
3607 3597 */ 3608 - if (!IS_I965G(dev) && 3609 - obj_priv->fence_reg == I915_FENCE_REG_NONE && 3610 - obj_priv->tiling_mode != I915_TILING_NONE) { 3598 + if (!IS_I965G(dev) && obj_priv->tiling_mode != I915_TILING_NONE) { 3611 3599 ret = i915_gem_object_get_fence_reg(obj); 3612 3600 if (ret != 0) { 3613 3601 if (ret != -EBUSY && ret != -ERESTARTSYS) ··· 3814 3806 obj_priv->obj = obj; 3815 3807 obj_priv->fence_reg = I915_FENCE_REG_NONE; 3816 3808 INIT_LIST_HEAD(&obj_priv->list); 3809 + INIT_LIST_HEAD(&obj_priv->fence_list); 3817 3810 3818 3811 return 0; 3819 3812 } ··· 4262 4253 INIT_LIST_HEAD(&dev_priv->mm.flushing_list); 4263 4254 INIT_LIST_HEAD(&dev_priv->mm.inactive_list); 4264 4255 INIT_LIST_HEAD(&dev_priv->mm.request_list); 4256 + INIT_LIST_HEAD(&dev_priv->mm.fence_list); 4265 4257 INIT_DELAYED_WORK(&dev_priv->mm.retire_work, 4266 4258 i915_gem_retire_work_handler); 4267 4259 dev_priv->mm.next_gem_seqno = 1;
+48 -3
drivers/gpu/drm/i915/intel_bios.c
··· 59 59 return NULL; 60 60 } 61 61 62 + static u16 63 + get_blocksize(void *p) 64 + { 65 + u16 *block_ptr, block_size; 66 + 67 + block_ptr = (u16 *)((char *)p - 2); 68 + block_size = *block_ptr; 69 + return block_size; 70 + } 71 + 62 72 static void 63 73 fill_detail_timing_data(struct drm_display_mode *panel_fixed_mode, 64 74 struct lvds_dvo_timing *dvo_timing) ··· 225 215 } 226 216 227 217 static void 218 + parse_general_definitions(struct drm_i915_private *dev_priv, 219 + struct bdb_header *bdb) 220 + { 221 + struct bdb_general_definitions *general; 222 + const int crt_bus_map_table[] = { 223 + GPIOB, 224 + GPIOA, 225 + GPIOC, 226 + GPIOD, 227 + GPIOE, 228 + GPIOF, 229 + }; 230 + 231 + /* Set sensible defaults in case we can't find the general block 232 + or it is the wrong chipset */ 233 + dev_priv->crt_ddc_bus = -1; 234 + 235 + general = find_section(bdb, BDB_GENERAL_DEFINITIONS); 236 + if (general) { 237 + u16 block_size = get_blocksize(general); 238 + if (block_size >= sizeof(*general)) { 239 + int bus_pin = general->crt_ddc_gmbus_pin; 240 + DRM_DEBUG("crt_ddc_bus_pin: %d\n", bus_pin); 241 + if ((bus_pin >= 1) && (bus_pin <= 6)) { 242 + dev_priv->crt_ddc_bus = 243 + crt_bus_map_table[bus_pin-1]; 244 + } 245 + } else { 246 + DRM_DEBUG("BDB_GD too small (%d). 
Invalid.\n", 247 + block_size); 248 + } 249 + } 250 + } 251 + 252 + static void 228 253 parse_sdvo_device_mapping(struct drm_i915_private *dev_priv, 229 254 struct bdb_header *bdb) 230 255 { ··· 267 222 struct bdb_general_definitions *p_defs; 268 223 struct child_device_config *p_child; 269 224 int i, child_device_num, count; 270 - u16 block_size, *block_ptr; 225 + u16 block_size; 271 226 272 227 p_defs = find_section(bdb, BDB_GENERAL_DEFINITIONS); 273 228 if (!p_defs) { ··· 285 240 return; 286 241 } 287 242 /* get the block size of general definitions */ 288 - block_ptr = (u16 *)((char *)p_defs - 2); 289 - block_size = *block_ptr; 243 + block_size = get_blocksize(p_defs); 290 244 /* get the number of child device */ 291 245 child_device_num = (block_size - sizeof(*p_defs)) / 292 246 sizeof(*p_child); ··· 406 362 407 363 /* Grab useful general definitions */ 408 364 parse_general_features(dev_priv, bdb); 365 + parse_general_definitions(dev_priv, bdb); 409 366 parse_lfp_panel_data(dev_priv, bdb); 410 367 parse_sdvo_panel_data(dev_priv, bdb); 411 368 parse_sdvo_device_mapping(dev_priv, bdb);
+10 -1
drivers/gpu/drm/i915/intel_crt.c
··· 508 508 { 509 509 struct drm_connector *connector; 510 510 struct intel_output *intel_output; 511 + struct drm_i915_private *dev_priv = dev->dev_private; 511 512 u32 i2c_reg; 512 513 513 514 intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL); ··· 528 527 /* Set up the DDC bus. */ 529 528 if (IS_IGDNG(dev)) 530 529 i2c_reg = PCH_GPIOA; 531 - else 530 + else { 532 531 i2c_reg = GPIOA; 532 + /* Use VBT information for CRT DDC if available */ 533 + if (dev_priv->crt_ddc_bus != -1) 534 + i2c_reg = dev_priv->crt_ddc_bus; 535 + } 533 536 intel_output->ddc_bus = intel_i2c_create(dev, i2c_reg, "CRTDDC_A"); 534 537 if (!intel_output->ddc_bus) { 535 538 dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration " ··· 542 537 } 543 538 544 539 intel_output->type = INTEL_OUTPUT_ANALOG; 540 + intel_output->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 541 + (1 << INTEL_ANALOG_CLONE_BIT) | 542 + (1 << INTEL_SDVO_LVDS_CLONE_BIT); 543 + intel_output->crtc_mask = (1 << 0) | (1 << 1); 545 544 connector->interlace_allowed = 0; 546 545 connector->doublescan_allowed = 0; 547 546
+15 -56
drivers/gpu/drm/i915/intel_display.c
··· 666 666 intel_clock_t clock; 667 667 int err = target; 668 668 669 - if (IS_I9XX(dev) && intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) && 669 + if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) && 670 670 (I915_READ(LVDS)) != 0) { 671 671 /* 672 672 * For LVDS, if the panel is on, just rely on its current
··· 2396 2396 if (is_sdvo) { 2397 2397 dpll |= DPLL_DVO_HIGH_SPEED; 2398 2398 sdvo_pixel_multiply = adjusted_mode->clock / mode->clock; 2399 - if (IS_I945G(dev) || IS_I945GM(dev)) 2399 + if (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev)) 2400 2400 dpll |= (sdvo_pixel_multiply - 1) << SDVO_MULTIPLIER_SHIFT_HIRES; 2401 2401 else if (IS_IGDNG(dev)) 2402 2402 dpll |= (sdvo_pixel_multiply - 1) << PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT;
··· 3170 3170 3171 3171 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 3172 3172 struct intel_output *intel_output = to_intel_output(connector); 3173 - if (type_mask & (1 << intel_output->type)) 3173 + if (type_mask & intel_output->clone_mask) 3174 3174 index_mask |= (1 << entry); 3175 3175 entry++; 3176 3176 }
··· 3218 3218 intel_dp_init(dev, PCH_DP_D); 3219 3219 3220 3220 } else if (IS_I9XX(dev)) { 3221 - int found; 3222 - u32 reg; 3221 + bool found = false; 3223 3222 3224 3223 if (I915_READ(SDVOB) & SDVO_DETECTED) { 3225 3224 found = intel_sdvo_init(dev, SDVOB); 3226 3225 if (!found && SUPPORTS_INTEGRATED_HDMI(dev)) 3227 3226 intel_hdmi_init(dev, SDVOB); 3227 + 3228 3228 if (!found && SUPPORTS_INTEGRATED_DP(dev)) 3229 3229 intel_dp_init(dev, DP_B); 3230 3230 } 3231 3231 3232 3232 /* Before G4X SDVOC doesn't have its own detect register */ 3233 - if (IS_G4X(dev)) 3234 - reg = SDVOC; 3235 - else 3236 - reg = SDVOB; 3237 3233 3238 - if (I915_READ(reg) & SDVO_DETECTED) { 3234 + if (I915_READ(SDVOB) & SDVO_DETECTED) 3239 3235 found = intel_sdvo_init(dev, SDVOC); 3240 - if (!found && SUPPORTS_INTEGRATED_HDMI(dev)) 3236 + 3237 + if (!found && (I915_READ(SDVOC) & SDVO_DETECTED)) { 3238 + 3239 + if (SUPPORTS_INTEGRATED_HDMI(dev)) 3241 3240 intel_hdmi_init(dev, SDVOC); 3242 - if (!found && SUPPORTS_INTEGRATED_DP(dev)) 3241 + if (SUPPORTS_INTEGRATED_DP(dev)) 3243 3242 intel_dp_init(dev, DP_C); 3244 3243 } 3244 + 3245 3245 if (SUPPORTS_INTEGRATED_DP(dev) && (I915_READ(DP_D) & DP_DETECTED)) 3246 3246 intel_dp_init(dev, DP_D); 3247 3247 } else
··· 3253 3253 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 3254 3254 struct intel_output *intel_output = to_intel_output(connector); 3255 3255 struct drm_encoder *encoder = &intel_output->enc; 3256 - int crtc_mask = 0, clone_mask = 0; 3257 3256 3258 - /* valid crtcs */ 3259 - switch(intel_output->type) { 3260 - case INTEL_OUTPUT_HDMI: 3261 - crtc_mask = ((1 << 0)| 3262 - (1 << 1)); 3263 - clone_mask = ((1 << INTEL_OUTPUT_HDMI)); 3264 - break; 3265 - case INTEL_OUTPUT_DVO: 3266 - case INTEL_OUTPUT_SDVO: 3267 - crtc_mask = ((1 << 0)| 3268 - (1 << 1)); 3269 - clone_mask = ((1 << INTEL_OUTPUT_ANALOG) | 3270 - (1 << INTEL_OUTPUT_DVO) | 3271 - (1 << INTEL_OUTPUT_SDVO)); 3272 - break; 3273 - case INTEL_OUTPUT_ANALOG: 3274 - crtc_mask = ((1 << 0)| 3275 - (1 << 1)); 3276 - clone_mask = ((1 << INTEL_OUTPUT_ANALOG) | 3277 - (1 << INTEL_OUTPUT_DVO) | 3278 - (1 << INTEL_OUTPUT_SDVO)); 3279 - break; 3280 - case INTEL_OUTPUT_LVDS: 3281 - crtc_mask = (1 << 1); 3282 - clone_mask = (1 << INTEL_OUTPUT_LVDS); 3283 - break; 3284 - case INTEL_OUTPUT_TVOUT: 3285 - crtc_mask = ((1 << 0) | 3286 - (1 << 1)); 3287 - clone_mask = (1 << INTEL_OUTPUT_TVOUT); 3288 - break; 3289 - case INTEL_OUTPUT_DISPLAYPORT: 3290 - crtc_mask = ((1 << 0) | 3291 - (1 << 1)); 3292 - clone_mask = (1 << INTEL_OUTPUT_DISPLAYPORT); 3293 - break; 3294 - case INTEL_OUTPUT_EDP: 3295 - crtc_mask = (1 << 1); 3296 - clone_mask = (1 << INTEL_OUTPUT_EDP); 3297 - break; 3298 - } 3299 - encoder->possible_crtcs = crtc_mask; 3300 - encoder->possible_clones = intel_connector_clones(dev, clone_mask); 3257 + encoder->possible_crtcs = intel_output->crtc_mask; 3258 + encoder->possible_clones = intel_connector_clones(dev, 3259 + intel_output->clone_mask); 3301 3260 } 3302 3261 } 3303 3262
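The intel_display.c hunk above replaces a per-type switch with per-output `crtc_mask`/`clone_mask` fields, and `intel_connector_clones()` turns a clone mask into an index mask over the connector list. A minimal standalone sketch of that index-mask computation (the struct and sample data here are illustrative, not the driver's types):

```c
#include <assert.h>
#include <stdint.h>

/* Stripped-down stand-in for struct intel_output: each output carries
 * the masks that used to be computed in a big switch statement. */
struct model_output {
    uint32_t crtc_mask;   /* which pipes may drive this output */
    uint32_t clone_mask;  /* one bit per clone class the output occupies */
};

/* Hypothetical sample list; clone-bit values are made up for the demo. */
static const struct model_output demo_outputs[3] = {
    { 0x3, 1u << 1 },   /* pipes A+B, clone class 1 */
    { 0x3, 1u << 2 },   /* pipes A+B, clone class 2 */
    { 0x3, 1u << 1 },   /* pipes A+B, also clone class 1 */
};

/* Analogue of intel_connector_clones(): collect the index of every
 * output whose clone bits intersect type_mask. */
static uint32_t model_connector_clones(const struct model_output *outs,
                                       int n, uint32_t type_mask)
{
    uint32_t index_mask = 0;
    int entry;

    for (entry = 0; entry < n; entry++)
        if (type_mask & outs[entry].clone_mask)
            index_mask |= 1u << entry;
    return index_mask;
}
```

Outputs 0 and 2 share a clone class, so querying that class yields an index mask with bits 0 and 2 set; this is the shape of the `possible_clones` values the loop at the end of the hunk installs.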
+12
drivers/gpu/drm/i915/intel_dp.c
··· 1254 1254 else 1255 1255 intel_output->type = INTEL_OUTPUT_DISPLAYPORT; 1256 1256 1257 + if (output_reg == DP_B) 1258 + intel_output->clone_mask = (1 << INTEL_DP_B_CLONE_BIT); 1259 + else if (output_reg == DP_C) 1260 + intel_output->clone_mask = (1 << INTEL_DP_C_CLONE_BIT); 1261 + else if (output_reg == DP_D) 1262 + intel_output->clone_mask = (1 << INTEL_DP_D_CLONE_BIT); 1263 + 1264 + if (IS_eDP(intel_output)) { 1265 + intel_output->crtc_mask = (1 << 1); 1266 + intel_output->clone_mask = (1 << INTEL_OUTPUT_EDP); 1267 + } else 1268 + intel_output->crtc_mask = (1 << 0) | (1 << 1); 1257 1269 connector->interlace_allowed = true; 1258 1270 connector->doublescan_allowed = 0; 1259 1271
+20
drivers/gpu/drm/i915/intel_drv.h
··· 57 57 #define INTEL_OUTPUT_DISPLAYPORT 7 58 58 #define INTEL_OUTPUT_EDP 8 59 59 60 + /* Intel Pipe Clone Bit */ 61 + #define INTEL_HDMIB_CLONE_BIT 1 62 + #define INTEL_HDMIC_CLONE_BIT 2 63 + #define INTEL_HDMID_CLONE_BIT 3 64 + #define INTEL_HDMIE_CLONE_BIT 4 65 + #define INTEL_HDMIF_CLONE_BIT 5 66 + #define INTEL_SDVO_NON_TV_CLONE_BIT 6 67 + #define INTEL_SDVO_TV_CLONE_BIT 7 68 + #define INTEL_SDVO_LVDS_CLONE_BIT 8 69 + #define INTEL_ANALOG_CLONE_BIT 9 70 + #define INTEL_TV_CLONE_BIT 10 71 + #define INTEL_DP_B_CLONE_BIT 11 72 + #define INTEL_DP_C_CLONE_BIT 12 73 + #define INTEL_DP_D_CLONE_BIT 13 74 + #define INTEL_LVDS_CLONE_BIT 14 75 + #define INTEL_DVO_TMDS_CLONE_BIT 15 76 + #define INTEL_DVO_LVDS_CLONE_BIT 16 77 + 60 78 #define INTEL_DVO_CHIP_NONE 0 61 79 #define INTEL_DVO_CHIP_LVDS 1 62 80 #define INTEL_DVO_CHIP_TMDS 2 ··· 104 86 bool needs_tv_clock; 105 87 void *dev_priv; 106 88 void (*hot_plug)(struct intel_output *); 89 + int crtc_mask; 90 + int clone_mask; 107 91 }; 108 92 109 93 struct intel_crtc {
+6
drivers/gpu/drm/i915/intel_dvo.c
··· 435 435 continue; 436 436 437 437 intel_output->type = INTEL_OUTPUT_DVO; 438 + intel_output->crtc_mask = (1 << 0) | (1 << 1); 438 439 switch (dvo->type) { 439 440 case INTEL_DVO_CHIP_TMDS: 441 + intel_output->clone_mask = 442 + (1 << INTEL_DVO_TMDS_CLONE_BIT) | 443 + (1 << INTEL_ANALOG_CLONE_BIT); 440 444 drm_connector_init(dev, connector, 441 445 &intel_dvo_connector_funcs, 442 446 DRM_MODE_CONNECTOR_DVII); 443 447 encoder_type = DRM_MODE_ENCODER_TMDS; 444 448 break; 445 449 case INTEL_DVO_CHIP_LVDS: 450 + intel_output->clone_mask = 451 + (1 << INTEL_DVO_LVDS_CLONE_BIT); 446 452 drm_connector_init(dev, connector, 447 453 &intel_dvo_connector_funcs, 448 454 DRM_MODE_CONNECTOR_LVDS);
+12 -6
drivers/gpu/drm/i915/intel_hdmi.c
··· 230 230 231 231 connector->interlace_allowed = 0; 232 232 connector->doublescan_allowed = 0; 233 + intel_output->crtc_mask = (1 << 0) | (1 << 1); 233 234 234 235 /* Set up the DDC bus. */ 235 - if (sdvox_reg == SDVOB) 236 + if (sdvox_reg == SDVOB) { 237 + intel_output->clone_mask = (1 << INTEL_HDMIB_CLONE_BIT); 236 238 intel_output->ddc_bus = intel_i2c_create(dev, GPIOE, "HDMIB"); 237 - else if (sdvox_reg == SDVOC) 239 + } else if (sdvox_reg == SDVOC) { 240 + intel_output->clone_mask = (1 << INTEL_HDMIC_CLONE_BIT); 238 241 intel_output->ddc_bus = intel_i2c_create(dev, GPIOD, "HDMIC"); 239 - else if (sdvox_reg == HDMIB) 242 + } else if (sdvox_reg == HDMIB) { 243 + intel_output->clone_mask = (1 << INTEL_HDMID_CLONE_BIT); 240 244 intel_output->ddc_bus = intel_i2c_create(dev, PCH_GPIOE, 241 245 "HDMIB"); 242 - else if (sdvox_reg == HDMIC) 246 + } else if (sdvox_reg == HDMIC) { 247 + intel_output->clone_mask = (1 << INTEL_HDMIE_CLONE_BIT); 243 248 intel_output->ddc_bus = intel_i2c_create(dev, PCH_GPIOD, 244 249 "HDMIC"); 245 - else if (sdvox_reg == HDMID) 250 + } else if (sdvox_reg == HDMID) { 251 + intel_output->clone_mask = (1 << INTEL_HDMIF_CLONE_BIT); 246 252 intel_output->ddc_bus = intel_i2c_create(dev, PCH_GPIOF, 247 253 "HDMID"); 248 - 254 + } 249 255 if (!intel_output->ddc_bus) 250 256 goto err_connector; 251 257
+2
drivers/gpu/drm/i915/intel_lvds.c
··· 916 916 drm_mode_connector_attach_encoder(&intel_output->base, &intel_output->enc); 917 917 intel_output->type = INTEL_OUTPUT_LVDS; 918 918 919 + intel_output->clone_mask = (1 << INTEL_LVDS_CLONE_BIT); 920 + intel_output->crtc_mask = (1 << 1); 919 921 drm_encoder_helper_add(encoder, &intel_lvds_helper_funcs); 920 922 drm_connector_helper_add(connector, &intel_lvds_connector_helper_funcs); 921 923 connector->display_info.subpixel_order = SubPixelHorizontalRGB;
+12 -1
drivers/gpu/drm/i915/intel_sdvo.c
··· 1458 1458 (SDVO_OUTPUT_RGB0 | SDVO_OUTPUT_RGB1)) 1459 1459 caps++; 1460 1460 if (sdvo_priv->caps.output_flags & 1461 - (SDVO_OUTPUT_SVID0 | SDVO_OUTPUT_SVID0)) 1461 + (SDVO_OUTPUT_SVID0 | SDVO_OUTPUT_SVID1)) 1462 1462 caps++; 1463 1463 if (sdvo_priv->caps.output_flags & 1464 1464 (SDVO_OUTPUT_CVBS0 | SDVO_OUTPUT_CVBS1)) ··· 1967 1967 intel_sdvo_set_colorimetry(intel_output, 1968 1968 SDVO_COLORIMETRY_RGB256); 1969 1969 connector->connector_type = DRM_MODE_CONNECTOR_HDMIA; 1970 + intel_output->clone_mask = 1971 + (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 1972 + (1 << INTEL_ANALOG_CLONE_BIT); 1970 1973 } 1971 1974 } else if (flags & SDVO_OUTPUT_SVID0) { 1972 1975 ··· 1978 1975 connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO; 1979 1976 sdvo_priv->is_tv = true; 1980 1977 intel_output->needs_tv_clock = true; 1978 + intel_output->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT; 1981 1979 } else if (flags & SDVO_OUTPUT_RGB0) { 1982 1980 1983 1981 sdvo_priv->controlled_output = SDVO_OUTPUT_RGB0; 1984 1982 encoder->encoder_type = DRM_MODE_ENCODER_DAC; 1985 1983 connector->connector_type = DRM_MODE_CONNECTOR_VGA; 1984 + intel_output->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 1985 + (1 << INTEL_ANALOG_CLONE_BIT); 1986 1986 } else if (flags & SDVO_OUTPUT_RGB1) { 1987 1987 1988 1988 sdvo_priv->controlled_output = SDVO_OUTPUT_RGB1; ··· 1997 1991 encoder->encoder_type = DRM_MODE_ENCODER_LVDS; 1998 1992 connector->connector_type = DRM_MODE_CONNECTOR_LVDS; 1999 1993 sdvo_priv->is_lvds = true; 1994 + intel_output->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) | 1995 + (1 << INTEL_SDVO_LVDS_CLONE_BIT); 2000 1996 } else if (flags & SDVO_OUTPUT_LVDS1) { 2001 1997 2002 1998 sdvo_priv->controlled_output = SDVO_OUTPUT_LVDS1; 2003 1999 encoder->encoder_type = DRM_MODE_ENCODER_LVDS; 2004 2000 connector->connector_type = DRM_MODE_CONNECTOR_LVDS; 2005 2001 sdvo_priv->is_lvds = true; 2002 + intel_output->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) | 2003 + (1 << INTEL_SDVO_LVDS_CLONE_BIT); 
2006 2004 } else { 2007 2005 2008 2006 unsigned char bytes[2]; ··· 2019 2009 bytes[0], bytes[1]); 2020 2010 ret = false; 2021 2011 } 2012 + intel_output->crtc_mask = (1 << 0) | (1 << 1); 2022 2013 2023 2014 if (ret && registered) 2024 2015 ret = drm_sysfs_connector_add(connector) == 0 ? true : false;
+2
drivers/gpu/drm/i915/intel_tv.c
··· 1718 1718 if (!intel_output) { 1719 1719 return; 1720 1720 } 1721 + 1721 1722 connector = &intel_output->base; 1722 1723 1723 1724 drm_connector_init(dev, connector, &intel_tv_connector_funcs, ··· 1730 1729 drm_mode_connector_attach_encoder(&intel_output->base, &intel_output->enc); 1731 1730 tv_priv = (struct intel_tv_priv *)(intel_output + 1); 1732 1731 intel_output->type = INTEL_OUTPUT_TVOUT; 1732 + intel_output->clone_mask = (1 << INTEL_TV_CLONE_BIT); 1733 1733 intel_output->enc.possible_crtcs = ((1 << 0) | (1 << 1)); 1734 1734 intel_output->enc.possible_clones = (1 << INTEL_OUTPUT_TVOUT); 1735 1735 intel_output->dev_priv = tv_priv;
+76 -20
drivers/gpu/drm/radeon/r100.c
··· 254 254 255 255 256 256 /* 257 + * Interrupts 258 + */ 259 + int r100_irq_set(struct radeon_device *rdev) 260 + { 261 + uint32_t tmp = 0; 262 + 263 + if (rdev->irq.sw_int) { 264 + tmp |= RADEON_SW_INT_ENABLE; 265 + } 266 + if (rdev->irq.crtc_vblank_int[0]) { 267 + tmp |= RADEON_CRTC_VBLANK_MASK; 268 + } 269 + if (rdev->irq.crtc_vblank_int[1]) { 270 + tmp |= RADEON_CRTC2_VBLANK_MASK; 271 + } 272 + WREG32(RADEON_GEN_INT_CNTL, tmp); 273 + return 0; 274 + } 275 + 276 + static inline uint32_t r100_irq_ack(struct radeon_device *rdev) 277 + { 278 + uint32_t irqs = RREG32(RADEON_GEN_INT_STATUS); 279 + uint32_t irq_mask = RADEON_SW_INT_TEST | RADEON_CRTC_VBLANK_STAT | 280 + RADEON_CRTC2_VBLANK_STAT; 281 + 282 + if (irqs) { 283 + WREG32(RADEON_GEN_INT_STATUS, irqs); 284 + } 285 + return irqs & irq_mask; 286 + } 287 + 288 + int r100_irq_process(struct radeon_device *rdev) 289 + { 290 + uint32_t status; 291 + 292 + status = r100_irq_ack(rdev); 293 + if (!status) { 294 + return IRQ_NONE; 295 + } 296 + while (status) { 297 + /* SW interrupt */ 298 + if (status & RADEON_SW_INT_TEST) { 299 + radeon_fence_process(rdev); 300 + } 301 + /* Vertical blank interrupts */ 302 + if (status & RADEON_CRTC_VBLANK_STAT) { 303 + drm_handle_vblank(rdev->ddev, 0); 304 + } 305 + if (status & RADEON_CRTC2_VBLANK_STAT) { 306 + drm_handle_vblank(rdev->ddev, 1); 307 + } 308 + status = r100_irq_ack(rdev); 309 + } 310 + return IRQ_HANDLED; 311 + } 312 + 313 + u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc) 314 + { 315 + if (crtc == 0) 316 + return RREG32(RADEON_CRTC_CRNT_FRAME); 317 + else 318 + return RREG32(RADEON_CRTC2_CRNT_FRAME); 319 + } 320 + 321 + 322 + /* 257 323 * Fence emission 258 324 */ 259 325 void r100_fence_ring_emit(struct radeon_device *rdev, ··· 1091 1025 tmp |= tile_flags; 1092 1026 ib[idx] = tmp; 1093 1027 break; 1028 + case RADEON_RB3D_ZPASS_ADDR: 1029 + r = r100_cs_packet_next_reloc(p, &reloc); 1030 + if (r) { 1031 + DRM_ERROR("No reloc for ib[%d]=0x%04X\n", 
1032 + idx, reg); 1033 + r100_cs_dump_packet(p, pkt); 1034 + return r; 1035 + } 1036 + ib[idx] = ib_chunk->kdata[idx] + ((u32)reloc->lobj.gpu_offset); 1037 + break; 1094 1038 default: 1095 1039 /* FIXME: we don't want to allow anyothers packet */ 1096 1040 break; ··· 1630 1554 r100_pll_errata_after_index(rdev); 1631 1555 WREG32(RADEON_CLOCK_CNTL_DATA, v); 1632 1556 r100_pll_errata_after_data(rdev); 1633 - } 1634 - 1635 - uint32_t r100_mm_rreg(struct radeon_device *rdev, uint32_t reg) 1636 - { 1637 - if (reg < 0x10000) 1638 - return readl(((void __iomem *)rdev->rmmio) + reg); 1639 - else { 1640 - writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX); 1641 - return readl(((void __iomem *)rdev->rmmio) + RADEON_MM_DATA); 1642 - } 1643 - } 1644 - 1645 - void r100_mm_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 1646 - { 1647 - if (reg < 0x10000) 1648 - writel(v, ((void __iomem *)rdev->rmmio) + reg); 1649 - else { 1650 - writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX); 1651 - writel(v, ((void __iomem *)rdev->rmmio) + RADEON_MM_DATA); 1652 - } 1653 1557 } 1654 1558 1655 1559 int r100_init(struct radeon_device *rdev)
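The new `r100_irq_process()` above uses the classic ack-loop shape: read-and-clear the status register, dispatch each pending source, and re-read until the register is quiet so interrupts raised mid-handler are not lost. A toy model of that loop, with the "registers" as plain variables and all names illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define SW_INT_TEST  (1u << 0)   /* stands in for RADEON_SW_INT_TEST */
#define CRTC_VBLANK  (1u << 1)   /* stands in for RADEON_CRTC_VBLANK_STAT */

static uint32_t fake_int_status;   /* stands in for RADEON_GEN_INT_STATUS */
static int fences_processed;
static int vblanks_handled;

/* Like r100_irq_ack(): reading returns the pending bits, and the
 * write-back clears them. */
static uint32_t fake_irq_ack(void)
{
    uint32_t irqs = fake_int_status;
    fake_int_status = 0;
    return irqs;
}

static int fake_irq_process(void)
{
    uint32_t status = fake_irq_ack();

    if (!status)
        return 0;                     /* IRQ_NONE */
    while (status) {
        if (status & SW_INT_TEST)
            fences_processed++;       /* radeon_fence_process() in the diff */
        if (status & CRTC_VBLANK)
            vblanks_handled++;        /* drm_handle_vblank() in the diff */
        status = fake_irq_ack();      /* catch sources raised meanwhile */
    }
    return 1;                         /* IRQ_HANDLED */
}

/* Helper to drive the model from the checks below. */
static int demo_run(uint32_t initial)
{
    fake_int_status = initial;
    fences_processed = vblanks_handled = 0;
    return fake_irq_process();
}
```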
+19 -23
drivers/gpu/drm/radeon/r300.c
··· 83 83 WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp | RADEON_PCIE_TX_GART_INVALIDATE_TLB); 84 84 (void)RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL); 85 85 WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp); 86 - mb(); 87 86 } 87 + mb(); 88 88 } 89 89 90 90 int rv370_pcie_gart_enable(struct radeon_device *rdev)
··· 448 448 /* rv350,rv370,rv380 */ 449 449 rdev->num_gb_pipes = 1; 450 450 } 451 + rdev->num_z_pipes = 1; 451 452 gb_tile_config = (R300_ENABLE_TILING | R300_TILE_SIZE_16); 452 453 switch (rdev->num_gb_pipes) { 453 454 case 2:
··· 487 486 printk(KERN_WARNING "Failed to wait MC idle while " 488 487 "programming pipes. Bad things might happen.\n"); 489 488 } 490 - DRM_INFO("radeon: %d pipes initialized.\n", rdev->num_gb_pipes); 489 + DRM_INFO("radeon: %d quad pipes, %d Z pipes initialized.\n", 490 + rdev->num_gb_pipes, rdev->num_z_pipes); 491 491 } 492 492 493 493 int r300_ga_reset(struct radeon_device *rdev)
··· 593 591 r100_vram_init_sizes(rdev); 594 592 } 595 593 596 - 597 - /* 598 - * Indirect registers accessor 599 - */ 600 - uint32_t rv370_pcie_rreg(struct radeon_device *rdev, uint32_t reg) 601 - { 602 - uint32_t r; 603 - 604 - WREG8(RADEON_PCIE_INDEX, ((reg) & 0xff)); 605 - (void)RREG32(RADEON_PCIE_INDEX); 606 - r = RREG32(RADEON_PCIE_DATA); 607 - return r; 608 - } 609 - 610 - void rv370_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 611 - { 612 - WREG8(RADEON_PCIE_INDEX, ((reg) & 0xff)); 613 - (void)RREG32(RADEON_PCIE_INDEX); 614 - WREG32(RADEON_PCIE_DATA, (v)); 615 - (void)RREG32(RADEON_PCIE_DATA); 616 - } 617 594 618 595 /* 619 596 * PCIE Lanes
··· 1384 1403 tmp = (ib_chunk->kdata[idx] >> 22) & 0xF; 1385 1404 track->textures[i].txdepth = tmp; 1386 1405 break; 1406 + case R300_ZB_ZPASS_ADDR: 1407 + r = r100_cs_packet_next_reloc(p, &reloc); 1408 + if (r) { 1409 + DRM_ERROR("No reloc for ib[%d]=0x%04X\n", 1410 + idx, reg); 1411 + r100_cs_dump_packet(p, pkt); 1412 + return r; 1413 + } 1414 + ib[idx] = ib_chunk->kdata[idx] + ((u32)reloc->lobj.gpu_offset); 1415 + break; 1416 + case 0x4be8: 1417 + /* valid register only on RV530 */ 1418 + if (p->rdev->family == CHIP_RV530) 1419 + break; 1420 + /* fallthrough do not move */ 1387 1421 default: 1388 1422 printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n", 1389 1423 reg, idx);
+12 -1
drivers/gpu/drm/radeon/r420.c
··· 165 165 printk(KERN_WARNING "Failed to wait GUI idle while " 166 166 "programming pipes. Bad things might happen.\n"); 167 167 } 168 - DRM_INFO("radeon: %d pipes initialized.\n", rdev->num_gb_pipes); 168 + 169 + if (rdev->family == CHIP_RV530) { 170 + tmp = RREG32(RV530_GB_PIPE_SELECT2); 171 + if ((tmp & 3) == 3) 172 + rdev->num_z_pipes = 2; 173 + else 174 + rdev->num_z_pipes = 1; 175 + } else 176 + rdev->num_z_pipes = 1; 177 + 178 + DRM_INFO("radeon: %d quad pipes, %d z pipes initialized.\n", 179 + rdev->num_gb_pipes, rdev->num_z_pipes); 169 180 } 170 181 171 182 void r420_gpu_init(struct radeon_device *rdev)
+12 -4
drivers/gpu/drm/radeon/r500_reg.h
··· 350 350 #define AVIVO_D1CRTC_BLANK_CONTROL 0x6084 351 351 #define AVIVO_D1CRTC_INTERLACE_CONTROL 0x6088 352 352 #define AVIVO_D1CRTC_INTERLACE_STATUS 0x608c 353 + #define AVIVO_D1CRTC_FRAME_COUNT 0x60a4 353 354 #define AVIVO_D1CRTC_STEREO_CONTROL 0x60c4 354 355 355 356 /* master controls */
··· 439 438 # define AVIVO_DC_LB_DISP1_END_ADR_SHIFT 4 440 439 # define AVIVO_DC_LB_DISP1_END_ADR_MASK 0x7ff 441 440 442 - #define R500_DxMODE_INT_MASK 0x6540 443 - #define R500_D1MODE_INT_MASK (1<<0) 444 - #define R500_D2MODE_INT_MASK (1<<8) 445 - 446 441 #define AVIVO_D1MODE_DATA_FORMAT 0x6528 447 442 # define AVIVO_D1MODE_INTERLEAVE_EN (1 << 0) 448 443 #define AVIVO_D1MODE_DESKTOP_HEIGHT 0x652C 444 + #define AVIVO_D1MODE_VBLANK_STATUS 0x6534 445 + # define AVIVO_VBLANK_ACK (1 << 4) 449 446 #define AVIVO_D1MODE_VLINE_START_END 0x6538 447 + #define AVIVO_DxMODE_INT_MASK 0x6540 448 + # define AVIVO_D1MODE_INT_MASK (1 << 0) 449 + # define AVIVO_D2MODE_INT_MASK (1 << 8) 450 450 #define AVIVO_D1MODE_VIEWPORT_START 0x6580 451 451 #define AVIVO_D1MODE_VIEWPORT_SIZE 0x6584 452 452 #define AVIVO_D1MODE_EXT_OVERSCAN_LEFT_RIGHT 0x6588
··· 477 475 #define AVIVO_D2CRTC_BLANK_CONTROL 0x6884 478 476 #define AVIVO_D2CRTC_INTERLACE_CONTROL 0x6888 479 477 #define AVIVO_D2CRTC_INTERLACE_STATUS 0x688c 478 + #define AVIVO_D2CRTC_FRAME_COUNT 0x68a4 480 479 #define AVIVO_D2CRTC_STEREO_CONTROL 0x68c4 481 480 482 481 #define AVIVO_D2GRPH_ENABLE 0x6900
··· 500 497 #define AVIVO_D2CUR_SIZE 0x6c10 501 498 #define AVIVO_D2CUR_POSITION 0x6c14 502 499 500 + #define AVIVO_D2MODE_VBLANK_STATUS 0x6d34 503 501 #define AVIVO_D2MODE_VLINE_START_END 0x6d38 504 502 #define AVIVO_D2MODE_VIEWPORT_START 0x6d80 505 503 #define AVIVO_D2MODE_VIEWPORT_SIZE 0x6d84
··· 751 747 #define AVIVO_I2C_CNTL 0x7d50 752 748 # define AVIVO_I2C_EN (1 << 0) 753 749 # define AVIVO_I2C_RESET (1 << 8) 750 + 751 + #define AVIVO_DISP_INTERRUPT_STATUS 0x7edc 752 + # define AVIVO_D1_VBLANK_INTERRUPT (1 << 4) 753 + # define AVIVO_D2_VBLANK_INTERRUPT (1 << 5) 754 754 755 755 #endif
-1
drivers/gpu/drm/radeon/r520.c
··· 177 177 */ 178 178 /* workaround for RV530 */ 179 179 if (rdev->family == CHIP_RV530) { 180 - WREG32(0x4124, 1); 181 180 WREG32(0x4128, 0xFF); 182 181 } 183 182 r420_pipes_init(rdev);
+47 -8
drivers/gpu/drm/radeon/radeon.h
··· 242 242 uint64_t *gpu_addr); 243 243 void radeon_object_unpin(struct radeon_object *robj); 244 244 int radeon_object_wait(struct radeon_object *robj); 245 + int radeon_object_busy_domain(struct radeon_object *robj, uint32_t *cur_placement); 245 246 int radeon_object_evict_vram(struct radeon_device *rdev); 246 247 int radeon_object_mmap(struct radeon_object *robj, uint64_t *offset); 247 248 void radeon_object_force_delete(struct radeon_device *rdev);
··· 575 574 void (*ring_start)(struct radeon_device *rdev); 576 575 int (*irq_set)(struct radeon_device *rdev); 577 576 int (*irq_process)(struct radeon_device *rdev); 577 + u32 (*get_vblank_counter)(struct radeon_device *rdev, int crtc); 578 578 void (*fence_ring_emit)(struct radeon_device *rdev, struct radeon_fence *fence); 579 579 int (*cs_parse)(struct radeon_cs_parser *p); 580 580 int (*copy_blit)(struct radeon_device *rdev,
··· 655 653 int usec_timeout; 656 654 enum radeon_pll_errata pll_errata; 657 655 int num_gb_pipes; 656 + int num_z_pipes; 658 657 int disp_priority; 659 658 /* BIOS */ 660 659 uint8_t *bios;
··· 669 666 resource_size_t rmmio_base; 670 667 resource_size_t rmmio_size; 671 668 void *rmmio; 672 - radeon_rreg_t mm_rreg; 673 - radeon_wreg_t mm_wreg; 674 669 radeon_rreg_t mc_rreg; 675 670 radeon_wreg_t mc_wreg; 676 671 radeon_rreg_t pll_rreg; 677 672 radeon_wreg_t pll_wreg; 678 - radeon_rreg_t pcie_rreg; 679 - radeon_wreg_t pcie_wreg; 673 + uint32_t pcie_reg_mask; 680 674 radeon_rreg_t pciep_rreg; 681 675 radeon_wreg_t pciep_wreg; 682 676 struct radeon_clock clock;
··· 705 705 void radeon_device_fini(struct radeon_device *rdev); 706 706 int radeon_gpu_wait_for_idle(struct radeon_device *rdev); 707 707 708 + static inline uint32_t r100_mm_rreg(struct radeon_device *rdev, uint32_t reg) 709 + { 710 + if (reg < 0x10000) 711 + return readl(((void __iomem *)rdev->rmmio) + reg); 712 + else { 713 + writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX); 714 + return readl(((void __iomem *)rdev->rmmio) + RADEON_MM_DATA); 715 + } 716 + } 717 + 718 + static inline void r100_mm_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 719 + { 720 + if (reg < 0x10000) 721 + writel(v, ((void __iomem *)rdev->rmmio) + reg); 722 + else { 723 + writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX); 724 + writel(v, ((void __iomem *)rdev->rmmio) + RADEON_MM_DATA); 725 + } 726 + } 727 + 708 728 709 729 /* 710 730 * Registers read & write functions. 711 731 */ 712 732 #define RREG8(reg) readb(((void __iomem *)rdev->rmmio) + (reg)) 713 733 #define WREG8(reg, v) writeb(v, ((void __iomem *)rdev->rmmio) + (reg)) 714 - #define RREG32(reg) rdev->mm_rreg(rdev, (reg)) 715 - #define WREG32(reg, v) rdev->mm_wreg(rdev, (reg), (v)) 734 + #define RREG32(reg) r100_mm_rreg(rdev, (reg)) 735 + #define WREG32(reg, v) r100_mm_wreg(rdev, (reg), (v)) 716 736 #define REG_SET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK) 717 737 #define REG_GET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK) 718 738 #define RREG32_PLL(reg) rdev->pll_rreg(rdev, (reg)) 719 739 #define WREG32_PLL(reg, v) rdev->pll_wreg(rdev, (reg), (v)) 720 740 #define RREG32_MC(reg) rdev->mc_rreg(rdev, (reg)) 721 741 #define WREG32_MC(reg, v) rdev->mc_wreg(rdev, (reg), (v)) 722 - #define RREG32_PCIE(reg) rdev->pcie_rreg(rdev, (reg)) 723 - #define WREG32_PCIE(reg, v) rdev->pcie_wreg(rdev, (reg), (v)) 742 + #define RREG32_PCIE(reg) rv370_pcie_rreg(rdev, (reg)) 743 + #define WREG32_PCIE(reg, v) rv370_pcie_wreg(rdev, (reg), (v)) 724 744 #define WREG32_P(reg, val, mask) \ 725 745 do { \ 726 746 uint32_t tmp_ = RREG32(reg); \
··· 755 735 tmp_ |= ((val) & ~(mask)); \ 756 736 WREG32_PLL(reg, tmp_); \ 757 737 } while (0) 738 + 739 + /* 740 + * Indirect registers accessor 741 + */ 742 + static inline uint32_t rv370_pcie_rreg(struct radeon_device *rdev, uint32_t reg) 743 + { 744 + uint32_t r; 745 + 746 + WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask)); 747 + r = RREG32(RADEON_PCIE_DATA); 748 + return r; 749 + } 750 + 751 + static inline void rv370_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 752 + { 753 + WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask)); 754 + WREG32(RADEON_PCIE_DATA, (v)); 755 + } 758 756 759 757 void r100_pll_errata_after_index(struct radeon_device *rdev); 760 758
··· 900 862 #define radeon_ring_start(rdev) (rdev)->asic->ring_start((rdev)) 901 863 #define radeon_irq_set(rdev) (rdev)->asic->irq_set((rdev)) 902 864 #define radeon_irq_process(rdev) (rdev)->asic->irq_process((rdev)) 865 + #define radeon_get_vblank_counter(rdev, crtc) (rdev)->asic->get_vblank_counter((rdev), (crtc)) 903 866 #define radeon_fence_ring_emit(rdev, fence) (rdev)->asic->fence_ring_emit((rdev), (fence)) 904 867 #define radeon_copy_blit(rdev, s, d, np, f) (rdev)->asic->copy_blit((rdev), (s), (d), (np), (f)) 905 868 #define radeon_copy_dma(rdev, s, d, np, f) (rdev)->asic->copy_dma((rdev), (s), (d), (np), (f))
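The inlined `rv370_pcie_rreg()`/`rv370_pcie_wreg()` above use the standard index/data indirect-access pattern: write the (masked) register number into an INDEX register, then read or write the DATA register. A standalone model of that pattern with the MMIO space mocked as plain arrays (all names and sizes here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define IDX_REG   0   /* stands in for RADEON_PCIE_INDEX */
#define DATA_REG  1   /* stands in for RADEON_PCIE_DATA */

static uint32_t mmio[2];             /* the two directly mapped registers */
static uint32_t backing[0x800];      /* the indirectly reached register file */
static uint32_t pcie_reg_mask = 0x7ff;   /* 0xff pre-RV515, 0x7ff after, per the diff */

static void mock_wreg(uint32_t reg, uint32_t v)
{
    mmio[reg] = v;
    if (reg == DATA_REG)
        backing[mmio[IDX_REG]] = v;  /* a DATA write lands at the indexed slot */
}

static uint32_t mock_rreg(uint32_t reg)
{
    if (reg == DATA_REG)
        return backing[mmio[IDX_REG]];
    return mmio[reg];
}

/* The accessor pair: select the slot via INDEX, then touch DATA. */
static void pcie_wreg(uint32_t reg, uint32_t v)
{
    mock_wreg(IDX_REG, reg & pcie_reg_mask);
    mock_wreg(DATA_REG, v);
}

static uint32_t pcie_rreg(uint32_t reg)
{
    mock_wreg(IDX_REG, reg & pcie_reg_mask);
    return mock_rreg(DATA_REG);
}

/* Write then read back, possibly through an aliased register number. */
static uint32_t demo_roundtrip(uint32_t reg, uint32_t v, uint32_t readback_reg)
{
    pcie_wreg(reg, v);
    return pcie_rreg(readback_reg);
}
```

Note how `pcie_reg_mask` also models the refactor in this hunk: instead of separate accessor function pointers per family, one accessor pair plus a per-family mask suffices.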
+19 -7
drivers/gpu/drm/radeon/radeon_asic.h
··· 49 49 int r100_gpu_reset(struct radeon_device *rdev); 50 50 int r100_mc_init(struct radeon_device *rdev); 51 51 void r100_mc_fini(struct radeon_device *rdev); 52 + u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc); 52 53 int r100_wb_init(struct radeon_device *rdev); 53 54 void r100_wb_fini(struct radeon_device *rdev); 54 55 int r100_gart_enable(struct radeon_device *rdev);
··· 97 96 .ring_start = &r100_ring_start, 98 97 .irq_set = &r100_irq_set, 99 98 .irq_process = &r100_irq_process, 99 + .get_vblank_counter = &r100_get_vblank_counter, 100 100 .fence_ring_emit = &r100_fence_ring_emit, 101 101 .cs_parse = &r100_cs_parse, 102 102 .copy_blit = &r100_copy_blit,
··· 158 156 .ring_start = &r300_ring_start, 159 157 .irq_set = &r100_irq_set, 160 158 .irq_process = &r100_irq_process, 159 + .get_vblank_counter = &r100_get_vblank_counter, 161 160 .fence_ring_emit = &r300_fence_ring_emit, 162 161 .cs_parse = &r300_cs_parse, 163 162 .copy_blit = &r100_copy_blit,
··· 199 196 .ring_start = &r300_ring_start, 200 197 .irq_set = &r100_irq_set, 201 198 .irq_process = &r100_irq_process, 199 + .get_vblank_counter = &r100_get_vblank_counter, 202 200 .fence_ring_emit = &r300_fence_ring_emit, 203 201 .cs_parse = &r300_cs_parse, 204 202 .copy_blit = &r100_copy_blit,
··· 247 243 .ring_start = &r300_ring_start, 248 244 .irq_set = &r100_irq_set, 249 245 .irq_process = &r100_irq_process, 246 + .get_vblank_counter = &r100_get_vblank_counter, 250 247 .fence_ring_emit = &r300_fence_ring_emit, 251 248 .cs_parse = &r300_cs_parse, 252 249 .copy_blit = &r100_copy_blit,
··· 271 266 int rs600_mc_init(struct radeon_device *rdev); 272 267 void rs600_mc_fini(struct radeon_device *rdev); 273 268 int rs600_irq_set(struct radeon_device *rdev); 269 + int rs600_irq_process(struct radeon_device *rdev); 270 + u32 rs600_get_vblank_counter(struct radeon_device *rdev, int crtc); 274 271 int rs600_gart_enable(struct radeon_device *rdev); 275 272 void rs600_gart_disable(struct radeon_device *rdev); 276 273 void rs600_gart_tlb_flush(struct radeon_device *rdev);
··· 298 291 .cp_disable = &r100_cp_disable, 299 292 .ring_start = &r300_ring_start, 300 293 .irq_set = &rs600_irq_set, 301 - .irq_process = &r100_irq_process, 294 + .irq_process = &rs600_irq_process, 295 + .get_vblank_counter = &rs600_get_vblank_counter, 302 296 .fence_ring_emit = &r300_fence_ring_emit, 303 297 .cs_parse = &r300_cs_parse, 304 298 .copy_blit = &r100_copy_blit,
··· 316 308 /* 317 309 * rs690,rs740 318 310 */ 311 + int rs690_init(struct radeon_device *rdev); 319 312 void rs690_errata(struct radeon_device *rdev); 320 313 void rs690_vram_info(struct radeon_device *rdev); 321 314 int rs690_mc_init(struct radeon_device *rdev);
··· 325 316 void rs690_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v); 326 317 void rs690_bandwidth_update(struct radeon_device *rdev); 327 318 static struct radeon_asic rs690_asic = { 328 - .init = &r300_init, 319 + .init = &rs690_init, 329 320 .errata = &rs690_errata, 330 321 .vram_info = &rs690_vram_info, 331 322 .gpu_reset = &r300_gpu_reset,
··· 342 333 .cp_disable = &r100_cp_disable, 343 334 .ring_start = &r300_ring_start, 344 335 .irq_set = &rs600_irq_set, 345 - .irq_process = &r100_irq_process, 336 + .irq_process = &rs600_irq_process, 337 + .get_vblank_counter = &rs600_get_vblank_counter, 346 338 .fence_ring_emit = &r300_fence_ring_emit, 347 339 .cs_parse = &r300_cs_parse, 348 340 .copy_blit = &r100_copy_blit,
··· 391 381 .cp_fini = &r100_cp_fini, 392 382 .cp_disable = &r100_cp_disable, 393 383 .ring_start = &rv515_ring_start, 394 - .irq_set = &r100_irq_set, 395 - .irq_process = &r100_irq_process, 384 + .irq_set = &rs600_irq_set, 385 + .irq_process = &rs600_irq_process, 386 + .get_vblank_counter = &rs600_get_vblank_counter, 396 387 .fence_ring_emit = &r300_fence_ring_emit, 397 388 .cs_parse = &r300_cs_parse, 398 389 .copy_blit = &r100_copy_blit,
··· 434 423 .cp_fini = &r100_cp_fini, 435 424 .cp_disable = &r100_cp_disable, 436 425 .ring_start = &rv515_ring_start, 437 - .irq_set = &r100_irq_set, 438 - .irq_process = &r100_irq_process, 426 + .irq_set = &rs600_irq_set, 427 + .irq_process = &rs600_irq_process, 428 + .get_vblank_counter = &rs600_get_vblank_counter, 439 429 .fence_ring_emit = &r300_fence_ring_emit, 440 430 .cs_parse = &r300_cs_parse, 441 431 .copy_blit = &r100_copy_blit,
+19 -29
drivers/gpu/drm/radeon/radeon_combios.c
··· 685 685 0x00780000, /* rs480 */ 686 686 }; 687 687 688 - static struct radeon_encoder_tv_dac 689 - *radeon_legacy_get_tv_dac_info_from_table(struct radeon_device *rdev) 688 + static void radeon_legacy_get_tv_dac_info_from_table(struct radeon_device *rdev, 689 + struct radeon_encoder_tv_dac *tv_dac) 690 690 { 691 - struct radeon_encoder_tv_dac *tv_dac = NULL; 692 - 693 - tv_dac = kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL); 694 - 695 - if (!tv_dac) 696 - return NULL; 697 - 698 691 tv_dac->ps2_tvdac_adj = default_tvdac_adj[rdev->family]; 699 692 if ((rdev->flags & RADEON_IS_MOBILITY) && (rdev->family == CHIP_RV250)) 700 693 tv_dac->ps2_tvdac_adj = 0x00880000; 701 694 tv_dac->pal_tvdac_adj = tv_dac->ps2_tvdac_adj; 702 695 tv_dac->ntsc_tvdac_adj = tv_dac->ps2_tvdac_adj; 703 - 704 - return tv_dac; 696 + return; 705 697 } 706 698 707 699 struct radeon_encoder_tv_dac *radeon_combios_get_tv_dac_info(struct
··· 705 713 uint16_t dac_info; 706 714 uint8_t rev, bg, dac; 707 715 struct radeon_encoder_tv_dac *tv_dac = NULL; 716 + int found = 0; 717 + 718 + tv_dac = kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL); 719 + if (!tv_dac) 720 + return NULL; 708 721 709 722 if (rdev->bios == NULL) 710 - return radeon_legacy_get_tv_dac_info_from_table(rdev); 723 + goto out; 711 724 712 725 /* first check TV table */ 713 726 dac_info = combios_get_table_offset(dev, COMBIOS_TV_INFO_TABLE); 714 727 if (dac_info) { 715 - tv_dac = 716 - kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL); 717 - 718 - if (!tv_dac) 719 - return NULL; 720 - 721 728 rev = RBIOS8(dac_info + 0x3); 722 729 if (rev > 4) { 723 730 bg = RBIOS8(dac_info + 0xc) & 0xf;
··· 730 739 bg = RBIOS8(dac_info + 0x10) & 0xf; 731 740 dac = RBIOS8(dac_info + 0x11) & 0xf; 732 741 tv_dac->ntsc_tvdac_adj = (bg << 16) | (dac << 20); 742 + found = 1; 733 743 } else if (rev > 1) { 734 744 bg = RBIOS8(dac_info + 0xc) & 0xf; 735 745 dac = (RBIOS8(dac_info + 0xc) >> 4) & 0xf;
··· 743 751 bg = RBIOS8(dac_info + 0xe) & 0xf; 744 752 dac = (RBIOS8(dac_info + 0xe) >> 4) & 0xf; 745 753 tv_dac->ntsc_tvdac_adj = (bg << 16) | (dac << 20); 754 + found = 1; 746 755 } 747 - 748 756 tv_dac->tv_std = radeon_combios_get_tv_info(encoder); 749 - 750 - } else { 757 + } 758 + if (!found) { 751 759 /* then check CRT table */ 752 760 dac_info = 753 761 combios_get_table_offset(dev, COMBIOS_CRT_INFO_TABLE); 754 762 if (dac_info) { 755 - tv_dac = 756 - kzalloc(sizeof(struct radeon_encoder_tv_dac), 757 - GFP_KERNEL); 758 - 759 - if (!tv_dac) 760 - return NULL; 761 - 762 763 rev = RBIOS8(dac_info) & 0x3; 763 764 if (rev < 2) { 764 765 bg = RBIOS8(dac_info + 0x3) & 0xf;
··· 760 775 (bg << 16) | (dac << 20); 761 776 tv_dac->pal_tvdac_adj = tv_dac->ps2_tvdac_adj; 762 777 tv_dac->ntsc_tvdac_adj = tv_dac->ps2_tvdac_adj; 778 + found = 1; 763 779 } else { 764 780 bg = RBIOS8(dac_info + 0x4) & 0xf; 765 781 dac = RBIOS8(dac_info + 0x5) & 0xf;
··· 768 782 (bg << 16) | (dac << 20); 769 783 tv_dac->pal_tvdac_adj = tv_dac->ps2_tvdac_adj; 770 784 tv_dac->ntsc_tvdac_adj = tv_dac->ps2_tvdac_adj; 785 + found = 1; 771 786 } 772 787 } else { 773 788 DRM_INFO("No TV DAC info found in BIOS\n"); 774 - return radeon_legacy_get_tv_dac_info_from_table(rdev); 775 789 } 776 790 } 791 + 792 + out: 793 + if (!found) /* fallback to defaults */ 794 + radeon_legacy_get_tv_dac_info_from_table(rdev, tv_dac); 777 795 778 796 return tv_dac; 779 797 }
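The radeon_combios.c rework above changes the control flow to allocate-once, probe the BIOS tables, and fall back to per-family defaults via a single `found` flag and `goto out`. A tiny standalone model of that shape (the probe and the adjustment values are stand-ins, not real table parsing):

```c
#include <assert.h>
#include <stdint.h>

struct tv_dac { uint32_t ps2_adj; };

/* Pretend TV-table probe: fills the struct and reports success only
 * when a table is "present". The value is made up for the demo. */
static int probe_tv_table(struct tv_dac *d, int present)
{
    if (!present)
        return 0;
    d->ps2_adj = 0x00280000;
    return 1;
}

/* Analogue of radeon_legacy_get_tv_dac_info_from_table(): fill in
 * defaults; 0x00780000 is one of the per-family values in the diff. */
static void defaults_from_table(struct tv_dac *d)
{
    d->ps2_adj = 0x00780000;
}

/* Allocate-once / probe / fall-back shape of the reworked function. */
static uint32_t demo_lookup(int tv_table_present)
{
    struct tv_dac d = { 0 };
    int found = probe_tv_table(&d, tv_table_present);

    if (!found)                 /* fallback to defaults, as at the out: label */
        defaults_from_table(&d);
    return d.ps2_adj;
}
```

The point of the refactor is that the allocation happens exactly once up front, so none of the probe branches need their own `kzalloc`/`NULL` handling.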
+9
drivers/gpu/drm/radeon/radeon_cp.c
··· 406 406 { 407 407 uint32_t gb_tile_config, gb_pipe_sel = 0; 408 408 409 + if ((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RV530) { 410 + uint32_t z_pipe_sel = RADEON_READ(RV530_GB_PIPE_SELECT2); 411 + if ((z_pipe_sel & 3) == 3) 412 + dev_priv->num_z_pipes = 2; 413 + else 414 + dev_priv->num_z_pipes = 1; 415 + } else 416 + dev_priv->num_z_pipes = 1; 417 + 409 418 /* RS4xx/RS6xx/R4xx/R5xx */ 410 419 if ((dev_priv->flags & RADEON_FAMILY_MASK) >= CHIP_R420) { 411 420 gb_pipe_sel = RADEON_READ(R400_GB_PIPE_SELECT);
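The radeon_cp.c hunk above probes the number of Z pipes on RV530 by testing the low two bits of `RV530_GB_PIPE_SELECT2`. The decision lifted out as a pure function (a simplified sketch; the register read is replaced by a parameter):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the diff's logic: on RV530, both low bits of GB_PIPE_SELECT2
 * set means two Z pipes; any other reading, or any other chip handled
 * by this path, means one. */
static int num_z_pipes(int is_rv530, uint32_t z_pipe_sel)
{
    if (is_rv530 && (z_pipe_sel & 3) == 3)
        return 2;
    return 1;
}
```

The same test appears in r420.c's KMS path above; userspace can then fetch the value through the new GET_PARAM added by DRM minor version 1.31.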
+3 -10
drivers/gpu/drm/radeon/radeon_device.c
··· 225 225 226 226 void radeon_register_accessor_init(struct radeon_device *rdev) 227 227 { 228 - rdev->mm_rreg = &r100_mm_rreg; 229 - rdev->mm_wreg = &r100_mm_wreg; 230 228 rdev->mc_rreg = &radeon_invalid_rreg; 231 229 rdev->mc_wreg = &radeon_invalid_wreg; 232 230 rdev->pll_rreg = &radeon_invalid_rreg; 233 231 rdev->pll_wreg = &radeon_invalid_wreg; 234 - rdev->pcie_rreg = &radeon_invalid_rreg; 235 - rdev->pcie_wreg = &radeon_invalid_wreg; 236 232 rdev->pciep_rreg = &radeon_invalid_rreg; 237 233 rdev->pciep_wreg = &radeon_invalid_wreg; 238 234 239 235 /* Don't change order as we are overridding accessor. */ 240 236 if (rdev->family < CHIP_RV515) { 241 - rdev->pcie_rreg = &rv370_pcie_rreg; 242 - rdev->pcie_wreg = &rv370_pcie_wreg; 243 - } 244 - if (rdev->family >= CHIP_RV515) { 245 - rdev->pcie_rreg = &rv515_pcie_rreg; 246 - rdev->pcie_wreg = &rv515_pcie_wreg; 237 + rdev->pcie_reg_mask = 0xff; 238 + } else { 239 + rdev->pcie_reg_mask = 0x7ff; 247 240 } 248 241 /* FIXME: not sure here */ 249 242 if (rdev->family <= CHIP_R580) {
+4 -1
drivers/gpu/drm/radeon/radeon_drv.h
··· 100 100 * 1.28- Add support for VBL on CRTC2 101 101 * 1.29- R500 3D cmd buffer support 102 102 * 1.30- Add support for occlusion queries 103 + * 1.31- Add support for num Z pipes from GET_PARAM 103 104 */ 104 105 #define DRIVER_MAJOR 1 105 - #define DRIVER_MINOR 30 106 + #define DRIVER_MINOR 31 106 107 #define DRIVER_PATCHLEVEL 0 107 108 108 109 /* ··· 330 329 resource_size_t fb_aper_offset; 331 330 332 331 int num_gb_pipes; 332 + int num_z_pipes; 333 333 int track_flush; 334 334 drm_local_map_t *mmio; 335 335 ··· 691 689 692 690 /* pipe config regs */ 693 691 #define R400_GB_PIPE_SELECT 0x402c 692 + #define RV530_GB_PIPE_SELECT2 0x4124 694 693 #define R500_DYN_SCLK_PWMEM_PIPE 0x000d /* PLL */ 695 694 #define R300_GB_TILE_CONFIG 0x4018 696 695 # define R300_ENABLE_TILING (1 << 0)
+2
drivers/gpu/drm/radeon/radeon_fb.c
··· 574 574 goto out_unref; 575 575 } 576 576 577 + memset_io(fbptr, 0, aligned_size); 578 + 577 579 strcpy(info->fix.id, "radeondrmfb"); 578 580 info->fix.type = FB_TYPE_PACKED_PIXELS; 579 581 info->fix.visual = FB_VISUAL_TRUECOLOR;
+28 -2
drivers/gpu/drm/radeon/radeon_gem.c
··· 262 262 int radeon_gem_busy_ioctl(struct drm_device *dev, void *data, 263 263 struct drm_file *filp) 264 264 { 265 - /* FIXME: implement */ 266 - return 0; 265 + struct drm_radeon_gem_busy *args = data; 266 + struct drm_gem_object *gobj; 267 + struct radeon_object *robj; 268 + int r; 269 + uint32_t cur_placement; 270 + 271 + gobj = drm_gem_object_lookup(dev, filp, args->handle); 272 + if (gobj == NULL) { 273 + return -EINVAL; 274 + } 275 + robj = gobj->driver_private; 276 + r = radeon_object_busy_domain(robj, &cur_placement); 277 + switch (cur_placement) { 278 + case TTM_PL_VRAM: 279 + args->domain = RADEON_GEM_DOMAIN_VRAM; 280 + break; 281 + case TTM_PL_TT: 282 + args->domain = RADEON_GEM_DOMAIN_GTT; 283 + break; 284 + case TTM_PL_SYSTEM: 285 + args->domain = RADEON_GEM_DOMAIN_CPU; 286 + default: 287 + break; 288 + } 289 + mutex_lock(&dev->struct_mutex); 290 + drm_gem_object_unreference(gobj); 291 + mutex_unlock(&dev->struct_mutex); 292 + return r; 267 293 } 268 294 269 295 int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
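The new busy ioctl maps the object's current TTM placement onto a GEM domain flag for userspace. The mapping itself can be sketched as below; the constant names and values here are illustrative stand-ins (hence the `_X` suffixes), not the real TTM/GEM definitions:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the TTM placement and GEM domain
 * constants used by the busy ioctl; values are illustrative. */
enum { TTM_PL_SYSTEM_X = 0, TTM_PL_TT_X = 1, TTM_PL_VRAM_X = 2 };
#define GEM_DOMAIN_CPU_X  0x1u
#define GEM_DOMAIN_GTT_X  0x2u
#define GEM_DOMAIN_VRAM_X 0x4u

/* Translate a TTM memory type into the GEM domain reported back
 * to userspace, mirroring the switch in radeon_gem_busy_ioctl(). */
static uint32_t placement_to_domain(int cur_placement)
{
    switch (cur_placement) {
    case TTM_PL_VRAM_X:   return GEM_DOMAIN_VRAM_X;
    case TTM_PL_TT_X:     return GEM_DOMAIN_GTT_X;
    case TTM_PL_SYSTEM_X: return GEM_DOMAIN_CPU_X;
    default:              return 0;
    }
}
```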
-54
drivers/gpu/drm/radeon/radeon_irq_kms.c
··· 32 32 #include "radeon.h" 33 33 #include "atom.h" 34 34 35 - static inline uint32_t r100_irq_ack(struct radeon_device *rdev) 36 - { 37 - uint32_t irqs = RREG32(RADEON_GEN_INT_STATUS); 38 - uint32_t irq_mask = RADEON_SW_INT_TEST; 39 - 40 - if (irqs) { 41 - WREG32(RADEON_GEN_INT_STATUS, irqs); 42 - } 43 - return irqs & irq_mask; 44 - } 45 - 46 - int r100_irq_set(struct radeon_device *rdev) 47 - { 48 - uint32_t tmp = 0; 49 - 50 - if (rdev->irq.sw_int) { 51 - tmp |= RADEON_SW_INT_ENABLE; 52 - } 53 - /* Todo go through CRTC and enable vblank int or not */ 54 - WREG32(RADEON_GEN_INT_CNTL, tmp); 55 - return 0; 56 - } 57 - 58 - int r100_irq_process(struct radeon_device *rdev) 59 - { 60 - uint32_t status; 61 - 62 - status = r100_irq_ack(rdev); 63 - if (!status) { 64 - return IRQ_NONE; 65 - } 66 - while (status) { 67 - /* SW interrupt */ 68 - if (status & RADEON_SW_INT_TEST) { 69 - radeon_fence_process(rdev); 70 - } 71 - status = r100_irq_ack(rdev); 72 - } 73 - return IRQ_HANDLED; 74 - } 75 - 76 - int rs600_irq_set(struct radeon_device *rdev) 77 - { 78 - uint32_t tmp = 0; 79 - 80 - if (rdev->irq.sw_int) { 81 - tmp |= RADEON_SW_INT_ENABLE; 82 - } 83 - WREG32(RADEON_GEN_INT_CNTL, tmp); 84 - /* Todo go through CRTC and enable vblank int or not */ 85 - WREG32(R500_DxMODE_INT_MASK, 0); 86 - return 0; 87 - } 88 - 89 35 irqreturn_t radeon_driver_irq_handler_kms(DRM_IRQ_ARGS) 90 36 { 91 37 struct drm_device *dev = (struct drm_device *) arg;
+32 -5
drivers/gpu/drm/radeon/radeon_kms.c
··· 95 95 case RADEON_INFO_NUM_GB_PIPES: 96 96 value = rdev->num_gb_pipes; 97 97 break; 98 + case RADEON_INFO_NUM_Z_PIPES: 99 + value = rdev->num_z_pipes; 100 + break; 98 101 default: 99 102 DRM_DEBUG("Invalid request %d\n", info->request); 100 103 return -EINVAL; ··· 144 141 */ 145 142 u32 radeon_get_vblank_counter_kms(struct drm_device *dev, int crtc) 146 143 { 147 - /* FIXME: implement */ 148 - return 0; 144 + struct radeon_device *rdev = dev->dev_private; 145 + 146 + if (crtc < 0 || crtc > 1) { 147 + DRM_ERROR("Invalid crtc %d\n", crtc); 148 + return -EINVAL; 149 + } 150 + 151 + return radeon_get_vblank_counter(rdev, crtc); 149 152 } 150 153 151 154 int radeon_enable_vblank_kms(struct drm_device *dev, int crtc) 152 155 { 153 - /* FIXME: implement */ 154 - return 0; 156 + struct radeon_device *rdev = dev->dev_private; 157 + 158 + if (crtc < 0 || crtc > 1) { 159 + DRM_ERROR("Invalid crtc %d\n", crtc); 160 + return -EINVAL; 161 + } 162 + 163 + rdev->irq.crtc_vblank_int[crtc] = true; 164 + 165 + return radeon_irq_set(rdev); 155 166 } 156 167 157 168 void radeon_disable_vblank_kms(struct drm_device *dev, int crtc) 158 169 { 159 - /* FIXME: implement */ 170 + struct radeon_device *rdev = dev->dev_private; 171 + 172 + if (crtc < 0 || crtc > 1) { 173 + DRM_ERROR("Invalid crtc %d\n", crtc); 174 + return; 175 + } 176 + 177 + rdev->irq.crtc_vblank_int[crtc] = false; 178 + 179 + radeon_irq_set(rdev); 160 180 } 161 181 162 182 ··· 321 295 DRM_IOCTL_DEF(DRM_RADEON_INFO, radeon_info_ioctl, DRM_AUTH), 322 296 DRM_IOCTL_DEF(DRM_RADEON_GEM_SET_TILING, radeon_gem_set_tiling_ioctl, DRM_AUTH), 323 297 DRM_IOCTL_DEF(DRM_RADEON_GEM_GET_TILING, radeon_gem_get_tiling_ioctl, DRM_AUTH), 298 + DRM_IOCTL_DEF(DRM_RADEON_GEM_BUSY, radeon_gem_busy_ioctl, DRM_AUTH), 324 299 }; 325 300 int radeon_max_kms_ioctl = DRM_ARRAY_SIZE(radeon_ioctls_kms);
+3 -4
drivers/gpu/drm/radeon/radeon_legacy_crtc.c
··· 310 310 RADEON_CRTC_DISP_REQ_EN_B)); 311 311 WREG32_P(RADEON_CRTC_EXT_CNTL, 0, ~mask); 312 312 } 313 + drm_vblank_post_modeset(dev, radeon_crtc->crtc_id); 314 + radeon_crtc_load_lut(crtc); 313 315 break; 314 316 case DRM_MODE_DPMS_STANDBY: 315 317 case DRM_MODE_DPMS_SUSPEND: 316 318 case DRM_MODE_DPMS_OFF: 319 + drm_vblank_pre_modeset(dev, radeon_crtc->crtc_id); 317 320 if (radeon_crtc->crtc_id) 318 321 WREG32_P(RADEON_CRTC2_GEN_CNTL, mask, ~mask); 319 322 else { ··· 325 322 WREG32_P(RADEON_CRTC_EXT_CNTL, mask, ~mask); 326 323 } 327 324 break; 328 - } 329 - 330 - if (mode != DRM_MODE_DPMS_OFF) { 331 - radeon_crtc_load_lut(crtc); 332 325 } 333 326 } 334 327
+1
drivers/gpu/drm/radeon/radeon_legacy_encoders.c
··· 1066 1066 1067 1067 switch (radeon_encoder->encoder_id) { 1068 1068 case ENCODER_OBJECT_ID_INTERNAL_LVDS: 1069 + encoder->possible_crtcs = 0x1; 1069 1070 drm_encoder_init(dev, encoder, &radeon_legacy_lvds_enc_funcs, DRM_MODE_ENCODER_LVDS); 1070 1071 drm_encoder_helper_add(encoder, &radeon_legacy_lvds_helper_funcs); 1071 1072 if (rdev->is_atom_bios)
+19
drivers/gpu/drm/radeon/radeon_object.c
··· 316 316 return r; 317 317 } 318 318 319 + int radeon_object_busy_domain(struct radeon_object *robj, uint32_t *cur_placement) 320 + { 321 + int r = 0; 322 + 323 + r = radeon_object_reserve(robj, true); 324 + if (unlikely(r != 0)) { 325 + DRM_ERROR("radeon: failed to reserve object for waiting.\n"); 326 + return r; 327 + } 328 + spin_lock(&robj->tobj.lock); 329 + *cur_placement = robj->tobj.mem.mem_type; 330 + if (robj->tobj.sync_obj) { 331 + r = ttm_bo_wait(&robj->tobj, true, true, true); 332 + } 333 + spin_unlock(&robj->tobj.lock); 334 + radeon_object_unreserve(robj); 335 + return r; 336 + } 337 + 319 338 int radeon_object_evict_vram(struct radeon_device *rdev) 320 339 { 321 340 if (rdev->flags & RADEON_IS_IGP) {
+12 -4
drivers/gpu/drm/radeon/radeon_reg.h
··· 982 982 # define RS400_TMDS2_PLLRST (1 << 1) 983 983 984 984 #define RADEON_GEN_INT_CNTL 0x0040 985 + # define RADEON_CRTC_VBLANK_MASK (1 << 0) 986 + # define RADEON_CRTC2_VBLANK_MASK (1 << 9) 985 987 # define RADEON_SW_INT_ENABLE (1 << 25) 986 988 #define RADEON_GEN_INT_STATUS 0x0044 987 - # define RADEON_VSYNC_INT_AK (1 << 2) 988 - # define RADEON_VSYNC_INT (1 << 2) 989 - # define RADEON_VSYNC2_INT_AK (1 << 6) 990 - # define RADEON_VSYNC2_INT (1 << 6) 989 + # define AVIVO_DISPLAY_INT_STATUS (1 << 0) 990 + # define RADEON_CRTC_VBLANK_STAT (1 << 0) 991 + # define RADEON_CRTC_VBLANK_STAT_ACK (1 << 0) 992 + # define RADEON_CRTC2_VBLANK_STAT (1 << 9) 993 + # define RADEON_CRTC2_VBLANK_STAT_ACK (1 << 9) 991 994 # define RADEON_SW_INT_FIRE (1 << 26) 992 995 # define RADEON_SW_INT_TEST (1 << 25) 993 996 # define RADEON_SW_INT_TEST_ACK (1 << 25) ··· 2337 2334 # define RADEON_RE_WIDTH_SHIFT 0 2338 2335 # define RADEON_RE_HEIGHT_SHIFT 16 2339 2336 2337 + #define RADEON_RB3D_ZPASS_DATA 0x3290 2338 + #define RADEON_RB3D_ZPASS_ADDR 0x3294 2339 + 2340 2340 #define RADEON_SE_CNTL 0x1c4c 2341 2341 # define RADEON_FFACE_CULL_CW (0 << 0) 2342 2342 # define RADEON_FFACE_CULL_CCW (1 << 0) ··· 3573 3567 #define RADEON_SCRATCH_REG3 0x15ec 3574 3568 #define RADEON_SCRATCH_REG4 0x15f0 3575 3569 #define RADEON_SCRATCH_REG5 0x15f4 3570 + 3571 + #define RV530_GB_PIPE_SELECT2 0x4124 3576 3572 3577 3573 #endif
+3
drivers/gpu/drm/radeon/radeon_state.c
··· 3081 3081 case RADEON_PARAM_NUM_GB_PIPES: 3082 3082 value = dev_priv->num_gb_pipes; 3083 3083 break; 3084 + case RADEON_PARAM_NUM_Z_PIPES: 3085 + value = dev_priv->num_z_pipes; 3086 + break; 3084 3087 default: 3085 3088 DRM_DEBUG("Invalid parameter %d\n", param->param); 3086 3089 return -EINVAL;
+82
drivers/gpu/drm/radeon/rs600.c
··· 240 240 241 241 242 242 /* 243 + * Interrupts 244 + */ 245 + int rs600_irq_set(struct radeon_device *rdev) 246 + { 247 + uint32_t tmp = 0; 248 + uint32_t mode_int = 0; 249 + 250 + if (rdev->irq.sw_int) { 251 + tmp |= RADEON_SW_INT_ENABLE; 252 + } 253 + if (rdev->irq.crtc_vblank_int[0]) { 254 + tmp |= AVIVO_DISPLAY_INT_STATUS; 255 + mode_int |= AVIVO_D1MODE_INT_MASK; 256 + } 257 + if (rdev->irq.crtc_vblank_int[1]) { 258 + tmp |= AVIVO_DISPLAY_INT_STATUS; 259 + mode_int |= AVIVO_D2MODE_INT_MASK; 260 + } 261 + WREG32(RADEON_GEN_INT_CNTL, tmp); 262 + WREG32(AVIVO_DxMODE_INT_MASK, mode_int); 263 + return 0; 264 + } 265 + 266 + static inline uint32_t rs600_irq_ack(struct radeon_device *rdev, u32 *r500_disp_int) 267 + { 268 + uint32_t irqs = RREG32(RADEON_GEN_INT_STATUS); 269 + uint32_t irq_mask = RADEON_SW_INT_TEST; 270 + 271 + if (irqs & AVIVO_DISPLAY_INT_STATUS) { 272 + *r500_disp_int = RREG32(AVIVO_DISP_INTERRUPT_STATUS); 273 + if (*r500_disp_int & AVIVO_D1_VBLANK_INTERRUPT) { 274 + WREG32(AVIVO_D1MODE_VBLANK_STATUS, AVIVO_VBLANK_ACK); 275 + } 276 + if (*r500_disp_int & AVIVO_D2_VBLANK_INTERRUPT) { 277 + WREG32(AVIVO_D2MODE_VBLANK_STATUS, AVIVO_VBLANK_ACK); 278 + } 279 + } else { 280 + *r500_disp_int = 0; 281 + } 282 + 283 + if (irqs) { 284 + WREG32(RADEON_GEN_INT_STATUS, irqs); 285 + } 286 + return irqs & irq_mask; 287 + } 288 + 289 + int rs600_irq_process(struct radeon_device *rdev) 290 + { 291 + uint32_t status; 292 + uint32_t r500_disp_int; 293 + 294 + status = rs600_irq_ack(rdev, &r500_disp_int); 295 + if (!status && !r500_disp_int) { 296 + return IRQ_NONE; 297 + } 298 + while (status || r500_disp_int) { 299 + /* SW interrupt */ 300 + if (status & RADEON_SW_INT_TEST) { 301 + radeon_fence_process(rdev); 302 + } 303 + /* Vertical blank interrupts */ 304 + if (r500_disp_int & AVIVO_D1_VBLANK_INTERRUPT) { 305 + drm_handle_vblank(rdev->ddev, 0); 306 + } 307 + if (r500_disp_int & AVIVO_D2_VBLANK_INTERRUPT) { 308 + drm_handle_vblank(rdev->ddev, 1); 309 + } 310 + 
status = rs600_irq_ack(rdev, &r500_disp_int); 311 + } 312 + return IRQ_HANDLED; 313 + } 314 + 315 + u32 rs600_get_vblank_counter(struct radeon_device *rdev, int crtc) 316 + { 317 + if (crtc == 0) 318 + return RREG32(AVIVO_D1CRTC_FRAME_COUNT); 319 + else 320 + return RREG32(AVIVO_D2CRTC_FRAME_COUNT); 321 + } 322 + 323 + 324 + /* 243 325 * Global GPU functions 244 326 */ 245 327 void rs600_disable_vga(struct radeon_device *rdev)
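The rs600 interrupt handler added above uses an ack-then-loop shape: re-read (and ack) the status until no source is pending, so events that arrive while the handler runs are not dropped. A self-contained sketch of that shape, with a fake "hardware" counter standing in for `GEN_INT_STATUS`:

```c
/* Ack-then-loop IRQ pattern, as in rs600_irq_process(). The fake
 * device drains one pending event per ack. */
static int pending_events;   /* stands in for the status register */
static int handled;

static unsigned int irq_ack(void)   /* read + clear one event */
{
    if (pending_events > 0) {
        pending_events--;
        return 1;
    }
    return 0;
}

static int irq_process(void)
{
    unsigned int status = irq_ack();

    if (!status)
        return 0;               /* IRQ_NONE: not ours */
    while (status) {
        handled++;              /* radeon_fence_process() etc. */
        status = irq_ack();     /* pick up late arrivals */
    }
    return 1;                   /* IRQ_HANDLED */
}
```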
+65
drivers/gpu/drm/radeon/rs690.c
··· 652 652 WREG32(RS690_MC_DATA, v); 653 653 WREG32(RS690_MC_INDEX, RS690_MC_INDEX_WR_ACK); 654 654 } 655 + 656 + static const unsigned rs690_reg_safe_bm[219] = { 657 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 658 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 659 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 660 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 661 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 662 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 663 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 664 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 665 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 666 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 667 + 0x17FF1FFF,0xFFFFFFFC,0xFFFFFFFF,0xFF30FFBF, 668 + 0xFFFFFFF8,0xC3E6FFFF,0xFFFFF6DF,0xFFFFFFFF, 669 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 670 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 671 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFF03F, 672 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 673 + 0xFFFFFFFF,0xFFFFEFCE,0xF00EBFFF,0x007C0000, 674 + 0xF0000078,0xFF000009,0xFFFFFFFF,0xFFFFFFFF, 675 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 676 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 677 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 678 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 679 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 680 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 681 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 682 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 683 + 0xFFFFF7FF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 684 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 685 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 686 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 687 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 688 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 689 + 0xFFFFFC78,0xFFFFFFFF,0xFFFFFFFE,0xFFFFFFFF, 690 + 0x38FF8F50,0xFFF88082,0xF000000C,0xFAE009FF, 691 + 0x0000FFFF,0xFFFFFFFF,0xFFFFFFFF,0x00000000, 692 + 0x00000000,0x0000C100,0x00000000,0x00000000, 
693 + 0x00000000,0x00000000,0x00000000,0x00000000, 694 + 0x00000000,0xFFFF0000,0xFFFFFFFF,0xFF80FFFF, 695 + 0x00000000,0x00000000,0x00000000,0x00000000, 696 + 0x0003FC01,0xFFFFFFF8,0xFE800B19,0xFFFFFFFF, 697 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 698 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 699 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 700 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 701 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 702 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 703 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 704 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 705 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 706 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 707 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 708 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 709 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 710 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 711 + 0xFFFFFFFF,0xFFFFFFFF,0xFFFFFFFF, 712 + }; 713 + 714 + int rs690_init(struct radeon_device *rdev) 715 + { 716 + rdev->config.r300.reg_safe_bm = rs690_reg_safe_bm; 717 + rdev->config.r300.reg_safe_bm_size = ARRAY_SIZE(rs690_reg_safe_bm); 718 + return 0; 719 + }
-19
drivers/gpu/drm/radeon/rv515.c
··· 400 400 WREG32(MC_IND_INDEX, 0); 401 401 } 402 402 403 - uint32_t rv515_pcie_rreg(struct radeon_device *rdev, uint32_t reg) 404 - { 405 - uint32_t r; 406 - 407 - WREG32(PCIE_INDEX, ((reg) & 0x7ff)); 408 - (void)RREG32(PCIE_INDEX); 409 - r = RREG32(PCIE_DATA); 410 - return r; 411 - } 412 - 413 - void rv515_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 414 - { 415 - WREG32(PCIE_INDEX, ((reg) & 0x7ff)); 416 - (void)RREG32(PCIE_INDEX); 417 - WREG32(PCIE_DATA, (v)); 418 - (void)RREG32(PCIE_DATA); 419 - } 420 - 421 - 422 403 /* 423 404 * Debugfs info 424 405 */
+12 -2
drivers/i2c/busses/i2c-omap.c
··· 674 674 675 675 err = 0; 676 676 complete: 677 - omap_i2c_write_reg(dev, OMAP_I2C_STAT_REG, stat); 677 + /* 678 + * Ack the stat in one go, but [R/X]DR and [R/X]RDY should be 679 + * acked after the data operation is complete. 680 + * Ref: TRM SWPU114Q Figure 18-31 681 + */ 682 + omap_i2c_write_reg(dev, OMAP_I2C_STAT_REG, stat & 683 + ~(OMAP_I2C_STAT_RRDY | OMAP_I2C_STAT_RDR | 684 + OMAP_I2C_STAT_XRDY | OMAP_I2C_STAT_XDR)); 678 685 679 686 if (stat & OMAP_I2C_STAT_NACK) { 680 687 err |= OMAP_I2C_STAT_NACK; ··· 694 687 } 695 688 if (stat & (OMAP_I2C_STAT_ARDY | OMAP_I2C_STAT_NACK | 696 689 OMAP_I2C_STAT_AL)) { 690 + omap_i2c_ack_stat(dev, stat & 691 + (OMAP_I2C_STAT_RRDY | OMAP_I2C_STAT_RDR | 692 + OMAP_I2C_STAT_XRDY | OMAP_I2C_STAT_XDR)); 697 693 omap_i2c_complete_cmd(dev, err); 698 694 return IRQ_HANDLED; 699 695 } ··· 784 774 * memory to the I2C interface. 785 775 */ 786 776 787 - if (cpu_is_omap34xx()) { 777 + if (dev->rev <= OMAP_I2C_REV_ON_3430) { 788 778 while (!(stat & OMAP_I2C_STAT_XUDF)) { 789 779 if (stat & (OMAP_I2C_STAT_NACK | OMAP_I2C_STAT_AL)) { 790 780 omap_i2c_ack_stat(dev, stat & (OMAP_I2C_STAT_XRDY | OMAP_I2C_STAT_XDR));
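The i2c-omap fix above stops acking the data-ready bits together with the rest of the status word; `[R/X]DR` and `[R/X]RDY` must only be acked once the FIFO work is done. The masking step is just bit arithmetic; the bit positions below follow my reading of the OMAP I2C_STAT layout but should be treated as illustrative:

```c
#include <stdint.h>

/* Illustrative I2C_STAT bit positions (not authoritative). */
#define STAT_NACK (1u << 1)
#define STAT_RRDY (1u << 3)
#define STAT_XRDY (1u << 4)
#define STAT_RDR  (1u << 13)
#define STAT_XDR  (1u << 14)

/* Everything in stat is acked immediately EXCEPT the data-ready
 * bits, which are deferred until the data operation completes. */
static uint32_t immediate_ack_mask(uint32_t stat)
{
    return stat & ~(STAT_RRDY | STAT_RDR | STAT_XRDY | STAT_XDR);
}
```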
+91 -64
drivers/i2c/busses/i2c-stu300.c
··· 117 117 STU300_ERROR_NONE = 0, 118 118 STU300_ERROR_ACKNOWLEDGE_FAILURE, 119 119 STU300_ERROR_BUS_ERROR, 120 - STU300_ERROR_ARBITRATION_LOST 120 + STU300_ERROR_ARBITRATION_LOST, 121 + STU300_ERROR_UNKNOWN 121 122 }; 122 123 123 124 /* timeout waiting for the controller to respond */ ··· 128 127 * The number of address send athemps tried before giving up. 129 128 * If the first one failes it seems like 5 to 8 attempts are required. 130 129 */ 131 - #define NUM_ADDR_RESEND_ATTEMPTS 10 130 + #define NUM_ADDR_RESEND_ATTEMPTS 12 132 131 133 132 /* I2C clock speed, in Hz 0-400kHz*/ 134 133 static unsigned int scl_frequency = 100000; ··· 150 149 * @msg_index: index of current message 151 150 * @msg_len: length of current message 152 151 */ 152 + 153 153 struct stu300_dev { 154 154 struct platform_device *pdev; 155 155 struct i2c_adapter adapter; ··· 190 188 return readl(address) & 0x000000FFU; 191 189 } 192 190 191 + static void stu300_irq_enable(struct stu300_dev *dev) 192 + { 193 + u32 val; 194 + val = stu300_r8(dev->virtbase + I2C_CR); 195 + val |= I2C_CR_INTERRUPT_ENABLE; 196 + /* Twice paranoia (possible HW glitch) */ 197 + stu300_wr8(val, dev->virtbase + I2C_CR); 198 + stu300_wr8(val, dev->virtbase + I2C_CR); 199 + } 200 + 201 + static void stu300_irq_disable(struct stu300_dev *dev) 202 + { 203 + u32 val; 204 + val = stu300_r8(dev->virtbase + I2C_CR); 205 + val &= ~I2C_CR_INTERRUPT_ENABLE; 206 + /* Twice paranoia (possible HW glitch) */ 207 + stu300_wr8(val, dev->virtbase + I2C_CR); 208 + stu300_wr8(val, dev->virtbase + I2C_CR); 209 + } 210 + 211 + 193 212 /* 194 213 * Tells whether a certain event or events occurred in 195 214 * response to a command. The events represent states in ··· 219 196 * documentation and can only be treated as abstract state 220 197 * machine states. 221 198 * 222 - * @ret 0 = event has not occurred, any other value means 223 - * the event occurred. 
199 + * @ret 0 = event has not occurred or unknown error, any 200 + * other value means the correct event occurred or an error. 224 201 */ 202 + 225 203 static int stu300_event_occurred(struct stu300_dev *dev, 226 204 enum stu300_event mr_event) { 227 205 u32 status1; ··· 230 206 231 207 /* What event happened? */ 232 208 status1 = stu300_r8(dev->virtbase + I2C_SR1); 209 + 233 210 if (!(status1 & I2C_SR1_EVF_IND)) 234 211 /* No event at all */ 235 212 return 0; 213 + 236 214 status2 = stu300_r8(dev->virtbase + I2C_SR2); 215 + 216 + /* Block any multiple interrupts */ 217 + stu300_irq_disable(dev); 218 + 219 + /* Check for errors first */ 220 + if (status2 & I2C_SR2_AF_IND) { 221 + dev->cmd_err = STU300_ERROR_ACKNOWLEDGE_FAILURE; 222 + return 1; 223 + } else if (status2 & I2C_SR2_BERR_IND) { 224 + dev->cmd_err = STU300_ERROR_BUS_ERROR; 225 + return 1; 226 + } else if (status2 & I2C_SR2_ARLO_IND) { 227 + dev->cmd_err = STU300_ERROR_ARBITRATION_LOST; 228 + return 1; 229 + } 237 230 238 231 switch (mr_event) { 239 232 case STU300_EVENT_1: ··· 262 221 case STU300_EVENT_7: 263 222 case STU300_EVENT_8: 264 223 if (status1 & I2C_SR1_BTF_IND) { 265 - if (status2 & I2C_SR2_AF_IND) 266 - dev->cmd_err = STU300_ERROR_ACKNOWLEDGE_FAILURE; 267 - else if (status2 & I2C_SR2_BERR_IND) 268 - dev->cmd_err = STU300_ERROR_BUS_ERROR; 269 224 return 1; 270 225 } 271 226 break; ··· 277 240 case STU300_EVENT_6: 278 241 if (status2 & I2C_SR2_ENDAD_IND) { 279 242 /* First check for any errors */ 280 - if (status2 & I2C_SR2_AF_IND) 281 - dev->cmd_err = STU300_ERROR_ACKNOWLEDGE_FAILURE; 282 243 return 1; 283 244 } 284 245 break; ··· 287 252 default: 288 253 break; 289 254 } 290 - if (status2 & I2C_SR2_ARLO_IND) 291 - dev->cmd_err = STU300_ERROR_ARBITRATION_LOST; 255 + /* If we get here, we're on thin ice. 256 + * Here we are in a status where we have 257 + * gotten a response that does not match 258 + * what we requested. 
259 + */ 260 + dev->cmd_err = STU300_ERROR_UNKNOWN; 261 + dev_err(&dev->pdev->dev, 262 + "Unhandled interrupt! %d sr1: 0x%x sr2: 0x%x\n", 263 + mr_event, status1, status2); 292 264 return 0; 293 265 } 294 266 ··· 304 262 struct stu300_dev *dev = data; 305 263 int res; 306 264 265 + /* Just make sure that the block is clocked */ 266 + clk_enable(dev->clk); 267 + 307 268 /* See if this was what we were waiting for */ 308 269 spin_lock(&dev->cmd_issue_lock); 309 - if (dev->cmd_event != STU300_EVENT_NONE) { 310 - res = stu300_event_occurred(dev, dev->cmd_event); 311 - if (res || dev->cmd_err != STU300_ERROR_NONE) { 312 - u32 val; 313 270 314 - complete(&dev->cmd_complete); 315 - /* Block any multiple interrupts */ 316 - val = stu300_r8(dev->virtbase + I2C_CR); 317 - val &= ~I2C_CR_INTERRUPT_ENABLE; 318 - stu300_wr8(val, dev->virtbase + I2C_CR); 319 - } 320 - } 271 + res = stu300_event_occurred(dev, dev->cmd_event); 272 + if (res || dev->cmd_err != STU300_ERROR_NONE) 273 + complete(&dev->cmd_complete); 274 + 321 275 spin_unlock(&dev->cmd_issue_lock); 276 + 277 + clk_disable(dev->clk); 278 + 322 279 return IRQ_HANDLED; 323 280 } 324 281 ··· 349 308 stu300_wr8(cr_value, dev->virtbase + I2C_CR); 350 309 ret = wait_for_completion_interruptible_timeout(&dev->cmd_complete, 351 310 STU300_TIMEOUT); 352 - 353 311 if (ret < 0) { 354 312 dev_err(&dev->pdev->dev, 355 313 "wait_for_completion_interruptible_timeout() " ··· 382 342 enum stu300_event mr_event) 383 343 { 384 344 int ret; 385 - u32 val; 386 345 387 346 if (unlikely(irqs_disabled())) { 388 347 /* TODO: implement polling for this case if need be. */ ··· 393 354 /* Is it already here? 
*/ 394 355 spin_lock_irq(&dev->cmd_issue_lock); 395 356 dev->cmd_err = STU300_ERROR_NONE; 396 - if (stu300_event_occurred(dev, mr_event)) { 397 - spin_unlock_irq(&dev->cmd_issue_lock); 398 - goto exit_await_check_err; 399 - } 400 - init_completion(&dev->cmd_complete); 401 - dev->cmd_err = STU300_ERROR_NONE; 402 357 dev->cmd_event = mr_event; 403 358 359 + init_completion(&dev->cmd_complete); 360 + 404 361 /* Turn on the I2C interrupt for current operation */ 405 - val = stu300_r8(dev->virtbase + I2C_CR); 406 - val |= I2C_CR_INTERRUPT_ENABLE; 407 - stu300_wr8(val, dev->virtbase + I2C_CR); 408 - 409 - /* Twice paranoia (possible HW glitch) */ 410 - stu300_wr8(val, dev->virtbase + I2C_CR); 411 - 412 - /* Check again: is it already here? */ 413 - if (unlikely(stu300_event_occurred(dev, mr_event))) { 414 - /* Disable IRQ again. */ 415 - val &= ~I2C_CR_INTERRUPT_ENABLE; 416 - stu300_wr8(val, dev->virtbase + I2C_CR); 417 - spin_unlock_irq(&dev->cmd_issue_lock); 418 - goto exit_await_check_err; 419 - } 362 + stu300_irq_enable(dev); 420 363 421 364 /* Unlock the command block and wait for the event to occur */ 422 365 spin_unlock_irq(&dev->cmd_issue_lock); 366 + 423 367 ret = wait_for_completion_interruptible_timeout(&dev->cmd_complete, 424 368 STU300_TIMEOUT); 425 - 426 369 if (ret < 0) { 427 370 dev_err(&dev->pdev->dev, 428 371 "wait_for_completion_interruptible_timeout()" ··· 422 401 return -ETIMEDOUT; 423 402 } 424 403 425 - exit_await_check_err: 426 404 if (dev->cmd_err != STU300_ERROR_NONE) { 427 405 if (mr_event != STU300_EVENT_6) { 428 406 dev_err(&dev->pdev->dev, "controller " ··· 477 457 }; 478 458 479 459 static const struct stu300_clkset stu300_clktable[] = { 480 - { 0, 0xFFU }, 481 - { 2500000, I2C_OAR2_FR_25_10MHZ }, 482 - { 10000000, I2C_OAR2_FR_10_1667MHZ }, 483 - { 16670000, I2C_OAR2_FR_1667_2667MHZ }, 484 - { 26670000, I2C_OAR2_FR_2667_40MHZ }, 485 - { 40000000, I2C_OAR2_FR_40_5333MHZ }, 486 - { 53330000, I2C_OAR2_FR_5333_66MHZ }, 487 - { 66000000, 
I2C_OAR2_FR_66_80MHZ }, 488 - { 80000000, I2C_OAR2_FR_80_100MHZ }, 460 + { 0, 0xFFU }, 461 + { 2500000, I2C_OAR2_FR_25_10MHZ }, 462 + { 10000000, I2C_OAR2_FR_10_1667MHZ }, 463 + { 16670000, I2C_OAR2_FR_1667_2667MHZ }, 464 + { 26670000, I2C_OAR2_FR_2667_40MHZ }, 465 + { 40000000, I2C_OAR2_FR_40_5333MHZ }, 466 + { 53330000, I2C_OAR2_FR_5333_66MHZ }, 467 + { 66000000, I2C_OAR2_FR_66_80MHZ }, 468 + { 80000000, I2C_OAR2_FR_80_100MHZ }, 489 469 { 100000000, 0xFFU }, 490 470 }; 471 + 491 472 492 473 static int stu300_set_clk(struct stu300_dev *dev, unsigned long clkrate) 493 474 { ··· 515 494 516 495 if (dev->speed > 100000) 517 496 /* Fast Mode I2C */ 518 - val = ((clkrate/dev->speed)-9)/3; 497 + val = ((clkrate/dev->speed) - 9)/3 + 1; 519 498 else 520 499 /* Standard Mode I2C */ 521 - val = ((clkrate/dev->speed)-7)/2; 500 + val = ((clkrate/dev->speed) - 7)/2 + 1; 522 501 523 502 /* According to spec the divider must be > 2 */ 524 503 if (val < 0x002) { ··· 578 557 */ 579 558 clkrate = clk_get_rate(dev->clk); 580 559 ret = stu300_set_clk(dev, clkrate); 560 + 581 561 if (ret) 582 562 return ret; 583 563 /* ··· 663 641 int attempts = 0; 664 642 struct stu300_dev *dev = i2c_get_adapdata(adap); 665 643 666 - 667 644 clk_enable(dev->clk); 668 645 669 646 /* Remove this if (0) to trace each and every message. 
*/ ··· 736 715 737 716 if (attempts < NUM_ADDR_RESEND_ATTEMPTS && attempts > 0) { 738 717 dev_dbg(&dev->pdev->dev, "managed to get address " 739 - "through after %d attempts\n", attempts); 718 + "through after %d attempts\n", attempts); 740 719 } else if (attempts == NUM_ADDR_RESEND_ATTEMPTS) { 741 720 dev_dbg(&dev->pdev->dev, "I give up, tried %d times " 742 - "to resend address.\n", 743 - NUM_ADDR_RESEND_ATTEMPTS); 721 + "to resend address.\n", 722 + NUM_ADDR_RESEND_ATTEMPTS); 744 723 goto exit_disable; 745 724 } 725 + 746 726 747 727 if (msg->flags & I2C_M_RD) { 748 728 /* READ: we read the actual bytes one at a time */ ··· 826 804 { 827 805 int ret = -1; 828 806 int i; 807 + 829 808 struct stu300_dev *dev = i2c_get_adapdata(adap); 830 809 dev->msg_len = num; 810 + 831 811 for (i = 0; i < num; i++) { 832 812 /* 833 813 * Another driver appears to send stop for each message, ··· 841 817 dev->msg_index = i; 842 818 843 819 ret = stu300_xfer_msg(adap, &msgs[i], (i == (num - 1))); 820 + 844 821 if (ret != 0) { 845 822 num = ret; 846 823 break; ··· 870 845 struct resource *res; 871 846 int bus_nr; 872 847 int ret = 0; 848 + char clk_name[] = "I2C0"; 873 849 874 850 dev = kzalloc(sizeof(struct stu300_dev), GFP_KERNEL); 875 851 if (!dev) { ··· 880 854 } 881 855 882 856 bus_nr = pdev->id; 883 - dev->clk = clk_get(&pdev->dev, NULL); 857 + clk_name[3] += (char)bus_nr; 858 + dev->clk = clk_get(&pdev->dev, clk_name); 884 859 if (IS_ERR(dev->clk)) { 885 860 ret = PTR_ERR(dev->clk); 886 861 dev_err(&pdev->dev, "could not retrieve i2c bus clock\n");
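Among the stu300 changes, the clock-divider formulas gain a `+ 1`, which appears intended to round the divider up so the resulting bus speed does not exceed the requested rate. The two formulas from the diff, as a standalone function:

```c
/* stu300 clock divider, mirroring the patched formulas:
 * fast mode (>100 kHz):  ((clk/speed) - 9)/3 + 1
 * standard mode:         ((clk/speed) - 7)/2 + 1
 */
static long stu300_divider(unsigned long clkrate, unsigned long speed)
{
    if (speed > 100000)
        return ((long)(clkrate / speed) - 9) / 3 + 1;
    return ((long)(clkrate / speed) - 7) / 2 + 1;
}
```

For a 13 MHz block clock at 100 kHz, the ratio is 130, so the standard-mode divider is (130 - 7)/2 + 1 = 62.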
+39 -25
drivers/input/joydev.c
··· 456 456 unsigned int cmd, void __user *argp) 457 457 { 458 458 struct input_dev *dev = joydev->handle.dev; 459 + size_t len; 459 460 int i, j; 461 + const char *name; 460 462 463 + /* Process fixed-sized commands. */ 461 464 switch (cmd) { 462 465 463 466 case JS_SET_CAL: ··· 502 499 return copy_to_user(argp, joydev->corr, 503 500 sizeof(joydev->corr[0]) * joydev->nabs) ? -EFAULT : 0; 504 501 505 - case JSIOCSAXMAP: 506 - if (copy_from_user(joydev->abspam, argp, 507 - sizeof(__u8) * (ABS_MAX + 1))) 502 + } 503 + 504 + /* 505 + * Process variable-sized commands (the axis and button map commands 506 + * are considered variable-sized to decouple them from the values of 507 + * ABS_MAX and KEY_MAX). 508 + */ 509 + switch (cmd & ~IOCSIZE_MASK) { 510 + 511 + case (JSIOCSAXMAP & ~IOCSIZE_MASK): 512 + len = min_t(size_t, _IOC_SIZE(cmd), sizeof(joydev->abspam)); 513 + /* 514 + * FIXME: we should not copy into our axis map before 515 + * validating the data. 516 + */ 517 + if (copy_from_user(joydev->abspam, argp, len)) 508 518 return -EFAULT; 509 519 510 520 for (i = 0; i < joydev->nabs; i++) { ··· 527 511 } 528 512 return 0; 529 513 530 - case JSIOCGAXMAP: 531 - return copy_to_user(argp, joydev->abspam, 532 - sizeof(__u8) * (ABS_MAX + 1)) ? -EFAULT : 0; 514 + case (JSIOCGAXMAP & ~IOCSIZE_MASK): 515 + len = min_t(size_t, _IOC_SIZE(cmd), sizeof(joydev->abspam)); 516 + return copy_to_user(argp, joydev->abspam, len) ? -EFAULT : 0; 533 517 534 - case JSIOCSBTNMAP: 535 - if (copy_from_user(joydev->keypam, argp, 536 - sizeof(__u16) * (KEY_MAX - BTN_MISC + 1))) 518 + case (JSIOCSBTNMAP & ~IOCSIZE_MASK): 519 + len = min_t(size_t, _IOC_SIZE(cmd), sizeof(joydev->keypam)); 520 + /* 521 + * FIXME: we should not copy into our keymap before 522 + * validating the data. 
523 + */ 524 + if (copy_from_user(joydev->keypam, argp, len)) 537 525 return -EFAULT; 538 526 539 527 for (i = 0; i < joydev->nkey; i++) { ··· 549 529 550 530 return 0; 551 531 552 - case JSIOCGBTNMAP: 553 - return copy_to_user(argp, joydev->keypam, 554 - sizeof(__u16) * (KEY_MAX - BTN_MISC + 1)) ? -EFAULT : 0; 532 + case (JSIOCGBTNMAP & ~IOCSIZE_MASK): 533 + len = min_t(size_t, _IOC_SIZE(cmd), sizeof(joydev->keypam)); 534 + return copy_to_user(argp, joydev->keypam, len) ? -EFAULT : 0; 555 535 556 - default: 557 - if ((cmd & ~IOCSIZE_MASK) == JSIOCGNAME(0)) { 558 - int len; 559 - const char *name = dev->name; 536 + case JSIOCGNAME(0): 537 + name = dev->name; 538 + if (!name) 539 + return 0; 560 540 561 - if (!name) 562 - return 0; 563 - len = strlen(name) + 1; 564 - if (len > _IOC_SIZE(cmd)) 565 - len = _IOC_SIZE(cmd); 566 - if (copy_to_user(argp, name, len)) 567 - return -EFAULT; 568 - return len; 569 - } 541 + len = min_t(size_t, _IOC_SIZE(cmd), strlen(name) + 1); 542 + return copy_to_user(argp, name, len) ? -EFAULT : len; 570 543 } 544 + 571 545 return -EINVAL; 572 546 } 573 547
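The joydev rework matches the axis/button-map ioctls with `cmd & ~IOCSIZE_MASK` so the command number no longer bakes in `ABS_MAX`/`KEY_MAX`, and clamps the copy length with `_IOC_SIZE(cmd)`. The trick relies on the ioctl encoding keeping the size in its own bit field. A sketch with the field layout redefined locally (values match asm-generic/ioctl.h on common architectures, but are reproduced here as an assumption rather than pulled from system headers):

```c
#include <stdint.h>

/* ioctl number layout (asm-generic style): nr 0..7, type 8..15,
 * size 16..29, direction 30..31. Local names avoid clashing with
 * system headers. */
#define IOC_SIZEBITS    14
#define IOC_SIZESHIFT   16
#define IOC_SIZEMASK    ((1u << IOC_SIZEBITS) - 1)
#define IOCSIZE_MASK_X  (IOC_SIZEMASK << IOC_SIZESHIFT)
#define IOC_SIZE(cmd)   (((cmd) >> IOC_SIZESHIFT) & IOC_SIZEMASK)

/* Compare direction/type/number while ignoring the size field,
 * as the reworked switch does with (cmd & ~IOCSIZE_MASK). */
static int same_cmd_ignoring_size(uint32_t a, uint32_t b)
{
    return (a & ~IOCSIZE_MASK_X) == (b & ~IOCSIZE_MASK_X);
}
```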
+1
drivers/input/joystick/iforce/iforce-main.c
··· 74 74 { 0x05ef, 0x8884, "AVB Mag Turbo Force", btn_avb_wheel, abs_wheel, ff_iforce }, 75 75 { 0x05ef, 0x8888, "AVB Top Shot Force Feedback Racing Wheel", btn_avb_tw, abs_wheel, ff_iforce }, //? 76 76 { 0x061c, 0xc0a4, "ACT LABS Force RS", btn_wheel, abs_wheel, ff_iforce }, //? 77 + { 0x061c, 0xc084, "ACT LABS Force RS", btn_wheel, abs_wheel, ff_iforce }, 77 78 { 0x06f8, 0x0001, "Guillemot Race Leader Force Feedback", btn_wheel, abs_wheel, ff_iforce }, //? 78 79 { 0x06f8, 0x0004, "Guillemot Force Feedback Racing Wheel", btn_wheel, abs_wheel, ff_iforce }, //? 79 80 { 0x06f8, 0x0004, "Gullemot Jet Leader 3D", btn_joystick, abs_joystick, ff_iforce }, //?
+1
drivers/input/joystick/iforce/iforce-usb.c
··· 223 223 { USB_DEVICE(0x05ef, 0x8884) }, /* AVB Mag Turbo Force */ 224 224 { USB_DEVICE(0x05ef, 0x8888) }, /* AVB Top Shot FFB Racing Wheel */ 225 225 { USB_DEVICE(0x061c, 0xc0a4) }, /* ACT LABS Force RS */ 226 + { USB_DEVICE(0x061c, 0xc084) }, /* ACT LABS Force RS */ 226 227 { USB_DEVICE(0x06f8, 0x0001) }, /* Guillemot Race Leader Force Feedback */ 227 228 { USB_DEVICE(0x06f8, 0x0004) }, /* Guillemot Force Feedback Racing Wheel */ 228 229 { USB_DEVICE(0x06f8, 0xa302) }, /* Guillemot Jet Leader 3D */
+35
drivers/input/keyboard/atkbd.c
··· 880 880 }; 881 881 882 882 /* 883 + * Perform fixup for HP (Compaq) Presario R4000 R4100 R4200 that don't generate 884 + * release for their volume buttons 885 + */ 886 + static unsigned int atkbd_hp_r4000_forced_release_keys[] = { 887 + 0xae, 0xb0, -1U 888 + }; 889 + 890 + /* 883 891 * Samsung NC10,NC20 with Fn+F? key release not working 884 892 */ 885 893 static unsigned int atkbd_samsung_forced_release_keys[] = { ··· 1543 1535 }, 1544 1536 .callback = atkbd_setup_forced_release, 1545 1537 .driver_data = atkbd_hp_zv6100_forced_release_keys, 1538 + }, 1539 + { 1540 + .ident = "HP Presario R4000", 1541 + .matches = { 1542 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 1543 + DMI_MATCH(DMI_PRODUCT_NAME, "Presario R4000"), 1544 + }, 1545 + .callback = atkbd_setup_forced_release, 1546 + .driver_data = atkbd_hp_r4000_forced_release_keys, 1547 + }, 1548 + { 1549 + .ident = "HP Presario R4100", 1550 + .matches = { 1551 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 1552 + DMI_MATCH(DMI_PRODUCT_NAME, "Presario R4100"), 1553 + }, 1554 + .callback = atkbd_setup_forced_release, 1555 + .driver_data = atkbd_hp_r4000_forced_release_keys, 1556 + }, 1557 + { 1558 + .ident = "HP Presario R4200", 1559 + .matches = { 1560 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 1561 + DMI_MATCH(DMI_PRODUCT_NAME, "Presario R4200"), 1562 + }, 1563 + .callback = atkbd_setup_forced_release, 1564 + .driver_data = atkbd_hp_r4000_forced_release_keys, 1546 1565 }, 1547 1566 { 1548 1567 .ident = "Inventec Symphony",
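The atkbd quirk added above is another `-1U`-terminated scancode table (`0xae`, `0xb0`, the Presario volume keys) wired up via DMI matches. The lookup over such a sentinel-terminated table, in the style atkbd uses, can be sketched as:

```c
/* Forced-release scancodes for the HP Presario R4000 family,
 * terminated by -1U as in the driver's quirk tables. */
static const unsigned int hp_r4000_forced_release[] = {
    0xae, 0xb0, (unsigned int)-1
};

/* Walk a -1U-terminated table looking for a scancode. */
static int is_forced_release(const unsigned int *tab, unsigned int code)
{
    for (; *tab != (unsigned int)-1; tab++)
        if (*tab == code)
            return 1;
    return 0;
}
```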
+8
drivers/input/serio/i8042-x86ia64io.h
··· 382 382 DMI_MATCH(DMI_PRODUCT_NAME, "Vostro1510"), 383 383 }, 384 384 }, 385 + { 386 + .ident = "Acer Aspire 5536", 387 + .matches = { 388 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 389 + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5536"), 390 + DMI_MATCH(DMI_PRODUCT_VERSION, "0100"), 391 + }, 392 + }, 385 393 { } 386 394 }; 387 395
+29 -14
drivers/input/tablet/wacom_sys.c
··· 388 388 return result; 389 389 } 390 390 391 + static int wacom_query_tablet_data(struct usb_interface *intf) 392 + { 393 + unsigned char *rep_data; 394 + int limit = 0; 395 + int error; 396 + 397 + rep_data = kmalloc(2, GFP_KERNEL); 398 + if (!rep_data) 399 + return -ENOMEM; 400 + 401 + do { 402 + rep_data[0] = 2; 403 + rep_data[1] = 2; 404 + error = usb_set_report(intf, WAC_HID_FEATURE_REPORT, 405 + 2, rep_data, 2); 406 + if (error >= 0) 407 + error = usb_get_report(intf, 408 + WAC_HID_FEATURE_REPORT, 2, 409 + rep_data, 2); 410 + } while ((error < 0 || rep_data[1] != 2) && limit++ < 5); 411 + 412 + kfree(rep_data); 413 + 414 + return error < 0 ? error : 0; 415 + } 416 + 391 417 static int wacom_probe(struct usb_interface *intf, const struct usb_device_id *id) 392 418 { 393 419 struct usb_device *dev = interface_to_usbdev(intf); ··· 424 398 struct wacom_features *features; 425 399 struct input_dev *input_dev; 426 400 int error = -ENOMEM; 427 - char rep_data[2], limit = 0; 428 401 struct hid_descriptor *hid_desc; 429 402 430 403 wacom = kzalloc(sizeof(struct wacom), GFP_KERNEL); ··· 514 489 515 490 /* 516 491 * Ask the tablet to report tablet data if it is not a Tablet PC. 517 - * Repeat until it succeeds 492 + * Note that if query fails it is not a hard failure. 518 493 */ 519 - if (wacom_wac->features->type != TABLETPC) { 520 - do { 521 - rep_data[0] = 2; 522 - rep_data[1] = 2; 523 - error = usb_set_report(intf, WAC_HID_FEATURE_REPORT, 524 - 2, rep_data, 2); 525 - if (error >= 0) 526 - error = usb_get_report(intf, 527 - WAC_HID_FEATURE_REPORT, 2, 528 - rep_data, 2); 529 - } while ((error < 0 || rep_data[1] != 2) && limit++ < 5); 530 - } 494 + if (wacom_wac->features->type != TABLETPC) 495 + wacom_query_tablet_data(intf); 531 496 532 497 usb_set_intfdata(intf, wacom); 533 498 return 0;
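The new `wacom_query_tablet_data()` keeps the old behaviour, a bounded set/get retry until the device echoes the requested report, but moves it into a helper so a failure is soft rather than fatal to probe. A hedged userspace sketch of that bounded-retry pattern (the USB I/O is mocked here; `usb_set_report`/`usb_get_report` are not reimplemented):

```c
#include <assert.h>

/* Mock of a flaky device: it ignores the first two writes. */
static int write_attempts;
static unsigned char device_mode;

static int mock_set_report(unsigned char mode)
{
	if (++write_attempts < 3)	/* first two writes are lost */
		return 0;
	device_mode = mode;
	return 0;
}

static int mock_get_report(unsigned char *mode)
{
	*mode = device_mode;
	return 0;
}

/* Retry until the device reports the requested mode, at most 6 tries,
 * mirroring the 'limit++ < 5' loop in wacom_query_tablet_data(). */
static int query_tablet_data(void)
{
	int limit = 0, error;
	unsigned char mode = 0;

	do {
		error = mock_set_report(2);
		if (error >= 0)
			error = mock_get_report(&mode);
	} while ((error < 0 || mode != 2) && limit++ < 5);

	return (error < 0 || mode != 2) ? -1 : 0;
}
```

Bounding the loop matters: an unresponsive tablet degrades to a logged soft failure instead of spinning forever in probe.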
+13 -4
drivers/input/touchscreen/ucb1400_ts.c
··· 170 170 ucb1400_reg_write(ucb->ac97, UCB_IE_CLEAR, isr); 171 171 ucb1400_reg_write(ucb->ac97, UCB_IE_CLEAR, 0); 172 172 173 - if (isr & UCB_IE_TSPX) { 173 + if (isr & UCB_IE_TSPX) 174 174 ucb1400_ts_irq_disable(ucb->ac97); 175 - enable_irq(ucb->irq); 176 - } else 177 - printk(KERN_ERR "ucb1400: unexpected IE_STATUS = %#x\n", isr); 175 + else 176 + dev_dbg(&ucb->ts_idev->dev, "ucb1400: unexpected IE_STATUS = %#x\n", isr); 177 + enable_irq(ucb->irq); 178 178 } 179 179 180 180 static int ucb1400_ts_thread(void *_ucb) ··· 345 345 static int ucb1400_ts_probe(struct platform_device *dev) 346 346 { 347 347 int error, x_res, y_res; 348 + u16 fcsr; 348 349 struct ucb1400_ts *ucb = dev->dev.platform_data; 349 350 350 351 ucb->ts_idev = input_allocate_device(); ··· 382 381 ucb->ts_idev->close = ucb1400_ts_close; 383 382 ucb->ts_idev->evbit[0] = BIT_MASK(EV_ABS) | BIT_MASK(EV_KEY); 384 383 ucb->ts_idev->keybit[BIT_WORD(BTN_TOUCH)] = BIT_MASK(BTN_TOUCH); 384 + 385 + /* 386 + * Enable ADC filter to prevent horrible jitter on Colibri. 387 + * This also further reduces jitter on boards where ADCSYNC 388 + * pin is connected. 389 + */ 390 + fcsr = ucb1400_reg_read(ucb->ac97, UCB_FCSR); 391 + ucb1400_reg_write(ucb->ac97, UCB_FCSR, fcsr | UCB_FCSR_AVE); 385 392 386 393 ucb1400_adc_enable(ucb->ac97); 387 394 x_res = ucb1400_ts_read_xres(ucb);
+17 -7
drivers/leds/ledtrig-gpio.c
··· 117 117 118 118 gpio_data->inverted = !!inverted; 119 119 120 + /* After inverting, we need to update the LED. */ 121 + schedule_work(&gpio_data->work); 122 + 120 123 return n; 121 124 } 122 125 static DEVICE_ATTR(inverted, 0644, gpio_trig_inverted_show, ··· 149 146 return -EINVAL; 150 147 } 151 148 149 + if (gpio_data->gpio == gpio) 150 + return n; 151 + 152 152 if (!gpio) { 153 - free_irq(gpio_to_irq(gpio_data->gpio), led); 153 + if (gpio_data->gpio != 0) 154 + free_irq(gpio_to_irq(gpio_data->gpio), led); 155 + gpio_data->gpio = 0; 154 156 return n; 155 157 } 156 158 157 - if (gpio_data->gpio > 0 && gpio_data->gpio != gpio) 158 - free_irq(gpio_to_irq(gpio_data->gpio), led); 159 - 160 - gpio_data->gpio = gpio; 161 159 ret = request_irq(gpio_to_irq(gpio), gpio_trig_irq, 162 160 IRQF_SHARED | IRQF_TRIGGER_RISING 163 161 | IRQF_TRIGGER_FALLING, "ledtrig-gpio", led); 164 - if (ret) 162 + if (ret) { 165 163 dev_err(dev, "request_irq failed with error %d\n", ret); 164 + } else { 165 + if (gpio_data->gpio != 0) 166 + free_irq(gpio_to_irq(gpio_data->gpio), led); 167 + gpio_data->gpio = gpio; 168 + } 166 169 167 170 return ret ? ret : n; 168 171 } ··· 220 211 device_remove_file(led->dev, &dev_attr_inverted); 221 212 device_remove_file(led->dev, &dev_attr_desired_brightness); 222 213 flush_work(&gpio_data->work); 223 - free_irq(gpio_to_irq(gpio_data->gpio),led); 214 + if (gpio_data->gpio != 0) 215 + free_irq(gpio_to_irq(gpio_data->gpio), led); 224 216 kfree(gpio_data); 225 217 } 226 218 }
+1 -1
drivers/macintosh/via-maciisi.c
··· 288 288 } 289 289 /* This could be BAD... when the ADB controller doesn't respond 290 290 * for this long, it's probably not coming back :-( */ 291 - if(count >= 50) /* Hopefully shouldn't happen */ 291 + if (count > 50) /* Hopefully shouldn't happen */ 292 292 printk(KERN_ERR "maciisi_send_request: poll timed out!\n"); 293 293 } 294 294
+13
drivers/md/dm-exception-store.c
··· 171 171 */ 172 172 chunk_size_ulong = round_up(chunk_size_ulong, PAGE_SIZE >> 9); 173 173 174 + return dm_exception_store_set_chunk_size(store, chunk_size_ulong, 175 + error); 176 + } 177 + 178 + int dm_exception_store_set_chunk_size(struct dm_exception_store *store, 179 + unsigned long chunk_size_ulong, 180 + char **error) 181 + { 174 182 /* Check chunk_size is a power of 2 */ 175 183 if (!is_power_of_2(chunk_size_ulong)) { 176 184 *error = "Chunk size is not a power of 2"; ··· 188 180 /* Validate the chunk size against the device block size */ 189 181 if (chunk_size_ulong % (bdev_logical_block_size(store->cow->bdev) >> 9)) { 190 182 *error = "Chunk size is not a multiple of device blocksize"; 183 + return -EINVAL; 184 + } 185 + 186 + if (chunk_size_ulong > INT_MAX >> SECTOR_SHIFT) { 187 + *error = "Chunk size is too high"; 191 188 return -EINVAL; 192 189 } 193 190
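The extracted `dm_exception_store_set_chunk_size()` validates three properties: the chunk size is a power of two, a multiple of the device block size, and small enough that `chunk_size << SECTOR_SHIFT` cannot overflow an `int`. A standalone sketch of the same checks (the helper name and return convention are illustrative):

```c
#include <assert.h>
#include <limits.h>

#define SECTOR_SHIFT 9

/* is_power_of_2() as in the kernel: true for 1, 2, 4, ... */
static int is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Return 0 if 'chunk_size' (in 512-byte sectors) is acceptable,
 * -1 otherwise; mirrors dm_exception_store_set_chunk_size(). */
static int validate_chunk_size(unsigned long chunk_size,
			       unsigned long block_size_sectors)
{
	if (!is_power_of_2(chunk_size))
		return -1;			/* not a power of 2 */
	if (chunk_size % block_size_sectors)
		return -1;			/* not a multiple of blocksize */
	if (chunk_size > INT_MAX >> SECTOR_SHIFT)
		return -1;			/* would overflow when << 9 */
	return 0;
}
```

Factoring the checks out lets the persistent snapshot store reuse them when it reads a chunk size back from on-disk metadata, instead of trusting whatever the header contains.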
+4
drivers/md/dm-exception-store.h
··· 168 168 int dm_exception_store_type_register(struct dm_exception_store_type *type); 169 169 int dm_exception_store_type_unregister(struct dm_exception_store_type *type); 170 170 171 + int dm_exception_store_set_chunk_size(struct dm_exception_store *store, 172 + unsigned long chunk_size_ulong, 173 + char **error); 174 + 171 175 int dm_exception_store_create(struct dm_target *ti, int argc, char **argv, 172 176 unsigned *args_used, 173 177 struct dm_exception_store **store);
+24 -15
drivers/md/dm-log-userspace-base.c
··· 21 21 struct dm_target *ti; 22 22 uint32_t region_size; 23 23 region_t region_count; 24 + uint64_t luid; 24 25 char uuid[DM_UUID_LEN]; 25 26 26 27 char *usr_argv_str; ··· 64 63 * restored. 65 64 */ 66 65 retry: 67 - r = dm_consult_userspace(uuid, request_type, data, 66 + r = dm_consult_userspace(uuid, lc->luid, request_type, data, 68 67 data_size, rdata, rdata_size); 69 68 70 69 if (r != -ESRCH) ··· 75 74 set_current_state(TASK_INTERRUPTIBLE); 76 75 schedule_timeout(2*HZ); 77 76 DMWARN("Attempting to contact userspace log server..."); 78 - r = dm_consult_userspace(uuid, DM_ULOG_CTR, lc->usr_argv_str, 77 + r = dm_consult_userspace(uuid, lc->luid, DM_ULOG_CTR, 78 + lc->usr_argv_str, 79 79 strlen(lc->usr_argv_str) + 1, 80 80 NULL, NULL); 81 81 if (!r) 82 82 break; 83 83 } 84 84 DMINFO("Reconnected to userspace log server... DM_ULOG_CTR complete"); 85 - r = dm_consult_userspace(uuid, DM_ULOG_RESUME, NULL, 85 + r = dm_consult_userspace(uuid, lc->luid, DM_ULOG_RESUME, NULL, 86 86 0, NULL, NULL); 87 87 if (!r) 88 88 goto retry; ··· 113 111 return -ENOMEM; 114 112 } 115 113 116 - for (i = 0, str_size = 0; i < argc; i++) 117 - str_size += sprintf(str + str_size, "%s ", argv[i]); 118 - str_size += sprintf(str + str_size, "%llu", 119 - (unsigned long long)ti->len); 114 + str_size = sprintf(str, "%llu", (unsigned long long)ti->len); 115 + for (i = 0; i < argc; i++) 116 + str_size += sprintf(str + str_size, " %s", argv[i]); 120 117 121 118 *ctr_str = str; 122 119 return str_size; ··· 155 154 return -ENOMEM; 156 155 } 157 156 157 + /* The ptr value is sufficient for local unique id */ 158 + lc->luid = (uint64_t)lc; 159 + 158 160 lc->ti = ti; 159 161 160 162 if (strlen(argv[0]) > (DM_UUID_LEN - 1)) { ··· 177 173 } 178 174 179 175 /* Send table string */ 180 - r = dm_consult_userspace(lc->uuid, DM_ULOG_CTR, 176 + r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_CTR, 181 177 ctr_str, str_size, NULL, NULL); 182 178 183 179 if (r == -ESRCH) { ··· 187 183 188 184 /* Since the 
region size does not change, get it now */ 189 185 rdata_size = sizeof(rdata); 190 - r = dm_consult_userspace(lc->uuid, DM_ULOG_GET_REGION_SIZE, 186 + r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_GET_REGION_SIZE, 191 187 NULL, 0, (char *)&rdata, &rdata_size); 192 188 193 189 if (r) { ··· 216 212 int r; 217 213 struct log_c *lc = log->context; 218 214 219 - r = dm_consult_userspace(lc->uuid, DM_ULOG_DTR, 215 + r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_DTR, 220 216 NULL, 0, 221 217 NULL, NULL); 222 218 ··· 231 227 int r; 232 228 struct log_c *lc = log->context; 233 229 234 - r = dm_consult_userspace(lc->uuid, DM_ULOG_PRESUSPEND, 230 + r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_PRESUSPEND, 235 231 NULL, 0, 236 232 NULL, NULL); 237 233 ··· 243 239 int r; 244 240 struct log_c *lc = log->context; 245 241 246 - r = dm_consult_userspace(lc->uuid, DM_ULOG_POSTSUSPEND, 242 + r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_POSTSUSPEND, 247 243 NULL, 0, 248 244 NULL, NULL); 249 245 ··· 256 252 struct log_c *lc = log->context; 257 253 258 254 lc->in_sync_hint = 0; 259 - r = dm_consult_userspace(lc->uuid, DM_ULOG_RESUME, 255 + r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_RESUME, 260 256 NULL, 0, 261 257 NULL, NULL); 262 258 ··· 565 561 char *result, unsigned maxlen) 566 562 { 567 563 int r = 0; 564 + char *table_args; 568 565 size_t sz = (size_t)maxlen; 569 566 struct log_c *lc = log->context; 570 567 ··· 582 577 break; 583 578 case STATUSTYPE_TABLE: 584 579 sz = 0; 585 - DMEMIT("%s %u %s %s", log->type->name, lc->usr_argc + 1, 586 - lc->uuid, lc->usr_argv_str); 580 + table_args = strstr(lc->usr_argv_str, " "); 581 + BUG_ON(!table_args); /* There will always be a ' ' */ 582 + table_args++; 583 + 584 + DMEMIT("%s %u %s %s ", log->type->name, lc->usr_argc, 585 + lc->uuid, table_args); 587 586 break; 588 587 } 589 588 return (r) ? 0 : (int)sz;
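The userspace-log patch also reorders the constructor string so the device length comes first, which lets the `STATUSTYPE_TABLE` case recover the original user arguments by skipping everything up to the first space. A sketch of both halves of that convention (function names are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a ctr string with the device length first, then the user
 * args, mirroring the patched build_constructor_string(). */
static int build_ctr_str(char *str, unsigned long long len,
			 int argc, const char **argv)
{
	int i, str_size;

	str_size = sprintf(str, "%llu", len);
	for (i = 0; i < argc; i++)
		str_size += sprintf(str + str_size, " %s", argv[i]);
	return str_size;
}

/* Recover just the user args, as the STATUSTYPE_TABLE case does:
 * everything after the first space. */
static const char *table_args(const char *ctr_str)
{
	const char *p = strstr(ctr_str, " ");
	return p ? p + 1 : ctr_str;
}
```

Putting the internally-added length at a fixed position is what makes the `strstr(lc->usr_argv_str, " ")` in the status path safe, hence the `BUG_ON(!table_args)`.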
+5 -3
drivers/md/dm-log-userspace-transfer.c
··· 108 108 *(pkg->data_size) = 0; 109 109 } else if (tfr->data_size > *(pkg->data_size)) { 110 110 DMERR("Insufficient space to receive package [%u] " 111 - "(%u vs %lu)", tfr->request_type, 111 + "(%u vs %zu)", tfr->request_type, 112 112 tfr->data_size, *(pkg->data_size)); 113 113 114 114 *(pkg->data_size) = 0; ··· 147 147 148 148 /** 149 149 * dm_consult_userspace 150 - * @uuid: log's uuid (must be DM_UUID_LEN in size) 150 + * @uuid: log's universal unique identifier (must be DM_UUID_LEN in size) 151 + * @luid: log's local unique identifier 151 152 * @request_type: found in include/linux/dm-log-userspace.h 152 153 * @data: data to tx to the server 153 154 * @data_size: size of data in bytes ··· 164 163 * 165 164 * Returns: 0 on success, -EXXX on failure 166 165 **/ 167 - int dm_consult_userspace(const char *uuid, int request_type, 166 + int dm_consult_userspace(const char *uuid, uint64_t luid, int request_type, 168 167 char *data, size_t data_size, 169 168 char *rdata, size_t *rdata_size) 170 169 { ··· 191 190 192 191 memset(tfr, 0, DM_ULOG_PREALLOCED_SIZE - overhead_size); 193 192 memcpy(tfr->uuid, uuid, DM_UUID_LEN); 193 + tfr->luid = luid; 194 194 tfr->seq = dm_ulog_seq++; 195 195 196 196 /*
+1 -1
drivers/md/dm-log-userspace-transfer.h
··· 11 11 12 12 int dm_ulog_tfr_init(void); 13 13 void dm_ulog_tfr_exit(void); 14 - int dm_consult_userspace(const char *uuid, int request_type, 14 + int dm_consult_userspace(const char *uuid, uint64_t luid, int request_type, 15 15 char *data, size_t data_size, 16 16 char *rdata, size_t *rdata_size); 17 17
+7 -1
drivers/md/dm-raid1.c
··· 648 648 */ 649 649 dm_rh_inc_pending(ms->rh, &sync); 650 650 dm_rh_inc_pending(ms->rh, &nosync); 651 - ms->log_failure = dm_rh_flush(ms->rh) ? 1 : 0; 651 + 652 + /* 653 + * If the flush fails on a previous call and succeeds here, 654 + * we must not reset the log_failure variable. We need 655 + * userspace interaction to do that. 656 + */ 657 + ms->log_failure = dm_rh_flush(ms->rh) ? 1 : ms->log_failure; 652 658 653 659 /* 654 660 * Dispatch io.
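The dm-raid1 fix makes `log_failure` sticky: once a flush has failed, a later successful flush must not silently clear the flag, because only userspace intervention can make the mirror trustworthy again. The pattern in isolation:

```c
#include <assert.h>

static int log_failure;	/* sticky: only userspace may clear it */

/* Record the outcome of a flush. A success never clears a prior
 * failure, mirroring:
 *   ms->log_failure = dm_rh_flush(ms->rh) ? 1 : ms->log_failure;
 */
static void record_flush(int flush_ret)
{
	log_failure = flush_ret ? 1 : log_failure;
}
```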
+53 -35
drivers/md/dm-snap-persistent.c
··· 106 106 void *zero_area; 107 107 108 108 /* 109 + * An area used for header. The header can be written 110 + * concurrently with metadata (when invalidating the snapshot), 111 + * so it needs a separate buffer. 112 + */ 113 + void *header_area; 114 + 115 + /* 109 116 * Used to keep track of which metadata area the data in 110 117 * 'chunk' refers to. 111 118 */ ··· 155 148 */ 156 149 ps->area = vmalloc(len); 157 150 if (!ps->area) 158 - return r; 151 + goto err_area; 159 152 160 153 ps->zero_area = vmalloc(len); 161 - if (!ps->zero_area) { 162 - vfree(ps->area); 163 - return r; 164 - } 154 + if (!ps->zero_area) 155 + goto err_zero_area; 165 156 memset(ps->zero_area, 0, len); 166 157 158 + ps->header_area = vmalloc(len); 159 + if (!ps->header_area) 160 + goto err_header_area; 161 + 167 162 return 0; 163 + 164 + err_header_area: 165 + vfree(ps->zero_area); 166 + 167 + err_zero_area: 168 + vfree(ps->area); 169 + 170 + err_area: 171 + return r; 168 172 } 169 173 170 174 static void free_area(struct pstore *ps) ··· 187 169 if (ps->zero_area) 188 170 vfree(ps->zero_area); 189 171 ps->zero_area = NULL; 172 + 173 + if (ps->header_area) 174 + vfree(ps->header_area); 175 + ps->header_area = NULL; 190 176 } 191 177 192 178 struct mdata_req { ··· 210 188 /* 211 189 * Read or write a chunk aligned and sized block of data from a device. 
212 190 */ 213 - static int chunk_io(struct pstore *ps, chunk_t chunk, int rw, int metadata) 191 + static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw, 192 + int metadata) 214 193 { 215 194 struct dm_io_region where = { 216 195 .bdev = ps->store->cow->bdev, ··· 221 198 struct dm_io_request io_req = { 222 199 .bi_rw = rw, 223 200 .mem.type = DM_IO_VMA, 224 - .mem.ptr.vma = ps->area, 201 + .mem.ptr.vma = area, 225 202 .client = ps->io_client, 226 203 .notify.fn = NULL, 227 204 }; ··· 263 240 264 241 chunk = area_location(ps, ps->current_area); 265 242 266 - r = chunk_io(ps, chunk, rw, 0); 243 + r = chunk_io(ps, ps->area, chunk, rw, 0); 267 244 if (r) 268 245 return r; 269 246 ··· 277 254 278 255 static int zero_disk_area(struct pstore *ps, chunk_t area) 279 256 { 280 - struct dm_io_region where = { 281 - .bdev = ps->store->cow->bdev, 282 - .sector = ps->store->chunk_size * area_location(ps, area), 283 - .count = ps->store->chunk_size, 284 - }; 285 - struct dm_io_request io_req = { 286 - .bi_rw = WRITE, 287 - .mem.type = DM_IO_VMA, 288 - .mem.ptr.vma = ps->zero_area, 289 - .client = ps->io_client, 290 - .notify.fn = NULL, 291 - }; 292 - 293 - return dm_io(&io_req, 1, &where, NULL); 257 + return chunk_io(ps, ps->zero_area, area_location(ps, area), WRITE, 0); 294 258 } 295 259 296 260 static int read_header(struct pstore *ps, int *new_snapshot) ··· 286 276 struct disk_header *dh; 287 277 chunk_t chunk_size; 288 278 int chunk_size_supplied = 1; 279 + char *chunk_err; 289 280 290 281 /* 291 282 * Use default chunk size (or hardsect_size, if larger) if none supplied ··· 308 297 if (r) 309 298 return r; 310 299 311 - r = chunk_io(ps, 0, READ, 1); 300 + r = chunk_io(ps, ps->header_area, 0, READ, 1); 312 301 if (r) 313 302 goto bad; 314 303 315 - dh = (struct disk_header *) ps->area; 304 + dh = ps->header_area; 316 305 317 306 if (le32_to_cpu(dh->magic) == 0) { 318 307 *new_snapshot = 1; ··· 330 319 ps->version = le32_to_cpu(dh->version); 331 320 
chunk_size = le32_to_cpu(dh->chunk_size); 332 321 333 - if (!chunk_size_supplied || ps->store->chunk_size == chunk_size) 322 + if (ps->store->chunk_size == chunk_size) 334 323 return 0; 335 324 336 - DMWARN("chunk size %llu in device metadata overrides " 337 - "table chunk size of %llu.", 338 - (unsigned long long)chunk_size, 339 - (unsigned long long)ps->store->chunk_size); 325 + if (chunk_size_supplied) 326 + DMWARN("chunk size %llu in device metadata overrides " 327 + "table chunk size of %llu.", 328 + (unsigned long long)chunk_size, 329 + (unsigned long long)ps->store->chunk_size); 340 330 341 331 /* We had a bogus chunk_size. Fix stuff up. */ 342 332 free_area(ps); 343 333 344 - ps->store->chunk_size = chunk_size; 345 - ps->store->chunk_mask = chunk_size - 1; 346 - ps->store->chunk_shift = ffs(chunk_size) - 1; 334 + r = dm_exception_store_set_chunk_size(ps->store, chunk_size, 335 + &chunk_err); 336 + if (r) { 337 + DMERR("invalid on-disk chunk size %llu: %s.", 338 + (unsigned long long)chunk_size, chunk_err); 339 + return r; 340 + } 347 341 348 342 r = dm_io_client_resize(sectors_to_pages(ps->store->chunk_size), 349 343 ps->io_client); ··· 367 351 { 368 352 struct disk_header *dh; 369 353 370 - memset(ps->area, 0, ps->store->chunk_size << SECTOR_SHIFT); 354 + memset(ps->header_area, 0, ps->store->chunk_size << SECTOR_SHIFT); 371 355 372 - dh = (struct disk_header *) ps->area; 356 + dh = ps->header_area; 373 357 dh->magic = cpu_to_le32(SNAP_MAGIC); 374 358 dh->valid = cpu_to_le32(ps->valid); 375 359 dh->version = cpu_to_le32(ps->version); 376 360 dh->chunk_size = cpu_to_le32(ps->store->chunk_size); 377 361 378 - return chunk_io(ps, 0, WRITE, 1); 362 + return chunk_io(ps, ps->header_area, 0, WRITE, 1); 379 363 } 380 364 381 365 /* ··· 695 679 ps->valid = 1; 696 680 ps->version = SNAPSHOT_DISK_VERSION; 697 681 ps->area = NULL; 682 + ps->zero_area = NULL; 683 + ps->header_area = NULL; 698 684 ps->next_free = 2; /* skipping the header and first area */ 699 685 
ps->current_committed = 0; 700 686
+21 -2
drivers/md/dm-snap.c
··· 1176 1176 return 0; 1177 1177 } 1178 1178 1179 + static int snapshot_iterate_devices(struct dm_target *ti, 1180 + iterate_devices_callout_fn fn, void *data) 1181 + { 1182 + struct dm_snapshot *snap = ti->private; 1183 + 1184 + return fn(ti, snap->origin, 0, ti->len, data); 1185 + } 1186 + 1187 + 1179 1188 /*----------------------------------------------------------------- 1180 1189 * Origin methods 1181 1190 *---------------------------------------------------------------*/ ··· 1419 1410 return 0; 1420 1411 } 1421 1412 1413 + static int origin_iterate_devices(struct dm_target *ti, 1414 + iterate_devices_callout_fn fn, void *data) 1415 + { 1416 + struct dm_dev *dev = ti->private; 1417 + 1418 + return fn(ti, dev, 0, ti->len, data); 1419 + } 1420 + 1422 1421 static struct target_type origin_target = { 1423 1422 .name = "snapshot-origin", 1424 - .version = {1, 6, 0}, 1423 + .version = {1, 7, 0}, 1425 1424 .module = THIS_MODULE, 1426 1425 .ctr = origin_ctr, 1427 1426 .dtr = origin_dtr, 1428 1427 .map = origin_map, 1429 1428 .resume = origin_resume, 1430 1429 .status = origin_status, 1430 + .iterate_devices = origin_iterate_devices, 1431 1431 }; 1432 1432 1433 1433 static struct target_type snapshot_target = { 1434 1434 .name = "snapshot", 1435 - .version = {1, 6, 0}, 1435 + .version = {1, 7, 0}, 1436 1436 .module = THIS_MODULE, 1437 1437 .ctr = snapshot_ctr, 1438 1438 .dtr = snapshot_dtr, ··· 1449 1431 .end_io = snapshot_end_io, 1450 1432 .resume = snapshot_resume, 1451 1433 .status = snapshot_status, 1434 + .iterate_devices = snapshot_iterate_devices, 1452 1435 }; 1453 1436 1454 1437 static int __init dm_snapshot_init(void)
+12 -1
drivers/md/dm-stripe.c
··· 329 329 return ret; 330 330 } 331 331 332 + static void stripe_io_hints(struct dm_target *ti, 333 + struct queue_limits *limits) 334 + { 335 + struct stripe_c *sc = ti->private; 336 + unsigned chunk_size = (sc->chunk_mask + 1) << 9; 337 + 338 + blk_limits_io_min(limits, chunk_size); 339 + limits->io_opt = chunk_size * sc->stripes; 340 + } 341 + 332 342 static struct target_type stripe_target = { 333 343 .name = "striped", 334 - .version = {1, 2, 0}, 344 + .version = {1, 3, 0}, 335 345 .module = THIS_MODULE, 336 346 .ctr = stripe_ctr, 337 347 .dtr = stripe_dtr, ··· 349 339 .end_io = stripe_end_io, 350 340 .status = stripe_status, 351 341 .iterate_devices = stripe_iterate_devices, 342 + .io_hints = stripe_io_hints, 352 343 }; 353 344 354 345 int __init dm_stripe_init(void)
+33 -18
drivers/md/dm-table.c
··· 343 343 } 344 344 345 345 /* 346 - * If possible, this checks an area of a destination device is valid. 346 + * If possible, this checks an area of a destination device is invalid. 347 347 */ 348 - static int device_area_is_valid(struct dm_target *ti, struct dm_dev *dev, 349 - sector_t start, sector_t len, void *data) 348 + static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev, 349 + sector_t start, sector_t len, void *data) 350 350 { 351 351 struct queue_limits *limits = data; 352 352 struct block_device *bdev = dev->bdev; ··· 357 357 char b[BDEVNAME_SIZE]; 358 358 359 359 if (!dev_size) 360 - return 1; 360 + return 0; 361 361 362 362 if ((start >= dev_size) || (start + len > dev_size)) { 363 - DMWARN("%s: %s too small for target", 364 - dm_device_name(ti->table->md), bdevname(bdev, b)); 365 - return 0; 363 + DMWARN("%s: %s too small for target: " 364 + "start=%llu, len=%llu, dev_size=%llu", 365 + dm_device_name(ti->table->md), bdevname(bdev, b), 366 + (unsigned long long)start, 367 + (unsigned long long)len, 368 + (unsigned long long)dev_size); 369 + return 1; 366 370 } 367 371 368 372 if (logical_block_size_sectors <= 1) 369 - return 1; 373 + return 0; 370 374 371 375 if (start & (logical_block_size_sectors - 1)) { 372 376 DMWARN("%s: start=%llu not aligned to h/w " 373 - "logical block size %hu of %s", 377 + "logical block size %u of %s", 374 378 dm_device_name(ti->table->md), 375 379 (unsigned long long)start, 376 380 limits->logical_block_size, bdevname(bdev, b)); 377 - return 0; 381 + return 1; 378 382 } 379 383 380 384 if (len & (logical_block_size_sectors - 1)) { 381 385 DMWARN("%s: len=%llu not aligned to h/w " 382 - "logical block size %hu of %s", 386 + "logical block size %u of %s", 383 387 dm_device_name(ti->table->md), 384 388 (unsigned long long)len, 385 389 limits->logical_block_size, bdevname(bdev, b)); 386 - return 0; 390 + return 1; 387 391 } 388 392 389 - return 1; 393 + return 0; 390 394 } 391 395 392 396 /* ··· 500 496 
} 501 497 502 498 if (blk_stack_limits(limits, &q->limits, start << 9) < 0) 503 - DMWARN("%s: target device %s is misaligned", 504 - dm_device_name(ti->table->md), bdevname(bdev, b)); 499 + DMWARN("%s: target device %s is misaligned: " 500 + "physical_block_size=%u, logical_block_size=%u, " 501 + "alignment_offset=%u, start=%llu", 502 + dm_device_name(ti->table->md), bdevname(bdev, b), 503 + q->limits.physical_block_size, 504 + q->limits.logical_block_size, 505 + q->limits.alignment_offset, 506 + (unsigned long long) start << 9); 507 + 505 508 506 509 /* 507 510 * Check if merge fn is supported. ··· 709 698 710 699 if (remaining) { 711 700 DMWARN("%s: table line %u (start sect %llu len %llu) " 712 - "not aligned to h/w logical block size %hu", 701 + "not aligned to h/w logical block size %u", 713 702 dm_device_name(table->md), i, 714 703 (unsigned long long) ti->begin, 715 704 (unsigned long long) ti->len, ··· 1007 996 ti->type->iterate_devices(ti, dm_set_device_limits, 1008 997 &ti_limits); 1009 998 999 + /* Set I/O hints portion of queue limits */ 1000 + if (ti->type->io_hints) 1001 + ti->type->io_hints(ti, &ti_limits); 1002 + 1010 1003 /* 1011 1004 * Check each device area is consistent with the target's 1012 1005 * overall queue limits. 1013 1006 */ 1014 - if (!ti->type->iterate_devices(ti, device_area_is_valid, 1015 - &ti_limits)) 1007 + if (ti->type->iterate_devices(ti, device_area_is_invalid, 1008 + &ti_limits)) 1016 1009 return -EINVAL; 1017 1010 1018 1011 combine_limits:
+10 -5
drivers/md/dm.c
··· 738 738 dm_put(md); 739 739 } 740 740 741 + static void free_rq_clone(struct request *clone) 742 + { 743 + struct dm_rq_target_io *tio = clone->end_io_data; 744 + 745 + blk_rq_unprep_clone(clone); 746 + free_rq_tio(tio); 747 + } 748 + 741 749 static void dm_unprep_request(struct request *rq) 742 750 { 743 751 struct request *clone = rq->special; 744 - struct dm_rq_target_io *tio = clone->end_io_data; 745 752 746 753 rq->special = NULL; 747 754 rq->cmd_flags &= ~REQ_DONTPREP; 748 755 749 - blk_rq_unprep_clone(clone); 750 - free_rq_tio(tio); 756 + free_rq_clone(clone); 751 757 } 752 758 753 759 /* ··· 831 825 rq->sense_len = clone->sense_len; 832 826 } 833 827 834 - BUG_ON(clone->bio); 835 - free_rq_tio(tio); 828 + free_rq_clone(clone); 836 829 837 830 blk_end_request_all(rq, error); 838 831
+17 -16
drivers/md/md.c
··· 359 359 else 360 360 new->md_minor = MINOR(unit) >> MdpMinorShift; 361 361 362 + mutex_init(&new->open_mutex); 362 363 mutex_init(&new->reconfig_mutex); 363 364 INIT_LIST_HEAD(&new->disks); 364 365 INIT_LIST_HEAD(&new->all_mddevs); ··· 1975 1974 /* otherwise we have to go forward and ... */ 1976 1975 mddev->events ++; 1977 1976 if (!mddev->in_sync || mddev->recovery_cp != MaxSector) { /* not clean */ 1978 - /* .. if the array isn't clean, insist on an odd 'events' */ 1979 - if ((mddev->events&1)==0) { 1980 - mddev->events++; 1977 + /* .. if the array isn't clean, an 'even' event must also go 1978 + * to spares. */ 1979 + if ((mddev->events&1)==0) 1981 1980 nospares = 0; 1982 - } 1983 1981 } else { 1984 - /* otherwise insist on an even 'events' (for clean states) */ 1985 - if ((mddev->events&1)) { 1986 - mddev->events++; 1982 + /* otherwise an 'odd' event must go to spares */ 1983 + if ((mddev->events&1)) 1987 1984 nospares = 0; 1988 - } 1989 1985 } 1990 1986 } 1991 1987 ··· 3599 3601 if (max < mddev->resync_min) 3600 3602 return -EINVAL; 3601 3603 if (max < mddev->resync_max && 3604 + mddev->ro == 0 && 3602 3605 test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) 3603 3606 return -EBUSY; 3604 3607 ··· 4303 4304 struct gendisk *disk = mddev->gendisk; 4304 4305 mdk_rdev_t *rdev; 4305 4306 4307 + mutex_lock(&mddev->open_mutex); 4306 4308 if (atomic_read(&mddev->openers) > is_open) { 4307 4309 printk("md: %s still in use.\n",mdname(mddev)); 4308 - return -EBUSY; 4309 - } 4310 - 4311 - if (mddev->pers) { 4310 + err = -EBUSY; 4311 + } else if (mddev->pers) { 4312 4312 4313 4313 if (mddev->sync_thread) { 4314 4314 set_bit(MD_RECOVERY_FROZEN, &mddev->recovery); ··· 4364 4366 if (mode == 1) 4365 4367 set_disk_ro(disk, 1); 4366 4368 clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 4369 + err = 0; 4367 4370 } 4368 - 4371 + out: 4372 + mutex_unlock(&mddev->open_mutex); 4373 + if (err) 4374 + return err; 4369 4375 /* 4370 4376 * Free resources if final stop 4371 4377 */ ··· 
4435 4433 blk_integrity_unregister(disk); 4436 4434 md_new_event(mddev); 4437 4435 sysfs_notify_dirent(mddev->sysfs_state); 4438 - out: 4439 4436 return err; 4440 4437 } 4441 4438 ··· 5519 5518 } 5520 5519 BUG_ON(mddev != bdev->bd_disk->private_data); 5521 5520 5522 - if ((err = mutex_lock_interruptible_nested(&mddev->reconfig_mutex, 1))) 5521 + if ((err = mutex_lock_interruptible(&mddev->open_mutex))) 5523 5522 goto out; 5524 5523 5525 5524 err = 0; 5526 5525 atomic_inc(&mddev->openers); 5527 - mddev_unlock(mddev); 5526 + mutex_unlock(&mddev->open_mutex); 5528 5527 5529 5528 check_disk_change(bdev); 5530 5529 out:
+10
drivers/md/md.h
··· 223 223 * so we don't loop trying */ 224 224 225 225 int in_sync; /* know to not need resync */ 226 + /* 'open_mutex' avoids races between 'md_open' and 'do_md_stop', so 227 + * that we are never stopping an array while it is open. 228 + * 'reconfig_mutex' protects all other reconfiguration. 229 + * These locks are separate due to conflicting interactions 230 + * with bdev->bd_mutex. 231 + * Lock ordering is: 232 + * reconfig_mutex -> bd_mutex : e.g. do_md_run -> revalidate_disk 233 + * bd_mutex -> open_mutex: e.g. __blkdev_get -> md_open 234 + */ 235 + struct mutex open_mutex; 226 236 struct mutex reconfig_mutex; 227 237 atomic_t active; /* general refcount */ 228 238 atomic_t openers; /* number of active opens */
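The new `open_mutex` exists so that stopping an array and opening it cannot race, with `do_md_stop()` restructured to decide under the lock and fall through a single unlock point. A minimal pthread-based sketch of that check-under-lock, single-exit pattern (names and the error value are illustrative, and the mutex here stands in for `open_mutex`):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t open_mutex = PTHREAD_MUTEX_INITIALIZER;
static int stopped;

/* do_md_stop-style: refuse while the device is open, and take the
 * same unlock path whether or not we stopped it. */
static int try_stop(int openers)
{
	int err = 0;

	pthread_mutex_lock(&open_mutex);
	if (openers > 0)
		err = -16;		/* still in use, like -EBUSY */
	else
		stopped = 1;
	pthread_mutex_unlock(&open_mutex);
	return err;
}
```

Keeping the openers check and the stop under one lock is the whole point: with the old `reconfig_mutex` scheme the count could change between the check and the teardown.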
+30 -4
drivers/md/raid5.c
··· 3785 3785 conf->reshape_progress < raid5_size(mddev, 0, 0)) { 3786 3786 sector_nr = raid5_size(mddev, 0, 0) 3787 3787 - conf->reshape_progress; 3788 - } else if (mddev->delta_disks > 0 && 3788 + } else if (mddev->delta_disks >= 0 && 3789 3789 conf->reshape_progress > 0) 3790 3790 sector_nr = conf->reshape_progress; 3791 3791 sector_div(sector_nr, new_data_disks); ··· 4509 4509 (old_disks-max_degraded)); 4510 4510 /* here_old is the first stripe that we might need to read 4511 4511 * from */ 4512 - if (here_new >= here_old) { 4512 + if (mddev->delta_disks == 0) { 4513 + /* We cannot be sure it is safe to start an in-place 4514 + * reshape. It is only safe if user-space if monitoring 4515 + * and taking constant backups. 4516 + * mdadm always starts a situation like this in 4517 + * readonly mode so it can take control before 4518 + * allowing any writes. So just check for that. 4519 + */ 4520 + if ((here_new * mddev->new_chunk_sectors != 4521 + here_old * mddev->chunk_sectors) || 4522 + mddev->ro == 0) { 4523 + printk(KERN_ERR "raid5: in-place reshape must be started" 4524 + " in read-only mode - aborting\n"); 4525 + return -EINVAL; 4526 + } 4527 + } else if (mddev->delta_disks < 0 4528 + ? 
(here_new * mddev->new_chunk_sectors <= 4529 + here_old * mddev->chunk_sectors) 4530 + : (here_new * mddev->new_chunk_sectors >= 4531 + here_old * mddev->chunk_sectors)) { 4513 4532 /* Reading from the same stripe as writing to - bad */ 4514 4533 printk(KERN_ERR "raid5: reshape_position too early for " 4515 4534 "auto-recovery - aborting.\n"); ··· 5097 5078 mddev->degraded--; 5098 5079 for (d = conf->raid_disks ; 5099 5080 d < conf->raid_disks - mddev->delta_disks; 5100 - d++) 5101 - raid5_remove_disk(mddev, d); 5081 + d++) { 5082 + mdk_rdev_t *rdev = conf->disks[d].rdev; 5083 + if (rdev && raid5_remove_disk(mddev, d) == 0) { 5084 + char nm[20]; 5085 + sprintf(nm, "rd%d", rdev->raid_disk); 5086 + sysfs_remove_link(&mddev->kobj, nm); 5087 + rdev->raid_disk = -1; 5088 + } 5089 + } 5102 5090 } 5103 5091 mddev->layout = conf->algorithm; 5104 5092 mddev->chunk_sectors = conf->chunk_sectors;
+5 -7
drivers/media/common/tuners/qt1010.c
··· 64 64 /* dump all registers */ 65 65 static void qt1010_dump_regs(struct qt1010_priv *priv) 66 66 { 67 - char buf[52], buf2[4]; 68 67 u8 reg, val; 69 68 70 69 for (reg = 0; ; reg++) { 71 70 if (reg % 16 == 0) { 72 71 if (reg) 73 - printk("%s\n", buf); 74 - sprintf(buf, "%02x: ", reg); 72 + printk(KERN_CONT "\n"); 73 + printk(KERN_DEBUG "%02x:", reg); 75 74 } 76 75 if (qt1010_readreg(priv, reg, &val) == 0) 77 - sprintf(buf2, "%02x ", val); 76 + printk(KERN_CONT " %02x", val); 78 77 else 79 - strcpy(buf2, "-- "); 80 - strcat(buf, buf2); 78 + printk(KERN_CONT " --"); 81 79 if (reg == 0x2f) 82 80 break; 83 81 } 84 - printk("%s\n", buf); 82 + printk(KERN_CONT "\n"); 85 83 } 86 84 87 85 static int qt1010_set_params(struct dvb_frontend *fe,
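The rewritten `qt1010_dump_regs()` emits each register inline with `KERN_CONT` instead of assembling rows in stack buffers with `sprintf`/`strcat`. A userspace analogue that builds one 16-value row at a time with `snprintf` (the register read is mocked; the real driver reads over I2C):

```c
#include <assert.h>
#include <stdio.h>

/* Mock register read: pretend register 0x05 is unreadable. */
static int mock_read(unsigned char reg, unsigned char *val)
{
	if (reg == 0x05)
		return -1;
	*val = reg ^ 0x5a;	/* arbitrary mock contents */
	return 0;
}

/* Format one row of 16 registers as "rr: vv vv ...", printing "--"
 * for a failed read, like qt1010_dump_regs(). Returns the length. */
static int dump_row(char *buf, size_t size, unsigned char base)
{
	unsigned char reg, val;
	int n = snprintf(buf, size, "%02x:", base);

	for (reg = base; reg < base + 16; reg++) {
		if (mock_read(reg, &val) == 0)
			n += snprintf(buf + n, size - n, " %02x", val);
		else
			n += snprintf(buf + n, size - n, " --");
	}
	return n;
}
```

In the kernel, emitting piecewise with `KERN_CONT` avoids the fixed-size row buffer entirely, which is what made the old `strcat` approach fragile.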
+2 -2
drivers/media/common/tuners/tuner-xc2028.c
··· 1119 1119 struct xc2028_data *priv = fe->tuner_priv; 1120 1120 int rc = 0; 1121 1121 1122 - /* Avoid firmware reload on slow devices */ 1123 - if (no_poweroff) 1122 + /* Avoid firmware reload on slow devices or if PM disabled */ 1123 + if (no_poweroff || priv->ctrl.disable_power_mgmt) 1124 1124 return 0; 1125 1125 1126 1126 tuner_dbg("Putting xc2028/3028 into poweroff mode.\n");
+1
drivers/media/common/tuners/tuner-xc2028.h
··· 38 38 unsigned int input1:1; 39 39 unsigned int vhfbw7:1; 40 40 unsigned int uhfbw8:1; 41 + unsigned int disable_power_mgmt:1; 41 42 unsigned int demod; 42 43 enum firmware_type type:2; 43 44 };
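The new `disable_power_mgmt:1` member joins a set of one-bit bitfields that act as cheap per-board booleans, and it pairs with the check added in tuner-xc2028.c above. A minimal sketch (the `demo_ctrl` struct and `should_power_off()` are illustrative stand-ins, not the real `struct xc2028_ctrl` or driver function):

```c
/* One-bit bitfields as per-board feature flags: board-setup code can
 * flip an individual flag without disturbing its neighbours. */
struct demo_ctrl {
    unsigned int input1:1;
    unsigned int disable_power_mgmt:1;
};

/* Mirrors the patched poweroff guard: skip powering the tuner down
 * when globally disabled (module option) or per-board disabled. */
static int should_power_off(const struct demo_ctrl *ctrl, int no_poweroff)
{
    if (no_poweroff || ctrl->disable_power_mgmt)
        return 0;
    return 1;
}
```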
+1 -1
drivers/media/dvb/dvb-usb/af9015.c
··· 81 81 82 82 switch (req->cmd) { 83 83 case GET_CONFIG: 84 - case BOOT: 85 84 case READ_MEMORY: 86 85 case RECONNECT_USB: 87 86 case GET_IR_CODE: ··· 99 100 case WRITE_VIRTUAL_MEMORY: 100 101 case COPY_FIRMWARE: 101 102 case DOWNLOAD_FIRMWARE: 103 + case BOOT: 102 104 break; 103 105 default: 104 106 err("unknown command:%d", req->cmd);
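The af9015 hunk works because adjacent `case` labels without code between them share one handler; moving `case BOOT:` from the first group to the second changes which shared handling it falls into. A sketch of the idiom (enum values and `needs_write_path()` are illustrative, not the driver's real command set):

```c
/* Grouped switch cases: labels that fall through to the same body get
 * identical treatment, so relocating one label reclassifies it. */
enum demo_cmd { GET_CONFIG, READ_MEMORY, WRITE_MEMORY,
                DOWNLOAD_FIRMWARE, BOOT };

static int needs_write_path(enum demo_cmd cmd)
{
    switch (cmd) {
    case GET_CONFIG:
    case READ_MEMORY:
        return 0;               /* read-style commands */
    case WRITE_MEMORY:
    case DOWNLOAD_FIRMWARE:
    case BOOT:                  /* moved into this group by the patch */
        return 1;               /* write-style commands */
    }
    return -1;                  /* unknown command */
}
```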
+1 -1
drivers/media/dvb/frontends/cx22700.c
··· 380 380 struct cx22700_state* state = NULL; 381 381 382 382 /* allocate memory for the internal state */ 383 - state = kmalloc(sizeof(struct cx22700_state), GFP_KERNEL); 383 + state = kzalloc(sizeof(struct cx22700_state), GFP_KERNEL); 384 384 if (state == NULL) goto error; 385 385 386 386 /* setup the state */
+1 -1
drivers/media/dvb/frontends/cx22702.c
··· 580 580 struct cx22702_state *state = NULL; 581 581 582 582 /* allocate memory for the internal state */ 583 - state = kmalloc(sizeof(struct cx22702_state), GFP_KERNEL); 583 + state = kzalloc(sizeof(struct cx22702_state), GFP_KERNEL); 584 584 if (state == NULL) 585 585 goto error; 586 586
+1 -1
drivers/media/dvb/frontends/cx24110.c
··· 598 598 int ret; 599 599 600 600 /* allocate memory for the internal state */ 601 - state = kmalloc(sizeof(struct cx24110_state), GFP_KERNEL); 601 + state = kzalloc(sizeof(struct cx24110_state), GFP_KERNEL); 602 602 if (state == NULL) goto error; 603 603 604 604 /* setup the state */
+3 -3
drivers/media/dvb/frontends/dvb_dummy_fe.c
··· 117 117 struct dvb_dummy_fe_state* state = NULL; 118 118 119 119 /* allocate memory for the internal state */ 120 - state = kmalloc(sizeof(struct dvb_dummy_fe_state), GFP_KERNEL); 120 + state = kzalloc(sizeof(struct dvb_dummy_fe_state), GFP_KERNEL); 121 121 if (state == NULL) goto error; 122 122 123 123 /* create dvb_frontend */ ··· 137 137 struct dvb_dummy_fe_state* state = NULL; 138 138 139 139 /* allocate memory for the internal state */ 140 - state = kmalloc(sizeof(struct dvb_dummy_fe_state), GFP_KERNEL); 140 + state = kzalloc(sizeof(struct dvb_dummy_fe_state), GFP_KERNEL); 141 141 if (state == NULL) goto error; 142 142 143 143 /* create dvb_frontend */ ··· 157 157 struct dvb_dummy_fe_state* state = NULL; 158 158 159 159 /* allocate memory for the internal state */ 160 - state = kmalloc(sizeof(struct dvb_dummy_fe_state), GFP_KERNEL); 160 + state = kzalloc(sizeof(struct dvb_dummy_fe_state), GFP_KERNEL); 161 161 if (state == NULL) goto error; 162 162 163 163 /* create dvb_frontend */
+1 -1
drivers/media/dvb/frontends/l64781.c
··· 501 501 { .addr = config->demod_address, .flags = I2C_M_RD, .buf = b1, .len = 1 } }; 502 502 503 503 /* allocate memory for the internal state */ 504 - state = kmalloc(sizeof(struct l64781_state), GFP_KERNEL); 504 + state = kzalloc(sizeof(struct l64781_state), GFP_KERNEL); 505 505 if (state == NULL) goto error; 506 506 507 507 /* setup the state */
+1 -1
drivers/media/dvb/frontends/lgs8gl5.c
··· 387 387 dprintk("%s\n", __func__); 388 388 389 389 /* Allocate memory for the internal state */ 390 - state = kmalloc(sizeof(struct lgs8gl5_state), GFP_KERNEL); 390 + state = kzalloc(sizeof(struct lgs8gl5_state), GFP_KERNEL); 391 391 if (state == NULL) 392 392 goto error; 393 393
+1 -1
drivers/media/dvb/frontends/mt312.c
··· 782 782 struct mt312_state *state = NULL; 783 783 784 784 /* allocate memory for the internal state */ 785 - state = kmalloc(sizeof(struct mt312_state), GFP_KERNEL); 785 + state = kzalloc(sizeof(struct mt312_state), GFP_KERNEL); 786 786 if (state == NULL) 787 787 goto error; 788 788
+1 -1
drivers/media/dvb/frontends/nxt6000.c
··· 545 545 struct nxt6000_state* state = NULL; 546 546 547 547 /* allocate memory for the internal state */ 548 - state = kmalloc(sizeof(struct nxt6000_state), GFP_KERNEL); 548 + state = kzalloc(sizeof(struct nxt6000_state), GFP_KERNEL); 549 549 if (state == NULL) goto error; 550 550 551 551 /* setup the state */
+1 -1
drivers/media/dvb/frontends/or51132.c
··· 562 562 struct or51132_state* state = NULL; 563 563 564 564 /* Allocate memory for the internal state */ 565 - state = kmalloc(sizeof(struct or51132_state), GFP_KERNEL); 565 + state = kzalloc(sizeof(struct or51132_state), GFP_KERNEL); 566 566 if (state == NULL) 567 567 return NULL; 568 568
+1 -1
drivers/media/dvb/frontends/or51211.c
··· 527 527 struct or51211_state* state = NULL; 528 528 529 529 /* Allocate memory for the internal state */ 530 - state = kmalloc(sizeof(struct or51211_state), GFP_KERNEL); 530 + state = kzalloc(sizeof(struct or51211_state), GFP_KERNEL); 531 531 if (state == NULL) 532 532 return NULL; 533 533
+1 -1
drivers/media/dvb/frontends/s5h1409.c
··· 796 796 u16 reg; 797 797 798 798 /* allocate memory for the internal state */ 799 - state = kmalloc(sizeof(struct s5h1409_state), GFP_KERNEL); 799 + state = kzalloc(sizeof(struct s5h1409_state), GFP_KERNEL); 800 800 if (state == NULL) 801 801 goto error; 802 802
+1 -1
drivers/media/dvb/frontends/s5h1411.c
··· 844 844 u16 reg; 845 845 846 846 /* allocate memory for the internal state */ 847 - state = kmalloc(sizeof(struct s5h1411_state), GFP_KERNEL); 847 + state = kzalloc(sizeof(struct s5h1411_state), GFP_KERNEL); 848 848 if (state == NULL) 849 849 goto error; 850 850
+1 -1
drivers/media/dvb/frontends/si21xx.c
··· 928 928 dprintk("%s\n", __func__); 929 929 930 930 /* allocate memory for the internal state */ 931 - state = kmalloc(sizeof(struct si21xx_state), GFP_KERNEL); 931 + state = kzalloc(sizeof(struct si21xx_state), GFP_KERNEL); 932 932 if (state == NULL) 933 933 goto error; 934 934
+1 -1
drivers/media/dvb/frontends/sp8870.c
··· 557 557 struct sp8870_state* state = NULL; 558 558 559 559 /* allocate memory for the internal state */ 560 - state = kmalloc(sizeof(struct sp8870_state), GFP_KERNEL); 560 + state = kzalloc(sizeof(struct sp8870_state), GFP_KERNEL); 561 561 if (state == NULL) goto error; 562 562 563 563 /* setup the state */
+1 -1
drivers/media/dvb/frontends/sp887x.c
··· 557 557 struct sp887x_state* state = NULL; 558 558 559 559 /* allocate memory for the internal state */ 560 - state = kmalloc(sizeof(struct sp887x_state), GFP_KERNEL); 560 + state = kzalloc(sizeof(struct sp887x_state), GFP_KERNEL); 561 561 if (state == NULL) goto error; 562 562 563 563 /* setup the state */
+1 -1
drivers/media/dvb/frontends/stv0288.c
··· 570 570 int id; 571 571 572 572 /* allocate memory for the internal state */ 573 - state = kmalloc(sizeof(struct stv0288_state), GFP_KERNEL); 573 + state = kzalloc(sizeof(struct stv0288_state), GFP_KERNEL); 574 574 if (state == NULL) 575 575 goto error; 576 576
+1 -1
drivers/media/dvb/frontends/stv0297.c
··· 663 663 struct stv0297_state *state = NULL; 664 664 665 665 /* allocate memory for the internal state */ 666 - state = kmalloc(sizeof(struct stv0297_state), GFP_KERNEL); 666 + state = kzalloc(sizeof(struct stv0297_state), GFP_KERNEL); 667 667 if (state == NULL) 668 668 goto error; 669 669
+1 -1
drivers/media/dvb/frontends/stv0299.c
··· 667 667 int id; 668 668 669 669 /* allocate memory for the internal state */ 670 - state = kmalloc(sizeof(struct stv0299_state), GFP_KERNEL); 670 + state = kzalloc(sizeof(struct stv0299_state), GFP_KERNEL); 671 671 if (state == NULL) goto error; 672 672 673 673 /* setup the state */
+1 -1
drivers/media/dvb/frontends/tda10021.c
··· 413 413 u8 id; 414 414 415 415 /* allocate memory for the internal state */ 416 - state = kmalloc(sizeof(struct tda10021_state), GFP_KERNEL); 416 + state = kzalloc(sizeof(struct tda10021_state), GFP_KERNEL); 417 417 if (state == NULL) goto error; 418 418 419 419 /* setup the state */
+1 -1
drivers/media/dvb/frontends/tda10048.c
··· 1095 1095 dprintk(1, "%s()\n", __func__); 1096 1096 1097 1097 /* allocate memory for the internal state */ 1098 - state = kmalloc(sizeof(struct tda10048_state), GFP_KERNEL); 1098 + state = kzalloc(sizeof(struct tda10048_state), GFP_KERNEL); 1099 1099 if (state == NULL) 1100 1100 goto error; 1101 1101
+2 -2
drivers/media/dvb/frontends/tda1004x.c
··· 1269 1269 int id; 1270 1270 1271 1271 /* allocate memory for the internal state */ 1272 - state = kmalloc(sizeof(struct tda1004x_state), GFP_KERNEL); 1272 + state = kzalloc(sizeof(struct tda1004x_state), GFP_KERNEL); 1273 1273 if (!state) { 1274 1274 printk(KERN_ERR "Can't alocate memory for tda10045 state\n"); 1275 1275 return NULL; ··· 1339 1339 int id; 1340 1340 1341 1341 /* allocate memory for the internal state */ 1342 - state = kmalloc(sizeof(struct tda1004x_state), GFP_KERNEL); 1342 + state = kzalloc(sizeof(struct tda1004x_state), GFP_KERNEL); 1343 1343 if (!state) { 1344 1344 printk(KERN_ERR "Can't alocate memory for tda10046 state\n"); 1345 1345 return NULL;
+1 -1
drivers/media/dvb/frontends/tda10086.c
··· 745 745 dprintk ("%s\n", __func__); 746 746 747 747 /* allocate memory for the internal state */ 748 - state = kmalloc(sizeof(struct tda10086_state), GFP_KERNEL); 748 + state = kzalloc(sizeof(struct tda10086_state), GFP_KERNEL); 749 749 if (!state) 750 750 return NULL; 751 751
+1 -1
drivers/media/dvb/frontends/tda8083.c
··· 417 417 struct tda8083_state* state = NULL; 418 418 419 419 /* allocate memory for the internal state */ 420 - state = kmalloc(sizeof(struct tda8083_state), GFP_KERNEL); 420 + state = kzalloc(sizeof(struct tda8083_state), GFP_KERNEL); 421 421 if (state == NULL) goto error; 422 422 423 423 /* setup the state */
+1 -1
drivers/media/dvb/frontends/ves1820.c
··· 374 374 struct ves1820_state* state = NULL; 375 375 376 376 /* allocate memory for the internal state */ 377 - state = kmalloc(sizeof(struct ves1820_state), GFP_KERNEL); 377 + state = kzalloc(sizeof(struct ves1820_state), GFP_KERNEL); 378 378 if (state == NULL) 379 379 goto error; 380 380
+1 -1
drivers/media/dvb/frontends/ves1x93.c
··· 456 456 u8 identity; 457 457 458 458 /* allocate memory for the internal state */ 459 - state = kmalloc(sizeof(struct ves1x93_state), GFP_KERNEL); 459 + state = kzalloc(sizeof(struct ves1x93_state), GFP_KERNEL); 460 460 if (state == NULL) goto error; 461 461 462 462 /* setup the state */
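The long run of frontend patches above is one sweep: `kmalloc()` becomes `kzalloc()`, which is kmalloc plus zeroing, so every field of the freshly allocated state starts as a well-defined zero before any probe step runs. A userspace analog (using `calloc()` in place of kzalloc; the struct and function names are illustrative):

```c
#include <stdlib.h>

/* Userspace analog of the kmalloc -> kzalloc conversions: with zeroed
 * state, an early "goto error" path can safely inspect or free pointer
 * members without ever seeing uninitialized garbage. */
struct demo_state {
    void *i2c;       /* stands in for the frontends' i2c handles */
    int   id;
};

static struct demo_state *demo_attach(void)
{
    struct demo_state *state = calloc(1, sizeof(*state));
    if (!state)
        return NULL;
    /* ... hardware probing would go here; on failure the caller can
     * free() and every field it checks is a defined zero ... */
    return state;
}
```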
+5 -7
drivers/media/dvb/frontends/zl10353.c
··· 98 98 static void zl10353_dump_regs(struct dvb_frontend *fe) 99 99 { 100 100 struct zl10353_state *state = fe->demodulator_priv; 101 - char buf[52], buf2[4]; 102 101 int ret; 103 102 u8 reg; 104 103 ··· 105 106 for (reg = 0; ; reg++) { 106 107 if (reg % 16 == 0) { 107 108 if (reg) 108 - printk(KERN_DEBUG "%s\n", buf); 109 - sprintf(buf, "%02x: ", reg); 109 + printk(KERN_CONT "\n"); 110 + printk(KERN_DEBUG "%02x:", reg); 110 111 } 111 112 ret = zl10353_read_register(state, reg); 112 113 if (ret >= 0) 113 - sprintf(buf2, "%02x ", (u8)ret); 114 + printk(KERN_CONT " %02x", (u8)ret); 114 115 else 115 - strcpy(buf2, "-- "); 116 - strcat(buf, buf2); 116 + printk(KERN_CONT " --"); 117 117 if (reg == 0xff) 118 118 break; 119 119 } 120 - printk(KERN_DEBUG "%s\n", buf); 120 + printk(KERN_CONT "\n"); 121 121 } 122 122 123 123 static void zl10353_calc_nominal_rate(struct dvb_frontend *fe,
+26 -18
drivers/media/dvb/siano/Kconfig
··· 2 2 # Siano Mobile Silicon Digital TV device configuration 3 3 # 4 4 5 - config DVB_SIANO_SMS1XXX 6 - tristate "Siano SMS1XXX USB dongle support" 5 + config SMS_SIANO_MDTV 6 + tristate "Siano SMS1xxx based MDTV receiver" 7 + depends on DVB_CORE && INPUT 8 + ---help--- 9 + Choose Y or M here if you have MDTV receiver with a Siano chipset. 10 + 11 + To compile this driver as a module, choose M here 12 + (The module will be called smsmdtv). 13 + 14 + Further documentation on this driver can be found on the WWW 15 + at http://www.siano-ms.com/ 16 + 17 + if SMS_SIANO_MDTV 18 + menu "Siano module components" 19 + 20 + # Hardware interfaces support 21 + 22 + config SMS_USB_DRV 23 + tristate "USB interface support" 7 24 depends on DVB_CORE && USB 8 25 ---help--- 9 - Choose Y here if you have a USB dongle with a SMS1XXX chipset. 26 + Choose if you would like to have Siano's support for USB interface 10 27 11 - To compile this driver as a module, choose M here: the 12 - module will be called sms1xxx. 13 - 14 - config DVB_SIANO_SMS1XXX_SMS_IDS 15 - bool "Enable support for Siano Mobile Silicon default USB IDs" 16 - depends on DVB_SIANO_SMS1XXX 17 - default y 28 + config SMS_SDIO_DRV 29 + tristate "SDIO interface support" 30 + depends on DVB_CORE && MMC 18 31 ---help--- 19 - Choose Y here if you have a USB dongle with a SMS1XXX chipset 20 - that uses Siano Mobile Silicon's default usb vid:pid. 21 - 22 - Choose N here if you would prefer to use Siano's external driver. 23 - 24 - Further documentation on this driver can be found on the WWW at 25 - <http://www.siano-ms.com/>. 26 - 32 + Choose if you would like to have Siano's support for SDIO interface 33 + endmenu 34 + endif # SMS_SIANO_MDTV
+5 -4
drivers/media/dvb/siano/Makefile
··· 1 - sms1xxx-objs := smscoreapi.o sms-cards.o smsendian.o smsir.o 2 1 3 - obj-$(CONFIG_DVB_SIANO_SMS1XXX) += sms1xxx.o 4 - obj-$(CONFIG_DVB_SIANO_SMS1XXX) += smsusb.o 5 - obj-$(CONFIG_DVB_SIANO_SMS1XXX) += smsdvb.o 2 + smsmdtv-objs := smscoreapi.o sms-cards.o smsendian.o smsir.o 3 + 4 + obj-$(CONFIG_SMS_SIANO_MDTV) += smsmdtv.o smsdvb.o 5 + obj-$(CONFIG_SMS_USB_DRV) += smsusb.o 6 + obj-$(CONFIG_SMS_SDIO_DRV) += smssdio.o 6 7 7 8 EXTRA_CFLAGS += -Idrivers/media/dvb/dvb-core 8 9
-102
drivers/media/dvb/siano/sms-cards.c
··· 116 116 117 117 int sms_board_event(struct smscore_device_t *coredev, 118 118 enum SMS_BOARD_EVENTS gevent) { 119 - int board_id = smscore_get_board_id(coredev); 120 - struct sms_board *board = sms_get_board(board_id); 121 119 struct smscore_gpio_config MyGpioConfig; 122 120 123 121 sms_gpio_assign_11xx_default_led_config(&MyGpioConfig); 124 122 125 123 switch (gevent) { 126 124 case BOARD_EVENT_POWER_INIT: /* including hotplug */ 127 - switch (board_id) { 128 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 129 - /* set I/O and turn off all LEDs */ 130 - smscore_gpio_configure(coredev, 131 - board->board_cfg.leds_power, 132 - &MyGpioConfig); 133 - smscore_gpio_set_level(coredev, 134 - board->board_cfg.leds_power, 0); 135 - smscore_gpio_configure(coredev, board->board_cfg.led0, 136 - &MyGpioConfig); 137 - smscore_gpio_set_level(coredev, 138 - board->board_cfg.led0, 0); 139 - smscore_gpio_configure(coredev, board->board_cfg.led1, 140 - &MyGpioConfig); 141 - smscore_gpio_set_level(coredev, 142 - board->board_cfg.led1, 0); 143 - break; 144 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD_R2: 145 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD: 146 - /* set I/O and turn off LNA */ 147 - smscore_gpio_configure(coredev, 148 - board->board_cfg.foreign_lna0_ctrl, 149 - &MyGpioConfig); 150 - smscore_gpio_set_level(coredev, 151 - board->board_cfg.foreign_lna0_ctrl, 152 - 0); 153 - break; 154 - } 155 125 break; /* BOARD_EVENT_BIND */ 156 126 157 127 case BOARD_EVENT_POWER_SUSPEND: 158 - switch (board_id) { 159 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 160 - smscore_gpio_set_level(coredev, 161 - board->board_cfg.leds_power, 0); 162 - smscore_gpio_set_level(coredev, 163 - board->board_cfg.led0, 0); 164 - smscore_gpio_set_level(coredev, 165 - board->board_cfg.led1, 0); 166 - break; 167 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD_R2: 168 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD: 169 - smscore_gpio_set_level(coredev, 170 - board->board_cfg.foreign_lna0_ctrl, 171 - 0); 172 - break; 173 
- } 174 128 break; /* BOARD_EVENT_POWER_SUSPEND */ 175 129 176 130 case BOARD_EVENT_POWER_RESUME: 177 - switch (board_id) { 178 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 179 - smscore_gpio_set_level(coredev, 180 - board->board_cfg.leds_power, 1); 181 - smscore_gpio_set_level(coredev, 182 - board->board_cfg.led0, 1); 183 - smscore_gpio_set_level(coredev, 184 - board->board_cfg.led1, 0); 185 - break; 186 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD_R2: 187 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD: 188 - smscore_gpio_set_level(coredev, 189 - board->board_cfg.foreign_lna0_ctrl, 190 - 1); 191 - break; 192 - } 193 131 break; /* BOARD_EVENT_POWER_RESUME */ 194 132 195 133 case BOARD_EVENT_BIND: 196 - switch (board_id) { 197 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 198 - smscore_gpio_set_level(coredev, 199 - board->board_cfg.leds_power, 1); 200 - smscore_gpio_set_level(coredev, 201 - board->board_cfg.led0, 1); 202 - smscore_gpio_set_level(coredev, 203 - board->board_cfg.led1, 0); 204 - break; 205 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD_R2: 206 - case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD: 207 - smscore_gpio_set_level(coredev, 208 - board->board_cfg.foreign_lna0_ctrl, 209 - 1); 210 - break; 211 - } 212 134 break; /* BOARD_EVENT_BIND */ 213 135 214 136 case BOARD_EVENT_SCAN_PROG: ··· 140 218 case BOARD_EVENT_EMERGENCY_WARNING_SIGNAL: 141 219 break; /* BOARD_EVENT_EMERGENCY_WARNING_SIGNAL */ 142 220 case BOARD_EVENT_FE_LOCK: 143 - switch (board_id) { 144 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 145 - smscore_gpio_set_level(coredev, 146 - board->board_cfg.led1, 1); 147 - break; 148 - } 149 221 break; /* BOARD_EVENT_FE_LOCK */ 150 222 case BOARD_EVENT_FE_UNLOCK: 151 - switch (board_id) { 152 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 153 - smscore_gpio_set_level(coredev, 154 - board->board_cfg.led1, 0); 155 - break; 156 - } 157 223 break; /* BOARD_EVENT_FE_UNLOCK */ 158 224 case BOARD_EVENT_DEMOD_LOCK: 159 225 break; /* BOARD_EVENT_DEMOD_LOCK */ ··· 158 248 case 
BOARD_EVENT_RECEPTION_LOST_0: 159 249 break; /* BOARD_EVENT_RECEPTION_LOST_0 */ 160 250 case BOARD_EVENT_MULTIPLEX_OK: 161 - switch (board_id) { 162 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 163 - smscore_gpio_set_level(coredev, 164 - board->board_cfg.led1, 1); 165 - break; 166 - } 167 251 break; /* BOARD_EVENT_MULTIPLEX_OK */ 168 252 case BOARD_EVENT_MULTIPLEX_ERRORS: 169 - switch (board_id) { 170 - case SMS1XXX_BOARD_HAUPPAUGE_WINDHAM: 171 - smscore_gpio_set_level(coredev, 172 - board->board_cfg.led1, 0); 173 - break; 174 - } 175 253 break; /* BOARD_EVENT_MULTIPLEX_ERRORS */ 176 254 177 255 default:
+1 -1
drivers/media/dvb/siano/smscoreapi.c
··· 816 816 817 817 sms_debug("set device mode to %d", mode); 818 818 if (coredev->device_flags & SMS_DEVICE_FAMILY2) { 819 - if (mode < DEVICE_MODE_DVBT || mode > DEVICE_MODE_RAW_TUNER) { 819 + if (mode < DEVICE_MODE_DVBT || mode >= DEVICE_MODE_RAW_TUNER) { 820 820 sms_err("invalid mode specified %d", mode); 821 821 return -EINVAL; 822 822 }
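The smscoreapi fix tightens a range check from `>` to `>=`, excluding the upper-bound value itself for this device family. A sketch of the inclusive-vs-exclusive distinction (enum values are illustrative; whether the bound is a valid mode elsewhere is a driver-specific question this sketch does not settle):

```c
/* Off-by-one in enum range validation: when the upper bound acts as a
 * sentinel rather than a usable mode, the check must reject it too,
 * hence "mode >= sentinel" instead of "mode > sentinel". */
enum demo_mode { MODE_DVBT, MODE_DVBH, MODE_RAW_TUNER };

static int mode_valid(int mode)
{
    if (mode < MODE_DVBT || mode >= MODE_RAW_TUNER)
        return 0;   /* the driver returns -EINVAL here */
    return 1;
}
```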
+44
drivers/media/dvb/siano/smsdvb.c
··· 325 325 0 : -ETIME; 326 326 } 327 327 328 + static inline int led_feedback(struct smsdvb_client_t *client) 329 + { 330 + if (client->fe_status & FE_HAS_LOCK) 331 + return sms_board_led_feedback(client->coredev, 332 + (client->sms_stat_dvb.ReceptionData.BER 333 + == 0) ? SMS_LED_HI : SMS_LED_LO); 334 + else 335 + return sms_board_led_feedback(client->coredev, SMS_LED_OFF); 336 + } 337 + 328 338 static int smsdvb_read_status(struct dvb_frontend *fe, fe_status_t *stat) 329 339 { 330 340 struct smsdvb_client_t *client; 331 341 client = container_of(fe, struct smsdvb_client_t, frontend); 332 342 333 343 *stat = client->fe_status; 344 + 345 + led_feedback(client); 334 346 335 347 return 0; 336 348 } ··· 353 341 client = container_of(fe, struct smsdvb_client_t, frontend); 354 342 355 343 *ber = client->sms_stat_dvb.ReceptionData.BER; 344 + 345 + led_feedback(client); 356 346 357 347 return 0; 358 348 } ··· 373 359 (client->sms_stat_dvb.ReceptionData.InBandPwr 374 360 + 95) * 3 / 2; 375 361 362 + led_feedback(client); 363 + 376 364 return 0; 377 365 } 378 366 ··· 385 369 386 370 *snr = client->sms_stat_dvb.ReceptionData.SNR; 387 371 372 + led_feedback(client); 373 + 388 374 return 0; 389 375 } 390 376 ··· 396 378 client = container_of(fe, struct smsdvb_client_t, frontend); 397 379 398 380 *ucblocks = client->sms_stat_dvb.ReceptionData.ErrorTSPackets; 381 + 382 + led_feedback(client); 399 383 400 384 return 0; 401 385 } ··· 424 404 u32 Data[3]; 425 405 } Msg; 426 406 407 + int ret; 408 + 427 409 client->fe_status = FE_HAS_SIGNAL; 428 410 client->event_fe_state = -1; 429 411 client->event_unc_state = -1; ··· 447 425 case BANDWIDTH_6_MHZ: Msg.Data[1] = BW_6_MHZ; break; 448 426 case BANDWIDTH_AUTO: return -EOPNOTSUPP; 449 427 default: return -EINVAL; 428 + } 429 + /* Disable LNA, if any. 
An error is returned if no LNA is present */ 430 + ret = sms_board_lna_control(client->coredev, 0); 431 + if (ret == 0) { 432 + fe_status_t status; 433 + 434 + /* tune with LNA off at first */ 435 + ret = smsdvb_sendrequest_and_wait(client, &Msg, sizeof(Msg), 436 + &client->tune_done); 437 + 438 + smsdvb_read_status(fe, &status); 439 + 440 + if (status & FE_HAS_LOCK) 441 + return ret; 442 + 443 + /* previous tune didnt lock - enable LNA and tune again */ 444 + sms_board_lna_control(client->coredev, 1); 450 445 } 451 446 452 447 return smsdvb_sendrequest_and_wait(client, &Msg, sizeof(Msg), ··· 490 451 struct smsdvb_client_t *client = 491 452 container_of(fe, struct smsdvb_client_t, frontend); 492 453 454 + sms_board_power(client->coredev, 1); 455 + 493 456 sms_board_dvb3_event(client, DVB3_EVENT_INIT); 494 457 return 0; 495 458 } ··· 500 459 { 501 460 struct smsdvb_client_t *client = 502 461 container_of(fe, struct smsdvb_client_t, frontend); 462 + 463 + sms_board_led_feedback(client->coredev, SMS_LED_OFF); 464 + sms_board_power(client->coredev, 0); 503 465 504 466 sms_board_dvb3_event(client, DVB3_EVENT_SLEEP); 505 467
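The smsdvb set_frontend change adds a two-pass tune: try with the LNA off, check for lock, and only re-tune with the LNA on if the first pass failed. A sketch of that fallback flow (`tune_once()` is a hypothetical stand-in for the real request/wait plus read_status sequence):

```c
/* Tune-with-fallback: prefer the LNA-off path, retry with the LNA
 * enabled only when the first attempt does not lock.  The weak_signal
 * flag simulates a signal that needs the LNA to lock. */
static int tune_once(int lna_on, int weak_signal)
{
    return (!weak_signal || lna_on) ? 1 : 0;   /* 1 == FE_HAS_LOCK */
}

static int tune_with_lna_fallback(int weak_signal, int *lna_used)
{
    *lna_used = 0;
    if (tune_once(0, weak_signal))
        return 1;                  /* locked with LNA off */
    *lna_used = 1;                 /* first pass didn't lock */
    return tune_once(1, weak_signal);
}
```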
+30 -24
drivers/media/dvb/siano/smssdio.c
··· 46 46 47 47 #define SMSSDIO_DATA 0x00 48 48 #define SMSSDIO_INT 0x04 49 + #define SMSSDIO_BLOCK_SIZE 128 49 50 50 51 static const struct sdio_device_id smssdio_ids[] = { 51 52 {SDIO_DEVICE(SDIO_VENDOR_ID_SIANO, SDIO_DEVICE_ID_SIANO_STELLAR), ··· 86 85 sdio_claim_host(smsdev->func); 87 86 88 87 while (size >= smsdev->func->cur_blksize) { 89 - ret = sdio_write_blocks(smsdev->func, SMSSDIO_DATA, buffer, 1); 88 + ret = sdio_memcpy_toio(smsdev->func, SMSSDIO_DATA, 89 + buffer, smsdev->func->cur_blksize); 90 90 if (ret) 91 91 goto out; 92 92 ··· 96 94 } 97 95 98 96 if (size) { 99 - ret = sdio_write_bytes(smsdev->func, SMSSDIO_DATA, 100 - buffer, size); 97 + ret = sdio_memcpy_toio(smsdev->func, SMSSDIO_DATA, 98 + buffer, size); 101 99 } 102 100 103 101 out: ··· 127 125 */ 128 126 isr = sdio_readb(func, SMSSDIO_INT, &ret); 129 127 if (ret) { 130 - dev_err(&smsdev->func->dev, 131 - "Unable to read interrupt register!\n"); 128 + sms_err("Unable to read interrupt register!\n"); 132 129 return; 133 130 } 134 131 135 132 if (smsdev->split_cb == NULL) { 136 133 cb = smscore_getbuffer(smsdev->coredev); 137 134 if (!cb) { 138 - dev_err(&smsdev->func->dev, 139 - "Unable to allocate data buffer!\n"); 135 + sms_err("Unable to allocate data buffer!\n"); 140 136 return; 141 137 } 142 138 143 - ret = sdio_read_blocks(smsdev->func, cb->p, SMSSDIO_DATA, 1); 139 + ret = sdio_memcpy_fromio(smsdev->func, 140 + cb->p, 141 + SMSSDIO_DATA, 142 + SMSSDIO_BLOCK_SIZE); 144 143 if (ret) { 145 - dev_err(&smsdev->func->dev, 146 - "Error %d reading initial block!\n", ret); 144 + sms_err("Error %d reading initial block!\n", ret); 147 145 return; 148 146 } 149 147 ··· 154 152 return; 155 153 } 156 154 157 - size = hdr->msgLength - smsdev->func->cur_blksize; 155 + if (hdr->msgLength > smsdev->func->cur_blksize) 156 + size = hdr->msgLength - smsdev->func->cur_blksize; 157 + else 158 + size = 0; 158 159 } else { 159 160 cb = smsdev->split_cb; 160 161 hdr = cb->p; ··· 167 162 smsdev->split_cb = NULL; 
168 163 } 169 164 170 - if (hdr->msgLength > smsdev->func->cur_blksize) { 165 + if (size) { 171 166 void *buffer; 172 167 173 - size = ALIGN(size, 128); 174 - buffer = cb->p + hdr->msgLength; 168 + buffer = cb->p + (hdr->msgLength - size); 169 + size = ALIGN(size, SMSSDIO_BLOCK_SIZE); 175 170 176 - BUG_ON(smsdev->func->cur_blksize != 128); 171 + BUG_ON(smsdev->func->cur_blksize != SMSSDIO_BLOCK_SIZE); 177 172 178 173 /* 179 174 * First attempt to transfer all of it in one go... 180 175 */ 181 - ret = sdio_read_blocks(smsdev->func, buffer, 182 - SMSSDIO_DATA, size / 128); 176 + ret = sdio_memcpy_fromio(smsdev->func, 177 + buffer, 178 + SMSSDIO_DATA, 179 + size); 183 180 if (ret && ret != -EINVAL) { 184 181 smscore_putbuffer(smsdev->coredev, cb); 185 - dev_err(&smsdev->func->dev, 186 - "Error %d reading data from card!\n", ret); 182 + sms_err("Error %d reading data from card!\n", ret); 187 183 return; 188 184 } 189 185 ··· 197 191 */ 198 192 if (ret == -EINVAL) { 199 193 while (size) { 200 - ret = sdio_read_blocks(smsdev->func, 201 - buffer, SMSSDIO_DATA, 1); 194 + ret = sdio_memcpy_fromio(smsdev->func, 195 + buffer, SMSSDIO_DATA, 196 + smsdev->func->cur_blksize); 202 197 if (ret) { 203 198 smscore_putbuffer(smsdev->coredev, cb); 204 - dev_err(&smsdev->func->dev, 205 - "Error %d reading " 199 + sms_err("Error %d reading " 206 200 "data from card!\n", ret); 207 201 return; 208 202 } ··· 275 269 if (ret) 276 270 goto release; 277 271 278 - ret = sdio_set_block_size(func, 128); 272 + ret = sdio_set_block_size(func, SMSSDIO_BLOCK_SIZE); 279 273 if (ret) 280 274 goto disable; 281 275
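The smssdio read path above fetches one block, then rounds the remaining message length up to whole `SMSSDIO_BLOCK_SIZE` units with `ALIGN()` before reading the rest. A sketch of that arithmetic for power-of-two block sizes (`DEMO_ALIGN` mirrors the kernel's ALIGN macro; `blocks_to_read()` is illustrative):

```c
/* Round a byte count up to the next multiple of a power-of-two block
 * size, so the tail of a message can be fetched in whole SDIO blocks. */
#define DEMO_ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

static unsigned int blocks_to_read(unsigned int msg_len,
                                   unsigned int blksize)
{
    /* First block already read; the patch also guards against a
     * message shorter than one block, as done here. */
    unsigned int rest = msg_len > blksize ? msg_len - blksize : 0;
    return DEMO_ALIGN(rest, blksize) / blksize;
}
```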
+2
drivers/media/video/Kconfig
··· 920 920 config USB_ZR364XX 921 921 tristate "USB ZR364XX Camera support" 922 922 depends on VIDEO_V4L2 923 + select VIDEOBUF_GEN 924 + select VIDEOBUF_VMALLOC 923 925 ---help--- 924 926 Say Y here if you want to connect this type of camera to your 925 927 computer's USB port.
+1 -1
drivers/media/video/bw-qcam.c
··· 992 992 993 993 if (parport[0] && strncmp(parport[0], "auto", 4) != 0) { 994 994 /* user gave parport parameters */ 995 - for(n=0; parport[n] && n<MAX_CAMS; n++){ 995 + for (n = 0; n < MAX_CAMS && parport[n]; n++) { 996 996 char *ep; 997 997 unsigned long r; 998 998 r = simple_strtoul(parport[n], &ep, 0);
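The bw-qcam fix swaps the loop condition to `n < MAX_CAMS && parport[n]`: `&&` short-circuits, so the index bound must come first or `parport[n]` can be read one element past the end of a full array. A sketch of the corrected ordering (`count_args()` and `MAX_ITEMS` are illustrative):

```c
#include <stddef.h>

/* Bounds check first, array access second: with the operands in this
 * order the access is never evaluated for an out-of-range index. */
#define MAX_ITEMS 4

static int count_args(char *args[MAX_ITEMS])
{
    int n;
    for (n = 0; n < MAX_ITEMS && args[n]; n++)
        ;
    return n;
}
```

With the original operand order, a fully populated array would have dereferenced `args[MAX_ITEMS]` before the bound was ever checked.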
+2 -1
drivers/media/video/cx18/cx18-controls.c
··· 20 20 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 21 21 * 02111-1307 USA 22 22 */ 23 + #include <linux/kernel.h> 23 24 24 25 #include "cx18-driver.h" 25 26 #include "cx18-cards.h" ··· 318 317 idx = p.audio_properties & 0x03; 319 318 /* The audio clock of the digitizer must match the codec sample 320 319 rate otherwise you get some very strange effects. */ 321 - if (idx < sizeof(freqs)) 320 + if (idx < ARRAY_SIZE(freqs)) 322 321 cx18_call_all(cx, audio, s_clock_freq, freqs[idx]); 323 322 return err; 324 323 }
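The cx18-controls fix replaces `idx < sizeof(freqs)` with `idx < ARRAY_SIZE(freqs)`: `sizeof` yields a byte count, so for a table of 32-bit entries the old check let indices well past the last element through. A sketch of the difference (`DEMO_ARRAY_SIZE` mirrors the kernel's ARRAY_SIZE macro; the table contents are illustrative):

```c
/* sizeof(array) counts bytes; dividing by the element size gives the
 * element count, which is what an index bound must compare against. */
#define DEMO_ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const unsigned int freqs[] = { 44100, 48000, 32000 };

static int idx_ok(unsigned int idx)
{
    return idx < DEMO_ARRAY_SIZE(freqs);
}
```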
+2
drivers/media/video/cx23885/cx23885-417.c
··· 1715 1715 .fops = &mpeg_fops, 1716 1716 .ioctl_ops = &mpeg_ioctl_ops, 1717 1717 .minor = -1, 1718 + .tvnorms = CX23885_NORMS, 1719 + .current_norm = V4L2_STD_NTSC_M, 1718 1720 }; 1719 1721 1720 1722 void cx23885_417_unregister(struct cx23885_dev *dev)
+8
drivers/media/video/cx88/cx88-cards.c
··· 3003 3003 case CX88_BOARD_DVICO_FUSIONHDTV_5_PCI_NANO: 3004 3004 ctl->demod = XC3028_FE_OREN538; 3005 3005 break; 3006 + case CX88_BOARD_GENIATECH_X8000_MT: 3007 + /* FIXME: For this board, the xc3028 never recovers after being 3008 + powered down (the reset GPIO probably is not set properly). 3009 + We don't have access to the hardware so we cannot determine 3010 + which GPIO is used for xc3028, so just disable power xc3028 3011 + power management for now */ 3012 + ctl->disable_power_mgmt = 1; 3013 + break; 3006 3014 case CX88_BOARD_WINFAST_TV2000_XP_GLOBAL: 3007 3015 case CX88_BOARD_PROLINK_PV_GLOBAL_XTREME: 3008 3016 case CX88_BOARD_PROLINK_PV_8000GT:
+1
drivers/media/video/cx88/cx88-dvb.c
··· 501 501 static struct zl10353_config cx88_geniatech_x8000_mt = { 502 502 .demod_address = (0x1e >> 1), 503 503 .no_tuner = 1, 504 + .disable_i2c_gate_ctrl = 1, 504 505 }; 505 506 506 507 static struct s5h1411_config dvico_fusionhdtv7_config = {
+4
drivers/media/video/cx88/cx88-mpeg.c
··· 116 116 udelay(100); 117 117 break; 118 118 case CX88_BOARD_HAUPPAUGE_HVR1300: 119 + /* Enable MPEG parallel IO and video signal pins */ 120 + cx_write(MO_PINMUX_IO, 0x88); 121 + cx_write(TS_SOP_STAT, 0); 122 + cx_write(TS_VALERR_CNTRL, 0); 119 123 break; 120 124 case CX88_BOARD_PINNACLE_PCTV_HD_800i: 121 125 /* Enable MPEG parallel IO and video signal pins */
+140 -79
drivers/media/video/em28xx/em28xx-cards.c
··· 218 218 struct em28xx_board em28xx_boards[] = { 219 219 [EM2750_BOARD_UNKNOWN] = { 220 220 .name = "EM2710/EM2750/EM2751 webcam grabber", 221 - .xclk = EM28XX_XCLK_FREQUENCY_48MHZ, 221 + .xclk = EM28XX_XCLK_FREQUENCY_20MHZ, 222 222 .tuner_type = TUNER_ABSENT, 223 223 .is_webcam = 1, 224 224 .input = { { ··· 622 622 }, 623 623 [EM2861_BOARD_PLEXTOR_PX_TV100U] = { 624 624 .name = "Plextor ConvertX PX-TV100U", 625 - .valid = EM28XX_BOARD_NOT_VALIDATED, 626 625 .tuner_type = TUNER_TNF_5335MF, 626 + .xclk = EM28XX_XCLK_I2S_MSB_TIMING | 627 + EM28XX_XCLK_FREQUENCY_12MHZ, 627 628 .tda9887_conf = TDA9887_PRESENT, 628 629 .decoder = EM28XX_TVP5150, 630 + .has_msp34xx = 1, 629 631 .input = { { 630 632 .type = EM28XX_VMUX_TELEVISION, 631 633 .vmux = TVP5150_COMPOSITE0, 632 634 .amux = EM28XX_AMUX_LINE_IN, 635 + .gpio = pinnacle_hybrid_pro_analog, 633 636 }, { 634 637 .type = EM28XX_VMUX_COMPOSITE1, 635 638 .vmux = TVP5150_COMPOSITE1, 636 639 .amux = EM28XX_AMUX_LINE_IN, 640 + .gpio = pinnacle_hybrid_pro_analog, 637 641 }, { 638 642 .type = EM28XX_VMUX_SVIDEO, 639 643 .vmux = TVP5150_SVIDEO, 640 644 .amux = EM28XX_AMUX_LINE_IN, 645 + .gpio = pinnacle_hybrid_pro_analog, 641 646 } }, 642 647 }, 643 648 ··· 1549 1544 .driver_info = EM2750_BOARD_UNKNOWN }, 1550 1545 { USB_DEVICE(0xeb1a, 0x2800), 1551 1546 .driver_info = EM2800_BOARD_UNKNOWN }, 1547 + { USB_DEVICE(0xeb1a, 0x2710), 1548 + .driver_info = EM2820_BOARD_UNKNOWN }, 1552 1549 { USB_DEVICE(0xeb1a, 0x2820), 1553 1550 .driver_info = EM2820_BOARD_UNKNOWN }, 1554 1551 { USB_DEVICE(0xeb1a, 0x2821), ··· 1730 1723 EM28XX_I2C_FREQ_100_KHZ; 1731 1724 } 1732 1725 1726 + 1727 + /* FIXME: Should be replaced by a proper mt9m111 driver */ 1728 + static int em28xx_initialize_mt9m111(struct em28xx *dev) 1729 + { 1730 + int i; 1731 + unsigned char regs[][3] = { 1732 + { 0x0d, 0x00, 0x01, }, /* reset and use defaults */ 1733 + { 0x0d, 0x00, 0x00, }, 1734 + { 0x0a, 0x00, 0x21, }, 1735 + { 0x21, 0x04, 0x00, }, /* full readout speed, no 
row/col skipping */ 1736 + }; 1737 + 1738 + for (i = 0; i < ARRAY_SIZE(regs); i++) 1739 + i2c_master_send(&dev->i2c_client, &regs[i][0], 3); 1740 + 1741 + return 0; 1742 + } 1743 + 1744 + 1733 1745 /* FIXME: Should be replaced by a proper mt9m001 driver */ 1734 1746 static int em28xx_initialize_mt9m001(struct em28xx *dev) 1735 1747 { ··· 1777 1751 1778 1752 /* HINT method: webcam I2C chips 1779 1753 * 1780 - * This method work for webcams with Micron sensors 1754 + * This method works for webcams with Micron sensors 1781 1755 */ 1782 1756 static int em28xx_hint_sensor(struct em28xx *dev) 1783 1757 { ··· 1787 1761 __be16 version_be; 1788 1762 u16 version; 1789 1763 1764 + /* Micron sensor detection */ 1790 1765 dev->i2c_client.addr = 0xba >> 1; 1791 1766 cmd = 0; 1792 1767 i2c_master_send(&dev->i2c_client, &cmd, 1); ··· 1796 1769 return -EINVAL; 1797 1770 1798 1771 version = be16_to_cpu(version_be); 1799 - 1800 1772 switch (version) { 1801 - case 0x8243: /* mt9v011 640x480 1.3 Mpix sensor */ 1773 + case 0x8232: /* mt9v011 640x480 1.3 Mpix sensor */ 1774 + case 0x8243: /* mt9v011 rev B 640x480 1.3 Mpix sensor */ 1802 1775 dev->model = EM2820_BOARD_SILVERCREST_WEBCAM; 1776 + em28xx_set_model(dev); 1777 + 1803 1778 sensor_name = "mt9v011"; 1804 1779 dev->em28xx_sensor = EM28XX_MT9V011; 1805 1780 dev->sensor_xres = 640; 1806 1781 dev->sensor_yres = 480; 1807 - dev->sensor_xtal = 6300000; 1782 + /* 1783 + * FIXME: mt9v011 uses I2S speed as xtal clk - at least with 1784 + * the Silvercrest cam I have here for testing - for higher 1785 + * resolutions, a high clock cause horizontal artifacts, so we 1786 + * need to use a lower xclk frequency. 1787 + * Yet, it would be possible to adjust xclk depending on the 1788 + * desired resolution, since this affects directly the 1789 + * frame rate. 
1790 + */ 1791 + dev->board.xclk = EM28XX_XCLK_FREQUENCY_4_3MHZ; 1792 + dev->sensor_xtal = 4300000; 1808 1793 1809 1794 /* probably means GRGB 16 bit bayer */ 1810 1795 dev->vinmode = 0x0d; 1811 1796 dev->vinctl = 0x00; 1812 1797 1813 1798 break; 1799 + 1800 + case 0x143a: /* MT9M111 as found in the ECS G200 */ 1801 + dev->model = EM2750_BOARD_UNKNOWN; 1802 + em28xx_set_model(dev); 1803 + 1804 + sensor_name = "mt9m111"; 1805 + dev->board.xclk = EM28XX_XCLK_FREQUENCY_48MHZ; 1806 + dev->em28xx_sensor = EM28XX_MT9M111; 1807 + em28xx_initialize_mt9m111(dev); 1808 + dev->sensor_xres = 640; 1809 + dev->sensor_yres = 512; 1810 + 1811 + dev->vinmode = 0x0a; 1812 + dev->vinctl = 0x00; 1813 + 1814 + break; 1815 + 1814 1816 case 0x8431: 1815 1817 dev->model = EM2750_BOARD_UNKNOWN; 1818 + em28xx_set_model(dev); 1819 + 1816 1820 sensor_name = "mt9m001"; 1817 1821 dev->em28xx_sensor = EM28XX_MT9M001; 1818 1822 em28xx_initialize_mt9m001(dev); ··· 1856 1798 1857 1799 break; 1858 1800 default: 1859 - printk("Unknown Micron Sensor 0x%04x\n", be16_to_cpu(version)); 1801 + printk("Unknown Micron Sensor 0x%04x\n", version); 1860 1802 return -EINVAL; 1861 1803 } 1804 + 1805 + /* Setup webcam defaults */ 1806 + em28xx_pre_card_setup(dev); 1862 1807 1863 1808 em28xx_errdev("Sensor is %s, using model %s entry.\n", 1864 1809 sensor_name, em28xx_boards[dev->model].name); ··· 1874 1813 */ 1875 1814 void em28xx_pre_card_setup(struct em28xx *dev) 1876 1815 { 1877 - int rc; 1878 - 1879 - em28xx_set_model(dev); 1880 - 1881 - em28xx_info("Identified as %s (card=%d)\n", 1882 - dev->board.name, dev->model); 1883 - 1884 - /* Set the default GPO/GPIO for legacy devices */ 1885 - dev->reg_gpo_num = EM2880_R04_GPO; 1886 - dev->reg_gpio_num = EM28XX_R08_GPIO; 1887 - 1888 - dev->wait_after_write = 5; 1889 - 1890 - /* Based on the Chip ID, set the device configuration */ 1891 - rc = em28xx_read_reg(dev, EM28XX_R0A_CHIPID); 1892 - if (rc > 0) { 1893 - dev->chip_id = rc; 1894 - 1895 - switch (dev->chip_id) { 
1896 - case CHIP_ID_EM2750: 1897 - em28xx_info("chip ID is em2750\n"); 1898 - break; 1899 - case CHIP_ID_EM2820: 1900 - em28xx_info("chip ID is em2710 or em2820\n"); 1901 - break; 1902 - case CHIP_ID_EM2840: 1903 - em28xx_info("chip ID is em2840\n"); 1904 - break; 1905 - case CHIP_ID_EM2860: 1906 - em28xx_info("chip ID is em2860\n"); 1907 - break; 1908 - case CHIP_ID_EM2870: 1909 - em28xx_info("chip ID is em2870\n"); 1910 - dev->wait_after_write = 0; 1911 - break; 1912 - case CHIP_ID_EM2874: 1913 - em28xx_info("chip ID is em2874\n"); 1914 - dev->reg_gpio_num = EM2874_R80_GPIO; 1915 - dev->wait_after_write = 0; 1916 - break; 1917 - case CHIP_ID_EM2883: 1918 - em28xx_info("chip ID is em2882/em2883\n"); 1919 - dev->wait_after_write = 0; 1920 - break; 1921 - default: 1922 - em28xx_info("em28xx chip ID = %d\n", dev->chip_id); 1923 - } 1924 - } 1925 - 1926 - /* Prepopulate cached GPO register content */ 1927 - rc = em28xx_read_reg(dev, dev->reg_gpo_num); 1928 - if (rc >= 0) 1929 - dev->reg_gpo = rc; 1930 - 1931 1816 /* Set the initial XCLK and I2C clock values based on the board 1932 1817 definition */ 1933 1818 em28xx_write_reg(dev, EM28XX_R0F_XCLK, dev->board.xclk & 0x7f); ··· 1883 1876 /* request some modules */ 1884 1877 switch (dev->model) { 1885 1878 case EM2861_BOARD_PLEXTOR_PX_TV100U: 1886 - /* FIXME guess */ 1887 - /* Turn on analog audio output */ 1888 - em28xx_write_reg(dev, EM28XX_R08_GPIO, 0xfd); 1879 + /* Sets the msp34xx I2S speed */ 1880 + dev->i2s_speed = 2048000; 1889 1881 break; 1890 1882 case EM2861_BOARD_KWORLD_PVRTV_300U: 1891 1883 case EM2880_BOARD_KWORLD_DVB_305U: ··· 2222 2216 2223 2217 void em28xx_card_setup(struct em28xx *dev) 2224 2218 { 2225 - em28xx_set_model(dev); 2219 + /* 2220 + * If the device can be a webcam, seek for a sensor. 2221 + * If sensor is not found, then it isn't a webcam. 
2222 + */ 2223 + if (dev->board.is_webcam) { 2224 + if (em28xx_hint_sensor(dev) < 0) 2225 + dev->board.is_webcam = 0; 2226 + else 2227 + dev->progressive = 1; 2228 + } else 2229 + em28xx_set_model(dev); 2230 + 2231 + em28xx_info("Identified as %s (card=%d)\n", 2232 + dev->board.name, dev->model); 2226 2233 2227 2234 dev->tuner_type = em28xx_boards[dev->model].tuner_type; 2228 2235 if (em28xx_boards[dev->model].tuner_addr) ··· 2309 2290 em28xx_gpio_set(dev, dev->board.tuner_gpio); 2310 2291 em28xx_set_mode(dev, EM28XX_ANALOG_MODE); 2311 2292 break; 2312 - case EM2820_BOARD_SILVERCREST_WEBCAM: 2313 - /* FIXME: need to document the registers bellow */ 2314 - em28xx_write_reg(dev, 0x0d, 0x42); 2315 - em28xx_write_reg(dev, 0x13, 0x08); 2316 2293 } 2317 2294 2318 2295 if (dev->board.has_snapshot_button) ··· 2382 2367 } 2383 2368 2384 2369 em28xx_tuner_setup(dev); 2385 - em28xx_ir_init(dev); 2370 + 2371 + if(!disable_ir) 2372 + em28xx_ir_init(dev); 2386 2373 } 2387 2374 2388 2375 ··· 2450 2433 int minor) 2451 2434 { 2452 2435 struct em28xx *dev = *devhandle; 2453 - int retval = -ENOMEM; 2436 + int retval; 2454 2437 int errCode; 2455 2438 2456 2439 dev->udev = udev; ··· 2466 2449 dev->em28xx_write_regs_req = em28xx_write_regs_req; 2467 2450 dev->em28xx_read_reg_req = em28xx_read_reg_req; 2468 2451 dev->board.is_em2800 = em28xx_boards[dev->model].is_em2800; 2452 + 2453 + em28xx_set_model(dev); 2454 + 2455 + /* Set the default GPO/GPIO for legacy devices */ 2456 + dev->reg_gpo_num = EM2880_R04_GPO; 2457 + dev->reg_gpio_num = EM28XX_R08_GPIO; 2458 + 2459 + dev->wait_after_write = 5; 2460 + 2461 + /* Based on the Chip ID, set the device configuration */ 2462 + retval = em28xx_read_reg(dev, EM28XX_R0A_CHIPID); 2463 + if (retval > 0) { 2464 + dev->chip_id = retval; 2465 + 2466 + switch (dev->chip_id) { 2467 + case CHIP_ID_EM2710: 2468 + em28xx_info("chip ID is em2710\n"); 2469 + break; 2470 + case CHIP_ID_EM2750: 2471 + em28xx_info("chip ID is em2750\n"); 2472 + break; 2473 + 
case CHIP_ID_EM2820: 2474 + em28xx_info("chip ID is em2820 (or em2710)\n"); 2475 + break; 2476 + case CHIP_ID_EM2840: 2477 + em28xx_info("chip ID is em2840\n"); 2478 + break; 2479 + case CHIP_ID_EM2860: 2480 + em28xx_info("chip ID is em2860\n"); 2481 + break; 2482 + case CHIP_ID_EM2870: 2483 + em28xx_info("chip ID is em2870\n"); 2484 + dev->wait_after_write = 0; 2485 + break; 2486 + case CHIP_ID_EM2874: 2487 + em28xx_info("chip ID is em2874\n"); 2488 + dev->reg_gpio_num = EM2874_R80_GPIO; 2489 + dev->wait_after_write = 0; 2490 + break; 2491 + case CHIP_ID_EM2883: 2492 + em28xx_info("chip ID is em2882/em2883\n"); 2493 + dev->wait_after_write = 0; 2494 + break; 2495 + default: 2496 + em28xx_info("em28xx chip ID = %d\n", dev->chip_id); 2497 + } 2498 + } 2499 + 2500 + /* Prepopulate cached GPO register content */ 2501 + retval = em28xx_read_reg(dev, dev->reg_gpo_num); 2502 + if (retval >= 0) 2503 + dev->reg_gpo = retval; 2469 2504 2470 2505 em28xx_pre_card_setup(dev); 2471 2506 ··· 2552 2483 */ 2553 2484 dev->vinmode = 0x10; 2554 2485 dev->vinctl = 0x11; 2555 - 2556 - /* 2557 - * If the device can be a webcam, seek for a sensor. 2558 - * If sensor is not found, then it isn't a webcam. 2559 - */ 2560 - if (dev->board.is_webcam) 2561 - if (em28xx_hint_sensor(dev) < 0) 2562 - dev->board.is_webcam = 0; 2563 2486 2564 2487 /* Do board specific init and eeprom reading */ 2565 2488 em28xx_card_setup(dev);
+7 -1
drivers/media/video/em28xx/em28xx-core.c
··· 632 632 return rc; 633 633 } 634 634 635 + if (dev->board.is_webcam) 636 + rc = em28xx_write_reg(dev, 0x13, 0x0c); 637 + 635 638 /* enable video capture */ 636 639 rc = em28xx_write_reg(dev, 0x48, 0x00); 637 640 ··· 723 720 { 724 721 int width, height; 725 722 width = norm_maxw(dev); 726 - height = norm_maxh(dev) >> 1; 723 + height = norm_maxh(dev); 724 + 725 + if (!dev->progressive) 726 + height >>= norm_maxh(dev); 727 727 728 728 em28xx_set_outfmt(dev); 729 729
+1 -1
drivers/media/video/em28xx/em28xx-dvb.c
··· 478 478 } 479 479 break; 480 480 case EM2880_BOARD_KWORLD_DVB_310U: 481 - case EM2880_BOARD_EMPIRE_DUAL_TV: 482 481 dvb->frontend = dvb_attach(zl10353_attach, 483 482 &em28xx_zl10353_with_xc3028, 484 483 &dev->i2c_adap); ··· 487 488 } 488 489 break; 489 490 case EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900: 491 + case EM2880_BOARD_EMPIRE_DUAL_TV: 490 492 dvb->frontend = dvb_attach(zl10353_attach, 491 493 &em28xx_zl10353_xc3028_no_i2c_gate, 492 494 &dev->i2c_adap);
+2 -1
drivers/media/video/em28xx/em28xx-reg.h
··· 176 176 177 177 /* FIXME: Need to be populated with the other chip ID's */ 178 178 enum em28xx_chip_id { 179 - CHIP_ID_EM2820 = 18, /* Also used by em2710 */ 179 + CHIP_ID_EM2710 = 17, 180 + CHIP_ID_EM2820 = 18, /* Also used by some em2710 */ 180 181 CHIP_ID_EM2840 = 20, 181 182 CHIP_ID_EM2750 = 33, 182 183 CHIP_ID_EM2860 = 34,
+68 -9
drivers/media/video/em28xx/em28xx-video.c
··· 194 194 startread = p; 195 195 remain = len; 196 196 197 - /* Interlaces frame */ 198 - if (buf->top_field) 197 + if (dev->progressive) 199 198 fieldstart = outp; 200 - else 201 - fieldstart = outp + bytesperline; 199 + else { 200 + /* Interlaces two half frames */ 201 + if (buf->top_field) 202 + fieldstart = outp; 203 + else 204 + fieldstart = outp + bytesperline; 205 + } 202 206 203 207 linesdone = dma_q->pos / bytesperline; 204 208 currlinedone = dma_q->pos % bytesperline; 205 - offset = linesdone * bytesperline * 2 + currlinedone; 209 + 210 + if (dev->progressive) 211 + offset = linesdone * bytesperline + currlinedone; 212 + else 213 + offset = linesdone * bytesperline * 2 + currlinedone; 214 + 206 215 startwrite = fieldstart + offset; 207 216 lencopy = bytesperline - currlinedone; 208 217 lencopy = lencopy > remain ? remain : lencopy; ··· 385 376 em28xx_isocdbg("Video frame %d, length=%i, %s\n", p[2], 386 377 len, (p[2] & 1) ? "odd" : "even"); 387 378 388 - if (!(p[2] & 1)) { 379 + if (dev->progressive || !(p[2] & 1)) { 389 380 if (buf != NULL) 390 381 buffer_filled(dev, dma_q, buf); 391 382 get_next_buf(dma_q, &buf); ··· 698 689 f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M; 699 690 700 691 /* FIXME: TOP? NONE? BOTTOM? ALTENATE? */ 701 - f->fmt.pix.field = dev->interlaced ? 692 + if (dev->progressive) 693 + f->fmt.pix.field = V4L2_FIELD_NONE; 694 + else 695 + f->fmt.pix.field = dev->interlaced ? 702 696 V4L2_FIELD_INTERLACED : V4L2_FIELD_TOP; 703 697 704 698 mutex_unlock(&dev->lock); ··· 765 753 f->fmt.pix.bytesperline = (dev->width * fmt->depth + 7) >> 3; 766 754 f->fmt.pix.sizeimage = f->fmt.pix.bytesperline * height; 767 755 f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M; 768 - f->fmt.pix.field = V4L2_FIELD_INTERLACED; 756 + if (dev->progressive) 757 + f->fmt.pix.field = V4L2_FIELD_NONE; 758 + else 759 + f->fmt.pix.field = dev->interlaced ? 
760 + V4L2_FIELD_INTERLACED : V4L2_FIELD_TOP; 769 761 770 762 return 0; 771 763 } ··· 860 844 861 845 mutex_unlock(&dev->lock); 862 846 return 0; 847 + } 848 + 849 + static int vidioc_g_parm(struct file *file, void *priv, 850 + struct v4l2_streamparm *p) 851 + { 852 + struct em28xx_fh *fh = priv; 853 + struct em28xx *dev = fh->dev; 854 + int rc = 0; 855 + 856 + if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 857 + return -EINVAL; 858 + 859 + if (dev->board.is_webcam) 860 + rc = v4l2_device_call_until_err(&dev->v4l2_dev, 0, 861 + video, g_parm, p); 862 + else 863 + v4l2_video_std_frame_period(dev->norm, 864 + &p->parm.capture.timeperframe); 865 + 866 + return rc; 867 + } 868 + 869 + static int vidioc_s_parm(struct file *file, void *priv, 870 + struct v4l2_streamparm *p) 871 + { 872 + struct em28xx_fh *fh = priv; 873 + struct em28xx *dev = fh->dev; 874 + 875 + if (!dev->board.is_webcam) 876 + return -EINVAL; 877 + 878 + if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 879 + return -EINVAL; 880 + 881 + return v4l2_device_call_until_err(&dev->v4l2_dev, 0, video, s_parm, p); 863 882 } 864 883 865 884 static const char *iname[] = { ··· 1675 1624 struct em28xx *dev; 1676 1625 enum v4l2_buf_type fh_type; 1677 1626 struct em28xx_fh *fh; 1627 + enum v4l2_field field; 1678 1628 1679 1629 dev = em28xx_get_device(minor, &fh_type, &radio); 1680 1630 ··· 1717 1665 1718 1666 dev->users++; 1719 1667 1668 + if (dev->progressive) 1669 + field = V4L2_FIELD_NONE; 1670 + else 1671 + field = V4L2_FIELD_INTERLACED; 1672 + 1720 1673 videobuf_queue_vmalloc_init(&fh->vb_vidq, &em28xx_video_qops, 1721 - NULL, &dev->slock, fh->type, V4L2_FIELD_INTERLACED, 1674 + NULL, &dev->slock, fh->type, field, 1722 1675 sizeof(struct em28xx_buffer), fh); 1723 1676 1724 1677 mutex_unlock(&dev->lock); ··· 1942 1885 .vidioc_qbuf = vidioc_qbuf, 1943 1886 .vidioc_dqbuf = vidioc_dqbuf, 1944 1887 .vidioc_s_std = vidioc_s_std, 1888 + .vidioc_g_parm = vidioc_g_parm, 1889 + .vidioc_s_parm = vidioc_s_parm, 1945 1890 
.vidioc_enum_input = vidioc_enum_input, 1946 1891 .vidioc_g_input = vidioc_g_input, 1947 1892 .vidioc_s_input = vidioc_s_input,
+4
drivers/media/video/em28xx/em28xx.h
··· 367 367 EM28XX_NOSENSOR = 0, 368 368 EM28XX_MT9V011, 369 369 EM28XX_MT9M001, 370 + EM28XX_MT9M111, 370 371 }; 371 372 372 373 enum em28xx_adecoder { ··· 484 483 enum em28xx_sensor em28xx_sensor; 485 484 int sensor_xres, sensor_yres; 486 485 int sensor_xtal; 486 + 487 + /* Allows progressive (e. g. non-interlaced) mode */ 488 + int progressive; 487 489 488 490 /* Vinmode/Vinctl used at the driver */ 489 491 int vinmode, vinctl;
+1 -1
drivers/media/video/gspca/Kconfig
··· 114 114 115 115 config USB_GSPCA_SN9C20X_EVDEV 116 116 bool "Enable evdev support" 117 - depends on USB_GSPCA_SN9C20X 117 + depends on USB_GSPCA_SN9C20X && INPUT 118 118 ---help--- 119 119 Say Y here in order to enable evdev support for sn9c20x webcam button. 120 120
+2
drivers/media/video/hdpvr/hdpvr-video.c
··· 1220 1220 V4L2_STD_PAL_G | V4L2_STD_PAL_H | V4L2_STD_PAL_I | 1221 1221 V4L2_STD_PAL_D | V4L2_STD_PAL_M | V4L2_STD_PAL_N | 1222 1222 V4L2_STD_PAL_60, 1223 + .current_norm = V4L2_STD_NTSC | V4L2_STD_PAL_M | 1224 + V4L2_STD_PAL_60, 1223 1225 }; 1224 1226 1225 1227 int hdpvr_register_videodev(struct hdpvr_device *dev, struct device *parent,
+2 -1
drivers/media/video/ivtv/ivtv-controls.c
··· 17 17 along with this program; if not, write to the Free Software 18 18 Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 19 */ 20 + #include <linux/kernel.h> 20 21 21 22 #include "ivtv-driver.h" 22 23 #include "ivtv-cards.h" ··· 282 281 idx = p.audio_properties & 0x03; 283 282 /* The audio clock of the digitizer must match the codec sample 284 283 rate otherwise you get some very strange effects. */ 285 - if (idx < sizeof(freqs)) 284 + if (idx < ARRAY_SIZE(freqs)) 286 285 ivtv_call_all(itv, audio, s_clock_freq, freqs[idx]); 287 286 return err; 288 287 }
+147 -9
drivers/media/video/mt9v011.c
··· 52 52 .step = 1, 53 53 .default_value = 0, 54 54 .flags = 0, 55 - }, 55 + }, { 56 + .id = V4L2_CID_HFLIP, 57 + .type = V4L2_CTRL_TYPE_BOOLEAN, 58 + .name = "Mirror", 59 + .minimum = 0, 60 + .maximum = 1, 61 + .step = 1, 62 + .default_value = 0, 63 + .flags = 0, 64 + }, { 65 + .id = V4L2_CID_VFLIP, 66 + .type = V4L2_CTRL_TYPE_BOOLEAN, 67 + .name = "Vflip", 68 + .minimum = 0, 69 + .maximum = 1, 70 + .step = 1, 71 + .default_value = 0, 72 + .flags = 0, 73 + }, { 74 + } 56 75 }; 57 76 58 77 struct mt9v011 { 59 78 struct v4l2_subdev sd; 60 79 unsigned width, height; 61 80 unsigned xtal; 81 + unsigned hflip:1; 82 + unsigned vflip:1; 62 83 63 84 u16 global_gain, red_bal, blue_bal; 64 85 }; ··· 152 131 153 132 { R0A_MT9V011_CLK_SPEED, 0x0000 }, 154 133 { R1E_MT9V011_DIGITAL_ZOOM, 0x0000 }, 155 - { R20_MT9V011_READ_MODE, 0x1000 }, 156 134 157 135 { R07_MT9V011_OUT_CTRL, 0x0002 }, /* chip enable */ 158 136 }; ··· 176 156 mt9v011_write(sd, R2D_MT9V011_RED_GAIN, red_gain); 177 157 } 178 158 179 - static void calc_fps(struct v4l2_subdev *sd) 159 + static void calc_fps(struct v4l2_subdev *sd, u32 *numerator, u32 *denominator) 180 160 { 181 161 struct mt9v011 *core = to_mt9v011(sd); 182 162 unsigned height, width, hblank, vblank, speed; ··· 199 179 200 180 v4l2_dbg(1, debug, sd, "Programmed to %u.%03u fps (%d pixel clcks)\n", 201 181 tmp / 1000, tmp % 1000, t_time); 182 + 183 + if (numerator && denominator) { 184 + *numerator = 1000; 185 + *denominator = (u32)frames_per_ms; 186 + } 187 + } 188 + 189 + static u16 calc_speed(struct v4l2_subdev *sd, u32 numerator, u32 denominator) 190 + { 191 + struct mt9v011 *core = to_mt9v011(sd); 192 + unsigned height, width, hblank, vblank; 193 + unsigned row_time, line_time; 194 + u64 t_time, speed; 195 + 196 + /* Avoid bogus calculus */ 197 + if (!numerator || !denominator) 198 + return 0; 199 + 200 + height = mt9v011_read(sd, R03_MT9V011_HEIGHT); 201 + width = mt9v011_read(sd, R04_MT9V011_WIDTH); 202 + hblank = mt9v011_read(sd, 
R05_MT9V011_HBLANK); 203 + vblank = mt9v011_read(sd, R06_MT9V011_VBLANK); 204 + 205 + row_time = width + 113 + hblank; 206 + line_time = height + vblank + 1; 207 + 208 + t_time = core->xtal * ((u64)numerator); 209 + /* round to the closest value */ 210 + t_time += denominator / 2; 211 + do_div(t_time, denominator); 212 + 213 + speed = t_time; 214 + do_div(speed, row_time * line_time); 215 + 216 + /* Avoid having a negative value for speed */ 217 + if (speed < 2) 218 + speed = 0; 219 + else 220 + speed -= 2; 221 + 222 + /* Avoid speed overflow */ 223 + if (speed > 15) 224 + return 15; 225 + 226 + return (u16)speed; 202 227 } 203 228 204 229 static void set_res(struct v4l2_subdev *sd) ··· 272 207 mt9v011_write(sd, R03_MT9V011_HEIGHT, core->height); 273 208 mt9v011_write(sd, R06_MT9V011_VBLANK, 508 - core->height); 274 209 275 - calc_fps(sd); 210 + calc_fps(sd, NULL, NULL); 276 211 }; 212 + 213 + static void set_read_mode(struct v4l2_subdev *sd) 214 + { 215 + struct mt9v011 *core = to_mt9v011(sd); 216 + unsigned mode = 0x1000; 217 + 218 + if (core->hflip) 219 + mode |= 0x4000; 220 + 221 + if (core->vflip) 222 + mode |= 0x8000; 223 + 224 + mt9v011_write(sd, R20_MT9V011_READ_MODE, mode); 225 + } 277 226 278 227 static int mt9v011_reset(struct v4l2_subdev *sd, u32 val) 279 228 { ··· 299 220 300 221 set_balance(sd); 301 222 set_res(sd); 223 + set_read_mode(sd); 302 224 303 225 return 0; 304 226 }; ··· 319 239 return 0; 320 240 case V4L2_CID_BLUE_BALANCE: 321 241 ctrl->value = core->blue_bal; 242 + return 0; 243 + case V4L2_CID_HFLIP: 244 + ctrl->value = core->hflip ? 1 : 0; 245 + return 0; 246 + case V4L2_CID_VFLIP: 247 + ctrl->value = core->vflip ? 
1 : 0; 322 248 return 0; 323 249 } 324 250 return -EINVAL; ··· 374 288 case V4L2_CID_BLUE_BALANCE: 375 289 core->blue_bal = ctrl->value; 376 290 break; 291 + case V4L2_CID_HFLIP: 292 + core->hflip = ctrl->value; 293 + set_read_mode(sd); 294 + return 0; 295 + case V4L2_CID_VFLIP: 296 + core->vflip = ctrl->value; 297 + set_read_mode(sd); 298 + return 0; 377 299 default: 378 300 return -EINVAL; 379 301 } ··· 412 318 413 319 v4l_bound_align_image(&pix->width, 48, 639, 1, 414 320 &pix->height, 32, 480, 1, 0); 321 + 322 + return 0; 323 + } 324 + 325 + static int mt9v011_g_parm(struct v4l2_subdev *sd, struct v4l2_streamparm *parms) 326 + { 327 + struct v4l2_captureparm *cp = &parms->parm.capture; 328 + 329 + if (parms->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 330 + return -EINVAL; 331 + 332 + memset(cp, 0, sizeof(struct v4l2_captureparm)); 333 + cp->capability = V4L2_CAP_TIMEPERFRAME; 334 + calc_fps(sd, 335 + &cp->timeperframe.numerator, 336 + &cp->timeperframe.denominator); 337 + 338 + return 0; 339 + } 340 + 341 + static int mt9v011_s_parm(struct v4l2_subdev *sd, struct v4l2_streamparm *parms) 342 + { 343 + struct v4l2_captureparm *cp = &parms->parm.capture; 344 + struct v4l2_fract *tpf = &cp->timeperframe; 345 + u16 speed; 346 + 347 + if (parms->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 348 + return -EINVAL; 349 + if (cp->extendedmode != 0) 350 + return -EINVAL; 351 + 352 + speed = calc_speed(sd, tpf->numerator, tpf->denominator); 353 + 354 + mt9v011_write(sd, R0A_MT9V011_CLK_SPEED, speed); 355 + v4l2_dbg(1, debug, sd, "Setting speed to %d\n", speed); 356 + 357 + /* Recalculate and update fps info */ 358 + calc_fps(sd, &tpf->numerator, &tpf->denominator); 415 359 416 360 return 0; 417 361 } ··· 525 393 static int mt9v011_g_chip_ident(struct v4l2_subdev *sd, 526 394 struct v4l2_dbg_chip_ident *chip) 527 395 { 396 + u16 version; 528 397 struct i2c_client *client = v4l2_get_subdevdata(sd); 529 398 399 + version = mt9v011_read(sd, R00_MT9V011_CHIP_VERSION); 400 + 530 401 return 
v4l2_chip_ident_i2c_client(client, chip, V4L2_IDENT_MT9V011, 531 - MT9V011_VERSION); 402 + version); 532 403 } 533 404 534 405 static const struct v4l2_subdev_core_ops mt9v011_core_ops = { ··· 551 416 .enum_fmt = mt9v011_enum_fmt, 552 417 .try_fmt = mt9v011_try_fmt, 553 418 .s_fmt = mt9v011_s_fmt, 419 + .g_parm = mt9v011_g_parm, 420 + .s_parm = mt9v011_s_parm, 554 421 }; 555 422 556 423 static const struct v4l2_subdev_ops mt9v011_ops = { ··· 586 449 587 450 /* Check if the sensor is really a MT9V011 */ 588 451 version = mt9v011_read(sd, R00_MT9V011_CHIP_VERSION); 589 - if (version != MT9V011_VERSION) { 590 - v4l2_info(sd, "*** unknown micron chip detected (0x%04x.\n", 452 + if ((version != MT9V011_VERSION) && 453 + (version != MT9V011_REV_B_VERSION)) { 454 + v4l2_info(sd, "*** unknown micron chip detected (0x%04x).\n", 591 455 version); 592 456 kfree(core); 593 457 return -EINVAL; ··· 599 461 core->height = 480; 600 462 core->xtal = 27000000; /* Hz */ 601 463 602 - v4l_info(c, "chip found @ 0x%02x (%s)\n", 603 - c->addr << 1, c->adapter->name); 464 + v4l_info(c, "chip found @ 0x%02x (%s - chip version 0x%04x)\n", 465 + c->addr << 1, c->adapter->name, version); 604 466 605 467 return 0; 606 468 }
+2 -1
drivers/media/video/mt9v011.h
··· 30 30 #define R35_MT9V011_GLOBAL_GAIN 0x35 31 31 #define RF1_MT9V011_CHIP_ENABLE 0xf1 32 32 33 - #define MT9V011_VERSION 0x8243 33 + #define MT9V011_VERSION 0x8232 34 + #define MT9V011_REV_B_VERSION 0x8243 34 35 35 36 #endif
+1 -5
drivers/media/video/mx1_camera.c
··· 234 234 return ret; 235 235 } 236 236 237 + /* Called under spinlock_irqsave(&pcdev->lock, ...) */ 237 238 static void mx1_videobuf_queue(struct videobuf_queue *vq, 238 239 struct videobuf_buffer *vb) 239 240 { ··· 242 241 struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent); 243 242 struct mx1_camera_dev *pcdev = ici->priv; 244 243 struct mx1_buffer *buf = container_of(vb, struct mx1_buffer, vb); 245 - unsigned long flags; 246 244 247 245 dev_dbg(&icd->dev, "%s (vb=0x%p) 0x%08lx %d\n", __func__, 248 246 vb, vb->baddr, vb->bsize); 249 - 250 - spin_lock_irqsave(&pcdev->lock, flags); 251 247 252 248 list_add_tail(&vb->queue, &pcdev->capture); 253 249 ··· 262 264 __raw_writel(temp, pcdev->base + CSICR1); 263 265 } 264 266 } 265 - 266 - spin_unlock_irqrestore(&pcdev->lock, flags); 267 267 } 268 268 269 269 static void mx1_videobuf_release(struct videobuf_queue *vq,
+10 -9
drivers/media/video/mx3_camera.c
··· 332 332 } 333 333 } 334 334 335 - /* Called with .vb_lock held */ 335 + /* 336 + * Called with .vb_lock mutex held and 337 + * under spinlock_irqsave(&mx3_cam->lock, ...) 338 + */ 336 339 static void mx3_videobuf_queue(struct videobuf_queue *vq, 337 340 struct videobuf_buffer *vb) 338 341 { ··· 349 346 struct idmac_video_param *video = &ichan->params.video; 350 347 const struct soc_camera_data_format *data_fmt = icd->current_fmt; 351 348 dma_cookie_t cookie; 352 - unsigned long flags; 349 + 350 + BUG_ON(!irqs_disabled()); 353 351 354 352 /* This is the configuration of one sg-element */ 355 353 video->out_pixel_fmt = fourcc_to_ipu_pix(data_fmt->fourcc); ··· 363 359 memset((void *)vb->baddr, 0xaa, vb->bsize); 364 360 #endif 365 361 366 - spin_lock_irqsave(&mx3_cam->lock, flags); 367 - 368 362 list_add_tail(&vb->queue, &mx3_cam->capture); 369 363 370 364 if (!mx3_cam->active) { ··· 372 370 vb->state = VIDEOBUF_QUEUED; 373 371 } 374 372 375 - spin_unlock_irqrestore(&mx3_cam->lock, flags); 373 + spin_unlock_irq(&mx3_cam->lock); 376 374 377 375 cookie = txd->tx_submit(txd); 378 376 dev_dbg(&icd->dev, "Submitted cookie %d DMA 0x%08x\n", cookie, sg_dma_address(&buf->sg)); 377 + 378 + spin_lock_irq(&mx3_cam->lock); 379 + 379 380 if (cookie >= 0) 380 381 return; 381 382 382 383 /* Submit error */ 383 384 vb->state = VIDEOBUF_PREPARED; 384 385 385 - spin_lock_irqsave(&mx3_cam->lock, flags); 386 - 387 386 list_del_init(&vb->queue); 388 387 389 388 if (mx3_cam->active == buf) 390 389 mx3_cam->active = NULL; 391 - 392 - spin_unlock_irqrestore(&mx3_cam->lock, flags); 393 390 } 394 391 395 392 /* Called with .vb_lock held */
+2 -6
drivers/media/video/pxa_camera.c
··· 612 612 dev_dbg(pcdev->soc_host.dev, "%s\n", __func__); 613 613 } 614 614 615 + /* Called under spinlock_irqsave(&pcdev->lock, ...) */ 615 616 static void pxa_videobuf_queue(struct videobuf_queue *vq, 616 617 struct videobuf_buffer *vb) 617 618 { ··· 620 619 struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent); 621 620 struct pxa_camera_dev *pcdev = ici->priv; 622 621 struct pxa_buffer *buf = container_of(vb, struct pxa_buffer, vb); 623 - unsigned long flags; 624 622 625 623 dev_dbg(&icd->dev, "%s (vb=0x%p) 0x%08lx %d active=%p\n", __func__, 626 624 vb, vb->baddr, vb->bsize, pcdev->active); 627 - 628 - spin_lock_irqsave(&pcdev->lock, flags); 629 625 630 626 list_add_tail(&vb->queue, &pcdev->capture); 631 627 ··· 631 633 632 634 if (!pcdev->active) 633 635 pxa_camera_start_capture(pcdev); 634 - 635 - spin_unlock_irqrestore(&pcdev->lock, flags); 636 636 } 637 637 638 638 static void pxa_videobuf_release(struct videobuf_queue *vq, ··· 1575 1579 pcdev->mclk = 20000000; 1576 1580 } 1577 1581 1582 + pcdev->soc_host.dev = &pdev->dev; 1578 1583 pcdev->mclk_divisor = mclk_get_divisor(pcdev); 1579 1584 1580 1585 INIT_LIST_HEAD(&pcdev->capture); ··· 1641 1644 pcdev->soc_host.drv_name = PXA_CAM_DRV_NAME; 1642 1645 pcdev->soc_host.ops = &pxa_soc_camera_host_ops; 1643 1646 pcdev->soc_host.priv = pcdev; 1644 - pcdev->soc_host.dev = &pdev->dev; 1645 1647 pcdev->soc_host.nr = pdev->id; 1646 1648 1647 1649 err = soc_camera_host_register(&pcdev->soc_host);
+19 -19
drivers/media/video/saa7134/saa7134-cards.c
··· 3331 3331 .gpio = 0x0200100, 3332 3332 }, 3333 3333 }, 3334 - [SAA7134_BOARD_HAUPPAUGE_HVR1120] = { 3335 - .name = "Hauppauge WinTV-HVR1120 ATSC/QAM-Hybrid", 3334 + [SAA7134_BOARD_HAUPPAUGE_HVR1150] = { 3335 + .name = "Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid", 3336 3336 .audio_clock = 0x00187de7, 3337 3337 .tuner_type = TUNER_PHILIPS_TDA8290, 3338 3338 .radio_type = UNSET, ··· 3363 3363 .gpio = 0x0800100, /* GPIO 23 HI for FM */ 3364 3364 }, 3365 3365 }, 3366 - [SAA7134_BOARD_HAUPPAUGE_HVR1110R3] = { 3367 - .name = "Hauppauge WinTV-HVR1110r3 DVB-T/Hybrid", 3366 + [SAA7134_BOARD_HAUPPAUGE_HVR1120] = { 3367 + .name = "Hauppauge WinTV-HVR1120 DVB-T/Hybrid", 3368 3368 .audio_clock = 0x00187de7, 3369 3369 .tuner_type = TUNER_PHILIPS_TDA8290, 3370 3370 .radio_type = UNSET, ··· 5862 5862 .device = PCI_DEVICE_ID_PHILIPS_SAA7133, 5863 5863 .subvendor = 0x0070, 5864 5864 .subdevice = 0x6706, 5865 - .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1120, 5865 + .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1150, 5866 5866 },{ 5867 5867 .vendor = PCI_VENDOR_ID_PHILIPS, 5868 5868 .device = PCI_DEVICE_ID_PHILIPS_SAA7133, 5869 5869 .subvendor = 0x0070, 5870 5870 .subdevice = 0x6707, 5871 - .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1110R3, 5872 - },{ 5873 - .vendor = PCI_VENDOR_ID_PHILIPS, 5874 - .device = PCI_DEVICE_ID_PHILIPS_SAA7133, 5875 - .subvendor = 0x0070, 5876 - .subdevice = 0x6708, 5877 5871 .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1120, 5878 5872 },{ 5879 5873 .vendor = PCI_VENDOR_ID_PHILIPS, 5880 5874 .device = PCI_DEVICE_ID_PHILIPS_SAA7133, 5881 5875 .subvendor = 0x0070, 5876 + .subdevice = 0x6708, 5877 + .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1150, 5878 + },{ 5879 + .vendor = PCI_VENDOR_ID_PHILIPS, 5880 + .device = PCI_DEVICE_ID_PHILIPS_SAA7133, 5881 + .subvendor = 0x0070, 5882 5882 .subdevice = 0x6709, 5883 - .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1110R3, 5883 + .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1120, 5884 5884 },{ 5885 5885 .vendor = PCI_VENDOR_ID_PHILIPS, 
5886 5886 .device = PCI_DEVICE_ID_PHILIPS_SAA7133, 5887 5887 .subvendor = 0x0070, 5888 5888 .subdevice = 0x670a, 5889 - .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1110R3, 5889 + .driver_data = SAA7134_BOARD_HAUPPAUGE_HVR1120, 5890 5890 },{ 5891 5891 .vendor = PCI_VENDOR_ID_PHILIPS, 5892 5892 .device = PCI_DEVICE_ID_PHILIPS_SAA7133, ··· 6363 6363 switch (command) { 6364 6364 case TDA18271_CALLBACK_CMD_AGC_ENABLE: /* 0 */ 6365 6365 switch (dev->board) { 6366 + case SAA7134_BOARD_HAUPPAUGE_HVR1150: 6366 6367 case SAA7134_BOARD_HAUPPAUGE_HVR1120: 6367 - case SAA7134_BOARD_HAUPPAUGE_HVR1110R3: 6368 6368 ret = saa7134_tda18271_hvr11x0_toggle_agc(dev, arg); 6369 6369 break; 6370 6370 default: ··· 6384 6384 int ret; 6385 6385 6386 6386 switch (dev->board) { 6387 + case SAA7134_BOARD_HAUPPAUGE_HVR1150: 6387 6388 case SAA7134_BOARD_HAUPPAUGE_HVR1120: 6388 - case SAA7134_BOARD_HAUPPAUGE_HVR1110R3: 6389 6389 /* tda8290 + tda18271 */ 6390 6390 ret = saa7134_tda8290_18271_callback(dev, command, arg); 6391 6391 break; ··· 6427 6427 switch (tv.model) { 6428 6428 case 67019: /* WinTV-HVR1110 (Retail, IR Blaster, hybrid, FM, SVid/Comp, 3.5mm audio in) */ 6429 6429 case 67109: /* WinTV-HVR1000 (Retail, IR Receive, analog, no FM, SVid/Comp, 3.5mm audio in) */ 6430 - case 67201: /* WinTV-HVR1120 (Retail, IR Receive, hybrid, FM, SVid/Comp, 3.5mm audio in) */ 6430 + case 67201: /* WinTV-HVR1150 (Retail, IR Receive, hybrid, FM, SVid/Comp, 3.5mm audio in) */ 6431 6431 case 67301: /* WinTV-HVR1000 (Retail, IR Receive, analog, no FM, SVid/Comp, 3.5mm audio in) */ 6432 6432 case 67209: /* WinTV-HVR1110 (Retail, IR Receive, hybrid, FM, SVid/Comp, 3.5mm audio in) */ 6433 6433 case 67559: /* WinTV-HVR1110 (OEM, no IR, hybrid, FM, SVid/Comp, RCA aud) */ ··· 6435 6435 case 67579: /* WinTV-HVR1110 (OEM, no IR, hybrid, no FM) */ 6436 6436 case 67589: /* WinTV-HVR1110 (OEM, no IR, hybrid, no FM, SVid/Comp, RCA aud) */ 6437 6437 case 67599: /* WinTV-HVR1110 (OEM, no IR, hybrid, no FM, SVid/Comp, RCA 
aud) */ 6438 - case 67651: /* WinTV-HVR1120 (OEM, no IR, hybrid, FM, SVid/Comp, RCA aud) */ 6438 + case 67651: /* WinTV-HVR1150 (OEM, no IR, hybrid, FM, SVid/Comp, RCA aud) */ 6439 6439 case 67659: /* WinTV-HVR1110 (OEM, no IR, hybrid, FM, SVid/Comp, RCA aud) */ 6440 6440 break; 6441 6441 default: ··· 6625 6625 6626 6626 saa_writeb (SAA7134_PRODUCTION_TEST_MODE, 0x00); 6627 6627 break; 6628 + case SAA7134_BOARD_HAUPPAUGE_HVR1150: 6628 6629 case SAA7134_BOARD_HAUPPAUGE_HVR1120: 6629 - case SAA7134_BOARD_HAUPPAUGE_HVR1110R3: 6630 6630 /* GPIO 26 high for digital, low for analog */ 6631 6631 saa7134_set_gpio(dev, 26, 0); 6632 6632 msleep(1); ··· 6891 6891 dev->name, saa7134_boards[dev->board].name); 6892 6892 } 6893 6893 break; 6894 + case SAA7134_BOARD_HAUPPAUGE_HVR1150: 6894 6895 case SAA7134_BOARD_HAUPPAUGE_HVR1120: 6895 - case SAA7134_BOARD_HAUPPAUGE_HVR1110R3: 6896 6896 hauppauge_eeprom(dev, dev->eedata+0x80); 6897 6897 break; 6898 6898 case SAA7134_BOARD_HAUPPAUGE_HVR1110:
+2 -2
drivers/media/video/saa7134/saa7134-dvb.c
··· 1119 1119 &tda827x_cfg_2) < 0) 1120 1120 goto dettach_frontend; 1121 1121 break; 1122 - case SAA7134_BOARD_HAUPPAUGE_HVR1110R3: 1122 + case SAA7134_BOARD_HAUPPAUGE_HVR1120: 1123 1123 fe0->dvb.frontend = dvb_attach(tda10048_attach, 1124 1124 &hcw_tda10048_config, 1125 1125 &dev->i2c_adap); ··· 1147 1147 &tda827x_cfg_1) < 0) 1148 1148 goto dettach_frontend; 1149 1149 break; 1150 - case SAA7134_BOARD_HAUPPAUGE_HVR1120: 1150 + case SAA7134_BOARD_HAUPPAUGE_HVR1150: 1151 1151 fe0->dvb.frontend = dvb_attach(lgdt3305_attach, 1152 1152 &hcw_lgdt3305_config, 1153 1153 &dev->i2c_adap);
+2 -2
drivers/media/video/saa7134/saa7134.h
··· 278 278 #define SAA7134_BOARD_ASUSTeK_TIGER 152 279 279 #define SAA7134_BOARD_KWORLD_PLUS_TV_ANALOG 153 280 280 #define SAA7134_BOARD_AVERMEDIA_GO_007_FM_PLUS 154 281 - #define SAA7134_BOARD_HAUPPAUGE_HVR1120 155 282 - #define SAA7134_BOARD_HAUPPAUGE_HVR1110R3 156 281 + #define SAA7134_BOARD_HAUPPAUGE_HVR1150 155 282 + #define SAA7134_BOARD_HAUPPAUGE_HVR1120 156 283 283 #define SAA7134_BOARD_AVERMEDIA_STUDIO_507UA 157 284 284 #define SAA7134_BOARD_AVERMEDIA_CARDBUS_501 158 285 285 #define SAA7134_BOARD_BEHOLD_505RDS 159
+1 -4
drivers/media/video/sh_mobile_ceu_camera.c
··· 282 282 return ret; 283 283 } 284 284 285 + /* Called under spinlock_irqsave(&pcdev->lock, ...) */ 285 286 static void sh_mobile_ceu_videobuf_queue(struct videobuf_queue *vq, 286 287 struct videobuf_buffer *vb) 287 288 { 288 289 struct soc_camera_device *icd = vq->priv_data; 289 290 struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent); 290 291 struct sh_mobile_ceu_dev *pcdev = ici->priv; 291 - unsigned long flags; 292 292 293 293 dev_dbg(&icd->dev, "%s (vb=0x%p) 0x%08lx %zd\n", __func__, 294 294 vb, vb->baddr, vb->bsize); 295 295 296 296 vb->state = VIDEOBUF_QUEUED; 297 - spin_lock_irqsave(&pcdev->lock, flags); 298 297 list_add_tail(&vb->queue, &pcdev->capture); 299 298 300 299 if (!pcdev->active) { 301 300 pcdev->active = vb; 302 301 sh_mobile_ceu_capture(pcdev); 303 302 } 304 - 305 - spin_unlock_irqrestore(&pcdev->lock, flags); 306 303 } 307 304 308 305 static void sh_mobile_ceu_videobuf_release(struct videobuf_queue *vq,
+2 -2
drivers/media/video/stk-webcam.c
··· 1050 1050 depth = 1; 1051 1051 else 1052 1052 depth = 2; 1053 - while (stk_sizes[i].m != dev->vsettings.mode 1054 - && i < ARRAY_SIZE(stk_sizes)) 1053 + while (i < ARRAY_SIZE(stk_sizes) && 1054 + stk_sizes[i].m != dev->vsettings.mode) 1055 1055 i++; 1056 1056 if (i == ARRAY_SIZE(stk_sizes)) { 1057 1057 STK_ERROR("Something is broken in %s\n", __func__);
+21 -3
drivers/media/video/uvc/uvc_driver.c
··· 1845 1845 .bInterfaceSubClass = 1, 1846 1846 .bInterfaceProtocol = 0, 1847 1847 .driver_info = UVC_QUIRK_STREAM_NO_FID }, 1848 - /* ViMicro */ 1849 - { .match_flags = USB_DEVICE_ID_MATCH_VENDOR 1848 + /* ViMicro Vega */ 1849 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 1850 1850 | USB_DEVICE_ID_MATCH_INT_INFO, 1851 1851 .idVendor = 0x0ac8, 1852 - .idProduct = 0x0000, 1852 + .idProduct = 0x332d, 1853 + .bInterfaceClass = USB_CLASS_VIDEO, 1854 + .bInterfaceSubClass = 1, 1855 + .bInterfaceProtocol = 0, 1856 + .driver_info = UVC_QUIRK_FIX_BANDWIDTH }, 1857 + /* ViMicro - Minoru3D */ 1858 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 1859 + | USB_DEVICE_ID_MATCH_INT_INFO, 1860 + .idVendor = 0x0ac8, 1861 + .idProduct = 0x3410, 1862 + .bInterfaceClass = USB_CLASS_VIDEO, 1863 + .bInterfaceSubClass = 1, 1864 + .bInterfaceProtocol = 0, 1865 + .driver_info = UVC_QUIRK_FIX_BANDWIDTH }, 1866 + /* ViMicro Venus - Minoru3D */ 1867 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 1868 + | USB_DEVICE_ID_MATCH_INT_INFO, 1869 + .idVendor = 0x0ac8, 1870 + .idProduct = 0x3420, 1853 1871 .bInterfaceClass = USB_CLASS_VIDEO, 1854 1872 .bInterfaceSubClass = 1, 1855 1873 .bInterfaceProtocol = 0,
+2 -2
drivers/media/video/uvc/uvc_status.c
··· 145 145 break; 146 146 147 147 default: 148 - uvc_printk(KERN_INFO, "unknown event type %u.\n", 149 - dev->status[0]); 148 + uvc_trace(UVC_TRACE_STATUS, "Unknown status event " 149 + "type %u.\n", dev->status[0]); 150 150 break; 151 151 } 152 152 }
+12 -3
drivers/media/video/v4l2-ioctl.c
··· 1081 1081 /* Calls the specific handler */ 1082 1082 if (ops->vidioc_g_std) 1083 1083 ret = ops->vidioc_g_std(file, fh, id); 1084 - else 1084 + else if (vfd->current_norm) 1085 1085 *id = vfd->current_norm; 1086 + else 1087 + ret = -EINVAL; 1086 1088 1087 1089 if (!ret) 1088 1090 dbgarg(cmd, "std=0x%08Lx\n", (long long unsigned)*id); ··· 1555 1553 break; 1556 1554 ret = ops->vidioc_g_parm(file, fh, p); 1557 1555 } else { 1556 + v4l2_std_id std = vfd->current_norm; 1557 + 1558 1558 if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 1559 1559 return -EINVAL; 1560 1560 1561 - v4l2_video_std_frame_period(vfd->current_norm, 1562 - &p->parm.capture.timeperframe); 1563 1561 ret = 0; 1562 + if (ops->vidioc_g_std) 1563 + ret = ops->vidioc_g_std(file, fh, &std); 1564 + else if (std == 0) 1565 + ret = -EINVAL; 1566 + if (ret == 0) 1567 + v4l2_video_std_frame_period(std, 1568 + &p->parm.capture.timeperframe); 1564 1569 } 1565 1570 1566 1571 dbgarg(cmd, "type=%d\n", p->type);
+1 -1
drivers/media/video/zr364xx.c
··· 695 695 for (i = 0; i < 2; i++) { 696 696 err = 697 697 send_control_msg(udev, 1, init[cam->method][i].value, 698 - 0, init[i][cam->method].bytes, 698 + 0, init[cam->method][i].bytes, 699 699 init[cam->method][i].size); 700 700 if (err < 0) { 701 701 dev_err(&udev->dev, "error during release sequence\n");
+1 -1
drivers/mtd/devices/m25p80.c
··· 736 736 flash->partitioned = 1; 737 737 return add_mtd_partitions(&flash->mtd, parts, nr_parts); 738 738 } 739 - } else if (data->nr_parts) 739 + } else if (data && data->nr_parts) 740 740 dev_warn(&spi->dev, "ignoring %d default partitions on %s\n", 741 741 data->nr_parts, data->name); 742 742
drivers/mtd/maps/sbc8240.c
+1 -1
drivers/mtd/nand/orion_nand.c
··· 61 61 buf64 = (uint64_t *)buf; 62 62 while (i < len/8) { 63 63 uint64_t x; 64 - asm ("ldrd\t%0, [%1]" : "=r" (x) : "r" (io_base)); 64 + asm volatile ("ldrd\t%0, [%1]" : "=&r" (x) : "r" (io_base)); 65 65 buf64[i++] = x; 66 66 } 67 67 i *= 8;
+9 -6
drivers/mtd/nftlcore.c
··· 135 135 int nftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len, 136 136 size_t *retlen, uint8_t *buf) 137 137 { 138 + loff_t mask = mtd->writesize - 1; 138 139 struct mtd_oob_ops ops; 139 140 int res; 140 141 141 142 ops.mode = MTD_OOB_PLACE; 142 - ops.ooboffs = offs & (mtd->writesize - 1); 143 + ops.ooboffs = offs & mask; 143 144 ops.ooblen = len; 144 145 ops.oobbuf = buf; 145 146 ops.datbuf = NULL; 146 147 147 - res = mtd->read_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 148 + res = mtd->read_oob(mtd, offs & ~mask, &ops); 148 149 *retlen = ops.oobretlen; 149 150 return res; 150 151 } ··· 156 155 int nftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len, 157 156 size_t *retlen, uint8_t *buf) 158 157 { 158 + loff_t mask = mtd->writesize - 1; 159 159 struct mtd_oob_ops ops; 160 160 int res; 161 161 162 162 ops.mode = MTD_OOB_PLACE; 163 - ops.ooboffs = offs & (mtd->writesize - 1); 163 + ops.ooboffs = offs & mask; 164 164 ops.ooblen = len; 165 165 ops.oobbuf = buf; 166 166 ops.datbuf = NULL; 167 167 168 - res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 168 + res = mtd->write_oob(mtd, offs & ~mask, &ops); 169 169 *retlen = ops.oobretlen; 170 170 return res; 171 171 } ··· 179 177 static int nftl_write(struct mtd_info *mtd, loff_t offs, size_t len, 180 178 size_t *retlen, uint8_t *buf, uint8_t *oob) 181 179 { 180 + loff_t mask = mtd->writesize - 1; 182 181 struct mtd_oob_ops ops; 183 182 int res; 184 183 185 184 ops.mode = MTD_OOB_PLACE; 186 - ops.ooboffs = offs; 185 + ops.ooboffs = offs & mask; 187 186 ops.ooblen = mtd->oobsize; 188 187 ops.oobbuf = oob; 189 188 ops.datbuf = buf; 190 189 ops.len = len; 191 190 192 - res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 191 + res = mtd->write_oob(mtd, offs & ~mask, &ops); 193 192 *retlen = ops.retlen; 194 193 return res; 195 194 }
+4
drivers/net/3c59x.c
··· 235 235 CH_3C900B_FL, 236 236 CH_3C905_1, 237 237 CH_3C905_2, 238 + CH_3C905B_TX, 238 239 CH_3C905B_1, 239 240 240 241 CH_3C905B_2, ··· 308 307 PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_RESET, 64, }, 309 308 {"3c905 Boomerang 100baseT4", 310 309 PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_RESET, 64, }, 310 + {"3C905B-TX Fast Etherlink XL PCI", 311 + PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, }, 311 312 {"3c905B Cyclone 100baseTx", 312 313 PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, }, 313 314 ··· 392 389 { 0x10B7, 0x900A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900B_FL }, 393 390 { 0x10B7, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_1 }, 394 391 { 0x10B7, 0x9051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_2 }, 392 + { 0x10B7, 0x9054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_TX }, 395 393 { 0x10B7, 0x9055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_1 }, 396 394 397 395 { 0x10B7, 0x9058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_2 },
+2 -3
drivers/net/8139cp.c
··· 515 515 dma_addr_t mapping; 516 516 struct sk_buff *skb, *new_skb; 517 517 struct cp_desc *desc; 518 - unsigned buflen; 518 + const unsigned buflen = cp->rx_buf_sz; 519 519 520 520 skb = cp->rx_skb[rx_tail]; 521 521 BUG_ON(!skb); ··· 549 549 pr_debug("%s: rx slot %d status 0x%x len %d\n", 550 550 dev->name, rx_tail, status, len); 551 551 552 - buflen = cp->rx_buf_sz + NET_IP_ALIGN; 553 - new_skb = netdev_alloc_skb(dev, buflen); 552 + new_skb = netdev_alloc_skb(dev, buflen + NET_IP_ALIGN); 554 553 if (!new_skb) { 555 554 dev->stats.rx_dropped++; 556 555 goto rx_next;
+3 -1
drivers/net/Kconfig
··· 1727 1727 tristate "Micrel KSZ8842" 1728 1728 depends on HAS_IOMEM 1729 1729 help 1730 - This platform driver is for Micrel KSZ8842 chip. 1730 + This platform driver is for Micrel KSZ8842 / KS8842 1731 + 2-port ethernet switch chip (managed, VLAN, QoS). 1731 1732 1732 1733 config KS8851 1733 1734 tristate "Micrel KS8851 SPI" 1734 1735 depends on SPI 1735 1736 select MII 1737 + select CRC32 1736 1738 help 1737 1739 SPI driver for Micrel KS8851 SPI attached network chip. 1738 1740
+2 -2
drivers/net/arm/w90p910_ether.c
··· 1080 1080 .probe = w90p910_ether_probe, 1081 1081 .remove = __devexit_p(w90p910_ether_remove), 1082 1082 .driver = { 1083 - .name = "w90p910-emc", 1083 + .name = "nuc900-emc", 1084 1084 .owner = THIS_MODULE, 1085 1085 }, 1086 1086 }; ··· 1101 1101 MODULE_AUTHOR("Wan ZongShun <mcuos.com@gmail.com>"); 1102 1102 MODULE_DESCRIPTION("w90p910 MAC driver!"); 1103 1103 MODULE_LICENSE("GPL"); 1104 - MODULE_ALIAS("platform:w90p910-emc"); 1104 + MODULE_ALIAS("platform:nuc900-emc"); 1105 1105
+4 -4
drivers/net/atl1c/atl1c_ethtool.c
··· 232 232 { 233 233 struct atl1c_adapter *adapter = netdev_priv(netdev); 234 234 235 - strncpy(drvinfo->driver, atl1c_driver_name, sizeof(drvinfo->driver)); 236 - strncpy(drvinfo->version, atl1c_driver_version, 235 + strlcpy(drvinfo->driver, atl1c_driver_name, sizeof(drvinfo->driver)); 236 + strlcpy(drvinfo->version, atl1c_driver_version, 237 237 sizeof(drvinfo->version)); 238 - strncpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version)); 239 - strncpy(drvinfo->bus_info, pci_name(adapter->pdev), 238 + strlcpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version)); 239 + strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 240 240 sizeof(drvinfo->bus_info)); 241 241 drvinfo->n_stats = 0; 242 242 drvinfo->testinfo_len = 0;
+4 -4
drivers/net/atlx/atl1.c
··· 3378 3378 { 3379 3379 struct atl1_adapter *adapter = netdev_priv(netdev); 3380 3380 3381 - strncpy(drvinfo->driver, ATLX_DRIVER_NAME, sizeof(drvinfo->driver)); 3382 - strncpy(drvinfo->version, ATLX_DRIVER_VERSION, 3381 + strlcpy(drvinfo->driver, ATLX_DRIVER_NAME, sizeof(drvinfo->driver)); 3382 + strlcpy(drvinfo->version, ATLX_DRIVER_VERSION, 3383 3383 sizeof(drvinfo->version)); 3384 - strncpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version)); 3385 - strncpy(drvinfo->bus_info, pci_name(adapter->pdev), 3384 + strlcpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version)); 3385 + strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 3386 3386 sizeof(drvinfo->bus_info)); 3387 3387 drvinfo->eedump_len = ATL1_EEDUMP_LEN; 3388 3388 }
+3 -2
drivers/net/b44.c
··· 952 952 int rc = NETDEV_TX_OK; 953 953 dma_addr_t mapping; 954 954 u32 len, entry, ctrl; 955 + unsigned long flags; 955 956 956 957 len = skb->len; 957 - spin_lock_irq(&bp->lock); 958 + spin_lock_irqsave(&bp->lock, flags); 958 959 959 960 /* This is a hard error, log it. */ 960 961 if (unlikely(TX_BUFFS_AVAIL(bp) < 1)) { ··· 1028 1027 dev->trans_start = jiffies; 1029 1028 1030 1029 out_unlock: 1031 - spin_unlock_irq(&bp->lock); 1030 + spin_unlock_irqrestore(&bp->lock, flags); 1032 1031 1033 1032 return rc; 1034 1033
+11 -6
drivers/net/bnx2.c
··· 399 399 struct bnx2_napi *bnapi = &bp->bnx2_napi[0]; 400 400 struct cnic_eth_dev *cp = &bp->cnic_eth_dev; 401 401 402 + mutex_lock(&bp->cnic_lock); 402 403 cp->drv_state = 0; 403 404 bnapi->cnic_present = 0; 404 405 rcu_assign_pointer(bp->cnic_ops, NULL); 406 + mutex_unlock(&bp->cnic_lock); 405 407 synchronize_rcu(); 406 408 return 0; 407 409 } ··· 431 429 struct cnic_ops *c_ops; 432 430 struct cnic_ctl_info info; 433 431 434 - rcu_read_lock(); 435 - c_ops = rcu_dereference(bp->cnic_ops); 432 + mutex_lock(&bp->cnic_lock); 433 + c_ops = bp->cnic_ops; 436 434 if (c_ops) { 437 435 info.cmd = CNIC_CTL_STOP_CMD; 438 436 c_ops->cnic_ctl(bp->cnic_data, &info); 439 437 } 440 - rcu_read_unlock(); 438 + mutex_unlock(&bp->cnic_lock); 441 439 } 442 440 443 441 static void ··· 446 444 struct cnic_ops *c_ops; 447 445 struct cnic_ctl_info info; 448 446 449 - rcu_read_lock(); 450 - c_ops = rcu_dereference(bp->cnic_ops); 447 + mutex_lock(&bp->cnic_lock); 448 + c_ops = bp->cnic_ops; 451 449 if (c_ops) { 452 450 if (!(bp->flags & BNX2_FLAG_USING_MSIX)) { 453 451 struct bnx2_napi *bnapi = &bp->bnx2_napi[0]; ··· 457 455 info.cmd = CNIC_CTL_START_CMD; 458 456 c_ops->cnic_ctl(bp->cnic_data, &info); 459 457 } 460 - rcu_read_unlock(); 458 + mutex_unlock(&bp->cnic_lock); 461 459 } 462 460 463 461 #else ··· 7665 7663 7666 7664 spin_lock_init(&bp->phy_lock); 7667 7665 spin_lock_init(&bp->indirect_lock); 7666 + #ifdef BCM_CNIC 7667 + mutex_init(&bp->cnic_lock); 7668 + #endif 7668 7669 INIT_WORK(&bp->reset_task, bnx2_reset_task); 7669 7670 7670 7671 dev->base_addr = dev->mem_start = pci_resource_start(pdev, 0);
+1
drivers/net/bnx2.h
··· 6902 6902 u32 idle_chk_status_idx; 6903 6903 6904 6904 #ifdef BCM_CNIC 6905 + struct mutex cnic_lock; 6905 6906 struct cnic_eth_dev cnic_eth_dev; 6906 6907 #endif 6907 6908
+7
drivers/net/can/dev.c
··· 611 611 return -EMSGSIZE; 612 612 } 613 613 614 + static int can_newlink(struct net_device *dev, 615 + struct nlattr *tb[], struct nlattr *data[]) 616 + { 617 + return -EOPNOTSUPP; 618 + } 619 + 614 620 static struct rtnl_link_ops can_link_ops __read_mostly = { 615 621 .kind = "can", 616 622 .maxtype = IFLA_CAN_MAX, 617 623 .policy = can_policy, 618 624 .setup = can_setup, 625 + .newlink = can_newlink, 619 626 .changelink = can_changelink, 620 627 .fill_info = can_fill_info, 621 628 .fill_xstats = can_fill_xstats,
+105 -40
drivers/net/cnic.c
··· 138 138 return NULL;
139 139 }
140 140
141 + static inline void ulp_get(struct cnic_ulp_ops *ulp_ops)
142 + {
143 + atomic_inc(&ulp_ops->ref_count);
144 + }
145 +
146 + static inline void ulp_put(struct cnic_ulp_ops *ulp_ops)
147 + {
148 + atomic_dec(&ulp_ops->ref_count);
149 + }
150 +
141 151 static void cnic_ctx_wr(struct cnic_dev *dev, u32 cid_addr, u32 off, u32 val)
142 152 {
143 153 struct cnic_local *cp = dev->cnic_priv;
··· 368 358 }
369 359 read_unlock(&cnic_dev_lock);
370 360
361 + atomic_set(&ulp_ops->ref_count, 0);
371 362 rcu_assign_pointer(cnic_ulp_tbl[ulp_type], ulp_ops);
372 363 mutex_unlock(&cnic_lock);
373 364
··· 390 379 int cnic_unregister_driver(int ulp_type)
391 380 {
392 381 struct cnic_dev *dev;
382 + struct cnic_ulp_ops *ulp_ops;
383 + int i = 0;
393 384
394 385 if (ulp_type >= MAX_CNIC_ULP_TYPE) {
395 386 printk(KERN_ERR PFX "cnic_unregister_driver: Bad type %d\n",
··· 399 386 return -EINVAL;
400 387 }
401 388 mutex_lock(&cnic_lock);
402 - if (!cnic_ulp_tbl[ulp_type]) {
389 + ulp_ops = cnic_ulp_tbl[ulp_type];
390 + if (!ulp_ops) {
403 391 printk(KERN_ERR PFX "cnic_unregister_driver: Type %d has not "
404 392 "been registered\n", ulp_type);
405 393 goto out_unlock;
··· 425 411
426 412 mutex_unlock(&cnic_lock);
427 413 synchronize_rcu();
414 + while ((atomic_read(&ulp_ops->ref_count) != 0) && (i < 20)) {
415 + msleep(100);
416 + i++;
417 + }
418 +
419 + if (atomic_read(&ulp_ops->ref_count) != 0)
420 + printk(KERN_WARNING PFX "%s: Failed waiting for ref count to go"
421 + " to zero.\n", dev->netdev->name);
428 422 return 0;
429 423
430 424 out_unlock:
··· 488 466 static int cnic_unregister_device(struct cnic_dev *dev, int ulp_type)
489 467 {
490 468 struct cnic_local *cp = dev->cnic_priv;
469 + int i = 0;
491 470
492 471 if (ulp_type >= MAX_CNIC_ULP_TYPE) {
493 472 printk(KERN_ERR PFX "cnic_unregister_device: Bad type %d\n",
··· 508 485 mutex_unlock(&cnic_lock);
509 486
510 487 synchronize_rcu();
488 +
489 + while (test_bit(ULP_F_CALL_PENDING, &cp->ulp_flags[ulp_type]) &&
490 + i < 20) {
491 + msleep(100);
492 + i++;
493 + }
494 + if (test_bit(ULP_F_CALL_PENDING, &cp->ulp_flags[ulp_type]))
495 + printk(KERN_WARNING PFX "%s: Failed waiting for ULP up call"
496 + " to complete.\n", dev->netdev->name);
511 497
512 498 return 0;
513 499 }
··· 1108 1076 if (cp->cnic_uinfo)
1109 1077 cnic_send_nlmsg(cp, ISCSI_KEVENT_IF_DOWN, NULL);
1110 1078
1111 - rcu_read_lock();
1112 1079 for (if_type = 0; if_type < MAX_CNIC_ULP_TYPE; if_type++) {
1113 1080 struct cnic_ulp_ops *ulp_ops;
1114 1081
1115 - ulp_ops = rcu_dereference(cp->ulp_ops[if_type]);
1116 - if (!ulp_ops)
1082 + mutex_lock(&cnic_lock);
1083 + ulp_ops = cp->ulp_ops[if_type];
1084 + if (!ulp_ops) {
1085 + mutex_unlock(&cnic_lock);
1117 1086 continue;
1087 + }
1088 + set_bit(ULP_F_CALL_PENDING, &cp->ulp_flags[if_type]);
1089 + mutex_unlock(&cnic_lock);
1118 1090
1119 1091 if (test_and_clear_bit(ULP_F_START, &cp->ulp_flags[if_type]))
1120 1092 ulp_ops->cnic_stop(cp->ulp_handle[if_type]);
1093 +
1094 + clear_bit(ULP_F_CALL_PENDING, &cp->ulp_flags[if_type]);
1121 1095 }
1122 - rcu_read_unlock();
1123 1096 }
1124 1097
1125 1098 static void cnic_ulp_start(struct cnic_dev *dev)
··· 1132 1095 struct cnic_local *cp = dev->cnic_priv;
1133 1096 int if_type;
1134 1097
1135 - rcu_read_lock();
1136 1098 for (if_type = 0; if_type < MAX_CNIC_ULP_TYPE; if_type++) {
1137 1099 struct cnic_ulp_ops *ulp_ops;
1138 1100
1139 - ulp_ops = rcu_dereference(cp->ulp_ops[if_type]);
1140 - if (!ulp_ops || !ulp_ops->cnic_start)
1101 + mutex_lock(&cnic_lock);
1102 + ulp_ops = cp->ulp_ops[if_type];
1103 + if (!ulp_ops || !ulp_ops->cnic_start) {
1104 + mutex_unlock(&cnic_lock);
1141 1105 continue;
1106 + }
1107 + set_bit(ULP_F_CALL_PENDING, &cp->ulp_flags[if_type]);
1108 + mutex_unlock(&cnic_lock);
1142 1109
1143 1110 if (!test_and_set_bit(ULP_F_START, &cp->ulp_flags[if_type]))
1144 1111 ulp_ops->cnic_start(cp->ulp_handle[if_type]);
1112 +
1113 + clear_bit(ULP_F_CALL_PENDING, &cp->ulp_flags[if_type]);
1145 1114 }
1146 - rcu_read_unlock();
1147 1115 }
1148 1116
1149 1117 static int cnic_ctl(void *data, struct cnic_ctl_info *info)
··· 1158 1116 switch (info->cmd) {
1159 1117 case CNIC_CTL_STOP_CMD:
1160 1118 cnic_hold(dev);
1161 - mutex_lock(&cnic_lock);
1162 1119
1163 1120 cnic_ulp_stop(dev);
1164 1121 cnic_stop_hw(dev);
1165 1122
1166 - mutex_unlock(&cnic_lock);
1167 1123 cnic_put(dev);
1168 1124 break;
1169 1125 case CNIC_CTL_START_CMD:
1170 1126 cnic_hold(dev);
1171 - mutex_lock(&cnic_lock);
1172 1127
1173 1128 if (!cnic_start_hw(dev))
1174 1129 cnic_ulp_start(dev);
1175 1130
1176 - mutex_unlock(&cnic_lock);
1177 1131 cnic_put(dev);
1178 1132 break;
1179 1133 default:
··· 1183 1145 int i;
1184 1146 struct cnic_local *cp = dev->cnic_priv;
1185 1147
1186 - rcu_read_lock();
1187 1148 for (i = 0; i < MAX_CNIC_ULP_TYPE_EXT; i++) {
1188 1149 struct cnic_ulp_ops *ulp_ops;
1189 1150
1190 - ulp_ops = rcu_dereference(cnic_ulp_tbl[i]);
1191 - if (!ulp_ops || !ulp_ops->cnic_init)
1151 + mutex_lock(&cnic_lock);
1152 + ulp_ops = cnic_ulp_tbl[i];
1153 + if (!ulp_ops || !ulp_ops->cnic_init) {
1154 + mutex_unlock(&cnic_lock);
1192 1155 continue;
1156 + }
1157 + ulp_get(ulp_ops);
1158 + mutex_unlock(&cnic_lock);
1193 1159
1194 1160 if (!test_and_set_bit(ULP_F_INIT, &cp->ulp_flags[i]))
1195 1161 ulp_ops->cnic_init(dev);
1196 1162
1163 + ulp_put(ulp_ops);
1197 1164 }
1198 - rcu_read_unlock();
1199 1165 }
1200 1166
1201 1167 static void cnic_ulp_exit(struct cnic_dev *dev)
··· 1207 1165 int i;
1208 1166 struct cnic_local *cp = dev->cnic_priv;
1209 1167
1210 - rcu_read_lock();
1211 1168 for (i = 0; i < MAX_CNIC_ULP_TYPE_EXT; i++) {
1212 1169 struct cnic_ulp_ops *ulp_ops;
1213 1170
1214 - ulp_ops = rcu_dereference(cnic_ulp_tbl[i]);
1215 - if (!ulp_ops || !ulp_ops->cnic_exit)
1171 + mutex_lock(&cnic_lock);
1172 + ulp_ops = cnic_ulp_tbl[i];
1173 + if (!ulp_ops || !ulp_ops->cnic_exit) {
1174 + mutex_unlock(&cnic_lock);
1216 1175 continue;
1176 + }
1177 + ulp_get(ulp_ops);
1178 + mutex_unlock(&cnic_lock);
1217 1179
1218 1180 if (test_and_clear_bit(ULP_F_INIT, &cp->ulp_flags[i]))
1219 1181 ulp_ops->cnic_exit(dev);
1220 1182
1183 + ulp_put(ulp_ops);
1221 1184 }
1222 - rcu_read_unlock();
1223 1185 }
1224 1186
1225 1187 static int cnic_cm_offload_pg(struct cnic_sock *csk)
··· 2439 2393 return 0;
2440 2394 }
2441 2395
2396 + static int cnic_register_netdev(struct cnic_dev *dev)
2397 + {
2398 + struct cnic_local *cp = dev->cnic_priv;
2399 + struct cnic_eth_dev *ethdev = cp->ethdev;
2400 + int err;
2401 +
2402 + if (!ethdev)
2403 + return -ENODEV;
2404 +
2405 + if (ethdev->drv_state & CNIC_DRV_STATE_REGD)
2406 + return 0;
2407 +
2408 + err = ethdev->drv_register_cnic(dev->netdev, cp->cnic_ops, dev);
2409 + if (err)
2410 + printk(KERN_ERR PFX "%s: register_cnic failed\n",
2411 + dev->netdev->name);
2412 +
2413 + return err;
2414 + }
2415 +
2416 + static void cnic_unregister_netdev(struct cnic_dev *dev)
2417 + {
2418 + struct cnic_local *cp = dev->cnic_priv;
2419 + struct cnic_eth_dev *ethdev = cp->ethdev;
2420 +
2421 + if (!ethdev)
2422 + return;
2423 +
2424 + ethdev->drv_unregister_cnic(dev->netdev);
2425 + }
2426 +
2442 2427 static int cnic_start_hw(struct cnic_dev *dev)
2443 2428 {
2444 2429 struct cnic_local *cp = dev->cnic_priv;
··· 2478 2401
2479 2402 if (test_bit(CNIC_F_CNIC_UP, &dev->flags))
2480 2403 return -EALREADY;
2481 -
2482 - err = ethdev->drv_register_cnic(dev->netdev, cp->cnic_ops, dev);
2483 - if (err) {
2484 - printk(KERN_ERR PFX "%s: register_cnic failed\n",
2485 - dev->netdev->name);
2486 - goto err2;
2487 - }
2488 2404
2489 2405 dev->regview = ethdev->io_base;
2490 2406 cp->chip_id = ethdev->chip_id;
··· 2508 2438 return 0;
2509 2439
2510 2440 err1:
2511 - ethdev->drv_unregister_cnic(dev->netdev);
2512 2441 cp->free_resc(dev);
2513 2442 pci_dev_put(dev->pcidev);
2514 - err2:
2515 2443 return err;
2516 2444 }
2517 2445
2518 2446 static void cnic_stop_bnx2_hw(struct cnic_dev *dev)
2519 2447 {
2520 - struct cnic_local *cp = dev->cnic_priv;
2521 - struct cnic_eth_dev *ethdev = cp->ethdev;
2522 -
2523 2448 cnic_disable_bnx2_int_sync(dev);
2524 2449
2525 2450 cnic_reg_wr_ind(dev, BNX2_CP_SCRATCH + 0x20, 0);
··· 2525 2460
2526 2461 cnic_setup_5709_context(dev, 0);
2527 2462 cnic_free_irq(dev);
2528 -
2529 - ethdev->drv_unregister_cnic(dev->netdev);
2530 2463
2531 2464 cnic_free_resc(dev);
2532 2465 }
··· 2606 2543 probe = symbol_get(bnx2_cnic_probe);
2607 2544 if (probe) {
2608 2545 ethdev = (*probe)(dev);
2609 - symbol_put_addr(probe);
2546 + symbol_put(bnx2_cnic_probe);
2610 2547 }
2611 2548 if (!ethdev)
2612 2549 return NULL;
··· 2709 2646 else if (event == NETDEV_UNREGISTER)
2710 2647 cnic_ulp_exit(dev);
2711 2648 else if (event == NETDEV_UP) {
2712 - mutex_lock(&cnic_lock);
2649 + if (cnic_register_netdev(dev) != 0) {
2650 + cnic_put(dev);
2651 + goto done;
2652 + }
2713 2653 if (!cnic_start_hw(dev))
2714 2654 cnic_ulp_start(dev);
2715 - mutex_unlock(&cnic_lock);
2716 2655 }
2717 2656
2718 2657 rcu_read_lock();
··· 2733 2668 rcu_read_unlock();
2734 2669
2735 2670 if (event == NETDEV_GOING_DOWN) {
2736 - mutex_lock(&cnic_lock);
2737 2671 cnic_ulp_stop(dev);
2738 2672 cnic_stop_hw(dev);
2739 - mutex_unlock(&cnic_lock);
2673 + cnic_unregister_netdev(dev);
2740 2674 } else if (event == NETDEV_UNREGISTER) {
2741 2675 write_lock(&cnic_dev_lock);
2742 2676 list_del_init(&dev->list);
··· 2767 2703 }
2768 2704
2769 2705 cnic_ulp_exit(dev);
2706 + cnic_unregister_netdev(dev);
2770 2707 list_del_init(&dev->list);
2771 2708 cnic_free_dev(dev);
2772 2709 }
+1
drivers/net/cnic.h
··· 176 176 unsigned long ulp_flags[MAX_CNIC_ULP_TYPE]; 177 177 #define ULP_F_INIT 0 178 178 #define ULP_F_START 1 179 + #define ULP_F_CALL_PENDING 2 179 180 struct cnic_ulp_ops *ulp_ops[MAX_CNIC_ULP_TYPE]; 180 181 181 182 /* protected by ulp_lock */
+1
drivers/net/cnic_if.h
··· 290 290 void (*iscsi_nl_send_msg)(struct cnic_dev *dev, u32 msg_type, 291 291 char *data, u16 data_size); 292 292 struct module *owner; 293 + atomic_t ref_count; 293 294 }; 294 295 295 296 extern int cnic_register_driver(int ulp_type, struct cnic_ulp_ops *ulp_ops);
+1 -1
drivers/net/e100.c
··· 1899 1899 nic->ru_running = RU_SUSPENDED; 1900 1900 pci_dma_sync_single_for_device(nic->pdev, rx->dma_addr, 1901 1901 sizeof(struct rfd), 1902 - PCI_DMA_BIDIRECTIONAL); 1902 + PCI_DMA_FROMDEVICE); 1903 1903 return -ENODATA; 1904 1904 } 1905 1905
+46 -54
drivers/net/e1000e/ich8lan.c
··· 338 338 {
339 339 struct e1000_nvm_info *nvm = &hw->nvm;
340 340 struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
341 - union ich8_hws_flash_status hsfsts;
342 - u32 gfpreg;
343 - u32 sector_base_addr;
344 - u32 sector_end_addr;
341 + u32 gfpreg, sector_base_addr, sector_end_addr;
345 342 u16 i;
346 343
347 344 /* Can't read flash registers if the register set isn't mapped. */
··· 371 374 nvm->flash_bank_size /= 2;
372 375 /* Adjust to word count */
373 376 nvm->flash_bank_size /= sizeof(u16);
374 -
375 - /*
376 - * Make sure the flash bank size does not overwrite the 4k
377 - * sector ranges. We may have 64k allotted to us but we only care
378 - * about the first 2 4k sectors. Therefore, if we have anything less
379 - * than 64k set in the HSFSTS register, we will reduce the bank size
380 - * down to 4k and let the rest remain unused. If berasesz == 3, then
381 - * we are working in 64k mode. Otherwise we are not.
382 - */
383 - if (nvm->flash_bank_size > E1000_ICH8_SHADOW_RAM_WORDS) {
384 - hsfsts.regval = er16flash(ICH_FLASH_HSFSTS);
385 - if (hsfsts.hsf_status.berasesz != 3)
386 - nvm->flash_bank_size = E1000_ICH8_SHADOW_RAM_WORDS;
387 - }
388 377
389 378 nvm->word_size = E1000_ICH8_SHADOW_RAM_WORDS;
390 379
··· 577 594 **/
578 595 static s32 e1000_acquire_swflag_ich8lan(struct e1000_hw *hw)
579 596 {
580 - u32 extcnf_ctrl;
581 - u32 timeout = PHY_CFG_TIMEOUT;
597 + u32 extcnf_ctrl, timeout = PHY_CFG_TIMEOUT;
598 + s32 ret_val = 0;
582 599
583 600 might_sleep();
584 601
··· 586 603
587 604 while (timeout) {
588 605 extcnf_ctrl = er32(EXTCNF_CTRL);
606 + if (!(extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG))
607 + break;
589 608
590 - if (!(extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG)) {
591 - extcnf_ctrl |= E1000_EXTCNF_CTRL_SWFLAG;
592 - ew32(EXTCNF_CTRL, extcnf_ctrl);
593 -
594 - extcnf_ctrl = er32(EXTCNF_CTRL);
595 - if (extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG)
596 - break;
597 - }
598 609 mdelay(1);
599 610 timeout--;
600 611 }
601 612
602 613 if (!timeout) {
603 - hw_dbg(hw, "FW or HW has locked the resource for too long.\n");
604 - extcnf_ctrl &= ~E1000_EXTCNF_CTRL_SWFLAG;
605 - ew32(EXTCNF_CTRL, extcnf_ctrl);
606 - mutex_unlock(&nvm_mutex);
607 - return -E1000_ERR_CONFIG;
614 + hw_dbg(hw, "SW/FW/HW has locked the resource for too long.\n");
615 + ret_val = -E1000_ERR_CONFIG;
616 + goto out;
608 617 }
609 618
610 - return 0;
619 + timeout = PHY_CFG_TIMEOUT * 2;
620 +
621 + extcnf_ctrl |= E1000_EXTCNF_CTRL_SWFLAG;
622 + ew32(EXTCNF_CTRL, extcnf_ctrl);
623 +
624 + while (timeout) {
625 + extcnf_ctrl = er32(EXTCNF_CTRL);
626 + if (extcnf_ctrl & E1000_EXTCNF_CTRL_SWFLAG)
627 + break;
628 +
629 + mdelay(1);
630 + timeout--;
631 + }
632 +
633 + if (!timeout) {
634 + hw_dbg(hw, "Failed to acquire the semaphore.\n");
635 + extcnf_ctrl &= ~E1000_EXTCNF_CTRL_SWFLAG;
636 + ew32(EXTCNF_CTRL, extcnf_ctrl);
637 + ret_val = -E1000_ERR_CONFIG;
638 + goto out;
639 + }
640 +
641 + out:
642 + if (ret_val)
643 + mutex_unlock(&nvm_mutex);
644 +
645 + return ret_val;
611 646 }
612 647
613 648 /**
··· 1307 1306 struct e1000_nvm_info *nvm = &hw->nvm;
1308 1307 struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
1309 1308 u32 act_offset;
1310 - s32 ret_val;
1309 + s32 ret_val = 0;
1311 1310 u32 bank = 0;
1312 1311 u16 i, word;
1313 1312
··· 1322 1321 goto out;
1323 1322
1324 1323 ret_val = e1000_valid_nvm_bank_detect_ich8lan(hw, &bank);
1325 - if (ret_val)
1326 - goto release;
1324 + if (ret_val) {
1325 + hw_dbg(hw, "Could not detect valid bank, assuming bank 0\n");
1326 + bank = 0;
1327 + }
1327 1328
1328 1329 act_offset = (bank) ? nvm->flash_bank_size : 0;
1329 1330 act_offset += offset;
1330 1331
1332 + ret_val = 0;
1331 1333 for (i = 0; i < words; i++) {
1332 1334 if ((dev_spec->shadow_ram) &&
1333 1335 (dev_spec->shadow_ram[offset+i].modified)) {
··· 1345 1341 }
1346 1342 }
1347 1343
1348 - release:
1349 1344 e1000_release_swflag_ich8lan(hw);
1350 1345
1351 1346 out:
··· 1595 1592 {
1596 1593 struct e1000_nvm_info *nvm = &hw->nvm;
1597 1594 struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
1598 - s32 ret_val;
1599 1595 u16 i;
1600 1596
1601 1597 if ((offset >= nvm->word_size) || (words > nvm->word_size - offset) ||
··· 1603 1601 return -E1000_ERR_NVM;
1604 1602 }
1605 1603
1606 - ret_val = e1000_acquire_swflag_ich8lan(hw);
1607 - if (ret_val)
1608 - return ret_val;
1609 -
1610 1604 for (i = 0; i < words; i++) {
1611 1605 dev_spec->shadow_ram[offset+i].modified = 1;
1612 1606 dev_spec->shadow_ram[offset+i].value = data[i];
1613 1607 }
1614 -
1615 - e1000_release_swflag_ich8lan(hw);
1616 1608
1617 1609 return 0;
1618 1610 }
··· 1648 1652 */
1649 1653 ret_val = e1000_valid_nvm_bank_detect_ich8lan(hw, &bank);
1650 1654 if (ret_val) {
1651 - e1000_release_swflag_ich8lan(hw);
1652 - goto out;
1655 + hw_dbg(hw, "Could not detect valid bank, assuming bank 0\n");
1656 + bank = 0;
1653 1657 }
1654 1658
1655 1659 if (bank == 0) {
··· 2035 2039 iteration = 1;
2036 2040 break;
2037 2041 case 2:
2038 - if (hw->mac.type == e1000_ich9lan) {
2039 - sector_size = ICH_FLASH_SEG_SIZE_8K;
2040 - iteration = flash_bank_size / ICH_FLASH_SEG_SIZE_8K;
2041 - } else {
2042 - return -E1000_ERR_NVM;
2043 - }
2042 + sector_size = ICH_FLASH_SEG_SIZE_8K;
2043 + iteration = 1;
2044 2044 break;
2045 2045 case 3:
2046 2046 sector_size = ICH_FLASH_SEG_SIZE_64K;
··· 2048 2056
2049 2057 /* Start with the base address, then add the sector offset. */
2050 2058 flash_linear_addr = hw->nvm.flash_base_addr;
2051 - flash_linear_addr += (bank) ? (sector_size * iteration) : 0;
2059 + flash_linear_addr += (bank) ? flash_bank_size : 0;
2052 2060
2053 2061 for (j = 0; j < iteration ; j++) {
2054 2062 do {
+11 -11
drivers/net/e1000e/netdev.c
··· 4538 4538 /* Allow time for pending master requests to run */ 4539 4539 e1000e_disable_pcie_master(&adapter->hw); 4540 4540 4541 - if ((adapter->flags2 & FLAG2_HAS_PHY_WAKEUP) && 4542 - !(hw->mac.ops.check_mng_mode(hw))) { 4541 + if (adapter->flags2 & FLAG2_HAS_PHY_WAKEUP) { 4543 4542 /* enable wakeup by the PHY */ 4544 4543 retval = e1000_init_phy_wakeup(adapter, wufc); 4545 4544 if (retval) ··· 4556 4557 *enable_wake = !!wufc; 4557 4558 4558 4559 /* make sure adapter isn't asleep if manageability is enabled */ 4559 - if (adapter->flags & FLAG_MNG_PT_ENABLED) 4560 + if ((adapter->flags & FLAG_MNG_PT_ENABLED) || 4561 + (hw->mac.ops.check_mng_mode(hw))) 4560 4562 *enable_wake = true; 4561 4563 4562 4564 if (adapter->hw.phy.type == e1000_phy_igp_3) ··· 4668 4668 dev_err(&pdev->dev, 4669 4669 "Cannot enable PCI device from suspend\n"); 4670 4670 return err; 4671 - } 4672 - 4673 - /* AER (Advanced Error Reporting) hooks */ 4674 - err = pci_enable_pcie_error_reporting(pdev); 4675 - if (err) { 4676 - dev_err(&pdev->dev, "pci_enable_pcie_error_reporting failed " 4677 - "0x%x\n", err); 4678 - /* non-fatal, continue */ 4679 4671 } 4680 4672 4681 4673 pci_set_master(pdev); ··· 4981 4989 e1000e_driver_name); 4982 4990 if (err) 4983 4991 goto err_pci_reg; 4992 + 4993 + /* AER (Advanced Error Reporting) hooks */ 4994 + err = pci_enable_pcie_error_reporting(pdev); 4995 + if (err) { 4996 + dev_err(&pdev->dev, "pci_enable_pcie_error_reporting failed " 4997 + "0x%x\n", err); 4998 + /* non-fatal, continue */ 4999 + } 4984 5000 4985 5001 pci_set_master(pdev); 4986 5002 /* PCI config space info */
+5 -4
drivers/net/fec.c
··· 285 285 { 286 286 struct fec_enet_private *fep = netdev_priv(dev); 287 287 struct bufdesc *bdp; 288 + void *bufaddr; 288 289 unsigned short status; 289 290 unsigned long flags; 290 291 ··· 313 312 status &= ~BD_ENET_TX_STATS; 314 313 315 314 /* Set buffer length and buffer pointer */ 316 - bdp->cbd_bufaddr = __pa(skb->data); 315 + bufaddr = skb->data; 317 316 bdp->cbd_datlen = skb->len; 318 317 319 318 /* ··· 321 320 * 4-byte boundaries. Use bounce buffers to copy data 322 321 * and get it aligned. Ugh. 323 322 */ 324 - if (bdp->cbd_bufaddr & FEC_ALIGNMENT) { 323 + if (((unsigned long) bufaddr) & FEC_ALIGNMENT) { 325 324 unsigned int index; 326 325 index = bdp - fep->tx_bd_base; 327 326 memcpy(fep->tx_bounce[index], (void *)skb->data, skb->len); 328 - bdp->cbd_bufaddr = __pa(fep->tx_bounce[index]); 327 + bufaddr = fep->tx_bounce[index]; 329 328 } 330 329 331 330 /* Save skb pointer */ ··· 337 336 /* Push the data cache so the CPM does not get stale memory 338 337 * data. 339 338 */ 340 - bdp->cbd_bufaddr = dma_map_single(&dev->dev, skb->data, 339 + bdp->cbd_bufaddr = dma_map_single(&dev->dev, bufaddr, 341 340 FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE); 342 341 343 342 /* Send it on its way. Tell FEC it's ready, interrupt when done,
+3 -2
drivers/net/fec_mpc52xx.c
··· 309 309 { 310 310 struct mpc52xx_fec_priv *priv = netdev_priv(dev); 311 311 struct bcom_fec_bd *bd; 312 + unsigned long flags; 312 313 313 314 if (bcom_queue_full(priv->tx_dmatsk)) { 314 315 if (net_ratelimit()) ··· 317 316 return NETDEV_TX_BUSY; 318 317 } 319 318 320 - spin_lock_irq(&priv->lock); 319 + spin_lock_irqsave(&priv->lock, flags); 321 320 dev->trans_start = jiffies; 322 321 323 322 bd = (struct bcom_fec_bd *) ··· 333 332 netif_stop_queue(dev); 334 333 } 335 334 336 - spin_unlock_irq(&priv->lock); 335 + spin_unlock_irqrestore(&priv->lock, flags); 337 336 338 337 return NETDEV_TX_OK; 339 338 }
+11 -2
drivers/net/gianfar.c
··· 491 491 492 492 dev_set_drvdata(&ofdev->dev, NULL); 493 493 494 + unregister_netdev(dev); 494 495 iounmap(priv->regs); 495 496 free_netdev(priv->ndev); 496 497 ··· 937 936 struct gfar __iomem *regs = priv->regs; 938 937 int err = 0; 939 938 u32 rctrl = 0; 939 + u32 tctrl = 0; 940 940 u32 attrs = 0; 941 941 942 942 gfar_write(&regs->imask, IMASK_INIT_CLEAR); ··· 1113 1111 rctrl |= RCTRL_PADDING(priv->padding); 1114 1112 } 1115 1113 1114 + /* keep vlan related bits if it's enabled */ 1115 + if (priv->vlgrp) { 1116 + rctrl |= RCTRL_VLEX | RCTRL_PRSDEP_INIT; 1117 + tctrl |= TCTRL_VLINS; 1118 + } 1119 + 1116 1120 /* Init rctrl based on our settings */ 1117 1121 gfar_write(&priv->regs->rctrl, rctrl); 1118 1122 1119 1123 if (dev->features & NETIF_F_IP_CSUM) 1120 - gfar_write(&priv->regs->tctrl, TCTRL_INIT_CSUM); 1124 + tctrl |= TCTRL_INIT_CSUM; 1125 + 1126 + gfar_write(&priv->regs->tctrl, tctrl); 1121 1127 1122 1128 /* Set the extraction length and index */ 1123 1129 attrs = ATTRELI_EL(priv->rx_stash_size) | ··· 1460 1450 1461 1451 /* Enable VLAN tag extraction */ 1462 1452 tempval = gfar_read(&priv->regs->rctrl); 1463 - tempval |= RCTRL_VLEX; 1464 1453 tempval |= (RCTRL_VLEX | RCTRL_PRSDEP_INIT); 1465 1454 gfar_write(&priv->regs->rctrl, tempval); 1466 1455 } else {
+2
drivers/net/ibm_newemac/core.c
··· 1305 1305 1306 1306 free_irq(dev->emac_irq, dev); 1307 1307 1308 + netif_carrier_off(ndev); 1309 + 1308 1310 return 0; 1309 1311 } 1310 1312
-4
drivers/net/irda/au1k_ir.c
··· 23 23 #include <linux/init.h> 24 24 #include <linux/errno.h> 25 25 #include <linux/netdevice.h> 26 - #include <linux/etherdevice.h> 27 26 #include <linux/slab.h> 28 27 #include <linux/rtnetlink.h> 29 28 #include <linux/interrupt.h> ··· 204 205 .ndo_start_xmit = au1k_irda_hard_xmit, 205 206 .ndo_tx_timeout = au1k_tx_timeout, 206 207 .ndo_do_ioctl = au1k_irda_ioctl, 207 - .ndo_change_mtu = eth_change_mtu, 208 - .ndo_validate_addr = eth_validate_addr, 209 - .ndo_set_mac_address = eth_mac_addr, 210 208 }; 211 209 212 210 static int au1k_irda_net_init(struct net_device *dev)
+1 -3
drivers/net/irda/pxaficp_ir.c
··· 803 803 .ndo_stop = pxa_irda_stop, 804 804 .ndo_start_xmit = pxa_irda_hard_xmit, 805 805 .ndo_do_ioctl = pxa_irda_ioctl, 806 - .ndo_change_mtu = eth_change_mtu, 807 - .ndo_validate_addr = eth_validate_addr, 808 - .ndo_set_mac_address = eth_mac_addr, 809 806 }; 810 807 811 808 static int pxa_irda_probe(struct platform_device *pdev) ··· 827 830 if (!dev) 828 831 goto err_mem_3; 829 832 833 + SET_NETDEV_DEV(dev, &pdev->dev); 830 834 si = netdev_priv(dev); 831 835 si->dev = &pdev->dev; 832 836 si->pdata = pdev->dev.platform_data;
-4
drivers/net/irda/sa1100_ir.c
··· 24 24 #include <linux/init.h> 25 25 #include <linux/errno.h> 26 26 #include <linux/netdevice.h> 27 - #include <linux/etherdevice.h> 28 27 #include <linux/slab.h> 29 28 #include <linux/rtnetlink.h> 30 29 #include <linux/interrupt.h> ··· 880 881 .ndo_stop = sa1100_irda_stop, 881 882 .ndo_start_xmit = sa1100_irda_hard_xmit, 882 883 .ndo_do_ioctl = sa1100_irda_ioctl, 883 - .ndo_change_mtu = eth_change_mtu, 884 - .ndo_validate_addr = eth_validate_addr, 885 - .ndo_set_mac_address = eth_mac_addr, 886 884 }; 887 885 888 886 static int sa1100_irda_probe(struct platform_device *pdev)
+1 -1
drivers/net/irda/w83977af_ir.c
··· 115 115 116 116 IRDA_DEBUG(0, "%s()\n", __func__ ); 117 117 118 - for (i=0; (io[i] < 2000) && (i < ARRAY_SIZE(dev_self)); i++) { 118 + for (i=0; i < ARRAY_SIZE(dev_self) && io[i] < 2000; i++) { 119 119 if (w83977af_open(i, io[i], irq[i], dma[i]) == 0) 120 120 return 0; 121 121 }
+2
drivers/net/ixgbe/ixgbe.h
··· 136 136 137 137 u8 queue_index; /* needed for multiqueue queue management */ 138 138 139 + #define IXGBE_RING_RX_PS_ENABLED (u8)(1) 140 + u8 flags; /* per ring feature flags */ 139 141 u16 head; 140 142 u16 tail; 141 143
+19 -8
drivers/net/ixgbe/ixgbe_ethtool.c
··· 1948 1948 struct ethtool_coalesce *ec) 1949 1949 { 1950 1950 struct ixgbe_adapter *adapter = netdev_priv(netdev); 1951 + struct ixgbe_q_vector *q_vector; 1951 1952 int i; 1952 1953 1953 1954 if (ec->tx_max_coalesced_frames_irq) ··· 1983 1982 adapter->itr_setting = 0; 1984 1983 } 1985 1984 1986 - for (i = 0; i < adapter->num_msix_vectors - NON_Q_VECTORS; i++) { 1987 - struct ixgbe_q_vector *q_vector = adapter->q_vector[i]; 1988 - if (q_vector->txr_count && !q_vector->rxr_count) 1989 - /* tx vector gets half the rate */ 1990 - q_vector->eitr = (adapter->eitr_param >> 1); 1991 - else 1992 - /* rx only or mixed */ 1993 - q_vector->eitr = adapter->eitr_param; 1985 + /* MSI/MSIx Interrupt Mode */ 1986 + if (adapter->flags & 1987 + (IXGBE_FLAG_MSIX_ENABLED | IXGBE_FLAG_MSI_ENABLED)) { 1988 + int num_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; 1989 + for (i = 0; i < num_vectors; i++) { 1990 + q_vector = adapter->q_vector[i]; 1991 + if (q_vector->txr_count && !q_vector->rxr_count) 1992 + /* tx vector gets half the rate */ 1993 + q_vector->eitr = (adapter->eitr_param >> 1); 1994 + else 1995 + /* rx only or mixed */ 1996 + q_vector->eitr = adapter->eitr_param; 1997 + ixgbe_write_eitr(q_vector); 1998 + } 1999 + /* Legacy Interrupt Mode */ 2000 + } else { 2001 + q_vector = adapter->q_vector[0]; 2002 + q_vector->eitr = adapter->eitr_param; 1994 2003 ixgbe_write_eitr(q_vector); 1995 2004 } 1996 2005
+1 -1
drivers/net/ixgbe/ixgbe_fcoe.c
··· 336 336 /* return 0 to bypass going to ULD for DDPed data */ 337 337 if (fcstat == IXGBE_RXDADV_STAT_FCSTAT_DDP) 338 338 rc = 0; 339 - else 339 + else if (ddp->len) 340 340 rc = ddp->len; 341 341 } 342 342
+33 -57
drivers/net/ixgbe/ixgbe_main.c
··· 492 492 493 493 skb_record_rx_queue(skb, ring->queue_index); 494 494 if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL)) { 495 - if (adapter->vlgrp && is_vlan && (tag != 0)) 495 + if (adapter->vlgrp && is_vlan && (tag & VLAN_VID_MASK)) 496 496 vlan_gro_receive(napi, adapter->vlgrp, tag, skb); 497 497 else 498 498 napi_gro_receive(napi, skb); 499 499 } else { 500 - if (adapter->vlgrp && is_vlan && (tag != 0)) 500 + if (adapter->vlgrp && is_vlan && (tag & VLAN_VID_MASK)) 501 501 vlan_hwaccel_rx(skb, adapter->vlgrp, tag); 502 502 else 503 503 netif_rx(skb); ··· 585 585 rx_desc = IXGBE_RX_DESC_ADV(*rx_ring, i); 586 586 587 587 if (!bi->page_dma && 588 - (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED)) { 588 + (rx_ring->flags & IXGBE_RING_RX_PS_ENABLED)) { 589 589 if (!bi->page) { 590 590 bi->page = alloc_page(GFP_ATOMIC); 591 591 if (!bi->page) { ··· 629 629 } 630 630 /* Refresh the desc even if buffer_addrs didn't change because 631 631 * each write-back erases this info. */ 632 - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { 632 + if (rx_ring->flags & IXGBE_RING_RX_PS_ENABLED) { 633 633 rx_desc->read.pkt_addr = cpu_to_le64(bi->page_dma); 634 634 rx_desc->read.hdr_addr = cpu_to_le64(bi->dma); 635 635 } else { ··· 726 726 break; 727 727 (*work_done)++; 728 728 729 - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { 729 + if (rx_ring->flags & IXGBE_RING_RX_PS_ENABLED) { 730 730 hdr_info = le16_to_cpu(ixgbe_get_hdr_info(rx_desc)); 731 731 len = (hdr_info & IXGBE_RXDADV_HDRBUFLEN_MASK) >> 732 732 IXGBE_RXDADV_HDRBUFLEN_SHIFT; ··· 798 798 rx_ring->stats.packets++; 799 799 rx_ring->stats.bytes += skb->len; 800 800 } else { 801 - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { 801 + if (rx_ring->flags & IXGBE_RING_RX_PS_ENABLED) { 802 802 rx_buffer_info->skb = next_buffer->skb; 803 803 rx_buffer_info->dma = next_buffer->dma; 804 804 next_buffer->skb = skb; ··· 1898 1898 1899 1899 #define IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT 2 1900 1900 1901 - static void 
ixgbe_configure_srrctl(struct ixgbe_adapter *adapter, int index) 1901 + static void ixgbe_configure_srrctl(struct ixgbe_adapter *adapter, 1902 + struct ixgbe_ring *rx_ring) 1902 1903 { 1903 - struct ixgbe_ring *rx_ring; 1904 1904 u32 srrctl; 1905 - int queue0 = 0; 1906 - unsigned long mask; 1905 + int index; 1907 1906 struct ixgbe_ring_feature *feature = adapter->ring_feature; 1908 1907 1909 - if (adapter->hw.mac.type == ixgbe_mac_82599EB) { 1910 - if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) { 1911 - int dcb_i = feature[RING_F_DCB].indices; 1912 - if (dcb_i == 8) 1913 - queue0 = index >> 4; 1914 - else if (dcb_i == 4) 1915 - queue0 = index >> 5; 1916 - else 1917 - dev_err(&adapter->pdev->dev, "Invalid DCB " 1918 - "configuration\n"); 1919 - #ifdef IXGBE_FCOE 1920 - if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { 1921 - struct ixgbe_ring_feature *f; 1922 - 1923 - rx_ring = &adapter->rx_ring[queue0]; 1924 - f = &adapter->ring_feature[RING_F_FCOE]; 1925 - if ((queue0 == 0) && (index > rx_ring->reg_idx)) 1926 - queue0 = f->mask + index - 1927 - rx_ring->reg_idx - 1; 1928 - } 1929 - #endif /* IXGBE_FCOE */ 1930 - } else { 1931 - queue0 = index; 1932 - } 1933 - } else { 1908 + index = rx_ring->reg_idx; 1909 + if (adapter->hw.mac.type == ixgbe_mac_82598EB) { 1910 + unsigned long mask; 1934 1911 mask = (unsigned long) feature[RING_F_RSS].mask; 1935 - queue0 = index & mask; 1936 1912 index = index & mask; 1937 1913 } 1938 - 1939 - rx_ring = &adapter->rx_ring[queue0]; 1940 - 1941 1914 srrctl = IXGBE_READ_REG(&adapter->hw, IXGBE_SRRCTL(index)); 1942 1915 1943 1916 srrctl &= ~IXGBE_SRRCTL_BSIZEHDR_MASK; ··· 1919 1946 srrctl |= (IXGBE_RX_HDR_SIZE << IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT) & 1920 1947 IXGBE_SRRCTL_BSIZEHDR_MASK; 1921 1948 1922 - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { 1949 + if (rx_ring->flags & IXGBE_RING_RX_PS_ENABLED) { 1923 1950 #if (PAGE_SIZE / 2) > IXGBE_MAX_RXBUFFER 1924 1951 srrctl |= IXGBE_MAX_RXBUFFER >> IXGBE_SRRCTL_BSIZEPKT_SHIFT; 1925 1952 #else 
··· 1975 2002 { 1976 2003 u64 rdba; 1977 2004 struct ixgbe_hw *hw = &adapter->hw; 2005 + struct ixgbe_ring *rx_ring; 1978 2006 struct net_device *netdev = adapter->netdev; 1979 2007 int max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN; 1980 2008 int i, j; ··· 1991 2017 1992 2018 /* Decide whether to use packet split mode or not */ 1993 2019 adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED; 1994 - 1995 - #ifdef IXGBE_FCOE 1996 - if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) 1997 - adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED; 1998 - #endif /* IXGBE_FCOE */ 1999 2020 2000 2021 /* Set the RX buffer length according to the mode */ 2001 2022 if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { ··· 2039 2070 * the Base and Length of the Rx Descriptor Ring 2040 2071 */ 2041 2072 for (i = 0; i < adapter->num_rx_queues; i++) { 2042 - rdba = adapter->rx_ring[i].dma; 2043 - j = adapter->rx_ring[i].reg_idx; 2073 + rx_ring = &adapter->rx_ring[i]; 2074 + rdba = rx_ring->dma; 2075 + j = rx_ring->reg_idx; 2044 2076 IXGBE_WRITE_REG(hw, IXGBE_RDBAL(j), (rdba & DMA_BIT_MASK(32))); 2045 2077 IXGBE_WRITE_REG(hw, IXGBE_RDBAH(j), (rdba >> 32)); 2046 2078 IXGBE_WRITE_REG(hw, IXGBE_RDLEN(j), rdlen); 2047 2079 IXGBE_WRITE_REG(hw, IXGBE_RDH(j), 0); 2048 2080 IXGBE_WRITE_REG(hw, IXGBE_RDT(j), 0); 2049 - adapter->rx_ring[i].head = IXGBE_RDH(j); 2050 - adapter->rx_ring[i].tail = IXGBE_RDT(j); 2051 - adapter->rx_ring[i].rx_buf_len = rx_buf_len; 2081 + rx_ring->head = IXGBE_RDH(j); 2082 + rx_ring->tail = IXGBE_RDT(j); 2083 + rx_ring->rx_buf_len = rx_buf_len; 2084 + 2085 + if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) 2086 + rx_ring->flags |= IXGBE_RING_RX_PS_ENABLED; 2052 2087 2053 2088 #ifdef IXGBE_FCOE 2054 2089 if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { 2055 2090 struct ixgbe_ring_feature *f; 2056 2091 f = &adapter->ring_feature[RING_F_FCOE]; 2057 - if ((rx_buf_len < IXGBE_FCOE_JUMBO_FRAME_SIZE) && 2058 - (i >= f->mask) && (i < f->mask + f->indices)) 2059 - adapter->rx_ring[i].rx_buf_len = 2060 - 
IXGBE_FCOE_JUMBO_FRAME_SIZE; 2092 + if ((i >= f->mask) && (i < f->mask + f->indices)) { 2093 + rx_ring->flags &= ~IXGBE_RING_RX_PS_ENABLED; 2094 + if (rx_buf_len < IXGBE_FCOE_JUMBO_FRAME_SIZE) 2095 + rx_ring->rx_buf_len = 2096 + IXGBE_FCOE_JUMBO_FRAME_SIZE; 2097 + } 2061 2098 } 2062 2099 2063 2100 #endif /* IXGBE_FCOE */ 2064 - ixgbe_configure_srrctl(adapter, j); 2101 + ixgbe_configure_srrctl(adapter, rx_ring); 2065 2102 } 2066 2103 2067 2104 if (hw->mac.type == ixgbe_mac_82598EB) { ··· 2143 2168 if (adapter->flags2 & IXGBE_FLAG2_RSC_ENABLED) { 2144 2169 /* Enable 82599 HW-RSC */ 2145 2170 for (i = 0; i < adapter->num_rx_queues; i++) { 2146 - j = adapter->rx_ring[i].reg_idx; 2171 + rx_ring = &adapter->rx_ring[i]; 2172 + j = rx_ring->reg_idx; 2147 2173 rscctrl = IXGBE_READ_REG(hw, IXGBE_RSCCTL(j)); 2148 2174 rscctrl |= IXGBE_RSCCTL_RSCEN; 2149 2175 /* ··· 2152 2176 * total size of max desc * buf_len is not greater 2153 2177 * than 65535 2154 2178 */ 2155 - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { 2179 + if (rx_ring->flags & IXGBE_RING_RX_PS_ENABLED) { 2156 2180 #if (MAX_SKB_FRAGS > 16) 2157 2181 rscctrl |= IXGBE_RSCCTL_MAXDESC_16; 2158 2182 #elif (MAX_SKB_FRAGS > 8)
+3 -2
drivers/net/ixp2000/ixpdev.c
··· 41 41 struct ixpdev_priv *ip = netdev_priv(dev); 42 42 struct ixpdev_tx_desc *desc; 43 43 int entry; 44 + unsigned long flags; 44 45 45 46 if (unlikely(skb->len > PAGE_SIZE)) { 46 47 /* @@@ Count drops. */ ··· 64 63 65 64 dev->trans_start = jiffies; 66 65 67 - local_irq_disable(); 66 + local_irq_save(flags); 68 67 ip->tx_queue_entries++; 69 68 if (ip->tx_queue_entries == TX_BUF_COUNT_PER_CHAN) 70 69 netif_stop_queue(dev); 71 - local_irq_enable(); 70 + local_irq_restore(flags); 72 71 73 72 return 0; 74 73 }
+4 -3
drivers/net/macb.c
··· 620 620 dma_addr_t mapping; 621 621 unsigned int len, entry; 622 622 u32 ctrl; 623 + unsigned long flags; 623 624 624 625 #ifdef DEBUG 625 626 int i; ··· 636 635 #endif 637 636 638 637 len = skb->len; 639 - spin_lock_irq(&bp->lock); 638 + spin_lock_irqsave(&bp->lock, flags); 640 639 641 640 /* This is a hard error, log it. */ 642 641 if (TX_BUFFS_AVAIL(bp) < 1) { 643 642 netif_stop_queue(dev); 644 - spin_unlock_irq(&bp->lock); 643 + spin_unlock_irqrestore(&bp->lock, flags); 645 644 dev_err(&bp->pdev->dev, 646 645 "BUG! Tx Ring full when queue awake!\n"); 647 646 dev_dbg(&bp->pdev->dev, "tx_head = %u, tx_tail = %u\n", ··· 675 674 if (TX_BUFFS_AVAIL(bp) < 1) 676 675 netif_stop_queue(dev); 677 676 678 - spin_unlock_irq(&bp->lock); 677 + spin_unlock_irqrestore(&bp->lock, flags); 679 678 680 679 dev->trans_start = jiffies; 681 680
+3 -2
drivers/net/mlx4/en_rx.c
··· 506 506 PCI_DMA_FROMDEVICE); 507 507 } 508 508 /* Adjust size of last fragment to match actual length */ 509 - skb_frags_rx[nr - 1].size = length - 510 - priv->frag_info[nr - 1].frag_prefix_size; 509 + if (nr > 0) 510 + skb_frags_rx[nr - 1].size = length - 511 + priv->frag_info[nr - 1].frag_prefix_size; 511 512 return nr; 512 513 513 514 fail:
+3 -2
drivers/net/mlx4/en_tx.c
··· 437 437 { 438 438 struct mlx4_en_cq *cq = &priv->tx_cq[tx_ind]; 439 439 struct mlx4_en_tx_ring *ring = &priv->tx_ring[tx_ind]; 440 + unsigned long flags; 440 441 441 442 /* If we don't have a pending timer, set one up to catch our recent 442 443 post in case the interface becomes idle */ ··· 446 445 447 446 /* Poll the CQ every mlx4_en_TX_MODER_POLL packets */ 448 447 if ((++ring->poll_cnt & (MLX4_EN_TX_POLL_MODER - 1)) == 0) 449 - if (spin_trylock_irq(&ring->comp_lock)) { 448 + if (spin_trylock_irqsave(&ring->comp_lock, flags)) { 450 449 mlx4_en_process_tx_cq(priv->dev, cq); 451 - spin_unlock_irq(&ring->comp_lock); 450 + spin_unlock_irqrestore(&ring->comp_lock, flags); 452 451 } 453 452 } 454 453
+1 -1
drivers/net/netxen/netxen_nic.h
··· 1254 1254 u8 mc_enabled; 1255 1255 u8 max_mc_count; 1256 1256 u8 rss_supported; 1257 - u8 resv2; 1257 + u8 link_changed; 1258 1258 u32 resv3; 1259 1259 1260 1260 u8 has_link_events;
-7
drivers/net/netxen/netxen_nic_init.c
··· 184 184 kfree(recv_ctx->rds_rings); 185 185 186 186 skip_rds: 187 - if (recv_ctx->sds_rings == NULL) 188 - goto skip_sds; 189 - 190 - for(ring = 0; ring < adapter->max_sds_rings; ring++) 191 - recv_ctx->sds_rings[ring].consumer = 0; 192 - 193 - skip_sds: 194 187 if (adapter->tx_ring == NULL) 195 188 return; 196 189
+67 -38
drivers/net/netxen/netxen_nic_main.c
··· 94 94 95 95 MODULE_DEVICE_TABLE(pci, netxen_pci_tbl); 96 96 97 - static struct workqueue_struct *netxen_workq; 98 - #define SCHEDULE_WORK(tp) queue_work(netxen_workq, tp) 99 - #define FLUSH_SCHEDULED_WORK() flush_workqueue(netxen_workq) 100 - 101 97 static void netxen_watchdog(unsigned long); 102 98 103 99 static uint32_t crb_cmd_producer[4] = { ··· 167 171 { 168 172 if (recv_ctx->sds_rings != NULL) 169 173 kfree(recv_ctx->sds_rings); 174 + 175 + recv_ctx->sds_rings = NULL; 170 176 } 171 177 172 178 static int ··· 188 190 } 189 191 190 192 return 0; 193 + } 194 + 195 + static void 196 + netxen_napi_del(struct netxen_adapter *adapter) 197 + { 198 + int ring; 199 + struct nx_host_sds_ring *sds_ring; 200 + struct netxen_recv_context *recv_ctx = &adapter->recv_ctx; 201 + 202 + for (ring = 0; ring < adapter->max_sds_rings; ring++) { 203 + sds_ring = &recv_ctx->sds_rings[ring]; 204 + netif_napi_del(&sds_ring->napi); 205 + } 206 + 207 + netxen_free_sds_rings(&adapter->recv_ctx); 191 208 } 192 209 193 210 static void ··· 273 260 change = 0; 274 261 275 262 shift = NXRD32(adapter, CRB_DMA_SHIFT); 276 - if (shift >= 32) 263 + if (shift > 32) 277 264 return 0; 278 265 279 266 if (NX_IS_REVISION_P3(adapter->ahw.revision_id) && (shift > 9)) ··· 285 272 old_mask = pdev->dma_mask; 286 273 old_cmask = pdev->dev.coherent_dma_mask; 287 274 288 - mask = (1ULL<<(32+shift)) - 1; 275 + mask = DMA_BIT_MASK(32+shift); 289 276 290 277 err = pci_set_dma_mask(pdev, mask); 291 278 if (err) ··· 893 880 spin_unlock(&adapter->tx_clean_lock); 894 881 895 882 del_timer_sync(&adapter->watchdog_timer); 896 - FLUSH_SCHEDULED_WORK(); 897 883 } 898 884 899 885 ··· 906 894 struct nx_host_tx_ring *tx_ring; 907 895 908 896 err = netxen_init_firmware(adapter); 909 - if (err != 0) { 910 - printk(KERN_ERR "Failed to init firmware\n"); 911 - return -EIO; 912 - } 897 + if (err) 898 + return err; 899 + 900 + err = netxen_napi_add(adapter, netdev); 901 + if (err) 902 + return err; 913 903 914 904 if 
(adapter->fw_major < 4) 915 905 adapter->max_rds_rings = 3; ··· 975 961 netxen_free_hw_resources(adapter); 976 962 netxen_release_rx_buffers(adapter); 977 963 netxen_nic_free_irq(adapter); 964 + netxen_napi_del(adapter); 978 965 netxen_free_sw_resources(adapter); 979 966 980 967 adapter->is_up = 0; ··· 1120 1105 1121 1106 netdev->irq = adapter->msix_entries[0].vector; 1122 1107 1123 - if (netxen_napi_add(adapter, netdev)) 1124 - goto err_out_disable_msi; 1125 - 1126 1108 init_timer(&adapter->watchdog_timer); 1127 1109 adapter->watchdog_timer.function = &netxen_watchdog; 1128 1110 adapter->watchdog_timer.data = (unsigned long)adapter; ··· 1189 1177 1190 1178 unregister_netdev(netdev); 1191 1179 1180 + cancel_work_sync(&adapter->watchdog_task); 1181 + cancel_work_sync(&adapter->tx_timeout_task); 1182 + 1192 1183 if (adapter->is_up == NETXEN_ADAPTER_UP_MAGIC) { 1193 1184 netxen_nic_detach(adapter); 1194 1185 } ··· 1200 1185 netxen_free_adapter_offload(adapter); 1201 1186 1202 1187 netxen_teardown_intr(adapter); 1203 - netxen_free_sds_rings(&adapter->recv_ctx); 1204 1188 1205 1189 netxen_cleanup_pci_map(adapter); 1206 1190 ··· 1224 1210 1225 1211 if (netif_running(netdev)) 1226 1212 netxen_nic_down(adapter, netdev); 1213 + 1214 + cancel_work_sync(&adapter->watchdog_task); 1215 + cancel_work_sync(&adapter->tx_timeout_task); 1227 1216 1228 1217 if (adapter->is_up == NETXEN_ADAPTER_UP_MAGIC) 1229 1218 netxen_nic_detach(adapter); ··· 1566 1549 "%s: Device temperature %d degrees C exceeds" 1567 1550 " maximum allowed. 
Hardware has been shut down.\n", 1568 1551 netdev->name, temp_val); 1569 - 1570 - netif_device_detach(netdev); 1571 - netxen_nic_down(adapter, netdev); 1572 - netxen_nic_detach(adapter); 1573 - 1574 1552 rv = 1; 1575 1553 } else if (temp_state == NX_TEMP_WARN) { 1576 1554 if (adapter->temp == NX_TEMP_NORMAL) { ··· 1599 1587 netif_carrier_off(netdev); 1600 1588 netif_stop_queue(netdev); 1601 1589 } 1602 - 1603 - if (!adapter->has_link_events) 1604 - netxen_nic_set_link_parameters(adapter); 1605 - 1590 + adapter->link_changed = !adapter->has_link_events; 1606 1591 } else if (!adapter->ahw.linkup && linkup) { 1607 1592 printk(KERN_INFO "%s: %s NIC Link is up\n", 1608 1593 netxen_nic_driver_name, netdev->name); ··· 1608 1599 netif_carrier_on(netdev); 1609 1600 netif_wake_queue(netdev); 1610 1601 } 1611 - 1612 - if (!adapter->has_link_events) 1613 - netxen_nic_set_link_parameters(adapter); 1602 + adapter->link_changed = !adapter->has_link_events; 1614 1603 } 1615 1604 } 1616 1605 ··· 1635 1628 netxen_advert_link_change(adapter, linkup); 1636 1629 } 1637 1630 1631 + static void netxen_nic_thermal_shutdown(struct netxen_adapter *adapter) 1632 + { 1633 + struct net_device *netdev = adapter->netdev; 1634 + 1635 + netif_device_detach(netdev); 1636 + netxen_nic_down(adapter, netdev); 1637 + netxen_nic_detach(adapter); 1638 + } 1639 + 1638 1640 static void netxen_watchdog(unsigned long v) 1639 1641 { 1640 1642 struct netxen_adapter *adapter = (struct netxen_adapter *)v; 1641 1643 1642 - SCHEDULE_WORK(&adapter->watchdog_task); 1644 + if (netxen_nic_check_temp(adapter)) 1645 + goto do_sched; 1646 + 1647 + if (!adapter->has_link_events) { 1648 + netxen_nic_handle_phy_intr(adapter); 1649 + 1650 + if (adapter->link_changed) 1651 + goto do_sched; 1652 + } 1653 + 1654 + if (netif_running(adapter->netdev)) 1655 + mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ); 1656 + 1657 + return; 1658 + 1659 + do_sched: 1660 + schedule_work(&adapter->watchdog_task); 1643 1661 } 1644 1662 1645 
1663 void netxen_watchdog_task(struct work_struct *work) ··· 1672 1640 struct netxen_adapter *adapter = 1673 1641 container_of(work, struct netxen_adapter, watchdog_task); 1674 1642 1675 - if (netxen_nic_check_temp(adapter)) 1643 + if (adapter->temp == NX_TEMP_PANIC) { 1644 + netxen_nic_thermal_shutdown(adapter); 1676 1645 return; 1646 + } 1677 1647 1678 - if (!adapter->has_link_events) 1679 - netxen_nic_handle_phy_intr(adapter); 1648 + if (adapter->link_changed) 1649 + netxen_nic_set_link_parameters(adapter); 1680 1650 1681 1651 if (netif_running(adapter->netdev)) 1682 1652 mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ); ··· 1686 1652 1687 1653 static void netxen_tx_timeout(struct net_device *netdev) 1688 1654 { 1689 - struct netxen_adapter *adapter = (struct netxen_adapter *) 1690 - netdev_priv(netdev); 1691 - SCHEDULE_WORK(&adapter->tx_timeout_task); 1655 + struct netxen_adapter *adapter = netdev_priv(netdev); 1656 + schedule_work(&adapter->tx_timeout_task); 1692 1657 } 1693 1658 1694 1659 static void netxen_tx_timeout_task(struct work_struct *work) ··· 1844 1811 { 1845 1812 printk(KERN_INFO "%s\n", netxen_nic_driver_string); 1846 1813 1847 - if ((netxen_workq = create_singlethread_workqueue("netxen")) == NULL) 1848 - return -ENOMEM; 1849 - 1850 1814 return pci_register_driver(&netxen_driver); 1851 1815 } 1852 1816 ··· 1852 1822 static void __exit netxen_exit_module(void) 1853 1823 { 1854 1824 pci_unregister_driver(&netxen_driver); 1855 - destroy_workqueue(netxen_workq); 1856 1825 } 1857 1826 1858 1827 module_exit(netxen_exit_module);
+1 -1
drivers/net/pcnet32.c
··· 1839 1839 lp->chip_version = chip_version; 1840 1840 lp->msg_enable = pcnet32_debug; 1841 1841 if ((cards_found >= MAX_UNITS) 1842 - || (options[cards_found] > sizeof(options_mapping))) 1842 + || (options[cards_found] >= sizeof(options_mapping))) 1843 1843 lp->options = PCNET32_PORT_ASEL; 1844 1844 else 1845 1845 lp->options = options_mapping[options[cards_found]];
+22 -18
drivers/net/smc91x.c
··· 196 196 /* this enables an interrupt in the interrupt mask register */ 197 197 #define SMC_ENABLE_INT(lp, x) do { \ 198 198 unsigned char mask; \ 199 - spin_lock_irq(&lp->lock); \ 199 + unsigned long smc_enable_flags; \ 200 + spin_lock_irqsave(&lp->lock, smc_enable_flags); \ 200 201 mask = SMC_GET_INT_MASK(lp); \ 201 202 mask |= (x); \ 202 203 SMC_SET_INT_MASK(lp, mask); \ 203 - spin_unlock_irq(&lp->lock); \ 204 + spin_unlock_irqrestore(&lp->lock, smc_enable_flags); \ 204 205 } while (0) 205 206 206 207 /* this disables an interrupt from the interrupt mask register */ 207 208 #define SMC_DISABLE_INT(lp, x) do { \ 208 209 unsigned char mask; \ 209 - spin_lock_irq(&lp->lock); \ 210 + unsigned long smc_disable_flags; \ 211 + spin_lock_irqsave(&lp->lock, smc_disable_flags); \ 210 212 mask = SMC_GET_INT_MASK(lp); \ 211 213 mask &= ~(x); \ 212 214 SMC_SET_INT_MASK(lp, mask); \ 213 - spin_unlock_irq(&lp->lock); \ 215 + spin_unlock_irqrestore(&lp->lock, smc_disable_flags); \ 214 216 } while (0) 215 217 216 218 /* ··· 522 520 * any other concurrent access and C would always interrupt B. But life 523 521 * isn't that easy in a SMP world... 
524 522 */ 525 - #define smc_special_trylock(lock) \ 523 + #define smc_special_trylock(lock, flags) \ 526 524 ({ \ 527 525 int __ret; \ 528 - local_irq_disable(); \ 526 + local_irq_save(flags); \ 529 527 __ret = spin_trylock(lock); \ 530 528 if (!__ret) \ 531 - local_irq_enable(); \ 529 + local_irq_restore(flags); \ 532 530 __ret; \ 533 531 }) 534 - #define smc_special_lock(lock) spin_lock_irq(lock) 535 - #define smc_special_unlock(lock) spin_unlock_irq(lock) 532 + #define smc_special_lock(lock, flags) spin_lock_irqsave(lock, flags) 533 + #define smc_special_unlock(lock, flags) spin_unlock_irqrestore(lock, flags) 536 534 #else 537 - #define smc_special_trylock(lock) (1) 538 - #define smc_special_lock(lock) do { } while (0) 539 - #define smc_special_unlock(lock) do { } while (0) 535 + #define smc_special_trylock(lock, flags) (1) 536 + #define smc_special_lock(lock, flags) do { } while (0) 537 + #define smc_special_unlock(lock, flags) do { } while (0) 540 538 #endif 541 539 542 540 /* ··· 550 548 struct sk_buff *skb; 551 549 unsigned int packet_no, len; 552 550 unsigned char *buf; 551 + unsigned long flags; 553 552 554 553 DBG(3, "%s: %s\n", dev->name, __func__); 555 554 556 - if (!smc_special_trylock(&lp->lock)) { 555 + if (!smc_special_trylock(&lp->lock, flags)) { 557 556 netif_stop_queue(dev); 558 557 tasklet_schedule(&lp->tx_task); 559 558 return; ··· 562 559 563 560 skb = lp->pending_tx_skb; 564 561 if (unlikely(!skb)) { 565 - smc_special_unlock(&lp->lock); 562 + smc_special_unlock(&lp->lock, flags); 566 563 return; 567 564 } 568 565 lp->pending_tx_skb = NULL; ··· 572 569 printk("%s: Memory allocation failed.\n", dev->name); 573 570 dev->stats.tx_errors++; 574 571 dev->stats.tx_fifo_errors++; 575 - smc_special_unlock(&lp->lock); 572 + smc_special_unlock(&lp->lock, flags); 576 573 goto done; 577 574 } 578 575 ··· 611 608 612 609 /* queue the packet for TX */ 613 610 SMC_SET_MMU_CMD(lp, MC_ENQUEUE); 614 - smc_special_unlock(&lp->lock); 611 + 
smc_special_unlock(&lp->lock, flags); 615 612 616 613 dev->trans_start = jiffies; 617 614 dev->stats.tx_packets++; ··· 636 633 struct smc_local *lp = netdev_priv(dev); 637 634 void __iomem *ioaddr = lp->base; 638 635 unsigned int numPages, poll_count, status; 636 + unsigned long flags; 639 637 640 638 DBG(3, "%s: %s\n", dev->name, __func__); 641 639 ··· 662 658 return 0; 663 659 } 664 660 665 - smc_special_lock(&lp->lock); 661 + smc_special_lock(&lp->lock, flags); 666 662 667 663 /* now, try to allocate the memory */ 668 664 SMC_SET_MMU_CMD(lp, MC_ALLOC | numPages); ··· 680 676 } 681 677 } while (--poll_count); 682 678 683 - smc_special_unlock(&lp->lock); 679 + smc_special_unlock(&lp->lock, flags); 684 680 685 681 lp->pending_tx_skb = skb; 686 682 if (!poll_count) {
+3 -2
drivers/net/tulip/tulip_core.c
··· 652 652 int entry; 653 653 u32 flag; 654 654 dma_addr_t mapping; 655 + unsigned long flags; 655 656 656 - spin_lock_irq(&tp->lock); 657 + spin_lock_irqsave(&tp->lock, flags); 657 658 658 659 /* Calculate the next Tx descriptor entry. */ 659 660 entry = tp->cur_tx % TX_RING_SIZE; ··· 689 688 /* Trigger an immediate transmit demand. */ 690 689 iowrite32(0, tp->base_addr + CSR1); 691 690 692 - spin_unlock_irq(&tp->lock); 691 + spin_unlock_irqrestore(&tp->lock, flags); 693 692 694 693 dev->trans_start = jiffies; 695 694
+19 -31
drivers/net/tun.c
··· 1048 1048 return err; 1049 1049 } 1050 1050 1051 - static int tun_get_iff(struct net *net, struct file *file, struct ifreq *ifr) 1051 + static int tun_get_iff(struct net *net, struct tun_struct *tun, 1052 + struct ifreq *ifr) 1052 1053 { 1053 - struct tun_struct *tun = tun_get(file); 1054 - 1055 - if (!tun) 1056 - return -EBADFD; 1057 - 1058 1054 DBG(KERN_INFO "%s: tun_get_iff\n", tun->dev->name); 1059 1055 1060 1056 strcpy(ifr->ifr_name, tun->dev->name); 1061 1057 1062 1058 ifr->ifr_flags = tun_flags(tun); 1063 1059 1064 - tun_put(tun); 1065 1060 return 0; 1066 1061 } 1067 1062 ··· 1100 1105 return 0; 1101 1106 } 1102 1107 1103 - static int tun_chr_ioctl(struct inode *inode, struct file *file, 1104 - unsigned int cmd, unsigned long arg) 1108 + static long tun_chr_ioctl(struct file *file, unsigned int cmd, 1109 + unsigned long arg) 1105 1110 { 1106 1111 struct tun_file *tfile = file->private_data; 1107 1112 struct tun_struct *tun; ··· 1123 1128 (unsigned int __user*)argp); 1124 1129 } 1125 1130 1131 + rtnl_lock(); 1132 + 1126 1133 tun = __tun_get(tfile); 1127 1134 if (cmd == TUNSETIFF && !tun) { 1128 - int err; 1129 - 1130 1135 ifr.ifr_name[IFNAMSIZ-1] = '\0'; 1131 1136 1132 - rtnl_lock(); 1133 - err = tun_set_iff(tfile->net, file, &ifr); 1134 - rtnl_unlock(); 1137 + ret = tun_set_iff(tfile->net, file, &ifr); 1135 1138 1136 - if (err) 1137 - return err; 1139 + if (ret) 1140 + goto unlock; 1138 1141 1139 1142 if (copy_to_user(argp, &ifr, sizeof(ifr))) 1140 - return -EFAULT; 1141 - return 0; 1143 + ret = -EFAULT; 1144 + goto unlock; 1142 1145 } 1143 1146 1144 - 1147 + ret = -EBADFD; 1145 1148 if (!tun) 1146 - return -EBADFD; 1149 + goto unlock; 1147 1150 1148 1151 DBG(KERN_INFO "%s: tun_chr_ioctl cmd %d\n", tun->dev->name, cmd); 1149 1152 1150 1153 ret = 0; 1151 1154 switch (cmd) { 1152 1155 case TUNGETIFF: 1153 - ret = tun_get_iff(current->nsproxy->net_ns, file, &ifr); 1156 + ret = tun_get_iff(current->nsproxy->net_ns, tun, &ifr); 1154 1157 if (ret) 1155 1158 
break; 1156 1159 ··· 1194 1201 1195 1202 case TUNSETLINK: 1196 1203 /* Only allow setting the type when the interface is down */ 1197 - rtnl_lock(); 1198 1204 if (tun->dev->flags & IFF_UP) { 1199 1205 DBG(KERN_INFO "%s: Linktype set failed because interface is up\n", 1200 1206 tun->dev->name); ··· 1203 1211 DBG(KERN_INFO "%s: linktype set to %d\n", tun->dev->name, tun->dev->type); 1204 1212 ret = 0; 1205 1213 } 1206 - rtnl_unlock(); 1207 1214 break; 1208 1215 1209 1216 #ifdef TUN_DEBUG ··· 1211 1220 break; 1212 1221 #endif 1213 1222 case TUNSETOFFLOAD: 1214 - rtnl_lock(); 1215 1223 ret = set_offload(tun->dev, arg); 1216 - rtnl_unlock(); 1217 1224 break; 1218 1225 1219 1226 case TUNSETTXFILTER: ··· 1219 1230 ret = -EINVAL; 1220 1231 if ((tun->flags & TUN_TYPE_MASK) != TUN_TAP_DEV) 1221 1232 break; 1222 - rtnl_lock(); 1223 1233 ret = update_filter(&tun->txflt, (void __user *)arg); 1224 - rtnl_unlock(); 1225 1234 break; 1226 1235 1227 1236 case SIOCGIFHWADDR: ··· 1235 1248 DBG(KERN_DEBUG "%s: set hw address: %pM\n", 1236 1249 tun->dev->name, ifr.ifr_hwaddr.sa_data); 1237 1250 1238 - rtnl_lock(); 1239 1251 ret = dev_set_mac_address(tun->dev, &ifr.ifr_hwaddr); 1240 - rtnl_unlock(); 1241 1252 break; 1242 1253 1243 1254 case TUNGETSNDBUF: ··· 1258 1273 break; 1259 1274 }; 1260 1275 1261 - tun_put(tun); 1276 + unlock: 1277 + rtnl_unlock(); 1278 + if (tun) 1279 + tun_put(tun); 1262 1280 return ret; 1263 1281 } 1264 1282 ··· 1349 1361 .write = do_sync_write, 1350 1362 .aio_write = tun_chr_aio_write, 1351 1363 .poll = tun_chr_poll, 1352 - .ioctl = tun_chr_ioctl, 1364 + .unlocked_ioctl = tun_chr_ioctl, 1353 1365 .open = tun_chr_open, 1354 1366 .release = tun_chr_close, 1355 1367 .fasync = tun_chr_fasync
+3 -2
drivers/net/ucc_geth.c
··· 3111 3111 u8 __iomem *bd; /* BD pointer */ 3112 3112 u32 bd_status; 3113 3113 u8 txQ = 0; 3114 + unsigned long flags; 3114 3115 3115 3116 ugeth_vdbg("%s: IN", __func__); 3116 3117 3117 - spin_lock_irq(&ugeth->lock); 3118 + spin_lock_irqsave(&ugeth->lock, flags); 3118 3119 3119 3120 dev->stats.tx_bytes += skb->len; 3120 3121 ··· 3172 3171 uccf = ugeth->uccf; 3173 3172 out_be16(uccf->p_utodr, UCC_FAST_TOD); 3174 3173 #endif 3175 - spin_unlock_irq(&ugeth->lock); 3174 + spin_unlock_irqrestore(&ugeth->lock, flags); 3176 3175 3177 3176 return 0; 3178 3177 }
+2
drivers/net/usb/pegasus.h
··· 250 250 DEFAULT_GPIO_RESET ) 251 251 PEGASUS_DEV( "IO DATA USB ET/TX-S", VENDOR_IODATA, 0x0913, 252 252 DEFAULT_GPIO_RESET | PEGASUS_II ) 253 + PEGASUS_DEV( "IO DATA USB ETX-US2", VENDOR_IODATA, 0x092a, 254 + DEFAULT_GPIO_RESET | PEGASUS_II ) 253 255 PEGASUS_DEV( "Kingston KNU101TX Ethernet", VENDOR_KINGSTON, 0x000a, 254 256 DEFAULT_GPIO_RESET) 255 257 PEGASUS_DEV( "LANEED USB Ethernet LD-USB/TX", VENDOR_LANEED, 0x4002,
+3 -2
drivers/net/via-rhine.c
··· 1218 1218 struct rhine_private *rp = netdev_priv(dev); 1219 1219 void __iomem *ioaddr = rp->base; 1220 1220 unsigned entry; 1221 + unsigned long flags; 1221 1222 1222 1223 /* Caution: the write order is important here, set the field 1223 1224 with the "ownership" bits last. */ ··· 1262 1261 cpu_to_le32(TXDESC | (skb->len >= ETH_ZLEN ? skb->len : ETH_ZLEN)); 1263 1262 1264 1263 /* lock eth irq */ 1265 - spin_lock_irq(&rp->lock); 1264 + spin_lock_irqsave(&rp->lock, flags); 1266 1265 wmb(); 1267 1266 rp->tx_ring[entry].tx_status = cpu_to_le32(DescOwn); 1268 1267 wmb(); ··· 1281 1280 1282 1281 dev->trans_start = jiffies; 1283 1282 1284 - spin_unlock_irq(&rp->lock); 1283 + spin_unlock_irqrestore(&rp->lock, flags); 1285 1284 1286 1285 if (debug > 4) { 1287 1286 printk(KERN_DEBUG "%s: Transmit frame #%d queued in slot %d.\n",
+1 -1
drivers/net/via-velocity.c
··· 1778 1778 * mode 1779 1779 */ 1780 1780 if (vptr->rev_id < REV_ID_VT3216_A0) { 1781 - if (vptr->mii_status | VELOCITY_DUPLEX_FULL) 1781 + if (vptr->mii_status & VELOCITY_DUPLEX_FULL) 1782 1782 BYTE_REG_BITS_ON(TCR_TB2BDIS, &regs->TCR); 1783 1783 else 1784 1784 BYTE_REG_BITS_OFF(TCR_TB2BDIS, &regs->TCR);
+46 -15
drivers/net/virtio_net.c
··· 70 70 struct sk_buff_head recv; 71 71 struct sk_buff_head send; 72 72 73 + /* Work struct for refilling if we run low on memory. */ 74 + struct delayed_work refill; 75 + 73 76 /* Chain pages by the private ptr. */ 74 77 struct page *pages; 75 78 }; ··· 276 273 dev_kfree_skb(skb); 277 274 } 278 275 279 - static void try_fill_recv_maxbufs(struct virtnet_info *vi) 276 + static bool try_fill_recv_maxbufs(struct virtnet_info *vi, gfp_t gfp) 280 277 { 281 278 struct sk_buff *skb; 282 279 struct scatterlist sg[2+MAX_SKB_FRAGS]; 283 280 int num, err, i; 281 + bool oom = false; 284 282 285 283 sg_init_table(sg, 2+MAX_SKB_FRAGS); 286 284 for (;;) { 287 285 struct virtio_net_hdr *hdr; 288 286 289 287 skb = netdev_alloc_skb(vi->dev, MAX_PACKET_LEN + NET_IP_ALIGN); 290 - if (unlikely(!skb)) 288 + if (unlikely(!skb)) { 289 + oom = true; 291 290 break; 291 + } 292 292 293 293 skb_reserve(skb, NET_IP_ALIGN); 294 294 skb_put(skb, MAX_PACKET_LEN); ··· 302 296 if (vi->big_packets) { 303 297 for (i = 0; i < MAX_SKB_FRAGS; i++) { 304 298 skb_frag_t *f = &skb_shinfo(skb)->frags[i]; 305 - f->page = get_a_page(vi, GFP_ATOMIC); 299 + f->page = get_a_page(vi, gfp); 306 300 if (!f->page) 307 301 break; 308 302 ··· 331 325 if (unlikely(vi->num > vi->max)) 332 326 vi->max = vi->num; 333 327 vi->rvq->vq_ops->kick(vi->rvq); 328 + return !oom; 334 329 } 335 330 336 - static void try_fill_recv(struct virtnet_info *vi) 331 + /* Returns false if we couldn't fill entirely (OOM). 
*/ 332 + static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp) 337 333 { 338 334 struct sk_buff *skb; 339 335 struct scatterlist sg[1]; 340 336 int err; 337 + bool oom = false; 341 338 342 - if (!vi->mergeable_rx_bufs) { 343 - try_fill_recv_maxbufs(vi); 344 - return; 345 - } 339 + if (!vi->mergeable_rx_bufs) 340 + return try_fill_recv_maxbufs(vi, gfp); 346 341 347 342 for (;;) { 348 343 skb_frag_t *f; 349 344 350 345 skb = netdev_alloc_skb(vi->dev, GOOD_COPY_LEN + NET_IP_ALIGN); 351 - if (unlikely(!skb)) 346 + if (unlikely(!skb)) { 347 + oom = true; 352 348 break; 349 + } 353 350 354 351 skb_reserve(skb, NET_IP_ALIGN); 355 352 356 353 f = &skb_shinfo(skb)->frags[0]; 357 - f->page = get_a_page(vi, GFP_ATOMIC); 354 + f->page = get_a_page(vi, gfp); 358 355 if (!f->page) { 356 + oom = true; 359 357 kfree_skb(skb); 360 358 break; 361 359 } ··· 383 373 if (unlikely(vi->num > vi->max)) 384 374 vi->max = vi->num; 385 375 vi->rvq->vq_ops->kick(vi->rvq); 376 + return !oom; 386 377 } 387 378 388 379 static void skb_recv_done(struct virtqueue *rvq) ··· 394 383 rvq->vq_ops->disable_cb(rvq); 395 384 __napi_schedule(&vi->napi); 396 385 } 386 + } 387 + 388 + static void refill_work(struct work_struct *work) 389 + { 390 + struct virtnet_info *vi; 391 + bool still_empty; 392 + 393 + vi = container_of(work, struct virtnet_info, refill.work); 394 + napi_disable(&vi->napi); 395 + try_fill_recv(vi, GFP_KERNEL); 396 + still_empty = (vi->num == 0); 397 + napi_enable(&vi->napi); 398 + 399 + /* In theory, this can happen: if we don't get any buffers in 400 + * we will *never* try to fill again. */ 401 + if (still_empty) 402 + schedule_delayed_work(&vi->refill, HZ/2); 397 403 } 398 404 399 405 static int virtnet_poll(struct napi_struct *napi, int budget) ··· 428 400 received++; 429 401 } 430 402 431 - /* FIXME: If we oom and completely run out of inbufs, we need 432 - * to start a timer trying to fill more. 
*/ 433 - if (vi->num < vi->max / 2) 434 - try_fill_recv(vi); 403 + if (vi->num < vi->max / 2) { 404 + if (!try_fill_recv(vi, GFP_ATOMIC)) 405 + schedule_delayed_work(&vi->refill, 0); 406 + } 435 407 436 408 /* Out of packets? */ 437 409 if (received < budget) { ··· 921 893 vi->vdev = vdev; 922 894 vdev->priv = vi; 923 895 vi->pages = NULL; 896 + INIT_DELAYED_WORK(&vi->refill, refill_work); 924 897 925 898 /* If they give us a callback when all buffers are done, we don't need 926 899 * the timer. */ ··· 970 941 } 971 942 972 943 /* Last of all, set up some receive buffers. */ 973 - try_fill_recv(vi); 944 + try_fill_recv(vi, GFP_KERNEL); 974 945 975 946 /* If we didn't even get one input buffer, we're useless. */ 976 947 if (vi->num == 0) { ··· 987 958 988 959 unregister: 989 960 unregister_netdev(dev); 961 + cancel_delayed_work_sync(&vi->refill); 990 962 free_vqs: 991 963 vdev->config->del_vqs(vdev); 992 964 free: ··· 1016 986 BUG_ON(vi->num != 0); 1017 987 1018 988 unregister_netdev(vi->dev); 989 + cancel_delayed_work_sync(&vi->refill); 1019 990 1020 991 vdev->config->del_vqs(vi->vdev); 1021 992
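The virtio_net hunks above turn the old "FIXME: start a timer on OOM" into real logic: the NAPI path refills with atomic allocations and, on failure, defers to a delayed-work item that can allocate with GFP_KERNEL and reschedules itself while the ring stays empty. A reduced model of that decision logic (all names are hypothetical userspace stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

static int alloc_budget;        /* allocations that will succeed */
static int num_bufs;            /* receive buffers currently posted */
static const int max_bufs = 4;

/* Returns false when the ring could not be filled entirely (OOM). */
static bool try_fill_recv(void)
{
    bool oom = false;

    while (num_bufs < max_bufs) {
        if (alloc_budget == 0) {
            oom = true;
            break;
        }
        alloc_budget--;
        num_bufs++;
    }
    return !oom;
}

/* One poll pass: true if a deferred refill must be scheduled. */
static bool poll_needs_deferred_refill(int budget)
{
    alloc_budget = budget;
    num_bufs = 0;
    return num_bufs < max_bufs / 2 && !try_fill_recv();
}

/* Worker pass: true if it must reschedule (ring still empty). */
static bool refill_work_must_retry(int budget)
{
    alloc_budget = budget;
    num_bufs = 0;
    try_fill_recv();
    return num_bufs == 0;
}
```

The key property is that a completely empty ring always leaves some retry pending, so the device cannot get stuck with no receive buffers.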
+3 -2
drivers/net/wireless/ath/ar9170/main.c
··· 1967 1967 int ret; 1968 1968 1969 1969 mutex_lock(&ar->mutex); 1970 - if ((param) && !(queue > __AR9170_NUM_TXQ)) { 1970 + if (queue < __AR9170_NUM_TXQ) { 1971 1971 memcpy(&ar->edcf[ar9170_qos_hwmap[queue]], 1972 1972 param, sizeof(*param)); 1973 1973 1974 1974 ret = ar9170_set_qos(ar); 1975 - } else 1975 + } else { 1976 1976 ret = -EINVAL; 1977 + } 1977 1978 1978 1979 mutex_unlock(&ar->mutex); 1979 1980 return ret;
+5 -1
drivers/net/wireless/ath/ar9170/usb.c
··· 598 598 599 599 err = request_firmware(&aru->init_values, "ar9170-1.fw", 600 600 &aru->udev->dev); 601 + if (err) { 602 + dev_err(&aru->udev->dev, "file with init values not found.\n"); 603 + return err; 604 + } 601 605 602 606 err = request_firmware(&aru->firmware, "ar9170-2.fw", &aru->udev->dev); 603 607 if (err) { 604 608 release_firmware(aru->init_values); 605 - dev_err(&aru->udev->dev, "file with init values not found.\n"); 609 + dev_err(&aru->udev->dev, "firmware file not found.\n"); 606 610 return err; 607 611 } 608 612
+68 -54
drivers/net/wireless/ipw2x00/ipw2200.c
··· 2874 2874 return 0; 2875 2875 } 2876 2876 2877 - static int ipw_fw_dma_add_buffer(struct ipw_priv *priv, 2878 - u32 src_phys, u32 dest_address, u32 length) 2877 + static int ipw_fw_dma_add_buffer(struct ipw_priv *priv, dma_addr_t *src_address, 2878 + int nr, u32 dest_address, u32 len) 2879 2879 { 2880 - u32 bytes_left = length; 2881 - u32 src_offset = 0; 2882 - u32 dest_offset = 0; 2883 - int status = 0; 2880 + int ret, i; 2881 + u32 size; 2882 + 2884 2883 IPW_DEBUG_FW(">> \n"); 2885 - IPW_DEBUG_FW_INFO("src_phys=0x%x dest_address=0x%x length=0x%x\n", 2886 - src_phys, dest_address, length); 2887 - while (bytes_left > CB_MAX_LENGTH) { 2888 - status = ipw_fw_dma_add_command_block(priv, 2889 - src_phys + src_offset, 2890 - dest_address + 2891 - dest_offset, 2892 - CB_MAX_LENGTH, 0, 0); 2893 - if (status) { 2884 + IPW_DEBUG_FW_INFO("nr=%d dest_address=0x%x len=0x%x\n", 2885 + nr, dest_address, len); 2886 + 2887 + for (i = 0; i < nr; i++) { 2888 + size = min_t(u32, len - i * CB_MAX_LENGTH, CB_MAX_LENGTH); 2889 + ret = ipw_fw_dma_add_command_block(priv, src_address[i], 2890 + dest_address + 2891 + i * CB_MAX_LENGTH, size, 2892 + 0, 0); 2893 + if (ret) { 2894 2894 IPW_DEBUG_FW_INFO(": Failed\n"); 2895 2895 return -1; 2896 2896 } else 2897 2897 IPW_DEBUG_FW_INFO(": Added new cb\n"); 2898 - 2899 - src_offset += CB_MAX_LENGTH; 2900 - dest_offset += CB_MAX_LENGTH; 2901 - bytes_left -= CB_MAX_LENGTH; 2902 - } 2903 - 2904 - /* add the buffer tail */ 2905 - if (bytes_left > 0) { 2906 - status = 2907 - ipw_fw_dma_add_command_block(priv, src_phys + src_offset, 2908 - dest_address + dest_offset, 2909 - bytes_left, 0, 0); 2910 - if (status) { 2911 - IPW_DEBUG_FW_INFO(": Failed on the buffer tail\n"); 2912 - return -1; 2913 - } else 2914 - IPW_DEBUG_FW_INFO 2915 - (": Adding new cb - the buffer tail\n"); 2916 2898 } 2917 2899 2918 2900 IPW_DEBUG_FW("<< \n"); ··· 3142 3160 3143 3161 static int ipw_load_firmware(struct ipw_priv *priv, u8 * data, size_t len) 3144 3162 { 3145 - int 
rc = -1; 3163 + int ret = -1; 3146 3164 int offset = 0; 3147 3165 struct fw_chunk *chunk; 3148 - dma_addr_t shared_phys; 3149 - u8 *shared_virt; 3166 + int total_nr = 0; 3167 + int i; 3168 + struct pci_pool *pool; 3169 + u32 *virts[CB_NUMBER_OF_ELEMENTS_SMALL]; 3170 + dma_addr_t phys[CB_NUMBER_OF_ELEMENTS_SMALL]; 3150 3171 3151 3172 IPW_DEBUG_TRACE("<< : \n"); 3152 - shared_virt = pci_alloc_consistent(priv->pci_dev, len, &shared_phys); 3153 3173 3154 - if (!shared_virt) 3174 + pool = pci_pool_create("ipw2200", priv->pci_dev, CB_MAX_LENGTH, 0, 0); 3175 + if (!pool) { 3176 + IPW_ERROR("pci_pool_create failed\n"); 3155 3177 return -ENOMEM; 3156 - 3157 - memmove(shared_virt, data, len); 3178 + } 3158 3179 3159 3180 /* Start the Dma */ 3160 - rc = ipw_fw_dma_enable(priv); 3181 + ret = ipw_fw_dma_enable(priv); 3161 3182 3162 3183 /* the DMA is already ready this would be a bug. */ 3163 3184 BUG_ON(priv->sram_desc.last_cb_index > 0); 3164 3185 3165 3186 do { 3187 + u32 chunk_len; 3188 + u8 *start; 3189 + int size; 3190 + int nr = 0; 3191 + 3166 3192 chunk = (struct fw_chunk *)(data + offset); 3167 3193 offset += sizeof(struct fw_chunk); 3194 + chunk_len = le32_to_cpu(chunk->length); 3195 + start = data + offset; 3196 + 3197 + nr = (chunk_len + CB_MAX_LENGTH - 1) / CB_MAX_LENGTH; 3198 + for (i = 0; i < nr; i++) { 3199 + virts[total_nr] = pci_pool_alloc(pool, GFP_KERNEL, 3200 + &phys[total_nr]); 3201 + if (!virts[total_nr]) { 3202 + ret = -ENOMEM; 3203 + goto out; 3204 + } 3205 + size = min_t(u32, chunk_len - i * CB_MAX_LENGTH, 3206 + CB_MAX_LENGTH); 3207 + memcpy(virts[total_nr], start, size); 3208 + start += size; 3209 + total_nr++; 3210 + /* We don't support fw chunk larger than 64*8K */ 3211 + BUG_ON(total_nr > CB_NUMBER_OF_ELEMENTS_SMALL); 3212 + } 3213 + 3168 3214 /* build DMA packet and queue up for sending */ 3169 3215 /* dma to chunk->address, the chunk->length bytes from data + 3170 3216 * offeset*/ 3171 3217 /* Dma loading */ 3172 - rc = 
ipw_fw_dma_add_buffer(priv, shared_phys + offset, 3173 - le32_to_cpu(chunk->address), 3174 - le32_to_cpu(chunk->length)); 3175 - if (rc) { 3218 + ret = ipw_fw_dma_add_buffer(priv, &phys[total_nr - nr], 3219 + nr, le32_to_cpu(chunk->address), 3220 + chunk_len); 3221 + if (ret) { 3176 3222 IPW_DEBUG_INFO("dmaAddBuffer Failed\n"); 3177 3223 goto out; 3178 3224 } 3179 3225 3180 - offset += le32_to_cpu(chunk->length); 3226 + offset += chunk_len; 3181 3227 } while (offset < len); 3182 3228 3183 3229 /* Run the DMA and wait for the answer */ 3184 - rc = ipw_fw_dma_kick(priv); 3185 - if (rc) { 3230 + ret = ipw_fw_dma_kick(priv); 3231 + if (ret) { 3186 3232 IPW_ERROR("dmaKick Failed\n"); 3187 3233 goto out; 3188 3234 } 3189 3235 3190 - rc = ipw_fw_dma_wait(priv); 3191 - if (rc) { 3236 + ret = ipw_fw_dma_wait(priv); 3237 + if (ret) { 3192 3238 IPW_ERROR("dmaWaitSync Failed\n"); 3193 3239 goto out; 3194 3240 } 3195 - out: 3196 - pci_free_consistent(priv->pci_dev, len, shared_virt, shared_phys); 3197 - return rc; 3241 + out: 3242 + for (i = 0; i < total_nr; i++) 3243 + pci_pool_free(pool, virts[i], phys[i]); 3244 + 3245 + pci_pool_destroy(pool); 3246 + 3247 + return ret; 3198 3248 } 3199 3249 3200 3250 /* stop nic */ ··· 6240 6226 }; 6241 6227 6242 6228 u8 channel; 6243 - while (channel_index < IPW_SCAN_CHANNELS) { 6229 + while (channel_index < IPW_SCAN_CHANNELS - 1) { 6244 6230 channel = 6245 6231 priv->speed_scan[priv->speed_scan_pos]; 6246 6232 if (channel == 0) {
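The rewritten ipw_load_firmware() above copies each firmware chunk into CB_MAX_LENGTH-sized pci_pool buffers instead of one large coherent allocation. The block count and per-block size reduce to simple round-up/min arithmetic; a sketch of just that math (CB_MAX_LENGTH taken as 8 KiB, consistent with the "64*8K" comment in the hunk):

```c
#include <assert.h>

#define CB_MAX_LENGTH 0x2000U   /* 8 KiB per command block */

/* Number of pool blocks needed for a chunk of "len" bytes. */
static unsigned int chunk_nr_blocks(unsigned int len)
{
    return (len + CB_MAX_LENGTH - 1) / CB_MAX_LENGTH;   /* round up */
}

/* Bytes to copy into block i: full-sized except possibly the tail. */
static unsigned int chunk_block_size(unsigned int len, unsigned int i)
{
    unsigned int rem = len - i * CB_MAX_LENGTH;

    return rem < CB_MAX_LENGTH ? rem : CB_MAX_LENGTH;   /* min_t() */
}
```

Folding the old "buffer tail" special case into this per-block `min` is what let the patch delete the separate tail-handling branch.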
+10 -8
drivers/net/wireless/libertas/assoc.c
··· 1 1 /* Copyright (C) 2006, Red Hat, Inc. */ 2 2 3 3 #include <linux/types.h> 4 - #include <linux/kernel.h> 5 4 #include <linux/etherdevice.h> 6 5 #include <linux/ieee80211.h> 7 6 #include <linux/if_arp.h> ··· 43 44 u16 *rates_size) 44 45 { 45 46 u8 *card_rates = lbs_bg_rates; 47 + size_t num_card_rates = sizeof(lbs_bg_rates); 46 48 int ret = 0, i, j; 47 - u8 tmp[(ARRAY_SIZE(lbs_bg_rates) - 1) * (*rates_size - 1)]; 49 + u8 tmp[30]; 48 50 size_t tmp_size = 0; 49 51 50 52 /* For each rate in card_rates that exists in rate1, copy to tmp */ 51 - for (i = 0; i < ARRAY_SIZE(lbs_bg_rates) && card_rates[i]; i++) { 52 - for (j = 0; j < *rates_size && rates[j]; j++) { 53 + for (i = 0; card_rates[i] && (i < num_card_rates); i++) { 54 + for (j = 0; rates[j] && (j < *rates_size); j++) { 53 55 if (rates[j] == card_rates[i]) 54 56 tmp[tmp_size++] = card_rates[i]; 55 57 } 56 58 } 57 59 58 60 lbs_deb_hex(LBS_DEB_JOIN, "AP rates ", rates, *rates_size); 59 - lbs_deb_hex(LBS_DEB_JOIN, "card rates ", card_rates, 60 - ARRAY_SIZE(lbs_bg_rates)); 61 + lbs_deb_hex(LBS_DEB_JOIN, "card rates ", card_rates, num_card_rates); 61 62 lbs_deb_hex(LBS_DEB_JOIN, "common rates", tmp, tmp_size); 62 63 lbs_deb_join("TX data rate 0x%02x\n", priv->cur_rate); 63 64 ··· 69 70 lbs_pr_alert("Previously set fixed data rate %#x isn't " 70 71 "compatible with the network.\n", priv->cur_rate); 71 72 ret = -1; 73 + goto done; 72 74 } 75 + ret = 0; 76 + 73 77 done: 74 78 memset(rates, 0, *rates_size); 75 79 *rates_size = min_t(int, tmp_size, *rates_size); ··· 322 320 rates = (struct mrvl_ie_rates_param_set *) pos; 323 321 rates->header.type = cpu_to_le16(TLV_TYPE_RATES); 324 322 memcpy(&rates->rates, &bss->rates, MAX_RATES); 325 - tmplen = min_t(u16, ARRAY_SIZE(rates->rates), MAX_RATES); 323 + tmplen = MAX_RATES; 326 324 if (get_common_rates(priv, rates->rates, &tmplen)) { 327 325 ret = -1; 328 326 goto done; ··· 598 596 599 597 /* Copy Data rates from the rates recorded in scan response */ 600 598 
memset(cmd.bss.rates, 0, sizeof(cmd.bss.rates)); 601 - ratesize = min_t(u16, ARRAY_SIZE(cmd.bss.rates), MAX_RATES); 599 + ratesize = min_t(u16, sizeof(cmd.bss.rates), MAX_RATES); 602 600 memcpy(cmd.bss.rates, bss->rates, ratesize); 603 601 if (get_common_rates(priv, cmd.bss.rates, &ratesize)) { 604 602 lbs_deb_join("ADHOC_JOIN: get_common_rates returned error.\n");
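get_common_rates() in the libertas hunk intersects the card's zero-terminated rate table with the AP's advertised rates. A minimal stand-alone version of that intersection loop (rate values in the demo are illustrative; note the sketch tests the index bound before reading the element):

```c
#include <assert.h>
#include <stddef.h>

/* Copy every rate present in both zero-terminated arrays into "out";
 * returns the number of common rates found. */
static size_t common_rates(const unsigned char *card, size_t card_len,
                           const unsigned char *ap, size_t ap_len,
                           unsigned char *out)
{
    size_t n = 0, i, j;

    for (i = 0; i < card_len && card[i]; i++)
        for (j = 0; j < ap_len && ap[j]; j++)
            if (ap[j] == card[i])
                out[n++] = card[i];
    return n;
}

/* demo: card supports rate codes 2/4/11/22, the AP advertises 4/22/12 */
static size_t demo_common_count(void)
{
    static const unsigned char card[] = { 2, 4, 11, 22, 0 };
    static const unsigned char ap[]   = { 4, 22, 12, 0 };
    unsigned char out[8];

    return common_rates(card, sizeof(card), ap, sizeof(ap), out);
}
```

Bounding both loops by the real array sizes rather than ARRAY_SIZE of an unrelated table is exactly the overflow the patch's fixed-size `tmp[30]` and `num_card_rates` are addressing.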
+2 -2
drivers/net/wireless/libertas/hostcmd.h
··· 56 56 u8 bss_type; 57 57 /* BSS number */ 58 58 u8 bss_num; 59 - } bss; 60 - } u; 59 + } __attribute__ ((packed)) bss; 60 + } __attribute__ ((packed)) u; 61 61 62 62 /* SNR */ 63 63 u8 snr;
+19 -12
drivers/net/wireless/mwl8k.c
··· 261 261 */ 262 262 }; 263 263 264 - #define MWL8K_VIF(_vif) (struct mwl8k_vif *)(&((_vif)->drv_priv)) 264 + #define MWL8K_VIF(_vif) ((struct mwl8k_vif *)&((_vif)->drv_priv)) 265 265 266 266 static const struct ieee80211_channel mwl8k_channels[] = { 267 267 { .center_freq = 2412, .hw_value = 1, }, ··· 1012 1012 rmb(); 1013 1013 1014 1014 skb = rxq->rx_skb[rxq->rx_head]; 1015 + if (skb == NULL) 1016 + break; 1015 1017 rxq->rx_skb[rxq->rx_head] = NULL; 1016 1018 1017 1019 rxq->rx_head = (rxq->rx_head + 1) % MWL8K_RX_DESCS; ··· 1593 1591 timeout = wait_for_completion_timeout(&cmd_wait, 1594 1592 msecs_to_jiffies(MWL8K_CMD_TIMEOUT_MS)); 1595 1593 1594 + pci_unmap_single(priv->pdev, dma_addr, dma_size, 1595 + PCI_DMA_BIDIRECTIONAL); 1596 + 1596 1597 result = &cmd->result; 1597 1598 if (!timeout) { 1598 1599 spin_lock_irq(&priv->fw_lock); ··· 1615 1610 *result); 1616 1611 } 1617 1612 1618 - pci_unmap_single(priv->pdev, dma_addr, dma_size, 1619 - PCI_DMA_BIDIRECTIONAL); 1620 1613 return rc; 1621 1614 } 1622 1615 ··· 1657 1654 memset(cmd->perm_addr, 0xff, sizeof(cmd->perm_addr)); 1658 1655 cmd->ps_cookie = cpu_to_le32(priv->cookie_dma); 1659 1656 cmd->rx_queue_ptr = cpu_to_le32(priv->rxq[0].rx_desc_dma); 1660 - cmd->num_tx_queues = MWL8K_TX_QUEUES; 1657 + cmd->num_tx_queues = cpu_to_le32(MWL8K_TX_QUEUES); 1661 1658 for (i = 0; i < MWL8K_TX_QUEUES; i++) 1662 1659 cmd->tx_queue_ptrs[i] = cpu_to_le32(priv->txq[i].tx_desc_dma); 1663 - cmd->num_tx_desc_per_queue = MWL8K_TX_DESCS; 1664 - cmd->total_rx_desc = MWL8K_RX_DESCS; 1660 + cmd->num_tx_desc_per_queue = cpu_to_le32(MWL8K_TX_DESCS); 1661 + cmd->total_rx_desc = cpu_to_le32(MWL8K_RX_DESCS); 1665 1662 1666 1663 rc = mwl8k_post_cmd(hw, &cmd->header); 1667 1664 1668 1665 if (!rc) { 1669 1666 SET_IEEE80211_PERM_ADDR(hw, cmd->perm_addr); 1670 1667 priv->num_mcaddrs = le16_to_cpu(cmd->num_mcaddrs); 1671 - priv->fw_rev = cmd->fw_rev; 1668 + priv->fw_rev = le32_to_cpu(cmd->fw_rev); 1672 1669 priv->hw_rev = cmd->hw_rev; 1673 1670 
priv->region_code = le16_to_cpu(cmd->region_code); 1674 1671 } ··· 3219 3216 struct dev_addr_list *mclist = worker->mclist; 3220 3217 3221 3218 struct mwl8k_priv *priv = hw->priv; 3222 - struct mwl8k_vif *mv_vif; 3223 3219 int rc = 0; 3224 3220 3225 3221 if (changed_flags & FIF_BCN_PRBRESP_PROMISC) { 3226 3222 if (*total_flags & FIF_BCN_PRBRESP_PROMISC) 3227 3223 rc = mwl8k_cmd_set_pre_scan(hw); 3228 3224 else { 3229 - mv_vif = MWL8K_VIF(priv->vif); 3230 - rc = mwl8k_cmd_set_post_scan(hw, mv_vif->bssid); 3225 + u8 *bssid; 3226 + 3227 + bssid = "\x00\x00\x00\x00\x00\x00"; 3228 + if (priv->vif != NULL) 3229 + bssid = MWL8K_VIF(priv->vif)->bssid; 3230 + 3231 + rc = mwl8k_cmd_set_post_scan(hw, bssid); 3231 3232 } 3232 3233 } 3233 3234 ··· 3733 3726 3734 3727 ieee80211_stop_queues(hw); 3735 3728 3729 + ieee80211_unregister_hw(hw); 3730 + 3736 3731 /* Remove tx reclaim tasklet */ 3737 3732 tasklet_kill(&priv->tx_reclaim_task); 3738 3733 ··· 3747 3738 /* Return all skbs to mac80211 */ 3748 3739 for (i = 0; i < MWL8K_TX_QUEUES; i++) 3749 3740 mwl8k_txq_reclaim(hw, i, 1); 3750 - 3751 - ieee80211_unregister_hw(hw); 3752 3741 3753 3742 for (i = 0; i < MWL8K_TX_QUEUES; i++) 3754 3743 mwl8k_txq_deinit(hw, i);
+1 -1
drivers/net/wireless/orinoco/hw.c
··· 70 70 int err = 0; 71 71 u8 tsc_arr[4][IW_ENCODE_SEQ_MAX_SIZE]; 72 72 73 - if ((key < 0) || (key > 4)) 73 + if ((key < 0) || (key >= 4)) 74 74 return -EINVAL; 75 75 76 76 err = hermes_read_ltv(hw, USER_BAP, HERMES_RID_CURRENT_TKIP_IV,
+4 -2
drivers/net/wireless/rt2x00/rt2x00.h
··· 849 849 static inline void rt2x00_rf_read(struct rt2x00_dev *rt2x00dev, 850 850 const unsigned int word, u32 *data) 851 851 { 852 - *data = rt2x00dev->rf[word]; 852 + BUG_ON(word < 1 || word > rt2x00dev->ops->rf_size / sizeof(u32)); 853 + *data = rt2x00dev->rf[word - 1]; 853 854 } 854 855 855 856 static inline void rt2x00_rf_write(struct rt2x00_dev *rt2x00dev, 856 857 const unsigned int word, u32 data) 857 858 { 858 - rt2x00dev->rf[word] = data; 859 + BUG_ON(word < 1 || word > rt2x00dev->ops->rf_size / sizeof(u32)); 860 + rt2x00dev->rf[word - 1] = data; 859 861 } 860 862 861 863 /*
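The rt2x00 hunk makes the RF word accessors 1-based at the call site while the backing array stays 0-based, with a bounds check on the way in. A tiny stand-alone model of that mapping (names and the BUG_ON-as-silent-ignore behavior here are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define RF_WORDS 4
static unsigned int rf[RF_WORDS];

/* Callers address RF words 1..RF_WORDS; slot 0 does not exist. */
static bool rf_word_valid(unsigned int word)
{
    return word >= 1 && word <= RF_WORDS;
}

static void rf_write(unsigned int word, unsigned int data)
{
    if (rf_word_valid(word))
        rf[word - 1] = data;    /* 1-based word -> 0-based slot */
}

static unsigned int rf_read(unsigned int word)
{
    return rf_word_valid(word) ? rf[word - 1] : 0;
}

static unsigned int demo_roundtrip(void)
{
    rf_write(1, 0x1234);
    rf_write(RF_WORDS + 1, 0xdead);  /* out of range: ignored here */
    return rf_read(1);
}
```

In the driver the out-of-range case trips BUG_ON instead of being ignored; the point of the sketch is only the `word - 1` translation and its valid range.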
+10 -4
drivers/net/wireless/rtl818x/rtl8187_dev.c
··· 869 869 priv->aifsn[3] = 3; /* AIFSN[AC_BE] */ 870 870 rtl818x_iowrite8(priv, &priv->map->ACM_CONTROL, 0); 871 871 872 + /* ENEDCA flag must always be set, transmit issues? */ 873 + rtl818x_iowrite8(priv, &priv->map->MSR, RTL818X_MSR_ENEDCA); 874 + 872 875 return 0; 873 876 } 874 877 ··· 1176 1173 rtl818x_iowrite8(priv, &priv->map->BSSID[i], 1177 1174 info->bssid[i]); 1178 1175 1176 + if (priv->is_rtl8187b) 1177 + reg = RTL818X_MSR_ENEDCA; 1178 + else 1179 + reg = 0; 1180 + 1179 1181 if (is_valid_ether_addr(info->bssid)) { 1180 - reg = RTL818X_MSR_INFRA; 1181 - if (priv->is_rtl8187b) 1182 - reg |= RTL818X_MSR_ENEDCA; 1182 + reg |= RTL818X_MSR_INFRA; 1183 1183 rtl818x_iowrite8(priv, &priv->map->MSR, reg); 1184 1184 } else { 1185 - reg = RTL818X_MSR_NO_LINK; 1185 + reg |= RTL818X_MSR_NO_LINK; 1186 1186 rtl818x_iowrite8(priv, &priv->map->MSR, reg); 1187 1187 } 1188 1188
+18 -10
drivers/net/yellowfin.c
··· 346 346 static int yellowfin_open(struct net_device *dev); 347 347 static void yellowfin_timer(unsigned long data); 348 348 static void yellowfin_tx_timeout(struct net_device *dev); 349 - static void yellowfin_init_ring(struct net_device *dev); 349 + static int yellowfin_init_ring(struct net_device *dev); 350 350 static int yellowfin_start_xmit(struct sk_buff *skb, struct net_device *dev); 351 351 static irqreturn_t yellowfin_interrupt(int irq, void *dev_instance); 352 352 static int yellowfin_rx(struct net_device *dev); ··· 573 573 { 574 574 struct yellowfin_private *yp = netdev_priv(dev); 575 575 void __iomem *ioaddr = yp->base; 576 - int i; 576 + int i, ret; 577 577 578 578 /* Reset the chip. */ 579 579 iowrite32(0x80000000, ioaddr + DMACtrl); 580 580 581 - i = request_irq(dev->irq, &yellowfin_interrupt, IRQF_SHARED, dev->name, dev); 582 - if (i) return i; 581 + ret = request_irq(dev->irq, &yellowfin_interrupt, IRQF_SHARED, dev->name, dev); 582 + if (ret) 583 + return ret; 583 584 584 585 if (yellowfin_debug > 1) 585 586 printk(KERN_DEBUG "%s: yellowfin_open() irq %d.\n", 586 587 dev->name, dev->irq); 587 588 588 - yellowfin_init_ring(dev); 589 + ret = yellowfin_init_ring(dev); 590 + if (ret) { 591 + free_irq(dev->irq, dev); 592 + return ret; 593 + } 589 594 590 595 iowrite32(yp->rx_ring_dma, ioaddr + RxPtr); 591 596 iowrite32(yp->tx_ring_dma, ioaddr + TxPtr); ··· 730 725 } 731 726 732 727 /* Initialize the Rx and Tx rings, along with various 'dev' bits. 
*/ 733 - static void yellowfin_init_ring(struct net_device *dev) 728 + static int yellowfin_init_ring(struct net_device *dev) 734 729 { 735 730 struct yellowfin_private *yp = netdev_priv(dev); 736 - int i; 731 + int i, j; 737 732 738 733 yp->tx_full = 0; 739 734 yp->cur_rx = yp->cur_tx = 0; ··· 758 753 yp->rx_ring[i].addr = cpu_to_le32(pci_map_single(yp->pci_dev, 759 754 skb->data, yp->rx_buf_sz, PCI_DMA_FROMDEVICE)); 760 755 } 756 + if (i != RX_RING_SIZE) { 757 + for (j = 0; j < i; j++) 758 + dev_kfree_skb(yp->rx_skbuff[j]); 759 + return -ENOMEM; 760 + } 761 761 yp->rx_ring[i-1].dbdma_cmd = cpu_to_le32(CMD_STOP); 762 762 yp->dirty_rx = (unsigned int)(i - RX_RING_SIZE); 763 763 ··· 779 769 yp->tx_ring[--i].dbdma_cmd = cpu_to_le32(CMD_STOP | BRANCH_ALWAYS); 780 770 #else 781 771 { 782 - int j; 783 - 784 772 /* Tx ring needs a pair of descriptors, the second for the status. */ 785 773 for (i = 0; i < TX_RING_SIZE; i++) { 786 774 j = 2*i; ··· 813 805 } 814 806 #endif 815 807 yp->tx_tail_desc = &yp->tx_status[0]; 816 - return; 808 + return 0; 817 809 } 818 810 819 811 static int yellowfin_start_xmit(struct sk_buff *skb, struct net_device *dev)
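The yellowfin hunks make ring setup undo itself: if buffer i fails to allocate, buffers 0..i-1 are freed and -ENOMEM propagates to the caller, which then releases the IRQ. A mock allocator (hypothetical, userspace-only) demonstrates the unwind:

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE 8

static int alloc_budget;        /* allocations that will succeed */
static int live_buffers;        /* currently allocated buffers */

static bool mock_alloc(void)
{
    if (alloc_budget == 0)
        return false;
    alloc_budget--;
    live_buffers++;
    return true;
}

static void mock_free(void) { live_buffers--; }

static int init_ring(void)
{
    int i, j;

    for (i = 0; i < RING_SIZE; i++)
        if (!mock_alloc())
            break;
    if (i != RING_SIZE) {
        for (j = 0; j < i; j++) /* free the partial allocation */
            mock_free();
        return -12;             /* -ENOMEM */
    }
    return 0;
}

static int demo_partial_failure(void)
{
    alloc_budget = 3;           /* only 3 of 8 buffers fit */
    live_buffers = 0;
    return init_ring();
}

static int demo_success(void)
{
    alloc_budget = RING_SIZE;
    live_buffers = 0;
    return init_ring();
}
```

Failing halfway must leave no buffers live, which is what the added `for (j = 0; j < i; j++) dev_kfree_skb(...)` loop guarantees.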
+3
drivers/net/zorro8390.c
··· 120 120 for (i = ARRAY_SIZE(cards)-1; i >= 0; i--) 121 121 if (z->id == cards[i].id) 122 122 break; 123 + if (i < 0) 124 + return -ENODEV; 125 + 123 126 board = z->resource.start; 124 127 ioaddr = board+cards[i].offset; 125 128 dev = alloc_ei_netdev();
+23
drivers/pci/iov.c
··· 598 598 } 599 599 600 600 /** 601 + * pci_sriov_resource_alignment - get resource alignment for VF BAR 602 + * @dev: the PCI device 603 + * @resno: the resource number 604 + * 605 + * Returns the alignment of the VF BAR found in the SR-IOV capability. 606 + * This is not the same as the resource size which is defined as 607 + * the VF BAR size multiplied by the number of VFs. The alignment 608 + * is just the VF BAR size. 609 + */ 610 + int pci_sriov_resource_alignment(struct pci_dev *dev, int resno) 611 + { 612 + struct resource tmp; 613 + enum pci_bar_type type; 614 + int reg = pci_iov_resource_bar(dev, resno, &type); 615 + 616 + if (!reg) 617 + return 0; 618 + 619 + __pci_read_base(dev, type, &tmp, reg); 620 + return resource_alignment(&tmp); 621 + } 622 + 623 + /** 601 624 * pci_restore_iov_state - restore the state of the IOV capability 602 625 * @dev: the PCI device 603 626 */
+1 -1
drivers/pci/pci-driver.c
··· 508 508 return error; 509 509 } 510 510 511 - return pci_dev->state_saved ? pci_restore_state(pci_dev) : 0; 511 + return pci_restore_state(pci_dev); 512 512 } 513 513 514 514 static void pci_pm_default_resume_noirq(struct pci_dev *pci_dev)
+2
drivers/pci/pci.c
··· 846 846 int i; 847 847 u32 val; 848 848 849 + if (!dev->state_saved) 850 + return 0; 849 851 /* PCI Express register must be restored first */ 850 852 pci_restore_pcie_state(dev); 851 853
+13
drivers/pci/pci.h
··· 243 243 extern void pci_iov_release(struct pci_dev *dev); 244 244 extern int pci_iov_resource_bar(struct pci_dev *dev, int resno, 245 245 enum pci_bar_type *type); 246 + extern int pci_sriov_resource_alignment(struct pci_dev *dev, int resno); 246 247 extern void pci_restore_iov_state(struct pci_dev *dev); 247 248 extern int pci_iov_bus_range(struct pci_bus *bus); 248 249 ··· 298 297 return 0; 299 298 } 300 299 #endif /* CONFIG_PCI_IOV */ 300 + 301 + static inline int pci_resource_alignment(struct pci_dev *dev, 302 + struct resource *res) 303 + { 304 + #ifdef CONFIG_PCI_IOV 305 + int resno = res - dev->resource; 306 + 307 + if (resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END) 308 + return pci_sriov_resource_alignment(dev, resno); 309 + #endif 310 + return resource_alignment(res); 311 + } 301 312 302 313 #endif /* DRIVERS_PCI_H */
+2 -2
drivers/pci/setup-bus.c
··· 25 25 #include <linux/ioport.h> 26 26 #include <linux/cache.h> 27 27 #include <linux/slab.h> 28 - 28 + #include "pci.h" 29 29 30 30 static void pbus_assign_resources_sorted(const struct pci_bus *bus) 31 31 { ··· 384 384 continue; 385 385 r_size = resource_size(r); 386 386 /* For bridges size != alignment */ 387 - align = resource_alignment(r); 387 + align = pci_resource_alignment(dev, r); 388 388 order = __ffs(align) - 20; 389 389 if (order > 11) { 390 390 dev_warn(&dev->dev, "BAR %d bad alignment %llx: "
+4 -4
drivers/pci/setup-res.c
··· 144 144 145 145 size = resource_size(res); 146 146 min = (res->flags & IORESOURCE_IO) ? PCIBIOS_MIN_IO : PCIBIOS_MIN_MEM; 147 - align = resource_alignment(res); 147 + align = pci_resource_alignment(dev, res); 148 148 149 149 /* First, try exact prefetching match.. */ 150 150 ret = pci_bus_alloc_resource(bus, res, size, align, min, ··· 178 178 struct pci_bus *bus; 179 179 int ret; 180 180 181 - align = resource_alignment(res); 181 + align = pci_resource_alignment(dev, res); 182 182 if (!align) { 183 183 dev_info(&dev->dev, "BAR %d: can't allocate resource (bogus " 184 184 "alignment) %pR flags %#lx\n", ··· 259 259 if (!(r->flags) || r->parent) 260 260 continue; 261 261 262 - r_align = resource_alignment(r); 262 + r_align = pci_resource_alignment(dev, r); 263 263 if (!r_align) { 264 264 dev_warn(&dev->dev, "BAR %d: bogus alignment " 265 265 "%pR flags %#lx\n", ··· 271 271 struct resource_list *ln = list->next; 272 272 273 273 if (ln) 274 - align = resource_alignment(ln->res); 274 + align = pci_resource_alignment(ln->dev, ln->res); 275 275 276 276 if (r_align > align) { 277 277 tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
+1
drivers/platform/x86/toshiba_acpi.c
··· 335 335 if (hci_result != HCI_SUCCESS) { 336 336 /* Can't do anything useful */ 337 337 mutex_unlock(&dev->mutex); 338 + return; 338 339 } 339 340 340 341 new_rfk_state = value;
+4 -4
drivers/platform/x86/wmi.c
··· 270 270 acpi_status status; 271 271 struct acpi_object_list input; 272 272 union acpi_object params[3]; 273 - char method[4] = "WM"; 273 + char method[5] = "WM"; 274 274 275 275 if (!find_guid(guid_string, &wblock)) 276 276 return AE_ERROR; ··· 328 328 acpi_status status, wc_status = AE_ERROR; 329 329 struct acpi_object_list input, wc_input; 330 330 union acpi_object wc_params[1], wq_params[1]; 331 - char method[4]; 332 - char wc_method[4] = "WC"; 331 + char method[5]; 332 + char wc_method[5] = "WC"; 333 333 334 334 if (!guid_string || !out) 335 335 return AE_BAD_PARAMETER; ··· 410 410 acpi_handle handle; 411 411 struct acpi_object_list input; 412 412 union acpi_object params[2]; 413 - char method[4] = "WS"; 413 + char method[5] = "WS"; 414 414 415 415 if (!guid_string || !in) 416 416 return AE_BAD_DATA;
+1 -1
drivers/pps/pps.c
··· 244 244 } 245 245 pps->dev = device_create(pps_class, pps->info.dev, pps->devno, NULL, 246 246 "pps%d", pps->id); 247 - if (err) 247 + if (IS_ERR(pps->dev)) 248 248 goto del_cdev; 249 249 dev_set_drvdata(pps->dev, pps); 250 250
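The pps fix above checks IS_ERR(pps->dev) rather than a stale `err` variable, because device_create() reports failure through the pointer itself. The kernel encodes small negative errnos in the top of the address space; a self-contained re-implementation of that convention (a sketch of the idea, not the kernel's err.h):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define MAX_ERRNO 4095

static void *err_ptr(long error)        /* like ERR_PTR() */
{
    return (void *)(intptr_t)error;
}

static bool is_err(const void *ptr)     /* like IS_ERR() */
{
    /* errnos -1..-MAX_ERRNO land in the last 4095 addresses */
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static long ptr_err(const void *ptr)    /* like PTR_ERR() */
{
    return (long)(intptr_t)ptr;
}
```

Since a valid kernel pointer never falls in that range, one return value carries either a usable object or an errno, and the caller must test with IS_ERR(), not with whatever error code a previous call happened to leave behind.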
+1 -1
drivers/s390/block/dasd.c
··· 2135 2135 struct dasd_device *base; 2136 2136 2137 2137 block = bdev->bd_disk->private_data; 2138 - base = block->base; 2139 2138 if (!block) 2140 2139 return -ENODEV; 2140 + base = block->base; 2141 2141 2142 2142 if (!base->discipline || 2143 2143 !base->discipline->fill_geometry)
+1 -3
drivers/s390/cio/device.c
··· 772 772 cdev = io_subchannel_allocate_dev(sch); 773 773 if (!IS_ERR(cdev)) { 774 774 ret = io_subchannel_initialize_dev(sch, cdev); 775 - if (ret) { 776 - kfree(cdev); 775 + if (ret) 777 776 cdev = ERR_PTR(ret); 778 - } 779 777 } 780 778 return cdev; 781 779 }
+8 -3
drivers/sbus/char/bbc_envctrl.c
··· 537 537 } 538 538 if (temp_index != 0 && fan_index != 0) { 539 539 kenvctrld_task = kthread_run(kenvctrld, NULL, "kenvctrld"); 540 - if (IS_ERR(kenvctrld_task)) 541 - return PTR_ERR(kenvctrld_task); 540 + if (IS_ERR(kenvctrld_task)) { 541 + int err = PTR_ERR(kenvctrld_task); 542 + 543 + kenvctrld_task = NULL; 544 + return err; 545 + } 542 546 } 543 547 544 548 return 0; ··· 565 561 struct bbc_cpu_temperature *tp, *tpos; 566 562 struct bbc_fan_control *fp, *fpos; 567 563 568 - kthread_stop(kenvctrld_task); 564 + if (kenvctrld_task) 565 + kthread_stop(kenvctrld_task); 569 566 570 567 list_for_each_entry_safe(tp, tpos, &bp->temps, bp_list) { 571 568 list_del(&tp->bp_list);
+67 -31
drivers/scsi/mpt2sas/mpt2sas_base.c
··· 119 119 spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags); 120 120 } 121 121 122 + /** 123 + * mpt2sas_base_start_watchdog - start the fault_reset_work_q 124 + * @ioc: pointer to scsi command object 125 + * Context: sleep. 126 + * 127 + * Return nothing. 128 + */ 129 + void 130 + mpt2sas_base_start_watchdog(struct MPT2SAS_ADAPTER *ioc) 131 + { 132 + unsigned long flags; 133 + 134 + if (ioc->fault_reset_work_q) 135 + return; 136 + 137 + /* initialize fault polling */ 138 + INIT_DELAYED_WORK(&ioc->fault_reset_work, _base_fault_reset_work); 139 + snprintf(ioc->fault_reset_work_q_name, 140 + sizeof(ioc->fault_reset_work_q_name), "poll_%d_status", ioc->id); 141 + ioc->fault_reset_work_q = 142 + create_singlethread_workqueue(ioc->fault_reset_work_q_name); 143 + if (!ioc->fault_reset_work_q) { 144 + printk(MPT2SAS_ERR_FMT "%s: failed (line=%d)\n", 145 + ioc->name, __func__, __LINE__); 146 + return; 147 + } 148 + spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags); 149 + if (ioc->fault_reset_work_q) 150 + queue_delayed_work(ioc->fault_reset_work_q, 151 + &ioc->fault_reset_work, 152 + msecs_to_jiffies(FAULT_POLLING_INTERVAL)); 153 + spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags); 154 + } 155 + 156 + /** 157 + * mpt2sas_base_stop_watchdog - stop the fault_reset_work_q 158 + * @ioc: pointer to scsi command object 159 + * Context: sleep. 160 + * 161 + * Return nothing. 
162 + */ 163 + void 164 + mpt2sas_base_stop_watchdog(struct MPT2SAS_ADAPTER *ioc) 165 + { 166 + unsigned long flags; 167 + struct workqueue_struct *wq; 168 + 169 + spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags); 170 + wq = ioc->fault_reset_work_q; 171 + ioc->fault_reset_work_q = NULL; 172 + spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags); 173 + if (wq) { 174 + if (!cancel_delayed_work(&ioc->fault_reset_work)) 175 + flush_workqueue(wq); 176 + destroy_workqueue(wq); 177 + } 178 + } 179 + 122 180 #ifdef CONFIG_SCSI_MPT2SAS_LOGGING 123 181 /** 124 182 * _base_sas_ioc_info - verbose translation of the ioc status ··· 496 438 497 439 sas_loginfo.loginfo = log_info; 498 440 if (sas_loginfo.dw.bus_type != 3 /*SAS*/) 441 + return; 442 + 443 + /* each nexus loss loginfo */ 444 + if (log_info == 0x31170000) 499 445 return; 500 446 501 447 /* eat the loginfos associated with task aborts */ ··· 1171 1109 } 1172 1110 } 1173 1111 1174 - pci_set_drvdata(pdev, ioc->shost); 1175 1112 _base_mask_interrupts(ioc); 1176 1113 r = _base_enable_msix(ioc); 1177 1114 if (r) ··· 1193 1132 ioc->pci_irq = -1; 1194 1133 pci_release_selected_regions(ioc->pdev, ioc->bars); 1195 1134 pci_disable_device(pdev); 1196 - pci_set_drvdata(pdev, NULL); 1197 1135 return r; 1198 1136 } 1199 1137 ··· 3251 3191 ioc->chip_phys = 0; 3252 3192 pci_release_selected_regions(ioc->pdev, ioc->bars); 3253 3193 pci_disable_device(pdev); 3254 - pci_set_drvdata(pdev, NULL); 3255 3194 return; 3256 3195 } 3257 3196 ··· 3264 3205 mpt2sas_base_attach(struct MPT2SAS_ADAPTER *ioc) 3265 3206 { 3266 3207 int r, i; 3267 - unsigned long flags; 3268 3208 3269 3209 dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name, 3270 3210 __func__)); ··· 3272 3214 if (r) 3273 3215 return r; 3274 3216 3217 + pci_set_drvdata(ioc->pdev, ioc->shost); 3275 3218 r = _base_make_ioc_ready(ioc, CAN_SLEEP, SOFT_RESET); 3276 3219 if (r) 3277 3220 goto out_free_resources; ··· 3347 3288 if (r) 3348 3289 goto 
out_free_resources; 3349 3290 3350 - /* initialize fault polling */ 3351 - INIT_DELAYED_WORK(&ioc->fault_reset_work, _base_fault_reset_work); 3352 - snprintf(ioc->fault_reset_work_q_name, 3353 - sizeof(ioc->fault_reset_work_q_name), "poll_%d_status", ioc->id); 3354 - ioc->fault_reset_work_q = 3355 - create_singlethread_workqueue(ioc->fault_reset_work_q_name); 3356 - if (!ioc->fault_reset_work_q) { 3357 - printk(MPT2SAS_ERR_FMT "%s: failed (line=%d)\n", 3358 - ioc->name, __func__, __LINE__); 3359 - goto out_free_resources; 3360 - } 3361 - spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags); 3362 - if (ioc->fault_reset_work_q) 3363 - queue_delayed_work(ioc->fault_reset_work_q, 3364 - &ioc->fault_reset_work, 3365 - msecs_to_jiffies(FAULT_POLLING_INTERVAL)); 3366 - spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags); 3291 + mpt2sas_base_start_watchdog(ioc); 3367 3292 return 0; 3368 3293 3369 3294 out_free_resources: ··· 3355 3312 ioc->remove_host = 1; 3356 3313 mpt2sas_base_free_resources(ioc); 3357 3314 _base_release_memory_pools(ioc); 3315 + pci_set_drvdata(ioc->pdev, NULL); 3358 3316 kfree(ioc->tm_cmds.reply); 3359 3317 kfree(ioc->transport_cmds.reply); 3360 3318 kfree(ioc->config_cmds.reply); ··· 3381 3337 void 3382 3338 mpt2sas_base_detach(struct MPT2SAS_ADAPTER *ioc) 3383 3339 { 3384 - unsigned long flags; 3385 - struct workqueue_struct *wq; 3386 3340 3387 3341 dexitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name, 3388 3342 __func__)); 3389 3343 3390 - spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags); 3391 - wq = ioc->fault_reset_work_q; 3392 - ioc->fault_reset_work_q = NULL; 3393 - spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags); 3394 - if (!cancel_delayed_work(&ioc->fault_reset_work)) 3395 - flush_workqueue(wq); 3396 - destroy_workqueue(wq); 3397 - 3344 + mpt2sas_base_stop_watchdog(ioc); 3398 3345 mpt2sas_base_free_resources(ioc); 3399 3346 _base_release_memory_pools(ioc); 3347 + pci_set_drvdata(ioc->pdev, 
NULL); 3400 3348 kfree(ioc->pfacts); 3401 3349 kfree(ioc->ctl_cmds.reply); 3402 3350 kfree(ioc->base_cmds.reply);
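The new mpt2sas_base_stop_watchdog() above snapshots the workqueue pointer under the lock and NULLs it before tearing it down outside the lock, so concurrent re-arm attempts see the watchdog as gone. A minimal userspace sketch of that handoff pattern, with a pthread mutex standing in for the driver's spinlock and a plain pointer standing in for the workqueue (all names here are illustrative, not from the driver):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct adapter {
	pthread_mutex_t lock;
	int *watchdog;			/* stands in for fault_reset_work_q */
};

static int wq_storage = 1;		/* stands in for the real workqueue */
static struct adapter g_adapter = { PTHREAD_MUTEX_INITIALIZER, NULL };

static void start_watchdog(struct adapter *a)
{
	if (a->watchdog)
		return;			/* already running, mirror the early return */
	a->watchdog = &wq_storage;
}

static int *stop_watchdog(struct adapter *a)
{
	int *wq;

	pthread_mutex_lock(&a->lock);
	wq = a->watchdog;		/* snapshot under the lock... */
	a->watchdog = NULL;		/* ...so other paths see it as stopped */
	pthread_mutex_unlock(&a->lock);

	/* cancel/flush/destroy would run here, outside the lock,
	 * using only the snapshot `wq` */
	return wq;
}
```

The point of the handoff is that the slow teardown (cancel_delayed_work/flush_workqueue/destroy_workqueue in the real driver) never runs while the lock is held, yet no other CPU can queue new work once the pointer is NULLed.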
+4 -2
drivers/scsi/mpt2sas/mpt2sas_base.h
··· 69 69 #define MPT2SAS_DRIVER_NAME "mpt2sas" 70 70 #define MPT2SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>" 71 71 #define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver" 72 - #define MPT2SAS_DRIVER_VERSION "01.100.03.00" 72 + #define MPT2SAS_DRIVER_VERSION "01.100.04.00" 73 73 #define MPT2SAS_MAJOR_VERSION 01 74 74 #define MPT2SAS_MINOR_VERSION 100 75 - #define MPT2SAS_BUILD_VERSION 03 75 + #define MPT2SAS_BUILD_VERSION 04 76 76 #define MPT2SAS_RELEASE_VERSION 00 77 77 78 78 /* ··· 673 673 674 674 /* base shared API */ 675 675 extern struct list_head mpt2sas_ioc_list; 676 + void mpt2sas_base_start_watchdog(struct MPT2SAS_ADAPTER *ioc); 677 + void mpt2sas_base_stop_watchdog(struct MPT2SAS_ADAPTER *ioc); 676 678 677 679 int mpt2sas_base_attach(struct MPT2SAS_ADAPTER *ioc); 678 680 void mpt2sas_base_detach(struct MPT2SAS_ADAPTER *ioc);
+26 -65
drivers/scsi/mpt2sas/mpt2sas_config.c
··· 236 236 Mpi2ConfigRequest_t *config_request; 237 237 int r; 238 238 u8 retry_count; 239 - u8 issue_reset; 239 + u8 issue_host_reset = 0; 240 240 u16 wait_state_count; 241 241 242 + mutex_lock(&ioc->config_cmds.mutex); 242 243 if (ioc->config_cmds.status != MPT2_CMD_NOT_USED) { 243 244 printk(MPT2SAS_ERR_FMT "%s: config_cmd in use\n", 244 245 ioc->name, __func__); 246 + mutex_unlock(&ioc->config_cmds.mutex); 245 247 return -EAGAIN; 246 248 } 247 249 retry_count = 0; 248 250 249 251 retry_config: 252 + if (retry_count) { 253 + if (retry_count > 2) /* attempt only 2 retries */ 254 + return -EFAULT; 255 + printk(MPT2SAS_INFO_FMT "%s: attempting retry (%d)\n", 256 + ioc->name, __func__, retry_count); 257 + } 250 258 wait_state_count = 0; 251 259 ioc_state = mpt2sas_base_get_iocstate(ioc, 1); 252 260 while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { ··· 262 254 printk(MPT2SAS_ERR_FMT 263 255 "%s: failed due to ioc not operational\n", 264 256 ioc->name, __func__); 265 - ioc->config_cmds.status = MPT2_CMD_NOT_USED; 266 - return -EFAULT; 257 + r = -EFAULT; 258 + goto out; 267 259 } 268 260 ssleep(1); 269 261 ioc_state = mpt2sas_base_get_iocstate(ioc, 1); ··· 279 271 if (!smid) { 280 272 printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n", 281 273 ioc->name, __func__); 282 - ioc->config_cmds.status = MPT2_CMD_NOT_USED; 283 - return -EAGAIN; 274 + r = -EAGAIN; 275 + goto out; 284 276 } 285 277 286 278 r = 0; ··· 300 292 ioc->name, __func__); 301 293 _debug_dump_mf(mpi_request, 302 294 sizeof(Mpi2ConfigRequest_t)/4); 303 - if (!(ioc->config_cmds.status & MPT2_CMD_RESET)) 304 - issue_reset = 1; 305 - goto issue_host_reset; 295 + retry_count++; 296 + if (ioc->config_cmds.smid == smid) 297 + mpt2sas_base_free_smid(ioc, smid); 298 + if ((ioc->shost_recovery) || 299 + (ioc->config_cmds.status & MPT2_CMD_RESET)) 300 + goto retry_config; 301 + issue_host_reset = 1; 302 + r = -EFAULT; 303 + goto out; 306 304 } 307 305 if (ioc->config_cmds.status & MPT2_CMD_REPLY_VALID) 308 306 
memcpy(mpi_reply, ioc->config_cmds.reply, ··· 316 302 if (retry_count) 317 303 printk(MPT2SAS_INFO_FMT "%s: retry completed!!\n", 318 304 ioc->name, __func__); 305 + out: 319 306 ioc->config_cmds.status = MPT2_CMD_NOT_USED; 320 - return r; 321 - 322 - issue_host_reset: 323 - if (issue_reset) 307 + mutex_unlock(&ioc->config_cmds.mutex); 308 + if (issue_host_reset) 324 309 mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP, 325 310 FORCE_BIG_HAMMER); 326 - ioc->config_cmds.status = MPT2_CMD_NOT_USED; 327 - if (!retry_count) { 328 - printk(MPT2SAS_INFO_FMT "%s: attempting retry\n", 329 - ioc->name, __func__); 330 - retry_count++; 331 - goto retry_config; 332 - } 333 - return -EFAULT; 311 + return r; 334 312 } 335 313 336 314 /** ··· 381 375 int r; 382 376 struct config_request mem; 383 377 384 - mutex_lock(&ioc->config_cmds.mutex); 385 378 memset(config_page, 0, sizeof(Mpi2ManufacturingPage0_t)); 386 379 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 387 380 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 422 417 _config_free_config_dma_memory(ioc, &mem); 423 418 424 419 out: 425 - mutex_unlock(&ioc->config_cmds.mutex); 426 420 return r; 427 421 } 428 422 ··· 442 438 int r; 443 439 struct config_request mem; 444 440 445 - mutex_lock(&ioc->config_cmds.mutex); 446 441 memset(config_page, 0, sizeof(Mpi2BiosPage2_t)); 447 442 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 448 443 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 483 480 _config_free_config_dma_memory(ioc, &mem); 484 481 485 482 out: 486 - mutex_unlock(&ioc->config_cmds.mutex); 487 483 return r; 488 484 } 489 485 ··· 503 501 int r; 504 502 struct config_request mem; 505 503 506 - mutex_lock(&ioc->config_cmds.mutex); 507 504 memset(config_page, 0, sizeof(Mpi2BiosPage3_t)); 508 505 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 509 506 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 544 543 _config_free_config_dma_memory(ioc, &mem); 545 544 546 545 out: 547 - 
mutex_unlock(&ioc->config_cmds.mutex); 548 546 return r; 549 547 } 550 548 ··· 564 564 int r; 565 565 struct config_request mem; 566 566 567 - mutex_lock(&ioc->config_cmds.mutex); 568 567 memset(config_page, 0, sizeof(Mpi2IOUnitPage0_t)); 569 568 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 570 569 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 605 606 _config_free_config_dma_memory(ioc, &mem); 606 607 607 608 out: 608 - mutex_unlock(&ioc->config_cmds.mutex); 609 609 return r; 610 610 } 611 611 ··· 625 627 int r; 626 628 struct config_request mem; 627 629 628 - mutex_lock(&ioc->config_cmds.mutex); 629 630 memset(config_page, 0, sizeof(Mpi2IOUnitPage1_t)); 630 631 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 631 632 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 666 669 _config_free_config_dma_memory(ioc, &mem); 667 670 668 671 out: 669 - mutex_unlock(&ioc->config_cmds.mutex); 670 672 return r; 671 673 } 672 674 ··· 686 690 int r; 687 691 struct config_request mem; 688 692 689 - mutex_lock(&ioc->config_cmds.mutex); 690 693 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 691 694 mpi_request.Function = MPI2_FUNCTION_CONFIG; 692 695 mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER; ··· 727 732 _config_free_config_dma_memory(ioc, &mem); 728 733 729 734 out: 730 - mutex_unlock(&ioc->config_cmds.mutex); 731 735 return r; 732 736 } 733 737 ··· 747 753 int r; 748 754 struct config_request mem; 749 755 750 - mutex_lock(&ioc->config_cmds.mutex); 751 756 memset(config_page, 0, sizeof(Mpi2IOCPage8_t)); 752 757 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 753 758 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 788 795 _config_free_config_dma_memory(ioc, &mem); 789 796 790 797 out: 791 - mutex_unlock(&ioc->config_cmds.mutex); 792 798 return r; 793 799 } 794 800 ··· 810 818 int r; 811 819 struct config_request mem; 812 820 813 - mutex_lock(&ioc->config_cmds.mutex); 814 821 memset(config_page, 0, sizeof(Mpi2SasDevicePage0_t)); 815 822 
memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 816 823 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 854 863 _config_free_config_dma_memory(ioc, &mem); 855 864 856 865 out: 857 - mutex_unlock(&ioc->config_cmds.mutex); 858 866 return r; 859 867 } 860 868 ··· 876 886 int r; 877 887 struct config_request mem; 878 888 879 - mutex_lock(&ioc->config_cmds.mutex); 880 889 memset(config_page, 0, sizeof(Mpi2SasDevicePage1_t)); 881 890 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 882 891 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 920 931 _config_free_config_dma_memory(ioc, &mem); 921 932 922 933 out: 923 - mutex_unlock(&ioc->config_cmds.mutex); 924 934 return r; 925 935 } 926 936 ··· 941 953 Mpi2ConfigReply_t mpi_reply; 942 954 Mpi2SasIOUnitPage0_t config_page; 943 955 944 - mutex_lock(&ioc->config_cmds.mutex); 945 956 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 946 957 mpi_request.Function = MPI2_FUNCTION_CONFIG; 947 958 mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER; ··· 989 1002 _config_free_config_dma_memory(ioc, &mem); 990 1003 991 1004 out: 992 - mutex_unlock(&ioc->config_cmds.mutex); 993 1005 return r; 994 1006 } 995 1007 ··· 1012 1026 Mpi2ConfigRequest_t mpi_request; 1013 1027 int r; 1014 1028 struct config_request mem; 1015 - 1016 - mutex_lock(&ioc->config_cmds.mutex); 1017 1029 memset(config_page, 0, sz); 1018 1030 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1019 1031 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1054 1070 _config_free_config_dma_memory(ioc, &mem); 1055 1071 1056 1072 out: 1057 - mutex_unlock(&ioc->config_cmds.mutex); 1058 1073 return r; 1059 1074 } 1060 1075 ··· 1078 1095 int r; 1079 1096 struct config_request mem; 1080 1097 1081 - mutex_lock(&ioc->config_cmds.mutex); 1082 1098 memset(config_page, 0, sz); 1083 1099 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1084 1100 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1120 1138 _config_free_config_dma_memory(ioc, &mem); 1121 1139 1122 
1140 out: 1123 - mutex_unlock(&ioc->config_cmds.mutex); 1124 1141 return r; 1125 1142 } 1126 1143 ··· 1142 1161 int r; 1143 1162 struct config_request mem; 1144 1163 1145 - mutex_lock(&ioc->config_cmds.mutex); 1146 1164 memset(config_page, 0, sizeof(Mpi2ExpanderPage0_t)); 1147 1165 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1148 1166 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1186 1206 _config_free_config_dma_memory(ioc, &mem); 1187 1207 1188 1208 out: 1189 - mutex_unlock(&ioc->config_cmds.mutex); 1190 1209 return r; 1191 1210 } 1192 1211 ··· 1209 1230 int r; 1210 1231 struct config_request mem; 1211 1232 1212 - mutex_lock(&ioc->config_cmds.mutex); 1213 1233 memset(config_page, 0, sizeof(Mpi2ExpanderPage1_t)); 1214 1234 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1215 1235 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1255 1277 _config_free_config_dma_memory(ioc, &mem); 1256 1278 1257 1279 out: 1258 - mutex_unlock(&ioc->config_cmds.mutex); 1259 1280 return r; 1260 1281 } 1261 1282 ··· 1277 1300 int r; 1278 1301 struct config_request mem; 1279 1302 1280 - mutex_lock(&ioc->config_cmds.mutex); 1281 1303 memset(config_page, 0, sizeof(Mpi2SasEnclosurePage0_t)); 1282 1304 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1283 1305 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1321 1345 _config_free_config_dma_memory(ioc, &mem); 1322 1346 1323 1347 out: 1324 - mutex_unlock(&ioc->config_cmds.mutex); 1325 1348 return r; 1326 1349 } 1327 1350 ··· 1342 1367 int r; 1343 1368 struct config_request mem; 1344 1369 1345 - mutex_lock(&ioc->config_cmds.mutex); 1346 1370 memset(config_page, 0, sizeof(Mpi2SasPhyPage0_t)); 1347 1371 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1348 1372 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1387 1413 _config_free_config_dma_memory(ioc, &mem); 1388 1414 1389 1415 out: 1390 - mutex_unlock(&ioc->config_cmds.mutex); 1391 1416 return r; 1392 1417 } 1393 1418 ··· 1408 1435 int r; 1409 1436 struct 
config_request mem; 1410 1437 1411 - mutex_lock(&ioc->config_cmds.mutex); 1412 1438 memset(config_page, 0, sizeof(Mpi2SasPhyPage1_t)); 1413 1439 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1414 1440 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1453 1481 _config_free_config_dma_memory(ioc, &mem); 1454 1482 1455 1483 out: 1456 - mutex_unlock(&ioc->config_cmds.mutex); 1457 1484 return r; 1458 1485 } 1459 1486 ··· 1476 1505 int r; 1477 1506 struct config_request mem; 1478 1507 1479 - mutex_lock(&ioc->config_cmds.mutex); 1480 1508 memset(config_page, 0, sizeof(Mpi2RaidVolPage1_t)); 1481 1509 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1482 1510 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1518 1548 _config_free_config_dma_memory(ioc, &mem); 1519 1549 1520 1550 out: 1521 - mutex_unlock(&ioc->config_cmds.mutex); 1522 1551 return r; 1523 1552 } 1524 1553 ··· 1541 1572 struct config_request mem; 1542 1573 u16 ioc_status; 1543 1574 1544 - mutex_lock(&ioc->config_cmds.mutex); 1545 1575 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1546 1576 *num_pds = 0; 1547 1577 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1588 1620 _config_free_config_dma_memory(ioc, &mem); 1589 1621 1590 1622 out: 1591 - mutex_unlock(&ioc->config_cmds.mutex); 1592 1623 return r; 1593 1624 } 1594 1625 ··· 1612 1645 int r; 1613 1646 struct config_request mem; 1614 1647 1615 - mutex_lock(&ioc->config_cmds.mutex); 1616 1648 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1617 1649 memset(config_page, 0, sz); 1618 1650 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1653 1687 _config_free_config_dma_memory(ioc, &mem); 1654 1688 1655 1689 out: 1656 - mutex_unlock(&ioc->config_cmds.mutex); 1657 1690 return r; 1658 1691 } 1659 1692 ··· 1676 1711 int r; 1677 1712 struct config_request mem; 1678 1713 1679 - mutex_lock(&ioc->config_cmds.mutex); 1680 1714 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1681 1715 memset(config_page, 0, 
sizeof(Mpi2RaidPhysDiskPage0_t)); 1682 1716 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1718 1754 _config_free_config_dma_memory(ioc, &mem); 1719 1755 1720 1756 out: 1721 - mutex_unlock(&ioc->config_cmds.mutex); 1722 1757 return r; 1723 1758 } 1724 1759 ··· 1741 1778 struct config_request mem; 1742 1779 u16 ioc_status; 1743 1780 1744 - mutex_lock(&ioc->config_cmds.mutex); 1745 1781 *volume_handle = 0; 1746 1782 memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1747 1783 mpi_request.Function = MPI2_FUNCTION_CONFIG; ··· 1804 1842 _config_free_config_dma_memory(ioc, &mem); 1805 1843 1806 1844 out: 1807 - mutex_unlock(&ioc->config_cmds.mutex); 1808 1845 return r; 1809 1846 } 1810 1847
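The mpt2sas_config.c hunks above hoist the mutex out of every per-page helper into the single _config_request() chokepoint, with every error path funnelled through one `out:` label that clears the in-use flag and unlocks exactly once. A small sketch of that shape, using a pthread mutex in place of the driver's mutex and hypothetical stage numbers in place of the real failure conditions:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER;
static int cmd_in_use;			/* stands in for config_cmds.status */

static int config_request(int fail_stage)
{
	int r = 0;

	pthread_mutex_lock(&config_mutex);
	cmd_in_use = 1;

	if (fail_stage == 1) {		/* e.g. IOC not operational */
		r = -1;
		goto out;
	}
	if (fail_stage == 2) {		/* e.g. no smid available */
		r = -2;
		goto out;
	}
	/* issue the request, wait for completion ... */

out:
	cmd_in_use = 0;			/* cleared on every path */
	pthread_mutex_unlock(&config_mutex);	/* unlocked exactly once */
	return r;
}
```

Compared with the old code, callers no longer pair lock/unlock around each page accessor, so an early `return` inside a helper can never leak the mutex.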
+88 -36
drivers/scsi/mpt2sas/mpt2sas_scsih.c
··· 2767 2767 char *desc_ioc_state = NULL; 2768 2768 char *desc_scsi_status = NULL; 2769 2769 char *desc_scsi_state = ioc->tmp_string; 2770 + u32 log_info = le32_to_cpu(mpi_reply->IOCLogInfo); 2771 + 2772 + if (log_info == 0x31170000) 2773 + return; 2770 2774 2771 2775 switch (ioc_status) { 2772 2776 case MPI2_IOCSTATUS_SUCCESS: ··· 3430 3426 __le64 sas_address; 3431 3427 int i; 3432 3428 unsigned long flags; 3433 - struct _sas_port *mpt2sas_port; 3429 + struct _sas_port *mpt2sas_port = NULL; 3434 3430 int rc = 0; 3435 3431 3436 3432 if (!handle) ··· 3522 3518 &expander_pg1, i, handle))) { 3523 3519 printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", 3524 3520 ioc->name, __FILE__, __LINE__, __func__); 3525 - continue; 3521 + rc = -1; 3522 + goto out_fail; 3526 3523 } 3527 3524 sas_expander->phy[i].handle = handle; 3528 3525 sas_expander->phy[i].phy_id = i; 3529 - mpt2sas_transport_add_expander_phy(ioc, &sas_expander->phy[i], 3530 - expander_pg1, sas_expander->parent_dev); 3526 + 3527 + if ((mpt2sas_transport_add_expander_phy(ioc, 3528 + &sas_expander->phy[i], expander_pg1, 3529 + sas_expander->parent_dev))) { 3530 + printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", 3531 + ioc->name, __FILE__, __LINE__, __func__); 3532 + rc = -1; 3533 + goto out_fail; 3534 + } 3531 3535 } 3532 3536 3533 3537 if (sas_expander->enclosure_handle) { ··· 3552 3540 3553 3541 out_fail: 3554 3542 3555 - if (sas_expander) 3556 - kfree(sas_expander->phy); 3543 + if (mpt2sas_port) 3544 + mpt2sas_transport_port_remove(ioc, sas_expander->sas_address, 3545 + sas_expander->parent_handle); 3557 3546 kfree(sas_expander); 3558 3547 return rc; 3559 3548 } ··· 3676 3663 sas_device->hidden_raid_component = is_pd; 3677 3664 3678 3665 /* get enclosure_logical_id */ 3679 - if (!(mpt2sas_config_get_enclosure_pg0(ioc, &mpi_reply, &enclosure_pg0, 3680 - MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE, 3681 - sas_device->enclosure_handle))) { 3666 + if (sas_device->enclosure_handle && !(mpt2sas_config_get_enclosure_pg0( 3667 
+ ioc, &mpi_reply, &enclosure_pg0, MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE, 3668 + sas_device->enclosure_handle))) 3682 3669 sas_device->enclosure_logical_id = 3683 3670 le64_to_cpu(enclosure_pg0.EnclosureLogicalID); 3684 - } 3685 3671 3686 3672 /* get device name */ 3687 3673 sas_device->device_name = le64_to_cpu(sas_device_pg0.DeviceName); ··· 4262 4250 u16 handle = le16_to_cpu(element->VolDevHandle); 4263 4251 int rc; 4264 4252 4265 - #if 0 /* RAID_HACKS */ 4266 - if (le32_to_cpu(event_data->Flags) & 4267 - MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) 4268 - return; 4269 - #endif 4270 - 4271 4253 mpt2sas_config_get_volume_wwid(ioc, handle, &wwid); 4272 4254 if (!wwid) { 4273 4255 printk(MPT2SAS_ERR_FMT ··· 4315 4309 u16 handle = le16_to_cpu(element->VolDevHandle); 4316 4310 unsigned long flags; 4317 4311 struct MPT2SAS_TARGET *sas_target_priv_data; 4318 - 4319 - #if 0 /* RAID_HACKS */ 4320 - if (le32_to_cpu(event_data->Flags) & 4321 - MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) 4322 - return; 4323 - #endif 4324 4312 4325 4313 spin_lock_irqsave(&ioc->raid_device_lock, flags); 4326 4314 raid_device = _scsih_raid_device_find_by_handle(ioc, handle); ··· 4428 4428 struct _sas_device *sas_device; 4429 4429 unsigned long flags; 4430 4430 u16 handle = le16_to_cpu(element->PhysDiskDevHandle); 4431 + Mpi2ConfigReply_t mpi_reply; 4432 + Mpi2SasDevicePage0_t sas_device_pg0; 4433 + u32 ioc_status; 4431 4434 4432 4435 spin_lock_irqsave(&ioc->sas_device_lock, flags); 4433 4436 sas_device = _scsih_sas_device_find_by_handle(ioc, handle); 4434 4437 spin_unlock_irqrestore(&ioc->sas_device_lock, flags); 4435 - if (sas_device) 4438 + if (sas_device) { 4436 4439 sas_device->hidden_raid_component = 1; 4437 - else 4438 - _scsih_add_device(ioc, handle, 0, 1); 4440 + return; 4441 + } 4442 + 4443 + if ((mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply, &sas_device_pg0, 4444 + MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) { 4445 + printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", 4446 + ioc->name, 
__FILE__, __LINE__, __func__); 4447 + return; 4448 + } 4449 + 4450 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & 4451 + MPI2_IOCSTATUS_MASK; 4452 + if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { 4453 + printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", 4454 + ioc->name, __FILE__, __LINE__, __func__); 4455 + return; 4456 + } 4457 + 4458 + _scsih_link_change(ioc, 4459 + le16_to_cpu(sas_device_pg0.ParentDevHandle), 4460 + handle, sas_device_pg0.PhyNum, MPI2_SAS_NEG_LINK_RATE_1_5); 4461 + 4462 + _scsih_add_device(ioc, handle, 0, 1); 4439 4463 } 4440 4464 4441 4465 #ifdef CONFIG_SCSI_MPT2SAS_LOGGING ··· 4559 4535 { 4560 4536 Mpi2EventIrConfigElement_t *element; 4561 4537 int i; 4538 + u8 foreign_config; 4562 4539 4563 4540 #ifdef CONFIG_SCSI_MPT2SAS_LOGGING 4564 4541 if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK) 4565 4542 _scsih_sas_ir_config_change_event_debug(ioc, event_data); 4566 4543 4567 4544 #endif 4545 + foreign_config = (le32_to_cpu(event_data->Flags) & 4546 + MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) ? 
1 : 0; 4568 4547 4569 4548 element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0]; 4570 4549 for (i = 0; i < event_data->NumElements; i++, element++) { ··· 4575 4548 switch (element->ReasonCode) { 4576 4549 case MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED: 4577 4550 case MPI2_EVENT_IR_CHANGE_RC_ADDED: 4578 - _scsih_sas_volume_add(ioc, element); 4551 + if (!foreign_config) 4552 + _scsih_sas_volume_add(ioc, element); 4579 4553 break; 4580 4554 case MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED: 4581 4555 case MPI2_EVENT_IR_CHANGE_RC_REMOVED: 4582 - _scsih_sas_volume_delete(ioc, element); 4556 + if (!foreign_config) 4557 + _scsih_sas_volume_delete(ioc, element); 4583 4558 break; 4584 4559 case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED: 4585 4560 _scsih_sas_pd_hide(ioc, element); ··· 4700 4671 u32 state; 4701 4672 struct _sas_device *sas_device; 4702 4673 unsigned long flags; 4674 + Mpi2ConfigReply_t mpi_reply; 4675 + Mpi2SasDevicePage0_t sas_device_pg0; 4676 + u32 ioc_status; 4703 4677 4704 4678 if (event_data->ReasonCode != MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED) 4705 4679 return; ··· 4719 4687 spin_unlock_irqrestore(&ioc->sas_device_lock, flags); 4720 4688 4721 4689 switch (state) { 4722 - #if 0 4723 - case MPI2_RAID_PD_STATE_OFFLINE: 4724 - if (sas_device) 4725 - _scsih_remove_device(ioc, handle); 4726 - break; 4727 - #endif 4728 4690 case MPI2_RAID_PD_STATE_ONLINE: 4729 4691 case MPI2_RAID_PD_STATE_DEGRADED: 4730 4692 case MPI2_RAID_PD_STATE_REBUILDING: 4731 4693 case MPI2_RAID_PD_STATE_OPTIMAL: 4732 - if (sas_device) 4694 + if (sas_device) { 4733 4695 sas_device->hidden_raid_component = 1; 4734 - else 4735 - _scsih_add_device(ioc, handle, 0, 1); 4696 + return; 4697 + } 4698 + 4699 + if ((mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply, 4700 + &sas_device_pg0, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, 4701 + handle))) { 4702 + printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", 4703 + ioc->name, __FILE__, __LINE__, __func__); 4704 + return; 4705 + } 4706 + 4707 + ioc_status 
= le16_to_cpu(mpi_reply.IOCStatus) & 4708 + MPI2_IOCSTATUS_MASK; 4709 + if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { 4710 + printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", 4711 + ioc->name, __FILE__, __LINE__, __func__); 4712 + return; 4713 + } 4714 + 4715 + _scsih_link_change(ioc, 4716 + le16_to_cpu(sas_device_pg0.ParentDevHandle), 4717 + handle, sas_device_pg0.PhyNum, MPI2_SAS_NEG_LINK_RATE_1_5); 4718 + 4719 + _scsih_add_device(ioc, handle, 0, 1); 4720 + 4736 4721 break; 4737 4722 4723 + case MPI2_RAID_PD_STATE_OFFLINE: 4738 4724 case MPI2_RAID_PD_STATE_NOT_CONFIGURED: 4739 4725 case MPI2_RAID_PD_STATE_NOT_COMPATIBLE: 4740 4726 case MPI2_RAID_PD_STATE_HOT_SPARE: ··· 5824 5774 struct MPT2SAS_ADAPTER *ioc = shost_priv(shost); 5825 5775 u32 device_state; 5826 5776 5777 + mpt2sas_base_stop_watchdog(ioc); 5827 5778 flush_scheduled_work(); 5828 5779 scsi_block_requests(shost); 5829 5780 device_state = pci_choose_state(pdev, state); ··· 5867 5816 5868 5817 mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP, SOFT_RESET); 5869 5818 scsi_unblock_requests(shost); 5819 + mpt2sas_base_start_watchdog(ioc); 5870 5820 return 0; 5871 5821 } 5872 5822 #endif /* CONFIG_PM */
+1 -1
drivers/serial/Kconfig
··· 527 527 528 528 config SERIAL_S3C6400 529 529 tristate "Samsung S3C6400/S3C6410 Serial port support" 530 - depends on SERIAL_SAMSUNG && (CPU_S3C600 || CPU_S3C6410) 530 + depends on SERIAL_SAMSUNG && (CPU_S3C6400 || CPU_S3C6410) 531 531 default y 532 532 help 533 533 Serial port support for the Samsung S3C6400 and S3C6410
+13 -10
drivers/spi/spi_s3c24xx.c
··· 111 111 unsigned int bpw; 112 112 unsigned int hz; 113 113 unsigned int div; 114 + unsigned long clk; 114 115 115 116 bpw = t ? t->bits_per_word : spi->bits_per_word; 116 117 hz = t ? t->speed_hz : spi->max_speed_hz; 118 + 119 + if (!bpw) 120 + bpw = 8; 121 + 122 + if (!hz) 123 + hz = spi->max_speed_hz; 117 124 118 125 if (bpw != 8) { 119 126 dev_err(&spi->dev, "invalid bits-per-word (%d)\n", bpw); 120 127 return -EINVAL; 121 128 } 122 129 123 - div = clk_get_rate(hw->clk) / hz; 124 - 125 - /* is clk = pclk / (2 * (pre+1)), or is it 126 - * clk = (pclk * 2) / ( pre + 1) */ 127 - 128 - div /= 2; 129 - 130 - if (div > 0) 131 - div -= 1; 130 + clk = clk_get_rate(hw->clk); 131 + div = DIV_ROUND_UP(clk, hz * 2) - 1; 132 132 133 133 if (div > 255) 134 134 div = 255; 135 135 136 - dev_dbg(&spi->dev, "setting pre-scaler to %d (hz %d)\n", div, hz); 136 + dev_dbg(&spi->dev, "setting pre-scaler to %d (wanted %d, got %ld)\n", 137 + div, hz, clk / (2 * (div + 1))); 138 + 139 + 137 140 writeb(div, hw->regs + S3C2410_SPPRE); 138 141 139 142 spin_lock(&hw->bitbang.lock);
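The spi_s3c24xx.c change replaces the ad-hoc divide-then-halve arithmetic with `DIV_ROUND_UP(clk, hz * 2) - 1`, which rounds the divisor up so the generated clock never exceeds the requested rate. A standalone sketch of that computation (the function name and clamp comment are illustrative; the 255 limit matches the 8-bit SPPRE register in the patch):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Returns the prescaler value for parent clock `clk` (Hz) and requested
 * rate `hz`.  The actual SPI clock comes out to clk / (2 * (div + 1)). */
static unsigned int spi_prescaler(unsigned long clk, unsigned int hz)
{
	unsigned long div = DIV_ROUND_UP(clk, (unsigned long)hz * 2) - 1;

	if (div > 255)		/* SPPRE is an 8-bit register */
		div = 255;
	return (unsigned int)div;
}
```

For a 50 MHz parent clock and a 400 kHz request this yields div = 62, i.e. an actual rate of 50000000 / 126 ≈ 396.8 kHz, just under the requested rate, which is the property the rounding direction buys.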
+8 -1
drivers/thermal/thermal_sys.c
··· 953 953 954 954 mutex_lock(&tz->lock); 955 955 956 - tz->ops->get_temp(tz, &temp); 956 + if (tz->ops->get_temp(tz, &temp)) { 957 + /* get_temp failed - retry it later */ 958 + printk(KERN_WARNING PREFIX "failed to read out thermal zone " 959 + "%d\n", tz->id); 960 + goto leave; 961 + } 957 962 958 963 for (count = 0; count < tz->trips; count++) { 959 964 tz->ops->get_trip_type(tz, count, &trip_type); ··· 1010 1005 THERMAL_TRIPS_NONE); 1011 1006 1012 1007 tz->last_temperature = temp; 1008 + 1009 + leave: 1013 1010 if (tz->passive) 1014 1011 thermal_zone_device_set_polling(tz, tz->passive_delay); 1015 1012 else if (tz->polling_delay)
+5
drivers/video/sh_mobile_lcdcfb.c
··· 481 481 /* tell the board code to enable the panel */ 482 482 for (k = 0; k < ARRAY_SIZE(priv->ch); k++) { 483 483 ch = &priv->ch[k]; 484 + if (!ch->enabled) 485 + continue; 486 + 484 487 board_cfg = &ch->cfg.board_cfg; 485 488 if (board_cfg->display_on) 486 489 board_cfg->display_on(board_cfg->board_data); ··· 501 498 /* clean up deferred io and ask board code to disable panel */ 502 499 for (k = 0; k < ARRAY_SIZE(priv->ch); k++) { 503 500 ch = &priv->ch[k]; 501 + if (!ch->enabled) 502 + continue; 504 503 505 504 /* deferred io mode: 506 505 * flush frame, and wait for frame end interrupt
+4 -4
drivers/video/xen-fbfront.c
··· 454 454 455 455 xenfb_init_shared_page(info, fb_info); 456 456 457 + ret = xenfb_connect_backend(dev, info); 458 + if (ret < 0) 459 + goto error; 460 + 457 461 ret = register_framebuffer(fb_info); 458 462 if (ret) { 459 463 fb_deferred_io_cleanup(fb_info); ··· 467 463 goto error; 468 464 } 469 465 info->fb_info = fb_info; 470 - 471 - ret = xenfb_connect_backend(dev, info); 472 - if (ret < 0) 473 - goto error; 474 466 475 467 xenfb_make_preferred_console(); 476 468 return 0;
+1 -1
drivers/watchdog/ar7_wdt.c
··· 37 37 #include <linux/uaccess.h> 38 38 39 39 #include <asm/addrspace.h> 40 - #include <asm/ar7/ar7.h> 40 + #include <asm/mach-ar7/ar7.h> 41 41 42 42 #define DRVNAME "ar7_wdt" 43 43 #define LONGNAME "TI AR7 Watchdog Timer"
+5 -16
fs/9p/v9fs.c
··· 76 76 * Return 0 upon success, -ERRNO upon failure. 77 77 */ 78 78 79 - static int v9fs_parse_options(struct v9fs_session_info *v9ses) 79 + static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts) 80 80 { 81 81 char *options; 82 82 substring_t args[MAX_OPT_ARGS]; ··· 90 90 v9ses->debug = 0; 91 91 v9ses->cache = 0; 92 92 93 - if (!v9ses->options) 93 + if (!opts) 94 94 return 0; 95 95 96 - options = kstrdup(v9ses->options, GFP_KERNEL); 96 + options = kstrdup(opts, GFP_KERNEL); 97 97 if (!options) { 98 98 P9_DPRINTK(P9_DEBUG_ERROR, 99 99 "failed to allocate copy of option string\n"); ··· 206 206 v9ses->uid = ~0; 207 207 v9ses->dfltuid = V9FS_DEFUID; 208 208 v9ses->dfltgid = V9FS_DEFGID; 209 - if (data) { 210 - v9ses->options = kstrdup(data, GFP_KERNEL); 211 - if (!v9ses->options) { 212 - P9_DPRINTK(P9_DEBUG_ERROR, 213 - "failed to allocate copy of option string\n"); 214 - retval = -ENOMEM; 215 - goto error; 216 - } 217 - } 218 209 219 - rc = v9fs_parse_options(v9ses); 210 + rc = v9fs_parse_options(v9ses, data); 220 211 if (rc < 0) { 221 212 retval = rc; 222 213 goto error; 223 214 } 224 215 225 - v9ses->clnt = p9_client_create(dev_name, v9ses->options); 226 - 216 + v9ses->clnt = p9_client_create(dev_name, data); 227 217 if (IS_ERR(v9ses->clnt)) { 228 218 retval = PTR_ERR(v9ses->clnt); 229 219 v9ses->clnt = NULL; ··· 270 280 271 281 __putname(v9ses->uname); 272 282 __putname(v9ses->aname); 273 - kfree(v9ses->options); 274 283 } 275 284 276 285 /**
-1
fs/9p/v9fs.h
··· 85 85 unsigned int afid; 86 86 unsigned int cache; 87 87 88 - char *options; /* copy of mount options */ 89 88 char *uname; /* user name to mount as */ 90 89 char *aname; /* name of remote hierarchy being mounted */ 91 90 unsigned int maxdata; /* max data for client interface */
+65 -61
fs/9p/vfs_inode.c
··· 171 171 172 172 /** 173 173 * v9fs_blank_wstat - helper function to setup a 9P stat structure 174 - * @v9ses: 9P session info (for determining extended mode) 175 174 * @wstat: structure to initialize 176 175 * 177 176 */ ··· 206 207 207 208 struct inode *v9fs_get_inode(struct super_block *sb, int mode) 208 209 { 210 + int err; 209 211 struct inode *inode; 210 212 struct v9fs_session_info *v9ses = sb->s_fs_info; 211 213 212 214 P9_DPRINTK(P9_DEBUG_VFS, "super block: %p mode: %o\n", sb, mode); 213 215 214 216 inode = new_inode(sb); 215 - if (inode) { 216 - inode->i_mode = mode; 217 - inode->i_uid = current_fsuid(); 218 - inode->i_gid = current_fsgid(); 219 - inode->i_blocks = 0; 220 - inode->i_rdev = 0; 221 - inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; 222 - inode->i_mapping->a_ops = &v9fs_addr_operations; 223 - 224 - switch (mode & S_IFMT) { 225 - case S_IFIFO: 226 - case S_IFBLK: 227 - case S_IFCHR: 228 - case S_IFSOCK: 229 - if (!v9fs_extended(v9ses)) { 230 - P9_DPRINTK(P9_DEBUG_ERROR, 231 - "special files without extended mode\n"); 232 - return ERR_PTR(-EINVAL); 233 - } 234 - init_special_inode(inode, inode->i_mode, 235 - inode->i_rdev); 236 - break; 237 - case S_IFREG: 238 - inode->i_op = &v9fs_file_inode_operations; 239 - inode->i_fop = &v9fs_file_operations; 240 - break; 241 - case S_IFLNK: 242 - if (!v9fs_extended(v9ses)) { 243 - P9_DPRINTK(P9_DEBUG_ERROR, 244 - "extended modes used w/o 9P2000.u\n"); 245 - return ERR_PTR(-EINVAL); 246 - } 247 - inode->i_op = &v9fs_symlink_inode_operations; 248 - break; 249 - case S_IFDIR: 250 - inc_nlink(inode); 251 - if (v9fs_extended(v9ses)) 252 - inode->i_op = &v9fs_dir_inode_operations_ext; 253 - else 254 - inode->i_op = &v9fs_dir_inode_operations; 255 - inode->i_fop = &v9fs_dir_operations; 256 - break; 257 - default: 258 - P9_DPRINTK(P9_DEBUG_ERROR, 259 - "BAD mode 0x%x S_IFMT 0x%x\n", 260 - mode, mode & S_IFMT); 261 - return ERR_PTR(-EINVAL); 262 - } 263 - } else { 217 + if (!inode) { 264 218 
P9_EPRINTK(KERN_WARNING, "Problem allocating inode\n"); 265 219 return ERR_PTR(-ENOMEM); 266 220 } 221 + 222 + inode->i_mode = mode; 223 + inode->i_uid = current_fsuid(); 224 + inode->i_gid = current_fsgid(); 225 + inode->i_blocks = 0; 226 + inode->i_rdev = 0; 227 + inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; 228 + inode->i_mapping->a_ops = &v9fs_addr_operations; 229 + 230 + switch (mode & S_IFMT) { 231 + case S_IFIFO: 232 + case S_IFBLK: 233 + case S_IFCHR: 234 + case S_IFSOCK: 235 + if (!v9fs_extended(v9ses)) { 236 + P9_DPRINTK(P9_DEBUG_ERROR, 237 + "special files without extended mode\n"); 238 + err = -EINVAL; 239 + goto error; 240 + } 241 + init_special_inode(inode, inode->i_mode, inode->i_rdev); 242 + break; 243 + case S_IFREG: 244 + inode->i_op = &v9fs_file_inode_operations; 245 + inode->i_fop = &v9fs_file_operations; 246 + break; 247 + case S_IFLNK: 248 + if (!v9fs_extended(v9ses)) { 249 + P9_DPRINTK(P9_DEBUG_ERROR, 250 + "extended modes used w/o 9P2000.u\n"); 251 + err = -EINVAL; 252 + goto error; 253 + } 254 + inode->i_op = &v9fs_symlink_inode_operations; 255 + break; 256 + case S_IFDIR: 257 + inc_nlink(inode); 258 + if (v9fs_extended(v9ses)) 259 + inode->i_op = &v9fs_dir_inode_operations_ext; 260 + else 261 + inode->i_op = &v9fs_dir_inode_operations; 262 + inode->i_fop = &v9fs_dir_operations; 263 + break; 264 + default: 265 + P9_DPRINTK(P9_DEBUG_ERROR, "BAD mode 0x%x S_IFMT 0x%x\n", 266 + mode, mode & S_IFMT); 267 + err = -EINVAL; 268 + goto error; 269 + } 270 + 267 271 return inode; 272 + 273 + error: 274 + iput(inode); 275 + return ERR_PTR(err); 268 276 } 269 277 270 278 /* ··· 344 338 345 339 ret = NULL; 346 340 st = p9_client_stat(fid); 347 - if (IS_ERR(st)) { 348 - err = PTR_ERR(st); 349 - st = NULL; 350 - goto error; 351 - } 341 + if (IS_ERR(st)) 342 + return ERR_CAST(st); 352 343 353 344 umode = p9mode2unixmode(v9ses, st->mode); 354 345 ret = v9fs_get_inode(sb, umode); 355 346 if (IS_ERR(ret)) { 356 347 err = PTR_ERR(ret); 357 
- ret = NULL; 358 348 goto error; 359 349 } 360 350 361 351 v9fs_stat2inode(st, ret, sb); 362 352 ret->i_ino = v9fs_qid2ino(&st->qid); 353 + p9stat_free(st); 363 354 kfree(st); 364 355 return ret; 365 356 366 357 error: 358 + p9stat_free(st); 367 359 kfree(st); 368 - if (ret) 369 - iput(ret); 370 - 371 360 return ERR_PTR(err); 372 361 } 373 362 ··· 404 403 * @v9ses: session information 405 404 * @dir: directory that dentry is being created in 406 405 * @dentry: dentry that is being created 406 + * @extension: 9p2000.u extension string to support devices, etc. 407 407 * @perm: create permissions 408 408 * @mode: open mode 409 - * @extension: 9p2000.u extension string to support devices, etc. 410 409 * 411 410 */ 412 411 static struct p9_fid * ··· 471 470 dentry->d_op = &v9fs_dentry_operations; 472 471 473 472 d_instantiate(dentry, inode); 474 - v9fs_fid_add(dentry, fid); 473 + err = v9fs_fid_add(dentry, fid); 474 + if (err < 0) 475 + goto error; 476 + 475 477 return ofid; 476 478 477 479 error:
+12 -27
fs/9p/vfs_super.c
··· 81 81 82 82 static void 83 83 v9fs_fill_super(struct super_block *sb, struct v9fs_session_info *v9ses, 84 - int flags) 84 + int flags, void *data) 85 85 { 86 86 sb->s_maxbytes = MAX_LFS_FILESIZE; 87 87 sb->s_blocksize_bits = fls(v9ses->maxdata - 1); ··· 91 91 92 92 sb->s_flags = flags | MS_ACTIVE | MS_SYNCHRONOUS | MS_DIRSYNC | 93 93 MS_NOATIME; 94 + 95 + save_mount_options(sb, data); 94 96 } 95 97 96 98 /** ··· 115 113 struct v9fs_session_info *v9ses = NULL; 116 114 struct p9_wstat *st = NULL; 117 115 int mode = S_IRWXUGO | S_ISVTX; 118 - uid_t uid = current_fsuid(); 119 - gid_t gid = current_fsgid(); 120 116 struct p9_fid *fid; 121 117 int retval = 0; 122 118 123 119 P9_DPRINTK(P9_DEBUG_VFS, " \n"); 124 120 125 - st = NULL; 126 121 v9ses = kzalloc(sizeof(struct v9fs_session_info), GFP_KERNEL); 127 122 if (!v9ses) 128 123 return -ENOMEM; ··· 141 142 retval = PTR_ERR(sb); 142 143 goto free_stat; 143 144 } 144 - v9fs_fill_super(sb, v9ses, flags); 145 + v9fs_fill_super(sb, v9ses, flags, data); 145 146 146 147 inode = v9fs_get_inode(sb, S_IFDIR | mode); 147 148 if (IS_ERR(inode)) { 148 149 retval = PTR_ERR(inode); 149 150 goto release_sb; 150 151 } 151 - 152 - inode->i_uid = uid; 153 - inode->i_gid = gid; 154 152 155 153 root = d_alloc_root(inode); 156 154 if (!root) { ··· 169 173 simple_set_mnt(mnt, sb); 170 174 return 0; 171 175 172 - release_sb: 173 - deactivate_locked_super(sb); 174 - 175 176 free_stat: 177 + p9stat_free(st); 176 178 kfree(st); 177 179 178 180 clunk_fid: ··· 179 185 close_session: 180 186 v9fs_session_close(v9ses); 181 187 kfree(v9ses); 188 + return retval; 182 189 190 + release_sb: 191 + p9stat_free(st); 192 + kfree(st); 193 + deactivate_locked_super(sb); 183 194 return retval; 184 195 } 185 196 ··· 206 207 207 208 v9fs_session_close(v9ses); 208 209 kfree(v9ses); 210 + s->s_fs_info = NULL; 209 211 P9_DPRINTK(P9_DEBUG_VFS, "exiting kill_super\n"); 210 - } 211 - 212 - /** 213 - * v9fs_show_options - Show mount options in /proc/mounts 214 - * 
@m: seq_file to write to 215 - * @mnt: mount descriptor 216 - * 217 - */ 218 - 219 - static int v9fs_show_options(struct seq_file *m, struct vfsmount *mnt) 220 - { 221 - struct v9fs_session_info *v9ses = mnt->mnt_sb->s_fs_info; 222 - 223 - seq_printf(m, "%s", v9ses->options); 224 - return 0; 225 212 } 226 213 227 214 static void ··· 222 237 static const struct super_operations v9fs_super_ops = { 223 238 .statfs = simple_statfs, 224 239 .clear_inode = v9fs_clear_inode, 225 - .show_options = v9fs_show_options, 240 + .show_options = generic_show_options, 226 241 .umount_begin = v9fs_umount_begin, 227 242 }; 228 243
+15 -3
fs/afs/file.c
··· 134 134 135 135 inode = page->mapping->host; 136 136 137 - ASSERT(file != NULL); 138 - key = file->private_data; 139 - ASSERT(key != NULL); 137 + if (file) { 138 + key = file->private_data; 139 + ASSERT(key != NULL); 140 + } else { 141 + key = afs_request_key(AFS_FS_S(inode->i_sb)->volume->cell); 142 + if (IS_ERR(key)) { 143 + ret = PTR_ERR(key); 144 + goto error_nokey; 145 + } 146 + } 140 147 141 148 _enter("{%x},{%lu},{%lu}", key_serial(key), inode->i_ino, page->index); 142 149 ··· 214 207 unlock_page(page); 215 208 } 216 209 210 + if (!file) 211 + key_put(key); 217 212 _leave(" = 0"); 218 213 return 0; 219 214 220 215 error: 221 216 SetPageError(page); 222 217 unlock_page(page); 218 + if (!file) 219 + key_put(key); 220 + error_nokey: 223 221 _leave(" = %d", ret); 224 222 return ret; 225 223 }
+1 -1
fs/autofs4/expire.c
··· 77 77 } 78 78 79 79 /* Update the expiry counter if fs is busy */ 80 - if (!may_umount_tree(mnt)) { 80 + if (!may_umount_tree(path.mnt)) { 81 81 struct autofs_info *ino = autofs4_dentry_ino(top); 82 82 ino->last_used = jiffies; 83 83 goto done;
+14 -7
fs/btrfs/inode.c
··· 3099 3099 { 3100 3100 struct btrfs_root *root = BTRFS_I(inode)->root; 3101 3101 struct btrfs_inode *entry; 3102 - struct rb_node **p = &root->inode_tree.rb_node; 3103 - struct rb_node *parent = NULL; 3102 + struct rb_node **p; 3103 + struct rb_node *parent; 3104 + 3105 + again: 3106 + p = &root->inode_tree.rb_node; 3107 + parent = NULL; 3104 3108 3105 3109 spin_lock(&root->inode_lock); 3106 3110 while (*p) { ··· 3112 3108 entry = rb_entry(parent, struct btrfs_inode, rb_node); 3113 3109 3114 3110 if (inode->i_ino < entry->vfs_inode.i_ino) 3115 - p = &(*p)->rb_left; 3111 + p = &parent->rb_left; 3116 3112 else if (inode->i_ino > entry->vfs_inode.i_ino) 3117 - p = &(*p)->rb_right; 3113 + p = &parent->rb_right; 3118 3114 else { 3119 3115 WARN_ON(!(entry->vfs_inode.i_state & 3120 3116 (I_WILL_FREE | I_FREEING | I_CLEAR))); 3121 - break; 3117 + rb_erase(parent, &root->inode_tree); 3118 + RB_CLEAR_NODE(parent); 3119 + spin_unlock(&root->inode_lock); 3120 + goto again; 3122 3121 } 3123 3122 } 3124 3123 rb_link_node(&BTRFS_I(inode)->rb_node, parent, p); ··· 3133 3126 { 3134 3127 struct btrfs_root *root = BTRFS_I(inode)->root; 3135 3128 3129 + spin_lock(&root->inode_lock); 3136 3130 if (!RB_EMPTY_NODE(&BTRFS_I(inode)->rb_node)) { 3137 - spin_lock(&root->inode_lock); 3138 3131 rb_erase(&BTRFS_I(inode)->rb_node, &root->inode_tree); 3139 - spin_unlock(&root->inode_lock); 3140 3132 RB_CLEAR_NODE(&BTRFS_I(inode)->rb_node); 3141 3133 } 3134 + spin_unlock(&root->inode_lock); 3142 3135 } 3143 3136 3144 3137 static noinline void init_btrfs_i(struct inode *inode)
+5 -2
fs/buffer.c
··· 1165 1165 1166 1166 if (!test_set_buffer_dirty(bh)) { 1167 1167 struct page *page = bh->b_page; 1168 - if (!TestSetPageDirty(page)) 1169 - __set_page_dirty(page, page_mapping(page), 0); 1168 + if (!TestSetPageDirty(page)) { 1169 + struct address_space *mapping = page_mapping(page); 1170 + if (mapping) 1171 + __set_page_dirty(page, mapping, 0); 1172 + } 1170 1173 } 1171 1174 } 1172 1175
+4 -13
fs/compat.c
··· 1485 1485 if (!bprm) 1486 1486 goto out_files; 1487 1487 1488 - retval = -ERESTARTNOINTR; 1489 - if (mutex_lock_interruptible(&current->cred_guard_mutex)) 1488 + retval = prepare_bprm_creds(bprm); 1489 + if (retval) 1490 1490 goto out_free; 1491 - current->in_execve = 1; 1492 - 1493 - retval = -ENOMEM; 1494 - bprm->cred = prepare_exec_creds(); 1495 - if (!bprm->cred) 1496 - goto out_unlock; 1497 1491 1498 1492 retval = check_unsafe_exec(bprm); 1499 1493 if (retval < 0) 1500 - goto out_unlock; 1494 + goto out_free; 1501 1495 clear_in_exec = retval; 1496 + current->in_execve = 1; 1502 1497 1503 1498 file = open_exec(filename); 1504 1499 retval = PTR_ERR(file); ··· 1542 1547 /* execve succeeded */ 1543 1548 current->fs->in_exec = 0; 1544 1549 current->in_execve = 0; 1545 - mutex_unlock(&current->cred_guard_mutex); 1546 1550 acct_update_integrals(current); 1547 1551 free_bprm(bprm); 1548 1552 if (displaced) ··· 1561 1567 out_unmark: 1562 1568 if (clear_in_exec) 1563 1569 current->fs->in_exec = 0; 1564 - 1565 - out_unlock: 1566 1570 current->in_execve = 0; 1567 - mutex_unlock(&current->cred_guard_mutex); 1568 1571 1569 1572 out_free: 1570 1573 free_bprm(bprm);
+40 -27
fs/exec.c
··· 678 678 } 679 679 EXPORT_SYMBOL(open_exec); 680 680 681 - int kernel_read(struct file *file, unsigned long offset, 682 - char *addr, unsigned long count) 681 + int kernel_read(struct file *file, loff_t offset, 682 + char *addr, unsigned long count) 683 683 { 684 684 mm_segment_t old_fs; 685 685 loff_t pos = offset; ··· 1016 1016 EXPORT_SYMBOL(flush_old_exec); 1017 1017 1018 1018 /* 1019 + * Prepare credentials and lock ->cred_guard_mutex. 1020 + * install_exec_creds() commits the new creds and drops the lock. 1021 + * Or, if exec fails before, free_bprm() should release ->cred and 1022 + * and unlock. 1023 + */ 1024 + int prepare_bprm_creds(struct linux_binprm *bprm) 1025 + { 1026 + if (mutex_lock_interruptible(&current->cred_guard_mutex)) 1027 + return -ERESTARTNOINTR; 1028 + 1029 + bprm->cred = prepare_exec_creds(); 1030 + if (likely(bprm->cred)) 1031 + return 0; 1032 + 1033 + mutex_unlock(&current->cred_guard_mutex); 1034 + return -ENOMEM; 1035 + } 1036 + 1037 + void free_bprm(struct linux_binprm *bprm) 1038 + { 1039 + free_arg_pages(bprm); 1040 + if (bprm->cred) { 1041 + mutex_unlock(&current->cred_guard_mutex); 1042 + abort_creds(bprm->cred); 1043 + } 1044 + kfree(bprm); 1045 + } 1046 + 1047 + /* 1019 1048 * install the new credentials for this executable 1020 1049 */ 1021 1050 void install_exec_creds(struct linux_binprm *bprm) ··· 1053 1024 1054 1025 commit_creds(bprm->cred); 1055 1026 bprm->cred = NULL; 1056 - 1057 - /* cred_guard_mutex must be held at least to this point to prevent 1027 + /* 1028 + * cred_guard_mutex must be held at least to this point to prevent 1058 1029 * ptrace_attach() from altering our determination of the task's 1059 - * credentials; any time after this it may be unlocked */ 1060 - 1030 + * credentials; any time after this it may be unlocked. 
1031 + */ 1061 1032 security_bprm_committed_creds(bprm); 1033 + mutex_unlock(&current->cred_guard_mutex); 1062 1034 } 1063 1035 EXPORT_SYMBOL(install_exec_creds); 1064 1036 ··· 1276 1246 1277 1247 EXPORT_SYMBOL(search_binary_handler); 1278 1248 1279 - void free_bprm(struct linux_binprm *bprm) 1280 - { 1281 - free_arg_pages(bprm); 1282 - if (bprm->cred) 1283 - abort_creds(bprm->cred); 1284 - kfree(bprm); 1285 - } 1286 - 1287 1249 /* 1288 1250 * sys_execve() executes a new program. 1289 1251 */ ··· 1299 1277 if (!bprm) 1300 1278 goto out_files; 1301 1279 1302 - retval = -ERESTARTNOINTR; 1303 - if (mutex_lock_interruptible(&current->cred_guard_mutex)) 1280 + retval = prepare_bprm_creds(bprm); 1281 + if (retval) 1304 1282 goto out_free; 1305 - current->in_execve = 1; 1306 - 1307 - retval = -ENOMEM; 1308 - bprm->cred = prepare_exec_creds(); 1309 - if (!bprm->cred) 1310 - goto out_unlock; 1311 1283 1312 1284 retval = check_unsafe_exec(bprm); 1313 1285 if (retval < 0) 1314 - goto out_unlock; 1286 + goto out_free; 1315 1287 clear_in_exec = retval; 1288 + current->in_execve = 1; 1316 1289 1317 1290 file = open_exec(filename); 1318 1291 retval = PTR_ERR(file); ··· 1357 1340 /* execve succeeded */ 1358 1341 current->fs->in_exec = 0; 1359 1342 current->in_execve = 0; 1360 - mutex_unlock(&current->cred_guard_mutex); 1361 1343 acct_update_integrals(current); 1362 1344 free_bprm(bprm); 1363 1345 if (displaced) ··· 1376 1360 out_unmark: 1377 1361 if (clear_in_exec) 1378 1362 current->fs->in_exec = 0; 1379 - 1380 - out_unlock: 1381 1363 current->in_execve = 0; 1382 - mutex_unlock(&current->cred_guard_mutex); 1383 1364 1384 1365 out_free: 1385 1366 free_bprm(bprm);
+4
fs/ext2/namei.c
··· 362 362 if (dir_de) { 363 363 if (old_dir != new_dir) 364 364 ext2_set_link(old_inode, dir_de, dir_page, new_dir, 0); 365 + else { 366 + kunmap(dir_page); 367 + page_cache_release(dir_page); 368 + } 365 369 inode_dec_link_count(old_dir); 366 370 } 367 371 return 0;
+15 -13
fs/ext3/Kconfig
··· 29 29 module will be called ext3. 30 30 31 31 config EXT3_DEFAULTS_TO_ORDERED 32 - bool "Default to 'data=ordered' in ext3 (legacy option)" 32 + bool "Default to 'data=ordered' in ext3" 33 33 depends on EXT3_FS 34 34 help 35 - If a filesystem does not explicitly specify a data ordering 36 - mode, and the journal capability allowed it, ext3 used to 37 - historically default to 'data=ordered'. 35 + The journal mode options for ext3 have different tradeoffs 36 + between when data is guaranteed to be on disk and 37 + performance. The use of "data=writeback" can cause 38 + unwritten data to appear in files after an system crash or 39 + power failure, which can be a security issue. However, 40 + "data=ordered" mode can also result in major performance 41 + problems, including seconds-long delays before an fsync() 42 + call returns. For details, see: 38 43 39 - That was a rather unfortunate choice, because it leads to all 40 - kinds of latency problems, and the 'data=writeback' mode is more 41 - appropriate these days. 44 + http://ext4.wiki.kernel.org/index.php/Ext3_data_mode_tradeoffs 42 45 43 - You should probably always answer 'n' here, and if you really 44 - want to use 'data=ordered' mode, set it in the filesystem itself 45 - with 'tune2fs -o journal_data_ordered'. 46 - 47 - But if you really want to enable the legacy default, you can do 48 - so by answering 'y' to this question. 46 + If you have been historically happy with ext3's performance, 47 + data=ordered mode will be a safe choice and you should 48 + answer 'y' here. If you understand the reliability and data 49 + privacy issues of data=writeback and are willing to make 50 + that trade off, answer 'n'. 49 51 50 52 config EXT3_FS_XATTR 51 53 bool "Ext3 extended attributes"
+27 -13
fs/ext3/super.c
··· 543 543 #endif 544 544 } 545 545 546 + static char *data_mode_string(unsigned long mode) 547 + { 548 + switch (mode) { 549 + case EXT3_MOUNT_JOURNAL_DATA: 550 + return "journal"; 551 + case EXT3_MOUNT_ORDERED_DATA: 552 + return "ordered"; 553 + case EXT3_MOUNT_WRITEBACK_DATA: 554 + return "writeback"; 555 + } 556 + return "unknown"; 557 + } 558 + 546 559 /* 547 560 * Show an option if 548 561 * - it's set to a non-default value OR ··· 629 616 if (test_opt(sb, NOBH)) 630 617 seq_puts(seq, ",nobh"); 631 618 632 - if (test_opt(sb, DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA) 633 - seq_puts(seq, ",data=journal"); 634 - else if (test_opt(sb, DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA) 635 - seq_puts(seq, ",data=ordered"); 636 - else if (test_opt(sb, DATA_FLAGS) == EXT3_MOUNT_WRITEBACK_DATA) 637 - seq_puts(seq, ",data=writeback"); 638 - 619 + seq_printf(seq, ",data=%s", data_mode_string(sbi->s_mount_opt & 620 + EXT3_MOUNT_DATA_FLAGS)); 639 621 if (test_opt(sb, DATA_ERR_ABORT)) 640 622 seq_puts(seq, ",data_err=abort"); 641 623 ··· 1032 1024 datacheck: 1033 1025 if (is_remount) { 1034 1026 if ((sbi->s_mount_opt & EXT3_MOUNT_DATA_FLAGS) 1035 - != data_opt) { 1036 - printk(KERN_ERR 1037 - "EXT3-fs: cannot change data " 1038 - "mode on remount\n"); 1039 - return 0; 1040 - } 1027 + == data_opt) 1028 + break; 1029 + printk(KERN_ERR 1030 + "EXT3-fs (device %s): Cannot change " 1031 + "data mode on remount. The filesystem " 1032 + "is mounted in data=%s mode and you " 1033 + "try to remount it in data=%s mode.\n", 1034 + sb->s_id, 1035 + data_mode_string(sbi->s_mount_opt & 1036 + EXT3_MOUNT_DATA_FLAGS), 1037 + data_mode_string(data_opt)); 1038 + return 0; 1041 1039 } else { 1042 1040 sbi->s_mount_opt &= ~EXT3_MOUNT_DATA_FLAGS; 1043 1041 sbi->s_mount_opt |= data_opt;
+10 -10
fs/gfs2/sys.c
··· 386 386 #define GDLM_ATTR(_name,_mode,_show,_store) \ 387 387 static struct gfs2_attr gdlm_attr_##_name = __ATTR(_name,_mode,_show,_store) 388 388 389 - GDLM_ATTR(proto_name, 0444, proto_name_show, NULL); 390 - GDLM_ATTR(block, 0644, block_show, block_store); 391 - GDLM_ATTR(withdraw, 0644, withdraw_show, withdraw_store); 392 - GDLM_ATTR(id, 0444, lkid_show, NULL); 393 - GDLM_ATTR(jid, 0444, jid_show, NULL); 394 - GDLM_ATTR(first, 0444, lkfirst_show, NULL); 395 - GDLM_ATTR(first_done, 0444, first_done_show, NULL); 396 - GDLM_ATTR(recover, 0200, NULL, recover_store); 397 - GDLM_ATTR(recover_done, 0444, recover_done_show, NULL); 398 - GDLM_ATTR(recover_status, 0444, recover_status_show, NULL); 389 + GDLM_ATTR(proto_name, 0444, proto_name_show, NULL); 390 + GDLM_ATTR(block, 0644, block_show, block_store); 391 + GDLM_ATTR(withdraw, 0644, withdraw_show, withdraw_store); 392 + GDLM_ATTR(id, 0444, lkid_show, NULL); 393 + GDLM_ATTR(jid, 0444, jid_show, NULL); 394 + GDLM_ATTR(first, 0444, lkfirst_show, NULL); 395 + GDLM_ATTR(first_done, 0444, first_done_show, NULL); 396 + GDLM_ATTR(recover, 0600, NULL, recover_store); 397 + GDLM_ATTR(recover_done, 0444, recover_done_show, NULL); 398 + GDLM_ATTR(recover_status, 0444, recover_status_show, NULL); 399 399 400 400 static struct attribute *lock_module_attrs[] = { 401 401 &gdlm_attr_proto_name.attr,
+12 -8
fs/hugetlbfs/inode.c
··· 935 935 return capable(CAP_IPC_LOCK) || in_group_p(sysctl_hugetlb_shm_group); 936 936 } 937 937 938 - struct file *hugetlb_file_setup(const char *name, size_t size, int acctflag) 938 + struct file *hugetlb_file_setup(const char *name, size_t size, int acctflag, 939 + struct user_struct **user) 939 940 { 940 941 int error = -ENOMEM; 941 - int unlock_shm = 0; 942 942 struct file *file; 943 943 struct inode *inode; 944 944 struct dentry *dentry, *root; 945 945 struct qstr quick_string; 946 - struct user_struct *user = current_user(); 947 946 947 + *user = NULL; 948 948 if (!hugetlbfs_vfsmount) 949 949 return ERR_PTR(-ENOENT); 950 950 951 951 if (!can_do_hugetlb_shm()) { 952 - if (user_shm_lock(size, user)) { 953 - unlock_shm = 1; 952 + *user = current_user(); 953 + if (user_shm_lock(size, *user)) { 954 954 WARN_ONCE(1, 955 955 "Using mlock ulimits for SHM_HUGETLB deprecated\n"); 956 - } else 956 + } else { 957 + *user = NULL; 957 958 return ERR_PTR(-EPERM); 959 + } 958 960 } 959 961 960 962 root = hugetlbfs_vfsmount->mnt_root; ··· 998 996 out_dentry: 999 997 dput(dentry); 1000 998 out_shm_unlock: 1001 - if (unlock_shm) 1002 - user_shm_unlock(size, user); 999 + if (*user) { 1000 + user_shm_unlock(size, *user); 1001 + *user = NULL; 1002 + } 1003 1003 return ERR_PTR(error); 1004 1004 } 1005 1005
+10
fs/jffs2/wbuf.c
··· 1268 1268 if (!c->wbuf) 1269 1269 return -ENOMEM; 1270 1270 1271 + #ifdef CONFIG_JFFS2_FS_WBUF_VERIFY 1272 + c->wbuf_verify = kmalloc(c->wbuf_pagesize, GFP_KERNEL); 1273 + if (!c->wbuf_verify) { 1274 + kfree(c->wbuf); 1275 + return -ENOMEM; 1276 + } 1277 + #endif 1271 1278 return 0; 1272 1279 } 1273 1280 1274 1281 void jffs2_nor_wbuf_flash_cleanup(struct jffs2_sb_info *c) { 1282 + #ifdef CONFIG_JFFS2_FS_WBUF_VERIFY 1283 + kfree(c->wbuf_verify); 1284 + #endif 1275 1285 kfree(c->wbuf); 1276 1286 } 1277 1287
+1 -1
fs/libfs.c
··· 217 217 return PTR_ERR(s); 218 218 219 219 s->s_flags = MS_NOUSER; 220 - s->s_maxbytes = ~0ULL; 220 + s->s_maxbytes = MAX_LFS_FILESIZE; 221 221 s->s_blocksize = PAGE_SIZE; 222 222 s->s_blocksize_bits = PAGE_SHIFT; 223 223 s->s_magic = magic;
+10 -10
fs/nfs/direct.c
··· 255 255 256 256 if (put_dreq(dreq)) 257 257 nfs_direct_complete(dreq); 258 - nfs_readdata_release(calldata); 258 + nfs_readdata_free(data); 259 259 } 260 260 261 261 static const struct rpc_call_ops nfs_read_direct_ops = { ··· 314 314 data->npages, 1, 0, data->pagevec, NULL); 315 315 up_read(&current->mm->mmap_sem); 316 316 if (result < 0) { 317 - nfs_readdata_release(data); 317 + nfs_readdata_free(data); 318 318 break; 319 319 } 320 320 if ((unsigned)result < data->npages) { 321 321 bytes = result * PAGE_SIZE; 322 322 if (bytes <= pgbase) { 323 323 nfs_direct_release_pages(data->pagevec, result); 324 - nfs_readdata_release(data); 324 + nfs_readdata_free(data); 325 325 break; 326 326 } 327 327 bytes -= pgbase; ··· 334 334 data->inode = inode; 335 335 data->cred = msg.rpc_cred; 336 336 data->args.fh = NFS_FH(inode); 337 - data->args.context = get_nfs_open_context(ctx); 337 + data->args.context = ctx; 338 338 data->args.offset = pos; 339 339 data->args.pgbase = pgbase; 340 340 data->args.pages = data->pagevec; ··· 441 441 struct nfs_write_data *data = list_entry(dreq->rewrite_list.next, struct nfs_write_data, pages); 442 442 list_del(&data->pages); 443 443 nfs_direct_release_pages(data->pagevec, data->npages); 444 - nfs_writedata_release(data); 444 + nfs_writedata_free(data); 445 445 } 446 446 } 447 447 ··· 534 534 535 535 dprintk("NFS: %5u commit returned %d\n", data->task.tk_pid, status); 536 536 nfs_direct_write_complete(dreq, data->inode); 537 - nfs_commitdata_release(calldata); 537 + nfs_commit_free(data); 538 538 } 539 539 540 540 static const struct rpc_call_ops nfs_commit_direct_ops = { ··· 570 570 data->args.fh = NFS_FH(data->inode); 571 571 data->args.offset = 0; 572 572 data->args.count = 0; 573 - data->args.context = get_nfs_open_context(dreq->ctx); 573 + data->args.context = dreq->ctx; 574 574 data->res.count = 0; 575 575 data->res.fattr = &data->fattr; 576 576 data->res.verf = &data->verf; ··· 734 734 data->npages, 0, 0, data->pagevec, NULL); 735 
735 up_read(&current->mm->mmap_sem); 736 736 if (result < 0) { 737 - nfs_writedata_release(data); 737 + nfs_writedata_free(data); 738 738 break; 739 739 } 740 740 if ((unsigned)result < data->npages) { 741 741 bytes = result * PAGE_SIZE; 742 742 if (bytes <= pgbase) { 743 743 nfs_direct_release_pages(data->pagevec, result); 744 - nfs_writedata_release(data); 744 + nfs_writedata_free(data); 745 745 break; 746 746 } 747 747 bytes -= pgbase; ··· 756 756 data->inode = inode; 757 757 data->cred = msg.rpc_cred; 758 758 data->args.fh = NFS_FH(inode); 759 - data->args.context = get_nfs_open_context(ctx); 759 + data->args.context = ctx; 760 760 data->args.offset = pos; 761 761 data->args.pgbase = pgbase; 762 762 data->args.pages = data->pagevec;
+2 -2
fs/nfs/nfs4state.c
··· 1250 1250 continue; 1251 1251 } 1252 1252 /* Initialize or reset the session */ 1253 - if (nfs4_has_session(clp) && 1254 - test_and_clear_bit(NFS4CLNT_SESSION_SETUP, &clp->cl_state)) { 1253 + if (test_and_clear_bit(NFS4CLNT_SESSION_SETUP, &clp->cl_state) 1254 + && nfs4_has_session(clp)) { 1255 1255 if (clp->cl_cons_state == NFS_CS_SESSION_INITING) 1256 1256 status = nfs4_initialize_session(clp); 1257 1257 else
+2 -4
fs/nfs/read.c
··· 60 60 return p; 61 61 } 62 62 63 - static void nfs_readdata_free(struct nfs_read_data *p) 63 + void nfs_readdata_free(struct nfs_read_data *p) 64 64 { 65 65 if (p && (p->pagevec != &p->page_array[0])) 66 66 kfree(p->pagevec); 67 67 mempool_free(p, nfs_rdata_mempool); 68 68 } 69 69 70 - void nfs_readdata_release(void *data) 70 + static void nfs_readdata_release(struct nfs_read_data *rdata) 71 71 { 72 - struct nfs_read_data *rdata = data; 73 - 74 72 put_nfs_open_context(rdata->args.context); 75 73 nfs_readdata_free(rdata); 76 74 }
+2 -4
fs/nfs/write.c
··· 87 87 return p; 88 88 } 89 89 90 - static void nfs_writedata_free(struct nfs_write_data *p) 90 + void nfs_writedata_free(struct nfs_write_data *p) 91 91 { 92 92 if (p && (p->pagevec != &p->page_array[0])) 93 93 kfree(p->pagevec); 94 94 mempool_free(p, nfs_wdata_mempool); 95 95 } 96 96 97 - void nfs_writedata_release(void *data) 97 + static void nfs_writedata_release(struct nfs_write_data *wdata) 98 98 { 99 - struct nfs_write_data *wdata = data; 100 - 101 99 put_nfs_open_context(wdata->args.context); 102 100 nfs_writedata_free(wdata); 103 101 }
+1 -1
fs/nilfs2/btnode.c
··· 209 209 * We cannot call radix_tree_preload for the kernels older 210 210 * than 2.6.23, because it is not exported for modules. 211 211 */ 212 + retry: 212 213 err = radix_tree_preload(GFP_NOFS & ~__GFP_HIGHMEM); 213 214 if (err) 214 215 goto failed_unlock; ··· 220 219 (unsigned long long)oldkey, 221 220 (unsigned long long)newkey); 222 221 223 - retry: 224 222 spin_lock_irq(&btnc->tree_lock); 225 223 err = radix_tree_insert(&btnc->page_tree, newkey, obh->b_page); 226 224 spin_unlock_irq(&btnc->tree_lock);
+2
fs/nilfs2/super.c
··· 416 416 if (unlikely(err)) 417 417 goto failed; 418 418 419 + down_read(&nilfs->ns_segctor_sem); 419 420 err = nilfs_cpfile_get_checkpoint(nilfs->ns_cpfile, cno, 0, &raw_cp, 420 421 &bh_cp); 422 + up_read(&nilfs->ns_segctor_sem); 421 423 if (unlikely(err)) { 422 424 if (err == -ENOENT || err == -EINVAL) { 423 425 printk(KERN_ERR
+1 -1
fs/nilfs2/the_nilfs.h
··· 253 253 254 254 static inline void nilfs_put_sbinfo(struct nilfs_sb_info *sbi) 255 255 { 256 - if (!atomic_dec_and_test(&sbi->s_count)) 256 + if (atomic_dec_and_test(&sbi->s_count)) 257 257 kfree(sbi); 258 258 } 259 259
+38 -8
fs/notify/inotify/inotify_fsnotify.c
··· 62 62 event_priv->wd = wd; 63 63 64 64 ret = fsnotify_add_notify_event(group, event, fsn_event_priv); 65 - /* EEXIST is not an error */ 66 - if (ret == -EEXIST) 67 - ret = 0; 68 - 69 - /* did event_priv get attached? */ 70 - if (list_empty(&fsn_event_priv->event_list)) 65 + if (ret) { 71 66 inotify_free_event_priv(fsn_event_priv); 67 + /* EEXIST says we tail matched, EOVERFLOW isn't something 68 + * to report up the stack. */ 69 + if ((ret == -EEXIST) || 70 + (ret == -EOVERFLOW)) 71 + ret = 0; 72 + } 72 73 73 74 /* 74 75 * If we hold the entry until after the event is on the queue ··· 105 104 return send; 106 105 } 107 106 107 + /* 108 + * This is NEVER supposed to be called. Inotify marks should either have been 109 + * removed from the idr when the watch was removed or in the 110 + * fsnotify_destroy_mark_by_group() call when the inotify instance was being 111 + * torn down. This is only called if the idr is about to be freed but there 112 + * are still marks in it. 113 + */ 108 114 static int idr_callback(int id, void *p, void *data) 109 115 { 110 - BUG(); 116 + struct fsnotify_mark_entry *entry; 117 + struct inotify_inode_mark_entry *ientry; 118 + static bool warned = false; 119 + 120 + if (warned) 121 + return 0; 122 + 123 + warned = true; 124 + entry = p; 125 + ientry = container_of(entry, struct inotify_inode_mark_entry, fsn_entry); 126 + 127 + WARN(1, "inotify closing but id=%d for entry=%p in group=%p still in " 128 + "idr. Probably leaking memory\n", id, p, data); 129 + 130 + /* 131 + * I'm taking the liberty of assuming that the mark in question is a 132 + * valid address and I'm dereferencing it. This might help to figure 133 + * out why we got here and the panic is no worse than the original 134 + * BUG() that was here. 
135 + */ 136 + if (entry) 137 + printk(KERN_WARNING "entry->group=%p inode=%p wd=%d\n", 138 + entry->group, entry->inode, ientry->wd); 111 139 return 0; 112 140 } 113 141 114 142 static void inotify_free_group_priv(struct fsnotify_group *group) 115 143 { 116 144 /* ideally the idr is empty and we won't hit the BUG in teh callback */ 117 - idr_for_each(&group->inotify_data.idr, idr_callback, NULL); 145 + idr_for_each(&group->inotify_data.idr, idr_callback, group); 118 146 idr_remove_all(&group->inotify_data.idr); 119 147 idr_destroy(&group->inotify_data.idr); 120 148 }
+163 -93
fs/notify/inotify/inotify_user.c
··· 47 47 48 48 static struct vfsmount *inotify_mnt __read_mostly; 49 49 50 - /* this just sits here and wastes global memory. used to just pad userspace messages with zeros */ 51 - static struct inotify_event nul_inotify_event; 52 - 53 50 /* these are configurable via /proc/sys/fs/inotify/ */ 54 51 static int inotify_max_user_instances __read_mostly; 55 52 static int inotify_max_queued_events __read_mostly; ··· 154 157 155 158 event = fsnotify_peek_notify_event(group); 156 159 157 - event_size += roundup(event->name_len, event_size); 160 + if (event->name_len) 161 + event_size += roundup(event->name_len + 1, event_size); 158 162 159 163 if (event_size > count) 160 164 return ERR_PTR(-EINVAL); ··· 181 183 struct fsnotify_event_private_data *fsn_priv; 182 184 struct inotify_event_private_data *priv; 183 185 size_t event_size = sizeof(struct inotify_event); 184 - size_t name_len; 186 + size_t name_len = 0; 185 187 186 188 /* we get the inotify watch descriptor from the event private data */ 187 189 spin_lock(&event->lock); ··· 197 199 inotify_free_event_priv(fsn_priv); 198 200 } 199 201 200 - /* round up event->name_len so it is a multiple of event_size */ 201 - name_len = roundup(event->name_len, event_size); 202 + /* 203 + * round up event->name_len so it is a multiple of event_size 204 + * plus an extra byte for the terminating '\0'. 
205 + */ 206 + if (event->name_len) 207 + name_len = roundup(event->name_len + 1, event_size); 202 208 inotify_event.len = name_len; 203 209 204 210 inotify_event.mask = inotify_mask_to_arg(event->mask); ··· 226 224 return -EFAULT; 227 225 buf += event->name_len; 228 226 229 - /* fill userspace with 0's from nul_inotify_event */ 230 - if (copy_to_user(buf, &nul_inotify_event, len_to_zero)) 227 + /* fill userspace with 0's */ 228 + if (clear_user(buf, len_to_zero)) 231 229 return -EFAULT; 232 230 buf += len_to_zero; 233 231 event_size += name_len; ··· 328 326 list_for_each_entry(holder, &group->notification_list, event_list) { 329 327 event = holder->event; 330 328 send_len += sizeof(struct inotify_event); 331 - send_len += roundup(event->name_len, 332 - sizeof(struct inotify_event)); 329 + if (event->name_len) 330 + send_len += roundup(event->name_len + 1, 331 + sizeof(struct inotify_event)); 333 332 } 334 333 mutex_unlock(&group->notification_mutex); 335 334 ret = put_user(send_len, (int __user *) p); ··· 367 364 return error; 368 365 } 369 366 367 + /* 368 + * Remove the mark from the idr (if present) and drop the reference 369 + * on the mark because it was in the idr. 
370 + */ 370 371 static void inotify_remove_from_idr(struct fsnotify_group *group, 371 372 struct inotify_inode_mark_entry *ientry) 372 373 { 373 374 struct idr *idr; 375 + struct fsnotify_mark_entry *entry; 376 + struct inotify_inode_mark_entry *found_ientry; 377 + int wd; 374 378 375 379 spin_lock(&group->inotify_data.idr_lock); 376 380 idr = &group->inotify_data.idr; 377 - idr_remove(idr, ientry->wd); 378 - spin_unlock(&group->inotify_data.idr_lock); 381 + wd = ientry->wd; 382 + 383 + if (wd == -1) 384 + goto out; 385 + 386 + entry = idr_find(&group->inotify_data.idr, wd); 387 + if (unlikely(!entry)) 388 + goto out; 389 + 390 + found_ientry = container_of(entry, struct inotify_inode_mark_entry, fsn_entry); 391 + if (unlikely(found_ientry != ientry)) { 392 + /* We found an entry in the idr with the right wd, but it's 393 + * not the entry we were told to remove. eparis seriously 394 + * fucked up somewhere. */ 395 + WARN_ON(1); 396 + ientry->wd = -1; 397 + goto out; 398 + } 399 + 400 + /* One ref for being in the idr, one ref held by the caller */ 401 + BUG_ON(atomic_read(&entry->refcnt) < 2); 402 + 403 + idr_remove(idr, wd); 379 404 ientry->wd = -1; 405 + 406 + /* removed from the idr, drop that ref */ 407 + fsnotify_put_mark(entry); 408 + out: 409 + spin_unlock(&group->inotify_data.idr_lock); 380 410 } 411 + 381 412 /* 382 - * Send IN_IGNORED for this wd, remove this wd from the idr, and drop the 383 - * internal reference help on the mark because it is in the idr. 413 + * Send IN_IGNORED for this wd, remove this wd from the idr. 
384 414 */ 385 415 void inotify_ignored_and_remove_idr(struct fsnotify_mark_entry *entry, 386 416 struct fsnotify_group *group) ··· 422 386 struct fsnotify_event *ignored_event; 423 387 struct inotify_event_private_data *event_priv; 424 388 struct fsnotify_event_private_data *fsn_event_priv; 389 + int ret; 425 390 426 391 ignored_event = fsnotify_create_event(NULL, FS_IN_IGNORED, NULL, 427 392 FSNOTIFY_EVENT_NONE, NULL, 0, ··· 441 404 fsn_event_priv->group = group; 442 405 event_priv->wd = ientry->wd; 443 406 444 - fsnotify_add_notify_event(group, ignored_event, fsn_event_priv); 445 - 446 - /* did the private data get added? */ 447 - if (list_empty(&fsn_event_priv->event_list)) 407 + ret = fsnotify_add_notify_event(group, ignored_event, fsn_event_priv); 408 + if (ret) 448 409 inotify_free_event_priv(fsn_event_priv); 449 410 450 411 skip_send_ignore: ··· 452 417 453 418 /* remove this entry from the idr */ 454 419 inotify_remove_from_idr(group, ientry); 455 - 456 - /* removed from idr, drop that reference */ 457 - fsnotify_put_mark(entry); 458 420 459 421 atomic_dec(&group->inotify_data.user->inotify_watches); 460 422 } ··· 464 432 kmem_cache_free(inotify_inode_mark_cachep, ientry); 465 433 } 466 434 467 - static int inotify_update_watch(struct fsnotify_group *group, struct inode *inode, u32 arg) 435 + static int inotify_update_existing_watch(struct fsnotify_group *group, 436 + struct inode *inode, 437 + u32 arg) 468 438 { 469 - struct fsnotify_mark_entry *entry = NULL; 439 + struct fsnotify_mark_entry *entry; 470 440 struct inotify_inode_mark_entry *ientry; 471 - struct inotify_inode_mark_entry *tmp_ientry; 472 - int ret = 0; 473 - int add = (arg & IN_MASK_ADD); 474 - __u32 mask; 475 441 __u32 old_mask, new_mask; 442 + __u32 mask; 443 + int add = (arg & IN_MASK_ADD); 444 + int ret; 476 445 477 446 /* don't allow invalid bits: we don't want flags set */ 478 447 mask = inotify_arg_to_mask(arg); 479 448 if (unlikely(!mask)) 480 449 return -EINVAL; 481 450 482 - 
tmp_ientry = kmem_cache_alloc(inotify_inode_mark_cachep, GFP_KERNEL); 483 - if (unlikely(!tmp_ientry)) 484 - return -ENOMEM; 485 - /* we set the mask at the end after attaching it */ 486 - fsnotify_init_mark(&tmp_ientry->fsn_entry, inotify_free_mark); 487 - tmp_ientry->wd = -1; 488 - 489 - find_entry: 490 451 spin_lock(&inode->i_lock); 491 452 entry = fsnotify_find_mark_entry(group, inode); 492 453 spin_unlock(&inode->i_lock); 493 - if (entry) { 494 - ientry = container_of(entry, struct inotify_inode_mark_entry, fsn_entry); 495 - } else { 496 - ret = -ENOSPC; 497 - if (atomic_read(&group->inotify_data.user->inotify_watches) >= inotify_max_user_watches) 498 - goto out_err; 499 - retry: 500 - ret = -ENOMEM; 501 - if (unlikely(!idr_pre_get(&group->inotify_data.idr, GFP_KERNEL))) 502 - goto out_err; 454 + if (!entry) 455 + return -ENOENT; 503 456 504 - spin_lock(&group->inotify_data.idr_lock); 505 - ret = idr_get_new_above(&group->inotify_data.idr, &tmp_ientry->fsn_entry, 506 - group->inotify_data.last_wd, 507 - &tmp_ientry->wd); 508 - spin_unlock(&group->inotify_data.idr_lock); 509 - if (ret) { 510 - if (ret == -EAGAIN) 511 - goto retry; 512 - goto out_err; 513 - } 514 - 515 - ret = fsnotify_add_mark(&tmp_ientry->fsn_entry, group, inode); 516 - if (ret) { 517 - inotify_remove_from_idr(group, tmp_ientry); 518 - if (ret == -EEXIST) 519 - goto find_entry; 520 - goto out_err; 521 - } 522 - 523 - /* tmp_ientry has been added to the inode, so we are all set up. 524 - * now we just need to make sure tmp_ientry doesn't get freed and 525 - * we need to set up entry and ientry so the generic code can 526 - * do its thing. 
*/ 527 - ientry = tmp_ientry; 528 - entry = &ientry->fsn_entry; 529 - tmp_ientry = NULL; 530 - 531 - atomic_inc(&group->inotify_data.user->inotify_watches); 532 - 533 - /* update the idr hint */ 534 - group->inotify_data.last_wd = ientry->wd; 535 - 536 - /* we put the mark on the idr, take a reference */ 537 - fsnotify_get_mark(entry); 538 - } 539 - 540 - ret = ientry->wd; 457 + ientry = container_of(entry, struct inotify_inode_mark_entry, fsn_entry); 541 458 542 459 spin_lock(&entry->lock); 543 460 ··· 518 537 fsnotify_recalc_group_mask(group); 519 538 } 520 539 521 - /* this either matches fsnotify_find_mark_entry, or init_mark_entry 522 - * depending on which path we took... */ 540 + /* return the wd */ 541 + ret = ientry->wd; 542 + 543 + /* match the get from fsnotify_find_mark_entry() */ 523 544 fsnotify_put_mark(entry); 524 545 525 - out_err: 526 - /* could be an error, could be that we found an existing mark */ 527 - if (tmp_ientry) { 528 - /* on the idr but didn't make it on the inode */ 529 - if (tmp_ientry->wd != -1) 530 - inotify_remove_from_idr(group, tmp_ientry); 531 - kmem_cache_free(inotify_inode_mark_cachep, tmp_ientry); 546 + return ret; 547 + } 548 + 549 + static int inotify_new_watch(struct fsnotify_group *group, 550 + struct inode *inode, 551 + u32 arg) 552 + { 553 + struct inotify_inode_mark_entry *tmp_ientry; 554 + __u32 mask; 555 + int ret; 556 + 557 + /* don't allow invalid bits: we don't want flags set */ 558 + mask = inotify_arg_to_mask(arg); 559 + if (unlikely(!mask)) 560 + return -EINVAL; 561 + 562 + tmp_ientry = kmem_cache_alloc(inotify_inode_mark_cachep, GFP_KERNEL); 563 + if (unlikely(!tmp_ientry)) 564 + return -ENOMEM; 565 + 566 + fsnotify_init_mark(&tmp_ientry->fsn_entry, inotify_free_mark); 567 + tmp_ientry->fsn_entry.mask = mask; 568 + tmp_ientry->wd = -1; 569 + 570 + ret = -ENOSPC; 571 + if (atomic_read(&group->inotify_data.user->inotify_watches) >= inotify_max_user_watches) 572 + goto out_err; 573 + retry: 574 + ret = -ENOMEM; 
575 + 	if (unlikely(!idr_pre_get(&group->inotify_data.idr, GFP_KERNEL)))
576 + 		goto out_err;
577 + 
578 + 	spin_lock(&group->inotify_data.idr_lock);
579 + 	ret = idr_get_new_above(&group->inotify_data.idr, &tmp_ientry->fsn_entry,
580 + 				group->inotify_data.last_wd,
581 + 				&tmp_ientry->wd);
582 + 	spin_unlock(&group->inotify_data.idr_lock);
583 + 	if (ret) {
584 + 		/* idr was out of memory; allocate and try again */
585 + 		if (ret == -EAGAIN)
586 + 			goto retry;
587 + 		goto out_err;
532 588 	}
589 + 
590 + 	/* we put the mark on the idr, take a reference */
591 + 	fsnotify_get_mark(&tmp_ientry->fsn_entry);
592 + 
593 + 	/* we are on the idr, now get on the inode */
594 + 	ret = fsnotify_add_mark(&tmp_ientry->fsn_entry, group, inode);
595 + 	if (ret) {
596 + 		/* we failed to get on the inode, get off the idr */
597 + 		inotify_remove_from_idr(group, tmp_ientry);
598 + 		goto out_err;
599 + 	}
600 + 
601 + 	/* update the idr hint, who cares about races, it's just a hint */
602 + 	group->inotify_data.last_wd = tmp_ientry->wd;
603 + 
604 + 	/* increment the number of watches the user has */
605 + 	atomic_inc(&group->inotify_data.user->inotify_watches);
606 + 
607 + 	/* return the watch descriptor for this new entry */
608 + 	ret = tmp_ientry->wd;
609 + 
610 + 	/* match the ref from fsnotify_init_mark() */
611 + 	fsnotify_put_mark(&tmp_ientry->fsn_entry);
612 + 
613 + 	/* if this mark added a new event, update the group mask */
614 + 	if (mask & ~group->mask)
615 + 		fsnotify_recalc_group_mask(group);
616 + 
617 + out_err:
618 + 	if (ret < 0)
619 + 		kmem_cache_free(inotify_inode_mark_cachep, tmp_ientry);
620 + 
621 + 	return ret;
622 + }
623 + 
624 + static int inotify_update_watch(struct fsnotify_group *group, struct inode *inode, u32 arg)
625 + {
626 + 	int ret = 0;
627 + 
628 + retry:
629 + 	/* try to update an existing watch with the new arg */
630 + 	ret = inotify_update_existing_watch(group, inode, arg);
631 + 	/* no mark present, try to add a new one */
632 + 	if (ret == -ENOENT)
633 + 		ret = inotify_new_watch(group, inode, arg);
634 + 	/*
635 + 	 * inotify_new_watch could race with another thread that did an
636 + 	 * inotify_new_watch between the update above and the add here;
637 + 	 * go back and try to update the existing mark again.
638 + 	 */
639 + 	if (ret == -EEXIST)
640 + 		goto retry;
533 641 
534 642 	return ret;
535 643 }
···
638 568 
639 569 	spin_lock_init(&group->inotify_data.idr_lock);
640 570 	idr_init(&group->inotify_data.idr);
641 - 	group->inotify_data.last_wd = 0;
571 + 	group->inotify_data.last_wd = 1;
642 572 	group->inotify_data.user = user;
643 573 	group->inotify_data.fa = NULL;
644 574 
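The reworked inotify_update_watch() is now just a retry loop over two helpers: update an existing mark, fall back to creating one on -ENOENT, and loop on -EEXIST if another thread raced in between. A minimal user-space sketch of that control flow (single-threaded toy state; none of these names are the kernel's):

```c
#include <assert.h>
#include <errno.h>

/* Toy model: -1 means "no mark attached to this inode yet". */
static int existing_wd = -1;
static int next_wd = 1;

static int update_existing_watch(void)
{
	if (existing_wd == -1)
		return -ENOENT;		/* no mark present */
	return existing_wd;		/* would merge the mask, return current wd */
}

static int new_watch(void)
{
	if (existing_wd != -1)
		return -EEXIST;		/* someone else attached a mark first */
	existing_wd = next_wd++;
	return existing_wd;
}

/* Mirrors inotify_update_watch(): try update, fall back to add,
 * and go around again if a racing add beat us to it. */
static int update_watch(void)
{
	int ret;
retry:
	ret = update_existing_watch();
	if (ret == -ENOENT)
		ret = new_watch();
	if (ret == -EEXIST)
		goto retry;
	return ret;
}
```

With no racing thread the loop runs at most twice: once through the -ENOENT fallback, once more only if the add reports -EEXIST.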
+7 -4
fs/notify/notification.c
···
153 153 			return true;
154 154 		break;
155 155 	case (FSNOTIFY_EVENT_NONE):
156 + 		if (old->mask & FS_Q_OVERFLOW)
157 + 			return true;
158 + 		else if (old->mask & FS_IN_IGNORED)
159 + 			return false;
156 160 		return false;
157 161 	};
158 162 }
···
175 171 	struct list_head *list = &group->notification_list;
176 172 	struct fsnotify_event_holder *last_holder;
177 173 	struct fsnotify_event *last_event;
178 - 
179 - 	/* easy to tell if priv was attached to the event */
180 - 	INIT_LIST_HEAD(&priv->event_list);
174 + 	int ret = 0;
181 175 
182 176 	/*
183 177 	 * There is one fsnotify_event_holder embedded inside each fsnotify_event.
···
196 194 
197 195 	if (group->q_len >= group->max_events) {
198 196 		event = &q_overflow_event;
197 + 		ret = -EOVERFLOW;
199 198 		/* sorry, no private data on the overflow event */
200 199 		priv = NULL;
201 200 	}
···
238 235 	mutex_unlock(&group->notification_mutex);
239 236 
240 237 	wake_up(&group->notification_waitq);
241 - 	return 0;
238 + 	return ret;
242 239 }
243 240 
244 241 /*
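With this change fsnotify_add_notify_event() reports queue overflow to the caller via its return value instead of the caller inspecting the private-data list. A toy bounded queue with the same return convention (the limit and names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>

#define MAX_EVENTS 2	/* stand-in for group->max_events */

static int q_len;	/* stand-in for group->q_len */

/* Once the queue is full the kernel substitutes a single overflow
 * event and returns -EOVERFLOW so the caller can free its private
 * data; this sketch keeps only the return convention. */
static int add_notify_event(void)
{
	if (q_len >= MAX_EVENTS)
		return -EOVERFLOW;
	q_len++;
	return 0;
}
```

The caller pattern then becomes `ret = add_notify_event(); if (ret) free_priv();`, which is exactly how the inotify diff above consumes it.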
+42 -7
fs/ocfs2/alloc.c
··· 1914 1914 * immediately to their right. 1915 1915 */ 1916 1916 left_clusters = le32_to_cpu(right_child_el->l_recs[0].e_cpos); 1917 - if (ocfs2_is_empty_extent(&right_child_el->l_recs[0])) { 1917 + if (!ocfs2_rec_clusters(right_child_el, &right_child_el->l_recs[0])) { 1918 + BUG_ON(right_child_el->l_tree_depth); 1918 1919 BUG_ON(le16_to_cpu(right_child_el->l_next_free_rec) <= 1); 1919 1920 left_clusters = le32_to_cpu(right_child_el->l_recs[1].e_cpos); 1920 1921 } ··· 2477 2476 return ret; 2478 2477 } 2479 2478 2480 - static void ocfs2_update_edge_lengths(struct inode *inode, handle_t *handle, 2481 - struct ocfs2_path *path) 2479 + static int ocfs2_update_edge_lengths(struct inode *inode, handle_t *handle, 2480 + int subtree_index, struct ocfs2_path *path) 2482 2481 { 2483 - int i, idx; 2482 + int i, idx, ret; 2484 2483 struct ocfs2_extent_rec *rec; 2485 2484 struct ocfs2_extent_list *el; 2486 2485 struct ocfs2_extent_block *eb; 2487 2486 u32 range; 2487 + 2488 + /* 2489 + * In normal tree rotation process, we will never touch the 2490 + * tree branch above subtree_index and ocfs2_extend_rotate_transaction 2491 + * doesn't reserve the credits for them either. 2492 + * 2493 + * But we do have a special case here which will update the rightmost 2494 + * records for all the bh in the path. 2495 + * So we have to allocate extra credits and access them. 2496 + */ 2497 + ret = ocfs2_extend_trans(handle, 2498 + handle->h_buffer_credits + subtree_index); 2499 + if (ret) { 2500 + mlog_errno(ret); 2501 + goto out; 2502 + } 2503 + 2504 + ret = ocfs2_journal_access_path(inode, handle, path); 2505 + if (ret) { 2506 + mlog_errno(ret); 2507 + goto out; 2508 + } 2488 2509 2489 2510 /* Path should always be rightmost. 
 */
2490 2511 	eb = (struct ocfs2_extent_block *)path_leaf_bh(path)->b_data;
···
2528 2505 
2529 2506 		ocfs2_journal_dirty(handle, path->p_node[i].bh);
2530 2507 	}
2508 + out:
2509 + 	return ret;
2531 2510 }
2532 2511 
2533 2512 static void ocfs2_unlink_path(struct inode *inode, handle_t *handle,
···
2742 2717 	if (del_right_subtree) {
2743 2718 		ocfs2_unlink_subtree(inode, handle, left_path, right_path,
2744 2719 				     subtree_index, dealloc);
2745 - 		ocfs2_update_edge_lengths(inode, handle, left_path);
2720 + 		ret = ocfs2_update_edge_lengths(inode, handle, subtree_index,
2721 + 					       left_path);
2722 + 		if (ret) {
2723 + 			mlog_errno(ret);
2724 + 			goto out;
2725 + 		}
2746 2726 
2747 2727 		eb = (struct ocfs2_extent_block *)path_leaf_bh(left_path)->b_data;
2748 2728 		ocfs2_et_set_last_eb_blk(et, le64_to_cpu(eb->h_blkno));
···
3064 3034 
3065 3035 	ocfs2_unlink_subtree(inode, handle, left_path, path,
3066 3036 			     subtree_index, dealloc);
3067 - 	ocfs2_update_edge_lengths(inode, handle, left_path);
3037 + 	ret = ocfs2_update_edge_lengths(inode, handle, subtree_index,
3038 + 				       left_path);
3039 + 	if (ret) {
3040 + 		mlog_errno(ret);
3041 + 		goto out;
3042 + 	}
3068 3043 
3069 3044 	eb = (struct ocfs2_extent_block *)path_leaf_bh(left_path)->b_data;
3070 3045 	ocfs2_et_set_last_eb_blk(et, le64_to_cpu(eb->h_blkno));
···
6851 6816 	}
6852 6817 	status = 0;
6853 6818 bail:
6854 - 
6819 + 	brelse(last_eb_bh);
6855 6820 	mlog_exit(status);
6856 6821 	return status;
6857 6822 }
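ocfs2_update_edge_lengths() changes from void to int because it must first extend the transaction to cover the extra buffers above subtree_index, and that can fail. A sketch of the "reserve credits, then touch buffers, propagate failure via goto out" shape, with hypothetical stand-ins for the journal calls:

```c
#include <assert.h>
#include <errno.h>

static int credits;	/* stand-in for handle->h_buffer_credits */

/* Hypothetical stand-in for ocfs2_extend_trans(): grow the credit
 * reservation, or fail before any buffer is dirtied. */
static int extend_trans(int extra)
{
	if (extra < 0)
		return -EINVAL;
	credits += extra;
	return 0;
}

/* Mirrors the new shape of ocfs2_update_edge_lengths(): reserve first,
 * bail out early on failure, and report the result to the caller. */
static int update_edge_lengths(int subtree_index)
{
	int ret;

	/* reserve credits for the blocks above subtree_index first */
	ret = extend_trans(subtree_index);
	if (ret)
		goto out;

	/* ... journal access + rightmost-record updates would go here ... */
out:
	return ret;
}
```

The key ordering point the comment in the patch makes: the credits must be extended before any buffer in the path is journaled, since normal rotation never reserved credits for the branch above subtree_index.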
+49 -20
fs/ocfs2/aops.c
··· 193 193 (unsigned long long)OCFS2_I(inode)->ip_blkno); 194 194 mlog(ML_ERROR, "Size %llu, clusters %u\n", (unsigned long long)i_size_read(inode), OCFS2_I(inode)->ip_clusters); 195 195 dump_stack(); 196 + goto bail; 196 197 } 197 198 198 199 past_eof = ocfs2_blocks_for_bytes(inode->i_sb, i_size_read(inode)); ··· 895 894 */ 896 895 unsigned c_new; 897 896 unsigned c_unwritten; 897 + unsigned c_needs_zero; 898 898 }; 899 - 900 - static inline int ocfs2_should_zero_cluster(struct ocfs2_write_cluster_desc *d) 901 - { 902 - return d->c_new || d->c_unwritten; 903 - } 904 899 905 900 struct ocfs2_write_ctxt { 906 901 /* Logical cluster position / len of write */ 907 902 u32 w_cpos; 908 903 u32 w_clen; 904 + 905 + /* First cluster allocated in a nonsparse extend */ 906 + u32 w_first_new_cpos; 909 907 910 908 struct ocfs2_write_cluster_desc w_desc[OCFS2_MAX_CLUSTERS_PER_PAGE]; 911 909 ··· 983 983 return -ENOMEM; 984 984 985 985 wc->w_cpos = pos >> osb->s_clustersize_bits; 986 + wc->w_first_new_cpos = UINT_MAX; 986 987 cend = (pos + len - 1) >> osb->s_clustersize_bits; 987 988 wc->w_clen = cend - wc->w_cpos + 1; 988 989 get_bh(di_bh); ··· 1218 1217 */ 1219 1218 static int ocfs2_write_cluster(struct address_space *mapping, 1220 1219 u32 phys, unsigned int unwritten, 1220 + unsigned int should_zero, 1221 1221 struct ocfs2_alloc_context *data_ac, 1222 1222 struct ocfs2_alloc_context *meta_ac, 1223 1223 struct ocfs2_write_ctxt *wc, u32 cpos, 1224 1224 loff_t user_pos, unsigned user_len) 1225 1225 { 1226 - int ret, i, new, should_zero = 0; 1226 + int ret, i, new; 1227 1227 u64 v_blkno, p_blkno; 1228 1228 struct inode *inode = mapping->host; 1229 1229 struct ocfs2_extent_tree et; 1230 1230 1231 1231 new = phys == 0 ? 
1 : 0; 1232 - if (new || unwritten) 1233 - should_zero = 1; 1234 - 1235 1232 if (new) { 1236 1233 u32 tmp_pos; 1237 1234 ··· 1300 1301 if (tmpret) { 1301 1302 mlog_errno(tmpret); 1302 1303 if (ret == 0) 1303 - tmpret = ret; 1304 + ret = tmpret; 1304 1305 } 1305 1306 } 1306 1307 ··· 1340 1341 local_len = osb->s_clustersize - cluster_off; 1341 1342 1342 1343 ret = ocfs2_write_cluster(mapping, desc->c_phys, 1343 - desc->c_unwritten, data_ac, meta_ac, 1344 + desc->c_unwritten, 1345 + desc->c_needs_zero, 1346 + data_ac, meta_ac, 1344 1347 wc, desc->c_cpos, pos, local_len); 1345 1348 if (ret) { 1346 1349 mlog_errno(ret); ··· 1392 1391 * newly allocated cluster. 1393 1392 */ 1394 1393 desc = &wc->w_desc[0]; 1395 - if (ocfs2_should_zero_cluster(desc)) 1394 + if (desc->c_needs_zero) 1396 1395 ocfs2_figure_cluster_boundaries(osb, 1397 1396 desc->c_cpos, 1398 1397 &wc->w_target_from, 1399 1398 NULL); 1400 1399 1401 1400 desc = &wc->w_desc[wc->w_clen - 1]; 1402 - if (ocfs2_should_zero_cluster(desc)) 1401 + if (desc->c_needs_zero) 1403 1402 ocfs2_figure_cluster_boundaries(osb, 1404 1403 desc->c_cpos, 1405 1404 NULL, ··· 1467 1466 phys++; 1468 1467 } 1469 1468 1469 + /* 1470 + * If w_first_new_cpos is < UINT_MAX, we have a non-sparse 1471 + * file that got extended. w_first_new_cpos tells us 1472 + * where the newly allocated clusters are so we can 1473 + * zero them. 
1474 + */ 1475 + if (desc->c_cpos >= wc->w_first_new_cpos) { 1476 + BUG_ON(phys == 0); 1477 + desc->c_needs_zero = 1; 1478 + } 1479 + 1470 1480 desc->c_phys = phys; 1471 1481 if (phys == 0) { 1472 1482 desc->c_new = 1; 1483 + desc->c_needs_zero = 1; 1473 1484 *clusters_to_alloc = *clusters_to_alloc + 1; 1474 1485 } 1475 - if (ext_flags & OCFS2_EXT_UNWRITTEN) 1486 + 1487 + if (ext_flags & OCFS2_EXT_UNWRITTEN) { 1476 1488 desc->c_unwritten = 1; 1489 + desc->c_needs_zero = 1; 1490 + } 1477 1491 1478 1492 num_clusters--; 1479 1493 } ··· 1648 1632 if (newsize <= i_size_read(inode)) 1649 1633 return 0; 1650 1634 1651 - ret = ocfs2_extend_no_holes(inode, newsize, newsize - len); 1635 + ret = ocfs2_extend_no_holes(inode, newsize, pos); 1652 1636 if (ret) 1653 1637 mlog_errno(ret); 1638 + 1639 + wc->w_first_new_cpos = 1640 + ocfs2_clusters_for_bytes(inode->i_sb, i_size_read(inode)); 1654 1641 1655 1642 return ret; 1656 1643 } ··· 1663 1644 struct page **pagep, void **fsdata, 1664 1645 struct buffer_head *di_bh, struct page *mmap_page) 1665 1646 { 1666 - int ret, credits = OCFS2_INODE_UPDATE_CREDITS; 1647 + int ret, cluster_of_pages, credits = OCFS2_INODE_UPDATE_CREDITS; 1667 1648 unsigned int clusters_to_alloc, extents_to_split; 1668 1649 struct ocfs2_write_ctxt *wc; 1669 1650 struct inode *inode = mapping->host; ··· 1741 1722 1742 1723 } 1743 1724 1744 - ocfs2_set_target_boundaries(osb, wc, pos, len, 1745 - clusters_to_alloc + extents_to_split); 1725 + /* 1726 + * We have to zero sparse allocated clusters, unwritten extent clusters, 1727 + * and non-sparse clusters we just extended. For non-sparse writes, 1728 + * we know zeros will only be needed in the first and/or last cluster. 
1729 + 	 */
1730 + 	if (clusters_to_alloc || extents_to_split ||
1731 + 	    (wc->w_clen && (wc->w_desc[0].c_needs_zero ||
1732 + 			    wc->w_desc[wc->w_clen - 1].c_needs_zero)))
1733 + 		cluster_of_pages = 1;
1734 + 	else
1735 + 		cluster_of_pages = 0;
1736 + 
1737 + 	ocfs2_set_target_boundaries(osb, wc, pos, len, cluster_of_pages);
1746 1738 
1747 1739 	handle = ocfs2_start_trans(osb, credits);
1748 1740 	if (IS_ERR(handle)) {
···
1786 1756 	 * extent.
1787 1757 	 */
1788 1758 	ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos,
1789 - 					 clusters_to_alloc + extents_to_split,
1790 - 					 mmap_page);
1759 + 					 cluster_of_pages, mmap_page);
1791 1760 	if (ret) {
1792 1761 		mlog_errno(ret);
1793 1762 		goto out_quota;
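The new c_needs_zero flag replaces ocfs2_should_zero_cluster(): a cluster needs zeroing when it is freshly allocated (phys == 0), unwritten, or sits at or past w_first_new_cpos in a non-sparse extend. A compact model of that decision (field names follow the patch; the helper itself is invented for illustration):

```c
#include <assert.h>

/* Per-cluster write descriptor, trimmed to the flags this patch touches. */
struct write_cluster_desc {
	unsigned c_new;
	unsigned c_unwritten;
	unsigned c_needs_zero;
};

/* Hypothetical helper collapsing the three zeroing triggers from the
 * descriptor-filling loop in ocfs2_populate_write_desc(). */
static void mark_needs_zero(struct write_cluster_desc *d,
			    unsigned cpos, unsigned first_new_cpos,
			    unsigned phys, unsigned unwritten)
{
	d->c_new = (phys == 0);
	d->c_unwritten = unwritten;
	d->c_needs_zero = 0;

	/* non-sparse extend: clusters past the old EOF must be zeroed */
	if (cpos >= first_new_cpos)
		d->c_needs_zero = 1;
	/* new or unwritten clusters always need zeroing */
	if (d->c_new || d->c_unwritten)
		d->c_needs_zero = 1;
}
```

This is why ocfs2_set_target_boundaries() above can now key off c_needs_zero alone instead of re-deriving "new or unwritten" at each call site.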
+38 -8
fs/ocfs2/dcache.c
··· 85 85 goto bail; 86 86 } 87 87 88 + /* 89 + * If the last lookup failed to create dentry lock, let us 90 + * redo it. 91 + */ 92 + if (!dentry->d_fsdata) { 93 + mlog(0, "Inode %llu doesn't have dentry lock, " 94 + "returning false\n", 95 + (unsigned long long)OCFS2_I(inode)->ip_blkno); 96 + goto bail; 97 + } 98 + 88 99 ret = 1; 89 100 90 101 bail: ··· 321 310 return ret; 322 311 } 323 312 324 - static DEFINE_SPINLOCK(dentry_list_lock); 313 + DEFINE_SPINLOCK(dentry_list_lock); 325 314 326 315 /* We limit the number of dentry locks to drop in one go. We have 327 316 * this limit so that we don't starve other users of ocfs2_wq. */ 328 317 #define DL_INODE_DROP_COUNT 64 329 318 330 319 /* Drop inode references from dentry locks */ 331 - void ocfs2_drop_dl_inodes(struct work_struct *work) 320 + static void __ocfs2_drop_dl_inodes(struct ocfs2_super *osb, int drop_count) 332 321 { 333 - struct ocfs2_super *osb = container_of(work, struct ocfs2_super, 334 - dentry_lock_work); 335 322 struct ocfs2_dentry_lock *dl; 336 - int drop_count = DL_INODE_DROP_COUNT; 337 323 338 324 spin_lock(&dentry_list_lock); 339 - while (osb->dentry_lock_list && drop_count--) { 325 + while (osb->dentry_lock_list && (drop_count < 0 || drop_count--)) { 340 326 dl = osb->dentry_lock_list; 341 327 osb->dentry_lock_list = dl->dl_next; 342 328 spin_unlock(&dentry_list_lock); ··· 341 333 kfree(dl); 342 334 spin_lock(&dentry_list_lock); 343 335 } 344 - if (osb->dentry_lock_list) 336 + spin_unlock(&dentry_list_lock); 337 + } 338 + 339 + void ocfs2_drop_dl_inodes(struct work_struct *work) 340 + { 341 + struct ocfs2_super *osb = container_of(work, struct ocfs2_super, 342 + dentry_lock_work); 343 + 344 + __ocfs2_drop_dl_inodes(osb, DL_INODE_DROP_COUNT); 345 + /* 346 + * Don't queue dropping if umount is in progress. 
We flush the
347 + 	 * list in ocfs2_dismount_volume
348 + 	 */
349 + 	spin_lock(&dentry_list_lock);
350 + 	if (osb->dentry_lock_list &&
351 + 	    !ocfs2_test_osb_flag(osb, OCFS2_OSB_DROP_DENTRY_LOCK_IMMED))
345 352 		queue_work(ocfs2_wq, &osb->dentry_lock_work);
346 353 	spin_unlock(&dentry_list_lock);
354 + }
355 + 
356 + /* Flush the whole work queue */
357 + void ocfs2_drop_all_dl_inodes(struct ocfs2_super *osb)
358 + {
359 + 	__ocfs2_drop_dl_inodes(osb, -1);
347 360 }
348 361 
349 362 /*
···
397 368 	/* We leave dropping of inode reference to ocfs2_wq as that can
398 369 	 * possibly lead to inode deletion which gets tricky */
399 370 	spin_lock(&dentry_list_lock);
400 - 	if (!osb->dentry_lock_list)
371 + 	if (!osb->dentry_lock_list &&
372 + 	    !ocfs2_test_osb_flag(osb, OCFS2_OSB_DROP_DENTRY_LOCK_IMMED))
401 373 		queue_work(ocfs2_wq, &osb->dentry_lock_work);
402 374 	dl->dl_next = osb->dentry_lock_list;
403 375 	osb->dentry_lock_list = dl;
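__ocfs2_drop_dl_inodes() now takes a drop count where a negative value means "drain everything", which is how ocfs2_drop_all_dl_inodes() reuses the same loop at umount while the work-queue path keeps its DL_INODE_DROP_COUNT batch limit. The loop condition is easy to model (toy singly linked list, not the kernel structures):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; };

static struct node pool[5];
static struct node *list_head;

/* Build an n-node list out of the static pool (n <= 5). */
static void make_list(int n)
{
	list_head = NULL;
	for (int i = 0; i < n; i++) {
		pool[i].next = list_head;
		list_head = &pool[i];
	}
}

/* Mirrors the batching condition in __ocfs2_drop_dl_inodes():
 * drop_count < 0 drains the whole list, otherwise at most
 * drop_count entries are processed per call. */
static int drop_dl_inodes(struct node **list, int drop_count)
{
	int dropped = 0;

	while (*list && (drop_count < 0 || drop_count--)) {
		*list = (*list)->next;
		dropped++;
	}
	return dropped;
}
```

Note the short-circuit: when drop_count starts negative, `drop_count--` is never evaluated, so the counter cannot wrap back to "limited" behavior mid-drain.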
+3
fs/ocfs2/dcache.h
···
49 49 int ocfs2_dentry_attach_lock(struct dentry *dentry, struct inode *inode,
50 50 			     u64 parent_blkno);
51 51 
52 + extern spinlock_t dentry_list_lock;
53 + 
52 54 void ocfs2_dentry_lock_put(struct ocfs2_super *osb,
53 55 			   struct ocfs2_dentry_lock *dl);
54 56 
55 57 void ocfs2_drop_dl_inodes(struct work_struct *work);
58 + void ocfs2_drop_all_dl_inodes(struct ocfs2_super *osb);
56 59 
57 60 struct dentry *ocfs2_find_local_alias(struct inode *inode, u64 parent_blkno,
58 61 				      int skip_unhashed);
-1
fs/ocfs2/dlm/dlmast.c
···
103 103 			lock->ast_pending, lock->ml.type);
104 104 		BUG();
105 105 	}
106 - 	BUG_ON(!list_empty(&lock->ast_list));
107 106 	if (lock->ast_pending)
108 107 		mlog(0, "lock has an ast getting flushed right now\n");
109 108 
+1 -1
fs/ocfs2/dlm/dlmrecovery.c
···
1118 1118 
1119 1119 	mlog(0, "%s:%.*s: sending mig lockres (%s) to %u\n",
1120 1120 	     dlm->name, res->lockname.len, res->lockname.name,
1121 - 	     orig_flags & DLM_MRES_MIGRATION ? "migrate" : "recovery",
1121 + 	     orig_flags & DLM_MRES_MIGRATION ? "migration" : "recovery",
1122 1122 	     send_to);
1123 1123 
1124 1124 	/* send it */
+2 -2
fs/ocfs2/dlm/dlmunlock.c
···
122 122 	 * that still has AST's pending... */
123 123 	in_use = !list_empty(&lock->ast_list);
124 124 	spin_unlock(&dlm->ast_lock);
125 - 	if (in_use) {
125 + 	if (in_use && !(flags & LKM_CANCEL)) {
126 126 		mlog(ML_ERROR, "lockres %.*s: Someone is calling dlmunlock "
127 127 		     "while waiting for an ast!", res->lockname.len,
128 128 		     res->lockname.name);
···
131 131 
132 132 	spin_lock(&res->spinlock);
133 133 	if (res->state & DLM_LOCK_RES_IN_PROGRESS) {
134 - 		if (master_node) {
134 + 		if (master_node && !(flags & LKM_CANCEL)) {
135 135 			mlog(ML_ERROR, "lockres in progress!\n");
136 136 			spin_unlock(&res->spinlock);
137 137 			return DLM_FORWARD;
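Both dlmunlock checks are relaxed with the same guard: the error paths only fire when the caller is not cancelling, i.e. when LKM_CANCEL is clear, since a cancel legitimately targets a lock with pending ASTs or an in-progress lockres. A sketch of the guard (the flag value and return code are stand-ins for the kernel's):

```c
#include <assert.h>

#define LKM_CANCEL 0x01		/* illustrative value, not the kernel's */

/* Mirrors the fixed condition: an unlock racing with pending ASTs is
 * an error, but the same state is expected during a cancel, so the
 * complaint is skipped when LKM_CANCEL is set. */
static int check_unlock(int in_use, int flags)
{
	if (in_use && !(flags & LKM_CANCEL))
		return -1;	/* would mlog "calling dlmunlock while waiting for an ast" */
	return 0;
}
```

The second hunk applies the identical guard to the DLM_LOCK_RES_IN_PROGRESS / master_node case.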
+4 -1
fs/ocfs2/file.c
···
1851 1851 	if (ret)
1852 1852 		goto out_dio;
1853 1853 
1854 + 	count = ocount;
1854 1855 	ret = generic_write_checks(file, ppos, &count,
1855 1856 				   S_ISBLK(inode->i_mode));
1856 1857 	if (ret)
···
1919 1918 
1920 1919 	mutex_unlock(&inode->i_mutex);
1921 1920 
1921 + 	if (written)
1922 + 		ret = written;
1922 1923 	mlog_exit(ret);
1923 - 	return written ? written : ret;
1924 + 	return ret;
1924 1925 }
1925 1926 
1926 1927 static int ocfs2_splice_to_file(struct pipe_inode_info *pipe,
+7 -1
fs/ocfs2/journal.c
···
1954 1954 	os->os_osb = osb;
1955 1955 	os->os_count = 0;
1956 1956 	os->os_seqno = 0;
1957 - 	os->os_scantime = CURRENT_TIME;
1958 1957 	mutex_init(&os->os_lock);
1959 1958 	INIT_DELAYED_WORK(&os->os_orphan_scan_work, ocfs2_orphan_scan_work);
1959 + }
1960 1960 
1961 + void ocfs2_orphan_scan_start(struct ocfs2_super *osb)
1962 + {
1963 + 	struct ocfs2_orphan_scan *os;
1964 + 
1965 + 	os = &osb->osb_orphan_scan;
1966 + 	os->os_scantime = CURRENT_TIME;
1961 1967 	if (ocfs2_is_hard_readonly(osb) || ocfs2_mount_local(osb))
1962 1968 		atomic_set(&os->os_state, ORPHAN_SCAN_INACTIVE);
1963 1969 	else {
+12 -9
fs/ocfs2/journal.h
··· 145 145 146 146 /* Exported only for the journal struct init code in super.c. Do not call. */ 147 147 void ocfs2_orphan_scan_init(struct ocfs2_super *osb); 148 + void ocfs2_orphan_scan_start(struct ocfs2_super *osb); 148 149 void ocfs2_orphan_scan_stop(struct ocfs2_super *osb); 149 150 void ocfs2_orphan_scan_exit(struct ocfs2_super *osb); 150 151 ··· 330 329 /* extended attribute block update */ 331 330 #define OCFS2_XATTR_BLOCK_UPDATE_CREDITS 1 332 331 333 - /* global quotafile inode update, data block */ 334 - #define OCFS2_QINFO_WRITE_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 1) 332 + /* Update of a single quota block */ 333 + #define OCFS2_QUOTA_BLOCK_UPDATE_CREDITS 1 335 334 335 + /* global quotafile inode update, data block */ 336 + #define OCFS2_QINFO_WRITE_CREDITS (OCFS2_INODE_UPDATE_CREDITS + \ 337 + OCFS2_QUOTA_BLOCK_UPDATE_CREDITS) 338 + 339 + #define OCFS2_LOCAL_QINFO_WRITE_CREDITS OCFS2_QUOTA_BLOCK_UPDATE_CREDITS 336 340 /* 337 341 * The two writes below can accidentally see global info dirty due 338 342 * to set_info() quotactl so make them prepared for the writes. 
339 343  */
340 344 /* quota data block, global info */
341 345 /* Write to local quota file */
342 - #define OCFS2_QWRITE_CREDITS (OCFS2_QINFO_WRITE_CREDITS + 1)
346 + #define OCFS2_QWRITE_CREDITS (OCFS2_QINFO_WRITE_CREDITS + \
347 + 			      OCFS2_QUOTA_BLOCK_UPDATE_CREDITS)
343 348 
344 349 /* global quota data block, local quota data block, global quota inode,
345 350  * global quota info */
346 - #define OCFS2_QSYNC_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 3)
351 + #define OCFS2_QSYNC_CREDITS (OCFS2_QINFO_WRITE_CREDITS + \
352 + 			     2 * OCFS2_QUOTA_BLOCK_UPDATE_CREDITS)
347 353 
348 354 static inline int ocfs2_quota_trans_credits(struct super_block *sb)
349 355 {
···
362 354 		credits += OCFS2_QWRITE_CREDITS;
363 355 	return credits;
364 356 }
365 - 
366 - /* Number of credits needed for removing quota structure from file */
367 - int ocfs2_calc_qdel_credits(struct super_block *sb, int type);
368 - /* Number of credits needed for initialization of new quota structure */
369 - int ocfs2_calc_qinit_credits(struct super_block *sb, int type);
370 357 
371 358 /* group extend. inode update and last group update. */
372 359 #define OCFS2_GROUP_EXTEND_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 1)
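The rewritten credit macros are plain arithmetic over the new OCFS2_QUOTA_BLOCK_UPDATE_CREDITS constant. Re-deriving the sums (the base values of 1 below are assumptions made for illustration, chosen to match the usual single-block journal credit):

```c
#include <assert.h>

/* Assumed base values, defined locally just to show the arithmetic. */
#define OCFS2_INODE_UPDATE_CREDITS		1
#define OCFS2_QUOTA_BLOCK_UPDATE_CREDITS	1

/* global quotafile inode update, data block */
#define OCFS2_QINFO_WRITE_CREDITS	(OCFS2_INODE_UPDATE_CREDITS + \
					 OCFS2_QUOTA_BLOCK_UPDATE_CREDITS)

/* write to local quota file: qinfo write plus the local quota block */
#define OCFS2_QWRITE_CREDITS		(OCFS2_QINFO_WRITE_CREDITS + \
					 OCFS2_QUOTA_BLOCK_UPDATE_CREDITS)

/* sync: qinfo write plus global and local quota data blocks */
#define OCFS2_QSYNC_CREDITS		(OCFS2_QINFO_WRITE_CREDITS + \
					 2 * OCFS2_QUOTA_BLOCK_UPDATE_CREDITS)
```

Expressing every count in terms of one block-update constant is the point of the patch: when the per-block cost changes, the derived credits stay consistent.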
+18 -4
fs/ocfs2/ocfs2.h
···
224 224 	OCFS2_MOUNT_GRPQUOTA = 1 << 10, /* We support group quotas */
225 225 };
226 226 
227 - #define OCFS2_OSB_SOFT_RO	0x0001
228 - #define OCFS2_OSB_HARD_RO	0x0002
229 - #define OCFS2_OSB_ERROR_FS	0x0004
230 - #define OCFS2_DEFAULT_ATIME_QUANTUM	60
227 + #define OCFS2_OSB_SOFT_RO			0x0001
228 + #define OCFS2_OSB_HARD_RO			0x0002
229 + #define OCFS2_OSB_ERROR_FS			0x0004
230 + #define OCFS2_OSB_DROP_DENTRY_LOCK_IMMED	0x0008
231 + 
232 + #define OCFS2_DEFAULT_ATIME_QUANTUM		60
231 233 
232 234 struct ocfs2_journal;
233 235 struct ocfs2_slot_info;
···
490 488 	spin_lock(&osb->osb_lock);
491 489 	osb->osb_flags |= flag;
492 490 	spin_unlock(&osb->osb_lock);
491 + }
492 + 
493 + 
494 + static inline unsigned long ocfs2_test_osb_flag(struct ocfs2_super *osb,
495 + 						unsigned long flag)
496 + {
497 + 	unsigned long ret;
498 + 
499 + 	spin_lock(&osb->osb_lock);
500 + 	ret = osb->osb_flags & flag;
501 + 	spin_unlock(&osb->osb_lock);
502 + 	return ret;
493 503 }
494 504 
495 505 static inline void ocfs2_set_ro_flag(struct ocfs2_super *osb,
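The new ocfs2_test_osb_flag() reads osb_flags under the same osb_lock that ocfs2_set_osb_flag() takes, so a reader never observes a torn or half-merged flags word. A user-space analogue, with a pthread mutex standing in for the kernel spinlock (flag value is illustrative):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t osb_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long osb_flags;

/* Analogue of ocfs2_set_osb_flag(): OR in a bit under the lock. */
static void set_osb_flag(unsigned long flag)
{
	pthread_mutex_lock(&osb_lock);
	osb_flags |= flag;
	pthread_mutex_unlock(&osb_lock);
}

/* Analogue of ocfs2_test_osb_flag(): test under the same lock so the
 * read pairs with any concurrent update. */
static unsigned long test_osb_flag(unsigned long flag)
{
	unsigned long ret;

	pthread_mutex_lock(&osb_lock);
	ret = osb_flags & flag;
	pthread_mutex_unlock(&osb_lock);
	return ret;
}
```

This is the primitive the dcache.c hunks rely on to check OCFS2_OSB_DROP_DENTRY_LOCK_IMMED before queueing more work during umount.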
+1
fs/ocfs2/ocfs2_lockid.h
···
108 108 	[OCFS2_LOCK_TYPE_OPEN] = "Open",
109 109 	[OCFS2_LOCK_TYPE_FLOCK] = "Flock",
110 110 	[OCFS2_LOCK_TYPE_QINFO] = "Quota",
111 + 	[OCFS2_LOCK_TYPE_NFS_SYNC] = "NFSSync",
111 112 	[OCFS2_LOCK_TYPE_ORPHAN_SCAN] = "OrphanScan",
112 113 };
113 114 
-1
fs/ocfs2/quota.h
···
50 50 	unsigned int dqi_chunks;	/* Number of chunks in local quota file */
51 51 	unsigned int dqi_blocks;	/* Number of blocks allocated for local quota file */
52 52 	unsigned int dqi_syncms;	/* How often should we sync with other nodes */
53 - 	unsigned int dqi_syncjiff;	/* Precomputed dqi_syncms in jiffies */
54 53 	struct list_head dqi_chunk;	/* List of chunks */
55 54 	struct inode *dqi_gqinode;	/* Global quota file inode */
56 55 	struct ocfs2_lock_res dqi_gqlock;	/* Lock protecting quota information structure */
+82 -62
fs/ocfs2/quota_global.c
··· 23 23 #include "sysfile.h" 24 24 #include "dlmglue.h" 25 25 #include "uptodate.h" 26 + #include "super.h" 26 27 #include "quota.h" 27 28 28 29 static struct workqueue_struct *ocfs2_quota_wq = NULL; ··· 70 69 d->dqb_curspace = cpu_to_le64(m->dqb_curspace); 71 70 d->dqb_btime = cpu_to_le64(m->dqb_btime); 72 71 d->dqb_itime = cpu_to_le64(m->dqb_itime); 72 + d->dqb_pad1 = d->dqb_pad2 = 0; 73 73 } 74 74 75 75 static int ocfs2_global_is_id(void *dp, struct dquot *dquot) ··· 115 113 int rc = 0; 116 114 struct buffer_head *tmp = *bh; 117 115 116 + if (i_size_read(inode) >> inode->i_sb->s_blocksize_bits <= v_block) { 117 + ocfs2_error(inode->i_sb, 118 + "Quota file %llu is probably corrupted! Requested " 119 + "to read block %Lu but file has size only %Lu\n", 120 + (unsigned long long)OCFS2_I(inode)->ip_blkno, 121 + (unsigned long long)v_block, 122 + (unsigned long long)i_size_read(inode)); 123 + return -EIO; 124 + } 118 125 rc = ocfs2_read_virt_blocks(inode, v_block, 1, &tmp, 0, 119 126 ocfs2_validate_quota_block); 120 127 if (rc) ··· 222 211 223 212 mutex_lock_nested(&gqinode->i_mutex, I_MUTEX_QUOTA); 224 213 if (gqinode->i_size < off + len) { 225 - down_write(&OCFS2_I(gqinode)->ip_alloc_sem); 226 - err = ocfs2_extend_no_holes(gqinode, off + len, off); 227 - up_write(&OCFS2_I(gqinode)->ip_alloc_sem); 228 - if (err < 0) 229 - goto out; 214 + loff_t rounded_end = 215 + ocfs2_align_bytes_to_blocks(sb, off + len); 216 + 217 + /* Space is already allocated in ocfs2_global_read_dquot() */ 230 218 err = ocfs2_simple_size_update(gqinode, 231 219 oinfo->dqi_gqi_bh, 232 - off + len); 220 + rounded_end); 233 221 if (err < 0) 234 222 goto out; 235 223 new = 1; ··· 244 234 } 245 235 if (err) { 246 236 mlog_errno(err); 247 - return err; 237 + goto out; 248 238 } 249 239 lock_buffer(bh); 250 240 if (new) ··· 352 342 info->dqi_bgrace = le32_to_cpu(dinfo.dqi_bgrace); 353 343 info->dqi_igrace = le32_to_cpu(dinfo.dqi_igrace); 354 344 oinfo->dqi_syncms = le32_to_cpu(dinfo.dqi_syncms); 
355 - oinfo->dqi_syncjiff = msecs_to_jiffies(oinfo->dqi_syncms); 356 345 oinfo->dqi_gi.dqi_blocks = le32_to_cpu(dinfo.dqi_blocks); 357 346 oinfo->dqi_gi.dqi_free_blk = le32_to_cpu(dinfo.dqi_free_blk); 358 347 oinfo->dqi_gi.dqi_free_entry = le32_to_cpu(dinfo.dqi_free_entry); ··· 361 352 oinfo->dqi_gi.dqi_qtree_depth = qtree_depth(&oinfo->dqi_gi); 362 353 INIT_DELAYED_WORK(&oinfo->dqi_sync_work, qsync_work_fn); 363 354 queue_delayed_work(ocfs2_quota_wq, &oinfo->dqi_sync_work, 364 - oinfo->dqi_syncjiff); 355 + msecs_to_jiffies(oinfo->dqi_syncms)); 365 356 366 357 out_err: 367 358 mlog_exit(status); ··· 411 402 return err; 412 403 } 413 404 405 + static int ocfs2_global_qinit_alloc(struct super_block *sb, int type) 406 + { 407 + struct ocfs2_mem_dqinfo *oinfo = sb_dqinfo(sb, type)->dqi_priv; 408 + 409 + /* 410 + * We may need to allocate tree blocks and a leaf block but not the 411 + * root block 412 + */ 413 + return oinfo->dqi_gi.dqi_qtree_depth; 414 + } 415 + 416 + static int ocfs2_calc_global_qinit_credits(struct super_block *sb, int type) 417 + { 418 + /* We modify all the allocated blocks, tree root, and info block */ 419 + return (ocfs2_global_qinit_alloc(sb, type) + 2) * 420 + OCFS2_QUOTA_BLOCK_UPDATE_CREDITS; 421 + } 422 + 414 423 /* Read in information from global quota file and acquire a reference to it. 
415 424 * dquot_acquire() has already started the transaction and locked quota file */ 416 425 int ocfs2_global_read_dquot(struct dquot *dquot) 417 426 { 418 427 int err, err2, ex = 0; 419 - struct ocfs2_mem_dqinfo *info = 420 - sb_dqinfo(dquot->dq_sb, dquot->dq_type)->dqi_priv; 428 + struct super_block *sb = dquot->dq_sb; 429 + int type = dquot->dq_type; 430 + struct ocfs2_mem_dqinfo *info = sb_dqinfo(sb, type)->dqi_priv; 431 + struct ocfs2_super *osb = OCFS2_SB(sb); 432 + struct inode *gqinode = info->dqi_gqinode; 433 + int need_alloc = ocfs2_global_qinit_alloc(sb, type); 434 + handle_t *handle = NULL; 421 435 422 436 err = ocfs2_qinfo_lock(info, 0); 423 437 if (err < 0) ··· 451 419 OCFS2_DQUOT(dquot)->dq_use_count++; 452 420 OCFS2_DQUOT(dquot)->dq_origspace = dquot->dq_dqb.dqb_curspace; 453 421 OCFS2_DQUOT(dquot)->dq_originodes = dquot->dq_dqb.dqb_curinodes; 422 + ocfs2_qinfo_unlock(info, 0); 423 + 454 424 if (!dquot->dq_off) { /* No real quota entry? */ 455 - /* Upgrade to exclusive lock for allocation */ 456 - ocfs2_qinfo_unlock(info, 0); 457 - err = ocfs2_qinfo_lock(info, 1); 458 - if (err < 0) 459 - goto out_qlock; 460 425 ex = 1; 426 + /* 427 + * Add blocks to quota file before we start a transaction since 428 + * locking allocators ranks above a transaction start 429 + */ 430 + WARN_ON(journal_current_handle()); 431 + down_write(&OCFS2_I(gqinode)->ip_alloc_sem); 432 + err = ocfs2_extend_no_holes(gqinode, 433 + gqinode->i_size + (need_alloc << sb->s_blocksize_bits), 434 + gqinode->i_size); 435 + up_write(&OCFS2_I(gqinode)->ip_alloc_sem); 436 + if (err < 0) 437 + goto out; 461 438 } 439 + 440 + handle = ocfs2_start_trans(osb, 441 + ocfs2_calc_global_qinit_credits(sb, type)); 442 + if (IS_ERR(handle)) { 443 + err = PTR_ERR(handle); 444 + goto out; 445 + } 446 + err = ocfs2_qinfo_lock(info, ex); 447 + if (err < 0) 448 + goto out_trans; 462 449 err = qtree_write_dquot(&info->dqi_gi, dquot); 463 450 if (ex && info_dirty(sb_dqinfo(dquot->dq_sb, dquot->dq_type))) 
{ 464 451 err2 = __ocfs2_global_write_info(dquot->dq_sb, dquot->dq_type); ··· 489 438 ocfs2_qinfo_unlock(info, 1); 490 439 else 491 440 ocfs2_qinfo_unlock(info, 0); 441 + out_trans: 442 + if (handle) 443 + ocfs2_commit_trans(osb, handle); 492 444 out: 493 445 if (err < 0) 494 446 mlog_errno(err); ··· 661 607 662 608 dquot_scan_active(sb, ocfs2_sync_dquot_helper, oinfo->dqi_type); 663 609 queue_delayed_work(ocfs2_quota_wq, &oinfo->dqi_sync_work, 664 - oinfo->dqi_syncjiff); 610 + msecs_to_jiffies(oinfo->dqi_syncms)); 665 611 } 666 612 667 613 /* ··· 689 635 return status; 690 636 } 691 637 692 - int ocfs2_calc_qdel_credits(struct super_block *sb, int type) 638 + static int ocfs2_calc_qdel_credits(struct super_block *sb, int type) 693 639 { 694 - struct ocfs2_mem_dqinfo *oinfo; 695 - int features[MAXQUOTAS] = { OCFS2_FEATURE_RO_COMPAT_USRQUOTA, 696 - OCFS2_FEATURE_RO_COMPAT_GRPQUOTA }; 697 - 698 - if (!OCFS2_HAS_RO_COMPAT_FEATURE(sb, features[type])) 699 - return 0; 700 - 701 - oinfo = sb_dqinfo(sb, type)->dqi_priv; 702 - /* We modify tree, leaf block, global info, local chunk header, 703 - * global and local inode */ 704 - return oinfo->dqi_gi.dqi_qtree_depth + 2 + 1 + 705 - 2 * OCFS2_INODE_UPDATE_CREDITS; 640 + struct ocfs2_mem_dqinfo *oinfo = sb_dqinfo(sb, type)->dqi_priv; 641 + /* 642 + * We modify tree, leaf block, global info, local chunk header, 643 + * global and local inode; OCFS2_QINFO_WRITE_CREDITS already 644 + * accounts for inode update 645 + */ 646 + return (oinfo->dqi_gi.dqi_qtree_depth + 2) * 647 + OCFS2_QUOTA_BLOCK_UPDATE_CREDITS + 648 + OCFS2_QINFO_WRITE_CREDITS + 649 + OCFS2_INODE_UPDATE_CREDITS; 706 650 } 707 651 708 652 static int ocfs2_release_dquot(struct dquot *dquot) ··· 732 680 return status; 733 681 } 734 682 735 - int ocfs2_calc_qinit_credits(struct super_block *sb, int type) 736 - { 737 - struct ocfs2_mem_dqinfo *oinfo; 738 - int features[MAXQUOTAS] = { OCFS2_FEATURE_RO_COMPAT_USRQUOTA, 739 - OCFS2_FEATURE_RO_COMPAT_GRPQUOTA }; 740 - 
struct ocfs2_dinode *lfe, *gfe; 741 - 742 - if (!OCFS2_HAS_RO_COMPAT_FEATURE(sb, features[type])) 743 - return 0; 744 - 745 - oinfo = sb_dqinfo(sb, type)->dqi_priv; 746 - gfe = (struct ocfs2_dinode *)oinfo->dqi_gqi_bh->b_data; 747 - lfe = (struct ocfs2_dinode *)oinfo->dqi_lqi_bh->b_data; 748 - /* We can extend local file + global file. In local file we 749 - * can modify info, chunk header block and dquot block. In 750 - * global file we can modify info, tree and leaf block */ 751 - return ocfs2_calc_extend_credits(sb, &lfe->id2.i_list, 0) + 752 - ocfs2_calc_extend_credits(sb, &gfe->id2.i_list, 0) + 753 - 3 + oinfo->dqi_gi.dqi_qtree_depth + 2; 754 - } 755 - 756 683 static int ocfs2_acquire_dquot(struct dquot *dquot) 757 684 { 758 - handle_t *handle; 759 685 struct ocfs2_mem_dqinfo *oinfo = 760 686 sb_dqinfo(dquot->dq_sb, dquot->dq_type)->dqi_priv; 761 - struct ocfs2_super *osb = OCFS2_SB(dquot->dq_sb); 762 687 int status = 0; 763 688 764 689 mlog_entry("id=%u, type=%d", dquot->dq_id, dquot->dq_type); ··· 744 715 status = ocfs2_lock_global_qf(oinfo, 1); 745 716 if (status < 0) 746 717 goto out; 747 - handle = ocfs2_start_trans(osb, 748 - ocfs2_calc_qinit_credits(dquot->dq_sb, dquot->dq_type)); 749 - if (IS_ERR(handle)) { 750 - status = PTR_ERR(handle); 751 - mlog_errno(status); 752 - goto out_ilock; 753 - } 754 718 status = dquot_acquire(dquot); 755 - ocfs2_commit_trans(osb, handle); 756 - out_ilock: 757 719 ocfs2_unlock_global_qf(oinfo, 1); 758 720 out: 759 721 mlog_exit(status);
+102 -24
fs/ocfs2/quota_local.c
··· 20 20 #include "sysfile.h" 21 21 #include "dlmglue.h" 22 22 #include "quota.h" 23 + #include "uptodate.h" 23 24 24 25 /* Number of local quota structures per block */ 25 26 static inline unsigned int ol_quota_entries_per_block(struct super_block *sb) ··· 101 100 handle_t *handle; 102 101 int status; 103 102 104 - handle = ocfs2_start_trans(OCFS2_SB(sb), 1); 103 + handle = ocfs2_start_trans(OCFS2_SB(sb), 104 + OCFS2_QUOTA_BLOCK_UPDATE_CREDITS); 105 105 if (IS_ERR(handle)) { 106 106 status = PTR_ERR(handle); 107 107 mlog_errno(status); ··· 612 610 goto out_bh; 613 611 /* Mark quota file as clean if we are recovering quota file of 614 612 * some other node. */ 615 - handle = ocfs2_start_trans(osb, 1); 613 + handle = ocfs2_start_trans(osb, 614 + OCFS2_LOCAL_QINFO_WRITE_CREDITS); 616 615 if (IS_ERR(handle)) { 617 616 status = PTR_ERR(handle); 618 617 mlog_errno(status); ··· 943 940 struct ocfs2_local_disk_chunk *dchunk; 944 941 int status; 945 942 handle_t *handle; 946 - struct buffer_head *bh = NULL; 943 + struct buffer_head *bh = NULL, *dbh = NULL; 947 944 u64 p_blkno; 948 945 949 946 /* We are protected by dqio_sem so no locking needed */ ··· 967 964 mlog_errno(status); 968 965 goto out; 969 966 } 970 - 971 - down_read(&OCFS2_I(lqinode)->ip_alloc_sem); 972 - status = ocfs2_extent_map_get_blocks(lqinode, oinfo->dqi_blocks, 973 - &p_blkno, NULL, NULL); 974 - up_read(&OCFS2_I(lqinode)->ip_alloc_sem); 975 - if (status < 0) { 976 - mlog_errno(status); 977 - goto out; 978 - } 979 - bh = sb_getblk(sb, p_blkno); 980 - if (!bh) { 981 - status = -ENOMEM; 982 - mlog_errno(status); 983 - goto out; 984 - } 985 - dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data; 986 - 987 - handle = ocfs2_start_trans(OCFS2_SB(sb), 2); 967 + /* Local quota info and two new blocks we initialize */ 968 + handle = ocfs2_start_trans(OCFS2_SB(sb), 969 + OCFS2_LOCAL_QINFO_WRITE_CREDITS + 970 + 2 * OCFS2_QUOTA_BLOCK_UPDATE_CREDITS); 988 971 if (IS_ERR(handle)) { 989 972 status = PTR_ERR(handle); 
990 973 mlog_errno(status); 991 974 goto out; 992 975 } 993 976 977 + /* Initialize chunk header */ 978 + down_read(&OCFS2_I(lqinode)->ip_alloc_sem); 979 + status = ocfs2_extent_map_get_blocks(lqinode, oinfo->dqi_blocks, 980 + &p_blkno, NULL, NULL); 981 + up_read(&OCFS2_I(lqinode)->ip_alloc_sem); 982 + if (status < 0) { 983 + mlog_errno(status); 984 + goto out_trans; 985 + } 986 + bh = sb_getblk(sb, p_blkno); 987 + if (!bh) { 988 + status = -ENOMEM; 989 + mlog_errno(status); 990 + goto out_trans; 991 + } 992 + dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data; 993 + ocfs2_set_new_buffer_uptodate(lqinode, bh); 994 994 status = ocfs2_journal_access_dq(handle, lqinode, bh, 995 - OCFS2_JOURNAL_ACCESS_WRITE); 995 + OCFS2_JOURNAL_ACCESS_CREATE); 996 996 if (status < 0) { 997 997 mlog_errno(status); 998 998 goto out_trans; ··· 1005 999 memset(dchunk->dqc_bitmap, 0, 1006 1000 sb->s_blocksize - sizeof(struct ocfs2_local_disk_chunk) - 1007 1001 OCFS2_QBLK_RESERVED_SPACE); 1008 - set_buffer_uptodate(bh); 1009 1002 unlock_buffer(bh); 1010 1003 status = ocfs2_journal_dirty(handle, bh); 1011 1004 if (status < 0) { ··· 1012 1007 goto out_trans; 1013 1008 } 1014 1009 1010 + /* Initialize new block with structures */ 1011 + down_read(&OCFS2_I(lqinode)->ip_alloc_sem); 1012 + status = ocfs2_extent_map_get_blocks(lqinode, oinfo->dqi_blocks + 1, 1013 + &p_blkno, NULL, NULL); 1014 + up_read(&OCFS2_I(lqinode)->ip_alloc_sem); 1015 + if (status < 0) { 1016 + mlog_errno(status); 1017 + goto out_trans; 1018 + } 1019 + dbh = sb_getblk(sb, p_blkno); 1020 + if (!dbh) { 1021 + status = -ENOMEM; 1022 + mlog_errno(status); 1023 + goto out_trans; 1024 + } 1025 + ocfs2_set_new_buffer_uptodate(lqinode, dbh); 1026 + status = ocfs2_journal_access_dq(handle, lqinode, dbh, 1027 + OCFS2_JOURNAL_ACCESS_CREATE); 1028 + if (status < 0) { 1029 + mlog_errno(status); 1030 + goto out_trans; 1031 + } 1032 + lock_buffer(dbh); 1033 + memset(dbh->b_data, 0, sb->s_blocksize - OCFS2_QBLK_RESERVED_SPACE); 1034 + 
unlock_buffer(dbh); 1035 + status = ocfs2_journal_dirty(handle, dbh); 1036 + if (status < 0) { 1037 + mlog_errno(status); 1038 + goto out_trans; 1039 + } 1040 + 1041 + /* Update local quotafile info */ 1015 1042 oinfo->dqi_blocks += 2; 1016 1043 oinfo->dqi_chunks++; 1017 1044 status = ocfs2_local_write_info(sb, type); ··· 1068 1031 ocfs2_commit_trans(OCFS2_SB(sb), handle); 1069 1032 out: 1070 1033 brelse(bh); 1034 + brelse(dbh); 1071 1035 kmem_cache_free(ocfs2_qf_chunk_cachep, chunk); 1072 1036 return ERR_PTR(status); 1073 1037 } ··· 1086 1048 struct ocfs2_local_disk_chunk *dchunk; 1087 1049 int epb = ol_quota_entries_per_block(sb); 1088 1050 unsigned int chunk_blocks; 1051 + struct buffer_head *bh; 1052 + u64 p_blkno; 1089 1053 int status; 1090 1054 handle_t *handle; 1091 1055 ··· 1115 1075 mlog_errno(status); 1116 1076 goto out; 1117 1077 } 1118 - handle = ocfs2_start_trans(OCFS2_SB(sb), 2); 1078 + 1079 + /* Get buffer from the just added block */ 1080 + down_read(&OCFS2_I(lqinode)->ip_alloc_sem); 1081 + status = ocfs2_extent_map_get_blocks(lqinode, oinfo->dqi_blocks, 1082 + &p_blkno, NULL, NULL); 1083 + up_read(&OCFS2_I(lqinode)->ip_alloc_sem); 1084 + if (status < 0) { 1085 + mlog_errno(status); 1086 + goto out; 1087 + } 1088 + bh = sb_getblk(sb, p_blkno); 1089 + if (!bh) { 1090 + status = -ENOMEM; 1091 + mlog_errno(status); 1092 + goto out; 1093 + } 1094 + ocfs2_set_new_buffer_uptodate(lqinode, bh); 1095 + 1096 + /* Local quota info, chunk header and the new block we initialize */ 1097 + handle = ocfs2_start_trans(OCFS2_SB(sb), 1098 + OCFS2_LOCAL_QINFO_WRITE_CREDITS + 1099 + 2 * OCFS2_QUOTA_BLOCK_UPDATE_CREDITS); 1119 1100 if (IS_ERR(handle)) { 1120 1101 status = PTR_ERR(handle); 1121 1102 mlog_errno(status); 1122 1103 goto out; 1123 1104 } 1105 + /* Zero created block */ 1106 + status = ocfs2_journal_access_dq(handle, lqinode, bh, 1107 + OCFS2_JOURNAL_ACCESS_CREATE); 1108 + if (status < 0) { 1109 + mlog_errno(status); 1110 + goto out_trans; 1111 + } 1112 + 
lock_buffer(bh); 1113 + memset(bh->b_data, 0, sb->s_blocksize); 1114 + unlock_buffer(bh); 1115 + status = ocfs2_journal_dirty(handle, bh); 1116 + if (status < 0) { 1117 + mlog_errno(status); 1118 + goto out_trans; 1119 + } 1120 + /* Update chunk header */ 1124 1121 status = ocfs2_journal_access_dq(handle, lqinode, chunk->qc_headerbh, 1125 1122 OCFS2_JOURNAL_ACCESS_WRITE); 1126 1123 if (status < 0) { ··· 1174 1097 mlog_errno(status); 1175 1098 goto out_trans; 1176 1099 } 1100 + /* Update file header */ 1177 1101 oinfo->dqi_blocks++; 1178 1102 status = ocfs2_local_write_info(sb, type); 1179 1103 if (status < 0) {
+2 -1
fs/ocfs2/stack_o2cb.c
··· 17 17 * General Public License for more details. 18 18 */ 19 19 20 + #include <linux/kernel.h> 20 21 #include <linux/crc32.h> 21 22 #include <linux/module.h> 22 23 ··· 154 153 155 154 static int dlm_status_to_errno(enum dlm_status status) 156 155 { 157 - BUG_ON(status > (sizeof(status_map) / sizeof(status_map[0]))); 156 + BUG_ON(status < 0 || status >= ARRAY_SIZE(status_map)); 158 157 159 158 return status_map[status]; 160 159 }
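An aside on the stack_o2cb.c hunk above: the old `BUG_ON(status > sizeof(status_map) / sizeof(status_map[0]))` accepted an index one past the end of the table (and any negative value). A minimal userspace sketch of the corrected bounds check — the table contents and names here are illustrative, not the real dlm values:

```c
#include <assert.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Illustrative status table standing in for the dlm status_map. */
static const int status_map[] = { 0, -5, -22, -11 };

/* Valid indexes run 0 .. ARRAY_SIZE(status_map) - 1.  The pre-fix check
 * used '>' against the table size, which wrongly accepted an index equal
 * to the size (one past the end) and did not reject negative values. */
static int status_in_range(int status)
{
    return status >= 0 && (size_t)status < ARRAY_SIZE(status_map);
}
```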
+30 -4
fs/ocfs2/super.c
··· 777 777 } 778 778 di = (struct ocfs2_dinode *) (*bh)->b_data; 779 779 memset(stats, 0, sizeof(struct ocfs2_blockcheck_stats)); 780 + spin_lock_init(&stats->b_lock); 780 781 status = ocfs2_verify_volume(di, *bh, blksize, stats); 781 782 if (status >= 0) 782 783 goto bail; ··· 1183 1182 wake_up(&osb->osb_mount_event); 1184 1183 1185 1184 /* Start this when the mount is almost sure of being successful */ 1186 - ocfs2_orphan_scan_init(osb); 1185 + ocfs2_orphan_scan_start(osb); 1187 1186 1188 1187 mlog_exit(status); 1189 1188 return status; ··· 1214 1213 mnt); 1215 1214 } 1216 1215 1216 + static void ocfs2_kill_sb(struct super_block *sb) 1217 + { 1218 + struct ocfs2_super *osb = OCFS2_SB(sb); 1219 + 1220 + /* Failed mount? */ 1221 + if (!osb || atomic_read(&osb->vol_state) == VOLUME_DISABLED) 1222 + goto out; 1223 + 1224 + /* Prevent further queueing of inode drop events */ 1225 + spin_lock(&dentry_list_lock); 1226 + ocfs2_set_osb_flag(osb, OCFS2_OSB_DROP_DENTRY_LOCK_IMMED); 1227 + spin_unlock(&dentry_list_lock); 1228 + /* Wait for work to finish and/or remove it */ 1229 + cancel_work_sync(&osb->dentry_lock_work); 1230 + out: 1231 + kill_block_super(sb); 1232 + } 1233 + 1217 1234 static struct file_system_type ocfs2_fs_type = { 1218 1235 .owner = THIS_MODULE, 1219 1236 .name = "ocfs2", 1220 1237 .get_sb = ocfs2_get_sb, /* is this called when we mount 1221 1238 * the fs? */ 1222 - .kill_sb = kill_block_super, /* set to the generic one 1223 - * right now, but do we 1224 - * need to change that? 
*/ 1239 + .kill_sb = ocfs2_kill_sb, 1240 + 1225 1241 .fs_flags = FS_REQUIRES_DEV|FS_RENAME_DOES_D_MOVE, 1226 1242 .next = NULL 1227 1243 }; ··· 1837 1819 1838 1820 debugfs_remove(osb->osb_ctxt); 1839 1821 1822 + /* 1823 + * Flush inode dropping work queue so that deletes are 1824 + * performed while the filesystem is still working 1825 + */ 1826 + ocfs2_drop_all_dl_inodes(osb); 1827 + 1840 1828 /* Orphan scan should be stopped as early as possible */ 1841 1829 ocfs2_orphan_scan_stop(osb); 1842 1830 ··· 2004 1980 2005 1981 snprintf(osb->dev_str, sizeof(osb->dev_str), "%u,%u", 2006 1982 MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev)); 1983 + 1984 + ocfs2_orphan_scan_init(osb); 2007 1985 2008 1986 status = ocfs2_recovery_init(osb); 2009 1987 if (status) {
+2 -1
fs/ocfs2/xattr.c
··· 1052 1052 struct ocfs2_xattr_block *xb; 1053 1053 struct ocfs2_xattr_value_root *xv; 1054 1054 size_t size; 1055 - int ret = -ENODATA, name_offset, name_len, block_off, i; 1055 + int ret = -ENODATA, name_offset, name_len, i; 1056 + int uninitialized_var(block_off); 1056 1057 1057 1058 xs->bucket = ocfs2_xattr_bucket_new(inode); 1058 1059 if (!xs->bucket) {
+3 -16
fs/proc/base.c
··· 1003 1003 1004 1004 if (!task) 1005 1005 return -ESRCH; 1006 - task_lock(task); 1007 - if (task->mm) 1008 - oom_adjust = task->mm->oom_adj; 1009 - else 1010 - oom_adjust = OOM_DISABLE; 1011 - task_unlock(task); 1006 + oom_adjust = task->oomkilladj; 1012 1007 put_task_struct(task); 1013 1008 1014 1009 len = snprintf(buffer, sizeof(buffer), "%i\n", oom_adjust); ··· 1032 1037 task = get_proc_task(file->f_path.dentry->d_inode); 1033 1038 if (!task) 1034 1039 return -ESRCH; 1035 - task_lock(task); 1036 - if (!task->mm) { 1037 - task_unlock(task); 1038 - put_task_struct(task); 1039 - return -EINVAL; 1040 - } 1041 - if (oom_adjust < task->mm->oom_adj && !capable(CAP_SYS_RESOURCE)) { 1042 - task_unlock(task); 1040 + if (oom_adjust < task->oomkilladj && !capable(CAP_SYS_RESOURCE)) { 1043 1041 put_task_struct(task); 1044 1042 return -EACCES; 1045 1043 } 1046 - task->mm->oom_adj = oom_adjust; 1047 - task_unlock(task); 1044 + task->oomkilladj = oom_adjust; 1048 1045 put_task_struct(task); 1049 1046 if (end - buffer == 0) 1050 1047 return -EIO;
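The fs/proc/base.c hunk keeps the rule that only a privileged writer may lower `oom_adjust` (making the task less killable). A hedged sketch of that permission pattern — the function name and the `-13` return (standing in for `-EACCES`) are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Lowering the value below its current setting requires a capability
 * (CAP_SYS_RESOURCE in the kernel); raising it is always allowed. */
static int set_oom_adjust(int *cur, int newval, bool has_cap_sys_resource)
{
    if (newval < *cur && !has_cap_sys_resource)
        return -13;   /* -EACCES stand-in */
    *cur = newval;
    return 0;
}
```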
+1
fs/select.c
··· 110 110 { 111 111 init_poll_funcptr(&pwq->pt, __pollwait); 112 112 pwq->polling_task = current; 113 + pwq->triggered = 0; 113 114 pwq->error = 0; 114 115 pwq->table = NULL; 115 116 pwq->inline_index = 0;
+1 -1
fs/xfs/linux-2.6/xfs_buf.c
··· 770 770 bp->b_pages = NULL; 771 771 bp->b_addr = mem; 772 772 773 - rval = _xfs_buf_get_pages(bp, page_count, 0); 773 + rval = _xfs_buf_get_pages(bp, page_count, XBF_DONT_BLOCK); 774 774 if (rval) 775 775 return rval; 776 776
+1 -1
fs/xfs/linux-2.6/xfs_ioctl32.c
··· 619 619 case XFS_IOC_GETVERSION_32: 620 620 cmd = _NATIVE_IOC(cmd, long); 621 621 return xfs_file_ioctl(filp, cmd, p); 622 - case XFS_IOC_SWAPEXT: { 622 + case XFS_IOC_SWAPEXT_32: { 623 623 struct xfs_swapext sxp; 624 624 struct compat_xfs_swapext __user *sxu = arg; 625 625
+11 -2
fs/xfs/linux-2.6/xfs_sync.c
··· 708 708 return 0; 709 709 } 710 710 711 + void 712 + __xfs_inode_set_reclaim_tag( 713 + struct xfs_perag *pag, 714 + struct xfs_inode *ip) 715 + { 716 + radix_tree_tag_set(&pag->pag_ici_root, 717 + XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino), 718 + XFS_ICI_RECLAIM_TAG); 719 + } 720 + 711 721 /* 712 722 * We set the inode flag atomically with the radix tree tag. 713 723 * Once we get tag lookups on the radix tree, this inode flag ··· 732 722 733 723 read_lock(&pag->pag_ici_lock); 734 724 spin_lock(&ip->i_flags_lock); 735 - radix_tree_tag_set(&pag->pag_ici_root, 736 - XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG); 725 + __xfs_inode_set_reclaim_tag(pag, ip); 737 726 __xfs_iflags_set(ip, XFS_IRECLAIMABLE); 738 727 spin_unlock(&ip->i_flags_lock); 739 728 read_unlock(&pag->pag_ici_lock);
+1
fs/xfs/linux-2.6/xfs_sync.h
··· 48 48 int xfs_reclaim_inodes(struct xfs_mount *mp, int mode); 49 49 50 50 void xfs_inode_set_reclaim_tag(struct xfs_inode *ip); 51 + void __xfs_inode_set_reclaim_tag(struct xfs_perag *pag, struct xfs_inode *ip); 51 52 void xfs_inode_clear_reclaim_tag(struct xfs_inode *ip); 52 53 void __xfs_inode_clear_reclaim_tag(struct xfs_mount *mp, struct xfs_perag *pag, 53 54 struct xfs_inode *ip);
+5 -3
fs/xfs/xfs_attr.c
··· 2010 2010 dblkno = XFS_FSB_TO_DADDR(mp, map[i].br_startblock); 2011 2011 blkcnt = XFS_FSB_TO_BB(mp, map[i].br_blockcount); 2012 2012 error = xfs_read_buf(mp, mp->m_ddev_targp, dblkno, 2013 - blkcnt, XFS_BUF_LOCK, &bp); 2013 + blkcnt, 2014 + XFS_BUF_LOCK | XBF_DONT_BLOCK, 2015 + &bp); 2014 2016 if (error) 2015 2017 return(error); 2016 2018 ··· 2143 2141 dblkno = XFS_FSB_TO_DADDR(mp, map.br_startblock), 2144 2142 blkcnt = XFS_FSB_TO_BB(mp, map.br_blockcount); 2145 2143 2146 - bp = xfs_buf_get_flags(mp->m_ddev_targp, dblkno, 2147 - blkcnt, XFS_BUF_LOCK); 2144 + bp = xfs_buf_get_flags(mp->m_ddev_targp, dblkno, blkcnt, 2145 + XFS_BUF_LOCK | XBF_DONT_BLOCK); 2148 2146 ASSERT(bp); 2149 2147 ASSERT(!XFS_BUF_GETERROR(bp)); 2150 2148
+1 -1
fs/xfs/xfs_bmap.c
··· 6009 6009 */ 6010 6010 error = ENOMEM; 6011 6011 subnex = 16; 6012 - map = kmem_alloc(subnex * sizeof(*map), KM_MAYFAIL); 6012 + map = kmem_alloc(subnex * sizeof(*map), KM_MAYFAIL | KM_NOFS); 6013 6013 if (!map) 6014 6014 goto out_unlock_ilock; 6015 6015
+2 -2
fs/xfs/xfs_btree.c
··· 120 120 XFS_RANDOM_BTREE_CHECK_SBLOCK))) { 121 121 if (bp) 122 122 xfs_buftrace("SBTREE ERROR", bp); 123 - XFS_ERROR_REPORT("xfs_btree_check_sblock", XFS_ERRLEVEL_LOW, 124 - cur->bc_mp); 123 + XFS_CORRUPTION_ERROR("xfs_btree_check_sblock", 124 + XFS_ERRLEVEL_LOW, cur->bc_mp, block); 125 125 return XFS_ERROR(EFSCORRUPTED); 126 126 } 127 127 return 0;
+3 -3
fs/xfs/xfs_da_btree.c
··· 2201 2201 xfs_da_state_t * 2202 2202 xfs_da_state_alloc(void) 2203 2203 { 2204 - return kmem_zone_zalloc(xfs_da_state_zone, KM_SLEEP); 2204 + return kmem_zone_zalloc(xfs_da_state_zone, KM_NOFS); 2205 2205 } 2206 2206 2207 2207 /* ··· 2261 2261 int off; 2262 2262 2263 2263 if (nbuf == 1) 2264 - dabuf = kmem_zone_alloc(xfs_dabuf_zone, KM_SLEEP); 2264 + dabuf = kmem_zone_alloc(xfs_dabuf_zone, KM_NOFS); 2265 2265 else 2266 - dabuf = kmem_alloc(XFS_DA_BUF_SIZE(nbuf), KM_SLEEP); 2266 + dabuf = kmem_alloc(XFS_DA_BUF_SIZE(nbuf), KM_NOFS); 2267 2267 dabuf->dirty = 0; 2268 2268 #ifdef XFS_DABUF_DEBUG 2269 2269 dabuf->ra = ra;
+1 -1
fs/xfs/xfs_dir2.c
··· 256 256 !(args->op_flags & XFS_DA_OP_CILOOKUP)) 257 257 return EEXIST; 258 258 259 - args->value = kmem_alloc(len, KM_MAYFAIL); 259 + args->value = kmem_alloc(len, KM_NOFS | KM_MAYFAIL); 260 260 if (!args->value) 261 261 return ENOMEM; 262 262
+14 -6
fs/xfs/xfs_fsops.c
··· 167 167 new = nb - mp->m_sb.sb_dblocks; 168 168 oagcount = mp->m_sb.sb_agcount; 169 169 if (nagcount > oagcount) { 170 + void *new_perag, *old_perag; 171 + 170 172 xfs_filestream_flush(mp); 173 + 174 + new_perag = kmem_zalloc(sizeof(xfs_perag_t) * nagcount, 175 + KM_MAYFAIL); 176 + if (!new_perag) 177 + return XFS_ERROR(ENOMEM); 178 + 171 179 down_write(&mp->m_peraglock); 172 - mp->m_perag = kmem_realloc(mp->m_perag, 173 - sizeof(xfs_perag_t) * nagcount, 174 - sizeof(xfs_perag_t) * oagcount, 175 - KM_SLEEP); 176 - memset(&mp->m_perag[oagcount], 0, 177 - (nagcount - oagcount) * sizeof(xfs_perag_t)); 180 + memcpy(new_perag, mp->m_perag, sizeof(xfs_perag_t) * oagcount); 181 + old_perag = mp->m_perag; 182 + mp->m_perag = new_perag; 183 + 178 184 mp->m_flags |= XFS_MOUNT_32BITINODES; 179 185 nagimax = xfs_initialize_perag(mp, nagcount); 180 186 up_write(&mp->m_peraglock); 187 + 188 + kmem_free(old_perag); 181 189 } 182 190 tp = xfs_trans_alloc(mp, XFS_TRANS_GROWFS); 183 191 tp->t_flags |= XFS_TRANS_RESERVE;
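The xfs_fsops.c hunk replaces a `kmem_realloc()` under `m_peraglock` with allocate-copy-swap: the new array is allocated (failably, `KM_MAYFAIL`) before taking the lock, the copy and pointer publish happen under the write lock, and the old array is freed afterwards. A userspace sketch of that pattern, with plain `calloc`/`memcpy` standing in for the kernel allocators and the lock elided to comments:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for the per-AG array being grown. */
struct perag { int ag_number; };

/* Grow the array: allocate before "locking", copy and swap, free the old
 * array after.  Returns the new array, or NULL with the old one intact. */
static struct perag *grow_perag(struct perag *old, size_t oldcount,
                                size_t newcount)
{
    struct perag *new_arr;

    if (newcount <= oldcount)
        return old;
    new_arr = calloc(newcount, sizeof(*new_arr));  /* may fail: bail early */
    if (!new_arr)
        return NULL;
    /* ...the real code takes the write lock here... */
    memcpy(new_arr, old, oldcount * sizeof(*old));
    /* ...publishes the new pointer and drops the lock... */
    free(old);
    return new_arr;
}
```

Allocating before the lock means a failed allocation can no longer sleep or abort while holding `m_peraglock`.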
+59 -56
fs/xfs/xfs_iget.c
··· 191 191 int flags, 192 192 int lock_flags) __releases(pag->pag_ici_lock) 193 193 { 194 + struct inode *inode = VFS_I(ip); 194 195 struct xfs_mount *mp = ip->i_mount; 195 - int error = EAGAIN; 196 + int error; 197 + 198 + spin_lock(&ip->i_flags_lock); 196 199 197 200 /* 198 - * If INEW is set this inode is being set up 199 - * If IRECLAIM is set this inode is being torn down 200 - * Pause and try again. 201 + * If we are racing with another cache hit that is currently 202 + * instantiating this inode or currently recycling it out of 203 + * reclaimabe state, wait for the initialisation to complete 204 + * before continuing. 205 + * 206 + * XXX(hch): eventually we should do something equivalent to 207 + * wait_on_inode to wait for these flags to be cleared 208 + * instead of polling for it. 201 209 */ 202 - if (xfs_iflags_test(ip, (XFS_INEW|XFS_IRECLAIM))) { 210 + if (ip->i_flags & (XFS_INEW|XFS_IRECLAIM)) { 203 211 XFS_STATS_INC(xs_ig_frecycle); 212 + error = EAGAIN; 204 213 goto out_error; 205 214 } 206 215 207 - /* If IRECLAIMABLE is set, we've torn down the vfs inode part */ 208 - if (xfs_iflags_test(ip, XFS_IRECLAIMABLE)) { 216 + /* 217 + * If lookup is racing with unlink return an error immediately. 218 + */ 219 + if (ip->i_d.di_mode == 0 && !(flags & XFS_IGET_CREATE)) { 220 + error = ENOENT; 221 + goto out_error; 222 + } 209 223 210 - /* 211 - * If lookup is racing with unlink, then we should return an 212 - * error immediately so we don't remove it from the reclaim 213 - * list and potentially leak the inode. 214 - */ 215 - if ((ip->i_d.di_mode == 0) && !(flags & XFS_IGET_CREATE)) { 216 - error = ENOENT; 217 - goto out_error; 218 - } 219 - 224 + /* 225 + * If IRECLAIMABLE is set, we've torn down the VFS inode already. 226 + * Need to carefully get it back into useable state. 
227 + */ 228 + if (ip->i_flags & XFS_IRECLAIMABLE) { 220 229 xfs_itrace_exit_tag(ip, "xfs_iget.alloc"); 221 230 222 231 /* 223 - * We need to re-initialise the VFS inode as it has been 224 - * 'freed' by the VFS. Do this here so we can deal with 225 - * errors cleanly, then tag it so it can be set up correctly 226 - * later. 232 + * We need to set XFS_INEW atomically with clearing the 233 + * reclaimable tag so that we do have an indicator of the 234 + * inode still being initialized. 227 235 */ 228 - if (inode_init_always(mp->m_super, VFS_I(ip))) { 229 - error = ENOMEM; 236 + ip->i_flags |= XFS_INEW; 237 + ip->i_flags &= ~XFS_IRECLAIMABLE; 238 + __xfs_inode_clear_reclaim_tag(mp, pag, ip); 239 + 240 + spin_unlock(&ip->i_flags_lock); 241 + read_unlock(&pag->pag_ici_lock); 242 + 243 + error = -inode_init_always(mp->m_super, inode); 244 + if (error) { 245 + /* 246 + * Re-initializing the inode failed, and we are in deep 247 + * trouble. Try to re-add it to the reclaim list. 248 + */ 249 + read_lock(&pag->pag_ici_lock); 250 + spin_lock(&ip->i_flags_lock); 251 + 252 + ip->i_flags &= ~XFS_INEW; 253 + ip->i_flags |= XFS_IRECLAIMABLE; 254 + __xfs_inode_set_reclaim_tag(pag, ip); 255 + goto out_error; 256 + } 257 + inode->i_state = I_LOCK|I_NEW; 258 + } else { 259 + /* If the VFS inode is being torn down, pause and try again. */ 260 + if (!igrab(inode)) { 261 + error = EAGAIN; 230 262 goto out_error; 231 263 } 232 264 233 - /* 234 - * We must set the XFS_INEW flag before clearing the 235 - * XFS_IRECLAIMABLE flag so that if a racing lookup does 236 - * not find the XFS_IRECLAIMABLE above but has the igrab() 237 - * below succeed we can safely check XFS_INEW to detect 238 - * that this inode is still being initialised. 239 - */ 240 - xfs_iflags_set(ip, XFS_INEW); 241 - xfs_iflags_clear(ip, XFS_IRECLAIMABLE); 242 - 243 - /* clear the radix tree reclaim flag as well. 
*/ 244 - __xfs_inode_clear_reclaim_tag(mp, pag, ip); 245 - } else if (!igrab(VFS_I(ip))) { 246 - /* If the VFS inode is being torn down, pause and try again. */ 247 - XFS_STATS_INC(xs_ig_frecycle); 248 - goto out_error; 249 - } else if (xfs_iflags_test(ip, XFS_INEW)) { 250 - /* 251 - * We are racing with another cache hit that is 252 - * currently recycling this inode out of the XFS_IRECLAIMABLE 253 - * state. Wait for the initialisation to complete before 254 - * continuing. 255 - */ 256 - wait_on_inode(VFS_I(ip)); 265 + /* We've got a live one. */ 266 + spin_unlock(&ip->i_flags_lock); 267 + read_unlock(&pag->pag_ici_lock); 257 268 } 258 - 259 - if (ip->i_d.di_mode == 0 && !(flags & XFS_IGET_CREATE)) { 260 - error = ENOENT; 261 - iput(VFS_I(ip)); 262 - goto out_error; 263 - } 264 - 265 - /* We've got a live one. */ 266 - read_unlock(&pag->pag_ici_lock); 267 269 268 270 if (lock_flags != 0) 269 271 xfs_ilock(ip, lock_flags); ··· 276 274 return 0; 277 275 278 276 out_error: 277 + spin_unlock(&ip->i_flags_lock); 279 278 read_unlock(&pag->pag_ici_lock); 280 279 return error; 281 280 }
+10
fs/xfs/xfs_inode.c
··· 343 343 return XFS_ERROR(EFSCORRUPTED); 344 344 } 345 345 346 + if (unlikely((ip->i_d.di_flags & XFS_DIFLAG_REALTIME) && 347 + !ip->i_mount->m_rtdev_targp)) { 348 + xfs_fs_repair_cmn_err(CE_WARN, ip->i_mount, 349 + "corrupt dinode %Lu, has realtime flag set.", 350 + ip->i_ino); 351 + XFS_CORRUPTION_ERROR("xfs_iformat(realtime)", 352 + XFS_ERRLEVEL_LOW, ip->i_mount, dip); 353 + return XFS_ERROR(EFSCORRUPTED); 354 + } 355 + 346 356 switch (ip->i_d.di_mode & S_IFMT) { 347 357 case S_IFIFO: 348 358 case S_IFCHR:
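The xfs_inode.c hunk adds a read-time verifier step: an on-disk inode carrying the realtime flag on a mount without a realtime device is rejected as corrupt up front, instead of tripping over the missing device later. A small sketch of that validation style — the flag value and struct names below are illustrative, not XFS's:

```c
#include <assert.h>
#include <stdbool.h>

#define DIFLAG_REALTIME 0x1   /* illustrative flag value */

struct mount  { bool has_rtdev; };
struct dinode { unsigned flags; };

/* Reject an inode whose flags demand a device this mount lacks;
 * -1 stands in for EFSCORRUPTED. */
static int verify_dinode(const struct mount *mp, const struct dinode *dip)
{
    if ((dip->flags & DIFLAG_REALTIME) && !mp->has_rtdev)
        return -1;
    return 0;
}
```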
+1 -1
fs/xfs/xfs_log.c
··· 3180 3180 STATIC void 3181 3181 xlog_state_want_sync(xlog_t *log, xlog_in_core_t *iclog) 3182 3182 { 3183 - ASSERT(spin_is_locked(&log->l_icloglock)); 3183 + assert_spin_locked(&log->l_icloglock); 3184 3184 3185 3185 if (iclog->ic_state == XLOG_STATE_ACTIVE) { 3186 3186 xlog_state_switch_iclogs(log, iclog, 0);
+3 -1
fs/xfs/xfs_vnodeops.c
··· 538 538 d = XFS_FSB_TO_DADDR(mp, mval[n].br_startblock); 539 539 byte_cnt = XFS_FSB_TO_B(mp, mval[n].br_blockcount); 540 540 541 - bp = xfs_buf_read(mp->m_ddev_targp, d, BTOBB(byte_cnt), 0); 541 + bp = xfs_buf_read_flags(mp->m_ddev_targp, d, BTOBB(byte_cnt), 542 + XBF_LOCK | XBF_MAPPED | 543 + XBF_DONT_BLOCK); 542 544 error = XFS_BUF_GETERROR(bp); 543 545 if (error) { 544 546 xfs_ioerror_alert("xfs_readlink",
+3 -2
include/acpi/processor.h
··· 174 174 cpumask_var_t shared_cpu_map; 175 175 int (*acpi_processor_get_throttling) (struct acpi_processor * pr); 176 176 int (*acpi_processor_set_throttling) (struct acpi_processor * pr, 177 - int state); 177 + int state, bool force); 178 178 179 179 u32 address; 180 180 u8 duty_offset; ··· 321 321 /* in processor_throttling.c */ 322 322 int acpi_processor_tstate_has_changed(struct acpi_processor *pr); 323 323 int acpi_processor_get_throttling_info(struct acpi_processor *pr); 324 - extern int acpi_processor_set_throttling(struct acpi_processor *pr, int state); 324 + extern int acpi_processor_set_throttling(struct acpi_processor *pr, 325 + int state, bool force); 325 326 extern const struct file_operations acpi_processor_throttling_fops; 326 327 extern void acpi_processor_throttling_init(void); 327 328 /* in processor_idle.c */
+1
include/crypto/algapi.h
··· 137 137 void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen); 138 138 int crypto_enqueue_request(struct crypto_queue *queue, 139 139 struct crypto_async_request *request); 140 + void *__crypto_dequeue_request(struct crypto_queue *queue, unsigned int offset); 140 141 struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue); 141 142 int crypto_tfm_in_queue(struct crypto_queue *queue, struct crypto_tfm *tfm); 142 143
+2 -2
include/crypto/internal/skcipher.h
··· 79 79 static inline struct skcipher_givcrypt_request *skcipher_dequeue_givcrypt( 80 80 struct crypto_queue *queue) 81 81 { 82 - return container_of(ablkcipher_dequeue_request(queue), 83 - struct skcipher_givcrypt_request, creq); 82 + return __crypto_dequeue_request( 83 + queue, offsetof(struct skcipher_givcrypt_request, creq.base)); 84 84 } 85 85 86 86 static inline void *skcipher_givcrypt_reqctx(
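The skcipher.h hunk works around a `container_of()` pitfall: applied to a NULL dequeue result, `container_of()` yields a non-NULL pointer, so the caller's NULL test silently breaks. Passing the member offset down and applying it only to non-NULL results — as `__crypto_dequeue_request()` does — keeps the test meaningful. A sketch with illustrative struct names:

```c
#include <assert.h>
#include <stddef.h>

struct base    { int id; };
struct wrapper { long extra; struct base b; };

/* Subtract the member offset only when the pointer is non-NULL, so NULL
 * propagates to the caller instead of becoming a bogus non-NULL address. */
static void *deref_offset(struct base *ptr, size_t off)
{
    return ptr ? (void *)((char *)ptr - off) : NULL;
}
```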
+5 -1
include/drm/radeon_drm.h
··· 508 508 #define DRM_RADEON_INFO 0x27 509 509 #define DRM_RADEON_GEM_SET_TILING 0x28 510 510 #define DRM_RADEON_GEM_GET_TILING 0x29 511 + #define DRM_RADEON_GEM_BUSY 0x2a 511 512 512 513 #define DRM_IOCTL_RADEON_CP_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_RADEON_CP_INIT, drm_radeon_init_t) 513 514 #define DRM_IOCTL_RADEON_CP_START DRM_IO( DRM_COMMAND_BASE + DRM_RADEON_CP_START) ··· 549 548 #define DRM_IOCTL_RADEON_INFO DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_INFO, struct drm_radeon_info) 550 549 #define DRM_IOCTL_RADEON_SET_TILING DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_SET_TILING, struct drm_radeon_gem_set_tiling) 551 550 #define DRM_IOCTL_RADEON_GET_TILING DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_GET_TILING, struct drm_radeon_gem_get_tiling) 551 + #define DRM_IOCTL_RADEON_GEM_BUSY DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy) 552 552 553 553 typedef struct drm_radeon_init { 554 554 enum { ··· 709 707 #define RADEON_PARAM_FB_LOCATION 14 /* FB location */ 710 708 #define RADEON_PARAM_NUM_GB_PIPES 15 /* num GB pipes */ 711 709 #define RADEON_PARAM_DEVICE_ID 16 710 + #define RADEON_PARAM_NUM_Z_PIPES 17 /* num Z pipes */ 712 711 713 712 typedef struct drm_radeon_getparam { 714 713 int param; ··· 841 838 842 839 struct drm_radeon_gem_busy { 843 840 uint32_t handle; 844 - uint32_t busy; 841 + uint32_t domain; 845 842 }; 846 843 847 844 struct drm_radeon_gem_pread { ··· 898 895 899 896 #define RADEON_INFO_DEVICE_ID 0x00 900 897 #define RADEON_INFO_NUM_GB_PIPES 0x01 898 + #define RADEON_INFO_NUM_Z_PIPES 0x02 901 899 902 900 struct drm_radeon_info { 903 901 uint32_t request;
+1
include/linux/binfmts.h
··· 117 117 int executable_stack); 118 118 extern int bprm_mm_init(struct linux_binprm *bprm); 119 119 extern int copy_strings_kernel(int argc,char ** argv,struct linux_binprm *bprm); 120 + extern int prepare_bprm_creds(struct linux_binprm *bprm); 120 121 extern void install_exec_creds(struct linux_binprm *bprm); 121 122 extern void do_coredump(long signr, int exit_code, struct pt_regs *regs); 122 123 extern int set_binfmt(struct linux_binfmt *new);
+8 -10
include/linux/bitmap.h
··· 94 94 const unsigned long *src, int shift, int bits); 95 95 extern void __bitmap_shift_left(unsigned long *dst, 96 96 const unsigned long *src, int shift, int bits); 97 - extern void __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, 97 + extern int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, 98 98 const unsigned long *bitmap2, int bits); 99 99 extern void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, 100 100 const unsigned long *bitmap2, int bits); 101 101 extern void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1, 102 102 const unsigned long *bitmap2, int bits); 103 - extern void __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, 103 + extern int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, 104 104 const unsigned long *bitmap2, int bits); 105 105 extern int __bitmap_intersects(const unsigned long *bitmap1, 106 106 const unsigned long *bitmap2, int bits); ··· 171 171 } 172 172 } 173 173 174 - static inline void bitmap_and(unsigned long *dst, const unsigned long *src1, 174 + static inline int bitmap_and(unsigned long *dst, const unsigned long *src1, 175 175 const unsigned long *src2, int nbits) 176 176 { 177 177 if (small_const_nbits(nbits)) 178 - *dst = *src1 & *src2; 179 - else 180 - __bitmap_and(dst, src1, src2, nbits); 178 + return (*dst = *src1 & *src2) != 0; 179 + return __bitmap_and(dst, src1, src2, nbits); 181 180 } 182 181 183 182 static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, ··· 197 198 __bitmap_xor(dst, src1, src2, nbits); 198 199 } 199 200 200 - static inline void bitmap_andnot(unsigned long *dst, const unsigned long *src1, 201 + static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, 201 202 const unsigned long *src2, int nbits) 202 203 { 203 204 if (small_const_nbits(nbits)) 204 - *dst = *src1 & ~(*src2); 205 - else 206 - __bitmap_andnot(dst, src1, src2, nbits); 205 + return (*dst = *src1 & 
~(*src2)) != 0; 206 + return __bitmap_andnot(dst, src1, src2, nbits); 207 207 } 208 208 209 209 static inline void bitmap_complement(unsigned long *dst, const unsigned long *src,
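The bitmap.h change makes `bitmap_and()`/`bitmap_andnot()` return whether the destination ended up non-empty, so callers (like the cpumask wrappers in the next file) can detect an empty intersection without rescanning the result. A single-word sketch of the `small_const_nbits()` fast path, under an illustrative name:

```c
#include <assert.h>

/* Store the intersection and report whether any bit survived. */
static int bitmap_and_word(unsigned long *dst,
                           unsigned long src1, unsigned long src2)
{
    return (*dst = src1 & src2) != 0;
}
```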
+10 -10
include/linux/cpumask.h
··· 43 43 * int cpu_isset(cpu, mask) true iff bit 'cpu' set in mask 44 44 * int cpu_test_and_set(cpu, mask) test and set bit 'cpu' in mask 45 45 * 46 - * void cpus_and(dst, src1, src2) dst = src1 & src2 [intersection] 46 + * int cpus_and(dst, src1, src2) dst = src1 & src2 [intersection] 47 47 * void cpus_or(dst, src1, src2) dst = src1 | src2 [union] 48 48 * void cpus_xor(dst, src1, src2) dst = src1 ^ src2 49 - * void cpus_andnot(dst, src1, src2) dst = src1 & ~src2 49 + * int cpus_andnot(dst, src1, src2) dst = src1 & ~src2 50 50 * void cpus_complement(dst, src) dst = ~src 51 51 * 52 52 * int cpus_equal(mask1, mask2) Does mask1 == mask2? ··· 179 179 } 180 180 181 181 #define cpus_and(dst, src1, src2) __cpus_and(&(dst), &(src1), &(src2), NR_CPUS) 182 - static inline void __cpus_and(cpumask_t *dstp, const cpumask_t *src1p, 182 + static inline int __cpus_and(cpumask_t *dstp, const cpumask_t *src1p, 183 183 const cpumask_t *src2p, int nbits) 184 184 { 185 - bitmap_and(dstp->bits, src1p->bits, src2p->bits, nbits); 185 + return bitmap_and(dstp->bits, src1p->bits, src2p->bits, nbits); 186 186 } 187 187 188 188 #define cpus_or(dst, src1, src2) __cpus_or(&(dst), &(src1), &(src2), NR_CPUS) ··· 201 201 202 202 #define cpus_andnot(dst, src1, src2) \ 203 203 __cpus_andnot(&(dst), &(src1), &(src2), NR_CPUS) 204 - static inline void __cpus_andnot(cpumask_t *dstp, const cpumask_t *src1p, 204 + static inline int __cpus_andnot(cpumask_t *dstp, const cpumask_t *src1p, 205 205 const cpumask_t *src2p, int nbits) 206 206 { 207 - bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits); 207 + return bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits); 208 208 } 209 209 210 210 #define cpus_complement(dst, src) __cpus_complement(&(dst), &(src), NR_CPUS) ··· 738 738 * @src1p: the first input 739 739 * @src2p: the second input 740 740 */ 741 - static inline void cpumask_and(struct cpumask *dstp, 741 + static inline int cpumask_and(struct cpumask *dstp, 742 742 const struct cpumask 
*src1p, 743 743 const struct cpumask *src2p) 744 744 { 745 - bitmap_and(cpumask_bits(dstp), cpumask_bits(src1p), 745 + return bitmap_and(cpumask_bits(dstp), cpumask_bits(src1p), 746 746 cpumask_bits(src2p), nr_cpumask_bits); 747 747 } 748 748 ··· 779 779 * @src1p: the first input 780 780 * @src2p: the second input 781 781 */ 782 - static inline void cpumask_andnot(struct cpumask *dstp, 782 + static inline int cpumask_andnot(struct cpumask *dstp, 783 783 const struct cpumask *src1p, 784 784 const struct cpumask *src2p) 785 785 { 786 - bitmap_andnot(cpumask_bits(dstp), cpumask_bits(src1p), 786 + return bitmap_andnot(cpumask_bits(dstp), cpumask_bits(src1p), 787 787 cpumask_bits(src2p), nr_cpumask_bits); 788 788 } 789 789
+4
include/linux/device-mapper.h
··· 91 91 iterate_devices_callout_fn fn, 92 92 void *data); 93 93 94 + typedef void (*dm_io_hints_fn) (struct dm_target *ti, 95 + struct queue_limits *limits); 96 + 94 97 /* 95 98 * Returns: 96 99 * 0: The target can handle the next I/O immediately. ··· 154 151 dm_merge_fn merge; 155 152 dm_busy_fn busy; 156 153 dm_iterate_devices_fn iterate_devices; 154 + dm_io_hints_fn io_hints; 157 155 158 156 /* For internal device-mapper use. */ 159 157 struct list_head list;
+12 -1
include/linux/dm-log-userspace.h
··· 371 371 (DM_ULOG_REQUEST_MASK & (request_type)) 372 372 373 373 struct dm_ulog_request { 374 - char uuid[DM_UUID_LEN]; /* Ties a request to a specific mirror log */ 374 + /* 375 + * The local unique identifier (luid) and the universally unique 376 + * identifier (uuid) are used to tie a request to a specific 377 + * mirror log. A single machine log could probably make due with 378 + * just the 'luid', but a cluster-aware log must use the 'uuid' and 379 + * the 'luid'. The uuid is what is required for node to node 380 + * communication concerning a particular log, but the 'luid' helps 381 + * differentiate between logs that are being swapped and have the 382 + * same 'uuid'. (Think "live" and "inactive" device-mapper tables.) 383 + */ 384 + uint64_t luid; 385 + char uuid[DM_UUID_LEN]; 375 386 char padding[7]; /* Padding because DM_UUID_LEN = 129 */ 376 387 377 388 int32_t error; /* Used to report back processing errors */
+7 -5
include/linux/flex_array.h
··· 21 21 struct { 22 22 int element_size; 23 23 int total_nr_elements; 24 - struct flex_array_part *parts[0]; 24 + struct flex_array_part *parts[]; 25 25 }; 26 26 /* 27 27 * This little trick makes sure that ··· 36 36 .total_nr_elements = (total), \ 37 37 } } } 38 38 39 - struct flex_array *flex_array_alloc(int element_size, int total, gfp_t flags); 40 - int flex_array_prealloc(struct flex_array *fa, int start, int end, gfp_t flags); 39 + struct flex_array *flex_array_alloc(int element_size, unsigned int total, 40 + gfp_t flags); 41 + int flex_array_prealloc(struct flex_array *fa, unsigned int start, 42 + unsigned int end, gfp_t flags); 41 43 void flex_array_free(struct flex_array *fa); 42 44 void flex_array_free_parts(struct flex_array *fa); 43 - int flex_array_put(struct flex_array *fa, int element_nr, void *src, 45 + int flex_array_put(struct flex_array *fa, unsigned int element_nr, void *src, 44 46 gfp_t flags); 45 - void *flex_array_get(struct flex_array *fa, int element_nr); 47 + void *flex_array_get(struct flex_array *fa, unsigned int element_nr); 46 48 47 49 #endif /* _FLEX_ARRAY_H */
+1 -1
include/linux/fs.h
··· 2123 2123 int open_flag, int mode, int acc_mode); 2124 2124 extern int may_open(struct path *, int, int); 2125 2125 2126 - extern int kernel_read(struct file *, unsigned long, char *, unsigned long); 2126 + extern int kernel_read(struct file *, loff_t, char *, unsigned long); 2127 2127 extern struct file * open_exec(const char *); 2128 2128 2129 2129 /* fs/dcache.c -- generic fs support functions */
+10 -6
include/linux/ftrace_event.h
··· 93 93 unsigned long flags, 94 94 int pc); 95 95 struct ring_buffer_event * 96 - trace_current_buffer_lock_reserve(int type, unsigned long len, 96 + trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer, 97 + int type, unsigned long len, 97 98 unsigned long flags, int pc); 98 - void trace_current_buffer_unlock_commit(struct ring_buffer_event *event, 99 + void trace_current_buffer_unlock_commit(struct ring_buffer *buffer, 100 + struct ring_buffer_event *event, 99 101 unsigned long flags, int pc); 100 - void trace_nowake_buffer_unlock_commit(struct ring_buffer_event *event, 102 + void trace_nowake_buffer_unlock_commit(struct ring_buffer *buffer, 103 + struct ring_buffer_event *event, 101 104 unsigned long flags, int pc); 102 - void trace_current_buffer_discard_commit(struct ring_buffer_event *event); 105 + void trace_current_buffer_discard_commit(struct ring_buffer *buffer, 106 + struct ring_buffer_event *event); 103 107 104 108 void tracing_record_cmdline(struct task_struct *tsk); 105 109 ··· 137 133 #define MAX_FILTER_PRED 32 138 134 #define MAX_FILTER_STR_VAL 128 139 135 140 - extern int init_preds(struct ftrace_event_call *call); 141 136 extern void destroy_preds(struct ftrace_event_call *call); 142 137 extern int filter_match_preds(struct ftrace_event_call *call, void *rec); 143 - extern int filter_current_check_discard(struct ftrace_event_call *call, 138 + extern int filter_current_check_discard(struct ring_buffer *buffer, 139 + struct ftrace_event_call *call, 144 140 void *rec, 145 141 struct ring_buffer_event *event); 146 142
+5
include/linux/gen_stats.h
··· 22 22 { 23 23 __u64 bytes; 24 24 __u32 packets; 25 + }; 26 + struct gnet_stats_basic_packed 27 + { 28 + __u64 bytes; 29 + __u32 packets; 25 30 } __attribute__ ((packed)); 26 31 27 32 /**
+4 -2
include/linux/hugetlb.h
··· 10 10 #include <asm/tlbflush.h> 11 11 12 12 struct ctl_table; 13 + struct user_struct; 13 14 14 15 int PageHuge(struct page *page); 15 16 ··· 147 146 148 147 extern const struct file_operations hugetlbfs_file_operations; 149 148 extern struct vm_operations_struct hugetlb_vm_ops; 150 - struct file *hugetlb_file_setup(const char *name, size_t, int); 149 + struct file *hugetlb_file_setup(const char *name, size_t size, int acct, 150 + struct user_struct **user); 151 151 int hugetlb_get_quota(struct address_space *mapping, long delta); 152 152 void hugetlb_put_quota(struct address_space *mapping, long delta); 153 153 ··· 170 168 171 169 #define is_file_hugepages(file) 0 172 170 #define set_file_hugepages(file) BUG() 173 - #define hugetlb_file_setup(name,size,acctflag) ERR_PTR(-ENOSYS) 171 + #define hugetlb_file_setup(name,size,acct,user) ERR_PTR(-ENOSYS) 174 172 175 173 #endif /* !CONFIG_HUGETLBFS */ 176 174
+1 -1
include/linux/lmb.h
··· 51 51 extern u64 __init __lmb_alloc_base(u64 size, 52 52 u64 align, u64 max_addr); 53 53 extern u64 __init lmb_phys_mem_size(void); 54 - extern u64 __init lmb_end_of_DRAM(void); 54 + extern u64 lmb_end_of_DRAM(void); 55 55 extern void __init lmb_enforce_memory_limit(u64 memory_limit); 56 56 extern int __init lmb_is_reserved(u64 addr); 57 57 extern int lmb_find(struct lmb_property *res);
-15
include/linux/mm.h
··· 34 34 #define sysctl_legacy_va_layout 0 35 35 #endif 36 36 37 - extern unsigned long mmap_min_addr; 38 - 39 37 #include <asm/page.h> 40 38 #include <asm/pgtable.h> 41 39 #include <asm/processor.h> ··· 570 572 set_page_zone(page, zone); 571 573 set_page_node(page, node); 572 574 set_page_section(page, pfn_to_section_nr(pfn)); 573 - } 574 - 575 - /* 576 - * If a hint addr is less than mmap_min_addr change hint to be as 577 - * low as possible but still greater than mmap_min_addr 578 - */ 579 - static inline unsigned long round_hint_to_min(unsigned long hint) 580 - { 581 - hint &= PAGE_MASK; 582 - if (((void *)hint != NULL) && 583 - (hint < mmap_min_addr)) 584 - return PAGE_ALIGN(mmap_min_addr); 585 - return hint; 586 575 } 587 576 588 577 /*
-2
include/linux/mm_types.h
··· 240 240 241 241 unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */ 242 242 243 - s8 oom_adj; /* OOM kill score adjustment (bit shift) */ 244 - 245 243 cpumask_t cpu_vm_mask; 246 244 247 245 /* Architecture-specific MM context */
+2 -3
include/linux/nfs_fs.h
··· 473 473 extern int nfs_flush_incompatible(struct file *file, struct page *page); 474 474 extern int nfs_updatepage(struct file *, struct page *, unsigned int, unsigned int); 475 475 extern int nfs_writeback_done(struct rpc_task *, struct nfs_write_data *); 476 - extern void nfs_writedata_release(void *); 477 476 478 477 /* 479 478 * Try to write back everything synchronously (but check the ··· 487 488 extern int nfs_commit_inode(struct inode *, int); 488 489 extern struct nfs_write_data *nfs_commitdata_alloc(void); 489 490 extern void nfs_commit_free(struct nfs_write_data *wdata); 490 - extern void nfs_commitdata_release(void *wdata); 491 491 #else 492 492 static inline int 493 493 nfs_commit_inode(struct inode *inode, int how) ··· 505 507 * Allocate nfs_write_data structures 506 508 */ 507 509 extern struct nfs_write_data *nfs_writedata_alloc(unsigned int npages); 510 + extern void nfs_writedata_free(struct nfs_write_data *); 508 511 509 512 /* 510 513 * linux/fs/nfs/read.c ··· 514 515 extern int nfs_readpages(struct file *, struct address_space *, 515 516 struct list_head *, unsigned); 516 517 extern int nfs_readpage_result(struct rpc_task *, struct nfs_read_data *); 517 - extern void nfs_readdata_release(void *data); 518 518 extern int nfs_readpage_async(struct nfs_open_context *, struct inode *, 519 519 struct page *); 520 520 ··· 521 523 * Allocate nfs_read_data structures 522 524 */ 523 525 extern struct nfs_read_data *nfs_readdata_alloc(unsigned int npages); 526 + extern void nfs_readdata_free(struct nfs_read_data *); 524 527 525 528 /* 526 529 * linux/fs/nfs3proc.c
+38 -11
include/linux/perf_counter.h
··· 115 115 PERF_SAMPLE_TID = 1U << 1, 116 116 PERF_SAMPLE_TIME = 1U << 2, 117 117 PERF_SAMPLE_ADDR = 1U << 3, 118 - PERF_SAMPLE_GROUP = 1U << 4, 118 + PERF_SAMPLE_READ = 1U << 4, 119 119 PERF_SAMPLE_CALLCHAIN = 1U << 5, 120 120 PERF_SAMPLE_ID = 1U << 6, 121 121 PERF_SAMPLE_CPU = 1U << 7, ··· 127 127 }; 128 128 129 129 /* 130 - * Bits that can be set in attr.read_format to request that 131 - * reads on the counter should return the indicated quantities, 132 - * in increasing order of bit value, after the counter value. 130 + * The format of the data returned by read() on a perf counter fd, 131 + * as specified by attr.read_format: 132 + * 133 + * struct read_format { 134 + * { u64 value; 135 + * { u64 time_enabled; } && PERF_FORMAT_ENABLED 136 + * { u64 time_running; } && PERF_FORMAT_RUNNING 137 + * { u64 id; } && PERF_FORMAT_ID 138 + * } && !PERF_FORMAT_GROUP 139 + * 140 + * { u64 nr; 141 + * { u64 time_enabled; } && PERF_FORMAT_ENABLED 142 + * { u64 time_running; } && PERF_FORMAT_RUNNING 143 + * { u64 value; 144 + * { u64 id; } && PERF_FORMAT_ID 145 + * } cntr[nr]; 146 + * } && PERF_FORMAT_GROUP 147 + * }; 133 148 */ 134 149 enum perf_counter_read_format { 135 150 PERF_FORMAT_TOTAL_TIME_ENABLED = 1U << 0, 136 151 PERF_FORMAT_TOTAL_TIME_RUNNING = 1U << 1, 137 152 PERF_FORMAT_ID = 1U << 2, 153 + PERF_FORMAT_GROUP = 1U << 3, 138 154 139 - PERF_FORMAT_MAX = 1U << 3, /* non-ABI */ 155 + PERF_FORMAT_MAX = 1U << 4, /* non-ABI */ 140 156 }; 141 157 142 158 #define PERF_ATTR_SIZE_VER0 64 /* sizeof first published struct */ ··· 359 343 * struct { 360 344 * struct perf_event_header header; 361 345 * u32 pid, tid; 362 - * u64 value; 363 - * { u64 time_enabled; } && PERF_FORMAT_ENABLED 364 - * { u64 time_running; } && PERF_FORMAT_RUNNING 365 - * { u64 parent_id; } && PERF_FORMAT_ID 346 + * 347 + * struct read_format values; 366 348 * }; 367 349 */ 368 350 PERF_EVENT_READ = 8, ··· 378 364 * { u32 cpu, res; } && PERF_SAMPLE_CPU 379 365 * { u64 period; } && PERF_SAMPLE_PERIOD 
380 366 * 381 - * { u64 nr; 382 - * { u64 id, val; } cnt[nr]; } && PERF_SAMPLE_GROUP 367 + * { struct read_format values; } && PERF_SAMPLE_READ 383 368 * 384 369 * { u64 nr, 385 370 * u64 ips[nr]; } && PERF_SAMPLE_CALLCHAIN 371 + * 372 + * # 373 + * # The RAW record below is opaque data wrt the ABI 374 + * # 375 + * # That is, the ABI doesn't make any promises wrt to 376 + * # the stability of its content, it may vary depending 377 + * # on event, hardware, kernel version and phase of 378 + * # the moon. 379 + * # 380 + * # In other words, PERF_SAMPLE_RAW contents are not an ABI. 381 + * # 382 + * 386 383 * { u32 size; 387 384 * char data[size];}&& PERF_SAMPLE_RAW 388 385 * }; ··· 719 694 720 695 extern int perf_counter_overflow(struct perf_counter *counter, int nmi, 721 696 struct perf_sample_data *data); 697 + extern void perf_counter_output(struct perf_counter *counter, int nmi, 698 + struct perf_sample_data *data); 722 699 723 700 /* 724 701 * Return 1 for a software counter, 0 for a hardware counter
+9 -14
include/linux/ring_buffer.h
··· 75 75 } 76 76 77 77 /* 78 - * ring_buffer_event_discard can discard any event in the ring buffer. 79 - * it is up to the caller to protect against a reader from 80 - * consuming it or a writer from wrapping and replacing it. 81 - * 82 - * No external protection is needed if this is called before 83 - * the event is commited. But in that case it would be better to 84 - * use ring_buffer_discard_commit. 85 - * 86 - * Note, if an event that has not been committed is discarded 87 - * with ring_buffer_event_discard, it must still be committed. 88 - */ 89 - void ring_buffer_event_discard(struct ring_buffer_event *event); 90 - 91 - /* 92 78 * ring_buffer_discard_commit will remove an event that has not 93 79 * ben committed yet. If this is used, then ring_buffer_unlock_commit 94 80 * must not be called on the discarded event. This function ··· 140 154 void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu); 141 155 void ring_buffer_reset(struct ring_buffer *buffer); 142 156 157 + #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP 143 158 int ring_buffer_swap_cpu(struct ring_buffer *buffer_a, 144 159 struct ring_buffer *buffer_b, int cpu); 160 + #else 161 + static inline int 162 + ring_buffer_swap_cpu(struct ring_buffer *buffer_a, 163 + struct ring_buffer *buffer_b, int cpu) 164 + { 165 + return -ENODEV; 166 + } 167 + #endif 145 168 146 169 int ring_buffer_empty(struct ring_buffer *buffer); 147 170 int ring_buffer_empty_cpu(struct ring_buffer *buffer, int cpu);
+1
include/linux/sched.h
··· 1198 1198 * a short time 1199 1199 */ 1200 1200 unsigned char fpu_counter; 1201 + s8 oomkilladj; /* OOM kill score adjustment (bit shift). */ 1201 1202 #ifdef CONFIG_BLK_DEV_IO_TRACE 1202 1203 unsigned int btrace_seq; 1203 1204 #endif
+21 -3
include/linux/security.h
··· 28 28 #include <linux/resource.h> 29 29 #include <linux/sem.h> 30 30 #include <linux/shm.h> 31 + #include <linux/mm.h> /* PAGE_ALIGN */ 31 32 #include <linux/msg.h> 32 33 #include <linux/sched.h> 33 34 #include <linux/key.h> ··· 67 66 extern int cap_inode_removexattr(struct dentry *dentry, const char *name); 68 67 extern int cap_inode_need_killpriv(struct dentry *dentry); 69 68 extern int cap_inode_killpriv(struct dentry *dentry); 69 + extern int cap_file_mmap(struct file *file, unsigned long reqprot, 70 + unsigned long prot, unsigned long flags, 71 + unsigned long addr, unsigned long addr_only); 70 72 extern int cap_task_fix_setuid(struct cred *new, const struct cred *old, int flags); 71 73 extern int cap_task_prctl(int option, unsigned long arg2, unsigned long arg3, 72 74 unsigned long arg4, unsigned long arg5); ··· 96 92 extern int cap_netlink_recv(struct sk_buff *skb, int cap); 97 93 98 94 extern unsigned long mmap_min_addr; 95 + extern unsigned long dac_mmap_min_addr; 99 96 /* 100 97 * Values used in the task_security_ops calls 101 98 */ ··· 120 115 #define LSM_UNSAFE_SHARE 1 121 116 #define LSM_UNSAFE_PTRACE 2 122 117 #define LSM_UNSAFE_PTRACE_CAP 4 118 + 119 + /* 120 + * If a hint addr is less than mmap_min_addr change hint to be as 121 + * low as possible but still greater than mmap_min_addr 122 + */ 123 + static inline unsigned long round_hint_to_min(unsigned long hint) 124 + { 125 + hint &= PAGE_MASK; 126 + if (((void *)hint != NULL) && 127 + (hint < mmap_min_addr)) 128 + return PAGE_ALIGN(mmap_min_addr); 129 + return hint; 130 + } 131 + extern int mmap_min_addr_handler(struct ctl_table *table, int write, struct file *filp, 132 + void __user *buffer, size_t *lenp, loff_t *ppos); 123 133 124 134 #ifdef CONFIG_SECURITY 125 135 ··· 2217 2197 unsigned long addr, 2218 2198 unsigned long addr_only) 2219 2199 { 2220 - if ((addr < mmap_min_addr) && !capable(CAP_SYS_RAWIO)) 2221 - return -EACCES; 2222 - return 0; 2200 + return cap_file_mmap(file, reqprot, 
prot, flags, addr, addr_only); 2223 2201 } 2224 2202 2225 2203 static inline int security_file_mprotect(struct vm_area_struct *vma,
-2
include/linux/syscalls.h
··· 177 177 event_enter_##sname.id = id; \ 178 178 set_syscall_enter_id(num, id); \ 179 179 INIT_LIST_HEAD(&event_enter_##sname.fields); \ 180 - init_preds(&event_enter_##sname); \ 181 180 return 0; \ 182 181 } \ 183 182 TRACE_SYS_ENTER_PROFILE(sname); \ ··· 213 214 event_exit_##sname.id = id; \ 214 215 set_syscall_exit_id(num, id); \ 215 216 INIT_LIST_HEAD(&event_exit_##sname.fields); \ 216 - init_preds(&event_exit_##sname); \ 217 217 return 0; \ 218 218 } \ 219 219 TRACE_SYS_EXIT_PROFILE(sname); \
+4
include/linux/ucb1400.h
··· 73 73 74 74 #define UCB_ADC_DATA 0x68 75 75 #define UCB_ADC_DAT_VALID (1 << 15) 76 + 77 + #define UCB_FCSR 0x6c 78 + #define UCB_FCSR_AVE (1 << 12) 79 + 76 80 #define UCB_ADC_DAT_MASK 0x3ff 77 81 78 82 #define UCB_ID 0x7e
+8 -1
include/linux/wait.h
··· 77 77 #define __WAIT_BIT_KEY_INITIALIZER(word, bit) \ 78 78 { .flags = word, .bit_nr = bit, } 79 79 80 - extern void init_waitqueue_head(wait_queue_head_t *q); 80 + extern void __init_waitqueue_head(wait_queue_head_t *q, struct lock_class_key *); 81 + 82 + #define init_waitqueue_head(q) \ 83 + do { \ 84 + static struct lock_class_key __key; \ 85 + \ 86 + __init_waitqueue_head((q), &__key); \ 87 + } while (0) 81 88 82 89 #ifdef CONFIG_LOCKDEP 83 90 # define __WAIT_QUEUE_HEAD_INIT_ONSTACK(name) \
+15
include/linux/workqueue.h
··· 240 240 return ret; 241 241 } 242 242 243 + /* 244 + * Like above, but uses del_timer() instead of del_timer_sync(). This means, 245 + * if it returns 0 the timer function may be running and the queueing is in 246 + * progress. 247 + */ 248 + static inline int __cancel_delayed_work(struct delayed_work *work) 249 + { 250 + int ret; 251 + 252 + ret = del_timer(&work->timer); 253 + if (ret) 254 + work_clear_pending(&work->work); 255 + return ret; 256 + } 257 + 243 258 extern int cancel_delayed_work_sync(struct delayed_work *work); 244 259 245 260 /* Obsolete. use cancel_delayed_work_sync() */
+1 -1
include/net/act_api.h
··· 16 16 u32 tcfc_capab; 17 17 int tcfc_action; 18 18 struct tcf_t tcfc_tm; 19 - struct gnet_stats_basic tcfc_bstats; 19 + struct gnet_stats_basic_packed tcfc_bstats; 20 20 struct gnet_stats_queue tcfc_qstats; 21 21 struct gnet_stats_rate_est tcfc_rate_est; 22 22 spinlock_t tcfc_lock;
+5 -5
include/net/gen_stats.h
··· 28 28 spinlock_t *lock, struct gnet_dump *d); 29 29 30 30 extern int gnet_stats_copy_basic(struct gnet_dump *d, 31 - struct gnet_stats_basic *b); 31 + struct gnet_stats_basic_packed *b); 32 32 extern int gnet_stats_copy_rate_est(struct gnet_dump *d, 33 33 struct gnet_stats_rate_est *r); 34 34 extern int gnet_stats_copy_queue(struct gnet_dump *d, ··· 37 37 38 38 extern int gnet_stats_finish_copy(struct gnet_dump *d); 39 39 40 - extern int gen_new_estimator(struct gnet_stats_basic *bstats, 40 + extern int gen_new_estimator(struct gnet_stats_basic_packed *bstats, 41 41 struct gnet_stats_rate_est *rate_est, 42 42 spinlock_t *stats_lock, struct nlattr *opt); 43 - extern void gen_kill_estimator(struct gnet_stats_basic *bstats, 43 + extern void gen_kill_estimator(struct gnet_stats_basic_packed *bstats, 44 44 struct gnet_stats_rate_est *rate_est); 45 - extern int gen_replace_estimator(struct gnet_stats_basic *bstats, 45 + extern int gen_replace_estimator(struct gnet_stats_basic_packed *bstats, 46 46 struct gnet_stats_rate_est *rate_est, 47 47 spinlock_t *stats_lock, struct nlattr *opt); 48 - extern bool gen_estimator_active(const struct gnet_stats_basic *bstats, 48 + extern bool gen_estimator_active(const struct gnet_stats_basic_packed *bstats, 49 49 const struct gnet_stats_rate_est *rate_est); 50 50 #endif
+1 -1
include/net/netfilter/xt_rateest.h
··· 8 8 spinlock_t lock; 9 9 struct gnet_estimator params; 10 10 struct gnet_stats_rate_est rstats; 11 - struct gnet_stats_basic bstats; 11 + struct gnet_stats_basic_packed bstats; 12 12 }; 13 13 14 14 extern struct xt_rateest *xt_rateest_lookup(const char *name);
+1 -1
include/net/sch_generic.h
··· 72 72 */ 73 73 unsigned long state; 74 74 struct sk_buff_head q; 75 - struct gnet_stats_basic bstats; 75 + struct gnet_stats_basic_packed bstats; 76 76 struct gnet_stats_queue qstats; 77 77 }; 78 78
+1
include/trace/define_trace.h
··· 62 62 #endif 63 63 64 64 #undef TRACE_EVENT 65 + #undef TRACE_EVENT_FN 65 66 #undef TRACE_HEADER_MULTI_READ 66 67 67 68 /* Only undef what we defined in this file */
+17 -12
include/trace/ftrace.h
··· 45 45 }; \ 46 46 static struct ftrace_event_call event_##name 47 47 48 + #undef __cpparg 49 + #define __cpparg(arg...) arg 50 + 48 51 /* Callbacks are meaningless to ftrace. */ 49 52 #undef TRACE_EVENT_FN 50 - #define TRACE_EVENT_FN(name, proto, args, tstruct, \ 51 - assign, print, reg, unreg) \ 52 - TRACE_EVENT(name, TP_PROTO(proto), TP_ARGS(args), \ 53 - TP_STRUCT__entry(tstruct), \ 54 - TP_fast_assign(assign), \ 55 - TP_printk(print)) 53 + #define TRACE_EVENT_FN(name, proto, args, tstruct, \ 54 + assign, print, reg, unreg) \ 55 + TRACE_EVENT(name, __cpparg(proto), __cpparg(args), \ 56 + __cpparg(tstruct), __cpparg(assign), __cpparg(print)) \ 56 57 57 58 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE) 58 59 ··· 460 459 * { 461 460 * struct ring_buffer_event *event; 462 461 * struct ftrace_raw_<call> *entry; <-- defined in stage 1 462 + * struct ring_buffer *buffer; 463 463 * unsigned long irq_flags; 464 464 * int pc; 465 465 * 466 466 * local_save_flags(irq_flags); 467 467 * pc = preempt_count(); 468 468 * 469 - * event = trace_current_buffer_lock_reserve(event_<call>.id, 469 + * event = trace_current_buffer_lock_reserve(&buffer, 470 + * event_<call>.id, 470 471 * sizeof(struct ftrace_raw_<call>), 471 472 * irq_flags, pc); 472 473 * if (!event) ··· 478 475 * <assign>; <-- Here we assign the entries by the __field and 479 476 * __array macros. 
480 477 * 481 - * trace_current_buffer_unlock_commit(event, irq_flags, pc); 478 + * trace_current_buffer_unlock_commit(buffer, event, irq_flags, pc); 482 479 * } 483 480 * 484 481 * static int ftrace_raw_reg_event_<call>(struct ftrace_event_call *unused) ··· 570 567 struct ftrace_event_call *event_call = &event_##call; \ 571 568 struct ring_buffer_event *event; \ 572 569 struct ftrace_raw_##call *entry; \ 570 + struct ring_buffer *buffer; \ 573 571 unsigned long irq_flags; \ 574 572 int __data_size; \ 575 573 int pc; \ ··· 580 576 \ 581 577 __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \ 582 578 \ 583 - event = trace_current_buffer_lock_reserve(event_##call.id, \ 579 + event = trace_current_buffer_lock_reserve(&buffer, \ 580 + event_##call.id, \ 584 581 sizeof(*entry) + __data_size, \ 585 582 irq_flags, pc); \ 586 583 if (!event) \ ··· 593 588 \ 594 589 { assign; } \ 595 590 \ 596 - if (!filter_current_check_discard(event_call, entry, event)) \ 597 - trace_nowake_buffer_unlock_commit(event, irq_flags, pc); \ 591 + if (!filter_current_check_discard(buffer, event_call, entry, event)) \ 592 + trace_nowake_buffer_unlock_commit(buffer, \ 593 + event, irq_flags, pc); \ 598 594 } \ 599 595 \ 600 596 static int ftrace_raw_reg_event_##call(struct ftrace_event_call *unused)\ ··· 627 621 return -ENODEV; \ 628 622 event_##call.id = id; \ 629 623 INIT_LIST_HEAD(&event_##call.fields); \ 630 - init_preds(&event_##call); \ 631 624 return 0; \ 632 625 } \ 633 626 \
+5 -4
init/main.c
··· 584 584 setup_arch(&command_line); 585 585 mm_init_owner(&init_mm, &init_task); 586 586 setup_command_line(command_line); 587 - setup_per_cpu_areas(); 588 587 setup_nr_cpu_ids(); 588 + setup_per_cpu_areas(); 589 589 smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */ 590 590 591 591 build_all_zonelists(); ··· 733 733 int initcall_debug; 734 734 core_param(initcall_debug, initcall_debug, bool, 0644); 735 735 736 + static char msgbuf[64]; 737 + static struct boot_trace_call call; 738 + static struct boot_trace_ret ret; 739 + 736 740 int do_one_initcall(initcall_t fn) 737 741 { 738 742 int count = preempt_count(); 739 743 ktime_t calltime, delta, rettime; 740 - char msgbuf[64]; 741 - struct boot_trace_call call; 742 - struct boot_trace_ret ret; 743 744 744 745 if (initcall_debug) { 745 746 call.caller = task_pid_nr(current);
+5 -3
ipc/shm.c
··· 174 174 shm_unlock(shp); 175 175 if (!is_file_hugepages(shp->shm_file)) 176 176 shmem_lock(shp->shm_file, 0, shp->mlock_user); 177 - else 177 + else if (shp->mlock_user) 178 178 user_shm_unlock(shp->shm_file->f_path.dentry->d_inode->i_size, 179 179 shp->mlock_user); 180 180 fput (shp->shm_file); ··· 369 369 /* hugetlb_file_setup applies strict accounting */ 370 370 if (shmflg & SHM_NORESERVE) 371 371 acctflag = VM_NORESERVE; 372 - file = hugetlb_file_setup(name, size, acctflag); 373 - shp->mlock_user = current_user(); 372 + file = hugetlb_file_setup(name, size, acctflag, 373 + &shp->mlock_user); 374 374 } else { 375 375 /* 376 376 * Do not allow no accounting for OVERCOMMIT_NEVER, even ··· 410 410 return error; 411 411 412 412 no_id: 413 + if (shp->mlock_user) /* shmflg & SHM_HUGETLB case */ 414 + user_shm_unlock(size, shp->mlock_user); 413 415 fput(file); 414 416 no_file: 415 417 security_shm_free(shp);
+5 -16
kernel/fork.c
··· 426 426 init_rwsem(&mm->mmap_sem); 427 427 INIT_LIST_HEAD(&mm->mmlist); 428 428 mm->flags = (current->mm) ? current->mm->flags : default_dump_filter; 429 - mm->oom_adj = (current->mm) ? current->mm->oom_adj : 0; 430 429 mm->core_state = NULL; 431 430 mm->nr_ptes = 0; 432 431 set_mm_counter(mm, file_rss, 0); ··· 815 816 { 816 817 struct signal_struct *sig; 817 818 818 - if (clone_flags & CLONE_THREAD) { 819 - atomic_inc(&current->signal->count); 820 - atomic_inc(&current->signal->live); 819 + if (clone_flags & CLONE_THREAD) 821 820 return 0; 822 - } 823 821 824 822 sig = kmem_cache_alloc(signal_cachep, GFP_KERNEL); 825 823 tsk->signal = sig; ··· 872 876 thread_group_cputime_free(sig); 873 877 tty_kref_put(sig->tty); 874 878 kmem_cache_free(signal_cachep, sig); 875 - } 876 - 877 - static void cleanup_signal(struct task_struct *tsk) 878 - { 879 - struct signal_struct *sig = tsk->signal; 880 - 881 - atomic_dec(&sig->live); 882 - 883 - if (atomic_dec_and_test(&sig->count)) 884 - __cleanup_signal(sig); 885 879 } 886 880 887 881 static void copy_flags(unsigned long clone_flags, struct task_struct *p) ··· 1226 1240 } 1227 1241 1228 1242 if (clone_flags & CLONE_THREAD) { 1243 + atomic_inc(&current->signal->count); 1244 + atomic_inc(&current->signal->live); 1229 1245 p->group_leader = current->group_leader; 1230 1246 list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group); 1231 1247 } ··· 1271 1283 if (p->mm) 1272 1284 mmput(p->mm); 1273 1285 bad_fork_cleanup_signal: 1274 - cleanup_signal(p); 1286 + if (!(clone_flags & CLONE_THREAD)) 1287 + __cleanup_signal(p->signal); 1275 1288 bad_fork_cleanup_sighand: 1276 1289 __cleanup_sighand(p->sighand); 1277 1290 bad_fork_cleanup_fs:
+22 -6
kernel/futex.c
··· 1010 1010 * requeue_pi_wake_futex() - Wake a task that acquired the lock during requeue 1011 1011 * q: the futex_q 1012 1012 * key: the key of the requeue target futex 1013 + * hb: the hash_bucket of the requeue target futex 1013 1014 * 1014 1015 * During futex_requeue, with requeue_pi=1, it is possible to acquire the 1015 1016 * target futex if it is uncontended or via a lock steal. Set the futex_q key 1016 1017 * to the requeue target futex so the waiter can detect the wakeup on the right 1017 1018 * futex, but remove it from the hb and NULL the rt_waiter so it can detect 1018 - * atomic lock acquisition. Must be called with the q->lock_ptr held. 1019 + * atomic lock acquisition. Set the q->lock_ptr to the requeue target hb->lock 1020 + * to protect access to the pi_state to fixup the owner later. Must be called 1021 + * with both q->lock_ptr and hb->lock held. 1019 1022 */ 1020 1023 static inline 1021 - void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key) 1024 + void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key, 1025 + struct futex_hash_bucket *hb) 1022 1026 { 1023 1027 drop_futex_key_refs(&q->key); 1024 1028 get_futex_key_refs(key); ··· 1033 1029 1034 1030 WARN_ON(!q->rt_waiter); 1035 1031 q->rt_waiter = NULL; 1032 + 1033 + q->lock_ptr = &hb->lock; 1034 + #ifdef CONFIG_DEBUG_PI_LIST 1035 + q->list.plist.lock = &hb->lock; 1036 + #endif 1036 1037 1037 1038 wake_up_state(q->task, TASK_NORMAL); 1038 1039 } ··· 1097 1088 ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task, 1098 1089 set_waiters); 1099 1090 if (ret == 1) 1100 - requeue_pi_wake_futex(top_waiter, key2); 1091 + requeue_pi_wake_futex(top_waiter, key2, hb2); 1101 1092 1102 1093 return ret; 1103 1094 } ··· 1256 1247 if (!match_futex(&this->key, &key1)) 1257 1248 continue; 1258 1249 1259 - WARN_ON(!requeue_pi && this->rt_waiter); 1260 - WARN_ON(requeue_pi && !this->rt_waiter); 1250 + /* 1251 + * FUTEX_WAIT_REQEUE_PI and FUTEX_CMP_REQUEUE_PI should always 
1252 + * be paired with each other and no other futex ops. 1253 + */ 1254 + if ((requeue_pi && !this->rt_waiter) || 1255 + (!requeue_pi && this->rt_waiter)) { 1256 + ret = -EINVAL; 1257 + break; 1258 + } 1261 1259 1262 1260 /* 1263 1261 * Wake nr_wake waiters. For requeue_pi, if we acquired the ··· 1289 1273 this->task, 1); 1290 1274 if (ret == 1) { 1291 1275 /* We got the lock. */ 1292 - requeue_pi_wake_futex(this, &key2); 1276 + requeue_pi_wake_futex(this, &key2, hb2); 1293 1277 continue; 1294 1278 } else if (ret) { 1295 1279 /* -EDEADLK */
+4 -2
kernel/futex_compat.c
··· 180 180 int cmd = op & FUTEX_CMD_MASK; 181 181 182 182 if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI || 183 - cmd == FUTEX_WAIT_BITSET)) { 183 + cmd == FUTEX_WAIT_BITSET || 184 + cmd == FUTEX_WAIT_REQUEUE_PI)) { 184 185 if (get_compat_timespec(&ts, utime)) 185 186 return -EFAULT; 186 187 if (!timespec_valid(&ts)) ··· 192 191 t = ktime_add_safe(ktime_get(), t); 193 192 tp = &t; 194 193 } 195 - if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE) 194 + if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE || 195 + cmd == FUTEX_CMP_REQUEUE_PI || cmd == FUTEX_WAKE_OP) 196 196 val2 = (int) (unsigned long) utime; 197 197 198 198 return do_futex(uaddr, op, val, tp, uaddr2, val2, val3);
+15 -12
kernel/irq/manage.c
··· 607 607 */ 608 608 get_task_struct(t); 609 609 new->thread = t; 610 - wake_up_process(t); 611 610 } 612 611 613 612 /* ··· 689 690 (int)(new->flags & IRQF_TRIGGER_MASK)); 690 691 } 691 692 693 + new->irq = irq; 692 694 *old_ptr = new; 693 695 694 696 /* Reset broken irq detection when installing new handler */ ··· 707 707 708 708 spin_unlock_irqrestore(&desc->lock, flags); 709 709 710 - new->irq = irq; 710 + /* 711 + * Strictly no need to wake it up, but hung_task complains 712 + * when no hard interrupt wakes the thread up. 713 + */ 714 + if (new->thread) 715 + wake_up_process(new->thread); 716 + 711 717 register_irq_proc(irq, desc); 712 718 new->dir = NULL; 713 719 register_handler_proc(irq, new); ··· 767 761 { 768 762 struct irq_desc *desc = irq_to_desc(irq); 769 763 struct irqaction *action, **action_ptr; 770 - struct task_struct *irqthread; 771 764 unsigned long flags; 772 765 773 766 WARN(in_interrupt(), "Trying to free IRQ %d from IRQ context!\n", irq); ··· 814 809 desc->chip->disable(irq); 815 810 } 816 811 817 - irqthread = action->thread; 818 - action->thread = NULL; 819 - 820 812 spin_unlock_irqrestore(&desc->lock, flags); 821 813 822 814 unregister_handler_proc(irq, action); 823 815 824 816 /* Make sure it's not being used on another CPU: */ 825 817 synchronize_irq(irq); 826 - 827 - if (irqthread) { 828 - if (!test_bit(IRQTF_DIED, &action->thread_flags)) 829 - kthread_stop(irqthread); 830 - put_task_struct(irqthread); 831 - } 832 818 833 819 #ifdef CONFIG_DEBUG_SHIRQ 834 820 /* ··· 836 840 local_irq_restore(flags); 837 841 } 838 842 #endif 843 + 844 + if (action->thread) { 845 + if (!test_bit(IRQTF_DIED, &action->thread_flags)) 846 + kthread_stop(action->thread); 847 + put_task_struct(action->thread); 848 + } 849 + 839 850 return action; 840 851 } 841 852
+8 -2
kernel/module.c
··· 914 914 } 915 915 EXPORT_SYMBOL(__symbol_put); 916 916 917 + /* Note this assumes addr is a function, which it currently always is. */ 917 918 void symbol_put_addr(void *addr) 918 919 { 919 920 struct module *modaddr; 921 + unsigned long a = (unsigned long)dereference_function_descriptor(addr); 920 922 921 - if (core_kernel_text((unsigned long)addr)) 923 + if (core_kernel_text(a)) 922 924 return; 923 925 924 926 /* module_text_address is safe here: we're supposed to have reference 925 927 * to module from symbol_get, so it can't go away. */ 926 - modaddr = __module_text_address((unsigned long)addr); 928 + modaddr = __module_text_address(a); 927 929 BUG_ON(!modaddr); 928 930 module_put(modaddr); 929 931 } ··· 1280 1278 unsigned int notes, loaded, i; 1281 1279 struct module_notes_attrs *notes_attrs; 1282 1280 struct bin_attribute *nattr; 1281 + 1282 + /* failed to create section attributes, so can't create notes */ 1283 + if (!mod->sect_attrs) 1284 + return; 1283 1285 1284 1286 /* Count notes sections and allocate structures. */ 1285 1287 notes = 0;
+253 -103
kernel/perf_counter.c
··· 50 50 * 1 - disallow cpu counters to unpriv 51 51 * 2 - disallow kernel profiling to unpriv 52 52 */ 53 - int sysctl_perf_counter_paranoid __read_mostly; 53 + int sysctl_perf_counter_paranoid __read_mostly = 1; 54 54 55 55 static inline bool perf_paranoid_cpu(void) 56 56 { ··· 88 88 void __weak hw_perf_enable(void) { barrier(); } 89 89 90 90 void __weak hw_perf_counter_setup(int cpu) { barrier(); } 91 + void __weak hw_perf_counter_setup_online(int cpu) { barrier(); } 91 92 92 93 int __weak 93 94 hw_perf_group_sched_in(struct perf_counter *group_leader, ··· 307 306 return; 308 307 309 308 counter->state = PERF_COUNTER_STATE_INACTIVE; 309 + if (counter->pending_disable) { 310 + counter->pending_disable = 0; 311 + counter->state = PERF_COUNTER_STATE_OFF; 312 + } 310 313 counter->tstamp_stopped = ctx->time; 311 314 counter->pmu->disable(counter); 312 315 counter->oncpu = -1; ··· 1503 1498 */ 1504 1499 static void __perf_counter_read(void *info) 1505 1500 { 1501 + struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context); 1506 1502 struct perf_counter *counter = info; 1507 1503 struct perf_counter_context *ctx = counter->ctx; 1508 1504 unsigned long flags; 1505 + 1506 + /* 1507 + * If this is a task context, we need to check whether it is 1508 + * the current task context of this cpu. If not it has been 1509 + * scheduled out before the smp call arrived. In that case 1510 + * counter->count would have been updated to a recent sample 1511 + * when the counter was scheduled out. 
1512 + */ 1513 + if (ctx->task && cpuctx->task_ctx != ctx) 1514 + return; 1509 1515 1510 1516 local_irq_save(flags); 1511 1517 if (ctx->is_active) ··· 1707 1691 return 0; 1708 1692 } 1709 1693 1710 - static u64 perf_counter_read_tree(struct perf_counter *counter) 1694 + static int perf_counter_read_size(struct perf_counter *counter) 1695 + { 1696 + int entry = sizeof(u64); /* value */ 1697 + int size = 0; 1698 + int nr = 1; 1699 + 1700 + if (counter->attr.read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) 1701 + size += sizeof(u64); 1702 + 1703 + if (counter->attr.read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) 1704 + size += sizeof(u64); 1705 + 1706 + if (counter->attr.read_format & PERF_FORMAT_ID) 1707 + entry += sizeof(u64); 1708 + 1709 + if (counter->attr.read_format & PERF_FORMAT_GROUP) { 1710 + nr += counter->group_leader->nr_siblings; 1711 + size += sizeof(u64); 1712 + } 1713 + 1714 + size += entry * nr; 1715 + 1716 + return size; 1717 + } 1718 + 1719 + static u64 perf_counter_read_value(struct perf_counter *counter) 1711 1720 { 1712 1721 struct perf_counter *child; 1713 1722 u64 total = 0; ··· 1744 1703 return total; 1745 1704 } 1746 1705 1706 + static int perf_counter_read_entry(struct perf_counter *counter, 1707 + u64 read_format, char __user *buf) 1708 + { 1709 + int n = 0, count = 0; 1710 + u64 values[2]; 1711 + 1712 + values[n++] = perf_counter_read_value(counter); 1713 + if (read_format & PERF_FORMAT_ID) 1714 + values[n++] = primary_counter_id(counter); 1715 + 1716 + count = n * sizeof(u64); 1717 + 1718 + if (copy_to_user(buf, values, count)) 1719 + return -EFAULT; 1720 + 1721 + return count; 1722 + } 1723 + 1724 + static int perf_counter_read_group(struct perf_counter *counter, 1725 + u64 read_format, char __user *buf) 1726 + { 1727 + struct perf_counter *leader = counter->group_leader, *sub; 1728 + int n = 0, size = 0, err = -EFAULT; 1729 + u64 values[3]; 1730 + 1731 + values[n++] = 1 + leader->nr_siblings; 1732 + if (read_format & 
PERF_FORMAT_TOTAL_TIME_ENABLED) { 1733 + values[n++] = leader->total_time_enabled + 1734 + atomic64_read(&leader->child_total_time_enabled); 1735 + } 1736 + if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) { 1737 + values[n++] = leader->total_time_running + 1738 + atomic64_read(&leader->child_total_time_running); 1739 + } 1740 + 1741 + size = n * sizeof(u64); 1742 + 1743 + if (copy_to_user(buf, values, size)) 1744 + return -EFAULT; 1745 + 1746 + err = perf_counter_read_entry(leader, read_format, buf + size); 1747 + if (err < 0) 1748 + return err; 1749 + 1750 + size += err; 1751 + 1752 + list_for_each_entry(sub, &leader->sibling_list, list_entry) { 1753 + err = perf_counter_read_entry(sub, read_format, 1754 + buf + size); 1755 + if (err < 0) 1756 + return err; 1757 + 1758 + size += err; 1759 + } 1760 + 1761 + return size; 1762 + } 1763 + 1764 + static int perf_counter_read_one(struct perf_counter *counter, 1765 + u64 read_format, char __user *buf) 1766 + { 1767 + u64 values[4]; 1768 + int n = 0; 1769 + 1770 + values[n++] = perf_counter_read_value(counter); 1771 + if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) { 1772 + values[n++] = counter->total_time_enabled + 1773 + atomic64_read(&counter->child_total_time_enabled); 1774 + } 1775 + if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) { 1776 + values[n++] = counter->total_time_running + 1777 + atomic64_read(&counter->child_total_time_running); 1778 + } 1779 + if (read_format & PERF_FORMAT_ID) 1780 + values[n++] = primary_counter_id(counter); 1781 + 1782 + if (copy_to_user(buf, values, n * sizeof(u64))) 1783 + return -EFAULT; 1784 + 1785 + return n * sizeof(u64); 1786 + } 1787 + 1747 1788 /* 1748 1789 * Read the performance counter - simple non blocking version for now 1749 1790 */ 1750 1791 static ssize_t 1751 1792 perf_read_hw(struct perf_counter *counter, char __user *buf, size_t count) 1752 1793 { 1753 - u64 values[4]; 1754 - int n; 1794 + u64 read_format = counter->attr.read_format; 1795 + int ret; 1755 1796 
1756 1797 /* 1757 1798 * Return end-of-file for a read on a counter that is in ··· 1843 1720 if (counter->state == PERF_COUNTER_STATE_ERROR) 1844 1721 return 0; 1845 1722 1723 + if (count < perf_counter_read_size(counter)) 1724 + return -ENOSPC; 1725 + 1846 1726 WARN_ON_ONCE(counter->ctx->parent_ctx); 1847 1727 mutex_lock(&counter->child_mutex); 1848 - values[0] = perf_counter_read_tree(counter); 1849 - n = 1; 1850 - if (counter->attr.read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) 1851 - values[n++] = counter->total_time_enabled + 1852 - atomic64_read(&counter->child_total_time_enabled); 1853 - if (counter->attr.read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) 1854 - values[n++] = counter->total_time_running + 1855 - atomic64_read(&counter->child_total_time_running); 1856 - if (counter->attr.read_format & PERF_FORMAT_ID) 1857 - values[n++] = primary_counter_id(counter); 1728 + if (read_format & PERF_FORMAT_GROUP) 1729 + ret = perf_counter_read_group(counter, read_format, buf); 1730 + else 1731 + ret = perf_counter_read_one(counter, read_format, buf); 1858 1732 mutex_unlock(&counter->child_mutex); 1859 1733 1860 - if (count < n * sizeof(u64)) 1861 - return -EINVAL; 1862 - count = n * sizeof(u64); 1863 - 1864 - if (copy_to_user(buf, values, count)) 1865 - return -EFAULT; 1866 - 1867 - return count; 1734 + return ret; 1868 1735 } 1869 1736 1870 1737 static ssize_t ··· 2018 1905 2019 1906 return 0; 2020 1907 } 1908 + 1909 + #ifndef PERF_COUNTER_INDEX_OFFSET 1910 + # define PERF_COUNTER_INDEX_OFFSET 0 1911 + #endif 2021 1912 2022 1913 static int perf_counter_index(struct perf_counter *counter) 2023 1914 { ··· 2362 2245 2363 2246 if (counter->pending_disable) { 2364 2247 counter->pending_disable = 0; 2365 - perf_counter_disable(counter); 2248 + __perf_counter_disable(counter); 2366 2249 } 2367 2250 2368 2251 if (counter->pending_wakeup) { ··· 2747 2630 return task_pid_nr_ns(p, counter->ns); 2748 2631 } 2749 2632 2750 - static void perf_counter_output(struct perf_counter 
*counter, int nmi, 2633 + static void perf_output_read_one(struct perf_output_handle *handle, 2634 + struct perf_counter *counter) 2635 + { 2636 + u64 read_format = counter->attr.read_format; 2637 + u64 values[4]; 2638 + int n = 0; 2639 + 2640 + values[n++] = atomic64_read(&counter->count); 2641 + if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) { 2642 + values[n++] = counter->total_time_enabled + 2643 + atomic64_read(&counter->child_total_time_enabled); 2644 + } 2645 + if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) { 2646 + values[n++] = counter->total_time_running + 2647 + atomic64_read(&counter->child_total_time_running); 2648 + } 2649 + if (read_format & PERF_FORMAT_ID) 2650 + values[n++] = primary_counter_id(counter); 2651 + 2652 + perf_output_copy(handle, values, n * sizeof(u64)); 2653 + } 2654 + 2655 + /* 2656 + * XXX PERF_FORMAT_GROUP vs inherited counters seems difficult. 2657 + */ 2658 + static void perf_output_read_group(struct perf_output_handle *handle, 2659 + struct perf_counter *counter) 2660 + { 2661 + struct perf_counter *leader = counter->group_leader, *sub; 2662 + u64 read_format = counter->attr.read_format; 2663 + u64 values[5]; 2664 + int n = 0; 2665 + 2666 + values[n++] = 1 + leader->nr_siblings; 2667 + 2668 + if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) 2669 + values[n++] = leader->total_time_enabled; 2670 + 2671 + if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) 2672 + values[n++] = leader->total_time_running; 2673 + 2674 + if (leader != counter) 2675 + leader->pmu->read(leader); 2676 + 2677 + values[n++] = atomic64_read(&leader->count); 2678 + if (read_format & PERF_FORMAT_ID) 2679 + values[n++] = primary_counter_id(leader); 2680 + 2681 + perf_output_copy(handle, values, n * sizeof(u64)); 2682 + 2683 + list_for_each_entry(sub, &leader->sibling_list, list_entry) { 2684 + n = 0; 2685 + 2686 + if (sub != counter) 2687 + sub->pmu->read(sub); 2688 + 2689 + values[n++] = atomic64_read(&sub->count); 2690 + if (read_format & 
PERF_FORMAT_ID) 2691 + values[n++] = primary_counter_id(sub); 2692 + 2693 + perf_output_copy(handle, values, n * sizeof(u64)); 2694 + } 2695 + } 2696 + 2697 + static void perf_output_read(struct perf_output_handle *handle, 2698 + struct perf_counter *counter) 2699 + { 2700 + if (counter->attr.read_format & PERF_FORMAT_GROUP) 2701 + perf_output_read_group(handle, counter); 2702 + else 2703 + perf_output_read_one(handle, counter); 2704 + } 2705 + 2706 + void perf_counter_output(struct perf_counter *counter, int nmi, 2751 2707 struct perf_sample_data *data) 2752 2708 { 2753 2709 int ret; ··· 2831 2641 struct { 2832 2642 u32 pid, tid; 2833 2643 } tid_entry; 2834 - struct { 2835 - u64 id; 2836 - u64 counter; 2837 - } group_entry; 2838 2644 struct perf_callchain_entry *callchain = NULL; 2839 2645 int callchain_size = 0; 2840 2646 u64 time; ··· 2885 2699 if (sample_type & PERF_SAMPLE_PERIOD) 2886 2700 header.size += sizeof(u64); 2887 2701 2888 - if (sample_type & PERF_SAMPLE_GROUP) { 2889 - header.size += sizeof(u64) + 2890 - counter->nr_siblings * sizeof(group_entry); 2891 - } 2702 + if (sample_type & PERF_SAMPLE_READ) 2703 + header.size += perf_counter_read_size(counter); 2892 2704 2893 2705 if (sample_type & PERF_SAMPLE_CALLCHAIN) { 2894 2706 callchain = perf_callchain(data->regs); ··· 2943 2759 if (sample_type & PERF_SAMPLE_PERIOD) 2944 2760 perf_output_put(&handle, data->period); 2945 2761 2946 - /* 2947 - * XXX PERF_SAMPLE_GROUP vs inherited counters seems difficult. 
2948 - */ 2949 - if (sample_type & PERF_SAMPLE_GROUP) { 2950 - struct perf_counter *leader, *sub; 2951 - u64 nr = counter->nr_siblings; 2952 - 2953 - perf_output_put(&handle, nr); 2954 - 2955 - leader = counter->group_leader; 2956 - list_for_each_entry(sub, &leader->sibling_list, list_entry) { 2957 - if (sub != counter) 2958 - sub->pmu->read(sub); 2959 - 2960 - group_entry.id = primary_counter_id(sub); 2961 - group_entry.counter = atomic64_read(&sub->count); 2962 - 2963 - perf_output_put(&handle, group_entry); 2964 - } 2965 - } 2762 + if (sample_type & PERF_SAMPLE_READ) 2763 + perf_output_read(&handle, counter); 2966 2764 2967 2765 if (sample_type & PERF_SAMPLE_CALLCHAIN) { 2968 2766 if (callchain) ··· 2983 2817 2984 2818 u32 pid; 2985 2819 u32 tid; 2986 - u64 value; 2987 - u64 format[3]; 2988 2820 }; 2989 2821 2990 2822 static void ··· 2994 2830 .header = { 2995 2831 .type = PERF_EVENT_READ, 2996 2832 .misc = 0, 2997 - .size = sizeof(event) - sizeof(event.format), 2833 + .size = sizeof(event) + perf_counter_read_size(counter), 2998 2834 }, 2999 2835 .pid = perf_counter_pid(counter, task), 3000 2836 .tid = perf_counter_tid(counter, task), 3001 - .value = atomic64_read(&counter->count), 3002 2837 }; 3003 - int ret, i = 0; 3004 - 3005 - if (counter->attr.read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) { 3006 - event.header.size += sizeof(u64); 3007 - event.format[i++] = counter->total_time_enabled; 3008 - } 3009 - 3010 - if (counter->attr.read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) { 3011 - event.header.size += sizeof(u64); 3012 - event.format[i++] = counter->total_time_running; 3013 - } 3014 - 3015 - if (counter->attr.read_format & PERF_FORMAT_ID) { 3016 - event.header.size += sizeof(u64); 3017 - event.format[i++] = primary_counter_id(counter); 3018 - } 2838 + int ret; 3019 2839 3020 2840 ret = perf_output_begin(&handle, counter, event.header.size, 0, 0); 3021 2841 if (ret) 3022 2842 return; 3023 2843 3024 - perf_output_copy(&handle, &event, event.header.size); 2844 
+ perf_output_put(&handle, event); 2845 + perf_output_read(&handle, counter); 2846 + 3025 2847 perf_output_end(&handle); 3026 2848 } 3027 2849 ··· 3043 2893 return; 3044 2894 3045 2895 task_event->event.pid = perf_counter_pid(counter, task); 3046 - task_event->event.ppid = perf_counter_pid(counter, task->real_parent); 2896 + task_event->event.ppid = perf_counter_pid(counter, current); 3047 2897 3048 2898 task_event->event.tid = perf_counter_tid(counter, task); 3049 - task_event->event.ptid = perf_counter_tid(counter, task->real_parent); 2899 + task_event->event.ptid = perf_counter_tid(counter, current); 3050 2900 3051 2901 perf_output_put(&handle, task_event->event); 3052 2902 perf_output_end(&handle); ··· 3593 3443 3594 3444 static int perf_swcounter_is_counting(struct perf_counter *counter) 3595 3445 { 3596 - struct perf_counter_context *ctx; 3597 - unsigned long flags; 3598 - int count; 3599 - 3446 + /* 3447 + * The counter is active, we're good! 3448 + */ 3600 3449 if (counter->state == PERF_COUNTER_STATE_ACTIVE) 3601 3450 return 1; 3602 3451 3452 + /* 3453 + * The counter is off/error, not counting. 3454 + */ 3603 3455 if (counter->state != PERF_COUNTER_STATE_INACTIVE) 3604 3456 return 0; 3605 3457 3606 3458 /* 3607 - * If the counter is inactive, it could be just because 3608 - * its task is scheduled out, or because it's in a group 3609 - * which could not go on the PMU. We want to count in 3610 - * the first case but not the second. If the context is 3611 - * currently active then an inactive software counter must 3612 - * be the second case. If it's not currently active then 3613 - * we need to know whether the counter was active when the 3614 - * context was last active, which we can determine by 3615 - * comparing counter->tstamp_stopped with ctx->time. 3616 - * 3617 - * We are within an RCU read-side critical section, 3618 - * which protects the existence of *ctx. 
3459 + * The counter is inactive, if the context is active 3460 + * we're part of a group that didn't make it on the 'pmu', 3461 + * not counting. 3619 3462 */ 3620 - ctx = counter->ctx; 3621 - spin_lock_irqsave(&ctx->lock, flags); 3622 - count = 1; 3623 - /* Re-check state now we have the lock */ 3624 - if (counter->state < PERF_COUNTER_STATE_INACTIVE || 3625 - counter->ctx->is_active || 3626 - counter->tstamp_stopped < ctx->time) 3627 - count = 0; 3628 - spin_unlock_irqrestore(&ctx->lock, flags); 3629 - return count; 3463 + if (counter->ctx->is_active) 3464 + return 0; 3465 + 3466 + /* 3467 + * We're inactive and the context is too, this means the 3468 + * task is scheduled out, we're counting events that happen 3469 + * to us, like migration events. 3470 + */ 3471 + return 1; 3630 3472 } 3631 3473 3632 3474 static int perf_swcounter_match(struct perf_counter *counter, ··· 4066 3924 hwc->sample_period = attr->sample_period; 4067 3925 if (attr->freq && attr->sample_freq) 4068 3926 hwc->sample_period = 1; 3927 + hwc->last_period = hwc->sample_period; 4069 3928 4070 3929 atomic64_set(&hwc->period_left, hwc->sample_period); 4071 3930 4072 3931 /* 4073 - * we currently do not support PERF_SAMPLE_GROUP on inherited counters 3932 + * we currently do not support PERF_FORMAT_GROUP on inherited counters 4074 3933 */ 4075 - if (attr->inherit && (attr->sample_type & PERF_SAMPLE_GROUP)) 3934 + if (attr->inherit && (attr->read_format & PERF_FORMAT_GROUP)) 4076 3935 goto done; 4077 3936 4078 3937 switch (attr->type) { ··· 4735 4592 perf_counter_init_cpu(cpu); 4736 4593 break; 4737 4594 4595 + case CPU_ONLINE: 4596 + case CPU_ONLINE_FROZEN: 4597 + hw_perf_counter_setup_online(cpu); 4598 + break; 4599 + 4738 4600 case CPU_DOWN_PREPARE: 4739 4601 case CPU_DOWN_PREPARE_FROZEN: 4740 4602 perf_counter_exit_cpu(cpu); ··· 4763 4615 void __init perf_counter_init(void) 4764 4616 { 4765 4617 perf_cpu_notify(&perf_cpu_nb, (unsigned long)CPU_UP_PREPARE, 4618 + (void 
*)(long)smp_processor_id()); 4619 + perf_cpu_notify(&perf_cpu_nb, (unsigned long)CPU_ONLINE, 4766 4620 (void *)(long)smp_processor_id()); 4767 4621 register_cpu_notifier(&perf_cpu_nb); 4768 4622 }
+4 -3
kernel/sysctl.c
··· 49 49 #include <linux/acpi.h> 50 50 #include <linux/reboot.h> 51 51 #include <linux/ftrace.h> 52 + #include <linux/security.h> 52 53 #include <linux/slow-work.h> 53 54 #include <linux/perf_counter.h> 54 55 ··· 1307 1306 { 1308 1307 .ctl_name = CTL_UNNUMBERED, 1309 1308 .procname = "mmap_min_addr", 1310 - .data = &mmap_min_addr, 1311 - .maxlen = sizeof(unsigned long), 1309 + .data = &dac_mmap_min_addr, 1310 + .maxlen = sizeof(unsigned long), 1312 1311 .mode = 0644, 1313 - .proc_handler = &proc_doulongvec_minmax, 1312 + .proc_handler = &mmap_min_addr_handler, 1314 1313 }, 1315 1314 #ifdef CONFIG_NUMA 1316 1315 {
+10 -6
kernel/time/clockevents.c
··· 137 137 */ 138 138 int clockevents_register_notifier(struct notifier_block *nb) 139 139 { 140 + unsigned long flags; 140 141 int ret; 141 142 142 - spin_lock(&clockevents_lock); 143 + spin_lock_irqsave(&clockevents_lock, flags); 143 144 ret = raw_notifier_chain_register(&clockevents_chain, nb); 144 - spin_unlock(&clockevents_lock); 145 + spin_unlock_irqrestore(&clockevents_lock, flags); 145 146 146 147 return ret; 147 148 } ··· 179 178 */ 180 179 void clockevents_register_device(struct clock_event_device *dev) 181 180 { 181 + unsigned long flags; 182 + 182 183 BUG_ON(dev->mode != CLOCK_EVT_MODE_UNUSED); 183 184 BUG_ON(!dev->cpumask); 184 185 185 - spin_lock(&clockevents_lock); 186 + spin_lock_irqsave(&clockevents_lock, flags); 186 187 187 188 list_add(&dev->list, &clockevent_devices); 188 189 clockevents_do_notify(CLOCK_EVT_NOTIFY_ADD, dev); 189 190 clockevents_notify_released(); 190 191 191 - spin_unlock(&clockevents_lock); 192 + spin_unlock_irqrestore(&clockevents_lock, flags); 192 193 } 193 194 EXPORT_SYMBOL_GPL(clockevents_register_device); 194 195 ··· 238 235 void clockevents_notify(unsigned long reason, void *arg) 239 236 { 240 237 struct list_head *node, *tmp; 238 + unsigned long flags; 241 239 242 - spin_lock(&clockevents_lock); 240 + spin_lock_irqsave(&clockevents_lock, flags); 243 241 clockevents_do_notify(reason, arg); 244 242 245 243 switch (reason) { ··· 255 251 default: 256 252 break; 257 253 } 258 - spin_unlock(&clockevents_lock); 254 + spin_unlock_irqrestore(&clockevents_lock, flags); 259 255 } 260 256 EXPORT_SYMBOL_GPL(clockevents_notify); 261 257 #endif
+3 -4
kernel/time/tick-broadcast.c
··· 205 205 * Powerstate information: The system enters/leaves a state, where 206 206 * affected devices might stop 207 207 */ 208 - static void tick_do_broadcast_on_off(void *why) 208 + static void tick_do_broadcast_on_off(unsigned long *reason) 209 209 { 210 210 struct clock_event_device *bc, *dev; 211 211 struct tick_device *td; 212 - unsigned long flags, *reason = why; 212 + unsigned long flags; 213 213 int cpu, bc_stopped; 214 214 215 215 spin_lock_irqsave(&tick_broadcast_lock, flags); ··· 276 276 printk(KERN_ERR "tick-broadcast: ignoring broadcast for " 277 277 "offline CPU #%d\n", *oncpu); 278 278 else 279 - smp_call_function_single(*oncpu, tick_do_broadcast_on_off, 280 - &reason, 1); 279 + tick_do_broadcast_on_off(&reason); 281 280 } 282 281 283 282 /*
+1 -1
kernel/time/timer_list.c
··· 286 286 { 287 287 struct proc_dir_entry *pe; 288 288 289 - pe = proc_create("timer_list", 0644, NULL, &timer_list_fops); 289 + pe = proc_create("timer_list", 0444, NULL, &timer_list_fops); 290 290 if (!pe) 291 291 return -ENOMEM; 292 292 return 0;
+8 -1
kernel/trace/Kconfig
··· 60 60 bool 61 61 62 62 config CONTEXT_SWITCH_TRACER 63 - select MARKERS 64 63 bool 64 + 65 + config RING_BUFFER_ALLOW_SWAP 66 + bool 67 + help 68 + Allow the use of ring_buffer_swap_cpu. 69 + Adds a very slight overhead to tracing when enabled. 65 70 66 71 # All tracer options should select GENERIC_TRACER. For those options that are 67 72 # enabled by all tracers (context switch and event tracer) they select TRACING. ··· 152 147 select TRACE_IRQFLAGS 153 148 select GENERIC_TRACER 154 149 select TRACER_MAX_TRACE 150 + select RING_BUFFER_ALLOW_SWAP 155 151 help 156 152 This option measures the time spent in irqs-off critical 157 153 sections, with microsecond accuracy. ··· 174 168 depends on PREEMPT 175 169 select GENERIC_TRACER 176 170 select TRACER_MAX_TRACE 171 + select RING_BUFFER_ALLOW_SWAP 177 172 help 178 173 This option measures the time spent in preemption off critical 179 174 sections, with microsecond accuracy.
+9 -15
kernel/trace/blktrace.c
··· 65 65 { 66 66 struct blk_io_trace *t; 67 67 struct ring_buffer_event *event = NULL; 68 + struct ring_buffer *buffer = NULL; 68 69 int pc = 0; 69 70 int cpu = smp_processor_id(); 70 71 bool blk_tracer = blk_tracer_enabled; 71 72 72 73 if (blk_tracer) { 74 + buffer = blk_tr->buffer; 73 75 pc = preempt_count(); 74 - event = trace_buffer_lock_reserve(blk_tr, TRACE_BLK, 76 + event = trace_buffer_lock_reserve(buffer, TRACE_BLK, 75 77 sizeof(*t) + len, 76 78 0, pc); 77 79 if (!event) ··· 98 96 memcpy((void *) t + sizeof(*t), data, len); 99 97 100 98 if (blk_tracer) 101 - trace_buffer_unlock_commit(blk_tr, event, 0, pc); 99 + trace_buffer_unlock_commit(buffer, event, 0, pc); 102 100 } 103 101 } 104 102 ··· 181 179 { 182 180 struct task_struct *tsk = current; 183 181 struct ring_buffer_event *event = NULL; 182 + struct ring_buffer *buffer = NULL; 184 183 struct blk_io_trace *t; 185 184 unsigned long flags = 0; 186 185 unsigned long *sequence; ··· 207 204 if (blk_tracer) { 208 205 tracing_record_cmdline(current); 209 206 207 + buffer = blk_tr->buffer; 210 208 pc = preempt_count(); 211 - event = trace_buffer_lock_reserve(blk_tr, TRACE_BLK, 209 + event = trace_buffer_lock_reserve(buffer, TRACE_BLK, 212 210 sizeof(*t) + pdu_len, 213 211 0, pc); 214 212 if (!event) ··· 256 252 memcpy((void *) t + sizeof(*t), pdu_data, pdu_len); 257 253 258 254 if (blk_tracer) { 259 - trace_buffer_unlock_commit(blk_tr, event, 0, pc); 255 + trace_buffer_unlock_commit(buffer, event, 0, pc); 260 256 return; 261 257 } 262 258 } ··· 271 267 { 272 268 debugfs_remove(bt->msg_file); 273 269 debugfs_remove(bt->dropped_file); 274 - debugfs_remove(bt->dir); 275 270 relay_close(bt->rchan); 271 + debugfs_remove(bt->dir); 276 272 free_percpu(bt->sequence); 277 273 free_percpu(bt->msg_data); 278 274 kfree(bt); ··· 382 378 383 379 static int blk_remove_buf_file_callback(struct dentry *dentry) 384 380 { 385 - struct dentry *parent = dentry->d_parent; 386 381 debugfs_remove(dentry); 387 382 388 - /* 389 - * 
this will fail for all but the last file, but that is ok. what we 390 - * care about is the top level buts->name directory going away, when 391 - * the last trace file is gone. Then we don't have to rmdir() that 392 - * manually on trace stop, so it nicely solves the issue with 393 - * force killing of running traces. 394 - */ 395 - 396 - debugfs_remove(parent); 397 383 return 0; 398 384 } 399 385
+11 -6
kernel/trace/ftrace.c
··· 2222 2222 read++; 2223 2223 cnt--; 2224 2224 2225 - if (!(iter->flags & ~FTRACE_ITER_CONT)) { 2225 + /* 2226 + * If the parser haven't finished with the last write, 2227 + * continue reading the user input without skipping spaces. 2228 + */ 2229 + if (!(iter->flags & FTRACE_ITER_CONT)) { 2226 2230 /* skip white space */ 2227 2231 while (cnt && isspace(ch)) { 2228 2232 ret = get_user(ch, ubuf++); ··· 2236 2232 cnt--; 2237 2233 } 2238 2234 2235 + /* only spaces were written */ 2239 2236 if (isspace(ch)) { 2240 - file->f_pos += read; 2237 + *ppos += read; 2241 2238 ret = read; 2242 2239 goto out; 2243 2240 } ··· 2267 2262 if (ret) 2268 2263 goto out; 2269 2264 iter->buffer_idx = 0; 2270 - } else 2265 + } else { 2271 2266 iter->flags |= FTRACE_ITER_CONT; 2267 + iter->buffer[iter->buffer_idx++] = ch; 2268 + } 2272 2269 2273 - 2274 - file->f_pos += read; 2275 - 2270 + *ppos += read; 2276 2271 ret = read; 2277 2272 out: 2278 2273 mutex_unlock(&ftrace_regex_lock);
+1 -3
kernel/trace/kmemtrace.c
··· 183 183 184 184 static int kmem_trace_init(struct trace_array *tr) 185 185 { 186 - int cpu; 187 186 kmemtrace_array = tr; 188 187 189 - for_each_cpu(cpu, cpu_possible_mask) 190 - tracing_reset(tr, cpu); 188 + tracing_reset_online_cpus(tr); 191 189 192 190 kmemtrace_start_probes(); 193 191
+110 -62
kernel/trace/ring_buffer.c
··· 218 218 219 219 static inline int rb_null_event(struct ring_buffer_event *event) 220 220 { 221 - return event->type_len == RINGBUF_TYPE_PADDING 222 - && event->time_delta == 0; 223 - } 224 - 225 - static inline int rb_discarded_event(struct ring_buffer_event *event) 226 - { 227 - return event->type_len == RINGBUF_TYPE_PADDING && event->time_delta; 221 + return event->type_len == RINGBUF_TYPE_PADDING && !event->time_delta; 228 222 } 229 223 230 224 static void rb_event_set_padding(struct ring_buffer_event *event) 231 225 { 226 + /* padding has a NULL time_delta */ 232 227 event->type_len = RINGBUF_TYPE_PADDING; 233 228 event->time_delta = 0; 234 229 } ··· 467 472 }; 468 473 469 474 /* buffer may be either ring_buffer or ring_buffer_per_cpu */ 470 - #define RB_WARN_ON(buffer, cond) \ 471 - ({ \ 472 - int _____ret = unlikely(cond); \ 473 - if (_____ret) { \ 474 - atomic_inc(&buffer->record_disabled); \ 475 - WARN_ON(1); \ 476 - } \ 477 - _____ret; \ 475 + #define RB_WARN_ON(b, cond) \ 476 + ({ \ 477 + int _____ret = unlikely(cond); \ 478 + if (_____ret) { \ 479 + if (__same_type(*(b), struct ring_buffer_per_cpu)) { \ 480 + struct ring_buffer_per_cpu *__b = \ 481 + (void *)b; \ 482 + atomic_inc(&__b->buffer->record_disabled); \ 483 + } else \ 484 + atomic_inc(&b->record_disabled); \ 485 + WARN_ON(1); \ 486 + } \ 487 + _____ret; \ 478 488 }) 479 489 480 490 /* Up this if you want to test the TIME_EXTENTS and normalization */ ··· 1778 1778 event->type_len = RINGBUF_TYPE_PADDING; 1779 1779 /* time delta must be non zero */ 1780 1780 event->time_delta = 1; 1781 - /* Account for this as an entry */ 1782 - local_inc(&tail_page->entries); 1783 - local_inc(&cpu_buffer->entries); 1784 1781 1785 1782 /* Set write to end of buffer */ 1786 1783 length = (tail + length) - BUF_PAGE_SIZE; ··· 2073 2076 } 2074 2077 2075 2078 static struct ring_buffer_event * 2076 - rb_reserve_next_event(struct ring_buffer_per_cpu *cpu_buffer, 2079 + rb_reserve_next_event(struct ring_buffer 
*buffer, 2080 + struct ring_buffer_per_cpu *cpu_buffer, 2077 2081 unsigned long length) 2078 2082 { 2079 2083 struct ring_buffer_event *event; ··· 2083 2085 int nr_loops = 0; 2084 2086 2085 2087 rb_start_commit(cpu_buffer); 2088 + 2089 + #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP 2090 + /* 2091 + * Due to the ability to swap a cpu buffer from a buffer 2092 + * it is possible it was swapped before we committed. 2093 + * (committing stops a swap). We check for it here and 2094 + * if it happened, we have to fail the write. 2095 + */ 2096 + barrier(); 2097 + if (unlikely(ACCESS_ONCE(cpu_buffer->buffer) != buffer)) { 2098 + local_dec(&cpu_buffer->committing); 2099 + local_dec(&cpu_buffer->commits); 2100 + return NULL; 2101 + } 2102 + #endif 2086 2103 2087 2104 length = rb_calculate_event_length(length); 2088 2105 again: ··· 2259 2246 if (length > BUF_MAX_DATA_SIZE) 2260 2247 goto out; 2261 2248 2262 - event = rb_reserve_next_event(cpu_buffer, length); 2249 + event = rb_reserve_next_event(buffer, cpu_buffer, length); 2263 2250 if (!event) 2264 2251 goto out; 2265 2252 ··· 2282 2269 } 2283 2270 EXPORT_SYMBOL_GPL(ring_buffer_lock_reserve); 2284 2271 2285 - static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer, 2272 + static void 2273 + rb_update_write_stamp(struct ring_buffer_per_cpu *cpu_buffer, 2286 2274 struct ring_buffer_event *event) 2287 2275 { 2288 - local_inc(&cpu_buffer->entries); 2289 - 2290 2276 /* 2291 2277 * The event first in the commit queue updates the 2292 2278 * time stamp. 
2293 2279 */ 2294 2280 if (rb_event_is_commit(cpu_buffer, event)) 2295 2281 cpu_buffer->write_stamp += event->time_delta; 2282 + } 2296 2283 2284 + static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer, 2285 + struct ring_buffer_event *event) 2286 + { 2287 + local_inc(&cpu_buffer->entries); 2288 + rb_update_write_stamp(cpu_buffer, event); 2297 2289 rb_end_commit(cpu_buffer); 2298 2290 } 2299 2291 ··· 2345 2327 event->time_delta = 1; 2346 2328 } 2347 2329 2348 - /** 2349 - * ring_buffer_event_discard - discard any event in the ring buffer 2350 - * @event: the event to discard 2351 - * 2352 - * Sometimes a event that is in the ring buffer needs to be ignored. 2353 - * This function lets the user discard an event in the ring buffer 2354 - * and then that event will not be read later. 2355 - * 2356 - * Note, it is up to the user to be careful with this, and protect 2357 - * against races. If the user discards an event that has been consumed 2358 - * it is possible that it could corrupt the ring buffer. 2330 + /* 2331 + * Decrement the entries to the page that an event is on. 2332 + * The event does not even need to exist, only the pointer 2333 + * to the page it is on. This may only be called before the commit 2334 + * takes place. 2359 2335 */ 2360 - void ring_buffer_event_discard(struct ring_buffer_event *event) 2336 + static inline void 2337 + rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, 2338 + struct ring_buffer_event *event) 2361 2339 { 2362 - rb_event_discard(event); 2340 + unsigned long addr = (unsigned long)event; 2341 + struct buffer_page *bpage = cpu_buffer->commit_page; 2342 + struct buffer_page *start; 2343 + 2344 + addr &= PAGE_MASK; 2345 + 2346 + /* Do the likely case first */ 2347 + if (likely(bpage->page == (void *)addr)) { 2348 + local_dec(&bpage->entries); 2349 + return; 2350 + } 2351 + 2352 + /* 2353 + * Because the commit page may be on the reader page we 2354 + * start with the next page and check the end loop there. 
2355 + */ 2356 + rb_inc_page(cpu_buffer, &bpage); 2357 + start = bpage; 2358 + do { 2359 + if (bpage->page == (void *)addr) { 2360 + local_dec(&bpage->entries); 2361 + return; 2362 + } 2363 + rb_inc_page(cpu_buffer, &bpage); 2364 + } while (bpage != start); 2365 + 2366 + /* commit not part of this buffer?? */ 2367 + RB_WARN_ON(cpu_buffer, 1); 2363 2368 } 2364 - EXPORT_SYMBOL_GPL(ring_buffer_event_discard); 2365 2369 2366 2370 /** 2367 2371 * ring_buffer_commit_discard - discard an event that has not been committed 2368 2372 * @buffer: the ring buffer 2369 2373 * @event: non committed event to discard 2370 2374 * 2371 - * This is similar to ring_buffer_event_discard but must only be 2372 - * performed on an event that has not been committed yet. The difference 2373 - * is that this will also try to free the event from the ring buffer 2375 + * Sometimes an event that is in the ring buffer needs to be ignored. 2376 + * This function lets the user discard an event in the ring buffer 2377 + * and then that event will not be read later. 2378 + * 2379 + * This function only works if it is called before the the item has been 2380 + * committed. It will try to free the event from the ring buffer 2374 2381 * if another event has not been added behind it. 2375 2382 * 2376 2383 * If another event has been added behind it, it will set the event ··· 2423 2380 */ 2424 2381 RB_WARN_ON(buffer, !local_read(&cpu_buffer->committing)); 2425 2382 2383 + rb_decrement_entry(cpu_buffer, event); 2426 2384 if (rb_try_to_discard(cpu_buffer, event)) 2427 2385 goto out; 2428 2386 2429 2387 /* 2430 2388 * The commit is still visible by the reader, so we 2431 - * must increment entries. 2389 + * must still update the timestamp. 
2432 2390 */ 2433 - local_inc(&cpu_buffer->entries); 2391 + rb_update_write_stamp(cpu_buffer, event); 2434 2392 out: 2435 2393 rb_end_commit(cpu_buffer); 2436 2394 ··· 2492 2448 if (length > BUF_MAX_DATA_SIZE) 2493 2449 goto out; 2494 2450 2495 - event = rb_reserve_next_event(cpu_buffer, length); 2451 + event = rb_reserve_next_event(buffer, cpu_buffer, length); 2496 2452 if (!event) 2497 2453 goto out; 2498 2454 ··· 2943 2899 2944 2900 event = rb_reader_event(cpu_buffer); 2945 2901 2946 - if (event->type_len <= RINGBUF_TYPE_DATA_TYPE_LEN_MAX 2947 - || rb_discarded_event(event)) 2902 + if (event->type_len <= RINGBUF_TYPE_DATA_TYPE_LEN_MAX) 2948 2903 cpu_buffer->read++; 2949 2904 2950 2905 rb_update_read_stamp(cpu_buffer, event); ··· 3175 3132 spin_unlock(&cpu_buffer->reader_lock); 3176 3133 local_irq_restore(flags); 3177 3134 3178 - if (event && event->type_len == RINGBUF_TYPE_PADDING) { 3179 - cpu_relax(); 3135 + if (event && event->type_len == RINGBUF_TYPE_PADDING) 3180 3136 goto again; 3181 - } 3182 3137 3183 3138 return event; 3184 3139 } ··· 3201 3160 event = rb_iter_peek(iter, ts); 3202 3161 spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); 3203 3162 3204 - if (event && event->type_len == RINGBUF_TYPE_PADDING) { 3205 - cpu_relax(); 3163 + if (event && event->type_len == RINGBUF_TYPE_PADDING) 3206 3164 goto again; 3207 - } 3208 3165 3209 3166 return event; 3210 3167 } ··· 3248 3209 out: 3249 3210 preempt_enable(); 3250 3211 3251 - if (event && event->type_len == RINGBUF_TYPE_PADDING) { 3252 - cpu_relax(); 3212 + if (event && event->type_len == RINGBUF_TYPE_PADDING) 3253 3213 goto again; 3254 - } 3255 3214 3256 3215 return event; 3257 3216 } ··· 3329 3292 struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer; 3330 3293 unsigned long flags; 3331 3294 3332 - again: 3333 3295 spin_lock_irqsave(&cpu_buffer->reader_lock, flags); 3296 + again: 3334 3297 event = rb_iter_peek(iter, ts); 3335 3298 if (!event) 3336 3299 goto out; 3337 3300 3301 + if 
(event->type_len == RINGBUF_TYPE_PADDING) 3302 + goto again; 3303 + 3338 3304 rb_advance_iter(iter); 3339 3305 out: 3340 3306 spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); 3341 - 3342 - if (event && event->type_len == RINGBUF_TYPE_PADDING) { 3343 - cpu_relax(); 3344 - goto again; 3345 - } 3346 3307 3347 3308 return event; 3348 3309 } ··· 3408 3373 3409 3374 spin_lock_irqsave(&cpu_buffer->reader_lock, flags); 3410 3375 3376 + if (RB_WARN_ON(cpu_buffer, local_read(&cpu_buffer->committing))) 3377 + goto out; 3378 + 3411 3379 __raw_spin_lock(&cpu_buffer->lock); 3412 3380 3413 3381 rb_reset_cpu(cpu_buffer); 3414 3382 3415 3383 __raw_spin_unlock(&cpu_buffer->lock); 3416 3384 3385 + out: 3417 3386 spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); 3418 3387 3419 3388 atomic_dec(&cpu_buffer->record_disabled); ··· 3500 3461 } 3501 3462 EXPORT_SYMBOL_GPL(ring_buffer_empty_cpu); 3502 3463 3464 + #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP 3503 3465 /** 3504 3466 * ring_buffer_swap_cpu - swap a CPU buffer between two ring buffers 3505 3467 * @buffer_a: One buffer to swap with ··· 3555 3515 atomic_inc(&cpu_buffer_a->record_disabled); 3556 3516 atomic_inc(&cpu_buffer_b->record_disabled); 3557 3517 3518 + ret = -EBUSY; 3519 + if (local_read(&cpu_buffer_a->committing)) 3520 + goto out_dec; 3521 + if (local_read(&cpu_buffer_b->committing)) 3522 + goto out_dec; 3523 + 3558 3524 buffer_a->buffers[cpu] = cpu_buffer_b; 3559 3525 buffer_b->buffers[cpu] = cpu_buffer_a; 3560 3526 3561 3527 cpu_buffer_b->buffer = buffer_a; 3562 3528 cpu_buffer_a->buffer = buffer_b; 3563 3529 3530 + ret = 0; 3531 + 3532 + out_dec: 3564 3533 atomic_dec(&cpu_buffer_a->record_disabled); 3565 3534 atomic_dec(&cpu_buffer_b->record_disabled); 3566 - 3567 - ret = 0; 3568 3535 out: 3569 3536 return ret; 3570 3537 } 3571 3538 EXPORT_SYMBOL_GPL(ring_buffer_swap_cpu); 3539 + #endif /* CONFIG_RING_BUFFER_ALLOW_SWAP */ 3572 3540 3573 3541 /** 3574 3542 * ring_buffer_alloc_read_page - allocate a page to read 
from buffer
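Editorial note: the new `rb_decrement_entry()` above checks the commit page first, then walks the circular page list once looking for the page that holds the event. The following is a hypothetical userspace model of that scan pattern only — the struct, the missing `PAGE_MASK` masking, and the return convention are stand-ins, not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the circular page walk in rb_decrement_entry().
 * Names only mirror the kernel's; this is an illustration. */
struct bpage {
	struct bpage *next;	/* circular list of buffer pages */
	void *page;		/* backing page address */
	int entries;		/* entries committed to this page */
};

/* Advance to the next page in the ring (kernel: rb_inc_page()). */
static void inc_page(struct bpage **b)
{
	*b = (*b)->next;
}

/* Decrement the entry count of the page containing addr.  The likely
 * candidate (the commit page) is checked first, then the ring is
 * walked exactly once.  Returns 0 on success, -1 if addr is on no
 * page (the kernel warns with RB_WARN_ON here). */
static int decrement_entry(struct bpage *commit, void *addr)
{
	struct bpage *b = commit, *start;

	if (b->page == addr) {		/* likely case: commit page */
		b->entries--;
		return 0;
	}
	inc_page(&b);			/* the commit page may be the reader
					 * page, so start at the next one */
	start = b;
	do {
		if (b->page == addr) {
			b->entries--;
			return 0;
		}
		inc_page(&b);
	} while (b != start);
	return -1;			/* event not in this buffer */
}
```

The single-pass do/while bound guarantees termination even when the address belongs to no page, which is why the kernel version can warn rather than loop forever.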
+254 -133
kernel/trace/trace.c
··· 43 43 44 44 #define TRACE_BUFFER_FLAGS (RB_FL_OVERWRITE) 45 45 46 - unsigned long __read_mostly tracing_max_latency; 47 - unsigned long __read_mostly tracing_thresh; 48 - 49 46 /* 50 47 * On boot up, the ring buffer is set to the minimum size, so that 51 48 * we do not waste memory on systems that are not using tracing. ··· 169 172 170 173 static DEFINE_PER_CPU(struct trace_array_cpu, global_trace_cpu); 171 174 172 - int filter_current_check_discard(struct ftrace_event_call *call, void *rec, 175 + int filter_current_check_discard(struct ring_buffer *buffer, 176 + struct ftrace_event_call *call, void *rec, 173 177 struct ring_buffer_event *event) 174 178 { 175 - return filter_check_discard(call, rec, global_trace.buffer, event); 179 + return filter_check_discard(call, rec, buffer, event); 176 180 } 177 181 EXPORT_SYMBOL_GPL(filter_current_check_discard); 178 182 ··· 264 266 TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO | TRACE_ITER_SLEEP_TIME | 265 267 TRACE_ITER_GRAPH_TIME; 266 268 269 + static int trace_stop_count; 270 + static DEFINE_SPINLOCK(tracing_start_lock); 271 + 267 272 /** 268 273 * trace_wake_up - wake up tasks waiting for trace input 269 274 * ··· 339 338 340 339 int trace_clock_id; 341 340 342 - /* 343 - * ftrace_max_lock is used to protect the swapping of buffers 344 - * when taking a max snapshot. The buffers themselves are 345 - * protected by per_cpu spinlocks. But the action of the swap 346 - * needs its own lock. 347 - * 348 - * This is defined as a raw_spinlock_t in order to help 349 - * with performance when lockdep debugging is enabled. 350 - */ 351 - static raw_spinlock_t ftrace_max_lock = 352 - (raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED; 353 - 354 - /* 355 - * Copy the new maximum trace into the separate maximum-trace 356 - * structure. 
(this way the maximum trace is permanently saved, 357 - * for later retrieval via /sys/kernel/debug/tracing/latency_trace) 358 - */ 359 - static void 360 - __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu) 361 - { 362 - struct trace_array_cpu *data = tr->data[cpu]; 363 - 364 - max_tr.cpu = cpu; 365 - max_tr.time_start = data->preempt_timestamp; 366 - 367 - data = max_tr.data[cpu]; 368 - data->saved_latency = tracing_max_latency; 369 - 370 - memcpy(data->comm, tsk->comm, TASK_COMM_LEN); 371 - data->pid = tsk->pid; 372 - data->uid = task_uid(tsk); 373 - data->nice = tsk->static_prio - 20 - MAX_RT_PRIO; 374 - data->policy = tsk->policy; 375 - data->rt_priority = tsk->rt_priority; 376 - 377 - /* record this tasks comm */ 378 - tracing_record_cmdline(tsk); 379 - } 380 - 381 341 ssize_t trace_seq_to_user(struct trace_seq *s, char __user *ubuf, size_t cnt) 382 342 { 383 343 int len; ··· 382 420 return cnt; 383 421 } 384 422 423 + /* 424 + * ftrace_max_lock is used to protect the swapping of buffers 425 + * when taking a max snapshot. The buffers themselves are 426 + * protected by per_cpu spinlocks. But the action of the swap 427 + * needs its own lock. 428 + * 429 + * This is defined as a raw_spinlock_t in order to help 430 + * with performance when lockdep debugging is enabled. 431 + * 432 + * It is also used in other places outside the update_max_tr 433 + * so it needs to be defined outside of the 434 + * CONFIG_TRACER_MAX_TRACE. 435 + */ 436 + static raw_spinlock_t ftrace_max_lock = 437 + (raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED; 438 + 439 + #ifdef CONFIG_TRACER_MAX_TRACE 440 + unsigned long __read_mostly tracing_max_latency; 441 + unsigned long __read_mostly tracing_thresh; 442 + 443 + /* 444 + * Copy the new maximum trace into the separate maximum-trace 445 + * structure. 
(this way the maximum trace is permanently saved, 446 + * for later retrieval via /sys/kernel/debug/tracing/latency_trace) 447 + */ 448 + static void 449 + __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu) 450 + { 451 + struct trace_array_cpu *data = tr->data[cpu]; 452 + struct trace_array_cpu *max_data = tr->data[cpu]; 453 + 454 + max_tr.cpu = cpu; 455 + max_tr.time_start = data->preempt_timestamp; 456 + 457 + max_data = max_tr.data[cpu]; 458 + max_data->saved_latency = tracing_max_latency; 459 + max_data->critical_start = data->critical_start; 460 + max_data->critical_end = data->critical_end; 461 + 462 + memcpy(data->comm, tsk->comm, TASK_COMM_LEN); 463 + max_data->pid = tsk->pid; 464 + max_data->uid = task_uid(tsk); 465 + max_data->nice = tsk->static_prio - 20 - MAX_RT_PRIO; 466 + max_data->policy = tsk->policy; 467 + max_data->rt_priority = tsk->rt_priority; 468 + 469 + /* record this tasks comm */ 470 + tracing_record_cmdline(tsk); 471 + } 472 + 385 473 /** 386 474 * update_max_tr - snapshot all trace buffers from global_trace to max_tr 387 475 * @tr: tracer ··· 446 434 { 447 435 struct ring_buffer *buf = tr->buffer; 448 436 437 + if (trace_stop_count) 438 + return; 439 + 449 440 WARN_ON_ONCE(!irqs_disabled()); 450 441 __raw_spin_lock(&ftrace_max_lock); 451 442 452 443 tr->buffer = max_tr.buffer; 453 444 max_tr.buffer = buf; 454 - 455 - ftrace_disable_cpu(); 456 - ring_buffer_reset(tr->buffer); 457 - ftrace_enable_cpu(); 458 445 459 446 __update_max_tr(tr, tsk, cpu); 460 447 __raw_spin_unlock(&ftrace_max_lock); ··· 472 461 { 473 462 int ret; 474 463 464 + if (trace_stop_count) 465 + return; 466 + 475 467 WARN_ON_ONCE(!irqs_disabled()); 476 468 __raw_spin_lock(&ftrace_max_lock); 477 469 478 470 ftrace_disable_cpu(); 479 471 480 - ring_buffer_reset(max_tr.buffer); 481 472 ret = ring_buffer_swap_cpu(max_tr.buffer, tr->buffer, cpu); 473 + 474 + if (ret == -EBUSY) { 475 + /* 476 + * We failed to swap the buffer due to a commit taking 477 + 
* place on this CPU. We fail to record, but we reset 478 + * the max trace buffer (no one writes directly to it) 479 + * and flag that it failed. 480 + */ 481 + trace_array_printk(&max_tr, _THIS_IP_, 482 + "Failed to swap buffers due to commit in progress\n"); 483 + } 482 484 483 485 ftrace_enable_cpu(); 484 486 485 - WARN_ON_ONCE(ret && ret != -EAGAIN); 487 + WARN_ON_ONCE(ret && ret != -EAGAIN && ret != -EBUSY); 486 488 487 489 __update_max_tr(tr, tsk, cpu); 488 490 __raw_spin_unlock(&ftrace_max_lock); 489 491 } 492 + #endif /* CONFIG_TRACER_MAX_TRACE */ 490 493 491 494 /** 492 495 * register_tracer - register a tracer with the ftrace system. ··· 557 532 if (type->selftest && !tracing_selftest_disabled) { 558 533 struct tracer *saved_tracer = current_trace; 559 534 struct trace_array *tr = &global_trace; 560 - int i; 561 535 562 536 /* 563 537 * Run a selftest on this tracer. ··· 565 541 * internal tracing to verify that everything is in order. 566 542 * If we fail, we do not register this tracer. 
567 543 */ 568 - for_each_tracing_cpu(i) 569 - tracing_reset(tr, i); 544 + tracing_reset_online_cpus(tr); 570 545 571 546 current_trace = type; 572 547 /* the test is responsible for initializing and enabling */ ··· 578 555 goto out; 579 556 } 580 557 /* Only reset on passing, to avoid touching corrupted buffers */ 581 - for_each_tracing_cpu(i) 582 - tracing_reset(tr, i); 558 + tracing_reset_online_cpus(tr); 583 559 584 560 printk(KERN_CONT "PASSED\n"); 585 561 } ··· 653 631 mutex_unlock(&trace_types_lock); 654 632 } 655 633 656 - void tracing_reset(struct trace_array *tr, int cpu) 634 + static void __tracing_reset(struct trace_array *tr, int cpu) 657 635 { 658 636 ftrace_disable_cpu(); 659 637 ring_buffer_reset_cpu(tr->buffer, cpu); 660 638 ftrace_enable_cpu(); 661 639 } 662 640 641 + void tracing_reset(struct trace_array *tr, int cpu) 642 + { 643 + struct ring_buffer *buffer = tr->buffer; 644 + 645 + ring_buffer_record_disable(buffer); 646 + 647 + /* Make sure all commits have finished */ 648 + synchronize_sched(); 649 + __tracing_reset(tr, cpu); 650 + 651 + ring_buffer_record_enable(buffer); 652 + } 653 + 663 654 void tracing_reset_online_cpus(struct trace_array *tr) 664 655 { 656 + struct ring_buffer *buffer = tr->buffer; 665 657 int cpu; 658 + 659 + ring_buffer_record_disable(buffer); 660 + 661 + /* Make sure all commits have finished */ 662 + synchronize_sched(); 666 663 667 664 tr->time_start = ftrace_now(tr->cpu); 668 665 669 666 for_each_online_cpu(cpu) 670 - tracing_reset(tr, cpu); 667 + __tracing_reset(tr, cpu); 668 + 669 + ring_buffer_record_enable(buffer); 671 670 } 672 671 673 672 void tracing_reset_current(int cpu) ··· 718 675 memset(&map_cmdline_to_pid, NO_CMDLINE_MAP, sizeof(map_cmdline_to_pid)); 719 676 cmdline_idx = 0; 720 677 } 721 - 722 - static int trace_stop_count; 723 - static DEFINE_SPINLOCK(tracing_start_lock); 724 678 725 679 /** 726 680 * ftrace_off_permanent - disable all ftrace code permanently ··· 899 859 } 900 860 
EXPORT_SYMBOL_GPL(tracing_generic_entry_update); 901 861 902 - struct ring_buffer_event *trace_buffer_lock_reserve(struct trace_array *tr, 903 - int type, 904 - unsigned long len, 905 - unsigned long flags, int pc) 862 + struct ring_buffer_event * 863 + trace_buffer_lock_reserve(struct ring_buffer *buffer, 864 + int type, 865 + unsigned long len, 866 + unsigned long flags, int pc) 906 867 { 907 868 struct ring_buffer_event *event; 908 869 909 - event = ring_buffer_lock_reserve(tr->buffer, len); 870 + event = ring_buffer_lock_reserve(buffer, len); 910 871 if (event != NULL) { 911 872 struct trace_entry *ent = ring_buffer_event_data(event); 912 873 ··· 918 877 return event; 919 878 } 920 879 921 - static inline void __trace_buffer_unlock_commit(struct trace_array *tr, 922 - struct ring_buffer_event *event, 923 - unsigned long flags, int pc, 924 - int wake) 880 + static inline void 881 + __trace_buffer_unlock_commit(struct ring_buffer *buffer, 882 + struct ring_buffer_event *event, 883 + unsigned long flags, int pc, 884 + int wake) 925 885 { 926 - ring_buffer_unlock_commit(tr->buffer, event); 886 + ring_buffer_unlock_commit(buffer, event); 927 887 928 - ftrace_trace_stack(tr, flags, 6, pc); 929 - ftrace_trace_userstack(tr, flags, pc); 888 + ftrace_trace_stack(buffer, flags, 6, pc); 889 + ftrace_trace_userstack(buffer, flags, pc); 930 890 931 891 if (wake) 932 892 trace_wake_up(); 933 893 } 934 894 935 - void trace_buffer_unlock_commit(struct trace_array *tr, 936 - struct ring_buffer_event *event, 937 - unsigned long flags, int pc) 895 + void trace_buffer_unlock_commit(struct ring_buffer *buffer, 896 + struct ring_buffer_event *event, 897 + unsigned long flags, int pc) 938 898 { 939 - __trace_buffer_unlock_commit(tr, event, flags, pc, 1); 899 + __trace_buffer_unlock_commit(buffer, event, flags, pc, 1); 940 900 } 941 901 942 902 struct ring_buffer_event * 943 - trace_current_buffer_lock_reserve(int type, unsigned long len, 903 + trace_current_buffer_lock_reserve(struct 
ring_buffer **current_rb, 904 + int type, unsigned long len, 944 905 unsigned long flags, int pc) 945 906 { 946 - return trace_buffer_lock_reserve(&global_trace, 907 + *current_rb = global_trace.buffer; 908 + return trace_buffer_lock_reserve(*current_rb, 947 909 type, len, flags, pc); 948 910 } 949 911 EXPORT_SYMBOL_GPL(trace_current_buffer_lock_reserve); 950 912 951 - void trace_current_buffer_unlock_commit(struct ring_buffer_event *event, 913 + void trace_current_buffer_unlock_commit(struct ring_buffer *buffer, 914 + struct ring_buffer_event *event, 952 915 unsigned long flags, int pc) 953 916 { 954 - __trace_buffer_unlock_commit(&global_trace, event, flags, pc, 1); 917 + __trace_buffer_unlock_commit(buffer, event, flags, pc, 1); 955 918 } 956 919 EXPORT_SYMBOL_GPL(trace_current_buffer_unlock_commit); 957 920 958 - void trace_nowake_buffer_unlock_commit(struct ring_buffer_event *event, 959 - unsigned long flags, int pc) 921 + void trace_nowake_buffer_unlock_commit(struct ring_buffer *buffer, 922 + struct ring_buffer_event *event, 923 + unsigned long flags, int pc) 960 924 { 961 - __trace_buffer_unlock_commit(&global_trace, event, flags, pc, 0); 925 + __trace_buffer_unlock_commit(buffer, event, flags, pc, 0); 962 926 } 963 927 EXPORT_SYMBOL_GPL(trace_nowake_buffer_unlock_commit); 964 928 965 - void trace_current_buffer_discard_commit(struct ring_buffer_event *event) 929 + void trace_current_buffer_discard_commit(struct ring_buffer *buffer, 930 + struct ring_buffer_event *event) 966 931 { 967 - ring_buffer_discard_commit(global_trace.buffer, event); 932 + ring_buffer_discard_commit(buffer, event); 968 933 } 969 934 EXPORT_SYMBOL_GPL(trace_current_buffer_discard_commit); 970 935 ··· 980 933 int pc) 981 934 { 982 935 struct ftrace_event_call *call = &event_function; 936 + struct ring_buffer *buffer = tr->buffer; 983 937 struct ring_buffer_event *event; 984 938 struct ftrace_entry *entry; 985 939 ··· 988 940 if 
(unlikely(local_read(&__get_cpu_var(ftrace_cpu_disabled)))) 989 941 return; 990 942 991 - event = trace_buffer_lock_reserve(tr, TRACE_FN, sizeof(*entry), 943 + event = trace_buffer_lock_reserve(buffer, TRACE_FN, sizeof(*entry), 992 944 flags, pc); 993 945 if (!event) 994 946 return; ··· 996 948 entry->ip = ip; 997 949 entry->parent_ip = parent_ip; 998 950 999 - if (!filter_check_discard(call, entry, tr->buffer, event)) 1000 - ring_buffer_unlock_commit(tr->buffer, event); 951 + if (!filter_check_discard(call, entry, buffer, event)) 952 + ring_buffer_unlock_commit(buffer, event); 1001 953 } 1002 954 1003 955 void ··· 1010 962 } 1011 963 1012 964 #ifdef CONFIG_STACKTRACE 1013 - static void __ftrace_trace_stack(struct trace_array *tr, 965 + static void __ftrace_trace_stack(struct ring_buffer *buffer, 1014 966 unsigned long flags, 1015 967 int skip, int pc) 1016 968 { ··· 1019 971 struct stack_entry *entry; 1020 972 struct stack_trace trace; 1021 973 1022 - event = trace_buffer_lock_reserve(tr, TRACE_STACK, 974 + event = trace_buffer_lock_reserve(buffer, TRACE_STACK, 1023 975 sizeof(*entry), flags, pc); 1024 976 if (!event) 1025 977 return; ··· 1032 984 trace.entries = entry->caller; 1033 985 1034 986 save_stack_trace(&trace); 1035 - if (!filter_check_discard(call, entry, tr->buffer, event)) 1036 - ring_buffer_unlock_commit(tr->buffer, event); 987 + if (!filter_check_discard(call, entry, buffer, event)) 988 + ring_buffer_unlock_commit(buffer, event); 1037 989 } 1038 990 1039 - void ftrace_trace_stack(struct trace_array *tr, unsigned long flags, int skip, 1040 - int pc) 991 + void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags, 992 + int skip, int pc) 1041 993 { 1042 994 if (!(trace_flags & TRACE_ITER_STACKTRACE)) 1043 995 return; 1044 996 1045 - __ftrace_trace_stack(tr, flags, skip, pc); 997 + __ftrace_trace_stack(buffer, flags, skip, pc); 1046 998 } 1047 999 1048 1000 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip, 1049 
1001 int pc) 1050 1002 { 1051 - __ftrace_trace_stack(tr, flags, skip, pc); 1003 + __ftrace_trace_stack(tr->buffer, flags, skip, pc); 1052 1004 } 1053 1005 1054 - void ftrace_trace_userstack(struct trace_array *tr, unsigned long flags, int pc) 1006 + void 1007 + ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc) 1055 1008 { 1056 1009 struct ftrace_event_call *call = &event_user_stack; 1057 1010 struct ring_buffer_event *event; ··· 1062 1013 if (!(trace_flags & TRACE_ITER_USERSTACKTRACE)) 1063 1014 return; 1064 1015 1065 - event = trace_buffer_lock_reserve(tr, TRACE_USER_STACK, 1016 + event = trace_buffer_lock_reserve(buffer, TRACE_USER_STACK, 1066 1017 sizeof(*entry), flags, pc); 1067 1018 if (!event) 1068 1019 return; ··· 1076 1027 trace.entries = entry->caller; 1077 1028 1078 1029 save_stack_trace_user(&trace); 1079 - if (!filter_check_discard(call, entry, tr->buffer, event)) 1080 - ring_buffer_unlock_commit(tr->buffer, event); 1030 + if (!filter_check_discard(call, entry, buffer, event)) 1031 + ring_buffer_unlock_commit(buffer, event); 1081 1032 } 1082 1033 1083 1034 #ifdef UNUSED ··· 1096 1047 { 1097 1048 struct ring_buffer_event *event; 1098 1049 struct trace_array *tr = __tr; 1050 + struct ring_buffer *buffer = tr->buffer; 1099 1051 struct special_entry *entry; 1100 1052 1101 - event = trace_buffer_lock_reserve(tr, TRACE_SPECIAL, 1053 + event = trace_buffer_lock_reserve(buffer, TRACE_SPECIAL, 1102 1054 sizeof(*entry), 0, pc); 1103 1055 if (!event) 1104 1056 return; ··· 1107 1057 entry->arg1 = arg1; 1108 1058 entry->arg2 = arg2; 1109 1059 entry->arg3 = arg3; 1110 - trace_buffer_unlock_commit(tr, event, 0, pc); 1060 + trace_buffer_unlock_commit(buffer, event, 0, pc); 1111 1061 } 1112 1062 1113 1063 void ··· 1153 1103 1154 1104 struct ftrace_event_call *call = &event_bprint; 1155 1105 struct ring_buffer_event *event; 1106 + struct ring_buffer *buffer; 1156 1107 struct trace_array *tr = &global_trace; 1157 1108 struct trace_array_cpu 
*data; 1158 1109 struct bprint_entry *entry; ··· 1186 1135 goto out_unlock; 1187 1136 1188 1137 size = sizeof(*entry) + sizeof(u32) * len; 1189 - event = trace_buffer_lock_reserve(tr, TRACE_BPRINT, size, flags, pc); 1138 + buffer = tr->buffer; 1139 + event = trace_buffer_lock_reserve(buffer, TRACE_BPRINT, size, 1140 + flags, pc); 1190 1141 if (!event) 1191 1142 goto out_unlock; 1192 1143 entry = ring_buffer_event_data(event); ··· 1196 1143 entry->fmt = fmt; 1197 1144 1198 1145 memcpy(entry->buf, trace_buf, sizeof(u32) * len); 1199 - if (!filter_check_discard(call, entry, tr->buffer, event)) 1200 - ring_buffer_unlock_commit(tr->buffer, event); 1146 + if (!filter_check_discard(call, entry, buffer, event)) 1147 + ring_buffer_unlock_commit(buffer, event); 1201 1148 1202 1149 out_unlock: 1203 1150 __raw_spin_unlock(&trace_buf_lock); ··· 1212 1159 } 1213 1160 EXPORT_SYMBOL_GPL(trace_vbprintk); 1214 1161 1215 - int trace_vprintk(unsigned long ip, const char *fmt, va_list args) 1162 + int trace_array_printk(struct trace_array *tr, 1163 + unsigned long ip, const char *fmt, ...) 
1164 + { 1165 + int ret; 1166 + va_list ap; 1167 + 1168 + if (!(trace_flags & TRACE_ITER_PRINTK)) 1169 + return 0; 1170 + 1171 + va_start(ap, fmt); 1172 + ret = trace_array_vprintk(tr, ip, fmt, ap); 1173 + va_end(ap); 1174 + return ret; 1175 + } 1176 + 1177 + int trace_array_vprintk(struct trace_array *tr, 1178 + unsigned long ip, const char *fmt, va_list args) 1216 1179 { 1217 1180 static raw_spinlock_t trace_buf_lock = __RAW_SPIN_LOCK_UNLOCKED; 1218 1181 static char trace_buf[TRACE_BUF_SIZE]; 1219 1182 1220 1183 struct ftrace_event_call *call = &event_print; 1221 1184 struct ring_buffer_event *event; 1222 - struct trace_array *tr = &global_trace; 1185 + struct ring_buffer *buffer; 1223 1186 struct trace_array_cpu *data; 1224 1187 int cpu, len = 0, size, pc; 1225 1188 struct print_entry *entry; ··· 1263 1194 trace_buf[len] = 0; 1264 1195 1265 1196 size = sizeof(*entry) + len + 1; 1266 - event = trace_buffer_lock_reserve(tr, TRACE_PRINT, size, irq_flags, pc); 1197 + buffer = tr->buffer; 1198 + event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, size, 1199 + irq_flags, pc); 1267 1200 if (!event) 1268 1201 goto out_unlock; 1269 1202 entry = ring_buffer_event_data(event); ··· 1273 1202 1274 1203 memcpy(&entry->buf, trace_buf, len); 1275 1204 entry->buf[len] = 0; 1276 - if (!filter_check_discard(call, entry, tr->buffer, event)) 1277 - ring_buffer_unlock_commit(tr->buffer, event); 1205 + if (!filter_check_discard(call, entry, buffer, event)) 1206 + ring_buffer_unlock_commit(buffer, event); 1278 1207 1279 1208 out_unlock: 1280 1209 __raw_spin_unlock(&trace_buf_lock); ··· 1285 1214 preempt_enable_notrace(); 1286 1215 1287 1216 return len; 1217 + } 1218 + 1219 + int trace_vprintk(unsigned long ip, const char *fmt, va_list args) 1220 + { 1221 + return trace_array_printk(&global_trace, ip, fmt, args); 1288 1222 } 1289 1223 EXPORT_SYMBOL_GPL(trace_vprintk); 1290 1224 ··· 1430 1354 return ent; 1431 1355 } 1432 1356 1357 + static void tracing_iter_reset(struct trace_iterator 
*iter, int cpu) 1358 + { 1359 + struct trace_array *tr = iter->tr; 1360 + struct ring_buffer_event *event; 1361 + struct ring_buffer_iter *buf_iter; 1362 + unsigned long entries = 0; 1363 + u64 ts; 1364 + 1365 + tr->data[cpu]->skipped_entries = 0; 1366 + 1367 + if (!iter->buffer_iter[cpu]) 1368 + return; 1369 + 1370 + buf_iter = iter->buffer_iter[cpu]; 1371 + ring_buffer_iter_reset(buf_iter); 1372 + 1373 + /* 1374 + * We could have the case with the max latency tracers 1375 + * that a reset never took place on a cpu. This is evident 1376 + * by the timestamp being before the start of the buffer. 1377 + */ 1378 + while ((event = ring_buffer_iter_peek(buf_iter, &ts))) { 1379 + if (ts >= iter->tr->time_start) 1380 + break; 1381 + entries++; 1382 + ring_buffer_read(buf_iter, NULL); 1383 + } 1384 + 1385 + tr->data[cpu]->skipped_entries = entries; 1386 + } 1387 + 1433 1388 /* 1434 1389 * No necessary locking here. The worst thing which can 1435 1390 * happen is loosing events consumed at the same time ··· 1499 1392 1500 1393 if (cpu_file == TRACE_PIPE_ALL_CPU) { 1501 1394 for_each_tracing_cpu(cpu) 1502 - ring_buffer_iter_reset(iter->buffer_iter[cpu]); 1395 + tracing_iter_reset(iter, cpu); 1503 1396 } else 1504 - ring_buffer_iter_reset(iter->buffer_iter[cpu_file]); 1505 - 1397 + tracing_iter_reset(iter, cpu_file); 1506 1398 1507 1399 ftrace_enable_cpu(); 1508 1400 ··· 1550 1444 struct trace_array *tr = iter->tr; 1551 1445 struct trace_array_cpu *data = tr->data[tr->cpu]; 1552 1446 struct tracer *type = current_trace; 1553 - unsigned long total; 1554 - unsigned long entries; 1447 + unsigned long entries = 0; 1448 + unsigned long total = 0; 1449 + unsigned long count; 1555 1450 const char *name = "preemption"; 1451 + int cpu; 1556 1452 1557 1453 if (type) 1558 1454 name = type->name; 1559 1455 1560 - entries = ring_buffer_entries(iter->tr->buffer); 1561 - total = entries + 1562 - ring_buffer_overruns(iter->tr->buffer); 1456 + 1457 + for_each_tracing_cpu(cpu) { 1458 + count 
= ring_buffer_entries_cpu(tr->buffer, cpu); 1459 + /* 1460 + * If this buffer has skipped entries, then we hold all 1461 + * entries for the trace and we need to ignore the 1462 + * ones before the time stamp. 1463 + */ 1464 + if (tr->data[cpu]->skipped_entries) { 1465 + count -= tr->data[cpu]->skipped_entries; 1466 + /* total is the same as the entries */ 1467 + total += count; 1468 + } else 1469 + total += count + 1470 + ring_buffer_overrun_cpu(tr->buffer, cpu); 1471 + entries += count; 1472 + } 1563 1473 1564 1474 seq_printf(m, "# %s latency trace v1.1.5 on %s\n", 1565 1475 name, UTS_RELEASE); ··· 1617 1495 seq_puts(m, "\n# => ended at: "); 1618 1496 seq_print_ip_sym(&iter->seq, data->critical_end, sym_flags); 1619 1497 trace_print_seq(m, &iter->seq); 1620 - seq_puts(m, "#\n"); 1498 + seq_puts(m, "\n#\n"); 1621 1499 } 1622 1500 1623 1501 seq_puts(m, "#\n"); ··· 1634 1512 return; 1635 1513 1636 1514 if (cpumask_test_cpu(iter->cpu, iter->started)) 1515 + return; 1516 + 1517 + if (iter->tr->data[iter->cpu]->skipped_entries) 1637 1518 return; 1638 1519 1639 1520 cpumask_set_cpu(iter->cpu, iter->started); ··· 1901 1776 if (ring_buffer_overruns(iter->tr->buffer)) 1902 1777 iter->iter_flags |= TRACE_FILE_ANNOTATE; 1903 1778 1779 + /* stop the trace while dumping */ 1780 + tracing_stop(); 1781 + 1904 1782 if (iter->cpu_file == TRACE_PIPE_ALL_CPU) { 1905 1783 for_each_tracing_cpu(cpu) { 1906 1784 1907 1785 iter->buffer_iter[cpu] = 1908 1786 ring_buffer_read_start(iter->tr->buffer, cpu); 1787 + tracing_iter_reset(iter, cpu); 1909 1788 } 1910 1789 } else { 1911 1790 cpu = iter->cpu_file; 1912 1791 iter->buffer_iter[cpu] = 1913 1792 ring_buffer_read_start(iter->tr->buffer, cpu); 1793 + tracing_iter_reset(iter, cpu); 1914 1794 } 1915 1795 1916 - /* TODO stop tracer */ 1917 1796 ret = seq_open(file, &tracer_seq_ops); 1918 1797 if (ret < 0) { 1919 1798 fail_ret = ERR_PTR(ret); ··· 1926 1797 1927 1798 m = file->private_data; 1928 1799 m->private = iter; 1929 - 1930 - /* stop 
the trace while dumping */ 1931 - tracing_stop(); 1932 1800 1933 1801 mutex_unlock(&trace_types_lock); 1934 1802 ··· 1937 1811 ring_buffer_read_finish(iter->buffer_iter[cpu]); 1938 1812 } 1939 1813 free_cpumask_var(iter->started); 1814 + tracing_start(); 1940 1815 fail: 1941 1816 mutex_unlock(&trace_types_lock); 1942 1817 kfree(iter->trace); ··· 3901 3774 if (ret < 0) 3902 3775 return ret; 3903 3776 3904 - switch (val) { 3905 - case 0: 3906 - trace_flags &= ~(1 << index); 3907 - break; 3908 - case 1: 3909 - trace_flags |= 1 << index; 3910 - break; 3911 - 3912 - default: 3777 + if (val != 0 && val != 1) 3913 3778 return -EINVAL; 3914 - } 3779 + set_tracer_flags(1 << index, val); 3915 3780 3916 3781 *ppos += cnt; 3917 3782 ··· 4071 3952 trace_create_file("current_tracer", 0644, d_tracer, 4072 3953 &global_trace, &set_tracer_fops); 4073 3954 3955 + #ifdef CONFIG_TRACER_MAX_TRACE 4074 3956 trace_create_file("tracing_max_latency", 0644, d_tracer, 4075 3957 &tracing_max_latency, &tracing_max_lat_fops); 4076 3958 4077 3959 trace_create_file("tracing_thresh", 0644, d_tracer, 4078 3960 &tracing_thresh, &tracing_max_lat_fops); 3961 + #endif 4079 3962 4080 3963 trace_create_file("README", 0444, d_tracer, 4081 3964 NULL, &tracing_readme_fops);
+17 -11
kernel/trace/trace.h
··· 258 258 atomic_t disabled; 259 259 void *buffer_page; /* ring buffer spare */ 260 260 261 - /* these fields get copied into max-trace: */ 262 - unsigned long trace_idx; 263 - unsigned long overrun; 264 261 unsigned long saved_latency; 265 262 unsigned long critical_start; 266 263 unsigned long critical_end; ··· 265 268 unsigned long nice; 266 269 unsigned long policy; 267 270 unsigned long rt_priority; 271 + unsigned long skipped_entries; 268 272 cycle_t preempt_timestamp; 269 273 pid_t pid; 270 274 uid_t uid; ··· 439 441 440 442 struct ring_buffer_event; 441 443 442 - struct ring_buffer_event *trace_buffer_lock_reserve(struct trace_array *tr, 443 - int type, 444 - unsigned long len, 445 - unsigned long flags, 446 - int pc); 447 - void trace_buffer_unlock_commit(struct trace_array *tr, 444 + struct ring_buffer_event * 445 + trace_buffer_lock_reserve(struct ring_buffer *buffer, 446 + int type, 447 + unsigned long len, 448 + unsigned long flags, 449 + int pc); 450 + void trace_buffer_unlock_commit(struct ring_buffer *buffer, 448 451 struct ring_buffer_event *event, 449 452 unsigned long flags, int pc); 450 453 ··· 496 497 497 498 extern unsigned long nsecs_to_usecs(unsigned long nsecs); 498 499 500 + #ifdef CONFIG_TRACER_MAX_TRACE 499 501 extern unsigned long tracing_max_latency; 500 502 extern unsigned long tracing_thresh; 501 503 502 504 void update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu); 503 505 void update_max_tr_single(struct trace_array *tr, 504 506 struct task_struct *tsk, int cpu); 507 + #endif /* CONFIG_TRACER_MAX_TRACE */ 505 508 506 509 #ifdef CONFIG_STACKTRACE 507 - void ftrace_trace_stack(struct trace_array *tr, unsigned long flags, 510 + void ftrace_trace_stack(struct ring_buffer *buffer, unsigned long flags, 508 511 int skip, int pc); 509 512 510 - void ftrace_trace_userstack(struct trace_array *tr, unsigned long flags, 513 + void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, 511 514 int pc); 512 
515 513 516 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip, ··· 590 589 trace_vbprintk(unsigned long ip, const char *fmt, va_list args); 591 590 extern int 592 591 trace_vprintk(unsigned long ip, const char *fmt, va_list args); 592 + extern int 593 + trace_array_vprintk(struct trace_array *tr, 594 + unsigned long ip, const char *fmt, va_list args); 595 + int trace_array_printk(struct trace_array *tr, 596 + unsigned long ip, const char *fmt, ...); 593 597 594 598 extern unsigned long trace_flags; 595 599
+9 -7
kernel/trace/trace_boot.c
··· 41 41 42 42 static int boot_trace_init(struct trace_array *tr) 43 43 { 44 - int cpu; 45 44 boot_trace = tr; 46 45 47 46 if (!tr) 48 47 return 0; 49 48 50 - for_each_cpu(cpu, cpu_possible_mask) 51 - tracing_reset(tr, cpu); 49 + tracing_reset_online_cpus(tr); 52 50 53 51 tracing_sched_switch_assign_trace(tr); 54 52 return 0; ··· 130 132 void trace_boot_call(struct boot_trace_call *bt, initcall_t fn) 131 133 { 132 134 struct ring_buffer_event *event; 135 + struct ring_buffer *buffer; 133 136 struct trace_boot_call *entry; 134 137 struct trace_array *tr = boot_trace; 135 138 ··· 143 144 sprint_symbol(bt->func, (unsigned long)fn); 144 145 preempt_disable(); 145 146 146 - event = trace_buffer_lock_reserve(tr, TRACE_BOOT_CALL, 147 + buffer = tr->buffer; 148 + event = trace_buffer_lock_reserve(buffer, TRACE_BOOT_CALL, 147 149 sizeof(*entry), 0, 0); 148 150 if (!event) 149 151 goto out; 150 152 entry = ring_buffer_event_data(event); 151 153 entry->boot_call = *bt; 152 - trace_buffer_unlock_commit(tr, event, 0, 0); 154 + trace_buffer_unlock_commit(buffer, event, 0, 0); 153 155 out: 154 156 preempt_enable(); 155 157 } ··· 158 158 void trace_boot_ret(struct boot_trace_ret *bt, initcall_t fn) 159 159 { 160 160 struct ring_buffer_event *event; 161 + struct ring_buffer *buffer; 161 162 struct trace_boot_ret *entry; 162 163 struct trace_array *tr = boot_trace; 163 164 ··· 168 167 sprint_symbol(bt->func, (unsigned long)fn); 169 168 preempt_disable(); 170 169 171 - event = trace_buffer_lock_reserve(tr, TRACE_BOOT_RET, 170 + buffer = tr->buffer; 171 + event = trace_buffer_lock_reserve(buffer, TRACE_BOOT_RET, 172 172 sizeof(*entry), 0, 0); 173 173 if (!event) 174 174 goto out; 175 175 entry = ring_buffer_event_data(event); 176 176 entry->boot_ret = *bt; 177 - trace_buffer_unlock_commit(tr, event, 0, 0); 177 + trace_buffer_unlock_commit(buffer, event, 0, 0); 178 178 out: 179 179 preempt_enable(); 180 180 }
+4 -2
kernel/trace/trace_events.c
··· 1485 1485 function_test_events_call(unsigned long ip, unsigned long parent_ip) 1486 1486 { 1487 1487 struct ring_buffer_event *event; 1488 + struct ring_buffer *buffer; 1488 1489 struct ftrace_entry *entry; 1489 1490 unsigned long flags; 1490 1491 long disabled; ··· 1503 1502 1504 1503 local_save_flags(flags); 1505 1504 1506 - event = trace_current_buffer_lock_reserve(TRACE_FN, sizeof(*entry), 1505 + event = trace_current_buffer_lock_reserve(&buffer, 1506 + TRACE_FN, sizeof(*entry), 1507 1507 flags, pc); 1508 1508 if (!event) 1509 1509 goto out; ··· 1512 1510 entry->ip = ip; 1513 1511 entry->parent_ip = parent_ip; 1514 1512 1515 - trace_nowake_buffer_unlock_commit(event, flags, pc); 1513 + trace_nowake_buffer_unlock_commit(buffer, event, flags, pc); 1516 1514 1517 1515 out: 1518 1516 atomic_dec(&per_cpu(test_event_disable, cpu));
+42 -9
kernel/trace/trace_events_filter.c
··· 309 309 struct event_filter *filter = call->filter; 310 310 311 311 mutex_lock(&event_mutex); 312 - if (filter->filter_string) 312 + if (filter && filter->filter_string) 313 313 trace_seq_printf(s, "%s\n", filter->filter_string); 314 314 else 315 315 trace_seq_printf(s, "none\n"); ··· 322 322 struct event_filter *filter = system->filter; 323 323 324 324 mutex_lock(&event_mutex); 325 - if (filter->filter_string) 325 + if (filter && filter->filter_string) 326 326 trace_seq_printf(s, "%s\n", filter->filter_string); 327 327 else 328 328 trace_seq_printf(s, "none\n"); ··· 390 390 struct event_filter *filter = call->filter; 391 391 int i; 392 392 393 + if (!filter) 394 + return; 395 + 393 396 for (i = 0; i < MAX_FILTER_PRED; i++) { 394 397 if (filter->preds[i]) 395 398 filter_free_pred(filter->preds[i]); ··· 403 400 call->filter = NULL; 404 401 } 405 402 406 - int init_preds(struct ftrace_event_call *call) 403 + static int init_preds(struct ftrace_event_call *call) 407 404 { 408 405 struct event_filter *filter; 409 406 struct filter_pred *pred; 410 407 int i; 411 408 409 + if (call->filter) 410 + return 0; 411 + 412 412 filter = call->filter = kzalloc(sizeof(*filter), GFP_KERNEL); 413 413 if (!call->filter) 414 414 return -ENOMEM; 415 415 416 - call->filter_active = 0; 417 416 filter->n_preds = 0; 418 417 419 418 filter->preds = kzalloc(MAX_FILTER_PRED * sizeof(pred), GFP_KERNEL); ··· 437 432 438 433 return -ENOMEM; 439 434 } 440 - EXPORT_SYMBOL_GPL(init_preds); 435 + 436 + static int init_subsystem_preds(struct event_subsystem *system) 437 + { 438 + struct ftrace_event_call *call; 439 + int err; 440 + 441 + list_for_each_entry(call, &ftrace_events, list) { 442 + if (!call->define_fields) 443 + continue; 444 + 445 + if (strcmp(call->system, system->name) != 0) 446 + continue; 447 + 448 + err = init_preds(call); 449 + if (err) 450 + return err; 451 + } 452 + 453 + return 0; 454 + } 441 455 442 456 enum { 443 457 FILTER_DISABLE_ALL, ··· 473 449 if 
(!call->define_fields) 474 450 continue; 475 451 452 + if (strcmp(call->system, system->name) != 0) 453 + continue; 454 + 476 455 if (flag == FILTER_INIT_NO_RESET) { 477 456 call->filter->no_reset = false; 478 457 continue; ··· 484 457 if (flag == FILTER_SKIP_NO_RESET && call->filter->no_reset) 485 458 continue; 486 459 487 - if (!strcmp(call->system, system->name)) { 488 - filter_disable_preds(call); 489 - remove_filter_string(call->filter); 490 - } 460 + filter_disable_preds(call); 461 + remove_filter_string(call->filter); 491 462 } 492 463 } 493 464 ··· 1119 1094 1120 1095 mutex_lock(&event_mutex); 1121 1096 1097 + err = init_preds(call); 1098 + if (err) 1099 + goto out_unlock; 1100 + 1122 1101 if (!strcmp(strstrip(filter_string), "0")) { 1123 1102 filter_disable_preds(call); 1124 1103 remove_filter_string(call->filter); ··· 1167 1138 struct filter_parse_state *ps; 1168 1139 1169 1140 mutex_lock(&event_mutex); 1141 + 1142 + err = init_subsystem_preds(system); 1143 + if (err) 1144 + goto out_unlock; 1170 1145 1171 1146 if (!strcmp(strstrip(filter_string), "0")) { 1172 1147 filter_free_subsystem_preds(system, FILTER_DISABLE_ALL);
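Making `init_preds()` static and guarding it with `if (call->filter) return 0;` turns it into an idempotent initializer: both the per-event and the new per-subsystem path can call it unconditionally. A minimal sketch of that guard pattern, with illustrative names rather than the tracing structures:

```c
#include <stdlib.h>

struct filter { int n_preds; };
struct event  { struct filter *filter; };

/* Idempotent init: safe to call from multiple code paths. */
static int init_filter(struct event *ev)
{
	if (ev->filter)		/* already initialized: nothing to do */
		return 0;

	ev->filter = calloc(1, sizeof(*ev->filter));
	return ev->filter ? 0 : -1;
}
```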
+2 -2
kernel/trace/trace_export.c
··· 120 120 static int ftrace_raw_init_event(struct ftrace_event_call *event_call) 121 121 { 122 122 INIT_LIST_HEAD(&event_call->fields); 123 - init_preds(event_call); 123 + 124 124 return 0; 125 125 } 126 126 ··· 137 137 .raw_init = ftrace_raw_init_event, \ 138 138 .show_format = ftrace_format_##call, \ 139 139 .define_fields = ftrace_define_fields_##call, \ 140 - }; 140 + }; \ 141 141 142 142 #undef TRACE_EVENT_FORMAT_NOFILTER 143 143 #define TRACE_EVENT_FORMAT_NOFILTER(call, proto, args, fmt, tstruct, \
+8 -6
kernel/trace/trace_functions_graph.c
··· 173 173 { 174 174 struct ftrace_event_call *call = &event_funcgraph_entry; 175 175 struct ring_buffer_event *event; 176 + struct ring_buffer *buffer = tr->buffer; 176 177 struct ftrace_graph_ent_entry *entry; 177 178 178 179 if (unlikely(local_read(&__get_cpu_var(ftrace_cpu_disabled)))) 179 180 return 0; 180 181 181 - event = trace_buffer_lock_reserve(tr, TRACE_GRAPH_ENT, 182 + event = trace_buffer_lock_reserve(buffer, TRACE_GRAPH_ENT, 182 183 sizeof(*entry), flags, pc); 183 184 if (!event) 184 185 return 0; 185 186 entry = ring_buffer_event_data(event); 186 187 entry->graph_ent = *trace; 187 - if (!filter_current_check_discard(call, entry, event)) 188 - ring_buffer_unlock_commit(tr->buffer, event); 188 + if (!filter_current_check_discard(buffer, call, entry, event)) 189 + ring_buffer_unlock_commit(buffer, event); 189 190 190 191 return 1; 191 192 } ··· 237 236 { 238 237 struct ftrace_event_call *call = &event_funcgraph_exit; 239 238 struct ring_buffer_event *event; 239 + struct ring_buffer *buffer = tr->buffer; 240 240 struct ftrace_graph_ret_entry *entry; 241 241 242 242 if (unlikely(local_read(&__get_cpu_var(ftrace_cpu_disabled)))) 243 243 return; 244 244 245 - event = trace_buffer_lock_reserve(tr, TRACE_GRAPH_RET, 245 + event = trace_buffer_lock_reserve(buffer, TRACE_GRAPH_RET, 246 246 sizeof(*entry), flags, pc); 247 247 if (!event) 248 248 return; 249 249 entry = ring_buffer_event_data(event); 250 250 entry->ret = *trace; 251 - if (!filter_current_check_discard(call, entry, event)) 252 - ring_buffer_unlock_commit(tr->buffer, event); 251 + if (!filter_current_check_discard(buffer, call, entry, event)) 252 + ring_buffer_unlock_commit(buffer, event); 253 253 } 254 254 255 255 void trace_graph_return(struct ftrace_graph_ret *trace)
+1 -2
kernel/trace/trace_irqsoff.c
··· 178 178 out: 179 179 data->critical_sequence = max_sequence; 180 180 data->preempt_timestamp = ftrace_now(cpu); 181 - tracing_reset(tr, cpu); 182 181 trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc); 183 182 } 184 183 ··· 207 208 data->critical_sequence = max_sequence; 208 209 data->preempt_timestamp = ftrace_now(cpu); 209 210 data->critical_start = parent_ip ? : ip; 210 - tracing_reset(tr, cpu); 211 211 212 212 local_save_flags(flags); 213 213 ··· 377 379 irqsoff_trace = tr; 378 380 /* make sure that the tracer is visible */ 379 381 smp_wmb(); 382 + tracing_reset_online_cpus(tr); 380 383 start_irqsoff_tracer(tr); 381 384 } 382 385
+9 -7
kernel/trace/trace_kprobe.c
··· 819 819 struct trace_probe *tp = container_of(kp, struct trace_probe, kp); 820 820 struct kprobe_trace_entry *entry; 821 821 struct ring_buffer_event *event; 822 + struct ring_buffer *buffer; 822 823 int size, i, pc; 823 824 unsigned long irq_flags; 824 825 struct ftrace_event_call *call = &tp->call; ··· 831 830 832 831 size = SIZEOF_KPROBE_TRACE_ENTRY(tp->nr_args); 833 832 834 - event = trace_current_buffer_lock_reserve(call->id, size, 833 + event = trace_current_buffer_lock_reserve(&buffer, call->id, size, 835 834 irq_flags, pc); 836 835 if (!event) 837 836 return 0; ··· 842 841 for (i = 0; i < tp->nr_args; i++) 843 842 entry->args[i] = call_fetch(&tp->args[i], regs); 844 843 845 - if (!filter_current_check_discard(call, entry, event)) 846 - trace_nowake_buffer_unlock_commit(event, irq_flags, pc); 844 + if (!filter_current_check_discard(buffer, call, entry, event)) 845 + trace_nowake_buffer_unlock_commit(buffer, event, irq_flags, pc); 847 846 return 0; 848 847 } 849 848 ··· 854 853 struct trace_probe *tp = container_of(ri->rp, struct trace_probe, rp); 855 854 struct kretprobe_trace_entry *entry; 856 855 struct ring_buffer_event *event; 856 + struct ring_buffer *buffer; 857 857 int size, i, pc; 858 858 unsigned long irq_flags; 859 859 struct ftrace_event_call *call = &tp->call; ··· 864 862 865 863 size = SIZEOF_KRETPROBE_TRACE_ENTRY(tp->nr_args); 866 864 867 - event = trace_current_buffer_lock_reserve(call->id, size, 865 + event = trace_current_buffer_lock_reserve(&buffer, call->id, size, 868 866 irq_flags, pc); 869 867 if (!event) 870 868 return 0; ··· 876 874 for (i = 0; i < tp->nr_args; i++) 877 875 entry->args[i] = call_fetch(&tp->args[i], regs); 878 876 879 - if (!filter_current_check_discard(call, entry, event)) 880 - trace_nowake_buffer_unlock_commit(event, irq_flags, pc); 877 + if (!filter_current_check_discard(buffer, call, entry, event)) 878 + trace_nowake_buffer_unlock_commit(buffer, event, irq_flags, pc); 881 879 882 880 return 0; 883 881 } ··· 966 
964 static int probe_event_raw_init(struct ftrace_event_call *event_call) 967 965 { 968 966 INIT_LIST_HEAD(&event_call->fields); 969 - init_preds(event_call); 967 + 970 968 return 0; 971 969 } 972 970
+6 -4
kernel/trace/trace_mmiotrace.c
··· 307 307 struct trace_array_cpu *data, 308 308 struct mmiotrace_rw *rw) 309 309 { 310 + struct ring_buffer *buffer = tr->buffer; 310 311 struct ring_buffer_event *event; 311 312 struct trace_mmiotrace_rw *entry; 312 313 int pc = preempt_count(); 313 314 314 - event = trace_buffer_lock_reserve(tr, TRACE_MMIO_RW, 315 + event = trace_buffer_lock_reserve(buffer, TRACE_MMIO_RW, 315 316 sizeof(*entry), 0, pc); 316 317 if (!event) { 317 318 atomic_inc(&dropped_count); ··· 320 319 } 321 320 entry = ring_buffer_event_data(event); 322 321 entry->rw = *rw; 323 - trace_buffer_unlock_commit(tr, event, 0, pc); 322 + trace_buffer_unlock_commit(buffer, event, 0, pc); 324 323 } 325 324 326 325 void mmio_trace_rw(struct mmiotrace_rw *rw) ··· 334 333 struct trace_array_cpu *data, 335 334 struct mmiotrace_map *map) 336 335 { 336 + struct ring_buffer *buffer = tr->buffer; 337 337 struct ring_buffer_event *event; 338 338 struct trace_mmiotrace_map *entry; 339 339 int pc = preempt_count(); 340 340 341 - event = trace_buffer_lock_reserve(tr, TRACE_MMIO_MAP, 341 + event = trace_buffer_lock_reserve(buffer, TRACE_MMIO_MAP, 342 342 sizeof(*entry), 0, pc); 343 343 if (!event) { 344 344 atomic_inc(&dropped_count); ··· 347 345 } 348 346 entry = ring_buffer_event_data(event); 349 347 entry->map = *map; 350 - trace_buffer_unlock_commit(tr, event, 0, pc); 348 + trace_buffer_unlock_commit(buffer, event, 0, pc); 351 349 } 352 350 353 351 void mmio_trace_mapping(struct mmiotrace_map *map)
+13 -9
kernel/trace/trace_power.c
··· 38 38 { 39 39 struct ftrace_event_call *call = &event_power; 40 40 struct ring_buffer_event *event; 41 + struct ring_buffer *buffer; 41 42 struct trace_power *entry; 42 43 struct trace_array_cpu *data; 43 44 struct trace_array *tr = power_trace; ··· 46 45 if (!trace_power_enabled) 47 46 return; 48 47 48 + buffer = tr->buffer; 49 + 49 50 preempt_disable(); 50 51 it->end = ktime_get(); 51 52 data = tr->data[smp_processor_id()]; 52 53 53 - event = trace_buffer_lock_reserve(tr, TRACE_POWER, 54 + event = trace_buffer_lock_reserve(buffer, TRACE_POWER, 54 55 sizeof(*entry), 0, 0); 55 56 if (!event) 56 57 goto out; 57 58 entry = ring_buffer_event_data(event); 58 59 entry->state_data = *it; 59 - if (!filter_check_discard(call, entry, tr->buffer, event)) 60 - trace_buffer_unlock_commit(tr, event, 0, 0); 60 + if (!filter_check_discard(call, entry, buffer, event)) 61 + trace_buffer_unlock_commit(buffer, event, 0, 0); 61 62 out: 62 63 preempt_enable(); 63 64 } ··· 69 66 { 70 67 struct ftrace_event_call *call = &event_power; 71 68 struct ring_buffer_event *event; 69 + struct ring_buffer *buffer; 72 70 struct trace_power *entry; 73 71 struct trace_array_cpu *data; 74 72 struct trace_array *tr = power_trace; 75 73 76 74 if (!trace_power_enabled) 77 75 return; 76 + 77 + buffer = tr->buffer; 78 78 79 79 memset(it, 0, sizeof(struct power_trace)); 80 80 it->state = level; ··· 87 81 it->end = it->stamp; 88 82 data = tr->data[smp_processor_id()]; 89 83 90 - event = trace_buffer_lock_reserve(tr, TRACE_POWER, 84 + event = trace_buffer_lock_reserve(buffer, TRACE_POWER, 91 85 sizeof(*entry), 0, 0); 92 86 if (!event) 93 87 goto out; 94 88 entry = ring_buffer_event_data(event); 95 89 entry->state_data = *it; 96 - if (!filter_check_discard(call, entry, tr->buffer, event)) 97 - trace_buffer_unlock_commit(tr, event, 0, 0); 90 + if (!filter_check_discard(call, entry, buffer, event)) 91 + trace_buffer_unlock_commit(buffer, event, 0, 0); 98 92 out: 99 93 preempt_enable(); 100 94 } ··· 150 144 
151 145 static int power_trace_init(struct trace_array *tr) 152 146 { 153 - int cpu; 154 147 power_trace = tr; 155 148 156 149 trace_power_enabled = 1; 157 150 tracing_power_register(); 158 151 159 - for_each_cpu(cpu, cpu_possible_mask) 160 - tracing_reset(tr, cpu); 152 + tracing_reset_online_cpus(tr); 161 153 return 0; 162 154 } 163 155
+10 -8
kernel/trace/trace_sched_switch.c
··· 28 28 unsigned long flags, int pc) 29 29 { 30 30 struct ftrace_event_call *call = &event_context_switch; 31 + struct ring_buffer *buffer = tr->buffer; 31 32 struct ring_buffer_event *event; 32 33 struct ctx_switch_entry *entry; 33 34 34 - event = trace_buffer_lock_reserve(tr, TRACE_CTX, 35 + event = trace_buffer_lock_reserve(buffer, TRACE_CTX, 35 36 sizeof(*entry), flags, pc); 36 37 if (!event) 37 38 return; ··· 45 44 entry->next_state = next->state; 46 45 entry->next_cpu = task_cpu(next); 47 46 48 - if (!filter_check_discard(call, entry, tr->buffer, event)) 49 - trace_buffer_unlock_commit(tr, event, flags, pc); 47 + if (!filter_check_discard(call, entry, buffer, event)) 48 + trace_buffer_unlock_commit(buffer, event, flags, pc); 50 49 } 51 50 52 51 static void ··· 87 86 struct ftrace_event_call *call = &event_wakeup; 88 87 struct ring_buffer_event *event; 89 88 struct ctx_switch_entry *entry; 89 + struct ring_buffer *buffer = tr->buffer; 90 90 91 - event = trace_buffer_lock_reserve(tr, TRACE_WAKE, 91 + event = trace_buffer_lock_reserve(buffer, TRACE_WAKE, 92 92 sizeof(*entry), flags, pc); 93 93 if (!event) 94 94 return; ··· 102 100 entry->next_state = wakee->state; 103 101 entry->next_cpu = task_cpu(wakee); 104 102 105 - if (!filter_check_discard(call, entry, tr->buffer, event)) 106 - ring_buffer_unlock_commit(tr->buffer, event); 107 - ftrace_trace_stack(tr, flags, 6, pc); 108 - ftrace_trace_userstack(tr, flags, pc); 103 + if (!filter_check_discard(call, entry, buffer, event)) 104 + ring_buffer_unlock_commit(buffer, event); 105 + ftrace_trace_stack(tr->buffer, flags, 6, pc); 106 + ftrace_trace_userstack(tr->buffer, flags, pc); 109 107 } 110 108 111 109 static void
+2 -5
kernel/trace/trace_sched_wakeup.c
··· 186 186 187 187 static void __wakeup_reset(struct trace_array *tr) 188 188 { 189 - int cpu; 190 - 191 - for_each_possible_cpu(cpu) 192 - tracing_reset(tr, cpu); 193 - 194 189 wakeup_cpu = -1; 195 190 wakeup_prio = -1; 196 191 ··· 198 203 static void wakeup_reset(struct trace_array *tr) 199 204 { 200 205 unsigned long flags; 206 + 207 + tracing_reset_online_cpus(tr); 201 208 202 209 local_irq_save(flags); 203 210 __raw_spin_lock(&wakeup_lock);
+27 -19
kernel/trace/trace_syscalls.c
··· 11 11 static DEFINE_MUTEX(syscall_trace_lock); 12 12 static int sys_refcount_enter; 13 13 static int sys_refcount_exit; 14 - static DECLARE_BITMAP(enabled_enter_syscalls, FTRACE_SYSCALL_MAX); 15 - static DECLARE_BITMAP(enabled_exit_syscalls, FTRACE_SYSCALL_MAX); 14 + static DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls); 15 + static DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls); 16 16 17 17 enum print_line_t 18 18 print_syscall_enter(struct trace_iterator *iter, int flags) ··· 223 223 struct syscall_trace_enter *entry; 224 224 struct syscall_metadata *sys_data; 225 225 struct ring_buffer_event *event; 226 + struct ring_buffer *buffer; 226 227 int size; 227 228 int syscall_nr; 228 229 229 230 syscall_nr = syscall_get_nr(current, regs); 231 + if (syscall_nr < 0) 232 + return; 230 233 if (!test_bit(syscall_nr, enabled_enter_syscalls)) 231 234 return; 232 235 ··· 239 236 240 237 size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args; 241 238 242 - event = trace_current_buffer_lock_reserve(sys_data->enter_id, size, 243 - 0, 0); 239 + event = trace_current_buffer_lock_reserve(&buffer, sys_data->enter_id, 240 + size, 0, 0); 244 241 if (!event) 245 242 return; 246 243 ··· 248 245 entry->nr = syscall_nr; 249 246 syscall_get_arguments(current, regs, 0, sys_data->nb_args, entry->args); 250 247 251 - if (!filter_current_check_discard(sys_data->enter_event, entry, event)) 252 - trace_current_buffer_unlock_commit(event, 0, 0); 248 + if (!filter_current_check_discard(buffer, sys_data->enter_event, 249 + entry, event)) 250 + trace_current_buffer_unlock_commit(buffer, event, 0, 0); 253 251 } 254 252 255 253 void ftrace_syscall_exit(struct pt_regs *regs, long ret) ··· 258 254 struct syscall_trace_exit *entry; 259 255 struct syscall_metadata *sys_data; 260 256 struct ring_buffer_event *event; 257 + struct ring_buffer *buffer; 261 258 int syscall_nr; 262 259 263 260 syscall_nr = syscall_get_nr(current, regs); 261 + if (syscall_nr < 0) 262 + return; 264 263 if 
(!test_bit(syscall_nr, enabled_exit_syscalls)) 265 264 return; 266 265 ··· 271 264 if (!sys_data) 272 265 return; 273 266 274 - event = trace_current_buffer_lock_reserve(sys_data->exit_id, 267 + event = trace_current_buffer_lock_reserve(&buffer, sys_data->exit_id, 275 268 sizeof(*entry), 0, 0); 276 269 if (!event) 277 270 return; ··· 280 273 entry->nr = syscall_nr; 281 274 entry->ret = syscall_get_return_value(current, regs); 282 275 283 - if (!filter_current_check_discard(sys_data->exit_event, entry, event)) 284 - trace_current_buffer_unlock_commit(event, 0, 0); 276 + if (!filter_current_check_discard(buffer, sys_data->exit_event, 277 + entry, event)) 278 + trace_current_buffer_unlock_commit(buffer, event, 0, 0); 285 279 } 286 280 287 281 int reg_event_syscall_enter(struct ftrace_event_call *call) ··· 293 285 294 286 name = (char *)call->data; 295 287 num = syscall_name_to_nr(name); 296 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 288 + if (num < 0 || num >= NR_syscalls) 297 289 return -ENOSYS; 298 290 mutex_lock(&syscall_trace_lock); 299 291 if (!sys_refcount_enter) ··· 316 308 317 309 name = (char *)call->data; 318 310 num = syscall_name_to_nr(name); 319 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 311 + if (num < 0 || num >= NR_syscalls) 320 312 return; 321 313 mutex_lock(&syscall_trace_lock); 322 314 sys_refcount_enter--; ··· 334 326 335 327 name = call->data; 336 328 num = syscall_name_to_nr(name); 337 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 329 + if (num < 0 || num >= NR_syscalls) 338 330 return -ENOSYS; 339 331 mutex_lock(&syscall_trace_lock); 340 332 if (!sys_refcount_exit) ··· 357 349 358 350 name = call->data; 359 351 num = syscall_name_to_nr(name); 360 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 352 + if (num < 0 || num >= NR_syscalls) 361 353 return; 362 354 mutex_lock(&syscall_trace_lock); 363 355 sys_refcount_exit--; ··· 377 369 378 370 #ifdef CONFIG_EVENT_PROFILE 379 371 380 - static DECLARE_BITMAP(enabled_prof_enter_syscalls, FTRACE_SYSCALL_MAX); 
381 - static DECLARE_BITMAP(enabled_prof_exit_syscalls, FTRACE_SYSCALL_MAX); 372 + static DECLARE_BITMAP(enabled_prof_enter_syscalls, NR_syscalls); 373 + static DECLARE_BITMAP(enabled_prof_exit_syscalls, NR_syscalls); 382 374 static int sys_prof_refcount_enter; 383 375 static int sys_prof_refcount_exit; 384 376 ··· 424 416 int num; 425 417 426 418 num = syscall_name_to_nr(name); 427 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 419 + if (num < 0 || num >= NR_syscalls) 428 420 return -ENOSYS; 429 421 430 422 mutex_lock(&syscall_trace_lock); ··· 446 438 int num; 447 439 448 440 num = syscall_name_to_nr(name); 449 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 441 + if (num < 0 || num >= NR_syscalls) 450 442 return; 451 443 452 444 mutex_lock(&syscall_trace_lock); ··· 485 477 int num; 486 478 487 479 num = syscall_name_to_nr(name); 488 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 480 + if (num < 0 || num >= NR_syscalls) 489 481 return -ENOSYS; 490 482 491 483 mutex_lock(&syscall_trace_lock); ··· 507 499 int num; 508 500 509 501 num = syscall_name_to_nr(name); 510 - if (num < 0 || num >= FTRACE_SYSCALL_MAX) 502 + if (num < 0 || num >= NR_syscalls) 511 503 return; 512 504 513 505 mutex_lock(&syscall_trace_lock);
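The syscall-tracing hunk sizes its enable bitmaps by `NR_syscalls` and rejects out-of-range numbers before testing a bit. A userspace sketch of the same one-bit-per-syscall scheme with the same range guard; `NR_CALLS` is an illustrative stand-in for `NR_syscalls`:

```c
#include <limits.h>

#define NR_CALLS 64
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long enabled[(NR_CALLS + BITS_PER_LONG - 1) / BITS_PER_LONG];

static int set_enabled(int nr)
{
	if (nr < 0 || nr >= NR_CALLS)	/* same guard as the patch */
		return -1;
	enabled[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
	return 0;
}

static int is_enabled(int nr)
{
	if (nr < 0 || nr >= NR_CALLS)
		return 0;
	return (enabled[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}
```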
+3 -1
kernel/tracepoint.c
··· 597 597 if (!sys_tracepoint_refcount) { 598 598 read_lock_irqsave(&tasklist_lock, flags); 599 599 do_each_thread(g, t) { 600 - set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT); 600 + /* Skip kernel threads. */ 601 + if (t->mm) 602 + set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT); 601 603 } while_each_thread(g, t); 602 604 read_unlock_irqrestore(&tasklist_lock, flags); 603 605 }
+3 -2
kernel/wait.c
··· 10 10 #include <linux/wait.h> 11 11 #include <linux/hash.h> 12 12 13 - void init_waitqueue_head(wait_queue_head_t *q) 13 + void __init_waitqueue_head(wait_queue_head_t *q, struct lock_class_key *key) 14 14 { 15 15 spin_lock_init(&q->lock); 16 + lockdep_set_class(&q->lock, key); 16 17 INIT_LIST_HEAD(&q->task_list); 17 18 } 18 19 19 - EXPORT_SYMBOL(init_waitqueue_head); 20 + EXPORT_SYMBOL(__init_waitqueue_head); 20 21 21 22 void add_wait_queue(wait_queue_head_t *q, wait_queue_t *wait) 22 23 {
+8 -4
lib/bitmap.c
··· 179 179 } 180 180 EXPORT_SYMBOL(__bitmap_shift_left); 181 181 182 - void __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, 182 + int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, 183 183 const unsigned long *bitmap2, int bits) 184 184 { 185 185 int k; 186 186 int nr = BITS_TO_LONGS(bits); 187 + unsigned long result = 0; 187 188 188 189 for (k = 0; k < nr; k++) 189 - dst[k] = bitmap1[k] & bitmap2[k]; 190 + result |= (dst[k] = bitmap1[k] & bitmap2[k]); 191 + return result != 0; 190 192 } 191 193 EXPORT_SYMBOL(__bitmap_and); 192 194 ··· 214 212 } 215 213 EXPORT_SYMBOL(__bitmap_xor); 216 214 217 - void __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, 215 + int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, 218 216 const unsigned long *bitmap2, int bits) 219 217 { 220 218 int k; 221 219 int nr = BITS_TO_LONGS(bits); 220 + unsigned long result = 0; 222 221 223 222 for (k = 0; k < nr; k++) 224 - dst[k] = bitmap1[k] & ~bitmap2[k]; 223 + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); 224 + return result != 0; 225 225 } 226 226 EXPORT_SYMBOL(__bitmap_andnot); 227 227
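The `__bitmap_and()`/`__bitmap_andnot()` change ORs every word written to `dst` into an accumulator, so the function can report whether any bit survived without a second pass over the bitmap. A userspace mirror of the patched loop:

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Returns nonzero iff dst = a & b is non-empty, computed in one pass. */
static int bitmap_and(unsigned long *dst, const unsigned long *a,
		      const unsigned long *b, int bits)
{
	unsigned long result = 0;
	int k, nr = BITS_TO_LONGS(bits);

	for (k = 0; k < nr; k++)
		result |= (dst[k] = a[k] & b[k]);
	return result != 0;
}
```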
+16 -12
lib/dma-debug.c
··· 156 156 return true; 157 157 158 158 /* driver filter on and initialized */ 159 - if (current_driver && dev->driver == current_driver) 159 + if (current_driver && dev && dev->driver == current_driver) 160 160 return true; 161 + 162 + /* driver filter on, but we can't filter on a NULL device... */ 163 + if (!dev) 164 + return false; 161 165 162 166 if (current_driver || !current_driver_name[0]) 163 167 return false; ··· 187 183 return ret; 188 184 } 189 185 190 - #define err_printk(dev, entry, format, arg...) do { \ 191 - error_count += 1; \ 192 - if (driver_filter(dev) && \ 193 - (show_all_errors || show_num_errors > 0)) { \ 194 - WARN(1, "%s %s: " format, \ 195 - dev_driver_string(dev), \ 196 - dev_name(dev) , ## arg); \ 197 - dump_entry_trace(entry); \ 198 - } \ 199 - if (!show_all_errors && show_num_errors > 0) \ 200 - show_num_errors -= 1; \ 186 + #define err_printk(dev, entry, format, arg...) do { \ 187 + error_count += 1; \ 188 + if (driver_filter(dev) && \ 189 + (show_all_errors || show_num_errors > 0)) { \ 190 + WARN(1, "%s %s: " format, \ 191 + dev ? dev_driver_string(dev) : "NULL", \ 192 + dev ? dev_name(dev) : "NULL", ## arg); \ 193 + dump_entry_trace(entry); \ 194 + } \ 195 + if (!show_all_errors && show_num_errors > 0) \ 196 + show_num_errors -= 1; \ 201 197 } while (0); 202 198 203 199 /*
+21 -20
lib/flex_array.c
··· 99 99 * capacity in the base structure. Also note that no effort is made 100 100 * to efficiently pack objects across page boundaries. 101 101 */ 102 - struct flex_array *flex_array_alloc(int element_size, int total, gfp_t flags) 102 + struct flex_array *flex_array_alloc(int element_size, unsigned int total, 103 + gfp_t flags) 103 104 { 104 105 struct flex_array *ret; 105 106 int max_size = nr_base_part_ptrs() * __elements_per_part(element_size); ··· 116 115 return ret; 117 116 } 118 117 119 - static int fa_element_to_part_nr(struct flex_array *fa, int element_nr) 118 + static int fa_element_to_part_nr(struct flex_array *fa, 119 + unsigned int element_nr) 120 120 { 121 121 return element_nr / __elements_per_part(fa->element_size); 122 122 } 123 123 124 124 /** 125 125 * flex_array_free_parts - just free the second-level pages 126 - * @src: address of data to copy into the array 127 - * @element_nr: index of the position in which to insert 128 - * the new element. 129 126 * 130 127 * This is to be used in cases where the base 'struct flex_array' 131 128 * has been statically allocated and should not be free. ··· 145 146 kfree(fa); 146 147 } 147 148 148 - static int fa_index_inside_part(struct flex_array *fa, int element_nr) 149 + static unsigned int index_inside_part(struct flex_array *fa, 150 + unsigned int element_nr) 149 151 { 150 - return element_nr % __elements_per_part(fa->element_size); 151 - } 152 + unsigned int part_offset; 152 153 153 - static int index_inside_part(struct flex_array *fa, int element_nr) 154 - { 155 - int part_offset = fa_index_inside_part(fa, element_nr); 154 + part_offset = element_nr % __elements_per_part(fa->element_size); 156 155 return part_offset * fa->element_size; 157 156 } 158 157 ··· 185 188 * 186 189 * Locking must be provided by the caller. 
187 190 */ 188 - int flex_array_put(struct flex_array *fa, int element_nr, void *src, gfp_t flags) 191 + int flex_array_put(struct flex_array *fa, unsigned int element_nr, void *src, 192 + gfp_t flags) 189 193 { 190 194 int part_nr = fa_element_to_part_nr(fa, element_nr); 191 195 struct flex_array_part *part; ··· 196 198 return -ENOSPC; 197 199 if (elements_fit_in_base(fa)) 198 200 part = (struct flex_array_part *)&fa->parts[0]; 199 - else 201 + else { 200 202 part = __fa_get_part(fa, part_nr, flags); 201 - if (!part) 202 - return -ENOMEM; 203 + if (!part) 204 + return -ENOMEM; 205 + } 203 206 dst = &part->elements[index_inside_part(fa, element_nr)]; 204 207 memcpy(dst, src, fa->element_size); 205 208 return 0; ··· 218 219 * 219 220 * Locking must be provided by the caller. 220 221 */ 221 - int flex_array_prealloc(struct flex_array *fa, int start, int end, gfp_t flags) 222 + int flex_array_prealloc(struct flex_array *fa, unsigned int start, 223 + unsigned int end, gfp_t flags) 222 224 { 223 225 int start_part; 224 226 int end_part; ··· 250 250 * 251 251 * Locking must be provided by the caller. 252 252 */ 253 - void *flex_array_get(struct flex_array *fa, int element_nr) 253 + void *flex_array_get(struct flex_array *fa, unsigned int element_nr) 254 254 { 255 255 int part_nr = fa_element_to_part_nr(fa, element_nr); 256 256 struct flex_array_part *part; 257 257 258 258 if (element_nr >= fa->total_nr_elements) 259 259 return NULL; 260 - if (!fa->parts[part_nr]) 261 - return NULL; 262 260 if (elements_fit_in_base(fa)) 263 261 part = (struct flex_array_part *)&fa->parts[0]; 264 - else 262 + else { 265 263 part = fa->parts[part_nr]; 264 + if (!part) 265 + return NULL; 266 + } 266 267 return &part->elements[index_inside_part(fa, element_nr)]; 267 268 }
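The flex_array cleanup centers on the two-level index math: an element number splits into a part number (which page-sized part holds it) and a byte offset inside that part. A sketch of that arithmetic; `PART_SIZE` is an illustrative constant standing in for the kernel's per-part allocation size:

```c
#define PART_SIZE 4096

static unsigned int elements_per_part(unsigned int element_size)
{
	return PART_SIZE / element_size;
}

/* Which second-level part holds this element. */
static unsigned int part_nr(unsigned int element_size,
			    unsigned int element_nr)
{
	return element_nr / elements_per_part(element_size);
}

/* Byte offset inside the part, as in the patched index_inside_part(). */
static unsigned int index_inside_part(unsigned int element_size,
				      unsigned int element_nr)
{
	unsigned int off = element_nr % elements_per_part(element_size);

	return off * element_size;
}
```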
+1 -1
lib/lmb.c
··· 429 429 return lmb.memory.size; 430 430 } 431 431 432 - u64 __init lmb_end_of_DRAM(void) 432 + u64 lmb_end_of_DRAM(void) 433 433 { 434 434 int idx = lmb.memory.cnt - 1; 435 435
+3 -3
mm/Kconfig
··· 225 225 For most ia64, ppc64 and x86 users with lots of address space 226 226 a value of 65536 is reasonable and should cause no problems. 227 227 On arm and other archs it should not be higher than 32768. 228 - Programs which use vm86 functionality would either need additional 229 - permissions from either the LSM or the capabilities module or have 230 - this protection disabled. 228 + Programs which use vm86 functionality or have some need to map 229 + this low address space will need CAP_SYS_RAWIO or disable this 230 + protection by setting the value to 0. 231 231 232 232 This value can be changed after boot using the 233 233 /proc/sys/vm/mmap_min_addr tunable.
-3
mm/mmap.c
··· 88 88 int sysctl_max_map_count __read_mostly = DEFAULT_MAX_MAP_COUNT; 89 89 struct percpu_counter vm_committed_as; 90 90 91 - /* amount of vm to protect from userspace access */ 92 - unsigned long mmap_min_addr = CONFIG_DEFAULT_MMAP_MIN_ADDR; 93 - 94 91 /* 95 92 * Check that a process has enough memory to allocate a new virtual 96 93 * mapping. 0 means there is enough memory for the allocation to
+5 -5
mm/nommu.c
··· 69 69 int sysctl_nr_trim_pages = CONFIG_NOMMU_INITIAL_TRIM_EXCESS; 70 70 int heap_stack_gap = 0; 71 71 72 - /* amount of vm to protect from userspace access */ 73 - unsigned long mmap_min_addr = CONFIG_DEFAULT_MMAP_MIN_ADDR; 74 - 75 72 atomic_long_t mmap_pages_allocated; 76 73 77 74 EXPORT_SYMBOL(mem_map); ··· 919 922 if (!file->f_op->read) 920 923 capabilities &= ~BDI_CAP_MAP_COPY; 921 924 925 + /* The file shall have been opened with read permission. */ 926 + if (!(file->f_mode & FMODE_READ)) 927 + return -EACCES; 928 + 922 929 if (flags & MAP_SHARED) { 923 930 /* do checks for writing, appending and locking */ 924 931 if ((prot & PROT_WRITE) && ··· 1352 1351 } 1353 1352 1354 1353 vma->vm_region = region; 1354 + add_nommu_region(region); 1355 1355 1356 1356 /* set up the mapping */ 1357 1357 if (file && vma->vm_flags & VM_SHARED) ··· 1361 1359 ret = do_mmap_private(vma, region, len); 1362 1360 if (ret < 0) 1363 1361 goto error_put_region; 1364 - 1365 - add_nommu_region(region); 1366 1362 1367 1363 /* okay... we have a mapping; now we have to register it */ 1368 1364 result = vma->vm_start;
+39 -25
mm/oom_kill.c
··· 58 58 unsigned long points, cpu_time, run_time; 59 59 struct mm_struct *mm; 60 60 struct task_struct *child; 61 - int oom_adj; 62 61 63 62 task_lock(p); 64 63 mm = p->mm; 65 64 if (!mm) { 66 - task_unlock(p); 67 - return 0; 68 - } 69 - oom_adj = mm->oom_adj; 70 - if (oom_adj == OOM_DISABLE) { 71 65 task_unlock(p); 72 66 return 0; 73 67 } ··· 148 154 points /= 8; 149 155 150 156 /* 151 - * Adjust the score by oom_adj. 157 + * Adjust the score by oomkilladj. 152 158 */ 153 - if (oom_adj) { 154 - if (oom_adj > 0) { 159 + if (p->oomkilladj) { 160 + if (p->oomkilladj > 0) { 155 161 if (!points) 156 162 points = 1; 157 - points <<= oom_adj; 163 + points <<= p->oomkilladj; 158 164 } else 159 - points >>= -(oom_adj); 165 + points >>= -(p->oomkilladj); 160 166 } 161 167 162 168 #ifdef DEBUG ··· 251 257 *ppoints = ULONG_MAX; 252 258 } 253 259 260 + if (p->oomkilladj == OOM_DISABLE) 261 + continue; 262 + 254 263 points = badness(p, uptime.tv_sec); 255 - if (points > *ppoints) { 264 + if (points > *ppoints || !chosen) { 256 265 chosen = p; 257 266 *ppoints = points; 258 267 } ··· 304 307 } 305 308 printk(KERN_INFO "[%5d] %5d %5d %8lu %8lu %3d %3d %s\n", 306 309 p->pid, __task_cred(p)->uid, p->tgid, mm->total_vm, 307 - get_mm_rss(mm), (int)task_cpu(p), mm->oom_adj, p->comm); 310 + get_mm_rss(mm), (int)task_cpu(p), p->oomkilladj, 311 + p->comm); 308 312 task_unlock(p); 309 313 } while_each_thread(g, p); 310 314 } ··· 323 325 return; 324 326 } 325 327 326 - if (!p->mm) 328 + if (!p->mm) { 329 + WARN_ON(1); 330 + printk(KERN_WARNING "tried to kill an mm-less task!\n"); 327 331 return; 332 + } 328 333 329 334 if (verbose) 330 335 printk(KERN_ERR "Killed process %d (%s)\n", ··· 349 348 struct mm_struct *mm; 350 349 struct task_struct *g, *q; 351 350 352 - task_lock(p); 353 351 mm = p->mm; 354 - if (!mm || mm->oom_adj == OOM_DISABLE) { 355 - task_unlock(p); 352 + 353 + /* WARNING: mm may not be dereferenced since we did not obtain its 354 + * value from get_task_mm(p). 
This is OK since all we need to do is 355 + * compare mm to q->mm below. 356 + * 357 + * Furthermore, even if mm contains a non-NULL value, p->mm may 358 + * change to NULL at any time since we do not hold task_lock(p). 359 + * However, this is of no concern to us. 360 + */ 361 + 362 + if (mm == NULL) 356 363 return 1; 357 - } 358 - task_unlock(p); 364 + 365 + /* 366 + * Don't kill the process if any threads are set to OOM_DISABLE 367 + */ 368 + do_each_thread(g, q) { 369 + if (q->mm == mm && q->oomkilladj == OOM_DISABLE) 370 + return 1; 371 + } while_each_thread(g, q); 372 + 359 373 __oom_kill_task(p, 1); 360 374 361 375 /* ··· 393 377 struct task_struct *c; 394 378 395 379 if (printk_ratelimit()) { 396 - task_lock(current); 397 380 printk(KERN_WARNING "%s invoked oom-killer: " 398 - "gfp_mask=0x%x, order=%d, oom_adj=%d\n", 399 - current->comm, gfp_mask, order, 400 - current->mm ? current->mm->oom_adj : OOM_DISABLE); 381 + "gfp_mask=0x%x, order=%d, oomkilladj=%d\n", 382 + current->comm, gfp_mask, order, current->oomkilladj); 383 + task_lock(current); 401 384 cpuset_print_task_mems_allowed(current); 402 385 task_unlock(current); 403 386 dump_stack(); ··· 409 394 /* 410 395 * If the task is already exiting, don't alarm the sysadmin or kill 411 396 * its children or threads, just set TIF_MEMDIE so it can die quickly 412 - * if its mm is still attached. 413 397 */ 414 - if (p->mm && (p->flags & PF_EXITING)) { 398 + if (p->flags & PF_EXITING) { 415 399 __oom_kill_task(p, 0); 416 400 return 0; 417 401 }
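The mm/oom_kill.c hunks above move the adjustment value back to a per-task field (p->oomkilladj) and apply it to the badness score as a bit shift. That shift logic can be sketched in isolation (adjust_badness is a hypothetical name; in the kernel this code lives inline in badness()):

```c
#include <assert.h>

/* Apply an oomkilladj-style adjustment to a badness score:
 * positive values left-shift the score (more likely to be killed),
 * negative values right-shift it (less likely). */
unsigned long adjust_badness(unsigned long points, int oomkilladj)
{
	if (oomkilladj) {
		if (oomkilladj > 0) {
			if (!points)
				points = 1; /* give the shift something to scale */
			points <<= oomkilladj;
		} else {
			points >>= -oomkilladj;
		}
	}
	return points;
}
```

Note that OOM_DISABLE is not handled here: per the select_bad_process() hunk above, disabled tasks are skipped before badness() is ever called.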
+7 -3
mm/page_alloc.c
··· 817 817 * agressive about taking ownership of free pages 818 818 */ 819 819 if (unlikely(current_order >= (pageblock_order >> 1)) || 820 - start_migratetype == MIGRATE_RECLAIMABLE) { 820 + start_migratetype == MIGRATE_RECLAIMABLE || 821 + page_group_by_mobility_disabled) { 821 822 unsigned long pages; 822 823 pages = move_freepages_block(zone, page, 823 824 start_migratetype); 824 825 825 826 /* Claim the whole block if over half of it is free */ 826 - if (pages >= (1 << (pageblock_order-1))) 827 + if (pages >= (1 << (pageblock_order-1)) || 828 + page_group_by_mobility_disabled) 827 829 set_pageblock_migratetype(page, 828 830 start_migratetype); 829 831 ··· 2546 2544 prev_node = local_node; 2547 2545 nodes_clear(used_mask); 2548 2546 2549 - memset(node_load, 0, sizeof(node_load)); 2550 2547 memset(node_order, 0, sizeof(node_order)); 2551 2548 j = 0; 2552 2549 ··· 2654 2653 { 2655 2654 int nid; 2656 2655 2656 + #ifdef CONFIG_NUMA 2657 + memset(node_load, 0, sizeof(node_load)); 2658 + #endif 2657 2659 for_each_online_node(nid) { 2658 2660 pg_data_t *pgdat = NODE_DATA(nid); 2659 2661
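The fallback path above now claims a whole pageblock either when more than half of it is free or whenever page_group_by_mobility_disabled is set. The decision predicate, isolated (function name and pageblock order are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define DEMO_PAGEBLOCK_ORDER 10 /* illustrative: 1024 pages per block */

/* Claim the block's migratetype if over half of it is free, or if
 * grouping pages by mobility is disabled entirely. */
bool should_claim_block(unsigned long free_pages, bool mobility_disabled)
{
	return free_pages >= (1UL << (DEMO_PAGEBLOCK_ORDER - 1)) ||
	       mobility_disabled;
}
```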
+33 -17
mm/percpu.c
··· 8 8 * 9 9 * This is percpu allocator which can handle both static and dynamic 10 10 * areas. Percpu areas are allocated in chunks in vmalloc area. Each 11 - * chunk is consisted of num_possible_cpus() units and the first chunk 12 - * is used for static percpu variables in the kernel image (special 13 - * boot time alloc/init handling necessary as these areas need to be 14 - * brought up before allocation services are running). Unit grows as 15 - * necessary and all units grow or shrink in unison. When a chunk is 16 - * filled up, another chunk is allocated. ie. in vmalloc area 11 + * chunk is consisted of nr_cpu_ids units and the first chunk is used 12 + * for static percpu variables in the kernel image (special boot time 13 + * alloc/init handling necessary as these areas need to be brought up 14 + * before allocation services are running). Unit grows as necessary 15 + * and all units grow or shrink in unison. When a chunk is filled up, 16 + * another chunk is allocated. ie. in vmalloc area 17 17 * 18 18 * c0 c1 c2 19 19 * ------------------- ------------------- ------------ ··· 197 197 static bool pcpu_chunk_page_occupied(struct pcpu_chunk *chunk, 198 198 int page_idx) 199 199 { 200 - return *pcpu_chunk_pagep(chunk, 0, page_idx) != NULL; 200 + /* 201 + * Any possible cpu id can be used here, so there's no need to 202 + * worry about preemption or cpu hotplug. 203 + */ 204 + return *pcpu_chunk_pagep(chunk, raw_smp_processor_id(), 205 + page_idx) != NULL; 201 206 } 202 207 203 208 /* set the pointer to a chunk in a page struct */ ··· 302 297 return pcpu_first_chunk; 303 298 } 304 299 300 + /* 301 + * The address is relative to unit0 which might be unused and 302 + * thus unmapped. Offset the address to the unit space of the 303 + * current processor before looking it up in the vmalloc 304 + * space. Note that any possible cpu id can be used here, so 305 + * there's no need to worry about preemption or cpu hotplug. 
306 + */ 307 + addr += raw_smp_processor_id() * pcpu_unit_size; 305 308 return pcpu_get_page_chunk(vmalloc_to_page(addr)); 306 309 } 307 310 ··· 571 558 static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end, 572 559 bool flush_tlb) 573 560 { 574 - unsigned int last = num_possible_cpus() - 1; 561 + unsigned int last = nr_cpu_ids - 1; 575 562 unsigned int cpu; 576 563 577 564 /* unmap must not be done on immutable chunk */ ··· 656 643 */ 657 644 static int pcpu_map(struct pcpu_chunk *chunk, int page_start, int page_end) 658 645 { 659 - unsigned int last = num_possible_cpus() - 1; 646 + unsigned int last = nr_cpu_ids - 1; 660 647 unsigned int cpu; 661 648 int err; 662 649 ··· 762 749 chunk->map[chunk->map_used++] = pcpu_unit_size; 763 750 chunk->page = chunk->page_ar; 764 751 765 - chunk->vm = get_vm_area(pcpu_chunk_size, GFP_KERNEL); 752 + chunk->vm = get_vm_area(pcpu_chunk_size, VM_ALLOC); 766 753 if (!chunk->vm) { 767 754 free_pcpu_chunk(chunk); 768 755 return NULL; ··· 1080 1067 PFN_UP(size_sum)); 1081 1068 1082 1069 pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT; 1083 - pcpu_chunk_size = num_possible_cpus() * pcpu_unit_size; 1070 + pcpu_chunk_size = nr_cpu_ids * pcpu_unit_size; 1084 1071 pcpu_chunk_struct_size = sizeof(struct pcpu_chunk) 1085 - + num_possible_cpus() * pcpu_unit_pages * sizeof(struct page *); 1072 + + nr_cpu_ids * pcpu_unit_pages * sizeof(struct page *); 1086 1073 1087 1074 if (dyn_size < 0) 1088 1075 dyn_size = pcpu_unit_size - static_size - reserved_size; ··· 1261 1248 } else 1262 1249 pcpue_unit_size = max_t(size_t, pcpue_size, PCPU_MIN_UNIT_SIZE); 1263 1250 1264 - chunk_size = pcpue_unit_size * num_possible_cpus(); 1251 + chunk_size = pcpue_unit_size * nr_cpu_ids; 1265 1252 1266 1253 pcpue_ptr = __alloc_bootmem_nopanic(chunk_size, PAGE_SIZE, 1267 1254 __pa(MAX_DMA_ADDRESS)); ··· 1272 1259 } 1273 1260 1274 1261 /* return the leftover and copy */ 1275 - for_each_possible_cpu(cpu) { 1262 + for (cpu = 0; cpu < nr_cpu_ids; 
cpu++) { 1276 1263 void *ptr = pcpue_ptr + cpu * pcpue_unit_size; 1277 1264 1278 - free_bootmem(__pa(ptr + pcpue_size), 1279 - pcpue_unit_size - pcpue_size); 1280 - memcpy(ptr, __per_cpu_load, static_size); 1265 + if (cpu_possible(cpu)) { 1266 + free_bootmem(__pa(ptr + pcpue_size), 1267 + pcpue_unit_size - pcpue_size); 1268 + memcpy(ptr, __per_cpu_load, static_size); 1269 + } else 1270 + free_bootmem(__pa(ptr), pcpue_unit_size); 1281 1271 } 1282 1272 1283 1273 /* we're ready, commit */
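The chunk-lookup hunk above offsets a unit0-relative address into the current CPU's unit before resolving it through vmalloc_to_page(). The address arithmetic amounts to the following (pcpu_unit_addr is a made-up helper name; sizes are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Translate an address relative to percpu unit 0 into the same
 * offset within the unit belonging to `cpu`; unit_size stands in
 * for pcpu_unit_size. */
size_t pcpu_unit_addr(size_t unit0_addr, unsigned int cpu, size_t unit_size)
{
	return unit0_addr + (size_t)cpu * unit_size;
}
```

Any possible cpu id gives a mapped unit, which is why the hunk can use raw_smp_processor_id() without worrying about preemption.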
+1
mm/rmap.c
··· 358 358 */ 359 359 if (vma->vm_flags & VM_LOCKED) { 360 360 *mapcount = 1; /* break early from loop */ 361 + *vm_flags |= VM_LOCKED; 361 362 goto out_unmap; 362 363 } 363 364
+2 -2
mm/slub.c
··· 2594 2594 */ 2595 2595 void kmem_cache_destroy(struct kmem_cache *s) 2596 2596 { 2597 - if (s->flags & SLAB_DESTROY_BY_RCU) 2598 - rcu_barrier(); 2599 2597 down_write(&slub_lock); 2600 2598 s->refcount--; 2601 2599 if (!s->refcount) { ··· 2604 2606 "still has objects.\n", s->name, __func__); 2605 2607 dump_stack(); 2606 2608 } 2609 + if (s->flags & SLAB_DESTROY_BY_RCU) 2610 + rcu_barrier(); 2607 2611 sysfs_slab_remove(s); 2608 2612 } else 2609 2613 up_write(&slub_lock);
+7 -2
mm/vmscan.c
··· 630 630 631 631 referenced = page_referenced(page, 1, 632 632 sc->mem_cgroup, &vm_flags); 633 - /* In active use or really unfreeable? Activate it. */ 633 + /* 634 + * In active use or really unfreeable? Activate it. 635 + * If a page with PG_mlocked lost the isolation race, 636 + * try_to_unmap moves it to the unevictable list 637 + */ 634 638 if (sc->order <= PAGE_ALLOC_COSTLY_ORDER && 635 - referenced && page_mapping_inuse(page) 639 + referenced && page_mapping_inuse(page) 640 + && !(vm_flags & VM_LOCKED)) 636 641 goto activate_locked; 637 642 638 643 /*
+8 -13
net/9p/client.c
··· 60 60 p9_client_rpc(struct p9_client *c, int8_t type, const char *fmt, ...); 61 61 62 62 /** 63 - * v9fs_parse_options - parse mount options into session structure 64 - * @options: options string passed from mount 65 - * @v9ses: existing v9fs session information 63 + * parse_options - parse mount options into client structure 64 + * @opts: options string passed from mount 65 + * @clnt: existing v9fs client information 66 66 * 67 67 * Return 0 upon success, -ERRNO upon failure 68 68 */ ··· 232 232 233 233 /** 234 234 * p9_tag_init - setup tags structure and contents 235 - * @tags: tags structure from the client struct 235 + * @c: v9fs client struct 236 236 * 237 237 * This initializes the tags structure for each client instance. 238 238 * ··· 258 258 259 259 /** 260 260 * p9_tag_cleanup - cleans up tags structure and reclaims resources 261 - * @tags: tags structure from the client struct 261 + * @c: v9fs client struct 262 262 * 263 263 * This frees resources associated with the tags structure 264 264 * ··· 411 411 if (c->dotu) 412 412 err = -ecode; 413 413 414 - if (!err) { 414 + if (!err || !IS_ERR_VALUE(err)) 415 415 err = p9_errstr2errno(ename, strlen(ename)); 416 - 417 - /* string match failed */ 418 - if (!err) 419 - err = -ESERVERFAULT; 420 - } 421 416 422 417 P9_DPRINTK(P9_DEBUG_9P, "<<< RERROR (%d) %s\n", -ecode, ename); 423 418 ··· 425 430 426 431 /** 427 432 * p9_client_flush - flush (cancel) a request 428 - * c: client state 429 - * req: request to cancel 433 + * @c: client state 434 + * @oldreq: request to cancel 430 435 * 431 436 * This sents a flush for a particular requests and links 432 437 * the flush request to the original request. The current
+1 -1
net/9p/error.c
··· 239 239 errstr[len] = 0; 240 240 printk(KERN_ERR "%s: server reported unknown error %s\n", 241 241 __func__, errstr); 242 - errno = 1; 242 + errno = ESERVERFAULT; 243 243 } 244 244 245 245 return -errno;
+4 -4
net/9p/trans_fd.c
··· 119 119 * @wpos: write position for current frame 120 120 * @wsize: amount of data to write for current frame 121 121 * @wbuf: current write buffer 122 + * @poll_pending_link: pending links to be polled per conn 122 123 * @poll_wait: array of wait_q's for various worker threads 123 - * @poll_waddr: ???? 124 124 * @pt: poll state 125 125 * @rq: current read work 126 126 * @wq: current write work ··· 700 700 } 701 701 702 702 /** 703 - * parse_options - parse mount options into session structure 704 - * @options: options string passed from mount 705 - * @opts: transport-specific structure to parse options into 703 + * parse_opts - parse mount options into p9_fd_opts structure 704 + * @params: options string passed from mount 705 + * @opts: fd transport-specific structure to parse options into 706 706 * 707 707 * Returns 0 upon success, -ERRNO upon failure 708 708 */
+5 -4
net/9p/trans_rdma.c
··· 67 67 * @pd: Protection Domain pointer 68 68 * @qp: Queue Pair pointer 69 69 * @cq: Completion Queue pointer 70 + * @dm_mr: DMA Memory Region pointer 70 71 * @lkey: The local access only memory region key 71 72 * @timeout: Number of uSecs to wait for connection management events 72 73 * @sq_depth: The depth of the Send Queue 73 74 * @sq_sem: Semaphore for the SQ 74 75 * @rq_depth: The depth of the Receive Queue. 76 + * @rq_count: Count of requests in the Receive Queue. 75 77 * @addr: The remote peer's address 76 78 * @req_lock: Protects the active request list 77 - * @send_wait: Wait list when the SQ fills up 78 79 * @cm_done: Completion event for connection management tracking 79 80 */ 80 81 struct p9_trans_rdma { ··· 155 154 }; 156 155 157 156 /** 158 - * parse_options - parse mount options into session structure 159 - * @options: options string passed from mount 160 - * @opts: transport-specific structure to parse options into 157 + * parse_opts - parse mount options into rdma options structure 158 + * @params: options string passed from mount 159 + * @opts: rdma transport-specific structure to parse options into 161 160 * 162 161 * Returns 0 upon success, -ERRNO upon failure 163 162 */
+4 -7
net/9p/trans_virtio.c
··· 57 57 * @initialized: whether the channel is initialized 58 58 * @inuse: whether the channel is in use 59 59 * @lock: protects multiple elements within this structure 60 + * @client: client instance 60 61 * @vdev: virtio dev associated with this channel 61 62 * @vq: virtio queue associated with this channel 62 - * @tagpool: accounting for tag ids (and request slots) 63 - * @reqs: array of request slots 64 - * @max_tag: current number of request_slots allocated 65 63 * @sg: scatter gather list which is used to pack a request (protected?) 66 64 * 67 65 * We keep all per-channel information in a structure. ··· 90 92 91 93 /** 92 94 * p9_virtio_close - reclaim resources of a channel 93 - * @trans: transport state 95 + * @client: client instance 94 96 * 95 97 * This reclaims a channel by freeing its resources and 96 98 * reseting its inuse flag. ··· 179 181 180 182 /** 181 183 * p9_virtio_request - issue a request 182 - * @t: transport state 183 - * @tc: &p9_fcall request to transmit 184 - * @rc: &p9_fcall to put reponse into 184 + * @client: client instance issuing the request 185 + * @req: request to be issued 185 186 * 186 187 */ 187 188
+1
net/appletalk/ddp.c
··· 1238 1238 return -ENOBUFS; 1239 1239 1240 1240 *uaddr_len = sizeof(struct sockaddr_at); 1241 + memset(&sat.sat_zero, 0, sizeof(sat.sat_zero)); 1241 1242 1242 1243 if (peer) { 1243 1244 if (sk->sk_state != TCP_ESTABLISHED)
+1
net/can/raw.c
··· 401 401 if (peer) 402 402 return -EOPNOTSUPP; 403 403 404 + memset(addr, 0, sizeof(*addr)); 404 405 addr->can_family = AF_CAN; 405 406 addr->can_ifindex = ro->ifindex; 406 407
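This raw CAN fix, and the matching one-liners in AppleTalk, IrDA, LLC, NetRom, ROSE and Econet below, all follow one pattern: zero the sockaddr before filling individual fields, so compiler-inserted padding never carries kernel stack bytes out to userspace. A minimal illustration with a hypothetical address struct:

```c
#include <assert.h>
#include <string.h>

struct demo_sockaddr {
	unsigned short family;
	/* the compiler may insert padding here */
	int ifindex;
	char tail[6]; /* stands in for unused trailing fields */
};

/* Fill the address the safe way: clear everything first. */
void fill_addr(struct demo_sockaddr *addr, int ifindex)
{
	memset(addr, 0, sizeof(*addr));
	addr->family = 29; /* AF_CAN */
	addr->ifindex = ifindex;
}
```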
+6 -6
net/core/gen_estimator.c
··· 81 81 struct gen_estimator 82 82 { 83 83 struct list_head list; 84 - struct gnet_stats_basic *bstats; 84 + struct gnet_stats_basic_packed *bstats; 85 85 struct gnet_stats_rate_est *rate_est; 86 86 spinlock_t *stats_lock; 87 87 int ewma_log; ··· 165 165 } 166 166 167 167 static 168 - struct gen_estimator *gen_find_node(const struct gnet_stats_basic *bstats, 168 + struct gen_estimator *gen_find_node(const struct gnet_stats_basic_packed *bstats, 169 169 const struct gnet_stats_rate_est *rate_est) 170 170 { 171 171 struct rb_node *p = est_root.rb_node; ··· 202 202 * 203 203 * NOTE: Called under rtnl_mutex 204 204 */ 205 - int gen_new_estimator(struct gnet_stats_basic *bstats, 205 + int gen_new_estimator(struct gnet_stats_basic_packed *bstats, 206 206 struct gnet_stats_rate_est *rate_est, 207 207 spinlock_t *stats_lock, 208 208 struct nlattr *opt) ··· 262 262 * 263 263 * NOTE: Called under rtnl_mutex 264 264 */ 265 - void gen_kill_estimator(struct gnet_stats_basic *bstats, 265 + void gen_kill_estimator(struct gnet_stats_basic_packed *bstats, 266 266 struct gnet_stats_rate_est *rate_est) 267 267 { 268 268 struct gen_estimator *e; ··· 292 292 * 293 293 * Returns 0 on success or a negative error code. 294 294 */ 295 - int gen_replace_estimator(struct gnet_stats_basic *bstats, 295 + int gen_replace_estimator(struct gnet_stats_basic_packed *bstats, 296 296 struct gnet_stats_rate_est *rate_est, 297 297 spinlock_t *stats_lock, struct nlattr *opt) 298 298 { ··· 308 308 * 309 309 * Returns true if estimator is active, and false if not. 310 310 */ 311 - bool gen_estimator_active(const struct gnet_stats_basic *bstats, 311 + bool gen_estimator_active(const struct gnet_stats_basic_packed *bstats, 312 312 const struct gnet_stats_rate_est *rate_est) 313 313 { 314 314 ASSERT_RTNL();
+8 -3
net/core/gen_stats.c
··· 106 106 * if the room in the socket buffer was not sufficient. 107 107 */ 108 108 int 109 - gnet_stats_copy_basic(struct gnet_dump *d, struct gnet_stats_basic *b) 109 + gnet_stats_copy_basic(struct gnet_dump *d, struct gnet_stats_basic_packed *b) 110 110 { 111 111 if (d->compat_tc_stats) { 112 112 d->tc_stats.bytes = b->bytes; 113 113 d->tc_stats.packets = b->packets; 114 114 } 115 115 116 - if (d->tail) 117 - return gnet_stats_copy(d, TCA_STATS_BASIC, b, sizeof(*b)); 116 + if (d->tail) { 117 + struct gnet_stats_basic sb; 118 118 119 + memset(&sb, 0, sizeof(sb)); 120 + sb.bytes = b->bytes; 121 + sb.packets = b->packets; 122 + return gnet_stats_copy(d, TCA_STATS_BASIC, &sb, sizeof(sb)); 123 + } 119 124 return 0; 120 125 } 121 126
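gnet_stats_copy_basic() now copies the internal packed counters into a zeroed on-stack struct before export, so any padding in the wire-format struct is cleared rather than exported as-is. A sketch of the pattern with simplified struct layouts (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>

/* Internal accounting struct, packed: no padding between fields. */
struct basic_packed {
	uint64_t bytes;
	uint32_t packets;
} __attribute__((packed));

/* Exported struct; the compiler may pad it out after `packets`. */
struct basic_wire {
	uint64_t bytes;
	uint32_t packets;
};

/* Copy field by field into a zeroed wire struct so padding is clean. */
void copy_basic(struct basic_wire *out, const struct basic_packed *in)
{
	memset(out, 0, sizeof(*out));
	out->bytes = in->bytes;
	out->packets = in->packets;
}
```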
+5
net/core/netpoll.c
··· 319 319 320 320 udelay(USEC_PER_POLL); 321 321 } 322 + 323 + WARN_ONCE(!irqs_disabled(), 324 + "netpoll_send_skb(): %s enabled interrupts in poll (%pF)\n", 325 + dev->name, ops->ndo_start_xmit); 326 + 322 327 local_irq_restore(flags); 323 328 } 324 329
+1 -1
net/core/sock.c
··· 1025 1025 sk->sk_prot = sk->sk_prot_creator = prot; 1026 1026 sock_lock_init(sk); 1027 1027 sock_net_set(sk, get_net(net)); 1028 + atomic_set(&sk->sk_wmem_alloc, 1); 1028 1029 } 1029 1030 1030 1031 return sk; ··· 1873 1872 */ 1874 1873 smp_wmb(); 1875 1874 atomic_set(&sk->sk_refcnt, 1); 1876 - atomic_set(&sk->sk_wmem_alloc, 1); 1877 1875 atomic_set(&sk->sk_drops, 0); 1878 1876 } 1879 1877 EXPORT_SYMBOL(sock_init_data);
+1
net/dccp/proto.c
··· 1159 1159 kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep); 1160 1160 dccp_ackvec_exit(); 1161 1161 dccp_sysctl_exit(); 1162 + percpu_counter_destroy(&dccp_orphan_count); 1162 1163 } 1163 1164 1164 1165 module_init(dccp_init);
+1
net/econet/af_econet.c
··· 520 520 if (peer) 521 521 return -EOPNOTSUPP; 522 522 523 + memset(sec, 0, sizeof(*sec)); 523 524 mutex_lock(&econet_mutex); 524 525 525 526 sk = sock->sk;
+5 -3
net/ieee802154/af_ieee802154.c
··· 136 136 unsigned int cmd) 137 137 { 138 138 struct ifreq ifr; 139 - int ret = -EINVAL; 139 + int ret = -ENOIOCTLCMD; 140 140 struct net_device *dev; 141 141 142 142 if (copy_from_user(&ifr, arg, sizeof(struct ifreq))) ··· 146 146 147 147 dev_load(sock_net(sk), ifr.ifr_name); 148 148 dev = dev_get_by_name(sock_net(sk), ifr.ifr_name); 149 - if (dev->type == ARPHRD_IEEE802154 || 150 - dev->type == ARPHRD_IEEE802154_PHY) 149 + 150 + if ((dev->type == ARPHRD_IEEE802154 || 151 + dev->type == ARPHRD_IEEE802154_PHY) && 152 + dev->netdev_ops->ndo_do_ioctl) 151 153 ret = dev->netdev_ops->ndo_do_ioctl(dev, &ifr, cmd); 152 154 153 155 if (!ret && copy_to_user(arg, &ifr, sizeof(struct ifreq)))
+14
net/ieee802154/dgram.c
··· 377 377 return ret; 378 378 } 379 379 380 + static int dgram_getsockopt(struct sock *sk, int level, int optname, 381 + char __user *optval, int __user *optlen) 382 + { 383 + return -EOPNOTSUPP; 384 + } 385 + 386 + static int dgram_setsockopt(struct sock *sk, int level, int optname, 387 + char __user *optval, int __user optlen) 388 + { 389 + return -EOPNOTSUPP; 390 + } 391 + 380 392 struct proto ieee802154_dgram_prot = { 381 393 .name = "IEEE-802.15.4-MAC", 382 394 .owner = THIS_MODULE, ··· 403 391 .connect = dgram_connect, 404 392 .disconnect = dgram_disconnect, 405 393 .ioctl = dgram_ioctl, 394 + .getsockopt = dgram_getsockopt, 395 + .setsockopt = dgram_setsockopt, 406 396 }; 407 397
+14
net/ieee802154/raw.c
··· 238 238 read_unlock(&raw_lock); 239 239 } 240 240 241 + static int raw_getsockopt(struct sock *sk, int level, int optname, 242 + char __user *optval, int __user *optlen) 243 + { 244 + return -EOPNOTSUPP; 245 + } 246 + 247 + static int raw_setsockopt(struct sock *sk, int level, int optname, 248 + char __user *optval, int __user optlen) 249 + { 250 + return -EOPNOTSUPP; 251 + } 252 + 241 253 struct proto ieee802154_raw_prot = { 242 254 .name = "IEEE-802.15.4-RAW", 243 255 .owner = THIS_MODULE, ··· 262 250 .unhash = raw_unhash, 263 251 .connect = raw_connect, 264 252 .disconnect = raw_disconnect, 253 + .getsockopt = raw_getsockopt, 254 + .setsockopt = raw_setsockopt, 265 255 }; 266 256
+1 -1
net/ipv4/ip_gre.c
··· 951 951 addend += 4; 952 952 } 953 953 dev->needed_headroom = addend + hlen; 954 - mtu -= dev->hard_header_len - addend; 954 + mtu -= dev->hard_header_len + addend; 955 955 956 956 if (mtu < 68) 957 957 mtu = 68;
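The one-character GRE fix above turns a subtraction of a difference into a subtraction of a sum: the tunnel MTU must shrink by the link header length plus the GRE header addend. In numbers (header sizes here are illustrative):

```c
#include <assert.h>

/* Compute the tunnel MTU left after the link header and GRE addend. */
int gre_mtu(int link_mtu, int hard_header_len, int addend)
{
	int mtu = link_mtu - (hard_header_len + addend); /* fixed form */

	if (mtu < 68)
		mtu = 68; /* floor used by the hunk above */
	return mtu;
}
```

With link_mtu 1500, hard_header_len 14 and addend 24, the buggy form `mtu - (hard_header_len - addend)` would *grow* the MTU to 1510; the fix yields 1462.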
+2
net/ipv4/ip_output.c
··· 813 813 inet->cork.addr = ipc->addr; 814 814 } 815 815 rt = *rtp; 816 + if (unlikely(!rt)) 817 + return -EFAULT; 816 818 /* 817 819 * We steal reference to this route, caller should not release it 818 820 */
+3 -1
net/ipv6/af_inet6.c
··· 306 306 v4addr != htonl(INADDR_ANY) && 307 307 chk_addr_ret != RTN_LOCAL && 308 308 chk_addr_ret != RTN_MULTICAST && 309 - chk_addr_ret != RTN_BROADCAST) 309 + chk_addr_ret != RTN_BROADCAST) { 310 + err = -EADDRNOTAVAIL; 310 311 goto out; 312 + } 311 313 } else { 312 314 if (addr_type != IPV6_ADDR_ANY) { 313 315 struct net_device *dev = NULL;
+1
net/irda/af_irda.c
··· 715 715 struct sock *sk = sock->sk; 716 716 struct irda_sock *self = irda_sk(sk); 717 717 718 + memset(&saddr, 0, sizeof(saddr)); 718 719 if (peer) { 719 720 if (sk->sk_state != TCP_ESTABLISHED) 720 721 return -ENOTCONN;
+1
net/llc/af_llc.c
··· 914 914 struct llc_sock *llc = llc_sk(sk); 915 915 int rc = 0; 916 916 917 + memset(&sllc, 0, sizeof(sllc)); 917 918 lock_sock(sk); 918 919 if (sock_flag(sk, SOCK_ZAPPED)) 919 920 goto out;
+8
net/mac80211/agg-tx.c
··· 381 381 &local->hw, queue, 382 382 IEEE80211_QUEUE_STOP_REASON_AGGREGATION); 383 383 384 + if (!(sta->ampdu_mlme.tid_state_tx[tid] & HT_ADDBA_REQUESTED_MSK)) 385 + return; 386 + 387 + if (WARN(!sta->ampdu_mlme.tid_tx[tid], 388 + "TID %d gone but expected when splicing aggregates from" 389 + "the pending queue\n", tid)) 390 + return; 391 + 384 392 if (!skb_queue_empty(&sta->ampdu_mlme.tid_tx[tid]->pending)) { 385 393 spin_lock_irqsave(&local->queue_stop_reason_lock, flags); 386 394 /* mark queue as pending, it is stopped already */
+15 -13
net/mac80211/key.c
··· 67 67 * 68 68 * @key: key to add to do item for 69 69 * @flag: todo flag(s) 70 + * 71 + * Must be called with IRQs or softirqs disabled. 70 72 */ 71 73 static void add_todo(struct ieee80211_key *key, u32 flag) 72 74 { ··· 142 140 ret = drv_set_key(key->local, SET_KEY, &sdata->vif, sta, &key->conf); 143 141 144 142 if (!ret) { 145 - spin_lock(&todo_lock); 143 + spin_lock_bh(&todo_lock); 146 144 key->flags |= KEY_FLAG_UPLOADED_TO_HARDWARE; 147 - spin_unlock(&todo_lock); 145 + spin_unlock_bh(&todo_lock); 148 146 } 149 147 150 148 if (ret && ret != -ENOSPC && ret != -EOPNOTSUPP) ··· 166 164 if (!key || !key->local->ops->set_key) 167 165 return; 168 166 169 - spin_lock(&todo_lock); 167 + spin_lock_bh(&todo_lock); 170 168 if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)) { 171 - spin_unlock(&todo_lock); 169 + spin_unlock_bh(&todo_lock); 172 170 return; 173 171 } 174 - spin_unlock(&todo_lock); 172 + spin_unlock_bh(&todo_lock); 175 173 176 174 sta = get_sta_for_key(key); 177 175 sdata = key->sdata; ··· 190 188 wiphy_name(key->local->hw.wiphy), 191 189 key->conf.keyidx, sta ? 
sta->addr : bcast_addr, ret); 192 190 193 - spin_lock(&todo_lock); 191 + spin_lock_bh(&todo_lock); 194 192 key->flags &= ~KEY_FLAG_UPLOADED_TO_HARDWARE; 195 - spin_unlock(&todo_lock); 193 + spin_unlock_bh(&todo_lock); 196 194 } 197 195 198 196 static void __ieee80211_set_default_key(struct ieee80211_sub_if_data *sdata, ··· 439 437 440 438 __ieee80211_key_replace(sdata, sta, old_key, key); 441 439 442 - spin_unlock_irqrestore(&sdata->local->key_lock, flags); 443 - 444 440 /* free old key later */ 445 441 add_todo(old_key, KEY_FLAG_TODO_DELETE); 446 442 447 443 add_todo(key, KEY_FLAG_TODO_ADD_DEBUGFS); 448 444 if (netif_running(sdata->dev)) 449 445 add_todo(key, KEY_FLAG_TODO_HWACCEL_ADD); 446 + 447 + spin_unlock_irqrestore(&sdata->local->key_lock, flags); 450 448 } 451 449 452 450 static void __ieee80211_key_free(struct ieee80211_key *key) ··· 549 547 */ 550 548 synchronize_rcu(); 551 549 552 - spin_lock(&todo_lock); 550 + spin_lock_bh(&todo_lock); 553 551 while (!list_empty(&todo_list)) { 554 552 key = list_first_entry(&todo_list, struct ieee80211_key, todo); 555 553 list_del_init(&key->todo); ··· 560 558 KEY_FLAG_TODO_HWACCEL_REMOVE | 561 559 KEY_FLAG_TODO_DELETE); 562 560 key->flags &= ~todoflags; 563 - spin_unlock(&todo_lock); 561 + spin_unlock_bh(&todo_lock); 564 562 565 563 work_done = false; 566 564 ··· 593 591 594 592 WARN_ON(!work_done); 595 593 596 - spin_lock(&todo_lock); 594 + spin_lock_bh(&todo_lock); 597 595 } 598 - spin_unlock(&todo_lock); 596 + spin_unlock_bh(&todo_lock); 599 597 } 600 598 601 599 void ieee80211_key_todo(void)
+1 -1
net/netfilter/xt_RATEEST.c
··· 74 74 xt_rateest_tg(struct sk_buff *skb, const struct xt_target_param *par) 75 75 { 76 76 const struct xt_rateest_target_info *info = par->targinfo; 77 - struct gnet_stats_basic *stats = &info->est->bstats; 77 + struct gnet_stats_basic_packed *stats = &info->est->bstats; 78 78 79 79 spin_lock_bh(&info->est->lock); 80 80 stats->bytes += skb->len;
+1 -1
net/netfilter/xt_quota.c
··· 52 52 53 53 q->master = kmalloc(sizeof(*q->master), GFP_KERNEL); 54 54 if (q->master == NULL) 55 - return -ENOMEM; 55 + return false; 56 56 57 57 q->master->quota = q->quota; 58 58 return true;
+1
net/netrom/af_netrom.c
··· 847 847 sax->fsa_ax25.sax25_family = AF_NETROM; 848 848 sax->fsa_ax25.sax25_ndigis = 1; 849 849 sax->fsa_ax25.sax25_call = nr->user_addr; 850 + memset(sax->fsa_digipeater, 0, sizeof(sax->fsa_digipeater)); 850 851 sax->fsa_digipeater[0] = nr->dest_addr; 851 852 *uaddr_len = sizeof(struct full_sockaddr_ax25); 852 853 } else {
+12 -9
net/netrom/nr_route.c
··· 630 630 return dev; 631 631 } 632 632 633 - static ax25_digi *nr_call_to_digi(int ndigis, ax25_address *digipeaters) 633 + static ax25_digi *nr_call_to_digi(ax25_digi *digi, int ndigis, 634 + ax25_address *digipeaters) 634 635 { 635 - static ax25_digi ax25_digi; 636 636 int i; 637 637 638 638 if (ndigis == 0) 639 639 return NULL; 640 640 641 641 for (i = 0; i < ndigis; i++) { 642 - ax25_digi.calls[i] = digipeaters[i]; 643 - ax25_digi.repeated[i] = 0; 642 + digi->calls[i] = digipeaters[i]; 643 + digi->repeated[i] = 0; 644 644 } 645 645 646 - ax25_digi.ndigi = ndigis; 647 - ax25_digi.lastrepeat = -1; 646 + digi->ndigi = ndigis; 647 + digi->lastrepeat = -1; 648 648 649 - return &ax25_digi; 649 + return digi; 650 650 } 651 651 652 652 /* ··· 656 656 { 657 657 struct nr_route_struct nr_route; 658 658 struct net_device *dev; 659 + ax25_digi digi; 659 660 int ret; 660 661 661 662 switch (cmd) { ··· 674 673 ret = nr_add_node(&nr_route.callsign, 675 674 nr_route.mnemonic, 676 675 &nr_route.neighbour, 677 - nr_call_to_digi(nr_route.ndigis, nr_route.digipeaters), 676 + nr_call_to_digi(&digi, nr_route.ndigis, 677 + nr_route.digipeaters), 678 678 dev, nr_route.quality, 679 679 nr_route.obs_count); 680 680 break; 681 681 case NETROM_NEIGH: 682 682 ret = nr_add_neigh(&nr_route.callsign, 683 - nr_call_to_digi(nr_route.ndigis, nr_route.digipeaters), 683 + nr_call_to_digi(&digi, nr_route.ndigis, 684 + nr_route.digipeaters), 684 685 dev, nr_route.quality); 685 686 break; 686 687 default:
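nr_call_to_digi() used to return a pointer to a function-local `static` ax25_digi, which two concurrent ioctl callers could overwrite under each other; the fix threads a caller-owned buffer through instead. The shape of that refactor, with a simplified digi struct:

```c
#include <assert.h>
#include <stddef.h>

struct demo_digi {
	int calls[8];
	int ndigi;
};

/* Caller supplies the storage, so concurrent calls cannot clobber
 * each other's result (unlike the old `static` local). */
struct demo_digi *fill_digi(struct demo_digi *digi, int ndigis,
			    const int *calls)
{
	int i;

	if (ndigis == 0)
		return NULL;
	for (i = 0; i < ndigis; i++)
		digi->calls[i] = calls[i];
	digi->ndigi = ndigis;
	return digi;
}
```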
+1 -1
net/phonet/pn_dev.c
··· 96 96 { 97 97 struct phonet_device_list *pndevs = phonet_device_list(net); 98 98 struct phonet_device *pnd; 99 - struct net_device *dev; 99 + struct net_device *dev = NULL; 100 100 101 101 spin_lock_bh(&pndevs->lock); 102 102 list_for_each_entry(pnd, &pndevs->list, list) {
+1
net/rose/af_rose.c
··· 954 954 struct rose_sock *rose = rose_sk(sk); 955 955 int n; 956 956 957 + memset(srose, 0, sizeof(*srose)); 957 958 if (peer != 0) { 958 959 if (sk->sk_state != TCP_ESTABLISHED) 959 960 return -ENOTCONN;
+2
net/sched/sch_api.c
··· 1456 1456 nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*tcm), flags); 1457 1457 tcm = NLMSG_DATA(nlh); 1458 1458 tcm->tcm_family = AF_UNSPEC; 1459 + tcm->tcm__pad1 = 0; 1460 + tcm->tcm__pad2 = 0; 1459 1461 tcm->tcm_ifindex = qdisc_dev(q)->ifindex; 1460 1462 tcm->tcm_parent = q->handle; 1461 1463 tcm->tcm_handle = q->handle;
+1 -1
net/sched/sch_atm.c
··· 49 49 struct socket *sock; /* for closing */ 50 50 u32 classid; /* x:y type ID */ 51 51 int ref; /* reference count */ 52 - struct gnet_stats_basic bstats; 52 + struct gnet_stats_basic_packed bstats; 53 53 struct gnet_stats_queue qstats; 54 54 struct atm_flow_data *next; 55 55 struct atm_flow_data *excess; /* flow for excess traffic;
+1 -1
net/sched/sch_cbq.c
··· 128 128 long avgidle; 129 129 long deficit; /* Saved deficit for WRR */ 130 130 psched_time_t penalized; 131 - struct gnet_stats_basic bstats; 131 + struct gnet_stats_basic_packed bstats; 132 132 struct gnet_stats_queue qstats; 133 133 struct gnet_stats_rate_est rate_est; 134 134 struct tc_cbq_xstats xstats;
+1 -1
net/sched/sch_drr.c
··· 22 22 unsigned int refcnt; 23 23 unsigned int filter_cnt; 24 24 25 - struct gnet_stats_basic bstats; 25 + struct gnet_stats_basic_packed bstats; 26 26 struct gnet_stats_queue qstats; 27 27 struct gnet_stats_rate_est rate_est; 28 28 struct list_head alist;
+1 -1
net/sched/sch_hfsc.c
··· 116 116 struct Qdisc_class_common cl_common; 117 117 unsigned int refcnt; /* usage count */ 118 118 119 - struct gnet_stats_basic bstats; 119 + struct gnet_stats_basic_packed bstats; 120 120 struct gnet_stats_queue qstats; 121 121 struct gnet_stats_rate_est rate_est; 122 122 unsigned int level; /* class level in hierarchy */
+1 -1
net/sched/sch_htb.c
··· 74 74 struct htb_class { 75 75 struct Qdisc_class_common common; 76 76 /* general class parameters */ 77 - struct gnet_stats_basic bstats; 77 + struct gnet_stats_basic_packed bstats; 78 78 struct gnet_stats_queue qstats; 79 79 struct gnet_stats_rate_est rate_est; 80 80 struct tc_htb_xstats xstats; /* our special stats */
+1
net/sctp/protocol.c
··· 160 160 remove_proc_entry("sctp", init_net.proc_net); 161 161 } 162 162 #endif 163 + percpu_counter_destroy(&sctp_sockets_allocated); 163 164 } 164 165 165 166 /* Private helper to extract ipv4 address and stash them in
+1 -1
net/socket.c
··· 736 736 if (more) 737 737 flags |= MSG_MORE; 738 738 739 - return sock->ops->sendpage(sock, page, offset, size, flags); 739 + return kernel_sendpage(sock, page, offset, size, flags); 740 740 } 741 741 742 742 static ssize_t sock_splice_read(struct file *file, loff_t *ppos,
+1
net/sunrpc/clnt.c
··· 937 937 rpc_task_force_reencode(struct rpc_task *task) 938 938 { 939 939 task->tk_rqstp->rq_snd_buf.len = 0; 940 + task->tk_rqstp->rq_bytes_sent = 0; 940 941 } 941 942 942 943 static inline void
+1 -1
net/xfrm/xfrm_hash.h
··· 16 16 17 17 static inline unsigned int __xfrm4_daddr_saddr_hash(xfrm_address_t *daddr, xfrm_address_t *saddr) 18 18 { 19 - return ntohl(daddr->a4 ^ saddr->a4); 19 + return ntohl(daddr->a4 + saddr->a4); 20 20 } 21 21 22 22 static inline unsigned int __xfrm6_daddr_saddr_hash(xfrm_address_t *daddr, xfrm_address_t *saddr)
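The xfrm_hash change swaps XOR for addition when mixing source and destination addresses. One property XOR has that addition avoids: identical inputs cancel to zero, so every flow whose saddr equals its daddr would land in the same bucket. Both mixing steps side by side:

```c
#include <assert.h>
#include <stdint.h>

/* Old mixing step: collapses saddr == daddr to zero. */
uint32_t hash_mix_xor(uint32_t daddr, uint32_t saddr)
{
	return daddr ^ saddr;
}

/* Fixed mixing step from the hunk above. */
uint32_t hash_mix_add(uint32_t daddr, uint32_t saddr)
{
	return daddr + saddr;
}
```

Addition is still symmetric in its arguments; the fix targets the degenerate cancellation, not argument ordering.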
+16
security/Kconfig
··· 113 113 114 114 If you are unsure how to answer this question, answer N. 115 115 116 + config LSM_MMAP_MIN_ADDR 117 + int "Low address space for LSM to protect from user allocation" 118 + depends on SECURITY && SECURITY_SELINUX 119 + default 65536 120 + help 121 + This is the portion of low virtual memory which should be protected 122 + from userspace allocation. Keeping a user from writing to low pages 123 + can help reduce the impact of kernel NULL pointer bugs. 124 + 125 + For most ia64, ppc64 and x86 users with lots of address space 126 + a value of 65536 is reasonable and should cause no problems. 127 + On arm and other archs it should not be higher than 32768. 128 + Programs which use vm86 functionality or have some need to map 129 + this low address space will need permission from the 130 + LSM in use on the system. 131 + 116 132 source security/selinux/Kconfig 117 133 source security/smack/Kconfig 118 134 source security/tomoyo/Kconfig
+1 -1
security/Makefile
··· 8 8 subdir-$(CONFIG_SECURITY_TOMOYO) += tomoyo 9 9 10 10 # always enable default capabilities 11 - obj-y += commoncap.o 11 + obj-y += commoncap.o min_addr.o 12 12 13 13 # Object file lists 14 14 obj-$(CONFIG_SECURITY) += security.o capability.o
-9
security/capability.c
··· 330 330 return 0; 331 331 } 332 332 333 - static int cap_file_mmap(struct file *file, unsigned long reqprot, 334 - unsigned long prot, unsigned long flags, 335 - unsigned long addr, unsigned long addr_only) 336 - { 337 - if ((addr < mmap_min_addr) && !capable(CAP_SYS_RAWIO)) 338 - return -EACCES; 339 - return 0; 340 - } 341 - 342 333 static int cap_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot, 343 334 unsigned long prot) 344 335 {
+30
security/commoncap.c
··· 984 984 cap_sys_admin = 1; 985 985 return __vm_enough_memory(mm, pages, cap_sys_admin); 986 986 } 987 + 988 + /* 989 + * cap_file_mmap - check if able to map given addr 990 + * @file: unused 991 + * @reqprot: unused 992 + * @prot: unused 993 + * @flags: unused 994 + * @addr: address attempting to be mapped 995 + * @addr_only: unused 996 + * 997 + * If the process is attempting to map memory below mmap_min_addr they need 998 + * CAP_SYS_RAWIO. The other parameters to this function are unused by the 999 + * capability security module. Returns 0 if this mapping should be allowed 1000 + * -EPERM if not. 1001 + */ 1002 + int cap_file_mmap(struct file *file, unsigned long reqprot, 1003 + unsigned long prot, unsigned long flags, 1004 + unsigned long addr, unsigned long addr_only) 1005 + { 1006 + int ret = 0; 1007 + 1008 + if (addr < dac_mmap_min_addr) { 1009 + ret = cap_capable(current, current_cred(), CAP_SYS_RAWIO, 1010 + SECURITY_CAP_AUDIT); 1011 + /* set PF_SUPERPRIV if it turns out we allow the low mmap */ 1012 + if (ret == 0) 1013 + current->flags |= PF_SUPERPRIV; 1014 + } 1015 + return ret; 1016 + }
+4 -2
security/integrity/ima/ima_crypto.c
··· 45 45 { 46 46 struct hash_desc desc; 47 47 struct scatterlist sg[1]; 48 - loff_t i_size; 48 + loff_t i_size, offset = 0; 49 49 char *rbuf; 50 - int rc, offset = 0; 50 + int rc; 51 51 52 52 rc = init_desc(&desc); 53 53 if (rc != 0) ··· 67 67 rc = rbuf_len; 68 68 break; 69 69 } 70 + if (rbuf_len == 0) 71 + break; 70 72 offset += rbuf_len; 71 73 sg_init_one(sg, rbuf, rbuf_len); 72 74
+4
security/integrity/ima/ima_main.c
··· 262 262 else if (mask & (MAY_READ | MAY_EXEC)) 263 263 iint->readcount--; 264 264 mutex_unlock(&iint->mutex); 265 + 266 + kref_put(&iint->refcount, iint_free); 265 267 } 266 268 267 269 /* ··· 293 291 if (file->f_mode & FMODE_WRITE) 294 292 iint->writecount++; 295 293 mutex_unlock(&iint->mutex); 294 + 295 + kref_put(&iint->refcount, iint_free); 296 296 } 297 297 EXPORT_SYMBOL_GPL(ima_counts_get); 298 298
+49
security/min_addr.c
··· 1 + #include <linux/init.h> 2 + #include <linux/mm.h> 3 + #include <linux/security.h> 4 + #include <linux/sysctl.h> 5 + 6 + /* amount of vm to protect from userspace access by both DAC and the LSM*/ 7 + unsigned long mmap_min_addr; 8 + /* amount of vm to protect from userspace using CAP_SYS_RAWIO (DAC) */ 9 + unsigned long dac_mmap_min_addr = CONFIG_DEFAULT_MMAP_MIN_ADDR; 10 + /* amount of vm to protect from userspace using the LSM = CONFIG_LSM_MMAP_MIN_ADDR */ 11 + 12 + /* 13 + * Update mmap_min_addr = max(dac_mmap_min_addr, CONFIG_LSM_MMAP_MIN_ADDR) 14 + */ 15 + static void update_mmap_min_addr(void) 16 + { 17 + #ifdef CONFIG_LSM_MMAP_MIN_ADDR 18 + if (dac_mmap_min_addr > CONFIG_LSM_MMAP_MIN_ADDR) 19 + mmap_min_addr = dac_mmap_min_addr; 20 + else 21 + mmap_min_addr = CONFIG_LSM_MMAP_MIN_ADDR; 22 + #else 23 + mmap_min_addr = dac_mmap_min_addr; 24 + #endif 25 + } 26 + 27 + /* 28 + * sysctl handler which just sets dac_mmap_min_addr = the new value and then 29 + * calls update_mmap_min_addr() so non MAP_FIXED hints get rounded properly 30 + */ 31 + int mmap_min_addr_handler(struct ctl_table *table, int write, struct file *filp, 32 + void __user *buffer, size_t *lenp, loff_t *ppos) 33 + { 34 + int ret; 35 + 36 + ret = proc_doulongvec_minmax(table, write, filp, buffer, lenp, ppos); 37 + 38 + update_mmap_min_addr(); 39 + 40 + return ret; 41 + } 42 + 43 + int __init init_mmap_min_addr(void) 44 + { 45 + update_mmap_min_addr(); 46 + 47 + return 0; 48 + } 49 + pure_initcall(init_mmap_min_addr);
+15 -2
security/selinux/hooks.c
··· 1285 1285 rc = inode->i_op->getxattr(dentry, XATTR_NAME_SELINUX, 1286 1286 context, len); 1287 1287 if (rc == -ERANGE) { 1288 + kfree(context); 1289 + 1288 1290 /* Need a larger buffer. Query for the right size. */ 1289 1291 rc = inode->i_op->getxattr(dentry, XATTR_NAME_SELINUX, 1290 1292 NULL, 0); ··· 1294 1292 dput(dentry); 1295 1293 goto out_unlock; 1296 1294 } 1297 - kfree(context); 1298 1295 len = rc; 1299 1296 context = kmalloc(len+1, GFP_NOFS); 1300 1297 if (!context) { ··· 3030 3029 int rc = 0; 3031 3030 u32 sid = current_sid(); 3032 3031 3033 - if (addr < mmap_min_addr) 3032 + /* 3033 + * notice that we are intentionally putting the SELinux check before 3034 + * the secondary cap_file_mmap check. This is such a likely attempt 3035 + * at bad behaviour/exploit that we always want to get the AVC, even 3036 + * if DAC would have also denied the operation. 3037 + */ 3038 + if (addr < CONFIG_LSM_MMAP_MIN_ADDR) { 3034 3039 rc = avc_has_perm(sid, sid, SECCLASS_MEMPROTECT, 3035 3040 MEMPROTECT__MMAP_ZERO, NULL); 3041 + if (rc) 3042 + return rc; 3043 + } 3044 + 3045 + /* do DAC check on address space usage */ 3046 + rc = cap_file_mmap(file, reqprot, prot, flags, addr, addr_only); 3036 3047 if (rc || addr_only) 3037 3048 return rc; 3038 3049
+8 -31
sound/core/pcm_lib.c
··· 943 943 int snd_interval_list(struct snd_interval *i, unsigned int count, unsigned int *list, unsigned int mask) 944 944 { 945 945 unsigned int k; 946 - int changed = 0; 946 + struct snd_interval list_range; 947 947 948 948 if (!count) { 949 949 i->empty = 1; 950 950 return -EINVAL; 951 951 } 952 + snd_interval_any(&list_range); 953 + list_range.min = UINT_MAX; 954 + list_range.max = 0; 952 955 for (k = 0; k < count; k++) { 953 956 if (mask && !(mask & (1 << k))) 954 957 continue; 955 - if (i->min == list[k] && !i->openmin) 956 - goto _l1; 957 - if (i->min < list[k]) { 958 - i->min = list[k]; 959 - i->openmin = 0; 960 - changed = 1; 961 - goto _l1; 962 - } 963 - } 964 - i->empty = 1; 965 - return -EINVAL; 966 - _l1: 967 - for (k = count; k-- > 0;) { 968 - if (mask && !(mask & (1 << k))) 958 + if (!snd_interval_test(i, list[k])) 969 959 continue; 970 - if (i->max == list[k] && !i->openmax) 971 - goto _l2; 972 - if (i->max > list[k]) { 973 - i->max = list[k]; 974 - i->openmax = 0; 975 - changed = 1; 976 - goto _l2; 977 - } 960 + list_range.min = min(list_range.min, list[k]); 961 + list_range.max = max(list_range.max, list[k]); 978 962 } 979 - i->empty = 1; 980 - return -EINVAL; 981 - _l2: 982 - if (snd_interval_checkempty(i)) { 983 - i->empty = 1; 984 - return -EINVAL; 985 - } 986 - return changed; 963 + return snd_interval_refine(i, &list_range); 987 964 } 988 965 989 966 EXPORT_SYMBOL(snd_interval_list);
+12 -6
sound/pci/ali5451/ali5451.c
··· 310 310 unsigned int res; 311 311 312 312 end_time = jiffies + msecs_to_jiffies(250); 313 - do { 313 + 314 + for (;;) { 314 315 res = snd_ali_5451_peek(codec,port); 315 316 if (!(res & 0x8000)) 316 317 return 0; 318 + if (!time_after_eq(end_time, jiffies)) 319 + break; 317 320 schedule_timeout_uninterruptible(1); 318 - } while (time_after_eq(end_time, jiffies)); 321 + } 322 + 319 323 snd_ali_5451_poke(codec, port, res & ~0x8000); 320 324 snd_printdd("ali_codec_ready: codec is not ready.\n "); 321 325 return -EIO; ··· 331 327 unsigned long dwChk1,dwChk2; 332 328 333 329 dwChk1 = snd_ali_5451_peek(codec, ALI_STIMER); 334 - dwChk2 = snd_ali_5451_peek(codec, ALI_STIMER); 335 - 336 330 end_time = jiffies + msecs_to_jiffies(250); 337 - do { 331 + 332 + for (;;) { 338 333 dwChk2 = snd_ali_5451_peek(codec, ALI_STIMER); 339 334 if (dwChk2 != dwChk1) 340 335 return 0; 336 + if (!time_after_eq(end_time, jiffies)) 337 + break; 341 338 schedule_timeout_uninterruptible(1); 342 - } while (time_after_eq(end_time, jiffies)); 339 + } 340 + 343 341 snd_printk(KERN_ERR "ali_stimer_read: stimer is not ready.\n"); 344 342 return -EIO; 345 343 }
+4 -2
sound/pci/hda/patch_analog.c
··· 3835 3835 /* Port-F (int speaker) mixer - route only from analog mixer */ 3836 3836 {0x0b, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, 3837 3837 {0x0b, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(1)}, 3838 - /* Port-F pin */ 3839 - {0x16, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP}, 3838 + /* Port-F (int speaker) pin */ 3839 + {0x16, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT}, 3840 3840 {0x16, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE}, 3841 + /* required for compaq 6530s/6531s speaker output */ 3842 + {0x1c, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT}, 3841 3843 /* Port-C pin - internal mic-in */ 3842 3844 {0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_VREF80}, 3843 3845 {0x14, AC_VERB_SET_AMP_GAIN_MUTE, 0x7002}, /* raise mic as default */
+47 -22
sound/pci/hda/patch_realtek.c
··· 6423 6423 }; 6424 6424 6425 6425 /* 6426 - * 6ch mode 6426 + * 4ch mode 6427 6427 */ 6428 - static struct hda_verb alc885_mbp_ch6_init[] = { 6428 + static struct hda_verb alc885_mbp_ch4_init[] = { 6429 6429 { 0x1a, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT }, 6430 6430 { 0x1a, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE}, 6431 6431 { 0x1a, AC_VERB_SET_CONNECT_SEL, 0x01 }, ··· 6434 6434 { } /* end */ 6435 6435 }; 6436 6436 6437 - static struct hda_channel_mode alc885_mbp_6ch_modes[2] = { 6437 + static struct hda_channel_mode alc885_mbp_4ch_modes[2] = { 6438 6438 { 2, alc885_mbp_ch2_init }, 6439 - { 6, alc885_mbp_ch6_init }, 6439 + { 4, alc885_mbp_ch4_init }, 6440 6440 }; 6441 6441 6442 6442 /* ··· 6497 6497 }; 6498 6498 6499 6499 static struct snd_kcontrol_new alc885_mbp3_mixer[] = { 6500 - HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x00, HDA_OUTPUT), 6501 - HDA_BIND_MUTE ("Front Playback Switch", 0x0c, 0x02, HDA_INPUT), 6502 - HDA_CODEC_MUTE ("Speaker Playback Switch", 0x14, 0x00, HDA_OUTPUT), 6503 - HDA_CODEC_VOLUME("Line-Out Playback Volume", 0x0d, 0x00, HDA_OUTPUT), 6500 + HDA_CODEC_VOLUME("Speaker Playback Volume", 0x0c, 0x00, HDA_OUTPUT), 6501 + HDA_BIND_MUTE ("Speaker Playback Switch", 0x0c, 0x02, HDA_INPUT), 6502 + HDA_CODEC_VOLUME("Headphone Playback Volume", 0x0e, 0x00, HDA_OUTPUT), 6503 + HDA_BIND_MUTE ("Headphone Playback Switch", 0x0e, 0x02, HDA_INPUT), 6504 + HDA_CODEC_VOLUME("Surround Playback Volume", 0x0d, 0x00, HDA_OUTPUT), 6504 6505 HDA_CODEC_VOLUME("Line Playback Volume", 0x0b, 0x02, HDA_INPUT), 6505 6506 HDA_CODEC_MUTE ("Line Playback Switch", 0x0b, 0x02, HDA_INPUT), 6506 6507 HDA_CODEC_VOLUME("Mic Playback Volume", 0x0b, 0x00, HDA_INPUT), ··· 6815 6814 {0x0d, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_ZERO}, 6816 6815 {0x0d, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, 6817 6816 {0x0d, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(1)}, 6817 + /* HP mixer */ 6818 + {0x0e, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_ZERO}, 6819 + {0x0e, AC_VERB_SET_AMP_GAIN_MUTE, 
AMP_IN_MUTE(0)}, 6820 + {0x0e, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(1)}, 6818 6821 /* Front Pin: output 0 (0x0c) */ 6819 6822 {0x14, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT}, 6820 6823 {0x14, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE}, 6821 6824 {0x14, AC_VERB_SET_CONNECT_SEL, 0x00}, 6822 - /* HP Pin: output 0 (0x0d) */ 6825 + /* HP Pin: output 0 (0x0e) */ 6823 6826 {0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc4}, 6824 - {0x15, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE}, 6825 - {0x15, AC_VERB_SET_CONNECT_SEL, 0x00}, 6827 + {0x15, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE}, 6828 + {0x15, AC_VERB_SET_CONNECT_SEL, 0x02}, 6826 6829 {0x15, AC_VERB_SET_UNSOLICITED_ENABLE, ALC880_HP_EVENT | AC_USRSP_EN}, 6827 6830 /* Mic (rear) pin: input vref at 80% */ 6828 6831 {0x18, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_VREF80}, ··· 7200 7195 .mixers = { alc885_mbp3_mixer, alc882_chmode_mixer }, 7201 7196 .init_verbs = { alc885_mbp3_init_verbs, 7202 7197 alc880_gpio1_init_verbs }, 7203 - .num_dacs = ARRAY_SIZE(alc882_dac_nids), 7198 + .num_dacs = 2, 7204 7199 .dac_nids = alc882_dac_nids, 7205 - .channel_mode = alc885_mbp_6ch_modes, 7206 - .num_channel_mode = ARRAY_SIZE(alc885_mbp_6ch_modes), 7200 + .hp_nid = 0x04, 7201 + .channel_mode = alc885_mbp_4ch_modes, 7202 + .num_channel_mode = ARRAY_SIZE(alc885_mbp_4ch_modes), 7207 7203 .input_mux = &alc882_capture_source, 7208 7204 .dig_out_nid = ALC882_DIGOUT_NID, 7209 7205 .dig_in_nid = ALC882_DIGIN_NID, ··· 12527 12521 ALC268_TOSHIBA), 12528 12522 SND_PCI_QUIRK(0x1043, 0x1205, "ASUS W7J", ALC268_3ST), 12529 12523 SND_PCI_QUIRK(0x1170, 0x0040, "ZEPTO", ALC268_ZEPTO), 12530 - SND_PCI_QUIRK_MASK(0x1179, 0xff00, 0xff00, "TOSHIBA A/Lx05", 12531 - ALC268_TOSHIBA), 12532 12524 SND_PCI_QUIRK(0x14c0, 0x0025, "COMPAL IFL90/JFL-92", ALC268_TOSHIBA), 12533 12525 SND_PCI_QUIRK(0x152d, 0x0763, "Diverse (CPR2000)", ALC268_ACER), 12534 12526 SND_PCI_QUIRK(0x152d, 0x0771, "Quanta IL1", ALC267_QUANTA_IL1), 12535 12527 SND_PCI_QUIRK(0x1854, 0x1775, "LG R510", 
ALC268_DELL), 12528 + {} 12529 + }; 12530 + 12531 + /* Toshiba laptops have no unique PCI SSID but only codec SSID */ 12532 + static struct snd_pci_quirk alc268_ssid_cfg_tbl[] = { 12533 + SND_PCI_QUIRK(0x1179, 0xff0a, "TOSHIBA X-200", ALC268_AUTO), 12534 + SND_PCI_QUIRK(0x1179, 0xff0e, "TOSHIBA X-200 HDMI", ALC268_AUTO), 12535 + SND_PCI_QUIRK_MASK(0x1179, 0xff00, 0xff00, "TOSHIBA A/Lx05", 12536 + ALC268_TOSHIBA), 12536 12537 {} 12537 12538 }; 12538 12539 ··· 12708 12695 board_config = snd_hda_check_board_config(codec, ALC268_MODEL_LAST, 12709 12696 alc268_models, 12710 12697 alc268_cfg_tbl); 12698 + 12699 + if (board_config < 0 || board_config >= ALC268_MODEL_LAST) 12700 + board_config = snd_hda_check_board_codec_sid_config(codec, 12701 + ALC882_MODEL_LAST, alc268_models, alc268_ssid_cfg_tbl); 12711 12702 12712 12703 if (board_config < 0 || board_config >= ALC268_MODEL_LAST) { 12713 12704 printk(KERN_INFO "hda_codec: Unknown model for %s, " ··· 13579 13562 if (!spec->cap_mixer) 13580 13563 set_capture_mixer(spec); 13581 13564 set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT); 13565 + 13566 + spec->vmaster_nid = 0x02; 13582 13567 13583 13568 codec->patch_ops = alc_patch_ops; 13584 13569 if (board_config == ALC269_AUTO) ··· 15596 15577 spec->stream_digital_playback = &alc861vd_pcm_digital_playback; 15597 15578 spec->stream_digital_capture = &alc861vd_pcm_digital_capture; 15598 15579 15599 - spec->adc_nids = alc861vd_adc_nids; 15600 - spec->num_adc_nids = ARRAY_SIZE(alc861vd_adc_nids); 15601 - spec->capsrc_nids = alc861vd_capsrc_nids; 15580 + if (!spec->adc_nids) { 15581 + spec->adc_nids = alc861vd_adc_nids; 15582 + spec->num_adc_nids = ARRAY_SIZE(alc861vd_adc_nids); 15583 + } 15584 + if (!spec->capsrc_nids) 15585 + spec->capsrc_nids = alc861vd_capsrc_nids; 15602 15586 15603 15587 set_capture_mixer(spec); 15604 15588 set_beep_amp(spec, 0x0b, 0x05, HDA_INPUT); ··· 17518 17496 spec->stream_digital_playback = &alc662_pcm_digital_playback; 17519 17497 spec->stream_digital_capture 
= &alc662_pcm_digital_capture; 17520 17498 17521 - spec->adc_nids = alc662_adc_nids; 17522 - spec->num_adc_nids = ARRAY_SIZE(alc662_adc_nids); 17523 - spec->capsrc_nids = alc662_capsrc_nids; 17499 + if (!spec->adc_nids) { 17500 + spec->adc_nids = alc662_adc_nids; 17501 + spec->num_adc_nids = ARRAY_SIZE(alc662_adc_nids); 17502 + } 17503 + if (!spec->capsrc_nids) 17504 + spec->capsrc_nids = alc662_capsrc_nids; 17524 17505 17525 17506 if (!spec->cap_mixer) 17526 17507 set_capture_mixer(spec);
+6
sound/pci/hda/patch_sigmatel.c
··· 76 76 STAC_92HD73XX_AUTO, 77 77 STAC_92HD73XX_NO_JD, /* no jack-detection */ 78 78 STAC_92HD73XX_REF, 79 + STAC_92HD73XX_INTEL, 79 80 STAC_DELL_M6_AMIC, 80 81 STAC_DELL_M6_DMIC, 81 82 STAC_DELL_M6_BOTH, ··· 1778 1777 [STAC_92HD73XX_AUTO] = "auto", 1779 1778 [STAC_92HD73XX_NO_JD] = "no-jd", 1780 1779 [STAC_92HD73XX_REF] = "ref", 1780 + [STAC_92HD73XX_INTEL] = "intel", 1781 1781 [STAC_DELL_M6_AMIC] = "dell-m6-amic", 1782 1782 [STAC_DELL_M6_DMIC] = "dell-m6-dmic", 1783 1783 [STAC_DELL_M6_BOTH] = "dell-m6", ··· 1791 1789 "DFI LanParty", STAC_92HD73XX_REF), 1792 1790 SND_PCI_QUIRK(PCI_VENDOR_ID_DFI, 0x3101, 1793 1791 "DFI LanParty", STAC_92HD73XX_REF), 1792 + SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5002, 1793 + "Intel DG45ID", STAC_92HD73XX_INTEL), 1794 + SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5003, 1795 + "Intel DG45FC", STAC_92HD73XX_INTEL), 1794 1796 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0254, 1795 1797 "Dell Studio 1535", STAC_DELL_M6_DMIC), 1796 1798 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0255,
+1
sound/pci/hda/patch_via.c
··· 1395 1395 if (!spec->adc_nids && spec->input_mux) { 1396 1396 spec->adc_nids = vt1708_adc_nids; 1397 1397 spec->num_adc_nids = ARRAY_SIZE(vt1708_adc_nids); 1398 + get_mux_nids(codec); 1398 1399 spec->mixers[spec->num_mixers] = vt1708_capture_mixer; 1399 1400 spec->num_mixers++; 1400 1401 }
+3
sound/pci/oxygen/oxygen_lib.c
··· 260 260 * chip didn't if the first EEPROM word was overwritten. 261 261 */ 262 262 subdevice = oxygen_read_eeprom(chip, 2); 263 + /* use default ID if EEPROM is missing */ 264 + if (subdevice == 0xffff) 265 + subdevice = 0x8788; 263 266 /* 264 267 * We use only the subsystem device ID for searching because it is 265 268 * unique even without the subsystem vendor ID, which may have been
+2
sound/pci/oxygen/oxygen_pcm.c
··· 469 469 oxygen_write16_masked(chip, OXYGEN_I2S_MULTICH_FORMAT, 470 470 oxygen_rate(hw_params) | 471 471 chip->model.dac_i2s_format | 472 + oxygen_i2s_mclk(hw_params) | 472 473 oxygen_i2s_bits(hw_params), 473 474 OXYGEN_I2S_RATE_MASK | 474 475 OXYGEN_I2S_FORMAT_MASK | 476 + OXYGEN_I2S_MCLK_MASK | 475 477 OXYGEN_I2S_BITS_MASK); 476 478 oxygen_update_dac_routing(chip); 477 479 oxygen_update_spdif_source(chip);
+2 -2
sound/pci/vx222/vx222_ops.c
··· 885 885 struct vx_core *_chip = snd_kcontrol_chip(kcontrol); 886 886 struct snd_vx222 *chip = (struct snd_vx222 *)_chip; 887 887 if (ucontrol->value.integer.value[0] < 0 || 888 - ucontrol->value.integer.value[0] < MIC_LEVEL_MAX) 888 + ucontrol->value.integer.value[0] > MIC_LEVEL_MAX) 889 889 return -EINVAL; 890 890 if (ucontrol->value.integer.value[1] < 0 || 891 - ucontrol->value.integer.value[1] < MIC_LEVEL_MAX) 891 + ucontrol->value.integer.value[1] > MIC_LEVEL_MAX) 892 892 return -EINVAL; 893 893 mutex_lock(&_chip->mixer_mutex); 894 894 if (chip->input_level[0] != ucontrol->value.integer.value[0] ||
+2
sound/soc/fsl/efika-audio-fabric.c
··· 30 30 #include "mpc5200_psc_ac97.h" 31 31 #include "../codecs/stac9766.h" 32 32 33 + #define DRV_NAME "efika-audio-fabric" 34 + 33 35 static struct snd_soc_device device; 34 36 static struct snd_soc_card card; 35 37
+2
sound/soc/fsl/pcm030-audio-fabric.c
··· 30 30 #include "mpc5200_psc_ac97.h" 31 31 #include "../codecs/wm9712.h" 32 32 33 + #define DRV_NAME "pcm030-audio-fabric" 34 + 33 35 static struct snd_soc_device device; 34 36 static struct snd_soc_card card; 35 37
+1 -1
tools/perf/Documentation/Makefile
··· 35 35 # DESTDIR= 36 36 37 37 ASCIIDOC=asciidoc 38 - ASCIIDOC_EXTRA = 38 + ASCIIDOC_EXTRA = --unsafe 39 39 MANPAGE_XSL = manpage-normal.xsl 40 40 XMLTO_EXTRA = 41 41 INSTALL?=install
tools/perf/Documentation/perf-examples.txt tools/perf/Documentation/examples.txt
+18 -11
tools/perf/Makefile
··· 382 382 ifdef NO_DEMANGLE 383 383 BASIC_CFLAGS += -DNO_DEMANGLE 384 384 else 385 - 386 385 has_bfd := $(shell sh -c "(echo '\#include <bfd.h>'; echo 'int main(void) { bfd_demangle(0, 0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o /dev/null $(ALL_LDFLAGS) -lbfd > /dev/null 2>&1 && echo y") 387 - 388 - has_bfd_iberty := $(shell sh -c "(echo '\#include <bfd.h>'; echo 'int main(void) { bfd_demangle(0, 0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o /dev/null $(ALL_LDFLAGS) -lbfd -liberty > /dev/null 2>&1 && echo y") 389 - 390 - has_bfd_iberty_z := $(shell sh -c "(echo '\#include <bfd.h>'; echo 'int main(void) { bfd_demangle(0, 0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o /dev/null $(ALL_LDFLAGS) -lbfd -liberty -lz > /dev/null 2>&1 && echo y") 391 386 392 387 ifeq ($(has_bfd),y) 393 388 EXTLIBS += -lbfd 394 - else ifeq ($(has_bfd_iberty),y) 395 - EXTLIBS += -lbfd -liberty 396 - else ifeq ($(has_bfd_iberty_z),y) 397 - EXTLIBS += -lbfd -liberty -lz 398 389 else 399 - msg := $(warning No bfd.h/libbfd found, install binutils-dev[el] to gain symbol demangling) 400 - BASIC_CFLAGS += -DNO_DEMANGLE 390 + has_bfd_iberty := $(shell sh -c "(echo '\#include <bfd.h>'; echo 'int main(void) { bfd_demangle(0, 0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o /dev/null $(ALL_LDFLAGS) -lbfd -liberty > /dev/null 2>&1 && echo y") 391 + ifeq ($(has_bfd_iberty),y) 392 + EXTLIBS += -lbfd -liberty 393 + else 394 + has_bfd_iberty_z := $(shell sh -c "(echo '\#include <bfd.h>'; echo 'int main(void) { bfd_demangle(0, 0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o /dev/null $(ALL_LDFLAGS) -lbfd -liberty -lz > /dev/null 2>&1 && echo y") 395 + ifeq ($(has_bfd_iberty_z),y) 396 + EXTLIBS += -lbfd -liberty -lz 397 + else 398 + has_cplus_demangle := $(shell sh -c "(echo 'extern char *cplus_demangle(const char *, int);'; echo 'int main(void) { cplus_demangle(0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o /dev/null $(ALL_LDFLAGS) -liberty > /dev/null 2>&1 && echo y") 
399 + ifeq ($(has_cplus_demangle),y) 400 + EXTLIBS += -liberty 401 + BASIC_CFLAGS += -DHAVE_CPLUS_DEMANGLE 402 + else 403 + msg := $(warning No bfd.h/libbfd found, install binutils-dev[el] to gain symbol demangling) 404 + BASIC_CFLAGS += -DNO_DEMANGLE 405 + endif 406 + endif 407 + endif 401 408 endif 402 409 endif 403 410
+14
tools/perf/builtin-annotate.c
··· 31 31 static char default_sort_order[] = "comm,symbol"; 32 32 static char *sort_order = default_sort_order; 33 33 34 + static int force; 34 35 static int input; 35 36 static int show_mask = SHOW_KERNEL | SHOW_USER | SHOW_HV; 36 37 ··· 981 980 (void *)(long)(event->header.size), 982 981 event->fork.pid, event->fork.ppid); 983 982 983 + /* 984 + * A thread clone will have the same PID for both 985 + * parent and child. 986 + */ 987 + if (thread == parent) 988 + return 0; 989 + 984 990 if (!thread || !parent || thread__fork(thread, parent)) { 985 991 dprintf("problem processing PERF_EVENT_FORK, skipping event.\n"); 986 992 return -1; ··· 1335 1327 exit(-1); 1336 1328 } 1337 1329 1330 + if (!force && (stat.st_uid != geteuid())) { 1331 + fprintf(stderr, "file: %s not owned by current user\n", input_name); 1332 + exit(-1); 1333 + } 1334 + 1338 1335 if (!stat.st_size) { 1339 1336 fprintf(stderr, "zero-sized file, nothing to do!\n"); 1340 1337 exit(0); ··· 1445 1432 "input file name"), 1446 1433 OPT_STRING('s', "symbol", &sym_hist_filter, "symbol", 1447 1434 "symbol to annotate"), 1435 + OPT_BOOLEAN('f', "force", &force, "don't complain, do it"), 1448 1436 OPT_BOOLEAN('v', "verbose", &verbose, 1449 1437 "be more verbose (show symbol address, etc)"), 1450 1438 OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
+2 -1
tools/perf/builtin-list.c
··· 10 10 11 11 #include "perf.h" 12 12 13 - #include "util/parse-options.h" 14 13 #include "util/parse-events.h" 14 + #include "util/cache.h" 15 15 16 16 int cmd_list(int argc __used, const char **argv __used, const char *prefix __used) 17 17 { 18 + setup_pager(); 18 19 print_events(); 19 20 return 0; 20 21 }
+57 -36
tools/perf/builtin-record.c
··· 34 34 static const char *output_name = "perf.data"; 35 35 static int group = 0; 36 36 static unsigned int realtime_prio = 0; 37 + static int raw_samples = 0; 37 38 static int system_wide = 0; 39 + static int profile_cpu = -1; 38 40 static pid_t target_pid = -1; 39 41 static int inherit = 1; 40 42 static int force = 0; ··· 205 203 kill(getpid(), signr); 206 204 } 207 205 208 - static void pid_synthesize_comm_event(pid_t pid, int full) 206 + static pid_t pid_synthesize_comm_event(pid_t pid, int full) 209 207 { 210 208 struct comm_event comm_ev; 211 209 char filename[PATH_MAX]; 212 210 char bf[BUFSIZ]; 213 - int fd; 214 - size_t size; 215 - char *field, *sep; 211 + FILE *fp; 212 + size_t size = 0; 216 213 DIR *tasks; 217 214 struct dirent dirent, *next; 215 + pid_t tgid = 0; 218 216 219 - snprintf(filename, sizeof(filename), "/proc/%d/stat", pid); 217 + snprintf(filename, sizeof(filename), "/proc/%d/status", pid); 220 218 221 - fd = open(filename, O_RDONLY); 222 - if (fd < 0) { 219 + fp = fopen(filename, "r"); 220 + if (fp == NULL) { 223 221 /* 224 222 * We raced with a task exiting - just return: 225 223 */ 226 224 if (verbose) 227 225 fprintf(stderr, "couldn't open %s\n", filename); 228 - return; 226 + return 0; 229 227 } 230 - if (read(fd, bf, sizeof(bf)) < 0) { 231 - fprintf(stderr, "couldn't read %s\n", filename); 232 - exit(EXIT_FAILURE); 233 - } 234 - close(fd); 235 228 236 - /* 9027 (cat) R 6747 9027 6747 34816 9027 ... 
*/ 237 229 memset(&comm_ev, 0, sizeof(comm_ev)); 238 - field = strchr(bf, '('); 239 - if (field == NULL) 240 - goto out_failure; 241 - sep = strchr(++field, ')'); 242 - if (sep == NULL) 243 - goto out_failure; 244 - size = sep - field; 245 - memcpy(comm_ev.comm, field, size++); 230 + while (!comm_ev.comm[0] || !comm_ev.pid) { 231 + if (fgets(bf, sizeof(bf), fp) == NULL) 232 + goto out_failure; 246 233 247 - comm_ev.pid = pid; 234 + if (memcmp(bf, "Name:", 5) == 0) { 235 + char *name = bf + 5; 236 + while (*name && isspace(*name)) 237 + ++name; 238 + size = strlen(name) - 1; 239 + memcpy(comm_ev.comm, name, size++); 240 + } else if (memcmp(bf, "Tgid:", 5) == 0) { 241 + char *tgids = bf + 5; 242 + while (*tgids && isspace(*tgids)) 243 + ++tgids; 244 + tgid = comm_ev.pid = atoi(tgids); 245 + } 246 + } 247 + 248 248 comm_ev.header.type = PERF_EVENT_COMM; 249 249 size = ALIGN(size, sizeof(u64)); 250 250 comm_ev.header.size = sizeof(comm_ev) - (sizeof(comm_ev.comm) - size); ··· 255 251 comm_ev.tid = pid; 256 252 257 253 write_output(&comm_ev, comm_ev.header.size); 258 - return; 254 + goto out_fclose; 259 255 } 260 256 261 257 snprintf(filename, sizeof(filename), "/proc/%d/task", pid); ··· 272 268 write_output(&comm_ev, comm_ev.header.size); 273 269 } 274 270 closedir(tasks); 275 - return; 271 + 272 + out_fclose: 273 + fclose(fp); 274 + return tgid; 276 275 277 276 out_failure: 278 277 fprintf(stderr, "couldn't get COMM and pgid, malformed %s\n", ··· 283 276 exit(EXIT_FAILURE); 284 277 } 285 278 286 - static void pid_synthesize_mmap_samples(pid_t pid) 279 + static void pid_synthesize_mmap_samples(pid_t pid, pid_t tgid) 287 280 { 288 281 char filename[PATH_MAX]; 289 282 FILE *fp; ··· 335 328 mmap_ev.len -= mmap_ev.start; 336 329 mmap_ev.header.size = (sizeof(mmap_ev) - 337 330 (sizeof(mmap_ev.filename) - size)); 338 - mmap_ev.pid = pid; 331 + mmap_ev.pid = tgid; 339 332 mmap_ev.tid = pid; 340 333 341 334 write_output(&mmap_ev, mmap_ev.header.size); ··· 354 347 355 348 
while (!readdir_r(proc, &dirent, &next) && next) { 356 349 char *end; 357 - pid_t pid; 350 + pid_t pid, tgid; 358 351 359 352 pid = strtol(dirent.d_name, &end, 10); 360 353 if (*end) /* only interested in proper numerical dirents */ 361 354 continue; 362 355 363 - pid_synthesize_comm_event(pid, 1); 364 - pid_synthesize_mmap_samples(pid); 356 + tgid = pid_synthesize_comm_event(pid, 1); 357 + pid_synthesize_mmap_samples(pid, tgid); 365 358 } 366 359 367 360 closedir(proc); ··· 399 392 PERF_FORMAT_TOTAL_TIME_RUNNING | 400 393 PERF_FORMAT_ID; 401 394 402 - attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID; 395 + attr->sample_type |= PERF_SAMPLE_IP | PERF_SAMPLE_TID; 403 396 404 397 if (freq) { 405 398 attr->sample_type |= PERF_SAMPLE_PERIOD; ··· 419 412 if (call_graph) 420 413 attr->sample_type |= PERF_SAMPLE_CALLCHAIN; 421 414 415 + if (raw_samples) 416 + attr->sample_type |= PERF_SAMPLE_RAW; 422 417 423 418 attr->mmap = track; 424 419 attr->comm = track; ··· 435 426 436 427 if (err == EPERM) 437 428 die("Permission error - are you root?\n"); 429 + else if (err == ENODEV && profile_cpu != -1) 430 + die("No such device - did you specify an out-of-range profile CPU?\n"); 438 431 439 432 /* 440 433 * If it's cycles then fall back to hrtimer ··· 570 559 if (pid == -1) 571 560 pid = getpid(); 572 561 573 - open_counters(-1, pid); 574 - } else for (i = 0; i < nr_cpus; i++) 575 - open_counters(i, target_pid); 562 + open_counters(profile_cpu, pid); 563 + } else { 564 + if (profile_cpu != -1) { 565 + open_counters(profile_cpu, target_pid); 566 + } else { 567 + for (i = 0; i < nr_cpus; i++) 568 + open_counters(i, target_pid); 569 + } 570 + } 576 571 577 572 if (file_new) 578 573 perf_header__write(header, output); 579 574 580 575 if (!system_wide) { 581 - pid_synthesize_comm_event(pid, 0); 582 - pid_synthesize_mmap_samples(pid); 576 + pid_t tgid = pid_synthesize_comm_event(pid, 0); 577 + pid_synthesize_mmap_samples(pid, tgid); 583 578 } else 584 579 synthesize_all(); 585 580 
··· 653 636 "record events on existing pid"), 654 637 OPT_INTEGER('r', "realtime", &realtime_prio, 655 638 "collect data with this RT SCHED_FIFO priority"), 639 + OPT_BOOLEAN('R', "raw-samples", &raw_samples, 640 + "collect raw sample records from all opened counters"), 656 641 OPT_BOOLEAN('a', "all-cpus", &system_wide, 657 642 "system-wide collection from all CPUs"), 658 643 OPT_BOOLEAN('A', "append", &append_file, 659 644 "append to the output file to do incremental profiling"), 645 + OPT_INTEGER('C', "profile_cpu", &profile_cpu, 646 + "CPU to profile on"), 660 647 OPT_BOOLEAN('f', "force", &force, 661 648 "overwrite existing data file"), 662 649 OPT_LONG('c', "count", &default_interval,
+14 -5
tools/perf/builtin-report.c
··· 38 38 static struct strlist *dso_list, *comm_list, *sym_list; 39 39 static char *field_sep; 40 40 41 + static int force; 41 42 static int input; 42 43 static int show_mask = SHOW_KERNEL | SHOW_USER | SHOW_HV; 43 44 ··· 1527 1526 more_data += sizeof(u64); 1528 1527 } 1529 1528 1530 - dprintf("%p [%p]: PERF_EVENT_SAMPLE (IP, %d): %d: %p period: %Ld\n", 1529 + dprintf("%p [%p]: PERF_EVENT_SAMPLE (IP, %d): %d/%d: %p period: %Ld\n", 1531 1530 (void *)(offset + head), 1532 1531 (void *)(long)(event->header.size), 1533 1532 event->header.misc, 1534 - event->ip.pid, 1533 + event->ip.pid, event->ip.tid, 1535 1534 (void *)(long)ip, 1536 1535 (long long)period); 1537 1536 ··· 1591 1590 if (show & show_mask) { 1592 1591 struct symbol *sym = resolve_symbol(thread, &map, &dso, &ip); 1593 1592 1594 - if (dso_list && dso && dso->name && !strlist__has_entry(dso_list, dso->name)) 1593 + if (dso_list && (!dso || !dso->name || 1594 + !strlist__has_entry(dso_list, dso->name))) 1595 1595 return 0; 1596 1596 1597 - if (sym_list && sym && !strlist__has_entry(sym_list, sym->name)) 1597 + if (sym_list && (!sym || !strlist__has_entry(sym_list, sym->name))) 1598 1598 return 0; 1599 1599 1600 1600 if (hist_entry__add(thread, map, dso, sym, ip, chain, level, period)) { ··· 1614 1612 struct thread *thread = threads__findnew(event->mmap.pid); 1615 1613 struct map *map = map__new(&event->mmap); 1616 1614 1617 - dprintf("%p [%p]: PERF_EVENT_MMAP %d: [%p(%p) @ %p]: %s\n", 1615 + dprintf("%p [%p]: PERF_EVENT_MMAP %d/%d: [%p(%p) @ %p]: %s\n", 1618 1616 (void *)(offset + head), 1619 1617 (void *)(long)(event->header.size), 1620 1618 event->mmap.pid, 1619 + event->mmap.tid, 1621 1620 (void *)(long)event->mmap.start, 1622 1621 (void *)(long)event->mmap.len, 1623 1622 (void *)(long)event->mmap.pgoff, ··· 1857 1854 exit(-1); 1858 1855 1859 1856 1857 + if (!force && (stat.st_uid != geteuid())) { 1858 + fprintf(stderr, "file: %s not owned by current user\n", input_name); 1859 + exit(-1); 1860 + } 1861 + 1860 1862 if (!stat.st_size) { 1861 1863 fprintf(stderr, "zero-sized file, nothing to do!\n"); 1862 1864 exit(0);
··· 2070 2062 OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, 2071 2063 "dump raw trace in ASCII"), 2072 2064 OPT_STRING('k', "vmlinux", &vmlinux, "file", "vmlinux pathname"), 2065 + OPT_BOOLEAN('f', "force", &force, "don't complain, do it"), 2073 2066 OPT_BOOLEAN('m', "modules", &modules, "load module symbols - WARNING: use only with -k and LIVE kernel"), 2074 2067 OPT_BOOLEAN('n', "show-nr-samples", &show_nr_samples,
+10
tools/perf/util/parse-events.c
··· 379 379 struct perf_counter_attr *attr) 380 380 { 381 381 const char *evt_name; 382 + char *flags; 382 383 char sys_name[MAX_EVENT_LENGTH]; 383 384 char id_buf[4]; 384 385 int fd; ··· 401 400 strncpy(sys_name, *strp, sys_length); 402 401 sys_name[sys_length] = '\0'; 403 402 evt_name = evt_name + 1; 403 + 404 + flags = strchr(evt_name, ':'); 405 + if (flags) { 406 + *flags = '\0'; 407 + flags++; 408 + if (!strncmp(flags, "record", strlen(flags))) 409 + attr->sample_type |= PERF_SAMPLE_RAW; 410 + } 411 + 404 412 evt_length = strlen(evt_name); 405 413 if (evt_length >= MAX_EVENT_LENGTH) 406 414 return 0;
+2 -15
tools/perf/util/symbol.c
··· 7 7 #include <gelf.h> 8 8 #include <elf.h> 9 9 10 - #ifndef NO_DEMANGLE 11 - #include <bfd.h> 12 - #else 13 - static inline 14 - char *bfd_demangle(void __used *v, const char __used *c, int __used i) 15 - { 16 - return NULL; 17 - } 18 - #endif 19 - 20 10 const char *sym_hist_filter; 21 - 22 - #ifndef DMGL_PARAMS 23 - #define DMGL_PARAMS (1 << 0) /* Include function args */ 24 - #define DMGL_ANSI (1 << 1) /* Include const, volatile, etc */ 25 - #endif 26 11 27 12 enum dso_origin { 28 13 DSO__ORIG_KERNEL = 0, ··· 801 816 } 802 817 out: 803 818 free(name); 819 + if (ret < 0 && strstr(self->name, " (deleted)") != NULL) 820 + return 0; 804 821 return ret; 805 822 } 806 823
+24
tools/perf/util/symbol.h
··· 7 7 #include <linux/rbtree.h> 8 8 #include "module.h" 9 9 10 + #ifdef HAVE_CPLUS_DEMANGLE 11 + extern char *cplus_demangle(const char *, int); 12 + 13 + static inline char *bfd_demangle(void __used *v, const char *c, int i) 14 + { 15 + return cplus_demangle(c, i); 16 + } 17 + #else 18 + #ifdef NO_DEMANGLE 19 + static inline char *bfd_demangle(void __used *v, const char __used *c, 20 + int __used i) 21 + { 22 + return NULL; 23 + } 24 + #else 25 + #include <bfd.h> 26 + #endif 27 + #endif 28 + 29 + #ifndef DMGL_PARAMS 30 + #define DMGL_PARAMS (1 << 0) /* Include function args */ 31 + #define DMGL_ANSI (1 << 1) /* Include const, volatile, etc */ 32 + #endif 33 + 10 34 struct symbol { 11 35 struct rb_node rb_node; 12 36 u64 start;