Merge branch 'for-spi' of git://git.kernel.org/pub/scm/linux/kernel/git/vapier/blackfin into spi/next

 total: 2091 insertions(+), 1433 deletions(-)

Documentation/networking/e1000.txt (96 insertions, 277 deletions)
···
  Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
  ===============================================================

- September 26, 2006
+ Intel Gigabit Linux driver.
+ Copyright(c) 1999 - 2010 Intel Corporation.

  Contents
  ========

- - In This Release
  - Identifying Your Adapter
- - Building and Installation
  - Command Line Parameters
  - Speed and Duplex Configuration
  - Additional Configurations
- - Known Issues
  - Support
-
-
- In This Release
- ===============
-
- This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family
- of Adapters. This driver includes support for Itanium(R)2-based systems.
-
- For questions related to hardware requirements, refer to the documentation
- supplied with your Intel PRO/1000 adapter. All hardware requirements listed
- apply to use with Linux.
-
- The following features are now available in supported kernels:
-  - Native VLANs
-  - Channel Bonding (teaming)
-  - SNMP
-
- Channel Bonding documentation can be found in the Linux kernel source:
- /Documentation/networking/bonding.txt
-
- The driver information previously displayed in the /proc filesystem is not
- supported in this release. Alternatively, you can use ethtool (version 1.6
- or later), lspci, and ifconfig to obtain the same information.
-
- Instructions on updating ethtool can be found in the section "Additional
- Configurations" later in this document.
-
- NOTE: The Intel(R) 82562v 10/100 Network Connection only provides 10/100
- support.
  Identifying Your Adapter
  ========================
···
  For more information on how to identify your adapter, go to the Adapter &
  Driver ID Guide at:

- http://support.intel.com/support/network/adapter/pro100/21397.htm
+ http://support.intel.com/support/go/network/adapter/idguide.htm

  For the latest Intel network drivers for Linux, refer to the following
  website. In the search field, enter your adapter name or type, or use the
  networking link on the left to search for your adapter:

- http://downloadfinder.intel.com/scripts-df/support_intel.asp
+ http://support.intel.com/support/go/network/adapter/home.htm

  Command Line Parameters
  =======================
-
- If the driver is built as a module, the following optional parameters
- are used by entering them on the command line with the modprobe command
- using this syntax:
-
- modprobe e1000 [<option>=<VAL1>,<VAL2>,...]
-
- For example, with two PRO/1000 PCI adapters, entering:
-
- modprobe e1000 TxDescriptors=80,128
-
- loads the e1000 driver with 80 TX descriptors for the first adapter and
- 128 TX descriptors for the second adapter.

  The default value for each parameter is generally the recommended setting,
  unless otherwise noted.
···
  NOTES: For more information about the InterruptThrottleRate,
         RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay
         parameters, see the application note at:
         http://www.intel.com/design/network/applnots/ap450.htm
-
- A descriptor describes a data buffer and attributes related to
- the data buffer. This information is accessed by the hardware.

  AutoNeg
  -------
···
  NOTE: Refer to the Speed and Duplex section of this readme for more
        information on the AutoNeg parameter.

  Duplex
  ------
  (Supported only on adapters with copper connections)
···
  link partner is forced (either full or half), Duplex defaults to half-
  duplex.

  FlowControl
  -----------
  Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
···
  This parameter controls the automatic generation (Tx) and response (Rx)
  to Ethernet PAUSE frames.

  InterruptThrottleRate
  ---------------------
  (not supported on Intel(R) 82542, 82543 or 82544-based adapters)
- Valid Range: 0,1,3,100-100000 (0=off, 1=dynamic, 3=dynamic conservative)
+ Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
+              4=simplified balancing)
  Default Value: 3

  The driver can limit the number of interrupts per second that the adapter
  will generate for incoming packets. It does this by writing a value to the
  adapter that is based on the maximum number of interrupts that the adapter
  will generate per second.

  Setting InterruptThrottleRate to a value greater than or equal to 100
···
  load on the system and can lower CPU utilization under heavy load,
  but will increase latency as packets are not processed as quickly.

  The default behaviour of the driver previously assumed a static
  InterruptThrottleRate value of 8000, providing a good fallback value for
  all traffic types, but lacking in small packet performance and latency.
  However, the hardware can handle many more small packets per second, and
  for this reason an adaptive interrupt moderation algorithm was implemented.

  Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in which
  it dynamically adjusts the InterruptThrottleRate value based on the traffic
  that it receives. After determining the type of incoming traffic in the last
  timeframe, it will adjust the InterruptThrottleRate to an appropriate value
  for that traffic.

  The algorithm classifies the incoming traffic every interval into
  classes. Once the class is determined, the InterruptThrottleRate value is
  adjusted to suit that traffic type the best. There are three classes defined:
  "Bulk traffic", for large amounts of packets of normal size; "Low latency",
  for small amounts of traffic and/or a significant percentage of small
  packets; and "Lowest latency", for almost completely small packets or
  minimal traffic.

  In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
  for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
  latency" or "Lowest latency" class, the InterruptThrottleRate is increased
  stepwise to 20000.
  This default mode is suitable for most applications.

  For situations where low latency is vital such as cluster or
  grid computing, the algorithm can reduce latency even more when
  InterruptThrottleRate is set to mode 1. In this mode, which operates
  the same as mode 3, the InterruptThrottleRate will be increased stepwise to
  70000 for traffic in class "Lowest latency".
+
+ In simplified mode, the interrupt rate is based on the ratio of Tx and
+ Rx traffic. If the bytes per second rate is approximately equal, the
+ interrupt rate will drop as low as 2000 interrupts per second. If the
+ traffic is mostly transmit or mostly receive, the interrupt rate could
+ be as high as 8000.

  Setting InterruptThrottleRate to 0 turns off any interrupt moderation
  and may improve small packet latency, but is generally not suitable
···
  be platform-specific. If CPU utilization is not a concern, use
  RX_POLLING (NAPI) and default driver settings.

  RxDescriptors
  -------------
  Valid Range: 80-256 for 82542 and 82543-based adapters
···
  incoming packets, at the expense of increased system memory utilization.

  Each descriptor is 16 bytes. A receive buffer is also allocated for each
  descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending
  on the MTU setting. The maximum MTU size is 16110.

  NOTE: MTU designates the frame size. It only needs to be set for Jumbo
        Frames.
        Depending on the available system resources, the request
        for a higher number of receive descriptors may be denied. In this
        case, use a lower number.

  RxIntDelay
  ----------
···
        restoring the network connection. To eliminate the potential
        for the hang ensure that RxIntDelay is set to 0.

  RxAbsIntDelay
  -------------
  (This parameter is supported only on 82540, 82545 and later adapters.)
···
  along with RxIntDelay, may improve traffic throughput in specific network
  conditions.

  Speed
  -----
  (This parameter is supported only on adapters with copper connections.)
···
  (Mbps). If this parameter is not specified or is set to 0 and the link
  partner is set to auto-negotiate, the board will auto-detect the correct
  speed. Duplex should also be set when Speed is set to either 10 or 100.

  TxDescriptors
  -------------
···
  higher number of transmit descriptors may be denied. In this case,
  use a lower number.

+ TxDescriptorStep
+ ----------------
+ Valid Range: 1 (use every Tx Descriptor)
+              4 (use every 4th Tx Descriptor)
+
+ Default Value: 1 (use every Tx Descriptor)
+
+ On certain non-Intel architectures, it has been observed that intense TX
+ traffic bursts of short packets may result in an improper descriptor
+ writeback. If this occurs, the driver will report a "TX Timeout" and reset
+ the adapter, after which the transmit flow will restart, though data may
+ have stalled for as much as 10 seconds before it resumes.
+
+ The improper writeback does not occur on the first descriptor in a system
+ memory cache-line, which is typically 32 bytes, or 4 descriptors long.
+
+ Setting TxDescriptorStep to a value of 4 will ensure that all TX descriptors
+ are aligned to the start of a system memory cache line, and so this problem
+ will not occur.
+
+ NOTES: Setting TxDescriptorStep to 4 effectively reduces the number of
+        TxDescriptors available for transmits to 1/4 of the normal allocation.
+        This has a possible negative performance impact, which may be
+        compensated for by allocating more descriptors using the TxDescriptors
+        module parameter.
+
+        There are other conditions which may result in "TX Timeout", which will
+        not be resolved by the use of the TxDescriptorStep parameter. As the
+        issue addressed by this parameter has never been observed on Intel
+        Architecture platforms, it should not be used on Intel platforms.

  TxIntDelay
  ----------
···
  efficiency if properly tuned for specific network traffic. If the
  system is reporting dropped transmits, this value may be set too high
  causing the driver to run out of available transmit descriptors.

  TxAbsIntDelay
  -------------
···
  A value of '1' indicates that the driver should enable IP checksum
  offload for received packets (both UDP and TCP) to the adapter hardware.

+ Copybreak
+ ---------
+ Valid Range: 0-xxxxxxx (0=off)
+ Default Value: 256
+ Usage: insmod e1000.ko copybreak=128
+
+ The driver copies all packets smaller than or equal to this size to a fresh
+ Rx buffer before handing it up the stack.
+
+ This parameter is different from other parameters in that it is a
+ single parameter (not 1,1,1 etc.) applied to all driver instances, and
+ it is also available during runtime at
+ /sys/module/e1000/parameters/copybreak
+
+ SmartPowerDownEnable
+ --------------------
+ Valid Range: 0-1
+ Default Value: 0 (disabled)
+
+ Allows the PHY to turn off in lower power states. The user can turn off
+ this parameter in supported chipsets.
+
+ KumeranLockLoss
+ ---------------
+ Valid Range: 0-1
+ Default Value: 1 (enabled)
+
+ This workaround skips resetting the PHY at shutdown for the initial
+ silicon releases of ICH8 systems.

  Speed and Duplex Configuration
  ==============================
···
  parameter should not be used. Instead, use the Speed and Duplex parameters
  previously mentioned to force the adapter to the same speed and duplex.

  Additional Configurations
  =========================
-
- Configuring the Driver on Different Distributions
- -------------------------------------------------
- Configuring a network driver to load properly when the system is started
- is distribution dependent. Typically, the configuration process involves
- adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well
- as editing other system startup scripts and/or configuration files. Many
- popular Linux distributions ship with tools to make these changes for you.
- To learn the proper way to configure a network device for your system,
- refer to your distribution documentation. If during this process you are
- asked for the driver or module name, the name for the Linux Base Driver
- for the Intel(R) PRO/1000 Family of Adapters is e1000.
-
- As an example, if you install the e1000 driver for two PRO/1000 adapters
- (eth0 and eth1) and set the speed and duplex to 10full and 100half, add
- the following to modules.conf or modprobe.conf:
-
- alias eth0 e1000
- alias eth1 e1000
- options e1000 Speed=10,100 Duplex=2,1
-
- Viewing Link Messages
- ---------------------
- Link messages will not be displayed to the console if the distribution is
- restricting system messages. In order to see network driver link messages
- on your console, set dmesg to eight by entering the following:
-
- dmesg -n 8
-
- NOTE: This setting is not saved across reboots.

  Jumbo Frames
  ------------
···
  setting in a different location.

  Notes:
- - To enable Jumbo Frames, increase the MTU size on the interface beyond
-   1500.
+ Degradation in throughput performance may be observed in some Jumbo frames
+ environments. If this is observed, increasing the application's socket buffer
+ size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
+ See the specific application manual and /usr/src/linux*/Documentation/
+ networking/ip-sysctl.txt for more details.

  - The maximum MTU setting for Jumbo Frames is 16110. This value coincides
    with the maximum Jumbo Frames size of 16128.
···
  - Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or
    loss of link.

- - Some Intel gigabit adapters that support Jumbo Frames have a frame size
-   limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
-   The adapters with this limitation are based on the Intel(R) 82571EB,
-   82572EI, 82573L and 80003ES2LAN controller.
-   These correspond to the following product names:
-   Intel(R) PRO/1000 PT Server Adapter
-   Intel(R) PRO/1000 PT Desktop Adapter
-   Intel(R) PRO/1000 PT Network Connection
-   Intel(R) PRO/1000 PT Dual Port Server Adapter
-   Intel(R) PRO/1000 PT Dual Port Network Connection
-   Intel(R) PRO/1000 PF Server Adapter
-   Intel(R) PRO/1000 PF Network Connection
-   Intel(R) PRO/1000 PF Dual Port Server Adapter
-   Intel(R) PRO/1000 PB Server Connection
-   Intel(R) PRO/1000 PL Network Connection
-   Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
-   Intel(R) PRO/1000 EB Backplane Connection with I/O Acceleration
-   Intel(R) PRO/1000 PT Quad Port Server Adapter
-
  - Adapters based on the Intel(R) 82542 and 82573V/E controller do not
    support Jumbo Frames. These correspond to the following product names:
    Intel(R) PRO/1000 Gigabit Server Adapter
    Intel(R) PRO/1000 PM Network Connection
-
- - The following adapters do not support Jumbo Frames:
-   Intel(R) 82562V 10/100 Network Connection
-   Intel(R) 82566DM Gigabit Network Connection
-   Intel(R) 82566DC Gigabit Network Connection
-   Intel(R) 82566MM Gigabit Network Connection
-   Intel(R) 82566MC Gigabit Network Connection
-   Intel(R) 82562GT 10/100 Network Connection
-   Intel(R) 82562G 10/100 Network Connection

  Ethtool
  -------
···
  The latest release of ethtool can be found from
  http://sourceforge.net/projects/gkernel.

- NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support
-       for a more complete ethtool feature set can be enabled by upgrading
-       ethtool to ethtool-1.8.1.

  Enabling Wake on LAN* (WoL)
  ---------------------------
- WoL is configured through the Ethtool* utility. Ethtool is included with
- all versions of Red Hat after Red Hat 7.2.
- For other Linux distributions,
- download and install Ethtool from the following website:
- http://sourceforge.net/projects/gkernel.
-
- For instructions on enabling WoL with Ethtool, refer to the website listed
- above.
+ WoL is configured through the Ethtool* utility.

  WoL will be enabled on the system during the next shut down or reboot.
  For this driver version, in order to enable WoL, the e1000 driver must be
  loaded when shutting down or rebooting the system.
-
- Wake On LAN is only supported on port A for the following devices:
- Intel(R) PRO/1000 PT Dual Port Network Connection
- Intel(R) PRO/1000 PT Dual Port Server Connection
- Intel(R) PRO/1000 PT Dual Port Server Adapter
- Intel(R) PRO/1000 PF Dual Port Server Adapter
- Intel(R) PRO/1000 PT Quad Port Server Adapter
-
- NAPI
- ----
- NAPI (Rx polling mode) is enabled in the e1000 driver.
-
- See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
-
-
- Known Issues
- ============
-
- Dropped Receive Packets on Half-duplex 10/100 Networks
- ------------------------------------------------------
- If you have an Intel PCI Express adapter running at 10mbps or 100mbps,
- half-duplex, you may observe occasional dropped receive packets. There are
- no workarounds for this problem in this network configuration. The network
- must be updated to operate in full-duplex, and/or 1000mbps only.
-
- Jumbo Frames System Requirement
- -------------------------------
- Memory allocation failures have been observed on Linux systems with 64 MB
- of RAM or less that are running Jumbo Frames. If you are using Jumbo
- Frames, your system may require more than the advertised minimum
- requirement of 64 MB of system memory.
-
- Performance Degradation with Jumbo Frames
- -----------------------------------------
- Degradation in throughput performance may be observed in some Jumbo frames
- environments. If this is observed, increasing the application's socket
- buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values
- may help. See the specific application manual and
- /usr/src/linux*/Documentation/networking/ip-sysctl.txt for more details.
-
- Jumbo Frames on Foundry BigIron 8000 switch
- -------------------------------------------
- There is a known issue using Jumbo frames when connected to a Foundry
- BigIron 8000 switch. This is a 3rd party limitation. If you experience
- loss of packets, lower the MTU size.
-
- Allocating Rx Buffers when Using Jumbo Frames
- ---------------------------------------------
- Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
- the available memory is heavily fragmented. This issue may be seen with PCI-X
- adapters or with packet split disabled. This can be reduced or eliminated
- by changing the amount of available memory for receive buffer allocation, by
- increasing /proc/sys/vm/min_free_kbytes.
-
- Multiple Interfaces on Same Ethernet Broadcast Network
- ------------------------------------------------------
- Due to the default ARP behavior on Linux, it is not possible to have
- one system on two IP networks in the same Ethernet broadcast domain
- (non-partitioned switch) behave as expected. All Ethernet interfaces
- will respond to IP traffic for any IP address assigned to the system.
- This results in unbalanced receive traffic.
-
- If you have multiple interfaces in a server, either turn on ARP
- filtering by entering:
-
- echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
- (this only works if your kernel's version is higher than 2.4.5),
-
- NOTE: This setting is not saved across reboots. The configuration
-       change can be made permanent by adding the line:
-       net.ipv4.conf.all.arp_filter = 1
-       to the file /etc/sysctl.conf
-
- or,
-
- install the interfaces in separate broadcast domains (either in
- different switches or in a switch partitioned to VLANs).
-
- 82541/82547 can't link or are slow to link with some link partners
- -----------------------------------------------------------------
- There is a known compatibility issue with 82541/82547 and some
- low-end switches where the link will not be established, or will
- be slow to establish. In particular, these switches are known to
- be incompatible with 82541/82547:
-
- Planex FXG-08TE
- I-O Data ETG-SH8
-
- To workaround this issue, the driver can be compiled with an override
- of the PHY's master/slave setting. Forcing master or forcing slave
- mode will improve time-to-link.
-
- # make CFLAGS_EXTRA=-DE1000_MASTER_SLAVE=<n>
-
- Where <n> is:
-
- 0 = Hardware default
- 1 = Master mode
- 2 = Slave mode
- 3 = Auto master/slave
-
- Disable rx flow control with ethtool
- ------------------------------------
- In order to disable receive flow control using ethtool, you must turn
- off auto-negotiation on the same command line.
-
- For example:
-
- ethtool -A eth? autoneg off rx off
-
- Unplugging network cable while ethtool -p is running
- ----------------------------------------------------
- In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging
- the network cable while ethtool -p is running will cause the system to
- become unresponsive to keyboard commands, except for control-alt-delete.
- Restarting the system appears to be the only remedy.
-

  Support
  =======
Documentation/networking/e1000e.txt (302 insertions)
···
+ Linux* Driver for Intel(R) Network Connection
+ =============================================
+
+ Intel Gigabit Linux driver.
+ Copyright(c) 1999 - 2010 Intel Corporation.
+
+ Contents
+ ========
+
+ - Identifying Your Adapter
+ - Command Line Parameters
+ - Additional Configurations
+ - Support
+
+ Identifying Your Adapter
+ ========================
+
+ The e1000e driver supports all PCI Express Intel(R) Gigabit Network
+ Connections, except those that are 82575, 82576 and 82580-based*.
+
+ * NOTE: The Intel(R) PRO/1000 P Dual Port Server Adapter is supported by
+   the e1000 driver, not the e1000e driver, due to the 82546 part being used
+   behind a PCI Express bridge.
+
+ For more information on how to identify your adapter, go to the Adapter &
+ Driver ID Guide at:
+
+ http://support.intel.com/support/go/network/adapter/idguide.htm
+
+ For the latest Intel network drivers for Linux, refer to the following
+ website. In the search field, enter your adapter name or type, or use the
+ networking link on the left to search for your adapter:
+
+ http://support.intel.com/support/go/network/adapter/home.htm
+
+ Command Line Parameters
+ =======================
+
+ The default value for each parameter is generally the recommended setting,
+ unless otherwise noted.
+
+ NOTES: For more information about the InterruptThrottleRate,
+        RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay
+        parameters, see the application note at:
+        http://www.intel.com/design/network/applnots/ap450.htm
+
+ InterruptThrottleRate
+ ---------------------
+ Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
+              4=simplified balancing)
+ Default Value: 3
+
+ The driver can limit the number of interrupts per second that the adapter
+ will generate for incoming packets.
+ It does this by writing a value to the
+ adapter that is based on the maximum number of interrupts that the adapter
+ will generate per second.
+
+ Setting InterruptThrottleRate to a value greater than or equal to 100
+ will program the adapter to send out a maximum of that many interrupts
+ per second, even if more packets have come in. This reduces interrupt
+ load on the system and can lower CPU utilization under heavy load,
+ but will increase latency as packets are not processed as quickly.
+
+ The driver has two adaptive modes (setting 1 or 3) in which
+ it dynamically adjusts the InterruptThrottleRate value based on the traffic
+ that it receives. After determining the type of incoming traffic in the last
+ timeframe, it will adjust the InterruptThrottleRate to an appropriate value
+ for that traffic.
+
+ The algorithm classifies the incoming traffic every interval into
+ classes. Once the class is determined, the InterruptThrottleRate value is
+ adjusted to suit that traffic type the best. There are three classes defined:
+ "Bulk traffic", for large amounts of packets of normal size; "Low latency",
+ for small amounts of traffic and/or a significant percentage of small
+ packets; and "Lowest latency", for almost completely small packets or
+ minimal traffic.
+
+ In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
+ for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
+ latency" or "Lowest latency" class, the InterruptThrottleRate is increased
+ stepwise to 20000. This default mode is suitable for most applications.
+
+ For situations where low latency is vital such as cluster or
+ grid computing, the algorithm can reduce latency even more when
+ InterruptThrottleRate is set to mode 1.
+ In this mode, which operates
+ the same as mode 3, the InterruptThrottleRate will be increased stepwise to
+ 70000 for traffic in class "Lowest latency".
+
+ In simplified mode, the interrupt rate is based on the ratio of Tx and
+ Rx traffic. If the bytes per second rate is approximately equal, the
+ interrupt rate will drop as low as 2000 interrupts per second. If the
+ traffic is mostly transmit or mostly receive, the interrupt rate could
+ be as high as 8000.
+
+ Setting InterruptThrottleRate to 0 turns off any interrupt moderation
+ and may improve small packet latency, but is generally not suitable
+ for bulk throughput traffic.
+
+ NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and
+       RxAbsIntDelay parameters. In other words, minimizing the receive
+       and/or transmit absolute delays does not force the controller to
+       generate more interrupts than what the Interrupt Throttle Rate
+       allows.
+
+ NOTE: When e1000e is loaded with default settings and multiple adapters
+       are in use simultaneously, the CPU utilization may increase non-
+       linearly. In order to limit the CPU utilization without impacting
+       the overall throughput, we recommend that you load the driver as
+       follows:
+
+           modprobe e1000e InterruptThrottleRate=3000,3000,3000
+
+       This sets the InterruptThrottleRate to 3000 interrupts/sec for
+       the first, second, and third instances of the driver. The range
+       of 2000 to 3000 interrupts per second works on a majority of
+       systems and is a good starting point, but the optimal value will
+       be platform-specific. If CPU utilization is not a concern, use
+       RX_POLLING (NAPI) and default driver settings.
+
+ RxIntDelay
+ ----------
+ Valid Range: 0-65535 (0=off)
+ Default Value: 0
+
+ This value delays the generation of receive interrupts in units of 1.024
+ microseconds.
+ Receive interrupt reduction can improve CPU efficiency if
+ properly tuned for specific network traffic. Increasing this value adds
+ extra latency to frame reception and can end up decreasing the throughput
+ of TCP traffic. If the system is reporting dropped receives, this value
+ may be set too high, causing the driver to run out of available receive
+ descriptors.
+
+ CAUTION: When setting RxIntDelay to a value other than 0, adapters may
+          hang (stop transmitting) under certain network conditions. If
+          this occurs a NETDEV WATCHDOG message is logged in the system
+          event log. In addition, the controller is automatically reset,
+          restoring the network connection. To eliminate the potential
+          for the hang ensure that RxIntDelay is set to 0.
+
+ RxAbsIntDelay
+ -------------
+ Valid Range: 0-65535 (0=off)
+ Default Value: 8
+
+ This value, in units of 1.024 microseconds, limits the delay in which a
+ receive interrupt is generated. Useful only if RxIntDelay is non-zero,
+ this value ensures that an interrupt is generated after the initial
+ packet is received within the set amount of time. Proper tuning,
+ along with RxIntDelay, may improve traffic throughput in specific network
+ conditions.
+
+ TxIntDelay
+ ----------
+ Valid Range: 0-65535 (0=off)
+ Default Value: 8
+
+ This value delays the generation of transmit interrupts in units of
+ 1.024 microseconds. Transmit interrupt reduction can improve CPU
+ efficiency if properly tuned for specific network traffic. If the
+ system is reporting dropped transmits, this value may be set too high,
+ causing the driver to run out of available transmit descriptors.
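The delay settings above are ordinary module parameters, so they are supplied at load time. A minimal sketch follows; the values shown are the documented defaults and are purely illustrative, not tuning advice, and the commands require root and briefly take the interfaces down:

```shell
# List the parameters this e1000e build accepts, with descriptions.
modinfo -p e1000e

# Reload the driver with the interrupt delays stated explicitly
# (comma-separated values would configure additional adapters).
modprobe -r e1000e
modprobe e1000e RxIntDelay=0 TxIntDelay=8
```

Unloading the module tears down every e1000e interface, so do not run this over a remote session that depends on one of them.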
162 + 
163 + TxAbsIntDelay
164 + -------------
165 + Valid Range: 0-65535 (0=off)
166 + Default Value: 32
167 + 
168 + This value, in units of 1.024 microseconds, limits the delay in which a
169 + transmit interrupt is generated. Useful only if TxIntDelay is non-zero,
170 + this value ensures that an interrupt is generated after the initial
171 + packet is sent on the wire within the set amount of time. Proper tuning,
172 + along with TxIntDelay, may improve traffic throughput in specific
173 + network conditions.
174 + 
175 + Copybreak
176 + ---------
177 + Valid Range: 0-xxxxxxx (0=off)
178 + Default Value: 256
179 + 
180 + The driver copies all packets at or below this size to a fresh Rx
181 + buffer before handing them up the stack.
182 + 
183 + This parameter differs from other parameters in that it is a
184 + single (not 1,1,1 etc.) parameter applied to all driver instances and
185 + it is also available during runtime at
186 + /sys/module/e1000e/parameters/copybreak
187 + 
188 + SmartPowerDownEnable
189 + --------------------
190 + Valid Range: 0-1
191 + Default Value: 0 (disabled)
192 + 
193 + Allows the PHY to turn off in lower power states. The user can set this
194 + parameter on supported chipsets.
195 + 
196 + KumeranLockLoss
197 + ---------------
198 + Valid Range: 0-1
199 + Default Value: 1 (enabled)
200 + 
201 + This workaround skips resetting the PHY at shutdown for the initial
202 + silicon releases of ICH8 systems.
203 + 
204 + IntMode
205 + -------
206 + Valid Range: 0-2 (0=legacy, 1=MSI, 2=MSI-X)
207 + Default Value: 2
208 + 
209 + Allows changing the interrupt mode at module load time, without requiring a
210 + recompile. If the driver load fails to enable a specific interrupt mode, the
211 + driver will try other interrupt modes, from least to most compatible. The
212 + interrupt order is MSI-X, MSI, Legacy. If specifying MSI (IntMode=1)
213 + interrupts, only MSI and Legacy will be attempted.
214 + 
215 + CrcStripping
216 + ------------
217 + Valid Range: 0-1
218 + Default Value: 1 (enabled)
219 + 
220 + Strip the CRC from received packets before sending them up the network stack.
221 + If you have a machine with a BMC enabled but cannot receive IPMI traffic after
222 + loading or enabling the driver, try disabling this feature.
223 + 
224 + WriteProtectNVM
225 + ---------------
226 + Valid Range: 0-1
227 + Default Value: 1 (enabled)
228 + 
229 + Set the hardware to ignore all write/erase cycles to the GbE region in the
230 + ICHx NVM (non-volatile memory). This feature can be disabled by the
231 + WriteProtectNVM module parameter (enabled by default) only after a hardware
232 + reset, but the machine must be power cycled before trying to enable writes.
233 + 
234 + Note: if the kernel config option CONFIG_STRICT_DEVMEM=y is set, the kernel
235 + boot option iomem=relaxed may need to be set for the root user to write
236 + the NVM from user space via ethtool.
237 + 
238 + Additional Configurations
239 + =========================
240 + 
241 + Jumbo Frames
242 + ------------
243 + Jumbo Frames support is enabled by changing the MTU to a value larger than
244 + the default of 1500. Use the ifconfig command to increase the MTU size.
245 + For example:
246 + 
247 + ifconfig eth<x> mtu 9000 up
248 + 
249 + This setting is not saved across reboots.
250 + 
251 + Notes:
252 + 
253 + - The maximum MTU setting for Jumbo Frames is 9216. This value coincides
254 + with the maximum Jumbo Frames size of 9234 bytes.
255 + 
256 + - Using Jumbo Frames at 10 or 100 Mbps is not supported and may result in
257 + poor performance or loss of link.
258 + 
259 + - Some adapters limit Jumbo Frames sized packets to a maximum of
260 + 4096 bytes and some adapters do not support Jumbo Frames.
261 + 
262 + 
263 + Ethtool
264 + -------
265 + The driver utilizes the ethtool interface for driver configuration and
266 + diagnostics, as well as displaying statistical information. 
We
267 + strongly recommend downloading the latest version of Ethtool at:
268 + 
269 + http://sourceforge.net/projects/gkernel.
270 + 
271 + Speed and Duplex
272 + ----------------
273 + Speed and Duplex are configured through the Ethtool* utility. For
274 + instructions, refer to the Ethtool man page.
275 + 
276 + Enabling Wake on LAN* (WoL)
277 + ---------------------------
278 + WoL is configured through the Ethtool* utility. For instructions on
279 + enabling WoL with Ethtool, refer to the Ethtool man page.
280 + 
281 + WoL will be enabled on the system during the next shutdown or reboot.
282 + For this driver version, in order to enable WoL, the e1000e driver must be
283 + loaded when shutting down or rebooting the system.
284 + 
285 + In most cases, Wake on LAN is supported only on port A of multiple-port
286 + adapters. To verify whether a port supports Wake on LAN, run ethtool eth<X>.
287 + 
288 + 
289 + Support
290 + =======
291 + 
292 + For general information, go to the Intel support website at:
293 + 
294 + www.intel.com/support/
295 + 
296 + or the Intel Wired Networking project hosted by Sourceforge at:
297 + 
298 + http://sourceforge.net/projects/e1000
299 + 
300 + If an issue is identified with the released source code on the supported
301 + kernel with a supported adapter, email the specific information related
302 + to the issue to e1000-devel@lists.sf.net
+3 -37
Documentation/networking/ixgbevf.txt
··· 1 1 Linux* Base Driver for Intel(R) Network Connection 2 2 ================================================== 3 3 4 - November 24, 2009 4 + Intel Gigabit Linux driver. 5 + Copyright(c) 1999 - 2010 Intel Corporation. 5 6 6 7 Contents 7 8 ======== 8 9 9 - - In This Release 10 10 - Identifying Your Adapter 11 11 - Known Issues/Troubleshooting 12 12 - Support 13 - 14 - In This Release 15 - =============== 16 13 17 14 This file describes the ixgbevf Linux* Base Driver for Intel Network 18 15 Connection. ··· 30 33 For more information on how to identify your adapter, go to the Adapter & 31 34 Driver ID Guide at: 32 35 33 - http://support.intel.com/support/network/sb/CS-008441.htm 36 + http://support.intel.com/support/go/network/adapter/idguide.htm 34 37 35 38 Known Issues/Troubleshooting 36 39 ============================ ··· 54 57 If an issue is identified with the released source code on the supported 55 58 kernel with a supported adapter, email the specific information related 56 59 to the issue to e1000-devel@lists.sf.net 57 - 58 - License 59 - ======= 60 - 61 - Intel 10 Gigabit Linux driver. 62 - Copyright(c) 1999 - 2009 Intel Corporation. 63 - 64 - This program is free software; you can redistribute it and/or modify it 65 - under the terms and conditions of the GNU General Public License, 66 - version 2, as published by the Free Software Foundation. 67 - 68 - This program is distributed in the hope it will be useful, but WITHOUT 69 - ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 70 - FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 71 - more details. 72 - 73 - You should have received a copy of the GNU General Public License along with 74 - this program; if not, write to the Free Software Foundation, Inc., 75 - 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 76 - 77 - The full GNU General Public License is included in this distribution in 78 - the file called "COPYING". 
79 - 80 - Trademarks 81 - ========== 82 - 83 - Intel, Itanium, and Pentium are trademarks or registered trademarks of 84 - Intel Corporation or its subsidiaries in the United States and other 85 - countries. 86 - 87 - * Other names and brands may be claimed as the property of others.
+1 -1
Documentation/vm/page-types.c
··· 478 478 } 479 479 480 480 if (opt_unpoison && !hwpoison_forget_fd) { 481 - sprintf(buf, "%s/renew-pfn", hwpoison_debug_fs); 481 + sprintf(buf, "%s/unpoison-pfn", hwpoison_debug_fs); 482 482 hwpoison_forget_fd = checked_open(buf, O_WRONLY); 483 483 } 484 484 }
+34 -4
MAINTAINERS
··· 969 969 S: Maintained 970 970 F: arch/arm/mach-s5p*/ 971 971 972 + ARM/SAMSUNG S5P SERIES FIMC SUPPORT 973 + M: Kyungmin Park <kyungmin.park@samsung.com> 974 + M: Sylwester Nawrocki <s.nawrocki@samsung.com> 975 + L: linux-arm-kernel@lists.infradead.org 976 + L: linux-media@vger.kernel.org 977 + S: Maintained 978 + F: arch/arm/plat-s5p/dev-fimc* 979 + F: arch/arm/plat-samsung/include/plat/*fimc* 980 + F: drivers/media/video/s5p-fimc/ 981 + 972 982 ARM/SHMOBILE ARM ARCHITECTURE 973 983 M: Paul Mundt <lethal@linux-sh.org> 974 984 M: Magnus Damm <magnus.damm@gmail.com> ··· 2545 2535 F: drivers/scsi/gdt* 2546 2536 2547 2537 GENERIC GPIO I2C DRIVER 2548 - M: Haavard Skinnemoen <hskinnemoen@atmel.com> 2538 + M: Haavard Skinnemoen <hskinnemoen@gmail.com> 2549 2539 S: Supported 2550 2540 F: drivers/i2c/busses/i2c-gpio.c 2551 2541 F: include/linux/i2c-gpio.h ··· 3073 3063 S: Maintained 3074 3064 F: drivers/net/ixp2000/ 3075 3065 3076 - INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe) 3066 + INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe/ixgbevf) 3077 3067 M: Jeff Kirsher <jeffrey.t.kirsher@intel.com> 3078 3068 M: Jesse Brandeburg <jesse.brandeburg@intel.com> 3079 3069 M: Bruce Allan <bruce.w.allan@intel.com> 3080 - M: Alex Duyck <alexander.h.duyck@intel.com> 3070 + M: Carolyn Wyborny <carolyn.wyborny@intel.com> 3071 + M: Don Skidmore <donald.c.skidmore@intel.com> 3072 + M: Greg Rose <gregory.v.rose@intel.com> 3081 3073 M: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com> 3074 + M: Alex Duyck <alexander.h.duyck@intel.com> 3082 3075 M: John Ronciak <john.ronciak@intel.com> 3083 3076 L: e1000-devel@lists.sourceforge.net 3084 3077 W: http://e1000.sourceforge.net/ 3085 3078 S: Supported 3079 + F: Documentation/networking/e100.txt 3080 + F: Documentation/networking/e1000.txt 3081 + F: Documentation/networking/e1000e.txt 3082 + F: Documentation/networking/igb.txt 3083 + F: Documentation/networking/igbvf.txt 3084 + F: 
Documentation/networking/ixgb.txt 3085 + F: Documentation/networking/ixgbe.txt 3086 + F: Documentation/networking/ixgbevf.txt 3086 3087 F: drivers/net/e100.c 3087 3088 F: drivers/net/e1000/ 3088 3089 F: drivers/net/e1000e/ ··· 3101 3080 F: drivers/net/igbvf/ 3102 3081 F: drivers/net/ixgb/ 3103 3082 F: drivers/net/ixgbe/ 3083 + F: drivers/net/ixgbevf/ 3104 3084 3105 3085 INTEL PRO/WIRELESS 2100 NETWORK CONNECTION SUPPORT 3106 3086 L: linux-wireless@vger.kernel.org ··· 5030 5008 F: drivers/media/video/*7146* 5031 5009 F: include/media/*7146* 5032 5010 5011 + SAMSUNG AUDIO (ASoC) DRIVERS 5012 + M: Jassi Brar <jassi.brar@samsung.com> 5013 + L: alsa-devel@alsa-project.org (moderated for non-subscribers) 5014 + S: Supported 5015 + F: sound/soc/s3c24xx 5016 + 5033 5017 TLG2300 VIDEO4LINUX-2 DRIVER 5034 5018 M: Huang Shijie <shijie8@gmail.com> 5035 5019 M: Kang Yong <kangyong@telegent.com> ··· 6478 6450 WOLFSON MICROELECTRONICS DRIVERS 6479 6451 M: Mark Brown <broonie@opensource.wolfsonmicro.com> 6480 6452 M: Ian Lartey <ian@opensource.wolfsonmicro.com> 6453 + M: Dimitris Papastamos <dp@opensource.wolfsonmicro.com> 6454 + T: git git://opensource.wolfsonmicro.com/linux-2.6-asoc 6481 6455 T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus 6482 - W: http://opensource.wolfsonmicro.com/node/8 6456 + W: http://opensource.wolfsonmicro.com/content/linux-drivers-wolfson-devices 6483 6457 S: Supported 6484 6458 F: Documentation/hwmon/wm83?? 6485 6459 F: drivers/leds/leds-wm83*.c
+2 -2
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 36 4 - EXTRAVERSION = -rc7 5 - NAME = Sheep on Meth 4 + EXTRAVERSION = -rc8 5 + NAME = Flesh-Eating Bats with Fangs 6 6 7 7 # *DOCUMENTATION* 8 8 # To see a list of typical targets execute "make help"
+14
arch/arm/Kconfig
··· 1101 1101 invalidated are not, resulting in an incoherency in the system page 1102 1102 tables. The workaround changes the TLB flushing routines to invalidate 1103 1103 entries regardless of the ASID. 1104 + 1105 + config ARM_ERRATA_743622 1106 + bool "ARM errata: Faulty hazard checking in the Store Buffer may lead to data corruption" 1107 + depends on CPU_V7 1108 + help 1109 + This option enables the workaround for the 743622 Cortex-A9 1110 + (r2p0..r2p2) erratum. Under very rare conditions, a faulty 1111 + optimisation in the Cortex-A9 Store Buffer may lead to data 1112 + corruption. This workaround sets a specific bit in the diagnostic 1113 + register of the Cortex-A9 which disables the Store Buffer 1114 + optimisation, preventing the defect from occurring. This has no 1115 + visible impact on the overall performance or power consumption of the 1116 + processor. 1117 + 1104 1118 endmenu 1105 1119 1106 1120 source "arch/arm/common/Kconfig"
+4 -3
arch/arm/kernel/kprobes-decode.c
··· 1162 1162 { 1163 1163 /* 1164 1164 * MSR : cccc 0011 0x10 xxxx xxxx xxxx xxxx xxxx 1165 - * Undef : cccc 0011 0x00 xxxx xxxx xxxx xxxx xxxx 1165 + * Undef : cccc 0011 0100 xxxx xxxx xxxx xxxx xxxx 1166 1166 * ALU op with S bit and Rd == 15 : 1167 1167 * cccc 001x xxx1 xxxx 1111 xxxx xxxx xxxx 1168 1168 */ 1169 - if ((insn & 0x0f900000) == 0x03200000 || /* MSR & Undef */ 1169 + if ((insn & 0x0fb00000) == 0x03200000 || /* MSR */ 1170 + (insn & 0x0ff00000) == 0x03400000 || /* Undef */ 1170 1171 (insn & 0x0e10f000) == 0x0210f000) /* ALU s-bit, R15 */ 1171 1172 return INSN_REJECTED; 1172 1173 ··· 1178 1177 * *S (bit 20) updates condition codes 1179 1178 * ADC/SBC/RSC reads the C flag 1180 1179 */ 1181 - insn &= 0xfff00fff; /* Rn = r0, Rd = r0 */ 1180 + insn &= 0xffff0fff; /* Rd = r0 */ 1182 1181 asi->insn[0] = insn; 1183 1182 asi->insn_handler = (insn & (1 << 20)) ? /* S-bit */ 1184 1183 emulate_alu_imm_rwflags : emulate_alu_imm_rflags;
+3 -4
arch/arm/mach-at91/include/mach/system.h
··· 28 28 29 29 static inline void arch_idle(void) 30 30 { 31 - #ifndef CONFIG_DEBUG_KERNEL 32 31 /* 33 32 * Disable the processor clock. The processor will be automatically 34 33 * re-enabled by an interrupt or by a reset. 35 34 */ 36 35 at91_sys_write(AT91_PMC_SCDR, AT91_PMC_PCK); 37 - #else 36 + #ifndef CONFIG_CPU_ARM920T 38 37 /* 39 38 * Set the processor (CP15) into 'Wait for Interrupt' mode. 40 - * Unlike disabling the processor clock via the PMC (above) 41 - * this allows the processor to be woken via JTAG. 39 + * Post-RM9200 processors need this in conjunction with the above 40 + * to save power when idle. 42 41 */ 43 42 cpu_do_idle(); 44 43 #endif
+1 -1
arch/arm/mach-ep93xx/dma-m2p.c
··· 276 276 v &= ~(M2P_CONTROL_STALL_IRQ_EN | M2P_CONTROL_NFB_IRQ_EN); 277 277 m2p_set_control(ch, v); 278 278 279 - while (m2p_channel_state(ch) == STATE_ON) 279 + while (m2p_channel_state(ch) >= STATE_ON) 280 280 cpu_relax(); 281 281 282 282 m2p_set_control(ch, 0x0);
+1
arch/arm/mach-imx/Kconfig
··· 122 122 select IMX_HAVE_PLATFORM_IMX_I2C 123 123 select IMX_HAVE_PLATFORM_IMX_UART 124 124 select IMX_HAVE_PLATFORM_MXC_NAND 125 + select MXC_ULPI if USB_ULPI 125 126 help 126 127 Include support for Eukrea CPUIMX27 platform. This includes 127 128 specific configurations for the module and its peripherals.
+1 -1
arch/arm/mach-imx/mach-cpuimx27.c
··· 259 259 i2c_register_board_info(0, eukrea_cpuimx27_i2c_devices, 260 260 ARRAY_SIZE(eukrea_cpuimx27_i2c_devices)); 261 261 262 - imx27_add_i2c_imx1(&cpuimx27_i2c1_data); 262 + imx27_add_i2c_imx0(&cpuimx27_i2c1_data); 263 263 264 264 platform_add_devices(platform_devices, ARRAY_SIZE(platform_devices)); 265 265
+1
arch/arm/mach-s5p6440/cpu.c
··· 19 19 #include <linux/sysdev.h> 20 20 #include <linux/serial_core.h> 21 21 #include <linux/platform_device.h> 22 + #include <linux/sched.h> 22 23 23 24 #include <asm/mach/arch.h> 24 25 #include <asm/mach/map.h>
+1
arch/arm/mach-s5p6442/cpu.c
··· 19 19 #include <linux/sysdev.h> 20 20 #include <linux/serial_core.h> 21 21 #include <linux/platform_device.h> 22 + #include <linux/sched.h> 22 23 23 24 #include <asm/mach/arch.h> 24 25 #include <asm/mach/map.h>
+1
arch/arm/mach-s5pc100/cpu.c
··· 21 21 #include <linux/sysdev.h> 22 22 #include <linux/serial_core.h> 23 23 #include <linux/platform_device.h> 24 + #include <linux/sched.h> 24 25 25 26 #include <asm/mach/arch.h> 26 27 #include <asm/mach/map.h>
-5
arch/arm/mach-s5pv210/clock.c
··· 173 173 return s5p_gatectrl(S5P_CLKGATE_IP3, clk, enable); 174 174 } 175 175 176 - static int s5pv210_clk_ip4_ctrl(struct clk *clk, int enable) 177 - { 178 - return s5p_gatectrl(S5P_CLKGATE_IP4, clk, enable); 179 - } 180 - 181 176 static int s5pv210_clk_mask0_ctrl(struct clk *clk, int enable) 182 177 { 183 178 return s5p_gatectrl(S5P_CLK_SRC_MASK0, clk, enable);
+1
arch/arm/mach-s5pv210/cpu.c
··· 19 19 #include <linux/io.h> 20 20 #include <linux/sysdev.h> 21 21 #include <linux/platform_device.h> 22 + #include <linux/sched.h> 22 23 23 24 #include <asm/mach/arch.h> 24 25 #include <asm/mach/map.h>
+2 -2
arch/arm/mach-vexpress/ct-ca9x4.c
··· 68 68 } 69 69 70 70 #if 0 71 - static void ct_ca9x4_timer_init(void) 71 + static void __init ct_ca9x4_timer_init(void) 72 72 { 73 73 writel(0, MMIO_P2V(CT_CA9X4_TIMER0) + TIMER_CTRL); 74 74 writel(0, MMIO_P2V(CT_CA9X4_TIMER1) + TIMER_CTRL); ··· 222 222 .resource = pmu_resources, 223 223 }; 224 224 225 - static void ct_ca9x4_init(void) 225 + static void __init ct_ca9x4_init(void) 226 226 { 227 227 int i; 228 228
+1 -1
arch/arm/mach-vexpress/v2m.c
··· 48 48 } 49 49 50 50 51 - static void v2m_timer_init(void) 51 + static void __init v2m_timer_init(void) 52 52 { 53 53 writel(0, MMIO_P2V(V2M_TIMER0) + TIMER_CTRL); 54 54 writel(0, MMIO_P2V(V2M_TIMER1) + TIMER_CTRL);
+6 -2
arch/arm/mm/ioremap.c
··· 204 204 /*
 205 205 * Don't allow RAM to be mapped - this causes problems with ARMv6+
 206 206 */
 207 - if (WARN_ON(pfn_valid(pfn)))
 208 - return NULL;
 207 + if (pfn_valid(pfn)) {
 208 + printk(KERN_WARNING "BUG: Your driver calls ioremap() on system memory. This leads\n"
 209 + "to architecturally unpredictable behaviour on ARMv6+, and ioremap()\n"
 210 + "will fail in the next kernel release. Please fix your driver.\n");
 211 + WARN_ON(1);
 212 + }
 209 213 
 210 214 type = get_mem_type(mtype);
 211 215 if (!type)
+2 -2
arch/arm/mm/mmu.c
··· 248 248 }, 249 249 [MT_MEMORY] = { 250 250 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 251 - L_PTE_USER | L_PTE_EXEC, 251 + L_PTE_WRITE | L_PTE_EXEC, 252 252 .prot_l1 = PMD_TYPE_TABLE, 253 253 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, 254 254 .domain = DOMAIN_KERNEL, ··· 259 259 }, 260 260 [MT_MEMORY_NONCACHED] = { 261 261 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 262 - L_PTE_USER | L_PTE_EXEC | L_PTE_MT_BUFFERABLE, 262 + L_PTE_WRITE | L_PTE_EXEC | L_PTE_MT_BUFFERABLE, 263 263 .prot_l1 = PMD_TYPE_TABLE, 264 264 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, 265 265 .domain = DOMAIN_KERNEL,
+9 -1
arch/arm/mm/proc-v7.S
··· 253 253 orreq r10, r10, #1 << 22 @ set bit #22 254 254 mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register 255 255 #endif 256 + #ifdef CONFIG_ARM_ERRATA_743622 257 + teq r6, #0x20 @ present in r2p0 258 + teqne r6, #0x21 @ present in r2p1 259 + teqne r6, #0x22 @ present in r2p2 260 + mrceq p15, 0, r10, c15, c0, 1 @ read diagnostic register 261 + orreq r10, r10, #1 << 6 @ set bit #6 262 + mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register 263 + #endif 256 264 257 265 3: mov r10, #0 258 266 #ifdef HARVARD_CACHE ··· 373 365 b __v7_ca9mp_setup 374 366 .long cpu_arch_name 375 367 .long cpu_elf_name 376 - .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP 368 + .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_TLS 377 369 .long cpu_v7_name 378 370 .long v7_processor_functions 379 371 .long v7wbi_tlb_fns
+1
arch/arm/plat-omap/iommu.c
··· 320 320 if ((start <= da) && (da < start + bytes)) { 321 321 dev_dbg(obj->dev, "%s: %08x<=%08x(%x)\n", 322 322 __func__, start, da, bytes); 323 + iotlb_load_cr(obj, &cr); 323 324 iommu_write_reg(obj, 1, MMU_FLUSH_ENTRY); 324 325 } 325 326 }
-1
arch/arm/plat-samsung/adc.c
··· 435 435 static int s3c_adc_resume(struct platform_device *pdev) 436 436 { 437 437 struct adc_device *adc = platform_get_drvdata(pdev); 438 - unsigned long flags; 439 438 440 439 clk_enable(adc->clk); 441 440 enable_irq(adc->irq);
+26 -1
arch/arm/plat-samsung/clock.c
··· 48 48 #include <plat/clock.h> 49 49 #include <plat/cpu.h> 50 50 51 + #include <linux/serial_core.h> 52 + #include <plat/regs-serial.h> /* for s3c24xx_uart_devs */ 53 + 51 54 /* clock information */ 52 55 53 56 static LIST_HEAD(clocks); ··· 68 65 return 0; 69 66 } 70 67 68 + static int dev_is_s3c_uart(struct device *dev) 69 + { 70 + struct platform_device **pdev = s3c24xx_uart_devs; 71 + int i; 72 + for (i = 0; i < ARRAY_SIZE(s3c24xx_uart_devs); i++, pdev++) 73 + if (*pdev && dev == &(*pdev)->dev) 74 + return 1; 75 + return 0; 76 + } 77 + 78 + /* 79 + * Serial drivers call get_clock() very early, before platform bus 80 + * has been set up, this requires a special check to let them get 81 + * a proper clock 82 + */ 83 + 84 + static int dev_is_platform_device(struct device *dev) 85 + { 86 + return dev->bus == &platform_bus_type || 87 + (dev->bus == NULL && dev_is_s3c_uart(dev)); 88 + } 89 + 71 90 /* Clock API calls */ 72 91 73 92 struct clk *clk_get(struct device *dev, const char *id) ··· 98 73 struct clk *clk = ERR_PTR(-ENOENT); 99 74 int idno; 100 75 101 - if (dev == NULL || dev->bus != &platform_bus_type) 76 + if (dev == NULL || !dev_is_platform_device(dev)) 102 77 idno = -1; 103 78 else 104 79 idno = to_platform_device(dev)->id;
+8 -73
arch/blackfin/include/asm/bfin5xx_spi.h
··· 11 11 12 12 #define MIN_SPI_BAUD_VAL 2 13 13 14 - #define SPI_READ 0 15 - #define SPI_WRITE 1 16 - 17 - #define SPI_CTRL_OFF 0x0 18 - #define SPI_FLAG_OFF 0x4 19 - #define SPI_STAT_OFF 0x8 20 - #define SPI_TXBUFF_OFF 0xc 21 - #define SPI_RXBUFF_OFF 0x10 22 - #define SPI_BAUD_OFF 0x14 23 - #define SPI_SHAW_OFF 0x18 24 - 25 - 26 14 #define BIT_CTL_ENABLE 0x4000 27 15 #define BIT_CTL_OPENDRAIN 0x2000 28 16 #define BIT_CTL_MASTER 0x1000 29 - #define BIT_CTL_POLAR 0x0800 30 - #define BIT_CTL_PHASE 0x0400 31 - #define BIT_CTL_BITORDER 0x0200 17 + #define BIT_CTL_CPOL 0x0800 18 + #define BIT_CTL_CPHA 0x0400 19 + #define BIT_CTL_LSBF 0x0200 32 20 #define BIT_CTL_WORDSIZE 0x0100 33 - #define BIT_CTL_MISOENABLE 0x0020 21 + #define BIT_CTL_EMISO 0x0020 22 + #define BIT_CTL_PSSE 0x0010 23 + #define BIT_CTL_GM 0x0008 24 + #define BIT_CTL_SZ 0x0004 34 25 #define BIT_CTL_RXMOD 0x0000 35 26 #define BIT_CTL_TXMOD 0x0001 36 27 #define BIT_CTL_TIMOD_DMA_TX 0x0003 ··· 41 50 #define BIT_STU_SENDOVER 0x0001 42 51 #define BIT_STU_RECVFULL 0x0020 43 52 44 - #define CFG_SPI_ENABLE 1 45 - #define CFG_SPI_DISABLE 0 46 - 47 - #define CFG_SPI_OUTENABLE 1 48 - #define CFG_SPI_OUTDISABLE 0 49 - 50 - #define CFG_SPI_ACTLOW 1 51 - #define CFG_SPI_ACTHIGH 0 52 - 53 - #define CFG_SPI_PHASESTART 1 54 - #define CFG_SPI_PHASEMID 0 55 - 56 - #define CFG_SPI_MASTER 1 57 - #define CFG_SPI_SLAVE 0 58 - 59 - #define CFG_SPI_SENELAST 0 60 - #define CFG_SPI_SENDZERO 1 61 - 62 - #define CFG_SPI_RCVFLUSH 1 63 - #define CFG_SPI_RCVDISCARD 0 64 - 65 - #define CFG_SPI_LSBFIRST 1 66 - #define CFG_SPI_MSBFIRST 0 67 - 68 - #define CFG_SPI_WORDSIZE16 1 69 - #define CFG_SPI_WORDSIZE8 0 70 - 71 - #define CFG_SPI_MISOENABLE 1 72 - #define CFG_SPI_MISODISABLE 0 73 - 74 - #define CFG_SPI_READ 0x00 75 - #define CFG_SPI_WRITE 0x01 76 - #define CFG_SPI_DMAREAD 0x02 77 - #define CFG_SPI_DMAWRITE 0x03 78 - 79 - #define CFG_SPI_CSCLEARALL 0 80 - #define CFG_SPI_CHIPSEL1 1 81 - #define CFG_SPI_CHIPSEL2 2 82 - #define 
CFG_SPI_CHIPSEL3 3 83 - #define CFG_SPI_CHIPSEL4 4 84 - #define CFG_SPI_CHIPSEL5 5 85 - #define CFG_SPI_CHIPSEL6 6 86 - #define CFG_SPI_CHIPSEL7 7 87 - 88 - #define CFG_SPI_CS1VALUE 1 89 - #define CFG_SPI_CS2VALUE 2 90 - #define CFG_SPI_CS3VALUE 3 91 - #define CFG_SPI_CS4VALUE 4 92 - #define CFG_SPI_CS5VALUE 5 93 - #define CFG_SPI_CS6VALUE 6 94 - #define CFG_SPI_CS7VALUE 7 95 - 96 - #define CMD_SPI_SET_BAUDRATE 2 97 - #define CMD_SPI_GET_SYSTEMCLOCK 25 98 - #define CMD_SPI_SET_WRITECONTINUOUS 26 53 + #define MAX_CTRL_CS 8 /* cs in spi controller */ 99 54 100 55 /* device.platform_data for SSP controller devices */ 101 56 struct bfin5xx_spi_master { ··· 57 120 u16 ctl_reg; 58 121 u8 enable_dma; 59 122 u8 bits_per_word; 60 - u8 cs_change_per_word; 61 123 u16 cs_chg_udelay; /* Some devices require 16-bit delays */ 62 - u32 cs_gpio; 63 124 /* Value to send if no TX value is supplied, usually 0x0 or 0xFFFF */ 64 125 u16 idle_tx_val; 65 126 u8 pio_interrupt; /* Enable spi data irq */
+2 -2
arch/m32r/include/asm/elf.h
··· 82 82 * These are used to set parameters in the core dumps. 83 83 */ 84 84 #define ELF_CLASS ELFCLASS32 85 - #if defined(__LITTLE_ENDIAN) 85 + #if defined(__LITTLE_ENDIAN__) 86 86 #define ELF_DATA ELFDATA2LSB 87 - #elif defined(__BIG_ENDIAN) 87 + #elif defined(__BIG_ENDIAN__) 88 88 #define ELF_DATA ELFDATA2MSB 89 89 #else 90 90 #error no endian defined
+1
arch/m32r/kernel/.gitignore
··· 1 + vmlinux.lds
+3 -1
arch/m32r/kernel/signal.c
··· 28 28 29 29 #define DEBUG_SIG 0 30 30 31 + #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP))) 32 + 31 33 asmlinkage int 32 34 sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss, 33 35 unsigned long r2, unsigned long r3, unsigned long r4, ··· 256 254 static int prev_insn(struct pt_regs *regs) 257 255 { 258 256 u16 inst; 259 - if (get_user(&inst, (u16 __user *)(regs->bpc - 2))) 257 + if (get_user(inst, (u16 __user *)(regs->bpc - 2))) 260 258 return -EFAULT; 261 259 if ((inst & 0xfff0) == 0x10f0) /* trap ? */ 262 260 regs->bpc -= 2;
+1
arch/mips/include/asm/siginfo.h
··· 88 88 #ifdef __ARCH_SI_TRAPNO 89 89 int _trapno; /* TRAP # which caused the signal */ 90 90 #endif 91 + short _addr_lsb; 91 92 } _sigfault; 92 93 93 94 /* SIGPOLL, SIGXFSZ (To do ...) */
+5 -9
arch/um/drivers/hostaudio_kern.c
··· 40 40 " This is used to specify the host mixer device to the hostaudio driver.\n"\ 41 41 " The default is \"" HOSTAUDIO_DEV_MIXER "\".\n\n" 42 42 43 + module_param(dsp, charp, 0644); 44 + MODULE_PARM_DESC(dsp, DSP_HELP); 45 + module_param(mixer, charp, 0644); 46 + MODULE_PARM_DESC(mixer, MIXER_HELP); 47 + 43 48 #ifndef MODULE 44 49 static int set_dsp(char *name, int *add) 45 50 { ··· 61 56 } 62 57 63 58 __uml_setup("mixer=", set_mixer, "mixer=<mixer device>\n" MIXER_HELP); 64 - 65 - #else /*MODULE*/ 66 - 67 - module_param(dsp, charp, 0644); 68 - MODULE_PARM_DESC(dsp, DSP_HELP); 69 - 70 - module_param(mixer, charp, 0644); 71 - MODULE_PARM_DESC(mixer, MIXER_HELP); 72 - 73 59 #endif 74 60 75 61 /* /dev/dsp file operations */
+5 -4
arch/um/drivers/ubd_kern.c
··· 163 163 struct scatterlist sg[MAX_SG]; 164 164 struct request *request; 165 165 int start_sg, end_sg; 166 + sector_t rq_pos; 166 167 }; 167 168 168 169 #define DEFAULT_COW { \ ··· 188 187 .request = NULL, \ 189 188 .start_sg = 0, \ 190 189 .end_sg = 0, \ 190 + .rq_pos = 0, \ 191 191 } 192 192 193 193 /* Protected by ubd_lock */ ··· 1230 1228 { 1231 1229 struct io_thread_req *io_req; 1232 1230 struct request *req; 1233 - sector_t sector; 1234 1231 int n; 1235 1232 1236 1233 while(1){ ··· 1240 1239 return; 1241 1240 1242 1241 dev->request = req; 1242 + dev->rq_pos = blk_rq_pos(req); 1243 1243 dev->start_sg = 0; 1244 1244 dev->end_sg = blk_rq_map_sg(q, req, dev->sg); 1245 1245 } 1246 1246 1247 1247 req = dev->request; 1248 - sector = blk_rq_pos(req); 1249 1248 while(dev->start_sg < dev->end_sg){ 1250 1249 struct scatterlist *sg = &dev->sg[dev->start_sg]; 1251 1250 ··· 1257 1256 return; 1258 1257 } 1259 1258 prepare_request(req, io_req, 1260 - (unsigned long long)sector << 9, 1259 + (unsigned long long)dev->rq_pos << 9, 1261 1260 sg->offset, sg->length, sg_page(sg)); 1262 1261 1263 - sector += sg->length >> 9; 1264 1262 n = os_write_file(thread_fd, &io_req, 1265 1263 sizeof(struct io_thread_req *)); 1266 1264 if(n != sizeof(struct io_thread_req *)){ ··· 1272 1272 return; 1273 1273 } 1274 1274 1275 + dev->rq_pos += sg->length >> 9; 1275 1276 dev->start_sg++; 1276 1277 } 1277 1278 dev->end_sg = 0;
+5 -17
arch/x86/ia32/ia32_aout.c
··· 34 34 #include <asm/ia32.h> 35 35 36 36 #undef WARN_OLD 37 - #undef CORE_DUMP /* probably broken */ 37 + #undef CORE_DUMP /* definitely broken */ 38 38 39 39 static int load_aout_binary(struct linux_binprm *, struct pt_regs *regs); 40 40 static int load_aout_library(struct file *); ··· 131 131 * macros to write out all the necessary info. 132 132 */ 133 133 134 - static int dump_write(struct file *file, const void *addr, int nr) 135 - { 136 - return file->f_op->write(file, addr, nr, &file->f_pos) == nr; 137 - } 134 + #include <linux/coredump.h> 138 135 139 136 #define DUMP_WRITE(addr, nr) \ 140 137 if (!dump_write(file, (void *)(addr), (nr))) \ 141 138 goto end_coredump; 142 139 143 - #define DUMP_SEEK(offset) \ 144 - if (file->f_op->llseek) { \ 145 - if (file->f_op->llseek(file, (offset), 0) != (offset)) \ 146 - goto end_coredump; \ 147 - } else \ 148 - file->f_pos = (offset) 140 + #define DUMP_SEEK(offset) \ 141 + if (!dump_seek(file, offset)) \ 142 + goto end_coredump; 149 143 150 144 #define START_DATA() (u.u_tsize << PAGE_SHIFT) 151 145 #define START_STACK(u) (u.start_stack) ··· 211 217 dump_size = dump.u_ssize << PAGE_SHIFT; 212 218 DUMP_WRITE(dump_start, dump_size); 213 219 } 214 - /* 215 - * Finally dump the task struct. Not be used by gdb, but 216 - * could be useful 217 - */ 218 - set_fs(KERNEL_DS); 219 - DUMP_WRITE(current, sizeof(*current)); 220 220 end_coredump: 221 221 set_fs(fs); 222 222 return has_dumped;
+3 -6
arch/x86/kernel/cpu/mcheck/mce_amd.c
··· 141 141 address = (low & MASK_BLKPTR_LO) >> 21; 142 142 if (!address) 143 143 break; 144 + 144 145 address += MCG_XBLK_ADDR; 145 146 } else 146 147 ++address; ··· 149 148 if (rdmsr_safe(address, &low, &high)) 150 149 break; 151 150 152 - if (!(high & MASK_VALID_HI)) { 153 - if (block) 154 - continue; 155 - else 156 - break; 157 - } 151 + if (!(high & MASK_VALID_HI)) 152 + continue; 158 153 159 154 if (!(high & MASK_CNTP_HI) || 160 155 (high & MASK_LOCKED_HI))
+2 -1
arch/x86/kernel/cpu/mcheck/therm_throt.c
··· 216 216 err = sysfs_add_file_to_group(&sys_dev->kobj, 217 217 &attr_core_power_limit_count.attr, 218 218 thermal_attr_group.name); 219 - if (cpu_has(c, X86_FEATURE_PTS)) 219 + if (cpu_has(c, X86_FEATURE_PTS)) { 220 220 err = sysfs_add_file_to_group(&sys_dev->kobj, 221 221 &attr_package_throttle_count.attr, 222 222 thermal_attr_group.name); ··· 224 224 err = sysfs_add_file_to_group(&sys_dev->kobj, 225 225 &attr_package_power_limit_count.attr, 226 226 thermal_attr_group.name); 227 + } 227 228 228 229 return err; 229 230 }
+1 -1
arch/x86/kvm/svm.c
··· 766 766 767 767 control->iopm_base_pa = iopm_base; 768 768 control->msrpm_base_pa = __pa(svm->msrpm); 769 - control->tsc_offset = 0; 770 769 control->int_ctl = V_INTR_MASKING_MASK; 771 770 772 771 init_seg(&save->es); ··· 901 902 svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT; 902 903 svm->asid_generation = 0; 903 904 init_vmcb(svm); 905 + svm->vmcb->control.tsc_offset = 0-native_read_tsc(); 904 906 905 907 err = fx_init(&svm->vcpu); 906 908 if (err)
+5 -3
arch/x86/mm/srat_64.c
··· 420 420 return -1; 421 421 } 422 422 423 - for_each_node_mask(i, nodes_parsed) 424 - e820_register_active_regions(i, nodes[i].start >> PAGE_SHIFT, 425 - nodes[i].end >> PAGE_SHIFT); 423 + for (i = 0; i < num_node_memblks; i++) 424 + e820_register_active_regions(memblk_nodeid[i], 425 + node_memblk_range[i].start >> PAGE_SHIFT, 426 + node_memblk_range[i].end >> PAGE_SHIFT); 427 + 426 428 /* for out of order entries in SRAT */ 427 429 sort_node_map(); 428 430 if (!nodes_cover_memory(nodes)) {
+8 -4
block/elevator.c
··· 938 938 } 939 939 } 940 940 kobject_uevent(&e->kobj, KOBJ_ADD); 941 + e->registered = 1; 941 942 } 942 943 return error; 943 944 } ··· 948 947 { 949 948 kobject_uevent(&e->kobj, KOBJ_REMOVE); 950 949 kobject_del(&e->kobj); 950 + e->registered = 0; 951 951 } 952 952 953 953 void elv_unregister_queue(struct request_queue *q) ··· 1044 1042 1045 1043 spin_unlock_irq(q->queue_lock); 1046 1044 1047 - __elv_unregister_queue(old_elevator); 1045 + if (old_elevator->registered) { 1046 + __elv_unregister_queue(old_elevator); 1048 1047 1049 - err = elv_register_queue(q); 1050 - if (err) 1051 - goto fail_register; 1048 + err = elv_register_queue(q); 1049 + if (err) 1050 + goto fail_register; 1051 + } 1052 1052 1053 1053 /* 1054 1054 * finally exit old elevator and turn off BYPASS.
+17
drivers/acpi/blacklist.c
··· 204 204 }, 205 205 }, 206 206 { 207 + /* 208 + * The MSI GX723's DSDT contains an NVIF method that needs to be 209 + * called by the Nvidia driver (e.g. nouveau) when the user presses 210 + * the brightness hotkey. Nouveau does not do this yet, so pressing 211 + * the hotkey currently triggers an infinite while loop in the DSDT. 212 + * Add the MSI GX723's DMI information to this table to work around 213 + * the issue. 214 + * Remove MSI GX723 from the table once nouveau grows support. 215 + */ 216 + .callback = dmi_disable_osi_vista, 217 + .ident = "MSI GX723", 218 + .matches = { 219 + DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star International"), 220 + DMI_MATCH(DMI_PRODUCT_NAME, "GX723"), 221 + }, 222 + }, 223 + { 207 224 .callback = dmi_disable_osi_vista, 208 225 .ident = "Sony VGN-NS10J_S", 209 226 .matches = {
+1
drivers/acpi/processor_core.c
··· 346 346 acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT, 347 347 ACPI_UINT32_MAX, 348 348 early_init_pdc, NULL, NULL, NULL); 349 + acpi_get_devices("ACPI0007", early_init_pdc, NULL, NULL); 349 350 }
-6
drivers/atm/iphase.c
··· 3156 3156 { 3157 3157 struct atm_dev *dev; 3158 3158 IADEV *iadev; 3159 - unsigned long flags; 3160 3159 int ret; 3161 3160 3162 3161 iadev = kzalloc(sizeof(*iadev), GFP_KERNEL); ··· 3187 3188 ia_dev[iadev_count] = iadev; 3188 3189 _ia_dev[iadev_count] = dev; 3189 3190 iadev_count++; 3190 - spin_lock_init(&iadev->misc_lock); 3191 - /* First fixes first. I don't want to think about this now. */ 3192 - spin_lock_irqsave(&iadev->misc_lock, flags); 3193 3191 if (ia_init(dev) || ia_start(dev)) { 3194 3192 IF_INIT(printk("IA register failed!\n");) 3195 3193 iadev_count--; 3196 3194 ia_dev[iadev_count] = NULL; 3197 3195 _ia_dev[iadev_count] = NULL; 3198 - spin_unlock_irqrestore(&iadev->misc_lock, flags); 3199 3196 ret = -EINVAL; 3200 3197 goto err_out_deregister_dev; 3201 3198 } 3202 - spin_unlock_irqrestore(&iadev->misc_lock, flags); 3203 3199 IF_EVENT(printk("iadev_count = %d\n", iadev_count);) 3204 3200 3205 3201 iadev->next_board = ia_boards;
+1 -1
drivers/atm/iphase.h
··· 1022 1022 struct dle_q rx_dle_q; 1023 1023 struct free_desc_q *rx_free_desc_qhead; 1024 1024 struct sk_buff_head rx_dma_q; 1025 - spinlock_t rx_lock, misc_lock; 1025 + spinlock_t rx_lock; 1026 1026 struct atm_vcc **rx_open; /* list of all open VCs */ 1027 1027 u16 num_rx_desc, rx_buf_sz, rxing; 1028 1028 u32 rx_pkt_ram, rx_tmp_cnt;
+5 -3
drivers/atm/solos-pci.c
··· 444 444 struct atm_dev *atmdev = container_of(dev, struct atm_dev, class_dev); 445 445 struct solos_card *card = atmdev->dev_data; 446 446 struct sk_buff *skb; 447 + unsigned int len; 447 448 448 449 spin_lock(&card->cli_queue_lock); 449 450 skb = skb_dequeue(&card->cli_queue[SOLOS_CHAN(atmdev)]); ··· 452 451 if(skb == NULL) 453 452 return sprintf(buf, "No data.\n"); 454 453 455 - memcpy(buf, skb->data, skb->len); 456 - dev_dbg(&card->dev->dev, "len: %d\n", skb->len); 454 + len = skb->len; 455 + memcpy(buf, skb->data, len); 456 + dev_dbg(&card->dev->dev, "len: %d\n", len); 457 457 458 458 kfree_skb(skb); 459 - return skb->len; 459 + return len; 460 460 } 461 461 462 462 static int send_command(struct solos_card *card, int dev, const char *buf, size_t size)
+1 -1
drivers/block/ps3disk.c
··· 113 113 memcpy(buf, dev->bounce_buf+offset, size); 114 114 offset += size; 115 115 flush_kernel_dcache_page(bvec->bv_page); 116 - bvec_kunmap_irq(bvec, &flags); 116 + bvec_kunmap_irq(buf, &flags); 117 117 i++; 118 118 } 119 119 }
+5 -1
drivers/block/virtio_blk.c
··· 202 202 struct virtio_blk *vblk = disk->private_data; 203 203 struct request *req; 204 204 struct bio *bio; 205 + int err; 205 206 206 207 bio = bio_map_kern(vblk->disk->queue, id_str, VIRTIO_BLK_ID_BYTES, 207 208 GFP_KERNEL); ··· 216 215 } 217 216 218 217 req->cmd_type = REQ_TYPE_SPECIAL; 219 - return blk_execute_rq(vblk->disk->queue, vblk->disk, req, false); 218 + err = blk_execute_rq(vblk->disk->queue, vblk->disk, req, false); 219 + blk_put_request(req); 220 + 221 + return err; 220 222 } 221 223 222 224 static int virtblk_locked_ioctl(struct block_device *bdev, fmode_t mode,
+1 -1
drivers/dma/ioat/dma_v2.c
··· 879 879 dma->device_issue_pending = ioat2_issue_pending; 880 880 dma->device_alloc_chan_resources = ioat2_alloc_chan_resources; 881 881 dma->device_free_chan_resources = ioat2_free_chan_resources; 882 - dma->device_tx_status = ioat_tx_status; 882 + dma->device_tx_status = ioat_dma_tx_status; 883 883 884 884 err = ioat_probe(device); 885 885 if (err)
+3
drivers/gpu/drm/i915/i915_dma.c
··· 2231 2231 dev_priv->mchdev_lock = &mchdev_lock; 2232 2232 spin_unlock(&mchdev_lock); 2233 2233 2234 + /* XXX Prevent module unload due to memory corruption bugs. */ 2235 + __module_get(THIS_MODULE); 2236 + 2234 2237 return 0; 2235 2238 2236 2239 out_workqueue_free:
+1 -1
drivers/gpu/drm/i915/intel_fb.c
··· 238 238 239 239 drm_framebuffer_cleanup(&ifb->base); 240 240 if (ifb->obj) { 241 - drm_gem_object_handle_unreference(ifb->obj); 242 241 drm_gem_object_unreference(ifb->obj); 242 + ifb->obj = NULL; 243 243 } 244 244 245 245 return 0;
-1
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 352 352 353 353 if (nouveau_fb->nvbo) { 354 354 nouveau_bo_unmap(nouveau_fb->nvbo); 355 - drm_gem_object_handle_unreference_unlocked(nouveau_fb->nvbo->gem); 356 355 drm_gem_object_unreference_unlocked(nouveau_fb->nvbo->gem); 357 356 nouveau_fb->nvbo = NULL; 358 357 }
-1
drivers/gpu/drm/nouveau/nouveau_notifier.c
··· 79 79 mutex_lock(&dev->struct_mutex); 80 80 nouveau_bo_unpin(chan->notifier_bo); 81 81 mutex_unlock(&dev->struct_mutex); 82 - drm_gem_object_handle_unreference_unlocked(chan->notifier_bo->gem); 83 82 drm_gem_object_unreference_unlocked(chan->notifier_bo->gem); 84 83 drm_mm_takedown(&chan->notifier_heap); 85 84 }
+3 -2
drivers/gpu/drm/radeon/evergreen.c
··· 1137 1137 1138 1138 WREG32(RCU_IND_INDEX, 0x203); 1139 1139 efuse_straps_3 = RREG32(RCU_IND_DATA); 1140 - efuse_box_bit_127_124 = (u8)(efuse_straps_3 & 0xF0000000) >> 28; 1140 + efuse_box_bit_127_124 = (u8)((efuse_straps_3 & 0xF0000000) >> 28); 1141 1141 1142 1142 switch(efuse_box_bit_127_124) { 1143 1143 case 0x0: ··· 1407 1407 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024; 1408 1408 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024; 1409 1409 rdev->mc.visible_vram_size = rdev->mc.aper_size; 1410 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1410 1411 r600_vram_gtt_location(rdev, &rdev->mc); 1411 1412 radeon_update_bandwidth_info(rdev); 1412 1413 ··· 1521 1520 { 1522 1521 u32 tmp; 1523 1522 1524 - WREG32(CP_INT_CNTL, 0); 1523 + WREG32(CP_INT_CNTL, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE); 1525 1524 WREG32(GRBM_INT_CNTL, 0); 1526 1525 WREG32(INT_MASK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0); 1527 1526 WREG32(INT_MASK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
+3
drivers/gpu/drm/radeon/r100.c
··· 1030 1030 return r; 1031 1031 } 1032 1032 rdev->cp.ready = true; 1033 + rdev->mc.active_vram_size = rdev->mc.real_vram_size; 1033 1034 return 0; 1034 1035 } 1035 1036 ··· 1048 1047 void r100_cp_disable(struct radeon_device *rdev) 1049 1048 { 1050 1049 /* Disable ring */ 1050 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1051 1051 rdev->cp.ready = false; 1052 1052 WREG32(RADEON_CP_CSQ_MODE, 0); 1053 1053 WREG32(RADEON_CP_CSQ_CNTL, 0); ··· 2297 2295 /* FIXME we don't use the second aperture yet when we could use it */ 2298 2296 if (rdev->mc.visible_vram_size > rdev->mc.aper_size) 2299 2297 rdev->mc.visible_vram_size = rdev->mc.aper_size; 2298 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 2300 2299 config_aper_size = RREG32(RADEON_CONFIG_APER_SIZE); 2301 2300 if (rdev->flags & RADEON_IS_IGP) { 2302 2301 uint32_t tom;
+3 -1
drivers/gpu/drm/radeon/r600.c
··· 1248 1248 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE); 1249 1249 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE); 1250 1250 rdev->mc.visible_vram_size = rdev->mc.aper_size; 1251 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1251 1252 r600_vram_gtt_location(rdev, &rdev->mc); 1252 1253 1253 1254 if (rdev->flags & RADEON_IS_IGP) { ··· 1918 1917 */ 1919 1918 void r600_cp_stop(struct radeon_device *rdev) 1920 1919 { 1920 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1921 1921 WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1)); 1922 1922 } 1923 1923 ··· 2912 2910 { 2913 2911 u32 tmp; 2914 2912 2915 - WREG32(CP_INT_CNTL, 0); 2913 + WREG32(CP_INT_CNTL, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE); 2916 2914 WREG32(GRBM_INT_CNTL, 0); 2917 2915 WREG32(DxMODE_INT_MASK, 0); 2918 2916 if (ASIC_IS_DCE3(rdev)) {
+2
drivers/gpu/drm/radeon/r600_blit_kms.c
··· 532 532 memcpy(ptr + rdev->r600_blit.ps_offset, r6xx_ps, r6xx_ps_size * 4); 533 533 radeon_bo_kunmap(rdev->r600_blit.shader_obj); 534 534 radeon_bo_unreserve(rdev->r600_blit.shader_obj); 535 + rdev->mc.active_vram_size = rdev->mc.real_vram_size; 535 536 return 0; 536 537 } 537 538 ··· 540 539 { 541 540 int r; 542 541 542 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 543 543 if (rdev->r600_blit.shader_obj == NULL) 544 544 return; 545 545 /* If we can't reserve the bo, unref should be enough to destroy
+1
drivers/gpu/drm/radeon/radeon.h
··· 344 344 * about vram size near mc fb location */ 345 345 u64 mc_vram_size; 346 346 u64 visible_vram_size; 347 + u64 active_vram_size; 347 348 u64 gtt_size; 348 349 u64 gtt_start; 349 350 u64 gtt_end;
+9 -9
drivers/gpu/drm/radeon/radeon_atombios.c
··· 1558 1558 switch (tv_info->ucTV_BootUpDefaultStandard) { 1559 1559 case ATOM_TV_NTSC: 1560 1560 tv_std = TV_STD_NTSC; 1561 - DRM_INFO("Default TV standard: NTSC\n"); 1561 + DRM_DEBUG_KMS("Default TV standard: NTSC\n"); 1562 1562 break; 1563 1563 case ATOM_TV_NTSCJ: 1564 1564 tv_std = TV_STD_NTSC_J; 1565 - DRM_INFO("Default TV standard: NTSC-J\n"); 1565 + DRM_DEBUG_KMS("Default TV standard: NTSC-J\n"); 1566 1566 break; 1567 1567 case ATOM_TV_PAL: 1568 1568 tv_std = TV_STD_PAL; 1569 - DRM_INFO("Default TV standard: PAL\n"); 1569 + DRM_DEBUG_KMS("Default TV standard: PAL\n"); 1570 1570 break; 1571 1571 case ATOM_TV_PALM: 1572 1572 tv_std = TV_STD_PAL_M; 1573 - DRM_INFO("Default TV standard: PAL-M\n"); 1573 + DRM_DEBUG_KMS("Default TV standard: PAL-M\n"); 1574 1574 break; 1575 1575 case ATOM_TV_PALN: 1576 1576 tv_std = TV_STD_PAL_N; 1577 - DRM_INFO("Default TV standard: PAL-N\n"); 1577 + DRM_DEBUG_KMS("Default TV standard: PAL-N\n"); 1578 1578 break; 1579 1579 case ATOM_TV_PALCN: 1580 1580 tv_std = TV_STD_PAL_CN; 1581 - DRM_INFO("Default TV standard: PAL-CN\n"); 1581 + DRM_DEBUG_KMS("Default TV standard: PAL-CN\n"); 1582 1582 break; 1583 1583 case ATOM_TV_PAL60: 1584 1584 tv_std = TV_STD_PAL_60; 1585 - DRM_INFO("Default TV standard: PAL-60\n"); 1585 + DRM_DEBUG_KMS("Default TV standard: PAL-60\n"); 1586 1586 break; 1587 1587 case ATOM_TV_SECAM: 1588 1588 tv_std = TV_STD_SECAM; 1589 - DRM_INFO("Default TV standard: SECAM\n"); 1589 + DRM_DEBUG_KMS("Default TV standard: SECAM\n"); 1590 1590 break; 1591 1591 default: 1592 1592 tv_std = TV_STD_NTSC; 1593 - DRM_INFO("Unknown TV standard; defaulting to NTSC\n"); 1593 + DRM_DEBUG_KMS("Unknown TV standard; defaulting to NTSC\n"); 1594 1594 break; 1595 1595 } 1596 1596 }
+13 -13
drivers/gpu/drm/radeon/radeon_combios.c
··· 913 913 switch (RBIOS8(tv_info + 7) & 0xf) { 914 914 case 1: 915 915 tv_std = TV_STD_NTSC; 916 - DRM_INFO("Default TV standard: NTSC\n"); 916 + DRM_DEBUG_KMS("Default TV standard: NTSC\n"); 917 917 break; 918 918 case 2: 919 919 tv_std = TV_STD_PAL; 920 - DRM_INFO("Default TV standard: PAL\n"); 920 + DRM_DEBUG_KMS("Default TV standard: PAL\n"); 921 921 break; 922 922 case 3: 923 923 tv_std = TV_STD_PAL_M; 924 - DRM_INFO("Default TV standard: PAL-M\n"); 924 + DRM_DEBUG_KMS("Default TV standard: PAL-M\n"); 925 925 break; 926 926 case 4: 927 927 tv_std = TV_STD_PAL_60; 928 - DRM_INFO("Default TV standard: PAL-60\n"); 928 + DRM_DEBUG_KMS("Default TV standard: PAL-60\n"); 929 929 break; 930 930 case 5: 931 931 tv_std = TV_STD_NTSC_J; 932 - DRM_INFO("Default TV standard: NTSC-J\n"); 932 + DRM_DEBUG_KMS("Default TV standard: NTSC-J\n"); 933 933 break; 934 934 case 6: 935 935 tv_std = TV_STD_SCART_PAL; 936 - DRM_INFO("Default TV standard: SCART-PAL\n"); 936 + DRM_DEBUG_KMS("Default TV standard: SCART-PAL\n"); 937 937 break; 938 938 default: 939 939 tv_std = TV_STD_NTSC; 940 - DRM_INFO 940 + DRM_DEBUG_KMS 941 941 ("Unknown TV standard; defaulting to NTSC\n"); 942 942 break; 943 943 } 944 944 945 945 switch ((RBIOS8(tv_info + 9) >> 2) & 0x3) { 946 946 case 0: 947 - DRM_INFO("29.498928713 MHz TV ref clk\n"); 947 + DRM_DEBUG_KMS("29.498928713 MHz TV ref clk\n"); 948 948 break; 949 949 case 1: 950 - DRM_INFO("28.636360000 MHz TV ref clk\n"); 950 + DRM_DEBUG_KMS("28.636360000 MHz TV ref clk\n"); 951 951 break; 952 952 case 2: 953 - DRM_INFO("14.318180000 MHz TV ref clk\n"); 953 + DRM_DEBUG_KMS("14.318180000 MHz TV ref clk\n"); 954 954 break; 955 955 case 3: 956 - DRM_INFO("27.000000000 MHz TV ref clk\n"); 956 + DRM_DEBUG_KMS("27.000000000 MHz TV ref clk\n"); 957 957 break; 958 958 default: 959 959 break; ··· 1324 1324 1325 1325 if (tmds_info) { 1326 1326 ver = RBIOS8(tmds_info); 1327 - DRM_INFO("DFP table revision: %d\n", ver); 1327 + DRM_DEBUG_KMS("DFP table revision: %d\n", ver); 1328 1328 if (ver == 3) { 1329 1329 n = RBIOS8(tmds_info + 5) + 1; 1330 1330 if (n > 4) ··· 1408 1408 offset = combios_get_table_offset(dev, COMBIOS_EXT_TMDS_INFO_TABLE); 1409 1409 if (offset) { 1410 1410 ver = RBIOS8(offset); 1411 - DRM_INFO("External TMDS Table revision: %d\n", ver); 1411 + DRM_DEBUG_KMS("External TMDS Table revision: %d\n", ver); 1412 1412 tmds->slave_addr = RBIOS8(offset + 4 + 2); 1413 1413 tmds->slave_addr >>= 1; /* 7 bit addressing */ 1414 1414 gpio = RBIOS8(offset + 4 + 3);
-1
drivers/gpu/drm/radeon/radeon_fb.c
··· 97 97 radeon_bo_unpin(rbo); 98 98 radeon_bo_unreserve(rbo); 99 99 } 100 - drm_gem_object_handle_unreference(gobj); 101 100 drm_gem_object_unreference_unlocked(gobj); 102 101 } 103 102
+1 -1
drivers/gpu/drm/radeon/radeon_object.c
··· 69 69 u32 c = 0; 70 70 71 71 rbo->placement.fpfn = 0; 72 - rbo->placement.lpfn = 0; 72 + rbo->placement.lpfn = rbo->rdev->mc.active_vram_size >> PAGE_SHIFT; 73 73 rbo->placement.placement = rbo->placements; 74 74 rbo->placement.busy_placement = rbo->placements; 75 75 if (domain & RADEON_GEM_DOMAIN_VRAM)
+1 -4
drivers/gpu/drm/radeon/radeon_object.h
··· 124 124 int r; 125 125 126 126 r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, 0); 127 - if (unlikely(r != 0)) { 128 - if (r != -ERESTARTSYS) 129 - dev_err(bo->rdev->dev, "%p reserve failed for wait\n", bo); 127 + if (unlikely(r != 0)) 130 128 return r; 131 - } 132 129 spin_lock(&bo->tbo.lock); 133 130 if (mem_type) 134 131 *mem_type = bo->tbo.mem.mem_type;
+1
drivers/gpu/drm/radeon/rs600.c
··· 693 693 rdev->mc.real_vram_size = RREG32(RADEON_CONFIG_MEMSIZE); 694 694 rdev->mc.mc_vram_size = rdev->mc.real_vram_size; 695 695 rdev->mc.visible_vram_size = rdev->mc.aper_size; 696 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 696 697 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 697 698 base = RREG32_MC(R_000004_MC_FB_LOCATION); 698 699 base = G_000004_MC_FB_START(base) << 16;
+1
drivers/gpu/drm/radeon/rs690.c
··· 157 157 rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0); 158 158 rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0); 159 159 rdev->mc.visible_vram_size = rdev->mc.aper_size; 160 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 160 161 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 161 162 base = G_000100_MC_FB_START(base) << 16; 162 163 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
+2
drivers/gpu/drm/radeon/rv770.c
··· 267 267 */ 268 268 void r700_cp_stop(struct radeon_device *rdev) 269 269 { 270 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 270 271 WREG32(CP_ME_CNTL, (CP_ME_HALT | CP_PFP_HALT)); 271 272 } 272 273 ··· 993 992 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE); 994 993 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE); 995 994 rdev->mc.visible_vram_size = rdev->mc.aper_size; 995 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 996 996 r600_vram_gtt_location(rdev, &rdev->mc); 997 997 radeon_update_bandwidth_info(rdev); 998 998
+71 -12
drivers/gpu/drm/ttm/ttm_bo.c
··· 442 442 } 443 443 444 444 /** 445 + * Call with bo::reserved and the lru lock held. 446 + * Will release GPU memory type usage on destruction. 447 + * This is the place to put in driver specific hooks. 448 + * Will release the bo::reserved lock and the 449 + * lru lock on exit. 450 + */ 451 + 452 + static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo) 453 + { 454 + struct ttm_bo_global *glob = bo->glob; 455 + 456 + if (bo->ttm) { 457 + 458 + /** 459 + * Release the lru_lock, since we don't want to have 460 + * an atomic requirement on ttm_tt[unbind|destroy]. 461 + */ 462 + 463 + spin_unlock(&glob->lru_lock); 464 + ttm_tt_unbind(bo->ttm); 465 + ttm_tt_destroy(bo->ttm); 466 + bo->ttm = NULL; 467 + spin_lock(&glob->lru_lock); 468 + } 469 + 470 + if (bo->mem.mm_node) { 471 + drm_mm_put_block(bo->mem.mm_node); 472 + bo->mem.mm_node = NULL; 473 + } 474 + 475 + atomic_set(&bo->reserved, 0); 476 + wake_up_all(&bo->event_queue); 477 + spin_unlock(&glob->lru_lock); 478 + } 479 + 480 + 481 + /** 445 482 * If bo idle, remove from delayed- and lru lists, and unref. 446 483 * If not idle, and already on delayed list, do nothing. 447 484 * If not idle, and not on delayed list, put on delayed list, ··· 493 456 int ret; 494 457 495 458 spin_lock(&bo->lock); 459 + retry: 496 460 (void) ttm_bo_wait(bo, false, false, !remove_all); 497 461 498 462 if (!bo->sync_obj) { ··· 502 464 spin_unlock(&bo->lock); 503 465 504 466 spin_lock(&glob->lru_lock); 505 - put_count = ttm_bo_del_from_lru(bo); 467 + ret = ttm_bo_reserve_locked(bo, false, !remove_all, false, 0); 506 468 507 - ret = ttm_bo_reserve_locked(bo, false, false, false, 0); 508 - BUG_ON(ret); 509 - if (bo->ttm) 510 - ttm_tt_unbind(bo->ttm); 469 + /** 470 + * Someone else has the object reserved. Bail and retry. 471 + */ 472 + 473 + if (unlikely(ret == -EBUSY)) { 474 + spin_unlock(&glob->lru_lock); 475 + spin_lock(&bo->lock); 476 + goto requeue; 477 + } 478 + 479 + /** 480 + * We can re-check for sync object without taking 481 + * the bo::lock since setting the sync object requires 482 + * also bo::reserved. A busy object at this point may 483 + * be caused by another thread starting an accelerated 484 + * eviction. 485 + */ 486 + 487 + if (unlikely(bo->sync_obj)) { 488 + atomic_set(&bo->reserved, 0); 489 + wake_up_all(&bo->event_queue); 490 + spin_unlock(&glob->lru_lock); 491 + spin_lock(&bo->lock); 492 + if (remove_all) 493 + goto retry; 494 + else 495 + goto requeue; 496 + } 497 + 498 + put_count = ttm_bo_del_from_lru(bo); 511 499 512 500 if (!list_empty(&bo->ddestroy)) { 513 501 list_del_init(&bo->ddestroy); 514 502 ++put_count; 515 503 } 516 - if (bo->mem.mm_node) { 517 - drm_mm_put_block(bo->mem.mm_node); 518 - bo->mem.mm_node = NULL; 519 - } 520 - spin_unlock(&glob->lru_lock); 521 504 522 - atomic_set(&bo->reserved, 0); 505 + ttm_bo_cleanup_memtype_use(bo); 523 506 524 507 while (put_count--) 525 508 kref_put(&bo->list_kref, ttm_bo_ref_bug); 526 509 527 510 return 0; 528 511 } 529 - 512 + requeue: 530 513 spin_lock(&glob->lru_lock); 531 514 if (list_empty(&bo->ddestroy)) { 532 515 void *sync_obj = bo->sync_obj;
+2
drivers/hid/hid-cando.c
··· 237 237 USB_DEVICE_ID_CANDO_MULTI_TOUCH) }, 238 238 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, 239 239 USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6) }, 240 + { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, 241 + USB_DEVICE_ID_CANDO_MULTI_TOUCH_15_6) }, 240 242 { } 241 243 }; 242 244 MODULE_DEVICE_TABLE(hid, cando_devices);
+1
drivers/hid/hid-core.c
··· 1292 1292 { HID_USB_DEVICE(USB_VENDOR_ID_BTC, USB_DEVICE_ID_BTC_EMPREX_REMOTE_2) }, 1293 1293 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH) }, 1294 1294 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6) }, 1295 + { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH_15_6) }, 1295 1296 { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION) }, 1296 1297 { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR) }, 1297 1298 { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_TACTICAL_PAD) },
+2
drivers/hid/hid-ids.h
··· 134 134 #define USB_VENDOR_ID_CANDO 0x2087 135 135 #define USB_DEVICE_ID_CANDO_MULTI_TOUCH 0x0a01 136 136 #define USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6 0x0b03 137 + #define USB_DEVICE_ID_CANDO_MULTI_TOUCH_15_6 0x0f01 137 138 138 139 #define USB_VENDOR_ID_CH 0x068e 139 140 #define USB_DEVICE_ID_CH_PRO_PEDALS 0x00f2 ··· 504 503 505 504 #define USB_VENDOR_ID_TURBOX 0x062a 506 505 #define USB_DEVICE_ID_TURBOX_KEYBOARD 0x0201 506 + #define USB_DEVICE_ID_TURBOX_TOUCHSCREEN_MOSART 0x7100 507 507 508 508 #define USB_VENDOR_ID_TWINHAN 0x6253 509 509 #define USB_DEVICE_ID_TWINHAN_IR_REMOTE 0x0100
+11
drivers/hid/hidraw.c
··· 109 109 int ret = 0; 110 110 111 111 mutex_lock(&minors_lock); 112 + 113 + if (!hidraw_table[minor]) { 114 + ret = -ENODEV; 115 + goto out; 116 + } 117 + 112 118 dev = hidraw_table[minor]->hid; 113 119 114 120 if (!dev->hid_output_raw_report) { ··· 250 244 251 245 mutex_lock(&minors_lock); 252 246 dev = hidraw_table[minor]; 247 + if (!dev) { 248 + ret = -ENODEV; 249 + goto out; 250 + } 253 251 254 252 switch (cmd) { 255 253 case HIDIOCGRDESCSIZE: ··· 327 317 328 318 ret = -ENOTTY; 329 319 } 320 + out: 330 321 mutex_unlock(&minors_lock); 331 322 return ret; 332 323 }
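The hidraw hunks above add the same guard to both entry points: the `hidraw_table[]` slot for a minor can become NULL when the device is unplugged between open() and a later write or ioctl, so it must be validated before being dereferenced. A toy user-space sketch of that validate-before-use pattern (the table, `lookup_minor`, and the hard-coded error value are illustrative stand-ins, not the kernel API):

```c
#include <stddef.h>

#define NMINORS   4
#define ERR_NODEV (-19) /* illustrative stand-in for -ENODEV */

/* Table slot is NULL until a device is bound, and reset to NULL on
 * disconnect; every lookup must check the slot before using it. */
static void *minor_table[NMINORS];

int lookup_minor(int minor, void **dev_out)
{
	if (minor < 0 || minor >= NMINORS || !minor_table[minor])
		return ERR_NODEV; /* device went away: bail out */
	*dev_out = minor_table[minor];
	return 0;
}
```

The point of the fix is simply that the check happens under the same lock (`minors_lock` in the real driver) that disconnect takes when clearing the slot.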
+1
drivers/hid/usbhid/hid-quirks.c
··· 36 36 { USB_VENDOR_ID_DWAV, USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER, HID_QUIRK_MULTI_INPUT | HID_QUIRK_NOGET }, 37 37 { USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH, HID_QUIRK_MULTI_INPUT }, 38 38 { USB_VENDOR_ID_MOJO, USB_DEVICE_ID_RETRO_ADAPTER, HID_QUIRK_MULTI_INPUT }, 39 + { USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_TOUCHSCREEN_MOSART, HID_QUIRK_MULTI_INPUT }, 39 40 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_DRIVING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 40 41 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FLYING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 41 42 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FIGHTING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
+5
drivers/i2c/busses/i2c-cpm.c
··· 677 677 dev_dbg(&ofdev->dev, "hw routines for %s registered.\n", 678 678 cpm->adap.name); 679 679 680 + /* 681 + * register OF I2C devices 682 + */ 683 + of_i2c_register_devices(&cpm->adap); 684 + 680 685 return 0; 681 686 out_shut: 682 687 cpm_i2c_shutdown(cpm);
+3
drivers/i2c/busses/i2c-ibm_iic.c
··· 761 761 dev_info(&ofdev->dev, "using %s mode\n", 762 762 dev->fast_mode ? "fast (400 kHz)" : "standard (100 kHz)"); 763 763 764 + /* Now register all the child nodes */ 765 + of_i2c_register_devices(adap); 766 + 764 767 return 0; 765 768 766 769 error_cleanup:
+1
drivers/i2c/busses/i2c-mpc.c
··· 632 632 dev_err(i2c->dev, "failed to add adapter\n"); 633 633 goto fail_add; 634 634 } 635 + of_i2c_register_devices(&i2c->adap); 635 636 636 637 return result; 637 638
+8 -4
drivers/i2c/busses/i2c-pca-isa.c
··· 71 71 72 72 static int pca_isa_waitforcompletion(void *pd) 73 73 { 74 - long ret = ~0; 75 74 unsigned long timeout; 75 + long ret; 76 76 77 77 if (irq > -1) { 78 78 ret = wait_event_timeout(pca_wait, ··· 81 81 } else { 82 82 /* Do polling */ 83 83 timeout = jiffies + pca_isa_ops.timeout; 84 - while (((pca_isa_readbyte(pd, I2C_PCA_CON) 85 - & I2C_PCA_CON_SI) == 0) 86 - && (ret = time_before(jiffies, timeout))) 84 + do { 85 + ret = time_before(jiffies, timeout); 86 + if (pca_isa_readbyte(pd, I2C_PCA_CON) 87 + & I2C_PCA_CON_SI) 88 + break; 87 89 udelay(100); 90 + } while (ret); 88 91 } 92 + 89 93 return ret > 0; 90 94 } 91 95
+7 -4
drivers/i2c/busses/i2c-pca-platform.c
··· 80 80 static int i2c_pca_pf_waitforcompletion(void *pd) 81 81 { 82 82 struct i2c_pca_pf_data *i2c = pd; 83 - long ret = ~0; 84 83 unsigned long timeout; 84 + long ret; 85 85 86 86 if (i2c->irq) { 87 87 ret = wait_event_timeout(i2c->wait, ··· 90 90 } else { 91 91 /* Do polling */ 92 92 timeout = jiffies + i2c->adap.timeout; 93 - while (((i2c->algo_data.read_byte(i2c, I2C_PCA_CON) 94 - & I2C_PCA_CON_SI) == 0) 95 - && (ret = time_before(jiffies, timeout))) 93 + do { 94 + ret = time_before(jiffies, timeout); 95 + if (i2c->algo_data.read_byte(i2c, I2C_PCA_CON) 96 + & I2C_PCA_CON_SI) 97 + break; 96 98 udelay(100); 99 + } while (ret); 97 100 } 98 101 99 102 return ret > 0;
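Both i2c-pca hunks above replace a while() poll in which the timeout test ran as the loop condition, so a completion flag raised just as the deadline expired could go unread. A minimal user-space sketch of the corrected do/while shape (`poll_for_completion`, `fake_now`, and `flag_raised_at` are illustrative stand-ins for the kernel's jiffies-based loop, not real APIs):

```c
#include <stdbool.h>

static long fake_now;       /* stand-in for jiffies */
static long flag_raised_at; /* tick at which the "SI" bit gets set */

static bool status_set(void)
{
	return fake_now >= flag_raised_at;
}

/* Returns 1 on completion, 0 on timeout, mirroring `ret > 0`. */
int poll_for_completion(long deadline)
{
	long ret;

	do {
		ret = (fake_now < deadline); /* time_before(jiffies, timeout) */
		if (status_set())
			break;               /* completion bit observed */
		fake_now++;                  /* udelay(100) stand-in */
	} while (ret);

	return ret > 0;
}
```

The key property is that the status register is sampled at least once per iteration, after the deadline check rather than inside the loop condition, so the flag is read one final time even on the pass where the deadline expires.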
+24 -30
drivers/i2c/i2c-core.c
··· 32 32 #include <linux/init.h> 33 33 #include <linux/idr.h> 34 34 #include <linux/mutex.h> 35 - #include <linux/of_i2c.h> 36 35 #include <linux/of_device.h> 37 36 #include <linux/completion.h> 38 37 #include <linux/hardirq.h> ··· 196 197 { 197 198 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 198 199 199 - if (pm_runtime_suspended(dev)) 200 - return 0; 201 - 202 - if (pm) 203 - return pm->suspend ? pm->suspend(dev) : 0; 200 + if (pm) { 201 + if (pm_runtime_suspended(dev)) 202 + return 0; 203 + else 204 + return pm->suspend ? pm->suspend(dev) : 0; 205 + } 204 206 205 207 return i2c_legacy_suspend(dev, PMSG_SUSPEND); 206 208 } ··· 216 216 else 217 217 ret = i2c_legacy_resume(dev); 218 218 219 - if (!ret) { 220 - pm_runtime_disable(dev); 221 - pm_runtime_set_active(dev); 222 - pm_runtime_enable(dev); 223 - } 224 - 225 219 return ret; 226 220 } ··· 223 229 { 224 230 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 225 231 226 - if (pm_runtime_suspended(dev)) 227 - return 0; 228 - 229 - if (pm) 230 - return pm->freeze ? pm->freeze(dev) : 0; 232 + if (pm) { 233 + if (pm_runtime_suspended(dev)) 234 + return 0; 235 + else 236 + return pm->freeze ? pm->freeze(dev) : 0; 237 + } 231 238 232 239 return i2c_legacy_suspend(dev, PMSG_FREEZE); 233 240 } ··· 237 242 { 238 243 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 239 244 240 - if (pm_runtime_suspended(dev)) 241 - return 0; 242 - 243 - if (pm) 244 - return pm->thaw ? pm->thaw(dev) : 0; 245 + if (pm) { 246 + if (pm_runtime_suspended(dev)) 247 + return 0; 248 + else 249 + return pm->thaw ? pm->thaw(dev) : 0; 250 + } 245 251 246 252 return i2c_legacy_resume(dev); 247 253 } ··· 251 255 { 252 256 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 253 257 254 - if (pm_runtime_suspended(dev)) 255 - return 0; 256 - 257 - if (pm) 258 - return pm->poweroff ? pm->poweroff(dev) : 0; 258 + if (pm) { 259 + if (pm_runtime_suspended(dev)) 260 + return 0; 261 + else 262 + return pm->poweroff ? pm->poweroff(dev) : 0; 263 + } 259 264 260 265 return i2c_legacy_suspend(dev, PMSG_HIBERNATE); 261 266 } ··· 872 875 /* create pre-declared device nodes */ 873 876 if (adap->nr < __i2c_first_dynamic_bus_num) 874 877 i2c_scan_static_board_info(adap); 875 - 876 - /* Register devices from the device tree */ 877 - of_i2c_register_devices(adap); 878 878 879 879 /* Notify drivers */ 880 880 mutex_lock(&core_lock);
+5 -5
drivers/idle/intel_idle.c
··· 157 157 { /* MWAIT C5 */ }, 158 158 { /* MWAIT C6 */ 159 159 .name = "ATM-C6", 160 - .desc = "MWAIT 0x40", 161 - .driver_data = (void *) 0x40, 160 + .desc = "MWAIT 0x52", 161 + .driver_data = (void *) 0x52, 162 162 .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, 163 - .exit_latency = 200, 163 + .exit_latency = 140, 164 164 .power_usage = 150, 165 - .target_residency = 800, 166 - .enter = NULL }, /* disabled */ 165 + .target_residency = 560, 166 + .enter = &intel_idle }, 167 167 }; 168 168 169 169 /**
+3
drivers/input/joydev.c
··· 483 483 484 484 memcpy(joydev->abspam, abspam, len); 485 485 486 + for (i = 0; i < joydev->nabs; i++) 487 + joydev->absmap[joydev->abspam[i]] = i; 488 + 486 489 out: 487 490 kfree(abspam); 488 491 return retval;
+7
drivers/input/misc/uinput.c
··· 404 404 retval = uinput_validate_absbits(dev); 405 405 if (retval < 0) 406 406 goto exit; 407 + if (test_bit(ABS_MT_SLOT, dev->absbit)) { 408 + int nslot = input_abs_get_max(dev, ABS_MT_SLOT) + 1; 409 + input_mt_create_slots(dev, nslot); 410 + input_set_events_per_packet(dev, 6 * nslot); 411 + } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) { 412 + input_set_events_per_packet(dev, 60); 413 + } 407 414 } 408 415 409 416 udev->state = UIST_SETUP_COMPLETE;
+12 -11
drivers/input/tablet/wacom_sys.c
··· 103 103 static int wacom_open(struct input_dev *dev) 104 104 { 105 105 struct wacom *wacom = input_get_drvdata(dev); 106 + int retval = 0; 107 + 108 + if (usb_autopm_get_interface(wacom->intf) < 0) 109 + return -EIO; 106 110 107 111 mutex_lock(&wacom->lock); 108 112 109 - wacom->irq->dev = wacom->usbdev; 110 - 111 - if (usb_autopm_get_interface(wacom->intf) < 0) { 112 - mutex_unlock(&wacom->lock); 113 - return -EIO; 114 - } 115 - 116 113 if (usb_submit_urb(wacom->irq, GFP_KERNEL)) { 117 - usb_autopm_put_interface(wacom->intf); 118 - mutex_unlock(&wacom->lock); 119 - return -EIO; 114 + retval = -EIO; 115 + goto out; 120 116 } 121 117 122 118 wacom->open = true; 123 119 wacom->intf->needs_remote_wakeup = 1; 124 120 121 + out: 125 122 mutex_unlock(&wacom->lock); 126 - return 0; 123 + if (retval) 124 + usb_autopm_put_interface(wacom->intf); 125 + return retval; 127 126 } 128 127 129 128 static void wacom_close(struct input_dev *dev) ··· 134 135 wacom->open = false; 135 136 wacom->intf->needs_remote_wakeup = 0; 136 137 mutex_unlock(&wacom->lock); 138 + 139 + usb_autopm_put_interface(wacom->intf); 137 140 } 138 141 139 142 static int wacom_parse_hid(struct usb_interface *intf, struct hid_descriptor *hid_desc,
+3 -1
drivers/input/tablet/wacom_wac.c
··· 442 442 /* general pen packet */ 443 443 if ((data[1] & 0xb8) == 0xa0) { 444 444 t = (data[6] << 2) | ((data[7] >> 6) & 3); 445 - if (features->type >= INTUOS4S && features->type <= INTUOS4L) 445 + if ((features->type >= INTUOS4S && features->type <= INTUOS4L) || 446 + features->type == WACOM_21UX2) { 446 447 t = (t << 1) | (data[1] & 1); 448 + } 447 449 input_report_abs(input, ABS_PRESSURE, t); 448 450 input_report_abs(input, ABS_TILT_X, 449 451 ((data[7] << 1) & 0x7e) | (data[8] >> 7));
+14 -4
drivers/isdn/sc/interrupt.c
··· 112 112 } 113 113 else if(callid>=0x0000 && callid<=0x7FFF) 114 114 { 115 + int len; 116 + 115 117 pr_debug("%s: Got Incoming Call\n", 116 118 sc_adapter[card]->devicename); 117 - strcpy(setup.phone,&(rcvmsg.msg_data.byte_array[4])); 118 - strcpy(setup.eazmsn, 119 - sc_adapter[card]->channel[rcvmsg.phy_link_no-1].dn); 119 + len = strlcpy(setup.phone, &(rcvmsg.msg_data.byte_array[4]), 120 + sizeof(setup.phone)); 121 + if (len >= sizeof(setup.phone)) 122 + continue; 123 + len = strlcpy(setup.eazmsn, 124 + sc_adapter[card]->channel[rcvmsg.phy_link_no - 1].dn, 125 + sizeof(setup.eazmsn)); 126 + if (len >= sizeof(setup.eazmsn)) 127 + continue; 120 128 setup.si1 = 7; 121 129 setup.si2 = 0; 122 130 setup.plan = 0; ··· 184 176 * Handle a GetMyNumber Rsp 185 177 */ 186 178 if (IS_CE_MESSAGE(rcvmsg,Call,0,GetMyNumber)){ 187 - strcpy(sc_adapter[card]->channel[rcvmsg.phy_link_no-1].dn,rcvmsg.msg_data.byte_array); 179 + strlcpy(sc_adapter[card]->channel[rcvmsg.phy_link_no - 1].dn, 180 + rcvmsg.msg_data.byte_array, 181 + sizeof(rcvmsg.msg_data.byte_array)); 188 182 continue; 189 183 } 190 184
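The interrupt.c hunk above swaps unbounded strcpy() for strlcpy() and drops any incoming message whose source would not fit the destination. Kernel strlcpy() returns the length of the *source*, so a return value greater than or equal to the buffer size signals truncation. A rough portable sketch of that check (`copy_number` is a made-up name; `snprintf`, which offers the same return-value contract, stands in for strlcpy since portable C lacks it):

```c
#include <stdio.h>
#include <string.h>

/* Copy src into dst, rejecting it outright if it would be truncated,
 * the way the fixed driver `continue`s past an oversized phone number
 * rather than acting on a clipped one. Returns 0 on success, -1 on
 * would-be truncation. */
int copy_number(char *dst, size_t dst_sz, const char *src)
{
	size_t len = (size_t)snprintf(dst, dst_sz, "%s", src);

	if (len >= dst_sz)
		return -1; /* source too long: drop the message */
	return 0;
}
```

Rejecting on `len >= dst_sz` (not `>`) is deliberate: the returned length excludes the NUL terminator, so a source exactly as long as the buffer still truncates.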
+5 -4
drivers/md/bitmap.c
··· 1000 1000 page = bitmap->sb_page; 1001 1001 offset = sizeof(bitmap_super_t); 1002 1002 if (!file) 1003 - read_sb_page(bitmap->mddev, 1004 - bitmap->mddev->bitmap_info.offset, 1005 - page, 1006 - index, count); 1003 + page = read_sb_page( 1004 + bitmap->mddev, 1005 + bitmap->mddev->bitmap_info.offset, 1006 + page, 1007 + index, count); 1007 1008 } else if (file) { 1008 1009 page = read_page(file, index, bitmap, count); 1009 1010 offset = 0;
+3 -1
drivers/md/raid1.c
··· 1839 1839 1840 1840 /* take from bio_init */ 1841 1841 bio->bi_next = NULL; 1842 + bio->bi_flags &= ~(BIO_POOL_MASK-1); 1842 1843 bio->bi_flags |= 1 << BIO_UPTODATE; 1844 + bio->bi_comp_cpu = -1; 1843 1845 bio->bi_rw = READ; 1844 1846 bio->bi_vcnt = 0; 1845 1847 bio->bi_idx = 0; ··· 1914 1912 !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) 1915 1913 break; 1916 1914 BUG_ON(sync_blocks < (PAGE_SIZE>>9)); 1917 - if (len > (sync_blocks<<9)) 1915 + if ((len >> 9) > sync_blocks) 1918 1916 len = sync_blocks<<9; 1919 1917 } 1920 1918
+8 -1
drivers/media/IR/ir-keytable.c

···  319  319   	 * a keyup event might follow immediately after the keydown.
 320  320   	 */
 321  321   	spin_lock_irqsave(&ir->keylock, flags);
 322       -	if (time_is_after_eq_jiffies(ir->keyup_jiffies))
      322  +	if (time_is_before_eq_jiffies(ir->keyup_jiffies))
 323  323   		ir_keyup(ir);
 324  324   	spin_unlock_irqrestore(&ir->keylock, flags);
 325  325   }
···  509  509   		   driver_name, rc_tab->name,
 510  510   		   (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_IR_RAW) ?
 511  511   			" in raw mode" : "");
      512  +
      513  +	/*
      514  +	 * Default delay of 250ms is too short for some protocols, especially
      515  +	 * since the timeout is currently set to 250ms. Increase it to 500ms,
      516  +	 * to avoid wrong repetition of the keycodes.
      517  +	 */
      518  +	input_dev->rep[REP_DELAY] = 500;
 512  519   
 513  520   	return 0;
 514  521   
+1 -1
drivers/media/IR/ir-lirc-codec.c
··· 267 267 features |= LIRC_CAN_SET_SEND_CARRIER; 268 268 269 269 if (ir_dev->props->s_tx_duty_cycle) 270 - features |= LIRC_CAN_SET_REC_DUTY_CYCLE; 270 + features |= LIRC_CAN_SET_SEND_DUTY_CYCLE; 271 271 } 272 272 273 273 if (ir_dev->props->s_rx_carrier_range)
+3 -1
drivers/media/IR/ir-raw-event.c
··· 279 279 "rc%u", (unsigned int)ir->devno); 280 280 281 281 if (IS_ERR(ir->raw->thread)) { 282 + int ret = PTR_ERR(ir->raw->thread); 283 + 282 284 kfree(ir->raw); 283 285 ir->raw = NULL; 284 - return PTR_ERR(ir->raw->thread); 286 + return ret; 285 287 } 286 288 287 289 mutex_lock(&ir_raw_handler_lock);
+11 -6
drivers/media/IR/ir-sysfs.c
··· 67 67 char *tmp = buf; 68 68 int i; 69 69 70 - if (ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 70 + if (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 71 71 enabled = ir_dev->rc_tab.ir_type; 72 72 allowed = ir_dev->props->allowed_protos; 73 - } else { 73 + } else if (ir_dev->raw) { 74 74 enabled = ir_dev->raw->enabled_protocols; 75 75 allowed = ir_raw_get_allowed_protocols(); 76 - } 76 + } else 77 + return sprintf(tmp, "[builtin]\n"); 77 78 78 79 IR_dprintk(1, "allowed - 0x%llx, enabled - 0x%llx\n", 79 80 (long long)allowed, ··· 122 121 int rc, i, count = 0; 123 122 unsigned long flags; 124 123 125 - if (ir_dev->props->driver_type == RC_DRIVER_SCANCODE) 124 + if (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_SCANCODE) 126 125 type = ir_dev->rc_tab.ir_type; 127 - else 126 + else if (ir_dev->raw) 128 127 type = ir_dev->raw->enabled_protocols; 128 + else { 129 + IR_dprintk(1, "Protocol switching not supported\n"); 130 + return -EINVAL; 131 + } 129 132 130 133 while ((tmp = strsep((char **) &data, " \n")) != NULL) { 131 134 if (!*tmp) ··· 190 185 } 191 186 } 192 187 193 - if (ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 188 + if (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 194 189 spin_lock_irqsave(&ir_dev->rc_tab.lock, flags); 195 190 ir_dev->rc_tab.ir_type = type; 196 191 spin_unlock_irqrestore(&ir_dev->rc_tab.lock, flags);
+3
drivers/media/IR/keymaps/rc-rc6-mce.c
··· 19 19 20 20 { 0x800f0416, KEY_PLAY }, 21 21 { 0x800f0418, KEY_PAUSE }, 22 + { 0x800f046e, KEY_PLAYPAUSE }, 22 23 { 0x800f0419, KEY_STOP }, 23 24 { 0x800f0417, KEY_RECORD }, 24 25 ··· 38 37 { 0x800f0411, KEY_VOLUMEDOWN }, 39 38 { 0x800f0412, KEY_CHANNELUP }, 40 39 { 0x800f0413, KEY_CHANNELDOWN }, 40 + { 0x800f043a, KEY_BRIGHTNESSUP }, 41 + { 0x800f0480, KEY_BRIGHTNESSDOWN }, 41 42 42 43 { 0x800f0401, KEY_NUMERIC_1 }, 43 44 { 0x800f0402, KEY_NUMERIC_2 },
+4
drivers/media/IR/mceusb.c
··· 120 120 { USB_DEVICE(VENDOR_PHILIPS, 0x0613) }, 121 121 /* Philips eHome Infrared Transceiver */ 122 122 { USB_DEVICE(VENDOR_PHILIPS, 0x0815) }, 123 + /* Philips/Spinel plus IR transceiver for ASUS */ 124 + { USB_DEVICE(VENDOR_PHILIPS, 0x206c) }, 125 + /* Philips/Spinel plus IR transceiver for ASUS */ 126 + { USB_DEVICE(VENDOR_PHILIPS, 0x2088) }, 123 127 /* Realtek MCE IR Receiver */ 124 128 { USB_DEVICE(VENDOR_REALTEK, 0x0161) }, 125 129 /* SMK/Toshiba G83C0004D410 */
-3
drivers/media/dvb/dvb-usb/dib0700_core.c
··· 673 673 else 674 674 dev->props.rc.core.bulk_mode = false; 675 675 676 - /* Need a higher delay, to avoid wrong repeat */ 677 - dev->rc_input_dev->rep[REP_DELAY] = 500; 678 - 679 676 dib0700_rc_setup(dev); 680 677 681 678 return 0;
+54 -2
drivers/media/dvb/dvb-usb/dib0700_devices.c
···  940  940   	return adap->fe == NULL ? -ENODEV : 0;
 941  941   }
 942  942   
      943  +/* STK7770P */
      944  +static struct dib7000p_config dib7770p_dib7000p_config = {
      945  +	.output_mpeg2_in_188_bytes = 1,
      946  +
      947  +	.agc_config_count = 1,
      948  +	.agc = &dib7070_agc_config,
      949  +	.bw  = &dib7070_bw_config_12_mhz,
      950  +	.tuner_is_baseband = 1,
      951  +	.spur_protect = 1,
      952  +
      953  +	.gpio_dir = DIB7000P_GPIO_DEFAULT_DIRECTIONS,
      954  +	.gpio_val = DIB7000P_GPIO_DEFAULT_VALUES,
      955  +	.gpio_pwm_pos = DIB7000P_GPIO_DEFAULT_PWM_POS,
      956  +
      957  +	.hostbus_diversity = 1,
      958  +	.enable_current_mirror = 1,
      959  +	.disable_sample_and_hold = 0,
      960  +};
      961  +
      962  +static int stk7770p_frontend_attach(struct dvb_usb_adapter *adap)
      963  +{
      964  +	struct usb_device_descriptor *p = &adap->dev->udev->descriptor;
      965  +	if (p->idVendor  == cpu_to_le16(USB_VID_PINNACLE) &&
      966  +	    p->idProduct == cpu_to_le16(USB_PID_PINNACLE_PCTV72E))
      967  +		dib0700_set_gpio(adap->dev, GPIO6, GPIO_OUT, 0);
      968  +	else
      969  +		dib0700_set_gpio(adap->dev, GPIO6, GPIO_OUT, 1);
      970  +	msleep(10);
      971  +	dib0700_set_gpio(adap->dev, GPIO9, GPIO_OUT, 1);
      972  +	dib0700_set_gpio(adap->dev, GPIO4, GPIO_OUT, 1);
      973  +	dib0700_set_gpio(adap->dev, GPIO7, GPIO_OUT, 1);
      974  +	dib0700_set_gpio(adap->dev, GPIO10, GPIO_OUT, 0);
      975  +
      976  +	dib0700_ctrl_clock(adap->dev, 72, 1);
      977  +
      978  +	msleep(10);
      979  +	dib0700_set_gpio(adap->dev, GPIO10, GPIO_OUT, 1);
      980  +	msleep(10);
      981  +	dib0700_set_gpio(adap->dev, GPIO0, GPIO_OUT, 1);
      982  +
      983  +	if (dib7000p_i2c_enumeration(&adap->dev->i2c_adap, 1, 18,
      984  +				     &dib7770p_dib7000p_config) != 0) {
      985  +		err("%s: dib7000p_i2c_enumeration failed.  Cannot continue\n",
      986  +		    __func__);
      987  +		return -ENODEV;
      988  +	}
      989  +
      990  +	adap->fe = dvb_attach(dib7000p_attach, &adap->dev->i2c_adap, 0x80,
      991  +		&dib7770p_dib7000p_config);
      992  +	return adap->fe == NULL ? -ENODEV : 0;
      993  +}
      994  +
 943  995   /* DIB807x generic */
 944  996   static struct dibx000_agc_config dib807x_agc_config[2] = {
 945  997   	{
··· 1833 1781   /* 60 */{ USB_DEVICE(USB_VID_TERRATEC,	USB_PID_TERRATEC_CINERGY_T_XXS_2) },
1834 1782   	{ USB_DEVICE(USB_VID_DIBCOM,    USB_PID_DIBCOM_STK807XPVR) },
1835 1783   	{ USB_DEVICE(USB_VID_DIBCOM,    USB_PID_DIBCOM_STK807XP) },
1836       -	{ USB_DEVICE(USB_VID_PIXELVIEW, USB_PID_PIXELVIEW_SBTVD) },
     1784  +	{ USB_DEVICE_VER(USB_VID_PIXELVIEW, USB_PID_PIXELVIEW_SBTVD, 0x000, 0x3f00) },
1837 1785   	{ USB_DEVICE(USB_VID_EVOLUTEPC, USB_PID_TVWAY_PLUS) },
1838 1786   /* 65 */{ USB_DEVICE(USB_VID_PINNACLE,	USB_PID_PINNACLE_PCTV73ESE) },
1839 1787   	{ USB_DEVICE(USB_VID_PINNACLE,	USB_PID_PINNACLE_PCTV282E) },
··· 2458 2406   			.pid_filter_count = 32,
2459 2407   			.pid_filter = stk70x0p_pid_filter,
2460 2408   			.pid_filter_ctrl = stk70x0p_pid_filter_ctrl,
2461       -			.frontend_attach  = stk7070p_frontend_attach,
     2409  +			.frontend_attach  = stk7770p_frontend_attach,
2462 2410   			.tuner_attach     = dib7770p_tuner_attach,
2463 2411   
2464 2412   			DIB0700_DEFAULT_STREAMING_CONFIG(0x02),
+1 -3
drivers/media/dvb/dvb-usb/opera1.c
··· 483 483 } 484 484 } 485 485 kfree(p); 486 - if (fw) { 487 - release_firmware(fw); 488 - } 486 + release_firmware(fw); 489 487 return ret; 490 488 } 491 489
+7 -1
drivers/media/dvb/frontends/dib7000p.c
··· 260 260 261 261 // dprintk( "908: %x, 909: %x\n", reg_908, reg_909); 262 262 263 + reg_909 |= (state->cfg.disable_sample_and_hold & 1) << 4; 264 + reg_908 |= (state->cfg.enable_current_mirror & 1) << 7; 265 + 263 266 dib7000p_write_word(state, 908, reg_908); 264 267 dib7000p_write_word(state, 909, reg_909); 265 268 } ··· 781 778 default: 782 779 case GUARD_INTERVAL_1_32: value *= 1; break; 783 780 } 784 - state->div_sync_wait = (value * 3) / 2 + 32; // add 50% SFN margin + compensate for one DVSY-fifo TODO 781 + if (state->cfg.diversity_delay == 0) 782 + state->div_sync_wait = (value * 3) / 2 + 48; // add 50% SFN margin + compensate for one DVSY-fifo 783 + else 784 + state->div_sync_wait = (value * 3) / 2 + state->cfg.diversity_delay; // add 50% SFN margin + compensate for one DVSY-fifo 785 785 786 786 /* deactive the possibility of diversity reception if extended interleaver */ 787 787 state->div_force_off = !1 && ch->u.ofdm.transmission_mode != TRANSMISSION_MODE_8K;
+5
drivers/media/dvb/frontends/dib7000p.h
··· 33 33 int (*agc_control) (struct dvb_frontend *, u8 before); 34 34 35 35 u8 output_mode; 36 + u8 disable_sample_and_hold : 1; 37 + 38 + u8 enable_current_mirror : 1; 39 + u8 diversity_delay; 40 + 36 41 }; 37 42 38 43 #define DEFAULT_DIB7000P_I2C_ADDRESS 18
+13 -20
drivers/media/dvb/siano/smscoreapi.c
··· 1098 1098 * 1099 1099 * @return pointer to descriptor on success, NULL on error. 1100 1100 */ 1101 - struct smscore_buffer_t *smscore_getbuffer(struct smscore_device_t *coredev) 1101 + 1102 + struct smscore_buffer_t *get_entry(struct smscore_device_t *coredev) 1102 1103 { 1103 1104 struct smscore_buffer_t *cb = NULL; 1104 1105 unsigned long flags; 1105 1106 1106 - DEFINE_WAIT(wait); 1107 - 1108 1107 spin_lock_irqsave(&coredev->bufferslock, flags); 1109 - 1110 - /* This function must return a valid buffer, since the buffer list is 1111 - * finite, we check that there is an available buffer, if not, we wait 1112 - * until such buffer become available. 1113 - */ 1114 - 1115 - prepare_to_wait(&coredev->buffer_mng_waitq, &wait, TASK_INTERRUPTIBLE); 1116 - if (list_empty(&coredev->buffers)) { 1117 - spin_unlock_irqrestore(&coredev->bufferslock, flags); 1118 - schedule(); 1119 - spin_lock_irqsave(&coredev->bufferslock, flags); 1108 + if (!list_empty(&coredev->buffers)) { 1109 + cb = (struct smscore_buffer_t *) coredev->buffers.next; 1110 + list_del(&cb->entry); 1120 1111 } 1121 - 1122 - finish_wait(&coredev->buffer_mng_waitq, &wait); 1123 - 1124 - cb = (struct smscore_buffer_t *) coredev->buffers.next; 1125 - list_del(&cb->entry); 1126 - 1127 1112 spin_unlock_irqrestore(&coredev->bufferslock, flags); 1113 + return cb; 1114 + } 1115 + 1116 + struct smscore_buffer_t *smscore_getbuffer(struct smscore_device_t *coredev) 1117 + { 1118 + struct smscore_buffer_t *cb = NULL; 1119 + 1120 + wait_event(coredev->buffer_mng_waitq, (cb = get_entry(coredev))); 1128 1121 1129 1122 return cb; 1130 1123 }
+1 -1
drivers/media/radio/si470x/radio-si470x-i2c.c
··· 395 395 radio->registers[POWERCFG] = POWERCFG_ENABLE; 396 396 if (si470x_set_register(radio, POWERCFG) < 0) { 397 397 retval = -EIO; 398 - goto err_all; 398 + goto err_video; 399 399 } 400 400 msleep(110); 401 401
+1
drivers/media/video/cx231xx/Makefile
··· 11 11 EXTRA_CFLAGS += -Idrivers/media/common/tuners 12 12 EXTRA_CFLAGS += -Idrivers/media/dvb/dvb-core 13 13 EXTRA_CFLAGS += -Idrivers/media/dvb/frontends 14 + EXTRA_CFLAGS += -Idrivers/media/dvb/dvb-usb 14 15
+11 -6
drivers/media/video/cx231xx/cx231xx-cards.c
··· 32 32 #include <media/v4l2-chip-ident.h> 33 33 34 34 #include <media/cx25840.h> 35 + #include "dvb-usb-ids.h" 35 36 #include "xc5000.h" 36 37 37 38 #include "cx231xx.h" ··· 176 175 .driver_info = CX231XX_BOARD_CNXT_RDE_250}, 177 176 {USB_DEVICE(0x0572, 0x58A1), 178 177 .driver_info = CX231XX_BOARD_CNXT_RDU_250}, 178 + {USB_DEVICE_VER(USB_VID_PIXELVIEW, USB_PID_PIXELVIEW_SBTVD, 0x4000,0x4fff), 179 + .driver_info = CX231XX_BOARD_UNKNOWN}, 179 180 {}, 180 181 }; 181 182 ··· 229 226 dev->board.name, dev->model); 230 227 231 228 /* set the direction for GPIO pins */ 232 - cx231xx_set_gpio_direction(dev, dev->board.tuner_gpio->bit, 1); 233 - cx231xx_set_gpio_value(dev, dev->board.tuner_gpio->bit, 1); 234 - cx231xx_set_gpio_direction(dev, dev->board.tuner_sif_gpio, 1); 229 + if (dev->board.tuner_gpio) { 230 + cx231xx_set_gpio_direction(dev, dev->board.tuner_gpio->bit, 1); 231 + cx231xx_set_gpio_value(dev, dev->board.tuner_gpio->bit, 1); 232 + cx231xx_set_gpio_direction(dev, dev->board.tuner_sif_gpio, 1); 235 233 236 - /* request some modules if any required */ 234 + /* request some modules if any required */ 237 235 238 - /* reset the Tuner */ 239 - cx231xx_gpio_set(dev, dev->board.tuner_gpio); 236 + /* reset the Tuner */ 237 + cx231xx_gpio_set(dev, dev->board.tuner_gpio); 238 + } 240 239 241 240 /* set the mode to Analog mode initially */ 242 241 cx231xx_set_mode(dev, CX231XX_ANALOG_MODE);
+1 -1
drivers/media/video/cx25840/cx25840-core.c
··· 1996 1996 1997 1997 state->volume = v4l2_ctrl_new_std(&state->hdl, 1998 1998 &cx25840_audio_ctrl_ops, V4L2_CID_AUDIO_VOLUME, 1999 - 0, 65335, 65535 / 100, default_volume); 1999 + 0, 65535, 65535 / 100, default_volume); 2000 2000 state->mute = v4l2_ctrl_new_std(&state->hdl, 2001 2001 &cx25840_audio_ctrl_ops, V4L2_CID_AUDIO_MUTE, 2002 2002 0, 1, 1, 0);
+1 -1
drivers/media/video/cx88/Kconfig
··· 17 17 18 18 config VIDEO_CX88_ALSA 19 19 tristate "Conexant 2388x DMA audio support" 20 - depends on VIDEO_CX88 && SND && EXPERIMENTAL 20 + depends on VIDEO_CX88 && SND 21 21 select SND_PCM 22 22 ---help--- 23 23 This is a video4linux driver for direct (DMA) audio on
+1
drivers/media/video/gspca/gspca.c
··· 223 223 usb_rcvintpipe(dev, ep->bEndpointAddress), 224 224 buffer, buffer_len, 225 225 int_irq, (void *)gspca_dev, interval); 226 + urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 226 227 gspca_dev->int_urb = urb; 227 228 ret = usb_submit_urb(urb, GFP_KERNEL); 228 229 if (ret < 0) {
+1 -2
drivers/media/video/gspca/sn9c20x.c
··· 2357 2357 (data[33] << 10); 2358 2358 avg_lum >>= 9; 2359 2359 atomic_set(&sd->avg_lum, avg_lum); 2360 - gspca_frame_add(gspca_dev, LAST_PACKET, 2361 - data, len); 2360 + gspca_frame_add(gspca_dev, LAST_PACKET, NULL, 0); 2362 2361 return; 2363 2362 } 2364 2363 if (gspca_dev->last_packet_type == LAST_PACKET) {
+2
drivers/media/video/ivtv/ivtvfb.c
··· 466 466 struct fb_vblank vblank; 467 467 u32 trace; 468 468 469 + memset(&vblank, 0, sizeof(struct fb_vblank)); 470 + 469 471 vblank.flags = FB_VBLANK_HAVE_COUNT |FB_VBLANK_HAVE_VCOUNT | 470 472 FB_VBLANK_HAVE_VSYNC; 471 473 trace = read_reg(IVTV_REG_DEC_LINE_FIELD) >> 16;
+2 -1
drivers/media/video/mem2mem_testdev.c
··· 239 239 return -EFAULT; 240 240 } 241 241 242 - if (in_buf->vb.size < out_buf->vb.size) { 242 + if (in_buf->vb.size > out_buf->vb.size) { 243 243 v4l2_err(&dev->v4l2_dev, "Output buffer is too small\n"); 244 244 return -EINVAL; 245 245 } ··· 1014 1014 v4l2_m2m_release(dev->m2m_dev); 1015 1015 del_timer_sync(&dev->timer); 1016 1016 video_unregister_device(dev->vfd); 1017 + video_device_release(dev->vfd); 1017 1018 v4l2_device_unregister(&dev->v4l2_dev); 1018 1019 kfree(dev); 1019 1020
+7 -1
drivers/media/video/mt9m111.c
··· 447 447 dev_dbg(&client->dev, "%s left=%d, top=%d, width=%d, height=%d\n", 448 448 __func__, rect.left, rect.top, rect.width, rect.height); 449 449 450 + if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 451 + return -EINVAL; 452 + 450 453 ret = mt9m111_make_rect(client, &rect); 451 454 if (!ret) 452 455 mt9m111->rect = rect; ··· 469 466 470 467 static int mt9m111_cropcap(struct v4l2_subdev *sd, struct v4l2_cropcap *a) 471 468 { 469 + if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 470 + return -EINVAL; 471 + 472 472 a->bounds.left = MT9M111_MIN_DARK_COLS; 473 473 a->bounds.top = MT9M111_MIN_DARK_ROWS; 474 474 a->bounds.width = MT9M111_MAX_WIDTH; 475 475 a->bounds.height = MT9M111_MAX_HEIGHT; 476 476 a->defrect = a->bounds; 477 - a->type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 478 477 a->pixelaspect.numerator = 1; 479 478 a->pixelaspect.denominator = 1; 480 479 ··· 492 487 mf->width = mt9m111->rect.width; 493 488 mf->height = mt9m111->rect.height; 494 489 mf->code = mt9m111->fmt->code; 490 + mf->colorspace = mt9m111->fmt->colorspace; 495 491 mf->field = V4L2_FIELD_NONE; 496 492 497 493 return 0;
-3
drivers/media/video/mt9v022.c
··· 402 402 if (mt9v022->model != V4L2_IDENT_MT9V022IX7ATC) 403 403 return -EINVAL; 404 404 break; 405 - case 0: 406 - /* No format change, only geometry */ 407 - break; 408 405 default: 409 406 return -EINVAL; 410 407 }
+4
drivers/media/video/mx2_camera.c
··· 378 378 379 379 spin_lock_irqsave(&pcdev->lock, flags); 380 380 381 + if (*fb_active == NULL) 382 + goto out; 383 + 381 384 vb = &(*fb_active)->vb; 382 385 dev_dbg(pcdev->dev, "%s (vb=0x%p) 0x%08lx %d\n", __func__, 383 386 vb, vb->baddr, vb->bsize); ··· 405 402 406 403 *fb_active = buf; 407 404 405 + out: 408 406 spin_unlock_irqrestore(&pcdev->lock, flags); 409 407 } 410 408
+3 -3
drivers/media/video/pvrusb2/pvrusb2-ctrl.c
··· 513 513 if (ret >= 0) { 514 514 ret = pvr2_ctrl_range_check(cptr,*valptr); 515 515 } 516 - if (maskptr) *maskptr = ~0; 516 + *maskptr = ~0; 517 517 } else if (cptr->info->type == pvr2_ctl_bool) { 518 518 ret = parse_token(ptr,len,valptr,boolNames, 519 519 ARRAY_SIZE(boolNames)); ··· 522 522 } else if (ret == 0) { 523 523 *valptr = (*valptr & 1) ? !0 : 0; 524 524 } 525 - if (maskptr) *maskptr = 1; 525 + *maskptr = 1; 526 526 } else if (cptr->info->type == pvr2_ctl_enum) { 527 527 ret = parse_token( 528 528 ptr,len,valptr, ··· 531 531 if (ret >= 0) { 532 532 ret = pvr2_ctrl_range_check(cptr,*valptr); 533 533 } 534 - if (maskptr) *maskptr = ~0; 534 + *maskptr = ~0; 535 535 } else if (cptr->info->type == pvr2_ctl_bitmask) { 536 536 ret = parse_tlist( 537 537 ptr,len,maskptr,valptr,
+42 -52
drivers/media/video/s5p-fimc/fimc-core.c
···  393  393   	dbg("ctx->out_order_1p= %d", ctx->out_order_1p);
 394  394   }
 395  395   
      396  +static void fimc_prepare_dma_offset(struct fimc_ctx *ctx, struct fimc_frame *f)
      397  +{
      398  +	struct samsung_fimc_variant *variant = ctx->fimc_dev->variant;
      399  +
      400  +	f->dma_offset.y_h = f->offs_h;
      401  +	if (!variant->pix_hoff)
      402  +		f->dma_offset.y_h *= (f->fmt->depth >> 3);
      403  +
      404  +	f->dma_offset.y_v = f->offs_v;
      405  +
      406  +	f->dma_offset.cb_h = f->offs_h;
      407  +	f->dma_offset.cb_v = f->offs_v;
      408  +
      409  +	f->dma_offset.cr_h = f->offs_h;
      410  +	f->dma_offset.cr_v = f->offs_v;
      411  +
      412  +	if (!variant->pix_hoff) {
      413  +		if (f->fmt->planes_cnt == 3) {
      414  +			f->dma_offset.cb_h >>= 1;
      415  +			f->dma_offset.cr_h >>= 1;
      416  +		}
      417  +		if (f->fmt->color == S5P_FIMC_YCBCR420) {
      418  +			f->dma_offset.cb_v >>= 1;
      419  +			f->dma_offset.cr_v >>= 1;
      420  +		}
      421  +	}
      422  +
      423  +	dbg("in_offset: color= %d, y_h= %d, y_v= %d",
      424  +	    f->fmt->color, f->dma_offset.y_h, f->dma_offset.y_v);
      425  +}
      426  +
 396  427   /**
 397  428    * fimc_prepare_config - check dimensions, operation and color mode
 398  429    *			 and pre-calculate offset and the scaling coefficients.
···  437  406   {
 438  407   	struct fimc_frame *s_frame, *d_frame;
 439  408   	struct fimc_vid_buffer *buf = NULL;
 440       -	struct samsung_fimc_variant *variant = ctx->fimc_dev->variant;
 441  409   	int ret = 0;
 442  410   
 443  411   	s_frame = &ctx->s_frame;
···  449  419   		swap(d_frame->width, d_frame->height);
 450  420   	}
 451  421   
 452       -	/* Prepare the output offset ratios for scaler. */
 453       -	d_frame->dma_offset.y_h = d_frame->offs_h;
 454       -	if (!variant->pix_hoff)
 455       -		d_frame->dma_offset.y_h *= (d_frame->fmt->depth >> 3);
      422  +	/* Prepare the DMA offset ratios for scaler. */
      423  +	fimc_prepare_dma_offset(ctx, &ctx->s_frame);
      424  +	fimc_prepare_dma_offset(ctx, &ctx->d_frame);
 456  425   
 457       -	d_frame->dma_offset.y_v = d_frame->offs_v;
 458       -
 459       -	d_frame->dma_offset.cb_h = d_frame->offs_h;
 460       -	d_frame->dma_offset.cb_v = d_frame->offs_v;
 461       -
 462       -	d_frame->dma_offset.cr_h = d_frame->offs_h;
 463       -	d_frame->dma_offset.cr_v = d_frame->offs_v;
 464       -
 465       -	if (!variant->pix_hoff && d_frame->fmt->planes_cnt == 3) {
 466       -		d_frame->dma_offset.cb_h >>= 1;
 467       -		d_frame->dma_offset.cb_v >>= 1;
 468       -		d_frame->dma_offset.cr_h >>= 1;
 469       -		d_frame->dma_offset.cr_v >>= 1;
 470       -	}
 471       -
 472       -	dbg("out offset: color= %d, y_h= %d, y_v= %d",
 473       -	    d_frame->fmt->color,
 474       -	    d_frame->dma_offset.y_h, d_frame->dma_offset.y_v);
 475       -
 476       -	/* Prepare the input offset ratios for scaler. */
 477       -	s_frame->dma_offset.y_h = s_frame->offs_h;
 478       -	if (!variant->pix_hoff)
 479       -		s_frame->dma_offset.y_h *= (s_frame->fmt->depth >> 3);
 480       -	s_frame->dma_offset.y_v = s_frame->offs_v;
 481       -
 482       -	s_frame->dma_offset.cb_h = s_frame->offs_h;
 483       -	s_frame->dma_offset.cb_v = s_frame->offs_v;
 484       -
 485       -	s_frame->dma_offset.cr_h = s_frame->offs_h;
 486       -	s_frame->dma_offset.cr_v = s_frame->offs_v;
 487       -
 488       -	if (!variant->pix_hoff && s_frame->fmt->planes_cnt == 3) {
 489       -		s_frame->dma_offset.cb_h >>= 1;
 490       -		s_frame->dma_offset.cb_v >>= 1;
 491       -		s_frame->dma_offset.cr_h >>= 1;
 492       -		s_frame->dma_offset.cr_v >>= 1;
 493       -	}
 494       -
 495       -	dbg("in offset: color= %d, y_h= %d, y_v= %d",
 496       -	    s_frame->fmt->color, s_frame->dma_offset.y_h,
 497       -	    s_frame->dma_offset.y_v);
 498       -
 499       -	fimc_set_yuv_order(ctx);
 500       -
 501       -	/* Check against the scaler ratio. */
 502  426   	if (s_frame->height > (SCALER_MAX_VRATIO * d_frame->height) ||
 503  427   	    s_frame->width > (SCALER_MAX_HRATIO * d_frame->width)) {
 504  428   		err("out of scaler range");
 505  429   		return -EINVAL;
 506  430   	}
      431  +	fimc_set_yuv_order(ctx);
 507  432   }
 508  433   
 509  434   	/* Input DMA mode is not allowed when the scaler is disabled. */
···  807  822   	} else {
 808  823   		v4l2_err(&ctx->fimc_dev->m2m.v4l2_dev,
 809  824   			 "Wrong buffer/video queue type (%d)\n", f->type);
 810       -		return -EINVAL;
      825  +		ret = -EINVAL;
      826  +		goto s_fmt_out;
 811  827   	}
 812  828   
 813  829   	pix = &f->fmt.pix;
··· 1400 1414   	}
1401 1415   
1402 1416   	fimc->work_queue = create_workqueue(dev_name(&fimc->pdev->dev));
1403       -	if (!fimc->work_queue)
     1417  +	if (!fimc->work_queue) {
     1418  +		ret = -ENOMEM;
1404 1419   		goto err_irq;
     1420  +	}
1405 1421   
1406 1422   	ret = fimc_register_m2m_device(fimc);
1407 1423   	if (ret)
··· 1480 1492   };
1481 1493   
1482 1494   static struct samsung_fimc_variant fimc01_variant_s5pv210 = {
     1495  +	.pix_hoff	 = 1,
1483 1496   	.has_inp_rot	 = 1,
1484 1497   	.has_out_rot	 = 1,
1485 1498   	.min_inp_pixsize = 16,
··· 1495 1506   };
1496 1507   
1497 1508   static struct samsung_fimc_variant fimc2_variant_s5pv210 = {
     1509  +	.pix_hoff	 = 1,
1498 1510   	.min_inp_pixsize = 16,
1499 1511   	.min_out_pixsize = 32,
1500 1512   
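The fimc-core change folds two near-identical offset blocks into one fimc_prepare_dma_offset() helper, which also halves the chroma offsets vertically only for 4:2:0 data. A simplified userspace sketch of that per-plane logic (stand-in struct and flags; the luma bytes-per-pixel scaling done by the real driver is omitted):

```c
#include <assert.h>

/* Stand-in for the driver's per-frame DMA offset fields. */
struct dma_offset {
	int y_h, y_v;
	int cb_h, cb_v;
	int cr_h, cr_v;
};

/* For 3-plane formats the chroma planes are subsampled horizontally,
 * and for YCbCr 4:2:0 also vertically, so those offsets are halved
 * unless the hardware applies pixel offsets itself (pix_hoff). */
static void prepare_dma_offset(struct dma_offset *o, int offs_h, int offs_v,
			       int planes_cnt, int is_420, int pix_hoff)
{
	o->y_h = o->cb_h = o->cr_h = offs_h;
	o->y_v = o->cb_v = o->cr_v = offs_v;

	if (!pix_hoff) {
		if (planes_cnt == 3) {
			o->cb_h >>= 1;
			o->cr_h >>= 1;
		}
		if (is_420) {
			o->cb_v >>= 1;
			o->cr_v >>= 1;
		}
	}
}
```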
+5 -5
drivers/media/video/saa7134/saa7134-cards.c
··· 4323 4323 }, 4324 4324 [SAA7134_BOARD_BEHOLD_COLUMBUS_TVFM] = { 4325 4325 /* Beholder Intl. Ltd. 2008 */ 4326 - /*Dmitry Belimov <d.belimov@gmail.com> */ 4327 - .name = "Beholder BeholdTV Columbus TVFM", 4326 + /* Dmitry Belimov <d.belimov@gmail.com> */ 4327 + .name = "Beholder BeholdTV Columbus TV/FM", 4328 4328 .audio_clock = 0x00187de7, 4329 4329 .tuner_type = TUNER_ALPS_TSBE5_PAL, 4330 - .radio_type = UNSET, 4331 - .tuner_addr = ADDR_UNSET, 4332 - .radio_addr = ADDR_UNSET, 4330 + .radio_type = TUNER_TEA5767, 4331 + .tuner_addr = 0xc2 >> 1, 4332 + .radio_addr = 0xc0 >> 1, 4333 4333 .tda9887_conf = TDA9887_PRESENT, 4334 4334 .gpiomask = 0x000A8004, 4335 4335 .inputs = {{
+3 -2
drivers/media/video/saa7164/saa7164-buffer.c
··· 136 136 int saa7164_buffer_dealloc(struct saa7164_tsport *port, 137 137 struct saa7164_buffer *buf) 138 138 { 139 - struct saa7164_dev *dev = port->dev; 139 + struct saa7164_dev *dev; 140 140 141 - if ((buf == 0) || (port == 0)) 141 + if (!buf || !port) 142 142 return SAA_ERR_BAD_PARAMETER; 143 + dev = port->dev; 143 144 144 145 dprintk(DBGLVL_BUF, "%s() deallocating buffer @ 0x%p\n", __func__, buf); 145 146
+24
drivers/media/video/uvc/uvc_driver.c
··· 486 486 max(frame->dwFrameInterval[0], 487 487 frame->dwDefaultFrameInterval)); 488 488 489 + if (dev->quirks & UVC_QUIRK_RESTRICT_FRAME_RATE) { 490 + frame->bFrameIntervalType = 1; 491 + frame->dwFrameInterval[0] = 492 + frame->dwDefaultFrameInterval; 493 + } 494 + 489 495 uvc_trace(UVC_TRACE_DESCR, "- %ux%u (%u.%u fps)\n", 490 496 frame->wWidth, frame->wHeight, 491 497 10000000/frame->dwDefaultFrameInterval, ··· 2032 2026 .bInterfaceClass = USB_CLASS_VENDOR_SPEC, 2033 2027 .bInterfaceSubClass = 1, 2034 2028 .bInterfaceProtocol = 0 }, 2029 + /* Chicony CNF7129 (Asus EEE 100HE) */ 2030 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2031 + | USB_DEVICE_ID_MATCH_INT_INFO, 2032 + .idVendor = 0x04f2, 2033 + .idProduct = 0xb071, 2034 + .bInterfaceClass = USB_CLASS_VIDEO, 2035 + .bInterfaceSubClass = 1, 2036 + .bInterfaceProtocol = 0, 2037 + .driver_info = UVC_QUIRK_RESTRICT_FRAME_RATE }, 2035 2038 /* Alcor Micro AU3820 (Future Boy PC USB Webcam) */ 2036 2039 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2037 2040 | USB_DEVICE_ID_MATCH_INT_INFO, ··· 2106 2091 .bInterfaceProtocol = 0, 2107 2092 .driver_info = UVC_QUIRK_PROBE_MINMAX 2108 2093 | UVC_QUIRK_PROBE_DEF }, 2094 + /* IMC Networks (Medion Akoya) */ 2095 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2096 + | USB_DEVICE_ID_MATCH_INT_INFO, 2097 + .idVendor = 0x13d3, 2098 + .idProduct = 0x5103, 2099 + .bInterfaceClass = USB_CLASS_VIDEO, 2100 + .bInterfaceSubClass = 1, 2101 + .bInterfaceProtocol = 0, 2102 + .driver_info = UVC_QUIRK_STREAM_NO_FID }, 2109 2103 /* Syntek (HP Spartan) */ 2110 2104 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2111 2105 | USB_DEVICE_ID_MATCH_INT_INFO,
+1
drivers/media/video/uvc/uvcvideo.h
··· 182 182 #define UVC_QUIRK_IGNORE_SELECTOR_UNIT 0x00000020 183 183 #define UVC_QUIRK_FIX_BANDWIDTH 0x00000080 184 184 #define UVC_QUIRK_PROBE_DEF 0x00000100 185 + #define UVC_QUIRK_RESTRICT_FRAME_RATE 0x00000200 185 186 186 187 /* Format flags */ 187 188 #define UVC_FMT_FLAG_COMPRESSED 0x00000001
+4 -2
drivers/media/video/videobuf-dma-contig.c
··· 393 393 } 394 394 395 395 /* read() method */ 396 - dma_free_coherent(q->dev, mem->size, mem->vaddr, mem->dma_handle); 397 - mem->vaddr = NULL; 396 + if (mem->vaddr) { 397 + dma_free_coherent(q->dev, mem->size, mem->vaddr, mem->dma_handle); 398 + mem->vaddr = NULL; 399 + } 398 400 } 399 401 EXPORT_SYMBOL_GPL(videobuf_dma_contig_free); 400 402
+7 -4
drivers/media/video/videobuf-dma-sg.c
··· 94 94 * must free the memory. 95 95 */ 96 96 static struct scatterlist *videobuf_pages_to_sg(struct page **pages, 97 - int nr_pages, int offset) 97 + int nr_pages, int offset, size_t size) 98 98 { 99 99 struct scatterlist *sglist; 100 100 int i; ··· 110 110 /* DMA to highmem pages might not work */ 111 111 goto highmem; 112 112 sg_set_page(&sglist[0], pages[0], PAGE_SIZE - offset, offset); 113 + size -= PAGE_SIZE - offset; 113 114 for (i = 1; i < nr_pages; i++) { 114 115 if (NULL == pages[i]) 115 116 goto nopage; 116 117 if (PageHighMem(pages[i])) 117 118 goto highmem; 118 - sg_set_page(&sglist[i], pages[i], PAGE_SIZE, 0); 119 + sg_set_page(&sglist[i], pages[i], min(PAGE_SIZE, size), 0); 120 + size -= min(PAGE_SIZE, size); 119 121 } 120 122 return sglist; 121 123 ··· 172 170 173 171 first = (data & PAGE_MASK) >> PAGE_SHIFT; 174 172 last = ((data+size-1) & PAGE_MASK) >> PAGE_SHIFT; 175 - dma->offset = data & ~PAGE_MASK; 173 + dma->offset = data & ~PAGE_MASK; 174 + dma->size = size; 176 175 dma->nr_pages = last-first+1; 177 176 dma->pages = kmalloc(dma->nr_pages * sizeof(struct page *), GFP_KERNEL); 178 177 if (NULL == dma->pages) ··· 255 252 256 253 if (dma->pages) { 257 254 dma->sglist = videobuf_pages_to_sg(dma->pages, dma->nr_pages, 258 - dma->offset); 255 + dma->offset, dma->size); 259 256 } 260 257 if (dma->vaddr) { 261 258 dma->sglist = videobuf_vmalloc_to_sg(dma->vaddr,
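The videobuf-dma-sg hunk threads the buffer size through so that scatterlist entries after the first are clamped to the bytes that remain, instead of always claiming a full page. A userspace sketch of that length computation (fill_seg_lengths() and PAGE_SZ are illustrative; as in the driver, the caller is assumed to have derived nr_pages from offset + size):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SZ 4096u	/* stand-in for PAGE_SIZE */

/* First segment spans the first page from offset; every later segment
 * is min(PAGE_SZ, remaining size), mirroring the patched
 * videobuf_pages_to_sg(). Returns the total bytes described. */
static size_t fill_seg_lengths(size_t *lens, int nr_pages,
			       size_t offset, size_t size)
{
	size_t total;
	int i;

	lens[0] = PAGE_SZ - offset;
	size -= lens[0];
	total = lens[0];

	for (i = 1; i < nr_pages; i++) {
		lens[i] = size < PAGE_SZ ? size : PAGE_SZ;
		size -= lens[i];
		total += lens[i];
	}
	return total;
}
```

With the pre-patch behavior every tail segment would have been a full PAGE_SZ, over-describing the user buffer.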
-1
drivers/misc/bh1780gli.c
··· 190 190 191 191 ddata = i2c_get_clientdata(client); 192 192 sysfs_remove_group(&client->dev.kobj, &bh1780_attr_group); 193 - i2c_set_clientdata(client, NULL); 194 193 kfree(ddata); 195 194 196 195 return 0;
+13
drivers/mmc/core/core.c
··· 1631 1631 if (host->bus_ops && !host->bus_dead) { 1632 1632 if (host->bus_ops->suspend) 1633 1633 err = host->bus_ops->suspend(host); 1634 + if (err == -ENOSYS || !host->bus_ops->resume) { 1635 + /* 1636 + * We simply "remove" the card in this case. 1637 + * It will be redetected on resume. 1638 + */ 1639 + if (host->bus_ops->remove) 1640 + host->bus_ops->remove(host); 1641 + mmc_claim_host(host); 1642 + mmc_detach_bus(host); 1643 + mmc_release_host(host); 1644 + host->pm_flags = 0; 1645 + err = 0; 1646 + } 1634 1647 } 1635 1648 mmc_bus_put(host); 1636 1649
+2 -2
drivers/net/Kconfig
··· 2428 2428 2429 2429 config MV643XX_ETH 2430 2430 tristate "Marvell Discovery (643XX) and Orion ethernet support" 2431 - depends on MV64X60 || PPC32 || PLAT_ORION 2431 + depends on (MV64X60 || PPC32 || PLAT_ORION) && INET 2432 2432 select INET_LRO 2433 2433 select PHYLIB 2434 2434 help ··· 2803 2803 2804 2804 config PASEMI_MAC 2805 2805 tristate "PA Semi 1/10Gbit MAC" 2806 - depends on PPC_PASEMI && PCI 2806 + depends on PPC_PASEMI && PCI && INET 2807 2807 select PHYLIB 2808 2808 select INET_LRO 2809 2809 help
+2 -2
drivers/net/b44.c
··· 2170 2170 dev->irq = sdev->irq; 2171 2171 SET_ETHTOOL_OPS(dev, &b44_ethtool_ops); 2172 2172 2173 - netif_carrier_off(dev); 2174 - 2175 2173 err = ssb_bus_powerup(sdev->bus, 0); 2176 2174 if (err) { 2177 2175 dev_err(sdev->dev, ··· 2210 2212 dev_err(sdev->dev, "Cannot register net device, aborting\n"); 2211 2213 goto err_out_powerdown; 2212 2214 } 2215 + 2216 + netif_carrier_off(dev); 2213 2217 2214 2218 ssb_set_drvdata(sdev, dev); 2215 2219
+9
drivers/net/bonding/bond_main.c
··· 5164 5164   		res = dev_alloc_name(bond_dev, "bond%d");
5165 5165   		if (res < 0)
5166 5166   			goto out;
     5167  +	} else {
     5168  +		/*
     5169  +		 * If we're given a name to register
     5170  +		 * we need to ensure that it's not already
     5171  +		 * registered
     5172  +		 */
     5173  +		res = -EEXIST;
     5174  +		if (__dev_get_by_name(net, name) != NULL)
     5175  +			goto out;
5167 5176   	}
5168 5177   
5169 5178   	res = register_netdevice(bond_dev);
+8 -1
drivers/net/ehea/ehea_main.c
··· 533 533 int length = cqe->num_bytes_transfered - 4; /*remove CRC */ 534 534 535 535 skb_put(skb, length); 536 - skb->ip_summed = CHECKSUM_UNNECESSARY; 537 536 skb->protocol = eth_type_trans(skb, dev); 537 + 538 + /* The packet was not an IPV4 packet so a complemented checksum was 539 + calculated. The value is found in the Internet Checksum field. */ 540 + if (cqe->status & EHEA_CQE_BLIND_CKSUM) { 541 + skb->ip_summed = CHECKSUM_COMPLETE; 542 + skb->csum = csum_unfold(~cqe->inet_checksum_value); 543 + } else 544 + skb->ip_summed = CHECKSUM_UNNECESSARY; 538 545 } 539 546 540 547 static inline struct sk_buff *get_skb_by_index(struct sk_buff **skb_array,
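The ehea hunk reports CHECKSUM_COMPLETE with the complement of the hardware's Internet Checksum field for non-IPv4 frames. For reference, a userspace sketch of the RFC 1071 one's-complement sum that such a value is derived from:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum: sum 16-bit big-endian words with
 * end-around carry, then return the one's complement. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)data[i] << 8) | data[i + 1];
	if (len & 1)			/* odd trailing byte, padded with 0 */
		sum += (uint32_t)data[len - 1] << 8;
	while (sum >> 16)		/* fold carries back in */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

The stack undoes that final complement (the `~` in `csum_unfold(~cqe->inet_checksum_value)`) to recover the raw sum it works with internally.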
+1
drivers/net/ehea/ehea_qmr.h
··· 150 150 #define EHEA_CQE_TYPE_RQ 0x60 151 151 #define EHEA_CQE_STAT_ERR_MASK 0x700F 152 152 #define EHEA_CQE_STAT_FAT_ERR_MASK 0xF 153 + #define EHEA_CQE_BLIND_CKSUM 0x8000 153 154 #define EHEA_CQE_STAT_ERR_TCP 0x4000 154 155 #define EHEA_CQE_STAT_ERR_IP 0x2000 155 156 #define EHEA_CQE_STAT_ERR_CRC 0x1000
+30 -14
drivers/net/fec.c
···  678  678   {
 679  679   	struct fec_enet_private *fep = netdev_priv(dev);
 680  680   	struct phy_device *phy_dev = NULL;
 681       -	int ret;
      681  +	char mdio_bus_id[MII_BUS_ID_SIZE];
      682  +	char phy_name[MII_BUS_ID_SIZE + 3];
      683  +	int phy_id;
 682  684   
 683  685   	fep->phy_dev = NULL;
 684  686   
 685       -	/* find the first phy */
 686       -	phy_dev = phy_find_first(fep->mii_bus);
 687       -	if (!phy_dev) {
 688       -		printk(KERN_ERR "%s: no PHY found\n", dev->name);
 689       -		return -ENODEV;
      687  +	/* check for attached phy */
      688  +	for (phy_id = 0; (phy_id < PHY_MAX_ADDR); phy_id++) {
      689  +		if ((fep->mii_bus->phy_mask & (1 << phy_id)))
      690  +			continue;
      691  +		if (fep->mii_bus->phy_map[phy_id] == NULL)
      692  +			continue;
      693  +		if (fep->mii_bus->phy_map[phy_id]->phy_id == 0)
      694  +			continue;
      695  +		strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE);
      696  +		break;
 690  697   	}
 691  698   
 692       -	/* attach the mac to the phy */
 693       -	ret = phy_connect_direct(dev, phy_dev,
 694       -			     &fec_enet_adjust_link, 0,
 695       -			     PHY_INTERFACE_MODE_MII);
 696       -	if (ret) {
 697       -		printk(KERN_ERR "%s: Could not attach to PHY\n", dev->name);
 698       -		return ret;
      699  +	if (phy_id >= PHY_MAX_ADDR) {
      700  +		printk(KERN_INFO "%s: no PHY, assuming direct connection "
      701  +			"to switch\n", dev->name);
      702  +		strncpy(mdio_bus_id, "0", MII_BUS_ID_SIZE);
      703  +		phy_id = 0;
      704  +	}
      705  +
      706  +	snprintf(phy_name, MII_BUS_ID_SIZE, PHY_ID_FMT, mdio_bus_id, phy_id);
      707  +	phy_dev = phy_connect(dev, phy_name, &fec_enet_adjust_link, 0,
      708  +		PHY_INTERFACE_MODE_MII);
      709  +	if (IS_ERR(phy_dev)) {
      710  +		printk(KERN_ERR "%s: could not attach to PHY\n", dev->name);
      711  +		return PTR_ERR(phy_dev);
 699  712   	}
 700  713   
 701  714   	/* mask with MAC supported features */
···  751  738   	fep->mii_bus->read = fec_enet_mdio_read;
 752  739   	fep->mii_bus->write = fec_enet_mdio_write;
 753  740   	fep->mii_bus->reset = fec_enet_mdio_reset;
 754       -	snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%x", pdev->id);
      741  +	snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%x", pdev->id + 1);
 755  742   	fep->mii_bus->priv = fep;
 756  743   	fep->mii_bus->parent = &pdev->dev;
··· 1323 1310   	ret = fec_enet_mii_init(pdev);
1324 1311   	if (ret)
1325 1312   		goto failed_mii_init;
     1313  +
     1314  +	/* Carrier starts down, phylib will bring it up */
     1315  +	netif_carrier_off(ndev);
1326 1316   
1327 1317   	ret = register_netdev(ndev);
1328 1318   	if (ret)
+35 -30
drivers/net/r8169.c
··· 1212 1212 if ((RTL_R8(ChipCmd) & CmdRxEnb) == 0) 1213 1213 return; 1214 1214 1215 - counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr); 1215 + counters = dma_alloc_coherent(&tp->pci_dev->dev, sizeof(*counters), 1216 + &paddr, GFP_KERNEL); 1216 1217 if (!counters) 1217 1218 return; 1218 1219 ··· 1234 1233 RTL_W32(CounterAddrLow, 0); 1235 1234 RTL_W32(CounterAddrHigh, 0); 1236 1235 1237 - pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr); 1236 + dma_free_coherent(&tp->pci_dev->dev, sizeof(*counters), counters, 1237 + paddr); 1238 1238 } 1239 1239 1240 1240 static void rtl8169_get_ethtool_stats(struct net_device *dev, ··· 3294 3292 3295 3293 /* 3296 3294 * Rx and Tx desscriptors needs 256 bytes alignment. 3297 - * pci_alloc_consistent provides more. 3295 + * dma_alloc_coherent provides more. 3298 3296 */ 3299 - tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES, 3300 - &tp->TxPhyAddr); 3297 + tp->TxDescArray = dma_alloc_coherent(&pdev->dev, R8169_TX_RING_BYTES, 3298 + &tp->TxPhyAddr, GFP_KERNEL); 3301 3299 if (!tp->TxDescArray) 3302 3300 goto err_pm_runtime_put; 3303 3301 3304 - tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES, 3305 - &tp->RxPhyAddr); 3302 + tp->RxDescArray = dma_alloc_coherent(&pdev->dev, R8169_RX_RING_BYTES, 3303 + &tp->RxPhyAddr, GFP_KERNEL); 3306 3304 if (!tp->RxDescArray) 3307 3305 goto err_free_tx_0; 3308 3306 ··· 3336 3334 err_release_ring_2: 3337 3335 rtl8169_rx_clear(tp); 3338 3336 err_free_rx_1: 3339 - pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray, 3340 - tp->RxPhyAddr); 3337 + dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray, 3338 + tp->RxPhyAddr); 3341 3339 tp->RxDescArray = NULL; 3342 3340 err_free_tx_0: 3343 - pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray, 3344 - tp->TxPhyAddr); 3341 + dma_free_coherent(&pdev->dev, R8169_TX_RING_BYTES, tp->TxDescArray, 3342 + tp->TxPhyAddr); 3345 3343 tp->TxDescArray = NULL; 3346 
3344 err_pm_runtime_put: 3347 3345 pm_runtime_put_noidle(&pdev->dev); ··· 3977 3975 { 3978 3976 struct pci_dev *pdev = tp->pci_dev; 3979 3977 3980 - pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz, 3978 + dma_unmap_single(&pdev->dev, le64_to_cpu(desc->addr), tp->rx_buf_sz, 3981 3979 PCI_DMA_FROMDEVICE); 3982 3980 dev_kfree_skb(*sk_buff); 3983 3981 *sk_buff = NULL; ··· 4002 4000 static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev, 4003 4001 struct net_device *dev, 4004 4002 struct RxDesc *desc, int rx_buf_sz, 4005 - unsigned int align) 4003 + unsigned int align, gfp_t gfp) 4006 4004 { 4007 4005 struct sk_buff *skb; 4008 4006 dma_addr_t mapping; ··· 4010 4008 4011 4009 pad = align ? align : NET_IP_ALIGN; 4012 4010 4013 - skb = netdev_alloc_skb(dev, rx_buf_sz + pad); 4011 + skb = __netdev_alloc_skb(dev, rx_buf_sz + pad, gfp); 4014 4012 if (!skb) 4015 4013 goto err_out; 4016 4014 4017 4015 skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad); 4018 4016 4019 - mapping = pci_map_single(pdev, skb->data, rx_buf_sz, 4017 + mapping = dma_map_single(&pdev->dev, skb->data, rx_buf_sz, 4020 4018 PCI_DMA_FROMDEVICE); 4021 4019 4022 4020 rtl8169_map_to_asic(desc, mapping, rx_buf_sz); ··· 4041 4039 } 4042 4040 4043 4041 static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev, 4044 - u32 start, u32 end) 4042 + u32 start, u32 end, gfp_t gfp) 4045 4043 { 4046 4044 u32 cur; 4047 4045 ··· 4056 4054 4057 4055 skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev, 4058 4056 tp->RxDescArray + i, 4059 - tp->rx_buf_sz, tp->align); 4057 + tp->rx_buf_sz, tp->align, gfp); 4060 4058 if (!skb) 4061 4059 break; 4062 4060 ··· 4084 4082 memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info)); 4085 4083 memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *)); 4086 4084 4087 - if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC) 4085 + if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC, GFP_KERNEL) != NUM_RX_DESC) 4088 4086 
goto err_out; 4089 4087 4090 4088 rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1); ··· 4101 4099 { 4102 4100 unsigned int len = tx_skb->len; 4103 4101 4104 - pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE); 4102 + dma_unmap_single(&pdev->dev, le64_to_cpu(desc->addr), len, 4103 + PCI_DMA_TODEVICE); 4105 4104 desc->opts1 = 0x00; 4106 4105 desc->opts2 = 0x00; 4107 4106 desc->addr = 0x00; ··· 4246 4243 txd = tp->TxDescArray + entry; 4247 4244 len = frag->size; 4248 4245 addr = ((void *) page_address(frag->page)) + frag->page_offset; 4249 - mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE); 4246 + mapping = dma_map_single(&tp->pci_dev->dev, addr, len, 4247 + PCI_DMA_TODEVICE); 4250 4248 4251 4249 /* anti gcc 2.95.3 bugware (sic) */ 4252 4250 status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC)); ··· 4317 4313 tp->tx_skb[entry].skb = skb; 4318 4314 } 4319 4315 4320 - mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE); 4316 + mapping = dma_map_single(&tp->pci_dev->dev, skb->data, len, 4317 + PCI_DMA_TODEVICE); 4321 4318 4322 4319 tp->tx_skb[entry].len = len; 4323 4320 txd->addr = cpu_to_le64(mapping); ··· 4482 4477 if (!skb) 4483 4478 goto out; 4484 4479 4485 - pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size, 4486 - PCI_DMA_FROMDEVICE); 4480 + dma_sync_single_for_cpu(&tp->pci_dev->dev, addr, pkt_size, 4481 + PCI_DMA_FROMDEVICE); 4487 4482 skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size); 4488 4483 *sk_buff = skb; 4489 4484 done = true; ··· 4554 4549 rtl8169_rx_csum(skb, desc); 4555 4550 4556 4551 if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) { 4557 - pci_dma_sync_single_for_device(pdev, addr, 4552 + dma_sync_single_for_device(&pdev->dev, addr, 4558 4553 pkt_size, PCI_DMA_FROMDEVICE); 4559 4554 rtl8169_mark_to_asic(desc, tp->rx_buf_sz); 4560 4555 } else { 4561 - pci_unmap_single(pdev, addr, tp->rx_buf_sz, 4556 + dma_unmap_single(&pdev->dev, addr, tp->rx_buf_sz, 
4562 4557 PCI_DMA_FROMDEVICE); 4563 4558 tp->Rx_skbuff[entry] = NULL; 4564 4559 } ··· 4588 4583 count = cur_rx - tp->cur_rx; 4589 4584 tp->cur_rx = cur_rx; 4590 4585 4591 - delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx); 4586 + delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx, GFP_ATOMIC); 4592 4587 if (!delta && count) 4593 4588 netif_info(tp, intr, dev, "no Rx buffer allocated\n"); 4594 4589 tp->dirty_rx += delta; ··· 4774 4769 4775 4770 free_irq(dev->irq, dev); 4776 4771 4777 - pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray, 4778 - tp->RxPhyAddr); 4779 - pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray, 4780 - tp->TxPhyAddr); 4772 + dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray, 4773 + tp->RxPhyAddr); 4774 + dma_free_coherent(&pdev->dev, R8169_TX_RING_BYTES, tp->TxDescArray, 4775 + tp->TxPhyAddr); 4781 4776 tp->TxDescArray = NULL; 4782 4777 tp->RxDescArray = NULL; 4783 4778
+17 -1
drivers/net/skge.c
··· 43 43 #include <linux/seq_file.h> 44 44 #include <linux/mii.h> 45 45 #include <linux/slab.h> 46 + #include <linux/dmi.h> 46 47 #include <asm/irq.h> 47 48 48 49 #include "skge.h" ··· 3869 3868 netif_info(skge, probe, skge->netdev, "addr %pM\n", dev->dev_addr); 3870 3869 } 3871 3870 3871 + static int only_32bit_dma; 3872 + 3872 3873 static int __devinit skge_probe(struct pci_dev *pdev, 3873 3874 const struct pci_device_id *ent) 3874 3875 { ··· 3892 3889 3893 3890 pci_set_master(pdev); 3894 3891 3895 - if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { 3892 + if (!only_32bit_dma && !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { 3896 3893 using_dac = 1; 3897 3894 err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); 3898 3895 } else if (!(err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))) { ··· 4150 4147 .shutdown = skge_shutdown, 4151 4148 }; 4152 4149 4150 + static struct dmi_system_id skge_32bit_dma_boards[] = { 4151 + { 4152 + .ident = "Gigabyte nForce boards", 4153 + .matches = { 4154 + DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co"), 4155 + DMI_MATCH(DMI_BOARD_NAME, "nForce"), 4156 + }, 4157 + }, 4158 + {} 4159 + }; 4160 + 4153 4161 static int __init skge_init_module(void) 4154 4162 { 4163 + if (dmi_check_system(skge_32bit_dma_boards)) 4164 + only_32bit_dma = 1; 4155 4165 skge_debug_init(); 4156 4166 return pci_register_driver(&skge_driver); 4157 4167 }
+4 -2
drivers/net/tg3.c
··· 4666 4666 desc_idx, *post_ptr); 4667 4667 drop_it_no_recycle: 4668 4668 /* Other statistics kept track of by card. */ 4669 - tp->net_stats.rx_dropped++; 4669 + tp->rx_dropped++; 4670 4670 goto next_pkt; 4671 4671 } 4672 4672 ··· 4726 4726 if (len > (tp->dev->mtu + ETH_HLEN) && 4727 4727 skb->protocol != htons(ETH_P_8021Q)) { 4728 4728 dev_kfree_skb(skb); 4729 - goto next_pkt; 4729 + goto drop_it_no_recycle; 4730 4730 } 4731 4731 4732 4732 if (desc->type_flags & RXD_FLAG_VLAN && ··· 9239 9239 9240 9240 stats->rx_missed_errors = old_stats->rx_missed_errors + 9241 9241 get_stat64(&hw_stats->rx_discards); 9242 + 9243 + stats->rx_dropped = tp->rx_dropped; 9242 9244 9243 9245 return stats; 9244 9246 }
+1 -1
drivers/net/tg3.h
··· 2759 2759 2760 2760 2761 2761 /* begin "everything else" cacheline(s) section */ 2762 - struct rtnl_link_stats64 net_stats; 2762 + unsigned long rx_dropped; 2763 2763 struct rtnl_link_stats64 net_stats_prev; 2764 2764 struct tg3_ethtool_stats estats; 2765 2765 struct tg3_ethtool_stats estats_prev;
+13 -13
drivers/net/wimax/i2400m/rx.c
··· 1244 1244 int i, result; 1245 1245 struct device *dev = i2400m_dev(i2400m); 1246 1246 const struct i2400m_msg_hdr *msg_hdr; 1247 - size_t pl_itr, pl_size, skb_len; 1247 + size_t pl_itr, pl_size; 1248 1248 unsigned long flags; 1249 - unsigned num_pls, single_last; 1249 + unsigned num_pls, single_last, skb_len; 1250 1250 1251 1251 skb_len = skb->len; 1252 - d_fnstart(4, dev, "(i2400m %p skb %p [size %zu])\n", 1252 + d_fnstart(4, dev, "(i2400m %p skb %p [size %u])\n", 1253 1253 i2400m, skb, skb_len); 1254 1254 result = -EIO; 1255 1255 msg_hdr = (void *) skb->data; 1256 - result = i2400m_rx_msg_hdr_check(i2400m, msg_hdr, skb->len); 1256 + result = i2400m_rx_msg_hdr_check(i2400m, msg_hdr, skb_len); 1257 1257 if (result < 0) 1258 1258 goto error_msg_hdr_check; 1259 1259 result = -EIO; ··· 1261 1261 pl_itr = sizeof(*msg_hdr) + /* Check payload descriptor(s) */ 1262 1262 num_pls * sizeof(msg_hdr->pld[0]); 1263 1263 pl_itr = ALIGN(pl_itr, I2400M_PL_ALIGN); 1264 - if (pl_itr > skb->len) { /* got all the payload descriptors? */ 1264 + if (pl_itr > skb_len) { /* got all the payload descriptors? */ 1265 1265 dev_err(dev, "RX: HW BUG? 
message too short (%u bytes) for " 1266 1266 "%u payload descriptors (%zu each, total %zu)\n", 1267 - skb->len, num_pls, sizeof(msg_hdr->pld[0]), pl_itr); 1267 + skb_len, num_pls, sizeof(msg_hdr->pld[0]), pl_itr); 1268 1268 goto error_pl_descr_short; 1269 1269 } 1270 1270 /* Walk each payload payload--check we really got it */ ··· 1272 1272 /* work around old gcc warnings */ 1273 1273 pl_size = i2400m_pld_size(&msg_hdr->pld[i]); 1274 1274 result = i2400m_rx_pl_descr_check(i2400m, &msg_hdr->pld[i], 1275 - pl_itr, skb->len); 1275 + pl_itr, skb_len); 1276 1276 if (result < 0) 1277 1277 goto error_pl_descr_check; 1278 1278 single_last = num_pls == 1 || i == num_pls - 1; ··· 1290 1290 if (i < i2400m->rx_pl_min) 1291 1291 i2400m->rx_pl_min = i; 1292 1292 i2400m->rx_num++; 1293 - i2400m->rx_size_acc += skb->len; 1294 - if (skb->len < i2400m->rx_size_min) 1295 - i2400m->rx_size_min = skb->len; 1296 - if (skb->len > i2400m->rx_size_max) 1297 - i2400m->rx_size_max = skb->len; 1293 + i2400m->rx_size_acc += skb_len; 1294 + if (skb_len < i2400m->rx_size_min) 1295 + i2400m->rx_size_min = skb_len; 1296 + if (skb_len > i2400m->rx_size_max) 1297 + i2400m->rx_size_max = skb_len; 1298 1298 spin_unlock_irqrestore(&i2400m->rx_lock, flags); 1299 1299 error_pl_descr_check: 1300 1300 error_pl_descr_short: 1301 1301 error_msg_hdr_check: 1302 - d_fnend(4, dev, "(i2400m %p skb %p [size %zu]) = %d\n", 1302 + d_fnend(4, dev, "(i2400m %p skb %p [size %u]) = %d\n", 1303 1303 i2400m, skb, skb_len, result); 1304 1304 return result; 1305 1305 }
+1 -1
drivers/net/wireless/ath/ath9k/ani.c
··· 543 543 if (conf_is_ht40(conf)) 544 544 return clockrate * 2; 545 545 546 - return clockrate * 2; 546 + return clockrate; 547 547 } 548 548 549 549 static int32_t ath9k_hw_ani_get_listen_time(struct ath_hw *ah)
+93 -35
drivers/platform/x86/intel_ips.c
··· 51 51 * TODO: 52 52 * - handle CPU hotplug 53 53 * - provide turbo enable/disable api 54 - * - make sure we can write turbo enable/disable reg based on MISC_EN 55 54 * 56 55 * Related documents: 57 56 * - CDI 403777, 403778 - Auburndale EDS vol 1 & 2 ··· 229 230 #define THM_TC2 0xac 230 231 #define THM_DTV 0xb0 231 232 #define THM_ITV 0xd8 232 - #define ITV_ME_SEQNO_MASK 0x000f0000 /* ME should update every ~200ms */ 233 + #define ITV_ME_SEQNO_MASK 0x00ff0000 /* ME should update every ~200ms */ 233 234 #define ITV_ME_SEQNO_SHIFT (16) 234 235 #define ITV_MCH_TEMP_MASK 0x0000ff00 235 236 #define ITV_MCH_TEMP_SHIFT (8) ··· 324 325 bool gpu_preferred; 325 326 bool poll_turbo_status; 326 327 bool second_cpu; 328 + bool turbo_toggle_allowed; 327 329 struct ips_mcp_limits *limits; 328 330 329 331 /* Optional MCH interfaces for if i915 is in use */ ··· 415 415 new_limit = cur_limit - 8; /* 1W decrease */ 416 416 417 417 /* Clamp to SKU TDP limit */ 418 - if (((new_limit * 10) / 8) < (ips->orig_turbo_limit & TURBO_TDP_MASK)) 418 + if (new_limit < (ips->orig_turbo_limit & TURBO_TDP_MASK)) 419 419 new_limit = ips->orig_turbo_limit & TURBO_TDP_MASK; 420 420 421 421 thm_writew(THM_MPCPC, (new_limit * 10) / 8); ··· 461 461 if (ips->__cpu_turbo_on) 462 462 return; 463 463 464 - on_each_cpu(do_enable_cpu_turbo, ips, 1); 464 + if (ips->turbo_toggle_allowed) 465 + on_each_cpu(do_enable_cpu_turbo, ips, 1); 465 466 466 467 ips->__cpu_turbo_on = true; 467 468 } ··· 499 498 if (!ips->__cpu_turbo_on) 500 499 return; 501 500 502 - on_each_cpu(do_disable_cpu_turbo, ips, 1); 501 + if (ips->turbo_toggle_allowed) 502 + on_each_cpu(do_disable_cpu_turbo, ips, 1); 503 503 504 504 ips->__cpu_turbo_on = false; 505 505 } ··· 600 598 { 601 599 unsigned long flags; 602 600 bool ret = false; 601 + u32 temp_limit; 602 + u32 avg_power; 603 + const char *msg = "MCP limit exceeded: "; 603 604 604 605 spin_lock_irqsave(&ips->turbo_status_lock, flags); 605 - if (ips->mcp_avg_temp > (ips->mcp_temp_limit 
* 100)) 606 - ret = true; 607 - if (ips->cpu_avg_power + ips->mch_avg_power > ips->mcp_power_limit) 608 - ret = true; 609 - spin_unlock_irqrestore(&ips->turbo_status_lock, flags); 610 606 611 - if (ret) 607 + temp_limit = ips->mcp_temp_limit * 100; 608 + if (ips->mcp_avg_temp > temp_limit) { 612 609 dev_info(&ips->dev->dev, 613 - "MCP power or thermal limit exceeded\n"); 610 + "%sAvg temp %u, limit %u\n", msg, ips->mcp_avg_temp, 611 + temp_limit); 612 + ret = true; 613 + } 614 + 615 + avg_power = ips->cpu_avg_power + ips->mch_avg_power; 616 + if (avg_power > ips->mcp_power_limit) { 617 + dev_info(&ips->dev->dev, 618 + "%sAvg power %u, limit %u\n", msg, avg_power, 619 + ips->mcp_power_limit); 620 + ret = true; 621 + } 622 + 623 + spin_unlock_irqrestore(&ips->turbo_status_lock, flags); 614 624 615 625 return ret; 616 626 } ··· 677 663 } 678 664 679 665 /** 666 + * verify_limits - verify BIOS provided limits 667 + * @ips: IPS structure 668 + * 669 + * BIOS can optionally provide non-default limits for power and temp. Check 670 + * them here and use the defaults if the BIOS values are not provided or 671 + * are otherwise unusable. 
672 + */ 673 + static void verify_limits(struct ips_driver *ips) 674 + { 675 + if (ips->mcp_power_limit < ips->limits->mcp_power_limit || 676 + ips->mcp_power_limit > 35000) 677 + ips->mcp_power_limit = ips->limits->mcp_power_limit; 678 + 679 + if (ips->mcp_temp_limit < ips->limits->core_temp_limit || 680 + ips->mcp_temp_limit < ips->limits->mch_temp_limit || 681 + ips->mcp_temp_limit > 150) 682 + ips->mcp_temp_limit = min(ips->limits->core_temp_limit, 683 + ips->limits->mch_temp_limit); 684 + } 685 + 686 + /** 680 687 * update_turbo_limits - get various limits & settings from regs 681 688 * @ips: IPS driver struct 682 689 * ··· 715 680 u32 hts = thm_readl(THM_HTS); 716 681 717 682 ips->cpu_turbo_enabled = !(hts & HTS_PCTD_DIS); 718 - ips->gpu_turbo_enabled = !(hts & HTS_GTD_DIS); 683 + /* 684 + * Disable turbo for now, until we can figure out why the power figures 685 + * are wrong 686 + */ 687 + ips->cpu_turbo_enabled = false; 688 + 689 + if (ips->gpu_busy) 690 + ips->gpu_turbo_enabled = !(hts & HTS_GTD_DIS); 691 + 719 692 ips->core_power_limit = thm_readw(THM_MPCPC); 720 693 ips->mch_power_limit = thm_readw(THM_MMGPC); 721 694 ips->mcp_temp_limit = thm_readw(THM_PTL); 722 695 ips->mcp_power_limit = thm_readw(THM_MPPC); 723 696 697 + verify_limits(ips); 724 698 /* Ignore BIOS CPU vs GPU pref */ 725 699 } 726 700 ··· 902 858 ret = (ret * 1000) / 65535; 903 859 *last = val; 904 860 905 - return ret; 861 + return 0; 906 862 } 907 863 908 864 static const u16 temp_decay_factor = 2; ··· 984 940 kfree(mch_samples); 985 941 kfree(cpu_samples); 986 942 kfree(mchp_samples); 987 - kthread_stop(ips->adjust); 988 943 return -ENOMEM; 989 944 } 990 945 ··· 991 948 ITV_ME_SEQNO_SHIFT; 992 949 seqno_timestamp = get_jiffies_64(); 993 950 994 - old_cpu_power = thm_readl(THM_CEC) / 65535; 951 + old_cpu_power = thm_readl(THM_CEC); 995 952 schedule_timeout_interruptible(msecs_to_jiffies(IPS_SAMPLE_PERIOD)); 996 953 997 954 /* Collect an initial average */ ··· 1193 1150 STS_GPL_SHIFT; 
1194 1151 /* ignore EC CPU vs GPU pref */ 1195 1152 ips->cpu_turbo_enabled = !(sts & STS_PCTD_DIS); 1196 - ips->gpu_turbo_enabled = !(sts & STS_GTD_DIS); 1153 + /* 1154 + * Disable turbo for now, until we can figure 1155 + * out why the power figures are wrong 1156 + */ 1157 + ips->cpu_turbo_enabled = false; 1158 + if (ips->gpu_busy) 1159 + ips->gpu_turbo_enabled = !(sts & STS_GTD_DIS); 1197 1160 ips->mcp_temp_limit = (sts & STS_PTL_MASK) >> 1198 1161 STS_PTL_SHIFT; 1199 1162 ips->mcp_power_limit = (tc1 & STS_PPL_MASK) >> 1200 1163 STS_PPL_SHIFT; 1164 + verify_limits(ips); 1201 1165 spin_unlock(&ips->turbo_status_lock); 1202 1166 1203 1167 thm_writeb(THM_SEC, SEC_ACK); ··· 1383 1333 * turbo manually or we'll get an illegal MSR access, even though 1384 1334 * turbo will still be available. 1385 1335 */ 1386 - if (!(misc_en & IA32_MISC_TURBO_EN)) 1387 - ; /* add turbo MSR write allowed flag if necessary */ 1336 + if (misc_en & IA32_MISC_TURBO_EN) 1337 + ips->turbo_toggle_allowed = true; 1338 + else 1339 + ips->turbo_toggle_allowed = false; 1388 1340 1389 1341 if (strstr(boot_cpu_data.x86_model_id, "CPU M")) 1390 1342 limits = &ips_sv_limits; ··· 1403 1351 tdp = turbo_power & TURBO_TDP_MASK; 1404 1352 1405 1353 /* Sanity check TDP against CPU */ 1406 - if (limits->mcp_power_limit != (tdp / 8) * 1000) { 1407 - dev_warn(&ips->dev->dev, "Warning: CPU TDP doesn't match expected value (found %d, expected %d)\n", 1408 - tdp / 8, limits->mcp_power_limit / 1000); 1354 + if (limits->core_power_limit != (tdp / 8) * 1000) { 1355 + dev_info(&ips->dev->dev, "CPU TDP doesn't match expected value (found %d, expected %d)\n", 1356 + tdp / 8, limits->core_power_limit / 1000); 1357 + limits->core_power_limit = (tdp / 8) * 1000; 1409 1358 } 1410 1359 1411 1360 out: ··· 1443 1390 return true; 1444 1391 1445 1392 out_put_busy: 1446 - symbol_put(i915_gpu_turbo_disable); 1393 + symbol_put(i915_gpu_busy); 1447 1394 out_put_lower: 1448 1395 symbol_put(i915_gpu_lower); 1449 1396 out_put_raise: 
··· 1585 1532 /* Save turbo limits & ratios */ 1586 1533 rdmsrl(TURBO_POWER_CURRENT_LIMIT, ips->orig_turbo_limit); 1587 1534 1588 - ips_enable_cpu_turbo(ips); 1589 - ips->cpu_turbo_enabled = true; 1535 + ips_disable_cpu_turbo(ips); 1536 + ips->cpu_turbo_enabled = false; 1590 1537 1591 - /* Set up the work queue and monitor/adjust threads */ 1592 - ips->monitor = kthread_run(ips_monitor, ips, "ips-monitor"); 1593 - if (IS_ERR(ips->monitor)) { 1594 - dev_err(&dev->dev, 1595 - "failed to create thermal monitor thread, aborting\n"); 1596 - ret = -ENOMEM; 1597 - goto error_free_irq; 1598 - } 1599 - 1538 + /* Create thermal adjust thread */ 1600 1539 ips->adjust = kthread_create(ips_adjust, ips, "ips-adjust"); 1601 1540 if (IS_ERR(ips->adjust)) { 1602 1541 dev_err(&dev->dev, 1603 1542 "failed to create thermal adjust thread, aborting\n"); 1543 + ret = -ENOMEM; 1544 + goto error_free_irq; 1545 + 1546 + } 1547 + 1548 + /* 1549 + * Set up the work queue and monitor thread. The monitor thread 1550 + * will wake up ips_adjust thread. 1551 + */ 1552 + ips->monitor = kthread_run(ips_monitor, ips, "ips-monitor"); 1553 + if (IS_ERR(ips->monitor)) { 1554 + dev_err(&dev->dev, 1555 + "failed to create thermal monitor thread, aborting\n"); 1604 1556 ret = -ENOMEM; 1605 1557 goto error_thread_cleanup; 1606 1558 } ··· 1624 1566 return ret; 1625 1567 1626 1568 error_thread_cleanup: 1627 - kthread_stop(ips->monitor); 1569 + kthread_stop(ips->adjust); 1628 1570 error_free_irq: 1629 1571 free_irq(ips->dev->irq, ips); 1630 1572 error_unmap:
-1
drivers/regulator/ad5398.c
··· 256 256 257 257 regulator_unregister(chip->rdev); 258 258 kfree(chip); 259 - i2c_set_clientdata(client, NULL); 260 259 261 260 return 0; 262 261 }
-2
drivers/regulator/isl6271a-regulator.c
··· 191 191 struct isl_pmic *pmic = i2c_get_clientdata(i2c); 192 192 int i; 193 193 194 - i2c_set_clientdata(i2c, NULL); 195 - 196 194 for (i = 0; i < 3; i++) 197 195 regulator_unregister(pmic->rdev[i]); 198 196
-2
drivers/rtc/rtc-ds3232.c
··· 268 268 free_irq(client->irq, client); 269 269 270 270 out_free: 271 - i2c_set_clientdata(client, NULL); 272 271 kfree(ds3232); 273 272 return ret; 274 273 } ··· 286 287 } 287 288 288 289 rtc_device_unregister(ds3232->rtc); 289 - i2c_set_clientdata(client, NULL); 290 290 kfree(ds3232); 291 291 return 0; 292 292 }
+453 -423
drivers/spi/spi_bfin5xx.c
··· 1 1 /* 2 2 * Blackfin On-Chip SPI Driver 3 3 * 4 - * Copyright 2004-2007 Analog Devices Inc. 4 + * Copyright 2004-2010 Analog Devices Inc. 5 5 * 6 6 * Enter bugs at http://blackfin.uclinux.org/ 7 7 * ··· 41 41 #define RUNNING_STATE ((void *)1) 42 42 #define DONE_STATE ((void *)2) 43 43 #define ERROR_STATE ((void *)-1) 44 - #define QUEUE_RUNNING 0 45 - #define QUEUE_STOPPED 1 46 44 47 - /* Value to send if no TX value is supplied */ 48 - #define SPI_IDLE_TXVAL 0x0000 45 + struct bfin_spi_master_data; 49 46 50 - struct driver_data { 47 + struct bfin_spi_transfer_ops { 48 + void (*write) (struct bfin_spi_master_data *); 49 + void (*read) (struct bfin_spi_master_data *); 50 + void (*duplex) (struct bfin_spi_master_data *); 51 + }; 52 + 53 + struct bfin_spi_master_data { 51 54 /* Driver model hookup */ 52 55 struct platform_device *pdev; 53 56 ··· 72 69 spinlock_t lock; 73 70 struct list_head queue; 74 71 int busy; 75 - int run; 72 + bool running; 76 73 77 74 /* Message Transfer pump */ 78 75 struct tasklet_struct pump_transfers; ··· 80 77 /* Current message transfer state info */ 81 78 struct spi_message *cur_msg; 82 79 struct spi_transfer *cur_transfer; 83 - struct chip_data *cur_chip; 80 + struct bfin_spi_slave_data *cur_chip; 84 81 size_t len_in_bytes; 85 82 size_t len; 86 83 void *tx; ··· 95 92 dma_addr_t rx_dma; 96 93 dma_addr_t tx_dma; 97 94 95 + int irq_requested; 96 + int spi_irq; 97 + 98 98 size_t rx_map_len; 99 99 size_t tx_map_len; 100 100 u8 n_bytes; 101 + u16 ctrl_reg; 102 + u16 flag_reg; 103 + 101 104 int cs_change; 102 - void (*write) (struct driver_data *); 103 - void (*read) (struct driver_data *); 104 - void (*duplex) (struct driver_data *); 105 + const struct bfin_spi_transfer_ops *ops; 105 106 }; 106 107 107 - struct chip_data { 108 + struct bfin_spi_slave_data { 108 109 u16 ctl_reg; 109 110 u16 baud; 110 111 u16 flag; 111 112 112 113 u8 chip_select_num; 113 - u8 n_bytes; 114 - u8 width; /* 0 or 1 */ 115 114 u8 enable_dma; 116 - u8 
bits_per_word; /* 8 or 16 */ 117 - u8 cs_change_per_word; 118 115 u16 cs_chg_udelay; /* Some devices require > 255usec delay */ 119 116 u32 cs_gpio; 120 117 u16 idle_tx_val; 121 - void (*write) (struct driver_data *); 122 - void (*read) (struct driver_data *); 123 - void (*duplex) (struct driver_data *); 118 + u8 pio_interrupt; /* use spi data irq */ 119 + const struct bfin_spi_transfer_ops *ops; 124 120 }; 125 121 126 122 #define DEFINE_SPI_REG(reg, off) \ 127 - static inline u16 read_##reg(struct driver_data *drv_data) \ 123 + static inline u16 read_##reg(struct bfin_spi_master_data *drv_data) \ 128 124 { return bfin_read16(drv_data->regs_base + off); } \ 129 - static inline void write_##reg(struct driver_data *drv_data, u16 v) \ 125 + static inline void write_##reg(struct bfin_spi_master_data *drv_data, u16 v) \ 130 126 { bfin_write16(drv_data->regs_base + off, v); } 131 127 132 128 DEFINE_SPI_REG(CTRL, 0x00) ··· 136 134 DEFINE_SPI_REG(BAUD, 0x14) 137 135 DEFINE_SPI_REG(SHAW, 0x18) 138 136 139 - static void bfin_spi_enable(struct driver_data *drv_data) 137 + static void bfin_spi_enable(struct bfin_spi_master_data *drv_data) 140 138 { 141 139 u16 cr; 142 140 ··· 144 142 write_CTRL(drv_data, (cr | BIT_CTL_ENABLE)); 145 143 } 146 144 147 - static void bfin_spi_disable(struct driver_data *drv_data) 145 + static void bfin_spi_disable(struct bfin_spi_master_data *drv_data) 148 146 { 149 147 u16 cr; 150 148 ··· 167 165 return spi_baud; 168 166 } 169 167 170 - static int bfin_spi_flush(struct driver_data *drv_data) 168 + static int bfin_spi_flush(struct bfin_spi_master_data *drv_data) 171 169 { 172 170 unsigned long limit = loops_per_jiffy << 1; 173 171 ··· 181 179 } 182 180 183 181 /* Chip select operation functions for cs_change flag */ 184 - static void bfin_spi_cs_active(struct driver_data *drv_data, struct chip_data *chip) 182 + static void bfin_spi_cs_active(struct bfin_spi_master_data *drv_data, struct bfin_spi_slave_data *chip) 185 183 { 186 - if 
(likely(chip->chip_select_num)) { 184 + if (likely(chip->chip_select_num < MAX_CTRL_CS)) { 187 185 u16 flag = read_FLAG(drv_data); 188 186 189 - flag |= chip->flag; 190 - flag &= ~(chip->flag << 8); 187 + flag &= ~chip->flag; 191 188 192 189 write_FLAG(drv_data, flag); 193 190 } else { ··· 194 193 } 195 194 } 196 195 197 - static void bfin_spi_cs_deactive(struct driver_data *drv_data, struct chip_data *chip) 196 + static void bfin_spi_cs_deactive(struct bfin_spi_master_data *drv_data, 197 + struct bfin_spi_slave_data *chip) 198 198 { 199 - if (likely(chip->chip_select_num)) { 199 + if (likely(chip->chip_select_num < MAX_CTRL_CS)) { 200 200 u16 flag = read_FLAG(drv_data); 201 201 202 - flag &= ~chip->flag; 203 - flag |= (chip->flag << 8); 202 + flag |= chip->flag; 204 203 205 204 write_FLAG(drv_data, flag); 206 205 } else { ··· 212 211 udelay(chip->cs_chg_udelay); 213 212 } 214 213 215 - /* stop controller and re-config current chip*/ 216 - static void bfin_spi_restore_state(struct driver_data *drv_data) 214 + /* enable or disable the pin muxed by GPIO and SPI CS to work as SPI CS */ 215 + static inline void bfin_spi_cs_enable(struct bfin_spi_master_data *drv_data, 216 + struct bfin_spi_slave_data *chip) 217 217 { 218 - struct chip_data *chip = drv_data->cur_chip; 218 + if (chip->chip_select_num < MAX_CTRL_CS) { 219 + u16 flag = read_FLAG(drv_data); 220 + 221 + flag |= (chip->flag >> 8); 222 + 223 + write_FLAG(drv_data, flag); 224 + } 225 + } 226 + 227 + static inline void bfin_spi_cs_disable(struct bfin_spi_master_data *drv_data, 228 + struct bfin_spi_slave_data *chip) 229 + { 230 + if (chip->chip_select_num < MAX_CTRL_CS) { 231 + u16 flag = read_FLAG(drv_data); 232 + 233 + flag &= ~(chip->flag >> 8); 234 + 235 + write_FLAG(drv_data, flag); 236 + } 237 + } 238 + 239 + /* stop controller and re-config current chip*/ 240 + static void bfin_spi_restore_state(struct bfin_spi_master_data *drv_data) 241 + { 242 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 219 
243 220 244 /* Clear status and disable clock */ 221 245 write_STAT(drv_data, BIT_STAT_CLR); 222 246 bfin_spi_disable(drv_data); 223 247 dev_dbg(&drv_data->pdev->dev, "restoring spi ctl state\n"); 248 + 249 + SSYNC(); 224 250 225 251 /* Load the registers */ 226 252 write_CTRL(drv_data, chip->ctl_reg); ··· 258 230 } 259 231 260 232 /* used to kick off transfer in rx mode and read unwanted RX data */ 261 - static inline void bfin_spi_dummy_read(struct driver_data *drv_data) 233 + static inline void bfin_spi_dummy_read(struct bfin_spi_master_data *drv_data) 262 234 { 263 235 (void) read_RDBR(drv_data); 264 236 } 265 237 266 - static void bfin_spi_null_writer(struct driver_data *drv_data) 267 - { 268 - u8 n_bytes = drv_data->n_bytes; 269 - u16 tx_val = drv_data->cur_chip->idle_tx_val; 270 - 271 - /* clear RXS (we check for RXS inside the loop) */ 272 - bfin_spi_dummy_read(drv_data); 273 - 274 - while (drv_data->tx < drv_data->tx_end) { 275 - write_TDBR(drv_data, tx_val); 276 - drv_data->tx += n_bytes; 277 - /* wait until transfer finished. 
278 - checking SPIF or TXS may not guarantee transfer completion */ 279 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 280 - cpu_relax(); 281 - /* discard RX data and clear RXS */ 282 - bfin_spi_dummy_read(drv_data); 283 - } 284 - } 285 - 286 - static void bfin_spi_null_reader(struct driver_data *drv_data) 287 - { 288 - u8 n_bytes = drv_data->n_bytes; 289 - u16 tx_val = drv_data->cur_chip->idle_tx_val; 290 - 291 - /* discard old RX data and clear RXS */ 292 - bfin_spi_dummy_read(drv_data); 293 - 294 - while (drv_data->rx < drv_data->rx_end) { 295 - write_TDBR(drv_data, tx_val); 296 - drv_data->rx += n_bytes; 297 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 298 - cpu_relax(); 299 - bfin_spi_dummy_read(drv_data); 300 - } 301 - } 302 - 303 - static void bfin_spi_u8_writer(struct driver_data *drv_data) 238 + static void bfin_spi_u8_writer(struct bfin_spi_master_data *drv_data) 304 239 { 305 240 /* clear RXS (we check for RXS inside the loop) */ 306 241 bfin_spi_dummy_read(drv_data); ··· 279 288 } 280 289 } 281 290 282 - static void bfin_spi_u8_cs_chg_writer(struct driver_data *drv_data) 283 - { 284 - struct chip_data *chip = drv_data->cur_chip; 285 - 286 - /* clear RXS (we check for RXS inside the loop) */ 287 - bfin_spi_dummy_read(drv_data); 288 - 289 - while (drv_data->tx < drv_data->tx_end) { 290 - bfin_spi_cs_active(drv_data, chip); 291 - write_TDBR(drv_data, (*(u8 *) (drv_data->tx++))); 292 - /* make sure transfer finished before deactiving CS */ 293 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 294 - cpu_relax(); 295 - bfin_spi_dummy_read(drv_data); 296 - bfin_spi_cs_deactive(drv_data, chip); 297 - } 298 - } 299 - 300 - static void bfin_spi_u8_reader(struct driver_data *drv_data) 291 + static void bfin_spi_u8_reader(struct bfin_spi_master_data *drv_data) 301 292 { 302 293 u16 tx_val = drv_data->cur_chip->idle_tx_val; 303 294 ··· 294 321 } 295 322 } 296 323 297 - static void bfin_spi_u8_cs_chg_reader(struct driver_data *drv_data) 298 - { 299 - struct 
chip_data *chip = drv_data->cur_chip; 300 - u16 tx_val = chip->idle_tx_val; 301 - 302 - /* discard old RX data and clear RXS */ 303 - bfin_spi_dummy_read(drv_data); 304 - 305 - while (drv_data->rx < drv_data->rx_end) { 306 - bfin_spi_cs_active(drv_data, chip); 307 - write_TDBR(drv_data, tx_val); 308 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 309 - cpu_relax(); 310 - *(u8 *) (drv_data->rx++) = read_RDBR(drv_data); 311 - bfin_spi_cs_deactive(drv_data, chip); 312 - } 313 - } 314 - 315 - static void bfin_spi_u8_duplex(struct driver_data *drv_data) 324 + static void bfin_spi_u8_duplex(struct bfin_spi_master_data *drv_data) 316 325 { 317 326 /* discard old RX data and clear RXS */ 318 327 bfin_spi_dummy_read(drv_data); ··· 307 352 } 308 353 } 309 354 310 - static void bfin_spi_u8_cs_chg_duplex(struct driver_data *drv_data) 311 - { 312 - struct chip_data *chip = drv_data->cur_chip; 355 + static const struct bfin_spi_transfer_ops bfin_bfin_spi_transfer_ops_u8 = { 356 + .write = bfin_spi_u8_writer, 357 + .read = bfin_spi_u8_reader, 358 + .duplex = bfin_spi_u8_duplex, 359 + }; 313 360 314 - /* discard old RX data and clear RXS */ 315 - bfin_spi_dummy_read(drv_data); 316 - 317 - while (drv_data->rx < drv_data->rx_end) { 318 - bfin_spi_cs_active(drv_data, chip); 319 - write_TDBR(drv_data, (*(u8 *) (drv_data->tx++))); 320 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 321 - cpu_relax(); 322 - *(u8 *) (drv_data->rx++) = read_RDBR(drv_data); 323 - bfin_spi_cs_deactive(drv_data, chip); 324 - } 325 - } 326 - 327 - static void bfin_spi_u16_writer(struct driver_data *drv_data) 361 + static void bfin_spi_u16_writer(struct bfin_spi_master_data *drv_data) 328 362 { 329 363 /* clear RXS (we check for RXS inside the loop) */ 330 364 bfin_spi_dummy_read(drv_data); ··· 330 386 } 331 387 } 332 388 333 - static void bfin_spi_u16_cs_chg_writer(struct driver_data *drv_data) 334 - { 335 - struct chip_data *chip = drv_data->cur_chip; 336 - 337 - /* clear RXS (we check for RXS inside the 
loop) */ 338 - bfin_spi_dummy_read(drv_data); 339 - 340 - while (drv_data->tx < drv_data->tx_end) { 341 - bfin_spi_cs_active(drv_data, chip); 342 - write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 343 - drv_data->tx += 2; 344 - /* make sure transfer finished before deactiving CS */ 345 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 346 - cpu_relax(); 347 - bfin_spi_dummy_read(drv_data); 348 - bfin_spi_cs_deactive(drv_data, chip); 349 - } 350 - } 351 - 352 - static void bfin_spi_u16_reader(struct driver_data *drv_data) 389 + static void bfin_spi_u16_reader(struct bfin_spi_master_data *drv_data) 353 390 { 354 391 u16 tx_val = drv_data->cur_chip->idle_tx_val; 355 392 ··· 346 421 } 347 422 } 348 423 349 - static void bfin_spi_u16_cs_chg_reader(struct driver_data *drv_data) 350 - { 351 - struct chip_data *chip = drv_data->cur_chip; 352 - u16 tx_val = chip->idle_tx_val; 353 - 354 - /* discard old RX data and clear RXS */ 355 - bfin_spi_dummy_read(drv_data); 356 - 357 - while (drv_data->rx < drv_data->rx_end) { 358 - bfin_spi_cs_active(drv_data, chip); 359 - write_TDBR(drv_data, tx_val); 360 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 361 - cpu_relax(); 362 - *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 363 - drv_data->rx += 2; 364 - bfin_spi_cs_deactive(drv_data, chip); 365 - } 366 - } 367 - 368 - static void bfin_spi_u16_duplex(struct driver_data *drv_data) 424 + static void bfin_spi_u16_duplex(struct bfin_spi_master_data *drv_data) 369 425 { 370 426 /* discard old RX data and clear RXS */ 371 427 bfin_spi_dummy_read(drv_data); ··· 361 455 } 362 456 } 363 457 364 - static void bfin_spi_u16_cs_chg_duplex(struct driver_data *drv_data) 365 - { 366 - struct chip_data *chip = drv_data->cur_chip; 458 + static const struct bfin_spi_transfer_ops bfin_bfin_spi_transfer_ops_u16 = { 459 + .write = bfin_spi_u16_writer, 460 + .read = bfin_spi_u16_reader, 461 + .duplex = bfin_spi_u16_duplex, 462 + }; 367 463 368 - /* discard old RX data and clear RXS */ 369 - 
bfin_spi_dummy_read(drv_data); 370 - 371 - while (drv_data->rx < drv_data->rx_end) { 372 - bfin_spi_cs_active(drv_data, chip); 373 - write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 374 - drv_data->tx += 2; 375 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 376 - cpu_relax(); 377 - *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 378 - drv_data->rx += 2; 379 - bfin_spi_cs_deactive(drv_data, chip); 380 - } 381 - } 382 - 383 - /* test if ther is more transfer to be done */ 384 - static void *bfin_spi_next_transfer(struct driver_data *drv_data) 464 + /* test if there is more transfer to be done */ 465 + static void *bfin_spi_next_transfer(struct bfin_spi_master_data *drv_data) 385 466 { 386 467 struct spi_message *msg = drv_data->cur_msg; 387 468 struct spi_transfer *trans = drv_data->cur_transfer; ··· 387 494 * caller already set message->status; 388 495 * dma and pio irqs are blocked give finished message back 389 496 */ 390 - static void bfin_spi_giveback(struct driver_data *drv_data) 497 + static void bfin_spi_giveback(struct bfin_spi_master_data *drv_data) 391 498 { 392 - struct chip_data *chip = drv_data->cur_chip; 499 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 393 500 struct spi_transfer *last_transfer; 394 501 unsigned long flags; 395 502 struct spi_message *msg; ··· 418 525 msg->complete(msg->context); 419 526 } 420 527 528 + /* spi data irq handler */ 529 + static irqreturn_t bfin_spi_pio_irq_handler(int irq, void *dev_id) 530 + { 531 + struct bfin_spi_master_data *drv_data = dev_id; 532 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 533 + struct spi_message *msg = drv_data->cur_msg; 534 + int n_bytes = drv_data->n_bytes; 535 + 536 + /* wait until transfer finished. 
*/ 537 + while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 538 + cpu_relax(); 539 + 540 + if ((drv_data->tx && drv_data->tx >= drv_data->tx_end) || 541 + (drv_data->rx && drv_data->rx >= (drv_data->rx_end - n_bytes))) { 542 + /* last read */ 543 + if (drv_data->rx) { 544 + dev_dbg(&drv_data->pdev->dev, "last read\n"); 545 + if (n_bytes == 2) 546 + *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 547 + else if (n_bytes == 1) 548 + *(u8 *) (drv_data->rx) = read_RDBR(drv_data); 549 + drv_data->rx += n_bytes; 550 + } 551 + 552 + msg->actual_length += drv_data->len_in_bytes; 553 + if (drv_data->cs_change) 554 + bfin_spi_cs_deactive(drv_data, chip); 555 + /* Move to next transfer */ 556 + msg->state = bfin_spi_next_transfer(drv_data); 557 + 558 + disable_irq_nosync(drv_data->spi_irq); 559 + 560 + /* Schedule transfer tasklet */ 561 + tasklet_schedule(&drv_data->pump_transfers); 562 + return IRQ_HANDLED; 563 + } 564 + 565 + if (drv_data->rx && drv_data->tx) { 566 + /* duplex */ 567 + dev_dbg(&drv_data->pdev->dev, "duplex: write_TDBR\n"); 568 + if (drv_data->n_bytes == 2) { 569 + *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 570 + write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 571 + } else if (drv_data->n_bytes == 1) { 572 + *(u8 *) (drv_data->rx) = read_RDBR(drv_data); 573 + write_TDBR(drv_data, (*(u8 *) (drv_data->tx))); 574 + } 575 + } else if (drv_data->rx) { 576 + /* read */ 577 + dev_dbg(&drv_data->pdev->dev, "read: write_TDBR\n"); 578 + if (drv_data->n_bytes == 2) 579 + *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 580 + else if (drv_data->n_bytes == 1) 581 + *(u8 *) (drv_data->rx) = read_RDBR(drv_data); 582 + write_TDBR(drv_data, chip->idle_tx_val); 583 + } else if (drv_data->tx) { 584 + /* write */ 585 + dev_dbg(&drv_data->pdev->dev, "write: write_TDBR\n"); 586 + bfin_spi_dummy_read(drv_data); 587 + if (drv_data->n_bytes == 2) 588 + write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 589 + else if (drv_data->n_bytes == 1) 590 + write_TDBR(drv_data, (*(u8 *) 
(drv_data->tx))); 591 + } 592 + 593 + if (drv_data->tx) 594 + drv_data->tx += n_bytes; 595 + if (drv_data->rx) 596 + drv_data->rx += n_bytes; 597 + 598 + return IRQ_HANDLED; 599 + } 600 + 421 601 static irqreturn_t bfin_spi_dma_irq_handler(int irq, void *dev_id) 422 602 { 423 - struct driver_data *drv_data = dev_id; 424 - struct chip_data *chip = drv_data->cur_chip; 603 + struct bfin_spi_master_data *drv_data = dev_id; 604 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 425 605 struct spi_message *msg = drv_data->cur_msg; 426 606 unsigned long timeout; 427 607 unsigned short dmastat = get_dma_curr_irqstat(drv_data->dma_channel); ··· 506 540 507 541 clear_dma_irqstat(drv_data->dma_channel); 508 542 509 - /* Wait for DMA to complete */ 510 - while (get_dma_curr_irqstat(drv_data->dma_channel) & DMA_RUN) 511 - cpu_relax(); 512 - 513 543 /* 514 544 * wait for the last transaction shifted out. HRM states: 515 545 * at this point there may still be data in the SPI DMA FIFO waiting ··· 513 551 * register until it goes low for 2 successive reads 514 552 */ 515 553 if (drv_data->tx != NULL) { 516 - while ((read_STAT(drv_data) & TXS) || 517 - (read_STAT(drv_data) & TXS)) 554 + while ((read_STAT(drv_data) & BIT_STAT_TXS) || 555 + (read_STAT(drv_data) & BIT_STAT_TXS)) 518 556 cpu_relax(); 519 557 } 520 558 ··· 523 561 dmastat, read_STAT(drv_data)); 524 562 525 563 timeout = jiffies + HZ; 526 - while (!(read_STAT(drv_data) & SPIF)) 564 + while (!(read_STAT(drv_data) & BIT_STAT_SPIF)) 527 565 if (!time_before(jiffies, timeout)) { 528 566 dev_warn(&drv_data->pdev->dev, "timeout waiting for SPIF"); 529 567 break; 530 568 } else 531 569 cpu_relax(); 532 570 533 - if ((dmastat & DMA_ERR) && (spistat & RBSY)) { 571 + if ((dmastat & DMA_ERR) && (spistat & BIT_STAT_RBSY)) { 534 572 msg->state = ERROR_STATE; 535 573 dev_err(&drv_data->pdev->dev, "dma receive: fifo/buffer overflow\n"); 536 574 } else { ··· 550 588 dev_dbg(&drv_data->pdev->dev, 551 589 "disable dma channel 
irq%d\n", 552 590 drv_data->dma_channel); 553 - dma_disable_irq(drv_data->dma_channel); 591 + dma_disable_irq_nosync(drv_data->dma_channel); 554 592 555 593 return IRQ_HANDLED; 556 594 } 557 595 558 596 static void bfin_spi_pump_transfers(unsigned long data) 559 597 { 560 - struct driver_data *drv_data = (struct driver_data *)data; 598 + struct bfin_spi_master_data *drv_data = (struct bfin_spi_master_data *)data; 561 599 struct spi_message *message = NULL; 562 600 struct spi_transfer *transfer = NULL; 563 601 struct spi_transfer *previous = NULL; 564 - struct chip_data *chip = NULL; 565 - u8 width; 566 - u16 cr, dma_width, dma_config; 602 + struct bfin_spi_slave_data *chip = NULL; 603 + unsigned int bits_per_word; 604 + u16 cr, cr_width, dma_width, dma_config; 567 605 u32 tranf_success = 1; 568 606 u8 full_duplex = 0; 569 607 ··· 601 639 udelay(previous->delay_usecs); 602 640 } 603 641 604 - /* Setup the transfer state based on the type of transfer */ 642 + /* Flush any existing transfers that may be sitting in the hardware */ 605 643 if (bfin_spi_flush(drv_data) == 0) { 606 644 dev_err(&drv_data->pdev->dev, "pump_transfers: flush failed\n"); 607 645 message->status = -EIO; ··· 641 679 drv_data->cs_change = transfer->cs_change; 642 680 643 681 /* Bits per word setup */ 644 - switch (transfer->bits_per_word) { 645 - case 8: 682 + bits_per_word = transfer->bits_per_word ? : message->spi->bits_per_word; 683 + if (bits_per_word == 8) { 646 684 drv_data->n_bytes = 1; 647 - width = CFG_SPI_WORDSIZE8; 648 - drv_data->read = chip->cs_change_per_word ? 649 - bfin_spi_u8_cs_chg_reader : bfin_spi_u8_reader; 650 - drv_data->write = chip->cs_change_per_word ? 651 - bfin_spi_u8_cs_chg_writer : bfin_spi_u8_writer; 652 - drv_data->duplex = chip->cs_change_per_word ? 
653 - bfin_spi_u8_cs_chg_duplex : bfin_spi_u8_duplex; 654 - break; 655 - 656 - case 16: 685 + drv_data->len = transfer->len; 686 + cr_width = 0; 687 + drv_data->ops = &bfin_bfin_spi_transfer_ops_u8; 688 + } else if (bits_per_word == 16) { 657 689 drv_data->n_bytes = 2; 658 - width = CFG_SPI_WORDSIZE16; 659 - drv_data->read = chip->cs_change_per_word ? 660 - bfin_spi_u16_cs_chg_reader : bfin_spi_u16_reader; 661 - drv_data->write = chip->cs_change_per_word ? 662 - bfin_spi_u16_cs_chg_writer : bfin_spi_u16_writer; 663 - drv_data->duplex = chip->cs_change_per_word ? 664 - bfin_spi_u16_cs_chg_duplex : bfin_spi_u16_duplex; 665 - break; 666 - 667 - default: 668 - /* No change, the same as default setting */ 669 - drv_data->n_bytes = chip->n_bytes; 670 - width = chip->width; 671 - drv_data->write = drv_data->tx ? chip->write : bfin_spi_null_writer; 672 - drv_data->read = drv_data->rx ? chip->read : bfin_spi_null_reader; 673 - drv_data->duplex = chip->duplex ? chip->duplex : bfin_spi_null_writer; 674 - break; 690 + drv_data->len = (transfer->len) >> 1; 691 + cr_width = BIT_CTL_WORDSIZE; 692 + drv_data->ops = &bfin_bfin_spi_transfer_ops_u16; 693 + } else { 694 + dev_err(&drv_data->pdev->dev, "transfer: unsupported bits_per_word\n"); 695 + message->status = -EINVAL; 696 + bfin_spi_giveback(drv_data); 697 + return; 675 698 } 676 - cr = (read_CTRL(drv_data) & (~BIT_CTL_TIMOD)); 677 - cr |= (width << 8); 699 + cr = read_CTRL(drv_data) & ~(BIT_CTL_TIMOD | BIT_CTL_WORDSIZE); 700 + cr |= cr_width; 678 701 write_CTRL(drv_data, cr); 679 702 680 - if (width == CFG_SPI_WORDSIZE16) { 681 - drv_data->len = (transfer->len) >> 1; 682 - } else { 683 - drv_data->len = transfer->len; 684 - } 685 703 dev_dbg(&drv_data->pdev->dev, 686 - "transfer: drv_data->write is %p, chip->write is %p, null_wr is %p\n", 687 - drv_data->write, chip->write, bfin_spi_null_writer); 704 + "transfer: drv_data->ops is %p, chip->ops is %p, u8_ops is %p\n", 705 + drv_data->ops, chip->ops, 
&bfin_bfin_spi_transfer_ops_u8); 688 706 689 - /* speed and width has been set on per message */ 690 707 message->state = RUNNING_STATE; 691 708 dma_config = 0; 692 709 ··· 676 735 write_BAUD(drv_data, chip->baud); 677 736 678 737 write_STAT(drv_data, BIT_STAT_CLR); 679 - cr = (read_CTRL(drv_data) & (~BIT_CTL_TIMOD)); 680 - if (drv_data->cs_change) 681 - bfin_spi_cs_active(drv_data, chip); 738 + bfin_spi_cs_active(drv_data, chip); 682 739 683 740 dev_dbg(&drv_data->pdev->dev, 684 741 "now pumping a transfer: width is %d, len is %d\n", 685 - width, transfer->len); 742 + cr_width, transfer->len); 686 743 687 744 /* 688 745 * Try to map dma buffer and do a dma transfer. If successful use, ··· 699 760 /* config dma channel */ 700 761 dev_dbg(&drv_data->pdev->dev, "doing dma transfer\n"); 701 762 set_dma_x_count(drv_data->dma_channel, drv_data->len); 702 - if (width == CFG_SPI_WORDSIZE16) { 763 + if (cr_width == BIT_CTL_WORDSIZE) { 703 764 set_dma_x_modify(drv_data->dma_channel, 2); 704 765 dma_width = WDSIZE_16; 705 766 } else { ··· 785 846 dma_enable_irq(drv_data->dma_channel); 786 847 local_irq_restore(flags); 787 848 788 - } else { 789 - /* IO mode write then read */ 790 - dev_dbg(&drv_data->pdev->dev, "doing IO transfer\n"); 791 - 792 - /* we always use SPI_WRITE mode. SPI_READ mode 793 - seems to have problems with setting up the 794 - output value in TDBR prior to the transfer. 
*/ 795 - write_CTRL(drv_data, (cr | CFG_SPI_WRITE)); 796 - 797 - if (full_duplex) { 798 - /* full duplex mode */ 799 - BUG_ON((drv_data->tx_end - drv_data->tx) != 800 - (drv_data->rx_end - drv_data->rx)); 801 - dev_dbg(&drv_data->pdev->dev, 802 - "IO duplex: cr is 0x%x\n", cr); 803 - 804 - drv_data->duplex(drv_data); 805 - 806 - if (drv_data->tx != drv_data->tx_end) 807 - tranf_success = 0; 808 - } else if (drv_data->tx != NULL) { 809 - /* write only half duplex */ 810 - dev_dbg(&drv_data->pdev->dev, 811 - "IO write: cr is 0x%x\n", cr); 812 - 813 - drv_data->write(drv_data); 814 - 815 - if (drv_data->tx != drv_data->tx_end) 816 - tranf_success = 0; 817 - } else if (drv_data->rx != NULL) { 818 - /* read only half duplex */ 819 - dev_dbg(&drv_data->pdev->dev, 820 - "IO read: cr is 0x%x\n", cr); 821 - 822 - drv_data->read(drv_data); 823 - if (drv_data->rx != drv_data->rx_end) 824 - tranf_success = 0; 825 - } 826 - 827 - if (!tranf_success) { 828 - dev_dbg(&drv_data->pdev->dev, 829 - "IO write error!\n"); 830 - message->state = ERROR_STATE; 831 - } else { 832 - /* Update total byte transfered */ 833 - message->actual_length += drv_data->len_in_bytes; 834 - /* Move to next transfer of this msg */ 835 - message->state = bfin_spi_next_transfer(drv_data); 836 - if (drv_data->cs_change) 837 - bfin_spi_cs_deactive(drv_data, chip); 838 - } 839 - /* Schedule next transfer tasklet */ 840 - tasklet_schedule(&drv_data->pump_transfers); 849 + return; 841 850 } 851 + 852 + /* 853 + * We always use SPI_WRITE mode (transfer starts with TDBR write). 854 + * SPI_READ mode (transfer starts with RDBR read) seems to have 855 + * problems with setting up the output value in TDBR prior to the 856 + * start of the transfer. 
857 + */
858 + write_CTRL(drv_data, cr | BIT_CTL_TXMOD);
859 +
860 + if (chip->pio_interrupt) {
861 + /* SPI irq should have been disabled by now */
862 +
863 + /* discard old RX data and clear RXS */
864 + bfin_spi_dummy_read(drv_data);
865 +
866 + /* start transfer */
867 + if (drv_data->tx == NULL)
868 + write_TDBR(drv_data, chip->idle_tx_val);
869 + else {
870 + if (bits_per_word == 8)
871 + write_TDBR(drv_data, (*(u8 *) (drv_data->tx)));
872 + else
873 + write_TDBR(drv_data, (*(u16 *) (drv_data->tx)));
874 + drv_data->tx += drv_data->n_bytes;
875 + }
876 +
877 + /* once TDBR is empty, interrupt is triggered */
878 + enable_irq(drv_data->spi_irq);
879 + return;
880 + }
881 +
882 + /* IO mode */
883 + dev_dbg(&drv_data->pdev->dev, "doing IO transfer\n");
884 +
885 + if (full_duplex) {
886 + /* full duplex mode */
887 + BUG_ON((drv_data->tx_end - drv_data->tx) !=
888 + (drv_data->rx_end - drv_data->rx));
889 + dev_dbg(&drv_data->pdev->dev,
890 + "IO duplex: cr is 0x%x\n", cr);
891 +
892 + drv_data->ops->duplex(drv_data);
893 +
894 + if (drv_data->tx != drv_data->tx_end)
895 + tranf_success = 0;
896 + } else if (drv_data->tx != NULL) {
897 + /* write only half duplex */
898 + dev_dbg(&drv_data->pdev->dev,
899 + "IO write: cr is 0x%x\n", cr);
900 +
901 + drv_data->ops->write(drv_data);
902 +
903 + if (drv_data->tx != drv_data->tx_end)
904 + tranf_success = 0;
905 + } else if (drv_data->rx != NULL) {
906 + /* read only half duplex */
907 + dev_dbg(&drv_data->pdev->dev,
908 + "IO read: cr is 0x%x\n", cr);
909 +
910 + drv_data->ops->read(drv_data);
911 + if (drv_data->rx != drv_data->rx_end)
912 + tranf_success = 0;
913 + }
914 +
915 + if (!tranf_success) {
916 + dev_dbg(&drv_data->pdev->dev,
917 + "IO write error!\n");
918 + message->state = ERROR_STATE;
919 + } else {
920 + /* Update total bytes transferred */
921 + message->actual_length += drv_data->len_in_bytes;
922 + /* Move to next transfer of this msg */
923 + message->state = bfin_spi_next_transfer(drv_data);
924 + if (drv_data->cs_change) 925 + bfin_spi_cs_deactive(drv_data, chip); 926 + } 927 + 928 + /* Schedule next transfer tasklet */ 929 + tasklet_schedule(&drv_data->pump_transfers); 842 930 } 843 931 844 932 /* pop a msg from queue and kick off real transfer */ 845 933 static void bfin_spi_pump_messages(struct work_struct *work) 846 934 { 847 - struct driver_data *drv_data; 935 + struct bfin_spi_master_data *drv_data; 848 936 unsigned long flags; 849 937 850 - drv_data = container_of(work, struct driver_data, pump_messages); 938 + drv_data = container_of(work, struct bfin_spi_master_data, pump_messages); 851 939 852 940 /* Lock queue and check for queue work */ 853 941 spin_lock_irqsave(&drv_data->lock, flags); 854 - if (list_empty(&drv_data->queue) || drv_data->run == QUEUE_STOPPED) { 942 + if (list_empty(&drv_data->queue) || !drv_data->running) { 855 943 /* pumper kicked off but no work to do */ 856 944 drv_data->busy = 0; 857 945 spin_unlock_irqrestore(&drv_data->lock, flags); ··· 928 962 */ 929 963 static int bfin_spi_transfer(struct spi_device *spi, struct spi_message *msg) 930 964 { 931 - struct driver_data *drv_data = spi_master_get_devdata(spi->master); 965 + struct bfin_spi_master_data *drv_data = spi_master_get_devdata(spi->master); 932 966 unsigned long flags; 933 967 934 968 spin_lock_irqsave(&drv_data->lock, flags); 935 969 936 - if (drv_data->run == QUEUE_STOPPED) { 970 + if (!drv_data->running) { 937 971 spin_unlock_irqrestore(&drv_data->lock, flags); 938 972 return -ESHUTDOWN; 939 973 } ··· 945 979 dev_dbg(&spi->dev, "adding an msg in transfer() \n"); 946 980 list_add_tail(&msg->queue, &drv_data->queue); 947 981 948 - if (drv_data->run == QUEUE_RUNNING && !drv_data->busy) 982 + if (drv_data->running && !drv_data->busy) 949 983 queue_work(drv_data->workqueue, &drv_data->pump_messages); 950 984 951 985 spin_unlock_irqrestore(&drv_data->lock, flags); ··· 969 1003 P_SPI2_SSEL6, P_SPI2_SSEL7}, 970 1004 }; 971 1005 972 - /* first setup for new devices */ 
1006 + /* setup for devices (may be called multiple times -- not just first setup) */ 973 1007 static int bfin_spi_setup(struct spi_device *spi) 974 1008 { 975 - struct bfin5xx_spi_chip *chip_info = NULL; 976 - struct chip_data *chip; 977 - struct driver_data *drv_data = spi_master_get_devdata(spi->master); 978 - int ret; 979 - 980 - if (spi->bits_per_word != 8 && spi->bits_per_word != 16) 981 - return -EINVAL; 1009 + struct bfin5xx_spi_chip *chip_info; 1010 + struct bfin_spi_slave_data *chip = NULL; 1011 + struct bfin_spi_master_data *drv_data = spi_master_get_devdata(spi->master); 1012 + u16 bfin_ctl_reg; 1013 + int ret = -EINVAL; 982 1014 983 1015 /* Only alloc (or use chip_info) on first setup */ 1016 + chip_info = NULL; 984 1017 chip = spi_get_ctldata(spi); 985 1018 if (chip == NULL) { 986 - chip = kzalloc(sizeof(struct chip_data), GFP_KERNEL); 987 - if (!chip) 988 - return -ENOMEM; 1019 + chip = kzalloc(sizeof(*chip), GFP_KERNEL); 1020 + if (!chip) { 1021 + dev_err(&spi->dev, "cannot allocate chip data\n"); 1022 + ret = -ENOMEM; 1023 + goto error; 1024 + } 989 1025 990 1026 chip->enable_dma = 0; 991 1027 chip_info = spi->controller_data; 992 1028 } 993 1029 1030 + /* Let people set non-standard bits directly */ 1031 + bfin_ctl_reg = BIT_CTL_OPENDRAIN | BIT_CTL_EMISO | 1032 + BIT_CTL_PSSE | BIT_CTL_GM | BIT_CTL_SZ; 1033 + 994 1034 /* chip_info isn't always needed */ 995 1035 if (chip_info) { 996 1036 /* Make sure people stop trying to set fields via ctl_reg 997 1037 * when they should actually be using common SPI framework. 998 - * Currently we let through: WOM EMISO PSSE GM SZ TIMOD. 1038 + * Currently we let through: WOM EMISO PSSE GM SZ. 999 1039 * Not sure if a user actually needs/uses any of these, 1000 1040 * but let's assume (for now) they do. 
1001 1041 */ 1002 - if (chip_info->ctl_reg & (SPE|MSTR|CPOL|CPHA|LSBF|SIZE)) { 1042 + if (chip_info->ctl_reg & ~bfin_ctl_reg) { 1003 1043 dev_err(&spi->dev, "do not set bits in ctl_reg " 1004 1044 "that the SPI framework manages\n"); 1005 - return -EINVAL; 1045 + goto error; 1006 1046 } 1007 - 1008 1047 chip->enable_dma = chip_info->enable_dma != 0 1009 1048 && drv_data->master_info->enable_dma; 1010 1049 chip->ctl_reg = chip_info->ctl_reg; 1011 - chip->bits_per_word = chip_info->bits_per_word; 1012 - chip->cs_change_per_word = chip_info->cs_change_per_word; 1013 1050 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 1014 - chip->cs_gpio = chip_info->cs_gpio; 1015 1051 chip->idle_tx_val = chip_info->idle_tx_val; 1052 + chip->pio_interrupt = chip_info->pio_interrupt; 1053 + spi->bits_per_word = chip_info->bits_per_word; 1054 + } else { 1055 + /* force a default base state */ 1056 + chip->ctl_reg &= bfin_ctl_reg; 1057 + } 1058 + 1059 + if (spi->bits_per_word != 8 && spi->bits_per_word != 16) { 1060 + dev_err(&spi->dev, "%d bits_per_word is not supported\n", 1061 + spi->bits_per_word); 1062 + goto error; 1016 1063 } 1017 1064 1018 1065 /* translate common spi framework into our register */ 1019 - if (spi->mode & SPI_CPOL) 1020 - chip->ctl_reg |= CPOL; 1021 - if (spi->mode & SPI_CPHA) 1022 - chip->ctl_reg |= CPHA; 1023 - if (spi->mode & SPI_LSB_FIRST) 1024 - chip->ctl_reg |= LSBF; 1025 - /* we dont support running in slave mode (yet?) 
*/ 1026 - chip->ctl_reg |= MSTR;
1027 -
1028 - /*
1029 - * if any one SPI chip is registered and wants DMA, request the
1030 - * DMA channel for it
1031 - */
1032 - if (chip->enable_dma && !drv_data->dma_requested) {
1033 - /* register dma irq handler */
1034 - if (request_dma(drv_data->dma_channel, "BFIN_SPI_DMA") < 0) {
1035 - dev_dbg(&spi->dev,
1036 - "Unable to request BlackFin SPI DMA channel\n");
1037 - return -ENODEV;
1038 - }
1039 - if (set_dma_callback(drv_data->dma_channel,
1040 - bfin_spi_dma_irq_handler, drv_data) < 0) {
1041 - dev_dbg(&spi->dev, "Unable to set dma callback\n");
1042 - return -EPERM;
1043 - }
1044 - dma_disable_irq(drv_data->dma_channel);
1045 - drv_data->dma_requested = 1;
1066 + if (spi->mode & ~(SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST)) {
1067 + dev_err(&spi->dev, "unsupported spi modes detected\n");
1068 + goto error;
1046 1069 }
1070 + if (spi->mode & SPI_CPOL)
1071 + chip->ctl_reg |= BIT_CTL_CPOL;
1072 + if (spi->mode & SPI_CPHA)
1073 + chip->ctl_reg |= BIT_CTL_CPHA;
1074 + if (spi->mode & SPI_LSB_FIRST)
1075 + chip->ctl_reg |= BIT_CTL_LSBF;
1076 + /* we don't support running in slave mode (yet?)
*/ 1077 + chip->ctl_reg |= BIT_CTL_MASTER; 1047 1078 1048 1079 /* 1049 1080 * Notice: for blackfin, the speed_hz is the value of register 1050 1081 * SPI_BAUD, not the real baudrate 1051 1082 */ 1052 1083 chip->baud = hz_to_spi_baud(spi->max_speed_hz); 1053 - chip->flag = 1 << (spi->chip_select); 1054 1084 chip->chip_select_num = spi->chip_select; 1085 + if (chip->chip_select_num < MAX_CTRL_CS) { 1086 + if (!(spi->mode & SPI_CPHA)) 1087 + dev_warn(&spi->dev, "Warning: SPI CPHA not set:" 1088 + " Slave Select not under software control!\n" 1089 + " See Documentation/blackfin/bfin-spi-notes.txt"); 1055 1090 1056 - if (chip->chip_select_num == 0) { 1091 + chip->flag = (1 << spi->chip_select) << 8; 1092 + } else 1093 + chip->cs_gpio = chip->chip_select_num - MAX_CTRL_CS; 1094 + 1095 + if (chip->enable_dma && chip->pio_interrupt) { 1096 + dev_err(&spi->dev, "enable_dma is set, " 1097 + "do not set pio_interrupt\n"); 1098 + goto error; 1099 + } 1100 + /* 1101 + * if any one SPI chip is registered and wants DMA, request the 1102 + * DMA channel for it 1103 + */ 1104 + if (chip->enable_dma && !drv_data->dma_requested) { 1105 + /* register dma irq handler */ 1106 + ret = request_dma(drv_data->dma_channel, "BFIN_SPI_DMA"); 1107 + if (ret) { 1108 + dev_err(&spi->dev, 1109 + "Unable to request BlackFin SPI DMA channel\n"); 1110 + goto error; 1111 + } 1112 + drv_data->dma_requested = 1; 1113 + 1114 + ret = set_dma_callback(drv_data->dma_channel, 1115 + bfin_spi_dma_irq_handler, drv_data); 1116 + if (ret) { 1117 + dev_err(&spi->dev, "Unable to set dma callback\n"); 1118 + goto error; 1119 + } 1120 + dma_disable_irq(drv_data->dma_channel); 1121 + } 1122 + 1123 + if (chip->pio_interrupt && !drv_data->irq_requested) { 1124 + ret = request_irq(drv_data->spi_irq, bfin_spi_pio_irq_handler, 1125 + IRQF_DISABLED, "BFIN_SPI", drv_data); 1126 + if (ret) { 1127 + dev_err(&spi->dev, "Unable to register spi IRQ\n"); 1128 + goto error; 1129 + } 1130 + drv_data->irq_requested = 1; 1131 + /* we 
use write mode, spi irq has to be disabled here */ 1132 + disable_irq(drv_data->spi_irq); 1133 + } 1134 + 1135 + if (chip->chip_select_num >= MAX_CTRL_CS) { 1057 1136 ret = gpio_request(chip->cs_gpio, spi->modalias); 1058 1137 if (ret) { 1059 - if (drv_data->dma_requested) 1060 - free_dma(drv_data->dma_channel); 1061 - return ret; 1138 + dev_err(&spi->dev, "gpio_request() error\n"); 1139 + goto pin_error; 1062 1140 } 1063 1141 gpio_direction_output(chip->cs_gpio, 1); 1064 1142 } 1065 1143 1066 - switch (chip->bits_per_word) { 1067 - case 8: 1068 - chip->n_bytes = 1; 1069 - chip->width = CFG_SPI_WORDSIZE8; 1070 - chip->read = chip->cs_change_per_word ? 1071 - bfin_spi_u8_cs_chg_reader : bfin_spi_u8_reader; 1072 - chip->write = chip->cs_change_per_word ? 1073 - bfin_spi_u8_cs_chg_writer : bfin_spi_u8_writer; 1074 - chip->duplex = chip->cs_change_per_word ? 1075 - bfin_spi_u8_cs_chg_duplex : bfin_spi_u8_duplex; 1076 - break; 1077 - 1078 - case 16: 1079 - chip->n_bytes = 2; 1080 - chip->width = CFG_SPI_WORDSIZE16; 1081 - chip->read = chip->cs_change_per_word ? 1082 - bfin_spi_u16_cs_chg_reader : bfin_spi_u16_reader; 1083 - chip->write = chip->cs_change_per_word ? 1084 - bfin_spi_u16_cs_chg_writer : bfin_spi_u16_writer; 1085 - chip->duplex = chip->cs_change_per_word ? 
1086 - bfin_spi_u16_cs_chg_duplex : bfin_spi_u16_duplex; 1087 - break; 1088 - 1089 - default: 1090 - dev_err(&spi->dev, "%d bits_per_word is not supported\n", 1091 - chip->bits_per_word); 1092 - if (chip_info) 1093 - kfree(chip); 1094 - return -ENODEV; 1095 - } 1096 - 1097 1144 dev_dbg(&spi->dev, "setup spi chip %s, width is %d, dma is %d\n", 1098 - spi->modalias, chip->width, chip->enable_dma); 1145 + spi->modalias, spi->bits_per_word, chip->enable_dma); 1099 1146 dev_dbg(&spi->dev, "ctl_reg is 0x%x, flag_reg is 0x%x\n", 1100 1147 chip->ctl_reg, chip->flag); 1101 1148 1102 1149 spi_set_ctldata(spi, chip); 1103 1150 1104 1151 dev_dbg(&spi->dev, "chip select number is %d\n", chip->chip_select_num); 1105 - if ((chip->chip_select_num > 0) 1106 - && (chip->chip_select_num <= spi->master->num_chipselect)) 1107 - peripheral_request(ssel[spi->master->bus_num] 1108 - [chip->chip_select_num-1], spi->modalias); 1152 + if (chip->chip_select_num < MAX_CTRL_CS) { 1153 + ret = peripheral_request(ssel[spi->master->bus_num] 1154 + [chip->chip_select_num-1], spi->modalias); 1155 + if (ret) { 1156 + dev_err(&spi->dev, "peripheral_request() error\n"); 1157 + goto pin_error; 1158 + } 1159 + } 1109 1160 1161 + bfin_spi_cs_enable(drv_data, chip); 1110 1162 bfin_spi_cs_deactive(drv_data, chip); 1111 1163 1112 1164 return 0; 1165 + 1166 + pin_error: 1167 + if (chip->chip_select_num >= MAX_CTRL_CS) 1168 + gpio_free(chip->cs_gpio); 1169 + else 1170 + peripheral_free(ssel[spi->master->bus_num] 1171 + [chip->chip_select_num - 1]); 1172 + error: 1173 + if (chip) { 1174 + if (drv_data->dma_requested) 1175 + free_dma(drv_data->dma_channel); 1176 + drv_data->dma_requested = 0; 1177 + 1178 + kfree(chip); 1179 + /* prevent free 'chip' twice */ 1180 + spi_set_ctldata(spi, NULL); 1181 + } 1182 + 1183 + return ret; 1113 1184 } 1114 1185 1115 1186 /* ··· 1155 1152 */ 1156 1153 static void bfin_spi_cleanup(struct spi_device *spi) 1157 1154 { 1158 - struct chip_data *chip = spi_get_ctldata(spi); 1155 + 
struct bfin_spi_slave_data *chip = spi_get_ctldata(spi); 1156 + struct bfin_spi_master_data *drv_data = spi_master_get_devdata(spi->master); 1159 1157 1160 1158 if (!chip) 1161 1159 return; 1162 1160 1163 - if ((chip->chip_select_num > 0) 1164 - && (chip->chip_select_num <= spi->master->num_chipselect)) 1161 + if (chip->chip_select_num < MAX_CTRL_CS) { 1165 1162 peripheral_free(ssel[spi->master->bus_num] 1166 1163 [chip->chip_select_num-1]); 1167 - 1168 - if (chip->chip_select_num == 0) 1164 + bfin_spi_cs_disable(drv_data, chip); 1165 + } else 1169 1166 gpio_free(chip->cs_gpio); 1170 1167 1171 1168 kfree(chip); 1169 + /* prevent free 'chip' twice */ 1170 + spi_set_ctldata(spi, NULL); 1172 1171 } 1173 1172 1174 - static inline int bfin_spi_init_queue(struct driver_data *drv_data) 1173 + static inline int bfin_spi_init_queue(struct bfin_spi_master_data *drv_data) 1175 1174 { 1176 1175 INIT_LIST_HEAD(&drv_data->queue); 1177 1176 spin_lock_init(&drv_data->lock); 1178 1177 1179 - drv_data->run = QUEUE_STOPPED; 1178 + drv_data->running = false; 1180 1179 drv_data->busy = 0; 1181 1180 1182 1181 /* init transfer tasklet */ ··· 1195 1190 return 0; 1196 1191 } 1197 1192 1198 - static inline int bfin_spi_start_queue(struct driver_data *drv_data) 1193 + static inline int bfin_spi_start_queue(struct bfin_spi_master_data *drv_data) 1199 1194 { 1200 1195 unsigned long flags; 1201 1196 1202 1197 spin_lock_irqsave(&drv_data->lock, flags); 1203 1198 1204 - if (drv_data->run == QUEUE_RUNNING || drv_data->busy) { 1199 + if (drv_data->running || drv_data->busy) { 1205 1200 spin_unlock_irqrestore(&drv_data->lock, flags); 1206 1201 return -EBUSY; 1207 1202 } 1208 1203 1209 - drv_data->run = QUEUE_RUNNING; 1204 + drv_data->running = true; 1210 1205 drv_data->cur_msg = NULL; 1211 1206 drv_data->cur_transfer = NULL; 1212 1207 drv_data->cur_chip = NULL; ··· 1217 1212 return 0; 1218 1213 } 1219 1214 1220 - static inline int bfin_spi_stop_queue(struct driver_data *drv_data) 1215 + static 
inline int bfin_spi_stop_queue(struct bfin_spi_master_data *drv_data) 1221 1216 { 1222 1217 unsigned long flags; 1223 1218 unsigned limit = 500; ··· 1231 1226 * execution path (pump_messages) would be required to call wake_up or 1232 1227 * friends on every SPI message. Do this instead 1233 1228 */ 1234 - drv_data->run = QUEUE_STOPPED; 1229 + drv_data->running = false; 1235 1230 while (!list_empty(&drv_data->queue) && drv_data->busy && limit--) { 1236 1231 spin_unlock_irqrestore(&drv_data->lock, flags); 1237 1232 msleep(10); ··· 1246 1241 return status; 1247 1242 } 1248 1243 1249 - static inline int bfin_spi_destroy_queue(struct driver_data *drv_data) 1244 + static inline int bfin_spi_destroy_queue(struct bfin_spi_master_data *drv_data) 1250 1245 { 1251 1246 int status; 1252 1247 ··· 1264 1259 struct device *dev = &pdev->dev; 1265 1260 struct bfin5xx_spi_master *platform_info; 1266 1261 struct spi_master *master; 1267 - struct driver_data *drv_data = 0; 1262 + struct bfin_spi_master_data *drv_data; 1268 1263 struct resource *res; 1269 1264 int status = 0; 1270 1265 1271 1266 platform_info = dev->platform_data; 1272 1267 1273 1268 /* Allocate master with space for drv_data */ 1274 - master = spi_alloc_master(dev, sizeof(struct driver_data) + 16); 1269 + master = spi_alloc_master(dev, sizeof(*drv_data)); 1275 1270 if (!master) { 1276 1271 dev_err(&pdev->dev, "can not alloc spi_master\n"); 1277 1272 return -ENOMEM; ··· 1307 1302 goto out_error_ioremap; 1308 1303 } 1309 1304 1310 - drv_data->dma_channel = platform_get_irq(pdev, 0); 1311 - if (drv_data->dma_channel < 0) { 1305 + res = platform_get_resource(pdev, IORESOURCE_DMA, 0); 1306 + if (res == NULL) { 1312 1307 dev_err(dev, "No DMA channel specified\n"); 1313 1308 status = -ENOENT; 1314 - goto out_error_no_dma_ch; 1309 + goto out_error_free_io; 1310 + } 1311 + drv_data->dma_channel = res->start; 1312 + 1313 + drv_data->spi_irq = platform_get_irq(pdev, 0); 1314 + if (drv_data->spi_irq < 0) { 1315 + dev_err(dev, "No 
spi pio irq specified\n"); 1316 + status = -ENOENT; 1317 + goto out_error_free_io; 1315 1318 } 1316 1319 1317 1320 /* Initial and start queue */ ··· 1341 1328 goto out_error_queue_alloc; 1342 1329 } 1343 1330 1331 + /* Reset SPI registers. If these registers were used by the boot loader, 1332 + * the sky may fall on your head if you enable the dma controller. 1333 + */ 1334 + write_CTRL(drv_data, BIT_CTL_CPHA | BIT_CTL_MASTER); 1335 + write_FLAG(drv_data, 0xFF00); 1336 + 1344 1337 /* Register with the SPI framework */ 1345 1338 platform_set_drvdata(pdev, drv_data); 1346 1339 status = spi_register_master(master); ··· 1362 1343 1363 1344 out_error_queue_alloc: 1364 1345 bfin_spi_destroy_queue(drv_data); 1365 - out_error_no_dma_ch: 1346 + out_error_free_io: 1366 1347 iounmap((void *) drv_data->regs_base); 1367 1348 out_error_ioremap: 1368 1349 out_error_get_res: ··· 1374 1355 /* stop hardware and remove the driver */ 1375 1356 static int __devexit bfin_spi_remove(struct platform_device *pdev) 1376 1357 { 1377 - struct driver_data *drv_data = platform_get_drvdata(pdev); 1358 + struct bfin_spi_master_data *drv_data = platform_get_drvdata(pdev); 1378 1359 int status = 0; 1379 1360 1380 1361 if (!drv_data) ··· 1394 1375 free_dma(drv_data->dma_channel); 1395 1376 } 1396 1377 1378 + if (drv_data->irq_requested) { 1379 + free_irq(drv_data->spi_irq, drv_data); 1380 + drv_data->irq_requested = 0; 1381 + } 1382 + 1397 1383 /* Disconnect from the SPI framework */ 1398 1384 spi_unregister_master(drv_data->master); 1399 1385 ··· 1413 1389 #ifdef CONFIG_PM 1414 1390 static int bfin_spi_suspend(struct platform_device *pdev, pm_message_t state) 1415 1391 { 1416 - struct driver_data *drv_data = platform_get_drvdata(pdev); 1392 + struct bfin_spi_master_data *drv_data = platform_get_drvdata(pdev); 1417 1393 int status = 0; 1418 1394 1419 1395 status = bfin_spi_stop_queue(drv_data); 1420 1396 if (status != 0) 1421 1397 return status; 1422 1398 1423 - /* stop hardware */ 1424 - 
bfin_spi_disable(drv_data); 1399 + drv_data->ctrl_reg = read_CTRL(drv_data); 1400 + drv_data->flag_reg = read_FLAG(drv_data); 1401 + 1402 + /* 1403 + * reset SPI_CTL and SPI_FLG registers 1404 + */ 1405 + write_CTRL(drv_data, BIT_CTL_CPHA | BIT_CTL_MASTER); 1406 + write_FLAG(drv_data, 0xFF00); 1425 1407 1426 1408 return 0; 1427 1409 } 1428 1410 1429 1411 static int bfin_spi_resume(struct platform_device *pdev) 1430 1412 { 1431 - struct driver_data *drv_data = platform_get_drvdata(pdev); 1413 + struct bfin_spi_master_data *drv_data = platform_get_drvdata(pdev); 1432 1414 int status = 0; 1433 1415 1434 - /* Enable the SPI interface */ 1435 - bfin_spi_enable(drv_data); 1416 + write_CTRL(drv_data, drv_data->ctrl_reg); 1417 + write_FLAG(drv_data, drv_data->flag_reg); 1436 1418 1437 1419 /* Start the queue running */ 1438 1420 status = bfin_spi_start_queue(drv_data); ··· 1469 1439 { 1470 1440 return platform_driver_probe(&bfin_spi_driver, bfin_spi_probe); 1471 1441 } 1472 - module_init(bfin_spi_init); 1442 + subsys_initcall(bfin_spi_init); 1473 1443 1474 1444 static void __exit bfin_spi_exit(void) 1475 1445 {
+1 -1
drivers/staging/tm6000/Kconfig
··· 1 1 config VIDEO_TM6000 2 2 tristate "TV Master TM5600/6000/6010 driver" 3 - depends on VIDEO_DEV && I2C && INPUT && USB && EXPERIMENTAL 3 + depends on VIDEO_DEV && I2C && INPUT && IR_CORE && USB && EXPERIMENTAL 4 4 select VIDEO_TUNER 5 5 select MEDIA_TUNER_XC2028 6 6 select MEDIA_TUNER_XC5000
+39 -22
drivers/staging/tm6000/tm6000-input.c
··· 46 46 } 47 47 48 48 struct tm6000_ir_poll_result { 49 - u8 rc_data[4]; 49 + u16 rc_data; 50 50 }; 51 51 52 52 struct tm6000_IR { ··· 60 60 int polling; 61 61 struct delayed_work work; 62 62 u8 wait:1; 63 + u8 key:1; 63 64 struct urb *int_urb; 64 65 u8 *urb_data; 65 - u8 key:1; 66 66 67 67 int (*get_key) (struct tm6000_IR *, struct tm6000_ir_poll_result *); 68 68 ··· 122 122 123 123 if (urb->status != 0) 124 124 printk(KERN_INFO "not ready\n"); 125 - else if (urb->actual_length > 0) 125 + else if (urb->actual_length > 0) { 126 126 memcpy(ir->urb_data, urb->transfer_buffer, urb->actual_length); 127 127 128 - dprintk("data %02x %02x %02x %02x\n", ir->urb_data[0], 129 - ir->urb_data[1], ir->urb_data[2], ir->urb_data[3]); 128 + dprintk("data %02x %02x %02x %02x\n", ir->urb_data[0], 129 + ir->urb_data[1], ir->urb_data[2], ir->urb_data[3]); 130 130 131 - ir->key = 1; 131 + ir->key = 1; 132 + } 132 133 133 134 rc = usb_submit_urb(urb, GFP_ATOMIC); 134 135 } ··· 141 140 int rc; 142 141 u8 buf[2]; 143 142 144 - if (ir->wait && !&dev->int_in) { 145 - poll_result->rc_data[0] = 0xff; 143 + if (ir->wait && !&dev->int_in) 146 144 return 0; 147 - } 148 145 149 146 if (&dev->int_in) { 150 - poll_result->rc_data[0] = ir->urb_data[0]; 151 - poll_result->rc_data[1] = ir->urb_data[1]; 147 + if (ir->ir.ir_type == IR_TYPE_RC5) 148 + poll_result->rc_data = ir->urb_data[0]; 149 + else 150 + poll_result->rc_data = ir->urb_data[0] | ir->urb_data[1] << 8; 152 151 } else { 153 152 tm6000_set_reg(dev, REQ_04_EN_DISABLE_MCU_INT, 2, 0); 154 153 msleep(10); 155 154 tm6000_set_reg(dev, REQ_04_EN_DISABLE_MCU_INT, 2, 1); 156 155 msleep(10); 157 156 158 - rc = tm6000_read_write_usb(dev, USB_DIR_IN | USB_TYPE_VENDOR | 159 - USB_RECIP_DEVICE, REQ_02_GET_IR_CODE, 0, 0, buf, 1); 157 + if (ir->ir.ir_type == IR_TYPE_RC5) { 158 + rc = tm6000_read_write_usb(dev, USB_DIR_IN | 159 + USB_TYPE_VENDOR | USB_RECIP_DEVICE, 160 + REQ_02_GET_IR_CODE, 0, 0, buf, 1); 160 161 161 - msleep(10); 162 + msleep(10); 162 
163 163 - dprintk("read data=%02x\n", buf[0]); 164 - if (rc < 0) 165 - return rc; 164 + dprintk("read data=%02x\n", buf[0]); 165 + if (rc < 0) 166 + return rc; 166 167 167 - poll_result->rc_data[0] = buf[0]; 168 + poll_result->rc_data = buf[0]; 169 + } else { 170 + rc = tm6000_read_write_usb(dev, USB_DIR_IN | 171 + USB_TYPE_VENDOR | USB_RECIP_DEVICE, 172 + REQ_02_GET_IR_CODE, 0, 0, buf, 2); 173 + 174 + msleep(10); 175 + 176 + dprintk("read data=%04x\n", buf[0] | buf[1] << 8); 177 + if (rc < 0) 178 + return rc; 179 + 180 + poll_result->rc_data = buf[0] | buf[1] << 8; 181 + } 182 + if ((poll_result->rc_data & 0x00ff) != 0xff) 183 + ir->key = 1; 168 184 } 169 185 return 0; 170 186 } ··· 198 180 return; 199 181 } 200 182 201 - dprintk("ir->get_key result data=%02x %02x\n", 202 - poll_result.rc_data[0], poll_result.rc_data[1]); 183 + dprintk("ir->get_key result data=%04x\n", poll_result.rc_data); 203 184 204 - if (poll_result.rc_data[0] != 0xff && ir->key == 1) { 185 + if (ir->key) { 205 186 ir_input_keydown(ir->input->input_dev, &ir->ir, 206 - poll_result.rc_data[0] | poll_result.rc_data[1] << 8); 187 + (u32)poll_result.rc_data); 207 188 208 189 ir_input_nokey(ir->input->input_dev, &ir->ir); 209 190 ir->key = 0;
-4
fs/binfmt_aout.c
··· 134 134 if (!dump_write(file, dump_start, dump_size)) 135 135 goto end_coredump; 136 136 } 137 - /* Finally dump the task struct. Not be used by gdb, but could be useful */ 138 - set_fs(KERNEL_DS); 139 - if (!dump_write(file, current, sizeof(*current))) 140 - goto end_coredump; 141 137 end_coredump: 142 138 set_fs(fs); 143 139 return has_dumped;
+18 -13
fs/ceph/caps.c
··· 2283 2283 { 2284 2284 struct ceph_inode_info *ci = ceph_inode(inode); 2285 2285 int mds = session->s_mds; 2286 - int seq = le32_to_cpu(grant->seq); 2286 + unsigned seq = le32_to_cpu(grant->seq); 2287 + unsigned issue_seq = le32_to_cpu(grant->issue_seq); 2287 2288 int newcaps = le32_to_cpu(grant->caps); 2288 2289 int issued, implemented, used, wanted, dirty; 2289 2290 u64 size = le64_to_cpu(grant->size); ··· 2296 2295 int revoked_rdcache = 0; 2297 2296 int queue_invalidate = 0; 2298 2297 2299 - dout("handle_cap_grant inode %p cap %p mds%d seq %d %s\n", 2300 - inode, cap, mds, seq, ceph_cap_string(newcaps)); 2298 + dout("handle_cap_grant inode %p cap %p mds%d seq %u/%u %s\n", 2299 + inode, cap, mds, seq, issue_seq, ceph_cap_string(newcaps)); 2301 2300 dout(" size %llu max_size %llu, i_size %llu\n", size, max_size, 2302 2301 inode->i_size); 2303 2302 ··· 2393 2392 } 2394 2393 2395 2394 cap->seq = seq; 2395 + cap->issue_seq = issue_seq; 2396 2396 2397 2397 /* file layout may have changed */ 2398 2398 ci->i_layout = grant->layout; ··· 2776 2774 if (op == CEPH_CAP_OP_IMPORT) 2777 2775 __queue_cap_release(session, vino.ino, cap_id, 2778 2776 mseq, seq); 2779 - 2780 - /* 2781 - * send any full release message to try to move things 2782 - * along for the mds (who clearly thinks we still have this 2783 - * cap). 
2784 - */ 2785 - ceph_add_cap_releases(mdsc, session); 2786 - ceph_send_cap_releases(mdsc, session); 2787 - goto done; 2777 + goto flush_cap_releases; 2788 2778 } 2789 2779 2790 2780 /* these will work even if we don't have a cap yet */ ··· 2804 2810 dout(" no cap on %p ino %llx.%llx from mds%d\n", 2805 2811 inode, ceph_ino(inode), ceph_snap(inode), mds); 2806 2812 spin_unlock(&inode->i_lock); 2807 - goto done; 2813 + goto flush_cap_releases; 2808 2814 } 2809 2815 2810 2816 /* note that each of these drops i_lock for us */ ··· 2827 2833 pr_err("ceph_handle_caps: unknown cap op %d %s\n", op, 2828 2834 ceph_cap_op_name(op)); 2829 2835 } 2836 + 2837 + goto done; 2838 + 2839 + flush_cap_releases: 2840 + /* 2841 + * send any full release message to try to move things 2842 + * along for the mds (who clearly thinks we still have this 2843 + * cap). 2844 + */ 2845 + ceph_add_cap_releases(mdsc, session); 2846 + ceph_send_cap_releases(mdsc, session); 2830 2847 2831 2848 done: 2832 2849 mutex_unlock(&session->s_mutex);
+13 -8
fs/ceph/export.c
··· 42 42 static int ceph_encode_fh(struct dentry *dentry, u32 *rawfh, int *max_len, 43 43 int connectable) 44 44 { 45 + int type; 45 46 struct ceph_nfs_fh *fh = (void *)rawfh; 46 47 struct ceph_nfs_confh *cfh = (void *)rawfh; 47 48 struct dentry *parent = dentry->d_parent; 48 49 struct inode *inode = dentry->d_inode; 49 - int type; 50 + int connected_handle_length = sizeof(*cfh)/4; 51 + int handle_length = sizeof(*fh)/4; 50 52 51 53 /* don't re-export snaps */ 52 54 if (ceph_snap(inode) != CEPH_NOSNAP) 53 55 return -EINVAL; 54 56 55 - if (*max_len >= sizeof(*cfh)) { 57 + if (*max_len >= connected_handle_length) { 56 58 dout("encode_fh %p connectable\n", dentry); 57 59 cfh->ino = ceph_ino(dentry->d_inode); 58 60 cfh->parent_ino = ceph_ino(parent->d_inode); 59 61 cfh->parent_name_hash = parent->d_name.hash; 60 - *max_len = sizeof(*cfh); 62 + *max_len = connected_handle_length; 61 63 type = 2; 62 - } else if (*max_len > sizeof(*fh)) { 63 - if (connectable) 64 - return -ENOSPC; 64 + } else if (*max_len >= handle_length) { 65 + if (connectable) { 66 + *max_len = connected_handle_length; 67 + return 255; 68 + } 65 69 dout("encode_fh %p\n", dentry); 66 70 fh->ino = ceph_ino(dentry->d_inode); 67 - *max_len = sizeof(*fh); 71 + *max_len = handle_length; 68 72 type = 1; 69 73 } else { 70 - return -ENOSPC; 74 + *max_len = handle_length; 75 + return 255; 71 76 } 72 77 return type; 73 78 }
+1 -1
fs/ceph/file.c
··· 697 697 * start_request so that a tid has been assigned. 698 698 */ 699 699 spin_lock(&ci->i_unsafe_lock); 700 - list_add(&ci->i_unsafe_writes, &req->r_unsafe_item); 700 + list_add(&req->r_unsafe_item, &ci->i_unsafe_writes); 701 701 spin_unlock(&ci->i_unsafe_lock); 702 702 ceph_get_cap_refs(ci, CEPH_CAP_FILE_WR); 703 703 }
+1 -1
fs/ceph/osd_client.c
··· 549 549 */ 550 550 static void __cancel_request(struct ceph_osd_request *req) 551 551 { 552 - if (req->r_sent) { 552 + if (req->r_sent && req->r_osd) { 553 553 ceph_con_revoke(&req->r_osd->o_con, req->r_request); 554 554 req->r_sent = 0; 555 555 }
+40
fs/exec.c
··· 2014 2014 fail: 2015 2015 return; 2016 2016 } 2017 + 2018 + /* 2019 + * Core dumping helper functions. These are the only things you should 2020 + * do on a core-file: use only these functions to write out all the 2021 + * necessary info. 2022 + */ 2023 + int dump_write(struct file *file, const void *addr, int nr) 2024 + { 2025 + return access_ok(VERIFY_READ, addr, nr) && file->f_op->write(file, addr, nr, &file->f_pos) == nr; 2026 + } 2027 + EXPORT_SYMBOL(dump_write); 2028 + 2029 + int dump_seek(struct file *file, loff_t off) 2030 + { 2031 + int ret = 1; 2032 + 2033 + if (file->f_op->llseek && file->f_op->llseek != no_llseek) { 2034 + if (file->f_op->llseek(file, off, SEEK_CUR) < 0) 2035 + return 0; 2036 + } else { 2037 + char *buf = (char *)get_zeroed_page(GFP_KERNEL); 2038 + 2039 + if (!buf) 2040 + return 0; 2041 + while (off > 0) { 2042 + unsigned long n = off; 2043 + 2044 + if (n > PAGE_SIZE) 2045 + n = PAGE_SIZE; 2046 + if (!dump_write(file, buf, n)) { 2047 + ret = 0; 2048 + break; 2049 + } 2050 + off -= n; 2051 + } 2052 + free_page((unsigned long)buf); 2053 + } 2054 + return ret; 2055 + } 2056 + EXPORT_SYMBOL(dump_seek);
+7 -1
fs/exofs/inode.c
··· 54 54 unsigned nr_pages;
55 55 unsigned long length;
56 56 loff_t pg_first; /* keep 64bit also in 32-arches */
57 + bool read_4_write; /* This means two things: that the read is sync
58 + * and the pages should not be unlocked.
59 + */
57 60 };
58 61 
59 62 static void _pcol_init(struct page_collect *pcol, unsigned expected_pages,
··· 74 71 pcol->nr_pages = 0;
75 72 pcol->length = 0;
76 73 pcol->pg_first = -1;
74 + pcol->read_4_write = false;
77 75 }
78 76 
79 77 static void _pcol_reset(struct page_collect *pcol)
··· 351 347 if (PageError(page))
352 348 ClearPageError(page);
353 349 
354 - unlock_page(page);
350 + if (!pcol->read_4_write)
351 + unlock_page(page);
355 352 EXOFS_DBGMSG("readpage_strip(0x%lx, 0x%lx) empty page,"
356 353 " splitting\n", inode->i_ino, page->index);
357 354 
··· 433 428 /* readpage_strip might call read_exec(,is_sync==false) at several
434 429 * places but not if we have a single page.
435 430 */
431 + pcol.read_4_write = is_sync;
436 432 ret = readpage_strip(&pcol, page);
437 433 if (ret) {
438 434 EXOFS_ERR("_readpage => %d\n", ret);
-2
fs/nfsd/nfsfh.h
··· 196 196 static inline void 197 197 fh_unlock(struct svc_fh *fhp) 198 198 { 199 - BUG_ON(!fhp->fh_dentry); 200 - 201 199 if (fhp->fh_locked) { 202 200 fill_post_wcc(fhp); 203 201 mutex_unlock(&fhp->fh_dentry->d_inode->i_mutex);
+1 -1
fs/notify/Kconfig
··· 3 3 4 4 source "fs/notify/dnotify/Kconfig" 5 5 source "fs/notify/inotify/Kconfig" 6 - source "fs/notify/fanotify/Kconfig" 6 + #source "fs/notify/fanotify/Kconfig"
+14 -5
fs/xfs/linux-2.6/xfs_sync.c
··· 668 668 xfs_perag_put(pag); 669 669 } 670 670 671 - void 672 - __xfs_inode_clear_reclaim_tag( 673 - xfs_mount_t *mp, 671 + STATIC void 672 + __xfs_inode_clear_reclaim( 674 673 xfs_perag_t *pag, 675 674 xfs_inode_t *ip) 676 675 { 677 - radix_tree_tag_clear(&pag->pag_ici_root, 678 - XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG); 679 676 pag->pag_ici_reclaimable--; 680 677 if (!pag->pag_ici_reclaimable) { 681 678 /* clear the reclaim tag from the perag radix tree */ ··· 684 687 trace_xfs_perag_clear_reclaim(ip->i_mount, pag->pag_agno, 685 688 -1, _RET_IP_); 686 689 } 690 + } 691 + 692 + void 693 + __xfs_inode_clear_reclaim_tag( 694 + xfs_mount_t *mp, 695 + xfs_perag_t *pag, 696 + xfs_inode_t *ip) 697 + { 698 + radix_tree_tag_clear(&pag->pag_ici_root, 699 + XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG); 700 + __xfs_inode_clear_reclaim(pag, ip); 687 701 } 688 702 689 703 /* ··· 846 838 if (!radix_tree_delete(&pag->pag_ici_root, 847 839 XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino))) 848 840 ASSERT(0); 841 + __xfs_inode_clear_reclaim(pag, ip); 849 842 write_unlock(&pag->pag_ici_lock); 850 843 851 844 /*
+3 -1
include/drm/ttm/ttm_bo_api.h
··· 246 246 
247 247 atomic_t reserved;
248 248 
249 - 
250 249 /**
251 250 * Members protected by the bo::lock
251 + * In addition, setting sync_obj to anything other
252 + * than NULL requires bo::reserved to be held. This allows for
253 + * checking NULL while reserved but not holding bo::lock.
252 254 */
253 255 
254 256 void *sync_obj_arg;
-1
include/linux/Kbuild
··· 118 118 header-y += ext2_fs.h 119 119 header-y += fadvise.h 120 120 header-y += falloc.h 121 - header-y += fanotify.h 122 121 header-y += fb.h 123 122 header-y += fcntl.h 124 123 header-y += fd.h
+2 -32
include/linux/coredump.h
··· 9 9 * These are the only things you should do on a core-file: use only these 10 10 * functions to write out all the necessary info. 11 11 */ 12 - static inline int dump_write(struct file *file, const void *addr, int nr) 13 - { 14 - return file->f_op->write(file, addr, nr, &file->f_pos) == nr; 15 - } 16 - 17 - static inline int dump_seek(struct file *file, loff_t off) 18 - { 19 - int ret = 1; 20 - 21 - if (file->f_op->llseek && file->f_op->llseek != no_llseek) { 22 - if (file->f_op->llseek(file, off, SEEK_CUR) < 0) 23 - return 0; 24 - } else { 25 - char *buf = (char *)get_zeroed_page(GFP_KERNEL); 26 - 27 - if (!buf) 28 - return 0; 29 - while (off > 0) { 30 - unsigned long n = off; 31 - 32 - if (n > PAGE_SIZE) 33 - n = PAGE_SIZE; 34 - if (!dump_write(file, buf, n)) { 35 - ret = 0; 36 - break; 37 - } 38 - off -= n; 39 - } 40 - free_page((unsigned long)buf); 41 - } 42 - return ret; 43 - } 12 + extern int dump_write(struct file *file, const void *addr, int nr); 13 + extern int dump_seek(struct file *file, loff_t off); 44 14 45 15 #endif /* _LINUX_COREDUMP_H */
+1
include/linux/elevator.h
··· 93 93 struct elevator_type *elevator_type; 94 94 struct mutex sysfs_lock; 95 95 struct hlist_head *hash; 96 + unsigned int registered:1; 96 97 }; 97 98 98 99 /*
+14 -1
include/linux/types.h
··· 121 121 typedef __s64 int64_t;
122 122 #endif
123 123 
124 - /* this is a special 64bit data type that is 8-byte aligned */
124 + /*
125 + * aligned_u64 should be used in defining kernel<->userspace ABIs to avoid
126 + * common 32/64-bit compat problems.
127 + * 64-bit values align to 4-byte boundaries on x86_32 (and possibly other
128 + * architectures) and to 8-byte boundaries on 64-bit architectures. The new
129 + * aligned_u64 type enforces 8-byte alignment so that structs containing
130 + * aligned_u64 values have the same alignment on 32-bit and 64-bit architectures.
131 + * No conversions are necessary between 32-bit user-space and a 64-bit kernel.
132 + */
125 133 #define aligned_u64 __u64 __attribute__((aligned(8)))
126 134 #define aligned_be64 __be64 __attribute__((aligned(8)))
127 135 #define aligned_le64 __le64 __attribute__((aligned(8)))
··· 185 177 
186 178 typedef __u16 __bitwise __sum16;
187 179 typedef __u32 __bitwise __wsum;
180 + 
181 + /* this is a special 64bit data type that is 8-byte aligned */
182 + #define __aligned_u64 __u64 __attribute__((aligned(8)))
183 + #define __aligned_be64 __be64 __attribute__((aligned(8)))
184 + #define __aligned_le64 __le64 __attribute__((aligned(8)))
188 185 
189 186 #ifdef __KERNEL__
190 187 typedef unsigned __bitwise__ gfp_t;
+1
include/media/videobuf-dma-sg.h
··· 48 48 49 49 /* for userland buffer */ 50 50 int offset; 51 + size_t size; 51 52 struct page **pages; 52 53 53 54 /* for kernel buffers */
+18
include/net/bluetooth/bluetooth.h
··· 161 161 { 162 162 struct sk_buff *skb; 163 163 164 + release_sock(sk); 164 165 if ((skb = sock_alloc_send_skb(sk, len + BT_SKB_RESERVE, nb, err))) { 165 166 skb_reserve(skb, BT_SKB_RESERVE); 166 167 bt_cb(skb)->incoming = 0; 167 168 } 169 + lock_sock(sk); 170 + 171 + if (!skb && *err) 172 + return NULL; 173 + 174 + *err = sock_error(sk); 175 + if (*err) 176 + goto out; 177 + 178 + if (sk->sk_shutdown) { 179 + *err = -ECONNRESET; 180 + goto out; 181 + } 168 182 169 183 return skb; 184 + 185 + out: 186 + kfree_skb(skb); 187 + return NULL; 170 188 } 171 189 172 190 int bt_err(__u16 code);
+11 -2
kernel/hrtimer.c
··· 931 931 remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base) 932 932 { 933 933 if (hrtimer_is_queued(timer)) { 934 + unsigned long state; 934 935 int reprogram; 935 936 936 937 /* ··· 945 944 debug_deactivate(timer); 946 945 timer_stats_hrtimer_clear_start_info(timer); 947 946 reprogram = base->cpu_base == &__get_cpu_var(hrtimer_bases); 948 - __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 949 - reprogram); 947 + /* 948 + * We must preserve the CALLBACK state flag here, 949 + * otherwise we could move the timer base in 950 + * switch_hrtimer_base. 951 + */ 952 + state = timer->state & HRTIMER_STATE_CALLBACK; 953 + __remove_hrtimer(timer, base, state, reprogram); 950 954 return 1; 951 955 } 952 956 return 0; ··· 1237 1231 BUG_ON(timer->state != HRTIMER_STATE_CALLBACK); 1238 1232 enqueue_hrtimer(timer, base); 1239 1233 } 1234 + 1235 + WARN_ON_ONCE(!(timer->state & HRTIMER_STATE_CALLBACK)); 1236 + 1240 1237 timer->state &= ~HRTIMER_STATE_CALLBACK; 1241 1238 } 1242 1239
+1 -3
kernel/perf_event.c
··· 2202 2202 static int perf_event_period(struct perf_event *event, u64 __user *arg) 2203 2203 { 2204 2204 struct perf_event_context *ctx = event->ctx; 2205 - unsigned long size; 2206 2205 int ret = 0; 2207 2206 u64 value; 2208 2207 2209 2208 if (!event->attr.sample_period) 2210 2209 return -EINVAL; 2211 2210 2212 - size = copy_from_user(&value, arg, sizeof(value)); 2213 - if (size != sizeof(value)) 2211 + if (copy_from_user(&value, arg, sizeof(value))) 2214 2212 return -EFAULT; 2215 2213 2216 2214 if (!value)
+8
kernel/signal.c
··· 2215 2215 #ifdef __ARCH_SI_TRAPNO
2216 2216 err |= __put_user(from->si_trapno, &to->si_trapno);
2217 2217 #endif
2218 + #ifdef BUS_MCEERR_AO
2219 + /*
2220 + * Other callers might not initialize the si_lsb field,
2221 + * so check explicitly for the right codes here.
2222 + */
2223 + if (from->si_code == BUS_MCEERR_AR || from->si_code == BUS_MCEERR_AO)
2224 + err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb);
2225 + #endif
2218 2226 break;
2219 2227 case __SI_CHLD:
2220 2228 err |= __put_user(from->si_pid, &to->si_pid);
+1 -1
kernel/sysctl.c
··· 2485 2485 kbuf[left] = 0; 2486 2486 } 2487 2487 2488 - for (; left && vleft--; i++, min++, max++, first=0) { 2488 + for (; left && vleft--; i++, first = 0) { 2489 2489 unsigned long val; 2490 2490 2491 2491 if (write) {
-9
kernel/sysctl_check.c
··· 143 143 if (!table->maxlen) 144 144 set_fail(&fail, table, "No maxlen"); 145 145 } 146 - if ((table->proc_handler == proc_doulongvec_minmax) || 147 - (table->proc_handler == proc_doulongvec_ms_jiffies_minmax)) { 148 - if (table->maxlen > sizeof (unsigned long)) { 149 - if (!table->extra1) 150 - set_fail(&fail, table, "No min"); 151 - if (!table->extra2) 152 - set_fail(&fail, table, "No max"); 153 - } 154 - } 155 146 #ifdef CONFIG_PROC_SYSCTL 156 147 if (table->procname && !table->proc_handler) 157 148 set_fail(&fail, table, "No proc_handler");
+1 -1
kernel/trace/ring_buffer.c
··· 405 405 #define BUF_MAX_DATA_SIZE (BUF_PAGE_SIZE - (sizeof(u32) * 2)) 406 406 407 407 /* Max number of timestamps that can fit on a page */ 408 - #define RB_TIMESTAMPS_PER_PAGE (BUF_PAGE_SIZE / RB_LEN_TIME_STAMP) 408 + #define RB_TIMESTAMPS_PER_PAGE (BUF_PAGE_SIZE / RB_LEN_TIME_EXTEND) 409 409 410 410 int ring_buffer_print_page_header(struct trace_seq *s) 411 411 {
+7 -3
mm/memcontrol.c
··· 3587 3587 3588 3588 static void mem_cgroup_threshold(struct mem_cgroup *memcg) 3589 3589 { 3590 - __mem_cgroup_threshold(memcg, false); 3591 - if (do_swap_account) 3592 - __mem_cgroup_threshold(memcg, true); 3590 + while (memcg) { 3591 + __mem_cgroup_threshold(memcg, false); 3592 + if (do_swap_account) 3593 + __mem_cgroup_threshold(memcg, true); 3594 + 3595 + memcg = parent_mem_cgroup(memcg); 3596 + } 3593 3597 } 3594 3598 3595 3599 static int compare_thresholds(const void *a, const void *b)
+6 -6
mm/memory-failure.c
··· 183 183 * signal. 184 184 */ 185 185 static int kill_proc_ao(struct task_struct *t, unsigned long addr, int trapno, 186 - unsigned long pfn) 186 + unsigned long pfn, struct page *page) 187 187 { 188 188 struct siginfo si; 189 189 int ret; ··· 198 198 #ifdef __ARCH_SI_TRAPNO 199 199 si.si_trapno = trapno; 200 200 #endif 201 - si.si_addr_lsb = PAGE_SHIFT; 201 + si.si_addr_lsb = compound_order(compound_head(page)) + PAGE_SHIFT; 202 202 /* 203 203 * Don't use force here, it's convenient if the signal 204 204 * can be temporarily blocked. ··· 235 235 int nr; 236 236 do { 237 237 nr = shrink_slab(1000, GFP_KERNEL, 1000); 238 - if (page_count(p) == 0) 238 + if (page_count(p) == 1) 239 239 break; 240 240 } while (nr > 10); 241 241 } ··· 327 327 * wrong earlier. 328 328 */ 329 329 static void kill_procs_ao(struct list_head *to_kill, int doit, int trapno, 330 - int fail, unsigned long pfn) 330 + int fail, struct page *page, unsigned long pfn) 331 331 { 332 332 struct to_kill *tk, *next; 333 333 ··· 352 352 * process anyways. 353 353 */ 354 354 else if (kill_proc_ao(tk->tsk, tk->addr, trapno, 355 - pfn) < 0) 355 + pfn, page) < 0) 356 356 printk(KERN_ERR 357 357 "MCE %#lx: Cannot send advisory machine check signal to %s:%d\n", 358 358 pfn, tk->tsk->comm, tk->tsk->pid); ··· 928 928 * any accesses to the poisoned memory. 929 929 */ 930 930 kill_procs_ao(&tokill, !!PageDirty(hpage), trapno, 931 - ret != SWAP_SUCCESS, pfn); 931 + ret != SWAP_SUCCESS, p, pfn); 932 932 933 933 return ret; 934 934 }
+2 -2
mm/page_alloc.c
··· 5182 5182 if (!table) 5183 5183 panic("Failed to allocate %s hash table\n", tablename); 5184 5184 5185 - printk(KERN_INFO "%s hash table entries: %d (order: %d, %lu bytes)\n", 5185 + printk(KERN_INFO "%s hash table entries: %ld (order: %d, %lu bytes)\n", 5186 5186 tablename, 5187 - (1U << log2qty), 5187 + (1UL << log2qty), 5188 5188 ilog2(size) - PAGE_SHIFT, 5189 5189 size); 5190 5190
+1 -1
net/atm/mpc.c
··· 778 778 eg->packets_rcvd++; 779 779 mpc->eg_ops->put(eg); 780 780 781 - memset(ATM_SKB(skb), 0, sizeof(struct atm_skb_data)); 781 + memset(ATM_SKB(new_skb), 0, sizeof(struct atm_skb_data)); 782 782 netif_rx(new_skb); 783 783 } 784 784
+29 -33
net/bluetooth/l2cap.c
··· 1441 1441 1442 1442 static void l2cap_streaming_send(struct sock *sk) 1443 1443 { 1444 - struct sk_buff *skb, *tx_skb; 1444 + struct sk_buff *skb; 1445 1445 struct l2cap_pinfo *pi = l2cap_pi(sk); 1446 1446 u16 control, fcs; 1447 1447 1448 - while ((skb = sk->sk_send_head)) { 1449 - tx_skb = skb_clone(skb, GFP_ATOMIC); 1450 - 1451 - control = get_unaligned_le16(tx_skb->data + L2CAP_HDR_SIZE); 1448 + while ((skb = skb_dequeue(TX_QUEUE(sk)))) { 1449 + control = get_unaligned_le16(skb->data + L2CAP_HDR_SIZE); 1452 1450 control |= pi->next_tx_seq << L2CAP_CTRL_TXSEQ_SHIFT; 1453 - put_unaligned_le16(control, tx_skb->data + L2CAP_HDR_SIZE); 1451 + put_unaligned_le16(control, skb->data + L2CAP_HDR_SIZE); 1454 1452 1455 1453 if (pi->fcs == L2CAP_FCS_CRC16) { 1456 - fcs = crc16(0, (u8 *)tx_skb->data, tx_skb->len - 2); 1457 - put_unaligned_le16(fcs, tx_skb->data + tx_skb->len - 2); 1454 + fcs = crc16(0, (u8 *)skb->data, skb->len - 2); 1455 + put_unaligned_le16(fcs, skb->data + skb->len - 2); 1458 1456 } 1459 1457 1460 - l2cap_do_send(sk, tx_skb); 1458 + l2cap_do_send(sk, skb); 1461 1459 1462 1460 pi->next_tx_seq = (pi->next_tx_seq + 1) % 64; 1463 - 1464 - if (skb_queue_is_last(TX_QUEUE(sk), skb)) 1465 - sk->sk_send_head = NULL; 1466 - else 1467 - sk->sk_send_head = skb_queue_next(TX_QUEUE(sk), skb); 1468 - 1469 - skb = skb_dequeue(TX_QUEUE(sk)); 1470 - kfree_skb(skb); 1471 1461 } 1472 1462 } 1473 1463 ··· 1950 1960 1951 1961 switch (optname) { 1952 1962 case L2CAP_OPTIONS: 1963 + if (sk->sk_state == BT_CONNECTED) { 1964 + err = -EINVAL; 1965 + break; 1966 + } 1967 + 1953 1968 opts.imtu = l2cap_pi(sk)->imtu; 1954 1969 opts.omtu = l2cap_pi(sk)->omtu; 1955 1970 opts.flush_to = l2cap_pi(sk)->flush_to; ··· 2766 2771 case L2CAP_CONF_MTU: 2767 2772 if (val < L2CAP_DEFAULT_MIN_MTU) { 2768 2773 *result = L2CAP_CONF_UNACCEPT; 2769 - pi->omtu = L2CAP_DEFAULT_MIN_MTU; 2774 + pi->imtu = L2CAP_DEFAULT_MIN_MTU; 2770 2775 } else 2771 - pi->omtu = val; 2772 - l2cap_add_conf_opt(&ptr, 
L2CAP_CONF_MTU, 2, pi->omtu); 2776 + pi->imtu = val; 2777 + l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, pi->imtu); 2773 2778 break; 2774 2779 2775 2780 case L2CAP_CONF_FLUSH_TO: ··· 3066 3071 return 0; 3067 3072 } 3068 3073 3074 + static inline void set_default_fcs(struct l2cap_pinfo *pi) 3075 + { 3076 + /* FCS is enabled only in ERTM or streaming mode, if one or both 3077 + * sides request it. 3078 + */ 3079 + if (pi->mode != L2CAP_MODE_ERTM && pi->mode != L2CAP_MODE_STREAMING) 3080 + pi->fcs = L2CAP_FCS_NONE; 3081 + else if (!(pi->conf_state & L2CAP_CONF_NO_FCS_RECV)) 3082 + pi->fcs = L2CAP_FCS_CRC16; 3083 + } 3084 + 3069 3085 static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data) 3070 3086 { 3071 3087 struct l2cap_conf_req *req = (struct l2cap_conf_req *) data; ··· 3094 3088 if (!sk) 3095 3089 return -ENOENT; 3096 3090 3097 - if (sk->sk_state != BT_CONFIG) { 3098 - struct l2cap_cmd_rej rej; 3099 - 3100 - rej.reason = cpu_to_le16(0x0002); 3101 - l2cap_send_cmd(conn, cmd->ident, L2CAP_COMMAND_REJ, 3102 - sizeof(rej), &rej); 3091 + if (sk->sk_state == BT_DISCONN) 3103 3092 goto unlock; 3104 - } 3105 3093 3106 3094 /* Reject if config buffer is too small. 
*/ 3107 3095 len = cmd_len - sizeof(*req); ··· 3135 3135 goto unlock; 3136 3136 3137 3137 if (l2cap_pi(sk)->conf_state & L2CAP_CONF_INPUT_DONE) { 3138 - if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_NO_FCS_RECV) || 3139 - l2cap_pi(sk)->fcs != L2CAP_FCS_NONE) 3140 - l2cap_pi(sk)->fcs = L2CAP_FCS_CRC16; 3138 + set_default_fcs(l2cap_pi(sk)); 3141 3139 3142 3140 sk->sk_state = BT_CONNECTED; 3143 3141 ··· 3223 3225 l2cap_pi(sk)->conf_state |= L2CAP_CONF_INPUT_DONE; 3224 3226 3225 3227 if (l2cap_pi(sk)->conf_state & L2CAP_CONF_OUTPUT_DONE) { 3226 - if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_NO_FCS_RECV) || 3227 - l2cap_pi(sk)->fcs != L2CAP_FCS_NONE) 3228 - l2cap_pi(sk)->fcs = L2CAP_FCS_CRC16; 3228 + set_default_fcs(l2cap_pi(sk)); 3229 3229 3230 3230 sk->sk_state = BT_CONNECTED; 3231 3231 l2cap_pi(sk)->next_tx_seq = 0;
+4
net/bluetooth/rfcomm/sock.c
··· 82 82 static void rfcomm_sk_state_change(struct rfcomm_dlc *d, int err) 83 83 { 84 84 struct sock *sk = d->owner, *parent; 85 + unsigned long flags; 86 + 85 87 if (!sk) 86 88 return; 87 89 88 90 BT_DBG("dlc %p state %ld err %d", d, d->state, err); 89 91 92 + local_irq_save(flags); 90 93 bh_lock_sock(sk); 91 94 92 95 if (err) ··· 111 108 } 112 109 113 110 bh_unlock_sock(sk); 111 + local_irq_restore(flags); 114 112 115 113 if (parent && sock_flag(sk, SOCK_ZAPPED)) { 116 114 /* We have to drop DLC lock here, otherwise
+15 -6
net/caif/caif_socket.c
··· 827 827 long timeo; 828 828 int err; 829 829 int ifindex, headroom, tailroom; 830 + unsigned int mtu; 830 831 struct net_device *dev; 831 832 832 833 lock_sock(sk); ··· 897 896 cf_sk->sk.sk_state = CAIF_DISCONNECTED; 898 897 goto out; 899 898 } 900 - dev = dev_get_by_index(sock_net(sk), ifindex); 899 + 900 + err = -ENODEV; 901 + rcu_read_lock(); 902 + dev = dev_get_by_index_rcu(sock_net(sk), ifindex); 903 + if (!dev) { 904 + rcu_read_unlock(); 905 + goto out; 906 + } 901 907 cf_sk->headroom = LL_RESERVED_SPACE_EXTRA(dev, headroom); 908 + mtu = dev->mtu; 909 + rcu_read_unlock(); 910 + 902 911 cf_sk->tailroom = tailroom; 903 - cf_sk->maxframe = dev->mtu - (headroom + tailroom); 904 - dev_put(dev); 912 + cf_sk->maxframe = mtu - (headroom + tailroom); 905 913 if (cf_sk->maxframe < 1) { 906 - pr_warning("CAIF: %s(): CAIF Interface MTU too small (%d)\n", 907 - __func__, dev->mtu); 908 - err = -ENODEV; 914 + pr_warning("CAIF: %s(): CAIF Interface MTU too small (%u)\n", 915 + __func__, mtu); 909 916 goto out; 910 917 } 911 918
+4 -4
net/core/ethtool.c
··· 348 348 if (info.cmd == ETHTOOL_GRXCLSRLALL) { 349 349 if (info.rule_cnt > 0) { 350 350 if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32)) 351 - rule_buf = kmalloc(info.rule_cnt * sizeof(u32), 351 + rule_buf = kzalloc(info.rule_cnt * sizeof(u32), 352 352 GFP_USER); 353 353 if (!rule_buf) 354 354 return -ENOMEM; ··· 397 397 (KMALLOC_MAX_SIZE - sizeof(*indir)) / sizeof(*indir->ring_index)) 398 398 return -ENOMEM; 399 399 full_size = sizeof(*indir) + sizeof(*indir->ring_index) * table_size; 400 - indir = kmalloc(full_size, GFP_USER); 400 + indir = kzalloc(full_size, GFP_USER); 401 401 if (!indir) 402 402 return -ENOMEM; 403 403 ··· 538 538 539 539 gstrings.len = ret; 540 540 541 - data = kmalloc(gstrings.len * ETH_GSTRING_LEN, GFP_USER); 541 + data = kzalloc(gstrings.len * ETH_GSTRING_LEN, GFP_USER); 542 542 if (!data) 543 543 return -ENOMEM; 544 544 ··· 775 775 if (regs.len > reglen) 776 776 regs.len = reglen; 777 777 778 - regbuf = kmalloc(reglen, GFP_USER); 778 + regbuf = kzalloc(reglen, GFP_USER); 779 779 if (!regbuf) 780 780 return -ENOMEM; 781 781
+4 -4
net/core/stream.c
··· 141 141 142 142 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 143 143 sk->sk_write_pending++; 144 - sk_wait_event(sk, &current_timeo, !sk->sk_err && 145 - !(sk->sk_shutdown & SEND_SHUTDOWN) && 146 - sk_stream_memory_free(sk) && 147 - vm_wait); 144 + sk_wait_event(sk, &current_timeo, sk->sk_err || 145 + (sk->sk_shutdown & SEND_SHUTDOWN) || 146 + (sk_stream_memory_free(sk) && 147 + !vm_wait)); 148 148 sk->sk_write_pending--; 149 149 150 150 if (vm_wait) {
+1 -1
net/ipv4/Kconfig
··· 413 413 If unsure, say Y. 414 414 415 415 config INET_LRO 416 - bool "Large Receive Offload (ipv4/tcp)" 416 + tristate "Large Receive Offload (ipv4/tcp)" 417 417 default y 418 418 ---help--- 419 419 Support for Large Receive Offload (ipv4/tcp).
+13 -1
net/ipv4/igmp.c
··· 834 834 int mark = 0; 835 835 836 836 837 - if (len == 8 || IGMP_V2_SEEN(in_dev)) { 837 + if (len == 8) { 838 838 if (ih->code == 0) { 839 839 /* Alas, old v1 router presents here. */ 840 840 ··· 856 856 igmpv3_clear_delrec(in_dev); 857 857 } else if (len < 12) { 858 858 return; /* ignore bogus packet; freed by caller */ 859 + } else if (IGMP_V1_SEEN(in_dev)) { 860 + /* This is a v3 query with v1 queriers present */ 861 + max_delay = IGMP_Query_Response_Interval; 862 + group = 0; 863 + } else if (IGMP_V2_SEEN(in_dev)) { 864 + /* this is a v3 query with v2 queriers present; 865 + * Interpretation of the max_delay code is problematic here. 866 + * A real v2 host would use ih_code directly, while v3 has a 867 + * different encoding. We use the v3 encoding as more likely 868 + * to be intended in a v3 query. 869 + */ 870 + max_delay = IGMPV3_MRC(ih3->code)*(HZ/IGMP_TIMER_SCALE); 859 871 } else { /* v3 */ 860 872 if (!pskb_may_pull(skb, sizeof(struct igmpv3_query))) 861 873 return;
+24 -4
net/ipv6/route.c
··· 1556 1556 * i.e. Path MTU discovery 1557 1557 */ 1558 1558 1559 - void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr, 1560 - struct net_device *dev, u32 pmtu) 1559 + static void rt6_do_pmtu_disc(struct in6_addr *daddr, struct in6_addr *saddr, 1560 + struct net *net, u32 pmtu, int ifindex) 1561 1561 { 1562 1562 struct rt6_info *rt, *nrt; 1563 - struct net *net = dev_net(dev); 1564 1563 int allfrag = 0; 1565 1564 1566 - rt = rt6_lookup(net, daddr, saddr, dev->ifindex, 0); 1565 + rt = rt6_lookup(net, daddr, saddr, ifindex, 0); 1567 1566 if (rt == NULL) 1568 1567 return; 1569 1568 ··· 1628 1629 } 1629 1630 out: 1630 1631 dst_release(&rt->dst); 1632 + } 1633 + 1634 + void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr, 1635 + struct net_device *dev, u32 pmtu) 1636 + { 1637 + struct net *net = dev_net(dev); 1638 + 1639 + /* 1640 + * RFC 1981 states that a node "MUST reduce the size of the packets it 1641 + * is sending along the path" that caused the Packet Too Big message. 1642 + * Since it's not possible in the general case to determine which 1643 + * interface was used to send the original packet, we update the MTU 1644 + * on the interface that will be used to send future packets. We also 1645 + * update the MTU on the interface that received the Packet Too Big in 1646 + * case the original packet was forced out that interface with 1647 + * SO_BINDTODEVICE or similar. This is the next best thing to the 1648 + * correct behaviour, which would be to update the MTU on all 1649 + * interfaces. 1650 + */ 1651 + rt6_do_pmtu_disc(daddr, saddr, net, pmtu, 0); 1652 + rt6_do_pmtu_disc(daddr, saddr, net, pmtu, dev->ifindex); 1631 1653 } 1632 1654 1633 1655 /*
+2
net/mac80211/agg-tx.c
··· 175 175 176 176 set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 177 177 178 + del_timer_sync(&tid_tx->addba_resp_timer); 179 + 178 180 /* 179 181 * After this packets are no longer handed right through 180 182 * to the driver but are put onto tid_tx->pending instead,
+2 -2
net/mac80211/status.c
··· 377 377 skb2 = skb_clone(skb, GFP_ATOMIC); 378 378 if (skb2) { 379 379 skb2->dev = prev_dev; 380 - netif_receive_skb(skb2); 380 + netif_rx(skb2); 381 381 } 382 382 } 383 383 ··· 386 386 } 387 387 if (prev_dev) { 388 388 skb->dev = prev_dev; 389 - netif_receive_skb(skb); 389 + netif_rx(skb); 390 390 skb = NULL; 391 391 } 392 392 rcu_read_unlock();
+1 -1
net/sched/cls_u32.c
··· 137 137 int toff = off + key->off + (off2 & key->offmask); 138 138 __be32 *data, _data; 139 139 140 - if (skb_headroom(skb) + toff < 0) 140 + if (skb_headroom(skb) + toff > INT_MAX) 141 141 goto out; 142 142 143 143 data = skb_header_pointer(skb, toff, 4, &_data);
+6 -2
net/sctp/auth.c
··· 543 543 id = ntohs(hmacs->hmac_ids[i]); 544 544 545 545 /* Check the id is in the supported range */ 546 - if (id > SCTP_AUTH_HMAC_ID_MAX) 546 + if (id > SCTP_AUTH_HMAC_ID_MAX) { 547 + id = 0; 547 548 continue; 549 + } 548 550 549 551 /* See is we support the id. Supported IDs have name and 550 552 * length fields set, so that we can allocated and use 551 553 * them. We can safely just check for name, for without the 552 554 * name, we can't allocate the TFM. 553 555 */ 554 - if (!sctp_hmac_list[id].hmac_name) 556 + if (!sctp_hmac_list[id].hmac_name) { 557 + id = 0; 555 558 continue; 559 + } 556 560 557 561 break; 558 562 }
+12 -1
net/sctp/socket.c
··· 916 916 /* Walk through the addrs buffer and count the number of addresses. */ 917 917 addr_buf = kaddrs; 918 918 while (walk_size < addrs_size) { 919 + if (walk_size + sizeof(sa_family_t) > addrs_size) { 920 + kfree(kaddrs); 921 + return -EINVAL; 922 + } 923 + 919 924 sa_addr = (struct sockaddr *)addr_buf; 920 925 af = sctp_get_af_specific(sa_addr->sa_family); 921 926 ··· 1007 1002 /* Walk through the addrs buffer and count the number of addresses. */ 1008 1003 addr_buf = kaddrs; 1009 1004 while (walk_size < addrs_size) { 1005 + if (walk_size + sizeof(sa_family_t) > addrs_size) { 1006 + err = -EINVAL; 1007 + goto out_free; 1008 + } 1009 + 1010 1010 sa_addr = (union sctp_addr *)addr_buf; 1011 1011 af = sctp_get_af_specific(sa_addr->sa.sa_family); 1012 - port = ntohs(sa_addr->v4.sin_port); 1013 1012 1014 1013 /* If the address family is not supported or if this address 1015 1014 * causes the address buffer to overflow return EINVAL. ··· 1022 1013 err = -EINVAL; 1023 1014 goto out_free; 1024 1015 } 1016 + 1017 + port = ntohs(sa_addr->v4.sin_port); 1025 1018 1026 1019 /* Save current address so we can work with it */ 1027 1020 memcpy(&to, sa_addr, af->sockaddr_len);
+1 -1
scripts/kconfig/conf.c
··· 427 427 if (sym->name && !sym_is_choice_value(sym)) { 428 428 printf("CONFIG_%s\n", sym->name); 429 429 } 430 - } else { 430 + } else if (input_mode != oldnoconfig) { 431 431 if (!conf_cnt++) 432 432 printf(_("*\n* Restart config...\n*\n")); 433 433 rootEntry = menu_get_parent_menu(menu);
-1
scripts/kconfig/expr.h
··· 165 165 struct symbol *sym; 166 166 struct property *prompt; 167 167 struct expr *dep; 168 - struct expr *dir_dep; 169 168 unsigned int flags; 170 169 char *help; 171 170 struct file *file;
+2 -5
scripts/kconfig/menu.c
··· 107 107 void menu_add_dep(struct expr *dep) 108 108 { 109 109 current_entry->dep = expr_alloc_and(current_entry->dep, menu_check_dep(dep)); 110 - current_entry->dir_dep = current_entry->dep; 111 110 } 112 111 113 112 void menu_set_type(int type) ··· 290 291 for (menu = parent->list; menu; menu = menu->next) 291 292 menu_finalize(menu); 292 293 } else if (sym) { 293 - /* ignore inherited dependencies for dir_dep */ 294 - sym->dir_dep.expr = expr_transform(expr_copy(parent->dir_dep)); 295 - sym->dir_dep.expr = expr_eliminate_dups(sym->dir_dep.expr); 296 - 297 294 basedep = parent->prompt ? parent->prompt->visible.expr : NULL; 298 295 basedep = expr_trans_compare(basedep, E_UNEQUAL, &symbol_no); 299 296 basedep = expr_eliminate_dups(expr_transform(basedep)); ··· 320 325 parent->next = last_menu->next; 321 326 last_menu->next = NULL; 322 327 } 328 + 329 + sym->dir_dep.expr = parent->dep; 323 330 } 324 331 for (menu = parent->list; menu; menu = menu->next) { 325 332 if (sym && sym_is_choice(sym) &&
+2
scripts/kconfig/symbol.c
··· 350 350 } 351 351 } 352 352 calc_newval: 353 + #if 0 353 354 if (sym->dir_dep.tri == no && sym->rev_dep.tri != no) { 354 355 fprintf(stderr, "warning: ("); 355 356 expr_fprint(sym->rev_dep.expr, stderr); ··· 359 358 expr_fprint(sym->dir_dep.expr, stderr); 360 359 fprintf(stderr, ")\n"); 361 360 } 361 + #endif 362 362 newval.tri = EXPR_OR(newval.tri, sym->rev_dep.tri); 363 363 } 364 364 if (newval.tri == mod && sym_get_type(sym) == S_BOOLEAN)
+3 -1
sound/core/rawmidi.c
··· 535 535 { 536 536 struct snd_rawmidi_file *rfile; 537 537 struct snd_rawmidi *rmidi; 538 + struct module *module; 538 539 539 540 rfile = file->private_data; 540 541 rmidi = rfile->rmidi; 541 542 rawmidi_release_priv(rfile); 542 543 kfree(rfile); 544 + module = rmidi->card->module; 543 545 snd_card_file_remove(rmidi->card, file); 544 - module_put(rmidi->card->module); 546 + module_put(module); 545 547 return 0; 546 548 } 547 549
+2 -2
sound/oss/soundcard.c
··· 391 391 case SND_DEV_DSP: 392 392 case SND_DEV_DSP16: 393 393 case SND_DEV_AUDIO: 394 - return audio_ioctl(dev, file, cmd, p); 394 + ret = audio_ioctl(dev, file, cmd, p); 395 395 break; 396 396 397 397 case SND_DEV_MIDIN: 398 - return MIDIbuf_ioctl(dev, file, cmd, p); 398 + ret = MIDIbuf_ioctl(dev, file, cmd, p); 399 399 break; 400 400 401 401 }
+2
sound/pci/hda/patch_sigmatel.c
··· 1747 1747 "HP dv6", STAC_HP_DV5), 1748 1748 SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x3061, 1749 1749 "HP dv6", STAC_HP_DV5), /* HP dv6-1110ax */ 1750 + SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x363e, 1751 + "HP DV6", STAC_HP_DV5), 1750 1752 SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x7010, 1751 1753 "HP", STAC_HP_DV5), 1752 1754 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0233,
+12
tools/perf/perf.h
··· 73 73 #define cpu_relax() asm volatile("":::"memory") 74 74 #endif 75 75 76 + #ifdef __mips__ 77 + #include "../../arch/mips/include/asm/unistd.h" 78 + #define rmb() asm volatile( \ 79 + ".set mips2\n\t" \ 80 + "sync\n\t" \ 81 + ".set mips0" \ 82 + : /* no output */ \ 83 + : /* no input */ \ 84 + : "memory") 85 + #define cpu_relax() asm volatile("" ::: "memory") 86 + #endif 87 + 76 88 #include <time.h> 77 89 #include <unistd.h> 78 90 #include <sys/types.h>