Merge branch 'for-spi' of git://git.kernel.org/pub/scm/linux/kernel/git/vapier/blackfin into spi/next

+2091 -1433
+96 -277
Documentation/networking/e1000.txt
···
 Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
 ===============================================================

- September 26, 2006
-

 Contents
 ========

- - In This Release
 - Identifying Your Adapter
- - Building and Installation
 - Command Line Parameters
 - Speed and Duplex Configuration
 - Additional Configurations
- - Known Issues
 - Support
-
-
- In This Release
- ===============
-
- This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family
- of Adapters. This driver includes support for Itanium(R)2-based systems.
-
- For questions related to hardware requirements, refer to the documentation
- supplied with your Intel PRO/1000 adapter. All hardware requirements listed
- apply to use with Linux.
-
- The following features are now available in supported kernels:
- - Native VLANs
- - Channel Bonding (teaming)
- - SNMP
-
- Channel Bonding documentation can be found in the Linux kernel source:
- /Documentation/networking/bonding.txt
-
- The driver information previously displayed in the /proc filesystem is not
- supported in this release. Alternatively, you can use ethtool (version 1.6
- or later), lspci, and ifconfig to obtain the same information.
-
- Instructions on updating ethtool can be found in the section "Additional
- Configurations" later in this document.
-
- NOTE: The Intel(R) 82562v 10/100 Network Connection only provides 10/100
- support.
-

 Identifying Your Adapter
 ========================
···
 For more information on how to identify your adapter, go to the Adapter &
 Driver ID Guide at:

- http://support.intel.com/support/network/adapter/pro100/21397.htm

 For the latest Intel network drivers for Linux, refer to the following
 website. In the search field, enter your adapter name or type, or use the
 networking link on the left to search for your adapter:

- http://downloadfinder.intel.com/scripts-df/support_intel.asp
-

 Command Line Parameters
 =======================
-
- If the driver is built as a module, the following optional parameters
- are used by entering them on the command line with the modprobe command
- using this syntax:
-
- modprobe e1000 [<option>=<VAL1>,<VAL2>,...]
-
- For example, with two PRO/1000 PCI adapters, entering:
-
- modprobe e1000 TxDescriptors=80,128
-
- loads the e1000 driver with 80 TX descriptors for the first adapter and
- 128 TX descriptors for the second adapter.

 The default value for each parameter is generally the recommended setting,
 unless otherwise noted.
···
 RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay
 parameters, see the application note at:
 http://www.intel.com/design/network/applnots/ap450.htm
-
- A descriptor describes a data buffer and attributes related to
- the data buffer. This information is accessed by the hardware.
-

 AutoNeg
 -------
···
 NOTE: Refer to the Speed and Duplex section of this readme for more
 information on the AutoNeg parameter.

-
 Duplex
 ------
 (Supported only on adapters with copper connections)
···
 link partner is forced (either full or half), Duplex defaults to half-
 duplex.

-
 FlowControl
 -----------
 Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
···
 This parameter controls the automatic generation(Tx) and response(Rx)
 to Ethernet PAUSE frames.

-
 InterruptThrottleRate
 ---------------------
 (not supported on Intel(R) 82542, 82543 or 82544-based adapters)
- Valid Range: 0,1,3,100-100000 (0=off, 1=dynamic, 3=dynamic conservative)
 Default Value: 3

 The driver can limit the amount of interrupts per second that the adapter
- will generate for incoming packets. It does this by writing a value to the
- adapter that is based on the maximum amount of interrupts that the adapter
 will generate per second.

 Setting InterruptThrottleRate to a value greater or equal to 100
···
 load on the system and can lower CPU utilization under heavy load,
 but will increase latency as packets are not processed as quickly.

- The default behaviour of the driver previously assumed a static
- InterruptThrottleRate value of 8000, providing a good fallback value for
- all traffic types,but lacking in small packet performance and latency.
- The hardware can handle many more small packets per second however, and
 for this reason an adaptive interrupt moderation algorithm was implemented.

 Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in which
- it dynamically adjusts the InterruptThrottleRate value based on the traffic
 that it receives. After determining the type of incoming traffic in the last
- timeframe, it will adjust the InterruptThrottleRate to an appropriate value
 for that traffic.

 The algorithm classifies the incoming traffic every interval into
- classes. Once the class is determined, the InterruptThrottleRate value is
- adjusted to suit that traffic type the best. There are three classes defined:
 "Bulk traffic", for large amounts of packets of normal size; "Low latency",
 for small amounts of traffic and/or a significant percentage of small
- packets; and "Lowest latency", for almost completely small packets or
 minimal traffic.

- In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
- for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
- latency" or "Lowest latency" class, the InterruptThrottleRate is increased
 stepwise to 20000. This default mode is suitable for most applications.

 For situations where low latency is vital such as cluster or
 grid computing, the algorithm can reduce latency even more when
 InterruptThrottleRate is set to mode 1. In this mode, which operates
- the same as mode 3, the InterruptThrottleRate will be increased stepwise to
 70000 for traffic in class "Lowest latency".

 Setting InterruptThrottleRate to 0 turns off any interrupt moderation
 and may improve small packet latency, but is generally not suitable
···
 be platform-specific. If CPU utilization is not a concern, use
 RX_POLLING (NAPI) and default driver settings.

-
-
 RxDescriptors
 -------------
 Valid Range: 80-256 for 82542 and 82543-based adapters
···
 incoming packets, at the expense of increased system memory utilization.

 Each descriptor is 16 bytes. A receive buffer is also allocated for each
- descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending
 on the MTU setting. The maximum MTU size is 16110.

- NOTE: MTU designates the frame size. It only needs to be set for Jumbo
- Frames. Depending on the available system resources, the request
- for a higher number of receive descriptors may be denied. In this
 case, use a lower number.
-

 RxIntDelay
 ----------
···
 restoring the network connection. To eliminate the potential
 for the hang ensure that RxIntDelay is set to 0.

-
 RxAbsIntDelay
 -------------
 (This parameter is supported only on 82540, 82545 and later adapters.)
···
 along with RxIntDelay, may improve traffic throughput in specific network
 conditions.

-
 Speed
 -----
 (This parameter is supported only on adapters with copper connections.)
···
 (Mbps). If this parameter is not specified or is set to 0 and the link
 partner is set to auto-negotiate, the board will auto-detect the correct
 speed. Duplex should also be set when Speed is set to either 10 or 100.
-

 TxDescriptors
 -------------
···
 higher number of transmit descriptors may be denied. In this case,
 use a lower number.


 TxIntDelay
 ----------
···
 efficiency if properly tuned for specific network traffic. If the
 system is reporting dropped transmits, this value may be set too high
 causing the driver to run out of available transmit descriptors.
-

 TxAbsIntDelay
 -------------
···
 A value of '1' indicates that the driver should enable IP checksum
 offload for received packets (both UDP and TCP) to the adapter hardware.


 Speed and Duplex Configuration
 ==============================
···
 parameter should not be used. Instead, use the Speed and Duplex parameters
 previously mentioned to force the adapter to the same speed and duplex.

-
 Additional Configurations
 =========================
-
- Configuring the Driver on Different Distributions
- -------------------------------------------------
- Configuring a network driver to load properly when the system is started
- is distribution dependent. Typically, the configuration process involves
- adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well
- as editing other system startup scripts and/or configuration files. Many
- popular Linux distributions ship with tools to make these changes for you.
- To learn the proper way to configure a network device for your system,
- refer to your distribution documentation. If during this process you are
- asked for the driver or module name, the name for the Linux Base Driver
- for the Intel(R) PRO/1000 Family of Adapters is e1000.
-
- As an example, if you install the e1000 driver for two PRO/1000 adapters
- (eth0 and eth1) and set the speed and duplex to 10full and 100half, add
- the following to modules.conf or or modprobe.conf:
-
- alias eth0 e1000
- alias eth1 e1000
- options e1000 Speed=10,100 Duplex=2,1
-
- Viewing Link Messages
- ---------------------
- Link messages will not be displayed to the console if the distribution is
- restricting system messages. In order to see network driver link messages
- on your console, set dmesg to eight by entering the following:
-
- dmesg -n 8
-
- NOTE: This setting is not saved across reboots.

 Jumbo Frames
 ------------
···
 setting in a different location.

 Notes:
-
- - To enable Jumbo Frames, increase the MTU size on the interface beyond
- 1500.

 - The maximum MTU setting for Jumbo Frames is 16110. This value coincides
 with the maximum Jumbo Frames size of 16128.
···
 - Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or
 loss of link.

- - Some Intel gigabit adapters that support Jumbo Frames have a frame size
- limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
- The adapters with this limitation are based on the Intel(R) 82571EB,
- 82572EI, 82573L and 80003ES2LAN controller. These correspond to the
- following product names:
- Intel(R) PRO/1000 PT Server Adapter
- Intel(R) PRO/1000 PT Desktop Adapter
- Intel(R) PRO/1000 PT Network Connection
- Intel(R) PRO/1000 PT Dual Port Server Adapter
- Intel(R) PRO/1000 PT Dual Port Network Connection
- Intel(R) PRO/1000 PF Server Adapter
- Intel(R) PRO/1000 PF Network Connection
- Intel(R) PRO/1000 PF Dual Port Server Adapter
- Intel(R) PRO/1000 PB Server Connection
- Intel(R) PRO/1000 PL Network Connection
- Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
- Intel(R) PRO/1000 EB Backplane Connection with I/O Acceleration
- Intel(R) PRO/1000 PT Quad Port Server Adapter
-
 - Adapters based on the Intel(R) 82542 and 82573V/E controller do not
 support Jumbo Frames. These correspond to the following product names:
 Intel(R) PRO/1000 Gigabit Server Adapter
 Intel(R) PRO/1000 PM Network Connection
-
- - The following adapters do not support Jumbo Frames:
- Intel(R) 82562V 10/100 Network Connection
- Intel(R) 82566DM Gigabit Network Connection
- Intel(R) 82566DC Gigabit Network Connection
- Intel(R) 82566MM Gigabit Network Connection
- Intel(R) 82566MC Gigabit Network Connection
- Intel(R) 82562GT 10/100 Network Connection
- Intel(R) 82562G 10/100 Network Connection
-

 Ethtool
 -------
···
 The latest release of ethtool can be found from
 http://sourceforge.net/projects/gkernel.

- NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support
- for a more complete ethtool feature set can be enabled by upgrading
- ethtool to ethtool-1.8.1.
-
 Enabling Wake on LAN* (WoL)
 ---------------------------
- WoL is configured through the Ethtool* utility. Ethtool is included with
- all versions of Red Hat after Red Hat 7.2. For other Linux distributions,
- download and install Ethtool from the following website:
- http://sourceforge.net/projects/gkernel.
-
- For instructions on enabling WoL with Ethtool, refer to the website listed
- above.

 WoL will be enabled on the system during the next shut down or reboot.
 For this driver version, in order to enable WoL, the e1000 driver must be
 loaded when shutting down or rebooting the system.
-
- Wake On LAN is only supported on port A for the following devices:
- Intel(R) PRO/1000 PT Dual Port Network Connection
- Intel(R) PRO/1000 PT Dual Port Server Connection
- Intel(R) PRO/1000 PT Dual Port Server Adapter
- Intel(R) PRO/1000 PF Dual Port Server Adapter
- Intel(R) PRO/1000 PT Quad Port Server Adapter
-
- NAPI
- ----
- NAPI (Rx polling mode) is enabled in the e1000 driver.
-
- See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
-
-
- Known Issues
- ============
-
- Dropped Receive Packets on Half-duplex 10/100 Networks
- ------------------------------------------------------
- If you have an Intel PCI Express adapter running at 10mbps or 100mbps, half-
- duplex, you may observe occasional dropped receive packets. There are no
- workarounds for this problem in this network configuration. The network must
- be updated to operate in full-duplex, and/or 1000mbps only.
-
- Jumbo Frames System Requirement
- -------------------------------
- Memory allocation failures have been observed on Linux systems with 64 MB
- of RAM or less that are running Jumbo Frames. If you are using Jumbo
- Frames, your system may require more than the advertised minimum
- requirement of 64 MB of system memory.
-
- Performance Degradation with Jumbo Frames
- -----------------------------------------
- Degradation in throughput performance may be observed in some Jumbo frames
- environments. If this is observed, increasing the application's socket
- buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values
- may help. See the specific application manual and
- /usr/src/linux*/Documentation/
- networking/ip-sysctl.txt for more details.
-
- Jumbo Frames on Foundry BigIron 8000 switch
- -------------------------------------------
- There is a known issue using Jumbo frames when connected to a Foundry
- BigIron 8000 switch. This is a 3rd party limitation. If you experience
- loss of packets, lower the MTU size.
-
- Allocating Rx Buffers when Using Jumbo Frames
- ---------------------------------------------
- Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
- the available memory is heavily fragmented. This issue may be seen with PCI-X
- adapters or with packet split disabled. This can be reduced or eliminated
- by changing the amount of available memory for receive buffer allocation, by
- increasing /proc/sys/vm/min_free_kbytes.
-
- Multiple Interfaces on Same Ethernet Broadcast Network
- ------------------------------------------------------
- Due to the default ARP behavior on Linux, it is not possible to have
- one system on two IP networks in the same Ethernet broadcast domain
- (non-partitioned switch) behave as expected. All Ethernet interfaces
- will respond to IP traffic for any IP address assigned to the system.
- This results in unbalanced receive traffic.
-
- If you have multiple interfaces in a server, either turn on ARP
- filtering by entering:
-
- echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
- (this only works if your kernel's version is higher than 2.4.5),
-
- NOTE: This setting is not saved across reboots. The configuration
- change can be made permanent by adding the line:
- net.ipv4.conf.all.arp_filter = 1
- to the file /etc/sysctl.conf
-
- or,
-
- install the interfaces in separate broadcast domains (either in
- different switches or in a switch partitioned to VLANs).
-
- 82541/82547 can't link or are slow to link with some link partners
- -----------------------------------------------------------------
- There is a known compatibility issue with 82541/82547 and some
- low-end switches where the link will not be established, or will
- be slow to establish. In particular, these switches are known to
- be incompatible with 82541/82547:
-
- Planex FXG-08TE
- I-O Data ETG-SH8
-
- To workaround this issue, the driver can be compiled with an override
- of the PHY's master/slave setting. Forcing master or forcing slave
- mode will improve time-to-link.
-
- # make CFLAGS_EXTRA=-DE1000_MASTER_SLAVE=<n>
-
- Where <n> is:
-
- 0 = Hardware default
- 1 = Master mode
- 2 = Slave mode
- 3 = Auto master/slave
-
- Disable rx flow control with ethtool
- ------------------------------------
- In order to disable receive flow control using ethtool, you must turn
- off auto-negotiation on the same command line.
-
- For example:
-
- ethtool -A eth? autoneg off rx off
-
- Unplugging network cable while ethtool -p is running
- ----------------------------------------------------
- In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging
- the network cable while ethtool -p is running will cause the system to
- become unresponsive to keyboard commands, except for control-alt-delete.
- Restarting the system appears to be the only remedy.
-

 Support
 =======
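The RxDescriptors and TxDescriptors sections above fix the descriptor size at 16 bytes, which makes the descriptor ring's memory footprint easy to estimate. A minimal sketch (the 256-descriptor count is simply the documented maximum for 82542/82543-based adapters, used here only as an example value):

```shell
# Estimate descriptor-ring memory for a given ring size.
# Each e1000 descriptor is 16 bytes (per the RxDescriptors section above);
# the per-descriptor receive buffer (2048-16384 bytes) is extra.
DESC_SIZE=16
RX_DESCRIPTORS=256
echo $((DESC_SIZE * RX_DESCRIPTORS))   # prints 4096 (bytes of ring memory)
```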
···
 Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
 ===============================================================

+ Intel Gigabit Linux driver.
+ Copyright(c) 1999 - 2010 Intel Corporation.

 Contents
 ========

 - Identifying Your Adapter
 - Command Line Parameters
 - Speed and Duplex Configuration
 - Additional Configurations
 - Support

 Identifying Your Adapter
 ========================
···
 For more information on how to identify your adapter, go to the Adapter &
 Driver ID Guide at:

+ http://support.intel.com/support/go/network/adapter/idguide.htm

 For the latest Intel network drivers for Linux, refer to the following
 website. In the search field, enter your adapter name or type, or use the
 networking link on the left to search for your adapter:

+ http://support.intel.com/support/go/network/adapter/home.htm

 Command Line Parameters
 =======================

 The default value for each parameter is generally the recommended setting,
 unless otherwise noted.
···
 RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay
 parameters, see the application note at:
 http://www.intel.com/design/network/applnots/ap450.htm

 AutoNeg
 -------
···
 NOTE: Refer to the Speed and Duplex section of this readme for more
 information on the AutoNeg parameter.

 Duplex
 ------
 (Supported only on adapters with copper connections)
···
 link partner is forced (either full or half), Duplex defaults to half-
 duplex.

 FlowControl
 -----------
 Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
···
 This parameter controls the automatic generation(Tx) and response(Rx)
 to Ethernet PAUSE frames.

 InterruptThrottleRate
 ---------------------
 (not supported on Intel(R) 82542, 82543 or 82544-based adapters)
+ Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
+ 4=simplified balancing)
 Default Value: 3

 The driver can limit the amount of interrupts per second that the adapter
+ will generate for incoming packets. It does this by writing a value to the
+ adapter that is based on the maximum amount of interrupts that the adapter
 will generate per second.

 Setting InterruptThrottleRate to a value greater or equal to 100
···
 load on the system and can lower CPU utilization under heavy load,
 but will increase latency as packets are not processed as quickly.

+ The default behaviour of the driver previously assumed a static
+ InterruptThrottleRate value of 8000, providing a good fallback value for
+ all traffic types,but lacking in small packet performance and latency.
+ The hardware can handle many more small packets per second however, and
 for this reason an adaptive interrupt moderation algorithm was implemented.

 Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in which
+ it dynamically adjusts the InterruptThrottleRate value based on the traffic
 that it receives. After determining the type of incoming traffic in the last
+ timeframe, it will adjust the InterruptThrottleRate to an appropriate value
 for that traffic.

 The algorithm classifies the incoming traffic every interval into
+ classes. Once the class is determined, the InterruptThrottleRate value is
+ adjusted to suit that traffic type the best. There are three classes defined:
 "Bulk traffic", for large amounts of packets of normal size; "Low latency",
 for small amounts of traffic and/or a significant percentage of small
+ packets; and "Lowest latency", for almost completely small packets or
 minimal traffic.

+ In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
+ for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
+ latency" or "Lowest latency" class, the InterruptThrottleRate is increased
 stepwise to 20000. This default mode is suitable for most applications.

 For situations where low latency is vital such as cluster or
 grid computing, the algorithm can reduce latency even more when
 InterruptThrottleRate is set to mode 1. In this mode, which operates
+ the same as mode 3, the InterruptThrottleRate will be increased stepwise to
 70000 for traffic in class "Lowest latency".
+
+ In simplified mode the interrupt rate is based on the ratio of Tx and
+ Rx traffic. If the bytes per second rate is approximately equal, the
+ interrupt rate will drop as low as 2000 interrupts per second. If the
+ traffic is mostly transmit or mostly receive, the interrupt rate could
+ be as high as 8000.

 Setting InterruptThrottleRate to 0 turns off any interrupt moderation
 and may improve small packet latency, but is generally not suitable
···
 be platform-specific. If CPU utilization is not a concern, use
 RX_POLLING (NAPI) and default driver settings.

 RxDescriptors
 -------------
 Valid Range: 80-256 for 82542 and 82543-based adapters
···
 incoming packets, at the expense of increased system memory utilization.

 Each descriptor is 16 bytes. A receive buffer is also allocated for each
+ descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending
 on the MTU setting. The maximum MTU size is 16110.

+ NOTE: MTU designates the frame size. It only needs to be set for Jumbo
+ Frames. Depending on the available system resources, the request
+ for a higher number of receive descriptors may be denied. In this
 case, use a lower number.

 RxIntDelay
 ----------
···
 restoring the network connection. To eliminate the potential
 for the hang ensure that RxIntDelay is set to 0.

 RxAbsIntDelay
 -------------
 (This parameter is supported only on 82540, 82545 and later adapters.)
···
 along with RxIntDelay, may improve traffic throughput in specific network
 conditions.

 Speed
 -----
 (This parameter is supported only on adapters with copper connections.)
···
 (Mbps). If this parameter is not specified or is set to 0 and the link
 partner is set to auto-negotiate, the board will auto-detect the correct
 speed. Duplex should also be set when Speed is set to either 10 or 100.

 TxDescriptors
 -------------
···
 higher number of transmit descriptors may be denied. In this case,
 use a lower number.

+ TxDescriptorStep
+ ----------------
+ Valid Range: 1 (use every Tx Descriptor)
+ 4 (use every 4th Tx Descriptor)
+
+ Default Value: 1 (use every Tx Descriptor)
+
+ On certain non-Intel architectures, it has been observed that intense TX
+ traffic bursts of short packets may result in an improper descriptor
+ writeback. If this occurs, the driver will report a "TX Timeout" and reset
+ the adapter, after which the transmit flow will restart, though data may
+ have stalled for as much as 10 seconds before it resumes.
+
+ The improper writeback does not occur on the first descriptor in a system
+ memory cache-line, which is typically 32 bytes, or 4 descriptors long.
+
+ Setting TxDescriptorStep to a value of 4 will ensure that all TX descriptors
+ are aligned to the start of a system memory cache line, and so this problem
+ will not occur.
+
+ NOTES: Setting TxDescriptorStep to 4 effectively reduces the number of
+ TxDescriptors available for transmits to 1/4 of the normal allocation.
+ This has a possible negative performance impact, which may be
+ compensated for by allocating more descriptors using the TxDescriptors
+ module parameter.
+
+ There are other conditions which may result in "TX Timeout", which will
+ not be resolved by the use of the TxDescriptorStep parameter. As the
+ issue addressed by this parameter has never been observed on Intel
+ Architecture platforms, it should not be used on Intel platforms.

 TxIntDelay
 ----------
···
 efficiency if properly tuned for specific network traffic. If the
 system is reporting dropped transmits, this value may be set too high
 causing the driver to run out of available transmit descriptors.

 TxAbsIntDelay
 -------------
···
 A value of '1' indicates that the driver should enable IP checksum
 offload for received packets (both UDP and TCP) to the adapter hardware.

+ Copybreak
+ ---------
+ Valid Range: 0-xxxxxxx (0=off)
+ Default Value: 256
+ Usage: insmod e1000.ko copybreak=128
+
+ Driver copies all packets below or equaling this size to a fresh Rx
+ buffer before handing it up the stack.
+
+ This parameter is different than other parameters, in that it is a
+ single (not 1,1,1 etc.) parameter applied to all driver instances and
+ it is also available during runtime at
+ /sys/module/e1000/parameters/copybreak
+
+ SmartPowerDownEnable
+ --------------------
+ Valid Range: 0-1
+ Default Value: 0 (disabled)
+
+ Allows PHY to turn off in lower power states. The user can turn off
+ this parameter in supported chipsets.
+
+ KumeranLockLoss
+ ---------------
+ Valid Range: 0-1
+ Default Value: 1 (enabled)
+
+ This workaround skips resetting the PHY at shutdown for the initial
+ silicon releases of ICH8 systems.

 Speed and Duplex Configuration
 ==============================
···
 parameter should not be used. Instead, use the Speed and Duplex parameters
 previously mentioned to force the adapter to the same speed and duplex.

 Additional Configurations
 =========================

 Jumbo Frames
 ------------
···
 setting in a different location.

 Notes:
+ Degradation in throughput performance may be observed in some Jumbo frames
+ environments. If this is observed, increasing the application's socket buffer
+ size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
+ See the specific application manual and /usr/src/linux*/Documentation/
+ networking/ip-sysctl.txt for more details.

 - The maximum MTU setting for Jumbo Frames is 16110. This value coincides
 with the maximum Jumbo Frames size of 16128.
···
 - Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or
 loss of link.

 - Adapters based on the Intel(R) 82542 and 82573V/E controller do not
 support Jumbo Frames. These correspond to the following product names:
 Intel(R) PRO/1000 Gigabit Server Adapter
 Intel(R) PRO/1000 PM Network Connection

 Ethtool
 -------
···
 The latest release of ethtool can be found from
 http://sourceforge.net/projects/gkernel.

 Enabling Wake on LAN* (WoL)
 ---------------------------
+ WoL is configured through the Ethtool* utility.

 WoL will be enabled on the system during the next shut down or reboot.
 For this driver version, in order to enable WoL, the e1000 driver must be
 loaded when shutting down or rebooting the system.

 Support
 =======
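The Jumbo Frames notes above state that the maximum MTU of 16110 "coincides with" the maximum Jumbo Frame size of 16128. The 18-byte gap between the two is the standard Ethernet framing overhead: a 14-byte header plus a 4-byte frame check sequence. A quick sketch of that relationship (the header/FCS split is standard Ethernet knowledge, not stated in the text):

```shell
# Frame size = MTU + Ethernet header + FCS, per the Jumbo Frames notes.
FRAME_MAX=16128
ETH_HLEN=14   # dst MAC (6) + src MAC (6) + EtherType (2)
ETH_FCS=4     # frame check sequence (CRC)
echo $((FRAME_MAX - ETH_HLEN - ETH_FCS))   # prints 16110, the documented max MTU
```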
+302
Documentation/networking/e1000e.txt
···
···
+ Linux* Driver for Intel(R) Network Connection
+ ===============================================================
+
+ Intel Gigabit Linux driver.
+ Copyright(c) 1999 - 2010 Intel Corporation.
+
+ Contents
+ ========
+
+ - Identifying Your Adapter
+ - Command Line Parameters
+ - Additional Configurations
+ - Support
+
+ Identifying Your Adapter
+ ========================
+
+ The e1000e driver supports all PCI Express Intel(R) Gigabit Network
+ Connections, except those that are 82575, 82576 and 82580-based*.
+
+ * NOTE: The Intel(R) PRO/1000 P Dual Port Server Adapter is supported by
+ the e1000 driver, not the e1000e driver due to the 82546 part being used
+ behind a PCI Express bridge.
+
+ For more information on how to identify your adapter, go to the Adapter &
+ Driver ID Guide at:
+
+ http://support.intel.com/support/go/network/adapter/idguide.htm
+
+ For the latest Intel network drivers for Linux, refer to the following
+ website. In the search field, enter your adapter name or type, or use the
+ networking link on the left to search for your adapter:
+
+ http://support.intel.com/support/go/network/adapter/home.htm
+
+ Command Line Parameters
+ =======================
+
+ The default value for each parameter is generally the recommended setting,
+ unless otherwise noted.
+
+ NOTES: For more information about the InterruptThrottleRate,
+ RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay
+ parameters, see the application note at:
+ http://www.intel.com/design/network/applnots/ap450.htm
+
+ InterruptThrottleRate
+ ---------------------
+ Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
+ 4=simplified balancing)
+ Default Value: 3
+
+ The driver can limit the amount of interrupts per second that the adapter
+ will generate for incoming packets. It does this by writing a value to the
+ adapter that is based on the maximum amount of interrupts that the adapter
+ will generate per second.
+
+ Setting InterruptThrottleRate to a value greater or equal to 100
+ will program the adapter to send out a maximum of that many interrupts
+ per second, even if more packets have come in. This reduces interrupt
+ load on the system and can lower CPU utilization under heavy load,
+ but will increase latency as packets are not processed as quickly.
+
+ The driver has two adaptive modes (setting 1 or 3) in which
+ it dynamically adjusts the InterruptThrottleRate value based on the traffic
+ that it receives. After determining the type of incoming traffic in the last
+ timeframe, it will adjust the InterruptThrottleRate to an appropriate value
+ for that traffic.
+
+ The algorithm classifies the incoming traffic every interval into
+ classes. Once the class is determined, the InterruptThrottleRate value is
+ adjusted to suit that traffic type the best. There are three classes defined:
+ "Bulk traffic", for large amounts of packets of normal size; "Low latency",
+ for small amounts of traffic and/or a significant percentage of small
+ packets; and "Lowest latency", for almost completely small packets or
+ minimal traffic.
+
+ In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
+ for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
+ latency" or "Lowest latency" class, the InterruptThrottleRate is increased
+ stepwise to 20000. This default mode is suitable for most applications.
+
+ For situations where low latency is vital such as cluster or
+ grid computing, the algorithm can reduce latency even more when
+ InterruptThrottleRate is set to mode 1. In this mode, which operates
+ the same as mode 3, the InterruptThrottleRate will be increased stepwise to
+ 70000 for traffic in class "Lowest latency".
+
+ In simplified mode the interrupt rate is based on the ratio of Tx and
+ Rx traffic. If the bytes per second rate is approximately equal the
+ interrupt rate will drop as low as 2000 interrupts per second. If the
+ traffic is mostly transmit or mostly receive, the interrupt rate could
+ be as high as 8000.
+
+ Setting InterruptThrottleRate to 0 turns off any interrupt moderation
+ and may improve small packet latency, but is generally not suitable
+ for bulk throughput traffic.
+
+ NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and
+ RxAbsIntDelay parameters. In other words, minimizing the receive
+ and/or transmit absolute delays does not force the controller to
+ generate more interrupts than what the Interrupt Throttle Rate
+ allows.
+
+ NOTE: When e1000e is loaded with default settings and multiple adapters
+ are in use simultaneously, the CPU utilization may increase non-
+ linearly. In order to limit the CPU utilization without impacting
+ the overall throughput, we recommend that you load the driver as
+ follows:
+
+ modprobe e1000e InterruptThrottleRate=3000,3000,3000
+
+ This sets the InterruptThrottleRate to 3000 interrupts/sec for
+ the first, second, and third instances of the driver. The range
+ of 2000 to 3000 interrupts per second works on a majority of
+ systems and is a good starting point, but the optimal value will
+ be platform-specific. If CPU utilization is not a concern, use
+ RX_POLLING (NAPI) and default driver settings.
+
+ RxIntDelay
+ ----------
+ Valid Range: 0-65535 (0=off)
+ Default Value: 0
+
+ This value delays the generation of receive interrupts in units of 1.024
+ microseconds. Receive interrupt reduction can improve CPU efficiency if
+ properly tuned for specific network traffic. Increasing this value adds
+ extra latency to frame reception and can end up decreasing the throughput
+ of TCP traffic. If the system is reporting dropped receives, this value
+ may be set too high, causing the driver to run out of available receive
+ descriptors.
+
+ CAUTION: When setting RxIntDelay to a value other than 0, adapters may
+ hang (stop transmitting) under certain network conditions. If
+ this occurs a NETDEV WATCHDOG message is logged in the system
+ event log. In addition, the controller is automatically reset,
+ restoring the network connection. To eliminate the potential
+ for the hang ensure that RxIntDelay is set to 0.
+
+ RxAbsIntDelay
+ -------------
+ Valid Range: 0-65535 (0=off)
+ Default Value: 8
+
+ This value, in units of 1.024 microseconds, limits the delay in which a
+ receive interrupt is generated. Useful only if RxIntDelay is non-zero,
+ this value ensures that an interrupt is generated after the initial
+ packet is received within the set amount of time. Proper tuning,
+ along with RxIntDelay, may improve traffic throughput in specific network
+ conditions.
+
+ TxIntDelay
+ ----------
+ Valid Range: 0-65535 (0=off)
+ Default Value: 8
+
+ This value delays the generation of transmit interrupts in units of
+ 1.024 microseconds. Transmit interrupt reduction can improve CPU
+ efficiency if properly tuned for specific network traffic. If the
+ system is reporting dropped transmits, this value may be set too high
+ causing the driver to run out of available transmit descriptors.
162 + 163 + TxAbsIntDelay 164 + ------------- 165 + Valid Range: 0-65535 (0=off) 166 + Default Value: 32 167 + 168 + This value, in units of 1.024 microseconds, limits the delay in which a 169 + transmit interrupt is generated. Useful only if TxIntDelay is non-zero, 170 + this value ensures that an interrupt is generated after the initial 171 + packet is sent on the wire within the set amount of time. Proper tuning, 172 + along with TxIntDelay, may improve traffic throughput in specific 173 + network conditions. 174 + 175 + Copybreak 176 + --------- 177 + Valid Range: 0-xxxxxxx (0=off) 178 + Default Value: 256 179 + 180 + Driver copies all packets below or equaling this size to a fresh Rx 181 + buffer before handing it up the stack. 182 + 183 + This parameter is different than other parameters, in that it is a 184 + single (not 1,1,1 etc.) parameter applied to all driver instances and 185 + it is also available during runtime at 186 + /sys/module/e1000e/parameters/copybreak 187 + 188 + SmartPowerDownEnable 189 + -------------------- 190 + Valid Range: 0-1 191 + Default Value: 0 (disabled) 192 + 193 + Allows PHY to turn off in lower power states. The user can set this parameter 194 + in supported chipsets. 195 + 196 + KumeranLockLoss 197 + --------------- 198 + Valid Range: 0-1 199 + Default Value: 1 (enabled) 200 + 201 + This workaround skips resetting the PHY at shutdown for the initial 202 + silicon releases of ICH8 systems. 203 + 204 + IntMode 205 + ------- 206 + Valid Range: 0-2 (0=legacy, 1=MSI, 2=MSI-X) 207 + Default Value: 2 208 + 209 + Allows changing the interrupt mode at module load time, without requiring a 210 + recompile. If the driver load fails to enable a specific interrupt mode, the 211 + driver will try other interrupt modes, from least to most compatible. The 212 + interrupt order is MSI-X, MSI, Legacy. If specifying MSI (IntMode=1) 213 + interrupts, only MSI and Legacy will be attempted. 
214 + 215 + CrcStripping 216 + ------------ 217 + Valid Range: 0-1 218 + Default Value: 1 (enabled) 219 + 220 + Strip the CRC from received packets before sending up the network stack. If 221 + you have a machine with a BMC enabled but cannot receive IPMI traffic after 222 + loading or enabling the driver, try disabling this feature. 223 + 224 + WriteProtectNVM 225 + --------------- 226 + Valid Range: 0-1 227 + Default Value: 1 (enabled) 228 + 229 + Set the hardware to ignore all write/erase cycles to the GbE region in the 230 + ICHx NVM (non-volatile memory). This feature can be disabled by the 231 + WriteProtectNVM module parameter (enabled by default) only after a hardware 232 + reset, but the machine must be power cycled before trying to enable writes. 233 + 234 + Note: the kernel boot option iomem=relaxed may need to be set if the kernel 235 + config option CONFIG_STRICT_DEVMEM=y, if the root user wants to write the 236 + NVM from user space via ethtool. 237 + 238 + Additional Configurations 239 + ========================= 240 + 241 + Jumbo Frames 242 + ------------ 243 + Jumbo Frames support is enabled by changing the MTU to a value larger than 244 + the default of 1500. Use the ifconfig command to increase the MTU size. 245 + For example: 246 + 247 + ifconfig eth<x> mtu 9000 up 248 + 249 + This setting is not saved across reboots. 250 + 251 + Notes: 252 + 253 + - The maximum MTU setting for Jumbo Frames is 9216. This value coincides 254 + with the maximum Jumbo Frames size of 9234 bytes. 255 + 256 + - Using Jumbo Frames at 10 or 100 Mbps is not supported and may result in 257 + poor performance or loss of link. 258 + 259 + - Some adapters limit Jumbo Frames sized packets to a maximum of 260 + 4096 bytes and some adapters do not support Jumbo Frames. 261 + 262 + 263 + Ethtool 264 + ------- 265 + The driver utilizes the ethtool interface for driver configuration and 266 + diagnostics, as well as displaying statistical information. 
We 267 + strongly recommend downloading the latest version of Ethtool at: 268 + 269 + http://sourceforge.net/projects/gkernel. 270 + 271 + Speed and Duplex 272 + ---------------- 273 + Speed and Duplex are configured through the Ethtool* utility. For 274 + instructions, refer to the Ethtool man page. 275 + 276 + Enabling Wake on LAN* (WoL) 277 + --------------------------- 278 + WoL is configured through the Ethtool* utility. For instructions on 279 + enabling WoL with Ethtool, refer to the Ethtool man page. 280 + 281 + WoL will be enabled on the system during the next shut down or reboot. 282 + For this driver version, in order to enable WoL, the e1000e driver must be 283 + loaded when shutting down or rebooting the system. 284 + 285 + In most cases Wake On LAN is only supported on port A for multiple port 286 + adapters. To verify if a port supports Wake on LAN run ethtool eth<X>. 287 + 288 + 289 + Support 290 + ======= 291 + 292 + For general information, go to the Intel support website at: 293 + 294 + www.intel.com/support/ 295 + 296 + or the Intel Wired Networking project hosted by Sourceforge at: 297 + 298 + http://sourceforge.net/projects/e1000 299 + 300 + If an issue is identified with the released source code on the supported 301 + kernel with a supported adapter, email the specific information related 302 + to the issue to e1000-devel@lists.sf.net
+3 -37
Documentation/networking/ixgbevf.txt
··· 1 Linux* Base Driver for Intel(R) Network Connection 2 ================================================== 3 4 - November 24, 2009 5 6 Contents 7 ======== 8 9 - - In This Release 10 - Identifying Your Adapter 11 - Known Issues/Troubleshooting 12 - Support 13 - 14 - In This Release 15 - =============== 16 17 This file describes the ixgbevf Linux* Base Driver for Intel Network 18 Connection. ··· 30 For more information on how to identify your adapter, go to the Adapter & 31 Driver ID Guide at: 32 33 - http://support.intel.com/support/network/sb/CS-008441.htm 34 35 Known Issues/Troubleshooting 36 ============================ ··· 54 If an issue is identified with the released source code on the supported 55 kernel with a supported adapter, email the specific information related 56 to the issue to e1000-devel@lists.sf.net 57 - 58 - License 59 - ======= 60 - 61 - Intel 10 Gigabit Linux driver. 62 - Copyright(c) 1999 - 2009 Intel Corporation. 63 - 64 - This program is free software; you can redistribute it and/or modify it 65 - under the terms and conditions of the GNU General Public License, 66 - version 2, as published by the Free Software Foundation. 67 - 68 - This program is distributed in the hope it will be useful, but WITHOUT 69 - ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 70 - FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 71 - more details. 72 - 73 - You should have received a copy of the GNU General Public License along with 74 - this program; if not, write to the Free Software Foundation, Inc., 75 - 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 76 - 77 - The full GNU General Public License is included in this distribution in 78 - the file called "COPYING". 79 - 80 - Trademarks 81 - ========== 82 - 83 - Intel, Itanium, and Pentium are trademarks or registered trademarks of 84 - Intel Corporation or its subsidiaries in the United States and other 85 - countries. 
86 - 87 - * Other names and brands may be claimed as the property of others.
··· 1 Linux* Base Driver for Intel(R) Network Connection 2 ================================================== 3 4 + Intel Gigabit Linux driver. 5 + Copyright(c) 1999 - 2010 Intel Corporation. 6 7 Contents 8 ======== 9 10 - Identifying Your Adapter 11 - Known Issues/Troubleshooting 12 - Support 13 14 This file describes the ixgbevf Linux* Base Driver for Intel Network 15 Connection. ··· 33 For more information on how to identify your adapter, go to the Adapter & 34 Driver ID Guide at: 35 36 + http://support.intel.com/support/go/network/adapter/idguide.htm 37 38 Known Issues/Troubleshooting 39 ============================ ··· 57 If an issue is identified with the released source code on the supported 58 kernel with a supported adapter, email the specific information related 59 to the issue to e1000-devel@lists.sf.net
+1 -1
Documentation/vm/page-types.c
··· 478 } 479 480 if (opt_unpoison && !hwpoison_forget_fd) { 481 - sprintf(buf, "%s/renew-pfn", hwpoison_debug_fs); 482 hwpoison_forget_fd = checked_open(buf, O_WRONLY); 483 } 484 }
··· 478 } 479 480 if (opt_unpoison && !hwpoison_forget_fd) { 481 + sprintf(buf, "%s/unpoison-pfn", hwpoison_debug_fs); 482 hwpoison_forget_fd = checked_open(buf, O_WRONLY); 483 } 484 }
+34 -4
MAINTAINERS
··· 969 S: Maintained 970 F: arch/arm/mach-s5p*/ 971 972 ARM/SHMOBILE ARM ARCHITECTURE 973 M: Paul Mundt <lethal@linux-sh.org> 974 M: Magnus Damm <magnus.damm@gmail.com> ··· 2545 F: drivers/scsi/gdt* 2546 2547 GENERIC GPIO I2C DRIVER 2548 - M: Haavard Skinnemoen <hskinnemoen@atmel.com> 2549 S: Supported 2550 F: drivers/i2c/busses/i2c-gpio.c 2551 F: include/linux/i2c-gpio.h ··· 3073 S: Maintained 3074 F: drivers/net/ixp2000/ 3075 3076 - INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe) 3077 M: Jeff Kirsher <jeffrey.t.kirsher@intel.com> 3078 M: Jesse Brandeburg <jesse.brandeburg@intel.com> 3079 M: Bruce Allan <bruce.w.allan@intel.com> 3080 - M: Alex Duyck <alexander.h.duyck@intel.com> 3081 M: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com> 3082 M: John Ronciak <john.ronciak@intel.com> 3083 L: e1000-devel@lists.sourceforge.net 3084 W: http://e1000.sourceforge.net/ 3085 S: Supported 3086 F: drivers/net/e100.c 3087 F: drivers/net/e1000/ 3088 F: drivers/net/e1000e/ ··· 3101 F: drivers/net/igbvf/ 3102 F: drivers/net/ixgb/ 3103 F: drivers/net/ixgbe/ 3104 3105 INTEL PRO/WIRELESS 2100 NETWORK CONNECTION SUPPORT 3106 L: linux-wireless@vger.kernel.org ··· 5030 F: drivers/media/video/*7146* 5031 F: include/media/*7146* 5032 5033 TLG2300 VIDEO4LINUX-2 DRIVER 5034 M: Huang Shijie <shijie8@gmail.com> 5035 M: Kang Yong <kangyong@telegent.com> ··· 6478 WOLFSON MICROELECTRONICS DRIVERS 6479 M: Mark Brown <broonie@opensource.wolfsonmicro.com> 6480 M: Ian Lartey <ian@opensource.wolfsonmicro.com> 6481 T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus 6482 - W: http://opensource.wolfsonmicro.com/node/8 6483 S: Supported 6484 F: Documentation/hwmon/wm83?? 6485 F: drivers/leds/leds-wm83*.c
··· 969 S: Maintained 970 F: arch/arm/mach-s5p*/ 971 972 + ARM/SAMSUNG S5P SERIES FIMC SUPPORT 973 + M: Kyungmin Park <kyungmin.park@samsung.com> 974 + M: Sylwester Nawrocki <s.nawrocki@samsung.com> 975 + L: linux-arm-kernel@lists.infradead.org 976 + L: linux-media@vger.kernel.org 977 + S: Maintained 978 + F: arch/arm/plat-s5p/dev-fimc* 979 + F: arch/arm/plat-samsung/include/plat/*fimc* 980 + F: drivers/media/video/s5p-fimc/ 981 + 982 ARM/SHMOBILE ARM ARCHITECTURE 983 M: Paul Mundt <lethal@linux-sh.org> 984 M: Magnus Damm <magnus.damm@gmail.com> ··· 2535 F: drivers/scsi/gdt* 2536 2537 GENERIC GPIO I2C DRIVER 2538 + M: Haavard Skinnemoen <hskinnemoen@gmail.com> 2539 S: Supported 2540 F: drivers/i2c/busses/i2c-gpio.c 2541 F: include/linux/i2c-gpio.h ··· 3063 S: Maintained 3064 F: drivers/net/ixp2000/ 3065 3066 + INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe/ixgbevf) 3067 M: Jeff Kirsher <jeffrey.t.kirsher@intel.com> 3068 M: Jesse Brandeburg <jesse.brandeburg@intel.com> 3069 M: Bruce Allan <bruce.w.allan@intel.com> 3070 + M: Carolyn Wyborny <carolyn.wyborny@intel.com> 3071 + M: Don Skidmore <donald.c.skidmore@intel.com> 3072 + M: Greg Rose <gregory.v.rose@intel.com> 3073 M: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com> 3074 + M: Alex Duyck <alexander.h.duyck@intel.com> 3075 M: John Ronciak <john.ronciak@intel.com> 3076 L: e1000-devel@lists.sourceforge.net 3077 W: http://e1000.sourceforge.net/ 3078 S: Supported 3079 + F: Documentation/networking/e100.txt 3080 + F: Documentation/networking/e1000.txt 3081 + F: Documentation/networking/e1000e.txt 3082 + F: Documentation/networking/igb.txt 3083 + F: Documentation/networking/igbvf.txt 3084 + F: Documentation/networking/ixgb.txt 3085 + F: Documentation/networking/ixgbe.txt 3086 + F: Documentation/networking/ixgbevf.txt 3087 F: drivers/net/e100.c 3088 F: drivers/net/e1000/ 3089 F: drivers/net/e1000e/ ··· 3080 F: drivers/net/igbvf/ 3081 F: drivers/net/ixgb/ 3082 F: drivers/net/ixgbe/ 3083 + F: 
drivers/net/ixgbevf/ 3084 3085 INTEL PRO/WIRELESS 2100 NETWORK CONNECTION SUPPORT 3086 L: linux-wireless@vger.kernel.org ··· 5008 F: drivers/media/video/*7146* 5009 F: include/media/*7146* 5010 5011 + SAMSUNG AUDIO (ASoC) DRIVERS 5012 + M: Jassi Brar <jassi.brar@samsung.com> 5013 + L: alsa-devel@alsa-project.org (moderated for non-subscribers) 5014 + S: Supported 5015 + F: sound/soc/s3c24xx 5016 + 5017 TLG2300 VIDEO4LINUX-2 DRIVER 5018 M: Huang Shijie <shijie8@gmail.com> 5019 M: Kang Yong <kangyong@telegent.com> ··· 6450 WOLFSON MICROELECTRONICS DRIVERS 6451 M: Mark Brown <broonie@opensource.wolfsonmicro.com> 6452 M: Ian Lartey <ian@opensource.wolfsonmicro.com> 6453 + M: Dimitris Papastamos <dp@opensource.wolfsonmicro.com> 6454 + T: git git://opensource.wolfsonmicro.com/linux-2.6-asoc 6455 T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus 6456 + W: http://opensource.wolfsonmicro.com/content/linux-drivers-wolfson-devices 6457 S: Supported 6458 F: Documentation/hwmon/wm83?? 6459 F: drivers/leds/leds-wm83*.c
+2 -2
Makefile
··· 1 VERSION = 2 2 PATCHLEVEL = 6 3 SUBLEVEL = 36 4 - EXTRAVERSION = -rc7 5 - NAME = Sheep on Meth 6 7 # *DOCUMENTATION* 8 # To see a list of typical targets execute "make help"
··· 1 VERSION = 2 2 PATCHLEVEL = 6 3 SUBLEVEL = 36 4 + EXTRAVERSION = -rc8 5 + NAME = Flesh-Eating Bats with Fangs 6 7 # *DOCUMENTATION* 8 # To see a list of typical targets execute "make help"
+14
arch/arm/Kconfig
··· 1101 invalidated are not, resulting in an incoherency in the system page 1102 tables. The workaround changes the TLB flushing routines to invalidate 1103 entries regardless of the ASID. 1104 endmenu 1105 1106 source "arch/arm/common/Kconfig"
··· 1101 invalidated are not, resulting in an incoherency in the system page 1102 tables. The workaround changes the TLB flushing routines to invalidate 1103 entries regardless of the ASID. 1104 + 1105 + config ARM_ERRATA_743622 1106 + bool "ARM errata: Faulty hazard checking in the Store Buffer may lead to data corruption" 1107 + depends on CPU_V7 1108 + help 1109 + This option enables the workaround for the 743622 Cortex-A9 1110 + (r2p0..r2p2) erratum. Under very rare conditions, a faulty 1111 + optimisation in the Cortex-A9 Store Buffer may lead to data 1112 + corruption. This workaround sets a specific bit in the diagnostic 1113 + register of the Cortex-A9 which disables the Store Buffer 1114 + optimisation, preventing the defect from occurring. This has no 1115 + visible impact on the overall performance or power consumption of the 1116 + processor. 1117 + 1118 endmenu 1119 1120 source "arch/arm/common/Kconfig"
+4 -3
arch/arm/kernel/kprobes-decode.c
··· 1162 { 1163 /* 1164 * MSR : cccc 0011 0x10 xxxx xxxx xxxx xxxx xxxx 1165 - * Undef : cccc 0011 0x00 xxxx xxxx xxxx xxxx xxxx 1166 * ALU op with S bit and Rd == 15 : 1167 * cccc 001x xxx1 xxxx 1111 xxxx xxxx xxxx 1168 */ 1169 - if ((insn & 0x0f900000) == 0x03200000 || /* MSR & Undef */ 1170 (insn & 0x0e10f000) == 0x0210f000) /* ALU s-bit, R15 */ 1171 return INSN_REJECTED; 1172 ··· 1178 * *S (bit 20) updates condition codes 1179 * ADC/SBC/RSC reads the C flag 1180 */ 1181 - insn &= 0xfff00fff; /* Rn = r0, Rd = r0 */ 1182 asi->insn[0] = insn; 1183 asi->insn_handler = (insn & (1 << 20)) ? /* S-bit */ 1184 emulate_alu_imm_rwflags : emulate_alu_imm_rflags;
··· 1162 { 1163 /* 1164 * MSR : cccc 0011 0x10 xxxx xxxx xxxx xxxx xxxx 1165 + * Undef : cccc 0011 0100 xxxx xxxx xxxx xxxx xxxx 1166 * ALU op with S bit and Rd == 15 : 1167 * cccc 001x xxx1 xxxx 1111 xxxx xxxx xxxx 1168 */ 1169 + if ((insn & 0x0fb00000) == 0x03200000 || /* MSR */ 1170 + (insn & 0x0ff00000) == 0x03400000 || /* Undef */ 1171 (insn & 0x0e10f000) == 0x0210f000) /* ALU s-bit, R15 */ 1172 return INSN_REJECTED; 1173 ··· 1177 * *S (bit 20) updates condition codes 1178 * ADC/SBC/RSC reads the C flag 1179 */ 1180 + insn &= 0xffff0fff; /* Rd = r0 */ 1181 asi->insn[0] = insn; 1182 asi->insn_handler = (insn & (1 << 20)) ? /* S-bit */ 1183 emulate_alu_imm_rwflags : emulate_alu_imm_rflags;
+3 -4
arch/arm/mach-at91/include/mach/system.h
··· 28 29 static inline void arch_idle(void) 30 { 31 - #ifndef CONFIG_DEBUG_KERNEL 32 /* 33 * Disable the processor clock. The processor will be automatically 34 * re-enabled by an interrupt or by a reset. 35 */ 36 at91_sys_write(AT91_PMC_SCDR, AT91_PMC_PCK); 37 - #else 38 /* 39 * Set the processor (CP15) into 'Wait for Interrupt' mode. 40 - * Unlike disabling the processor clock via the PMC (above) 41 - * this allows the processor to be woken via JTAG. 42 */ 43 cpu_do_idle(); 44 #endif
··· 28 29 static inline void arch_idle(void) 30 { 31 /* 32 * Disable the processor clock. The processor will be automatically 33 * re-enabled by an interrupt or by a reset. 34 */ 35 at91_sys_write(AT91_PMC_SCDR, AT91_PMC_PCK); 36 + #ifndef CONFIG_CPU_ARM920T 37 /* 38 * Set the processor (CP15) into 'Wait for Interrupt' mode. 39 + * Post-RM9200 processors need this in conjunction with the above 40 + * to save power when idle. 41 */ 42 cpu_do_idle(); 43 #endif
+1 -1
arch/arm/mach-ep93xx/dma-m2p.c
··· 276 v &= ~(M2P_CONTROL_STALL_IRQ_EN | M2P_CONTROL_NFB_IRQ_EN); 277 m2p_set_control(ch, v); 278 279 - while (m2p_channel_state(ch) == STATE_ON) 280 cpu_relax(); 281 282 m2p_set_control(ch, 0x0);
··· 276 v &= ~(M2P_CONTROL_STALL_IRQ_EN | M2P_CONTROL_NFB_IRQ_EN); 277 m2p_set_control(ch, v); 278 279 + while (m2p_channel_state(ch) >= STATE_ON) 280 cpu_relax(); 281 282 m2p_set_control(ch, 0x0);
+1
arch/arm/mach-imx/Kconfig
··· 122 select IMX_HAVE_PLATFORM_IMX_I2C 123 select IMX_HAVE_PLATFORM_IMX_UART 124 select IMX_HAVE_PLATFORM_MXC_NAND 125 help 126 Include support for Eukrea CPUIMX27 platform. This includes 127 specific configurations for the module and its peripherals.
··· 122 select IMX_HAVE_PLATFORM_IMX_I2C 123 select IMX_HAVE_PLATFORM_IMX_UART 124 select IMX_HAVE_PLATFORM_MXC_NAND 125 + select MXC_ULPI if USB_ULPI 126 help 127 Include support for Eukrea CPUIMX27 platform. This includes 128 specific configurations for the module and its peripherals.
+1 -1
arch/arm/mach-imx/mach-cpuimx27.c
··· 259 i2c_register_board_info(0, eukrea_cpuimx27_i2c_devices, 260 ARRAY_SIZE(eukrea_cpuimx27_i2c_devices)); 261 262 - imx27_add_i2c_imx1(&cpuimx27_i2c1_data); 263 264 platform_add_devices(platform_devices, ARRAY_SIZE(platform_devices)); 265
··· 259 i2c_register_board_info(0, eukrea_cpuimx27_i2c_devices, 260 ARRAY_SIZE(eukrea_cpuimx27_i2c_devices)); 261 262 + imx27_add_i2c_imx0(&cpuimx27_i2c1_data); 263 264 platform_add_devices(platform_devices, ARRAY_SIZE(platform_devices)); 265
+1
arch/arm/mach-s5p6440/cpu.c
··· 19 #include <linux/sysdev.h> 20 #include <linux/serial_core.h> 21 #include <linux/platform_device.h> 22 23 #include <asm/mach/arch.h> 24 #include <asm/mach/map.h>
··· 19 #include <linux/sysdev.h> 20 #include <linux/serial_core.h> 21 #include <linux/platform_device.h> 22 + #include <linux/sched.h> 23 24 #include <asm/mach/arch.h> 25 #include <asm/mach/map.h>
+1
arch/arm/mach-s5p6442/cpu.c
··· 19 #include <linux/sysdev.h> 20 #include <linux/serial_core.h> 21 #include <linux/platform_device.h> 22 23 #include <asm/mach/arch.h> 24 #include <asm/mach/map.h>
··· 19 #include <linux/sysdev.h> 20 #include <linux/serial_core.h> 21 #include <linux/platform_device.h> 22 + #include <linux/sched.h> 23 24 #include <asm/mach/arch.h> 25 #include <asm/mach/map.h>
+1
arch/arm/mach-s5pc100/cpu.c
··· 21 #include <linux/sysdev.h> 22 #include <linux/serial_core.h> 23 #include <linux/platform_device.h> 24 25 #include <asm/mach/arch.h> 26 #include <asm/mach/map.h>
··· 21 #include <linux/sysdev.h> 22 #include <linux/serial_core.h> 23 #include <linux/platform_device.h> 24 + #include <linux/sched.h> 25 26 #include <asm/mach/arch.h> 27 #include <asm/mach/map.h>
-5
arch/arm/mach-s5pv210/clock.c
··· 173 return s5p_gatectrl(S5P_CLKGATE_IP3, clk, enable); 174 } 175 176 - static int s5pv210_clk_ip4_ctrl(struct clk *clk, int enable) 177 - { 178 - return s5p_gatectrl(S5P_CLKGATE_IP4, clk, enable); 179 - } 180 - 181 static int s5pv210_clk_mask0_ctrl(struct clk *clk, int enable) 182 { 183 return s5p_gatectrl(S5P_CLK_SRC_MASK0, clk, enable);
··· 173 return s5p_gatectrl(S5P_CLKGATE_IP3, clk, enable); 174 } 175 176 static int s5pv210_clk_mask0_ctrl(struct clk *clk, int enable) 177 { 178 return s5p_gatectrl(S5P_CLK_SRC_MASK0, clk, enable);
+1
arch/arm/mach-s5pv210/cpu.c
··· 19 #include <linux/io.h> 20 #include <linux/sysdev.h> 21 #include <linux/platform_device.h> 22 23 #include <asm/mach/arch.h> 24 #include <asm/mach/map.h>
··· 19 #include <linux/io.h> 20 #include <linux/sysdev.h> 21 #include <linux/platform_device.h> 22 + #include <linux/sched.h> 23 24 #include <asm/mach/arch.h> 25 #include <asm/mach/map.h>
+2 -2
arch/arm/mach-vexpress/ct-ca9x4.c
··· 68 } 69 70 #if 0 71 - static void ct_ca9x4_timer_init(void) 72 { 73 writel(0, MMIO_P2V(CT_CA9X4_TIMER0) + TIMER_CTRL); 74 writel(0, MMIO_P2V(CT_CA9X4_TIMER1) + TIMER_CTRL); ··· 222 .resource = pmu_resources, 223 }; 224 225 - static void ct_ca9x4_init(void) 226 { 227 int i; 228
··· 68 } 69 70 #if 0 71 + static void __init ct_ca9x4_timer_init(void) 72 { 73 writel(0, MMIO_P2V(CT_CA9X4_TIMER0) + TIMER_CTRL); 74 writel(0, MMIO_P2V(CT_CA9X4_TIMER1) + TIMER_CTRL); ··· 222 .resource = pmu_resources, 223 }; 224 225 + static void __init ct_ca9x4_init(void) 226 { 227 int i; 228
+1 -1
arch/arm/mach-vexpress/v2m.c
··· 48 } 49 50 51 - static void v2m_timer_init(void) 52 { 53 writel(0, MMIO_P2V(V2M_TIMER0) + TIMER_CTRL); 54 writel(0, MMIO_P2V(V2M_TIMER1) + TIMER_CTRL);
··· 48 } 49 50 51 + static void __init v2m_timer_init(void) 52 { 53 writel(0, MMIO_P2V(V2M_TIMER0) + TIMER_CTRL); 54 writel(0, MMIO_P2V(V2M_TIMER1) + TIMER_CTRL);
+6 -2
arch/arm/mm/ioremap.c
··· 204 /* 205 * Don't allow RAM to be mapped - this causes problems with ARMv6+ 206 */ 207 - if (WARN_ON(pfn_valid(pfn))) 208 - return NULL; 209 210 type = get_mem_type(mtype); 211 if (!type)
··· 204 /* 205 * Don't allow RAM to be mapped - this causes problems with ARMv6+ 206 */ 207 + if (pfn_valid(pfn)) { 208 + printk(KERN_WARNING "BUG: Your driver calls ioremap() on system memory. This leads\n" 209 + KERN_WARNING "to architecturally unpredictable behaviour on ARMv6+, and ioremap()\n" 210 + KERN_WARNING "will fail in the next kernel release. Please fix your driver.\n"); 211 + WARN_ON(1); 212 + } 213 214 type = get_mem_type(mtype); 215 if (!type)
+2 -2
arch/arm/mm/mmu.c
··· 248 }, 249 [MT_MEMORY] = { 250 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 251 - L_PTE_USER | L_PTE_EXEC, 252 .prot_l1 = PMD_TYPE_TABLE, 253 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, 254 .domain = DOMAIN_KERNEL, ··· 259 }, 260 [MT_MEMORY_NONCACHED] = { 261 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 262 - L_PTE_USER | L_PTE_EXEC | L_PTE_MT_BUFFERABLE, 263 .prot_l1 = PMD_TYPE_TABLE, 264 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, 265 .domain = DOMAIN_KERNEL,
··· 248 }, 249 [MT_MEMORY] = { 250 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 251 + L_PTE_WRITE | L_PTE_EXEC, 252 .prot_l1 = PMD_TYPE_TABLE, 253 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, 254 .domain = DOMAIN_KERNEL, ··· 259 }, 260 [MT_MEMORY_NONCACHED] = { 261 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 262 + L_PTE_WRITE | L_PTE_EXEC | L_PTE_MT_BUFFERABLE, 263 .prot_l1 = PMD_TYPE_TABLE, 264 .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, 265 .domain = DOMAIN_KERNEL,
+9 -1
arch/arm/mm/proc-v7.S
··· 253 orreq r10, r10, #1 << 22 @ set bit #22 254 mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register 255 #endif 256 257 3: mov r10, #0 258 #ifdef HARVARD_CACHE ··· 373 b __v7_ca9mp_setup 374 .long cpu_arch_name 375 .long cpu_elf_name 376 - .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP 377 .long cpu_v7_name 378 .long v7_processor_functions 379 .long v7wbi_tlb_fns
··· 253 orreq r10, r10, #1 << 22 @ set bit #22 254 mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register 255 #endif 256 + #ifdef CONFIG_ARM_ERRATA_743622 257 + teq r6, #0x20 @ present in r2p0 258 + teqne r6, #0x21 @ present in r2p1 259 + teqne r6, #0x22 @ present in r2p2 260 + mrceq p15, 0, r10, c15, c0, 1 @ read diagnostic register 261 + orreq r10, r10, #1 << 6 @ set bit #6 262 + mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register 263 + #endif 264 265 3: mov r10, #0 266 #ifdef HARVARD_CACHE ··· 365 b __v7_ca9mp_setup 366 .long cpu_arch_name 367 .long cpu_elf_name 368 + .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_TLS 369 .long cpu_v7_name 370 .long v7_processor_functions 371 .long v7wbi_tlb_fns
+1
arch/arm/plat-omap/iommu.c
··· 320 if ((start <= da) && (da < start + bytes)) { 321 dev_dbg(obj->dev, "%s: %08x<=%08x(%x)\n", 322 __func__, start, da, bytes); 323 iommu_write_reg(obj, 1, MMU_FLUSH_ENTRY); 324 } 325 }
··· 320 if ((start <= da) && (da < start + bytes)) { 321 dev_dbg(obj->dev, "%s: %08x<=%08x(%x)\n", 322 __func__, start, da, bytes); 323 + iotlb_load_cr(obj, &cr); 324 iommu_write_reg(obj, 1, MMU_FLUSH_ENTRY); 325 } 326 }
-1
arch/arm/plat-samsung/adc.c
··· 435 static int s3c_adc_resume(struct platform_device *pdev) 436 { 437 struct adc_device *adc = platform_get_drvdata(pdev); 438 - unsigned long flags; 439 440 clk_enable(adc->clk); 441 enable_irq(adc->irq);
··· 435 static int s3c_adc_resume(struct platform_device *pdev) 436 { 437 struct adc_device *adc = platform_get_drvdata(pdev); 438 439 clk_enable(adc->clk); 440 enable_irq(adc->irq);
+26 -1
arch/arm/plat-samsung/clock.c
··· 48 #include <plat/clock.h> 49 #include <plat/cpu.h> 50 51 /* clock information */ 52 53 static LIST_HEAD(clocks); ··· 68 return 0; 69 } 70 71 /* Clock API calls */ 72 73 struct clk *clk_get(struct device *dev, const char *id) ··· 98 struct clk *clk = ERR_PTR(-ENOENT); 99 int idno; 100 101 - if (dev == NULL || dev->bus != &platform_bus_type) 102 idno = -1; 103 else 104 idno = to_platform_device(dev)->id;
··· 48 #include <plat/clock.h> 49 #include <plat/cpu.h> 50 51 + #include <linux/serial_core.h> 52 + #include <plat/regs-serial.h> /* for s3c24xx_uart_devs */ 53 + 54 /* clock information */ 55 56 static LIST_HEAD(clocks); ··· 65 return 0; 66 } 67 68 + static int dev_is_s3c_uart(struct device *dev) 69 + { 70 + struct platform_device **pdev = s3c24xx_uart_devs; 71 + int i; 72 + for (i = 0; i < ARRAY_SIZE(s3c24xx_uart_devs); i++, pdev++) 73 + if (*pdev && dev == &(*pdev)->dev) 74 + return 1; 75 + return 0; 76 + } 77 + 78 + /* 79 + * Serial drivers call get_clock() very early, before platform bus 80 + * has been set up, this requires a special check to let them get 81 + * a proper clock 82 + */ 83 + 84 + static int dev_is_platform_device(struct device *dev) 85 + { 86 + return dev->bus == &platform_bus_type || 87 + (dev->bus == NULL && dev_is_s3c_uart(dev)); 88 + } 89 + 90 /* Clock API calls */ 91 92 struct clk *clk_get(struct device *dev, const char *id) ··· 73 struct clk *clk = ERR_PTR(-ENOENT); 74 int idno; 75 76 + if (dev == NULL || !dev_is_platform_device(dev)) 77 idno = -1; 78 else 79 idno = to_platform_device(dev)->id;
+8 -73
arch/blackfin/include/asm/bfin5xx_spi.h
··· 11 12 #define MIN_SPI_BAUD_VAL 2 13 14 - #define SPI_READ 0 15 - #define SPI_WRITE 1 16 - 17 - #define SPI_CTRL_OFF 0x0 18 - #define SPI_FLAG_OFF 0x4 19 - #define SPI_STAT_OFF 0x8 20 - #define SPI_TXBUFF_OFF 0xc 21 - #define SPI_RXBUFF_OFF 0x10 22 - #define SPI_BAUD_OFF 0x14 23 - #define SPI_SHAW_OFF 0x18 24 - 25 - 26 #define BIT_CTL_ENABLE 0x4000 27 #define BIT_CTL_OPENDRAIN 0x2000 28 #define BIT_CTL_MASTER 0x1000 29 - #define BIT_CTL_POLAR 0x0800 30 - #define BIT_CTL_PHASE 0x0400 31 - #define BIT_CTL_BITORDER 0x0200 32 #define BIT_CTL_WORDSIZE 0x0100 33 - #define BIT_CTL_MISOENABLE 0x0020 34 #define BIT_CTL_RXMOD 0x0000 35 #define BIT_CTL_TXMOD 0x0001 36 #define BIT_CTL_TIMOD_DMA_TX 0x0003 ··· 41 #define BIT_STU_SENDOVER 0x0001 42 #define BIT_STU_RECVFULL 0x0020 43 44 - #define CFG_SPI_ENABLE 1 45 - #define CFG_SPI_DISABLE 0 46 - 47 - #define CFG_SPI_OUTENABLE 1 48 - #define CFG_SPI_OUTDISABLE 0 49 - 50 - #define CFG_SPI_ACTLOW 1 51 - #define CFG_SPI_ACTHIGH 0 52 - 53 - #define CFG_SPI_PHASESTART 1 54 - #define CFG_SPI_PHASEMID 0 55 - 56 - #define CFG_SPI_MASTER 1 57 - #define CFG_SPI_SLAVE 0 58 - 59 - #define CFG_SPI_SENELAST 0 60 - #define CFG_SPI_SENDZERO 1 61 - 62 - #define CFG_SPI_RCVFLUSH 1 63 - #define CFG_SPI_RCVDISCARD 0 64 - 65 - #define CFG_SPI_LSBFIRST 1 66 - #define CFG_SPI_MSBFIRST 0 67 - 68 - #define CFG_SPI_WORDSIZE16 1 69 - #define CFG_SPI_WORDSIZE8 0 70 - 71 - #define CFG_SPI_MISOENABLE 1 72 - #define CFG_SPI_MISODISABLE 0 73 - 74 - #define CFG_SPI_READ 0x00 75 - #define CFG_SPI_WRITE 0x01 76 - #define CFG_SPI_DMAREAD 0x02 77 - #define CFG_SPI_DMAWRITE 0x03 78 - 79 - #define CFG_SPI_CSCLEARALL 0 80 - #define CFG_SPI_CHIPSEL1 1 81 - #define CFG_SPI_CHIPSEL2 2 82 - #define CFG_SPI_CHIPSEL3 3 83 - #define CFG_SPI_CHIPSEL4 4 84 - #define CFG_SPI_CHIPSEL5 5 85 - #define CFG_SPI_CHIPSEL6 6 86 - #define CFG_SPI_CHIPSEL7 7 87 - 88 - #define CFG_SPI_CS1VALUE 1 89 - #define CFG_SPI_CS2VALUE 2 90 - #define CFG_SPI_CS3VALUE 3 91 - #define CFG_SPI_CS4VALUE 4 92 - #define CFG_SPI_CS5VALUE 5 93 - #define CFG_SPI_CS6VALUE 6 94 - #define CFG_SPI_CS7VALUE 7 95 - 96 - #define CMD_SPI_SET_BAUDRATE 2 97 - #define CMD_SPI_GET_SYSTEMCLOCK 25 98 - #define CMD_SPI_SET_WRITECONTINUOUS 26 99 100 /* device.platform_data for SSP controller devices */ 101 struct bfin5xx_spi_master { ··· 57 u16 ctl_reg; 58 u8 enable_dma; 59 u8 bits_per_word; 60 - u8 cs_change_per_word; 61 u16 cs_chg_udelay; /* Some devices require 16-bit delays */ 62 - u32 cs_gpio; 63 /* Value to send if no TX value is supplied, usually 0x0 or 0xFFFF */ 64 u16 idle_tx_val; 65 u8 pio_interrupt; /* Enable spi data irq */
··· 11 12 #define MIN_SPI_BAUD_VAL 2 13 14 #define BIT_CTL_ENABLE 0x4000 15 #define BIT_CTL_OPENDRAIN 0x2000 16 #define BIT_CTL_MASTER 0x1000 17 + #define BIT_CTL_CPOL 0x0800 18 + #define BIT_CTL_CPHA 0x0400 19 + #define BIT_CTL_LSBF 0x0200 20 #define BIT_CTL_WORDSIZE 0x0100 21 + #define BIT_CTL_EMISO 0x0020 22 + #define BIT_CTL_PSSE 0x0010 23 + #define BIT_CTL_GM 0x0008 24 + #define BIT_CTL_SZ 0x0004 25 #define BIT_CTL_RXMOD 0x0000 26 #define BIT_CTL_TXMOD 0x0001 27 #define BIT_CTL_TIMOD_DMA_TX 0x0003 ··· 50 #define BIT_STU_SENDOVER 0x0001 51 #define BIT_STU_RECVFULL 0x0020 52 53 + #define MAX_CTRL_CS 8 /* cs in spi controller */ 54 55 /* device.platform_data for SSP controller devices */ 56 struct bfin5xx_spi_master { ··· 120 u16 ctl_reg; 121 u8 enable_dma; 122 u8 bits_per_word; 123 u16 cs_chg_udelay; /* Some devices require 16-bit delays */ 124 /* Value to send if no TX value is supplied, usually 0x0 or 0xFFFF */ 125 u16 idle_tx_val; 126 u8 pio_interrupt; /* Enable spi data irq */
+2 -2
arch/m32r/include/asm/elf.h
··· 82 * These are used to set parameters in the core dumps. 83 */ 84 #define ELF_CLASS ELFCLASS32 85 - #if defined(__LITTLE_ENDIAN) 86 #define ELF_DATA ELFDATA2LSB 87 - #elif defined(__BIG_ENDIAN) 88 #define ELF_DATA ELFDATA2MSB 89 #else 90 #error no endian defined
··· 82 * These are used to set parameters in the core dumps. 83 */ 84 #define ELF_CLASS ELFCLASS32 85 + #if defined(__LITTLE_ENDIAN__) 86 #define ELF_DATA ELFDATA2LSB 87 + #elif defined(__BIG_ENDIAN__) 88 #define ELF_DATA ELFDATA2MSB 89 #else 90 #error no endian defined
+1
arch/m32r/kernel/.gitignore
···
··· 1 + vmlinux.lds
+3 -1
arch/m32r/kernel/signal.c
··· 28 29 #define DEBUG_SIG 0 30 31 asmlinkage int 32 sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss, 33 unsigned long r2, unsigned long r3, unsigned long r4, ··· 256 static int prev_insn(struct pt_regs *regs) 257 { 258 u16 inst; 259 - if (get_user(&inst, (u16 __user *)(regs->bpc - 2))) 260 return -EFAULT; 261 if ((inst & 0xfff0) == 0x10f0) /* trap ? */ 262 regs->bpc -= 2;
··· 28 29 #define DEBUG_SIG 0 30 31 + #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP))) 32 + 33 asmlinkage int 34 sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss, 35 unsigned long r2, unsigned long r3, unsigned long r4, ··· 254 static int prev_insn(struct pt_regs *regs) 255 { 256 u16 inst; 257 + if (get_user(inst, (u16 __user *)(regs->bpc - 2))) 258 return -EFAULT; 259 if ((inst & 0xfff0) == 0x10f0) /* trap ? */ 260 regs->bpc -= 2;
+1
arch/mips/include/asm/siginfo.h
··· 88 #ifdef __ARCH_SI_TRAPNO 89 int _trapno; /* TRAP # which caused the signal */ 90 #endif 91 } _sigfault; 92 93 /* SIGPOLL, SIGXFSZ (To do ...) */
··· 88 #ifdef __ARCH_SI_TRAPNO 89 int _trapno; /* TRAP # which caused the signal */ 90 #endif 91 + short _addr_lsb; 92 } _sigfault; 93 94 /* SIGPOLL, SIGXFSZ (To do ...) */
+5 -9
arch/um/drivers/hostaudio_kern.c
··· 40 " This is used to specify the host mixer device to the hostaudio driver.\n"\ 41 " The default is \"" HOSTAUDIO_DEV_MIXER "\".\n\n" 42 43 #ifndef MODULE 44 static int set_dsp(char *name, int *add) 45 { ··· 61 } 62 63 __uml_setup("mixer=", set_mixer, "mixer=<mixer device>\n" MIXER_HELP); 64 - 65 - #else /*MODULE*/ 66 - 67 - module_param(dsp, charp, 0644); 68 - MODULE_PARM_DESC(dsp, DSP_HELP); 69 - 70 - module_param(mixer, charp, 0644); 71 - MODULE_PARM_DESC(mixer, MIXER_HELP); 72 - 73 #endif 74 75 /* /dev/dsp file operations */
··· 40 " This is used to specify the host mixer device to the hostaudio driver.\n"\ 41 " The default is \"" HOSTAUDIO_DEV_MIXER "\".\n\n" 42 43 + module_param(dsp, charp, 0644); 44 + MODULE_PARM_DESC(dsp, DSP_HELP); 45 + module_param(mixer, charp, 0644); 46 + MODULE_PARM_DESC(mixer, MIXER_HELP); 47 + 48 #ifndef MODULE 49 static int set_dsp(char *name, int *add) 50 { ··· 56 } 57 58 __uml_setup("mixer=", set_mixer, "mixer=<mixer device>\n" MIXER_HELP); 59 #endif 60 61 /* /dev/dsp file operations */
+5 -4
arch/um/drivers/ubd_kern.c
··· 163 struct scatterlist sg[MAX_SG]; 164 struct request *request; 165 int start_sg, end_sg; 166 }; 167 168 #define DEFAULT_COW { \ ··· 188 .request = NULL, \ 189 .start_sg = 0, \ 190 .end_sg = 0, \ 191 } 192 193 /* Protected by ubd_lock */ ··· 1230 { 1231 struct io_thread_req *io_req; 1232 struct request *req; 1233 - sector_t sector; 1234 int n; 1235 1236 while(1){ ··· 1240 return; 1241 1242 dev->request = req; 1243 dev->start_sg = 0; 1244 dev->end_sg = blk_rq_map_sg(q, req, dev->sg); 1245 } 1246 1247 req = dev->request; 1248 - sector = blk_rq_pos(req); 1249 while(dev->start_sg < dev->end_sg){ 1250 struct scatterlist *sg = &dev->sg[dev->start_sg]; 1251 ··· 1257 return; 1258 } 1259 prepare_request(req, io_req, 1260 - (unsigned long long)sector << 9, 1261 sg->offset, sg->length, sg_page(sg)); 1262 1263 - sector += sg->length >> 9; 1264 n = os_write_file(thread_fd, &io_req, 1265 sizeof(struct io_thread_req *)); 1266 if(n != sizeof(struct io_thread_req *)){ ··· 1272 return; 1273 } 1274 1275 dev->start_sg++; 1276 } 1277 dev->end_sg = 0;
··· 163 struct scatterlist sg[MAX_SG]; 164 struct request *request; 165 int start_sg, end_sg; 166 + sector_t rq_pos; 167 }; 168 169 #define DEFAULT_COW { \ ··· 187 .request = NULL, \ 188 .start_sg = 0, \ 189 .end_sg = 0, \ 190 + .rq_pos = 0, \ 191 } 192 193 /* Protected by ubd_lock */ ··· 1228 { 1229 struct io_thread_req *io_req; 1230 struct request *req; 1231 int n; 1232 1233 while(1){ ··· 1239 return; 1240 1241 dev->request = req; 1242 + dev->rq_pos = blk_rq_pos(req); 1243 dev->start_sg = 0; 1244 dev->end_sg = blk_rq_map_sg(q, req, dev->sg); 1245 } 1246 1247 req = dev->request; 1248 while(dev->start_sg < dev->end_sg){ 1249 struct scatterlist *sg = &dev->sg[dev->start_sg]; 1250 ··· 1256 return; 1257 } 1258 prepare_request(req, io_req, 1259 + (unsigned long long)dev->rq_pos << 9, 1260 sg->offset, sg->length, sg_page(sg)); 1261 1262 n = os_write_file(thread_fd, &io_req, 1263 sizeof(struct io_thread_req *)); 1264 if(n != sizeof(struct io_thread_req *)){ ··· 1272 return; 1273 } 1274 1275 + dev->rq_pos += sg->length >> 9; 1276 dev->start_sg++; 1277 } 1278 dev->end_sg = 0;
+5 -17
arch/x86/ia32/ia32_aout.c
··· 34 #include <asm/ia32.h> 35 36 #undef WARN_OLD 37 - #undef CORE_DUMP /* probably broken */ 38 39 static int load_aout_binary(struct linux_binprm *, struct pt_regs *regs); 40 static int load_aout_library(struct file *); ··· 131 * macros to write out all the necessary info. 132 */ 133 134 - static int dump_write(struct file *file, const void *addr, int nr) 135 - { 136 - return file->f_op->write(file, addr, nr, &file->f_pos) == nr; 137 - } 138 139 #define DUMP_WRITE(addr, nr) \ 140 if (!dump_write(file, (void *)(addr), (nr))) \ 141 goto end_coredump; 142 143 - #define DUMP_SEEK(offset) \ 144 - if (file->f_op->llseek) { \ 145 - if (file->f_op->llseek(file, (offset), 0) != (offset)) \ 146 - goto end_coredump; \ 147 - } else \ 148 - file->f_pos = (offset) 149 150 #define START_DATA() (u.u_tsize << PAGE_SHIFT) 151 #define START_STACK(u) (u.start_stack) ··· 211 dump_size = dump.u_ssize << PAGE_SHIFT; 212 DUMP_WRITE(dump_start, dump_size); 213 } 214 - /* 215 - * Finally dump the task struct. Not be used by gdb, but 216 - * could be useful 217 - */ 218 - set_fs(KERNEL_DS); 219 - DUMP_WRITE(current, sizeof(*current)); 220 end_coredump: 221 set_fs(fs); 222 return has_dumped;
··· 34 #include <asm/ia32.h> 35 36 #undef WARN_OLD 37 + #undef CORE_DUMP /* definitely broken */ 38 39 static int load_aout_binary(struct linux_binprm *, struct pt_regs *regs); 40 static int load_aout_library(struct file *); ··· 131 * macros to write out all the necessary info. 132 */ 133 134 + #include <linux/coredump.h> 135 136 #define DUMP_WRITE(addr, nr) \ 137 if (!dump_write(file, (void *)(addr), (nr))) \ 138 goto end_coredump; 139 140 + #define DUMP_SEEK(offset) \ 141 + if (!dump_seek(file, offset)) \ 142 + goto end_coredump; 143 144 #define START_DATA() (u.u_tsize << PAGE_SHIFT) 145 #define START_STACK(u) (u.start_stack) ··· 217 dump_size = dump.u_ssize << PAGE_SHIFT; 218 DUMP_WRITE(dump_start, dump_size); 219 } 220 end_coredump: 221 set_fs(fs); 222 return has_dumped;
+3 -6
arch/x86/kernel/cpu/mcheck/mce_amd.c
··· 141 address = (low & MASK_BLKPTR_LO) >> 21; 142 if (!address) 143 break; 144 address += MCG_XBLK_ADDR; 145 } else 146 ++address; ··· 149 if (rdmsr_safe(address, &low, &high)) 150 break; 151 152 - if (!(high & MASK_VALID_HI)) { 153 - if (block) 154 - continue; 155 - else 156 - break; 157 - } 158 159 if (!(high & MASK_CNTP_HI) || 160 (high & MASK_LOCKED_HI))
··· 141 address = (low & MASK_BLKPTR_LO) >> 21; 142 if (!address) 143 break; 144 + 145 address += MCG_XBLK_ADDR; 146 } else 147 ++address; ··· 148 if (rdmsr_safe(address, &low, &high)) 149 break; 150 151 + if (!(high & MASK_VALID_HI)) 152 + continue; 153 154 if (!(high & MASK_CNTP_HI) || 155 (high & MASK_LOCKED_HI))
+2 -1
arch/x86/kernel/cpu/mcheck/therm_throt.c
··· 216 err = sysfs_add_file_to_group(&sys_dev->kobj, 217 &attr_core_power_limit_count.attr, 218 thermal_attr_group.name); 219 - if (cpu_has(c, X86_FEATURE_PTS)) 220 err = sysfs_add_file_to_group(&sys_dev->kobj, 221 &attr_package_throttle_count.attr, 222 thermal_attr_group.name); ··· 224 err = sysfs_add_file_to_group(&sys_dev->kobj, 225 &attr_package_power_limit_count.attr, 226 thermal_attr_group.name); 227 228 return err; 229 }
··· 216 err = sysfs_add_file_to_group(&sys_dev->kobj, 217 &attr_core_power_limit_count.attr, 218 thermal_attr_group.name); 219 + if (cpu_has(c, X86_FEATURE_PTS)) { 220 err = sysfs_add_file_to_group(&sys_dev->kobj, 221 &attr_package_throttle_count.attr, 222 thermal_attr_group.name); ··· 224 err = sysfs_add_file_to_group(&sys_dev->kobj, 225 &attr_package_power_limit_count.attr, 226 thermal_attr_group.name); 227 + } 228 229 return err; 230 }
+1 -1
arch/x86/kvm/svm.c
··· 766 767 control->iopm_base_pa = iopm_base; 768 control->msrpm_base_pa = __pa(svm->msrpm); 769 - control->tsc_offset = 0; 770 control->int_ctl = V_INTR_MASKING_MASK; 771 772 init_seg(&save->es); ··· 901 svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT; 902 svm->asid_generation = 0; 903 init_vmcb(svm); 904 905 err = fx_init(&svm->vcpu); 906 if (err)
··· 766 767 control->iopm_base_pa = iopm_base; 768 control->msrpm_base_pa = __pa(svm->msrpm); 769 control->int_ctl = V_INTR_MASKING_MASK; 770 771 init_seg(&save->es); ··· 902 svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT; 903 svm->asid_generation = 0; 904 init_vmcb(svm); 905 + svm->vmcb->control.tsc_offset = 0-native_read_tsc(); 906 907 err = fx_init(&svm->vcpu); 908 if (err)
+5 -3
arch/x86/mm/srat_64.c
··· 420 return -1; 421 } 422 423 - for_each_node_mask(i, nodes_parsed) 424 - e820_register_active_regions(i, nodes[i].start >> PAGE_SHIFT, 425 - nodes[i].end >> PAGE_SHIFT); 426 /* for out of order entries in SRAT */ 427 sort_node_map(); 428 if (!nodes_cover_memory(nodes)) {
··· 420 return -1; 421 } 422 423 + for (i = 0; i < num_node_memblks; i++) 424 + e820_register_active_regions(memblk_nodeid[i], 425 + node_memblk_range[i].start >> PAGE_SHIFT, 426 + node_memblk_range[i].end >> PAGE_SHIFT); 427 + 428 /* for out of order entries in SRAT */ 429 sort_node_map(); 430 if (!nodes_cover_memory(nodes)) {
+8 -4
block/elevator.c
··· 938 } 939 } 940 kobject_uevent(&e->kobj, KOBJ_ADD); 941 } 942 return error; 943 } ··· 948 { 949 kobject_uevent(&e->kobj, KOBJ_REMOVE); 950 kobject_del(&e->kobj); 951 } 952 953 void elv_unregister_queue(struct request_queue *q) ··· 1044 1045 spin_unlock_irq(q->queue_lock); 1046 1047 - __elv_unregister_queue(old_elevator); 1048 1049 - err = elv_register_queue(q); 1050 - if (err) 1051 - goto fail_register; 1052 1053 /* 1054 * finally exit old elevator and turn off BYPASS.
··· 938 } 939 } 940 kobject_uevent(&e->kobj, KOBJ_ADD); 941 + e->registered = 1; 942 } 943 return error; 944 } ··· 947 { 948 kobject_uevent(&e->kobj, KOBJ_REMOVE); 949 kobject_del(&e->kobj); 950 + e->registered = 0; 951 } 952 953 void elv_unregister_queue(struct request_queue *q) ··· 1042 1043 spin_unlock_irq(q->queue_lock); 1044 1045 + if (old_elevator->registered) { 1046 + __elv_unregister_queue(old_elevator); 1047 1048 + err = elv_register_queue(q); 1049 + if (err) 1050 + goto fail_register; 1051 + } 1052 1053 /* 1054 * finally exit old elevator and turn off BYPASS.
+17
drivers/acpi/blacklist.c
··· 204 }, 205 }, 206 { 207 .callback = dmi_disable_osi_vista, 208 .ident = "Sony VGN-NS10J_S", 209 .matches = {
··· 204 }, 205 }, 206 { 207 + /* 208 + * The DSDT on the MSI GX723 contains an NVIF method that the Nvidia 209 + * driver (e.g. nouveau) needs to call when the user presses the 210 + * brightness hotkey. Currently nouveau does not do so, which leaves 211 + * the DSDT spinning in an infinite while loop on each hotkey press. 212 + * Add the MSI GX723's DMI information to this table to work around 213 + * the issue. 214 + * Remove the MSI GX723 from the table once nouveau grows support. 215 + */ 216 + .callback = dmi_disable_osi_vista, 217 + .ident = "MSI GX723", 218 + .matches = { 219 + DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star International"), 220 + DMI_MATCH(DMI_PRODUCT_NAME, "GX723"), 221 + }, 222 + }, 223 + { 224 .callback = dmi_disable_osi_vista, 225 .ident = "Sony VGN-NS10J_S", 226 .matches = {
+1
drivers/acpi/processor_core.c
··· 346 acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT, 347 ACPI_UINT32_MAX, 348 early_init_pdc, NULL, NULL, NULL); 349 }
··· 346 acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT, 347 ACPI_UINT32_MAX, 348 early_init_pdc, NULL, NULL, NULL); 349 + acpi_get_devices("ACPI0007", early_init_pdc, NULL, NULL); 350 }
-6
drivers/atm/iphase.c
··· 3156 { 3157 struct atm_dev *dev; 3158 IADEV *iadev; 3159 - unsigned long flags; 3160 int ret; 3161 3162 iadev = kzalloc(sizeof(*iadev), GFP_KERNEL); ··· 3187 ia_dev[iadev_count] = iadev; 3188 _ia_dev[iadev_count] = dev; 3189 iadev_count++; 3190 - spin_lock_init(&iadev->misc_lock); 3191 - /* First fixes first. I don't want to think about this now. */ 3192 - spin_lock_irqsave(&iadev->misc_lock, flags); 3193 if (ia_init(dev) || ia_start(dev)) { 3194 IF_INIT(printk("IA register failed!\n");) 3195 iadev_count--; 3196 ia_dev[iadev_count] = NULL; 3197 _ia_dev[iadev_count] = NULL; 3198 - spin_unlock_irqrestore(&iadev->misc_lock, flags); 3199 ret = -EINVAL; 3200 goto err_out_deregister_dev; 3201 } 3202 - spin_unlock_irqrestore(&iadev->misc_lock, flags); 3203 IF_EVENT(printk("iadev_count = %d\n", iadev_count);) 3204 3205 iadev->next_board = ia_boards;
··· 3156 { 3157 struct atm_dev *dev; 3158 IADEV *iadev; 3159 int ret; 3160 3161 iadev = kzalloc(sizeof(*iadev), GFP_KERNEL); ··· 3188 ia_dev[iadev_count] = iadev; 3189 _ia_dev[iadev_count] = dev; 3190 iadev_count++; 3191 if (ia_init(dev) || ia_start(dev)) { 3192 IF_INIT(printk("IA register failed!\n");) 3193 iadev_count--; 3194 ia_dev[iadev_count] = NULL; 3195 _ia_dev[iadev_count] = NULL; 3196 ret = -EINVAL; 3197 goto err_out_deregister_dev; 3198 } 3199 IF_EVENT(printk("iadev_count = %d\n", iadev_count);) 3200 3201 iadev->next_board = ia_boards;
+1 -1
drivers/atm/iphase.h
··· 1022 struct dle_q rx_dle_q; 1023 struct free_desc_q *rx_free_desc_qhead; 1024 struct sk_buff_head rx_dma_q; 1025 - spinlock_t rx_lock, misc_lock; 1026 struct atm_vcc **rx_open; /* list of all open VCs */ 1027 u16 num_rx_desc, rx_buf_sz, rxing; 1028 u32 rx_pkt_ram, rx_tmp_cnt;
··· 1022 struct dle_q rx_dle_q; 1023 struct free_desc_q *rx_free_desc_qhead; 1024 struct sk_buff_head rx_dma_q; 1025 + spinlock_t rx_lock; 1026 struct atm_vcc **rx_open; /* list of all open VCs */ 1027 u16 num_rx_desc, rx_buf_sz, rxing; 1028 u32 rx_pkt_ram, rx_tmp_cnt;
+5 -3
drivers/atm/solos-pci.c
··· 444 struct atm_dev *atmdev = container_of(dev, struct atm_dev, class_dev); 445 struct solos_card *card = atmdev->dev_data; 446 struct sk_buff *skb; 447 448 spin_lock(&card->cli_queue_lock); 449 skb = skb_dequeue(&card->cli_queue[SOLOS_CHAN(atmdev)]); ··· 452 if(skb == NULL) 453 return sprintf(buf, "No data.\n"); 454 455 - memcpy(buf, skb->data, skb->len); 456 - dev_dbg(&card->dev->dev, "len: %d\n", skb->len); 457 458 kfree_skb(skb); 459 - return skb->len; 460 } 461 462 static int send_command(struct solos_card *card, int dev, const char *buf, size_t size)
··· 444 struct atm_dev *atmdev = container_of(dev, struct atm_dev, class_dev); 445 struct solos_card *card = atmdev->dev_data; 446 struct sk_buff *skb; 447 + unsigned int len; 448 449 spin_lock(&card->cli_queue_lock); 450 skb = skb_dequeue(&card->cli_queue[SOLOS_CHAN(atmdev)]); ··· 451 if(skb == NULL) 452 return sprintf(buf, "No data.\n"); 453 454 + len = skb->len; 455 + memcpy(buf, skb->data, len); 456 + dev_dbg(&card->dev->dev, "len: %d\n", len); 457 458 kfree_skb(skb); 459 + return len; 460 } 461 462 static int send_command(struct solos_card *card, int dev, const char *buf, size_t size)
+1 -1
drivers/block/ps3disk.c
··· 113 memcpy(buf, dev->bounce_buf+offset, size); 114 offset += size; 115 flush_kernel_dcache_page(bvec->bv_page); 116 - bvec_kunmap_irq(bvec, &flags); 117 i++; 118 } 119 }
··· 113 memcpy(buf, dev->bounce_buf+offset, size); 114 offset += size; 115 flush_kernel_dcache_page(bvec->bv_page); 116 + bvec_kunmap_irq(buf, &flags); 117 i++; 118 } 119 }
+5 -1
drivers/block/virtio_blk.c
··· 202 struct virtio_blk *vblk = disk->private_data; 203 struct request *req; 204 struct bio *bio; 205 206 bio = bio_map_kern(vblk->disk->queue, id_str, VIRTIO_BLK_ID_BYTES, 207 GFP_KERNEL); ··· 216 } 217 218 req->cmd_type = REQ_TYPE_SPECIAL; 219 - return blk_execute_rq(vblk->disk->queue, vblk->disk, req, false); 220 } 221 222 static int virtblk_locked_ioctl(struct block_device *bdev, fmode_t mode,
··· 202 struct virtio_blk *vblk = disk->private_data; 203 struct request *req; 204 struct bio *bio; 205 + int err; 206 207 bio = bio_map_kern(vblk->disk->queue, id_str, VIRTIO_BLK_ID_BYTES, 208 GFP_KERNEL); ··· 215 } 216 217 req->cmd_type = REQ_TYPE_SPECIAL; 218 + err = blk_execute_rq(vblk->disk->queue, vblk->disk, req, false); 219 + blk_put_request(req); 220 + 221 + return err; 222 } 223 224 static int virtblk_locked_ioctl(struct block_device *bdev, fmode_t mode,
+1 -1
drivers/dma/ioat/dma_v2.c
··· 879 dma->device_issue_pending = ioat2_issue_pending; 880 dma->device_alloc_chan_resources = ioat2_alloc_chan_resources; 881 dma->device_free_chan_resources = ioat2_free_chan_resources; 882 - dma->device_tx_status = ioat_tx_status; 883 884 err = ioat_probe(device); 885 if (err)
··· 879 dma->device_issue_pending = ioat2_issue_pending; 880 dma->device_alloc_chan_resources = ioat2_alloc_chan_resources; 881 dma->device_free_chan_resources = ioat2_free_chan_resources; 882 + dma->device_tx_status = ioat_dma_tx_status; 883 884 err = ioat_probe(device); 885 if (err)
+3
drivers/gpu/drm/i915/i915_dma.c
··· 2231 dev_priv->mchdev_lock = &mchdev_lock; 2232 spin_unlock(&mchdev_lock); 2233 2234 return 0; 2235 2236 out_workqueue_free:
··· 2231 dev_priv->mchdev_lock = &mchdev_lock; 2232 spin_unlock(&mchdev_lock); 2233 2234 + /* XXX Prevent module unload due to memory corruption bugs. */ 2235 + __module_get(THIS_MODULE); 2236 + 2237 return 0; 2238 2239 out_workqueue_free:
+1 -1
drivers/gpu/drm/i915/intel_fb.c
··· 238 239 drm_framebuffer_cleanup(&ifb->base); 240 if (ifb->obj) { 241 - drm_gem_object_handle_unreference(ifb->obj); 242 drm_gem_object_unreference(ifb->obj); 243 } 244 245 return 0;
··· 238 239 drm_framebuffer_cleanup(&ifb->base); 240 if (ifb->obj) { 241 drm_gem_object_unreference(ifb->obj); 242 + ifb->obj = NULL; 243 } 244 245 return 0;
-1
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 352 353 if (nouveau_fb->nvbo) { 354 nouveau_bo_unmap(nouveau_fb->nvbo); 355 - drm_gem_object_handle_unreference_unlocked(nouveau_fb->nvbo->gem); 356 drm_gem_object_unreference_unlocked(nouveau_fb->nvbo->gem); 357 nouveau_fb->nvbo = NULL; 358 }
··· 352 353 if (nouveau_fb->nvbo) { 354 nouveau_bo_unmap(nouveau_fb->nvbo); 355 drm_gem_object_unreference_unlocked(nouveau_fb->nvbo->gem); 356 nouveau_fb->nvbo = NULL; 357 }
-1
drivers/gpu/drm/nouveau/nouveau_notifier.c
··· 79 mutex_lock(&dev->struct_mutex); 80 nouveau_bo_unpin(chan->notifier_bo); 81 mutex_unlock(&dev->struct_mutex); 82 - drm_gem_object_handle_unreference_unlocked(chan->notifier_bo->gem); 83 drm_gem_object_unreference_unlocked(chan->notifier_bo->gem); 84 drm_mm_takedown(&chan->notifier_heap); 85 }
··· 79 mutex_lock(&dev->struct_mutex); 80 nouveau_bo_unpin(chan->notifier_bo); 81 mutex_unlock(&dev->struct_mutex); 82 drm_gem_object_unreference_unlocked(chan->notifier_bo->gem); 83 drm_mm_takedown(&chan->notifier_heap); 84 }
+3 -2
drivers/gpu/drm/radeon/evergreen.c
··· 1137 1138 WREG32(RCU_IND_INDEX, 0x203); 1139 efuse_straps_3 = RREG32(RCU_IND_DATA); 1140 - efuse_box_bit_127_124 = (u8)(efuse_straps_3 & 0xF0000000) >> 28; 1141 1142 switch(efuse_box_bit_127_124) { 1143 case 0x0: ··· 1407 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024; 1408 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024; 1409 rdev->mc.visible_vram_size = rdev->mc.aper_size; 1410 r600_vram_gtt_location(rdev, &rdev->mc); 1411 radeon_update_bandwidth_info(rdev); 1412 ··· 1521 { 1522 u32 tmp; 1523 1524 - WREG32(CP_INT_CNTL, 0); 1525 WREG32(GRBM_INT_CNTL, 0); 1526 WREG32(INT_MASK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0); 1527 WREG32(INT_MASK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
··· 1137 1138 WREG32(RCU_IND_INDEX, 0x203); 1139 efuse_straps_3 = RREG32(RCU_IND_DATA); 1140 + efuse_box_bit_127_124 = (u8)((efuse_straps_3 & 0xF0000000) >> 28); 1141 1142 switch(efuse_box_bit_127_124) { 1143 case 0x0: ··· 1407 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024; 1408 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024; 1409 rdev->mc.visible_vram_size = rdev->mc.aper_size; 1410 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1411 r600_vram_gtt_location(rdev, &rdev->mc); 1412 radeon_update_bandwidth_info(rdev); 1413 ··· 1520 { 1521 u32 tmp; 1522 1523 + WREG32(CP_INT_CNTL, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE); 1524 WREG32(GRBM_INT_CNTL, 0); 1525 WREG32(INT_MASK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0); 1526 WREG32(INT_MASK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
+3
drivers/gpu/drm/radeon/r100.c
··· 1030 return r; 1031 } 1032 rdev->cp.ready = true; 1033 return 0; 1034 } 1035 ··· 1048 void r100_cp_disable(struct radeon_device *rdev) 1049 { 1050 /* Disable ring */ 1051 rdev->cp.ready = false; 1052 WREG32(RADEON_CP_CSQ_MODE, 0); 1053 WREG32(RADEON_CP_CSQ_CNTL, 0); ··· 2297 /* FIXME we don't use the second aperture yet when we could use it */ 2298 if (rdev->mc.visible_vram_size > rdev->mc.aper_size) 2299 rdev->mc.visible_vram_size = rdev->mc.aper_size; 2300 config_aper_size = RREG32(RADEON_CONFIG_APER_SIZE); 2301 if (rdev->flags & RADEON_IS_IGP) { 2302 uint32_t tom;
··· 1030 return r; 1031 } 1032 rdev->cp.ready = true; 1033 + rdev->mc.active_vram_size = rdev->mc.real_vram_size; 1034 return 0; 1035 } 1036 ··· 1047 void r100_cp_disable(struct radeon_device *rdev) 1048 { 1049 /* Disable ring */ 1050 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1051 rdev->cp.ready = false; 1052 WREG32(RADEON_CP_CSQ_MODE, 0); 1053 WREG32(RADEON_CP_CSQ_CNTL, 0); ··· 2295 /* FIXME we don't use the second aperture yet when we could use it */ 2296 if (rdev->mc.visible_vram_size > rdev->mc.aper_size) 2297 rdev->mc.visible_vram_size = rdev->mc.aper_size; 2298 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 2299 config_aper_size = RREG32(RADEON_CONFIG_APER_SIZE); 2300 if (rdev->flags & RADEON_IS_IGP) { 2301 uint32_t tom;
+3 -1
drivers/gpu/drm/radeon/r600.c
··· 1248 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE); 1249 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE); 1250 rdev->mc.visible_vram_size = rdev->mc.aper_size; 1251 r600_vram_gtt_location(rdev, &rdev->mc); 1252 1253 if (rdev->flags & RADEON_IS_IGP) { ··· 1918 */ 1919 void r600_cp_stop(struct radeon_device *rdev) 1920 { 1921 WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1)); 1922 } 1923 ··· 2912 { 2913 u32 tmp; 2914 2915 - WREG32(CP_INT_CNTL, 0); 2916 WREG32(GRBM_INT_CNTL, 0); 2917 WREG32(DxMODE_INT_MASK, 0); 2918 if (ASIC_IS_DCE3(rdev)) {
··· 1248 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE); 1249 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE); 1250 rdev->mc.visible_vram_size = rdev->mc.aper_size; 1251 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1252 r600_vram_gtt_location(rdev, &rdev->mc); 1253 1254 if (rdev->flags & RADEON_IS_IGP) { ··· 1917 */ 1918 void r600_cp_stop(struct radeon_device *rdev) 1919 { 1920 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 1921 WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1)); 1922 } 1923 ··· 2910 { 2911 u32 tmp; 2912 2913 + WREG32(CP_INT_CNTL, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE); 2914 WREG32(GRBM_INT_CNTL, 0); 2915 WREG32(DxMODE_INT_MASK, 0); 2916 if (ASIC_IS_DCE3(rdev)) {
+2
drivers/gpu/drm/radeon/r600_blit_kms.c
··· 532 memcpy(ptr + rdev->r600_blit.ps_offset, r6xx_ps, r6xx_ps_size * 4); 533 radeon_bo_kunmap(rdev->r600_blit.shader_obj); 534 radeon_bo_unreserve(rdev->r600_blit.shader_obj); 535 return 0; 536 } 537 ··· 540 { 541 int r; 542 543 if (rdev->r600_blit.shader_obj == NULL) 544 return; 545 /* If we can't reserve the bo, unref should be enough to destroy
··· 532 memcpy(ptr + rdev->r600_blit.ps_offset, r6xx_ps, r6xx_ps_size * 4); 533 radeon_bo_kunmap(rdev->r600_blit.shader_obj); 534 radeon_bo_unreserve(rdev->r600_blit.shader_obj); 535 + rdev->mc.active_vram_size = rdev->mc.real_vram_size; 536 return 0; 537 } 538 ··· 539 { 540 int r; 541 542 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 543 if (rdev->r600_blit.shader_obj == NULL) 544 return; 545 /* If we can't reserve the bo, unref should be enough to destroy
+1
drivers/gpu/drm/radeon/radeon.h
··· 344 * about vram size near mc fb location */ 345 u64 mc_vram_size; 346 u64 visible_vram_size; 347 u64 gtt_size; 348 u64 gtt_start; 349 u64 gtt_end;
··· 344 * about vram size near mc fb location */ 345 u64 mc_vram_size; 346 u64 visible_vram_size; 347 + u64 active_vram_size; 348 u64 gtt_size; 349 u64 gtt_start; 350 u64 gtt_end;
+9 -9
drivers/gpu/drm/radeon/radeon_atombios.c
··· 1558 switch (tv_info->ucTV_BootUpDefaultStandard) { 1559 case ATOM_TV_NTSC: 1560 tv_std = TV_STD_NTSC; 1561 - DRM_INFO("Default TV standard: NTSC\n"); 1562 break; 1563 case ATOM_TV_NTSCJ: 1564 tv_std = TV_STD_NTSC_J; 1565 - DRM_INFO("Default TV standard: NTSC-J\n"); 1566 break; 1567 case ATOM_TV_PAL: 1568 tv_std = TV_STD_PAL; 1569 - DRM_INFO("Default TV standard: PAL\n"); 1570 break; 1571 case ATOM_TV_PALM: 1572 tv_std = TV_STD_PAL_M; 1573 - DRM_INFO("Default TV standard: PAL-M\n"); 1574 break; 1575 case ATOM_TV_PALN: 1576 tv_std = TV_STD_PAL_N; 1577 - DRM_INFO("Default TV standard: PAL-N\n"); 1578 break; 1579 case ATOM_TV_PALCN: 1580 tv_std = TV_STD_PAL_CN; 1581 - DRM_INFO("Default TV standard: PAL-CN\n"); 1582 break; 1583 case ATOM_TV_PAL60: 1584 tv_std = TV_STD_PAL_60; 1585 - DRM_INFO("Default TV standard: PAL-60\n"); 1586 break; 1587 case ATOM_TV_SECAM: 1588 tv_std = TV_STD_SECAM; 1589 - DRM_INFO("Default TV standard: SECAM\n"); 1590 break; 1591 default: 1592 tv_std = TV_STD_NTSC; 1593 - DRM_INFO("Unknown TV standard; defaulting to NTSC\n"); 1594 break; 1595 } 1596 }
··· 1558 switch (tv_info->ucTV_BootUpDefaultStandard) { 1559 case ATOM_TV_NTSC: 1560 tv_std = TV_STD_NTSC; 1561 + DRM_DEBUG_KMS("Default TV standard: NTSC\n"); 1562 break; 1563 case ATOM_TV_NTSCJ: 1564 tv_std = TV_STD_NTSC_J; 1565 + DRM_DEBUG_KMS("Default TV standard: NTSC-J\n"); 1566 break; 1567 case ATOM_TV_PAL: 1568 tv_std = TV_STD_PAL; 1569 + DRM_DEBUG_KMS("Default TV standard: PAL\n"); 1570 break; 1571 case ATOM_TV_PALM: 1572 tv_std = TV_STD_PAL_M; 1573 + DRM_DEBUG_KMS("Default TV standard: PAL-M\n"); 1574 break; 1575 case ATOM_TV_PALN: 1576 tv_std = TV_STD_PAL_N; 1577 + DRM_DEBUG_KMS("Default TV standard: PAL-N\n"); 1578 break; 1579 case ATOM_TV_PALCN: 1580 tv_std = TV_STD_PAL_CN; 1581 + DRM_DEBUG_KMS("Default TV standard: PAL-CN\n"); 1582 break; 1583 case ATOM_TV_PAL60: 1584 tv_std = TV_STD_PAL_60; 1585 + DRM_DEBUG_KMS("Default TV standard: PAL-60\n"); 1586 break; 1587 case ATOM_TV_SECAM: 1588 tv_std = TV_STD_SECAM; 1589 + DRM_DEBUG_KMS("Default TV standard: SECAM\n"); 1590 break; 1591 default: 1592 tv_std = TV_STD_NTSC; 1593 + DRM_DEBUG_KMS("Unknown TV standard; defaulting to NTSC\n"); 1594 break; 1595 } 1596 }
+13 -13
drivers/gpu/drm/radeon/radeon_combios.c
··· 913 switch (RBIOS8(tv_info + 7) & 0xf) { 914 case 1: 915 tv_std = TV_STD_NTSC; 916 - DRM_INFO("Default TV standard: NTSC\n"); 917 break; 918 case 2: 919 tv_std = TV_STD_PAL; 920 - DRM_INFO("Default TV standard: PAL\n"); 921 break; 922 case 3: 923 tv_std = TV_STD_PAL_M; 924 - DRM_INFO("Default TV standard: PAL-M\n"); 925 break; 926 case 4: 927 tv_std = TV_STD_PAL_60; 928 - DRM_INFO("Default TV standard: PAL-60\n"); 929 break; 930 case 5: 931 tv_std = TV_STD_NTSC_J; 932 - DRM_INFO("Default TV standard: NTSC-J\n"); 933 break; 934 case 6: 935 tv_std = TV_STD_SCART_PAL; 936 - DRM_INFO("Default TV standard: SCART-PAL\n"); 937 break; 938 default: 939 tv_std = TV_STD_NTSC; 940 - DRM_INFO 941 ("Unknown TV standard; defaulting to NTSC\n"); 942 break; 943 } 944 945 switch ((RBIOS8(tv_info + 9) >> 2) & 0x3) { 946 case 0: 947 - DRM_INFO("29.498928713 MHz TV ref clk\n"); 948 break; 949 case 1: 950 - DRM_INFO("28.636360000 MHz TV ref clk\n"); 951 break; 952 case 2: 953 - DRM_INFO("14.318180000 MHz TV ref clk\n"); 954 break; 955 case 3: 956 - DRM_INFO("27.000000000 MHz TV ref clk\n"); 957 break; 958 default: 959 break; ··· 1324 1325 if (tmds_info) { 1326 ver = RBIOS8(tmds_info); 1327 - DRM_INFO("DFP table revision: %d\n", ver); 1328 if (ver == 3) { 1329 n = RBIOS8(tmds_info + 5) + 1; 1330 if (n > 4) ··· 1408 offset = combios_get_table_offset(dev, COMBIOS_EXT_TMDS_INFO_TABLE); 1409 if (offset) { 1410 ver = RBIOS8(offset); 1411 - DRM_INFO("External TMDS Table revision: %d\n", ver); 1412 tmds->slave_addr = RBIOS8(offset + 4 + 2); 1413 tmds->slave_addr >>= 1; /* 7 bit addressing */ 1414 gpio = RBIOS8(offset + 4 + 3);
··· 913 switch (RBIOS8(tv_info + 7) & 0xf) { 914 case 1: 915 tv_std = TV_STD_NTSC; 916 + DRM_DEBUG_KMS("Default TV standard: NTSC\n"); 917 break; 918 case 2: 919 tv_std = TV_STD_PAL; 920 + DRM_DEBUG_KMS("Default TV standard: PAL\n"); 921 break; 922 case 3: 923 tv_std = TV_STD_PAL_M; 924 + DRM_DEBUG_KMS("Default TV standard: PAL-M\n"); 925 break; 926 case 4: 927 tv_std = TV_STD_PAL_60; 928 + DRM_DEBUG_KMS("Default TV standard: PAL-60\n"); 929 break; 930 case 5: 931 tv_std = TV_STD_NTSC_J; 932 + DRM_DEBUG_KMS("Default TV standard: NTSC-J\n"); 933 break; 934 case 6: 935 tv_std = TV_STD_SCART_PAL; 936 + DRM_DEBUG_KMS("Default TV standard: SCART-PAL\n"); 937 break; 938 default: 939 tv_std = TV_STD_NTSC; 940 + DRM_DEBUG_KMS 941 ("Unknown TV standard; defaulting to NTSC\n"); 942 break; 943 } 944 945 switch ((RBIOS8(tv_info + 9) >> 2) & 0x3) { 946 case 0: 947 + DRM_DEBUG_KMS("29.498928713 MHz TV ref clk\n"); 948 break; 949 case 1: 950 + DRM_DEBUG_KMS("28.636360000 MHz TV ref clk\n"); 951 break; 952 case 2: 953 + DRM_DEBUG_KMS("14.318180000 MHz TV ref clk\n"); 954 break; 955 case 3: 956 + DRM_DEBUG_KMS("27.000000000 MHz TV ref clk\n"); 957 break; 958 default: 959 break; ··· 1324 1325 if (tmds_info) { 1326 ver = RBIOS8(tmds_info); 1327 + DRM_DEBUG_KMS("DFP table revision: %d\n", ver); 1328 if (ver == 3) { 1329 n = RBIOS8(tmds_info + 5) + 1; 1330 if (n > 4) ··· 1408 offset = combios_get_table_offset(dev, COMBIOS_EXT_TMDS_INFO_TABLE); 1409 if (offset) { 1410 ver = RBIOS8(offset); 1411 + DRM_DEBUG_KMS("External TMDS Table revision: %d\n", ver); 1412 tmds->slave_addr = RBIOS8(offset + 4 + 2); 1413 tmds->slave_addr >>= 1; /* 7 bit addressing */ 1414 gpio = RBIOS8(offset + 4 + 3);
-1
drivers/gpu/drm/radeon/radeon_fb.c
··· 97 radeon_bo_unpin(rbo); 98 radeon_bo_unreserve(rbo); 99 } 100 - drm_gem_object_handle_unreference(gobj); 101 drm_gem_object_unreference_unlocked(gobj); 102 } 103
··· 97 radeon_bo_unpin(rbo); 98 radeon_bo_unreserve(rbo); 99 } 100 drm_gem_object_unreference_unlocked(gobj); 101 } 102
+1 -1
drivers/gpu/drm/radeon/radeon_object.c
··· 69 u32 c = 0; 70 71 rbo->placement.fpfn = 0; 72 - rbo->placement.lpfn = 0; 73 rbo->placement.placement = rbo->placements; 74 rbo->placement.busy_placement = rbo->placements; 75 if (domain & RADEON_GEM_DOMAIN_VRAM)
··· 69 u32 c = 0; 70 71 rbo->placement.fpfn = 0; 72 + rbo->placement.lpfn = rbo->rdev->mc.active_vram_size >> PAGE_SHIFT; 73 rbo->placement.placement = rbo->placements; 74 rbo->placement.busy_placement = rbo->placements; 75 if (domain & RADEON_GEM_DOMAIN_VRAM)
+1 -4
drivers/gpu/drm/radeon/radeon_object.h
··· 124 int r; 125 126 r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, 0); 127 - if (unlikely(r != 0)) { 128 - if (r != -ERESTARTSYS) 129 - dev_err(bo->rdev->dev, "%p reserve failed for wait\n", bo); 130 return r; 131 - } 132 spin_lock(&bo->tbo.lock); 133 if (mem_type) 134 *mem_type = bo->tbo.mem.mem_type;
··· 124 int r; 125 126 r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, 0); 127 + if (unlikely(r != 0)) 128 return r; 129 spin_lock(&bo->tbo.lock); 130 if (mem_type) 131 *mem_type = bo->tbo.mem.mem_type;
+1
drivers/gpu/drm/radeon/rs600.c
··· 693 rdev->mc.real_vram_size = RREG32(RADEON_CONFIG_MEMSIZE); 694 rdev->mc.mc_vram_size = rdev->mc.real_vram_size; 695 rdev->mc.visible_vram_size = rdev->mc.aper_size; 696 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 697 base = RREG32_MC(R_000004_MC_FB_LOCATION); 698 base = G_000004_MC_FB_START(base) << 16;
··· 693 rdev->mc.real_vram_size = RREG32(RADEON_CONFIG_MEMSIZE); 694 rdev->mc.mc_vram_size = rdev->mc.real_vram_size; 695 rdev->mc.visible_vram_size = rdev->mc.aper_size; 696 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 697 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 698 base = RREG32_MC(R_000004_MC_FB_LOCATION); 699 base = G_000004_MC_FB_START(base) << 16;
+1
drivers/gpu/drm/radeon/rs690.c
··· 157 rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0); 158 rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0); 159 rdev->mc.visible_vram_size = rdev->mc.aper_size; 160 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 161 base = G_000100_MC_FB_START(base) << 16; 162 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
··· 157 rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0); 158 rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0); 159 rdev->mc.visible_vram_size = rdev->mc.aper_size; 160 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 161 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 162 base = G_000100_MC_FB_START(base) << 16; 163 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
+2
drivers/gpu/drm/radeon/rv770.c
··· 267 */ 268 void r700_cp_stop(struct radeon_device *rdev) 269 { 270 WREG32(CP_ME_CNTL, (CP_ME_HALT | CP_PFP_HALT)); 271 } 272 ··· 993 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE); 994 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE); 995 rdev->mc.visible_vram_size = rdev->mc.aper_size; 996 r600_vram_gtt_location(rdev, &rdev->mc); 997 radeon_update_bandwidth_info(rdev); 998
··· 267 */ 268 void r700_cp_stop(struct radeon_device *rdev) 269 { 270 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 271 WREG32(CP_ME_CNTL, (CP_ME_HALT | CP_PFP_HALT)); 272 } 273 ··· 992 rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE); 993 rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE); 994 rdev->mc.visible_vram_size = rdev->mc.aper_size; 995 + rdev->mc.active_vram_size = rdev->mc.visible_vram_size; 996 r600_vram_gtt_location(rdev, &rdev->mc); 997 radeon_update_bandwidth_info(rdev); 998
+71 -12
drivers/gpu/drm/ttm/ttm_bo.c
··· 442 } 443 444 /** 445 * If bo idle, remove from delayed- and lru lists, and unref. 446 * If not idle, and already on delayed list, do nothing. 447 * If not idle, and not on delayed list, put on delayed list, ··· 493 int ret; 494 495 spin_lock(&bo->lock); 496 (void) ttm_bo_wait(bo, false, false, !remove_all); 497 498 if (!bo->sync_obj) { ··· 502 spin_unlock(&bo->lock); 503 504 spin_lock(&glob->lru_lock); 505 - put_count = ttm_bo_del_from_lru(bo); 506 507 - ret = ttm_bo_reserve_locked(bo, false, false, false, 0); 508 - BUG_ON(ret); 509 - if (bo->ttm) 510 - ttm_tt_unbind(bo->ttm); 511 512 if (!list_empty(&bo->ddestroy)) { 513 list_del_init(&bo->ddestroy); 514 ++put_count; 515 } 516 - if (bo->mem.mm_node) { 517 - drm_mm_put_block(bo->mem.mm_node); 518 - bo->mem.mm_node = NULL; 519 - } 520 - spin_unlock(&glob->lru_lock); 521 522 - atomic_set(&bo->reserved, 0); 523 524 while (put_count--) 525 kref_put(&bo->list_kref, ttm_bo_ref_bug); 526 527 return 0; 528 } 529 - 530 spin_lock(&glob->lru_lock); 531 if (list_empty(&bo->ddestroy)) { 532 void *sync_obj = bo->sync_obj;
··· 442 } 443 444 /** 445 + * Call bo::reserved and with the lru lock held. 446 + * Will release GPU memory type usage on destruction. 447 + * This is the place to put in driver specific hooks. 448 + * Will release the bo::reserved lock and the 449 + * lru lock on exit. 450 + */ 451 + 452 + static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo) 453 + { 454 + struct ttm_bo_global *glob = bo->glob; 455 + 456 + if (bo->ttm) { 457 + 458 + /** 459 + * Release the lru_lock, since we don't want to have 460 + * an atomic requirement on ttm_tt[unbind|destroy]. 461 + */ 462 + 463 + spin_unlock(&glob->lru_lock); 464 + ttm_tt_unbind(bo->ttm); 465 + ttm_tt_destroy(bo->ttm); 466 + bo->ttm = NULL; 467 + spin_lock(&glob->lru_lock); 468 + } 469 + 470 + if (bo->mem.mm_node) { 471 + drm_mm_put_block(bo->mem.mm_node); 472 + bo->mem.mm_node = NULL; 473 + } 474 + 475 + atomic_set(&bo->reserved, 0); 476 + wake_up_all(&bo->event_queue); 477 + spin_unlock(&glob->lru_lock); 478 + } 479 + 480 + 481 + /** 482 * If bo idle, remove from delayed- and lru lists, and unref. 483 * If not idle, and already on delayed list, do nothing. 484 * If not idle, and not on delayed list, put on delayed list, ··· 456 int ret; 457 458 spin_lock(&bo->lock); 459 + retry: 460 (void) ttm_bo_wait(bo, false, false, !remove_all); 461 462 if (!bo->sync_obj) { ··· 464 spin_unlock(&bo->lock); 465 466 spin_lock(&glob->lru_lock); 467 + ret = ttm_bo_reserve_locked(bo, false, !remove_all, false, 0); 468 469 + /** 470 + * Someone else has the object reserved. Bail and retry. 471 + */ 472 + 473 + if (unlikely(ret == -EBUSY)) { 474 + spin_unlock(&glob->lru_lock); 475 + spin_lock(&bo->lock); 476 + goto requeue; 477 + } 478 + 479 + /** 480 + * We can re-check for sync object without taking 481 + * the bo::lock since setting the sync object requires 482 + * also bo::reserved. A busy object at this point may 483 + * be caused by another thread starting an accelerated 484 + * eviction. 
485 + */ 486 + 487 + if (unlikely(bo->sync_obj)) { 488 + atomic_set(&bo->reserved, 0); 489 + wake_up_all(&bo->event_queue); 490 + spin_unlock(&glob->lru_lock); 491 + spin_lock(&bo->lock); 492 + if (remove_all) 493 + goto retry; 494 + else 495 + goto requeue; 496 + } 497 + 498 + put_count = ttm_bo_del_from_lru(bo); 499 500 if (!list_empty(&bo->ddestroy)) { 501 list_del_init(&bo->ddestroy); 502 ++put_count; 503 } 504 505 + ttm_bo_cleanup_memtype_use(bo); 506 507 while (put_count--) 508 kref_put(&bo->list_kref, ttm_bo_ref_bug); 509 510 return 0; 511 } 512 + requeue: 513 spin_lock(&glob->lru_lock); 514 if (list_empty(&bo->ddestroy)) { 515 void *sync_obj = bo->sync_obj;
+2
drivers/hid/hid-cando.c
··· 237 USB_DEVICE_ID_CANDO_MULTI_TOUCH) }, 238 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, 239 USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6) }, 240 { } 241 }; 242 MODULE_DEVICE_TABLE(hid, cando_devices);
··· 237 USB_DEVICE_ID_CANDO_MULTI_TOUCH) }, 238 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, 239 USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6) }, 240 + { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, 241 + USB_DEVICE_ID_CANDO_MULTI_TOUCH_15_6) }, 242 { } 243 }; 244 MODULE_DEVICE_TABLE(hid, cando_devices);
+1
drivers/hid/hid-core.c
··· 1292 { HID_USB_DEVICE(USB_VENDOR_ID_BTC, USB_DEVICE_ID_BTC_EMPREX_REMOTE_2) }, 1293 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH) }, 1294 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6) }, 1295 { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION) }, 1296 { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR) }, 1297 { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_TACTICAL_PAD) },
··· 1292 { HID_USB_DEVICE(USB_VENDOR_ID_BTC, USB_DEVICE_ID_BTC_EMPREX_REMOTE_2) }, 1293 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH) }, 1294 { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6) }, 1295 + { HID_USB_DEVICE(USB_VENDOR_ID_CANDO, USB_DEVICE_ID_CANDO_MULTI_TOUCH_15_6) }, 1296 { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION) }, 1297 { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR) }, 1298 { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_TACTICAL_PAD) },
+2
drivers/hid/hid-ids.h
··· 134 #define USB_VENDOR_ID_CANDO 0x2087 135 #define USB_DEVICE_ID_CANDO_MULTI_TOUCH 0x0a01 136 #define USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6 0x0b03 137 138 #define USB_VENDOR_ID_CH 0x068e 139 #define USB_DEVICE_ID_CH_PRO_PEDALS 0x00f2 ··· 504 505 #define USB_VENDOR_ID_TURBOX 0x062a 506 #define USB_DEVICE_ID_TURBOX_KEYBOARD 0x0201 507 508 #define USB_VENDOR_ID_TWINHAN 0x6253 509 #define USB_DEVICE_ID_TWINHAN_IR_REMOTE 0x0100
··· 134 #define USB_VENDOR_ID_CANDO 0x2087 135 #define USB_DEVICE_ID_CANDO_MULTI_TOUCH 0x0a01 136 #define USB_DEVICE_ID_CANDO_MULTI_TOUCH_11_6 0x0b03 137 + #define USB_DEVICE_ID_CANDO_MULTI_TOUCH_15_6 0x0f01 138 139 #define USB_VENDOR_ID_CH 0x068e 140 #define USB_DEVICE_ID_CH_PRO_PEDALS 0x00f2 ··· 503 504 #define USB_VENDOR_ID_TURBOX 0x062a 505 #define USB_DEVICE_ID_TURBOX_KEYBOARD 0x0201 506 + #define USB_DEVICE_ID_TURBOX_TOUCHSCREEN_MOSART 0x7100 507 508 #define USB_VENDOR_ID_TWINHAN 0x6253 509 #define USB_DEVICE_ID_TWINHAN_IR_REMOTE 0x0100
+11
drivers/hid/hidraw.c
··· 109 int ret = 0; 110 111 mutex_lock(&minors_lock); 112 dev = hidraw_table[minor]->hid; 113 114 if (!dev->hid_output_raw_report) { ··· 250 251 mutex_lock(&minors_lock); 252 dev = hidraw_table[minor]; 253 254 switch (cmd) { 255 case HIDIOCGRDESCSIZE: ··· 327 328 ret = -ENOTTY; 329 } 330 mutex_unlock(&minors_lock); 331 return ret; 332 }
··· 109 int ret = 0; 110 111 mutex_lock(&minors_lock); 112 + 113 + if (!hidraw_table[minor]) { 114 + ret = -ENODEV; 115 + goto out; 116 + } 117 + 118 dev = hidraw_table[minor]->hid; 119 120 if (!dev->hid_output_raw_report) { ··· 244 245 mutex_lock(&minors_lock); 246 dev = hidraw_table[minor]; 247 + if (!dev) { 248 + ret = -ENODEV; 249 + goto out; 250 + } 251 252 switch (cmd) { 253 case HIDIOCGRDESCSIZE: ··· 317 318 ret = -ENOTTY; 319 } 320 + out: 321 mutex_unlock(&minors_lock); 322 return ret; 323 }
+1
drivers/hid/usbhid/hid-quirks.c
··· 36 { USB_VENDOR_ID_DWAV, USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER, HID_QUIRK_MULTI_INPUT | HID_QUIRK_NOGET }, 37 { USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH, HID_QUIRK_MULTI_INPUT }, 38 { USB_VENDOR_ID_MOJO, USB_DEVICE_ID_RETRO_ADAPTER, HID_QUIRK_MULTI_INPUT }, 39 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_DRIVING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 40 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FLYING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 41 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FIGHTING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
··· 36 { USB_VENDOR_ID_DWAV, USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER, HID_QUIRK_MULTI_INPUT | HID_QUIRK_NOGET }, 37 { USB_VENDOR_ID_DWAV, USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH, HID_QUIRK_MULTI_INPUT }, 38 { USB_VENDOR_ID_MOJO, USB_DEVICE_ID_RETRO_ADAPTER, HID_QUIRK_MULTI_INPUT }, 39 + { USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_TOUCHSCREEN_MOSART, HID_QUIRK_MULTI_INPUT }, 40 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_DRIVING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 41 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FLYING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 42 { USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FIGHTING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
+5
drivers/i2c/busses/i2c-cpm.c
··· 677 dev_dbg(&ofdev->dev, "hw routines for %s registered.\n", 678 cpm->adap.name); 679 680 return 0; 681 out_shut: 682 cpm_i2c_shutdown(cpm);
··· 677 dev_dbg(&ofdev->dev, "hw routines for %s registered.\n", 678 cpm->adap.name); 679 680 + /* 681 + * register OF I2C devices 682 + */ 683 + of_i2c_register_devices(&cpm->adap); 684 + 685 return 0; 686 out_shut: 687 cpm_i2c_shutdown(cpm);
+3
drivers/i2c/busses/i2c-ibm_iic.c
··· 761 dev_info(&ofdev->dev, "using %s mode\n", 762 dev->fast_mode ? "fast (400 kHz)" : "standard (100 kHz)"); 763 764 return 0; 765 766 error_cleanup:
··· 761 dev_info(&ofdev->dev, "using %s mode\n", 762 dev->fast_mode ? "fast (400 kHz)" : "standard (100 kHz)"); 763 764 + /* Now register all the child nodes */ 765 + of_i2c_register_devices(adap); 766 + 767 return 0; 768 769 error_cleanup:
+1
drivers/i2c/busses/i2c-mpc.c
··· 632 dev_err(i2c->dev, "failed to add adapter\n"); 633 goto fail_add; 634 } 635 636 return result; 637
··· 632 dev_err(i2c->dev, "failed to add adapter\n"); 633 goto fail_add; 634 } 635 + of_i2c_register_devices(&i2c->adap); 636 637 return result; 638
+8 -4
drivers/i2c/busses/i2c-pca-isa.c
··· 71 72 static int pca_isa_waitforcompletion(void *pd) 73 { 74 - long ret = ~0; 75 unsigned long timeout; 76 77 if (irq > -1) { 78 ret = wait_event_timeout(pca_wait, ··· 81 } else { 82 /* Do polling */ 83 timeout = jiffies + pca_isa_ops.timeout; 84 - while (((pca_isa_readbyte(pd, I2C_PCA_CON) 85 - & I2C_PCA_CON_SI) == 0) 86 - && (ret = time_before(jiffies, timeout))) 87 udelay(100); 88 } 89 return ret > 0; 90 } 91
··· 71 72 static int pca_isa_waitforcompletion(void *pd) 73 { 74 unsigned long timeout; 75 + long ret; 76 77 if (irq > -1) { 78 ret = wait_event_timeout(pca_wait, ··· 81 } else { 82 /* Do polling */ 83 timeout = jiffies + pca_isa_ops.timeout; 84 + do { 85 + ret = time_before(jiffies, timeout); 86 + if (pca_isa_readbyte(pd, I2C_PCA_CON) 87 + & I2C_PCA_CON_SI) 88 + break; 89 udelay(100); 90 + } while (ret); 91 } 92 + 93 return ret > 0; 94 } 95
+7 -4
drivers/i2c/busses/i2c-pca-platform.c
··· 80 static int i2c_pca_pf_waitforcompletion(void *pd) 81 { 82 struct i2c_pca_pf_data *i2c = pd; 83 - long ret = ~0; 84 unsigned long timeout; 85 86 if (i2c->irq) { 87 ret = wait_event_timeout(i2c->wait, ··· 90 } else { 91 /* Do polling */ 92 timeout = jiffies + i2c->adap.timeout; 93 - while (((i2c->algo_data.read_byte(i2c, I2C_PCA_CON) 94 - & I2C_PCA_CON_SI) == 0) 95 - && (ret = time_before(jiffies, timeout))) 96 udelay(100); 97 } 98 99 return ret > 0;
··· 80 static int i2c_pca_pf_waitforcompletion(void *pd) 81 { 82 struct i2c_pca_pf_data *i2c = pd; 83 unsigned long timeout; 84 + long ret; 85 86 if (i2c->irq) { 87 ret = wait_event_timeout(i2c->wait, ··· 90 } else { 91 /* Do polling */ 92 timeout = jiffies + i2c->adap.timeout; 93 + do { 94 + ret = time_before(jiffies, timeout); 95 + if (i2c->algo_data.read_byte(i2c, I2C_PCA_CON) 96 + & I2C_PCA_CON_SI) 97 + break; 98 udelay(100); 99 + } while (ret); 100 } 101 102 return ret > 0;
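The two hunks above apply the same fix to the same polling pattern: the old `while (... && (ret = time_before(jiffies, timeout)))` form could report a timeout even though the SI bit was raised during the final `udelay()`. A minimal userspace sketch of the corrected loop, with all names (`fake_dev`, `dev_ready`, `poll_with_deadline`) invented stand-ins rather than the driver's API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy device model: the status bit becomes set on the Nth poll.
 * In the real driver this is the I2C_PCA_CON_SI bit. */
struct fake_dev { int polls; int ready_at; };

bool dev_ready(struct fake_dev *d)
{
	return ++d->polls >= d->ready_at;
}

/* Poll for completion the way the corrected i2c-pca loops do:
 * sample the deadline first, then test the status bit, so a flag
 * raised on the very last pass is still observed before giving up.
 * Returns 1 on completion, 0 on timeout. */
int poll_with_deadline(struct fake_dev *d, int max_iters)
{
	int iters = 0;
	long ret;

	do {
		ret = (iters < max_iters); /* stands in for time_before(jiffies, timeout) */
		if (dev_ready(d))
			return 1;
		iters++;                   /* stands in for udelay(100) */
	} while (ret);

	return 0;
}
```

Note the status check happens once more after the deadline has expired (when `ret` is already 0), which is exactly the window the original loop missed.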
+24 -30
drivers/i2c/i2c-core.c
··· 32 #include <linux/init.h> 33 #include <linux/idr.h> 34 #include <linux/mutex.h> 35 - #include <linux/of_i2c.h> 36 #include <linux/of_device.h> 37 #include <linux/completion.h> 38 #include <linux/hardirq.h> ··· 196 { 197 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 198 199 - if (pm_runtime_suspended(dev)) 200 - return 0; 201 - 202 - if (pm) 203 - return pm->suspend ? pm->suspend(dev) : 0; 204 205 return i2c_legacy_suspend(dev, PMSG_SUSPEND); 206 } ··· 216 else 217 ret = i2c_legacy_resume(dev); 218 219 - if (!ret) { 220 - pm_runtime_disable(dev); 221 - pm_runtime_set_active(dev); 222 - pm_runtime_enable(dev); 223 - } 224 - 225 return ret; 226 } 227 ··· 223 { 224 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 225 226 - if (pm_runtime_suspended(dev)) 227 - return 0; 228 - 229 - if (pm) 230 - return pm->freeze ? pm->freeze(dev) : 0; 231 232 return i2c_legacy_suspend(dev, PMSG_FREEZE); 233 } ··· 237 { 238 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 239 240 - if (pm_runtime_suspended(dev)) 241 - return 0; 242 - 243 - if (pm) 244 - return pm->thaw ? pm->thaw(dev) : 0; 245 246 return i2c_legacy_resume(dev); 247 } ··· 251 { 252 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 253 254 - if (pm_runtime_suspended(dev)) 255 - return 0; 256 - 257 - if (pm) 258 - return pm->poweroff ? pm->poweroff(dev) : 0; 259 260 return i2c_legacy_suspend(dev, PMSG_HIBERNATE); 261 } ··· 872 /* create pre-declared device nodes */ 873 if (adap->nr < __i2c_first_dynamic_bus_num) 874 i2c_scan_static_board_info(adap); 875 - 876 - /* Register devices from the device tree */ 877 - of_i2c_register_devices(adap); 878 879 /* Notify drivers */ 880 mutex_lock(&core_lock);
··· 32 #include <linux/init.h> 33 #include <linux/idr.h> 34 #include <linux/mutex.h> 35 #include <linux/of_device.h> 36 #include <linux/completion.h> 37 #include <linux/hardirq.h> ··· 197 { 198 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 199 200 + if (pm) { 201 + if (pm_runtime_suspended(dev)) 202 + return 0; 203 + else 204 + return pm->suspend ? pm->suspend(dev) : 0; 205 + } 206 207 return i2c_legacy_suspend(dev, PMSG_SUSPEND); 208 } ··· 216 else 217 ret = i2c_legacy_resume(dev); 218 219 return ret; 220 } 221 ··· 229 { 230 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 231 232 + if (pm) { 233 + if (pm_runtime_suspended(dev)) 234 + return 0; 235 + else 236 + return pm->freeze ? pm->freeze(dev) : 0; 237 + } 238 239 return i2c_legacy_suspend(dev, PMSG_FREEZE); 240 } ··· 242 { 243 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 244 245 + if (pm) { 246 + if (pm_runtime_suspended(dev)) 247 + return 0; 248 + else 249 + return pm->thaw ? pm->thaw(dev) : 0; 250 + } 251 252 return i2c_legacy_resume(dev); 253 } ··· 255 { 256 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 257 258 + if (pm) { 259 + if (pm_runtime_suspended(dev)) 260 + return 0; 261 + else 262 + return pm->poweroff ? pm->poweroff(dev) : 0; 263 + } 264 265 return i2c_legacy_suspend(dev, PMSG_HIBERNATE); 266 } ··· 875 /* create pre-declared device nodes */ 876 if (adap->nr < __i2c_first_dynamic_bus_num) 877 i2c_scan_static_board_info(adap); 878 879 /* Notify drivers */ 880 mutex_lock(&core_lock);
+5 -5
drivers/idle/intel_idle.c
··· 157 { /* MWAIT C5 */ }, 158 { /* MWAIT C6 */ 159 .name = "ATM-C6", 160 - .desc = "MWAIT 0x40", 161 - .driver_data = (void *) 0x40, 162 .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, 163 - .exit_latency = 200, 164 .power_usage = 150, 165 - .target_residency = 800, 166 - .enter = NULL }, /* disabled */ 167 }; 168 169 /**
··· 157 { /* MWAIT C5 */ }, 158 { /* MWAIT C6 */ 159 .name = "ATM-C6", 160 + .desc = "MWAIT 0x52", 161 + .driver_data = (void *) 0x52, 162 .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, 163 + .exit_latency = 140, 164 .power_usage = 150, 165 + .target_residency = 560, 166 + .enter = &intel_idle }, 167 }; 168 169 /**
+3
drivers/input/joydev.c
··· 483 484 memcpy(joydev->abspam, abspam, len); 485 486 out: 487 kfree(abspam); 488 return retval;
··· 483 484 memcpy(joydev->abspam, abspam, len); 485 486 + for (i = 0; i < joydev->nabs; i++) 487 + joydev->absmap[joydev->abspam[i]] = i; 488 + 489 out: 490 kfree(abspam); 491 return retval;
+7
drivers/input/misc/uinput.c
··· 404 retval = uinput_validate_absbits(dev); 405 if (retval < 0) 406 goto exit; 407 } 408 409 udev->state = UIST_SETUP_COMPLETE;
··· 404 retval = uinput_validate_absbits(dev); 405 if (retval < 0) 406 goto exit; 407 + if (test_bit(ABS_MT_SLOT, dev->absbit)) { 408 + int nslot = input_abs_get_max(dev, ABS_MT_SLOT) + 1; 409 + input_mt_create_slots(dev, nslot); 410 + input_set_events_per_packet(dev, 6 * nslot); 411 + } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) { 412 + input_set_events_per_packet(dev, 60); 413 + } 414 } 415 416 udev->state = UIST_SETUP_COMPLETE;
+12 -11
drivers/input/tablet/wacom_sys.c
··· 103 static int wacom_open(struct input_dev *dev) 104 { 105 struct wacom *wacom = input_get_drvdata(dev); 106 107 mutex_lock(&wacom->lock); 108 109 - wacom->irq->dev = wacom->usbdev; 110 - 111 - if (usb_autopm_get_interface(wacom->intf) < 0) { 112 - mutex_unlock(&wacom->lock); 113 - return -EIO; 114 - } 115 - 116 if (usb_submit_urb(wacom->irq, GFP_KERNEL)) { 117 - usb_autopm_put_interface(wacom->intf); 118 - mutex_unlock(&wacom->lock); 119 - return -EIO; 120 } 121 122 wacom->open = true; 123 wacom->intf->needs_remote_wakeup = 1; 124 125 mutex_unlock(&wacom->lock); 126 - return 0; 127 } 128 129 static void wacom_close(struct input_dev *dev) ··· 134 wacom->open = false; 135 wacom->intf->needs_remote_wakeup = 0; 136 mutex_unlock(&wacom->lock); 137 } 138 139 static int wacom_parse_hid(struct usb_interface *intf, struct hid_descriptor *hid_desc,
··· 103 static int wacom_open(struct input_dev *dev) 104 { 105 struct wacom *wacom = input_get_drvdata(dev); 106 + int retval = 0; 107 + 108 + if (usb_autopm_get_interface(wacom->intf) < 0) 109 + return -EIO; 110 111 mutex_lock(&wacom->lock); 112 113 if (usb_submit_urb(wacom->irq, GFP_KERNEL)) { 114 + retval = -EIO; 115 + goto out; 116 } 117 118 wacom->open = true; 119 wacom->intf->needs_remote_wakeup = 1; 120 121 + out: 122 mutex_unlock(&wacom->lock); 123 + if (retval) 124 + usb_autopm_put_interface(wacom->intf); 125 + return retval; 126 } 127 128 static void wacom_close(struct input_dev *dev) ··· 135 wacom->open = false; 136 wacom->intf->needs_remote_wakeup = 0; 137 mutex_unlock(&wacom->lock); 138 + 139 + usb_autopm_put_interface(wacom->intf); 140 } 141 142 static int wacom_parse_hid(struct usb_interface *intf, struct hid_descriptor *hid_desc,
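The reworked `wacom_open()` above takes the runtime-PM reference before the mutex and unwinds it on any later failure, keeping the get/put counts balanced. A toy sketch of that acquire-in-order / release-on-error shape (`toy_dev` and `toy_open` are invented for illustration; the comments map each step to the USB calls):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy resource counters standing in for the autopm reference,
 * the mutex, and the URB submission outcome. */
struct toy_dev { int pm_refs; bool locked; bool submit_ok; bool open; };

/* Returns 0 on success, -1 on error, with pm_refs balanced on
 * the error path just as the patched driver balances
 * usb_autopm_get_interface()/usb_autopm_put_interface(). */
int toy_open(struct toy_dev *d)
{
	int retval = 0;

	d->pm_refs++;        /* usb_autopm_get_interface() */
	d->locked = true;    /* mutex_lock() */

	if (!d->submit_ok) { /* usb_submit_urb() failed */
		retval = -1;
		goto out;
	}
	d->open = true;

out:
	d->locked = false;   /* mutex_unlock() */
	if (retval)
		d->pm_refs--;    /* usb_autopm_put_interface() on error */
	return retval;
}
```

The single `out:` label keeps the unlock on every path, and the reference is dropped only after the lock is released, matching the ordering in the hunk.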
+3 -1
drivers/input/tablet/wacom_wac.c
··· 442 /* general pen packet */ 443 if ((data[1] & 0xb8) == 0xa0) { 444 t = (data[6] << 2) | ((data[7] >> 6) & 3); 445 - if (features->type >= INTUOS4S && features->type <= INTUOS4L) 446 t = (t << 1) | (data[1] & 1); 447 input_report_abs(input, ABS_PRESSURE, t); 448 input_report_abs(input, ABS_TILT_X, 449 ((data[7] << 1) & 0x7e) | (data[8] >> 7));
··· 442 /* general pen packet */ 443 if ((data[1] & 0xb8) == 0xa0) { 444 t = (data[6] << 2) | ((data[7] >> 6) & 3); 445 + if ((features->type >= INTUOS4S && features->type <= INTUOS4L) || 446 + features->type == WACOM_21UX2) { 447 t = (t << 1) | (data[1] & 1); 448 + } 449 input_report_abs(input, ABS_PRESSURE, t); 450 input_report_abs(input, ABS_TILT_X, 451 ((data[7] << 1) & 0x7e) | (data[8] >> 7));
+14 -4
drivers/isdn/sc/interrupt.c
··· 112 } 113 else if(callid>=0x0000 && callid<=0x7FFF) 114 { 115 pr_debug("%s: Got Incoming Call\n", 116 sc_adapter[card]->devicename); 117 - strcpy(setup.phone,&(rcvmsg.msg_data.byte_array[4])); 118 - strcpy(setup.eazmsn, 119 - sc_adapter[card]->channel[rcvmsg.phy_link_no-1].dn); 120 setup.si1 = 7; 121 setup.si2 = 0; 122 setup.plan = 0; ··· 184 * Handle a GetMyNumber Rsp 185 */ 186 if (IS_CE_MESSAGE(rcvmsg,Call,0,GetMyNumber)){ 187 - strcpy(sc_adapter[card]->channel[rcvmsg.phy_link_no-1].dn,rcvmsg.msg_data.byte_array); 188 continue; 189 } 190
··· 112 } 113 else if(callid>=0x0000 && callid<=0x7FFF) 114 { 115 + int len; 116 + 117 pr_debug("%s: Got Incoming Call\n", 118 sc_adapter[card]->devicename); 119 + len = strlcpy(setup.phone, &(rcvmsg.msg_data.byte_array[4]), 120 + sizeof(setup.phone)); 121 + if (len >= sizeof(setup.phone)) 122 + continue; 123 + len = strlcpy(setup.eazmsn, 124 + sc_adapter[card]->channel[rcvmsg.phy_link_no - 1].dn, 125 + sizeof(setup.eazmsn)); 126 + if (len >= sizeof(setup.eazmsn)) 127 + continue; 128 setup.si1 = 7; 129 setup.si2 = 0; 130 setup.plan = 0; ··· 176 * Handle a GetMyNumber Rsp 177 */ 178 if (IS_CE_MESSAGE(rcvmsg,Call,0,GetMyNumber)){ 179 + strlcpy(sc_adapter[card]->channel[rcvmsg.phy_link_no - 1].dn, 180 + rcvmsg.msg_data.byte_array, 181 + sizeof(rcvmsg.msg_data.byte_array)); 182 continue; 183 } 184
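The hunk above replaces unbounded `strcpy()` calls with `strlcpy()` and drops any message whose number would not fit in the destination buffer. A sketch of the truncation-detection idiom it relies on (`my_strlcpy` is a hypothetical portable re-implementation of the kernel/BSD semantics, shown here only so the `len >= sizeof(...)` test is concrete):

```c
#include <assert.h>
#include <string.h>

/* Portable stand-in for strlcpy(): copy at most size-1 bytes,
 * always NUL-terminate when size > 0, and return strlen(src).
 * A return value >= size therefore signals truncation. */
size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = len >= size ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}
```

Unlike `strncpy()`, the return value lets the caller distinguish "fit" from "truncated" in one comparison, which is exactly what the interrupt handler uses to `continue` past malformed incoming-call messages.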
+5 -4
drivers/md/bitmap.c
··· 1000 page = bitmap->sb_page; 1001 offset = sizeof(bitmap_super_t); 1002 if (!file) 1003 - read_sb_page(bitmap->mddev, 1004 - bitmap->mddev->bitmap_info.offset, 1005 - page, 1006 - index, count); 1007 } else if (file) { 1008 page = read_page(file, index, bitmap, count); 1009 offset = 0;
··· 1000 page = bitmap->sb_page; 1001 offset = sizeof(bitmap_super_t); 1002 if (!file) 1003 + page = read_sb_page( 1004 + bitmap->mddev, 1005 + bitmap->mddev->bitmap_info.offset, 1006 + page, 1007 + index, count); 1008 } else if (file) { 1009 page = read_page(file, index, bitmap, count); 1010 offset = 0;
+3 -1
drivers/md/raid1.c
··· 1839 1840 /* take from bio_init */ 1841 bio->bi_next = NULL; 1842 bio->bi_flags |= 1 << BIO_UPTODATE; 1843 bio->bi_rw = READ; 1844 bio->bi_vcnt = 0; 1845 bio->bi_idx = 0; ··· 1914 !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) 1915 break; 1916 BUG_ON(sync_blocks < (PAGE_SIZE>>9)); 1917 - if (len > (sync_blocks<<9)) 1918 len = sync_blocks<<9; 1919 } 1920
··· 1839 1840 /* take from bio_init */ 1841 bio->bi_next = NULL; 1842 + bio->bi_flags &= ~(BIO_POOL_MASK-1); 1843 bio->bi_flags |= 1 << BIO_UPTODATE; 1844 + bio->bi_comp_cpu = -1; 1845 bio->bi_rw = READ; 1846 bio->bi_vcnt = 0; 1847 bio->bi_idx = 0; ··· 1912 !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) 1913 break; 1914 BUG_ON(sync_blocks < (PAGE_SIZE>>9)); 1915 + if ((len >> 9) > sync_blocks) 1916 len = sync_blocks<<9; 1917 } 1918
+8 -1
drivers/media/IR/ir-keytable.c
··· 319 * a keyup event might follow immediately after the keydown. 320 */ 321 spin_lock_irqsave(&ir->keylock, flags); 322 - if (time_is_after_eq_jiffies(ir->keyup_jiffies)) 323 ir_keyup(ir); 324 spin_unlock_irqrestore(&ir->keylock, flags); 325 } ··· 509 driver_name, rc_tab->name, 510 (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_IR_RAW) ? 511 " in raw mode" : ""); 512 513 return 0; 514
··· 319 * a keyup event might follow immediately after the keydown. 320 */ 321 spin_lock_irqsave(&ir->keylock, flags); 322 + if (time_is_before_eq_jiffies(ir->keyup_jiffies)) 323 ir_keyup(ir); 324 spin_unlock_irqrestore(&ir->keylock, flags); 325 } ··· 509 driver_name, rc_tab->name, 510 (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_IR_RAW) ? 511 " in raw mode" : ""); 512 + 513 + /* 514 + * Default delay of 250ms is too short for some protocols, especially 515 + * since the timeout is currently set to 250ms. Increase it to 500ms, 516 + * to avoid wrong repetition of the keycodes. 517 + */ 518 + input_dev->rep[REP_DELAY] = 500; 519 520 return 0; 521
+1 -1
drivers/media/IR/ir-lirc-codec.c
··· 267 features |= LIRC_CAN_SET_SEND_CARRIER; 268 269 if (ir_dev->props->s_tx_duty_cycle) 270 - features |= LIRC_CAN_SET_REC_DUTY_CYCLE; 271 } 272 273 if (ir_dev->props->s_rx_carrier_range)
··· 267 features |= LIRC_CAN_SET_SEND_CARRIER; 268 269 if (ir_dev->props->s_tx_duty_cycle) 270 + features |= LIRC_CAN_SET_SEND_DUTY_CYCLE; 271 } 272 273 if (ir_dev->props->s_rx_carrier_range)
+3 -1
drivers/media/IR/ir-raw-event.c
··· 279 "rc%u", (unsigned int)ir->devno); 280 281 if (IS_ERR(ir->raw->thread)) { 282 kfree(ir->raw); 283 ir->raw = NULL; 284 - return PTR_ERR(ir->raw->thread); 285 } 286 287 mutex_lock(&ir_raw_handler_lock);
··· 279 "rc%u", (unsigned int)ir->devno); 280 281 if (IS_ERR(ir->raw->thread)) { 282 + int ret = PTR_ERR(ir->raw->thread); 283 + 284 kfree(ir->raw); 285 ir->raw = NULL; 286 + return ret; 287 } 288 289 mutex_lock(&ir_raw_handler_lock);
+11 -6
drivers/media/IR/ir-sysfs.c
··· 67 char *tmp = buf; 68 int i; 69 70 - if (ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 71 enabled = ir_dev->rc_tab.ir_type; 72 allowed = ir_dev->props->allowed_protos; 73 - } else { 74 enabled = ir_dev->raw->enabled_protocols; 75 allowed = ir_raw_get_allowed_protocols(); 76 - } 77 78 IR_dprintk(1, "allowed - 0x%llx, enabled - 0x%llx\n", 79 (long long)allowed, ··· 122 int rc, i, count = 0; 123 unsigned long flags; 124 125 - if (ir_dev->props->driver_type == RC_DRIVER_SCANCODE) 126 type = ir_dev->rc_tab.ir_type; 127 - else 128 type = ir_dev->raw->enabled_protocols; 129 130 while ((tmp = strsep((char **) &data, " \n")) != NULL) { 131 if (!*tmp) ··· 190 } 191 } 192 193 - if (ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 194 spin_lock_irqsave(&ir_dev->rc_tab.lock, flags); 195 ir_dev->rc_tab.ir_type = type; 196 spin_unlock_irqrestore(&ir_dev->rc_tab.lock, flags);
··· 67 char *tmp = buf; 68 int i; 69 70 + if (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 71 enabled = ir_dev->rc_tab.ir_type; 72 allowed = ir_dev->props->allowed_protos; 73 + } else if (ir_dev->raw) { 74 enabled = ir_dev->raw->enabled_protocols; 75 allowed = ir_raw_get_allowed_protocols(); 76 + } else 77 + return sprintf(tmp, "[builtin]\n"); 78 79 IR_dprintk(1, "allowed - 0x%llx, enabled - 0x%llx\n", 80 (long long)allowed, ··· 121 int rc, i, count = 0; 122 unsigned long flags; 123 124 + if (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_SCANCODE) 125 type = ir_dev->rc_tab.ir_type; 126 + else if (ir_dev->raw) 127 type = ir_dev->raw->enabled_protocols; 128 + else { 129 + IR_dprintk(1, "Protocol switching not supported\n"); 130 + return -EINVAL; 131 + } 132 133 while ((tmp = strsep((char **) &data, " \n")) != NULL) { 134 if (!*tmp) ··· 185 } 186 } 187 188 + if (ir_dev->props && ir_dev->props->driver_type == RC_DRIVER_SCANCODE) { 189 spin_lock_irqsave(&ir_dev->rc_tab.lock, flags); 190 ir_dev->rc_tab.ir_type = type; 191 spin_unlock_irqrestore(&ir_dev->rc_tab.lock, flags);
+3
drivers/media/IR/keymaps/rc-rc6-mce.c
··· 19 20 { 0x800f0416, KEY_PLAY }, 21 { 0x800f0418, KEY_PAUSE }, 22 { 0x800f0419, KEY_STOP }, 23 { 0x800f0417, KEY_RECORD }, 24 ··· 38 { 0x800f0411, KEY_VOLUMEDOWN }, 39 { 0x800f0412, KEY_CHANNELUP }, 40 { 0x800f0413, KEY_CHANNELDOWN }, 41 42 { 0x800f0401, KEY_NUMERIC_1 }, 43 { 0x800f0402, KEY_NUMERIC_2 },
··· 19 20 { 0x800f0416, KEY_PLAY }, 21 { 0x800f0418, KEY_PAUSE }, 22 + { 0x800f046e, KEY_PLAYPAUSE }, 23 { 0x800f0419, KEY_STOP }, 24 { 0x800f0417, KEY_RECORD }, 25 ··· 37 { 0x800f0411, KEY_VOLUMEDOWN }, 38 { 0x800f0412, KEY_CHANNELUP }, 39 { 0x800f0413, KEY_CHANNELDOWN }, 40 + { 0x800f043a, KEY_BRIGHTNESSUP }, 41 + { 0x800f0480, KEY_BRIGHTNESSDOWN }, 42 43 { 0x800f0401, KEY_NUMERIC_1 }, 44 { 0x800f0402, KEY_NUMERIC_2 },
+4
drivers/media/IR/mceusb.c
··· 120 { USB_DEVICE(VENDOR_PHILIPS, 0x0613) }, 121 /* Philips eHome Infrared Transceiver */ 122 { USB_DEVICE(VENDOR_PHILIPS, 0x0815) }, 123 /* Realtek MCE IR Receiver */ 124 { USB_DEVICE(VENDOR_REALTEK, 0x0161) }, 125 /* SMK/Toshiba G83C0004D410 */
··· 120 { USB_DEVICE(VENDOR_PHILIPS, 0x0613) }, 121 /* Philips eHome Infrared Transceiver */ 122 { USB_DEVICE(VENDOR_PHILIPS, 0x0815) }, 123 + /* Philips/Spinel plus IR transceiver for ASUS */ 124 + { USB_DEVICE(VENDOR_PHILIPS, 0x206c) }, 125 + /* Philips/Spinel plus IR transceiver for ASUS */ 126 + { USB_DEVICE(VENDOR_PHILIPS, 0x2088) }, 127 /* Realtek MCE IR Receiver */ 128 { USB_DEVICE(VENDOR_REALTEK, 0x0161) }, 129 /* SMK/Toshiba G83C0004D410 */
-3
drivers/media/dvb/dvb-usb/dib0700_core.c
··· 673 else 674 dev->props.rc.core.bulk_mode = false; 675 676 - /* Need a higher delay, to avoid wrong repeat */ 677 - dev->rc_input_dev->rep[REP_DELAY] = 500; 678 - 679 dib0700_rc_setup(dev); 680 681 return 0;
··· 673 else 674 dev->props.rc.core.bulk_mode = false; 675 676 dib0700_rc_setup(dev); 677 678 return 0;
+54 -2
drivers/media/dvb/dvb-usb/dib0700_devices.c
··· 940 return adap->fe == NULL ? -ENODEV : 0; 941 } 942 943 /* DIB807x generic */ 944 static struct dibx000_agc_config dib807x_agc_config[2] = { 945 { ··· 1833 /* 60 */{ USB_DEVICE(USB_VID_TERRATEC, USB_PID_TERRATEC_CINERGY_T_XXS_2) }, 1834 { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_STK807XPVR) }, 1835 { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_STK807XP) }, 1836 - { USB_DEVICE(USB_VID_PIXELVIEW, USB_PID_PIXELVIEW_SBTVD) }, 1837 { USB_DEVICE(USB_VID_EVOLUTEPC, USB_PID_TVWAY_PLUS) }, 1838 /* 65 */{ USB_DEVICE(USB_VID_PINNACLE, USB_PID_PINNACLE_PCTV73ESE) }, 1839 { USB_DEVICE(USB_VID_PINNACLE, USB_PID_PINNACLE_PCTV282E) }, ··· 2458 .pid_filter_count = 32, 2459 .pid_filter = stk70x0p_pid_filter, 2460 .pid_filter_ctrl = stk70x0p_pid_filter_ctrl, 2461 - .frontend_attach = stk7070p_frontend_attach, 2462 .tuner_attach = dib7770p_tuner_attach, 2463 2464 DIB0700_DEFAULT_STREAMING_CONFIG(0x02),
··· 940 return adap->fe == NULL ? -ENODEV : 0; 941 } 942 943 + /* STK7770P */ 944 + static struct dib7000p_config dib7770p_dib7000p_config = { 945 + .output_mpeg2_in_188_bytes = 1, 946 + 947 + .agc_config_count = 1, 948 + .agc = &dib7070_agc_config, 949 + .bw = &dib7070_bw_config_12_mhz, 950 + .tuner_is_baseband = 1, 951 + .spur_protect = 1, 952 + 953 + .gpio_dir = DIB7000P_GPIO_DEFAULT_DIRECTIONS, 954 + .gpio_val = DIB7000P_GPIO_DEFAULT_VALUES, 955 + .gpio_pwm_pos = DIB7000P_GPIO_DEFAULT_PWM_POS, 956 + 957 + .hostbus_diversity = 1, 958 + .enable_current_mirror = 1, 959 + .disable_sample_and_hold = 0, 960 + }; 961 + 962 + static int stk7770p_frontend_attach(struct dvb_usb_adapter *adap) 963 + { 964 + struct usb_device_descriptor *p = &adap->dev->udev->descriptor; 965 + if (p->idVendor == cpu_to_le16(USB_VID_PINNACLE) && 966 + p->idProduct == cpu_to_le16(USB_PID_PINNACLE_PCTV72E)) 967 + dib0700_set_gpio(adap->dev, GPIO6, GPIO_OUT, 0); 968 + else 969 + dib0700_set_gpio(adap->dev, GPIO6, GPIO_OUT, 1); 970 + msleep(10); 971 + dib0700_set_gpio(adap->dev, GPIO9, GPIO_OUT, 1); 972 + dib0700_set_gpio(adap->dev, GPIO4, GPIO_OUT, 1); 973 + dib0700_set_gpio(adap->dev, GPIO7, GPIO_OUT, 1); 974 + dib0700_set_gpio(adap->dev, GPIO10, GPIO_OUT, 0); 975 + 976 + dib0700_ctrl_clock(adap->dev, 72, 1); 977 + 978 + msleep(10); 979 + dib0700_set_gpio(adap->dev, GPIO10, GPIO_OUT, 1); 980 + msleep(10); 981 + dib0700_set_gpio(adap->dev, GPIO0, GPIO_OUT, 1); 982 + 983 + if (dib7000p_i2c_enumeration(&adap->dev->i2c_adap, 1, 18, 984 + &dib7770p_dib7000p_config) != 0) { 985 + err("%s: dib7000p_i2c_enumeration failed. Cannot continue\n", 986 + __func__); 987 + return -ENODEV; 988 + } 989 + 990 + adap->fe = dvb_attach(dib7000p_attach, &adap->dev->i2c_adap, 0x80, 991 + &dib7770p_dib7000p_config); 992 + return adap->fe == NULL ? -ENODEV : 0; 993 + } 994 + 995 /* DIB807x generic */ 996 static struct dibx000_agc_config dib807x_agc_config[2] = { 997 { ··· 1781 /* 60 */{ USB_DEVICE(USB_VID_TERRATEC, USB_PID_TERRATEC_CINERGY_T_XXS_2) }, 1782 { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_STK807XPVR) }, 1783 { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_STK807XP) }, 1784 + { USB_DEVICE_VER(USB_VID_PIXELVIEW, USB_PID_PIXELVIEW_SBTVD, 0x000, 0x3f00) }, 1785 { USB_DEVICE(USB_VID_EVOLUTEPC, USB_PID_TVWAY_PLUS) }, 1786 /* 65 */{ USB_DEVICE(USB_VID_PINNACLE, USB_PID_PINNACLE_PCTV73ESE) }, 1787 { USB_DEVICE(USB_VID_PINNACLE, USB_PID_PINNACLE_PCTV282E) }, ··· 2406 .pid_filter_count = 32, 2407 .pid_filter = stk70x0p_pid_filter, 2408 .pid_filter_ctrl = stk70x0p_pid_filter_ctrl, 2409 + .frontend_attach = stk7770p_frontend_attach, 2410 .tuner_attach = dib7770p_tuner_attach, 2411 2412 DIB0700_DEFAULT_STREAMING_CONFIG(0x02),
+1 -3
drivers/media/dvb/dvb-usb/opera1.c
··· 483 } 484 } 485 kfree(p); 486 - if (fw) { 487 - release_firmware(fw); 488 - } 489 return ret; 490 } 491
··· 483 } 484 } 485 kfree(p); 486 + release_firmware(fw); 487 return ret; 488 } 489
+7 -1
drivers/media/dvb/frontends/dib7000p.c
··· 260 261 // dprintk( "908: %x, 909: %x\n", reg_908, reg_909); 262 263 dib7000p_write_word(state, 908, reg_908); 264 dib7000p_write_word(state, 909, reg_909); 265 } ··· 781 default: 782 case GUARD_INTERVAL_1_32: value *= 1; break; 783 } 784 - state->div_sync_wait = (value * 3) / 2 + 32; // add 50% SFN margin + compensate for one DVSY-fifo TODO 785 786 /* deactive the possibility of diversity reception if extended interleaver */ 787 state->div_force_off = !1 && ch->u.ofdm.transmission_mode != TRANSMISSION_MODE_8K;
··· 260 261 // dprintk( "908: %x, 909: %x\n", reg_908, reg_909); 262 263 + reg_909 |= (state->cfg.disable_sample_and_hold & 1) << 4; 264 + reg_908 |= (state->cfg.enable_current_mirror & 1) << 7; 265 + 266 dib7000p_write_word(state, 908, reg_908); 267 dib7000p_write_word(state, 909, reg_909); 268 } ··· 778 default: 779 case GUARD_INTERVAL_1_32: value *= 1; break; 780 } 781 + if (state->cfg.diversity_delay == 0) 782 + state->div_sync_wait = (value * 3) / 2 + 48; // add 50% SFN margin + compensate for one DVSY-fifo 783 + else 784 + state->div_sync_wait = (value * 3) / 2 + state->cfg.diversity_delay; // add 50% SFN margin + compensate for one DVSY-fifo 785 786 /* deactive the possibility of diversity reception if extended interleaver */ 787 state->div_force_off = !1 && ch->u.ofdm.transmission_mode != TRANSMISSION_MODE_8K;
+5
drivers/media/dvb/frontends/dib7000p.h
··· 33 int (*agc_control) (struct dvb_frontend *, u8 before); 34 35 u8 output_mode; 36 }; 37 38 #define DEFAULT_DIB7000P_I2C_ADDRESS 18
··· 33 int (*agc_control) (struct dvb_frontend *, u8 before); 34 35 u8 output_mode; 36 + u8 disable_sample_and_hold : 1; 37 + 38 + u8 enable_current_mirror : 1; 39 + u8 diversity_delay; 40 + 41 }; 42 43 #define DEFAULT_DIB7000P_I2C_ADDRESS 18
+13 -20
drivers/media/dvb/siano/smscoreapi.c
··· 1098 * 1099 * @return pointer to descriptor on success, NULL on error. 1100 */ 1101 - struct smscore_buffer_t *smscore_getbuffer(struct smscore_device_t *coredev) 1102 { 1103 struct smscore_buffer_t *cb = NULL; 1104 unsigned long flags; 1105 1106 - DEFINE_WAIT(wait); 1107 - 1108 spin_lock_irqsave(&coredev->bufferslock, flags); 1109 - 1110 - /* This function must return a valid buffer, since the buffer list is 1111 - * finite, we check that there is an available buffer, if not, we wait 1112 - * until such buffer become available. 1113 - */ 1114 - 1115 - prepare_to_wait(&coredev->buffer_mng_waitq, &wait, TASK_INTERRUPTIBLE); 1116 - if (list_empty(&coredev->buffers)) { 1117 - spin_unlock_irqrestore(&coredev->bufferslock, flags); 1118 - schedule(); 1119 - spin_lock_irqsave(&coredev->bufferslock, flags); 1120 } 1121 - 1122 - finish_wait(&coredev->buffer_mng_waitq, &wait); 1123 - 1124 - cb = (struct smscore_buffer_t *) coredev->buffers.next; 1125 - list_del(&cb->entry); 1126 - 1127 spin_unlock_irqrestore(&coredev->bufferslock, flags); 1128 1129 return cb; 1130 }
··· 1098 * 1099 * @return pointer to descriptor on success, NULL on error. 1100 */ 1101 + 1102 + struct smscore_buffer_t *get_entry(struct smscore_device_t *coredev) 1103 { 1104 struct smscore_buffer_t *cb = NULL; 1105 unsigned long flags; 1106 1107 spin_lock_irqsave(&coredev->bufferslock, flags); 1108 + if (!list_empty(&coredev->buffers)) { 1109 + cb = (struct smscore_buffer_t *) coredev->buffers.next; 1110 + list_del(&cb->entry); 1111 } 1112 spin_unlock_irqrestore(&coredev->bufferslock, flags); 1113 + return cb; 1114 + } 1115 + 1116 + struct smscore_buffer_t *smscore_getbuffer(struct smscore_device_t *coredev) 1117 + { 1118 + struct smscore_buffer_t *cb = NULL; 1119 + 1120 + wait_event(coredev->buffer_mng_waitq, (cb = get_entry(coredev))); 1121 1122 return cb; 1123 }
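The smscoreapi change above replaces an open-coded prepare_to_wait()/schedule() sequence with wait_event() plus a small helper that, under the spinlock, either pops a buffer off the list or returns NULL. The same shape — a non-blocking "try" helper re-checked under a lock, and a blocking wrapper that sleeps until the predicate holds — can be sketched in userspace C with pthreads. This is an illustrative analog, not the driver's code: `pool_get`, `pool_put`, and the fixed-size pool are invented names.

```c
#include <pthread.h>
#include <stddef.h>

#define POOL_SIZE 4

struct pool {
	pthread_mutex_t lock;
	pthread_cond_t nonempty;
	void *slots[POOL_SIZE];
	int count;
};

static void pool_init(struct pool *p)
{
	pthread_mutex_init(&p->lock, NULL);
	pthread_cond_init(&p->nonempty, NULL);
	p->count = 0;
}

/* Analog of get_entry(): pop one item if available, else NULL. */
static void *pool_try_get(struct pool *p)
{
	void *item = NULL;

	pthread_mutex_lock(&p->lock);
	if (p->count > 0)
		item = p->slots[--p->count];
	pthread_mutex_unlock(&p->lock);
	return item;
}

/* Analog of smscore_getbuffer(): block until an item is available. */
static void *pool_get(struct pool *p)
{
	void *item;

	pthread_mutex_lock(&p->lock);
	while (p->count == 0)	/* predicate re-checked after every wakeup */
		pthread_cond_wait(&p->nonempty, &p->lock);
	item = p->slots[--p->count];
	pthread_mutex_unlock(&p->lock);
	return item;
}

/* Analog of the producer side: return an item and wake one waiter. */
static void pool_put(struct pool *p, void *item)
{
	pthread_mutex_lock(&p->lock);
	p->slots[p->count++] = item;
	pthread_cond_signal(&p->nonempty);
	pthread_mutex_unlock(&p->lock);
}
```

The key property the patch restores is that the predicate is evaluated again after each wakeup, so a spurious or raced wakeup cannot hand out a buffer that is no longer there.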
+1 -1
drivers/media/radio/si470x/radio-si470x-i2c.c
··· 395 radio->registers[POWERCFG] = POWERCFG_ENABLE; 396 if (si470x_set_register(radio, POWERCFG) < 0) { 397 retval = -EIO; 398 - goto err_all; 399 } 400 msleep(110); 401
··· 395 radio->registers[POWERCFG] = POWERCFG_ENABLE; 396 if (si470x_set_register(radio, POWERCFG) < 0) { 397 retval = -EIO; 398 + goto err_video; 399 } 400 msleep(110); 401
+1
drivers/media/video/cx231xx/Makefile
··· 11 EXTRA_CFLAGS += -Idrivers/media/common/tuners 12 EXTRA_CFLAGS += -Idrivers/media/dvb/dvb-core 13 EXTRA_CFLAGS += -Idrivers/media/dvb/frontends 14
··· 11 EXTRA_CFLAGS += -Idrivers/media/common/tuners 12 EXTRA_CFLAGS += -Idrivers/media/dvb/dvb-core 13 EXTRA_CFLAGS += -Idrivers/media/dvb/frontends 14 + EXTRA_CFLAGS += -Idrivers/media/dvb/dvb-usb 15
+11 -6
drivers/media/video/cx231xx/cx231xx-cards.c
··· 32 #include <media/v4l2-chip-ident.h> 33 34 #include <media/cx25840.h> 35 #include "xc5000.h" 36 37 #include "cx231xx.h" ··· 176 .driver_info = CX231XX_BOARD_CNXT_RDE_250}, 177 {USB_DEVICE(0x0572, 0x58A1), 178 .driver_info = CX231XX_BOARD_CNXT_RDU_250}, 179 {}, 180 }; 181 ··· 229 dev->board.name, dev->model); 230 231 /* set the direction for GPIO pins */ 232 - cx231xx_set_gpio_direction(dev, dev->board.tuner_gpio->bit, 1); 233 - cx231xx_set_gpio_value(dev, dev->board.tuner_gpio->bit, 1); 234 - cx231xx_set_gpio_direction(dev, dev->board.tuner_sif_gpio, 1); 235 236 - /* request some modules if any required */ 237 238 - /* reset the Tuner */ 239 - cx231xx_gpio_set(dev, dev->board.tuner_gpio); 240 241 /* set the mode to Analog mode initially */ 242 cx231xx_set_mode(dev, CX231XX_ANALOG_MODE);
··· 32 #include <media/v4l2-chip-ident.h> 33 34 #include <media/cx25840.h> 35 + #include "dvb-usb-ids.h" 36 #include "xc5000.h" 37 38 #include "cx231xx.h" ··· 175 .driver_info = CX231XX_BOARD_CNXT_RDE_250}, 176 {USB_DEVICE(0x0572, 0x58A1), 177 .driver_info = CX231XX_BOARD_CNXT_RDU_250}, 178 + {USB_DEVICE_VER(USB_VID_PIXELVIEW, USB_PID_PIXELVIEW_SBTVD, 0x4000,0x4fff), 179 + .driver_info = CX231XX_BOARD_UNKNOWN}, 180 {}, 181 }; 182 ··· 226 dev->board.name, dev->model); 227 228 /* set the direction for GPIO pins */ 229 + if (dev->board.tuner_gpio) { 230 + cx231xx_set_gpio_direction(dev, dev->board.tuner_gpio->bit, 1); 231 + cx231xx_set_gpio_value(dev, dev->board.tuner_gpio->bit, 1); 232 + cx231xx_set_gpio_direction(dev, dev->board.tuner_sif_gpio, 1); 233 234 + /* request some modules if any required */ 235 236 + /* reset the Tuner */ 237 + cx231xx_gpio_set(dev, dev->board.tuner_gpio); 238 + } 239 240 /* set the mode to Analog mode initially */ 241 cx231xx_set_mode(dev, CX231XX_ANALOG_MODE);
+1 -1
drivers/media/video/cx25840/cx25840-core.c
··· 1996 1997 state->volume = v4l2_ctrl_new_std(&state->hdl, 1998 &cx25840_audio_ctrl_ops, V4L2_CID_AUDIO_VOLUME, 1999 - 0, 65335, 65535 / 100, default_volume); 2000 state->mute = v4l2_ctrl_new_std(&state->hdl, 2001 &cx25840_audio_ctrl_ops, V4L2_CID_AUDIO_MUTE, 2002 0, 1, 1, 0);
··· 1996 1997 state->volume = v4l2_ctrl_new_std(&state->hdl, 1998 &cx25840_audio_ctrl_ops, V4L2_CID_AUDIO_VOLUME, 1999 + 0, 65535, 65535 / 100, default_volume); 2000 state->mute = v4l2_ctrl_new_std(&state->hdl, 2001 &cx25840_audio_ctrl_ops, V4L2_CID_AUDIO_MUTE, 2002 0, 1, 1, 0);
+1 -1
drivers/media/video/cx88/Kconfig
··· 17 18 config VIDEO_CX88_ALSA 19 tristate "Conexant 2388x DMA audio support" 20 - depends on VIDEO_CX88 && SND && EXPERIMENTAL 21 select SND_PCM 22 ---help--- 23 This is a video4linux driver for direct (DMA) audio on
··· 17 18 config VIDEO_CX88_ALSA 19 tristate "Conexant 2388x DMA audio support" 20 + depends on VIDEO_CX88 && SND 21 select SND_PCM 22 ---help--- 23 This is a video4linux driver for direct (DMA) audio on
+1
drivers/media/video/gspca/gspca.c
··· 223 usb_rcvintpipe(dev, ep->bEndpointAddress), 224 buffer, buffer_len, 225 int_irq, (void *)gspca_dev, interval); 226 gspca_dev->int_urb = urb; 227 ret = usb_submit_urb(urb, GFP_KERNEL); 228 if (ret < 0) {
··· 223 usb_rcvintpipe(dev, ep->bEndpointAddress), 224 buffer, buffer_len, 225 int_irq, (void *)gspca_dev, interval); 226 + urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 227 gspca_dev->int_urb = urb; 228 ret = usb_submit_urb(urb, GFP_KERNEL); 229 if (ret < 0) {
+1 -2
drivers/media/video/gspca/sn9c20x.c
··· 2357 (data[33] << 10); 2358 avg_lum >>= 9; 2359 atomic_set(&sd->avg_lum, avg_lum); 2360 - gspca_frame_add(gspca_dev, LAST_PACKET, 2361 - data, len); 2362 return; 2363 } 2364 if (gspca_dev->last_packet_type == LAST_PACKET) {
··· 2357 (data[33] << 10); 2358 avg_lum >>= 9; 2359 atomic_set(&sd->avg_lum, avg_lum); 2360 + gspca_frame_add(gspca_dev, LAST_PACKET, NULL, 0); 2361 return; 2362 } 2363 if (gspca_dev->last_packet_type == LAST_PACKET) {
+2
drivers/media/video/ivtv/ivtvfb.c
··· 466 struct fb_vblank vblank; 467 u32 trace; 468 469 vblank.flags = FB_VBLANK_HAVE_COUNT |FB_VBLANK_HAVE_VCOUNT | 470 FB_VBLANK_HAVE_VSYNC; 471 trace = read_reg(IVTV_REG_DEC_LINE_FIELD) >> 16;
··· 466 struct fb_vblank vblank; 467 u32 trace; 468 469 + memset(&vblank, 0, sizeof(struct fb_vblank)); 470 + 471 vblank.flags = FB_VBLANK_HAVE_COUNT |FB_VBLANK_HAVE_VCOUNT | 472 FB_VBLANK_HAVE_VSYNC; 473 trace = read_reg(IVTV_REG_DEC_LINE_FIELD) >> 16;
+2 -1
drivers/media/video/mem2mem_testdev.c
··· 239 return -EFAULT; 240 } 241 242 - if (in_buf->vb.size < out_buf->vb.size) { 243 v4l2_err(&dev->v4l2_dev, "Output buffer is too small\n"); 244 return -EINVAL; 245 } ··· 1014 v4l2_m2m_release(dev->m2m_dev); 1015 del_timer_sync(&dev->timer); 1016 video_unregister_device(dev->vfd); 1017 v4l2_device_unregister(&dev->v4l2_dev); 1018 kfree(dev); 1019
··· 239 return -EFAULT; 240 } 241 242 + if (in_buf->vb.size > out_buf->vb.size) { 243 v4l2_err(&dev->v4l2_dev, "Output buffer is too small\n"); 244 return -EINVAL; 245 } ··· 1014 v4l2_m2m_release(dev->m2m_dev); 1015 del_timer_sync(&dev->timer); 1016 video_unregister_device(dev->vfd); 1017 + video_device_release(dev->vfd); 1018 v4l2_device_unregister(&dev->v4l2_dev); 1019 kfree(dev); 1020
+7 -1
drivers/media/video/mt9m111.c
··· 447 dev_dbg(&client->dev, "%s left=%d, top=%d, width=%d, height=%d\n", 448 __func__, rect.left, rect.top, rect.width, rect.height); 449 450 ret = mt9m111_make_rect(client, &rect); 451 if (!ret) 452 mt9m111->rect = rect; ··· 469 470 static int mt9m111_cropcap(struct v4l2_subdev *sd, struct v4l2_cropcap *a) 471 { 472 a->bounds.left = MT9M111_MIN_DARK_COLS; 473 a->bounds.top = MT9M111_MIN_DARK_ROWS; 474 a->bounds.width = MT9M111_MAX_WIDTH; 475 a->bounds.height = MT9M111_MAX_HEIGHT; 476 a->defrect = a->bounds; 477 - a->type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 478 a->pixelaspect.numerator = 1; 479 a->pixelaspect.denominator = 1; 480 ··· 492 mf->width = mt9m111->rect.width; 493 mf->height = mt9m111->rect.height; 494 mf->code = mt9m111->fmt->code; 495 mf->field = V4L2_FIELD_NONE; 496 497 return 0;
··· 447 dev_dbg(&client->dev, "%s left=%d, top=%d, width=%d, height=%d\n", 448 __func__, rect.left, rect.top, rect.width, rect.height); 449 450 + if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 451 + return -EINVAL; 452 + 453 ret = mt9m111_make_rect(client, &rect); 454 if (!ret) 455 mt9m111->rect = rect; ··· 466 467 static int mt9m111_cropcap(struct v4l2_subdev *sd, struct v4l2_cropcap *a) 468 { 469 + if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) 470 + return -EINVAL; 471 + 472 a->bounds.left = MT9M111_MIN_DARK_COLS; 473 a->bounds.top = MT9M111_MIN_DARK_ROWS; 474 a->bounds.width = MT9M111_MAX_WIDTH; 475 a->bounds.height = MT9M111_MAX_HEIGHT; 476 a->defrect = a->bounds; 477 a->pixelaspect.numerator = 1; 478 a->pixelaspect.denominator = 1; 479 ··· 487 mf->width = mt9m111->rect.width; 488 mf->height = mt9m111->rect.height; 489 mf->code = mt9m111->fmt->code; 490 + mf->colorspace = mt9m111->fmt->colorspace; 491 mf->field = V4L2_FIELD_NONE; 492 493 return 0;
-3
drivers/media/video/mt9v022.c
··· 402 if (mt9v022->model != V4L2_IDENT_MT9V022IX7ATC) 403 return -EINVAL; 404 break; 405 - case 0: 406 - /* No format change, only geometry */ 407 - break; 408 default: 409 return -EINVAL; 410 }
··· 402 if (mt9v022->model != V4L2_IDENT_MT9V022IX7ATC) 403 return -EINVAL; 404 break; 405 default: 406 return -EINVAL; 407 }
+4
drivers/media/video/mx2_camera.c
··· 378 379 spin_lock_irqsave(&pcdev->lock, flags); 380 381 vb = &(*fb_active)->vb; 382 dev_dbg(pcdev->dev, "%s (vb=0x%p) 0x%08lx %d\n", __func__, 383 vb, vb->baddr, vb->bsize); ··· 405 406 *fb_active = buf; 407 408 spin_unlock_irqrestore(&pcdev->lock, flags); 409 } 410
··· 378 379 spin_lock_irqsave(&pcdev->lock, flags); 380 381 + if (*fb_active == NULL) 382 + goto out; 383 + 384 vb = &(*fb_active)->vb; 385 dev_dbg(pcdev->dev, "%s (vb=0x%p) 0x%08lx %d\n", __func__, 386 vb, vb->baddr, vb->bsize); ··· 402 403 *fb_active = buf; 404 405 + out: 406 spin_unlock_irqrestore(&pcdev->lock, flags); 407 } 408
+3 -3
drivers/media/video/pvrusb2/pvrusb2-ctrl.c
··· 513 if (ret >= 0) { 514 ret = pvr2_ctrl_range_check(cptr,*valptr); 515 } 516 - if (maskptr) *maskptr = ~0; 517 } else if (cptr->info->type == pvr2_ctl_bool) { 518 ret = parse_token(ptr,len,valptr,boolNames, 519 ARRAY_SIZE(boolNames)); ··· 522 } else if (ret == 0) { 523 *valptr = (*valptr & 1) ? !0 : 0; 524 } 525 - if (maskptr) *maskptr = 1; 526 } else if (cptr->info->type == pvr2_ctl_enum) { 527 ret = parse_token( 528 ptr,len,valptr, ··· 531 if (ret >= 0) { 532 ret = pvr2_ctrl_range_check(cptr,*valptr); 533 } 534 - if (maskptr) *maskptr = ~0; 535 } else if (cptr->info->type == pvr2_ctl_bitmask) { 536 ret = parse_tlist( 537 ptr,len,maskptr,valptr,
··· 513 if (ret >= 0) { 514 ret = pvr2_ctrl_range_check(cptr,*valptr); 515 } 516 + *maskptr = ~0; 517 } else if (cptr->info->type == pvr2_ctl_bool) { 518 ret = parse_token(ptr,len,valptr,boolNames, 519 ARRAY_SIZE(boolNames)); ··· 522 } else if (ret == 0) { 523 *valptr = (*valptr & 1) ? !0 : 0; 524 } 525 + *maskptr = 1; 526 } else if (cptr->info->type == pvr2_ctl_enum) { 527 ret = parse_token( 528 ptr,len,valptr, ··· 531 if (ret >= 0) { 532 ret = pvr2_ctrl_range_check(cptr,*valptr); 533 } 534 + *maskptr = ~0; 535 } else if (cptr->info->type == pvr2_ctl_bitmask) { 536 ret = parse_tlist( 537 ptr,len,maskptr,valptr,
+42 -52
drivers/media/video/s5p-fimc/fimc-core.c
··· 393 dbg("ctx->out_order_1p= %d", ctx->out_order_1p); 394 } 395 396 /** 397 * fimc_prepare_config - check dimensions, operation and color mode 398 * and pre-calculate offset and the scaling coefficients. ··· 437 { 438 struct fimc_frame *s_frame, *d_frame; 439 struct fimc_vid_buffer *buf = NULL; 440 - struct samsung_fimc_variant *variant = ctx->fimc_dev->variant; 441 int ret = 0; 442 443 s_frame = &ctx->s_frame; ··· 449 swap(d_frame->width, d_frame->height); 450 } 451 452 - /* Prepare the output offset ratios for scaler. */ 453 - d_frame->dma_offset.y_h = d_frame->offs_h; 454 - if (!variant->pix_hoff) 455 - d_frame->dma_offset.y_h *= (d_frame->fmt->depth >> 3); 456 457 - d_frame->dma_offset.y_v = d_frame->offs_v; 458 - 459 - d_frame->dma_offset.cb_h = d_frame->offs_h; 460 - d_frame->dma_offset.cb_v = d_frame->offs_v; 461 - 462 - d_frame->dma_offset.cr_h = d_frame->offs_h; 463 - d_frame->dma_offset.cr_v = d_frame->offs_v; 464 - 465 - if (!variant->pix_hoff && d_frame->fmt->planes_cnt == 3) { 466 - d_frame->dma_offset.cb_h >>= 1; 467 - d_frame->dma_offset.cb_v >>= 1; 468 - d_frame->dma_offset.cr_h >>= 1; 469 - d_frame->dma_offset.cr_v >>= 1; 470 - } 471 - 472 - dbg("out offset: color= %d, y_h= %d, y_v= %d", 473 - d_frame->fmt->color, 474 - d_frame->dma_offset.y_h, d_frame->dma_offset.y_v); 475 - 476 - /* Prepare the input offset ratios for scaler. */ 477 - s_frame->dma_offset.y_h = s_frame->offs_h; 478 - if (!variant->pix_hoff) 479 - s_frame->dma_offset.y_h *= (s_frame->fmt->depth >> 3); 480 - s_frame->dma_offset.y_v = s_frame->offs_v; 481 - 482 - s_frame->dma_offset.cb_h = s_frame->offs_h; 483 - s_frame->dma_offset.cb_v = s_frame->offs_v; 484 - 485 - s_frame->dma_offset.cr_h = s_frame->offs_h; 486 - s_frame->dma_offset.cr_v = s_frame->offs_v; 487 - 488 - if (!variant->pix_hoff && s_frame->fmt->planes_cnt == 3) { 489 - s_frame->dma_offset.cb_h >>= 1; 490 - s_frame->dma_offset.cb_v >>= 1; 491 - s_frame->dma_offset.cr_h >>= 1; 492 - s_frame->dma_offset.cr_v >>= 1; 493 - } 494 - 495 - dbg("in offset: color= %d, y_h= %d, y_v= %d", 496 - s_frame->fmt->color, s_frame->dma_offset.y_h, 497 - s_frame->dma_offset.y_v); 498 - 499 - fimc_set_yuv_order(ctx); 500 - 501 - /* Check against the scaler ratio. */ 502 if (s_frame->height > (SCALER_MAX_VRATIO * d_frame->height) || 503 s_frame->width > (SCALER_MAX_HRATIO * d_frame->width)) { 504 err("out of scaler range"); 505 return -EINVAL; 506 } 507 } 508 509 /* Input DMA mode is not allowed when the scaler is disabled. ··· 807 } else { 808 v4l2_err(&ctx->fimc_dev->m2m.v4l2_dev, 809 "Wrong buffer/video queue type (%d)\n", f->type); 810 - return -EINVAL; 811 } 812 813 pix = &f->fmt.pix; ··· 1400 } 1401 1402 fimc->work_queue = create_workqueue(dev_name(&fimc->pdev->dev)); 1403 - if (!fimc->work_queue) 1404 goto err_irq; 1405 1406 ret = fimc_register_m2m_device(fimc); 1407 if (ret) ··· 1480 }; 1481 1482 static struct samsung_fimc_variant fimc01_variant_s5pv210 = { 1483 .has_inp_rot = 1, 1484 .has_out_rot = 1, 1485 .min_inp_pixsize = 16, ··· 1495 }; 1496 1497 static struct samsung_fimc_variant fimc2_variant_s5pv210 = { 1498 .min_inp_pixsize = 16, 1499 .min_out_pixsize = 32, 1500
··· 393 dbg("ctx->out_order_1p= %d", ctx->out_order_1p); 394 } 395 396 + static void fimc_prepare_dma_offset(struct fimc_ctx *ctx, struct fimc_frame *f) 397 + { 398 + struct samsung_fimc_variant *variant = ctx->fimc_dev->variant; 399 + 400 + f->dma_offset.y_h = f->offs_h; 401 + if (!variant->pix_hoff) 402 + f->dma_offset.y_h *= (f->fmt->depth >> 3); 403 + 404 + f->dma_offset.y_v = f->offs_v; 405 + 406 + f->dma_offset.cb_h = f->offs_h; 407 + f->dma_offset.cb_v = f->offs_v; 408 + 409 + f->dma_offset.cr_h = f->offs_h; 410 + f->dma_offset.cr_v = f->offs_v; 411 + 412 + if (!variant->pix_hoff) { 413 + if (f->fmt->planes_cnt == 3) { 414 + f->dma_offset.cb_h >>= 1; 415 + f->dma_offset.cr_h >>= 1; 416 + } 417 + if (f->fmt->color == S5P_FIMC_YCBCR420) { 418 + f->dma_offset.cb_v >>= 1; 419 + f->dma_offset.cr_v >>= 1; 420 + } 421 + } 422 + 423 + dbg("in_offset: color= %d, y_h= %d, y_v= %d", 424 + f->fmt->color, f->dma_offset.y_h, f->dma_offset.y_v); 425 + } 426 + 427 /** 428 * fimc_prepare_config - check dimensions, operation and color mode 429 * and pre-calculate offset and the scaling coefficients. ··· 406 { 407 struct fimc_frame *s_frame, *d_frame; 408 struct fimc_vid_buffer *buf = NULL; 409 int ret = 0; 410 411 s_frame = &ctx->s_frame; ··· 419 swap(d_frame->width, d_frame->height); 420 } 421 422 + /* Prepare the DMA offset ratios for scaler. */ 423 + fimc_prepare_dma_offset(ctx, &ctx->s_frame); 424 + fimc_prepare_dma_offset(ctx, &ctx->d_frame); 425 426 if (s_frame->height > (SCALER_MAX_VRATIO * d_frame->height) || 427 s_frame->width > (SCALER_MAX_HRATIO * d_frame->width)) { 428 err("out of scaler range"); 429 return -EINVAL; 430 } 431 + fimc_set_yuv_order(ctx); 432 } 433 434 /* Input DMA mode is not allowed when the scaler is disabled. */ ··· 822 } else { 823 v4l2_err(&ctx->fimc_dev->m2m.v4l2_dev, 824 "Wrong buffer/video queue type (%d)\n", f->type); 825 + ret = -EINVAL; 826 + goto s_fmt_out; 827 } 828 829 pix = &f->fmt.pix; ··· 1414 } 1415 1416 fimc->work_queue = create_workqueue(dev_name(&fimc->pdev->dev)); 1417 + if (!fimc->work_queue) { 1418 + ret = -ENOMEM; 1419 goto err_irq; 1420 + } 1421 1422 ret = fimc_register_m2m_device(fimc); 1423 if (ret) ··· 1492 }; 1493 1494 static struct samsung_fimc_variant fimc01_variant_s5pv210 = { 1495 + .pix_hoff = 1, 1496 .has_inp_rot = 1, 1497 .has_out_rot = 1, 1498 .min_inp_pixsize = 16, ··· 1506 }; 1507 1508 static struct samsung_fimc_variant fimc2_variant_s5pv210 = { 1509 + .pix_hoff = 1, 1510 .min_inp_pixsize = 16, 1511 .min_out_pixsize = 32, 1512
+5 -5
drivers/media/video/saa7134/saa7134-cards.c
··· 4323 }, 4324 [SAA7134_BOARD_BEHOLD_COLUMBUS_TVFM] = { 4325 /* Beholder Intl. Ltd. 2008 */ 4326 - /*Dmitry Belimov <d.belimov@gmail.com> */ 4327 - .name = "Beholder BeholdTV Columbus TVFM", 4328 .audio_clock = 0x00187de7, 4329 .tuner_type = TUNER_ALPS_TSBE5_PAL, 4330 - .radio_type = UNSET, 4331 - .tuner_addr = ADDR_UNSET, 4332 - .radio_addr = ADDR_UNSET, 4333 .tda9887_conf = TDA9887_PRESENT, 4334 .gpiomask = 0x000A8004, 4335 .inputs = {{
··· 4323 }, 4324 [SAA7134_BOARD_BEHOLD_COLUMBUS_TVFM] = { 4325 /* Beholder Intl. Ltd. 2008 */ 4326 + /* Dmitry Belimov <d.belimov@gmail.com> */ 4327 + .name = "Beholder BeholdTV Columbus TV/FM", 4328 .audio_clock = 0x00187de7, 4329 .tuner_type = TUNER_ALPS_TSBE5_PAL, 4330 + .radio_type = TUNER_TEA5767, 4331 + .tuner_addr = 0xc2 >> 1, 4332 + .radio_addr = 0xc0 >> 1, 4333 .tda9887_conf = TDA9887_PRESENT, 4334 .gpiomask = 0x000A8004, 4335 .inputs = {{
+3 -2
drivers/media/video/saa7164/saa7164-buffer.c
··· 136 int saa7164_buffer_dealloc(struct saa7164_tsport *port, 137 struct saa7164_buffer *buf) 138 { 139 - struct saa7164_dev *dev = port->dev; 140 141 - if ((buf == 0) || (port == 0)) 142 return SAA_ERR_BAD_PARAMETER; 143 144 dprintk(DBGLVL_BUF, "%s() deallocating buffer @ 0x%p\n", __func__, buf); 145
··· 136 int saa7164_buffer_dealloc(struct saa7164_tsport *port, 137 struct saa7164_buffer *buf) 138 { 139 + struct saa7164_dev *dev; 140 141 + if (!buf || !port) 142 return SAA_ERR_BAD_PARAMETER; 143 + dev = port->dev; 144 145 dprintk(DBGLVL_BUF, "%s() deallocating buffer @ 0x%p\n", __func__, buf); 146
+24
drivers/media/video/uvc/uvc_driver.c
··· 486 max(frame->dwFrameInterval[0], 487 frame->dwDefaultFrameInterval)); 488 489 uvc_trace(UVC_TRACE_DESCR, "- %ux%u (%u.%u fps)\n", 490 frame->wWidth, frame->wHeight, 491 10000000/frame->dwDefaultFrameInterval, ··· 2032 .bInterfaceClass = USB_CLASS_VENDOR_SPEC, 2033 .bInterfaceSubClass = 1, 2034 .bInterfaceProtocol = 0 }, 2035 /* Alcor Micro AU3820 (Future Boy PC USB Webcam) */ 2036 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2037 | USB_DEVICE_ID_MATCH_INT_INFO, ··· 2106 .bInterfaceProtocol = 0, 2107 .driver_info = UVC_QUIRK_PROBE_MINMAX 2108 | UVC_QUIRK_PROBE_DEF }, 2109 /* Syntek (HP Spartan) */ 2110 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2111 | USB_DEVICE_ID_MATCH_INT_INFO,
··· 486 max(frame->dwFrameInterval[0], 487 frame->dwDefaultFrameInterval)); 488 489 + if (dev->quirks & UVC_QUIRK_RESTRICT_FRAME_RATE) { 490 + frame->bFrameIntervalType = 1; 491 + frame->dwFrameInterval[0] = 492 + frame->dwDefaultFrameInterval; 493 + } 494 + 495 uvc_trace(UVC_TRACE_DESCR, "- %ux%u (%u.%u fps)\n", 496 frame->wWidth, frame->wHeight, 497 10000000/frame->dwDefaultFrameInterval, ··· 2026 .bInterfaceClass = USB_CLASS_VENDOR_SPEC, 2027 .bInterfaceSubClass = 1, 2028 .bInterfaceProtocol = 0 }, 2029 + /* Chicony CNF7129 (Asus EEE 100HE) */ 2030 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2031 + | USB_DEVICE_ID_MATCH_INT_INFO, 2032 + .idVendor = 0x04f2, 2033 + .idProduct = 0xb071, 2034 + .bInterfaceClass = USB_CLASS_VIDEO, 2035 + .bInterfaceSubClass = 1, 2036 + .bInterfaceProtocol = 0, 2037 + .driver_info = UVC_QUIRK_RESTRICT_FRAME_RATE }, 2038 /* Alcor Micro AU3820 (Future Boy PC USB Webcam) */ 2039 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2040 | USB_DEVICE_ID_MATCH_INT_INFO, ··· 2091 .bInterfaceProtocol = 0, 2092 .driver_info = UVC_QUIRK_PROBE_MINMAX 2093 | UVC_QUIRK_PROBE_DEF }, 2094 + /* IMC Networks (Medion Akoya) */ 2095 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2096 + | USB_DEVICE_ID_MATCH_INT_INFO, 2097 + .idVendor = 0x13d3, 2098 + .idProduct = 0x5103, 2099 + .bInterfaceClass = USB_CLASS_VIDEO, 2100 + .bInterfaceSubClass = 1, 2101 + .bInterfaceProtocol = 0, 2102 + .driver_info = UVC_QUIRK_STREAM_NO_FID }, 2103 /* Syntek (HP Spartan) */ 2104 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2105 | USB_DEVICE_ID_MATCH_INT_INFO,
+1
drivers/media/video/uvc/uvcvideo.h
··· 182 #define UVC_QUIRK_IGNORE_SELECTOR_UNIT 0x00000020 183 #define UVC_QUIRK_FIX_BANDWIDTH 0x00000080 184 #define UVC_QUIRK_PROBE_DEF 0x00000100 185 186 /* Format flags */ 187 #define UVC_FMT_FLAG_COMPRESSED 0x00000001
··· 182 #define UVC_QUIRK_IGNORE_SELECTOR_UNIT 0x00000020 183 #define UVC_QUIRK_FIX_BANDWIDTH 0x00000080 184 #define UVC_QUIRK_PROBE_DEF 0x00000100 185 + #define UVC_QUIRK_RESTRICT_FRAME_RATE 0x00000200 186 187 /* Format flags */ 188 #define UVC_FMT_FLAG_COMPRESSED 0x00000001
+4 -2
drivers/media/video/videobuf-dma-contig.c
··· 393 } 394 395 /* read() method */ 396 - dma_free_coherent(q->dev, mem->size, mem->vaddr, mem->dma_handle); 397 - mem->vaddr = NULL; 398 } 399 EXPORT_SYMBOL_GPL(videobuf_dma_contig_free); 400
··· 393 } 394 395 /* read() method */ 396 + if (mem->vaddr) { 397 + dma_free_coherent(q->dev, mem->size, mem->vaddr, mem->dma_handle); 398 + mem->vaddr = NULL; 399 + } 400 } 401 EXPORT_SYMBOL_GPL(videobuf_dma_contig_free); 402
+7 -4
drivers/media/video/videobuf-dma-sg.c
··· 94 * must free the memory. 95 */ 96 static struct scatterlist *videobuf_pages_to_sg(struct page **pages, 97 - int nr_pages, int offset) 98 { 99 struct scatterlist *sglist; 100 int i; ··· 110 /* DMA to highmem pages might not work */ 111 goto highmem; 112 sg_set_page(&sglist[0], pages[0], PAGE_SIZE - offset, offset); 113 for (i = 1; i < nr_pages; i++) { 114 if (NULL == pages[i]) 115 goto nopage; 116 if (PageHighMem(pages[i])) 117 goto highmem; 118 - sg_set_page(&sglist[i], pages[i], PAGE_SIZE, 0); 119 } 120 return sglist; 121 ··· 172 173 first = (data & PAGE_MASK) >> PAGE_SHIFT; 174 last = ((data+size-1) & PAGE_MASK) >> PAGE_SHIFT; 175 - dma->offset = data & ~PAGE_MASK; 176 dma->nr_pages = last-first+1; 177 dma->pages = kmalloc(dma->nr_pages * sizeof(struct page *), GFP_KERNEL); 178 if (NULL == dma->pages) ··· 255 256 if (dma->pages) { 257 dma->sglist = videobuf_pages_to_sg(dma->pages, dma->nr_pages, 258 - dma->offset); 259 } 260 if (dma->vaddr) { 261 dma->sglist = videobuf_vmalloc_to_sg(dma->vaddr,
··· 94 * must free the memory. 95 */ 96 static struct scatterlist *videobuf_pages_to_sg(struct page **pages, 97 + int nr_pages, int offset, size_t size) 98 { 99 struct scatterlist *sglist; 100 int i; ··· 110 /* DMA to highmem pages might not work */ 111 goto highmem; 112 sg_set_page(&sglist[0], pages[0], PAGE_SIZE - offset, offset); 113 + size -= PAGE_SIZE - offset; 114 for (i = 1; i < nr_pages; i++) { 115 if (NULL == pages[i]) 116 goto nopage; 117 if (PageHighMem(pages[i])) 118 goto highmem; 119 + sg_set_page(&sglist[i], pages[i], min(PAGE_SIZE, size), 0); 120 + size -= min(PAGE_SIZE, size); 121 } 122 return sglist; 123 ··· 170 171 first = (data & PAGE_MASK) >> PAGE_SHIFT; 172 last = ((data+size-1) & PAGE_MASK) >> PAGE_SHIFT; 173 + dma->offset = data & ~PAGE_MASK; 174 + dma->size = size; 175 dma->nr_pages = last-first+1; 176 dma->pages = kmalloc(dma->nr_pages * sizeof(struct page *), GFP_KERNEL); 177 if (NULL == dma->pages) ··· 252 253 if (dma->pages) { 254 dma->sglist = videobuf_pages_to_sg(dma->pages, dma->nr_pages, 255 + dma->offset, dma->size); 256 } 257 if (dma->vaddr) { 258 dma->sglist = videobuf_vmalloc_to_sg(dma->vaddr,
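The videobuf-dma-sg fix above threads the mapped buffer's byte size through to videobuf_pages_to_sg(), so the first scatterlist entry covers `PAGE_SIZE - offset` bytes and each later entry is clamped to the bytes that actually remain, instead of always claiming a full page. The length arithmetic can be checked in isolation; `sg_entry_len` below is a standalone illustrative helper mirroring that calculation, not a kernel API.

```c
#include <stddef.h>

#define PAGE_SZ ((size_t)4096)	/* illustrative; the kernel uses PAGE_SIZE */

static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

/*
 * Length of scatterlist entry i for a user buffer that starts at
 * 'offset' within its first page and is 'size' bytes long overall.
 * Entry 0 holds PAGE_SZ - offset bytes; each following entry holds
 * min(PAGE_SZ, bytes remaining), so the last entry is clamped.
 */
static size_t sg_entry_len(int i, size_t offset, size_t size)
{
	if (i == 0)
		return PAGE_SZ - offset;
	size -= PAGE_SZ - offset;		/* consumed by entry 0 */
	while (--i > 0)
		size -= min_sz(PAGE_SZ, size);	/* consumed by middle entries */
	return min_sz(PAGE_SZ, size);
}
```

For example, a 5000-byte buffer starting 100 bytes into its first page spans two pages: entry 0 is 3996 bytes and entry 1 is the remaining 1004, not 4096 — which is exactly the over-mapping the patch removes.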
-1
drivers/misc/bh1780gli.c
··· 190 191 ddata = i2c_get_clientdata(client); 192 sysfs_remove_group(&client->dev.kobj, &bh1780_attr_group); 193 - i2c_set_clientdata(client, NULL); 194 kfree(ddata); 195 196 return 0;
··· 190 191 ddata = i2c_get_clientdata(client); 192 sysfs_remove_group(&client->dev.kobj, &bh1780_attr_group); 193 kfree(ddata); 194 195 return 0;
+13
drivers/mmc/core/core.c
··· 1631 if (host->bus_ops && !host->bus_dead) { 1632 if (host->bus_ops->suspend) 1633 err = host->bus_ops->suspend(host); 1634 } 1635 mmc_bus_put(host); 1636
··· 1631 if (host->bus_ops && !host->bus_dead) { 1632 if (host->bus_ops->suspend) 1633 err = host->bus_ops->suspend(host); 1634 + if (err == -ENOSYS || !host->bus_ops->resume) { 1635 + /* 1636 + * We simply "remove" the card in this case. 1637 + * It will be redetected on resume. 1638 + */ 1639 + if (host->bus_ops->remove) 1640 + host->bus_ops->remove(host); 1641 + mmc_claim_host(host); 1642 + mmc_detach_bus(host); 1643 + mmc_release_host(host); 1644 + host->pm_flags = 0; 1645 + err = 0; 1646 + } 1647 } 1648 mmc_bus_put(host); 1649
+2 -2
drivers/net/Kconfig
··· 2428 2429 config MV643XX_ETH 2430 tristate "Marvell Discovery (643XX) and Orion ethernet support" 2431 - depends on MV64X60 || PPC32 || PLAT_ORION 2432 select INET_LRO 2433 select PHYLIB 2434 help ··· 2803 2804 config PASEMI_MAC 2805 tristate "PA Semi 1/10Gbit MAC" 2806 - depends on PPC_PASEMI && PCI 2807 select PHYLIB 2808 select INET_LRO 2809 help
··· 2428 2429 config MV643XX_ETH 2430 tristate "Marvell Discovery (643XX) and Orion ethernet support" 2431 + depends on (MV64X60 || PPC32 || PLAT_ORION) && INET 2432 select INET_LRO 2433 select PHYLIB 2434 help ··· 2803 2804 config PASEMI_MAC 2805 tristate "PA Semi 1/10Gbit MAC" 2806 + depends on PPC_PASEMI && PCI && INET 2807 select PHYLIB 2808 select INET_LRO 2809 help
+2 -2
drivers/net/b44.c
··· 2170 dev->irq = sdev->irq; 2171 SET_ETHTOOL_OPS(dev, &b44_ethtool_ops); 2172 2173 - netif_carrier_off(dev); 2174 - 2175 err = ssb_bus_powerup(sdev->bus, 0); 2176 if (err) { 2177 dev_err(sdev->dev, ··· 2210 dev_err(sdev->dev, "Cannot register net device, aborting\n"); 2211 goto err_out_powerdown; 2212 } 2213 2214 ssb_set_drvdata(sdev, dev); 2215
··· 2170 dev->irq = sdev->irq; 2171 SET_ETHTOOL_OPS(dev, &b44_ethtool_ops); 2172 2173 err = ssb_bus_powerup(sdev->bus, 0); 2174 if (err) { 2175 dev_err(sdev->dev, ··· 2212 dev_err(sdev->dev, "Cannot register net device, aborting\n"); 2213 goto err_out_powerdown; 2214 } 2215 + 2216 + netif_carrier_off(dev); 2217 2218 ssb_set_drvdata(sdev, dev); 2219
+9
drivers/net/bonding/bond_main.c
··· 5164 res = dev_alloc_name(bond_dev, "bond%d"); 5165 if (res < 0) 5166 goto out; 5167 } 5168 5169 res = register_netdevice(bond_dev);
··· 5164 res = dev_alloc_name(bond_dev, "bond%d"); 5165 if (res < 0) 5166 goto out; 5167 + } else { 5168 + /* 5169 + * If we're given a name to register, 5170 + * we need to ensure that it's not already 5171 + * registered 5172 + */ 5173 + res = -EEXIST; 5174 + if (__dev_get_by_name(net, name) != NULL) 5175 + goto out; 5176 } 5177 5178 res = register_netdevice(bond_dev);
+8 -1
drivers/net/ehea/ehea_main.c
··· 533 int length = cqe->num_bytes_transfered - 4; /*remove CRC */ 534 535 skb_put(skb, length); 536 - skb->ip_summed = CHECKSUM_UNNECESSARY; 537 skb->protocol = eth_type_trans(skb, dev); 538 } 539 540 static inline struct sk_buff *get_skb_by_index(struct sk_buff **skb_array,
··· 533 int length = cqe->num_bytes_transfered - 4; /*remove CRC */ 534 535 skb_put(skb, length); 536 skb->protocol = eth_type_trans(skb, dev); 537 + 538 + /* The packet was not an IPv4 packet, so a complemented checksum was 539 + calculated. The value is found in the Internet Checksum field. */ 540 + if (cqe->status & EHEA_CQE_BLIND_CKSUM) { 541 + skb->ip_summed = CHECKSUM_COMPLETE; 542 + skb->csum = csum_unfold(~cqe->inet_checksum_value); 543 + } else 544 + skb->ip_summed = CHECKSUM_UNNECESSARY; 545 } 546 547 static inline struct sk_buff *get_skb_by_index(struct sk_buff **skb_array,
+1
drivers/net/ehea/ehea_qmr.h
··· 150 #define EHEA_CQE_TYPE_RQ 0x60 151 #define EHEA_CQE_STAT_ERR_MASK 0x700F 152 #define EHEA_CQE_STAT_FAT_ERR_MASK 0xF 153 #define EHEA_CQE_STAT_ERR_TCP 0x4000 154 #define EHEA_CQE_STAT_ERR_IP 0x2000 155 #define EHEA_CQE_STAT_ERR_CRC 0x1000
··· 150 #define EHEA_CQE_TYPE_RQ 0x60 151 #define EHEA_CQE_STAT_ERR_MASK 0x700F 152 #define EHEA_CQE_STAT_FAT_ERR_MASK 0xF 153 + #define EHEA_CQE_BLIND_CKSUM 0x8000 154 #define EHEA_CQE_STAT_ERR_TCP 0x4000 155 #define EHEA_CQE_STAT_ERR_IP 0x2000 156 #define EHEA_CQE_STAT_ERR_CRC 0x1000
+30 -14
drivers/net/fec.c
··· 678 { 679 struct fec_enet_private *fep = netdev_priv(dev); 680 struct phy_device *phy_dev = NULL; 681 - int ret; 682 683 fep->phy_dev = NULL; 684 685 - /* find the first phy */ 686 - phy_dev = phy_find_first(fep->mii_bus); 687 - if (!phy_dev) { 688 - printk(KERN_ERR "%s: no PHY found\n", dev->name); 689 - return -ENODEV; 690 } 691 692 - /* attach the mac to the phy */ 693 - ret = phy_connect_direct(dev, phy_dev, 694 - &fec_enet_adjust_link, 0, 695 - PHY_INTERFACE_MODE_MII); 696 - if (ret) { 697 - printk(KERN_ERR "%s: Could not attach to PHY\n", dev->name); 698 - return ret; 699 } 700 701 /* mask with MAC supported features */ ··· 751 fep->mii_bus->read = fec_enet_mdio_read; 752 fep->mii_bus->write = fec_enet_mdio_write; 753 fep->mii_bus->reset = fec_enet_mdio_reset; 754 - snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%x", pdev->id); 755 fep->mii_bus->priv = fep; 756 fep->mii_bus->parent = &pdev->dev; 757 ··· 1323 ret = fec_enet_mii_init(pdev); 1324 if (ret) 1325 goto failed_mii_init; 1326 1327 ret = register_netdev(ndev); 1328 if (ret)
··· 678 { 679 struct fec_enet_private *fep = netdev_priv(dev); 680 struct phy_device *phy_dev = NULL; 681 + char mdio_bus_id[MII_BUS_ID_SIZE]; 682 + char phy_name[MII_BUS_ID_SIZE + 3]; 683 + int phy_id; 684 685 fep->phy_dev = NULL; 686 687 + /* check for attached phy */ 688 + for (phy_id = 0; (phy_id < PHY_MAX_ADDR); phy_id++) { 689 + if ((fep->mii_bus->phy_mask & (1 << phy_id))) 690 + continue; 691 + if (fep->mii_bus->phy_map[phy_id] == NULL) 692 + continue; 693 + if (fep->mii_bus->phy_map[phy_id]->phy_id == 0) 694 + continue; 695 + strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE); 696 + break; 697 } 698 699 + if (phy_id >= PHY_MAX_ADDR) { 700 + printk(KERN_INFO "%s: no PHY, assuming direct connection " 701 + "to switch\n", dev->name); 702 + strncpy(mdio_bus_id, "0", MII_BUS_ID_SIZE); 703 + phy_id = 0; 704 + } 705 + 706 + snprintf(phy_name, MII_BUS_ID_SIZE, PHY_ID_FMT, mdio_bus_id, phy_id); 707 + phy_dev = phy_connect(dev, phy_name, &fec_enet_adjust_link, 0, 708 + PHY_INTERFACE_MODE_MII); 709 + if (IS_ERR(phy_dev)) { 710 + printk(KERN_ERR "%s: could not attach to PHY\n", dev->name); 711 + return PTR_ERR(phy_dev); 712 } 713 714 /* mask with MAC supported features */ ··· 738 fep->mii_bus->read = fec_enet_mdio_read; 739 fep->mii_bus->write = fec_enet_mdio_write; 740 fep->mii_bus->reset = fec_enet_mdio_reset; 741 + snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%x", pdev->id + 1); 742 fep->mii_bus->priv = fep; 743 fep->mii_bus->parent = &pdev->dev; 744 ··· 1310 ret = fec_enet_mii_init(pdev); 1311 if (ret) 1312 goto failed_mii_init; 1313 + 1314 + /* Carrier starts down, phylib will bring it up */ 1315 + netif_carrier_off(ndev); 1316 1317 ret = register_netdev(ndev); 1318 if (ret)
+35 -30
drivers/net/r8169.c
··· 1212 if ((RTL_R8(ChipCmd) & CmdRxEnb) == 0) 1213 return; 1214 1215 - counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr); 1216 if (!counters) 1217 return; 1218 ··· 1234 RTL_W32(CounterAddrLow, 0); 1235 RTL_W32(CounterAddrHigh, 0); 1236 1237 - pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr); 1238 } 1239 1240 static void rtl8169_get_ethtool_stats(struct net_device *dev, ··· 3294 3295 /* 3296 * Rx and Tx desscriptors needs 256 bytes alignment. 3297 - * pci_alloc_consistent provides more. 3298 */ 3299 - tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES, 3300 - &tp->TxPhyAddr); 3301 if (!tp->TxDescArray) 3302 goto err_pm_runtime_put; 3303 3304 - tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES, 3305 - &tp->RxPhyAddr); 3306 if (!tp->RxDescArray) 3307 goto err_free_tx_0; 3308 ··· 3336 err_release_ring_2: 3337 rtl8169_rx_clear(tp); 3338 err_free_rx_1: 3339 - pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray, 3340 - tp->RxPhyAddr); 3341 tp->RxDescArray = NULL; 3342 err_free_tx_0: 3343 - pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray, 3344 - tp->TxPhyAddr); 3345 tp->TxDescArray = NULL; 3346 err_pm_runtime_put: 3347 pm_runtime_put_noidle(&pdev->dev); ··· 3977 { 3978 struct pci_dev *pdev = tp->pci_dev; 3979 3980 - pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz, 3981 PCI_DMA_FROMDEVICE); 3982 dev_kfree_skb(*sk_buff); 3983 *sk_buff = NULL; ··· 4002 static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev, 4003 struct net_device *dev, 4004 struct RxDesc *desc, int rx_buf_sz, 4005 - unsigned int align) 4006 { 4007 struct sk_buff *skb; 4008 dma_addr_t mapping; ··· 4010 4011 pad = align ? align : NET_IP_ALIGN; 4012 4013 - skb = netdev_alloc_skb(dev, rx_buf_sz + pad); 4014 if (!skb) 4015 goto err_out; 4016 4017 skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad); 4018 4019 - mapping = pci_map_single(pdev, skb->data, rx_buf_sz, 4020 PCI_DMA_FROMDEVICE); 4021 4022 rtl8169_map_to_asic(desc, mapping, rx_buf_sz); ··· 4041 } 4042 4043 static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev, 4044 - u32 start, u32 end) 4045 { 4046 u32 cur; 4047 ··· 4056 4057 skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev, 4058 tp->RxDescArray + i, 4059 - tp->rx_buf_sz, tp->align); 4060 if (!skb) 4061 break; 4062 ··· 4084 memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info)); 4085 memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *)); 4086 4087 - if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC) 4088 goto err_out; 4089 4090 rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1); ··· 4101 { 4102 unsigned int len = tx_skb->len; 4103 4104 - pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE); 4105 desc->opts1 = 0x00; 4106 desc->opts2 = 0x00; 4107 desc->addr = 0x00; ··· 4246 txd = tp->TxDescArray + entry; 4247 len = frag->size; 4248 addr = ((void *) page_address(frag->page)) + frag->page_offset; 4249 - mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE); 4250 4251 /* anti gcc 2.95.3 bugware (sic) */ 4252 status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC)); ··· 4317 tp->tx_skb[entry].skb = skb; 4318 } 4319 4320 - mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE); 4321 4322 tp->tx_skb[entry].len = len; 4323 txd->addr = cpu_to_le64(mapping); ··· 4482 if (!skb) 4483 goto out; 4484 4485 - pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size, 4486 - PCI_DMA_FROMDEVICE); 4487 skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size); 4488 *sk_buff = skb; 4489 done = true; ··· 4554 rtl8169_rx_csum(skb, desc); 4555 4556 if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) { 4557 - pci_dma_sync_single_for_device(pdev, addr, 4558 pkt_size, PCI_DMA_FROMDEVICE); 4559 rtl8169_mark_to_asic(desc, tp->rx_buf_sz); 4560 } else { 4561 - pci_unmap_single(pdev, addr, tp->rx_buf_sz, 4562 PCI_DMA_FROMDEVICE); 4563 tp->Rx_skbuff[entry] = NULL; 4564 } ··· 4588 count = cur_rx - tp->cur_rx; 4589 tp->cur_rx = cur_rx; 4590 4591 - delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx); 4592 if (!delta && count) 4593 netif_info(tp, intr, dev, "no Rx buffer allocated\n"); 4594 tp->dirty_rx += delta; ··· 4774 4775 free_irq(dev->irq, dev); 4776 4777 - pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray, 4778 - tp->RxPhyAddr); 4779 - pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray, 4780 - tp->TxPhyAddr); 4781 tp->TxDescArray = NULL; 4782 tp->RxDescArray = NULL; 4783
rtl8169_mark_to_asic(desc, tp->rx_buf_sz); 4560 } else { 4561 - pci_unmap_single(pdev, addr, tp->rx_buf_sz, 4562 PCI_DMA_FROMDEVICE); 4563 tp->Rx_skbuff[entry] = NULL; 4564 } ··· 4588 count = cur_rx - tp->cur_rx; 4589 tp->cur_rx = cur_rx; 4590 4591 - delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx); 4592 if (!delta && count) 4593 netif_info(tp, intr, dev, "no Rx buffer allocated\n"); 4594 tp->dirty_rx += delta; ··· 4774 4775 free_irq(dev->irq, dev); 4776 4777 - pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray, 4778 - tp->RxPhyAddr); 4779 - pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray, 4780 - tp->TxPhyAddr); 4781 tp->TxDescArray = NULL; 4782 tp->RxDescArray = NULL; 4783
··· 1212 if ((RTL_R8(ChipCmd) & CmdRxEnb) == 0) 1213 return; 1214 1215 + counters = dma_alloc_coherent(&tp->pci_dev->dev, sizeof(*counters), 1216 + &paddr, GFP_KERNEL); 1217 if (!counters) 1218 return; 1219 ··· 1233 RTL_W32(CounterAddrLow, 0); 1234 RTL_W32(CounterAddrHigh, 0); 1235 1236 + dma_free_coherent(&tp->pci_dev->dev, sizeof(*counters), counters, 1237 + paddr); 1238 } 1239 1240 static void rtl8169_get_ethtool_stats(struct net_device *dev, ··· 3292 3293 /* 3294 * Rx and Tx desscriptors needs 256 bytes alignment. 3295 + * dma_alloc_coherent provides more. 3296 */ 3297 + tp->TxDescArray = dma_alloc_coherent(&pdev->dev, R8169_TX_RING_BYTES, 3298 + &tp->TxPhyAddr, GFP_KERNEL); 3299 if (!tp->TxDescArray) 3300 goto err_pm_runtime_put; 3301 3302 + tp->RxDescArray = dma_alloc_coherent(&pdev->dev, R8169_RX_RING_BYTES, 3303 + &tp->RxPhyAddr, GFP_KERNEL); 3304 if (!tp->RxDescArray) 3305 goto err_free_tx_0; 3306 ··· 3334 err_release_ring_2: 3335 rtl8169_rx_clear(tp); 3336 err_free_rx_1: 3337 + dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray, 3338 + tp->RxPhyAddr); 3339 tp->RxDescArray = NULL; 3340 err_free_tx_0: 3341 + dma_free_coherent(&pdev->dev, R8169_TX_RING_BYTES, tp->TxDescArray, 3342 + tp->TxPhyAddr); 3343 tp->TxDescArray = NULL; 3344 err_pm_runtime_put: 3345 pm_runtime_put_noidle(&pdev->dev); ··· 3975 { 3976 struct pci_dev *pdev = tp->pci_dev; 3977 3978 + dma_unmap_single(&pdev->dev, le64_to_cpu(desc->addr), tp->rx_buf_sz, 3979 PCI_DMA_FROMDEVICE); 3980 dev_kfree_skb(*sk_buff); 3981 *sk_buff = NULL; ··· 4000 static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev, 4001 struct net_device *dev, 4002 struct RxDesc *desc, int rx_buf_sz, 4003 + unsigned int align, gfp_t gfp) 4004 { 4005 struct sk_buff *skb; 4006 dma_addr_t mapping; ··· 4008 4009 pad = align ? align : NET_IP_ALIGN; 4010 4011 + skb = __netdev_alloc_skb(dev, rx_buf_sz + pad, gfp); 4012 if (!skb) 4013 goto err_out; 4014 4015 skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad); 4016 4017 + mapping = dma_map_single(&pdev->dev, skb->data, rx_buf_sz, 4018 PCI_DMA_FROMDEVICE); 4019 4020 rtl8169_map_to_asic(desc, mapping, rx_buf_sz); ··· 4039 } 4040 4041 static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev, 4042 + u32 start, u32 end, gfp_t gfp) 4043 { 4044 u32 cur; 4045 ··· 4054 4055 skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev, 4056 tp->RxDescArray + i, 4057 + tp->rx_buf_sz, tp->align, gfp); 4058 if (!skb) 4059 break; 4060 ··· 4082 memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info)); 4083 memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *)); 4084 4085 + if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC, GFP_KERNEL) != NUM_RX_DESC) 4086 goto err_out; 4087 4088 rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1); ··· 4099 { 4100 unsigned int len = tx_skb->len; 4101 4102 + dma_unmap_single(&pdev->dev, le64_to_cpu(desc->addr), len, 4103 + PCI_DMA_TODEVICE); 4104 desc->opts1 = 0x00; 4105 desc->opts2 = 0x00; 4106 desc->addr = 0x00; ··· 4243 txd = tp->TxDescArray + entry; 4244 len = frag->size; 4245 addr = ((void *) page_address(frag->page)) + frag->page_offset; 4246 + mapping = dma_map_single(&tp->pci_dev->dev, addr, len, 4247 + PCI_DMA_TODEVICE); 4248 4249 /* anti gcc 2.95.3 bugware (sic) */ 4250 status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC)); ··· 4313 tp->tx_skb[entry].skb = skb; 4314 } 4315 4316 + mapping = dma_map_single(&tp->pci_dev->dev, skb->data, len, 4317 + PCI_DMA_TODEVICE); 4318 4319 tp->tx_skb[entry].len = len; 4320 txd->addr = cpu_to_le64(mapping); ··· 4477 if (!skb) 4478 goto out; 4479 4480 + dma_sync_single_for_cpu(&tp->pci_dev->dev, addr, pkt_size, 4481 + PCI_DMA_FROMDEVICE); 4482 skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size); 4483 *sk_buff = skb; 4484 done = true; ··· 4549 rtl8169_rx_csum(skb, desc); 4550 4551 if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) { 4552 + dma_sync_single_for_device(&pdev->dev, addr, 4553 pkt_size, PCI_DMA_FROMDEVICE); 4554 rtl8169_mark_to_asic(desc, tp->rx_buf_sz); 4555 } else { 4556 + dma_unmap_single(&pdev->dev, addr, tp->rx_buf_sz, 4557 PCI_DMA_FROMDEVICE); 4558 tp->Rx_skbuff[entry] = NULL; 4559 } ··· 4583 count = cur_rx - tp->cur_rx; 4584 tp->cur_rx = cur_rx; 4585 4586 + delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx, GFP_ATOMIC); 4587 if (!delta && count) 4588 netif_info(tp, intr, dev, "no Rx buffer allocated\n"); 4589 tp->dirty_rx += delta; ··· 4769 4770 free_irq(dev->irq, dev); 4771 4772 + dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray, 4773 + tp->RxPhyAddr); 4774 + dma_free_coherent(&pdev->dev, R8169_TX_RING_BYTES, tp->TxDescArray, 4775 + tp->TxPhyAddr); 4776 tp->TxDescArray = NULL; 4777 tp->RxDescArray = NULL; 4778
+17 -1
drivers/net/skge.c
··· 43 #include <linux/seq_file.h> 44 #include <linux/mii.h> 45 #include <linux/slab.h> 46 #include <asm/irq.h> 47 48 #include "skge.h" ··· 3869 netif_info(skge, probe, skge->netdev, "addr %pM\n", dev->dev_addr); 3870 } 3871 3872 static int __devinit skge_probe(struct pci_dev *pdev, 3873 const struct pci_device_id *ent) 3874 { ··· 3892 3893 pci_set_master(pdev); 3894 3895 - if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { 3896 using_dac = 1; 3897 err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); 3898 } else if (!(err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))) { ··· 4150 .shutdown = skge_shutdown, 4151 }; 4152 4153 static int __init skge_init_module(void) 4154 { 4155 skge_debug_init(); 4156 return pci_register_driver(&skge_driver); 4157 }
··· 43 #include <linux/seq_file.h> 44 #include <linux/mii.h> 45 #include <linux/slab.h> 46 + #include <linux/dmi.h> 47 #include <asm/irq.h> 48 49 #include "skge.h" ··· 3868 netif_info(skge, probe, skge->netdev, "addr %pM\n", dev->dev_addr); 3869 } 3870 3871 + static int only_32bit_dma; 3872 + 3873 static int __devinit skge_probe(struct pci_dev *pdev, 3874 const struct pci_device_id *ent) 3875 { ··· 3889 3890 pci_set_master(pdev); 3891 3892 + if (!only_32bit_dma && !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { 3893 using_dac = 1; 3894 err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); 3895 } else if (!(err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))) { ··· 4147 .shutdown = skge_shutdown, 4148 }; 4149 4150 + static struct dmi_system_id skge_32bit_dma_boards[] = { 4151 + { 4152 + .ident = "Gigabyte nForce boards", 4153 + .matches = { 4154 + DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co"), 4155 + DMI_MATCH(DMI_BOARD_NAME, "nForce"), 4156 + }, 4157 + }, 4158 + {} 4159 + }; 4160 + 4161 static int __init skge_init_module(void) 4162 { 4163 + if (dmi_check_system(skge_32bit_dma_boards)) 4164 + only_32bit_dma = 1; 4165 skge_debug_init(); 4166 return pci_register_driver(&skge_driver); 4167 }
+4 -2
drivers/net/tg3.c
··· 4666 desc_idx, *post_ptr); 4667 drop_it_no_recycle: 4668 /* Other statistics kept track of by card. */ 4669 - tp->net_stats.rx_dropped++; 4670 goto next_pkt; 4671 } 4672 ··· 4726 if (len > (tp->dev->mtu + ETH_HLEN) && 4727 skb->protocol != htons(ETH_P_8021Q)) { 4728 dev_kfree_skb(skb); 4729 - goto next_pkt; 4730 } 4731 4732 if (desc->type_flags & RXD_FLAG_VLAN && ··· 9239 9240 stats->rx_missed_errors = old_stats->rx_missed_errors + 9241 get_stat64(&hw_stats->rx_discards); 9242 9243 return stats; 9244 }
··· 4666 desc_idx, *post_ptr); 4667 drop_it_no_recycle: 4668 /* Other statistics kept track of by card. */ 4669 + tp->rx_dropped++; 4670 goto next_pkt; 4671 } 4672 ··· 4726 if (len > (tp->dev->mtu + ETH_HLEN) && 4727 skb->protocol != htons(ETH_P_8021Q)) { 4728 dev_kfree_skb(skb); 4729 + goto drop_it_no_recycle; 4730 } 4731 4732 if (desc->type_flags & RXD_FLAG_VLAN && ··· 9239 9240 stats->rx_missed_errors = old_stats->rx_missed_errors + 9241 get_stat64(&hw_stats->rx_discards); 9242 + 9243 + stats->rx_dropped = tp->rx_dropped; 9244 9245 return stats; 9246 }
+1 -1
drivers/net/tg3.h
··· 2759 2760 2761 /* begin "everything else" cacheline(s) section */ 2762 - struct rtnl_link_stats64 net_stats; 2763 struct rtnl_link_stats64 net_stats_prev; 2764 struct tg3_ethtool_stats estats; 2765 struct tg3_ethtool_stats estats_prev;
··· 2759 2760 2761 /* begin "everything else" cacheline(s) section */ 2762 + unsigned long rx_dropped; 2763 struct rtnl_link_stats64 net_stats_prev; 2764 struct tg3_ethtool_stats estats; 2765 struct tg3_ethtool_stats estats_prev;
+13 -13
drivers/net/wimax/i2400m/rx.c
··· 1244 int i, result; 1245 struct device *dev = i2400m_dev(i2400m); 1246 const struct i2400m_msg_hdr *msg_hdr; 1247 - size_t pl_itr, pl_size, skb_len; 1248 unsigned long flags; 1249 - unsigned num_pls, single_last; 1250 1251 skb_len = skb->len; 1252 - d_fnstart(4, dev, "(i2400m %p skb %p [size %zu])\n", 1253 i2400m, skb, skb_len); 1254 result = -EIO; 1255 msg_hdr = (void *) skb->data; 1256 - result = i2400m_rx_msg_hdr_check(i2400m, msg_hdr, skb->len); 1257 if (result < 0) 1258 goto error_msg_hdr_check; 1259 result = -EIO; ··· 1261 pl_itr = sizeof(*msg_hdr) + /* Check payload descriptor(s) */ 1262 num_pls * sizeof(msg_hdr->pld[0]); 1263 pl_itr = ALIGN(pl_itr, I2400M_PL_ALIGN); 1264 - if (pl_itr > skb->len) { /* got all the payload descriptors? */ 1265 dev_err(dev, "RX: HW BUG? message too short (%u bytes) for " 1266 "%u payload descriptors (%zu each, total %zu)\n", 1267 - skb->len, num_pls, sizeof(msg_hdr->pld[0]), pl_itr); 1268 goto error_pl_descr_short; 1269 } 1270 /* Walk each payload payload--check we really got it */ ··· 1272 /* work around old gcc warnings */ 1273 pl_size = i2400m_pld_size(&msg_hdr->pld[i]); 1274 result = i2400m_rx_pl_descr_check(i2400m, &msg_hdr->pld[i], 1275 - pl_itr, skb->len); 1276 if (result < 0) 1277 goto error_pl_descr_check; 1278 single_last = num_pls == 1 || i == num_pls - 1; ··· 1290 if (i < i2400m->rx_pl_min) 1291 i2400m->rx_pl_min = i; 1292 i2400m->rx_num++; 1293 - i2400m->rx_size_acc += skb->len; 1294 - if (skb->len < i2400m->rx_size_min) 1295 - i2400m->rx_size_min = skb->len; 1296 - if (skb->len > i2400m->rx_size_max) 1297 - i2400m->rx_size_max = skb->len; 1298 spin_unlock_irqrestore(&i2400m->rx_lock, flags); 1299 error_pl_descr_check: 1300 error_pl_descr_short: 1301 error_msg_hdr_check: 1302 - d_fnend(4, dev, "(i2400m %p skb %p [size %zu]) = %d\n", 1303 i2400m, skb, skb_len, result); 1304 return result; 1305 }
··· 1244 int i, result; 1245 struct device *dev = i2400m_dev(i2400m); 1246 const struct i2400m_msg_hdr *msg_hdr; 1247 + size_t pl_itr, pl_size; 1248 unsigned long flags; 1249 + unsigned num_pls, single_last, skb_len; 1250 1251 skb_len = skb->len; 1252 + d_fnstart(4, dev, "(i2400m %p skb %p [size %u])\n", 1253 i2400m, skb, skb_len); 1254 result = -EIO; 1255 msg_hdr = (void *) skb->data; 1256 + result = i2400m_rx_msg_hdr_check(i2400m, msg_hdr, skb_len); 1257 if (result < 0) 1258 goto error_msg_hdr_check; 1259 result = -EIO; ··· 1261 pl_itr = sizeof(*msg_hdr) + /* Check payload descriptor(s) */ 1262 num_pls * sizeof(msg_hdr->pld[0]); 1263 pl_itr = ALIGN(pl_itr, I2400M_PL_ALIGN); 1264 + if (pl_itr > skb_len) { /* got all the payload descriptors? */ 1265 dev_err(dev, "RX: HW BUG? message too short (%u bytes) for " 1266 "%u payload descriptors (%zu each, total %zu)\n", 1267 + skb_len, num_pls, sizeof(msg_hdr->pld[0]), pl_itr); 1268 goto error_pl_descr_short; 1269 } 1270 /* Walk each payload payload--check we really got it */ ··· 1272 /* work around old gcc warnings */ 1273 pl_size = i2400m_pld_size(&msg_hdr->pld[i]); 1274 result = i2400m_rx_pl_descr_check(i2400m, &msg_hdr->pld[i], 1275 + pl_itr, skb_len); 1276 if (result < 0) 1277 goto error_pl_descr_check; 1278 single_last = num_pls == 1 || i == num_pls - 1; ··· 1290 if (i < i2400m->rx_pl_min) 1291 i2400m->rx_pl_min = i; 1292 i2400m->rx_num++; 1293 + i2400m->rx_size_acc += skb_len; 1294 + if (skb_len < i2400m->rx_size_min) 1295 + i2400m->rx_size_min = skb_len; 1296 + if (skb_len > i2400m->rx_size_max) 1297 + i2400m->rx_size_max = skb_len; 1298 spin_unlock_irqrestore(&i2400m->rx_lock, flags); 1299 error_pl_descr_check: 1300 error_pl_descr_short: 1301 error_msg_hdr_check: 1302 + d_fnend(4, dev, "(i2400m %p skb %p [size %u]) = %d\n", 1303 i2400m, skb, skb_len, result); 1304 return result; 1305 }
+1 -1
drivers/net/wireless/ath/ath9k/ani.c
··· 543 if (conf_is_ht40(conf)) 544 return clockrate * 2; 545 546 - return clockrate * 2; 547 } 548 549 static int32_t ath9k_hw_ani_get_listen_time(struct ath_hw *ah)
··· 543 if (conf_is_ht40(conf)) 544 return clockrate * 2; 545 546 + return clockrate; 547 } 548 549 static int32_t ath9k_hw_ani_get_listen_time(struct ath_hw *ah)
+93 -35
drivers/platform/x86/intel_ips.c
··· 51 * TODO: 52 * - handle CPU hotplug 53 * - provide turbo enable/disable api 54 * - make sure we can write turbo enable/disable reg based on MISC_EN 55 * 56 * Related documents: 57 * - CDI 403777, 403778 - Auburndale EDS vol 1 & 2 ··· 229 #define THM_TC2 0xac 230 #define THM_DTV 0xb0 231 #define THM_ITV 0xd8 232 - #define ITV_ME_SEQNO_MASK 0x000f0000 /* ME should update every ~200ms */ 233 #define ITV_ME_SEQNO_SHIFT (16) 234 #define ITV_MCH_TEMP_MASK 0x0000ff00 235 #define ITV_MCH_TEMP_SHIFT (8) ··· 324 bool gpu_preferred; 325 bool poll_turbo_status; 326 bool second_cpu; 327 struct ips_mcp_limits *limits; 328 329 /* Optional MCH interfaces for if i915 is in use */ ··· 415 new_limit = cur_limit - 8; /* 1W decrease */ 416 417 /* Clamp to SKU TDP limit */ 418 - if (((new_limit * 10) / 8) < (ips->orig_turbo_limit & TURBO_TDP_MASK)) 419 new_limit = ips->orig_turbo_limit & TURBO_TDP_MASK; 420 421 thm_writew(THM_MPCPC, (new_limit * 10) / 8); ··· 461 if (ips->__cpu_turbo_on) 462 return; 463 464 - on_each_cpu(do_enable_cpu_turbo, ips, 1); 465 466 ips->__cpu_turbo_on = true; 467 } ··· 499 if (!ips->__cpu_turbo_on) 500 return; 501 502 - on_each_cpu(do_disable_cpu_turbo, ips, 1); 503 504 ips->__cpu_turbo_on = false; 505 } ··· 600 { 601 unsigned long flags; 602 bool ret = false; 603 604 spin_lock_irqsave(&ips->turbo_status_lock, flags); 605 - if (ips->mcp_avg_temp > (ips->mcp_temp_limit * 100)) 606 - ret = true; 607 - if (ips->cpu_avg_power + ips->mch_avg_power > ips->mcp_power_limit) 608 - ret = true; 609 - spin_unlock_irqrestore(&ips->turbo_status_lock, flags); 610 611 - if (ret) 612 dev_info(&ips->dev->dev, 613 - "MCP power or thermal limit exceeded\n"); 614 615 return ret; 616 } ··· 677 } 678 679 /** 680 * update_turbo_limits - get various limits & settings from regs 681 * @ips: IPS driver struct 682 * ··· 715 u32 hts = thm_readl(THM_HTS); 716 717 ips->cpu_turbo_enabled = !(hts & HTS_PCTD_DIS); 718 - ips->gpu_turbo_enabled = !(hts & HTS_GTD_DIS); 719 ips->core_power_limit = thm_readw(THM_MPCPC); 720 ips->mch_power_limit = thm_readw(THM_MMGPC); 721 ips->mcp_temp_limit = thm_readw(THM_PTL); 722 ips->mcp_power_limit = thm_readw(THM_MPPC); 723 724 /* Ignore BIOS CPU vs GPU pref */ 725 } 726 ··· 902 ret = (ret * 1000) / 65535; 903 *last = val; 904 905 - return ret; 906 } 907 908 static const u16 temp_decay_factor = 2; ··· 984 kfree(mch_samples); 985 kfree(cpu_samples); 986 kfree(mchp_samples); 987 - kthread_stop(ips->adjust); 988 return -ENOMEM; 989 } 990 ··· 991 ITV_ME_SEQNO_SHIFT; 992 seqno_timestamp = get_jiffies_64(); 993 994 - old_cpu_power = thm_readl(THM_CEC) / 65535; 995 schedule_timeout_interruptible(msecs_to_jiffies(IPS_SAMPLE_PERIOD)); 996 997 /* Collect an initial average */ ··· 1193 STS_GPL_SHIFT; 1194 /* ignore EC CPU vs GPU pref */ 1195 ips->cpu_turbo_enabled = !(sts & STS_PCTD_DIS); 1196 - ips->gpu_turbo_enabled = !(sts & STS_GTD_DIS); 1197 ips->mcp_temp_limit = (sts & STS_PTL_MASK) >> 1198 STS_PTL_SHIFT; 1199 ips->mcp_power_limit = (tc1 & STS_PPL_MASK) >> 1200 STS_PPL_SHIFT; 1201 spin_unlock(&ips->turbo_status_lock); 1202 1203 thm_writeb(THM_SEC, SEC_ACK); ··· 1383 * turbo manually or we'll get an illegal MSR access, even though 1384 * turbo will still be available. 1385 */ 1386 - if (!(misc_en & IA32_MISC_TURBO_EN)) 1387 - ; /* add turbo MSR write allowed flag if necessary */ 1388 1389 if (strstr(boot_cpu_data.x86_model_id, "CPU M")) 1390 limits = &ips_sv_limits; ··· 1403 tdp = turbo_power & TURBO_TDP_MASK; 1404 1405 /* Sanity check TDP against CPU */ 1406 - if (limits->mcp_power_limit != (tdp / 8) * 1000) { 1407 - dev_warn(&ips->dev->dev, "Warning: CPU TDP doesn't match expected value (found %d, expected %d)\n", 1408 - tdp / 8, limits->mcp_power_limit / 1000); 1409 } 1410 1411 out: ··· 1443 return true; 1444 1445 out_put_busy: 1446 - symbol_put(i915_gpu_turbo_disable); 1447 out_put_lower: 1448 symbol_put(i915_gpu_lower); 1449 out_put_raise: ··· 1585 /* Save turbo limits & ratios */ 1586 rdmsrl(TURBO_POWER_CURRENT_LIMIT, ips->orig_turbo_limit); 1587 1588 - ips_enable_cpu_turbo(ips); 1589 - ips->cpu_turbo_enabled = true; 1590 1591 - /* Set up the work queue and monitor/adjust threads */ 1592 - ips->monitor = kthread_run(ips_monitor, ips, "ips-monitor"); 1593 - if (IS_ERR(ips->monitor)) { 1594 - dev_err(&dev->dev, 1595 - "failed to create thermal monitor thread, aborting\n"); 1596 - ret = -ENOMEM; 1597 - goto error_free_irq; 1598 - } 1599 - 1600 ips->adjust = kthread_create(ips_adjust, ips, "ips-adjust"); 1601 if (IS_ERR(ips->adjust)) { 1602 dev_err(&dev->dev, 1603 "failed to create thermal adjust thread, aborting\n"); 1604 ret = -ENOMEM; 1605 goto error_thread_cleanup; 1606 } ··· 1624 return ret; 1625 1626 error_thread_cleanup: 1627 - kthread_stop(ips->monitor); 1628 error_free_irq: 1629 free_irq(ips->dev->irq, ips); 1630 error_unmap:
··· 51 * TODO: 52 * - handle CPU hotplug 53 * - provide turbo enable/disable api 54 * 55 * Related documents: 56 * - CDI 403777, 403778 - Auburndale EDS vol 1 & 2 ··· 230 #define THM_TC2 0xac 231 #define THM_DTV 0xb0 232 #define THM_ITV 0xd8 233 + #define ITV_ME_SEQNO_MASK 0x00ff0000 /* ME should update every ~200ms */ 234 #define ITV_ME_SEQNO_SHIFT (16) 235 #define ITV_MCH_TEMP_MASK 0x0000ff00 236 #define ITV_MCH_TEMP_SHIFT (8) ··· 325 bool gpu_preferred; 326 bool poll_turbo_status; 327 bool second_cpu; 328 + bool turbo_toggle_allowed; 329 struct ips_mcp_limits *limits; 330 331 /* Optional MCH interfaces for if i915 is in use */ ··· 415 new_limit = cur_limit - 8; /* 1W decrease */ 416 417 /* Clamp to SKU TDP limit */ 418 + if (new_limit < (ips->orig_turbo_limit & TURBO_TDP_MASK)) 419 new_limit = ips->orig_turbo_limit & TURBO_TDP_MASK; 420 421 thm_writew(THM_MPCPC, (new_limit * 10) / 8); ··· 461 if (ips->__cpu_turbo_on) 462 return; 463 464 + if (ips->turbo_toggle_allowed) 465 + on_each_cpu(do_enable_cpu_turbo, ips, 1); 466 467 ips->__cpu_turbo_on = true; 468 } ··· 498 if (!ips->__cpu_turbo_on) 499 return; 500 501 + if (ips->turbo_toggle_allowed) 502 + on_each_cpu(do_disable_cpu_turbo, ips, 1); 503 504 ips->__cpu_turbo_on = false; 505 } ··· 598 { 599 unsigned long flags; 600 bool ret = false; 601 + u32 temp_limit; 602 + u32 avg_power; 603 + const char *msg = "MCP limit exceeded: "; 604 605 spin_lock_irqsave(&ips->turbo_status_lock, flags); 606 607 + temp_limit = ips->mcp_temp_limit * 100; 608 + if (ips->mcp_avg_temp > temp_limit) { 609 dev_info(&ips->dev->dev, 610 + "%sAvg temp %u, limit %u\n", msg, ips->mcp_avg_temp, 611 + temp_limit); 612 + ret = true; 613 + } 614 + 615 + avg_power = ips->cpu_avg_power + ips->mch_avg_power; 616 + if (avg_power > ips->mcp_power_limit) { 617 + dev_info(&ips->dev->dev, 618 + "%sAvg power %u, limit %u\n", msg, avg_power, 619 + ips->mcp_power_limit); 620 + ret = true; 621 + } 622 + 623 + spin_unlock_irqrestore(&ips->turbo_status_lock, flags); 624 625 return ret; 626 } ··· 663 } 664 665 /** 666 + * verify_limits - verify BIOS provided limits 667 + * @ips: IPS structure 668 + * 669 + * BIOS can optionally provide non-default limits for power and temp. Check 670 + * them here and use the defaults if the BIOS values are not provided or 671 + * are otherwise unusable. 672 + */ 673 + static void verify_limits(struct ips_driver *ips) 674 + { 675 + if (ips->mcp_power_limit < ips->limits->mcp_power_limit || 676 + ips->mcp_power_limit > 35000) 677 + ips->mcp_power_limit = ips->limits->mcp_power_limit; 678 + 679 + if (ips->mcp_temp_limit < ips->limits->core_temp_limit || 680 + ips->mcp_temp_limit < ips->limits->mch_temp_limit || 681 + ips->mcp_temp_limit > 150) 682 + ips->mcp_temp_limit = min(ips->limits->core_temp_limit, 683 + ips->limits->mch_temp_limit); 684 + } 685 + 686 + /** 687 * update_turbo_limits - get various limits & settings from regs 688 * @ips: IPS driver struct 689 * ··· 680 u32 hts = thm_readl(THM_HTS); 681 682 ips->cpu_turbo_enabled = !(hts & HTS_PCTD_DIS); 683 + /* 684 + * Disable turbo for now, until we can figure out why the power figures 685 + * are wrong 686 + */ 687 + ips->cpu_turbo_enabled = false; 688 + 689 + if (ips->gpu_busy) 690 + ips->gpu_turbo_enabled = !(hts & HTS_GTD_DIS); 691 + 692 ips->core_power_limit = thm_readw(THM_MPCPC); 693 ips->mch_power_limit = thm_readw(THM_MMGPC); 694 ips->mcp_temp_limit = thm_readw(THM_PTL); 695 ips->mcp_power_limit = thm_readw(THM_MPPC); 696 697 + verify_limits(ips); 698 /* Ignore BIOS CPU vs GPU pref */ 699 } 700 ··· 858 ret = (ret * 1000) / 65535; 859 *last = val; 860 861 + return 0; 862 } 863 864 static const u16 temp_decay_factor = 2; ··· 940 kfree(mch_samples); 941 kfree(cpu_samples); 942 kfree(mchp_samples); 943 return -ENOMEM; 944 } 945 ··· 948 ITV_ME_SEQNO_SHIFT; 949 seqno_timestamp = get_jiffies_64(); 950 951 + old_cpu_power = thm_readl(THM_CEC); 952 schedule_timeout_interruptible(msecs_to_jiffies(IPS_SAMPLE_PERIOD)); 953 954 /* Collect an initial average */ ··· 1150 STS_GPL_SHIFT; 1151 /* ignore EC CPU vs GPU pref */ 1152 ips->cpu_turbo_enabled = !(sts & STS_PCTD_DIS); 1153 + /* 1154 + * Disable turbo for now, until we can figure 1155 + * out why the power figures are wrong 1156 + */ 1157 + ips->cpu_turbo_enabled = false; 1158 + if (ips->gpu_busy) 1159 + ips->gpu_turbo_enabled = !(sts & STS_GTD_DIS); 1160 ips->mcp_temp_limit = (sts & STS_PTL_MASK) >> 1161 STS_PTL_SHIFT; 1162 ips->mcp_power_limit = (tc1 & STS_PPL_MASK) >> 1163 STS_PPL_SHIFT; 1164 + verify_limits(ips); 1165 spin_unlock(&ips->turbo_status_lock); 1166 1167 thm_writeb(THM_SEC, SEC_ACK); ··· 1333 * turbo manually or we'll get an illegal MSR access, even though 1334 * turbo will still be available. 1335 */ 1336 + if (misc_en & IA32_MISC_TURBO_EN) 1337 + ips->turbo_toggle_allowed = true; 1338 + else 1339 + ips->turbo_toggle_allowed = false; 1340 1341 if (strstr(boot_cpu_data.x86_model_id, "CPU M")) 1342 limits = &ips_sv_limits; ··· 1351 tdp = turbo_power & TURBO_TDP_MASK; 1352 1353 /* Sanity check TDP against CPU */ 1354 + if (limits->core_power_limit != (tdp / 8) * 1000) { 1355 + dev_info(&ips->dev->dev, "CPU TDP doesn't match expected value (found %d, expected %d)\n", 1356 + tdp / 8, limits->core_power_limit / 1000); 1357 + limits->core_power_limit = (tdp / 8) * 1000; 1358 } 1359 1360 out: ··· 1390 return true; 1391 1392 out_put_busy: 1393 + symbol_put(i915_gpu_busy); 1394 out_put_lower: 1395 symbol_put(i915_gpu_lower); 1396 out_put_raise: ··· 1532 /* Save turbo limits & ratios */ 1533 rdmsrl(TURBO_POWER_CURRENT_LIMIT, ips->orig_turbo_limit); 1534 1535 + ips_disable_cpu_turbo(ips); 1536 + ips->cpu_turbo_enabled = false; 1537 1538 + /* Create thermal adjust thread */ 1539 ips->adjust = kthread_create(ips_adjust, ips, "ips-adjust"); 1540 if (IS_ERR(ips->adjust)) { 1541 dev_err(&dev->dev, 1542 "failed to create thermal adjust thread, aborting\n"); 1543 + ret = -ENOMEM; 1544 + goto error_free_irq; 1545 + 1546 + } 1547 + 1548 + /* 1549 + * Set up the work queue and monitor thread. The monitor thread 1550 + * will wake up ips_adjust thread. 1551 + */ 1552 + ips->monitor = kthread_run(ips_monitor, ips, "ips-monitor"); 1553 + if (IS_ERR(ips->monitor)) { 1554 + dev_err(&dev->dev, 1555 + "failed to create thermal monitor thread, aborting\n"); 1556 ret = -ENOMEM; 1557 goto error_thread_cleanup; 1558 } ··· 1566 return ret; 1567 1568 error_thread_cleanup: 1569 + kthread_stop(ips->adjust); 1570 error_free_irq: 1571 free_irq(ips->dev->irq, ips); 1572 error_unmap:
-1
drivers/regulator/ad5398.c
··· 256 257 regulator_unregister(chip->rdev); 258 kfree(chip); 259 - i2c_set_clientdata(client, NULL); 260 261 return 0; 262 }
··· 256 257 regulator_unregister(chip->rdev); 258 kfree(chip); 259 260 return 0; 261 }
-2
drivers/regulator/isl6271a-regulator.c
··· 191 struct isl_pmic *pmic = i2c_get_clientdata(i2c); 192 int i; 193 194 - i2c_set_clientdata(i2c, NULL); 195 - 196 for (i = 0; i < 3; i++) 197 regulator_unregister(pmic->rdev[i]); 198
··· 191 struct isl_pmic *pmic = i2c_get_clientdata(i2c); 192 int i; 193 194 for (i = 0; i < 3; i++) 195 regulator_unregister(pmic->rdev[i]); 196
-2
drivers/rtc/rtc-ds3232.c
··· 268 free_irq(client->irq, client); 269 270 out_free: 271 - i2c_set_clientdata(client, NULL); 272 kfree(ds3232); 273 return ret; 274 } ··· 286 } 287 288 rtc_device_unregister(ds3232->rtc); 289 - i2c_set_clientdata(client, NULL); 290 kfree(ds3232); 291 return 0; 292 }
··· 268 free_irq(client->irq, client); 269 270 out_free: 271 kfree(ds3232); 272 return ret; 273 } ··· 287 } 288 289 rtc_device_unregister(ds3232->rtc); 290 kfree(ds3232); 291 return 0; 292 }
+453 -423
drivers/spi/spi_bfin5xx.c
··· 1 /* 2 * Blackfin On-Chip SPI Driver 3 * 4 - * Copyright 2004-2007 Analog Devices Inc. 5 * 6 * Enter bugs at http://blackfin.uclinux.org/ 7 * ··· 41 #define RUNNING_STATE ((void *)1) 42 #define DONE_STATE ((void *)2) 43 #define ERROR_STATE ((void *)-1) 44 - #define QUEUE_RUNNING 0 45 - #define QUEUE_STOPPED 1 46 47 - /* Value to send if no TX value is supplied */ 48 - #define SPI_IDLE_TXVAL 0x0000 49 50 - struct driver_data { 51 /* Driver model hookup */ 52 struct platform_device *pdev; 53 ··· 72 spinlock_t lock; 73 struct list_head queue; 74 int busy; 75 - int run; 76 77 /* Message Transfer pump */ 78 struct tasklet_struct pump_transfers; ··· 80 /* Current message transfer state info */ 81 struct spi_message *cur_msg; 82 struct spi_transfer *cur_transfer; 83 - struct chip_data *cur_chip; 84 size_t len_in_bytes; 85 size_t len; 86 void *tx; ··· 95 dma_addr_t rx_dma; 96 dma_addr_t tx_dma; 97 98 size_t rx_map_len; 99 size_t tx_map_len; 100 u8 n_bytes; 101 int cs_change; 102 - void (*write) (struct driver_data *); 103 - void (*read) (struct driver_data *); 104 - void (*duplex) (struct driver_data *); 105 }; 106 107 - struct chip_data { 108 u16 ctl_reg; 109 u16 baud; 110 u16 flag; 111 112 u8 chip_select_num; 113 - u8 n_bytes; 114 - u8 width; /* 0 or 1 */ 115 u8 enable_dma; 116 - u8 bits_per_word; /* 8 or 16 */ 117 - u8 cs_change_per_word; 118 u16 cs_chg_udelay; /* Some devices require > 255usec delay */ 119 u32 cs_gpio; 120 u16 idle_tx_val; 121 - void (*write) (struct driver_data *); 122 - void (*read) (struct driver_data *); 123 - void (*duplex) (struct driver_data *); 124 }; 125 126 #define DEFINE_SPI_REG(reg, off) \ 127 - static inline u16 read_##reg(struct driver_data *drv_data) \ 128 { return bfin_read16(drv_data->regs_base + off); } \ 129 - static inline void write_##reg(struct driver_data *drv_data, u16 v) \ 130 { bfin_write16(drv_data->regs_base + off, v); } 131 132 DEFINE_SPI_REG(CTRL, 0x00) ··· 136 DEFINE_SPI_REG(BAUD, 0x14) 137 DEFINE_SPI_REG(SHAW, 0x18) 
138 139 - static void bfin_spi_enable(struct driver_data *drv_data) 140 { 141 u16 cr; 142 ··· 144 write_CTRL(drv_data, (cr | BIT_CTL_ENABLE)); 145 } 146 147 - static void bfin_spi_disable(struct driver_data *drv_data) 148 { 149 u16 cr; 150 ··· 167 return spi_baud; 168 } 169 170 - static int bfin_spi_flush(struct driver_data *drv_data) 171 { 172 unsigned long limit = loops_per_jiffy << 1; 173 ··· 181 } 182 183 /* Chip select operation functions for cs_change flag */ 184 - static void bfin_spi_cs_active(struct driver_data *drv_data, struct chip_data *chip) 185 { 186 - if (likely(chip->chip_select_num)) { 187 u16 flag = read_FLAG(drv_data); 188 189 - flag |= chip->flag; 190 - flag &= ~(chip->flag << 8); 191 192 write_FLAG(drv_data, flag); 193 } else { ··· 194 } 195 } 196 197 - static void bfin_spi_cs_deactive(struct driver_data *drv_data, struct chip_data *chip) 198 { 199 - if (likely(chip->chip_select_num)) { 200 u16 flag = read_FLAG(drv_data); 201 202 - flag &= ~chip->flag; 203 - flag |= (chip->flag << 8); 204 205 write_FLAG(drv_data, flag); 206 } else { ··· 212 udelay(chip->cs_chg_udelay); 213 } 214 215 - /* stop controller and re-config current chip*/ 216 - static void bfin_spi_restore_state(struct driver_data *drv_data) 217 { 218 - struct chip_data *chip = drv_data->cur_chip; 219 220 /* Clear status and disable clock */ 221 write_STAT(drv_data, BIT_STAT_CLR); 222 bfin_spi_disable(drv_data); 223 dev_dbg(&drv_data->pdev->dev, "restoring spi ctl state\n"); 224 225 /* Load the registers */ 226 write_CTRL(drv_data, chip->ctl_reg); ··· 258 } 259 260 /* used to kick off transfer in rx mode and read unwanted RX data */ 261 - static inline void bfin_spi_dummy_read(struct driver_data *drv_data) 262 { 263 (void) read_RDBR(drv_data); 264 } 265 266 - static void bfin_spi_null_writer(struct driver_data *drv_data) 267 - { 268 - u8 n_bytes = drv_data->n_bytes; 269 - u16 tx_val = drv_data->cur_chip->idle_tx_val; 270 - 271 - /* clear RXS (we check for RXS inside the loop) */ 272 - 
bfin_spi_dummy_read(drv_data); 273 - 274 - while (drv_data->tx < drv_data->tx_end) { 275 - write_TDBR(drv_data, tx_val); 276 - drv_data->tx += n_bytes; 277 - /* wait until transfer finished. 278 - checking SPIF or TXS may not guarantee transfer completion */ 279 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 280 - cpu_relax(); 281 - /* discard RX data and clear RXS */ 282 - bfin_spi_dummy_read(drv_data); 283 - } 284 - } 285 - 286 - static void bfin_spi_null_reader(struct driver_data *drv_data) 287 - { 288 - u8 n_bytes = drv_data->n_bytes; 289 - u16 tx_val = drv_data->cur_chip->idle_tx_val; 290 - 291 - /* discard old RX data and clear RXS */ 292 - bfin_spi_dummy_read(drv_data); 293 - 294 - while (drv_data->rx < drv_data->rx_end) { 295 - write_TDBR(drv_data, tx_val); 296 - drv_data->rx += n_bytes; 297 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 298 - cpu_relax(); 299 - bfin_spi_dummy_read(drv_data); 300 - } 301 - } 302 - 303 - static void bfin_spi_u8_writer(struct driver_data *drv_data) 304 { 305 /* clear RXS (we check for RXS inside the loop) */ 306 bfin_spi_dummy_read(drv_data); ··· 279 } 280 } 281 282 - static void bfin_spi_u8_cs_chg_writer(struct driver_data *drv_data) 283 - { 284 - struct chip_data *chip = drv_data->cur_chip; 285 - 286 - /* clear RXS (we check for RXS inside the loop) */ 287 - bfin_spi_dummy_read(drv_data); 288 - 289 - while (drv_data->tx < drv_data->tx_end) { 290 - bfin_spi_cs_active(drv_data, chip); 291 - write_TDBR(drv_data, (*(u8 *) (drv_data->tx++))); 292 - /* make sure transfer finished before deactiving CS */ 293 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 294 - cpu_relax(); 295 - bfin_spi_dummy_read(drv_data); 296 - bfin_spi_cs_deactive(drv_data, chip); 297 - } 298 - } 299 - 300 - static void bfin_spi_u8_reader(struct driver_data *drv_data) 301 { 302 u16 tx_val = drv_data->cur_chip->idle_tx_val; 303 ··· 294 } 295 } 296 297 - static void bfin_spi_u8_cs_chg_reader(struct driver_data *drv_data) 298 - { 299 - struct chip_data *chip 
= drv_data->cur_chip; 300 - u16 tx_val = chip->idle_tx_val; 301 - 302 - /* discard old RX data and clear RXS */ 303 - bfin_spi_dummy_read(drv_data); 304 - 305 - while (drv_data->rx < drv_data->rx_end) { 306 - bfin_spi_cs_active(drv_data, chip); 307 - write_TDBR(drv_data, tx_val); 308 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 309 - cpu_relax(); 310 - *(u8 *) (drv_data->rx++) = read_RDBR(drv_data); 311 - bfin_spi_cs_deactive(drv_data, chip); 312 - } 313 - } 314 - 315 - static void bfin_spi_u8_duplex(struct driver_data *drv_data) 316 { 317 /* discard old RX data and clear RXS */ 318 bfin_spi_dummy_read(drv_data); ··· 307 } 308 } 309 310 - static void bfin_spi_u8_cs_chg_duplex(struct driver_data *drv_data) 311 - { 312 - struct chip_data *chip = drv_data->cur_chip; 313 314 - /* discard old RX data and clear RXS */ 315 - bfin_spi_dummy_read(drv_data); 316 - 317 - while (drv_data->rx < drv_data->rx_end) { 318 - bfin_spi_cs_active(drv_data, chip); 319 - write_TDBR(drv_data, (*(u8 *) (drv_data->tx++))); 320 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 321 - cpu_relax(); 322 - *(u8 *) (drv_data->rx++) = read_RDBR(drv_data); 323 - bfin_spi_cs_deactive(drv_data, chip); 324 - } 325 - } 326 - 327 - static void bfin_spi_u16_writer(struct driver_data *drv_data) 328 { 329 /* clear RXS (we check for RXS inside the loop) */ 330 bfin_spi_dummy_read(drv_data); ··· 330 } 331 } 332 333 - static void bfin_spi_u16_cs_chg_writer(struct driver_data *drv_data) 334 - { 335 - struct chip_data *chip = drv_data->cur_chip; 336 - 337 - /* clear RXS (we check for RXS inside the loop) */ 338 - bfin_spi_dummy_read(drv_data); 339 - 340 - while (drv_data->tx < drv_data->tx_end) { 341 - bfin_spi_cs_active(drv_data, chip); 342 - write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 343 - drv_data->tx += 2; 344 - /* make sure transfer finished before deactiving CS */ 345 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 346 - cpu_relax(); 347 - bfin_spi_dummy_read(drv_data); 348 - 
bfin_spi_cs_deactive(drv_data, chip); 349 - } 350 - } 351 - 352 - static void bfin_spi_u16_reader(struct driver_data *drv_data) 353 { 354 u16 tx_val = drv_data->cur_chip->idle_tx_val; 355 ··· 346 } 347 } 348 349 - static void bfin_spi_u16_cs_chg_reader(struct driver_data *drv_data) 350 - { 351 - struct chip_data *chip = drv_data->cur_chip; 352 - u16 tx_val = chip->idle_tx_val; 353 - 354 - /* discard old RX data and clear RXS */ 355 - bfin_spi_dummy_read(drv_data); 356 - 357 - while (drv_data->rx < drv_data->rx_end) { 358 - bfin_spi_cs_active(drv_data, chip); 359 - write_TDBR(drv_data, tx_val); 360 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 361 - cpu_relax(); 362 - *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 363 - drv_data->rx += 2; 364 - bfin_spi_cs_deactive(drv_data, chip); 365 - } 366 - } 367 - 368 - static void bfin_spi_u16_duplex(struct driver_data *drv_data) 369 { 370 /* discard old RX data and clear RXS */ 371 bfin_spi_dummy_read(drv_data); ··· 361 } 362 } 363 364 - static void bfin_spi_u16_cs_chg_duplex(struct driver_data *drv_data) 365 - { 366 - struct chip_data *chip = drv_data->cur_chip; 367 368 - /* discard old RX data and clear RXS */ 369 - bfin_spi_dummy_read(drv_data); 370 - 371 - while (drv_data->rx < drv_data->rx_end) { 372 - bfin_spi_cs_active(drv_data, chip); 373 - write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 374 - drv_data->tx += 2; 375 - while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 376 - cpu_relax(); 377 - *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 378 - drv_data->rx += 2; 379 - bfin_spi_cs_deactive(drv_data, chip); 380 - } 381 - } 382 - 383 - /* test if ther is more transfer to be done */ 384 - static void *bfin_spi_next_transfer(struct driver_data *drv_data) 385 { 386 struct spi_message *msg = drv_data->cur_msg; 387 struct spi_transfer *trans = drv_data->cur_transfer; ··· 387 * caller already set message->status; 388 * dma and pio irqs are blocked give finished message back 389 */ 390 - static void bfin_spi_giveback(struct 
driver_data *drv_data) 391 { 392 - struct chip_data *chip = drv_data->cur_chip; 393 struct spi_transfer *last_transfer; 394 unsigned long flags; 395 struct spi_message *msg; ··· 418 msg->complete(msg->context); 419 } 420 421 static irqreturn_t bfin_spi_dma_irq_handler(int irq, void *dev_id) 422 { 423 - struct driver_data *drv_data = dev_id; 424 - struct chip_data *chip = drv_data->cur_chip; 425 struct spi_message *msg = drv_data->cur_msg; 426 unsigned long timeout; 427 unsigned short dmastat = get_dma_curr_irqstat(drv_data->dma_channel); ··· 506 507 clear_dma_irqstat(drv_data->dma_channel); 508 509 - /* Wait for DMA to complete */ 510 - while (get_dma_curr_irqstat(drv_data->dma_channel) & DMA_RUN) 511 - cpu_relax(); 512 - 513 /* 514 * wait for the last transaction shifted out. HRM states: 515 * at this point there may still be data in the SPI DMA FIFO waiting ··· 513 * register until it goes low for 2 successive reads 514 */ 515 if (drv_data->tx != NULL) { 516 - while ((read_STAT(drv_data) & TXS) || 517 - (read_STAT(drv_data) & TXS)) 518 cpu_relax(); 519 } 520 ··· 523 dmastat, read_STAT(drv_data)); 524 525 timeout = jiffies + HZ; 526 - while (!(read_STAT(drv_data) & SPIF)) 527 if (!time_before(jiffies, timeout)) { 528 dev_warn(&drv_data->pdev->dev, "timeout waiting for SPIF"); 529 break; 530 } else 531 cpu_relax(); 532 533 - if ((dmastat & DMA_ERR) && (spistat & RBSY)) { 534 msg->state = ERROR_STATE; 535 dev_err(&drv_data->pdev->dev, "dma receive: fifo/buffer overflow\n"); 536 } else { ··· 550 dev_dbg(&drv_data->pdev->dev, 551 "disable dma channel irq%d\n", 552 drv_data->dma_channel); 553 - dma_disable_irq(drv_data->dma_channel); 554 555 return IRQ_HANDLED; 556 } 557 558 static void bfin_spi_pump_transfers(unsigned long data) 559 { 560 - struct driver_data *drv_data = (struct driver_data *)data; 561 struct spi_message *message = NULL; 562 struct spi_transfer *transfer = NULL; 563 struct spi_transfer *previous = NULL; 564 - struct chip_data *chip = NULL; 565 - u8 
width; 566 - u16 cr, dma_width, dma_config; 567 u32 tranf_success = 1; 568 u8 full_duplex = 0; 569 ··· 601 udelay(previous->delay_usecs); 602 } 603 604 - /* Setup the transfer state based on the type of transfer */ 605 if (bfin_spi_flush(drv_data) == 0) { 606 dev_err(&drv_data->pdev->dev, "pump_transfers: flush failed\n"); 607 message->status = -EIO; ··· 641 drv_data->cs_change = transfer->cs_change; 642 643 /* Bits per word setup */ 644 - switch (transfer->bits_per_word) { 645 - case 8: 646 drv_data->n_bytes = 1; 647 - width = CFG_SPI_WORDSIZE8; 648 - drv_data->read = chip->cs_change_per_word ? 649 - bfin_spi_u8_cs_chg_reader : bfin_spi_u8_reader; 650 - drv_data->write = chip->cs_change_per_word ? 651 - bfin_spi_u8_cs_chg_writer : bfin_spi_u8_writer; 652 - drv_data->duplex = chip->cs_change_per_word ? 653 - bfin_spi_u8_cs_chg_duplex : bfin_spi_u8_duplex; 654 - break; 655 - 656 - case 16: 657 drv_data->n_bytes = 2; 658 - width = CFG_SPI_WORDSIZE16; 659 - drv_data->read = chip->cs_change_per_word ? 660 - bfin_spi_u16_cs_chg_reader : bfin_spi_u16_reader; 661 - drv_data->write = chip->cs_change_per_word ? 662 - bfin_spi_u16_cs_chg_writer : bfin_spi_u16_writer; 663 - drv_data->duplex = chip->cs_change_per_word ? 664 - bfin_spi_u16_cs_chg_duplex : bfin_spi_u16_duplex; 665 - break; 666 - 667 - default: 668 - /* No change, the same as default setting */ 669 - drv_data->n_bytes = chip->n_bytes; 670 - width = chip->width; 671 - drv_data->write = drv_data->tx ? chip->write : bfin_spi_null_writer; 672 - drv_data->read = drv_data->rx ? chip->read : bfin_spi_null_reader; 673 - drv_data->duplex = chip->duplex ? 
chip->duplex : bfin_spi_null_writer; 674 - break; 675 } 676 - cr = (read_CTRL(drv_data) & (~BIT_CTL_TIMOD)); 677 - cr |= (width << 8); 678 write_CTRL(drv_data, cr); 679 680 - if (width == CFG_SPI_WORDSIZE16) { 681 - drv_data->len = (transfer->len) >> 1; 682 - } else { 683 - drv_data->len = transfer->len; 684 - } 685 dev_dbg(&drv_data->pdev->dev, 686 - "transfer: drv_data->write is %p, chip->write is %p, null_wr is %p\n", 687 - drv_data->write, chip->write, bfin_spi_null_writer); 688 689 - /* speed and width has been set on per message */ 690 message->state = RUNNING_STATE; 691 dma_config = 0; 692 ··· 676 write_BAUD(drv_data, chip->baud); 677 678 write_STAT(drv_data, BIT_STAT_CLR); 679 - cr = (read_CTRL(drv_data) & (~BIT_CTL_TIMOD)); 680 - if (drv_data->cs_change) 681 - bfin_spi_cs_active(drv_data, chip); 682 683 dev_dbg(&drv_data->pdev->dev, 684 "now pumping a transfer: width is %d, len is %d\n", 685 - width, transfer->len); 686 687 /* 688 * Try to map dma buffer and do a dma transfer. If successful use, ··· 699 /* config dma channel */ 700 dev_dbg(&drv_data->pdev->dev, "doing dma transfer\n"); 701 set_dma_x_count(drv_data->dma_channel, drv_data->len); 702 - if (width == CFG_SPI_WORDSIZE16) { 703 set_dma_x_modify(drv_data->dma_channel, 2); 704 dma_width = WDSIZE_16; 705 } else { ··· 785 dma_enable_irq(drv_data->dma_channel); 786 local_irq_restore(flags); 787 788 - } else { 789 - /* IO mode write then read */ 790 - dev_dbg(&drv_data->pdev->dev, "doing IO transfer\n"); 791 - 792 - /* we always use SPI_WRITE mode. SPI_READ mode 793 - seems to have problems with setting up the 794 - output value in TDBR prior to the transfer. 
*/ 795 - write_CTRL(drv_data, (cr | CFG_SPI_WRITE)); 796 - 797 - if (full_duplex) { 798 - /* full duplex mode */ 799 - BUG_ON((drv_data->tx_end - drv_data->tx) != 800 - (drv_data->rx_end - drv_data->rx)); 801 - dev_dbg(&drv_data->pdev->dev, 802 - "IO duplex: cr is 0x%x\n", cr); 803 - 804 - drv_data->duplex(drv_data); 805 - 806 - if (drv_data->tx != drv_data->tx_end) 807 - tranf_success = 0; 808 - } else if (drv_data->tx != NULL) { 809 - /* write only half duplex */ 810 - dev_dbg(&drv_data->pdev->dev, 811 - "IO write: cr is 0x%x\n", cr); 812 - 813 - drv_data->write(drv_data); 814 - 815 - if (drv_data->tx != drv_data->tx_end) 816 - tranf_success = 0; 817 - } else if (drv_data->rx != NULL) { 818 - /* read only half duplex */ 819 - dev_dbg(&drv_data->pdev->dev, 820 - "IO read: cr is 0x%x\n", cr); 821 - 822 - drv_data->read(drv_data); 823 - if (drv_data->rx != drv_data->rx_end) 824 - tranf_success = 0; 825 - } 826 - 827 - if (!tranf_success) { 828 - dev_dbg(&drv_data->pdev->dev, 829 - "IO write error!\n"); 830 - message->state = ERROR_STATE; 831 - } else { 832 - /* Update total byte transfered */ 833 - message->actual_length += drv_data->len_in_bytes; 834 - /* Move to next transfer of this msg */ 835 - message->state = bfin_spi_next_transfer(drv_data); 836 - if (drv_data->cs_change) 837 - bfin_spi_cs_deactive(drv_data, chip); 838 - } 839 - /* Schedule next transfer tasklet */ 840 - tasklet_schedule(&drv_data->pump_transfers); 841 } 842 } 843 844 /* pop a msg from queue and kick off real transfer */ 845 static void bfin_spi_pump_messages(struct work_struct *work) 846 { 847 - struct driver_data *drv_data; 848 unsigned long flags; 849 850 - drv_data = container_of(work, struct driver_data, pump_messages); 851 852 /* Lock queue and check for queue work */ 853 spin_lock_irqsave(&drv_data->lock, flags); 854 - if (list_empty(&drv_data->queue) || drv_data->run == QUEUE_STOPPED) { 855 /* pumper kicked off but no work to do */ 856 drv_data->busy = 0; 857 
spin_unlock_irqrestore(&drv_data->lock, flags); ··· 928 */ 929 static int bfin_spi_transfer(struct spi_device *spi, struct spi_message *msg) 930 { 931 - struct driver_data *drv_data = spi_master_get_devdata(spi->master); 932 unsigned long flags; 933 934 spin_lock_irqsave(&drv_data->lock, flags); 935 936 - if (drv_data->run == QUEUE_STOPPED) { 937 spin_unlock_irqrestore(&drv_data->lock, flags); 938 return -ESHUTDOWN; 939 } ··· 945 dev_dbg(&spi->dev, "adding an msg in transfer() \n"); 946 list_add_tail(&msg->queue, &drv_data->queue); 947 948 - if (drv_data->run == QUEUE_RUNNING && !drv_data->busy) 949 queue_work(drv_data->workqueue, &drv_data->pump_messages); 950 951 spin_unlock_irqrestore(&drv_data->lock, flags); ··· 969 P_SPI2_SSEL6, P_SPI2_SSEL7}, 970 }; 971 972 - /* first setup for new devices */ 973 static int bfin_spi_setup(struct spi_device *spi) 974 { 975 - struct bfin5xx_spi_chip *chip_info = NULL; 976 - struct chip_data *chip; 977 - struct driver_data *drv_data = spi_master_get_devdata(spi->master); 978 - int ret; 979 - 980 - if (spi->bits_per_word != 8 && spi->bits_per_word != 16) 981 - return -EINVAL; 982 983 /* Only alloc (or use chip_info) on first setup */ 984 chip = spi_get_ctldata(spi); 985 if (chip == NULL) { 986 - chip = kzalloc(sizeof(struct chip_data), GFP_KERNEL); 987 - if (!chip) 988 - return -ENOMEM; 989 990 chip->enable_dma = 0; 991 chip_info = spi->controller_data; 992 } 993 994 /* chip_info isn't always needed */ 995 if (chip_info) { 996 /* Make sure people stop trying to set fields via ctl_reg 997 * when they should actually be using common SPI framework. 998 - * Currently we let through: WOM EMISO PSSE GM SZ TIMOD. 999 * Not sure if a user actually needs/uses any of these, 1000 * but let's assume (for now) they do. 
1001 */ 1002 - if (chip_info->ctl_reg & (SPE|MSTR|CPOL|CPHA|LSBF|SIZE)) { 1003 dev_err(&spi->dev, "do not set bits in ctl_reg " 1004 "that the SPI framework manages\n"); 1005 - return -EINVAL; 1006 } 1007 - 1008 chip->enable_dma = chip_info->enable_dma != 0 1009 && drv_data->master_info->enable_dma; 1010 chip->ctl_reg = chip_info->ctl_reg; 1011 - chip->bits_per_word = chip_info->bits_per_word; 1012 - chip->cs_change_per_word = chip_info->cs_change_per_word; 1013 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 1014 - chip->cs_gpio = chip_info->cs_gpio; 1015 chip->idle_tx_val = chip_info->idle_tx_val; 1016 } 1017 1018 /* translate common spi framework into our register */ 1019 - if (spi->mode & SPI_CPOL) 1020 - chip->ctl_reg |= CPOL; 1021 - if (spi->mode & SPI_CPHA) 1022 - chip->ctl_reg |= CPHA; 1023 - if (spi->mode & SPI_LSB_FIRST) 1024 - chip->ctl_reg |= LSBF; 1025 - /* we dont support running in slave mode (yet?) */ 1026 - chip->ctl_reg |= MSTR; 1027 - 1028 - /* 1029 - * if any one SPI chip is registered and wants DMA, request the 1030 - * DMA channel for it 1031 - */ 1032 - if (chip->enable_dma && !drv_data->dma_requested) { 1033 - /* register dma irq handler */ 1034 - if (request_dma(drv_data->dma_channel, "BFIN_SPI_DMA") < 0) { 1035 - dev_dbg(&spi->dev, 1036 - "Unable to request BlackFin SPI DMA channel\n"); 1037 - return -ENODEV; 1038 - } 1039 - if (set_dma_callback(drv_data->dma_channel, 1040 - bfin_spi_dma_irq_handler, drv_data) < 0) { 1041 - dev_dbg(&spi->dev, "Unable to set dma callback\n"); 1042 - return -EPERM; 1043 - } 1044 - dma_disable_irq(drv_data->dma_channel); 1045 - drv_data->dma_requested = 1; 1046 } 1047 1048 /* 1049 * Notice: for blackfin, the speed_hz is the value of register 1050 * SPI_BAUD, not the real baudrate 1051 */ 1052 chip->baud = hz_to_spi_baud(spi->max_speed_hz); 1053 - chip->flag = 1 << (spi->chip_select); 1054 chip->chip_select_num = spi->chip_select; 1055 1056 - if (chip->chip_select_num == 0) { 1057 ret = 
gpio_request(chip->cs_gpio, spi->modalias); 1058 if (ret) { 1059 - if (drv_data->dma_requested) 1060 - free_dma(drv_data->dma_channel); 1061 - return ret; 1062 } 1063 gpio_direction_output(chip->cs_gpio, 1); 1064 } 1065 1066 - switch (chip->bits_per_word) { 1067 - case 8: 1068 - chip->n_bytes = 1; 1069 - chip->width = CFG_SPI_WORDSIZE8; 1070 - chip->read = chip->cs_change_per_word ? 1071 - bfin_spi_u8_cs_chg_reader : bfin_spi_u8_reader; 1072 - chip->write = chip->cs_change_per_word ? 1073 - bfin_spi_u8_cs_chg_writer : bfin_spi_u8_writer; 1074 - chip->duplex = chip->cs_change_per_word ? 1075 - bfin_spi_u8_cs_chg_duplex : bfin_spi_u8_duplex; 1076 - break; 1077 - 1078 - case 16: 1079 - chip->n_bytes = 2; 1080 - chip->width = CFG_SPI_WORDSIZE16; 1081 - chip->read = chip->cs_change_per_word ? 1082 - bfin_spi_u16_cs_chg_reader : bfin_spi_u16_reader; 1083 - chip->write = chip->cs_change_per_word ? 1084 - bfin_spi_u16_cs_chg_writer : bfin_spi_u16_writer; 1085 - chip->duplex = chip->cs_change_per_word ? 
1086 - bfin_spi_u16_cs_chg_duplex : bfin_spi_u16_duplex; 1087 - break; 1088 - 1089 - default: 1090 - dev_err(&spi->dev, "%d bits_per_word is not supported\n", 1091 - chip->bits_per_word); 1092 - if (chip_info) 1093 - kfree(chip); 1094 - return -ENODEV; 1095 - } 1096 - 1097 dev_dbg(&spi->dev, "setup spi chip %s, width is %d, dma is %d\n", 1098 - spi->modalias, chip->width, chip->enable_dma); 1099 dev_dbg(&spi->dev, "ctl_reg is 0x%x, flag_reg is 0x%x\n", 1100 chip->ctl_reg, chip->flag); 1101 1102 spi_set_ctldata(spi, chip); 1103 1104 dev_dbg(&spi->dev, "chip select number is %d\n", chip->chip_select_num); 1105 - if ((chip->chip_select_num > 0) 1106 - && (chip->chip_select_num <= spi->master->num_chipselect)) 1107 - peripheral_request(ssel[spi->master->bus_num] 1108 - [chip->chip_select_num-1], spi->modalias); 1109 1110 bfin_spi_cs_deactive(drv_data, chip); 1111 1112 return 0; 1113 } 1114 1115 /* ··· 1155 */ 1156 static void bfin_spi_cleanup(struct spi_device *spi) 1157 { 1158 - struct chip_data *chip = spi_get_ctldata(spi); 1159 1160 if (!chip) 1161 return; 1162 1163 - if ((chip->chip_select_num > 0) 1164 - && (chip->chip_select_num <= spi->master->num_chipselect)) 1165 peripheral_free(ssel[spi->master->bus_num] 1166 [chip->chip_select_num-1]); 1167 - 1168 - if (chip->chip_select_num == 0) 1169 gpio_free(chip->cs_gpio); 1170 1171 kfree(chip); 1172 } 1173 1174 - static inline int bfin_spi_init_queue(struct driver_data *drv_data) 1175 { 1176 INIT_LIST_HEAD(&drv_data->queue); 1177 spin_lock_init(&drv_data->lock); 1178 1179 - drv_data->run = QUEUE_STOPPED; 1180 drv_data->busy = 0; 1181 1182 /* init transfer tasklet */ ··· 1195 return 0; 1196 } 1197 1198 - static inline int bfin_spi_start_queue(struct driver_data *drv_data) 1199 { 1200 unsigned long flags; 1201 1202 spin_lock_irqsave(&drv_data->lock, flags); 1203 1204 - if (drv_data->run == QUEUE_RUNNING || drv_data->busy) { 1205 spin_unlock_irqrestore(&drv_data->lock, flags); 1206 return -EBUSY; 1207 } 1208 1209 - 
drv_data->run = QUEUE_RUNNING; 1210 drv_data->cur_msg = NULL; 1211 drv_data->cur_transfer = NULL; 1212 drv_data->cur_chip = NULL; ··· 1217 return 0; 1218 } 1219 1220 - static inline int bfin_spi_stop_queue(struct driver_data *drv_data) 1221 { 1222 unsigned long flags; 1223 unsigned limit = 500; ··· 1231 * execution path (pump_messages) would be required to call wake_up or 1232 * friends on every SPI message. Do this instead 1233 */ 1234 - drv_data->run = QUEUE_STOPPED; 1235 while (!list_empty(&drv_data->queue) && drv_data->busy && limit--) { 1236 spin_unlock_irqrestore(&drv_data->lock, flags); 1237 msleep(10); ··· 1246 return status; 1247 } 1248 1249 - static inline int bfin_spi_destroy_queue(struct driver_data *drv_data) 1250 { 1251 int status; 1252 ··· 1264 struct device *dev = &pdev->dev; 1265 struct bfin5xx_spi_master *platform_info; 1266 struct spi_master *master; 1267 - struct driver_data *drv_data = 0; 1268 struct resource *res; 1269 int status = 0; 1270 1271 platform_info = dev->platform_data; 1272 1273 /* Allocate master with space for drv_data */ 1274 - master = spi_alloc_master(dev, sizeof(struct driver_data) + 16); 1275 if (!master) { 1276 dev_err(&pdev->dev, "can not alloc spi_master\n"); 1277 return -ENOMEM; ··· 1307 goto out_error_ioremap; 1308 } 1309 1310 - drv_data->dma_channel = platform_get_irq(pdev, 0); 1311 - if (drv_data->dma_channel < 0) { 1312 dev_err(dev, "No DMA channel specified\n"); 1313 status = -ENOENT; 1314 - goto out_error_no_dma_ch; 1315 } 1316 1317 /* Initial and start queue */ ··· 1341 goto out_error_queue_alloc; 1342 } 1343 1344 /* Register with the SPI framework */ 1345 platform_set_drvdata(pdev, drv_data); 1346 status = spi_register_master(master); ··· 1362 1363 out_error_queue_alloc: 1364 bfin_spi_destroy_queue(drv_data); 1365 - out_error_no_dma_ch: 1366 iounmap((void *) drv_data->regs_base); 1367 out_error_ioremap: 1368 out_error_get_res: ··· 1374 /* stop hardware and remove the driver */ 1375 static int __devexit 
bfin_spi_remove(struct platform_device *pdev) 1376 { 1377 - struct driver_data *drv_data = platform_get_drvdata(pdev); 1378 int status = 0; 1379 1380 if (!drv_data) ··· 1394 free_dma(drv_data->dma_channel); 1395 } 1396 1397 /* Disconnect from the SPI framework */ 1398 spi_unregister_master(drv_data->master); 1399 ··· 1413 #ifdef CONFIG_PM 1414 static int bfin_spi_suspend(struct platform_device *pdev, pm_message_t state) 1415 { 1416 - struct driver_data *drv_data = platform_get_drvdata(pdev); 1417 int status = 0; 1418 1419 status = bfin_spi_stop_queue(drv_data); 1420 if (status != 0) 1421 return status; 1422 1423 - /* stop hardware */ 1424 - bfin_spi_disable(drv_data); 1425 1426 return 0; 1427 } 1428 1429 static int bfin_spi_resume(struct platform_device *pdev) 1430 { 1431 - struct driver_data *drv_data = platform_get_drvdata(pdev); 1432 int status = 0; 1433 1434 - /* Enable the SPI interface */ 1435 - bfin_spi_enable(drv_data); 1436 1437 /* Start the queue running */ 1438 status = bfin_spi_start_queue(drv_data); ··· 1469 { 1470 return platform_driver_probe(&bfin_spi_driver, bfin_spi_probe); 1471 } 1472 - module_init(bfin_spi_init); 1473 1474 static void __exit bfin_spi_exit(void) 1475 {
··· 1 /* 2 * Blackfin On-Chip SPI Driver 3 * 4 + * Copyright 2004-2010 Analog Devices Inc. 5 * 6 * Enter bugs at http://blackfin.uclinux.org/ 7 * ··· 41 #define RUNNING_STATE ((void *)1) 42 #define DONE_STATE ((void *)2) 43 #define ERROR_STATE ((void *)-1) 44 45 + struct bfin_spi_master_data; 46 47 + struct bfin_spi_transfer_ops { 48 + void (*write) (struct bfin_spi_master_data *); 49 + void (*read) (struct bfin_spi_master_data *); 50 + void (*duplex) (struct bfin_spi_master_data *); 51 + }; 52 + 53 + struct bfin_spi_master_data { 54 /* Driver model hookup */ 55 struct platform_device *pdev; 56 ··· 69 spinlock_t lock; 70 struct list_head queue; 71 int busy; 72 + bool running; 73 74 /* Message Transfer pump */ 75 struct tasklet_struct pump_transfers; ··· 77 /* Current message transfer state info */ 78 struct spi_message *cur_msg; 79 struct spi_transfer *cur_transfer; 80 + struct bfin_spi_slave_data *cur_chip; 81 size_t len_in_bytes; 82 size_t len; 83 void *tx; ··· 92 dma_addr_t rx_dma; 93 dma_addr_t tx_dma; 94 95 + int irq_requested; 96 + int spi_irq; 97 + 98 size_t rx_map_len; 99 size_t tx_map_len; 100 u8 n_bytes; 101 + u16 ctrl_reg; 102 + u16 flag_reg; 103 + 104 int cs_change; 105 + const struct bfin_spi_transfer_ops *ops; 106 }; 107 108 + struct bfin_spi_slave_data { 109 u16 ctl_reg; 110 u16 baud; 111 u16 flag; 112 113 u8 chip_select_num; 114 u8 enable_dma; 115 u16 cs_chg_udelay; /* Some devices require > 255usec delay */ 116 u32 cs_gpio; 117 u16 idle_tx_val; 118 + u8 pio_interrupt; /* use spi data irq */ 119 + const struct bfin_spi_transfer_ops *ops; 120 }; 121 122 #define DEFINE_SPI_REG(reg, off) \ 123 + static inline u16 read_##reg(struct bfin_spi_master_data *drv_data) \ 124 { return bfin_read16(drv_data->regs_base + off); } \ 125 + static inline void write_##reg(struct bfin_spi_master_data *drv_data, u16 v) \ 126 { bfin_write16(drv_data->regs_base + off, v); } 127 128 DEFINE_SPI_REG(CTRL, 0x00) ··· 134 DEFINE_SPI_REG(BAUD, 0x14) 135 DEFINE_SPI_REG(SHAW, 
0x18) 136 137 + static void bfin_spi_enable(struct bfin_spi_master_data *drv_data) 138 { 139 u16 cr; 140 ··· 142 write_CTRL(drv_data, (cr | BIT_CTL_ENABLE)); 143 } 144 145 + static void bfin_spi_disable(struct bfin_spi_master_data *drv_data) 146 { 147 u16 cr; 148 ··· 165 return spi_baud; 166 } 167 168 + static int bfin_spi_flush(struct bfin_spi_master_data *drv_data) 169 { 170 unsigned long limit = loops_per_jiffy << 1; 171 ··· 179 } 180 181 /* Chip select operation functions for cs_change flag */ 182 + static void bfin_spi_cs_active(struct bfin_spi_master_data *drv_data, struct bfin_spi_slave_data *chip) 183 { 184 + if (likely(chip->chip_select_num < MAX_CTRL_CS)) { 185 u16 flag = read_FLAG(drv_data); 186 187 + flag &= ~chip->flag; 188 189 write_FLAG(drv_data, flag); 190 } else { ··· 193 } 194 } 195 196 + static void bfin_spi_cs_deactive(struct bfin_spi_master_data *drv_data, 197 + struct bfin_spi_slave_data *chip) 198 { 199 + if (likely(chip->chip_select_num < MAX_CTRL_CS)) { 200 u16 flag = read_FLAG(drv_data); 201 202 + flag |= chip->flag; 203 204 write_FLAG(drv_data, flag); 205 } else { ··· 211 udelay(chip->cs_chg_udelay); 212 } 213 214 + /* enable or disable the pin muxed by GPIO and SPI CS to work as SPI CS */ 215 + static inline void bfin_spi_cs_enable(struct bfin_spi_master_data *drv_data, 216 + struct bfin_spi_slave_data *chip) 217 { 218 + if (chip->chip_select_num < MAX_CTRL_CS) { 219 + u16 flag = read_FLAG(drv_data); 220 + 221 + flag |= (chip->flag >> 8); 222 + 223 + write_FLAG(drv_data, flag); 224 + } 225 + } 226 + 227 + static inline void bfin_spi_cs_disable(struct bfin_spi_master_data *drv_data, 228 + struct bfin_spi_slave_data *chip) 229 + { 230 + if (chip->chip_select_num < MAX_CTRL_CS) { 231 + u16 flag = read_FLAG(drv_data); 232 + 233 + flag &= ~(chip->flag >> 8); 234 + 235 + write_FLAG(drv_data, flag); 236 + } 237 + } 238 + 239 + /* stop controller and re-config current chip*/ 240 + static void bfin_spi_restore_state(struct bfin_spi_master_data 
*drv_data) 241 + { 242 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 243 244 /* Clear status and disable clock */ 245 write_STAT(drv_data, BIT_STAT_CLR); 246 bfin_spi_disable(drv_data); 247 dev_dbg(&drv_data->pdev->dev, "restoring spi ctl state\n"); 248 + 249 + SSYNC(); 250 251 /* Load the registers */ 252 write_CTRL(drv_data, chip->ctl_reg); ··· 230 } 231 232 /* used to kick off transfer in rx mode and read unwanted RX data */ 233 + static inline void bfin_spi_dummy_read(struct bfin_spi_master_data *drv_data) 234 { 235 (void) read_RDBR(drv_data); 236 } 237 238 + static void bfin_spi_u8_writer(struct bfin_spi_master_data *drv_data) 239 { 240 /* clear RXS (we check for RXS inside the loop) */ 241 bfin_spi_dummy_read(drv_data); ··· 288 } 289 } 290 291 + static void bfin_spi_u8_reader(struct bfin_spi_master_data *drv_data) 292 { 293 u16 tx_val = drv_data->cur_chip->idle_tx_val; 294 ··· 321 } 322 } 323 324 + static void bfin_spi_u8_duplex(struct bfin_spi_master_data *drv_data) 325 { 326 /* discard old RX data and clear RXS */ 327 bfin_spi_dummy_read(drv_data); ··· 352 } 353 } 354 355 + static const struct bfin_spi_transfer_ops bfin_bfin_spi_transfer_ops_u8 = { 356 + .write = bfin_spi_u8_writer, 357 + .read = bfin_spi_u8_reader, 358 + .duplex = bfin_spi_u8_duplex, 359 + }; 360 361 + static void bfin_spi_u16_writer(struct bfin_spi_master_data *drv_data) 362 { 363 /* clear RXS (we check for RXS inside the loop) */ 364 bfin_spi_dummy_read(drv_data); ··· 386 } 387 } 388 389 + static void bfin_spi_u16_reader(struct bfin_spi_master_data *drv_data) 390 { 391 u16 tx_val = drv_data->cur_chip->idle_tx_val; 392 ··· 421 } 422 } 423 424 + static void bfin_spi_u16_duplex(struct bfin_spi_master_data *drv_data) 425 { 426 /* discard old RX data and clear RXS */ 427 bfin_spi_dummy_read(drv_data); ··· 455 } 456 } 457 458 + static const struct bfin_spi_transfer_ops bfin_bfin_spi_transfer_ops_u16 = { 459 + .write = bfin_spi_u16_writer, 460 + .read = bfin_spi_u16_reader, 461 + 
.duplex = bfin_spi_u16_duplex, 462 + }; 463 464 + /* test if there is more transfer to be done */ 465 + static void *bfin_spi_next_transfer(struct bfin_spi_master_data *drv_data) 466 { 467 struct spi_message *msg = drv_data->cur_msg; 468 struct spi_transfer *trans = drv_data->cur_transfer; ··· 494 * caller already set message->status; 495 * dma and pio irqs are blocked give finished message back 496 */ 497 + static void bfin_spi_giveback(struct bfin_spi_master_data *drv_data) 498 { 499 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 500 struct spi_transfer *last_transfer; 501 unsigned long flags; 502 struct spi_message *msg; ··· 525 msg->complete(msg->context); 526 } 527 528 + /* spi data irq handler */ 529 + static irqreturn_t bfin_spi_pio_irq_handler(int irq, void *dev_id) 530 + { 531 + struct bfin_spi_master_data *drv_data = dev_id; 532 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 533 + struct spi_message *msg = drv_data->cur_msg; 534 + int n_bytes = drv_data->n_bytes; 535 + 536 + /* wait until transfer finished. 
*/ 537 + while (!(read_STAT(drv_data) & BIT_STAT_RXS)) 538 + cpu_relax(); 539 + 540 + if ((drv_data->tx && drv_data->tx >= drv_data->tx_end) || 541 + (drv_data->rx && drv_data->rx >= (drv_data->rx_end - n_bytes))) { 542 + /* last read */ 543 + if (drv_data->rx) { 544 + dev_dbg(&drv_data->pdev->dev, "last read\n"); 545 + if (n_bytes == 2) 546 + *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 547 + else if (n_bytes == 1) 548 + *(u8 *) (drv_data->rx) = read_RDBR(drv_data); 549 + drv_data->rx += n_bytes; 550 + } 551 + 552 + msg->actual_length += drv_data->len_in_bytes; 553 + if (drv_data->cs_change) 554 + bfin_spi_cs_deactive(drv_data, chip); 555 + /* Move to next transfer */ 556 + msg->state = bfin_spi_next_transfer(drv_data); 557 + 558 + disable_irq_nosync(drv_data->spi_irq); 559 + 560 + /* Schedule transfer tasklet */ 561 + tasklet_schedule(&drv_data->pump_transfers); 562 + return IRQ_HANDLED; 563 + } 564 + 565 + if (drv_data->rx && drv_data->tx) { 566 + /* duplex */ 567 + dev_dbg(&drv_data->pdev->dev, "duplex: write_TDBR\n"); 568 + if (drv_data->n_bytes == 2) { 569 + *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 570 + write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 571 + } else if (drv_data->n_bytes == 1) { 572 + *(u8 *) (drv_data->rx) = read_RDBR(drv_data); 573 + write_TDBR(drv_data, (*(u8 *) (drv_data->tx))); 574 + } 575 + } else if (drv_data->rx) { 576 + /* read */ 577 + dev_dbg(&drv_data->pdev->dev, "read: write_TDBR\n"); 578 + if (drv_data->n_bytes == 2) 579 + *(u16 *) (drv_data->rx) = read_RDBR(drv_data); 580 + else if (drv_data->n_bytes == 1) 581 + *(u8 *) (drv_data->rx) = read_RDBR(drv_data); 582 + write_TDBR(drv_data, chip->idle_tx_val); 583 + } else if (drv_data->tx) { 584 + /* write */ 585 + dev_dbg(&drv_data->pdev->dev, "write: write_TDBR\n"); 586 + bfin_spi_dummy_read(drv_data); 587 + if (drv_data->n_bytes == 2) 588 + write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 589 + else if (drv_data->n_bytes == 1) 590 + write_TDBR(drv_data, (*(u8 *) 
(drv_data->tx))); 591 + } 592 + 593 + if (drv_data->tx) 594 + drv_data->tx += n_bytes; 595 + if (drv_data->rx) 596 + drv_data->rx += n_bytes; 597 + 598 + return IRQ_HANDLED; 599 + } 600 + 601 static irqreturn_t bfin_spi_dma_irq_handler(int irq, void *dev_id) 602 { 603 + struct bfin_spi_master_data *drv_data = dev_id; 604 + struct bfin_spi_slave_data *chip = drv_data->cur_chip; 605 struct spi_message *msg = drv_data->cur_msg; 606 unsigned long timeout; 607 unsigned short dmastat = get_dma_curr_irqstat(drv_data->dma_channel); ··· 540 541 clear_dma_irqstat(drv_data->dma_channel); 542 543 /* 544 * wait for the last transaction shifted out. HRM states: 545 * at this point there may still be data in the SPI DMA FIFO waiting ··· 551 * register until it goes low for 2 successive reads 552 */ 553 if (drv_data->tx != NULL) { 554 + while ((read_STAT(drv_data) & BIT_STAT_TXS) || 555 + (read_STAT(drv_data) & BIT_STAT_TXS)) 556 cpu_relax(); 557 } 558 ··· 561 dmastat, read_STAT(drv_data)); 562 563 timeout = jiffies + HZ; 564 + while (!(read_STAT(drv_data) & BIT_STAT_SPIF)) 565 if (!time_before(jiffies, timeout)) { 566 dev_warn(&drv_data->pdev->dev, "timeout waiting for SPIF"); 567 break; 568 } else 569 cpu_relax(); 570 571 + if ((dmastat & DMA_ERR) && (spistat & BIT_STAT_RBSY)) { 572 msg->state = ERROR_STATE; 573 dev_err(&drv_data->pdev->dev, "dma receive: fifo/buffer overflow\n"); 574 } else { ··· 588 dev_dbg(&drv_data->pdev->dev, 589 "disable dma channel irq%d\n", 590 drv_data->dma_channel); 591 + dma_disable_irq_nosync(drv_data->dma_channel); 592 593 return IRQ_HANDLED; 594 } 595 596 static void bfin_spi_pump_transfers(unsigned long data) 597 { 598 + struct bfin_spi_master_data *drv_data = (struct bfin_spi_master_data *)data; 599 struct spi_message *message = NULL; 600 struct spi_transfer *transfer = NULL; 601 struct spi_transfer *previous = NULL; 602 + struct bfin_spi_slave_data *chip = NULL; 603 + unsigned int bits_per_word; 604 + u16 cr, cr_width, dma_width, dma_config; 605 
u32 tranf_success = 1; 606 u8 full_duplex = 0; 607 ··· 639 udelay(previous->delay_usecs); 640 } 641 642 + /* Flush any existing transfers that may be sitting in the hardware */ 643 if (bfin_spi_flush(drv_data) == 0) { 644 dev_err(&drv_data->pdev->dev, "pump_transfers: flush failed\n"); 645 message->status = -EIO; ··· 679 drv_data->cs_change = transfer->cs_change; 680 681 /* Bits per word setup */ 682 + bits_per_word = transfer->bits_per_word ? : message->spi->bits_per_word; 683 + if (bits_per_word == 8) { 684 drv_data->n_bytes = 1; 685 + drv_data->len = transfer->len; 686 + cr_width = 0; 687 + drv_data->ops = &bfin_bfin_spi_transfer_ops_u8; 688 + } else if (bits_per_word == 16) { 689 drv_data->n_bytes = 2; 690 + drv_data->len = (transfer->len) >> 1; 691 + cr_width = BIT_CTL_WORDSIZE; 692 + drv_data->ops = &bfin_bfin_spi_transfer_ops_u16; 693 + } else { 694 + dev_err(&drv_data->pdev->dev, "transfer: unsupported bits_per_word\n"); 695 + message->status = -EINVAL; 696 + bfin_spi_giveback(drv_data); 697 + return; 698 } 699 + cr = read_CTRL(drv_data) & ~(BIT_CTL_TIMOD | BIT_CTL_WORDSIZE); 700 + cr |= cr_width; 701 write_CTRL(drv_data, cr); 702 703 dev_dbg(&drv_data->pdev->dev, 704 + "transfer: drv_data->ops is %p, chip->ops is %p, u8_ops is %p\n", 705 + drv_data->ops, chip->ops, &bfin_bfin_spi_transfer_ops_u8); 706 707 message->state = RUNNING_STATE; 708 dma_config = 0; 709 ··· 735 write_BAUD(drv_data, chip->baud); 736 737 write_STAT(drv_data, BIT_STAT_CLR); 738 + bfin_spi_cs_active(drv_data, chip); 739 740 dev_dbg(&drv_data->pdev->dev, 741 "now pumping a transfer: width is %d, len is %d\n", 742 + cr_width, transfer->len); 743 744 /* 745 * Try to map dma buffer and do a dma transfer. 
If successful use, ··· 760 /* config dma channel */ 761 dev_dbg(&drv_data->pdev->dev, "doing dma transfer\n"); 762 set_dma_x_count(drv_data->dma_channel, drv_data->len); 763 + if (cr_width == BIT_CTL_WORDSIZE) { 764 set_dma_x_modify(drv_data->dma_channel, 2); 765 dma_width = WDSIZE_16; 766 } else { ··· 846 dma_enable_irq(drv_data->dma_channel); 847 local_irq_restore(flags); 848 849 + return; 850 } 851 + 852 + /* 853 + * We always use SPI_WRITE mode (transfer starts with TDBR write). 854 + * SPI_READ mode (transfer starts with RDBR read) seems to have 855 + * problems with setting up the output value in TDBR prior to the 856 + * start of the transfer. 857 + */ 858 + write_CTRL(drv_data, cr | BIT_CTL_TXMOD); 859 + 860 + if (chip->pio_interrupt) { 861 + /* SPI irq should have been disabled by now */ 862 + 863 + /* discard old RX data and clear RXS */ 864 + bfin_spi_dummy_read(drv_data); 865 + 866 + /* start transfer */ 867 + if (drv_data->tx == NULL) 868 + write_TDBR(drv_data, chip->idle_tx_val); 869 + else { 870 + if (bits_per_word == 8) 871 + write_TDBR(drv_data, (*(u8 *) (drv_data->tx))); 872 + else 873 + write_TDBR(drv_data, (*(u16 *) (drv_data->tx))); 874 + drv_data->tx += drv_data->n_bytes; 875 + } 876 + 877 + /* once TDBR is empty, interrupt is triggered */ 878 + enable_irq(drv_data->spi_irq); 879 + return; 880 + } 881 + 882 + /* IO mode */ 883 + dev_dbg(&drv_data->pdev->dev, "doing IO transfer\n"); 884 + 885 + if (full_duplex) { 886 + /* full duplex mode */ 887 + BUG_ON((drv_data->tx_end - drv_data->tx) != 888 + (drv_data->rx_end - drv_data->rx)); 889 + dev_dbg(&drv_data->pdev->dev, 890 + "IO duplex: cr is 0x%x\n", cr); 891 + 892 + drv_data->ops->duplex(drv_data); 893 + 894 + if (drv_data->tx != drv_data->tx_end) 895 + tranf_success = 0; 896 + } else if (drv_data->tx != NULL) { 897 + /* write only half duplex */ 898 + dev_dbg(&drv_data->pdev->dev, 899 + "IO write: cr is 0x%x\n", cr); 900 + 901 + drv_data->ops->write(drv_data); 902 + 903 + if (drv_data->tx != 
drv_data->tx_end) 904 + tranf_success = 0; 905 + } else if (drv_data->rx != NULL) { 906 + /* read only half duplex */ 907 + dev_dbg(&drv_data->pdev->dev, 908 + "IO read: cr is 0x%x\n", cr); 909 + 910 + drv_data->ops->read(drv_data); 911 + if (drv_data->rx != drv_data->rx_end) 912 + tranf_success = 0; 913 + } 914 + 915 + if (!tranf_success) { 916 + dev_dbg(&drv_data->pdev->dev, 917 + "IO write error!\n"); 918 + message->state = ERROR_STATE; 919 + } else { 920 + /* Update total byte transfered */ 921 + message->actual_length += drv_data->len_in_bytes; 922 + /* Move to next transfer of this msg */ 923 + message->state = bfin_spi_next_transfer(drv_data); 924 + if (drv_data->cs_change) 925 + bfin_spi_cs_deactive(drv_data, chip); 926 + } 927 + 928 + /* Schedule next transfer tasklet */ 929 + tasklet_schedule(&drv_data->pump_transfers); 930 } 931 932 /* pop a msg from queue and kick off real transfer */ 933 static void bfin_spi_pump_messages(struct work_struct *work) 934 { 935 + struct bfin_spi_master_data *drv_data; 936 unsigned long flags; 937 938 + drv_data = container_of(work, struct bfin_spi_master_data, pump_messages); 939 940 /* Lock queue and check for queue work */ 941 spin_lock_irqsave(&drv_data->lock, flags); 942 + if (list_empty(&drv_data->queue) || !drv_data->running) { 943 /* pumper kicked off but no work to do */ 944 drv_data->busy = 0; 945 spin_unlock_irqrestore(&drv_data->lock, flags); ··· 962 */ 963 static int bfin_spi_transfer(struct spi_device *spi, struct spi_message *msg) 964 { 965 + struct bfin_spi_master_data *drv_data = spi_master_get_devdata(spi->master); 966 unsigned long flags; 967 968 spin_lock_irqsave(&drv_data->lock, flags); 969 970 + if (!drv_data->running) { 971 spin_unlock_irqrestore(&drv_data->lock, flags); 972 return -ESHUTDOWN; 973 } ··· 979 dev_dbg(&spi->dev, "adding an msg in transfer() \n"); 980 list_add_tail(&msg->queue, &drv_data->queue); 981 982 + if (drv_data->running && !drv_data->busy) 983 queue_work(drv_data->workqueue, 
&drv_data->pump_messages); 984 985 spin_unlock_irqrestore(&drv_data->lock, flags); ··· 1003 P_SPI2_SSEL6, P_SPI2_SSEL7}, 1004 }; 1005 1006 + /* setup for devices (may be called multiple times -- not just first setup) */ 1007 static int bfin_spi_setup(struct spi_device *spi) 1008 { 1009 + struct bfin5xx_spi_chip *chip_info; 1010 + struct bfin_spi_slave_data *chip = NULL; 1011 + struct bfin_spi_master_data *drv_data = spi_master_get_devdata(spi->master); 1012 + u16 bfin_ctl_reg; 1013 + int ret = -EINVAL; 1014 1015 /* Only alloc (or use chip_info) on first setup */ 1016 + chip_info = NULL; 1017 chip = spi_get_ctldata(spi); 1018 if (chip == NULL) { 1019 + chip = kzalloc(sizeof(*chip), GFP_KERNEL); 1020 + if (!chip) { 1021 + dev_err(&spi->dev, "cannot allocate chip data\n"); 1022 + ret = -ENOMEM; 1023 + goto error; 1024 + } 1025 1026 chip->enable_dma = 0; 1027 chip_info = spi->controller_data; 1028 } 1029 1030 + /* Let people set non-standard bits directly */ 1031 + bfin_ctl_reg = BIT_CTL_OPENDRAIN | BIT_CTL_EMISO | 1032 + BIT_CTL_PSSE | BIT_CTL_GM | BIT_CTL_SZ; 1033 + 1034 /* chip_info isn't always needed */ 1035 if (chip_info) { 1036 /* Make sure people stop trying to set fields via ctl_reg 1037 * when they should actually be using common SPI framework. 1038 + * Currently we let through: WOM EMISO PSSE GM SZ. 1039 * Not sure if a user actually needs/uses any of these, 1040 * but let's assume (for now) they do. 
1041 */ 1042 + if (chip_info->ctl_reg & ~bfin_ctl_reg) { 1043 dev_err(&spi->dev, "do not set bits in ctl_reg " 1044 "that the SPI framework manages\n"); 1045 + goto error; 1046 } 1047 chip->enable_dma = chip_info->enable_dma != 0 1048 && drv_data->master_info->enable_dma; 1049 chip->ctl_reg = chip_info->ctl_reg; 1050 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 1051 chip->idle_tx_val = chip_info->idle_tx_val; 1052 + chip->pio_interrupt = chip_info->pio_interrupt; 1053 + spi->bits_per_word = chip_info->bits_per_word; 1054 + } else { 1055 + /* force a default base state */ 1056 + chip->ctl_reg &= bfin_ctl_reg; 1057 + } 1058 + 1059 + if (spi->bits_per_word != 8 && spi->bits_per_word != 16) { 1060 + dev_err(&spi->dev, "%d bits_per_word is not supported\n", 1061 + spi->bits_per_word); 1062 + goto error; 1063 } 1064 1065 /* translate common spi framework into our register */ 1066 + if (spi->mode & ~(SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST)) { 1067 + dev_err(&spi->dev, "unsupported spi modes detected\n"); 1068 + goto error; 1069 } 1070 + if (spi->mode & SPI_CPOL) 1071 + chip->ctl_reg |= BIT_CTL_CPOL; 1072 + if (spi->mode & SPI_CPHA) 1073 + chip->ctl_reg |= BIT_CTL_CPHA; 1074 + if (spi->mode & SPI_LSB_FIRST) 1075 + chip->ctl_reg |= BIT_CTL_LSBF; 1076 + /* we dont support running in slave mode (yet?) 
*/ 1077 + chip->ctl_reg |= BIT_CTL_MASTER; 1078 1079 /* 1080 * Notice: for blackfin, the speed_hz is the value of register 1081 * SPI_BAUD, not the real baudrate 1082 */ 1083 chip->baud = hz_to_spi_baud(spi->max_speed_hz); 1084 chip->chip_select_num = spi->chip_select; 1085 + if (chip->chip_select_num < MAX_CTRL_CS) { 1086 + if (!(spi->mode & SPI_CPHA)) 1087 + dev_warn(&spi->dev, "Warning: SPI CPHA not set:" 1088 + " Slave Select not under software control!\n" 1089 + " See Documentation/blackfin/bfin-spi-notes.txt"); 1090 1091 + chip->flag = (1 << spi->chip_select) << 8; 1092 + } else 1093 + chip->cs_gpio = chip->chip_select_num - MAX_CTRL_CS; 1094 + 1095 + if (chip->enable_dma && chip->pio_interrupt) { 1096 + dev_err(&spi->dev, "enable_dma is set, " 1097 + "do not set pio_interrupt\n"); 1098 + goto error; 1099 + } 1100 + /* 1101 + * if any one SPI chip is registered and wants DMA, request the 1102 + * DMA channel for it 1103 + */ 1104 + if (chip->enable_dma && !drv_data->dma_requested) { 1105 + /* register dma irq handler */ 1106 + ret = request_dma(drv_data->dma_channel, "BFIN_SPI_DMA"); 1107 + if (ret) { 1108 + dev_err(&spi->dev, 1109 + "Unable to request BlackFin SPI DMA channel\n"); 1110 + goto error; 1111 + } 1112 + drv_data->dma_requested = 1; 1113 + 1114 + ret = set_dma_callback(drv_data->dma_channel, 1115 + bfin_spi_dma_irq_handler, drv_data); 1116 + if (ret) { 1117 + dev_err(&spi->dev, "Unable to set dma callback\n"); 1118 + goto error; 1119 + } 1120 + dma_disable_irq(drv_data->dma_channel); 1121 + } 1122 + 1123 + if (chip->pio_interrupt && !drv_data->irq_requested) { 1124 + ret = request_irq(drv_data->spi_irq, bfin_spi_pio_irq_handler, 1125 + IRQF_DISABLED, "BFIN_SPI", drv_data); 1126 + if (ret) { 1127 + dev_err(&spi->dev, "Unable to register spi IRQ\n"); 1128 + goto error; 1129 + } 1130 + drv_data->irq_requested = 1; 1131 + /* we use write mode, spi irq has to be disabled here */ 1132 + disable_irq(drv_data->spi_irq); 1133 + } 1134 + 1135 + if 
(chip->chip_select_num >= MAX_CTRL_CS) { 1136 ret = gpio_request(chip->cs_gpio, spi->modalias); 1137 if (ret) { 1138 + dev_err(&spi->dev, "gpio_request() error\n"); 1139 + goto pin_error; 1140 } 1141 gpio_direction_output(chip->cs_gpio, 1); 1142 } 1143 1144 dev_dbg(&spi->dev, "setup spi chip %s, width is %d, dma is %d\n", 1145 + spi->modalias, spi->bits_per_word, chip->enable_dma); 1146 dev_dbg(&spi->dev, "ctl_reg is 0x%x, flag_reg is 0x%x\n", 1147 chip->ctl_reg, chip->flag); 1148 1149 spi_set_ctldata(spi, chip); 1150 1151 dev_dbg(&spi->dev, "chip select number is %d\n", chip->chip_select_num); 1152 + if (chip->chip_select_num < MAX_CTRL_CS) { 1153 + ret = peripheral_request(ssel[spi->master->bus_num] 1154 + [chip->chip_select_num-1], spi->modalias); 1155 + if (ret) { 1156 + dev_err(&spi->dev, "peripheral_request() error\n"); 1157 + goto pin_error; 1158 + } 1159 + } 1160 1161 + bfin_spi_cs_enable(drv_data, chip); 1162 bfin_spi_cs_deactive(drv_data, chip); 1163 1164 return 0; 1165 + 1166 + pin_error: 1167 + if (chip->chip_select_num >= MAX_CTRL_CS) 1168 + gpio_free(chip->cs_gpio); 1169 + else 1170 + peripheral_free(ssel[spi->master->bus_num] 1171 + [chip->chip_select_num - 1]); 1172 + error: 1173 + if (chip) { 1174 + if (drv_data->dma_requested) 1175 + free_dma(drv_data->dma_channel); 1176 + drv_data->dma_requested = 0; 1177 + 1178 + kfree(chip); 1179 + /* prevent free 'chip' twice */ 1180 + spi_set_ctldata(spi, NULL); 1181 + } 1182 + 1183 + return ret; 1184 } 1185 1186 /* ··· 1152 */ 1153 static void bfin_spi_cleanup(struct spi_device *spi) 1154 { 1155 + struct bfin_spi_slave_data *chip = spi_get_ctldata(spi); 1156 + struct bfin_spi_master_data *drv_data = spi_master_get_devdata(spi->master); 1157 1158 if (!chip) 1159 return; 1160 1161 + if (chip->chip_select_num < MAX_CTRL_CS) { 1162 peripheral_free(ssel[spi->master->bus_num] 1163 [chip->chip_select_num-1]); 1164 + bfin_spi_cs_disable(drv_data, chip); 1165 + } else 1166 gpio_free(chip->cs_gpio); 1167 1168 
kfree(chip); 1169 + /* prevent free 'chip' twice */ 1170 + spi_set_ctldata(spi, NULL); 1171 } 1172 1173 + static inline int bfin_spi_init_queue(struct bfin_spi_master_data *drv_data) 1174 { 1175 INIT_LIST_HEAD(&drv_data->queue); 1176 spin_lock_init(&drv_data->lock); 1177 1178 + drv_data->running = false; 1179 drv_data->busy = 0; 1180 1181 /* init transfer tasklet */ ··· 1190 return 0; 1191 } 1192 1193 + static inline int bfin_spi_start_queue(struct bfin_spi_master_data *drv_data) 1194 { 1195 unsigned long flags; 1196 1197 spin_lock_irqsave(&drv_data->lock, flags); 1198 1199 + if (drv_data->running || drv_data->busy) { 1200 spin_unlock_irqrestore(&drv_data->lock, flags); 1201 return -EBUSY; 1202 } 1203 1204 + drv_data->running = true; 1205 drv_data->cur_msg = NULL; 1206 drv_data->cur_transfer = NULL; 1207 drv_data->cur_chip = NULL; ··· 1212 return 0; 1213 } 1214 1215 + static inline int bfin_spi_stop_queue(struct bfin_spi_master_data *drv_data) 1216 { 1217 unsigned long flags; 1218 unsigned limit = 500; ··· 1226 * execution path (pump_messages) would be required to call wake_up or 1227 * friends on every SPI message. 
Do this instead 1228 */ 1229 + drv_data->running = false; 1230 while (!list_empty(&drv_data->queue) && drv_data->busy && limit--) { 1231 spin_unlock_irqrestore(&drv_data->lock, flags); 1232 msleep(10); ··· 1241 return status; 1242 } 1243 1244 + static inline int bfin_spi_destroy_queue(struct bfin_spi_master_data *drv_data) 1245 { 1246 int status; 1247 ··· 1259 struct device *dev = &pdev->dev; 1260 struct bfin5xx_spi_master *platform_info; 1261 struct spi_master *master; 1262 + struct bfin_spi_master_data *drv_data; 1263 struct resource *res; 1264 int status = 0; 1265 1266 platform_info = dev->platform_data; 1267 1268 /* Allocate master with space for drv_data */ 1269 + master = spi_alloc_master(dev, sizeof(*drv_data)); 1270 if (!master) { 1271 dev_err(&pdev->dev, "can not alloc spi_master\n"); 1272 return -ENOMEM; ··· 1302 goto out_error_ioremap; 1303 } 1304 1305 + res = platform_get_resource(pdev, IORESOURCE_DMA, 0); 1306 + if (res == NULL) { 1307 dev_err(dev, "No DMA channel specified\n"); 1308 status = -ENOENT; 1309 + goto out_error_free_io; 1310 + } 1311 + drv_data->dma_channel = res->start; 1312 + 1313 + drv_data->spi_irq = platform_get_irq(pdev, 0); 1314 + if (drv_data->spi_irq < 0) { 1315 + dev_err(dev, "No spi pio irq specified\n"); 1316 + status = -ENOENT; 1317 + goto out_error_free_io; 1318 } 1319 1320 /* Initial and start queue */ ··· 1328 goto out_error_queue_alloc; 1329 } 1330 1331 + /* Reset SPI registers. If these registers were used by the boot loader, 1332 + * the sky may fall on your head if you enable the dma controller. 
1333 + */ 1334 + write_CTRL(drv_data, BIT_CTL_CPHA | BIT_CTL_MASTER); 1335 + write_FLAG(drv_data, 0xFF00); 1336 + 1337 /* Register with the SPI framework */ 1338 platform_set_drvdata(pdev, drv_data); 1339 status = spi_register_master(master); ··· 1343 1344 out_error_queue_alloc: 1345 bfin_spi_destroy_queue(drv_data); 1346 + out_error_free_io: 1347 iounmap((void *) drv_data->regs_base); 1348 out_error_ioremap: 1349 out_error_get_res: ··· 1355 /* stop hardware and remove the driver */ 1356 static int __devexit bfin_spi_remove(struct platform_device *pdev) 1357 { 1358 + struct bfin_spi_master_data *drv_data = platform_get_drvdata(pdev); 1359 int status = 0; 1360 1361 if (!drv_data) ··· 1375 free_dma(drv_data->dma_channel); 1376 } 1377 1378 + if (drv_data->irq_requested) { 1379 + free_irq(drv_data->spi_irq, drv_data); 1380 + drv_data->irq_requested = 0; 1381 + } 1382 + 1383 /* Disconnect from the SPI framework */ 1384 spi_unregister_master(drv_data->master); 1385 ··· 1389 #ifdef CONFIG_PM 1390 static int bfin_spi_suspend(struct platform_device *pdev, pm_message_t state) 1391 { 1392 + struct bfin_spi_master_data *drv_data = platform_get_drvdata(pdev); 1393 int status = 0; 1394 1395 status = bfin_spi_stop_queue(drv_data); 1396 if (status != 0) 1397 return status; 1398 1399 + drv_data->ctrl_reg = read_CTRL(drv_data); 1400 + drv_data->flag_reg = read_FLAG(drv_data); 1401 + 1402 + /* 1403 + * reset SPI_CTL and SPI_FLG registers 1404 + */ 1405 + write_CTRL(drv_data, BIT_CTL_CPHA | BIT_CTL_MASTER); 1406 + write_FLAG(drv_data, 0xFF00); 1407 1408 return 0; 1409 } 1410 1411 static int bfin_spi_resume(struct platform_device *pdev) 1412 { 1413 + struct bfin_spi_master_data *drv_data = platform_get_drvdata(pdev); 1414 int status = 0; 1415 1416 + write_CTRL(drv_data, drv_data->ctrl_reg); 1417 + write_FLAG(drv_data, drv_data->flag_reg); 1418 1419 /* Start the queue running */ 1420 status = bfin_spi_start_queue(drv_data); ··· 1439 { 1440 return platform_driver_probe(&bfin_spi_driver, 
bfin_spi_probe); 1441 } 1442 + subsys_initcall(bfin_spi_init); 1443 1444 static void __exit bfin_spi_exit(void) 1445 {
+1 -1
drivers/staging/tm6000/Kconfig
··· 1 config VIDEO_TM6000 2 tristate "TV Master TM5600/6000/6010 driver" 3 - depends on VIDEO_DEV && I2C && INPUT && USB && EXPERIMENTAL 4 select VIDEO_TUNER 5 select MEDIA_TUNER_XC2028 6 select MEDIA_TUNER_XC5000
··· 1 config VIDEO_TM6000 2 tristate "TV Master TM5600/6000/6010 driver" 3 + depends on VIDEO_DEV && I2C && INPUT && IR_CORE && USB && EXPERIMENTAL 4 select VIDEO_TUNER 5 select MEDIA_TUNER_XC2028 6 select MEDIA_TUNER_XC5000
+39 -22
drivers/staging/tm6000/tm6000-input.c
··· 46 } 47 48 struct tm6000_ir_poll_result { 49 - u8 rc_data[4]; 50 }; 51 52 struct tm6000_IR { ··· 60 int polling; 61 struct delayed_work work; 62 u8 wait:1; 63 struct urb *int_urb; 64 u8 *urb_data; 65 - u8 key:1; 66 67 int (*get_key) (struct tm6000_IR *, struct tm6000_ir_poll_result *); 68 ··· 122 123 if (urb->status != 0) 124 printk(KERN_INFO "not ready\n"); 125 - else if (urb->actual_length > 0) 126 memcpy(ir->urb_data, urb->transfer_buffer, urb->actual_length); 127 128 - dprintk("data %02x %02x %02x %02x\n", ir->urb_data[0], 129 - ir->urb_data[1], ir->urb_data[2], ir->urb_data[3]); 130 131 - ir->key = 1; 132 133 rc = usb_submit_urb(urb, GFP_ATOMIC); 134 } ··· 141 int rc; 142 u8 buf[2]; 143 144 - if (ir->wait && !&dev->int_in) { 145 - poll_result->rc_data[0] = 0xff; 146 return 0; 147 - } 148 149 if (&dev->int_in) { 150 - poll_result->rc_data[0] = ir->urb_data[0]; 151 - poll_result->rc_data[1] = ir->urb_data[1]; 152 } else { 153 tm6000_set_reg(dev, REQ_04_EN_DISABLE_MCU_INT, 2, 0); 154 msleep(10); 155 tm6000_set_reg(dev, REQ_04_EN_DISABLE_MCU_INT, 2, 1); 156 msleep(10); 157 158 - rc = tm6000_read_write_usb(dev, USB_DIR_IN | USB_TYPE_VENDOR | 159 - USB_RECIP_DEVICE, REQ_02_GET_IR_CODE, 0, 0, buf, 1); 160 161 - msleep(10); 162 163 - dprintk("read data=%02x\n", buf[0]); 164 - if (rc < 0) 165 - return rc; 166 167 - poll_result->rc_data[0] = buf[0]; 168 } 169 return 0; 170 } ··· 198 return; 199 } 200 201 - dprintk("ir->get_key result data=%02x %02x\n", 202 - poll_result.rc_data[0], poll_result.rc_data[1]); 203 204 - if (poll_result.rc_data[0] != 0xff && ir->key == 1) { 205 ir_input_keydown(ir->input->input_dev, &ir->ir, 206 - poll_result.rc_data[0] | poll_result.rc_data[1] << 8); 207 208 ir_input_nokey(ir->input->input_dev, &ir->ir); 209 ir->key = 0;
··· 46 } 47 48 struct tm6000_ir_poll_result { 49 + u16 rc_data; 50 }; 51 52 struct tm6000_IR { ··· 60 int polling; 61 struct delayed_work work; 62 u8 wait:1; 63 + u8 key:1; 64 struct urb *int_urb; 65 u8 *urb_data; 66 67 int (*get_key) (struct tm6000_IR *, struct tm6000_ir_poll_result *); 68 ··· 122 123 if (urb->status != 0) 124 printk(KERN_INFO "not ready\n"); 125 + else if (urb->actual_length > 0) { 126 memcpy(ir->urb_data, urb->transfer_buffer, urb->actual_length); 127 128 + dprintk("data %02x %02x %02x %02x\n", ir->urb_data[0], 129 + ir->urb_data[1], ir->urb_data[2], ir->urb_data[3]); 130 131 + ir->key = 1; 132 + } 133 134 rc = usb_submit_urb(urb, GFP_ATOMIC); 135 } ··· 140 int rc; 141 u8 buf[2]; 142 143 + if (ir->wait && !&dev->int_in) 144 return 0; 145 146 if (&dev->int_in) { 147 + if (ir->ir.ir_type == IR_TYPE_RC5) 148 + poll_result->rc_data = ir->urb_data[0]; 149 + else 150 + poll_result->rc_data = ir->urb_data[0] | ir->urb_data[1] << 8; 151 } else { 152 tm6000_set_reg(dev, REQ_04_EN_DISABLE_MCU_INT, 2, 0); 153 msleep(10); 154 tm6000_set_reg(dev, REQ_04_EN_DISABLE_MCU_INT, 2, 1); 155 msleep(10); 156 157 + if (ir->ir.ir_type == IR_TYPE_RC5) { 158 + rc = tm6000_read_write_usb(dev, USB_DIR_IN | 159 + USB_TYPE_VENDOR | USB_RECIP_DEVICE, 160 + REQ_02_GET_IR_CODE, 0, 0, buf, 1); 161 162 + msleep(10); 163 164 + dprintk("read data=%02x\n", buf[0]); 165 + if (rc < 0) 166 + return rc; 167 168 + poll_result->rc_data = buf[0]; 169 + } else { 170 + rc = tm6000_read_write_usb(dev, USB_DIR_IN | 171 + USB_TYPE_VENDOR | USB_RECIP_DEVICE, 172 + REQ_02_GET_IR_CODE, 0, 0, buf, 2); 173 + 174 + msleep(10); 175 + 176 + dprintk("read data=%04x\n", buf[0] | buf[1] << 8); 177 + if (rc < 0) 178 + return rc; 179 + 180 + poll_result->rc_data = buf[0] | buf[1] << 8; 181 + } 182 + if ((poll_result->rc_data & 0x00ff) != 0xff) 183 + ir->key = 1; 184 } 185 return 0; 186 } ··· 180 return; 181 } 182 183 + dprintk("ir->get_key result data=%04x\n", poll_result.rc_data); 184 185 + if (ir->key) { 
186 ir_input_keydown(ir->input->input_dev, &ir->ir, 187 + (u32)poll_result.rc_data); 188 189 ir_input_nokey(ir->input->input_dev, &ir->ir); 190 ir->key = 0;
-4
fs/binfmt_aout.c
··· 134 if (!dump_write(file, dump_start, dump_size)) 135 goto end_coredump; 136 } 137 - /* Finally dump the task struct. Not be used by gdb, but could be useful */ 138 - set_fs(KERNEL_DS); 139 - if (!dump_write(file, current, sizeof(*current))) 140 - goto end_coredump; 141 end_coredump: 142 set_fs(fs); 143 return has_dumped;
··· 134 if (!dump_write(file, dump_start, dump_size)) 135 goto end_coredump; 136 } 137 end_coredump: 138 set_fs(fs); 139 return has_dumped;
+18 -13
fs/ceph/caps.c
··· 2283 { 2284 struct ceph_inode_info *ci = ceph_inode(inode); 2285 int mds = session->s_mds; 2286 - int seq = le32_to_cpu(grant->seq); 2287 int newcaps = le32_to_cpu(grant->caps); 2288 int issued, implemented, used, wanted, dirty; 2289 u64 size = le64_to_cpu(grant->size); ··· 2296 int revoked_rdcache = 0; 2297 int queue_invalidate = 0; 2298 2299 - dout("handle_cap_grant inode %p cap %p mds%d seq %d %s\n", 2300 - inode, cap, mds, seq, ceph_cap_string(newcaps)); 2301 dout(" size %llu max_size %llu, i_size %llu\n", size, max_size, 2302 inode->i_size); 2303 ··· 2393 } 2394 2395 cap->seq = seq; 2396 2397 /* file layout may have changed */ 2398 ci->i_layout = grant->layout; ··· 2776 if (op == CEPH_CAP_OP_IMPORT) 2777 __queue_cap_release(session, vino.ino, cap_id, 2778 mseq, seq); 2779 - 2780 - /* 2781 - * send any full release message to try to move things 2782 - * along for the mds (who clearly thinks we still have this 2783 - * cap). 2784 - */ 2785 - ceph_add_cap_releases(mdsc, session); 2786 - ceph_send_cap_releases(mdsc, session); 2787 - goto done; 2788 } 2789 2790 /* these will work even if we don't have a cap yet */ ··· 2804 dout(" no cap on %p ino %llx.%llx from mds%d\n", 2805 inode, ceph_ino(inode), ceph_snap(inode), mds); 2806 spin_unlock(&inode->i_lock); 2807 - goto done; 2808 } 2809 2810 /* note that each of these drops i_lock for us */ ··· 2827 pr_err("ceph_handle_caps: unknown cap op %d %s\n", op, 2828 ceph_cap_op_name(op)); 2829 } 2830 2831 done: 2832 mutex_unlock(&session->s_mutex);
··· 2283 { 2284 struct ceph_inode_info *ci = ceph_inode(inode); 2285 int mds = session->s_mds; 2286 + unsigned seq = le32_to_cpu(grant->seq); 2287 + unsigned issue_seq = le32_to_cpu(grant->issue_seq); 2288 int newcaps = le32_to_cpu(grant->caps); 2289 int issued, implemented, used, wanted, dirty; 2290 u64 size = le64_to_cpu(grant->size); ··· 2295 int revoked_rdcache = 0; 2296 int queue_invalidate = 0; 2297 2298 + dout("handle_cap_grant inode %p cap %p mds%d seq %u/%u %s\n", 2299 + inode, cap, mds, seq, issue_seq, ceph_cap_string(newcaps)); 2300 dout(" size %llu max_size %llu, i_size %llu\n", size, max_size, 2301 inode->i_size); 2302 ··· 2392 } 2393 2394 cap->seq = seq; 2395 + cap->issue_seq = issue_seq; 2396 2397 /* file layout may have changed */ 2398 ci->i_layout = grant->layout; ··· 2774 if (op == CEPH_CAP_OP_IMPORT) 2775 __queue_cap_release(session, vino.ino, cap_id, 2776 mseq, seq); 2777 + goto flush_cap_releases; 2778 } 2779 2780 /* these will work even if we don't have a cap yet */ ··· 2810 dout(" no cap on %p ino %llx.%llx from mds%d\n", 2811 inode, ceph_ino(inode), ceph_snap(inode), mds); 2812 spin_unlock(&inode->i_lock); 2813 + goto flush_cap_releases; 2814 } 2815 2816 /* note that each of these drops i_lock for us */ ··· 2833 pr_err("ceph_handle_caps: unknown cap op %d %s\n", op, 2834 ceph_cap_op_name(op)); 2835 } 2836 + 2837 + goto done; 2838 + 2839 + flush_cap_releases: 2840 + /* 2841 + * send any full release message to try to move things 2842 + * along for the mds (who clearly thinks we still have this 2843 + * cap). 2844 + */ 2845 + ceph_add_cap_releases(mdsc, session); 2846 + ceph_send_cap_releases(mdsc, session); 2847 2848 done: 2849 mutex_unlock(&session->s_mutex);
+13 -8
fs/ceph/export.c
··· 42 static int ceph_encode_fh(struct dentry *dentry, u32 *rawfh, int *max_len, 43 int connectable) 44 { 45 struct ceph_nfs_fh *fh = (void *)rawfh; 46 struct ceph_nfs_confh *cfh = (void *)rawfh; 47 struct dentry *parent = dentry->d_parent; 48 struct inode *inode = dentry->d_inode; 49 - int type; 50 51 /* don't re-export snaps */ 52 if (ceph_snap(inode) != CEPH_NOSNAP) 53 return -EINVAL; 54 55 - if (*max_len >= sizeof(*cfh)) { 56 dout("encode_fh %p connectable\n", dentry); 57 cfh->ino = ceph_ino(dentry->d_inode); 58 cfh->parent_ino = ceph_ino(parent->d_inode); 59 cfh->parent_name_hash = parent->d_name.hash; 60 - *max_len = sizeof(*cfh); 61 type = 2; 62 - } else if (*max_len > sizeof(*fh)) { 63 - if (connectable) 64 - return -ENOSPC; 65 dout("encode_fh %p\n", dentry); 66 fh->ino = ceph_ino(dentry->d_inode); 67 - *max_len = sizeof(*fh); 68 type = 1; 69 } else { 70 - return -ENOSPC; 71 } 72 return type; 73 }
··· 42 static int ceph_encode_fh(struct dentry *dentry, u32 *rawfh, int *max_len, 43 int connectable) 44 { 45 + int type; 46 struct ceph_nfs_fh *fh = (void *)rawfh; 47 struct ceph_nfs_confh *cfh = (void *)rawfh; 48 struct dentry *parent = dentry->d_parent; 49 struct inode *inode = dentry->d_inode; 50 + int connected_handle_length = sizeof(*cfh)/4; 51 + int handle_length = sizeof(*fh)/4; 52 53 /* don't re-export snaps */ 54 if (ceph_snap(inode) != CEPH_NOSNAP) 55 return -EINVAL; 56 57 + if (*max_len >= connected_handle_length) { 58 dout("encode_fh %p connectable\n", dentry); 59 cfh->ino = ceph_ino(dentry->d_inode); 60 cfh->parent_ino = ceph_ino(parent->d_inode); 61 cfh->parent_name_hash = parent->d_name.hash; 62 + *max_len = connected_handle_length; 63 type = 2; 64 + } else if (*max_len >= handle_length) { 65 + if (connectable) { 66 + *max_len = connected_handle_length; 67 + return 255; 68 + } 69 dout("encode_fh %p\n", dentry); 70 fh->ino = ceph_ino(dentry->d_inode); 71 + *max_len = handle_length; 72 type = 1; 73 } else { 74 + *max_len = handle_length; 75 + return 255; 76 } 77 return type; 78 }
+1 -1
fs/ceph/file.c
··· 697 * start_request so that a tid has been assigned. 698 */ 699 spin_lock(&ci->i_unsafe_lock); 700 - list_add(&ci->i_unsafe_writes, &req->r_unsafe_item); 701 spin_unlock(&ci->i_unsafe_lock); 702 ceph_get_cap_refs(ci, CEPH_CAP_FILE_WR); 703 }
··· 697 * start_request so that a tid has been assigned. 698 */ 699 spin_lock(&ci->i_unsafe_lock); 700 + list_add(&req->r_unsafe_item, &ci->i_unsafe_writes); 701 spin_unlock(&ci->i_unsafe_lock); 702 ceph_get_cap_refs(ci, CEPH_CAP_FILE_WR); 703 }
+1 -1
fs/ceph/osd_client.c
··· 549 */ 550 static void __cancel_request(struct ceph_osd_request *req) 551 { 552 - if (req->r_sent) { 553 ceph_con_revoke(&req->r_osd->o_con, req->r_request); 554 req->r_sent = 0; 555 }
··· 549 */ 550 static void __cancel_request(struct ceph_osd_request *req) 551 { 552 + if (req->r_sent && req->r_osd) { 553 ceph_con_revoke(&req->r_osd->o_con, req->r_request); 554 req->r_sent = 0; 555 }
+40
fs/exec.c
··· 2014 fail: 2015 return; 2016 }
··· 2014 fail: 2015 return; 2016 } 2017 + 2018 + /* 2019 + * Core dumping helper functions. These are the only things you should 2020 + * do on a core-file: use only these functions to write out all the 2021 + * necessary info. 2022 + */ 2023 + int dump_write(struct file *file, const void *addr, int nr) 2024 + { 2025 + return access_ok(VERIFY_READ, addr, nr) && file->f_op->write(file, addr, nr, &file->f_pos) == nr; 2026 + } 2027 + EXPORT_SYMBOL(dump_write); 2028 + 2029 + int dump_seek(struct file *file, loff_t off) 2030 + { 2031 + int ret = 1; 2032 + 2033 + if (file->f_op->llseek && file->f_op->llseek != no_llseek) { 2034 + if (file->f_op->llseek(file, off, SEEK_CUR) < 0) 2035 + return 0; 2036 + } else { 2037 + char *buf = (char *)get_zeroed_page(GFP_KERNEL); 2038 + 2039 + if (!buf) 2040 + return 0; 2041 + while (off > 0) { 2042 + unsigned long n = off; 2043 + 2044 + if (n > PAGE_SIZE) 2045 + n = PAGE_SIZE; 2046 + if (!dump_write(file, buf, n)) { 2047 + ret = 0; 2048 + break; 2049 + } 2050 + off -= n; 2051 + } 2052 + free_page((unsigned long)buf); 2053 + } 2054 + return ret; 2055 + } 2056 + EXPORT_SYMBOL(dump_seek);
+7 -1
fs/exofs/inode.c
··· 54 unsigned nr_pages; 55 unsigned long length; 56 loff_t pg_first; /* keep 64bit also in 32-arches */ 57 }; 58 59 static void _pcol_init(struct page_collect *pcol, unsigned expected_pages, ··· 74 pcol->nr_pages = 0; 75 pcol->length = 0; 76 pcol->pg_first = -1; 77 } 78 79 static void _pcol_reset(struct page_collect *pcol) ··· 351 if (PageError(page)) 352 ClearPageError(page); 353 354 - unlock_page(page); 355 EXOFS_DBGMSG("readpage_strip(0x%lx, 0x%lx) empty page," 356 " splitting\n", inode->i_ino, page->index); 357 ··· 433 /* readpage_strip might call read_exec(,is_sync==false) at several 434 * places but not if we have a single page. 435 */ 436 ret = readpage_strip(&pcol, page); 437 if (ret) { 438 EXOFS_ERR("_readpage => %d\n", ret);
··· 54 unsigned nr_pages; 55 unsigned long length; 56 loff_t pg_first; /* keep 64bit also in 32-arches */ 57 + bool read_4_write; /* This means two things: that the read is sync 58 + * And the pages should not be unlocked. 59 + */ 60 }; 61 62 static void _pcol_init(struct page_collect *pcol, unsigned expected_pages, ··· 71 pcol->nr_pages = 0; 72 pcol->length = 0; 73 pcol->pg_first = -1; 74 + pcol->read_4_write = false; 75 } 76 77 static void _pcol_reset(struct page_collect *pcol) ··· 347 if (PageError(page)) 348 ClearPageError(page); 349 350 + if (!pcol->read_4_write) 351 + unlock_page(page); 352 EXOFS_DBGMSG("readpage_strip(0x%lx, 0x%lx) empty page," 353 " splitting\n", inode->i_ino, page->index); 354 ··· 428 /* readpage_strip might call read_exec(,is_sync==false) at several 429 * places but not if we have a single page. 430 */ 431 + pcol.read_4_write = is_sync; 432 ret = readpage_strip(&pcol, page); 433 if (ret) { 434 EXOFS_ERR("_readpage => %d\n", ret);
-2
fs/nfsd/nfsfh.h
··· 196 static inline void 197 fh_unlock(struct svc_fh *fhp) 198 { 199 - BUG_ON(!fhp->fh_dentry); 200 - 201 if (fhp->fh_locked) { 202 fill_post_wcc(fhp); 203 mutex_unlock(&fhp->fh_dentry->d_inode->i_mutex);
··· 196 static inline void 197 fh_unlock(struct svc_fh *fhp) 198 { 199 if (fhp->fh_locked) { 200 fill_post_wcc(fhp); 201 mutex_unlock(&fhp->fh_dentry->d_inode->i_mutex);
+1 -1
fs/notify/Kconfig
··· 3 4 source "fs/notify/dnotify/Kconfig" 5 source "fs/notify/inotify/Kconfig" 6 - source "fs/notify/fanotify/Kconfig"
··· 3 4 source "fs/notify/dnotify/Kconfig" 5 source "fs/notify/inotify/Kconfig" 6 + #source "fs/notify/fanotify/Kconfig"
+14 -5
fs/xfs/linux-2.6/xfs_sync.c
··· 668 xfs_perag_put(pag); 669 } 670 671 - void 672 - __xfs_inode_clear_reclaim_tag( 673 - xfs_mount_t *mp, 674 xfs_perag_t *pag, 675 xfs_inode_t *ip) 676 { 677 - radix_tree_tag_clear(&pag->pag_ici_root, 678 - XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG); 679 pag->pag_ici_reclaimable--; 680 if (!pag->pag_ici_reclaimable) { 681 /* clear the reclaim tag from the perag radix tree */ ··· 684 trace_xfs_perag_clear_reclaim(ip->i_mount, pag->pag_agno, 685 -1, _RET_IP_); 686 } 687 } 688 689 /* ··· 846 if (!radix_tree_delete(&pag->pag_ici_root, 847 XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino))) 848 ASSERT(0); 849 write_unlock(&pag->pag_ici_lock); 850 851 /*
··· 668 xfs_perag_put(pag); 669 } 670 671 + STATIC void 672 + __xfs_inode_clear_reclaim( 673 xfs_perag_t *pag, 674 xfs_inode_t *ip) 675 { 676 pag->pag_ici_reclaimable--; 677 if (!pag->pag_ici_reclaimable) { 678 /* clear the reclaim tag from the perag radix tree */ ··· 687 trace_xfs_perag_clear_reclaim(ip->i_mount, pag->pag_agno, 688 -1, _RET_IP_); 689 } 690 + } 691 + 692 + void 693 + __xfs_inode_clear_reclaim_tag( 694 + xfs_mount_t *mp, 695 + xfs_perag_t *pag, 696 + xfs_inode_t *ip) 697 + { 698 + radix_tree_tag_clear(&pag->pag_ici_root, 699 + XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG); 700 + __xfs_inode_clear_reclaim(pag, ip); 701 } 702 703 /* ··· 838 if (!radix_tree_delete(&pag->pag_ici_root, 839 XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino))) 840 ASSERT(0); 841 + __xfs_inode_clear_reclaim(pag, ip); 842 write_unlock(&pag->pag_ici_lock); 843 844 /*
+3 -1
include/drm/ttm/ttm_bo_api.h
··· 246 247 atomic_t reserved; 248 249 - 250 /** 251 * Members protected by the bo::lock 252 */ 253 254 void *sync_obj_arg;
··· 246 247 atomic_t reserved; 248 249 /** 250 * Members protected by the bo::lock 251 + * In addition, setting sync_obj to anything else 252 + * than NULL requires bo::reserved to be held. This allows for 253 + * checking NULL while reserved but not holding bo::lock. 254 */ 255 256 void *sync_obj_arg;
-1
include/linux/Kbuild
··· 118 header-y += ext2_fs.h 119 header-y += fadvise.h 120 header-y += falloc.h 121 - header-y += fanotify.h 122 header-y += fb.h 123 header-y += fcntl.h 124 header-y += fd.h
··· 118 header-y += ext2_fs.h 119 header-y += fadvise.h 120 header-y += falloc.h 121 header-y += fb.h 122 header-y += fcntl.h 123 header-y += fd.h
+2 -32
include/linux/coredump.h
··· 9 * These are the only things you should do on a core-file: use only these 10 * functions to write out all the necessary info. 11 */ 12 - static inline int dump_write(struct file *file, const void *addr, int nr) 13 - { 14 - return file->f_op->write(file, addr, nr, &file->f_pos) == nr; 15 - } 16 - 17 - static inline int dump_seek(struct file *file, loff_t off) 18 - { 19 - int ret = 1; 20 - 21 - if (file->f_op->llseek && file->f_op->llseek != no_llseek) { 22 - if (file->f_op->llseek(file, off, SEEK_CUR) < 0) 23 - return 0; 24 - } else { 25 - char *buf = (char *)get_zeroed_page(GFP_KERNEL); 26 - 27 - if (!buf) 28 - return 0; 29 - while (off > 0) { 30 - unsigned long n = off; 31 - 32 - if (n > PAGE_SIZE) 33 - n = PAGE_SIZE; 34 - if (!dump_write(file, buf, n)) { 35 - ret = 0; 36 - break; 37 - } 38 - off -= n; 39 - } 40 - free_page((unsigned long)buf); 41 - } 42 - return ret; 43 - } 44 45 #endif /* _LINUX_COREDUMP_H */
··· 9 * These are the only things you should do on a core-file: use only these 10 * functions to write out all the necessary info. 11 */ 12 + extern int dump_write(struct file *file, const void *addr, int nr); 13 + extern int dump_seek(struct file *file, loff_t off); 14 15 #endif /* _LINUX_COREDUMP_H */
+1
include/linux/elevator.h
··· 93 struct elevator_type *elevator_type; 94 struct mutex sysfs_lock; 95 struct hlist_head *hash; 96 }; 97 98 /*
··· 93 struct elevator_type *elevator_type; 94 struct mutex sysfs_lock; 95 struct hlist_head *hash; 96 + unsigned int registered:1; 97 }; 98 99 /*
+14 -1
include/linux/types.h
··· 121 typedef __s64 int64_t; 122 #endif 123 124 - /* this is a special 64bit data type that is 8-byte aligned */ 125 #define aligned_u64 __u64 __attribute__((aligned(8))) 126 #define aligned_be64 __be64 __attribute__((aligned(8))) 127 #define aligned_le64 __le64 __attribute__((aligned(8))) ··· 185 186 typedef __u16 __bitwise __sum16; 187 typedef __u32 __bitwise __wsum; 188 189 #ifdef __KERNEL__ 190 typedef unsigned __bitwise__ gfp_t;
··· 121 typedef __s64 int64_t; 122 #endif 123 124 + /* 125 + * aligned_u64 should be used in defining kernel<->userspace ABIs to avoid 126 + * common 32/64-bit compat problems. 127 + * 64-bit values align to 4-byte boundaries on x86_32 (and possibly other 128 + * architectures) and to 8-byte boundaries on 64-bit architectures. The new 129 + * aligned_u64 type enforces 8-byte alignment so that structs containing 130 + * aligned_u64 values have the same alignment on 32-bit and 64-bit architectures. 131 + * No conversions are necessary between 32-bit user-space and a 64-bit kernel. 132 + */ 133 #define aligned_u64 __u64 __attribute__((aligned(8))) 134 #define aligned_be64 __be64 __attribute__((aligned(8))) 135 #define aligned_le64 __le64 __attribute__((aligned(8))) ··· 177 178 typedef __u16 __bitwise __sum16; 179 typedef __u32 __bitwise __wsum; 180 + 181 + /* this is a special 64bit data type that is 8-byte aligned */ 182 + #define __aligned_u64 __u64 __attribute__((aligned(8))) 183 + #define __aligned_be64 __be64 __attribute__((aligned(8))) 184 + #define __aligned_le64 __le64 __attribute__((aligned(8))) 185 186 #ifdef __KERNEL__ 187 typedef unsigned __bitwise__ gfp_t;
+1
include/media/videobuf-dma-sg.h
··· 48 49 /* for userland buffer */ 50 int offset; 51 struct page **pages; 52 53 /* for kernel buffers */
··· 48 49 /* for userland buffer */ 50 int offset; 51 + size_t size; 52 struct page **pages; 53 54 /* for kernel buffers */
+18
include/net/bluetooth/bluetooth.h
··· 161 { 162 struct sk_buff *skb; 163 164 if ((skb = sock_alloc_send_skb(sk, len + BT_SKB_RESERVE, nb, err))) { 165 skb_reserve(skb, BT_SKB_RESERVE); 166 bt_cb(skb)->incoming = 0; 167 } 168 169 return skb; 170 } 171 172 int bt_err(__u16 code);
··· 161 { 162 struct sk_buff *skb; 163 164 + release_sock(sk); 165 if ((skb = sock_alloc_send_skb(sk, len + BT_SKB_RESERVE, nb, err))) { 166 skb_reserve(skb, BT_SKB_RESERVE); 167 bt_cb(skb)->incoming = 0; 168 } 169 + lock_sock(sk); 170 + 171 + if (!skb && *err) 172 + return NULL; 173 + 174 + *err = sock_error(sk); 175 + if (*err) 176 + goto out; 177 + 178 + if (sk->sk_shutdown) { 179 + *err = -ECONNRESET; 180 + goto out; 181 + } 182 183 return skb; 184 + 185 + out: 186 + kfree_skb(skb); 187 + return NULL; 188 } 189 190 int bt_err(__u16 code);
+11 -2
kernel/hrtimer.c
··· 931 remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base) 932 { 933 if (hrtimer_is_queued(timer)) { 934 int reprogram; 935 936 /* ··· 945 debug_deactivate(timer); 946 timer_stats_hrtimer_clear_start_info(timer); 947 reprogram = base->cpu_base == &__get_cpu_var(hrtimer_bases); 948 - __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 949 - reprogram); 950 return 1; 951 } 952 return 0; ··· 1237 BUG_ON(timer->state != HRTIMER_STATE_CALLBACK); 1238 enqueue_hrtimer(timer, base); 1239 } 1240 timer->state &= ~HRTIMER_STATE_CALLBACK; 1241 } 1242
··· 931 remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base) 932 { 933 if (hrtimer_is_queued(timer)) { 934 + unsigned long state; 935 int reprogram; 936 937 /* ··· 944 debug_deactivate(timer); 945 timer_stats_hrtimer_clear_start_info(timer); 946 reprogram = base->cpu_base == &__get_cpu_var(hrtimer_bases); 947 + /* 948 + * We must preserve the CALLBACK state flag here, 949 + * otherwise we could move the timer base in 950 + * switch_hrtimer_base. 951 + */ 952 + state = timer->state & HRTIMER_STATE_CALLBACK; 953 + __remove_hrtimer(timer, base, state, reprogram); 954 return 1; 955 } 956 return 0; ··· 1231 BUG_ON(timer->state != HRTIMER_STATE_CALLBACK); 1232 enqueue_hrtimer(timer, base); 1233 } 1234 + 1235 + WARN_ON_ONCE(!(timer->state & HRTIMER_STATE_CALLBACK)); 1236 + 1237 timer->state &= ~HRTIMER_STATE_CALLBACK; 1238 } 1239
+1 -3
kernel/perf_event.c
··· 2202 static int perf_event_period(struct perf_event *event, u64 __user *arg) 2203 { 2204 struct perf_event_context *ctx = event->ctx; 2205 - unsigned long size; 2206 int ret = 0; 2207 u64 value; 2208 2209 if (!event->attr.sample_period) 2210 return -EINVAL; 2211 2212 - size = copy_from_user(&value, arg, sizeof(value)); 2213 - if (size != sizeof(value)) 2214 return -EFAULT; 2215 2216 if (!value)
··· 2202 static int perf_event_period(struct perf_event *event, u64 __user *arg) 2203 { 2204 struct perf_event_context *ctx = event->ctx; 2205 int ret = 0; 2206 u64 value; 2207 2208 if (!event->attr.sample_period) 2209 return -EINVAL; 2210 2211 + if (copy_from_user(&value, arg, sizeof(value))) 2212 return -EFAULT; 2213 2214 if (!value)
+8
kernel/signal.c
··· 2215 #ifdef __ARCH_SI_TRAPNO 2216 err |= __put_user(from->si_trapno, &to->si_trapno); 2217 #endif 2218 break; 2219 case __SI_CHLD: 2220 err |= __put_user(from->si_pid, &to->si_pid);
··· 2215 #ifdef __ARCH_SI_TRAPNO 2216 err |= __put_user(from->si_trapno, &to->si_trapno); 2217 #endif 2218 + #ifdef BUS_MCEERR_AO 2219 + /* 2220 + * Other callers might not initialize the si_lsb field, 2221 + * so check explicitly for the right codes here. 2222 + */ 2223 + if (from->si_code == BUS_MCEERR_AR || from->si_code == BUS_MCEERR_AO) 2224 + err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb); 2225 + #endif 2226 break; 2227 case __SI_CHLD: 2228 err |= __put_user(from->si_pid, &to->si_pid);
+1 -1
kernel/sysctl.c
··· 2485 kbuf[left] = 0; 2486 } 2487 2488 - for (; left && vleft--; i++, min++, max++, first=0) { 2489 unsigned long val; 2490 2491 if (write) {
··· 2485 kbuf[left] = 0; 2486 } 2487 2488 + for (; left && vleft--; i++, first = 0) { 2489 unsigned long val; 2490 2491 if (write) {
-9
kernel/sysctl_check.c
··· 143 if (!table->maxlen) 144 set_fail(&fail, table, "No maxlen"); 145 } 146 - if ((table->proc_handler == proc_doulongvec_minmax) || 147 - (table->proc_handler == proc_doulongvec_ms_jiffies_minmax)) { 148 - if (table->maxlen > sizeof (unsigned long)) { 149 - if (!table->extra1) 150 - set_fail(&fail, table, "No min"); 151 - if (!table->extra2) 152 - set_fail(&fail, table, "No max"); 153 - } 154 - } 155 #ifdef CONFIG_PROC_SYSCTL 156 if (table->procname && !table->proc_handler) 157 set_fail(&fail, table, "No proc_handler");
··· 143 if (!table->maxlen) 144 set_fail(&fail, table, "No maxlen"); 145 } 146 #ifdef CONFIG_PROC_SYSCTL 147 if (table->procname && !table->proc_handler) 148 set_fail(&fail, table, "No proc_handler");
+1 -1
kernel/trace/ring_buffer.c
··· 405 #define BUF_MAX_DATA_SIZE (BUF_PAGE_SIZE - (sizeof(u32) * 2)) 406 407 /* Max number of timestamps that can fit on a page */ 408 - #define RB_TIMESTAMPS_PER_PAGE (BUF_PAGE_SIZE / RB_LEN_TIME_STAMP) 409 410 int ring_buffer_print_page_header(struct trace_seq *s) 411 {
··· 405 #define BUF_MAX_DATA_SIZE (BUF_PAGE_SIZE - (sizeof(u32) * 2)) 406 407 /* Max number of timestamps that can fit on a page */ 408 + #define RB_TIMESTAMPS_PER_PAGE (BUF_PAGE_SIZE / RB_LEN_TIME_EXTEND) 409 410 int ring_buffer_print_page_header(struct trace_seq *s) 411 {
+7 -3
mm/memcontrol.c
··· 3587 3588 static void mem_cgroup_threshold(struct mem_cgroup *memcg) 3589 { 3590 - __mem_cgroup_threshold(memcg, false); 3591 - if (do_swap_account) 3592 - __mem_cgroup_threshold(memcg, true); 3593 } 3594 3595 static int compare_thresholds(const void *a, const void *b)
··· 3587 3588 static void mem_cgroup_threshold(struct mem_cgroup *memcg) 3589 { 3590 + while (memcg) { 3591 + __mem_cgroup_threshold(memcg, false); 3592 + if (do_swap_account) 3593 + __mem_cgroup_threshold(memcg, true); 3594 + 3595 + memcg = parent_mem_cgroup(memcg); 3596 + } 3597 } 3598 3599 static int compare_thresholds(const void *a, const void *b)
+6 -6
mm/memory-failure.c
··· 183 * signal. 184 */ 185 static int kill_proc_ao(struct task_struct *t, unsigned long addr, int trapno, 186 - unsigned long pfn) 187 { 188 struct siginfo si; 189 int ret; ··· 198 #ifdef __ARCH_SI_TRAPNO 199 si.si_trapno = trapno; 200 #endif 201 - si.si_addr_lsb = PAGE_SHIFT; 202 /* 203 * Don't use force here, it's convenient if the signal 204 * can be temporarily blocked. ··· 235 int nr; 236 do { 237 nr = shrink_slab(1000, GFP_KERNEL, 1000); 238 - if (page_count(p) == 0) 239 break; 240 } while (nr > 10); 241 } ··· 327 * wrong earlier. 328 */ 329 static void kill_procs_ao(struct list_head *to_kill, int doit, int trapno, 330 - int fail, unsigned long pfn) 331 { 332 struct to_kill *tk, *next; 333 ··· 352 * process anyways. 353 */ 354 else if (kill_proc_ao(tk->tsk, tk->addr, trapno, 355 - pfn) < 0) 356 printk(KERN_ERR 357 "MCE %#lx: Cannot send advisory machine check signal to %s:%d\n", 358 pfn, tk->tsk->comm, tk->tsk->pid); ··· 928 * any accesses to the poisoned memory. 929 */ 930 kill_procs_ao(&tokill, !!PageDirty(hpage), trapno, 931 - ret != SWAP_SUCCESS, pfn); 932 933 return ret; 934 }
··· 183 * signal. 184 */ 185 static int kill_proc_ao(struct task_struct *t, unsigned long addr, int trapno, 186 + unsigned long pfn, struct page *page) 187 { 188 struct siginfo si; 189 int ret; ··· 198 #ifdef __ARCH_SI_TRAPNO 199 si.si_trapno = trapno; 200 #endif 201 + si.si_addr_lsb = compound_order(compound_head(page)) + PAGE_SHIFT; 202 /* 203 * Don't use force here, it's convenient if the signal 204 * can be temporarily blocked. ··· 235 int nr; 236 do { 237 nr = shrink_slab(1000, GFP_KERNEL, 1000); 238 + if (page_count(p) == 1) 239 break; 240 } while (nr > 10); 241 } ··· 327 * wrong earlier. 328 */ 329 static void kill_procs_ao(struct list_head *to_kill, int doit, int trapno, 330 + int fail, struct page *page, unsigned long pfn) 331 { 332 struct to_kill *tk, *next; 333 ··· 352 * process anyways. 353 */ 354 else if (kill_proc_ao(tk->tsk, tk->addr, trapno, 355 + pfn, page) < 0) 356 printk(KERN_ERR 357 "MCE %#lx: Cannot send advisory machine check signal to %s:%d\n", 358 pfn, tk->tsk->comm, tk->tsk->pid); ··· 928 * any accesses to the poisoned memory. 929 */ 930 kill_procs_ao(&tokill, !!PageDirty(hpage), trapno, 931 + ret != SWAP_SUCCESS, p, pfn); 932 933 return ret; 934 }
+2 -2
mm/page_alloc.c
··· 5182 if (!table) 5183 panic("Failed to allocate %s hash table\n", tablename); 5184 5185 - printk(KERN_INFO "%s hash table entries: %d (order: %d, %lu bytes)\n", 5186 tablename, 5187 - (1U << log2qty), 5188 ilog2(size) - PAGE_SHIFT, 5189 size); 5190
··· 5182 if (!table) 5183 panic("Failed to allocate %s hash table\n", tablename); 5184 5185 + printk(KERN_INFO "%s hash table entries: %ld (order: %d, %lu bytes)\n", 5186 tablename, 5187 + (1UL << log2qty), 5188 ilog2(size) - PAGE_SHIFT, 5189 size); 5190
+1 -1
net/atm/mpc.c
··· 778 eg->packets_rcvd++; 779 mpc->eg_ops->put(eg); 780 781 - memset(ATM_SKB(skb), 0, sizeof(struct atm_skb_data)); 782 netif_rx(new_skb); 783 } 784
··· 778 eg->packets_rcvd++; 779 mpc->eg_ops->put(eg); 780 781 + memset(ATM_SKB(new_skb), 0, sizeof(struct atm_skb_data)); 782 netif_rx(new_skb); 783 } 784
+29 -33
net/bluetooth/l2cap.c
··· 1441 1442 static void l2cap_streaming_send(struct sock *sk) 1443 { 1444 - struct sk_buff *skb, *tx_skb; 1445 struct l2cap_pinfo *pi = l2cap_pi(sk); 1446 u16 control, fcs; 1447 1448 - while ((skb = sk->sk_send_head)) { 1449 - tx_skb = skb_clone(skb, GFP_ATOMIC); 1450 - 1451 - control = get_unaligned_le16(tx_skb->data + L2CAP_HDR_SIZE); 1452 control |= pi->next_tx_seq << L2CAP_CTRL_TXSEQ_SHIFT; 1453 - put_unaligned_le16(control, tx_skb->data + L2CAP_HDR_SIZE); 1454 1455 if (pi->fcs == L2CAP_FCS_CRC16) { 1456 - fcs = crc16(0, (u8 *)tx_skb->data, tx_skb->len - 2); 1457 - put_unaligned_le16(fcs, tx_skb->data + tx_skb->len - 2); 1458 } 1459 1460 - l2cap_do_send(sk, tx_skb); 1461 1462 pi->next_tx_seq = (pi->next_tx_seq + 1) % 64; 1463 - 1464 - if (skb_queue_is_last(TX_QUEUE(sk), skb)) 1465 - sk->sk_send_head = NULL; 1466 - else 1467 - sk->sk_send_head = skb_queue_next(TX_QUEUE(sk), skb); 1468 - 1469 - skb = skb_dequeue(TX_QUEUE(sk)); 1470 - kfree_skb(skb); 1471 } 1472 } 1473 ··· 1950 1951 switch (optname) { 1952 case L2CAP_OPTIONS: 1953 opts.imtu = l2cap_pi(sk)->imtu; 1954 opts.omtu = l2cap_pi(sk)->omtu; 1955 opts.flush_to = l2cap_pi(sk)->flush_to; ··· 2766 case L2CAP_CONF_MTU: 2767 if (val < L2CAP_DEFAULT_MIN_MTU) { 2768 *result = L2CAP_CONF_UNACCEPT; 2769 - pi->omtu = L2CAP_DEFAULT_MIN_MTU; 2770 } else 2771 - pi->omtu = val; 2772 - l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, pi->omtu); 2773 break; 2774 2775 case L2CAP_CONF_FLUSH_TO: ··· 3066 return 0; 3067 } 3068 3069 static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data) 3070 { 3071 struct l2cap_conf_req *req = (struct l2cap_conf_req *) data; ··· 3094 if (!sk) 3095 return -ENOENT; 3096 3097 - if (sk->sk_state != BT_CONFIG) { 3098 - struct l2cap_cmd_rej rej; 3099 - 3100 - rej.reason = cpu_to_le16(0x0002); 3101 - l2cap_send_cmd(conn, cmd->ident, L2CAP_COMMAND_REJ, 3102 - sizeof(rej), &rej); 3103 goto unlock; 3104 - } 3105 3106 /* Reject if config buffer is too small. */ 3107 len = cmd_len - sizeof(*req); ··· 3135 goto unlock; 3137 if (l2cap_pi(sk)->conf_state & L2CAP_CONF_INPUT_DONE) { 3138 - if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_NO_FCS_RECV) || 3139 - l2cap_pi(sk)->fcs != L2CAP_FCS_NONE) 3140 - l2cap_pi(sk)->fcs = L2CAP_FCS_CRC16; 3141 3142 sk->sk_state = BT_CONNECTED; 3143 ··· 3223 l2cap_pi(sk)->conf_state |= L2CAP_CONF_INPUT_DONE; 3224 3225 if (l2cap_pi(sk)->conf_state & L2CAP_CONF_OUTPUT_DONE) { 3226 - if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_NO_FCS_RECV) || 3227 - l2cap_pi(sk)->fcs != L2CAP_FCS_NONE) 3228 - l2cap_pi(sk)->fcs = L2CAP_FCS_CRC16; 3229 3230 sk->sk_state = BT_CONNECTED; 3231 l2cap_pi(sk)->next_tx_seq = 0;
··· 1441 1442 static void l2cap_streaming_send(struct sock *sk) 1443 { 1444 + struct sk_buff *skb; 1445 struct l2cap_pinfo *pi = l2cap_pi(sk); 1446 u16 control, fcs; 1447 1448 + while ((skb = skb_dequeue(TX_QUEUE(sk)))) { 1449 + control = get_unaligned_le16(skb->data + L2CAP_HDR_SIZE); 1450 control |= pi->next_tx_seq << L2CAP_CTRL_TXSEQ_SHIFT; 1451 + put_unaligned_le16(control, skb->data + L2CAP_HDR_SIZE); 1452 1453 if (pi->fcs == L2CAP_FCS_CRC16) { 1454 + fcs = crc16(0, (u8 *)skb->data, skb->len - 2); 1455 + put_unaligned_le16(fcs, skb->data + skb->len - 2); 1456 } 1457 1458 + l2cap_do_send(sk, skb); 1459 1460 pi->next_tx_seq = (pi->next_tx_seq + 1) % 64; 1461 } 1462 } 1463 ··· 1960 1961 switch (optname) { 1962 case L2CAP_OPTIONS: 1963 + if (sk->sk_state == BT_CONNECTED) { 1964 + err = -EINVAL; 1965 + break; 1966 + } 1967 + 1968 opts.imtu = l2cap_pi(sk)->imtu; 1969 opts.omtu = l2cap_pi(sk)->omtu; 1970 opts.flush_to = l2cap_pi(sk)->flush_to; ··· 2771 case L2CAP_CONF_MTU: 2772 if (val < L2CAP_DEFAULT_MIN_MTU) { 2773 *result = L2CAP_CONF_UNACCEPT; 2774 + pi->imtu = L2CAP_DEFAULT_MIN_MTU; 2775 } else 2776 + pi->imtu = val; 2777 + l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, pi->imtu); 2778 break; 2779 2780 case L2CAP_CONF_FLUSH_TO: ··· 3071 return 0; 3072 } 3073 3074 + static inline void set_default_fcs(struct l2cap_pinfo *pi) 3075 + { 3076 + /* FCS is enabled only in ERTM or streaming mode, if one or both 3077 + * sides request it. 3078 + */ 3079 + if (pi->mode != L2CAP_MODE_ERTM && pi->mode != L2CAP_MODE_STREAMING) 3080 + pi->fcs = L2CAP_FCS_NONE; 3081 + else if (!(pi->conf_state & L2CAP_CONF_NO_FCS_RECV)) 3082 + pi->fcs = L2CAP_FCS_CRC16; 3083 + } 3084 + 3085 static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data) 3086 { 3087 struct l2cap_conf_req *req = (struct l2cap_conf_req *) data; ··· 3088 if (!sk) 3089 return -ENOENT; 3090 3091 + if (sk->sk_state == BT_DISCONN) 3092 goto unlock; 3093 3094 /* Reject if config buffer is too small. */ 3095 len = cmd_len - sizeof(*req); ··· 3135 goto unlock; 3137 if (l2cap_pi(sk)->conf_state & L2CAP_CONF_INPUT_DONE) { 3138 + set_default_fcs(l2cap_pi(sk)); 3139 3140 sk->sk_state = BT_CONNECTED; 3141 ··· 3225 l2cap_pi(sk)->conf_state |= L2CAP_CONF_INPUT_DONE; 3226 3227 if (l2cap_pi(sk)->conf_state & L2CAP_CONF_OUTPUT_DONE) { 3228 + set_default_fcs(l2cap_pi(sk)); 3229 3230 sk->sk_state = BT_CONNECTED; 3231 l2cap_pi(sk)->next_tx_seq = 0;
+4
net/bluetooth/rfcomm/sock.c
··· 82 static void rfcomm_sk_state_change(struct rfcomm_dlc *d, int err) 83 { 84 struct sock *sk = d->owner, *parent; 85 if (!sk) 86 return; 87 88 BT_DBG("dlc %p state %ld err %d", d, d->state, err); 89 90 bh_lock_sock(sk); 91 92 if (err) ··· 111 } 112 113 bh_unlock_sock(sk); 114 115 if (parent && sock_flag(sk, SOCK_ZAPPED)) { 116 /* We have to drop DLC lock here, otherwise
··· 82 static void rfcomm_sk_state_change(struct rfcomm_dlc *d, int err) 83 { 84 struct sock *sk = d->owner, *parent; 85 + unsigned long flags; 86 + 87 if (!sk) 88 return; 89 90 BT_DBG("dlc %p state %ld err %d", d, d->state, err); 91 92 + local_irq_save(flags); 93 bh_lock_sock(sk); 94 95 if (err) ··· 108 } 109 110 bh_unlock_sock(sk); 111 + local_irq_restore(flags); 112 113 if (parent && sock_flag(sk, SOCK_ZAPPED)) { 114 /* We have to drop DLC lock here, otherwise
+15 -6
net/caif/caif_socket.c
··· 827 long timeo; 828 int err; 829 int ifindex, headroom, tailroom; 830 struct net_device *dev; 831 832 lock_sock(sk); ··· 897 cf_sk->sk.sk_state = CAIF_DISCONNECTED; 898 goto out; 899 } 900 - dev = dev_get_by_index(sock_net(sk), ifindex); 901 cf_sk->headroom = LL_RESERVED_SPACE_EXTRA(dev, headroom); 902 cf_sk->tailroom = tailroom; 903 - cf_sk->maxframe = dev->mtu - (headroom + tailroom); 904 - dev_put(dev); 905 if (cf_sk->maxframe < 1) { 906 - pr_warning("CAIF: %s(): CAIF Interface MTU too small (%d)\n", 907 - __func__, dev->mtu); 908 - err = -ENODEV; 909 goto out; 910 } 911
··· 827 long timeo; 828 int err; 829 int ifindex, headroom, tailroom; 830 + unsigned int mtu; 831 struct net_device *dev; 832 833 lock_sock(sk); ··· 896 cf_sk->sk.sk_state = CAIF_DISCONNECTED; 897 goto out; 898 } 899 + 900 + err = -ENODEV; 901 + rcu_read_lock(); 902 + dev = dev_get_by_index_rcu(sock_net(sk), ifindex); 903 + if (!dev) { 904 + rcu_read_unlock(); 905 + goto out; 906 + } 907 cf_sk->headroom = LL_RESERVED_SPACE_EXTRA(dev, headroom); 908 + mtu = dev->mtu; 909 + rcu_read_unlock(); 910 + 911 cf_sk->tailroom = tailroom; 912 + cf_sk->maxframe = mtu - (headroom + tailroom); 913 if (cf_sk->maxframe < 1) { 914 + pr_warning("CAIF: %s(): CAIF Interface MTU too small (%u)\n", 915 + __func__, mtu); 916 goto out; 917 } 918
+4 -4
net/core/ethtool.c
··· 348 if (info.cmd == ETHTOOL_GRXCLSRLALL) { 349 if (info.rule_cnt > 0) { 350 if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32)) 351 - rule_buf = kmalloc(info.rule_cnt * sizeof(u32), 352 GFP_USER); 353 if (!rule_buf) 354 return -ENOMEM; ··· 397 (KMALLOC_MAX_SIZE - sizeof(*indir)) / sizeof(*indir->ring_index)) 398 return -ENOMEM; 399 full_size = sizeof(*indir) + sizeof(*indir->ring_index) * table_size; 400 - indir = kmalloc(full_size, GFP_USER); 401 if (!indir) 402 return -ENOMEM; 403 ··· 538 539 gstrings.len = ret; 540 541 - data = kmalloc(gstrings.len * ETH_GSTRING_LEN, GFP_USER); 542 if (!data) 543 return -ENOMEM; 544 ··· 775 if (regs.len > reglen) 776 regs.len = reglen; 777 778 - regbuf = kmalloc(reglen, GFP_USER); 779 if (!regbuf) 780 return -ENOMEM; 781
··· 348 if (info.cmd == ETHTOOL_GRXCLSRLALL) { 349 if (info.rule_cnt > 0) { 350 if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32)) 351 + rule_buf = kzalloc(info.rule_cnt * sizeof(u32), 352 GFP_USER); 353 if (!rule_buf) 354 return -ENOMEM; ··· 397 (KMALLOC_MAX_SIZE - sizeof(*indir)) / sizeof(*indir->ring_index)) 398 return -ENOMEM; 399 full_size = sizeof(*indir) + sizeof(*indir->ring_index) * table_size; 400 + indir = kzalloc(full_size, GFP_USER); 401 if (!indir) 402 return -ENOMEM; 403 ··· 538 539 gstrings.len = ret; 540 541 + data = kzalloc(gstrings.len * ETH_GSTRING_LEN, GFP_USER); 542 if (!data) 543 return -ENOMEM; 544 ··· 775 if (regs.len > reglen) 776 regs.len = reglen; 777 778 + regbuf = kzalloc(reglen, GFP_USER); 779 if (!regbuf) 780 return -ENOMEM; 781
+4 -4
net/core/stream.c
··· 141 142 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 143 sk->sk_write_pending++; 144 - sk_wait_event(sk, &current_timeo, !sk->sk_err && 145 - !(sk->sk_shutdown & SEND_SHUTDOWN) && 146 - sk_stream_memory_free(sk) && 147 - vm_wait); 148 sk->sk_write_pending--; 149 150 if (vm_wait) {
··· 141 142 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 143 sk->sk_write_pending++; 144 + sk_wait_event(sk, &current_timeo, sk->sk_err || 145 + (sk->sk_shutdown & SEND_SHUTDOWN) || 146 + (sk_stream_memory_free(sk) && 147 + !vm_wait)); 148 sk->sk_write_pending--; 149 150 if (vm_wait) {
+1 -1
net/ipv4/Kconfig
··· 413 If unsure, say Y. 414 415 config INET_LRO 416 - bool "Large Receive Offload (ipv4/tcp)" 417 default y 418 ---help--- 419 Support for Large Receive Offload (ipv4/tcp).
··· 413 If unsure, say Y. 414 415 config INET_LRO 416 + tristate "Large Receive Offload (ipv4/tcp)" 417 default y 418 ---help--- 419 Support for Large Receive Offload (ipv4/tcp).
+13 -1
net/ipv4/igmp.c
··· 834 int mark = 0; 835 836 837 - if (len == 8 || IGMP_V2_SEEN(in_dev)) { 838 if (ih->code == 0) { 839 /* Alas, old v1 router presents here. */ 840 ··· 856 igmpv3_clear_delrec(in_dev); 857 } else if (len < 12) { 858 return; /* ignore bogus packet; freed by caller */ 859 } else { /* v3 */ 860 if (!pskb_may_pull(skb, sizeof(struct igmpv3_query))) 861 return;
··· 834 int mark = 0; 835 836 837 + if (len == 8) { 838 if (ih->code == 0) { 839 /* Alas, old v1 router presents here. */ 840 ··· 856 igmpv3_clear_delrec(in_dev); 857 } else if (len < 12) { 858 return; /* ignore bogus packet; freed by caller */ 859 + } else if (IGMP_V1_SEEN(in_dev)) { 860 + /* This is a v3 query with v1 queriers present */ 861 + max_delay = IGMP_Query_Response_Interval; 862 + group = 0; 863 + } else if (IGMP_V2_SEEN(in_dev)) { 864 + /* this is a v3 query with v2 queriers present; 865 + * Interpretation of the max_delay code is problematic here. 866 + * A real v2 host would use ih_code directly, while v3 has a 867 + * different encoding. We use the v3 encoding as more likely 868 + * to be intended in a v3 query. 869 + */ 870 + max_delay = IGMPV3_MRC(ih3->code)*(HZ/IGMP_TIMER_SCALE); 871 } else { /* v3 */ 872 if (!pskb_may_pull(skb, sizeof(struct igmpv3_query))) 873 return;
+24 -4
net/ipv6/route.c
··· 1556 * i.e. Path MTU discovery 1557 */ 1558 1559 - void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr, 1560 - struct net_device *dev, u32 pmtu) 1561 { 1562 struct rt6_info *rt, *nrt; 1563 - struct net *net = dev_net(dev); 1564 int allfrag = 0; 1565 1566 - rt = rt6_lookup(net, daddr, saddr, dev->ifindex, 0); 1567 if (rt == NULL) 1568 return; 1569 ··· 1628 } 1629 out: 1630 dst_release(&rt->dst); 1631 } 1632 1633 /*
··· 1556 * i.e. Path MTU discovery 1557 */ 1558 1559 + static void rt6_do_pmtu_disc(struct in6_addr *daddr, struct in6_addr *saddr, 1560 + struct net *net, u32 pmtu, int ifindex) 1561 { 1562 struct rt6_info *rt, *nrt; 1563 int allfrag = 0; 1564 1565 + rt = rt6_lookup(net, daddr, saddr, ifindex, 0); 1566 if (rt == NULL) 1567 return; 1568 ··· 1629 } 1630 out: 1631 dst_release(&rt->dst); 1632 + } 1633 + 1634 + void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr, 1635 + struct net_device *dev, u32 pmtu) 1636 + { 1637 + struct net *net = dev_net(dev); 1638 + 1639 + /* 1640 + * RFC 1981 states that a node "MUST reduce the size of the packets it 1641 + * is sending along the path" that caused the Packet Too Big message. 1642 + * Since it's not possible in the general case to determine which 1643 + * interface was used to send the original packet, we update the MTU 1644 + * on the interface that will be used to send future packets. We also 1645 + * update the MTU on the interface that received the Packet Too Big in 1646 + * case the original packet was forced out that interface with 1647 + * SO_BINDTODEVICE or similar. This is the next best thing to the 1648 + * correct behaviour, which would be to update the MTU on all 1649 + * interfaces. 1650 + */ 1651 + rt6_do_pmtu_disc(daddr, saddr, net, pmtu, 0); 1652 + rt6_do_pmtu_disc(daddr, saddr, net, pmtu, dev->ifindex); 1653 } 1654 1655 /*
+2
net/mac80211/agg-tx.c
··· 175 176 set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 177 178 /* 179 * After this packets are no longer handed right through 180 * to the driver but are put onto tid_tx->pending instead,
··· 175 176 set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 177 178 + del_timer_sync(&tid_tx->addba_resp_timer); 179 + 180 /* 181 * After this packets are no longer handed right through 182 * to the driver but are put onto tid_tx->pending instead,
+2 -2
net/mac80211/status.c
··· 377 skb2 = skb_clone(skb, GFP_ATOMIC); 378 if (skb2) { 379 skb2->dev = prev_dev; 380 - netif_receive_skb(skb2); 381 } 382 } 383 ··· 386 } 387 if (prev_dev) { 388 skb->dev = prev_dev; 389 - netif_receive_skb(skb); 390 skb = NULL; 391 } 392 rcu_read_unlock();
··· 377 skb2 = skb_clone(skb, GFP_ATOMIC); 378 if (skb2) { 379 skb2->dev = prev_dev; 380 + netif_rx(skb2); 381 } 382 } 383 ··· 386 } 387 if (prev_dev) { 388 skb->dev = prev_dev; 389 + netif_rx(skb); 390 skb = NULL; 391 } 392 rcu_read_unlock();
+1 -1
net/sched/cls_u32.c
··· 137 int toff = off + key->off + (off2 & key->offmask); 138 __be32 *data, _data; 139 140 - if (skb_headroom(skb) + toff < 0) 141 goto out; 142 143 data = skb_header_pointer(skb, toff, 4, &_data);
··· 137 int toff = off + key->off + (off2 & key->offmask); 138 __be32 *data, _data; 139 140 + if (skb_headroom(skb) + toff > INT_MAX) 141 goto out; 142 143 data = skb_header_pointer(skb, toff, 4, &_data);
+6 -2
net/sctp/auth.c
··· 543 id = ntohs(hmacs->hmac_ids[i]); 544 545 /* Check the id is in the supported range */ 546 - if (id > SCTP_AUTH_HMAC_ID_MAX) 547 continue; 548 549 /* See is we support the id. Supported IDs have name and 550 * length fields set, so that we can allocated and use 551 * them. We can safely just check for name, for without the 552 * name, we can't allocate the TFM. 553 */ 554 - if (!sctp_hmac_list[id].hmac_name) 555 continue; 556 557 break; 558 }
··· 543 id = ntohs(hmacs->hmac_ids[i]); 544 545 /* Check the id is in the supported range */ 546 + if (id > SCTP_AUTH_HMAC_ID_MAX) { 547 + id = 0; 548 continue; 549 + } 550 551 /* See is we support the id. Supported IDs have name and 552 * length fields set, so that we can allocated and use 553 * them. We can safely just check for name, for without the 554 * name, we can't allocate the TFM. 555 */ 556 + if (!sctp_hmac_list[id].hmac_name) { 557 + id = 0; 558 continue; 559 + } 560 561 break; 562 }
+12 -1
net/sctp/socket.c
··· 916 /* Walk through the addrs buffer and count the number of addresses. */ 917 addr_buf = kaddrs; 918 while (walk_size < addrs_size) { 919 sa_addr = (struct sockaddr *)addr_buf; 920 af = sctp_get_af_specific(sa_addr->sa_family); 921 ··· 1007 /* Walk through the addrs buffer and count the number of addresses. */ 1008 addr_buf = kaddrs; 1009 while (walk_size < addrs_size) { 1010 sa_addr = (union sctp_addr *)addr_buf; 1011 af = sctp_get_af_specific(sa_addr->sa.sa_family); 1012 - port = ntohs(sa_addr->v4.sin_port); 1013 1014 /* If the address family is not supported or if this address 1015 * causes the address buffer to overflow return EINVAL. ··· 1022 err = -EINVAL; 1023 goto out_free; 1024 } 1025 1026 /* Save current address so we can work with it */ 1027 memcpy(&to, sa_addr, af->sockaddr_len);
··· 916 /* Walk through the addrs buffer and count the number of addresses. */ 917 addr_buf = kaddrs; 918 while (walk_size < addrs_size) { 919 + if (walk_size + sizeof(sa_family_t) > addrs_size) { 920 + kfree(kaddrs); 921 + return -EINVAL; 922 + } 923 + 924 sa_addr = (struct sockaddr *)addr_buf; 925 af = sctp_get_af_specific(sa_addr->sa_family); 926 ··· 1002 /* Walk through the addrs buffer and count the number of addresses. */ 1003 addr_buf = kaddrs; 1004 while (walk_size < addrs_size) { 1005 + if (walk_size + sizeof(sa_family_t) > addrs_size) { 1006 + err = -EINVAL; 1007 + goto out_free; 1008 + } 1009 + 1010 sa_addr = (union sctp_addr *)addr_buf; 1011 af = sctp_get_af_specific(sa_addr->sa.sa_family); 1012 1013 /* If the address family is not supported or if this address 1014 * causes the address buffer to overflow return EINVAL. ··· 1013 err = -EINVAL; 1014 goto out_free; 1015 } 1016 + 1017 + port = ntohs(sa_addr->v4.sin_port); 1018 1019 /* Save current address so we can work with it */ 1020 memcpy(&to, sa_addr, af->sockaddr_len);
+1 -1
scripts/kconfig/conf.c
··· 427 if (sym->name && !sym_is_choice_value(sym)) { 428 printf("CONFIG_%s\n", sym->name); 429 } 430 - } else { 431 if (!conf_cnt++) 432 printf(_("*\n* Restart config...\n*\n")); 433 rootEntry = menu_get_parent_menu(menu);
··· 427 if (sym->name && !sym_is_choice_value(sym)) { 428 printf("CONFIG_%s\n", sym->name); 429 } 430 + } else if (input_mode != oldnoconfig) { 431 if (!conf_cnt++) 432 printf(_("*\n* Restart config...\n*\n")); 433 rootEntry = menu_get_parent_menu(menu);
-1
scripts/kconfig/expr.h
··· 165 struct symbol *sym; 166 struct property *prompt; 167 struct expr *dep; 168 - struct expr *dir_dep; 169 unsigned int flags; 170 char *help; 171 struct file *file;
··· 165 struct symbol *sym; 166 struct property *prompt; 167 struct expr *dep; 168 unsigned int flags; 169 char *help; 170 struct file *file;
+2 -5
scripts/kconfig/menu.c
··· 107 void menu_add_dep(struct expr *dep) 108 { 109 current_entry->dep = expr_alloc_and(current_entry->dep, menu_check_dep(dep)); 110 - current_entry->dir_dep = current_entry->dep; 111 } 112 113 void menu_set_type(int type) ··· 290 for (menu = parent->list; menu; menu = menu->next) 291 menu_finalize(menu); 292 } else if (sym) { 293 - /* ignore inherited dependencies for dir_dep */ 294 - sym->dir_dep.expr = expr_transform(expr_copy(parent->dir_dep)); 295 - sym->dir_dep.expr = expr_eliminate_dups(sym->dir_dep.expr); 296 - 297 basedep = parent->prompt ? parent->prompt->visible.expr : NULL; 298 basedep = expr_trans_compare(basedep, E_UNEQUAL, &symbol_no); 299 basedep = expr_eliminate_dups(expr_transform(basedep)); ··· 320 parent->next = last_menu->next; 321 last_menu->next = NULL; 322 } 323 } 324 for (menu = parent->list; menu; menu = menu->next) { 325 if (sym && sym_is_choice(sym) &&
··· 107 void menu_add_dep(struct expr *dep) 108 { 109 current_entry->dep = expr_alloc_and(current_entry->dep, menu_check_dep(dep)); 110 } 111 112 void menu_set_type(int type) ··· 291 for (menu = parent->list; menu; menu = menu->next) 292 menu_finalize(menu); 293 } else if (sym) { 294 basedep = parent->prompt ? parent->prompt->visible.expr : NULL; 295 basedep = expr_trans_compare(basedep, E_UNEQUAL, &symbol_no); 296 basedep = expr_eliminate_dups(expr_transform(basedep)); ··· 325 parent->next = last_menu->next; 326 last_menu->next = NULL; 327 } 328 + 329 + sym->dir_dep.expr = parent->dep; 330 } 331 for (menu = parent->list; menu; menu = menu->next) { 332 if (sym && sym_is_choice(sym) &&
+2
scripts/kconfig/symbol.c
··· 350 } 351 } 352 calc_newval: 353 if (sym->dir_dep.tri == no && sym->rev_dep.tri != no) { 354 fprintf(stderr, "warning: ("); 355 expr_fprint(sym->rev_dep.expr, stderr); ··· 359 expr_fprint(sym->dir_dep.expr, stderr); 360 fprintf(stderr, ")\n"); 361 } 362 newval.tri = EXPR_OR(newval.tri, sym->rev_dep.tri); 363 } 364 if (newval.tri == mod && sym_get_type(sym) == S_BOOLEAN)
··· 350 } 351 } 352 calc_newval: 353 + #if 0 354 if (sym->dir_dep.tri == no && sym->rev_dep.tri != no) { 355 fprintf(stderr, "warning: ("); 356 expr_fprint(sym->rev_dep.expr, stderr); ··· 358 expr_fprint(sym->dir_dep.expr, stderr); 359 fprintf(stderr, ")\n"); 360 } 361 + #endif 362 newval.tri = EXPR_OR(newval.tri, sym->rev_dep.tri); 363 } 364 if (newval.tri == mod && sym_get_type(sym) == S_BOOLEAN)
+3 -1
sound/core/rawmidi.c
··· 535 { 536 struct snd_rawmidi_file *rfile; 537 struct snd_rawmidi *rmidi; 538 539 rfile = file->private_data; 540 rmidi = rfile->rmidi; 541 rawmidi_release_priv(rfile); 542 kfree(rfile); 543 snd_card_file_remove(rmidi->card, file); 544 - module_put(rmidi->card->module); 545 return 0; 546 } 547
··· 535 { 536 struct snd_rawmidi_file *rfile; 537 struct snd_rawmidi *rmidi; 538 + struct module *module; 539 540 rfile = file->private_data; 541 rmidi = rfile->rmidi; 542 rawmidi_release_priv(rfile); 543 kfree(rfile); 544 + module = rmidi->card->module; 545 snd_card_file_remove(rmidi->card, file); 546 + module_put(module); 547 return 0; 548 } 549
+2 -2
sound/oss/soundcard.c
··· 391 case SND_DEV_DSP: 392 case SND_DEV_DSP16: 393 case SND_DEV_AUDIO: 394 - return audio_ioctl(dev, file, cmd, p); 395 break; 396 397 case SND_DEV_MIDIN: 398 - return MIDIbuf_ioctl(dev, file, cmd, p); 399 break; 400 401 }
··· 391 case SND_DEV_DSP: 392 case SND_DEV_DSP16: 393 case SND_DEV_AUDIO: 394 + ret = audio_ioctl(dev, file, cmd, p); 395 break; 396 397 case SND_DEV_MIDIN: 398 + ret = MIDIbuf_ioctl(dev, file, cmd, p); 399 break; 400 401 }
+2
sound/pci/hda/patch_sigmatel.c
··· 1747 "HP dv6", STAC_HP_DV5), 1748 SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x3061, 1749 "HP dv6", STAC_HP_DV5), /* HP dv6-1110ax */ 1750 SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x7010, 1751 "HP", STAC_HP_DV5), 1752 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0233,
··· 1747 "HP dv6", STAC_HP_DV5), 1748 SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x3061, 1749 "HP dv6", STAC_HP_DV5), /* HP dv6-1110ax */ 1750 + SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x363e, 1751 + "HP DV6", STAC_HP_DV5), 1752 SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x7010, 1753 "HP", STAC_HP_DV5), 1754 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0233,
+12
tools/perf/perf.h
··· 73 #define cpu_relax() asm volatile("":::"memory") 74 #endif 75 76 #include <time.h> 77 #include <unistd.h> 78 #include <sys/types.h>
··· 73 #define cpu_relax() asm volatile("":::"memory") 74 #endif 75 76 + #ifdef __mips__ 77 + #include "../../arch/mips/include/asm/unistd.h" 78 + #define rmb() asm volatile( \ 79 + ".set mips2\n\t" \ 80 + "sync\n\t" \ 81 + ".set mips0" \ 82 + : /* no output */ \ 83 + : /* no input */ \ 84 + : "memory") 85 + #define cpu_relax() asm volatile("" ::: "memory") 86 + #endif 87 + 88 #include <time.h> 89 #include <unistd.h> 90 #include <sys/types.h>