Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 3.16-rc6 into driver-core-next

We want the platform changes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3876 -2201
+3 -6
Documentation/Changes
··· 280 280 mcelog 281 281 ------ 282 282 283 - In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility 284 - as a regular cronjob similar to the x86-64 kernel to process and log 285 - machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check 286 - events are errors reported by the CPU. Processing them is strongly encouraged. 287 - All x86-64 kernels since 2.6.4 require the mcelog utility to 288 - process machine checks. 283 + On x86 kernels the mcelog utility is needed to process and log machine check 284 + events when CONFIG_X86_MCE is enabled. Machine check events are errors reported 285 + by the CPU. Processing them is strongly encouraged. 289 286 290 287 Getting updated software 291 288 ========================
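The hunk above keeps the advice that machine check events should be processed by the mcelog utility. A cron entry along the lines of what older versions of Documentation/Changes suggested would look like this (path and flags vary between mcelog versions, so treat this as a sketch):

```
# /etc/crontab entry: process and log machine check events every 5 minutes.
# Flags shown (--ignorenodev, --filter) exist in common mcelog builds but
# may differ on your distribution.
*/5 * * * * root /usr/sbin/mcelog --ignorenodev --filter >> /var/log/mcelog
```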
+1 -1
Documentation/DocBook/gadget.tmpl
··· 708 708 709 709 <para>Systems need specialized hardware support to implement OTG, 710 710 notably including a special <emphasis>Mini-AB</emphasis> jack 711 - and associated transciever to support <emphasis>Dual-Role</emphasis> 711 + and associated transceiver to support <emphasis>Dual-Role</emphasis> 712 712 operation: 713 713 they can act either as a host, using the standard 714 714 Linux-USB host side driver stack,
+2 -2
Documentation/DocBook/genericirq.tmpl
··· 182 182 <para> 183 183 Each interrupt is described by an interrupt descriptor structure 184 184 irq_desc. The interrupt is referenced by an 'unsigned int' numeric 185 - value which selects the corresponding interrupt decription structure 185 + value which selects the corresponding interrupt description structure 186 186 in the descriptor structures array. 187 187 The descriptor structure contains status information and pointers 188 188 to the interrupt flow method and the interrupt chip structure ··· 470 470 <para> 471 471 To avoid copies of identical implementations of IRQ chips the 472 472 core provides a configurable generic interrupt chip 473 - implementation. Developers should check carefuly whether the 473 + implementation. Developers should check carefully whether the 474 474 generic chip fits their needs before implementing the same 475 475 functionality slightly differently themselves. 476 476 </para>
+1 -1
Documentation/DocBook/kernel-locking.tmpl
··· 1760 1760 </para> 1761 1761 1762 1762 <para> 1763 - There is a furthur optimization possible here: remember our original 1763 + There is a further optimization possible here: remember our original 1764 1764 cache code, where there were no reference counts and the caller simply 1765 1765 held the lock whenever using the object? This is still possible: if 1766 1766 you hold the lock, no one can delete the object, so you don't need to
+3 -3
Documentation/DocBook/libata.tmpl
··· 677 677 678 678 <listitem> 679 679 <para> 680 - ATA_QCFLAG_ACTIVE is clared from qc->flags. 680 + ATA_QCFLAG_ACTIVE is cleared from qc->flags. 681 681 </para> 682 682 </listitem> 683 683 ··· 708 708 709 709 <listitem> 710 710 <para> 711 - qc->waiting is claread &amp; completed (in that order). 711 + qc->waiting is cleared &amp; completed (in that order). 712 712 </para> 713 713 </listitem> 714 714 ··· 1163 1163 1164 1164 <para> 1165 1165 Once sense data is acquired, this type of errors can be 1166 - handled similary to other SCSI errors. Note that sense data 1166 + handled similarly to other SCSI errors. Note that sense data 1167 1167 may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR 1168 1168 &amp;&amp; ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such 1169 1169 cases, the error should be considered as an ATA bus error and
+1 -1
Documentation/DocBook/media_api.tmpl
··· 68 68 several digital tv standards. While it is called as DVB API, 69 69 in fact it covers several different video standards including 70 70 DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated 71 - to documment support also for DVB-S2, ISDB-T and ISDB-S.</para> 71 + to document support also for DVB-S2, ISDB-T and ISDB-S.</para> 72 72 <para>The third part covers the Remote Controller API.</para> 73 73 <para>The fourth part covers the Media Controller API.</para> 74 74 <para>For additional information and for the latest development code,
+15 -15
Documentation/DocBook/mtdnand.tmpl
··· 91 91 <listitem><para> 92 92 [MTD Interface]</para><para> 93 93 These functions provide the interface to the MTD kernel API. 94 - They are not replacable and provide functionality 94 + They are not replaceable and provide functionality 95 95 which is complete hardware independent. 96 96 </para></listitem> 97 97 <listitem><para> ··· 100 100 </para></listitem> 101 101 <listitem><para> 102 102 [GENERIC]</para><para> 103 - Generic functions are not replacable and provide functionality 103 + Generic functions are not replaceable and provide functionality 104 104 which is complete hardware independent. 105 105 </para></listitem> 106 106 <listitem><para> 107 107 [DEFAULT]</para><para> 108 108 Default functions provide hardware related functionality which is suitable 109 109 for most of the implementations. These functions can be replaced by the 110 - board driver if neccecary. Those functions are called via pointers in the 110 + board driver if necessary. Those functions are called via pointers in the 111 111 NAND chip description structure. The board driver can set the functions which 112 112 should be replaced by board dependent functions before calling nand_scan(). 113 113 If the function pointer is NULL on entry to nand_scan() then the pointer ··· 264 264 is set up nand_scan() is called. This function tries to 265 265 detect and identify then chip. If a chip is found all the 266 266 internal data fields are initialized accordingly. 267 - The structure(s) have to be zeroed out first and then filled with the neccecary 267 + The structure(s) have to be zeroed out first and then filled with the necessary 268 268 information about the device. 269 269 </para> 270 270 <programlisting> ··· 327 327 <sect1 id="Exit_function"> 328 328 <title>Exit function</title> 329 329 <para> 330 - The exit function is only neccecary if the driver is 330 + The exit function is only necessary if the driver is 331 331 compiled as a module. 
It releases all resources which 332 332 are held by the chip driver and unregisters the partitions 333 333 in the MTD layer. ··· 494 494 in this case. See rts_from4.c and diskonchip.c for 495 495 implementation reference. In those cases we must also 496 496 use bad block tables on FLASH, because the ECC layout is 497 - interferring with the bad block marker positions. 497 + interfering with the bad block marker positions. 498 498 See bad block table support for details. 499 499 </para> 500 500 </sect2> ··· 542 542 <para> 543 543 nand_scan() calls the function nand_default_bbt(). 544 544 nand_default_bbt() selects appropriate default 545 - bad block table desriptors depending on the chip information 545 + bad block table descriptors depending on the chip information 546 546 which was retrieved by nand_scan(). 547 547 </para> 548 548 <para> ··· 554 554 <sect2 id="Flash_based_tables"> 555 555 <title>Flash based tables</title> 556 556 <para> 557 - It may be desired or neccecary to keep a bad block table in FLASH. 557 + It may be desired or necessary to keep a bad block table in FLASH. 558 558 For AG-AND chips this is mandatory, as they have no factory marked 559 559 bad blocks. They have factory marked good blocks. The marker pattern 560 560 is erased when the block is erased to be reused. So in case of ··· 565 565 of the blocks. 566 566 </para> 567 567 <para> 568 - The blocks in which the tables are stored are procteted against 568 + The blocks in which the tables are stored are protected against 569 569 accidental access by marking them bad in the memory bad block 570 570 table. The bad block table management functions are allowed 571 - to circumvernt this protection. 571 + to circumvent this protection. 
572 572 </para> 573 573 <para> 574 574 The simplest way to activate the FLASH based bad block table support ··· 592 592 User defined tables are created by filling out a 593 593 nand_bbt_descr structure and storing the pointer in the 594 594 nand_chip structure member bbt_td before calling nand_scan(). 595 - If a mirror table is neccecary a second structure must be 595 + If a mirror table is necessary a second structure must be 596 596 created and a pointer to this structure must be stored 597 597 in bbt_md inside the nand_chip structure. If the bbt_md 598 598 member is set to NULL then only the main table is used ··· 666 666 <para> 667 667 For automatic placement some blocks must be reserved for 668 668 bad block table storage. The number of reserved blocks is defined 669 - in the maxblocks member of the babd block table description structure. 669 + in the maxblocks member of the bad block table description structure. 670 670 Reserving 4 blocks for mirrored tables should be a reasonable number. 671 671 This also limits the number of blocks which are scanned for the bad 672 672 block table ident pattern. ··· 1068 1068 <chapter id="filesystems"> 1069 1069 <title>Filesystem support</title> 1070 1070 <para> 1071 - The NAND driver provides all neccecary functions for a 1071 + The NAND driver provides all necessary functions for a 1072 1072 filesystem via the MTD interface. 1073 1073 </para> 1074 1074 <para> 1075 - Filesystems must be aware of the NAND pecularities and 1075 + Filesystems must be aware of the NAND peculiarities and 1076 1076 restrictions. One major restrictions of NAND Flash is, that you cannot 1077 1077 write as often as you want to a page. 
The consecutive writes to a page, 1078 1078 before erasing it again, are restricted to 1-3 writes, depending on the ··· 1222 1222 #define NAND_BBT_VERSION 0x00000100 1223 1223 /* Create a bbt if none axists */ 1224 1224 #define NAND_BBT_CREATE 0x00000200 1225 - /* Write bbt if neccecary */ 1225 + /* Write bbt if necessary */ 1226 1226 #define NAND_BBT_WRITE 0x00001000 1227 1227 /* Read and write back block contents when writing bbt */ 1228 1228 #define NAND_BBT_SAVECONTENT 0x00002000
+1 -1
Documentation/DocBook/regulator.tmpl
··· 155 155 release regulators. Functions are 156 156 provided to <link linkend='API-regulator-enable'>enable</link> 157 157 and <link linkend='API-regulator-disable'>disable</link> the 158 - reguator and to get and set the runtime parameters of the 158 + regulator and to get and set the runtime parameters of the 159 159 regulator. 160 160 </para> 161 161 <para>
+2 -2
Documentation/DocBook/uio-howto.tmpl
··· 766 766 <para> 767 767 The dynamic memory regions will be allocated when the UIO device file, 768 768 <varname>/dev/uioX</varname> is opened. 769 - Simiar to static memory resources, the memory region information for 769 + Similar to static memory resources, the memory region information for 770 770 dynamic regions is then visible via sysfs at 771 771 <varname>/sys/class/uio/uioX/maps/mapY/*</varname>. 772 - The dynmaic memory regions will be freed when the UIO device file is 772 + The dynamic memory regions will be freed when the UIO device file is 773 773 closed. When no processes are holding the device file open, the address 774 774 returned to userspace is ~0. 775 775 </para>
+1 -1
Documentation/DocBook/usb.tmpl
··· 153 153 154 154 <listitem><para>The Linux USB API supports synchronous calls for 155 155 control and bulk messages. 156 - It also supports asynchnous calls for all kinds of data transfer, 156 + It also supports asynchronous calls for all kinds of data transfer, 157 157 using request structures called "URBs" (USB Request Blocks). 158 158 </para></listitem> 159 159
+1 -1
Documentation/DocBook/writing-an-alsa-driver.tmpl
··· 5696 5696 suspending the PCM operations via 5697 5697 <function>snd_pcm_suspend_all()</function> or 5698 5698 <function>snd_pcm_suspend()</function>. It means that the PCM 5699 - streams are already stoppped when the register snapshot is 5699 + streams are already stopped when the register snapshot is 5700 5700 taken. But, remember that you don't have to restart the PCM 5701 5701 stream in the resume callback. It'll be restarted via 5702 5702 trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant>
-6
Documentation/acpi/enumeration.txt
··· 60 60 configuring GPIOs it can get its ACPI handle and extract this information 61 61 from ACPI tables. 62 62 63 - Currently the kernel is not able to automatically determine from which ACPI 64 - device it should make the corresponding platform device so we need to add 65 - the ACPI device explicitly to acpi_platform_device_ids list defined in 66 - drivers/acpi/acpi_platform.c. This limitation is only for the platform 67 - devices, SPI and I2C devices are created automatically as described below. 68 - 69 63 DMA support 70 64 ~~~~~~~~~~~ 71 65 DMA controllers enumerated via ACPI should be registered in the system to
+5 -2
Documentation/cpu-freq/intel-pstate.txt
··· 15 15 /sys/devices/system/cpu/intel_pstate/ 16 16 17 17 max_perf_pct: limits the maximum P state that will be requested by 18 - the driver stated as a percentage of the available performance. 18 + the driver stated as a percentage of the available performance. The 19 + available (P states) performance may be reduced by the no_turbo 20 + setting described below. 19 21 20 22 min_perf_pct: limits the minimum P state that will be requested by 21 - the driver stated as a percentage of the available performance. 23 + the driver stated as a percentage of the max (non-turbo) 24 + performance level. 22 25 23 26 no_turbo: limits the driver to selecting P states below the turbo 24 27 frequency range.
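The clarified text above states that max_perf_pct is a percentage of the available performance range. A minimal sketch of that arithmetic, with illustrative numbers (on real hardware the values come from /sys/devices/system/cpu/intel_pstate/ and the CPU's frequency tables):

```shell
# Sketch of the max_perf_pct semantics: the requested P states are capped
# at a percentage of the available performance. Both inputs below are
# hypothetical; read the real ones from the intel_pstate sysfs directory.
max_nonturbo_khz=2400000   # hypothetical maximum non-turbo frequency
max_perf_pct=80            # e.g. cat /sys/devices/system/cpu/intel_pstate/max_perf_pct
cap_khz=$(( max_nonturbo_khz * max_perf_pct / 100 ))
echo "P-state requests capped near ${cap_khz} kHz"
```

With no_turbo set, the available range shrinks to the non-turbo ceiling, so the same percentage maps to a lower absolute frequency.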
+20
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
··· 9 9 - reg: physical base address of the controller and length of memory mapped 10 10 region. 11 11 12 + Optional Properties: 13 + - clocks: List of clock handles. The parent clocks of the input clocks to the 14 + devices in this power domain are set to oscclk before power gating 15 + and restored back after powering on a domain. This is required for 16 + all domains which are powered on and off and not required for unused 17 + domains. 18 + - clock-names: The following clocks can be specified: 19 + - oscclk: Oscillator clock. 20 + - pclkN, clkN: Pairs of parent of input clock and input clock to the 21 + devices in this power domain. Maximum of 4 pairs (N = 0 to 3) 22 + are supported currently. 23 + 12 24 Node of a device using power domains must have a samsung,power-domain property 13 25 defined with a phandle to respective power domain. 14 26 ··· 29 17 lcd0: power-domain-lcd0 { 30 18 compatible = "samsung,exynos4210-pd"; 31 19 reg = <0x10023C00 0x10>; 20 + }; 21 + 22 + mfc_pd: power-domain@10044060 { 23 + compatible = "samsung,exynos4210-pd"; 24 + reg = <0x10044060 0x20>; 25 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, 26 + <&clock CLK_MOUT_USER_ACLK333>; 27 + clock-names = "oscclk", "pclk0", "clk0"; 32 28 }; 33 29 34 30 Example of the node using power domain:
+4 -2
Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt
··· 8 8 under node /cpus/cpu@0. 9 9 10 10 Required properties: 11 - - operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt 12 - for details 11 + - None 13 12 14 13 Optional properties: 14 + - operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt for 15 + details. OPPs *must* be supplied either via DT, i.e. this property, or 16 + populated at runtime. 15 17 - clock-latency: Specify the possible maximum transition latency for clock, 16 18 in unit of nanoseconds. 17 19 - voltage-tolerance: Specify the CPU voltage tolerance in percentage.
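A cpu@0 node using the now-optional operating-points property might look like the following fragment; the frequencies, voltages, and latency are illustrative values, not taken from any particular board:

```dts
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu@0 {
		compatible = "arm,cortex-a9";
		reg = <0>;
		/* <kHz uV> pairs; illustrative values only */
		operating-points = <
			792000 1100000
			396000  950000
		>;
		clock-latency = <61036>;
	};
};
```

When the property is absent, the OPPs must be populated at runtime as the amended text describes, or the driver has nothing to scale between.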
+7
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
··· 4 4 5 5 - compatible: Must contain one of the following: 6 6 7 + - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART. 8 + - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART. 9 + - "renesas,scifa-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFA compatible UART. 10 + - "renesas,scifb-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFB compatible UART. 11 + - "renesas,scifa-r8a7740" for R8A7740 (R-Mobile A1) SCIFA compatible UART. 12 + - "renesas,scifb-r8a7740" for R8A7740 (R-Mobile A1) SCIFB compatible UART. 13 + - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART. 7 14 - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART. 8 15 - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART. 9 16 - "renesas,scifa-r8a7790" for R8A7790 (R-Car H2) SCIFA compatible UART.
+7 -1
Documentation/kernel-parameters.txt
··· 2790 2790 leaf rcu_node structure. Useful for very large 2791 2791 systems. 2792 2792 2793 + rcutree.jiffies_till_sched_qs= [KNL] 2794 + Set required age in jiffies for a 2795 + given grace period before RCU starts 2796 + soliciting quiescent-state help from 2797 + rcu_note_context_switch(). 2798 + 2793 2799 rcutree.jiffies_till_first_fqs= [KNL] 2794 2800 Set delay from grace-period initialization to 2795 2801 first attempt to force quiescent states. ··· 3532 3526 the allocated input device; If set to 0, video driver 3533 3527 will only send out the event without touching backlight 3534 3528 brightness level. 3535 - default: 0 3529 + default: 1 3536 3530 3537 3531 virtio_mmio.device= 3538 3532 [VMMIO] Memory mapped virtio (platform) device.
+2 -2
Documentation/laptops/00-INDEX
··· 8 8 - information on hard disk shock protection. 9 9 dslm.c 10 10 - Simple Disk Sleep Monitor program 11 - hpfall.c 12 - - (HP) laptop accelerometer program for disk protection. 11 + freefall.c 12 + - (HP/DELL) laptop accelerometer program for disk protection. 13 13 laptop-mode.txt 14 14 - how to conserve battery power using laptop-mode. 15 15 sony-laptop.txt
+45 -14
Documentation/laptops/hpfall.c Documentation/laptops/freefall.c
··· 1 - /* Disk protection for HP machines. 1 + /* Disk protection for HP/DELL machines. 2 2 * 3 3 * Copyright 2008 Eric Piel 4 4 * Copyright 2009 Pavel Machek <pavel@ucw.cz> 5 + * Copyright 2012 Sonal Santan 6 + * Copyright 2014 Pali Rohár <pali.rohar@gmail.com> 5 7 * 6 8 * GPLv2. 7 9 */ ··· 20 18 #include <signal.h> 21 19 #include <sys/mman.h> 22 20 #include <sched.h> 21 + #include <syslog.h> 23 22 24 - char unload_heads_path[64]; 23 + static int noled; 24 + static char unload_heads_path[64]; 25 + static char device_path[32]; 26 + static const char app_name[] = "FREE FALL"; 25 27 26 - int set_unload_heads_path(char *device) 28 + static int set_unload_heads_path(char *device) 27 29 { 28 30 char devname[64]; 29 31 30 32 if (strlen(device) <= 5 || strncmp(device, "/dev/", 5) != 0) 31 33 return -EINVAL; 32 - strncpy(devname, device + 5, sizeof(devname)); 34 + strncpy(devname, device + 5, sizeof(devname) - 1); 35 + strncpy(device_path, device, sizeof(device_path) - 1); 33 36 34 37 snprintf(unload_heads_path, sizeof(unload_heads_path) - 1, 35 38 "/sys/block/%s/device/unload_heads", devname); 36 39 return 0; 37 40 } 38 - int valid_disk(void) 41 + 42 + static int valid_disk(void) 39 43 { 40 44 int fd = open(unload_heads_path, O_RDONLY); 45 + 41 46 if (fd < 0) { 42 47 perror(unload_heads_path); 43 48 return 0; ··· 54 45 return 1; 55 46 } 56 47 57 - void write_int(char *path, int i) 48 + static void write_int(char *path, int i) 58 49 { 59 50 char buf[1024]; 60 51 int fd = open(path, O_RDWR); 52 + 61 53 if (fd < 0) { 62 54 perror("open"); 63 55 exit(1); 64 56 } 57 + 65 58 sprintf(buf, "%d", i); 59 + 66 60 if (write(fd, buf, strlen(buf)) != strlen(buf)) { 67 61 perror("write"); 68 62 exit(1); 69 63 } 64 + 70 65 close(fd); 71 66 } 72 67 73 - void set_led(int on) 68 + static void set_led(int on) 74 69 { 70 + if (noled) 71 + return; 75 72 write_int("/sys/class/leds/hp::hddprotect/brightness", on); 76 73 } 77 74 78 - void protect(int seconds) 75 + static void protect(int 
seconds) 79 76 { 77 + const char *str = (seconds == 0) ? "Unparked" : "Parked"; 78 + 80 79 write_int(unload_heads_path, seconds*1000); 80 + syslog(LOG_INFO, "%s %s disk head\n", str, device_path); 81 81 } 82 82 83 - int on_ac(void) 83 + static int on_ac(void) 84 84 { 85 - // /sys/class/power_supply/AC0/online 85 + /* /sys/class/power_supply/AC0/online */ 86 + return 1; 86 87 } 87 88 88 - int lid_open(void) 89 + static int lid_open(void) 89 90 { 90 - // /proc/acpi/button/lid/LID/state 91 + /* /proc/acpi/button/lid/LID/state */ 92 + return 1; 91 93 } 92 94 93 - void ignore_me(void) 95 + static void ignore_me(int signum) 94 96 { 95 97 protect(0); 96 98 set_led(0); ··· 110 90 int main(int argc, char **argv) 111 91 { 112 92 int fd, ret; 93 + struct stat st; 113 94 struct sched_param param; 114 95 115 96 if (argc == 1) ··· 132 111 return EXIT_FAILURE; 133 112 } 134 113 135 - daemon(0, 0); 114 + if (stat("/sys/class/leds/hp::hddprotect/brightness", &st)) 115 + noled = 1; 116 + 117 + if (daemon(0, 0) != 0) { 118 + perror("daemon"); 119 + return EXIT_FAILURE; 120 + } 121 + 122 + openlog(app_name, LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1); 123 + 136 124 param.sched_priority = sched_get_priority_max(SCHED_FIFO); 137 125 sched_setscheduler(0, SCHED_FIFO, &param); 138 126 mlockall(MCL_CURRENT|MCL_FUTURE); ··· 171 141 alarm(20); 172 142 } 173 143 144 + closelog(); 174 145 close(fd); 175 146 return EXIT_SUCCESS; 176 147 }
+18 -6
MAINTAINERS
··· 156 156 157 157 8169 10/100/1000 GIGABIT ETHERNET DRIVER 158 158 M: Realtek linux nic maintainers <nic_swsd@realtek.com> 159 - M: Francois Romieu <romieu@fr.zoreil.com> 160 159 L: netdev@vger.kernel.org 161 160 S: Maintained 162 161 F: drivers/net/ethernet/realtek/r8169.c ··· 1313 1314 Q: http://patchwork.kernel.org/project/linux-sh/list/ 1314 1315 T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next 1315 1316 S: Supported 1317 + F: arch/arm/boot/dts/emev2* 1318 + F: arch/arm/boot/dts/r7s* 1319 + F: arch/arm/boot/dts/r8a* 1320 + F: arch/arm/boot/dts/sh* 1321 + F: arch/arm/configs/ape6evm_defconfig 1322 + F: arch/arm/configs/armadillo800eva_defconfig 1323 + F: arch/arm/configs/bockw_defconfig 1324 + F: arch/arm/configs/genmai_defconfig 1325 + F: arch/arm/configs/koelsch_defconfig 1326 + F: arch/arm/configs/kzm9g_defconfig 1327 + F: arch/arm/configs/lager_defconfig 1328 + F: arch/arm/configs/mackerel_defconfig 1329 + F: arch/arm/configs/marzen_defconfig 1330 + F: arch/arm/configs/shmobile_defconfig 1316 1331 F: arch/arm/mach-shmobile/ 1317 1332 F: drivers/sh/ 1318 1333 ··· 4510 4497 F: drivers/idle/i7300_idle.c 4511 4498 4512 4499 IEEE 802.15.4 SUBSYSTEM 4513 - M: Alexander Smirnov <alex.bluesman.smirnov@gmail.com> 4514 - M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> 4500 + M: Alexander Aring <alex.aring@gmail.com> 4515 4501 L: linux-zigbee-devel@lists.sourceforge.net (moderated for non-subscribers) 4516 4502 W: http://apps.sourceforge.net/trac/linux-zigbee 4517 4503 T: git git://git.kernel.org/pub/scm/linux/kernel/git/lowpan/lowpan.git ··· 6799 6787 6800 6788 PCI DRIVER FOR IMX6 6801 6789 M: Richard Zhu <r65037@freescale.com> 6802 - M: Shawn Guo <shawn.guo@linaro.org> 6790 + M: Shawn Guo <shawn.guo@freescale.com> 6803 6791 L: linux-pci@vger.kernel.org 6804 6792 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 6805 6793 S: Maintained ··· 8996 8984 8997 8985 THERMAL 8998 8986 M: Zhang Rui <rui.zhang@intel.com> 8999 
- M: Eduardo Valentin <eduardo.valentin@ti.com> 8987 + M: Eduardo Valentin <edubezval@gmail.com> 9000 8988 L: linux-pm@vger.kernel.org 9001 8989 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git 9002 8990 T: git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git ··· 9023 9011 F: drivers/platform/x86/thinkpad_acpi.c 9024 9012 9025 9013 TI BANDGAP AND THERMAL DRIVER 9026 - M: Eduardo Valentin <eduardo.valentin@ti.com> 9014 + M: Eduardo Valentin <edubezval@gmail.com> 9027 9015 L: linux-pm@vger.kernel.org 9028 9016 S: Supported 9029 9017 F: drivers/thermal/ti-soc-thermal/
+52 -49
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 16 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc6 5 5 NAME = Shuffling Zombie Juror 6 6 7 7 # *DOCUMENTATION* ··· 41 41 # descending is started. They are now explicitly listed as the 42 42 # prepare rule. 43 43 44 + # Beautify output 45 + # --------------------------------------------------------------------------- 46 + # 47 + # Normally, we echo the whole command before executing it. By making 48 + # that echo $($(quiet)$(cmd)), we now have the possibility to set 49 + # $(quiet) to choose other forms of output instead, e.g. 50 + # 51 + # quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ 52 + # cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< 53 + # 54 + # If $(quiet) is empty, the whole command will be printed. 55 + # If it is set to "quiet_", only the short version will be printed. 56 + # If it is set to "silent_", nothing will be printed at all, since 57 + # the variable $(silent_cmd_cc_o_c) doesn't exist. 58 + # 59 + # A simple variant is to prefix commands with $(Q) - that's useful 60 + # for commands that shall be hidden in non-verbose mode. 61 + # 62 + # $(Q)ln $@ :< 63 + # 64 + # If KBUILD_VERBOSE equals 0 then the above command will be hidden. 65 + # If KBUILD_VERBOSE equals 1 then the above command is displayed. 
66 + # 44 67 # To put more focus on warnings, be less verbose as default 45 68 # Use 'make V=1' to see the full commands 46 69 ··· 73 50 ifndef KBUILD_VERBOSE 74 51 KBUILD_VERBOSE = 0 75 52 endif 53 + 54 + ifeq ($(KBUILD_VERBOSE),1) 55 + quiet = 56 + Q = 57 + else 58 + quiet=quiet_ 59 + Q = @ 60 + endif 61 + 62 + # If the user is running make -s (silent mode), suppress echoing of 63 + # commands 64 + 65 + ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 66 + ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) 67 + quiet=silent_ 68 + endif 69 + else # make-3.8x 70 + ifneq ($(filter s% -s%,$(MAKEFLAGS)),) 71 + quiet=silent_ 72 + endif 73 + endif 74 + 75 + export quiet Q KBUILD_VERBOSE 76 76 77 77 # Call a source code checker (by default, "sparse") as part of the 78 78 # C compilation. ··· 174 128 175 129 # Fake the "Entering directory" message once, so that IDEs/editors are 176 130 # able to understand relative filenames. 131 + echodir := @echo 132 + quiet_echodir := @echo 133 + silent_echodir := @: 177 134 sub-make: FORCE 178 - @echo "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" 135 + $($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" 179 136 $(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \ 180 137 KBUILD_SRC=$(CURDIR) \ 181 138 KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \ ··· 340 291 341 292 export KBUILD_MODULES KBUILD_BUILTIN 342 293 export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD 343 - 344 - # Beautify output 345 - # --------------------------------------------------------------------------- 346 - # 347 - # Normally, we echo the whole command before executing it. By making 348 - # that echo $($(quiet)$(cmd)), we now have the possibility to set 349 - # $(quiet) to choose other forms of output instead, e.g. 350 - # 351 - # quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ 352 - # cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< 353 - # 354 - # If $(quiet) is empty, the whole command will be printed. 
355 - # If it is set to "quiet_", only the short version will be printed. 356 - # If it is set to "silent_", nothing will be printed at all, since 357 - # the variable $(silent_cmd_cc_o_c) doesn't exist. 358 - # 359 - # A simple variant is to prefix commands with $(Q) - that's useful 360 - # for commands that shall be hidden in non-verbose mode. 361 - # 362 - # $(Q)ln $@ :< 363 - # 364 - # If KBUILD_VERBOSE equals 0 then the above command will be hidden. 365 - # If KBUILD_VERBOSE equals 1 then the above command is displayed. 366 - 367 - ifeq ($(KBUILD_VERBOSE),1) 368 - quiet = 369 - Q = 370 - else 371 - quiet=quiet_ 372 - Q = @ 373 - endif 374 - 375 - # If the user is running make -s (silent mode), suppress echoing of 376 - # commands 377 - 378 - ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 379 - ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) 380 - quiet=silent_ 381 - endif 382 - else # make-3.8x 383 - ifneq ($(filter s% -s%,$(MAKEFLAGS)),) 384 - quiet=silent_ 385 - endif 386 - endif 387 - 388 - export quiet Q KBUILD_VERBOSE 389 294 390 295 ifneq ($(CC),) 391 296 ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1) ··· 1176 1173 # Packaging of the kernel to various formats 1177 1174 # --------------------------------------------------------------------------- 1178 1175 # rpm target kept for backward compatibility 1179 - package-dir := $(srctree)/scripts/package 1176 + package-dir := scripts/package 1180 1177 1181 1178 %src-pkg: FORCE 1182 1179 $(Q)$(MAKE) $(build)=$(package-dir) $@
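The comment block moved above describes kbuild's $(quiet)/$(Q) indirection. A toy standalone Makefile (not the kernel's real one) makes the mechanism concrete; .RECIPEPREFIX is used here only to avoid literal recipe tabs, and note the real kbuild additionally maps make -s onto quiet=silent_, which this sketch omits:

```shell
# Toy demonstration of the $(quiet)/$(Q) indirection: by default only the
# short "GEN" line prints; with V=1 the full command is echoed instead.
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
.RECIPEPREFIX = >
ifeq ($(V),1)
  quiet =
  Q =
else
  quiet = quiet_
  Q = @
endif

# Short and full forms of the same command, kbuild style
quiet_cmd_gen = GEN     $@
      cmd_gen = echo generated > $@

out.txt:
> @echo '$($(quiet)cmd_gen)'
> $(Q)$(cmd_gen)
EOF
out_quiet=$(make -s -C "$workdir" out.txt)
rm -f "$workdir/out.txt"
out_verbose=$(make -s -C "$workdir" V=1 out.txt)
echo "default: $out_quiet"
echo "V=1:     $out_verbose"
```

The trick is that `$($(quiet)cmd_gen)` expands to `$(quiet_cmd_gen)` when quiet is "quiet_", and to `$(cmd_gen)` when it is empty, so one recipe line serves both verbosity levels.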
+1
arch/arm/Kconfig
··· 6 6 select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST 7 7 select ARCH_HAVE_CUSTOM_GPIO_H 8 8 select ARCH_MIGHT_HAVE_PC_PARPORT 9 + select ARCH_SUPPORTS_ATOMIC_RMW 9 10 select ARCH_USE_BUILTIN_BSWAP 10 11 select ARCH_USE_CMPXCHG_LOCKREF 11 12 select ARCH_WANT_IPC_PARSE_VERSION
+2 -2
arch/arm/boot/dts/am335x-evm.dts
··· 529 529 serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 530 530 0 0 1 2 531 531 >; 532 - tx-num-evt = <1>; 533 - rx-num-evt = <1>; 532 + tx-num-evt = <32>; 533 + rx-num-evt = <32>; 534 534 }; 535 535 536 536 &tps {
+2 -2
arch/arm/boot/dts/am335x-evmsk.dts
··· 560 560 serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 561 561 0 0 1 2 562 562 >; 563 - tx-num-evt = <1>; 564 - rx-num-evt = <1>; 563 + tx-num-evt = <32>; 564 + rx-num-evt = <32>; 565 565 }; 566 566 567 567 &tscadc {
+6
arch/arm/boot/dts/am335x-igep0033.dtsi
··· 105 105 106 106 &cpsw_emac0 { 107 107 phy_id = <&davinci_mdio>, <0>; 108 + phy-mode = "rmii"; 108 109 }; 109 110 110 111 &cpsw_emac1 { 111 112 phy_id = <&davinci_mdio>, <1>; 113 + phy-mode = "rmii"; 114 + }; 115 + 116 + &phy_sel { 117 + rmii-clock-ext; 112 118 }; 113 119 114 120 &elm {
+1 -1
arch/arm/boot/dts/at91sam9n12.dtsi
··· 925 925 compatible = "atmel,at91rm9200-ohci", "usb-ohci"; 926 926 reg = <0x00500000 0x00100000>; 927 927 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 928 - clocks = <&usb>, <&uhphs_clk>, <&udphs_clk>, 928 + clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, 929 929 <&uhpck>; 930 930 clock-names = "usb_clk", "ohci_clk", "hclk", "uhpck"; 931 931 status = "disabled";
+4 -2
arch/arm/boot/dts/at91sam9x5.dtsi
··· 1045 1045 reg = <0x00500000 0x80000 1046 1046 0xf803c000 0x400>; 1047 1047 interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; 1048 + clocks = <&usb>, <&udphs_clk>; 1049 + clock-names = "hclk", "pclk"; 1048 1050 status = "disabled"; 1049 1051 1050 1052 ep0 { ··· 1124 1122 compatible = "atmel,at91sam9rl-pwm"; 1125 1123 reg = <0xf8034000 0x300>; 1126 1124 interrupts = <18 IRQ_TYPE_LEVEL_HIGH 4>; 1125 + clocks = <&pwm_clk>; 1127 1126 #pwm-cells = <3>; 1128 1127 status = "disabled"; 1129 1128 }; ··· 1156 1153 compatible = "atmel,at91rm9200-ohci", "usb-ohci"; 1157 1154 reg = <0x00600000 0x100000>; 1158 1155 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1159 - clocks = <&usb>, <&uhphs_clk>, <&udphs_clk>, 1160 - <&uhpck>; 1156 + clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1161 1157 clock-names = "usb_clk", "ohci_clk", "hclk", "uhpck"; 1162 1158 status = "disabled"; 1163 1159 };
+1
arch/arm/boot/dts/dra7-evm.dts
··· 240 240 regulator-name = "ldo3"; 241 241 regulator-min-microvolt = <1800000>; 242 242 regulator-max-microvolt = <1800000>; 243 + regulator-always-on; 243 244 regulator-boot-on; 244 245 }; 245 246
+6 -4
arch/arm/boot/dts/dra7xx-clocks.dtsi
··· 673 673 674 674 l3_iclk_div: l3_iclk_div { 675 675 #clock-cells = <0>; 676 - compatible = "fixed-factor-clock"; 676 + compatible = "ti,divider-clock"; 677 + ti,max-div = <2>; 678 + ti,bit-shift = <4>; 679 + reg = <0x0100>; 677 680 clocks = <&dpll_core_h12x2_ck>; 678 - clock-mult = <1>; 679 - clock-div = <1>; 681 + ti,index-power-of-two; 680 682 }; 681 683 682 684 l4_root_clk_div: l4_root_clk_div { ··· 686 684 compatible = "fixed-factor-clock"; 687 685 clocks = <&l3_iclk_div>; 688 686 clock-mult = <1>; 689 - clock-div = <1>; 687 + clock-div = <2>; 690 688 }; 691 689 692 690 video1_clk2_div: video1_clk2_div {
+1 -1
arch/arm/boot/dts/exynos4.dtsi
··· 554 554 interrupts = <0 37 0>, <0 38 0>, <0 39 0>, <0 40 0>, <0 41 0>; 555 555 clocks = <&clock CLK_PWM>; 556 556 clock-names = "timers"; 557 - #pwm-cells = <2>; 557 + #pwm-cells = <3>; 558 558 status = "disabled"; 559 559 }; 560 560
+4 -1
arch/arm/boot/dts/exynos5420.dtsi
··· 167 167 compatible = "samsung,exynos5420-audss-clock"; 168 168 reg = <0x03810000 0x0C>; 169 169 #clock-cells = <1>; 170 - clocks = <&clock CLK_FIN_PLL>, <&clock CLK_FOUT_EPLL>, 170 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MAU_EPLL>, 171 171 <&clock CLK_SCLK_MAUDIO0>, <&clock CLK_SCLK_MAUPCM0>; 172 172 clock-names = "pll_ref", "pll_in", "sclk_audio", "sclk_pcm_in"; 173 173 }; ··· 260 260 mfc_pd: power-domain@10044060 { 261 261 compatible = "samsung,exynos4210-pd"; 262 262 reg = <0x10044060 0x20>; 263 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, 264 + <&clock CLK_MOUT_USER_ACLK333>; 265 + clock-names = "oscclk", "pclk0", "clk0"; 263 266 }; 264 267 265 268 disp_pd: power-domain@100440C0 {
+19 -9
arch/arm/kernel/kprobes-test-arm.c
··· 74 74 TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\ 75 75 TEST_RR( op "gt" s " r12, r13" ", r",14,val, ", ror r",14,7,"")\ 76 76 TEST_RR( op "le" s " r14, r",0, val, ", r13" ", lsl r",14,8,"")\ 77 - TEST_RR( op s " r12, pc" ", r",14,val, ", ror r",14,7,"")\ 78 - TEST_RR( op s " r14, r",0, val, ", pc" ", lsl r",14,8,"")\ 79 77 TEST_R( op "eq" s " r0, r",11,VAL1,", #0xf5") \ 80 78 TEST_R( op "ne" s " r11, r",0, VAL1,", #0xf5000000") \ 81 79 TEST_R( op s " r7, r",8, VAL2,", #0x000af000") \ ··· 101 103 TEST_RRR( op "ge r",11,VAL1,", r",14,N(val),", asr r",7, 6,"") \ 102 104 TEST_RR( op "le r13" ", r",14,val, ", ror r",14,7,"") \ 103 105 TEST_RR( op "gt r",0, val, ", r13" ", lsl r",14,8,"") \ 104 - TEST_RR( op " pc" ", r",14,val, ", ror r",14,7,"") \ 105 - TEST_RR( op " r",0, val, ", pc" ", lsl r",14,8,"") \ 106 106 TEST_R( op "eq r",11,VAL1,", #0xf5") \ 107 107 TEST_R( op "ne r",0, VAL1,", #0xf5000000") \ 108 108 TEST_R( op " r",8, VAL2,", #0x000af000") ··· 121 125 TEST_RR( op "ge" s " r11, r",11,N(val),", asr r",7, 6,"") \ 122 126 TEST_RR( op "lt" s " r12, r",11,val, ", ror r",14,7,"") \ 123 127 TEST_R( op "gt" s " r14, r13" ", lsl r",14,8,"") \ 124 - TEST_R( op "le" s " r14, pc" ", lsl r",14,8,"") \ 125 128 TEST( op "eq" s " r0, #0xf5") \ 126 129 TEST( op "ne" s " r11, #0xf5000000") \ 127 130 TEST( op s " r7, #0x000af000") \ ··· 154 159 TEST_SUPPORTED("cmp pc, #0x1000"); 155 160 TEST_SUPPORTED("cmp sp, #0x1000"); 156 161 157 - /* Data-processing with PC as shift*/ 162 + /* Data-processing with PC and a shift count in a register */ 158 163 TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) " @ cmp r12, r14, asl pc") 159 164 TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) " @ mov r12, r14, asl pc") 160 165 TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) " @ add r10, r12, r14, asl pc") 166 + TEST_UNSUPPORTED(__inst_arm(0xe151021f) " @ cmp r1, pc, lsl r2") 167 + TEST_UNSUPPORTED(__inst_arm(0xe17f0211) " @ cmn pc, r1, lsl r2") 168 + 
TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) " @ mov r1, pc, lsl r2") 169 + TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) " @ mov pc, r1, lsl r2") 170 + TEST_UNSUPPORTED(__inst_arm(0xe042131f) " @ sub r1, r2, pc, lsl r3") 171 + TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) " @ bic r1, pc, r2, lsl r3") 172 + TEST_UNSUPPORTED(__inst_arm(0xe081f312) " @ add pc, r1, r2, lsl r3") 161 173 162 - /* Data-processing with PC as shift*/ 174 + /* Data-processing with PC as a target and status registers updated */ 163 175 TEST_UNSUPPORTED("movs pc, r1") 164 176 TEST_UNSUPPORTED("movs pc, r1, lsl r2") 165 177 TEST_UNSUPPORTED("movs pc, #0x10000") ··· 189 187 TEST_BF_R ("add pc, pc, r",14,2f-1f-8,"") 190 188 TEST_BF_R ("add pc, r",14,2f-1f-8,", pc") 191 189 TEST_BF_R ("mov pc, r",0,2f,"") 192 - TEST_BF_RR("mov pc, r",0,2f,", asl r",1,0,"") 190 + TEST_BF_R ("add pc, pc, r",14,(2f-1f-8)*2,", asr #1") 193 191 TEST_BB( "sub pc, pc, #1b-2b+8") 194 192 #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7) 195 193 TEST_BB( "sub pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */ 196 194 #endif 197 195 TEST_BB_R( "sub pc, pc, r",14, 1f-2f+8,"") 198 196 TEST_BB_R( "rsb pc, r",14,1f-2f+8,", pc") 199 - TEST_RR( "add pc, pc, r",10,-2,", asl r",11,1,"") 197 + TEST_R( "add pc, pc, r",10,-2,", asl #1") 200 198 #ifdef CONFIG_THUMB2_KERNEL 201 199 TEST_ARM_TO_THUMB_INTERWORK_R("add pc, pc, r",0,3f-1f-8+1,"") 202 200 TEST_ARM_TO_THUMB_INTERWORK_R("sub pc, r",0,3f+8+1,", #8") ··· 218 216 TEST_BB_R("bx r",7,2f,"") 219 217 TEST_BF_R("bxeq r",14,2f,"") 220 218 219 + #if __LINUX_ARM_ARCH__ >= 5 221 220 TEST_R("clz r0, r",0, 0x0,"") 222 221 TEST_R("clzeq r7, r",14,0x1,"") 223 222 TEST_R("clz lr, r",7, 0xffffffff,"") ··· 340 337 TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2") 341 338 TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2") 342 339 TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc") 340 + #endif 343 341 344 342 TEST_GROUP("Multiply and multiply-accumulate") 
345 343 ··· 563 559 TEST_UNSUPPORTED("ldrsht r1, [r2], #48") 564 560 #endif 565 561 562 + #if __LINUX_ARM_ARCH__ >= 5 566 563 TEST_RPR( "strd r",0, VAL1,", [r",1, 48,", -r",2,24,"]") 567 564 TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") 568 565 TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") ··· 600 595 TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) " @ ldrd r12, [pc, #48]!") 601 596 TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) " @ ldrd pc, [r9], #48") 602 597 TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) " @ ldrd lr, [r9], #48") 598 + #endif 603 599 604 600 TEST_GROUP("Miscellaneous") 605 601 ··· 1233 1227 TEST_COPROCESSOR( "mrc"two" 0, 0, r0, cr0, cr0, 0") 1234 1228 1235 1229 COPROCESSOR_INSTRUCTIONS_ST_LD("",e) 1230 + #if __LINUX_ARM_ARCH__ >= 5 1236 1231 COPROCESSOR_INSTRUCTIONS_MC_MR("",e) 1232 + #endif 1237 1233 TEST_UNSUPPORTED("svc 0") 1238 1234 TEST_UNSUPPORTED("svc 0xffffff") 1239 1235 ··· 1295 1287 TEST( "blx __dummy_thumb_subroutine_odd") 1296 1288 #endif /* __LINUX_ARM_ARCH__ >= 6 */ 1297 1289 1290 + #if __LINUX_ARM_ARCH__ >= 5 1298 1291 COPROCESSOR_INSTRUCTIONS_ST_LD("2",f) 1292 + #endif 1299 1293 #if __LINUX_ARM_ARCH__ >= 6 1300 1294 COPROCESSOR_INSTRUCTIONS_MC_MR("2",f) 1301 1295 #endif
+10
arch/arm/kernel/kprobes-test.c
··· 225 225 static int post_handler_called; 226 226 static int jprobe_func_called; 227 227 static int kretprobe_handler_called; 228 + static int tests_failed; 228 229 229 230 #define FUNC_ARG1 0x12345678 230 231 #define FUNC_ARG2 0xabcdef ··· 462 461 463 462 pr_info(" jprobe\n"); 464 463 ret = test_jprobe(func); 464 + #if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE) 465 + if (ret == -EINVAL) { 466 + pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n"); 467 + tests_failed = ret; 468 + ret = 0; 469 + } 470 + #endif 465 471 if (ret < 0) 466 472 return ret; 467 473 ··· 1679 1671 #endif 1680 1672 1681 1673 out: 1674 + if (ret == 0) 1675 + ret = tests_failed; 1682 1676 if (ret == 0) 1683 1677 pr_info("Finished kprobe tests OK\n"); 1684 1678 else
+3 -3
arch/arm/kernel/probes-arm.c
··· 341 341 /* CMP (reg-shift reg) cccc 0001 0101 xxxx xxxx xxxx 0xx1 xxxx */ 342 342 /* CMN (reg-shift reg) cccc 0001 0111 xxxx xxxx xxxx 0xx1 xxxx */ 343 343 DECODE_EMULATEX (0x0f900090, 0x01100010, PROBES_DATA_PROCESSING_REG, 344 - REGS(ANY, 0, NOPC, 0, ANY)), 344 + REGS(NOPC, 0, NOPC, 0, NOPC)), 345 345 346 346 /* MOV (reg-shift reg) cccc 0001 101x xxxx xxxx xxxx 0xx1 xxxx */ 347 347 /* MVN (reg-shift reg) cccc 0001 111x xxxx xxxx xxxx 0xx1 xxxx */ 348 348 DECODE_EMULATEX (0x0fa00090, 0x01a00010, PROBES_DATA_PROCESSING_REG, 349 - REGS(0, ANY, NOPC, 0, ANY)), 349 + REGS(0, NOPC, NOPC, 0, NOPC)), 350 350 351 351 /* AND (reg-shift reg) cccc 0000 000x xxxx xxxx xxxx 0xx1 xxxx */ 352 352 /* EOR (reg-shift reg) cccc 0000 001x xxxx xxxx xxxx 0xx1 xxxx */ ··· 359 359 /* ORR (reg-shift reg) cccc 0001 100x xxxx xxxx xxxx 0xx1 xxxx */ 360 360 /* BIC (reg-shift reg) cccc 0001 110x xxxx xxxx xxxx 0xx1 xxxx */ 361 361 DECODE_EMULATEX (0x0e000090, 0x00000010, PROBES_DATA_PROCESSING_REG, 362 - REGS(ANY, ANY, NOPC, 0, ANY)), 362 + REGS(NOPC, NOPC, NOPC, 0, NOPC)), 363 363 364 364 DECODE_END 365 365 };
+1 -1
arch/arm/kernel/topology.c
··· 275 275 cpu_topology[cpuid].socket_id, mpidr); 276 276 } 277 277 278 - static inline const int cpu_corepower_flags(void) 278 + static inline int cpu_corepower_flags(void) 279 279 { 280 280 return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN; 281 281 }
+3 -5
arch/arm/mach-exynos/exynos.c
··· 173 173 174 174 void __init exynos_cpuidle_init(void) 175 175 { 176 - if (soc_is_exynos5440()) 177 - return; 178 - 179 - platform_device_register(&exynos_cpuidle); 176 + if (soc_is_exynos4210() || soc_is_exynos5250()) 177 + platform_device_register(&exynos_cpuidle); 180 178 } 181 179 182 180 void __init exynos_cpufreq_init(void) ··· 295 297 * This is called from smp_prepare_cpus if we've built for SMP, but 296 298 * we still need to set it up for PM and firmware ops if not. 297 299 */ 298 - if (!IS_ENABLED(SMP)) 300 + if (!IS_ENABLED(CONFIG_SMP)) 299 301 exynos_sysram_init(); 300 302 301 303 exynos_cpuidle_init();
+7 -2
arch/arm/mach-exynos/firmware.c
··· 57 57 58 58 boot_reg = sysram_ns_base_addr + 0x1c; 59 59 60 - if (!soc_is_exynos4212() && !soc_is_exynos3250()) 61 - boot_reg += 4*cpu; 60 + /* 61 + * Almost all Exynos-series of SoCs that run in secure mode don't need 62 + * additional offset for every CPU, with Exynos4412 being the only 63 + * exception. 64 + */ 65 + if (soc_is_exynos4412()) 66 + boot_reg += 4 * cpu; 62 67 63 68 __raw_writel(boot_addr, boot_reg); 64 69 return 0;
+6 -4
arch/arm/mach-exynos/hotplug.c
··· 40 40 41 41 static inline void platform_do_lowpower(unsigned int cpu, int *spurious) 42 42 { 43 + u32 mpidr = cpu_logical_map(cpu); 44 + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); 45 + 43 46 for (;;) { 44 47 45 - /* make cpu1 to be turned off at next WFI command */ 46 - if (cpu == 1) 47 - exynos_cpu_power_down(cpu); 48 + /* Turn the CPU off on next WFI instruction. */ 49 + exynos_cpu_power_down(core_id); 48 50 49 51 wfi(); 50 52 51 - if (pen_release == cpu_logical_map(cpu)) { 53 + if (pen_release == core_id) { 52 54 /* 53 55 * OK, proper wakeup, we're done 54 56 */
+19 -15
arch/arm/mach-exynos/platsmp.c
··· 90 90 static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) 91 91 { 92 92 unsigned long timeout; 93 - unsigned long phys_cpu = cpu_logical_map(cpu); 93 + u32 mpidr = cpu_logical_map(cpu); 94 + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); 94 95 int ret = -ENOSYS; 95 96 96 97 /* ··· 105 104 * the holding pen - release it, then wait for it to flag 106 105 * that it has been released by resetting pen_release. 107 106 * 108 - * Note that "pen_release" is the hardware CPU ID, whereas 107 + * Note that "pen_release" is the hardware CPU core ID, whereas 109 108 * "cpu" is Linux's internal ID. 110 109 */ 111 - write_pen_release(phys_cpu); 110 + write_pen_release(core_id); 112 111 113 - if (!exynos_cpu_power_state(cpu)) { 114 - exynos_cpu_power_up(cpu); 112 + if (!exynos_cpu_power_state(core_id)) { 113 + exynos_cpu_power_up(core_id); 115 114 timeout = 10; 116 115 117 116 /* wait max 10 ms until cpu1 is on */ 118 - while (exynos_cpu_power_state(cpu) != S5P_CORE_LOCAL_PWR_EN) { 117 + while (exynos_cpu_power_state(core_id) 118 + != S5P_CORE_LOCAL_PWR_EN) { 119 119 if (timeout-- == 0) 120 120 break; 121 121 ··· 147 145 * Try to set boot address using firmware first 148 146 * and fall back to boot register if it fails. 
149 147 */ 150 - ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr); 148 + ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr); 151 149 if (ret && ret != -ENOSYS) 152 150 goto fail; 153 151 if (ret == -ENOSYS) { 154 - void __iomem *boot_reg = cpu_boot_reg(phys_cpu); 152 + void __iomem *boot_reg = cpu_boot_reg(core_id); 155 153 156 154 if (IS_ERR(boot_reg)) { 157 155 ret = PTR_ERR(boot_reg); 158 156 goto fail; 159 157 } 160 - __raw_writel(boot_addr, cpu_boot_reg(phys_cpu)); 158 + __raw_writel(boot_addr, cpu_boot_reg(core_id)); 161 159 } 162 160 163 - call_firmware_op(cpu_boot, phys_cpu); 161 + call_firmware_op(cpu_boot, core_id); 164 162 165 163 arch_send_wakeup_ipi_mask(cpumask_of(cpu)); 166 164 ··· 229 227 * boot register if it fails. 230 228 */ 231 229 for (i = 1; i < max_cpus; ++i) { 232 - unsigned long phys_cpu; 233 230 unsigned long boot_addr; 231 + u32 mpidr; 232 + u32 core_id; 234 233 int ret; 235 234 236 - phys_cpu = cpu_logical_map(i); 235 + mpidr = cpu_logical_map(i); 236 + core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); 237 237 boot_addr = virt_to_phys(exynos4_secondary_startup); 238 238 239 - ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr); 239 + ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr); 240 240 if (ret && ret != -ENOSYS) 241 241 break; 242 242 if (ret == -ENOSYS) { 243 - void __iomem *boot_reg = cpu_boot_reg(phys_cpu); 243 + void __iomem *boot_reg = cpu_boot_reg(core_id); 244 244 245 245 if (IS_ERR(boot_reg)) 246 246 break; 247 - __raw_writel(boot_addr, cpu_boot_reg(phys_cpu)); 247 + __raw_writel(boot_addr, cpu_boot_reg(core_id)); 248 248 } 249 249 } 250 250 }
+60 -1
arch/arm/mach-exynos/pm_domains.c
··· 17 17 #include <linux/err.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/pm_domain.h> 20 + #include <linux/clk.h> 20 21 #include <linux/delay.h> 21 22 #include <linux/of_address.h> 22 23 #include <linux/of_platform.h> 23 24 #include <linux/sched.h> 24 25 25 26 #include "regs-pmu.h" 27 + 28 + #define MAX_CLK_PER_DOMAIN 4 26 29 27 30 /* 28 31 * Exynos specific wrapper around the generic power domain ··· 35 32 char const *name; 36 33 bool is_off; 37 34 struct generic_pm_domain pd; 35 + struct clk *oscclk; 36 + struct clk *clk[MAX_CLK_PER_DOMAIN]; 37 + struct clk *pclk[MAX_CLK_PER_DOMAIN]; 38 38 }; 39 39 40 40 static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) ··· 49 43 50 44 pd = container_of(domain, struct exynos_pm_domain, pd); 51 45 base = pd->base; 46 + 47 + /* Set oscclk before powering off a domain*/ 48 + if (!power_on) { 49 + int i; 50 + 51 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 52 + if (IS_ERR(pd->clk[i])) 53 + break; 54 + if (clk_set_parent(pd->clk[i], pd->oscclk)) 55 + pr_err("%s: error setting oscclk as parent to clock %d\n", 56 + pd->name, i); 57 + } 58 + } 52 59 53 60 pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0; 54 61 __raw_writel(pwr, base); ··· 79 60 cpu_relax(); 80 61 usleep_range(80, 100); 81 62 } 63 + 64 + /* Restore clocks after powering on a domain*/ 65 + if (power_on) { 66 + int i; 67 + 68 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 69 + if (IS_ERR(pd->clk[i])) 70 + break; 71 + if (clk_set_parent(pd->clk[i], pd->pclk[i])) 72 + pr_err("%s: error setting parent to clock%d\n", 73 + pd->name, i); 74 + } 75 + } 76 + 82 77 return 0; 83 78 } 84 79 ··· 185 152 186 153 for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { 187 154 struct exynos_pm_domain *pd; 188 - int on; 155 + int on, i; 156 + struct device *dev; 189 157 190 158 pdev = of_find_device_by_node(np); 159 + dev = &pdev->dev; 191 160 192 161 pd = kzalloc(sizeof(*pd), GFP_KERNEL); 193 162 if (!pd) { ··· 205 170 pd->pd.power_on = exynos_pd_power_on; 206 171 pd->pd.of_node = np; 207 172 173 + pd->oscclk = clk_get(dev, "oscclk"); 174 + if (IS_ERR(pd->oscclk)) 175 + goto no_clk; 176 + 177 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 178 + char clk_name[8]; 179 + 180 + snprintf(clk_name, sizeof(clk_name), "clk%d", i); 181 + pd->clk[i] = clk_get(dev, clk_name); 182 + if (IS_ERR(pd->clk[i])) 183 + break; 184 + snprintf(clk_name, sizeof(clk_name), "pclk%d", i); 185 + pd->pclk[i] = clk_get(dev, clk_name); 186 + if (IS_ERR(pd->pclk[i])) { 187 + clk_put(pd->clk[i]); 188 + pd->clk[i] = ERR_PTR(-EINVAL); 189 + break; 190 + } 191 + } 192 + 193 + if (IS_ERR(pd->clk[0])) 194 + clk_put(pd->oscclk); 195 + 196 + no_clk: 208 197 platform_set_drvdata(pdev, pd); 209 198 210 199 on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
+23 -8
arch/arm/mach-imx/clk-gate2.c
··· 67 67 68 68 spin_lock_irqsave(gate->lock, flags); 69 69 70 - if (gate->share_count && --(*gate->share_count) > 0) 71 - goto out; 70 + if (gate->share_count) { 71 + if (WARN_ON(*gate->share_count == 0)) 72 + goto out; 73 + else if (--(*gate->share_count) > 0) 74 + goto out; 75 + } 72 76 73 77 reg = readl(gate->reg); 74 78 reg &= ~(3 << gate->bit_idx); ··· 82 78 spin_unlock_irqrestore(gate->lock, flags); 83 79 } 84 80 85 - static int clk_gate2_is_enabled(struct clk_hw *hw) 81 + static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx) 86 82 { 87 - u32 reg; 88 - struct clk_gate2 *gate = to_clk_gate2(hw); 83 + u32 val = readl(reg); 89 84 90 - reg = readl(gate->reg); 91 - 92 - if (((reg >> gate->bit_idx) & 1) == 1) 85 + if (((val >> bit_idx) & 1) == 1) 93 86 return 1; 94 87 95 88 return 0; 89 + } 90 + 91 + static int clk_gate2_is_enabled(struct clk_hw *hw) 92 + { 93 + struct clk_gate2 *gate = to_clk_gate2(hw); 94 + 95 + if (gate->share_count) 96 + return !!(*gate->share_count); 97 + else 98 + return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx); 96 99 } 97 100 98 101 static struct clk_ops clk_gate2_ops = { ··· 127 116 gate->bit_idx = bit_idx; 128 117 gate->flags = clk_gate2_flags; 129 118 gate->lock = lock; 119 + 120 + /* Initialize share_count per hardware state */ 121 + if (share_count) 122 + *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0; 130 123 gate->share_count = share_count; 131 124 132 125 init.name = name;
+2 -2
arch/arm/mach-imx/clk-imx6q.c
··· 70 70 static const char *lvds_sels[] = { 71 71 "dummy", "dummy", "dummy", "dummy", "dummy", "dummy", 72 72 "pll4_audio", "pll5_video", "pll8_mlb", "enet_ref", 73 - "pcie_ref", "sata_ref", 73 + "pcie_ref_125m", "sata_ref_100m", 74 74 }; 75 75 76 76 enum mx6q_clks { ··· 491 491 492 492 /* All existing boards with PCIe use LVDS1 */ 493 493 if (IS_ENABLED(CONFIG_PCI_IMX6)) 494 - clk_set_parent(clk[lvds1_sel], clk[sata_ref]); 494 + clk_set_parent(clk[lvds1_sel], clk[sata_ref_100m]); 495 495 496 496 /* Set initial power mode */ 497 497 imx6q_set_lpm(WAIT_CLOCKED);
+5 -1
arch/arm/mach-mvebu/coherency.c
··· 292 292 .notifier_call = mvebu_hwcc_notifier, 293 293 }; 294 294 295 + static struct notifier_block mvebu_hwcc_pci_nb = { 296 + .notifier_call = mvebu_hwcc_notifier, 297 + }; 298 + 295 299 static void __init armada_370_coherency_init(struct device_node *np) 296 300 { 297 301 struct resource res; ··· 431 427 { 432 428 if (coherency_available()) 433 429 bus_register_notifier(&pci_bus_type, 434 - &mvebu_hwcc_nb); 430 + &mvebu_hwcc_pci_nb); 435 431 return 0; 436 432 } 437 433
+8 -1
arch/arm/mach-mvebu/headsmp-a9.S
··· 15 15 #include <linux/linkage.h> 16 16 #include <linux/init.h> 17 17 18 + #include <asm/assembler.h> 19 + 18 20 __CPUINIT 19 21 #define CPU_RESUME_ADDR_REG 0xf10182d4 20 22 ··· 24 22 .global armada_375_smp_cpu1_enable_code_end 25 23 26 24 armada_375_smp_cpu1_enable_code_start: 27 - ldr r0, [pc, #4] 25 + ARM_BE8(setend be) 26 + adr r0, 1f 27 + ldr r0, [r0] 28 28 ldr r1, [r0] 29 + ARM_BE8(rev r1, r1) 29 30 mov pc, r1 31 + 1: 30 32 .word CPU_RESUME_ADDR_REG 31 33 armada_375_smp_cpu1_enable_code_end: 32 34 33 35 ENTRY(mvebu_cortex_a9_secondary_startup) 36 + ARM_BE8(setend be) 34 37 bl v7_invalidate_l1 35 38 b secondary_startup 36 39 ENDPROC(mvebu_cortex_a9_secondary_startup)
+5 -5
arch/arm/mach-mvebu/pmsu.c
··· 201 201 202 202 /* Test the CR_C bit and set it if it was cleared */ 203 203 asm volatile( 204 - "mrc p15, 0, %0, c1, c0, 0 \n\t" 205 - "tst %0, #(1 << 2) \n\t" 206 - "orreq %0, %0, #(1 << 2) \n\t" 207 - "mcreq p15, 0, %0, c1, c0, 0 \n\t" 204 + "mrc p15, 0, r0, c1, c0, 0 \n\t" 205 + "tst r0, #(1 << 2) \n\t" 206 + "orreq r0, r0, #(1 << 2) \n\t" 207 + "mcreq p15, 0, r0, c1, c0, 0 \n\t" 208 208 "isb " 209 - : : "r" (0)); 209 + : : : "r0"); 210 210 211 211 pr_warn("Failed to suspend the system\n"); 212 212
+1 -1
arch/arm/mach-omap2/clkt_dpll.c
··· 76 76 * (assuming that it is counting N upwards), or -2 if the enclosing loop 77 77 * should skip to the next iteration (again assuming N is increasing). 78 78 */ 79 - static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n) 79 + static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n) 80 80 { 81 81 struct dpll_data *dd; 82 82 long fint, fint_min, fint_max;
+3
arch/arm/mach-omap2/cm-regbits-34xx.h
··· 26 26 #define OMAP3430_EN_WDT3_SHIFT 12 27 27 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_MASK (1 << 0) 28 28 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_SHIFT 0 29 + #define OMAP3430_IVA2_DPLL_FREQSEL_SHIFT 4 29 30 #define OMAP3430_IVA2_DPLL_FREQSEL_MASK (0xf << 4) 30 31 #define OMAP3430_EN_IVA2_DPLL_DRIFTGUARD_SHIFT 3 32 + #define OMAP3430_EN_IVA2_DPLL_SHIFT 0 31 33 #define OMAP3430_EN_IVA2_DPLL_MASK (0x7 << 0) 32 34 #define OMAP3430_ST_IVA2_SHIFT 0 33 35 #define OMAP3430_ST_IVA2_CLK_MASK (1 << 0) 36 + #define OMAP3430_AUTO_IVA2_DPLL_SHIFT 0 34 37 #define OMAP3430_AUTO_IVA2_DPLL_MASK (0x7 << 0) 35 38 #define OMAP3430_IVA2_CLK_SRC_SHIFT 19 36 39 #define OMAP3430_IVA2_CLK_SRC_WIDTH 3
+2 -1
arch/arm/mach-omap2/common.h
··· 162 162 } 163 163 #endif 164 164 165 - #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) 165 + #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 166 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) 166 167 void omap44xx_restart(enum reboot_mode mode, const char *cmd); 167 168 #else 168 169 static inline void omap44xx_restart(enum reboot_mode mode, const char *cmd)
-28
arch/arm/mach-omap2/devices.c
··· 297 297 static inline void omap_init_audio(void) {} 298 298 #endif 299 299 300 - #if defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI) || \ 301 - defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI_MODULE) 302 - 303 - static struct platform_device omap_hdmi_audio = { 304 - .name = "omap-hdmi-audio", 305 - .id = -1, 306 - }; 307 - 308 - static void __init omap_init_hdmi_audio(void) 309 - { 310 - struct omap_hwmod *oh; 311 - struct platform_device *pdev; 312 - 313 - oh = omap_hwmod_lookup("dss_hdmi"); 314 - if (!oh) 315 - return; 316 - 317 - pdev = omap_device_build("omap-hdmi-audio-dai", -1, oh, NULL, 0); 318 - WARN(IS_ERR(pdev), 319 - "Can't build omap_device for omap-hdmi-audio-dai.\n"); 320 - 321 - platform_device_register(&omap_hdmi_audio); 322 - } 323 - #else 324 - static inline void omap_init_hdmi_audio(void) {} 325 - #endif 326 - 327 300 #if defined(CONFIG_SPI_OMAP24XX) || defined(CONFIG_SPI_OMAP24XX_MODULE) 328 301 329 302 #include <linux/platform_data/spi-omap2-mcspi.h> ··· 432 459 */ 433 460 omap_init_audio(); 434 461 omap_init_camera(); 435 - omap_init_hdmi_audio(); 436 462 omap_init_mbox(); 437 463 /* If dtb is there, the devices will be created dynamically */ 438 464 if (!of_have_populated_dt()) {
+10
arch/arm/mach-omap2/dsp.c
··· 29 29 #ifdef CONFIG_TIDSPBRIDGE_DVFS 30 30 #include "omap-pm.h" 31 31 #endif 32 + #include "soc.h" 32 33 33 34 #include <linux/platform_data/dsp-omap.h> 34 35 ··· 60 59 phys_addr_t size = CONFIG_TIDSPBRIDGE_MEMPOOL_SIZE; 61 60 phys_addr_t paddr; 62 61 62 + if (!cpu_is_omap34xx()) 63 + return; 64 + 63 65 if (!size) 64 66 return; 65 67 ··· 86 82 struct platform_device *pdev; 87 83 int err = -ENOMEM; 88 84 struct omap_dsp_platform_data *pdata = &omap_dsp_pdata; 85 + 86 + if (!cpu_is_omap34xx()) 87 + return 0; 89 88 90 89 pdata->phys_mempool_base = omap_dsp_get_mempool_base(); 91 90 ··· 122 115 123 116 static void __exit omap_dsp_exit(void) 124 117 { 118 + if (!cpu_is_omap34xx()) 119 + return; 120 + 125 121 platform_device_unregister(omap_dsp_pdev); 126 122 } 127 123 module_exit(omap_dsp_exit);
+1 -1
arch/arm/mach-omap2/gpmc.c
··· 1615 1615 return ret; 1616 1616 } 1617 1617 1618 - for_each_child_of_node(pdev->dev.of_node, child) { 1618 + for_each_available_child_of_node(pdev->dev.of_node, child) { 1619 1619 1620 1620 if (!child->name) 1621 1621 continue;
+13 -5
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1268 1268 }; 1269 1269 1270 1270 /* sata */ 1271 - static struct omap_hwmod_opt_clk sata_opt_clks[] = { 1272 - { .role = "ref_clk", .clk = "sata_ref_clk" }, 1273 - }; 1274 1271 1275 1272 static struct omap_hwmod dra7xx_sata_hwmod = { 1276 1273 .name = "sata", ··· 1275 1278 .clkdm_name = "l3init_clkdm", 1276 1279 .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, 1277 1280 .main_clk = "func_48m_fclk", 1281 + .mpu_rt_idx = 1, 1278 1282 .prcm = { 1279 1283 .omap4 = { 1280 1284 .clkctrl_offs = DRA7XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, ··· 1283 1285 .modulemode = MODULEMODE_SWCTRL, 1284 1286 }, 1285 1287 }, 1286 - .opt_clks = sata_opt_clks, 1287 - .opt_clks_cnt = ARRAY_SIZE(sata_opt_clks), 1288 1288 }; 1289 1289 1290 1290 /* ··· 1727 1731 * 1728 1732 */ 1729 1733 1734 + static struct omap_hwmod_class_sysconfig dra7xx_usb_otg_ss_sysc = { 1735 + .rev_offs = 0x0000, 1736 + .sysc_offs = 0x0010, 1737 + .sysc_flags = (SYSC_HAS_DMADISABLE | SYSC_HAS_MIDLEMODE | 1738 + SYSC_HAS_SIDLEMODE), 1739 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | 1740 + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | 1741 + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), 1742 + .sysc_fields = &omap_hwmod_sysc_type2, 1743 + }; 1744 + 1730 1745 static struct omap_hwmod_class dra7xx_usb_otg_ss_hwmod_class = { 1731 1746 .name = "usb_otg_ss", 1747 + .sysc = &dra7xx_usb_otg_ss_sysc, 1732 1748 }; 1733 1749 1734 1750 /* usb_otg_ss1 */
+6
arch/arm/mach-omap2/prm-regbits-34xx.h
··· 35 35 #define OMAP3430_LOGICSTATEST_MASK (1 << 2) 36 36 #define OMAP3430_LASTLOGICSTATEENTERED_MASK (1 << 2) 37 37 #define OMAP3430_LASTPOWERSTATEENTERED_MASK (0x3 << 0) 38 + #define OMAP3430_GRPSEL_MCBSP5_MASK (1 << 10) 39 + #define OMAP3430_GRPSEL_MCBSP1_MASK (1 << 9) 38 40 #define OMAP3630_GRPSEL_UART4_MASK (1 << 18) 39 41 #define OMAP3430_GRPSEL_GPIO6_MASK (1 << 17) 40 42 #define OMAP3430_GRPSEL_GPIO5_MASK (1 << 16) ··· 44 42 #define OMAP3430_GRPSEL_GPIO3_MASK (1 << 14) 45 43 #define OMAP3430_GRPSEL_GPIO2_MASK (1 << 13) 46 44 #define OMAP3430_GRPSEL_UART3_MASK (1 << 11) 45 + #define OMAP3430_GRPSEL_GPT8_MASK (1 << 9) 46 + #define OMAP3430_GRPSEL_GPT7_MASK (1 << 8) 47 + #define OMAP3430_GRPSEL_GPT6_MASK (1 << 7) 48 + #define OMAP3430_GRPSEL_GPT5_MASK (1 << 6) 47 49 #define OMAP3430_GRPSEL_MCBSP4_MASK (1 << 2) 48 50 #define OMAP3430_GRPSEL_MCBSP3_MASK (1 << 1) 49 51 #define OMAP3430_GRPSEL_MCBSP2_MASK (1 << 0)
+1 -1
arch/arm/mm/cache-l2x0.c
··· 664 664 665 665 static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) 666 666 { 667 - unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_PART_MASK; 667 + unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK; 668 668 bool cortex_a9 = read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9; 669 669 670 670 if (rev >= L310_CACHE_ID_RTL_R2P0) {
+1
arch/arm64/Kconfig
··· 4 4 select ARCH_HAS_OPP 5 5 select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST 6 6 select ARCH_USE_CMPXCHG_LOCKREF 7 + select ARCH_SUPPORTS_ATOMIC_RMW 7 8 select ARCH_WANT_OPTIONAL_GPIOLIB 8 9 select ARCH_WANT_COMPAT_IPC_PARSE_VERSION 9 10 select ARCH_WANT_FRAME_POINTERS
+2
arch/arm64/include/asm/memory.h
··· 56 56 #define TASK_SIZE_32 UL(0x100000000) 57 57 #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ 58 58 TASK_SIZE_32 : TASK_SIZE_64) 59 + #define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ 60 + TASK_SIZE_32 : TASK_SIZE_64) 59 61 #else 60 62 #define TASK_SIZE TASK_SIZE_64 61 63 #endif /* CONFIG_COMPAT */
-2
arch/arm64/kernel/efi-stub.c
··· 12 12 #include <linux/efi.h> 13 13 #include <linux/libfdt.h> 14 14 #include <asm/sections.h> 15 - #include <generated/compile.h> 16 - #include <generated/utsrelease.h> 17 15 18 16 /* 19 17 * AArch64 requires the DTB to be 8-byte aligned in the first 512MiB from
+2
arch/arm64/mm/copypage.c
··· 27 27 copy_page(kto, kfrom); 28 28 __flush_dcache_area(kto, PAGE_SIZE); 29 29 } 30 + EXPORT_SYMBOL_GPL(__cpu_copy_user_page); 30 31 31 32 void __cpu_clear_user_page(void *kaddr, unsigned long vaddr) 32 33 { 33 34 clear_page(kaddr); 34 35 } 36 + EXPORT_SYMBOL_GPL(__cpu_clear_user_page);
+2 -1
arch/m68k/kernel/head.S
··· 921 921 jls 1f 922 922 lsrl #1,%d1 923 923 1: 924 - movel %d1,m68k_init_mapped_size 924 + lea %pc@(m68k_init_mapped_size),%a0 925 + movel %d1,%a0@ 925 926 mmu_map #PAGE_OFFSET,%pc@(L(phys_kernel_start)),%d1,\ 926 927 %pc@(m68k_supervisor_cachemode) 927 928
+2
arch/m68k/kernel/time.c
··· 11 11 */ 12 12 13 13 #include <linux/errno.h> 14 + #include <linux/export.h> 14 15 #include <linux/module.h> 15 16 #include <linux/sched.h> 16 17 #include <linux/kernel.h> ··· 31 30 32 31 33 32 unsigned long (*mach_random_get_entropy)(void); 33 + EXPORT_SYMBOL_GPL(mach_random_get_entropy); 34 34 35 35 36 36 /*
+2 -1
arch/parisc/kernel/hardware.c
··· 1210 1210 {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, 1211 1211 {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, 1212 1212 {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, 1213 - {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, 1213 + {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, 1214 + {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"}, 1214 1215 {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, 1215 1216 {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, 1216 1217 {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"},
+10 -36
arch/parisc/kernel/sys_parisc32.c
··· 4 4 * Copyright (C) 2000-2001 Hewlett Packard Company 5 5 * Copyright (C) 2000 John Marvin 6 6 * Copyright (C) 2001 Matthew Wilcox 7 + * Copyright (C) 2014 Helge Deller <deller@gmx.de> 7 8 * 8 9 * These routines maintain argument size conversion between 32bit and 64bit 9 10 * environment. Based heavily on sys_ia32.c and sys_sparc32.c. ··· 12 11 13 12 #include <linux/compat.h> 14 13 #include <linux/kernel.h> 15 - #include <linux/sched.h> 16 - #include <linux/fs.h> 17 - #include <linux/mm.h> 18 - #include <linux/file.h> 19 - #include <linux/signal.h> 20 - #include <linux/resource.h> 21 - #include <linux/times.h> 22 - #include <linux/time.h> 23 - #include <linux/smp.h> 24 - #include <linux/sem.h> 25 - #include <linux/shm.h> 26 - #include <linux/slab.h> 27 - #include <linux/uio.h> 28 - #include <linux/ncp_fs.h> 29 - #include <linux/poll.h> 30 - #include <linux/personality.h> 31 - #include <linux/stat.h> 32 - #include <linux/highmem.h> 33 - #include <linux/highuid.h> 34 - #include <linux/mman.h> 35 - #include <linux/binfmts.h> 36 - #include <linux/namei.h> 37 - #include <linux/vfs.h> 38 - #include <linux/ptrace.h> 39 - #include <linux/swap.h> 40 14 #include <linux/syscalls.h> 41 15 42 - #include <asm/types.h> 43 - #include <asm/uaccess.h> 44 - #include <asm/mmu_context.h> 45 - 46 - #undef DEBUG 47 - 48 - #ifdef DEBUG 49 - #define DBG(x) printk x 50 - #else 51 - #define DBG(x) 52 - #endif 53 16 54 17 asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, 55 18 int r22, int r21, int r20) ··· 21 56 printk(KERN_ERR "%s(%d): Unimplemented 32 on 64 syscall #%d!\n", 22 57 current->comm, current->pid, r20); 23 58 return -ENOSYS; 59 + } 60 + 61 + asmlinkage long sys32_fanotify_mark(compat_int_t fanotify_fd, compat_uint_t flags, 62 + compat_uint_t mask0, compat_uint_t mask1, compat_int_t dfd, 63 + const char __user * pathname) 64 + { 65 + return sys_fanotify_mark(fanotify_fd, flags, 66 + ((__u64)mask1 << 32) | mask0, 67 + dfd, pathname); 24 68 }
+1 -1
arch/parisc/kernel/syscall_table.S
··· 418 418 ENTRY_SAME(accept4) /* 320 */ 419 419 ENTRY_SAME(prlimit64) 420 420 ENTRY_SAME(fanotify_init) 421 - ENTRY_COMP(fanotify_mark) 421 + ENTRY_DIFF(fanotify_mark) 422 422 ENTRY_COMP(clock_adjtime) 423 423 ENTRY_SAME(name_to_handle_at) /* 325 */ 424 424 ENTRY_COMP(open_by_handle_at)
+3 -1
arch/powerpc/Kconfig
··· 145 145 select HAVE_IRQ_EXIT_ON_IRQ_STACK 146 146 select ARCH_USE_CMPXCHG_LOCKREF if PPC64 147 147 select HAVE_ARCH_AUDITSYSCALL 148 + select ARCH_SUPPORTS_ATOMIC_RMW 148 149 149 150 config GENERIC_CSUM 150 151 def_bool CPU_LITTLE_ENDIAN ··· 415 414 config CRASH_DUMP 416 415 bool "Build a kdump crash kernel" 417 416 depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP) 418 - select RELOCATABLE if PPC64 || 44x || FSL_BOOKE 417 + select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE 419 418 help 420 419 Build a kernel suitable for use as a kdump capture kernel. 421 420 The same kernel binary can be used as production kernel and dump ··· 1018 1017 if PPC64 1019 1018 config RELOCATABLE 1020 1019 bool "Build a relocatable kernel" 1020 + depends on !COMPILE_TEST 1021 1021 select NONSTATIC_KERNEL 1022 1022 help 1023 1023 This builds a kernel image that is capable of running anywhere
+1 -9
arch/powerpc/include/asm/mmu.h
··· 19 19 #define MMU_FTR_TYPE_40x ASM_CONST(0x00000004) 20 20 #define MMU_FTR_TYPE_44x ASM_CONST(0x00000008) 21 21 #define MMU_FTR_TYPE_FSL_E ASM_CONST(0x00000010) 22 - #define MMU_FTR_TYPE_3E ASM_CONST(0x00000020) 23 - #define MMU_FTR_TYPE_47x ASM_CONST(0x00000040) 22 + #define MMU_FTR_TYPE_47x ASM_CONST(0x00000020) 24 23 25 24 /* 26 25 * This is individual features ··· 105 106 MMU_FTR_CI_LARGE_PAGE 106 107 #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ 107 108 MMU_FTR_CI_LARGE_PAGE | MMU_FTR_NO_SLBIE_B 108 - #define MMU_FTRS_A2 MMU_FTR_TYPE_3E | MMU_FTR_USE_TLBILX | \ 109 - MMU_FTR_USE_TLBIVAX_BCAST | \ 110 - MMU_FTR_LOCK_BCAST_INVAL | \ 111 - MMU_FTR_USE_TLBRSRV | \ 112 - MMU_FTR_USE_PAIRED_MAS | \ 113 - MMU_FTR_TLBIEL | \ 114 - MMU_FTR_16M_PAGE 115 109 #ifndef __ASSEMBLY__ 116 110 #include <asm/cputable.h> 117 111
+1 -2
arch/powerpc/include/asm/perf_event_server.h
··· 61 61 #define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */ 62 62 #define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */ 63 63 #define PPMU_HAS_SIER 0x00000040 /* Has SIER */ 64 - #define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */ 65 - #define PPMU_EBB 0x00000100 /* supports event based branch */ 64 + #define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */ 66 65 67 66 /* 68 67 * Values for flags to get_alternatives()
+1 -1
arch/powerpc/kernel/idle_power7.S
··· 131 131 132 132 _GLOBAL(power7_sleep) 133 133 li r3,1 134 - li r4,0 134 + li r4,1 135 135 b power7_powersave_common 136 136 /* No return */ 137 137
+1 -1
arch/powerpc/kernel/smp.c
··· 747 747 748 748 #ifdef CONFIG_SCHED_SMT 749 749 /* cpumask of CPUs with asymetric SMT dependancy */ 750 - static const int powerpc_smt_flags(void) 750 + static int powerpc_smt_flags(void) 751 751 { 752 752 int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; 753 753
-5
arch/powerpc/kvm/book3s_hv_interrupts.S
··· 127 127 stw r10, HSTATE_PMC + 24(r13) 128 128 stw r11, HSTATE_PMC + 28(r13) 129 129 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201) 130 - BEGIN_FTR_SECTION 131 - mfspr r9, SPRN_SIER 132 - std r8, HSTATE_MMCR + 40(r13) 133 - std r9, HSTATE_MMCR + 48(r13) 134 - END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 135 130 31: 136 131 137 132 /*
+1 -11
arch/powerpc/mm/mmu_context_nohash.c
··· 410 410 } else if (mmu_has_feature(MMU_FTR_TYPE_47x)) { 411 411 first_context = 1; 412 412 last_context = 65535; 413 - } else 414 - #ifdef CONFIG_PPC_BOOK3E_MMU 415 - if (mmu_has_feature(MMU_FTR_TYPE_3E)) { 416 - u32 mmucfg = mfspr(SPRN_MMUCFG); 417 - u32 pid_bits = (mmucfg & MMUCFG_PIDSIZE_MASK) 418 - >> MMUCFG_PIDSIZE_SHIFT; 419 - first_context = 1; 420 - last_context = (1UL << (pid_bits + 1)) - 1; 421 - } else 422 - #endif 423 - { 413 + } else { 424 414 first_context = 1; 425 415 last_context = 255; 426 416 }
+7 -3
arch/powerpc/net/bpf_jit_comp.c
··· 390 390 case BPF_ANC | SKF_AD_VLAN_TAG: 391 391 case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: 392 392 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); 393 + BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000); 394 + 393 395 PPC_LHZ_OFFS(r_A, r_skb, offsetof(struct sk_buff, 394 396 vlan_tci)); 395 - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) 396 - PPC_ANDI(r_A, r_A, VLAN_VID_MASK); 397 - else 397 + if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { 398 + PPC_ANDI(r_A, r_A, ~VLAN_TAG_PRESENT); 399 + } else { 398 400 PPC_ANDI(r_A, r_A, VLAN_TAG_PRESENT); 401 + PPC_SRWI(r_A, r_A, 12); 402 + } 399 403 break; 400 404 case BPF_ANC | SKF_AD_QUEUE: 401 405 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
+22 -4
arch/powerpc/perf/core-book3s.c
··· 485 485 * check that the PMU supports EBB, meaning those that don't can still 486 486 * use bit 63 of the event code for something else if they wish. 487 487 */ 488 - return (ppmu->flags & PPMU_EBB) && 488 + return (ppmu->flags & PPMU_ARCH_207S) && 489 489 ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1); 490 490 } 491 491 ··· 777 777 if (ppmu->flags & PPMU_HAS_SIER) 778 778 sier = mfspr(SPRN_SIER); 779 779 780 - if (ppmu->flags & PPMU_EBB) { 780 + if (ppmu->flags & PPMU_ARCH_207S) { 781 781 pr_info("MMCR2: %016lx EBBHR: %016lx\n", 782 782 mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR)); 783 783 pr_info("EBBRR: %016lx BESCR: %016lx\n", ··· 996 996 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); 997 997 998 998 local64_add(delta, &event->count); 999 - local64_sub(delta, &event->hw.period_left); 999 + 1000 + /* 1001 + * A number of places program the PMC with (0x80000000 - period_left). 1002 + * We never want period_left to be less than 1 because we will program 1003 + * the PMC with a value >= 0x800000000 and an edge detected PMC will 1004 + * roll around to 0 before taking an exception. We have seen this 1005 + * on POWER8. 1006 + * 1007 + * To fix this, clamp the minimum value of period_left to 1. 1008 + */ 1009 + do { 1010 + prev = local64_read(&event->hw.period_left); 1011 + val = prev - delta; 1012 + if (val < 1) 1013 + val = 1; 1014 + } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev); 1000 1015 } 1001 1016 1002 1017 /* ··· 1314 1299 ppmu->config_bhrb(cpuhw->bhrb_filter); 1315 1300 1316 1301 write_mmcr0(cpuhw, mmcr0); 1302 + 1303 + if (ppmu->flags & PPMU_ARCH_207S) 1304 + mtspr(SPRN_MMCR2, 0); 1317 1305 1318 1306 /* 1319 1307 * Enable instruction sampling if necessary ··· 1714 1696 1715 1697 if (has_branch_stack(event)) { 1716 1698 /* PMU has BHRB enabled */ 1717 - if (!(ppmu->flags & PPMU_BHRB)) 1699 + if (!(ppmu->flags & PPMU_ARCH_207S)) 1718 1700 return -EOPNOTSUPP; 1719 1701 } 1720 1702
+1 -1
arch/powerpc/perf/power8-pmu.c
··· 792 792 .get_constraint = power8_get_constraint, 793 793 .get_alternatives = power8_get_alternatives, 794 794 .disable_pmc = power8_disable_pmc, 795 - .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB | PPMU_EBB, 795 + .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S, 796 796 .n_generic = ARRAY_SIZE(power8_generic_events), 797 797 .generic_events = power8_generic_events, 798 798 .cache_events = &power8_cache_events,
+2
arch/powerpc/platforms/cell/spu_syscalls.c
··· 111 111 return ret; 112 112 } 113 113 114 + #ifdef CONFIG_COREDUMP 114 115 int elf_coredump_extra_notes_size(void) 115 116 { 116 117 struct spufs_calls *calls; ··· 143 142 144 143 return ret; 145 144 } 145 + #endif 146 146 147 147 void notify_spus_active(void) 148 148 {
+2 -1
arch/powerpc/platforms/cell/spufs/Makefile
··· 1 1 2 2 obj-$(CONFIG_SPU_FS) += spufs.o 3 - spufs-y += inode.o file.o context.o syscalls.o coredump.o 3 + spufs-y += inode.o file.o context.o syscalls.o 4 4 spufs-y += sched.o backing_ops.o hw_ops.o run.o gang.o 5 5 spufs-y += switch.o fault.o lscsa_alloc.o 6 + spufs-$(CONFIG_COREDUMP) += coredump.o 6 7 7 8 # magic for the trace events 8 9 CFLAGS_sched.o := -I$(src)
+4 -2
arch/powerpc/platforms/cell/spufs/syscalls.c
··· 79 79 struct spufs_calls spufs_calls = { 80 80 .create_thread = do_spu_create, 81 81 .spu_run = do_spu_run, 82 - .coredump_extra_notes_size = spufs_coredump_extra_notes_size, 83 - .coredump_extra_notes_write = spufs_coredump_extra_notes_write, 84 82 .notify_spus_active = do_notify_spus_active, 85 83 .owner = THIS_MODULE, 84 + #ifdef CONFIG_COREDUMP 85 + .coredump_extra_notes_size = spufs_coredump_extra_notes_size, 86 + .coredump_extra_notes_write = spufs_coredump_extra_notes_write, 87 + #endif 86 88 };
+1
arch/sparc/Kconfig
··· 78 78 select HAVE_C_RECORDMCOUNT 79 79 select NO_BOOTMEM 80 80 select HAVE_ARCH_AUDITSYSCALL 81 + select ARCH_SUPPORTS_ATOMIC_RMW 81 82 82 83 config ARCH_DEFCONFIG 83 84 string
+8 -1
arch/um/kernel/tlb.c
··· 12 12 #include <mem_user.h> 13 13 #include <os.h> 14 14 #include <skas.h> 15 + #include <kern_util.h> 15 16 16 17 struct host_vm_change { 17 18 struct host_vm_op { ··· 124 123 { 125 124 struct host_vm_op *last; 126 125 int ret = 0; 126 + 127 + if ((addr >= STUB_START) && (addr < STUB_END)) 128 + return -EINVAL; 127 129 128 130 if (hvc->index != 0) { 129 131 last = &hvc->ops[hvc->index - 1]; ··· 287 283 /* This is not an else because ret is modified above */ 288 284 if (ret) { 289 285 printk(KERN_ERR "fix_range_common: failed, killing current " 290 - "process\n"); 286 + "process: %d\n", task_tgid_vnr(current)); 287 + /* We are under mmap_sem, release it such that current can terminate */ 288 + up_write(&current->mm->mmap_sem); 291 289 force_sig(SIGKILL, current); 290 + do_signal(); 292 291 } 293 292 } 294 293
+1 -1
arch/um/kernel/trap.c
··· 206 206 int is_write = FAULT_WRITE(fi); 207 207 unsigned long address = FAULT_ADDRESS(fi); 208 208 209 - if (regs) 209 + if (!is_user && regs) 210 210 current->thread.segv_regs = container_of(regs, struct pt_regs, regs); 211 211 212 212 if (!is_user && (address >= start_vm) && (address < end_vm)) {
+2 -7
arch/um/os-Linux/skas/process.c
··· 54 54 55 55 void wait_stub_done(int pid) 56 56 { 57 - int n, status, err, bad_stop = 0; 57 + int n, status, err; 58 58 59 59 while (1) { 60 60 CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED | __WALL)); ··· 74 74 75 75 if (((1 << WSTOPSIG(status)) & STUB_DONE_MASK) != 0) 76 76 return; 77 - else 78 - bad_stop = 1; 79 77 80 78 bad_wait: 81 79 err = ptrace_dump_regs(pid); ··· 83 85 printk(UM_KERN_ERR "wait_stub_done : failed to wait for SIGTRAP, " 84 86 "pid = %d, n = %d, errno = %d, status = 0x%x\n", pid, n, errno, 85 87 status); 86 - if (bad_stop) 87 - kill(pid, SIGKILL); 88 - else 89 - fatal_sigsegv(); 88 + fatal_sigsegv(); 90 89 } 91 90 92 91 extern unsigned long current_stub_stack(void);
+1
arch/x86/Kconfig
··· 131 131 select HAVE_CC_STACKPROTECTOR 132 132 select GENERIC_CPU_AUTOPROBE 133 133 select HAVE_ARCH_AUDITSYSCALL 134 + select ARCH_SUPPORTS_ATOMIC_RMW 134 135 135 136 config INSTRUCTION_DECODER 136 137 def_bool y
+22 -4
arch/x86/boot/header.S
··· 91 91 92 92 .section ".bsdata", "a" 93 93 bugger_off_msg: 94 - .ascii "Direct floppy boot is not supported. " 95 - .ascii "Use a boot loader program instead.\r\n" 94 + .ascii "Use a boot loader.\r\n" 96 95 .ascii "\n" 97 - .ascii "Remove disk and press any key to reboot ...\r\n" 96 + .ascii "Remove disk and press any key to reboot...\r\n" 98 97 .byte 0 99 98 100 99 #ifdef CONFIG_EFI_STUB ··· 107 108 #else 108 109 .word 0x8664 # x86-64 109 110 #endif 110 - .word 3 # nr_sections 111 + .word 4 # nr_sections 111 112 .long 0 # TimeDateStamp 112 113 .long 0 # PointerToSymbolTable 113 114 .long 1 # NumberOfSymbols ··· 248 249 .word 0 # NumberOfRelocations 249 250 .word 0 # NumberOfLineNumbers 250 251 .long 0x60500020 # Characteristics (section flags) 252 + 253 + # 254 + # The offset & size fields are filled in by build.c. 255 + # 256 + .ascii ".bss" 257 + .byte 0 258 + .byte 0 259 + .byte 0 260 + .byte 0 261 + .long 0 262 + .long 0x0 263 + .long 0 # Size of initialized data 264 + # on disk 265 + .long 0x0 266 + .long 0 # PointerToRelocations 267 + .long 0 # PointerToLineNumbers 268 + .word 0 # NumberOfRelocations 269 + .word 0 # NumberOfLineNumbers 270 + .long 0xc8000080 # Characteristics (section flags) 251 271 252 272 #endif /* CONFIG_EFI_STUB */ 253 273
+30 -8
arch/x86/boot/tools/build.c
··· 143 143 144 144 #ifdef CONFIG_EFI_STUB 145 145 146 - static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) 146 + static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 size, u32 datasz, u32 offset) 147 147 { 148 148 unsigned int pe_header; 149 149 unsigned short num_sections; ··· 164 164 put_unaligned_le32(size, section + 0x8); 165 165 166 166 /* section header vma field */ 167 - put_unaligned_le32(offset, section + 0xc); 167 + put_unaligned_le32(vma, section + 0xc); 168 168 169 169 /* section header 'size of initialised data' field */ 170 - put_unaligned_le32(size, section + 0x10); 170 + put_unaligned_le32(datasz, section + 0x10); 171 171 172 172 /* section header 'file offset' field */ 173 173 put_unaligned_le32(offset, section + 0x14); ··· 177 177 section += 0x28; 178 178 num_sections--; 179 179 } 180 + } 181 + 182 + static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) 183 + { 184 + update_pecoff_section_header_fields(section_name, offset, size, size, offset); 180 185 } 181 186 182 187 static void update_pecoff_setup_and_reloc(unsigned int size) ··· 208 203 209 204 pe_header = get_unaligned_le32(&buf[0x3c]); 210 205 211 - /* Size of image */ 212 - put_unaligned_le32(file_sz, &buf[pe_header + 0x50]); 213 - 214 206 /* 215 207 * Size of code: Subtract the size of the first sector (512 bytes) 216 208 * which includes the header. 
··· 220 218 put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]); 221 219 222 220 update_pecoff_section_header(".text", text_start, text_sz); 221 + } 222 + 223 + static void update_pecoff_bss(unsigned int file_sz, unsigned int init_sz) 224 + { 225 + unsigned int pe_header; 226 + unsigned int bss_sz = init_sz - file_sz; 227 + 228 + pe_header = get_unaligned_le32(&buf[0x3c]); 229 + 230 + /* Size of uninitialized data */ 231 + put_unaligned_le32(bss_sz, &buf[pe_header + 0x24]); 232 + 233 + /* Size of image */ 234 + put_unaligned_le32(init_sz, &buf[pe_header + 0x50]); 235 + 236 + update_pecoff_section_header_fields(".bss", file_sz, bss_sz, 0, 0); 223 237 } 224 238 225 239 static int reserve_pecoff_reloc_section(int c) ··· 277 259 static inline void update_pecoff_setup_and_reloc(unsigned int size) {} 278 260 static inline void update_pecoff_text(unsigned int text_start, 279 261 unsigned int file_sz) {} 262 + static inline void update_pecoff_bss(unsigned int file_sz, 263 + unsigned int init_sz) {} 280 264 static inline void efi_stub_defaults(void) {} 281 265 static inline void efi_stub_entry_update(void) {} 282 266 ··· 330 310 331 311 int main(int argc, char ** argv) 332 312 { 333 - unsigned int i, sz, setup_sectors; 313 + unsigned int i, sz, setup_sectors, init_sz; 334 314 int c; 335 315 u32 sys_size; 336 316 struct stat sb; ··· 396 376 buf[0x1f1] = setup_sectors-1; 397 377 put_unaligned_le32(sys_size, &buf[0x1f4]); 398 378 399 - update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz)); 379 + update_pecoff_text(setup_sectors * 512, i + (sys_size * 16)); 380 + init_sz = get_unaligned_le32(&buf[0x260]); 381 + update_pecoff_bss(i + (sys_size * 16), init_sz); 400 382 401 383 efi_stub_entry_update(); 402 384
+1 -1
arch/x86/crypto/sha512_ssse3_glue.c
··· 141 141 142 142 /* save number of bits */ 143 143 bits[1] = cpu_to_be64(sctx->count[0] << 3); 144 - bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61; 144 + bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); 145 145 146 146 /* Pad out to 112 mod 128 and append length */ 147 147 index = sctx->count[0] & 0x7f;
-1
arch/x86/kernel/apm_32.c
··· 841 841 u32 eax; 842 842 u8 ret = 0; 843 843 int idled = 0; 844 - int polling; 845 844 int err = 0; 846 845 847 846 if (!need_resched()) {
+9
arch/x86/kernel/cpu/perf_event_intel.c
··· 1382 1382 intel_pmu_lbr_read(); 1383 1383 1384 1384 /* 1385 + * CondChgd bit 63 doesn't mean any overflow status. Ignore 1386 + * and clear the bit. 1387 + */ 1388 + if (__test_and_clear_bit(63, (unsigned long *)&status)) { 1389 + if (!status) 1390 + goto done; 1391 + } 1392 + 1393 + /* 1385 1394 * PEBS overflow sets bit 62 in the global status register 1386 1395 */ 1387 1396 if (__test_and_clear_bit(62, (unsigned long *)&status)) {
+2 -3
arch/x86/kernel/espfix_64.c
··· 175 175 if (!pud_present(pud)) { 176 176 pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP); 177 177 pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask)); 178 - paravirt_alloc_pud(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); 178 + paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); 179 179 for (n = 0; n < ESPFIX_PUD_CLONES; n++) 180 180 set_pud(&pud_p[n], pud); 181 181 } ··· 185 185 if (!pmd_present(pmd)) { 186 186 pte_p = (pte_t *)__get_free_page(PGALLOC_GFP); 187 187 pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask)); 188 - paravirt_alloc_pmd(&init_mm, __pa(pte_p) >> PAGE_SHIFT); 188 + paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT); 189 189 for (n = 0; n < ESPFIX_PMD_CLONES; n++) 190 190 set_pmd(&pmd_p[n], pmd); 191 191 } ··· 193 193 pte_p = pte_offset_kernel(&pmd, addr); 194 194 stack_page = (void *)__get_free_page(GFP_KERNEL); 195 195 pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); 196 - paravirt_alloc_pte(&init_mm, __pa(stack_page) >> PAGE_SHIFT); 197 196 for (n = 0; n < ESPFIX_PTE_CLONES; n++) 198 197 set_pte(&pte_p[n*PTE_STRIDE], pte); 199 198
+2 -2
arch/x86/kernel/tsc.c
··· 920 920 tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new); 921 921 if (!(freq->flags & CPUFREQ_CONST_LOOPS)) 922 922 mark_tsc_unstable("cpufreq changes"); 923 - } 924 923 925 - set_cyc2ns_scale(tsc_khz, freq->cpu); 924 + set_cyc2ns_scale(tsc_khz, freq->cpu); 925 + } 926 926 927 927 return 0; 928 928 }
+3
arch/x86/vdso/vdso2c.h
··· 93 93 uint64_t flags = GET_LE(&in->sh_flags); 94 94 95 95 bool copy = flags & SHF_ALLOC && 96 + (GET_LE(&in->sh_size) || 97 + (GET_LE(&in->sh_type) != SHT_RELA && 98 + GET_LE(&in->sh_type) != SHT_REL)) && 96 99 strcmp(name, ".altinstructions") && 97 100 strcmp(name, ".altinstr_replacement"); 98 101
+4
arch/x86/vdso/vma.c
··· 62 62 Only used for the 64-bit and x32 vdsos. */ 63 63 static unsigned long vdso_addr(unsigned long start, unsigned len) 64 64 { 65 + #ifdef CONFIG_X86_32 66 + return 0; 67 + #else 65 68 unsigned long addr, end; 66 69 unsigned offset; 67 70 end = (start + PMD_SIZE - 1) & PMD_MASK; ··· 86 83 addr = align_vdso_addr(addr); 87 84 88 85 return addr; 86 + #endif 89 87 } 90 88 91 89 static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+129 -3
drivers/acpi/ac.c
··· 30 30 #include <linux/types.h> 31 31 #include <linux/dmi.h> 32 32 #include <linux/delay.h> 33 + #ifdef CONFIG_ACPI_PROCFS_POWER 34 + #include <linux/proc_fs.h> 35 + #include <linux/seq_file.h> 36 + #endif 33 37 #include <linux/platform_device.h> 34 38 #include <linux/power_supply.h> 35 39 #include <linux/acpi.h> ··· 56 52 MODULE_DESCRIPTION("ACPI AC Adapter Driver"); 57 53 MODULE_LICENSE("GPL"); 58 54 55 + 59 56 static int acpi_ac_add(struct acpi_device *device); 60 57 static int acpi_ac_remove(struct acpi_device *device); 61 58 static void acpi_ac_notify(struct acpi_device *device, u32 event); ··· 71 66 static int acpi_ac_resume(struct device *dev); 72 67 #endif 73 68 static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume); 69 + 70 + #ifdef CONFIG_ACPI_PROCFS_POWER 71 + extern struct proc_dir_entry *acpi_lock_ac_dir(void); 72 + extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir); 73 + static int acpi_ac_open_fs(struct inode *inode, struct file *file); 74 + #endif 75 + 74 76 static int ac_sleep_before_get_state_ms; ··· 102 90 }; 103 91 104 92 #define to_acpi_ac(x) container_of(x, struct acpi_ac, charger) 93 + 94 + #ifdef CONFIG_ACPI_PROCFS_POWER 95 + static const struct file_operations acpi_ac_fops = { 96 + .owner = THIS_MODULE, 97 + .open = acpi_ac_open_fs, 98 + .read = seq_read, 99 + .llseek = seq_lseek, 100 + .release = single_release, 101 + }; 102 + #endif 105 103 106 104 /* -------------------------------------------------------------------------- 107 105 AC Adapter Management ··· 164 142 static enum power_supply_property ac_props[] = { 165 143 POWER_SUPPLY_PROP_ONLINE, 166 144 }; 145 + 146 + #ifdef CONFIG_ACPI_PROCFS_POWER 147 + /* -------------------------------------------------------------------------- 148 + FS Interface (/proc) 149 + -------------------------------------------------------------------------- */ 150 + 151 + static struct proc_dir_entry *acpi_ac_dir; 152 + 153 + static int acpi_ac_seq_show(struct seq_file *seq, void *offset)
154 + { 155 + struct acpi_ac *ac = seq->private; 156 + 157 + 158 + if (!ac) 159 + return 0; 160 + 161 + if (acpi_ac_get_state(ac)) { 162 + seq_puts(seq, "ERROR: Unable to read AC Adapter state\n"); 163 + return 0; 164 + } 165 + 166 + seq_puts(seq, "state: "); 167 + switch (ac->state) { 168 + case ACPI_AC_STATUS_OFFLINE: 169 + seq_puts(seq, "off-line\n"); 170 + break; 171 + case ACPI_AC_STATUS_ONLINE: 172 + seq_puts(seq, "on-line\n"); 173 + break; 174 + default: 175 + seq_puts(seq, "unknown\n"); 176 + break; 177 + } 178 + 179 + return 0; 180 + } 181 + 182 + static int acpi_ac_open_fs(struct inode *inode, struct file *file) 183 + { 184 + return single_open(file, acpi_ac_seq_show, PDE_DATA(inode)); 185 + } 186 + 187 + static int acpi_ac_add_fs(struct acpi_ac *ac) 188 + { 189 + struct proc_dir_entry *entry = NULL; 190 + 191 + printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded," 192 + " please retry with CONFIG_ACPI_PROCFS_POWER cleared\n"); 193 + if (!acpi_device_dir(ac->device)) { 194 + acpi_device_dir(ac->device) = 195 + proc_mkdir(acpi_device_bid(ac->device), acpi_ac_dir); 196 + if (!acpi_device_dir(ac->device)) 197 + return -ENODEV; 198 + } 199 + 200 + /* 'state' [R] */ 201 + entry = proc_create_data(ACPI_AC_FILE_STATE, 202 + S_IRUGO, acpi_device_dir(ac->device), 203 + &acpi_ac_fops, ac); 204 + if (!entry) 205 + return -ENODEV; 206 + return 0; 207 + } 208 + 209 + static int acpi_ac_remove_fs(struct acpi_ac *ac) 210 + { 211 + 212 + if (acpi_device_dir(ac->device)) { 213 + remove_proc_entry(ACPI_AC_FILE_STATE, 214 + acpi_device_dir(ac->device)); 215 + remove_proc_entry(acpi_device_bid(ac->device), acpi_ac_dir); 216 + acpi_device_dir(ac->device) = NULL; 217 + } 218 + 219 + return 0; 220 + } 221 + #endif 167 222 168 223 /* -------------------------------------------------------------------------- Driver Model ··· 342 243 goto end; 343 244 344 245 ac->charger.name = acpi_device_bid(device); 246 + #ifdef CONFIG_ACPI_PROCFS_POWER
247 + result = acpi_ac_add_fs(ac); 248 + if (result) 249 + goto end; 250 + #endif 345 251 ac->charger.type = POWER_SUPPLY_TYPE_MAINS; 346 252 ac->charger.properties = ac_props; 347 253 ac->charger.num_properties = ARRAY_SIZE(ac_props); ··· 362 258 ac->battery_nb.notifier_call = acpi_ac_battery_notify; 363 259 register_acpi_notifier(&ac->battery_nb); 364 260 end: 365 - if (result) 261 + if (result) { 262 + #ifdef CONFIG_ACPI_PROCFS_POWER 263 + acpi_ac_remove_fs(ac); 264 + #endif 366 265 kfree(ac); 266 + } 367 267 368 268 dmi_check_system(ac_dmi_table); 369 269 return result; ··· 411 303 power_supply_unregister(&ac->charger); 412 304 unregister_acpi_notifier(&ac->battery_nb); 413 305 306 + #ifdef CONFIG_ACPI_PROCFS_POWER 307 + acpi_ac_remove_fs(ac); 308 + #endif 309 + 414 310 kfree(ac); 415 311 416 312 return 0; ··· 427 315 if (acpi_disabled) 428 316 return -ENODEV; 429 317 430 - result = acpi_bus_register_driver(&acpi_ac_driver); 431 - if (result < 0) 318 + #ifdef CONFIG_ACPI_PROCFS_POWER 319 + acpi_ac_dir = acpi_lock_ac_dir(); 320 + if (!acpi_ac_dir) 432 321 return -ENODEV; 322 + #endif 323 + 324 + 325 + result = acpi_bus_register_driver(&acpi_ac_driver); 326 + if (result < 0) { 327 + #ifdef CONFIG_ACPI_PROCFS_POWER 328 + acpi_unlock_ac_dir(acpi_ac_dir); 329 + #endif 330 + return -ENODEV; 331 + } 433 332 434 333 return 0; 435 334 } ··· 448 325 static void __exit acpi_ac_exit(void) 449 326 { 450 327 acpi_bus_unregister_driver(&acpi_ac_driver); 328 + #ifdef CONFIG_ACPI_PROCFS_POWER 329 + acpi_unlock_ac_dir(acpi_ac_dir); 330 + #endif 451 331 } 452 332 module_init(acpi_ac_init); 453 333 module_exit(acpi_ac_exit);
+2
drivers/acpi/acpi_pnp.c
··· 14 14 #include <linux/module.h> 15 15 16 16 static const struct acpi_device_id acpi_pnp_device_ids[] = { 17 + /* soc_button_array */ 18 + {"PNP0C40"}, 17 19 /* pata_isapnp */ 18 20 {"PNP0600"}, /* Generic ESDI/IDE/ATA compatible hard disk controller */ 19 21 /* floppy */
+40 -1
drivers/acpi/battery.c
··· 35 35 #include <linux/delay.h> 36 36 #include <linux/slab.h> 37 37 #include <linux/suspend.h> 38 + #include <linux/delay.h> 38 39 #include <asm/unaligned.h> 39 40 40 41 #ifdef CONFIG_ACPI_PROCFS_POWER ··· 533 532 battery->rate_now = abs((s16)battery->rate_now); 534 533 printk_once(KERN_WARNING FW_BUG "battery: (dis)charge rate" 535 534 " invalid.\n"); 535 + } 536 + 537 + /* 538 + * When fully charged, some batteries wrongly report 539 + * capacity_now = design_capacity instead of = full_charge_capacity 540 + */ 541 + if (battery->capacity_now > battery->full_charge_capacity 542 + && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) { 543 + battery->capacity_now = battery->full_charge_capacity; 544 + if (battery->capacity_now != battery->design_capacity) 545 + printk_once(KERN_WARNING FW_BUG 546 + "battery: reported current charge level (%d) " 547 + "is higher than reported maximum charge level (%d).\n", 548 + battery->capacity_now, battery->full_charge_capacity); 536 549 } 537 550 538 551 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) ··· 1166 1151 {}, 1167 1152 }; 1168 1153 1154 + /* 1155 + * Some machines'(E,G Lenovo Z480) ECs are not stable 1156 + * during boot up and this causes battery driver fails to be 1157 + * probed due to failure of getting battery information 1158 + * from EC sometimes. After several retries, the operation 1159 + * may work. So add retry code here and 20ms sleep between 1160 + * every retries. 
1161 + */ 1162 + static int acpi_battery_update_retry(struct acpi_battery *battery) 1163 + { 1164 + int retry, ret; 1165 + 1166 + for (retry = 5; retry; retry--) { 1167 + ret = acpi_battery_update(battery, false); 1168 + if (!ret) 1169 + break; 1170 + 1171 + msleep(20); 1172 + } 1173 + return ret; 1174 + } 1175 + 1169 1176 static int acpi_battery_add(struct acpi_device *device) 1170 1177 { 1171 1178 int result = 0; ··· 1206 1169 mutex_init(&battery->sysfs_lock); 1207 1170 if (acpi_has_method(battery->device->handle, "_BIX")) 1208 1171 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags); 1209 - result = acpi_battery_update(battery, false); 1172 + 1173 + result = acpi_battery_update_retry(battery); 1210 1174 if (result) 1211 1175 goto fail; 1176 + 1212 1177 #ifdef CONFIG_ACPI_PROCFS_POWER 1213 1178 result = acpi_battery_add_fs(device); 1214 1179 #endif
+85 -79
drivers/acpi/ec.c
··· 1 1 /* 2 - * ec.c - ACPI Embedded Controller Driver (v2.1) 2 + * ec.c - ACPI Embedded Controller Driver (v2.2) 3 3 * 4 - * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de> 5 - * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com> 6 - * Copyright (C) 2004 Luming Yu <luming.yu@intel.com> 7 - * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com> 8 - * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> 4 + * Copyright (C) 2001-2014 Intel Corporation 5 + * Author: 2014 Lv Zheng <lv.zheng@intel.com> 6 + * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> 7 + * 2006 Denis Sadykov <denis.m.sadykov@intel.com> 8 + * 2004 Luming Yu <luming.yu@intel.com> 9 + * 2001, 2002 Andy Grover <andrew.grover@intel.com> 10 + * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> 11 + * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de> 9 12 * 10 13 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 11 14 * ··· 55 52 /* EC status register */ 56 53 #define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */ 57 54 #define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */ 55 + #define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */ 58 56 #define ACPI_EC_FLAG_BURST 0x10 /* burst mode */ 59 57 #define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */ 60 58 ··· 81 77 * OpReg are installed */ 82 78 EC_FLAGS_BLOCKED, /* Transactions are blocked */ 83 79 }; 80 + 81 + #define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */ 82 + #define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */ 84 83 85 84 /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */ 86 85 static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY; ··· 116 109 u8 ri; 117 110 u8 wlen; 118 111 u8 rlen; 119 - bool done; 112 + u8 flags; 120 113 }; 121 114 122 115 struct acpi_ec *boot_ec, *first_ec; ··· 134 127 static inline u8 acpi_ec_read_status(struct acpi_ec *ec) 135 128 { 
136 129 u8 x = inb(ec->command_addr); 137 - pr_debug("---> status = 0x%2.2x\n", x); 130 + pr_debug("EC_SC(R) = 0x%2.2x " 131 + "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n", 132 + x, 133 + !!(x & ACPI_EC_FLAG_SCI), 134 + !!(x & ACPI_EC_FLAG_BURST), 135 + !!(x & ACPI_EC_FLAG_CMD), 136 + !!(x & ACPI_EC_FLAG_IBF), 137 + !!(x & ACPI_EC_FLAG_OBF)); 138 138 return x; 139 139 } 140 140 141 141 static inline u8 acpi_ec_read_data(struct acpi_ec *ec) 142 142 { 143 143 u8 x = inb(ec->data_addr); 144 - pr_debug("---> data = 0x%2.2x\n", x); 144 + pr_debug("EC_DATA(R) = 0x%2.2x\n", x); 145 145 return x; 146 146 } 147 147 148 148 static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command) 149 149 { 150 - pr_debug("<--- command = 0x%2.2x\n", command); 150 + pr_debug("EC_SC(W) = 0x%2.2x\n", command); 151 151 outb(command, ec->command_addr); 152 152 } 153 153 154 154 static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data) 155 155 { 156 - pr_debug("<--- data = 0x%2.2x\n", data); 156 + pr_debug("EC_DATA(W) = 0x%2.2x\n", data); 157 157 outb(data, ec->data_addr); 158 158 } 159 159 160 - static int ec_transaction_done(struct acpi_ec *ec) 160 + static int ec_transaction_completed(struct acpi_ec *ec) 161 161 { 162 162 unsigned long flags; 163 163 int ret = 0; 164 164 spin_lock_irqsave(&ec->lock, flags); 165 - if (!ec->curr || ec->curr->done) 165 + if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE)) 166 166 ret = 1; 167 167 spin_unlock_irqrestore(&ec->lock, flags); 168 168 return ret; 169 169 } 170 170 171 - static void start_transaction(struct acpi_ec *ec) 171 + static bool advance_transaction(struct acpi_ec *ec) 172 172 { 173 - ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 174 - ec->curr->done = false; 175 - acpi_ec_write_cmd(ec, ec->curr->command); 176 - } 177 - 178 - static void advance_transaction(struct acpi_ec *ec, u8 status) 179 - { 180 - unsigned long flags; 181 173 struct transaction *t; 174 + u8 status; 175 + bool wakeup = false; 182 176 183 
- spin_lock_irqsave(&ec->lock, flags); 177 + pr_debug("===== %s =====\n", in_interrupt() ? "IRQ" : "TASK"); 178 + status = acpi_ec_read_status(ec); 184 179 t = ec->curr; 185 180 if (!t) 186 - goto unlock; 187 - if (t->wlen > t->wi) { 188 - if ((status & ACPI_EC_FLAG_IBF) == 0) 189 - acpi_ec_write_data(ec, 190 - t->wdata[t->wi++]); 191 - else 192 - goto err; 193 - } else if (t->rlen > t->ri) { 194 - if ((status & ACPI_EC_FLAG_OBF) == 1) { 195 - t->rdata[t->ri++] = acpi_ec_read_data(ec); 196 - if (t->rlen == t->ri) 197 - t->done = true; 181 + goto err; 182 + if (t->flags & ACPI_EC_COMMAND_POLL) { 183 + if (t->wlen > t->wi) { 184 + if ((status & ACPI_EC_FLAG_IBF) == 0) 185 + acpi_ec_write_data(ec, t->wdata[t->wi++]); 186 + else 187 + goto err; 188 + } else if (t->rlen > t->ri) { 189 + if ((status & ACPI_EC_FLAG_OBF) == 1) { 190 + t->rdata[t->ri++] = acpi_ec_read_data(ec); 191 + if (t->rlen == t->ri) { 192 + t->flags |= ACPI_EC_COMMAND_COMPLETE; 193 + wakeup = true; 194 + } 195 + } else 196 + goto err; 197 + } else if (t->wlen == t->wi && 198 + (status & ACPI_EC_FLAG_IBF) == 0) { 199 + t->flags |= ACPI_EC_COMMAND_COMPLETE; 200 + wakeup = true; 201 + } 202 + return wakeup; 203 + } else { 204 + if ((status & ACPI_EC_FLAG_IBF) == 0) { 205 + acpi_ec_write_cmd(ec, t->command); 206 + t->flags |= ACPI_EC_COMMAND_POLL; 198 207 } else 199 208 goto err; 200 - } else if (t->wlen == t->wi && 201 - (status & ACPI_EC_FLAG_IBF) == 0) 202 - t->done = true; 203 - goto unlock; 209 + return wakeup; 210 + } 204 211 err: 205 212 /* 206 213 * If SCI bit is set, then don't think it's a false IRQ 207 214 * otherwise will take a not handled IRQ as a false one. 
208 215 */ 209 - if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI)) 210 - ++t->irq_count; 216 + if (!(status & ACPI_EC_FLAG_SCI)) { 217 + if (in_interrupt() && t) 218 + ++t->irq_count; 219 + } 220 + return wakeup; 221 + } 211 222 212 - unlock: 213 - spin_unlock_irqrestore(&ec->lock, flags); 223 + static void start_transaction(struct acpi_ec *ec) 224 + { 225 + ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 226 + ec->curr->flags = 0; 227 + (void)advance_transaction(ec); 214 228 } 215 229 216 230 static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data); ··· 256 228 /* don't sleep with disabled interrupts */ 257 229 if (EC_FLAGS_MSI || irqs_disabled()) { 258 230 udelay(ACPI_EC_MSI_UDELAY); 259 - if (ec_transaction_done(ec)) 231 + if (ec_transaction_completed(ec)) 260 232 return 0; 261 233 } else { 262 234 if (wait_event_timeout(ec->wait, 263 - ec_transaction_done(ec), 235 + ec_transaction_completed(ec), 264 236 msecs_to_jiffies(1))) 265 237 return 0; 266 238 } 267 - advance_transaction(ec, acpi_ec_read_status(ec)); 239 + spin_lock_irqsave(&ec->lock, flags); 240 + (void)advance_transaction(ec); 241 + spin_unlock_irqrestore(&ec->lock, flags); 268 242 } while (time_before(jiffies, delay)); 269 243 pr_debug("controller reset, restart transaction\n"); 270 244 spin_lock_irqsave(&ec->lock, flags); ··· 298 268 return ret; 299 269 } 300 270 301 - static int ec_check_ibf0(struct acpi_ec *ec) 302 - { 303 - u8 status = acpi_ec_read_status(ec); 304 - return (status & ACPI_EC_FLAG_IBF) == 0; 305 - } 306 - 307 - static int ec_wait_ibf0(struct acpi_ec *ec) 308 - { 309 - unsigned long delay = jiffies + msecs_to_jiffies(ec_delay); 310 - /* interrupt wait manually if GPE mode is not active */ 311 - while (time_before(jiffies, delay)) 312 - if (wait_event_timeout(ec->wait, ec_check_ibf0(ec), 313 - msecs_to_jiffies(1))) 314 - return 0; 315 - return -ETIME; 316 - } 317 - 318 271 static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) 319 272 { 320 273 int 
status; ··· 317 304 status = -ENODEV; 318 305 goto unlock; 319 306 } 320 - } 321 - if (ec_wait_ibf0(ec)) { 322 - pr_err("input buffer is not empty, " 323 - "aborting transaction\n"); 324 - status = -ETIME; 325 - goto end; 326 307 } 327 308 pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n", 328 309 t->command, t->wdata ? t->wdata[0] : 0); ··· 341 334 set_bit(EC_FLAGS_GPE_STORM, &ec->flags); 342 335 } 343 336 pr_debug("transaction end\n"); 344 - end: 345 337 if (ec->global_lock) 346 338 acpi_release_global_lock(glk); 347 339 unlock: ··· 640 634 static u32 acpi_ec_gpe_handler(acpi_handle gpe_device, 641 635 u32 gpe_number, void *data) 642 636 { 637 + unsigned long flags; 643 638 struct acpi_ec *ec = data; 644 - u8 status = acpi_ec_read_status(ec); 645 639 646 - pr_debug("~~~> interrupt, status:0x%02x\n", status); 647 - 648 - advance_transaction(ec, status); 649 - if (ec_transaction_done(ec) && 650 - (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) { 640 + spin_lock_irqsave(&ec->lock, flags); 641 + if (advance_transaction(ec)) 651 642 wake_up(&ec->wait); 652 - ec_check_sci(ec, acpi_ec_read_status(ec)); 653 - } 643 + spin_unlock_irqrestore(&ec->lock, flags); 644 + ec_check_sci(ec, acpi_ec_read_status(ec)); 654 645 return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE; 655 646 } 656 647 ··· 1069 1066 /* fall through */ 1070 1067 } 1071 1068 1072 - if (EC_FLAGS_SKIP_DSDT_SCAN) 1069 + if (EC_FLAGS_SKIP_DSDT_SCAN) { 1070 + kfree(saved_ec); 1073 1071 return -ENODEV; 1072 + } 1074 1073 1075 1074 /* This workaround is needed only on some broken machines, 1076 1075 * which require early EC, but fail to provide ECDT */ ··· 1110 1105 } 1111 1106 error: 1112 1107 kfree(boot_ec); 1108 + kfree(saved_ec); 1113 1109 boot_ec = NULL; 1114 1110 return -ENODEV; 1115 1111 }
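The ec.c hunks above also plug a memory leak: `saved_ec` is allocated early in the probe path but was not freed on two `-ENODEV` return paths. A standalone C sketch of the pattern (struct name and error codes here are illustrative, not the kernel API):

```c
#include <stdlib.h>

/* Stand-in for the saved EC bookkeeping (hypothetical struct). */
struct ec_info { int id; };

/* Probe sketch: every early-return path must release what was
 * allocated before it, or the allocation leaks. The fix above
 * adds the missing kfree(saved_ec) calls on exactly such paths. */
static int ec_probe(int skip_dsdt_scan, int dsdt_found,
                    struct ec_info **out)
{
    struct ec_info *saved = malloc(sizeof(*saved));

    if (!saved)
        return -12;                 /* -ENOMEM */
    if (skip_dsdt_scan) {
        free(saved);                /* the fix: free on this path too */
        return -19;                 /* -ENODEV */
    }
    if (!dsdt_found) {
        free(saved);                /* ...and on the error label */
        return -19;
    }
    *out = saved;                   /* caller owns it on success */
    return 0;
}
```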
+5 -5
drivers/acpi/resource.c
··· 77 77 switch (ares->type) { 78 78 case ACPI_RESOURCE_TYPE_MEMORY24: 79 79 memory24 = &ares->data.memory24; 80 - if (!memory24->address_length) 80 + if (!memory24->minimum && !memory24->address_length) 81 81 return false; 82 82 acpi_dev_get_memresource(res, memory24->minimum, 83 83 memory24->address_length, ··· 85 85 break; 86 86 case ACPI_RESOURCE_TYPE_MEMORY32: 87 87 memory32 = &ares->data.memory32; 88 - if (!memory32->address_length) 88 + if (!memory32->minimum && !memory32->address_length) 89 89 return false; 90 90 acpi_dev_get_memresource(res, memory32->minimum, 91 91 memory32->address_length, ··· 93 93 break; 94 94 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 95 95 fixed_memory32 = &ares->data.fixed_memory32; 96 - if (!fixed_memory32->address_length) 96 + if (!fixed_memory32->address && !fixed_memory32->address_length) 97 97 return false; 98 98 acpi_dev_get_memresource(res, fixed_memory32->address, 99 99 fixed_memory32->address_length, ··· 150 150 switch (ares->type) { 151 151 case ACPI_RESOURCE_TYPE_IO: 152 152 io = &ares->data.io; 153 - if (!io->address_length) 153 + if (!io->minimum && !io->address_length) 154 154 return false; 155 155 acpi_dev_get_ioresource(res, io->minimum, 156 156 io->address_length, ··· 158 158 break; 159 159 case ACPI_RESOURCE_TYPE_FIXED_IO: 160 160 fixed_io = &ares->data.fixed_io; 161 - if (!fixed_io->address_length) 161 + if (!fixed_io->address && !fixed_io->address_length) 162 162 return false; 163 163 acpi_dev_get_ioresource(res, fixed_io->address, 164 164 fixed_io->address_length,
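The resource.c change above relaxes the validity test: a descriptor is now skipped only when both its start and its length are zero, so zero-length entries with a nonzero base are no longer rejected. A minimal sketch, assuming a simplified descriptor struct:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the ACPI descriptor fields the diff tests. */
struct res_desc {
    uint32_t minimum;          /* start address */
    uint32_t address_length;   /* length in bytes */
};

/* Old rule: reject whenever address_length == 0.
 * New rule (this diff): reject only when start AND length are
 * both zero, i.e. the descriptor is entirely empty. */
static bool res_usable(const struct res_desc *r)
{
    return !(r->minimum == 0 && r->address_length == 0);
}
```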
+19 -2
drivers/acpi/video.c
··· 68 68 MODULE_DESCRIPTION("ACPI Video Driver"); 69 69 MODULE_LICENSE("GPL"); 70 70 71 - static bool brightness_switch_enabled; 71 + static bool brightness_switch_enabled = 1; 72 72 module_param(brightness_switch_enabled, bool, 0644); 73 73 74 74 /* ··· 241 241 return use_native_backlight_dmi; 242 242 } 243 243 244 - static bool acpi_video_verify_backlight_support(void) 244 + bool acpi_video_verify_backlight_support(void) 245 245 { 246 246 if (acpi_osi_is_win8() && acpi_video_use_native_backlight() && 247 247 backlight_device_registered(BACKLIGHT_RAW)) 248 248 return false; 249 249 return acpi_video_backlight_support(); 250 250 } 251 + EXPORT_SYMBOL_GPL(acpi_video_verify_backlight_support); 251 252 252 253 /* backlight device sysfs support */ 253 254 static int acpi_video_get_brightness(struct backlight_device *bd) ··· 564 563 }, 565 564 }, 566 565 { 566 + .callback = video_set_use_native_backlight, 567 + .ident = "Acer TravelMate B113", 568 + .matches = { 569 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 570 + DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"), 571 + }, 572 + }, 573 + { 567 574 .callback = video_set_use_native_backlight, 568 575 .ident = "HP ProBook 4340s", 569 576 .matches = { 570 577 DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 571 578 DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4340s"), 579 + }, 580 + }, 581 + { 582 + .callback = video_set_use_native_backlight, 583 + .ident = "HP ProBook 4540s", 584 + .matches = { 585 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 586 + DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4540s"), 572 587 }, 573 588 }, 574 589 {
+8
drivers/acpi/video_detect.c
··· 166 166 DMI_MATCH(DMI_PRODUCT_NAME, "UL30A"), 167 167 }, 168 168 }, 169 + { 170 + .callback = video_detect_force_vendor, 171 + .ident = "Dell Inspiron 5737", 172 + .matches = { 173 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 174 + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"), 175 + }, 176 + }, 169 177 { }, 170 178 }; 171 179
+2
drivers/ata/ahci.h
··· 371 371 int pmp, unsigned long deadline, 372 372 int (*check_ready)(struct ata_link *link)); 373 373 374 + unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); 374 375 int ahci_stop_engine(struct ata_port *ap); 376 + void ahci_start_fis_rx(struct ata_port *ap); 375 377 void ahci_start_engine(struct ata_port *ap); 376 378 int ahci_check_ready(struct ata_link *link); 377 379 int ahci_kick_engine(struct ata_port *ap);
+34 -4
drivers/ata/ahci_imx.c
··· 58 58 struct imx_ahci_priv { 59 59 struct platform_device *ahci_pdev; 60 60 enum ahci_imx_type type; 61 + struct clk *sata_clk; 62 + struct clk *sata_ref_clk; 61 63 struct clk *ahb_clk; 62 64 struct regmap *gpr; 63 65 bool no_device; ··· 226 224 return ret; 227 225 } 228 226 229 - ret = ahci_platform_enable_clks(hpriv); 227 + ret = clk_prepare_enable(imxpriv->sata_ref_clk); 230 228 if (ret < 0) 231 229 goto disable_regulator; 232 230 ··· 293 291 !IMX6Q_GPR13_SATA_MPLL_CLK_EN); 294 292 } 295 293 296 - ahci_platform_disable_clks(hpriv); 294 + clk_disable_unprepare(imxpriv->sata_ref_clk); 297 295 298 296 if (hpriv->target_pwr) 299 297 regulator_disable(hpriv->target_pwr); ··· 326 324 writel(reg_val | IMX_P0PHYCR_TEST_PDDQ, mmio + IMX_P0PHYCR); 327 325 imx_sata_disable(hpriv); 328 326 imxpriv->no_device = true; 327 + 328 + dev_info(ap->dev, "no device found, disabling link.\n"); 329 + dev_info(ap->dev, "pass " MODULE_PARAM_PREFIX ".hotplug=1 to enable hotplug\n"); 329 330 } 330 331 331 332 static int ahci_imx_softreset(struct ata_link *link, unsigned int *class, ··· 390 385 imxpriv->no_device = false; 391 386 imxpriv->first_time = true; 392 387 imxpriv->type = (enum ahci_imx_type)of_id->data; 388 + 389 + imxpriv->sata_clk = devm_clk_get(dev, "sata"); 390 + if (IS_ERR(imxpriv->sata_clk)) { 391 + dev_err(dev, "can't get sata clock.\n"); 392 + return PTR_ERR(imxpriv->sata_clk); 393 + } 394 + 395 + imxpriv->sata_ref_clk = devm_clk_get(dev, "sata_ref"); 396 + if (IS_ERR(imxpriv->sata_ref_clk)) { 397 + dev_err(dev, "can't get sata_ref clock.\n"); 398 + return PTR_ERR(imxpriv->sata_ref_clk); 399 + } 400 + 393 401 imxpriv->ahb_clk = devm_clk_get(dev, "ahb"); 394 402 if (IS_ERR(imxpriv->ahb_clk)) { 395 403 dev_err(dev, "can't get ahb clock.\n"); ··· 425 407 426 408 hpriv->plat_data = imxpriv; 427 409 428 - ret = imx_sata_enable(hpriv); 410 + ret = clk_prepare_enable(imxpriv->sata_clk); 429 411 if (ret) 430 412 return ret; 413 + 414 + ret = imx_sata_enable(hpriv); 415 + if 
(ret) 416 + goto disable_clk; 431 417 432 418 /* 433 419 * Configure the HWINIT bits of the HOST_CAP and HOST_PORTS_IMPL, ··· 457 435 ret = ahci_platform_init_host(pdev, hpriv, &ahci_imx_port_info, 458 436 0, 0, 0); 459 437 if (ret) 460 - imx_sata_disable(hpriv); 438 + goto disable_sata; 461 439 440 + return 0; 441 + 442 + disable_sata: 443 + imx_sata_disable(hpriv); 444 + disable_clk: 445 + clk_disable_unprepare(imxpriv->sata_clk); 462 446 return ret; 463 447 } 464 448 465 449 static void ahci_imx_host_stop(struct ata_host *host) 466 450 { 467 451 struct ahci_host_priv *hpriv = host->private_data; 452 + struct imx_ahci_priv *imxpriv = hpriv->plat_data; 468 453 469 454 imx_sata_disable(hpriv); 455 + clk_disable_unprepare(imxpriv->sata_clk); 470 456 } 471 457 472 458 #ifdef CONFIG_PM_SLEEP
+1 -1
drivers/ata/ahci_platform.c
··· 58 58 } 59 59 60 60 if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci")) 61 - hflags |= AHCI_HFLAG_NO_FBS; 61 + hflags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ; 62 62 63 63 rc = ahci_platform_init_host(pdev, hpriv, &ahci_port_info, 64 64 hflags, 0, 0);
+47 -13
drivers/ata/ahci_xgene.c
··· 78 78 struct xgene_ahci_context { 79 79 struct ahci_host_priv *hpriv; 80 80 struct device *dev; 81 + u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued */ 81 82 void __iomem *csr_core; /* Core CSR address of IP */ 82 83 void __iomem *csr_diag; /* Diag CSR address of IP */ 83 84 void __iomem *csr_axi; /* AXI CSR address of IP */ ··· 99 98 } 100 99 101 100 /** 101 + * xgene_ahci_restart_engine - Restart the dma engine. 102 + * @ap : ATA port of interest 103 + * 104 + * Restarts the dma engine inside the controller. 105 + */ 106 + static int xgene_ahci_restart_engine(struct ata_port *ap) 107 + { 108 + struct ahci_host_priv *hpriv = ap->host->private_data; 109 + 110 + ahci_stop_engine(ap); 111 + ahci_start_fis_rx(ap); 112 + hpriv->start_engine(ap); 113 + 114 + return 0; 115 + } 116 + 117 + /** 118 + * xgene_ahci_qc_issue - Issue commands to the device 119 + * @qc: Command to issue 120 + * 121 + * Due to a hardware errata for the IDENTIFY DEVICE command, the controller cannot 122 + * clear the BSY bit after receiving the PIO setup FIS. This results in the dma 123 + * state machine going into the CMFatalErrorUpdate state and locking up. 124 + * Restarting the dma engine brings the controller out of the lock-up state. 
125 + */ 126 + static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc) 127 + { 128 + struct ata_port *ap = qc->ap; 129 + struct ahci_host_priv *hpriv = ap->host->private_data; 130 + struct xgene_ahci_context *ctx = hpriv->plat_data; 131 + int rc = 0; 132 + 133 + if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA)) 134 + xgene_ahci_restart_engine(ap); 135 + 136 + rc = ahci_qc_issue(qc); 137 + 138 + /* Save the last command issued */ 139 + ctx->last_cmd[ap->port_no] = qc->tf.command; 140 + 141 + return rc; 142 + } 143 + 144 + /** 102 145 * xgene_ahci_read_id - Read ID data from the specified device 103 146 * @dev: device 104 147 * @tf: proposed taskfile 105 148 * @id: data buffer 106 149 * 107 150 * This custom read ID function is required due to the fact that the HW 108 - * does not support DEVSLP and the controller state machine may get stuck 109 - * after processing the ID query command. 151 + * does not support DEVSLP. 110 152 */ 111 153 static unsigned int xgene_ahci_read_id(struct ata_device *dev, 112 154 struct ata_taskfile *tf, u16 *id) 113 155 { 114 156 u32 err_mask; 115 - void __iomem *port_mmio = ahci_port_base(dev->link->ap); 116 157 117 158 err_mask = ata_do_dev_read_id(dev, tf, id); 118 159 if (err_mask) ··· 176 133 */ 177 134 id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8); 178 135 179 - /* 180 - * Due to HW errata, restart the port if no other command active. 181 - * Otherwise the controller may get stuck. 
182 - */ 183 - if (!readl(port_mmio + PORT_CMD_ISSUE)) { 184 - writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD); 185 - readl(port_mmio + PORT_CMD); /* Force a barrier */ 186 - writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD); 187 - readl(port_mmio + PORT_CMD); /* Force a barrier */ 188 - } 189 136 return 0; 190 137 } 191 138 ··· 333 300 .host_stop = xgene_ahci_host_stop, 334 301 .hardreset = xgene_ahci_hardreset, 335 302 .read_id = xgene_ahci_read_id, 303 + .qc_issue = xgene_ahci_qc_issue, 336 304 }; 337 305 338 306 static const struct ata_port_info xgene_ahci_port_info = {
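The ahci_xgene change above tracks the last command issued per port and restarts the dma engine before the next command whenever the previous one was IDENTIFY DEVICE. The trigger logic, reduced to a standalone sketch (a counter stands in for the engine restart):

```c
/* ATA IDENTIFY DEVICE opcode, as in the kernel's <linux/ata.h>. */
#define ATA_CMD_ID_ATA 0xec
#define MAX_PORTS 8            /* illustrative port count */

static unsigned char last_cmd[MAX_PORTS];
static int restarts;           /* counts simulated engine restarts */

/* Before issuing a command, restart the engine if the previous
 * command on this port was IDENTIFY DEVICE; then remember the
 * command just issued. */
static void issue_cmd(int port, unsigned char cmd)
{
    if (last_cmd[port] == ATA_CMD_ID_ATA)
        restarts++;            /* would call xgene_ahci_restart_engine() */
    last_cmd[port] = cmd;
}
```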
+4 -3
drivers/ata/libahci.c
··· 68 68 69 69 static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val); 70 70 static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val); 71 - static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); 72 71 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc); 73 72 static int ahci_port_start(struct ata_port *ap); 74 73 static void ahci_port_stop(struct ata_port *ap); ··· 619 620 } 620 621 EXPORT_SYMBOL_GPL(ahci_stop_engine); 621 622 622 - static void ahci_start_fis_rx(struct ata_port *ap) 623 + void ahci_start_fis_rx(struct ata_port *ap) 623 624 { 624 625 void __iomem *port_mmio = ahci_port_base(ap); 625 626 struct ahci_host_priv *hpriv = ap->host->private_data; ··· 645 646 /* flush */ 646 647 readl(port_mmio + PORT_CMD); 647 648 } 649 + EXPORT_SYMBOL_GPL(ahci_start_fis_rx); 648 650 649 651 static int ahci_stop_fis_rx(struct ata_port *ap) 650 652 { ··· 1945 1945 } 1946 1946 EXPORT_SYMBOL_GPL(ahci_interrupt); 1947 1947 1948 - static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) 1948 + unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) 1949 1949 { 1950 1950 struct ata_port *ap = qc->ap; 1951 1951 void __iomem *port_mmio = ahci_port_base(ap); ··· 1974 1974 1975 1975 return 0; 1976 1976 } 1977 + EXPORT_SYMBOL_GPL(ahci_qc_issue); 1977 1978 1978 1979 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc) 1979 1980 {
+6 -1
drivers/ata/libahci_platform.c
··· 250 250 if (IS_ERR(hpriv->phy)) { 251 251 rc = PTR_ERR(hpriv->phy); 252 252 switch (rc) { 253 - case -ENODEV: 254 253 case -ENOSYS: 254 + /* No PHY support. Check if PHY is required. */ 255 + if (of_find_property(dev->of_node, "phys", NULL)) { 256 + dev_err(dev, "couldn't get sata-phy: ENOSYS\n"); 257 + goto err_out; 258 + } 259 + case -ENODEV: 255 260 /* continue normally */ 256 261 hpriv->phy = NULL; 257 262 break;
+14 -4
drivers/base/platform.c
··· 90 90 return dev->archdata.irqs[num]; 91 91 #else 92 92 struct resource *r; 93 - if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) 94 - return of_irq_get(dev->dev.of_node, num); 93 + if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { 94 + int ret; 95 + 96 + ret = of_irq_get(dev->dev.of_node, num); 97 + if (ret >= 0 || ret == -EPROBE_DEFER) 98 + return ret; 99 + } 95 100 96 101 r = platform_get_resource(dev, IORESOURCE_IRQ, num); 97 102 ··· 139 134 { 140 135 struct resource *r; 141 136 142 - if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) 143 - return of_irq_get_byname(dev->dev.of_node, name); 137 + if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { 138 + int ret; 139 + 140 + ret = of_irq_get_byname(dev->dev.of_node, name); 141 + if (ret >= 0 || ret == -EPROBE_DEFER) 142 + return ret; 143 + } 144 144 145 145 r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name); 146 146 return r ? r->start : -ENXIO;
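The platform.c change above makes the IRQ lookup fall back to the static resource table when the OF lookup fails for any reason other than probe deferral; success and `-EPROBE_DEFER` are returned as-is. The selection logic can be sketched as (simplified, hypothetical helper name; a zero resource start models "no resource found"):

```c
#define EPROBE_DEFER 517   /* kernel-internal: probe must be retried */
#ifndef ENXIO
#define ENXIO 6
#endif

/* Mirrors the diff's decision: a successful OF lookup, or an
 * explicit probe deferral, wins; any other OF error falls back
 * to the platform resource table. */
static int pick_irq(int of_ret, int resource_start)
{
    if (of_ret >= 0 || of_ret == -EPROBE_DEFER)
        return of_ret;
    return resource_start > 0 ? resource_start : -ENXIO;
}
```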
-2
drivers/bluetooth/ath3k.c
··· 90 90 { USB_DEVICE(0x0b05, 0x17d0) }, 91 91 { USB_DEVICE(0x0CF3, 0x0036) }, 92 92 { USB_DEVICE(0x0CF3, 0x3004) }, 93 - { USB_DEVICE(0x0CF3, 0x3005) }, 94 93 { USB_DEVICE(0x0CF3, 0x3008) }, 95 94 { USB_DEVICE(0x0CF3, 0x311D) }, 96 95 { USB_DEVICE(0x0CF3, 0x311E) }, ··· 139 140 { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, 140 141 { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 }, 141 142 { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 142 - { USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 }, 143 143 { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, 144 144 { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, 145 145 { USB_DEVICE(0x0cf3, 0x311E), .driver_info = BTUSB_ATH3012 },
-1
drivers/bluetooth/btusb.c
··· 162 162 { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, 163 163 { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 }, 164 164 { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 165 - { USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 }, 166 165 { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, 167 166 { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, 168 167 { USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 },
+1
drivers/bluetooth/hci_h5.c
··· 406 406 H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) { 407 407 BT_ERR("Non-link packet received in non-active state"); 408 408 h5_reset_rx(h5); 409 + return 0; 409 410 } 410 411 411 412 h5->rx_func = h5_rx_payload;
+39 -8
drivers/char/hw_random/core.c
··· 55 55 static int data_avail; 56 56 static u8 *rng_buffer; 57 57 58 + static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size, 59 + int wait); 60 + 58 61 static size_t rng_buffer_size(void) 59 62 { 60 63 return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES; 61 64 } 62 65 66 + static void add_early_randomness(struct hwrng *rng) 67 + { 68 + unsigned char bytes[16]; 69 + int bytes_read; 70 + 71 + /* 72 + * Currently only virtio-rng cannot return data during device 73 + * probe, and that's handled in virtio-rng.c itself. If there 74 + * are more such devices, this call to rng_get_data can be 75 + * made conditional here instead of doing it per-device. 76 + */ 77 + bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1); 78 + if (bytes_read > 0) 79 + add_device_randomness(bytes, bytes_read); 80 + } 81 + 63 82 static inline int hwrng_init(struct hwrng *rng) 64 83 { 65 - if (!rng->init) 66 - return 0; 67 - return rng->init(rng); 84 + if (rng->init) { 85 + int ret; 86 + 87 + ret = rng->init(rng); 88 + if (ret) 89 + return ret; 90 + } 91 + add_early_randomness(rng); 92 + return 0; 68 93 } 69 94 70 95 static inline void hwrng_cleanup(struct hwrng *rng) ··· 329 304 { 330 305 int err = -EINVAL; 331 306 struct hwrng *old_rng, *tmp; 332 - unsigned char bytes[16]; 333 - int bytes_read; 334 307 335 308 if (rng->name == NULL || 336 309 (rng->data_read == NULL && rng->read == NULL)) ··· 370 347 INIT_LIST_HEAD(&rng->list); 371 348 list_add_tail(&rng->list, &rng_list); 372 349 373 - bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1); 374 - if (bytes_read > 0) 375 - add_device_randomness(bytes, bytes_read); 350 + if (old_rng && !rng->init) { 351 + /* 352 + * Use a new device's input to add some randomness to 353 + * the system. If this rng device isn't going to be 354 + * used right away, its init function hasn't been 355 + * called yet; so only use the randomness from devices 356 + * that don't need an init callback. 
357 + */ 358 + add_early_randomness(rng); 359 + } 360 + 376 361 out_unlock: 377 362 mutex_unlock(&rng_mutex); 378 363 out:
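In the hw_random hunks above, early-randomness seeding moves into `hwrng_init()`; at registration time the core reads a newly added device immediately only when another rng is already current (so its init won't run now) and the new device has no init callback. That gating condition, as an illustrative sketch:

```c
#include <stdbool.h>

/* After this change, hwrng_init() seeds the entropy pool itself.
 * So at registration, seed from the new device right away only if
 * (a) an old rng is already current, meaning init won't be called
 * now, and (b) the device needs no init callback, meaning it is
 * usable immediately. */
static bool seed_at_register(bool old_rng_exists, bool needs_init)
{
    return old_rng_exists && !needs_init;
}
```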
+10
drivers/char/hw_random/virtio-rng.c
··· 38 38 int index; 39 39 }; 40 40 41 + static bool probe_done; 42 + 41 43 static void random_recv_done(struct virtqueue *vq) 42 44 { 43 45 struct virtrng_info *vi = vq->vdev->priv; ··· 68 66 { 69 67 int ret; 70 68 struct virtrng_info *vi = (struct virtrng_info *)rng->priv; 69 + 70 + /* 71 + * Don't ask host for data till we're setup. This call can 72 + * happen during hwrng_register(), after commit d9e7972619. 73 + */ 74 + if (unlikely(!probe_done)) 75 + return 0; 71 76 72 77 if (!vi->busy) { 73 78 vi->busy = true; ··· 146 137 return err; 147 138 } 148 139 140 + probe_done = true; 149 141 return 0; 150 142 } 151 143
+3 -1
drivers/char/i8k.c
··· 138 138 if (!alloc_cpumask_var(&old_mask, GFP_KERNEL)) 139 139 return -ENOMEM; 140 140 cpumask_copy(old_mask, &current->cpus_allowed); 141 - set_cpus_allowed_ptr(current, cpumask_of(0)); 141 + rc = set_cpus_allowed_ptr(current, cpumask_of(0)); 142 + if (rc) 143 + goto out; 142 144 if (smp_processor_id() != 0) { 143 145 rc = -EBUSY; 144 146 goto out;
+14 -3
drivers/char/random.c
··· 641 641 } while (unlikely(entropy_count < pool_size-2 && pnfrac)); 642 642 } 643 643 644 - if (entropy_count < 0) { 644 + if (unlikely(entropy_count < 0)) { 645 645 pr_warn("random: negative entropy/overflow: pool %s count %d\n", 646 646 r->name, entropy_count); 647 647 WARN_ON(1); ··· 981 981 int reserved) 982 982 { 983 983 int entropy_count, orig; 984 - size_t ibytes; 984 + size_t ibytes, nfrac; 985 985 986 986 BUG_ON(r->entropy_count > r->poolinfo->poolfracbits); 987 987 ··· 999 999 } 1000 1000 if (ibytes < min) 1001 1001 ibytes = 0; 1002 - if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0) 1002 + 1003 + if (unlikely(entropy_count < 0)) { 1004 + pr_warn("random: negative entropy count: pool %s count %d\n", 1005 + r->name, entropy_count); 1006 + WARN_ON(1); 1007 + entropy_count = 0; 1008 + } 1009 + nfrac = ibytes << (ENTROPY_SHIFT + 3); 1010 + if ((size_t) entropy_count > nfrac) 1011 + entropy_count -= nfrac; 1012 + else 1003 1013 entropy_count = 0; 1004 1014 1005 1015 if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) ··· 1386 1376 "with %d bits of entropy available\n", 1387 1377 current->comm, nonblocking_pool.entropy_total); 1388 1378 1379 + nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); 1389 1380 ret = extract_entropy_user(&nonblocking_pool, buf, nbytes); 1390 1381 1391 1382 trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool),
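The random.c hunk above replaces `entropy_count -= ibytes << (ENTROPY_SHIFT + 3)` (which could drive the count negative) with a saturating subtraction, after first sanitizing an already-negative count. A self-contained sketch of the fixed arithmetic:

```c
#include <stddef.h>

#define ENTROPY_SHIFT 3   /* as in drivers/char/random.c */

/* Convert extracted bytes to fractional bits (bytes << (SHIFT+3))
 * and subtract with saturation at zero instead of wrapping. */
static int debit_entropy(int entropy_count, size_t ibytes)
{
    size_t nfrac;

    if (entropy_count < 0)          /* never trust a negative count */
        entropy_count = 0;
    nfrac = ibytes << (ENTROPY_SHIFT + 3);
    if ((size_t)entropy_count > nfrac)
        entropy_count -= (int)nfrac;
    else
        entropy_count = 0;
    return entropy_count;
}
```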
+2 -5
drivers/clk/clk-s2mps11.c
··· 230 230 goto err_reg; 231 231 } 232 232 233 - s2mps11_clk->lookup = devm_kzalloc(&pdev->dev, 234 - sizeof(struct clk_lookup), GFP_KERNEL); 233 + s2mps11_clk->lookup = clkdev_alloc(s2mps11_clk->clk, 234 + s2mps11_name(s2mps11_clk), NULL); 235 235 if (!s2mps11_clk->lookup) { 236 236 ret = -ENOMEM; 237 237 goto err_lup; 238 238 } 239 - 240 - s2mps11_clk->lookup->con_id = s2mps11_name(s2mps11_clk); 241 - s2mps11_clk->lookup->clk = s2mps11_clk->clk; 242 239 243 240 clkdev_add(s2mps11_clk->lookup); 244 241 }
+1 -1
drivers/clk/qcom/mmcc-msm8960.c
··· 1209 1209 1210 1210 static u8 mmcc_pxo_hdmi_map[] = { 1211 1211 [P_PXO] = 0, 1212 - [P_HDMI_PLL] = 2, 1212 + [P_HDMI_PLL] = 3, 1213 1213 }; 1214 1214 1215 1215 static const char *mmcc_pxo_hdmi[] = {
+4 -12
drivers/clk/samsung/clk-exynos4.c
··· 925 925 GATE(CLK_RTC, "rtc", "aclk100", E4X12_GATE_IP_PERIR, 15, 926 926 0, 0), 927 927 GATE(CLK_KEYIF, "keyif", "aclk100", E4X12_GATE_IP_PERIR, 16, 0, 0), 928 - GATE(CLK_SCLK_PWM_ISP, "sclk_pwm_isp", "div_pwm_isp", 929 - E4X12_SRC_MASK_ISP, 0, CLK_SET_RATE_PARENT, 0), 930 - GATE(CLK_SCLK_SPI0_ISP, "sclk_spi0_isp", "div_spi0_isp_pre", 931 - E4X12_SRC_MASK_ISP, 4, CLK_SET_RATE_PARENT, 0), 932 - GATE(CLK_SCLK_SPI1_ISP, "sclk_spi1_isp", "div_spi1_isp_pre", 933 - E4X12_SRC_MASK_ISP, 8, CLK_SET_RATE_PARENT, 0), 934 - GATE(CLK_SCLK_UART_ISP, "sclk_uart_isp", "div_uart_isp", 935 - E4X12_SRC_MASK_ISP, 12, CLK_SET_RATE_PARENT, 0), 936 - GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "sclk_pwm_isp", 928 + GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "div_pwm_isp", 937 929 E4X12_GATE_IP_ISP, 0, 0, 0), 938 - GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "sclk_spi0_isp", 930 + GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "div_spi0_isp_pre", 939 931 E4X12_GATE_IP_ISP, 1, 0, 0), 940 - GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "sclk_spi1_isp", 932 + GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "div_spi1_isp_pre", 941 933 E4X12_GATE_IP_ISP, 2, 0, 0), 942 - GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "sclk_uart_isp", 934 + GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "div_uart_isp", 943 935 E4X12_GATE_IP_ISP, 3, 0, 0), 944 936 GATE(CLK_WDT, "watchdog", "aclk100", E4X12_GATE_IP_PERIR, 14, 0, 0), 945 937 GATE(CLK_PCM0, "pcm0", "aclk100", E4X12_GATE_IP_MAUDIO, 2,
+1 -1
drivers/clk/samsung/clk-exynos5250.c
··· 661 661 GATE(CLK_RTC, "rtc", "div_aclk66", GATE_IP_PERIS, 20, 0, 0), 662 662 GATE(CLK_TMU, "tmu", "div_aclk66", GATE_IP_PERIS, 21, 0, 0), 663 663 GATE(CLK_SMMU_TV, "smmu_tv", "mout_aclk200_disp1_sub", 664 - GATE_IP_DISP1, 2, 0, 0), 664 + GATE_IP_DISP1, 9, 0, 0), 665 665 GATE(CLK_SMMU_FIMD1, "smmu_fimd1", "mout_aclk200_disp1_sub", 666 666 GATE_IP_DISP1, 8, 0, 0), 667 667 GATE(CLK_SMMU_2D, "smmu_2d", "div_aclk200", GATE_IP_ACP, 7, 0, 0),
+58 -31
drivers/clk/samsung/clk-exynos5420.c
··· 631 631 SRC_TOP4, 16, 1), 632 632 MUX(0, "mout_user_aclk266", mout_user_aclk266_p, SRC_TOP4, 20, 1), 633 633 MUX(0, "mout_user_aclk166", mout_user_aclk166_p, SRC_TOP4, 24, 1), 634 - MUX(0, "mout_user_aclk333", mout_user_aclk333_p, SRC_TOP4, 28, 1), 634 + MUX(CLK_MOUT_USER_ACLK333, "mout_user_aclk333", mout_user_aclk333_p, 635 + SRC_TOP4, 28, 1), 635 636 636 637 MUX(0, "mout_user_aclk400_disp1", mout_user_aclk400_disp1_p, 637 638 SRC_TOP5, 0, 1), ··· 685 684 SRC_TOP11, 12, 1), 686 685 MUX(0, "mout_sw_aclk266", mout_sw_aclk266_p, SRC_TOP11, 20, 1), 687 686 MUX(0, "mout_sw_aclk166", mout_sw_aclk166_p, SRC_TOP11, 24, 1), 688 - MUX(0, "mout_sw_aclk333", mout_sw_aclk333_p, SRC_TOP11, 28, 1), 687 + MUX(CLK_MOUT_SW_ACLK333, "mout_sw_aclk333", mout_sw_aclk333_p, 688 + SRC_TOP11, 28, 1), 689 689 690 690 MUX(0, "mout_sw_aclk400_disp1", mout_sw_aclk400_disp1_p, 691 691 SRC_TOP12, 4, 1), ··· 892 890 GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0), 893 891 GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen", 894 892 GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0), 895 - GATE(CLK_ACLK66_PERIC, "aclk66_peric", "mout_user_aclk66_peric", 896 - GATE_BUS_TOP, 11, CLK_IGNORE_UNUSED, 0), 897 893 GATE(0, "aclk266_isp", "mout_user_aclk266_isp", 898 894 GATE_BUS_TOP, 13, 0, 0), 899 895 GATE(0, "aclk166", "mout_user_aclk166", ··· 994 994 SRC_MASK_FSYS, 24, CLK_SET_RATE_PARENT, 0), 995 995 996 996 /* PERIC Block */ 997 - GATE(CLK_UART0, "uart0", "aclk66_peric", GATE_IP_PERIC, 0, 0, 0), 998 - GATE(CLK_UART1, "uart1", "aclk66_peric", GATE_IP_PERIC, 1, 0, 0), 999 - GATE(CLK_UART2, "uart2", "aclk66_peric", GATE_IP_PERIC, 2, 0, 0), 1000 - GATE(CLK_UART3, "uart3", "aclk66_peric", GATE_IP_PERIC, 3, 0, 0), 1001 - GATE(CLK_I2C0, "i2c0", "aclk66_peric", GATE_IP_PERIC, 6, 0, 0), 1002 - GATE(CLK_I2C1, "i2c1", "aclk66_peric", GATE_IP_PERIC, 7, 0, 0), 1003 - GATE(CLK_I2C2, "i2c2", "aclk66_peric", GATE_IP_PERIC, 8, 0, 0), 1004 - GATE(CLK_I2C3, "i2c3", "aclk66_peric", GATE_IP_PERIC, 9, 0, 0), 1005 - GATE(CLK_USI0, "usi0", 
"aclk66_peric", GATE_IP_PERIC, 10, 0, 0), 1006 - GATE(CLK_USI1, "usi1", "aclk66_peric", GATE_IP_PERIC, 11, 0, 0), 1007 - GATE(CLK_USI2, "usi2", "aclk66_peric", GATE_IP_PERIC, 12, 0, 0), 1008 - GATE(CLK_USI3, "usi3", "aclk66_peric", GATE_IP_PERIC, 13, 0, 0), 1009 - GATE(CLK_I2C_HDMI, "i2c_hdmi", "aclk66_peric", GATE_IP_PERIC, 14, 0, 0), 1010 - GATE(CLK_TSADC, "tsadc", "aclk66_peric", GATE_IP_PERIC, 15, 0, 0), 1011 - GATE(CLK_SPI0, "spi0", "aclk66_peric", GATE_IP_PERIC, 16, 0, 0), 1012 - GATE(CLK_SPI1, "spi1", "aclk66_peric", GATE_IP_PERIC, 17, 0, 0), 1013 - GATE(CLK_SPI2, "spi2", "aclk66_peric", GATE_IP_PERIC, 18, 0, 0), 1014 - GATE(CLK_I2S1, "i2s1", "aclk66_peric", GATE_IP_PERIC, 20, 0, 0), 1015 - GATE(CLK_I2S2, "i2s2", "aclk66_peric", GATE_IP_PERIC, 21, 0, 0), 1016 - GATE(CLK_PCM1, "pcm1", "aclk66_peric", GATE_IP_PERIC, 22, 0, 0), 1017 - GATE(CLK_PCM2, "pcm2", "aclk66_peric", GATE_IP_PERIC, 23, 0, 0), 1018 - GATE(CLK_PWM, "pwm", "aclk66_peric", GATE_IP_PERIC, 24, 0, 0), 1019 - GATE(CLK_SPDIF, "spdif", "aclk66_peric", GATE_IP_PERIC, 26, 0, 0), 1020 - GATE(CLK_USI4, "usi4", "aclk66_peric", GATE_IP_PERIC, 28, 0, 0), 1021 - GATE(CLK_USI5, "usi5", "aclk66_peric", GATE_IP_PERIC, 30, 0, 0), 1022 - GATE(CLK_USI6, "usi6", "aclk66_peric", GATE_IP_PERIC, 31, 0, 0), 997 + GATE(CLK_UART0, "uart0", "mout_user_aclk66_peric", 998 + GATE_IP_PERIC, 0, 0, 0), 999 + GATE(CLK_UART1, "uart1", "mout_user_aclk66_peric", 1000 + GATE_IP_PERIC, 1, 0, 0), 1001 + GATE(CLK_UART2, "uart2", "mout_user_aclk66_peric", 1002 + GATE_IP_PERIC, 2, 0, 0), 1003 + GATE(CLK_UART3, "uart3", "mout_user_aclk66_peric", 1004 + GATE_IP_PERIC, 3, 0, 0), 1005 + GATE(CLK_I2C0, "i2c0", "mout_user_aclk66_peric", 1006 + GATE_IP_PERIC, 6, 0, 0), 1007 + GATE(CLK_I2C1, "i2c1", "mout_user_aclk66_peric", 1008 + GATE_IP_PERIC, 7, 0, 0), 1009 + GATE(CLK_I2C2, "i2c2", "mout_user_aclk66_peric", 1010 + GATE_IP_PERIC, 8, 0, 0), 1011 + GATE(CLK_I2C3, "i2c3", "mout_user_aclk66_peric", 1012 + GATE_IP_PERIC, 9, 0, 0), 1013 + 
GATE(CLK_USI0, "usi0", "mout_user_aclk66_peric", 1014 + GATE_IP_PERIC, 10, 0, 0), 1015 + GATE(CLK_USI1, "usi1", "mout_user_aclk66_peric", 1016 + GATE_IP_PERIC, 11, 0, 0), 1017 + GATE(CLK_USI2, "usi2", "mout_user_aclk66_peric", 1018 + GATE_IP_PERIC, 12, 0, 0), 1019 + GATE(CLK_USI3, "usi3", "mout_user_aclk66_peric", 1020 + GATE_IP_PERIC, 13, 0, 0), 1021 + GATE(CLK_I2C_HDMI, "i2c_hdmi", "mout_user_aclk66_peric", 1022 + GATE_IP_PERIC, 14, 0, 0), 1023 + GATE(CLK_TSADC, "tsadc", "mout_user_aclk66_peric", 1024 + GATE_IP_PERIC, 15, 0, 0), 1025 + GATE(CLK_SPI0, "spi0", "mout_user_aclk66_peric", 1026 + GATE_IP_PERIC, 16, 0, 0), 1027 + GATE(CLK_SPI1, "spi1", "mout_user_aclk66_peric", 1028 + GATE_IP_PERIC, 17, 0, 0), 1029 + GATE(CLK_SPI2, "spi2", "mout_user_aclk66_peric", 1030 + GATE_IP_PERIC, 18, 0, 0), 1031 + GATE(CLK_I2S1, "i2s1", "mout_user_aclk66_peric", 1032 + GATE_IP_PERIC, 20, 0, 0), 1033 + GATE(CLK_I2S2, "i2s2", "mout_user_aclk66_peric", 1034 + GATE_IP_PERIC, 21, 0, 0), 1035 + GATE(CLK_PCM1, "pcm1", "mout_user_aclk66_peric", 1036 + GATE_IP_PERIC, 22, 0, 0), 1037 + GATE(CLK_PCM2, "pcm2", "mout_user_aclk66_peric", 1038 + GATE_IP_PERIC, 23, 0, 0), 1039 + GATE(CLK_PWM, "pwm", "mout_user_aclk66_peric", 1040 + GATE_IP_PERIC, 24, 0, 0), 1041 + GATE(CLK_SPDIF, "spdif", "mout_user_aclk66_peric", 1042 + GATE_IP_PERIC, 26, 0, 0), 1043 + GATE(CLK_USI4, "usi4", "mout_user_aclk66_peric", 1044 + GATE_IP_PERIC, 28, 0, 0), 1045 + GATE(CLK_USI5, "usi5", "mout_user_aclk66_peric", 1046 + GATE_IP_PERIC, 30, 0, 0), 1047 + GATE(CLK_USI6, "usi6", "mout_user_aclk66_peric", 1048 + GATE_IP_PERIC, 31, 0, 0), 1023 1049 1024 - GATE(CLK_KEYIF, "keyif", "aclk66_peric", GATE_BUS_PERIC, 22, 0, 0), 1050 + GATE(CLK_KEYIF, "keyif", "mout_user_aclk66_peric", 1051 + GATE_BUS_PERIC, 22, 0, 0), 1025 1052 1026 1053 /* PERIS Block */ 1027 1054 GATE(CLK_CHIPID, "chipid", "aclk66_psgen",
+7 -2
drivers/clk/samsung/clk-s3c2410.c
··· 152 152 ALIAS(HCLK, NULL, "hclk"), 153 153 ALIAS(MPLL, NULL, "mpll"), 154 154 ALIAS(FCLK, NULL, "fclk"), 155 + ALIAS(PCLK, NULL, "watchdog"), 156 + ALIAS(PCLK_SDI, NULL, "sdi"), 157 + ALIAS(HCLK_NAND, NULL, "nand"), 158 + ALIAS(PCLK_I2S, NULL, "iis"), 159 + ALIAS(PCLK_I2C, NULL, "i2c"), 155 160 }; 156 161 157 162 /* S3C2410 specific clocks */ ··· 383 378 if (!np) 384 379 s3c2410_common_clk_register_fixed_ext(ctx, xti_f); 385 380 386 - if (current_soc == 2410) { 381 + if (current_soc == S3C2410) { 387 382 if (_get_rate("xti") == 12 * MHZ) { 388 383 s3c2410_plls[mpll].rate_table = pll_s3c2410_12mhz_tbl; 389 384 s3c2410_plls[upll].rate_table = pll_s3c2410_12mhz_tbl; ··· 437 432 samsung_clk_register_fixed_factor(ctx, s3c2410_ffactor, 438 433 ARRAY_SIZE(s3c2410_ffactor)); 439 434 samsung_clk_register_alias(ctx, s3c2410_aliases, 440 - ARRAY_SIZE(s3c2410_common_aliases)); 435 + ARRAY_SIZE(s3c2410_aliases)); 441 436 break; 442 437 case S3C2440: 443 438 samsung_clk_register_mux(ctx, s3c2440_muxes,
+4 -2
drivers/clk/samsung/clk-s3c64xx.c
··· 418 418 ALIAS(SCLK_MMC2, "s3c-sdhci.2", "mmc_busclk.2"), 419 419 ALIAS(SCLK_MMC1, "s3c-sdhci.1", "mmc_busclk.2"), 420 420 ALIAS(SCLK_MMC0, "s3c-sdhci.0", "mmc_busclk.2"), 421 - ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi-bus"), 422 - ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi-bus"), 421 + ALIAS(PCLK_SPI1, "s3c6410-spi.1", "spi_busclk0"), 422 + ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi_busclk2"), 423 + ALIAS(PCLK_SPI0, "s3c6410-spi.0", "spi_busclk0"), 424 + ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi_busclk2"), 423 425 ALIAS(SCLK_AUDIO1, "samsung-pcm.1", "audio-bus"), 424 426 ALIAS(SCLK_AUDIO1, "samsung-i2s.1", "audio-bus"), 425 427 ALIAS(SCLK_AUDIO0, "samsung-pcm.0", "audio-bus"),
+11 -5
drivers/clk/spear/spear3xx_clock.c
··· 211 211 /* array of all spear 320 clock lookups */ 212 212 #ifdef CONFIG_MACH_SPEAR320 213 213 214 - #define SPEAR320_CONTROL_REG (soc_config_base + 0x0000) 214 + #define SPEAR320_CONTROL_REG (soc_config_base + 0x0010) 215 215 #define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018) 216 216 217 217 #define SPEAR320_UARTX_PCLK_MASK 0x1 ··· 245 245 "ras_syn0_gclk", }; 246 246 static const char *uartx_parents[] = { "ras_syn1_gclk", "ras_apb_clk", }; 247 247 248 - static void __init spear320_clk_init(void __iomem *soc_config_base) 248 + static void __init spear320_clk_init(void __iomem *soc_config_base, 249 + struct clk *ras_apb_clk) 249 250 { 250 251 struct clk *clk; 251 252 ··· 343 342 SPEAR320_CONTROL_REG, UART1_PCLK_SHIFT, UART1_PCLK_MASK, 344 343 0, &_lock); 345 344 clk_register_clkdev(clk, NULL, "a3000000.serial"); 345 + /* Enforce ras_apb_clk */ 346 + clk_set_parent(clk, ras_apb_clk); 346 347 347 348 clk = clk_register_mux(NULL, "uart2_clk", uartx_parents, 348 349 ARRAY_SIZE(uartx_parents), ··· 352 349 SPEAR320_EXT_CTRL_REG, SPEAR320_UART2_PCLK_SHIFT, 353 350 SPEAR320_UARTX_PCLK_MASK, 0, &_lock); 354 351 clk_register_clkdev(clk, NULL, "a4000000.serial"); 352 + /* Enforce ras_apb_clk */ 353 + clk_set_parent(clk, ras_apb_clk); 355 354 356 355 clk = clk_register_mux(NULL, "uart3_clk", uartx_parents, 357 356 ARRAY_SIZE(uartx_parents), ··· 384 379 clk_register_clkdev(clk, NULL, "60100000.serial"); 385 380 } 386 381 #else 387 - static inline void spear320_clk_init(void __iomem *soc_config_base) { } 382 + static inline void spear320_clk_init(void __iomem *sb, struct clk *rc) { } 388 383 #endif 389 384 390 385 void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base) 391 386 { 392 - struct clk *clk, *clk1; 387 + struct clk *clk, *clk1, *ras_apb_clk; 393 388 394 389 clk = clk_register_fixed_rate(NULL, "osc_32k_clk", NULL, CLK_IS_ROOT, 395 390 32000); ··· 618 613 clk = clk_register_gate(NULL, "ras_apb_clk", "apb_clk", 0, RAS_CLK_ENB, 619 
614 RAS_APB_CLK_ENB, 0, &_lock); 620 615 clk_register_clkdev(clk, "ras_apb_clk", NULL); 616 + ras_apb_clk = clk; 621 617 622 618 clk = clk_register_gate(NULL, "ras_32k_clk", "osc_32k_clk", 0, 623 619 RAS_CLK_ENB, RAS_32K_CLK_ENB, 0, &_lock); ··· 665 659 else if (of_machine_is_compatible("st,spear310")) 666 660 spear310_clk_init(); 667 661 else if (of_machine_is_compatible("st,spear320")) 668 - spear320_clk_init(soc_config_base); 662 + spear320_clk_init(soc_config_base, ras_apb_clk); 669 663 }
+1 -1
drivers/clk/sunxi/clk-sun6i-apb0-gates.c
··· 29 29 30 30 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 31 31 reg = devm_ioremap_resource(&pdev->dev, r); 32 - if (!reg) 32 + if (IS_ERR(reg)) 33 33 return PTR_ERR(reg); 34 34 35 35 clk_parent = of_clk_get_parent_name(np, 0);
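The fix above replaces a NULL check with `IS_ERR()`: `devm_ioremap_resource()` never returns NULL on failure, it returns an errno encoded as an `ERR_PTR`, which is non-NULL and so slips straight past `if (!reg)`. A minimal userspace sketch of the kernel's `ERR_PTR` convention (macros re-implemented here for illustration; `fake_ioremap` and the `-12`/ENOMEM value are stand-ins):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace re-implementation of the kernel's <linux/err.h> helpers. */
#define MAX_ERRNO 4095
#define ERR_PTR(err)  ((void *)(intptr_t)(err))
#define PTR_ERR(ptr)  ((long)(intptr_t)(ptr))
#define IS_ERR(ptr)   ((uintptr_t)(ptr) >= (uintptr_t)-MAX_ERRNO)

/* Stand-in for devm_ioremap_resource(): fails with -ENOMEM (-12). */
static void *fake_ioremap(int fail)
{
    static int mmio_word;
    return fail ? ERR_PTR(-12) : (void *)&mmio_word;
}
```

The last assertion below is the bug the patch fixes: an `ERR_PTR` compares unequal to NULL, so only `IS_ERR()` catches it.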
+3 -5
drivers/clk/ti/apll.c
··· 77 77 if (i == MAX_APLL_WAIT_TRIES) { 78 78 pr_warn("clock: %s failed transition to '%s'\n", 79 79 clk_name, (state) ? "locked" : "bypassed"); 80 - } else { 80 + r = -EBUSY; 81 + } else 81 82 pr_debug("clock: %s transition to '%s' in %d loops\n", 82 83 clk_name, (state) ? "locked" : "bypassed", i); 83 - 84 - r = 0; 85 - } 86 84 87 85 return r; 88 86 } ··· 336 338 const char *parent_name; 337 339 u32 val; 338 340 339 - ad = kzalloc(sizeof(*clk_hw), GFP_KERNEL); 341 + ad = kzalloc(sizeof(*ad), GFP_KERNEL); 340 342 clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL); 341 343 init = kzalloc(sizeof(*init), GFP_KERNEL); 342 344
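One of the apll.c fixes corrects `kzalloc(sizeof(*clk_hw), ...)` to `sizeof(*ad)`: taking `sizeof` of the wrong object compiles cleanly but sizes the buffer for the wrong struct. The `sizeof(*dest)` idiom ties the size to the destination pointer so this class of bug cannot arise. A sketch with hypothetical struct names of different sizes:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical structs standing in for struct dpll_data and
 * struct clk_hw; only their differing sizes matter here. */
struct apll_data { int m, n; int clk_ref[8]; };
struct hw_stub   { int flags; };

/* kzalloc() stand-in: zeroed allocation of n bytes. */
static void *zalloc(size_t n)
{
    return calloc(1, n);
}
```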
+3 -2
drivers/clk/ti/dpll.c
··· 161 161 } 162 162 163 163 #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 164 - defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) 164 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) || \ 165 + defined(CONFIG_SOC_AM43XX) 165 166 /** 166 167 * ti_clk_register_dpll_x2 - Registers a DPLLx2 clock 167 168 * @node: device node for this clock ··· 323 322 of_ti_omap4_dpll_x2_setup); 324 323 #endif 325 324 326 - #ifdef CONFIG_SOC_AM33XX 325 + #if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) 327 326 static void __init of_ti_am3_dpll_x2_setup(struct device_node *node) 328 327 { 329 328 ti_clk_register_dpll_x2(node, &dpll_x2_ck_ops, NULL);
+1 -1
drivers/clk/ti/mux.c
··· 160 160 u8 clk_mux_flags = 0; 161 161 u32 mask = 0; 162 162 u32 shift = 0; 163 - u32 flags = 0; 163 + u32 flags = CLK_SET_RATE_NO_REPARENT; 164 164 165 165 num_parents = of_clk_get_parent_count(node); 166 166 if (num_parents < 2) {
+18 -2
drivers/clocksource/exynos_mct.c
··· 162 162 exynos4_mct_write(reg, EXYNOS4_MCT_G_TCON); 163 163 } 164 164 165 - static cycle_t exynos4_frc_read(struct clocksource *cs) 165 + static cycle_t notrace _exynos4_frc_read(void) 166 166 { 167 167 unsigned int lo, hi; 168 168 u32 hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U); ··· 174 174 } while (hi != hi2); 175 175 176 176 return ((cycle_t)hi << 32) | lo; 177 + } 178 + 179 + static cycle_t exynos4_frc_read(struct clocksource *cs) 180 + { 181 + return _exynos4_frc_read(); 177 182 } 178 183 179 184 static void exynos4_frc_resume(struct clocksource *cs) ··· 197 192 198 193 static u64 notrace exynos4_read_sched_clock(void) 199 194 { 200 - return exynos4_frc_read(&mct_frc); 195 + return _exynos4_frc_read(); 196 + } 197 + 198 + static struct delay_timer exynos4_delay_timer; 199 + 200 + static cycles_t exynos4_read_current_timer(void) 201 + { 202 + return _exynos4_frc_read(); 201 203 } 202 204 203 205 static void __init exynos4_clocksource_init(void) 204 206 { 205 207 exynos4_mct_frc_start(); 208 + 209 + exynos4_delay_timer.read_current_timer = &exynos4_read_current_timer; 210 + exynos4_delay_timer.freq = clk_rate; 211 + register_current_timer_delay(&exynos4_delay_timer); 206 212 207 213 if (clocksource_register_hz(&mct_frc, clk_rate)) 208 214 panic("%s: can't register clocksource\n", mct_frc.name);
+2 -1
drivers/cpufreq/Kconfig.arm
··· 104 104 tristate "Freescale i.MX6 cpufreq support" 105 105 depends on ARCH_MXC 106 106 depends on REGULATOR_ANATOP 107 + select PM_OPP 107 108 help 108 109 This adds cpufreq driver support for Freescale i.MX6 series SoCs. 109 110 ··· 119 118 If in doubt, say Y. 120 119 121 120 config ARM_KIRKWOOD_CPUFREQ 122 - def_bool MACH_KIRKWOOD 121 + def_bool ARCH_KIRKWOOD || MACH_KIRKWOOD 123 122 help 124 123 This adds the CPUFreq driver for Marvell Kirkwood 125 124 SoCs.
+1 -1
drivers/cpufreq/Makefile
··· 49 49 # LITTLE drivers, so that it is probed last. 50 50 obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o 51 51 52 - obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o 52 + obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 53 53 obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o 54 54 obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o 55 55 obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o
+2 -5
drivers/cpufreq/cpufreq-cpu0.c
··· 152 152 goto out_put_reg; 153 153 } 154 154 155 - ret = of_init_opp_table(cpu_dev); 156 - if (ret) { 157 - pr_err("failed to init OPP table: %d\n", ret); 158 - goto out_put_clk; 159 - } 155 + /* OPPs might be populated at runtime, don't check for error here */ 156 + of_init_opp_table(cpu_dev); 160 157 161 158 ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); 162 159 if (ret) {
+4 -2
drivers/cpufreq/cpufreq.c
··· 1153 1153 * the creation of a brand new one. So we need to perform this update 1154 1154 * by invoking update_policy_cpu(). 1155 1155 */ 1156 - if (recover_policy && cpu != policy->cpu) 1156 + if (recover_policy && cpu != policy->cpu) { 1157 1157 update_policy_cpu(policy, cpu); 1158 - else 1158 + WARN_ON(kobject_move(&policy->kobj, &dev->kobj)); 1159 + } else { 1159 1160 policy->cpu = cpu; 1161 + } 1160 1162 1161 1163 cpumask_copy(policy->cpus, cpumask_of(cpu)); 1162 1164
+22 -13
drivers/cpufreq/intel_pstate.c
··· 128 128 129 129 struct perf_limits { 130 130 int no_turbo; 131 + int turbo_disabled; 131 132 int max_perf_pct; 132 133 int min_perf_pct; 133 134 int32_t max_perf; ··· 288 287 if (ret != 1) 289 288 return -EINVAL; 290 289 limits.no_turbo = clamp_t(int, input, 0 , 1); 291 - 290 + if (limits.turbo_disabled) { 291 + pr_warn("Turbo disabled by BIOS or unavailable on processor\n"); 292 + limits.no_turbo = limits.turbo_disabled; 293 + } 292 294 return count; 293 295 } 294 296 ··· 361 357 { 362 358 u64 value; 363 359 rdmsrl(BYT_RATIOS, value); 364 - return (value >> 8) & 0x3F; 360 + return (value >> 8) & 0x7F; 365 361 } 366 362 367 363 static int byt_get_max_pstate(void) 368 364 { 369 365 u64 value; 370 366 rdmsrl(BYT_RATIOS, value); 371 - return (value >> 16) & 0x3F; 367 + return (value >> 16) & 0x7F; 372 368 } 373 369 374 370 static int byt_get_turbo_pstate(void) 375 371 { 376 372 u64 value; 377 373 rdmsrl(BYT_TURBO_RATIOS, value); 378 - return value & 0x3F; 374 + return value & 0x7F; 379 375 } 380 376 381 377 static void byt_set_pstate(struct cpudata *cpudata, int pstate) ··· 385 381 u32 vid; 386 382 387 383 val = pstate << 8; 388 - if (limits.no_turbo) 384 + if (limits.no_turbo && !limits.turbo_disabled) 389 385 val |= (u64)1 << 32; 390 386 391 387 vid_fp = cpudata->vid.min + mul_fp( ··· 409 405 410 406 411 407 rdmsrl(BYT_VIDS, value); 412 - cpudata->vid.min = int_tofp((value >> 8) & 0x3f); 413 - cpudata->vid.max = int_tofp((value >> 16) & 0x3f); 408 + cpudata->vid.min = int_tofp((value >> 8) & 0x7f); 409 + cpudata->vid.max = int_tofp((value >> 16) & 0x7f); 414 410 cpudata->vid.ratio = div_fp( 415 411 cpudata->vid.max - cpudata->vid.min, 416 412 int_tofp(cpudata->pstate.max_pstate - ··· 452 448 u64 val; 453 449 454 450 val = pstate << 8; 455 - if (limits.no_turbo) 451 + if (limits.no_turbo && !limits.turbo_disabled) 456 452 val |= (u64)1 << 32; 457 453 458 454 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); ··· 700 696 701 697 cpu = all_cpu_data[cpunum]; 702 
698 703 - intel_pstate_get_cpu_pstates(cpu); 704 - 705 699 cpu->cpu = cpunum; 700 + intel_pstate_get_cpu_pstates(cpu); 706 701 707 702 init_timer_deferrable(&cpu->timer); 708 703 cpu->timer.function = intel_pstate_timer_func; ··· 744 741 limits.min_perf = int_tofp(1); 745 742 limits.max_perf_pct = 100; 746 743 limits.max_perf = int_tofp(1); 747 - limits.no_turbo = 0; 744 + limits.no_turbo = limits.turbo_disabled; 748 745 return 0; 749 746 } 750 747 limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; ··· 787 784 { 788 785 struct cpudata *cpu; 789 786 int rc; 787 + u64 misc_en; 790 788 791 789 rc = intel_pstate_init_cpu(policy->cpu); 792 790 if (rc) ··· 795 791 796 792 cpu = all_cpu_data[policy->cpu]; 797 793 798 - if (!limits.no_turbo && 799 - limits.min_perf_pct == 100 && limits.max_perf_pct == 100) 794 + rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); 795 + if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || 796 + cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) { 797 + limits.turbo_disabled = 1; 798 + limits.no_turbo = 1; 799 + } 800 + if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100) 800 801 policy->policy = CPUFREQ_POLICY_PERFORMANCE; 801 802 else 802 803 policy->policy = CPUFREQ_POLICY_POWERSAVE;
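The Baytrail portion of the intel_pstate patch widens the ratio and VID field masks from `0x3F` to `0x7F` (6 to 7 bits), since a ratio with bit 6 set was silently truncated. A sketch of the field extraction; the bit layout (bits [22:16]) follows the patch, the MSR value itself is made up:

```c
#include <assert.h>
#include <stdint.h>

/* Extract a ratio field from a BYT_RATIOS-style MSR value. */
static unsigned ratio_field(uint64_t msr, unsigned mask)
{
    return (unsigned)(msr >> 16) & mask;
}
```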
+1 -1
drivers/cpufreq/sa1110-cpufreq.c
··· 349 349 name = "K4S641632D"; 350 350 if (machine_is_h3100()) 351 351 name = "KM416S4030CT"; 352 - if (machine_is_jornada720()) 352 + if (machine_is_jornada720() || machine_is_h3600()) 353 353 name = "K4S281632B-1H"; 354 354 if (machine_is_nanoengine()) 355 355 name = "MT48LC8M16A2TG-75";
+3 -5
drivers/crypto/caam/jr.c
··· 453 453 int error; 454 454 455 455 jrdev = &pdev->dev; 456 - jrpriv = kmalloc(sizeof(struct caam_drv_private_jr), 457 - GFP_KERNEL); 456 + jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr), 457 + GFP_KERNEL); 458 458 if (!jrpriv) 459 459 return -ENOMEM; 460 460 ··· 487 487 488 488 /* Now do the platform independent part */ 489 489 error = caam_jr_init(jrdev); /* now turn on hardware */ 490 - if (error) { 491 - kfree(jrpriv); 490 + if (error) 492 491 return error; 493 - } 494 492 495 493 jrpriv->dev = jrdev; 496 494 spin_lock(&driver_data.jr_alloc_lock);
+10 -3
drivers/dma/cppi41.c
··· 86 86 87 87 #define USBSS_IRQ_PD_COMP (1 << 2) 88 88 89 + /* Packet Descriptor */ 90 + #define PD2_ZERO_LENGTH (1 << 19) 91 + 89 92 struct cppi41_channel { 90 93 struct dma_chan chan; 91 94 struct dma_async_tx_descriptor txd; ··· 310 307 __iormb(); 311 308 312 309 while (val) { 313 - u32 desc; 310 + u32 desc, len; 314 311 315 312 q_num = __fls(val); 316 313 val &= ~(1 << q_num); ··· 322 319 q_num, desc); 323 320 continue; 324 321 } 325 - c->residue = pd_trans_len(c->desc->pd6) - 326 - pd_trans_len(c->desc->pd0); 327 322 323 + if (c->desc->pd2 & PD2_ZERO_LENGTH) 324 + len = 0; 325 + else 326 + len = pd_trans_len(c->desc->pd0); 327 + 328 + c->residue = pd_trans_len(c->desc->pd6) - len; 328 329 dma_cookie_complete(&c->txd); 329 330 c->txd.callback(c->txd.callback_param); 330 331 }
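The cppi41 fix handles descriptors flagged `PD2_ZERO_LENGTH`: such a transfer completed with no data, so the residue must be the full requested length rather than `pd6 - pd0`. The arithmetic, with the descriptor length fields reduced to plain parameters:

```c
#include <assert.h>
#include <stdint.h>

#define PD2_ZERO_LENGTH (1u << 19)

/* Residue after the patch: a zero-length completion transferred
 * nothing, so the whole requested length remains. */
static uint32_t residue(uint32_t pd2, uint32_t done_len, uint32_t req_len)
{
    uint32_t len = (pd2 & PD2_ZERO_LENGTH) ? 0 : done_len;
    return req_len - len;
}
```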
+18 -4
drivers/dma/imx-sdma.c
··· 255 255 enum dma_slave_buswidth word_size; 256 256 unsigned int buf_tail; 257 257 unsigned int num_bd; 258 + unsigned int period_len; 258 259 struct sdma_buffer_descriptor *bd; 259 260 dma_addr_t bd_phys; 260 261 unsigned int pc_from_device, pc_to_device; ··· 594 593 595 594 static void sdma_handle_channel_loop(struct sdma_channel *sdmac) 596 595 { 596 + if (sdmac->desc.callback) 597 + sdmac->desc.callback(sdmac->desc.callback_param); 598 + } 599 + 600 + static void sdma_update_channel_loop(struct sdma_channel *sdmac) 601 + { 597 602 struct sdma_buffer_descriptor *bd; 598 603 599 604 /* ··· 618 611 bd->mode.status |= BD_DONE; 619 612 sdmac->buf_tail++; 620 613 sdmac->buf_tail %= sdmac->num_bd; 621 - 622 - if (sdmac->desc.callback) 623 - sdmac->desc.callback(sdmac->desc.callback_param); 624 614 } 625 615 } 626 616 ··· 672 668 while (stat) { 673 669 int channel = fls(stat) - 1; 674 670 struct sdma_channel *sdmac = &sdma->channel[channel]; 671 + 672 + if (sdmac->flags & IMX_DMA_SG_LOOP) 673 + sdma_update_channel_loop(sdmac); 675 674 676 675 tasklet_schedule(&sdmac->tasklet); 677 676 ··· 1136 1129 sdmac->status = DMA_IN_PROGRESS; 1137 1130 1138 1131 sdmac->buf_tail = 0; 1132 + sdmac->period_len = period_len; 1139 1133 1140 1134 sdmac->flags |= IMX_DMA_SG_LOOP; 1141 1135 sdmac->direction = direction; ··· 1233 1225 struct dma_tx_state *txstate) 1234 1226 { 1235 1227 struct sdma_channel *sdmac = to_sdma_chan(chan); 1228 + u32 residue; 1229 + 1230 + if (sdmac->flags & IMX_DMA_SG_LOOP) 1231 + residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len; 1232 + else 1233 + residue = sdmac->chn_count - sdmac->chn_real_count; 1236 1234 1237 1235 dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 1238 - sdmac->chn_count - sdmac->chn_real_count); 1236 + residue); 1239 1237 1240 1238 return sdmac->status; 1241 1239 }
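For cyclic (looped) imx-sdma transfers the patch reports residue as the number of not-yet-consumed periods times the fixed period length, instead of the scatter-gather byte counters that do not apply in loop mode:

```c
#include <assert.h>

/* Cyclic-transfer residue as computed in the patch:
 * residue = (num_bd - buf_tail) * period_len. */
static unsigned cyclic_residue(unsigned num_bd, unsigned buf_tail,
                               unsigned period_len)
{
    return (num_bd - buf_tail) * period_len;
}
```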
+1
drivers/firewire/Kconfig
··· 1 1 menu "IEEE 1394 (FireWire) support" 2 + depends on HAS_DMA 2 3 depends on PCI || COMPILE_TEST 3 4 # firewire-core does not depend on PCI but is 4 5 # not useful without PCI controller driver
+15 -7
drivers/firmware/efi/efi.c
··· 346 346 347 347 struct param_info { 348 348 int verbose; 349 + int found; 349 350 void *params; 350 351 }; 351 352 ··· 363 362 (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0)) 364 363 return 0; 365 364 366 - pr_info("Getting parameters from FDT:\n"); 367 - 368 365 for (i = 0; i < ARRAY_SIZE(dt_params); i++) { 369 366 prop = of_get_flat_dt_prop(node, dt_params[i].propname, &len); 370 - if (!prop) { 371 - pr_err("Can't find %s in device tree!\n", 372 - dt_params[i].name); 367 + if (!prop) 373 368 return 0; 374 - } 375 369 dest = info->params + dt_params[i].offset; 370 + info->found++; 376 371 377 372 val = of_read_number(prop, len / sizeof(u32)); 378 373 ··· 387 390 int __init efi_get_fdt_params(struct efi_fdt_params *params, int verbose) 388 391 { 389 392 struct param_info info; 393 + int ret; 394 + 395 + pr_info("Getting EFI parameters from FDT:\n"); 390 396 391 397 info.verbose = verbose; 398 + info.found = 0; 392 399 info.params = params; 393 400 394 - return of_scan_flat_dt(fdt_find_uefi_params, &info); 401 + ret = of_scan_flat_dt(fdt_find_uefi_params, &info); 402 + if (!info.found) 403 + pr_info("UEFI not found.\n"); 404 + else if (!ret) 405 + pr_err("Can't find '%s' in device tree!\n", 406 + dt_params[info.found].name); 407 + 408 + return ret; 395 409 } 396 410 #endif /* CONFIG_EFI_PARAMS_FROM_FDT */
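The efi.c change counts how many FDT properties were found so the caller can distinguish "no UEFI at all" (zero found, informational) from "partially populated" (some found, error naming the first missing property via `dt_params[info.found].name`). A sketch of that scan-and-count pattern; the property table here is illustrative:

```c
#include <assert.h>
#include <string.h>

/* Fixed table of required properties (names illustrative). */
static const char *props[] = { "linux,uefi-system-table",
                               "linux,uefi-mmap-start" };

/* Scan for each property in order; on a miss, *found indexes the
 * first missing entry, mirroring dt_params[info.found].name. */
static int scan(const char **present, int n_present, int *found)
{
    *found = 0;
    for (unsigned i = 0; i < sizeof(props) / sizeof(props[0]); i++) {
        int hit = 0;
        for (int j = 0; j < n_present; j++)
            if (strcmp(props[i], present[j]) == 0)
                hit = 1;
        if (!hit)
            return 0;
        (*found)++;
    }
    return 1;
}
```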
-10
drivers/firmware/efi/fdt.c
··· 23 23 u32 fdt_val32; 24 24 u64 fdt_val64; 25 25 26 - /* 27 - * Copy definition of linux_banner here. Since this code is 28 - * built as part of the decompressor for ARM v7, pulling 29 - * in version.c where linux_banner is defined for the 30 - * kernel brings other kernel dependencies with it. 31 - */ 32 - const char linux_banner[] = 33 - "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@" 34 - LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n"; 35 - 36 26 /* Do some checks on provided FDT, if it exists*/ 37 27 if (orig_fdt) { 38 28 if (fdt_check_header(orig_fdt)) {
-6
drivers/gpio/gpio-mcp23s08.c
··· 900 900 if (spi_present_mask & (1 << addr)) 901 901 chips++; 902 902 } 903 - if (!chips) 904 - return -ENODEV; 905 903 } else { 906 904 type = spi_get_device_id(spi)->driver_data; 907 905 pdata = dev_get_platdata(&spi->dev); ··· 938 940 if (!(spi_present_mask & (1 << addr))) 939 941 continue; 940 942 chips--; 941 - if (chips < 0) { 942 - dev_err(&spi->dev, "FATAL: invalid negative chip id\n"); 943 - goto fail; 944 - } 945 943 data->mcp[addr] = &data->chip[chips]; 946 944 status = mcp23s08_probe_one(data->mcp[addr], &spi->dev, spi, 947 945 0x40 | (addr << 1), type, base,
+3 -2
drivers/gpu/drm/i915/i915_dma.c
··· 1464 1464 #else 1465 1465 static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) 1466 1466 { 1467 - int ret; 1467 + int ret = 0; 1468 1468 1469 1469 DRM_INFO("Replacing VGA console driver\n"); 1470 1470 1471 1471 console_lock(); 1472 - ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); 1472 + if (con_is_bound(&vga_con)) 1473 + ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); 1473 1474 if (ret == 0) { 1474 1475 ret = do_unregister_con_driver(&vga_con); 1475 1476
+1
drivers/gpu/drm/i915/i915_drv.h
··· 656 656 #define QUIRK_PIPEA_FORCE (1<<0) 657 657 #define QUIRK_LVDS_SSC_DISABLE (1<<1) 658 658 #define QUIRK_INVERT_BRIGHTNESS (1<<2) 659 + #define QUIRK_BACKLIGHT_PRESENT (1<<3) 659 660 660 661 struct intel_fbdev; 661 662 struct intel_fbc_work;
+44
drivers/gpu/drm/i915/i915_gem_stolen.c
··· 74 74 if (base == 0) 75 75 return 0; 76 76 77 + /* make sure we don't clobber the GTT if it's within stolen memory */ 78 + if (INTEL_INFO(dev)->gen <= 4 && !IS_G33(dev) && !IS_G4X(dev)) { 79 + struct { 80 + u32 start, end; 81 + } stolen[2] = { 82 + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, 83 + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, 84 + }; 85 + u64 gtt_start, gtt_end; 86 + 87 + gtt_start = I915_READ(PGTBL_CTL); 88 + if (IS_GEN4(dev)) 89 + gtt_start = (gtt_start & PGTBL_ADDRESS_LO_MASK) | 90 + (gtt_start & PGTBL_ADDRESS_HI_MASK) << 28; 91 + else 92 + gtt_start &= PGTBL_ADDRESS_LO_MASK; 93 + gtt_end = gtt_start + gtt_total_entries(dev_priv->gtt) * 4; 94 + 95 + if (gtt_start >= stolen[0].start && gtt_start < stolen[0].end) 96 + stolen[0].end = gtt_start; 97 + if (gtt_end > stolen[1].start && gtt_end <= stolen[1].end) 98 + stolen[1].start = gtt_end; 99 + 100 + /* pick the larger of the two chunks */ 101 + if (stolen[0].end - stolen[0].start > 102 + stolen[1].end - stolen[1].start) { 103 + base = stolen[0].start; 104 + dev_priv->gtt.stolen_size = stolen[0].end - stolen[0].start; 105 + } else { 106 + base = stolen[1].start; 107 + dev_priv->gtt.stolen_size = stolen[1].end - stolen[1].start; 108 + } 109 + 110 + if (stolen[0].start != stolen[1].start || 111 + stolen[0].end != stolen[1].end) { 112 + DRM_DEBUG_KMS("GTT within stolen memory at 0x%llx-0x%llx\n", 113 + (unsigned long long) gtt_start, 114 + (unsigned long long) gtt_end - 1); 115 + DRM_DEBUG_KMS("Stolen memory adjusted to 0x%x-0x%x\n", 116 + base, base + (u32) dev_priv->gtt.stolen_size - 1); 117 + } 118 + } 119 + 120 + 77 121 /* Verify that nothing else uses this physical address. Stolen 78 122 * memory should be reserved by the BIOS and hidden from the 79 123 * kernel. So if the region is already marked as busy, something
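The stolen-memory patch above trims the stolen range so it never overlaps a GTT placed inside it: it computes the leftover chunk below the GTT and the one above, then keeps the larger. A standalone sketch of that selection logic, with the ranges as plain start/end pairs:

```c
#include <assert.h>
#include <stdint.h>

struct range { uint64_t start, end; };

/* Given a stolen range and a GTT range, return the larger of the two
 * chunks left over after cutting the GTT out (mirrors the patch). */
static struct range trim_around(struct range stolen, struct range gtt)
{
    struct range lo = stolen, hi = stolen;

    if (gtt.start >= lo.start && gtt.start < lo.end)
        lo.end = gtt.start;                 /* chunk below the GTT */
    if (gtt.end > hi.start && gtt.end <= hi.end)
        hi.start = gtt.end;                 /* chunk above the GTT */

    return (lo.end - lo.start > hi.end - hi.start) ? lo : hi;
}
```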
+3
drivers/gpu/drm/i915/i915_reg.h
··· 942 942 /* 943 943 * Instruction and interrupt control regs 944 944 */ 945 + #define PGTBL_CTL 0x02020 946 + #define PGTBL_ADDRESS_LO_MASK 0xfffff000 /* bits [31:12] */ 947 + #define PGTBL_ADDRESS_HI_MASK 0x000000f0 /* bits [35:32] (gen4) */ 945 948 #define PGTBL_ER 0x02024 946 949 #define RENDER_RING_BASE 0x02000 947 950 #define BSD_RING_BASE 0x04000
+18
drivers/gpu/drm/i915/intel_display.c
··· 11591 11591 DRM_INFO("applying inverted panel brightness quirk\n"); 11592 11592 } 11593 11593 11594 + /* Some VBT's incorrectly indicate no backlight is present */ 11595 + static void quirk_backlight_present(struct drm_device *dev) 11596 + { 11597 + struct drm_i915_private *dev_priv = dev->dev_private; 11598 + dev_priv->quirks |= QUIRK_BACKLIGHT_PRESENT; 11599 + DRM_INFO("applying backlight present quirk\n"); 11600 + } 11601 + 11594 11602 struct intel_quirk { 11595 11603 int device; 11596 11604 int subsystem_vendor; ··· 11667 11659 11668 11660 /* Acer Aspire 5336 */ 11669 11661 { 0x2a42, 0x1025, 0x048a, quirk_invert_brightness }, 11662 + 11663 + /* Acer C720 and C720P Chromebooks (Celeron 2955U) have backlights */ 11664 + { 0x0a06, 0x1025, 0x0a11, quirk_backlight_present }, 11665 + 11666 + /* Toshiba CB35 Chromebook (Celeron 2955U) */ 11667 + { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present }, 11668 + 11669 + /* HP Chromebook 14 (Celeron 2955U) */ 11670 + { 0x0a06, 0x103c, 0x21ed, quirk_backlight_present }, 11670 11671 }; 11671 11672 11672 11673 static void intel_init_quirks(struct drm_device *dev) ··· 11914 11897 * ... */ 11915 11898 plane = crtc->plane; 11916 11899 crtc->plane = !plane; 11900 + crtc->primary_enabled = true; 11917 11901 dev_priv->display.crtc_disable(&crtc->base); 11918 11902 crtc->plane = plane; 11919 11903
+44 -2
drivers/gpu/drm/i915/intel_dp.c
··· 28 28 #include <linux/i2c.h> 29 29 #include <linux/slab.h> 30 30 #include <linux/export.h> 31 + #include <linux/notifier.h> 32 + #include <linux/reboot.h> 31 33 #include <drm/drmP.h> 32 34 #include <drm/drm_crtc.h> 33 35 #include <drm/drm_crtc_helper.h> ··· 336 334 return PCH_PP_STATUS; 337 335 else 338 336 return VLV_PIPE_PP_STATUS(vlv_power_sequencer_pipe(intel_dp)); 337 + } 338 + 339 + /* Reboot notifier handler to shutdown panel power to guarantee T12 timing 340 + This function only applicable when panel PM state is not to be tracked */ 341 + static int edp_notify_handler(struct notifier_block *this, unsigned long code, 342 + void *unused) 343 + { 344 + struct intel_dp *intel_dp = container_of(this, typeof(* intel_dp), 345 + edp_notifier); 346 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 347 + struct drm_i915_private *dev_priv = dev->dev_private; 348 + u32 pp_div; 349 + u32 pp_ctrl_reg, pp_div_reg; 350 + enum pipe pipe = vlv_power_sequencer_pipe(intel_dp); 351 + 352 + if (!is_edp(intel_dp) || code != SYS_RESTART) 353 + return 0; 354 + 355 + if (IS_VALLEYVIEW(dev)) { 356 + pp_ctrl_reg = VLV_PIPE_PP_CONTROL(pipe); 357 + pp_div_reg = VLV_PIPE_PP_DIVISOR(pipe); 358 + pp_div = I915_READ(pp_div_reg); 359 + pp_div &= PP_REFERENCE_DIVIDER_MASK; 360 + 361 + /* 0x1F write to PP_DIV_REG sets max cycle delay */ 362 + I915_WRITE(pp_div_reg, pp_div | 0x1F); 363 + I915_WRITE(pp_ctrl_reg, PANEL_UNLOCK_REGS | PANEL_POWER_OFF); 364 + msleep(intel_dp->panel_power_cycle_delay); 365 + } 366 + 367 + return 0; 339 368 } 340 369 341 370 static bool edp_have_panel_power(struct intel_dp *intel_dp) ··· 906 873 mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock, 907 874 bpp); 908 875 909 - for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) { 910 - for (clock = min_clock; clock <= max_clock; clock++) { 876 + for (clock = min_clock; clock <= max_clock; clock++) { 877 + for (lane_count = min_lane_count; lane_count <= max_lane_count; 
lane_count <<= 1) { 911 878 link_clock = drm_dp_bw_code_to_link_rate(bws[clock]); 912 879 link_avail = intel_dp_max_data_rate(link_clock, 913 880 lane_count); ··· 3740 3707 drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 3741 3708 edp_panel_vdd_off_sync(intel_dp); 3742 3709 drm_modeset_unlock(&dev->mode_config.connection_mutex); 3710 + if (intel_dp->edp_notifier.notifier_call) { 3711 + unregister_reboot_notifier(&intel_dp->edp_notifier); 3712 + intel_dp->edp_notifier.notifier_call = NULL; 3713 + } 3743 3714 } 3744 3715 kfree(intel_dig_port); 3745 3716 } ··· 4220 4183 fixed_mode->type |= DRM_MODE_TYPE_PREFERRED; 4221 4184 } 4222 4185 mutex_unlock(&dev->mode_config.mutex); 4186 + 4187 + if (IS_VALLEYVIEW(dev)) { 4188 + intel_dp->edp_notifier.notifier_call = edp_notify_handler; 4189 + register_reboot_notifier(&intel_dp->edp_notifier); 4190 + } 4223 4191 4224 4192 intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); 4225 4193 intel_panel_setup_backlight(connector);
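Swapping the loop nesting in `intel_dp_compute_config` changes which link configuration wins: with the clock as the outer loop, the lowest link clock that meets the bandwidth is preferred even if it needs more lanes, where the old order preferred the fewest lanes at any clock. A sketch contrasting the two orders (rates and lane counts illustrative):

```c
#include <assert.h>

static const int rates[2] = { 162000, 270000 };

/* Post-patch order: clock outer, lanes inner. */
static int pick_clock_first(int need, int *lanes)
{
    for (int c = 0; c < 2; c++)
        for (int l = 1; l <= 4; l <<= 1)
            if (rates[c] * l >= need) { *lanes = l; return rates[c]; }
    return 0;
}

/* Pre-patch order for comparison: lanes outer, clock inner. */
static int pick_lanes_first(int need, int *lanes)
{
    for (int l = 1; l <= 4; l <<= 1)
        for (int c = 0; c < 2; c++)
            if (rates[c] * l >= need) { *lanes = l; return rates[c]; }
    return 0;
}
```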
+2
drivers/gpu/drm/i915/intel_drv.h
··· 538 538 unsigned long last_power_on; 539 539 unsigned long last_backlight_off; 540 540 bool psr_setup_done; 541 + struct notifier_block edp_notifier; 542 + 541 543 bool use_tps3; 542 544 struct intel_connector *attached_connector; 543 545
+15 -14
drivers/gpu/drm/i915/intel_dsi.c
··· 117 117 /* bandgap reset is needed after everytime we do power gate */ 118 118 band_gap_reset(dev_priv); 119 119 120 + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 121 + usleep_range(2500, 3000); 122 + 120 123 val = I915_READ(MIPI_PORT_CTRL(pipe)); 121 124 I915_WRITE(MIPI_PORT_CTRL(pipe), val | LP_OUTPUT_HOLD); 122 125 usleep_range(1000, 1500); 123 - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); 124 - usleep_range(2000, 2500); 126 + 127 + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); 128 + usleep_range(2500, 3000); 129 + 125 130 I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); 126 - usleep_range(2000, 2500); 127 - I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); 128 - usleep_range(2000, 2500); 129 - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); 130 - usleep_range(2000, 2500); 131 + usleep_range(2500, 3000); 131 132 } 132 133 133 134 static void intel_dsi_enable(struct intel_encoder *encoder) ··· 272 271 273 272 DRM_DEBUG_KMS("\n"); 274 273 275 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 274 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); 276 275 usleep_range(2000, 2500); 277 276 278 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); 277 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); 279 278 usleep_range(2000, 2500); 280 279 281 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 280 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); 282 281 usleep_range(2000, 2500); 283 - 284 - val = I915_READ(MIPI_PORT_CTRL(pipe)); 285 - I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); 286 - usleep_range(1000, 1500); 287 282 288 283 if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & AFE_LATCHOUT) 289 284 == 0x00000), 30)) 290 285 DRM_ERROR("DSI LP not going Low\n"); 286 + 287 + val = I915_READ(MIPI_PORT_CTRL(pipe)); 288 + I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); 289 + usleep_range(1000, 1500); 291 290 292 291 
I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); 293 292 usleep_range(2000, 2500);
-6
drivers/gpu/drm/i915/intel_dsi_cmd.c
··· 404 404 else 405 405 cmd |= DPI_LP_MODE; 406 406 407 - /* DPI virtual channel?! */ 408 - 409 - mask = DPI_FIFO_EMPTY; 410 - if (wait_for((I915_READ(MIPI_GEN_FIFO_STAT(pipe)) & mask) == mask, 50)) 411 - DRM_ERROR("Timeout waiting for DPI FIFO empty.\n"); 412 - 413 407 /* clear bit */ 414 408 I915_WRITE(MIPI_INTR_STAT(pipe), SPL_PKT_SENT_INTERRUPT); 415 409
+7
drivers/gpu/drm/i915/intel_lvds.c
··· 111 111 112 112 pipe_config->adjusted_mode.flags |= flags; 113 113 114 + /* gen2/3 store dither state in pfit control, needs to match */ 115 + if (INTEL_INFO(dev)->gen < 4) { 116 + tmp = I915_READ(PFIT_CONTROL); 117 + 118 + pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE; 119 + } 120 + 114 121 dotclock = pipe_config->port_clock; 115 122 116 123 if (HAS_PCH_SPLIT(dev_priv->dev))
+9
drivers/gpu/drm/i915/intel_opregion.c
··· 403 403 404 404 DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp); 405 405 406 + /* 407 + * If the acpi_video interface is not supposed to be used, don't 408 + * bother processing backlight level change requests from firmware. 409 + */ 410 + if (!acpi_video_verify_backlight_support()) { 411 + DRM_DEBUG_KMS("opregion backlight request ignored\n"); 412 + return 0; 413 + } 414 + 406 415 if (!(bclp & ASLE_BCLP_VALID)) 407 416 return ASLC_BACKLIGHT_FAILED; 408 417
+10 -6
drivers/gpu/drm/i915/intel_panel.c
··· 361 361 pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) | 362 362 PFIT_FILTER_FUZZY); 363 363 364 - /* Make sure pre-965 set dither correctly for 18bpp panels. */ 365 - if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) 366 - pfit_control |= PANEL_8TO6_DITHER_ENABLE; 367 - 368 364 out: 369 365 if ((pfit_control & PFIT_ENABLE) == 0) { 370 366 pfit_control = 0; 371 367 pfit_pgm_ratios = 0; 372 368 } 369 + 370 + /* Make sure pre-965 set dither correctly for 18bpp panels. */ 371 + if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) 372 + pfit_control |= PANEL_8TO6_DITHER_ENABLE; 373 373 374 374 pipe_config->gmch_pfit.control = pfit_control; 375 375 pipe_config->gmch_pfit.pgm_ratios = pfit_pgm_ratios; ··· 1118 1118 int ret; 1119 1119 1120 1120 if (!dev_priv->vbt.backlight.present) { 1121 - DRM_DEBUG_KMS("native backlight control not available per VBT\n"); 1122 - return 0; 1121 + if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) { 1122 + DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n"); 1123 + } else { 1124 + DRM_DEBUG_KMS("no backlight present per VBT\n"); 1125 + return 0; 1126 + } 1123 1127 } 1124 1128 1125 1129 /* set level and max in panel struct */
+3 -3
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
··· 1516 1516 } 1517 1517 1518 1518 switch ((ctrl & 0x000f0000) >> 16) { 1519 - case 6: datarate = pclk * 30 / 8; break; 1520 - case 5: datarate = pclk * 24 / 8; break; 1519 + case 6: datarate = pclk * 30; break; 1520 + case 5: datarate = pclk * 24; break; 1521 1521 case 2: 1522 1522 default: 1523 - datarate = pclk * 18 / 8; 1523 + datarate = pclk * 18; 1524 1524 break; 1525 1525 } 1526 1526
+3 -3
drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c
··· 1159 1159 if (outp->info.type == DCB_OUTPUT_DP) { 1160 1160 u32 sync = nv_rd32(priv, 0x660404 + (head * 0x300)); 1161 1161 switch ((sync & 0x000003c0) >> 6) { 1162 - case 6: pclk = pclk * 30 / 8; break; 1163 - case 5: pclk = pclk * 24 / 8; break; 1162 + case 6: pclk = pclk * 30; break; 1163 + case 5: pclk = pclk * 24; break; 1164 1164 case 2: 1165 1165 default: 1166 - pclk = pclk * 18 / 8; 1166 + pclk = pclk * 18; 1167 1167 break; 1168 1168 } 1169 1169
+5 -3
drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c
··· 34 34 struct nvkm_output_dp *outp = (void *)base; 35 35 bool retrain = true; 36 36 u8 link[2], stat[3]; 37 - u32 rate; 37 + u32 linkrate; 38 38 int ret, i; 39 39 40 40 /* check that the link is trained at a high enough rate */ ··· 44 44 goto done; 45 45 } 46 46 47 - rate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); 48 - if (rate < ((datarate / 8) * 10)) { 47 + linkrate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); 48 + linkrate = (linkrate * 8) / 10; /* 8B/10B coding overhead */ 49 + datarate = (datarate + 9) / 10; /* -> decakilobits */ 50 + if (linkrate < datarate) { 49 51 DBG("link not trained at sufficient rate\n"); 50 52 goto done; 51 53 }
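The nouveau changes above are one logical fix: the per-head data rate is now kept in bits (the `/ 8` is dropped in nv50.c and nvd0.c), and the trained-link check in outpdp.c derates the raw link rate for 8b/10b coding, where only 8 of every 10 line bits carry payload, before comparing in decakilobit units. A sketch of the corrected comparison:

```c
#include <assert.h>
#include <stdint.h>

/* Trained-rate check after the patch: raw link rate (DPCD bw code in
 * 27000-unit steps times lane count), derated for 8b/10b, against the
 * data rate in kbit/s rounded up to decakilobits. */
static int link_rate_ok(unsigned bw_code, unsigned lanes, uint64_t data_kbps)
{
    uint64_t linkrate = (uint64_t)bw_code * 27000 * lanes;

    linkrate = linkrate * 8 / 10;         /* 8b/10b coding overhead */
    data_kbps = (data_kbps + 9) / 10;     /* -> decakilobits */
    return linkrate >= data_kbps;
}
```

For example, 1920x1080@60 at 24 bpp needs roughly 148500 kHz * 24 bits of payload, which four HBR lanes (bw code 0x0a) carry comfortably while a single lane cannot.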
+1
drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c
··· 87 87 struct nvkm_output_dp *outpdp = (void *)outp; 88 88 switch (data) { 89 89 case NV94_DISP_SOR_DP_PWR_STATE_OFF: 90 + nouveau_event_put(outpdp->irq); 90 91 ((struct nvkm_output_dp_impl *)nv_oclass(outp)) 91 92 ->lnk_pwr(outpdp, 0); 92 93 atomic_set(&outpdp->lt.done, 0);
+2 -2
drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h
··· 26 26 }; 27 27 } 28 28 29 - static inline struct ramfuc_reg 29 + static noinline struct ramfuc_reg 30 30 ramfuc_reg(u32 addr) 31 31 { 32 32 return ramfuc_reg2(addr, addr); ··· 107 107 108 108 #define ram_init(s,p) ramfuc_init(&(s)->base, (p)) 109 109 #define ram_exec(s,e) ramfuc_exec(&(s)->base, (e)) 110 - #define ram_have(s,r) ((s)->r_##r.addr != 0x000000) 110 + #define ram_have(s,r) ((s)->r_##r.addr[0] != 0x000000) 111 111 #define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r) 112 112 #define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d)) 113 113 #define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r)
+1
drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c
··· 200 200 /* (re)program mempll, if required */ 201 201 if (ram->mode == 2) { 202 202 ram_mask(fuc, 0x1373f4, 0x00010000, 0x00000000); 203 + ram_mask(fuc, 0x132000, 0x80000000, 0x80000000); 203 204 ram_mask(fuc, 0x132000, 0x00000001, 0x00000000); 204 205 ram_mask(fuc, 0x132004, 0x103fffff, mcoef); 205 206 ram_mask(fuc, 0x132000, 0x00000001, 0x00000001);
+3 -3
drivers/gpu/drm/nouveau/core/subdev/therm/temp.c
··· 192 192 nouveau_therm_threshold_hyst_polling(therm, &sensor->thrs_shutdown, 193 193 NOUVEAU_THERM_THRS_SHUTDOWN); 194 194 195 + spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags); 196 + 195 197 /* schedule the next poll in one second */ 196 198 if (therm->temp_get(therm) >= 0 && list_empty(&alarm->head)) 197 - ptimer->alarm(ptimer, 1000 * 1000 * 1000, alarm); 198 - 199 - spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags); 199 + ptimer->alarm(ptimer, 1000000000ULL, alarm); 200 200 } 201 201 202 202 void
+9 -8
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 652 652 ret = nouveau_do_resume(drm_dev); 653 653 if (ret) 654 654 return ret; 655 - if (drm_dev->mode_config.num_crtc) 656 - nouveau_fbcon_set_suspend(drm_dev, 0); 657 655 658 - nouveau_fbcon_zfill_all(drm_dev); 659 - if (drm_dev->mode_config.num_crtc) 656 + if (drm_dev->mode_config.num_crtc) { 660 657 nouveau_display_resume(drm_dev); 658 + nouveau_fbcon_set_suspend(drm_dev, 0); 659 + } 660 + 661 661 return 0; 662 662 } 663 663 ··· 683 683 ret = nouveau_do_resume(drm_dev); 684 684 if (ret) 685 685 return ret; 686 - if (drm_dev->mode_config.num_crtc) 687 - nouveau_fbcon_set_suspend(drm_dev, 0); 688 - nouveau_fbcon_zfill_all(drm_dev); 689 - if (drm_dev->mode_config.num_crtc) 686 + 687 + if (drm_dev->mode_config.num_crtc) { 690 688 nouveau_display_resume(drm_dev); 689 + nouveau_fbcon_set_suspend(drm_dev, 0); 690 + } 691 + 691 692 return 0; 692 693 } 693 694
+3 -10
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 531 531 if (state == 1) 532 532 nouveau_fbcon_save_disable_accel(dev); 533 533 fb_set_suspend(drm->fbcon->helper.fbdev, state); 534 - if (state == 0) 534 + if (state == 0) { 535 535 nouveau_fbcon_restore_accel(dev); 536 + nouveau_fbcon_zfill(dev, drm->fbcon); 537 + } 536 538 console_unlock(); 537 - } 538 - } 539 - 540 - void 541 - nouveau_fbcon_zfill_all(struct drm_device *dev) 542 - { 543 - struct nouveau_drm *drm = nouveau_drm(dev); 544 - if (drm->fbcon) { 545 - nouveau_fbcon_zfill(dev, drm->fbcon); 546 539 } 547 540 }
-1
drivers/gpu/drm/nouveau/nouveau_fbcon.h
··· 61 61 int nouveau_fbcon_init(struct drm_device *dev); 62 62 void nouveau_fbcon_fini(struct drm_device *dev); 63 63 void nouveau_fbcon_set_suspend(struct drm_device *dev, int state); 64 - void nouveau_fbcon_zfill_all(struct drm_device *dev); 65 64 void nouveau_fbcon_save_disable_accel(struct drm_device *dev); 66 65 void nouveau_fbcon_restore_accel(struct drm_device *dev); 67 66
+2 -1
drivers/gpu/drm/nouveau/nv50_display.c
··· 1741 1741 } 1742 1742 } 1743 1743 1744 - mthd = (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; 1744 + mthd = (ffs(nv_encoder->dcb->heads) - 1) << 3; 1745 + mthd |= (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; 1745 1746 mthd |= nv_encoder->or; 1746 1747 1747 1748 if (nv_encoder->dcb->type == DCB_OUTPUT_DP) {
+3
drivers/gpu/drm/qxl/qxl_irq.c
··· 33 33 34 34 pending = xchg(&qdev->ram_header->int_pending, 0); 35 35 36 + if (!pending) 37 + return IRQ_NONE; 38 + 36 39 atomic_inc(&qdev->irq_received); 37 40 38 41 if (pending & QXL_INTERRUPT_DISPLAY) {
+4 -4
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1414 1414 tmp &= ~EVERGREEN_GRPH_SURFACE_UPDATE_H_RETRACE_EN; 1415 1415 WREG32(EVERGREEN_GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp); 1416 1416 1417 - /* set pageflip to happen anywhere in vblank interval */ 1418 - WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0); 1417 + /* set pageflip to happen only at start of vblank interval (front porch) */ 1418 + WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3); 1419 1419 1420 1420 if (!atomic && fb && fb != crtc->primary->fb) { 1421 1421 radeon_fb = to_radeon_framebuffer(fb); ··· 1614 1614 tmp &= ~AVIVO_D1GRPH_SURFACE_UPDATE_H_RETRACE_EN; 1615 1615 WREG32(AVIVO_D1GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp); 1616 1616 1617 - /* set pageflip to happen anywhere in vblank interval */ 1618 - WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0); 1617 + /* set pageflip to happen only at start of vblank interval (front porch) */ 1618 + WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3); 1619 1619 1620 1620 if (!atomic && fb && fb != crtc->primary->fb) { 1621 1621 radeon_fb = to_radeon_framebuffer(fb);
+1 -1
drivers/gpu/drm/radeon/atombios_dp.c
··· 127 127 /* flags not zero */ 128 128 if (args.v1.ucReplyStatus == 2) { 129 129 DRM_DEBUG_KMS("dp_aux_ch flags not zero\n"); 130 - r = -EBUSY; 130 + r = -EIO; 131 131 goto done; 132 132 } 133 133
+7 -3
drivers/gpu/drm/radeon/atombios_encoders.c
··· 183 183 struct backlight_properties props; 184 184 struct radeon_backlight_privdata *pdata; 185 185 struct radeon_encoder_atom_dig *dig; 186 - u8 backlight_level; 187 186 char bl_name[16]; 188 187 189 188 /* Mac laptops with multiple GPUs use the gmux driver for backlight ··· 221 222 222 223 pdata->encoder = radeon_encoder; 223 224 224 - backlight_level = radeon_atom_get_backlight_level_from_reg(rdev); 225 - 226 225 dig = radeon_encoder->enc_priv; 227 226 dig->bl_dev = bd; 228 227 229 228 bd->props.brightness = radeon_atom_backlight_get_brightness(bd); 229 + /* Set a reasonable default here if the level is 0 otherwise 230 + * fbdev will attempt to turn the backlight on after console 231 + * unblanking and it will try and restore 0 which turns the backlight 232 + * off again. 233 + */ 234 + if (bd->props.brightness == 0) 235 + bd->props.brightness = RADEON_MAX_BL_LEVEL; 230 236 bd->props.power = FB_BLANK_UNBLANK; 231 237 backlight_update_status(bd); 232 238
+1 -1
drivers/gpu/drm/radeon/ci_dpm.c
··· 1179 1179 tmp &= ~GLOBAL_PWRMGT_EN; 1180 1180 WREG32_SMC(GENERAL_PWRMGT, tmp); 1181 1181 1182 - tmp = RREG32(SCLK_PWRMGT_CNTL); 1182 + tmp = RREG32_SMC(SCLK_PWRMGT_CNTL); 1183 1183 tmp &= ~DYNAMIC_PM_EN; 1184 1184 WREG32_SMC(SCLK_PWRMGT_CNTL, tmp); 1185 1185
+4 -2
drivers/gpu/drm/radeon/cik.c
··· 7676 7676 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 7677 7677 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 7678 7678 mc_client = RREG32(VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT); 7679 + /* reset addr and status */ 7680 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 7681 + if (addr == 0x0 && status == 0x0) 7682 + break; 7679 7683 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 7680 7684 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 7681 7685 addr); 7682 7686 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 7683 7687 status); 7684 7688 cik_vm_decode_fault(rdev, status, addr, mc_client); 7685 - /* reset addr and status */ 7686 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 7687 7689 break; 7688 7690 case 167: /* VCE */ 7689 7691 DRM_DEBUG("IH: VCE int: 0x%08x\n", src_data);
+11 -8
drivers/gpu/drm/radeon/evergreen.c
··· 189 189 0x8c1c, 0xffffffff, 0x00001010, 190 190 0x28350, 0xffffffff, 0x00000000, 191 191 0xa008, 0xffffffff, 0x00010000, 192 - 0x5cc, 0xffffffff, 0x00000001, 192 + 0x5c4, 0xffffffff, 0x00000001, 193 193 0x9508, 0xffffffff, 0x00000002, 194 194 0x913c, 0x0000000f, 0x0000000a 195 195 }; ··· 476 476 0x8c1c, 0xffffffff, 0x00001010, 477 477 0x28350, 0xffffffff, 0x00000000, 478 478 0xa008, 0xffffffff, 0x00010000, 479 - 0x5cc, 0xffffffff, 0x00000001, 479 + 0x5c4, 0xffffffff, 0x00000001, 480 480 0x9508, 0xffffffff, 0x00000002 481 481 }; ··· 635 635 static const u32 supersumo_golden_registers[] = 636 636 { 637 637 0x5eb4, 0xffffffff, 0x00000002, 638 - 0x5cc, 0xffffffff, 0x00000001, 638 + 0x5c4, 0xffffffff, 0x00000001, 639 639 0x7030, 0xffffffff, 0x00000011, 640 640 0x7c30, 0xffffffff, 0x00000011, 641 641 0x6104, 0x01000300, 0x00000000, ··· 719 719 static const u32 wrestler_golden_registers[] = 720 720 { 721 721 0x5eb4, 0xffffffff, 0x00000002, 722 - 0x5cc, 0xffffffff, 0x00000001, 722 + 0x5c4, 0xffffffff, 0x00000001, 723 723 0x7030, 0xffffffff, 0x00000011, 724 724 0x7c30, 0xffffffff, 0x00000011, 725 725 0x6104, 0x01000300, 0x00000000, ··· 2642 2642 for (i = 0; i < rdev->num_crtc; i++) { 2643 2643 if (save->crtc_enabled[i]) { 2644 2644 tmp = RREG32(EVERGREEN_MASTER_UPDATE_MODE + crtc_offsets[i]); 2645 - if ((tmp & 0x3) != 0) { 2646 - tmp &= ~0x3; 2645 + if ((tmp & 0x7) != 3) { 2646 + tmp &= ~0x7; 2647 + tmp |= 0x3; 2647 2648 WREG32(EVERGREEN_MASTER_UPDATE_MODE + crtc_offsets[i], tmp); 2648 2649 } 2649 2650 tmp = RREG32(EVERGREEN_GRPH_UPDATE + crtc_offsets[i]); ··· 5067 5066 case 147: 5068 5067 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 5069 5068 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 5069 + /* reset addr and status */ 5070 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 5071 + if (addr == 0x0 && status == 0x0) 5072 + break; 5070 5073 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 5071 5074 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 5072 5075 addr); 5073 5076 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 5074 5077 status); 5075 5078 cayman_vm_decode_fault(rdev, status, addr); 5076 - /* reset addr and status */ 5077 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 5078 5079 break; 5079 5080 case 176: /* CP_INT in ring buffer */ 5080 5081 case 177: /* CP_INT in IB1 */
-1
drivers/gpu/drm/radeon/evergreen_reg.h
··· 239 239 # define EVERGREEN_CRTC_V_BLANK (1 << 0) 240 240 #define EVERGREEN_CRTC_STATUS_POSITION 0x6e90 241 241 #define EVERGREEN_CRTC_STATUS_HV_COUNT 0x6ea0 242 - #define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8 243 242 #define EVERGREEN_CRTC_UPDATE_LOCK 0x6ed4 244 243 #define EVERGREEN_MASTER_UPDATE_LOCK 0x6ef4 245 244 #define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8
+1 -2
drivers/gpu/drm/radeon/radeon.h
··· 684 684 struct work_struct unpin_work; 685 685 struct radeon_device *rdev; 686 686 int crtc_id; 687 - struct drm_framebuffer *fb; 687 + uint64_t base; 688 688 struct drm_pending_vblank_event *event; 689 689 struct radeon_bo *old_rbo; 690 - struct radeon_bo *new_rbo; 691 690 struct radeon_fence *fence; 692 691 }; 693 692
+105 -101
drivers/gpu/drm/radeon/radeon_display.c
··· 366 366 spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); 367 367 368 368 drm_vblank_put(rdev->ddev, radeon_crtc->crtc_id); 369 - radeon_fence_unref(&work->fence); 370 369 radeon_irq_kms_pflip_irq_put(rdev, work->crtc_id); 371 370 queue_work(radeon_crtc->flip_queue, &work->unpin_work); 372 371 } ··· 385 386 struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[work->crtc_id]; 386 387 387 388 struct drm_crtc *crtc = &radeon_crtc->base; 388 - struct drm_framebuffer *fb = work->fb; 389 - 390 - uint32_t tiling_flags, pitch_pixels; 391 - uint64_t base; 392 - 393 389 unsigned long flags; 394 390 int r; 395 391 396 392 down_read(&rdev->exclusive_lock); 397 - while (work->fence) { 393 + if (work->fence) { 398 394 r = radeon_fence_wait(work->fence, false); 399 395 if (r == -EDEADLK) { 400 396 up_read(&rdev->exclusive_lock); 401 397 r = radeon_gpu_reset(rdev); 402 398 down_read(&rdev->exclusive_lock); 403 399 } 400 + if (r) 401 + DRM_ERROR("failed to wait on page flip fence (%d)!\n", r); 404 402 405 - if (r) { 406 - DRM_ERROR("failed to wait on page flip fence (%d)!\n", 407 - r); 408 - goto cleanup; 409 - } else 410 - radeon_fence_unref(&work->fence); 403 + /* We continue with the page flip even if we failed to wait on 404 + * the fence, otherwise the DRM core and userspace will be 405 + * confused about which BO the CRTC is scanning out 406 + */ 407 + 408 + radeon_fence_unref(&work->fence); 411 409 } 412 410 413 - /* pin the new buffer */ 414 - DRM_DEBUG_DRIVER("flip-ioctl() cur_fbo = %p, cur_bbo = %p\n", 415 - work->old_rbo, work->new_rbo); 411 + /* We borrow the event spin lock for protecting flip_status */ 412 + spin_lock_irqsave(&crtc->dev->event_lock, flags); 416 413 417 - r = radeon_bo_reserve(work->new_rbo, false); 414 + /* set the proper interrupt */ 415 + radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); 416 + 417 + /* do the flip (mmio) */ 418 + radeon_page_flip(rdev, radeon_crtc->crtc_id, work->base); 419 + 420 + radeon_crtc->flip_status = RADEON_FLIP_SUBMITTED; 421 + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 422 + up_read(&rdev->exclusive_lock); 423 + } 424 + 425 + static int radeon_crtc_page_flip(struct drm_crtc *crtc, 426 + struct drm_framebuffer *fb, 427 + struct drm_pending_vblank_event *event, 428 + uint32_t page_flip_flags) 429 + { 430 + struct drm_device *dev = crtc->dev; 431 + struct radeon_device *rdev = dev->dev_private; 432 + struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 433 + struct radeon_framebuffer *old_radeon_fb; 434 + struct radeon_framebuffer *new_radeon_fb; 435 + struct drm_gem_object *obj; 436 + struct radeon_flip_work *work; 437 + struct radeon_bo *new_rbo; 438 + uint32_t tiling_flags, pitch_pixels; 439 + uint64_t base; 440 + unsigned long flags; 441 + int r; 442 + 443 + work = kzalloc(sizeof *work, GFP_KERNEL); 444 + if (work == NULL) 445 + return -ENOMEM; 446 + 447 + INIT_WORK(&work->flip_work, radeon_flip_work_func); 448 + INIT_WORK(&work->unpin_work, radeon_unpin_work_func); 449 + 450 + work->rdev = rdev; 451 + work->crtc_id = radeon_crtc->crtc_id; 452 + work->event = event; 453 + 454 + /* schedule unpin of the old buffer */ 455 + old_radeon_fb = to_radeon_framebuffer(crtc->primary->fb); 456 + obj = old_radeon_fb->obj; 457 + 458 + /* take a reference to the old object */ 459 + drm_gem_object_reference(obj); 460 + work->old_rbo = gem_to_radeon_bo(obj); 461 + 462 + new_radeon_fb = to_radeon_framebuffer(fb); 463 + obj = new_radeon_fb->obj; 464 + new_rbo = gem_to_radeon_bo(obj); 465 + 466 + spin_lock(&new_rbo->tbo.bdev->fence_lock); 467 + if (new_rbo->tbo.sync_obj) 468 + work->fence = radeon_fence_ref(new_rbo->tbo.sync_obj); 469 + spin_unlock(&new_rbo->tbo.bdev->fence_lock); 470 + 471 + /* pin the new buffer */ 472 + DRM_DEBUG_DRIVER("flip-ioctl() cur_rbo = %p, new_rbo = %p\n", 473 + work->old_rbo, new_rbo); 474 + 475 + r = radeon_bo_reserve(new_rbo, false); 418 476 if (unlikely(r != 0)) { 419 477 DRM_ERROR("failed to reserve new rbo buffer before flip\n"); 420 478 goto cleanup; 421 479 } 422 480 /* Only 27 bit offset for legacy CRTC */ 423 - r = radeon_bo_pin_restricted(work->new_rbo, RADEON_GEM_DOMAIN_VRAM, 481 + r = radeon_bo_pin_restricted(new_rbo, RADEON_GEM_DOMAIN_VRAM, 424 482 ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27, &base); 425 483 if (unlikely(r != 0)) { 426 - radeon_bo_unreserve(work->new_rbo); 484 + radeon_bo_unreserve(new_rbo); 427 485 r = -EINVAL; 428 486 DRM_ERROR("failed to pin new rbo buffer before flip\n"); 429 487 goto cleanup; 430 488 } 431 - radeon_bo_get_tiling_flags(work->new_rbo, &tiling_flags, NULL); 489 + radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL); 432 - radeon_bo_unreserve(work->new_rbo); 490 + radeon_bo_unreserve(new_rbo); 433 491 434 492 if (!ASIC_IS_AVIVO(rdev)) { 435 493 /* crtc offset is from display base addr not FB location */ ··· 523 467 } 524 468 base &= ~7; 525 469 } 470 + work->base = base; 526 471 527 472 r = drm_vblank_get(crtc->dev, radeon_crtc->crtc_id); 528 473 if (r) { ··· 534 477 /* We borrow the event spin lock for protecting flip_work */ 535 478 spin_lock_irqsave(&crtc->dev->event_lock, flags); 536 479 537 - /* set the proper interrupt */ 538 - radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); 539 - 540 - /* do the flip (mmio) */ 541 - radeon_page_flip(rdev, radeon_crtc->crtc_id, base); 542 - 543 - radeon_crtc->flip_status = RADEON_FLIP_SUBMITTED; 544 - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 545 - up_read(&rdev->exclusive_lock); 546 - 547 - return; 548 - 549 - pflip_cleanup: 550 - if (unlikely(radeon_bo_reserve(work->new_rbo, false) != 0)) { 551 - DRM_ERROR("failed to reserve new rbo in error path\n"); 552 - goto cleanup; 553 - } 554 - if (unlikely(radeon_bo_unpin(work->new_rbo) != 0)) { 555 - DRM_ERROR("failed to unpin new rbo in error path\n"); 556 - } 557 - radeon_bo_unreserve(work->new_rbo); 558 - 559 - cleanup: 560 - drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); 561 - radeon_fence_unref(&work->fence); 562 - kfree(work); 563 - up_read(&rdev->exclusive_lock); 564 - } 565 - 566 - static int radeon_crtc_page_flip(struct drm_crtc *crtc, 567 - struct drm_framebuffer *fb, 568 - struct drm_pending_vblank_event *event, 569 - uint32_t page_flip_flags) 570 - { 571 - struct drm_device *dev = crtc->dev; 572 - struct radeon_device *rdev = dev->dev_private; 573 - struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 574 - struct radeon_framebuffer *old_radeon_fb; 575 - struct radeon_framebuffer *new_radeon_fb; 576 - struct drm_gem_object *obj; 577 - struct radeon_flip_work *work; 578 - unsigned long flags; 579 - 580 - work = kzalloc(sizeof *work, GFP_KERNEL); 581 - if (work == NULL) 582 - return -ENOMEM; 583 - 584 - INIT_WORK(&work->flip_work, radeon_flip_work_func); 585 - INIT_WORK(&work->unpin_work, radeon_unpin_work_func); 586 - 587 - work->rdev = rdev; 588 - work->crtc_id = radeon_crtc->crtc_id; 589 - work->fb = fb; 590 - work->event = event; 591 - 592 - /* schedule unpin of the old buffer */ 593 - old_radeon_fb = to_radeon_framebuffer(crtc->primary->fb); 594 - obj = old_radeon_fb->obj; 595 - 596 - /* take a reference to the old object */ 597 - drm_gem_object_reference(obj); 598 - work->old_rbo = gem_to_radeon_bo(obj); 599 - 600 - new_radeon_fb = to_radeon_framebuffer(fb); 601 - obj = new_radeon_fb->obj; 602 - work->new_rbo = gem_to_radeon_bo(obj); 603 - 604 - spin_lock(&work->new_rbo->tbo.bdev->fence_lock); 605 - if (work->new_rbo->tbo.sync_obj) 606 - work->fence = radeon_fence_ref(work->new_rbo->tbo.sync_obj); 607 - spin_unlock(&work->new_rbo->tbo.bdev->fence_lock); 608 - 609 - /* We borrow the event spin lock for protecting flip_work */ 610 - spin_lock_irqsave(&crtc->dev->event_lock, flags); 611 - 612 480 if (radeon_crtc->flip_status != RADEON_FLIP_NONE) { 613 481 DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); 614 482 spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 615 - drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); 616 - radeon_fence_unref(&work->fence); 617 - kfree(work); 618 - return -EBUSY; 483 + r = -EBUSY; 484 + goto vblank_cleanup; 619 485 } 620 486 radeon_crtc->flip_status = RADEON_FLIP_PENDING; 621 487 radeon_crtc->flip_work = work; ··· 549 569 spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 550 570 551 571 queue_work(radeon_crtc->flip_queue, &work->flip_work); 552 - 553 572 return 0; 573 + 574 + vblank_cleanup: 575 + drm_vblank_put(crtc->dev, radeon_crtc->crtc_id); 576 + 577 + pflip_cleanup: 578 + if (unlikely(radeon_bo_reserve(new_rbo, false) != 0)) { 579 + DRM_ERROR("failed to reserve new rbo in error path\n"); 580 + goto cleanup; 581 + } 582 + if (unlikely(radeon_bo_unpin(new_rbo) != 0)) { 583 + DRM_ERROR("failed to unpin new rbo in error path\n"); 584 + } 585 + radeon_bo_unreserve(new_rbo); 586 + 587 + cleanup: 588 + drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); 589 + radeon_fence_unref(&work->fence); 590 + kfree(work); 591 + 592 + return r; 554 593 } 555 594 556 595 static int ··· 829 830 struct radeon_device *rdev = dev->dev_private; 830 831 int ret = 0; 831 832 833 + /* don't leak the edid if we already fetched it in detect() */ 834 + if (radeon_connector->edid) 835 + goto got_edid; 836 + 832 837 /* on hw with routers, select right port */ 833 838 if (radeon_connector->router.ddc_valid) 834 839 radeon_router_select_ddc_port(radeon_connector); ··· 871 868 radeon_connector->edid = radeon_bios_get_hardcoded_edid(rdev); 872 869 } 873 870 if (radeon_connector->edid) { 871 + got_edid: 874 872 drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); 875 873 ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); 876 874 drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid);
+3 -2
drivers/gpu/drm/radeon/rv515.c
··· 406 406 for (i = 0; i < rdev->num_crtc; i++) { 407 407 if (save->crtc_enabled[i]) { 408 408 tmp = RREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + crtc_offsets[i]); 409 - if ((tmp & 0x3) != 0) { 410 - tmp &= ~0x3; 409 + if ((tmp & 0x7) != 3) { 410 + tmp &= ~0x7; 411 + tmp |= 0x3; 411 412 WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + crtc_offsets[i], tmp); 412 413 } 413 414 tmp = RREG32(AVIVO_D1GRPH_UPDATE + crtc_offsets[i]);
-6
drivers/gpu/drm/radeon/rv770_dpm.c
··· 2329 2329 pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss, 2330 2330 ASIC_INTERNAL_MEMORY_SS, 0); 2331 2331 2332 - /* disable ss, causes hangs on some cayman boards */ 2333 - if (rdev->family == CHIP_CAYMAN) { 2334 - pi->sclk_ss = false; 2335 - pi->mclk_ss = false; 2336 - } 2337 - 2338 2332 if (pi->sclk_ss || pi->mclk_ss) 2339 2333 pi->dynamic_ss = true; 2340 2334 else
+4 -2
drivers/gpu/drm/radeon/si.c
··· 6376 6376 case 147: 6377 6377 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 6378 6378 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 6379 + /* reset addr and status */ 6380 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 6381 + if (addr == 0x0 && status == 0x0) 6382 + break; 6379 6383 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 6380 6384 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 6381 6385 addr); 6382 6386 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 6383 6387 status); 6384 6388 si_vm_decode_fault(rdev, status, addr); 6385 - /* reset addr and status */ 6386 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 6387 6389 break; 6388 6390 case 176: /* RINGID0 CP_INT */ 6389 6391 radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
+1 -1
drivers/hid/Kconfig
··· 810 810 811 811 config HID_SENSOR_HUB 812 812 tristate "HID Sensors framework support" 813 - depends on HID 813 + depends on HID && HAS_IOMEM 814 814 select MFD_CORE 815 815 default n 816 816 ---help---
+3
drivers/hid/hid-ids.h
··· 323 323 324 324 #define USB_VENDOR_ID_ETURBOTOUCH 0x22b9 325 325 #define USB_DEVICE_ID_ETURBOTOUCH 0x0006 326 + #define USB_DEVICE_ID_ETURBOTOUCH_2968 0x2968 326 327 327 328 #define USB_VENDOR_ID_EZKEY 0x0518 328 329 #define USB_DEVICE_ID_BTC_8193 0x0002 ··· 716 715 717 716 #define USB_VENDOR_ID_PENMOUNT 0x14e1 718 717 #define USB_DEVICE_ID_PENMOUNT_PCI 0x3500 718 + #define USB_DEVICE_ID_PENMOUNT_1610 0x1610 719 + #define USB_DEVICE_ID_PENMOUNT_1640 0x1640 719 720 720 721 #define USB_VENDOR_ID_PETALYNX 0x18b1 721 722 #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037
+2
drivers/hid/hid-rmi.c
··· 428 428 return 0; 429 429 } 430 430 431 + #ifdef CONFIG_PM 431 432 static int rmi_post_reset(struct hid_device *hdev) 432 433 { 433 434 return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); ··· 438 437 { 439 438 return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); 440 439 } 440 + #endif /* CONFIG_PM */ 441 441 442 442 #define RMI4_MAX_PAGE 0xff 443 443 #define RMI4_PAGE_SIZE 0x0100
+15 -10
drivers/hid/hid-sensor-hub.c
··· 159 159 { 160 160 struct hid_sensor_hub_callbacks_list *callback; 161 161 struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); 162 + unsigned long flags; 162 163 163 - spin_lock(&pdata->dyn_callback_lock); 164 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 164 165 list_for_each_entry(callback, &pdata->dyn_callback_list, list) 165 166 if (callback->usage_id == usage_id && 166 167 callback->hsdev == hsdev) { 167 - spin_unlock(&pdata->dyn_callback_lock); 168 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 168 169 return -EINVAL; 169 170 } 170 171 callback = kzalloc(sizeof(*callback), GFP_ATOMIC); 171 172 if (!callback) { 172 - spin_unlock(&pdata->dyn_callback_lock); 173 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 173 174 return -ENOMEM; 174 175 } 175 176 callback->hsdev = hsdev; ··· 178 177 callback->usage_id = usage_id; 179 178 callback->priv = NULL; 180 179 list_add_tail(&callback->list, &pdata->dyn_callback_list); 181 - spin_unlock(&pdata->dyn_callback_lock); 180 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 182 181 183 182 return 0; 184 183 } ··· 189 188 { 190 189 struct hid_sensor_hub_callbacks_list *callback; 191 190 struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); 192 + unsigned long flags; 192 193 193 - spin_lock(&pdata->dyn_callback_lock); 194 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 194 195 list_for_each_entry(callback, &pdata->dyn_callback_list, list) 195 196 if (callback->usage_id == usage_id && 196 197 callback->hsdev == hsdev) { ··· 199 197 kfree(callback); 200 198 break; 201 199 } 202 - spin_unlock(&pdata->dyn_callback_lock); 200 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 203 201 204 202 return 0; 205 203 } ··· 380 378 { 381 379 struct sensor_hub_data *pdata = hid_get_drvdata(hdev); 382 380 struct hid_sensor_hub_callbacks_list *callback; 381 + unsigned long flags; 383 382 384 383 hid_dbg(hdev, " sensor_hub_suspend\n"); 385 - spin_lock(&pdata->dyn_callback_lock); 384 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 386 385 list_for_each_entry(callback, &pdata->dyn_callback_list, list) { 387 386 if (callback->usage_callback->suspend) 388 387 callback->usage_callback->suspend( 389 388 callback->hsdev, callback->priv); 390 389 } 391 - spin_unlock(&pdata->dyn_callback_lock); 390 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 392 391 393 392 return 0; 394 393 } ··· 398 395 { 399 396 struct sensor_hub_data *pdata = hid_get_drvdata(hdev); 400 397 struct hid_sensor_hub_callbacks_list *callback; 398 + unsigned long flags; 401 399 402 400 hid_dbg(hdev, " sensor_hub_resume\n"); 403 - spin_lock(&pdata->dyn_callback_lock); 401 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 404 402 list_for_each_entry(callback, &pdata->dyn_callback_list, list) { 405 403 if (callback->usage_callback->resume) 406 404 callback->usage_callback->resume( 407 405 callback->hsdev, callback->priv); 408 406 } 409 - spin_unlock(&pdata->dyn_callback_lock); 407 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 410 408 411 409 return 0; 412 410 } ··· 636 632 if (name == NULL) { 637 633 hid_err(hdev, "Failed MFD device name\n"); 638 634 ret = -ENOMEM; 635 + kfree(hsdev); 639 636 goto err_no_mem; 640 637 } 641 638 sd->hid_sensor_hub_client_devs[
+3
drivers/hid/usbhid/hid-quirks.c
··· 49 49 50 50 { USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II, HID_QUIRK_MULTI_INPUT }, 51 51 { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH, HID_QUIRK_MULTI_INPUT }, 52 + { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH_2968, HID_QUIRK_MULTI_INPUT }, 52 53 { USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD, HID_QUIRK_MULTI_INPUT }, 53 54 { USB_VENDOR_ID_PANTHERLORD, USB_DEVICE_ID_PANTHERLORD_TWIN_USB_JOYSTICK, HID_QUIRK_MULTI_INPUT | HID_QUIRK_SKIP_OUTPUT_REPORTS }, 54 55 { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, ··· 77 76 { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS }, 78 77 { USB_VENDOR_ID_NEXIO, USB_DEVICE_ID_NEXIO_MULTITOUCH_PTI0750, HID_QUIRK_NO_INIT_REPORTS }, 79 78 { USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS }, 79 + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1610, HID_QUIRK_NOGET }, 80 + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1640, HID_QUIRK_NOGET }, 80 81 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS }, 81 82 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS }, 82 83 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS },
+6 -2
drivers/hv/connection.c
··· 339 339 */ 340 340 341 341 do { 342 - hv_begin_read(&channel->inbound); 342 + if (read_state) 343 + hv_begin_read(&channel->inbound); 343 344 channel->onchannel_callback(arg); 344 - bytes_to_read = hv_end_read(&channel->inbound); 345 + if (read_state) 346 + bytes_to_read = hv_end_read(&channel->inbound); 347 + else 348 + bytes_to_read = 0; 345 349 } while (read_state && (bytes_to_read != 0)); 346 350 } else { 347 351 pr_err("no channel callback for relid - %u\n", relid);
+1 -1
drivers/hv/hv_fcopy.c
··· 246 246 /* 247 247 * Send the information to the user-level daemon. 248 248 */ 249 - fcopy_send_data(); 250 249 schedule_delayed_work(&fcopy_work, 5*HZ); 250 + fcopy_send_data(); 251 251 return; 252 252 } 253 253 icmsghdr->icflags = ICMSGHDRFLAG_TRANSACTION | ICMSGHDRFLAG_RESPONSE;
+14 -3
drivers/hv/hv_kvp.c
··· 127 127 kvp_respond_to_host(NULL, HV_E_FAIL); 128 128 } 129 129 130 + static void poll_channel(struct vmbus_channel *channel) 131 + { 132 + if (channel->target_cpu != smp_processor_id()) 133 + smp_call_function_single(channel->target_cpu, 134 + hv_kvp_onchannelcallback, 135 + channel, true); 136 + else 137 + hv_kvp_onchannelcallback(channel); 138 + } 139 + 140 + 130 141 static int kvp_handle_handshake(struct hv_kvp_msg *msg) 131 142 { 132 143 int ret = 1; ··· 166 155 kvp_register(dm_reg_value); 167 156 kvp_transaction.active = false; 168 157 if (kvp_transaction.kvp_context) 169 - hv_kvp_onchannelcallback(kvp_transaction.kvp_context); 158 + poll_channel(kvp_transaction.kvp_context); 170 159 } 171 160 return ret; 172 161 } ··· 579 568 580 569 vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, 581 570 VM_PKT_DATA_INBAND, 0); 582 - 571 + poll_channel(channel); 583 572 } 584 573 585 574 /* ··· 614 603 return; 615 604 } 616 605 617 - vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, 606 + vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, 618 607 &requestid); 619 608 620 609 if (recvlen > 0) {
+1 -1
drivers/hv/hv_util.c
··· 319 319 (struct hv_util_service *)dev_id->driver_data; 320 320 int ret; 321 321 322 - srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); 322 + srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); 323 323 if (!srv->recv_buffer) 324 324 return -ENOMEM; 325 325 if (srv->util_init) {
+14 -14
drivers/hwmon/adc128d818.c
··· 239 239 return sprintf(buf, "%u\n", !!(alarms & mask)); 240 240 } 241 241 242 - static SENSOR_DEVICE_ATTR_2(in0_input, S_IWUSR | S_IRUGO, 243 - adc128_show_in, adc128_set_in, 0, 0); 242 + static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, 243 + adc128_show_in, NULL, 0, 0); 244 244 static SENSOR_DEVICE_ATTR_2(in0_min, S_IWUSR | S_IRUGO, 245 245 adc128_show_in, adc128_set_in, 0, 1); 246 246 static SENSOR_DEVICE_ATTR_2(in0_max, S_IWUSR | S_IRUGO, 247 247 adc128_show_in, adc128_set_in, 0, 2); 248 248 249 - static SENSOR_DEVICE_ATTR_2(in1_input, S_IWUSR | S_IRUGO, 250 - adc128_show_in, adc128_set_in, 1, 0); 249 + static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, 250 + adc128_show_in, NULL, 1, 0); 251 251 static SENSOR_DEVICE_ATTR_2(in1_min, S_IWUSR | S_IRUGO, 252 252 adc128_show_in, adc128_set_in, 1, 1); 253 253 static SENSOR_DEVICE_ATTR_2(in1_max, S_IWUSR | S_IRUGO, 254 254 adc128_show_in, adc128_set_in, 1, 2); 255 255 256 - static SENSOR_DEVICE_ATTR_2(in2_input, S_IWUSR | S_IRUGO, 257 - adc128_show_in, adc128_set_in, 2, 0); 256 + static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, 257 + adc128_show_in, NULL, 2, 0); 258 258 static SENSOR_DEVICE_ATTR_2(in2_min, S_IWUSR | S_IRUGO, 259 259 adc128_show_in, adc128_set_in, 2, 1); 260 260 static SENSOR_DEVICE_ATTR_2(in2_max, S_IWUSR | S_IRUGO, 261 261 adc128_show_in, adc128_set_in, 2, 2); 262 262 263 - static SENSOR_DEVICE_ATTR_2(in3_input, S_IWUSR | S_IRUGO, 264 - adc128_show_in, adc128_set_in, 3, 0); 263 + static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, 264 + adc128_show_in, NULL, 3, 0); 265 265 static SENSOR_DEVICE_ATTR_2(in3_min, S_IWUSR | S_IRUGO, 266 266 adc128_show_in, adc128_set_in, 3, 1); 267 267 static SENSOR_DEVICE_ATTR_2(in3_max, S_IWUSR | S_IRUGO, 268 268 adc128_show_in, adc128_set_in, 3, 2); 269 269 270 - static SENSOR_DEVICE_ATTR_2(in4_input, S_IWUSR | S_IRUGO, 271 - adc128_show_in, adc128_set_in, 4, 0); 270 + static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, 271 + adc128_show_in, NULL, 4, 0); 272 272 static SENSOR_DEVICE_ATTR_2(in4_min, S_IWUSR | S_IRUGO, 273 273 adc128_show_in, adc128_set_in, 4, 1); 274 274 static SENSOR_DEVICE_ATTR_2(in4_max, S_IWUSR | S_IRUGO, 275 275 adc128_show_in, adc128_set_in, 4, 2); 276 276 277 - static SENSOR_DEVICE_ATTR_2(in5_input, S_IWUSR | S_IRUGO, 278 - adc128_show_in, adc128_set_in, 5, 0); 277 + static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, 278 + adc128_show_in, NULL, 5, 0); 279 279 static SENSOR_DEVICE_ATTR_2(in5_min, S_IWUSR | S_IRUGO, 280 280 adc128_show_in, adc128_set_in, 5, 1); 281 281 static SENSOR_DEVICE_ATTR_2(in5_max, S_IWUSR | S_IRUGO, 282 282 adc128_show_in, adc128_set_in, 5, 2); 283 283 284 - static SENSOR_DEVICE_ATTR_2(in6_input, S_IWUSR | S_IRUGO, 285 - adc128_show_in, adc128_set_in, 6, 0); 284 + static SENSOR_DEVICE_ATTR_2(in6_input, S_IRUGO, 285 + adc128_show_in, NULL, 6, 0); 286 286 static SENSOR_DEVICE_ATTR_2(in6_min, S_IWUSR | S_IRUGO, 287 287 adc128_show_in, adc128_set_in, 6, 1); 288 288 static SENSOR_DEVICE_ATTR_2(in6_max, S_IWUSR | S_IRUGO,
+8 -6
drivers/hwmon/adm1021.c
··· 185 185 struct adm1021_data *data = dev_get_drvdata(dev); 186 186 struct i2c_client *client = data->client; 187 187 long temp; 188 - int err; 188 + int reg_val, err; 189 189 190 190 err = kstrtol(buf, 10, &temp); 191 191 if (err) ··· 193 193 temp /= 1000; 194 194 195 195 mutex_lock(&data->update_lock); 196 - data->temp_max[index] = clamp_val(temp, -128, 127); 196 + reg_val = clamp_val(temp, -128, 127); 197 + data->temp_max[index] = reg_val * 1000; 197 198 if (!read_only) 198 199 i2c_smbus_write_byte_data(client, ADM1021_REG_TOS_W(index), 199 - data->temp_max[index]); 200 + reg_val); 200 201 mutex_unlock(&data->update_lock); 201 202 202 203 return count; ··· 211 210 struct adm1021_data *data = dev_get_drvdata(dev); 212 211 struct i2c_client *client = data->client; 213 212 long temp; 214 - int err; 213 + int reg_val, err; 215 214 216 215 err = kstrtol(buf, 10, &temp); 217 216 if (err) ··· 219 218 temp /= 1000; 220 219 221 220 mutex_lock(&data->update_lock); 222 - data->temp_min[index] = clamp_val(temp, -128, 127); 221 + reg_val = clamp_val(temp, -128, 127); 222 + data->temp_min[index] = reg_val * 1000; 223 223 if (!read_only) 224 224 i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index), 225 - data->temp_min[index]); 225 + reg_val); 226 226 mutex_unlock(&data->update_lock); 227 227 228 228 return count;
+3
drivers/hwmon/adm1029.c
··· 232 232 /* Update the value */ 233 233 reg = (reg & 0x3F) | (val << 6); 234 234 235 + /* Update the cache */ 236 + data->fan_div[attr->index] = reg; 237 + 235 238 /* Write value */ 236 239 i2c_smbus_write_byte_data(client, 237 240 ADM1029_REG_FAN_DIV[attr->index], reg);
+5 -3
drivers/hwmon/adm1031.c
··· 365 365 if (ret) 366 366 return ret; 367 367 368 + val = clamp_val(val, 0, 127000); 368 369 mutex_lock(&data->update_lock); 369 370 data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]); 370 371 adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr), ··· 395 394 if (ret) 396 395 return ret; 397 396 397 + val = clamp_val(val, 0, 127000); 398 398 mutex_lock(&data->update_lock); 399 399 data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], 400 400 data->pwm[nr]); ··· 698 696 if (ret) 699 697 return ret; 700 698 701 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 699 + val = clamp_val(val, -55000, 127000); 702 700 mutex_lock(&data->update_lock); 703 701 data->temp_min[nr] = TEMP_TO_REG(val); 704 702 adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr), ··· 719 717 if (ret) 720 718 return ret; 721 719 722 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 720 + val = clamp_val(val, -55000, 127000); 723 721 mutex_lock(&data->update_lock); 724 722 data->temp_max[nr] = TEMP_TO_REG(val); 725 723 adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr), ··· 740 738 if (ret) 741 739 return ret; 742 740 743 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 741 + val = clamp_val(val, -55000, 127000); 744 742 mutex_lock(&data->update_lock); 745 743 data->temp_crit[nr] = TEMP_TO_REG(val); 746 744 adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr),
+3 -3
drivers/hwmon/adt7470.c
··· 515 515 return -EINVAL; 516 516 517 517 temp = DIV_ROUND_CLOSEST(temp, 1000); 518 - temp = clamp_val(temp, 0, 255); 518 + temp = clamp_val(temp, -128, 127); 519 519 520 520 mutex_lock(&data->lock); 521 521 data->temp_min[attr->index] = temp; ··· 549 549 return -EINVAL; 550 550 551 551 temp = DIV_ROUND_CLOSEST(temp, 1000); 552 - temp = clamp_val(temp, 0, 255); 552 + temp = clamp_val(temp, -128, 127); 553 553 554 554 mutex_lock(&data->lock); 555 555 data->temp_max[attr->index] = temp; ··· 826 826 return -EINVAL; 827 827 828 828 temp = DIV_ROUND_CLOSEST(temp, 1000); 829 - temp = clamp_val(temp, 0, 255); 829 + temp = clamp_val(temp, -128, 127); 830 830 831 831 mutex_lock(&data->lock); 832 832 data->pwm_tmin[attr->index] = temp;
+1 -1
drivers/hwmon/amc6821.c
··· 704 704 get_temp_alarm, NULL, IDX_TEMP1_MAX); 705 705 static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, 706 706 get_temp_alarm, NULL, IDX_TEMP1_CRIT); 707 - static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR, 707 + static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, 708 708 get_temp, NULL, IDX_TEMP2_INPUT); 709 709 static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp, 710 710 set_temp, IDX_TEMP2_MIN);
+1 -1
drivers/hwmon/da9052-hwmon.c
··· 194 194 struct device_attribute *devattr, 195 195 char *buf) 196 196 { 197 - return sprintf(buf, "da9052-hwmon\n"); 197 + return sprintf(buf, "da9052\n"); 198 198 } 199 199 200 200 static ssize_t show_label(struct device *dev,
+1 -1
drivers/hwmon/da9055-hwmon.c
··· 204 204 struct device_attribute *devattr, 205 205 char *buf) 206 206 { 207 - return sprintf(buf, "da9055-hwmon\n"); 207 + return sprintf(buf, "da9055\n"); 208 208 } 209 209 210 210 static ssize_t show_label(struct device *dev,
+5 -10
drivers/hwmon/emc2103.c
··· 250 250 if (result < 0) 251 251 return result; 252 252 253 - val = DIV_ROUND_CLOSEST(val, 1000); 254 - if ((val < -63) || (val > 127)) 255 - return -EINVAL; 253 + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); 256 254 257 255 mutex_lock(&data->update_lock); 258 256 data->temp_min[nr] = val; ··· 272 274 if (result < 0) 273 275 return result; 274 276 275 - val = DIV_ROUND_CLOSEST(val, 1000); 276 - if ((val < -63) || (val > 127)) 277 - return -EINVAL; 277 + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); 278 278 279 279 mutex_lock(&data->update_lock); 280 280 data->temp_max[nr] = val; ··· 386 390 { 387 391 struct emc2103_data *data = emc2103_update_device(dev); 388 392 struct i2c_client *client = to_i2c_client(dev); 389 - long rpm_target; 393 + unsigned long rpm_target; 390 394 391 - int result = kstrtol(buf, 10, &rpm_target); 395 + int result = kstrtoul(buf, 10, &rpm_target); 392 396 if (result < 0) 393 397 return result; 394 398 395 399 /* Datasheet states 16384 as maximum RPM target (table 3.2) */ 396 - if ((rpm_target < 0) || (rpm_target > 16384)) 397 - return -EINVAL; 400 + rpm_target = clamp_val(rpm_target, 0, 16384); 398 401 399 402 mutex_lock(&data->update_lock); 400 403
+1 -1
drivers/hwmon/ntc_thermistor.c
··· 512 512 } 513 513 514 514 dev_info(&pdev->dev, "Thermistor type: %s successfully probed.\n", 515 - pdev->name); 515 + pdev_id->name); 516 516 517 517 return 0; 518 518 err_after_sysfs:
-1
drivers/i2c/busses/i2c-sun6i-p2wi.c
··· 22 22 * 23 23 */ 24 24 #include <linux/clk.h> 25 - #include <linux/module.h> 26 25 #include <linux/i2c.h> 27 26 #include <linux/io.h> 28 27 #include <linux/interrupt.h>
+1
drivers/i2c/muxes/Kconfig
··· 40 40 41 41 config I2C_MUX_PCA954x 42 42 tristate "Philips PCA954x I2C Mux/switches" 43 + depends on GPIOLIB 43 44 help 44 45 If you say yes here you get support for the Philips PCA954x 45 46 I2C mux/switch devices.
+2 -5
drivers/iio/accel/hid-sensor-accel-3d.c
··· 110 110 struct accel_3d_state *accel_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 150 151 ret_type = IIO_VAL_INT; 151 152 break; 152 153 case IIO_CHAN_INFO_SAMP_FREQ: 153 - ret = hid_sensor_read_samp_freq_value( 154 + ret_type = hid_sensor_read_samp_freq_value( 154 155 &accel_state->common_attributes, val, val2); 155 - ret_type = IIO_VAL_INT_PLUS_MICRO; 156 156 break; 157 157 case IIO_CHAN_INFO_HYSTERESIS: 158 - ret = hid_sensor_read_raw_hyst_value( 158 + ret_type = hid_sensor_read_raw_hyst_value( 159 159 &accel_state->common_attributes, val, val2); 160 - ret_type = IIO_VAL_INT_PLUS_MICRO; 161 160 break; 162 161 default: 163 162 ret_type = -EINVAL;
+7 -1
drivers/iio/accel/mma8452.c
··· 111 111 {6, 250000}, {1, 560000} 112 112 }; 113 113 114 + /* 115 + * Hardware has fullscale of -2G, -4G, -8G corresponding to raw value -2048 116 + * The userspace interface uses m/s^2 and we declare micro units 117 + * So scale factor is given by: 118 + * g * N * 1000000 / 2048 for N = 2, 4, 8 and g=9.80665 119 + */ 114 120 static const int mma8452_scales[3][2] = { 115 - {0, 977}, {0, 1953}, {0, 3906} 121 + {0, 9577}, {0, 19154}, {0, 38307} 116 122 }; 117 123 118 124 static ssize_t mma8452_show_samp_freq_avail(struct device *dev,
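The new comment in the mma8452 hunk gives the derivation behind the corrected scale table: scale = g * N * 1000000 / 2048 for N = 2, 4, 8 and g = 9.80665. Staying in integer arithmetic, as the kernel does, the table entries can be reproduced like this — `scale_micro()` is an illustrative helper, not a driver function:

```c
#include <assert.h>

/*
 * Micro-units (1e-6 m/s^2) per LSB for a +/-N g range mapped onto a
 * 12-bit signed raw value (full scale at +/-2048 counts):
 *   scale = 9.80665 * N * 1000000 / 2048, rounded to nearest.
 */
static int scale_micro(int n_g)
{
	long long g_micro = 9806650;	/* 9.80665 m/s^2 in micro-units */

	/* round-to-nearest division, like the kernel's DIV_ROUND_CLOSEST() */
	return (int)((g_micro * n_g + 1024) / 2048);
}
```

This reproduces the corrected entries {9577, 19154, 38307}. The old entries {977, 1953, 3906} match N * 1000000 / 2048, i.e. they omitted the factor g, reporting raw g-counts instead of m/s^2.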
+1 -1
drivers/iio/adc/ti_am335x_adc.c
··· 374 374 return -EAGAIN; 375 375 } 376 376 } 377 - map_val = chan->channel + TOTAL_CHANNELS; 377 + map_val = adc_dev->channel_step[chan->scan_index]; 378 378 379 379 /* 380 380 * We check the complete FIFO. We programmed just one entry but in case
+2 -5
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 110 110 struct gyro_3d_state *gyro_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 150 151 ret_type = IIO_VAL_INT; 151 152 break; 152 153 case IIO_CHAN_INFO_SAMP_FREQ: 153 - ret = hid_sensor_read_samp_freq_value( 154 + ret_type = hid_sensor_read_samp_freq_value( 154 155 &gyro_state->common_attributes, val, val2); 155 - ret_type = IIO_VAL_INT_PLUS_MICRO; 156 156 break; 157 157 case IIO_CHAN_INFO_HYSTERESIS: 158 - ret = hid_sensor_read_raw_hyst_value( 158 + ret_type = hid_sensor_read_raw_hyst_value( 159 159 &gyro_state->common_attributes, val, val2); 160 - ret_type = IIO_VAL_INT_PLUS_MICRO; 161 160 break; 162 161 default: 163 162 ret_type = -EINVAL;
+3
drivers/iio/industrialio-event.c
··· 345 345 &indio_dev->event_interface->dev_attr_list); 346 346 kfree(postfix); 347 347 348 + if ((ret == -EBUSY) && (shared_by != IIO_SEPARATE)) 349 + continue; 350 + 348 351 if (ret) 349 352 return ret; 350 353
+2 -5
drivers/iio/light/hid-sensor-als.c
··· 79 79 struct als_state *als_state = iio_priv(indio_dev); 80 80 int report_id = -1; 81 81 u32 address; 82 - int ret; 83 82 int ret_type; 84 83 s32 poll_value; 85 84 ··· 128 129 ret_type = IIO_VAL_INT; 129 130 break; 130 131 case IIO_CHAN_INFO_SAMP_FREQ: 131 - ret = hid_sensor_read_samp_freq_value( 132 + ret_type = hid_sensor_read_samp_freq_value( 132 133 &als_state->common_attributes, val, val2); 133 - ret_type = IIO_VAL_INT_PLUS_MICRO; 134 134 break; 135 135 case IIO_CHAN_INFO_HYSTERESIS: 136 - ret = hid_sensor_read_raw_hyst_value( 136 + ret_type = hid_sensor_read_raw_hyst_value( 137 137 &als_state->common_attributes, val, val2); 138 - ret_type = IIO_VAL_INT_PLUS_MICRO; 139 138 break; 140 139 default: 141 140 ret_type = -EINVAL;
+2 -5
drivers/iio/light/hid-sensor-prox.c
··· 74 74 struct prox_state *prox_state = iio_priv(indio_dev); 75 75 int report_id = -1; 76 76 u32 address; 77 - int ret; 78 77 int ret_type; 79 78 s32 poll_value; 80 79 ··· 124 125 ret_type = IIO_VAL_INT; 125 126 break; 126 127 case IIO_CHAN_INFO_SAMP_FREQ: 127 - ret = hid_sensor_read_samp_freq_value( 128 + ret_type = hid_sensor_read_samp_freq_value( 128 129 &prox_state->common_attributes, val, val2); 129 - ret_type = IIO_VAL_INT_PLUS_MICRO; 130 130 break; 131 131 case IIO_CHAN_INFO_HYSTERESIS: 132 - ret = hid_sensor_read_raw_hyst_value( 132 + ret_type = hid_sensor_read_raw_hyst_value( 133 133 &prox_state->common_attributes, val, val2); 134 - ret_type = IIO_VAL_INT_PLUS_MICRO; 135 134 break; 136 135 default: 137 136 ret_type = -EINVAL;
+10 -1
drivers/iio/light/tcs3472.c
··· 52 52 53 53 struct tcs3472_data { 54 54 struct i2c_client *client; 55 + struct mutex lock; 55 56 u8 enable; 56 57 u8 control; 57 58 u8 atime; ··· 117 116 118 117 switch (mask) { 119 118 case IIO_CHAN_INFO_RAW: 119 + if (iio_buffer_enabled(indio_dev)) 120 + return -EBUSY; 121 + 122 + mutex_lock(&data->lock); 120 123 ret = tcs3472_req_data(data); 121 - if (ret < 0) 124 + if (ret < 0) { 125 + mutex_unlock(&data->lock); 122 126 return ret; 127 + } 123 128 ret = i2c_smbus_read_word_data(data->client, chan->address); 129 + mutex_unlock(&data->lock); 124 130 if (ret < 0) 125 131 return ret; 126 132 *val = ret; ··· 263 255 data = iio_priv(indio_dev); 264 256 i2c_set_clientdata(client, indio_dev); 265 257 data->client = client; 258 + mutex_init(&data->lock); 266 259 267 260 indio_dev->dev.parent = &client->dev; 268 261 indio_dev->info = &tcs3472_info;
+2 -5
drivers/iio/magnetometer/hid-sensor-magn-3d.c
··· 110 110 struct magn_3d_state *magn_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 152 153 ret_type = IIO_VAL_INT; 153 154 break; 154 155 case IIO_CHAN_INFO_SAMP_FREQ: 155 - ret = hid_sensor_read_samp_freq_value( 156 + ret_type = hid_sensor_read_samp_freq_value( 156 157 &magn_state->common_attributes, val, val2); 157 - ret_type = IIO_VAL_INT_PLUS_MICRO; 158 158 break; 159 159 case IIO_CHAN_INFO_HYSTERESIS: 160 - ret = hid_sensor_read_raw_hyst_value( 160 + ret_type = hid_sensor_read_raw_hyst_value( 161 161 &magn_state->common_attributes, val, val2); 162 - ret_type = IIO_VAL_INT_PLUS_MICRO; 163 162 break; 164 163 default: 165 164 ret_type = -EINVAL;
+2 -5
drivers/iio/pressure/hid-sensor-press.c
··· 78 78 struct press_state *press_state = iio_priv(indio_dev); 79 79 int report_id = -1; 80 80 u32 address; 81 - int ret; 82 81 int ret_type; 83 82 s32 poll_value; 84 83 ··· 127 128 ret_type = IIO_VAL_INT; 128 129 break; 129 130 case IIO_CHAN_INFO_SAMP_FREQ: 130 - ret = hid_sensor_read_samp_freq_value( 131 + ret_type = hid_sensor_read_samp_freq_value( 131 132 &press_state->common_attributes, val, val2); 132 - ret_type = IIO_VAL_INT_PLUS_MICRO; 133 133 break; 134 134 case IIO_CHAN_INFO_HYSTERESIS: 135 - ret = hid_sensor_read_raw_hyst_value( 135 + ret_type = hid_sensor_read_raw_hyst_value( 136 136 &press_state->common_attributes, val, val2); 137 - ret_type = IIO_VAL_INT_PLUS_MICRO; 138 137 break; 139 138 default: 140 139 ret_type = -EINVAL;
+11 -3
drivers/infiniband/hw/cxgb4/cm.c
··· 432 432 */ 433 433 static void act_open_req_arp_failure(void *handle, struct sk_buff *skb) 434 434 { 435 + struct c4iw_ep *ep = handle; 436 + 435 437 printk(KERN_ERR MOD "ARP failure duing connect\n"); 436 438 kfree_skb(skb); 439 + connect_reply_upcall(ep, -EHOSTUNREACH); 440 + state_set(&ep->com, DEAD); 441 + remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid); 442 + cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid); 443 + dst_release(ep->dst); 444 + cxgb4_l2t_release(ep->l2t); 445 + c4iw_put_ep(&ep->com); 437 446 } 438 447 439 448 /* ··· 667 658 opt2 |= T5_OPT_2_VALID; 668 659 opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE); 669 660 } 670 - t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure); 661 + t4_set_arp_err_handler(skb, ep, act_open_req_arp_failure); 671 662 672 663 if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) { 673 664 if (ep->com.remote_addr.ss_family == AF_INET) { ··· 2189 2180 PDBG("%s c4iw_dev %p tid %u\n", __func__, dev, hwtid); 2190 2181 BUG_ON(skb_cloned(skb)); 2191 2182 skb_trim(skb, sizeof(struct cpl_tid_release)); 2192 - skb_get(skb); 2193 2183 release_tid(&dev->rdev, hwtid, skb); 2194 2184 return; 2195 2185 } ··· 3925 3917 return 0; 3926 3918 } 3927 3919 3928 - void __exit c4iw_cm_term(void) 3920 + void c4iw_cm_term(void) 3929 3921 { 3930 3922 WARN_ON(!list_empty(&timeout_list)); 3931 3923 flush_workqueue(workq);
+11 -7
drivers/infiniband/hw/cxgb4/device.c
··· 696 696 pr_err(MOD "error allocating status page\n"); 697 697 goto err4; 698 698 } 699 + rdev->status_page->db_off = 0; 699 700 return 0; 700 701 err4: 701 702 c4iw_rqtpool_destroy(rdev); ··· 730 729 if (ctx->dev->rdev.oc_mw_kva) 731 730 iounmap(ctx->dev->rdev.oc_mw_kva); 732 731 ib_dealloc_device(&ctx->dev->ibdev); 733 - iwpm_exit(RDMA_NL_C4IW); 734 732 ctx->dev = NULL; 735 733 } 736 734 ··· 826 826 setup_debugfs(devp); 827 827 } 828 828 829 - ret = iwpm_init(RDMA_NL_C4IW); 830 - if (ret) { 831 - pr_err("port mapper initialization failed with %d\n", ret); 832 - ib_dealloc_device(&devp->ibdev); 833 - return ERR_PTR(ret); 834 - } 835 829 836 830 return devp; 837 831 } ··· 1326 1332 pr_err("%s[%u]: Failed to add netlink callback\n" 1327 1333 , __func__, __LINE__); 1328 1334 1335 + err = iwpm_init(RDMA_NL_C4IW); 1336 + if (err) { 1337 + pr_err("port mapper initialization failed with %d\n", err); 1338 + ibnl_remove_client(RDMA_NL_C4IW); 1339 + c4iw_cm_term(); 1340 + debugfs_remove_recursive(c4iw_debugfs_root); 1341 + return err; 1342 + } 1343 + 1329 1344 cxgb4_register_uld(CXGB4_ULD_RDMA, &c4iw_uld_info); 1330 1345 1331 1346 return 0; ··· 1352 1349 } 1353 1350 mutex_unlock(&dev_mutex); 1354 1351 cxgb4_unregister_uld(CXGB4_ULD_RDMA); 1352 + iwpm_exit(RDMA_NL_C4IW); 1355 1353 ibnl_remove_client(RDMA_NL_C4IW); 1356 1354 c4iw_cm_term(); 1357 1355 debugfs_remove_recursive(c4iw_debugfs_root);
+1 -1
drivers/infiniband/hw/cxgb4/iw_cxgb4.h
··· 908 908 int c4iw_register_device(struct c4iw_dev *dev); 909 909 void c4iw_unregister_device(struct c4iw_dev *dev); 910 910 int __init c4iw_cm_init(void); 911 - void __exit c4iw_cm_term(void); 911 + void c4iw_cm_term(void); 912 912 void c4iw_release_dev_ucontext(struct c4iw_rdev *rdev, 913 913 struct c4iw_dev_ucontext *uctx); 914 914 void c4iw_init_dev_ucontext(struct c4iw_rdev *rdev,
+1 -1
drivers/infiniband/hw/mlx5/qp.c
··· 675 675 int err; 676 676 677 677 uuari = &dev->mdev.priv.uuari; 678 - if (init_attr->create_flags & ~IB_QP_CREATE_SIGNATURE_EN) 678 + if (init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN | IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)) 679 679 return -EINVAL; 680 680 681 681 if (init_attr->qp_type == MLX5_IB_QPT_REG_UMR)
+4 -4
drivers/iommu/fsl_pamu.c
··· 170 170 static unsigned int map_addrspace_size_to_wse(phys_addr_t addrspace_size) 171 171 { 172 172 /* Bug if not a power of 2 */ 173 - BUG_ON(!is_power_of_2(addrspace_size)); 173 + BUG_ON((addrspace_size & (addrspace_size - 1))); 174 174 175 175 /* window size is 2^(WSE+1) bytes */ 176 - return __ffs(addrspace_size) - 1; 176 + return fls64(addrspace_size) - 2; 177 177 } 178 178 179 179 /* Derive the PAACE window count encoding for the subwindow count */ ··· 351 351 struct paace *ppaace; 352 352 unsigned long fspi; 353 353 354 - if (!is_power_of_2(win_size) || win_size < PAMU_PAGE_SIZE) { 354 + if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) { 355 355 pr_debug("window size too small or not a power of two %llx\n", win_size); 356 356 return -EINVAL; 357 357 } ··· 464 464 return -ENOENT; 465 465 } 466 466 467 - if (!is_power_of_2(subwin_size) || subwin_size < PAMU_PAGE_SIZE) { 467 + if ((subwin_size & (subwin_size - 1)) || subwin_size < PAMU_PAGE_SIZE) { 468 468 pr_debug("subwindow size out of range, or not a power of 2\n"); 469 469 return -EINVAL; 470 470 }
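The fsl_pamu hunk replaces `is_power_of_2()`/`__ffs()` with an open-coded power-of-two test and `fls64()`. Since a window of size 2^(WSE+1) bytes means WSE = log2(size) - 1, and for a power of two fls64(size) = log2(size) + 1, the new encoding is fls64(size) - 2. A minimal sketch with a hand-rolled fls64 (the kernel's is architecture-optimized):

```c
#include <assert.h>
#include <stdint.h>

/* Open-coded power-of-two test, as used in the hunk above */
static int is_pow2(uint64_t x)
{
	return x && !(x & (x - 1));
}

/* Simplified fls64(): 1-based index of the highest set bit, 0 for x == 0 */
static int fls64_sketch(uint64_t x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Window size is 2^(WSE+1) bytes, so WSE = log2(size) - 1 = fls64(size) - 2 */
static unsigned int size_to_wse(uint64_t size)
{
	return fls64_sketch(size) - 2;
}
```

For a power of two the old `__ffs(size) - 1` gives the same answer (the lowest and highest set bits coincide); the switch is likely because `is_power_of_2()` and `__ffs()` take an `unsigned long`, which truncates a 64-bit `phys_addr_t` on 32-bit builds, whereas the open-coded test and `fls64()` are 64-bit-safe.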
+8 -10
drivers/iommu/fsl_pamu_domain.c
··· 301 301 * Size must be a power of two and at least be equal 302 302 * to PAMU page size. 303 303 */ 304 - if (!is_power_of_2(size) || size < PAMU_PAGE_SIZE) { 304 + if ((size & (size - 1)) || size < PAMU_PAGE_SIZE) { 305 305 pr_debug("%s: size too small or not a power of two\n", __func__); 306 306 return -EINVAL; 307 307 } ··· 333 333 spin_lock_init(&domain->domain_lock); 334 334 335 335 return domain; 336 - } 337 - 338 - static inline struct device_domain_info *find_domain(struct device *dev) 339 - { 340 - return dev->archdata.iommu_domain; 341 336 } 342 337 343 338 static void remove_device_ref(struct device_domain_info *info, u32 win_cnt) ··· 375 380 * Check here if the device is already attached to domain or not. 376 381 * If the device is already attached to a domain detach it. 377 382 */ 378 - old_domain_info = find_domain(dev); 383 + old_domain_info = dev->archdata.iommu_domain; 379 384 if (old_domain_info && old_domain_info->domain != dma_domain) { 380 385 spin_unlock_irqrestore(&device_domain_lock, flags); 381 386 detach_device(dev, old_domain_info->domain); ··· 394 399 * the info for the first LIODN as all 395 400 * LIODNs share the same domain 396 401 */ 397 - if (!old_domain_info) 402 + if (!dev->archdata.iommu_domain) 398 403 dev->archdata.iommu_domain = info; 399 404 spin_unlock_irqrestore(&device_domain_lock, flags); ··· 1037 1042 group = get_shared_pci_device_group(pdev); 1038 1043 } 1039 1044 1045 + if (!group) 1046 + group = ERR_PTR(-ENODEV); 1047 + 1040 1048 return group; 1041 1049 } 1042 1050 1043 1051 static int fsl_pamu_add_device(struct device *dev) 1044 1052 { 1045 - struct iommu_group *group = NULL; 1053 + struct iommu_group *group = ERR_PTR(-ENODEV); 1046 1054 struct pci_dev *pdev; 1047 1055 const u32 *prop; 1048 1056 int ret, len; ··· 1068 1070 group = get_device_iommu_group(dev); 1069 1071 } 1070 1072 1071 - if (!group || IS_ERR(group)) 1073 + if (IS_ERR(group)) 1072 1074 return PTR_ERR(group); 1073 1075 1074 1076 ret = iommu_group_add_device(group, dev);

+6 -1
drivers/irqchip/irq-gic.c
··· 42 42 #include <linux/irqchip/chained_irq.h> 43 43 #include <linux/irqchip/arm-gic.h> 44 44 45 + #include <asm/cputype.h> 45 46 #include <asm/irq.h> 46 47 #include <asm/exception.h> 47 48 #include <asm/smp_plat.h> ··· 955 954 } 956 955 957 956 for_each_possible_cpu(cpu) { 958 - unsigned long offset = percpu_offset * cpu_logical_map(cpu); 957 + u32 mpidr = cpu_logical_map(cpu); 958 + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); 959 + unsigned long offset = percpu_offset * core_id; 959 960 *per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset; 960 961 *per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset; 961 962 } ··· 1074 1071 gic_cnt++; 1075 1072 return 0; 1076 1073 } 1074 + IRQCHIP_DECLARE(gic_400, "arm,gic-400", gic_of_init); 1077 1075 IRQCHIP_DECLARE(cortex_a15_gic, "arm,cortex-a15-gic", gic_of_init); 1078 1076 IRQCHIP_DECLARE(cortex_a9_gic, "arm,cortex-a9-gic", gic_of_init); 1077 + IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init); 1079 1078 IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init); 1080 1079 IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init); 1081 1080
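The gic change stops using the raw `cpu_logical_map()` value (the CPU's MPIDR) to index banked per-CPU register regions and extracts affinity level 0 — the core number within a cluster — instead; on multi-cluster systems the higher affinity fields encode the cluster and would produce wildly wrong offsets. The extraction, re-derived here from the 8-bit-per-level MPIDR layout (`banked_offset()` is only an illustration of the `percpu_offset * core_id` computation, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Each MPIDR affinity level is an 8-bit field; level n sits at bit n*8 */
#define MPIDR_LEVEL_BITS	8
#define MPIDR_LEVEL_MASK	((1u << MPIDR_LEVEL_BITS) - 1)
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
	(((mpidr) >> ((level) * MPIDR_LEVEL_BITS)) & MPIDR_LEVEL_MASK)

/* Offset of a CPU's banked GIC region: percpu_offset * core id (Aff0) */
static unsigned long banked_offset(unsigned long percpu_offset, uint32_t mpidr)
{
	return percpu_offset * MPIDR_AFFINITY_LEVEL(mpidr, 0);
}
```

For an MPIDR of 0x102 (cluster 1, core 2), the old code would have multiplied `percpu_offset` by 0x102; the new code multiplies by 2.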
+9 -5
drivers/isdn/hisax/l3ni1.c
··· 2059 2059 memcpy(p, ic->parm.ni1_io.data, ic->parm.ni1_io.datalen); /* copy data */ 2060 2060 l = (p - temp) + ic->parm.ni1_io.datalen; /* total length */ 2061 2061 2062 - if (ic->parm.ni1_io.timeout > 0) 2063 - if (!(pc = ni1_new_l3_process(st, -1))) 2064 - { free_invoke_id(st, id); 2062 + if (ic->parm.ni1_io.timeout > 0) { 2063 + pc = ni1_new_l3_process(st, -1); 2064 + if (!pc) { 2065 + free_invoke_id(st, id); 2065 2066 return (-2); 2066 2067 } 2067 - pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; /* remember id */ 2068 - pc->prot.ni1.proc = ic->parm.ni1_io.proc; /* and procedure */ 2068 + /* remember id */ 2069 + pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; 2070 + /* and procedure */ 2071 + pc->prot.ni1.proc = ic->parm.ni1_io.proc; 2072 + } 2069 2073 2070 2074 if (!(skb = l3_alloc_skb(l))) 2071 2075 { free_invoke_id(st, id);
+1 -7
drivers/isdn/i4l/isdn_ppp.c
··· 442 442 { 443 443 struct sock_fprog uprog; 444 444 struct sock_filter *code = NULL; 445 - int len, err; 445 + int len; 446 446 447 447 if (copy_from_user(&uprog, arg, sizeof(uprog))) 448 448 return -EFAULT; ··· 457 457 code = memdup_user(uprog.filter, len); 458 458 if (IS_ERR(code)) 459 459 return PTR_ERR(code); 460 - 461 - err = sk_chk_filter(code, uprog.len); 462 - if (err) { 463 - kfree(code); 464 - return err; 465 - } 466 460 467 461 *p = code; 468 462 return uprog.len;
+9
drivers/md/dm-cache-metadata.c
··· 425 425 426 426 disk_super = dm_block_data(sblock); 427 427 428 + /* Verify the data block size hasn't changed */ 429 + if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) { 430 + DMERR("changing the data block size (from %u to %llu) is not supported", 431 + le32_to_cpu(disk_super->data_block_size), 432 + (unsigned long long)cmd->data_block_size); 433 + r = -EINVAL; 434 + goto bad; 435 + } 436 + 428 437 r = __check_incompat_features(disk_super, cmd); 429 438 if (r < 0) 430 439 goto bad;
+2 -2
drivers/md/dm-crypt.c
··· 1 1 /* 2 - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> 2 + * Copyright (C) 2003 Jana Saout <jana@saout.de> 3 3 * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org> 4 4 * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved. 5 5 * Copyright (C) 2013 Milan Broz <gmazyland@gmail.com> ··· 1996 1996 module_init(dm_crypt_init); 1997 1997 module_exit(dm_crypt_exit); 1998 1998 1999 - MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); 1999 + MODULE_AUTHOR("Jana Saout <jana@saout.de>"); 2000 2000 MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption"); 2001 2001 MODULE_LICENSE("GPL");
+8 -14
drivers/md/dm-io.c
··· 10 10 #include <linux/device-mapper.h> 11 11 12 12 #include <linux/bio.h> 13 + #include <linux/completion.h> 13 14 #include <linux/mempool.h> 14 15 #include <linux/module.h> 15 16 #include <linux/sched.h> ··· 33 32 struct io { 34 33 unsigned long error_bits; 35 34 atomic_t count; 36 - struct task_struct *sleeper; 35 + struct completion *wait; 37 36 struct dm_io_client *client; 38 37 io_notify_fn callback; 39 38 void *context; ··· 122 121 invalidate_kernel_vmap_range(io->vma_invalidate_address, 123 122 io->vma_invalidate_size); 124 123 125 - if (io->sleeper) 126 - wake_up_process(io->sleeper); 124 + if (io->wait) 125 + complete(io->wait); 127 126 128 127 else { 129 128 unsigned long r = io->error_bits; ··· 388 387 */ 389 388 volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1]; 390 389 struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io)); 390 + DECLARE_COMPLETION_ONSTACK(wait); 391 391 392 392 if (num_regions > 1 && (rw & RW_MASK) != WRITE) { 393 393 WARN_ON(1); ··· 397 395 398 396 io->error_bits = 0; 399 397 atomic_set(&io->count, 1); /* see dispatch_io() */ 400 - io->sleeper = current; 398 + io->wait = &wait; 401 399 io->client = client; 402 400 403 401 io->vma_invalidate_address = dp->vma_invalidate_address; ··· 405 403 406 404 dispatch_io(rw, num_regions, where, dp, io, 1); 407 405 408 - while (1) { 409 - set_current_state(TASK_UNINTERRUPTIBLE); 410 - 411 - if (!atomic_read(&io->count)) 412 - break; 413 - 414 - io_schedule(); 415 - } 416 - set_current_state(TASK_RUNNING); 406 + wait_for_completion_io(&wait); 417 407 418 408 if (error_bits) 419 409 *error_bits = io->error_bits; ··· 428 434 io = mempool_alloc(client->pool, GFP_NOIO); 429 435 io->error_bits = 0; 430 436 atomic_set(&io->count, 1); /* see dispatch_io() */ 431 - io->sleeper = NULL; 437 + io->wait = NULL; 432 438 io->client = client; 433 439 io->callback = fn; 434 440 io->context = context;
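The dm-io hunk swaps a hand-rolled sleep/wake pair (`io->sleeper` plus `wake_up_process()`) for a completion, which cannot lose a wakeup that fires before the waiter sleeps. A userspace sketch of the completion primitive built on a mutex and condition variable — loosely named after the kernel API, not a drop-in for it:

```c
#include <assert.h>
#include <pthread.h>

struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
};

static void init_completion(struct completion *c)
{
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->cond, NULL);
	c->done = 0;
}

/* Called from the end-of-I/O path: record completion and wake the waiter */
static void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = 1;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

/* Called by the submitter: block until complete() has run */
static void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}
```

In the patch, each synchronous request declares its completion on the stack (`DECLARE_COMPLETION_ONSTACK`), `dec_count()` calls `complete()` when the last sub-I/O finishes, and `sync_io()` blocks in `wait_for_completion_io()`; the `done` flag is what keeps the pattern safe even when `complete()` runs before the waiter ever sleeps.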
+3 -2
drivers/md/dm-mpath.c
··· 1611 1611 1612 1612 spin_lock_irqsave(&m->lock, flags); 1613 1613 1614 - /* pg_init in progress, requeue until done */ 1615 - if (!pg_ready(m)) { 1614 + /* pg_init in progress or no paths available */ 1615 + if (m->pg_init_in_progress || 1616 + (!m->nr_valid_paths && m->queue_if_no_path)) { 1616 1617 busy = 1; 1617 1618 goto out; 1618 1619 }
+9
drivers/md/dm-thin-metadata.c
··· 613 613 614 614 disk_super = dm_block_data(sblock); 615 615 616 + /* Verify the data block size hasn't changed */ 617 + if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) { 618 + DMERR("changing the data block size (from %u to %llu) is not supported", 619 + le32_to_cpu(disk_super->data_block_size), 620 + (unsigned long long)pmd->data_block_size); 621 + r = -EINVAL; 622 + goto bad_unlock_sblock; 623 + } 624 + 616 625 r = __check_incompat_features(disk_super, pmd); 617 626 if (r < 0) 618 627 goto bad_unlock_sblock;
+2 -2
drivers/md/dm-zero.c
··· 1 1 /* 2 - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> 2 + * Copyright (C) 2003 Jana Saout <jana@saout.de> 3 3 * 4 4 * This file is released under the GPL. 5 5 */ ··· 79 79 module_init(dm_zero_init) 80 80 module_exit(dm_zero_exit) 81 81 82 - MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); 82 + MODULE_AUTHOR("Jana Saout <jana@saout.de>"); 83 83 MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros"); 84 84 MODULE_LICENSE("GPL");
+13 -2
drivers/md/dm.c
··· 54 54 55 55 static DECLARE_WORK(deferred_remove_work, do_deferred_remove); 56 56 57 + static struct workqueue_struct *deferred_remove_workqueue; 58 + 57 59 /* 58 60 * For bio-based dm. 59 61 * One of these is allocated per bio. ··· 278 276 if (r) 279 277 goto out_free_rq_tio_cache; 280 278 279 + deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1); 280 + if (!deferred_remove_workqueue) { 281 + r = -ENOMEM; 282 + goto out_uevent_exit; 283 + } 284 + 281 285 _major = major; 282 286 r = register_blkdev(_major, _name); 283 287 if (r < 0) 284 - goto out_uevent_exit; 288 + goto out_free_workqueue; 285 289 286 290 if (!_major) 287 291 _major = r; 288 292 289 293 return 0; 290 294 295 + out_free_workqueue: 296 + destroy_workqueue(deferred_remove_workqueue); 291 297 out_uevent_exit: 292 298 dm_uevent_exit(); 293 299 out_free_rq_tio_cache: ··· 309 299 static void local_exit(void) 310 300 { 311 301 flush_scheduled_work(); 302 + destroy_workqueue(deferred_remove_workqueue); 312 303 313 304 kmem_cache_destroy(_rq_tio_cache); 314 305 kmem_cache_destroy(_io_cache); ··· 418 407 419 408 if (atomic_dec_and_test(&md->open_count) && 420 409 (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) 421 - schedule_work(&deferred_remove_work); 410 + queue_work(deferred_remove_workqueue, &deferred_remove_work); 422 411 423 412 dm_put(md); 424 413
+43
drivers/mtd/chips/cfi_cmdset_0001.c
··· 52 52 /* Atmel chips */ 53 53 #define AT49BV640D 0x02de 54 54 #define AT49BV640DT 0x02db 55 + /* Sharp chips */ 56 + #define LH28F640BFHE_PTTL90 0x00b0 57 + #define LH28F640BFHE_PBTL90 0x00b1 58 + #define LH28F640BFHE_PTTL70A 0x00b2 59 + #define LH28F640BFHE_PBTL70A 0x00b3 55 60 56 61 static int cfi_intelext_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); 57 62 static int cfi_intelext_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *); ··· 263 258 (cfi->cfiq->EraseRegionInfo[1] & 0xffff0000) | 0x3e; 264 259 }; 265 260 261 + static int is_LH28F640BF(struct cfi_private *cfi) 262 + { 263 + /* Sharp LH28F640BF Family */ 264 + if (cfi->mfr == CFI_MFR_SHARP && ( 265 + cfi->id == LH28F640BFHE_PTTL90 || cfi->id == LH28F640BFHE_PBTL90 || 266 + cfi->id == LH28F640BFHE_PTTL70A || cfi->id == LH28F640BFHE_PBTL70A)) 267 + return 1; 268 + return 0; 269 + } 270 + 271 + static void fixup_LH28F640BF(struct mtd_info *mtd) 272 + { 273 + struct map_info *map = mtd->priv; 274 + struct cfi_private *cfi = map->fldrv_priv; 275 + struct cfi_pri_intelext *extp = cfi->cmdset_priv; 276 + 277 + /* Reset the Partition Configuration Register on LH28F640BF 278 + * to a single partition (PCR = 0x000): PCR is embedded into A0-A15. */ 279 + if (is_LH28F640BF(cfi)) { 280 + printk(KERN_INFO "Reset Partition Config. Register: 1 Partition of 4 planes\n"); 281 + map_write(map, CMD(0x60), 0); 282 + map_write(map, CMD(0x04), 0); 283 + 284 + /* We have set one single partition thus 285 + * Simultaneous Operations are not allowed */ 286 + printk(KERN_INFO "cfi_cmdset_0001: Simultaneous Operations disabled\n"); 287 + extp->FeatureSupport &= ~512; 288 + } 289 + } 290 + 266 291 static void fixup_use_point(struct mtd_info *mtd) 267 292 { 268 293 struct map_info *map = mtd->priv; ··· 344 309 { CFI_MFR_ST, 0x00ba, /* M28W320CT */ fixup_st_m28w320ct }, 345 310 { CFI_MFR_ST, 0x00bb, /* M28W320CB */ fixup_st_m28w320cb }, 346 311 { CFI_MFR_INTEL, CFI_ID_ANY, fixup_unlock_powerup_lock }, 312 + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_unlock_powerup_lock }, 313 + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_LH28F640BF }, 347 314 { 0, 0, NULL } 348 315 }; ··· 1685 1648 adr += chip->start; 1686 1649 initial_adr = adr; 1687 1650 cmd_adr = adr & ~(wbufsize-1); 1651 + 1652 + /* Sharp LH28F640BF chips need the first address for the 1653 + * Page Buffer Program command. See Table 5 of 1654 + * LH28F320BF, LH28F640BF, LH28F128BF Series (Appendix FUM00701) */ 1655 + if (is_LH28F640BF(cfi)) 1656 + cmd_adr = adr; 1688 1657 1689 1658 /* Let's determine this according to the interleave only once */ 1690 1659 write_cmd = (cfi->cfiq->P_ID != P_ID_INTEL_PERFORMANCE) ? CMD(0xe8) : CMD(0xe9);
+2
drivers/mtd/devices/elm.c
··· 475 475 ELM_SYNDROME_FRAGMENT_1 + offset); 476 476 regs->elm_syndrome_fragment_0[i] = elm_read_reg(info, 477 477 ELM_SYNDROME_FRAGMENT_0 + offset); 478 + break; 478 479 default: 479 480 return -EINVAL; 480 481 } ··· 521 520 regs->elm_syndrome_fragment_1[i]); 522 521 elm_write_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset, 523 522 regs->elm_syndrome_fragment_0[i]); 523 + break; 524 524 default: 525 525 return -EINVAL; 526 526 }
+4 -2
drivers/mtd/nand/nand_base.c
··· 4047 4047 ecc->layout->oobavail += ecc->layout->oobfree[i].length; 4048 4048 mtd->oobavail = ecc->layout->oobavail; 4049 4049 4050 - /* ECC sanity check: warn noisily if it's too weak */ 4051 - WARN_ON(!nand_ecc_strength_good(mtd)); 4050 + /* ECC sanity check: warn if it's too weak */ 4051 + if (!nand_ecc_strength_good(mtd)) 4052 + pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n", 4053 + mtd->name); 4052 4054 4053 4055 /* 4054 4056 * Set the number of read / write steps for one page depending on ECC
+2 -2
drivers/mtd/ubi/fastmap.c
··· 125 125 parent = *p; 126 126 av = rb_entry(parent, struct ubi_ainf_volume, rb); 127 127 128 - if (vol_id < av->vol_id) 128 + if (vol_id > av->vol_id) 129 129 p = &(*p)->rb_left; 130 130 else 131 131 p = &(*p)->rb_right; ··· 423 423 pnum, err); 424 424 ret = err > 0 ? UBI_BAD_FASTMAP : err; 425 425 goto out; 426 - } else if (ret == UBI_IO_BITFLIPS) 426 + } else if (err == UBI_IO_BITFLIPS) 427 427 scrub = 1; 428 428 429 429 /*
+1 -1
drivers/net/bonding/bond_main.c
··· 4068 4068 } 4069 4069 4070 4070 if (ad_select) { 4071 - bond_opt_initstr(&newval, lacp_rate); 4071 + bond_opt_initstr(&newval, ad_select); 4072 4072 valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT), 4073 4073 &newval); 4074 4074 if (!valptr) {
+11 -32
drivers/net/ethernet/broadcom/bcmsysport.c
··· 654 654 655 655 work_done = bcm_sysport_tx_reclaim(ring->priv, ring); 656 656 657 - if (work_done < budget) { 657 + if (work_done == 0) { 658 658 napi_complete(napi); 659 659 /* re-enable TX interrupt */ 660 660 intrl2_1_mask_clear(ring->priv, BIT(ring->index)); 661 661 } 662 662 663 - return work_done; 663 + return 0; 664 664 } 665 665 666 666 static void bcm_sysport_tx_reclaim_all(struct bcm_sysport_priv *priv) ··· 1254 1254 usleep_range(1000, 2000); 1255 1255 } 1256 1256 1257 - static inline int umac_reset(struct bcm_sysport_priv *priv) 1257 + static inline void umac_reset(struct bcm_sysport_priv *priv) 1258 1258 { 1259 - unsigned int timeout = 0; 1260 1259 u32 reg; 1261 - int ret = 0; 1262 1260 1263 - umac_writel(priv, 0, UMAC_CMD); 1264 - while (timeout++ < 1000) { 1265 - reg = umac_readl(priv, UMAC_CMD); 1266 - if (!(reg & CMD_SW_RESET)) 1267 - break; 1268 - 1269 - udelay(1); 1270 - } 1271 - 1272 - if (timeout == 1000) { 1273 - dev_err(&priv->pdev->dev, 1274 - "timeout waiting for MAC to come out of reset\n"); 1275 - ret = -ETIMEDOUT; 1276 - } 1277 - 1278 - return ret; 1261 + reg = umac_readl(priv, UMAC_CMD); 1262 + reg |= CMD_SW_RESET; 1263 + umac_writel(priv, reg, UMAC_CMD); 1264 + udelay(10); 1265 + reg = umac_readl(priv, UMAC_CMD); 1266 + reg &= ~CMD_SW_RESET; 1267 + umac_writel(priv, reg, UMAC_CMD); 1279 1268 } 1280 1269 1281 1270 static void umac_set_hw_addr(struct bcm_sysport_priv *priv, ··· 1292 1303 int ret; 1293 1304 1294 1305 /* Reset UniMAC */ 1295 - ret = umac_reset(priv); 1296 - if (ret) { 1297 - netdev_err(dev, "UniMAC reset failed\n"); 1298 - return ret; 1299 - } 1306 + umac_reset(priv); 1300 1307 1301 1308 /* Flush TX and RX FIFOs at TOPCTRL level */ 1302 1309 topctrl_flush(priv); ··· 1573 1588 /* Set the needed headroom once and for all */ 1574 1589 BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8); 1575 1590 dev->needed_headroom += sizeof(struct bcm_tsb); 1576 - 1577 - /* We are interfaced to a switch which handles the multicast 1578 - * 
filtering for us, so we do not support programming any 1579 - * multicast hash table in this Ethernet MAC. 1580 - */ 1581 - dev->flags &= ~IFF_MULTICAST; 1582 1591 1583 1592 /* libphy will adjust the link state accordingly */ 1584 1593 netif_carrier_off(dev);
+2 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 797 797 798 798 return; 799 799 } 800 - bnx2x_frag_free(fp, new_data); 800 + if (new_data) 801 + bnx2x_frag_free(fp, new_data); 801 802 drop: 802 803 /* drop the packet and keep the buffer in the bin */ 803 804 DP(NETIF_MSG_RX_STATUS,
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12937 12937 * without the default SB. 12938 12938 * For VFs there is no default SB, then we return (index+1). 12939 12939 */ 12940 - pci_read_config_word(pdev, pdev->msix_cap + PCI_MSI_FLAGS, &control); 12940 + pci_read_config_word(pdev, pdev->msix_cap + PCI_MSIX_FLAGS, &control); 12941 12941 12942 12942 index = control & PCI_MSIX_FLAGS_QSIZE; 12943 12943
+6 -10
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1408 1408 if (cb->skb) 1409 1409 continue; 1410 1410 1411 - /* set the DMA descriptor length once and for all 1412 - * it will only change if we support dynamically sizing 1413 - * priv->rx_buf_len, but we do not 1414 - */ 1415 - dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr, 1416 - priv->rx_buf_len << DMA_BUFLENGTH_SHIFT); 1417 - 1418 1411 ret = bcmgenet_rx_refill(priv, cb); 1419 1412 if (ret) 1420 1413 break; ··· 2528 2535 netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1); 2529 2536 netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1); 2530 2537 2531 - err = register_netdev(dev); 2532 - if (err) 2533 - goto err_clk_disable; 2538 + /* libphy will determine the link state */ 2539 + netif_carrier_off(dev); 2534 2540 2535 2541 /* Turn off the main clock, WOL clock is handled separately */ 2536 2542 if (!IS_ERR(priv->clk)) 2537 2543 clk_disable_unprepare(priv->clk); 2544 + 2545 + err = register_netdev(dev); 2546 + if (err) 2547 + goto err; 2538 2548 2539 2549 return err; 2540 2550
+1 -1
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 331 331 #define EXT_ENERGY_DET_MASK (1 << 12) 332 332 333 333 #define EXT_RGMII_OOB_CTRL 0x0C 334 - #define RGMII_MODE_EN (1 << 0) 335 334 #define RGMII_LINK (1 << 4) 336 335 #define OOB_DISABLE (1 << 5) 336 + #define RGMII_MODE_EN (1 << 6) 337 337 #define ID_MODE_DIS (1 << 16) 338 338 339 339 #define EXT_GPHY_CTRL 0x1C
+1 -1
drivers/net/ethernet/emulex/benet/be_main.c
··· 2902 2902 for_all_evt_queues(adapter, eqo, i) { 2903 2903 napi_enable(&eqo->napi); 2904 2904 be_enable_busy_poll(eqo); 2905 - be_eq_notify(adapter, eqo->q.id, true, false, 0); 2905 + be_eq_notify(adapter, eqo->q.id, true, true, 0); 2906 2906 } 2907 2907 adapter->flags |= BE_FLAGS_NAPI_ENABLED; 2908 2908
+2 -2
drivers/net/ethernet/freescale/ucc_geth.c
··· 2990 2990 if (ug_info->rxExtendedFiltering) { 2991 2991 size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING; 2992 2992 if (ug_info->largestexternallookupkeysize == 2993 - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES) 2993 + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES) 2994 2994 size += 2995 2995 THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_8; 2996 2996 if (ug_info->largestexternallookupkeysize == 2997 - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES) 2997 + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES) 2998 2998 size += 2999 2999 THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_16; 3000 3000 }
+7
drivers/net/ethernet/intel/igb/e1000_82575.c
··· 1481 1481 s32 ret_val; 1482 1482 u16 i, rar_count = mac->rar_entry_count; 1483 1483 1484 + if ((hw->mac.type >= e1000_i210) && 1485 + !(igb_get_flash_presence_i210(hw))) { 1486 + ret_val = igb_pll_workaround_i210(hw); 1487 + if (ret_val) 1488 + return ret_val; 1489 + } 1490 + 1484 1491 /* Initialize identification LED */ 1485 1492 ret_val = igb_id_led_init(hw); 1486 1493 if (ret_val) {
+10 -8
drivers/net/ethernet/intel/igb/e1000_defines.h
··· 46 46 #define E1000_CTRL_EXT_SDP3_DIR 0x00000800 /* SDP3 Data direction */ 47 47 48 48 /* Physical Func Reset Done Indication */ 49 - #define E1000_CTRL_EXT_PFRSTD 0x00004000 50 - #define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 51 - #define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 52 - #define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 53 - #define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 54 - #define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 55 - #define E1000_CTRL_EXT_EIAME 0x01000000 56 - #define E1000_CTRL_EXT_IRCA 0x00000001 49 + #define E1000_CTRL_EXT_PFRSTD 0x00004000 50 + #define E1000_CTRL_EXT_SDLPE 0X00040000 /* SerDes Low Power Enable */ 51 + #define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 52 + #define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 53 + #define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 54 + #define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 55 + #define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 56 + #define E1000_CTRL_EXT_EIAME 0x01000000 57 + #define E1000_CTRL_EXT_IRCA 0x00000001 57 58 /* Interrupt delay cancellation */ 58 59 /* Driver loaded bit for FW */ 59 60 #define E1000_CTRL_EXT_DRV_LOAD 0x10000000 ··· 63 62 /* packet buffer parity error detection enabled */ 64 63 /* descriptor FIFO parity error detection enable */ 65 64 #define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */ 65 + #define E1000_CTRL_EXT_PHYPDEN 0x00100000 66 66 #define E1000_I2CCMD_REG_ADDR_SHIFT 16 67 67 #define E1000_I2CCMD_PHY_ADDR_SHIFT 24 68 68 #define E1000_I2CCMD_OPCODE_READ 0x08000000
+3
drivers/net/ethernet/intel/igb/e1000_hw.h
··· 567 567 /* These functions must be implemented by drivers */ 568 568 s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); 569 569 s32 igb_write_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); 570 + 571 + void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); 572 + void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); 570 573 #endif /* _E1000_HW_H_ */
+66
drivers/net/ethernet/intel/igb/e1000_i210.c
··· 834 834 } 835 835 return ret_val; 836 836 } 837 + 838 + /** 839 + * igb_pll_workaround_i210 840 + * @hw: pointer to the HW structure 841 + * 842 + * Works around an errata in the PLL circuit where it occasionally 843 + * provides the wrong clock frequency after power up. 844 + **/ 845 + s32 igb_pll_workaround_i210(struct e1000_hw *hw) 846 + { 847 + s32 ret_val; 848 + u32 wuc, mdicnfg, ctrl, ctrl_ext, reg_val; 849 + u16 nvm_word, phy_word, pci_word, tmp_nvm; 850 + int i; 851 + 852 + /* Get and set needed register values */ 853 + wuc = rd32(E1000_WUC); 854 + mdicnfg = rd32(E1000_MDICNFG); 855 + reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO; 856 + wr32(E1000_MDICNFG, reg_val); 857 + 858 + /* Get data from NVM, or set default */ 859 + ret_val = igb_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD, 860 + &nvm_word); 861 + if (ret_val) 862 + nvm_word = E1000_INVM_DEFAULT_AL; 863 + tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; 864 + for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { 865 + /* check current state directly from internal PHY */ 866 + igb_read_phy_reg_gs40g(hw, (E1000_PHY_PLL_FREQ_PAGE | 867 + E1000_PHY_PLL_FREQ_REG), &phy_word); 868 + if ((phy_word & E1000_PHY_PLL_UNCONF) 869 + != E1000_PHY_PLL_UNCONF) { 870 + ret_val = 0; 871 + break; 872 + } else { 873 + ret_val = -E1000_ERR_PHY; 874 + } 875 + /* directly reset the internal PHY */ 876 + ctrl = rd32(E1000_CTRL); 877 + wr32(E1000_CTRL, ctrl|E1000_CTRL_PHY_RST); 878 + 879 + ctrl_ext = rd32(E1000_CTRL_EXT); 880 + ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE); 881 + wr32(E1000_CTRL_EXT, ctrl_ext); 882 + 883 + wr32(E1000_WUC, 0); 884 + reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16); 885 + wr32(E1000_EEARBC_I210, reg_val); 886 + 887 + igb_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); 888 + pci_word |= E1000_PCI_PMCSR_D3; 889 + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); 890 + usleep_range(1000, 2000); 891 + pci_word &= ~E1000_PCI_PMCSR_D3; 892 + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); 
893 + reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16); 894 + wr32(E1000_EEARBC_I210, reg_val); 895 + 896 + /* restore WUC register */ 897 + wr32(E1000_WUC, wuc); 898 + } 899 + /* restore MDICNFG setting */ 900 + wr32(E1000_MDICNFG, mdicnfg); 901 + return ret_val; 902 + }
+12
drivers/net/ethernet/intel/igb/e1000_i210.h
··· 33 33 s32 igb_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data); 34 34 s32 igb_init_nvm_params_i210(struct e1000_hw *hw); 35 35 bool igb_get_flash_presence_i210(struct e1000_hw *hw); 36 + s32 igb_pll_workaround_i210(struct e1000_hw *hw); 36 37 37 38 #define E1000_STM_OPCODE 0xDB00 38 39 #define E1000_EEPROM_FLASH_SIZE_WORD 0x11 ··· 78 77 #define NVM_INIT_CTRL_4_DEFAULT_I211 0x00C1 79 78 #define NVM_LED_1_CFG_DEFAULT_I211 0x0184 80 79 #define NVM_LED_0_2_CFG_DEFAULT_I211 0x200C 80 + 81 + /* PLL Defines */ 82 + #define E1000_PCI_PMCSR 0x44 83 + #define E1000_PCI_PMCSR_D3 0x03 84 + #define E1000_MAX_PLL_TRIES 5 85 + #define E1000_PHY_PLL_UNCONF 0xFF 86 + #define E1000_PHY_PLL_FREQ_PAGE 0xFC0000 87 + #define E1000_PHY_PLL_FREQ_REG 0x000E 88 + #define E1000_INVM_DEFAULT_AL 0x202F 89 + #define E1000_INVM_AUTOLOAD 0x0A 90 + #define E1000_INVM_PLL_WO_VAL 0x0010 81 91 82 92 #endif
+1
drivers/net/ethernet/intel/igb/e1000_regs.h
··· 66 66 #define E1000_PBA 0x01000 /* Packet Buffer Allocation - RW */ 67 67 #define E1000_PBS 0x01008 /* Packet Buffer Size */ 68 68 #define E1000_EEMNGCTL 0x01010 /* MNG EEprom Control */ 69 + #define E1000_EEARBC_I210 0x12024 /* EEPROM Auto Read Bus Control */ 69 70 #define E1000_EEWR 0x0102C /* EEPROM Write Register - RW */ 70 71 #define E1000_I2CCMD 0x01028 /* SFPI2C Command Register - RW */ 71 72 #define E1000_FRTIMER 0x01048 /* Free Running Timer - RW */
+16
drivers/net/ethernet/intel/igb/igb_main.c
··· 7215 7215 } 7216 7216 } 7217 7217 7218 + void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) 7219 + { 7220 + struct igb_adapter *adapter = hw->back; 7221 + 7222 + pci_read_config_word(adapter->pdev, reg, value); 7223 + } 7224 + 7225 + void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) 7226 + { 7227 + struct igb_adapter *adapter = hw->back; 7228 + 7229 + pci_write_config_word(adapter->pdev, reg, *value); 7230 + } 7231 + 7218 7232 s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value) 7219 7233 { 7220 7234 struct igb_adapter *adapter = hw->back; ··· 7592 7578 7593 7579 if (netif_running(netdev)) 7594 7580 igb_close(netdev); 7581 + else 7582 + igb_reset(adapter); 7595 7583 7596 7584 igb_clear_interrupt_scheme(adapter); 7597 7585
+2 -2
drivers/net/ethernet/marvell/mvneta.c
··· 1207 1207 command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; 1208 1208 command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; 1209 1209 1210 - if (l3_proto == swab16(ETH_P_IP)) 1210 + if (l3_proto == htons(ETH_P_IP)) 1211 1211 command |= MVNETA_TXD_IP_CSUM; 1212 1212 else 1213 1213 command |= MVNETA_TX_L3_IP6; ··· 2529 2529 2530 2530 if (phydev->speed == SPEED_1000) 2531 2531 val |= MVNETA_GMAC_CONFIG_GMII_SPEED; 2532 - else 2532 + else if (phydev->speed == SPEED_100) 2533 2533 val |= MVNETA_GMAC_CONFIG_MII_SPEED; 2534 2534 2535 2535 mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val);
-2
drivers/net/ethernet/mellanox/mlx4/cq.c
··· 294 294 init_completion(&cq->free); 295 295 296 296 cq->irq = priv->eq_table.eq[cq->vector].irq; 297 - cq->irq_affinity_change = false; 298 - 299 297 return 0; 300 298 301 299 err_radix:
+5 -2
drivers/net/ethernet/mellanox/mlx4/en_cq.c
··· 128 128 mlx4_warn(mdev, "Failed assigning an EQ to %s, falling back to legacy EQ's\n", 129 129 name); 130 130 } 131 + 132 + cq->irq_desc = 133 + irq_to_desc(mlx4_eq_get_irq(mdev->dev, 134 + cq->vector)); 131 135 } 132 136 } else { 133 137 cq->vector = (cq->ring + 1 + priv->port) % ··· 191 187 mlx4_en_unmap_buffer(&cq->wqres.buf); 192 188 mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size); 193 189 if (priv->mdev->dev->caps.comp_pool && cq->vector) { 194 - if (!cq->is_tx) 195 - irq_set_affinity_hint(cq->mcq.irq, NULL); 196 190 mlx4_release_eq(priv->mdev->dev, cq->vector); 197 191 } 198 192 cq->vector = 0; ··· 206 204 if (!cq->is_tx) { 207 205 napi_hash_del(&cq->napi); 208 206 synchronize_rcu(); 207 + irq_set_affinity_hint(cq->mcq.irq, NULL); 209 208 } 210 209 netif_napi_del(&cq->napi); 211 210
+7
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 417 417 418 418 coal->tx_coalesce_usecs = priv->tx_usecs; 419 419 coal->tx_max_coalesced_frames = priv->tx_frames; 420 + coal->tx_max_coalesced_frames_irq = priv->tx_work_limit; 421 + 420 422 coal->rx_coalesce_usecs = priv->rx_usecs; 421 423 coal->rx_max_coalesced_frames = priv->rx_frames; 422 424 ··· 428 426 coal->rx_coalesce_usecs_high = priv->rx_usecs_high; 429 427 coal->rate_sample_interval = priv->sample_interval; 430 428 coal->use_adaptive_rx_coalesce = priv->adaptive_rx_coal; 429 + 431 430 return 0; 432 431 } 433 432 ··· 436 433 struct ethtool_coalesce *coal) 437 434 { 438 435 struct mlx4_en_priv *priv = netdev_priv(dev); 436 + 437 + if (!coal->tx_max_coalesced_frames_irq) 438 + return -EINVAL; 439 439 440 440 priv->rx_frames = (coal->rx_max_coalesced_frames == 441 441 MLX4_EN_AUTO_CONF) ? ··· 463 457 priv->rx_usecs_high = coal->rx_coalesce_usecs_high; 464 458 priv->sample_interval = coal->rate_sample_interval; 465 459 priv->adaptive_rx_coal = coal->use_adaptive_rx_coalesce; 460 + priv->tx_work_limit = coal->tx_max_coalesced_frames_irq; 466 461 467 462 return mlx4_en_moderation_update(priv); 468 463 }
+2 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2336 2336 struct mlx4_en_priv *priv = netdev_priv(dev); 2337 2337 __be16 current_port; 2338 2338 2339 - if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS)) 2339 + if (priv->mdev->dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 2340 2340 return; 2341 2341 2342 2342 if (sa_family == AF_INET6) ··· 2473 2473 MLX4_WQE_CTRL_SOLICITED); 2474 2474 priv->num_tx_rings_p_up = mdev->profile.num_tx_rings_p_up; 2475 2475 priv->tx_ring_num = prof->tx_ring_num; 2476 + priv->tx_work_limit = MLX4_EN_DEFAULT_TX_WORK; 2476 2477 2477 2478 priv->tx_ring = kzalloc(sizeof(struct mlx4_en_tx_ring *) * MAX_TX_RINGS, 2478 2479 GFP_KERNEL);
+14 -3
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 40 40 #include <linux/if_ether.h> 41 41 #include <linux/if_vlan.h> 42 42 #include <linux/vmalloc.h> 43 + #include <linux/irq.h> 43 44 44 45 #include "mlx4_en.h" 45 46 ··· 783 782 PKT_HASH_TYPE_L3); 784 783 785 784 skb_record_rx_queue(gro_skb, cq->ring); 785 + skb_mark_napi_id(gro_skb, &cq->napi); 786 786 787 787 if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) { 788 788 timestamp = mlx4_en_get_cqe_ts(cqe); ··· 898 896 899 897 /* If we used up all the quota - we're probably not done yet... */ 900 898 if (done == budget) { 899 + int cpu_curr; 900 + const struct cpumask *aff; 901 + 901 902 INC_PERF_COUNTER(priv->pstats.napi_quota); 902 - if (unlikely(cq->mcq.irq_affinity_change)) { 903 - cq->mcq.irq_affinity_change = false; 903 + 904 + cpu_curr = smp_processor_id(); 905 + aff = irq_desc_get_irq_data(cq->irq_desc)->affinity; 906 + 907 + if (unlikely(!cpumask_test_cpu(cpu_curr, aff))) { 908 + /* Current cpu is not according to smp_irq_affinity - 909 + * probably affinity changed. need to stop this NAPI 910 + * poll, and restart it on the right CPU 911 + */ 904 912 napi_complete(napi); 905 913 mlx4_en_arm_cq(priv, cq); 906 914 return 0; 907 915 } 908 916 } else { 909 917 /* Done for now */ 910 - cq->mcq.irq_affinity_change = false; 911 918 napi_complete(napi); 912 919 mlx4_en_arm_cq(priv, cq); 913 920 }
+13 -21
drivers/net/ethernet/mellanox/mlx4/en_tx.c
··· 351 351 return cnt; 352 352 } 353 353 354 - static int mlx4_en_process_tx_cq(struct net_device *dev, 355 - struct mlx4_en_cq *cq, 356 - int budget) 354 + static bool mlx4_en_process_tx_cq(struct net_device *dev, 355 + struct mlx4_en_cq *cq) 357 356 { 358 357 struct mlx4_en_priv *priv = netdev_priv(dev); 359 358 struct mlx4_cq *mcq = &cq->mcq; ··· 371 372 int factor = priv->cqe_factor; 372 373 u64 timestamp = 0; 373 374 int done = 0; 375 + int budget = priv->tx_work_limit; 374 376 375 377 if (!priv->port_up) 376 - return 0; 378 + return true; 377 379 378 380 index = cons_index & size_mask; 379 381 cqe = &buf[(index << factor) + factor]; ··· 447 447 netif_tx_wake_queue(ring->tx_queue); 448 448 ring->wake_queue++; 449 449 } 450 - return done; 450 + return done < budget; 451 451 } 452 452 453 453 void mlx4_en_tx_irq(struct mlx4_cq *mcq) ··· 467 467 struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi); 468 468 struct net_device *dev = cq->dev; 469 469 struct mlx4_en_priv *priv = netdev_priv(dev); 470 - int done; 470 + int clean_complete; 471 471 472 - done = mlx4_en_process_tx_cq(dev, cq, budget); 472 + clean_complete = mlx4_en_process_tx_cq(dev, cq); 473 + if (!clean_complete) 474 + return budget; 473 475 474 - /* If we used up all the quota - we're probably not done yet... */ 475 - if (done < budget) { 476 - /* Done for now */ 477 - cq->mcq.irq_affinity_change = false; 478 - napi_complete(napi); 479 - mlx4_en_arm_cq(priv, cq); 480 - return done; 481 - } else if (unlikely(cq->mcq.irq_affinity_change)) { 482 - cq->mcq.irq_affinity_change = false; 483 - napi_complete(napi); 484 - mlx4_en_arm_cq(priv, cq); 485 - return 0; 486 - } 487 - return budget; 476 + napi_complete(napi); 477 + mlx4_en_arm_cq(priv, cq); 478 + 479 + return 0; 488 480 } 489 481 490 482 static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
+8 -61
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 53 53 MLX4_EQ_ENTRY_SIZE = 0x20 54 54 }; 55 55 56 - struct mlx4_irq_notify { 57 - void *arg; 58 - struct irq_affinity_notify notify; 59 - }; 60 - 61 56 #define MLX4_EQ_STATUS_OK ( 0 << 28) 62 57 #define MLX4_EQ_STATUS_WRITE_FAIL (10 << 28) 63 58 #define MLX4_EQ_OWNER_SW ( 0 << 24) ··· 1083 1088 iounmap(priv->clr_base); 1084 1089 } 1085 1090 1086 - static void mlx4_irq_notifier_notify(struct irq_affinity_notify *notify, 1087 - const cpumask_t *mask) 1088 - { 1089 - struct mlx4_irq_notify *n = container_of(notify, 1090 - struct mlx4_irq_notify, 1091 - notify); 1092 - struct mlx4_priv *priv = (struct mlx4_priv *)n->arg; 1093 - struct radix_tree_iter iter; 1094 - void **slot; 1095 - 1096 - radix_tree_for_each_slot(slot, &priv->cq_table.tree, &iter, 0) { 1097 - struct mlx4_cq *cq = (struct mlx4_cq *)(*slot); 1098 - 1099 - if (cq->irq == notify->irq) 1100 - cq->irq_affinity_change = true; 1101 - } 1102 - } 1103 - 1104 - static void mlx4_release_irq_notifier(struct kref *ref) 1105 - { 1106 - struct mlx4_irq_notify *n = container_of(ref, struct mlx4_irq_notify, 1107 - notify.kref); 1108 - kfree(n); 1109 - } 1110 - 1111 - static void mlx4_assign_irq_notifier(struct mlx4_priv *priv, 1112 - struct mlx4_dev *dev, int irq) 1113 - { 1114 - struct mlx4_irq_notify *irq_notifier = NULL; 1115 - int err = 0; 1116 - 1117 - irq_notifier = kzalloc(sizeof(*irq_notifier), GFP_KERNEL); 1118 - if (!irq_notifier) { 1119 - mlx4_warn(dev, "Failed to allocate irq notifier. irq %d\n", 1120 - irq); 1121 - return; 1122 - } 1123 - 1124 - irq_notifier->notify.irq = irq; 1125 - irq_notifier->notify.notify = mlx4_irq_notifier_notify; 1126 - irq_notifier->notify.release = mlx4_release_irq_notifier; 1127 - irq_notifier->arg = priv; 1128 - err = irq_set_affinity_notifier(irq, &irq_notifier->notify); 1129 - if (err) { 1130 - kfree(irq_notifier); 1131 - irq_notifier = NULL; 1132 - mlx4_warn(dev, "Failed to set irq notifier. 
irq %d\n", irq); 1133 - } 1134 - } 1135 - 1136 - 1137 1091 int mlx4_alloc_eq_table(struct mlx4_dev *dev) 1138 1092 { 1139 1093 struct mlx4_priv *priv = mlx4_priv(dev); ··· 1353 1409 continue; 1354 1410 /*we dont want to break here*/ 1355 1411 } 1356 - mlx4_assign_irq_notifier(priv, dev, 1357 - priv->eq_table.eq[vec].irq); 1358 1412 1359 1413 eq_set_ci(&priv->eq_table.eq[vec], 1); 1360 1414 } ··· 1369 1427 } 1370 1428 EXPORT_SYMBOL(mlx4_assign_eq); 1371 1429 1430 + int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec) 1431 + { 1432 + struct mlx4_priv *priv = mlx4_priv(dev); 1433 + 1434 + return priv->eq_table.eq[vec].irq; 1435 + } 1436 + EXPORT_SYMBOL(mlx4_eq_get_irq); 1437 + 1372 1438 void mlx4_release_eq(struct mlx4_dev *dev, int vec) 1373 1439 { 1374 1440 struct mlx4_priv *priv = mlx4_priv(dev); ··· 1388 1438 Belonging to a legacy EQ*/ 1389 1439 mutex_lock(&priv->msix_ctl.pool_lock); 1390 1440 if (priv->msix_ctl.pool_bm & 1ULL << i) { 1391 - irq_set_affinity_notifier( 1392 - priv->eq_table.eq[vec].irq, 1393 - NULL); 1394 1441 free_irq(priv->eq_table.eq[vec].irq, 1395 1442 &priv->eq_table.eq[vec]); 1396 1443 priv->msix_ctl.pool_bm &= ~(1ULL << i);
+4
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 126 126 #define MAX_TX_RINGS (MLX4_EN_MAX_TX_RING_P_UP * \ 127 127 MLX4_EN_NUM_UP) 128 128 129 + #define MLX4_EN_DEFAULT_TX_WORK 256 130 + 129 131 /* Target number of packets to coalesce with interrupt moderation */ 130 132 #define MLX4_EN_RX_COAL_TARGET 44 131 133 #define MLX4_EN_RX_COAL_TIME 0x10 ··· 345 343 #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) 346 344 spinlock_t poll_lock; /* protects from LLS/napi conflicts */ 347 345 #endif /* CONFIG_NET_RX_BUSY_POLL */ 346 + struct irq_desc *irq_desc; 348 347 }; 349 348 350 349 struct mlx4_en_port_profile { ··· 545 542 __be32 ctrl_flags; 546 543 u32 flags; 547 544 u8 num_tx_rings_p_up; 545 + u32 tx_work_limit; 548 546 u32 tx_ring_num; 549 547 u32 rx_ring_num; 550 548 u32 rx_skb_size;
+15 -4
drivers/net/ethernet/mellanox/mlx5/core/mr.c
··· 94 94 write_lock_irq(&table->lock); 95 95 err = radix_tree_insert(&table->tree, mlx5_base_mkey(mr->key), mr); 96 96 write_unlock_irq(&table->lock); 97 + if (err) { 98 + mlx5_core_warn(dev, "failed radix tree insert of mr 0x%x, %d\n", 99 + mlx5_base_mkey(mr->key), err); 100 + mlx5_core_destroy_mkey(dev, mr); 101 + } 97 102 98 103 return err; 99 104 } ··· 109 104 struct mlx5_mr_table *table = &dev->priv.mr_table; 110 105 struct mlx5_destroy_mkey_mbox_in in; 111 106 struct mlx5_destroy_mkey_mbox_out out; 107 + struct mlx5_core_mr *deleted_mr; 112 108 unsigned long flags; 113 109 int err; 114 110 115 111 memset(&in, 0, sizeof(in)); 116 112 memset(&out, 0, sizeof(out)); 113 + 114 + write_lock_irqsave(&table->lock, flags); 115 + deleted_mr = radix_tree_delete(&table->tree, mlx5_base_mkey(mr->key)); 116 + write_unlock_irqrestore(&table->lock, flags); 117 + if (!deleted_mr) { 118 + mlx5_core_warn(dev, "failed radix tree delete of mr 0x%x\n", 119 + mlx5_base_mkey(mr->key)); 120 + return -ENOENT; 121 + } 117 122 118 123 in.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_DESTROY_MKEY); 119 124 in.mkey = cpu_to_be32(mlx5_mkey_to_idx(mr->key)); ··· 133 118 134 119 if (out.hdr.status) 135 120 return mlx5_cmd_status_to_err(&out.hdr); 136 - 137 - write_lock_irqsave(&table->lock, flags); 138 - radix_tree_delete(&table->tree, mlx5_base_mkey(mr->key)); 139 - write_unlock_irqrestore(&table->lock, flags); 140 121 141 122 return err; 142 123 }
+25
drivers/net/ethernet/realtek/r8169.c
··· 538 538 MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */ 539 539 LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */ 540 540 Jumbo_En0 = (1 << 2), /* 8168 only. Reserved in the 8168b */ 541 + Rdy_to_L23 = (1 << 1), /* L23 Enable */ 541 542 Beacon_en = (1 << 0), /* 8168 only. Reserved in the 8168b */ 542 543 543 544 /* Config4 register */ ··· 4898 4897 PCI_EXP_LNKCTL_CLKREQ_EN); 4899 4898 } 4900 4899 4900 + static void rtl_pcie_state_l2l3_enable(struct rtl8169_private *tp, bool enable) 4901 + { 4902 + void __iomem *ioaddr = tp->mmio_addr; 4903 + u8 data; 4904 + 4905 + data = RTL_R8(Config3); 4906 + 4907 + if (enable) 4908 + data |= Rdy_to_L23; 4909 + else 4910 + data &= ~Rdy_to_L23; 4911 + 4912 + RTL_W8(Config3, data); 4913 + } 4914 + 4901 4915 #define R8168_CPCMD_QUIRK_MASK (\ 4902 4916 EnableBist | \ 4903 4917 Mac_dbgo_oe | \ ··· 5262 5246 }; 5263 5247 5264 5248 rtl_hw_start_8168f(tp); 5249 + rtl_pcie_state_l2l3_enable(tp, false); 5265 5250 5266 5251 rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); 5267 5252 ··· 5301 5284 5302 5285 rtl_w1w0_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC); 5303 5286 rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC); 5287 + 5288 + rtl_pcie_state_l2l3_enable(tp, false); 5304 5289 } 5305 5290 5306 5291 static void rtl_hw_start_8168g_2(struct rtl8169_private *tp) ··· 5555 5536 RTL_W8(DLLPR, RTL_R8(DLLPR) | PFM_EN); 5556 5537 5557 5538 rtl_ephy_init(tp, e_info_8105e_1, ARRAY_SIZE(e_info_8105e_1)); 5539 + 5540 + rtl_pcie_state_l2l3_enable(tp, false); 5558 5541 } 5559 5542 5560 5543 static void rtl_hw_start_8105e_2(struct rtl8169_private *tp) ··· 5592 5571 rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); 5593 5572 rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); 5594 5573 rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC); 5574 + 5575 + rtl_pcie_state_l2l3_enable(tp, false); 5595 5576 } 5596 5577 
5597 5578 static void rtl_hw_start_8106(struct rtl8169_private *tp) ··· 5606 5583 RTL_W32(MISC, (RTL_R32(MISC) | DISABLE_LAN_EN) & ~EARLY_TALLY_EN); 5607 5584 RTL_W8(MCU, RTL_R8(MCU) | EN_NDP | EN_OOB_RESET); 5608 5585 RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN); 5586 + 5587 + rtl_pcie_state_l2l3_enable(tp, false); 5609 5588 } 5610 5589 5611 5590 static void rtl_hw_start_8101(struct net_device *dev)
+1 -4
drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
··· 320 320 321 321 static void dwmac1000_ctrl_ane(void __iomem *ioaddr, bool restart) 322 322 { 323 - u32 value; 324 - 325 - value = readl(ioaddr + GMAC_AN_CTRL); 326 323 /* auto negotiation enable and External Loopback enable */ 327 - value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; 324 + u32 value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; 328 325 329 326 if (restart) 330 327 value |= GMAC_AN_CTRL_RAN;
+1 -1
drivers/net/ethernet/stmicro/stmmac/enh_desc.c
··· 145 145 x->rx_msg_type_delay_req++; 146 146 else if (p->des4.erx.msg_type == RDES_EXT_DELAY_RESP) 147 147 x->rx_msg_type_delay_resp++; 148 - else if (p->des4.erx.msg_type == RDES_EXT_DELAY_REQ) 148 + else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_REQ) 149 149 x->rx_msg_type_pdelay_req++; 150 150 else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_RESP) 151 151 x->rx_msg_type_pdelay_resp++;
+7 -10
drivers/net/fddi/defxx.c
··· 291 291 292 292 static int dfx_rcv_init(DFX_board_t *bp, int get_buffers); 293 293 static void dfx_rcv_queue_process(DFX_board_t *bp); 294 + #ifdef DYNAMIC_BUFFERS 294 295 static void dfx_rcv_flush(DFX_board_t *bp); 296 + #else 297 + static inline void dfx_rcv_flush(DFX_board_t *bp) {} 298 + #endif 295 299 296 300 static netdev_tx_t dfx_xmt_queue_pkt(struct sk_buff *skb, 297 301 struct net_device *dev); ··· 2853 2849 * Align an sk_buff to a boundary power of 2 2854 2850 * 2855 2851 */ 2856 - 2852 + #ifdef DYNAMIC_BUFFERS 2857 2853 static void my_skb_align(struct sk_buff *skb, int n) 2858 2854 { 2859 2855 unsigned long x = (unsigned long)skb->data; ··· 2863 2859 2864 2860 skb_reserve(skb, v - x); 2865 2861 } 2866 - 2862 + #endif 2867 2863 2868 2864 /* 2869 2865 * ================ ··· 3078 3074 break; 3079 3075 } 3080 3076 else { 3081 - #ifndef DYNAMIC_BUFFERS 3082 - if (! rx_in_place) 3083 - #endif 3084 - { 3077 + if (!rx_in_place) { 3085 3078 /* Receive buffer allocated, pass receive packet up */ 3086 3079 3087 3080 skb_copy_to_linear_data(skb, ··· 3454 3453 } 3455 3454 3456 3455 } 3457 - #else 3458 - static inline void dfx_rcv_flush( DFX_board_t *bp ) 3459 - { 3460 - } 3461 3456 #endif /* DYNAMIC_BUFFERS */ 3462 3457 3463 3458 /*
+3 -3
drivers/net/phy/dp83640.c
··· 1323 1323 { 1324 1324 struct dp83640_private *dp83640 = phydev->priv; 1325 1325 1326 - if (!dp83640->hwts_rx_en) 1327 - return false; 1328 - 1329 1326 if (is_status_frame(skb, type)) { 1330 1327 decode_status_frame(dp83640, skb); 1331 1328 kfree_skb(skb); 1332 1329 return true; 1333 1330 } 1331 + 1332 + if (!dp83640->hwts_rx_en) 1333 + return false; 1334 1334 1335 1335 SKB_PTP_TYPE(skb) = type; 1336 1336 skb_queue_tail(&dp83640->rx_queue, skb);
+44
drivers/net/phy/mdio_bus.c
··· 187 187 return d ? to_mii_bus(d) : NULL; 188 188 } 189 189 EXPORT_SYMBOL(of_mdio_find_bus); 190 + 191 + /* Walk the list of subnodes of a mdio bus and look for a node that matches the 192 + * phy's address with its 'reg' property. If found, set the of_node pointer for 193 + * the phy. This allows auto-probed phy devices to be supplied with information 194 + * passed in via DT. 195 + */ 196 + static void of_mdiobus_link_phydev(struct mii_bus *mdio, 197 + struct phy_device *phydev) 198 + { 199 + struct device *dev = &phydev->dev; 200 + struct device_node *child; 201 + 202 + if (dev->of_node || !mdio->dev.of_node) 203 + return; 204 + 205 + for_each_available_child_of_node(mdio->dev.of_node, child) { 206 + int addr; 207 + int ret; 208 + 209 + ret = of_property_read_u32(child, "reg", &addr); 210 + if (ret < 0) { 211 + dev_err(dev, "%s has invalid PHY address\n", 212 + child->full_name); 213 + continue; 214 + } 215 + 216 + /* A PHY must have a reg property in the range [0-31] */ 217 + if (addr >= PHY_MAX_ADDR) { 218 + dev_err(dev, "%s PHY address %i is too large\n", 219 + child->full_name, addr); 220 + continue; 221 + } 222 + 223 + if (addr == phydev->addr) { 224 + dev->of_node = child; 225 + return; 226 + } 227 + } 228 + } 229 + #else /* !IS_ENABLED(CONFIG_OF_MDIO) */ 230 + static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, 231 + struct phy_device *phydev) 232 + { 233 + } 190 234 #endif 191 235 192 236 /**
+1 -7
drivers/net/ppp/ppp_generic.c
··· 539 539 { 540 540 struct sock_fprog uprog; 541 541 struct sock_filter *code = NULL; 542 - int len, err; 542 + int len; 543 543 544 544 if (copy_from_user(&uprog, arg, sizeof(uprog))) 545 545 return -EFAULT; ··· 553 553 code = memdup_user(uprog.filter, len); 554 554 if (IS_ERR(code)) 555 555 return PTR_ERR(code); 556 - 557 - err = sk_chk_filter(code, uprog.len); 558 - if (err) { 559 - kfree(code); 560 - return err; 561 - } 562 556 563 557 *p = code; 564 558 return uprog.len;
+1 -1
drivers/net/ppp/pppoe.c
··· 675 675 po->chan.hdrlen = (sizeof(struct pppoe_hdr) + 676 676 dev->hard_header_len); 677 677 678 - po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr); 678 + po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2; 679 679 po->chan.private = sk; 680 680 po->chan.ops = &pppoe_chan_ops; 681 681
+20 -36
drivers/net/usb/hso.c
··· 258 258 * so as not to drop characters on the floor. 259 259 */ 260 260 int curr_rx_urb_idx; 261 - u16 curr_rx_urb_offset; 262 261 u8 rx_urb_filled[MAX_RX_URBS]; 263 262 struct tasklet_struct unthrottle_tasklet; 264 - struct work_struct retry_unthrottle_workqueue; 265 263 }; 266 264 267 265 struct hso_device { ··· 1250 1252 tasklet_hi_schedule(&serial->unthrottle_tasklet); 1251 1253 } 1252 1254 1253 - static void hso_unthrottle_workfunc(struct work_struct *work) 1254 - { 1255 - struct hso_serial *serial = 1256 - container_of(work, struct hso_serial, 1257 - retry_unthrottle_workqueue); 1258 - hso_unthrottle_tasklet(serial); 1259 - } 1260 - 1261 1255 /* open the requested serial port */ 1262 1256 static int hso_serial_open(struct tty_struct *tty, struct file *filp) 1263 1257 { ··· 1285 1295 tasklet_init(&serial->unthrottle_tasklet, 1286 1296 (void (*)(unsigned long))hso_unthrottle_tasklet, 1287 1297 (unsigned long)serial); 1288 - INIT_WORK(&serial->retry_unthrottle_workqueue, 1289 - hso_unthrottle_workfunc); 1290 1298 result = hso_start_serial_device(serial->parent, GFP_KERNEL); 1291 1299 if (result) { 1292 1300 hso_stop_serial_device(serial->parent); ··· 1333 1345 if (!usb_gone) 1334 1346 hso_stop_serial_device(serial->parent); 1335 1347 tasklet_kill(&serial->unthrottle_tasklet); 1336 - cancel_work_sync(&serial->retry_unthrottle_workqueue); 1337 1348 } 1338 1349 1339 1350 if (!usb_gone) ··· 2000 2013 static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial) 2001 2014 { 2002 2015 struct tty_struct *tty; 2003 - int write_length_remaining = 0; 2004 - int curr_write_len; 2016 + int count; 2005 2017 2006 2018 /* Sanity check */ 2007 2019 if (urb == NULL || serial == NULL) { ··· 2010 2024 2011 2025 tty = tty_port_tty_get(&serial->port); 2012 2026 2013 - /* Push data to tty */ 2014 - write_length_remaining = urb->actual_length - 2015 - serial->curr_rx_urb_offset; 2016 - D1("data to push to tty"); 2017 - while (write_length_remaining) { 2018 - if (tty && 
test_bit(TTY_THROTTLED, &tty->flags)) { 2019 - tty_kref_put(tty); 2020 - return -1; 2021 - } 2022 - curr_write_len = tty_insert_flip_string(&serial->port, 2023 - urb->transfer_buffer + serial->curr_rx_urb_offset, 2024 - write_length_remaining); 2025 - serial->curr_rx_urb_offset += curr_write_len; 2026 - write_length_remaining -= curr_write_len; 2027 - tty_flip_buffer_push(&serial->port); 2027 + if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { 2028 + tty_kref_put(tty); 2029 + return -1; 2028 2030 } 2031 + 2032 + /* Push data to tty */ 2033 + D1("data to push to tty"); 2034 + count = tty_buffer_request_room(&serial->port, urb->actual_length); 2035 + if (count >= urb->actual_length) { 2036 + tty_insert_flip_string(&serial->port, urb->transfer_buffer, 2037 + urb->actual_length); 2038 + tty_flip_buffer_push(&serial->port); 2039 + } else { 2040 + dev_warn(&serial->parent->usb->dev, 2041 + "dropping data, %d bytes lost\n", urb->actual_length); 2042 + } 2043 + 2029 2044 tty_kref_put(tty); 2030 2045 2031 - if (write_length_remaining == 0) { 2032 - serial->curr_rx_urb_offset = 0; 2033 - serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; 2034 - } 2035 - return write_length_remaining; 2046 + serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; 2047 + 2048 + return 0; 2036 2049 } 2037 2050 2038 2051 ··· 2202 2217 } 2203 2218 } 2204 2219 serial->curr_rx_urb_idx = 0; 2205 - serial->curr_rx_urb_offset = 0; 2206 2220 2207 2221 if (serial->tx_urb) 2208 2222 usb_kill_urb(serial->tx_urb);
+1
drivers/net/usb/qmi_wwan.c
··· 741 741 {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, 742 742 {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, 743 743 {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ 744 + {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ 744 745 {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ 745 746 {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ 746 747 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */
+6 -1
drivers/net/usb/r8152.c
··· 1359 1359 struct sk_buff_head seg_list; 1360 1360 struct sk_buff *segs, *nskb; 1361 1361 1362 - features &= ~(NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO); 1362 + features &= ~(NETIF_F_SG | NETIF_F_IPV6_CSUM | NETIF_F_TSO6); 1363 1363 segs = skb_gso_segment(skb, features); 1364 1364 if (IS_ERR(segs) || !segs) 1365 1365 goto drop; ··· 3204 3204 struct r8152 *tp = netdev_priv(dev); 3205 3205 struct tally_counter tally; 3206 3206 3207 + if (usb_autopm_get_interface(tp->intf) < 0) 3208 + return; 3209 + 3207 3210 generic_ocp_read(tp, PLA_TALLYCNT, sizeof(tally), &tally, MCU_TYPE_PLA); 3211 + 3212 + usb_autopm_put_interface(tp->intf); 3208 3213 3209 3214 data[0] = le64_to_cpu(tally.tx_packets); 3210 3215 data[1] = le64_to_cpu(tally.rx_packets);
+13 -1
drivers/net/usb/smsc95xx.c
··· 1714 1714 return ret; 1715 1715 } 1716 1716 1717 + static int smsc95xx_reset_resume(struct usb_interface *intf) 1718 + { 1719 + struct usbnet *dev = usb_get_intfdata(intf); 1720 + int ret; 1721 + 1722 + ret = smsc95xx_reset(dev); 1723 + if (ret < 0) 1724 + return ret; 1725 + 1726 + return smsc95xx_resume(intf); 1727 + } 1728 + 1717 1729 static void smsc95xx_rx_csum_offload(struct sk_buff *skb) 1718 1730 { 1719 1731 skb->csum = *(u16 *)(skb_tail_pointer(skb) - 2); ··· 2016 2004 .probe = usbnet_probe, 2017 2005 .suspend = smsc95xx_suspend, 2018 2006 .resume = smsc95xx_resume, 2019 - .reset_resume = smsc95xx_resume, 2007 + .reset_resume = smsc95xx_reset_resume, 2020 2008 .disconnect = usbnet_disconnect, 2021 2009 .disable_hub_initiated_lpm = 1, 2022 2010 .supports_autosuspend = 1,
+58 -54
drivers/net/wan/farsync.c
··· 2363 2363 "FarSync TE1" 2364 2364 }; 2365 2365 2366 - static void 2366 + static int 2367 2367 fst_init_card(struct fst_card_info *card) 2368 2368 { 2369 2369 int i; ··· 2374 2374 * we'll have to revise it in some way then. 2375 2375 */ 2376 2376 for (i = 0; i < card->nports; i++) { 2377 - err = register_hdlc_device(card->ports[i].dev); 2378 - if (err < 0) { 2379 - int j; 2377 + err = register_hdlc_device(card->ports[i].dev); 2378 + if (err < 0) { 2380 2379 pr_err("Cannot register HDLC device for port %d (errno %d)\n", 2381 - i, -err); 2382 - for (j = i; j < card->nports; j++) { 2383 - free_netdev(card->ports[j].dev); 2384 - card->ports[j].dev = NULL; 2385 - } 2386 - card->nports = i; 2387 - break; 2388 - } 2380 + i, -err); 2381 + while (i--) 2382 + unregister_hdlc_device(card->ports[i].dev); 2383 + return err; 2384 + } 2389 2385 } 2390 2386 2391 2387 pr_info("%s-%s: %s IRQ%d, %d ports\n", 2392 2388 port_to_dev(&card->ports[0])->name, 2393 2389 port_to_dev(&card->ports[card->nports - 1])->name, 2394 2390 type_strings[card->type], card->irq, card->nports); 2391 + return 0; 2395 2392 } 2396 2393 2397 2394 static const struct net_device_ops fst_ops = { ··· 2444 2447 /* Try to enable the device */ 2445 2448 if ((err = pci_enable_device(pdev)) != 0) { 2446 2449 pr_err("Failed to enable card. Err %d\n", -err); 2447 - kfree(card); 2448 - return err; 2450 + goto enable_fail; 2449 2451 } 2450 2452 2451 2453 if ((err = pci_request_regions(pdev, "FarSync")) !=0) { 2452 2454 pr_err("Failed to allocate regions. 
Err %d\n", -err); 2453 - pci_disable_device(pdev); 2454 - kfree(card); 2455 - return err; 2455 + goto regions_fail; 2456 2456 } 2457 2457 2458 2458 /* Get virtual addresses of memory regions */ ··· 2458 2464 card->phys_ctlmem = pci_resource_start(pdev, 3); 2459 2465 if ((card->mem = ioremap(card->phys_mem, FST_MEMSIZE)) == NULL) { 2460 2466 pr_err("Physical memory remap failed\n"); 2461 - pci_release_regions(pdev); 2462 - pci_disable_device(pdev); 2463 - kfree(card); 2464 - return -ENODEV; 2467 + err = -ENODEV; 2468 + goto ioremap_physmem_fail; 2465 2469 } 2466 2470 if ((card->ctlmem = ioremap(card->phys_ctlmem, 0x10)) == NULL) { 2467 2471 pr_err("Control memory remap failed\n"); 2468 - pci_release_regions(pdev); 2469 - pci_disable_device(pdev); 2470 - iounmap(card->mem); 2471 - kfree(card); 2472 - return -ENODEV; 2472 + err = -ENODEV; 2473 + goto ioremap_ctlmem_fail; 2473 2474 } 2474 2475 dbg(DBG_PCI, "kernel mem %p, ctlmem %p\n", card->mem, card->ctlmem); 2475 2476 2476 2477 /* Register the interrupt handler */ 2477 2478 if (request_irq(pdev->irq, fst_intr, IRQF_SHARED, FST_DEV_NAME, card)) { 2478 2479 pr_err("Unable to register interrupt %d\n", card->irq); 2479 - pci_release_regions(pdev); 2480 - pci_disable_device(pdev); 2481 - iounmap(card->ctlmem); 2482 - iounmap(card->mem); 2483 - kfree(card); 2484 - return -ENODEV; 2480 + err = -ENODEV; 2481 + goto irq_fail; 2485 2482 } 2486 2483 2487 2484 /* Record info we need */ ··· 2498 2513 while (i--) 2499 2514 free_netdev(card->ports[i].dev); 2500 2515 pr_err("FarSync: out of memory\n"); 2501 - free_irq(card->irq, card); 2502 - pci_release_regions(pdev); 2503 - pci_disable_device(pdev); 2504 - iounmap(card->ctlmem); 2505 - iounmap(card->mem); 2506 - kfree(card); 2507 - return -ENODEV; 2516 + err = -ENOMEM; 2517 + goto hdlcdev_fail; 2508 2518 } 2509 2519 card->ports[i].dev = dev; 2510 2520 card->ports[i].card = card; ··· 2545 2565 pci_set_drvdata(pdev, card); 2546 2566 2547 2567 /* Remainder of card setup */ 2568 + if 
(no_of_cards_added >= FST_MAX_CARDS) { 2569 + pr_err("FarSync: too many cards\n"); 2570 + err = -ENOMEM; 2571 + goto card_array_fail; 2572 + } 2548 2573 fst_card_array[no_of_cards_added] = card; 2549 2574 card->card_no = no_of_cards_added++; /* Record instance and bump it */ 2550 - fst_init_card(card); 2575 + err = fst_init_card(card); 2576 + if (err) 2577 + goto init_card_fail; 2551 2578 if (card->family == FST_FAMILY_TXU) { 2552 2579 /* 2553 2580 * Allocate a dma buffer for transmit and receives ··· 2564 2577 &card->rx_dma_handle_card); 2565 2578 if (card->rx_dma_handle_host == NULL) { 2566 2579 pr_err("Could not allocate rx dma buffer\n"); 2567 - fst_disable_intr(card); 2568 - pci_release_regions(pdev); 2569 - pci_disable_device(pdev); 2570 - iounmap(card->ctlmem); 2571 - iounmap(card->mem); 2572 - kfree(card); 2573 - return -ENOMEM; 2580 + err = -ENOMEM; 2581 + goto rx_dma_fail; 2574 2582 } 2575 2583 card->tx_dma_handle_host = 2576 2584 pci_alloc_consistent(card->device, FST_MAX_MTU, 2577 2585 &card->tx_dma_handle_card); 2578 2586 if (card->tx_dma_handle_host == NULL) { 2579 2587 pr_err("Could not allocate tx dma buffer\n"); 2580 - fst_disable_intr(card); 2581 - pci_release_regions(pdev); 2582 - pci_disable_device(pdev); 2583 - iounmap(card->ctlmem); 2584 - iounmap(card->mem); 2585 - kfree(card); 2586 - return -ENOMEM; 2588 + err = -ENOMEM; 2589 + goto tx_dma_fail; 2587 2590 } 2588 2591 } 2589 2592 return 0; /* Success */ 2593 + 2594 + tx_dma_fail: 2595 + pci_free_consistent(card->device, FST_MAX_MTU, 2596 + card->rx_dma_handle_host, 2597 + card->rx_dma_handle_card); 2598 + rx_dma_fail: 2599 + fst_disable_intr(card); 2600 + for (i = 0 ; i < card->nports ; i++) 2601 + unregister_hdlc_device(card->ports[i].dev); 2602 + init_card_fail: 2603 + fst_card_array[card->card_no] = NULL; 2604 + card_array_fail: 2605 + for (i = 0 ; i < card->nports ; i++) 2606 + free_netdev(card->ports[i].dev); 2607 + hdlcdev_fail: 2608 + free_irq(card->irq, card); 2609 + irq_fail: 2610 + 
iounmap(card->ctlmem); 2611 + ioremap_ctlmem_fail: 2612 + iounmap(card->mem); 2613 + ioremap_physmem_fail: 2614 + pci_release_regions(pdev); 2615 + regions_fail: 2616 + pci_disable_device(pdev); 2617 + enable_fail: 2618 + kfree(card); 2619 + return err; 2590 2620 } 2591 2621 2592 2622 /*
+5 -1
drivers/net/wireless/ath/ath10k/core.c
··· 795 795 if (status) 796 796 goto err_htc_stop; 797 797 798 - ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; 798 + if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) 799 + ar->free_vdev_map = (1 << TARGET_10X_NUM_VDEVS) - 1; 800 + else 801 + ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; 802 + 799 803 INIT_LIST_HEAD(&ar->arvifs); 800 804 801 805 if (!test_bit(ATH10K_FLAG_FIRST_BOOT_DONE, &ar->dev_flags))
-18
drivers/net/wireless/ath/ath10k/htt_rx.c
··· 312 312 int msdu_len, msdu_chaining = 0; 313 313 struct sk_buff *msdu; 314 314 struct htt_rx_desc *rx_desc; 315 - bool corrupted = false; 316 315 317 316 lockdep_assert_held(&htt->rx_ring.lock); 318 317 ··· 438 439 last_msdu = __le32_to_cpu(rx_desc->msdu_end.info0) & 439 440 RX_MSDU_END_INFO0_LAST_MSDU; 440 441 441 - if (msdu_chaining && !last_msdu) 442 - corrupted = true; 443 - 444 442 if (last_msdu) { 445 443 msdu->next = NULL; 446 444 break; ··· 451 455 452 456 if (*head_msdu == NULL) 453 457 msdu_chaining = -1; 454 - 455 - /* 456 - * Apparently FW sometimes reports weird chained MSDU sequences with 457 - * more than one rx descriptor. This seems like a bug but needs more 458 - * analyzing. For the time being fix it by dropping such sequences to 459 - * avoid blowing up the host system. 460 - */ 461 - if (corrupted) { 462 - ath10k_warn("failed to pop chained msdus, dropping\n"); 463 - ath10k_htt_rx_free_msdu_chain(*head_msdu); 464 - *head_msdu = NULL; 465 - *tail_msdu = NULL; 466 - msdu_chaining = -EINVAL; 467 - } 468 458 469 459 /* 470 460 * Don't refill the ring yet.
+3 -2
drivers/net/wireless/brcm80211/brcmfmac/usb.c
··· 1184 1184 bus->bus_priv.usb = bus_pub; 1185 1185 dev_set_drvdata(dev, bus); 1186 1186 bus->ops = &brcmf_usb_bus_ops; 1187 - bus->chip = bus_pub->devid; 1188 - bus->chiprev = bus_pub->chiprev; 1189 1187 bus->proto_type = BRCMF_PROTO_BCDC; 1190 1188 bus->always_use_fws_queue = true; 1191 1189 ··· 1192 1194 if (ret) 1193 1195 goto fail; 1194 1196 } 1197 + bus->chip = bus_pub->devid; 1198 + bus->chiprev = bus_pub->chiprev; 1199 + 1195 1200 /* request firmware here */ 1196 1201 brcmf_fw_get_firmwares(dev, 0, brcmf_usb_get_fwname(devinfo), NULL, 1197 1202 brcmf_usb_probe_phase2);
-12
drivers/net/wireless/iwlwifi/dvm/rxon.c
··· 1068 1068 /* recalculate basic rates */ 1069 1069 iwl_calc_basic_rates(priv, ctx); 1070 1070 1071 - /* 1072 - * force CTS-to-self frames protection if RTS-CTS is not preferred 1073 - * one aggregation protection method 1074 - */ 1075 - if (!priv->hw_params.use_rts_for_aggregation) 1076 - ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; 1077 - 1078 1071 if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) || 1079 1072 !(ctx->staging.flags & RXON_FLG_BAND_24G_MSK)) 1080 1073 ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK; ··· 1472 1479 ctx->staging.flags |= RXON_FLG_TGG_PROTECT_MSK; 1473 1480 else 1474 1481 ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK; 1475 - 1476 - if (bss_conf->use_cts_prot) 1477 - ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; 1478 - else 1479 - ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN; 1480 1482 1481 1483 memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN); 1482 1484
+1
drivers/net/wireless/iwlwifi/iwl-fw.h
··· 88 88 * P2P client interfaces simultaneously if they are in different bindings. 89 89 * @IWL_UCODE_TLV_FLAGS_P2P_BSS_PS_SCM: support power save on BSS station and 90 90 * P2P client interfaces simultaneously if they are in same bindings. 91 + * @IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT: General support for uAPSD 91 92 * @IWL_UCODE_TLV_FLAGS_P2P_PS_UAPSD: P2P client supports uAPSD power save 92 93 * @IWL_UCODE_TLV_FLAGS_BCAST_FILTERING: uCode supports broadcast filtering. 93 94 * @IWL_UCODE_TLV_FLAGS_GO_UAPSD: AP/GO interfaces support uAPSD clients
+2 -3
drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c
··· 667 667 if (vif->bss_conf.qos) 668 668 cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA); 669 669 670 - if (vif->bss_conf.use_cts_prot) { 670 + if (vif->bss_conf.use_cts_prot) 671 671 cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT); 672 - cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_SELF_CTS_EN); 673 - } 672 + 674 673 IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n", 675 674 vif->bss_conf.use_cts_prot, 676 675 vif->bss_conf.ht_operation_mode);
+13 -6
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 303 303 hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP; 304 304 } 305 305 306 + if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT && 307 + !iwlwifi_mod_params.uapsd_disable) { 308 + hw->flags |= IEEE80211_HW_SUPPORTS_UAPSD; 309 + hw->uapsd_queues = IWL_UAPSD_AC_INFO; 310 + hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP; 311 + } 312 + 306 313 hw->sta_data_size = sizeof(struct iwl_mvm_sta); 307 314 hw->vif_data_size = sizeof(struct iwl_mvm_vif); 308 315 hw->chanctx_data_size = sizeof(u16); ··· 1166 1159 1167 1160 bcast_mac = &cmd->macs[mvmvif->id]; 1168 1161 1169 - /* enable filtering only for associated stations */ 1170 - if (vif->type != NL80211_IFTYPE_STATION || !vif->bss_conf.assoc) 1162 + /* 1163 + * enable filtering only for associated stations, but not for P2P 1164 + * Clients 1165 + */ 1166 + if (vif->type != NL80211_IFTYPE_STATION || vif->p2p || 1167 + !vif->bss_conf.assoc) 1171 1168 return; 1172 1169 1173 1170 bcast_mac->default_discard = 1; ··· 1246 1235 struct iwl_bcast_filter_cmd cmd; 1247 1236 1248 1237 if (!(mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BCAST_FILTERING)) 1249 - return 0; 1250 - 1251 - /* bcast filtering isn't supported for P2P client */ 1252 - if (vif->p2p) 1253 1238 return 0; 1254 1239 1255 1240 if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd))
+18 -45
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 588 588 struct iwl_scan_offload_cmd *scan, 589 589 struct iwl_mvm_scan_params *params) 590 590 { 591 - scan->channel_count = 592 - mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels + 593 - mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels; 591 + scan->channel_count = req->n_channels; 594 592 scan->quiet_time = cpu_to_le16(IWL_ACTIVE_QUIET_TIME); 595 593 scan->quiet_plcp_th = cpu_to_le16(IWL_PLCP_QUIET_THRESH); 596 594 scan->good_CRC_th = IWL_GOOD_CRC_TH_DEFAULT; ··· 667 669 struct cfg80211_sched_scan_request *req, 668 670 struct iwl_scan_channel_cfg *channels, 669 671 enum ieee80211_band band, 670 - int *head, int *tail, 672 + int *head, 671 673 u32 ssid_bitmap, 672 674 struct iwl_mvm_scan_params *params) 673 675 { 674 - struct ieee80211_supported_band *s_band; 675 - int n_channels = req->n_channels; 676 - int i, j, index = 0; 677 - bool partial; 676 + int i, index = 0; 678 677 679 - /* 680 - * We have to configure all supported channels, even if we don't want to 681 - * scan on them, but we have to send channels in the order that we want 682 - * to scan. So add requested channels to head of the list and others to 683 - * the end. 684 - */ 685 - s_band = &mvm->nvm_data->bands[band]; 678 + for (i = 0; i < req->n_channels; i++) { 679 + struct ieee80211_channel *chan = req->channels[i]; 686 680 687 - for (i = 0; i < s_band->n_channels && *head <= *tail; i++) { 688 - partial = false; 689 - for (j = 0; j < n_channels; j++) 690 - if (s_band->channels[i].center_freq == 691 - req->channels[j]->center_freq) { 692 - index = *head; 693 - (*head)++; 694 - /* 695 - * Channels that came with the request will be 696 - * in partial scan . 
697 - */ 698 - partial = true; 699 - break; 700 - } 701 - if (!partial) { 702 - index = *tail; 703 - (*tail)--; 704 - } 705 - channels->channel_number[index] = 706 - cpu_to_le16(ieee80211_frequency_to_channel( 707 - s_band->channels[i].center_freq)); 681 + if (chan->band != band) 682 + continue; 683 + 684 + index = *head; 685 + (*head)++; 686 + 687 + channels->channel_number[index] = cpu_to_le16(chan->hw_value); 708 688 channels->dwell_time[index][0] = params->dwell[band].active; 709 689 channels->dwell_time[index][1] = params->dwell[band].passive; 710 690 711 691 channels->iter_count[index] = cpu_to_le16(1); 712 692 channels->iter_interval[index] = 0; 713 693 714 - if (!(s_band->channels[i].flags & IEEE80211_CHAN_NO_IR)) 694 + if (!(chan->flags & IEEE80211_CHAN_NO_IR)) 715 695 channels->type[index] |= 716 696 cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_ACTIVE); 717 697 718 698 channels->type[index] |= 719 - cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL); 720 - if (partial) 721 - channels->type[index] |= 722 - cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL); 699 + cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL | 700 + IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL); 723 701 724 - if (s_band->channels[i].flags & IEEE80211_CHAN_NO_HT40) 702 + if (chan->flags & IEEE80211_CHAN_NO_HT40) 725 703 channels->type[index] |= 726 704 cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_NARROW); 727 705 ··· 714 740 int band_2ghz = mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels; 715 741 int band_5ghz = mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels; 716 742 int head = 0; 717 - int tail = band_2ghz + band_5ghz - 1; 718 743 u32 ssid_bitmap; 719 744 int cmd_len; 720 745 int ret; ··· 745 772 &scan_cfg->scan_cmd.tx_cmd[0], 746 773 scan_cfg->data); 747 774 iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg, 748 - IEEE80211_BAND_2GHZ, &head, &tail, 775 + IEEE80211_BAND_2GHZ, &head, 749 776 ssid_bitmap, &params); 750 777 } 751 778 if (band_5ghz) { ··· 755 782 scan_cfg->data + 756 783 SCAN_OFFLOAD_PROBE_REQ_SIZE); 757 
784 iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg, 758 - IEEE80211_BAND_5GHZ, &head, &tail, 785 + IEEE80211_BAND_5GHZ, &head, 759 786 ssid_bitmap, &params); 760 787 } 761 788
+2 -1
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 367 367 {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, 368 368 {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)}, 369 369 {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, 370 + {IWL_PCI_DEVICE(0x095A, 0x5510, iwl7265_2ac_cfg)}, 370 371 {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, 371 372 {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, 372 373 {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, ··· 381 380 {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, 382 381 {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, 383 382 {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, 384 - {IWL_PCI_DEVICE(0x095A, 0x9200, iwl7265_2ac_cfg)}, 383 + {IWL_PCI_DEVICE(0x095B, 0x9200, iwl7265_2ac_cfg)}, 385 384 {IWL_PCI_DEVICE(0x095A, 0x9510, iwl7265_2ac_cfg)}, 386 385 {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)}, 387 386 {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)},
+1
drivers/net/wireless/mwifiex/11n_aggr.c
··· 185 185 skb_reserve(skb_aggr, headroom + sizeof(struct txpd)); 186 186 tx_info_aggr = MWIFIEX_SKB_TXCB(skb_aggr); 187 187 188 + memset(tx_info_aggr, 0, sizeof(*tx_info_aggr)); 188 189 tx_info_aggr->bss_type = tx_info_src->bss_type; 189 190 tx_info_aggr->bss_num = tx_info_src->bss_num; 190 191
+1
drivers/net/wireless/mwifiex/cfg80211.c
··· 220 220 } 221 221 222 222 tx_info = MWIFIEX_SKB_TXCB(skb); 223 + memset(tx_info, 0, sizeof(*tx_info)); 223 224 tx_info->bss_num = priv->bss_num; 224 225 tx_info->bss_type = priv->bss_type; 225 226 tx_info->pkt_len = pkt_len;
+1
drivers/net/wireless/mwifiex/cmdevt.c
··· 453 453 454 454 if (skb) { 455 455 rx_info = MWIFIEX_SKB_RXCB(skb); 456 + memset(rx_info, 0, sizeof(*rx_info)); 456 457 rx_info->bss_num = priv->bss_num; 457 458 rx_info->bss_type = priv->bss_type; 458 459 }
+1
drivers/net/wireless/mwifiex/main.c
··· 645 645 } 646 646 647 647 tx_info = MWIFIEX_SKB_TXCB(skb); 648 + memset(tx_info, 0, sizeof(*tx_info)); 648 649 tx_info->bss_num = priv->bss_num; 649 650 tx_info->bss_type = priv->bss_type; 650 651 tx_info->pkt_len = skb->len;
+1
drivers/net/wireless/mwifiex/sta_tx.c
··· 150 150 return -1; 151 151 152 152 tx_info = MWIFIEX_SKB_TXCB(skb); 153 + memset(tx_info, 0, sizeof(*tx_info)); 153 154 tx_info->bss_num = priv->bss_num; 154 155 tx_info->bss_type = priv->bss_type; 155 156 tx_info->pkt_len = data_len - (sizeof(struct txpd) + INTF_HEADER_LEN);
+2
drivers/net/wireless/mwifiex/tdls.c
··· 605 605 } 606 606 607 607 tx_info = MWIFIEX_SKB_TXCB(skb); 608 + memset(tx_info, 0, sizeof(*tx_info)); 608 609 tx_info->bss_num = priv->bss_num; 609 610 tx_info->bss_type = priv->bss_type; 610 611 ··· 761 760 skb->priority = MWIFIEX_PRIO_VI; 762 761 763 762 tx_info = MWIFIEX_SKB_TXCB(skb); 763 + memset(tx_info, 0, sizeof(*tx_info)); 764 764 tx_info->bss_num = priv->bss_num; 765 765 tx_info->bss_type = priv->bss_type; 766 766 tx_info->flags |= MWIFIEX_BUF_FLAG_TDLS_PKT;
+1
drivers/net/wireless/mwifiex/txrx.c
··· 55 55 return -1; 56 56 } 57 57 58 + memset(rx_info, 0, sizeof(*rx_info)); 58 59 rx_info->bss_num = priv->bss_num; 59 60 rx_info->bss_type = priv->bss_type; 60 61
+1
drivers/net/wireless/mwifiex/uap_txrx.c
··· 175 175 } 176 176 177 177 tx_info = MWIFIEX_SKB_TXCB(skb); 178 + memset(tx_info, 0, sizeof(*tx_info)); 178 179 tx_info->bss_num = priv->bss_num; 179 180 tx_info->bss_type = priv->bss_type; 180 181 tx_info->flags |= MWIFIEX_BUF_FLAG_BRIDGED_PKT;
+22 -6
drivers/net/wireless/rt2x00/rt2800usb.c
··· 231 231 */ 232 232 static int rt2800usb_autorun_detect(struct rt2x00_dev *rt2x00dev) 233 233 { 234 - __le32 reg; 234 + __le32 *reg; 235 235 u32 fw_mode; 236 236 237 + reg = kmalloc(sizeof(*reg), GFP_KERNEL); 238 + if (reg == NULL) 239 + return -ENOMEM; 237 240 /* cannot use rt2x00usb_register_read here as it uses different 238 241 * mode (MULTI_READ vs. DEVICE_MODE) and does not pass the 239 242 * magic value USB_MODE_AUTORUN (0x11) to the device, thus the ··· 244 241 */ 245 242 rt2x00usb_vendor_request(rt2x00dev, USB_DEVICE_MODE, 246 243 USB_VENDOR_REQUEST_IN, 0, USB_MODE_AUTORUN, 247 - &reg, sizeof(reg), REGISTER_TIMEOUT_FIRMWARE); 248 - fw_mode = le32_to_cpu(reg); 244 + reg, sizeof(*reg), REGISTER_TIMEOUT_FIRMWARE); 245 + fw_mode = le32_to_cpu(*reg); 246 + kfree(reg); 249 247 250 248 if ((fw_mode & 0x00000003) == 2) 251 249 return 1; ··· 265 261 int status; 266 262 u32 offset; 267 263 u32 length; 264 + int retval; 268 265 269 266 /* 270 267 * Check which section of the firmware we need. ··· 283 278 /* 284 279 * Write firmware to device. 
285 280 */ 286 - if (rt2800usb_autorun_detect(rt2x00dev)) { 281 + retval = rt2800usb_autorun_detect(rt2x00dev); 282 + if (retval < 0) 283 + return retval; 284 + if (retval) { 287 285 rt2x00_info(rt2x00dev, 288 286 "Firmware loading not required - NIC in AutoRun mode\n"); 289 287 } else { ··· 771 763 */ 772 764 static int rt2800usb_efuse_detect(struct rt2x00_dev *rt2x00dev) 773 765 { 774 - if (rt2800usb_autorun_detect(rt2x00dev)) 766 + int retval; 767 + 768 + retval = rt2800usb_autorun_detect(rt2x00dev); 769 + if (retval < 0) 770 + return retval; 771 + if (retval) 775 772 return 1; 776 773 return rt2800_efuse_detect(rt2x00dev); 777 774 } ··· 785 772 { 786 773 int retval; 787 774 788 - if (rt2800usb_efuse_detect(rt2x00dev)) 775 + retval = rt2800usb_efuse_detect(rt2x00dev); 776 + if (retval < 0) 777 + return retval; 778 + if (retval) 789 779 retval = rt2800_read_eeprom_efuse(rt2x00dev); 790 780 else 791 781 retval = rt2x00usb_eeprom_read(rt2x00dev, rt2x00dev->eeprom,
+16 -11
drivers/net/xen-netfront.c
··· 1439 1439 unsigned int i = 0; 1440 1440 unsigned int num_queues = info->netdev->real_num_tx_queues; 1441 1441 1442 + netif_carrier_off(info->netdev); 1443 + 1442 1444 for (i = 0; i < num_queues; ++i) { 1443 1445 struct netfront_queue *queue = &info->queues[i]; 1444 - 1445 - /* Stop old i/f to prevent errors whilst we rebuild the state. */ 1446 - spin_lock_bh(&queue->rx_lock); 1447 - spin_lock_irq(&queue->tx_lock); 1448 - netif_carrier_off(queue->info->netdev); 1449 - spin_unlock_irq(&queue->tx_lock); 1450 - spin_unlock_bh(&queue->rx_lock); 1451 1446 1452 1447 if (queue->tx_irq && (queue->tx_irq == queue->rx_irq)) 1453 1448 unbind_from_irqhandler(queue->tx_irq, queue); ··· 1452 1457 } 1453 1458 queue->tx_evtchn = queue->rx_evtchn = 0; 1454 1459 queue->tx_irq = queue->rx_irq = 0; 1460 + 1461 + napi_synchronize(&queue->napi); 1455 1462 1456 1463 /* End access and free the pages */ 1457 1464 xennet_end_access(queue->tx_ring_ref, queue->tx.sring); ··· 2043 2046 /* By now, the queue structures have been set up */ 2044 2047 for (j = 0; j < num_queues; ++j) { 2045 2048 queue = &np->queues[j]; 2046 - spin_lock_bh(&queue->rx_lock); 2047 - spin_lock_irq(&queue->tx_lock); 2048 2049 2049 2050 /* Step 1: Discard all pending TX packet fragments. */ 2051 + spin_lock_irq(&queue->tx_lock); 2050 2052 xennet_release_tx_bufs(queue); 2053 + spin_unlock_irq(&queue->tx_lock); 2051 2054 2052 2055 /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. 
*/ 2056 + spin_lock_bh(&queue->rx_lock); 2057 + 2053 2058 for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) { 2054 2059 skb_frag_t *frag; 2055 2060 const struct page *page; ··· 2075 2076 } 2076 2077 2077 2078 queue->rx.req_prod_pvt = requeue_idx; 2079 + 2080 + spin_unlock_bh(&queue->rx_lock); 2078 2081 } 2079 2082 2080 2083 /* ··· 2088 2087 netif_carrier_on(np->netdev); 2089 2088 for (j = 0; j < num_queues; ++j) { 2090 2089 queue = &np->queues[j]; 2090 + 2091 2091 notify_remote_via_irq(queue->tx_irq); 2092 2092 if (queue->tx_irq != queue->rx_irq) 2093 2093 notify_remote_via_irq(queue->rx_irq); 2094 - xennet_tx_buf_gc(queue); 2095 - xennet_alloc_rx_buffers(queue); 2096 2094 2095 + spin_lock_irq(&queue->tx_lock); 2096 + xennet_tx_buf_gc(queue); 2097 2097 spin_unlock_irq(&queue->tx_lock); 2098 + 2099 + spin_lock_bh(&queue->rx_lock); 2100 + xennet_alloc_rx_buffers(queue); 2098 2101 spin_unlock_bh(&queue->rx_lock); 2099 2102 } 2100 2103
-34
drivers/of/of_mdio.c
··· 182 182 } 183 183 EXPORT_SYMBOL(of_mdiobus_register); 184 184 185 - /** 186 - * of_mdiobus_link_phydev - Find a device node for a phy 187 - * @mdio: pointer to mii_bus structure 188 - * @phydev: phydev for which the of_node pointer should be set 189 - * 190 - * Walk the list of subnodes of a mdio bus and look for a node that matches the 191 - * phy's address with its 'reg' property. If found, set the of_node pointer for 192 - * the phy. This allows auto-probed pyh devices to be supplied with information 193 - * passed in via DT. 194 - */ 195 - void of_mdiobus_link_phydev(struct mii_bus *mdio, 196 - struct phy_device *phydev) 197 - { 198 - struct device *dev = &phydev->dev; 199 - struct device_node *child; 200 - 201 - if (dev->of_node || !mdio->dev.of_node) 202 - return; 203 - 204 - for_each_available_child_of_node(mdio->dev.of_node, child) { 205 - int addr; 206 - 207 - addr = of_mdio_parse_addr(&mdio->dev, child); 208 - if (addr < 0) 209 - continue; 210 - 211 - if (addr == phydev->addr) { 212 - dev->of_node = child; 213 - return; 214 - } 215 - } 216 - } 217 - EXPORT_SYMBOL(of_mdiobus_link_phydev); 218 - 219 185 /* Helper function for of_phy_find_device */ 220 186 static int of_phy_match(struct device *dev, void *phy_np) 221 187 {
+7 -2
drivers/pci/pci.c
··· 3135 3135 if (probe) 3136 3136 return 0; 3137 3137 3138 - /* Wait for Transaction Pending bit clean */ 3139 - if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP)) 3138 + /* 3139 + * Wait for Transaction Pending bit to clear. A word-aligned test 3140 + * is used, so we use the control offset rather than status and shift 3141 + * the test bit to match. 3142 + */ 3143 + if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL, 3144 + PCI_AF_STATUS_TP << 8)) 3140 3145 goto clear; 3141 3146 3142 3147 dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");
+2
drivers/phy/Kconfig
··· 112 112 config PHY_SUN4I_USB 113 113 tristate "Allwinner sunxi SoC USB PHY driver" 114 114 depends on ARCH_SUNXI && HAS_IOMEM && OF 115 + depends on RESET_CONTROLLER 115 116 select GENERIC_PHY 116 117 help 117 118 Enable this to support the transceiver that is part of Allwinner ··· 123 122 124 123 config PHY_SAMSUNG_USB2 125 124 tristate "Samsung USB 2.0 PHY driver" 125 + depends on HAS_IOMEM 126 126 select GENERIC_PHY 127 127 select MFD_SYSCON 128 128 help
+4 -3
drivers/phy/phy-core.c
··· 614 614 return phy; 615 615 616 616 put_dev: 617 - put_device(&phy->dev); 618 - ida_remove(&phy_ida, phy->id); 617 + put_device(&phy->dev); /* calls phy_release() which frees resources */ 618 + return ERR_PTR(ret); 619 + 619 620 free_phy: 620 621 kfree(phy); 621 622 return ERR_PTR(ret); ··· 800 799 801 800 phy = to_phy(dev); 802 801 dev_vdbg(dev, "releasing '%s'\n", dev_name(dev)); 803 - ida_remove(&phy_ida, phy->id); 802 + ida_simple_remove(&phy_ida, phy->id); 804 803 kfree(phy); 805 804 } 806 805
+7 -4
drivers/phy/phy-omap-usb2.c
··· 233 233 if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { 234 234 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 235 235 phy->phy_base = devm_ioremap_resource(&pdev->dev, res); 236 - if (!phy->phy_base) 237 - return -ENOMEM; 236 + if (IS_ERR(phy->phy_base)) 237 + return PTR_ERR(phy->phy_base); 238 238 phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT; 239 239 } 240 240 ··· 262 262 otg->phy = &phy->phy; 263 263 264 264 platform_set_drvdata(pdev, phy); 265 - pm_runtime_enable(phy->dev); 266 265 267 266 generic_phy = devm_phy_create(phy->dev, &ops, NULL); 268 267 if (IS_ERR(generic_phy)) ··· 269 270 270 271 phy_set_drvdata(generic_phy, phy); 271 272 273 + pm_runtime_enable(phy->dev); 272 274 phy_provider = devm_of_phy_provider_register(phy->dev, 273 275 of_phy_simple_xlate); 274 - if (IS_ERR(phy_provider)) 276 + if (IS_ERR(phy_provider)) { 277 + pm_runtime_disable(phy->dev); 275 278 return PTR_ERR(phy_provider); 279 + } 276 280 277 281 phy->wkupclk = devm_clk_get(phy->dev, "wkupclk"); 278 282 if (IS_ERR(phy->wkupclk)) { ··· 319 317 if (!IS_ERR(phy->optclk)) 320 318 clk_unprepare(phy->optclk); 321 319 usb_remove_phy(&phy->phy); 320 + pm_runtime_disable(phy->dev); 322 321 323 322 return 0; 324 323 }
+1
drivers/phy/phy-samsung-usb2.c
··· 107 107 #endif 108 108 { }, 109 109 }; 110 + MODULE_DEVICE_TABLE(of, samsung_usb2_phy_of_match); 110 111 111 112 static int samsung_usb2_phy_probe(struct platform_device *pdev) 112 113 {
+1 -1
drivers/pinctrl/berlin/berlin.c
··· 320 320 321 321 regmap = dev_get_regmap(&pdev->dev, NULL); 322 322 if (!regmap) 323 - return PTR_ERR(regmap); 323 + return -ENODEV; 324 324 325 325 pctrl = devm_kzalloc(dev, sizeof(*pctrl), GFP_KERNEL); 326 326 if (!pctrl)
+4
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 211 211 configlen++; 212 212 213 213 pinconfig = kzalloc(configlen * sizeof(*pinconfig), GFP_KERNEL); 214 + if (!pinconfig) { 215 + kfree(*map); 216 + return -ENOMEM; 217 + } 214 218 215 219 if (!of_property_read_u32(node, "allwinner,drive", &val)) { 216 220 u16 strength = (val + 1) * 10;
+7 -11
drivers/thermal/imx_thermal.c
··· 306 306 { 307 307 struct imx_thermal_data *data = platform_get_drvdata(pdev); 308 308 struct regmap *map; 309 - int t1, t2, n1, n2; 309 + int t1, n1; 310 310 int ret; 311 311 u32 val; 312 312 u64 temp64; ··· 333 333 /* 334 334 * Sensor data layout: 335 335 * [31:20] - sensor value @ 25C 336 - * [19:8] - sensor value of hot 337 - * [7:0] - hot temperature value 338 336 * Use universal formula now and only need sensor value @ 25C 339 337 * slope = 0.4297157 - (0.0015976 * 25C fuse) 340 338 */ 341 339 n1 = val >> 20; 342 - n2 = (val & 0xfff00) >> 8; 343 - t2 = val & 0xff; 344 340 t1 = 25; /* t1 always 25C */ 345 341 346 342 /* ··· 362 366 data->c2 = n1 * data->c1 + 1000 * t1; 363 367 364 368 /* 365 - * Set the default passive cooling trip point to 20 °C below the 366 - * maximum die temperature. Can be changed from userspace. 369 + * Set the default passive cooling trip point, 370 + * can be changed from userspace. 367 371 */ 368 - data->temp_passive = 1000 * (t2 - 20); 372 + data->temp_passive = IMX_TEMP_PASSIVE; 369 373 370 374 /* 371 - * The maximum die temperature is t2, let's give 5 °C cushion 372 - * for noise and possible temperature rise between measurements. 375 + * The maximum die temperature is set to 20 C higher than 376 + * IMX_TEMP_PASSIVE. 373 377 */ 374 - data->temp_critical = 1000 * (t2 - 5); 378 + data->temp_critical = 1000 * 20 + data->temp_passive; 375 379 376 380 return 0; 377 381 }
+4 -3
drivers/thermal/of-thermal.c
··· 156 156 157 157 ret = thermal_zone_bind_cooling_device(thermal, 158 158 tbp->trip_id, cdev, 159 - tbp->min, 160 - tbp->max); 159 + tbp->max, 160 + tbp->min); 161 161 if (ret) 162 162 return ret; 163 163 } ··· 712 712 } 713 713 714 714 i = 0; 715 - for_each_child_of_node(child, gchild) 715 + for_each_child_of_node(child, gchild) { 716 716 ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++], 717 717 tz->trips, tz->ntrips); 718 718 if (ret) 719 719 goto free_tbps; 720 + } 720 721 721 722 finish: 722 723 of_node_put(child);
+18 -15
drivers/thermal/thermal_hwmon.c
··· 140 140 return NULL; 141 141 } 142 142 143 + static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz) 144 + { 145 + unsigned long temp; 146 + return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp); 147 + } 148 + 143 149 int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) 144 150 { 145 151 struct thermal_hwmon_device *hwmon; ··· 195 189 if (result) 196 190 goto free_temp_mem; 197 191 198 - if (tz->ops->get_crit_temp) { 199 - unsigned long temperature; 200 - if (!tz->ops->get_crit_temp(tz, &temperature)) { 201 - snprintf(temp->temp_crit.name, 202 - sizeof(temp->temp_crit.name), 192 + if (thermal_zone_crit_temp_valid(tz)) { 193 + snprintf(temp->temp_crit.name, 194 + sizeof(temp->temp_crit.name), 203 195 "temp%d_crit", hwmon->count); 204 - temp->temp_crit.attr.attr.name = temp->temp_crit.name; 205 - temp->temp_crit.attr.attr.mode = 0444; 206 - temp->temp_crit.attr.show = temp_crit_show; 207 - sysfs_attr_init(&temp->temp_crit.attr.attr); 208 - result = device_create_file(hwmon->device, 209 - &temp->temp_crit.attr); 210 - if (result) 211 - goto unregister_input; 212 - } 196 + temp->temp_crit.attr.attr.name = temp->temp_crit.name; 197 + temp->temp_crit.attr.attr.mode = 0444; 198 + temp->temp_crit.attr.show = temp_crit_show; 199 + sysfs_attr_init(&temp->temp_crit.attr.attr); 200 + result = device_create_file(hwmon->device, 201 + &temp->temp_crit.attr); 202 + if (result) 203 + goto unregister_input; 213 204 } 214 205 215 206 mutex_lock(&thermal_hwmon_list_lock); ··· 253 250 } 254 251 255 252 device_remove_file(hwmon->device, &temp->temp_input.attr); 256 - if (tz->ops->get_crit_temp) 253 + if (thermal_zone_crit_temp_valid(tz)) 257 254 device_remove_file(hwmon->device, &temp->temp_crit.attr); 258 255 259 256 mutex_lock(&thermal_hwmon_list_lock);
+1 -1
drivers/thermal/ti-soc-thermal/ti-bandgap.c
··· 1155 1155 /* register shadow for context save and restore */ 1156 1156 bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) * 1157 1157 bgp->conf->sensor_count, GFP_KERNEL); 1158 - if (!bgp) { 1158 + if (!bgp->regval) { 1159 1159 dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); 1160 1160 return ERR_PTR(-ENOMEM); 1161 1161 }
+1 -1
drivers/tty/serial/arc_uart.c
··· 177 177 uart->port.icount.tx++; 178 178 uart->port.x_char = 0; 179 179 sent = 1; 180 - } else if (xmit->tail != xmit->head) { /* TODO: uart_circ_empty */ 180 + } else if (!uart_circ_empty(xmit)) { 181 181 ch = xmit->buf[xmit->tail]; 182 182 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 183 183 uart->port.icount.tx++;
+3
drivers/tty/serial/imx.c
··· 567 567 struct imx_port *sport = (struct imx_port *)port; 568 568 unsigned long temp; 569 569 570 + if (uart_circ_empty(&port->state->xmit)) 571 + return; 572 + 570 573 if (USE_IRDA(sport)) { 571 574 /* half duplex in IrDA mode; have to disable receive mode */ 572 575 temp = readl(sport->port.membase + UCR4);
+2
drivers/tty/serial/ip22zilog.c
··· 603 603 } else { 604 604 struct circ_buf *xmit = &port->state->xmit; 605 605 606 + if (uart_circ_empty(xmit)) 607 + return; 606 608 writeb(xmit->buf[xmit->tail], &channel->data); 607 609 ZSDELAY(); 608 610 ZS_WSYNC(channel);
+5 -3
drivers/tty/serial/m32r_sio.c
··· 266 266 if (!(up->ier & UART_IER_THRI)) { 267 267 up->ier |= UART_IER_THRI; 268 268 serial_out(up, UART_IER, up->ier); 269 - serial_out(up, UART_TX, xmit->buf[xmit->tail]); 270 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 271 - up->port.icount.tx++; 269 + if (!uart_circ_empty(xmit)) { 270 + serial_out(up, UART_TX, xmit->buf[xmit->tail]); 271 + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 272 + up->port.icount.tx++; 273 + } 272 274 } 273 275 while((serial_in(up, UART_LSR) & UART_EMPTY) != UART_EMPTY); 274 276 #else
+3
drivers/tty/serial/pmac_zilog.c
··· 653 653 } else { 654 654 struct circ_buf *xmit = &port->state->xmit; 655 655 656 + if (uart_circ_empty(xmit)) 657 + goto out; 656 658 write_zsdata(uap, xmit->buf[xmit->tail]); 657 659 zssync(uap); 658 660 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); ··· 663 661 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 664 662 uart_write_wakeup(&uap->port); 665 663 } 664 + out: 666 665 pmz_debug("pmz: start_tx() done.\n"); 667 666 } 668 667
+3
drivers/tty/serial/sunsab.c
··· 427 427 struct circ_buf *xmit = &up->port.state->xmit; 428 428 int i; 429 429 430 + if (uart_circ_empty(xmit)) 431 + return; 432 + 430 433 up->interrupt_mask1 &= ~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); 431 434 writeb(up->interrupt_mask1, &up->regs->w.imr1); 432 435
+2
drivers/tty/serial/sunzilog.c
··· 703 703 } else { 704 704 struct circ_buf *xmit = &port->state->xmit; 705 705 706 + if (uart_circ_empty(xmit)) 707 + return; 706 708 writeb(xmit->buf[xmit->tail], &channel->data); 707 709 ZSDELAY(); 708 710 ZS_WSYNC(channel);
+2 -2
drivers/usb/chipidea/udc.c
··· 1169 1169 1170 1170 if (hwep->type == USB_ENDPOINT_XFER_CONTROL) 1171 1171 cap |= QH_IOS; 1172 - if (hwep->num) 1173 - cap |= QH_ZLT; 1172 + 1173 + cap |= QH_ZLT; 1174 1174 cap |= (hwep->ep.maxpacket << __ffs(QH_MAX_PKT)) & QH_MAX_PKT; 1175 1175 /* 1176 1176 * For ISO-TX, we set mult at QH as the largest value, and use
+19
drivers/usb/core/hub.c
··· 889 889 if (!hub_is_superspeed(hub->hdev)) 890 890 return -EINVAL; 891 891 892 + ret = hub_port_status(hub, port1, &portstatus, &portchange); 893 + if (ret < 0) 894 + return ret; 895 + 896 + /* 897 + * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI 898 + * Controller [1022:7814] will have spurious result making the following 899 + * usb 3.0 device hotplugging route to the 2.0 root hub and recognized 900 + * as high-speed device if we set the usb 3.0 port link state to 901 + * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we 902 + * check the state here to avoid the bug. 903 + */ 904 + if ((portstatus & USB_PORT_STAT_LINK_STATE) == 905 + USB_SS_PORT_LS_RX_DETECT) { 906 + dev_dbg(&hub->ports[port1 - 1]->dev, 907 + "Not disabling port; link state is RxDetect\n"); 908 + return ret; 909 + } 910 + 892 911 ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED); 893 912 if (ret) 894 913 return ret;
+1
drivers/usb/serial/cp210x.c
··· 153 153 { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */ 154 154 { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ 155 155 { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ 156 + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ 156 157 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 157 158 { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ 158 159 { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
+4 -1
drivers/usb/serial/ftdi_sio.c
··· 720 720 { USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, 721 721 { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, 722 722 { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) }, 723 - { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) }, 723 + { USB_DEVICE(TESTO_VID, TESTO_1_PID) }, 724 + { USB_DEVICE(TESTO_VID, TESTO_3_PID) }, 724 725 { USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) }, 725 726 { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) }, 726 727 { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) }, ··· 945 944 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) }, 946 945 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) }, 947 946 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) }, 947 + /* Infineon Devices */ 948 + { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, 948 949 { } /* Terminating entry */ 949 950 }; 950 951
+8 -1
drivers/usb/serial/ftdi_sio_ids.h
··· 584 584 #define RATOC_PRODUCT_ID_USB60F 0xb020 585 585 586 586 /* 587 + * Infineon Technologies 588 + */ 589 + #define INFINEON_VID 0x058b 590 + #define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ 591 + 592 + /* 587 593 * Acton Research Corp. 588 594 */ 589 595 #define ACTON_VID 0x0647 /* Vendor ID */ ··· 804 798 * Submitted by Colin Leroy 805 799 */ 806 800 #define TESTO_VID 0x128D 807 - #define TESTO_USB_INTERFACE_PID 0x0001 801 + #define TESTO_1_PID 0x0001 802 + #define TESTO_3_PID 0x0003 808 803 809 804 /* 810 805 * Mobility Electronics products.
+2
drivers/usb/serial/option.c
··· 1487 1487 .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1488 1488 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ 1489 1489 .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1490 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ 1491 + .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1490 1492 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, 1491 1493 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, 1492 1494 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },
+5 -7
drivers/xen/balloon.c
··· 426 426 * p2m are consistent. 427 427 */ 428 428 if (!xen_feature(XENFEAT_auto_translated_physmap)) { 429 - unsigned long p; 430 - struct page *scratch_page = get_balloon_scratch_page(); 431 - 432 429 if (!PageHighMem(page)) { 430 + struct page *scratch_page = get_balloon_scratch_page(); 431 + 433 432 ret = HYPERVISOR_update_va_mapping( 434 433 (unsigned long)__va(pfn << PAGE_SHIFT), 435 434 pfn_pte(page_to_pfn(scratch_page), 436 435 PAGE_KERNEL_RO), 0); 437 436 BUG_ON(ret); 438 - } 439 - p = page_to_pfn(scratch_page); 440 - __set_phys_to_machine(pfn, pfn_to_mfn(p)); 441 437 442 - put_balloon_scratch_page(); 438 + put_balloon_scratch_page(); 439 + } 440 + __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); 443 441 } 444 442 #endif 445 443
+4 -1
drivers/xen/manage.c
··· 88 88 89 89 if (!si->cancelled) { 90 90 xen_irq_resume(); 91 - xen_console_resume(); 92 91 xen_timer_resume(); 93 92 } 94 93 ··· 133 134 si.cancelled = 1; 134 135 135 136 err = stop_machine(xen_suspend, &si, cpumask_of(0)); 137 + 138 + /* Resume console as early as possible. */ 139 + if (!si.cancelled) 140 + xen_console_resume(); 136 141 137 142 raw_notifier_call_chain(&xen_resume_notifier, 0, NULL); 138 143
+6
firmware/Makefile
··· 219 219 obj-y += $(patsubst %,%.gen.o, $(fw-external-y)) 220 220 obj-$(CONFIG_FIRMWARE_IN_KERNEL) += $(patsubst %,%.gen.o, $(fw-shipped-y)) 221 221 222 + ifeq ($(KBUILD_SRC),) 223 + # Makefile.build only creates subdirectories for O= builds, but external 224 + # firmware might live outside the kernel source tree 225 + _dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d))) 226 + endif 227 + 222 228 # Remove .S files and binaries created from ihex 223 229 # (during 'make clean' .config isn't included so they're all in $(fw-shipped-)) 224 230 targets := $(fw-shipped-) $(patsubst $(obj)/%,%, \
+7
fs/aio.c
··· 830 830 static void put_reqs_available(struct kioctx *ctx, unsigned nr) 831 831 { 832 832 struct kioctx_cpu *kcpu; 833 + unsigned long flags; 833 834 834 835 preempt_disable(); 835 836 kcpu = this_cpu_ptr(ctx->cpu); 836 837 838 + local_irq_save(flags); 837 839 kcpu->reqs_available += nr; 840 + 838 841 while (kcpu->reqs_available >= ctx->req_batch * 2) { 839 842 kcpu->reqs_available -= ctx->req_batch; 840 843 atomic_add(ctx->req_batch, &ctx->reqs_available); 841 844 } 842 845 846 + local_irq_restore(flags); 843 847 preempt_enable(); 844 848 } 845 849 ··· 851 847 { 852 848 struct kioctx_cpu *kcpu; 853 849 bool ret = false; 850 + unsigned long flags; 854 851 855 852 preempt_disable(); 856 853 kcpu = this_cpu_ptr(ctx->cpu); 857 854 855 + local_irq_save(flags); 858 856 if (!kcpu->reqs_available) { 859 857 int old, avail = atomic_read(&ctx->reqs_available); 860 858 ··· 875 869 ret = true; 876 870 kcpu->reqs_available--; 877 871 out: 872 + local_irq_restore(flags); 878 873 preempt_enable(); 879 874 return ret; 880 875 }
+11
fs/btrfs/ordered-data.c
··· 484 484 log_list); 485 485 list_del_init(&ordered->log_list); 486 486 spin_unlock_irq(&log->log_extents_lock[index]); 487 + 488 + if (!test_bit(BTRFS_ORDERED_IO_DONE, &ordered->flags) && 489 + !test_bit(BTRFS_ORDERED_DIRECT, &ordered->flags)) { 490 + struct inode *inode = ordered->inode; 491 + u64 start = ordered->file_offset; 492 + u64 end = ordered->file_offset + ordered->len - 1; 493 + 494 + WARN_ON(!inode); 495 + filemap_fdatawrite_range(inode->i_mapping, start, end); 496 + } 487 497 wait_event(ordered->wait, test_bit(BTRFS_ORDERED_IO_DONE, 488 498 &ordered->flags)); 499 + 489 500 btrfs_put_ordered_extent(ordered); 490 501 spin_lock_irq(&log->log_extents_lock[index]); 491 502 }
+4 -4
fs/btrfs/volumes.c
··· 1680 1680 if (device->bdev == root->fs_info->fs_devices->latest_bdev) 1681 1681 root->fs_info->fs_devices->latest_bdev = next_device->bdev; 1682 1682 1683 - if (device->bdev) 1683 + if (device->bdev) { 1684 1684 device->fs_devices->open_devices--; 1685 - 1686 - /* remove sysfs entry */ 1687 - btrfs_kobj_rm_device(root->fs_info, device); 1685 + /* remove sysfs entry */ 1686 + btrfs_kobj_rm_device(root->fs_info, device); 1687 + } 1688 1688 1689 1689 call_rcu(&device->rcu, free_device); 1690 1690
+2 -2
fs/ext4/extents_status.c
··· 966 966 continue; 967 967 } 968 968 969 - if (ei->i_es_lru_nr == 0 || ei == locked_ei) 969 + if (ei->i_es_lru_nr == 0 || ei == locked_ei || 970 + !write_trylock(&ei->i_es_lock)) 970 971 continue; 971 972 972 - write_lock(&ei->i_es_lock); 973 973 shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan); 974 974 if (ei->i_es_lru_nr == 0) 975 975 list_del_init(&ei->i_es_lru);
+8 -8
fs/ext4/ialloc.c
··· 338 338 fatal = err; 339 339 } else { 340 340 ext4_error(sb, "bit already cleared for inode %lu", ino); 341 - if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 341 + if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 342 342 int count; 343 343 count = ext4_free_inodes_count(sb, gdp); 344 344 percpu_counter_sub(&sbi->s_freeinodes_counter, ··· 874 874 goto out; 875 875 } 876 876 877 + BUFFER_TRACE(group_desc_bh, "get_write_access"); 878 + err = ext4_journal_get_write_access(handle, group_desc_bh); 879 + if (err) { 880 + ext4_std_error(sb, err); 881 + goto out; 882 + } 883 + 877 884 /* We may have to initialize the block bitmap if it isn't already */ 878 885 if (ext4_has_group_desc_csum(sb) && 879 886 gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { ··· 915 908 ext4_std_error(sb, err); 916 909 goto out; 917 910 } 918 - } 919 - 920 - BUFFER_TRACE(group_desc_bh, "get_write_access"); 921 - err = ext4_journal_get_write_access(handle, group_desc_bh); 922 - if (err) { 923 - ext4_std_error(sb, err); 924 - goto out; 925 911 } 926 912 927 913 /* Update the relevant bg descriptor fields */
+2 -2
fs/ext4/mballoc.c
··· 752 752 753 753 if (free != grp->bb_free) { 754 754 ext4_grp_locked_error(sb, group, 0, 0, 755 - "%u clusters in bitmap, %u in gd; " 756 - "block bitmap corrupt.", 755 + "block bitmap and bg descriptor " 756 + "inconsistent: %u vs %u free clusters", 757 757 free, grp->bb_free); 758 758 /* 759 759 * If we intend to continue, we consider group descriptor
+28 -32
fs/ext4/super.c
··· 1525 1525 arg = JBD2_DEFAULT_MAX_COMMIT_AGE; 1526 1526 sbi->s_commit_interval = HZ * arg; 1527 1527 } else if (token == Opt_max_batch_time) { 1528 - if (arg == 0) 1529 - arg = EXT4_DEF_MAX_BATCH_TIME; 1530 1528 sbi->s_max_batch_time = arg; 1531 1529 } else if (token == Opt_min_batch_time) { 1532 1530 sbi->s_min_batch_time = arg; ··· 2807 2809 es = sbi->s_es; 2808 2810 2809 2811 if (es->s_error_count) 2810 - ext4_msg(sb, KERN_NOTICE, "error count: %u", 2812 + /* fsck newer than v1.41.13 is needed to clean this condition. */ 2813 + ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u", 2811 2814 le32_to_cpu(es->s_error_count)); if (es->s_first_error_time) { 2813 - printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d", 2816 + printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d", 2814 2817 sb->s_id, le32_to_cpu(es->s_first_error_time), 2815 2818 (int) sizeof(es->s_first_error_func), 2816 2819 es->s_first_error_func, ··· 2825 2826 printk("\n"); 2826 2827 } 2827 2828 if (es->s_last_error_time) { 2828 - printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d", 2829 + printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d", 2829 2830 sb->s_id, le32_to_cpu(es->s_last_error_time), 2830 2831 (int) sizeof(es->s_last_error_func), 2831 2832 es->s_last_error_func, ··· 3879 3880 goto failed_mount2; 3880 3881 } 3881 3882 } 3882 - 3883 - /* 3884 - * set up enough so that it can read an inode, 3885 - * and create new inode for buddy allocator 3886 - */ 3887 - sbi->s_gdb_count = db_count; 3888 - if (!test_opt(sb, NOLOAD) && 3889 - EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) 3890 - sb->s_op = &ext4_sops; 3891 - else 3892 - sb->s_op = &ext4_nojournal_sops; 3893 - 3894 - ext4_ext_init(sb); 3895 - err = ext4_mb_init(sb); 3896 - if (err) { 3897 - ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", 3898 - err); 3899 - goto failed_mount2; 3900 - } 3901 - 3902 3883 if (!ext4_check_descriptors(sb, &first_not_zeroed)) { 3903 3884 ext4_msg(sb, KERN_ERR, "group descriptors corrupted!"); 3904 - goto failed_mount2a; 3885 + goto failed_mount2; 3905 3886 } 3906 3887 if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) 3907 3888 if (!ext4_fill_flex_info(sb)) { 3908 3889 ext4_msg(sb, KERN_ERR, 3909 3890 "unable to initialize " 3910 3891 "flex_bg meta info!"); 3911 - goto failed_mount2a; 3892 + goto failed_mount2; 3912 3893 } 3913 3894 3895 + sbi->s_gdb_count = db_count; 3914 3896 get_random_bytes(&sbi->s_next_generation, sizeof(u32)); 3915 3897 spin_lock_init(&sbi->s_next_gen_lock); ··· 3926 3946 sbi->s_stripe = ext4_get_stripe_size(sbi); 3927 3947 sbi->s_extent_max_zeroout_kb = 32; 3949 + /* 3950 + * set up enough so that it can read an inode 3951 + */ 3952 + if (!test_opt(sb, NOLOAD) && 3953 + EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) 3954 + sb->s_op = &ext4_sops; 3955 + else 3956 + sb->s_op = &ext4_nojournal_sops; 3929 3957 sb->s_export_op = &ext4_export_ops; 3930 3958 sb->s_xattr = ext4_xattr_handlers; #ifdef CONFIG_QUOTA ··· 4123 4135 if (err) { 4124 4136 ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for " 4125 4137 "reserved pool", ext4_calculate_resv_clusters(sb)); 4126 - goto failed_mount5; 4138 + goto failed_mount4a; 4127 4139 } 4128 4140 4129 4141 err = ext4_setup_system_zone(sb); 4130 4142 if (err) { 4131 4143 ext4_msg(sb, KERN_ERR, "failed to initialize system " 4132 4144 "zone (%d)", err); 4145 + goto failed_mount4a; 4146 + } 4147 + 4148 + ext4_ext_init(sb); 4149 + err = ext4_mb_init(sb); 4150 + if (err) { 4151 + ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", 4152 + err); 4133 4153 goto failed_mount5; 4134 4154 } ··· 4214 4218 failed_mount7: 4215 4219 ext4_unregister_li_request(sb); 4216 4220 failed_mount6: 4217 - ext4_release_system_zone(sb); 4221 + ext4_mb_release(sb); 4218 4222 failed_mount5: 4223 + ext4_ext_release(sb); 4224 + ext4_release_system_zone(sb); 
4225 + failed_mount4a: 4219 4226 dput(sb->s_root); 4220 4227 sb->s_root = NULL; 4221 4228 failed_mount4: ··· 4242 4243 percpu_counter_destroy(&sbi->s_extent_cache_cnt); 4243 4244 if (sbi->s_mmp_tsk) 4244 4245 kthread_stop(sbi->s_mmp_tsk); 4245 - failed_mount2a: 4246 - ext4_mb_release(sb); 4247 4246 failed_mount2: 4248 4247 for (i = 0; i < db_count; i++) 4249 4248 brelse(sbi->s_group_desc[i]); 4250 4249 ext4_kvfree(sbi->s_group_desc); 4251 4250 failed_mount: 4252 - ext4_ext_release(sb); 4253 4251 if (sbi->s_chksum_driver) 4254 4252 crypto_free_shash(sbi->s_chksum_driver); 4255 4253 if (sbi->s_proc) {
+18 -5
fs/f2fs/data.c
··· 608 608 * b. do not use extent cache for better performance 609 609 * c. give the block addresses to blockdev 610 610 */ 611 - static int get_data_block(struct inode *inode, sector_t iblock, 612 - struct buffer_head *bh_result, int create) 611 + static int __get_data_block(struct inode *inode, sector_t iblock, 612 + struct buffer_head *bh_result, int create, bool fiemap) 613 613 { 614 614 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); 615 615 unsigned int blkbits = inode->i_sb->s_blocksize_bits; ··· 637 637 err = 0; 638 638 goto unlock_out; 639 639 } 640 - if (dn.data_blkaddr == NEW_ADDR) 640 + if (dn.data_blkaddr == NEW_ADDR && !fiemap) 641 641 goto put_out; 642 642 643 643 if (dn.data_blkaddr != NULL_ADDR) { ··· 671 671 err = 0; 672 672 goto unlock_out; 673 673 } 674 - if (dn.data_blkaddr == NEW_ADDR) 674 + if (dn.data_blkaddr == NEW_ADDR && !fiemap) 675 675 goto put_out; 676 676 677 677 end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode)); ··· 708 708 return err; 709 709 } 710 710 711 + static int get_data_block(struct inode *inode, sector_t iblock, 712 + struct buffer_head *bh_result, int create) 713 + { 714 + return __get_data_block(inode, iblock, bh_result, create, false); 715 + } 716 + 717 + static int get_data_block_fiemap(struct inode *inode, sector_t iblock, 718 + struct buffer_head *bh_result, int create) 719 + { 720 + return __get_data_block(inode, iblock, bh_result, create, true); 721 + } 722 + 711 723 int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 712 724 u64 start, u64 len) 713 725 { 714 - return generic_block_fiemap(inode, fieinfo, start, len, get_data_block); 726 + return generic_block_fiemap(inode, fieinfo, 727 + start, len, get_data_block_fiemap); 715 728 } 716 729 717 730 static int f2fs_read_data_page(struct file *file, struct page *page)
+1 -1
fs/f2fs/dir.c
··· 376 376 377 377 put_error: 378 378 f2fs_put_page(page, 1); 379 + error: 379 380 /* once the failed inode becomes a bad inode, i_mode is S_IFREG */ 380 381 truncate_inode_pages(&inode->i_data, 0); 381 382 truncate_blocks(inode, 0); 382 383 remove_dirty_dir_inode(inode); 383 - error: 384 384 remove_inode_page(inode); 385 385 return ERR_PTR(err); 386 386 }
+2 -4
fs/f2fs/f2fs.h
··· 342 342 struct dirty_seglist_info *dirty_info; /* dirty segment information */ 343 343 struct curseg_info *curseg_array; /* active segment information */ 344 344 345 - struct list_head wblist_head; /* list of under-writeback pages */ 346 - spinlock_t wblist_lock; /* lock for checkpoint */ 347 - 348 345 block_t seg0_blkaddr; /* block address of 0'th segment */ 349 346 block_t main_blkaddr; /* start block address of main area */ 350 347 block_t ssa_blkaddr; /* start block address of SSA area */ ··· 641 644 */ 642 645 static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid) 643 646 { 644 - WARN_ON((nid >= NM_I(sbi)->max_nid)); 647 + if (unlikely(nid < F2FS_ROOT_INO(sbi))) 648 + return -EINVAL; 645 649 if (unlikely(nid >= NM_I(sbi)->max_nid)) 646 650 return -EINVAL; 647 651 return 0;
+8 -4
fs/f2fs/file.c
··· 659 659 off_start = offset & (PAGE_CACHE_SIZE - 1); 660 660 off_end = (offset + len) & (PAGE_CACHE_SIZE - 1); 661 661 662 + f2fs_lock_op(sbi); 663 + 662 664 for (index = pg_start; index <= pg_end; index++) { 663 665 struct dnode_of_data dn; 664 666 665 - f2fs_lock_op(sbi); 667 + if (index == pg_end && !off_end) 668 + goto noalloc; 669 + 666 670 set_new_dnode(&dn, inode, NULL, NULL, 0); 667 671 ret = f2fs_reserve_block(&dn, index); 668 - f2fs_unlock_op(sbi); 669 672 if (ret) 670 673 break; 671 - 674 + noalloc: 672 675 if (pg_start == pg_end) 673 676 new_size = offset + len; 674 677 else if (index == pg_start && off_start) ··· 686 683 i_size_read(inode) < new_size) { 687 684 i_size_write(inode, new_size); 688 685 mark_inode_dirty(inode); 689 - f2fs_write_inode(inode, NULL); 686 + update_inode_page(inode); 690 687 } 688 + f2fs_unlock_op(sbi); 691 689 692 690 return ret; 693 691 }
+1
fs/f2fs/inode.c
··· 78 78 if (check_nid_range(sbi, inode->i_ino)) { 79 79 f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu", 80 80 (unsigned long) inode->i_ino); 81 + WARN_ON(1); 81 82 return -EINVAL; 82 83 } 83 84
+6 -7
fs/f2fs/namei.c
··· 417 417 } 418 418 419 419 f2fs_set_link(new_dir, new_entry, new_page, old_inode); 420 - down_write(&F2FS_I(old_inode)->i_sem); 421 - F2FS_I(old_inode)->i_pino = new_dir->i_ino; 422 - up_write(&F2FS_I(old_inode)->i_sem); 423 420 424 421 new_inode->i_ctime = CURRENT_TIME; 425 422 down_write(&F2FS_I(new_inode)->i_sem); ··· 445 448 } 446 449 } 447 450 451 + down_write(&F2FS_I(old_inode)->i_sem); 452 + file_lost_pino(old_inode); 453 + up_write(&F2FS_I(old_inode)->i_sem); 454 + 448 455 old_inode->i_ctime = CURRENT_TIME; 449 456 mark_inode_dirty(old_inode); 450 457 ··· 458 457 if (old_dir != new_dir) { 459 458 f2fs_set_link(old_inode, old_dir_entry, 460 459 old_dir_page, new_dir); 461 - down_write(&F2FS_I(old_inode)->i_sem); 462 - F2FS_I(old_inode)->i_pino = new_dir->i_ino; 463 - up_write(&F2FS_I(old_inode)->i_sem); 464 460 update_inode_page(old_inode); 465 461 } else { 466 462 kunmap(old_dir_page); ··· 472 474 return 0; 473 475 474 476 put_out_dir: 475 - f2fs_put_page(new_page, 1); 477 + kunmap(new_page); 478 + f2fs_put_page(new_page, 0); 476 479 out_dir: 477 480 if (old_dir_entry) { 478 481 kunmap(old_dir_page);
+2
fs/f2fs/node.c
··· 42 42 mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> 12; 43 43 res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2); 44 44 } else if (type == DIRTY_DENTS) { 45 + if (sbi->sb->s_bdi->dirty_exceeded) 46 + return false; 45 47 mem_size = get_pages(sbi, F2FS_DIRTY_DENTS); 46 48 res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 1); 47 49 }
+2 -3
fs/f2fs/segment.c
··· 272 272 return -ENOMEM; 273 273 spin_lock_init(&fcc->issue_lock); 274 274 init_waitqueue_head(&fcc->flush_wait_queue); 275 + sbi->sm_info->cmd_control_info = fcc; 275 276 fcc->f2fs_issue_flush = kthread_run(issue_flush_thread, sbi, 276 277 "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev)); 277 278 if (IS_ERR(fcc->f2fs_issue_flush)) { 278 279 err = PTR_ERR(fcc->f2fs_issue_flush); 279 280 kfree(fcc); 281 + sbi->sm_info->cmd_control_info = NULL; 280 282 return err; 281 283 } 282 - sbi->sm_info->cmd_control_info = fcc; 283 284 284 285 return err; 285 286 } ··· 1886 1885 1887 1886 /* init sm info */ 1888 1887 sbi->sm_info = sm_info; 1889 - INIT_LIST_HEAD(&sm_info->wblist_head); 1890 - spin_lock_init(&sm_info->wblist_lock); 1891 1888 sm_info->seg0_blkaddr = le32_to_cpu(raw_super->segment0_blkaddr); 1892 1889 sm_info->main_blkaddr = le32_to_cpu(raw_super->main_blkaddr); 1893 1890 sm_info->segment_count = le32_to_cpu(raw_super->segment_count);
+1 -3
fs/f2fs/super.c
··· 689 689 struct f2fs_sb_info *sbi = F2FS_SB(sb); 690 690 struct inode *inode; 691 691 692 - if (unlikely(ino < F2FS_ROOT_INO(sbi))) 693 - return ERR_PTR(-ESTALE); 694 - if (unlikely(ino >= NM_I(sbi)->max_nid)) 692 + if (check_nid_range(sbi, ino)) 695 693 return ERR_PTR(-ESTALE); 696 694 697 695 /*
+23 -28
fs/fuse/dev.c
··· 643 643 unsigned long seglen; 644 644 unsigned long addr; 645 645 struct page *pg; 646 - void *mapaddr; 647 - void *buf; 648 646 unsigned len; 647 + unsigned offset; 649 648 unsigned move_pages:1; 650 649 }; 651 650 ··· 665 666 if (cs->currbuf) { 666 667 struct pipe_buffer *buf = cs->currbuf; 667 668 668 - if (!cs->write) { 669 - kunmap_atomic(cs->mapaddr); 670 - } else { 671 - kunmap_atomic(cs->mapaddr); 669 + if (cs->write) 672 670 buf->len = PAGE_SIZE - cs->len; 673 - } 674 671 cs->currbuf = NULL; 675 - cs->mapaddr = NULL; 676 - } else if (cs->mapaddr) { 677 - kunmap_atomic(cs->mapaddr); 672 + } else if (cs->pg) { 678 673 if (cs->write) { 679 674 flush_dcache_page(cs->pg); 680 675 set_page_dirty_lock(cs->pg); 681 676 } 682 677 put_page(cs->pg); 683 - cs->mapaddr = NULL; 684 678 } 679 + cs->pg = NULL; 685 680 } 686 681 687 682 /* ··· 684 691 */ 685 692 static int fuse_copy_fill(struct fuse_copy_state *cs) 686 693 { 687 - unsigned long offset; 694 + struct page *page; 688 695 int err; 689 696 690 697 unlock_request(cs->fc, cs->req); ··· 699 706 700 707 BUG_ON(!cs->nr_segs); 701 708 cs->currbuf = buf; 702 - cs->mapaddr = kmap_atomic(buf->page); 709 + cs->pg = buf->page; 710 + cs->offset = buf->offset; 703 711 cs->len = buf->len; 704 - cs->buf = cs->mapaddr + buf->offset; 705 712 cs->pipebufs++; 706 713 cs->nr_segs--; 707 714 } else { 708 - struct page *page; 709 - 710 715 if (cs->nr_segs == cs->pipe->buffers) 711 716 return -EIO; 712 717 ··· 717 726 buf->len = 0; 718 727 719 728 cs->currbuf = buf; 720 - cs->mapaddr = kmap_atomic(page); 721 - cs->buf = cs->mapaddr; 729 + cs->pg = page; 730 + cs->offset = 0; 722 731 cs->len = PAGE_SIZE; 723 732 cs->pipebufs++; 724 733 cs->nr_segs++; ··· 731 740 cs->iov++; 732 741 cs->nr_segs--; 733 742 } 734 - err = get_user_pages_fast(cs->addr, 1, cs->write, &cs->pg); 743 + err = get_user_pages_fast(cs->addr, 1, cs->write, &page); 735 744 if (err < 0) 736 745 return err; 737 746 BUG_ON(err != 1); 738 - offset = cs->addr % PAGE_SIZE; 739 - cs->mapaddr = kmap_atomic(cs->pg); 740 - cs->buf = cs->mapaddr + offset; 741 - cs->len = min(PAGE_SIZE - offset, cs->seglen); 747 + cs->pg = page; 748 + cs->offset = cs->addr % PAGE_SIZE; 749 + cs->len = min(PAGE_SIZE - cs->offset, cs->seglen); 742 750 cs->seglen -= cs->len; 743 751 cs->addr += cs->len; 744 752 } ··· 750 760 { 751 761 unsigned ncpy = min(*size, cs->len); 752 762 if (val) { 763 + void *pgaddr = kmap_atomic(cs->pg); 764 + void *buf = pgaddr + cs->offset; 765 + 753 766 if (cs->write) 754 - memcpy(cs->buf, *val, ncpy); 767 + memcpy(buf, *val, ncpy); 755 768 else 756 - memcpy(*val, cs->buf, ncpy); 769 + memcpy(*val, buf, ncpy); 770 + 771 + kunmap_atomic(pgaddr); 757 772 *val += ncpy; 758 773 } 759 774 *size -= ncpy; 760 775 cs->len -= ncpy; 761 - cs->buf += ncpy; 776 + cs->offset += ncpy; 762 777 return ncpy; 763 778 } ··· 869 874 out_fallback_unlock: 870 875 unlock_page(newpage); 871 876 out_fallback: 872 - cs->mapaddr = kmap_atomic(buf->page); 873 - cs->buf = cs->mapaddr + buf->offset; 877 + cs->pg = buf->page; 878 + cs->offset = buf->offset; 874 879 875 880 err = lock_request(cs->fc, cs->req); 876 881 if (err)
+25 -18
fs/fuse/dir.c
··· 198 198 inode = ACCESS_ONCE(entry->d_inode); 199 199 if (inode && is_bad_inode(inode)) 200 200 goto invalid; 201 - else if (fuse_dentry_time(entry) < get_jiffies_64()) { 201 + else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || 202 + (flags & LOOKUP_REVAL)) { 202 203 int err; 203 204 struct fuse_entry_out outarg; 204 205 struct fuse_req *req; ··· 815 814 return err; 816 815 } 817 816 818 - static int fuse_rename(struct inode *olddir, struct dentry *oldent, 819 - struct inode *newdir, struct dentry *newent) 820 - { 821 - return fuse_rename_common(olddir, oldent, newdir, newent, 0, 822 - FUSE_RENAME, sizeof(struct fuse_rename_in)); 823 - } 824 - 825 817 static int fuse_rename2(struct inode *olddir, struct dentry *oldent, 826 818 struct inode *newdir, struct dentry *newent, 827 819 unsigned int flags) ··· 825 831 if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE)) 826 832 return -EINVAL; 827 833 828 - if (fc->no_rename2 || fc->minor < 23) 829 - return -EINVAL; 834 + if (flags) { 835 + if (fc->no_rename2 || fc->minor < 23) 836 + return -EINVAL; 830 837 831 - err = fuse_rename_common(olddir, oldent, newdir, newent, flags, 832 - FUSE_RENAME2, sizeof(struct fuse_rename2_in)); 833 - if (err == -ENOSYS) { 834 - fc->no_rename2 = 1; 835 - err = -EINVAL; 838 + err = fuse_rename_common(olddir, oldent, newdir, newent, flags, 839 + FUSE_RENAME2, 840 + sizeof(struct fuse_rename2_in)); 841 + if (err == -ENOSYS) { 842 + fc->no_rename2 = 1; 843 + err = -EINVAL; 844 + } 845 + } else { 846 + err = fuse_rename_common(olddir, oldent, newdir, newent, 0, 847 + FUSE_RENAME, 848 + sizeof(struct fuse_rename_in)); 836 849 } 837 - return err; 838 850 851 + return err; 852 + } 853 + 854 + static int fuse_rename(struct inode *olddir, struct dentry *oldent, 855 + struct inode *newdir, struct dentry *newent) 856 + { 857 + return fuse_rename2(olddir, oldent, newdir, newent, 0); 839 858 } 840 859 841 860 static int fuse_link(struct dentry *entry, struct inode *newdir, ··· 992 985 
int err; 993 986 bool r; 994 987 995 - if (fi->i_time < get_jiffies_64()) { 988 + if (time_before64(fi->i_time, get_jiffies_64())) { 996 989 r = true; 997 990 err = fuse_do_getattr(inode, stat, file); 998 991 } else { ··· 1178 1171 ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { 1179 1172 struct fuse_inode *fi = get_fuse_inode(inode); 1180 1173 1181 - if (fi->i_time < get_jiffies_64()) { 1174 + if (time_before64(fi->i_time, get_jiffies_64())) { 1182 1175 refreshed = true; 1183 1176 1184 1177 err = fuse_perm_getattr(inode, mask);
+5 -3
fs/fuse/file.c
··· 1687 1687 error = -EIO; 1688 1688 req->ff = fuse_write_file_get(fc, fi); 1689 1689 if (!req->ff) 1690 - goto err_free; 1690 + goto err_nofile; 1691 1691 1692 1692 fuse_write_fill(req, req->ff, page_offset(page), 0); 1693 1693 ··· 1715 1715 1716 1716 return 0; 1717 1717 1718 + err_nofile: 1719 + __free_page(tmp_page); 1718 1720 err_free: 1719 1721 fuse_request_free(req); 1720 1722 err: ··· 1957 1955 data.ff = NULL; 1958 1956 1959 1957 err = -ENOMEM; 1960 - data.orig_pages = kzalloc(sizeof(struct page *) * 1961 - FUSE_MAX_PAGES_PER_REQ, 1958 + data.orig_pages = kcalloc(FUSE_MAX_PAGES_PER_REQ, 1959 + sizeof(struct page *), 1962 1960 GFP_NOFS); 1963 1961 if (!data.orig_pages) 1964 1962 goto out;
+17 -5
fs/fuse/inode.c
··· 478 478 {OPT_ERR, NULL} 479 479 }; 480 480 481 + static int fuse_match_uint(substring_t *s, unsigned int *res) 482 + { 483 + int err = -ENOMEM; 484 + char *buf = match_strdup(s); 485 + if (buf) { 486 + err = kstrtouint(buf, 10, res); 487 + kfree(buf); 488 + } 489 + return err; 490 + } 491 + 481 492 static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) 482 493 { 483 494 char *p; ··· 499 488 while ((p = strsep(&opt, ",")) != NULL) { 500 489 int token; 501 490 int value; 491 + unsigned uv; 502 492 substring_t args[MAX_OPT_ARGS]; 503 493 if (!*p) 504 494 continue; ··· 523 511 break; 524 512 525 513 case OPT_USER_ID: 526 - if (match_int(&args[0], &value)) 514 + if (fuse_match_uint(&args[0], &uv)) 527 515 return 0; 528 - d->user_id = make_kuid(current_user_ns(), value); 516 + d->user_id = make_kuid(current_user_ns(), uv); 529 517 if (!uid_valid(d->user_id)) 530 518 return 0; 531 519 d->user_id_present = 1; 532 520 break; 533 521 534 522 case OPT_GROUP_ID: 535 - if (match_int(&args[0], &value)) 523 + if (fuse_match_uint(&args[0], &uv)) 536 524 return 0; 537 - d->group_id = make_kgid(current_user_ns(), value); 525 + d->group_id = make_kgid(current_user_ns(), uv); 538 526 if (!gid_valid(d->group_id)) 539 527 return 0; 540 528 d->group_id_present = 1; ··· 1018 1006 1019 1007 sb->s_flags &= ~(MS_NOSEC | MS_I_VERSION); 1020 1008 1021 - if (!parse_fuse_opt((char *) data, &d, is_bdev)) 1009 + if (!parse_fuse_opt(data, &d, is_bdev)) 1022 1010 goto err; 1023 1011 1024 1012 if (is_bdev) {
+2 -2
fs/gfs2/file.c
··· 981 981 int error = 0; 982 982 983 983 state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED; 984 - flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT | GL_NOCACHE; 984 + flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT; 985 985 986 986 mutex_lock(&fp->f_fl_mutex); 987 987 ··· 991 991 goto out; 992 992 flock_lock_file_wait(file, 993 993 &(struct file_lock){.fl_type = F_UNLCK}); 994 - gfs2_glock_dq_wait(fl_gh); 994 + gfs2_glock_dq(fl_gh); 995 995 gfs2_holder_reinit(state, flags, fl_gh); 996 996 } else { 997 997 error = gfs2_glock_get(GFS2_SB(&ip->i_inode), ip->i_no_addr,
+9 -5
fs/gfs2/glock.c
··· 731 731 cachep = gfs2_glock_aspace_cachep; 732 732 else 733 733 cachep = gfs2_glock_cachep; 734 - gl = kmem_cache_alloc(cachep, GFP_KERNEL); 734 + gl = kmem_cache_alloc(cachep, GFP_NOFS); 735 735 if (!gl) 736 736 return -ENOMEM; 737 737 738 738 memset(&gl->gl_lksb, 0, sizeof(struct dlm_lksb)); 739 739 740 740 if (glops->go_flags & GLOF_LVB) { 741 - gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_KERNEL); 741 + gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_NOFS); 742 742 if (!gl->gl_lksb.sb_lvbptr) { 743 743 kmem_cache_free(cachep, gl); 744 744 return -ENOMEM; ··· 1404 1404 gl = list_entry(list->next, struct gfs2_glock, gl_lru); 1405 1405 list_del_init(&gl->gl_lru); 1406 1406 if (!spin_trylock(&gl->gl_spin)) { 1407 + add_back_to_lru: 1407 1408 list_add(&gl->gl_lru, &lru_list); 1408 1409 atomic_inc(&lru_count); 1409 1410 continue; 1410 1411 } 1412 + if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) { 1413 + spin_unlock(&gl->gl_spin); 1414 + goto add_back_to_lru; 1415 + } 1411 1416 clear_bit(GLF_LRU, &gl->gl_flags); 1412 - spin_unlock(&lru_lock); 1413 1417 gl->gl_lockref.count++; 1414 1418 if (demote_ok(gl)) 1415 1419 handle_callback(gl, LM_ST_UNLOCKED, 0, false); ··· 1421 1417 if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0) 1422 1418 gl->gl_lockref.count--; 1423 1419 spin_unlock(&gl->gl_spin); 1424 - spin_lock(&lru_lock); 1420 + cond_resched_lock(&lru_lock); 1425 1421 } 1426 1422 } 1427 1423 ··· 1446 1442 gl = list_entry(lru_list.next, struct gfs2_glock, gl_lru); 1447 1443 1448 1444 /* Test for being demotable */ 1449 - if (!test_and_set_bit(GLF_LOCK, &gl->gl_flags)) { 1445 + if (!test_bit(GLF_LOCK, &gl->gl_flags)) { 1450 1446 list_move(&gl->gl_lru, &dispose); 1451 1447 atomic_dec(&lru_count); 1452 1448 freed++;
+2 -2
fs/gfs2/glops.c
··· 234 234 * inode_go_inval - prepare a inode glock to be released 235 235 * @gl: the glock 236 236 * @flags: 237 - * 238 - * Normally we invlidate everything, but if we are moving into 237 + * 238 + * Normally we invalidate everything, but if we are moving into 239 239 * LM_ST_DEFERRED from LM_ST_SHARED or LM_ST_EXCLUSIVE then we 240 240 * can keep hold of the metadata, since it won't have changed. 241 241 *
+2 -2
fs/gfs2/lock_dlm.c
··· 1036 1036 1037 1037 new_size = old_size + RECOVER_SIZE_INC; 1038 1038 1039 - submit = kzalloc(new_size * sizeof(uint32_t), GFP_NOFS); 1040 - result = kzalloc(new_size * sizeof(uint32_t), GFP_NOFS); 1039 + submit = kcalloc(new_size, sizeof(uint32_t), GFP_NOFS); 1040 + result = kcalloc(new_size, sizeof(uint32_t), GFP_NOFS); 1041 1041 if (!submit || !result) { 1042 1042 kfree(submit); 1043 1043 kfree(result);
+2 -2
fs/gfs2/rgrp.c
··· 337 337 338 338 /** 339 339 * gfs2_free_extlen - Return extent length of free blocks 340 - * @rbm: Starting position 340 + * @rrbm: Starting position 341 341 * @len: Max length to check 342 342 * 343 343 * Starting at the block specified by the rbm, see how many free blocks ··· 2522 2522 2523 2523 /** 2524 2524 * gfs2_rlist_free - free a resource group list 2525 - * @list: the list of resource groups 2525 + * @rlist: the list of resource groups 2526 2526 * 2527 2527 */ 2528 2528
+4 -1
fs/jbd2/transaction.c
··· 1588 1588 * to perform a synchronous write. We do this to detect the 1589 1589 * case where a single process is doing a stream of sync 1590 1590 * writes. No point in waiting for joiners in that case. 1591 + * 1592 + * Setting max_batch_time to 0 disables this completely. 1591 1593 */ 1592 1594 pid = current->pid; 1593 - if (handle->h_sync && journal->j_last_sync_writer != pid) { 1595 + if (handle->h_sync && journal->j_last_sync_writer != pid && 1596 + journal->j_max_batch_time) { 1594 1597 u64 commit_time, trans_time; 1595 1598 1596 1599 journal->j_last_sync_writer = pid;
+30
fs/kernfs/mount.c
··· 211 211 kernfs_put(root_kn); 212 212 } 213 213 214 + /** 215 + * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root 216 + * @kernfs_root: the kernfs_root in question 217 + * @ns: the namespace tag 218 + * 219 + * Pin the superblock so the superblock won't be destroyed in subsequent 220 + * operations. This can be used to block ->kill_sb() which may be useful 221 + * for kernfs users which dynamically manage superblocks. 222 + * 223 + * Returns NULL if there's no superblock associated to this kernfs_root, or 224 + * -EINVAL if the superblock is being freed. 225 + */ 226 + struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns) 227 + { 228 + struct kernfs_super_info *info; 229 + struct super_block *sb = NULL; 230 + 231 + mutex_lock(&kernfs_mutex); 232 + list_for_each_entry(info, &root->supers, node) { 233 + if (info->ns == ns) { 234 + sb = info->sb; 235 + if (!atomic_inc_not_zero(&info->sb->s_active)) 236 + sb = ERR_PTR(-EINVAL); 237 + break; 238 + } 239 + } 240 + mutex_unlock(&kernfs_mutex); 241 + return sb; 242 + } 243 + 214 244 void __init kernfs_init(void) 215 245 { 216 246 kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
-2
fs/nfs/direct.c
··· 756 756 spin_unlock(&dreq->lock); 757 757 758 758 while (!list_empty(&hdr->pages)) { 759 - bool do_destroy = true; 760 759 761 760 req = nfs_list_entry(hdr->pages.next); 762 761 nfs_list_remove_request(req); ··· 764 765 case NFS_IOHDR_NEED_COMMIT: 765 766 kref_get(&req->wb_kref); 766 767 nfs_mark_request_commit(req, hdr->lseg, &cinfo); 767 - do_destroy = false; 768 768 } 769 769 nfs_unlock_and_release_request(req); 770 770 }
+1
fs/nfs/internal.h
··· 244 244 int nfs_generic_pgio(struct nfs_pageio_descriptor *, struct nfs_pgio_header *); 245 245 int nfs_initiate_pgio(struct rpc_clnt *, struct nfs_pgio_data *, 246 246 const struct rpc_call_ops *, int, int); 247 + void nfs_free_request(struct nfs_page *req); 247 248 248 249 static inline void nfs_iocounter_init(struct nfs_io_counter *c) 249 250 {
+43
fs/nfs/nfs3acl.c
··· 247 247 &posix_acl_default_xattr_handler, 248 248 NULL, 249 249 }; 250 + 251 + static int 252 + nfs3_list_one_acl(struct inode *inode, int type, const char *name, void *data, 253 + size_t size, ssize_t *result) 254 + { 255 + struct posix_acl *acl; 256 + char *p = data + *result; 257 + 258 + acl = get_acl(inode, type); 259 + if (!acl) 260 + return 0; 261 + 262 + posix_acl_release(acl); 263 + 264 + *result += strlen(name); 265 + *result += 1; 266 + if (!size) 267 + return 0; 268 + if (*result > size) 269 + return -ERANGE; 270 + 271 + strcpy(p, name); 272 + return 0; 273 + } 274 + 275 + ssize_t 276 + nfs3_listxattr(struct dentry *dentry, char *data, size_t size) 277 + { 278 + struct inode *inode = dentry->d_inode; 279 + ssize_t result = 0; 280 + int error; 281 + 282 + error = nfs3_list_one_acl(inode, ACL_TYPE_ACCESS, 283 + POSIX_ACL_XATTR_ACCESS, data, size, &result); 284 + if (error) 285 + return error; 286 + 287 + error = nfs3_list_one_acl(inode, ACL_TYPE_DEFAULT, 288 + POSIX_ACL_XATTR_DEFAULT, data, size, &result); 289 + if (error) 290 + return error; 291 + return result; 292 + }
+2 -2
fs/nfs/nfs3proc.c
··· 885 885 .getattr = nfs_getattr, 886 886 .setattr = nfs_setattr, 887 887 #ifdef CONFIG_NFS_V3_ACL 888 - .listxattr = generic_listxattr, 888 + .listxattr = nfs3_listxattr, 889 889 .getxattr = generic_getxattr, 890 890 .setxattr = generic_setxattr, 891 891 .removexattr = generic_removexattr, ··· 899 899 .getattr = nfs_getattr, 900 900 .setattr = nfs_setattr, 901 901 #ifdef CONFIG_NFS_V3_ACL 902 - .listxattr = generic_listxattr, 902 + .listxattr = nfs3_listxattr, 903 903 .getxattr = generic_getxattr, 904 904 .setxattr = generic_setxattr, 905 905 .removexattr = generic_removexattr,
+15 -5
fs/nfs/pagelist.c
··· 29 29 static struct kmem_cache *nfs_page_cachep; 30 30 static const struct rpc_call_ops nfs_pgio_common_ops; 31 31 32 - static void nfs_free_request(struct nfs_page *); 33 - 34 32 static bool nfs_pgarray_set(struct nfs_page_array *p, unsigned int pagecount) 35 33 { 36 34 p->npages = pagecount; ··· 237 239 WARN_ON_ONCE(prev == req); 238 240 239 241 if (!prev) { 242 + /* a head request */ 240 243 req->wb_head = req; 241 244 req->wb_this_page = req; 242 245 } else { 246 + /* a subrequest */ 243 247 WARN_ON_ONCE(prev->wb_this_page != prev->wb_head); 244 248 WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &prev->wb_head->wb_flags)); 245 249 req->wb_head = prev->wb_head; 246 250 req->wb_this_page = prev->wb_this_page; 247 251 prev->wb_this_page = req; 248 252 253 + /* All subrequests take a ref on the head request until 254 + * nfs_page_group_destroy is called */ 255 + kref_get(&req->wb_head->wb_kref); 256 + 249 257 /* grab extra ref if head request has extra ref from 250 258 * the write/commit path to handle handoff between write 251 259 * and commit lists */ 252 - if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) 260 + if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) { 261 + set_bit(PG_INODE_REF, &req->wb_flags); 253 262 kref_get(&req->wb_kref); 263 + } 254 264 } 255 265 } 256 266 ··· 274 268 { 275 269 struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref); 276 270 struct nfs_page *tmp, *next; 271 + 272 + /* subrequests must release the ref on the head request */ 273 + if (req->wb_head != req) 274 + nfs_release_request(req->wb_head); 277 275 278 276 if (!nfs_page_group_sync_on_bit(req, PG_TEARDOWN)) 279 277 return; ··· 404 394 * 405 395 * Note: Should never be called with the spinlock held! 
406 396 */ 407 - static void nfs_free_request(struct nfs_page *req) 397 + void nfs_free_request(struct nfs_page *req) 408 398 { 409 399 WARN_ON_ONCE(req->wb_this_page != req); 410 400 ··· 935 925 nfs_pageio_doio(desc); 936 926 if (desc->pg_error < 0) 937 927 return 0; 938 - desc->pg_moreio = 0; 939 928 if (desc->pg_recoalesce) 940 929 return 0; 941 930 /* retry add_request for this subreq */ ··· 981 972 desc->pg_count = 0; 982 973 desc->pg_base = 0; 983 974 desc->pg_recoalesce = 0; 975 + desc->pg_moreio = 0; 984 976 985 977 while (!list_empty(&head)) { 986 978 struct nfs_page *req;
+287 -58
fs/nfs/write.c
··· 46 46 static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops; 47 47 static const struct nfs_commit_completion_ops nfs_commit_completion_ops; 48 48 static const struct nfs_rw_ops nfs_rw_write_ops; 49 + static void nfs_clear_request_commit(struct nfs_page *req); 49 50 50 51 static struct kmem_cache *nfs_wdata_cachep; 51 52 static mempool_t *nfs_wdata_mempool; ··· 92 91 set_bit(NFS_CONTEXT_ERROR_WRITE, &ctx->flags); 93 92 } 94 93 94 + /* 95 + * nfs_page_find_head_request_locked - find head request associated with @page 96 + * 97 + * must be called while holding the inode lock. 98 + * 99 + * returns matching head request with reference held, or NULL if not found. 100 + */ 95 101 static struct nfs_page * 96 - nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page) 102 + nfs_page_find_head_request_locked(struct nfs_inode *nfsi, struct page *page) 97 103 { 98 104 struct nfs_page *req = NULL; 99 105 ··· 112 104 /* Linearly search the commit list for the correct req */ 113 105 list_for_each_entry_safe(freq, t, &nfsi->commit_info.list, wb_list) { 114 106 if (freq->wb_page == page) { 115 - req = freq; 107 + req = freq->wb_head; 116 108 break; 117 109 } 118 110 } 119 111 } 120 112 121 - if (req) 113 + if (req) { 114 + WARN_ON_ONCE(req->wb_head != req); 115 + 122 116 kref_get(&req->wb_kref); 117 + } 123 118 124 119 return req; 125 120 } 126 121 127 - static struct nfs_page *nfs_page_find_request(struct page *page) 122 + /* 123 + * nfs_page_find_head_request - find head request associated with @page 124 + * 125 + * returns matching head request with reference held, or NULL if not found. 
126 + */ 127 + static struct nfs_page *nfs_page_find_head_request(struct page *page) 128 128 { 129 129 struct inode *inode = page_file_mapping(page)->host; 130 130 struct nfs_page *req = NULL; 131 131 132 132 spin_lock(&inode->i_lock); 133 - req = nfs_page_find_request_locked(NFS_I(inode), page); 133 + req = nfs_page_find_head_request_locked(NFS_I(inode), page); 134 134 spin_unlock(&inode->i_lock); 135 135 return req; 136 136 } ··· 290 274 clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC); 291 275 } 292 276 293 - static struct nfs_page *nfs_find_and_lock_request(struct page *page, bool nonblock) 277 + 278 + /* nfs_page_group_clear_bits 279 + * @req - an nfs request 280 + * clears all page group related bits from @req 281 + */ 282 + static void 283 + nfs_page_group_clear_bits(struct nfs_page *req) 294 284 { 295 - struct inode *inode = page_file_mapping(page)->host; 296 - struct nfs_page *req; 285 + clear_bit(PG_TEARDOWN, &req->wb_flags); 286 + clear_bit(PG_UNLOCKPAGE, &req->wb_flags); 287 + clear_bit(PG_UPTODATE, &req->wb_flags); 288 + clear_bit(PG_WB_END, &req->wb_flags); 289 + clear_bit(PG_REMOVE, &req->wb_flags); 290 + } 291 + 292 + 293 + /* 294 + * nfs_unroll_locks_and_wait - unlock all newly locked reqs and wait on @req 295 + * 296 + * this is a helper function for nfs_lock_and_join_requests 297 + * 298 + * @inode - inode associated with request page group, must be holding inode lock 299 + * @head - head request of page group, must be holding head lock 300 + * @req - request that couldn't lock and needs to wait on the req bit lock 301 + * @nonblock - if true, don't actually wait 302 + * 303 + * NOTE: this must be called holding page_group bit lock and inode spin lock 304 + * and BOTH will be released before returning. 305 + * 306 + * returns 0 on success, < 0 on error. 
307 + */ 308 + static int 309 + nfs_unroll_locks_and_wait(struct inode *inode, struct nfs_page *head, 310 + struct nfs_page *req, bool nonblock) 311 + __releases(&inode->i_lock) 312 + { 313 + struct nfs_page *tmp; 297 314 int ret; 298 315 299 - spin_lock(&inode->i_lock); 300 - for (;;) { 301 - req = nfs_page_find_request_locked(NFS_I(inode), page); 302 - if (req == NULL) 303 - break; 304 - if (nfs_lock_request(req)) 305 - break; 306 - /* Note: If we hold the page lock, as is the case in nfs_writepage, 307 - * then the call to nfs_lock_request() will always 308 - * succeed provided that someone hasn't already marked the 309 - * request as dirty (in which case we don't care). 310 - */ 311 - spin_unlock(&inode->i_lock); 312 - if (!nonblock) 313 - ret = nfs_wait_on_request(req); 314 - else 315 - ret = -EAGAIN; 316 - nfs_release_request(req); 317 - if (ret != 0) 318 - return ERR_PTR(ret); 319 - spin_lock(&inode->i_lock); 320 - } 316 + /* relinquish all the locks successfully grabbed this run */ 317 + for (tmp = head ; tmp != req; tmp = tmp->wb_this_page) 318 + nfs_unlock_request(tmp); 319 + 320 + WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags)); 321 + 322 + /* grab a ref on the request that will be waited on */ 323 + kref_get(&req->wb_kref); 324 + 325 + nfs_page_group_unlock(head); 321 326 spin_unlock(&inode->i_lock); 322 - return req; 327 + 328 + /* release ref from nfs_page_find_head_request_locked */ 329 + nfs_release_request(head); 330 + 331 + if (!nonblock) 332 + ret = nfs_wait_on_request(req); 333 + else 334 + ret = -EAGAIN; 335 + nfs_release_request(req); 336 + 337 + return ret; 338 + } 339 + 340 + /* 341 + * nfs_destroy_unlinked_subrequests - destroy recently unlinked subrequests 342 + * 343 + * @destroy_list - request list (using wb_this_page) terminated by @old_head 344 + * @old_head - the old head of the list 345 + * 346 + * All subrequests must be locked and removed from all lists, so at this point 347 + * they are only "active" in this function, and 
possibly in nfs_wait_on_request 348 + * with a reference held by some other context. 349 + */ 350 + static void 351 + nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list, 352 + struct nfs_page *old_head) 353 + { 354 + while (destroy_list) { 355 + struct nfs_page *subreq = destroy_list; 356 + 357 + destroy_list = (subreq->wb_this_page == old_head) ? 358 + NULL : subreq->wb_this_page; 359 + 360 + WARN_ON_ONCE(old_head != subreq->wb_head); 361 + 362 + /* make sure old group is not used */ 363 + subreq->wb_head = subreq; 364 + subreq->wb_this_page = subreq; 365 + 366 + nfs_clear_request_commit(subreq); 367 + 368 + /* subreq is now totally disconnected from page group or any 369 + * write / commit lists. last chance to wake any waiters */ 370 + nfs_unlock_request(subreq); 371 + 372 + if (!test_bit(PG_TEARDOWN, &subreq->wb_flags)) { 373 + /* release ref on old head request */ 374 + nfs_release_request(old_head); 375 + 376 + nfs_page_group_clear_bits(subreq); 377 + 378 + /* release the PG_INODE_REF reference */ 379 + if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags)) 380 + nfs_release_request(subreq); 381 + else 382 + WARN_ON_ONCE(1); 383 + } else { 384 + WARN_ON_ONCE(test_bit(PG_CLEAN, &subreq->wb_flags)); 385 + /* zombie requests have already released the last 386 + * reference and were waiting on the rest of the 387 + * group to complete. Since it's no longer part of a 388 + * group, simply free the request */ 389 + nfs_page_group_clear_bits(subreq); 390 + nfs_free_request(subreq); 391 + } 392 + } 393 + } 394 + 395 + /* 396 + * nfs_lock_and_join_requests - join all subreqs to the head req and return 397 + * a locked reference, cancelling any pending 398 + * operations for this page. 
399 + * 400 + * @page - the page used to lookup the "page group" of nfs_page structures 401 + * @nonblock - if true, don't block waiting for request locks 402 + * 403 + * This function joins all sub requests to the head request by first 404 + * locking all requests in the group, cancelling any pending operations 405 + * and finally updating the head request to cover the whole range covered by 406 + * the (former) group. All subrequests are removed from any write or commit 407 + * lists, unlinked from the group and destroyed. 408 + * 409 + * Returns a locked, referenced pointer to the head request - which after 410 + * this call is guaranteed to be the only request associated with the page. 411 + * Returns NULL if no requests are found for @page, or a ERR_PTR if an 412 + * error was encountered. 413 + */ 414 + static struct nfs_page * 415 + nfs_lock_and_join_requests(struct page *page, bool nonblock) 416 + { 417 + struct inode *inode = page_file_mapping(page)->host; 418 + struct nfs_page *head, *subreq; 419 + struct nfs_page *destroy_list = NULL; 420 + unsigned int total_bytes; 421 + int ret; 422 + 423 + try_again: 424 + total_bytes = 0; 425 + 426 + WARN_ON_ONCE(destroy_list); 427 + 428 + spin_lock(&inode->i_lock); 429 + 430 + /* 431 + * A reference is taken only on the head request which acts as a 432 + * reference to the whole page group - the group will not be destroyed 433 + * until the head reference is released. 434 + */ 435 + head = nfs_page_find_head_request_locked(NFS_I(inode), page); 436 + 437 + if (!head) { 438 + spin_unlock(&inode->i_lock); 439 + return NULL; 440 + } 441 + 442 + /* lock each request in the page group */ 443 + nfs_page_group_lock(head); 444 + subreq = head; 445 + do { 446 + /* 447 + * Subrequests are always contiguous, non overlapping 448 + * and in order. If not, it's a programming error. 
449 + */ 450 + WARN_ON_ONCE(subreq->wb_offset != 451 + (head->wb_offset + total_bytes)); 452 + 453 + /* keep track of how many bytes this group covers */ 454 + total_bytes += subreq->wb_bytes; 455 + 456 + if (!nfs_lock_request(subreq)) { 457 + /* releases page group bit lock and 458 + * inode spin lock and all references */ 459 + ret = nfs_unroll_locks_and_wait(inode, head, 460 + subreq, nonblock); 461 + 462 + if (ret == 0) 463 + goto try_again; 464 + 465 + return ERR_PTR(ret); 466 + } 467 + 468 + subreq = subreq->wb_this_page; 469 + } while (subreq != head); 470 + 471 + /* Now that all requests are locked, make sure they aren't on any list. 472 + * Commit list removal accounting is done after locks are dropped */ 473 + subreq = head; 474 + do { 475 + nfs_list_remove_request(subreq); 476 + subreq = subreq->wb_this_page; 477 + } while (subreq != head); 478 + 479 + /* unlink subrequests from head, destroy them later */ 480 + if (head->wb_this_page != head) { 481 + /* destroy list will be terminated by head */ 482 + destroy_list = head->wb_this_page; 483 + head->wb_this_page = head; 484 + 485 + /* change head request to cover whole range that 486 + * the former page group covered */ 487 + head->wb_bytes = total_bytes; 488 + } 489 + 490 + /* 491 + * prepare head request to be added to new pgio descriptor 492 + */ 493 + nfs_page_group_clear_bits(head); 494 + 495 + /* 496 + * some part of the group was still on the inode list - otherwise 497 + * the group wouldn't be involved in async write. 498 + * grab a reference for the head request, iff it needs one. 
499 + */ 500 + if (!test_and_set_bit(PG_INODE_REF, &head->wb_flags)) 501 + kref_get(&head->wb_kref); 502 + 503 + nfs_page_group_unlock(head); 504 + 505 + /* drop lock to clear_request_commit the head req and clean up 506 + * requests on destroy list */ 507 + spin_unlock(&inode->i_lock); 508 + 509 + nfs_destroy_unlinked_subrequests(destroy_list, head); 510 + 511 + /* clean up commit list state */ 512 + nfs_clear_request_commit(head); 513 + 514 + /* still holds ref on head from nfs_page_find_head_request_locked 515 + * and still has lock on head from lock loop */ 516 + return head; 323 517 } 324 518 325 519 /* ··· 542 316 struct nfs_page *req; 543 317 int ret = 0; 544 318 545 - req = nfs_find_and_lock_request(page, nonblock); 319 + req = nfs_lock_and_join_requests(page, nonblock); 546 320 if (!req) 547 321 goto out; 548 322 ret = PTR_ERR(req); ··· 674 448 set_page_private(req->wb_page, (unsigned long)req); 675 449 } 676 450 nfsi->npages++; 677 - set_bit(PG_INODE_REF, &req->wb_flags); 451 + /* this a head request for a page group - mark it as having an 452 + * extra reference so sub groups can follow suit */ 453 + WARN_ON(test_and_set_bit(PG_INODE_REF, &req->wb_flags)); 678 454 kref_get(&req->wb_kref); 679 455 spin_unlock(&inode->i_lock); 680 456 } ··· 702 474 nfsi->npages--; 703 475 spin_unlock(&inode->i_lock); 704 476 } 705 - nfs_release_request(req); 477 + 478 + if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) 479 + nfs_release_request(req); 706 480 } 707 481 708 482 static void ··· 868 638 { 869 639 struct nfs_commit_info cinfo; 870 640 unsigned long bytes = 0; 871 - bool do_destroy; 872 641 873 642 if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) 874 643 goto out; ··· 897 668 next: 898 669 nfs_unlock_request(req); 899 670 nfs_end_page_writeback(req); 900 - do_destroy = !test_bit(NFS_IOHDR_NEED_COMMIT, &hdr->flags); 901 671 nfs_release_request(req); 902 672 } 903 673 out: ··· 997 769 spin_lock(&inode->i_lock); 998 770 999 771 for (;;) { 1000 - req = 
nfs_page_find_request_locked(NFS_I(inode), page); 772 + req = nfs_page_find_head_request_locked(NFS_I(inode), page); 1001 773 if (req == NULL) 1002 774 goto out_unlock; 1003 775 ··· 1105 877 * dropped page. 1106 878 */ 1107 879 do { 1108 - req = nfs_page_find_request(page); 880 + req = nfs_page_find_head_request(page); 1109 881 if (req == NULL) 1110 882 return 0; 1111 883 l_ctx = req->wb_lock_context; ··· 1797 1569 struct nfs_page *req; 1798 1570 int ret = 0; 1799 1571 1800 - for (;;) { 1801 - wait_on_page_writeback(page); 1802 - req = nfs_page_find_request(page); 1803 - if (req == NULL) 1804 - break; 1805 - if (nfs_lock_request(req)) { 1806 - nfs_clear_request_commit(req); 1807 - nfs_inode_remove_request(req); 1808 - /* 1809 - * In case nfs_inode_remove_request has marked the 1810 - * page as being dirty 1811 - */ 1812 - cancel_dirty_page(page, PAGE_CACHE_SIZE); 1813 - nfs_unlock_and_release_request(req); 1814 - break; 1815 - } 1816 - ret = nfs_wait_on_request(req); 1817 - nfs_release_request(req); 1818 - if (ret < 0) 1819 - break; 1572 + wait_on_page_writeback(page); 1573 + 1574 + /* blocking call to cancel all requests and join to a single (head) 1575 + * request */ 1576 + req = nfs_lock_and_join_requests(page, false); 1577 + 1578 + if (IS_ERR(req)) { 1579 + ret = PTR_ERR(req); 1580 + } else if (req) { 1581 + /* all requests from this page have been cancelled by 1582 + * nfs_lock_and_join_requests, so just remove the head 1583 + * request from the inode / page_private pointer and 1584 + * release it */ 1585 + nfs_inode_remove_request(req); 1586 + /* 1587 + * In case nfs_inode_remove_request has marked the 1588 + * page as being dirty 1589 + */ 1590 + cancel_dirty_page(page, PAGE_CACHE_SIZE); 1591 + nfs_unlock_and_release_request(req); 1820 1592 } 1593 + 1821 1594 return ret; 1822 1595 } 1823 1596
+1 -1
fs/nfsd/nfs4xdr.c
··· 2641 2641 { 2642 2642 __be32 *p; 2643 2643 2644 - p = xdr_reserve_space(xdr, 6); 2644 + p = xdr_reserve_space(xdr, 20); 2645 2645 if (!p) 2646 2646 return NULL; 2647 2647 *p++ = htonl(2);
+2
fs/quota/dquot.c
··· 702 702 struct dquot *dquot; 703 703 unsigned long freed = 0; 704 704 705 + spin_lock(&dq_list_lock); 705 706 head = free_dquots.prev; 706 707 while (head != &free_dquots && sc->nr_to_scan) { 707 708 dquot = list_entry(head, struct dquot, dq_free); ··· 714 713 freed++; 715 714 head = free_dquots.prev; 716 715 } 716 + spin_unlock(&dq_list_lock); 717 717 return freed; 718 718 } 719 719
+2 -5
fs/xfs/xfs_bmap.c
··· 4298 4298 } 4299 4299 4300 4300 4301 - int 4302 - __xfs_bmapi_allocate( 4301 + static int 4302 + xfs_bmapi_allocate( 4303 4303 struct xfs_bmalloca *bma) 4304 4304 { 4305 4305 struct xfs_mount *mp = bma->ip->i_mount; ··· 4577 4577 bma.userdata = 0; 4578 4578 bma.flist = flist; 4579 4579 bma.firstblock = firstblock; 4580 - 4581 - if (flags & XFS_BMAPI_STACK_SWITCH) 4582 - bma.stack_switch = 1; 4583 4580 4584 4581 while (bno < end && n < *nmap) { 4585 4582 inhole = eof || bma.got.br_startoff > bno;
+1 -3
fs/xfs/xfs_bmap.h
··· 77 77 * from written to unwritten, otherwise convert from unwritten to written. 78 78 */ 79 79 #define XFS_BMAPI_CONVERT 0x040 80 - #define XFS_BMAPI_STACK_SWITCH 0x080 81 80 82 81 #define XFS_BMAPI_FLAGS \ 83 82 { XFS_BMAPI_ENTIRE, "ENTIRE" }, \ ··· 85 86 { XFS_BMAPI_PREALLOC, "PREALLOC" }, \ 86 87 { XFS_BMAPI_IGSTATE, "IGSTATE" }, \ 87 88 { XFS_BMAPI_CONTIG, "CONTIG" }, \ 88 - { XFS_BMAPI_CONVERT, "CONVERT" }, \ 89 - { XFS_BMAPI_STACK_SWITCH, "STACK_SWITCH" } 89 + { XFS_BMAPI_CONVERT, "CONVERT" } 90 90 91 91 92 92 static inline int xfs_bmapi_aflag(int w)
-53
fs/xfs/xfs_bmap_util.c
··· 249 249 } 250 250 251 251 /* 252 - * Stack switching interfaces for allocation 253 - */ 254 - static void 255 - xfs_bmapi_allocate_worker( 256 - struct work_struct *work) 257 - { 258 - struct xfs_bmalloca *args = container_of(work, 259 - struct xfs_bmalloca, work); 260 - unsigned long pflags; 261 - unsigned long new_pflags = PF_FSTRANS; 262 - 263 - /* 264 - * we are in a transaction context here, but may also be doing work 265 - * in kswapd context, and hence we may need to inherit that state 266 - * temporarily to ensure that we don't block waiting for memory reclaim 267 - * in any way. 268 - */ 269 - if (args->kswapd) 270 - new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; 271 - 272 - current_set_flags_nested(&pflags, new_pflags); 273 - 274 - args->result = __xfs_bmapi_allocate(args); 275 - complete(args->done); 276 - 277 - current_restore_flags_nested(&pflags, new_pflags); 278 - } 279 - 280 - /* 281 - * Some allocation requests often come in with little stack to work on. Push 282 - * them off to a worker thread so there is lots of stack to use. Otherwise just 283 - * call directly to avoid the context switch overhead here. 284 - */ 285 - int 286 - xfs_bmapi_allocate( 287 - struct xfs_bmalloca *args) 288 - { 289 - DECLARE_COMPLETION_ONSTACK(done); 290 - 291 - if (!args->stack_switch) 292 - return __xfs_bmapi_allocate(args); 293 - 294 - 295 - args->done = &done; 296 - args->kswapd = current_is_kswapd(); 297 - INIT_WORK_ONSTACK(&args->work, xfs_bmapi_allocate_worker); 298 - queue_work(xfs_alloc_wq, &args->work); 299 - wait_for_completion(&done); 300 - destroy_work_on_stack(&args->work); 301 - return args->result; 302 - } 303 - 304 - /* 305 252 * Check if the endoff is outside the last extent. If so the caller will grow 306 253 * the allocation to a stripe unit boundary. All offsets are considered outside 307 254 * the end of file for an empty fork, so 1 is returned in *eof in that case.
-4
fs/xfs/xfs_bmap_util.h
··· 55 55 bool userdata;/* set if is user data */ 56 56 bool aeof; /* allocated space at eof */ 57 57 bool conv; /* overwriting unwritten extents */ 58 - bool stack_switch; 59 - bool kswapd; /* allocation in kswapd context */ 60 58 int flags; 61 59 struct completion *done; 62 60 struct work_struct work; ··· 64 66 int xfs_bmap_finish(struct xfs_trans **tp, struct xfs_bmap_free *flist, 65 67 int *committed); 66 68 int xfs_bmap_rtalloc(struct xfs_bmalloca *ap); 67 - int xfs_bmapi_allocate(struct xfs_bmalloca *args); 68 - int __xfs_bmapi_allocate(struct xfs_bmalloca *args); 69 69 int xfs_bmap_eof(struct xfs_inode *ip, xfs_fileoff_t endoff, 70 70 int whichfork, int *eof); 71 71 int xfs_bmap_count_blocks(struct xfs_trans *tp, struct xfs_inode *ip,
+81 -1
fs/xfs/xfs_btree.c
··· 33 33 #include "xfs_error.h" 34 34 #include "xfs_trace.h" 35 35 #include "xfs_cksum.h" 36 + #include "xfs_alloc.h" 36 37 37 38 /* 38 39 * Cursor allocation zone. ··· 2324 2323 * record (to be inserted into parent). 2325 2324 */ 2326 2325 STATIC int /* error */ 2327 - xfs_btree_split( 2326 + __xfs_btree_split( 2328 2327 struct xfs_btree_cur *cur, 2329 2328 int level, 2330 2329 union xfs_btree_ptr *ptrp, ··· 2503 2502 XFS_BTREE_TRACE_CURSOR(cur, XBT_ERROR); 2504 2503 return error; 2505 2504 } 2505 + 2506 + struct xfs_btree_split_args { 2507 + struct xfs_btree_cur *cur; 2508 + int level; 2509 + union xfs_btree_ptr *ptrp; 2510 + union xfs_btree_key *key; 2511 + struct xfs_btree_cur **curp; 2512 + int *stat; /* success/failure */ 2513 + int result; 2514 + bool kswapd; /* allocation in kswapd context */ 2515 + struct completion *done; 2516 + struct work_struct work; 2517 + }; 2518 + 2519 + /* 2520 + * Stack switching interfaces for allocation 2521 + */ 2522 + static void 2523 + xfs_btree_split_worker( 2524 + struct work_struct *work) 2525 + { 2526 + struct xfs_btree_split_args *args = container_of(work, 2527 + struct xfs_btree_split_args, work); 2528 + unsigned long pflags; 2529 + unsigned long new_pflags = PF_FSTRANS; 2530 + 2531 + /* 2532 + * we are in a transaction context here, but may also be doing work 2533 + * in kswapd context, and hence we may need to inherit that state 2534 + * temporarily to ensure that we don't block waiting for memory reclaim 2535 + * in any way. 2536 + */ 2537 + if (args->kswapd) 2538 + new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; 2539 + 2540 + current_set_flags_nested(&pflags, new_pflags); 2541 + 2542 + args->result = __xfs_btree_split(args->cur, args->level, args->ptrp, 2543 + args->key, args->curp, args->stat); 2544 + complete(args->done); 2545 + 2546 + current_restore_flags_nested(&pflags, new_pflags); 2547 + } 2548 + 2549 + /* 2550 + * BMBT split requests often come in with little stack to work on. Push 2551 + * them off to a worker thread so there is lots of stack to use. For the other 2552 + * btree types, just call directly to avoid the context switch overhead here. 2553 + */ 2554 + STATIC int /* error */ 2555 + xfs_btree_split( 2556 + struct xfs_btree_cur *cur, 2557 + int level, 2558 + union xfs_btree_ptr *ptrp, 2559 + union xfs_btree_key *key, 2560 + struct xfs_btree_cur **curp, 2561 + int *stat) /* success/failure */ 2562 + { 2563 + struct xfs_btree_split_args args; 2564 + DECLARE_COMPLETION_ONSTACK(done); 2565 + 2566 + if (cur->bc_btnum != XFS_BTNUM_BMAP) 2567 + return __xfs_btree_split(cur, level, ptrp, key, curp, stat); 2568 + 2569 + args.cur = cur; 2570 + args.level = level; 2571 + args.ptrp = ptrp; 2572 + args.key = key; 2573 + args.curp = curp; 2574 + args.stat = stat; 2575 + args.done = &done; 2576 + args.kswapd = current_is_kswapd(); 2577 + INIT_WORK_ONSTACK(&args.work, xfs_btree_split_worker); 2578 + queue_work(xfs_alloc_wq, &args.work); 2579 + wait_for_completion(&done); 2580 + destroy_work_on_stack(&args.work); 2581 + return args.result; 2582 + } 2583 + 2506 2584 2507 2585 /* 2508 2586 * Copy the old inode root contents into a real block and make the
+1 -2
fs/xfs/xfs_iomap.c
··· 749 749 * pointer that the caller gave to us. 750 750 */ 751 751 error = xfs_bmapi_write(tp, ip, map_start_fsb, 752 - count_fsb, 753 - XFS_BMAPI_STACK_SWITCH, 752 + count_fsb, 0, 754 753 &first_block, 1, 755 754 imap, &nimaps, &free_list); 756 755 if (error)
+21 -4
fs/xfs/xfs_sb.c
··· 483 483 } 484 484 485 485 /* 486 - * GQUOTINO and PQUOTINO cannot be used together in versions 487 - * of superblock that do not have pquotino. from->sb_flags 488 - * tells us which quota is active and should be copied to 489 - * disk. 486 + * GQUOTINO and PQUOTINO cannot be used together in versions of 487 + * superblock that do not have pquotino. from->sb_flags tells us which 488 + * quota is active and should be copied to disk. If neither are active, 489 + * make sure we write NULLFSINO to the sb_gquotino field as a quota 490 + * inode value of "0" is invalid when the XFS_SB_VERSION_QUOTA feature 491 + * bit is set. 492 + * 493 + * Note that we don't need to handle the sb_uquotino or sb_pquotino here 494 + * as they do not require any translation. Hence the main sb field loop 495 + * will write them appropriately from the in-core superblock. 490 496 */ 491 497 if ((*fields & XFS_SB_GQUOTINO) && 492 498 (from->sb_qflags & XFS_GQUOTA_ACCT)) ··· 500 494 else if ((*fields & XFS_SB_PQUOTINO) && 501 495 (from->sb_qflags & XFS_PQUOTA_ACCT)) 502 496 to->sb_gquotino = cpu_to_be64(from->sb_pquotino); 497 + else { 498 + /* 499 + * We can't rely on just the fields being logged to tell us 500 + * that it is safe to write NULLFSINO - we should only do that 501 + * if quotas are not actually enabled. Hence only write 502 + * NULLFSINO if both in-core quota inodes are NULL. 503 + */ 504 + if (from->sb_gquotino == NULLFSINO && 505 + from->sb_pquotino == NULLFSINO) 506 + to->sb_gquotino = cpu_to_be64(NULLFSINO); 507 + } 503 508 504 509 *fields &= ~(XFS_SB_PQUOTINO | XFS_SB_GQUOTINO); 505 510 }
+2
include/acpi/video.h
··· 22 22 extern void acpi_video_unregister_backlight(void); 23 23 extern int acpi_video_get_edid(struct acpi_device *device, int type, 24 24 int device_id, void **edid); 25 + extern bool acpi_video_verify_backlight_support(void); 25 26 #else 26 27 static inline int acpi_video_register(void) { return 0; } 27 28 static inline void acpi_video_unregister(void) { return; } ··· 32 31 { 33 32 return -ENODEV; 34 33 } 34 + static inline bool acpi_video_verify_backlight_support(void) { return false; } 35 35 #endif 36 36 37 37 #endif
+1 -1
include/asm-generic/vmlinux.lds.h
··· 693 693 . = ALIGN(PAGE_SIZE); \ 694 694 *(.data..percpu..page_aligned) \ 695 695 . = ALIGN(cacheline); \ 696 - *(.data..percpu..readmostly) \ 696 + *(.data..percpu..read_mostly) \ 697 697 . = ALIGN(cacheline); \ 698 698 *(.data..percpu) \ 699 699 *(.data..percpu..shared_aligned) \
+2 -1
include/dt-bindings/clock/exynos5420.h
··· 63 63 #define CLK_SCLK_MPHY_IXTAL24 161 64 64 65 65 /* gate clocks */ 66 - #define CLK_ACLK66_PERIC 256 67 66 #define CLK_UART0 257 68 67 #define CLK_UART1 258 69 68 #define CLK_UART2 259 ··· 202 203 #define CLK_MOUT_G3D 641 203 204 #define CLK_MOUT_VPLL 642 204 205 #define CLK_MOUT_MAUDIO0 643 206 + #define CLK_MOUT_USER_ACLK333 644 207 + #define CLK_MOUT_SW_ACLK333 645 205 208 206 209 /* divider clocks */ 207 210 #define CLK_DOUT_PIXEL 768
+2 -2
include/linux/cpufreq.h
··· 482 482 *********************************************************************/ 483 483 484 484 /* Special Values of .frequency field */ 485 - #define CPUFREQ_ENTRY_INVALID ~0 486 - #define CPUFREQ_TABLE_END ~1 485 + #define CPUFREQ_ENTRY_INVALID ~0u 486 + #define CPUFREQ_TABLE_END ~1u 487 487 /* Special Values of .flags field */ 488 488 #define CPUFREQ_BOOST_FREQ (1 << 0) 489 489
+1
include/linux/kernfs.h
··· 305 305 struct kernfs_root *root, unsigned long magic, 306 306 bool *new_sb_created, const void *ns); 307 307 void kernfs_kill_sb(struct super_block *sb); 308 + struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns); 308 309 309 310 void kernfs_init(void); 310 311
+2 -2
include/linux/mlx4/device.h
··· 578 578 u32 cons_index; 579 579 580 580 u16 irq; 581 - bool irq_affinity_change; 582 - 583 581 __be32 *set_ci_db; 584 582 __be32 *arm_db; 585 583 int arm_sn; ··· 1164 1166 int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, 1165 1167 int *vector); 1166 1168 void mlx4_release_eq(struct mlx4_dev *dev, int vec); 1169 + 1170 + int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec); 1167 1171 1168 1172 int mlx4_get_phys_port_id(struct mlx4_dev *dev); 1169 1173 int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port);
+2 -2
include/linux/mutex.h
··· 17 17 #include <linux/lockdep.h> 18 18 #include <linux/atomic.h> 19 19 #include <asm/processor.h> 20 + #include <linux/osq_lock.h> 20 21 21 22 /* 22 23 * Simple, straightforward mutexes with strict semantics: ··· 47 46 * - detects multi-task circular deadlocks and prints out all affected 48 47 * locks and tasks (and only those tasks) 49 48 */ 50 - struct optimistic_spin_queue; 51 49 struct mutex { 52 50 /* 1: unlocked, 0: locked, negative: locked, possible waiters */ 53 51 atomic_t count; ··· 56 56 struct task_struct *owner; 57 57 #endif 58 58 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER 59 - struct optimistic_spin_queue *osq; /* Spinner MCS lock */ 59 + struct optimistic_spin_queue osq; /* Spinner MCS lock */ 60 60 #endif 61 61 #ifdef CONFIG_DEBUG_MUTEXES 62 62 const char *name;
-8
include/linux/of_mdio.h
··· 25 25 26 26 extern struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np); 27 27 28 - extern void of_mdiobus_link_phydev(struct mii_bus *mdio, 29 - struct phy_device *phydev); 30 - 31 28 #else /* CONFIG_OF */ 32 29 static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) 33 30 { ··· 59 62 static inline struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np) 60 63 { 61 64 return NULL; 62 - } 63 - 64 - static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, 65 - struct phy_device *phydev) 66 - { 67 65 } 68 66 #endif /* CONFIG_OF */ 69 67
+27
include/linux/osq_lock.h
··· 1 + #ifndef __LINUX_OSQ_LOCK_H 2 + #define __LINUX_OSQ_LOCK_H 3 + 4 + /* 5 + * An MCS like lock especially tailored for optimistic spinning for sleeping 6 + * lock implementations (mutex, rwsem, etc). 7 + */ 8 + 9 + #define OSQ_UNLOCKED_VAL (0) 10 + 11 + struct optimistic_spin_queue { 12 + /* 13 + * Stores an encoded value of the CPU # of the tail node in the queue. 14 + * If the queue is empty, then it's set to OSQ_UNLOCKED_VAL. 15 + */ 16 + atomic_t tail; 17 + }; 18 + 19 + /* Init macro and function. */ 20 + #define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) } 21 + 22 + static inline void osq_lock_init(struct optimistic_spin_queue *lock) 23 + { 24 + atomic_set(&lock->tail, OSQ_UNLOCKED_VAL); 25 + } 26 + 27 + #endif
+2 -2
include/linux/percpu-defs.h
··· 146 146 * Declaration/definition used for per-CPU variables that must be read mostly. 147 147 */ 148 148 #define DECLARE_PER_CPU_READ_MOSTLY(type, name) \ 149 - DECLARE_PER_CPU_SECTION(type, name, "..readmostly") 149 + DECLARE_PER_CPU_SECTION(type, name, "..read_mostly") 150 150 151 151 #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \ 152 - DEFINE_PER_CPU_SECTION(type, name, "..readmostly") 152 + DEFINE_PER_CPU_SECTION(type, name, "..read_mostly") 153 153 154 154 /* 155 155 * Intermodule exports for per-CPU variables. sparse forgets about
+10 -36
include/linux/rcupdate.h
··· 44 44 #include <linux/debugobjects.h> 45 45 #include <linux/bug.h> 46 46 #include <linux/compiler.h> 47 - #include <linux/percpu.h> 48 47 #include <asm/barrier.h> 49 48 50 49 extern int rcu_expedited; /* for sysctl */ ··· 299 300 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */ 300 301 301 302 /* 302 - * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings. 303 - */ 304 - 305 - #define RCU_COND_RESCHED_LIM 256 /* ms vs. 100s of ms. */ 306 - DECLARE_PER_CPU(int, rcu_cond_resched_count); 307 - void rcu_resched(void); 308 - 309 - /* 310 - * Is it time to report RCU quiescent states? 311 - * 312 - * Note unsynchronized access to rcu_cond_resched_count. Yes, we might 313 - * increment some random CPU's count, and possibly also load the result from 314 - * yet another CPU's count. We might even clobber some other CPU's attempt 315 - * to zero its counter. This is all OK because the goal is not precision, 316 - * but rather reasonable amortization of rcu_note_context_switch() overhead 317 - * and extremely high probability of avoiding RCU CPU stall warnings. 318 - * Note that this function has to be preempted in just the wrong place, 319 - * many thousands of times in a row, for anything bad to happen. 320 - */ 321 - static inline bool rcu_should_resched(void) 322 - { 323 - return raw_cpu_inc_return(rcu_cond_resched_count) >= 324 - RCU_COND_RESCHED_LIM; 325 - } 326 - 327 - /* 328 - * Report quiscent states to RCU if it is time to do so. 329 - */ 330 - static inline void rcu_cond_resched(void) 331 - { 332 - if (unlikely(rcu_should_resched())) 333 - rcu_resched(); 334 - } 335 - 336 - /* 337 303 * Infrastructure to implement the synchronize_() primitives in 338 304 * TREE_RCU and rcu_barrier_() primitives in TINY_RCU. 339 305 */ ··· 322 358 * initialization. 
323 359 */ 324 360 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD 361 + void init_rcu_head(struct rcu_head *head); 362 + void destroy_rcu_head(struct rcu_head *head); 325 363 void init_rcu_head_on_stack(struct rcu_head *head); 326 364 void destroy_rcu_head_on_stack(struct rcu_head *head); 327 365 #else /* !CONFIG_DEBUG_OBJECTS_RCU_HEAD */ 366 + static inline void init_rcu_head(struct rcu_head *head) 367 + { 368 + } 369 + 370 + static inline void destroy_rcu_head(struct rcu_head *head) 371 + { 372 + } 373 + 328 374 static inline void init_rcu_head_on_stack(struct rcu_head *head) 329 375 { 330 376 }
+4 -4
include/linux/rwsem-spinlock.h
··· 15 15 #ifdef __KERNEL__ 16 16 /* 17 17 * the rw-semaphore definition 18 - * - if activity is 0 then there are no active readers or writers 19 - * - if activity is +ve then that is the number of active readers 20 - * - if activity is -1 then there is one active writer 18 + * - if count is 0 then there are no active readers or writers 19 + * - if count is +ve then that is the number of active readers 20 + * - if count is -1 then there is one active writer 21 21 * - if wait_list is not empty, then there are processes waiting for the semaphore 22 22 */ 23 23 struct rw_semaphore { 24 - __s32 activity; 24 + __s32 count; 25 25 raw_spinlock_t wait_lock; 26 26 struct list_head wait_list; 27 27 #ifdef CONFIG_DEBUG_LOCK_ALLOC
+16 -18
include/linux/rwsem.h
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/list.h> 15 15 #include <linux/spinlock.h> 16 - 17 16 #include <linux/atomic.h> 17 + #ifdef CONFIG_RWSEM_SPIN_ON_OWNER 18 + #include <linux/osq_lock.h> 19 + #endif 18 20 19 - struct optimistic_spin_queue; 20 21 struct rw_semaphore; 21 22 22 23 #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK ··· 26 25 /* All arch specific implementations share the same struct */ 27 26 struct rw_semaphore { 28 27 long count; 29 - raw_spinlock_t wait_lock; 30 28 struct list_head wait_list; 31 - #ifdef CONFIG_SMP 29 + raw_spinlock_t wait_lock; 30 + #ifdef CONFIG_RWSEM_SPIN_ON_OWNER 31 + struct optimistic_spin_queue osq; /* spinner MCS lock */ 32 32 /* 33 33 * Write owner. Used as a speculative check to see 34 34 * if the owner is running on the cpu. 35 35 */ 36 36 struct task_struct *owner; 37 - struct optimistic_spin_queue *osq; /* spinner MCS lock */ 38 37 #endif 39 38 #ifdef CONFIG_DEBUG_LOCK_ALLOC 40 39 struct lockdep_map dep_map; ··· 65 64 # define __RWSEM_DEP_MAP_INIT(lockname) 66 65 #endif 67 66 68 - #if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK) 69 - #define __RWSEM_INITIALIZER(name) \ 70 - { RWSEM_UNLOCKED_VALUE, \ 71 - __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \ 72 - LIST_HEAD_INIT((name).wait_list), \ 73 - NULL, /* owner */ \ 74 - NULL /* mcs lock */ \ 75 - __RWSEM_DEP_MAP_INIT(name) } 67 + #ifdef CONFIG_RWSEM_SPIN_ON_OWNER 68 + #define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL 76 69 #else 77 - #define __RWSEM_INITIALIZER(name) \ 78 - { RWSEM_UNLOCKED_VALUE, \ 79 - __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \ 80 - LIST_HEAD_INIT((name).wait_list) \ 81 - __RWSEM_DEP_MAP_INIT(name) } 70 + #define __RWSEM_OPT_INIT(lockname) 82 71 #endif 72 + 73 + #define __RWSEM_INITIALIZER(name) \ 74 + { .count = RWSEM_UNLOCKED_VALUE, \ 75 + .wait_list = LIST_HEAD_INIT((name).wait_list), \ 76 + .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock) \ 77 + __RWSEM_OPT_INIT(name) \ 78 + __RWSEM_DEP_MAP_INIT(name) } 83 79 84 80 #define DECLARE_RWSEM(name) \ 85 81 struct rw_semaphore name = __RWSEM_INITIALIZER(name)
+4 -4
include/linux/sched.h
··· 872 872 #define SD_NUMA 0x4000 /* cross-node balancing */ 873 873 874 874 #ifdef CONFIG_SCHED_SMT 875 - static inline const int cpu_smt_flags(void) 875 + static inline int cpu_smt_flags(void) 876 876 { 877 877 return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; 878 878 } 879 879 #endif 880 880 881 881 #ifdef CONFIG_SCHED_MC 882 - static inline const int cpu_core_flags(void) 882 + static inline int cpu_core_flags(void) 883 883 { 884 884 return SD_SHARE_PKG_RESOURCES; 885 885 } 886 886 #endif 887 887 888 888 #ifdef CONFIG_NUMA 889 - static inline const int cpu_numa_flags(void) 889 + static inline int cpu_numa_flags(void) 890 890 { 891 891 return SD_NUMA; 892 892 } ··· 999 999 bool cpus_share_cache(int this_cpu, int that_cpu); 1000 1000 1001 1001 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu); 1002 - typedef const int (*sched_domain_flags_f)(void); 1002 + typedef int (*sched_domain_flags_f)(void); 1003 1003 1004 1004 #define SDTL_OVERLAP 0x01 1005 1005
-1
include/net/neighbour.h
··· 203 203 void (*proxy_redo)(struct sk_buff *skb); 204 204 char *id; 205 205 struct neigh_parms parms; 206 - /* HACK. gc_* should follow parms without a gap! */ 207 206 int gc_interval; 208 207 int gc_thresh1; 209 208 int gc_thresh2;
+1 -1
include/net/netns/ieee802154_6lowpan.h
··· 16 16 struct netns_ieee802154_lowpan { 17 17 struct netns_sysctl_lowpan sysctl; 18 18 struct netns_frags frags; 19 - u16 max_dsize; 19 + int max_dsize; 20 20 }; 21 21 22 22 #endif
+6 -6
include/net/sock.h
··· 1768 1768 static inline void 1769 1769 sk_dst_set(struct sock *sk, struct dst_entry *dst) 1770 1770 { 1771 - spin_lock(&sk->sk_dst_lock); 1772 - __sk_dst_set(sk, dst); 1773 - spin_unlock(&sk->sk_dst_lock); 1771 + struct dst_entry *old_dst; 1772 + 1773 + sk_tx_queue_clear(sk); 1774 + old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); 1775 + dst_release(old_dst); 1774 1776 } 1775 1777 1776 1778 static inline void ··· 1784 1782 static inline void 1785 1783 sk_dst_reset(struct sock *sk) 1786 1784 { 1787 - spin_lock(&sk->sk_dst_lock); 1788 - __sk_dst_reset(sk); 1789 - spin_unlock(&sk->sk_dst_lock); 1785 + sk_dst_set(sk, NULL); 1790 1786 } 1791 1787 1792 1788 struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie);
+8 -1
kernel/Kconfig.locks
··· 220 220 221 221 endif 222 222 223 + config ARCH_SUPPORTS_ATOMIC_RMW 224 + bool 225 + 223 226 config MUTEX_SPIN_ON_OWNER 224 227 def_bool y 225 - depends on SMP && !DEBUG_MUTEXES 228 + depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW 229 + 230 + config RWSEM_SPIN_ON_OWNER 231 + def_bool y 232 + depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW 226 233 227 234 config ARCH_USE_QUEUE_RWLOCK 228 235 bool
+50 -8
kernel/cgroup.c
··· 1648 1648 int flags, const char *unused_dev_name, 1649 1649 void *data) 1650 1650 { 1651 + struct super_block *pinned_sb = NULL; 1652 + struct cgroup_subsys *ss; 1651 1653 struct cgroup_root *root; 1652 1654 struct cgroup_sb_opts opts; 1653 1655 struct dentry *dentry; 1654 1656 int ret; 1657 + int i; 1655 1658 bool new_sb; 1656 1659 1657 1660 /* ··· 1678 1675 cgroup_get(&root->cgrp); 1679 1676 ret = 0; 1680 1677 goto out_unlock; 1678 + } 1679 + 1680 + /* 1681 + * Destruction of cgroup root is asynchronous, so subsystems may 1682 + * still be dying after the previous unmount. Let's drain the 1683 + * dying subsystems. We just need to ensure that the ones 1684 + * unmounted previously finish dying and don't care about new ones 1685 + * starting. Testing ref liveliness is good enough. 1686 + */ 1687 + for_each_subsys(ss, i) { 1688 + if (!(opts.subsys_mask & (1 << i)) || 1689 + ss->root == &cgrp_dfl_root) 1690 + continue; 1691 + 1692 + if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) { 1693 + mutex_unlock(&cgroup_mutex); 1694 + msleep(10); 1695 + ret = restart_syscall(); 1696 + goto out_free; 1697 + } 1698 + cgroup_put(&ss->root->cgrp); 1681 1699 } 1682 1700 1683 1701 for_each_root(root) { ··· 1741 1717 } 1742 1718 1743 1719 /* 1744 - * A root's lifetime is governed by its root cgroup. 1745 - * tryget_live failure indicate that the root is being 1746 - * destroyed. Wait for destruction to complete so that the 1747 - * subsystems are free. We can use wait_queue for the wait 1748 - * but this path is super cold. Let's just sleep for a bit 1749 - * and retry. 1720 + * We want to reuse @root whose lifetime is governed by its 1721 + * ->cgrp. Let's check whether @root is alive and keep it 1722 + * that way. As cgroup_kill_sb() can happen anytime, we 1723 + * want to block it by pinning the sb so that @root doesn't 1724 + * get killed before mount is complete. 
1725 + * 1726 + * With the sb pinned, tryget_live can reliably indicate 1727 + * whether @root can be reused. If it's being killed, 1728 + * drain it. We can use wait_queue for the wait but this 1729 + * path is super cold. Let's just sleep a bit and retry. 1750 1730 */ 1751 - if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { 1731 + pinned_sb = kernfs_pin_sb(root->kf_root, NULL); 1732 + if (IS_ERR(pinned_sb) || 1733 + !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { 1752 1734 mutex_unlock(&cgroup_mutex); 1735 + if (!IS_ERR_OR_NULL(pinned_sb)) 1736 + deactivate_super(pinned_sb); 1753 1737 msleep(10); 1754 1738 ret = restart_syscall(); 1755 1739 goto out_free; ··· 1802 1770 CGROUP_SUPER_MAGIC, &new_sb); 1803 1771 if (IS_ERR(dentry) || !new_sb) 1804 1772 cgroup_put(&root->cgrp); 1773 + 1774 + /* 1775 + * If @pinned_sb, we're reusing an existing root and holding an 1776 + * extra ref on its sb. Mount is complete. Put the extra ref. 1777 + */ 1778 + if (pinned_sb) { 1779 + WARN_ON(new_sb); 1780 + deactivate_super(pinned_sb); 1781 + } 1782 + 1805 1783 return dentry; 1806 1784 } 1807 1785 ··· 3370 3328 3371 3329 rcu_read_lock(); 3372 3330 css_for_each_child(child, css) { 3373 - if (css->flags & CSS_ONLINE) { 3331 + if (child->flags & CSS_ONLINE) { 3374 3332 ret = true; 3375 3333 break; 3376 3334 }
+19 -1
kernel/cpuset.c
··· 1181 1181 1182 1182 int current_cpuset_is_being_rebound(void) 1183 1183 { 1184 - return task_cs(current) == cpuset_being_rebound; 1184 + int ret; 1185 + 1186 + rcu_read_lock(); 1187 + ret = task_cs(current) == cpuset_being_rebound; 1188 + rcu_read_unlock(); 1189 + 1190 + return ret; 1185 1191 } 1186 1192 1187 1193 static int update_relax_domain_level(struct cpuset *cs, s64 val) ··· 1623 1617 * resources, wait for the previously scheduled operations before 1624 1618 * proceeding, so that we don't end up keep removing tasks added 1625 1619 * after execution capability is restored. 1620 + * 1621 + * cpuset_hotplug_work calls back into cgroup core via 1622 + * cgroup_transfer_tasks() and waiting for it from a cgroupfs 1623 + * operation like this one can lead to a deadlock through kernfs 1624 + * active_ref protection. Let's break the protection. Losing the 1625 + * protection is okay as we check whether @cs is online after 1626 + * grabbing cpuset_mutex anyway. This only happens on the legacy 1627 + * hierarchies. 1626 1628 */ 1629 + css_get(&cs->css); 1630 + kernfs_break_active_protection(of->kn); 1627 1631 flush_work(&cpuset_hotplug_work); 1628 1632 1629 1633 mutex_lock(&cpuset_mutex); ··· 1661 1645 free_trial_cpuset(trialcs); 1662 1646 out_unlock: 1663 1647 mutex_unlock(&cpuset_mutex); 1648 + kernfs_unbreak_active_protection(of->kn); 1649 + css_put(&cs->css); 1664 1650 return retval ?: nbytes; 1665 1651 } 1666 1652
+1 -1
kernel/events/core.c
··· 2320 2320 next_parent = rcu_dereference(next_ctx->parent_ctx); 2321 2321 2322 2322 /* If neither context have a parent context; they cannot be clones. */ 2323 - if (!parent && !next_parent) 2323 + if (!parent || !next_parent) 2324 2324 goto unlock; 2325 2325 2326 2326 if (next_parent == ctx || next_ctx == parent || next_parent == parent) {
+48 -16
kernel/locking/mcs_spinlock.c
··· 14 14 * called from interrupt context and we have preemption disabled while 15 15 * spinning. 16 16 */ 17 - static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_queue, osq_node); 17 + static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node); 18 + 19 + /* 20 + * We use the value 0 to represent "no CPU", thus the encoded value 21 + * will be the CPU number incremented by 1. 22 + */ 23 + static inline int encode_cpu(int cpu_nr) 24 + { 25 + return cpu_nr + 1; 26 + } 27 + 28 + static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val) 29 + { 30 + int cpu_nr = encoded_cpu_val - 1; 31 + 32 + return per_cpu_ptr(&osq_node, cpu_nr); 33 + } 18 34 19 35 /* 20 36 * Get a stable @node->next pointer, either for unlock() or unqueue() purposes. 21 37 * Can return NULL in case we were the last queued and we updated @lock instead. 22 38 */ 23 39 static inline struct optimistic_spin_node * 24 40 osq_wait_next(struct optimistic_spin_queue *lock, 25 41 struct optimistic_spin_node *node, 26 42 struct optimistic_spin_node *prev) 27 43 { 28 - struct optimistic_spin_queue *next = NULL; 44 + struct optimistic_spin_node *next = NULL; 45 + int curr = encode_cpu(smp_processor_id()); 46 + int old; 47 + 48 + /* 49 + * If there is a prev node in queue, then the 'old' value will be 50 + * the prev node's CPU #, else it's set to OSQ_UNLOCKED_VAL since if 51 + * we're currently last in queue, then the queue will then become empty. 52 + */ 53 + old = prev ? prev->cpu : OSQ_UNLOCKED_VAL; 29 54 30 55 for (;;) { 31 - if (*lock == node && cmpxchg(lock, node, prev) == node) { 56 + if (atomic_read(&lock->tail) == curr && 57 + atomic_cmpxchg(&lock->tail, curr, old) == curr) { 32 58 /* 33 59 * We were the last queued, we moved @lock back. @prev 34 60 * will now observe @lock and will complete its ··· 85 59 return next; 86 60 } 87 61 88 - bool osq_lock(struct optimistic_spin_queue **lock) 62 + bool osq_lock(struct optimistic_spin_queue *lock) 89 63 { 90 - struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); 91 - struct optimistic_spin_queue *prev, *next; 64 + struct optimistic_spin_node *node = this_cpu_ptr(&osq_node); 65 + struct optimistic_spin_node *prev, *next; 66 + int curr = encode_cpu(smp_processor_id()); 67 + int old; 92 68 93 69 node->locked = 0; 94 70 node->next = NULL; 71 + node->cpu = curr; 95 72 96 - node->prev = prev = xchg(lock, node); 97 - if (likely(prev == NULL)) 73 + old = atomic_xchg(&lock->tail, curr); 74 + if (old == OSQ_UNLOCKED_VAL) 98 75 return true; 99 76 77 + prev = decode_cpu(old); 78 + node->prev = prev; 100 79 ACCESS_ONCE(prev->next) = node; 101 80 102 81 /* ··· 180 149 return false; 181 150 } 182 151 183 - void osq_unlock(struct optimistic_spin_queue **lock) 152 + void osq_unlock(struct optimistic_spin_queue *lock) 184 153 { 185 - struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); 186 - struct optimistic_spin_queue *next; 154 + struct optimistic_spin_node *node, *next; 155 + int curr = encode_cpu(smp_processor_id()); 187 156 188 157 /* 189 158 * Fast path for the uncontended case. 190 159 */ 191 - if (likely(cmpxchg(lock, node, NULL) == node)) 160 + if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr)) 192 161 return; 193 162 194 163 /* 195 164 * Second most likely case. 196 165 */ 166 + node = this_cpu_ptr(&osq_node); 197 167 next = xchg(&node->next, NULL); 198 168 if (next) { 199 169 ACCESS_ONCE(next->locked) = 1;
+5 -4
kernel/locking/mcs_spinlock.h
··· 118 118 * mutex_lock()/rwsem_down_{read,write}() etc. 119 119 */ 120 120 121 - struct optimistic_spin_queue { 122 - struct optimistic_spin_queue *next, *prev; 121 + struct optimistic_spin_node { 122 + struct optimistic_spin_node *next, *prev; 123 123 int locked; /* 1 if lock acquired */ 124 + int cpu; /* encoded CPU # value */ 124 125 }; 125 126 126 - extern bool osq_lock(struct optimistic_spin_queue **lock); 127 - extern void osq_unlock(struct optimistic_spin_queue **lock); 127 + extern bool osq_lock(struct optimistic_spin_queue *lock); 128 + extern void osq_unlock(struct optimistic_spin_queue *lock); 128 129 129 130 #endif /* __LINUX_MCS_SPINLOCK_H */
+1 -1
kernel/locking/mutex.c
··· 60 60 INIT_LIST_HEAD(&lock->wait_list); 61 61 mutex_clear_owner(lock); 62 62 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER 63 - lock->osq = NULL; 63 + osq_lock_init(&lock->osq); 64 64 #endif 65 65 66 66 debug_mutex_init(lock, name, key);
+14 -14
kernel/locking/rwsem-spinlock.c
··· 26 26 unsigned long flags; 27 27 28 28 if (raw_spin_trylock_irqsave(&sem->wait_lock, flags)) { 29 - ret = (sem->activity != 0); 29 + ret = (sem->count != 0); 30 30 raw_spin_unlock_irqrestore(&sem->wait_lock, flags); 31 31 } 32 32 return ret; ··· 46 46 debug_check_no_locks_freed((void *)sem, sizeof(*sem)); 47 47 lockdep_init_map(&sem->dep_map, name, key, 0); 48 48 #endif 49 - sem->activity = 0; 49 + sem->count = 0; 50 50 raw_spin_lock_init(&sem->wait_lock); 51 51 INIT_LIST_HEAD(&sem->wait_list); 52 52 } ··· 95 95 waiter = list_entry(next, struct rwsem_waiter, list); 96 96 } while (waiter->type != RWSEM_WAITING_FOR_WRITE); 97 97 98 - sem->activity += woken; 98 + sem->count += woken; 99 99 100 100 out: 101 101 return sem; ··· 126 126 127 127 raw_spin_lock_irqsave(&sem->wait_lock, flags); 128 128 129 - if (sem->activity >= 0 && list_empty(&sem->wait_list)) { 129 + if (sem->count >= 0 && list_empty(&sem->wait_list)) { 130 130 /* granted */ 131 - sem->activity++; 131 + sem->count++; 132 132 raw_spin_unlock_irqrestore(&sem->wait_lock, flags); 133 133 goto out; 134 134 } ··· 170 170 171 171 raw_spin_lock_irqsave(&sem->wait_lock, flags); 172 172 173 - if (sem->activity >= 0 && list_empty(&sem->wait_list)) { 173 + if (sem->count >= 0 && list_empty(&sem->wait_list)) { 174 174 /* granted */ 175 - sem->activity++; 175 + sem->count++; 176 176 ret = 1; 177 177 } 178 178 ··· 206 206 * itself into sleep and waiting for system woke it or someone 207 207 * else in the head of the wait list up. 
208 208 */ 209 - if (sem->activity == 0) 209 + if (sem->count == 0) 210 210 break; 211 211 set_task_state(tsk, TASK_UNINTERRUPTIBLE); 212 212 raw_spin_unlock_irqrestore(&sem->wait_lock, flags); ··· 214 214 raw_spin_lock_irqsave(&sem->wait_lock, flags); 215 215 } 216 216 /* got the lock */ 217 - sem->activity = -1; 217 + sem->count = -1; 218 218 list_del(&waiter.list); 219 219 220 220 raw_spin_unlock_irqrestore(&sem->wait_lock, flags); ··· 235 235 236 236 raw_spin_lock_irqsave(&sem->wait_lock, flags); 237 237 238 - if (sem->activity == 0) { 238 + if (sem->count == 0) { 239 239 /* got the lock */ 240 - sem->activity = -1; 240 + sem->count = -1; 241 241 ret = 1; 242 242 } 243 243 ··· 255 255 256 256 raw_spin_lock_irqsave(&sem->wait_lock, flags); 257 257 258 - if (--sem->activity == 0 && !list_empty(&sem->wait_list)) 258 + if (--sem->count == 0 && !list_empty(&sem->wait_list)) 259 259 sem = __rwsem_wake_one_writer(sem); 260 260 261 261 raw_spin_unlock_irqrestore(&sem->wait_lock, flags); ··· 270 270 271 271 raw_spin_lock_irqsave(&sem->wait_lock, flags); 272 272 273 - sem->activity = 0; 273 + sem->count = 0; 274 274 if (!list_empty(&sem->wait_list)) 275 275 sem = __rwsem_do_wake(sem, 1); 276 276 ··· 287 287 288 288 raw_spin_lock_irqsave(&sem->wait_lock, flags); 289 289 290 - sem->activity = 1; 290 + sem->count = 1; 291 291 if (!list_empty(&sem->wait_list)) 292 292 sem = __rwsem_do_wake(sem, 0); 293 293
+8 -8
kernel/locking/rwsem-xadd.c
··· 82 82 sem->count = RWSEM_UNLOCKED_VALUE; 83 83 raw_spin_lock_init(&sem->wait_lock); 84 84 INIT_LIST_HEAD(&sem->wait_list); 85 - #ifdef CONFIG_SMP 85 + #ifdef CONFIG_RWSEM_SPIN_ON_OWNER 86 86 sem->owner = NULL; 87 - sem->osq = NULL; 87 + osq_lock_init(&sem->osq); 88 88 #endif 89 89 } 90 90 ··· 262 262 return false; 263 263 } 264 264 265 - #ifdef CONFIG_SMP 265 + #ifdef CONFIG_RWSEM_SPIN_ON_OWNER 266 266 /* 267 267 * Try to acquire write lock before the writer has been put on wait queue. 268 268 */ ··· 285 285 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem) 286 286 { 287 287 struct task_struct *owner; 288 - bool on_cpu = true; 288 + bool on_cpu = false; 289 289 290 290 if (need_resched()) 291 - return 0; 291 + return false; 292 292 293 293 rcu_read_lock(); 294 294 owner = ACCESS_ONCE(sem->owner); ··· 297 297 rcu_read_unlock(); 298 298 299 299 /* 300 - * If sem->owner is not set, the rwsem owner may have 301 - * just acquired it and not set the owner yet or the rwsem 302 - * has been released. 300 + * If sem->owner is not set, yet we have just recently entered the 301 + * slowpath, then there is a possibility reader(s) may have the lock. 302 + * To be safe, avoid spinning in these situations. 303 303 */ 304 304 return on_cpu; 305 305 }
+1 -1
kernel/locking/rwsem.c
··· 12 12 13 13 #include <linux/atomic.h> 14 14 15 - #if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM) 15 + #ifdef CONFIG_RWSEM_SPIN_ON_OWNER 16 16 static inline void rwsem_set_owner(struct rw_semaphore *sem) 17 17 { 18 18 sem->owner = current;
+1
kernel/power/process.c
··· 186 186 187 187 printk("Restarting tasks ... "); 188 188 189 + __usermodehelper_set_disable_depth(UMH_FREEZING); 189 190 thaw_workqueues(); 190 191 191 192 read_lock(&tasklist_lock);
+2 -2
kernel/power/suspend.c
··· 306 306 error = suspend_ops->begin(state); 307 307 if (error) 308 308 goto Close; 309 - } else if (state == PM_SUSPEND_FREEZE && freeze_ops->begin) { 309 + } else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->begin) { 310 310 error = freeze_ops->begin(); 311 311 if (error) 312 312 goto Close; ··· 335 335 Close: 336 336 if (need_suspend_ops(state) && suspend_ops->end) 337 337 suspend_ops->end(); 338 - else if (state == PM_SUSPEND_FREEZE && freeze_ops->end) 338 + else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end) 339 339 freeze_ops->end(); 340 340 341 341 return error;
+112 -28
kernel/rcu/tree.c
··· 206 206 rdp->passed_quiesce = 1; 207 207 } 208 208 209 + static DEFINE_PER_CPU(int, rcu_sched_qs_mask); 210 + 211 + static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { 212 + .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE, 213 + .dynticks = ATOMIC_INIT(1), 214 + #ifdef CONFIG_NO_HZ_FULL_SYSIDLE 215 + .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE, 216 + .dynticks_idle = ATOMIC_INIT(1), 217 + #endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ 218 + }; 219 + 220 + /* 221 + * Let the RCU core know that this CPU has gone through the scheduler, 222 + * which is a quiescent state. This is called when the need for a 223 + * quiescent state is urgent, so we burn an atomic operation and full 224 + * memory barriers to let the RCU core know about it, regardless of what 225 + * this CPU might (or might not) do in the near future. 226 + * 227 + * We inform the RCU core by emulating a zero-duration dyntick-idle 228 + * period, which we in turn do by incrementing the ->dynticks counter 229 + * by two. 230 + */ 231 + static void rcu_momentary_dyntick_idle(void) 232 + { 233 + unsigned long flags; 234 + struct rcu_data *rdp; 235 + struct rcu_dynticks *rdtp; 236 + int resched_mask; 237 + struct rcu_state *rsp; 238 + 239 + local_irq_save(flags); 240 + 241 + /* 242 + * Yes, we can lose flag-setting operations. This is OK, because 243 + * the flag will be set again after some delay. 244 + */ 245 + resched_mask = raw_cpu_read(rcu_sched_qs_mask); 246 + raw_cpu_write(rcu_sched_qs_mask, 0); 247 + 248 + /* Find the flavor that needs a quiescent state. */ 249 + for_each_rcu_flavor(rsp) { 250 + rdp = raw_cpu_ptr(rsp->rda); 251 + if (!(resched_mask & rsp->flavor_mask)) 252 + continue; 253 + smp_mb(); /* rcu_sched_qs_mask before cond_resched_completed. */ 254 + if (ACCESS_ONCE(rdp->mynode->completed) != 255 + ACCESS_ONCE(rdp->cond_resched_completed)) 256 + continue; 257 + 258 + /* 259 + * Pretend to be momentarily idle for the quiescent state. 
260 + * This allows the grace-period kthread to record the 261 + * quiescent state, with no need for this CPU to do anything 262 + * further. 263 + */ 264 + rdtp = this_cpu_ptr(&rcu_dynticks); 265 + smp_mb__before_atomic(); /* Earlier stuff before QS. */ 266 + atomic_add(2, &rdtp->dynticks); /* QS. */ 267 + smp_mb__after_atomic(); /* Later stuff after QS. */ 268 + break; 269 + } 270 + local_irq_restore(flags); 271 + } 272 + 209 273 /* 210 274 * Note a context switch. This is a quiescent state for RCU-sched, 211 275 * and requires special handling for preemptible RCU. ··· 280 216 trace_rcu_utilization(TPS("Start context switch")); 281 217 rcu_sched_qs(cpu); 282 218 rcu_preempt_note_context_switch(cpu); 219 + if (unlikely(raw_cpu_read(rcu_sched_qs_mask))) 220 + rcu_momentary_dyntick_idle(); 283 221 trace_rcu_utilization(TPS("End context switch")); 284 222 } 285 223 EXPORT_SYMBOL_GPL(rcu_note_context_switch); 286 - 287 - static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { 288 - .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE, 289 - .dynticks = ATOMIC_INIT(1), 290 - #ifdef CONFIG_NO_HZ_FULL_SYSIDLE 291 - .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE, 292 - .dynticks_idle = ATOMIC_INIT(1), 293 - #endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ 294 - }; 295 224 296 225 static long blimit = 10; /* Maximum callbacks per rcu_do_batch. */ 297 226 static long qhimark = 10000; /* If this many pending, ignore blimit. */ ··· 299 242 300 243 module_param(jiffies_till_first_fqs, ulong, 0644); 301 244 module_param(jiffies_till_next_fqs, ulong, 0644); 245 + 246 + /* 247 + * How long the grace period must be before we start recruiting 248 + * quiescent-state help from rcu_note_context_switch(). 
249 + */ 250 + static ulong jiffies_till_sched_qs = HZ / 20; 251 + module_param(jiffies_till_sched_qs, ulong, 0644); 302 252 303 253 static bool rcu_start_gp_advanced(struct rcu_state *rsp, struct rcu_node *rnp, 304 254 struct rcu_data *rdp); ··· 917 853 bool *isidle, unsigned long *maxj) 918 854 { 919 855 unsigned int curr; 856 + int *rcrmp; 920 857 unsigned int snap; 921 858 922 859 curr = (unsigned int)atomic_add_return(0, &rdp->dynticks->dynticks); ··· 958 893 } 959 894 960 895 /* 961 - * There is a possibility that a CPU in adaptive-ticks state 962 - * might run in the kernel with the scheduling-clock tick disabled 963 - * for an extended time period. Invoke rcu_kick_nohz_cpu() to 964 - * force the CPU to restart the scheduling-clock tick in this 965 - * CPU is in this state. 896 + * A CPU running for an extended time within the kernel can 897 + * delay RCU grace periods. When the CPU is in NO_HZ_FULL mode, 898 + * even context-switching back and forth between a pair of 899 + * in-kernel CPU-bound tasks cannot advance grace periods. 900 + * So if the grace period is old enough, make the CPU pay attention. 901 + * Note that the unsynchronized assignments to the per-CPU 902 + * rcu_sched_qs_mask variable are safe. Yes, setting of 903 + * bits can be lost, but they will be set again on the next 904 + * force-quiescent-state pass. So lost bit sets do not result 905 + * in incorrect behavior, merely in a grace period lasting 906 + * a few jiffies longer than it might otherwise. Because 907 + * there are at most four threads involved, and because the 908 + * updates are only once every few jiffies, the probability of 909 + * lossage (and thus of slight grace-period extension) is 910 + * quite low. 911 + * 912 + * Note that if the jiffies_till_sched_qs boot/sysfs parameter 913 + * is set too high, we override with half of the RCU CPU stall 914 + * warning delay. 
966 915 */ 967 - rcu_kick_nohz_cpu(rdp->cpu); 968 - 969 - /* 970 - * Alternatively, the CPU might be running in the kernel 971 - * for an extended period of time without a quiescent state. 972 - * Attempt to force the CPU through the scheduler to gain the 973 - * needed quiescent state, but only if the grace period has gone 974 - * on for an uncommonly long time. If there are many stuck CPUs, 975 - * we will beat on the first one until it gets unstuck, then move 976 - * to the next. Only do this for the primary flavor of RCU. 977 - */ 978 - if (rdp->rsp == rcu_state_p && 916 + rcrmp = &per_cpu(rcu_sched_qs_mask, rdp->cpu); 917 + if (ULONG_CMP_GE(jiffies, 918 + rdp->rsp->gp_start + jiffies_till_sched_qs) || 979 919 ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { 980 - rdp->rsp->jiffies_resched += 5; 981 - resched_cpu(rdp->cpu); 920 + if (!(ACCESS_ONCE(*rcrmp) & rdp->rsp->flavor_mask)) { 921 + ACCESS_ONCE(rdp->cond_resched_completed) = 922 + ACCESS_ONCE(rdp->mynode->completed); 923 + smp_mb(); /* ->cond_resched_completed before *rcrmp. */ 924 + ACCESS_ONCE(*rcrmp) = 925 + ACCESS_ONCE(*rcrmp) + rdp->rsp->flavor_mask; 926 + resched_cpu(rdp->cpu); /* Force CPU into scheduler. */ 927 + rdp->rsp->jiffies_resched += 5; /* Enable beating. */ 928 + } else if (ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { 929 + /* Time to beat on that CPU again! */ 930 + resched_cpu(rdp->cpu); /* Force CPU into scheduler. */ 931 + rdp->rsp->jiffies_resched += 5; /* Re-enable beating. 
*/ 932 + } 982 933 } 983 934 984 935 return 0; ··· 3572 3491 "rcu_node_fqs_1", 3573 3492 "rcu_node_fqs_2", 3574 3493 "rcu_node_fqs_3" }; /* Match MAX_RCU_LVLS */ 3494 + static u8 fl_mask = 0x1; 3575 3495 int cpustride = 1; 3576 3496 int i; 3577 3497 int j; ··· 3591 3509 for (i = 1; i < rcu_num_lvls; i++) 3592 3510 rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1]; 3593 3511 rcu_init_levelspread(rsp); 3512 + rsp->flavor_mask = fl_mask; 3513 + fl_mask <<= 1; 3594 3514 3595 3515 /* Initialize the elements themselves, starting from the leaves. */ 3596 3516
+5 -1
kernel/rcu/tree.h
··· 307 307 /* 4) reasons this CPU needed to be kicked by force_quiescent_state */ 308 308 unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */ 309 309 unsigned long offline_fqs; /* Kicked due to being offline. */ 310 + unsigned long cond_resched_completed; 311 + /* Grace period that needs help */ 312 + /* from cond_resched(). */ 310 313 311 314 /* 5) __rcu_pending() statistics. */ 312 315 unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */ ··· 395 392 struct rcu_node *level[RCU_NUM_LVLS]; /* Hierarchy levels. */ 396 393 u32 levelcnt[MAX_RCU_LVLS + 1]; /* # nodes in each level. */ 397 394 u8 levelspread[RCU_NUM_LVLS]; /* kids/node in each level. */ 395 + u8 flavor_mask; /* bit in flavor mask. */ 398 396 struct rcu_data __percpu *rda; /* pointer to per-CPU rcu_data. */ 399 397 void (*call)(struct rcu_head *head, /* call_rcu() flavor. */ 400 398 void (*func)(struct rcu_head *head)); ··· 567 563 static void do_nocb_deferred_wakeup(struct rcu_data *rdp); 568 564 static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp); 569 565 static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp); 570 - static void rcu_kick_nohz_cpu(int cpu); 566 + static void __maybe_unused rcu_kick_nohz_cpu(int cpu); 571 567 static bool init_nocb_callback_list(struct rcu_data *rdp); 572 568 static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq); 573 569 static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
+1 -1
kernel/rcu/tree_plugin.h
··· 2404 2404 * if an adaptive-ticks CPU is failing to respond to the current grace 2405 2405 * period and has not been idle from an RCU perspective, kick it. 2406 2406 */ 2407 - static void rcu_kick_nohz_cpu(int cpu) 2407 + static void __maybe_unused rcu_kick_nohz_cpu(int cpu) 2408 2408 { 2409 2409 #ifdef CONFIG_NO_HZ_FULL 2410 2410 if (tick_nohz_full_cpu(cpu))
+2 -20
kernel/rcu/update.c
··· 200 200 EXPORT_SYMBOL_GPL(wait_rcu_gp); 201 201 202 202 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD 203 - static inline void debug_init_rcu_head(struct rcu_head *head) 203 + void init_rcu_head(struct rcu_head *head) 204 204 { 205 205 debug_object_init(head, &rcuhead_debug_descr); 206 206 } 207 207 208 - static inline void debug_rcu_head_free(struct rcu_head *head) 208 + void destroy_rcu_head(struct rcu_head *head) 209 209 { 210 210 debug_object_free(head, &rcuhead_debug_descr); 211 211 } ··· 350 350 early_initcall(check_cpu_stall_init); 351 351 352 352 #endif /* #ifdef CONFIG_RCU_STALL_COMMON */ 353 - 354 - /* 355 - * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings. 356 - */ 357 - 358 - DEFINE_PER_CPU(int, rcu_cond_resched_count); 359 - 360 - /* 361 - * Report a set of RCU quiescent states, for use by cond_resched() 362 - * and friends. Out of line due to being called infrequently. 363 - */ 364 - void rcu_resched(void) 365 - { 366 - preempt_disable(); 367 - __this_cpu_write(rcu_cond_resched_count, 0); 368 - rcu_note_context_switch(smp_processor_id()); 369 - preempt_enable(); 370 - }
+1 -6
kernel/sched/core.c
··· 4147 4147 4148 4148 int __sched _cond_resched(void) 4149 4149 { 4150 - rcu_cond_resched(); 4151 4150 if (should_resched()) { 4152 4151 __cond_resched(); 4153 4152 return 1; ··· 4165 4166 */ 4166 4167 int __cond_resched_lock(spinlock_t *lock) 4167 4168 { 4168 - bool need_rcu_resched = rcu_should_resched(); 4169 4169 int resched = should_resched(); 4170 4170 int ret = 0; 4171 4171 4172 4172 lockdep_assert_held(lock); 4173 4173 4174 - if (spin_needbreak(lock) || resched || need_rcu_resched) { 4174 + if (spin_needbreak(lock) || resched) { 4175 4175 spin_unlock(lock); 4176 4176 if (resched) 4177 4177 __cond_resched(); 4178 - else if (unlikely(need_rcu_resched)) 4179 - rcu_resched(); 4180 4178 else 4181 4179 cpu_relax(); 4182 4180 ret = 1; ··· 4187 4191 { 4188 4192 BUG_ON(!in_softirq()); 4189 4193 4190 - rcu_cond_resched(); /* BH disabled OK, just recording QSes. */ 4191 4194 if (should_resched()) { 4192 4195 local_bh_enable(); 4193 4196 __cond_resched();
+1 -1
kernel/sched/debug.c
··· 608 608 609 609 avg_atom = p->se.sum_exec_runtime; 610 610 if (nr_switches) 611 - do_div(avg_atom, nr_switches); 611 + avg_atom = div64_ul(avg_atom, nr_switches); 612 612 else 613 613 avg_atom = -1LL; 614 614
+18 -2
kernel/time/alarmtimer.c
··· 585 585 struct itimerspec *new_setting, 586 586 struct itimerspec *old_setting) 587 587 { 588 + ktime_t exp; 589 + 588 590 if (!rtcdev) 589 591 return -ENOTSUPP; 592 + 593 + if (flags & ~TIMER_ABSTIME) 594 + return -EINVAL; 590 595 591 596 if (old_setting) 592 597 alarm_timer_get(timr, old_setting); ··· 602 597 603 598 /* start the timer */ 604 599 timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval); 605 - alarm_start(&timr->it.alarm.alarmtimer, 606 - timespec_to_ktime(new_setting->it_value)); 600 + exp = timespec_to_ktime(new_setting->it_value); 601 + /* Convert (if necessary) to absolute time */ 602 + if (flags != TIMER_ABSTIME) { 603 + ktime_t now; 604 + 605 + now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime(); 606 + exp = ktime_add(now, exp); 607 + } 608 + 609 + alarm_start(&timr->it.alarm.alarmtimer, exp); 607 610 return 0; 608 611 } 609 612 ··· 742 729 743 730 if (!alarmtimer_get_rtcdev()) 744 731 return -ENOTSUPP; 732 + 733 + if (flags & ~TIMER_ABSTIME) 734 + return -EINVAL; 745 735 746 736 if (!capable(CAP_WAKE_ALARM)) 747 737 return -EPERM;
+2 -2
kernel/trace/ftrace.c
··· 265 265 func = ftrace_ops_list_func; 266 266 } 267 267 268 + update_function_graph_func(); 269 + 268 270 /* If there's no change, then do nothing more here */ 269 271 if (ftrace_trace_function == func) 270 272 return; 271 - 272 - update_function_graph_func(); 273 273 274 274 /* 275 275 * If we are using the list function, it doesn't care
-4
kernel/trace/ring_buffer.c
··· 616 616 struct ring_buffer_per_cpu *cpu_buffer; 617 617 struct rb_irq_work *work; 618 618 619 - if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || 620 - (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) 621 - return POLLIN | POLLRDNORM; 622 - 623 619 if (cpu == RING_BUFFER_ALL_CPUS) 624 620 work = &buffer->irq_work; 625 621 else {
+16 -2
kernel/trace/trace.c
··· 466 466 struct print_entry *entry; 467 467 unsigned long irq_flags; 468 468 int alloc; 469 + int pc; 470 + 471 + if (!(trace_flags & TRACE_ITER_PRINTK)) 472 + return 0; 473 + 474 + pc = preempt_count(); 469 475 470 476 if (unlikely(tracing_selftest_running || tracing_disabled)) 471 477 return 0; ··· 481 475 local_save_flags(irq_flags); 482 476 buffer = global_trace.trace_buffer.buffer; 483 477 event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc, 484 - irq_flags, preempt_count()); 478 + irq_flags, pc); 485 479 if (!event) 486 480 return 0; 487 481 ··· 498 492 entry->buf[size] = '\0'; 499 493 500 494 __buffer_unlock_commit(buffer, event); 495 + ftrace_trace_stack(buffer, irq_flags, 4, pc); 501 496 502 497 return size; 503 498 } ··· 516 509 struct bputs_entry *entry; 517 510 unsigned long irq_flags; 518 511 int size = sizeof(struct bputs_entry); 512 + int pc; 513 + 514 + if (!(trace_flags & TRACE_ITER_PRINTK)) 515 + return 0; 516 + 517 + pc = preempt_count(); 519 518 520 519 if (unlikely(tracing_selftest_running || tracing_disabled)) 521 520 return 0; ··· 529 516 local_save_flags(irq_flags); 530 517 buffer = global_trace.trace_buffer.buffer; 531 518 event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size, 532 - irq_flags, preempt_count()); 519 + irq_flags, pc); 533 520 if (!event) 534 521 return 0; 535 522 ··· 538 525 entry->str = str; 539 526 540 527 __buffer_unlock_commit(buffer, event); 528 + ftrace_trace_stack(buffer, irq_flags, 4, pc); 541 529 542 530 return 1; 543 531 }
+1
kernel/trace/trace_events.c
··· 470 470 471 471 list_del(&file->list); 472 472 remove_subsystem(file->system); 473 + free_event_filter(file->filter); 473 474 kmem_cache_free(file_cachep, file); 474 475 } 475 476
+2 -1
kernel/workqueue.c
··· 3284 3284 } 3285 3285 } 3286 3286 3287 + dev_set_uevent_suppress(&wq_dev->dev, false); 3287 3288 kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD); 3288 3289 return 0; 3289 3290 } ··· 4880 4879 BUG_ON(!tbl); 4881 4880 4882 4881 for_each_node(node) 4883 - BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, 4882 + BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL, 4884 4883 node_online(node) ? node : NUMA_NO_NODE)); 4885 4884 4886 4885 for_each_possible_cpu(cpu) {
+1 -1
lib/cpumask.c
··· 191 191 192 192 i %= num_online_cpus(); 193 193 194 - if (!cpumask_of_node(numa_node)) { 194 + if (numa_node == -1 || !cpumask_of_node(numa_node)) { 195 195 /* Use all online cpu's for non numa aware system */ 196 196 cpumask_copy(mask, cpu_online_mask); 197 197 } else {
-2
mm/mempolicy.c
··· 2139 2139 } else 2140 2140 *new = *old; 2141 2141 2142 - rcu_read_lock(); 2143 2142 if (current_cpuset_is_being_rebound()) { 2144 2143 nodemask_t mems = cpuset_mems_allowed(current); 2145 2144 if (new->flags & MPOL_F_REBINDING) ··· 2146 2147 else 2147 2148 mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE); 2148 2149 } 2149 - rcu_read_unlock(); 2150 2150 atomic_set(&new->refcnt, 1); 2151 2151 return new; 2152 2152 }
+10 -3
net/8021q/vlan_dev.c
··· 627 627 struct vlan_dev_priv *vlan = vlan_dev_priv(dev); 628 628 int i; 629 629 630 - free_percpu(vlan->vlan_pcpu_stats); 631 - vlan->vlan_pcpu_stats = NULL; 632 630 for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) { 633 631 while ((pm = vlan->egress_priority_map[i]) != NULL) { 634 632 vlan->egress_priority_map[i] = pm->next; ··· 783 785 .ndo_get_lock_subclass = vlan_dev_get_lock_subclass, 784 786 }; 785 787 788 + static void vlan_dev_free(struct net_device *dev) 789 + { 790 + struct vlan_dev_priv *vlan = vlan_dev_priv(dev); 791 + 792 + free_percpu(vlan->vlan_pcpu_stats); 793 + vlan->vlan_pcpu_stats = NULL; 794 + free_netdev(dev); 795 + } 796 + 786 797 void vlan_setup(struct net_device *dev) 787 798 { 788 799 ether_setup(dev); ··· 801 794 dev->tx_queue_len = 0; 802 795 803 796 dev->netdev_ops = &vlan_netdev_ops; 804 - dev->destructor = free_netdev; 797 + dev->destructor = vlan_dev_free; 805 798 dev->ethtool_ops = &vlan_ethtool_ops; 806 799 807 800 memset(dev->broadcast, 0, ETH_ALEN);
-3
net/appletalk/ddp.c
··· 1489 1489 goto drop; 1490 1490 1491 1491 /* Queue packet (standard) */ 1492 - skb->sk = sock; 1493 - 1494 1492 if (sock_queue_rcv_skb(sock, skb) < 0) 1495 1493 goto drop; 1496 1494 ··· 1642 1644 if (!skb) 1643 1645 goto out; 1644 1646 1645 - skb->sk = sk; 1646 1647 skb_reserve(skb, ddp_dl->header_length); 1647 1648 skb_reserve(skb, dev->hard_header_len); 1648 1649 skb->dev = dev;
+11 -1
net/bluetooth/hci_conn.c
··· 289 289 { 290 290 struct hci_conn *conn = container_of(work, struct hci_conn, 291 291 disc_work.work); 292 + int refcnt = atomic_read(&conn->refcnt); 292 293 293 294 BT_DBG("hcon %p state %s", conn, state_to_string(conn->state)); 294 295 295 - if (atomic_read(&conn->refcnt)) 296 + WARN_ON(refcnt < 0); 297 + 298 + /* FIXME: It was observed that in pairing failed scenario, refcnt 299 + * drops below 0. Probably this is because l2cap_conn_del calls 300 + * l2cap_chan_del for each channel, and inside l2cap_chan_del conn is 301 + * dropped. After that loop hci_chan_del is called which also drops 302 + * conn. For now make sure that ACL is alive if refcnt is higher than 0, 303 + * otherwise drop it. 304 + */ 305 + if (refcnt > 0) 296 306 return; 297 307 298 308 switch (conn->state) {
+46 -14
net/bluetooth/smp.c
··· 385 385 { CFM_PASSKEY, CFM_PASSKEY, REQ_PASSKEY, JUST_WORKS, OVERLAP }, 386 386 }; 387 387 388 + static u8 get_auth_method(struct smp_chan *smp, u8 local_io, u8 remote_io) 389 + { 390 + /* If either side has unknown io_caps, use JUST WORKS */ 391 + if (local_io > SMP_IO_KEYBOARD_DISPLAY || 392 + remote_io > SMP_IO_KEYBOARD_DISPLAY) 393 + return JUST_WORKS; 394 + 395 + return gen_method[remote_io][local_io]; 396 + } 397 + 388 398 static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth, 389 399 u8 local_io, u8 remote_io) 390 400 { ··· 411 401 BT_DBG("tk_request: auth:%d lcl:%d rem:%d", auth, local_io, remote_io); 412 402 413 403 /* If neither side wants MITM, use JUST WORKS */ 414 - /* If either side has unknown io_caps, use JUST WORKS */ 415 404 /* Otherwise, look up method from the table */ 416 - if (!(auth & SMP_AUTH_MITM) || 417 - local_io > SMP_IO_KEYBOARD_DISPLAY || 418 - remote_io > SMP_IO_KEYBOARD_DISPLAY) 405 + if (!(auth & SMP_AUTH_MITM)) 419 406 method = JUST_WORKS; 420 407 else 421 - method = gen_method[remote_io][local_io]; 408 + method = get_auth_method(smp, local_io, remote_io); 422 409 423 410 /* If not bonding, don't ask user to confirm a Zero TK */ 424 411 if (!(auth & SMP_AUTH_BONDING) && method == JUST_CFM) ··· 676 669 { 677 670 struct smp_cmd_pairing rsp, *req = (void *) skb->data; 678 671 struct smp_chan *smp; 679 - u8 key_size, auth; 672 + u8 key_size, auth, sec_level; 680 673 int ret; 681 674 682 675 BT_DBG("conn %p", conn); ··· 702 695 /* We didn't start the pairing, so match remote */ 703 696 auth = req->auth_req; 704 697 705 - conn->hcon->pending_sec_level = authreq_to_seclevel(auth); 698 + sec_level = authreq_to_seclevel(auth); 699 + if (sec_level > conn->hcon->pending_sec_level) 700 + conn->hcon->pending_sec_level = sec_level; 701 + 702 + /* If we need MITM check that it can be achieved */ 703 + if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { 704 + u8 method; 705 + 706 + method = get_auth_method(smp, 
conn->hcon->io_capability, 707 + req->io_capability); 708 + if (method == JUST_WORKS || method == JUST_CFM) 709 + return SMP_AUTH_REQUIREMENTS; 710 + } 706 711 707 712 build_pairing_cmd(conn, req, &rsp, auth); 708 713 ··· 761 742 key_size = min(req->max_key_size, rsp->max_key_size); 762 743 if (check_enc_key_size(conn, key_size)) 763 744 return SMP_ENC_KEY_SIZE; 745 + 746 + /* If we need MITM check that it can be achieved */ 747 + if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { 748 + u8 method; 749 + 750 + method = get_auth_method(smp, req->io_capability, 751 + rsp->io_capability); 752 + if (method == JUST_WORKS || method == JUST_CFM) 753 + return SMP_AUTH_REQUIREMENTS; 754 + } 764 755 765 756 get_random_bytes(smp->prnd, sizeof(smp->prnd)); 766 757 ··· 867 838 struct smp_cmd_pairing cp; 868 839 struct hci_conn *hcon = conn->hcon; 869 840 struct smp_chan *smp; 841 + u8 sec_level; 870 842 871 843 BT_DBG("conn %p", conn); 872 844 ··· 877 847 if (!(conn->hcon->link_mode & HCI_LM_MASTER)) 878 848 return SMP_CMD_NOTSUPP; 879 849 880 - hcon->pending_sec_level = authreq_to_seclevel(rp->auth_req); 850 + sec_level = authreq_to_seclevel(rp->auth_req); 851 + if (sec_level > hcon->pending_sec_level) 852 + hcon->pending_sec_level = sec_level; 881 853 882 854 if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) 883 855 return 0; ··· 933 901 if (smp_sufficient_security(hcon, sec_level)) 934 902 return 1; 935 903 904 + if (sec_level > hcon->pending_sec_level) 905 + hcon->pending_sec_level = sec_level; 906 + 936 907 if (hcon->link_mode & HCI_LM_MASTER) 937 - if (smp_ltk_encrypt(conn, sec_level)) 938 - goto done; 908 + if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) 909 + return 0; 939 910 940 911 if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags)) 941 912 return 0; ··· 953 918 * requires it. 
954 919 */ 955 920 if (hcon->io_capability != HCI_IO_NO_INPUT_OUTPUT || 956 - sec_level > BT_SECURITY_MEDIUM) 921 + hcon->pending_sec_level > BT_SECURITY_MEDIUM) 957 922 authreq |= SMP_AUTH_MITM; 958 923 959 924 if (hcon->link_mode & HCI_LM_MASTER) { ··· 971 936 } 972 937 973 938 set_bit(SMP_FLAG_INITIATOR, &smp->flags); 974 - 975 - done: 976 - hcon->pending_sec_level = sec_level; 977 939 978 940 return 0; 979 941 }
+18 -12
net/core/dev.c
··· 148 148 static struct list_head offload_base __read_mostly; 149 149 150 150 static int netif_rx_internal(struct sk_buff *skb); 151 + static int call_netdevice_notifiers_info(unsigned long val, 152 + struct net_device *dev, 153 + struct netdev_notifier_info *info); 151 154 152 155 /* 153 156 * The @dev_base_head list is protected by @dev_base_lock and the rtnl ··· 1210 1207 void netdev_state_change(struct net_device *dev) 1211 1208 { 1212 1209 if (dev->flags & IFF_UP) { 1213 - call_netdevice_notifiers(NETDEV_CHANGE, dev); 1210 + struct netdev_notifier_change_info change_info; 1211 + 1212 + change_info.flags_changed = 0; 1213 + call_netdevice_notifiers_info(NETDEV_CHANGE, dev, 1214 + &change_info.info); 1214 1215 rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL); 1215 1216 } 1216 1217 } ··· 4234 4227 #endif 4235 4228 napi->weight = weight_p; 4236 4229 local_irq_disable(); 4237 - while (work < quota) { 4230 + while (1) { 4238 4231 struct sk_buff *skb; 4239 - unsigned int qlen; 4240 4232 4241 4233 while ((skb = __skb_dequeue(&sd->process_queue))) { 4242 4234 local_irq_enable(); ··· 4249 4243 } 4250 4244 4251 4245 rps_lock(sd); 4252 - qlen = skb_queue_len(&sd->input_pkt_queue); 4253 - if (qlen) 4254 - skb_queue_splice_tail_init(&sd->input_pkt_queue, 4255 - &sd->process_queue); 4256 - 4257 - if (qlen < quota - work) { 4246 + if (skb_queue_empty(&sd->input_pkt_queue)) { 4258 4247 /* 4259 4248 * Inline a custom version of __napi_complete(). 4260 4249 * only current cpu owns and manipulates this napi, 4261 - * and NAPI_STATE_SCHED is the only possible flag set on backlog. 4262 - * we can use a plain write instead of clear_bit(), 4250 + * and NAPI_STATE_SCHED is the only possible flag set 4251 + * on backlog. 4252 + * We can use a plain write instead of clear_bit(), 4263 4253 * and we dont need an smp_mb() memory barrier. 
4264 4254 */ 4265 4255 list_del(&napi->poll_list); 4266 4256 napi->state = 0; 4257 + rps_unlock(sd); 4267 4258 4268 - quota = work + qlen; 4259 + break; 4269 4260 } 4261 + 4262 + skb_queue_splice_tail_init(&sd->input_pkt_queue, 4263 + &sd->process_queue); 4270 4264 rps_unlock(sd); 4271 4265 } 4272 4266 local_irq_enable();
+5 -4
net/core/neighbour.c
··· 3059 3059 memset(&t->neigh_vars[NEIGH_VAR_GC_INTERVAL], 0, 3060 3060 sizeof(t->neigh_vars[NEIGH_VAR_GC_INTERVAL])); 3061 3061 } else { 3062 + struct neigh_table *tbl = p->tbl; 3062 3063 dev_name_source = "default"; 3063 - t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = (int *)(p + 1); 3064 - t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = (int *)(p + 1) + 1; 3065 - t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = (int *)(p + 1) + 2; 3066 - t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = (int *)(p + 1) + 3; 3064 + t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = &tbl->gc_interval; 3065 + t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = &tbl->gc_thresh1; 3066 + t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = &tbl->gc_thresh2; 3067 + t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = &tbl->gc_thresh3; 3067 3068 } 3068 3069 3069 3070 if (handler) {
+1
net/ipv4/gre_demux.c
··· 68 68 69 69 skb_push(skb, hdr_len); 70 70 71 + skb_reset_transport_header(skb); 71 72 greh = (struct gre_base_hdr *)skb->data; 72 73 greh->flags = tnl_flags_to_gre_flags(tpi->flags); 73 74 greh->protocol = tpi->proto;
-2
net/ipv4/icmp.c
··· 739 739 /* fall through */ 740 740 case 0: 741 741 info = ntohs(icmph->un.frag.mtu); 742 - if (!info) 743 - goto out; 744 742 } 745 743 break; 746 744 case ICMP_SR_FAILED:
+6 -4
net/ipv4/igmp.c
··· 1944 1944 1945 1945 rtnl_lock(); 1946 1946 in_dev = ip_mc_find_dev(net, imr); 1947 + if (!in_dev) { 1948 + ret = -ENODEV; 1949 + goto out; 1950 + } 1947 1951 ifindex = imr->imr_ifindex; 1948 1952 for (imlp = &inet->mc_list; 1949 1953 (iml = rtnl_dereference(*imlp)) != NULL; ··· 1965 1961 1966 1962 *imlp = iml->next_rcu; 1967 1963 1968 - if (in_dev) 1969 - ip_mc_dec_group(in_dev, group); 1964 + ip_mc_dec_group(in_dev, group); 1970 1965 rtnl_unlock(); 1971 1966 /* decrease mem now to avoid the memleak warning */ 1972 1967 atomic_sub(sizeof(*iml), &sk->sk_omem_alloc); 1973 1968 kfree_rcu(iml, rcu); 1974 1969 return 0; 1975 1970 } 1976 - if (!in_dev) 1977 - ret = -ENODEV; 1971 + out: 1978 1972 rtnl_unlock(); 1979 1973 return ret; 1980 1974 }
+8 -4
net/ipv4/ip_tunnel.c
··· 169 169 170 170 hlist_for_each_entry_rcu(t, head, hash_node) { 171 171 if (remote != t->parms.iph.daddr || 172 + t->parms.iph.saddr != 0 || 172 173 !(t->dev->flags & IFF_UP)) 173 174 continue; 174 175 ··· 186 185 head = &itn->tunnels[hash]; 187 186 188 187 hlist_for_each_entry_rcu(t, head, hash_node) { 189 - if ((local != t->parms.iph.saddr && 190 - (local != t->parms.iph.daddr || 191 - !ipv4_is_multicast(local))) || 192 - !(t->dev->flags & IFF_UP)) 188 + if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) && 189 + (local != t->parms.iph.daddr || !ipv4_is_multicast(local))) 190 + continue; 191 + 192 + if (!(t->dev->flags & IFF_UP)) 193 193 continue; 194 194 195 195 if (!ip_tunnel_key_match(&t->parms, flags, key)) ··· 207 205 208 206 hlist_for_each_entry_rcu(t, head, hash_node) { 209 207 if (t->parms.i_key != key || 208 + t->parms.iph.saddr != 0 || 209 + t->parms.iph.daddr != 0 || 210 210 !(t->dev->flags & IFF_UP)) 211 211 continue; 212 212
+8 -7
net/ipv4/route.c
··· 1010 1010 const struct iphdr *iph = (const struct iphdr *) skb->data; 1011 1011 struct flowi4 fl4; 1012 1012 struct rtable *rt; 1013 - struct dst_entry *dst; 1013 + struct dst_entry *odst = NULL; 1014 1014 bool new = false; 1015 1015 1016 1016 bh_lock_sock(sk); ··· 1018 1018 if (!ip_sk_accept_pmtu(sk)) 1019 1019 goto out; 1020 1020 1021 - rt = (struct rtable *) __sk_dst_get(sk); 1021 + odst = sk_dst_get(sk); 1022 1022 1023 - if (sock_owned_by_user(sk) || !rt) { 1023 + if (sock_owned_by_user(sk) || !odst) { 1024 1024 __ipv4_sk_update_pmtu(skb, sk, mtu); 1025 1025 goto out; 1026 1026 } 1027 1027 1028 1028 __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); 1029 1029 1030 - if (!__sk_dst_check(sk, 0)) { 1030 + rt = (struct rtable *)odst; 1031 + if (odst->obsolete && odst->ops->check(odst, 0) == NULL) { 1031 1032 rt = ip_route_output_flow(sock_net(sk), &fl4, sk); 1032 1033 if (IS_ERR(rt)) 1033 1034 goto out; ··· 1038 1037 1039 1038 __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu); 1040 1039 1041 - dst = dst_check(&rt->dst, 0); 1042 - if (!dst) { 1040 + if (!dst_check(&rt->dst, 0)) { 1043 1041 if (new) 1044 1042 dst_release(&rt->dst); 1045 1043 ··· 1050 1050 } 1051 1051 1052 1052 if (new) 1053 - __sk_dst_set(sk, &rt->dst); 1053 + sk_dst_set(sk, &rt->dst); 1054 1054 1055 1055 out: 1056 1056 bh_unlock_sock(sk); 1057 + dst_release(odst); 1057 1058 } 1058 1059 EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); 1059 1060
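The route.c hunk above switches from the non-referencing __sk_dst_get() to sk_dst_get(), which takes a reference that must be paired with exactly one dst_release() on every exit path (note the single release after the out: label). A minimal sketch of that get/release discipline with a toy refcount — names are illustrative, not the kernel's dst API:

```c
/* Toy refcounted object; names are illustrative, not the kernel's dst API. */
struct dst {
	int refcnt;
};

static struct dst *dst_get(struct dst *d)
{
	d->refcnt++;		/* like sk_dst_get(): caller now owns a ref */
	return d;
}

static void dst_put(struct dst *d)
{
	d->refcnt--;		/* like dst_release() */
}

/* Every exit path funnels through one release, as the out: label does above. */
static int handler(struct dst *cached)
{
	struct dst *odst = dst_get(cached);
	int err = 0;

	/* ... use odst; early exits would goto a common out: label ... */
	dst_put(odst);
	return err;
}
```

Funnelling all exits through one release point is what makes the hunk's early `goto out` paths leak-free.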
+2 -1
net/ipv4/tcp.c
··· 1108 1108 if (unlikely(tp->repair)) { 1109 1109 if (tp->repair_queue == TCP_RECV_QUEUE) { 1110 1110 copied = tcp_send_rcvq(sk, msg, size); 1111 - goto out; 1111 + goto out_nopush; 1112 1112 } 1113 1113 1114 1114 err = -EINVAL; ··· 1282 1282 out: 1283 1283 if (copied) 1284 1284 tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); 1285 + out_nopush: 1285 1286 release_sock(sk); 1286 1287 return copied + copied_syn; 1287 1288
+4 -4
net/ipv4/tcp_input.c
··· 1106 1106 } 1107 1107 1108 1108 /* D-SACK for already forgotten data... Do dumb counting. */ 1109 - if (dup_sack && tp->undo_marker && tp->undo_retrans && 1109 + if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 && 1110 1110 !after(end_seq_0, prior_snd_una) && 1111 1111 after(end_seq_0, tp->undo_marker)) 1112 1112 tp->undo_retrans--; ··· 1187 1187 1188 1188 /* Account D-SACK for retransmitted packet. */ 1189 1189 if (dup_sack && (sacked & TCPCB_RETRANS)) { 1190 - if (tp->undo_marker && tp->undo_retrans && 1190 + if (tp->undo_marker && tp->undo_retrans > 0 && 1191 1191 after(end_seq, tp->undo_marker)) 1192 1192 tp->undo_retrans--; 1193 1193 if (sacked & TCPCB_SACKED_ACKED) ··· 1893 1893 tp->lost_out = 0; 1894 1894 1895 1895 tp->undo_marker = 0; 1896 - tp->undo_retrans = 0; 1896 + tp->undo_retrans = -1; 1897 1897 } 1898 1898 1899 1899 void tcp_clear_retrans(struct tcp_sock *tp) ··· 2665 2665 2666 2666 tp->prior_ssthresh = 0; 2667 2667 tp->undo_marker = tp->snd_una; 2668 - tp->undo_retrans = tp->retrans_out; 2668 + tp->undo_retrans = tp->retrans_out ? : -1; 2669 2669 2670 2670 if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { 2671 2671 if (!ece_ack)
+4 -2
net/ipv4/tcp_output.c
··· 2525 2525 if (!tp->retrans_stamp) 2526 2526 tp->retrans_stamp = TCP_SKB_CB(skb)->when; 2527 2527 2528 - tp->undo_retrans += tcp_skb_pcount(skb); 2529 - 2530 2528 /* snd_nxt is stored to detect loss of retransmitted segment, 2531 2529 * see tcp_input.c tcp_sacktag_write_queue(). 2532 2530 */ ··· 2532 2534 } else if (err != -EBUSY) { 2533 2535 NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); 2534 2536 } 2537 + 2538 + if (tp->undo_retrans < 0) 2539 + tp->undo_retrans = 0; 2540 + tp->undo_retrans += tcp_skb_pcount(skb); 2535 2541 return err; 2536 2542 } 2537 2543
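The two TCP hunks above change undo_retrans from a plain counter into one with a -1 "not yet armed" sentinel: a retransmission resets it to 0 before adding its packet count, and D-SACK accounting only decrements strictly positive values. A minimal sketch of the pattern, using a hypothetical struct rather than the kernel's tcp_sock:

```c
/* Sketch of the -1 sentinel pattern (hypothetical struct, not the kernel's):
 * undo_retrans == -1 means "no retransmission recorded since the last reset",
 * so D-SACK bookkeeping must not decrement it below zero. */
struct undo_state {
	int undo_retrans;	/* -1 = unset, otherwise retransmitted pcount */
};

static void mark_retransmit(struct undo_state *s, int pcount)
{
	if (s->undo_retrans < 0)	/* first retransmission: arm the counter */
		s->undo_retrans = 0;
	s->undo_retrans += pcount;
}

static void dsack_seen(struct undo_state *s)
{
	if (s->undo_retrans > 0)	/* skip the -1 sentinel */
		s->undo_retrans--;
}
```

The sentinel distinguishes "nothing retransmitted yet" from "all retransmissions already accounted for", which a plain zero cannot.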
+4 -1
net/ipv4/udp.c
··· 1588 1588 goto csum_error; 1589 1589 1590 1590 1591 - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) 1591 + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { 1592 + UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_RCVBUFERRORS, 1593 + is_udplite); 1592 1594 goto drop; 1595 + } 1593 1596 1594 1597 rc = 0; 1595 1598
+11 -2
net/ipv6/mcast.c
··· 1301 1301 len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr); 1302 1302 len -= skb_network_header_len(skb); 1303 1303 1304 - /* Drop queries with not link local source */ 1305 - if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) 1304 + /* RFC3810 6.2 1305 + * Upon reception of an MLD message that contains a Query, the node 1306 + * checks if the source address of the message is a valid link-local 1307 + * address, if the Hop Limit is set to 1, and if the Router Alert 1308 + * option is present in the Hop-By-Hop Options header of the IPv6 1309 + * packet. If any of these checks fails, the packet is dropped. 1310 + */ 1311 + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL) || 1312 + ipv6_hdr(skb)->hop_limit != 1 || 1313 + !(IP6CB(skb)->flags & IP6SKB_ROUTERALERT) || 1314 + IP6CB(skb)->ra != htons(IPV6_OPT_ROUTERALERT_MLD)) 1306 1315 return -EINVAL; 1307 1316 1308 1317 idev = __in6_dev_get(skb->dev);
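The mcast.c hunk above tightens MLD query acceptance to the three RFC 3810 Section 6.2 checks: link-local source address, Hop Limit of 1, and the MLD Router Alert hop-by-hop option. A condensed sketch of the combined predicate, with illustrative field names rather than the kernel's skb/IP6CB accessors:

```c
#include <stdbool.h>

/* Condensed MLD query validity predicate per RFC 3810, Section 6.2; the
 * struct is illustrative only (the kernel reads these from skb and IP6CB). */
struct mld_query {
	bool saddr_link_local;	/* source is a link-local address */
	int  hop_limit;		/* IPv6 Hop Limit field */
	bool router_alert_mld;	/* MLD Router Alert option present */
};

static bool mld_query_valid(const struct mld_query *q)
{
	return q->saddr_link_local &&
	       q->hop_limit == 1 &&
	       q->router_alert_mld;
}
```

Failing any one check drops the packet, matching the hunk's single combined `if` that returns -EINVAL.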
+5 -1
net/ipv6/udp.c
··· 674 674 goto csum_error; 675 675 } 676 676 677 - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) 677 + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { 678 + UDP6_INC_STATS_BH(sock_net(sk), 679 + UDP_MIB_RCVBUFERRORS, is_udplite); 678 680 goto drop; 681 + } 679 682 680 683 skb_dst_drop(skb); 681 684 ··· 693 690 bh_unlock_sock(sk); 694 691 695 692 return rc; 693 + 696 694 csum_error: 697 695 UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite); 698 696 drop:
+2 -2
net/l2tp/l2tp_ppp.c
··· 1365 1365 int err; 1366 1366 1367 1367 if (level != SOL_PPPOL2TP) 1368 - return udp_prot.setsockopt(sk, level, optname, optval, optlen); 1368 + return -EINVAL; 1369 1369 1370 1370 if (optlen < sizeof(int)) 1371 1371 return -EINVAL; ··· 1491 1491 struct pppol2tp_session *ps; 1492 1492 1493 1493 if (level != SOL_PPPOL2TP) 1494 - return udp_prot.getsockopt(sk, level, optname, optval, optlen); 1494 + return -EINVAL; 1495 1495 1496 1496 if (get_user(len, optlen)) 1497 1497 return -EFAULT;
+3 -2
net/mac80211/util.c
··· 1096 1096 int err; 1097 1097 1098 1098 /* 24 + 6 = header + auth_algo + auth_transaction + status_code */ 1099 - skb = dev_alloc_skb(local->hw.extra_tx_headroom + 24 + 6 + extra_len); 1099 + skb = dev_alloc_skb(local->hw.extra_tx_headroom + IEEE80211_WEP_IV_LEN + 1100 + 24 + 6 + extra_len + IEEE80211_WEP_ICV_LEN); 1100 1101 if (!skb) 1101 1102 return; 1102 1103 1103 - skb_reserve(skb, local->hw.extra_tx_headroom); 1104 + skb_reserve(skb, local->hw.extra_tx_headroom + IEEE80211_WEP_IV_LEN); 1104 1105 1105 1106 mgmt = (struct ieee80211_mgmt *) skb_put(skb, 24 + 6); 1106 1107 memset(mgmt, 0, 24 + 6);
+2 -2
net/netlink/af_netlink.c
··· 636 636 while (nlk->cb_running && netlink_dump_space(nlk)) { 637 637 err = netlink_dump(sk); 638 638 if (err < 0) { 639 - sk->sk_err = err; 639 + sk->sk_err = -err; 640 640 sk->sk_error_report(sk); 641 641 break; 642 642 } ··· 2483 2483 atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) { 2484 2484 ret = netlink_dump(sk); 2485 2485 if (ret) { 2486 - sk->sk_err = ret; 2486 + sk->sk_err = -ret; 2487 2487 sk->sk_error_report(sk); 2488 2488 } 2489 2489 }
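The af_netlink.c hunk above negates the return value before storing it: kernel-internal functions report failure as a negative errno (e.g. -ENOMEM), while sk_err — and the value userspace later reads back — holds the positive errno. A hypothetical helper capturing that sign convention:

```c
#include <errno.h>

/* Hypothetical helper showing the sign convention fixed above: a negative
 * internal return becomes the positive errno stored in sk_err; any
 * non-negative return means no error to report. */
static int sk_err_from_ret(int ret)
{
	return ret < 0 ? -ret : 0;
}
```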
+2
net/openvswitch/actions.c
··· 551 551 552 552 case OVS_ACTION_ATTR_SAMPLE: 553 553 err = sample(dp, skb, a); 554 + if (unlikely(err)) /* skb already freed. */ 555 + return err; 554 556 break; 555 557 } 556 558
+13 -14
net/openvswitch/datapath.c
··· 1 1 /* 2 - * Copyright (c) 2007-2013 Nicira, Inc. 2 + * Copyright (c) 2007-2014 Nicira, Inc. 3 3 * 4 4 * This program is free software; you can redistribute it and/or 5 5 * modify it under the terms of version 2 of the GNU General Public ··· 276 276 OVS_CB(skb)->flow = flow; 277 277 OVS_CB(skb)->pkt_key = &key; 278 278 279 - ovs_flow_stats_update(OVS_CB(skb)->flow, skb); 279 + ovs_flow_stats_update(OVS_CB(skb)->flow, key.tp.flags, skb); 280 280 ovs_execute_actions(dp, skb); 281 281 stats_counter = &stats->n_hit; 282 282 ··· 889 889 } 890 890 /* The unmasked key has to be the same for flow updates. */ 891 891 if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { 892 - error = -EEXIST; 893 - goto err_unlock_ovs; 892 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 893 + if (!flow) { 894 + error = -ENOENT; 895 + goto err_unlock_ovs; 896 + } 894 897 } 895 898 /* Update actions. */ 896 899 old_acts = ovsl_dereference(flow->sf_acts); ··· 984 981 goto err_unlock_ovs; 985 982 } 986 983 /* Check that the flow exists. */ 987 - flow = ovs_flow_tbl_lookup(&dp->table, &key); 984 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 988 985 if (unlikely(!flow)) { 989 986 error = -ENOENT; 990 987 goto err_unlock_ovs; 991 988 } 992 - /* The unmasked key has to be the same for flow updates. */ 993 - if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { 994 - error = -EEXIST; 995 - goto err_unlock_ovs; 996 - } 989 + 997 990 /* Update actions, if present.
*/ 998 991 if (likely(acts)) { 999 992 old_acts = ovsl_dereference(flow->sf_acts); ··· 1062 1063 goto unlock; 1063 1064 } 1064 1065 1065 - flow = ovs_flow_tbl_lookup(&dp->table, &key); 1066 - if (!flow || !ovs_flow_cmp_unmasked_key(flow, &match)) { 1066 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 1067 + if (!flow) { 1067 1068 err = -ENOENT; 1068 1069 goto unlock; 1069 1070 } ··· 1112 1113 goto unlock; 1113 1114 } 1114 1115 1115 - flow = ovs_flow_tbl_lookup(&dp->table, &key); 1116 - if (unlikely(!flow || !ovs_flow_cmp_unmasked_key(flow, &match))) { 1116 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 1117 + if (unlikely(!flow)) { 1117 1118 err = -ENOENT; 1118 1119 goto unlock; 1119 1120 }
+2 -2
net/openvswitch/flow.c
··· 61 61 62 62 #define TCP_FLAGS_BE16(tp) (*(__be16 *)&tcp_flag_word(tp) & htons(0x0FFF)) 63 63 64 - void ovs_flow_stats_update(struct sw_flow *flow, struct sk_buff *skb) 64 + void ovs_flow_stats_update(struct sw_flow *flow, __be16 tcp_flags, 65 + struct sk_buff *skb) 65 66 { 66 67 struct flow_stats *stats; 67 - __be16 tcp_flags = flow->key.tp.flags; 68 68 int node = numa_node_id(); 69 69 70 70 stats = rcu_dereference(flow->stats[node]);
+3 -2
net/openvswitch/flow.h
··· 1 1 /* 2 - * Copyright (c) 2007-2013 Nicira, Inc. 2 + * Copyright (c) 2007-2014 Nicira, Inc. 3 3 * 4 4 * This program is free software; you can redistribute it and/or 5 5 * modify it under the terms of version 2 of the GNU General Public ··· 180 180 unsigned char ar_tip[4]; /* target IP address */ 181 181 } __packed; 182 182 183 - void ovs_flow_stats_update(struct sw_flow *, struct sk_buff *); 183 + void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags, 184 + struct sk_buff *); 184 185 void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *, 185 186 unsigned long *used, __be16 *tcp_flags); 186 187 void ovs_flow_stats_clear(struct sw_flow *);
+16
net/openvswitch/flow_table.c
··· 456 456 return ovs_flow_tbl_lookup_stats(tbl, key, &n_mask_hit); 457 457 } 458 458 459 + struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, 460 + struct sw_flow_match *match) 461 + { 462 + struct table_instance *ti = rcu_dereference_ovsl(tbl->ti); 463 + struct sw_flow_mask *mask; 464 + struct sw_flow *flow; 465 + 466 + /* Always called under ovs-mutex. */ 467 + list_for_each_entry(mask, &tbl->mask_list, list) { 468 + flow = masked_flow_lookup(ti, match->key, mask); 469 + if (flow && ovs_flow_cmp_unmasked_key(flow, match)) /* Found */ 470 + return flow; 471 + } 472 + return NULL; 473 + } 474 + 459 475 int ovs_flow_tbl_num_masks(const struct flow_table *table) 460 476 { 461 477 struct sw_flow_mask *mask;
+2 -1
net/openvswitch/flow_table.h
··· 76 76 u32 *n_mask_hit); 77 77 struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *, 78 78 const struct sw_flow_key *); 79 - 79 + struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, 80 + struct sw_flow_match *match); 80 81 bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow, 81 82 struct sw_flow_match *match); 82 83
+17
net/openvswitch/vport-gre.c
··· 110 110 return PACKET_RCVD; 111 111 } 112 112 113 + /* Called with rcu_read_lock and BH disabled. */ 114 + static int gre_err(struct sk_buff *skb, u32 info, 115 + const struct tnl_ptk_info *tpi) 116 + { 117 + struct ovs_net *ovs_net; 118 + struct vport *vport; 119 + 120 + ovs_net = net_generic(dev_net(skb->dev), ovs_net_id); 121 + vport = rcu_dereference(ovs_net->vport_net.gre_vport); 122 + 123 + if (unlikely(!vport)) 124 + return PACKET_REJECT; 125 + else 126 + return PACKET_RCVD; 127 + } 128 + 113 129 static int gre_tnl_send(struct vport *vport, struct sk_buff *skb) 114 130 { 115 131 struct net *net = ovs_dp_get_net(vport->dp); ··· 202 186 203 187 static struct gre_cisco_protocol gre_protocol = { 204 188 .handler = gre_rcv, 189 + .err_handler = gre_err, 205 190 .priority = 1, 206 191 }; 207 192
+15 -107
net/sctp/ulpevent.c
··· 366 366 * specification [SCTP] and any extensions for a list of possible 367 367 * error formats. 368 368 */ 369 - struct sctp_ulpevent *sctp_ulpevent_make_remote_error( 370 - const struct sctp_association *asoc, struct sctp_chunk *chunk, 371 - __u16 flags, gfp_t gfp) 369 + struct sctp_ulpevent * 370 + sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, 371 + struct sctp_chunk *chunk, __u16 flags, 372 + gfp_t gfp) 372 373 { 373 374 struct sctp_ulpevent *event; 374 375 struct sctp_remote_error *sre; ··· 388 387 /* Copy the skb to a new skb with room for us to prepend 389 388 * notification with. 390 389 */ 391 - skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error), 392 - 0, gfp); 390 + skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp); 393 391 394 392 /* Pull off the rest of the cause TLV from the chunk. */ 395 393 skb_pull(chunk->skb, elen); ··· 399 399 event = sctp_skb2event(skb); 400 400 sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); 401 401 402 - sre = (struct sctp_remote_error *) 403 - skb_push(skb, sizeof(struct sctp_remote_error)); 402 + sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre)); 404 403 405 404 /* Trim the buffer to the right length. */ 406 - skb_trim(skb, sizeof(struct sctp_remote_error) + elen); 405 + skb_trim(skb, sizeof(*sre) + elen); 407 406 408 - /* Socket Extensions for SCTP 409 - * 5.3.1.3 SCTP_REMOTE_ERROR 410 - * 411 - * sre_type: 412 - * It should be SCTP_REMOTE_ERROR. 413 - */ 407 + /* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */ 408 + memset(sre, 0, sizeof(*sre)); 414 409 sre->sre_type = SCTP_REMOTE_ERROR; 415 - 416 - /* 417 - * Socket Extensions for SCTP 418 - * 5.3.1.3 SCTP_REMOTE_ERROR 419 - * 420 - * sre_flags: 16 bits (unsigned integer) 421 - * Currently unused.
422 - */ 423 410 sre->sre_flags = 0; 424 - 425 - /* Socket Extensions for SCTP 426 - * 5.3.1.3 SCTP_REMOTE_ERROR 427 - * 428 - * sre_length: sizeof (__u32) 429 - * 430 - * This field is the total length of the notification data, 431 - * including the notification header. 432 - */ 433 411 sre->sre_length = skb->len; 434 - 435 - /* Socket Extensions for SCTP 436 - * 5.3.1.3 SCTP_REMOTE_ERROR 437 - * 438 - * sre_error: 16 bits (unsigned integer) 439 - * This value represents one of the Operational Error causes defined in 440 - * the SCTP specification, in network byte order. 441 - */ 442 412 sre->sre_error = cause; 443 - 444 - /* Socket Extensions for SCTP 445 - * 5.3.1.3 SCTP_REMOTE_ERROR 446 - * 447 - * sre_assoc_id: sizeof (sctp_assoc_t) 448 - * 449 - * The association id field, holds the identifier for the association. 450 - * All notifications for a given association have the same association 451 - * identifier. For TCP style socket, this field is ignored. 452 - */ 453 413 sctp_ulpevent_set_owner(event, asoc); 454 414 sre->sre_assoc_id = sctp_assoc2id(asoc); 455 415 456 416 return event; 457 - 458 417 fail: 459 418 return NULL; 460 419 } ··· 858 899 return notification->sn_header.sn_type; 859 900 } 860 901 861 - /* Copy out the sndrcvinfo into a msghdr. */ 902 + /* RFC6458, Section 5.3.2. SCTP Header Information Structure 903 + * (SCTP_SNDRCV, DEPRECATED) 904 + */ 862 905 void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, 863 906 struct msghdr *msghdr) 864 907 { ··· 869 908 if (sctp_ulpevent_is_notification(event)) 870 909 return; 871 910 872 - /* Sockets API Extensions for SCTP 873 - * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV) 874 - * 875 - * sinfo_stream: 16 bits (unsigned integer) 876 - * 877 - * For recvmsg() the SCTP stack places the message's stream number in 878 - this value.
879 - */ 911 + memset(&sinfo, 0, sizeof(sinfo)); 880 912 sinfo.sinfo_stream = event->stream; 881 - /* sinfo_ssn: 16 bits (unsigned integer) 882 - * 883 - * For recvmsg() this value contains the stream sequence number that 884 - * the remote endpoint placed in the DATA chunk. For fragmented 885 - * messages this is the same number for all deliveries of the message 886 - * (if more than one recvmsg() is needed to read the message). 887 - */ 888 913 sinfo.sinfo_ssn = event->ssn; 889 - /* sinfo_ppid: 32 bits (unsigned integer) 890 - * 891 - * In recvmsg() this value is 892 - * the same information that was passed by the upper layer in the peer 893 - * application. Please note that byte order issues are NOT accounted 894 - * for and this information is passed opaquely by the SCTP stack from 895 - * one end to the other. 896 - */ 897 914 sinfo.sinfo_ppid = event->ppid; 898 - /* sinfo_flags: 16 bits (unsigned integer) 899 - * 900 - * This field may contain any of the following flags and is composed of 901 - * a bitwise OR of these values. 902 - * 903 - * recvmsg() flags: 904 - * 905 - * SCTP_UNORDERED - This flag is present when the message was sent 906 - * non-ordered. 907 - */ 908 915 sinfo.sinfo_flags = event->flags; 909 - /* sinfo_tsn: 32 bit (unsigned integer) 910 - * 911 - * For the receiving side, this field holds a TSN that was 912 - * assigned to one of the SCTP Data Chunks. 913 - */ 914 916 sinfo.sinfo_tsn = event->tsn; 915 - /* sinfo_cumtsn: 32 bit (unsigned integer) 916 - * 917 - * This field will hold the current cumulative TSN as 918 - * known by the underlying SCTP layer. Note this field is 919 - * ignored when sending and only valid for a receive 920 - * operation when sinfo_flags are set to SCTP_UNORDERED.
921 - */ 922 917 sinfo.sinfo_cumtsn = event->cumtsn; 923 - /* sinfo_assoc_id: sizeof (sctp_assoc_t) 924 - * 925 - * The association handle field, sinfo_assoc_id, holds the identifier 926 - * for the association announced in the COMMUNICATION_UP notification. 927 - * All notifications for a given association have the same identifier. 928 - * Ignored for one-to-one style sockets. 929 - */ 930 918 sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc); 931 - 932 - /* context value that is set via SCTP_CONTEXT socket option. */ 919 + /* Context value that is set via SCTP_CONTEXT socket option. */ 933 920 sinfo.sinfo_context = event->asoc->default_rcv_context; 934 - 935 921 /* These fields are not used while receiving. */ 936 922 sinfo.sinfo_timetolive = 0; 937 923 938 924 put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV, 939 - sizeof(struct sctp_sndrcvinfo), (void *)&sinfo); 925 + sizeof(sinfo), &sinfo); 940 926 } 941 927 942 928 /* Do accounting for bytes received and hold a reference to the association
+1
net/tipc/bcast.c
··· 559 559 560 560 buf = node->bclink.deferred_head; 561 561 node->bclink.deferred_head = buf->next; 562 + buf->next = NULL; 562 563 node->bclink.deferred_size--; 563 564 goto receive; 564 565 }
+8 -3
net/tipc/msg.c
··· 101 101 } 102 102 103 103 /* tipc_buf_append(): Append a buffer to the fragment list of another buffer 104 - * Let first buffer become head buffer 105 - * Returns 1 and sets *buf to headbuf if chain is complete, otherwise 0 106 - * Leaves headbuf pointer at NULL if failure 104 + * @*headbuf: in: NULL for first frag, otherwise value returned from prev call 105 + * out: set when successful non-complete reassembly, otherwise NULL 106 + * @*buf: in: the buffer to append. Always defined 107 + * out: head buf after successful complete reassembly, otherwise NULL 108 + * Returns 1 when reassembly complete, otherwise 0 107 109 */ 108 110 int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) 109 111 { ··· 124 122 goto out_free; 125 123 head = *headbuf = frag; 126 124 skb_frag_list_init(head); 125 + *buf = NULL; 127 126 return 0; 128 127 } 129 128 if (!head) ··· 153 150 out_free: 154 151 pr_warn_ratelimited("Unable to build fragment list\n"); 155 152 kfree_skb(*buf); 153 + kfree_skb(*headbuf); 154 + *buf = *headbuf = NULL; 156 155 return 0; 157 156 }
+1 -1
net/wireless/core.h
··· 424 424 if (end >= start) 425 425 return jiffies_to_msecs(end - start); 426 426 427 - return jiffies_to_msecs(end + (MAX_JIFFY_OFFSET - start) + 1); 427 + return jiffies_to_msecs(end + (ULONG_MAX - start) + 1); 428 428 } 429 429 430 430 void
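The core.h hunk above fixes elapsed-time calculation across a tick-counter wraparound by spanning the full unsigned long range (ULONG_MAX) instead of MAX_JIFFY_OFFSET. A standalone sketch of the same arithmetic:

```c
#include <limits.h>

/* Wraparound-safe elapsed ticks, mirroring the corrected logic above:
 * when the counter has wrapped past ULONG_MAX, the elapsed span is
 * end + (ULONG_MAX - start) + 1 rather than the underflowing end - start. */
static unsigned long elapsed_ticks(unsigned long start, unsigned long end)
{
	if (end >= start)
		return end - start;
	return end + (ULONG_MAX - start) + 1;
}
```

With unsigned long arithmetic the wrapped branch is numerically identical to `end - start` modulo ULONG_MAX + 1, which is why spanning anything smaller than the full type range (such as MAX_JIFFY_OFFSET) undercounts the elapsed time.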
+5 -6
net/wireless/nl80211.c
··· 1497 1497 } 1498 1498 CMD(start_p2p_device, START_P2P_DEVICE); 1499 1499 CMD(set_mcast_rate, SET_MCAST_RATE); 1500 + #ifdef CONFIG_NL80211_TESTMODE 1501 + CMD(testmode_cmd, TESTMODE); 1502 + #endif 1500 1503 if (state->split) { 1501 1504 CMD(crit_proto_start, CRIT_PROTOCOL_START); 1502 1505 CMD(crit_proto_stop, CRIT_PROTOCOL_STOP); 1503 1506 if (rdev->wiphy.flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH) 1504 1507 CMD(channel_switch, CHANNEL_SWITCH); 1508 + CMD(set_qos_map, SET_QOS_MAP); 1505 1509 } 1506 - CMD(set_qos_map, SET_QOS_MAP); 1507 - 1508 - #ifdef CONFIG_NL80211_TESTMODE 1509 - CMD(testmode_cmd, TESTMODE); 1510 - #endif 1511 - 1510 + /* add into the if now */ 1512 1511 #undef CMD 1513 1512 1514 1513 if (rdev->ops->connect || rdev->ops->auth) {
+7 -15
net/wireless/reg.c
··· 935 935 if (!band_rule_found) 936 936 band_rule_found = freq_in_rule_band(fr, center_freq); 937 937 938 - bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(5)); 938 + bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(20)); 939 939 940 940 if (band_rule_found && bw_fits) 941 941 return rr; ··· 1019 1019 } 1020 1020 #endif 1021 1021 1022 - /* Find an ieee80211_reg_rule such that a 5MHz channel with frequency 1023 - * chan->center_freq fits there. 1024 - * If there is no such reg_rule, disable the channel, otherwise set the 1025 - * flags corresponding to the bandwidths allowed in the particular reg_rule 1022 + /* 1023 + * Note that right now we assume the desired channel bandwidth 1024 + * is always 20 MHz for each individual channel (HT40 uses 20 MHz 1025 + * per channel, the primary and the extension channel). 1026 1026 */ 1027 1027 static void handle_channel(struct wiphy *wiphy, 1028 1028 enum nl80211_reg_initiator initiator, ··· 1083 1083 if (reg_rule->flags & NL80211_RRF_AUTO_BW) 1084 1084 max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule); 1085 1085 1086 - if (max_bandwidth_khz < MHZ_TO_KHZ(10)) 1087 - bw_flags = IEEE80211_CHAN_NO_10MHZ; 1088 - if (max_bandwidth_khz < MHZ_TO_KHZ(20)) 1089 - bw_flags |= IEEE80211_CHAN_NO_20MHZ; 1090 1086 if (max_bandwidth_khz < MHZ_TO_KHZ(40)) 1091 - bw_flags |= IEEE80211_CHAN_NO_HT40; 1087 + bw_flags = IEEE80211_CHAN_NO_HT40; 1092 1088 if (max_bandwidth_khz < MHZ_TO_KHZ(80)) 1093 1089 bw_flags |= IEEE80211_CHAN_NO_80MHZ; 1094 1090 if (max_bandwidth_khz < MHZ_TO_KHZ(160)) ··· 1518 1522 if (reg_rule->flags & NL80211_RRF_AUTO_BW) 1519 1523 max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule); 1520 1524 1521 - if (max_bandwidth_khz < MHZ_TO_KHZ(10)) 1522 - bw_flags = IEEE80211_CHAN_NO_10MHZ; 1523 - if (max_bandwidth_khz < MHZ_TO_KHZ(20)) 1524 - bw_flags |= IEEE80211_CHAN_NO_20MHZ; 1525 1525 if (max_bandwidth_khz < MHZ_TO_KHZ(40)) 1526 - bw_flags |= IEEE80211_CHAN_NO_HT40; 1526 + bw_flags =
IEEE80211_CHAN_NO_HT40; 1527 1527 if (max_bandwidth_khz < MHZ_TO_KHZ(80)) 1528 1528 bw_flags |= IEEE80211_CHAN_NO_80MHZ; 1529 1529 if (max_bandwidth_khz < MHZ_TO_KHZ(160))
+12 -3
scripts/kernel-doc
··· 2073 2073 sub dump_function($$) { 2074 2074 my $prototype = shift; 2075 2075 my $file = shift; 2076 + my $noret = 0; 2076 2077 2077 2078 $prototype =~ s/^static +//; 2078 2079 $prototype =~ s/^extern +//; ··· 2087 2086 $prototype =~ s/__init_or_module +//; 2088 2087 $prototype =~ s/__must_check +//; 2089 2088 $prototype =~ s/__weak +//; 2090 - $prototype =~ s/^#\s*define\s+//; #ak added 2089 + my $define = $prototype =~ s/^#\s*define\s+//; #ak added 2091 2090 $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//; 2092 2091 2093 2092 # Yes, this truly is vile. We are looking for: ··· 2106 2105 # - atomic_set (macro) 2107 2106 # - pci_match_device, __copy_to_user (long return type) 2108 2107 2109 - if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2108 + if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) { 2109 + # This is an object-like macro, it has no return type and no parameter 2110 + # list. 2111 + # Function-like macros are not allowed to have spaces between 2112 + # declaration_name and opening parenthesis (notice the \s+). 2113 + $return_type = $1; 2114 + $declaration_name = $2; 2115 + $noret = 1; 2116 + } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2110 2117 $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2111 2118 $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2112 2119 $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || ··· 2149 2140 # of warnings goes sufficiently down, the check is only performed in 2150 2141 # verbose mode. 2151 2142 # TODO: always perform the check. 2152 - if ($verbose) { 2143 + if ($verbose && !$noret) { 2153 2144 check_return_section($file, $declaration_name, $return_type); 2154 2145 } 2155 2146
+2 -1
sound/pci/hda/hda_controller.c
··· 193 193 dsp_unlock(azx_dev); 194 194 return azx_dev; 195 195 } 196 - if (!res) 196 + if (!res || 197 + (chip->driver_caps & AZX_DCAPS_REVERSE_ASSIGN)) 197 198 res = azx_dev; 198 199 } 199 200 dsp_unlock(azx_dev);
+6 -6
sound/pci/hda/hda_intel.c
··· 227 227 /* quirks for Intel PCH */ 228 228 #define AZX_DCAPS_INTEL_PCH_NOPM \ 229 229 (AZX_DCAPS_SCH_SNOOP | AZX_DCAPS_BUFSIZE | \ 230 - AZX_DCAPS_COUNT_LPIB_DELAY) 230 + AZX_DCAPS_COUNT_LPIB_DELAY | AZX_DCAPS_REVERSE_ASSIGN) 231 231 232 232 #define AZX_DCAPS_INTEL_PCH \ 233 233 (AZX_DCAPS_INTEL_PCH_NOPM | AZX_DCAPS_PM_RUNTIME) ··· 596 596 struct azx *chip = card->private_data; 597 597 struct azx_pcm *p; 598 598 599 - if (chip->disabled) 599 + if (chip->disabled || chip->init_failed) 600 600 return 0; 601 601 602 602 snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); ··· 628 628 struct snd_card *card = dev_get_drvdata(dev); 629 629 struct azx *chip = card->private_data; 630 630 631 - if (chip->disabled) 631 + if (chip->disabled || chip->init_failed) 632 632 return 0; 633 633 634 634 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { ··· 665 665 struct snd_card *card = dev_get_drvdata(dev); 666 666 struct azx *chip = card->private_data; 667 667 668 - if (chip->disabled) 668 + if (chip->disabled || chip->init_failed) 669 669 return 0; 670 670 671 671 if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) ··· 692 692 struct hda_codec *codec; 693 693 int status; 694 694 695 - if (chip->disabled) 695 + if (chip->disabled || chip->init_failed) 696 696 return 0; 697 697 698 698 if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) ··· 729 729 struct snd_card *card = dev_get_drvdata(dev); 730 730 struct azx *chip = card->private_data; 731 731 732 - if (chip->disabled) 732 + if (chip->disabled || chip->init_failed) 733 733 return 0; 734 734 735 735 if (!power_save_controller ||
+1
sound/pci/hda/hda_priv.h
··· 186 186 #define AZX_DCAPS_BUFSIZE (1 << 21) /* no buffer size alignment */ 187 187 #define AZX_DCAPS_ALIGN_BUFSIZE (1 << 22) /* buffer size alignment */ 188 188 #define AZX_DCAPS_4K_BDLE_BOUNDARY (1 << 23) /* BDLE in 4k boundary */ 189 + #define AZX_DCAPS_REVERSE_ASSIGN (1 << 24) /* Assign devices in reverse order */ 189 190 #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */ 190 191 #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ 191 192 #define AZX_DCAPS_I915_POWERWELL (1 << 27) /* HSW i915 powerwell support */
+1 -1
sound/pci/hda/hda_tegra.c
··· 236 236 return rc; 237 237 } 238 238 239 + #ifdef CONFIG_PM_SLEEP 239 240 static void hda_tegra_disable_clocks(struct hda_tegra *data) 240 241 { 241 242 clk_disable_unprepare(data->hda2hdmi_clk); ··· 244 243 clk_disable_unprepare(data->hda_clk); 245 244 } 246 245 247 - #ifdef CONFIG_PM_SLEEP 248 246 /* 249 247 * power management 250 248 */
+2
sound/pci/hda/patch_hdmi.c
··· 3337 3337 { .id = 0x10de0051, .name = "GPU 51 HDMI/DP", .patch = patch_nvhdmi }, 3338 3338 { .id = 0x10de0060, .name = "GPU 60 HDMI/DP", .patch = patch_nvhdmi }, 3339 3339 { .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch }, 3340 + { .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi }, 3340 3341 { .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi }, 3341 3342 { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch }, 3342 3343 { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi }, ··· 3395 3394 MODULE_ALIAS("snd-hda-codec-id:10de0051"); 3396 3395 MODULE_ALIAS("snd-hda-codec-id:10de0060"); 3397 3396 MODULE_ALIAS("snd-hda-codec-id:10de0067"); 3397 + MODULE_ALIAS("snd-hda-codec-id:10de0070"); 3398 3398 MODULE_ALIAS("snd-hda-codec-id:10de0071"); 3399 3399 MODULE_ALIAS("snd-hda-codec-id:10de8001"); 3400 3400 MODULE_ALIAS("snd-hda-codec-id:11069f80");
-1
sound/soc/fsl/imx-pcm-dma.c
··· 59 59 { 60 60 return devm_snd_dmaengine_pcm_register(&pdev->dev, 61 61 &imx_dmaengine_pcm_config, 62 - SND_DMAENGINE_PCM_FLAG_NO_RESIDUE | 63 62 SND_DMAENGINE_PCM_FLAG_COMPAT); 64 63 } 65 64 EXPORT_SYMBOL_GPL(imx_pcm_dma_init);
+2 -2
tools/lib/lockdep/include/liblockdep/mutex.h
··· 35 35 36 36 static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock) 37 37 { 38 - lock_acquire(&lock->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); 38 + lock_acquire(&lock->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); 39 39 return pthread_mutex_lock(&lock->mutex); 40 40 } 41 41 ··· 47 47 48 48 static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *lock) 49 49 { 50 - lock_acquire(&lock->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); 50 + lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); 51 51 return pthread_mutex_trylock(&lock->mutex) == 0 ? 1 : 0; 52 52 } 53 53
+4 -4
tools/lib/lockdep/include/liblockdep/rwlock.h
··· 36 36 37 37 static inline int liblockdep_pthread_rwlock_rdlock(liblockdep_pthread_rwlock_t *lock) 38 38 { 39 - lock_acquire(&lock->dep_map, 0, 0, 2, 2, NULL, (unsigned long)_RET_IP_); 39 + lock_acquire(&lock->dep_map, 0, 0, 2, 1, NULL, (unsigned long)_RET_IP_); 40 40 return pthread_rwlock_rdlock(&lock->rwlock); 41 41 42 42 } ··· 49 49 50 50 static inline int liblockdep_pthread_rwlock_wrlock(liblockdep_pthread_rwlock_t *lock) 51 51 { 52 - lock_acquire(&lock->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); 52 + lock_acquire(&lock->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); 53 53 return pthread_rwlock_wrlock(&lock->rwlock); 54 54 } 55 55 56 56 static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_t *lock) 57 57 { 58 - lock_acquire(&lock->dep_map, 0, 1, 2, 2, NULL, (unsigned long)_RET_IP_); 58 + lock_acquire(&lock->dep_map, 0, 1, 2, 1, NULL, (unsigned long)_RET_IP_); 59 59 return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 1 : 0; 60 60 } 61 61 62 62 static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock) 63 63 { 64 - lock_acquire(&lock->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); 64 + lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); 65 65 return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0; 66 66 } 67 67
+9 -11
tools/lib/lockdep/preload.c
··· 92 92 static void init_preload(void); 93 93 static void try_init_preload(void) 94 94 { 95 - if (!__init_state != done) 95 + if (__init_state != done) 96 96 init_preload(); 97 97 } 98 98 ··· 252 252 253 253 try_init_preload(); 254 254 255 - lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 2, NULL, 255 + lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 1, NULL, 256 256 (unsigned long)_RET_IP_); 257 257 /* 258 258 * Here's the thing with pthread mutexes: unlike the kernel variant, ··· 281 281 282 282 try_init_preload(); 283 283 284 - lock_acquire(&__get_lock(mutex)->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); 284 + lock_acquire(&__get_lock(mutex)->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); 285 285 r = ll_pthread_mutex_trylock(mutex); 286 286 if (r) 287 287 lock_release(&__get_lock(mutex)->dep_map, 0, (unsigned long)_RET_IP_); ··· 303 303 */ 304 304 r = ll_pthread_mutex_unlock(mutex); 305 305 if (r) 306 - lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); 306 + lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); 307 307 308 308 return r; 309 309 } ··· 352 352 353 353 init_preload(); 354 354 355 - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 2, 2, NULL, (unsigned long)_RET_IP_); 355 + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 2, 1, NULL, (unsigned long)_RET_IP_); 356 356 r = ll_pthread_rwlock_rdlock(rwlock); 357 357 if (r) 358 358 lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); ··· 366 366 367 367 init_preload(); 368 368 369 - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 2, 2, NULL, (unsigned long)_RET_IP_); 369 + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 2, 1, NULL, (unsigned long)_RET_IP_); 370 370 r = ll_pthread_rwlock_tryrdlock(rwlock); 371 371 if (r) 372 372 lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); ··· 380 380 381 381 init_preload(); 382 382 383 - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1,
0, 2, NULL, (unsigned long)_RET_IP_); 383 + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); 384 384 r = ll_pthread_rwlock_trywrlock(rwlock); 385 385 if (r) 386 386 lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); ··· 394 394 395 395 init_preload(); 396 396 397 - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); 397 + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); 398 398 r = ll_pthread_rwlock_wrlock(rwlock); 399 399 if (r) 400 400 lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); ··· 411 411 lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); 412 412 r = ll_pthread_rwlock_unlock(rwlock); 413 413 if (r) 414 - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); 414 + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); 415 415 416 416 return r; 417 417 } ··· 438 438 ll_pthread_rwlock_trywrlock = dlsym(RTLD_NEXT, "pthread_rwlock_trywrlock"); 439 439 ll_pthread_rwlock_unlock = dlsym(RTLD_NEXT, "pthread_rwlock_unlock"); 440 440 #endif 441 - 442 - printf("%p\n", ll_pthread_mutex_trylock);fflush(stdout); 443 441 444 442 lockdep_init(); 445 443
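The liblockdep hunks above all make the same change: the `check` argument of lock_acquire() drops from 2 to 1, and a leftover debug printf is removed from the initialization path. The wrapper shape they patch — record the acquire with lockdep, call the real pthread function (resolved elsewhere via dlsym(RTLD_NEXT)), and roll the bookkeeping back if the real call fails — can be sketched with stubs. Every identifier below is a stand-in for illustration, not the real liblockdep API:

```c
#include <assert.h>

/* Hypothetical stand-ins for lockdep's bookkeeping and for the real
 * pthread call that liblockdep resolves with dlsym(RTLD_NEXT). */
static int acquired, released, real_rc;

static void lock_acquire_stub(void) { acquired++; }
static void lock_release_stub(void) { released++; }
static int ll_pthread_mutex_trylock_stub(void) { return real_rc; }

/* Mirrors the trylock wrapper in preload.c: record the acquire first,
 * run the real locking call, and undo the bookkeeping if it fails. */
static int wrapped_mutex_trylock(void)
{
	int r;

	lock_acquire_stub();                 /* lockdep sees the intent */
	r = ll_pthread_mutex_trylock_stub(); /* the real pthread call */
	if (r)
		lock_release_stub();         /* roll back on failure */
	return r;
}
```

The acquire-before-call ordering matters: lockdep must know about the lock before the thread can block on it, which is why failure is handled by releasing afterwards rather than acquiring only on success.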
+16 -5
tools/perf/ui/browsers/hists.c
··· 17 17 #include "../util.h" 18 18 #include "../ui.h" 19 19 #include "map.h" 20 + #include "annotate.h" 20 21 21 22 struct hist_browser { 22 23 struct ui_browser b; ··· 1594 1593 bi->to.sym->name) > 0) 1595 1594 annotate_t = nr_options++; 1596 1595 } else { 1597 - 1598 1596 if (browser->selection != NULL && 1599 1597 browser->selection->sym != NULL && 1600 - !browser->selection->map->dso->annotate_warned && 1601 - asprintf(&options[nr_options], "Annotate %s", 1602 - browser->selection->sym->name) > 0) 1603 - annotate = nr_options++; 1598 + !browser->selection->map->dso->annotate_warned) { 1599 + struct annotation *notes; 1600 + 1601 + notes = symbol__annotation(browser->selection->sym); 1602 + 1603 + if (notes->src && 1604 + asprintf(&options[nr_options], "Annotate %s", 1605 + browser->selection->sym->name) > 0) 1606 + annotate = nr_options++; 1607 + } 1604 1608 } 1605 1609 1606 1610 if (thread != NULL && ··· 1662 1656 1663 1657 if (choice == annotate || choice == annotate_t || choice == annotate_f) { 1664 1658 struct hist_entry *he; 1659 + struct annotation *notes; 1665 1660 int err; 1666 1661 do_annotate: 1667 1662 if (!objdump_path && perf_session_env__lookup_objdump(env)) ··· 1685 1678 he->ms.sym = he->branch_info->to.sym; 1686 1679 he->ms.map = he->branch_info->to.map; 1687 1680 } 1681 + 1682 + notes = symbol__annotation(he->ms.sym); 1683 + if (!notes->src) 1684 + continue; 1688 1685 1689 1686 /* 1690 1687 * Don't let this be freed, say, by hists__decay_entry.
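The hists.c change above stops offering an "Annotate" menu entry for symbols that carry no annotation source (notes->src == NULL), both when building the option list and when jumping to the annotate action. A minimal sketch of that guard, using simplified stand-ins for perf's symbol and annotation types (none of these match the real struct layouts):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins: the real code calls symbol__annotation(sym)
 * and checks its ->src member before offering the menu entry. */
struct annotation { void *src; };
struct symbol { const char *name; struct annotation notes; };

/* Append an "Annotate <sym>" entry only when annotation source data
 * exists, mirroring the guard added in hists.c. Returns the updated
 * option count. */
static int maybe_add_annotate_option(struct symbol *sym,
				     char options[][64], int nr_options)
{
	if (sym && sym->notes.src) {
		snprintf(options[nr_options], sizeof(options[0]),
			 "Annotate %s", sym->name);
		nr_options++;
	}
	return nr_options;
}
```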
+22 -32
tools/perf/util/machine.c
··· 496 496 u64 start; 497 497 }; 498 498 499 - static int symbol__in_kernel(void *arg, const char *name, 500 - char type __maybe_unused, u64 start) 501 - { 502 - struct process_args *args = arg; 503 - 504 - if (strchr(name, '[')) 505 - return 0; 506 - 507 - args->start = start; 508 - return 1; 509 - } 510 - 511 499 static void machine__get_kallsyms_filename(struct machine *machine, char *buf, 512 500 size_t bufsz) 513 501 { ··· 505 517 scnprintf(buf, bufsz, "%s/proc/kallsyms", machine->root_dir); 506 518 } 507 519 508 - /* Figure out the start address of kernel map from /proc/kallsyms */ 509 - static u64 machine__get_kernel_start_addr(struct machine *machine) 520 + const char *ref_reloc_sym_names[] = {"_text", "_stext", NULL}; 521 + 522 + /* Figure out the start address of kernel map from /proc/kallsyms. 523 + * Returns the name of the start symbol in *symbol_name. Pass in NULL as 524 + * symbol_name if it's not that important. 525 + */ 526 + static u64 machine__get_kernel_start_addr(struct machine *machine, 527 + const char **symbol_name) 510 528 { 511 529 char filename[PATH_MAX]; 512 - struct process_args args; 530 + int i; 531 + const char *name; 532 + u64 addr = 0; 513 533 514 534 machine__get_kallsyms_filename(machine, filename, PATH_MAX); 515 535 516 536 if (symbol__restricted_filename(filename, "/proc/kallsyms")) 517 537 return 0; 518 538 519 - if (kallsyms__parse(filename, &args, symbol__in_kernel) <= 0) 520 - return 0; 539 + for (i = 0; (name = ref_reloc_sym_names[i]) != NULL; i++) { 540 + addr = kallsyms__get_function_start(filename, name); 541 + if (addr) 542 + break; 543 + } 521 544 522 - return args.start; 545 + if (symbol_name) 546 + *symbol_name = name; 547 + 548 + return addr; 523 549 } 524 550 525 551 int __machine__create_kernel_maps(struct machine *machine, struct dso *kernel) 526 552 { 527 553 enum map_type type; 528 - u64 start = machine__get_kernel_start_addr(machine); 554 + u64 start = machine__get_kernel_start_addr(machine, NULL); 529 555 
530 556 for (type = 0; type < MAP__NR_TYPES; ++type) { 531 557 struct kmap *kmap; ··· 854 852 return 0; 855 853 } 856 854 857 - const char *ref_reloc_sym_names[] = {"_text", "_stext", NULL}; 858 - 859 855 int machine__create_kernel_maps(struct machine *machine) 860 856 { 861 857 struct dso *kernel = machine__get_kernel(machine); 862 - char filename[PATH_MAX]; 863 858 const char *name; 864 - u64 addr = 0; 865 - int i; 866 - 867 - machine__get_kallsyms_filename(machine, filename, PATH_MAX); 868 - 869 - for (i = 0; (name = ref_reloc_sym_names[i]) != NULL; i++) { 870 - addr = kallsyms__get_function_start(filename, name); 871 - if (addr) 872 - break; 873 - } 859 + u64 addr = machine__get_kernel_start_addr(machine, &name); 874 860 if (!addr) 875 861 return -1; 876 862
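The machine.c refactor above folds the duplicated kallsyms scan into machine__get_kernel_start_addr(), which now walks the ref_reloc_sym_names candidates ("_text", then "_stext"), takes the first that resolves, and optionally reports which name matched. The first-match loop can be sketched like this, where resolve() is a made-up stand-in for kallsyms__get_function_start():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Candidate start symbols, tried in order, NULL-terminated
 * (mirrors ref_reloc_sym_names in machine.c). */
static const char *candidates[] = { "_text", "_stext", NULL };

/* Hypothetical resolver; pretend only _stext exists in this kernel. */
static uint64_t resolve(const char *name)
{
	if (strcmp(name, "_stext") == 0)
		return 0xffffffff81000000ULL;
	return 0;
}

/* Return the first candidate that resolves; if which is non-NULL,
 * report the winning symbol name through it (NULL callers just want
 * the address, like __machine__create_kernel_maps()). */
static uint64_t kernel_start(const char **which)
{
	uint64_t addr = 0;
	const char *name = NULL;
	int i;

	for (i = 0; (name = candidates[i]) != NULL; i++) {
		addr = resolve(name);
		if (addr)
			break;
	}
	if (which)
		*which = name;
	return addr;
}
```

Centralizing the loop lets both callers share one lookup path instead of one parsing kallsyms with a callback and the other re-reading the file by hand.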
+1 -1
tools/thermal/tmon/Makefile
··· 21 21 OBJS += 22 22 23 23 tmon: $(OBJS) Makefile tmon.h 24 - $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -lpthread 24 + $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -ltinfo -lpthread 25 25 26 26 valgrind: tmon 27 27 sudo valgrind -v --track-origins=yes --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes ./$(TARGET) 1> /dev/null
+25 -1
tools/thermal/tmon/tmon.c
··· 142 142 static void prepare_logging(void) 143 143 { 144 144 int i; 145 + struct stat logstat; 145 146 146 147 if (!logging) 147 148 return; ··· 152 151 syslog(LOG_ERR, "failed to open log file %s\n", TMON_LOG_FILE); 153 152 return; 154 153 } 154 + 155 + if (lstat(TMON_LOG_FILE, &logstat) < 0) { 156 + syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE); 157 + fclose(tmon_log); 158 + tmon_log = NULL; 159 + return; 160 + } 161 + 162 + /* The log file must be a regular file owned by us */ 163 + if (S_ISLNK(logstat.st_mode)) { 164 + syslog(LOG_ERR, "Log file is a symlink. Will not log\n"); 165 + fclose(tmon_log); 166 + tmon_log = NULL; 167 + return; 168 + } 169 + 170 + if (logstat.st_uid != getuid()) { 171 + syslog(LOG_ERR, "We don't own the log file. Not logging\n"); 172 + fclose(tmon_log); 173 + tmon_log = NULL; 174 + return; 175 + } 176 + 155 177 156 178 fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n"); 157 179 for (i = 0; i < ptdata.nr_tz_sensor; i++) { ··· 355 331 disable_tui(); 356 332 357 333 /* change the file mode mask */ 358 - umask(0); 334 + umask(S_IWGRP | S_IWOTH); 359 335 360 336 /* new SID for the daemon process */ 361 337 sid = setsid();
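The tmon.c hunk above hardens logging: after opening TMON_LOG_FILE it lstat()s the path and refuses to log if the file is a symlink or not owned by the current user, and the daemon's umask is tightened from 0 to S_IWGRP | S_IWOTH so new files are not group/world-writable. A condensed sketch of the same check follows; log_path_is_safe() is a made-up helper, and it collapses the patch's separate S_ISLNK and ownership tests into an S_ISREG test plus the uid comparison:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return 1 if path looks safe to append a log to: it must exist,
 * be a regular file (a symlink fails S_ISREG under lstat(), which
 * does not follow links), and be owned by the calling user. */
static int log_path_is_safe(const char *path)
{
	struct stat st;

	if (lstat(path, &st) < 0)
		return 0;
	if (!S_ISREG(st.st_mode))
		return 0;
	if (st.st_uid != getuid())
		return 0;
	return 1;
}
```

Note that checking the path after fopen() still leaves a small window between open and check; the kernel patch accepts that trade-off, closing the common symlink-planting attack rather than all races.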