
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Signed-off-by: David S. Miller <davem@davemloft.net>

+5004 -3113
+3 -6
Documentation/Changes
···
 mcelog
 ------

-In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility
-as a regular cronjob similar to the x86-64 kernel to process and log
-machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check
-events are errors reported by the CPU. Processing them is strongly encouraged.
-All x86-64 kernels since 2.6.4 require the mcelog utility to
-process machine checks.
+On x86 kernels the mcelog utility is needed to process and log machine check
+events when CONFIG_X86_MCE is enabled. Machine check events are errors reported
+by the CPU. Processing them is strongly encouraged.

 Getting updated software
 ========================
+1 -1
Documentation/DocBook/gadget.tmpl
···

 <para>Systems need specialized hardware support to implement OTG,
 notably including a special <emphasis>Mini-AB</emphasis> jack
-and associated transciever to support <emphasis>Dual-Role</emphasis>
+and associated transceiver to support <emphasis>Dual-Role</emphasis>
 operation:
 they can act either as a host, using the standard
 Linux-USB host side driver stack,
+2 -2
Documentation/DocBook/genericirq.tmpl
···
 <para>
 Each interrupt is described by an interrupt descriptor structure
 irq_desc. The interrupt is referenced by an 'unsigned int' numeric
-value which selects the corresponding interrupt decription structure
+value which selects the corresponding interrupt description structure
 in the descriptor structures array.
 The descriptor structure contains status information and pointers
 to the interrupt flow method and the interrupt chip structure
···
 <para>
 To avoid copies of identical implementations of IRQ chips the
 core provides a configurable generic interrupt chip
-implementation. Developers should check carefuly whether the
+implementation. Developers should check carefully whether the
 generic chip fits their needs before implementing the same
 functionality slightly differently themselves.
 </para>
+1 -1
Documentation/DocBook/kernel-locking.tmpl
···
 </para>

 <para>
-There is a furthur optimization possible here: remember our original
+There is a further optimization possible here: remember our original
 cache code, where there were no reference counts and the caller simply
 held the lock whenever using the object? This is still possible: if
 you hold the lock, no one can delete the object, so you don't need to
+3 -3
Documentation/DocBook/libata.tmpl
···

 <listitem>
 <para>
-ATA_QCFLAG_ACTIVE is clared from qc->flags.
+ATA_QCFLAG_ACTIVE is cleared from qc->flags.
 </para>
 </listitem>
···

 <listitem>
 <para>
-qc->waiting is claread &amp; completed (in that order).
+qc->waiting is cleared &amp; completed (in that order).
 </para>
 </listitem>
···

 <para>
-Once sense data is acquired, this type of errors can be
+Once sense data is acquired, this type of errors can be
-handled similary to other SCSI errors. Note that sense data
+handled similarly to other SCSI errors. Note that sense data
 may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR
 &amp;&amp; ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such
 cases, the error should be considered as an ATA bus error and
+1 -1
Documentation/DocBook/media_api.tmpl
···
 several digital tv standards. While it is called as DVB API,
 in fact it covers several different video standards including
 DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated
-to documment support also for DVB-S2, ISDB-T and ISDB-S.</para>
+to document support also for DVB-S2, ISDB-T and ISDB-S.</para>
 <para>The third part covers the Remote Controller API.</para>
 <para>The fourth part covers the Media Controller API.</para>
 <para>For additional information and for the latest development code,
+15 -15
Documentation/DocBook/mtdnand.tmpl
···
 <listitem><para>
 [MTD Interface]</para><para>
 These functions provide the interface to the MTD kernel API.
-They are not replacable and provide functionality
+They are not replaceable and provide functionality
 which is complete hardware independent.
 </para></listitem>
 <listitem><para>
···
 </para></listitem>
 <listitem><para>
 [GENERIC]</para><para>
-Generic functions are not replacable and provide functionality
+Generic functions are not replaceable and provide functionality
 which is complete hardware independent.
 </para></listitem>
 <listitem><para>
 [DEFAULT]</para><para>
 Default functions provide hardware related functionality which is suitable
 for most of the implementations. These functions can be replaced by the
-board driver if neccecary. Those functions are called via pointers in the
+board driver if necessary. Those functions are called via pointers in the
 NAND chip description structure. The board driver can set the functions which
 should be replaced by board dependent functions before calling nand_scan().
 If the function pointer is NULL on entry to nand_scan() then the pointer
···
 is set up nand_scan() is called. This function tries to
 detect and identify then chip. If a chip is found all the
 internal data fields are initialized accordingly.
-The structure(s) have to be zeroed out first and then filled with the neccecary
+The structure(s) have to be zeroed out first and then filled with the necessary
 information about the device.
 </para>
 <programlisting>
···
 <sect1 id="Exit_function">
 <title>Exit function</title>
 <para>
-The exit function is only neccecary if the driver is
+The exit function is only necessary if the driver is
 compiled as a module. It releases all resources which
 are held by the chip driver and unregisters the partitions
 in the MTD layer.
···
 in this case. See rts_from4.c and diskonchip.c for
 implementation reference. In those cases we must also
 use bad block tables on FLASH, because the ECC layout is
-interferring with the bad block marker positions.
+interfering with the bad block marker positions.
 See bad block table support for details.
 </para>
 </sect2>
···
 <para>
 nand_scan() calls the function nand_default_bbt().
 nand_default_bbt() selects appropriate default
-bad block table desriptors depending on the chip information
+bad block table descriptors depending on the chip information
 which was retrieved by nand_scan().
 </para>
 <para>
···
 <sect2 id="Flash_based_tables">
 <title>Flash based tables</title>
 <para>
-It may be desired or neccecary to keep a bad block table in FLASH.
+It may be desired or necessary to keep a bad block table in FLASH.
 For AG-AND chips this is mandatory, as they have no factory marked
 bad blocks. They have factory marked good blocks. The marker pattern
 is erased when the block is erased to be reused. So in case of
···
 of the blocks.
 </para>
 <para>
-The blocks in which the tables are stored are procteted against
+The blocks in which the tables are stored are protected against
 accidental access by marking them bad in the memory bad block
 table. The bad block table management functions are allowed
-to circumvernt this protection.
+to circumvent this protection.
 </para>
 <para>
 The simplest way to activate the FLASH based bad block table support
···
 User defined tables are created by filling out a
 nand_bbt_descr structure and storing the pointer in the
 nand_chip structure member bbt_td before calling nand_scan().
-If a mirror table is neccecary a second structure must be
+If a mirror table is necessary a second structure must be
 created and a pointer to this structure must be stored
 in bbt_md inside the nand_chip structure. If the bbt_md
 member is set to NULL then only the main table is used
···
 <para>
 For automatic placement some blocks must be reserved for
 bad block table storage. The number of reserved blocks is defined
-in the maxblocks member of the babd block table description structure.
+in the maxblocks member of the bad block table description structure.
 Reserving 4 blocks for mirrored tables should be a reasonable number.
 This also limits the number of blocks which are scanned for the bad
 block table ident pattern.
···
 <chapter id="filesystems">
 <title>Filesystem support</title>
 <para>
-The NAND driver provides all neccecary functions for a
+The NAND driver provides all necessary functions for a
 filesystem via the MTD interface.
 </para>
 <para>
-Filesystems must be aware of the NAND pecularities and
+Filesystems must be aware of the NAND peculiarities and
 restrictions. One major restrictions of NAND Flash is, that you cannot
 write as often as you want to a page. The consecutive writes to a page,
 before erasing it again, are restricted to 1-3 writes, depending on the
···
 #define NAND_BBT_VERSION	0x00000100
 /* Create a bbt if none axists */
 #define NAND_BBT_CREATE		0x00000200
-/* Write bbt if neccecary */
+/* Write bbt if necessary */
 #define NAND_BBT_WRITE		0x00001000
 /* Read and write back block contents when writing bbt */
 #define NAND_BBT_SAVECONTENT	0x00002000
+1 -1
Documentation/DocBook/regulator.tmpl
···
 release regulators. Functions are
 provided to <link linkend='API-regulator-enable'>enable</link>
 and <link linkend='API-regulator-disable'>disable</link> the
-reguator and to get and set the runtime parameters of the
+regulator and to get and set the runtime parameters of the
 regulator.
 </para>
 <para>
+2 -2
Documentation/DocBook/uio-howto.tmpl
···
 <para>
 The dynamic memory regions will be allocated when the UIO device file,
 <varname>/dev/uioX</varname> is opened.
-Simiar to static memory resources, the memory region information for
+Similar to static memory resources, the memory region information for
 dynamic regions is then visible via sysfs at
 <varname>/sys/class/uio/uioX/maps/mapY/*</varname>.
-The dynmaic memory regions will be freed when the UIO device file is
+The dynamic memory regions will be freed when the UIO device file is
 closed. When no processes are holding the device file open, the address
 returned to userspace is ~0.
 </para>
+1 -1
Documentation/DocBook/usb.tmpl
···

 <listitem><para>The Linux USB API supports synchronous calls for
 control and bulk messages.
-It also supports asynchnous calls for all kinds of data transfer,
+It also supports asynchronous calls for all kinds of data transfer,
 using request structures called "URBs" (USB Request Blocks).
 </para></listitem>
+1 -1
Documentation/DocBook/writing-an-alsa-driver.tmpl
···
 suspending the PCM operations via
 <function>snd_pcm_suspend_all()</function> or
 <function>snd_pcm_suspend()</function>.  It means that the PCM
-streams are already stoppped when the register snapshot is
+streams are already stopped when the register snapshot is
 taken.  But, remember that you don't have to restart the PCM
 stream in the resume callback.  It'll be restarted via
 trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant>
+5 -2
Documentation/cpu-freq/intel-pstate.txt
···
 /sys/devices/system/cpu/intel_pstate/

 max_perf_pct: limits the maximum P state that will be requested by
-the driver stated as a percentage of the available performance.
+the driver stated as a percentage of the available performance. The
+available (P states) performance may be reduced by the no_turbo
+setting described below.

 min_perf_pct: limits the minimum P state that will be requested by
-the driver stated as a percentage of the available performance.
+the driver stated as a percentage of the max (non-turbo)
+performance level.

 no_turbo: limits the driver to selecting P states below the turbo
 frequency range.
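As a rough illustration of how the two percentage knobs described above bound the requested P-state range, here is a simplified model. The clamping arithmetic and the example P-state numbers are assumptions for illustration only; the authoritative logic lives in the intel_pstate driver itself.

```python
# Simplified model of the intel_pstate percentage knobs.  The P-state
# numbers below are hypothetical example values; the real driver reads
# its limits from CPU MSRs and the exact rounding may differ.
def pstate_limits(min_perf_pct, max_perf_pct, min_pstate, max_pstate,
                  turbo_pstate, no_turbo):
    # With no_turbo set, the available range ends at the non-turbo maximum.
    ceiling = max_pstate if no_turbo else turbo_pstate
    max_p = (ceiling * max_perf_pct) // 100
    # min_perf_pct is stated against the max (non-turbo) performance level.
    min_p = (max_pstate * min_perf_pct) // 100
    # Never go below the hardware minimum P-state.
    return max(min_p, min_pstate), max(max_p, min_pstate)

# Example: non-turbo P-states 8..28, turbo up to 32.
print(pstate_limits(40, 100, 8, 28, 32, no_turbo=False))  # → (11, 32)
print(pstate_limits(40, 100, 8, 28, 32, no_turbo=True))   # → (11, 28)
```

Note how no_turbo lowers the ceiling that max_perf_pct is applied to, which is exactly the interaction the updated text spells out.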
+20
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
···
 - reg: physical base address of the controller and length of memory mapped
   region.

+Optional Properties:
+- clocks: List of clock handles. The parent clocks of the input clocks to the
+	devices in this power domain are set to oscclk before power gating
+	and restored back after powering on a domain. This is required for
+	all domains which are powered on and off and not required for unused
+	domains.
+- clock-names: The following clocks can be specified:
+	- oscclk: Oscillator clock.
+	- pclkN, clkN: Pairs of parent of input clock and input clock to the
+		devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
+		are supported currently.
+
 Node of a device using power domains must have a samsung,power-domain property
 defined with a phandle to respective power domain.
···
 	lcd0: power-domain-lcd0 {
 		compatible = "samsung,exynos4210-pd";
 		reg = <0x10023C00 0x10>;
+	};
+
+	mfc_pd: power-domain@10044060 {
+		compatible = "samsung,exynos4210-pd";
+		reg = <0x10044060 0x20>;
+		clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>,
+			<&clock CLK_MOUT_USER_ACLK333>;
+		clock-names = "oscclk", "pclk0", "clk0";
 	};

 Example of the node using power domain:
+3
Documentation/devicetree/bindings/arm/l2cc.txt
···
 - arm,filter-ranges : <start length> Starting address and length of window to
   filter. Addresses in the filter window are directed to the M1 port. Other
   addresses will go to the M0 port.
+- arm,io-coherent : indicates that the system is operating in an hardware
+  I/O coherent mode. Valid only when the arm,pl310-cache compatible
+  string is used.
 - interrupts : 1 combined interrupt.
 - cache-id-part: cache id part number to be used if it is not present
   on hardware
+7
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
···

 - compatible: Must contain one of the following:

+  - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART.
+  - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART.
+  - "renesas,scifa-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFA compatible UART.
+  - "renesas,scifb-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFB compatible UART.
+  - "renesas,scifa-r8a7740" for R8A7740 (R-Mobile A1) SCIFA compatible UART.
+  - "renesas,scifb-r8a7740" for R8A7740 (R-Mobile A1) SCIFB compatible UART.
+  - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART.
   - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART.
   - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART.
   - "renesas,scifa-r8a7790" for R8A7790 (R-Car H2) SCIFA compatible UART.
+6
Documentation/devicetree/bindings/spi/qcom,spi-qup.txt
···
 - spi-max-frequency: Specifies maximum SPI clock frequency,
                      Units - Hz. Definition as per
                      Documentation/devicetree/bindings/spi/spi-bus.txt
+- num-cs:	total number of chipselects
+- cs-gpios:	should specify GPIOs used for chipselects.
+		The gpios will be referred to as reg = <index> in the SPI child
+		nodes. If unspecified, a single SPI device without a chip
+		select can be used.
+
 SPI slave nodes must be children of the SPI master node and can contain
 properties described in Documentation/devicetree/bindings/spi/spi-bus.txt
+11
Documentation/email-clients.txt
···
 Email clients info for Linux
 ======================================================================

+Git
+----------------------------------------------------------------------
+These days most developers use `git send-email` instead of regular
+email clients.  The man page for this is quite good.  On the receiving
+end, maintainers use `git am` to apply the patches.
+
+If you are new to git then send your first patch to yourself.  Save it
+as raw text including all the headers.  Run `git am raw_email.txt` and
+then review the changelog with `git log`.  When that works then send
+the patch to the appropriate mailing list(s).
+
 General Preferences
 ----------------------------------------------------------------------
 Patches for the Linux kernel are submitted via email, preferably as
+2 -2
Documentation/laptops/00-INDEX
···
   - information on hard disk shock protection.
 dslm.c
   - Simple Disk Sleep Monitor program
-hpfall.c
-  - (HP) laptop accelerometer program for disk protection.
+freefall.c
+  - (HP/DELL) laptop accelerometer program for disk protection.
 laptop-mode.txt
   - how to conserve battery power using laptop-mode.
 sony-laptop.txt
+45 -14
Documentation/laptops/hpfall.c → Documentation/laptops/freefall.c
···
-/* Disk protection for HP machines.
+/* Disk protection for HP/DELL machines.
  *
  * Copyright 2008 Eric Piel
  * Copyright 2009 Pavel Machek <pavel@ucw.cz>
+ * Copyright 2012 Sonal Santan
+ * Copyright 2014 Pali Rohár <pali.rohar@gmail.com>
  *
  * GPLv2.
  */
···
 #include <signal.h>
 #include <sys/mman.h>
 #include <sched.h>
+#include <syslog.h>

-char unload_heads_path[64];
+static int noled;
+static char unload_heads_path[64];
+static char device_path[32];
+static const char app_name[] = "FREE FALL";

-int set_unload_heads_path(char *device)
+static int set_unload_heads_path(char *device)
 {
 	char devname[64];

 	if (strlen(device) <= 5 || strncmp(device, "/dev/", 5) != 0)
 		return -EINVAL;
-	strncpy(devname, device + 5, sizeof(devname));
+	strncpy(devname, device + 5, sizeof(devname) - 1);
+	strncpy(device_path, device, sizeof(device_path) - 1);

 	snprintf(unload_heads_path, sizeof(unload_heads_path) - 1,
 		 "/sys/block/%s/device/unload_heads", devname);
 	return 0;
 }
-int valid_disk(void)
+
+static int valid_disk(void)
 {
 	int fd = open(unload_heads_path, O_RDONLY);
+
 	if (fd < 0) {
 		perror(unload_heads_path);
 		return 0;
···
 	return 1;
 }

-void write_int(char *path, int i)
+static void write_int(char *path, int i)
 {
 	char buf[1024];
 	int fd = open(path, O_RDWR);
+
 	if (fd < 0) {
 		perror("open");
 		exit(1);
 	}
+
 	sprintf(buf, "%d", i);
+
 	if (write(fd, buf, strlen(buf)) != strlen(buf)) {
 		perror("write");
 		exit(1);
 	}
+
 	close(fd);
 }

-void set_led(int on)
+static void set_led(int on)
 {
+	if (noled)
+		return;
 	write_int("/sys/class/leds/hp::hddprotect/brightness", on);
 }

-void protect(int seconds)
+static void protect(int seconds)
 {
+	const char *str = (seconds == 0) ? "Unparked" : "Parked";
+
 	write_int(unload_heads_path, seconds*1000);
+	syslog(LOG_INFO, "%s %s disk head\n", str, device_path);
 }

-int on_ac(void)
+static int on_ac(void)
 {
-	// /sys/class/power_supply/AC0/online
+	/* /sys/class/power_supply/AC0/online */
+	return 1;
 }

-int lid_open(void)
+static int lid_open(void)
 {
-	// /proc/acpi/button/lid/LID/state
+	/* /proc/acpi/button/lid/LID/state */
+	return 1;
 }

-void ignore_me(void)
+static void ignore_me(int signum)
 {
 	protect(0);
 	set_led(0);
···
 int main(int argc, char **argv)
 {
 	int fd, ret;
+	struct stat st;
 	struct sched_param param;

 	if (argc == 1)
···
 		return EXIT_FAILURE;
 	}

-	daemon(0, 0);
+	if (stat("/sys/class/leds/hp::hddprotect/brightness", &st))
+		noled = 1;
+
+	if (daemon(0, 0) != 0) {
+		perror("daemon");
+		return EXIT_FAILURE;
+	}
+
+	openlog(app_name, LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1);
+
 	param.sched_priority = sched_get_priority_max(SCHED_FIFO);
 	sched_setscheduler(0, SCHED_FIFO, &param);
 	mlockall(MCL_CURRENT|MCL_FUTURE);
···
 		alarm(20);
 	}

+	closelog();
 	close(fd);
 	return EXIT_SUCCESS;
 }
+5
Documentation/sound/alsa/HD-Audio-Models.txt
··· 286 286 hp-inv-led HP with broken BIOS for inverted mute LED 287 287 auto BIOS setup (default) 288 288 289 + STAC92HD95 290 + ========== 291 + hp-led LED support for HP laptops 292 + hp-bass Bass HPF setup for HP Spectre 13 293 + 289 294 STAC9872 290 295 ======== 291 296 vaio VAIO laptop without SPDIF
+2 -12
Documentation/trace/postprocess/trace-vmscan-postprocess.pl
···
 use constant HIGH_NR_SCANNED => 22;
 use constant HIGH_NR_TAKEN => 23;
 use constant HIGH_NR_RECLAIMED => 24;
-use constant HIGH_NR_CONTIG_DIRTY => 25;

 my %perprocesspid;
 my %perprocess;
···
 my $regex_kswapd_wake_default = 'nid=([0-9]*) order=([0-9]*)';
 my $regex_kswapd_sleep_default = 'nid=([0-9]*)';
 my $regex_wakeup_kswapd_default = 'nid=([0-9]*) zid=([0-9]*) order=([0-9]*)';
-my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_taken=([0-9]*) contig_taken=([0-9]*) contig_dirty=([0-9]*) contig_failed=([0-9]*)';
+my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_taken=([0-9]*) file=([0-9]*)';
 my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) zid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
 my $regex_lru_shrink_active_default = 'lru=([A-Z_]*) nr_scanned=([0-9]*) nr_rotated=([0-9]*) priority=([0-9]*)';
 my $regex_writepage_default = 'page=([0-9a-f]*) pfn=([0-9]*) flags=([A-Z_|]*)';
···
 			$regex_lru_isolate_default,
 			"isolate_mode", "order",
 			"nr_requested", "nr_scanned", "nr_taken",
-			"contig_taken", "contig_dirty", "contig_failed");
+			"file");
 $regex_lru_shrink_inactive = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_inactive",
 			$regex_lru_shrink_inactive_default,
···
 		}
 		my $isolate_mode = $1;
 		my $nr_scanned = $4;
-		my $nr_contig_dirty = $7;

 		# To closer match vmstat scanning statistics, only count isolate_both
 		# and isolate_inactive as scanning. isolate_active is rotation
···
 		if ($isolate_mode != 2) {
 			$perprocesspid{$process_pid}->{HIGH_NR_SCANNED} += $nr_scanned;
 		}
-		$perprocesspid{$process_pid}->{HIGH_NR_CONTIG_DIRTY} += $nr_contig_dirty;
 	} elsif ($tracepoint eq "mm_vmscan_lru_shrink_inactive") {
 		$details = $6;
 		if ($details !~ /$regex_lru_shrink_inactive/o) {
···
 			if ($count != 0) {
 				print "wakeup-$order=$count ";
 			}
-		}
-	}
-	if ($stats{$process_pid}->{HIGH_NR_CONTIG_DIRTY}) {
-		print "      ";
-		my $count = $stats{$process_pid}->{HIGH_NR_CONTIG_DIRTY};
-		if ($count != 0) {
-			print "contig-dirty=$count ";
 		}
 	}
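The regex change above can be sanity-checked against the new trace line format. The sketch below re-expresses the updated Perl pattern in Python against a fabricated sample event line (the field values are made up for illustration; only the field names and order come from the patch).

```python
import re

# The updated mm_vmscan_lru_isolate format: the contig_* fields are gone
# and a trailing file=... field is captured instead.
regex_lru_isolate = re.compile(
    r'isolate_mode=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) '
    r'nr_scanned=([0-9]*) nr_taken=([0-9]*) file=([0-9]*)')

# Fabricated sample line in the new format.
sample = ('isolate_mode=0 order=0 nr_requested=32 nr_scanned=45 '
          'nr_taken=32 file=1')

m = regex_lru_isolate.search(sample)
isolate_mode = int(m.group(1))
nr_scanned = int(m.group(4))   # same capture index the Perl script uses ($4)
print(isolate_mode, nr_scanned)  # → 0 45
```

A line in the old contig_* format would simply fail to match, which is why the script's constants and accounting for HIGH_NR_CONTIG_DIRTY are removed in the same patch.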
+53 -31
MAINTAINERS
···

 8169 10/100/1000 GIGABIT ETHERNET DRIVER
 M:	Realtek linux nic maintainers <nic_swsd@realtek.com>
-M:	Francois Romieu <romieu@fr.zoreil.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/realtek/r8169.c
···
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
 F:	arch/arm/mach-imx/
+F:	arch/arm/mach-mxs/
 F:	arch/arm/boot/dts/imx*
 F:	arch/arm/configs/imx*_defconfig
-
-ARM/FREESCALE MXS ARM ARCHITECTURE
-M:	Shawn Guo <shawn.guo@linaro.org>
-L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S:	Maintained
-T:	git git://git.linaro.org/people/shawnguo/linux-2.6.git
-F:	arch/arm/mach-mxs/

 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
 M:	Lennert Buytenhek <kernel@wantstofly.org>
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-keystone/
-F:	drivers/clk/keystone/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone.git
+
+ARM/TEXAS INSTRUMENT KEYSTONE CLOCK FRAMEWORK
+M:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/clk/keystone/
+
+ARM/TEXAS INSTRUMENT KEYSTONE ClOCKSOURCE
+M:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/clocksource/timer-keystone.c
+
+ARM/TEXAS INSTRUMENT KEYSTONE RESET DRIVER
+M:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/power/reset/keystone-reset.c
+
+ARM/TEXAS INSTRUMENT AEMIF/EMIF DRIVERS
+M:	Santosh Shilimkar <santosh.shilimkar@ti.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/memory/*emif*

 ARM/LOGICPD PXA270 MACHINE SUPPORT
 M:	Lennert Buytenhek <kernel@wantstofly.org>
···
 Q:	http://patchwork.kernel.org/project/linux-sh/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next
 S:	Supported
+F:	arch/arm/boot/dts/emev2*
+F:	arch/arm/boot/dts/r7s*
+F:	arch/arm/boot/dts/r8a*
+F:	arch/arm/boot/dts/sh*
+F:	arch/arm/configs/ape6evm_defconfig
+F:	arch/arm/configs/armadillo800eva_defconfig
+F:	arch/arm/configs/bockw_defconfig
+F:	arch/arm/configs/genmai_defconfig
+F:	arch/arm/configs/koelsch_defconfig
+F:	arch/arm/configs/kzm9g_defconfig
+F:	arch/arm/configs/lager_defconfig
+F:	arch/arm/configs/mackerel_defconfig
+F:	arch/arm/configs/marzen_defconfig
+F:	arch/arm/configs/shmobile_defconfig
 F:	arch/arm/mach-shmobile/
 F:	drivers/sh/
···
 T:	quilt http://www.infradead.org/~rdunlap/Doc/patches/
 S:	Maintained
 F:	Documentation/
+X:	Documentation/ABI/
+X:	Documentation/devicetree/
+X:	Documentation/[a-z][a-z]_[A-Z][A-Z]/

 DOUBLETALK DRIVER
 M:	"James R. Van Zandt" <jrv@vanzandt.mv.com>
···
 F:	drivers/idle/i7300_idle.c

 IEEE 802.15.4 SUBSYSTEM
-M:	Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
-M:	Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
+M:	Alexander Aring <alex.aring@gmail.com>
 L:	linux-zigbee-devel@lists.sourceforge.net (moderated for non-subscribers)
 W:	http://apps.sourceforge.net/trac/linux-zigbee
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lowpan/lowpan.git
···
 F:	arch/arm/mach-lpc32xx/

 LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI)
-M:	Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
-M:	Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
-M:	support@lsi.com
-L:	DL-MPTFusionLinux@lsi.com
+M:	Nagalakshmi Nandigama <nagalakshmi.nandigama@avagotech.com>
+M:	Praveen Krishnamoorthy <praveen.krishnamoorthy@avagotech.com>
+M:	Sreekanth Reddy <sreekanth.reddy@avagotech.com>
+M:	Abhijit Mahajan <abhijit.mahajan@avagotech.com>
+L:	MPT-FusionLinux.pdl@avagotech.com
 L:	linux-scsi@vger.kernel.org
 W:	http://www.lsilogic.com/support
 S:	Supported
···

 PCI DRIVER FOR IMX6
 M:	Richard Zhu <r65037@freescale.com>
-M:	Shawn Guo <shawn.guo@linaro.org>
+M:	Shawn Guo <shawn.guo@freescale.com>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
···

 THERMAL
 M:	Zhang Rui <rui.zhang@intel.com>
-M:	Eduardo Valentin <eduardo.valentin@ti.com>
+M:	Eduardo Valentin <edubezval@gmail.com>
 L:	linux-pm@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git
···
 F:	drivers/platform/x86/thinkpad_acpi.c

 TI BANDGAP AND THERMAL DRIVER
-M:	Eduardo Valentin <eduardo.valentin@ti.com>
+M:	Eduardo Valentin <edubezval@gmail.com>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	drivers/thermal/ti-soc-thermal/
···
 F:	drivers/usb/host/isp116x*
 F:	include/linux/usb/isp116x.h

-USB KAWASAKI LSI DRIVER
-M:	Oliver Neukum <oliver@neukum.org>
-L:	linux-usb@vger.kernel.org
-S:	Maintained
-F:	drivers/usb/serial/kl5kusb105.*
-
 USB MASS STORAGE DRIVER
 M:	Matthew Dharm <mdharm-usb@one-eyed-alien.net>
 L:	linux-usb@vger.kernel.org
···
 S:	Maintained
 F:	Documentation/usb/ohci.txt
 F:	drivers/usb/host/ohci*
-
-USB OPTION-CARD DRIVER
-M:	Matthias Urlichs <smurf@smurf.noris.de>
-L:	linux-usb@vger.kernel.org
-S:	Maintained
-F:	drivers/usb/serial/option.c

 USB PEGASUS DRIVER
 M:	Petko Manolov <petkan@nucleusys.com>
···
 F:	drivers/net/usb/rtl8150.c

 USB SERIAL SUBSYSTEM
-M:	Johan Hovold <jhovold@gmail.com>
+M:	Johan Hovold <johan@kernel.org>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 F:	Documentation/usb/usb-serial.txt
+54 -48
Makefile
···
 VERSION = 3
 PATCHLEVEL = 16
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc5
 NAME = Shuffling Zombie Juror

 # *DOCUMENTATION*
···
 # descending is started. They are now explicitly listed as the
 # prepare rule.

+# Beautify output
+# ---------------------------------------------------------------------------
+#
+# Normally, we echo the whole command before executing it. By making
+# that echo $($(quiet)$(cmd)), we now have the possibility to set
+# $(quiet) to choose other forms of output instead, e.g.
+#
+#         quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@
+#         cmd_cc_o_c       = $(CC) $(c_flags) -c -o $@ $<
+#
+# If $(quiet) is empty, the whole command will be printed.
+# If it is set to "quiet_", only the short version will be printed.
+# If it is set to "silent_", nothing will be printed at all, since
+# the variable $(silent_cmd_cc_o_c) doesn't exist.
+#
+# A simple variant is to prefix commands with $(Q) - that's useful
+# for commands that shall be hidden in non-verbose mode.
+#
+#	$(Q)ln $@ :<
+#
+# If KBUILD_VERBOSE equals 0 then the above command will be hidden.
+# If KBUILD_VERBOSE equals 1 then the above command is displayed.
+#
 # To put more focus on warnings, be less verbose as default
 # Use 'make V=1' to see the full commands
···
 ifndef KBUILD_VERBOSE
   KBUILD_VERBOSE = 0
 endif
+
+ifeq ($(KBUILD_VERBOSE),1)
+  quiet =
+  Q =
+else
+  quiet=quiet_
+  Q = @
+endif
+
+# If the user is running make -s (silent mode), suppress echoing of
+# commands
+
+ifneq ($(filter 4.%,$(MAKE_VERSION)),)	# make-4
+ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),)
+  quiet=silent_
+endif
+else					# make-3.8x
+ifneq ($(filter s% -s%,$(MAKEFLAGS)),)
+  quiet=silent_
+endif
+endif
+
+export quiet Q KBUILD_VERBOSE

 # Call a source code checker (by default, "sparse") as part of the
 # C compilation.
···
 $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
 	@:

+# Fake the "Entering directory" message once, so that IDEs/editors are
+# able to understand relative filenames.
+       echodir := @echo
+ quiet_echodir := @echo
+silent_echodir := @:
 sub-make: FORCE
+	$($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'"
 	$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
 	KBUILD_SRC=$(CURDIR) \
 	KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \
···

 export KBUILD_MODULES KBUILD_BUILTIN
 export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD
-
-# Beautify output
-# ---------------------------------------------------------------------------
-#
-# Normally, we echo the whole command before executing it. By making
-# that echo $($(quiet)$(cmd)), we now have the possibility to set
-# $(quiet) to choose other forms of output instead, e.g.
-#
-#         quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@
-#         cmd_cc_o_c       = $(CC) $(c_flags) -c -o $@ $<
-#
-# If $(quiet) is empty, the whole command will be printed.
-# If it is set to "quiet_", only the short version will be printed.
-# If it is set to "silent_", nothing will be printed at all, since
-# the variable $(silent_cmd_cc_o_c) doesn't exist.
-#
-# A simple variant is to prefix commands with $(Q) - that's useful
-# for commands that shall be hidden in non-verbose mode.
-#
-#	$(Q)ln $@ :<
-#
-# If KBUILD_VERBOSE equals 0 then the above command will be hidden.
-# If KBUILD_VERBOSE equals 1 then the above command is displayed.
-
-ifeq ($(KBUILD_VERBOSE),1)
-  quiet =
-  Q =
-else
-  quiet=quiet_
-  Q = @
-endif
-
-# If the user is running make -s (silent mode), suppress echoing of
-# commands
-
-ifneq ($(filter 4.%,$(MAKE_VERSION)),)	# make-4
-ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),)
-  quiet=silent_
-endif
-else					# make-3.8x
-ifneq ($(filter s% -s%,$(MAKEFLAGS)),)
-  quiet=silent_
-endif
-endif
-
-export quiet Q KBUILD_VERBOSE

 ifneq ($(CC),)
 ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1)
···
 # Packaging of the kernel to various formats
 # ---------------------------------------------------------------------------
 # rpm target kept for backward compatibility
-package-dir	:= $(srctree)/scripts/package
+package-dir	:= scripts/package

 %src-pkg: FORCE
 	$(Q)$(MAKE) $(build)=$(package-dir) $@
+2 -2
arch/arc/include/asm/cache.h
··· 60 60 #define ARC_REG_IC_IVIC 0x10 61 61 #define ARC_REG_IC_CTRL 0x11 62 62 #define ARC_REG_IC_IVIL 0x19 63 - #if defined(CONFIG_ARC_MMU_V3) || defined (CONFIG_ARC_MMU_V4) 63 + #if defined(CONFIG_ARC_MMU_V3) 64 64 #define ARC_REG_IC_PTAG 0x1E 65 65 #endif 66 66 ··· 74 74 #define ARC_REG_DC_IVDL 0x4A 75 75 #define ARC_REG_DC_FLSH 0x4B 76 76 #define ARC_REG_DC_FLDL 0x4C 77 - #if defined(CONFIG_ARC_MMU_V3) || defined (CONFIG_ARC_MMU_V4) 77 + #if defined(CONFIG_ARC_MMU_V3) 78 78 #define ARC_REG_DC_PTAG 0x5C 79 79 #endif 80 80
+1
arch/arc/include/uapi/asm/ptrace.h
··· 11 11 #ifndef _UAPI__ASM_ARC_PTRACE_H 12 12 #define _UAPI__ASM_ARC_PTRACE_H 13 13 14 + #define PTRACE_GET_THREAD_AREA 25 14 15 15 16 #ifndef __ASSEMBLY__ 16 17 /*
+1 -1
arch/arc/kernel/ctx_sw_asm.S
··· 10 10 * -This is the more "natural" hand written assembler 11 11 */ 12 12 13 + #include <linux/linkage.h> 13 14 #include <asm/entry.h> /* For the SAVE_* macros */ 14 15 #include <asm/asm-offsets.h> 15 - #include <asm/linkage.h> 16 16 17 17 #define KSP_WORD_OFF ((TASK_THREAD + THREAD_KSP) / 4) 18 18
+1 -1
arch/arc/kernel/devtree.c
··· 41 41 { 42 42 const struct machine_desc *mdesc; 43 43 unsigned long dt_root; 44 - void *clk; 44 + const void *clk; 45 45 int len; 46 46 47 47 if (!early_init_dt_scan(dt))
+4 -3
arch/arc/kernel/head.S
··· 77 77 ; Clear BSS before updating any globals 78 78 ; XXX: use ZOL here 79 79 mov r5, __bss_start 80 - mov r6, __bss_stop 80 + sub r6, __bss_stop, r5 81 + lsr.f lp_count, r6, 2 82 + lpnz 1f 83 + st.ab 0, [r5, 4] 81 84 1: 82 - st.ab 0, [r5,4] 83 - brlt r5, r6, 1b 84 85 85 86 ; Uboot - kernel ABI 86 87 ; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
+4
arch/arc/kernel/ptrace.c
··· 146 146 pr_debug("REQ=%ld: ADDR =0x%lx, DATA=0x%lx)\n", request, addr, data); 147 147 148 148 switch (request) { 149 + case PTRACE_GET_THREAD_AREA: 150 + ret = put_user(task_thread_info(child)->thr_ptr, 151 + (unsigned long __user *)data); 152 + break; 149 153 default: 150 154 ret = ptrace_request(child, request, addr, data); 151 155 break;
+13 -2
arch/arc/kernel/smp.c
··· 337 337 * API called by platform code to hookup arch-common ISR to their IPI IRQ 338 338 */ 339 339 static DEFINE_PER_CPU(int, ipi_dev); 340 + 341 + static struct irqaction arc_ipi_irq = { 342 + .name = "IPI Interrupt", 343 + .flags = IRQF_PERCPU, 344 + .handler = do_IPI, 345 + }; 346 + 340 347 int smp_ipi_irq_setup(int cpu, int irq) 341 348 { 342 - int *dev_id = &per_cpu(ipi_dev, smp_processor_id()); 343 - return request_percpu_irq(irq, do_IPI, "IPI Interrupt", dev_id); 349 + if (!cpu) 350 + return setup_irq(irq, &arc_ipi_irq); 351 + else 352 + arch_unmask_irq(irq); 353 + 354 + return 0; 344 355 }
+1 -1
arch/arc/kernel/vmlinux.lds.S
··· 116 116 117 117 _edata = .; 118 118 119 - BSS_SECTION(0, 0, 0) 119 + BSS_SECTION(4, 4, 4) 120 120 121 121 #ifdef CONFIG_ARC_DW2_UNWIND 122 122 . = ALIGN(PAGE_SIZE);
+19 -6
arch/arc/mm/cache_arc700.c
··· 389 389 /*********************************************************** 390 390 * Machine specific helper for per line I-Cache invalidate. 391 391 */ 392 - static void __ic_line_inv_vaddr(unsigned long paddr, unsigned long vaddr, 392 + static void __ic_line_inv_vaddr_local(unsigned long paddr, unsigned long vaddr, 393 393 unsigned long sz) 394 394 { 395 395 unsigned long flags; ··· 405 405 read_aux_reg(ARC_REG_IC_CTRL); /* blocks */ 406 406 } 407 407 408 + struct ic_line_inv_vaddr_ipi { 409 + unsigned long paddr, vaddr; 410 + int sz; 411 + }; 412 + 413 + static void __ic_line_inv_vaddr_helper(void *info) 414 + { 415 + struct ic_line_inv_vaddr_ipi *ic_inv = (struct ic_line_inv_vaddr_ipi*) info; 416 + __ic_line_inv_vaddr_local(ic_inv->paddr, ic_inv->vaddr, ic_inv->sz); 417 + } 418 + 419 + static void __ic_line_inv_vaddr(unsigned long paddr, unsigned long vaddr, 420 + unsigned long sz) 421 + { 422 + struct ic_line_inv_vaddr_ipi ic_inv = { paddr, vaddr , sz}; 423 + on_each_cpu(__ic_line_inv_vaddr_helper, &ic_inv, 1); 424 + } 408 425 #else 409 426 410 427 #define __ic_entire_inv() ··· 570 553 */ 571 554 void __sync_icache_dcache(unsigned long paddr, unsigned long vaddr, int len) 572 555 { 573 - unsigned long flags; 574 - 575 - local_irq_save(flags); 576 - __ic_line_inv_vaddr(paddr, vaddr, len); 577 556 __dc_line_op(paddr, vaddr, len, OP_FLUSH_N_INV); 578 - local_irq_restore(flags); 557 + __ic_line_inv_vaddr(paddr, vaddr, len); 579 558 } 580 559 581 560 /* wrapper to compile time eliminate alignment checks in flush loop */
+2 -2
arch/arm/boot/dts/am335x-evm.dts
··· 529 529 serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 530 530 0 0 1 2 531 531 >; 532 - tx-num-evt = <1>; 533 - rx-num-evt = <1>; 532 + tx-num-evt = <32>; 533 + rx-num-evt = <32>; 534 534 }; 535 535 536 536 &tps {
+2 -2
arch/arm/boot/dts/am335x-evmsk.dts
··· 560 560 serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 561 561 0 0 1 2 562 562 >; 563 - tx-num-evt = <1>; 564 - rx-num-evt = <1>; 563 + tx-num-evt = <32>; 564 + rx-num-evt = <32>; 565 565 }; 566 566 567 567 &tscadc {
+6
arch/arm/boot/dts/am335x-igep0033.dtsi
··· 105 105 106 106 &cpsw_emac0 { 107 107 phy_id = <&davinci_mdio>, <0>; 108 + phy-mode = "rmii"; 108 109 }; 109 110 110 111 &cpsw_emac1 { 111 112 phy_id = <&davinci_mdio>, <1>; 113 + phy-mode = "rmii"; 114 + }; 115 + 116 + &phy_sel { 117 + rmii-clock-ext; 112 118 }; 113 119 114 120 &elm {
+4
arch/arm/boot/dts/am43x-epos-evm.dts
··· 319 319 phy-mode = "rmii"; 320 320 }; 321 321 322 + &phy_sel { 323 + rmii-clock-ext; 324 + }; 325 + 322 326 &i2c0 { 323 327 status = "okay"; 324 328 pinctrl-names = "default";
+2
arch/arm/boot/dts/at91sam9x5.dtsi
··· 1045 1045 reg = <0x00500000 0x80000 1046 1046 0xf803c000 0x400>; 1047 1047 interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; 1048 + clocks = <&usb>, <&udphs_clk>; 1049 + clock-names = "hclk", "pclk"; 1048 1050 status = "disabled"; 1049 1051 1050 1052 ep0 {
+1
arch/arm/boot/dts/dra7-evm.dts
··· 240 240 regulator-name = "ldo3"; 241 241 regulator-min-microvolt = <1800000>; 242 242 regulator-max-microvolt = <1800000>; 243 + regulator-always-on; 243 244 regulator-boot-on; 244 245 }; 245 246
+11 -1
arch/arm/boot/dts/dra7.dtsi
··· 773 773 clocks = <&qspi_gfclk_div>; 774 774 clock-names = "fck"; 775 775 num-cs = <4>; 776 - interrupts = <0 343 0x4>; 777 776 status = "disabled"; 778 777 }; 779 778 ··· 981 982 gpmc,num-waitpins = <2>; 982 983 #address-cells = <2>; 983 984 #size-cells = <1>; 985 + status = "disabled"; 986 + }; 987 + 988 + atl: atl@4843c000 { 989 + compatible = "ti,dra7-atl"; 990 + reg = <0x4843c000 0x3ff>; 991 + ti,hwmods = "atl"; 992 + ti,provided-clocks = <&atl_clkin0_ck>, <&atl_clkin1_ck>, 993 + <&atl_clkin2_ck>, <&atl_clkin3_ck>; 994 + clocks = <&atl_gfclk_mux>; 995 + clock-names = "fck"; 984 996 status = "disabled"; 985 997 }; 986 998 };
+14 -12
arch/arm/boot/dts/dra7xx-clocks.dtsi
··· 10 10 &cm_core_aon_clocks { 11 11 atl_clkin0_ck: atl_clkin0_ck { 12 12 #clock-cells = <0>; 13 - compatible = "fixed-clock"; 14 - clock-frequency = <0>; 13 + compatible = "ti,dra7-atl-clock"; 14 + clocks = <&atl_gfclk_mux>; 15 15 }; 16 16 17 17 atl_clkin1_ck: atl_clkin1_ck { 18 18 #clock-cells = <0>; 19 - compatible = "fixed-clock"; 20 - clock-frequency = <0>; 19 + compatible = "ti,dra7-atl-clock"; 20 + clocks = <&atl_gfclk_mux>; 21 21 }; 22 22 23 23 atl_clkin2_ck: atl_clkin2_ck { 24 24 #clock-cells = <0>; 25 - compatible = "fixed-clock"; 26 - clock-frequency = <0>; 25 + compatible = "ti,dra7-atl-clock"; 26 + clocks = <&atl_gfclk_mux>; 27 27 }; 28 28 29 29 atl_clkin3_ck: atl_clkin3_ck { 30 30 #clock-cells = <0>; 31 - compatible = "fixed-clock"; 32 - clock-frequency = <0>; 31 + compatible = "ti,dra7-atl-clock"; 32 + clocks = <&atl_gfclk_mux>; 33 33 }; 34 34 35 35 hdmi_clkin_ck: hdmi_clkin_ck { ··· 673 673 674 674 l3_iclk_div: l3_iclk_div { 675 675 #clock-cells = <0>; 676 - compatible = "fixed-factor-clock"; 676 + compatible = "ti,divider-clock"; 677 + ti,max-div = <2>; 678 + ti,bit-shift = <4>; 679 + reg = <0x0100>; 677 680 clocks = <&dpll_core_h12x2_ck>; 678 - clock-mult = <1>; 679 - clock-div = <1>; 681 + ti,index-power-of-two; 680 682 }; 681 683 682 684 l4_root_clk_div: l4_root_clk_div { ··· 686 684 compatible = "fixed-factor-clock"; 687 685 clocks = <&l3_iclk_div>; 688 686 clock-mult = <1>; 689 - clock-div = <1>; 687 + clock-div = <2>; 690 688 }; 691 689 692 690 video1_clk2_div: video1_clk2_div {
+1 -1
arch/arm/boot/dts/exynos4.dtsi
··· 554 554 interrupts = <0 37 0>, <0 38 0>, <0 39 0>, <0 40 0>, <0 41 0>; 555 555 clocks = <&clock CLK_PWM>; 556 556 clock-names = "timers"; 557 - #pwm-cells = <2>; 557 + #pwm-cells = <3>; 558 558 status = "disabled"; 559 559 }; 560 560
+4 -1
arch/arm/boot/dts/exynos5420.dtsi
··· 167 167 compatible = "samsung,exynos5420-audss-clock"; 168 168 reg = <0x03810000 0x0C>; 169 169 #clock-cells = <1>; 170 - clocks = <&clock CLK_FIN_PLL>, <&clock CLK_FOUT_EPLL>, 170 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MAU_EPLL>, 171 171 <&clock CLK_SCLK_MAUDIO0>, <&clock CLK_SCLK_MAUPCM0>; 172 172 clock-names = "pll_ref", "pll_in", "sclk_audio", "sclk_pcm_in"; 173 173 }; ··· 260 260 mfc_pd: power-domain@10044060 { 261 261 compatible = "samsung,exynos4210-pd"; 262 262 reg = <0x10044060 0x20>; 263 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, 264 + <&clock CLK_MOUT_USER_ACLK333>; 265 + clock-names = "oscclk", "pclk0", "clk0"; 263 266 }; 264 267 265 268 disp_pd: power-domain@100440C0 {
+6
arch/arm/boot/dts/omap3-beagle-xm.dts
··· 251 251 codec { 252 252 }; 253 253 }; 254 + 255 + twl_power: power { 256 + compatible = "ti,twl4030-power-beagleboard-xm", "ti,twl4030-power-idle-osc-off"; 257 + ti,use_poweroff; 258 + }; 254 259 }; 255 260 }; 256 261 ··· 306 301 }; 307 302 308 303 &uart3 { 304 + interrupts-extended = <&intc 74 &omap3_pmx_core OMAP3_UART3_RX>; 309 305 pinctrl-names = "default"; 310 306 pinctrl-0 = <&uart3_pins>; 311 307 };
+7
arch/arm/boot/dts/omap3-evm-common.dtsi
··· 50 50 gpios = <&twl_gpio 18 GPIO_ACTIVE_LOW>; 51 51 }; 52 52 53 + &twl { 54 + twl_power: power { 55 + compatible = "ti,twl4030-power-omap3-evm", "ti,twl4030-power-idle"; 56 + ti,use_poweroff; 57 + }; 58 + }; 59 + 53 60 &i2c2 { 54 61 clock-frequency = <400000>; 55 62 };
+5
arch/arm/boot/dts/omap3-n900.dts
··· 351 351 compatible = "ti,twl4030-audio"; 352 352 ti,enable-vibra = <1>; 353 353 }; 354 + 355 + twl_power: power { 356 + compatible = "ti,twl4030-power-n900", "ti,twl4030-power-idle-osc-off"; 357 + ti,use_poweroff; 358 + }; 354 359 }; 355 360 356 361 &twl_keypad {
-1
arch/arm/boot/dts/omap5.dtsi
··· 45 45 46 46 operating-points = < 47 47 /* kHz uV */ 48 - 500000 880000 49 48 1000000 1060000 50 49 1500000 1250000 51 50 >;
+1 -1
arch/arm/configs/bcm_defconfig
··· 94 94 CONFIG_BACKLIGHT_PWM=y 95 95 # CONFIG_USB_SUPPORT is not set 96 96 CONFIG_MMC=y 97 - CONFIG_MMC_UNSAFE_RESUME=y 98 97 CONFIG_MMC_BLOCK_MINORS=32 99 98 CONFIG_MMC_TEST=y 100 99 CONFIG_MMC_SDHCI=y 100 + CONFIG_MMC_SDHCI_PLTFM=y 101 101 CONFIG_MMC_SDHCI_BCM_KONA=y 102 102 CONFIG_NEW_LEDS=y 103 103 CONFIG_LEDS_CLASS=y
+2 -1
arch/arm/configs/multi_v7_defconfig
··· 223 223 CONFIG_POWER_RESET_SUN6I=y 224 224 CONFIG_SENSORS_LM90=y 225 225 CONFIG_THERMAL=y 226 - CONFIG_DOVE_THERMAL=y 227 226 CONFIG_ARMADA_THERMAL=y 228 227 CONFIG_WATCHDOG=y 229 228 CONFIG_ORION_WATCHDOG=y 230 229 CONFIG_SUNXI_WATCHDOG=y 231 230 CONFIG_MFD_AS3722=y 231 + CONFIG_MFD_BCM590XX=y 232 232 CONFIG_MFD_CROS_EC=y 233 233 CONFIG_MFD_CROS_EC_SPI=y 234 234 CONFIG_MFD_MAX8907=y ··· 240 240 CONFIG_REGULATOR_VIRTUAL_CONSUMER=y 241 241 CONFIG_REGULATOR_AB8500=y 242 242 CONFIG_REGULATOR_AS3722=y 243 + CONFIG_REGULATOR_BCM590XX=y 243 244 CONFIG_REGULATOR_GPIO=y 244 245 CONFIG_REGULATOR_MAX8907=y 245 246 CONFIG_REGULATOR_PALMAS=y
-2
arch/arm/include/asm/mcpm.h
··· 208 208 struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS]; 209 209 }; 210 210 211 - extern unsigned long sync_phys; /* physical address of *mcpm_sync */ 212 - 213 211 void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster); 214 212 void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster); 215 213 void __mcpm_outbound_leave_critical(unsigned int cluster, int state);
+19 -9
arch/arm/kernel/kprobes-test-arm.c
··· 74 74 TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\ 75 75 TEST_RR( op "gt" s " r12, r13" ", r",14,val, ", ror r",14,7,"")\ 76 76 TEST_RR( op "le" s " r14, r",0, val, ", r13" ", lsl r",14,8,"")\ 77 - TEST_RR( op s " r12, pc" ", r",14,val, ", ror r",14,7,"")\ 78 - TEST_RR( op s " r14, r",0, val, ", pc" ", lsl r",14,8,"")\ 79 77 TEST_R( op "eq" s " r0, r",11,VAL1,", #0xf5") \ 80 78 TEST_R( op "ne" s " r11, r",0, VAL1,", #0xf5000000") \ 81 79 TEST_R( op s " r7, r",8, VAL2,", #0x000af000") \ ··· 101 103 TEST_RRR( op "ge r",11,VAL1,", r",14,N(val),", asr r",7, 6,"") \ 102 104 TEST_RR( op "le r13" ", r",14,val, ", ror r",14,7,"") \ 103 105 TEST_RR( op "gt r",0, val, ", r13" ", lsl r",14,8,"") \ 104 - TEST_RR( op " pc" ", r",14,val, ", ror r",14,7,"") \ 105 - TEST_RR( op " r",0, val, ", pc" ", lsl r",14,8,"") \ 106 106 TEST_R( op "eq r",11,VAL1,", #0xf5") \ 107 107 TEST_R( op "ne r",0, VAL1,", #0xf5000000") \ 108 108 TEST_R( op " r",8, VAL2,", #0x000af000") ··· 121 125 TEST_RR( op "ge" s " r11, r",11,N(val),", asr r",7, 6,"") \ 122 126 TEST_RR( op "lt" s " r12, r",11,val, ", ror r",14,7,"") \ 123 127 TEST_R( op "gt" s " r14, r13" ", lsl r",14,8,"") \ 124 - TEST_R( op "le" s " r14, pc" ", lsl r",14,8,"") \ 125 128 TEST( op "eq" s " r0, #0xf5") \ 126 129 TEST( op "ne" s " r11, #0xf5000000") \ 127 130 TEST( op s " r7, #0x000af000") \ ··· 154 159 TEST_SUPPORTED("cmp pc, #0x1000"); 155 160 TEST_SUPPORTED("cmp sp, #0x1000"); 156 161 157 - /* Data-processing with PC as shift*/ 162 + /* Data-processing with PC and a shift count in a register */ 158 163 TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) " @ cmp r12, r14, asl pc") 159 164 TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) " @ mov r12, r14, asl pc") 160 165 TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) " @ add r10, r12, r14, asl pc") 166 + TEST_UNSUPPORTED(__inst_arm(0xe151021f) " @ cmp r1, pc, lsl r2") 167 + TEST_UNSUPPORTED(__inst_arm(0xe17f0211) " @ cmn pc, r1, lsl r2") 168 + 
TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) " @ mov r1, pc, lsl r2") 169 + TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) " @ mov pc, r1, lsl r2") 170 + TEST_UNSUPPORTED(__inst_arm(0xe042131f) " @ sub r1, r2, pc, lsl r3") 171 + TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) " @ bic r1, pc, r2, lsl r3") 172 + TEST_UNSUPPORTED(__inst_arm(0xe081f312) " @ add pc, r1, r2, lsl r3") 161 173 162 - /* Data-processing with PC as shift*/ 174 + /* Data-processing with PC as a target and status registers updated */ 163 175 TEST_UNSUPPORTED("movs pc, r1") 164 176 TEST_UNSUPPORTED("movs pc, r1, lsl r2") 165 177 TEST_UNSUPPORTED("movs pc, #0x10000") ··· 189 187 TEST_BF_R ("add pc, pc, r",14,2f-1f-8,"") 190 188 TEST_BF_R ("add pc, r",14,2f-1f-8,", pc") 191 189 TEST_BF_R ("mov pc, r",0,2f,"") 192 - TEST_BF_RR("mov pc, r",0,2f,", asl r",1,0,"") 190 + TEST_BF_R ("add pc, pc, r",14,(2f-1f-8)*2,", asr #1") 193 191 TEST_BB( "sub pc, pc, #1b-2b+8") 194 192 #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7) 195 193 TEST_BB( "sub pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */ 196 194 #endif 197 195 TEST_BB_R( "sub pc, pc, r",14, 1f-2f+8,"") 198 196 TEST_BB_R( "rsb pc, r",14,1f-2f+8,", pc") 199 - TEST_RR( "add pc, pc, r",10,-2,", asl r",11,1,"") 197 + TEST_R( "add pc, pc, r",10,-2,", asl #1") 200 198 #ifdef CONFIG_THUMB2_KERNEL 201 199 TEST_ARM_TO_THUMB_INTERWORK_R("add pc, pc, r",0,3f-1f-8+1,"") 202 200 TEST_ARM_TO_THUMB_INTERWORK_R("sub pc, r",0,3f+8+1,", #8") ··· 218 216 TEST_BB_R("bx r",7,2f,"") 219 217 TEST_BF_R("bxeq r",14,2f,"") 220 218 219 + #if __LINUX_ARM_ARCH__ >= 5 221 220 TEST_R("clz r0, r",0, 0x0,"") 222 221 TEST_R("clzeq r7, r",14,0x1,"") 223 222 TEST_R("clz lr, r",7, 0xffffffff,"") ··· 340 337 TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2") 341 338 TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2") 342 339 TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc") 340 + #endif 343 341 344 342 TEST_GROUP("Multiply and multiply-accumulate") 
345 343 ··· 563 559 TEST_UNSUPPORTED("ldrsht r1, [r2], #48") 564 560 #endif 565 561 562 + #if __LINUX_ARM_ARCH__ >= 5 566 563 TEST_RPR( "strd r",0, VAL1,", [r",1, 48,", -r",2,24,"]") 567 564 TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") 568 565 TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") ··· 600 595 TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) " @ ldrd r12, [pc, #48]!") 601 596 TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) " @ ldrd pc, [r9], #48") 602 597 TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) " @ ldrd lr, [r9], #48") 598 + #endif 603 599 604 600 TEST_GROUP("Miscellaneous") 605 601 ··· 1233 1227 TEST_COPROCESSOR( "mrc"two" 0, 0, r0, cr0, cr0, 0") 1234 1228 1235 1229 COPROCESSOR_INSTRUCTIONS_ST_LD("",e) 1230 + #if __LINUX_ARM_ARCH__ >= 5 1236 1231 COPROCESSOR_INSTRUCTIONS_MC_MR("",e) 1232 + #endif 1237 1233 TEST_UNSUPPORTED("svc 0") 1238 1234 TEST_UNSUPPORTED("svc 0xffffff") 1239 1235 ··· 1295 1287 TEST( "blx __dummy_thumb_subroutine_odd") 1296 1288 #endif /* __LINUX_ARM_ARCH__ >= 6 */ 1297 1289 1290 + #if __LINUX_ARM_ARCH__ >= 5 1298 1291 COPROCESSOR_INSTRUCTIONS_ST_LD("2",f) 1292 + #endif 1299 1293 #if __LINUX_ARM_ARCH__ >= 6 1300 1294 COPROCESSOR_INSTRUCTIONS_MC_MR("2",f) 1301 1295 #endif
+10
arch/arm/kernel/kprobes-test.c
··· 225 225 static int post_handler_called; 226 226 static int jprobe_func_called; 227 227 static int kretprobe_handler_called; 228 + static int tests_failed; 228 229 229 230 #define FUNC_ARG1 0x12345678 230 231 #define FUNC_ARG2 0xabcdef ··· 462 461 463 462 pr_info(" jprobe\n"); 464 463 ret = test_jprobe(func); 464 + #if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE) 465 + if (ret == -EINVAL) { 466 + pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n"); 467 + tests_failed = ret; 468 + ret = 0; 469 + } 470 + #endif 465 471 if (ret < 0) 466 472 return ret; 467 473 ··· 1679 1671 #endif 1680 1672 1681 1673 out: 1674 + if (ret == 0) 1675 + ret = tests_failed; 1682 1676 if (ret == 0) 1683 1677 pr_info("Finished kprobe tests OK\n"); 1684 1678 else
+3 -3
arch/arm/kernel/probes-arm.c
··· 341 341 /* CMP (reg-shift reg) cccc 0001 0101 xxxx xxxx xxxx 0xx1 xxxx */ 342 342 /* CMN (reg-shift reg) cccc 0001 0111 xxxx xxxx xxxx 0xx1 xxxx */ 343 343 DECODE_EMULATEX (0x0f900090, 0x01100010, PROBES_DATA_PROCESSING_REG, 344 - REGS(ANY, 0, NOPC, 0, ANY)), 344 + REGS(NOPC, 0, NOPC, 0, NOPC)), 345 345 346 346 /* MOV (reg-shift reg) cccc 0001 101x xxxx xxxx xxxx 0xx1 xxxx */ 347 347 /* MVN (reg-shift reg) cccc 0001 111x xxxx xxxx xxxx 0xx1 xxxx */ 348 348 DECODE_EMULATEX (0x0fa00090, 0x01a00010, PROBES_DATA_PROCESSING_REG, 349 - REGS(0, ANY, NOPC, 0, ANY)), 349 + REGS(0, NOPC, NOPC, 0, NOPC)), 350 350 351 351 /* AND (reg-shift reg) cccc 0000 000x xxxx xxxx xxxx 0xx1 xxxx */ 352 352 /* EOR (reg-shift reg) cccc 0000 001x xxxx xxxx xxxx 0xx1 xxxx */ ··· 359 359 /* ORR (reg-shift reg) cccc 0001 100x xxxx xxxx xxxx 0xx1 xxxx */ 360 360 /* BIC (reg-shift reg) cccc 0001 110x xxxx xxxx xxxx 0xx1 xxxx */ 361 361 DECODE_EMULATEX (0x0e000090, 0x00000010, PROBES_DATA_PROCESSING_REG, 362 - REGS(ANY, ANY, NOPC, 0, ANY)), 362 + REGS(NOPC, NOPC, NOPC, 0, NOPC)), 363 363 364 364 DECODE_END 365 365 };
+4 -3
arch/arm/kernel/ptrace.c
··· 908 908 PTRACE_SYSCALL_EXIT, 909 909 }; 910 910 911 - static int tracehook_report_syscall(struct pt_regs *regs, 911 + static void tracehook_report_syscall(struct pt_regs *regs, 912 912 enum ptrace_syscall_dir dir) 913 913 { 914 914 unsigned long ip; ··· 926 926 current_thread_info()->syscall = -1; 927 927 928 928 regs->ARM_ip = ip; 929 - return current_thread_info()->syscall; 930 929 } 931 930 932 931 asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno) ··· 937 938 return -1; 938 939 939 940 if (test_thread_flag(TIF_SYSCALL_TRACE)) 940 - scno = tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER); 941 + tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER); 942 + 943 + scno = current_thread_info()->syscall; 941 944 942 945 if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) 943 946 trace_sys_enter(regs, scno);
+1 -1
arch/arm/kernel/topology.c
··· 275 275 cpu_topology[cpuid].socket_id, mpidr); 276 276 } 277 277 278 - static inline const int cpu_corepower_flags(void) 278 + static inline int cpu_corepower_flags(void) 279 279 { 280 280 return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN; 281 281 }
+3 -5
arch/arm/mach-exynos/exynos.c
··· 173 173 174 174 void __init exynos_cpuidle_init(void) 175 175 { 176 - if (soc_is_exynos5440()) 177 - return; 178 - 179 - platform_device_register(&exynos_cpuidle); 176 + if (soc_is_exynos4210() || soc_is_exynos5250()) 177 + platform_device_register(&exynos_cpuidle); 180 178 } 181 179 182 180 void __init exynos_cpufreq_init(void) ··· 295 297 * This is called from smp_prepare_cpus if we've built for SMP, but 296 298 * we still need to set it up for PM and firmware ops if not. 297 299 */ 298 - if (!IS_ENABLED(SMP)) 300 + if (!IS_ENABLED(CONFIG_SMP)) 299 301 exynos_sysram_init(); 300 302 301 303 exynos_cpuidle_init();
+7 -2
arch/arm/mach-exynos/firmware.c
··· 57 57 58 58 boot_reg = sysram_ns_base_addr + 0x1c; 59 59 60 - if (!soc_is_exynos4212() && !soc_is_exynos3250()) 61 - boot_reg += 4*cpu; 60 + /* 61 + * Almost all Exynos-series of SoCs that run in secure mode don't need 62 + * additional offset for every CPU, with Exynos4412 being the only 63 + * exception. 64 + */ 65 + if (soc_is_exynos4412()) 66 + boot_reg += 4 * cpu; 62 67 63 68 __raw_writel(boot_addr, boot_reg); 64 69 return 0;
+60 -1
arch/arm/mach-exynos/pm_domains.c
··· 17 17 #include <linux/err.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/pm_domain.h> 20 + #include <linux/clk.h> 20 21 #include <linux/delay.h> 21 22 #include <linux/of_address.h> 22 23 #include <linux/of_platform.h> 23 24 #include <linux/sched.h> 24 25 25 26 #include "regs-pmu.h" 27 + 28 + #define MAX_CLK_PER_DOMAIN 4 26 29 27 30 /* 28 31 * Exynos specific wrapper around the generic power domain ··· 35 32 char const *name; 36 33 bool is_off; 37 34 struct generic_pm_domain pd; 35 + struct clk *oscclk; 36 + struct clk *clk[MAX_CLK_PER_DOMAIN]; 37 + struct clk *pclk[MAX_CLK_PER_DOMAIN]; 38 38 }; 39 39 40 40 static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) ··· 49 43 50 44 pd = container_of(domain, struct exynos_pm_domain, pd); 51 45 base = pd->base; 46 + 47 + /* Set oscclk before powering off a domain*/ 48 + if (!power_on) { 49 + int i; 50 + 51 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 52 + if (IS_ERR(pd->clk[i])) 53 + break; 54 + if (clk_set_parent(pd->clk[i], pd->oscclk)) 55 + pr_err("%s: error setting oscclk as parent to clock %d\n", 56 + pd->name, i); 57 + } 58 + } 52 59 53 60 pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0; 54 61 __raw_writel(pwr, base); ··· 79 60 cpu_relax(); 80 61 usleep_range(80, 100); 81 62 } 63 + 64 + /* Restore clocks after powering on a domain*/ 65 + if (power_on) { 66 + int i; 67 + 68 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 69 + if (IS_ERR(pd->clk[i])) 70 + break; 71 + if (clk_set_parent(pd->clk[i], pd->pclk[i])) 72 + pr_err("%s: error setting parent to clock%d\n", 73 + pd->name, i); 74 + } 75 + } 76 + 82 77 return 0; 83 78 } 84 79 ··· 185 152 186 153 for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { 187 154 struct exynos_pm_domain *pd; 188 - int on; 155 + int on, i; 156 + struct device *dev; 189 157 190 158 pdev = of_find_device_by_node(np); 159 + dev = &pdev->dev; 191 160 192 161 pd = kzalloc(sizeof(*pd), GFP_KERNEL); 193 162 if (!pd) { ··· 205 170 pd->pd.power_on = exynos_pd_power_on; 206 171 pd->pd.of_node = np; 207 172 173 + pd->oscclk = clk_get(dev, "oscclk"); 174 + if (IS_ERR(pd->oscclk)) 175 + goto no_clk; 176 + 177 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 178 + char clk_name[8]; 179 + 180 + snprintf(clk_name, sizeof(clk_name), "clk%d", i); 181 + pd->clk[i] = clk_get(dev, clk_name); 182 + if (IS_ERR(pd->clk[i])) 183 + break; 184 + snprintf(clk_name, sizeof(clk_name), "pclk%d", i); 185 + pd->pclk[i] = clk_get(dev, clk_name); 186 + if (IS_ERR(pd->pclk[i])) { 187 + clk_put(pd->clk[i]); 188 + pd->clk[i] = ERR_PTR(-EINVAL); 189 + break; 190 + } 191 + } 192 + 193 + if (IS_ERR(pd->clk[0])) 194 + clk_put(pd->oscclk); 195 + 196 + no_clk: 208 197 platform_set_drvdata(pdev, pd); 209 198 210 199 on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
+23 -8
arch/arm/mach-imx/clk-gate2.c
··· 67 67 68 68 spin_lock_irqsave(gate->lock, flags); 69 69 70 - if (gate->share_count && --(*gate->share_count) > 0) 71 - goto out; 70 + if (gate->share_count) { 71 + if (WARN_ON(*gate->share_count == 0)) 72 + goto out; 73 + else if (--(*gate->share_count) > 0) 74 + goto out; 75 + } 72 76 73 77 reg = readl(gate->reg); 74 78 reg &= ~(3 << gate->bit_idx); ··· 82 78 spin_unlock_irqrestore(gate->lock, flags); 83 79 } 84 80 85 - static int clk_gate2_is_enabled(struct clk_hw *hw) 81 + static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx) 86 82 { 87 - u32 reg; 88 - struct clk_gate2 *gate = to_clk_gate2(hw); 83 + u32 val = readl(reg); 89 84 90 - reg = readl(gate->reg); 91 - 92 - if (((reg >> gate->bit_idx) & 1) == 1) 85 + if (((val >> bit_idx) & 1) == 1) 93 86 return 1; 94 87 95 88 return 0; 89 + } 90 + 91 + static int clk_gate2_is_enabled(struct clk_hw *hw) 92 + { 93 + struct clk_gate2 *gate = to_clk_gate2(hw); 94 + 95 + if (gate->share_count) 96 + return !!(*gate->share_count); 97 + else 98 + return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx); 96 99 } 97 100 98 101 static struct clk_ops clk_gate2_ops = { ··· 127 116 gate->bit_idx = bit_idx; 128 117 gate->flags = clk_gate2_flags; 129 118 gate->lock = lock; 119 + 120 + /* Initialize share_count per hardware state */ 121 + if (share_count) 122 + *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0; 130 123 gate->share_count = share_count; 131 124 132 125 init.name = name;
+1 -1
arch/arm/mach-mvebu/Makefile
··· 7 7 obj-y += system-controller.o mvebu-soc-id.o 8 8 9 9 ifeq ($(CONFIG_MACH_MVEBU_V7),y) 10 - obj-y += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o 10 + obj-y += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o pmsu_ll.o 11 11 obj-$(CONFIG_SMP) += platsmp.o headsmp.o platsmp-a9.o headsmp-a9.o 12 12 obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o 13 13 endif
+19 -10
arch/arm/mach-mvebu/board-v7.c
··· 23 23 #include <linux/mbus.h> 24 24 #include <linux/signal.h> 25 25 #include <linux/slab.h> 26 + #include <linux/irqchip.h> 26 27 #include <asm/hardware/cache-l2x0.h> 27 28 #include <asm/mach/arch.h> 28 29 #include <asm/mach/map.h> ··· 72 71 return 1; 73 72 } 74 73 75 - static void __init mvebu_timer_and_clk_init(void) 74 + static void __init mvebu_init_irq(void) 76 75 { 77 - of_clk_init(NULL); 78 - clocksource_of_init(); 76 + irqchip_init(); 79 77 mvebu_scu_enable(); 80 78 coherency_init(); 81 79 BUG_ON(mvebu_mbus_dt_init(coherency_available())); 80 + } 82 81 83 - if (of_machine_is_compatible("marvell,armada375")) 84 - hook_fault_code(16 + 6, armada_375_external_abort_wa, SIGBUS, 0, 85 - "imprecise external abort"); 82 + static void __init external_abort_quirk(void) 83 + { 84 + u32 dev, rev; 85 + 86 + if (mvebu_get_soc_id(&dev, &rev) == 0 && rev > ARMADA_375_Z1_REV) 87 + return; 88 + 89 + hook_fault_code(16 + 6, armada_375_external_abort_wa, SIGBUS, 0, 90 + "imprecise external abort"); 86 91 } 87 92 88 93 static void __init i2c_quirk(void) ··· 176 169 { 177 170 if (of_machine_is_compatible("plathome,openblocks-ax3-4")) 178 171 i2c_quirk(); 179 - if (of_machine_is_compatible("marvell,a375-db")) 172 + if (of_machine_is_compatible("marvell,a375-db")) { 173 + external_abort_quirk(); 180 174 thermal_quirk(); 175 + } 181 176 182 177 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 183 178 } ··· 194 185 .l2c_aux_mask = ~0, 195 186 .smp = smp_ops(armada_xp_smp_ops), 196 187 .init_machine = mvebu_dt_init, 197 - .init_time = mvebu_timer_and_clk_init, 188 + .init_irq = mvebu_init_irq, 198 189 .restart = mvebu_restart, 199 190 .dt_compat = armada_370_xp_dt_compat, 200 191 MACHINE_END ··· 207 198 DT_MACHINE_START(ARMADA_375_DT, "Marvell Armada 375 (Device Tree)") 208 199 .l2c_aux_val = 0, 209 200 .l2c_aux_mask = ~0, 210 - .init_time = mvebu_timer_and_clk_init, 201 + .init_irq = mvebu_init_irq, 211 202 .init_machine = mvebu_dt_init, 212 203 .restart = 
mvebu_restart, 213 204 .dt_compat = armada_375_dt_compat, ··· 222 213 DT_MACHINE_START(ARMADA_38X_DT, "Marvell Armada 380/385 (Device Tree)") 223 214 .l2c_aux_val = 0, 224 215 .l2c_aux_mask = ~0, 225 - .init_time = mvebu_timer_and_clk_init, 216 + .init_irq = mvebu_init_irq, 226 217 .restart = mvebu_restart, 227 218 .dt_compat = armada_38x_dt_compat, 228 219 MACHINE_END
+2 -7
arch/arm/mach-mvebu/pmsu.c
··· 66 66 extern void ll_disable_coherency(void); 67 67 extern void ll_enable_coherency(void); 68 68 69 + extern void armada_370_xp_cpu_resume(void); 70 + 69 71 static struct platform_device armada_xp_cpuidle_device = { 70 72 .name = "cpuidle-armada-370-xp", 71 73 }; ··· 140 138 reg = readl(pmsu_mp_base + L2C_NFABRIC_PM_CTL); 141 139 reg |= L2C_NFABRIC_PM_CTL_PWR_DOWN; 142 140 writel(reg, pmsu_mp_base + L2C_NFABRIC_PM_CTL); 143 - } 144 - 145 - static void armada_370_xp_cpu_resume(void) 146 - { 147 - asm volatile("bl ll_add_cpu_to_smp_group\n\t" 148 - "bl ll_enable_coherency\n\t" 149 - "b cpu_resume\n\t"); 150 141 } 151 142 152 143 /* No locking is needed because we only access per-CPU registers */
+25
arch/arm/mach-mvebu/pmsu_ll.S
··· 1 + /* 2 + * Copyright (C) 2014 Marvell 3 + * 4 + * Thomas Petazzoni <thomas.petazzoni@free-electrons.com> 5 + * Gregory Clement <gregory.clement@free-electrons.com> 6 + * 7 + * This file is licensed under the terms of the GNU General Public 8 + * License version 2. This program is licensed "as is" without any 9 + * warranty of any kind, whether express or implied. 10 + */ 11 + 12 + #include <linux/linkage.h> 13 + #include <asm/assembler.h> 14 + 15 + /* 16 + * This is the entry point through which CPUs exiting cpuidle deep 17 + * idle state are going. 18 + */ 19 + ENTRY(armada_370_xp_cpu_resume) 20 + ARM_BE8(setend be ) @ go BE8 if entered LE 21 + bl ll_add_cpu_to_smp_group 22 + bl ll_enable_coherency 23 + b cpu_resume 24 + ENDPROC(armada_370_xp_cpu_resume) 25 +
+4 -2
arch/arm/mach-omap2/Makefile
··· 110 110 obj-$(CONFIG_ARCH_OMAP2) += prm2xxx_3xxx.o prm2xxx.o cm2xxx.o 111 111 obj-$(CONFIG_ARCH_OMAP3) += prm2xxx_3xxx.o prm3xxx.o cm3xxx.o 112 112 obj-$(CONFIG_ARCH_OMAP3) += vc3xxx_data.o vp3xxx_data.o 113 - obj-$(CONFIG_SOC_AM33XX) += prm33xx.o cm33xx.o 114 113 omap-prcm-4-5-common = cminst44xx.o cm44xx.o prm44xx.o \ 115 114 prcm_mpu44xx.o prminst44xx.o \ 116 115 vc44xx_data.o vp44xx_data.o 117 116 obj-$(CONFIG_ARCH_OMAP4) += $(omap-prcm-4-5-common) 118 117 obj-$(CONFIG_SOC_OMAP5) += $(omap-prcm-4-5-common) 119 118 obj-$(CONFIG_SOC_DRA7XX) += $(omap-prcm-4-5-common) 120 - obj-$(CONFIG_SOC_AM43XX) += $(omap-prcm-4-5-common) 119 + am33xx-43xx-prcm-common += prm33xx.o cm33xx.o 120 + obj-$(CONFIG_SOC_AM33XX) += $(am33xx-43xx-prcm-common) 121 + obj-$(CONFIG_SOC_AM43XX) += $(omap-prcm-4-5-common) \ 122 + $(am33xx-43xx-prcm-common) 121 123 122 124 # OMAP voltage domains 123 125 voltagedomain-common := voltage.o vc.o vp.o
+1 -1
arch/arm/mach-omap2/clkt_dpll.c
··· 76 76 * (assuming that it is counting N upwards), or -2 if the enclosing loop 77 77 * should skip to the next iteration (again assuming N is increasing). 78 78 */ 79 - static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n) 79 + static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n) 80 80 { 81 81 struct dpll_data *dd; 82 82 long fint, fint_min, fint_max;
+3
arch/arm/mach-omap2/cm-regbits-34xx.h
··· 26 26 #define OMAP3430_EN_WDT3_SHIFT 12 27 27 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_MASK (1 << 0) 28 28 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_SHIFT 0 29 + #define OMAP3430_IVA2_DPLL_FREQSEL_SHIFT 4 29 30 #define OMAP3430_IVA2_DPLL_FREQSEL_MASK (0xf << 4) 30 31 #define OMAP3430_EN_IVA2_DPLL_DRIFTGUARD_SHIFT 3 32 + #define OMAP3430_EN_IVA2_DPLL_SHIFT 0 31 33 #define OMAP3430_EN_IVA2_DPLL_MASK (0x7 << 0) 32 34 #define OMAP3430_ST_IVA2_SHIFT 0 33 35 #define OMAP3430_ST_IVA2_CLK_MASK (1 << 0) 36 + #define OMAP3430_AUTO_IVA2_DPLL_SHIFT 0 34 37 #define OMAP3430_AUTO_IVA2_DPLL_MASK (0x7 << 0) 35 38 #define OMAP3430_IVA2_CLK_SRC_SHIFT 19 36 39 #define OMAP3430_IVA2_CLK_SRC_WIDTH 3
+1 -1
arch/arm/mach-omap2/cm33xx.h
··· 380 380 void am33xx_cm_clkdm_force_sleep(u16 inst, u16 cdoffs); 381 381 void am33xx_cm_clkdm_force_wakeup(u16 inst, u16 cdoffs); 382 382 383 - #ifdef CONFIG_SOC_AM33XX 383 + #if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) 384 384 extern int am33xx_cm_wait_module_idle(u16 inst, s16 cdoffs, 385 385 u16 clkctrl_offs); 386 386 extern void am33xx_cm_module_enable(u8 mode, u16 inst, s16 cdoffs,
+2 -2
arch/arm/mach-omap2/common.h
··· 162 162 } 163 163 #endif 164 164 165 - #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) 165 + #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 166 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) 166 167 void omap44xx_restart(enum reboot_mode mode, const char *cmd); 167 168 #else 168 169 static inline void omap44xx_restart(enum reboot_mode mode, const char *cmd) ··· 249 248 } 250 249 #endif 251 250 252 - extern void __init gic_init_irq(void); 253 251 extern void gic_dist_disable(void); 254 252 extern void gic_dist_enable(void); 255 253 extern bool gic_dist_disabled(void);
-28
arch/arm/mach-omap2/devices.c
··· 297 297 static inline void omap_init_audio(void) {} 298 298 #endif 299 299 300 - #if defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI) || \ 301 - defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI_MODULE) 302 - 303 - static struct platform_device omap_hdmi_audio = { 304 - .name = "omap-hdmi-audio", 305 - .id = -1, 306 - }; 307 - 308 - static void __init omap_init_hdmi_audio(void) 309 - { 310 - struct omap_hwmod *oh; 311 - struct platform_device *pdev; 312 - 313 - oh = omap_hwmod_lookup("dss_hdmi"); 314 - if (!oh) 315 - return; 316 - 317 - pdev = omap_device_build("omap-hdmi-audio-dai", -1, oh, NULL, 0); 318 - WARN(IS_ERR(pdev), 319 - "Can't build omap_device for omap-hdmi-audio-dai.\n"); 320 - 321 - platform_device_register(&omap_hdmi_audio); 322 - } 323 - #else 324 - static inline void omap_init_hdmi_audio(void) {} 325 - #endif 326 - 327 300 #if defined(CONFIG_SPI_OMAP24XX) || defined(CONFIG_SPI_OMAP24XX_MODULE) 328 301 329 302 #include <linux/platform_data/spi-omap2-mcspi.h> ··· 432 459 */ 433 460 omap_init_audio(); 434 461 omap_init_camera(); 435 - omap_init_hdmi_audio(); 436 462 omap_init_mbox(); 437 463 /* If dtb is there, the devices will be created dynamically */ 438 464 if (!of_have_populated_dt()) {
+10
arch/arm/mach-omap2/dsp.c
··· 29 29 #ifdef CONFIG_TIDSPBRIDGE_DVFS 30 30 #include "omap-pm.h" 31 31 #endif 32 + #include "soc.h" 32 33 33 34 #include <linux/platform_data/dsp-omap.h> 34 35 ··· 60 59 phys_addr_t size = CONFIG_TIDSPBRIDGE_MEMPOOL_SIZE; 61 60 phys_addr_t paddr; 62 61 62 + if (!cpu_is_omap34xx()) 63 + return; 64 + 63 65 if (!size) 64 66 return; 65 67 ··· 86 82 struct platform_device *pdev; 87 83 int err = -ENOMEM; 88 84 struct omap_dsp_platform_data *pdata = &omap_dsp_pdata; 85 + 86 + if (!cpu_is_omap34xx()) 87 + return 0; 89 88 90 89 pdata->phys_mempool_base = omap_dsp_get_mempool_base(); 91 90 ··· 122 115 123 116 static void __exit omap_dsp_exit(void) 124 117 { 118 + if (!cpu_is_omap34xx()) 119 + return; 120 + 125 121 platform_device_unregister(omap_dsp_pdev); 126 122 } 127 123 module_exit(omap_dsp_exit);
+1 -1
arch/arm/mach-omap2/gpmc.c
··· 1615 1615 return ret; 1616 1616 } 1617 1617 1618 - for_each_child_of_node(pdev->dev.of_node, child) { 1618 + for_each_available_child_of_node(pdev->dev.of_node, child) { 1619 1619 1620 1620 if (!child->name) 1621 1621 continue;
+12
arch/arm/mach-omap2/id.c
··· 649 649 } 650 650 break; 651 651 652 + case 0xb9bc: 653 + switch (rev) { 654 + case 0: 655 + omap_revision = DRA722_REV_ES1_0; 656 + break; 657 + default: 658 + /* If we have no new revisions */ 659 + omap_revision = DRA722_REV_ES1_0; 660 + break; 661 + } 662 + break; 663 + 652 664 default: 653 665 /* Unknown default to latest silicon rev as default*/ 654 666 pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%d)\n",
+4 -2
arch/arm/mach-omap2/mux.c
··· 183 183 m0_entry = mux->muxnames[0]; 184 184 185 185 /* First check for full name in mode0.muxmode format */ 186 - if (mode0_len && strncmp(muxname, m0_entry, mode0_len)) 187 - continue; 186 + if (mode0_len) 187 + if (strncmp(muxname, m0_entry, mode0_len) || 188 + (strlen(m0_entry) != mode0_len)) 189 + continue; 188 190 189 191 /* Then check for muxmode only */ 190 192 for (i = 0; i < OMAP_MUX_NR_MODES; i++) {
-20
arch/arm/mach-omap2/omap4-common.c
··· 102 102 {} 103 103 #endif 104 104 105 - void __init gic_init_irq(void) 106 - { 107 - void __iomem *omap_irq_base; 108 - 109 - /* Static mapping, never released */ 110 - gic_dist_base_addr = ioremap(OMAP44XX_GIC_DIST_BASE, SZ_4K); 111 - BUG_ON(!gic_dist_base_addr); 112 - 113 - twd_base = ioremap(OMAP44XX_LOCAL_TWD_BASE, SZ_4K); 114 - BUG_ON(!twd_base); 115 - 116 - /* Static mapping, never released */ 117 - omap_irq_base = ioremap(OMAP44XX_GIC_CPU_BASE, SZ_512); 118 - BUG_ON(!omap_irq_base); 119 - 120 - omap_wakeupgen_init(); 121 - 122 - gic_init(0, 29, gic_dist_base_addr, omap_irq_base); 123 - } 124 - 125 105 void gic_dist_disable(void) 126 106 { 127 107 if (gic_dist_base_addr)
+3 -3
arch/arm/mach-omap2/omap_hwmod.c
··· 4251 4251 soc_ops.enable_module = _omap4_enable_module; 4252 4252 soc_ops.disable_module = _omap4_disable_module; 4253 4253 soc_ops.wait_target_ready = _omap4_wait_target_ready; 4254 - soc_ops.assert_hardreset = _omap4_assert_hardreset; 4255 - soc_ops.deassert_hardreset = _omap4_deassert_hardreset; 4256 - soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted; 4254 + soc_ops.assert_hardreset = _am33xx_assert_hardreset; 4255 + soc_ops.deassert_hardreset = _am33xx_deassert_hardreset; 4256 + soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted; 4257 4257 soc_ops.init_clkdm = _init_clkdm; 4258 4258 } else if (soc_is_am33xx()) { 4259 4259 soc_ops.enable_module = _am33xx_enable_module;
+73
arch/arm/mach-omap2/omap_hwmod_54xx_data.c
··· 2020 2020 }, 2021 2021 }; 2022 2022 2023 + /* 2024 + * 'ocp2scp' class 2025 + * bridge to transform ocp interface protocol to scp (serial control port) 2026 + * protocol 2027 + */ 2028 + /* ocp2scp3 */ 2029 + static struct omap_hwmod omap54xx_ocp2scp3_hwmod; 2030 + /* l4_cfg -> ocp2scp3 */ 2031 + static struct omap_hwmod_ocp_if omap54xx_l4_cfg__ocp2scp3 = { 2032 + .master = &omap54xx_l4_cfg_hwmod, 2033 + .slave = &omap54xx_ocp2scp3_hwmod, 2034 + .clk = "l4_root_clk_div", 2035 + .user = OCP_USER_MPU | OCP_USER_SDMA, 2036 + }; 2037 + 2038 + static struct omap_hwmod omap54xx_ocp2scp3_hwmod = { 2039 + .name = "ocp2scp3", 2040 + .class = &omap54xx_ocp2scp_hwmod_class, 2041 + .clkdm_name = "l3init_clkdm", 2042 + .prcm = { 2043 + .omap4 = { 2044 + .clkctrl_offs = OMAP54XX_CM_L3INIT_OCP2SCP3_CLKCTRL_OFFSET, 2045 + .context_offs = OMAP54XX_RM_L3INIT_OCP2SCP3_CONTEXT_OFFSET, 2046 + .modulemode = MODULEMODE_HWCTRL, 2047 + }, 2048 + }, 2049 + }; 2050 + 2051 + /* 2052 + * 'sata' class 2053 + * sata: serial ata interface gen2 compliant ( 1 rx/ 1 tx) 2054 + */ 2055 + 2056 + static struct omap_hwmod_class_sysconfig omap54xx_sata_sysc = { 2057 + .sysc_offs = 0x0000, 2058 + .sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE), 2059 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | 2060 + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | 2061 + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), 2062 + .sysc_fields = &omap_hwmod_sysc_type2, 2063 + }; 2064 + 2065 + static struct omap_hwmod_class omap54xx_sata_hwmod_class = { 2066 + .name = "sata", 2067 + .sysc = &omap54xx_sata_sysc, 2068 + }; 2069 + 2070 + /* sata */ 2071 + static struct omap_hwmod omap54xx_sata_hwmod = { 2072 + .name = "sata", 2073 + .class = &omap54xx_sata_hwmod_class, 2074 + .clkdm_name = "l3init_clkdm", 2075 + .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, 2076 + .main_clk = "func_48m_fclk", 2077 + .mpu_rt_idx = 1, 2078 + .prcm = { 2079 + .omap4 = { 2080 + .clkctrl_offs = OMAP54XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, 2081 + .context_offs = OMAP54XX_RM_L3INIT_SATA_CONTEXT_OFFSET, 2082 + .modulemode = MODULEMODE_SWCTRL, 2083 + }, 2084 + }, 2085 + }; 2086 + 2087 + /* l4_cfg -> sata */ 2088 + static struct omap_hwmod_ocp_if omap54xx_l4_cfg__sata = { 2089 + .master = &omap54xx_l4_cfg_hwmod, 2090 + .slave = &omap54xx_sata_hwmod, 2091 + .clk = "l3_iclk_div", 2092 + .user = OCP_USER_MPU | OCP_USER_SDMA, 2093 + }; 2023 2094 2024 2095 /* 2025 2096 * Interfaces ··· 2836 2765 &omap54xx_l4_cfg__usb_tll_hs, 2837 2766 &omap54xx_l4_cfg__usb_otg_ss, 2838 2767 &omap54xx_l4_wkup__wd_timer2, 2768 + &omap54xx_l4_cfg__ocp2scp3, 2769 + &omap54xx_l4_cfg__sata, 2839 2770 NULL, 2840 2771 };
+13 -5
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1268 1268 }; 1269 1269 1270 1270 /* sata */ 1271 - static struct omap_hwmod_opt_clk sata_opt_clks[] = { 1272 - { .role = "ref_clk", .clk = "sata_ref_clk" }, 1273 - }; 1274 1271 1275 1272 static struct omap_hwmod dra7xx_sata_hwmod = { 1276 1273 .name = "sata", ··· 1275 1278 .clkdm_name = "l3init_clkdm", 1276 1279 .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, 1277 1280 .main_clk = "func_48m_fclk", 1281 + .mpu_rt_idx = 1, 1278 1282 .prcm = { 1279 1283 .omap4 = { 1280 1284 .clkctrl_offs = DRA7XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, ··· 1283 1285 .modulemode = MODULEMODE_SWCTRL, 1284 1286 }, 1285 1287 }, 1286 - .opt_clks = sata_opt_clks, 1287 - .opt_clks_cnt = ARRAY_SIZE(sata_opt_clks), 1288 1288 }; 1289 1289 1290 1290 /* ··· 1727 1731 * 1728 1732 */ 1729 1733 1734 + static struct omap_hwmod_class_sysconfig dra7xx_usb_otg_ss_sysc = { 1735 + .rev_offs = 0x0000, 1736 + .sysc_offs = 0x0010, 1737 + .sysc_flags = (SYSC_HAS_DMADISABLE | SYSC_HAS_MIDLEMODE | 1738 + SYSC_HAS_SIDLEMODE), 1739 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | 1740 + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | 1741 + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), 1742 + .sysc_fields = &omap_hwmod_sysc_type2, 1743 + }; 1744 + 1730 1745 static struct omap_hwmod_class dra7xx_usb_otg_ss_hwmod_class = { 1731 1746 .name = "usb_otg_ss", 1747 + .sysc = &dra7xx_usb_otg_ss_sysc, 1732 1748 }; 1733 1749 1734 1750 /* usb_otg_ss1 */
+6
arch/arm/mach-omap2/prm-regbits-34xx.h
··· 35 35 #define OMAP3430_LOGICSTATEST_MASK (1 << 2) 36 36 #define OMAP3430_LASTLOGICSTATEENTERED_MASK (1 << 2) 37 37 #define OMAP3430_LASTPOWERSTATEENTERED_MASK (0x3 << 0) 38 + #define OMAP3430_GRPSEL_MCBSP5_MASK (1 << 10) 39 + #define OMAP3430_GRPSEL_MCBSP1_MASK (1 << 9) 38 40 #define OMAP3630_GRPSEL_UART4_MASK (1 << 18) 39 41 #define OMAP3430_GRPSEL_GPIO6_MASK (1 << 17) 40 42 #define OMAP3430_GRPSEL_GPIO5_MASK (1 << 16) ··· 44 42 #define OMAP3430_GRPSEL_GPIO3_MASK (1 << 14) 45 43 #define OMAP3430_GRPSEL_GPIO2_MASK (1 << 13) 46 44 #define OMAP3430_GRPSEL_UART3_MASK (1 << 11) 45 + #define OMAP3430_GRPSEL_GPT8_MASK (1 << 9) 46 + #define OMAP3430_GRPSEL_GPT7_MASK (1 << 8) 47 + #define OMAP3430_GRPSEL_GPT6_MASK (1 << 7) 48 + #define OMAP3430_GRPSEL_GPT5_MASK (1 << 6) 47 49 #define OMAP3430_GRPSEL_MCBSP4_MASK (1 << 2) 48 50 #define OMAP3430_GRPSEL_MCBSP3_MASK (1 << 1) 49 51 #define OMAP3430_GRPSEL_MCBSP2_MASK (1 << 0)
+1
arch/arm/mach-omap2/soc.h
··· 462 462 #define DRA7XX_CLASS 0x07000000 463 463 #define DRA752_REV_ES1_0 (DRA7XX_CLASS | (0x52 << 16) | (0x10 << 8)) 464 464 #define DRA752_REV_ES1_1 (DRA7XX_CLASS | (0x52 << 16) | (0x11 << 8)) 465 + #define DRA722_REV_ES1_0 (DRA7XX_CLASS | (0x22 << 16) | (0x10 << 8)) 465 466 466 467 void omap2xxx_check_revision(void); 467 468 void omap3xxx_check_revision(void);
+6 -1
arch/arm/mach-sa1100/collie.c
··· 329 329 .name = "rootfs", 330 330 .offset = MTDPART_OFS_APPEND, 331 331 .size = 0x00e20000, 332 + }, { 333 + .name = "bootblock", 334 + .offset = MTDPART_OFS_APPEND, 335 + .size = 0x00020000, 336 + .mask_flags = MTD_WRITEABLE 332 337 } 333 338 }; 334 339 ··· 361 356 } 362 357 363 358 static struct flash_platform_data collie_flash_data = { 364 - .map_name = "jedec_probe", 359 + .map_name = "cfi_probe", 365 360 .init = collie_flash_init, 366 361 .set_vpp = collie_set_vpp, 367 362 .exit = collie_flash_exit,
+77
arch/arm/mach-sunxi/sunxi.c
··· 12 12 13 13 #include <linux/clk-provider.h> 14 14 #include <linux/clocksource.h> 15 + #include <linux/delay.h> 16 + #include <linux/kernel.h> 17 + #include <linux/init.h> 18 + #include <linux/of_address.h> 19 + #include <linux/of_irq.h> 20 + #include <linux/of_platform.h> 21 + #include <linux/io.h> 22 + #include <linux/reboot.h> 15 23 16 24 #include <asm/mach/arch.h> 25 + #include <asm/mach/map.h> 26 + #include <asm/system_misc.h> 27 + 28 + #define SUN4I_WATCHDOG_CTRL_REG 0x00 29 + #define SUN4I_WATCHDOG_CTRL_RESTART BIT(0) 30 + #define SUN4I_WATCHDOG_MODE_REG 0x04 31 + #define SUN4I_WATCHDOG_MODE_ENABLE BIT(0) 32 + #define SUN4I_WATCHDOG_MODE_RESET_ENABLE BIT(1) 33 + 34 + #define SUN6I_WATCHDOG1_IRQ_REG 0x00 35 + #define SUN6I_WATCHDOG1_CTRL_REG 0x10 36 + #define SUN6I_WATCHDOG1_CTRL_RESTART BIT(0) 37 + #define SUN6I_WATCHDOG1_CONFIG_REG 0x14 38 + #define SUN6I_WATCHDOG1_CONFIG_RESTART BIT(0) 39 + #define SUN6I_WATCHDOG1_CONFIG_IRQ BIT(1) 40 + #define SUN6I_WATCHDOG1_MODE_REG 0x18 41 + #define SUN6I_WATCHDOG1_MODE_ENABLE BIT(0) 42 + 43 + static void __iomem *wdt_base; 44 + 45 + static void sun4i_restart(enum reboot_mode mode, const char *cmd) 46 + { 47 + if (!wdt_base) 48 + return; 49 + 50 + /* Enable timer and set reset bit in the watchdog */ 51 + writel(SUN4I_WATCHDOG_MODE_ENABLE | SUN4I_WATCHDOG_MODE_RESET_ENABLE, 52 + wdt_base + SUN4I_WATCHDOG_MODE_REG); 53 + 54 + /* 55 + * Restart the watchdog. The default (and lowest) interval 56 + * value for the watchdog is 0.5s. 57 + */ 58 + writel(SUN4I_WATCHDOG_CTRL_RESTART, wdt_base + SUN4I_WATCHDOG_CTRL_REG); 59 + 60 + while (1) { 61 + mdelay(5); 62 + writel(SUN4I_WATCHDOG_MODE_ENABLE | SUN4I_WATCHDOG_MODE_RESET_ENABLE, 63 + wdt_base + SUN4I_WATCHDOG_MODE_REG); 64 + } 65 + } 66 + 67 + static struct of_device_id sunxi_restart_ids[] = { 68 + { .compatible = "allwinner,sun4i-a10-wdt" }, 69 + { /*sentinel*/ } 70 + }; 71 + 72 + static void sunxi_setup_restart(void) 73 + { 74 + struct device_node *np; 75 + 76 + np = of_find_matching_node(NULL, sunxi_restart_ids); 77 + if (WARN(!np, "unable to setup watchdog restart")) 78 + return; 79 + 80 + wdt_base = of_iomap(np, 0); 81 + WARN(!wdt_base, "failed to map watchdog base address"); 82 + } 83 + 84 + static void __init sunxi_dt_init(void) 85 + { 86 + sunxi_setup_restart(); 87 + 88 + of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 89 + } 17 90 18 91 static const char * const sunxi_board_dt_compat[] = { 19 92 "allwinner,sun4i-a10", ··· 96 23 }; 97 24 98 25 DT_MACHINE_START(SUNXI_DT, "Allwinner A1X (Device Tree)") 26 + .init_machine = sunxi_dt_init, 99 27 .dt_compat = sunxi_board_dt_compat, 28 + .restart = sun4i_restart, 100 29 MACHINE_END 101 30 102 31 static const char * const sun6i_board_dt_compat[] = { ··· 126 51 }; 127 52 128 53 DT_MACHINE_START(SUN7I_DT, "Allwinner sun7i (A20) Family") 54 + .init_machine = sunxi_dt_init, 129 55 .dt_compat = sun7i_board_dt_compat, 56 + .restart = sun4i_restart, 130 57 MACHINE_END
+32 -1
arch/arm/mm/cache-l2x0.c
··· 664 664 665 665 static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) 666 666 { 667 - unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_PART_MASK; 667 + unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK; 668 668 bool cortex_a9 = read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9; 669 669 670 670 if (rev >= L310_CACHE_ID_RTL_R2P0) { ··· 1064 1064 .flush_all = l2c210_flush_all, 1065 1065 .disable = l2c310_disable, 1066 1066 .sync = l2c210_sync, 1067 + .resume = l2c310_resume, 1068 + }, 1069 + }; 1070 + 1071 + /* 1072 + * This is a variant of the of_l2c310_data with .sync set to 1073 + * NULL. Outer sync operations are not needed when the system is I/O 1074 + * coherent, and potentially harmful in certain situations (PCIe/PL310 1075 + * deadlock on Armada 375/38x due to hardware I/O coherency). The 1076 + * other operations are kept because they are infrequent (therefore do 1077 + * not cause the deadlock in practice) and needed for secondary CPU 1078 + * boot and other power management activities. 1079 + */ 1080 + static const struct l2c_init_data of_l2c310_coherent_data __initconst = { 1081 + .type = "L2C-310 Coherent", 1082 + .way_size_0 = SZ_8K, 1083 + .num_lock = 8, 1084 + .of_parse = l2c310_of_parse, 1085 + .enable = l2c310_enable, 1086 + .fixup = l2c310_fixup, 1087 + .save = l2c310_save, 1088 + .outer_cache = { 1089 + .inv_range = l2c210_inv_range, 1090 + .clean_range = l2c210_clean_range, 1091 + .flush_range = l2c210_flush_range, 1092 + .flush_all = l2c210_flush_all, 1093 + .disable = l2c310_disable, 1067 1094 .resume = l2c310_resume, 1068 1095 }, 1069 1096 }; ··· 1513 1486 l2x0_saved_regs.phy_base = res.start; 1514 1487 1515 1488 data = of_match_node(l2x0_ids, np)->data; 1489 + 1490 + if (of_device_is_compatible(np, "arm,pl310-cache") && 1491 + of_property_read_bool(np, "arm,io-coherent")) 1492 + data = &of_l2c310_coherent_data; 1516 1493 1517 1494 old_aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); 1518 1495 if (old_aux != ((old_aux & aux_mask) | aux_val)) {
+1
arch/arm/mm/nommu.c
··· 300 300 sanity_check_meminfo_mpu(); 301 301 end = memblock_end_of_DRAM(); 302 302 high_memory = __va(end - 1) + 1; 303 + memblock_set_current_limit(end); 303 304 } 304 305 305 306 /*
+2
arch/arm64/include/asm/memory.h
··· 56 56 #define TASK_SIZE_32 UL(0x100000000) 57 57 #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ 58 58 TASK_SIZE_32 : TASK_SIZE_64) 59 + #define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ 60 + TASK_SIZE_32 : TASK_SIZE_64) 59 61 #else 60 62 #define TASK_SIZE TASK_SIZE_64 61 63 #endif /* CONFIG_COMPAT */
+1 -1
arch/arm64/include/asm/pgtable.h
··· 292 292 #define pmd_sect(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \ 293 293 PMD_TYPE_SECT) 294 294 295 - #ifdef ARM64_64K_PAGES 295 + #ifdef CONFIG_ARM64_64K_PAGES 296 296 #define pud_sect(pud) (0) 297 297 #else 298 298 #define pud_sect(pud) ((pud_val(pud) & PUD_TYPE_MASK) == \
+4
arch/arm64/include/asm/ptrace.h
··· 21 21 22 22 #include <uapi/asm/ptrace.h> 23 23 24 + /* Current Exception Level values, as contained in CurrentEL */ 25 + #define CurrentEL_EL1 (1 << 2) 26 + #define CurrentEL_EL2 (2 << 2) 27 + 24 28 /* AArch32-specific ptrace requests */ 25 29 #define COMPAT_PTRACE_GETREGS 12 26 30 #define COMPAT_PTRACE_SETREGS 13
+1 -2
arch/arm64/kernel/efi-entry.S
··· 78 78 79 79 /* Turn off Dcache and MMU */ 80 80 mrs x0, CurrentEL 81 - cmp x0, #PSR_MODE_EL2t 82 - ccmp x0, #PSR_MODE_EL2h, #0x4, ne 81 + cmp x0, #CurrentEL_EL2 83 82 b.ne 1f 84 83 mrs x0, sctlr_el2 85 84 bic x0, x0, #1 << 0 // clear SCTLR.M
+1 -2
arch/arm64/kernel/head.S
··· 270 270 */ 271 271 ENTRY(el2_setup) 272 272 mrs x0, CurrentEL 273 - cmp x0, #PSR_MODE_EL2t 274 - ccmp x0, #PSR_MODE_EL2h, #0x4, ne 273 + cmp x0, #CurrentEL_EL2 275 274 b.ne 1f 276 275 mrs x0, sctlr_el2 277 276 CPU_BE( orr x0, x0, #(1 << 25) ) // Set the EE bit for EL2
+2
arch/arm64/mm/copypage.c
··· 27 27 copy_page(kto, kfrom); 28 28 __flush_dcache_area(kto, PAGE_SIZE); 29 29 } 30 + EXPORT_SYMBOL_GPL(__cpu_copy_user_page); 30 31 31 32 void __cpu_clear_user_page(void *kaddr, unsigned long vaddr) 32 33 { 33 34 clear_page(kaddr); 34 35 } 36 + EXPORT_SYMBOL_GPL(__cpu_clear_user_page);
+2 -1
arch/arm64/mm/flush.c
··· 79 79 return; 80 80 81 81 if (!test_and_set_bit(PG_dcache_clean, &page->flags)) { 82 - __flush_dcache_area(page_address(page), PAGE_SIZE); 82 + __flush_dcache_area(page_address(page), 83 + PAGE_SIZE << compound_order(page)); 83 84 __flush_icache_all(); 84 85 } else if (icache_is_aivivt()) { 85 86 __flush_icache_all();
+2 -1
arch/m68k/kernel/head.S
··· 921 921 jls 1f 922 922 lsrl #1,%d1 923 923 1: 924 - movel %d1,m68k_init_mapped_size 924 + lea %pc@(m68k_init_mapped_size),%a0 925 + movel %d1,%a0@ 925 926 mmu_map #PAGE_OFFSET,%pc@(L(phys_kernel_start)),%d1,\ 926 927 %pc@(m68k_supervisor_cachemode) 927 928
+2
arch/m68k/kernel/time.c
··· 11 11 */ 12 12 13 13 #include <linux/errno.h> 14 + #include <linux/export.h> 14 15 #include <linux/module.h> 15 16 #include <linux/sched.h> 16 17 #include <linux/kernel.h> ··· 31 30 32 31 33 32 unsigned long (*mach_random_get_entropy)(void); 33 + EXPORT_SYMBOL_GPL(mach_random_get_entropy); 34 34 35 35 36 36 /*
+1
arch/mips/Kconfig
··· 269 269 config LASAT 270 270 bool "LASAT Networks platforms" 271 271 select CEVT_R4K 272 + select CRC32 272 273 select CSRC_R4K 273 274 select DMA_NONCOHERENT 274 275 select SYS_HAS_EARLY_PRINTK
-2
arch/mips/include/asm/sigcontext.h
··· 32 32 __u32 sc_lo2; 33 33 __u32 sc_hi3; 34 34 __u32 sc_lo3; 35 - __u64 sc_msaregs[32]; /* Most significant 64 bits */ 36 - __u32 sc_msa_csr; 37 35 }; 38 36 #endif /* _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32 */ 39 37 #endif /* _ASM_SIGCONTEXT_H */
+4
arch/mips/include/asm/uasm.h
··· 67 67 #define Ip_u2s3u1(op) \ 68 68 void ISAOPC(op)(u32 **buf, unsigned int a, signed int b, unsigned int c) 69 69 70 + #define Ip_s3s1s2(op) \ 71 + void ISAOPC(op)(u32 **buf, int a, int b, int c) 72 + 70 73 #define Ip_u2u1s3(op) \ 71 74 void ISAOPC(op)(u32 **buf, unsigned int a, unsigned int b, signed int c) 72 75 ··· 150 147 Ip_u2s3u1(_sd); 151 148 Ip_u2u1u3(_sll); 152 149 Ip_u3u2u1(_sllv); 150 + Ip_s3s1s2(_slt); 153 151 Ip_u2u1s3(_sltiu); 154 152 Ip_u3u1u2(_sltu); 155 153 Ip_u2u1u3(_sra);
+1
arch/mips/include/uapi/asm/inst.h
··· 273 273 mm_and_op = 0x250, 274 274 mm_or32_op = 0x290, 275 275 mm_xor32_op = 0x310, 276 + mm_slt_op = 0x350, 276 277 mm_sltu_op = 0x390, 277 278 }; 278 279
-8
arch/mips/include/uapi/asm/sigcontext.h
··· 12 12 #include <linux/types.h> 13 13 #include <asm/sgidefs.h> 14 14 15 - /* Bits which may be set in sc_used_math */ 16 - #define USEDMATH_FP (1 << 0) 17 - #define USEDMATH_MSA (1 << 1) 18 - 19 15 #if _MIPS_SIM == _MIPS_SIM_ABI32 20 16 21 17 /* ··· 37 41 unsigned long sc_lo2; 38 42 unsigned long sc_hi3; 39 43 unsigned long sc_lo3; 40 - unsigned long long sc_msaregs[32]; /* Most significant 64 bits */ 41 - unsigned long sc_msa_csr; 42 44 }; 43 45 44 46 #endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */ ··· 70 76 __u32 sc_used_math; 71 77 __u32 sc_dsp; 72 78 __u32 sc_reserved; 73 - __u64 sc_msaregs[32]; 74 - __u32 sc_msa_csr; 75 79 }; 76 80 77 81
-3
arch/mips/kernel/asm-offsets.c
··· 293 293 OFFSET(SC_LO2, sigcontext, sc_lo2); 294 294 OFFSET(SC_HI3, sigcontext, sc_hi3); 295 295 OFFSET(SC_LO3, sigcontext, sc_lo3); 296 - OFFSET(SC_MSAREGS, sigcontext, sc_msaregs); 297 296 BLANK(); 298 297 } 299 298 #endif ··· 307 308 OFFSET(SC_MDLO, sigcontext, sc_mdlo); 308 309 OFFSET(SC_PC, sigcontext, sc_pc); 309 310 OFFSET(SC_FPC_CSR, sigcontext, sc_fpc_csr); 310 - OFFSET(SC_MSAREGS, sigcontext, sc_msaregs); 311 311 BLANK(); 312 312 } 313 313 #endif ··· 318 320 OFFSET(SC32_FPREGS, sigcontext32, sc_fpregs); 319 321 OFFSET(SC32_FPC_CSR, sigcontext32, sc_fpc_csr); 320 322 OFFSET(SC32_FPC_EIR, sigcontext32, sc_fpc_eir); 321 - OFFSET(SC32_MSAREGS, sigcontext32, sc_msaregs); 322 323 BLANK(); 323 324 } 324 325 #endif
+1 -1
arch/mips/kernel/irq-msc01.c
··· 126 126 127 127 board_bind_eic_interrupt = &msc_bind_eic_interrupt; 128 128 129 - for (; nirq >= 0; nirq--, imp++) { 129 + for (; nirq > 0; nirq--, imp++) { 130 130 int n = imp->im_irq; 131 131 132 132 switch (imp->im_type) {
+2 -2
arch/mips/kernel/pm-cps.c
··· 101 101 if (!coupled_coherence) 102 102 return; 103 103 104 - smp_mb__before_atomic_inc(); 104 + smp_mb__before_atomic(); 105 105 atomic_inc(a); 106 106 107 107 while (atomic_read(a) < online) ··· 158 158 159 159 /* Indicate that this CPU might not be coherent */ 160 160 cpumask_clear_cpu(cpu, &cpu_coherent_mask); 161 - smp_mb__after_clear_bit(); 161 + smp_mb__after_atomic(); 162 162 163 163 /* Create a non-coherent mapping of the core ready_count */ 164 164 core_ready_count = per_cpu(ready_count, core);
-213
arch/mips/kernel/r4k_fpu.S
··· 13 13 * Copyright (C) 1999, 2001 Silicon Graphics, Inc. 14 14 */ 15 15 #include <asm/asm.h> 16 - #include <asm/asmmacro.h> 17 16 #include <asm/errno.h> 18 17 #include <asm/fpregdef.h> 19 18 #include <asm/mipsregs.h> ··· 244 245 li v0, 0 # success 245 246 END(_restore_fp_context32) 246 247 #endif 247 - 248 - #ifdef CONFIG_CPU_HAS_MSA 249 - 250 - .macro save_sc_msareg wr, off, sc, tmp 251 - #ifdef CONFIG_64BIT 252 - copy_u_d \tmp, \wr, 1 253 - EX sd \tmp, (\off+(\wr*8))(\sc) 254 - #elif defined(CONFIG_CPU_LITTLE_ENDIAN) 255 - copy_u_w \tmp, \wr, 2 256 - EX sw \tmp, (\off+(\wr*8)+0)(\sc) 257 - copy_u_w \tmp, \wr, 3 258 - EX sw \tmp, (\off+(\wr*8)+4)(\sc) 259 - #else /* CONFIG_CPU_BIG_ENDIAN */ 260 - copy_u_w \tmp, \wr, 2 261 - EX sw \tmp, (\off+(\wr*8)+4)(\sc) 262 - copy_u_w \tmp, \wr, 3 263 - EX sw \tmp, (\off+(\wr*8)+0)(\sc) 264 - #endif 265 - .endm 266 - 267 - /* 268 - * int _save_msa_context(struct sigcontext *sc) 269 - * 270 - * Save the upper 64 bits of each vector register along with the MSA_CSR 271 - * register into sc. Returns zero on success, else non-zero. 
272 - */ 273 - LEAF(_save_msa_context) 274 - save_sc_msareg 0, SC_MSAREGS, a0, t0 275 - save_sc_msareg 1, SC_MSAREGS, a0, t0 276 - save_sc_msareg 2, SC_MSAREGS, a0, t0 277 - save_sc_msareg 3, SC_MSAREGS, a0, t0 278 - save_sc_msareg 4, SC_MSAREGS, a0, t0 279 - save_sc_msareg 5, SC_MSAREGS, a0, t0 280 - save_sc_msareg 6, SC_MSAREGS, a0, t0 281 - save_sc_msareg 7, SC_MSAREGS, a0, t0 282 - save_sc_msareg 8, SC_MSAREGS, a0, t0 283 - save_sc_msareg 9, SC_MSAREGS, a0, t0 284 - save_sc_msareg 10, SC_MSAREGS, a0, t0 285 - save_sc_msareg 11, SC_MSAREGS, a0, t0 286 - save_sc_msareg 12, SC_MSAREGS, a0, t0 287 - save_sc_msareg 13, SC_MSAREGS, a0, t0 288 - save_sc_msareg 14, SC_MSAREGS, a0, t0 289 - save_sc_msareg 15, SC_MSAREGS, a0, t0 290 - save_sc_msareg 16, SC_MSAREGS, a0, t0 291 - save_sc_msareg 17, SC_MSAREGS, a0, t0 292 - save_sc_msareg 18, SC_MSAREGS, a0, t0 293 - save_sc_msareg 19, SC_MSAREGS, a0, t0 294 - save_sc_msareg 20, SC_MSAREGS, a0, t0 295 - save_sc_msareg 21, SC_MSAREGS, a0, t0 296 - save_sc_msareg 22, SC_MSAREGS, a0, t0 297 - save_sc_msareg 23, SC_MSAREGS, a0, t0 298 - save_sc_msareg 24, SC_MSAREGS, a0, t0 299 - save_sc_msareg 25, SC_MSAREGS, a0, t0 300 - save_sc_msareg 26, SC_MSAREGS, a0, t0 301 - save_sc_msareg 27, SC_MSAREGS, a0, t0 302 - save_sc_msareg 28, SC_MSAREGS, a0, t0 303 - save_sc_msareg 29, SC_MSAREGS, a0, t0 304 - save_sc_msareg 30, SC_MSAREGS, a0, t0 305 - save_sc_msareg 31, SC_MSAREGS, a0, t0 306 - jr ra 307 - li v0, 0 308 - END(_save_msa_context) 309 - 310 - #ifdef CONFIG_MIPS32_COMPAT 311 - 312 - /* 313 - * int _save_msa_context32(struct sigcontext32 *sc) 314 - * 315 - * Save the upper 64 bits of each vector register along with the MSA_CSR 316 - * register into sc. Returns zero on success, else non-zero. 
317 - */ 318 - LEAF(_save_msa_context32) 319 - save_sc_msareg 0, SC32_MSAREGS, a0, t0 320 - save_sc_msareg 1, SC32_MSAREGS, a0, t0 321 - save_sc_msareg 2, SC32_MSAREGS, a0, t0 322 - save_sc_msareg 3, SC32_MSAREGS, a0, t0 323 - save_sc_msareg 4, SC32_MSAREGS, a0, t0 324 - save_sc_msareg 5, SC32_MSAREGS, a0, t0 325 - save_sc_msareg 6, SC32_MSAREGS, a0, t0 326 - save_sc_msareg 7, SC32_MSAREGS, a0, t0 327 - save_sc_msareg 8, SC32_MSAREGS, a0, t0 328 - save_sc_msareg 9, SC32_MSAREGS, a0, t0 329 - save_sc_msareg 10, SC32_MSAREGS, a0, t0 330 - save_sc_msareg 11, SC32_MSAREGS, a0, t0 331 - save_sc_msareg 12, SC32_MSAREGS, a0, t0 332 - save_sc_msareg 13, SC32_MSAREGS, a0, t0 333 - save_sc_msareg 14, SC32_MSAREGS, a0, t0 334 - save_sc_msareg 15, SC32_MSAREGS, a0, t0 335 - save_sc_msareg 16, SC32_MSAREGS, a0, t0 336 - save_sc_msareg 17, SC32_MSAREGS, a0, t0 337 - save_sc_msareg 18, SC32_MSAREGS, a0, t0 338 - save_sc_msareg 19, SC32_MSAREGS, a0, t0 339 - save_sc_msareg 20, SC32_MSAREGS, a0, t0 340 - save_sc_msareg 21, SC32_MSAREGS, a0, t0 341 - save_sc_msareg 22, SC32_MSAREGS, a0, t0 342 - save_sc_msareg 23, SC32_MSAREGS, a0, t0 343 - save_sc_msareg 24, SC32_MSAREGS, a0, t0 344 - save_sc_msareg 25, SC32_MSAREGS, a0, t0 345 - save_sc_msareg 26, SC32_MSAREGS, a0, t0 346 - save_sc_msareg 27, SC32_MSAREGS, a0, t0 347 - save_sc_msareg 28, SC32_MSAREGS, a0, t0 348 - save_sc_msareg 29, SC32_MSAREGS, a0, t0 349 - save_sc_msareg 30, SC32_MSAREGS, a0, t0 350 - save_sc_msareg 31, SC32_MSAREGS, a0, t0 351 - jr ra 352 - li v0, 0 353 - END(_save_msa_context32) 354 - 355 - #endif /* CONFIG_MIPS32_COMPAT */ 356 - 357 - .macro restore_sc_msareg wr, off, sc, tmp 358 - #ifdef CONFIG_64BIT 359 - EX ld \tmp, (\off+(\wr*8))(\sc) 360 - insert_d \wr, 1, \tmp 361 - #elif defined(CONFIG_CPU_LITTLE_ENDIAN) 362 - EX lw \tmp, (\off+(\wr*8)+0)(\sc) 363 - insert_w \wr, 2, \tmp 364 - EX lw \tmp, (\off+(\wr*8)+4)(\sc) 365 - insert_w \wr, 3, \tmp 366 - #else /* CONFIG_CPU_BIG_ENDIAN */ 367 - EX lw \tmp, 
(\off+(\wr*8)+4)(\sc) 368 - insert_w \wr, 2, \tmp 369 - EX lw \tmp, (\off+(\wr*8)+0)(\sc) 370 - insert_w \wr, 3, \tmp 371 - #endif 372 - .endm 373 - 374 - /* 375 - * int _restore_msa_context(struct sigcontext *sc) 376 - */ 377 - LEAF(_restore_msa_context) 378 - restore_sc_msareg 0, SC_MSAREGS, a0, t0 379 - restore_sc_msareg 1, SC_MSAREGS, a0, t0 380 - restore_sc_msareg 2, SC_MSAREGS, a0, t0 381 - restore_sc_msareg 3, SC_MSAREGS, a0, t0 382 - restore_sc_msareg 4, SC_MSAREGS, a0, t0 383 - restore_sc_msareg 5, SC_MSAREGS, a0, t0 384 - restore_sc_msareg 6, SC_MSAREGS, a0, t0 385 - restore_sc_msareg 7, SC_MSAREGS, a0, t0 386 - restore_sc_msareg 8, SC_MSAREGS, a0, t0 387 - restore_sc_msareg 9, SC_MSAREGS, a0, t0 388 - restore_sc_msareg 10, SC_MSAREGS, a0, t0 389 - restore_sc_msareg 11, SC_MSAREGS, a0, t0 390 - restore_sc_msareg 12, SC_MSAREGS, a0, t0 391 - restore_sc_msareg 13, SC_MSAREGS, a0, t0 392 - restore_sc_msareg 14, SC_MSAREGS, a0, t0 393 - restore_sc_msareg 15, SC_MSAREGS, a0, t0 394 - restore_sc_msareg 16, SC_MSAREGS, a0, t0 395 - restore_sc_msareg 17, SC_MSAREGS, a0, t0 396 - restore_sc_msareg 18, SC_MSAREGS, a0, t0 397 - restore_sc_msareg 19, SC_MSAREGS, a0, t0 398 - restore_sc_msareg 20, SC_MSAREGS, a0, t0 399 - restore_sc_msareg 21, SC_MSAREGS, a0, t0 400 - restore_sc_msareg 22, SC_MSAREGS, a0, t0 401 - restore_sc_msareg 23, SC_MSAREGS, a0, t0 402 - restore_sc_msareg 24, SC_MSAREGS, a0, t0 403 - restore_sc_msareg 25, SC_MSAREGS, a0, t0 404 - restore_sc_msareg 26, SC_MSAREGS, a0, t0 405 - restore_sc_msareg 27, SC_MSAREGS, a0, t0 406 - restore_sc_msareg 28, SC_MSAREGS, a0, t0 407 - restore_sc_msareg 29, SC_MSAREGS, a0, t0 408 - restore_sc_msareg 30, SC_MSAREGS, a0, t0 409 - restore_sc_msareg 31, SC_MSAREGS, a0, t0 410 - jr ra 411 - li v0, 0 412 - END(_restore_msa_context) 413 - 414 - #ifdef CONFIG_MIPS32_COMPAT 415 - 416 - /* 417 - * int _restore_msa_context32(struct sigcontext32 *sc) 418 - */ 419 - LEAF(_restore_msa_context32) 420 - restore_sc_msareg 0, 
SC32_MSAREGS, a0, t0 421 - restore_sc_msareg 1, SC32_MSAREGS, a0, t0 422 - restore_sc_msareg 2, SC32_MSAREGS, a0, t0 423 - restore_sc_msareg 3, SC32_MSAREGS, a0, t0 424 - restore_sc_msareg 4, SC32_MSAREGS, a0, t0 425 - restore_sc_msareg 5, SC32_MSAREGS, a0, t0 426 - restore_sc_msareg 6, SC32_MSAREGS, a0, t0 427 - restore_sc_msareg 7, SC32_MSAREGS, a0, t0 428 - restore_sc_msareg 8, SC32_MSAREGS, a0, t0 429 - restore_sc_msareg 9, SC32_MSAREGS, a0, t0 430 - restore_sc_msareg 10, SC32_MSAREGS, a0, t0 431 - restore_sc_msareg 11, SC32_MSAREGS, a0, t0 432 - restore_sc_msareg 12, SC32_MSAREGS, a0, t0 433 - restore_sc_msareg 13, SC32_MSAREGS, a0, t0 434 - restore_sc_msareg 14, SC32_MSAREGS, a0, t0 435 - restore_sc_msareg 15, SC32_MSAREGS, a0, t0 436 - restore_sc_msareg 16, SC32_MSAREGS, a0, t0 437 - restore_sc_msareg 17, SC32_MSAREGS, a0, t0 438 - restore_sc_msareg 18, SC32_MSAREGS, a0, t0 439 - restore_sc_msareg 19, SC32_MSAREGS, a0, t0 440 - restore_sc_msareg 20, SC32_MSAREGS, a0, t0 441 - restore_sc_msareg 21, SC32_MSAREGS, a0, t0 442 - restore_sc_msareg 22, SC32_MSAREGS, a0, t0 443 - restore_sc_msareg 23, SC32_MSAREGS, a0, t0 444 - restore_sc_msareg 24, SC32_MSAREGS, a0, t0 445 - restore_sc_msareg 25, SC32_MSAREGS, a0, t0 446 - restore_sc_msareg 26, SC32_MSAREGS, a0, t0 447 - restore_sc_msareg 27, SC32_MSAREGS, a0, t0 448 - restore_sc_msareg 28, SC32_MSAREGS, a0, t0 449 - restore_sc_msareg 29, SC32_MSAREGS, a0, t0 450 - restore_sc_msareg 30, SC32_MSAREGS, a0, t0 451 - restore_sc_msareg 31, SC32_MSAREGS, a0, t0 452 - jr ra 453 - li v0, 0 454 - END(_restore_msa_context32) 455 - 456 - #endif /* CONFIG_MIPS32_COMPAT */ 457 - 458 - #endif /* CONFIG_CPU_HAS_MSA */ 459 248 460 249 .set reorder 461 250
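The macros removed above split each 64-bit MSA doubleword into two 32-bit `lw`/`sw` accesses on 32-bit kernels, with the word offsets chosen by endianness. A minimal userspace sketch of that layout (helper names are mine, not from the patch):

```c
#include <stdint.h>

/* Split a 64-bit doubleword into the two 32-bit words that the removed
 * save_sc_msareg macro stores with paired sw instructions. */
static uint32_t dword_lo(uint64_t d) { return (uint32_t)d; }
static uint32_t dword_hi(uint64_t d) { return (uint32_t)(d >> 32); }

/* The memory order of the two words depends on endianness: on a
 * little-endian CPU the low word lives at byte offset 0, on big-endian
 * at offset 4 - hence the separate CONFIG_CPU_LITTLE_ENDIAN /
 * CONFIG_CPU_BIG_ENDIAN branches in the macros above. */
static int host_low_word_first(void)
{
	union { uint64_t d; uint32_t w[2]; } u = { .d = 1 };
	return u.w[0] == 1;
}
```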
+8 -71
arch/mips/kernel/signal.c
··· 31 31 #include <linux/bitops.h> 32 32 #include <asm/cacheflush.h> 33 33 #include <asm/fpu.h> 34 - #include <asm/msa.h> 35 34 #include <asm/sim.h> 36 35 #include <asm/ucontext.h> 37 36 #include <asm/cpu-features.h> ··· 46 47 47 48 extern asmlinkage int _save_fp_context(struct sigcontext __user *sc); 48 49 extern asmlinkage int _restore_fp_context(struct sigcontext __user *sc); 49 - 50 - extern asmlinkage int _save_msa_context(struct sigcontext __user *sc); 51 - extern asmlinkage int _restore_msa_context(struct sigcontext __user *sc); 52 50 53 51 struct sigframe { 54 52 u32 sf_ass[4]; /* argument save space for o32 */ ··· 96 100 } 97 101 98 102 /* 99 - * These functions will save only the upper 64 bits of the vector registers, 100 - * since the lower 64 bits have already been saved as the scalar FP context. 101 - */ 102 - static int copy_msa_to_sigcontext(struct sigcontext __user *sc) 103 - { 104 - int i; 105 - int err = 0; 106 - 107 - for (i = 0; i < NUM_FPU_REGS; i++) { 108 - err |= 109 - __put_user(get_fpr64(&current->thread.fpu.fpr[i], 1), 110 - &sc->sc_msaregs[i]); 111 - } 112 - err |= __put_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); 113 - 114 - return err; 115 - } 116 - 117 - static int copy_msa_from_sigcontext(struct sigcontext __user *sc) 118 - { 119 - int i; 120 - int err = 0; 121 - u64 val; 122 - 123 - for (i = 0; i < NUM_FPU_REGS; i++) { 124 - err |= __get_user(val, &sc->sc_msaregs[i]); 125 - set_fpr64(&current->thread.fpu.fpr[i], 1, val); 126 - } 127 - err |= __get_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); 128 - 129 - return err; 130 - } 131 - 132 - /* 133 103 * Helper routines 134 104 */ 135 - static int protected_save_fp_context(struct sigcontext __user *sc, 136 - unsigned used_math) 105 + static int protected_save_fp_context(struct sigcontext __user *sc) 137 106 { 138 107 int err; 139 - bool save_msa = cpu_has_msa && (used_math & USEDMATH_MSA); 140 108 #ifndef CONFIG_EVA 141 109 while (1) { 142 110 lock_fpu_owner(); 143 111 if 
(is_fpu_owner()) { 144 112 err = save_fp_context(sc); 145 - if (save_msa && !err) 146 - err = _save_msa_context(sc); 147 113 unlock_fpu_owner(); 148 114 } else { 149 115 unlock_fpu_owner(); 150 116 err = copy_fp_to_sigcontext(sc); 151 - if (save_msa && !err) 152 - err = copy_msa_to_sigcontext(sc); 153 117 } 154 118 if (likely(!err)) 155 119 break; ··· 125 169 * EVA does not have FPU EVA instructions so saving fpu context directly 126 170 * does not work. 127 171 */ 128 - disable_msa(); 129 172 lose_fpu(1); 130 173 err = save_fp_context(sc); /* this might fail */ 131 - if (save_msa && !err) 132 - err = copy_msa_to_sigcontext(sc); 133 174 #endif 134 175 return err; 135 176 } 136 177 137 - static int protected_restore_fp_context(struct sigcontext __user *sc, 138 - unsigned used_math) 178 + static int protected_restore_fp_context(struct sigcontext __user *sc) 139 179 { 140 180 int err, tmp __maybe_unused; 141 - bool restore_msa = cpu_has_msa && (used_math & USEDMATH_MSA); 142 181 #ifndef CONFIG_EVA 143 182 while (1) { 144 183 lock_fpu_owner(); 145 184 if (is_fpu_owner()) { 146 185 err = restore_fp_context(sc); 147 - if (restore_msa && !err) { 148 - enable_msa(); 149 - err = _restore_msa_context(sc); 150 - } else { 151 - /* signal handler may have used MSA */ 152 - disable_msa(); 153 - } 154 186 unlock_fpu_owner(); 155 187 } else { 156 188 unlock_fpu_owner(); 157 189 err = copy_fp_from_sigcontext(sc); 158 - if (!err && (used_math & USEDMATH_MSA)) 159 - err = copy_msa_from_sigcontext(sc); 160 190 } 161 191 if (likely(!err)) 162 192 break; ··· 158 216 * EVA does not have FPU EVA instructions so restoring fpu context 159 217 * directly does not work. 
160 218 */ 161 - enable_msa(); 162 219 lose_fpu(0); 163 220 err = restore_fp_context(sc); /* this might fail */ 164 - if (restore_msa && !err) 165 - err = copy_msa_from_sigcontext(sc); 166 221 #endif 167 222 return err; 168 223 } ··· 191 252 err |= __put_user(rddsp(DSP_MASK), &sc->sc_dsp); 192 253 } 193 254 194 - used_math = used_math() ? USEDMATH_FP : 0; 195 - used_math |= thread_msa_context_live() ? USEDMATH_MSA : 0; 255 + used_math = !!used_math(); 196 256 err |= __put_user(used_math, &sc->sc_used_math); 197 257 198 258 if (used_math) { ··· 199 261 * Save FPU state to signal context. Signal handler 200 262 * will "inherit" current FPU state. 201 263 */ 202 - err |= protected_save_fp_context(sc, used_math); 264 + err |= protected_save_fp_context(sc); 203 265 } 204 266 return err; 205 267 } ··· 224 286 } 225 287 226 288 static int 227 - check_and_restore_fp_context(struct sigcontext __user *sc, unsigned used_math) 289 + check_and_restore_fp_context(struct sigcontext __user *sc) 228 290 { 229 291 int err, sig; 230 292 231 293 err = sig = fpcsr_pending(&sc->sc_fpc_csr); 232 294 if (err > 0) 233 295 err = 0; 234 - err |= protected_restore_fp_context(sc, used_math); 296 + err |= protected_restore_fp_context(sc); 235 297 return err ?: sig; 236 298 } 237 299 ··· 271 333 if (used_math) { 272 334 /* restore fpu context if we have used it before */ 273 335 if (!err) 274 - err = check_and_restore_fp_context(sc, used_math); 336 + err = check_and_restore_fp_context(sc); 275 337 } else { 276 - /* signal handler may have used FPU or MSA. Disable them. */ 277 - disable_msa(); 338 + /* signal handler may have used FPU. Give it up. */ 278 339 lose_fpu(0); 279 340 } 280 341
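Two idioms in the signal.c hunks above are easy to misread: `used_math = !!used_math()` collapses any non-zero value to exactly 1 before it is stored in the sigcontext, and `return err ?: sig` uses the GNU binary conditional, yielding `err` when it is non-zero and `sig` otherwise. A small sketch (function names are mine, not the kernel's):

```c
/* Normalize a truth value to 0 or 1, as used_math = !!used_math() does. */
static unsigned int normalize_bool(unsigned int v)
{
	return !!v;
}

/* GNU extension: `a ?: b` evaluates a once and returns it if non-zero,
 * else b - the shape of `return err ?: sig;` in
 * check_and_restore_fp_context(). Requires GCC or Clang. */
static int first_nonzero(int err, int sig)
{
	return err ?: sig;
}
```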
+8 -66
arch/mips/kernel/signal32.c
··· 30 30 #include <asm/sim.h> 31 31 #include <asm/ucontext.h> 32 32 #include <asm/fpu.h> 33 - #include <asm/msa.h> 34 33 #include <asm/war.h> 35 34 #include <asm/vdso.h> 36 35 #include <asm/dsp.h> ··· 41 42 42 43 extern asmlinkage int _save_fp_context32(struct sigcontext32 __user *sc); 43 44 extern asmlinkage int _restore_fp_context32(struct sigcontext32 __user *sc); 44 - 45 - extern asmlinkage int _save_msa_context32(struct sigcontext32 __user *sc); 46 - extern asmlinkage int _restore_msa_context32(struct sigcontext32 __user *sc); 47 45 48 46 /* 49 47 * Including <asm/unistd.h> would give use the 64-bit syscall numbers ... ··· 111 115 } 112 116 113 117 /* 114 - * These functions will save only the upper 64 bits of the vector registers, 115 - * since the lower 64 bits have already been saved as the scalar FP context. 116 - */ 117 - static int copy_msa_to_sigcontext32(struct sigcontext32 __user *sc) 118 - { 119 - int i; 120 - int err = 0; 121 - 122 - for (i = 0; i < NUM_FPU_REGS; i++) { 123 - err |= 124 - __put_user(get_fpr64(&current->thread.fpu.fpr[i], 1), 125 - &sc->sc_msaregs[i]); 126 - } 127 - err |= __put_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); 128 - 129 - return err; 130 - } 131 - 132 - static int copy_msa_from_sigcontext32(struct sigcontext32 __user *sc) 133 - { 134 - int i; 135 - int err = 0; 136 - u64 val; 137 - 138 - for (i = 0; i < NUM_FPU_REGS; i++) { 139 - err |= __get_user(val, &sc->sc_msaregs[i]); 140 - set_fpr64(&current->thread.fpu.fpr[i], 1, val); 141 - } 142 - err |= __get_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); 143 - 144 - return err; 145 - } 146 - 147 - /* 148 118 * sigcontext handlers 149 119 */ 150 - static int protected_save_fp_context32(struct sigcontext32 __user *sc, 151 - unsigned used_math) 120 + static int protected_save_fp_context32(struct sigcontext32 __user *sc) 152 121 { 153 122 int err; 154 - bool save_msa = cpu_has_msa && (used_math & USEDMATH_MSA); 155 123 while (1) { 156 124 lock_fpu_owner(); 157 125 if 
(is_fpu_owner()) { 158 126 err = save_fp_context32(sc); 159 - if (save_msa && !err) 160 - err = _save_msa_context32(sc); 161 127 unlock_fpu_owner(); 162 128 } else { 163 129 unlock_fpu_owner(); 164 130 err = copy_fp_to_sigcontext32(sc); 165 - if (save_msa && !err) 166 - err = copy_msa_to_sigcontext32(sc); 167 131 } 168 132 if (likely(!err)) 169 133 break; ··· 137 181 return err; 138 182 } 139 183 140 - static int protected_restore_fp_context32(struct sigcontext32 __user *sc, 141 - unsigned used_math) 184 + static int protected_restore_fp_context32(struct sigcontext32 __user *sc) 142 185 { 143 186 int err, tmp __maybe_unused; 144 - bool restore_msa = cpu_has_msa && (used_math & USEDMATH_MSA); 145 187 while (1) { 146 188 lock_fpu_owner(); 147 189 if (is_fpu_owner()) { 148 190 err = restore_fp_context32(sc); 149 - if (restore_msa && !err) { 150 - enable_msa(); 151 - err = _restore_msa_context32(sc); 152 - } else { 153 - /* signal handler may have used MSA */ 154 - disable_msa(); 155 - } 156 191 unlock_fpu_owner(); 157 192 } else { 158 193 unlock_fpu_owner(); 159 194 err = copy_fp_from_sigcontext32(sc); 160 - if (restore_msa && !err) 161 - err = copy_msa_from_sigcontext32(sc); 162 195 } 163 196 if (likely(!err)) 164 197 break; ··· 186 241 err |= __put_user(mflo3(), &sc->sc_lo3); 187 242 } 188 243 189 - used_math = used_math() ? USEDMATH_FP : 0; 190 - used_math |= thread_msa_context_live() ? USEDMATH_MSA : 0; 244 + used_math = !!used_math(); 191 245 err |= __put_user(used_math, &sc->sc_used_math); 192 246 193 247 if (used_math) { ··· 194 250 * Save FPU state to signal context. Signal handler 195 251 * will "inherit" current FPU state. 
196 252 */ 197 - err |= protected_save_fp_context32(sc, used_math); 253 + err |= protected_save_fp_context32(sc); 198 254 } 199 255 return err; 200 256 } 201 257 202 258 static int 203 - check_and_restore_fp_context32(struct sigcontext32 __user *sc, 204 - unsigned used_math) 259 + check_and_restore_fp_context32(struct sigcontext32 __user *sc) 205 260 { 206 261 int err, sig; 207 262 208 263 err = sig = fpcsr_pending(&sc->sc_fpc_csr); 209 264 if (err > 0) 210 265 err = 0; 211 - err |= protected_restore_fp_context32(sc, used_math); 266 + err |= protected_restore_fp_context32(sc); 212 267 return err ?: sig; 213 268 } 214 269 ··· 244 301 if (used_math) { 245 302 /* restore fpu context if we have used it before */ 246 303 if (!err) 247 - err = check_and_restore_fp_context32(sc, used_math); 304 + err = check_and_restore_fp_context32(sc); 248 305 } else { 249 - /* signal handler may have used FPU or MSA. Disable them. */ 250 - disable_msa(); 306 + /* signal handler may have used FPU. Give it up. */ 251 307 lose_fpu(0); 252 308 } 253 309
+1 -1
arch/mips/kernel/smp-cps.c
··· 301 301 302 302 core_cfg = &mips_cps_core_bootcfg[current_cpu_data.core]; 303 303 atomic_sub(1 << cpu_vpe_id(&current_cpu_data), &core_cfg->vpe_mask); 304 - smp_mb__after_atomic_dec(); 304 + smp_mb__after_atomic(); 305 305 set_cpu_online(cpu, false); 306 306 cpu_clear(cpu, cpu_callin_map); 307 307
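`smp_mb__after_atomic_dec()` was one of several per-operation barrier helpers folded into the single `smp_mb__after_atomic()`; the pattern here is "atomic update, full barrier, then publish a flag". A rough userspace analog with C11 atomics (the kernel primitives themselves are not available in userspace, so this only mirrors the ordering intent):

```c
#include <stdatomic.h>

static atomic_int vpe_mask = 3;     /* two VPEs online, bits 0 and 1 */
static atomic_int cpu_online = 1;

/* Clear this CPU's bit, then fence so the mask update is globally
 * visible before the online flag flips - mirroring
 * atomic_sub(); smp_mb__after_atomic(); set_cpu_online(cpu, false); */
static void mark_cpu_offline(int vpe_bit)
{
	atomic_fetch_sub_explicit(&vpe_mask, 1 << vpe_bit,
				  memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb__after_atomic() */
	atomic_store_explicit(&cpu_online, 0, memory_order_relaxed);
}
```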
+1
arch/mips/kvm/kvm_mips.c
··· 384 384 385 385 kfree(vcpu->arch.guest_ebase); 386 386 kfree(vcpu->arch.kseg0_commpage); 387 + kfree(vcpu); 387 388 } 388 389 389 390 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
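The one-line kvm_mips.c fix plugs a leak: the embedded buffers were freed but the vcpu structure itself never was. A userspace sketch of the ownership pattern (struct and field names shortened and illustrative):

```c
#include <stdlib.h>

struct vcpu {
	void *guest_ebase;
	void *kseg0_commpage;
};

static struct vcpu *vcpu_alloc(void)
{
	struct vcpu *v = calloc(1, sizeof(*v));

	if (!v)
		return NULL;
	v->guest_ebase = malloc(64);
	v->kseg0_commpage = malloc(64);
	return v;
}

/* Free the embedded allocations first, then the container itself;
 * dropping the final free(v) reproduces the leak this hunk fixes. */
static void vcpu_free(struct vcpu *v)
{
	free(v->guest_ebase);
	free(v->kseg0_commpage);
	free(v); /* the previously missing kfree(vcpu) */
}
```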
+14 -9
arch/mips/math-emu/ieee754.c
··· 34 34 * Special constants 35 35 */ 36 36 37 - #define DPCNST(s, b, m) \ 37 + /* 38 + * Older GCC requires the inner braces for initialization of union ieee754dp's 39 + * anonymous struct member. Without an error will result. 40 + */ 41 + #define xPCNST(s, b, m, ebias) \ 38 42 { \ 39 - .sign = (s), \ 40 - .bexp = (b) + DP_EBIAS, \ 41 - .mant = (m) \ 43 + { \ 44 + .sign = (s), \ 45 + .bexp = (b) + ebias, \ 46 + .mant = (m) \ 47 + } \ 42 48 } 49 + 50 + #define DPCNST(s, b, m) \ 51 + xPCNST(s, b, m, DP_EBIAS) 43 52 44 53 const union ieee754dp __ieee754dp_spcvals[] = { 45 54 DPCNST(0, DP_EMIN - 1, 0x0000000000000ULL), /* + zero */ ··· 71 62 }; 72 63 73 64 #define SPCNST(s, b, m) \ 74 - { \ 75 - .sign = (s), \ 76 - .bexp = (b) + SP_EBIAS, \ 77 - .mant = (m) \ 78 - } 65 + xPCNST(s, b, m, SP_EBIAS) 79 66 80 67 const union ieee754sp __ieee754sp_spcvals[] = { 81 68 SPCNST(0, SP_EMIN - 1, 0x000000), /* + zero */
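The new comment in ieee754.c explains the doubled braces: `union ieee754dp` begins with an anonymous struct, and older GCC rejects a flat designated initializer for it. A standalone illustration of the pattern (the field widths here are illustrative, not the kernel's exact layout):

```c
#include <stdint.h>

/* Analog of union ieee754dp: a union whose first member is an
 * anonymous struct. */
union dp {
	struct {
		unsigned int sign;
		int bexp;
		uint64_t mant;
	};
	uint64_t bits[2];
};

#define DP_EBIAS 1023

/* The inner braces explicitly initialize the anonymous struct member;
 * older GCC errors out without them, which is what xPCNST works around. */
#define DPCNST(s, b, m) \
	{ { .sign = (s), .bexp = (b) + DP_EBIAS, .mant = (m) } }

static const union dp dp_zero = DPCNST(0, -DP_EBIAS, 0);
```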
+1
arch/mips/mm/uasm-micromips.c
··· 102 102 { insn_sd, 0, 0 }, 103 103 { insn_sll, M(mm_pool32a_op, 0, 0, 0, 0, mm_sll32_op), RT | RS | RD }, 104 104 { insn_sllv, M(mm_pool32a_op, 0, 0, 0, 0, mm_sllv32_op), RT | RS | RD }, 105 + { insn_slt, M(mm_pool32a_op, 0, 0, 0, 0, mm_slt_op), RT | RS | RD }, 105 106 { insn_sltiu, M(mm_sltiu32_op, 0, 0, 0, 0, 0), RT | RS | SIMM }, 106 107 { insn_sltu, M(mm_pool32a_op, 0, 0, 0, 0, mm_sltu_op), RT | RS | RD }, 107 108 { insn_sra, M(mm_pool32a_op, 0, 0, 0, 0, mm_sra_op), RT | RS | RD },
+2 -1
arch/mips/mm/uasm-mips.c
··· 89 89 { insn_lb, M(lb_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 90 90 { insn_ld, M(ld_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 91 91 { insn_ldx, M(spec3_op, 0, 0, 0, ldx_op, lx_op), RS | RT | RD }, 92 - { insn_lh, M(lw_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 92 + { insn_lh, M(lh_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 93 93 { insn_lld, M(lld_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 94 94 { insn_ll, M(ll_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 95 95 { insn_lui, M(lui_op, 0, 0, 0, 0, 0), RT | SIMM }, ··· 110 110 { insn_sd, M(sd_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 111 111 { insn_sll, M(spec_op, 0, 0, 0, 0, sll_op), RT | RD | RE }, 112 112 { insn_sllv, M(spec_op, 0, 0, 0, 0, sllv_op), RS | RT | RD }, 113 + { insn_slt, M(spec_op, 0, 0, 0, 0, slt_op), RS | RT | RD }, 113 114 { insn_sltiu, M(sltiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 114 115 { insn_sltu, M(spec_op, 0, 0, 0, 0, sltu_op), RS | RT | RD }, 115 116 { insn_sra, M(spec_op, 0, 0, 0, 0, sra_op), RT | RD | RE },
+9 -1
arch/mips/mm/uasm.c
··· 53 53 insn_ld, insn_ldx, insn_lh, insn_ll, insn_lld, insn_lui, insn_lw, 54 54 insn_lwx, insn_mfc0, insn_mfhi, insn_mflo, insn_mtc0, insn_mul, 55 55 insn_or, insn_ori, insn_pref, insn_rfe, insn_rotr, insn_sc, insn_scd, 56 - insn_sd, insn_sll, insn_sllv, insn_sltiu, insn_sltu, insn_sra, 56 + insn_sd, insn_sll, insn_sllv, insn_slt, insn_sltiu, insn_sltu, insn_sra, 57 57 insn_srl, insn_srlv, insn_subu, insn_sw, insn_sync, insn_syscall, 58 58 insn_tlbp, insn_tlbr, insn_tlbwi, insn_tlbwr, insn_wait, insn_wsbh, 59 59 insn_xor, insn_xori, insn_yield, ··· 136 136 Ip_u1u2u3(op) \ 137 137 { \ 138 138 build_insn(buf, insn##op, a, b, c); \ 139 + } \ 140 + UASM_EXPORT_SYMBOL(uasm_i##op); 141 + 142 + #define I_s3s1s2(op) \ 143 + Ip_s3s1s2(op) \ 144 + { \ 145 + build_insn(buf, insn##op, b, c, a); \ 139 146 } \ 140 147 UASM_EXPORT_SYMBOL(uasm_i##op); 141 148 ··· 296 289 I_u2s3u1(_sd) 297 290 I_u2u1u3(_sll) 298 291 I_u3u2u1(_sllv) 292 + I_s3s1s2(_slt) 299 293 I_u2u1s3(_sltiu) 300 294 I_u3u1u2(_sltu) 301 295 I_u2u1u3(_sra)
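`insn_slt` is added alongside the existing `insn_sltu` because the BPF JIT below needs a *signed* comparison to detect negative packet offsets: MIPS `slt` compares its operands as signed integers, `sltu` as unsigned. Modeled in C (my own helpers, mirroring the architectural semantics for 32-bit operands):

```c
#include <stdint.h>

/* slt rd, rs, rt: rd = (rs < rt), operands treated as signed. */
static uint32_t mips_slt(int32_t rs, int32_t rt)
{
	return rs < rt;
}

/* sltu rd, rs, rt: same, but operands treated as unsigned - so a
 * negative offset pattern like 0xffffffff compares as a huge value. */
static uint32_t mips_sltu(uint32_t rs, uint32_t rt)
{
	return rs < rt;
}
```

This is why `emit_slt(r_s0, r_off, r_zero, ctx)` in the JIT catches `r_off < 0`, where `sltu` could not.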
+149 -117
arch/mips/net/bpf_jit.c
··· 119 119 /* Arguments used by JIT */ 120 120 #define ARGS_USED_BY_JIT 2 /* only applicable to 64-bit */ 121 121 122 - #define FLAG_NEED_X_RESET (1 << 0) 123 - 124 122 #define SBIT(x) (1 << (x)) /* Signed version of BIT() */ 125 123 126 124 /** ··· 151 153 return 0; 152 154 } 153 155 156 + static inline void emit_jit_reg_move(ptr dst, ptr src, struct jit_ctx *ctx); 157 + 154 158 /* Simply emit the instruction if the JIT memory space has been allocated */ 155 159 #define emit_instr(ctx, func, ...) \ 156 160 do { \ ··· 166 166 /* Determine if immediate is within the 16-bit signed range */ 167 167 static inline bool is_range16(s32 imm) 168 168 { 169 - if (imm >= SBIT(15) || imm < -SBIT(15)) 170 - return true; 171 - return false; 169 + return !(imm >= SBIT(15) || imm < -SBIT(15)); 172 170 } 173 171 174 172 static inline void emit_addu(unsigned int dst, unsigned int src1, ··· 185 187 { 186 188 if (ctx->target != NULL) { 187 189 /* addiu can only handle s16 */ 188 - if (is_range16(imm)) { 190 + if (!is_range16(imm)) { 189 191 u32 *p = &ctx->target[ctx->idx]; 190 192 uasm_i_lui(&p, r_tmp_imm, (s32)imm >> 16); 191 193 p = &ctx->target[ctx->idx + 1]; ··· 197 199 } 198 200 ctx->idx++; 199 201 200 - if (is_range16(imm)) 202 + if (!is_range16(imm)) 201 203 ctx->idx++; 202 204 } 203 205 ··· 238 240 static inline void emit_addiu(unsigned int dst, unsigned int src, 239 241 u32 imm, struct jit_ctx *ctx) 240 242 { 241 - if (is_range16(imm)) { 243 + if (!is_range16(imm)) { 242 244 emit_load_imm(r_tmp, imm, ctx); 243 245 emit_addu(dst, r_tmp, src, ctx); 244 246 } else { ··· 311 313 unsigned int sa, struct jit_ctx *ctx) 312 314 { 313 315 /* sa is 5-bits long */ 314 - BUG_ON(sa >= BIT(5)); 315 - emit_instr(ctx, sll, dst, src, sa); 316 + if (sa >= BIT(5)) 317 + /* Shifting >= 32 results in zero */ 318 + emit_jit_reg_move(dst, r_zero, ctx); 319 + else 320 + emit_instr(ctx, sll, dst, src, sa); 316 321 } 317 322 318 323 static inline void emit_srlv(unsigned int dst, unsigned int src, ··· 
328 327 unsigned int sa, struct jit_ctx *ctx) 329 328 { 330 329 /* sa is 5-bits long */ 331 - BUG_ON(sa >= BIT(5)); 332 - emit_instr(ctx, srl, dst, src, sa); 330 + if (sa >= BIT(5)) 331 + /* Shifting >= 32 results in zero */ 332 + emit_jit_reg_move(dst, r_zero, ctx); 333 + else 334 + emit_instr(ctx, srl, dst, src, sa); 335 + } 336 + 337 + static inline void emit_slt(unsigned int dst, unsigned int src1, 338 + unsigned int src2, struct jit_ctx *ctx) 339 + { 340 + emit_instr(ctx, slt, dst, src1, src2); 333 341 } 334 342 335 343 static inline void emit_sltu(unsigned int dst, unsigned int src1, ··· 351 341 unsigned int imm, struct jit_ctx *ctx) 352 342 { 353 343 /* 16 bit immediate */ 354 - if (is_range16((s32)imm)) { 344 + if (!is_range16((s32)imm)) { 355 345 emit_load_imm(r_tmp, imm, ctx); 356 346 emit_sltu(dst, src, r_tmp, ctx); 357 347 } else { ··· 418 408 u32 *p = &ctx->target[ctx->idx]; 419 409 uasm_i_divu(&p, dst, src); 420 410 p = &ctx->target[ctx->idx + 1]; 421 - uasm_i_mfhi(&p, dst); 411 + uasm_i_mflo(&p, dst); 422 412 } 423 413 ctx->idx += 2; /* 2 insts */ 424 414 } ··· 451 441 struct jit_ctx *ctx) 452 442 { 453 443 emit_instr(ctx, wsbh, dst, src); 444 + } 445 + 446 + /* load pointer to register */ 447 + static inline void emit_load_ptr(unsigned int dst, unsigned int src, 448 + int imm, struct jit_ctx *ctx) 449 + { 450 + /* src contains the base addr of the 32/64-pointer */ 451 + if (config_enabled(CONFIG_64BIT)) 452 + emit_instr(ctx, ld, dst, imm, src); 453 + else 454 + emit_instr(ctx, lw, dst, imm, src); 454 455 } 455 456 456 457 /* load a function pointer to register */ ··· 566 545 return num; 567 546 } 568 547 569 - static inline void update_on_xread(struct jit_ctx *ctx) 570 - { 571 - if (!(ctx->flags & SEEN_X)) 572 - ctx->flags |= FLAG_NEED_X_RESET; 573 - 574 - ctx->flags |= SEEN_X; 575 - } 576 - 577 548 static bool is_load_to_a(u16 inst) 578 549 { 579 550 switch (inst) { 580 - case BPF_S_LD_W_LEN: 581 - case BPF_S_LD_W_ABS: 582 - case BPF_S_LD_H_ABS: 
583 - case BPF_S_LD_B_ABS: 584 - case BPF_S_ANC_CPU: 585 - case BPF_S_ANC_IFINDEX: 586 - case BPF_S_ANC_MARK: 587 - case BPF_S_ANC_PROTOCOL: 588 - case BPF_S_ANC_RXHASH: 589 - case BPF_S_ANC_VLAN_TAG: 590 - case BPF_S_ANC_VLAN_TAG_PRESENT: 591 - case BPF_S_ANC_QUEUE: 551 + case BPF_LD | BPF_W | BPF_LEN: 552 + case BPF_LD | BPF_W | BPF_ABS: 553 + case BPF_LD | BPF_H | BPF_ABS: 554 + case BPF_LD | BPF_B | BPF_ABS: 592 555 return true; 593 556 default: 594 557 return false; ··· 623 618 if (ctx->flags & SEEN_MEM) { 624 619 if (real_off % (RSIZE * 2)) 625 620 real_off += RSIZE; 626 - emit_addiu(r_M, r_sp, real_off, ctx); 621 + if (config_enabled(CONFIG_64BIT)) 622 + emit_daddiu(r_M, r_sp, real_off, ctx); 623 + else 624 + emit_addiu(r_M, r_sp, real_off, ctx); 627 625 } 628 626 } 629 627 ··· 713 705 if (ctx->flags & SEEN_SKB) 714 706 emit_reg_move(r_skb, MIPS_R_A0, ctx); 715 707 716 - if (ctx->flags & FLAG_NEED_X_RESET) 708 + if (ctx->flags & SEEN_X) 717 709 emit_jit_reg_move(r_X, r_zero, ctx); 718 710 719 711 /* Do not leak kernel data to userspace */ 720 - if ((first_inst != BPF_S_RET_K) && !(is_load_to_a(first_inst))) 712 + if ((first_inst != (BPF_RET | BPF_K)) && !(is_load_to_a(first_inst))) 721 713 emit_jit_reg_move(r_A, r_zero, ctx); 722 714 } 723 715 ··· 765 757 return (u64)err << 32 | ntohl(ret); 766 758 } 767 759 768 - #define PKT_TYPE_MAX 7 760 + #ifdef __BIG_ENDIAN_BITFIELD 761 + #define PKT_TYPE_MAX (7 << 5) 762 + #else 763 + #define PKT_TYPE_MAX 7 764 + #endif 769 765 static int pkt_type_offset(void) 770 766 { 771 767 struct sk_buff skb_probe = { 772 768 .pkt_type = ~0, 773 769 }; 774 - char *ct = (char *)&skb_probe; 770 + u8 *ct = (u8 *)&skb_probe; 775 771 unsigned int off; 776 772 777 773 for (off = 0; off < sizeof(struct sk_buff); off++) { ··· 795 783 u32 k, b_off __maybe_unused; 796 784 797 785 for (i = 0; i < prog->len; i++) { 786 + u16 code; 787 + 798 788 inst = &(prog->insns[i]); 799 789 pr_debug("%s: code->0x%02x, jt->0x%x, jf->0x%x, k->0x%x\n", 800 
790 __func__, inst->code, inst->jt, inst->jf, inst->k); 801 791 k = inst->k; 792 + code = bpf_anc_helper(inst); 802 793 803 794 if (ctx->target == NULL) 804 795 ctx->offsets[i] = ctx->idx * 4; 805 796 806 - switch (inst->code) { 807 - case BPF_S_LD_IMM: 797 + switch (code) { 798 + case BPF_LD | BPF_IMM: 808 799 /* A <- k ==> li r_A, k */ 809 800 ctx->flags |= SEEN_A; 810 801 emit_load_imm(r_A, k, ctx); 811 802 break; 812 - case BPF_S_LD_W_LEN: 803 + case BPF_LD | BPF_W | BPF_LEN: 813 804 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4); 814 805 /* A <- len ==> lw r_A, offset(skb) */ 815 806 ctx->flags |= SEEN_SKB | SEEN_A; 816 807 off = offsetof(struct sk_buff, len); 817 808 emit_load(r_A, r_skb, off, ctx); 818 809 break; 819 - case BPF_S_LD_MEM: 810 + case BPF_LD | BPF_MEM: 820 811 /* A <- M[k] ==> lw r_A, offset(M) */ 821 812 ctx->flags |= SEEN_MEM | SEEN_A; 822 813 emit_load(r_A, r_M, SCRATCH_OFF(k), ctx); 823 814 break; 824 - case BPF_S_LD_W_ABS: 815 + case BPF_LD | BPF_W | BPF_ABS: 825 816 /* A <- P[k:4] */ 826 817 load_order = 2; 827 818 goto load; 828 - case BPF_S_LD_H_ABS: 819 + case BPF_LD | BPF_H | BPF_ABS: 829 820 /* A <- P[k:2] */ 830 821 load_order = 1; 831 822 goto load; 832 - case BPF_S_LD_B_ABS: 823 + case BPF_LD | BPF_B | BPF_ABS: 833 824 /* A <- P[k:1] */ 834 825 load_order = 0; 835 826 load: 827 + /* the interpreter will deal with the negative K */ 828 + if ((int)k < 0) 829 + return -ENOTSUPP; 830 + 836 831 emit_load_imm(r_off, k, ctx); 837 832 load_common: 833 + /* 834 + * We may got here from the indirect loads so 835 + * return if offset is negative. 
836 + */ 837 + emit_slt(r_s0, r_off, r_zero, ctx); 838 + emit_bcond(MIPS_COND_NE, r_s0, r_zero, 839 + b_imm(prog->len, ctx), ctx); 840 + emit_reg_move(r_ret, r_zero, ctx); 841 + 838 842 ctx->flags |= SEEN_CALL | SEEN_OFF | SEEN_S0 | 839 843 SEEN_SKB | SEEN_A; 840 844 ··· 880 852 emit_b(b_imm(prog->len, ctx), ctx); 881 853 emit_reg_move(r_ret, r_zero, ctx); 882 854 break; 883 - case BPF_S_LD_W_IND: 855 + case BPF_LD | BPF_W | BPF_IND: 884 856 /* A <- P[X + k:4] */ 885 857 load_order = 2; 886 858 goto load_ind; 887 - case BPF_S_LD_H_IND: 859 + case BPF_LD | BPF_H | BPF_IND: 888 860 /* A <- P[X + k:2] */ 889 861 load_order = 1; 890 862 goto load_ind; 891 - case BPF_S_LD_B_IND: 863 + case BPF_LD | BPF_B | BPF_IND: 892 864 /* A <- P[X + k:1] */ 893 865 load_order = 0; 894 866 load_ind: 895 - update_on_xread(ctx); 896 867 ctx->flags |= SEEN_OFF | SEEN_X; 897 868 emit_addiu(r_off, r_X, k, ctx); 898 869 goto load_common; 899 - case BPF_S_LDX_IMM: 870 + case BPF_LDX | BPF_IMM: 900 871 /* X <- k */ 901 872 ctx->flags |= SEEN_X; 902 873 emit_load_imm(r_X, k, ctx); 903 874 break; 904 - case BPF_S_LDX_MEM: 875 + case BPF_LDX | BPF_MEM: 905 876 /* X <- M[k] */ 906 877 ctx->flags |= SEEN_X | SEEN_MEM; 907 878 emit_load(r_X, r_M, SCRATCH_OFF(k), ctx); 908 879 break; 909 - case BPF_S_LDX_W_LEN: 880 + case BPF_LDX | BPF_W | BPF_LEN: 910 881 /* X <- len */ 911 882 ctx->flags |= SEEN_X | SEEN_SKB; 912 883 off = offsetof(struct sk_buff, len); 913 884 emit_load(r_X, r_skb, off, ctx); 914 885 break; 915 - case BPF_S_LDX_B_MSH: 886 + case BPF_LDX | BPF_B | BPF_MSH: 887 + /* the interpreter will deal with the negative K */ 888 + if ((int)k < 0) 889 + return -ENOTSUPP; 890 + 916 891 /* X <- 4 * (P[k:1] & 0xf) */ 917 892 ctx->flags |= SEEN_X | SEEN_CALL | SEEN_S0 | SEEN_SKB; 918 893 /* Load offset to a1 */ ··· 948 917 emit_b(b_imm(prog->len, ctx), ctx); 949 918 emit_load_imm(r_ret, 0, ctx); /* delay slot */ 950 919 break; 951 - case BPF_S_ST: 920 + case BPF_ST: 952 921 /* M[k] <- A */ 953 
922 ctx->flags |= SEEN_MEM | SEEN_A; 954 923 emit_store(r_A, r_M, SCRATCH_OFF(k), ctx); 955 924 break; 956 - case BPF_S_STX: 925 + case BPF_STX: 957 926 /* M[k] <- X */ 958 927 ctx->flags |= SEEN_MEM | SEEN_X; 959 928 emit_store(r_X, r_M, SCRATCH_OFF(k), ctx); 960 929 break; 961 - case BPF_S_ALU_ADD_K: 930 + case BPF_ALU | BPF_ADD | BPF_K: 962 931 /* A += K */ 963 932 ctx->flags |= SEEN_A; 964 933 emit_addiu(r_A, r_A, k, ctx); 965 934 break; 966 - case BPF_S_ALU_ADD_X: 935 + case BPF_ALU | BPF_ADD | BPF_X: 967 936 /* A += X */ 968 937 ctx->flags |= SEEN_A | SEEN_X; 969 938 emit_addu(r_A, r_A, r_X, ctx); 970 939 break; 971 - case BPF_S_ALU_SUB_K: 940 + case BPF_ALU | BPF_SUB | BPF_K: 972 941 /* A -= K */ 973 942 ctx->flags |= SEEN_A; 974 943 emit_addiu(r_A, r_A, -k, ctx); 975 944 break; 976 - case BPF_S_ALU_SUB_X: 945 + case BPF_ALU | BPF_SUB | BPF_X: 977 946 /* A -= X */ 978 947 ctx->flags |= SEEN_A | SEEN_X; 979 948 emit_subu(r_A, r_A, r_X, ctx); 980 949 break; 981 - case BPF_S_ALU_MUL_K: 950 + case BPF_ALU | BPF_MUL | BPF_K: 982 951 /* A *= K */ 983 952 /* Load K to scratch register before MUL */ 984 953 ctx->flags |= SEEN_A | SEEN_S0; 985 954 emit_load_imm(r_s0, k, ctx); 986 955 emit_mul(r_A, r_A, r_s0, ctx); 987 956 break; 988 - case BPF_S_ALU_MUL_X: 957 + case BPF_ALU | BPF_MUL | BPF_X: 989 958 /* A *= X */ 990 - update_on_xread(ctx); 991 959 ctx->flags |= SEEN_A | SEEN_X; 992 960 emit_mul(r_A, r_A, r_X, ctx); 993 961 break; 994 - case BPF_S_ALU_DIV_K: 962 + case BPF_ALU | BPF_DIV | BPF_K: 995 963 /* A /= k */ 996 964 if (k == 1) 997 965 break; ··· 1003 973 emit_load_imm(r_s0, k, ctx); 1004 974 emit_div(r_A, r_s0, ctx); 1005 975 break; 1006 - case BPF_S_ALU_MOD_K: 976 + case BPF_ALU | BPF_MOD | BPF_K: 1007 977 /* A %= k */ 1008 978 if (k == 1 || optimize_div(&k)) { 1009 979 ctx->flags |= SEEN_A; ··· 1014 984 emit_mod(r_A, r_s0, ctx); 1015 985 } 1016 986 break; 1017 - case BPF_S_ALU_DIV_X: 987 + case BPF_ALU | BPF_DIV | BPF_X: 1018 988 /* A /= X */ 1019 - 
update_on_xread(ctx); 1020 989 ctx->flags |= SEEN_X | SEEN_A; 1021 990 /* Check if r_X is zero */ 1022 991 emit_bcond(MIPS_COND_EQ, r_X, r_zero, ··· 1023 994 emit_load_imm(r_val, 0, ctx); /* delay slot */ 1024 995 emit_div(r_A, r_X, ctx); 1025 996 break; 1026 - case BPF_S_ALU_MOD_X: 997 + case BPF_ALU | BPF_MOD | BPF_X: 1027 998 /* A %= X */ 1028 - update_on_xread(ctx); 1029 999 ctx->flags |= SEEN_X | SEEN_A; 1030 1000 /* Check if r_X is zero */ 1031 1001 emit_bcond(MIPS_COND_EQ, r_X, r_zero, ··· 1032 1004 emit_load_imm(r_val, 0, ctx); /* delay slot */ 1033 1005 emit_mod(r_A, r_X, ctx); 1034 1006 break; 1035 - case BPF_S_ALU_OR_K: 1007 + case BPF_ALU | BPF_OR | BPF_K: 1036 1008 /* A |= K */ 1037 1009 ctx->flags |= SEEN_A; 1038 1010 emit_ori(r_A, r_A, k, ctx); 1039 1011 break; 1040 - case BPF_S_ALU_OR_X: 1012 + case BPF_ALU | BPF_OR | BPF_X: 1041 1013 /* A |= X */ 1042 - update_on_xread(ctx); 1043 1014 ctx->flags |= SEEN_A; 1044 1015 emit_ori(r_A, r_A, r_X, ctx); 1045 1016 break; 1046 - case BPF_S_ALU_XOR_K: 1017 + case BPF_ALU | BPF_XOR | BPF_K: 1047 1018 /* A ^= k */ 1048 1019 ctx->flags |= SEEN_A; 1049 1020 emit_xori(r_A, r_A, k, ctx); 1050 1021 break; 1051 - case BPF_S_ANC_ALU_XOR_X: 1052 - case BPF_S_ALU_XOR_X: 1022 + case BPF_ANC | SKF_AD_ALU_XOR_X: 1023 + case BPF_ALU | BPF_XOR | BPF_X: 1053 1024 /* A ^= X */ 1054 - update_on_xread(ctx); 1055 1025 ctx->flags |= SEEN_A; 1056 1026 emit_xor(r_A, r_A, r_X, ctx); 1057 1027 break; 1058 - case BPF_S_ALU_AND_K: 1028 + case BPF_ALU | BPF_AND | BPF_K: 1059 1029 /* A &= K */ 1060 1030 ctx->flags |= SEEN_A; 1061 1031 emit_andi(r_A, r_A, k, ctx); 1062 1032 break; 1063 - case BPF_S_ALU_AND_X: 1033 + case BPF_ALU | BPF_AND | BPF_X: 1064 1034 /* A &= X */ 1065 - update_on_xread(ctx); 1066 1035 ctx->flags |= SEEN_A | SEEN_X; 1067 1036 emit_and(r_A, r_A, r_X, ctx); 1068 1037 break; 1069 - case BPF_S_ALU_LSH_K: 1038 + case BPF_ALU | BPF_LSH | BPF_K: 1070 1039 /* A <<= K */ 1071 1040 ctx->flags |= SEEN_A; 1072 1041 emit_sll(r_A, 
r_A, k, ctx); 1073 1042 break; 1074 - case BPF_S_ALU_LSH_X: 1043 + case BPF_ALU | BPF_LSH | BPF_X: 1075 1044 /* A <<= X */ 1076 1045 ctx->flags |= SEEN_A | SEEN_X; 1077 - update_on_xread(ctx); 1078 1046 emit_sllv(r_A, r_A, r_X, ctx); 1079 1047 break; 1080 - case BPF_S_ALU_RSH_K: 1048 + case BPF_ALU | BPF_RSH | BPF_K: 1081 1049 /* A >>= K */ 1082 1050 ctx->flags |= SEEN_A; 1083 1051 emit_srl(r_A, r_A, k, ctx); 1084 1052 break; 1085 - case BPF_S_ALU_RSH_X: 1053 + case BPF_ALU | BPF_RSH | BPF_X: 1086 1054 ctx->flags |= SEEN_A | SEEN_X; 1087 - update_on_xread(ctx); 1088 1055 emit_srlv(r_A, r_A, r_X, ctx); 1089 1056 break; 1090 - case BPF_S_ALU_NEG: 1057 + case BPF_ALU | BPF_NEG: 1091 1058 /* A = -A */ 1092 1059 ctx->flags |= SEEN_A; 1093 1060 emit_neg(r_A, ctx); 1094 1061 break; 1095 - case BPF_S_JMP_JA: 1062 + case BPF_JMP | BPF_JA: 1096 1063 /* pc += K */ 1097 1064 emit_b(b_imm(i + k + 1, ctx), ctx); 1098 1065 emit_nop(ctx); 1099 1066 break; 1100 - case BPF_S_JMP_JEQ_K: 1067 + case BPF_JMP | BPF_JEQ | BPF_K: 1101 1068 /* pc += ( A == K ) ? pc->jt : pc->jf */ 1102 1069 condt = MIPS_COND_EQ | MIPS_COND_K; 1103 1070 goto jmp_cmp; 1104 - case BPF_S_JMP_JEQ_X: 1071 + case BPF_JMP | BPF_JEQ | BPF_X: 1105 1072 ctx->flags |= SEEN_X; 1106 1073 /* pc += ( A == X ) ? pc->jt : pc->jf */ 1107 1074 condt = MIPS_COND_EQ | MIPS_COND_X; 1108 1075 goto jmp_cmp; 1109 - case BPF_S_JMP_JGE_K: 1076 + case BPF_JMP | BPF_JGE | BPF_K: 1110 1077 /* pc += ( A >= K ) ? pc->jt : pc->jf */ 1111 1078 condt = MIPS_COND_GE | MIPS_COND_K; 1112 1079 goto jmp_cmp; 1113 - case BPF_S_JMP_JGE_X: 1080 + case BPF_JMP | BPF_JGE | BPF_X: 1114 1081 ctx->flags |= SEEN_X; 1115 1082 /* pc += ( A >= X ) ? pc->jt : pc->jf */ 1116 1083 condt = MIPS_COND_GE | MIPS_COND_X; 1117 1084 goto jmp_cmp; 1118 - case BPF_S_JMP_JGT_K: 1085 + case BPF_JMP | BPF_JGT | BPF_K: 1119 1086 /* pc += ( A > K ) ? 
pc->jt : pc->jf */ 1120 1087 condt = MIPS_COND_GT | MIPS_COND_K; 1121 1088 goto jmp_cmp; 1122 - case BPF_S_JMP_JGT_X: 1089 + case BPF_JMP | BPF_JGT | BPF_X: 1123 1090 ctx->flags |= SEEN_X; 1124 1091 /* pc += ( A > X ) ? pc->jt : pc->jf */ 1125 1092 condt = MIPS_COND_GT | MIPS_COND_X; ··· 1132 1109 } 1133 1110 /* A < (K|X) ? r_scrach = 1 */ 1134 1111 b_off = b_imm(i + inst->jf + 1, ctx); 1135 - emit_bcond(MIPS_COND_GT, r_s0, r_zero, b_off, 1112 + emit_bcond(MIPS_COND_NE, r_s0, r_zero, b_off, 1136 1113 ctx); 1137 1114 emit_nop(ctx); 1138 1115 /* A > (K|X) ? scratch = 0 */ ··· 1190 1167 } 1191 1168 } 1192 1169 break; 1193 - case BPF_S_JMP_JSET_K: 1170 + case BPF_JMP | BPF_JSET | BPF_K: 1194 1171 ctx->flags |= SEEN_S0 | SEEN_S1 | SEEN_A; 1195 1172 /* pc += (A & K) ? pc -> jt : pc -> jf */ 1196 1173 emit_load_imm(r_s1, k, ctx); ··· 1204 1181 emit_b(b_off, ctx); 1205 1182 emit_nop(ctx); 1206 1183 break; 1207 - case BPF_S_JMP_JSET_X: 1184 + case BPF_JMP | BPF_JSET | BPF_X: 1208 1185 ctx->flags |= SEEN_S0 | SEEN_X | SEEN_A; 1209 1186 /* pc += (A & X) ? pc -> jt : pc -> jf */ 1210 1187 emit_and(r_s0, r_A, r_X, ctx); ··· 1217 1194 emit_b(b_off, ctx); 1218 1195 emit_nop(ctx); 1219 1196 break; 1220 - case BPF_S_RET_A: 1197 + case BPF_RET | BPF_A: 1221 1198 ctx->flags |= SEEN_A; 1222 1199 if (i != prog->len - 1) 1223 1200 /* ··· 1227 1204 emit_b(b_imm(prog->len, ctx), ctx); 1228 1205 emit_reg_move(r_ret, r_A, ctx); /* delay slot */ 1229 1206 break; 1230 - case BPF_S_RET_K: 1207 + case BPF_RET | BPF_K: 1231 1208 /* 1232 1209 * It can emit two instructions so it does not fit on 1233 1210 * the delay slot. 
··· 1242 1219 emit_nop(ctx); 1243 1220 } 1244 1221 break; 1245 - case BPF_S_MISC_TAX: 1222 + case BPF_MISC | BPF_TAX: 1246 1223 /* X = A */ 1247 1224 ctx->flags |= SEEN_X | SEEN_A; 1248 1225 emit_jit_reg_move(r_X, r_A, ctx); 1249 1226 break; 1250 - case BPF_S_MISC_TXA: 1227 + case BPF_MISC | BPF_TXA: 1251 1228 /* A = X */ 1252 1229 ctx->flags |= SEEN_A | SEEN_X; 1253 - update_on_xread(ctx); 1254 1230 emit_jit_reg_move(r_A, r_X, ctx); 1255 1231 break; 1256 1232 /* AUX */ 1257 - case BPF_S_ANC_PROTOCOL: 1233 + case BPF_ANC | SKF_AD_PROTOCOL: 1258 1234 /* A = ntohs(skb->protocol */ 1259 1235 ctx->flags |= SEEN_SKB | SEEN_OFF | SEEN_A; 1260 1236 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, ··· 1278 1256 } 1279 1257 #endif 1280 1258 break; 1281 - case BPF_S_ANC_CPU: 1259 + case BPF_ANC | SKF_AD_CPU: 1282 1260 ctx->flags |= SEEN_A | SEEN_OFF; 1283 1261 /* A = current_thread_info()->cpu */ 1284 1262 BUILD_BUG_ON(FIELD_SIZEOF(struct thread_info, ··· 1287 1265 /* $28/gp points to the thread_info struct */ 1288 1266 emit_load(r_A, 28, off, ctx); 1289 1267 break; 1290 - case BPF_S_ANC_IFINDEX: 1268 + case BPF_ANC | SKF_AD_IFINDEX: 1291 1269 /* A = skb->dev->ifindex */ 1292 1270 ctx->flags |= SEEN_SKB | SEEN_A | SEEN_S0; 1293 1271 off = offsetof(struct sk_buff, dev); 1294 - emit_load(r_s0, r_skb, off, ctx); 1272 + /* Load *dev pointer */ 1273 + emit_load_ptr(r_s0, r_skb, off, ctx); 1295 1274 /* error (0) in the delay slot */ 1296 1275 emit_bcond(MIPS_COND_EQ, r_s0, r_zero, 1297 1276 b_imm(prog->len, ctx), ctx); ··· 1302 1279 off = offsetof(struct net_device, ifindex); 1303 1280 emit_load(r_A, r_s0, off, ctx); 1304 1281 break; 1305 - case BPF_S_ANC_MARK: 1282 + case BPF_ANC | SKF_AD_MARK: 1306 1283 ctx->flags |= SEEN_SKB | SEEN_A; 1307 1284 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4); 1308 1285 off = offsetof(struct sk_buff, mark); 1309 1286 emit_load(r_A, r_skb, off, ctx); 1310 1287 break; 1311 - case BPF_S_ANC_RXHASH: 1288 + case BPF_ANC | SKF_AD_RXHASH: 1312 1289 
ctx->flags |= SEEN_SKB | SEEN_A; 1313 1290 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, hash) != 4); 1314 1291 off = offsetof(struct sk_buff, hash); 1315 1292 emit_load(r_A, r_skb, off, ctx); 1316 1293 break; 1317 - case BPF_S_ANC_VLAN_TAG: 1318 - case BPF_S_ANC_VLAN_TAG_PRESENT: 1294 + case BPF_ANC | SKF_AD_VLAN_TAG: 1295 + case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: 1319 1296 ctx->flags |= SEEN_SKB | SEEN_S0 | SEEN_A; 1320 1297 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, 1321 1298 vlan_tci) != 2); 1322 1299 off = offsetof(struct sk_buff, vlan_tci); 1323 1300 emit_half_load(r_s0, r_skb, off, ctx); 1324 - if (inst->code == BPF_S_ANC_VLAN_TAG) 1325 - emit_and(r_A, r_s0, VLAN_VID_MASK, ctx); 1326 - else 1327 - emit_and(r_A, r_s0, VLAN_TAG_PRESENT, ctx); 1301 + if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { 1302 + emit_andi(r_A, r_s0, (u16)~VLAN_TAG_PRESENT, ctx); 1303 + } else { 1304 + emit_andi(r_A, r_s0, VLAN_TAG_PRESENT, ctx); 1305 + /* return 1 if present */ 1306 + emit_sltu(r_A, r_zero, r_A, ctx); 1307 + } 1328 1308 break; 1329 - case BPF_S_ANC_PKTTYPE: 1309 + case BPF_ANC | SKF_AD_PKTTYPE: 1310 + ctx->flags |= SEEN_SKB; 1311 + 1330 1312 off = pkt_type_offset(); 1331 1313 1332 1314 if (off < 0) ··· 1339 1311 emit_load_byte(r_tmp, r_skb, off, ctx); 1340 1312 /* Keep only the last 3 bits */ 1341 1313 emit_andi(r_A, r_tmp, PKT_TYPE_MAX, ctx); 1314 + #ifdef __BIG_ENDIAN_BITFIELD 1315 + /* Get the actual packet type to the lower 3 bits */ 1316 + emit_srl(r_A, r_A, 5, ctx); 1317 + #endif 1342 1318 break; 1343 - case BPF_S_ANC_QUEUE: 1319 + case BPF_ANC | SKF_AD_QUEUE: 1344 1320 ctx->flags |= SEEN_SKB | SEEN_A; 1345 1321 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, 1346 1322 queue_mapping) != 2); ··· 1354 1322 emit_half_load(r_A, r_skb, off, ctx); 1355 1323 break; 1356 1324 default: 1357 - pr_warn("%s: Unhandled opcode: 0x%02x\n", __FILE__, 1358 - inst->code); 1325 + pr_debug("%s: Unhandled opcode: 0x%02x\n", __FILE__, 1326 + inst->code); 1359 1327 return -1; 1360 1328 } 1361 1329 }
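The MIPS JIT hunks above replace the old `BPF_S_*` opcode enum with raw classic-BPF opcode composition (class | operation | source). A minimal sketch of that encoding, using the mask and opcode values from `uapi/linux/bpf_common.h` (the helper name is ours):

```c
/* Sketch of classic BPF opcode composition as used in the
 * "case BPF_ALU | BPF_MOD | BPF_X:" style labels above.
 * Mask/opcode values match uapi/linux/bpf_common.h. */
#include <assert.h>
#include <stdint.h>

#define BPF_CLASS(code) ((code) & 0x07)	/* BPF_LD .. BPF_MISC */
#define BPF_OP(code)    ((code) & 0xf0)	/* ALU op or jump condition */
#define BPF_SRC(code)   ((code) & 0x08)	/* BPF_K (imm) or BPF_X (reg) */

#define BPF_ALU 0x04
#define BPF_MOD 0x90
#define BPF_K   0x00
#define BPF_X   0x08

/* Hypothetical helper: does this opcode mean "A %= X"? */
static int is_alu_mod_x(uint16_t code)
{
	return code == (BPF_ALU | BPF_MOD | BPF_X);
}
```

Because each opcode is just an OR of the three fields, the JIT can switch directly on the composed value instead of a translated enum.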
+2 -1
arch/parisc/kernel/hardware.c
··· 1210 1210 {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, 1211 1211 {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, 1212 1212 {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, 1213 - {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, 1213 + {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, 1214 + {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"}, 1214 1215 {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, 1215 1216 {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, 1216 1217 {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"},
+10 -36
arch/parisc/kernel/sys_parisc32.c
··· 4 4 * Copyright (C) 2000-2001 Hewlett Packard Company 5 5 * Copyright (C) 2000 John Marvin 6 6 * Copyright (C) 2001 Matthew Wilcox 7 + * Copyright (C) 2014 Helge Deller <deller@gmx.de> 7 8 * 8 9 * These routines maintain argument size conversion between 32bit and 64bit 9 10 * environment. Based heavily on sys_ia32.c and sys_sparc32.c. ··· 12 11 13 12 #include <linux/compat.h> 14 13 #include <linux/kernel.h> 15 - #include <linux/sched.h> 16 - #include <linux/fs.h> 17 - #include <linux/mm.h> 18 - #include <linux/file.h> 19 - #include <linux/signal.h> 20 - #include <linux/resource.h> 21 - #include <linux/times.h> 22 - #include <linux/time.h> 23 - #include <linux/smp.h> 24 - #include <linux/sem.h> 25 - #include <linux/shm.h> 26 - #include <linux/slab.h> 27 - #include <linux/uio.h> 28 - #include <linux/ncp_fs.h> 29 - #include <linux/poll.h> 30 - #include <linux/personality.h> 31 - #include <linux/stat.h> 32 - #include <linux/highmem.h> 33 - #include <linux/highuid.h> 34 - #include <linux/mman.h> 35 - #include <linux/binfmts.h> 36 - #include <linux/namei.h> 37 - #include <linux/vfs.h> 38 - #include <linux/ptrace.h> 39 - #include <linux/swap.h> 40 14 #include <linux/syscalls.h> 41 15 42 - #include <asm/types.h> 43 - #include <asm/uaccess.h> 44 - #include <asm/mmu_context.h> 45 - 46 - #undef DEBUG 47 - 48 - #ifdef DEBUG 49 - #define DBG(x) printk x 50 - #else 51 - #define DBG(x) 52 - #endif 53 16 54 17 asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, 55 18 int r22, int r21, int r20) ··· 21 56 printk(KERN_ERR "%s(%d): Unimplemented 32 on 64 syscall #%d!\n", 22 57 current->comm, current->pid, r20); 23 58 return -ENOSYS; 59 + } 60 + 61 + asmlinkage long sys32_fanotify_mark(compat_int_t fanotify_fd, compat_uint_t flags, 62 + compat_uint_t mask0, compat_uint_t mask1, compat_int_t dfd, 63 + const char __user * pathname) 64 + { 65 + return sys_fanotify_mark(fanotify_fd, flags, 66 + ((__u64)mask1 << 32) | mask0, 67 + dfd, pathname); 24 68 }
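The new `sys32_fanotify_mark()` wrapper above exists because the 64-bit event mask arrives from 32-bit userspace split across two registers (`mask1` holds the high word, `mask0` the low word). A sketch of the merge, with a hypothetical helper name:

```c
/* Sketch of the compat argument merge performed by
 * sys32_fanotify_mark() above: rebuild the 64-bit mask from
 * two 32-bit halves. */
#include <assert.h>
#include <stdint.h>

static uint64_t merge_mask(uint32_t mask0, uint32_t mask1)
{
	return ((uint64_t)mask1 << 32) | mask0;
}
```

The cast before the shift matters: shifting a 32-bit value left by 32 is undefined behavior in C, so `mask1` must be widened first.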
+1 -1
arch/parisc/kernel/syscall_table.S
··· 418 418 ENTRY_SAME(accept4) /* 320 */ 419 419 ENTRY_SAME(prlimit64) 420 420 ENTRY_SAME(fanotify_init) 421 - ENTRY_COMP(fanotify_mark) 421 + ENTRY_DIFF(fanotify_mark) 422 422 ENTRY_COMP(clock_adjtime) 423 423 ENTRY_SAME(name_to_handle_at) /* 325 */ 424 424 ENTRY_COMP(open_by_handle_at)
+2 -1
arch/powerpc/Kconfig
··· 414 414 config CRASH_DUMP 415 415 bool "Build a kdump crash kernel" 416 416 depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP) 417 - select RELOCATABLE if PPC64 || 44x || FSL_BOOKE 417 + select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE 418 418 help 419 419 Build a kernel suitable for use as a kdump capture kernel. 420 420 The same kernel binary can be used as production kernel and dump ··· 1017 1017 if PPC64 1018 1018 config RELOCATABLE 1019 1019 bool "Build a relocatable kernel" 1020 + depends on !COMPILE_TEST 1020 1021 select NONSTATIC_KERNEL 1021 1022 help 1022 1023 This builds a kernel image that is capable of running anywhere
+1 -9
arch/powerpc/include/asm/mmu.h
··· 19 19 #define MMU_FTR_TYPE_40x ASM_CONST(0x00000004) 20 20 #define MMU_FTR_TYPE_44x ASM_CONST(0x00000008) 21 21 #define MMU_FTR_TYPE_FSL_E ASM_CONST(0x00000010) 22 - #define MMU_FTR_TYPE_3E ASM_CONST(0x00000020) 23 - #define MMU_FTR_TYPE_47x ASM_CONST(0x00000040) 22 + #define MMU_FTR_TYPE_47x ASM_CONST(0x00000020) 24 23 25 24 /* 26 25 * This is individual features ··· 105 106 MMU_FTR_CI_LARGE_PAGE 106 107 #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ 107 108 MMU_FTR_CI_LARGE_PAGE | MMU_FTR_NO_SLBIE_B 108 - #define MMU_FTRS_A2 MMU_FTR_TYPE_3E | MMU_FTR_USE_TLBILX | \ 109 - MMU_FTR_USE_TLBIVAX_BCAST | \ 110 - MMU_FTR_LOCK_BCAST_INVAL | \ 111 - MMU_FTR_USE_TLBRSRV | \ 112 - MMU_FTR_USE_PAIRED_MAS | \ 113 - MMU_FTR_TLBIEL | \ 114 - MMU_FTR_16M_PAGE 115 109 #ifndef __ASSEMBLY__ 116 110 #include <asm/cputable.h> 117 111
+1 -2
arch/powerpc/include/asm/perf_event_server.h
··· 61 61 #define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */ 62 62 #define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */ 63 63 #define PPMU_HAS_SIER 0x00000040 /* Has SIER */ 64 - #define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */ 65 - #define PPMU_EBB 0x00000100 /* supports event based branch */ 64 + #define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */ 66 65 67 66 /* 68 67 * Values for flags to get_alternatives()
+1 -1
arch/powerpc/kernel/idle_power7.S
··· 131 131 132 132 _GLOBAL(power7_sleep) 133 133 li r3,1 134 - li r4,0 134 + li r4,1 135 135 b power7_powersave_common 136 136 /* No return */ 137 137
+1 -1
arch/powerpc/kernel/smp.c
··· 747 747 748 748 #ifdef CONFIG_SCHED_SMT 749 749 /* cpumask of CPUs with asymmetric SMT dependency */ 750 - static const int powerpc_smt_flags(void) 750 + static int powerpc_smt_flags(void) 751 751 { 752 752 int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; 753 753
-5
arch/powerpc/kvm/book3s_hv_interrupts.S
··· 127 127 stw r10, HSTATE_PMC + 24(r13) 128 128 stw r11, HSTATE_PMC + 28(r13) 129 129 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201) 130 - BEGIN_FTR_SECTION 131 - mfspr r9, SPRN_SIER 132 - std r8, HSTATE_MMCR + 40(r13) 133 - std r9, HSTATE_MMCR + 48(r13) 134 - END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 135 130 31: 136 131 137 132 /*
+1 -11
arch/powerpc/mm/mmu_context_nohash.c
··· 410 410 } else if (mmu_has_feature(MMU_FTR_TYPE_47x)) { 411 411 first_context = 1; 412 412 last_context = 65535; 413 - } else 414 - #ifdef CONFIG_PPC_BOOK3E_MMU 415 - if (mmu_has_feature(MMU_FTR_TYPE_3E)) { 416 - u32 mmucfg = mfspr(SPRN_MMUCFG); 417 - u32 pid_bits = (mmucfg & MMUCFG_PIDSIZE_MASK) 418 - >> MMUCFG_PIDSIZE_SHIFT; 419 - first_context = 1; 420 - last_context = (1UL << (pid_bits + 1)) - 1; 421 - } else 422 - #endif 423 - { 413 + } else { 424 414 first_context = 1; 425 415 last_context = 255; 426 416 }
+7 -3
arch/powerpc/net/bpf_jit_comp.c
··· 390 390 case BPF_ANC | SKF_AD_VLAN_TAG: 391 391 case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: 392 392 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); 393 + BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000); 394 + 393 395 PPC_LHZ_OFFS(r_A, r_skb, offsetof(struct sk_buff, 394 396 vlan_tci)); 395 - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) 396 - PPC_ANDI(r_A, r_A, VLAN_VID_MASK); 397 - else 397 + if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { 398 + PPC_ANDI(r_A, r_A, ~VLAN_TAG_PRESENT); 399 + } else { 398 400 PPC_ANDI(r_A, r_A, VLAN_TAG_PRESENT); 401 + PPC_SRWI(r_A, r_A, 12); 402 + } 399 403 break; 400 404 case BPF_ANC | SKF_AD_QUEUE: 401 405 BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
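The powerpc hunk above changes `SKF_AD_VLAN_TAG` to strip the present bit (rather than mask with `VLAN_VID_MASK`) and `SKF_AD_VLAN_TAG_PRESENT` to return a clean 0/1. A C sketch of the intended semantics, assuming this kernel's `VLAN_TAG_PRESENT` value of 0x1000 (the `BUILD_BUG_ON` above pins it):

```c
/* Sketch of the vlan_tci handling above: the TAG accessor drops
 * the present bit, the PRESENT accessor reduces it to 0 or 1
 * (the PPC JIT shifts right by 12; the MIPS JIT uses sltu). */
#include <assert.h>
#include <stdint.h>

#define VLAN_TAG_PRESENT 0x1000

static uint16_t vlan_tag(uint16_t vlan_tci)
{
	return vlan_tci & (uint16_t)~VLAN_TAG_PRESENT;
}

static uint16_t vlan_tag_present(uint16_t vlan_tci)
{
	return (uint16_t)(vlan_tci & VLAN_TAG_PRESENT) >> 12;
}
```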
+22 -4
arch/powerpc/perf/core-book3s.c
··· 485 485 * check that the PMU supports EBB, meaning those that don't can still 486 486 * use bit 63 of the event code for something else if they wish. 487 487 */ 488 - return (ppmu->flags & PPMU_EBB) && 488 + return (ppmu->flags & PPMU_ARCH_207S) && 489 489 ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1); 490 490 } 491 491 ··· 777 777 if (ppmu->flags & PPMU_HAS_SIER) 778 778 sier = mfspr(SPRN_SIER); 779 779 780 - if (ppmu->flags & PPMU_EBB) { 780 + if (ppmu->flags & PPMU_ARCH_207S) { 781 781 pr_info("MMCR2: %016lx EBBHR: %016lx\n", 782 782 mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR)); 783 783 pr_info("EBBRR: %016lx BESCR: %016lx\n", ··· 996 996 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); 997 997 998 998 local64_add(delta, &event->count); 999 - local64_sub(delta, &event->hw.period_left); 999 + 1000 + /* 1001 + * A number of places program the PMC with (0x80000000 - period_left). 1002 + * We never want period_left to be less than 1 because we will program 1003 + * the PMC with a value >= 0x800000000 and an edge detected PMC will 1004 + * roll around to 0 before taking an exception. We have seen this 1005 + * on POWER8. 1006 + * 1007 + * To fix this, clamp the minimum value of period_left to 1. 1008 + */ 1009 + do { 1010 + prev = local64_read(&event->hw.period_left); 1011 + val = prev - delta; 1012 + if (val < 1) 1013 + val = 1; 1014 + } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev); 1000 1015 } 1001 1016 1002 1017 /* ··· 1314 1299 ppmu->config_bhrb(cpuhw->bhrb_filter); 1315 1300 1316 1301 write_mmcr0(cpuhw, mmcr0); 1302 + 1303 + if (ppmu->flags & PPMU_ARCH_207S) 1304 + mtspr(SPRN_MMCR2, 0); 1317 1305 1318 1306 /* 1319 1307 * Enable instruction sampling if necessary ··· 1714 1696 1715 1697 if (has_branch_stack(event)) { 1716 1698 /* PMU has BHRB enabled */ 1717 - if (!(ppmu->flags & PPMU_BHRB)) 1699 + if (!(ppmu->flags & PPMU_ARCH_207S)) 1718 1700 return -EOPNOTSUPP; 1719 1701 } 1720 1702
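The `period_left` change above replaces a plain subtraction with a clamped compare-and-swap loop so the value never drops below 1 (avoiding the POWER8 edge-detect rollover described in the comment). A userspace sketch of the same pattern, substituting GCC/Clang `__atomic` builtins for the kernel's `local64_cmpxchg()`:

```c
/* Sketch of the clamped lock-free update added above: subtract
 * delta but never let the counter go below 1, retrying if another
 * thread updated the value in between. */
#include <assert.h>
#include <stdint.h>

static int64_t clamped_sub(int64_t *period_left, int64_t delta)
{
	int64_t prev, val;

	do {
		prev = __atomic_load_n(period_left, __ATOMIC_RELAXED);
		val = prev - delta;
		if (val < 1)
			val = 1;	/* clamp, as in the hunk above */
	} while (!__atomic_compare_exchange_n(period_left, &prev, val,
					      0, __ATOMIC_RELAXED,
					      __ATOMIC_RELAXED));
	return val;
}
```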
+1 -1
arch/powerpc/perf/power8-pmu.c
··· 792 792 .get_constraint = power8_get_constraint, 793 793 .get_alternatives = power8_get_alternatives, 794 794 .disable_pmc = power8_disable_pmc, 795 - .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB | PPMU_EBB, 795 + .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S, 796 796 .n_generic = ARRAY_SIZE(power8_generic_events), 797 797 .generic_events = power8_generic_events, 798 798 .cache_events = &power8_cache_events,
+2
arch/powerpc/platforms/cell/spu_syscalls.c
··· 111 111 return ret; 112 112 } 113 113 114 + #ifdef CONFIG_COREDUMP 114 115 int elf_coredump_extra_notes_size(void) 115 116 { 116 117 struct spufs_calls *calls; ··· 143 142 144 143 return ret; 145 144 } 145 + #endif 146 146 147 147 void notify_spus_active(void) 148 148 {
+2 -1
arch/powerpc/platforms/cell/spufs/Makefile
··· 1 1 2 2 obj-$(CONFIG_SPU_FS) += spufs.o 3 - spufs-y += inode.o file.o context.o syscalls.o coredump.o 3 + spufs-y += inode.o file.o context.o syscalls.o 4 4 spufs-y += sched.o backing_ops.o hw_ops.o run.o gang.o 5 5 spufs-y += switch.o fault.o lscsa_alloc.o 6 + spufs-$(CONFIG_COREDUMP) += coredump.o 6 7 7 8 # magic for the trace events 8 9 CFLAGS_sched.o := -I$(src)
+4 -2
arch/powerpc/platforms/cell/spufs/syscalls.c
··· 79 79 struct spufs_calls spufs_calls = { 80 80 .create_thread = do_spu_create, 81 81 .spu_run = do_spu_run, 82 - .coredump_extra_notes_size = spufs_coredump_extra_notes_size, 83 - .coredump_extra_notes_write = spufs_coredump_extra_notes_write, 84 82 .notify_spus_active = do_notify_spus_active, 85 83 .owner = THIS_MODULE, 84 + #ifdef CONFIG_COREDUMP 85 + .coredump_extra_notes_size = spufs_coredump_extra_notes_size, 86 + .coredump_extra_notes_write = spufs_coredump_extra_notes_write, 87 + #endif 86 88 };
+1
arch/s390/include/uapi/asm/Kbuild
··· 36 36 header-y += socket.h 37 37 header-y += sockios.h 38 38 header-y += sclp_ctl.h 39 + header-y += sie.h 39 40 header-y += stat.h 40 41 header-y += statfs.h 41 42 header-y += swab.h
+12 -14
arch/s390/include/uapi/asm/sie.h
··· 1 1 #ifndef _UAPI_ASM_S390_SIE_H 2 2 #define _UAPI_ASM_S390_SIE_H 3 3 4 - #include <asm/sigp.h> 5 - 6 4 #define diagnose_codes \ 7 5 { 0x10, "DIAG (0x10) release pages" }, \ 8 6 { 0x44, "DIAG (0x44) time slice end" }, \ ··· 11 13 { 0x500, "DIAG (0x500) KVM virtio functions" }, \ 12 14 { 0x501, "DIAG (0x501) KVM breakpoint" } 13 15 14 - #define sigp_order_codes \ 15 - { SIGP_SENSE, "SIGP sense" }, \ 16 - { SIGP_EXTERNAL_CALL, "SIGP external call" }, \ 17 - { SIGP_EMERGENCY_SIGNAL, "SIGP emergency signal" }, \ 18 - { SIGP_STOP, "SIGP stop" }, \ 19 - { SIGP_STOP_AND_STORE_STATUS, "SIGP stop and store status" }, \ 20 - { SIGP_SET_ARCHITECTURE, "SIGP set architecture" }, \ 21 - { SIGP_SET_PREFIX, "SIGP set prefix" }, \ 22 - { SIGP_SENSE_RUNNING, "SIGP sense running" }, \ 23 - { SIGP_RESTART, "SIGP restart" }, \ 24 - { SIGP_INITIAL_CPU_RESET, "SIGP initial cpu reset" }, \ 25 - { SIGP_STORE_STATUS_AT_ADDRESS, "SIGP store status at address" } 16 + #define sigp_order_codes \ 17 + { 0x01, "SIGP sense" }, \ 18 + { 0x02, "SIGP external call" }, \ 19 + { 0x03, "SIGP emergency signal" }, \ 20 + { 0x05, "SIGP stop" }, \ 21 + { 0x06, "SIGP restart" }, \ 22 + { 0x09, "SIGP stop and store status" }, \ 23 + { 0x0b, "SIGP initial cpu reset" }, \ 24 + { 0x0d, "SIGP set prefix" }, \ 25 + { 0x0e, "SIGP store status at address" }, \ 26 + { 0x12, "SIGP set architecture" }, \ 27 + { 0x15, "SIGP sense running" } 26 28 27 29 #define icpt_prog_codes \ 28 30 { 0x0001, "Prog Operation" }, \
+1 -1
arch/x86/crypto/sha512_ssse3_glue.c
··· 141 141 142 142 /* save number of bits */ 143 143 bits[1] = cpu_to_be64(sctx->count[0] << 3); 144 - bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61; 144 + bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); 145 145 146 146 /* Pad out to 112 mod 128 and append length */ 147 147 index = sctx->count[0] & 0x7f;
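The sha512 fix above moves the OR inside `cpu_to_be64()`: the three bits shifted out of `count[0]` must be merged into the high word *before* byte swapping, not after. A sketch of the corrected 128-bit bit-length computation (helper name ours, byte swap omitted):

```c
/* Sketch of the fix above: the message length in bits is the
 * 128-bit value (count[1]:count[0]) << 3, so count[0]'s top three
 * bits carry into the high word. */
#include <assert.h>
#include <stdint.h>

static void length_in_bits(const uint64_t count[2], uint64_t bits[2])
{
	bits[1] = count[0] << 3;			/* low 64 bits */
	bits[0] = (count[1] << 3) | (count[0] >> 61);	/* high 64 bits */
}
```

With the parenthesis in the wrong place, the carry bits were OR-ed into the already byte-swapped high word, corrupting the padding length for messages of 2^61 bytes or more.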
+2 -2
arch/x86/include/asm/kvm_host.h
··· 95 95 #define KVM_REFILL_PAGES 25 96 96 #define KVM_MAX_CPUID_ENTRIES 80 97 97 #define KVM_NR_FIXED_MTRR_REGION 88 98 - #define KVM_NR_VAR_MTRR 8 98 + #define KVM_NR_VAR_MTRR 10 99 99 100 100 #define ASYNC_PF_PER_VCPU 64 101 101 ··· 461 461 bool nmi_injected; /* Trying to inject an NMI this entry */ 462 462 463 463 struct mtrr_state_type mtrr_state; 464 - u32 pat; 464 + u64 pat; 465 465 466 466 unsigned switch_db_regs; 467 467 unsigned long db[KVM_NR_DB_REGS];
+16
arch/x86/include/asm/ptrace.h
··· 231 231 232 232 #define ARCH_HAS_USER_SINGLE_STEP_INFO 233 233 234 + /* 235 + * When hitting ptrace_stop(), we cannot return using SYSRET because 236 + * that does not restore the full CPU state, only a minimal set. The 237 + * ptracer can change arbitrary register values, which is usually okay 238 + * because the usual ptrace stops run off the signal delivery path which 239 + * forces IRET; however, ptrace_event() stops happen in arbitrary places 240 + * in the kernel and don't force IRET path. 241 + * 242 + * So force IRET path after a ptrace stop. 243 + */ 244 + #define arch_ptrace_stop_needed(code, info) \ 245 + ({ \ 246 + set_thread_flag(TIF_NOTIFY_RESUME); \ 247 + false; \ 248 + }) 249 + 234 250 struct user_desc; 235 251 extern int do_get_thread_area(struct task_struct *p, int idx, 236 252 struct user_desc __user *info);
+9
arch/x86/kernel/cpu/perf_event_intel.c
··· 1382 1382 intel_pmu_lbr_read(); 1383 1383 1384 1384 /* 1385 + * CondChgd bit 63 doesn't mean any overflow status. Ignore 1386 + * and clear the bit. 1387 + */ 1388 + if (__test_and_clear_bit(63, (unsigned long *)&status)) { 1389 + if (!status) 1390 + goto done; 1391 + } 1392 + 1393 + /* 1385 1394 * PEBS overflow sets bit 62 in the global status register 1386 1395 */ 1387 1396 if (__test_and_clear_bit(62, (unsigned long *)&status)) {
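The hunk above clears CondChgd (bit 63) before interpreting the global status register, treating a status with only that bit set as spurious. The `__test_and_clear_bit()` idiom it relies on can be sketched for a plain 64-bit word:

```c
/* Sketch of the status handling above: clear bit 63 first and
 * report whether it was set; if nothing else remains, the
 * interrupt carries no real overflow. */
#include <assert.h>
#include <stdint.h>

static int test_and_clear_bit64(int nr, uint64_t *val)
{
	uint64_t mask = 1ULL << nr;
	int was_set = !!(*val & mask);

	*val &= ~mask;
	return was_set;
}
```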
+8 -2
arch/x86/kernel/entry_32.S
··· 423 423 jnz sysenter_audit 424 424 sysenter_do_call: 425 425 cmpl $(NR_syscalls), %eax 426 - jae syscall_badsys 426 + jae sysenter_badsys 427 427 call *sys_call_table(,%eax,4) 428 428 movl %eax,PT_EAX(%esp) 429 + sysenter_after_call: 429 430 LOCKDEP_SYS_EXIT 430 431 DISABLE_INTERRUPTS(CLBR_ANY) 431 432 TRACE_IRQS_OFF ··· 676 675 677 676 syscall_badsys: 678 677 movl $-ENOSYS,PT_EAX(%esp) 679 - jmp resume_userspace 678 + jmp syscall_exit 679 + END(syscall_badsys) 680 + 681 + sysenter_badsys: 682 + movl $-ENOSYS,PT_EAX(%esp) 683 + jmp sysenter_after_call 680 684 END(syscall_badsys) 681 685 CFI_ENDPROC 682 686
+1 -1
arch/x86/kernel/signal.c
··· 363 363 364 364 /* Set up to return from userspace. */ 365 365 restorer = current->mm->context.vdso + 366 - selected_vdso32->sym___kernel_sigreturn; 366 + selected_vdso32->sym___kernel_rt_sigreturn; 367 367 if (ksig->ka.sa.sa_flags & SA_RESTORER) 368 368 restorer = ksig->ka.sa.sa_restorer; 369 369 put_user_ex(restorer, &frame->pretcode);
+2 -2
arch/x86/kernel/tsc.c
··· 920 920 tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new); 921 921 if (!(freq->flags & CPUFREQ_CONST_LOOPS)) 922 922 mark_tsc_unstable("cpufreq changes"); 923 - } 924 923 925 - set_cyc2ns_scale(tsc_khz, freq->cpu); 924 + set_cyc2ns_scale(tsc_khz, freq->cpu); 925 + } 926 926 927 927 return 0; 928 928 }
+1
arch/x86/kvm/svm.c
··· 1462 1462 */ 1463 1463 if (var->unusable) 1464 1464 var->db = 0; 1465 + var->dpl = to_svm(vcpu)->vmcb->save.cpl; 1465 1466 break; 1466 1467 } 1467 1468 }
+1 -1
arch/x86/kvm/x86.c
··· 1898 1898 if (!(data & HV_X64_MSR_TSC_REFERENCE_ENABLE)) 1899 1899 break; 1900 1900 gfn = data >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT; 1901 - if (kvm_write_guest(kvm, data, 1901 + if (kvm_write_guest(kvm, gfn << HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT, 1902 1902 &tsc_ref, sizeof(tsc_ref))) 1903 1903 return 1; 1904 1904 mark_page_dirty(kvm, gfn);
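The KVM fix above writes the TSC reference page via a guest physical address rebuilt from the gfn, not the raw MSR value, whose low bits still carry the enable flag. A sketch, assuming the usual 12-bit value for `HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT`:

```c
/* Sketch of the addressing fix above: the MSR data holds the page
 * frame number in its upper bits and control flags (including the
 * enable bit) in the low bits, so the write target must be
 * gfn << shift rather than the raw MSR value. */
#include <assert.h>
#include <stdint.h>

#define TSC_REF_ADDR_SHIFT 12	/* assumed shift value */

static uint64_t tsc_page_gpa(uint64_t msr_data)
{
	uint64_t gfn = msr_data >> TSC_REF_ADDR_SHIFT;

	return gfn << TSC_REF_ADDR_SHIFT;	/* flag bits cleared */
}
```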
+18 -6
arch/x86/vdso/Makefile
··· 11 11 12 12 # files to link into the vdso 13 13 vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vdso-fakesections.o 14 - vobjs-nox32 := vdso-fakesections.o 15 14 16 15 # files to link into kernel 17 16 obj-y += vma.o ··· 66 67 # 67 68 CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \ 68 69 $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \ 69 - -fno-omit-frame-pointer -foptimize-sibling-calls 70 + -fno-omit-frame-pointer -foptimize-sibling-calls \ 71 + -DDISABLE_BRANCH_PROFILING 70 72 71 73 $(vobjs): KBUILD_CFLAGS += $(CFL) 72 74 ··· 134 134 135 135 targets += vdso32/vdso32.lds 136 136 targets += vdso32/note.o vdso32/vclock_gettime.o $(vdso32.so-y:%=vdso32/%.o) 137 - targets += vdso32/vclock_gettime.o 137 + targets += vdso32/vclock_gettime.o vdso32/vdso-fakesections.o 138 138 139 139 $(obj)/vdso32.o: $(vdso32-images:%=$(obj)/%) 140 140 ··· 150 150 KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector) 151 151 KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls) 152 152 KBUILD_CFLAGS_32 += -fno-omit-frame-pointer 153 + KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING 153 154 $(vdso32-images:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32) 154 155 155 156 $(vdso32-images:%=$(obj)/%.dbg): $(obj)/vdso32-%.so.dbg: FORCE \ 156 157 $(obj)/vdso32/vdso32.lds \ 157 158 $(obj)/vdso32/vclock_gettime.o \ 159 + $(obj)/vdso32/vdso-fakesections.o \ 158 160 $(obj)/vdso32/note.o \ 159 161 $(obj)/vdso32/%.o 160 162 $(call if_changed,vdso) ··· 171 169 sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@' 172 170 173 171 VDSO_LDFLAGS = -fPIC -shared $(call cc-ldoption, -Wl$(comma)--hash-style=sysv) \ 174 - -Wl,-Bsymbolic $(LTO_CFLAGS) 172 + $(call cc-ldoption, -Wl$(comma)--build-id) -Wl,-Bsymbolic $(LTO_CFLAGS) 175 173 GCOV_PROFILE := n 176 174 177 175 # 178 - # Install the unstripped copies of vdso*.so. 176 + # Install the unstripped copies of vdso*.so. 
If our toolchain supports 177 + # build-id, install .build-id links as well. 179 178 # 180 179 quiet_cmd_vdso_install = INSTALL $(@:install_%=%) 181 - cmd_vdso_install = cp $< $(MODLIB)/vdso/$(@:install_%=%) 180 + define cmd_vdso_install 181 + cp $< "$(MODLIB)/vdso/$(@:install_%=%)"; \ 182 + if readelf -n $< |grep -q 'Build ID'; then \ 183 + buildid=`readelf -n $< |grep 'Build ID' |sed -e 's/^.*Build ID: \(.*\)$$/\1/'`; \ 184 + first=`echo $$buildid | cut -b-2`; \ 185 + last=`echo $$buildid | cut -b3-`; \ 186 + mkdir -p "$(MODLIB)/vdso/.build-id/$$first"; \ 187 + ln -sf "../../$(@:install_%=%)" "$(MODLIB)/vdso/.build-id/$$first/$$last.debug"; \ 188 + fi 189 + endef 182 190 183 191 vdso_img_insttargets := $(vdso_img_sodbg:%.dbg=install_%) 184 192
-3
arch/x86/vdso/vclock_gettime.c
··· 11 11 * Check with readelf after changing. 12 12 */ 13 13 14 - /* Disable profiling for userspace code: */ 15 - #define DISABLE_BRANCH_PROFILING 16 - 17 14 #include <uapi/linux/time.h> 18 15 #include <asm/vgtod.h> 19 16 #include <asm/hpet.h>
+15 -26
arch/x86/vdso/vdso-fakesections.c
··· 2 2 * Copyright 2014 Andy Lutomirski 3 3 * Subject to the GNU Public License, v.2 4 4 * 5 - * Hack to keep broken Go programs working. 6 - * 7 - * The Go runtime had a couple of bugs: it would read the section table to try 8 - * to figure out how many dynamic symbols there were (it shouldn't have looked 9 - * at the section table at all) and, if there were no SHT_SYNDYM section table 10 - * entry, it would use an uninitialized value for the number of symbols. As a 11 - * workaround, we supply a minimal section table. vdso2c will adjust the 12 - * in-memory image so that "vdso_fake_sections" becomes the section table. 13 - * 14 - * The bug was introduced by: 15 - * https://code.google.com/p/go/source/detail?r=56ea40aac72b (2012-08-31) 16 - * and is being addressed in the Go runtime in this issue: 17 - * https://code.google.com/p/go/issues/detail?id=8197 5 + * String table for loadable section headers. See vdso2c.h for why 6 + * this exists. 18 7 */ 19 8 20 - #ifndef __x86_64__ 21 - #error This hack is specific to the 64-bit vDSO 22 - #endif 23 - 24 - #include <linux/elf.h> 25 - 26 - extern const __visible struct elf64_shdr vdso_fake_sections[]; 27 - const __visible struct elf64_shdr vdso_fake_sections[] = { 28 - { 29 - .sh_type = SHT_DYNSYM, 30 - .sh_entsize = sizeof(Elf64_Sym), 31 - } 32 - }; 9 + const char fake_shstrtab[] __attribute__((section(".fake_shstrtab"))) = 10 + ".hash\0" 11 + ".dynsym\0" 12 + ".dynstr\0" 13 + ".gnu.version\0" 14 + ".gnu.version_d\0" 15 + ".dynamic\0" 16 + ".rodata\0" 17 + ".fake_shstrtab\0" /* Yay, self-referential code. */ 18 + ".note\0" 19 + ".eh_frame_hdr\0" 20 + ".eh_frame\0" 21 + ".text";
+46 -18
arch/x86/vdso/vdso-layout.lds.S
··· 6 6 * This script controls its layout. 7 7 */ 8 8 9 + #if defined(BUILD_VDSO64) 10 + # define SHDR_SIZE 64 11 + #elif defined(BUILD_VDSO32) || defined(BUILD_VDSOX32) 12 + # define SHDR_SIZE 40 13 + #else 14 + # error unknown VDSO target 15 + #endif 16 + 17 + #define NUM_FAKE_SHDRS 13 18 + 9 19 SECTIONS 10 20 { 11 21 . = SIZEOF_HEADERS; ··· 28 18 .gnu.version_d : { *(.gnu.version_d) } 29 19 .gnu.version_r : { *(.gnu.version_r) } 30 20 21 + .dynamic : { *(.dynamic) } :text :dynamic 22 + 23 + .rodata : { 24 + *(.rodata*) 25 + *(.data*) 26 + *(.sdata*) 27 + *(.got.plt) *(.got) 28 + *(.gnu.linkonce.d.*) 29 + *(.bss*) 30 + *(.dynbss*) 31 + *(.gnu.linkonce.b.*) 32 + 33 + /* 34 + * Ideally this would live in a C file, but that won't 35 + * work cleanly for x32 until we start building the x32 36 + * C code using an x32 toolchain. 37 + */ 38 + VDSO_FAKE_SECTION_TABLE_START = .; 39 + . = . + NUM_FAKE_SHDRS * SHDR_SIZE; 40 + VDSO_FAKE_SECTION_TABLE_END = .; 41 + } :text 42 + 43 + .fake_shstrtab : { *(.fake_shstrtab) } :text 44 + 45 + 31 46 .note : { *(.note.*) } :text :note 32 47 33 48 .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr 34 49 .eh_frame : { KEEP (*(.eh_frame)) } :text 35 50 36 - .dynamic : { *(.dynamic) } :text :dynamic 37 - 38 - .rodata : { *(.rodata*) } :text 39 - .data : { 40 - *(.data*) 41 - *(.sdata*) 42 - *(.got.plt) *(.got) 43 - *(.gnu.linkonce.d.*) 44 - *(.bss*) 45 - *(.dynbss*) 46 - *(.gnu.linkonce.b.*) 47 - } 48 - 49 - .altinstructions : { *(.altinstructions) } 50 - .altinstr_replacement : { *(.altinstr_replacement) } 51 51 52 52 /* 53 - * Align the actual code well away from the non-instruction data. 54 - * This is the best thing for the I-cache. 53 + * Text is well-separated from actual data: there's plenty of 54 + * stuff that isn't used at runtime in between. 55 55 */ 56 - . = ALIGN(0x100); 57 56 58 57 .text : { *(.text*) } :text =0x90909090, 58 + 59 + /* 60 + * At the end so that eu-elflint stays happy when vdso2c strips 61 + * these. 
A better implementation would avoid allocating space 62 + * for these. 63 + */ 64 + .altinstructions : { *(.altinstructions) } :text 65 + .altinstr_replacement : { *(.altinstr_replacement) } :text 59 66 60 67 /* 61 68 * The remainder of the vDSO consists of special pages that are ··· 102 75 /DISCARD/ : { 103 76 *(.discard) 104 77 *(.discard.*) 78 + *(__bug_table) 105 79 } 106 80 } 107 81
+2
arch/x86/vdso/vdso.lds.S
··· 6 6 * the DSO. 7 7 */ 8 8 9 + #define BUILD_VDSO64 10 + 9 11 #include "vdso-layout.lds.S" 10 12 11 13 /*
+35 -38
arch/x86/vdso/vdso2c.c
··· 23 23 sym_vvar_page, 24 24 sym_hpet_page, 25 25 sym_end_mapping, 26 + sym_VDSO_FAKE_SECTION_TABLE_START, 27 + sym_VDSO_FAKE_SECTION_TABLE_END, 26 28 }; 27 29 28 30 const int special_pages[] = { ··· 32 30 sym_hpet_page, 33 31 }; 34 32 35 - char const * const required_syms[] = { 36 - [sym_vvar_page] = "vvar_page", 37 - [sym_hpet_page] = "hpet_page", 38 - [sym_end_mapping] = "end_mapping", 39 - "VDSO32_NOTE_MASK", 40 - "VDSO32_SYSENTER_RETURN", 41 - "__kernel_vsyscall", 42 - "__kernel_sigreturn", 43 - "__kernel_rt_sigreturn", 33 + struct vdso_sym { 34 + const char *name; 35 + bool export; 36 + }; 37 + 38 + struct vdso_sym required_syms[] = { 39 + [sym_vvar_page] = {"vvar_page", true}, 40 + [sym_hpet_page] = {"hpet_page", true}, 41 + [sym_end_mapping] = {"end_mapping", true}, 42 + [sym_VDSO_FAKE_SECTION_TABLE_START] = { 43 + "VDSO_FAKE_SECTION_TABLE_START", false 44 + }, 45 + [sym_VDSO_FAKE_SECTION_TABLE_END] = { 46 + "VDSO_FAKE_SECTION_TABLE_END", false 47 + }, 48 + {"VDSO32_NOTE_MASK", true}, 49 + {"VDSO32_SYSENTER_RETURN", true}, 50 + {"__kernel_vsyscall", true}, 51 + {"__kernel_sigreturn", true}, 52 + {"__kernel_rt_sigreturn", true}, 44 53 }; 45 54 46 55 __attribute__((format(printf, 1, 2))) __attribute__((noreturn)) ··· 96 83 97 84 #define NSYMS (sizeof(required_syms) / sizeof(required_syms[0])) 98 85 99 - #define BITS 64 100 - #define GOFUNC go64 101 - #define Elf_Ehdr Elf64_Ehdr 102 - #define Elf_Shdr Elf64_Shdr 103 - #define Elf_Phdr Elf64_Phdr 104 - #define Elf_Sym Elf64_Sym 105 - #define Elf_Dyn Elf64_Dyn 106 - #include "vdso2c.h" 107 - #undef BITS 108 - #undef GOFUNC 109 - #undef Elf_Ehdr 110 - #undef Elf_Shdr 111 - #undef Elf_Phdr 112 - #undef Elf_Sym 113 - #undef Elf_Dyn 86 + #define BITSFUNC3(name, bits) name##bits 87 + #define BITSFUNC2(name, bits) BITSFUNC3(name, bits) 88 + #define BITSFUNC(name) BITSFUNC2(name, ELF_BITS) 114 89 115 - #define BITS 32 116 - #define GOFUNC go32 117 - #define Elf_Ehdr Elf32_Ehdr 118 - #define Elf_Shdr Elf32_Shdr 119 - 
#define Elf_Phdr Elf32_Phdr 120 - #define Elf_Sym Elf32_Sym 121 - #define Elf_Dyn Elf32_Dyn 90 + #define ELF_BITS_XFORM2(bits, x) Elf##bits##_##x 91 + #define ELF_BITS_XFORM(bits, x) ELF_BITS_XFORM2(bits, x) 92 + #define ELF(x) ELF_BITS_XFORM(ELF_BITS, x) 93 + 94 + #define ELF_BITS 64 122 95 #include "vdso2c.h" 123 - #undef BITS 124 - #undef GOFUNC 125 - #undef Elf_Ehdr 126 - #undef Elf_Shdr 127 - #undef Elf_Phdr 128 - #undef Elf_Sym 129 - #undef Elf_Dyn 96 + #undef ELF_BITS 97 + 98 + #define ELF_BITS 32 99 + #include "vdso2c.h" 100 + #undef ELF_BITS 130 101 131 102 static void go(void *addr, size_t len, FILE *outfile, const char *name) 132 103 {
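The `BITSFUNC()`/`ELF()` macros above use two-level token pasting so that `ELF_BITS` is fully expanded before `##` concatenates, letting one header be included twice for 64-bit and 32-bit variants. A minimal sketch of the same trick (macro and function names ours):

```c
/* Sketch of the two-level paste used by vdso2c: the PASTE2
 * indirection forces ELF_BITS to expand to 64 before PASTE3
 * glues the tokens together. */
#include <assert.h>

#define PASTE3(name, bits) name##bits
#define PASTE2(name, bits) PASTE3(name, bits)
#define BITSFUNC(name)     PASTE2(name, ELF_BITS)

#define ELF_BITS 64
static int BITSFUNC(width)(void)	/* defines width64() */
{
	return ELF_BITS;
}
#undef ELF_BITS
```

Without the extra `PASTE2` layer, `name##ELF_BITS` would paste the literal token `ELF_BITS` instead of its value.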
+172 -30
arch/x86/vdso/vdso2c.h
··· 4 4 * are built for 32-bit userspace. 5 5 */ 6 6 7 - static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) 7 + /* 8 + * We're writing a section table for a few reasons: 9 + * 10 + * The Go runtime had a couple of bugs: it would read the section 11 + * table to try to figure out how many dynamic symbols there were (it 12 + * shouldn't have looked at the section table at all) and, if there 13 + * were no SHT_DYNSYM section table entry, it would use an 14 + * uninitialized value for the number of symbols. An empty DYNSYM 15 + * table would work, but I see no reason not to write a valid one (and 16 + * keep full performance for old Go programs). This hack is only 17 + * needed on x86_64. 18 + * 19 + * The bug was introduced on 2012-08-31 by: 20 + * https://code.google.com/p/go/source/detail?r=56ea40aac72b 21 + * and was fixed on 2014-06-13 by: 22 + * https://code.google.com/p/go/source/detail?r=fc1cd5e12595 23 + * 24 + * Binutils has issues debugging the vDSO: it reads the section table to 25 + * find SHT_NOTE; it won't look at PT_NOTE for the in-memory vDSO, which 26 + * would break build-id if we removed the section table. Binutils 27 + * also requires that shstrndx != 0. See: 28 + * https://sourceware.org/bugzilla/show_bug.cgi?id=17064 29 + * 30 + * elfutils might not look for PT_NOTE if there is a section table at 31 + * all. I don't know whether this matters for any practical purpose. 32 + * 33 + * For simplicity, rather than hacking up a partial section table, we 34 + * just write a mostly complete one. We omit non-dynamic symbols, 35 + * though, since they're rather large. 36 + * 37 + * Once binutils gets fixed, we might be able to drop this for all but 38 + * the 64-bit vdso, since build-id only works in kernel RPMs, and 39 + * systems that update to new enough kernel RPMs will likely update 40 + * binutils in sync. 
build-id has never worked for home-built kernel 41 + * RPMs without manual symlinking, and I suspect that no one ever does 42 + * that. 43 + */ 44 + struct BITSFUNC(fake_sections) 45 + { 46 + ELF(Shdr) *table; 47 + unsigned long table_offset; 48 + int count, max_count; 49 + 50 + int in_shstrndx; 51 + unsigned long shstr_offset; 52 + const char *shstrtab; 53 + size_t shstrtab_len; 54 + 55 + int out_shstrndx; 56 + }; 57 + 58 + static unsigned int BITSFUNC(find_shname)(struct BITSFUNC(fake_sections) *out, 59 + const char *name) 60 + { 61 + const char *outname = out->shstrtab; 62 + while (outname - out->shstrtab < out->shstrtab_len) { 63 + if (!strcmp(name, outname)) 64 + return (outname - out->shstrtab) + out->shstr_offset; 65 + outname += strlen(outname) + 1; 66 + } 67 + 68 + if (*name) 69 + printf("Warning: could not find output name \"%s\"\n", name); 70 + return out->shstr_offset + out->shstrtab_len - 1; /* Use a null. */ 71 + } 72 + 73 + static void BITSFUNC(init_sections)(struct BITSFUNC(fake_sections) *out) 74 + { 75 + if (!out->in_shstrndx) 76 + fail("didn't find the fake shstrndx\n"); 77 + 78 + memset(out->table, 0, out->max_count * sizeof(ELF(Shdr))); 79 + 80 + if (out->max_count < 1) 81 + fail("we need at least two fake output sections\n"); 82 + 83 + PUT_LE(&out->table[0].sh_type, SHT_NULL); 84 + PUT_LE(&out->table[0].sh_name, BITSFUNC(find_shname)(out, "")); 85 + 86 + out->count = 1; 87 + } 88 + 89 + static void BITSFUNC(copy_section)(struct BITSFUNC(fake_sections) *out, 90 + int in_idx, const ELF(Shdr) *in, 91 + const char *name) 92 + { 93 + uint64_t flags = GET_LE(&in->sh_flags); 94 + 95 + bool copy = flags & SHF_ALLOC && 96 + (GET_LE(&in->sh_size) || 97 + (GET_LE(&in->sh_type) != SHT_RELA && 98 + GET_LE(&in->sh_type) != SHT_REL)) && 99 + strcmp(name, ".altinstructions") && 100 + strcmp(name, ".altinstr_replacement"); 101 + 102 + if (!copy) 103 + return; 104 + 105 + if (out->count >= out->max_count) 106 + fail("too many copied sections (max = %d)\n", 
out->max_count); 107 + 108 + if (in_idx == out->in_shstrndx) 109 + out->out_shstrndx = out->count; 110 + 111 + out->table[out->count] = *in; 112 + PUT_LE(&out->table[out->count].sh_name, 113 + BITSFUNC(find_shname)(out, name)); 114 + 115 + /* elfutils requires that a strtab have the correct type. */ 116 + if (!strcmp(name, ".fake_shstrtab")) 117 + PUT_LE(&out->table[out->count].sh_type, SHT_STRTAB); 118 + 119 + out->count++; 120 + } 121 + 122 + static void BITSFUNC(go)(void *addr, size_t len, 123 + FILE *outfile, const char *name) 8 124 { 9 125 int found_load = 0; 10 126 unsigned long load_size = -1; /* Work around bogus warning */ 11 127 unsigned long data_size; 12 - Elf_Ehdr *hdr = (Elf_Ehdr *)addr; 128 + ELF(Ehdr) *hdr = (ELF(Ehdr) *)addr; 13 129 int i; 14 130 unsigned long j; 15 - Elf_Shdr *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr, 131 + ELF(Shdr) *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr, 16 132 *alt_sec = NULL; 17 - Elf_Dyn *dyn = 0, *dyn_end = 0; 133 + ELF(Dyn) *dyn = 0, *dyn_end = 0; 18 134 const char *secstrings; 19 135 uint64_t syms[NSYMS] = {}; 20 136 21 - uint64_t fake_sections_value = 0, fake_sections_size = 0; 137 + struct BITSFUNC(fake_sections) fake_sections = {}; 22 138 23 - Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(&hdr->e_phoff)); 139 + ELF(Phdr) *pt = (ELF(Phdr) *)(addr + GET_LE(&hdr->e_phoff)); 24 140 25 141 /* Walk the segment table. 
*/ 26 142 for (i = 0; i < GET_LE(&hdr->e_phnum); i++) { ··· 167 51 for (i = 0; dyn + i < dyn_end && 168 52 GET_LE(&dyn[i].d_tag) != DT_NULL; i++) { 169 53 typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag); 170 - if (tag == DT_REL || tag == DT_RELSZ || 54 + if (tag == DT_REL || tag == DT_RELSZ || tag == DT_RELA || 171 55 tag == DT_RELENT || tag == DT_TEXTREL) 172 56 fail("vdso image contains dynamic relocations\n"); 173 57 } ··· 177 61 GET_LE(&hdr->e_shentsize)*GET_LE(&hdr->e_shstrndx); 178 62 secstrings = addr + GET_LE(&secstrings_hdr->sh_offset); 179 63 for (i = 0; i < GET_LE(&hdr->e_shnum); i++) { 180 - Elf_Shdr *sh = addr + GET_LE(&hdr->e_shoff) + 64 + ELF(Shdr) *sh = addr + GET_LE(&hdr->e_shoff) + 181 65 GET_LE(&hdr->e_shentsize) * i; 182 66 if (GET_LE(&sh->sh_type) == SHT_SYMTAB) 183 67 symtab_hdr = sh; ··· 198 82 i < GET_LE(&symtab_hdr->sh_size) / GET_LE(&symtab_hdr->sh_entsize); 199 83 i++) { 200 84 int k; 201 - Elf_Sym *sym = addr + GET_LE(&symtab_hdr->sh_offset) + 85 + ELF(Sym) *sym = addr + GET_LE(&symtab_hdr->sh_offset) + 202 86 GET_LE(&symtab_hdr->sh_entsize) * i; 203 87 const char *name = addr + GET_LE(&strtab_hdr->sh_offset) + 204 88 GET_LE(&sym->st_name); 205 89 206 90 for (k = 0; k < NSYMS; k++) { 207 - if (!strcmp(name, required_syms[k])) { 91 + if (!strcmp(name, required_syms[k].name)) { 208 92 if (syms[k]) { 209 93 fail("duplicate symbol %s\n", 210 - required_syms[k]); 94 + required_syms[k].name); 211 95 } 212 96 syms[k] = GET_LE(&sym->st_value); 213 97 } 214 98 } 215 99 216 - if (!strcmp(name, "vdso_fake_sections")) { 217 - if (fake_sections_value) 218 - fail("duplicate vdso_fake_sections\n"); 219 - fake_sections_value = GET_LE(&sym->st_value); 220 - fake_sections_size = GET_LE(&sym->st_size); 100 + if (!strcmp(name, "fake_shstrtab")) { 101 + ELF(Shdr) *sh; 102 + 103 + fake_sections.in_shstrndx = GET_LE(&sym->st_shndx); 104 + fake_sections.shstrtab = addr + GET_LE(&sym->st_value); 105 + fake_sections.shstrtab_len = GET_LE(&sym->st_size); 106 + sh 
= addr + GET_LE(&hdr->e_shoff) + 107 + GET_LE(&hdr->e_shentsize) * 108 + fake_sections.in_shstrndx; 109 + fake_sections.shstr_offset = GET_LE(&sym->st_value) - 110 + GET_LE(&sh->sh_addr); 221 111 } 222 112 } 113 + 114 + /* Build the output section table. */ 115 + if (!syms[sym_VDSO_FAKE_SECTION_TABLE_START] || 116 + !syms[sym_VDSO_FAKE_SECTION_TABLE_END]) 117 + fail("couldn't find fake section table\n"); 118 + if ((syms[sym_VDSO_FAKE_SECTION_TABLE_END] - 119 + syms[sym_VDSO_FAKE_SECTION_TABLE_START]) % sizeof(ELF(Shdr))) 120 + fail("fake section table size isn't a multiple of sizeof(Shdr)\n"); 121 + fake_sections.table = addr + syms[sym_VDSO_FAKE_SECTION_TABLE_START]; 122 + fake_sections.table_offset = syms[sym_VDSO_FAKE_SECTION_TABLE_START]; 123 + fake_sections.max_count = (syms[sym_VDSO_FAKE_SECTION_TABLE_END] - 124 + syms[sym_VDSO_FAKE_SECTION_TABLE_START]) / 125 + sizeof(ELF(Shdr)); 126 + 127 + BITSFUNC(init_sections)(&fake_sections); 128 + for (i = 0; i < GET_LE(&hdr->e_shnum); i++) { 129 + ELF(Shdr) *sh = addr + GET_LE(&hdr->e_shoff) + 130 + GET_LE(&hdr->e_shentsize) * i; 131 + BITSFUNC(copy_section)(&fake_sections, i, sh, 132 + secstrings + GET_LE(&sh->sh_name)); 133 + } 134 + if (!fake_sections.out_shstrndx) 135 + fail("didn't generate shstrndx?!?\n"); 136 + 137 + PUT_LE(&hdr->e_shoff, fake_sections.table_offset); 138 + PUT_LE(&hdr->e_shentsize, sizeof(ELF(Shdr))); 139 + PUT_LE(&hdr->e_shnum, fake_sections.count); 140 + PUT_LE(&hdr->e_shstrndx, fake_sections.out_shstrndx); 223 141 224 142 /* Validate mapping addresses. 
*/ 225 143 for (i = 0; i < sizeof(special_pages) / sizeof(special_pages[0]); i++) { ··· 262 112 263 113 if (syms[i] % 4096) 264 114 fail("%s must be a multiple of 4096\n", 265 - required_syms[i]); 115 + required_syms[i].name); 266 116 if (syms[i] < data_size) 267 117 fail("%s must be after the text mapping\n", 268 - required_syms[i]); 118 + required_syms[i].name); 269 119 if (syms[sym_end_mapping] < syms[i] + 4096) 270 - fail("%s overruns end_mapping\n", required_syms[i]); 120 + fail("%s overruns end_mapping\n", 121 + required_syms[i].name); 271 122 } 272 123 if (syms[sym_end_mapping] % 4096) 273 124 fail("end_mapping must be a multiple of 4096\n"); 274 - 275 - /* Remove sections or use fakes */ 276 - if (fake_sections_size % sizeof(Elf_Shdr)) 277 - fail("vdso_fake_sections size is not a multiple of %ld\n", 278 - (long)sizeof(Elf_Shdr)); 279 - PUT_LE(&hdr->e_shoff, fake_sections_value); 280 - PUT_LE(&hdr->e_shentsize, fake_sections_value ? sizeof(Elf_Shdr) : 0); 281 - PUT_LE(&hdr->e_shnum, fake_sections_size / sizeof(Elf_Shdr)); 282 - PUT_LE(&hdr->e_shstrndx, SHN_UNDEF); 283 125 284 126 if (!name) { 285 127 fwrite(addr, load_size, 1, outfile); ··· 310 168 (unsigned long)GET_LE(&alt_sec->sh_size)); 311 169 } 312 170 for (i = 0; i < NSYMS; i++) { 313 - if (syms[i]) 171 + if (required_syms[i].export && syms[i]) 314 172 fprintf(outfile, "\t.sym_%s = 0x%" PRIx64 ",\n", 315 - required_syms[i], syms[i]); 173 + required_syms[i].name, syms[i]); 316 174 } 317 175 fprintf(outfile, "};\n"); 318 176 }
+1
arch/x86/vdso/vdso32/vdso-fakesections.c
··· 1 + #include "../vdso-fakesections.c"
+2
arch/x86/vdso/vdsox32.lds.S
··· 6 6 * the DSO. 7 7 */ 8 8 9 + #define BUILD_VDSOX32 10 + 9 11 #include "vdso-layout.lds.S" 10 12 11 13 /*
+4
arch/x86/vdso/vma.c
··· 62 62 Only used for the 64-bit and x32 vdsos. */ 63 63 static unsigned long vdso_addr(unsigned long start, unsigned len) 64 64 { 65 + #ifdef CONFIG_X86_32 66 + return 0; 67 + #else 65 68 unsigned long addr, end; 66 69 unsigned offset; 67 70 end = (start + PMD_SIZE - 1) & PMD_MASK; ··· 86 83 addr = align_vdso_addr(addr); 87 84 88 85 return addr; 86 + #endif 89 87 } 90 88 91 89 static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+8
block/bio.c
··· 746 746 747 747 goto done; 748 748 } 749 + 750 + /* 751 + * If the queue doesn't support SG gaps and adding this 752 + * offset would create a gap, disallow it. 753 + */ 754 + if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS) && 755 + bvec_gap_to_prev(prev, offset)) 756 + return 0; 749 757 } 750 758 751 759 if (bio->bi_vcnt >= bio->bi_max_vecs)
+3 -6
block/blk-cgroup.c
··· 80 80 blkg->q = q; 81 81 INIT_LIST_HEAD(&blkg->q_node); 82 82 blkg->blkcg = blkcg; 83 - blkg->refcnt = 1; 83 + atomic_set(&blkg->refcnt, 1); 84 84 85 85 /* root blkg uses @q->root_rl, init rl only for !root blkgs */ 86 86 if (blkcg != &blkcg_root) { ··· 399 399 400 400 /* release the blkcg and parent blkg refs this blkg has been holding */ 401 401 css_put(&blkg->blkcg->css); 402 - if (blkg->parent) { 403 - spin_lock_irq(blkg->q->queue_lock); 402 + if (blkg->parent) 404 403 blkg_put(blkg->parent); 405 - spin_unlock_irq(blkg->q->queue_lock); 406 - } 407 404 408 405 blkg_free(blkg); 409 406 } ··· 1090 1093 * Register @pol with blkcg core. Might sleep and @pol may be modified on 1091 1094 * successful registration. Returns 0 on success and -errno on failure. 1092 1095 */ 1093 - int __init blkcg_policy_register(struct blkcg_policy *pol) 1096 + int blkcg_policy_register(struct blkcg_policy *pol) 1094 1097 { 1095 1098 int i, ret; 1096 1099
+9 -12
block/blk-cgroup.h
··· 18 18 #include <linux/seq_file.h> 19 19 #include <linux/radix-tree.h> 20 20 #include <linux/blkdev.h> 21 + #include <linux/atomic.h> 21 22 22 23 /* Max limits for throttle policy */ 23 24 #define THROTL_IOPS_MAX UINT_MAX ··· 105 104 struct request_list rl; 106 105 107 106 /* reference count */ 108 - int refcnt; 107 + atomic_t refcnt; 109 108 110 109 /* is this blkg online? protected by both blkcg and q locks */ 111 110 bool online; ··· 146 145 void blkcg_exit_queue(struct request_queue *q); 147 146 148 147 /* Blkio controller policy registration */ 149 - int __init blkcg_policy_register(struct blkcg_policy *pol); 148 + int blkcg_policy_register(struct blkcg_policy *pol); 150 149 void blkcg_policy_unregister(struct blkcg_policy *pol); 151 150 int blkcg_activate_policy(struct request_queue *q, 152 151 const struct blkcg_policy *pol); ··· 258 257 * blkg_get - get a blkg reference 259 258 * @blkg: blkg to get 260 259 * 261 - * The caller should be holding queue_lock and an existing reference. 260 + * The caller should be holding an existing reference. 262 261 */ 263 262 static inline void blkg_get(struct blkcg_gq *blkg) 264 263 { 265 - lockdep_assert_held(blkg->q->queue_lock); 266 - WARN_ON_ONCE(!blkg->refcnt); 267 - blkg->refcnt++; 264 + WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0); 265 + atomic_inc(&blkg->refcnt); 268 266 } 269 267 270 268 void __blkg_release_rcu(struct rcu_head *rcu); ··· 271 271 /** 272 272 * blkg_put - put a blkg reference 273 273 * @blkg: blkg to put 274 - * 275 - * The caller should be holding queue_lock. 
276 274 */ 277 275 static inline void blkg_put(struct blkcg_gq *blkg) 278 276 { 279 - lockdep_assert_held(blkg->q->queue_lock); 280 - WARN_ON_ONCE(blkg->refcnt <= 0); 281 - if (!--blkg->refcnt) 277 + WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0); 278 + if (atomic_dec_and_test(&blkg->refcnt)) 282 279 call_rcu(&blkg->rcu_head, __blkg_release_rcu); 283 280 } 284 281 ··· 577 580 static inline int blkcg_init_queue(struct request_queue *q) { return 0; } 578 581 static inline void blkcg_drain_queue(struct request_queue *q) { } 579 582 static inline void blkcg_exit_queue(struct request_queue *q) { } 580 - static inline int __init blkcg_policy_register(struct blkcg_policy *pol) { return 0; } 583 + static inline int blkcg_policy_register(struct blkcg_policy *pol) { return 0; } 581 584 static inline void blkcg_policy_unregister(struct blkcg_policy *pol) { } 582 585 static inline int blkcg_activate_policy(struct request_queue *q, 583 586 const struct blkcg_policy *pol) { return 0; }
+10
block/blk-merge.c
··· 568 568 569 569 bool blk_rq_merge_ok(struct request *rq, struct bio *bio) 570 570 { 571 + struct request_queue *q = rq->q; 572 + 571 573 if (!rq_mergeable(rq) || !bio_mergeable(bio)) 572 574 return false; 573 575 ··· 592 590 if (rq->cmd_flags & REQ_WRITE_SAME && 593 591 !blk_write_same_mergeable(rq->bio, bio)) 594 592 return false; 593 + 594 + if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS)) { 595 + struct bio_vec *bprev; 596 + 597 + bprev = &rq->biotail->bi_io_vec[bio->bi_vcnt - 1]; 598 + if (bvec_gap_to_prev(bprev, bio->bi_io_vec[0].bv_offset)) 599 + return false; 600 + } 595 601 596 602 return true; 597 603 }
+1 -1
block/blk-mq.c
··· 878 878 clear_bit(BLK_MQ_S_STOPPED, &hctx->state); 879 879 880 880 preempt_disable(); 881 - __blk_mq_run_hw_queue(hctx); 881 + blk_mq_run_hw_queue(hctx, false); 882 882 preempt_enable(); 883 883 } 884 884 EXPORT_SYMBOL(blk_mq_start_hw_queue);
+1 -1
block/elevator.c
··· 825 825 } 826 826 EXPORT_SYMBOL(elv_unregister_queue); 827 827 828 - int __init elv_register(struct elevator_type *e) 828 + int elv_register(struct elevator_type *e) 829 829 { 830 830 char *def = ""; 831 831
+129 -3
drivers/acpi/ac.c
··· 30 30 #include <linux/types.h> 31 31 #include <linux/dmi.h> 32 32 #include <linux/delay.h> 33 + #ifdef CONFIG_ACPI_PROCFS_POWER 34 + #include <linux/proc_fs.h> 35 + #include <linux/seq_file.h> 36 + #endif 33 37 #include <linux/platform_device.h> 34 38 #include <linux/power_supply.h> 35 39 #include <linux/acpi.h> ··· 56 52 MODULE_DESCRIPTION("ACPI AC Adapter Driver"); 57 53 MODULE_LICENSE("GPL"); 58 54 55 + 59 56 static int acpi_ac_add(struct acpi_device *device); 60 57 static int acpi_ac_remove(struct acpi_device *device); 61 58 static void acpi_ac_notify(struct acpi_device *device, u32 event); ··· 71 66 static int acpi_ac_resume(struct device *dev); 72 67 #endif 73 68 static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume); 69 + 70 + #ifdef CONFIG_ACPI_PROCFS_POWER 71 + extern struct proc_dir_entry *acpi_lock_ac_dir(void); 72 + extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir); 73 + static int acpi_ac_open_fs(struct inode *inode, struct file *file); 74 + #endif 75 + 74 76 75 77 static int ac_sleep_before_get_state_ms; 76 78 ··· 102 90 }; 103 91 104 92 #define to_acpi_ac(x) container_of(x, struct acpi_ac, charger) 93 + 94 + #ifdef CONFIG_ACPI_PROCFS_POWER 95 + static const struct file_operations acpi_ac_fops = { 96 + .owner = THIS_MODULE, 97 + .open = acpi_ac_open_fs, 98 + .read = seq_read, 99 + .llseek = seq_lseek, 100 + .release = single_release, 101 + }; 102 + #endif 105 103 106 104 /* -------------------------------------------------------------------------- 107 105 AC Adapter Management ··· 164 142 static enum power_supply_property ac_props[] = { 165 143 POWER_SUPPLY_PROP_ONLINE, 166 144 }; 145 + 146 + #ifdef CONFIG_ACPI_PROCFS_POWER 147 + /* -------------------------------------------------------------------------- 148 + FS Interface (/proc) 149 + -------------------------------------------------------------------------- */ 150 + 151 + static struct proc_dir_entry *acpi_ac_dir; 152 + 153 + static int acpi_ac_seq_show(struct seq_file 
*seq, void *offset) 154 + { 155 + struct acpi_ac *ac = seq->private; 156 + 157 + 158 + if (!ac) 159 + return 0; 160 + 161 + if (acpi_ac_get_state(ac)) { 162 + seq_puts(seq, "ERROR: Unable to read AC Adapter state\n"); 163 + return 0; 164 + } 165 + 166 + seq_puts(seq, "state: "); 167 + switch (ac->state) { 168 + case ACPI_AC_STATUS_OFFLINE: 169 + seq_puts(seq, "off-line\n"); 170 + break; 171 + case ACPI_AC_STATUS_ONLINE: 172 + seq_puts(seq, "on-line\n"); 173 + break; 174 + default: 175 + seq_puts(seq, "unknown\n"); 176 + break; 177 + } 178 + 179 + return 0; 180 + } 181 + 182 + static int acpi_ac_open_fs(struct inode *inode, struct file *file) 183 + { 184 + return single_open(file, acpi_ac_seq_show, PDE_DATA(inode)); 185 + } 186 + 187 + static int acpi_ac_add_fs(struct acpi_ac *ac) 188 + { 189 + struct proc_dir_entry *entry = NULL; 190 + 191 + printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded," 192 + " please retry with CONFIG_ACPI_PROCFS_POWER cleared\n"); 193 + if (!acpi_device_dir(ac->device)) { 194 + acpi_device_dir(ac->device) = 195 + proc_mkdir(acpi_device_bid(ac->device), acpi_ac_dir); 196 + if (!acpi_device_dir(ac->device)) 197 + return -ENODEV; 198 + } 199 + 200 + /* 'state' [R] */ 201 + entry = proc_create_data(ACPI_AC_FILE_STATE, 202 + S_IRUGO, acpi_device_dir(ac->device), 203 + &acpi_ac_fops, ac); 204 + if (!entry) 205 + return -ENODEV; 206 + return 0; 207 + } 208 + 209 + static int acpi_ac_remove_fs(struct acpi_ac *ac) 210 + { 211 + 212 + if (acpi_device_dir(ac->device)) { 213 + remove_proc_entry(ACPI_AC_FILE_STATE, 214 + acpi_device_dir(ac->device)); 215 + remove_proc_entry(acpi_device_bid(ac->device), acpi_ac_dir); 216 + acpi_device_dir(ac->device) = NULL; 217 + } 218 + 219 + return 0; 220 + } 221 + #endif 167 222 168 223 /* -------------------------------------------------------------------------- 169 224 Driver Model ··· 342 243 goto end; 343 244 344 245 ac->charger.name = acpi_device_bid(device); 246 + #ifdef 
CONFIG_ACPI_PROCFS_POWER 247 + result = acpi_ac_add_fs(ac); 248 + if (result) 249 + goto end; 250 + #endif 345 251 ac->charger.type = POWER_SUPPLY_TYPE_MAINS; 346 252 ac->charger.properties = ac_props; 347 253 ac->charger.num_properties = ARRAY_SIZE(ac_props); ··· 362 258 ac->battery_nb.notifier_call = acpi_ac_battery_notify; 363 259 register_acpi_notifier(&ac->battery_nb); 364 260 end: 365 - if (result) 261 + if (result) { 262 + #ifdef CONFIG_ACPI_PROCFS_POWER 263 + acpi_ac_remove_fs(ac); 264 + #endif 366 265 kfree(ac); 266 + } 367 267 368 268 dmi_check_system(ac_dmi_table); 369 269 return result; ··· 411 303 power_supply_unregister(&ac->charger); 412 304 unregister_acpi_notifier(&ac->battery_nb); 413 305 306 + #ifdef CONFIG_ACPI_PROCFS_POWER 307 + acpi_ac_remove_fs(ac); 308 + #endif 309 + 414 310 kfree(ac); 415 311 416 312 return 0; ··· 427 315 if (acpi_disabled) 428 316 return -ENODEV; 429 317 430 - result = acpi_bus_register_driver(&acpi_ac_driver); 431 - if (result < 0) 318 + #ifdef CONFIG_ACPI_PROCFS_POWER 319 + acpi_ac_dir = acpi_lock_ac_dir(); 320 + if (!acpi_ac_dir) 432 321 return -ENODEV; 322 + #endif 323 + 324 + 325 + result = acpi_bus_register_driver(&acpi_ac_driver); 326 + if (result < 0) { 327 + #ifdef CONFIG_ACPI_PROCFS_POWER 328 + acpi_unlock_ac_dir(acpi_ac_dir); 329 + #endif 330 + return -ENODEV; 331 + } 433 332 434 333 return 0; 435 334 } ··· 448 325 static void __exit acpi_ac_exit(void) 449 326 { 450 327 acpi_bus_unregister_driver(&acpi_ac_driver); 328 + #ifdef CONFIG_ACPI_PROCFS_POWER 329 + acpi_unlock_ac_dir(acpi_ac_dir); 330 + #endif 451 331 } 452 332 module_init(acpi_ac_init); 453 333 module_exit(acpi_ac_exit);
+2
drivers/acpi/acpi_pnp.c
··· 14 14 #include <linux/module.h> 15 15 16 16 static const struct acpi_device_id acpi_pnp_device_ids[] = { 17 + /* soc_button_array */ 18 + {"PNP0C40"}, 17 19 /* pata_isapnp */ 18 20 {"PNP0600"}, /* Generic ESDI/IDE/ATA compatible hard disk controller */ 19 21 /* floppy */
+40 -1
drivers/acpi/battery.c
··· 35 35 #include <linux/delay.h> 36 36 #include <linux/slab.h> 37 37 #include <linux/suspend.h> 38 + #include <linux/delay.h> 38 39 #include <asm/unaligned.h> 39 40 40 41 #ifdef CONFIG_ACPI_PROCFS_POWER ··· 533 532 battery->rate_now = abs((s16)battery->rate_now); 534 533 printk_once(KERN_WARNING FW_BUG "battery: (dis)charge rate" 535 534 " invalid.\n"); 535 + } 536 + 537 + /* 538 + * When fully charged, some batteries wrongly report 539 + * capacity_now = design_capacity instead of = full_charge_capacity 540 + */ 541 + if (battery->capacity_now > battery->full_charge_capacity 542 + && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) { 543 + battery->capacity_now = battery->full_charge_capacity; 544 + if (battery->capacity_now != battery->design_capacity) 545 + printk_once(KERN_WARNING FW_BUG 546 + "battery: reported current charge level (%d) " 547 + "is higher than reported maximum charge level (%d).\n", 548 + battery->capacity_now, battery->full_charge_capacity); 536 549 } 537 550 538 551 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) ··· 1166 1151 {}, 1167 1152 }; 1168 1153 1154 + /* 1155 + * Some machines'(E,G Lenovo Z480) ECs are not stable 1156 + * during boot up and this causes battery driver fails to be 1157 + * probed due to failure of getting battery information 1158 + * from EC sometimes. After several retries, the operation 1159 + * may work. So add retry code here and 20ms sleep between 1160 + * every retries. 
1161 + */ 1162 + static int acpi_battery_update_retry(struct acpi_battery *battery) 1163 + { 1164 + int retry, ret; 1165 + 1166 + for (retry = 5; retry; retry--) { 1167 + ret = acpi_battery_update(battery, false); 1168 + if (!ret) 1169 + break; 1170 + 1171 + msleep(20); 1172 + } 1173 + return ret; 1174 + } 1175 + 1169 1176 static int acpi_battery_add(struct acpi_device *device) 1170 1177 { 1171 1178 int result = 0; ··· 1206 1169 mutex_init(&battery->sysfs_lock); 1207 1170 if (acpi_has_method(battery->device->handle, "_BIX")) 1208 1171 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags); 1209 - result = acpi_battery_update(battery, false); 1172 + 1173 + result = acpi_battery_update_retry(battery); 1210 1174 if (result) 1211 1175 goto fail; 1176 + 1212 1177 #ifdef CONFIG_ACPI_PROCFS_POWER 1213 1178 result = acpi_battery_add_fs(device); 1214 1179 #endif
+85 -79
drivers/acpi/ec.c
··· 1 1 /* 2 - * ec.c - ACPI Embedded Controller Driver (v2.1) 2 + * ec.c - ACPI Embedded Controller Driver (v2.2) 3 3 * 4 - * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de> 5 - * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com> 6 - * Copyright (C) 2004 Luming Yu <luming.yu@intel.com> 7 - * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com> 8 - * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> 4 + * Copyright (C) 2001-2014 Intel Corporation 5 + * Author: 2014 Lv Zheng <lv.zheng@intel.com> 6 + * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> 7 + * 2006 Denis Sadykov <denis.m.sadykov@intel.com> 8 + * 2004 Luming Yu <luming.yu@intel.com> 9 + * 2001, 2002 Andy Grover <andrew.grover@intel.com> 10 + * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> 11 + * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de> 9 12 * 10 13 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 11 14 * ··· 55 52 /* EC status register */ 56 53 #define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */ 57 54 #define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */ 55 + #define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */ 58 56 #define ACPI_EC_FLAG_BURST 0x10 /* burst mode */ 59 57 #define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */ 60 58 ··· 81 77 * OpReg are installed */ 82 78 EC_FLAGS_BLOCKED, /* Transactions are blocked */ 83 79 }; 80 + 81 + #define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */ 82 + #define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */ 84 83 85 84 /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */ 86 85 static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY; ··· 116 109 u8 ri; 117 110 u8 wlen; 118 111 u8 rlen; 119 - bool done; 112 + u8 flags; 120 113 }; 121 114 122 115 struct acpi_ec *boot_ec, *first_ec; ··· 134 127 static inline u8 acpi_ec_read_status(struct acpi_ec *ec) 135 128 { 
136 129 u8 x = inb(ec->command_addr); 137 - pr_debug("---> status = 0x%2.2x\n", x); 130 + pr_debug("EC_SC(R) = 0x%2.2x " 131 + "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n", 132 + x, 133 + !!(x & ACPI_EC_FLAG_SCI), 134 + !!(x & ACPI_EC_FLAG_BURST), 135 + !!(x & ACPI_EC_FLAG_CMD), 136 + !!(x & ACPI_EC_FLAG_IBF), 137 + !!(x & ACPI_EC_FLAG_OBF)); 138 138 return x; 139 139 } 140 140 141 141 static inline u8 acpi_ec_read_data(struct acpi_ec *ec) 142 142 { 143 143 u8 x = inb(ec->data_addr); 144 - pr_debug("---> data = 0x%2.2x\n", x); 144 + pr_debug("EC_DATA(R) = 0x%2.2x\n", x); 145 145 return x; 146 146 } 147 147 148 148 static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command) 149 149 { 150 - pr_debug("<--- command = 0x%2.2x\n", command); 150 + pr_debug("EC_SC(W) = 0x%2.2x\n", command); 151 151 outb(command, ec->command_addr); 152 152 } 153 153 154 154 static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data) 155 155 { 156 - pr_debug("<--- data = 0x%2.2x\n", data); 156 + pr_debug("EC_DATA(W) = 0x%2.2x\n", data); 157 157 outb(data, ec->data_addr); 158 158 } 159 159 160 - static int ec_transaction_done(struct acpi_ec *ec) 160 + static int ec_transaction_completed(struct acpi_ec *ec) 161 161 { 162 162 unsigned long flags; 163 163 int ret = 0; 164 164 spin_lock_irqsave(&ec->lock, flags); 165 - if (!ec->curr || ec->curr->done) 165 + if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE)) 166 166 ret = 1; 167 167 spin_unlock_irqrestore(&ec->lock, flags); 168 168 return ret; 169 169 } 170 170 171 - static void start_transaction(struct acpi_ec *ec) 171 + static bool advance_transaction(struct acpi_ec *ec) 172 172 { 173 - ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 174 - ec->curr->done = false; 175 - acpi_ec_write_cmd(ec, ec->curr->command); 176 - } 177 - 178 - static void advance_transaction(struct acpi_ec *ec, u8 status) 179 - { 180 - unsigned long flags; 181 173 struct transaction *t; 174 + u8 status; 175 + bool wakeup = false; 182 176 183 
- spin_lock_irqsave(&ec->lock, flags); 177 + pr_debug("===== %s =====\n", in_interrupt() ? "IRQ" : "TASK"); 178 + status = acpi_ec_read_status(ec); 184 179 t = ec->curr; 185 180 if (!t) 186 - goto unlock; 187 - if (t->wlen > t->wi) { 188 - if ((status & ACPI_EC_FLAG_IBF) == 0) 189 - acpi_ec_write_data(ec, 190 - t->wdata[t->wi++]); 191 - else 192 - goto err; 193 - } else if (t->rlen > t->ri) { 194 - if ((status & ACPI_EC_FLAG_OBF) == 1) { 195 - t->rdata[t->ri++] = acpi_ec_read_data(ec); 196 - if (t->rlen == t->ri) 197 - t->done = true; 181 + goto err; 182 + if (t->flags & ACPI_EC_COMMAND_POLL) { 183 + if (t->wlen > t->wi) { 184 + if ((status & ACPI_EC_FLAG_IBF) == 0) 185 + acpi_ec_write_data(ec, t->wdata[t->wi++]); 186 + else 187 + goto err; 188 + } else if (t->rlen > t->ri) { 189 + if ((status & ACPI_EC_FLAG_OBF) == 1) { 190 + t->rdata[t->ri++] = acpi_ec_read_data(ec); 191 + if (t->rlen == t->ri) { 192 + t->flags |= ACPI_EC_COMMAND_COMPLETE; 193 + wakeup = true; 194 + } 195 + } else 196 + goto err; 197 + } else if (t->wlen == t->wi && 198 + (status & ACPI_EC_FLAG_IBF) == 0) { 199 + t->flags |= ACPI_EC_COMMAND_COMPLETE; 200 + wakeup = true; 201 + } 202 + return wakeup; 203 + } else { 204 + if ((status & ACPI_EC_FLAG_IBF) == 0) { 205 + acpi_ec_write_cmd(ec, t->command); 206 + t->flags |= ACPI_EC_COMMAND_POLL; 198 207 } else 199 208 goto err; 200 - } else if (t->wlen == t->wi && 201 - (status & ACPI_EC_FLAG_IBF) == 0) 202 - t->done = true; 203 - goto unlock; 209 + return wakeup; 210 + } 204 211 err: 205 212 /* 206 213 * If SCI bit is set, then don't think it's a false IRQ 207 214 * otherwise will take a not handled IRQ as a false one. 
208 215 */ 209 - if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI)) 210 - ++t->irq_count; 216 + if (!(status & ACPI_EC_FLAG_SCI)) { 217 + if (in_interrupt() && t) 218 + ++t->irq_count; 219 + } 220 + return wakeup; 221 + } 211 222 212 - unlock: 213 - spin_unlock_irqrestore(&ec->lock, flags); 223 + static void start_transaction(struct acpi_ec *ec) 224 + { 225 + ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 226 + ec->curr->flags = 0; 227 + (void)advance_transaction(ec); 214 228 } 215 229 216 230 static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data); ··· 256 228 /* don't sleep with disabled interrupts */ 257 229 if (EC_FLAGS_MSI || irqs_disabled()) { 258 230 udelay(ACPI_EC_MSI_UDELAY); 259 - if (ec_transaction_done(ec)) 231 + if (ec_transaction_completed(ec)) 260 232 return 0; 261 233 } else { 262 234 if (wait_event_timeout(ec->wait, 263 - ec_transaction_done(ec), 235 + ec_transaction_completed(ec), 264 236 msecs_to_jiffies(1))) 265 237 return 0; 266 238 } 267 - advance_transaction(ec, acpi_ec_read_status(ec)); 239 + spin_lock_irqsave(&ec->lock, flags); 240 + (void)advance_transaction(ec); 241 + spin_unlock_irqrestore(&ec->lock, flags); 268 242 } while (time_before(jiffies, delay)); 269 243 pr_debug("controller reset, restart transaction\n"); 270 244 spin_lock_irqsave(&ec->lock, flags); ··· 298 268 return ret; 299 269 } 300 270 301 - static int ec_check_ibf0(struct acpi_ec *ec) 302 - { 303 - u8 status = acpi_ec_read_status(ec); 304 - return (status & ACPI_EC_FLAG_IBF) == 0; 305 - } 306 - 307 - static int ec_wait_ibf0(struct acpi_ec *ec) 308 - { 309 - unsigned long delay = jiffies + msecs_to_jiffies(ec_delay); 310 - /* interrupt wait manually if GPE mode is not active */ 311 - while (time_before(jiffies, delay)) 312 - if (wait_event_timeout(ec->wait, ec_check_ibf0(ec), 313 - msecs_to_jiffies(1))) 314 - return 0; 315 - return -ETIME; 316 - } 317 - 318 271 static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) 319 272 { 320 273 int 
status; ··· 317 304 status = -ENODEV; 318 305 goto unlock; 319 306 } 320 - } 321 - if (ec_wait_ibf0(ec)) { 322 - pr_err("input buffer is not empty, " 323 - "aborting transaction\n"); 324 - status = -ETIME; 325 - goto end; 326 307 } 327 308 pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n", 328 309 t->command, t->wdata ? t->wdata[0] : 0); ··· 341 334 set_bit(EC_FLAGS_GPE_STORM, &ec->flags); 342 335 } 343 336 pr_debug("transaction end\n"); 344 - end: 345 337 if (ec->global_lock) 346 338 acpi_release_global_lock(glk); 347 339 unlock: ··· 640 634 static u32 acpi_ec_gpe_handler(acpi_handle gpe_device, 641 635 u32 gpe_number, void *data) 642 636 { 637 + unsigned long flags; 643 638 struct acpi_ec *ec = data; 644 - u8 status = acpi_ec_read_status(ec); 645 639 646 - pr_debug("~~~> interrupt, status:0x%02x\n", status); 647 - 648 - advance_transaction(ec, status); 649 - if (ec_transaction_done(ec) && 650 - (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) { 640 + spin_lock_irqsave(&ec->lock, flags); 641 + if (advance_transaction(ec)) 651 642 wake_up(&ec->wait); 652 - ec_check_sci(ec, acpi_ec_read_status(ec)); 653 - } 643 + spin_unlock_irqrestore(&ec->lock, flags); 644 + ec_check_sci(ec, acpi_ec_read_status(ec)); 654 645 return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE; 655 646 } 656 647 ··· 1069 1066 /* fall through */ 1070 1067 } 1071 1068 1072 - if (EC_FLAGS_SKIP_DSDT_SCAN) 1069 + if (EC_FLAGS_SKIP_DSDT_SCAN) { 1070 + kfree(saved_ec); 1073 1071 return -ENODEV; 1072 + } 1074 1073 1075 1074 /* This workaround is needed only on some broken machines, 1076 1075 * which require early EC, but fail to provide ECDT */ ··· 1110 1105 } 1111 1106 error: 1112 1107 kfree(boot_ec); 1108 + kfree(saved_ec); 1113 1109 boot_ec = NULL; 1114 1110 return -ENODEV; 1115 1111 }
+5 -5
drivers/acpi/resource.c
··· 77 77 switch (ares->type) { 78 78 case ACPI_RESOURCE_TYPE_MEMORY24: 79 79 memory24 = &ares->data.memory24; 80 - if (!memory24->address_length) 80 + if (!memory24->minimum && !memory24->address_length) 81 81 return false; 82 82 acpi_dev_get_memresource(res, memory24->minimum, 83 83 memory24->address_length, ··· 85 85 break; 86 86 case ACPI_RESOURCE_TYPE_MEMORY32: 87 87 memory32 = &ares->data.memory32; 88 - if (!memory32->address_length) 88 + if (!memory32->minimum && !memory32->address_length) 89 89 return false; 90 90 acpi_dev_get_memresource(res, memory32->minimum, 91 91 memory32->address_length, ··· 93 93 break; 94 94 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 95 95 fixed_memory32 = &ares->data.fixed_memory32; 96 - if (!fixed_memory32->address_length) 96 + if (!fixed_memory32->address && !fixed_memory32->address_length) 97 97 return false; 98 98 acpi_dev_get_memresource(res, fixed_memory32->address, 99 99 fixed_memory32->address_length, ··· 150 150 switch (ares->type) { 151 151 case ACPI_RESOURCE_TYPE_IO: 152 152 io = &ares->data.io; 153 - if (!io->address_length) 153 + if (!io->minimum && !io->address_length) 154 154 return false; 155 155 acpi_dev_get_ioresource(res, io->minimum, 156 156 io->address_length, ··· 158 158 break; 159 159 case ACPI_RESOURCE_TYPE_FIXED_IO: 160 160 fixed_io = &ares->data.fixed_io; 161 - if (!fixed_io->address_length) 161 + if (!fixed_io->address && !fixed_io->address_length) 162 162 return false; 163 163 acpi_dev_get_ioresource(res, fixed_io->address, 164 164 fixed_io->address_length,
+10 -1
drivers/acpi/video.c
··· 241 241 return use_native_backlight_dmi; 242 242 } 243 243 244 - static bool acpi_video_verify_backlight_support(void) 244 + bool acpi_video_verify_backlight_support(void) 245 245 { 246 246 if (acpi_osi_is_win8() && acpi_video_use_native_backlight() && 247 247 backlight_device_registered(BACKLIGHT_RAW)) 248 248 return false; 249 249 return acpi_video_backlight_support(); 250 250 } 251 + EXPORT_SYMBOL_GPL(acpi_video_verify_backlight_support); 251 252 252 253 /* backlight device sysfs support */ 253 254 static int acpi_video_get_brightness(struct backlight_device *bd) ··· 561 560 .matches = { 562 561 DMI_MATCH(DMI_BOARD_VENDOR, "Acer"), 563 562 DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-471G"), 563 + }, 564 + }, 565 + { 566 + .callback = video_set_use_native_backlight, 567 + .ident = "Acer TravelMate B113", 568 + .matches = { 569 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 570 + DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"), 564 571 }, 565 572 }, 566 573 {
+8
drivers/acpi/video_detect.c
··· 166 166 DMI_MATCH(DMI_PRODUCT_NAME, "UL30A"), 167 167 }, 168 168 }, 169 + { 170 + .callback = video_detect_force_vendor, 171 + .ident = "Dell Inspiron 5737", 172 + .matches = { 173 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 174 + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"), 175 + }, 176 + }, 169 177 { }, 170 178 }; 171 179
+2
drivers/ata/ahci.h
··· 371 371 int pmp, unsigned long deadline, 372 372 int (*check_ready)(struct ata_link *link)); 373 373 374 + unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); 374 375 int ahci_stop_engine(struct ata_port *ap); 376 + void ahci_start_fis_rx(struct ata_port *ap); 375 377 void ahci_start_engine(struct ata_port *ap); 376 378 int ahci_check_ready(struct ata_link *link); 377 379 int ahci_kick_engine(struct ata_port *ap);
+34 -4
drivers/ata/ahci_imx.c
··· 58 58 struct imx_ahci_priv { 59 59 struct platform_device *ahci_pdev; 60 60 enum ahci_imx_type type; 61 + struct clk *sata_clk; 62 + struct clk *sata_ref_clk; 61 63 struct clk *ahb_clk; 62 64 struct regmap *gpr; 63 65 bool no_device; ··· 226 224 return ret; 227 225 } 228 226 229 - ret = ahci_platform_enable_clks(hpriv); 227 + ret = clk_prepare_enable(imxpriv->sata_ref_clk); 230 228 if (ret < 0) 231 229 goto disable_regulator; 232 230 ··· 293 291 !IMX6Q_GPR13_SATA_MPLL_CLK_EN); 294 292 } 295 293 296 - ahci_platform_disable_clks(hpriv); 294 + clk_disable_unprepare(imxpriv->sata_ref_clk); 297 295 298 296 if (hpriv->target_pwr) 299 297 regulator_disable(hpriv->target_pwr); ··· 326 324 writel(reg_val | IMX_P0PHYCR_TEST_PDDQ, mmio + IMX_P0PHYCR); 327 325 imx_sata_disable(hpriv); 328 326 imxpriv->no_device = true; 327 + 328 + dev_info(ap->dev, "no device found, disabling link.\n"); 329 + dev_info(ap->dev, "pass " MODULE_PARAM_PREFIX ".hotplug=1 to enable hotplug\n"); 329 330 } 330 331 331 332 static int ahci_imx_softreset(struct ata_link *link, unsigned int *class, ··· 390 385 imxpriv->no_device = false; 391 386 imxpriv->first_time = true; 392 387 imxpriv->type = (enum ahci_imx_type)of_id->data; 388 + 389 + imxpriv->sata_clk = devm_clk_get(dev, "sata"); 390 + if (IS_ERR(imxpriv->sata_clk)) { 391 + dev_err(dev, "can't get sata clock.\n"); 392 + return PTR_ERR(imxpriv->sata_clk); 393 + } 394 + 395 + imxpriv->sata_ref_clk = devm_clk_get(dev, "sata_ref"); 396 + if (IS_ERR(imxpriv->sata_ref_clk)) { 397 + dev_err(dev, "can't get sata_ref clock.\n"); 398 + return PTR_ERR(imxpriv->sata_ref_clk); 399 + } 400 + 393 401 imxpriv->ahb_clk = devm_clk_get(dev, "ahb"); 394 402 if (IS_ERR(imxpriv->ahb_clk)) { 395 403 dev_err(dev, "can't get ahb clock.\n"); ··· 425 407 426 408 hpriv->plat_data = imxpriv; 427 409 428 - ret = imx_sata_enable(hpriv); 410 + ret = clk_prepare_enable(imxpriv->sata_clk); 429 411 if (ret) 430 412 return ret; 413 + 414 + ret = imx_sata_enable(hpriv); 415 + if 
(ret) 416 + goto disable_clk; 431 417 432 418 /* 433 419 * Configure the HWINIT bits of the HOST_CAP and HOST_PORTS_IMPL, ··· 457 435 ret = ahci_platform_init_host(pdev, hpriv, &ahci_imx_port_info, 458 436 0, 0, 0); 459 437 if (ret) 460 - imx_sata_disable(hpriv); 438 + goto disable_sata; 461 439 440 + return 0; 441 + 442 + disable_sata: 443 + imx_sata_disable(hpriv); 444 + disable_clk: 445 + clk_disable_unprepare(imxpriv->sata_clk); 462 446 return ret; 463 447 } 464 448 465 449 static void ahci_imx_host_stop(struct ata_host *host) 466 450 { 467 451 struct ahci_host_priv *hpriv = host->private_data; 452 + struct imx_ahci_priv *imxpriv = hpriv->plat_data; 468 453 469 454 imx_sata_disable(hpriv); 455 + clk_disable_unprepare(imxpriv->sata_clk); 470 456 } 471 457 472 458 #ifdef CONFIG_PM_SLEEP
+1 -1
drivers/ata/ahci_platform.c
··· 58 58 } 59 59 60 60 if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci")) 61 - hflags |= AHCI_HFLAG_NO_FBS; 61 + hflags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ; 62 62 63 63 rc = ahci_platform_init_host(pdev, hpriv, &ahci_port_info, 64 64 hflags, 0, 0);
+47 -13
drivers/ata/ahci_xgene.c
··· 78 78 struct xgene_ahci_context { 79 79 struct ahci_host_priv *hpriv; 80 80 struct device *dev; 81 + u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued */ 81 82 void __iomem *csr_core; /* Core CSR address of IP */ 82 83 void __iomem *csr_diag; /* Diag CSR address of IP */ 83 84 void __iomem *csr_axi; /* AXI CSR address of IP */ ··· 99 98 } 100 99 101 100 /** 101 + * xgene_ahci_restart_engine - Restart the dma engine. 102 + * @ap: ATA port of interest 103 + * 104 + * Restarts the dma engine inside the controller. 105 + */ 106 + static int xgene_ahci_restart_engine(struct ata_port *ap) 107 + { 108 + struct ahci_host_priv *hpriv = ap->host->private_data; 109 + 110 + ahci_stop_engine(ap); 111 + ahci_start_fis_rx(ap); 112 + hpriv->start_engine(ap); 113 + 114 + return 0; 115 + } 116 + 117 + /** 118 + * xgene_ahci_qc_issue - Issue commands to the device 119 + * @qc: Command to issue 120 + * 121 + * Due to a hardware errata in the IDENTIFY DEVICE command, the controller cannot 122 + * clear the BSY bit after receiving the PIO setup FIS. This results in the dma 123 + * state machine going into the CMFatalErrorUpdate state and locking up. By 124 + * restarting the dma engine, the controller is brought out of the locked-up state. 
125 + */ 126 + static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc) 127 + { 128 + struct ata_port *ap = qc->ap; 129 + struct ahci_host_priv *hpriv = ap->host->private_data; 130 + struct xgene_ahci_context *ctx = hpriv->plat_data; 131 + int rc = 0; 132 + 133 + if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA)) 134 + xgene_ahci_restart_engine(ap); 135 + 136 + rc = ahci_qc_issue(qc); 137 + 138 + /* Save the last command issued */ 139 + ctx->last_cmd[ap->port_no] = qc->tf.command; 140 + 141 + return rc; 142 + } 143 + 144 + /** 102 145 * xgene_ahci_read_id - Read ID data from the specified device 103 146 * @dev: device 104 147 * @tf: proposed taskfile 105 148 * @id: data buffer 106 149 * 107 150 * This custom read ID function is required due to the fact that the HW 108 - * does not support DEVSLP and the controller state machine may get stuck 109 - * after processing the ID query command. 151 + * does not support DEVSLP. 110 152 */ 111 153 static unsigned int xgene_ahci_read_id(struct ata_device *dev, 112 154 struct ata_taskfile *tf, u16 *id) 113 155 { 114 156 u32 err_mask; 115 - void __iomem *port_mmio = ahci_port_base(dev->link->ap); 116 157 117 158 err_mask = ata_do_dev_read_id(dev, tf, id); 118 159 if (err_mask) ··· 176 133 */ 177 134 id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8); 178 135 179 - /* 180 - * Due to HW errata, restart the port if no other command active. 181 - * Otherwise the controller may get stuck. 
182 - */ 183 - if (!readl(port_mmio + PORT_CMD_ISSUE)) { 184 - writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD); 185 - readl(port_mmio + PORT_CMD); /* Force a barrier */ 186 - writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD); 187 - readl(port_mmio + PORT_CMD); /* Force a barrier */ 188 - } 189 136 return 0; 190 137 } 191 138 ··· 333 300 .host_stop = xgene_ahci_host_stop, 334 301 .hardreset = xgene_ahci_hardreset, 335 302 .read_id = xgene_ahci_read_id, 303 + .qc_issue = xgene_ahci_qc_issue, 336 304 }; 337 305 338 306 static const struct ata_port_info xgene_ahci_port_info = {
+4 -3
drivers/ata/libahci.c
··· 68 68 69 69 static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val); 70 70 static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val); 71 - static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); 72 71 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc); 73 72 static int ahci_port_start(struct ata_port *ap); 74 73 static void ahci_port_stop(struct ata_port *ap); ··· 619 620 } 620 621 EXPORT_SYMBOL_GPL(ahci_stop_engine); 621 622 622 - static void ahci_start_fis_rx(struct ata_port *ap) 623 + void ahci_start_fis_rx(struct ata_port *ap) 623 624 { 624 625 void __iomem *port_mmio = ahci_port_base(ap); 625 626 struct ahci_host_priv *hpriv = ap->host->private_data; ··· 645 646 /* flush */ 646 647 readl(port_mmio + PORT_CMD); 647 648 } 649 + EXPORT_SYMBOL_GPL(ahci_start_fis_rx); 648 650 649 651 static int ahci_stop_fis_rx(struct ata_port *ap) 650 652 { ··· 1945 1945 } 1946 1946 EXPORT_SYMBOL_GPL(ahci_interrupt); 1947 1947 1948 - static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) 1948 + unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) 1949 1949 { 1950 1950 struct ata_port *ap = qc->ap; 1951 1951 void __iomem *port_mmio = ahci_port_base(ap); ··· 1974 1974 1975 1975 return 0; 1976 1976 } 1977 + EXPORT_SYMBOL_GPL(ahci_qc_issue); 1977 1978 1978 1979 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc) 1979 1980 {
+6 -1
drivers/ata/libahci_platform.c
··· 250 250 if (IS_ERR(hpriv->phy)) { 251 251 rc = PTR_ERR(hpriv->phy); 252 252 switch (rc) { 253 - case -ENODEV: 254 253 case -ENOSYS: 254 + /* No PHY support. Check if PHY is required. */ 255 + if (of_find_property(dev->of_node, "phys", NULL)) { 256 + dev_err(dev, "couldn't get sata-phy: ENOSYS\n"); 257 + goto err_out; 258 + } 259 + case -ENODEV: 255 260 /* continue normally */ 256 261 hpriv->phy = NULL; 257 262 break;
+4 -1
drivers/block/drbd/drbd_receiver.c
··· 1337 1337 return 0; 1338 1338 } 1339 1339 1340 + /* Discards don't have any payload. 1341 + * But the scsi layer still expects a bio_vec it can use internally, 1342 + * see sd_setup_discard_cmnd() and blk_add_request_payload(). */ 1340 1343 if (peer_req->flags & EE_IS_TRIM) 1341 - nr_pages = 0; /* discards don't have any payload. */ 1344 + nr_pages = 1; 1342 1345 1343 1346 /* In most cases, we will only need one bio. But in case the lower 1344 1347 * level restrictions happen to be different at this offset on this
+1 -1
drivers/block/floppy.c
··· 3777 3777 int drive = cbdata->drive; 3778 3778 3779 3779 if (err) { 3780 - pr_info("floppy: error %d while reading block 0", err); 3780 + pr_info("floppy: error %d while reading block 0\n", err); 3781 3781 set_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags); 3782 3782 } 3783 3783 complete(&cbdata->complete);
+4 -1
drivers/block/zram/zram_drv.c
··· 622 622 memset(&zram->stats, 0, sizeof(zram->stats)); 623 623 624 624 zram->disksize = 0; 625 - if (reset_capacity) 625 + if (reset_capacity) { 626 626 set_capacity(zram->disk, 0); 627 + revalidate_disk(zram->disk); 628 + } 627 629 up_write(&zram->init_lock); 628 630 } 629 631 ··· 666 664 zram->comp = comp; 667 665 zram->disksize = disksize; 668 666 set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT); 667 + revalidate_disk(zram->disk); 669 668 up_write(&zram->init_lock); 670 669 return len; 671 670
+3 -1
drivers/char/i8k.c
··· 138 138 if (!alloc_cpumask_var(&old_mask, GFP_KERNEL)) 139 139 return -ENOMEM; 140 140 cpumask_copy(old_mask, &current->cpus_allowed); 141 - set_cpus_allowed_ptr(current, cpumask_of(0)); 141 + rc = set_cpus_allowed_ptr(current, cpumask_of(0)); 142 + if (rc) 143 + goto out; 142 144 if (smp_processor_id() != 0) { 143 145 rc = -EBUSY; 144 146 goto out;
+2 -5
drivers/clk/clk-s2mps11.c
··· 230 230 goto err_reg; 231 231 } 232 232 233 - s2mps11_clk->lookup = devm_kzalloc(&pdev->dev, 234 - sizeof(struct clk_lookup), GFP_KERNEL); 233 + s2mps11_clk->lookup = clkdev_alloc(s2mps11_clk->clk, 234 + s2mps11_name(s2mps11_clk), NULL); 235 235 if (!s2mps11_clk->lookup) { 236 236 ret = -ENOMEM; 237 237 goto err_lup; 238 238 } 239 - 240 - s2mps11_clk->lookup->con_id = s2mps11_name(s2mps11_clk); 241 - s2mps11_clk->lookup->clk = s2mps11_clk->clk; 242 239 243 240 clkdev_add(s2mps11_clk->lookup); 244 241 }
+1 -1
drivers/clk/qcom/mmcc-msm8960.c
··· 1209 1209 1210 1210 static u8 mmcc_pxo_hdmi_map[] = { 1211 1211 [P_PXO] = 0, 1212 - [P_HDMI_PLL] = 2, 1212 + [P_HDMI_PLL] = 3, 1213 1213 }; 1214 1214 1215 1215 static const char *mmcc_pxo_hdmi[] = {
+4 -12
drivers/clk/samsung/clk-exynos4.c
··· 925 925 GATE(CLK_RTC, "rtc", "aclk100", E4X12_GATE_IP_PERIR, 15, 926 926 0, 0), 927 927 GATE(CLK_KEYIF, "keyif", "aclk100", E4X12_GATE_IP_PERIR, 16, 0, 0), 928 - GATE(CLK_SCLK_PWM_ISP, "sclk_pwm_isp", "div_pwm_isp", 929 - E4X12_SRC_MASK_ISP, 0, CLK_SET_RATE_PARENT, 0), 930 - GATE(CLK_SCLK_SPI0_ISP, "sclk_spi0_isp", "div_spi0_isp_pre", 931 - E4X12_SRC_MASK_ISP, 4, CLK_SET_RATE_PARENT, 0), 932 - GATE(CLK_SCLK_SPI1_ISP, "sclk_spi1_isp", "div_spi1_isp_pre", 933 - E4X12_SRC_MASK_ISP, 8, CLK_SET_RATE_PARENT, 0), 934 - GATE(CLK_SCLK_UART_ISP, "sclk_uart_isp", "div_uart_isp", 935 - E4X12_SRC_MASK_ISP, 12, CLK_SET_RATE_PARENT, 0), 936 - GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "sclk_pwm_isp", 928 + GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "div_pwm_isp", 937 929 E4X12_GATE_IP_ISP, 0, 0, 0), 938 - GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "sclk_spi0_isp", 930 + GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "div_spi0_isp_pre", 939 931 E4X12_GATE_IP_ISP, 1, 0, 0), 940 - GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "sclk_spi1_isp", 932 + GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "div_spi1_isp_pre", 941 933 E4X12_GATE_IP_ISP, 2, 0, 0), 942 - GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "sclk_uart_isp", 934 + GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "div_uart_isp", 943 935 E4X12_GATE_IP_ISP, 3, 0, 0), 944 936 GATE(CLK_WDT, "watchdog", "aclk100", E4X12_GATE_IP_PERIR, 14, 0, 0), 945 937 GATE(CLK_PCM0, "pcm0", "aclk100", E4X12_GATE_IP_MAUDIO, 2,
+1 -1
drivers/clk/samsung/clk-exynos5250.c
··· 661 661 GATE(CLK_RTC, "rtc", "div_aclk66", GATE_IP_PERIS, 20, 0, 0), 662 662 GATE(CLK_TMU, "tmu", "div_aclk66", GATE_IP_PERIS, 21, 0, 0), 663 663 GATE(CLK_SMMU_TV, "smmu_tv", "mout_aclk200_disp1_sub", 664 - GATE_IP_DISP1, 2, 0, 0), 664 + GATE_IP_DISP1, 9, 0, 0), 665 665 GATE(CLK_SMMU_FIMD1, "smmu_fimd1", "mout_aclk200_disp1_sub", 666 666 GATE_IP_DISP1, 8, 0, 0), 667 667 GATE(CLK_SMMU_2D, "smmu_2d", "div_aclk200", GATE_IP_ACP, 7, 0, 0),
+58 -31
drivers/clk/samsung/clk-exynos5420.c
··· 631 631 SRC_TOP4, 16, 1), 632 632 MUX(0, "mout_user_aclk266", mout_user_aclk266_p, SRC_TOP4, 20, 1), 633 633 MUX(0, "mout_user_aclk166", mout_user_aclk166_p, SRC_TOP4, 24, 1), 634 - MUX(0, "mout_user_aclk333", mout_user_aclk333_p, SRC_TOP4, 28, 1), 634 + MUX(CLK_MOUT_USER_ACLK333, "mout_user_aclk333", mout_user_aclk333_p, 635 + SRC_TOP4, 28, 1), 635 636 636 637 MUX(0, "mout_user_aclk400_disp1", mout_user_aclk400_disp1_p, 637 638 SRC_TOP5, 0, 1), ··· 685 684 SRC_TOP11, 12, 1), 686 685 MUX(0, "mout_sw_aclk266", mout_sw_aclk266_p, SRC_TOP11, 20, 1), 687 686 MUX(0, "mout_sw_aclk166", mout_sw_aclk166_p, SRC_TOP11, 24, 1), 688 - MUX(0, "mout_sw_aclk333", mout_sw_aclk333_p, SRC_TOP11, 28, 1), 687 + MUX(CLK_MOUT_SW_ACLK333, "mout_sw_aclk333", mout_sw_aclk333_p, 688 + SRC_TOP11, 28, 1), 689 689 690 690 MUX(0, "mout_sw_aclk400_disp1", mout_sw_aclk400_disp1_p, 691 691 SRC_TOP12, 4, 1), ··· 892 890 GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0), 893 891 GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen", 894 892 GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0), 895 - GATE(CLK_ACLK66_PERIC, "aclk66_peric", "mout_user_aclk66_peric", 896 - GATE_BUS_TOP, 11, CLK_IGNORE_UNUSED, 0), 897 893 GATE(0, "aclk266_isp", "mout_user_aclk266_isp", 898 894 GATE_BUS_TOP, 13, 0, 0), 899 895 GATE(0, "aclk166", "mout_user_aclk166", ··· 994 994 SRC_MASK_FSYS, 24, CLK_SET_RATE_PARENT, 0), 995 995 996 996 /* PERIC Block */ 997 - GATE(CLK_UART0, "uart0", "aclk66_peric", GATE_IP_PERIC, 0, 0, 0), 998 - GATE(CLK_UART1, "uart1", "aclk66_peric", GATE_IP_PERIC, 1, 0, 0), 999 - GATE(CLK_UART2, "uart2", "aclk66_peric", GATE_IP_PERIC, 2, 0, 0), 1000 - GATE(CLK_UART3, "uart3", "aclk66_peric", GATE_IP_PERIC, 3, 0, 0), 1001 - GATE(CLK_I2C0, "i2c0", "aclk66_peric", GATE_IP_PERIC, 6, 0, 0), 1002 - GATE(CLK_I2C1, "i2c1", "aclk66_peric", GATE_IP_PERIC, 7, 0, 0), 1003 - GATE(CLK_I2C2, "i2c2", "aclk66_peric", GATE_IP_PERIC, 8, 0, 0), 1004 - GATE(CLK_I2C3, "i2c3", "aclk66_peric", GATE_IP_PERIC, 9, 0, 0), 1005 - GATE(CLK_USI0, "usi0", 
"aclk66_peric", GATE_IP_PERIC, 10, 0, 0), 1006 - GATE(CLK_USI1, "usi1", "aclk66_peric", GATE_IP_PERIC, 11, 0, 0), 1007 - GATE(CLK_USI2, "usi2", "aclk66_peric", GATE_IP_PERIC, 12, 0, 0), 1008 - GATE(CLK_USI3, "usi3", "aclk66_peric", GATE_IP_PERIC, 13, 0, 0), 1009 - GATE(CLK_I2C_HDMI, "i2c_hdmi", "aclk66_peric", GATE_IP_PERIC, 14, 0, 0), 1010 - GATE(CLK_TSADC, "tsadc", "aclk66_peric", GATE_IP_PERIC, 15, 0, 0), 1011 - GATE(CLK_SPI0, "spi0", "aclk66_peric", GATE_IP_PERIC, 16, 0, 0), 1012 - GATE(CLK_SPI1, "spi1", "aclk66_peric", GATE_IP_PERIC, 17, 0, 0), 1013 - GATE(CLK_SPI2, "spi2", "aclk66_peric", GATE_IP_PERIC, 18, 0, 0), 1014 - GATE(CLK_I2S1, "i2s1", "aclk66_peric", GATE_IP_PERIC, 20, 0, 0), 1015 - GATE(CLK_I2S2, "i2s2", "aclk66_peric", GATE_IP_PERIC, 21, 0, 0), 1016 - GATE(CLK_PCM1, "pcm1", "aclk66_peric", GATE_IP_PERIC, 22, 0, 0), 1017 - GATE(CLK_PCM2, "pcm2", "aclk66_peric", GATE_IP_PERIC, 23, 0, 0), 1018 - GATE(CLK_PWM, "pwm", "aclk66_peric", GATE_IP_PERIC, 24, 0, 0), 1019 - GATE(CLK_SPDIF, "spdif", "aclk66_peric", GATE_IP_PERIC, 26, 0, 0), 1020 - GATE(CLK_USI4, "usi4", "aclk66_peric", GATE_IP_PERIC, 28, 0, 0), 1021 - GATE(CLK_USI5, "usi5", "aclk66_peric", GATE_IP_PERIC, 30, 0, 0), 1022 - GATE(CLK_USI6, "usi6", "aclk66_peric", GATE_IP_PERIC, 31, 0, 0), 997 + GATE(CLK_UART0, "uart0", "mout_user_aclk66_peric", 998 + GATE_IP_PERIC, 0, 0, 0), 999 + GATE(CLK_UART1, "uart1", "mout_user_aclk66_peric", 1000 + GATE_IP_PERIC, 1, 0, 0), 1001 + GATE(CLK_UART2, "uart2", "mout_user_aclk66_peric", 1002 + GATE_IP_PERIC, 2, 0, 0), 1003 + GATE(CLK_UART3, "uart3", "mout_user_aclk66_peric", 1004 + GATE_IP_PERIC, 3, 0, 0), 1005 + GATE(CLK_I2C0, "i2c0", "mout_user_aclk66_peric", 1006 + GATE_IP_PERIC, 6, 0, 0), 1007 + GATE(CLK_I2C1, "i2c1", "mout_user_aclk66_peric", 1008 + GATE_IP_PERIC, 7, 0, 0), 1009 + GATE(CLK_I2C2, "i2c2", "mout_user_aclk66_peric", 1010 + GATE_IP_PERIC, 8, 0, 0), 1011 + GATE(CLK_I2C3, "i2c3", "mout_user_aclk66_peric", 1012 + GATE_IP_PERIC, 9, 0, 0), 1013 + 
GATE(CLK_USI0, "usi0", "mout_user_aclk66_peric", 1014 + GATE_IP_PERIC, 10, 0, 0), 1015 + GATE(CLK_USI1, "usi1", "mout_user_aclk66_peric", 1016 + GATE_IP_PERIC, 11, 0, 0), 1017 + GATE(CLK_USI2, "usi2", "mout_user_aclk66_peric", 1018 + GATE_IP_PERIC, 12, 0, 0), 1019 + GATE(CLK_USI3, "usi3", "mout_user_aclk66_peric", 1020 + GATE_IP_PERIC, 13, 0, 0), 1021 + GATE(CLK_I2C_HDMI, "i2c_hdmi", "mout_user_aclk66_peric", 1022 + GATE_IP_PERIC, 14, 0, 0), 1023 + GATE(CLK_TSADC, "tsadc", "mout_user_aclk66_peric", 1024 + GATE_IP_PERIC, 15, 0, 0), 1025 + GATE(CLK_SPI0, "spi0", "mout_user_aclk66_peric", 1026 + GATE_IP_PERIC, 16, 0, 0), 1027 + GATE(CLK_SPI1, "spi1", "mout_user_aclk66_peric", 1028 + GATE_IP_PERIC, 17, 0, 0), 1029 + GATE(CLK_SPI2, "spi2", "mout_user_aclk66_peric", 1030 + GATE_IP_PERIC, 18, 0, 0), 1031 + GATE(CLK_I2S1, "i2s1", "mout_user_aclk66_peric", 1032 + GATE_IP_PERIC, 20, 0, 0), 1033 + GATE(CLK_I2S2, "i2s2", "mout_user_aclk66_peric", 1034 + GATE_IP_PERIC, 21, 0, 0), 1035 + GATE(CLK_PCM1, "pcm1", "mout_user_aclk66_peric", 1036 + GATE_IP_PERIC, 22, 0, 0), 1037 + GATE(CLK_PCM2, "pcm2", "mout_user_aclk66_peric", 1038 + GATE_IP_PERIC, 23, 0, 0), 1039 + GATE(CLK_PWM, "pwm", "mout_user_aclk66_peric", 1040 + GATE_IP_PERIC, 24, 0, 0), 1041 + GATE(CLK_SPDIF, "spdif", "mout_user_aclk66_peric", 1042 + GATE_IP_PERIC, 26, 0, 0), 1043 + GATE(CLK_USI4, "usi4", "mout_user_aclk66_peric", 1044 + GATE_IP_PERIC, 28, 0, 0), 1045 + GATE(CLK_USI5, "usi5", "mout_user_aclk66_peric", 1046 + GATE_IP_PERIC, 30, 0, 0), 1047 + GATE(CLK_USI6, "usi6", "mout_user_aclk66_peric", 1048 + GATE_IP_PERIC, 31, 0, 0), 1023 1049 1024 - GATE(CLK_KEYIF, "keyif", "aclk66_peric", GATE_BUS_PERIC, 22, 0, 0), 1050 + GATE(CLK_KEYIF, "keyif", "mout_user_aclk66_peric", 1051 + GATE_BUS_PERIC, 22, 0, 0), 1025 1052 1026 1053 /* PERIS Block */ 1027 1054 GATE(CLK_CHIPID, "chipid", "aclk66_psgen",
+7 -2
drivers/clk/samsung/clk-s3c2410.c
··· 152 152 ALIAS(HCLK, NULL, "hclk"), 153 153 ALIAS(MPLL, NULL, "mpll"), 154 154 ALIAS(FCLK, NULL, "fclk"), 155 + ALIAS(PCLK, NULL, "watchdog"), 156 + ALIAS(PCLK_SDI, NULL, "sdi"), 157 + ALIAS(HCLK_NAND, NULL, "nand"), 158 + ALIAS(PCLK_I2S, NULL, "iis"), 159 + ALIAS(PCLK_I2C, NULL, "i2c"), 155 160 }; 156 161 157 162 /* S3C2410 specific clocks */ ··· 383 378 if (!np) 384 379 s3c2410_common_clk_register_fixed_ext(ctx, xti_f); 385 380 386 - if (current_soc == 2410) { 381 + if (current_soc == S3C2410) { 387 382 if (_get_rate("xti") == 12 * MHZ) { 388 383 s3c2410_plls[mpll].rate_table = pll_s3c2410_12mhz_tbl; 389 384 s3c2410_plls[upll].rate_table = pll_s3c2410_12mhz_tbl; ··· 437 432 samsung_clk_register_fixed_factor(ctx, s3c2410_ffactor, 438 433 ARRAY_SIZE(s3c2410_ffactor)); 439 434 samsung_clk_register_alias(ctx, s3c2410_aliases, 440 - ARRAY_SIZE(s3c2410_common_aliases)); 435 + ARRAY_SIZE(s3c2410_aliases)); 441 436 break; 442 437 case S3C2440: 443 438 samsung_clk_register_mux(ctx, s3c2440_muxes,
+4 -2
drivers/clk/samsung/clk-s3c64xx.c
··· 418 418 ALIAS(SCLK_MMC2, "s3c-sdhci.2", "mmc_busclk.2"), 419 419 ALIAS(SCLK_MMC1, "s3c-sdhci.1", "mmc_busclk.2"), 420 420 ALIAS(SCLK_MMC0, "s3c-sdhci.0", "mmc_busclk.2"), 421 - ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi-bus"), 422 - ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi-bus"), 421 + ALIAS(PCLK_SPI1, "s3c6410-spi.1", "spi_busclk0"), 422 + ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi_busclk2"), 423 + ALIAS(PCLK_SPI0, "s3c6410-spi.0", "spi_busclk0"), 424 + ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi_busclk2"), 423 425 ALIAS(SCLK_AUDIO1, "samsung-pcm.1", "audio-bus"), 424 426 ALIAS(SCLK_AUDIO1, "samsung-i2s.1", "audio-bus"), 425 427 ALIAS(SCLK_AUDIO0, "samsung-pcm.0", "audio-bus"),
+11 -5
drivers/clk/spear/spear3xx_clock.c
··· 211 211 /* array of all spear 320 clock lookups */ 212 212 #ifdef CONFIG_MACH_SPEAR320 213 213 214 - #define SPEAR320_CONTROL_REG (soc_config_base + 0x0000) 214 + #define SPEAR320_CONTROL_REG (soc_config_base + 0x0010) 215 215 #define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018) 216 216 217 217 #define SPEAR320_UARTX_PCLK_MASK 0x1 ··· 245 245 "ras_syn0_gclk", }; 246 246 static const char *uartx_parents[] = { "ras_syn1_gclk", "ras_apb_clk", }; 247 247 248 - static void __init spear320_clk_init(void __iomem *soc_config_base) 248 + static void __init spear320_clk_init(void __iomem *soc_config_base, 249 + struct clk *ras_apb_clk) 249 250 { 250 251 struct clk *clk; 251 252 ··· 343 342 SPEAR320_CONTROL_REG, UART1_PCLK_SHIFT, UART1_PCLK_MASK, 344 343 0, &_lock); 345 344 clk_register_clkdev(clk, NULL, "a3000000.serial"); 345 + /* Enforce ras_apb_clk */ 346 + clk_set_parent(clk, ras_apb_clk); 346 347 347 348 clk = clk_register_mux(NULL, "uart2_clk", uartx_parents, 348 349 ARRAY_SIZE(uartx_parents), ··· 352 349 SPEAR320_EXT_CTRL_REG, SPEAR320_UART2_PCLK_SHIFT, 353 350 SPEAR320_UARTX_PCLK_MASK, 0, &_lock); 354 351 clk_register_clkdev(clk, NULL, "a4000000.serial"); 352 + /* Enforce ras_apb_clk */ 353 + clk_set_parent(clk, ras_apb_clk); 355 354 356 355 clk = clk_register_mux(NULL, "uart3_clk", uartx_parents, 357 356 ARRAY_SIZE(uartx_parents), ··· 384 379 clk_register_clkdev(clk, NULL, "60100000.serial"); 385 380 } 386 381 #else 387 - static inline void spear320_clk_init(void __iomem *soc_config_base) { } 382 + static inline void spear320_clk_init(void __iomem *sb, struct clk *rc) { } 388 383 #endif 389 384 390 385 void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base) 391 386 { 392 - struct clk *clk, *clk1; 387 + struct clk *clk, *clk1, *ras_apb_clk; 393 388 394 389 clk = clk_register_fixed_rate(NULL, "osc_32k_clk", NULL, CLK_IS_ROOT, 395 390 32000); ··· 618 613 clk = clk_register_gate(NULL, "ras_apb_clk", "apb_clk", 0, RAS_CLK_ENB, 619 
614 RAS_APB_CLK_ENB, 0, &_lock); 620 615 clk_register_clkdev(clk, "ras_apb_clk", NULL); 616 + ras_apb_clk = clk; 621 617 622 618 clk = clk_register_gate(NULL, "ras_32k_clk", "osc_32k_clk", 0, 623 619 RAS_CLK_ENB, RAS_32K_CLK_ENB, 0, &_lock); ··· 665 659 else if (of_machine_is_compatible("st,spear310")) 666 660 spear310_clk_init(); 667 661 else if (of_machine_is_compatible("st,spear320")) 668 - spear320_clk_init(soc_config_base); 662 + spear320_clk_init(soc_config_base, ras_apb_clk); 669 663 }
+1 -1
drivers/clk/sunxi/clk-sun6i-apb0-gates.c
··· 29 29 30 30 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 31 31 reg = devm_ioremap_resource(&pdev->dev, r); 32 - if (!reg) 32 + if (IS_ERR(reg)) 33 33 return PTR_ERR(reg); 34 34 35 35 clk_parent = of_clk_get_parent_name(np, 0);
+3 -5
drivers/clk/ti/apll.c
··· 77 77 if (i == MAX_APLL_WAIT_TRIES) { 78 78 pr_warn("clock: %s failed transition to '%s'\n", 79 79 clk_name, (state) ? "locked" : "bypassed"); 80 - } else { 80 + r = -EBUSY; 81 + } else 81 82 pr_debug("clock: %s transition to '%s' in %d loops\n", 82 83 clk_name, (state) ? "locked" : "bypassed", i); 83 - 84 - r = 0; 85 - } 86 84 87 85 return r; 88 86 } ··· 336 338 const char *parent_name; 337 339 u32 val; 338 340 339 - ad = kzalloc(sizeof(*clk_hw), GFP_KERNEL); 341 + ad = kzalloc(sizeof(*ad), GFP_KERNEL); 340 342 clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL); 341 343 init = kzalloc(sizeof(*init), GFP_KERNEL); 342 344
+3 -2
drivers/clk/ti/dpll.c
··· 161 161 } 162 162 163 163 #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 164 - defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) 164 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) || \ 165 + defined(CONFIG_SOC_AM43XX) 165 166 /** 166 167 * ti_clk_register_dpll_x2 - Registers a DPLLx2 clock 167 168 * @node: device node for this clock ··· 323 322 of_ti_omap4_dpll_x2_setup); 324 323 #endif 325 324 326 - #ifdef CONFIG_SOC_AM33XX 325 + #if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) 327 326 static void __init of_ti_am3_dpll_x2_setup(struct device_node *node) 328 327 { 329 328 ti_clk_register_dpll_x2(node, &dpll_x2_ck_ops, NULL);
+1 -1
drivers/clk/ti/mux.c
··· 160 160 u8 clk_mux_flags = 0; 161 161 u32 mask = 0; 162 162 u32 shift = 0; 163 - u32 flags = 0; 163 + u32 flags = CLK_SET_RATE_NO_REPARENT; 164 164 165 165 num_parents = of_clk_get_parent_count(node); 166 166 if (num_parents < 2) {
+18 -2
drivers/clocksource/exynos_mct.c
··· 162 162 exynos4_mct_write(reg, EXYNOS4_MCT_G_TCON); 163 163 } 164 164 165 - static cycle_t exynos4_frc_read(struct clocksource *cs) 165 + static cycle_t notrace _exynos4_frc_read(void) 166 166 { 167 167 unsigned int lo, hi; 168 168 u32 hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U); ··· 174 174 } while (hi != hi2); 175 175 176 176 return ((cycle_t)hi << 32) | lo; 177 + } 178 + 179 + static cycle_t exynos4_frc_read(struct clocksource *cs) 180 + { 181 + return _exynos4_frc_read(); 177 182 } 178 183 179 184 static void exynos4_frc_resume(struct clocksource *cs) ··· 197 192 198 193 static u64 notrace exynos4_read_sched_clock(void) 199 194 { 200 - return exynos4_frc_read(&mct_frc); 195 + return _exynos4_frc_read(); 196 + } 197 + 198 + static struct delay_timer exynos4_delay_timer; 199 + 200 + static cycles_t exynos4_read_current_timer(void) 201 + { 202 + return _exynos4_frc_read(); 201 203 } 202 204 203 205 static void __init exynos4_clocksource_init(void) 204 206 { 205 207 exynos4_mct_frc_start(); 208 + 209 + exynos4_delay_timer.read_current_timer = &exynos4_read_current_timer; 210 + exynos4_delay_timer.freq = clk_rate; 211 + register_current_timer_delay(&exynos4_delay_timer); 206 212 207 213 if (clocksource_register_hz(&mct_frc, clk_rate)) 208 214 panic("%s: can't register clocksource\n", mct_frc.name);
+1 -1
drivers/cpufreq/Makefile
··· 49 49 # LITTLE drivers, so that it is probed last. 50 50 obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o 51 51 52 - obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o 52 + obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 53 53 obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o 54 54 obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o 55 55 obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o
+22 -13
drivers/cpufreq/intel_pstate.c
··· 128 128 129 129 struct perf_limits { 130 130 int no_turbo; 131 + int turbo_disabled; 131 132 int max_perf_pct; 132 133 int min_perf_pct; 133 134 int32_t max_perf; ··· 288 287 if (ret != 1) 289 288 return -EINVAL; 290 289 limits.no_turbo = clamp_t(int, input, 0 , 1); 291 - 290 + if (limits.turbo_disabled) { 291 + pr_warn("Turbo disabled by BIOS or unavailable on processor\n"); 292 + limits.no_turbo = limits.turbo_disabled; 293 + } 292 294 return count; 293 295 } 294 296 ··· 361 357 { 362 358 u64 value; 363 359 rdmsrl(BYT_RATIOS, value); 364 - return (value >> 8) & 0x3F; 360 + return (value >> 8) & 0x7F; 365 361 } 366 362 367 363 static int byt_get_max_pstate(void) 368 364 { 369 365 u64 value; 370 366 rdmsrl(BYT_RATIOS, value); 371 - return (value >> 16) & 0x3F; 367 + return (value >> 16) & 0x7F; 372 368 } 373 369 374 370 static int byt_get_turbo_pstate(void) 375 371 { 376 372 u64 value; 377 373 rdmsrl(BYT_TURBO_RATIOS, value); 378 - return value & 0x3F; 374 + return value & 0x7F; 379 375 } 380 376 381 377 static void byt_set_pstate(struct cpudata *cpudata, int pstate) ··· 385 381 u32 vid; 386 382 387 383 val = pstate << 8; 388 - if (limits.no_turbo) 384 + if (limits.no_turbo && !limits.turbo_disabled) 389 385 val |= (u64)1 << 32; 390 386 391 387 vid_fp = cpudata->vid.min + mul_fp( ··· 409 405 410 406 411 407 rdmsrl(BYT_VIDS, value); 412 - cpudata->vid.min = int_tofp((value >> 8) & 0x3f); 413 - cpudata->vid.max = int_tofp((value >> 16) & 0x3f); 408 + cpudata->vid.min = int_tofp((value >> 8) & 0x7f); 409 + cpudata->vid.max = int_tofp((value >> 16) & 0x7f); 414 410 cpudata->vid.ratio = div_fp( 415 411 cpudata->vid.max - cpudata->vid.min, 416 412 int_tofp(cpudata->pstate.max_pstate - ··· 452 448 u64 val; 453 449 454 450 val = pstate << 8; 455 - if (limits.no_turbo) 451 + if (limits.no_turbo && !limits.turbo_disabled) 456 452 val |= (u64)1 << 32; 457 453 458 454 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); ··· 700 696 701 697 cpu = all_cpu_data[cpunum]; 702 698 703 - intel_pstate_get_cpu_pstates(cpu); 704 - 705 699 cpu->cpu = cpunum; 700 intel_pstate_get_cpu_pstates(cpu); 706 701 707 702 init_timer_deferrable(&cpu->timer); 708 703 cpu->timer.function = intel_pstate_timer_func; ··· 744 741 limits.min_perf = int_tofp(1); 745 742 limits.max_perf_pct = 100; 746 743 limits.max_perf = int_tofp(1); 747 - limits.no_turbo = 0; 744 + limits.no_turbo = limits.turbo_disabled; 748 745 return 0; 749 746 } 750 747 limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; ··· 787 784 { 788 785 struct cpudata *cpu; 789 786 int rc; 787 + u64 misc_en; 790 788 791 789 rc = intel_pstate_init_cpu(policy->cpu); 792 790 if (rc) ··· 795 791 796 792 cpu = all_cpu_data[policy->cpu]; 797 793 798 - if (!limits.no_turbo && 799 - limits.min_perf_pct == 100 && limits.max_perf_pct == 100) 794 + rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); 795 + if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || 796 + cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) { 797 + limits.turbo_disabled = 1; 798 + limits.no_turbo = 1; 799 + } 800 + if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100) 800 801 policy->policy = CPUFREQ_POLICY_PERFORMANCE; 801 802 else 802 803 policy->policy = CPUFREQ_POLICY_POWERSAVE;
+3 -5
drivers/crypto/caam/jr.c
··· 453 453 int error; 454 454 455 455 jrdev = &pdev->dev; 456 - jrpriv = kmalloc(sizeof(struct caam_drv_private_jr), 457 - GFP_KERNEL); 456 + jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr), 457 + GFP_KERNEL); 458 458 if (!jrpriv) 459 459 return -ENOMEM; 460 460 ··· 487 487 488 488 /* Now do the platform independent part */ 489 489 error = caam_jr_init(jrdev); /* now turn on hardware */ 490 - if (error) { 491 - kfree(jrpriv); 490 + if (error) 492 491 return error; 493 - } 494 492 495 493 jrpriv->dev = jrdev; 496 494 spin_lock(&driver_data.jr_alloc_lock);
+10 -3
drivers/dma/cppi41.c
··· 86 86 87 87 #define USBSS_IRQ_PD_COMP (1 << 2) 88 88 89 + /* Packet Descriptor */ 90 + #define PD2_ZERO_LENGTH (1 << 19) 91 + 89 92 struct cppi41_channel { 90 93 struct dma_chan chan; 91 94 struct dma_async_tx_descriptor txd; ··· 310 307 __iormb(); 311 308 312 309 while (val) { 313 - u32 desc; 310 + u32 desc, len; 314 311 315 312 q_num = __fls(val); 316 313 val &= ~(1 << q_num); ··· 322 319 q_num, desc); 323 320 continue; 324 321 } 325 - c->residue = pd_trans_len(c->desc->pd6) - 326 - pd_trans_len(c->desc->pd0); 327 322 323 + if (c->desc->pd2 & PD2_ZERO_LENGTH) 324 + len = 0; 325 + else 326 + len = pd_trans_len(c->desc->pd0); 327 + 328 + c->residue = pd_trans_len(c->desc->pd6) - len; 328 329 dma_cookie_complete(&c->txd); 329 330 c->txd.callback(c->txd.callback_param); 330 331 }
+18 -4
drivers/dma/imx-sdma.c
··· 255 255 enum dma_slave_buswidth word_size; 256 256 unsigned int buf_tail; 257 257 unsigned int num_bd; 258 + unsigned int period_len; 258 259 struct sdma_buffer_descriptor *bd; 259 260 dma_addr_t bd_phys; 260 261 unsigned int pc_from_device, pc_to_device; ··· 594 593 595 594 static void sdma_handle_channel_loop(struct sdma_channel *sdmac) 596 595 { 596 + if (sdmac->desc.callback) 597 + sdmac->desc.callback(sdmac->desc.callback_param); 598 + } 599 + 600 + static void sdma_update_channel_loop(struct sdma_channel *sdmac) 601 + { 597 602 struct sdma_buffer_descriptor *bd; 598 603 599 604 /* ··· 618 611 bd->mode.status |= BD_DONE; 619 612 sdmac->buf_tail++; 620 613 sdmac->buf_tail %= sdmac->num_bd; 621 - 622 - if (sdmac->desc.callback) 623 - sdmac->desc.callback(sdmac->desc.callback_param); 624 614 } 625 615 } 626 616 ··· 672 668 while (stat) { 673 669 int channel = fls(stat) - 1; 674 670 struct sdma_channel *sdmac = &sdma->channel[channel]; 671 + 672 + if (sdmac->flags & IMX_DMA_SG_LOOP) 673 + sdma_update_channel_loop(sdmac); 675 674 676 675 tasklet_schedule(&sdmac->tasklet); 677 676 ··· 1136 1129 sdmac->status = DMA_IN_PROGRESS; 1137 1130 1138 1131 sdmac->buf_tail = 0; 1132 + sdmac->period_len = period_len; 1139 1133 1140 1134 sdmac->flags |= IMX_DMA_SG_LOOP; 1141 1135 sdmac->direction = direction; ··· 1233 1225 struct dma_tx_state *txstate) 1234 1226 { 1235 1227 struct sdma_channel *sdmac = to_sdma_chan(chan); 1228 + u32 residue; 1229 + 1230 + if (sdmac->flags & IMX_DMA_SG_LOOP) 1231 + residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len; 1232 + else 1233 + residue = sdmac->chn_count - sdmac->chn_real_count; 1236 1234 1237 1235 dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 1238 - sdmac->chn_count - sdmac->chn_real_count); 1236 + residue); 1239 1237 1240 1238 return sdmac->status; 1241 1239 }
+1
drivers/firewire/Kconfig
··· 1 1 menu "IEEE 1394 (FireWire) support" 2 + depends on HAS_DMA 2 3 depends on PCI || COMPILE_TEST 3 4 # firewire-core does not depend on PCI but is 4 5 # not useful without PCI controller driver
+1 -1
drivers/firmware/efi/efi-pstore.c
··· 40 40 static inline u64 generic_id(unsigned long timestamp, 41 41 unsigned int part, int count) 42 42 { 43 - return (timestamp * 100 + part) * 1000 + count; 43 + return ((u64) timestamp * 100 + part) * 1000 + count; 44 44 } 45 45 46 46 static int efi_pstore_read_func(struct efivar_entry *entry, void *data)
+3 -3
drivers/firmware/efi/efi.c
··· 353 353 int depth, void *data) 354 354 { 355 355 struct param_info *info = data; 356 - void *prop, *dest; 357 - unsigned long len; 356 + const void *prop; 357 + void *dest; 358 358 u64 val; 359 - int i; 359 + int i, len; 360 360 361 361 if (depth != 1 || 362 362 (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0))
+1 -1
drivers/firmware/efi/fdt.c
··· 63 63 */ 64 64 prev = 0; 65 65 for (;;) { 66 - const char *type, *name; 66 + const char *type; 67 67 int len; 68 68 69 69 node = fdt_next_node(fdt, prev, NULL);
-6
drivers/gpio/gpio-mcp23s08.c
··· 900 900 if (spi_present_mask & (1 << addr)) 901 901 chips++; 902 902 } 903 - if (!chips) 904 - return -ENODEV; 905 903 } else { 906 904 type = spi_get_device_id(spi)->driver_data; 907 905 pdata = dev_get_platdata(&spi->dev); ··· 938 940 if (!(spi_present_mask & (1 << addr))) 939 941 continue; 940 942 chips--; 941 - if (chips < 0) { 942 - dev_err(&spi->dev, "FATAL: invalid negative chip id\n"); 943 - goto fail; 944 - } 945 943 data->mcp[addr] = &data->chip[chips]; 946 944 status = mcp23s08_probe_one(data->mcp[addr], &spi->dev, spi, 947 945 0x40 | (addr << 1), type, base,
+2 -1
drivers/gpu/drm/drm_drv.c
··· 419 419 retcode = -EFAULT; 420 420 goto err_i1; 421 421 } 422 - } else 422 + } else if (cmd & IOC_OUT) { 423 423 memset(kdata, 0, usize); 424 + } 424 425 425 426 if (ioctl->flags & DRM_UNLOCKED) 426 427 retcode = func(dev, kdata, file_priv);
+1 -1
drivers/gpu/drm/exynos/exynos_drm_dpi.c
··· 40 40 { 41 41 struct exynos_dpi *ctx = connector_to_dpi(connector); 42 42 43 - if (!ctx->panel->connector) 43 + if (ctx->panel && !ctx->panel->connector) 44 44 drm_panel_attach(ctx->panel, &ctx->connector); 45 45 46 46 return connector_status_connected;
+4 -4
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 765 765 766 766 return 0; 767 767 768 - err_unregister_pd: 769 - platform_device_unregister(exynos_drm_pdev); 770 - 771 768 err_remove_vidi: 772 769 #ifdef CONFIG_DRM_EXYNOS_VIDI 773 770 exynos_drm_remove_vidi(); 771 + 772 + err_unregister_pd: 774 773 #endif 774 + platform_device_unregister(exynos_drm_pdev); 775 775 776 776 return ret; 777 777 } 778 778 779 779 static void exynos_drm_exit(void) 780 780 { 781 + platform_driver_unregister(&exynos_drm_platform_driver); 781 782 #ifdef CONFIG_DRM_EXYNOS_VIDI 782 783 exynos_drm_remove_vidi(); 783 784 #endif 784 785 platform_device_unregister(exynos_drm_pdev); 785 - platform_driver_unregister(&exynos_drm_platform_driver); 786 786 } 787 787 788 788 module_init(exynos_drm_init);
+1 -1
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 343 343 int exynos_dpi_remove(struct device *dev); 344 344 #else 345 345 static inline struct exynos_drm_display * 346 - exynos_dpi_probe(struct device *dev) { return 0; } 346 + exynos_dpi_probe(struct device *dev) { return NULL; } 347 347 static inline int exynos_dpi_remove(struct device *dev) { return 0; } 348 348 #endif 349 349
+2
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 741 741 win_data = &ctx->win_data[i]; 742 742 if (win_data->enabled) 743 743 fimd_win_commit(mgr, i); 744 + else 745 + fimd_win_disable(mgr, i); 744 746 } 745 747 746 748 fimd_commit(mgr);
+19
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 2090 2090 2091 2091 static void hdmi_dpms(struct exynos_drm_display *display, int mode) 2092 2092 { 2093 + struct hdmi_context *hdata = display->ctx; 2094 + struct drm_encoder *encoder = hdata->encoder; 2095 + struct drm_crtc *crtc = encoder->crtc; 2096 + struct drm_crtc_helper_funcs *funcs = NULL; 2097 + 2093 2098 DRM_DEBUG_KMS("mode %d\n", mode); 2094 2099 2095 2100 switch (mode) { ··· 2104 2099 case DRM_MODE_DPMS_STANDBY: 2105 2100 case DRM_MODE_DPMS_SUSPEND: 2106 2101 case DRM_MODE_DPMS_OFF: 2102 + /* 2103 + * The SFRs of VP and Mixer are updated by Vertical Sync of 2104 + * Timing generator which is a part of HDMI so the sequence 2105 + * to disable TV Subsystem should be as following, 2106 + * VP -> Mixer -> HDMI 2107 + * 2108 + * Below codes will try to disable Mixer and VP(if used) 2109 + * prior to disabling HDMI. 2110 + */ 2111 + if (crtc) 2112 + funcs = crtc->helper_private; 2113 + if (funcs && funcs->dpms) 2114 + (*funcs->dpms)(crtc, mode); 2115 + 2107 2116 hdmi_poweroff(display); 2108 2117 break; 2109 2118 default:
+35 -15
drivers/gpu/drm/exynos/exynos_mixer.c
··· 377 377 mixer_regs_dump(ctx); 378 378 } 379 379 380 + static void mixer_stop(struct mixer_context *ctx) 381 + { 382 + struct mixer_resources *res = &ctx->mixer_res; 383 + int timeout = 20; 384 + 385 + mixer_reg_writemask(res, MXR_STATUS, 0, MXR_STATUS_REG_RUN); 386 + 387 + while (!(mixer_reg_read(res, MXR_STATUS) & MXR_STATUS_REG_IDLE) && 388 + --timeout) 389 + usleep_range(10000, 12000); 390 + 391 + mixer_regs_dump(ctx); 392 + } 393 + 380 394 static void vp_video_buffer(struct mixer_context *ctx, int win) 381 395 { 382 396 struct mixer_resources *res = &ctx->mixer_res; ··· 511 497 static void mixer_layer_update(struct mixer_context *ctx) 512 498 { 513 499 struct mixer_resources *res = &ctx->mixer_res; 514 - u32 val; 515 500 516 - val = mixer_reg_read(res, MXR_CFG); 517 - 518 - /* allow one update per vsync only */ 519 - if (!(val & MXR_CFG_LAYER_UPDATE_COUNT_MASK)) 520 - mixer_reg_writemask(res, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE); 501 + mixer_reg_writemask(res, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE); 521 502 } 522 503 523 504 static void mixer_graph_buffer(struct mixer_context *ctx, int win) ··· 1019 1010 } 1020 1011 mutex_unlock(&mixer_ctx->mixer_mutex); 1021 1012 1013 + drm_vblank_get(mgr->crtc->dev, mixer_ctx->pipe); 1014 + 1022 1015 atomic_set(&mixer_ctx->wait_vsync_event, 1); 1023 1016 1024 1017 /* ··· 1031 1020 !atomic_read(&mixer_ctx->wait_vsync_event), 1032 1021 HZ/20)) 1033 1022 DRM_DEBUG_KMS("vblank wait timed out.\n"); 1023 + 1024 + drm_vblank_put(mgr->crtc->dev, mixer_ctx->pipe); 1034 1025 } 1035 1026 1036 1027 static void mixer_window_suspend(struct exynos_drm_manager *mgr) ··· 1074 1061 mutex_unlock(&ctx->mixer_mutex); 1075 1062 return; 1076 1063 } 1077 - ctx->powered = true; 1064 + 1078 1065 mutex_unlock(&ctx->mixer_mutex); 1079 1066 1080 1067 pm_runtime_get_sync(ctx->dev); ··· 1084 1071 clk_prepare_enable(res->vp); 1085 1072 clk_prepare_enable(res->sclk_mixer); 1086 1073 } 1074 + 1075 + mutex_lock(&ctx->mixer_mutex); 1076 + ctx->powered = true; 1077 + mutex_unlock(&ctx->mixer_mutex); 1078 + 1079 + mixer_reg_writemask(res, MXR_STATUS, ~0, MXR_STATUS_SOFT_RESET); 1087 1080 1088 1081 mixer_reg_write(res, MXR_INT_EN, ctx->int_en); 1089 1082 mixer_win_reset(ctx); ··· 1103 1084 struct mixer_resources *res = &ctx->mixer_res; 1104 1085 1105 1086 mutex_lock(&ctx->mixer_mutex); 1106 - if (!ctx->powered) 1107 - goto out; 1087 + if (!ctx->powered) { 1088 + mutex_unlock(&ctx->mixer_mutex); 1089 + return; 1090 + } 1108 1091 mutex_unlock(&ctx->mixer_mutex); 1109 1092 1093 + mixer_stop(ctx); 1110 1094 mixer_window_suspend(mgr); 1111 1095 1112 1096 ctx->int_en = mixer_reg_read(res, MXR_INT_EN); 1097 + 1098 + mutex_lock(&ctx->mixer_mutex); 1099 + ctx->powered = false; 1100 + mutex_unlock(&ctx->mixer_mutex); 1113 1101 1114 1102 clk_disable_unprepare(res->mixer); 1115 1103 if (ctx->vp_enabled) { ··· 1125 1099 } 1126 1100 1127 1101 pm_runtime_put_sync(ctx->dev); 1128 - 1129 - mutex_lock(&ctx->mixer_mutex); 1130 - ctx->powered = false; 1131 - 1132 - out: 1133 - mutex_unlock(&ctx->mixer_mutex); 1134 1102 } 1135 1103 1136 1104 static void mixer_dpms(struct exynos_drm_manager *mgr, int mode)
+1
drivers/gpu/drm/exynos/regs-mixer.h
··· 78 78 #define MXR_STATUS_BIG_ENDIAN (1 << 3) 79 79 #define MXR_STATUS_ENDIAN_MASK (1 << 3) 80 80 #define MXR_STATUS_SYNC_ENABLE (1 << 2) 81 + #define MXR_STATUS_REG_IDLE (1 << 1) 81 82 #define MXR_STATUS_REG_RUN (1 << 0) 82 83 83 84 /* bits for MXR_CFG */
+9 -3
drivers/gpu/drm/i2c/tda998x_drv.c
··· 810 810 tda998x_encoder_mode_valid(struct drm_encoder *encoder, 811 811 struct drm_display_mode *mode) 812 812 { 813 + if (mode->clock > 150000) 814 + return MODE_CLOCK_HIGH; 815 + if (mode->htotal >= BIT(13)) 816 + return MODE_BAD_HVALUE; 817 + if (mode->vtotal >= BIT(11)) 818 + return MODE_BAD_VVALUE; 813 819 return MODE_OK; 814 820 } 815 821 ··· 1054 1048 return i; 1055 1049 } 1056 1050 } else { 1057 - for (i = 10; i > 0; i--) { 1058 - msleep(10); 1051 + for (i = 100; i > 0; i--) { 1052 + msleep(1); 1059 1053 ret = reg_read(priv, REG_INT_FLAGS_2); 1060 1054 if (ret < 0) 1061 1055 return ret; ··· 1189 1183 tda998x_encoder_destroy(struct drm_encoder *encoder) 1190 1184 { 1191 1185 struct tda998x_priv *priv = to_tda998x_priv(encoder); 1192 - drm_i2c_encoder_destroy(encoder); 1193 1186 1194 1187 /* disable all IRQs and free the IRQ handler */ 1195 1188 cec_write(priv, REG_CEC_RXSHPDINTENA, 0); ··· 1198 1193 1199 1194 if (priv->cec) 1200 1195 i2c_unregister_device(priv->cec); 1196 + drm_i2c_encoder_destroy(encoder); 1201 1197 kfree(priv); 1202 1198 } 1203 1199
+2
drivers/gpu/drm/i915/i915_debugfs.c
··· 446 446 447 447 memset(&stats, 0, sizeof(stats)); 448 448 stats.file_priv = file->driver_priv; 449 + spin_lock(&file->table_lock); 449 450 idr_for_each(&file->object_idr, per_file_stats, &stats); 451 + spin_unlock(&file->table_lock); 450 452 /* 451 453 * Although we have a valid reference on file->pid, that does 452 454 * not guarantee that the task_struct who called get_pid() is
+3 -2
drivers/gpu/drm/i915/i915_dma.c
··· 1464 1464 #else 1465 1465 static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) 1466 1466 { 1467 - int ret; 1467 + int ret = 0; 1468 1468 1469 1469 DRM_INFO("Replacing VGA console driver\n"); 1470 1470 1471 1471 console_lock(); 1472 - ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); 1472 + if (con_is_bound(&vga_con)) 1473 + ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); 1473 1474 if (ret == 0) { 1474 1475 ret = do_unregister_con_driver(&vga_con); 1475 1476
+3
drivers/gpu/drm/i915/i915_drv.h
··· 656 656 #define QUIRK_PIPEA_FORCE (1<<0) 657 657 #define QUIRK_LVDS_SSC_DISABLE (1<<1) 658 658 #define QUIRK_INVERT_BRIGHTNESS (1<<2) 659 + #define QUIRK_BACKLIGHT_PRESENT (1<<3) 659 660 660 661 struct intel_fbdev; 661 662 struct intel_fbc_work; ··· 978 977 bool always_on; 979 978 /* power well enable/disable usage count */ 980 979 int count; 980 + /* cached hw enabled state */ 981 + bool hw_enabled; 981 982 unsigned long domains; 982 983 unsigned long data; 983 984 const struct i915_power_well_ops *ops;
+5 -3
drivers/gpu/drm/i915/i915_gem_context.c
··· 598 598 struct intel_context *from = ring->last_context; 599 599 struct i915_hw_ppgtt *ppgtt = ctx_to_ppgtt(to); 600 600 u32 hw_flags = 0; 601 + bool uninitialized = false; 601 602 int ret, i; 602 603 603 604 if (from != NULL && ring == &dev_priv->ring[RCS]) { ··· 697 696 i915_gem_context_unreference(from); 698 697 } 699 698 699 + uninitialized = !to->is_initialized && from == NULL; 700 + to->is_initialized = true; 701 + 700 702 done: 701 703 i915_gem_context_reference(to); 702 704 ring->last_context = to; 703 705 to->last_ring = ring; 704 706 705 - if (ring->id == RCS && !to->is_initialized && from == NULL) { 707 + if (uninitialized) { 706 708 ret = i915_gem_render_state_init(ring); 707 709 if (ret) 708 710 DRM_ERROR("init render state: %d\n", ret); 709 711 } 710 - 711 - to->is_initialized = true; 712 712 713 713 return 0; 714 714
+44
drivers/gpu/drm/i915/i915_gem_stolen.c
··· 74 74 if (base == 0) 75 75 return 0; 76 76 77 + /* make sure we don't clobber the GTT if it's within stolen memory */ 78 + if (INTEL_INFO(dev)->gen <= 4 && !IS_G33(dev) && !IS_G4X(dev)) { 79 + struct { 80 + u32 start, end; 81 + } stolen[2] = { 82 + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, 83 + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, 84 + }; 85 + u64 gtt_start, gtt_end; 86 + 87 + gtt_start = I915_READ(PGTBL_CTL); 88 + if (IS_GEN4(dev)) 89 + gtt_start = (gtt_start & PGTBL_ADDRESS_LO_MASK) | 90 + (gtt_start & PGTBL_ADDRESS_HI_MASK) << 28; 91 + else 92 + gtt_start &= PGTBL_ADDRESS_LO_MASK; 93 + gtt_end = gtt_start + gtt_total_entries(dev_priv->gtt) * 4; 94 + 95 + if (gtt_start >= stolen[0].start && gtt_start < stolen[0].end) 96 + stolen[0].end = gtt_start; 97 + if (gtt_end > stolen[1].start && gtt_end <= stolen[1].end) 98 + stolen[1].start = gtt_end; 99 + 100 + /* pick the larger of the two chunks */ 101 + if (stolen[0].end - stolen[0].start > 102 + stolen[1].end - stolen[1].start) { 103 + base = stolen[0].start; 104 + dev_priv->gtt.stolen_size = stolen[0].end - stolen[0].start; 105 + } else { 106 + base = stolen[1].start; 107 + dev_priv->gtt.stolen_size = stolen[1].end - stolen[1].start; 108 + } 109 + 110 + if (stolen[0].start != stolen[1].start || 111 + stolen[0].end != stolen[1].end) { 112 + DRM_DEBUG_KMS("GTT within stolen memory at 0x%llx-0x%llx\n", 113 + (unsigned long long) gtt_start, 114 + (unsigned long long) gtt_end - 1); 115 + DRM_DEBUG_KMS("Stolen memory adjusted to 0x%x-0x%x\n", 116 + base, base + (u32) dev_priv->gtt.stolen_size - 1); 117 + } 118 + } 119 + 120 + 77 121 /* Verify that nothing else uses this physical address. Stolen 78 122 * memory should be reserved by the BIOS and hidden from the 79 123 * kernel. So if the region is already marked as busy, something
+3
drivers/gpu/drm/i915/i915_reg.h
··· 942 942 /* 943 943 * Instruction and interrupt control regs 944 944 */ 945 + #define PGTBL_CTL 0x02020 946 + #define PGTBL_ADDRESS_LO_MASK 0xfffff000 /* bits [31:12] */ 947 + #define PGTBL_ADDRESS_HI_MASK 0x000000f0 /* bits [35:32] (gen4) */ 945 948 #define PGTBL_ER 0x02024 946 949 #define RENDER_RING_BASE 0x02000 947 950 #define BSD_RING_BASE 0x04000
+3 -3
drivers/gpu/drm/i915/intel_bios.c
··· 315 315 const struct bdb_lfp_backlight_data *backlight_data; 316 316 const struct bdb_lfp_backlight_data_entry *entry; 317 317 318 - /* Err to enabling backlight if no backlight block. */ 319 - dev_priv->vbt.backlight.present = true; 320 - 321 318 backlight_data = find_section(bdb, BDB_LVDS_BACKLIGHT); 322 319 if (!backlight_data) 323 320 return; ··· 1084 1087 enum port port; 1085 1088 1086 1089 dev_priv->vbt.crt_ddc_pin = GMBUS_PORT_VGADDC; 1090 + 1091 + /* Default to having backlight */ 1092 + dev_priv->vbt.backlight.present = true; 1087 1093 1088 1094 /* LFP panel data */ 1089 1095 dev_priv->vbt.lvds_dither = 1;
+47 -7
drivers/gpu/drm/i915/intel_display.c
··· 2087 2087 static void intel_enable_primary_hw_plane(struct drm_i915_private *dev_priv, 2088 2088 enum plane plane, enum pipe pipe) 2089 2089 { 2090 + struct drm_device *dev = dev_priv->dev; 2090 2091 struct intel_crtc *intel_crtc = 2091 2092 to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]); 2092 2093 int reg; ··· 2107 2106 2108 2107 I915_WRITE(reg, val | DISPLAY_PLANE_ENABLE); 2109 2108 intel_flush_primary_plane(dev_priv, plane); 2109 + 2110 + /* 2111 + * BDW signals flip done immediately if the plane 2112 + * is disabled, even if the plane enable is already 2113 + * armed to occur at the next vblank :( 2114 + */ 2115 + if (IS_BROADWELL(dev)) 2116 + intel_wait_for_vblank(dev, intel_crtc->pipe); 2110 2117 } 2111 2118 2112 2119 /** ··· 4573 4564 if (intel_crtc->active) 4574 4565 return; 4575 4566 4576 - vlv_prepare_pll(intel_crtc); 4567 + is_dsi = intel_pipe_has_type(crtc, INTEL_OUTPUT_DSI); 4568 + 4569 + if (!is_dsi && !IS_CHERRYVIEW(dev)) 4570 + vlv_prepare_pll(intel_crtc); 4577 4571 4578 4572 /* Set up the display plane register */ 4579 4573 dspcntr = DISPPLANE_GAMMA_ENABLE; ··· 4609 4597 for_each_encoder_on_crtc(dev, crtc, encoder) 4610 4598 if (encoder->pre_pll_enable) 4611 4599 encoder->pre_pll_enable(encoder); 4612 - 4613 - is_dsi = intel_pipe_has_type(crtc, INTEL_OUTPUT_DSI); 4614 4600 4615 4601 if (!is_dsi) { 4616 4602 if (IS_CHERRYVIEW(dev)) ··· 11097 11087 return names[output]; 11098 11088 } 11099 11089 11090 + static bool intel_crt_present(struct drm_device *dev) 11091 + { 11092 + struct drm_i915_private *dev_priv = dev->dev_private; 11093 + 11094 + if (IS_ULT(dev)) 11095 + return false; 11096 + 11097 + if (IS_CHERRYVIEW(dev)) 11098 + return false; 11099 + 11100 + if (IS_VALLEYVIEW(dev) && !dev_priv->vbt.int_crt_support) 11101 + return false; 11102 + 11103 + return true; 11104 + } 11105 + 11100 11106 static void intel_setup_outputs(struct drm_device *dev) 11101 11107 { 11102 11108 struct drm_i915_private *dev_priv = dev->dev_private; ··· 11121 11095 11122 11096 intel_lvds_init(dev); 11123 11097 11124 - if (!IS_ULT(dev) && !IS_CHERRYVIEW(dev) && dev_priv->vbt.int_crt_support) 11098 + if (intel_crt_present(dev)) 11125 11099 intel_crt_init(dev); 11126 11100 11127 11101 if (HAS_DDI(dev)) { ··· 11591 11565 DRM_INFO("applying inverted panel brightness quirk\n"); 11592 11566 } 11593 11567 11568 + /* Some VBT's incorrectly indicate no backlight is present */ 11569 + static void quirk_backlight_present(struct drm_device *dev) 11570 + { 11571 + struct drm_i915_private *dev_priv = dev->dev_private; 11572 + dev_priv->quirks |= QUIRK_BACKLIGHT_PRESENT; 11573 + DRM_INFO("applying backlight present quirk\n"); 11574 + } 11575 + 11594 11576 struct intel_quirk { 11595 11577 int device; 11596 11578 int subsystem_vendor; ··· 11667 11633 11668 11634 /* Acer Aspire 5336 */ 11669 11635 { 0x2a42, 0x1025, 0x048a, quirk_invert_brightness }, 11636 + 11637 + /* Acer C720 and C720P Chromebooks (Celeron 2955U) have backlights */ 11638 + { 0x0a06, 0x1025, 0x0a11, quirk_backlight_present }, 11639 + 11640 + /* Toshiba CB35 Chromebook (Celeron 2955U) */ 11641 + { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present }, 11670 11642 }; 11671 11643 11672 11644 static void intel_init_quirks(struct drm_device *dev) ··· 12451 12411 12452 12412 for_each_pipe(i) { 12453 12413 error->pipe[i].power_domain_on = 12454 - intel_display_power_enabled_sw(dev_priv, 12455 - POWER_DOMAIN_PIPE(i)); 12414 + intel_display_power_enabled_unlocked(dev_priv, 12415 + POWER_DOMAIN_PIPE(i)); 12456 12416 if (!error->pipe[i].power_domain_on) 12457 12417 continue; ··· 12487 12447 enum transcoder cpu_transcoder = transcoders[i]; 12488 12448 12489 12449 error->transcoder[i].power_domain_on = 12490 - intel_display_power_enabled_sw(dev_priv, 12450 + intel_display_power_enabled_unlocked(dev_priv, 12491 12451 POWER_DOMAIN_TRANSCODER(cpu_transcoder)); 12492 12452 if (!error->transcoder[i].power_domain_on) 12493 12453 continue;
+42
drivers/gpu/drm/i915/intel_dp.c
··· 28 28 #include <linux/i2c.h> 29 29 #include <linux/slab.h> 30 30 #include <linux/export.h> 31 + #include <linux/notifier.h> 32 + #include <linux/reboot.h> 31 33 #include <drm/drmP.h> 32 34 #include <drm/drm_crtc.h> 33 35 #include <drm/drm_crtc_helper.h> ··· 336 334 return PCH_PP_STATUS; 337 335 else 338 336 return VLV_PIPE_PP_STATUS(vlv_power_sequencer_pipe(intel_dp)); 337 + } 338 + 339 + /* Reboot notifier handler to shutdown panel power to guarantee T12 timing 340 + This function only applicable when panel PM state is not to be tracked */ 341 + static int edp_notify_handler(struct notifier_block *this, unsigned long code, 342 + void *unused) 343 + { 344 + struct intel_dp *intel_dp = container_of(this, typeof(* intel_dp), 345 + edp_notifier); 346 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 347 + struct drm_i915_private *dev_priv = dev->dev_private; 348 + u32 pp_div; 349 + u32 pp_ctrl_reg, pp_div_reg; 350 + enum pipe pipe = vlv_power_sequencer_pipe(intel_dp); 351 + 352 + if (!is_edp(intel_dp) || code != SYS_RESTART) 353 + return 0; 354 + 355 + if (IS_VALLEYVIEW(dev)) { 356 + pp_ctrl_reg = VLV_PIPE_PP_CONTROL(pipe); 357 + pp_div_reg = VLV_PIPE_PP_DIVISOR(pipe); 358 + pp_div = I915_READ(pp_div_reg); 359 + pp_div &= PP_REFERENCE_DIVIDER_MASK; 360 + 361 + /* 0x1F write to PP_DIV_REG sets max cycle delay */ 362 + I915_WRITE(pp_div_reg, pp_div | 0x1F); 363 + I915_WRITE(pp_ctrl_reg, PANEL_UNLOCK_REGS | PANEL_POWER_OFF); 364 + msleep(intel_dp->panel_power_cycle_delay); 365 + } 366 + 367 + return 0; 339 368 } 340 369 341 370 static bool edp_have_panel_power(struct intel_dp *intel_dp) ··· 3740 3707 drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 3741 3708 edp_panel_vdd_off_sync(intel_dp); 3742 3709 drm_modeset_unlock(&dev->mode_config.connection_mutex); 3710 + if (intel_dp->edp_notifier.notifier_call) { 3711 + unregister_reboot_notifier(&intel_dp->edp_notifier); 3712 + intel_dp->edp_notifier.notifier_call = NULL; 3713 + } 3743 3714 } 3744 3715 kfree(intel_dig_port); 3745 3716 } ··· 4220 4183 fixed_mode->type |= DRM_MODE_TYPE_PREFERRED; 4221 4184 } 4222 4185 mutex_unlock(&dev->mode_config.mutex); 4186 + 4187 + if (IS_VALLEYVIEW(dev)) { 4188 + intel_dp->edp_notifier.notifier_call = edp_notify_handler; 4189 + register_reboot_notifier(&intel_dp->edp_notifier); 4190 + } 4223 4191 4224 4192 intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); 4225 4193 intel_panel_setup_backlight(connector);
+4 -2
drivers/gpu/drm/i915/intel_drv.h
··· 538 538 unsigned long last_power_on; 539 539 unsigned long last_backlight_off; 540 540 bool psr_setup_done; 541 + struct notifier_block edp_notifier; 542 + 541 543 bool use_tps3; 542 544 struct intel_connector *attached_connector; 543 545 ··· 952 950 void intel_power_domains_remove(struct drm_i915_private *); 953 951 bool intel_display_power_enabled(struct drm_i915_private *dev_priv, 954 952 enum intel_display_power_domain domain); 955 - bool intel_display_power_enabled_sw(struct drm_i915_private *dev_priv, 956 - enum intel_display_power_domain domain); 953 + bool intel_display_power_enabled_unlocked(struct drm_i915_private *dev_priv, 954 + enum intel_display_power_domain domain); 957 955 void intel_display_power_get(struct drm_i915_private *dev_priv, 958 956 enum intel_display_power_domain domain); 959 957 void intel_display_power_put(struct drm_i915_private *dev_priv,
+15 -14
drivers/gpu/drm/i915/intel_dsi.c
··· 117 117 /* bandgap reset is needed after everytime we do power gate */ 118 118 band_gap_reset(dev_priv); 119 119 120 + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 121 + usleep_range(2500, 3000); 122 + 120 123 val = I915_READ(MIPI_PORT_CTRL(pipe)); 121 124 I915_WRITE(MIPI_PORT_CTRL(pipe), val | LP_OUTPUT_HOLD); 122 125 usleep_range(1000, 1500); 123 - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); 124 - usleep_range(2000, 2500); 126 + 127 + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); 128 + usleep_range(2500, 3000); 129 + 125 130 I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); 126 - usleep_range(2000, 2500); 127 - I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); 128 - usleep_range(2000, 2500); 129 - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); 130 - usleep_range(2000, 2500); 131 132 usleep_range(2500, 3000); 131 133 } 132 134 133 135 static void intel_dsi_enable(struct intel_encoder *encoder) ··· 272 271 273 272 DRM_DEBUG_KMS("\n"); 274 273 275 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 274 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); 276 275 usleep_range(2000, 2500); 277 276 278 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); 277 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); 279 278 usleep_range(2000, 2500); 280 279 281 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 280 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); 282 281 usleep_range(2000, 2500); 283 - 284 - val = I915_READ(MIPI_PORT_CTRL(pipe)); 285 - I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); 286 - usleep_range(1000, 1500); 287 282 288 283 if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & AFE_LATCHOUT) 289 284 == 0x00000), 30)) 290 285 DRM_ERROR("DSI LP not going Low\n"); 286 + 287 + val = I915_READ(MIPI_PORT_CTRL(pipe)); 288 + I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); 289 + usleep_range(1000, 1500); 291 290 I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); 293 292 usleep_range(2000, 2500);
-6
drivers/gpu/drm/i915/intel_dsi_cmd.c
··· 404 404 else 405 405 cmd |= DPI_LP_MODE; 406 406 407 - /* DPI virtual channel?! */ 408 - 409 - mask = DPI_FIFO_EMPTY; 410 - if (wait_for((I915_READ(MIPI_GEN_FIFO_STAT(pipe)) & mask) == mask, 50)) 411 - DRM_ERROR("Timeout waiting for DPI FIFO empty.\n"); 412 - 413 407 /* clear bit */ 414 408 I915_WRITE(MIPI_INTR_STAT(pipe), SPL_PKT_SENT_INTERRUPT); 415 409
+9
drivers/gpu/drm/i915/intel_opregion.c
··· 403 403 404 404 DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp); 405 405 406 + /* 407 + * If the acpi_video interface is not supposed to be used, don't 408 + * bother processing backlight level change requests from firmware. 409 + */ 410 + if (!acpi_video_verify_backlight_support()) { 411 + DRM_DEBUG_KMS("opregion backlight request ignored\n"); 412 + return 0; 413 + } 414 + 406 415 if (!(bclp & ASLE_BCLP_VALID)) 407 416 return ASLC_BACKLIGHT_FAILED; 408 417
+6 -2
drivers/gpu/drm/i915/intel_panel.c
··· 1118 1118 int ret; 1119 1119 1120 1120 if (!dev_priv->vbt.backlight.present) { 1121 - DRM_DEBUG_KMS("native backlight control not available per VBT\n"); 1122 - return 0; 1121 + if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) { 1122 + DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n"); 1123 + } else { 1124 + DRM_DEBUG_KMS("no backlight present per VBT\n"); 1125 + return 0; 1126 + } 1123 1127 } 1124 1128 1125 1129 /* set level and max in panel struct */
+44 -22
drivers/gpu/drm/i915/intel_pm.c
··· 3209 3209 */ 3210 3210 static void vlv_set_rps_idle(struct drm_i915_private *dev_priv) 3211 3211 { 3212 + struct drm_device *dev = dev_priv->dev; 3213 + 3214 + /* Latest VLV doesn't need to force the gfx clock */ 3215 + if (dev->pdev->revision >= 0xd) { 3216 + valleyview_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit); 3217 + return; 3218 + } 3219 + 3212 3220 /* 3213 3221 * When we are idle. Drop to min voltage state. 3214 3222 */ ··· 5611 5603 (HSW_PWR_WELL_ENABLE_REQUEST | HSW_PWR_WELL_STATE_ENABLED); 5612 5604 } 5613 5605 5614 - bool intel_display_power_enabled_sw(struct drm_i915_private *dev_priv, 5615 - enum intel_display_power_domain domain) 5606 + bool intel_display_power_enabled_unlocked(struct drm_i915_private *dev_priv, 5607 + enum intel_display_power_domain domain) 5616 5608 { 5617 5609 struct i915_power_domains *power_domains; 5618 5610 struct i915_power_well *power_well; ··· 5623 5615 return false; 5624 5616 5625 5617 power_domains = &dev_priv->power_domains; 5618 + 5626 5619 is_enabled = true; 5620 + 5627 5621 for_each_power_well_rev(i, power_well, BIT(domain), power_domains) { 5628 5622 if (power_well->always_on) 5629 5623 continue; 5630 5624 5631 - if (!power_well->count) { 5625 + if (!power_well->hw_enabled) { 5632 5626 is_enabled = false; 5633 5627 break; 5634 5628 } 5635 5629 } 5630 + 5636 5631 return is_enabled; 5637 5632 } 5638 5633 ··· 5643 5632 enum intel_display_power_domain domain) 5644 5633 { 5645 5634 struct i915_power_domains *power_domains; 5646 - struct i915_power_well *power_well; 5647 - bool is_enabled; 5648 - int i; 5649 - 5650 - if (dev_priv->pm.suspended) 5651 - return false; 5635 + bool ret; 5652 5636 5653 5637 power_domains = &dev_priv->power_domains; 5654 5638 5655 - is_enabled = true; 5656 - 5657 5639 mutex_lock(&power_domains->lock); 5658 - for_each_power_well_rev(i, power_well, BIT(domain), power_domains) { 5659 - if (power_well->always_on) 5660 - continue; 5661 - 5662 - if (!power_well->ops->is_enabled(dev_priv, power_well)) { 5663 - is_enabled = false; 5664 - break; 5665 - } 5666 - } 5640 + ret = intel_display_power_enabled_unlocked(dev_priv, domain); 5667 5641 mutex_unlock(&power_domains->lock); 5668 5642 5669 - return is_enabled; 5643 + return ret; 5670 5644 } 5671 5645 5672 5646 /* ··· 5972 5976 if (!power_well->count++) { 5973 5977 DRM_DEBUG_KMS("enabling %s\n", power_well->name); 5974 5978 power_well->ops->enable(dev_priv, power_well); 5979 + power_well->hw_enabled = true; 5975 5980 } 5976 5981 5977 5982 check_power_well_state(dev_priv, power_well); ··· 6002 6005 6003 6006 if (!--power_well->count && i915.disable_power_well) { 6004 6007 DRM_DEBUG_KMS("disabling %s\n", power_well->name); 6008 + power_well->hw_enabled = false; 6005 6009 power_well->ops->disable(dev_priv, power_well); 6006 6010 } 6007 6011 ··· 6045 6047 return 0; 6046 6048 } 6047 6049 EXPORT_SYMBOL_GPL(i915_release_power_well); 6050 + 6051 + /* 6052 + * Private interface for the audio driver to get CDCLK in kHz. 6053 + * 6054 + * Caller must request power well using i915_request_power_well() prior to 6055 + * making the call. 
6056 + */ 6057 + int i915_get_cdclk_freq(void) 6058 + { 6059 + struct drm_i915_private *dev_priv; 6060 + 6061 + if (!hsw_pwr) 6062 + return -ENODEV; 6063 + 6064 + dev_priv = container_of(hsw_pwr, struct drm_i915_private, 6065 + power_domains); 6066 + 6067 + return intel_ddi_get_cdclk_freq(dev_priv); 6068 + } 6069 + EXPORT_SYMBOL_GPL(i915_get_cdclk_freq); 6070 + 6048 6071 6049 6072 #define POWER_DOMAIN_MASK (BIT(POWER_DOMAIN_NUM) - 1) 6050 6073 ··· 6286 6267 int i; 6287 6268 6288 6269 mutex_lock(&power_domains->lock); 6289 - for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains) 6270 + for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains) { 6290 6271 power_well->ops->sync_hw(dev_priv, power_well); 6272 + power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, 6273 + power_well); 6274 + } 6291 6275 mutex_unlock(&power_domains->lock); 6292 6276 } 6293 6277
+8
drivers/gpu/drm/i915/intel_sprite.c
··· 691 691 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 692 692 693 693 /* 694 + * BDW signals flip done immediately if the plane 695 + * is disabled, even if the plane enable is already 696 + * armed to occur at the next vblank :( 697 + */ 698 + if (IS_BROADWELL(dev)) 699 + intel_wait_for_vblank(dev, intel_crtc->pipe); 700 + 701 + /* 694 702 * FIXME IPS should be fine as long as one plane is 695 703 * enabled, but in practice it seems to have problems 696 704 * when going from primary only to sprite only and vice
+2
drivers/gpu/drm/msm/hdmi/hdmi.c
··· 277 277 static const char *hpd_reg_names[] = {"hpd-gdsc", "hpd-5v"}; 278 278 static const char *pwr_reg_names[] = {"core-vdda", "core-vcc"}; 279 279 static const char *hpd_clk_names[] = {"iface_clk", "core_clk", "mdp_core_clk"}; 280 + static unsigned long hpd_clk_freq[] = {0, 19200000, 0}; 280 281 static const char *pwr_clk_names[] = {"extp_clk", "alt_iface_clk"}; 281 282 282 283 config.phy_init = hdmi_phy_8x74_init; ··· 287 286 config.pwr_reg_names = pwr_reg_names; 288 287 config.pwr_reg_cnt = ARRAY_SIZE(pwr_reg_names); 289 288 config.hpd_clk_names = hpd_clk_names; 289 + config.hpd_freq = hpd_clk_freq; 290 290 config.hpd_clk_cnt = ARRAY_SIZE(hpd_clk_names); 291 291 config.pwr_clk_names = pwr_clk_names; 292 292 config.pwr_clk_cnt = ARRAY_SIZE(pwr_clk_names);
+1
drivers/gpu/drm/msm/hdmi/hdmi.h
··· 87 87 88 88 /* clks that need to be on for hpd: */ 89 89 const char **hpd_clk_names; 90 + const long unsigned *hpd_freq; 90 91 int hpd_clk_cnt; 91 92 92 93 /* clks that need to be on for screen pwr (ie pixel clk): */
+8
drivers/gpu/drm/msm/hdmi/hdmi_connector.c
··· 127 127 } 128 128 129 129 for (i = 0; i < config->hpd_clk_cnt; i++) { 130 + if (config->hpd_freq && config->hpd_freq[i]) { 131 + ret = clk_set_rate(hdmi->hpd_clks[i], 132 + config->hpd_freq[i]); 133 + if (ret) 134 + dev_warn(dev->dev, "failed to set clk %s (%d)\n", 135 + config->hpd_clk_names[i], ret); 136 + } 137 + 130 138 ret = clk_prepare_enable(hdmi->hpd_clks[i]); 131 139 if (ret) { 132 140 dev_err(dev->dev, "failed to enable hpd clk: %s (%d)\n",
+17 -5
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
··· 20 20 #include "msm_mmu.h" 21 21 #include "mdp5_kms.h" 22 22 23 + static const char *iommu_ports[] = { 24 + "mdp_0", 25 + }; 26 + 23 27 static struct mdp5_platform_config *mdp5_get_config(struct platform_device *dev); 24 28 25 29 static int mdp5_hw_init(struct msm_kms *kms) ··· 108 104 static void mdp5_destroy(struct msm_kms *kms) 109 105 { 110 106 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); 107 + struct msm_mmu *mmu = mdp5_kms->mmu; 108 + 109 + if (mmu) { 110 + mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports)); 111 + mmu->funcs->destroy(mmu); 112 + } 111 113 kfree(mdp5_kms); 112 114 } 113 115 ··· 226 216 return ret; 227 217 } 228 218 229 - static const char *iommu_ports[] = { 230 - "mdp_0", 231 - }; 232 - 233 219 static int get_clk(struct platform_device *pdev, struct clk **clkp, 234 220 const char *name) 235 221 { ··· 323 317 mmu = msm_iommu_new(dev, config->iommu); 324 318 if (IS_ERR(mmu)) { 325 319 ret = PTR_ERR(mmu); 320 + dev_err(dev->dev, "failed to init iommu: %d\n", ret); 326 321 goto fail; 327 322 } 323 + 328 324 ret = mmu->funcs->attach(mmu, iommu_ports, 329 325 ARRAY_SIZE(iommu_ports)); 330 - if (ret) 326 + if (ret) { 327 + dev_err(dev->dev, "failed to attach iommu: %d\n", ret); 328 + mmu->funcs->destroy(mmu); 331 329 goto fail; 330 + } 332 331 } else { 333 332 dev_info(dev->dev, "no iommu, fallback to phys " 334 333 "contig buffers for scanout\n"); 335 334 mmu = NULL; 336 335 } 336 + mdp5_kms->mmu = mmu; 337 337 338 338 mdp5_kms->id = msm_register_mmu(dev, mmu); 339 339 if (mdp5_kms->id < 0) {
+1
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
··· 33 33 34 34 /* mapper-id used to request GEM buffer mapped for scanout: */ 35 35 int id; 36 + struct msm_mmu *mmu; 36 37 37 38 /* for tracking smp allocation amongst pipes: */ 38 39 mdp5_smp_state_t smp_state;
+1 -1
drivers/gpu/drm/msm/msm_drv.c
··· 159 159 static int get_mdp_ver(struct platform_device *pdev) 160 160 { 161 161 #ifdef CONFIG_OF 162 - const static struct of_device_id match_types[] = { { 162 + static const struct of_device_id match_types[] = { { 163 163 .compatible = "qcom,mdss_mdp", 164 164 .data = (void *)5, 165 165 }, {
+1 -1
drivers/gpu/drm/msm/msm_fbdev.c
··· 59 59 struct drm_framebuffer *fb = NULL; 60 60 struct fb_info *fbi = NULL; 61 61 struct drm_mode_fb_cmd2 mode_cmd = {0}; 62 - dma_addr_t paddr; 62 + uint32_t paddr; 63 63 int ret, size; 64 64 65 65 sizes->surface_bpp = 32;
+6
drivers/gpu/drm/msm/msm_gem.c
··· 278 278 uint32_t *iova) 279 279 { 280 280 struct msm_gem_object *msm_obj = to_msm_bo(obj); 281 + struct drm_device *dev = obj->dev; 281 282 int ret = 0; 282 283 283 284 if (!msm_obj->domain[id].iova) { 284 285 struct msm_drm_private *priv = obj->dev->dev_private; 285 286 struct msm_mmu *mmu = priv->mmus[id]; 286 287 struct page **pages = get_pages(obj); 288 + 289 + if (!mmu) { 290 + dev_err(dev->dev, "null MMU pointer\n"); 291 + return -EINVAL; 292 + } 287 293 288 294 if (IS_ERR(pages)) 289 295 return PTR_ERR(pages);
+20 -3
drivers/gpu/drm/msm/msm_iommu.c
··· 28 28 unsigned long iova, int flags, void *arg) 29 29 { 30 30 DBG("*** fault: iova=%08lx, flags=%d", iova, flags); 31 - return 0; 31 + return -ENOSYS; 32 32 } 33 33 34 34 static int msm_iommu_attach(struct msm_mmu *mmu, const char **names, int cnt) ··· 40 40 for (i = 0; i < cnt; i++) { 41 41 struct device *msm_iommu_get_ctx(const char *ctx_name); 42 42 struct device *ctx = msm_iommu_get_ctx(names[i]); 43 - if (IS_ERR_OR_NULL(ctx)) 43 + if (IS_ERR_OR_NULL(ctx)) { 44 + dev_warn(dev->dev, "couldn't get %s context", names[i]); 44 45 continue; 46 + } 45 47 ret = iommu_attach_device(iommu->domain, ctx); 46 48 if (ret) { 47 49 dev_warn(dev->dev, "could not attach iommu to %s", names[i]); ··· 52 50 } 53 51 54 52 return 0; 53 + } 54 + 55 + static void msm_iommu_detach(struct msm_mmu *mmu, const char **names, int cnt) 56 + { 57 + struct msm_iommu *iommu = to_msm_iommu(mmu); 58 + int i; 59 + 60 + for (i = 0; i < cnt; i++) { 61 + struct device *msm_iommu_get_ctx(const char *ctx_name); 62 + struct device *ctx = msm_iommu_get_ctx(names[i]); 63 + if (IS_ERR_OR_NULL(ctx)) 64 + continue; 65 + iommu_detach_device(iommu->domain, ctx); 66 + } 55 67 } 56 68 57 69 static int msm_iommu_map(struct msm_mmu *mmu, uint32_t iova, ··· 126 110 127 111 VERB("unmap[%d]: %08x(%x)", i, iova, bytes); 128 112 129 - BUG_ON(!IS_ALIGNED(bytes, PAGE_SIZE)); 113 + BUG_ON(!PAGE_ALIGNED(bytes)); 130 114 131 115 da += bytes; 132 116 } ··· 143 127 144 128 static const struct msm_mmu_funcs funcs = { 145 129 .attach = msm_iommu_attach, 130 + .detach = msm_iommu_detach, 146 131 .map = msm_iommu_map, 147 132 .unmap = msm_iommu_unmap, 148 133 .destroy = msm_iommu_destroy,
+1
drivers/gpu/drm/msm/msm_mmu.h
··· 22 22 23 23 struct msm_mmu_funcs { 24 24 int (*attach)(struct msm_mmu *mmu, const char **names, int cnt); 25 + void (*detach)(struct msm_mmu *mmu, const char **names, int cnt); 25 26 int (*map)(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt, 26 27 unsigned len, int prot); 27 28 int (*unmap)(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt,
+3 -3
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
··· 1516 1516 } 1517 1517 1518 1518 switch ((ctrl & 0x000f0000) >> 16) { 1519 - case 6: datarate = pclk * 30 / 8; break; 1520 - case 5: datarate = pclk * 24 / 8; break; 1519 + case 6: datarate = pclk * 30; break; 1520 + case 5: datarate = pclk * 24; break; 1521 1521 case 2: 1522 1522 default: 1523 - datarate = pclk * 18 / 8; 1523 + datarate = pclk * 18; 1524 1524 break; 1525 1525 } 1526 1526
+3 -3
drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c
··· 1159 1159 if (outp->info.type == DCB_OUTPUT_DP) { 1160 1160 u32 sync = nv_rd32(priv, 0x660404 + (head * 0x300)); 1161 1161 switch ((sync & 0x000003c0) >> 6) { 1162 - case 6: pclk = pclk * 30 / 8; break; 1163 - case 5: pclk = pclk * 24 / 8; break; 1162 + case 6: pclk = pclk * 30; break; 1163 + case 5: pclk = pclk * 24; break; 1164 1164 case 2: 1165 1165 default: 1166 - pclk = pclk * 18 / 8; 1166 + pclk = pclk * 18; 1167 1167 break; 1168 1168 } 1169 1169
+5 -3
drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c
··· 34 34 struct nvkm_output_dp *outp = (void *)base; 35 35 bool retrain = true; 36 36 u8 link[2], stat[3]; 37 - u32 rate; 37 + u32 linkrate; 38 38 int ret, i; 39 39 40 40 /* check that the link is trained at a high enough rate */ ··· 44 44 goto done; 45 45 } 46 46 47 - rate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); 48 - if (rate < ((datarate / 8) * 10)) { 47 + linkrate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); 48 + linkrate = (linkrate * 8) / 10; /* 8B/10B coding overhead */ 49 + datarate = (datarate + 9) / 10; /* -> decakilobits */ 50 + if (linkrate < datarate) { 49 51 DBG("link not trained at sufficient rate\n"); 50 52 goto done; 51 53 }
+1
drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c
··· 87 87 struct nvkm_output_dp *outpdp = (void *)outp; 88 88 switch (data) { 89 89 case NV94_DISP_SOR_DP_PWR_STATE_OFF: 90 + nouveau_event_put(outpdp->irq); 90 91 ((struct nvkm_output_dp_impl *)nv_oclass(outp)) 91 92 ->lnk_pwr(outpdp, 0); 92 93 atomic_set(&outpdp->lt.done, 0);
+2 -2
drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h
··· 26 26 }; 27 27 } 28 28 29 - static inline struct ramfuc_reg 29 + static noinline struct ramfuc_reg 30 30 ramfuc_reg(u32 addr) 31 31 { 32 32 return ramfuc_reg2(addr, addr); ··· 107 107 108 108 #define ram_init(s,p) ramfuc_init(&(s)->base, (p)) 109 109 #define ram_exec(s,e) ramfuc_exec(&(s)->base, (e)) 110 - #define ram_have(s,r) ((s)->r_##r.addr != 0x000000) 110 + #define ram_have(s,r) ((s)->r_##r.addr[0] != 0x000000) 111 111 #define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r) 112 112 #define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d)) 113 113 #define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r)
+1
drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c
··· 200 200 /* (re)program mempll, if required */ 201 201 if (ram->mode == 2) { 202 202 ram_mask(fuc, 0x1373f4, 0x00010000, 0x00000000); 203 + ram_mask(fuc, 0x132000, 0x80000000, 0x80000000); 203 204 ram_mask(fuc, 0x132000, 0x00000001, 0x00000000); 204 205 ram_mask(fuc, 0x132004, 0x103fffff, mcoef); 205 206 ram_mask(fuc, 0x132000, 0x00000001, 0x00000001);
+9 -8
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 652 652 ret = nouveau_do_resume(drm_dev); 653 653 if (ret) 654 654 return ret; 655 - if (drm_dev->mode_config.num_crtc) 656 - nouveau_fbcon_set_suspend(drm_dev, 0); 657 655 658 - nouveau_fbcon_zfill_all(drm_dev); 659 - if (drm_dev->mode_config.num_crtc) 656 + if (drm_dev->mode_config.num_crtc) { 660 657 nouveau_display_resume(drm_dev); 658 + nouveau_fbcon_set_suspend(drm_dev, 0); 659 + } 660 + 661 661 return 0; 662 662 } 663 663 ··· 683 683 ret = nouveau_do_resume(drm_dev); 684 684 if (ret) 685 685 return ret; 686 - if (drm_dev->mode_config.num_crtc) 687 - nouveau_fbcon_set_suspend(drm_dev, 0); 688 - nouveau_fbcon_zfill_all(drm_dev); 689 - if (drm_dev->mode_config.num_crtc) 686 + 687 + if (drm_dev->mode_config.num_crtc) { 690 688 nouveau_display_resume(drm_dev); 689 + nouveau_fbcon_set_suspend(drm_dev, 0); 690 + } 691 + 691 692 return 0; 692 693 } 693 694
+3 -10
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 531 531 if (state == 1) 532 532 nouveau_fbcon_save_disable_accel(dev); 533 533 fb_set_suspend(drm->fbcon->helper.fbdev, state); 534 - if (state == 0) 534 + if (state == 0) { 535 535 nouveau_fbcon_restore_accel(dev); 536 + nouveau_fbcon_zfill(dev, drm->fbcon); 537 + } 536 538 console_unlock(); 537 - } 538 - } 539 - 540 - void 541 - nouveau_fbcon_zfill_all(struct drm_device *dev) 542 - { 543 - struct nouveau_drm *drm = nouveau_drm(dev); 544 - if (drm->fbcon) { 545 - nouveau_fbcon_zfill(dev, drm->fbcon); 546 539 } 547 540 }
-1
drivers/gpu/drm/nouveau/nouveau_fbcon.h
··· 61 61 int nouveau_fbcon_init(struct drm_device *dev); 62 62 void nouveau_fbcon_fini(struct drm_device *dev); 63 63 void nouveau_fbcon_set_suspend(struct drm_device *dev, int state); 64 - void nouveau_fbcon_zfill_all(struct drm_device *dev); 65 64 void nouveau_fbcon_save_disable_accel(struct drm_device *dev); 66 65 void nouveau_fbcon_restore_accel(struct drm_device *dev); 67 66
+2 -1
drivers/gpu/drm/nouveau/nv50_display.c
··· 1741 1741 } 1742 1742 } 1743 1743 1744 - mthd = (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; 1744 + mthd = (ffs(nv_encoder->dcb->heads) - 1) << 3; 1745 + mthd |= (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; 1745 1746 mthd |= nv_encoder->or; 1746 1747 1747 1748 if (nv_encoder->dcb->type == DCB_OUTPUT_DP) {
+8 -6
drivers/gpu/drm/radeon/atombios_dp.c
··· 127 127 /* flags not zero */ 128 128 if (args.v1.ucReplyStatus == 2) { 129 129 DRM_DEBUG_KMS("dp_aux_ch flags not zero\n"); 130 - r = -EBUSY; 130 + r = -EIO; 131 131 goto done; 132 132 } 133 133 ··· 403 403 { 404 404 struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv; 405 405 u8 msg[DP_DPCD_SIZE]; 406 - int ret, i; 406 + int ret; 407 + 408 + char dpcd_hex_dump[DP_DPCD_SIZE * 3]; 407 409 408 410 ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg, 409 411 DP_DPCD_SIZE); 410 412 if (ret > 0) { 411 413 memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE); 412 - DRM_DEBUG_KMS("DPCD: "); 413 - for (i = 0; i < DP_DPCD_SIZE; i++) 414 - DRM_DEBUG_KMS("%02x ", msg[i]); 415 - DRM_DEBUG_KMS("\n"); 414 + 415 + hex_dump_to_buffer(dig_connector->dpcd, sizeof(dig_connector->dpcd), 416 + 32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false); 417 + DRM_DEBUG_KMS("DPCD: %s\n", dpcd_hex_dump); 416 418 417 419 radeon_dp_probe_oui(radeon_connector); 418 420
+1 -1
drivers/gpu/drm/radeon/ci_dpm.c
··· 1179 1179 tmp &= ~GLOBAL_PWRMGT_EN; 1180 1180 WREG32_SMC(GENERAL_PWRMGT, tmp); 1181 1181 1182 - tmp = RREG32(SCLK_PWRMGT_CNTL); 1182 + tmp = RREG32_SMC(SCLK_PWRMGT_CNTL); 1183 1183 tmp &= ~DYNAMIC_PM_EN; 1184 1184 WREG32_SMC(SCLK_PWRMGT_CNTL, tmp); 1185 1185
+4 -2
drivers/gpu/drm/radeon/cik.c
··· 7676 7676 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 7677 7677 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 7678 7678 mc_client = RREG32(VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT); 7679 + /* reset addr and status */ 7680 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 7681 + if (addr == 0x0 && status == 0x0) 7682 + break; 7679 7683 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 7680 7684 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 7681 7685 addr); 7682 7686 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 7683 7687 status); 7684 7688 cik_vm_decode_fault(rdev, status, addr, mc_client); 7685 - /* reset addr and status */ 7686 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 7687 7689 break; 7688 7690 case 167: /* VCE */ 7689 7691 DRM_DEBUG("IH: VCE int: 0x%08x\n", src_data);
+1 -1
drivers/gpu/drm/radeon/cikd.h
··· 1752 1752 #define EOP_TC_WB_ACTION_EN (1 << 15) /* L2 */ 1753 1753 #define EOP_TCL1_ACTION_EN (1 << 16) 1754 1754 #define EOP_TC_ACTION_EN (1 << 17) /* L2 */ 1755 + #define EOP_TCL2_VOLATILE (1 << 24) 1755 1756 #define EOP_CACHE_POLICY(x) ((x) << 25) 1756 1757 /* 0 - LRU 1757 1758 * 1 - Stream 1758 1759 * 2 - Bypass 1759 1760 */ 1760 - #define EOP_TCL2_VOLATILE (1 << 27) 1761 1761 #define DATA_SEL(x) ((x) << 29) 1762 1762 /* 0 - discard 1763 1763 * 1 - send low 32bit data
+1 -1
drivers/gpu/drm/radeon/cypress_dpm.c
··· 1551 1551 1552 1552 table->voltageMaskTable.highMask[RV770_SMC_VOLTAGEMASK_VDDCI] = 0; 1553 1553 table->voltageMaskTable.lowMask[RV770_SMC_VOLTAGEMASK_VDDCI] = 1554 - cpu_to_be32(eg_pi->vddc_voltage_table.mask_low); 1554 + cpu_to_be32(eg_pi->vddci_voltage_table.mask_low); 1555 1555 } 1556 1556 1557 1557 return 0;
+8 -6
drivers/gpu/drm/radeon/evergreen.c
··· 189 189 0x8c1c, 0xffffffff, 0x00001010, 190 190 0x28350, 0xffffffff, 0x00000000, 191 191 0xa008, 0xffffffff, 0x00010000, 192 - 0x5cc, 0xffffffff, 0x00000001, 192 + 0x5c4, 0xffffffff, 0x00000001, 193 193 0x9508, 0xffffffff, 0x00000002, 194 194 0x913c, 0x0000000f, 0x0000000a 195 195 }; ··· 476 476 0x8c1c, 0xffffffff, 0x00001010, 477 477 0x28350, 0xffffffff, 0x00000000, 478 478 0xa008, 0xffffffff, 0x00010000, 479 - 0x5cc, 0xffffffff, 0x00000001, 479 + 0x5c4, 0xffffffff, 0x00000001, 480 480 0x9508, 0xffffffff, 0x00000002 481 481 }; 482 482 ··· 635 635 static const u32 supersumo_golden_registers[] = 636 636 { 637 637 0x5eb4, 0xffffffff, 0x00000002, 638 - 0x5cc, 0xffffffff, 0x00000001, 638 + 0x5c4, 0xffffffff, 0x00000001, 639 639 0x7030, 0xffffffff, 0x00000011, 640 640 0x7c30, 0xffffffff, 0x00000011, 641 641 0x6104, 0x01000300, 0x00000000, ··· 719 719 static const u32 wrestler_golden_registers[] = 720 720 { 721 721 0x5eb4, 0xffffffff, 0x00000002, 722 - 0x5cc, 0xffffffff, 0x00000001, 722 + 0x5c4, 0xffffffff, 0x00000001, 723 723 0x7030, 0xffffffff, 0x00000011, 724 724 0x7c30, 0xffffffff, 0x00000011, 725 725 0x6104, 0x01000300, 0x00000000, ··· 5066 5066 case 147: 5067 5067 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 5068 5068 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 5069 + /* reset addr and status */ 5070 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 5071 + if (addr == 0x0 && status == 0x0) 5072 + break; 5069 5073 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 5070 5074 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 5071 5075 addr); 5072 5076 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 5073 5077 status); 5074 5078 cayman_vm_decode_fault(rdev, status, addr); 5075 - /* reset addr and status */ 5076 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 5077 5079 break; 5078 5080 case 176: /* CP_INT in ring buffer */ 5079 5081 case 177: /* CP_INT in IB1 */
+1 -1
drivers/gpu/drm/radeon/kv_dpm.c
··· 2726 2726 pi->caps_sclk_ds = true; 2727 2727 pi->enable_auto_thermal_throttling = true; 2728 2728 pi->disable_nb_ps3_in_battery = false; 2729 - pi->bapm_enable = false; 2729 + pi->bapm_enable = true; 2730 2730 pi->voltage_drop_t = 0; 2731 2731 pi->caps_sclk_throttle_low_notification = false; 2732 2732 pi->caps_fps = false; /* true? */
+1 -1
drivers/gpu/drm/radeon/ni_dpm.c
··· 1315 1315 1316 1316 table->voltageMaskTable.highMask[NISLANDS_SMC_VOLTAGEMASK_VDDCI] = 0; 1317 1317 table->voltageMaskTable.lowMask[NISLANDS_SMC_VOLTAGEMASK_VDDCI] = 1318 - cpu_to_be32(eg_pi->vddc_voltage_table.mask_low); 1318 + cpu_to_be32(eg_pi->vddci_voltage_table.mask_low); 1319 1319 } 1320 1320 } 1321 1321
+1 -4
drivers/gpu/drm/radeon/radeon.h
··· 102 102 extern int radeon_hard_reset; 103 103 extern int radeon_vm_size; 104 104 extern int radeon_vm_block_size; 105 + extern int radeon_deep_color; 105 106 106 107 /* 107 108 * Copy from radeon_drv.h so we don't have to include both and have conflicting ··· 749 748 struct evergreen_irq_stat_regs evergreen; 750 749 struct cik_irq_stat_regs cik; 751 750 }; 752 - 753 - #define RADEON_MAX_HPD_PINS 7 754 - #define RADEON_MAX_CRTCS 6 755 - #define RADEON_MAX_AFMT_BLOCKS 7 756 751 757 752 struct radeon_irq { 758 753 bool installed;
+9 -1
drivers/gpu/drm/radeon/radeon_atombios.c
··· 1227 1227 rdev->clock.default_dispclk = 1228 1228 le32_to_cpu(firmware_info->info_21.ulDefaultDispEngineClkFreq); 1229 1229 if (rdev->clock.default_dispclk == 0) { 1230 - if (ASIC_IS_DCE5(rdev)) 1230 + if (ASIC_IS_DCE6(rdev)) 1231 + rdev->clock.default_dispclk = 60000; /* 600 Mhz */ 1232 + else if (ASIC_IS_DCE5(rdev)) 1231 1233 rdev->clock.default_dispclk = 54000; /* 540 Mhz */ 1232 1234 else 1233 1235 rdev->clock.default_dispclk = 60000; /* 600 Mhz */ 1236 + } 1237 + /* set a reasonable default for DP */ 1238 + if (ASIC_IS_DCE6(rdev) && (rdev->clock.default_dispclk < 53900)) { 1239 + DRM_INFO("Changing default dispclk from %dMhz to 600Mhz\n", 1240 + rdev->clock.default_dispclk / 100); 1241 + rdev->clock.default_dispclk = 60000; 1234 1242 } 1235 1243 rdev->clock.dp_extclk = 1236 1244 le16_to_cpu(firmware_info->info_21.usUniphyDPModeExtClkFreq);
+3
drivers/gpu/drm/radeon/radeon_connectors.c
··· 199 199 } 200 200 } 201 201 202 + if ((radeon_deep_color == 0) && (bpc > 8)) 203 + bpc = 8; 204 + 202 205 DRM_DEBUG("%s: Display bpc=%d, returned bpc=%d\n", 203 206 connector->name, connector->display_info.bpc, bpc); 204 207
+14 -5
drivers/gpu/drm/radeon/radeon_display.c
··· 285 285 void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id) 286 286 { 287 287 struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[crtc_id]; 288 - struct radeon_flip_work *work; 289 288 unsigned long flags; 290 289 u32 update_pending; 291 290 int vpos, hpos; ··· 294 295 return; 295 296 296 297 spin_lock_irqsave(&rdev->ddev->event_lock, flags); 297 - work = radeon_crtc->flip_work; 298 - if (work == NULL) { 298 + if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) { 299 + DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != " 300 + "RADEON_FLIP_SUBMITTED(%d)\n", 301 + radeon_crtc->flip_status, 302 + RADEON_FLIP_SUBMITTED); 299 303 spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); 300 304 return; 301 305 } ··· 346 344 347 345 spin_lock_irqsave(&rdev->ddev->event_lock, flags); 348 346 work = radeon_crtc->flip_work; 349 - if (work == NULL) { 347 + if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) { 348 + DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != " 349 + "RADEON_FLIP_SUBMITTED(%d)\n", 350 + radeon_crtc->flip_status, 351 + RADEON_FLIP_SUBMITTED); 350 352 spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); 351 353 return; 352 354 } 353 355 354 356 /* Pageflip completed. Clean up. */ 357 + radeon_crtc->flip_status = RADEON_FLIP_NONE; 355 358 radeon_crtc->flip_work = NULL; 356 359 357 360 /* wakeup userspace */ ··· 483 476 /* do the flip (mmio) */ 484 477 radeon_page_flip(rdev, radeon_crtc->crtc_id, base); 485 478 479 + radeon_crtc->flip_status = RADEON_FLIP_SUBMITTED; 486 480 spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 487 481 up_read(&rdev->exclusive_lock); 488 482 ··· 552 544 /* We borrow the event spin lock for protecting flip_work */ 553 545 spin_lock_irqsave(&crtc->dev->event_lock, flags); 554 546 555 - if (radeon_crtc->flip_work) { 547 + if (radeon_crtc->flip_status != RADEON_FLIP_NONE) { 556 548 DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); 557 549 spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 558 550 drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); ··· 560 552 kfree(work); 561 553 return -EBUSY; 562 554 } 555 + radeon_crtc->flip_status = RADEON_FLIP_PENDING; 563 556 radeon_crtc->flip_work = work; 564 557 565 558 /* update crtc fb */
+4
drivers/gpu/drm/radeon/radeon_drv.c
··· 175 175 int radeon_hard_reset = 0; 176 176 int radeon_vm_size = 4096; 177 177 int radeon_vm_block_size = 9; 178 + int radeon_deep_color = 0; 178 179 179 180 MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers"); 180 181 module_param_named(no_wb, radeon_no_wb, int, 0444); ··· 248 247 249 248 MODULE_PARM_DESC(vm_block_size, "VM page table size in bits (default 9)"); 250 249 module_param_named(vm_block_size, radeon_vm_block_size, int, 0444); 250 + 251 + MODULE_PARM_DESC(deep_color, "Deep Color support (1 = enable, 0 = disable (default))"); 252 + module_param_named(deep_color, radeon_deep_color, int, 0444); 251 253 252 254 static struct pci_device_id pciidlist[] = { 253 255 radeon_PCI_IDS
+13 -2
drivers/gpu/drm/radeon/radeon_mode.h
··· 46 46 #define to_radeon_encoder(x) container_of(x, struct radeon_encoder, base) 47 47 #define to_radeon_framebuffer(x) container_of(x, struct radeon_framebuffer, base) 48 48 49 + #define RADEON_MAX_HPD_PINS 7 50 + #define RADEON_MAX_CRTCS 6 51 + #define RADEON_MAX_AFMT_BLOCKS 7 52 + 49 53 enum radeon_rmx_type { 50 54 RMX_OFF, 51 55 RMX_FULL, ··· 237 233 struct card_info *atom_card_info; 238 234 enum radeon_connector_table connector_table; 239 235 bool mode_config_initialized; 240 - struct radeon_crtc *crtcs[6]; 241 - struct radeon_afmt *afmt[7]; 236 + struct radeon_crtc *crtcs[RADEON_MAX_CRTCS]; 237 + struct radeon_afmt *afmt[RADEON_MAX_AFMT_BLOCKS]; 242 238 /* DVI-I properties */ 243 239 struct drm_property *coherent_mode_property; 244 240 /* DAC enable load detect */ ··· 306 302 uint16_t amount; 307 303 }; 308 304 305 + enum radeon_flip_status { 306 + RADEON_FLIP_NONE, 307 + RADEON_FLIP_PENDING, 308 + RADEON_FLIP_SUBMITTED 309 + }; 310 + 309 311 struct radeon_crtc { 310 312 struct drm_crtc base; 311 313 int crtc_id; ··· 337 327 /* page flipping */ 338 328 struct workqueue_struct *flip_queue; 339 329 struct radeon_flip_work *flip_work; 330 + enum radeon_flip_status flip_status; 340 331 /* pll sharing */ 341 332 struct radeon_atom_ss ss; 342 333 bool ss_enabled;
+4 -2
drivers/gpu/drm/radeon/radeon_pm.c
··· 73 73 rdev->pm.dpm.ac_power = true; 74 74 else 75 75 rdev->pm.dpm.ac_power = false; 76 - if (rdev->asic->dpm.enable_bapm) 77 - radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power); 76 + if (rdev->family == CHIP_ARUBA) { 77 + if (rdev->asic->dpm.enable_bapm) 78 + radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power); 79 + } 78 80 mutex_unlock(&rdev->pm.mutex); 79 81 } else if (rdev->pm.pm_method == PM_METHOD_PROFILE) { 80 82 if (rdev->pm.profile == PM_PROFILE_AUTO) {
+2 -2
drivers/gpu/drm/radeon/radeon_vm.c
··· 495 495 mutex_unlock(&vm->mutex); 496 496 497 497 r = radeon_bo_create(rdev, RADEON_VM_PTE_COUNT * 8, 498 - RADEON_GPU_PAGE_SIZE, false, 498 + RADEON_GPU_PAGE_SIZE, true, 499 499 RADEON_GEM_DOMAIN_VRAM, NULL, &pt); 500 500 if (r) 501 501 return r; ··· 992 992 return -ENOMEM; 993 993 } 994 994 995 - r = radeon_bo_create(rdev, pd_size, align, false, 995 + r = radeon_bo_create(rdev, pd_size, align, true, 996 996 RADEON_GEM_DOMAIN_VRAM, NULL, 997 997 &vm->page_directory); 998 998 if (r)
-6
drivers/gpu/drm/radeon/rv770_dpm.c
··· 2329 2329 pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss, 2330 2330 ASIC_INTERNAL_MEMORY_SS, 0); 2331 2331 2332 - /* disable ss, causes hangs on some cayman boards */ 2333 - if (rdev->family == CHIP_CAYMAN) { 2334 - pi->sclk_ss = false; 2335 - pi->mclk_ss = false; 2336 - } 2337 - 2338 2332 if (pi->sclk_ss || pi->mclk_ss) 2339 2333 pi->dynamic_ss = true; 2340 2334 else
+4 -2
drivers/gpu/drm/radeon/si.c
··· 6376 6376 case 147: 6377 6377 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 6378 6378 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 6379 + /* reset addr and status */ 6380 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 6381 + if (addr == 0x0 && status == 0x0) 6382 + break; 6379 6383 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 6380 6384 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 6381 6385 addr); 6382 6386 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 6383 6387 status); 6384 6388 si_vm_decode_fault(rdev, status, addr); 6385 - /* reset addr and status */ 6386 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 6387 6389 break; 6388 6390 case 176: /* RINGID0 CP_INT */ 6389 6391 radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
+9 -1
drivers/gpu/drm/radeon/trinity_dpm.c
··· 1874 1874 for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++) 1875 1875 pi->at[i] = TRINITY_AT_DFLT; 1876 1876 1877 - pi->enable_bapm = false; 1877 + /* There are stability issues reported on laptops with 1878 + * bapm installed when switching between AC and battery 1879 + * power. At the same time, some desktop boards hang 1880 + * if it's not enabled and dpm is enabled. 1881 + */ 1882 + if (rdev->flags & RADEON_IS_MOBILITY) 1883 + pi->enable_bapm = false; 1884 + else 1885 + pi->enable_bapm = true; 1878 1886 pi->enable_nbps_policy = true; 1879 1887 pi->enable_sclk_ds = true; 1880 1888 pi->enable_gfx_power_gating = true;
-1
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
··· 179 179 vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y, info->var.yoffset); 180 180 vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, info->var.xres); 181 181 vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, info->var.yres); 182 - vmw_write(vmw_priv, SVGA_REG_BYTES_PER_LINE, info->fix.line_length); 183 182 vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID); 184 183 } 185 184
+1 -1
drivers/hid/Kconfig
··· 810 810 811 811 config HID_SENSOR_HUB 812 812 tristate "HID Sensors framework support" 813 - depends on HID 813 + depends on HID && HAS_IOMEM 814 814 select MFD_CORE 815 815 default n 816 816 ---help---
+3
drivers/hid/hid-ids.h
··· 323 323 324 324 #define USB_VENDOR_ID_ETURBOTOUCH 0x22b9 325 325 #define USB_DEVICE_ID_ETURBOTOUCH 0x0006 326 + #define USB_DEVICE_ID_ETURBOTOUCH_2968 0x2968 326 327 327 328 #define USB_VENDOR_ID_EZKEY 0x0518 328 329 #define USB_DEVICE_ID_BTC_8193 0x0002 ··· 716 715 717 716 #define USB_VENDOR_ID_PENMOUNT 0x14e1 718 717 #define USB_DEVICE_ID_PENMOUNT_PCI 0x3500 718 + #define USB_DEVICE_ID_PENMOUNT_1610 0x1610 719 + #define USB_DEVICE_ID_PENMOUNT_1640 0x1640 719 720 720 721 #define USB_VENDOR_ID_PETALYNX 0x18b1 721 722 #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037
+2
drivers/hid/hid-rmi.c
··· 428 428 return 0; 429 429 } 430 430 431 + #ifdef CONFIG_PM 431 432 static int rmi_post_reset(struct hid_device *hdev) 432 433 { 433 434 return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); ··· 438 437 { 439 438 return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); 440 439 } 440 + #endif /* CONFIG_PM */ 441 441 442 442 #define RMI4_MAX_PAGE 0xff 443 443 #define RMI4_PAGE_SIZE 0x0100
+15 -10
drivers/hid/hid-sensor-hub.c
··· 159 159 { 160 160 struct hid_sensor_hub_callbacks_list *callback; 161 161 struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); 162 + unsigned long flags; 162 163 163 - spin_lock(&pdata->dyn_callback_lock); 164 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 164 165 list_for_each_entry(callback, &pdata->dyn_callback_list, list) 165 166 if (callback->usage_id == usage_id && 166 167 callback->hsdev == hsdev) { 167 - spin_unlock(&pdata->dyn_callback_lock); 168 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 168 169 return -EINVAL; 169 170 } 170 171 callback = kzalloc(sizeof(*callback), GFP_ATOMIC); 171 172 if (!callback) { 172 - spin_unlock(&pdata->dyn_callback_lock); 173 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 173 174 return -ENOMEM; 174 175 } 175 176 callback->hsdev = hsdev; ··· 178 177 callback->usage_id = usage_id; 179 178 callback->priv = NULL; 180 179 list_add_tail(&callback->list, &pdata->dyn_callback_list); 181 - spin_unlock(&pdata->dyn_callback_lock); 180 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 182 181 183 182 return 0; 184 183 } ··· 189 188 { 190 189 struct hid_sensor_hub_callbacks_list *callback; 191 190 struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); 191 + unsigned long flags; 192 192 193 - spin_lock(&pdata->dyn_callback_lock); 193 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 194 194 list_for_each_entry(callback, &pdata->dyn_callback_list, list) 195 195 if (callback->usage_id == usage_id && 196 196 callback->hsdev == hsdev) { ··· 199 197 kfree(callback); 200 198 break; 201 199 } 202 - spin_unlock(&pdata->dyn_callback_lock); 200 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 203 201 204 202 return 0; 205 203 } ··· 380 378 { 381 379 struct sensor_hub_data *pdata = hid_get_drvdata(hdev); 382 380 struct hid_sensor_hub_callbacks_list *callback; 381 + unsigned long flags; 383 382 384 383 hid_dbg(hdev, " sensor_hub_suspend\n"); 385 - spin_lock(&pdata->dyn_callback_lock); 384 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 386 385 list_for_each_entry(callback, &pdata->dyn_callback_list, list) { 387 386 if (callback->usage_callback->suspend) 388 387 callback->usage_callback->suspend( 389 388 callback->hsdev, callback->priv); 390 389 } 391 - spin_unlock(&pdata->dyn_callback_lock); 390 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 392 391 393 392 return 0; 394 393 } ··· 398 395 { 399 396 struct sensor_hub_data *pdata = hid_get_drvdata(hdev); 400 397 struct hid_sensor_hub_callbacks_list *callback; 398 + unsigned long flags; 401 399 402 400 hid_dbg(hdev, " sensor_hub_resume\n"); 403 - spin_lock(&pdata->dyn_callback_lock); 401 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 404 402 list_for_each_entry(callback, &pdata->dyn_callback_list, list) { 405 403 if (callback->usage_callback->resume) 406 404 callback->usage_callback->resume( 407 405 callback->hsdev, callback->priv); 408 406 } 409 - spin_unlock(&pdata->dyn_callback_lock); 407 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 410 408 411 409 return 0; 412 410 } ··· 636 632 if (name == NULL) { 637 633 hid_err(hdev, "Failed MFD device name\n"); 638 634 ret = -ENOMEM; 635 + kfree(hsdev); 639 636 goto err_no_mem; 640 637 } 641 638 sd->hid_sensor_hub_client_devs[
+3
drivers/hid/usbhid/hid-quirks.c
··· 49 49 50 50 { USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II, HID_QUIRK_MULTI_INPUT }, 51 51 { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH, HID_QUIRK_MULTI_INPUT }, 52 + { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH_2968, HID_QUIRK_MULTI_INPUT }, 52 53 { USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD, HID_QUIRK_MULTI_INPUT }, 53 54 { USB_VENDOR_ID_PANTHERLORD, USB_DEVICE_ID_PANTHERLORD_TWIN_USB_JOYSTICK, HID_QUIRK_MULTI_INPUT | HID_QUIRK_SKIP_OUTPUT_REPORTS }, 54 55 { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, ··· 77 76 { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS }, 78 77 { USB_VENDOR_ID_NEXIO, USB_DEVICE_ID_NEXIO_MULTITOUCH_PTI0750, HID_QUIRK_NO_INIT_REPORTS }, 79 78 { USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS }, 79 + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1610, HID_QUIRK_NOGET }, 80 + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1640, HID_QUIRK_NOGET }, 80 81 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS }, 81 82 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS }, 82 83 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS },
+6 -2
drivers/hv/connection.c
··· 339 339 */ 340 340 341 341 do { 342 - hv_begin_read(&channel->inbound); 342 + if (read_state) 343 + hv_begin_read(&channel->inbound); 343 344 channel->onchannel_callback(arg); 344 - bytes_to_read = hv_end_read(&channel->inbound); 345 + if (read_state) 346 + bytes_to_read = hv_end_read(&channel->inbound); 347 + else 348 + bytes_to_read = 0; 345 349 } while (read_state && (bytes_to_read != 0)); 346 350 } else { 347 351 pr_err("no channel callback for relid - %u\n", relid);
+14 -3
drivers/hv/hv_kvp.c
··· 127 127 kvp_respond_to_host(NULL, HV_E_FAIL); 128 128 } 129 129 130 + static void poll_channel(struct vmbus_channel *channel) 131 + { 132 + if (channel->target_cpu != smp_processor_id()) 133 + smp_call_function_single(channel->target_cpu, 134 + hv_kvp_onchannelcallback, 135 + channel, true); 136 + else 137 + hv_kvp_onchannelcallback(channel); 138 + } 139 + 140 + 130 141 static int kvp_handle_handshake(struct hv_kvp_msg *msg) 131 142 { 132 143 int ret = 1; ··· 166 155 kvp_register(dm_reg_value); 167 156 kvp_transaction.active = false; 168 157 if (kvp_transaction.kvp_context) 169 - hv_kvp_onchannelcallback(kvp_transaction.kvp_context); 158 + poll_channel(kvp_transaction.kvp_context); 170 159 } 171 160 return ret; 172 161 } ··· 579 568 580 569 vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, 581 570 VM_PKT_DATA_INBAND, 0); 582 - 571 + poll_channel(channel); 583 572 } 584 573 585 574 /* ··· 614 603 return; 615 604 } 616 605 617 - vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, 606 + vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, 618 607 &requestid); 619 608 620 609 if (recvlen > 0) {
+1 -1
drivers/hv/hv_util.c
··· 319 319 (struct hv_util_service *)dev_id->driver_data; 320 320 int ret; 321 321 322 - srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); 322 + srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); 323 323 if (!srv->recv_buffer) 324 324 return -ENOMEM; 325 325 if (srv->util_init) {
+14 -14
drivers/hwmon/adc128d818.c
··· 239 239 return sprintf(buf, "%u\n", !!(alarms & mask)); 240 240 } 241 241 242 - static SENSOR_DEVICE_ATTR_2(in0_input, S_IWUSR | S_IRUGO, 243 - adc128_show_in, adc128_set_in, 0, 0); 242 + static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, 243 + adc128_show_in, NULL, 0, 0); 244 244 static SENSOR_DEVICE_ATTR_2(in0_min, S_IWUSR | S_IRUGO, 245 245 adc128_show_in, adc128_set_in, 0, 1); 246 246 static SENSOR_DEVICE_ATTR_2(in0_max, S_IWUSR | S_IRUGO, 247 247 adc128_show_in, adc128_set_in, 0, 2); 248 248 249 - static SENSOR_DEVICE_ATTR_2(in1_input, S_IWUSR | S_IRUGO, 250 - adc128_show_in, adc128_set_in, 1, 0); 249 + static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, 250 + adc128_show_in, NULL, 1, 0); 251 251 static SENSOR_DEVICE_ATTR_2(in1_min, S_IWUSR | S_IRUGO, 252 252 adc128_show_in, adc128_set_in, 1, 1); 253 253 static SENSOR_DEVICE_ATTR_2(in1_max, S_IWUSR | S_IRUGO, 254 254 adc128_show_in, adc128_set_in, 1, 2); 255 255 256 - static SENSOR_DEVICE_ATTR_2(in2_input, S_IWUSR | S_IRUGO, 257 - adc128_show_in, adc128_set_in, 2, 0); 256 + static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, 257 + adc128_show_in, NULL, 2, 0); 258 258 static SENSOR_DEVICE_ATTR_2(in2_min, S_IWUSR | S_IRUGO, 259 259 adc128_show_in, adc128_set_in, 2, 1); 260 260 static SENSOR_DEVICE_ATTR_2(in2_max, S_IWUSR | S_IRUGO, 261 261 adc128_show_in, adc128_set_in, 2, 2); 262 262 263 - static SENSOR_DEVICE_ATTR_2(in3_input, S_IWUSR | S_IRUGO, 264 - adc128_show_in, adc128_set_in, 3, 0); 263 + static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, 264 + adc128_show_in, NULL, 3, 0); 265 265 static SENSOR_DEVICE_ATTR_2(in3_min, S_IWUSR | S_IRUGO, 266 266 adc128_show_in, adc128_set_in, 3, 1); 267 267 static SENSOR_DEVICE_ATTR_2(in3_max, S_IWUSR | S_IRUGO, 268 268 adc128_show_in, adc128_set_in, 3, 2); 269 269 270 - static SENSOR_DEVICE_ATTR_2(in4_input, S_IWUSR | S_IRUGO, 271 - adc128_show_in, adc128_set_in, 4, 0); 270 + static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, 271 + adc128_show_in, NULL, 4, 0); 272 272 static SENSOR_DEVICE_ATTR_2(in4_min, S_IWUSR | S_IRUGO, 273 273 adc128_show_in, adc128_set_in, 4, 1); 274 274 static SENSOR_DEVICE_ATTR_2(in4_max, S_IWUSR | S_IRUGO, 275 275 adc128_show_in, adc128_set_in, 4, 2); 276 276 277 - static SENSOR_DEVICE_ATTR_2(in5_input, S_IWUSR | S_IRUGO, 278 - adc128_show_in, adc128_set_in, 5, 0); 277 + static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, 278 + adc128_show_in, NULL, 5, 0); 279 279 static SENSOR_DEVICE_ATTR_2(in5_min, S_IWUSR | S_IRUGO, 280 280 adc128_show_in, adc128_set_in, 5, 1); 281 281 static SENSOR_DEVICE_ATTR_2(in5_max, S_IWUSR | S_IRUGO, 282 282 adc128_show_in, adc128_set_in, 5, 2); 283 283 284 - static SENSOR_DEVICE_ATTR_2(in6_input, S_IWUSR | S_IRUGO, 285 - adc128_show_in, adc128_set_in, 6, 0); 284 + static SENSOR_DEVICE_ATTR_2(in6_input, S_IRUGO, 285 + adc128_show_in, NULL, 6, 0); 286 286 static SENSOR_DEVICE_ATTR_2(in6_min, S_IWUSR | S_IRUGO, 287 287 adc128_show_in, adc128_set_in, 6, 1); 288 288 static SENSOR_DEVICE_ATTR_2(in6_max, S_IWUSR | S_IRUGO,
+8 -6
drivers/hwmon/adm1021.c
··· 185 185 struct adm1021_data *data = dev_get_drvdata(dev); 186 186 struct i2c_client *client = data->client; 187 187 long temp; 188 - int err; 188 + int reg_val, err; 189 189 190 190 err = kstrtol(buf, 10, &temp); 191 191 if (err) ··· 193 193 temp /= 1000; 194 194 195 195 mutex_lock(&data->update_lock); 196 - data->temp_max[index] = clamp_val(temp, -128, 127); 196 + reg_val = clamp_val(temp, -128, 127); 197 + data->temp_max[index] = reg_val * 1000; 197 198 if (!read_only) 198 199 i2c_smbus_write_byte_data(client, ADM1021_REG_TOS_W(index), 199 - data->temp_max[index]); 200 + reg_val); 200 201 mutex_unlock(&data->update_lock); 201 202 202 203 return count; ··· 211 210 struct adm1021_data *data = dev_get_drvdata(dev); 212 211 struct i2c_client *client = data->client; 213 212 long temp; 214 - int err; 213 + int reg_val, err; 215 214 216 215 err = kstrtol(buf, 10, &temp); 217 216 if (err) ··· 219 218 temp /= 1000; 220 219 221 220 mutex_lock(&data->update_lock); 222 - data->temp_min[index] = clamp_val(temp, -128, 127); 221 + reg_val = clamp_val(temp, -128, 127); 222 + data->temp_min[index] = reg_val * 1000; 223 223 if (!read_only) 224 224 i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index), 225 - data->temp_min[index]); 225 + reg_val); 226 226 mutex_unlock(&data->update_lock); 227 227 228 228 return count;
+3
drivers/hwmon/adm1029.c
··· 232 232 /* Update the value */ 233 233 reg = (reg & 0x3F) | (val << 6); 234 234 235 + /* Update the cache */ 236 + data->fan_div[attr->index] = reg; 237 + 235 238 /* Write value */ 236 239 i2c_smbus_write_byte_data(client, 237 240 ADM1029_REG_FAN_DIV[attr->index], reg);
+5 -3
drivers/hwmon/adm1031.c
··· 365 365 if (ret) 366 366 return ret; 367 367 368 + val = clamp_val(val, 0, 127000); 368 369 mutex_lock(&data->update_lock); 369 370 data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]); 370 371 adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr), ··· 395 394 if (ret) 396 395 return ret; 397 396 397 + val = clamp_val(val, 0, 127000); 398 398 mutex_lock(&data->update_lock); 399 399 data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], 400 400 data->pwm[nr]); ··· 698 696 if (ret) 699 697 return ret; 700 698 701 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 699 + val = clamp_val(val, -55000, 127000); 702 700 mutex_lock(&data->update_lock); 703 701 data->temp_min[nr] = TEMP_TO_REG(val); 704 702 adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr), ··· 719 717 if (ret) 720 718 return ret; 721 719 722 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 720 + val = clamp_val(val, -55000, 127000); 723 721 mutex_lock(&data->update_lock); 724 722 data->temp_max[nr] = TEMP_TO_REG(val); 725 723 adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr), ··· 740 738 if (ret) 741 739 return ret; 742 740 743 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 741 + val = clamp_val(val, -55000, 127000); 744 742 mutex_lock(&data->update_lock); 745 743 data->temp_crit[nr] = TEMP_TO_REG(val); 746 744 adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr),
+1 -1
drivers/hwmon/amc6821.c
··· 704 704 get_temp_alarm, NULL, IDX_TEMP1_MAX); 705 705 static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, 706 706 get_temp_alarm, NULL, IDX_TEMP1_CRIT); 707 - static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR, 707 + static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, 708 708 get_temp, NULL, IDX_TEMP2_INPUT); 709 709 static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp, 710 710 set_temp, IDX_TEMP2_MIN);
+5 -10
drivers/hwmon/emc2103.c
··· 250 250 if (result < 0) 251 251 return result; 252 252 253 - val = DIV_ROUND_CLOSEST(val, 1000); 254 - if ((val < -63) || (val > 127)) 255 - return -EINVAL; 253 + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); 256 254 257 255 mutex_lock(&data->update_lock); 258 256 data->temp_min[nr] = val; ··· 272 274 if (result < 0) 273 275 return result; 274 276 275 - val = DIV_ROUND_CLOSEST(val, 1000); 276 - if ((val < -63) || (val > 127)) 277 - return -EINVAL; 277 + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); 278 278 279 279 mutex_lock(&data->update_lock); 280 280 data->temp_max[nr] = val; ··· 386 390 { 387 391 struct emc2103_data *data = emc2103_update_device(dev); 388 392 struct i2c_client *client = to_i2c_client(dev); 389 - long rpm_target; 393 + unsigned long rpm_target; 390 394 391 - int result = kstrtol(buf, 10, &rpm_target); 395 + int result = kstrtoul(buf, 10, &rpm_target); 392 396 if (result < 0) 393 397 return result; 394 398 395 399 /* Datasheet states 16384 as maximum RPM target (table 3.2) */ 396 - if ((rpm_target < 0) || (rpm_target > 16384)) 397 - return -EINVAL; 400 + rpm_target = clamp_val(rpm_target, 0, 16384); 398 401 399 402 mutex_lock(&data->update_lock); 400 403
+1 -1
drivers/hwmon/ntc_thermistor.c
··· 512 512 } 513 513 514 514 dev_info(&pdev->dev, "Thermistor type: %s successfully probed.\n", 515 - pdev->name); 515 + pdev_id->name); 516 516 517 517 return 0; 518 518 err_after_sysfs:
-1
drivers/i2c/busses/i2c-sun6i-p2wi.c
··· 22 22 * 23 23 */ 24 24 #include <linux/clk.h> 25 - #include <linux/module.h> 26 25 #include <linux/i2c.h> 27 26 #include <linux/io.h> 28 27 #include <linux/interrupt.h>
+1
drivers/i2c/muxes/Kconfig
··· 40 40 41 41 config I2C_MUX_PCA954x 42 42 tristate "Philips PCA954x I2C Mux/switches" 43 + depends on GPIOLIB 43 44 help 44 45 If you say yes here you get support for the Philips PCA954x 45 46 I2C mux/switch devices.
+2 -5
drivers/iio/accel/hid-sensor-accel-3d.c
··· 110 110 struct accel_3d_state *accel_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 150 151 ret_type = IIO_VAL_INT; 151 152 break; 152 153 case IIO_CHAN_INFO_SAMP_FREQ: 153 - ret = hid_sensor_read_samp_freq_value( 154 + ret_type = hid_sensor_read_samp_freq_value( 154 155 &accel_state->common_attributes, val, val2); 155 - ret_type = IIO_VAL_INT_PLUS_MICRO; 156 156 break; 157 157 case IIO_CHAN_INFO_HYSTERESIS: 158 - ret = hid_sensor_read_raw_hyst_value( 158 + ret_type = hid_sensor_read_raw_hyst_value( 159 159 &accel_state->common_attributes, val, val2); 160 - ret_type = IIO_VAL_INT_PLUS_MICRO; 161 160 break; 162 161 default: 163 162 ret_type = -EINVAL;
+6 -2
drivers/iio/adc/ad799x.c
··· 427 427 int ret; 428 428 struct ad799x_state *st = iio_priv(indio_dev); 429 429 430 + if (val < 0 || val > RES_MASK(chan->scan_type.realbits)) 431 + return -EINVAL; 432 + 430 433 mutex_lock(&indio_dev->mlock); 431 434 ret = ad799x_i2c_write16(st, ad799x_threshold_reg(chan, dir, info), 432 - val); 435 + val << chan->scan_type.shift); 433 436 mutex_unlock(&indio_dev->mlock); 434 437 435 438 return ret; ··· 455 452 mutex_unlock(&indio_dev->mlock); 456 453 if (ret < 0) 457 454 return ret; 458 - *val = valin; 455 + *val = (valin >> chan->scan_type.shift) & 456 + RES_MASK(chan->scan_type.realbits); 459 457 460 458 return IIO_VAL_INT; 461 459 }
+1 -1
drivers/iio/adc/ti_am335x_adc.c
··· 374 374 return -EAGAIN; 375 375 } 376 376 } 377 - map_val = chan->channel + TOTAL_CHANNELS; 377 + map_val = adc_dev->channel_step[chan->scan_index]; 378 378 379 379 /* 380 380 * We check the complete FIFO. We programmed just one entry but in case
+2 -5
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 110 110 struct gyro_3d_state *gyro_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 150 151 ret_type = IIO_VAL_INT; 151 152 break; 152 153 case IIO_CHAN_INFO_SAMP_FREQ: 153 - ret = hid_sensor_read_samp_freq_value( 154 + ret_type = hid_sensor_read_samp_freq_value( 154 155 &gyro_state->common_attributes, val, val2); 155 - ret_type = IIO_VAL_INT_PLUS_MICRO; 156 156 break; 157 157 case IIO_CHAN_INFO_HYSTERESIS: 158 - ret = hid_sensor_read_raw_hyst_value( 158 + ret_type = hid_sensor_read_raw_hyst_value( 159 159 &gyro_state->common_attributes, val, val2); 160 - ret_type = IIO_VAL_INT_PLUS_MICRO; 161 160 break; 162 161 default: 163 162 ret_type = -EINVAL;
+4 -2
drivers/iio/inkern.c
··· 183 183 else if (name && index >= 0) { 184 184 pr_err("ERROR: could not get IIO channel %s:%s(%i)\n", 185 185 np->full_name, name ? name : "", index); 186 - return chan; 186 + return NULL; 187 187 } 188 188 189 189 /* ··· 193 193 */ 194 194 np = np->parent; 195 195 if (np && !of_get_property(np, "io-channel-ranges", NULL)) 196 - break; 196 + return NULL; 197 197 } 198 + 198 199 return chan; 199 200 } 200 201 ··· 318 317 if (channel != NULL) 319 318 return channel; 320 319 } 320 + 321 321 return iio_channel_get_sys(name, channel_name); 322 322 } 323 323 EXPORT_SYMBOL_GPL(iio_channel_get);
+2 -5
drivers/iio/light/hid-sensor-als.c
··· 79 79 struct als_state *als_state = iio_priv(indio_dev); 80 80 int report_id = -1; 81 81 u32 address; 82 - int ret; 83 82 int ret_type; 84 83 s32 poll_value; 85 84 ··· 128 129 ret_type = IIO_VAL_INT; 129 130 break; 130 131 case IIO_CHAN_INFO_SAMP_FREQ: 131 - ret = hid_sensor_read_samp_freq_value( 132 + ret_type = hid_sensor_read_samp_freq_value( 132 133 &als_state->common_attributes, val, val2); 133 - ret_type = IIO_VAL_INT_PLUS_MICRO; 134 134 break; 135 135 case IIO_CHAN_INFO_HYSTERESIS: 136 - ret = hid_sensor_read_raw_hyst_value( 136 + ret_type = hid_sensor_read_raw_hyst_value( 137 137 &als_state->common_attributes, val, val2); 138 - ret_type = IIO_VAL_INT_PLUS_MICRO; 139 138 break; 140 139 default: 141 140 ret_type = -EINVAL;
+2 -5
drivers/iio/light/hid-sensor-prox.c
··· 74 74 struct prox_state *prox_state = iio_priv(indio_dev); 75 75 int report_id = -1; 76 76 u32 address; 77 - int ret; 78 77 int ret_type; 79 78 s32 poll_value; 80 79 ··· 124 125 ret_type = IIO_VAL_INT; 125 126 break; 126 127 case IIO_CHAN_INFO_SAMP_FREQ: 127 - ret = hid_sensor_read_samp_freq_value( 128 + ret_type = hid_sensor_read_samp_freq_value( 128 129 &prox_state->common_attributes, val, val2); 129 - ret_type = IIO_VAL_INT_PLUS_MICRO; 130 130 break; 131 131 case IIO_CHAN_INFO_HYSTERESIS: 132 - ret = hid_sensor_read_raw_hyst_value( 132 + ret_type = hid_sensor_read_raw_hyst_value( 133 133 &prox_state->common_attributes, val, val2); 134 - ret_type = IIO_VAL_INT_PLUS_MICRO; 135 134 break; 136 135 default: 137 136 ret_type = -EINVAL;
+10 -1
drivers/iio/light/tcs3472.c
··· 52 52 53 53 struct tcs3472_data { 54 54 struct i2c_client *client; 55 + struct mutex lock; 55 56 u8 enable; 56 57 u8 control; 57 58 u8 atime; ··· 117 116 118 117 switch (mask) { 119 118 case IIO_CHAN_INFO_RAW: 119 + if (iio_buffer_enabled(indio_dev)) 120 + return -EBUSY; 121 + 122 + mutex_lock(&data->lock); 120 123 ret = tcs3472_req_data(data); 121 - if (ret < 0) 124 + if (ret < 0) { 125 + mutex_unlock(&data->lock); 122 126 return ret; 127 + } 123 128 ret = i2c_smbus_read_word_data(data->client, chan->address); 129 + mutex_unlock(&data->lock); 124 130 if (ret < 0) 125 131 return ret; 126 132 *val = ret; ··· 263 255 data = iio_priv(indio_dev); 264 256 i2c_set_clientdata(client, indio_dev); 265 257 data->client = client; 258 + mutex_init(&data->lock); 266 259 267 260 indio_dev->dev.parent = &client->dev; 268 261 indio_dev->info = &tcs3472_info;
+2 -5
drivers/iio/magnetometer/hid-sensor-magn-3d.c
··· 110 110 struct magn_3d_state *magn_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 152 153 ret_type = IIO_VAL_INT; 153 154 break; 154 155 case IIO_CHAN_INFO_SAMP_FREQ: 155 - ret = hid_sensor_read_samp_freq_value( 156 + ret_type = hid_sensor_read_samp_freq_value( 156 157 &magn_state->common_attributes, val, val2); 157 - ret_type = IIO_VAL_INT_PLUS_MICRO; 158 158 break; 159 159 case IIO_CHAN_INFO_HYSTERESIS: 160 - ret = hid_sensor_read_raw_hyst_value( 160 + ret_type = hid_sensor_read_raw_hyst_value( 161 161 &magn_state->common_attributes, val, val2); 162 - ret_type = IIO_VAL_INT_PLUS_MICRO; 163 162 break; 164 163 default: 165 164 ret_type = -EINVAL;
+2 -5
drivers/iio/pressure/hid-sensor-press.c
··· 78 78 struct press_state *press_state = iio_priv(indio_dev); 79 79 int report_id = -1; 80 80 u32 address; 81 - int ret; 82 81 int ret_type; 83 82 s32 poll_value; 84 83 ··· 127 128 ret_type = IIO_VAL_INT; 128 129 break; 129 130 case IIO_CHAN_INFO_SAMP_FREQ: 130 - ret = hid_sensor_read_samp_freq_value( 131 + ret_type = hid_sensor_read_samp_freq_value( 131 132 &press_state->common_attributes, val, val2); 132 - ret_type = IIO_VAL_INT_PLUS_MICRO; 133 133 break; 134 134 case IIO_CHAN_INFO_HYSTERESIS: 135 - ret = hid_sensor_read_raw_hyst_value( 135 + ret_type = hid_sensor_read_raw_hyst_value( 136 136 &press_state->common_attributes, val, val2); 137 - ret_type = IIO_VAL_INT_PLUS_MICRO; 138 137 break; 139 138 default: 140 139 ret_type = -EINVAL;
+13 -5
drivers/iommu/amd_iommu_v2.c
··· 45 45 struct pasid_state { 46 46 struct list_head list; /* For global state-list */ 47 47 atomic_t count; /* Reference count */ 48 - atomic_t mmu_notifier_count; /* Counting nested mmu_notifier 48 + unsigned mmu_notifier_count; /* Counting nested mmu_notifier 49 49 calls */ 50 50 struct task_struct *task; /* Task bound to this PASID */ 51 51 struct mm_struct *mm; /* mm_struct for the faults */ ··· 53 53 struct pri_queue pri[PRI_QUEUE_SIZE]; /* PRI tag states */ 54 54 struct device_state *device_state; /* Link to our device_state */ 55 55 int pasid; /* PASID index */ 56 - spinlock_t lock; /* Protect pri_queues */ 56 + spinlock_t lock; /* Protect pri_queues and 57 + mmu_notifier_count */ 57 58 wait_queue_head_t wq; /* To wait for count == 0 */ 58 59 }; 59 60 ··· 432 431 { 433 432 struct pasid_state *pasid_state; 434 433 struct device_state *dev_state; 434 + unsigned long flags; 435 435 436 436 pasid_state = mn_to_state(mn); 437 437 dev_state = pasid_state->device_state; 438 438 439 - if (atomic_add_return(1, &pasid_state->mmu_notifier_count) == 1) { 439 + spin_lock_irqsave(&pasid_state->lock, flags); 440 + if (pasid_state->mmu_notifier_count == 0) { 440 441 amd_iommu_domain_set_gcr3(dev_state->domain, 441 442 pasid_state->pasid, 442 443 __pa(empty_page_table)); 443 444 } 445 + pasid_state->mmu_notifier_count += 1; 446 + spin_unlock_irqrestore(&pasid_state->lock, flags); 444 447 } 445 448 446 449 static void mn_invalidate_range_end(struct mmu_notifier *mn, ··· 453 448 { 454 449 struct pasid_state *pasid_state; 455 450 struct device_state *dev_state; 451 + unsigned long flags; 456 452 457 453 pasid_state = mn_to_state(mn); 458 454 dev_state = pasid_state->device_state; 459 455 460 - if (atomic_dec_and_test(&pasid_state->mmu_notifier_count)) { 456 + spin_lock_irqsave(&pasid_state->lock, flags); 457 + pasid_state->mmu_notifier_count -= 1; 458 + if (pasid_state->mmu_notifier_count == 0) { 461 459 amd_iommu_domain_set_gcr3(dev_state->domain, 462 460 pasid_state->pasid, 463 461 __pa(pasid_state->mm->pgd)); 464 462 } 463 + spin_unlock_irqrestore(&pasid_state->lock, flags); 465 464 } 466 465 467 466 static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm) ··· 659 650 goto out; 660 651 661 652 atomic_set(&pasid_state->count, 1); 662 - atomic_set(&pasid_state->mmu_notifier_count, 0); 663 653 init_waitqueue_head(&pasid_state->wq); 664 654 spin_lock_init(&pasid_state->lock); 665 655
+3 -6
drivers/iommu/intel-iommu.c
··· 3816 3816 ((void *)rmrr) + rmrr->header.length, 3817 3817 rmrr->segment, rmrru->devices, 3818 3818 rmrru->devices_cnt); 3819 - if (ret > 0) 3820 - break; 3821 - else if(ret < 0) 3819 + if(ret < 0) 3822 3820 return ret; 3823 3821 } else if (info->event == BUS_NOTIFY_DEL_DEVICE) { 3824 - if (dmar_remove_dev_scope(info, rmrr->segment, 3825 - rmrru->devices, rmrru->devices_cnt)) 3826 - break; 3822 + dmar_remove_dev_scope(info, rmrr->segment, 3823 + rmrru->devices, rmrru->devices_cnt); 3827 3824 } 3828 3825 } 3829 3826
+15 -2
drivers/irqchip/irq-armada-370-xp.c
··· 334 334 335 335 static void armada_xp_mpic_smp_cpu_init(void) 336 336 { 337 + u32 control; 338 + int nr_irqs, i; 339 + 340 + control = readl(main_int_base + ARMADA_370_XP_INT_CONTROL); 341 + nr_irqs = (control >> 2) & 0x3ff; 342 + 343 + for (i = 0; i < nr_irqs; i++) 344 + writel(i, per_cpu_int_base + ARMADA_370_XP_INT_SET_MASK_OFFS); 345 + 337 346 /* Clear pending IPIs */ 338 347 writel(0, per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS); 339 348 ··· 483 474 struct device_node *parent) 484 475 { 485 476 struct resource main_int_res, per_cpu_int_res; 486 - int parent_irq; 477 + int parent_irq, nr_irqs, i; 487 478 u32 control; 488 479 489 480 BUG_ON(of_address_to_resource(node, 0, &main_int_res)); ··· 505 496 BUG_ON(!per_cpu_int_base); 506 497 507 498 control = readl(main_int_base + ARMADA_370_XP_INT_CONTROL); 499 + nr_irqs = (control >> 2) & 0x3ff; 500 + 501 + for (i = 0; i < nr_irqs; i++) 502 + writel(i, main_int_base + ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS); 508 503 509 504 armada_370_xp_mpic_domain = 510 - irq_domain_add_linear(node, (control >> 2) & 0x3ff, 505 + irq_domain_add_linear(node, nr_irqs, 511 506 &armada_370_xp_mpic_irq_ops, NULL); 512 507 513 508 BUG_ON(!armada_370_xp_mpic_domain);
+1 -1
drivers/irqchip/irq-brcmstb-l2.c
··· 150 150 151 151 /* Allocate a single Generic IRQ chip for this node */ 152 152 ret = irq_alloc_domain_generic_chips(data->domain, 32, 1, 153 - np->full_name, handle_level_irq, clr, 0, 0); 153 + np->full_name, handle_edge_irq, clr, 0, 0); 154 154 if (ret) { 155 155 pr_err("failed to allocate generic irq chip\n"); 156 156 goto out_free_domain;
+1 -1
drivers/irqchip/spear-shirq.c
··· 125 125 }; 126 126 127 127 static struct spear_shirq spear320_shirq_ras3 = { 128 - .irq_nr = 3, 128 + .irq_nr = 7, 129 129 .irq_bit_off = 0, 130 130 .invalid_irq = 1, 131 131 .regs = {
+9 -5
drivers/isdn/hisax/l3ni1.c
··· 2059 2059 memcpy(p, ic->parm.ni1_io.data, ic->parm.ni1_io.datalen); /* copy data */ 2060 2060 l = (p - temp) + ic->parm.ni1_io.datalen; /* total length */ 2061 2061 2062 - if (ic->parm.ni1_io.timeout > 0) 2063 - if (!(pc = ni1_new_l3_process(st, -1))) 2064 - { free_invoke_id(st, id); 2062 + if (ic->parm.ni1_io.timeout > 0) { 2063 + pc = ni1_new_l3_process(st, -1); 2064 + if (!pc) { 2065 + free_invoke_id(st, id); 2065 2066 return (-2); 2066 2067 } 2067 - pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; /* remember id */ 2068 - pc->prot.ni1.proc = ic->parm.ni1_io.proc; /* and procedure */ 2068 + /* remember id */ 2069 + pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; 2070 + /* and procedure */ 2071 + pc->prot.ni1.proc = ic->parm.ni1_io.proc; 2072 + } 2069 2073 2070 2074 if (!(skb = l3_alloc_skb(l))) 2071 2075 { free_invoke_id(st, id);
+1 -7
drivers/isdn/i4l/isdn_ppp.c
··· 442 442 { 443 443 struct sock_fprog uprog; 444 444 struct sock_filter *code = NULL; 445 - int len, err; 445 + int len; 446 446 447 447 if (copy_from_user(&uprog, arg, sizeof(uprog))) 448 448 return -EFAULT; ··· 457 457 code = memdup_user(uprog.filter, len); 458 458 if (IS_ERR(code)) 459 459 return PTR_ERR(code); 460 - 461 - err = sk_chk_filter(code, uprog.len); 462 - if (err) { 463 - kfree(code); 464 - return err; 465 - } 466 460 467 461 *p = code; 468 462 return uprog.len;
+2 -2
drivers/md/dm-crypt.c
··· 1 1 /* 2 - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> 2 + * Copyright (C) 2003 Jana Saout <jana@saout.de> 3 3 * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org> 4 4 * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved. 5 5 * Copyright (C) 2013 Milan Broz <gmazyland@gmail.com> ··· 1996 1996 module_init(dm_crypt_init); 1997 1997 module_exit(dm_crypt_exit); 1998 1998 1999 - MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); 1999 + MODULE_AUTHOR("Jana Saout <jana@saout.de>"); 2000 2000 MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption"); 2001 2001 MODULE_LICENSE("GPL");
+8 -14
drivers/md/dm-io.c
··· 10 10 #include <linux/device-mapper.h> 11 11 12 12 #include <linux/bio.h> 13 + #include <linux/completion.h> 13 14 #include <linux/mempool.h> 14 15 #include <linux/module.h> 15 16 #include <linux/sched.h> ··· 33 32 struct io { 34 33 unsigned long error_bits; 35 34 atomic_t count; 36 - struct task_struct *sleeper; 35 + struct completion *wait; 37 36 struct dm_io_client *client; 38 37 io_notify_fn callback; 39 38 void *context; ··· 122 121 invalidate_kernel_vmap_range(io->vma_invalidate_address, 123 122 io->vma_invalidate_size); 124 123 125 - if (io->sleeper) 126 - wake_up_process(io->sleeper); 124 + if (io->wait) 125 + complete(io->wait); 127 126 128 127 else { 129 128 unsigned long r = io->error_bits; ··· 388 387 */ 389 388 volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1]; 390 389 struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io)); 390 + DECLARE_COMPLETION_ONSTACK(wait); 391 391 392 392 if (num_regions > 1 && (rw & RW_MASK) != WRITE) { 393 393 WARN_ON(1); ··· 397 395 398 396 io->error_bits = 0; 399 397 atomic_set(&io->count, 1); /* see dispatch_io() */ 400 - io->sleeper = current; 398 + io->wait = &wait; 401 399 io->client = client; 402 400 403 401 io->vma_invalidate_address = dp->vma_invalidate_address; ··· 405 403 406 404 dispatch_io(rw, num_regions, where, dp, io, 1); 407 405 408 - while (1) { 409 - set_current_state(TASK_UNINTERRUPTIBLE); 410 - 411 - if (!atomic_read(&io->count)) 412 - break; 413 - 414 - io_schedule(); 415 - } 416 - set_current_state(TASK_RUNNING); 406 + wait_for_completion_io(&wait); 417 407 418 408 if (error_bits) 419 409 *error_bits = io->error_bits; ··· 428 434 io = mempool_alloc(client->pool, GFP_NOIO); 429 435 io->error_bits = 0; 430 436 atomic_set(&io->count, 1); /* see dispatch_io() */ 431 - io->sleeper = NULL; 437 + io->wait = NULL; 432 438 io->client = client; 433 439 io->callback = fn; 434 440 io->context = context;
+3 -2
drivers/md/dm-mpath.c
··· 1611 1611 1612 1612 spin_lock_irqsave(&m->lock, flags); 1613 1613 1614 - /* pg_init in progress, requeue until done */ 1615 - if (!pg_ready(m)) { 1614 + /* pg_init in progress or no paths available */ 1615 + if (m->pg_init_in_progress || 1616 + (!m->nr_valid_paths && m->queue_if_no_path)) { 1616 1617 busy = 1; 1617 1618 goto out; 1618 1619 }
+2 -2
drivers/md/dm-zero.c
··· 1 1 /* 2 - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> 2 + * Copyright (C) 2003 Jana Saout <jana@saout.de> 3 3 * 4 4 * This file is released under the GPL. 5 5 */ ··· 79 79 module_init(dm_zero_init) 80 80 module_exit(dm_zero_exit) 81 81 82 - MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); 82 + MODULE_AUTHOR("Jana Saout <jana@saout.de>"); 83 83 MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros"); 84 84 MODULE_LICENSE("GPL");
+13 -2
drivers/md/dm.c
··· 54 54 55 55 static DECLARE_WORK(deferred_remove_work, do_deferred_remove); 56 56 57 + static struct workqueue_struct *deferred_remove_workqueue; 58 + 57 59 /* 58 60 * For bio-based dm. 59 61 * One of these is allocated per bio. ··· 278 276 if (r) 279 277 goto out_free_rq_tio_cache; 280 278 279 + deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1); 280 + if (!deferred_remove_workqueue) { 281 + r = -ENOMEM; 282 + goto out_uevent_exit; 283 + } 284 + 281 285 _major = major; 282 286 r = register_blkdev(_major, _name); 283 287 if (r < 0) 284 - goto out_uevent_exit; 288 + goto out_free_workqueue; 285 289 286 290 if (!_major) 287 291 _major = r; 288 292 289 293 return 0; 290 294 295 + out_free_workqueue: 296 + destroy_workqueue(deferred_remove_workqueue); 291 297 out_uevent_exit: 292 298 dm_uevent_exit(); 293 299 out_free_rq_tio_cache: ··· 309 299 static void local_exit(void) 310 300 { 311 301 flush_scheduled_work(); 302 + destroy_workqueue(deferred_remove_workqueue); 312 303 313 304 kmem_cache_destroy(_rq_tio_cache); 314 305 kmem_cache_destroy(_io_cache); ··· 418 407 419 408 if (atomic_dec_and_test(&md->open_count) && 420 409 (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) 421 - schedule_work(&deferred_remove_work); 410 + queue_work(deferred_remove_workqueue, &deferred_remove_work); 422 411 423 412 dm_put(md); 424 413
+14 -1
drivers/md/md.c
··· 5599 5599 if (mddev->in_sync) 5600 5600 info.state = (1<<MD_SB_CLEAN); 5601 5601 if (mddev->bitmap && mddev->bitmap_info.offset) 5602 - info.state = (1<<MD_SB_BITMAP_PRESENT); 5602 + info.state |= (1<<MD_SB_BITMAP_PRESENT); 5603 5603 info.active_disks = insync; 5604 5604 info.working_disks = working; 5605 5605 info.failed_disks = failed; ··· 7501 7501 rdev->recovery_offset < j) 7502 7502 j = rdev->recovery_offset; 7503 7503 rcu_read_unlock(); 7504 + 7505 + /* If there is a bitmap, we need to make sure all 7506 + * writes that started before we added a spare 7507 + * complete before we start doing a recovery. 7508 + * Otherwise the write might complete and (via 7509 + * bitmap_endwrite) set a bit in the bitmap after the 7510 + * recovery has checked that bit and skipped that 7511 + * region. 7512 + */ 7513 + if (mddev->bitmap) { 7514 + mddev->pers->quiesce(mddev, 1); 7515 + mddev->pers->quiesce(mddev, 0); 7516 + } 7504 7517 } 7505 7518 7506 7519 printk(KERN_INFO "md: %s of RAID array %s\n", desc, mdname(mddev));
+3 -2
drivers/mfd/Kconfig
··· 760 760 config MFD_DAVINCI_VOICECODEC 761 761 tristate 762 762 select MFD_CORE 763 + select REGMAP_MMIO 763 764 764 765 config MFD_TI_AM335X_TSCADC 765 766 tristate "TI ADC / Touch Screen chip support" ··· 1226 1225 functionaltiy of the device other drivers must be enabled. 1227 1226 1228 1227 config MFD_STW481X 1229 - bool "Support for ST Microelectronics STw481x" 1228 + tristate "Support for ST Microelectronics STw481x" 1230 1229 depends on I2C && ARCH_NOMADIK 1231 1230 select REGMAP_I2C 1232 1231 select MFD_CORE ··· 1249 1248 1250 1249 # Chip drivers 1251 1250 config MCP_UCB1200 1252 - bool "Support for UCB1200 / UCB1300" 1251 + tristate "Support for UCB1200 / UCB1300" 1253 1252 depends on MCP_SA11X0 1254 1253 select MCP 1255 1254
+1 -1
drivers/mfd/ab8500-core.c
··· 591 591 num_irqs = AB8500_NR_IRQS; 592 592 593 593 /* If ->irq_base is zero this will give a linear mapping */ 594 - ab8500->domain = irq_domain_add_simple(NULL, 594 + ab8500->domain = irq_domain_add_simple(ab8500->dev->of_node, 595 595 num_irqs, 0, 596 596 &ab8500_irq_ops, ab8500); 597 597
+43
drivers/mtd/chips/cfi_cmdset_0001.c
··· 52 52 /* Atmel chips */ 53 53 #define AT49BV640D 0x02de 54 54 #define AT49BV640DT 0x02db 55 + /* Sharp chips */ 56 + #define LH28F640BFHE_PTTL90 0x00b0 57 + #define LH28F640BFHE_PBTL90 0x00b1 58 + #define LH28F640BFHE_PTTL70A 0x00b2 59 + #define LH28F640BFHE_PBTL70A 0x00b3 55 60 56 61 static int cfi_intelext_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); 57 62 static int cfi_intelext_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *); ··· 263 258 (cfi->cfiq->EraseRegionInfo[1] & 0xffff0000) | 0x3e; 264 259 }; 265 260 261 + static int is_LH28F640BF(struct cfi_private *cfi) 262 + { 263 + /* Sharp LH28F640BF Family */ 264 + if (cfi->mfr == CFI_MFR_SHARP && ( 265 + cfi->id == LH28F640BFHE_PTTL90 || cfi->id == LH28F640BFHE_PBTL90 || 266 + cfi->id == LH28F640BFHE_PTTL70A || cfi->id == LH28F640BFHE_PBTL70A)) 267 + return 1; 268 + return 0; 269 + } 270 + 271 + static void fixup_LH28F640BF(struct mtd_info *mtd) 272 + { 273 + struct map_info *map = mtd->priv; 274 + struct cfi_private *cfi = map->fldrv_priv; 275 + struct cfi_pri_intelext *extp = cfi->cmdset_priv; 276 + 277 + /* Reset the Partition Configuration Register on LH28F640BF 278 + * to a single partition (PCR = 0x000): PCR is embedded into A0-A15. */ 279 + if (is_LH28F640BF(cfi)) { 280 + printk(KERN_INFO "Reset Partition Config. Register: 1 Partition of 4 planes\n"); 281 + map_write(map, CMD(0x60), 0); 282 + map_write(map, CMD(0x04), 0); 283 + 284 + /* We have set one single partition thus 285 + * Simultaneous Operations are not allowed */ 286 + printk(KERN_INFO "cfi_cmdset_0001: Simultaneous Operations disabled\n"); 287 + extp->FeatureSupport &= ~512; 288 + } 289 + } 290 + 266 291 static void fixup_use_point(struct mtd_info *mtd) 267 292 { 268 293 struct map_info *map = mtd->priv; ··· 344 309 { CFI_MFR_ST, 0x00ba, /* M28W320CT */ fixup_st_m28w320ct }, 345 310 { CFI_MFR_ST, 0x00bb, /* M28W320CB */ fixup_st_m28w320cb }, 346 311 { CFI_MFR_INTEL, CFI_ID_ANY, fixup_unlock_powerup_lock }, 312 + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_unlock_powerup_lock }, 313 + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_LH28F640BF }, 347 314 { 0, 0, NULL } 348 315 }; 349 316 ··· 1685 1648 adr += chip->start; 1686 1649 initial_adr = adr; 1687 1650 cmd_adr = adr & ~(wbufsize-1); 1651 + 1652 + /* Sharp LH28F640BF chips need the first address for the 1653 + * Page Buffer Program command. See Table 5 of 1654 + * LH28F320BF, LH28F640BF, LH28F128BF Series (Appendix FUM00701) */ 1655 + if (is_LH28F640BF(cfi)) 1656 + cmd_adr = adr; 1688 1657 1689 1658 /* Let's determine this according to the interleave only once */ 1690 1659 write_cmd = (cfi->cfiq->P_ID != P_ID_INTEL_PERFORMANCE) ? CMD(0xe8) : CMD(0xe9);
+2
drivers/mtd/devices/elm.c
··· 475 475 ELM_SYNDROME_FRAGMENT_1 + offset); 476 476 regs->elm_syndrome_fragment_0[i] = elm_read_reg(info, 477 477 ELM_SYNDROME_FRAGMENT_0 + offset); 478 + break; 478 479 default: 479 480 return -EINVAL; 480 481 } ··· 521 520 regs->elm_syndrome_fragment_1[i]); 522 521 elm_write_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset, 523 522 regs->elm_syndrome_fragment_0[i]); 523 + break; 524 524 default: 525 525 return -EINVAL; 526 526 }
+4 -2
drivers/mtd/nand/nand_base.c
··· 4047 4047 ecc->layout->oobavail += ecc->layout->oobfree[i].length; 4048 4048 mtd->oobavail = ecc->layout->oobavail; 4049 4049 4050 - /* ECC sanity check: warn noisily if it's too weak */ 4051 - WARN_ON(!nand_ecc_strength_good(mtd)); 4050 + /* ECC sanity check: warn if it's too weak */ 4051 + if (!nand_ecc_strength_good(mtd)) 4052 + pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n", 4053 + mtd->name); 4052 4054 4053 4055 /* 4054 4056 * Set the number of read / write steps for one page depending on ECC
+1 -1
drivers/net/bonding/bond_main.c
··· 4037 4037 } 4038 4038 4039 4039 if (ad_select) { 4040 - bond_opt_initstr(&newval, lacp_rate); 4040 + bond_opt_initstr(&newval, ad_select); 4041 4041 valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT), 4042 4042 &newval); 4043 4043 if (!valptr) {
+11 -32
drivers/net/ethernet/broadcom/bcmsysport.c
··· 710 710 711 711 work_done = bcm_sysport_tx_reclaim(ring->priv, ring); 712 712 713 - if (work_done < budget) { 713 + if (work_done == 0) { 714 714 napi_complete(napi); 715 715 /* re-enable TX interrupt */ 716 716 intrl2_1_mask_clear(ring->priv, BIT(ring->index)); 717 717 } 718 718 719 - return work_done; 719 + return 0; 720 720 } 721 721 722 722 static void bcm_sysport_tx_reclaim_all(struct bcm_sysport_priv *priv) ··· 1339 1339 usleep_range(1000, 2000); 1340 1340 } 1341 1341 1342 - static inline int umac_reset(struct bcm_sysport_priv *priv) 1342 + static inline void umac_reset(struct bcm_sysport_priv *priv) 1343 1343 { 1344 - unsigned int timeout = 0; 1345 1344 u32 reg; 1346 - int ret = 0; 1347 1345 1348 - umac_writel(priv, 0, UMAC_CMD); 1349 - while (timeout++ < 1000) { 1350 - reg = umac_readl(priv, UMAC_CMD); 1351 - if (!(reg & CMD_SW_RESET)) 1352 - break; 1353 - 1354 - udelay(1); 1355 - } 1356 - 1357 - if (timeout == 1000) { 1358 - dev_err(&priv->pdev->dev, 1359 - "timeout waiting for MAC to come out of reset\n"); 1360 - ret = -ETIMEDOUT; 1361 - } 1362 - 1363 - return ret; 1346 + reg = umac_readl(priv, UMAC_CMD); 1347 + reg |= CMD_SW_RESET; 1348 + umac_writel(priv, reg, UMAC_CMD); 1349 + udelay(10); 1350 + reg = umac_readl(priv, UMAC_CMD); 1351 + reg &= ~CMD_SW_RESET; 1352 + umac_writel(priv, reg, UMAC_CMD); 1364 1353 } 1365 1354 1366 1355 static void umac_set_hw_addr(struct bcm_sysport_priv *priv, ··· 1401 1412 int ret; 1402 1413 1403 1414 /* Reset UniMAC */ 1404 - ret = umac_reset(priv); 1405 - if (ret) { 1406 - netdev_err(dev, "UniMAC reset failed\n"); 1407 - return ret; 1408 - } 1415 + umac_reset(priv); 1409 1416 1410 1417 /* Flush TX and RX FIFOs at TOPCTRL level */ 1411 1418 topctrl_flush(priv); ··· 1683 1698 /* Set the needed headroom once and for all */ 1684 1699 BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8); 1685 1700 dev->needed_headroom += sizeof(struct bcm_tsb); 1686 - 1687 - /* We are interfaced to a switch which handles the multicast 1688 - * filtering for us, so we do not support programming any 1689 - * multicast hash table in this Ethernet MAC. 1690 - */ 1691 - dev->flags &= ~IFF_MULTICAST; 1692 1701 1693 1702 /* libphy will adjust the link state accordingly */ 1694 1703 netif_carrier_off(dev);
+2 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 797 797 798 798 return; 799 799 } 800 - bnx2x_frag_free(fp, new_data); 800 + if (new_data) 801 + bnx2x_frag_free(fp, new_data); 801 802 drop: 802 803 /* drop the packet and keep the buffer in the bin */ 803 804 DP(NETIF_MSG_RX_STATUS,
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12946 12946 * without the default SB. 12947 12947 * For VFs there is no default SB, then we return (index+1). 12948 12948 */ 12949 - pci_read_config_word(pdev, pdev->msix_cap + PCI_MSI_FLAGS, &control); 12949 + pci_read_config_word(pdev, pdev->msix_cap + PCI_MSIX_FLAGS, &control); 12950 12950 12951 12951 index = control & PCI_MSIX_FLAGS_QSIZE; 12952 12952
+6 -10
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1408 1408 if (cb->skb) 1409 1409 continue; 1410 1410 1411 - /* set the DMA descriptor length once and for all 1412 - * it will only change if we support dynamically sizing 1413 - * priv->rx_buf_len, but we do not 1414 - */ 1415 - dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr, 1416 - priv->rx_buf_len << DMA_BUFLENGTH_SHIFT); 1417 - 1418 1411 ret = bcmgenet_rx_refill(priv, cb); 1419 1412 if (ret) 1420 1413 break; ··· 2528 2535 netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1); 2529 2536 netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1); 2530 2537 2531 - err = register_netdev(dev); 2532 - if (err) 2533 - goto err_clk_disable; 2538 + /* libphy will determine the link state */ 2539 + netif_carrier_off(dev); 2534 2540 2535 2541 /* Turn off the main clock, WOL clock is handled separately */ 2536 2542 if (!IS_ERR(priv->clk)) 2537 2543 clk_disable_unprepare(priv->clk); 2544 + 2545 + err = register_netdev(dev); 2546 + if (err) 2547 + goto err; 2538 2548 2539 2549 return err; 2540 2550
+1 -1
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 331 331 #define EXT_ENERGY_DET_MASK (1 << 12) 332 332 333 333 #define EXT_RGMII_OOB_CTRL 0x0C 334 - #define RGMII_MODE_EN (1 << 0) 335 334 #define RGMII_LINK (1 << 4) 336 335 #define OOB_DISABLE (1 << 5) 336 + #define RGMII_MODE_EN (1 << 6) 337 337 #define ID_MODE_DIS (1 << 16) 338 338 339 339 #define EXT_GPHY_CTRL 0x1C
+1 -1
drivers/net/ethernet/emulex/benet/be_main.c
··· 2897 2897 for_all_evt_queues(adapter, eqo, i) { 2898 2898 napi_enable(&eqo->napi); 2899 2899 be_enable_busy_poll(eqo); 2900 - be_eq_notify(adapter, eqo->q.id, true, false, 0); 2900 + be_eq_notify(adapter, eqo->q.id, true, true, 0); 2901 2901 } 2902 2902 adapter->flags |= BE_FLAGS_NAPI_ENABLED; 2903 2903
+2 -2
drivers/net/ethernet/freescale/ucc_geth.c
··· 2990 2990 if (ug_info->rxExtendedFiltering) { 2991 2991 size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING; 2992 2992 if (ug_info->largestexternallookupkeysize == 2993 - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES) 2993 + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES) 2994 2994 size += 2995 2995 THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_8; 2996 2996 if (ug_info->largestexternallookupkeysize == 2997 - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES) 2997 + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES) 2998 2998 size += 2999 2999 THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_16; 3000 3000 }
+7
drivers/net/ethernet/intel/igb/e1000_82575.c
··· 1480 1480 s32 ret_val; 1481 1481 u16 i, rar_count = mac->rar_entry_count; 1482 1482 1483 + if ((hw->mac.type >= e1000_i210) && 1484 + !(igb_get_flash_presence_i210(hw))) { 1485 + ret_val = igb_pll_workaround_i210(hw); 1486 + if (ret_val) 1487 + return ret_val; 1488 + } 1489 + 1483 1490 /* Initialize identification LED */ 1484 1491 ret_val = igb_id_led_init(hw); 1485 1492 if (ret_val) {
+10 -8
drivers/net/ethernet/intel/igb/e1000_defines.h
··· 46 46 #define E1000_CTRL_EXT_SDP3_DIR 0x00000800 /* SDP3 Data direction */ 47 47 48 48 /* Physical Func Reset Done Indication */ 49 - #define E1000_CTRL_EXT_PFRSTD 0x00004000 50 - #define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 51 - #define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 52 - #define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 53 - #define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 54 - #define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 55 - #define E1000_CTRL_EXT_EIAME 0x01000000 56 - #define E1000_CTRL_EXT_IRCA 0x00000001 49 + #define E1000_CTRL_EXT_PFRSTD 0x00004000 50 + #define E1000_CTRL_EXT_SDLPE 0X00040000 /* SerDes Low Power Enable */ 51 + #define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 52 + #define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 53 + #define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 54 + #define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 55 + #define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 56 + #define E1000_CTRL_EXT_EIAME 0x01000000 57 + #define E1000_CTRL_EXT_IRCA 0x00000001 57 58 /* Interrupt delay cancellation */ 58 59 /* Driver loaded bit for FW */ 59 60 #define E1000_CTRL_EXT_DRV_LOAD 0x10000000 ··· 63 62 /* packet buffer parity error detection enabled */ 64 63 /* descriptor FIFO parity error detection enable */ 65 64 #define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */ 65 + #define E1000_CTRL_EXT_PHYPDEN 0x00100000 66 66 #define E1000_I2CCMD_REG_ADDR_SHIFT 16 67 67 #define E1000_I2CCMD_PHY_ADDR_SHIFT 24 68 68 #define E1000_I2CCMD_OPCODE_READ 0x08000000
+3
drivers/net/ethernet/intel/igb/e1000_hw.h
··· 567 567 /* These functions must be implemented by drivers */ 568 568 s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); 569 569 s32 igb_write_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); 570 + 571 + void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); 572 + void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); 570 573 #endif /* _E1000_HW_H_ */
+66
drivers/net/ethernet/intel/igb/e1000_i210.c
··· 834 834 } 835 835 return ret_val; 836 836 } 837 + 838 + /** 839 + * igb_pll_workaround_i210 840 + * @hw: pointer to the HW structure 841 + * 842 + * Works around an errata in the PLL circuit where it occasionally 843 + * provides the wrong clock frequency after power up. 844 + **/ 845 + s32 igb_pll_workaround_i210(struct e1000_hw *hw) 846 + { 847 + s32 ret_val; 848 + u32 wuc, mdicnfg, ctrl, ctrl_ext, reg_val; 849 + u16 nvm_word, phy_word, pci_word, tmp_nvm; 850 + int i; 851 + 852 + /* Get and set needed register values */ 853 + wuc = rd32(E1000_WUC); 854 + mdicnfg = rd32(E1000_MDICNFG); 855 + reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO; 856 + wr32(E1000_MDICNFG, reg_val); 857 + 858 + /* Get data from NVM, or set default */ 859 + ret_val = igb_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD, 860 + &nvm_word); 861 + if (ret_val) 862 + nvm_word = E1000_INVM_DEFAULT_AL; 863 + tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; 864 + for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { 865 + /* check current state directly from internal PHY */ 866 + igb_read_phy_reg_gs40g(hw, (E1000_PHY_PLL_FREQ_PAGE | 867 + E1000_PHY_PLL_FREQ_REG), &phy_word); 868 + if ((phy_word & E1000_PHY_PLL_UNCONF) 869 + != E1000_PHY_PLL_UNCONF) { 870 + ret_val = 0; 871 + break; 872 + } else { 873 + ret_val = -E1000_ERR_PHY; 874 + } 875 + /* directly reset the internal PHY */ 876 + ctrl = rd32(E1000_CTRL); 877 + wr32(E1000_CTRL, ctrl|E1000_CTRL_PHY_RST); 878 + 879 + ctrl_ext = rd32(E1000_CTRL_EXT); 880 + ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE); 881 + wr32(E1000_CTRL_EXT, ctrl_ext); 882 + 883 + wr32(E1000_WUC, 0); 884 + reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16); 885 + wr32(E1000_EEARBC_I210, reg_val); 886 + 887 + igb_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); 888 + pci_word |= E1000_PCI_PMCSR_D3; 889 + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); 890 + usleep_range(1000, 2000); 891 + pci_word &= ~E1000_PCI_PMCSR_D3; 892 + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); 893 + reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16); 894 + wr32(E1000_EEARBC_I210, reg_val); 895 + 896 + /* restore WUC register */ 897 + wr32(E1000_WUC, wuc); 898 + } 899 + /* restore MDICNFG setting */ 900 + wr32(E1000_MDICNFG, mdicnfg); 901 + return ret_val; 902 + }
+12
drivers/net/ethernet/intel/igb/e1000_i210.h
··· 33 33 s32 igb_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data); 34 34 s32 igb_init_nvm_params_i210(struct e1000_hw *hw); 35 35 bool igb_get_flash_presence_i210(struct e1000_hw *hw); 36 + s32 igb_pll_workaround_i210(struct e1000_hw *hw); 36 37 37 38 #define E1000_STM_OPCODE 0xDB00 38 39 #define E1000_EEPROM_FLASH_SIZE_WORD 0x11 ··· 78 77 #define NVM_INIT_CTRL_4_DEFAULT_I211 0x00C1 79 78 #define NVM_LED_1_CFG_DEFAULT_I211 0x0184 80 79 #define NVM_LED_0_2_CFG_DEFAULT_I211 0x200C 80 + 81 + /* PLL Defines */ 82 + #define E1000_PCI_PMCSR 0x44 83 + #define E1000_PCI_PMCSR_D3 0x03 84 + #define E1000_MAX_PLL_TRIES 5 85 + #define E1000_PHY_PLL_UNCONF 0xFF 86 + #define E1000_PHY_PLL_FREQ_PAGE 0xFC0000 87 + #define E1000_PHY_PLL_FREQ_REG 0x000E 88 + #define E1000_INVM_DEFAULT_AL 0x202F 89 + #define E1000_INVM_AUTOLOAD 0x0A 90 + #define E1000_INVM_PLL_WO_VAL 0x0010 81 91 82 92 #endif
+1
drivers/net/ethernet/intel/igb/e1000_regs.h
··· 66 66 #define E1000_PBA 0x01000 /* Packet Buffer Allocation - RW */ 67 67 #define E1000_PBS 0x01008 /* Packet Buffer Size */ 68 68 #define E1000_EEMNGCTL 0x01010 /* MNG EEprom Control */ 69 + #define E1000_EEARBC_I210 0x12024 /* EEPROM Auto Read Bus Control */ 69 70 #define E1000_EEWR 0x0102C /* EEPROM Write Register - RW */ 70 71 #define E1000_I2CCMD 0x01028 /* SFPI2C Command Register - RW */ 71 72 #define E1000_FRTIMER 0x01048 /* Free Running Timer - RW */
+16
drivers/net/ethernet/intel/igb/igb_main.c
··· 7217 7217 } 7218 7218 } 7219 7219 7220 + void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) 7221 + { 7222 + struct igb_adapter *adapter = hw->back; 7223 + 7224 + pci_read_config_word(adapter->pdev, reg, value); 7225 + } 7226 + 7227 + void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) 7228 + { 7229 + struct igb_adapter *adapter = hw->back; 7230 + 7231 + pci_write_config_word(adapter->pdev, reg, *value); 7232 + } 7233 + 7220 7234 s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value) 7221 7235 { 7222 7236 struct igb_adapter *adapter = hw->back; ··· 7594 7580 7595 7581 if (netif_running(netdev)) 7596 7582 igb_close(netdev); 7583 + else 7584 + igb_reset(adapter); 7597 7585 7598 7586 igb_clear_interrupt_scheme(adapter); 7599 7587
+2 -2
drivers/net/ethernet/marvell/mvneta.c
··· 1207 1207 command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; 1208 1208 command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; 1209 1209 1210 - if (l3_proto == swab16(ETH_P_IP)) 1210 + if (l3_proto == htons(ETH_P_IP)) 1211 1211 command |= MVNETA_TXD_IP_CSUM; 1212 1212 else 1213 1213 command |= MVNETA_TX_L3_IP6; ··· 2529 2529 2530 2530 if (phydev->speed == SPEED_1000) 2531 2531 val |= MVNETA_GMAC_CONFIG_GMII_SPEED; 2532 - else 2532 + else if (phydev->speed == SPEED_100) 2533 2533 val |= MVNETA_GMAC_CONFIG_MII_SPEED; 2534 2534 2535 2535 mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val);
-2
drivers/net/ethernet/mellanox/mlx4/cq.c
··· 294 294 init_completion(&cq->free); 295 295 296 296 cq->irq = priv->eq_table.eq[cq->vector].irq; 297 - cq->irq_affinity_change = false; 298 - 299 297 return 0; 300 298 301 299 err_radix:
+5 -2
drivers/net/ethernet/mellanox/mlx4/en_cq.c
··· 128 128 mlx4_warn(mdev, "Failed assigning an EQ to %s, falling back to legacy EQ's\n", 129 129 name); 130 130 } 131 + 132 + cq->irq_desc = 133 + irq_to_desc(mlx4_eq_get_irq(mdev->dev, 134 + cq->vector)); 131 135 } 132 136 } else { 133 137 cq->vector = (cq->ring + 1 + priv->port) % ··· 191 187 mlx4_en_unmap_buffer(&cq->wqres.buf); 192 188 mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size); 193 189 if (priv->mdev->dev->caps.comp_pool && cq->vector) { 194 - if (!cq->is_tx) 195 - irq_set_affinity_hint(cq->mcq.irq, NULL); 196 190 mlx4_release_eq(priv->mdev->dev, cq->vector); 197 191 } 198 192 cq->vector = 0; ··· 206 204 if (!cq->is_tx) { 207 205 napi_hash_del(&cq->napi); 208 206 synchronize_rcu(); 207 + irq_set_affinity_hint(cq->mcq.irq, NULL); 209 208 } 210 209 netif_napi_del(&cq->napi); 211 210
+7
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 417 417 418 418 coal->tx_coalesce_usecs = priv->tx_usecs; 419 419 coal->tx_max_coalesced_frames = priv->tx_frames; 420 + coal->tx_max_coalesced_frames_irq = priv->tx_work_limit; 421 + 420 422 coal->rx_coalesce_usecs = priv->rx_usecs; 421 423 coal->rx_max_coalesced_frames = priv->rx_frames; 422 424 ··· 428 426 coal->rx_coalesce_usecs_high = priv->rx_usecs_high; 429 427 coal->rate_sample_interval = priv->sample_interval; 430 428 coal->use_adaptive_rx_coalesce = priv->adaptive_rx_coal; 429 + 431 430 return 0; 432 431 } 433 432 ··· 436 433 struct ethtool_coalesce *coal) 437 434 { 438 435 struct mlx4_en_priv *priv = netdev_priv(dev); 436 + 437 + if (!coal->tx_max_coalesced_frames_irq) 438 + return -EINVAL; 439 439 440 440 priv->rx_frames = (coal->rx_max_coalesced_frames == 441 441 MLX4_EN_AUTO_CONF) ? ··· 463 457 priv->rx_usecs_high = coal->rx_coalesce_usecs_high; 464 458 priv->sample_interval = coal->rate_sample_interval; 465 459 priv->adaptive_rx_coal = coal->use_adaptive_rx_coalesce; 460 + priv->tx_work_limit = coal->tx_max_coalesced_frames_irq; 466 461 467 462 return mlx4_en_moderation_update(priv); 468 463 }
+2 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2331 2331 struct mlx4_en_priv *priv = netdev_priv(dev); 2332 2332 __be16 current_port; 2333 2333 2334 - if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS)) 2334 + if (priv->mdev->dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 2335 2335 return; 2336 2336 2337 2337 if (sa_family == AF_INET6) ··· 2468 2468 MLX4_WQE_CTRL_SOLICITED); 2469 2469 priv->num_tx_rings_p_up = mdev->profile.num_tx_rings_p_up; 2470 2470 priv->tx_ring_num = prof->tx_ring_num; 2471 + priv->tx_work_limit = MLX4_EN_DEFAULT_TX_WORK; 2471 2472 2472 2473 priv->tx_ring = kzalloc(sizeof(struct mlx4_en_tx_ring *) * MAX_TX_RINGS, 2473 2474 GFP_KERNEL);
+14 -3
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 40 40 #include <linux/if_ether.h> 41 41 #include <linux/if_vlan.h> 42 42 #include <linux/vmalloc.h> 43 + #include <linux/irq.h> 43 44 44 45 #include "mlx4_en.h" 45 46 ··· 783 782 PKT_HASH_TYPE_L3); 784 783 785 784 skb_record_rx_queue(gro_skb, cq->ring); 785 + skb_mark_napi_id(gro_skb, &cq->napi); 786 786 787 787 if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) { 788 788 timestamp = mlx4_en_get_cqe_ts(cqe); ··· 898 896 899 897 /* If we used up all the quota - we're probably not done yet... */ 900 898 if (done == budget) { 899 + int cpu_curr; 900 + const struct cpumask *aff; 901 + 901 902 INC_PERF_COUNTER(priv->pstats.napi_quota); 902 - if (unlikely(cq->mcq.irq_affinity_change)) { 903 - cq->mcq.irq_affinity_change = false; 903 + 904 + cpu_curr = smp_processor_id(); 905 + aff = irq_desc_get_irq_data(cq->irq_desc)->affinity; 906 + 907 + if (unlikely(!cpumask_test_cpu(cpu_curr, aff))) { 908 + /* Current cpu is not according to smp_irq_affinity - 909 + * probably affinity changed. need to stop this NAPI 910 + * poll, and restart it on the right CPU 911 + */ 904 912 napi_complete(napi); 905 913 mlx4_en_arm_cq(priv, cq); 906 914 return 0; 907 915 } 908 916 } else { 909 917 /* Done for now */ 910 - cq->mcq.irq_affinity_change = false; 911 918 napi_complete(napi); 912 919 mlx4_en_arm_cq(priv, cq); 913 920 }
+13 -21
drivers/net/ethernet/mellanox/mlx4/en_tx.c
··· 351 351 return cnt; 352 352 } 353 353 354 - static int mlx4_en_process_tx_cq(struct net_device *dev, 355 - struct mlx4_en_cq *cq, 356 - int budget) 354 + static bool mlx4_en_process_tx_cq(struct net_device *dev, 355 + struct mlx4_en_cq *cq) 357 356 { 358 357 struct mlx4_en_priv *priv = netdev_priv(dev); 359 358 struct mlx4_cq *mcq = &cq->mcq; ··· 371 372 int factor = priv->cqe_factor; 372 373 u64 timestamp = 0; 373 374 int done = 0; 375 + int budget = priv->tx_work_limit; 374 376 375 377 if (!priv->port_up) 376 - return 0; 378 + return true; 377 379 378 380 index = cons_index & size_mask; 379 381 cqe = &buf[(index << factor) + factor]; ··· 447 447 netif_tx_wake_queue(ring->tx_queue); 448 448 ring->wake_queue++; 449 449 } 450 - return done; 450 + return done < budget; 451 451 } 452 452 453 453 void mlx4_en_tx_irq(struct mlx4_cq *mcq) ··· 467 467 struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi); 468 468 struct net_device *dev = cq->dev; 469 469 struct mlx4_en_priv *priv = netdev_priv(dev); 470 - int done; 470 + int clean_complete; 471 471 472 - done = mlx4_en_process_tx_cq(dev, cq, budget); 472 + clean_complete = mlx4_en_process_tx_cq(dev, cq); 473 + if (!clean_complete) 474 + return budget; 473 475 474 - /* If we used up all the quota - we're probably not done yet... */ 475 - if (done < budget) { 476 - /* Done for now */ 477 - cq->mcq.irq_affinity_change = false; 478 - napi_complete(napi); 479 - mlx4_en_arm_cq(priv, cq); 480 - return done; 481 - } else if (unlikely(cq->mcq.irq_affinity_change)) { 482 - cq->mcq.irq_affinity_change = false; 483 - napi_complete(napi); 484 - mlx4_en_arm_cq(priv, cq); 485 - return 0; 486 - } 487 - return budget; 476 + napi_complete(napi); 477 + mlx4_en_arm_cq(priv, cq); 478 + 479 + return 0; 488 480 } 489 481 490 482 static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
+8 -61
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 53 53 MLX4_EQ_ENTRY_SIZE = 0x20 54 54 }; 55 55 56 - struct mlx4_irq_notify { 57 - void *arg; 58 - struct irq_affinity_notify notify; 59 - }; 60 - 61 56 #define MLX4_EQ_STATUS_OK ( 0 << 28) 62 57 #define MLX4_EQ_STATUS_WRITE_FAIL (10 << 28) 63 58 #define MLX4_EQ_OWNER_SW ( 0 << 24) ··· 1083 1088 iounmap(priv->clr_base); 1084 1089 } 1085 1090 1086 - static void mlx4_irq_notifier_notify(struct irq_affinity_notify *notify, 1087 - const cpumask_t *mask) 1088 - { 1089 - struct mlx4_irq_notify *n = container_of(notify, 1090 - struct mlx4_irq_notify, 1091 - notify); 1092 - struct mlx4_priv *priv = (struct mlx4_priv *)n->arg; 1093 - struct radix_tree_iter iter; 1094 - void **slot; 1095 - 1096 - radix_tree_for_each_slot(slot, &priv->cq_table.tree, &iter, 0) { 1097 - struct mlx4_cq *cq = (struct mlx4_cq *)(*slot); 1098 - 1099 - if (cq->irq == notify->irq) 1100 - cq->irq_affinity_change = true; 1101 - } 1102 - } 1103 - 1104 - static void mlx4_release_irq_notifier(struct kref *ref) 1105 - { 1106 - struct mlx4_irq_notify *n = container_of(ref, struct mlx4_irq_notify, 1107 - notify.kref); 1108 - kfree(n); 1109 - } 1110 - 1111 - static void mlx4_assign_irq_notifier(struct mlx4_priv *priv, 1112 - struct mlx4_dev *dev, int irq) 1113 - { 1114 - struct mlx4_irq_notify *irq_notifier = NULL; 1115 - int err = 0; 1116 - 1117 - irq_notifier = kzalloc(sizeof(*irq_notifier), GFP_KERNEL); 1118 - if (!irq_notifier) { 1119 - mlx4_warn(dev, "Failed to allocate irq notifier. irq %d\n", 1120 - irq); 1121 - return; 1122 - } 1123 - 1124 - irq_notifier->notify.irq = irq; 1125 - irq_notifier->notify.notify = mlx4_irq_notifier_notify; 1126 - irq_notifier->notify.release = mlx4_release_irq_notifier; 1127 - irq_notifier->arg = priv; 1128 - err = irq_set_affinity_notifier(irq, &irq_notifier->notify); 1129 - if (err) { 1130 - kfree(irq_notifier); 1131 - irq_notifier = NULL; 1132 - mlx4_warn(dev, "Failed to set irq notifier. irq %d\n", irq); 1133 - } 1134 - } 1135 - 1136 - 1137 1091 int mlx4_alloc_eq_table(struct mlx4_dev *dev) 1138 1092 { 1139 1093 struct mlx4_priv *priv = mlx4_priv(dev);
+4
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 126 126 #define MAX_TX_RINGS (MLX4_EN_MAX_TX_RING_P_UP * \ 127 127 MLX4_EN_NUM_UP) 128 128 129 + #define MLX4_EN_DEFAULT_TX_WORK 256 130 + 129 131 /* Target number of packets to coalesce with interrupt moderation */ 130 132 #define MLX4_EN_RX_COAL_TARGET 44 131 133 #define MLX4_EN_RX_COAL_TIME 0x10 ··· 343 341 #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) 344 342 spinlock_t poll_lock; /* protects from LLS/napi conflicts */ 345 343 #endif /* CONFIG_NET_RX_BUSY_POLL */ 344 + struct irq_desc *irq_desc; 346 345 }; 347 346 348 347 struct mlx4_en_port_profile { ··· 543 540 __be32 ctrl_flags; 544 541 u32 flags; 545 542 u8 num_tx_rings_p_up; 543 + u32 tx_work_limit; 546 544 u32 tx_ring_num; 547 545 u32 rx_ring_num; 548 546 u32 rx_skb_size;
+25
drivers/net/ethernet/realtek/r8169.c
··· 540 540 MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */ 541 541 LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */ 542 542 Jumbo_En0 = (1 << 2), /* 8168 only. Reserved in the 8168b */ 543 + Rdy_to_L23 = (1 << 1), /* L23 Enable */ 543 544 Beacon_en = (1 << 0), /* 8168 only. Reserved in the 8168b */ 544 545 545 546 /* Config4 register */ ··· 4884 4883 PCI_EXP_LNKCTL_CLKREQ_EN); 4885 4884 } 4886 4885 4886 + static void rtl_pcie_state_l2l3_enable(struct rtl8169_private *tp, bool enable) 4887 + { 4888 + void __iomem *ioaddr = tp->mmio_addr; 4889 + u8 data; 4890 + 4891 + data = RTL_R8(Config3); 4892 + 4893 + if (enable) 4894 + data |= Rdy_to_L23; 4895 + else 4896 + data &= ~Rdy_to_L23; 4897 + 4898 + RTL_W8(Config3, data); 4899 + } 4900 + 4887 4901 #define R8168_CPCMD_QUIRK_MASK (\ 4888 4902 EnableBist | \ 4889 4903 Mac_dbgo_oe | \ ··· 5248 5232 }; 5249 5233 5250 5234 rtl_hw_start_8168f(tp); 5235 + rtl_pcie_state_l2l3_enable(tp, false); 5251 5236 5252 5237 rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); 5253 5238 ··· 5287 5270 5288 5271 rtl_w1w0_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC); 5289 5272 rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC); 5273 + 5274 + rtl_pcie_state_l2l3_enable(tp, false); 5290 5275 } 5291 5276 5292 5277 static void rtl_hw_start_8168g_2(struct rtl8169_private *tp) ··· 5541 5522 RTL_W8(DLLPR, RTL_R8(DLLPR) | PFM_EN); 5542 5523 5543 5524 rtl_ephy_init(tp, e_info_8105e_1, ARRAY_SIZE(e_info_8105e_1)); 5525 + 5526 + rtl_pcie_state_l2l3_enable(tp, false); 5544 5527 } 5545 5528 5546 5529 static void rtl_hw_start_8105e_2(struct rtl8169_private *tp) ··· 5578 5557 rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); 5579 5558 rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); 5580 5559 rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC); 5560 + 5561 + rtl_pcie_state_l2l3_enable(tp, false); 5581 5562 } 5582 5563 
5583 5564 static void rtl_hw_start_8106(struct rtl8169_private *tp) ··· 5592 5569 RTL_W32(MISC, (RTL_R32(MISC) | DISABLE_LAN_EN) & ~EARLY_TALLY_EN); 5593 5570 RTL_W8(MCU, RTL_R8(MCU) | EN_NDP | EN_OOB_RESET); 5594 5571 RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN); 5572 + 5573 + rtl_pcie_state_l2l3_enable(tp, false); 5595 5574 } 5596 5575 5597 5576 static void rtl_hw_start_8101(struct net_device *dev)
+1 -4
drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
··· 320 320 321 321 static void dwmac1000_ctrl_ane(void __iomem *ioaddr, bool restart) 322 322 { 323 - u32 value; 324 - 325 - value = readl(ioaddr + GMAC_AN_CTRL); 326 323 /* auto negotiation enable and External Loopback enable */ 327 - value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; 324 + u32 value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; 328 325 329 326 if (restart) 330 327 value |= GMAC_AN_CTRL_RAN;
+1 -1
drivers/net/ethernet/stmicro/stmmac/enh_desc.c
··· 145 145 x->rx_msg_type_delay_req++; 146 146 else if (p->des4.erx.msg_type == RDES_EXT_DELAY_RESP) 147 147 x->rx_msg_type_delay_resp++; 148 - else if (p->des4.erx.msg_type == RDES_EXT_DELAY_REQ) 148 + else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_REQ) 149 149 x->rx_msg_type_pdelay_req++; 150 150 else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_RESP) 151 151 x->rx_msg_type_pdelay_resp++;
+7 -10
drivers/net/fddi/defxx.c
··· 292 292 293 293 static int dfx_rcv_init(DFX_board_t *bp, int get_buffers); 294 294 static void dfx_rcv_queue_process(DFX_board_t *bp); 295 + #ifdef DYNAMIC_BUFFERS 295 296 static void dfx_rcv_flush(DFX_board_t *bp); 297 + #else 298 + static inline void dfx_rcv_flush(DFX_board_t *bp) {} 299 + #endif 296 300 297 301 static netdev_tx_t dfx_xmt_queue_pkt(struct sk_buff *skb, 298 302 struct net_device *dev); ··· 2853 2849 * Align an sk_buff to a boundary power of 2 2854 2850 * 2855 2851 */ 2856 - 2852 + #ifdef DYNAMIC_BUFFERS 2857 2853 static void my_skb_align(struct sk_buff *skb, int n) 2858 2854 { 2859 2855 unsigned long x = (unsigned long)skb->data; ··· 2863 2859 2864 2860 skb_reserve(skb, v - x); 2865 2861 } 2866 - 2862 + #endif 2867 2863 2868 2864 /* 2869 2865 * ================ ··· 3112 3108 break; 3113 3109 } 3114 3110 else { 3115 - #ifndef DYNAMIC_BUFFERS 3116 - if (! rx_in_place) 3117 - #endif 3118 - { 3111 + if (!rx_in_place) { 3119 3112 /* Receive buffer allocated, pass receive packet up */ 3120 3113 dma_sync_single_for_cpu( 3121 3114 bp->bus_dev, ··· 3506 3505 } 3507 3506 3508 3507 } 3509 - #else 3510 - static inline void dfx_rcv_flush( DFX_board_t *bp ) 3511 - { 3512 - } 3513 3508 #endif /* DYNAMIC_BUFFERS */ 3514 3509 3515 3510 /*
+3 -3
drivers/net/phy/dp83640.c
··· 1341 1341 { 1342 1342 struct dp83640_private *dp83640 = phydev->priv; 1343 1343 1344 - if (!dp83640->hwts_rx_en) 1345 - return false; 1346 - 1347 1344 if (is_status_frame(skb, type)) { 1348 1345 decode_status_frame(dp83640, skb); 1349 1346 kfree_skb(skb); 1350 1347 return true; 1351 1348 } 1349 + 1350 + if (!dp83640->hwts_rx_en) 1351 + return false; 1352 1352 1353 1353 SKB_PTP_TYPE(skb) = type; 1354 1354 skb_queue_tail(&dp83640->rx_queue, skb);
+44
drivers/net/phy/mdio_bus.c
··· 187 187 return d ? to_mii_bus(d) : NULL; 188 188 } 189 189 EXPORT_SYMBOL(of_mdio_find_bus); 190 + 191 + /* Walk the list of subnodes of a mdio bus and look for a node that matches the 192 + * phy's address with its 'reg' property. If found, set the of_node pointer for 193 + * the phy. This allows auto-probed phy devices to be supplied with information 194 + * passed in via DT. 195 + */ 196 + static void of_mdiobus_link_phydev(struct mii_bus *mdio, 197 + struct phy_device *phydev) 198 + { 199 + struct device *dev = &phydev->dev; 200 + struct device_node *child; 201 + 202 + if (dev->of_node || !mdio->dev.of_node) 203 + return; 204 + 205 + for_each_available_child_of_node(mdio->dev.of_node, child) { 206 + int addr; 207 + int ret; 208 + 209 + ret = of_property_read_u32(child, "reg", &addr); 210 + if (ret < 0) { 211 + dev_err(dev, "%s has invalid PHY address\n", 212 + child->full_name); 213 + continue; 214 + } 215 + 216 + /* A PHY must have a reg property in the range [0-31] */ 217 + if (addr >= PHY_MAX_ADDR) { 218 + dev_err(dev, "%s PHY address %i is too large\n", 219 + child->full_name, addr); 220 + continue; 221 + } 222 + 223 + if (addr == phydev->addr) { 224 + dev->of_node = child; 225 + return; 226 + } 227 + } 228 + } 229 + #else /* !IS_ENABLED(CONFIG_OF_MDIO) */ 230 + static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, 231 + struct phy_device *phydev) 232 + { 233 + } 190 234 #endif 191 235 192 236 /**
+1 -7
drivers/net/ppp/ppp_generic.c
··· 539 539 { 540 540 struct sock_fprog uprog; 541 541 struct sock_filter *code = NULL; 542 - int len, err; 542 + int len; 543 543 544 544 if (copy_from_user(&uprog, arg, sizeof(uprog))) 545 545 return -EFAULT; ··· 553 553 code = memdup_user(uprog.filter, len); 554 554 if (IS_ERR(code)) 555 555 return PTR_ERR(code); 556 - 557 - err = sk_chk_filter(code, uprog.len); 558 - if (err) { 559 - kfree(code); 560 - return err; 561 - } 562 556 563 557 *p = code; 564 558 return uprog.len;
+1 -1
drivers/net/ppp/pppoe.c
··· 675 675 po->chan.hdrlen = (sizeof(struct pppoe_hdr) + 676 676 dev->hard_header_len); 677 677 678 - po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr); 678 + po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2; 679 679 po->chan.private = sk; 680 680 po->chan.ops = &pppoe_chan_ops; 681 681
+20 -36
drivers/net/usb/hso.c
··· 258 258 * so as not to drop characters on the floor. 259 259 */ 260 260 int curr_rx_urb_idx; 261 - u16 curr_rx_urb_offset; 262 261 u8 rx_urb_filled[MAX_RX_URBS]; 263 262 struct tasklet_struct unthrottle_tasklet; 264 - struct work_struct retry_unthrottle_workqueue; 265 263 }; 266 264 267 265 struct hso_device { ··· 1250 1252 tasklet_hi_schedule(&serial->unthrottle_tasklet); 1251 1253 } 1252 1254 1253 - static void hso_unthrottle_workfunc(struct work_struct *work) 1254 - { 1255 - struct hso_serial *serial = 1256 - container_of(work, struct hso_serial, 1257 - retry_unthrottle_workqueue); 1258 - hso_unthrottle_tasklet(serial); 1259 - } 1260 - 1261 1255 /* open the requested serial port */ 1262 1256 static int hso_serial_open(struct tty_struct *tty, struct file *filp) 1263 1257 { ··· 1285 1295 tasklet_init(&serial->unthrottle_tasklet, 1286 1296 (void (*)(unsigned long))hso_unthrottle_tasklet, 1287 1297 (unsigned long)serial); 1288 - INIT_WORK(&serial->retry_unthrottle_workqueue, 1289 - hso_unthrottle_workfunc); 1290 1298 result = hso_start_serial_device(serial->parent, GFP_KERNEL); 1291 1299 if (result) { 1292 1300 hso_stop_serial_device(serial->parent); ··· 1333 1345 if (!usb_gone) 1334 1346 hso_stop_serial_device(serial->parent); 1335 1347 tasklet_kill(&serial->unthrottle_tasklet); 1336 - cancel_work_sync(&serial->retry_unthrottle_workqueue); 1337 1348 } 1338 1349 1339 1350 if (!usb_gone) ··· 2000 2013 static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial) 2001 2014 { 2002 2015 struct tty_struct *tty; 2003 - int write_length_remaining = 0; 2004 - int curr_write_len; 2016 + int count; 2005 2017 2006 2018 /* Sanity check */ 2007 2019 if (urb == NULL || serial == NULL) { ··· 2010 2024 2011 2025 tty = tty_port_tty_get(&serial->port); 2012 2026 2013 - /* Push data to tty */ 2014 - write_length_remaining = urb->actual_length - 2015 - serial->curr_rx_urb_offset; 2016 - D1("data to push to tty"); 2017 - while (write_length_remaining) { 2018 - if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { 2019 - tty_kref_put(tty); 2020 - return -1; 2021 - } 2022 - curr_write_len = tty_insert_flip_string(&serial->port, 2023 - urb->transfer_buffer + serial->curr_rx_urb_offset, 2024 - write_length_remaining); 2025 - serial->curr_rx_urb_offset += curr_write_len; 2026 - write_length_remaining -= curr_write_len; 2027 - tty_flip_buffer_push(&serial->port); 2027 + if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { 2028 + tty_kref_put(tty); 2029 + return -1; 2028 2030 } 2031 + 2032 + /* Push data to tty */ 2033 + D1("data to push to tty"); 2034 + count = tty_buffer_request_room(&serial->port, urb->actual_length); 2035 + if (count >= urb->actual_length) { 2036 + tty_insert_flip_string(&serial->port, urb->transfer_buffer, 2037 + urb->actual_length); 2038 + tty_flip_buffer_push(&serial->port); 2039 + } else { 2040 + dev_warn(&serial->parent->usb->dev, 2041 + "dropping data, %d bytes lost\n", urb->actual_length); 2042 + } 2043 + 2029 2044 tty_kref_put(tty); 2030 2045 2031 - if (write_length_remaining == 0) { 2032 - serial->curr_rx_urb_offset = 0; 2033 - serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; 2034 - } 2035 - return write_length_remaining; 2046 + serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; 2047 + 2048 + return 0; 2036 2049 } 2037 2050 2038 2051 ··· 2202 2217 } 2203 2218 } 2204 2219 serial->curr_rx_urb_idx = 0; 2205 - serial->curr_rx_urb_offset = 0; 2206 2220 2207 2221 if (serial->tx_urb) 2208 2222 usb_kill_urb(serial->tx_urb);
+1
drivers/net/usb/qmi_wwan.c
··· 741 741 {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, 742 742 {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, 743 743 {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ 744 + {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ 744 745 {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ 745 746 {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ 746 747 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */
+6 -1
drivers/net/usb/r8152.c
··· 1367 1367 struct sk_buff_head seg_list; 1368 1368 struct sk_buff *segs, *nskb; 1369 1369 1370 - features &= ~(NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO); 1370 + features &= ~(NETIF_F_SG | NETIF_F_IPV6_CSUM | NETIF_F_TSO6); 1371 1371 segs = skb_gso_segment(skb, features); 1372 1372 if (IS_ERR(segs) || !segs) 1373 1373 goto drop; ··· 3213 3213 struct r8152 *tp = netdev_priv(dev); 3214 3214 struct tally_counter tally; 3215 3215 3216 + if (usb_autopm_get_interface(tp->intf) < 0) 3217 + return; 3218 + 3216 3219 generic_ocp_read(tp, PLA_TALLYCNT, sizeof(tally), &tally, MCU_TYPE_PLA); 3220 + 3221 + usb_autopm_put_interface(tp->intf); 3217 3222 3218 3223 data[0] = le64_to_cpu(tally.tx_packets); 3219 3224 data[1] = le64_to_cpu(tally.rx_packets);
+13 -1
drivers/net/usb/smsc95xx.c
··· 1714 1714 return ret; 1715 1715 } 1716 1716 1717 + static int smsc95xx_reset_resume(struct usb_interface *intf) 1718 + { 1719 + struct usbnet *dev = usb_get_intfdata(intf); 1720 + int ret; 1721 + 1722 + ret = smsc95xx_reset(dev); 1723 + if (ret < 0) 1724 + return ret; 1725 + 1726 + return smsc95xx_resume(intf); 1727 + } 1728 + 1717 1729 static void smsc95xx_rx_csum_offload(struct sk_buff *skb) 1718 1730 { 1719 1731 skb->csum = *(u16 *)(skb_tail_pointer(skb) - 2); ··· 2016 2004 .probe = usbnet_probe, 2017 2005 .suspend = smsc95xx_suspend, 2018 2006 .resume = smsc95xx_resume, 2019 - .reset_resume = smsc95xx_resume, 2007 + .reset_resume = smsc95xx_reset_resume, 2020 2008 .disconnect = usbnet_disconnect, 2021 2009 .disable_hub_initiated_lpm = 1, 2022 2010 .supports_autosuspend = 1,
+58 -54
drivers/net/wan/farsync.c
··· 2363 2363 "FarSync TE1" 2364 2364 }; 2365 2365 2366 - static void 2366 + static int 2367 2367 fst_init_card(struct fst_card_info *card) 2368 2368 { 2369 2369 int i; ··· 2374 2374 * we'll have to revise it in some way then. 2375 2375 */ 2376 2376 for (i = 0; i < card->nports; i++) { 2377 - err = register_hdlc_device(card->ports[i].dev); 2378 - if (err < 0) { 2379 - int j; 2377 + err = register_hdlc_device(card->ports[i].dev); 2378 + if (err < 0) { 2380 2379 pr_err("Cannot register HDLC device for port %d (errno %d)\n", 2381 - i, -err); 2382 - for (j = i; j < card->nports; j++) { 2383 - free_netdev(card->ports[j].dev); 2384 - card->ports[j].dev = NULL; 2385 - } 2386 - card->nports = i; 2387 - break; 2388 - } 2380 + i, -err); 2381 + while (i--) 2382 + unregister_hdlc_device(card->ports[i].dev); 2383 + return err; 2384 + } 2389 2385 } 2390 2386 2391 2387 pr_info("%s-%s: %s IRQ%d, %d ports\n", 2392 2388 port_to_dev(&card->ports[0])->name, 2393 2389 port_to_dev(&card->ports[card->nports - 1])->name, 2394 2390 type_strings[card->type], card->irq, card->nports); 2391 + return 0; 2395 2392 } 2396 2393 2397 2394 static const struct net_device_ops fst_ops = { ··· 2444 2447 /* Try to enable the device */ 2445 2448 if ((err = pci_enable_device(pdev)) != 0) { 2446 2449 pr_err("Failed to enable card. Err %d\n", -err); 2447 - kfree(card); 2448 - return err; 2450 + goto enable_fail; 2449 2451 } 2450 2452 2451 2453 if ((err = pci_request_regions(pdev, "FarSync")) !=0) { 2452 2454 pr_err("Failed to allocate regions. Err %d\n", -err); 2453 - pci_disable_device(pdev); 2454 - kfree(card); 2455 - return err; 2455 + goto regions_fail; 2456 2456 } 2457 2457 2458 2458 /* Get virtual addresses of memory regions */ ··· 2458 2464 card->phys_ctlmem = pci_resource_start(pdev, 3); 2459 2465 if ((card->mem = ioremap(card->phys_mem, FST_MEMSIZE)) == NULL) { 2460 2466 pr_err("Physical memory remap failed\n"); 2461 - pci_release_regions(pdev); 2462 - pci_disable_device(pdev); 2463 - kfree(card); 2464 - return -ENODEV; 2467 + err = -ENODEV; 2468 + goto ioremap_physmem_fail; 2465 2469 } 2466 2470 if ((card->ctlmem = ioremap(card->phys_ctlmem, 0x10)) == NULL) { 2467 2471 pr_err("Control memory remap failed\n"); 2468 - pci_release_regions(pdev); 2469 - pci_disable_device(pdev); 2470 - iounmap(card->mem); 2471 - kfree(card); 2472 - return -ENODEV; 2472 + err = -ENODEV; 2473 + goto ioremap_ctlmem_fail; 2473 2474 } 2474 2475 dbg(DBG_PCI, "kernel mem %p, ctlmem %p\n", card->mem, card->ctlmem); 2475 2476 2476 2477 /* Register the interrupt handler */ 2477 2478 if (request_irq(pdev->irq, fst_intr, IRQF_SHARED, FST_DEV_NAME, card)) { 2478 2479 pr_err("Unable to register interrupt %d\n", card->irq); 2479 - pci_release_regions(pdev); 2480 - pci_disable_device(pdev); 2481 - iounmap(card->ctlmem); 2482 - iounmap(card->mem); 2483 - kfree(card); 2484 - return -ENODEV; 2480 + err = -ENODEV; 2481 + goto irq_fail; 2485 2482 } 2486 2483 2487 2484 /* Record info we need */ ··· 2498 2513 while (i--) 2499 2514 free_netdev(card->ports[i].dev); 2500 2515 pr_err("FarSync: out of memory\n"); 2501 - free_irq(card->irq, card); 2502 - pci_release_regions(pdev); 2503 - pci_disable_device(pdev); 2504 - iounmap(card->ctlmem); 2505 - iounmap(card->mem); 2506 - kfree(card); 2507 - return -ENODEV; 2516 + err = -ENOMEM; 2517 + goto hdlcdev_fail; 2508 2518 } 2509 2519 card->ports[i].dev = dev; 2510 2520 card->ports[i].card = card; ··· 2545 2565 pci_set_drvdata(pdev, card); 2546 2566 2547 2567 /* Remainder of card setup */ 2568 + if (no_of_cards_added >= FST_MAX_CARDS) { 2569 + pr_err("FarSync: too many cards\n"); 2570 + err = -ENOMEM; 2571 + goto card_array_fail; 2572 + } 2548 2573 fst_card_array[no_of_cards_added] = card; 2549 2574 card->card_no = no_of_cards_added++; /* Record instance and bump it */ 2550 - fst_init_card(card); 2575 + err = fst_init_card(card); 2576 + if (err) 2577 + goto init_card_fail; 2551 2578 if (card->family == FST_FAMILY_TXU) { 2552 2579 /* 2553 2580 * Allocate a dma buffer for transmit and receives ··· 2564 2577 &card->rx_dma_handle_card); 2565 2578 if (card->rx_dma_handle_host == NULL) { 2566 2579 pr_err("Could not allocate rx dma buffer\n"); 2567 - fst_disable_intr(card); 2568 - pci_release_regions(pdev); 2569 - pci_disable_device(pdev); 2570 - iounmap(card->ctlmem); 2571 - iounmap(card->mem); 2572 - kfree(card); 2573 - return -ENOMEM; 2580 + err = -ENOMEM; 2581 + goto rx_dma_fail; 2574 2582 } 2575 2583 card->tx_dma_handle_host = 2576 2584 pci_alloc_consistent(card->device, FST_MAX_MTU, 2577 2585 &card->tx_dma_handle_card); 2578 2586 if (card->tx_dma_handle_host == NULL) { 2579 2587 pr_err("Could not allocate tx dma buffer\n"); 2580 - fst_disable_intr(card); 2581 - pci_release_regions(pdev); 2582 - pci_disable_device(pdev); 2583 - iounmap(card->ctlmem); 2584 - iounmap(card->mem); 2585 - kfree(card); 2586 - return -ENOMEM; 2588 + err = -ENOMEM; 2589 + goto tx_dma_fail; 2587 2590 } 2588 2591 } 2589 2592 return 0; /* Success */ 2593 + 2594 + tx_dma_fail: 2595 + pci_free_consistent(card->device, FST_MAX_MTU, 2596 + card->rx_dma_handle_host, 2597 + card->rx_dma_handle_card); 2598 + rx_dma_fail: 2599 + fst_disable_intr(card); 2600 + for (i = 0 ; i < card->nports ; i++) 2601 + unregister_hdlc_device(card->ports[i].dev); 2602 + init_card_fail: 2603 + fst_card_array[card->card_no] = NULL; 2604 + card_array_fail: 2605 + for (i = 0 ; i < card->nports ; i++) 2606 + free_netdev(card->ports[i].dev); 2607 + hdlcdev_fail: 2608 + free_irq(card->irq, card); 2609 + irq_fail: 2610 + iounmap(card->ctlmem); 2611 + ioremap_ctlmem_fail: 2612 + iounmap(card->mem); 2613 + ioremap_physmem_fail: 2614 + pci_release_regions(pdev); 2615 + regions_fail: 2616 + pci_disable_device(pdev); 2617 + enable_fail: 2618 + kfree(card); 2619 + return err; 2590 2620 } 2591 2621 2592 2622 /*
+16 -11
drivers/net/xen-netfront.c
··· 1439 1439 unsigned int i = 0; 1440 1440 unsigned int num_queues = info->netdev->real_num_tx_queues; 1441 1441 1442 + netif_carrier_off(info->netdev); 1443 + 1442 1444 for (i = 0; i < num_queues; ++i) { 1443 1445 struct netfront_queue *queue = &info->queues[i]; 1444 - 1445 - /* Stop old i/f to prevent errors whilst we rebuild the state. */ 1446 - spin_lock_bh(&queue->rx_lock); 1447 - spin_lock_irq(&queue->tx_lock); 1448 - netif_carrier_off(queue->info->netdev); 1449 - spin_unlock_irq(&queue->tx_lock); 1450 - spin_unlock_bh(&queue->rx_lock); 1451 1446 1452 1447 if (queue->tx_irq && (queue->tx_irq == queue->rx_irq)) 1453 1448 unbind_from_irqhandler(queue->tx_irq, queue); ··· 1452 1457 } 1453 1458 queue->tx_evtchn = queue->rx_evtchn = 0; 1454 1459 queue->tx_irq = queue->rx_irq = 0; 1460 + 1461 + napi_synchronize(&queue->napi); 1455 1462 1456 1463 /* End access and free the pages */ 1457 1464 xennet_end_access(queue->tx_ring_ref, queue->tx.sring); ··· 2043 2046 /* By now, the queue structures have been set up */ 2044 2047 for (j = 0; j < num_queues; ++j) { 2045 2048 queue = &np->queues[j]; 2046 - spin_lock_bh(&queue->rx_lock); 2047 - spin_lock_irq(&queue->tx_lock); 2048 2049 2049 2050 /* Step 1: Discard all pending TX packet fragments. */ 2051 + spin_lock_irq(&queue->tx_lock); 2050 2052 xennet_release_tx_bufs(queue); 2053 + spin_unlock_irq(&queue->tx_lock); 2051 2054 2052 2055 /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */ 2056 + spin_lock_bh(&queue->rx_lock); 2057 + 2053 2058 for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) { 2054 2059 skb_frag_t *frag; 2055 2060 const struct page *page; ··· 2075 2076 } 2076 2077 queue->rx.req_prod_pvt = requeue_idx; 2079 + 2080 + spin_unlock_bh(&queue->rx_lock); 2078 2081 } 2079 2082 2080 2083 /* ··· 2088 2087 netif_carrier_on(np->netdev); 2089 2088 for (j = 0; j < num_queues; ++j) { 2090 2089 queue = &np->queues[j]; 2090 + 2091 2091 notify_remote_via_irq(queue->tx_irq); 2092 2092 if (queue->tx_irq != queue->rx_irq) 2093 2093 notify_remote_via_irq(queue->rx_irq); 2094 - xennet_tx_buf_gc(queue); 2095 - xennet_alloc_rx_buffers(queue); 2096 2094 2095 + spin_lock_irq(&queue->tx_lock); 2096 + xennet_tx_buf_gc(queue); 2097 2097 spin_unlock_irq(&queue->tx_lock); 2098 + 2099 + spin_lock_bh(&queue->rx_lock); 2100 + xennet_alloc_rx_buffers(queue); 2098 2101 spin_unlock_bh(&queue->rx_lock); 2099 2102 } 2100 2103
+15
drivers/of/fdt.c
··· 880 880 const u64 phys_offset = __pa(PAGE_OFFSET); 881 881 base &= PAGE_MASK; 882 882 size &= PAGE_MASK; 883 + 884 + if (sizeof(phys_addr_t) < sizeof(u64)) { 885 + if (base > ULONG_MAX) { 886 + pr_warning("Ignoring memory block 0x%llx - 0x%llx\n", 887 + base, base + size); 888 + return; 889 + } 890 + 891 + if (base + size > ULONG_MAX) { 892 + pr_warning("Ignoring memory range 0x%lx - 0x%llx\n", 893 + ULONG_MAX, base + size); 894 + size = ULONG_MAX - base; 895 + } 896 + } 897 + 883 898 if (base + size < phys_offset) { 884 899 pr_warning("Ignoring memory block 0x%llx - 0x%llx\n", 885 900 base, base + size);
-34
drivers/of/of_mdio.c
··· 182 182 } 183 183 EXPORT_SYMBOL(of_mdiobus_register); 184 184 185 - /** 186 - * of_mdiobus_link_phydev - Find a device node for a phy 187 - * @mdio: pointer to mii_bus structure 188 - * @phydev: phydev for which the of_node pointer should be set 189 - * 190 - * Walk the list of subnodes of a mdio bus and look for a node that matches the 191 - * phy's address with its 'reg' property. If found, set the of_node pointer for 192 - * the phy. This allows auto-probed pyh devices to be supplied with information 193 - * passed in via DT. 194 - */ 195 - void of_mdiobus_link_phydev(struct mii_bus *mdio, 196 - struct phy_device *phydev) 197 - { 198 - struct device *dev = &phydev->dev; 199 - struct device_node *child; 200 - 201 - if (dev->of_node || !mdio->dev.of_node) 202 - return; 203 - 204 - for_each_available_child_of_node(mdio->dev.of_node, child) { 205 - int addr; 206 - 207 - addr = of_mdio_parse_addr(&mdio->dev, child); 208 - if (addr < 0) 209 - continue; 210 - 211 - if (addr == phydev->addr) { 212 - dev->of_node = child; 213 - return; 214 - } 215 - } 216 - } 217 - EXPORT_SYMBOL(of_mdiobus_link_phydev); 218 - 219 185 /* Helper function for of_phy_find_device */ 220 186 static int of_phy_match(struct device *dev, void *phy_np) 221 187 {
+7 -2
drivers/pci/pci.c
··· 3135 3135 if (probe) 3136 3136 return 0; 3137 3137 3138 - /* Wait for Transaction Pending bit clean */ 3139 - if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP)) 3138 + /* 3139 + * Wait for Transaction Pending bit to clear. A word-aligned test 3140 + * is used, so we use the control offset rather than status and shift 3141 + * the test bit to match. 3142 + */ 3143 + if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL, 3144 + PCI_AF_STATUS_TP << 8)) 3140 3145 goto clear; 3141 3146 3142 3147 dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");
+2
drivers/phy/Kconfig
··· 112 112 config PHY_SUN4I_USB 113 113 tristate "Allwinner sunxi SoC USB PHY driver" 114 114 depends on ARCH_SUNXI && HAS_IOMEM && OF 115 + depends on RESET_CONTROLLER 115 116 select GENERIC_PHY 116 117 help 117 118 Enable this to support the transceiver that is part of Allwinner ··· 123 122 124 123 config PHY_SAMSUNG_USB2 125 124 tristate "Samsung USB 2.0 PHY driver" 125 + depends on HAS_IOMEM 126 126 select GENERIC_PHY 127 127 select MFD_SYSCON 128 128 help
+4 -3
drivers/phy/phy-core.c
··· 614 614 return phy; 615 615 616 616 put_dev: 617 - put_device(&phy->dev); 618 - ida_remove(&phy_ida, phy->id); 617 + put_device(&phy->dev); /* calls phy_release() which frees resources */ 618 + return ERR_PTR(ret); 619 + 619 620 free_phy: 620 621 kfree(phy); 621 622 return ERR_PTR(ret); ··· 800 799 801 800 phy = to_phy(dev); 802 801 dev_vdbg(dev, "releasing '%s'\n", dev_name(dev)); 803 - ida_remove(&phy_ida, phy->id); 802 + ida_simple_remove(&phy_ida, phy->id); 804 803 kfree(phy); 805 804 } 806 805
+7 -4
drivers/phy/phy-omap-usb2.c
··· 233 233 if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { 234 234 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 235 235 phy->phy_base = devm_ioremap_resource(&pdev->dev, res); 236 - if (!phy->phy_base) 237 - return -ENOMEM; 236 + if (IS_ERR(phy->phy_base)) 237 + return PTR_ERR(phy->phy_base); 238 238 phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT; 239 239 } 240 240 ··· 262 262 otg->phy = &phy->phy; 263 263 264 264 platform_set_drvdata(pdev, phy); 265 - pm_runtime_enable(phy->dev); 266 265 267 266 generic_phy = devm_phy_create(phy->dev, &ops, NULL); 268 267 if (IS_ERR(generic_phy)) ··· 269 270 270 271 phy_set_drvdata(generic_phy, phy); 271 272 273 + pm_runtime_enable(phy->dev); 272 274 phy_provider = devm_of_phy_provider_register(phy->dev, 273 275 of_phy_simple_xlate); 274 - if (IS_ERR(phy_provider)) 276 + if (IS_ERR(phy_provider)) { 277 + pm_runtime_disable(phy->dev); 275 278 return PTR_ERR(phy_provider); 279 + } 276 280 277 281 phy->wkupclk = devm_clk_get(phy->dev, "wkupclk"); 278 282 if (IS_ERR(phy->wkupclk)) { ··· 319 317 if (!IS_ERR(phy->optclk)) 320 318 clk_unprepare(phy->optclk); 321 319 usb_remove_phy(&phy->phy); 320 + pm_runtime_disable(phy->dev); 322 321 323 322 return 0; 324 323 }
+1
drivers/phy/phy-samsung-usb2.c
··· 107 107 #endif 108 108 { }, 109 109 }; 110 + MODULE_DEVICE_TABLE(of, samsung_usb2_phy_of_match); 110 111 111 112 static int samsung_usb2_phy_probe(struct platform_device *pdev) 112 113 {
+1 -1
drivers/pinctrl/berlin/berlin.c
··· 320 320 321 321 regmap = dev_get_regmap(&pdev->dev, NULL); 322 322 if (!regmap) 323 - return PTR_ERR(regmap); 323 + return -ENODEV; 324 324 325 325 pctrl = devm_kzalloc(dev, sizeof(*pctrl), GFP_KERNEL); 326 326 if (!pctrl)
+4
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 211 211 configlen++; 212 212 213 213 pinconfig = kzalloc(configlen * sizeof(*pinconfig), GFP_KERNEL); 214 + if (!pinconfig) { 215 + kfree(*map); 216 + return -ENOMEM; 217 + } 214 218 215 219 if (!of_property_read_u32(node, "allwinner,drive", &val)) { 216 220 u16 strength = (val + 1) * 10;
+5
drivers/regulator/bcm590xx-regulator.c
··· 119 119 2900000, 3000000, 3300000, 120 120 }; 121 121 122 + static const unsigned int ldo_vbus[] = { 123 + 5000000, 124 + }; 125 + 122 126 /* DCDC group CSR: supported voltages in microvolts */ 123 127 static const struct regulator_linear_range dcdc_csr_ranges[] = { 124 128 REGULATOR_LINEAR_RANGE(860000, 2, 50, 10000), ··· 196 192 BCM590XX_REG_TABLE(gpldo4, ldo_a_table), 197 193 BCM590XX_REG_TABLE(gpldo5, ldo_a_table), 198 194 BCM590XX_REG_TABLE(gpldo6, ldo_a_table), 195 + BCM590XX_REG_TABLE(vbus, ldo_vbus), 199 196 }; 200 197 201 198 struct bcm590xx_reg {
+12
drivers/regulator/palmas-regulator.c
··· 325 325 if (rail_enable) 326 326 palmas_smps_write(pmic->palmas, 327 327 palmas_regs_info[id].ctrl_addr, reg); 328 + 329 + /* Switch the enable value to ensure this is used for enable */ 330 + pmic->desc[id].enable_val = pmic->current_reg_mode[id]; 331 + 328 332 return 0; 329 333 } 330 334 ··· 968 964 return ret; 969 965 pmic->current_reg_mode[id] = reg & 970 966 PALMAS_SMPS12_CTRL_MODE_ACTIVE_MASK; 967 + 968 + pmic->desc[id].enable_reg = 969 + PALMAS_BASE_TO_REG(PALMAS_SMPS_BASE, 970 + palmas_regs_info[id].ctrl_addr); 971 + pmic->desc[id].enable_mask = 972 + PALMAS_SMPS12_CTRL_MODE_ACTIVE_MASK; 973 + /* set_mode overrides this value */ 974 + pmic->desc[id].enable_val = SMPS_CTRL_MODE_ON; 971 975 } 972 976 973 977 pmic->desc[id].type = REGULATOR_VOLTAGE;
+2 -1
drivers/regulator/tps65218-regulator.c
··· 209 209 1, -1, -1, TPS65218_REG_ENABLE1, 210 210 TPS65218_ENABLE1_DC6_EN, NULL, NULL, 0, 0), 211 211 TPS65218_REGULATOR("LDO1", TPS65218_LDO_1, tps65218_ldo1_dcdc34_ops, 64, 212 - TPS65218_REG_CONTROL_DCDC4, 212 + TPS65218_REG_CONTROL_LDO1, 213 213 TPS65218_CONTROL_LDO1_MASK, TPS65218_REG_ENABLE2, 214 214 TPS65218_ENABLE2_LDO1_EN, NULL, ldo1_dcdc3_ranges, 215 215 2, 0), ··· 240 240 config.init_data = init_data; 241 241 config.driver_data = tps; 242 242 config.regmap = tps->regmap; 243 + config.of_node = pdev->dev.of_node; 243 244 244 245 rdev = devm_regulator_register(&pdev->dev, &regulators[id], &config); 245 246 if (IS_ERR(rdev)) {
+2
drivers/scsi/be2iscsi/be_main.c
··· 4198 4198 kfree(phba->ep_array); 4199 4199 phba->ep_array = NULL; 4200 4200 ret = -ENOMEM; 4201 + 4202 + goto free_memory; 4201 4203 } 4202 4204 4203 4205 for (i = 0; i < phba->params.cxns_per_ctrl; i++) {
+1 -3
drivers/scsi/be2iscsi/be_mgmt.c
··· 1008 1008 BE2_IPV6 : BE2_IPV4 ; 1009 1009 1010 1010 rc = mgmt_get_if_info(phba, ip_type, &if_info); 1011 - if (rc) { 1012 - kfree(if_info); 1011 + if (rc) 1013 1012 return rc; 1014 - } 1015 1013 1016 1014 if (boot_proto == ISCSI_BOOTPROTO_DHCP) { 1017 1015 if (if_info->dhcp_state) {
+4 -12
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 516 516 skb_pull(skb, sizeof(struct fcoe_hdr)); 517 517 fr_len = skb->len - sizeof(struct fcoe_crc_eof); 518 518 519 - stats = per_cpu_ptr(lport->stats, get_cpu()); 520 - stats->RxFrames++; 521 - stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; 522 - 523 519 fp = (struct fc_frame *)skb; 524 520 fc_frame_init(fp); 525 521 fr_dev(fp) = lport; 526 522 fr_sof(fp) = hp->fcoe_sof; 527 523 if (skb_copy_bits(skb, fr_len, &crc_eof, sizeof(crc_eof))) { 528 - put_cpu(); 529 524 kfree_skb(skb); 530 525 return; 531 526 } 532 527 fr_eof(fp) = crc_eof.fcoe_eof; 533 528 fr_crc(fp) = crc_eof.fcoe_crc32; 534 529 if (pskb_trim(skb, fr_len)) { 535 - put_cpu(); 536 530 kfree_skb(skb); 537 531 return; 538 532 } ··· 538 544 port = lport_priv(vn_port); 539 545 if (!ether_addr_equal(port->data_src_addr, dest_mac)) { 540 546 BNX2FC_HBA_DBG(lport, "fpma mismatch\n"); 541 - put_cpu(); 542 547 kfree_skb(skb); 543 548 return; 544 549 } ··· 545 552 if (fh->fh_r_ctl == FC_RCTL_DD_SOL_DATA && 546 553 fh->fh_type == FC_TYPE_FCP) { 547 554 /* Drop FCP data. We dont this in L2 path */ 548 - put_cpu(); 549 555 kfree_skb(skb); 550 556 return; 551 557 } ··· 554 562 case ELS_LOGO: 555 563 if (ntoh24(fh->fh_s_id) == FC_FID_FLOGI) { 556 564 /* drop non-FIP LOGO */ 557 - put_cpu(); 558 565 kfree_skb(skb); 559 566 return; 560 567 } ··· 563 572 564 573 if (fh->fh_r_ctl == FC_RCTL_BA_ABTS) { 565 574 /* Drop incoming ABTS */ 566 - put_cpu(); 567 575 kfree_skb(skb); 568 576 return; 569 577 } 578 + 579 + stats = per_cpu_ptr(lport->stats, smp_processor_id()); 580 + stats->RxFrames++; 581 + stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; 570 582 571 583 if (le32_to_cpu(fr_crc(fp)) != 572 584 ~crc32(~0, skb->data, fr_len)) { ··· 577 583 printk(KERN_WARNING PFX "dropping frame with " 578 584 "CRC error\n"); 579 585 stats->InvalidCRCCount++; 580 - put_cpu(); 581 586 kfree_skb(skb); 582 587 return; 583 588 } 584 - put_cpu(); 585 589 fc_exch_recv(lport, fp); 586 590 } 587 591
+2
drivers/scsi/bnx2fc/bnx2fc_io.c
··· 282 282 arr_sz, GFP_KERNEL); 283 283 if (!cmgr->free_list_lock) { 284 284 printk(KERN_ERR PFX "failed to alloc free_list_lock\n"); 285 + kfree(cmgr->free_list); 286 + cmgr->free_list = NULL; 285 287 goto mem_err; 286 288 } 287 289
+12 -1
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 185 185 if (crq->valid & 0x80) { 186 186 if (++queue->cur == queue->size) 187 187 queue->cur = 0; 188 + 189 + /* Ensure the read of the valid bit occurs before reading any 190 + * other bits of the CRQ entry 191 + */ 192 + rmb(); 188 193 } else 189 194 crq = NULL; 190 195 spin_unlock_irqrestore(&queue->lock, flags); ··· 208 203 { 209 204 struct vio_dev *vdev = to_vio_dev(hostdata->dev); 210 205 206 + /* 207 + * Ensure the command buffer is flushed to memory before handing it 208 + * over to the VIOS to prevent it from fetching any stale data. 209 + */ 210 + mb(); 211 211 return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, word1, word2); 212 212 } 213 213 ··· 807 797 evt->hostdata->dev); 808 798 if (evt->cmnd_done) 809 799 evt->cmnd_done(evt->cmnd); 810 - } else if (evt->done) 800 + } else if (evt->done && evt->crq.format != VIOSRP_MAD_FORMAT && 801 + evt->iu.srp.login_req.opcode != SRP_LOGIN_REQ) 811 802 evt->done(evt); 812 803 free_event_struct(&evt->hostdata->pool, evt); 813 804 spin_lock_irqsave(hostdata->host->host_lock, flags);
+10 -3
drivers/scsi/pm8001/pm8001_init.c
··· 677 677 * pm8001_get_phy_settings_info : Read phy setting values. 678 678 * @pm8001_ha : our hba. 679 679 */ 680 - void pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha) 680 + static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha) 681 681 { 682 682 683 683 #ifdef PM8001_READ_VPD ··· 691 691 payload.offset = 0; 692 692 payload.length = 4096; 693 693 payload.func_specific = kzalloc(4096, GFP_KERNEL); 694 + if (!payload.func_specific) 695 + return -ENOMEM; 694 696 /* Read phy setting values from flash */ 695 697 PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload); 696 698 wait_for_completion(&completion); 697 699 pm8001_set_phy_profile(pm8001_ha, sizeof(u8), payload.func_specific); 700 + kfree(payload.func_specific); 698 701 #endif 702 + return 0; 699 703 } 700 704 701 705 #ifdef PM8001_USE_MSIX ··· 883 879 pm8001_init_sas_add(pm8001_ha); 884 880 /* phy setting support for motherboard controller */ 885 881 if (pdev->subsystem_vendor != PCI_VENDOR_ID_ADAPTEC2 && 886 - pdev->subsystem_vendor != 0) 887 - pm8001_get_phy_settings_info(pm8001_ha); 882 + pdev->subsystem_vendor != 0) { 883 + rc = pm8001_get_phy_settings_info(pm8001_ha); 884 + if (rc) 885 + goto err_out_shost; 886 + } 888 887 pm8001_post_sas_ha_init(shost, chip); 889 888 rc = sas_register_ha(SHOST_TO_SAS_HA(shost)); 890 889 if (rc)
+11 -6
drivers/scsi/qla2xxx/qla_target.c
··· 1128 1128 ctio->u.status1.flags = 1129 1129 __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 1130 1130 CTIO7_FLAGS_TERMINATE); 1131 - ctio->u.status1.ox_id = entry->fcp_hdr_le.ox_id; 1131 + ctio->u.status1.ox_id = cpu_to_le16(entry->fcp_hdr_le.ox_id); 1132 1132 1133 1133 qla2x00_start_iocbs(vha, vha->req); 1134 1134 ··· 1262 1262 { 1263 1263 struct atio_from_isp *atio = &mcmd->orig_iocb.atio; 1264 1264 struct ctio7_to_24xx *ctio; 1265 + uint16_t temp; 1265 1266 1266 1267 ql_dbg(ql_dbg_tgt, ha, 0xe008, 1267 1268 "Sending task mgmt CTIO7 (ha=%p, atio=%p, resp_code=%x\n", ··· 1293 1292 ctio->u.status1.flags = (atio->u.isp24.attr << 9) | 1294 1293 __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 1295 1294 CTIO7_FLAGS_SEND_STATUS); 1296 - ctio->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id); 1295 + temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); 1296 + ctio->u.status1.ox_id = cpu_to_le16(temp); 1297 1297 ctio->u.status1.scsi_status = 1298 1298 __constant_cpu_to_le16(SS_RESPONSE_INFO_LEN_VALID); 1299 1299 ctio->u.status1.response_len = __constant_cpu_to_le16(8); ··· 1515 1513 struct ctio7_to_24xx *pkt; 1516 1514 struct qla_hw_data *ha = vha->hw; 1517 1515 struct atio_from_isp *atio = &prm->cmd->atio; 1516 + uint16_t temp; 1518 1517 1519 1518 pkt = (struct ctio7_to_24xx *)vha->req->ring_ptr; 1520 1519 prm->pkt = pkt; ··· 1544 1541 pkt->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; 1545 1542 pkt->exchange_addr = atio->u.isp24.exchange_addr; 1546 1543 pkt->u.status0.flags |= (atio->u.isp24.attr << 9); 1547 - pkt->u.status0.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id); 1544 + temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); 1545 + pkt->u.status0.ox_id = cpu_to_le16(temp); 1548 1546 pkt->u.status0.relative_offset = cpu_to_le32(prm->cmd->offset); 1549 1547 1550 1548 ql_dbg(ql_dbg_tgt, vha, 0xe00c, 1551 1549 "qla_target(%d): handle(cmd) -> %08x, timeout %d, ox_id %#x\n", 1552 - vha->vp_idx, pkt->handle, QLA_TGT_TIMEOUT, 1553 - le16_to_cpu(pkt->u.status0.ox_id)); 1550 + vha->vp_idx, pkt->handle, QLA_TGT_TIMEOUT, temp); 1554 1551 return 0; 1555 1552 } ··· 2622 2619 struct qla_hw_data *ha = vha->hw; 2623 2620 request_t *pkt; 2624 2621 int ret = 0; 2622 + uint16_t temp; 2625 2623 2626 2624 ql_dbg(ql_dbg_tgt, vha, 0xe01c, "Sending TERM EXCH CTIO (ha=%p)\n", ha); 2627 2625 ··· 2659 2655 ctio24->u.status1.flags = (atio->u.isp24.attr << 9) | 2660 2656 __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 2661 2657 CTIO7_FLAGS_TERMINATE); 2662 - ctio24->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id); 2658 + temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); 2659 + ctio24->u.status1.ox_id = cpu_to_le16(temp); 2663 2660 2664 2661 /* Most likely, it isn't needed */ 2665 2662 ctio24->u.status1.residual = get_unaligned((uint32_t *)
+2 -2
drivers/scsi/qla2xxx/qla_target.h
··· 443 443 uint16_t reserved1; 444 444 __le16 flags; 445 445 uint32_t residual; 446 - uint16_t ox_id; 446 + __le16 ox_id; 447 447 uint16_t scsi_status; 448 448 uint32_t relative_offset; 449 449 uint32_t reserved2; ··· 458 458 uint16_t sense_length; 459 459 uint16_t flags; 460 460 uint32_t residual; 461 - uint16_t ox_id; 461 + __le16 ox_id; 462 462 uint16_t scsi_status; 463 463 uint16_t response_len; 464 464 uint16_t reserved;
+10 -10
drivers/scsi/scsi_error.c
··· 131 131 "aborting command %p\n", scmd)); 132 132 rtn = scsi_try_to_abort_cmd(sdev->host->hostt, scmd); 133 133 if (rtn == SUCCESS) { 134 - scmd->result |= DID_TIME_OUT << 16; 134 + set_host_byte(scmd, DID_TIME_OUT); 135 135 if (scsi_host_eh_past_deadline(sdev->host)) { 136 136 SCSI_LOG_ERROR_RECOVERY(3, 137 137 scmd_printk(KERN_INFO, scmd, ··· 167 167 scmd_printk(KERN_WARNING, scmd, 168 168 "scmd %p terminate " 169 169 "aborted command\n", scmd)); 170 - scmd->result |= DID_TIME_OUT << 16; 170 + set_host_byte(scmd, DID_TIME_OUT); 171 171 scsi_finish_command(scmd); 172 172 } 173 173 } ··· 287 287 else if (host->hostt->eh_timed_out) 288 288 rtn = host->hostt->eh_timed_out(scmd); 289 289 290 - if (rtn == BLK_EH_NOT_HANDLED && !host->hostt->no_async_abort) 291 - if (scsi_abort_command(scmd) == SUCCESS) 290 + if (rtn == BLK_EH_NOT_HANDLED) { 291 + if (!host->hostt->no_async_abort && 292 + scsi_abort_command(scmd) == SUCCESS) 292 293 return BLK_EH_NOT_HANDLED; 293 294 294 - scmd->result |= DID_TIME_OUT << 16; 295 - 296 - if (unlikely(rtn == BLK_EH_NOT_HANDLED && 297 - !scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD))) 298 - rtn = BLK_EH_HANDLED; 295 + set_host_byte(scmd, DID_TIME_OUT); 296 + if (!scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD)) 297 + rtn = BLK_EH_HANDLED; 298 + } 299 299 300 300 return rtn; 301 301 } ··· 1777 1777 break; 1778 1778 case DID_ABORT: 1779 1779 if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) { 1780 - scmd->result |= DID_TIME_OUT << 16; 1780 + set_host_byte(scmd, DID_TIME_OUT); 1781 1781 return SUCCESS; 1782 1782 } 1783 1783 case DID_NO_CONNECT:
+1
drivers/scsi/scsi_transport_fc.c
··· 2549 2549 fc_flush_devloss(shost); 2550 2550 if (!cancel_delayed_work(&rport->dev_loss_work)) 2551 2551 fc_flush_devloss(shost); 2552 + cancel_work_sync(&rport->scan_work); 2552 2553 spin_lock_irqsave(shost->host_lock, flags); 2553 2554 rport->flags &= ~FC_RPORT_DEVLOSS_PENDING; 2554 2555 }
+4 -1
drivers/scsi/sd.c
··· 2441 2441 } 2442 2442 2443 2443 sdkp->DPOFUA = (data.device_specific & 0x10) != 0; 2444 - if (sdkp->DPOFUA && !sdkp->device->use_10_for_rw) { 2444 + if (sdp->broken_fua) { 2445 + sd_first_printk(KERN_NOTICE, sdkp, "Disabling FUA\n"); 2446 + sdkp->DPOFUA = 0; 2447 + } else if (sdkp->DPOFUA && !sdkp->device->use_10_for_rw) { 2445 2448 sd_first_printk(KERN_NOTICE, sdkp, 2446 2449 "Uses READ/WRITE(6), disabling FUA\n"); 2447 2450 sdkp->DPOFUA = 0;
+25 -1
drivers/scsi/virtio_scsi.c
··· 237 237 virtscsi_vq_done(vscsi, req_vq, virtscsi_complete_cmd); 238 238 }; 239 239 240 + static void virtscsi_poll_requests(struct virtio_scsi *vscsi) 241 + { 242 + int i, num_vqs; 243 + 244 + num_vqs = vscsi->num_queues; 245 + for (i = 0; i < num_vqs; i++) 246 + virtscsi_vq_done(vscsi, &vscsi->req_vqs[i], 247 + virtscsi_complete_cmd); 248 + } 249 + 240 250 static void virtscsi_complete_free(struct virtio_scsi *vscsi, void *buf) 241 251 { 242 252 struct virtio_scsi_cmd *cmd = buf; ··· 263 253 virtscsi_vq_done(vscsi, &vscsi->ctrl_vq, virtscsi_complete_free); 264 254 }; 265 255 256 + static void virtscsi_handle_event(struct work_struct *work); 257 + 266 258 static int virtscsi_kick_event(struct virtio_scsi *vscsi, 267 259 struct virtio_scsi_event_node *event_node) 268 260 { ··· 272 260 struct scatterlist sg; 273 261 unsigned long flags; 274 262 263 + INIT_WORK(&event_node->work, virtscsi_handle_event); 275 264 sg_init_one(&sg, &event_node->event, sizeof(struct virtio_scsi_event)); 276 265 277 266 spin_lock_irqsave(&vscsi->event_vq.vq_lock, flags); ··· 390 377 { 391 378 struct virtio_scsi_event_node *event_node = buf; 392 379 393 - INIT_WORK(&event_node->work, virtscsi_handle_event); 394 380 schedule_work(&event_node->work); 395 381 } 396 382 ··· 600 588 if (cmd->resp.tmf.response == VIRTIO_SCSI_S_OK || 601 589 cmd->resp.tmf.response == VIRTIO_SCSI_S_FUNCTION_SUCCEEDED) 602 590 ret = SUCCESS; 591 + 592 + /* 593 + * The spec guarantees that all requests related to the TMF have 594 + * been completed, but the callback might not have run yet if 595 + * we're using independent interrupts (e.g. MSI). Poll the 596 + * virtqueues once. 597 + * 598 + * In the abort case, sc->scsi_done will do nothing, because 599 + * the block layer must have detected a timeout and as a result 600 + * REQ_ATOM_COMPLETE has been set. 601 + */ 602 + virtscsi_poll_requests(vscsi); 603 603 604 604 out: 605 605 mempool_free(cmd, virtscsi_cmd_pool);
+6 -2
drivers/spi/spi-pxa2xx.c
··· 118 118 */ 119 119 orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); 120 120 121 + /* Test SPI_CS_CONTROL_SW_MODE bit enabling */ 121 122 value = orig | SPI_CS_CONTROL_SW_MODE; 122 123 writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL); 123 124 value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); ··· 127 126 goto detection_done; 128 127 } 129 128 130 - value &= ~SPI_CS_CONTROL_SW_MODE; 129 + orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); 130 + 131 + /* Test SPI_CS_CONTROL_SW_MODE bit disabling */ 132 + value = orig & ~SPI_CS_CONTROL_SW_MODE; 131 133 writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL); 132 134 value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); 133 - if (value != orig) { 135 + if (value != (orig & ~SPI_CS_CONTROL_SW_MODE)) { 134 136 offset = 0x800; 135 137 goto detection_done; 136 138 }
+13 -31
drivers/spi/spi-qup.c
··· 424 424 return 0; 425 425 } 426 426 427 - static void spi_qup_set_cs(struct spi_device *spi, bool enable) 428 - { 429 - struct spi_qup *controller = spi_master_get_devdata(spi->master); 430 - 431 - u32 iocontol, mask; 432 - 433 - iocontol = readl_relaxed(controller->base + SPI_IO_CONTROL); 434 - 435 - /* Disable auto CS toggle and use manual */ 436 - iocontol &= ~SPI_IO_C_MX_CS_MODE; 437 - iocontol |= SPI_IO_C_FORCE_CS; 438 - 439 - iocontol &= ~SPI_IO_C_CS_SELECT_MASK; 440 - iocontol |= SPI_IO_C_CS_SELECT(spi->chip_select); 441 - 442 - mask = SPI_IO_C_CS_N_POLARITY_0 << spi->chip_select; 443 - 444 - if (enable) 445 - iocontol |= mask; 446 - else 447 - iocontol &= ~mask; 448 - 449 - writel_relaxed(iocontol, controller->base + SPI_IO_CONTROL); 450 - } 451 - 452 427 static int spi_qup_transfer_one(struct spi_master *master, 453 428 struct spi_device *spi, 454 429 struct spi_transfer *xfer) ··· 546 571 return -ENOMEM; 547 572 } 548 573 574 + /* use num-cs unless not present or out of range */ 575 + if (of_property_read_u16(dev->of_node, "num-cs", 576 + &master->num_chipselect) || 577 + (master->num_chipselect > SPI_NUM_CHIPSELECTS)) 578 + master->num_chipselect = SPI_NUM_CHIPSELECTS; 579 + 549 580 master->bus_num = pdev->id; 550 581 master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP; 551 - master->num_chipselect = SPI_NUM_CHIPSELECTS; 552 582 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 553 583 master->max_speed_hz = max_freq; 554 - master->set_cs = spi_qup_set_cs; 555 584 master->transfer_one = spi_qup_transfer_one; 556 585 master->dev.of_node = pdev->dev.of_node; 557 586 master->auto_runtime_pm = true; ··· 619 640 if (ret) 620 641 goto error; 621 642 622 - ret = devm_spi_register_master(dev, master); 623 - if (ret) 624 - goto error; 625 - 626 643 pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC); 627 644 pm_runtime_use_autosuspend(dev); 628 645 pm_runtime_set_active(dev); 629 646 pm_runtime_enable(dev); 647 + 648 + ret = devm_spi_register_master(dev, master); 649 + if (ret) 650 + goto disable_pm; 651 + 630 652 return 0; 631 653 654 + disable_pm: 655 + pm_runtime_disable(&pdev->dev); 632 656 error: 633 657 clk_disable_unprepare(cclk); 634 658 clk_disable_unprepare(iclk);
+2 -2
drivers/spi/spi-sh-sci.c
··· 175 175 { 176 176 struct sh_sci_spi *sp = platform_get_drvdata(dev); 177 177 178 - iounmap(sp->membase); 179 - setbits(sp, PIN_INIT, 0); 180 178 spi_bitbang_stop(&sp->bitbang); 179 + setbits(sp, PIN_INIT, 0); 180 + iounmap(sp->membase); 181 181 spi_master_put(sp->bitbang.master); 182 182 return 0; 183 183 }
+2 -2
drivers/staging/iio/adc/ad7291.c
··· 465 465 struct ad7291_platform_data *pdata = client->dev.platform_data; 466 466 struct ad7291_chip_info *chip; 467 467 struct iio_dev *indio_dev; 468 - int ret = 0; 468 + int ret; 469 469 470 470 indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*chip)); 471 471 if (!indio_dev) ··· 475 475 if (pdata && pdata->use_external_ref) { 476 476 chip->reg = devm_regulator_get(&client->dev, "vref"); 477 477 if (IS_ERR(chip->reg)) 478 - return ret; 478 + return PTR_ERR(chip->reg); 479 479 480 480 ret = regulator_enable(chip->reg); 481 481 if (ret)
+4 -2
drivers/staging/tidspbridge/core/tiomap3430.c
··· 280 280 OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL); 281 281 282 282 /* Wait until the state has moved to ON */ 283 - while (*pdata->dsp_prm_read(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST)& 284 - OMAP_INTRANSITION_MASK); 283 + while ((*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, 284 + OMAP2_PM_PWSTST) & 285 + OMAP_INTRANSITION_MASK) 286 + ; 285 287 /* Disable Automatic transition */ 286 288 (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_DISABLE_AUTO, 287 289 OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);
+1 -1
drivers/target/iscsi/iscsi_target.c
··· 1309 1309 if (cmd->data_direction != DMA_TO_DEVICE) { 1310 1310 pr_err("Command ITT: 0x%08x received DataOUT for a" 1311 1311 " NON-WRITE command.\n", cmd->init_task_tag); 1312 - return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR, buf); 1312 + return iscsit_dump_data_payload(conn, payload_length, 1); 1313 1313 } 1314 1314 se_cmd = &cmd->se_cmd; 1315 1315 iscsit_mod_dataout_timer(cmd);
+11 -3
drivers/target/iscsi/iscsi_target_auth.c
··· 174 174 char *nr_out_ptr, 175 175 unsigned int *nr_out_len) 176 176 { 177 - char *endptr; 178 177 unsigned long id; 179 178 unsigned char id_as_uchar; 180 179 unsigned char digest[MD5_SIGNATURE_SIZE]; ··· 319 320 } 320 321 321 322 if (type == HEX) 322 - id = simple_strtoul(&identifier[2], &endptr, 0); 323 + ret = kstrtoul(&identifier[2], 0, &id); 323 324 else 324 - id = simple_strtoul(identifier, &endptr, 0); 325 + ret = kstrtoul(identifier, 0, &id); 326 + 327 + if (ret < 0) { 328 + pr_err("kstrtoul() failed for CHAP identifier: %d\n", ret); 329 + goto out; 330 + } 325 331 if (id > 255) { 326 332 pr_err("chap identifier: %lu greater than 255\n", id); 327 333 goto out; ··· 353 349 strlen(challenge)); 354 350 if (!challenge_len) { 355 351 pr_err("Unable to convert incoming challenge\n"); 352 + goto out; 353 + } 354 + if (challenge_len > 1024) { 355 + pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n"); 356 356 goto out; 357 357 } 358 358 /*
+7 -6
drivers/target/iscsi/iscsi_target_login.c
··· 1216 1216 static int __iscsi_target_login_thread(struct iscsi_np *np) 1217 1217 { 1218 1218 u8 *buffer, zero_tsih = 0; 1219 - int ret = 0, rc, stop; 1219 + int ret = 0, rc; 1220 1220 struct iscsi_conn *conn = NULL; 1221 1221 struct iscsi_login *login; 1222 1222 struct iscsi_portal_group *tpg = NULL; ··· 1230 1230 if (np->np_thread_state == ISCSI_NP_THREAD_RESET) { 1231 1231 np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; 1232 1232 complete(&np->np_restart_comp); 1233 + } else if (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN) { 1234 + spin_unlock_bh(&np->np_thread_lock); 1235 + goto exit; 1233 1236 } else { 1234 1237 np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; 1235 1238 } ··· 1425 1422 } 1426 1423 1427 1424 out: 1428 - stop = kthread_should_stop(); 1429 - /* Wait for another socket.. */ 1430 - if (!stop) 1431 - return 1; 1425 + return 1; 1426 + 1432 1427 exit: 1433 1428 iscsi_stop_login_thread_timer(np); 1434 1429 spin_lock_bh(&np->np_thread_lock); ··· 1443 1442 1444 1443 allow_signal(SIGINT); 1445 1444 1446 - while (!kthread_should_stop()) { 1445 + while (1) { 1447 1446 ret = __iscsi_target_login_thread(np); 1448 1447 /* 1449 1448 * We break and exit here unless another sock_accept() call
+2
drivers/target/iscsi/iscsi_target_util.c
··· 1295 1295 login->login_failed = 1; 1296 1296 iscsit_collect_login_stats(conn, status_class, status_detail); 1297 1297 1298 + memset(&login->rsp[0], 0, ISCSI_HDR_LEN); 1299 + 1298 1300 hdr = (struct iscsi_login_rsp *)&login->rsp[0]; 1299 1301 hdr->opcode = ISCSI_OP_LOGIN_RSP; 1300 1302 hdr->status_class = status_class;
+1
drivers/target/loopback/tcm_loop.c
··· 239 239 return; 240 240 241 241 out_done: 242 + kmem_cache_free(tcm_loop_cmd_cache, tl_cmd); 242 243 sc->scsi_done(sc); 243 244 return; 244 245 }
+1
drivers/target/target_core_device.c
··· 616 616 dev->export_count--; 617 617 spin_unlock(&hba->device_lock); 618 618 619 + lun->lun_sep = NULL; 619 620 lun->lun_se_dev = NULL; 620 621 } 621 622
+8 -2
drivers/tc/tc.c
··· 129 129 130 130 tc_device_get_irq(tdev); 131 131 132 - device_register(&tdev->dev); 132 + if (device_register(&tdev->dev)) { 133 + put_device(&tdev->dev); 134 + goto out_err; 135 + } 133 136 list_add_tail(&tdev->node, &tbus->devices); 134 137 135 138 out_err: ··· 151 148 152 149 INIT_LIST_HEAD(&tc_bus.devices); 153 150 dev_set_name(&tc_bus.dev, "tc"); 154 - device_register(&tc_bus.dev); 151 + if (device_register(&tc_bus.dev)) { 152 + put_device(&tc_bus.dev); 153 + return 0; 154 + } 155 155 156 156 if (tc_bus.info.slot_size) { 157 157 unsigned int tc_clock = tc_get_speed(&tc_bus) / 100000;
+7 -11
drivers/thermal/imx_thermal.c
··· 306 306 { 307 307 struct imx_thermal_data *data = platform_get_drvdata(pdev); 308 308 struct regmap *map; 309 - int t1, t2, n1, n2; 309 + int t1, n1; 310 310 int ret; 311 311 u32 val; 312 312 u64 temp64; ··· 333 333 /* 334 334 * Sensor data layout: 335 335 * [31:20] - sensor value @ 25C 336 - * [19:8] - sensor value of hot 337 - * [7:0] - hot temperature value 338 336 * Use universal formula now and only need sensor value @ 25C 339 337 * slope = 0.4297157 - (0.0015976 * 25C fuse) 340 338 */ 341 339 n1 = val >> 20; 342 - n2 = (val & 0xfff00) >> 8; 343 - t2 = val & 0xff; 344 340 t1 = 25; /* t1 always 25C */ 345 341 346 342 /* ··· 362 366 data->c2 = n1 * data->c1 + 1000 * t1; 363 367 364 368 /* 365 - * Set the default passive cooling trip point to 20 °C below the 366 - * maximum die temperature. Can be changed from userspace. 369 + * Set the default passive cooling trip point, 370 + * can be changed from userspace. 367 371 */ 368 - data->temp_passive = 1000 * (t2 - 20); 372 + data->temp_passive = IMX_TEMP_PASSIVE; 369 373 370 374 /* 371 - * The maximum die temperature is t2, let's give 5 °C cushion 372 - * for noise and possible temperature rise between measurements. 375 + * The maximum die temperature set to 20 C higher than 376 + * IMX_TEMP_PASSIVE. 373 377 */ 374 - data->temp_critical = 1000 * (t2 - 5); 378 + data->temp_critical = 1000 * 20 + data->temp_passive; 375 379 376 380 return 0; 377 381 }
+4 -3
drivers/thermal/of-thermal.c
··· 156 156 157 157 ret = thermal_zone_bind_cooling_device(thermal, 158 158 tbp->trip_id, cdev, 159 - tbp->min, 160 - tbp->max); 159 + tbp->max, 160 + tbp->min); 161 161 if (ret) 162 162 return ret; 163 163 } ··· 712 712 } 713 713 714 714 i = 0; 715 - for_each_child_of_node(child, gchild) 715 + for_each_child_of_node(child, gchild) { 716 716 ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++], 717 717 tz->trips, tz->ntrips); 718 718 if (ret) 719 719 goto free_tbps; 720 + } 720 721 721 722 finish: 722 723 of_node_put(child);
+18 -15
drivers/thermal/thermal_hwmon.c
··· 140 140 return NULL; 141 141 } 142 142 143 + static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz) 144 + { 145 + unsigned long temp; 146 + return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp); 147 + } 148 + 143 149 int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) 144 150 { 145 151 struct thermal_hwmon_device *hwmon; ··· 195 189 if (result) 196 190 goto free_temp_mem; 197 191 198 - if (tz->ops->get_crit_temp) { 199 - unsigned long temperature; 200 - if (!tz->ops->get_crit_temp(tz, &temperature)) { 201 - snprintf(temp->temp_crit.name, 202 - sizeof(temp->temp_crit.name), 192 + if (thermal_zone_crit_temp_valid(tz)) { 193 + snprintf(temp->temp_crit.name, 194 + sizeof(temp->temp_crit.name), 203 195 "temp%d_crit", hwmon->count); 204 - temp->temp_crit.attr.attr.name = temp->temp_crit.name; 205 - temp->temp_crit.attr.attr.mode = 0444; 206 - temp->temp_crit.attr.show = temp_crit_show; 207 - sysfs_attr_init(&temp->temp_crit.attr.attr); 208 - result = device_create_file(hwmon->device, 209 - &temp->temp_crit.attr); 210 - if (result) 211 - goto unregister_input; 212 - } 196 + temp->temp_crit.attr.attr.name = temp->temp_crit.name; 197 + temp->temp_crit.attr.attr.mode = 0444; 198 + temp->temp_crit.attr.show = temp_crit_show; 199 + sysfs_attr_init(&temp->temp_crit.attr.attr); 200 + result = device_create_file(hwmon->device, 201 + &temp->temp_crit.attr); 202 + if (result) 203 + goto unregister_input; 213 204 } 214 205 215 206 mutex_lock(&thermal_hwmon_list_lock); ··· 253 250 } 254 251 255 252 device_remove_file(hwmon->device, &temp->temp_input.attr); 256 - if (tz->ops->get_crit_temp) 253 + if (thermal_zone_crit_temp_valid(tz)) 257 254 device_remove_file(hwmon->device, &temp->temp_crit.attr); 258 255 259 256 mutex_lock(&thermal_hwmon_list_lock);
+1 -1
drivers/thermal/ti-soc-thermal/ti-bandgap.c
··· 1155 1155 /* register shadow for context save and restore */ 1156 1156 bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) * 1157 1157 bgp->conf->sensor_count, GFP_KERNEL); 1158 - if (!bgp) { 1158 + if (!bgp->regval) { 1159 1159 dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); 1160 1160 return ERR_PTR(-ENOMEM); 1161 1161 }
+1 -1
drivers/tty/serial/arc_uart.c
··· 177 177 uart->port.icount.tx++; 178 178 uart->port.x_char = 0; 179 179 sent = 1; 180 - } else if (xmit->tail != xmit->head) { /* TODO: uart_circ_empty */ 180 + } else if (!uart_circ_empty(xmit)) { 181 181 ch = xmit->buf[xmit->tail]; 182 182 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 183 183 uart->port.icount.tx++;
+3
drivers/tty/serial/imx.c
··· 567 567 struct imx_port *sport = (struct imx_port *)port; 568 568 unsigned long temp; 569 569 570 + if (uart_circ_empty(&port->state->xmit)) 571 + return; 572 + 570 573 if (USE_IRDA(sport)) { 571 574 /* half duplex in IrDA mode; have to disable receive mode */ 572 575 temp = readl(sport->port.membase + UCR4);
+2
drivers/tty/serial/ip22zilog.c
··· 603 603 } else { 604 604 struct circ_buf *xmit = &port->state->xmit; 605 605 606 + if (uart_circ_empty(xmit)) 607 + return; 606 608 writeb(xmit->buf[xmit->tail], &channel->data); 607 609 ZSDELAY(); 608 610 ZS_WSYNC(channel);
+5 -3
drivers/tty/serial/m32r_sio.c
··· 266 266 if (!(up->ier & UART_IER_THRI)) { 267 267 up->ier |= UART_IER_THRI; 268 268 serial_out(up, UART_IER, up->ier); 269 - serial_out(up, UART_TX, xmit->buf[xmit->tail]); 270 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 271 - up->port.icount.tx++; 269 + if (!uart_circ_empty(xmit)) { 270 + serial_out(up, UART_TX, xmit->buf[xmit->tail]); 271 + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 272 + up->port.icount.tx++; 273 + } 272 274 } 273 275 while((serial_in(up, UART_LSR) & UART_EMPTY) != UART_EMPTY); 274 276 #else
+3
drivers/tty/serial/pmac_zilog.c
··· 653 653 } else { 654 654 struct circ_buf *xmit = &port->state->xmit; 655 655 656 + if (uart_circ_empty(xmit)) 657 + goto out; 656 658 write_zsdata(uap, xmit->buf[xmit->tail]); 657 659 zssync(uap); 658 660 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); ··· 663 661 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 664 662 uart_write_wakeup(&uap->port); 665 663 } 664 + out: 666 665 pmz_debug("pmz: start_tx() done.\n"); 667 666 } 668 667
+3
drivers/tty/serial/sunsab.c
··· 427 427 struct circ_buf *xmit = &up->port.state->xmit; 428 428 int i; 429 429 430 + if (uart_circ_empty(xmit)) 431 + return; 432 + 430 433 up->interrupt_mask1 &= ~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); 431 434 writeb(up->interrupt_mask1, &up->regs->w.imr1); 432 435
+2
drivers/tty/serial/sunzilog.c
··· 703 703 } else { 704 704 struct circ_buf *xmit = &port->state->xmit; 705 705 706 + if (uart_circ_empty(xmit)) 707 + return; 706 708 writeb(xmit->buf[xmit->tail], &channel->data); 707 709 ZSDELAY(); 708 710 ZS_WSYNC(channel);
+7
drivers/usb/chipidea/udc.c
··· 1321 1321 struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep); 1322 1322 struct ci_hw_req *hwreq = container_of(req, struct ci_hw_req, req); 1323 1323 unsigned long flags; 1324 + struct td_node *node, *tmpnode; 1324 1325 1325 1326 if (ep == NULL || req == NULL || hwreq->req.status != -EALREADY || 1326 1327 hwep->ep.desc == NULL || list_empty(&hwreq->queue) || ··· 1331 1330 spin_lock_irqsave(hwep->lock, flags); 1332 1331 1333 1332 hw_ep_flush(hwep->ci, hwep->num, hwep->dir); 1333 + 1334 + list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) { 1335 + dma_pool_free(hwep->td_pool, node->ptr, node->dma); 1336 + list_del(&node->td); 1337 + kfree(node); 1338 + } 1334 1339 1335 1340 /* pop request */ 1336 1341 list_del_init(&hwreq->queue);
+1
drivers/usb/dwc3/Kconfig
··· 45 45 config USB_DWC3_OMAP 46 46 tristate "Texas Instruments OMAP5 and similar Platforms" 47 47 depends on EXTCON && (ARCH_OMAP2PLUS || COMPILE_TEST) 48 + depends on OF 48 49 default USB_DWC3 49 50 help 50 51 Some platforms from Texas Instruments like OMAP5, DRA7xxx and
+14 -3
drivers/usb/dwc3/dwc3-omap.c
··· 322 322 { 323 323 struct platform_device *pdev = to_platform_device(dev); 324 324 325 - platform_device_unregister(pdev); 325 + of_device_unregister(pdev); 326 326 327 327 return 0; 328 328 } ··· 599 599 { 600 600 struct dwc3_omap *omap = dev_get_drvdata(dev); 601 601 602 - dwc3_omap_disable_irqs(omap); 602 + dwc3_omap_write_irqmisc_set(omap, 0x00); 603 603 604 604 return 0; 605 605 } ··· 607 607 static void dwc3_omap_complete(struct device *dev) 608 608 { 609 609 struct dwc3_omap *omap = dev_get_drvdata(dev); 610 + u32 reg; 610 611 611 - dwc3_omap_enable_irqs(omap); 612 + reg = (USBOTGSS_IRQMISC_OEVT | 613 + USBOTGSS_IRQMISC_DRVVBUS_RISE | 614 + USBOTGSS_IRQMISC_CHRGVBUS_RISE | 615 + USBOTGSS_IRQMISC_DISCHRGVBUS_RISE | 616 + USBOTGSS_IRQMISC_IDPULLUP_RISE | 617 + USBOTGSS_IRQMISC_DRVVBUS_FALL | 618 + USBOTGSS_IRQMISC_CHRGVBUS_FALL | 619 + USBOTGSS_IRQMISC_DISCHRGVBUS_FALL | 620 + USBOTGSS_IRQMISC_IDPULLUP_FALL); 621 + 622 + dwc3_omap_write_irqmisc_set(omap, reg); 612 623 } 613 624 614 625 static int dwc3_omap_suspend(struct device *dev)
+4 -4
drivers/usb/dwc3/gadget.c
··· 828 828 length, last ? " last" : "", 829 829 chain ? " chain" : ""); 830 830 831 - /* Skip the LINK-TRB on ISOC */ 832 - if (((dep->free_slot & DWC3_TRB_MASK) == DWC3_TRB_NUM - 1) && 833 - usb_endpoint_xfer_isoc(dep->endpoint.desc)) 834 - dep->free_slot++; 835 831 836 832 trb = &dep->trb_pool[dep->free_slot & DWC3_TRB_MASK]; 837 833 ··· 839 843 } 840 844 841 845 dep->free_slot++; 846 + /* Skip the LINK-TRB on ISOC */ 847 + if (((dep->free_slot & DWC3_TRB_MASK) == DWC3_TRB_NUM - 1) && 848 + usb_endpoint_xfer_isoc(dep->endpoint.desc)) 849 + dep->free_slot++; 842 850 843 851 trb->size = DWC3_TRB_SIZE_LENGTH(length); 844 852 trb->bpl = lower_32_bits(dma);
+19 -18
drivers/usb/gadget/configfs.c
··· 1145 1145 .store_attribute = usb_os_desc_attr_store, 1146 1146 }; 1147 1147 1148 - static ssize_t rndis_grp_compatible_id_show(struct usb_os_desc *desc, 1149 - char *page) 1148 + static ssize_t interf_grp_compatible_id_show(struct usb_os_desc *desc, 1149 + char *page) 1150 1150 { 1151 1151 memcpy(page, desc->ext_compat_id, 8); 1152 1152 return 8; 1153 1153 } 1154 1154 1155 - static ssize_t rndis_grp_compatible_id_store(struct usb_os_desc *desc, 1156 - const char *page, size_t len) 1155 + static ssize_t interf_grp_compatible_id_store(struct usb_os_desc *desc, 1156 + const char *page, size_t len) 1157 1157 { 1158 1158 int l; 1159 1159 ··· 1171 1171 return len; 1172 1172 } 1173 1173 1174 - static struct usb_os_desc_attribute rndis_grp_attr_compatible_id = 1174 + static struct usb_os_desc_attribute interf_grp_attr_compatible_id = 1175 1175 __CONFIGFS_ATTR(compatible_id, S_IRUGO | S_IWUSR, 1176 - rndis_grp_compatible_id_show, 1177 - rndis_grp_compatible_id_store); 1176 + interf_grp_compatible_id_show, 1177 + interf_grp_compatible_id_store); 1178 1178 1179 - static ssize_t rndis_grp_sub_compatible_id_show(struct usb_os_desc *desc, 1180 - char *page) 1179 + static ssize_t interf_grp_sub_compatible_id_show(struct usb_os_desc *desc, 1180 + char *page) 1181 1181 { 1182 1182 memcpy(page, desc->ext_compat_id + 8, 8); 1183 1183 return 8; 1184 1184 } 1185 1185 1186 - static ssize_t rndis_grp_sub_compatible_id_store(struct usb_os_desc *desc, 1187 - const char *page, size_t len) 1186 + static ssize_t interf_grp_sub_compatible_id_store(struct usb_os_desc *desc, 1187 + const char *page, size_t len) 1188 1188 { 1189 1189 int l; 1190 1190 ··· 1202 1202 return len; 1203 1203 } 1204 1204 1205 - static struct usb_os_desc_attribute rndis_grp_attr_sub_compatible_id = 1205 + static struct usb_os_desc_attribute interf_grp_attr_sub_compatible_id = 1206 1206 __CONFIGFS_ATTR(sub_compatible_id, S_IRUGO | S_IWUSR, 1207 - rndis_grp_sub_compatible_id_show, 1208 - rndis_grp_sub_compatible_id_store); 1207 + interf_grp_sub_compatible_id_show, 1208 + interf_grp_sub_compatible_id_store); 1209 1209 1210 1210 static struct configfs_attribute *interf_grp_attrs[] = { 1211 - &rndis_grp_attr_compatible_id.attr, 1212 - &rndis_grp_attr_sub_compatible_id.attr, 1211 + &interf_grp_attr_compatible_id.attr, 1212 + &interf_grp_attr_sub_compatible_id.attr, 1213 1213 NULL 1214 1214 }; 1215 1215 1216 1216 int usb_os_desc_prepare_interf_dir(struct config_group *parent, 1217 1217 int n_interf, 1218 1218 struct usb_os_desc **desc, 1219 + char **names, 1219 1220 struct module *owner) 1220 1221 { 1221 1222 struct config_group **f_default_groups, *os_desc_group, ··· 1258 1257 d = desc[n_interf]; 1259 1258 d->owner = owner; 1260 1259 config_group_init_type_name(&d->group, "", interface_type); 1261 - config_item_set_name(&d->group.cg_item, "interface.%d", 1262 - n_interf); 1260 + config_item_set_name(&d->group.cg_item, "interface.%s", 1261 + names[n_interf]); 1263 1263 interface_groups[n_interf] = &d->group; 1264 1264
+1
drivers/usb/gadget/configfs.h
··· 8 8 int usb_os_desc_prepare_interf_dir(struct config_group *parent, 9 9 int n_interf, 10 10 struct usb_os_desc **desc, 11 + char **names, 11 12 struct module *owner); 12 13 13 14 static inline struct usb_os_desc *to_usb_os_desc(struct config_item *item)
+7 -5
drivers/usb/gadget/f_fs.c
··· 1483 1483 ffs->ep0req->context = ffs; 1484 1484 1485 1485 lang = ffs->stringtabs; 1486 - for (lang = ffs->stringtabs; *lang; ++lang) { 1487 - struct usb_string *str = (*lang)->strings; 1488 - int id = first_id; 1489 - for (; str->s; ++id, ++str) 1490 - str->id = id; 1486 + if (lang) { 1487 + for (; *lang; ++lang) { 1488 + struct usb_string *str = (*lang)->strings; 1489 + int id = first_id; 1490 + for (; str->s; ++id, ++str) 1491 + str->id = id; 1492 + } 1491 1493 } 1492 1494 1493 1495 ffs->gadget = cdev->gadget;
+4 -2
drivers/usb/gadget/f_rndis.c
··· 687 687 f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), 688 688 GFP_KERNEL); 689 689 if (!f->os_desc_table) 690 - return PTR_ERR(f->os_desc_table); 690 + return -ENOMEM; 691 691 f->os_desc_n = 1; 692 692 f->os_desc_table[0].os_desc = &rndis_opts->rndis_os_desc; 693 693 } ··· 905 905 { 906 906 struct f_rndis_opts *opts; 907 907 struct usb_os_desc *descs[1]; 908 + char *names[1]; 908 909 909 910 opts = kzalloc(sizeof(*opts), GFP_KERNEL); 910 911 if (!opts) ··· 923 922 INIT_LIST_HEAD(&opts->rndis_os_desc.ext_prop); 924 923 925 924 descs[0] = &opts->rndis_os_desc; 925 + names[0] = "rndis"; 926 926 usb_os_desc_prepare_interf_dir(&opts->func_inst.group, 1, descs, 927 - THIS_MODULE); 927 + names, THIS_MODULE); 928 928 config_group_init_type_name(&opts->func_inst.group, "", 929 929 &rndis_func_type); 930 930
+3 -2
drivers/usb/gadget/gr_udc.c
··· 1532 1532 "%s mode: multiple trans./microframe not valid\n", 1533 1533 (mode == 2 ? "Bulk" : "Control")); 1534 1534 return -EINVAL; 1535 - } else if (nt == 0x11) { 1536 - dev_err(dev->dev, "Invalid value for trans./microframe\n"); 1535 + } else if (nt == 0x3) { 1536 + dev_err(dev->dev, 1537 + "Invalid value 0x3 for additional trans./microframe\n"); 1537 1538 return -EINVAL; 1538 1539 } else if ((nt + 1) * max > buffer_size) { 1539 1540 dev_err(dev->dev, "Hw buffer size %d < max payload %d * %d\n",
+6 -1
drivers/usb/gadget/inode.c
··· 1264 1264 1265 1265 kfree (dev->buf); 1266 1266 dev->buf = NULL; 1267 - put_dev (dev); 1268 1267 1268 + /* other endpoints were all decoupled from this device */ 1269 + spin_lock_irq(&dev->lock); 1270 + dev->state = STATE_DEV_DISABLED; 1271 + spin_unlock_irq(&dev->lock); 1272 + 1273 + put_dev (dev); 1269 1274 return 0; 1270 1275 } 1271 1276
+3
drivers/usb/gadget/u_ether.c
··· 1120 1120 1121 1121 DBG(dev, "%s\n", __func__); 1122 1122 1123 + netif_tx_lock(dev->net); 1123 1124 netif_stop_queue(dev->net); 1125 + netif_tx_unlock(dev->net); 1126 + 1124 1127 netif_carrier_off(dev->net); 1125 1128 1126 1129 /* disable endpoints, forcing (synchronous) completion
+1 -1
drivers/usb/host/Kconfig
··· 176 176 177 177 config USB_EHCI_MSM 178 178 tristate "Support for Qualcomm QSD/MSM on-chip EHCI USB controller" 179 - depends on ARCH_MSM 179 + depends on ARCH_MSM || ARCH_QCOM 180 180 select USB_EHCI_ROOT_HUB_TT 181 181 ---help--- 182 182 Enables support for the USB Host controller present on the
+4 -1
drivers/usb/host/xhci-hub.c
··· 22 22 23 23 24 24 #include <linux/slab.h> 25 + #include <linux/device.h> 25 26 #include <asm/unaligned.h> 26 27 27 28 #include "xhci.h" ··· 1140 1139 * including the USB 3.0 roothub, but only if CONFIG_PM_RUNTIME 1141 1140 * is enabled, so also enable remote wake here. 1142 1141 */ 1143 - if (hcd->self.root_hub->do_remote_wakeup) { 1142 + if (hcd->self.root_hub->do_remote_wakeup 1143 + && device_may_wakeup(hcd->self.controller)) { 1144 + 1144 1145 if (t1 & PORT_CONNECT) { 1145 1146 t2 |= PORT_WKOC_E | PORT_WKDISC_E; 1146 1147 t2 &= ~PORT_WKCONN_E;
+6 -3
drivers/usb/host/xhci-ring.c
··· 1433 1433 xhci_handle_cmd_reset_ep(xhci, slot_id, cmd_trb, cmd_comp_code); 1434 1434 break; 1435 1435 case TRB_RESET_DEV: 1436 - WARN_ON(slot_id != TRB_TO_SLOT_ID( 1437 - le32_to_cpu(cmd_trb->generic.field[3]))); 1436 + /* SLOT_ID field in reset device cmd completion event TRB is 0. 1437 + * Use the SLOT_ID from the command TRB instead (xhci 4.6.11) 1438 + */ 1439 + slot_id = TRB_TO_SLOT_ID( 1440 + le32_to_cpu(cmd_trb->generic.field[3])); 1438 1441 xhci_handle_cmd_reset_dev(xhci, slot_id, event); 1439 1442 break; 1440 1443 case TRB_NEC_GET_FW: ··· 3537 3534 return 0; 3538 3535 3539 3536 max_burst = urb->ep->ss_ep_comp.bMaxBurst; 3540 - return roundup(total_packet_count, max_burst + 1) - 1; 3537 + return DIV_ROUND_UP(total_packet_count, max_burst + 1) - 1; 3541 3538 } 3542 3539 3543 3540 /*
+7 -3
drivers/usb/host/xhci.c
··· 936 936 */ 937 937 int xhci_resume(struct xhci_hcd *xhci, bool hibernated) 938 938 { 939 - u32 command, temp = 0; 939 + u32 command, temp = 0, status; 940 940 struct usb_hcd *hcd = xhci_to_hcd(xhci); 941 941 struct usb_hcd *secondary_hcd; 942 942 int retval = 0; ··· 1054 1054 1055 1055 done: 1056 1056 if (retval == 0) { 1057 - usb_hcd_resume_root_hub(hcd); 1058 - usb_hcd_resume_root_hub(xhci->shared_hcd); 1057 + /* Resume root hubs only when have pending events. */ 1058 + status = readl(&xhci->op_regs->status); 1059 + if (status & STS_EINT) { 1060 + usb_hcd_resume_root_hub(hcd); 1061 + usb_hcd_resume_root_hub(xhci->shared_hcd); 1062 + } 1059 1063 } 1060 1064 1061 1065 /*
+6 -17
drivers/usb/musb/musb_am335x.c
··· 19 19 return ret; 20 20 } 21 21 22 - static int of_remove_populated_child(struct device *dev, void *d) 23 - { 24 - struct platform_device *pdev = to_platform_device(dev); 25 - 26 - of_device_unregister(pdev); 27 - return 0; 28 - } 29 - 30 - static int am335x_child_remove(struct platform_device *pdev) 31 - { 32 - device_for_each_child(&pdev->dev, NULL, of_remove_populated_child); 33 - pm_runtime_disable(&pdev->dev); 34 - return 0; 35 - } 36 - 37 22 static const struct of_device_id am335x_child_of_match[] = { 38 23 { .compatible = "ti,am33xx-usb" }, 39 24 { }, ··· 27 42 28 43 static struct platform_driver am335x_child_driver = { 29 44 .probe = am335x_child_probe, 30 - .remove = am335x_child_remove, 31 45 .driver = { 32 46 .name = "am335x-usb-childs", 33 47 .of_match_table = am335x_child_of_match, 34 48 }, 35 49 }; 36 50 37 - module_platform_driver(am335x_child_driver); 51 + static int __init am335x_child_init(void) 52 + { 53 + return platform_driver_register(&am335x_child_driver); 54 + } 55 + module_init(am335x_child_init); 56 + 38 57 MODULE_DESCRIPTION("AM33xx child devices"); 39 58 MODULE_LICENSE("GPL v2");
+1 -1
drivers/usb/musb/musb_core.c
··· 849 849 } 850 850 851 851 /* handle babble condition */ 852 - if (int_usb & MUSB_INTR_BABBLE) 852 + if (int_usb & MUSB_INTR_BABBLE && is_host_active(musb)) 853 853 schedule_work(&musb->recover_work); 854 854 855 855 #if 0
+1 -1
drivers/usb/musb/musb_cppi41.c
··· 318 318 } 319 319 list_add_tail(&cppi41_channel->tx_check, 320 320 &controller->early_tx_list); 321 - if (!hrtimer_active(&controller->early_tx)) { 321 + if (!hrtimer_is_queued(&controller->early_tx)) { 322 322 hrtimer_start_range_ns(&controller->early_tx, 323 323 ktime_set(0, 140 * NSEC_PER_USEC), 324 324 40 * NSEC_PER_USEC,
+4 -5
drivers/usb/musb/musb_dsps.c
··· 494 494 struct dsps_glue *glue = dev_get_drvdata(dev->parent); 495 495 const struct dsps_musb_wrapper *wrp = glue->wrp; 496 496 void __iomem *ctrl_base = musb->ctrl_base; 497 - void __iomem *base = musb->mregs; 498 497 u32 reg; 499 498 500 - reg = dsps_readl(base, wrp->mode); 499 + reg = dsps_readl(ctrl_base, wrp->mode); 501 500 502 501 switch (mode) { 503 502 case MUSB_HOST: ··· 509 510 */ 510 511 reg |= (1 << wrp->iddig_mux); 511 512 512 - dsps_writel(base, wrp->mode, reg); 513 + dsps_writel(ctrl_base, wrp->mode, reg); 513 514 dsps_writel(ctrl_base, wrp->phy_utmi, 0x02); 514 515 break; 515 516 case MUSB_PERIPHERAL: ··· 522 523 */ 523 524 reg |= (1 << wrp->iddig_mux); 524 525 525 - dsps_writel(base, wrp->mode, reg); 526 + dsps_writel(ctrl_base, wrp->mode, reg); 526 527 break; 527 528 case MUSB_OTG: 528 - dsps_writel(base, wrp->phy_utmi, 0x02); 529 + dsps_writel(ctrl_base, wrp->phy_utmi, 0x02); 529 530 break; 530 531 default: 531 532 dev_err(glue->dev, "unsupported mode %d\n", mode);
-1
drivers/usb/musb/ux500.c
··· 274 274 musb->dev.parent = &pdev->dev; 275 275 musb->dev.dma_mask = &pdev->dev.coherent_dma_mask; 276 276 musb->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask; 277 - musb->dev.of_node = pdev->dev.of_node; 278 277 279 278 glue->dev = &pdev->dev; 280 279 glue->musb = musb;
+3 -1
drivers/usb/phy/phy-msm-usb.c
··· 1229 1229 motg->chg_state = USB_CHG_STATE_UNDEFINED; 1230 1230 motg->chg_type = USB_INVALID_CHARGER; 1231 1231 } 1232 - pm_runtime_put_sync(otg->phy->dev); 1232 + 1233 + if (otg->phy->state == OTG_STATE_B_IDLE) 1234 + pm_runtime_put_sync(otg->phy->dev); 1233 1235 break; 1234 1236 case OTG_STATE_B_PERIPHERAL: 1235 1237 dev_dbg(otg->phy->dev, "OTG_STATE_B_PERIPHERAL state\n");
+8
drivers/usb/renesas_usbhs/fifo.c
··· 681 681 usbhs_pipe_number(pipe), 682 682 pkt->length, pkt->actual, *is_done, pkt->zero); 683 683 684 + /* 685 + * Transmission end 686 + */ 687 + if (*is_done) { 688 + if (usbhs_pipe_is_dcp(pipe)) 689 + usbhs_dcp_control_transfer_done(pipe); 690 + } 691 + 684 692 usbhs_fifo_read_busy: 685 693 usbhsf_fifo_unselect(pipe, fifo); 686 694
+1
drivers/usb/serial/cp210x.c
··· 153 153 { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */ 154 154 { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ 155 155 { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ 156 + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ 156 157 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 157 158 { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ 158 159 { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
+9 -3
drivers/usb/serial/ftdi_sio.c
··· 720 720 { USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, 721 721 { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, 722 722 { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) }, 723 - { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) }, 723 + { USB_DEVICE(TESTO_VID, TESTO_1_PID) }, 724 + { USB_DEVICE(TESTO_VID, TESTO_3_PID) }, 724 725 { USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) }, 725 726 { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) }, 726 727 { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) }, ··· 945 944 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) }, 946 945 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) }, 947 946 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) }, 947 + /* Infineon Devices */ 948 + { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, 948 949 { } /* Terminating entry */ 949 950 }; 950 951 ··· 1569 1566 struct usb_device *udev = serial->dev; 1570 1567 1571 1568 struct usb_interface *interface = serial->interface; 1572 - struct usb_endpoint_descriptor *ep_desc = &interface->cur_altsetting->endpoint[1].desc; 1569 + struct usb_endpoint_descriptor *ep_desc; 1573 1570 1574 1571 unsigned num_endpoints; 1575 - int i; 1572 + unsigned i; 1576 1573 1577 1574 num_endpoints = interface->cur_altsetting->desc.bNumEndpoints; 1578 1575 dev_info(&udev->dev, "Number of endpoints %d\n", num_endpoints); 1576 + 1577 + if (!num_endpoints) 1578 + return; 1579 1579 1580 1580 /* NOTE: some customers have programmed FT232R/FT245R devices 1581 1581 * with an endpoint size of 0 - not good. In this case, we
+8 -1
drivers/usb/serial/ftdi_sio_ids.h
··· 584 584 #define RATOC_PRODUCT_ID_USB60F 0xb020 585 585 586 586 /* 587 + * Infineon Technologies 588 + */ 589 + #define INFINEON_VID 0x058b 590 + #define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ 591 + 592 + /* 587 593 * Acton Research Corp. 588 594 */ 589 595 #define ACTON_VID 0x0647 /* Vendor ID */ ··· 804 798 * Submitted by Colin Leroy 805 799 */ 806 800 #define TESTO_VID 0x128D 807 - #define TESTO_USB_INTERFACE_PID 0x0001 801 + #define TESTO_1_PID 0x0001 802 + #define TESTO_3_PID 0x0003 808 803 809 804 /* 810 805 * Mobility Electronics products.
+22 -6
drivers/usb/serial/option.c
··· 352 352 /* Zoom */ 353 353 #define ZOOM_PRODUCT_4597 0x9607 354 354 355 + /* SpeedUp SU9800 usb 3g modem */ 356 + #define SPEEDUP_PRODUCT_SU9800 0x9800 357 + 355 358 /* Haier products */ 356 359 #define HAIER_VENDOR_ID 0x201e 357 360 #define HAIER_PRODUCT_CE100 0x2009 ··· 375 372 /* Olivetti products */ 376 373 #define OLIVETTI_VENDOR_ID 0x0b3c 377 374 #define OLIVETTI_PRODUCT_OLICARD100 0xc000 375 + #define OLIVETTI_PRODUCT_OLICARD120 0xc001 376 + #define OLIVETTI_PRODUCT_OLICARD140 0xc002 378 377 #define OLIVETTI_PRODUCT_OLICARD145 0xc003 378 + #define OLIVETTI_PRODUCT_OLICARD155 0xc004 379 379 #define OLIVETTI_PRODUCT_OLICARD200 0xc005 380 + #define OLIVETTI_PRODUCT_OLICARD160 0xc00a 380 381 #define OLIVETTI_PRODUCT_OLICARD500 0xc00b 381 382 382 383 /* Celot products */ ··· 1487 1480 .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1488 1481 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ 1489 1482 .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1483 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ 1484 + .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1490 1485 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, 1491 1486 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, 1492 1487 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, ··· 1586 1577 { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14), 1587 1578 .driver_info = (kernel_ulong_t)&four_g_w14_blacklist 1588 1579 }, 1580 + { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) }, 1589 1581 { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) }, 1590 1582 { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) }, 1591 1583 { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) }, ··· 1621 1611 { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) }, 1622 1612 { 
USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */ 1623 1613 { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, 1624 - 1625 - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, 1614 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100), 1615 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 1616 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120), 1617 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 1618 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD140), 1619 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 1626 1620 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) }, 1621 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD155), 1622 + .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, 1627 1623 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200), 1628 - .driver_info = (kernel_ulong_t)&net_intf6_blacklist 1629 - }, 1624 + .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, 1625 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD160), 1626 + .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, 1630 1627 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD500), 1631 - .driver_info = (kernel_ulong_t)&net_intf4_blacklist 1632 - }, 1628 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 1633 1629 { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ 1634 1630 { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/ 1635 1631 { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) },
+4
drivers/usb/storage/scsiglue.c
··· 256 256 if (us->fflags & US_FL_WRITE_CACHE) 257 257 sdev->wce_default_on = 1; 258 258 259 + /* A few buggy USB-ATA bridges don't understand FUA */ 260 + if (us->fflags & US_FL_BROKEN_FUA) 261 + sdev->broken_fua = 1; 262 + 259 263 } else { 260 264 261 265 /* Non-disk-type devices don't need to blacklist any pages
+7
drivers/usb/storage/unusual_devs.h
··· 1936 1936 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1937 1937 US_FL_IGNORE_RESIDUE ), 1938 1938 1939 + /* Reported by Michael Büsch <m@bues.ch> */ 1940 + UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0114, 1941 + "JMicron", 1942 + "USB to ATA/ATAPI Bridge", 1943 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1944 + US_FL_BROKEN_FUA ), 1945 + 1939 1946 /* Reported by Alexandre Oliva <oliva@lsd.ic.unicamp.br> 1940 1947 * JMicron responds to USN and several other SCSI ioctls with a 1941 1948 * residue that causes subsequent I/O requests to fail. */
+2
drivers/video/fbdev/atmel_lcdfb.c
··· 1057 1057 goto put_display_node; 1058 1058 } 1059 1059 1060 + INIT_LIST_HEAD(&pdata->pwr_gpios); 1060 1061 ret = -ENOMEM; 1061 1062 for (i = 0; i < of_gpio_named_count(display_np, "atmel,power-control-gpio"); i++) { 1062 1063 gpio = of_get_named_gpio_flags(display_np, "atmel,power-control-gpio", ··· 1083 1082 dev_err(dev, "set direction output gpio %d failed\n", gpio); 1084 1083 goto put_display_node; 1085 1084 } 1085 + list_add(&og->list, &pdata->pwr_gpios); 1086 1086 } 1087 1087 1088 1088 if (is_gpio_power)
+1 -1
drivers/video/fbdev/bfin_adv7393fb.c
··· 408 408 /* Workaround "PPI Does Not Start Properly In Specific Mode" */ 409 409 if (ANOMALY_05000400) { 410 410 ret = gpio_request_one(P_IDENT(P_PPI0_FS3), GPIOF_OUT_INIT_LOW, 411 - "PPI0_FS3") 411 + "PPI0_FS3"); 412 412 if (ret) { 413 413 dev_err(&client->dev, "PPI0_FS3 GPIO request failed\n"); 414 414 ret = -EBUSY;
+5 -3
drivers/video/fbdev/omap2/dss/omapdss-boot-init.c
··· 121 121 { 122 122 struct dss_conv_node *n = kmalloc(sizeof(struct dss_conv_node), 123 123 GFP_KERNEL); 124 - n->node = node; 125 - n->root = root; 126 - list_add(&n->list, &dss_conv_list); 124 + if (n) { 125 + n->node = node; 126 + n->root = root; 127 + list_add(&n->list, &dss_conv_list); 128 + } 127 129 } 128 130 129 131 static bool __init omapdss_list_contains(const struct device_node *node)
-2
drivers/video/fbdev/vt8500lcdfb.c
··· 474 474 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 475 475 release_mem_region(res->start, resource_size(res)); 476 476 477 - kfree(fbi); 478 - 479 477 return 0; 480 478 } 481 479
+6
firmware/Makefile
··· 219 219 obj-y += $(patsubst %,%.gen.o, $(fw-external-y)) 220 220 obj-$(CONFIG_FIRMWARE_IN_KERNEL) += $(patsubst %,%.gen.o, $(fw-shipped-y)) 221 221 222 + ifeq ($(KBUILD_SRC),) 223 + # Makefile.build only creates subdirectories for O= builds, but external 224 + # firmware might live outside the kernel source tree 225 + _dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d))) 226 + endif 227 + 222 228 # Remove .S files and binaries created from ihex 223 229 # (during 'make clean' .config isn't included so they're all in $(fw-shipped-)) 224 230 targets := $(fw-shipped-) $(patsubst $(obj)/%,%, \
+7
fs/aio.c
··· 830 830 static void put_reqs_available(struct kioctx *ctx, unsigned nr) 831 831 { 832 832 struct kioctx_cpu *kcpu; 833 + unsigned long flags; 833 834 834 835 preempt_disable(); 835 836 kcpu = this_cpu_ptr(ctx->cpu); 836 837 838 + local_irq_save(flags); 837 839 kcpu->reqs_available += nr; 840 + 838 841 while (kcpu->reqs_available >= ctx->req_batch * 2) { 839 842 kcpu->reqs_available -= ctx->req_batch; 840 843 atomic_add(ctx->req_batch, &ctx->reqs_available); 841 844 } 842 845 846 + local_irq_restore(flags); 843 847 preempt_enable(); 844 848 } 845 849 ··· 851 847 { 852 848 struct kioctx_cpu *kcpu; 853 849 bool ret = false; 850 + unsigned long flags; 854 851 855 852 preempt_disable(); 856 853 kcpu = this_cpu_ptr(ctx->cpu); 857 854 855 + local_irq_save(flags); 858 856 if (!kcpu->reqs_available) { 859 857 int old, avail = atomic_read(&ctx->reqs_available); 860 858 ··· 875 869 ret = true; 876 870 kcpu->reqs_available--; 877 871 out: 872 + local_irq_restore(flags); 878 873 preempt_enable(); 879 874 return ret; 880 875 }
+1 -1
fs/autofs4/inode.c
··· 210 210 int pipefd; 211 211 struct autofs_sb_info *sbi; 212 212 struct autofs_info *ino; 213 - int pgrp; 213 + int pgrp = 0; 214 214 bool pgrp_set = false; 215 215 int ret = -EINVAL; 216 216
+1 -1
fs/btrfs/compression.c
··· 821 821 822 822 spin_lock(workspace_lock); 823 823 if (*num_workspace < num_online_cpus()) { 824 - list_add_tail(workspace, idle_workspace); 824 + list_add(workspace, idle_workspace); 825 825 (*num_workspace)++; 826 826 spin_unlock(workspace_lock); 827 827 goto wake;
+5
fs/btrfs/dev-replace.c
··· 36 36 #include "check-integrity.h" 37 37 #include "rcu-string.h" 38 38 #include "dev-replace.h" 39 + #include "sysfs.h" 39 40 40 41 static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, 41 42 int scrub_ret); ··· 562 561 if (fs_info->fs_devices->latest_bdev == src_device->bdev) 563 562 fs_info->fs_devices->latest_bdev = tgt_device->bdev; 564 563 list_add(&tgt_device->dev_alloc_list, &fs_info->fs_devices->alloc_list); 564 + 565 + /* replace the sysfs entry */ 566 + btrfs_kobj_rm_device(fs_info, src_device); 567 + btrfs_kobj_add_device(fs_info, tgt_device); 565 568 566 569 btrfs_rm_dev_replace_blocked(fs_info); 567 570
+4 -1
fs/btrfs/disk-io.c
··· 369 369 out: 370 370 unlock_extent_cached(io_tree, eb->start, eb->start + eb->len - 1, 371 371 &cached_state, GFP_NOFS); 372 - btrfs_tree_read_unlock_blocking(eb); 372 + if (need_lock) 373 + btrfs_tree_read_unlock_blocking(eb); 373 374 return ret; 374 375 } 375 376 ··· 2905 2904 if (ret) 2906 2905 goto fail_qgroup; 2907 2906 2907 + mutex_lock(&fs_info->cleaner_mutex); 2908 2908 ret = btrfs_recover_relocation(tree_root); 2909 + mutex_unlock(&fs_info->cleaner_mutex); 2909 2910 if (ret < 0) { 2910 2911 printk(KERN_WARNING 2911 2912 "BTRFS: failed to recover relocation\n");
+1 -4
fs/btrfs/extent-tree.c
··· 5678 5678 struct btrfs_caching_control *next; 5679 5679 struct btrfs_caching_control *caching_ctl; 5680 5680 struct btrfs_block_group_cache *cache; 5681 - struct btrfs_space_info *space_info; 5682 5681 5683 5682 down_write(&fs_info->commit_root_sem); 5684 5683 ··· 5699 5700 fs_info->pinned_extents = &fs_info->freed_extents[0]; 5700 5701 5701 5702 up_write(&fs_info->commit_root_sem); 5702 - 5703 - list_for_each_entry_rcu(space_info, &fs_info->space_info, list) 5704 - percpu_counter_set(&space_info->total_bytes_pinned, 0); 5705 5703 5706 5704 update_global_block_rsv(fs_info); 5707 5705 } ··· 5737 5741 spin_lock(&cache->lock); 5738 5742 cache->pinned -= len; 5739 5743 space_info->bytes_pinned -= len; 5744 + percpu_counter_add(&space_info->total_bytes_pinned, -len); 5740 5745 if (cache->ro) { 5741 5746 space_info->bytes_readonly += len; 5742 5747 readonly = true;
+19 -18
fs/btrfs/ioctl.c
··· 136 136 void btrfs_update_iflags(struct inode *inode) 137 137 { 138 138 struct btrfs_inode *ip = BTRFS_I(inode); 139 - 140 - inode->i_flags &= ~(S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC); 139 + unsigned int new_fl = 0; 141 140 142 141 if (ip->flags & BTRFS_INODE_SYNC) 143 - inode->i_flags |= S_SYNC; 142 + new_fl |= S_SYNC; 144 143 if (ip->flags & BTRFS_INODE_IMMUTABLE) 145 - inode->i_flags |= S_IMMUTABLE; 144 + new_fl |= S_IMMUTABLE; 146 145 if (ip->flags & BTRFS_INODE_APPEND) 147 - inode->i_flags |= S_APPEND; 146 + new_fl |= S_APPEND; 148 147 if (ip->flags & BTRFS_INODE_NOATIME) 149 - inode->i_flags |= S_NOATIME; 148 + new_fl |= S_NOATIME; 150 149 if (ip->flags & BTRFS_INODE_DIRSYNC) 151 - inode->i_flags |= S_DIRSYNC; 150 + new_fl |= S_DIRSYNC; 151 + 152 + set_mask_bits(&inode->i_flags, 153 + S_SYNC | S_APPEND | S_IMMUTABLE | S_NOATIME | S_DIRSYNC, 154 + new_fl); 152 155 } 153 156 154 157 /* ··· 3142 3139 static void clone_update_extent_map(struct inode *inode, 3143 3140 const struct btrfs_trans_handle *trans, 3144 3141 const struct btrfs_path *path, 3145 - struct btrfs_file_extent_item *fi, 3146 3142 const u64 hole_offset, 3147 3143 const u64 hole_len) 3148 3144 { ··· 3156 3154 return; 3157 3155 } 3158 3156 3159 - if (fi) { 3157 + if (path) { 3158 + struct btrfs_file_extent_item *fi; 3159 + 3160 + fi = btrfs_item_ptr(path->nodes[0], path->slots[0], 3161 + struct btrfs_file_extent_item); 3160 3162 btrfs_extent_item_to_extent_map(inode, path, fi, false, em); 3161 3163 em->generation = -1; 3162 3164 if (btrfs_file_extent_type(path->nodes[0], fi) == ··· 3514 3508 btrfs_item_ptr_offset(leaf, slot), 3515 3509 size); 3516 3510 inode_add_bytes(inode, datal); 3517 - extent = btrfs_item_ptr(leaf, slot, 3518 - struct btrfs_file_extent_item); 3519 3511 } 3520 3512 3521 3513 /* If we have an implicit hole (NO_HOLES feature). 
*/ 3522 3514 if (drop_start < new_key.offset) 3523 3515 clone_update_extent_map(inode, trans, 3524 - path, NULL, drop_start, 3516 + NULL, drop_start, 3525 3517 new_key.offset - drop_start); 3526 3518 3527 - clone_update_extent_map(inode, trans, path, 3528 - extent, 0, 0); 3519 + clone_update_extent_map(inode, trans, path, 0, 0); 3529 3520 3530 3521 btrfs_mark_buffer_dirty(leaf); 3531 3522 btrfs_release_path(path); ··· 3565 3562 btrfs_end_transaction(trans, root); 3566 3563 goto out; 3567 3564 } 3565 + clone_update_extent_map(inode, trans, NULL, last_dest_end, 3566 + destoff + len - last_dest_end); 3568 3567 ret = clone_finish_inode_update(trans, inode, destoff + len, 3569 3568 destoff, olen); 3570 - if (ret) 3571 - goto out; 3572 - clone_update_extent_map(inode, trans, path, NULL, last_dest_end, 3573 - destoff + len - last_dest_end); 3574 3569 } 3575 3570 3576 3571 out:
+5 -4
fs/btrfs/print-tree.c
··· 54 54 btrfs_extent_data_ref_count(eb, ref)); 55 55 } 56 56 57 - static void print_extent_item(struct extent_buffer *eb, int slot) 57 + static void print_extent_item(struct extent_buffer *eb, int slot, int type) 58 58 { 59 59 struct btrfs_extent_item *ei; 60 60 struct btrfs_extent_inline_ref *iref; ··· 63 63 struct btrfs_disk_key key; 64 64 unsigned long end; 65 65 unsigned long ptr; 66 - int type; 67 66 u32 item_size = btrfs_item_size_nr(eb, slot); 68 67 u64 flags; 69 68 u64 offset; ··· 87 88 btrfs_extent_refs(eb, ei), btrfs_extent_generation(eb, ei), 88 89 flags); 89 90 90 - if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { 91 + if ((type == BTRFS_EXTENT_ITEM_KEY) && 92 + flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { 91 93 struct btrfs_tree_block_info *info; 92 94 info = (struct btrfs_tree_block_info *)(ei + 1); 93 95 btrfs_tree_block_key(eb, info, &key); ··· 223 223 btrfs_disk_root_refs(l, ri)); 224 224 break; 225 225 case BTRFS_EXTENT_ITEM_KEY: 226 - print_extent_item(l, i); 226 + case BTRFS_METADATA_ITEM_KEY: 227 + print_extent_item(l, i, type); 227 228 break; 228 229 case BTRFS_TREE_BLOCK_REF_KEY: 229 230 printk(KERN_INFO "\t\ttree block backref\n");
+3 -2
fs/btrfs/raid56.c
··· 1956 1956 * pages are going to be uptodate. 1957 1957 */ 1958 1958 for (stripe = 0; stripe < bbio->num_stripes; stripe++) { 1959 - if (rbio->faila == stripe || 1960 - rbio->failb == stripe) 1959 + if (rbio->faila == stripe || rbio->failb == stripe) { 1960 + atomic_inc(&rbio->bbio->error); 1961 1961 continue; 1962 + } 1962 1963 1963 1964 for (pagenr = 0; pagenr < nr_pages; pagenr++) { 1964 1965 struct page *p;
+6 -1
fs/btrfs/super.c
··· 522 522 case Opt_ssd_spread: 523 523 btrfs_set_and_info(root, SSD_SPREAD, 524 524 "use spread ssd allocation scheme"); 525 + btrfs_set_opt(info->mount_opt, SSD); 525 526 break; 526 527 case Opt_nossd: 527 - btrfs_clear_and_info(root, NOSSD, 528 + btrfs_set_and_info(root, NOSSD, 528 529 "not using ssd allocation scheme"); 529 530 btrfs_clear_opt(info->mount_opt, SSD); 530 531 break; ··· 1468 1467 goto restore; 1469 1468 1470 1469 /* recover relocation */ 1470 + mutex_lock(&fs_info->cleaner_mutex); 1471 1471 ret = btrfs_recover_relocation(root); 1472 + mutex_unlock(&fs_info->cleaner_mutex); 1472 1473 if (ret) 1473 1474 goto restore; 1474 1475 ··· 1810 1807 head = &cur_devices->devices; 1811 1808 list_for_each_entry(dev, head, dev_list) { 1812 1809 if (dev->missing) 1810 + continue; 1811 + if (!dev->name) 1813 1812 continue; 1814 1813 if (!first_dev || dev->devid < first_dev->devid) 1815 1814 first_dev = dev;
+29 -3
fs/btrfs/sysfs.c
··· 605 605 } 606 606 } 607 607 608 - static int add_device_membership(struct btrfs_fs_info *fs_info) 608 + int btrfs_kobj_rm_device(struct btrfs_fs_info *fs_info, 609 + struct btrfs_device *one_device) 610 + { 611 + struct hd_struct *disk; 612 + struct kobject *disk_kobj; 613 + 614 + if (!fs_info->device_dir_kobj) 615 + return -EINVAL; 616 + 617 + if (one_device) { 618 + disk = one_device->bdev->bd_part; 619 + disk_kobj = &part_to_dev(disk)->kobj; 620 + 621 + sysfs_remove_link(fs_info->device_dir_kobj, 622 + disk_kobj->name); 623 + } 624 + 625 + return 0; 626 + } 627 + 628 + int btrfs_kobj_add_device(struct btrfs_fs_info *fs_info, 629 + struct btrfs_device *one_device) 609 630 { 610 631 int error = 0; 611 632 struct btrfs_fs_devices *fs_devices = fs_info->fs_devices; 612 633 struct btrfs_device *dev; 613 634 614 - fs_info->device_dir_kobj = kobject_create_and_add("devices", 635 + if (!fs_info->device_dir_kobj) 636 + fs_info->device_dir_kobj = kobject_create_and_add("devices", 615 637 &fs_info->super_kobj); 638 + 616 639 if (!fs_info->device_dir_kobj) 617 640 return -ENOMEM; 618 641 ··· 644 621 struct kobject *disk_kobj; 645 622 646 623 if (!dev->bdev) 624 + continue; 625 + 626 + if (one_device && one_device != dev) 647 627 continue; 648 628 649 629 disk = dev->bdev->bd_part; ··· 692 666 if (error) 693 667 goto failure; 694 668 695 - error = add_device_membership(fs_info); 669 + error = btrfs_kobj_add_device(fs_info, NULL); 696 670 if (error) 697 671 goto failure; 698 672
+4
fs/btrfs/sysfs.h
··· 66 66 extern const char * const btrfs_feature_set_names[3]; 67 67 extern struct kobj_type space_info_ktype; 68 68 extern struct kobj_type btrfs_raid_ktype; 69 + int btrfs_kobj_add_device(struct btrfs_fs_info *fs_info, 70 + struct btrfs_device *one_device); 71 + int btrfs_kobj_rm_device(struct btrfs_fs_info *fs_info, 72 + struct btrfs_device *one_device); 69 73 #endif /* _BTRFS_SYSFS_H_ */
+5 -7
fs/btrfs/transaction.c
··· 386 386 bool reloc_reserved = false; 387 387 int ret; 388 388 389 + /* Send isn't supposed to start transactions. */ 390 + ASSERT(current->journal_info != (void *)BTRFS_SEND_TRANS_STUB); 391 + 389 392 if (test_bit(BTRFS_FS_STATE_ERROR, &root->fs_info->fs_state)) 390 393 return ERR_PTR(-EROFS); 391 394 392 - if (current->journal_info && 393 - current->journal_info != (void *)BTRFS_SEND_TRANS_STUB) { 395 + if (current->journal_info) { 394 396 WARN_ON(type & TRANS_EXTWRITERS); 395 397 h = current->journal_info; 396 398 h->use_count++; ··· 493 491 smp_mb(); 494 492 if (cur_trans->state >= TRANS_STATE_BLOCKED && 495 493 may_wait_transaction(root, type)) { 494 + current->journal_info = h; 496 495 btrfs_commit_transaction(h, root); 497 496 goto again; 498 497 } ··· 1618 1615 int ret; 1619 1616 1620 1617 ret = btrfs_run_delayed_items(trans, root); 1621 - /* 1622 - * running the delayed items may have added new refs. account 1623 - * them now so that they hinder processing of more delayed refs 1624 - * as little as possible. 1625 - */ 1626 1618 if (ret) 1627 1619 return ret; 1628 1620
+25 -5
fs/btrfs/volumes.c
··· 40 40 #include "rcu-string.h" 41 41 #include "math.h" 42 42 #include "dev-replace.h" 43 + #include "sysfs.h" 43 44 44 45 static int init_first_rw_device(struct btrfs_trans_handle *trans, 45 46 struct btrfs_root *root, ··· 555 554 * This is ok to do without rcu read locked because we hold the 556 555 * uuid mutex so nothing we touch in here is going to disappear. 557 556 */ 558 - name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS); 559 - if (!name) { 560 - kfree(device); 561 - goto error; 557 + if (orig_dev->name) { 558 + name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS); 559 + if (!name) { 560 + kfree(device); 561 + goto error; 562 + } 563 + rcu_assign_pointer(device->name, name); 562 564 } 563 - rcu_assign_pointer(device->name, name); 564 565 565 566 list_add(&device->dev_list, &fs_devices->devices); 566 567 device->fs_devices = fs_devices; ··· 1683 1680 if (device->bdev) 1684 1681 device->fs_devices->open_devices--; 1685 1682 1683 + /* remove sysfs entry */ 1684 + btrfs_kobj_rm_device(root->fs_info, device); 1685 + 1686 1686 call_rcu(&device->rcu, free_device); 1687 1687 1688 1688 num_devices = btrfs_super_num_devices(root->fs_info->super_copy) - 1; ··· 2149 2143 total_bytes = btrfs_super_num_devices(root->fs_info->super_copy); 2150 2144 btrfs_set_super_num_devices(root->fs_info->super_copy, 2151 2145 total_bytes + 1); 2146 + 2147 + /* add sysfs device entry */ 2148 + btrfs_kobj_add_device(root->fs_info, device); 2149 + 2152 2150 mutex_unlock(&root->fs_info->fs_devices->device_list_mutex); 2153 2151 2154 2152 if (seeding_dev) { 2153 + char fsid_buf[BTRFS_UUID_UNPARSED_SIZE]; 2155 2154 ret = init_first_rw_device(trans, root, device); 2156 2155 if (ret) { 2157 2156 btrfs_abort_transaction(trans, root, ret); ··· 2167 2156 btrfs_abort_transaction(trans, root, ret); 2168 2157 goto error_trans; 2169 2158 } 2159 + 2160 + /* Sprouting would change fsid of the mounted root, 2161 + * so rename the fsid on the sysfs 2162 + */ 2163 + snprintf(fsid_buf, BTRFS_UUID_UNPARSED_SIZE, "%pU", 2164 + root->fs_info->fsid); 2165 + if (kobject_rename(&root->fs_info->super_kobj, fsid_buf)) 2166 + goto error_trans; 2170 2167 } else { 2171 2168 ret = btrfs_add_device(trans, root, device); 2172 2169 if (ret) { ··· 2224 2205 unlock_chunks(root); 2225 2206 btrfs_end_transaction(trans, root); 2226 2207 rcu_string_free(device->name); 2208 + btrfs_kobj_rm_device(root->fs_info, device); 2227 2209 kfree(device); 2228 2210 error: 2229 2211 blkdev_put(bdev, FMODE_EXCL);
+1 -1
fs/btrfs/zlib.c
··· 136 136 if (workspace->def_strm.total_in > 8192 && 137 137 workspace->def_strm.total_in < 138 138 workspace->def_strm.total_out) { 139 - ret = -EIO; 139 + ret = -E2BIG; 140 140 goto out; 141 141 } 142 142 /* we need another page for writing out. Test this
+16
fs/ext4/balloc.c
··· 194 194 if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) { 195 195 ext4_error(sb, "Checksum bad for group %u", block_group); 196 196 grp = ext4_get_group_info(sb, block_group); 197 + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) 198 + percpu_counter_sub(&sbi->s_freeclusters_counter, 199 + grp->bb_free); 197 200 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); 201 + if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 202 + int count; 203 + count = ext4_free_inodes_count(sb, gdp); 204 + percpu_counter_sub(&sbi->s_freeinodes_counter, 205 + count); 206 + } 198 207 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); 199 208 return; 200 209 } ··· 368 359 { 369 360 ext4_fsblk_t blk; 370 361 struct ext4_group_info *grp = ext4_get_group_info(sb, block_group); 362 + struct ext4_sb_info *sbi = EXT4_SB(sb); 371 363 372 364 if (buffer_verified(bh)) 373 365 return; ··· 379 369 ext4_unlock_group(sb, block_group); 380 370 ext4_error(sb, "bg %u: block %llu: invalid block bitmap", 381 371 block_group, blk); 372 + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) 373 + percpu_counter_sub(&sbi->s_freeclusters_counter, 374 + grp->bb_free); 382 375 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); 383 376 return; 384 377 } ··· 389 376 desc, bh))) { 390 377 ext4_unlock_group(sb, block_group); 391 378 ext4_error(sb, "bg %u: bad block bitmap checksum", block_group); 379 + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) 380 + percpu_counter_sub(&sbi->s_freeclusters_counter, 381 + grp->bb_free); 392 382 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); 393 383 return; 394 384 }
+2 -2
fs/ext4/extents_status.c
··· 966 966 continue; 967 967 } 968 968 969 - if (ei->i_es_lru_nr == 0 || ei == locked_ei) 969 + if (ei->i_es_lru_nr == 0 || ei == locked_ei || 970 + !write_trylock(&ei->i_es_lock)) 970 971 continue; 971 972 972 - write_lock(&ei->i_es_lock); 973 973 shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan); 974 974 if (ei->i_es_lru_nr == 0) 975 975 list_del_init(&ei->i_es_lru);
+30 -7
fs/ext4/ialloc.c
··· 71 71 struct ext4_group_desc *gdp) 72 72 { 73 73 struct ext4_group_info *grp; 74 + struct ext4_sb_info *sbi = EXT4_SB(sb); 74 75 J_ASSERT_BH(bh, buffer_locked(bh)); 75 76 76 77 /* If checksum is bad mark all blocks and inodes use to prevent ··· 79 78 if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) { 80 79 ext4_error(sb, "Checksum bad for group %u", block_group); 81 80 grp = ext4_get_group_info(sb, block_group); 81 + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) 82 + percpu_counter_sub(&sbi->s_freeclusters_counter, 83 + grp->bb_free); 82 84 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); 85 + if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 86 + int count; 87 + count = ext4_free_inodes_count(sb, gdp); 88 + percpu_counter_sub(&sbi->s_freeinodes_counter, 89 + count); 90 + } 83 91 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); 84 92 return 0; 85 93 } ··· 126 116 struct buffer_head *bh = NULL; 127 117 ext4_fsblk_t bitmap_blk; 128 118 struct ext4_group_info *grp; 119 + struct ext4_sb_info *sbi = EXT4_SB(sb); 129 120 130 121 desc = ext4_get_group_desc(sb, block_group, NULL); 131 122 if (!desc) ··· 196 185 ext4_error(sb, "Corrupt inode bitmap - block_group = %u, " 197 186 "inode_bitmap = %llu", block_group, bitmap_blk); 198 187 grp = ext4_get_group_info(sb, block_group); 188 + if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 189 + int count; 190 + count = ext4_free_inodes_count(sb, desc); 191 + percpu_counter_sub(&sbi->s_freeinodes_counter, 192 + count); 193 + } 199 194 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); 200 195 return NULL; 201 196 } ··· 338 321 fatal = err; 339 322 } else { 340 323 ext4_error(sb, "bit already cleared for inode %lu", ino); 324 + if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 325 + int count; 326 + count = ext4_free_inodes_count(sb, gdp); 327 + percpu_counter_sub(&sbi->s_freeinodes_counter, 328 + count); 329 + } 341 330 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); 342 331 } 343 332 ··· 874 851 goto out; 875 852 } 876 853 854 + BUFFER_TRACE(group_desc_bh, "get_write_access"); 855 + err = ext4_journal_get_write_access(handle, group_desc_bh); 856 + if (err) { 857 + ext4_std_error(sb, err); 858 + goto out; 859 + } 860 + 877 861 /* We may have to initialize the block bitmap if it isn't already */ 878 862 if (ext4_has_group_desc_csum(sb) && 879 863 gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { ··· 915 885 ext4_std_error(sb, err); 916 886 goto out; 917 887 } 918 - } 919 - 920 - BUFFER_TRACE(group_desc_bh, "get_write_access"); 921 - err = ext4_journal_get_write_access(handle, group_desc_bh); 922 - if (err) { 923 - ext4_std_error(sb, err); 924 - goto out; 925 888 } 926 889 927 890 /* Update the relevant bg descriptor fields */
+19 -5
fs/ext4/indirect.c
··· 389 389 return 0; 390 390 failed: 391 391 for (; i >= 0; i--) { 392 - if (i != indirect_blks && branch[i].bh) 392 + /* 393 + * We want to ext4_forget() only freshly allocated indirect 394 + * blocks. Buffer for new_blocks[i-1] is at branch[i].bh and 395 + * buffer at branch[0].bh is indirect block / inode already 396 + * existing before ext4_alloc_branch() was called. 397 + */ 398 + if (i > 0 && i != indirect_blks && branch[i].bh) 393 399 ext4_forget(handle, 1, inode, branch[i].bh, 394 400 branch[i].bh->b_blocknr); 395 401 ext4_free_blocks(handle, inode, NULL, new_blocks[i], ··· 1316 1310 blk = *i_data; 1317 1311 if (level > 0) { 1318 1312 ext4_lblk_t first2; 1313 + ext4_lblk_t count2; 1314 + 1319 1315 bh = sb_bread(inode->i_sb, le32_to_cpu(blk)); 1320 1316 if (!bh) { 1321 1317 EXT4_ERROR_INODE_BLOCK(inode, le32_to_cpu(blk), 1322 1318 "Read failure"); 1323 1319 return -EIO; 1324 1320 } 1325 - first2 = (first > offset) ? first - offset : 0; 1321 + if (first > offset) { 1322 + first2 = first - offset; 1323 + count2 = count; 1324 + } else { 1325 + first2 = 0; 1326 + count2 = count - (offset - first); 1327 + } 1326 1328 ret = free_hole_blocks(handle, inode, bh, 1327 1329 (__le32 *)bh->b_data, level - 1, 1328 - first2, count - offset, 1330 + first2, count2, 1329 1331 inode->i_sb->s_blocksize >> 2); 1330 1332 if (ret) { 1331 1333 brelse(bh); ··· 1343 1329 if (level == 0 || 1344 1330 (bh && all_zeroes((__le32 *)bh->b_data, 1345 1331 (__le32 *)bh->b_data + addr_per_block))) { 1346 - ext4_free_data(handle, inode, parent_bh, &blk, &blk+1); 1347 - *i_data = 0; 1332 + ext4_free_data(handle, inode, parent_bh, 1333 + i_data, i_data + 1); 1348 1334 } 1349 1335 brelse(bh); 1350 1336 bh = NULL;
+10 -2
fs/ext4/mballoc.c
··· 722 722 void *buddy, void *bitmap, ext4_group_t group) 723 723 { 724 724 struct ext4_group_info *grp = ext4_get_group_info(sb, group); 725 + struct ext4_sb_info *sbi = EXT4_SB(sb); 725 726 ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb); 726 727 ext4_grpblk_t i = 0; 727 728 ext4_grpblk_t first; ··· 752 751 753 752 if (free != grp->bb_free) { 754 753 ext4_grp_locked_error(sb, group, 0, 0, 755 - "%u clusters in bitmap, %u in gd; " 756 - "block bitmap corrupt.", 754 + "block bitmap and bg descriptor " 755 + "inconsistent: %u vs %u free clusters", 757 756 free, grp->bb_free); 758 757 /* 759 758 * If we intend to continue, we consider group descriptor 760 759 * corrupt and update bb_free using bitmap value 761 760 */ 762 761 grp->bb_free = free; 762 + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) 763 + percpu_counter_sub(&sbi->s_freeclusters_counter, 764 + grp->bb_free); 763 765 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); 764 766 } 765 767 mb_set_largest_free_order(sb, grp); ··· 1435 1431 right_is_free = !mb_test_bit(last + 1, e4b->bd_bitmap); 1436 1432 1437 1433 if (unlikely(block != -1)) { 1434 + struct ext4_sb_info *sbi = EXT4_SB(sb); 1438 1435 ext4_fsblk_t blocknr; 1439 1436 1440 1437 blocknr = ext4_group_first_block_no(sb, e4b->bd_group); ··· 1446 1441 "freeing already freed block " 1447 1442 "(bit %u); block bitmap corrupt.", 1448 1443 block); 1444 + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)) 1445 + percpu_counter_sub(&sbi->s_freeclusters_counter, 1446 + e4b->bd_info->bb_free); 1449 1447 /* Mark the block group as corrupt. */ 1450 1448 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, 1451 1449 &e4b->bd_info->bb_state);
+28 -32
fs/ext4/super.c
··· 1525 1525 arg = JBD2_DEFAULT_MAX_COMMIT_AGE; 1526 1526 sbi->s_commit_interval = HZ * arg; 1527 1527 } else if (token == Opt_max_batch_time) { 1528 - if (arg == 0) 1529 - arg = EXT4_DEF_MAX_BATCH_TIME; 1530 1528 sbi->s_max_batch_time = arg; 1531 1529 } else if (token == Opt_min_batch_time) { 1532 1530 sbi->s_min_batch_time = arg; ··· 2807 2809 es = sbi->s_es; 2808 2810 2809 2811 if (es->s_error_count) 2810 - ext4_msg(sb, KERN_NOTICE, "error count: %u", 2812 + /* fsck newer than v1.41.13 is needed to clean this condition. */ 2813 + ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u", 2811 2814 le32_to_cpu(es->s_error_count)); 2812 2815 if (es->s_first_error_time) { 2813 - printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d", 2816 + printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d", 2814 2817 sb->s_id, le32_to_cpu(es->s_first_error_time), 2815 2818 (int) sizeof(es->s_first_error_func), 2816 2819 es->s_first_error_func, ··· 2825 2826 printk("\n"); 2826 2827 } 2827 2828 if (es->s_last_error_time) { 2828 - printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d", 2829 + printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d", 2829 2830 sb->s_id, le32_to_cpu(es->s_last_error_time), 2830 2831 (int) sizeof(es->s_last_error_func), 2831 2832 es->s_last_error_func, ··· 3879 3880 goto failed_mount2; 3880 3881 } 3881 3882 } 3882 - 3883 - /* 3884 - * set up enough so that it can read an inode, 3885 - * and create new inode for buddy allocator 3886 - */ 3887 - sbi->s_gdb_count = db_count; 3888 - if (!test_opt(sb, NOLOAD) && 3889 - EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) 3890 - sb->s_op = &ext4_sops; 3891 - else 3892 - sb->s_op = &ext4_nojournal_sops; 3893 - 3894 - ext4_ext_init(sb); 3895 - err = ext4_mb_init(sb); 3896 - if (err) { 3897 - ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", 3898 - err); 3899 - goto failed_mount2; 3900 - } 3901 - 3902 3883 if (!ext4_check_descriptors(sb, &first_not_zeroed)) { 3903 3884 ext4_msg(sb, KERN_ERR, "group descriptors corrupted!"); 3904 - goto failed_mount2a; 3885 + goto failed_mount2; 3905 3886 } 3906 3887 if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) 3907 3888 if (!ext4_fill_flex_info(sb)) { 3908 3889 ext4_msg(sb, KERN_ERR, 3909 3890 "unable to initialize " 3910 3891 "flex_bg meta info!"); 3911 - goto failed_mount2a; 3892 + goto failed_mount2; 3912 3893 } 3913 3894 3895 + sbi->s_gdb_count = db_count; 3914 3896 get_random_bytes(&sbi->s_next_generation, sizeof(u32)); 3915 3897 spin_lock_init(&sbi->s_next_gen_lock); 3916 3898 ··· 3926 3946 sbi->s_stripe = ext4_get_stripe_size(sbi); 3927 3947 sbi->s_extent_max_zeroout_kb = 32; 3928 3948 3949 + /* 3950 + * set up enough so that it can read an inode 3951 + */ 3952 + if (!test_opt(sb, NOLOAD) && 3953 + EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) 3954 + sb->s_op = &ext4_sops; 3955 + else 3956 + sb->s_op = &ext4_nojournal_sops; 3929 3957 sb->s_export_op = &ext4_export_ops; 3930 3958 sb->s_xattr = ext4_xattr_handlers; 3931 3959 #ifdef CONFIG_QUOTA ··· 4123 4135 if (err) { 4124 4136 ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for " 4125 4137 "reserved pool", ext4_calculate_resv_clusters(sb)); 4126 - goto failed_mount5; 4138 + goto failed_mount4a; 4127 4139 } 4128 4140 4129 4141 err = ext4_setup_system_zone(sb); 4130 4142 if (err) { 4131 4143 ext4_msg(sb, KERN_ERR, "failed to initialize system " 4132 4144 "zone (%d)", err); 4145 + goto failed_mount4a; 4146 + } 4147 + 4148 + ext4_ext_init(sb); 4149 + err = ext4_mb_init(sb); 4150 + if (err) { 4151 + ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", 4152 + err); 4133 4153 goto failed_mount5; 4134 4154 } 4135 4155 ··· 4214 4218 failed_mount7: 4215 4219 ext4_unregister_li_request(sb); 4216 4220 failed_mount6: 4217 - ext4_release_system_zone(sb); 4221 + ext4_mb_release(sb); 4218 4222 failed_mount5: 4223 + ext4_ext_release(sb); 4224 + ext4_release_system_zone(sb); 4225 + failed_mount4a: 4219 4226 dput(sb->s_root); 4220 4227 sb->s_root = NULL; 4221 4228 failed_mount4: ··· 4242 4243 percpu_counter_destroy(&sbi->s_extent_cache_cnt); 4243 4244 if (sbi->s_mmp_tsk) 4244 4245 kthread_stop(sbi->s_mmp_tsk); 4245 - failed_mount2a: 4246 - ext4_mb_release(sb); 4247 4246 failed_mount2: 4248 4247 for (i = 0; i < db_count; i++) 4249 4248 brelse(sbi->s_group_desc[i]); 4250 4249 ext4_kvfree(sbi->s_group_desc); 4251 4250 failed_mount: 4252 - ext4_ext_release(sb); 4253 4251 if (sbi->s_chksum_driver) 4254 4252 crypto_free_shash(sbi->s_chksum_driver); 4255 4253 if (sbi->s_proc) {
+18 -5
fs/f2fs/data.c
··· 608 608 * b. do not use extent cache for better performance 609 609 * c. give the block addresses to blockdev 610 610 */ 611 - static int get_data_block(struct inode *inode, sector_t iblock, 612 - struct buffer_head *bh_result, int create) 611 + static int __get_data_block(struct inode *inode, sector_t iblock, 612 + struct buffer_head *bh_result, int create, bool fiemap) 613 613 { 614 614 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); 615 615 unsigned int blkbits = inode->i_sb->s_blocksize_bits; ··· 637 637 err = 0; 638 638 goto unlock_out; 639 639 } 640 - if (dn.data_blkaddr == NEW_ADDR) 640 + if (dn.data_blkaddr == NEW_ADDR && !fiemap) 641 641 goto put_out; 642 642 643 643 if (dn.data_blkaddr != NULL_ADDR) { ··· 671 671 err = 0; 672 672 goto unlock_out; 673 673 } 674 - if (dn.data_blkaddr == NEW_ADDR) 674 + if (dn.data_blkaddr == NEW_ADDR && !fiemap) 675 675 goto put_out; 676 676 677 677 end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode)); ··· 708 708 return err; 709 709 } 710 710 711 + static int get_data_block(struct inode *inode, sector_t iblock, 712 + struct buffer_head *bh_result, int create) 713 + { 714 + return __get_data_block(inode, iblock, bh_result, create, false); 715 + } 716 + 717 + static int get_data_block_fiemap(struct inode *inode, sector_t iblock, 718 + struct buffer_head *bh_result, int create) 719 + { 720 + return __get_data_block(inode, iblock, bh_result, create, true); 721 + } 722 + 711 723 int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 712 724 u64 start, u64 len) 713 725 { 714 - return generic_block_fiemap(inode, fieinfo, start, len, get_data_block); 726 + return generic_block_fiemap(inode, fieinfo, 727 + start, len, get_data_block_fiemap); 715 728 } 716 729 717 730 static int f2fs_read_data_page(struct file *file, struct page *page)
+1 -1
fs/f2fs/dir.c
··· 376 376 377 377 put_error: 378 378 f2fs_put_page(page, 1); 379 + error: 379 380 /* once the failed inode becomes a bad inode, i_mode is S_IFREG */ 380 381 truncate_inode_pages(&inode->i_data, 0); 381 382 truncate_blocks(inode, 0); 382 383 remove_dirty_dir_inode(inode); 383 - error: 384 384 remove_inode_page(inode); 385 385 return ERR_PTR(err); 386 386 }
+2 -4
fs/f2fs/f2fs.h
··· 342 342 struct dirty_seglist_info *dirty_info; /* dirty segment information */ 343 343 struct curseg_info *curseg_array; /* active segment information */ 344 344 345 - struct list_head wblist_head; /* list of under-writeback pages */ 346 - spinlock_t wblist_lock; /* lock for checkpoint */ 347 - 348 345 block_t seg0_blkaddr; /* block address of 0'th segment */ 349 346 block_t main_blkaddr; /* start block address of main area */ 350 347 block_t ssa_blkaddr; /* start block address of SSA area */ ··· 641 644 */ 642 645 static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid) 643 646 { 644 - WARN_ON((nid >= NM_I(sbi)->max_nid)); 647 + if (unlikely(nid < F2FS_ROOT_INO(sbi))) 648 + return -EINVAL; 645 649 if (unlikely(nid >= NM_I(sbi)->max_nid)) 646 650 return -EINVAL; 647 651 return 0;
+8 -4
fs/f2fs/file.c
··· 659 659 off_start = offset & (PAGE_CACHE_SIZE - 1); 660 660 off_end = (offset + len) & (PAGE_CACHE_SIZE - 1); 661 661 662 + f2fs_lock_op(sbi); 663 + 662 664 for (index = pg_start; index <= pg_end; index++) { 663 665 struct dnode_of_data dn; 664 666 665 - f2fs_lock_op(sbi); 667 + if (index == pg_end && !off_end) 668 + goto noalloc; 669 + 666 670 set_new_dnode(&dn, inode, NULL, NULL, 0); 667 671 ret = f2fs_reserve_block(&dn, index); 668 - f2fs_unlock_op(sbi); 669 672 if (ret) 670 673 break; 671 - 674 + noalloc: 672 675 if (pg_start == pg_end) 673 676 new_size = offset + len; 674 677 else if (index == pg_start && off_start) ··· 686 683 i_size_read(inode) < new_size) { 687 684 i_size_write(inode, new_size); 688 685 mark_inode_dirty(inode); 689 - f2fs_write_inode(inode, NULL); 686 + update_inode_page(inode); 690 687 } 688 + f2fs_unlock_op(sbi); 691 689 692 690 return ret; 693 691 }
+1
fs/f2fs/inode.c
··· 78 78 if (check_nid_range(sbi, inode->i_ino)) { 79 79 f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu", 80 80 (unsigned long) inode->i_ino); 81 + WARN_ON(1); 81 82 return -EINVAL; 82 83 } 83 84
+6 -7
fs/f2fs/namei.c
··· 417 417 } 418 418 419 419 f2fs_set_link(new_dir, new_entry, new_page, old_inode); 420 - down_write(&F2FS_I(old_inode)->i_sem); 421 - F2FS_I(old_inode)->i_pino = new_dir->i_ino; 422 - up_write(&F2FS_I(old_inode)->i_sem); 423 420 424 421 new_inode->i_ctime = CURRENT_TIME; 425 422 down_write(&F2FS_I(new_inode)->i_sem); ··· 445 448 } 446 449 } 447 450 451 + down_write(&F2FS_I(old_inode)->i_sem); 452 + file_lost_pino(old_inode); 453 + up_write(&F2FS_I(old_inode)->i_sem); 454 + 448 455 old_inode->i_ctime = CURRENT_TIME; 449 456 mark_inode_dirty(old_inode); 450 457 ··· 458 457 if (old_dir != new_dir) { 459 458 f2fs_set_link(old_inode, old_dir_entry, 460 459 old_dir_page, new_dir); 461 - down_write(&F2FS_I(old_inode)->i_sem); 462 - F2FS_I(old_inode)->i_pino = new_dir->i_ino; 463 - up_write(&F2FS_I(old_inode)->i_sem); 464 460 update_inode_page(old_inode); 465 461 } else { 466 462 kunmap(old_dir_page); ··· 472 474 return 0; 473 475 474 476 put_out_dir: 475 - f2fs_put_page(new_page, 1); 477 + kunmap(new_page); 478 + f2fs_put_page(new_page, 0); 476 479 out_dir: 477 480 if (old_dir_entry) { 478 481 kunmap(old_dir_page);
+2
fs/f2fs/node.c
··· 42 42 mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> 12; 43 43 res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2); 44 44 } else if (type == DIRTY_DENTS) { 45 + if (sbi->sb->s_bdi->dirty_exceeded) 46 + return false; 45 47 mem_size = get_pages(sbi, F2FS_DIRTY_DENTS); 46 48 res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 1); 47 49 }
+2 -3
fs/f2fs/segment.c
··· 272 272 return -ENOMEM; 273 273 spin_lock_init(&fcc->issue_lock); 274 274 init_waitqueue_head(&fcc->flush_wait_queue); 275 + sbi->sm_info->cmd_control_info = fcc; 275 276 fcc->f2fs_issue_flush = kthread_run(issue_flush_thread, sbi, 276 277 "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev)); 277 278 if (IS_ERR(fcc->f2fs_issue_flush)) { 278 279 err = PTR_ERR(fcc->f2fs_issue_flush); 279 280 kfree(fcc); 281 + sbi->sm_info->cmd_control_info = NULL; 280 282 return err; 281 283 } 282 - sbi->sm_info->cmd_control_info = fcc; 283 284 284 285 return err; 285 286 } ··· 1886 1885 1887 1886 /* init sm info */ 1888 1887 sbi->sm_info = sm_info; 1889 - INIT_LIST_HEAD(&sm_info->wblist_head); 1890 - spin_lock_init(&sm_info->wblist_lock); 1891 1888 sm_info->seg0_blkaddr = le32_to_cpu(raw_super->segment0_blkaddr); 1892 1889 sm_info->main_blkaddr = le32_to_cpu(raw_super->main_blkaddr); 1893 1890 sm_info->segment_count = le32_to_cpu(raw_super->segment_count);
+1 -3
fs/f2fs/super.c
··· 689 689 struct f2fs_sb_info *sbi = F2FS_SB(sb); 690 690 struct inode *inode; 691 691 692 - if (unlikely(ino < F2FS_ROOT_INO(sbi))) 693 - return ERR_PTR(-ESTALE); 694 - if (unlikely(ino >= NM_I(sbi)->max_nid)) 692 + if (check_nid_range(sbi, ino)) 695 693 return ERR_PTR(-ESTALE); 696 694 697 695 /*
+23 -28
fs/fuse/dev.c
··· 643 643 unsigned long seglen; 644 644 unsigned long addr; 645 645 struct page *pg; 646 - void *mapaddr; 647 - void *buf; 648 646 unsigned len; 647 + unsigned offset; 649 648 unsigned move_pages:1; 650 649 }; 651 650 ··· 665 666 if (cs->currbuf) { 666 667 struct pipe_buffer *buf = cs->currbuf; 667 668 668 - if (!cs->write) { 669 - kunmap_atomic(cs->mapaddr); 670 - } else { 671 - kunmap_atomic(cs->mapaddr); 669 + if (cs->write) 672 670 buf->len = PAGE_SIZE - cs->len; 673 - } 674 671 cs->currbuf = NULL; 675 - cs->mapaddr = NULL; 676 - } else if (cs->mapaddr) { 677 - kunmap_atomic(cs->mapaddr); 672 + } else if (cs->pg) { 678 673 if (cs->write) { 679 674 flush_dcache_page(cs->pg); 680 675 set_page_dirty_lock(cs->pg); 681 676 } 682 677 put_page(cs->pg); 683 - cs->mapaddr = NULL; 684 678 } 679 + cs->pg = NULL; 685 680 } 686 681 687 682 /* ··· 684 691 */ 685 692 static int fuse_copy_fill(struct fuse_copy_state *cs) 686 693 { 687 - unsigned long offset; 694 + struct page *page; 688 695 int err; 689 696 690 697 unlock_request(cs->fc, cs->req); ··· 699 706 700 707 BUG_ON(!cs->nr_segs); 701 708 cs->currbuf = buf; 702 - cs->mapaddr = kmap_atomic(buf->page); 709 + cs->pg = buf->page; 710 + cs->offset = buf->offset; 703 711 cs->len = buf->len; 704 - cs->buf = cs->mapaddr + buf->offset; 705 712 cs->pipebufs++; 706 713 cs->nr_segs--; 707 714 } else { 708 - struct page *page; 709 - 710 715 if (cs->nr_segs == cs->pipe->buffers) 711 716 return -EIO; 712 717 ··· 717 726 buf->len = 0; 718 727 719 728 cs->currbuf = buf; 720 - cs->mapaddr = kmap_atomic(page); 721 - cs->buf = cs->mapaddr; 729 + cs->pg = page; 730 + cs->offset = 0; 722 731 cs->len = PAGE_SIZE; 723 732 cs->pipebufs++; 724 733 cs->nr_segs++; ··· 731 740 cs->iov++; 732 741 cs->nr_segs--; 733 742 } 734 - err = get_user_pages_fast(cs->addr, 1, cs->write, &cs->pg); 743 + err = get_user_pages_fast(cs->addr, 1, cs->write, &page); 735 744 if (err < 0) 736 745 return err; 737 746 BUG_ON(err != 1); 738 - offset = cs->addr % PAGE_SIZE; 739 - cs->mapaddr = kmap_atomic(cs->pg); 740 - cs->buf = cs->mapaddr + offset; 741 - cs->len = min(PAGE_SIZE - offset, cs->seglen); 747 + cs->pg = page; 748 + cs->offset = cs->addr % PAGE_SIZE; 749 + cs->len = min(PAGE_SIZE - cs->offset, cs->seglen); 742 750 cs->seglen -= cs->len; 743 751 cs->addr += cs->len; 744 752 } ··· 750 760 { 751 761 unsigned ncpy = min(*size, cs->len); 752 762 if (val) { 763 + void *pgaddr = kmap_atomic(cs->pg); 764 + void *buf = pgaddr + cs->offset; 765 + 753 766 if (cs->write) 754 - memcpy(cs->buf, *val, ncpy); 767 + memcpy(buf, *val, ncpy); 755 768 else 756 - memcpy(*val, cs->buf, ncpy); 769 + memcpy(*val, buf, ncpy); 770 + 771 + kunmap_atomic(pgaddr); 757 772 *val += ncpy; 758 773 } 759 774 *size -= ncpy; 760 775 cs->len -= ncpy; 761 - cs->buf += ncpy; 776 + cs->offset += ncpy; 762 777 return ncpy; 763 778 } 764 779 ··· 869 874 out_fallback_unlock: 870 875 unlock_page(newpage); 871 876 out_fallback: 872 - cs->mapaddr = kmap_atomic(buf->page); 873 - cs->buf = cs->mapaddr + buf->offset; 877 + cs->pg = buf->page; 878 + cs->offset = buf->offset; 874 879 875 880 err = lock_request(cs->fc, cs->req); 876 881 if (err)
+25 -18
fs/fuse/dir.c
··· 198 198 inode = ACCESS_ONCE(entry->d_inode); 199 199 if (inode && is_bad_inode(inode)) 200 200 goto invalid; 201 - else if (fuse_dentry_time(entry) < get_jiffies_64()) { 201 + else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || 202 + (flags & LOOKUP_REVAL)) { 202 203 int err; 203 204 struct fuse_entry_out outarg; 204 205 struct fuse_req *req; ··· 815 814 return err; 816 815 } 817 816 818 - static int fuse_rename(struct inode *olddir, struct dentry *oldent, 819 - struct inode *newdir, struct dentry *newent) 820 - { 821 - return fuse_rename_common(olddir, oldent, newdir, newent, 0, 822 - FUSE_RENAME, sizeof(struct fuse_rename_in)); 823 - } 824 - 825 817 static int fuse_rename2(struct inode *olddir, struct dentry *oldent, 826 818 struct inode *newdir, struct dentry *newent, 827 819 unsigned int flags) ··· 825 831 if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE)) 826 832 return -EINVAL; 827 833 828 - if (fc->no_rename2 || fc->minor < 23) 829 - return -EINVAL; 834 + if (flags) { 835 + if (fc->no_rename2 || fc->minor < 23) 836 + return -EINVAL; 830 837 831 - err = fuse_rename_common(olddir, oldent, newdir, newent, flags, 832 - FUSE_RENAME2, sizeof(struct fuse_rename2_in)); 833 - if (err == -ENOSYS) { 834 - fc->no_rename2 = 1; 835 - err = -EINVAL; 838 + err = fuse_rename_common(olddir, oldent, newdir, newent, flags, 839 + FUSE_RENAME2, 840 + sizeof(struct fuse_rename2_in)); 841 + if (err == -ENOSYS) { 842 + fc->no_rename2 = 1; 843 + err = -EINVAL; 844 + } 845 + } else { 846 + err = fuse_rename_common(olddir, oldent, newdir, newent, 0, 847 + FUSE_RENAME, 848 + sizeof(struct fuse_rename_in)); 836 849 } 837 - return err; 838 850 851 + return err; 852 + } 853 + 854 + static int fuse_rename(struct inode *olddir, struct dentry *oldent, 855 + struct inode *newdir, struct dentry *newent) 856 + { 857 + return fuse_rename2(olddir, oldent, newdir, newent, 0); 839 858 } 840 859 841 860 static int fuse_link(struct dentry *entry, struct inode *newdir, ··· 992 985 int err; 993 986 bool r; 994 987 995 - if (fi->i_time < get_jiffies_64()) { 988 + if (time_before64(fi->i_time, get_jiffies_64())) { 996 989 r = true; 997 990 err = fuse_do_getattr(inode, stat, file); 998 991 } else { ··· 1178 1171 ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { 1179 1172 struct fuse_inode *fi = get_fuse_inode(inode); 1180 1173 1181 - if (fi->i_time < get_jiffies_64()) { 1174 + if (time_before64(fi->i_time, get_jiffies_64())) { 1182 1175 refreshed = true; 1183 1176 1184 1177 err = fuse_perm_getattr(inode, mask);
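Aside: the fuse/dir.c hunk replaces raw `<` comparisons of 64-bit jiffies timestamps with time_before64(), which compares via signed subtraction and therefore stays correct if the counter ever wraps. A userspace model of the idiom (`time_before64` here is a local sketch, not the jiffies.h macro):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the wrap-safe comparison idiom: (s64)(a - b) < 0 is true
 * exactly when a is "before" b modulo 2^64, while a raw a < b breaks
 * as soon as one operand has wrapped past zero. */
static int time_before64(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}
```

The same trick underlies the 32-bit time_before()/time_after() macros, where wraparound is a practical concern rather than a theoretical one.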
+5 -3
fs/fuse/file.c
··· 1687 1687 error = -EIO; 1688 1688 req->ff = fuse_write_file_get(fc, fi); 1689 1689 if (!req->ff) 1690 - goto err_free; 1690 + goto err_nofile; 1691 1691 1692 1692 fuse_write_fill(req, req->ff, page_offset(page), 0); 1693 1693 ··· 1715 1715 1716 1716 return 0; 1717 1717 1718 + err_nofile: 1719 + __free_page(tmp_page); 1718 1720 err_free: 1719 1721 fuse_request_free(req); 1720 1722 err: ··· 1957 1955 data.ff = NULL; 1958 1956 1959 1957 err = -ENOMEM; 1960 - data.orig_pages = kzalloc(sizeof(struct page *) * 1961 - FUSE_MAX_PAGES_PER_REQ, 1958 + data.orig_pages = kcalloc(FUSE_MAX_PAGES_PER_REQ, 1959 + sizeof(struct page *), 1962 1960 GFP_NOFS); 1963 1961 if (!data.orig_pages) 1964 1962 goto out;
+17 -5
fs/fuse/inode.c
··· 478 478 {OPT_ERR, NULL} 479 479 }; 480 480 481 + static int fuse_match_uint(substring_t *s, unsigned int *res) 482 + { 483 + int err = -ENOMEM; 484 + char *buf = match_strdup(s); 485 + if (buf) { 486 + err = kstrtouint(buf, 10, res); 487 + kfree(buf); 488 + } 489 + return err; 490 + } 491 + 481 492 static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) 482 493 { 483 494 char *p; ··· 499 488 while ((p = strsep(&opt, ",")) != NULL) { 500 489 int token; 501 490 int value; 491 + unsigned uv; 502 492 substring_t args[MAX_OPT_ARGS]; 503 493 if (!*p) 504 494 continue; ··· 523 511 break; 524 512 525 513 case OPT_USER_ID: 526 - if (match_int(&args[0], &value)) 514 + if (fuse_match_uint(&args[0], &uv)) 527 515 return 0; 528 - d->user_id = make_kuid(current_user_ns(), value); 516 + d->user_id = make_kuid(current_user_ns(), uv); 529 517 if (!uid_valid(d->user_id)) 530 518 return 0; 531 519 d->user_id_present = 1; 532 520 break; 533 521 534 522 case OPT_GROUP_ID: 535 - if (match_int(&args[0], &value)) 523 + if (fuse_match_uint(&args[0], &uv)) 536 524 return 0; 537 - d->group_id = make_kgid(current_user_ns(), value); 525 + d->group_id = make_kgid(current_user_ns(), uv); 538 526 if (!gid_valid(d->group_id)) 539 527 return 0; 540 528 d->group_id_present = 1; ··· 1018 1006 1019 1007 sb->s_flags &= ~(MS_NOSEC | MS_I_VERSION); 1020 1008 1021 - if (!parse_fuse_opt((char *) data, &d, is_bdev)) 1009 + if (!parse_fuse_opt(data, &d, is_bdev)) 1022 1010 goto err; 1023 1011 1024 1012 if (is_bdev) {
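Aside: the fuse/inode.c hunk parses `user_id=`/`group_id=` with an unsigned parser (kstrtouint) because a signed match_int() rejects UIDs above INT_MAX, such as 4294967294 (-2 as a u32). A userspace sketch of the same idea with strtoul (`parse_uint` is a hypothetical helper, not a kernel API):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse a base-10 unsigned int strictly: no sign, no trailing junk,
 * no overflow. Mirrors what kstrtouint() guarantees in the kernel. */
static int parse_uint(const char *s, unsigned int *res)
{
	char *end;
	unsigned long val;

	if (*s == '-')			/* reject negatives outright */
		return -1;
	errno = 0;
	val = strtoul(s, &end, 10);
	if (errno || end == s || *end != '\0' || val > UINT_MAX)
		return -1;
	*res = (unsigned int)val;
	return 0;
}
```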
+4 -1
fs/jbd2/transaction.c
··· 1588 1588 * to perform a synchronous write. We do this to detect the 1589 1589 * case where a single process is doing a stream of sync 1590 1590 * writes. No point in waiting for joiners in that case. 1591 + * 1592 + * Setting max_batch_time to 0 disables this completely. 1591 1593 */ 1592 1594 pid = current->pid; 1593 - if (handle->h_sync && journal->j_last_sync_writer != pid) { 1595 + if (handle->h_sync && journal->j_last_sync_writer != pid && 1596 + journal->j_max_batch_time) { 1594 1597 u64 commit_time, trans_time; 1595 1598 1596 1599 journal->j_last_sync_writer = pid;
+55 -14
fs/kernfs/file.c
··· 39 39 struct list_head files; /* goes through kernfs_open_file.list */ 40 40 }; 41 41 42 + /* 43 + * kernfs_notify() may be called from any context and bounces notifications 44 + * through a work item. To minimize space overhead in kernfs_node, the 45 + * pending queue is implemented as a singly linked list of kernfs_nodes. 46 + * The list is terminated with the self pointer so that whether a 47 + * kernfs_node is on the list or not can be determined by testing the next 48 + * pointer for NULL. 49 + */ 50 + #define KERNFS_NOTIFY_EOL ((void *)&kernfs_notify_list) 51 + 52 + static DEFINE_SPINLOCK(kernfs_notify_lock); 53 + static struct kernfs_node *kernfs_notify_list = KERNFS_NOTIFY_EOL; 54 + 42 55 static struct kernfs_open_file *kernfs_of(struct file *file) 43 56 { 44 57 return ((struct seq_file *)file->private_data)->private; ··· 796 783 return DEFAULT_POLLMASK|POLLERR|POLLPRI; 797 784 } 798 785 799 - /** 800 - * kernfs_notify - notify a kernfs file 801 - * @kn: file to notify 802 - * 803 - * Notify @kn such that poll(2) on @kn wakes up. 804 - */ 805 - void kernfs_notify(struct kernfs_node *kn) 786 + static void kernfs_notify_workfn(struct work_struct *work) 806 787 { 807 - struct kernfs_root *root = kernfs_root(kn); 788 + struct kernfs_node *kn; 808 789 struct kernfs_open_node *on; 809 790 struct kernfs_super_info *info; 810 - unsigned long flags; 811 - 812 - if (WARN_ON(kernfs_type(kn) != KERNFS_FILE)) 791 + repeat: 792 + /* pop one off the notify_list */ 793 + spin_lock_irq(&kernfs_notify_lock); 794 + kn = kernfs_notify_list; 795 + if (kn == KERNFS_NOTIFY_EOL) { 796 + spin_unlock_irq(&kernfs_notify_lock); 813 797 return; 798 + } 799 + kernfs_notify_list = kn->attr.notify_next; 800 + kn->attr.notify_next = NULL; 801 + spin_unlock_irq(&kernfs_notify_lock); 814 802 815 803 /* kick poll */ 816 - spin_lock_irqsave(&kernfs_open_node_lock, flags); 804 + spin_lock_irq(&kernfs_open_node_lock); 817 805 818 806 on = kn->attr.open; 819 807 if (on) { ··· 822 808 wake_up_interruptible(&on->poll); 823 809 } 824 810 825 - spin_unlock_irqrestore(&kernfs_open_node_lock, flags); 811 + spin_unlock_irq(&kernfs_open_node_lock); 826 812 827 813 /* kick fsnotify */ 828 814 mutex_lock(&kernfs_mutex); 829 815 830 - list_for_each_entry(info, &root->supers, node) { 816 + list_for_each_entry(info, &kernfs_root(kn)->supers, node) { 831 817 struct inode *inode; 832 818 struct dentry *dentry; 833 819 ··· 847 833 } 848 834 849 835 mutex_unlock(&kernfs_mutex); 836 + kernfs_put(kn); 837 + goto repeat; 838 + } 839 + 840 + /** 841 + * kernfs_notify - notify a kernfs file 842 + * @kn: file to notify 843 + * 844 + * Notify @kn such that poll(2) on @kn wakes up. Maybe be called from any 845 + * context. 846 + */ 847 + void kernfs_notify(struct kernfs_node *kn) 848 + { 849 + static DECLARE_WORK(kernfs_notify_work, kernfs_notify_workfn); 850 + unsigned long flags; 851 + 852 + if (WARN_ON(kernfs_type(kn) != KERNFS_FILE)) 853 + return; 854 + 855 + spin_lock_irqsave(&kernfs_notify_lock, flags); 856 + if (!kn->attr.notify_next) { 857 + kernfs_get(kn); 858 + kn->attr.notify_next = kernfs_notify_list; 859 + kernfs_notify_list = kn; 860 + schedule_work(&kernfs_notify_work); 861 + } 862 + spin_unlock_irqrestore(&kernfs_notify_lock, flags); 863 } 850 864 EXPORT_SYMBOL_GPL(kernfs_notify); 851 865
+30
fs/kernfs/mount.c
··· 211 211 kernfs_put(root_kn); 212 212 } 213 213 214 + /** 215 + * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root 216 + * @root: the kernfs_root in question 217 + * @ns: the namespace tag 218 + * 219 + * Pin the superblock so it won't be destroyed in subsequent 220 + * operations. This can be used to block ->kill_sb() which may be useful 221 + * for kernfs users which dynamically manage superblocks. 222 + * 223 + * Returns NULL if there's no superblock associated with this kernfs_root, or 224 + * ERR_PTR(-EINVAL) if the superblock is being freed. 225 + */ 226 + struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns) 227 + { 228 + struct kernfs_super_info *info; 229 + struct super_block *sb = NULL; 230 + 231 + mutex_lock(&kernfs_mutex); 232 + list_for_each_entry(info, &root->supers, node) { 233 + if (info->ns == ns) { 234 + sb = info->sb; 235 + if (!atomic_inc_not_zero(&info->sb->s_active)) 236 + sb = ERR_PTR(-EINVAL); 237 + break; 238 + } 239 + } 240 + mutex_unlock(&kernfs_mutex); 241 + return sb; 242 + } 243 + 214 244 void __init kernfs_init(void) 215 245 { 216 246 kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
+2 -1
fs/mbcache.c
··· 73 73 #include <linux/mbcache.h> 74 74 #include <linux/init.h> 75 75 #include <linux/blockgroup_lock.h> 76 + #include <linux/log2.h> 76 77 77 78 #ifdef MB_CACHE_DEBUG 78 79 # define mb_debug(f...) do { \ ··· 94 93 95 94 #define MB_CACHE_WRITER ((unsigned short)~0U >> 1) 96 95 97 - #define MB_CACHE_ENTRY_LOCK_BITS __builtin_log2(NR_BG_LOCKS) 96 + #define MB_CACHE_ENTRY_LOCK_BITS ilog2(NR_BG_LOCKS) 98 97 #define MB_CACHE_ENTRY_LOCK_INDEX(ce) \ 99 98 (hash_long((unsigned long)ce, MB_CACHE_ENTRY_LOCK_BITS)) 100 99
-9
fs/nfsd/nfs4proc.c
··· 617 617 618 618 switch (create->cr_type) { 619 619 case NF4LNK: 620 - /* ugh! we have to null-terminate the linktext, or 621 - * vfs_symlink() will choke. it is always safe to 622 - * null-terminate by brute force, since at worst we 623 - * will overwrite the first byte of the create namelen 624 - * in the XDR buffer, which has already been extracted 625 - * during XDR decode. 626 - */ 627 - create->cr_linkname[create->cr_linklen] = 0; 628 - 629 620 status = nfsd_symlink(rqstp, &cstate->current_fh, 630 621 create->cr_name, create->cr_namelen, 631 622 create->cr_linkname, create->cr_linklen,
+14 -3
fs/nfsd/nfs4xdr.c
··· 600 600 READ_BUF(4); 601 601 create->cr_linklen = be32_to_cpup(p++); 602 602 READ_BUF(create->cr_linklen); 603 - SAVEMEM(create->cr_linkname, create->cr_linklen); 603 + /* 604 + * The VFS will want a null-terminated string, and 605 + * null-terminating in place isn't safe since this might 606 + * end on a page boundary: 607 + */ 608 + create->cr_linkname = 609 + kmalloc(create->cr_linklen + 1, GFP_KERNEL); 610 + if (!create->cr_linkname) 611 + return nfserr_jukebox; 612 + memcpy(create->cr_linkname, p, create->cr_linklen); 613 + create->cr_linkname[create->cr_linklen] = '\0'; 614 + defer_free(argp, kfree, create->cr_linkname); 604 615 break; 605 616 case NF4BLK: 606 617 case NF4CHR: ··· 2641 2630 { 2642 2631 __be32 *p; 2643 2632 2644 - p = xdr_reserve_space(xdr, 6); 2633 + p = xdr_reserve_space(xdr, 20); 2645 2634 if (!p) 2646 2635 return NULL; 2647 2636 *p++ = htonl(2); ··· 3278 3267 3279 3268 wire_count = htonl(maxcount); 3280 3269 write_bytes_to_xdr_buf(xdr->buf, length_offset, &wire_count, 4); 3281 - xdr_truncate_encode(xdr, length_offset + 4 + maxcount); 3270 + xdr_truncate_encode(xdr, length_offset + 4 + ALIGN(maxcount, 4)); 3282 3271 if (maxcount & 3) 3283 3272 write_bytes_to_xdr_buf(xdr->buf, length_offset + 4 + maxcount, 3284 3273 &zero, 4 - (maxcount&3));
+2 -20
fs/proc/stat.c
··· 184 184 185 185 static int stat_open(struct inode *inode, struct file *file) 186 186 { 187 - size_t size = 1024 + 128 * num_possible_cpus(); 188 - char *buf; 189 - struct seq_file *m; 190 - int res; 187 + size_t size = 1024 + 128 * num_online_cpus(); 191 188 192 189 /* minimum size to display an interrupt count : 2 bytes */ 193 190 size += 2 * nr_irqs; 194 - 195 - /* don't ask for more than the kmalloc() max size */ 196 - if (size > KMALLOC_MAX_SIZE) 197 - size = KMALLOC_MAX_SIZE; 198 - buf = kmalloc(size, GFP_KERNEL); 199 - if (!buf) 200 - return -ENOMEM; 201 - 202 - res = single_open(file, show_stat, NULL); 203 - if (!res) { 204 - m = file->private_data; 205 - m->buf = buf; 206 - m->size = ksize(buf); 207 - } else 208 - kfree(buf); 209 - return res; 191 + return single_open_size(file, show_stat, NULL, size); 210 192 } 211 193 212 194 static const struct file_operations proc_stat_operations = {
+2
fs/quota/dquot.c
··· 702 702 struct dquot *dquot; 703 703 unsigned long freed = 0; 704 704 705 + spin_lock(&dq_list_lock); 705 706 head = free_dquots.prev; 706 707 while (head != &free_dquots && sc->nr_to_scan) { 707 708 dquot = list_entry(head, struct dquot, dq_free); ··· 714 713 freed++; 715 714 head = free_dquots.prev; 716 715 } 716 + spin_unlock(&dq_list_lock); 717 717 return freed; 718 718 } 719 719
+21 -9
fs/seq_file.c
··· 8 8 #include <linux/fs.h> 9 9 #include <linux/export.h> 10 10 #include <linux/seq_file.h> 11 + #include <linux/vmalloc.h> 11 12 #include <linux/slab.h> 12 13 #include <linux/cred.h> 14 + #include <linux/mm.h> 13 15 14 16 #include <asm/uaccess.h> 15 17 #include <asm/page.h> ··· 30 28 static void seq_set_overflow(struct seq_file *m) 31 29 { 32 30 m->count = m->size; 31 + } 32 + 33 + static void *seq_buf_alloc(unsigned long size) 34 + { 35 + void *buf; 36 + 37 + buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN); 38 + if (!buf && size > PAGE_SIZE) 39 + buf = vmalloc(size); 40 + return buf; 33 41 } 34 42 35 43 /** ··· 108 96 return 0; 109 97 } 110 98 if (!m->buf) { 111 - m->buf = kmalloc(m->size = PAGE_SIZE, GFP_KERNEL); 99 + m->buf = seq_buf_alloc(m->size = PAGE_SIZE); 112 100 if (!m->buf) 113 101 return -ENOMEM; 114 102 } ··· 147 135 148 136 Eoverflow: 149 137 m->op->stop(m, p); 150 - kfree(m->buf); 138 + kvfree(m->buf); 151 139 m->count = 0; 152 - m->buf = kmalloc(m->size <<= 1, GFP_KERNEL); 140 + m->buf = seq_buf_alloc(m->size <<= 1); 153 141 return !m->buf ? 
-ENOMEM : -EAGAIN; 154 142 } 155 143 ··· 204 192 205 193 /* grab buffer if we didn't have one */ 206 194 if (!m->buf) { 207 - m->buf = kmalloc(m->size = PAGE_SIZE, GFP_KERNEL); 195 + m->buf = seq_buf_alloc(m->size = PAGE_SIZE); 208 196 if (!m->buf) 209 197 goto Enomem; 210 198 } ··· 244 232 if (m->count < m->size) 245 233 goto Fill; 246 234 m->op->stop(m, p); 247 - kfree(m->buf); 235 + kvfree(m->buf); 248 236 m->count = 0; 249 - m->buf = kmalloc(m->size <<= 1, GFP_KERNEL); 237 + m->buf = seq_buf_alloc(m->size <<= 1); 250 238 if (!m->buf) 251 239 goto Enomem; 252 240 m->version = 0; ··· 362 350 int seq_release(struct inode *inode, struct file *file) 363 351 { 364 352 struct seq_file *m = file->private_data; 365 - kfree(m->buf); 353 + kvfree(m->buf); 366 354 kfree(m); 367 355 return 0; 368 356 } ··· 617 605 int single_open_size(struct file *file, int (*show)(struct seq_file *, void *), 618 606 void *data, size_t size) 619 607 { 620 - char *buf = kmalloc(size, GFP_KERNEL); 608 + char *buf = seq_buf_alloc(size); 621 609 int ret; 622 610 if (!buf) 623 611 return -ENOMEM; 624 612 ret = single_open(file, show, data); 625 613 if (ret) { 626 - kfree(buf); 614 + kvfree(buf); 627 615 return ret; 628 616 } 629 617 ((struct seq_file *)file->private_data)->buf = buf;
+2
include/acpi/video.h
··· 22 22 extern void acpi_video_unregister_backlight(void); 23 23 extern int acpi_video_get_edid(struct acpi_device *device, int type, 24 24 int device_id, void **edid); 25 + extern bool acpi_video_verify_backlight_support(void); 25 26 #else 26 27 static inline int acpi_video_register(void) { return 0; } 27 28 static inline void acpi_video_unregister(void) { return; } ··· 32 31 { 33 32 return -ENODEV; 34 33 } 34 + static inline bool acpi_video_verify_backlight_support(void) { return false; } 35 35 #endif 36 36 37 37 #endif
+1 -1
include/asm-generic/vmlinux.lds.h
··· 693 693 . = ALIGN(PAGE_SIZE); \ 694 694 *(.data..percpu..page_aligned) \ 695 695 . = ALIGN(cacheline); \ 696 - *(.data..percpu..readmostly) \ 696 + *(.data..percpu..read_mostly) \ 697 697 . = ALIGN(cacheline); \ 698 698 *(.data..percpu) \ 699 699 *(.data..percpu..shared_aligned) \
+10 -2
include/drm/i915_pciids.h
··· 237 237 #define INTEL_BDW_GT3D_IDS(info) \ 238 238 _INTEL_BDW_D_IDS(3, info) 239 239 240 + #define INTEL_BDW_RSVDM_IDS(info) \ 241 + _INTEL_BDW_M_IDS(4, info) 242 + 243 + #define INTEL_BDW_RSVDD_IDS(info) \ 244 + _INTEL_BDW_D_IDS(4, info) 245 + 240 246 #define INTEL_BDW_M_IDS(info) \ 241 247 INTEL_BDW_GT12M_IDS(info), \ 242 - INTEL_BDW_GT3M_IDS(info) 248 + INTEL_BDW_GT3M_IDS(info), \ 249 + INTEL_BDW_RSVDM_IDS(info) 243 250 244 251 #define INTEL_BDW_D_IDS(info) \ 245 252 INTEL_BDW_GT12D_IDS(info), \ 246 - INTEL_BDW_GT3D_IDS(info) 253 + INTEL_BDW_GT3D_IDS(info), \ 254 + INTEL_BDW_RSVDD_IDS(info) 247 255 248 256 #define INTEL_CHV_IDS(info) \ 249 257 INTEL_VGA_DEVICE(0x22b0, info), \
+1
include/drm/i915_powerwell.h
··· 32 32 /* For use by hda_i915 driver */ 33 33 extern int i915_request_power_well(void); 34 34 extern int i915_release_power_well(void); 35 + extern int i915_get_cdclk_freq(void); 35 36 36 37 #endif /* _I915_POWERWELL_H_ */
+2 -1
include/dt-bindings/clock/exynos5420.h
··· 63 63 #define CLK_SCLK_MPHY_IXTAL24 161 64 64 65 65 /* gate clocks */ 66 - #define CLK_ACLK66_PERIC 256 67 66 #define CLK_UART0 257 68 67 #define CLK_UART1 258 69 68 #define CLK_UART2 259 ··· 202 203 #define CLK_MOUT_G3D 641 203 204 #define CLK_MOUT_VPLL 642 204 205 #define CLK_MOUT_MAUDIO0 643 206 + #define CLK_MOUT_USER_ACLK333 644 207 + #define CLK_MOUT_SW_ACLK333 645 205 208 206 209 /* divider clocks */ 207 210 #define CLK_DOUT_PIXEL 768
+9 -4
include/linux/bio.h
··· 186 186 #define BIOVEC_SEG_BOUNDARY(q, b1, b2) \ 187 187 __BIO_SEG_BOUNDARY(bvec_to_phys((b1)), bvec_to_phys((b2)) + (b2)->bv_len, queue_segment_boundary((q))) 188 188 189 + /* 190 + * Check if adding a bio_vec after bprv with offset would create a gap in 191 + * the SG list. Most drivers don't care about this, but some do. 192 + */ 193 + static inline bool bvec_gap_to_prev(struct bio_vec *bprv, unsigned int offset) 194 + { 195 + return offset || ((bprv->bv_offset + bprv->bv_len) & (PAGE_SIZE - 1)); 196 + } 197 + 189 198 #define bio_io_error(bio) bio_endio((bio), -EIO) 190 199 191 200 /* ··· 652 643 #define BIO_SPLIT_ENTRIES 2 653 644 654 645 #if defined(CONFIG_BLK_DEV_INTEGRITY) 655 - 656 - 657 - 658 - #define bip_vec_idx(bip, idx) (&(bip->bip_vec[(idx)])) 659 646 660 647 #define bip_for_each_vec(bvl, bip, iter) \ 661 648 for_each_bvec(bvl, (bip)->bip_vec, iter, (bip)->bip_iter)
+1
include/linux/blkdev.h
··· 512 512 #define QUEUE_FLAG_DEAD 19 /* queue tear-down finished */ 513 513 #define QUEUE_FLAG_INIT_DONE 20 /* queue is initialized */ 514 514 #define QUEUE_FLAG_NO_SG_MERGE 21 /* don't attempt to merge SG segments*/ 515 + #define QUEUE_FLAG_SG_GAPS 22 /* queue doesn't support SG gaps */ 515 516 516 517 #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ 517 518 (1 << QUEUE_FLAG_STACKABLE) | \
+1 -1
include/linux/elevator.h
··· 143 143 * io scheduler registration 144 144 */ 145 145 extern void __init load_default_elevator_module(void); 146 - extern int __init elv_register(struct elevator_type *); 146 + extern int elv_register(struct elevator_type *); 147 147 extern void elv_unregister(struct elevator_type *); 148 148 149 149 /*
+2
include/linux/kernfs.h
··· 91 91 const struct kernfs_ops *ops; 92 92 struct kernfs_open_node *open; 93 93 loff_t size; 94 + struct kernfs_node *notify_next; /* for kernfs_notify() */ 94 95 }; 95 96 96 97 /* ··· 305 304 struct kernfs_root *root, unsigned long magic, 306 305 bool *new_sb_created, const void *ns); 307 306 void kernfs_kill_sb(struct super_block *sb); 307 + struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns); 308 308 309 309 void kernfs_init(void); 310 310
+2 -2
include/linux/mlx4/device.h
··· 589 589 u32 cons_index; 590 590 591 591 u16 irq; 592 - bool irq_affinity_change; 593 - 594 592 __be32 *set_ci_db; 595 593 __be32 *arm_db; 596 594 int arm_sn; ··· 1175 1177 int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, 1176 1178 int *vector); 1177 1179 void mlx4_release_eq(struct mlx4_dev *dev, int vec); 1180 + 1181 + int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec); 1178 1182 1179 1183 int mlx4_get_phys_port_id(struct mlx4_dev *dev); 1180 1184 int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port);
-8
include/linux/of_mdio.h
··· 25 25 26 26 extern struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np); 27 27 28 - extern void of_mdiobus_link_phydev(struct mii_bus *mdio, 29 - struct phy_device *phydev); 30 - 31 28 #else /* CONFIG_OF */ 32 29 static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) 33 30 { ··· 59 62 static inline struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np) 60 63 { 61 64 return NULL; 62 - } 63 - 64 - static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, 65 - struct phy_device *phydev) 66 - { 67 65 } 68 66 #endif /* CONFIG_OF */ 69 67
+2 -2
include/linux/percpu-defs.h
··· 146 146 * Declaration/definition used for per-CPU variables that must be read mostly. 147 147 */ 148 148 #define DECLARE_PER_CPU_READ_MOSTLY(type, name) \ 149 - DECLARE_PER_CPU_SECTION(type, name, "..readmostly") 149 + DECLARE_PER_CPU_SECTION(type, name, "..read_mostly") 150 150 151 151 #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \ 152 - DEFINE_PER_CPU_SECTION(type, name, "..readmostly") 152 + DEFINE_PER_CPU_SECTION(type, name, "..read_mostly") 153 153 154 154 /* 155 155 * Intermodule exports for per-CPU variables. sparse forgets about
+3
include/linux/ptrace.h
··· 334 334 * calling arch_ptrace_stop() when it would be superfluous. For example, 335 335 * if the thread has not been back to user mode since the last stop, the 336 336 * thread state might indicate that nothing needs to be done. 337 + * 338 + * This is guaranteed to be invoked once before a task stops for ptrace and 339 + * may include arch-specific operations necessary prior to a ptrace stop. 337 340 */ 338 341 #define arch_ptrace_stop_needed(code, info) (0) 339 342 #endif
+4 -4
include/linux/sched.h
··· 872 872 #define SD_NUMA 0x4000 /* cross-node balancing */ 873 873 874 874 #ifdef CONFIG_SCHED_SMT 875 - static inline const int cpu_smt_flags(void) 875 + static inline int cpu_smt_flags(void) 876 876 { 877 877 return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; 878 878 } 879 879 #endif 880 880 881 881 #ifdef CONFIG_SCHED_MC 882 - static inline const int cpu_core_flags(void) 882 + static inline int cpu_core_flags(void) 883 883 { 884 884 return SD_SHARE_PKG_RESOURCES; 885 885 } 886 886 #endif 887 887 888 888 #ifdef CONFIG_NUMA 889 - static inline const int cpu_numa_flags(void) 889 + static inline int cpu_numa_flags(void) 890 890 { 891 891 return SD_NUMA; 892 892 } ··· 999 999 bool cpus_share_cache(int this_cpu, int that_cpu); 1000 1000 1001 1001 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu); 1002 - typedef const int (*sched_domain_flags_f)(void); 1002 + typedef int (*sched_domain_flags_f)(void); 1003 1003 1004 1004 #define SDTL_OVERLAP 0x01 1005 1005
-4
include/linux/socket.h
··· 305 305 /* IPX options */ 306 306 #define IPX_TYPE 1 307 307 308 - extern int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, 309 - int offset, int len); 310 308 extern int csum_partial_copy_fromiovecend(unsigned char *kdata, 311 309 struct iovec *iov, 312 310 int offset, ··· 313 315 unsigned long nr_segs); 314 316 315 317 extern int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr_storage *address, int mode); 316 - extern int memcpy_toiovecend(const struct iovec *v, unsigned char *kdata, 317 - int offset, int len); 318 318 extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr); 319 319 extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data); 320 320
+17 -2
include/linux/uio.h
··· 94 94 return i->count; 95 95 } 96 96 97 - static inline void iov_iter_truncate(struct iov_iter *i, size_t count) 97 + /* 98 + * Cap the iov_iter at the given limit; note that the second argument is 99 + * *not* the new size - it's an upper limit on it. Passing it a value 100 + * greater than the amount of data in iov_iter is fine - it'll just do 101 + * nothing in that case. 102 + */ 103 + static inline void iov_iter_truncate(struct iov_iter *i, u64 count) 98 104 { 105 + /* 106 + * count doesn't have to fit in size_t - comparison extends both 107 + * operands to u64 here and any value that would be truncated by 108 + * conversion in assignment is by definition greater than all 109 + * values of size_t, including old i->count. 110 + */ 99 111 if (i->count > count) 100 112 i->count = count; 101 113 } ··· 123 111 124 112 int memcpy_fromiovec(unsigned char *kdata, struct iovec *iov, int len); 125 113 int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len); 126 - 114 + int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, 115 + int offset, int len); 116 + int memcpy_toiovecend(const struct iovec *v, unsigned char *kdata, 117 + int offset, int len); 127 118 128 119 #endif
+3 -1
include/linux/usb_usual.h
··· 70 70 US_FLAG(NEEDS_CAP16, 0x00400000) \ 71 71 /* cannot handle READ_CAPACITY_10 */ \ 72 72 US_FLAG(IGNORE_UAS, 0x00800000) \ 73 - /* Device advertises UAS but it is broken */ 73 + /* Device advertises UAS but it is broken */ \ 74 + US_FLAG(BROKEN_FUA, 0x01000000) \ 75 + /* Cannot handle FUA in WRITE or READ CDBs */ \ 74 76 75 77 #define US_FLAG(name, value) US_FL_##name = value , 76 78 enum { US_DO_ALL_FLAGS };
-1
include/net/neighbour.h
··· 203 203 void (*proxy_redo)(struct sk_buff *skb); 204 204 char *id; 205 205 struct neigh_parms parms; 206 - /* HACK. gc_* should follow parms without a gap! */ 207 206 int gc_interval; 208 207 int gc_thresh1; 209 208 int gc_thresh2;
+1 -1
include/net/netns/ieee802154_6lowpan.h
··· 16 16 struct netns_ieee802154_lowpan { 17 17 struct netns_sysctl_lowpan sysctl; 18 18 struct netns_frags frags; 19 - u16 max_dsize; 19 + int max_dsize; 20 20 }; 21 21 22 22 #endif
+6 -6
include/net/sock.h
··· 1772 1772 static inline void 1773 1773 sk_dst_set(struct sock *sk, struct dst_entry *dst) 1774 1774 { 1775 - spin_lock(&sk->sk_dst_lock); 1776 - __sk_dst_set(sk, dst); 1777 - spin_unlock(&sk->sk_dst_lock); 1775 + struct dst_entry *old_dst; 1776 + 1777 + sk_tx_queue_clear(sk); 1778 + old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); 1779 + dst_release(old_dst); 1778 1780 } 1779 1781 1780 1782 static inline void ··· 1788 1786 static inline void 1789 1787 sk_dst_reset(struct sock *sk) 1790 1788 { 1791 - spin_lock(&sk->sk_dst_lock); 1792 - __sk_dst_reset(sk); 1793 - spin_unlock(&sk->sk_dst_lock); 1789 + sk_dst_set(sk, NULL); 1794 1790 } 1795 1791 1796 1792 struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie);
+1 -1
include/scsi/scsi_cmnd.h
··· 318 318 319 319 static inline unsigned scsi_transfer_length(struct scsi_cmnd *scmd) 320 320 { 321 - unsigned int xfer_len = blk_rq_bytes(scmd->request); 321 + unsigned int xfer_len = scsi_out(scmd)->length; 322 322 unsigned int prot_op = scsi_get_prot_op(scmd); 323 323 unsigned int sector_size = scmd->device->sector_size; 324 324
+1
include/scsi/scsi_device.h
··· 173 173 unsigned is_visible:1; /* is the device visible in sysfs */ 174 174 unsigned wce_default_on:1; /* Cache is ON by default */ 175 175 unsigned no_dif:1; /* T10 PI (DIF) should be disabled */ 176 + unsigned broken_fua:1; /* Don't set FUA bit */ 176 177 177 178 atomic_t disk_events_disable_depth; /* disable depth for disk events */ 178 179
+1
include/uapi/linux/btrfs.h
··· 38 38 #define BTRFS_SUBVOL_QGROUP_INHERIT (1ULL << 2) 39 39 #define BTRFS_FSID_SIZE 16 40 40 #define BTRFS_UUID_SIZE 16 41 + #define BTRFS_UUID_UNPARSED_SIZE 37 41 42 42 43 #define BTRFS_QGROUP_INHERIT_SET_LIMITS (1ULL << 0) 43 44
+7
include/uapi/linux/usb/functionfs.h
··· 33 33 __u8 bInterval; 34 34 } __attribute__((packed)); 35 35 36 + /* Legacy format, deprecated as of 3.14. */ 37 + struct usb_functionfs_descs_head { 38 + __le32 magic; 39 + __le32 length; 40 + __le32 fs_count; 41 + __le32 hs_count; 42 + } __attribute__((packed, deprecated)); 36 43 37 44 /* 38 45 * Descriptors format:
+7 -7
include/uapi/sound/compress_offload.h
··· 39 39 struct snd_compressed_buffer { 40 40 __u32 fragment_size; 41 41 __u32 fragments; 42 - }; 42 + } __attribute__((packed, aligned(4))); 43 43 44 44 /** 45 45 * struct snd_compr_params: compressed stream params ··· 51 51 struct snd_compressed_buffer buffer; 52 52 struct snd_codec codec; 53 53 __u8 no_wake_mode; 54 - }; 54 + } __attribute__((packed, aligned(4))); 55 55 56 56 /** 57 57 * struct snd_compr_tstamp: timestamp descriptor ··· 70 70 __u32 pcm_frames; 71 71 __u32 pcm_io_frames; 72 72 __u32 sampling_rate; 73 - }; 73 + } __attribute__((packed, aligned(4))); 74 74 75 75 /** 76 76 * struct snd_compr_avail: avail descriptor ··· 80 80 struct snd_compr_avail { 81 81 __u64 avail; 82 82 struct snd_compr_tstamp tstamp; 83 - } __attribute__((packed)); 83 + } __attribute__((packed, aligned(4))); 84 84 85 85 enum snd_compr_direction { 86 86 SND_COMPRESS_PLAYBACK = 0, ··· 107 107 __u32 max_fragments; 108 108 __u32 codecs[MAX_NUM_CODECS]; 109 109 __u32 reserved[11]; 110 - }; 110 + } __attribute__((packed, aligned(4))); 111 111 112 112 /** 113 113 * struct snd_compr_codec_caps: query capability of codec ··· 119 119 __u32 codec; 120 120 __u32 num_descriptors; 121 121 struct snd_codec_desc descriptor[MAX_NUM_CODEC_DESCRIPTORS]; 122 - }; 122 + } __attribute__((packed, aligned(4))); 123 123 124 124 /** 125 125 * @SNDRV_COMPRESS_ENCODER_PADDING: no of samples appended by the encoder at the ··· 140 140 struct snd_compr_metadata { 141 141 __u32 key; 142 142 __u32 value[8]; 143 - }; 143 + } __attribute__((packed, aligned(4))); 144 144 145 145 /** 146 146 * compress path ioctl definitions
+7 -7
include/uapi/sound/compress_params.h
··· 268 268 __u32 max_bit_rate; 269 269 __u32 min_bit_rate; 270 270 __u32 downmix; 271 - }; 271 + } __attribute__((packed, aligned(4))); 272 272 273 273 274 274 /** ··· 284 284 __u32 quant_bits; 285 285 __u32 start_region; 286 286 __u32 num_regions; 287 - }; 287 + } __attribute__((packed, aligned(4))); 288 288 289 289 /** 290 290 * struct snd_enc_flac ··· 308 308 struct snd_enc_flac { 309 309 __u32 num; 310 310 __u32 gain; 311 - }; 311 + } __attribute__((packed, aligned(4))); 312 312 313 313 struct snd_enc_generic { 314 314 __u32 bw; /* encoder bandwidth */ 315 315 __s32 reserved[15]; 316 - }; 316 + } __attribute__((packed, aligned(4))); 317 317 318 318 union snd_codec_options { 319 319 struct snd_enc_wma wma; ··· 321 321 struct snd_enc_real real; 322 322 struct snd_enc_flac flac; 323 323 struct snd_enc_generic generic; 324 - }; 324 + } __attribute__((packed, aligned(4))); 325 325 326 326 /** struct snd_codec_desc - description of codec capabilities 327 327 * @max_ch: Maximum number of audio channels ··· 358 358 __u32 formats; 359 359 __u32 min_buffer; 360 360 __u32 reserved[15]; 361 - }; 361 + } __attribute__((packed, aligned(4))); 362 362 363 363 /** struct snd_codec 364 364 * @id: Identifies the supported audio encoder/decoder. ··· 399 399 __u32 align; 400 400 union snd_codec_options options; 401 401 __u32 reserved[3]; 402 - }; 402 + } __attribute__((packed, aligned(4))); 403 403 404 404 #endif
+50 -8
kernel/cgroup.c
··· 1648 1648 int flags, const char *unused_dev_name, 1649 1649 void *data) 1650 1650 { 1651 + struct super_block *pinned_sb = NULL; 1652 + struct cgroup_subsys *ss; 1651 1653 struct cgroup_root *root; 1652 1654 struct cgroup_sb_opts opts; 1653 1655 struct dentry *dentry; 1654 1656 int ret; 1657 + int i; 1655 1658 bool new_sb; 1656 1659 1657 1660 /* ··· 1678 1675 cgroup_get(&root->cgrp); 1679 1676 ret = 0; 1680 1677 goto out_unlock; 1678 + } 1679 + 1680 + /* 1681 + * Destruction of cgroup root is asynchronous, so subsystems may 1682 + * still be dying after the previous unmount. Let's drain the 1683 + * dying subsystems. We just need to ensure that the ones 1684 + * unmounted previously finish dying and don't care about new ones 1685 + * starting. Testing ref liveliness is good enough. 1686 + */ 1687 + for_each_subsys(ss, i) { 1688 + if (!(opts.subsys_mask & (1 << i)) || 1689 + ss->root == &cgrp_dfl_root) 1690 + continue; 1691 + 1692 + if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) { 1693 + mutex_unlock(&cgroup_mutex); 1694 + msleep(10); 1695 + ret = restart_syscall(); 1696 + goto out_free; 1697 + } 1698 + cgroup_put(&ss->root->cgrp); 1681 1699 } 1682 1700 1683 1701 for_each_root(root) { ··· 1741 1717 } 1742 1718 1743 1719 /* 1744 - * A root's lifetime is governed by its root cgroup. 1745 - * tryget_live failure indicate that the root is being 1746 - * destroyed. Wait for destruction to complete so that the 1747 - * subsystems are free. We can use wait_queue for the wait 1748 - * but this path is super cold. Let's just sleep for a bit 1749 - * and retry. 1720 + * We want to reuse @root whose lifetime is governed by its 1721 + * ->cgrp. Let's check whether @root is alive and keep it 1722 + * that way. As cgroup_kill_sb() can happen anytime, we 1723 + * want to block it by pinning the sb so that @root doesn't 1724 + * get killed before mount is complete. 
1725 + * 1726 + * With the sb pinned, tryget_live can reliably indicate 1727 + * whether @root can be reused. If it's being killed, 1728 + * drain it. We can use wait_queue for the wait but this 1729 + * path is super cold. Let's just sleep a bit and retry. 1750 1730 */ 1751 - if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { 1731 + pinned_sb = kernfs_pin_sb(root->kf_root, NULL); 1732 + if (IS_ERR(pinned_sb) || 1733 + !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { 1752 1734 mutex_unlock(&cgroup_mutex); 1735 + if (!IS_ERR_OR_NULL(pinned_sb)) 1736 + deactivate_super(pinned_sb); 1753 1737 msleep(10); 1754 1738 ret = restart_syscall(); 1755 1739 goto out_free; ··· 1802 1770 CGROUP_SUPER_MAGIC, &new_sb); 1803 1771 if (IS_ERR(dentry) || !new_sb) 1804 1772 cgroup_put(&root->cgrp); 1773 + 1774 + /* 1775 + * If @pinned_sb, we're reusing an existing root and holding an 1776 + * extra ref on its sb. Mount is complete. Put the extra ref. 1777 + */ 1778 + if (pinned_sb) { 1779 + WARN_ON(new_sb); 1780 + deactivate_super(pinned_sb); 1781 + } 1782 + 1805 1783 return dentry; 1806 1784 } 1807 1785 ··· 3370 3328 3371 3329 rcu_read_lock(); 3372 3330 css_for_each_child(child, css) { 3373 - if (css->flags & CSS_ONLINE) { 3331 + if (child->flags & CSS_ONLINE) { 3374 3332 ret = true; 3375 3333 break; 3376 3334 }
+19 -1
kernel/cpuset.c
··· 1181 1181 1182 1182 int current_cpuset_is_being_rebound(void) 1183 1183 { 1184 - return task_cs(current) == cpuset_being_rebound; 1184 + int ret; 1185 + 1186 + rcu_read_lock(); 1187 + ret = task_cs(current) == cpuset_being_rebound; 1188 + rcu_read_unlock(); 1189 + 1190 + return ret; 1185 1191 } 1186 1192 1187 1193 static int update_relax_domain_level(struct cpuset *cs, s64 val) ··· 1623 1617 * resources, wait for the previously scheduled operations before 1624 1618 * proceeding, so that we don't end up keep removing tasks added 1625 1619 * after execution capability is restored. 1620 + * 1621 + * cpuset_hotplug_work calls back into cgroup core via 1622 + * cgroup_transfer_tasks() and waiting for it from a cgroupfs 1623 + * operation like this one can lead to a deadlock through kernfs 1624 + * active_ref protection. Let's break the protection. Losing the 1625 + * protection is okay as we check whether @cs is online after 1626 + * grabbing cpuset_mutex anyway. This only happens on the legacy 1627 + * hierarchies. 1626 1628 */ 1629 + css_get(&cs->css); 1630 + kernfs_break_active_protection(of->kn); 1627 1631 flush_work(&cpuset_hotplug_work); 1628 1632 1629 1633 mutex_lock(&cpuset_mutex); ··· 1661 1645 free_trial_cpuset(trialcs); 1662 1646 out_unlock: 1663 1647 mutex_unlock(&cpuset_mutex); 1648 + kernfs_unbreak_active_protection(of->kn); 1649 + css_put(&cs->css); 1664 1650 return retval ?: nbytes; 1665 1651 } 1666 1652
+1 -1
kernel/events/core.c
··· 2320 2320 next_parent = rcu_dereference(next_ctx->parent_ctx); 2321 2321 2322 2322 /* If neither context have a parent context; they cannot be clones. */ 2323 - if (!parent && !next_parent) 2323 + if (!parent || !next_parent) 2324 2324 goto unlock; 2325 2325 2326 2326 if (next_parent == ctx || next_ctx == parent || next_parent == parent) {
+3 -3
kernel/events/uprobes.c
··· 846 846 { 847 847 int err; 848 848 849 - if (!consumer_del(uprobe, uc)) /* WARN? */ 849 + if (WARN_ON(!consumer_del(uprobe, uc))) 850 850 return; 851 851 852 852 err = register_for_each_vma(uprobe, NULL); ··· 927 927 int ret = -ENOENT; 928 928 929 929 uprobe = find_uprobe(inode, offset); 930 - if (!uprobe) 930 + if (WARN_ON(!uprobe)) 931 931 return ret; 932 932 933 933 down_write(&uprobe->register_rwsem); ··· 952 952 struct uprobe *uprobe; 953 953 954 954 uprobe = find_uprobe(inode, offset); 955 - if (!uprobe) 955 + if (WARN_ON(!uprobe)) 956 956 return; 957 957 958 958 down_write(&uprobe->register_rwsem);
+2 -2
kernel/irq/irqdesc.c
··· 455 455 */ 456 456 void irq_free_hwirqs(unsigned int from, int cnt) 457 457 { 458 - int i; 458 + int i, j; 459 459 460 - for (i = from; cnt > 0; i++, cnt--) { 460 + for (i = from, j = cnt; j > 0; i++, j--) { 461 461 irq_set_status_flags(i, _IRQ_NOREQUEST | _IRQ_NOPROBE); 462 462 arch_teardown_hwirq(i); 463 463 }
+18 -26
kernel/printk/printk.c
··· 1416 1416 /* 1417 1417 * Can we actually use the console at this time on this cpu? 1418 1418 * 1419 - * Console drivers may assume that per-cpu resources have been allocated. So 1420 - * unless they're explicitly marked as being able to cope (CON_ANYTIME) don't 1421 - * call them until this CPU is officially up. 1419 + * Console drivers may assume that per-cpu resources have 1420 + * been allocated. So unless they're explicitly marked as 1421 + * being able to cope (CON_ANYTIME) don't call them until 1422 + * this CPU is officially up. 1422 1423 */ 1423 1424 static inline int can_use_console(unsigned int cpu) 1424 1425 { ··· 1432 1431 * console_lock held, and 'console_locked' set) if it 1433 1432 * is successful, false otherwise. 1434 1433 */ 1435 - static int console_trylock_for_printk(void) 1434 + static int console_trylock_for_printk(unsigned int cpu) 1436 1435 { 1437 - unsigned int cpu = smp_processor_id(); 1438 - 1439 1436 if (!console_trylock()) 1440 1437 return 0; 1441 1438 /* ··· 1608 1609 */ 1609 1610 if (!oops_in_progress && !lockdep_recursing(current)) { 1610 1611 recursion_bug = 1; 1611 - local_irq_restore(flags); 1612 - return 0; 1612 + goto out_restore_irqs; 1613 1613 } 1614 1614 zap_locks(); 1615 1615 } ··· 1716 1718 1717 1719 logbuf_cpu = UINT_MAX; 1718 1720 raw_spin_unlock(&logbuf_lock); 1719 - lockdep_on(); 1720 - local_irq_restore(flags); 1721 1721 1722 1722 /* If called from the scheduler, we can not call up(). */ 1723 - if (in_sched) 1724 - return printed_len; 1723 + if (!in_sched) { 1724 + /* 1725 + * Try to acquire and then immediately release the console 1726 + * semaphore. The release will print out buffers and wake up 1727 + * /dev/kmsg and syslog() users. 
1728 + */ 1729 + if (console_trylock_for_printk(this_cpu)) 1730 + console_unlock(); 1731 + } 1725 1732 1726 - /* 1727 - * Disable preemption to avoid being preempted while holding 1728 - * console_sem which would prevent anyone from printing to console 1729 - */ 1730 - preempt_disable(); 1731 - /* 1732 - * Try to acquire and then immediately release the console semaphore. 1733 - * The release will print out buffers and wake up /dev/kmsg and syslog() 1734 - * users. 1735 - */ 1736 - if (console_trylock_for_printk()) 1737 - console_unlock(); 1738 - preempt_enable(); 1739 - 1733 + lockdep_on(); 1734 + out_restore_irqs: 1735 + local_irq_restore(flags); 1740 1736 return printed_len; 1741 1737 } 1742 1738 EXPORT_SYMBOL(vprintk_emit);
-2
kernel/trace/trace.c
··· 1396 1396 1397 1397 arch_spin_unlock(&global_trace.max_lock); 1398 1398 1399 - ftrace_start(); 1400 1399 out: 1401 1400 raw_spin_unlock_irqrestore(&global_trace.start_lock, flags); 1402 1401 } ··· 1442 1443 struct ring_buffer *buffer; 1443 1444 unsigned long flags; 1444 1445 1445 - ftrace_stop(); 1446 1446 raw_spin_lock_irqsave(&global_trace.start_lock, flags); 1447 1447 if (global_trace.stop_count++) 1448 1448 goto out;
+27 -19
kernel/trace/trace_uprobe.c
··· 893 893 int ret; 894 894 895 895 if (file) { 896 + if (tu->tp.flags & TP_FLAG_PROFILE) 897 + return -EINTR; 898 + 896 899 link = kmalloc(sizeof(*link), GFP_KERNEL); 897 900 if (!link) 898 901 return -ENOMEM; ··· 904 901 list_add_tail_rcu(&link->list, &tu->tp.files); 905 902 906 903 tu->tp.flags |= TP_FLAG_TRACE; 907 - } else 908 - tu->tp.flags |= TP_FLAG_PROFILE; 904 + } else { 905 + if (tu->tp.flags & TP_FLAG_TRACE) 906 + return -EINTR; 909 907 910 - ret = uprobe_buffer_enable(); 911 - if (ret < 0) 912 - return ret; 908 + tu->tp.flags |= TP_FLAG_PROFILE; 909 + } 913 910 914 911 WARN_ON(!uprobe_filter_is_empty(&tu->filter)); 915 912 916 913 if (enabled) 917 914 return 0; 918 915 916 + ret = uprobe_buffer_enable(); 917 + if (ret) 918 + goto err_flags; 919 + 919 920 tu->consumer.filter = filter; 920 921 ret = uprobe_register(tu->inode, tu->offset, &tu->consumer); 921 - if (ret) { 922 - if (file) { 923 - list_del(&link->list); 924 - kfree(link); 925 - tu->tp.flags &= ~TP_FLAG_TRACE; 926 - } else 927 - tu->tp.flags &= ~TP_FLAG_PROFILE; 928 - } 922 + if (ret) 923 + goto err_buffer; 929 924 925 + return 0; 926 + 927 + err_buffer: 928 + uprobe_buffer_disable(); 929 + 930 + err_flags: 931 + if (file) { 932 + list_del(&link->list); 933 + kfree(link); 934 + tu->tp.flags &= ~TP_FLAG_TRACE; 935 + } else { 936 + tu->tp.flags &= ~TP_FLAG_PROFILE; 937 + } 930 938 return ret; 931 939 } 932 940 ··· 1214 1200 udd.bp_addr = instruction_pointer(regs); 1215 1201 1216 1202 current->utask->vaddr = (unsigned long) &udd; 1217 - 1218 - #ifdef CONFIG_PERF_EVENTS 1219 - if ((tu->tp.flags & TP_FLAG_TRACE) == 0 && 1220 - !uprobe_perf_filter(&tu->consumer, 0, current->mm)) 1221 - return UPROBE_HANDLER_REMOVE; 1222 - #endif 1223 1203 1224 1204 if (WARN_ON_ONCE(!uprobe_cpu_buffer)) 1225 1205 return 0;
+2 -1
kernel/workqueue.c
··· 3284 3284 } 3285 3285 } 3286 3286 3287 + dev_set_uevent_suppress(&wq_dev->dev, false); 3287 3288 kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD); 3288 3289 return 0; 3289 3290 } ··· 4880 4879 BUG_ON(!tbl); 4881 4880 4882 4881 for_each_node(node) 4883 - BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, 4882 + BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL, 4884 4883 node_online(node) ? node : NUMA_NO_NODE)); 4885 4884 4886 4885 for_each_possible_cpu(cpu) {
+1 -1
lib/cpumask.c
··· 191 191 192 192 i %= num_online_cpus(); 193 193 194 - if (!cpumask_of_node(numa_node)) { 194 + if (numa_node == -1 || !cpumask_of_node(numa_node)) { 195 195 /* Use all online cpu's for non numa aware system */ 196 196 cpumask_copy(mask, cpu_online_mask); 197 197 } else {
+55
lib/iovec.c
··· 51 51 return 0; 52 52 } 53 53 EXPORT_SYMBOL(memcpy_toiovec); 54 + 55 + /* 56 + * Copy kernel to iovec. Returns -EFAULT on error. 57 + */ 58 + 59 + int memcpy_toiovecend(const struct iovec *iov, unsigned char *kdata, 60 + int offset, int len) 61 + { 62 + int copy; 63 + for (; len > 0; ++iov) { 64 + /* Skip over the finished iovecs */ 65 + if (unlikely(offset >= iov->iov_len)) { 66 + offset -= iov->iov_len; 67 + continue; 68 + } 69 + copy = min_t(unsigned int, iov->iov_len - offset, len); 70 + if (copy_to_user(iov->iov_base + offset, kdata, copy)) 71 + return -EFAULT; 72 + offset = 0; 73 + kdata += copy; 74 + len -= copy; 75 + } 76 + 77 + return 0; 78 + } 79 + EXPORT_SYMBOL(memcpy_toiovecend); 80 + 81 + /* 82 + * Copy iovec to kernel. Returns -EFAULT on error. 83 + */ 84 + 85 + int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, 86 + int offset, int len) 87 + { 88 + /* Skip over the finished iovecs */ 89 + while (offset >= iov->iov_len) { 90 + offset -= iov->iov_len; 91 + iov++; 92 + } 93 + 94 + while (len > 0) { 95 + u8 __user *base = iov->iov_base + offset; 96 + int copy = min_t(unsigned int, len, iov->iov_len - offset); 97 + 98 + offset = 0; 99 + if (copy_from_user(kdata, base, copy)) 100 + return -EFAULT; 101 + len -= copy; 102 + kdata += copy; 103 + iov++; 104 + } 105 + 106 + return 0; 107 + } 108 + EXPORT_SYMBOL(memcpy_fromiovecend);
+8 -2
lib/lz4/lz4_decompress.c
··· 108 108 if (length == ML_MASK) { 109 109 for (; *ip == 255; length += 255) 110 110 ip++; 111 + if (unlikely(length > (size_t)(length + *ip))) 112 + goto _output_error; 111 113 length += *ip++; 112 114 } 113 115 ··· 159 157 160 158 /* write overflow error detected */ 161 159 _output_error: 162 - return (int) (-(((char *)ip) - source)); 160 + return -1; 163 161 } 164 162 165 163 static int lz4_uncompress_unknownoutputsize(const char *source, char *dest, ··· 192 190 int s = 255; 193 191 while ((ip < iend) && (s == 255)) { 194 192 s = *ip++; 193 + if (unlikely(length > (size_t)(length + s))) 194 + goto _output_error; 195 195 length += s; 196 196 } 197 197 } ··· 234 230 if (length == ML_MASK) { 235 231 while (ip < iend) { 236 232 int s = *ip++; 233 + if (unlikely(length > (size_t)(length + s))) 234 + goto _output_error; 237 235 length += s; 238 236 if (s == 255) 239 237 continue; ··· 288 282 289 283 /* write overflow error detected */ 290 284 _output_error: 291 - return (int) (-(((char *) ip) - source)); 285 + return -1; 292 286 } 293 287 294 288 int lz4_decompress(const unsigned char *src, size_t *src_len,
+18 -10
lib/swiotlb.c
··· 86 86 * We need to save away the original address corresponding to a mapped entry 87 87 * for the sync operations. 88 88 */ 89 + #define INVALID_PHYS_ADDR (~(phys_addr_t)0) 89 90 static phys_addr_t *io_tlb_orig_addr; 90 91 91 92 /* ··· 189 188 io_tlb_list = memblock_virt_alloc( 190 189 PAGE_ALIGN(io_tlb_nslabs * sizeof(int)), 191 190 PAGE_SIZE); 192 - for (i = 0; i < io_tlb_nslabs; i++) 193 - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); 194 - io_tlb_index = 0; 195 191 io_tlb_orig_addr = memblock_virt_alloc( 196 192 PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)), 197 193 PAGE_SIZE); 194 + for (i = 0; i < io_tlb_nslabs; i++) { 195 + io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); 196 + io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; 197 + } 198 + io_tlb_index = 0; 198 199 199 200 if (verbose) 200 201 swiotlb_print_info(); ··· 316 313 if (!io_tlb_list) 317 314 goto cleanup3; 318 315 319 - for (i = 0; i < io_tlb_nslabs; i++) 320 - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); 321 - io_tlb_index = 0; 322 - 323 316 io_tlb_orig_addr = (phys_addr_t *) 324 317 __get_free_pages(GFP_KERNEL, 325 318 get_order(io_tlb_nslabs * ··· 323 324 if (!io_tlb_orig_addr) 324 325 goto cleanup4; 325 326 326 - memset(io_tlb_orig_addr, 0, io_tlb_nslabs * sizeof(phys_addr_t)); 327 + for (i = 0; i < io_tlb_nslabs; i++) { 328 + io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); 329 + io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; 330 + } 331 + io_tlb_index = 0; 327 332 328 333 swiotlb_print_info(); 329 334 ··· 559 556 /* 560 557 * First, sync the memory before unmapping the entry 561 558 */ 562 - if (orig_addr && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL))) 559 + if (orig_addr != INVALID_PHYS_ADDR && 560 + ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL))) 563 561 swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE); 564 562 565 563 /* ··· 577 573 * Step 1: return the slots to the free list, merging the 578 574 * slots with 
superseding slots 579 575 */ 580 576 - for (i = index + nslots - 1; i >= index; i--) 576 + for (i = index + nslots - 1; i >= index; i--) { 581 577 io_tlb_list[i] = ++count; 578 + io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; 579 + } 582 580 /* 583 581 * Step 2: merge the returned slots with the preceding slots, 584 582 * if available (non zero) ··· 599 593 int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT; 600 594 phys_addr_t orig_addr = io_tlb_orig_addr[index]; 601 595 596 + if (orig_addr == INVALID_PHYS_ADDR) 597 + return; 602 598 orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1); 603 599 604 600 switch (target) {
+5 -4
mm/memory-failure.c
··· 895 895 struct page *hpage = *hpagep; 896 896 struct page *ppage; 897 897 898 - if (PageReserved(p) || PageSlab(p)) 898 + if (PageReserved(p) || PageSlab(p) || !PageLRU(p)) 899 899 return SWAP_SUCCESS; 900 900 901 901 /* ··· 1159 1159 action_result(pfn, "free buddy, 2nd try", DELAYED); 1160 1160 return 0; 1161 1161 } 1162 - action_result(pfn, "non LRU", IGNORED); 1163 - put_page(p); 1164 - return -EBUSY; 1165 1162 } 1166 1163 } 1167 1164 ··· 1190 1193 put_page(hpage); 1191 1194 return 0; 1192 1195 } 1196 + 1197 + if (!PageHuge(p) && !PageTransTail(p) && !PageLRU(p)) 1198 + goto identify_page_state; 1193 1199 1194 1200 /* 1195 1201 * For error on the tail page, we should set PG_hwpoison ··· 1243 1243 goto out; 1244 1244 } 1245 1245 1246 + identify_page_state: 1246 1247 res = -EBUSY; 1247 1248 /* 1248 1249 * The first check uses the current page flags which may not have any
-2
mm/mempolicy.c
··· 2139 2139 } else 2140 2140 *new = *old; 2141 2141 2142 - rcu_read_lock(); 2143 2142 if (current_cpuset_is_being_rebound()) { 2144 2143 nodemask_t mems = cpuset_mems_allowed(current); 2145 2144 if (new->flags & MPOL_F_REBINDING) ··· 2146 2147 else 2147 2148 mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE); 2148 2149 } 2149 - rcu_read_unlock(); 2150 2150 atomic_set(&new->refcnt, 1); 2151 2151 return new; 2152 2152 }
+2 -1
mm/msync.c
··· 78 78 goto out_unlock; 79 79 } 80 80 file = vma->vm_file; 81 - fstart = start + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); 81 + fstart = (start - vma->vm_start) + 82 + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); 82 83 fend = fstart + (min(end, vma->vm_end) - start) - 1; 83 84 start = vma->vm_end; 84 85 if ((flags & MS_SYNC) && file &&
+14 -2
mm/page_alloc.c
··· 816 816 set_page_count(p, 0); 817 817 } while (++p, --i); 818 818 819 - set_page_refcounted(page); 820 819 set_pageblock_migratetype(page, MIGRATE_CMA); 821 - __free_pages(page, pageblock_order); 820 + 821 + if (pageblock_order >= MAX_ORDER) { 822 + i = pageblock_nr_pages; 823 + p = page; 824 + do { 825 + set_page_refcounted(p); 826 + __free_pages(p, MAX_ORDER - 1); 827 + p += MAX_ORDER_NR_PAGES; 828 + } while (i -= MAX_ORDER_NR_PAGES); 829 + } else { 830 + set_page_refcounted(page); 831 + __free_pages(page, pageblock_order); 832 + } 833 + 822 834 adjust_managed_page_count(page, pageblock_nr_pages); 823 835 } 824 836 #endif
+10 -5
mm/shmem.c
··· 1029 1029 goto failed; 1030 1030 } 1031 1031 1032 + if (page && sgp == SGP_WRITE) 1033 + mark_page_accessed(page); 1034 + 1032 1035 /* fallocated page? */ 1033 1036 if (page && !PageUptodate(page)) { 1034 1037 if (sgp != SGP_READ) ··· 1113 1110 shmem_recalc_inode(inode); 1114 1111 spin_unlock(&info->lock); 1115 1112 1113 + if (sgp == SGP_WRITE) 1114 + mark_page_accessed(page); 1115 + 1116 1116 delete_from_swap_cache(page); 1117 1117 set_page_dirty(page); 1118 1118 swap_free(swap); ··· 1142 1136 1143 1137 __SetPageSwapBacked(page); 1144 1138 __set_page_locked(page); 1139 + if (sgp == SGP_WRITE) 1140 + init_page_accessed(page); 1141 + 1145 1142 error = mem_cgroup_charge_file(page, current->mm, 1146 1143 gfp & GFP_RECLAIM_MASK); 1147 1144 if (error) ··· 1421 1412 loff_t pos, unsigned len, unsigned flags, 1422 1413 struct page **pagep, void **fsdata) 1423 1414 { 1424 - int ret; 1425 1415 struct inode *inode = mapping->host; 1426 1416 pgoff_t index = pos >> PAGE_CACHE_SHIFT; 1427 - ret = shmem_getpage(inode, index, pagep, SGP_WRITE, NULL); 1428 - if (ret == 0 && *pagep) 1429 - init_page_accessed(*pagep); 1430 - return ret; 1417 + return shmem_getpage(inode, index, pagep, SGP_WRITE, NULL); 1431 1418 } 1432 1419 1433 1420 static int
+3 -3
mm/slub.c
··· 1881 1881 1882 1882 new.frozen = 0; 1883 1883 1884 - if (!new.inuse && n->nr_partial > s->min_partial) 1884 + if (!new.inuse && n->nr_partial >= s->min_partial) 1885 1885 m = M_FREE; 1886 1886 else if (new.freelist) { 1887 1887 m = M_PARTIAL; ··· 1992 1992 new.freelist, new.counters, 1993 1993 "unfreezing slab")); 1994 1994 1995 - if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) { 1995 + if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) { 1996 1996 page->next = discard_page; 1997 1997 discard_page = page; 1998 1998 } else { ··· 2620 2620 return; 2621 2621 } 2622 2622 2623 - if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) 2623 + if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) 2624 2624 goto slab_empty; 2625 2625 2626 2626 /*
+10 -3
net/8021q/vlan_dev.c
··· 629 629 struct vlan_dev_priv *vlan = vlan_dev_priv(dev); 630 630 int i; 631 631 632 - free_percpu(vlan->vlan_pcpu_stats); 633 - vlan->vlan_pcpu_stats = NULL; 634 632 for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) { 635 633 while ((pm = vlan->egress_priority_map[i]) != NULL) { 636 634 vlan->egress_priority_map[i] = pm->next; ··· 785 787 .ndo_get_lock_subclass = vlan_dev_get_lock_subclass, 786 788 }; 787 789 790 + static void vlan_dev_free(struct net_device *dev) 791 + { 792 + struct vlan_dev_priv *vlan = vlan_dev_priv(dev); 793 + 794 + free_percpu(vlan->vlan_pcpu_stats); 795 + vlan->vlan_pcpu_stats = NULL; 796 + free_netdev(dev); 797 + } 798 + 788 799 void vlan_setup(struct net_device *dev) 789 800 { 790 801 ether_setup(dev); ··· 803 796 dev->tx_queue_len = 0; 804 797 805 798 dev->netdev_ops = &vlan_netdev_ops; 806 - dev->destructor = free_netdev; 799 + dev->destructor = vlan_dev_free; 807 800 dev->ethtool_ops = &vlan_ethtool_ops; 808 801 809 802 memset(dev->broadcast, 0, ETH_ALEN);
-3
net/appletalk/ddp.c
··· 1489 1489 goto drop; 1490 1490 1491 1491 /* Queue packet (standard) */ 1492 - skb->sk = sock; 1493 - 1494 1492 if (sock_queue_rcv_skb(sock, skb) < 0) 1495 1493 goto drop; 1496 1494 ··· 1642 1644 if (!skb) 1643 1645 goto out; 1644 1646 1645 - skb->sk = sk; 1646 1647 skb_reserve(skb, ddp_dl->header_length); 1647 1648 skb_reserve(skb, dev->hard_header_len); 1648 1649 skb->dev = dev;
+18 -12
net/core/dev.c
··· 148 148 static struct list_head offload_base __read_mostly; 149 149 150 150 static int netif_rx_internal(struct sk_buff *skb); 151 + static int call_netdevice_notifiers_info(unsigned long val, 152 + struct net_device *dev, 153 + struct netdev_notifier_info *info); 151 154 152 155 /* 153 156 * The @dev_base_head list is protected by @dev_base_lock and the rtnl ··· 1217 1214 void netdev_state_change(struct net_device *dev) 1218 1215 { 1219 1216 if (dev->flags & IFF_UP) { 1220 - call_netdevice_notifiers(NETDEV_CHANGE, dev); 1217 + struct netdev_notifier_change_info change_info; 1218 + 1219 + change_info.flags_changed = 0; 1220 + call_netdevice_notifiers_info(NETDEV_CHANGE, dev, 1221 + &change_info.info); 1221 1222 rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL); 1222 1223 } 1223 1224 } ··· 4241 4234 #endif 4242 4235 napi->weight = weight_p; 4243 4236 local_irq_disable(); 4244 - while (work < quota) { 4237 + while (1) { 4245 4238 struct sk_buff *skb; 4246 - unsigned int qlen; 4247 4239 4248 4240 while ((skb = __skb_dequeue(&sd->process_queue))) { 4249 4241 local_irq_enable(); ··· 4256 4250 } 4257 4251 4258 4252 rps_lock(sd); 4259 - qlen = skb_queue_len(&sd->input_pkt_queue); 4260 - if (qlen) 4261 - skb_queue_splice_tail_init(&sd->input_pkt_queue, 4262 - &sd->process_queue); 4263 - 4264 - if (qlen < quota - work) { 4253 + if (skb_queue_empty(&sd->input_pkt_queue)) { 4265 4254 /* 4266 4255 * Inline a custom version of __napi_complete(). 4267 4256 * only current cpu owns and manipulates this napi, 4268 - * and NAPI_STATE_SCHED is the only possible flag set on backlog. 4269 - * we can use a plain write instead of clear_bit(), 4257 + * and NAPI_STATE_SCHED is the only possible flag set 4258 + * on backlog. 4259 + * We can use a plain write instead of clear_bit(), 4270 4260 * and we dont need an smp_mb() memory barrier. 
4271 4261 */ 4272 4262 list_del(&napi->poll_list); 4273 4263 napi->state = 0; 4264 + rps_unlock(sd); 4274 4265 4275 - quota = work + qlen; 4266 + break; 4276 4267 } 4268 + 4269 + skb_queue_splice_tail_init(&sd->input_pkt_queue, 4270 + &sd->process_queue); 4277 4271 rps_unlock(sd); 4278 4272 } 4279 4273 local_irq_enable();
-55
net/core/iovec.c
··· 75 75 } 76 76 77 77 /* 78 - * Copy kernel to iovec. Returns -EFAULT on error. 79 - */ 80 - 81 - int memcpy_toiovecend(const struct iovec *iov, unsigned char *kdata, 82 - int offset, int len) 83 - { 84 - int copy; 85 - for (; len > 0; ++iov) { 86 - /* Skip over the finished iovecs */ 87 - if (unlikely(offset >= iov->iov_len)) { 88 - offset -= iov->iov_len; 89 - continue; 90 - } 91 - copy = min_t(unsigned int, iov->iov_len - offset, len); 92 - if (copy_to_user(iov->iov_base + offset, kdata, copy)) 93 - return -EFAULT; 94 - offset = 0; 95 - kdata += copy; 96 - len -= copy; 97 - } 98 - 99 - return 0; 100 - } 101 - EXPORT_SYMBOL(memcpy_toiovecend); 102 - 103 - /* 104 - * Copy iovec to kernel. Returns -EFAULT on error. 105 - */ 106 - 107 - int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, 108 - int offset, int len) 109 - { 110 - /* Skip over the finished iovecs */ 111 - while (offset >= iov->iov_len) { 112 - offset -= iov->iov_len; 113 - iov++; 114 - } 115 - 116 - while (len > 0) { 117 - u8 __user *base = iov->iov_base + offset; 118 - int copy = min_t(unsigned int, len, iov->iov_len - offset); 119 - 120 - offset = 0; 121 - if (copy_from_user(kdata, base, copy)) 122 - return -EFAULT; 123 - len -= copy; 124 - kdata += copy; 125 - iov++; 126 - } 127 - 128 - return 0; 129 - } 130 - EXPORT_SYMBOL(memcpy_fromiovecend); 131 - 132 - /* 133 78 * And now for the all-in-one: copy and checksum from a user iovec 134 79 * directly to a datagram 135 80 * Calls to csum_partial but the last must be in 32 bit chunks
+5 -4
net/core/neighbour.c
··· 3059 3059 memset(&t->neigh_vars[NEIGH_VAR_GC_INTERVAL], 0, 3060 3060 sizeof(t->neigh_vars[NEIGH_VAR_GC_INTERVAL])); 3061 3061 } else { 3062 + struct neigh_table *tbl = p->tbl; 3062 3063 dev_name_source = "default"; 3063 - t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = (int *)(p + 1); 3064 - t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = (int *)(p + 1) + 1; 3065 - t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = (int *)(p + 1) + 2; 3066 - t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = (int *)(p + 1) + 3; 3064 + t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = &tbl->gc_interval; 3065 + t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = &tbl->gc_thresh1; 3066 + t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = &tbl->gc_thresh2; 3067 + t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = &tbl->gc_thresh3; 3067 3068 } 3068 3069 3069 3070 if (handler) {
+1
net/ipv4/gre_demux.c
··· 68 68 69 69 skb_push(skb, hdr_len); 70 70 71 + skb_reset_transport_header(skb); 71 72 greh = (struct gre_base_hdr *)skb->data; 72 73 greh->flags = tnl_flags_to_gre_flags(tpi->flags); 73 74 greh->protocol = tpi->proto;
-2
net/ipv4/icmp.c
··· 739 739 /* fall through */ 740 740 case 0: 741 741 info = ntohs(icmph->un.frag.mtu); 742 - if (!info) 743 - goto out; 744 742 } 745 743 break; 746 744 case ICMP_SR_FAILED:
+6 -4
net/ipv4/igmp.c
··· 1944 1944 1945 1945 rtnl_lock(); 1946 1946 in_dev = ip_mc_find_dev(net, imr); 1947 + if (!in_dev) { 1948 + ret = -ENODEV; 1949 + goto out; 1950 + } 1947 1951 ifindex = imr->imr_ifindex; 1948 1952 for (imlp = &inet->mc_list; 1949 1953 (iml = rtnl_dereference(*imlp)) != NULL; ··· 1965 1961 1966 1962 *imlp = iml->next_rcu; 1967 1963 1968 - if (in_dev) 1969 - ip_mc_dec_group(in_dev, group); 1964 + ip_mc_dec_group(in_dev, group); 1970 1965 rtnl_unlock(); 1971 1966 /* decrease mem now to avoid the memleak warning */ 1972 1967 atomic_sub(sizeof(*iml), &sk->sk_omem_alloc); 1973 1968 kfree_rcu(iml, rcu); 1974 1969 return 0; 1975 1970 } 1976 - if (!in_dev) 1977 - ret = -ENODEV; 1971 + out: 1978 1972 rtnl_unlock(); 1979 1973 return ret; 1980 1974 }
+8 -4
net/ipv4/ip_tunnel.c
··· 169 169 170 170 hlist_for_each_entry_rcu(t, head, hash_node) { 171 171 if (remote != t->parms.iph.daddr || 172 + t->parms.iph.saddr != 0 || 172 173 !(t->dev->flags & IFF_UP)) 173 174 continue; 174 175 ··· 186 185 head = &itn->tunnels[hash]; 187 186 188 187 hlist_for_each_entry_rcu(t, head, hash_node) { 189 - if ((local != t->parms.iph.saddr && 190 - (local != t->parms.iph.daddr || 191 - !ipv4_is_multicast(local))) || 192 - !(t->dev->flags & IFF_UP)) 188 + if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) && 189 + (local != t->parms.iph.daddr || !ipv4_is_multicast(local))) 190 + continue; 191 + 192 + if (!(t->dev->flags & IFF_UP)) 193 193 continue; 194 194 195 195 if (!ip_tunnel_key_match(&t->parms, flags, key)) ··· 207 205 208 206 hlist_for_each_entry_rcu(t, head, hash_node) { 209 207 if (t->parms.i_key != key || 208 + t->parms.iph.saddr != 0 || 209 + t->parms.iph.daddr != 0 || 210 210 !(t->dev->flags & IFF_UP)) 211 211 continue; 212 212
+8 -7
net/ipv4/route.c
··· 1010 1010 const struct iphdr *iph = (const struct iphdr *) skb->data; 1011 1011 struct flowi4 fl4; 1012 1012 struct rtable *rt; 1013 - struct dst_entry *dst; 1013 + struct dst_entry *odst = NULL; 1014 1014 bool new = false; 1015 1015 1016 1016 bh_lock_sock(sk); ··· 1018 1018 if (!ip_sk_accept_pmtu(sk)) 1019 1019 goto out; 1020 1020 1021 - rt = (struct rtable *) __sk_dst_get(sk); 1021 + odst = sk_dst_get(sk); 1022 1022 1023 - if (sock_owned_by_user(sk) || !rt) { 1023 + if (sock_owned_by_user(sk) || !odst) { 1024 1024 __ipv4_sk_update_pmtu(skb, sk, mtu); 1025 1025 goto out; 1026 1026 } 1027 1027 1028 1028 __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); 1029 1029 1030 - if (!__sk_dst_check(sk, 0)) { 1030 + rt = (struct rtable *)odst; 1031 + if (odst->obsolete && odst->ops->check(odst, 0) == NULL) { 1031 1032 rt = ip_route_output_flow(sock_net(sk), &fl4, sk); 1032 1033 if (IS_ERR(rt)) 1033 1034 goto out; ··· 1038 1037 1039 1038 __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu); 1040 1039 1041 - dst = dst_check(&rt->dst, 0); 1042 - if (!dst) { 1040 + if (!dst_check(&rt->dst, 0)) { 1043 1041 if (new) 1044 1042 dst_release(&rt->dst); 1045 1043 ··· 1050 1050 } 1051 1051 1052 1052 if (new) 1053 - __sk_dst_set(sk, &rt->dst); 1053 + sk_dst_set(sk, &rt->dst); 1054 1054 1055 1055 out: 1056 1056 bh_unlock_sock(sk); 1057 + dst_release(odst); 1057 1058 } 1058 1059 EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); 1059 1060
+2 -1
net/ipv4/tcp.c
··· 1108 1108 if (unlikely(tp->repair)) { 1109 1109 if (tp->repair_queue == TCP_RECV_QUEUE) { 1110 1110 copied = tcp_send_rcvq(sk, msg, size); 1111 - goto out; 1111 + goto out_nopush; 1112 1112 } 1113 1113 1114 1114 err = -EINVAL; ··· 1282 1282 out: 1283 1283 if (copied) 1284 1284 tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); 1285 + out_nopush: 1285 1286 release_sock(sk); 1286 1287 return copied + copied_syn; 1287 1288
+4 -4
net/ipv4/tcp_input.c
··· 1106 1106 } 1107 1107 1108 1108 /* D-SACK for already forgotten data... Do dumb counting. */ 1109 - if (dup_sack && tp->undo_marker && tp->undo_retrans && 1109 + if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 && 1110 1110 !after(end_seq_0, prior_snd_una) && 1111 1111 after(end_seq_0, tp->undo_marker)) 1112 1112 tp->undo_retrans--; ··· 1187 1187 1188 1188 /* Account D-SACK for retransmitted packet. */ 1189 1189 if (dup_sack && (sacked & TCPCB_RETRANS)) { 1190 - if (tp->undo_marker && tp->undo_retrans && 1190 + if (tp->undo_marker && tp->undo_retrans > 0 && 1191 1191 after(end_seq, tp->undo_marker)) 1192 1192 tp->undo_retrans--; 1193 1193 if (sacked & TCPCB_SACKED_ACKED) ··· 1893 1893 tp->lost_out = 0; 1894 1894 1895 1895 tp->undo_marker = 0; 1896 - tp->undo_retrans = 0; 1896 + tp->undo_retrans = -1; 1897 1897 } 1898 1898 1899 1899 void tcp_clear_retrans(struct tcp_sock *tp) ··· 2664 2664 2665 2665 tp->prior_ssthresh = 0; 2666 2666 tp->undo_marker = tp->snd_una; 2667 - tp->undo_retrans = tp->retrans_out; 2667 + tp->undo_retrans = tp->retrans_out ? : -1; 2668 2668 2669 2669 if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { 2670 2670 if (!ece_ack)
+4 -2
net/ipv4/tcp_output.c
··· 2526 2526 if (!tp->retrans_stamp) 2527 2527 tp->retrans_stamp = TCP_SKB_CB(skb)->when; 2528 2528 2529 - tp->undo_retrans += tcp_skb_pcount(skb); 2530 - 2531 2529 /* snd_nxt is stored to detect loss of retransmitted segment, 2532 2530 * see tcp_input.c tcp_sacktag_write_queue(). 2533 2531 */ ··· 2533 2535 } else if (err != -EBUSY) { 2534 2536 NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); 2535 2537 } 2538 + 2539 + if (tp->undo_retrans < 0) 2540 + tp->undo_retrans = 0; 2541 + tp->undo_retrans += tcp_skb_pcount(skb); 2536 2542 return err; 2537 2543 } 2538 2544
+4 -1
net/ipv4/udp.c
··· 1587 1587 goto csum_error; 1588 1588 1589 1589 1590 - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) 1590 + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { 1591 + UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_RCVBUFERRORS, 1592 + is_udplite); 1591 1593 goto drop; 1594 + } 1592 1595 1593 1596 rc = 0; 1594 1597
+11 -2
net/ipv6/mcast.c
··· 1301 1301 len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr); 1302 1302 len -= skb_network_header_len(skb); 1303 1303 1304 - /* Drop queries with not link local source */ 1305 - if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) 1304 + /* RFC3810 6.2 1305 + * Upon reception of an MLD message that contains a Query, the node 1306 + * checks if the source address of the message is a valid link-local 1307 + * address, if the Hop Limit is set to 1, and if the Router Alert 1308 + * option is present in the Hop-By-Hop Options header of the IPv6 1309 + * packet. If any of these checks fails, the packet is dropped. 1310 + */ 1311 + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL) || 1312 + ipv6_hdr(skb)->hop_limit != 1 || 1313 + !(IP6CB(skb)->flags & IP6SKB_ROUTERALERT) || 1314 + IP6CB(skb)->ra != htons(IPV6_OPT_ROUTERALERT_MLD)) 1306 1315 return -EINVAL; 1307 1316 1308 1317 idev = __in6_dev_get(skb->dev);
+5 -1
net/ipv6/udp.c
··· 673 673 goto csum_error; 674 674 } 675 675 676 - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) 676 + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { 677 + UDP6_INC_STATS_BH(sock_net(sk), 678 + UDP_MIB_RCVBUFERRORS, is_udplite); 677 679 goto drop; 680 + } 678 681 679 682 skb_dst_drop(skb); 680 683 ··· 692 689 bh_unlock_sock(sk); 693 690 694 691 return rc; 692 + 695 693 csum_error: 696 694 UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite); 697 695 drop:
+2 -2
net/l2tp/l2tp_ppp.c
··· 1365 1365 int err; 1366 1366 1367 1367 if (level != SOL_PPPOL2TP) 1368 - return udp_prot.setsockopt(sk, level, optname, optval, optlen); 1368 + return -EINVAL; 1369 1369 1370 1370 if (optlen < sizeof(int)) 1371 1371 return -EINVAL; ··· 1491 1491 struct pppol2tp_session *ps; 1492 1492 1493 1493 if (level != SOL_PPPOL2TP) 1494 - return udp_prot.getsockopt(sk, level, optname, optval, optlen); 1494 + return -EINVAL; 1495 1495 1496 1496 if (get_user(len, optlen)) 1497 1497 return -EFAULT;
+2 -2
net/netlink/af_netlink.c
··· 636 636 while (nlk->cb_running && netlink_dump_space(nlk)) { 637 637 err = netlink_dump(sk); 638 638 if (err < 0) { 639 - sk->sk_err = err; 639 + sk->sk_err = -err; 640 640 sk->sk_error_report(sk); 641 641 break; 642 642 } ··· 2480 2480 atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) { 2481 2481 ret = netlink_dump(sk); 2482 2482 if (ret) { 2483 - sk->sk_err = ret; 2483 + sk->sk_err = -ret; 2484 2484 sk->sk_error_report(sk); 2485 2485 } 2486 2486 }
+2
net/openvswitch/actions.c
··· 551 551 552 552 case OVS_ACTION_ATTR_SAMPLE: 553 553 err = sample(dp, skb, a); 554 + if (unlikely(err)) /* skb already freed. */ 555 + return err; 554 556 break; 555 557 } 556 558
+13 -14
net/openvswitch/datapath.c
··· 1 1 /* 2 - * Copyright (c) 2007-2013 Nicira, Inc. 2 + * Copyright (c) 2007-2014 Nicira, Inc. 3 3 * 4 4 * This program is free software; you can redistribute it and/or 5 5 * modify it under the terms of version 2 of the GNU General Public ··· 276 276 OVS_CB(skb)->flow = flow; 277 277 OVS_CB(skb)->pkt_key = &key; 278 278 279 - ovs_flow_stats_update(OVS_CB(skb)->flow, skb); 279 + ovs_flow_stats_update(OVS_CB(skb)->flow, key.tp.flags, skb); 280 280 ovs_execute_actions(dp, skb); 281 281 stats_counter = &stats->n_hit; 282 282 ··· 889 889 } 890 890 /* The unmasked key has to be the same for flow updates. */ 891 891 if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { 892 - error = -EEXIST; 893 - goto err_unlock_ovs; 892 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 893 + if (!flow) { 894 + error = -ENOENT; 895 + goto err_unlock_ovs; 896 + } 894 897 } 895 898 /* Update actions. */ 896 899 old_acts = ovsl_dereference(flow->sf_acts); ··· 984 981 goto err_unlock_ovs; 985 982 } 986 983 /* Check that the flow exists. */ 987 - flow = ovs_flow_tbl_lookup(&dp->table, &key); 984 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 988 985 if (unlikely(!flow)) { 989 986 error = -ENOENT; 990 987 goto err_unlock_ovs; 991 988 } 992 - /* The unmasked key has to be the same for flow updates. */ 993 - if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { 994 - error = -EEXIST; 995 - goto err_unlock_ovs; 996 - } 989 + 997 990 /* Update actions, if present. 
*/ 998 991 if (likely(acts)) { 999 992 old_acts = ovsl_dereference(flow->sf_acts); ··· 1062 1063 goto unlock; 1063 1064 } 1064 1065 1065 - flow = ovs_flow_tbl_lookup(&dp->table, &key); 1066 - if (!flow || !ovs_flow_cmp_unmasked_key(flow, &match)) { 1066 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 1067 + if (!flow) { 1067 1068 err = -ENOENT; 1068 1069 goto unlock; 1069 1070 } ··· 1112 1113 goto unlock; 1113 1114 } 1114 1115 1115 - flow = ovs_flow_tbl_lookup(&dp->table, &key); 1116 - if (unlikely(!flow || !ovs_flow_cmp_unmasked_key(flow, &match))) { 1116 + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); 1117 + if (unlikely(!flow)) { 1117 1118 err = -ENOENT; 1118 1119 goto unlock; 1119 1120 }
+2 -2
net/openvswitch/flow.c
··· 61 61 62 62 #define TCP_FLAGS_BE16(tp) (*(__be16 *)&tcp_flag_word(tp) & htons(0x0FFF)) 63 63 64 - void ovs_flow_stats_update(struct sw_flow *flow, struct sk_buff *skb) 64 + void ovs_flow_stats_update(struct sw_flow *flow, __be16 tcp_flags, 65 + struct sk_buff *skb) 65 66 { 66 67 struct flow_stats *stats; 67 - __be16 tcp_flags = flow->key.tp.flags; 68 68 int node = numa_node_id(); 69 69 70 70 stats = rcu_dereference(flow->stats[node]);
+3 -2
net/openvswitch/flow.h
··· 1 1 /* 2 - * Copyright (c) 2007-2013 Nicira, Inc. 2 + * Copyright (c) 2007-2014 Nicira, Inc. 3 3 * 4 4 * This program is free software; you can redistribute it and/or 5 5 * modify it under the terms of version 2 of the GNU General Public ··· 180 180 unsigned char ar_tip[4]; /* target IP address */ 181 181 } __packed; 182 182 183 - void ovs_flow_stats_update(struct sw_flow *, struct sk_buff *); 183 + void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags, 184 + struct sk_buff *); 184 185 void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *, 185 186 unsigned long *used, __be16 *tcp_flags); 186 187 void ovs_flow_stats_clear(struct sw_flow *);
+16
net/openvswitch/flow_table.c
··· 456 456 return ovs_flow_tbl_lookup_stats(tbl, key, &n_mask_hit); 457 457 } 458 458 459 + struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, 460 + struct sw_flow_match *match) 461 + { 462 + struct table_instance *ti = rcu_dereference_ovsl(tbl->ti); 463 + struct sw_flow_mask *mask; 464 + struct sw_flow *flow; 465 + 466 + /* Always called under ovs-mutex. */ 467 + list_for_each_entry(mask, &tbl->mask_list, list) { 468 + flow = masked_flow_lookup(ti, match->key, mask); 469 + if (flow && ovs_flow_cmp_unmasked_key(flow, match)) /* Found */ 470 + return flow; 471 + } 472 + return NULL; 473 + } 474 + 459 475 int ovs_flow_tbl_num_masks(const struct flow_table *table) 460 476 { 461 477 struct sw_flow_mask *mask;
+2 -1
net/openvswitch/flow_table.h
··· 76 76 u32 *n_mask_hit); 77 77 struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *, 78 78 const struct sw_flow_key *); 79 - 79 + struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, 80 + struct sw_flow_match *match); 80 81 bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow, 81 82 struct sw_flow_match *match); 82 83
+17
net/openvswitch/vport-gre.c
··· 110 110 return PACKET_RCVD; 111 111 } 112 112 113 + /* Called with rcu_read_lock and BH disabled. */ 114 + static int gre_err(struct sk_buff *skb, u32 info, 115 + const struct tnl_ptk_info *tpi) 116 + { 117 + struct ovs_net *ovs_net; 118 + struct vport *vport; 119 + 120 + ovs_net = net_generic(dev_net(skb->dev), ovs_net_id); 121 + vport = rcu_dereference(ovs_net->vport_net.gre_vport); 122 + 123 + if (unlikely(!vport)) 124 + return PACKET_REJECT; 125 + else 126 + return PACKET_RCVD; 127 + } 128 + 113 129 static int gre_tnl_send(struct vport *vport, struct sk_buff *skb) 114 130 { 115 131 struct net *net = ovs_dp_get_net(vport->dp); ··· 202 186 203 187 static struct gre_cisco_protocol gre_protocol = { 204 188 .handler = gre_rcv, 189 + .err_handler = gre_err, 205 190 .priority = 1, 206 191 }; 207 192
+15 -107
net/sctp/ulpevent.c
··· 366 366 * specification [SCTP] and any extensions for a list of possible 367 367 * error formats. 368 368 */ 369 - struct sctp_ulpevent *sctp_ulpevent_make_remote_error( 370 - const struct sctp_association *asoc, struct sctp_chunk *chunk, 371 - __u16 flags, gfp_t gfp) 369 + struct sctp_ulpevent * 370 + sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, 371 + struct sctp_chunk *chunk, __u16 flags, 372 + gfp_t gfp) 372 373 { 373 374 struct sctp_ulpevent *event; 374 375 struct sctp_remote_error *sre; ··· 388 387 /* Copy the skb to a new skb with room for us to prepend 389 388 * notification with. 390 389 */ 391 - skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error), 392 - 0, gfp); 390 + skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp); 393 391 394 392 /* Pull off the rest of the cause TLV from the chunk. */ 395 393 skb_pull(chunk->skb, elen); ··· 399 399 event = sctp_skb2event(skb); 400 400 sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); 401 401 402 - sre = (struct sctp_remote_error *) 403 - skb_push(skb, sizeof(struct sctp_remote_error)); 402 + sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre)); 404 403 405 404 /* Trim the buffer to the right length. */ 406 - skb_trim(skb, sizeof(struct sctp_remote_error) + elen); 405 + skb_trim(skb, sizeof(*sre) + elen); 407 406 408 - /* Socket Extensions for SCTP 409 - * 5.3.1.3 SCTP_REMOTE_ERROR 410 - * 411 - * sre_type: 412 - * It should be SCTP_REMOTE_ERROR. 413 - */ 407 + /* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */ 408 + memset(sre, 0, sizeof(*sre)); 414 409 sre->sre_type = SCTP_REMOTE_ERROR; 415 - 416 - /* 417 - * Socket Extensions for SCTP 418 - * 5.3.1.3 SCTP_REMOTE_ERROR 419 - * 420 - * sre_flags: 16 bits (unsigned integer) 421 - * Currently unused. 
422 - */ 423 410 sre->sre_flags = 0; 424 - 425 - /* Socket Extensions for SCTP 426 - * 5.3.1.3 SCTP_REMOTE_ERROR 427 - * 428 - * sre_length: sizeof (__u32) 429 - * 430 - * This field is the total length of the notification data, 431 - * including the notification header. 432 - */ 433 411 sre->sre_length = skb->len; 434 - 435 - /* Socket Extensions for SCTP 436 - * 5.3.1.3 SCTP_REMOTE_ERROR 437 - * 438 - * sre_error: 16 bits (unsigned integer) 439 - * This value represents one of the Operational Error causes defined in 440 - * the SCTP specification, in network byte order. 441 - */ 442 412 sre->sre_error = cause; 443 - 444 - /* Socket Extensions for SCTP 445 - * 5.3.1.3 SCTP_REMOTE_ERROR 446 - * 447 - * sre_assoc_id: sizeof (sctp_assoc_t) 448 - * 449 - * The association id field, holds the identifier for the association. 450 - * All notifications for a given association have the same association 451 - * identifier. For TCP style socket, this field is ignored. 452 - */ 453 413 sctp_ulpevent_set_owner(event, asoc); 454 414 sre->sre_assoc_id = sctp_assoc2id(asoc); 455 415 456 416 return event; 457 - 458 417 fail: 459 418 return NULL; 460 419 } ··· 858 899 return notification->sn_header.sn_type; 859 900 } 860 901 861 - /* Copy out the sndrcvinfo into a msghdr. */ 902 + /* RFC6458, Section 5.3.2. SCTP Header Information Structure 903 + * (SCTP_SNDRCV, DEPRECATED) 904 + */ 862 905 void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, 863 906 struct msghdr *msghdr) 864 907 { ··· 869 908 if (sctp_ulpevent_is_notification(event)) 870 909 return; 871 910 872 - /* Sockets API Extensions for SCTP 873 - * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV) 874 - * 875 - * sinfo_stream: 16 bits (unsigned integer) 876 - * 877 - * For recvmsg() the SCTP stack places the message's stream number in 878 - * this value. 
879 - */ 911 + memset(&sinfo, 0, sizeof(sinfo)); 880 912 sinfo.sinfo_stream = event->stream; 881 - /* sinfo_ssn: 16 bits (unsigned integer) 882 - * 883 - * For recvmsg() this value contains the stream sequence number that 884 - * the remote endpoint placed in the DATA chunk. For fragmented 885 - * messages this is the same number for all deliveries of the message 886 - * (if more than one recvmsg() is needed to read the message). 887 - */ 888 913 sinfo.sinfo_ssn = event->ssn; 889 - /* sinfo_ppid: 32 bits (unsigned integer) 890 - * 891 - * In recvmsg() this value is 892 - * the same information that was passed by the upper layer in the peer 893 - * application. Please note that byte order issues are NOT accounted 894 - * for and this information is passed opaquely by the SCTP stack from 895 - * one end to the other. 896 - */ 897 914 sinfo.sinfo_ppid = event->ppid; 898 - /* sinfo_flags: 16 bits (unsigned integer) 899 - * 900 - * This field may contain any of the following flags and is composed of 901 - * a bitwise OR of these values. 902 - * 903 - * recvmsg() flags: 904 - * 905 - * SCTP_UNORDERED - This flag is present when the message was sent 906 - * non-ordered. 907 - */ 908 915 sinfo.sinfo_flags = event->flags; 909 - /* sinfo_tsn: 32 bit (unsigned integer) 910 - * 911 - * For the receiving side, this field holds a TSN that was 912 - * assigned to one of the SCTP Data Chunks. 913 - */ 914 916 sinfo.sinfo_tsn = event->tsn; 915 - /* sinfo_cumtsn: 32 bit (unsigned integer) 916 - * 917 - * This field will hold the current cumulative TSN as 918 - * known by the underlying SCTP layer. Note this field is 919 - * ignored when sending and only valid for a receive 920 - * operation when sinfo_flags are set to SCTP_UNORDERED. 
921 - */ 922 917 sinfo.sinfo_cumtsn = event->cumtsn; 923 - /* sinfo_assoc_id: sizeof (sctp_assoc_t) 924 - * 925 - * The association handle field, sinfo_assoc_id, holds the identifier 926 - * for the association announced in the COMMUNICATION_UP notification. 927 - * All notifications for a given association have the same identifier. 928 - * Ignored for one-to-one style sockets. 929 - */ 930 918 sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc); 931 - 932 - /* context value that is set via SCTP_CONTEXT socket option. */ 919 + /* Context value that is set via SCTP_CONTEXT socket option. */ 933 920 sinfo.sinfo_context = event->asoc->default_rcv_context; 934 - 935 921 /* These fields are not used while receiving. */ 936 922 sinfo.sinfo_timetolive = 0; 937 923 938 924 put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV, 939 - sizeof(struct sctp_sndrcvinfo), (void *)&sinfo); 925 + sizeof(sinfo), &sinfo); 940 926 } 941 927 942 928 /* Do accounting for bytes received and hold a reference to the association
+1
net/tipc/bcast.c
··· 559 559 560 560 buf = node->bclink.deferred_head; 561 561 node->bclink.deferred_head = buf->next; 562 + buf->next = NULL; 562 563 node->bclink.deferred_size--; 563 564 goto receive; 564 565 }
+8 -3
net/tipc/msg.c
··· 96 96 } 97 97 98 98 /* tipc_buf_append(): Append a buffer to the fragment list of another buffer 99 - * Let first buffer become head buffer 100 - * Returns 1 and sets *buf to headbuf if chain is complete, otherwise 0 101 - * Leaves headbuf pointer at NULL if failure 99 + * @*headbuf: in: NULL for first frag, otherwise value returned from prev call 100 + * out: set when successful non-complete reassembly, otherwise NULL 101 + * @*buf: in: the buffer to append. Always defined 102 + * out: head buf after successful complete reassembly, otherwise NULL 103 + * Returns 1 when reassembly complete, otherwise 0 102 104 */ 103 105 int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) 104 106 { ··· 119 117 goto out_free; 120 118 head = *headbuf = frag; 121 119 skb_frag_list_init(head); 120 + *buf = NULL; 122 121 return 0; 123 122 } 124 123 if (!head) ··· 148 145 out_free: 149 146 pr_warn_ratelimited("Unable to build fragment list\n"); 150 147 kfree_skb(*buf); 148 + kfree_skb(*headbuf); 149 + *buf = *headbuf = NULL; 151 150 return 0; 152 151 }
+12 -3
scripts/kernel-doc
··· 2073 2073 sub dump_function($$) { 2074 2074 my $prototype = shift; 2075 2075 my $file = shift; 2076 + my $noret = 0; 2076 2077 2077 2078 $prototype =~ s/^static +//; 2078 2079 $prototype =~ s/^extern +//; ··· 2087 2086 $prototype =~ s/__init_or_module +//; 2088 2087 $prototype =~ s/__must_check +//; 2089 2088 $prototype =~ s/__weak +//; 2090 - $prototype =~ s/^#\s*define\s+//; #ak added 2089 + my $define = $prototype =~ s/^#\s*define\s+//; #ak added 2091 2090 $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//; 2092 2091 2093 2092 # Yes, this truly is vile. We are looking for: ··· 2106 2105 # - atomic_set (macro) 2107 2106 # - pci_match_device, __copy_to_user (long return type) 2108 2107 2109 - if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2108 + if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) { 2109 + # This is an object-like macro, it has no return type and no parameter 2110 + # list. 2111 + # Function-like macros are not allowed to have spaces between 2112 + # declaration_name and opening parenthesis (notice the \s+). 2113 + $return_type = $1; 2114 + $declaration_name = $2; 2115 + $noret = 1; 2116 + } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2110 2117 $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2111 2118 $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || 2112 2119 $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || ··· 2149 2140 # of warnings goes sufficiently down, the check is only performed in 2150 2141 # verbose mode. 2151 2142 # TODO: always perform the check. 2152 - if ($verbose) { 2143 + if ($verbose && !$noret) { 2153 2144 check_return_section($file, $declaration_name, $return_type); 2154 2145 } 2155 2146
+2 -2
scripts/recordmcount.h
··· 163 163 164 164 static int MIPS_is_fake_mcount(Elf_Rel const *rp) 165 165 { 166 - static Elf_Addr old_r_offset; 166 + static Elf_Addr old_r_offset = ~(Elf_Addr)0; 167 167 Elf_Addr current_r_offset = _w(rp->r_offset); 168 168 int is_fake; 169 169 170 - is_fake = old_r_offset && 170 + is_fake = (old_r_offset != ~(Elf_Addr)0) && 171 171 (current_r_offset - old_r_offset == MIPS_FAKEMCOUNT_OFFSET); 172 172 old_r_offset = current_r_offset; 173 173
+1
sound/pci/hda/hda_auto_parser.c
··· 898 898 if (!strcmp(codec->modelname, models->name)) { 899 899 codec->fixup_id = models->id; 900 900 codec->fixup_name = models->name; 901 + codec->fixup_list = fixlist; 901 902 codec->fixup_forced = 1; 902 903 return; 903 904 }
+2 -1
sound/pci/hda/hda_controller.c
··· 193 193 dsp_unlock(azx_dev); 194 194 return azx_dev; 195 195 } 196 - if (!res) 196 + if (!res || 197 + (chip->driver_caps & AZX_DCAPS_REVERSE_ASSIGN)) 197 198 res = azx_dev; 198 199 } 199 200 dsp_unlock(azx_dev);
+55
sound/pci/hda/hda_i915.c
··· 20 20 #include <linux/module.h> 21 21 #include <sound/core.h> 22 22 #include <drm/i915_powerwell.h> 23 + #include "hda_priv.h" 23 24 #include "hda_i915.h" 25 + 26 + /* Intel HSW/BDW display HDA controller Extended Mode registers. 27 + * EM4 (M value) and EM5 (N Value) are used to convert CDClk (Core Display 28 + * Clock) to 24MHz BCLK: BCLK = CDCLK * M / N 29 + * The values will be lost when the display power well is disabled. 30 + */ 31 + #define ICH6_REG_EM4 0x100c 32 + #define ICH6_REG_EM5 0x1010 24 33 25 34 static int (*get_power)(void); 26 35 static int (*put_power)(void); 36 + static int (*get_cdclk)(void); 27 37 28 38 int hda_display_power(bool enable) 29 39 { ··· 47 37 else 48 38 return put_power(); 49 39 } 40 + 41 + void haswell_set_bclk(struct azx *chip) 42 + { 43 + int cdclk_freq; 44 + unsigned int bclk_m, bclk_n; 45 + 46 + if (!get_cdclk) 47 + return; 48 + 49 + cdclk_freq = get_cdclk(); 50 + switch (cdclk_freq) { 51 + case 337500: 52 + bclk_m = 16; 53 + bclk_n = 225; 54 + break; 55 + 56 + case 450000: 57 + default: /* default CDCLK 450MHz */ 58 + bclk_m = 4; 59 + bclk_n = 75; 60 + break; 61 + 62 + case 540000: 63 + bclk_m = 4; 64 + bclk_n = 90; 65 + break; 66 + 67 + case 675000: 68 + bclk_m = 8; 69 + bclk_n = 225; 70 + break; 71 + } 72 + 73 + azx_writew(chip, EM4, bclk_m); 74 + azx_writew(chip, EM5, bclk_n); 75 + } 76 + 50 77 51 78 int hda_i915_init(void) 52 79 { ··· 102 55 return -ENODEV; 103 56 } 104 57 58 + get_cdclk = symbol_request(i915_get_cdclk_freq); 59 + if (!get_cdclk) /* may have abnormal BCLK and audio playback rate */ 60 + pr_warn("hda-i915: get_cdclk symbol get fail\n"); 61 + 105 62 pr_debug("HDA driver get symbol successfully from i915 module\n"); 106 63 107 64 return err; ··· 120 69 if (put_power) { 121 70 symbol_put(i915_release_power_well); 122 71 put_power = NULL; 72 + } 73 + if (get_cdclk) { 74 + symbol_put(i915_get_cdclk_freq); 75 + get_cdclk = NULL; 123 76 } 124 77 125 78 return 0;
+2
sound/pci/hda/hda_i915.h
··· 18 18 19 19 #ifdef CONFIG_SND_HDA_I915 20 20 int hda_display_power(bool enable); 21 + void haswell_set_bclk(struct azx *chip); 21 22 int hda_i915_init(void); 22 23 int hda_i915_exit(void); 23 24 #else 24 25 static inline int hda_display_power(bool enable) { return 0; } 26 + static inline void haswell_set_bclk(struct azx *chip) { return; } 25 27 static inline int hda_i915_init(void) 26 28 { 27 29 return -ENODEV;
+32 -13
sound/pci/hda/hda_intel.c
··· 62 62 #include <linux/vga_switcheroo.h> 63 63 #include <linux/firmware.h> 64 64 #include "hda_codec.h" 65 - #include "hda_i915.h" 66 65 #include "hda_controller.h" 67 66 #include "hda_priv.h" 67 + #include "hda_i915.h" 68 68 69 69 70 70 static int index[SNDRV_CARDS] = SNDRV_DEFAULT_IDX; ··· 227 227 /* quirks for Intel PCH */ 228 228 #define AZX_DCAPS_INTEL_PCH_NOPM \ 229 229 (AZX_DCAPS_SCH_SNOOP | AZX_DCAPS_BUFSIZE | \ 230 - AZX_DCAPS_COUNT_LPIB_DELAY) 230 + AZX_DCAPS_COUNT_LPIB_DELAY | AZX_DCAPS_REVERSE_ASSIGN) 231 231 232 232 #define AZX_DCAPS_INTEL_PCH \ 233 233 (AZX_DCAPS_INTEL_PCH_NOPM | AZX_DCAPS_PM_RUNTIME) ··· 287 287 [AZX_DRIVER_CTHDA] = "HDA Creative", 288 288 [AZX_DRIVER_GENERIC] = "HD-Audio Generic", 289 289 }; 290 + 291 + struct hda_intel { 292 + struct azx chip; 293 + }; 294 + 290 295 291 296 #ifdef CONFIG_X86 292 297 static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool on) ··· 596 591 struct azx *chip = card->private_data; 597 592 struct azx_pcm *p; 598 593 599 - if (chip->disabled) 594 + if (chip->disabled || chip->init_failed) 600 595 return 0; 601 596 602 597 snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); ··· 611 606 free_irq(chip->irq, chip); 612 607 chip->irq = -1; 613 608 } 609 + 614 610 if (chip->msi) 615 611 pci_disable_msi(chip->pci); 616 612 pci_disable_device(pci); ··· 628 622 struct snd_card *card = dev_get_drvdata(dev); 629 623 struct azx *chip = card->private_data; 630 624 631 - if (chip->disabled) 625 + if (chip->disabled || chip->init_failed) 632 626 return 0; 633 627 634 - if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) 628 + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { 635 629 hda_display_power(true); 630 + haswell_set_bclk(chip); 631 + } 636 632 pci_set_power_state(pci, PCI_D0); 637 633 pci_restore_state(pci); 638 634 if (pci_enable_device(pci) < 0) { ··· 665 657 struct snd_card *card = dev_get_drvdata(dev); 666 658 struct azx *chip = card->private_data; 667 659 668 - if 
(chip->disabled) 660 + if (chip->disabled || chip->init_failed) 669 661 return 0; 670 662 671 663 if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) ··· 680 672 azx_clear_irq_pending(chip); 681 673 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) 682 674 hda_display_power(false); 675 + 683 676 return 0; 684 677 } 685 678 ··· 692 683 struct hda_codec *codec; 693 684 int status; 694 685 695 - if (chip->disabled) 686 + if (chip->disabled || chip->init_failed) 696 687 return 0; 697 688 698 689 if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) 699 690 return 0; 700 691 701 - if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) 692 + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { 702 693 hda_display_power(true); 694 + haswell_set_bclk(chip); 695 + } 703 696 704 697 /* Read STATESTS before controller reset */ 705 698 status = azx_readw(chip, STATESTS); ··· 729 718 struct snd_card *card = dev_get_drvdata(dev); 730 719 struct azx *chip = card->private_data; 731 720 732 - if (chip->disabled) 721 + if (chip->disabled || chip->init_failed) 733 722 return 0; 734 723 735 724 if (!power_save_controller || ··· 894 883 static int azx_free(struct azx *chip) 895 884 { 896 885 struct pci_dev *pci = chip->pci; 886 + struct hda_intel *hda = container_of(chip, struct hda_intel, chip); 887 + 897 888 int i; 898 889 899 890 if ((chip->driver_caps & AZX_DCAPS_PM_RUNTIME) ··· 943 930 hda_display_power(false); 944 931 hda_i915_exit(); 945 932 } 946 - kfree(chip); 933 + kfree(hda); 947 934 948 935 return 0; 949 936 } ··· 1187 1174 static struct snd_device_ops ops = { 1188 1175 .dev_free = azx_dev_free, 1189 1176 }; 1177 + struct hda_intel *hda; 1190 1178 struct azx *chip; 1191 1179 int err; 1192 1180 ··· 1197 1183 if (err < 0) 1198 1184 return err; 1199 1185 1200 - chip = kzalloc(sizeof(*chip), GFP_KERNEL); 1201 - if (!chip) { 1202 - dev_err(card->dev, "Cannot allocate chip\n"); 1186 + hda = kzalloc(sizeof(*hda), GFP_KERNEL); 1187 + if (!hda) { 1188 + dev_err(card->dev, "Cannot allocate 
hda\n"); 1203 1189 pci_disable_device(pci); 1204 1190 return -ENOMEM; 1205 1191 } 1206 1192 1193 + chip = &hda->chip; 1207 1194 spin_lock_init(&chip->reg_lock); 1208 1195 mutex_init(&chip->open_mutex); 1209 1196 chip->card = card; ··· 1390 1375 1391 1376 /* initialize chip */ 1392 1377 azx_init_pci(chip); 1378 + 1379 + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) 1380 + haswell_set_bclk(chip); 1381 + 1393 1382 azx_init_chip(chip, (probe_only[dev] & 2) == 0); 1394 1383 1395 1384 /* codec detection */
+21
sound/pci/hda/hda_local.h
··· 417 417 int value; /* quirk value */ 418 418 }; 419 419 420 + #ifdef CONFIG_SND_DEBUG_VERBOSE 421 + 422 + #define SND_HDA_PIN_QUIRK(_codec, _subvendor, _name, _value, _pins...) \ 423 + { .codec = _codec,\ 424 + .subvendor = _subvendor,\ 425 + .name = _name,\ 426 + .value = _value,\ 427 + .pins = (const struct hda_pintbl[]) { _pins } \ 428 + } 429 + #else 430 + 431 + #define SND_HDA_PIN_QUIRK(_codec, _subvendor, _name, _value, _pins...) \ 432 + { .codec = _codec,\ 433 + .subvendor = _subvendor,\ 434 + .value = _value,\ 435 + .pins = (const struct hda_pintbl[]) { _pins } \ 436 + } 437 + 438 + #endif 439 + 440 + 420 441 /* fixup types */ 421 442 enum { 422 443 HDA_FIXUP_INVALID,
+1
sound/pci/hda/hda_priv.h
··· 186 186 #define AZX_DCAPS_BUFSIZE (1 << 21) /* no buffer size alignment */ 187 187 #define AZX_DCAPS_ALIGN_BUFSIZE (1 << 22) /* buffer size alignment */ 188 188 #define AZX_DCAPS_4K_BDLE_BOUNDARY (1 << 23) /* BDLE in 4k boundary */ 189 + #define AZX_DCAPS_REVERSE_ASSIGN (1 << 24) /* Assign devices in reverse order */ 189 190 #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */ 190 191 #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ 191 192 #define AZX_DCAPS_I915_POWERWELL (1 << 27) /* HSW i915 powerwell support */
+1 -1
sound/pci/hda/hda_tegra.c
··· 236 236 return rc; 237 237 } 238 238 239 + #ifdef CONFIG_PM_SLEEP 239 240 static void hda_tegra_disable_clocks(struct hda_tegra *data) 240 241 { 241 242 clk_disable_unprepare(data->hda2hdmi_clk); ··· 244 243 clk_disable_unprepare(data->hda_clk); 245 244 } 246 245 247 - #ifdef CONFIG_PM_SLEEP 248 246 /* 249 247 * power management 250 248 */
+3 -1
sound/pci/hda/patch_hdmi.c
··· 2204 2204 struct hdmi_spec *spec = codec->spec; 2205 2205 int pin_idx; 2206 2206 2207 - generic_hdmi_init(codec); 2207 + codec->patch_ops.init(codec); 2208 2208 snd_hda_codec_resume_amp(codec); 2209 2209 snd_hda_codec_resume_cache(codec); 2210 2210 ··· 3337 3337 { .id = 0x10de0051, .name = "GPU 51 HDMI/DP", .patch = patch_nvhdmi }, 3338 3338 { .id = 0x10de0060, .name = "GPU 60 HDMI/DP", .patch = patch_nvhdmi }, 3339 3339 { .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch }, 3340 + { .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi }, 3340 3341 { .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi }, 3341 3342 { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch }, 3342 3343 { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi }, ··· 3395 3394 MODULE_ALIAS("snd-hda-codec-id:10de0051"); 3396 3395 MODULE_ALIAS("snd-hda-codec-id:10de0060"); 3397 3396 MODULE_ALIAS("snd-hda-codec-id:10de0067"); 3397 + MODULE_ALIAS("snd-hda-codec-id:10de0070"); 3398 3398 MODULE_ALIAS("snd-hda-codec-id:10de0071"); 3399 3399 MODULE_ALIAS("snd-hda-codec-id:10de8001"); 3400 3400 MODULE_ALIAS("snd-hda-codec-id:11069f80");
+196 -306
sound/pci/hda/patch_realtek.c
··· 4880 4880 SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK), 4881 4881 SND_PCI_QUIRK(0x17aa, 0x220c, "Thinkpad T440s", ALC292_FIXUP_TPT440_DOCK), 4882 4882 SND_PCI_QUIRK(0x17aa, 0x220e, "Thinkpad T440p", ALC292_FIXUP_TPT440_DOCK), 4883 + SND_PCI_QUIRK(0x17aa, 0x2210, "Thinkpad T540p", ALC292_FIXUP_TPT440_DOCK), 4883 4884 SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4884 4885 SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4885 4886 SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 4963 4962 }; 4964 4963 4965 4964 static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = { 4966 - { 4967 - .codec = 0x10ec0255, 4968 - .subvendor = 0x1028, 4969 - #ifdef CONFIG_SND_DEBUG_VERBOSE 4970 - .name = "Dell", 4971 - #endif 4972 - .pins = (const struct hda_pintbl[]) { 4973 - {0x12, 0x90a60140}, 4974 - {0x14, 0x90170110}, 4975 - {0x17, 0x40000000}, 4976 - {0x18, 0x411111f0}, 4977 - {0x19, 0x411111f0}, 4978 - {0x1a, 0x411111f0}, 4979 - {0x1b, 0x411111f0}, 4980 - {0x1d, 0x40700001}, 4981 - {0x1e, 0x411111f0}, 4982 - {0x21, 0x02211020}, 4983 - }, 4984 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 4985 - }, 4986 - { 4987 - .codec = 0x10ec0255, 4988 - .subvendor = 0x1028, 4989 - #ifdef CONFIG_SND_DEBUG_VERBOSE 4990 - .name = "Dell", 4991 - #endif 4992 - .pins = (const struct hda_pintbl[]) { 4993 - {0x12, 0x90a60160}, 4994 - {0x14, 0x90170120}, 4995 - {0x17, 0x40000000}, 4996 - {0x18, 0x411111f0}, 4997 - {0x19, 0x411111f0}, 4998 - {0x1a, 0x411111f0}, 4999 - {0x1b, 0x411111f0}, 5000 - {0x1d, 0x40700001}, 5001 - {0x1e, 0x411111f0}, 5002 - {0x21, 0x02211030}, 5003 - }, 5004 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5005 - }, 5006 - { 5007 - .codec = 0x10ec0255, 5008 - .subvendor = 0x1028, 5009 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5010 - .name = "Dell", 5011 - #endif 5012 - .pins = (const struct hda_pintbl[]) { 5013 - {0x12, 0x90a60160}, 5014 - {0x14, 0x90170120}, 
5015 - {0x17, 0x90170140}, 5016 - {0x18, 0x40000000}, 5017 - {0x19, 0x411111f0}, 5018 - {0x1a, 0x411111f0}, 5019 - {0x1b, 0x411111f0}, 5020 - {0x1d, 0x41163b05}, 5021 - {0x1e, 0x411111f0}, 5022 - {0x21, 0x0321102f}, 5023 - }, 5024 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5025 - }, 5026 - { 5027 - .codec = 0x10ec0255, 5028 - .subvendor = 0x1028, 5029 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5030 - .name = "Dell", 5031 - #endif 5032 - .pins = (const struct hda_pintbl[]) { 5033 - {0x12, 0x90a60160}, 5034 - {0x14, 0x90170130}, 5035 - {0x17, 0x40000000}, 5036 - {0x18, 0x411111f0}, 5037 - {0x19, 0x411111f0}, 5038 - {0x1a, 0x411111f0}, 5039 - {0x1b, 0x411111f0}, 5040 - {0x1d, 0x40700001}, 5041 - {0x1e, 0x411111f0}, 5042 - {0x21, 0x02211040}, 5043 - }, 5044 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5045 - }, 5046 - { 5047 - .codec = 0x10ec0255, 5048 - .subvendor = 0x1028, 5049 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5050 - .name = "Dell", 5051 - #endif 5052 - .pins = (const struct hda_pintbl[]) { 5053 - {0x12, 0x90a60160}, 5054 - {0x14, 0x90170140}, 5055 - {0x17, 0x40000000}, 5056 - {0x18, 0x411111f0}, 5057 - {0x19, 0x411111f0}, 5058 - {0x1a, 0x411111f0}, 5059 - {0x1b, 0x411111f0}, 5060 - {0x1d, 0x40700001}, 5061 - {0x1e, 0x411111f0}, 5062 - {0x21, 0x02211050}, 5063 - }, 5064 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5065 - }, 5066 - { 5067 - .codec = 0x10ec0255, 5068 - .subvendor = 0x1028, 5069 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5070 - .name = "Dell", 5071 - #endif 5072 - .pins = (const struct hda_pintbl[]) { 5073 - {0x12, 0x90a60170}, 5074 - {0x14, 0x90170120}, 5075 - {0x17, 0x40000000}, 5076 - {0x18, 0x411111f0}, 5077 - {0x19, 0x411111f0}, 5078 - {0x1a, 0x411111f0}, 5079 - {0x1b, 0x411111f0}, 5080 - {0x1d, 0x40700001}, 5081 - {0x1e, 0x411111f0}, 5082 - {0x21, 0x02211030}, 5083 - }, 5084 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5085 - }, 5086 - { 5087 - .codec = 0x10ec0255, 5088 - .subvendor = 0x1028, 5089 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5090 - .name = "Dell", 
5091 - #endif 5092 - .pins = (const struct hda_pintbl[]) { 5093 - {0x12, 0x90a60170}, 5094 - {0x14, 0x90170130}, 5095 - {0x17, 0x40000000}, 5096 - {0x18, 0x411111f0}, 5097 - {0x19, 0x411111f0}, 5098 - {0x1a, 0x411111f0}, 5099 - {0x1b, 0x411111f0}, 5100 - {0x1d, 0x40700001}, 5101 - {0x1e, 0x411111f0}, 5102 - {0x21, 0x02211040}, 5103 - }, 5104 - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5105 - }, 5106 - { 5107 - .codec = 0x10ec0283, 5108 - .subvendor = 0x1028, 5109 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5110 - .name = "Dell", 5111 - #endif 5112 - .pins = (const struct hda_pintbl[]) { 5113 - {0x12, 0x90a60130}, 5114 - {0x14, 0x90170110}, 5115 - {0x17, 0x40020008}, 5116 - {0x18, 0x411111f0}, 5117 - {0x19, 0x411111f0}, 5118 - {0x1a, 0x411111f0}, 5119 - {0x1b, 0x411111f0}, 5120 - {0x1d, 0x40e00001}, 5121 - {0x1e, 0x411111f0}, 5122 - {0x21, 0x0321101f}, 5123 - }, 5124 - .value = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 5125 - }, 5126 - { 5127 - .codec = 0x10ec0283, 5128 - .subvendor = 0x1028, 5129 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5130 - .name = "Dell", 5131 - #endif 5132 - .pins = (const struct hda_pintbl[]) { 5133 - {0x12, 0x90a60160}, 5134 - {0x14, 0x90170120}, 5135 - {0x17, 0x40000000}, 5136 - {0x18, 0x411111f0}, 5137 - {0x19, 0x411111f0}, 5138 - {0x1a, 0x411111f0}, 5139 - {0x1b, 0x411111f0}, 5140 - {0x1d, 0x40700001}, 5141 - {0x1e, 0x411111f0}, 5142 - {0x21, 0x02211030}, 5143 - }, 5144 - .value = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 5145 - }, 5146 - { 5147 - .codec = 0x10ec0292, 5148 - .subvendor = 0x1028, 5149 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5150 - .name = "Dell", 5151 - #endif 5152 - .pins = (const struct hda_pintbl[]) { 5153 - {0x12, 0x90a60140}, 5154 - {0x13, 0x411111f0}, 5155 - {0x14, 0x90170110}, 5156 - {0x15, 0x0221401f}, 5157 - {0x16, 0x411111f0}, 5158 - {0x18, 0x411111f0}, 5159 - {0x19, 0x411111f0}, 5160 - {0x1a, 0x411111f0}, 5161 - {0x1b, 0x411111f0}, 5162 - {0x1d, 0x40700001}, 5163 - {0x1e, 0x411111f0}, 5164 - }, 5165 - .value = 
ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 5166 - }, 5167 - { 5168 - .codec = 0x10ec0293, 5169 - .subvendor = 0x1028, 5170 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5171 - .name = "Dell", 5172 - #endif 5173 - .pins = (const struct hda_pintbl[]) { 5174 - {0x12, 0x40000000}, 5175 - {0x13, 0x90a60140}, 5176 - {0x14, 0x90170110}, 5177 - {0x15, 0x0221401f}, 5178 - {0x16, 0x21014020}, 5179 - {0x18, 0x411111f0}, 5180 - {0x19, 0x21a19030}, 5181 - {0x1a, 0x411111f0}, 5182 - {0x1b, 0x411111f0}, 5183 - {0x1d, 0x40700001}, 5184 - {0x1e, 0x411111f0}, 5185 - }, 5186 - .value = ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, 5187 - }, 4965 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 4966 + {0x12, 0x90a60140}, 4967 + {0x14, 0x90170110}, 4968 + {0x17, 0x40000000}, 4969 + {0x18, 0x411111f0}, 4970 + {0x19, 0x411111f0}, 4971 + {0x1a, 0x411111f0}, 4972 + {0x1b, 0x411111f0}, 4973 + {0x1d, 0x40700001}, 4974 + {0x1e, 0x411111f0}, 4975 + {0x21, 0x02211020}), 4976 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 4977 + {0x12, 0x90a60160}, 4978 + {0x14, 0x90170120}, 4979 + {0x17, 0x40000000}, 4980 + {0x18, 0x411111f0}, 4981 + {0x19, 0x411111f0}, 4982 + {0x1a, 0x411111f0}, 4983 + {0x1b, 0x411111f0}, 4984 + {0x1d, 0x40700001}, 4985 + {0x1e, 0x411111f0}, 4986 + {0x21, 0x02211030}), 4987 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 4988 + {0x12, 0x90a60160}, 4989 + {0x14, 0x90170120}, 4990 + {0x17, 0x90170140}, 4991 + {0x18, 0x40000000}, 4992 + {0x19, 0x411111f0}, 4993 + {0x1a, 0x411111f0}, 4994 + {0x1b, 0x411111f0}, 4995 + {0x1d, 0x41163b05}, 4996 + {0x1e, 0x411111f0}, 4997 + {0x21, 0x0321102f}), 4998 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 4999 + {0x12, 0x90a60160}, 5000 + {0x14, 0x90170130}, 5001 + {0x17, 0x40000000}, 5002 + {0x18, 0x411111f0}, 5003 + {0x19, 0x411111f0}, 5004 + {0x1a, 0x411111f0}, 5005 + {0x1b, 0x411111f0}, 5006 + {0x1d, 0x40700001}, 5007 + {0x1e, 
0x411111f0}, 5008 + {0x21, 0x02211040}), 5009 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5010 + {0x12, 0x90a60160}, 5011 + {0x14, 0x90170140}, 5012 + {0x17, 0x40000000}, 5013 + {0x18, 0x411111f0}, 5014 + {0x19, 0x411111f0}, 5015 + {0x1a, 0x411111f0}, 5016 + {0x1b, 0x411111f0}, 5017 + {0x1d, 0x40700001}, 5018 + {0x1e, 0x411111f0}, 5019 + {0x21, 0x02211050}), 5020 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5021 + {0x12, 0x90a60170}, 5022 + {0x14, 0x90170120}, 5023 + {0x17, 0x40000000}, 5024 + {0x18, 0x411111f0}, 5025 + {0x19, 0x411111f0}, 5026 + {0x1a, 0x411111f0}, 5027 + {0x1b, 0x411111f0}, 5028 + {0x1d, 0x40700001}, 5029 + {0x1e, 0x411111f0}, 5030 + {0x21, 0x02211030}), 5031 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5032 + {0x12, 0x90a60170}, 5033 + {0x14, 0x90170130}, 5034 + {0x17, 0x40000000}, 5035 + {0x18, 0x411111f0}, 5036 + {0x19, 0x411111f0}, 5037 + {0x1a, 0x411111f0}, 5038 + {0x1b, 0x411111f0}, 5039 + {0x1d, 0x40700001}, 5040 + {0x1e, 0x411111f0}, 5041 + {0x21, 0x02211040}), 5042 + SND_HDA_PIN_QUIRK(0x10ec0283, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 5043 + {0x12, 0x90a60130}, 5044 + {0x14, 0x90170110}, 5045 + {0x17, 0x40020008}, 5046 + {0x18, 0x411111f0}, 5047 + {0x19, 0x411111f0}, 5048 + {0x1a, 0x411111f0}, 5049 + {0x1b, 0x411111f0}, 5050 + {0x1d, 0x40e00001}, 5051 + {0x1e, 0x411111f0}, 5052 + {0x21, 0x0321101f}), 5053 + SND_HDA_PIN_QUIRK(0x10ec0283, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 5054 + {0x12, 0x90a60160}, 5055 + {0x14, 0x90170120}, 5056 + {0x17, 0x40000000}, 5057 + {0x18, 0x411111f0}, 5058 + {0x19, 0x411111f0}, 5059 + {0x1a, 0x411111f0}, 5060 + {0x1b, 0x411111f0}, 5061 + {0x1d, 0x40700001}, 5062 + {0x1e, 0x411111f0}, 5063 + {0x21, 0x02211030}), 5064 + SND_HDA_PIN_QUIRK(0x10ec0292, 0x1028, "Dell", ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 5065 + {0x12, 0x90a60140}, 5066 + {0x13, 0x411111f0}, 5067 + {0x14, 
0x90170110}, 5068 + {0x15, 0x0221401f}, 5069 + {0x16, 0x411111f0}, 5070 + {0x18, 0x411111f0}, 5071 + {0x19, 0x411111f0}, 5072 + {0x1a, 0x411111f0}, 5073 + {0x1b, 0x411111f0}, 5074 + {0x1d, 0x40700001}, 5075 + {0x1e, 0x411111f0}), 5076 + SND_HDA_PIN_QUIRK(0x10ec0293, 0x1028, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, 5077 + {0x12, 0x40000000}, 5078 + {0x13, 0x90a60140}, 5079 + {0x14, 0x90170110}, 5080 + {0x15, 0x0221401f}, 5081 + {0x16, 0x21014020}, 5082 + {0x18, 0x411111f0}, 5083 + {0x19, 0x21a19030}, 5084 + {0x1a, 0x411111f0}, 5085 + {0x1b, 0x411111f0}, 5086 + {0x1d, 0x40700001}, 5087 + {0x1e, 0x411111f0}), 5088 + SND_HDA_PIN_QUIRK(0x10ec0293, 0x1028, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, 5089 + {0x12, 0x40000000}, 5090 + {0x13, 0x90a60140}, 5091 + {0x14, 0x90170110}, 5092 + {0x15, 0x0221401f}, 5093 + {0x16, 0x411111f0}, 5094 + {0x18, 0x411111f0}, 5095 + {0x19, 0x411111f0}, 5096 + {0x1a, 0x411111f0}, 5097 + {0x1b, 0x411111f0}, 5098 + {0x1d, 0x40700001}, 5099 + {0x1e, 0x411111f0}), 5188 5100 {} 5189 5101 }; 5190 5102 ··· 5953 6039 }; 5954 6040 5955 6041 static const struct snd_hda_pin_quirk alc662_pin_fixup_tbl[] = { 5956 - { 5957 - .codec = 0x10ec0668, 5958 - .subvendor = 0x1028, 5959 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5960 - .name = "Dell", 5961 - #endif 5962 - .pins = (const struct hda_pintbl[]) { 5963 - {0x12, 0x99a30130}, 5964 - {0x14, 0x90170110}, 5965 - {0x15, 0x0321101f}, 5966 - {0x16, 0x03011020}, 5967 - {0x18, 0x40000008}, 5968 - {0x19, 0x411111f0}, 5969 - {0x1a, 0x411111f0}, 5970 - {0x1b, 0x411111f0}, 5971 - {0x1d, 0x41000001}, 5972 - {0x1e, 0x411111f0}, 5973 - {0x1f, 0x411111f0}, 5974 - }, 5975 - .value = ALC668_FIXUP_AUTO_MUTE, 5976 - }, 5977 - { 5978 - .codec = 0x10ec0668, 5979 - .subvendor = 0x1028, 5980 - #ifdef CONFIG_SND_DEBUG_VERBOSE 5981 - .name = "Dell", 5982 - #endif 5983 - .pins = (const struct hda_pintbl[]) { 5984 - {0x12, 0x99a30140}, 5985 - {0x14, 0x90170110}, 5986 - {0x15, 0x0321101f}, 5987 - {0x16, 0x03011020}, 5988 - {0x18, 
0x40000008}, 5989 - {0x19, 0x411111f0}, 5990 - {0x1a, 0x411111f0}, 5991 - {0x1b, 0x411111f0}, 5992 - {0x1d, 0x41000001}, 5993 - {0x1e, 0x411111f0}, 5994 - {0x1f, 0x411111f0}, 5995 - }, 5996 - .value = ALC668_FIXUP_AUTO_MUTE, 5997 - }, 5998 - { 5999 - .codec = 0x10ec0668, 6000 - .subvendor = 0x1028, 6001 - #ifdef CONFIG_SND_DEBUG_VERBOSE 6002 - .name = "Dell", 6003 - #endif 6004 - .pins = (const struct hda_pintbl[]) { 6005 - {0x12, 0x99a30150}, 6006 - {0x14, 0x90170110}, 6007 - {0x15, 0x0321101f}, 6008 - {0x16, 0x03011020}, 6009 - {0x18, 0x40000008}, 6010 - {0x19, 0x411111f0}, 6011 - {0x1a, 0x411111f0}, 6012 - {0x1b, 0x411111f0}, 6013 - {0x1d, 0x41000001}, 6014 - {0x1e, 0x411111f0}, 6015 - {0x1f, 0x411111f0}, 6016 - }, 6017 - .value = ALC668_FIXUP_AUTO_MUTE, 6018 - }, 6019 - { 6020 - .codec = 0x10ec0668, 6021 - .subvendor = 0x1028, 6022 - #ifdef CONFIG_SND_DEBUG_VERBOSE 6023 - .name = "Dell", 6024 - #endif 6025 - .pins = (const struct hda_pintbl[]) { 6026 - {0x12, 0x411111f0}, 6027 - {0x14, 0x90170110}, 6028 - {0x15, 0x0321101f}, 6029 - {0x16, 0x03011020}, 6030 - {0x18, 0x40000008}, 6031 - {0x19, 0x411111f0}, 6032 - {0x1a, 0x411111f0}, 6033 - {0x1b, 0x411111f0}, 6034 - {0x1d, 0x41000001}, 6035 - {0x1e, 0x411111f0}, 6036 - {0x1f, 0x411111f0}, 6037 - }, 6038 - .value = ALC668_FIXUP_AUTO_MUTE, 6039 - }, 6042 + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, 6043 + {0x12, 0x99a30130}, 6044 + {0x14, 0x90170110}, 6045 + {0x15, 0x0321101f}, 6046 + {0x16, 0x03011020}, 6047 + {0x18, 0x40000008}, 6048 + {0x19, 0x411111f0}, 6049 + {0x1a, 0x411111f0}, 6050 + {0x1b, 0x411111f0}, 6051 + {0x1d, 0x41000001}, 6052 + {0x1e, 0x411111f0}, 6053 + {0x1f, 0x411111f0}), 6054 + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, 6055 + {0x12, 0x99a30140}, 6056 + {0x14, 0x90170110}, 6057 + {0x15, 0x0321101f}, 6058 + {0x16, 0x03011020}, 6059 + {0x18, 0x40000008}, 6060 + {0x19, 0x411111f0}, 6061 + {0x1a, 0x411111f0}, 6062 + {0x1b, 0x411111f0}, 6063 + 
{0x1d, 0x41000001}, 6064 + {0x1e, 0x411111f0}, 6065 + {0x1f, 0x411111f0}), 6066 + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, 6067 + {0x12, 0x99a30150}, 6068 + {0x14, 0x90170110}, 6069 + {0x15, 0x0321101f}, 6070 + {0x16, 0x03011020}, 6071 + {0x18, 0x40000008}, 6072 + {0x19, 0x411111f0}, 6073 + {0x1a, 0x411111f0}, 6074 + {0x1b, 0x411111f0}, 6075 + {0x1d, 0x41000001}, 6076 + {0x1e, 0x411111f0}, 6077 + {0x1f, 0x411111f0}), 6078 + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, 6079 + {0x12, 0x411111f0}, 6080 + {0x14, 0x90170110}, 6081 + {0x15, 0x0321101f}, 6082 + {0x16, 0x03011020}, 6083 + {0x18, 0x40000008}, 6084 + {0x19, 0x411111f0}, 6085 + {0x1a, 0x411111f0}, 6086 + {0x1b, 0x411111f0}, 6087 + {0x1d, 0x41000001}, 6088 + {0x1e, 0x411111f0}, 6089 + {0x1f, 0x411111f0}), 6090 + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell XPS 15", ALC668_FIXUP_AUTO_MUTE, 6091 + {0x12, 0x90a60130}, 6092 + {0x14, 0x90170110}, 6093 + {0x15, 0x0321101f}, 6094 + {0x16, 0x40000000}, 6095 + {0x18, 0x411111f0}, 6096 + {0x19, 0x411111f0}, 6097 + {0x1a, 0x411111f0}, 6098 + {0x1b, 0x411111f0}, 6099 + {0x1d, 0x40d6832d}, 6100 + {0x1e, 0x411111f0}, 6101 + {0x1f, 0x411111f0}), 6040 6102 {} 6041 6103 }; 6042 6104
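The patch_realtek.c hunks above collapse pages of verbose `struct snd_hda_pin_quirk` initializers into compact `SND_HDA_PIN_QUIRK()` calls. A minimal sketch of that table-compaction trick, a variadic macro wrapping a compound-literal pin table, using hypothetical type and field names rather than the real HDA driver types:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the HDA pin-quirk types; names here are
 * hypothetical, not the real sound/pci/hda definitions. */
struct pintbl { unsigned int nid, cfg; };

struct pin_quirk {
	unsigned int codec;		/* codec vendor/device id */
	unsigned int subvendor;		/* PCI subsystem vendor */
	const char *name;		/* for debug output */
	int value;			/* fixup id to apply */
	const struct pintbl *pins;	/* zero-terminated pin table */
};

/* The variadic macro builds the verbose entry from one compact call;
 * the compound literal gets static storage at file scope (C99). */
#define PIN_QUIRK(_codec, _subvendor, _name, _value, ...)		\
	{								\
		.codec = _codec, .subvendor = _subvendor,		\
		.name = _name, .value = _value,				\
		.pins = (const struct pintbl[]){ __VA_ARGS__, {0, 0} },	\
	}

static const struct pin_quirk quirks[] = {
	PIN_QUIRK(0x10ec0668, 0x1028, "Dell", 42,
		  {0x12, 0x99a30130},
		  {0x14, 0x90170110}),
	{0}
};
```

The duplicated boilerplate (the `#ifdef CONFIG_SND_DEBUG_VERBOSE` name field, the cast to `const struct hda_pintbl[]`) moves into the macro once, which is what shrinks the table in the diff.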
+57 -1
sound/pci/hda/patch_sigmatel.c
··· 122 122 }; 123 123 124 124 enum { 125 + STAC_92HD95_HP_LED, 126 + STAC_92HD95_HP_BASS, 127 + STAC_92HD95_MODELS 128 + }; 129 + 130 + enum { 125 131 STAC_925x_REF, 126 132 STAC_M1, 127 133 STAC_M1_2, ··· 4134 4128 {} /* terminator */ 4135 4129 }; 4136 4130 4131 + static void stac92hd95_fixup_hp_led(struct hda_codec *codec, 4132 + const struct hda_fixup *fix, int action) 4133 + { 4134 + struct sigmatel_spec *spec = codec->spec; 4135 + 4136 + if (action != HDA_FIXUP_ACT_PRE_PROBE) 4137 + return; 4138 + 4139 + if (find_mute_led_cfg(codec, spec->default_polarity)) 4140 + codec_dbg(codec, "mute LED gpio %d polarity %d\n", 4141 + spec->gpio_led, 4142 + spec->gpio_led_polarity); 4143 + } 4144 + 4145 + static const struct hda_fixup stac92hd95_fixups[] = { 4146 + [STAC_92HD95_HP_LED] = { 4147 + .type = HDA_FIXUP_FUNC, 4148 + .v.func = stac92hd95_fixup_hp_led, 4149 + }, 4150 + [STAC_92HD95_HP_BASS] = { 4151 + .type = HDA_FIXUP_VERBS, 4152 + .v.verbs = (const struct hda_verb[]) { 4153 + {0x1a, 0x795, 0x00}, /* HPF to 100Hz */ 4154 + {} 4155 + }, 4156 + .chained = true, 4157 + .chain_id = STAC_92HD95_HP_LED, 4158 + }, 4159 + }; 4160 + 4161 + static const struct snd_pci_quirk stac92hd95_fixup_tbl[] = { 4162 + SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1911, "HP Spectre 13", STAC_92HD95_HP_BASS), 4163 + {} /* terminator */ 4164 + }; 4165 + 4166 + static const struct hda_model_fixup stac92hd95_models[] = { 4167 + { .id = STAC_92HD95_HP_LED, .name = "hp-led" }, 4168 + { .id = STAC_92HD95_HP_BASS, .name = "hp-bass" }, 4169 + {} 4170 + }; 4171 + 4172 + 4137 4173 static int stac_parse_auto_config(struct hda_codec *codec) 4138 4174 { 4139 4175 struct sigmatel_spec *spec = codec->spec; ··· 4628 4580 spec->gen.beep_nid = 0x19; /* digital beep */ 4629 4581 spec->pwr_nids = stac92hd95_pwr_nids; 4630 4582 spec->num_pwrs = ARRAY_SIZE(stac92hd95_pwr_nids); 4631 - spec->default_polarity = -1; /* no default cfg */ 4583 + spec->default_polarity = 0; 4632 4584 4633 4585 codec->patch_ops = 
stac_patch_ops; 4586 + 4587 + snd_hda_pick_fixup(codec, stac92hd95_models, stac92hd95_fixup_tbl, 4588 + stac92hd95_fixups); 4589 + snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE); 4590 + 4591 + stac_setup_gpio(codec); 4634 4592 4635 4593 err = stac_parse_auto_config(codec); 4636 4594 if (err < 0) { ··· 4645 4591 } 4646 4592 4647 4593 codec->proc_widget_hook = stac92hd_proc_hook; 4594 + 4595 + snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PROBE); 4648 4596 4649 4597 return 0; 4650 4598 }
-1
sound/soc/fsl/imx-pcm-dma.c
··· 59 59 { 60 60 return devm_snd_dmaengine_pcm_register(&pdev->dev, 61 61 &imx_dmaengine_pcm_config, 62 - SND_DMAENGINE_PCM_FLAG_NO_RESIDUE | 63 62 SND_DMAENGINE_PCM_FLAG_COMPAT); 64 63 } 65 64 EXPORT_SYMBOL_GPL(imx_pcm_dma_init);
+10 -3
sound/usb/card.c
··· 307 307 308 308 static int snd_usb_audio_free(struct snd_usb_audio *chip) 309 309 { 310 + struct list_head *p, *n; 311 + 312 + list_for_each_safe(p, n, &chip->ep_list) 313 + snd_usb_endpoint_free(p); 314 + 310 315 mutex_destroy(&chip->mutex); 311 316 kfree(chip); 312 317 return 0; ··· 590 585 struct snd_usb_audio *chip) 591 586 { 592 587 struct snd_card *card; 593 - struct list_head *p, *n; 588 + struct list_head *p; 594 589 595 590 if (chip == (void *)-1L) 596 591 return; ··· 603 598 mutex_lock(&register_mutex); 604 599 chip->num_interfaces--; 605 600 if (chip->num_interfaces <= 0) { 601 + struct snd_usb_endpoint *ep; 602 + 606 603 snd_card_disconnect(card); 607 604 /* release the pcm resources */ 608 605 list_for_each(p, &chip->pcm_list) { 609 606 snd_usb_stream_disconnect(p); 610 607 } 611 608 /* release the endpoint resources */ 612 - list_for_each_safe(p, n, &chip->ep_list) { 613 - snd_usb_endpoint_free(p); 609 + list_for_each_entry(ep, &chip->ep_list, list) { 610 + snd_usb_endpoint_release(ep); 614 611 } 615 612 /* release the midi resources */ 616 613 list_for_each(p, &chip->midi_list) {
+14 -3
sound/usb/endpoint.c
··· 987 987 } 988 988 989 989 /** 990 + * snd_usb_endpoint_release: Tear down an snd_usb_endpoint 991 + * 992 + * @ep: the endpoint to release 993 + * 994 + * This function does not care for the endpoint's use count but will tear 995 + * down all the streaming URBs immediately. 996 + */ 997 + void snd_usb_endpoint_release(struct snd_usb_endpoint *ep) 998 + { 999 + release_urbs(ep, 1); 1000 + } 1001 + 1002 + /** 990 1003 * snd_usb_endpoint_free: Free the resources of an snd_usb_endpoint 991 1004 * 992 1005 * @ep: the list header of the endpoint to free 993 1006 * 994 - * This function does not care for the endpoint's use count but will tear 995 - * down all the streaming URBs immediately and free all resources. 1007 + * This frees all resources of the given ep. 996 1008 */ 997 1009 void snd_usb_endpoint_free(struct list_head *head) 998 1010 { 999 1011 struct snd_usb_endpoint *ep; 1000 1012 1001 1013 ep = list_entry(head, struct snd_usb_endpoint, list); 1002 - release_urbs(ep, 1); 1003 1014 kfree(ep); 1004 1015 } 1005 1016
+1
sound/usb/endpoint.h
··· 23 23 void snd_usb_endpoint_sync_pending_stop(struct snd_usb_endpoint *ep); 24 24 int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep); 25 25 void snd_usb_endpoint_deactivate(struct snd_usb_endpoint *ep); 26 + void snd_usb_endpoint_release(struct snd_usb_endpoint *ep); 26 27 void snd_usb_endpoint_free(struct list_head *head); 27 28 28 29 int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep);
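Taken together, the card.c/endpoint.c/endpoint.h changes split endpoint teardown into two phases: `snd_usb_endpoint_release()` stops streaming URBs immediately at disconnect time, while `snd_usb_endpoint_free()` drops the memory later when the card itself is freed. A sketch of that two-phase pattern with hypothetical types (plain malloc/free standing in for URB handling and `kfree()`):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative endpoint; fields stand in for real in-flight URBs. */
struct endpoint {
	int urbs_active;	/* in-flight transfer count */
	int released;
};

static struct endpoint *endpoint_new(void)
{
	struct endpoint *ep = calloc(1, sizeof(*ep));
	ep->urbs_active = 3;	/* pretend streaming is running */
	return ep;
}

/* Phase 1, at disconnect: kill hardware activity right away, but keep
 * the object alive since other code may still hold a pointer to it. */
static void endpoint_release(struct endpoint *ep)
{
	ep->urbs_active = 0;
	ep->released = 1;
}

/* Phase 2, at final card free: only now does the memory go away. */
static void endpoint_free(struct endpoint *ep)
{
	free(ep);
}
```

The payoff is that disconnect can run while PCM callbacks still reference the endpoint, without a use-after-free; only the last reference path calls the free step.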
+16 -5
tools/perf/ui/browsers/hists.c
··· 17 17 #include "../util.h" 18 18 #include "../ui.h" 19 19 #include "map.h" 20 + #include "annotate.h" 20 21 21 22 struct hist_browser { 22 23 struct ui_browser b; ··· 1594 1593 bi->to.sym->name) > 0) 1595 1594 annotate_t = nr_options++; 1596 1595 } else { 1597 - 1598 1596 if (browser->selection != NULL && 1599 1597 browser->selection->sym != NULL && 1600 - !browser->selection->map->dso->annotate_warned && 1601 - asprintf(&options[nr_options], "Annotate %s", 1602 - browser->selection->sym->name) > 0) 1603 - annotate = nr_options++; 1598 + !browser->selection->map->dso->annotate_warned) { 1599 + struct annotation *notes; 1600 + 1601 + notes = symbol__annotation(browser->selection->sym); 1602 + 1603 + if (notes->src && 1604 + asprintf(&options[nr_options], "Annotate %s", 1605 + browser->selection->sym->name) > 0) 1606 + annotate = nr_options++; 1607 + } 1604 1608 } 1605 1609 1606 1610 if (thread != NULL && ··· 1662 1656 1663 1657 if (choice == annotate || choice == annotate_t || choice == annotate_f) { 1664 1658 struct hist_entry *he; 1659 + struct annotation *notes; 1665 1660 int err; 1666 1661 do_annotate: 1667 1662 if (!objdump_path && perf_session_env__lookup_objdump(env)) ··· 1685 1678 he->ms.sym = he->branch_info->to.sym; 1686 1679 he->ms.map = he->branch_info->to.map; 1687 1680 } 1681 + 1682 + notes = symbol__annotation(he->ms.sym); 1683 + if (!notes->src) 1684 + continue; 1688 1685 1689 1686 /* 1690 1687 * Don't let this be freed, say, by hists__decay_entry.
+22 -32
tools/perf/util/machine.c
··· 496 496 u64 start; 497 497 }; 498 498 499 - static int symbol__in_kernel(void *arg, const char *name, 500 - char type __maybe_unused, u64 start) 501 - { 502 - struct process_args *args = arg; 503 - 504 - if (strchr(name, '[')) 505 - return 0; 506 - 507 - args->start = start; 508 - return 1; 509 - } 510 - 511 499 static void machine__get_kallsyms_filename(struct machine *machine, char *buf, 512 500 size_t bufsz) 513 501 { ··· 505 517 scnprintf(buf, bufsz, "%s/proc/kallsyms", machine->root_dir); 506 518 } 507 519 508 - /* Figure out the start address of kernel map from /proc/kallsyms */ 509 - static u64 machine__get_kernel_start_addr(struct machine *machine) 520 + const char *ref_reloc_sym_names[] = {"_text", "_stext", NULL}; 521 + 522 + /* Figure out the start address of kernel map from /proc/kallsyms. 523 + * Returns the name of the start symbol in *symbol_name. Pass in NULL as 524 + * symbol_name if it's not that important. 525 + */ 526 + static u64 machine__get_kernel_start_addr(struct machine *machine, 527 + const char **symbol_name) 510 528 { 511 529 char filename[PATH_MAX]; 512 - struct process_args args; 530 + int i; 531 + const char *name; 532 + u64 addr = 0; 513 533 514 534 machine__get_kallsyms_filename(machine, filename, PATH_MAX); 515 535 516 536 if (symbol__restricted_filename(filename, "/proc/kallsyms")) 517 537 return 0; 518 538 519 - if (kallsyms__parse(filename, &args, symbol__in_kernel) <= 0) 520 - return 0; 539 + for (i = 0; (name = ref_reloc_sym_names[i]) != NULL; i++) { 540 + addr = kallsyms__get_function_start(filename, name); 541 + if (addr) 542 + break; 543 + } 521 544 522 - return args.start; 545 + if (symbol_name) 546 + *symbol_name = name; 547 + 548 + return addr; 523 549 } 524 550 525 551 int __machine__create_kernel_maps(struct machine *machine, struct dso *kernel) 526 552 { 527 553 enum map_type type; 528 - u64 start = machine__get_kernel_start_addr(machine); 554 + u64 start = machine__get_kernel_start_addr(machine, NULL); 529 555 
530 556 for (type = 0; type < MAP__NR_TYPES; ++type) { 531 557 struct kmap *kmap; ··· 854 852 return 0; 855 853 } 856 854 857 - const char *ref_reloc_sym_names[] = {"_text", "_stext", NULL}; 858 - 859 855 int machine__create_kernel_maps(struct machine *machine) 860 856 { 861 857 struct dso *kernel = machine__get_kernel(machine); 862 - char filename[PATH_MAX]; 863 858 const char *name; 864 - u64 addr = 0; 865 - int i; 866 - 867 - machine__get_kallsyms_filename(machine, filename, PATH_MAX); 868 - 869 - for (i = 0; (name = ref_reloc_sym_names[i]) != NULL; i++) { 870 - addr = kallsyms__get_function_start(filename, name); 871 - if (addr) 872 - break; 873 - } 859 + u64 addr = machine__get_kernel_start_addr(machine, &name); 874 860 if (!addr) 875 861 return -1; 876 862
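The machine.c rework replaces the old kallsyms-parsing callback with a loop that tries a fixed list of reference symbols (`"_text"`, then `"_stext"`) and takes the first hit, also reporting which name matched so `machine__create_kernel_maps()` can reuse it. A sketch of that lookup loop, with `lookup()` as a stand-in for `kallsyms__get_function_start()`:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static const char *ref_sym_names[] = { "_text", "_stext", NULL };

/* Hypothetical stand-in for kallsyms__get_function_start(): here we
 * pretend /proc/kallsyms only exposes _stext. */
static unsigned long lookup(const char *name)
{
	return strcmp(name, "_stext") == 0 ? 0xffffffff81000000UL : 0;
}

/* Try each candidate in priority order; report the matching name via
 * *symbol_name (callers that don't care pass NULL, as in
 * __machine__create_kernel_maps). */
static unsigned long kernel_start_addr(const char **symbol_name)
{
	const char *name = NULL;
	unsigned long addr = 0;
	int i;

	for (i = 0; (name = ref_sym_names[i]) != NULL; i++) {
		addr = lookup(name);
		if (addr)
			break;
	}
	if (symbol_name)
		*symbol_name = name;	/* NULL if nothing matched */
	return addr;
}
```

This is also why the patch shrinks `machine__create_kernel_maps()`: the same loop used to be duplicated there and is now shared through the extra out-parameter.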
+1 -1
tools/testing/selftests/cpu-hotplug/Makefile
··· 1 1 all: 2 2 3 3 run_tests: 4 - @/bin/sh ./on-off-test.sh || echo "cpu-hotplug selftests: [FAIL]" 4 + @/bin/bash ./on-off-test.sh || echo "cpu-hotplug selftests: [FAIL]" 5 5 6 6 clean:
+5
tools/testing/selftests/ipc/msgque.c
··· 193 193 int msg, pid, err; 194 194 struct msgque_data msgque; 195 195 196 + if (getuid() != 0) { 197 + printf("Please run the test as root - Exiting.\n"); 198 + exit(1); 199 + } 200 + 196 201 msgque.key = ftok(argv[0], 822155650); 197 202 if (msgque.key == -1) { 198 203 printf("Can't make key\n");
+1 -1
tools/testing/selftests/memory-hotplug/Makefile
··· 1 1 all: 2 2 3 3 run_tests: 4 - @/bin/sh ./on-off-test.sh || echo "memory-hotplug selftests: [FAIL]" 4 + @/bin/bash ./on-off-test.sh || echo "memory-hotplug selftests: [FAIL]" 5 5 6 6 clean:
+1 -1
tools/thermal/tmon/Makefile
··· 21 21 OBJS += 22 22 23 23 tmon: $(OBJS) Makefile tmon.h 24 - $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -lpthread 24 + $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -ltinfo -lpthread 25 25 26 26 valgrind: tmon 27 27 sudo valgrind -v --track-origins=yes --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes ./$(TARGET) 1> /dev/null
+25 -1
tools/thermal/tmon/tmon.c
··· 142 142 static void prepare_logging(void) 143 143 { 144 144 int i; 145 + struct stat logstat; 145 146 146 147 if (!logging) 147 148 return; ··· 152 151 syslog(LOG_ERR, "failed to open log file %s\n", TMON_LOG_FILE); 153 152 return; 154 153 } 154 + 155 + if (lstat(TMON_LOG_FILE, &logstat) < 0) { 156 + syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE); 157 + fclose(tmon_log); 158 + tmon_log = NULL; 159 + return; 160 + } 161 + 162 + /* The log file must be a regular file owned by us */ 163 + if (S_ISLNK(logstat.st_mode)) { 164 + syslog(LOG_ERR, "Log file is a symlink. Will not log\n"); 165 + fclose(tmon_log); 166 + tmon_log = NULL; 167 + return; 168 + } 169 + 170 + if (logstat.st_uid != getuid()) { 171 + syslog(LOG_ERR, "We don't own the log file. Not logging\n"); 172 + fclose(tmon_log); 173 + tmon_log = NULL; 174 + return; 175 + } 176 + 155 177 156 178 fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n"); 157 179 for (i = 0; i < ptdata.nr_tz_sensor; i++) { ··· 355 331 disable_tui(); 356 332 357 333 /* change the file mode mask */ 358 - umask(0); 334 + umask(S_IWGRP | S_IWOTH); 359 335 360 336 /* new SID for the daemon process */ 361 337 sid = setsid();
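The tmon.c hunk hardens logging: before writing to a fixed path like `/var/tmp/tmon.log`, it uses `lstat()` to reject symlinks (a classic redirection attack against fixed-path log files) and files owned by someone else. A condensed sketch of the same checks as one predicate (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return 1 only if path is a regular file we own.  lstat() rather than
 * stat() is the key choice: it does NOT follow symlinks, so a planted
 * link to e.g. /etc/passwd is seen as a link and refused. */
static int log_file_is_safe(const char *path)
{
	struct stat st;

	if (lstat(path, &st) < 0)
		return 0;			/* cannot stat: refuse */
	if (S_ISLNK(st.st_mode))
		return 0;			/* symlink: refuse */
	if (st.st_uid != getuid())
		return 0;			/* not our file: refuse */
	return S_ISREG(st.st_mode) ? 1 : 0;	/* must be regular */
}
```

Note this sketch still has the check-then-open race the real patch partially closes by opening first and statting after; a fuller fix would `fstat()` the already-open descriptor.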
+2 -2
tools/usb/ffs-test.c
··· 116 116 .header = { 117 117 .magic = cpu_to_le32(FUNCTIONFS_DESCRIPTORS_MAGIC), 118 118 .length = cpu_to_le32(sizeof descriptors), 119 - .fs_count = 3, 120 - .hs_count = 3, 119 + .fs_count = cpu_to_le32(3), 120 + .hs_count = cpu_to_le32(3), 121 121 }, 122 122 .fs_descs = { 123 123 .intf = {