Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 3.16-rc5 into staging-next

We want the fixes in -rc5 in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+1624 -785
+3 -6
Documentation/Changes
··· 280 280 mcelog 281 281 ------ 282 282 283 - In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility 284 - as a regular cronjob similar to the x86-64 kernel to process and log 285 - machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check 286 - events are errors reported by the CPU. Processing them is strongly encouraged. 287 - All x86-64 kernels since 2.6.4 require the mcelog utility to 288 - process machine checks. 283 + On x86 kernels the mcelog utility is needed to process and log machine check 284 + events when CONFIG_X86_MCE is enabled. Machine check events are errors reported 285 + by the CPU. Processing them is strongly encouraged. 289 286 290 287 Getting updated software 291 288 ========================
+1 -1
Documentation/DocBook/gadget.tmpl
··· 708 708 709 709 <para>Systems need specialized hardware support to implement OTG, 710 710 notably including a special <emphasis>Mini-AB</emphasis> jack 711 - and associated transciever to support <emphasis>Dual-Role</emphasis> 711 + and associated transceiver to support <emphasis>Dual-Role</emphasis> 712 712 operation: 713 713 they can act either as a host, using the standard 714 714 Linux-USB host side driver stack,
+2 -2
Documentation/DocBook/genericirq.tmpl
··· 182 182 <para> 183 183 Each interrupt is described by an interrupt descriptor structure 184 184 irq_desc. The interrupt is referenced by an 'unsigned int' numeric 185 - value which selects the corresponding interrupt decription structure 185 + value which selects the corresponding interrupt description structure 186 186 in the descriptor structures array. 187 187 The descriptor structure contains status information and pointers 188 188 to the interrupt flow method and the interrupt chip structure ··· 470 470 <para> 471 471 To avoid copies of identical implementations of IRQ chips the 472 472 core provides a configurable generic interrupt chip 473 - implementation. Developers should check carefuly whether the 473 + implementation. Developers should check carefully whether the 474 474 generic chip fits their needs before implementing the same 475 475 functionality slightly differently themselves. 476 476 </para>
+1 -1
Documentation/DocBook/kernel-locking.tmpl
··· 1760 1760 </para> 1761 1761 1762 1762 <para> 1763 - There is a furthur optimization possible here: remember our original 1763 + There is a further optimization possible here: remember our original 1764 1764 cache code, where there were no reference counts and the caller simply 1765 1765 held the lock whenever using the object? This is still possible: if 1766 1766 you hold the lock, no one can delete the object, so you don't need to
+3 -3
Documentation/DocBook/libata.tmpl
··· 677 677 678 678 <listitem> 679 679 <para> 680 - ATA_QCFLAG_ACTIVE is clared from qc->flags. 680 + ATA_QCFLAG_ACTIVE is cleared from qc->flags. 681 681 </para> 682 682 </listitem> 683 683 ··· 708 708 709 709 <listitem> 710 710 <para> 711 - qc->waiting is claread &amp; completed (in that order). 711 + qc->waiting is cleared &amp; completed (in that order). 712 712 </para> 713 713 </listitem> 714 714 ··· 1163 1163 1164 1164 <para> 1165 1165 Once sense data is acquired, this type of errors can be 1166 - handled similary to other SCSI errors. Note that sense data 1166 + handled similarly to other SCSI errors. Note that sense data 1167 1167 may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR 1168 1168 &amp;&amp; ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such 1169 1169 cases, the error should be considered as an ATA bus error and
+1 -1
Documentation/DocBook/media_api.tmpl
··· 68 68 several digital tv standards. While it is called as DVB API, 69 69 in fact it covers several different video standards including 70 70 DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated 71 - to documment support also for DVB-S2, ISDB-T and ISDB-S.</para> 71 + to document support also for DVB-S2, ISDB-T and ISDB-S.</para> 72 72 <para>The third part covers the Remote Controller API.</para> 73 73 <para>The fourth part covers the Media Controller API.</para> 74 74 <para>For additional information and for the latest development code,
+15 -15
Documentation/DocBook/mtdnand.tmpl
··· 91 91 <listitem><para> 92 92 [MTD Interface]</para><para> 93 93 These functions provide the interface to the MTD kernel API. 94 - They are not replacable and provide functionality 94 + They are not replaceable and provide functionality 95 95 which is complete hardware independent. 96 96 </para></listitem> 97 97 <listitem><para> ··· 100 100 </para></listitem> 101 101 <listitem><para> 102 102 [GENERIC]</para><para> 103 - Generic functions are not replacable and provide functionality 103 + Generic functions are not replaceable and provide functionality 104 104 which is complete hardware independent. 105 105 </para></listitem> 106 106 <listitem><para> 107 107 [DEFAULT]</para><para> 108 108 Default functions provide hardware related functionality which is suitable 109 109 for most of the implementations. These functions can be replaced by the 110 - board driver if neccecary. Those functions are called via pointers in the 110 + board driver if necessary. Those functions are called via pointers in the 111 111 NAND chip description structure. The board driver can set the functions which 112 112 should be replaced by board dependent functions before calling nand_scan(). 113 113 If the function pointer is NULL on entry to nand_scan() then the pointer ··· 264 264 is set up nand_scan() is called. This function tries to 265 265 detect and identify then chip. If a chip is found all the 266 266 internal data fields are initialized accordingly. 267 - The structure(s) have to be zeroed out first and then filled with the neccecary 267 + The structure(s) have to be zeroed out first and then filled with the necessary 268 268 information about the device. 269 269 </para> 270 270 <programlisting> ··· 327 327 <sect1 id="Exit_function"> 328 328 <title>Exit function</title> 329 329 <para> 330 - The exit function is only neccecary if the driver is 330 + The exit function is only necessary if the driver is 331 331 compiled as a module. 
It releases all resources which 332 332 are held by the chip driver and unregisters the partitions 333 333 in the MTD layer. ··· 494 494 in this case. See rts_from4.c and diskonchip.c for 495 495 implementation reference. In those cases we must also 496 496 use bad block tables on FLASH, because the ECC layout is 497 - interferring with the bad block marker positions. 497 + interfering with the bad block marker positions. 498 498 See bad block table support for details. 499 499 </para> 500 500 </sect2> ··· 542 542 <para> 543 543 nand_scan() calls the function nand_default_bbt(). 544 544 nand_default_bbt() selects appropriate default 545 - bad block table desriptors depending on the chip information 545 + bad block table descriptors depending on the chip information 546 546 which was retrieved by nand_scan(). 547 547 </para> 548 548 <para> ··· 554 554 <sect2 id="Flash_based_tables"> 555 555 <title>Flash based tables</title> 556 556 <para> 557 - It may be desired or neccecary to keep a bad block table in FLASH. 557 + It may be desired or necessary to keep a bad block table in FLASH. 558 558 For AG-AND chips this is mandatory, as they have no factory marked 559 559 bad blocks. They have factory marked good blocks. The marker pattern 560 560 is erased when the block is erased to be reused. So in case of ··· 565 565 of the blocks. 566 566 </para> 567 567 <para> 568 - The blocks in which the tables are stored are procteted against 568 + The blocks in which the tables are stored are protected against 569 569 accidental access by marking them bad in the memory bad block 570 570 table. The bad block table management functions are allowed 571 - to circumvernt this protection. 571 + to circumvent this protection. 
572 572 </para> 573 573 <para> 574 574 The simplest way to activate the FLASH based bad block table support ··· 592 592 User defined tables are created by filling out a 593 593 nand_bbt_descr structure and storing the pointer in the 594 594 nand_chip structure member bbt_td before calling nand_scan(). 595 - If a mirror table is neccecary a second structure must be 595 + If a mirror table is necessary a second structure must be 596 596 created and a pointer to this structure must be stored 597 597 in bbt_md inside the nand_chip structure. If the bbt_md 598 598 member is set to NULL then only the main table is used ··· 666 666 <para> 667 667 For automatic placement some blocks must be reserved for 668 668 bad block table storage. The number of reserved blocks is defined 669 - in the maxblocks member of the babd block table description structure. 669 + in the maxblocks member of the bad block table description structure. 670 670 Reserving 4 blocks for mirrored tables should be a reasonable number. 671 671 This also limits the number of blocks which are scanned for the bad 672 672 block table ident pattern. ··· 1068 1068 <chapter id="filesystems"> 1069 1069 <title>Filesystem support</title> 1070 1070 <para> 1071 - The NAND driver provides all neccecary functions for a 1071 + The NAND driver provides all necessary functions for a 1072 1072 filesystem via the MTD interface. 1073 1073 </para> 1074 1074 <para> 1075 - Filesystems must be aware of the NAND pecularities and 1075 + Filesystems must be aware of the NAND peculiarities and 1076 1076 restrictions. One major restrictions of NAND Flash is, that you cannot 1077 1077 write as often as you want to a page. 
The consecutive writes to a page, 1078 1078 before erasing it again, are restricted to 1-3 writes, depending on the ··· 1222 1222 #define NAND_BBT_VERSION 0x00000100 1223 1223 /* Create a bbt if none axists */ 1224 1224 #define NAND_BBT_CREATE 0x00000200 1225 - /* Write bbt if neccecary */ 1225 + /* Write bbt if necessary */ 1226 1226 #define NAND_BBT_WRITE 0x00001000 1227 1227 /* Read and write back block contents when writing bbt */ 1228 1228 #define NAND_BBT_SAVECONTENT 0x00002000
+1 -1
Documentation/DocBook/regulator.tmpl
··· 155 155 release regulators. Functions are 156 156 provided to <link linkend='API-regulator-enable'>enable</link> 157 157 and <link linkend='API-regulator-disable'>disable</link> the 158 - reguator and to get and set the runtime parameters of the 158 + regulator and to get and set the runtime parameters of the 159 159 regulator. 160 160 </para> 161 161 <para>
+2 -2
Documentation/DocBook/uio-howto.tmpl
··· 766 766 <para> 767 767 The dynamic memory regions will be allocated when the UIO device file, 768 768 <varname>/dev/uioX</varname> is opened. 769 - Simiar to static memory resources, the memory region information for 769 + Similar to static memory resources, the memory region information for 770 770 dynamic regions is then visible via sysfs at 771 771 <varname>/sys/class/uio/uioX/maps/mapY/*</varname>. 772 - The dynmaic memory regions will be freed when the UIO device file is 772 + The dynamic memory regions will be freed when the UIO device file is 773 773 closed. When no processes are holding the device file open, the address 774 774 returned to userspace is ~0. 775 775 </para>
+1 -1
Documentation/DocBook/usb.tmpl
··· 153 153 154 154 <listitem><para>The Linux USB API supports synchronous calls for 155 155 control and bulk messages. 156 - It also supports asynchnous calls for all kinds of data transfer, 156 + It also supports asynchronous calls for all kinds of data transfer, 157 157 using request structures called "URBs" (USB Request Blocks). 158 158 </para></listitem> 159 159
+1 -1
Documentation/DocBook/writing-an-alsa-driver.tmpl
··· 5696 5696 suspending the PCM operations via 5697 5697 <function>snd_pcm_suspend_all()</function> or 5698 5698 <function>snd_pcm_suspend()</function>. It means that the PCM 5699 - streams are already stoppped when the register snapshot is 5699 + streams are already stopped when the register snapshot is 5700 5700 taken. But, remember that you don't have to restart the PCM 5701 5701 stream in the resume callback. It'll be restarted via 5702 5702 trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant>
+5 -2
Documentation/cpu-freq/intel-pstate.txt
··· 15 15 /sys/devices/system/cpu/intel_pstate/ 16 16 17 17 max_perf_pct: limits the maximum P state that will be requested by 18 - the driver stated as a percentage of the available performance. 18 + the driver stated as a percentage of the available performance. The 19 + available (P states) performance may be reduced by the no_turbo 20 + setting described below. 19 21 20 22 min_perf_pct: limits the minimum P state that will be requested by 21 - the driver stated as a percentage of the available performance. 23 + the driver stated as a percentage of the max (non-turbo) 24 + performance level. 22 25 23 26 no_turbo: limits the driver to selecting P states below the turbo 24 27 frequency range.
+20
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
··· 9 9 - reg: physical base address of the controller and length of memory mapped 10 10 region. 11 11 12 + Optional Properties: 13 + - clocks: List of clock handles. The parent clocks of the input clocks to the 14 + devices in this power domain are set to oscclk before power gating 15 + and restored back after powering on a domain. This is required for 16 + all domains which are powered on and off and not required for unused 17 + domains. 18 + - clock-names: The following clocks can be specified: 19 + - oscclk: Oscillator clock. 20 + - pclkN, clkN: Pairs of parent of input clock and input clock to the 21 + devices in this power domain. Maximum of 4 pairs (N = 0 to 3) 22 + are supported currently. 23 + 12 24 Node of a device using power domains must have a samsung,power-domain property 13 25 defined with a phandle to respective power domain. 14 26 ··· 29 17 lcd0: power-domain-lcd0 { 30 18 compatible = "samsung,exynos4210-pd"; 31 19 reg = <0x10023C00 0x10>; 20 + }; 21 + 22 + mfc_pd: power-domain@10044060 { 23 + compatible = "samsung,exynos4210-pd"; 24 + reg = <0x10044060 0x20>; 25 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, 26 + <&clock CLK_MOUT_USER_ACLK333>; 27 + clock-names = "oscclk", "pclk0", "clk0"; 32 28 }; 33 29 34 30 Example of the node using power domain:
+7
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
··· 4 4 5 5 - compatible: Must contain one of the following: 6 6 7 + - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART. 8 + - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART. 9 + - "renesas,scifa-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFA compatible UART. 10 + - "renesas,scifb-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFB compatible UART. 11 + - "renesas,scifa-r8a7740" for R8A7740 (R-Mobile A1) SCIFA compatible UART. 12 + - "renesas,scifb-r8a7740" for R8A7740 (R-Mobile A1) SCIFB compatible UART. 13 + - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART. 7 14 - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART. 8 15 - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART. 9 16 - "renesas,scifa-r8a7790" for R8A7790 (R-Car H2) SCIFA compatible UART.
+2 -2
Documentation/laptops/00-INDEX
··· 8 8 - information on hard disk shock protection. 9 9 dslm.c 10 10 - Simple Disk Sleep Monitor program 11 - hpfall.c 12 - - (HP) laptop accelerometer program for disk protection. 11 + freefall.c 12 + - (HP/DELL) laptop accelerometer program for disk protection. 13 13 laptop-mode.txt 14 14 - how to conserve battery power using laptop-mode. 15 15 sony-laptop.txt
+45 -14
Documentation/laptops/hpfall.c → Documentation/laptops/freefall.c
··· 1 - /* Disk protection for HP machines. 1 + /* Disk protection for HP/DELL machines. 2 2 * 3 3 * Copyright 2008 Eric Piel 4 4 * Copyright 2009 Pavel Machek <pavel@ucw.cz> 5 + * Copyright 2012 Sonal Santan 6 + * Copyright 2014 Pali Rohár <pali.rohar@gmail.com> 5 7 * 6 8 * GPLv2. 7 9 */ ··· 20 18 #include <signal.h> 21 19 #include <sys/mman.h> 22 20 #include <sched.h> 21 + #include <syslog.h> 23 22 24 - char unload_heads_path[64]; 23 + static int noled; 24 + static char unload_heads_path[64]; 25 + static char device_path[32]; 26 + static const char app_name[] = "FREE FALL"; 25 27 26 - int set_unload_heads_path(char *device) 28 + static int set_unload_heads_path(char *device) 27 29 { 28 30 char devname[64]; 29 31 30 32 if (strlen(device) <= 5 || strncmp(device, "/dev/", 5) != 0) 31 33 return -EINVAL; 32 - strncpy(devname, device + 5, sizeof(devname)); 34 + strncpy(devname, device + 5, sizeof(devname) - 1); 35 + strncpy(device_path, device, sizeof(device_path) - 1); 33 36 34 37 snprintf(unload_heads_path, sizeof(unload_heads_path) - 1, 35 38 "/sys/block/%s/device/unload_heads", devname); 36 39 return 0; 37 40 } 38 - int valid_disk(void) 41 + 42 + static int valid_disk(void) 39 43 { 40 44 int fd = open(unload_heads_path, O_RDONLY); 45 + 41 46 if (fd < 0) { 42 47 perror(unload_heads_path); 43 48 return 0; ··· 54 45 return 1; 55 46 } 56 47 57 - void write_int(char *path, int i) 48 + static void write_int(char *path, int i) 58 49 { 59 50 char buf[1024]; 60 51 int fd = open(path, O_RDWR); 52 + 61 53 if (fd < 0) { 62 54 perror("open"); 63 55 exit(1); 64 56 } 57 + 65 58 sprintf(buf, "%d", i); 59 + 66 60 if (write(fd, buf, strlen(buf)) != strlen(buf)) { 67 61 perror("write"); 68 62 exit(1); 69 63 } 64 + 70 65 close(fd); 71 66 } 72 67 73 - void set_led(int on) 68 + static void set_led(int on) 74 69 { 70 + if (noled) 71 + return; 75 72 write_int("/sys/class/leds/hp::hddprotect/brightness", on); 76 73 } 77 74 78 - void protect(int seconds) 75 + static void protect(int 
seconds) 79 76 { 77 + const char *str = (seconds == 0) ? "Unparked" : "Parked"; 78 + 80 79 write_int(unload_heads_path, seconds*1000); 80 + syslog(LOG_INFO, "%s %s disk head\n", str, device_path); 81 81 } 82 82 83 - int on_ac(void) 83 + static int on_ac(void) 84 84 { 85 - // /sys/class/power_supply/AC0/online 85 + /* /sys/class/power_supply/AC0/online */ 86 + return 1; 86 87 } 87 88 88 - int lid_open(void) 89 + static int lid_open(void) 89 90 { 90 - // /proc/acpi/button/lid/LID/state 91 + /* /proc/acpi/button/lid/LID/state */ 92 + return 1; 91 93 } 92 94 93 - void ignore_me(void) 95 + static void ignore_me(int signum) 94 96 { 95 97 protect(0); 96 98 set_led(0); ··· 110 90 int main(int argc, char **argv) 111 91 { 112 92 int fd, ret; 93 + struct stat st; 113 94 struct sched_param param; 114 95 115 96 if (argc == 1) ··· 132 111 return EXIT_FAILURE; 133 112 } 134 113 135 - daemon(0, 0); 114 + if (stat("/sys/class/leds/hp::hddprotect/brightness", &st)) 115 + noled = 1; 116 + 117 + if (daemon(0, 0) != 0) { 118 + perror("daemon"); 119 + return EXIT_FAILURE; 120 + } 121 + 122 + openlog(app_name, LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1); 123 + 136 124 param.sched_priority = sched_get_priority_max(SCHED_FIFO); 137 125 sched_setscheduler(0, SCHED_FIFO, &param); 138 126 mlockall(MCL_CURRENT|MCL_FUTURE); ··· 171 141 alarm(20); 172 142 } 173 143 144 + closelog(); 174 145 close(fd); 175 146 return EXIT_SUCCESS; 176 147 }
+17 -3
MAINTAINERS
··· 1314 1314 Q: http://patchwork.kernel.org/project/linux-sh/list/ 1315 1315 T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next 1316 1316 S: Supported 1317 + F: arch/arm/boot/dts/emev2* 1318 + F: arch/arm/boot/dts/r7s* 1319 + F: arch/arm/boot/dts/r8a* 1320 + F: arch/arm/boot/dts/sh* 1321 + F: arch/arm/configs/ape6evm_defconfig 1322 + F: arch/arm/configs/armadillo800eva_defconfig 1323 + F: arch/arm/configs/bockw_defconfig 1324 + F: arch/arm/configs/genmai_defconfig 1325 + F: arch/arm/configs/koelsch_defconfig 1326 + F: arch/arm/configs/kzm9g_defconfig 1327 + F: arch/arm/configs/lager_defconfig 1328 + F: arch/arm/configs/mackerel_defconfig 1329 + F: arch/arm/configs/marzen_defconfig 1330 + F: arch/arm/configs/shmobile_defconfig 1317 1331 F: arch/arm/mach-shmobile/ 1318 1332 F: drivers/sh/ 1319 1333 ··· 6802 6788 6803 6789 PCI DRIVER FOR IMX6 6804 6790 M: Richard Zhu <r65037@freescale.com> 6805 - M: Shawn Guo <shawn.guo@linaro.org> 6791 + M: Shawn Guo <shawn.guo@freescale.com> 6806 6792 L: linux-pci@vger.kernel.org 6807 6793 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 6808 6794 S: Maintained ··· 8964 8950 8965 8951 THERMAL 8966 8952 M: Zhang Rui <rui.zhang@intel.com> 8967 - M: Eduardo Valentin <eduardo.valentin@ti.com> 8953 + M: Eduardo Valentin <edubezval@gmail.com> 8968 8954 L: linux-pm@vger.kernel.org 8969 8955 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git 8970 8956 T: git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git ··· 8991 8977 F: drivers/platform/x86/thinkpad_acpi.c 8992 8978 8993 8979 TI BANDGAP AND THERMAL DRIVER 8994 - M: Eduardo Valentin <eduardo.valentin@ti.com> 8980 + M: Eduardo Valentin <edubezval@gmail.com> 8995 8981 L: linux-pm@vger.kernel.org 8996 8982 S: Supported 8997 8983 F: drivers/thermal/ti-soc-thermal/
+52 -49
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 16 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc5 5 5 NAME = Shuffling Zombie Juror 6 6 7 7 # *DOCUMENTATION* ··· 41 41 # descending is started. They are now explicitly listed as the 42 42 # prepare rule. 43 43 44 + # Beautify output 45 + # --------------------------------------------------------------------------- 46 + # 47 + # Normally, we echo the whole command before executing it. By making 48 + # that echo $($(quiet)$(cmd)), we now have the possibility to set 49 + # $(quiet) to choose other forms of output instead, e.g. 50 + # 51 + # quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ 52 + # cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< 53 + # 54 + # If $(quiet) is empty, the whole command will be printed. 55 + # If it is set to "quiet_", only the short version will be printed. 56 + # If it is set to "silent_", nothing will be printed at all, since 57 + # the variable $(silent_cmd_cc_o_c) doesn't exist. 58 + # 59 + # A simple variant is to prefix commands with $(Q) - that's useful 60 + # for commands that shall be hidden in non-verbose mode. 61 + # 62 + # $(Q)ln $@ :< 63 + # 64 + # If KBUILD_VERBOSE equals 0 then the above command will be hidden. 65 + # If KBUILD_VERBOSE equals 1 then the above command is displayed. 
66 + # 44 67 # To put more focus on warnings, be less verbose as default 45 68 # Use 'make V=1' to see the full commands 46 69 ··· 73 50 ifndef KBUILD_VERBOSE 74 51 KBUILD_VERBOSE = 0 75 52 endif 53 + 54 + ifeq ($(KBUILD_VERBOSE),1) 55 + quiet = 56 + Q = 57 + else 58 + quiet=quiet_ 59 + Q = @ 60 + endif 61 + 62 + # If the user is running make -s (silent mode), suppress echoing of 63 + # commands 64 + 65 + ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 66 + ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) 67 + quiet=silent_ 68 + endif 69 + else # make-3.8x 70 + ifneq ($(filter s% -s%,$(MAKEFLAGS)),) 71 + quiet=silent_ 72 + endif 73 + endif 74 + 75 + export quiet Q KBUILD_VERBOSE 76 76 77 77 # Call a source code checker (by default, "sparse") as part of the 78 78 # C compilation. ··· 174 128 175 129 # Fake the "Entering directory" message once, so that IDEs/editors are 176 130 # able to understand relative filenames. 131 + echodir := @echo 132 + quiet_echodir := @echo 133 + silent_echodir := @: 177 134 sub-make: FORCE 178 - @echo "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" 135 + $($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" 179 136 $(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \ 180 137 KBUILD_SRC=$(CURDIR) \ 181 138 KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \ ··· 340 291 341 292 export KBUILD_MODULES KBUILD_BUILTIN 342 293 export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD 343 - 344 - # Beautify output 345 - # --------------------------------------------------------------------------- 346 - # 347 - # Normally, we echo the whole command before executing it. By making 348 - # that echo $($(quiet)$(cmd)), we now have the possibility to set 349 - # $(quiet) to choose other forms of output instead, e.g. 350 - # 351 - # quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ 352 - # cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< 353 - # 354 - # If $(quiet) is empty, the whole command will be printed. 
355 - # If it is set to "quiet_", only the short version will be printed. 356 - # If it is set to "silent_", nothing will be printed at all, since 357 - # the variable $(silent_cmd_cc_o_c) doesn't exist. 358 - # 359 - # A simple variant is to prefix commands with $(Q) - that's useful 360 - # for commands that shall be hidden in non-verbose mode. 361 - # 362 - # $(Q)ln $@ :< 363 - # 364 - # If KBUILD_VERBOSE equals 0 then the above command will be hidden. 365 - # If KBUILD_VERBOSE equals 1 then the above command is displayed. 366 - 367 - ifeq ($(KBUILD_VERBOSE),1) 368 - quiet = 369 - Q = 370 - else 371 - quiet=quiet_ 372 - Q = @ 373 - endif 374 - 375 - # If the user is running make -s (silent mode), suppress echoing of 376 - # commands 377 - 378 - ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 379 - ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) 380 - quiet=silent_ 381 - endif 382 - else # make-3.8x 383 - ifneq ($(filter s% -s%,$(MAKEFLAGS)),) 384 - quiet=silent_ 385 - endif 386 - endif 387 - 388 - export quiet Q KBUILD_VERBOSE 389 294 390 295 ifneq ($(CC),) 391 296 ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1) ··· 1176 1173 # Packaging of the kernel to various formats 1177 1174 # --------------------------------------------------------------------------- 1178 1175 # rpm target kept for backward compatibility 1179 - package-dir := $(srctree)/scripts/package 1176 + package-dir := scripts/package 1180 1177 1181 1178 %src-pkg: FORCE 1182 1179 $(Q)$(MAKE) $(build)=$(package-dir) $@
+2 -2
arch/arm/boot/dts/am335x-evm.dts
··· 529 529 serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 530 530 0 0 1 2 531 531 >; 532 - tx-num-evt = <1>; 533 - rx-num-evt = <1>; 532 + tx-num-evt = <32>; 533 + rx-num-evt = <32>; 534 534 }; 535 535 536 536 &tps {
+2 -2
arch/arm/boot/dts/am335x-evmsk.dts
··· 560 560 serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 561 561 0 0 1 2 562 562 >; 563 - tx-num-evt = <1>; 564 - rx-num-evt = <1>; 563 + tx-num-evt = <32>; 564 + rx-num-evt = <32>; 565 565 }; 566 566 567 567 &tscadc {
+6
arch/arm/boot/dts/am335x-igep0033.dtsi
··· 105 105 106 106 &cpsw_emac0 { 107 107 phy_id = <&davinci_mdio>, <0>; 108 + phy-mode = "rmii"; 108 109 }; 109 110 110 111 &cpsw_emac1 { 111 112 phy_id = <&davinci_mdio>, <1>; 113 + phy-mode = "rmii"; 114 + }; 115 + 116 + &phy_sel { 117 + rmii-clock-ext; 112 118 }; 113 119 114 120 &elm {
+2
arch/arm/boot/dts/at91sam9x5.dtsi
··· 1045 1045 reg = <0x00500000 0x80000 1046 1046 0xf803c000 0x400>; 1047 1047 interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; 1048 + clocks = <&usb>, <&udphs_clk>; 1049 + clock-names = "hclk", "pclk"; 1048 1050 status = "disabled"; 1049 1051 1050 1052 ep0 {
+1
arch/arm/boot/dts/dra7-evm.dts
··· 240 240 regulator-name = "ldo3"; 241 241 regulator-min-microvolt = <1800000>; 242 242 regulator-max-microvolt = <1800000>; 243 + regulator-always-on; 243 244 regulator-boot-on; 244 245 }; 245 246
+6 -4
arch/arm/boot/dts/dra7xx-clocks.dtsi
··· 673 673 674 674 l3_iclk_div: l3_iclk_div { 675 675 #clock-cells = <0>; 676 - compatible = "fixed-factor-clock"; 676 + compatible = "ti,divider-clock"; 677 + ti,max-div = <2>; 678 + ti,bit-shift = <4>; 679 + reg = <0x0100>; 677 680 clocks = <&dpll_core_h12x2_ck>; 678 - clock-mult = <1>; 679 - clock-div = <1>; 681 + ti,index-power-of-two; 680 682 }; 681 683 682 684 l4_root_clk_div: l4_root_clk_div { ··· 686 684 compatible = "fixed-factor-clock"; 687 685 clocks = <&l3_iclk_div>; 688 686 clock-mult = <1>; 689 - clock-div = <1>; 687 + clock-div = <2>; 690 688 }; 691 689 692 690 video1_clk2_div: video1_clk2_div {
+1 -1
arch/arm/boot/dts/exynos4.dtsi
··· 554 554 interrupts = <0 37 0>, <0 38 0>, <0 39 0>, <0 40 0>, <0 41 0>; 555 555 clocks = <&clock CLK_PWM>; 556 556 clock-names = "timers"; 557 - #pwm-cells = <2>; 557 + #pwm-cells = <3>; 558 558 status = "disabled"; 559 559 }; 560 560
+4 -1
arch/arm/boot/dts/exynos5420.dtsi
··· 167 167 compatible = "samsung,exynos5420-audss-clock"; 168 168 reg = <0x03810000 0x0C>; 169 169 #clock-cells = <1>; 170 - clocks = <&clock CLK_FIN_PLL>, <&clock CLK_FOUT_EPLL>, 170 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MAU_EPLL>, 171 171 <&clock CLK_SCLK_MAUDIO0>, <&clock CLK_SCLK_MAUPCM0>; 172 172 clock-names = "pll_ref", "pll_in", "sclk_audio", "sclk_pcm_in"; 173 173 }; ··· 260 260 mfc_pd: power-domain@10044060 { 261 261 compatible = "samsung,exynos4210-pd"; 262 262 reg = <0x10044060 0x20>; 263 + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, 264 + <&clock CLK_MOUT_USER_ACLK333>; 265 + clock-names = "oscclk", "pclk0", "clk0"; 263 266 }; 264 267 265 268 disp_pd: power-domain@100440C0 {
+19 -9
arch/arm/kernel/kprobes-test-arm.c
··· 74 74 TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\ 75 75 TEST_RR( op "gt" s " r12, r13" ", r",14,val, ", ror r",14,7,"")\ 76 76 TEST_RR( op "le" s " r14, r",0, val, ", r13" ", lsl r",14,8,"")\ 77 - TEST_RR( op s " r12, pc" ", r",14,val, ", ror r",14,7,"")\ 78 - TEST_RR( op s " r14, r",0, val, ", pc" ", lsl r",14,8,"")\ 79 77 TEST_R( op "eq" s " r0, r",11,VAL1,", #0xf5") \ 80 78 TEST_R( op "ne" s " r11, r",0, VAL1,", #0xf5000000") \ 81 79 TEST_R( op s " r7, r",8, VAL2,", #0x000af000") \ ··· 101 103 TEST_RRR( op "ge r",11,VAL1,", r",14,N(val),", asr r",7, 6,"") \ 102 104 TEST_RR( op "le r13" ", r",14,val, ", ror r",14,7,"") \ 103 105 TEST_RR( op "gt r",0, val, ", r13" ", lsl r",14,8,"") \ 104 - TEST_RR( op " pc" ", r",14,val, ", ror r",14,7,"") \ 105 - TEST_RR( op " r",0, val, ", pc" ", lsl r",14,8,"") \ 106 106 TEST_R( op "eq r",11,VAL1,", #0xf5") \ 107 107 TEST_R( op "ne r",0, VAL1,", #0xf5000000") \ 108 108 TEST_R( op " r",8, VAL2,", #0x000af000") ··· 121 125 TEST_RR( op "ge" s " r11, r",11,N(val),", asr r",7, 6,"") \ 122 126 TEST_RR( op "lt" s " r12, r",11,val, ", ror r",14,7,"") \ 123 127 TEST_R( op "gt" s " r14, r13" ", lsl r",14,8,"") \ 124 - TEST_R( op "le" s " r14, pc" ", lsl r",14,8,"") \ 125 128 TEST( op "eq" s " r0, #0xf5") \ 126 129 TEST( op "ne" s " r11, #0xf5000000") \ 127 130 TEST( op s " r7, #0x000af000") \ ··· 154 159 TEST_SUPPORTED("cmp pc, #0x1000"); 155 160 TEST_SUPPORTED("cmp sp, #0x1000"); 156 161 157 - /* Data-processing with PC as shift*/ 162 + /* Data-processing with PC and a shift count in a register */ 158 163 TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) " @ cmp r12, r14, asl pc") 159 164 TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) " @ mov r12, r14, asl pc") 160 165 TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) " @ add r10, r12, r14, asl pc") 166 + TEST_UNSUPPORTED(__inst_arm(0xe151021f) " @ cmp r1, pc, lsl r2") 167 + TEST_UNSUPPORTED(__inst_arm(0xe17f0211) " @ cmn pc, r1, lsl r2") 168 + 
TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) " @ mov r1, pc, lsl r2") 169 + TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) " @ mov pc, r1, lsl r2") 170 + TEST_UNSUPPORTED(__inst_arm(0xe042131f) " @ sub r1, r2, pc, lsl r3") 171 + TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) " @ bic r1, pc, r2, lsl r3") 172 + TEST_UNSUPPORTED(__inst_arm(0xe081f312) " @ add pc, r1, r2, lsl r3") 161 173 162 - /* Data-processing with PC as shift*/ 174 + /* Data-processing with PC as a target and status registers updated */ 163 175 TEST_UNSUPPORTED("movs pc, r1") 164 176 TEST_UNSUPPORTED("movs pc, r1, lsl r2") 165 177 TEST_UNSUPPORTED("movs pc, #0x10000") ··· 189 187 TEST_BF_R ("add pc, pc, r",14,2f-1f-8,"") 190 188 TEST_BF_R ("add pc, r",14,2f-1f-8,", pc") 191 189 TEST_BF_R ("mov pc, r",0,2f,"") 192 - TEST_BF_RR("mov pc, r",0,2f,", asl r",1,0,"") 190 + TEST_BF_R ("add pc, pc, r",14,(2f-1f-8)*2,", asr #1") 193 191 TEST_BB( "sub pc, pc, #1b-2b+8") 194 192 #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7) 195 193 TEST_BB( "sub pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */ 196 194 #endif 197 195 TEST_BB_R( "sub pc, pc, r",14, 1f-2f+8,"") 198 196 TEST_BB_R( "rsb pc, r",14,1f-2f+8,", pc") 199 - TEST_RR( "add pc, pc, r",10,-2,", asl r",11,1,"") 197 + TEST_R( "add pc, pc, r",10,-2,", asl #1") 200 198 #ifdef CONFIG_THUMB2_KERNEL 201 199 TEST_ARM_TO_THUMB_INTERWORK_R("add pc, pc, r",0,3f-1f-8+1,"") 202 200 TEST_ARM_TO_THUMB_INTERWORK_R("sub pc, r",0,3f+8+1,", #8") ··· 218 216 TEST_BB_R("bx r",7,2f,"") 219 217 TEST_BF_R("bxeq r",14,2f,"") 220 218 219 + #if __LINUX_ARM_ARCH__ >= 5 221 220 TEST_R("clz r0, r",0, 0x0,"") 222 221 TEST_R("clzeq r7, r",14,0x1,"") 223 222 TEST_R("clz lr, r",7, 0xffffffff,"") ··· 340 337 TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2") 341 338 TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2") 342 339 TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc") 340 + #endif 343 341 344 342 TEST_GROUP("Multiply and multiply-accumulate") 
345 343 ··· 563 559 TEST_UNSUPPORTED("ldrsht r1, [r2], #48") 564 560 #endif 565 561 562 + #if __LINUX_ARM_ARCH__ >= 5 566 563 TEST_RPR( "strd r",0, VAL1,", [r",1, 48,", -r",2,24,"]") 567 564 TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") 568 565 TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") ··· 600 595 TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) " @ ldrd r12, [pc, #48]!") 601 596 TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) " @ ldrd pc, [r9], #48") 602 597 TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) " @ ldrd lr, [r9], #48") 598 + #endif 603 599 604 600 TEST_GROUP("Miscellaneous") 605 601 ··· 1233 1227 TEST_COPROCESSOR( "mrc"two" 0, 0, r0, cr0, cr0, 0") 1234 1228 1235 1229 COPROCESSOR_INSTRUCTIONS_ST_LD("",e) 1230 + #if __LINUX_ARM_ARCH__ >= 5 1236 1231 COPROCESSOR_INSTRUCTIONS_MC_MR("",e) 1232 + #endif 1237 1233 TEST_UNSUPPORTED("svc 0") 1238 1234 TEST_UNSUPPORTED("svc 0xffffff") 1239 1235 ··· 1295 1287 TEST( "blx __dummy_thumb_subroutine_odd") 1296 1288 #endif /* __LINUX_ARM_ARCH__ >= 6 */ 1297 1289 1290 + #if __LINUX_ARM_ARCH__ >= 5 1298 1291 COPROCESSOR_INSTRUCTIONS_ST_LD("2",f) 1292 + #endif 1299 1293 #if __LINUX_ARM_ARCH__ >= 6 1300 1294 COPROCESSOR_INSTRUCTIONS_MC_MR("2",f) 1301 1295 #endif
+10
arch/arm/kernel/kprobes-test.c
··· 225 225 static int post_handler_called; 226 226 static int jprobe_func_called; 227 227 static int kretprobe_handler_called; 228 + static int tests_failed; 228 229 229 230 #define FUNC_ARG1 0x12345678 230 231 #define FUNC_ARG2 0xabcdef ··· 462 461 463 462 pr_info(" jprobe\n"); 464 463 ret = test_jprobe(func); 464 + #if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE) 465 + if (ret == -EINVAL) { 466 + pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n"); 467 + tests_failed = ret; 468 + ret = 0; 469 + } 470 + #endif 465 471 if (ret < 0) 466 472 return ret; 467 473 ··· 1679 1671 #endif 1680 1672 1681 1673 out: 1674 + if (ret == 0) 1675 + ret = tests_failed; 1682 1676 if (ret == 0) 1683 1677 pr_info("Finished kprobe tests OK\n"); 1684 1678 else
+3 -3
arch/arm/kernel/probes-arm.c
··· 341 341 /* CMP (reg-shift reg) cccc 0001 0101 xxxx xxxx xxxx 0xx1 xxxx */ 342 342 /* CMN (reg-shift reg) cccc 0001 0111 xxxx xxxx xxxx 0xx1 xxxx */ 343 343 DECODE_EMULATEX (0x0f900090, 0x01100010, PROBES_DATA_PROCESSING_REG, 344 - REGS(ANY, 0, NOPC, 0, ANY)), 344 + REGS(NOPC, 0, NOPC, 0, NOPC)), 345 345 346 346 /* MOV (reg-shift reg) cccc 0001 101x xxxx xxxx xxxx 0xx1 xxxx */ 347 347 /* MVN (reg-shift reg) cccc 0001 111x xxxx xxxx xxxx 0xx1 xxxx */ 348 348 DECODE_EMULATEX (0x0fa00090, 0x01a00010, PROBES_DATA_PROCESSING_REG, 349 - REGS(0, ANY, NOPC, 0, ANY)), 349 + REGS(0, NOPC, NOPC, 0, NOPC)), 350 350 351 351 /* AND (reg-shift reg) cccc 0000 000x xxxx xxxx xxxx 0xx1 xxxx */ 352 352 /* EOR (reg-shift reg) cccc 0000 001x xxxx xxxx xxxx 0xx1 xxxx */ ··· 359 359 /* ORR (reg-shift reg) cccc 0001 100x xxxx xxxx xxxx 0xx1 xxxx */ 360 360 /* BIC (reg-shift reg) cccc 0001 110x xxxx xxxx xxxx 0xx1 xxxx */ 361 361 DECODE_EMULATEX (0x0e000090, 0x00000010, PROBES_DATA_PROCESSING_REG, 362 - REGS(ANY, ANY, NOPC, 0, ANY)), 362 + REGS(NOPC, NOPC, NOPC, 0, NOPC)), 363 363 364 364 DECODE_END 365 365 };
+3 -5
arch/arm/mach-exynos/exynos.c
··· 173 173 174 174 void __init exynos_cpuidle_init(void) 175 175 { 176 - if (soc_is_exynos5440()) 177 - return; 178 - 179 - platform_device_register(&exynos_cpuidle); 176 + if (soc_is_exynos4210() || soc_is_exynos5250()) 177 + platform_device_register(&exynos_cpuidle); 180 178 } 181 179 182 180 void __init exynos_cpufreq_init(void) ··· 295 297 * This is called from smp_prepare_cpus if we've built for SMP, but 296 298 * we still need to set it up for PM and firmware ops if not. 297 299 */ 298 - if (!IS_ENABLED(SMP)) 300 + if (!IS_ENABLED(CONFIG_SMP)) 299 301 exynos_sysram_init(); 300 302 301 303 exynos_cpuidle_init();
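The exynos.c hunk above changes `IS_ENABLED(SMP)` to `IS_ENABLED(CONFIG_SMP)`. The bug is silent because `IS_ENABLED()` of any undefined token compiles cleanly and evaluates to 0. A simplified model of the macro machinery (the real `include/linux/kconfig.h` version also handles `=m` module options) shows why:

```c
/* Simplified model of the kernel's IS_ENABLED() trick: a defined
 * CONFIG_* symbol expands to 1, which pastes into __ARG_PLACEHOLDER_1
 * and injects an extra argument; any other token (like the bare "SMP"
 * typo fixed above) leaves the placeholder unexpanded, so the second
 * argument stays 0. No compile error either way. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_SMP 1	/* what Kconfig generates for a built-in option */
```

With this in place, `IS_ENABLED(CONFIG_SMP)` is 1 while the misspelled `IS_ENABLED(SMP)` is always 0 — which is why the broken check never registered the sysram on UP builds without anyone noticing at compile time.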
+7 -2
arch/arm/mach-exynos/firmware.c
··· 57 57 58 58 boot_reg = sysram_ns_base_addr + 0x1c; 59 59 60 - if (!soc_is_exynos4212() && !soc_is_exynos3250()) 61 - boot_reg += 4*cpu; 60 + /* 61 + * Almost all Exynos-series of SoCs that run in secure mode don't need 62 + * additional offset for every CPU, with Exynos4412 being the only 63 + * exception. 64 + */ 65 + if (soc_is_exynos4412()) 66 + boot_reg += 4 * cpu; 62 67 63 68 __raw_writel(boot_addr, boot_reg); 64 69 return 0;
+60 -1
arch/arm/mach-exynos/pm_domains.c
··· 17 17 #include <linux/err.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/pm_domain.h> 20 + #include <linux/clk.h> 20 21 #include <linux/delay.h> 21 22 #include <linux/of_address.h> 22 23 #include <linux/of_platform.h> 23 24 #include <linux/sched.h> 24 25 25 26 #include "regs-pmu.h" 27 + 28 + #define MAX_CLK_PER_DOMAIN 4 26 29 27 30 /* 28 31 * Exynos specific wrapper around the generic power domain ··· 35 32 char const *name; 36 33 bool is_off; 37 34 struct generic_pm_domain pd; 35 + struct clk *oscclk; 36 + struct clk *clk[MAX_CLK_PER_DOMAIN]; 37 + struct clk *pclk[MAX_CLK_PER_DOMAIN]; 38 38 }; 39 39 40 40 static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) ··· 49 43 50 44 pd = container_of(domain, struct exynos_pm_domain, pd); 51 45 base = pd->base; 46 + 47 + /* Set oscclk before powering off a domain*/ 48 + if (!power_on) { 49 + int i; 50 + 51 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 52 + if (IS_ERR(pd->clk[i])) 53 + break; 54 + if (clk_set_parent(pd->clk[i], pd->oscclk)) 55 + pr_err("%s: error setting oscclk as parent to clock %d\n", 56 + pd->name, i); 57 + } 58 + } 52 59 53 60 pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0; 54 61 __raw_writel(pwr, base); ··· 79 60 cpu_relax(); 80 61 usleep_range(80, 100); 81 62 } 63 + 64 + /* Restore clocks after powering on a domain*/ 65 + if (power_on) { 66 + int i; 67 + 68 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 69 + if (IS_ERR(pd->clk[i])) 70 + break; 71 + if (clk_set_parent(pd->clk[i], pd->pclk[i])) 72 + pr_err("%s: error setting parent to clock%d\n", 73 + pd->name, i); 74 + } 75 + } 76 + 82 77 return 0; 83 78 } 84 79 ··· 185 152 186 153 for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { 187 154 struct exynos_pm_domain *pd; 188 - int on; 155 + int on, i; 156 + struct device *dev; 189 157 190 158 pdev = of_find_device_by_node(np); 159 + dev = &pdev->dev; 191 160 192 161 pd = kzalloc(sizeof(*pd), GFP_KERNEL); 193 162 if (!pd) { ··· 205 170 pd->pd.power_on = exynos_pd_power_on; 206 171 pd->pd.of_node = np; 207 172 173 + pd->oscclk = clk_get(dev, "oscclk"); 174 + if (IS_ERR(pd->oscclk)) 175 + goto no_clk; 176 + 177 + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { 178 + char clk_name[8]; 179 + 180 + snprintf(clk_name, sizeof(clk_name), "clk%d", i); 181 + pd->clk[i] = clk_get(dev, clk_name); 182 + if (IS_ERR(pd->clk[i])) 183 + break; 184 + snprintf(clk_name, sizeof(clk_name), "pclk%d", i); 185 + pd->pclk[i] = clk_get(dev, clk_name); 186 + if (IS_ERR(pd->pclk[i])) { 187 + clk_put(pd->clk[i]); 188 + pd->clk[i] = ERR_PTR(-EINVAL); 189 + break; 190 + } 191 + } 192 + 193 + if (IS_ERR(pd->clk[0])) 194 + clk_put(pd->oscclk); 195 + 196 + no_clk: 208 197 platform_set_drvdata(pdev, pd); 209 198 210 199 on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
+23 -8
arch/arm/mach-imx/clk-gate2.c
··· 67 67 68 68 spin_lock_irqsave(gate->lock, flags); 69 69 70 - if (gate->share_count && --(*gate->share_count) > 0) 71 - goto out; 70 + if (gate->share_count) { 71 + if (WARN_ON(*gate->share_count == 0)) 72 + goto out; 73 + else if (--(*gate->share_count) > 0) 74 + goto out; 75 + } 72 76 73 77 reg = readl(gate->reg); 74 78 reg &= ~(3 << gate->bit_idx); ··· 82 78 spin_unlock_irqrestore(gate->lock, flags); 83 79 } 84 80 85 - static int clk_gate2_is_enabled(struct clk_hw *hw) 81 + static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx) 86 82 { 87 - u32 reg; 88 - struct clk_gate2 *gate = to_clk_gate2(hw); 83 + u32 val = readl(reg); 89 84 90 - reg = readl(gate->reg); 91 - 92 - if (((reg >> gate->bit_idx) & 1) == 1) 85 + if (((val >> bit_idx) & 1) == 1) 93 86 return 1; 94 87 95 88 return 0; 89 + } 90 + 91 + static int clk_gate2_is_enabled(struct clk_hw *hw) 92 + { 93 + struct clk_gate2 *gate = to_clk_gate2(hw); 94 + 95 + if (gate->share_count) 96 + return !!(*gate->share_count); 97 + else 98 + return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx); 96 99 } 97 100 98 101 static struct clk_ops clk_gate2_ops = { ··· 127 116 gate->bit_idx = bit_idx; 128 117 gate->flags = clk_gate2_flags; 129 118 gate->lock = lock; 119 + 120 + /* Initialize share_count per hardware state */ 121 + if (share_count) 122 + *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0; 130 123 gate->share_count = share_count; 131 124 132 125 init.name = name;
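The clk-gate2.c changes above make the shared gate robust against unbalanced disables and report the enable state from the share count rather than the raw register. A toy single-threaded model of that bookkeeping (names are illustrative, not the kernel clk API; the real code also takes a spinlock and warns on underflow):

```c
/* Toy model of a shared clock gate: several logical clocks share one
 * hardware enable bit, which is only touched when the user count
 * crosses zero. */
struct gate2 {
	int hw_bit;	/* stands in for the 2-bit register field */
	int share_count;
};

static void gate2_enable(struct gate2 *g)
{
	if (g->share_count++ == 0)
		g->hw_bit = 1;
}

static void gate2_disable(struct gate2 *g)
{
	if (g->share_count == 0)	/* underflow guard added by the patch */
		return;
	if (--g->share_count == 0)
		g->hw_bit = 0;
}

static int gate2_is_enabled(const struct gate2 *g)
{
	/* report the shared count, not the raw bit, as the patch does */
	return g->share_count > 0;
}
```

The patch also initializes the share count from the hardware state at registration, so a gate left on by the bootloader starts with count 1 instead of 0.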
+1 -1
arch/arm/mach-omap2/clkt_dpll.c
··· 76 76 * (assuming that it is counting N upwards), or -2 if the enclosing loop 77 77 * should skip to the next iteration (again assuming N is increasing). 78 78 */ 79 - static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n) 79 + static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n) 80 80 { 81 81 struct dpll_data *dd; 82 82 long fint, fint_min, fint_max;
+3
arch/arm/mach-omap2/cm-regbits-34xx.h
··· 26 26 #define OMAP3430_EN_WDT3_SHIFT 12 27 27 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_MASK (1 << 0) 28 28 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_SHIFT 0 29 + #define OMAP3430_IVA2_DPLL_FREQSEL_SHIFT 4 29 30 #define OMAP3430_IVA2_DPLL_FREQSEL_MASK (0xf << 4) 30 31 #define OMAP3430_EN_IVA2_DPLL_DRIFTGUARD_SHIFT 3 32 + #define OMAP3430_EN_IVA2_DPLL_SHIFT 0 31 33 #define OMAP3430_EN_IVA2_DPLL_MASK (0x7 << 0) 32 34 #define OMAP3430_ST_IVA2_SHIFT 0 33 35 #define OMAP3430_ST_IVA2_CLK_MASK (1 << 0) 36 + #define OMAP3430_AUTO_IVA2_DPLL_SHIFT 0 34 37 #define OMAP3430_AUTO_IVA2_DPLL_MASK (0x7 << 0) 35 38 #define OMAP3430_IVA2_CLK_SRC_SHIFT 19 36 39 #define OMAP3430_IVA2_CLK_SRC_WIDTH 3
+2 -1
arch/arm/mach-omap2/common.h
··· 162 162 } 163 163 #endif 164 164 165 - #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) 165 + #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 166 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) 166 167 void omap44xx_restart(enum reboot_mode mode, const char *cmd); 167 168 #else 168 169 static inline void omap44xx_restart(enum reboot_mode mode, const char *cmd)
-28
arch/arm/mach-omap2/devices.c
··· 297 297 static inline void omap_init_audio(void) {} 298 298 #endif 299 299 300 - #if defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI) || \ 301 - defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI_MODULE) 302 - 303 - static struct platform_device omap_hdmi_audio = { 304 - .name = "omap-hdmi-audio", 305 - .id = -1, 306 - }; 307 - 308 - static void __init omap_init_hdmi_audio(void) 309 - { 310 - struct omap_hwmod *oh; 311 - struct platform_device *pdev; 312 - 313 - oh = omap_hwmod_lookup("dss_hdmi"); 314 - if (!oh) 315 - return; 316 - 317 - pdev = omap_device_build("omap-hdmi-audio-dai", -1, oh, NULL, 0); 318 - WARN(IS_ERR(pdev), 319 - "Can't build omap_device for omap-hdmi-audio-dai.\n"); 320 - 321 - platform_device_register(&omap_hdmi_audio); 322 - } 323 - #else 324 - static inline void omap_init_hdmi_audio(void) {} 325 - #endif 326 - 327 300 #if defined(CONFIG_SPI_OMAP24XX) || defined(CONFIG_SPI_OMAP24XX_MODULE) 328 301 329 302 #include <linux/platform_data/spi-omap2-mcspi.h> ··· 432 459 */ 433 460 omap_init_audio(); 434 461 omap_init_camera(); 435 - omap_init_hdmi_audio(); 436 462 omap_init_mbox(); 437 463 /* If dtb is there, the devices will be created dynamically */ 438 464 if (!of_have_populated_dt()) {
+10
arch/arm/mach-omap2/dsp.c
··· 29 29 #ifdef CONFIG_TIDSPBRIDGE_DVFS 30 30 #include "omap-pm.h" 31 31 #endif 32 + #include "soc.h" 32 33 33 34 #include <linux/platform_data/dsp-omap.h> 34 35 ··· 60 59 phys_addr_t size = CONFIG_TIDSPBRIDGE_MEMPOOL_SIZE; 61 60 phys_addr_t paddr; 62 61 62 + if (!cpu_is_omap34xx()) 63 + return; 64 + 63 65 if (!size) 64 66 return; 65 67 ··· 86 82 struct platform_device *pdev; 87 83 int err = -ENOMEM; 88 84 struct omap_dsp_platform_data *pdata = &omap_dsp_pdata; 85 + 86 + if (!cpu_is_omap34xx()) 87 + return 0; 89 88 90 89 pdata->phys_mempool_base = omap_dsp_get_mempool_base(); 91 90 ··· 122 115 123 116 static void __exit omap_dsp_exit(void) 124 117 { 118 + if (!cpu_is_omap34xx()) 119 + return; 120 + 125 121 platform_device_unregister(omap_dsp_pdev); 126 122 } 127 123 module_exit(omap_dsp_exit);
+1 -1
arch/arm/mach-omap2/gpmc.c
··· 1615 1615 return ret; 1616 1616 } 1617 1617 1618 - for_each_child_of_node(pdev->dev.of_node, child) { 1618 + for_each_available_child_of_node(pdev->dev.of_node, child) { 1619 1619 1620 1620 if (!child->name) 1621 1621 continue;
+13 -5
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1268 1268 }; 1269 1269 1270 1270 /* sata */ 1271 - static struct omap_hwmod_opt_clk sata_opt_clks[] = { 1272 - { .role = "ref_clk", .clk = "sata_ref_clk" }, 1273 - }; 1274 1271 1275 1272 static struct omap_hwmod dra7xx_sata_hwmod = { 1276 1273 .name = "sata", ··· 1275 1278 .clkdm_name = "l3init_clkdm", 1276 1279 .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, 1277 1280 .main_clk = "func_48m_fclk", 1281 + .mpu_rt_idx = 1, 1278 1282 .prcm = { 1279 1283 .omap4 = { 1280 1284 .clkctrl_offs = DRA7XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, ··· 1283 1285 .modulemode = MODULEMODE_SWCTRL, 1284 1286 }, 1285 1287 }, 1286 - .opt_clks = sata_opt_clks, 1287 - .opt_clks_cnt = ARRAY_SIZE(sata_opt_clks), 1288 1288 }; 1289 1289 1290 1290 /* ··· 1727 1731 * 1728 1732 */ 1729 1733 1734 + static struct omap_hwmod_class_sysconfig dra7xx_usb_otg_ss_sysc = { 1735 + .rev_offs = 0x0000, 1736 + .sysc_offs = 0x0010, 1737 + .sysc_flags = (SYSC_HAS_DMADISABLE | SYSC_HAS_MIDLEMODE | 1738 + SYSC_HAS_SIDLEMODE), 1739 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | 1740 + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | 1741 + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), 1742 + .sysc_fields = &omap_hwmod_sysc_type2, 1743 + }; 1744 + 1730 1745 static struct omap_hwmod_class dra7xx_usb_otg_ss_hwmod_class = { 1731 1746 .name = "usb_otg_ss", 1747 + .sysc = &dra7xx_usb_otg_ss_sysc, 1732 1748 }; 1733 1749 1734 1750 /* usb_otg_ss1 */
+6
arch/arm/mach-omap2/prm-regbits-34xx.h
··· 35 35 #define OMAP3430_LOGICSTATEST_MASK (1 << 2) 36 36 #define OMAP3430_LASTLOGICSTATEENTERED_MASK (1 << 2) 37 37 #define OMAP3430_LASTPOWERSTATEENTERED_MASK (0x3 << 0) 38 + #define OMAP3430_GRPSEL_MCBSP5_MASK (1 << 10) 39 + #define OMAP3430_GRPSEL_MCBSP1_MASK (1 << 9) 38 40 #define OMAP3630_GRPSEL_UART4_MASK (1 << 18) 39 41 #define OMAP3430_GRPSEL_GPIO6_MASK (1 << 17) 40 42 #define OMAP3430_GRPSEL_GPIO5_MASK (1 << 16) ··· 44 42 #define OMAP3430_GRPSEL_GPIO3_MASK (1 << 14) 45 43 #define OMAP3430_GRPSEL_GPIO2_MASK (1 << 13) 46 44 #define OMAP3430_GRPSEL_UART3_MASK (1 << 11) 45 + #define OMAP3430_GRPSEL_GPT8_MASK (1 << 9) 46 + #define OMAP3430_GRPSEL_GPT7_MASK (1 << 8) 47 + #define OMAP3430_GRPSEL_GPT6_MASK (1 << 7) 48 + #define OMAP3430_GRPSEL_GPT5_MASK (1 << 6) 47 49 #define OMAP3430_GRPSEL_MCBSP4_MASK (1 << 2) 48 50 #define OMAP3430_GRPSEL_MCBSP3_MASK (1 << 1) 49 51 #define OMAP3430_GRPSEL_MCBSP2_MASK (1 << 0)
+1 -1
arch/arm/mm/cache-l2x0.c
··· 664 664 665 665 static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) 666 666 { 667 - unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_PART_MASK; 667 + unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK; 668 668 bool cortex_a9 = read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9; 669 669 670 670 if (rev >= L310_CACHE_ID_RTL_R2P0) {
+2
arch/arm64/include/asm/memory.h
··· 56 56 #define TASK_SIZE_32 UL(0x100000000) 57 57 #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ 58 58 TASK_SIZE_32 : TASK_SIZE_64) 59 + #define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ 60 + TASK_SIZE_32 : TASK_SIZE_64) 59 61 #else 60 62 #define TASK_SIZE TASK_SIZE_64 61 63 #endif /* CONFIG_COMPAT */
+2
arch/arm64/mm/copypage.c
··· 27 27 copy_page(kto, kfrom); 28 28 __flush_dcache_area(kto, PAGE_SIZE); 29 29 } 30 + EXPORT_SYMBOL_GPL(__cpu_copy_user_page); 30 31 31 32 void __cpu_clear_user_page(void *kaddr, unsigned long vaddr) 32 33 { 33 34 clear_page(kaddr); 34 35 } 36 + EXPORT_SYMBOL_GPL(__cpu_clear_user_page);
+2 -1
arch/m68k/kernel/head.S
··· 921 921 jls 1f 922 922 lsrl #1,%d1 923 923 1: 924 - movel %d1,m68k_init_mapped_size 924 + lea %pc@(m68k_init_mapped_size),%a0 925 + movel %d1,%a0@ 925 926 mmu_map #PAGE_OFFSET,%pc@(L(phys_kernel_start)),%d1,\ 926 927 %pc@(m68k_supervisor_cachemode) 927 928
+2
arch/m68k/kernel/time.c
··· 11 11 */ 12 12 13 13 #include <linux/errno.h> 14 + #include <linux/export.h> 14 15 #include <linux/module.h> 15 16 #include <linux/sched.h> 16 17 #include <linux/kernel.h> ··· 31 30 32 31 33 32 unsigned long (*mach_random_get_entropy)(void); 33 + EXPORT_SYMBOL_GPL(mach_random_get_entropy); 34 34 35 35 36 36 /*
+2 -1
arch/parisc/kernel/hardware.c
··· 1210 1210 {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, 1211 1211 {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, 1212 1212 {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, 1213 - {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, 1213 + {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, 1214 + {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"}, 1214 1215 {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, 1215 1216 {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, 1216 1217 {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"},
+10 -36
arch/parisc/kernel/sys_parisc32.c
··· 4 4 * Copyright (C) 2000-2001 Hewlett Packard Company 5 5 * Copyright (C) 2000 John Marvin 6 6 * Copyright (C) 2001 Matthew Wilcox 7 + * Copyright (C) 2014 Helge Deller <deller@gmx.de> 7 8 * 8 9 * These routines maintain argument size conversion between 32bit and 64bit 9 10 * environment. Based heavily on sys_ia32.c and sys_sparc32.c. ··· 12 11 13 12 #include <linux/compat.h> 14 13 #include <linux/kernel.h> 15 - #include <linux/sched.h> 16 - #include <linux/fs.h> 17 - #include <linux/mm.h> 18 - #include <linux/file.h> 19 - #include <linux/signal.h> 20 - #include <linux/resource.h> 21 - #include <linux/times.h> 22 - #include <linux/time.h> 23 - #include <linux/smp.h> 24 - #include <linux/sem.h> 25 - #include <linux/shm.h> 26 - #include <linux/slab.h> 27 - #include <linux/uio.h> 28 - #include <linux/ncp_fs.h> 29 - #include <linux/poll.h> 30 - #include <linux/personality.h> 31 - #include <linux/stat.h> 32 - #include <linux/highmem.h> 33 - #include <linux/highuid.h> 34 - #include <linux/mman.h> 35 - #include <linux/binfmts.h> 36 - #include <linux/namei.h> 37 - #include <linux/vfs.h> 38 - #include <linux/ptrace.h> 39 - #include <linux/swap.h> 40 14 #include <linux/syscalls.h> 41 15 42 - #include <asm/types.h> 43 - #include <asm/uaccess.h> 44 - #include <asm/mmu_context.h> 45 - 46 - #undef DEBUG 47 - 48 - #ifdef DEBUG 49 - #define DBG(x) printk x 50 - #else 51 - #define DBG(x) 52 - #endif 53 16 54 17 asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, 55 18 int r22, int r21, int r20) ··· 21 56 printk(KERN_ERR "%s(%d): Unimplemented 32 on 64 syscall #%d!\n", 22 57 current->comm, current->pid, r20); 23 58 return -ENOSYS; 59 + } 60 + 61 + asmlinkage long sys32_fanotify_mark(compat_int_t fanotify_fd, compat_uint_t flags, 62 + compat_uint_t mask0, compat_uint_t mask1, compat_int_t dfd, 63 + const char __user * pathname) 64 + { 65 + return sys_fanotify_mark(fanotify_fd, flags, 66 + ((__u64)mask1 << 32) | mask0, 67 + dfd, pathname); 24 68 }
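The new `sys32_fanotify_mark()` wrapper reassembles the 64-bit mask from two 32-bit registers, low word first. The bit manipulation in isolation:

```c
#include <stdint.h>

/* Same reassembly as sys32_fanotify_mark() above: a 64-bit syscall
 * argument from 32-bit userspace arrives split across two registers,
 * and the compat wrapper glues the halves back together before calling
 * the native syscall. mask0 is the low word, mask1 the high word. */
static uint64_t fanotify_merge_mask(uint32_t mask0, uint32_t mask1)
{
	return ((uint64_t)mask1 << 32) | mask0;
}
```

The cast before the shift matters: without it, `mask1 << 32` would shift a 32-bit value by its full width, which is undefined behavior in C.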
+1 -1
arch/parisc/kernel/syscall_table.S
··· 418 418 ENTRY_SAME(accept4) /* 320 */ 419 419 ENTRY_SAME(prlimit64) 420 420 ENTRY_SAME(fanotify_init) 421 - ENTRY_COMP(fanotify_mark) 421 + ENTRY_DIFF(fanotify_mark) 422 422 ENTRY_COMP(clock_adjtime) 423 423 ENTRY_SAME(name_to_handle_at) /* 325 */ 424 424 ENTRY_COMP(open_by_handle_at)
+2 -1
arch/powerpc/Kconfig
··· 414 414 config CRASH_DUMP 415 415 bool "Build a kdump crash kernel" 416 416 depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP) 417 - select RELOCATABLE if PPC64 || 44x || FSL_BOOKE 417 + select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE 418 418 help 419 419 Build a kernel suitable for use as a kdump capture kernel. 420 420 The same kernel binary can be used as production kernel and dump ··· 1017 1017 if PPC64 1018 1018 config RELOCATABLE 1019 1019 bool "Build a relocatable kernel" 1020 + depends on !COMPILE_TEST 1020 1021 select NONSTATIC_KERNEL 1021 1022 help 1022 1023 This builds a kernel image that is capable of running anywhere
+1 -9
arch/powerpc/include/asm/mmu.h
··· 19 19 #define MMU_FTR_TYPE_40x ASM_CONST(0x00000004) 20 20 #define MMU_FTR_TYPE_44x ASM_CONST(0x00000008) 21 21 #define MMU_FTR_TYPE_FSL_E ASM_CONST(0x00000010) 22 - #define MMU_FTR_TYPE_3E ASM_CONST(0x00000020) 23 - #define MMU_FTR_TYPE_47x ASM_CONST(0x00000040) 22 + #define MMU_FTR_TYPE_47x ASM_CONST(0x00000020) 24 23 25 24 /* 26 25 * This is individual features ··· 105 106 MMU_FTR_CI_LARGE_PAGE 106 107 #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ 107 108 MMU_FTR_CI_LARGE_PAGE | MMU_FTR_NO_SLBIE_B 108 - #define MMU_FTRS_A2 MMU_FTR_TYPE_3E | MMU_FTR_USE_TLBILX | \ 109 - MMU_FTR_USE_TLBIVAX_BCAST | \ 110 - MMU_FTR_LOCK_BCAST_INVAL | \ 111 - MMU_FTR_USE_TLBRSRV | \ 112 - MMU_FTR_USE_PAIRED_MAS | \ 113 - MMU_FTR_TLBIEL | \ 114 - MMU_FTR_16M_PAGE 115 109 #ifndef __ASSEMBLY__ 116 110 #include <asm/cputable.h> 117 111
+1 -2
arch/powerpc/include/asm/perf_event_server.h
··· 61 61 #define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */ 62 62 #define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */ 63 63 #define PPMU_HAS_SIER 0x00000040 /* Has SIER */ 64 - #define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */ 65 - #define PPMU_EBB 0x00000100 /* supports event based branch */ 64 + #define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */ 66 65 67 66 /* 68 67 * Values for flags to get_alternatives()
+1 -1
arch/powerpc/kernel/idle_power7.S
··· 131 131 132 132 _GLOBAL(power7_sleep) 133 133 li r3,1 134 - li r4,0 134 + li r4,1 135 135 b power7_powersave_common 136 136 /* No return */ 137 137
-5
arch/powerpc/kvm/book3s_hv_interrupts.S
··· 127 127 stw r10, HSTATE_PMC + 24(r13) 128 128 stw r11, HSTATE_PMC + 28(r13) 129 129 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201) 130 - BEGIN_FTR_SECTION 131 - mfspr r9, SPRN_SIER 132 - std r8, HSTATE_MMCR + 40(r13) 133 - std r9, HSTATE_MMCR + 48(r13) 134 - END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 135 130 31: 136 131 137 132 /*
+1 -11
arch/powerpc/mm/mmu_context_nohash.c
··· 410 410 } else if (mmu_has_feature(MMU_FTR_TYPE_47x)) { 411 411 first_context = 1; 412 412 last_context = 65535; 413 - } else 414 - #ifdef CONFIG_PPC_BOOK3E_MMU 415 - if (mmu_has_feature(MMU_FTR_TYPE_3E)) { 416 - u32 mmucfg = mfspr(SPRN_MMUCFG); 417 - u32 pid_bits = (mmucfg & MMUCFG_PIDSIZE_MASK) 418 - >> MMUCFG_PIDSIZE_SHIFT; 419 - first_context = 1; 420 - last_context = (1UL << (pid_bits + 1)) - 1; 421 - } else 422 - #endif 423 - { 413 + } else { 424 414 first_context = 1; 425 415 last_context = 255; 426 416 }
+22 -4
arch/powerpc/perf/core-book3s.c
··· 485 485 * check that the PMU supports EBB, meaning those that don't can still 486 486 * use bit 63 of the event code for something else if they wish. 487 487 */ 488 - return (ppmu->flags & PPMU_EBB) && 488 + return (ppmu->flags & PPMU_ARCH_207S) && 489 489 ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1); 490 490 } 491 491 ··· 777 777 if (ppmu->flags & PPMU_HAS_SIER) 778 778 sier = mfspr(SPRN_SIER); 779 779 780 - if (ppmu->flags & PPMU_EBB) { 780 + if (ppmu->flags & PPMU_ARCH_207S) { 781 781 pr_info("MMCR2: %016lx EBBHR: %016lx\n", 782 782 mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR)); 783 783 pr_info("EBBRR: %016lx BESCR: %016lx\n", ··· 996 996 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); 997 997 998 998 local64_add(delta, &event->count); 999 - local64_sub(delta, &event->hw.period_left); 999 + 1000 + /* 1001 + * A number of places program the PMC with (0x80000000 - period_left). 1002 + * We never want period_left to be less than 1 because we will program 1003 + * the PMC with a value >= 0x800000000 and an edge detected PMC will 1004 + * roll around to 0 before taking an exception. We have seen this 1005 + * on POWER8. 1006 + * 1007 + * To fix this, clamp the minimum value of period_left to 1. 1008 + */ 1009 + do { 1010 + prev = local64_read(&event->hw.period_left); 1011 + val = prev - delta; 1012 + if (val < 1) 1013 + val = 1; 1014 + } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev); 1000 1015 } 1001 1016 1002 1017 /* ··· 1314 1299 ppmu->config_bhrb(cpuhw->bhrb_filter); 1315 1300 1316 1301 write_mmcr0(cpuhw, mmcr0); 1302 + 1303 + if (ppmu->flags & PPMU_ARCH_207S) 1304 + mtspr(SPRN_MMCR2, 0); 1317 1305 1318 1306 /* 1319 1307 * Enable instruction sampling if necessary ··· 1714 1696 1715 1697 if (has_branch_stack(event)) { 1716 1698 /* PMU has BHRB enabled */ 1717 - if (!(ppmu->flags & PPMU_BHRB)) 1699 + if (!(ppmu->flags & PPMU_ARCH_207S)) 1718 1700 return -EOPNOTSUPP; 1719 1701 } 1720 1702
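The long comment added in core-book3s.c explains why `period_left` must never drop below 1: a PMC programmed with `0x80000000 - period_left` where `period_left <= 0` can roll past zero without raising an exception. The arithmetic done inside the patch's `local64_cmpxchg` loop, modeled single-threaded:

```c
/* Clamp step from the hunk above: subtract the observed delta from
 * period_left but never let the result fall below 1, so the PMC is
 * never programmed with a value >= 0x80000000 that an edge-detecting
 * counter (as seen on POWER8) would wrap past silently. */
static long long clamp_period_left(long long prev, long long delta)
{
	long long val = prev - delta;

	return val < 1 ? 1 : val;
}
```

In the kernel this runs inside a cmpxchg retry loop because NMI-context samplers can race with the update; the clamp itself is the part shown here.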
+1 -1
arch/powerpc/perf/power8-pmu.c
··· 792 792 .get_constraint = power8_get_constraint, 793 793 .get_alternatives = power8_get_alternatives, 794 794 .disable_pmc = power8_disable_pmc, 795 - .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB | PPMU_EBB, 795 + .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S, 796 796 .n_generic = ARRAY_SIZE(power8_generic_events), 797 797 .generic_events = power8_generic_events, 798 798 .cache_events = &power8_cache_events,
+2
arch/powerpc/platforms/cell/spu_syscalls.c
··· 111 111 return ret; 112 112 } 113 113 114 + #ifdef CONFIG_COREDUMP 114 115 int elf_coredump_extra_notes_size(void) 115 116 { 116 117 struct spufs_calls *calls; ··· 143 142 144 143 return ret; 145 144 } 145 + #endif 146 146 147 147 void notify_spus_active(void) 148 148 {
+2 -1
arch/powerpc/platforms/cell/spufs/Makefile
··· 1 1 2 2 obj-$(CONFIG_SPU_FS) += spufs.o 3 - spufs-y += inode.o file.o context.o syscalls.o coredump.o 3 + spufs-y += inode.o file.o context.o syscalls.o 4 4 spufs-y += sched.o backing_ops.o hw_ops.o run.o gang.o 5 5 spufs-y += switch.o fault.o lscsa_alloc.o 6 + spufs-$(CONFIG_COREDUMP) += coredump.o 6 7 7 8 # magic for the trace events 8 9 CFLAGS_sched.o := -I$(src)
+4 -2
arch/powerpc/platforms/cell/spufs/syscalls.c
··· 79 79 struct spufs_calls spufs_calls = { 80 80 .create_thread = do_spu_create, 81 81 .spu_run = do_spu_run, 82 - .coredump_extra_notes_size = spufs_coredump_extra_notes_size, 83 - .coredump_extra_notes_write = spufs_coredump_extra_notes_write, 84 82 .notify_spus_active = do_notify_spus_active, 85 83 .owner = THIS_MODULE, 84 + #ifdef CONFIG_COREDUMP 85 + .coredump_extra_notes_size = spufs_coredump_extra_notes_size, 86 + .coredump_extra_notes_write = spufs_coredump_extra_notes_write, 87 + #endif 86 88 };
+1 -1
arch/x86/crypto/sha512_ssse3_glue.c
··· 141 141 142 142 /* save number of bits */ 143 143 bits[1] = cpu_to_be64(sctx->count[0] << 3); 144 - bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61; 144 + bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); 145 145 146 146 /* Pad out to 112 mod 128 and append length */ 147 147 index = sctx->count[0] & 0x7f;
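The sha512_ssse3_glue.c change is a one-character precedence fix: `sctx->count[0] >> 61` moves inside `cpu_to_be64()`. On a little-endian CPU the three carry bits must be ORed into the high length word *before* the byte swap, or they land in the wrong byte. A sketch using the GCC/Clang builtin as a stand-in for `cpu_to_be64()` (assumed little-endian host):

```c
#include <stdint.h>

/* lo/hi are the two 64-bit halves of the message byte count; the high
 * word of the big-endian 128-bit *bit* count is (hi << 3 | lo >> 61). */
static uint64_t len_hi_buggy(uint64_t lo, uint64_t hi)
{
	return __builtin_bswap64(hi << 3) | lo >> 61;	/* old: carry ORed after swap */
}

static uint64_t len_hi_fixed(uint64_t lo, uint64_t hi)
{
	return __builtin_bswap64(hi << 3 | lo >> 61);	/* new: carry ORed before swap */
}
```

For messages shorter than 2^61 bytes the two agree (the carry is zero), which is why the bug only corrupts the padding of extremely long hash inputs.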
+3
arch/x86/vdso/vdso2c.h
··· 93 93 uint64_t flags = GET_LE(&in->sh_flags); 94 94 95 95 bool copy = flags & SHF_ALLOC && 96 + (GET_LE(&in->sh_size) || 97 + (GET_LE(&in->sh_type) != SHT_RELA && 98 + GET_LE(&in->sh_type) != SHT_REL)) && 96 99 strcmp(name, ".altinstructions") && 97 100 strcmp(name, ".altinstr_replacement"); 98 101
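The vdso2c.h hunk extends the section-copy predicate so that empty `SHT_REL`/`SHT_RELA` sections are dropped from the image. The decision logic, minus the `.altinstructions` name checks:

```c
#include <stdbool.h>
#include <stdint.h>

/* ELF constants as defined in <elf.h> */
#define SHF_ALLOC 0x2
#define SHT_RELA  4
#define SHT_REL   9

/* Copy an allocated section unless it is a relocation section that is
 * empty (size 0) -- the vdso carries no relocations after it is built,
 * so such sections are dead weight. */
static bool should_copy(uint64_t flags, uint64_t size, uint32_t type)
{
	return (flags & SHF_ALLOC) &&
	       (size || (type != SHT_RELA && type != SHT_REL));
}
```

Non-relocation sections are still copied even when empty, and unallocated sections are never copied, matching the combined predicate in the hunk.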
+4
arch/x86/vdso/vma.c
··· 62 62 Only used for the 64-bit and x32 vdsos. */ 63 63 static unsigned long vdso_addr(unsigned long start, unsigned len) 64 64 { 65 + #ifdef CONFIG_X86_32 66 + return 0; 67 + #else 65 68 unsigned long addr, end; 66 69 unsigned offset; 67 70 end = (start + PMD_SIZE - 1) & PMD_MASK; ··· 86 83 addr = align_vdso_addr(addr); 87 84 88 85 return addr; 86 + #endif 89 87 } 90 88 91 89 static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+129 -3
drivers/acpi/ac.c
··· 30 30 #include <linux/types.h> 31 31 #include <linux/dmi.h> 32 32 #include <linux/delay.h> 33 + #ifdef CONFIG_ACPI_PROCFS_POWER 34 + #include <linux/proc_fs.h> 35 + #include <linux/seq_file.h> 36 + #endif 33 37 #include <linux/platform_device.h> 34 38 #include <linux/power_supply.h> 35 39 #include <linux/acpi.h> ··· 56 52 MODULE_DESCRIPTION("ACPI AC Adapter Driver"); 57 53 MODULE_LICENSE("GPL"); 58 54 55 + 59 56 static int acpi_ac_add(struct acpi_device *device); 60 57 static int acpi_ac_remove(struct acpi_device *device); 61 58 static void acpi_ac_notify(struct acpi_device *device, u32 event); ··· 71 66 static int acpi_ac_resume(struct device *dev); 72 67 #endif 73 68 static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume); 69 + 70 + #ifdef CONFIG_ACPI_PROCFS_POWER 71 + extern struct proc_dir_entry *acpi_lock_ac_dir(void); 72 + extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir); 73 + static int acpi_ac_open_fs(struct inode *inode, struct file *file); 74 + #endif 75 + 74 76 75 77 static int ac_sleep_before_get_state_ms; 76 78 ··· 102 90 }; 103 91 104 92 #define to_acpi_ac(x) container_of(x, struct acpi_ac, charger) 93 + 94 + #ifdef CONFIG_ACPI_PROCFS_POWER 95 + static const struct file_operations acpi_ac_fops = { 96 + .owner = THIS_MODULE, 97 + .open = acpi_ac_open_fs, 98 + .read = seq_read, 99 + .llseek = seq_lseek, 100 + .release = single_release, 101 + }; 102 + #endif 105 103 106 104 /* -------------------------------------------------------------------------- 107 105 AC Adapter Management ··· 164 142 static enum power_supply_property ac_props[] = { 165 143 POWER_SUPPLY_PROP_ONLINE, 166 144 }; 145 + 146 + #ifdef CONFIG_ACPI_PROCFS_POWER 147 + /* -------------------------------------------------------------------------- 148 + FS Interface (/proc) 149 + -------------------------------------------------------------------------- */ 150 + 151 + static struct proc_dir_entry *acpi_ac_dir; 152 + 153 + static int acpi_ac_seq_show(struct seq_file 
*seq, void *offset) 154 + { 155 + struct acpi_ac *ac = seq->private; 156 + 157 + 158 + if (!ac) 159 + return 0; 160 + 161 + if (acpi_ac_get_state(ac)) { 162 + seq_puts(seq, "ERROR: Unable to read AC Adapter state\n"); 163 + return 0; 164 + } 165 + 166 + seq_puts(seq, "state: "); 167 + switch (ac->state) { 168 + case ACPI_AC_STATUS_OFFLINE: 169 + seq_puts(seq, "off-line\n"); 170 + break; 171 + case ACPI_AC_STATUS_ONLINE: 172 + seq_puts(seq, "on-line\n"); 173 + break; 174 + default: 175 + seq_puts(seq, "unknown\n"); 176 + break; 177 + } 178 + 179 + return 0; 180 + } 181 + 182 + static int acpi_ac_open_fs(struct inode *inode, struct file *file) 183 + { 184 + return single_open(file, acpi_ac_seq_show, PDE_DATA(inode)); 185 + } 186 + 187 + static int acpi_ac_add_fs(struct acpi_ac *ac) 188 + { 189 + struct proc_dir_entry *entry = NULL; 190 + 191 + printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded," 192 + " please retry with CONFIG_ACPI_PROCFS_POWER cleared\n"); 193 + if (!acpi_device_dir(ac->device)) { 194 + acpi_device_dir(ac->device) = 195 + proc_mkdir(acpi_device_bid(ac->device), acpi_ac_dir); 196 + if (!acpi_device_dir(ac->device)) 197 + return -ENODEV; 198 + } 199 + 200 + /* 'state' [R] */ 201 + entry = proc_create_data(ACPI_AC_FILE_STATE, 202 + S_IRUGO, acpi_device_dir(ac->device), 203 + &acpi_ac_fops, ac); 204 + if (!entry) 205 + return -ENODEV; 206 + return 0; 207 + } 208 + 209 + static int acpi_ac_remove_fs(struct acpi_ac *ac) 210 + { 211 + 212 + if (acpi_device_dir(ac->device)) { 213 + remove_proc_entry(ACPI_AC_FILE_STATE, 214 + acpi_device_dir(ac->device)); 215 + remove_proc_entry(acpi_device_bid(ac->device), acpi_ac_dir); 216 + acpi_device_dir(ac->device) = NULL; 217 + } 218 + 219 + return 0; 220 + } 221 + #endif 167 222 168 223 /* -------------------------------------------------------------------------- 169 224 Driver Model ··· 342 243 goto end; 343 244 344 245 ac->charger.name = acpi_device_bid(device); 246 + #ifdef 
CONFIG_ACPI_PROCFS_POWER 247 + result = acpi_ac_add_fs(ac); 248 + if (result) 249 + goto end; 250 + #endif 345 251 ac->charger.type = POWER_SUPPLY_TYPE_MAINS; 346 252 ac->charger.properties = ac_props; 347 253 ac->charger.num_properties = ARRAY_SIZE(ac_props); ··· 362 258 ac->battery_nb.notifier_call = acpi_ac_battery_notify; 363 259 register_acpi_notifier(&ac->battery_nb); 364 260 end: 365 - if (result) 261 + if (result) { 262 + #ifdef CONFIG_ACPI_PROCFS_POWER 263 + acpi_ac_remove_fs(ac); 264 + #endif 366 265 kfree(ac); 266 + } 367 267 368 268 dmi_check_system(ac_dmi_table); 369 269 return result; ··· 411 303 power_supply_unregister(&ac->charger); 412 304 unregister_acpi_notifier(&ac->battery_nb); 413 305 306 + #ifdef CONFIG_ACPI_PROCFS_POWER 307 + acpi_ac_remove_fs(ac); 308 + #endif 309 + 414 310 kfree(ac); 415 311 416 312 return 0; ··· 427 315 if (acpi_disabled) 428 316 return -ENODEV; 429 317 430 - result = acpi_bus_register_driver(&acpi_ac_driver); 431 - if (result < 0) 318 + #ifdef CONFIG_ACPI_PROCFS_POWER 319 + acpi_ac_dir = acpi_lock_ac_dir(); 320 + if (!acpi_ac_dir) 432 321 return -ENODEV; 322 + #endif 323 + 324 + 325 + result = acpi_bus_register_driver(&acpi_ac_driver); 326 + if (result < 0) { 327 + #ifdef CONFIG_ACPI_PROCFS_POWER 328 + acpi_unlock_ac_dir(acpi_ac_dir); 329 + #endif 330 + return -ENODEV; 331 + } 433 332 434 333 return 0; 435 334 } ··· 448 325 static void __exit acpi_ac_exit(void) 449 326 { 450 327 acpi_bus_unregister_driver(&acpi_ac_driver); 328 + #ifdef CONFIG_ACPI_PROCFS_POWER 329 + acpi_unlock_ac_dir(acpi_ac_dir); 330 + #endif 451 331 } 452 332 module_init(acpi_ac_init); 453 333 module_exit(acpi_ac_exit);
+2
drivers/acpi/acpi_pnp.c
··· 14 14 #include <linux/module.h> 15 15 16 16 static const struct acpi_device_id acpi_pnp_device_ids[] = { 17 + /* soc_button_array */ 18 + {"PNP0C40"}, 17 19 /* pata_isapnp */ 18 20 {"PNP0600"}, /* Generic ESDI/IDE/ATA compatible hard disk controller */ 19 21 /* floppy */
+40 -1
drivers/acpi/battery.c
··· 35 35 #include <linux/delay.h> 36 36 #include <linux/slab.h> 37 37 #include <linux/suspend.h> 38 + #include <linux/delay.h> 38 39 #include <asm/unaligned.h> 39 40 40 41 #ifdef CONFIG_ACPI_PROCFS_POWER ··· 533 532 battery->rate_now = abs((s16)battery->rate_now); 534 533 printk_once(KERN_WARNING FW_BUG "battery: (dis)charge rate" 535 534 " invalid.\n"); 535 + } 536 + 537 + /* 538 + * When fully charged, some batteries wrongly report 539 + * capacity_now = design_capacity instead of = full_charge_capacity 540 + */ 541 + if (battery->capacity_now > battery->full_charge_capacity 542 + && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) { 543 + battery->capacity_now = battery->full_charge_capacity; 544 + if (battery->capacity_now != battery->design_capacity) 545 + printk_once(KERN_WARNING FW_BUG 546 + "battery: reported current charge level (%d) " 547 + "is higher than reported maximum charge level (%d).\n", 548 + battery->capacity_now, battery->full_charge_capacity); 536 549 } 537 550 538 551 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) ··· 1166 1151 {}, 1167 1152 }; 1168 1153 1154 + /* 1155 + * Some machines' ECs (e.g. the Lenovo Z480's) are not stable 1156 + * during boot-up, which sometimes causes the battery driver 1157 + * probe to fail because battery information cannot be read 1158 + * from the EC. After several retries the operation may work, 1159 + * so add retry code here with a 20 ms sleep between 1160 + * consecutive attempts.
1161 + */ 1162 + static int acpi_battery_update_retry(struct acpi_battery *battery) 1163 + { 1164 + int retry, ret; 1165 + 1166 + for (retry = 5; retry; retry--) { 1167 + ret = acpi_battery_update(battery, false); 1168 + if (!ret) 1169 + break; 1170 + 1171 + msleep(20); 1172 + } 1173 + return ret; 1174 + } 1175 + 1169 1176 static int acpi_battery_add(struct acpi_device *device) 1170 1177 { 1171 1178 int result = 0; ··· 1206 1169 mutex_init(&battery->sysfs_lock); 1207 1170 if (acpi_has_method(battery->device->handle, "_BIX")) 1208 1171 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags); 1209 - result = acpi_battery_update(battery, false); 1172 + 1173 + result = acpi_battery_update_retry(battery); 1210 1174 if (result) 1211 1175 goto fail; 1176 + 1212 1177 #ifdef CONFIG_ACPI_PROCFS_POWER 1213 1178 result = acpi_battery_add_fs(device); 1214 1179 #endif
+85 -79
drivers/acpi/ec.c
··· 1 1 /* 2 - * ec.c - ACPI Embedded Controller Driver (v2.1) 2 + * ec.c - ACPI Embedded Controller Driver (v2.2) 3 3 * 4 - * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de> 5 - * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com> 6 - * Copyright (C) 2004 Luming Yu <luming.yu@intel.com> 7 - * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com> 8 - * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> 4 + * Copyright (C) 2001-2014 Intel Corporation 5 + * Author: 2014 Lv Zheng <lv.zheng@intel.com> 6 + * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> 7 + * 2006 Denis Sadykov <denis.m.sadykov@intel.com> 8 + * 2004 Luming Yu <luming.yu@intel.com> 9 + * 2001, 2002 Andy Grover <andrew.grover@intel.com> 10 + * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> 11 + * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de> 9 12 * 10 13 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 11 14 * ··· 55 52 /* EC status register */ 56 53 #define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */ 57 54 #define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */ 55 + #define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */ 58 56 #define ACPI_EC_FLAG_BURST 0x10 /* burst mode */ 59 57 #define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */ 60 58 ··· 81 77 * OpReg are installed */ 82 78 EC_FLAGS_BLOCKED, /* Transactions are blocked */ 83 79 }; 80 + 81 + #define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */ 82 + #define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */ 84 83 85 84 /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */ 86 85 static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY; ··· 116 109 u8 ri; 117 110 u8 wlen; 118 111 u8 rlen; 119 - bool done; 112 + u8 flags; 120 113 }; 121 114 122 115 struct acpi_ec *boot_ec, *first_ec; ··· 134 127 static inline u8 acpi_ec_read_status(struct acpi_ec *ec) 135 128 { 
136 129 u8 x = inb(ec->command_addr); 137 - pr_debug("---> status = 0x%2.2x\n", x); 130 + pr_debug("EC_SC(R) = 0x%2.2x " 131 + "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n", 132 + x, 133 + !!(x & ACPI_EC_FLAG_SCI), 134 + !!(x & ACPI_EC_FLAG_BURST), 135 + !!(x & ACPI_EC_FLAG_CMD), 136 + !!(x & ACPI_EC_FLAG_IBF), 137 + !!(x & ACPI_EC_FLAG_OBF)); 138 138 return x; 139 139 } 140 140 141 141 static inline u8 acpi_ec_read_data(struct acpi_ec *ec) 142 142 { 143 143 u8 x = inb(ec->data_addr); 144 - pr_debug("---> data = 0x%2.2x\n", x); 144 + pr_debug("EC_DATA(R) = 0x%2.2x\n", x); 145 145 return x; 146 146 } 147 147 148 148 static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command) 149 149 { 150 - pr_debug("<--- command = 0x%2.2x\n", command); 150 + pr_debug("EC_SC(W) = 0x%2.2x\n", command); 151 151 outb(command, ec->command_addr); 152 152 } 153 153 154 154 static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data) 155 155 { 156 - pr_debug("<--- data = 0x%2.2x\n", data); 156 + pr_debug("EC_DATA(W) = 0x%2.2x\n", data); 157 157 outb(data, ec->data_addr); 158 158 } 159 159 160 - static int ec_transaction_done(struct acpi_ec *ec) 160 + static int ec_transaction_completed(struct acpi_ec *ec) 161 161 { 162 162 unsigned long flags; 163 163 int ret = 0; 164 164 spin_lock_irqsave(&ec->lock, flags); 165 - if (!ec->curr || ec->curr->done) 165 + if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE)) 166 166 ret = 1; 167 167 spin_unlock_irqrestore(&ec->lock, flags); 168 168 return ret; 169 169 } 170 170 171 - static void start_transaction(struct acpi_ec *ec) 171 + static bool advance_transaction(struct acpi_ec *ec) 172 172 { 173 - ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 174 - ec->curr->done = false; 175 - acpi_ec_write_cmd(ec, ec->curr->command); 176 - } 177 - 178 - static void advance_transaction(struct acpi_ec *ec, u8 status) 179 - { 180 - unsigned long flags; 181 173 struct transaction *t; 174 + u8 status; 175 + bool wakeup = false; 182 176 183 
- spin_lock_irqsave(&ec->lock, flags); 177 + pr_debug("===== %s =====\n", in_interrupt() ? "IRQ" : "TASK"); 178 + status = acpi_ec_read_status(ec); 184 179 t = ec->curr; 185 180 if (!t) 186 - goto unlock; 187 - if (t->wlen > t->wi) { 188 - if ((status & ACPI_EC_FLAG_IBF) == 0) 189 - acpi_ec_write_data(ec, 190 - t->wdata[t->wi++]); 191 - else 192 - goto err; 193 - } else if (t->rlen > t->ri) { 194 - if ((status & ACPI_EC_FLAG_OBF) == 1) { 195 - t->rdata[t->ri++] = acpi_ec_read_data(ec); 196 - if (t->rlen == t->ri) 197 - t->done = true; 181 + goto err; 182 + if (t->flags & ACPI_EC_COMMAND_POLL) { 183 + if (t->wlen > t->wi) { 184 + if ((status & ACPI_EC_FLAG_IBF) == 0) 185 + acpi_ec_write_data(ec, t->wdata[t->wi++]); 186 + else 187 + goto err; 188 + } else if (t->rlen > t->ri) { 189 + if ((status & ACPI_EC_FLAG_OBF) == 1) { 190 + t->rdata[t->ri++] = acpi_ec_read_data(ec); 191 + if (t->rlen == t->ri) { 192 + t->flags |= ACPI_EC_COMMAND_COMPLETE; 193 + wakeup = true; 194 + } 195 + } else 196 + goto err; 197 + } else if (t->wlen == t->wi && 198 + (status & ACPI_EC_FLAG_IBF) == 0) { 199 + t->flags |= ACPI_EC_COMMAND_COMPLETE; 200 + wakeup = true; 201 + } 202 + return wakeup; 203 + } else { 204 + if ((status & ACPI_EC_FLAG_IBF) == 0) { 205 + acpi_ec_write_cmd(ec, t->command); 206 + t->flags |= ACPI_EC_COMMAND_POLL; 198 207 } else 199 208 goto err; 200 - } else if (t->wlen == t->wi && 201 - (status & ACPI_EC_FLAG_IBF) == 0) 202 - t->done = true; 203 - goto unlock; 209 + return wakeup; 210 + } 204 211 err: 205 212 /* 206 213 * If SCI bit is set, then don't think it's a false IRQ 207 214 * otherwise will take a not handled IRQ as a false one. 
208 215 */ 209 - if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI)) 210 - ++t->irq_count; 216 + if (!(status & ACPI_EC_FLAG_SCI)) { 217 + if (in_interrupt() && t) 218 + ++t->irq_count; 219 + } 220 + return wakeup; 221 + } 211 222 212 - unlock: 213 - spin_unlock_irqrestore(&ec->lock, flags); 223 + static void start_transaction(struct acpi_ec *ec) 224 + { 225 + ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 226 + ec->curr->flags = 0; 227 + (void)advance_transaction(ec); 214 228 } 215 229 216 230 static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data); ··· 256 228 /* don't sleep with disabled interrupts */ 257 229 if (EC_FLAGS_MSI || irqs_disabled()) { 258 230 udelay(ACPI_EC_MSI_UDELAY); 259 - if (ec_transaction_done(ec)) 231 + if (ec_transaction_completed(ec)) 260 232 return 0; 261 233 } else { 262 234 if (wait_event_timeout(ec->wait, 263 - ec_transaction_done(ec), 235 + ec_transaction_completed(ec), 264 236 msecs_to_jiffies(1))) 265 237 return 0; 266 238 } 267 - advance_transaction(ec, acpi_ec_read_status(ec)); 239 + spin_lock_irqsave(&ec->lock, flags); 240 + (void)advance_transaction(ec); 241 + spin_unlock_irqrestore(&ec->lock, flags); 268 242 } while (time_before(jiffies, delay)); 269 243 pr_debug("controller reset, restart transaction\n"); 270 244 spin_lock_irqsave(&ec->lock, flags); ··· 298 268 return ret; 299 269 } 300 270 301 - static int ec_check_ibf0(struct acpi_ec *ec) 302 - { 303 - u8 status = acpi_ec_read_status(ec); 304 - return (status & ACPI_EC_FLAG_IBF) == 0; 305 - } 306 - 307 - static int ec_wait_ibf0(struct acpi_ec *ec) 308 - { 309 - unsigned long delay = jiffies + msecs_to_jiffies(ec_delay); 310 - /* interrupt wait manually if GPE mode is not active */ 311 - while (time_before(jiffies, delay)) 312 - if (wait_event_timeout(ec->wait, ec_check_ibf0(ec), 313 - msecs_to_jiffies(1))) 314 - return 0; 315 - return -ETIME; 316 - } 317 - 318 271 static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) 319 272 { 320 273 int 
status; ··· 317 304 status = -ENODEV; 318 305 goto unlock; 319 306 } 320 - } 321 - if (ec_wait_ibf0(ec)) { 322 - pr_err("input buffer is not empty, " 323 - "aborting transaction\n"); 324 - status = -ETIME; 325 - goto end; 326 307 } 327 308 pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n", 328 309 t->command, t->wdata ? t->wdata[0] : 0); ··· 341 334 set_bit(EC_FLAGS_GPE_STORM, &ec->flags); 342 335 } 343 336 pr_debug("transaction end\n"); 344 - end: 345 337 if (ec->global_lock) 346 338 acpi_release_global_lock(glk); 347 339 unlock: ··· 640 634 static u32 acpi_ec_gpe_handler(acpi_handle gpe_device, 641 635 u32 gpe_number, void *data) 642 636 { 637 + unsigned long flags; 643 638 struct acpi_ec *ec = data; 644 - u8 status = acpi_ec_read_status(ec); 645 639 646 - pr_debug("~~~> interrupt, status:0x%02x\n", status); 647 - 648 - advance_transaction(ec, status); 649 - if (ec_transaction_done(ec) && 650 - (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) { 640 + spin_lock_irqsave(&ec->lock, flags); 641 + if (advance_transaction(ec)) 651 642 wake_up(&ec->wait); 652 - ec_check_sci(ec, acpi_ec_read_status(ec)); 653 - } 643 + spin_unlock_irqrestore(&ec->lock, flags); 644 + ec_check_sci(ec, acpi_ec_read_status(ec)); 654 645 return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE; 655 646 } 656 647 ··· 1069 1066 /* fall through */ 1070 1067 } 1071 1068 1072 - if (EC_FLAGS_SKIP_DSDT_SCAN) 1069 + if (EC_FLAGS_SKIP_DSDT_SCAN) { 1070 + kfree(saved_ec); 1073 1071 return -ENODEV; 1072 + } 1074 1073 1075 1074 /* This workaround is needed only on some broken machines, 1076 1075 * which require early EC, but fail to provide ECDT */ ··· 1110 1105 } 1111 1106 error: 1112 1107 kfree(boot_ec); 1108 + kfree(saved_ec); 1113 1109 boot_ec = NULL; 1114 1110 return -ENODEV; 1115 1111 }
+5 -5
drivers/acpi/resource.c
··· 77 77 switch (ares->type) { 78 78 case ACPI_RESOURCE_TYPE_MEMORY24: 79 79 memory24 = &ares->data.memory24; 80 - if (!memory24->address_length) 80 + if (!memory24->minimum && !memory24->address_length) 81 81 return false; 82 82 acpi_dev_get_memresource(res, memory24->minimum, 83 83 memory24->address_length, ··· 85 85 break; 86 86 case ACPI_RESOURCE_TYPE_MEMORY32: 87 87 memory32 = &ares->data.memory32; 88 - if (!memory32->address_length) 88 + if (!memory32->minimum && !memory32->address_length) 89 89 return false; 90 90 acpi_dev_get_memresource(res, memory32->minimum, 91 91 memory32->address_length, ··· 93 93 break; 94 94 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 95 95 fixed_memory32 = &ares->data.fixed_memory32; 96 - if (!fixed_memory32->address_length) 96 + if (!fixed_memory32->address && !fixed_memory32->address_length) 97 97 return false; 98 98 acpi_dev_get_memresource(res, fixed_memory32->address, 99 99 fixed_memory32->address_length, ··· 150 150 switch (ares->type) { 151 151 case ACPI_RESOURCE_TYPE_IO: 152 152 io = &ares->data.io; 153 - if (!io->address_length) 153 + if (!io->minimum && !io->address_length) 154 154 return false; 155 155 acpi_dev_get_ioresource(res, io->minimum, 156 156 io->address_length, ··· 158 158 break; 159 159 case ACPI_RESOURCE_TYPE_FIXED_IO: 160 160 fixed_io = &ares->data.fixed_io; 161 - if (!fixed_io->address_length) 161 + if (!fixed_io->address && !fixed_io->address_length) 162 162 return false; 163 163 acpi_dev_get_ioresource(res, fixed_io->address, 164 164 fixed_io->address_length,
+10 -1
drivers/acpi/video.c
··· 241 241 return use_native_backlight_dmi; 242 242 } 243 243 244 - static bool acpi_video_verify_backlight_support(void) 244 + bool acpi_video_verify_backlight_support(void) 245 245 { 246 246 if (acpi_osi_is_win8() && acpi_video_use_native_backlight() && 247 247 backlight_device_registered(BACKLIGHT_RAW)) 248 248 return false; 249 249 return acpi_video_backlight_support(); 250 250 } 251 + EXPORT_SYMBOL_GPL(acpi_video_verify_backlight_support); 251 252 252 253 /* backlight device sysfs support */ 253 254 static int acpi_video_get_brightness(struct backlight_device *bd) ··· 561 560 .matches = { 562 561 DMI_MATCH(DMI_BOARD_VENDOR, "Acer"), 563 562 DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-471G"), 563 + }, 564 + }, 565 + { 566 + .callback = video_set_use_native_backlight, 567 + .ident = "Acer TravelMate B113", 568 + .matches = { 569 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 570 + DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"), 564 571 }, 565 572 }, 566 573 {
+8
drivers/acpi/video_detect.c
··· 166 166 DMI_MATCH(DMI_PRODUCT_NAME, "UL30A"), 167 167 }, 168 168 }, 169 + { 170 + .callback = video_detect_force_vendor, 171 + .ident = "Dell Inspiron 5737", 172 + .matches = { 173 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 174 + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"), 175 + }, 176 + }, 169 177 { }, 170 178 }; 171 179
+2
drivers/ata/ahci.h
··· 371 371 int pmp, unsigned long deadline, 372 372 int (*check_ready)(struct ata_link *link)); 373 373 374 + unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); 374 375 int ahci_stop_engine(struct ata_port *ap); 376 + void ahci_start_fis_rx(struct ata_port *ap); 375 377 void ahci_start_engine(struct ata_port *ap); 376 378 int ahci_check_ready(struct ata_link *link); 377 379 int ahci_kick_engine(struct ata_port *ap);
+34 -4
drivers/ata/ahci_imx.c
··· 58 58 struct imx_ahci_priv { 59 59 struct platform_device *ahci_pdev; 60 60 enum ahci_imx_type type; 61 + struct clk *sata_clk; 62 + struct clk *sata_ref_clk; 61 63 struct clk *ahb_clk; 62 64 struct regmap *gpr; 63 65 bool no_device; ··· 226 224 return ret; 227 225 } 228 226 229 - ret = ahci_platform_enable_clks(hpriv); 227 + ret = clk_prepare_enable(imxpriv->sata_ref_clk); 230 228 if (ret < 0) 231 229 goto disable_regulator; 232 230 ··· 293 291 !IMX6Q_GPR13_SATA_MPLL_CLK_EN); 294 292 } 295 293 296 - ahci_platform_disable_clks(hpriv); 294 + clk_disable_unprepare(imxpriv->sata_ref_clk); 297 295 298 296 if (hpriv->target_pwr) 299 297 regulator_disable(hpriv->target_pwr); ··· 326 324 writel(reg_val | IMX_P0PHYCR_TEST_PDDQ, mmio + IMX_P0PHYCR); 327 325 imx_sata_disable(hpriv); 328 326 imxpriv->no_device = true; 327 + 328 + dev_info(ap->dev, "no device found, disabling link.\n"); 329 + dev_info(ap->dev, "pass " MODULE_PARAM_PREFIX ".hotplug=1 to enable hotplug\n"); 329 330 } 330 331 331 332 static int ahci_imx_softreset(struct ata_link *link, unsigned int *class, ··· 390 385 imxpriv->no_device = false; 391 386 imxpriv->first_time = true; 392 387 imxpriv->type = (enum ahci_imx_type)of_id->data; 388 + 389 + imxpriv->sata_clk = devm_clk_get(dev, "sata"); 390 + if (IS_ERR(imxpriv->sata_clk)) { 391 + dev_err(dev, "can't get sata clock.\n"); 392 + return PTR_ERR(imxpriv->sata_clk); 393 + } 394 + 395 + imxpriv->sata_ref_clk = devm_clk_get(dev, "sata_ref"); 396 + if (IS_ERR(imxpriv->sata_ref_clk)) { 397 + dev_err(dev, "can't get sata_ref clock.\n"); 398 + return PTR_ERR(imxpriv->sata_ref_clk); 399 + } 400 + 393 401 imxpriv->ahb_clk = devm_clk_get(dev, "ahb"); 394 402 if (IS_ERR(imxpriv->ahb_clk)) { 395 403 dev_err(dev, "can't get ahb clock.\n"); ··· 425 407 426 408 hpriv->plat_data = imxpriv; 427 409 428 - ret = imx_sata_enable(hpriv); 410 + ret = clk_prepare_enable(imxpriv->sata_clk); 429 411 if (ret) 430 412 return ret; 413 + 414 + ret = imx_sata_enable(hpriv); 415 + if 
(ret) 416 + goto disable_clk; 431 417 432 418 /* 433 419 * Configure the HWINIT bits of the HOST_CAP and HOST_PORTS_IMPL, ··· 457 435 ret = ahci_platform_init_host(pdev, hpriv, &ahci_imx_port_info, 458 436 0, 0, 0); 459 437 if (ret) 460 - imx_sata_disable(hpriv); 438 + goto disable_sata; 461 439 440 + return 0; 441 + 442 + disable_sata: 443 + imx_sata_disable(hpriv); 444 + disable_clk: 445 + clk_disable_unprepare(imxpriv->sata_clk); 462 446 return ret; 463 447 } 464 448 465 449 static void ahci_imx_host_stop(struct ata_host *host) 466 450 { 467 451 struct ahci_host_priv *hpriv = host->private_data; 452 + struct imx_ahci_priv *imxpriv = hpriv->plat_data; 468 453 469 454 imx_sata_disable(hpriv); 455 + clk_disable_unprepare(imxpriv->sata_clk); 470 456 } 471 457 472 458 #ifdef CONFIG_PM_SLEEP
+1 -1
drivers/ata/ahci_platform.c
··· 58 58 } 59 59 60 60 if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci")) 61 - hflags |= AHCI_HFLAG_NO_FBS; 61 + hflags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ; 62 62 63 63 rc = ahci_platform_init_host(pdev, hpriv, &ahci_port_info, 64 64 hflags, 0, 0);
+47 -13
drivers/ata/ahci_xgene.c
··· 78 78 struct xgene_ahci_context { 79 79 struct ahci_host_priv *hpriv; 80 80 struct device *dev; 81 + u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued */ 81 82 void __iomem *csr_core; /* Core CSR address of IP */ 82 83 void __iomem *csr_diag; /* Diag CSR address of IP */ 83 84 void __iomem *csr_axi; /* AXI CSR address of IP */ ··· 99 98 } 100 99 101 100 /** 101 + * xgene_ahci_restart_engine - Restart the dma engine. 102 + * @ap : ATA port of interest 103 + * 104 + * Restarts the dma engine inside the controller. 105 + */ 106 + static int xgene_ahci_restart_engine(struct ata_port *ap) 107 + { 108 + struct ahci_host_priv *hpriv = ap->host->private_data; 109 + 110 + ahci_stop_engine(ap); 111 + ahci_start_fis_rx(ap); 112 + hpriv->start_engine(ap); 113 + 114 + return 0; 115 + } 116 + 117 + /** 118 + * xgene_ahci_qc_issue - Issue commands to the device 119 + * @qc: Command to issue 120 + * 121 + * Due to a hardware erratum, after an IDENTIFY DEVICE command the controller 122 + * cannot clear the BSY bit after receiving the PIO setup FIS. This makes the 123 + * dma state machine go into the CMFatalErrorUpdate state and lock up. 124 + * Restarting the dma engine brings the controller out of the lockup state.
125 + */ 126 + static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc) 127 + { 128 + struct ata_port *ap = qc->ap; 129 + struct ahci_host_priv *hpriv = ap->host->private_data; 130 + struct xgene_ahci_context *ctx = hpriv->plat_data; 131 + int rc = 0; 132 + 133 + if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA)) 134 + xgene_ahci_restart_engine(ap); 135 + 136 + rc = ahci_qc_issue(qc); 137 + 138 + /* Save the last command issued */ 139 + ctx->last_cmd[ap->port_no] = qc->tf.command; 140 + 141 + return rc; 142 + } 143 + 144 + /** 102 145 * xgene_ahci_read_id - Read ID data from the specified device 103 146 * @dev: device 104 147 * @tf: proposed taskfile 105 148 * @id: data buffer 106 149 * 107 150 * This custom read ID function is required due to the fact that the HW 108 - * does not support DEVSLP and the controller state machine may get stuck 109 - * after processing the ID query command. 151 + * does not support DEVSLP. 110 152 */ 111 153 static unsigned int xgene_ahci_read_id(struct ata_device *dev, 112 154 struct ata_taskfile *tf, u16 *id) 113 155 { 114 156 u32 err_mask; 115 - void __iomem *port_mmio = ahci_port_base(dev->link->ap); 116 157 117 158 err_mask = ata_do_dev_read_id(dev, tf, id); 118 159 if (err_mask) ··· 176 133 */ 177 134 id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8); 178 135 179 - /* 180 - * Due to HW errata, restart the port if no other command active. 181 - * Otherwise the controller may get stuck. 
182 - */ 183 - if (!readl(port_mmio + PORT_CMD_ISSUE)) { 184 - writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD); 185 - readl(port_mmio + PORT_CMD); /* Force a barrier */ 186 - writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD); 187 - readl(port_mmio + PORT_CMD); /* Force a barrier */ 188 - } 189 136 return 0; 190 137 } 191 138 ··· 333 300 .host_stop = xgene_ahci_host_stop, 334 301 .hardreset = xgene_ahci_hardreset, 335 302 .read_id = xgene_ahci_read_id, 303 + .qc_issue = xgene_ahci_qc_issue, 336 304 }; 337 305 338 306 static const struct ata_port_info xgene_ahci_port_info = {
+4 -3
drivers/ata/libahci.c
··· 68 68 69 69 static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val); 70 70 static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val); 71 - static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); 72 71 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc); 73 72 static int ahci_port_start(struct ata_port *ap); 74 73 static void ahci_port_stop(struct ata_port *ap); ··· 619 620 } 620 621 EXPORT_SYMBOL_GPL(ahci_stop_engine); 621 622 622 - static void ahci_start_fis_rx(struct ata_port *ap) 623 + void ahci_start_fis_rx(struct ata_port *ap) 623 624 { 624 625 void __iomem *port_mmio = ahci_port_base(ap); 625 626 struct ahci_host_priv *hpriv = ap->host->private_data; ··· 645 646 /* flush */ 646 647 readl(port_mmio + PORT_CMD); 647 648 } 649 + EXPORT_SYMBOL_GPL(ahci_start_fis_rx); 648 650 649 651 static int ahci_stop_fis_rx(struct ata_port *ap) 650 652 { ··· 1945 1945 } 1946 1946 EXPORT_SYMBOL_GPL(ahci_interrupt); 1947 1947 1948 - static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) 1948 + unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) 1949 1949 { 1950 1950 struct ata_port *ap = qc->ap; 1951 1951 void __iomem *port_mmio = ahci_port_base(ap); ··· 1974 1974 1975 1975 return 0; 1976 1976 } 1977 + EXPORT_SYMBOL_GPL(ahci_qc_issue); 1977 1978 1978 1979 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc) 1979 1980 {
+6 -1
drivers/ata/libahci_platform.c
··· 250 250 if (IS_ERR(hpriv->phy)) { 251 251 rc = PTR_ERR(hpriv->phy); 252 252 switch (rc) { 253 - case -ENODEV: 254 253 case -ENOSYS: 254 + /* No PHY support. Check if PHY is required. */ 255 + if (of_find_property(dev->of_node, "phys", NULL)) { 256 + dev_err(dev, "couldn't get sata-phy: ENOSYS\n"); 257 + goto err_out; 258 + } 259 + case -ENODEV: 255 260 /* continue normally */ 256 261 hpriv->phy = NULL; 257 262 break;
+3 -1
drivers/char/i8k.c
··· 138 138 if (!alloc_cpumask_var(&old_mask, GFP_KERNEL)) 139 139 return -ENOMEM; 140 140 cpumask_copy(old_mask, &current->cpus_allowed); 141 - set_cpus_allowed_ptr(current, cpumask_of(0)); 141 + rc = set_cpus_allowed_ptr(current, cpumask_of(0)); 142 + if (rc) 143 + goto out; 142 144 if (smp_processor_id() != 0) { 143 145 rc = -EBUSY; 144 146 goto out;
+2 -5
drivers/clk/clk-s2mps11.c
··· 230 230 goto err_reg; 231 231 } 232 232 233 - s2mps11_clk->lookup = devm_kzalloc(&pdev->dev, 234 - sizeof(struct clk_lookup), GFP_KERNEL); 233 + s2mps11_clk->lookup = clkdev_alloc(s2mps11_clk->clk, 234 + s2mps11_name(s2mps11_clk), NULL); 235 235 if (!s2mps11_clk->lookup) { 236 236 ret = -ENOMEM; 237 237 goto err_lup; 238 238 } 239 - 240 - s2mps11_clk->lookup->con_id = s2mps11_name(s2mps11_clk); 241 - s2mps11_clk->lookup->clk = s2mps11_clk->clk; 242 239 243 240 clkdev_add(s2mps11_clk->lookup); 244 241 }
+1 -1
drivers/clk/qcom/mmcc-msm8960.c
··· 1209 1209 1210 1210 static u8 mmcc_pxo_hdmi_map[] = { 1211 1211 [P_PXO] = 0, 1212 - [P_HDMI_PLL] = 2, 1212 + [P_HDMI_PLL] = 3, 1213 1213 }; 1214 1214 1215 1215 static const char *mmcc_pxo_hdmi[] = {
+4 -12
drivers/clk/samsung/clk-exynos4.c
··· 925 925 GATE(CLK_RTC, "rtc", "aclk100", E4X12_GATE_IP_PERIR, 15, 926 926 0, 0), 927 927 GATE(CLK_KEYIF, "keyif", "aclk100", E4X12_GATE_IP_PERIR, 16, 0, 0), 928 - GATE(CLK_SCLK_PWM_ISP, "sclk_pwm_isp", "div_pwm_isp", 929 - E4X12_SRC_MASK_ISP, 0, CLK_SET_RATE_PARENT, 0), 930 - GATE(CLK_SCLK_SPI0_ISP, "sclk_spi0_isp", "div_spi0_isp_pre", 931 - E4X12_SRC_MASK_ISP, 4, CLK_SET_RATE_PARENT, 0), 932 - GATE(CLK_SCLK_SPI1_ISP, "sclk_spi1_isp", "div_spi1_isp_pre", 933 - E4X12_SRC_MASK_ISP, 8, CLK_SET_RATE_PARENT, 0), 934 - GATE(CLK_SCLK_UART_ISP, "sclk_uart_isp", "div_uart_isp", 935 - E4X12_SRC_MASK_ISP, 12, CLK_SET_RATE_PARENT, 0), 936 - GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "sclk_pwm_isp", 928 + GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "div_pwm_isp", 937 929 E4X12_GATE_IP_ISP, 0, 0, 0), 938 - GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "sclk_spi0_isp", 930 + GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "div_spi0_isp_pre", 939 931 E4X12_GATE_IP_ISP, 1, 0, 0), 940 - GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "sclk_spi1_isp", 932 + GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "div_spi1_isp_pre", 941 933 E4X12_GATE_IP_ISP, 2, 0, 0), 942 - GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "sclk_uart_isp", 934 + GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "div_uart_isp", 943 935 E4X12_GATE_IP_ISP, 3, 0, 0), 944 936 GATE(CLK_WDT, "watchdog", "aclk100", E4X12_GATE_IP_PERIR, 14, 0, 0), 945 937 GATE(CLK_PCM0, "pcm0", "aclk100", E4X12_GATE_IP_MAUDIO, 2,
+1 -1
drivers/clk/samsung/clk-exynos5250.c
··· 661 661 GATE(CLK_RTC, "rtc", "div_aclk66", GATE_IP_PERIS, 20, 0, 0), 662 662 GATE(CLK_TMU, "tmu", "div_aclk66", GATE_IP_PERIS, 21, 0, 0), 663 663 GATE(CLK_SMMU_TV, "smmu_tv", "mout_aclk200_disp1_sub", 664 - GATE_IP_DISP1, 2, 0, 0), 664 + GATE_IP_DISP1, 9, 0, 0), 665 665 GATE(CLK_SMMU_FIMD1, "smmu_fimd1", "mout_aclk200_disp1_sub", 666 666 GATE_IP_DISP1, 8, 0, 0), 667 667 GATE(CLK_SMMU_2D, "smmu_2d", "div_aclk200", GATE_IP_ACP, 7, 0, 0),
+58 -31
drivers/clk/samsung/clk-exynos5420.c
··· 631 631 SRC_TOP4, 16, 1), 632 632 MUX(0, "mout_user_aclk266", mout_user_aclk266_p, SRC_TOP4, 20, 1), 633 633 MUX(0, "mout_user_aclk166", mout_user_aclk166_p, SRC_TOP4, 24, 1), 634 - MUX(0, "mout_user_aclk333", mout_user_aclk333_p, SRC_TOP4, 28, 1), 634 + MUX(CLK_MOUT_USER_ACLK333, "mout_user_aclk333", mout_user_aclk333_p, 635 + SRC_TOP4, 28, 1), 635 636 636 637 MUX(0, "mout_user_aclk400_disp1", mout_user_aclk400_disp1_p, 637 638 SRC_TOP5, 0, 1), ··· 685 684 SRC_TOP11, 12, 1), 686 685 MUX(0, "mout_sw_aclk266", mout_sw_aclk266_p, SRC_TOP11, 20, 1), 687 686 MUX(0, "mout_sw_aclk166", mout_sw_aclk166_p, SRC_TOP11, 24, 1), 688 - MUX(0, "mout_sw_aclk333", mout_sw_aclk333_p, SRC_TOP11, 28, 1), 687 + MUX(CLK_MOUT_SW_ACLK333, "mout_sw_aclk333", mout_sw_aclk333_p, 688 + SRC_TOP11, 28, 1), 689 689 690 690 MUX(0, "mout_sw_aclk400_disp1", mout_sw_aclk400_disp1_p, 691 691 SRC_TOP12, 4, 1), ··· 892 890 GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0), 893 891 GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen", 894 892 GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0), 895 - GATE(CLK_ACLK66_PERIC, "aclk66_peric", "mout_user_aclk66_peric", 896 - GATE_BUS_TOP, 11, CLK_IGNORE_UNUSED, 0), 897 893 GATE(0, "aclk266_isp", "mout_user_aclk266_isp", 898 894 GATE_BUS_TOP, 13, 0, 0), 899 895 GATE(0, "aclk166", "mout_user_aclk166", ··· 994 994 SRC_MASK_FSYS, 24, CLK_SET_RATE_PARENT, 0), 995 995 996 996 /* PERIC Block */ 997 - GATE(CLK_UART0, "uart0", "aclk66_peric", GATE_IP_PERIC, 0, 0, 0), 998 - GATE(CLK_UART1, "uart1", "aclk66_peric", GATE_IP_PERIC, 1, 0, 0), 999 - GATE(CLK_UART2, "uart2", "aclk66_peric", GATE_IP_PERIC, 2, 0, 0), 1000 - GATE(CLK_UART3, "uart3", "aclk66_peric", GATE_IP_PERIC, 3, 0, 0), 1001 - GATE(CLK_I2C0, "i2c0", "aclk66_peric", GATE_IP_PERIC, 6, 0, 0), 1002 - GATE(CLK_I2C1, "i2c1", "aclk66_peric", GATE_IP_PERIC, 7, 0, 0), 1003 - GATE(CLK_I2C2, "i2c2", "aclk66_peric", GATE_IP_PERIC, 8, 0, 0), 1004 - GATE(CLK_I2C3, "i2c3", "aclk66_peric", GATE_IP_PERIC, 9, 0, 0), 1005 - GATE(CLK_USI0, "usi0", 
"aclk66_peric", GATE_IP_PERIC, 10, 0, 0), 1006 - GATE(CLK_USI1, "usi1", "aclk66_peric", GATE_IP_PERIC, 11, 0, 0), 1007 - GATE(CLK_USI2, "usi2", "aclk66_peric", GATE_IP_PERIC, 12, 0, 0), 1008 - GATE(CLK_USI3, "usi3", "aclk66_peric", GATE_IP_PERIC, 13, 0, 0), 1009 - GATE(CLK_I2C_HDMI, "i2c_hdmi", "aclk66_peric", GATE_IP_PERIC, 14, 0, 0), 1010 - GATE(CLK_TSADC, "tsadc", "aclk66_peric", GATE_IP_PERIC, 15, 0, 0), 1011 - GATE(CLK_SPI0, "spi0", "aclk66_peric", GATE_IP_PERIC, 16, 0, 0), 1012 - GATE(CLK_SPI1, "spi1", "aclk66_peric", GATE_IP_PERIC, 17, 0, 0), 1013 - GATE(CLK_SPI2, "spi2", "aclk66_peric", GATE_IP_PERIC, 18, 0, 0), 1014 - GATE(CLK_I2S1, "i2s1", "aclk66_peric", GATE_IP_PERIC, 20, 0, 0), 1015 - GATE(CLK_I2S2, "i2s2", "aclk66_peric", GATE_IP_PERIC, 21, 0, 0), 1016 - GATE(CLK_PCM1, "pcm1", "aclk66_peric", GATE_IP_PERIC, 22, 0, 0), 1017 - GATE(CLK_PCM2, "pcm2", "aclk66_peric", GATE_IP_PERIC, 23, 0, 0), 1018 - GATE(CLK_PWM, "pwm", "aclk66_peric", GATE_IP_PERIC, 24, 0, 0), 1019 - GATE(CLK_SPDIF, "spdif", "aclk66_peric", GATE_IP_PERIC, 26, 0, 0), 1020 - GATE(CLK_USI4, "usi4", "aclk66_peric", GATE_IP_PERIC, 28, 0, 0), 1021 - GATE(CLK_USI5, "usi5", "aclk66_peric", GATE_IP_PERIC, 30, 0, 0), 1022 - GATE(CLK_USI6, "usi6", "aclk66_peric", GATE_IP_PERIC, 31, 0, 0), 997 + GATE(CLK_UART0, "uart0", "mout_user_aclk66_peric", 998 + GATE_IP_PERIC, 0, 0, 0), 999 + GATE(CLK_UART1, "uart1", "mout_user_aclk66_peric", 1000 + GATE_IP_PERIC, 1, 0, 0), 1001 + GATE(CLK_UART2, "uart2", "mout_user_aclk66_peric", 1002 + GATE_IP_PERIC, 2, 0, 0), 1003 + GATE(CLK_UART3, "uart3", "mout_user_aclk66_peric", 1004 + GATE_IP_PERIC, 3, 0, 0), 1005 + GATE(CLK_I2C0, "i2c0", "mout_user_aclk66_peric", 1006 + GATE_IP_PERIC, 6, 0, 0), 1007 + GATE(CLK_I2C1, "i2c1", "mout_user_aclk66_peric", 1008 + GATE_IP_PERIC, 7, 0, 0), 1009 + GATE(CLK_I2C2, "i2c2", "mout_user_aclk66_peric", 1010 + GATE_IP_PERIC, 8, 0, 0), 1011 + GATE(CLK_I2C3, "i2c3", "mout_user_aclk66_peric", 1012 + GATE_IP_PERIC, 9, 0, 0), 1013 + 
GATE(CLK_USI0, "usi0", "mout_user_aclk66_peric", 1014 + GATE_IP_PERIC, 10, 0, 0), 1015 + GATE(CLK_USI1, "usi1", "mout_user_aclk66_peric", 1016 + GATE_IP_PERIC, 11, 0, 0), 1017 + GATE(CLK_USI2, "usi2", "mout_user_aclk66_peric", 1018 + GATE_IP_PERIC, 12, 0, 0), 1019 + GATE(CLK_USI3, "usi3", "mout_user_aclk66_peric", 1020 + GATE_IP_PERIC, 13, 0, 0), 1021 + GATE(CLK_I2C_HDMI, "i2c_hdmi", "mout_user_aclk66_peric", 1022 + GATE_IP_PERIC, 14, 0, 0), 1023 + GATE(CLK_TSADC, "tsadc", "mout_user_aclk66_peric", 1024 + GATE_IP_PERIC, 15, 0, 0), 1025 + GATE(CLK_SPI0, "spi0", "mout_user_aclk66_peric", 1026 + GATE_IP_PERIC, 16, 0, 0), 1027 + GATE(CLK_SPI1, "spi1", "mout_user_aclk66_peric", 1028 + GATE_IP_PERIC, 17, 0, 0), 1029 + GATE(CLK_SPI2, "spi2", "mout_user_aclk66_peric", 1030 + GATE_IP_PERIC, 18, 0, 0), 1031 + GATE(CLK_I2S1, "i2s1", "mout_user_aclk66_peric", 1032 + GATE_IP_PERIC, 20, 0, 0), 1033 + GATE(CLK_I2S2, "i2s2", "mout_user_aclk66_peric", 1034 + GATE_IP_PERIC, 21, 0, 0), 1035 + GATE(CLK_PCM1, "pcm1", "mout_user_aclk66_peric", 1036 + GATE_IP_PERIC, 22, 0, 0), 1037 + GATE(CLK_PCM2, "pcm2", "mout_user_aclk66_peric", 1038 + GATE_IP_PERIC, 23, 0, 0), 1039 + GATE(CLK_PWM, "pwm", "mout_user_aclk66_peric", 1040 + GATE_IP_PERIC, 24, 0, 0), 1041 + GATE(CLK_SPDIF, "spdif", "mout_user_aclk66_peric", 1042 + GATE_IP_PERIC, 26, 0, 0), 1043 + GATE(CLK_USI4, "usi4", "mout_user_aclk66_peric", 1044 + GATE_IP_PERIC, 28, 0, 0), 1045 + GATE(CLK_USI5, "usi5", "mout_user_aclk66_peric", 1046 + GATE_IP_PERIC, 30, 0, 0), 1047 + GATE(CLK_USI6, "usi6", "mout_user_aclk66_peric", 1048 + GATE_IP_PERIC, 31, 0, 0), 1023 1049 1024 - GATE(CLK_KEYIF, "keyif", "aclk66_peric", GATE_BUS_PERIC, 22, 0, 0), 1050 + GATE(CLK_KEYIF, "keyif", "mout_user_aclk66_peric", 1051 + GATE_BUS_PERIC, 22, 0, 0), 1025 1052 1026 1053 /* PERIS Block */ 1027 1054 GATE(CLK_CHIPID, "chipid", "aclk66_psgen",
+7 -2
drivers/clk/samsung/clk-s3c2410.c
··· 152 152 ALIAS(HCLK, NULL, "hclk"), 153 153 ALIAS(MPLL, NULL, "mpll"), 154 154 ALIAS(FCLK, NULL, "fclk"), 155 + ALIAS(PCLK, NULL, "watchdog"), 156 + ALIAS(PCLK_SDI, NULL, "sdi"), 157 + ALIAS(HCLK_NAND, NULL, "nand"), 158 + ALIAS(PCLK_I2S, NULL, "iis"), 159 + ALIAS(PCLK_I2C, NULL, "i2c"), 155 160 }; 156 161 157 162 /* S3C2410 specific clocks */ ··· 383 378 if (!np) 384 379 s3c2410_common_clk_register_fixed_ext(ctx, xti_f); 385 380 386 - if (current_soc == 2410) { 381 + if (current_soc == S3C2410) { 387 382 if (_get_rate("xti") == 12 * MHZ) { 388 383 s3c2410_plls[mpll].rate_table = pll_s3c2410_12mhz_tbl; 389 384 s3c2410_plls[upll].rate_table = pll_s3c2410_12mhz_tbl; ··· 437 432 samsung_clk_register_fixed_factor(ctx, s3c2410_ffactor, 438 433 ARRAY_SIZE(s3c2410_ffactor)); 439 434 samsung_clk_register_alias(ctx, s3c2410_aliases, 440 - ARRAY_SIZE(s3c2410_common_aliases)); 435 + ARRAY_SIZE(s3c2410_aliases)); 441 436 break; 442 437 case S3C2440: 443 438 samsung_clk_register_mux(ctx, s3c2440_muxes,
+4 -2
drivers/clk/samsung/clk-s3c64xx.c
··· 418 418 ALIAS(SCLK_MMC2, "s3c-sdhci.2", "mmc_busclk.2"), 419 419 ALIAS(SCLK_MMC1, "s3c-sdhci.1", "mmc_busclk.2"), 420 420 ALIAS(SCLK_MMC0, "s3c-sdhci.0", "mmc_busclk.2"), 421 - ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi-bus"), 422 - ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi-bus"), 421 + ALIAS(PCLK_SPI1, "s3c6410-spi.1", "spi_busclk0"), 422 + ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi_busclk2"), 423 + ALIAS(PCLK_SPI0, "s3c6410-spi.0", "spi_busclk0"), 424 + ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi_busclk2"), 423 425 ALIAS(SCLK_AUDIO1, "samsung-pcm.1", "audio-bus"), 424 426 ALIAS(SCLK_AUDIO1, "samsung-i2s.1", "audio-bus"), 425 427 ALIAS(SCLK_AUDIO0, "samsung-pcm.0", "audio-bus"),
+11 -5
drivers/clk/spear/spear3xx_clock.c
··· 211 211 /* array of all spear 320 clock lookups */ 212 212 #ifdef CONFIG_MACH_SPEAR320 213 213 214 - #define SPEAR320_CONTROL_REG (soc_config_base + 0x0000) 214 + #define SPEAR320_CONTROL_REG (soc_config_base + 0x0010) 215 215 #define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018) 216 216 217 217 #define SPEAR320_UARTX_PCLK_MASK 0x1 ··· 245 245 "ras_syn0_gclk", }; 246 246 static const char *uartx_parents[] = { "ras_syn1_gclk", "ras_apb_clk", }; 247 247 248 - static void __init spear320_clk_init(void __iomem *soc_config_base) 248 + static void __init spear320_clk_init(void __iomem *soc_config_base, 249 + struct clk *ras_apb_clk) 249 250 { 250 251 struct clk *clk; 251 252 ··· 343 342 SPEAR320_CONTROL_REG, UART1_PCLK_SHIFT, UART1_PCLK_MASK, 344 343 0, &_lock); 345 344 clk_register_clkdev(clk, NULL, "a3000000.serial"); 345 + /* Enforce ras_apb_clk */ 346 + clk_set_parent(clk, ras_apb_clk); 346 347 347 348 clk = clk_register_mux(NULL, "uart2_clk", uartx_parents, 348 349 ARRAY_SIZE(uartx_parents), ··· 352 349 SPEAR320_EXT_CTRL_REG, SPEAR320_UART2_PCLK_SHIFT, 353 350 SPEAR320_UARTX_PCLK_MASK, 0, &_lock); 354 351 clk_register_clkdev(clk, NULL, "a4000000.serial"); 352 + /* Enforce ras_apb_clk */ 353 + clk_set_parent(clk, ras_apb_clk); 355 354 356 355 clk = clk_register_mux(NULL, "uart3_clk", uartx_parents, 357 356 ARRAY_SIZE(uartx_parents), ··· 384 379 clk_register_clkdev(clk, NULL, "60100000.serial"); 385 380 } 386 381 #else 387 - static inline void spear320_clk_init(void __iomem *soc_config_base) { } 382 + static inline void spear320_clk_init(void __iomem *sb, struct clk *rc) { } 388 383 #endif 389 384 390 385 void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base) 391 386 { 392 - struct clk *clk, *clk1; 387 + struct clk *clk, *clk1, *ras_apb_clk; 393 388 394 389 clk = clk_register_fixed_rate(NULL, "osc_32k_clk", NULL, CLK_IS_ROOT, 395 390 32000); ··· 618 613 clk = clk_register_gate(NULL, "ras_apb_clk", "apb_clk", 0, RAS_CLK_ENB, 619 614 RAS_APB_CLK_ENB, 0, &_lock); 620 615 clk_register_clkdev(clk, "ras_apb_clk", NULL); 616 + ras_apb_clk = clk; 621 617 622 618 clk = clk_register_gate(NULL, "ras_32k_clk", "osc_32k_clk", 0, 623 619 RAS_CLK_ENB, RAS_32K_CLK_ENB, 0, &_lock); ··· 665 659 else if (of_machine_is_compatible("st,spear310")) 666 660 spear310_clk_init(); 667 661 else if (of_machine_is_compatible("st,spear320")) 668 - spear320_clk_init(soc_config_base); 662 + spear320_clk_init(soc_config_base, ras_apb_clk); 669 663 }
+1 -1
drivers/clk/sunxi/clk-sun6i-apb0-gates.c
··· 29 29 30 30 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 31 31 reg = devm_ioremap_resource(&pdev->dev, r); 32 - if (!reg) 32 + if (IS_ERR(reg)) 33 33 return PTR_ERR(reg); 34 34 35 35 clk_parent = of_clk_get_parent_name(np, 0);
+3 -5
drivers/clk/ti/apll.c
··· 77 77 if (i == MAX_APLL_WAIT_TRIES) { 78 78 pr_warn("clock: %s failed transition to '%s'\n", 79 79 clk_name, (state) ? "locked" : "bypassed"); 80 - } else { 80 + r = -EBUSY; 81 + } else 81 82 pr_debug("clock: %s transition to '%s' in %d loops\n", 82 83 clk_name, (state) ? "locked" : "bypassed", i); 83 - 84 - r = 0; 85 - } 86 84 87 85 return r; 88 86 } ··· 336 338 const char *parent_name; 337 339 u32 val; 338 340 339 - ad = kzalloc(sizeof(*clk_hw), GFP_KERNEL); 341 + ad = kzalloc(sizeof(*ad), GFP_KERNEL); 340 342 clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL); 341 343 init = kzalloc(sizeof(*init), GFP_KERNEL); 342 344
+3 -2
drivers/clk/ti/dpll.c
··· 161 161 } 162 162 163 163 #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 164 - defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) 164 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) || \ 165 + defined(CONFIG_SOC_AM43XX) 165 166 /** 166 167 * ti_clk_register_dpll_x2 - Registers a DPLLx2 clock 167 168 * @node: device node for this clock ··· 323 322 of_ti_omap4_dpll_x2_setup); 324 323 #endif 325 324 326 - #ifdef CONFIG_SOC_AM33XX 325 + #if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) 327 326 static void __init of_ti_am3_dpll_x2_setup(struct device_node *node) 328 327 { 329 328 ti_clk_register_dpll_x2(node, &dpll_x2_ck_ops, NULL);
+1 -1
drivers/clk/ti/mux.c
··· 160 160 u8 clk_mux_flags = 0; 161 161 u32 mask = 0; 162 162 u32 shift = 0; 163 - u32 flags = 0; 163 + u32 flags = CLK_SET_RATE_NO_REPARENT; 164 164 165 165 num_parents = of_clk_get_parent_count(node); 166 166 if (num_parents < 2) {
+18 -2
drivers/clocksource/exynos_mct.c
··· 162 162 exynos4_mct_write(reg, EXYNOS4_MCT_G_TCON); 163 163 } 164 164 165 - static cycle_t exynos4_frc_read(struct clocksource *cs) 165 + static cycle_t notrace _exynos4_frc_read(void) 166 166 { 167 167 unsigned int lo, hi; 168 168 u32 hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U); ··· 174 174 } while (hi != hi2); 175 175 176 176 return ((cycle_t)hi << 32) | lo; 177 + } 178 + 179 + static cycle_t exynos4_frc_read(struct clocksource *cs) 180 + { 181 + return _exynos4_frc_read(); 177 182 } 178 183 179 184 static void exynos4_frc_resume(struct clocksource *cs) ··· 197 192 198 193 static u64 notrace exynos4_read_sched_clock(void) 199 194 { 200 - return exynos4_frc_read(&mct_frc); 195 + return _exynos4_frc_read(); 196 + } 197 + 198 + static struct delay_timer exynos4_delay_timer; 199 + 200 + static cycles_t exynos4_read_current_timer(void) 201 + { 202 + return _exynos4_frc_read(); 201 203 } 202 204 203 205 static void __init exynos4_clocksource_init(void) 204 206 { 205 207 exynos4_mct_frc_start(); 208 + 209 + exynos4_delay_timer.read_current_timer = &exynos4_read_current_timer; 210 + exynos4_delay_timer.freq = clk_rate; 211 + register_current_timer_delay(&exynos4_delay_timer); 206 212 207 213 if (clocksource_register_hz(&mct_frc, clk_rate)) 208 214 panic("%s: can't register clocksource\n", mct_frc.name);
+1 -1
drivers/cpufreq/Makefile
··· 49 49 # LITTLE drivers, so that it is probed last. 50 50 obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o 51 51 52 - obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o 52 + obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 53 53 obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o 54 54 obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o 55 55 obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o
+22 -13
drivers/cpufreq/intel_pstate.c
··· 128 128 129 129 struct perf_limits { 130 130 int no_turbo; 131 + int turbo_disabled; 131 132 int max_perf_pct; 132 133 int min_perf_pct; 133 134 int32_t max_perf; ··· 288 287 if (ret != 1) 289 288 return -EINVAL; 290 289 limits.no_turbo = clamp_t(int, input, 0 , 1); 291 - 290 + if (limits.turbo_disabled) { 291 + pr_warn("Turbo disabled by BIOS or unavailable on processor\n"); 292 + limits.no_turbo = limits.turbo_disabled; 293 + } 292 294 return count; 293 295 } ··· 361 357 { 362 358 u64 value; 363 359 rdmsrl(BYT_RATIOS, value); 364 - return (value >> 8) & 0x3F; 360 + return (value >> 8) & 0x7F; 365 361 } 366 362 367 363 static int byt_get_max_pstate(void) 368 364 { 369 365 u64 value; 370 366 rdmsrl(BYT_RATIOS, value); 371 - return (value >> 16) & 0x3F; 367 + return (value >> 16) & 0x7F; 372 368 } 373 369 374 370 static int byt_get_turbo_pstate(void) 375 371 { 376 372 u64 value; 377 373 rdmsrl(BYT_TURBO_RATIOS, value); 378 - return value & 0x3F; 374 + return value & 0x7F; 379 375 } 380 376 381 377 static void byt_set_pstate(struct cpudata *cpudata, int pstate) ··· 385 381 u32 vid; 386 382 387 383 val = pstate << 8; 388 - if (limits.no_turbo) 384 + if (limits.no_turbo && !limits.turbo_disabled) 389 385 val |= (u64)1 << 32; 390 386 391 387 vid_fp = cpudata->vid.min + mul_fp( ··· 409 405 410 406 411 407 rdmsrl(BYT_VIDS, value); 412 - cpudata->vid.min = int_tofp((value >> 8) & 0x3f); 413 - cpudata->vid.max = int_tofp((value >> 16) & 0x3f); 408 + cpudata->vid.min = int_tofp((value >> 8) & 0x7f); 409 + cpudata->vid.max = int_tofp((value >> 16) & 0x7f); 414 410 cpudata->vid.ratio = div_fp( 415 411 cpudata->vid.max - cpudata->vid.min, 416 412 int_tofp(cpudata->pstate.max_pstate - ··· 452 448 u64 val; 453 449 454 450 val = pstate << 8; 455 - if (limits.no_turbo) 451 + if (limits.no_turbo && !limits.turbo_disabled) 456 452 val |= (u64)1 << 32; 457 453 458 454 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); ··· 700 696 701 697 cpu = all_cpu_data[cpunum]; 702 698 703 - intel_pstate_get_cpu_pstates(cpu); 704 - 705 699 cpu->cpu = cpunum; 700 + intel_pstate_get_cpu_pstates(cpu); 706 701 707 702 init_timer_deferrable(&cpu->timer); 708 703 cpu->timer.function = intel_pstate_timer_func; ··· 744 741 limits.min_perf = int_tofp(1); 745 742 limits.max_perf_pct = 100; 746 743 limits.max_perf = int_tofp(1); 747 - limits.no_turbo = 0; 744 + limits.no_turbo = limits.turbo_disabled; 748 745 return 0; 749 746 } 750 747 limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; ··· 787 784 { 788 785 struct cpudata *cpu; 789 786 int rc; 787 + u64 misc_en; 790 788 791 789 rc = intel_pstate_init_cpu(policy->cpu); 792 790 if (rc) ··· 795 791 796 792 cpu = all_cpu_data[policy->cpu]; 797 793 798 - if (!limits.no_turbo && 799 - limits.min_perf_pct == 100 && limits.max_perf_pct == 100) 794 + rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); 795 + if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || 796 + cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) { 797 + limits.turbo_disabled = 1; 798 + limits.no_turbo = 1; 799 + } 800 + if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100) 800 801 policy->policy = CPUFREQ_POLICY_PERFORMANCE; 801 802 else 802 803 policy->policy = CPUFREQ_POLICY_POWERSAVE;
+3 -5
drivers/crypto/caam/jr.c
··· 453 453 int error; 454 454 455 455 jrdev = &pdev->dev; 456 - jrpriv = kmalloc(sizeof(struct caam_drv_private_jr), 457 - GFP_KERNEL); 456 + jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr), 457 + GFP_KERNEL); 458 458 if (!jrpriv) 459 459 return -ENOMEM; 460 460 ··· 487 487 488 488 /* Now do the platform independent part */ 489 489 error = caam_jr_init(jrdev); /* now turn on hardware */ 490 - if (error) { 491 - kfree(jrpriv); 490 + if (error) 492 491 return error; 493 - } 494 492 495 493 jrpriv->dev = jrdev; 496 494 spin_lock(&driver_data.jr_alloc_lock);
+10 -3
drivers/dma/cppi41.c
··· 86 86 87 87 #define USBSS_IRQ_PD_COMP (1 << 2) 88 88 89 + /* Packet Descriptor */ 90 + #define PD2_ZERO_LENGTH (1 << 19) 91 + 89 92 struct cppi41_channel { 90 93 struct dma_chan chan; 91 94 struct dma_async_tx_descriptor txd; ··· 310 307 __iormb(); 311 308 312 309 while (val) { 313 - u32 desc; 310 + u32 desc, len; 314 311 315 312 q_num = __fls(val); 316 313 val &= ~(1 << q_num); ··· 322 319 q_num, desc); 323 320 continue; 324 321 } 325 - c->residue = pd_trans_len(c->desc->pd6) - 326 - pd_trans_len(c->desc->pd0); 327 322 323 + if (c->desc->pd2 & PD2_ZERO_LENGTH) 324 + len = 0; 325 + else 326 + len = pd_trans_len(c->desc->pd0); 327 + 328 + c->residue = pd_trans_len(c->desc->pd6) - len; 328 329 dma_cookie_complete(&c->txd); 329 330 c->txd.callback(c->txd.callback_param); 330 331 }
+18 -4
drivers/dma/imx-sdma.c
··· 255 255 enum dma_slave_buswidth word_size; 256 256 unsigned int buf_tail; 257 257 unsigned int num_bd; 258 + unsigned int period_len; 258 259 struct sdma_buffer_descriptor *bd; 259 260 dma_addr_t bd_phys; 260 261 unsigned int pc_from_device, pc_to_device; ··· 594 593 595 594 static void sdma_handle_channel_loop(struct sdma_channel *sdmac) 596 595 { 596 + if (sdmac->desc.callback) 597 + sdmac->desc.callback(sdmac->desc.callback_param); 598 + } 599 + 600 + static void sdma_update_channel_loop(struct sdma_channel *sdmac) 601 + { 597 602 struct sdma_buffer_descriptor *bd; 598 603 599 604 /* ··· 618 611 bd->mode.status |= BD_DONE; 619 612 sdmac->buf_tail++; 620 613 sdmac->buf_tail %= sdmac->num_bd; 621 - 622 - if (sdmac->desc.callback) 623 - sdmac->desc.callback(sdmac->desc.callback_param); 624 614 } 625 615 } 626 616 ··· 672 668 while (stat) { 673 669 int channel = fls(stat) - 1; 674 670 struct sdma_channel *sdmac = &sdma->channel[channel]; 671 + 672 + if (sdmac->flags & IMX_DMA_SG_LOOP) 673 + sdma_update_channel_loop(sdmac); 675 674 676 675 tasklet_schedule(&sdmac->tasklet); 677 676 ··· 1136 1129 sdmac->status = DMA_IN_PROGRESS; 1137 1130 1138 1131 sdmac->buf_tail = 0; 1132 + sdmac->period_len = period_len; 1139 1133 1140 1134 sdmac->flags |= IMX_DMA_SG_LOOP; 1141 1135 sdmac->direction = direction; ··· 1233 1225 struct dma_tx_state *txstate) 1234 1226 { 1235 1227 struct sdma_channel *sdmac = to_sdma_chan(chan); 1228 + u32 residue; 1229 + 1230 + if (sdmac->flags & IMX_DMA_SG_LOOP) 1231 + residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len; 1232 + else 1233 + residue = sdmac->chn_count - sdmac->chn_real_count; 1236 1234 1237 1235 dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 1238 - sdmac->chn_count - sdmac->chn_real_count); 1236 + residue); 1239 1237 1240 1238 return sdmac->status; 1241 1239 }
+3 -2
drivers/gpu/drm/i915/i915_dma.c
··· 1464 1464 #else 1465 1465 static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) 1466 1466 { 1467 - int ret; 1467 + int ret = 0; 1468 1468 1469 1469 DRM_INFO("Replacing VGA console driver\n"); 1470 1470 1471 1471 console_lock(); 1472 - ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); 1472 + if (con_is_bound(&vga_con)) 1473 + ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); 1473 1474 if (ret == 0) { 1474 1475 ret = do_unregister_con_driver(&vga_con); 1475 1476
+1
drivers/gpu/drm/i915/i915_drv.h
··· 656 656 #define QUIRK_PIPEA_FORCE (1<<0) 657 657 #define QUIRK_LVDS_SSC_DISABLE (1<<1) 658 658 #define QUIRK_INVERT_BRIGHTNESS (1<<2) 659 + #define QUIRK_BACKLIGHT_PRESENT (1<<3) 659 660 660 661 struct intel_fbdev; 661 662 struct intel_fbc_work;
+44
drivers/gpu/drm/i915/i915_gem_stolen.c
··· 74 74 if (base == 0) 75 75 return 0; 76 76 77 + /* make sure we don't clobber the GTT if it's within stolen memory */ 78 + if (INTEL_INFO(dev)->gen <= 4 && !IS_G33(dev) && !IS_G4X(dev)) { 79 + struct { 80 + u32 start, end; 81 + } stolen[2] = { 82 + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, 83 + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, 84 + }; 85 + u64 gtt_start, gtt_end; 86 + 87 + gtt_start = I915_READ(PGTBL_CTL); 88 + if (IS_GEN4(dev)) 89 + gtt_start = (gtt_start & PGTBL_ADDRESS_LO_MASK) | 90 + (gtt_start & PGTBL_ADDRESS_HI_MASK) << 28; 91 + else 92 + gtt_start &= PGTBL_ADDRESS_LO_MASK; 93 + gtt_end = gtt_start + gtt_total_entries(dev_priv->gtt) * 4; 94 + 95 + if (gtt_start >= stolen[0].start && gtt_start < stolen[0].end) 96 + stolen[0].end = gtt_start; 97 + if (gtt_end > stolen[1].start && gtt_end <= stolen[1].end) 98 + stolen[1].start = gtt_end; 99 + 100 + /* pick the larger of the two chunks */ 101 + if (stolen[0].end - stolen[0].start > 102 + stolen[1].end - stolen[1].start) { 103 + base = stolen[0].start; 104 + dev_priv->gtt.stolen_size = stolen[0].end - stolen[0].start; 105 + } else { 106 + base = stolen[1].start; 107 + dev_priv->gtt.stolen_size = stolen[1].end - stolen[1].start; 108 + } 109 + 110 + if (stolen[0].start != stolen[1].start || 111 + stolen[0].end != stolen[1].end) { 112 + DRM_DEBUG_KMS("GTT within stolen memory at 0x%llx-0x%llx\n", 113 + (unsigned long long) gtt_start, 114 + (unsigned long long) gtt_end - 1); 115 + DRM_DEBUG_KMS("Stolen memory adjusted to 0x%x-0x%x\n", 116 + base, base + (u32) dev_priv->gtt.stolen_size - 1); 117 + } 118 + } 119 + 120 + 77 121 /* Verify that nothing else uses this physical address. Stolen 78 122 * memory should be reserved by the BIOS and hidden from the 79 123 * kernel. So if the region is already marked as busy, something
+3
drivers/gpu/drm/i915/i915_reg.h
··· 942 942 /* 943 943 * Instruction and interrupt control regs 944 944 */ 945 + #define PGTBL_CTL 0x02020 946 + #define PGTBL_ADDRESS_LO_MASK 0xfffff000 /* bits [31:12] */ 947 + #define PGTBL_ADDRESS_HI_MASK 0x000000f0 /* bits [35:32] (gen4) */ 945 948 #define PGTBL_ER 0x02024 946 949 #define RENDER_RING_BASE 0x02000 947 950 #define BSD_RING_BASE 0x04000
+14
drivers/gpu/drm/i915/intel_display.c
··· 11591 11591 DRM_INFO("applying inverted panel brightness quirk\n"); 11592 11592 } 11593 11593 11594 + /* Some VBT's incorrectly indicate no backlight is present */ 11595 + static void quirk_backlight_present(struct drm_device *dev) 11596 + { 11597 + struct drm_i915_private *dev_priv = dev->dev_private; 11598 + dev_priv->quirks |= QUIRK_BACKLIGHT_PRESENT; 11599 + DRM_INFO("applying backlight present quirk\n"); 11600 + } 11601 + 11594 11602 struct intel_quirk { 11595 11603 int device; 11596 11604 int subsystem_vendor; ··· 11667 11659 11668 11660 /* Acer Aspire 5336 */ 11669 11661 { 0x2a42, 0x1025, 0x048a, quirk_invert_brightness }, 11662 + 11663 + /* Acer C720 and C720P Chromebooks (Celeron 2955U) have backlights */ 11664 + { 0x0a06, 0x1025, 0x0a11, quirk_backlight_present }, 11665 + 11666 + /* Toshiba CB35 Chromebook (Celeron 2955U) */ 11667 + { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present }, 11670 11668 }; 11671 11669 11672 11670 static void intel_init_quirks(struct drm_device *dev)
+42
drivers/gpu/drm/i915/intel_dp.c
··· 28 28 #include <linux/i2c.h> 29 29 #include <linux/slab.h> 30 30 #include <linux/export.h> 31 + #include <linux/notifier.h> 32 + #include <linux/reboot.h> 31 33 #include <drm/drmP.h> 32 34 #include <drm/drm_crtc.h> 33 35 #include <drm/drm_crtc_helper.h> ··· 336 334 return PCH_PP_STATUS; 337 335 else 338 336 return VLV_PIPE_PP_STATUS(vlv_power_sequencer_pipe(intel_dp)); 337 + } 338 + 339 + /* Reboot notifier handler to shutdown panel power to guarantee T12 timing 340 + This function only applicable when panel PM state is not to be tracked */ 341 + static int edp_notify_handler(struct notifier_block *this, unsigned long code, 342 + void *unused) 343 + { 344 + struct intel_dp *intel_dp = container_of(this, typeof(* intel_dp), 345 + edp_notifier); 346 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 347 + struct drm_i915_private *dev_priv = dev->dev_private; 348 + u32 pp_div; 349 + u32 pp_ctrl_reg, pp_div_reg; 350 + enum pipe pipe = vlv_power_sequencer_pipe(intel_dp); 351 + 352 + if (!is_edp(intel_dp) || code != SYS_RESTART) 353 + return 0; 354 + 355 + if (IS_VALLEYVIEW(dev)) { 356 + pp_ctrl_reg = VLV_PIPE_PP_CONTROL(pipe); 357 + pp_div_reg = VLV_PIPE_PP_DIVISOR(pipe); 358 + pp_div = I915_READ(pp_div_reg); 359 + pp_div &= PP_REFERENCE_DIVIDER_MASK; 360 + 361 + /* 0x1F write to PP_DIV_REG sets max cycle delay */ 362 + I915_WRITE(pp_div_reg, pp_div | 0x1F); 363 + I915_WRITE(pp_ctrl_reg, PANEL_UNLOCK_REGS | PANEL_POWER_OFF); 364 + msleep(intel_dp->panel_power_cycle_delay); 365 + } 366 + 367 + return 0; 339 368 } 340 369 341 370 static bool edp_have_panel_power(struct intel_dp *intel_dp) ··· 3740 3707 drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 3741 3708 edp_panel_vdd_off_sync(intel_dp); 3742 3709 drm_modeset_unlock(&dev->mode_config.connection_mutex); 3710 + if (intel_dp->edp_notifier.notifier_call) { 3711 + unregister_reboot_notifier(&intel_dp->edp_notifier); 3712 + intel_dp->edp_notifier.notifier_call = NULL; 3713 + } 3743 3714 kfree(intel_dig_port); 3745 3716 } ··· 4220 4183 fixed_mode->type |= DRM_MODE_TYPE_PREFERRED; 4221 4184 } 4222 4185 mutex_unlock(&dev->mode_config.mutex); 4186 + 4187 + if (IS_VALLEYVIEW(dev)) { 4188 + intel_dp->edp_notifier.notifier_call = edp_notify_handler; 4189 + register_reboot_notifier(&intel_dp->edp_notifier); 4190 + } 4223 4191 4224 4192 intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); 4225 4193 intel_panel_setup_backlight(connector);
+2
drivers/gpu/drm/i915/intel_drv.h
··· 538 538 unsigned long last_power_on; 539 539 unsigned long last_backlight_off; 540 540 bool psr_setup_done; 541 + struct notifier_block edp_notifier; 542 + 541 543 bool use_tps3; 542 544 struct intel_connector *attached_connector; 543 545
+15 -14
drivers/gpu/drm/i915/intel_dsi.c
··· 117 117 /* bandgap reset is needed after everytime we do power gate */ 118 118 band_gap_reset(dev_priv); 119 119 120 + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 121 + usleep_range(2500, 3000); 122 + 120 123 val = I915_READ(MIPI_PORT_CTRL(pipe)); 121 124 I915_WRITE(MIPI_PORT_CTRL(pipe), val | LP_OUTPUT_HOLD); 122 125 usleep_range(1000, 1500); 123 - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); 124 - usleep_range(2000, 2500); 126 + 127 + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); 128 + usleep_range(2500, 3000); 129 + 125 130 I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); 126 - usleep_range(2000, 2500); 127 - I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); 128 - usleep_range(2000, 2500); 129 - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); 130 - usleep_range(2000, 2500); 131 + usleep_range(2500, 3000); 131 132 } 132 133 133 134 static void intel_dsi_enable(struct intel_encoder *encoder) ··· 272 271 273 272 DRM_DEBUG_KMS("\n"); 274 273 275 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 274 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); 276 275 usleep_range(2000, 2500); 277 276 278 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); 277 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); 279 278 usleep_range(2000, 2500); 280 279 281 - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); 280 + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); 282 281 usleep_range(2000, 2500); 283 - 284 - val = I915_READ(MIPI_PORT_CTRL(pipe)); 285 - I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); 286 - usleep_range(1000, 1500); 287 282 288 283 if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & AFE_LATCHOUT) 289 284 == 0x00000), 30)) 290 285 DRM_ERROR("DSI LP not going Low\n"); 286 + 287 + val = I915_READ(MIPI_PORT_CTRL(pipe)); 288 + I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); 289 + usleep_range(1000, 1500); 291 290 292 291 I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); 293 292 usleep_range(2000, 2500);
-6
drivers/gpu/drm/i915/intel_dsi_cmd.c
··· 404 404 else 405 405 cmd |= DPI_LP_MODE; 406 406 407 - /* DPI virtual channel?! */ 408 - 409 - mask = DPI_FIFO_EMPTY; 410 - if (wait_for((I915_READ(MIPI_GEN_FIFO_STAT(pipe)) & mask) == mask, 50)) 411 - DRM_ERROR("Timeout waiting for DPI FIFO empty.\n"); 412 - 413 407 /* clear bit */ 414 408 I915_WRITE(MIPI_INTR_STAT(pipe), SPL_PKT_SENT_INTERRUPT); 415 409
+9
drivers/gpu/drm/i915/intel_opregion.c
··· 403 403 404 404 DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp); 405 405 406 + /* 407 + * If the acpi_video interface is not supposed to be used, don't 408 + * bother processing backlight level change requests from firmware. 409 + */ 410 + if (!acpi_video_verify_backlight_support()) { 411 + DRM_DEBUG_KMS("opregion backlight request ignored\n"); 412 + return 0; 413 + } 414 + 406 415 if (!(bclp & ASLE_BCLP_VALID)) 407 416 return ASLC_BACKLIGHT_FAILED; 408 417
+6 -2
drivers/gpu/drm/i915/intel_panel.c
··· 1118 1118 int ret; 1119 1119 1120 1120 if (!dev_priv->vbt.backlight.present) { 1121 - DRM_DEBUG_KMS("native backlight control not available per VBT\n"); 1122 - return 0; 1121 + if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) { 1122 + DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n"); 1123 + } else { 1124 + DRM_DEBUG_KMS("no backlight present per VBT\n"); 1125 + return 0; 1126 + } 1123 1127 } 1124 1128 1125 1129 /* set level and max in panel struct */
+3 -3
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
··· 1516 1516 } 1517 1517 1518 1518 switch ((ctrl & 0x000f0000) >> 16) { 1519 - case 6: datarate = pclk * 30 / 8; break; 1520 - case 5: datarate = pclk * 24 / 8; break; 1519 + case 6: datarate = pclk * 30; break; 1520 + case 5: datarate = pclk * 24; break; 1521 1521 case 2: 1522 1522 default: 1523 - datarate = pclk * 18 / 8; 1523 + datarate = pclk * 18; 1524 1524 break; 1525 1525 } 1526 1526
+3 -3
drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c
··· 1159 1159 if (outp->info.type == DCB_OUTPUT_DP) { 1160 1160 u32 sync = nv_rd32(priv, 0x660404 + (head * 0x300)); 1161 1161 switch ((sync & 0x000003c0) >> 6) { 1162 - case 6: pclk = pclk * 30 / 8; break; 1163 - case 5: pclk = pclk * 24 / 8; break; 1162 + case 6: pclk = pclk * 30; break; 1163 + case 5: pclk = pclk * 24; break; 1164 1164 case 2: 1165 1165 default: 1166 - pclk = pclk * 18 / 8; 1166 + pclk = pclk * 18; 1167 1167 break; 1168 1168 } 1169 1169
+5 -3
drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c
··· 34 34 struct nvkm_output_dp *outp = (void *)base; 35 35 bool retrain = true; 36 36 u8 link[2], stat[3]; 37 - u32 rate; 37 + u32 linkrate; 38 38 int ret, i; 39 39 40 40 /* check that the link is trained at a high enough rate */ ··· 44 44 goto done; 45 45 } 46 46 47 - rate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); 48 - if (rate < ((datarate / 8) * 10)) { 47 + linkrate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); 48 + linkrate = (linkrate * 8) / 10; /* 8B/10B coding overhead */ 49 + datarate = (datarate + 9) / 10; /* -> decakilobits */ 50 + if (linkrate < datarate) { 49 51 DBG("link not trained at sufficient rate\n"); 50 52 goto done; 51 53 }
+1
drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c
··· 87 87 struct nvkm_output_dp *outpdp = (void *)outp; 88 88 switch (data) { 89 89 case NV94_DISP_SOR_DP_PWR_STATE_OFF: 90 + nouveau_event_put(outpdp->irq); 90 91 ((struct nvkm_output_dp_impl *)nv_oclass(outp)) 91 92 ->lnk_pwr(outpdp, 0); 92 93 atomic_set(&outpdp->lt.done, 0);
+2 -2
drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h
··· 26 26 }; 27 27 } 28 28 29 - static inline struct ramfuc_reg 29 + static noinline struct ramfuc_reg 30 30 ramfuc_reg(u32 addr) 31 31 { 32 32 return ramfuc_reg2(addr, addr); ··· 107 107 108 108 #define ram_init(s,p) ramfuc_init(&(s)->base, (p)) 109 109 #define ram_exec(s,e) ramfuc_exec(&(s)->base, (e)) 110 - #define ram_have(s,r) ((s)->r_##r.addr != 0x000000) 110 + #define ram_have(s,r) ((s)->r_##r.addr[0] != 0x000000) 111 111 #define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r) 112 112 #define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d)) 113 113 #define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r)
+1
drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c
··· 200 200 /* (re)program mempll, if required */ 201 201 if (ram->mode == 2) { 202 202 ram_mask(fuc, 0x1373f4, 0x00010000, 0x00000000); 203 + ram_mask(fuc, 0x132000, 0x80000000, 0x80000000); 203 204 ram_mask(fuc, 0x132000, 0x00000001, 0x00000000); 204 205 ram_mask(fuc, 0x132004, 0x103fffff, mcoef); 205 206 ram_mask(fuc, 0x132000, 0x00000001, 0x00000001);
+9 -8
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 652 652 ret = nouveau_do_resume(drm_dev); 653 653 if (ret) 654 654 return ret; 655 - if (drm_dev->mode_config.num_crtc) 656 - nouveau_fbcon_set_suspend(drm_dev, 0); 657 655 658 - nouveau_fbcon_zfill_all(drm_dev); 659 - if (drm_dev->mode_config.num_crtc) 656 + if (drm_dev->mode_config.num_crtc) { 660 657 nouveau_display_resume(drm_dev); 658 + nouveau_fbcon_set_suspend(drm_dev, 0); 659 + } 660 + 661 661 return 0; 662 662 } 663 663 ··· 683 683 ret = nouveau_do_resume(drm_dev); 684 684 if (ret) 685 685 return ret; 686 - if (drm_dev->mode_config.num_crtc) 687 - nouveau_fbcon_set_suspend(drm_dev, 0); 688 - nouveau_fbcon_zfill_all(drm_dev); 689 - if (drm_dev->mode_config.num_crtc) 686 + 687 + if (drm_dev->mode_config.num_crtc) { 690 688 nouveau_display_resume(drm_dev); 689 + nouveau_fbcon_set_suspend(drm_dev, 0); 690 + } 691 + 691 692 return 0; 692 693 } 693 694
+3 -10
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 531 531 if (state == 1) 532 532 nouveau_fbcon_save_disable_accel(dev); 533 533 fb_set_suspend(drm->fbcon->helper.fbdev, state); 534 - if (state == 0) 534 + if (state == 0) { 535 535 nouveau_fbcon_restore_accel(dev); 536 + nouveau_fbcon_zfill(dev, drm->fbcon); 537 + } 536 538 console_unlock(); 537 - } 538 - } 539 - 540 - void 541 - nouveau_fbcon_zfill_all(struct drm_device *dev) 542 - { 543 - struct nouveau_drm *drm = nouveau_drm(dev); 544 - if (drm->fbcon) { 545 - nouveau_fbcon_zfill(dev, drm->fbcon); 546 539 } 547 540 }
-1
drivers/gpu/drm/nouveau/nouveau_fbcon.h
··· 61 61 int nouveau_fbcon_init(struct drm_device *dev); 62 62 void nouveau_fbcon_fini(struct drm_device *dev); 63 63 void nouveau_fbcon_set_suspend(struct drm_device *dev, int state); 64 - void nouveau_fbcon_zfill_all(struct drm_device *dev); 65 64 void nouveau_fbcon_save_disable_accel(struct drm_device *dev); 66 65 void nouveau_fbcon_restore_accel(struct drm_device *dev); 67 66
+2 -1
drivers/gpu/drm/nouveau/nv50_display.c
··· 1741 1741 } 1742 1742 } 1743 1743 1744 - mthd = (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; 1744 + mthd = (ffs(nv_encoder->dcb->heads) - 1) << 3; 1745 + mthd |= (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; 1745 1746 mthd |= nv_encoder->or; 1746 1747 1747 1748 if (nv_encoder->dcb->type == DCB_OUTPUT_DP) {
+1 -1
drivers/gpu/drm/radeon/atombios_dp.c
··· 127 127 /* flags not zero */ 128 128 if (args.v1.ucReplyStatus == 2) { 129 129 DRM_DEBUG_KMS("dp_aux_ch flags not zero\n"); 130 - r = -EBUSY; 130 + r = -EIO; 131 131 goto done; 132 132 } 133 133
+1 -1
drivers/gpu/drm/radeon/ci_dpm.c
··· 1179 1179 tmp &= ~GLOBAL_PWRMGT_EN; 1180 1180 WREG32_SMC(GENERAL_PWRMGT, tmp); 1181 1181 1182 - tmp = RREG32(SCLK_PWRMGT_CNTL); 1182 + tmp = RREG32_SMC(SCLK_PWRMGT_CNTL); 1183 1183 tmp &= ~DYNAMIC_PM_EN; 1184 1184 WREG32_SMC(SCLK_PWRMGT_CNTL, tmp); 1185 1185
+4 -2
drivers/gpu/drm/radeon/cik.c
··· 7676 7676 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 7677 7677 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 7678 7678 mc_client = RREG32(VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT); 7679 + /* reset addr and status */ 7680 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 7681 + if (addr == 0x0 && status == 0x0) 7682 + break; 7679 7683 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 7680 7684 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 7681 7685 addr); 7682 7686 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 7683 7687 status); 7684 7688 cik_vm_decode_fault(rdev, status, addr, mc_client); 7685 - /* reset addr and status */ 7686 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 7687 7689 break; 7688 7690 case 167: /* VCE */ 7689 7691 DRM_DEBUG("IH: VCE int: 0x%08x\n", src_data);
+8 -6
drivers/gpu/drm/radeon/evergreen.c
··· 189 189 0x8c1c, 0xffffffff, 0x00001010, 190 190 0x28350, 0xffffffff, 0x00000000, 191 191 0xa008, 0xffffffff, 0x00010000, 192 - 0x5cc, 0xffffffff, 0x00000001, 192 + 0x5c4, 0xffffffff, 0x00000001, 193 193 0x9508, 0xffffffff, 0x00000002, 194 194 0x913c, 0x0000000f, 0x0000000a 195 195 }; ··· 476 476 0x8c1c, 0xffffffff, 0x00001010, 477 477 0x28350, 0xffffffff, 0x00000000, 478 478 0xa008, 0xffffffff, 0x00010000, 479 - 0x5cc, 0xffffffff, 0x00000001, 479 + 0x5c4, 0xffffffff, 0x00000001, 480 480 0x9508, 0xffffffff, 0x00000002 481 481 }; 482 482 ··· 635 635 static const u32 supersumo_golden_registers[] = 636 636 { 637 637 0x5eb4, 0xffffffff, 0x00000002, 638 - 0x5cc, 0xffffffff, 0x00000001, 638 + 0x5c4, 0xffffffff, 0x00000001, 639 639 0x7030, 0xffffffff, 0x00000011, 640 640 0x7c30, 0xffffffff, 0x00000011, 641 641 0x6104, 0x01000300, 0x00000000, ··· 719 719 static const u32 wrestler_golden_registers[] = 720 720 { 721 721 0x5eb4, 0xffffffff, 0x00000002, 722 - 0x5cc, 0xffffffff, 0x00000001, 722 + 0x5c4, 0xffffffff, 0x00000001, 723 723 0x7030, 0xffffffff, 0x00000011, 724 724 0x7c30, 0xffffffff, 0x00000011, 725 725 0x6104, 0x01000300, 0x00000000, ··· 5066 5066 case 147: 5067 5067 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 5068 5068 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 5069 + /* reset addr and status */ 5070 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 5071 + if (addr == 0x0 && status == 0x0) 5072 + break; 5069 5073 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 5070 5074 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 5071 5075 addr); 5072 5076 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 5073 5077 status); 5074 5078 cayman_vm_decode_fault(rdev, status, addr); 5075 - /* reset addr and status */ 5076 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 5077 5079 break; 5078 5080 case 176: /* CP_INT in ring buffer */ 5079 5081 case 177: /* CP_INT in IB1 */
-6
drivers/gpu/drm/radeon/rv770_dpm.c
··· 2329 2329 pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss, 2330 2330 ASIC_INTERNAL_MEMORY_SS, 0); 2331 2331 2332 - /* disable ss, causes hangs on some cayman boards */ 2333 - if (rdev->family == CHIP_CAYMAN) { 2334 - pi->sclk_ss = false; 2335 - pi->mclk_ss = false; 2336 - } 2337 - 2338 2332 if (pi->sclk_ss || pi->mclk_ss) 2339 2333 pi->dynamic_ss = true; 2340 2334 else
+4 -2
drivers/gpu/drm/radeon/si.c
··· 6376 6376 case 147: 6377 6377 addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); 6378 6378 status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); 6379 + /* reset addr and status */ 6380 + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 6381 + if (addr == 0x0 && status == 0x0) 6382 + break; 6379 6383 dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); 6380 6384 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", 6381 6385 addr); 6382 6386 dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 6383 6387 status); 6384 6388 si_vm_decode_fault(rdev, status, addr); 6385 - /* reset addr and status */ 6386 - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); 6387 6389 break; 6388 6390 case 176: /* RINGID0 CP_INT */ 6389 6391 radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
+1 -1
drivers/hid/Kconfig
··· 810 810 811 811 config HID_SENSOR_HUB 812 812 tristate "HID Sensors framework support" 813 - depends on HID 813 + depends on HID && HAS_IOMEM 814 814 select MFD_CORE 815 815 default n 816 816 ---help---
+3
drivers/hid/hid-ids.h
··· 323 323 324 324 #define USB_VENDOR_ID_ETURBOTOUCH 0x22b9 325 325 #define USB_DEVICE_ID_ETURBOTOUCH 0x0006 326 + #define USB_DEVICE_ID_ETURBOTOUCH_2968 0x2968 326 327 327 328 #define USB_VENDOR_ID_EZKEY 0x0518 328 329 #define USB_DEVICE_ID_BTC_8193 0x0002 ··· 716 715 717 716 #define USB_VENDOR_ID_PENMOUNT 0x14e1 718 717 #define USB_DEVICE_ID_PENMOUNT_PCI 0x3500 718 + #define USB_DEVICE_ID_PENMOUNT_1610 0x1610 719 + #define USB_DEVICE_ID_PENMOUNT_1640 0x1640 719 720 720 721 #define USB_VENDOR_ID_PETALYNX 0x18b1 721 722 #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037
+2
drivers/hid/hid-rmi.c
··· 428 428 return 0; 429 429 } 430 430 431 + #ifdef CONFIG_PM 431 432 static int rmi_post_reset(struct hid_device *hdev) 432 433 { 433 434 return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); ··· 438 437 { 439 438 return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); 440 439 } 440 + #endif /* CONFIG_PM */ 441 441 442 442 #define RMI4_MAX_PAGE 0xff 443 443 #define RMI4_PAGE_SIZE 0x0100
+15 -10
drivers/hid/hid-sensor-hub.c
··· 159 159 { 160 160 struct hid_sensor_hub_callbacks_list *callback; 161 161 struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); 162 + unsigned long flags; 162 163 163 - spin_lock(&pdata->dyn_callback_lock); 164 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 164 165 list_for_each_entry(callback, &pdata->dyn_callback_list, list) 165 166 if (callback->usage_id == usage_id && 166 167 callback->hsdev == hsdev) { 167 - spin_unlock(&pdata->dyn_callback_lock); 168 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 168 169 return -EINVAL; 169 170 } 170 171 callback = kzalloc(sizeof(*callback), GFP_ATOMIC); 171 172 if (!callback) { 172 - spin_unlock(&pdata->dyn_callback_lock); 173 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 173 174 return -ENOMEM; 174 175 } 175 176 callback->hsdev = hsdev; ··· 178 177 callback->usage_id = usage_id; 179 178 callback->priv = NULL; 180 179 list_add_tail(&callback->list, &pdata->dyn_callback_list); 181 - spin_unlock(&pdata->dyn_callback_lock); 180 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 182 181 183 182 return 0; 184 183 } ··· 189 188 { 190 189 struct hid_sensor_hub_callbacks_list *callback; 191 190 struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); 191 + unsigned long flags; 192 192 193 - spin_lock(&pdata->dyn_callback_lock); 193 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 194 194 list_for_each_entry(callback, &pdata->dyn_callback_list, list) 195 195 if (callback->usage_id == usage_id && 196 196 callback->hsdev == hsdev) { ··· 199 197 kfree(callback); 200 198 break; 201 199 } 202 - spin_unlock(&pdata->dyn_callback_lock); 200 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 203 201 204 202 return 0; 205 203 } ··· 380 378 { 381 379 struct sensor_hub_data *pdata = hid_get_drvdata(hdev); 382 380 struct hid_sensor_hub_callbacks_list *callback; 381 + unsigned long flags; 383 382 384 383 hid_dbg(hdev, " sensor_hub_suspend\n"); 385 - spin_lock(&pdata->dyn_callback_lock); 384 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 386 385 list_for_each_entry(callback, &pdata->dyn_callback_list, list) { 387 386 if (callback->usage_callback->suspend) 388 387 callback->usage_callback->suspend( 389 388 callback->hsdev, callback->priv); 390 389 } 391 - spin_unlock(&pdata->dyn_callback_lock); 390 + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 392 391 393 392 return 0; 394 393 } ··· 398 395 { 399 396 struct sensor_hub_data *pdata = hid_get_drvdata(hdev); 400 397 struct hid_sensor_hub_callbacks_list *callback; 398 + unsigned long flags; 401 399 402 400 hid_dbg(hdev, " sensor_hub_resume\n"); 403 - spin_lock(&pdata->dyn_callback_lock); 401 + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); 404 402 list_for_each_entry(callback, &pdata->dyn_callback_list, list) { 405 403 if (callback->usage_callback->resume) 406 404 callback->usage_callback->resume( 407 405 callback->hsdev, callback->priv); 408 406 } 409 407 spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); 410 408 411 409 return 0; 412 410 } ··· 636 632 if (name == NULL) { 637 633 hid_err(hdev, "Failed MFD device name\n"); 638 634 ret = -ENOMEM; 635 + kfree(hsdev); 639 636 goto err_no_mem; 640 637 } 641 638 sd->hid_sensor_hub_client_devs[
+3
drivers/hid/usbhid/hid-quirks.c
··· 49 49 50 50 { USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II, HID_QUIRK_MULTI_INPUT }, 51 51 { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH, HID_QUIRK_MULTI_INPUT }, 52 + { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH_2968, HID_QUIRK_MULTI_INPUT }, 52 53 { USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD, HID_QUIRK_MULTI_INPUT }, 53 54 { USB_VENDOR_ID_PANTHERLORD, USB_DEVICE_ID_PANTHERLORD_TWIN_USB_JOYSTICK, HID_QUIRK_MULTI_INPUT | HID_QUIRK_SKIP_OUTPUT_REPORTS }, 54 55 { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, ··· 77 76 { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS }, 78 77 { USB_VENDOR_ID_NEXIO, USB_DEVICE_ID_NEXIO_MULTITOUCH_PTI0750, HID_QUIRK_NO_INIT_REPORTS }, 79 78 { USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS }, 79 + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1610, HID_QUIRK_NOGET }, 80 + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1640, HID_QUIRK_NOGET }, 80 81 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS }, 81 82 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS }, 82 83 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS },
+6 -2
drivers/hv/connection.c
··· 339 339 */ 340 340 341 341 do { 342 - hv_begin_read(&channel->inbound); 342 + if (read_state) 343 + hv_begin_read(&channel->inbound); 343 344 channel->onchannel_callback(arg); 344 - bytes_to_read = hv_end_read(&channel->inbound); 345 + if (read_state) 346 + bytes_to_read = hv_end_read(&channel->inbound); 347 + else 348 + bytes_to_read = 0; 345 349 } while (read_state && (bytes_to_read != 0)); 346 350 } else { 347 351 pr_err("no channel callback for relid - %u\n", relid);
+14 -3
drivers/hv/hv_kvp.c
··· 127 127 kvp_respond_to_host(NULL, HV_E_FAIL); 128 128 } 129 129 130 + static void poll_channel(struct vmbus_channel *channel) 131 + { 132 + if (channel->target_cpu != smp_processor_id()) 133 + smp_call_function_single(channel->target_cpu, 134 + hv_kvp_onchannelcallback, 135 + channel, true); 136 + else 137 + hv_kvp_onchannelcallback(channel); 138 + } 139 + 140 + 130 141 static int kvp_handle_handshake(struct hv_kvp_msg *msg) 131 142 { 132 143 int ret = 1; ··· 166 155 kvp_register(dm_reg_value); 167 156 kvp_transaction.active = false; 168 157 if (kvp_transaction.kvp_context) 169 - hv_kvp_onchannelcallback(kvp_transaction.kvp_context); 158 + poll_channel(kvp_transaction.kvp_context); 170 159 } 171 160 return ret; 172 161 } ··· 579 568 580 569 vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, 581 570 VM_PKT_DATA_INBAND, 0); 582 - 571 + poll_channel(channel); 583 572 } 584 573 585 574 /* ··· 614 603 return; 615 604 } 616 605 617 - vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, 606 + vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, 618 607 &requestid); 619 608 620 609 if (recvlen > 0) {
+1 -1
drivers/hv/hv_util.c
··· 319 319 (struct hv_util_service *)dev_id->driver_data; 320 320 int ret; 321 321 322 - srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); 322 + srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); 323 323 if (!srv->recv_buffer) 324 324 return -ENOMEM; 325 325 if (srv->util_init) {
+14 -14
drivers/hwmon/adc128d818.c
··· 239 239 return sprintf(buf, "%u\n", !!(alarms & mask)); 240 240 } 241 241 242 - static SENSOR_DEVICE_ATTR_2(in0_input, S_IWUSR | S_IRUGO, 243 - adc128_show_in, adc128_set_in, 0, 0); 242 + static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, 243 + adc128_show_in, NULL, 0, 0); 244 244 static SENSOR_DEVICE_ATTR_2(in0_min, S_IWUSR | S_IRUGO, 245 245 adc128_show_in, adc128_set_in, 0, 1); 246 246 static SENSOR_DEVICE_ATTR_2(in0_max, S_IWUSR | S_IRUGO, 247 247 adc128_show_in, adc128_set_in, 0, 2); 248 248 249 - static SENSOR_DEVICE_ATTR_2(in1_input, S_IWUSR | S_IRUGO, 250 - adc128_show_in, adc128_set_in, 1, 0); 249 + static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, 250 + adc128_show_in, NULL, 1, 0); 251 251 static SENSOR_DEVICE_ATTR_2(in1_min, S_IWUSR | S_IRUGO, 252 252 adc128_show_in, adc128_set_in, 1, 1); 253 253 static SENSOR_DEVICE_ATTR_2(in1_max, S_IWUSR | S_IRUGO, 254 254 adc128_show_in, adc128_set_in, 1, 2); 255 255 256 - static SENSOR_DEVICE_ATTR_2(in2_input, S_IWUSR | S_IRUGO, 257 - adc128_show_in, adc128_set_in, 2, 0); 256 + static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, 257 + adc128_show_in, NULL, 2, 0); 258 258 static SENSOR_DEVICE_ATTR_2(in2_min, S_IWUSR | S_IRUGO, 259 259 adc128_show_in, adc128_set_in, 2, 1); 260 260 static SENSOR_DEVICE_ATTR_2(in2_max, S_IWUSR | S_IRUGO, 261 261 adc128_show_in, adc128_set_in, 2, 2); 262 262 263 - static SENSOR_DEVICE_ATTR_2(in3_input, S_IWUSR | S_IRUGO, 264 - adc128_show_in, adc128_set_in, 3, 0); 263 + static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, 264 + adc128_show_in, NULL, 3, 0); 265 265 static SENSOR_DEVICE_ATTR_2(in3_min, S_IWUSR | S_IRUGO, 266 266 adc128_show_in, adc128_set_in, 3, 1); 267 267 static SENSOR_DEVICE_ATTR_2(in3_max, S_IWUSR | S_IRUGO, 268 268 adc128_show_in, adc128_set_in, 3, 2); 269 269 270 - static SENSOR_DEVICE_ATTR_2(in4_input, S_IWUSR | S_IRUGO, 271 - adc128_show_in, adc128_set_in, 4, 0); 270 + static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, 271 + adc128_show_in, NULL, 4, 0); 272 272 static SENSOR_DEVICE_ATTR_2(in4_min, S_IWUSR | S_IRUGO, 273 273 adc128_show_in, adc128_set_in, 4, 1); 274 274 static SENSOR_DEVICE_ATTR_2(in4_max, S_IWUSR | S_IRUGO, 275 275 adc128_show_in, adc128_set_in, 4, 2); 276 276 277 - static SENSOR_DEVICE_ATTR_2(in5_input, S_IWUSR | S_IRUGO, 278 - adc128_show_in, adc128_set_in, 5, 0); 277 + static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, 278 + adc128_show_in, NULL, 5, 0); 279 279 static SENSOR_DEVICE_ATTR_2(in5_min, S_IWUSR | S_IRUGO, 280 280 adc128_show_in, adc128_set_in, 5, 1); 281 281 static SENSOR_DEVICE_ATTR_2(in5_max, S_IWUSR | S_IRUGO, 282 282 adc128_show_in, adc128_set_in, 5, 2); 283 283 284 - static SENSOR_DEVICE_ATTR_2(in6_input, S_IWUSR | S_IRUGO, 285 - adc128_show_in, adc128_set_in, 6, 0); 284 + static SENSOR_DEVICE_ATTR_2(in6_input, S_IRUGO, 285 + adc128_show_in, NULL, 6, 0); 286 286 static SENSOR_DEVICE_ATTR_2(in6_min, S_IWUSR | S_IRUGO, 287 287 adc128_show_in, adc128_set_in, 6, 1); 288 288 static SENSOR_DEVICE_ATTR_2(in6_max, S_IWUSR | S_IRUGO,
+8 -6
drivers/hwmon/adm1021.c
··· 185 185 struct adm1021_data *data = dev_get_drvdata(dev); 186 186 struct i2c_client *client = data->client; 187 187 long temp; 188 - int err; 188 + int reg_val, err; 189 189 190 190 err = kstrtol(buf, 10, &temp); 191 191 if (err) ··· 193 193 temp /= 1000; 194 194 195 195 mutex_lock(&data->update_lock); 196 - data->temp_max[index] = clamp_val(temp, -128, 127); 196 + reg_val = clamp_val(temp, -128, 127); 197 + data->temp_max[index] = reg_val * 1000; 197 198 if (!read_only) 198 199 i2c_smbus_write_byte_data(client, ADM1021_REG_TOS_W(index), 199 - data->temp_max[index]); 200 + reg_val); 200 201 mutex_unlock(&data->update_lock); 201 202 202 203 return count; ··· 211 210 struct adm1021_data *data = dev_get_drvdata(dev); 212 211 struct i2c_client *client = data->client; 213 212 long temp; 214 - int err; 213 + int reg_val, err; 215 214 216 215 err = kstrtol(buf, 10, &temp); 217 216 if (err) ··· 219 218 temp /= 1000; 220 219 221 220 mutex_lock(&data->update_lock); 222 - data->temp_min[index] = clamp_val(temp, -128, 127); 221 + reg_val = clamp_val(temp, -128, 127); 222 + data->temp_min[index] = reg_val * 1000; 223 223 if (!read_only) 224 224 i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index), 225 - data->temp_min[index]); 225 + reg_val); 226 226 mutex_unlock(&data->update_lock); 227 227 228 228 return count;
+3
drivers/hwmon/adm1029.c
··· 232 232 /* Update the value */ 233 233 reg = (reg & 0x3F) | (val << 6); 234 234 235 + /* Update the cache */ 236 + data->fan_div[attr->index] = reg; 237 + 235 238 /* Write value */ 236 239 i2c_smbus_write_byte_data(client, 237 240 ADM1029_REG_FAN_DIV[attr->index], reg);
+5 -3
drivers/hwmon/adm1031.c
··· 365 365 if (ret) 366 366 return ret; 367 367 368 + val = clamp_val(val, 0, 127000); 368 369 mutex_lock(&data->update_lock); 369 370 data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]); 370 371 adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr), ··· 395 394 if (ret) 396 395 return ret; 397 396 397 + val = clamp_val(val, 0, 127000); 398 398 mutex_lock(&data->update_lock); 399 399 data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], 400 400 data->pwm[nr]); ··· 698 696 if (ret) 699 697 return ret; 700 698 701 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 699 + val = clamp_val(val, -55000, 127000); 702 700 mutex_lock(&data->update_lock); 703 701 data->temp_min[nr] = TEMP_TO_REG(val); 704 702 adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr), ··· 719 717 if (ret) 720 718 return ret; 721 719 722 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 720 + val = clamp_val(val, -55000, 127000); 723 721 mutex_lock(&data->update_lock); 724 722 data->temp_max[nr] = TEMP_TO_REG(val); 725 723 adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr), ··· 740 738 if (ret) 741 739 return ret; 742 740 743 - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); 741 + val = clamp_val(val, -55000, 127000); 744 742 mutex_lock(&data->update_lock); 745 743 data->temp_crit[nr] = TEMP_TO_REG(val); 746 744 adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr),
+1 -1
drivers/hwmon/amc6821.c
··· 704 704 get_temp_alarm, NULL, IDX_TEMP1_MAX); 705 705 static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, 706 706 get_temp_alarm, NULL, IDX_TEMP1_CRIT); 707 - static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR, 707 + static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, 708 708 get_temp, NULL, IDX_TEMP2_INPUT); 709 709 static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp, 710 710 set_temp, IDX_TEMP2_MIN);
+5 -10
drivers/hwmon/emc2103.c
··· 250 250 if (result < 0) 251 251 return result; 252 252 253 - val = DIV_ROUND_CLOSEST(val, 1000); 254 - if ((val < -63) || (val > 127)) 255 - return -EINVAL; 253 + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); 256 254 257 255 mutex_lock(&data->update_lock); 258 256 data->temp_min[nr] = val; ··· 272 274 if (result < 0) 273 275 return result; 274 276 275 - val = DIV_ROUND_CLOSEST(val, 1000); 276 - if ((val < -63) || (val > 127)) 277 - return -EINVAL; 277 + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); 278 278 279 279 mutex_lock(&data->update_lock); 280 280 data->temp_max[nr] = val; ··· 386 390 { 387 391 struct emc2103_data *data = emc2103_update_device(dev); 388 392 struct i2c_client *client = to_i2c_client(dev); 389 - long rpm_target; 393 + unsigned long rpm_target; 390 394 391 - int result = kstrtol(buf, 10, &rpm_target); 395 + int result = kstrtoul(buf, 10, &rpm_target); 392 396 if (result < 0) 393 397 return result; 394 398 395 399 /* Datasheet states 16384 as maximum RPM target (table 3.2) */ 396 - if ((rpm_target < 0) || (rpm_target > 16384)) 397 - return -EINVAL; 400 + rpm_target = clamp_val(rpm_target, 0, 16384); 398 401 399 402 mutex_lock(&data->update_lock); 400 403
+1 -1
drivers/hwmon/ntc_thermistor.c
··· 512 512 } 513 513 514 514 dev_info(&pdev->dev, "Thermistor type: %s successfully probed.\n", 515 - pdev->name); 515 + pdev_id->name); 516 516 517 517 return 0; 518 518 err_after_sysfs:
-1
drivers/i2c/busses/i2c-sun6i-p2wi.c
··· 22 22 * 23 23 */ 24 24 #include <linux/clk.h> 25 - #include <linux/module.h> 26 25 #include <linux/i2c.h> 27 26 #include <linux/io.h> 28 27 #include <linux/interrupt.h>
+1
drivers/i2c/muxes/Kconfig
··· 40 40 41 41 config I2C_MUX_PCA954x 42 42 tristate "Philips PCA954x I2C Mux/switches" 43 + depends on GPIOLIB 43 44 help 44 45 If you say yes here you get support for the Philips PCA954x 45 46 I2C mux/switch devices.
+2 -5
drivers/iio/accel/hid-sensor-accel-3d.c
··· 110 110 struct accel_3d_state *accel_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 150 151 ret_type = IIO_VAL_INT; 151 152 break; 152 153 case IIO_CHAN_INFO_SAMP_FREQ: 153 - ret = hid_sensor_read_samp_freq_value( 154 + ret_type = hid_sensor_read_samp_freq_value( 154 155 &accel_state->common_attributes, val, val2); 155 - ret_type = IIO_VAL_INT_PLUS_MICRO; 156 156 break; 157 157 case IIO_CHAN_INFO_HYSTERESIS: 158 - ret = hid_sensor_read_raw_hyst_value( 158 + ret_type = hid_sensor_read_raw_hyst_value( 159 159 &accel_state->common_attributes, val, val2); 160 - ret_type = IIO_VAL_INT_PLUS_MICRO; 161 160 break; 162 161 default: 163 162 ret_type = -EINVAL;
+1 -1
drivers/iio/adc/ti_am335x_adc.c
··· 374 374 return -EAGAIN; 375 375 } 376 376 } 377 - map_val = chan->channel + TOTAL_CHANNELS; 377 + map_val = adc_dev->channel_step[chan->scan_index]; 378 378 379 379 /* 380 380 * We check the complete FIFO. We programmed just one entry but in case
+2 -5
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 110 110 struct gyro_3d_state *gyro_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 150 151 ret_type = IIO_VAL_INT; 151 152 break; 152 153 case IIO_CHAN_INFO_SAMP_FREQ: 153 - ret = hid_sensor_read_samp_freq_value( 154 + ret_type = hid_sensor_read_samp_freq_value( 154 155 &gyro_state->common_attributes, val, val2); 155 - ret_type = IIO_VAL_INT_PLUS_MICRO; 156 156 break; 157 157 case IIO_CHAN_INFO_HYSTERESIS: 158 - ret = hid_sensor_read_raw_hyst_value( 158 + ret_type = hid_sensor_read_raw_hyst_value( 159 159 &gyro_state->common_attributes, val, val2); 160 - ret_type = IIO_VAL_INT_PLUS_MICRO; 161 160 break; 162 161 default: 163 162 ret_type = -EINVAL;
+2 -5
drivers/iio/light/hid-sensor-als.c
··· 79 79 struct als_state *als_state = iio_priv(indio_dev); 80 80 int report_id = -1; 81 81 u32 address; 82 - int ret; 83 82 int ret_type; 84 83 s32 poll_value; 85 84 ··· 128 129 ret_type = IIO_VAL_INT; 129 130 break; 130 131 case IIO_CHAN_INFO_SAMP_FREQ: 131 - ret = hid_sensor_read_samp_freq_value( 132 + ret_type = hid_sensor_read_samp_freq_value( 132 133 &als_state->common_attributes, val, val2); 133 - ret_type = IIO_VAL_INT_PLUS_MICRO; 134 134 break; 135 135 case IIO_CHAN_INFO_HYSTERESIS: 136 - ret = hid_sensor_read_raw_hyst_value( 136 + ret_type = hid_sensor_read_raw_hyst_value( 137 137 &als_state->common_attributes, val, val2); 138 - ret_type = IIO_VAL_INT_PLUS_MICRO; 139 138 break; 140 139 default: 141 140 ret_type = -EINVAL;
+2 -5
drivers/iio/light/hid-sensor-prox.c
··· 74 74 struct prox_state *prox_state = iio_priv(indio_dev); 75 75 int report_id = -1; 76 76 u32 address; 77 - int ret; 78 77 int ret_type; 79 78 s32 poll_value; 80 79 ··· 124 125 ret_type = IIO_VAL_INT; 125 126 break; 126 127 case IIO_CHAN_INFO_SAMP_FREQ: 127 - ret = hid_sensor_read_samp_freq_value( 128 + ret_type = hid_sensor_read_samp_freq_value( 128 129 &prox_state->common_attributes, val, val2); 129 - ret_type = IIO_VAL_INT_PLUS_MICRO; 130 130 break; 131 131 case IIO_CHAN_INFO_HYSTERESIS: 132 - ret = hid_sensor_read_raw_hyst_value( 132 + ret_type = hid_sensor_read_raw_hyst_value( 133 133 &prox_state->common_attributes, val, val2); 134 - ret_type = IIO_VAL_INT_PLUS_MICRO; 135 134 break; 136 135 default: 137 136 ret_type = -EINVAL;
+10 -1
drivers/iio/light/tcs3472.c
··· 52 52 53 53 struct tcs3472_data { 54 54 struct i2c_client *client; 55 + struct mutex lock; 55 56 u8 enable; 56 57 u8 control; 57 58 u8 atime; ··· 117 116 118 117 switch (mask) { 119 118 case IIO_CHAN_INFO_RAW: 119 + if (iio_buffer_enabled(indio_dev)) 120 + return -EBUSY; 121 + 122 + mutex_lock(&data->lock); 120 123 ret = tcs3472_req_data(data); 121 - if (ret < 0) 124 + if (ret < 0) { 125 + mutex_unlock(&data->lock); 122 126 return ret; 127 + } 123 128 ret = i2c_smbus_read_word_data(data->client, chan->address); 129 + mutex_unlock(&data->lock); 124 130 if (ret < 0) 125 131 return ret; 126 132 *val = ret; ··· 263 255 data = iio_priv(indio_dev); 264 256 i2c_set_clientdata(client, indio_dev); 265 257 data->client = client; 258 + mutex_init(&data->lock); 266 259 267 260 indio_dev->dev.parent = &client->dev; 268 261 indio_dev->info = &tcs3472_info;
+2 -5
drivers/iio/magnetometer/hid-sensor-magn-3d.c
··· 110 110 struct magn_3d_state *magn_state = iio_priv(indio_dev); 111 111 int report_id = -1; 112 112 u32 address; 113 - int ret; 114 113 int ret_type; 115 114 s32 poll_value; 116 115 ··· 152 153 ret_type = IIO_VAL_INT; 153 154 break; 154 155 case IIO_CHAN_INFO_SAMP_FREQ: 155 - ret = hid_sensor_read_samp_freq_value( 156 + ret_type = hid_sensor_read_samp_freq_value( 156 157 &magn_state->common_attributes, val, val2); 157 - ret_type = IIO_VAL_INT_PLUS_MICRO; 158 158 break; 159 159 case IIO_CHAN_INFO_HYSTERESIS: 160 - ret = hid_sensor_read_raw_hyst_value( 160 + ret_type = hid_sensor_read_raw_hyst_value( 161 161 &magn_state->common_attributes, val, val2); 162 - ret_type = IIO_VAL_INT_PLUS_MICRO; 163 162 break; 164 163 default: 165 164 ret_type = -EINVAL;
+2 -5
drivers/iio/pressure/hid-sensor-press.c
··· 78 78 struct press_state *press_state = iio_priv(indio_dev); 79 79 int report_id = -1; 80 80 u32 address; 81 - int ret; 82 81 int ret_type; 83 82 s32 poll_value; 84 83 ··· 127 128 ret_type = IIO_VAL_INT; 128 129 break; 129 130 case IIO_CHAN_INFO_SAMP_FREQ: 130 - ret = hid_sensor_read_samp_freq_value( 131 + ret_type = hid_sensor_read_samp_freq_value( 131 132 &press_state->common_attributes, val, val2); 132 - ret_type = IIO_VAL_INT_PLUS_MICRO; 133 133 break; 134 134 case IIO_CHAN_INFO_HYSTERESIS: 135 - ret = hid_sensor_read_raw_hyst_value( 135 + ret_type = hid_sensor_read_raw_hyst_value( 136 136 &press_state->common_attributes, val, val2); 137 - ret_type = IIO_VAL_INT_PLUS_MICRO; 138 137 break; 139 138 default: 140 139 ret_type = -EINVAL;
+2 -2
drivers/md/dm-crypt.c
··· 1 1 /* 2 - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> 2 + * Copyright (C) 2003 Jana Saout <jana@saout.de> 3 3 * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org> 4 4 * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved. 5 5 * Copyright (C) 2013 Milan Broz <gmazyland@gmail.com> ··· 1996 1996 module_init(dm_crypt_init); 1997 1997 module_exit(dm_crypt_exit); 1998 1998 1999 - MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); 1999 + MODULE_AUTHOR("Jana Saout <jana@saout.de>"); 2000 2000 MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption"); 2001 2001 MODULE_LICENSE("GPL");
+8 -14
drivers/md/dm-io.c
··· 10 10 #include <linux/device-mapper.h> 11 11 12 12 #include <linux/bio.h> 13 + #include <linux/completion.h> 13 14 #include <linux/mempool.h> 14 15 #include <linux/module.h> 15 16 #include <linux/sched.h> ··· 33 32 struct io { 34 33 unsigned long error_bits; 35 34 atomic_t count; 36 - struct task_struct *sleeper; 35 + struct completion *wait; 37 36 struct dm_io_client *client; 38 37 io_notify_fn callback; 39 38 void *context; ··· 122 121 invalidate_kernel_vmap_range(io->vma_invalidate_address, 123 122 io->vma_invalidate_size); 124 123 125 - if (io->sleeper) 126 - wake_up_process(io->sleeper); 124 + if (io->wait) 125 + complete(io->wait); 127 126 128 127 else { 129 128 unsigned long r = io->error_bits; ··· 388 387 */ 389 388 volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1]; 390 389 struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io)); 390 + DECLARE_COMPLETION_ONSTACK(wait); 391 391 392 392 if (num_regions > 1 && (rw & RW_MASK) != WRITE) { 393 393 WARN_ON(1); ··· 397 395 398 396 io->error_bits = 0; 399 397 atomic_set(&io->count, 1); /* see dispatch_io() */ 400 - io->sleeper = current; 398 + io->wait = &wait; 401 399 io->client = client; 402 400 403 401 io->vma_invalidate_address = dp->vma_invalidate_address; ··· 405 403 406 404 dispatch_io(rw, num_regions, where, dp, io, 1); 407 405 408 - while (1) { 409 - set_current_state(TASK_UNINTERRUPTIBLE); 410 - 411 - if (!atomic_read(&io->count)) 412 - break; 413 - 414 - io_schedule(); 415 - } 416 - set_current_state(TASK_RUNNING); 406 + wait_for_completion_io(&wait); 417 407 418 408 if (error_bits) 419 409 *error_bits = io->error_bits; ··· 428 434 io = mempool_alloc(client->pool, GFP_NOIO); 429 435 io->error_bits = 0; 430 436 atomic_set(&io->count, 1); /* see dispatch_io() */ 431 - io->sleeper = NULL; 437 + io->wait = NULL; 432 438 io->client = client; 433 439 io->callback = fn; 434 440 io->context = context;
+3 -2
drivers/md/dm-mpath.c
··· 1611 1611 1612 1612 spin_lock_irqsave(&m->lock, flags); 1613 1613 1614 - /* pg_init in progress, requeue until done */ 1615 - if (!pg_ready(m)) { 1614 + /* pg_init in progress or no paths available */ 1615 + if (m->pg_init_in_progress || 1616 + (!m->nr_valid_paths && m->queue_if_no_path)) { 1616 1617 busy = 1; 1617 1618 goto out; 1618 1619 }
+2 -2
drivers/md/dm-zero.c
··· 1 1 /* 2 - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> 2 + * Copyright (C) 2003 Jana Saout <jana@saout.de> 3 3 * 4 4 * This file is released under the GPL. 5 5 */ ··· 79 79 module_init(dm_zero_init) 80 80 module_exit(dm_zero_exit) 81 81 82 - MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); 82 + MODULE_AUTHOR("Jana Saout <jana@saout.de>"); 83 83 MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros"); 84 84 MODULE_LICENSE("GPL");
+13 -2
drivers/md/dm.c
··· 54 54 55 55 static DECLARE_WORK(deferred_remove_work, do_deferred_remove); 56 56 57 + static struct workqueue_struct *deferred_remove_workqueue; 58 + 57 59 /* 58 60 * For bio-based dm. 59 61 * One of these is allocated per bio. ··· 278 276 if (r) 279 277 goto out_free_rq_tio_cache; 280 278 279 + deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1); 280 + if (!deferred_remove_workqueue) { 281 + r = -ENOMEM; 282 + goto out_uevent_exit; 283 + } 284 + 281 285 _major = major; 282 286 r = register_blkdev(_major, _name); 283 287 if (r < 0) 284 - goto out_uevent_exit; 288 + goto out_free_workqueue; 285 289 286 290 if (!_major) 287 291 _major = r; 288 292 289 293 return 0; 290 294 295 + out_free_workqueue: 296 + destroy_workqueue(deferred_remove_workqueue); 291 297 out_uevent_exit: 292 298 dm_uevent_exit(); 293 299 out_free_rq_tio_cache: ··· 309 299 static void local_exit(void) 310 300 { 311 301 flush_scheduled_work(); 302 + destroy_workqueue(deferred_remove_workqueue); 312 303 313 304 kmem_cache_destroy(_rq_tio_cache); 314 305 kmem_cache_destroy(_io_cache); ··· 418 407 419 408 if (atomic_dec_and_test(&md->open_count) && 420 409 (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) 421 - schedule_work(&deferred_remove_work); 410 + queue_work(deferred_remove_workqueue, &deferred_remove_work); 422 411 423 412 dm_put(md); 424 413
+7 -2
drivers/pci/pci.c
··· 3135 3135 if (probe) 3136 3136 return 0; 3137 3137 3138 - /* Wait for Transaction Pending bit clean */ 3139 - if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP)) 3138 + /* 3139 + * Wait for Transaction Pending bit to clear. A word-aligned test 3140 + * is used, so we use the control offset rather than status and shift 3141 + * the test bit to match. 3142 + */ 3143 + if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL, 3144 + PCI_AF_STATUS_TP << 8)) 3140 3145 goto clear; 3141 3146 3142 3147 dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");
+2
drivers/phy/Kconfig
··· 112 112 config PHY_SUN4I_USB 113 113 tristate "Allwinner sunxi SoC USB PHY driver" 114 114 depends on ARCH_SUNXI && HAS_IOMEM && OF 115 + depends on RESET_CONTROLLER 115 116 select GENERIC_PHY 116 117 help 117 118 Enable this to support the transceiver that is part of Allwinner ··· 123 122 124 123 config PHY_SAMSUNG_USB2 125 124 tristate "Samsung USB 2.0 PHY driver" 125 + depends on HAS_IOMEM 126 126 select GENERIC_PHY 127 127 select MFD_SYSCON 128 128 help
+4 -3
drivers/phy/phy-core.c
··· 614 614 return phy; 615 615 616 616 put_dev: 617 - put_device(&phy->dev); 618 - ida_remove(&phy_ida, phy->id); 617 + put_device(&phy->dev); /* calls phy_release() which frees resources */ 618 + return ERR_PTR(ret); 619 + 619 620 free_phy: 620 621 kfree(phy); 621 622 return ERR_PTR(ret); ··· 800 799 801 800 phy = to_phy(dev); 802 801 dev_vdbg(dev, "releasing '%s'\n", dev_name(dev)); 803 - ida_remove(&phy_ida, phy->id); 802 + ida_simple_remove(&phy_ida, phy->id); 804 803 kfree(phy); 805 804 } 806 805
+7 -4
drivers/phy/phy-omap-usb2.c
··· 233 233 if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { 234 234 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 235 235 phy->phy_base = devm_ioremap_resource(&pdev->dev, res); 236 - if (!phy->phy_base) 237 - return -ENOMEM; 236 + if (IS_ERR(phy->phy_base)) 237 + return PTR_ERR(phy->phy_base); 238 238 phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT; 239 239 } 240 240 ··· 262 262 otg->phy = &phy->phy; 263 263 264 264 platform_set_drvdata(pdev, phy); 265 - pm_runtime_enable(phy->dev); 266 265 267 266 generic_phy = devm_phy_create(phy->dev, &ops, NULL); 268 267 if (IS_ERR(generic_phy)) ··· 269 270 270 271 phy_set_drvdata(generic_phy, phy); 271 272 273 + pm_runtime_enable(phy->dev); 272 274 phy_provider = devm_of_phy_provider_register(phy->dev, 273 275 of_phy_simple_xlate); 274 - if (IS_ERR(phy_provider)) 276 + if (IS_ERR(phy_provider)) { 277 + pm_runtime_disable(phy->dev); 275 278 return PTR_ERR(phy_provider); 279 + } 276 280 277 281 phy->wkupclk = devm_clk_get(phy->dev, "wkupclk"); 278 282 if (IS_ERR(phy->wkupclk)) { ··· 319 317 if (!IS_ERR(phy->optclk)) 320 318 clk_unprepare(phy->optclk); 321 319 usb_remove_phy(&phy->phy); 320 + pm_runtime_disable(phy->dev); 322 321 323 322 return 0; 324 323 }
+1
drivers/phy/phy-samsung-usb2.c
··· 107 107 #endif 108 108 { }, 109 109 }; 110 + MODULE_DEVICE_TABLE(of, samsung_usb2_phy_of_match); 110 111 111 112 static int samsung_usb2_phy_probe(struct platform_device *pdev) 112 113 {
+1 -1
drivers/pinctrl/berlin/berlin.c
··· 320 320 321 321 regmap = dev_get_regmap(&pdev->dev, NULL); 322 322 if (!regmap) 323 - return PTR_ERR(regmap); 323 + return -ENODEV; 324 324 325 325 pctrl = devm_kzalloc(dev, sizeof(*pctrl), GFP_KERNEL); 326 326 if (!pctrl)
+4
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 211 211 configlen++; 212 212 213 213 pinconfig = kzalloc(configlen * sizeof(*pinconfig), GFP_KERNEL); 214 + if (!pinconfig) { 215 + kfree(*map); 216 + return -ENOMEM; 217 + } 214 218 215 219 if (!of_property_read_u32(node, "allwinner,drive", &val)) { 216 220 u16 strength = (val + 1) * 10;
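The sunxi hunk adds the missing `kzalloc()` failure check and frees the half-built `*map` before returning `-ENOMEM`. The unwind-earlier-allocations-on-late-failure shape, as a hedged userspace sketch (the names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate two dependent buffers; on a late failure, unwind the
 * earlier allocation before returning an error (cf. the sunxi hunk,
 * which kfree()s *map when the pinconfig allocation fails). */
static int build_config(int **map, int **pinconfig, size_t n)
{
	*map = calloc(n, sizeof(**map));
	if (!*map)
		return -1;                /* -ENOMEM in the kernel */
	*pinconfig = calloc(n, sizeof(**pinconfig));
	if (!*pinconfig) {
		free(*map);               /* don't leak the first buffer */
		*map = NULL;
		return -1;
	}
	return 0;
}
```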
+7 -11
drivers/thermal/imx_thermal.c
··· 306 306 { 307 307 struct imx_thermal_data *data = platform_get_drvdata(pdev); 308 308 struct regmap *map; 309 - int t1, t2, n1, n2; 309 + int t1, n1; 310 310 int ret; 311 311 u32 val; 312 312 u64 temp64; ··· 333 333 /* 334 334 * Sensor data layout: 335 335 * [31:20] - sensor value @ 25C 336 - * [19:8] - sensor value of hot 337 - * [7:0] - hot temperature value 338 336 * Use universal formula now and only need sensor value @ 25C 339 337 * slope = 0.4297157 - (0.0015976 * 25C fuse) 340 338 */ 341 339 n1 = val >> 20; 342 - n2 = (val & 0xfff00) >> 8; 343 - t2 = val & 0xff; 344 340 t1 = 25; /* t1 always 25C */ 345 341 346 342 /* ··· 362 366 data->c2 = n1 * data->c1 + 1000 * t1; 363 367 364 368 /* 365 - * Set the default passive cooling trip point to 20 °C below the 366 - * maximum die temperature. Can be changed from userspace. 369 + * Set the default passive cooling trip point, 370 + * can be changed from userspace. 367 371 */ 368 - data->temp_passive = 1000 * (t2 - 20); 372 + data->temp_passive = IMX_TEMP_PASSIVE; 369 373 370 374 /* 371 - * The maximum die temperature is t2, let's give 5 °C cushion 372 - * for noise and possible temperature rise between measurements. 375 + * The maximum die temperature set to 20 C higher than 376 + * IMX_TEMP_PASSIVE. 373 377 */ 374 - data->temp_critical = 1000 * (t2 - 5); 378 + data->temp_critical = 1000 * 20 + data->temp_passive; 375 379 376 380 return 0; 377 381 }
+4 -3
drivers/thermal/of-thermal.c
··· 156 156 157 157 ret = thermal_zone_bind_cooling_device(thermal, 158 158 tbp->trip_id, cdev, 159 - tbp->min, 160 - tbp->max); 159 + tbp->max, 160 + tbp->min); 161 161 if (ret) 162 162 return ret; 163 163 } ··· 712 712 } 713 713 714 714 i = 0; 715 - for_each_child_of_node(child, gchild) 715 + for_each_child_of_node(child, gchild) { 716 716 ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++], 717 717 tz->trips, tz->ntrips); 718 718 if (ret) 719 719 goto free_tbps; 720 + } 720 721 721 722 finish: 722 723 of_node_put(child);
+18 -15
drivers/thermal/thermal_hwmon.c
··· 140 140 return NULL; 141 141 } 142 142 143 + static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz) 144 + { 145 + unsigned long temp; 146 + return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp); 147 + } 148 + 143 149 int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) 144 150 { 145 151 struct thermal_hwmon_device *hwmon; ··· 195 189 if (result) 196 190 goto free_temp_mem; 197 191 198 - if (tz->ops->get_crit_temp) { 199 - unsigned long temperature; 200 - if (!tz->ops->get_crit_temp(tz, &temperature)) { 201 - snprintf(temp->temp_crit.name, 202 - sizeof(temp->temp_crit.name), 192 + if (thermal_zone_crit_temp_valid(tz)) { 193 + snprintf(temp->temp_crit.name, 194 + sizeof(temp->temp_crit.name), 203 195 "temp%d_crit", hwmon->count); 204 - temp->temp_crit.attr.attr.name = temp->temp_crit.name; 205 - temp->temp_crit.attr.attr.mode = 0444; 206 - temp->temp_crit.attr.show = temp_crit_show; 207 - sysfs_attr_init(&temp->temp_crit.attr.attr); 208 - result = device_create_file(hwmon->device, 209 - &temp->temp_crit.attr); 210 - if (result) 211 - goto unregister_input; 212 - } 196 + temp->temp_crit.attr.attr.name = temp->temp_crit.name; 197 + temp->temp_crit.attr.attr.mode = 0444; 198 + temp->temp_crit.attr.show = temp_crit_show; 199 + sysfs_attr_init(&temp->temp_crit.attr.attr); 200 + result = device_create_file(hwmon->device, 201 + &temp->temp_crit.attr); 202 + if (result) 203 + goto unregister_input; 213 204 } 214 205 215 206 mutex_lock(&thermal_hwmon_list_lock); ··· 253 250 } 254 251 255 252 device_remove_file(hwmon->device, &temp->temp_input.attr); 256 - if (tz->ops->get_crit_temp) 253 + if (thermal_zone_crit_temp_valid(tz)) 257 254 device_remove_file(hwmon->device, &temp->temp_crit.attr); 258 255 259 256 mutex_lock(&thermal_hwmon_list_lock);
+1 -1
drivers/thermal/ti-soc-thermal/ti-bandgap.c
··· 1155 1155 /* register shadow for context save and restore */ 1156 1156 bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) * 1157 1157 bgp->conf->sensor_count, GFP_KERNEL); 1158 - if (!bgp) { 1158 + if (!bgp->regval) { 1159 1159 dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); 1160 1160 return ERR_PTR(-ENOMEM); 1161 1161 }
+1 -1
drivers/tty/serial/arc_uart.c
··· 177 177 uart->port.icount.tx++; 178 178 uart->port.x_char = 0; 179 179 sent = 1; 180 - } else if (xmit->tail != xmit->head) { /* TODO: uart_circ_empty */ 180 + } else if (!uart_circ_empty(xmit)) { 181 181 ch = xmit->buf[xmit->tail]; 182 182 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 183 183 uart->port.icount.tx++;
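This and several of the following serial hunks guard the transmit path with `uart_circ_empty()` so a stale byte is never pulled from an empty ring. A minimal userspace model of the head/tail ring (mirroring the kernel's circ_buf convention; `circ_pop()` is an illustrative helper, not a kernel function):

```c
#include <assert.h>

#define UART_XMIT_SIZE 4096  /* power of two, as in the kernel */

struct circ_buf {
	char *buf;
	int head;   /* producer index */
	int tail;   /* consumer index */
};

/* Matches the kernel's uart_circ_empty(): empty when head equals tail. */
#define uart_circ_empty(circ)  ((circ)->head == (circ)->tail)

/* Consume one byte; callers must check uart_circ_empty() first,
 * otherwise stale data gets transmitted -- the bug fixed above. */
static char circ_pop(struct circ_buf *xmit)
{
	char ch = xmit->buf[xmit->tail];
	xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
	return ch;
}
```

The power-of-two size lets the wrap-around be a cheap bitmask instead of a modulo.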
+3
drivers/tty/serial/imx.c
··· 567 567 struct imx_port *sport = (struct imx_port *)port; 568 568 unsigned long temp; 569 569 570 + if (uart_circ_empty(&port->state->xmit)) 571 + return; 572 + 570 573 if (USE_IRDA(sport)) { 571 574 /* half duplex in IrDA mode; have to disable receive mode */ 572 575 temp = readl(sport->port.membase + UCR4);
+2
drivers/tty/serial/ip22zilog.c
··· 603 603 } else { 604 604 struct circ_buf *xmit = &port->state->xmit; 605 605 606 + if (uart_circ_empty(xmit)) 607 + return; 606 608 writeb(xmit->buf[xmit->tail], &channel->data); 607 609 ZSDELAY(); 608 610 ZS_WSYNC(channel);
+5 -3
drivers/tty/serial/m32r_sio.c
··· 266 266 if (!(up->ier & UART_IER_THRI)) { 267 267 up->ier |= UART_IER_THRI; 268 268 serial_out(up, UART_IER, up->ier); 269 - serial_out(up, UART_TX, xmit->buf[xmit->tail]); 270 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 271 - up->port.icount.tx++; 269 + if (!uart_circ_empty(xmit)) { 270 + serial_out(up, UART_TX, xmit->buf[xmit->tail]); 271 + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 272 + up->port.icount.tx++; 273 + } 272 274 } 273 275 while((serial_in(up, UART_LSR) & UART_EMPTY) != UART_EMPTY); 274 276 #else
+3
drivers/tty/serial/pmac_zilog.c
··· 653 653 } else { 654 654 struct circ_buf *xmit = &port->state->xmit; 655 655 656 + if (uart_circ_empty(xmit)) 657 + goto out; 656 658 write_zsdata(uap, xmit->buf[xmit->tail]); 657 659 zssync(uap); 658 660 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); ··· 663 661 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 664 662 uart_write_wakeup(&uap->port); 665 663 } 664 + out: 666 665 pmz_debug("pmz: start_tx() done.\n"); 667 666 } 668 667
+3
drivers/tty/serial/sunsab.c
··· 427 427 struct circ_buf *xmit = &up->port.state->xmit; 428 428 int i; 429 429 430 + if (uart_circ_empty(xmit)) 431 + return; 432 + 430 433 up->interrupt_mask1 &= ~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); 431 434 writeb(up->interrupt_mask1, &up->regs->w.imr1); 432 435
+2
drivers/tty/serial/sunzilog.c
··· 703 703 } else { 704 704 struct circ_buf *xmit = &port->state->xmit; 705 705 706 + if (uart_circ_empty(xmit)) 707 + return; 706 708 writeb(xmit->buf[xmit->tail], &channel->data); 707 709 ZSDELAY(); 708 710 ZS_WSYNC(channel);
+1
drivers/usb/serial/cp210x.c
··· 153 153 { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */ 154 154 { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ 155 155 { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ 156 + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ 156 157 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 157 158 { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ 158 159 { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
+4 -1
drivers/usb/serial/ftdi_sio.c
··· 720 720 { USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, 721 721 { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, 722 722 { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) }, 723 - { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) }, 723 + { USB_DEVICE(TESTO_VID, TESTO_1_PID) }, 724 + { USB_DEVICE(TESTO_VID, TESTO_3_PID) }, 724 725 { USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) }, 725 726 { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) }, 726 727 { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) }, ··· 945 944 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) }, 946 945 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) }, 947 946 { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) }, 947 + /* Infineon Devices */ 948 + { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, 948 949 { } /* Terminating entry */ 949 950 }; 950 951
+8 -1
drivers/usb/serial/ftdi_sio_ids.h
··· 584 584 #define RATOC_PRODUCT_ID_USB60F 0xb020 585 585 586 586 /* 587 + * Infineon Technologies 588 + */ 589 + #define INFINEON_VID 0x058b 590 + #define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ 591 + 592 + /* 587 593 * Acton Research Corp. 588 594 */ 589 595 #define ACTON_VID 0x0647 /* Vendor ID */ ··· 804 798 * Submitted by Colin Leroy 805 799 */ 806 800 #define TESTO_VID 0x128D 807 - #define TESTO_USB_INTERFACE_PID 0x0001 801 + #define TESTO_1_PID 0x0001 802 + #define TESTO_3_PID 0x0003 808 803 809 804 /* 810 805 * Mobility Electronics products.
+2
drivers/usb/serial/option.c
··· 1487 1487 .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1488 1488 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ 1489 1489 .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1490 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ 1491 + .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1490 1492 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, 1491 1493 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, 1492 1494 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },
+6
firmware/Makefile
··· 219 219 obj-y += $(patsubst %,%.gen.o, $(fw-external-y)) 220 220 obj-$(CONFIG_FIRMWARE_IN_KERNEL) += $(patsubst %,%.gen.o, $(fw-shipped-y)) 221 221 222 + ifeq ($(KBUILD_SRC),) 223 + # Makefile.build only creates subdirectories for O= builds, but external 224 + # firmware might live outside the kernel source tree 225 + _dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d))) 226 + endif 227 + 222 228 # Remove .S files and binaries created from ihex 223 229 # (during 'make clean' .config isn't included so they're all in $(fw-shipped-)) 224 230 targets := $(fw-shipped-) $(patsubst $(obj)/%,%, \
+2 -2
fs/ext4/extents_status.c
··· 966 966 continue; 967 967 } 968 968 969 - if (ei->i_es_lru_nr == 0 || ei == locked_ei) 969 + if (ei->i_es_lru_nr == 0 || ei == locked_ei || 970 + !write_trylock(&ei->i_es_lock)) 970 971 continue; 971 972 972 - write_lock(&ei->i_es_lock); 973 973 shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan); 974 974 if (ei->i_es_lru_nr == 0) 975 975 list_del_init(&ei->i_es_lru);
+8 -8
fs/ext4/ialloc.c
··· 338 338 fatal = err; 339 339 } else { 340 340 ext4_error(sb, "bit already cleared for inode %lu", ino); 341 - if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 341 + if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { 342 342 int count; 343 343 count = ext4_free_inodes_count(sb, gdp); 344 344 percpu_counter_sub(&sbi->s_freeinodes_counter, ··· 874 874 goto out; 875 875 } 876 876 877 + BUFFER_TRACE(group_desc_bh, "get_write_access"); 878 + err = ext4_journal_get_write_access(handle, group_desc_bh); 879 + if (err) { 880 + ext4_std_error(sb, err); 881 + goto out; 882 + } 883 + 877 884 /* We may have to initialize the block bitmap if it isn't already */ 878 885 if (ext4_has_group_desc_csum(sb) && 879 886 gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { ··· 915 908 ext4_std_error(sb, err); 916 909 goto out; 917 910 } 918 - } 919 - 920 - BUFFER_TRACE(group_desc_bh, "get_write_access"); 921 - err = ext4_journal_get_write_access(handle, group_desc_bh); 922 - if (err) { 923 - ext4_std_error(sb, err); 924 - goto out; 925 911 } 926 912 927 913 /* Update the relevant bg descriptor fields */
+2 -2
fs/ext4/mballoc.c
··· 752 752 753 753 if (free != grp->bb_free) { 754 754 ext4_grp_locked_error(sb, group, 0, 0, 755 - "%u clusters in bitmap, %u in gd; " 756 - "block bitmap corrupt.", 755 + "block bitmap and bg descriptor " 756 + "inconsistent: %u vs %u free clusters", 757 757 free, grp->bb_free); 758 758 /* 759 759 * If we intend to continue, we consider group descriptor
+28 -32
fs/ext4/super.c
··· 1525 1525 arg = JBD2_DEFAULT_MAX_COMMIT_AGE; 1526 1526 sbi->s_commit_interval = HZ * arg; 1527 1527 } else if (token == Opt_max_batch_time) { 1528 - if (arg == 0) 1529 - arg = EXT4_DEF_MAX_BATCH_TIME; 1530 1528 sbi->s_max_batch_time = arg; 1531 1529 } else if (token == Opt_min_batch_time) { 1532 1530 sbi->s_min_batch_time = arg; ··· 2807 2809 es = sbi->s_es; 2808 2810 2809 2811 if (es->s_error_count) 2810 - ext4_msg(sb, KERN_NOTICE, "error count: %u", 2812 + /* fsck newer than v1.41.13 is needed to clean this condition. */ 2813 + ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u", 2811 2814 le32_to_cpu(es->s_error_count)); 2812 2815 if (es->s_first_error_time) { 2813 - printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d", 2816 + printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d", 2814 2817 sb->s_id, le32_to_cpu(es->s_first_error_time), 2815 2818 (int) sizeof(es->s_first_error_func), 2816 2819 es->s_first_error_func, ··· 2825 2826 printk("\n"); 2826 2827 } 2827 2828 if (es->s_last_error_time) { 2828 - printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d", 2829 + printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d", 2829 2830 sb->s_id, le32_to_cpu(es->s_last_error_time), 2830 2831 (int) sizeof(es->s_last_error_func), 2831 2832 es->s_last_error_func, ··· 3879 3880 goto failed_mount2; 3880 3881 } 3881 3882 } 3882 - 3883 - /* 3884 - * set up enough so that it can read an inode, 3885 - * and create new inode for buddy allocator 3886 - */ 3887 - sbi->s_gdb_count = db_count; 3888 - if (!test_opt(sb, NOLOAD) && 3889 - EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) 3890 - sb->s_op = &ext4_sops; 3891 - else 3892 - sb->s_op = &ext4_nojournal_sops; 3893 - 3894 - ext4_ext_init(sb); 3895 - err = ext4_mb_init(sb); 3896 - if (err) { 3897 - ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", 3898 - err); 3899 - goto failed_mount2; 3900 - } 3901 - 3902 3883 if (!ext4_check_descriptors(sb, &first_not_zeroed)) { 3903 3884 ext4_msg(sb, KERN_ERR, "group descriptors corrupted!"); 3904 - goto failed_mount2a; 3885 + goto failed_mount2; 3905 3886 } 3906 3887 if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) 3907 3888 if (!ext4_fill_flex_info(sb)) { 3908 3889 ext4_msg(sb, KERN_ERR, 3909 3890 "unable to initialize " 3910 3891 "flex_bg meta info!"); 3911 - goto failed_mount2a; 3892 + goto failed_mount2; 3912 3893 } 3913 3894 3895 + sbi->s_gdb_count = db_count; 3914 3896 get_random_bytes(&sbi->s_next_generation, sizeof(u32)); 3915 3897 spin_lock_init(&sbi->s_next_gen_lock); ··· 3926 3946 sbi->s_stripe = ext4_get_stripe_size(sbi); 3927 3947 sbi->s_extent_max_zeroout_kb = 32; 3949 + /* 3950 + * set up enough so that it can read an inode 3951 + */ 3952 + if (!test_opt(sb, NOLOAD) && 3953 + EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) 3954 + sb->s_op = &ext4_sops; 3955 + else 3956 + sb->s_op = &ext4_nojournal_sops; 3929 3957 sb->s_export_op = &ext4_export_ops; 3930 3958 sb->s_xattr = ext4_xattr_handlers; 3931 3959 #ifdef CONFIG_QUOTA ··· 4123 4135 if (err) { 4124 4136 ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for " 4125 4137 "reserved pool", ext4_calculate_resv_clusters(sb)); 4126 - goto failed_mount5; 4138 + goto failed_mount4a; 4127 4139 } 4128 4140 4129 4141 err = ext4_setup_system_zone(sb); 4130 4142 if (err) { 4131 4143 ext4_msg(sb, KERN_ERR, "failed to initialize system " 4132 4144 "zone (%d)", err); 4145 + goto failed_mount4a; 4146 + } 4147 + 4148 + ext4_ext_init(sb); 4149 + err = ext4_mb_init(sb); 4150 + if (err) { 4151 + ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", 4152 + err); 4133 4153 goto failed_mount5; 4134 4155 } ··· 4214 4218 failed_mount7: 4215 4219 ext4_unregister_li_request(sb); 4216 4220 failed_mount6: 4217 - ext4_release_system_zone(sb); 4221 + ext4_mb_release(sb); 4218 4222 failed_mount5: 4223 + ext4_ext_release(sb); 4224 + ext4_release_system_zone(sb);
4225 + failed_mount4a: 4219 4226 dput(sb->s_root); 4220 4227 sb->s_root = NULL; 4221 4228 failed_mount4: ··· 4242 4243 percpu_counter_destroy(&sbi->s_extent_cache_cnt); 4243 4244 if (sbi->s_mmp_tsk) 4244 4245 kthread_stop(sbi->s_mmp_tsk); 4245 - failed_mount2a: 4246 - ext4_mb_release(sb); 4247 4246 failed_mount2: 4248 4247 for (i = 0; i < db_count; i++) 4249 4248 brelse(sbi->s_group_desc[i]); 4250 4249 ext4_kvfree(sbi->s_group_desc); 4251 4250 failed_mount: 4252 - ext4_ext_release(sb); 4253 4251 if (sbi->s_chksum_driver) 4254 4252 crypto_free_shash(sbi->s_chksum_driver); 4255 4253 if (sbi->s_proc) {
+18 -5
fs/f2fs/data.c
··· 608 608 * b. do not use extent cache for better performance 609 609 * c. give the block addresses to blockdev 610 610 */ 611 - static int get_data_block(struct inode *inode, sector_t iblock, 612 - struct buffer_head *bh_result, int create) 611 + static int __get_data_block(struct inode *inode, sector_t iblock, 612 + struct buffer_head *bh_result, int create, bool fiemap) 613 613 { 614 614 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); 615 615 unsigned int blkbits = inode->i_sb->s_blocksize_bits; ··· 637 637 err = 0; 638 638 goto unlock_out; 639 639 } 640 - if (dn.data_blkaddr == NEW_ADDR) 640 + if (dn.data_blkaddr == NEW_ADDR && !fiemap) 641 641 goto put_out; 642 642 643 643 if (dn.data_blkaddr != NULL_ADDR) { ··· 671 671 err = 0; 672 672 goto unlock_out; 673 673 } 674 - if (dn.data_blkaddr == NEW_ADDR) 674 + if (dn.data_blkaddr == NEW_ADDR && !fiemap) 675 675 goto put_out; 676 676 677 677 end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode)); ··· 708 708 return err; 709 709 } 710 710 711 + static int get_data_block(struct inode *inode, sector_t iblock, 712 + struct buffer_head *bh_result, int create) 713 + { 714 + return __get_data_block(inode, iblock, bh_result, create, false); 715 + } 716 + 717 + static int get_data_block_fiemap(struct inode *inode, sector_t iblock, 718 + struct buffer_head *bh_result, int create) 719 + { 720 + return __get_data_block(inode, iblock, bh_result, create, true); 721 + } 722 + 711 723 int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 712 724 u64 start, u64 len) 713 725 { 714 - return generic_block_fiemap(inode, fieinfo, start, len, get_data_block); 726 + return generic_block_fiemap(inode, fieinfo, 727 + start, len, get_data_block_fiemap); 715 728 } 716 729 717 730 static int f2fs_read_data_page(struct file *file, struct page *page)
+1 -1
fs/f2fs/dir.c
··· 376 376 377 377 put_error: 378 378 f2fs_put_page(page, 1); 379 + error: 379 380 /* once the failed inode becomes a bad inode, i_mode is S_IFREG */ 380 381 truncate_inode_pages(&inode->i_data, 0); 381 382 truncate_blocks(inode, 0); 382 383 remove_dirty_dir_inode(inode); 383 - error: 384 384 remove_inode_page(inode); 385 385 return ERR_PTR(err); 386 386 }
+2 -4
fs/f2fs/f2fs.h
··· 342 342 struct dirty_seglist_info *dirty_info; /* dirty segment information */ 343 343 struct curseg_info *curseg_array; /* active segment information */ 344 344 345 - struct list_head wblist_head; /* list of under-writeback pages */ 346 - spinlock_t wblist_lock; /* lock for checkpoint */ 347 - 348 345 block_t seg0_blkaddr; /* block address of 0'th segment */ 349 346 block_t main_blkaddr; /* start block address of main area */ 350 347 block_t ssa_blkaddr; /* start block address of SSA area */ ··· 641 644 */ 642 645 static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid) 643 646 { 644 - WARN_ON((nid >= NM_I(sbi)->max_nid)); 647 + if (unlikely(nid < F2FS_ROOT_INO(sbi))) 648 + return -EINVAL; 645 649 if (unlikely(nid >= NM_I(sbi)->max_nid)) 646 650 return -EINVAL; 647 651 return 0;
+8 -4
fs/f2fs/file.c
··· 659 659 off_start = offset & (PAGE_CACHE_SIZE - 1); 660 660 off_end = (offset + len) & (PAGE_CACHE_SIZE - 1); 661 661 662 + f2fs_lock_op(sbi); 663 + 662 664 for (index = pg_start; index <= pg_end; index++) { 663 665 struct dnode_of_data dn; 664 666 665 - f2fs_lock_op(sbi); 667 + if (index == pg_end && !off_end) 668 + goto noalloc; 669 + 666 670 set_new_dnode(&dn, inode, NULL, NULL, 0); 667 671 ret = f2fs_reserve_block(&dn, index); 668 - f2fs_unlock_op(sbi); 669 672 if (ret) 670 673 break; 671 - 674 + noalloc: 672 675 if (pg_start == pg_end) 673 676 new_size = offset + len; 674 677 else if (index == pg_start && off_start) ··· 686 683 i_size_read(inode) < new_size) { 687 684 i_size_write(inode, new_size); 688 685 mark_inode_dirty(inode); 689 - f2fs_write_inode(inode, NULL); 686 + update_inode_page(inode); 690 687 } 688 + f2fs_unlock_op(sbi); 691 689 692 690 return ret; 693 691 }
+1
fs/f2fs/inode.c
··· 78 78 if (check_nid_range(sbi, inode->i_ino)) { 79 79 f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu", 80 80 (unsigned long) inode->i_ino); 81 + WARN_ON(1); 81 82 return -EINVAL; 82 83 } 83 84
+6 -7
fs/f2fs/namei.c
··· 417 417 } 418 418 419 419 f2fs_set_link(new_dir, new_entry, new_page, old_inode); 420 - down_write(&F2FS_I(old_inode)->i_sem); 421 - F2FS_I(old_inode)->i_pino = new_dir->i_ino; 422 - up_write(&F2FS_I(old_inode)->i_sem); 423 420 424 421 new_inode->i_ctime = CURRENT_TIME; 425 422 down_write(&F2FS_I(new_inode)->i_sem); ··· 445 448 } 446 449 } 447 450 451 + down_write(&F2FS_I(old_inode)->i_sem); 452 + file_lost_pino(old_inode); 453 + up_write(&F2FS_I(old_inode)->i_sem); 454 + 448 455 old_inode->i_ctime = CURRENT_TIME; 449 456 mark_inode_dirty(old_inode); 450 457 ··· 458 457 if (old_dir != new_dir) { 459 458 f2fs_set_link(old_inode, old_dir_entry, 460 459 old_dir_page, new_dir); 461 - down_write(&F2FS_I(old_inode)->i_sem); 462 - F2FS_I(old_inode)->i_pino = new_dir->i_ino; 463 - up_write(&F2FS_I(old_inode)->i_sem); 464 460 update_inode_page(old_inode); 465 461 } else { 466 462 kunmap(old_dir_page); ··· 472 474 return 0; 473 475 474 476 put_out_dir: 475 - f2fs_put_page(new_page, 1); 477 + kunmap(new_page); 478 + f2fs_put_page(new_page, 0); 476 479 out_dir: 477 480 if (old_dir_entry) { 478 481 kunmap(old_dir_page);
+2
fs/f2fs/node.c
··· 42 42 mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> 12; 43 43 res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2); 44 44 } else if (type == DIRTY_DENTS) { 45 + if (sbi->sb->s_bdi->dirty_exceeded) 46 + return false; 45 47 mem_size = get_pages(sbi, F2FS_DIRTY_DENTS); 46 48 res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 1); 47 49 }
+2 -3
fs/f2fs/segment.c
··· 272 272 return -ENOMEM; 273 273 spin_lock_init(&fcc->issue_lock); 274 274 init_waitqueue_head(&fcc->flush_wait_queue); 275 + sbi->sm_info->cmd_control_info = fcc; 275 276 fcc->f2fs_issue_flush = kthread_run(issue_flush_thread, sbi, 276 277 "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev)); 277 278 if (IS_ERR(fcc->f2fs_issue_flush)) { 278 279 err = PTR_ERR(fcc->f2fs_issue_flush); 279 280 kfree(fcc); 281 + sbi->sm_info->cmd_control_info = NULL; 280 282 return err; 281 283 } 282 - sbi->sm_info->cmd_control_info = fcc; 283 284 284 285 return err; 285 286 } ··· 1886 1885 1887 1886 /* init sm info */ 1888 1887 sbi->sm_info = sm_info; 1889 - INIT_LIST_HEAD(&sm_info->wblist_head); 1890 - spin_lock_init(&sm_info->wblist_lock); 1891 1888 sm_info->seg0_blkaddr = le32_to_cpu(raw_super->segment0_blkaddr); 1892 1889 sm_info->main_blkaddr = le32_to_cpu(raw_super->main_blkaddr); 1893 1890 sm_info->segment_count = le32_to_cpu(raw_super->segment_count);
+1 -3
fs/f2fs/super.c
··· 689 689 struct f2fs_sb_info *sbi = F2FS_SB(sb); 690 690 struct inode *inode; 691 691 692 - if (unlikely(ino < F2FS_ROOT_INO(sbi))) 693 - return ERR_PTR(-ESTALE); 694 - if (unlikely(ino >= NM_I(sbi)->max_nid)) 692 + if (check_nid_range(sbi, ino)) 695 693 return ERR_PTR(-ESTALE); 696 694 697 695 /*
+4 -1
fs/jbd2/transaction.c
··· 1588 1588 * to perform a synchronous write. We do this to detect the 1589 1589 * case where a single process is doing a stream of sync 1590 1590 * writes. No point in waiting for joiners in that case. 1591 + * 1592 + * Setting max_batch_time to 0 disables this completely. 1591 1593 */ 1592 1594 pid = current->pid; 1593 - if (handle->h_sync && journal->j_last_sync_writer != pid) { 1595 + if (handle->h_sync && journal->j_last_sync_writer != pid && 1596 + journal->j_max_batch_time) { 1594 1597 u64 commit_time, trans_time; 1595 1598 1596 1599 journal->j_last_sync_writer = pid;
+30
fs/kernfs/mount.c
··· 211 211 kernfs_put(root_kn); 212 212 } 213 213 214 + /** 215 + * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root 216 + * @kernfs_root: the kernfs_root in question 217 + * @ns: the namespace tag 218 + * 219 + * Pin the superblock so the superblock won't be destroyed in subsequent 220 + * operations. This can be used to block ->kill_sb() which may be useful 221 + * for kernfs users which dynamically manage superblocks. 222 + * 223 + * Returns NULL if there's no superblock associated to this kernfs_root, or 224 + * -EINVAL if the superblock is being freed. 225 + */ 226 + struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns) 227 + { 228 + struct kernfs_super_info *info; 229 + struct super_block *sb = NULL; 230 + 231 + mutex_lock(&kernfs_mutex); 232 + list_for_each_entry(info, &root->supers, node) { 233 + if (info->ns == ns) { 234 + sb = info->sb; 235 + if (!atomic_inc_not_zero(&info->sb->s_active)) 236 + sb = ERR_PTR(-EINVAL); 237 + break; 238 + } 239 + } 240 + mutex_unlock(&kernfs_mutex); 241 + return sb; 242 + } 243 + 214 244 void __init kernfs_init(void) 215 245 { 216 246 kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
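The new `kernfs_pin_sb()` above hinges on `atomic_inc_not_zero(&info->sb->s_active)`: the superblock is pinned only while its active count is still live. A small C11-atomics sketch of the increment-unless-zero pattern (the kernel uses its own `atomic_t` primitives; this just shows the idea):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Increment a refcount only if it is still live (non-zero). Returns
 * false when the object is already on its way to destruction -- the
 * same way kernfs_pin_sb() detects a superblock being freed. */
static bool inc_not_zero(atomic_int *ref)
{
	int old = atomic_load(ref);
	do {
		if (old == 0)
			return false;   /* too late: already dying */
	} while (!atomic_compare_exchange_weak(ref, &old, old + 1));
	return true;
}
```

The compare-and-swap loop guarantees the count can never be resurrected from zero, which is what makes the returned pin safe to use.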
+1 -1
fs/nfsd/nfs4xdr.c
··· 2641 2641 { 2642 2642 __be32 *p; 2643 2643 2644 - p = xdr_reserve_space(xdr, 6); 2644 + p = xdr_reserve_space(xdr, 20); 2645 2645 if (!p) 2646 2646 return NULL; 2647 2647 *p++ = htonl(2);
+2
include/acpi/video.h
··· 22 22 extern void acpi_video_unregister_backlight(void); 23 23 extern int acpi_video_get_edid(struct acpi_device *device, int type, 24 24 int device_id, void **edid); 25 + extern bool acpi_video_verify_backlight_support(void); 25 26 #else 26 27 static inline int acpi_video_register(void) { return 0; } 27 28 static inline void acpi_video_unregister(void) { return; } ··· 32 31 { 33 32 return -ENODEV; 34 33 } 34 + static inline bool acpi_video_verify_backlight_support(void) { return false; } 35 35 #endif 36 36 37 37 #endif
+1 -1
include/asm-generic/vmlinux.lds.h
··· 693 693 . = ALIGN(PAGE_SIZE); \ 694 694 *(.data..percpu..page_aligned) \ 695 695 . = ALIGN(cacheline); \ 696 - *(.data..percpu..readmostly) \ 696 + *(.data..percpu..read_mostly) \ 697 697 . = ALIGN(cacheline); \ 698 698 *(.data..percpu) \ 699 699 *(.data..percpu..shared_aligned) \
+2 -1
include/dt-bindings/clock/exynos5420.h
··· 63 63 #define CLK_SCLK_MPHY_IXTAL24 161 64 64 65 65 /* gate clocks */ 66 - #define CLK_ACLK66_PERIC 256 67 66 #define CLK_UART0 257 68 67 #define CLK_UART1 258 69 68 #define CLK_UART2 259 ··· 202 203 #define CLK_MOUT_G3D 641 203 204 #define CLK_MOUT_VPLL 642 204 205 #define CLK_MOUT_MAUDIO0 643 206 + #define CLK_MOUT_USER_ACLK333 644 207 + #define CLK_MOUT_SW_ACLK333 645 205 208 206 209 /* divider clocks */ 207 210 #define CLK_DOUT_PIXEL 768
+1
include/linux/kernfs.h
··· 305 305 struct kernfs_root *root, unsigned long magic, 306 306 bool *new_sb_created, const void *ns); 307 307 void kernfs_kill_sb(struct super_block *sb); 308 + struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns); 308 309 309 310 void kernfs_init(void); 310 311
+2 -2
include/linux/percpu-defs.h
··· 146 146 * Declaration/definition used for per-CPU variables that must be read mostly. 147 147 */ 148 148 #define DECLARE_PER_CPU_READ_MOSTLY(type, name) \ 149 - DECLARE_PER_CPU_SECTION(type, name, "..readmostly") 149 + DECLARE_PER_CPU_SECTION(type, name, "..read_mostly") 150 150 151 151 #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \ 152 - DEFINE_PER_CPU_SECTION(type, name, "..readmostly") 152 + DEFINE_PER_CPU_SECTION(type, name, "..read_mostly") 153 153 154 154 /* 155 155 * Intermodule exports for per-CPU variables. sparse forgets about
+50 -8
kernel/cgroup.c
··· 1648 1648 int flags, const char *unused_dev_name, 1649 1649 void *data) 1650 1650 { 1651 + struct super_block *pinned_sb = NULL; 1652 + struct cgroup_subsys *ss; 1651 1653 struct cgroup_root *root; 1652 1654 struct cgroup_sb_opts opts; 1653 1655 struct dentry *dentry; 1654 1656 int ret; 1657 + int i; 1655 1658 bool new_sb; 1656 1659 1657 1660 /* ··· 1678 1675 cgroup_get(&root->cgrp); 1679 1676 ret = 0; 1680 1677 goto out_unlock; 1678 + } 1679 + 1680 + /* 1681 + * Destruction of cgroup root is asynchronous, so subsystems may 1682 + * still be dying after the previous unmount. Let's drain the 1683 + * dying subsystems. We just need to ensure that the ones 1684 + * unmounted previously finish dying and don't care about new ones 1685 + * starting. Testing ref liveliness is good enough. 1686 + */ 1687 + for_each_subsys(ss, i) { 1688 + if (!(opts.subsys_mask & (1 << i)) || 1689 + ss->root == &cgrp_dfl_root) 1690 + continue; 1691 + 1692 + if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) { 1693 + mutex_unlock(&cgroup_mutex); 1694 + msleep(10); 1695 + ret = restart_syscall(); 1696 + goto out_free; 1697 + } 1698 + cgroup_put(&ss->root->cgrp); 1681 1699 } 1682 1700 1683 1701 for_each_root(root) { ··· 1741 1717 } 1742 1718 1743 1719 /* 1744 - * A root's lifetime is governed by its root cgroup. 1745 - * tryget_live failure indicate that the root is being 1746 - * destroyed. Wait for destruction to complete so that the 1747 - * subsystems are free. We can use wait_queue for the wait 1748 - * but this path is super cold. Let's just sleep for a bit 1749 - * and retry. 1720 + * We want to reuse @root whose lifetime is governed by its 1721 + * ->cgrp. Let's check whether @root is alive and keep it 1722 + * that way. As cgroup_kill_sb() can happen anytime, we 1723 + * want to block it by pinning the sb so that @root doesn't 1724 + * get killed before mount is complete. 1725 + * 1726 + * With the sb pinned, tryget_live can reliably indicate 1727 + * whether @root can be reused. If it's being killed, 1728 + * drain it. We can use wait_queue for the wait but this 1729 + * path is super cold. Let's just sleep a bit and retry. 1750 1730 */ 1751 - if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { 1731 + pinned_sb = kernfs_pin_sb(root->kf_root, NULL); 1732 + if (IS_ERR(pinned_sb) || 1733 + !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { 1752 1734 mutex_unlock(&cgroup_mutex); 1735 + if (!IS_ERR_OR_NULL(pinned_sb)) 1736 + deactivate_super(pinned_sb); 1753 1737 msleep(10); 1754 1738 ret = restart_syscall(); 1755 1739 goto out_free; ··· 1802 1770 CGROUP_SUPER_MAGIC, &new_sb); 1803 1771 if (IS_ERR(dentry) || !new_sb) 1804 1772 cgroup_put(&root->cgrp); 1773 + 1774 + /* 1775 + * If @pinned_sb, we're reusing an existing root and holding an 1776 + * extra ref on its sb. Mount is complete. Put the extra ref. 1777 + */ 1778 + if (pinned_sb) { 1779 + WARN_ON(new_sb); 1780 + deactivate_super(pinned_sb); 1781 + } 1782 + 1805 1783 return dentry; 1806 1784 } ··· 3370 3328 3371 3329 rcu_read_lock(); 3372 3330 css_for_each_child(child, css) { 3373 - if (css->flags & CSS_ONLINE) { 3331 + if (child->flags & CSS_ONLINE) { 3374 3332 ret = true; 3375 3333 break; 3376 3334 }
+19 -1
kernel/cpuset.c
··· 1181 1181 1182 1182 int current_cpuset_is_being_rebound(void) 1183 1183 { 1184 - return task_cs(current) == cpuset_being_rebound; 1184 + int ret; 1185 + 1186 + rcu_read_lock(); 1187 + ret = task_cs(current) == cpuset_being_rebound; 1188 + rcu_read_unlock(); 1189 + 1190 + return ret; 1185 1191 } 1186 1192 1187 1193 static int update_relax_domain_level(struct cpuset *cs, s64 val) ··· 1623 1617 * resources, wait for the previously scheduled operations before 1624 1618 * proceeding, so that we don't end up keep removing tasks added 1625 1619 * after execution capability is restored. 1620 + * 1621 + * cpuset_hotplug_work calls back into cgroup core via 1622 + * cgroup_transfer_tasks() and waiting for it from a cgroupfs 1623 + * operation like this one can lead to a deadlock through kernfs 1624 + * active_ref protection. Let's break the protection. Losing the 1625 + * protection is okay as we check whether @cs is online after 1626 + * grabbing cpuset_mutex anyway. This only happens on the legacy 1627 + * hierarchies. 1626 1628 */ 1629 + css_get(&cs->css); 1630 + kernfs_break_active_protection(of->kn); 1627 1631 flush_work(&cpuset_hotplug_work); 1628 1632 1629 1633 mutex_lock(&cpuset_mutex); ··· 1661 1645 free_trial_cpuset(trialcs); 1662 1646 out_unlock: 1663 1647 mutex_unlock(&cpuset_mutex); 1648 + kernfs_unbreak_active_protection(of->kn); 1649 + css_put(&cs->css); 1664 1650 return retval ?: nbytes; 1665 1651 } 1666 1652
+2 -1
kernel/workqueue.c
··· 3284 3284		}
3285 3285	}
3286 3286
3287 +	dev_set_uevent_suppress(&wq_dev->dev, false);
3287 3288	kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD);
3288 3289	return 0;
3289 3290 }
··· 4880 4879	BUG_ON(!tbl);
4881 4880
4882 4881	for_each_node(node)
4883 -		BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL,
4882 +		BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL,
4884 4883				node_online(node) ? node : NUMA_NO_NODE));
4885 4884
4886 4885	for_each_possible_cpu(cpu) {
-2
mm/mempolicy.c
··· 2139 2139	} else
2140 2140		*new = *old;
2141 2141
2142 -	rcu_read_lock();
2143 2142	if (current_cpuset_is_being_rebound()) {
2144 2143		nodemask_t mems = cpuset_mems_allowed(current);
2145 2144		if (new->flags & MPOL_F_REBINDING)
··· 2146 2147		else
2147 2148			mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE);
2148 2149	}
2149 -	rcu_read_unlock();
2150 2150	atomic_set(&new->refcnt, 1);
2151 2151	return new;
2152 2152 }
+12 -3
scripts/kernel-doc
··· 2073 2073 sub dump_function($$) {
2074 2074     my $prototype = shift;
2075 2075     my $file = shift;
2076 +     my $noret = 0;
2076 2077
2077 2078     $prototype =~ s/^static +//;
2078 2079     $prototype =~ s/^extern +//;
··· 2087 2086     $prototype =~ s/__init_or_module +//;
2088 2087     $prototype =~ s/__must_check +//;
2089 2088     $prototype =~ s/__weak +//;
2090 -     $prototype =~ s/^#\s*define\s+//; #ak added
2089 +     my $define = $prototype =~ s/^#\s*define\s+//; #ak added
2091 2090     $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//;
2092 2091
2093 2092     # Yes, this truly is vile. We are looking for:
··· 2106 2105     # - atomic_set (macro)
2107 2106     # - pci_match_device, __copy_to_user (long return type)
2108 2107
2109 -     if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
2108 +     if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) {
2109 +         # This is an object-like macro, it has no return type and no parameter
2110 +         # list.
2111 +         # Function-like macros are not allowed to have spaces between
2112 +         # declaration_name and opening parenthesis (notice the \s+).
2113 +         $return_type = $1;
2114 +         $declaration_name = $2;
2115 +         $noret = 1;
2116 +     } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
2110 2117         $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
2111 2118         $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
2112 2119         $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
··· 2149 2140     # of warnings goes sufficiently down, the check is only performed in
2150 2141     # verbose mode.
2151 2142     # TODO: always perform the check.
2152 -     if ($verbose) {
2143 +     if ($verbose && !$noret) {
2153 2144         check_return_section($file, $declaration_name, $return_type);
2154 2145
-1
sound/soc/fsl/imx-pcm-dma.c
··· 59 59 {
60 60	return devm_snd_dmaengine_pcm_register(&pdev->dev,
61 61		&imx_dmaengine_pcm_config,
62 -		SND_DMAENGINE_PCM_FLAG_NO_RESIDUE |
63 62		SND_DMAENGINE_PCM_FLAG_COMPAT);
64 63 }
65 64 EXPORT_SYMBOL_GPL(imx_pcm_dma_init);
+1 -1
tools/thermal/tmon/Makefile
··· 21 21 OBJS +=
22 22
23 23 tmon: $(OBJS) Makefile tmon.h
24 -	$(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -lpthread
24 +	$(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -ltinfo -lpthread
25 25
26 26 valgrind: tmon
27 27	sudo valgrind -v --track-origins=yes --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes ./$(TARGET) 1> /dev/null
+25 -1
tools/thermal/tmon/tmon.c
··· 142 142 static void prepare_logging(void)
143 143 {
144 144	int i;
145 +	struct stat logstat;
145 146
146 147	if (!logging)
147 148		return;
··· 152 151		syslog(LOG_ERR, "failed to open log file %s\n", TMON_LOG_FILE);
153 152		return;
154 153	}
154 +
155 +	if (lstat(TMON_LOG_FILE, &logstat) < 0) {
156 +		syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE);
157 +		fclose(tmon_log);
158 +		tmon_log = NULL;
159 +		return;
160 +	}
161 +
162 +	/* The log file must be a regular file owned by us */
163 +	if (S_ISLNK(logstat.st_mode)) {
164 +		syslog(LOG_ERR, "Log file is a symlink. Will not log\n");
165 +		fclose(tmon_log);
166 +		tmon_log = NULL;
167 +		return;
168 +	}
169 +
170 +	if (logstat.st_uid != getuid()) {
171 +		syslog(LOG_ERR, "We don't own the log file. Not logging\n");
172 +		fclose(tmon_log);
173 +		tmon_log = NULL;
174 +		return;
175 +	}
176 +
155 177
156 178	fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n");
157 179	for (i = 0; i < ptdata.nr_tz_sensor; i++) {
··· 355 331	disable_tui();
356 332
357 333	/* change the file mode mask */
358 -	umask(0);
334 +	umask(S_IWGRP | S_IWOTH);
359 335
360 336	/* new SID for the daemon process */
361 337	sid = setsid();