Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 3.9-rc3 into tty-next

+2597 -1655
+1 -5
Documentation/devicetree/bindings/mfd/ab8500.txt
··· 13 13 4 = active high level-sensitive 14 14 8 = active low level-sensitive 15 15 16 - Optional parent device properties: 17 - - reg : contains the PRCMU mailbox address for the AB8500 i2c port 18 - 19 16 The AB8500 consists of a large and varied group of sub-devices: 20 17 21 18 Device IRQ Names Supply Names Description ··· 83 86 - stericsson,amic2-bias-vamic1 : Analoge Mic wishes to use a non-standard Vamic 84 87 - stericsson,earpeice-cmv : Earpeice voltage (only: 950 | 1100 | 1270 | 1580) 85 88 86 - ab8500@5 { 89 + ab8500 { 87 90 compatible = "stericsson,ab8500"; 88 - reg = <5>; /* mailbox 5 is i2c */ 89 91 interrupts = <0 40 0x4>; 90 92 interrupt-controller; 91 93 #interrupt-cells = <2>;
+3
Documentation/devicetree/bindings/tty/serial/of-serial.txt
··· 11 11 - "nvidia,tegra20-uart" 12 12 - "nxp,lpc3220-uart" 13 13 - "ibm,qpace-nwp-serial" 14 + - "altr,16550-FIFO32" 15 + - "altr,16550-FIFO64" 16 + - "altr,16550-FIFO128" 14 17 - "serial" if the port type is unknown. 15 18 - reg : offset and length of the register set for the device. 16 19 - interrupts : should contain uart interrupt.
+59 -6
Documentation/input/alps.txt
··· 3 3 4 4 Introduction 5 5 ------------ 6 + Currently the ALPS touchpad driver supports five protocol versions in use by 7 + ALPS touchpads, called versions 1, 2, 3, 4 and 5. 6 8 7 - Currently the ALPS touchpad driver supports four protocol versions in use by 8 - ALPS touchpads, called versions 1, 2, 3, and 4. Information about the various 9 - protocol versions is contained in the following sections. 9 + Since roughly mid-2010 several new ALPS touchpads have been released and 10 + integrated into a variety of laptops and netbooks. These new touchpads 11 + have enough behavioral differences that the alps_model_data definition 12 + table, describing the properties of the different versions, is no longer 13 + adequate. The design choice was either to re-define the alps_model_data 14 + table, at the risk of regressing existing devices, or to isolate 15 + the new devices outside of the alps_model_data table. The latter design 16 + choice was made. The new touchpad signatures are named "Rushmore", 17 + "Pinnacle", and "Dolphin", which you will see in the alps.c code. 18 + For the purposes of this document, this group of ALPS touchpads will 19 + generically be called "new ALPS touchpads". 20 + 21 + We experimented with probing the ACPI interface _HID (Hardware ID)/_CID 22 + (Compatibility ID) definition as a way to uniquely identify the 23 + different ALPS variants, but there did not appear to be a 1:1 mapping. 24 + In fact, it appeared to be an m:n mapping between the _HID and the 25 + actual hardware type. 10 26 11 27 Detection 12 28 --------- ··· 36 20 report" sequence: E8-E7-E7-E7-E9. The response is the model signature and is 37 21 matched against known models in the alps_model_data_array. 38 22 39 - With protocol versions 3 and 4, the E7 report model signature is always 40 - 73-02-64. To differentiate between these versions, the response from the 41 - "Enter Command Mode" sequence must be inspected as described below. 
23 + For older touchpads supporting protocol versions 3 and 4, the E7 report 24 + model signature is always 73-02-64. To differentiate between these 25 + versions, the response from the "Enter Command Mode" sequence must be 26 + inspected as described below. 27 + 28 + The new ALPS touchpads have an E7 signature of 73-03-50 or 73-03-0A but 29 + seem to be better differentiated by the EC Command Mode response. 42 30 43 31 Command Mode 44 32 ------------ ··· 66 46 address of the register being read, and the third contains the value of the 67 47 register. Registers are written by writing the value one nibble at a time 68 48 using the same encoding used for addresses. 49 + 50 + For the new ALPS touchpads, the EC command is used to enter command 51 + mode. The response in the new ALPS touchpads is significantly different, 52 + and is more important in determining the behavior. This code has been 53 + separated from the original alps_model_data table and put in the 54 + alps_identify function. For example, there seem to be two hardware init 55 + sequences for the "Dolphin" touchpads, as determined by the second byte 56 + of the EC response. 69 57 70 58 Packet Format 71 59 ------------- ··· 215 187 well. 216 188 217 189 So far no v4 devices with tracksticks have been encountered. 190 + 191 + ALPS Absolute Mode - Protocol Version 5 192 + --------------------------------------- 193 + This is basically Protocol Version 3 but with different logic for packet 194 + decode. It uses the same alps_process_touchpad_packet_v3 call with a 195 + specialized decode_fields function pointer to correctly interpret the 196 + packets. This appears to be used only by the Dolphin devices. 
197 + 198 + For single-touch, the 6-byte packet format is: 199 + 200 + byte 0: 1 1 0 0 1 0 0 0 201 + byte 1: 0 x6 x5 x4 x3 x2 x1 x0 202 + byte 2: 0 y6 y5 y4 y3 y2 y1 y0 203 + byte 3: 0 M R L 1 m r l 204 + byte 4: y10 y9 y8 y7 x10 x9 x8 x7 205 + byte 5: 0 z6 z5 z4 z3 z2 z1 z0 206 + 207 + For mt, the format is: 208 + 209 + byte 0: 1 1 1 n3 1 n2 n1 x24 210 + byte 1: 1 y7 y6 y5 y4 y3 y2 y1 211 + byte 2: ? x2 x1 y12 y11 y10 y9 y8 212 + byte 3: 0 x23 x22 x21 x20 x19 x18 x17 213 + byte 4: 0 x9 x8 x7 x6 x5 x4 x3 214 + byte 5: 0 x16 x15 x14 x13 x12 x11 x10
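The v5 single-touch bitmap above can be read as a small decoder. The sketch below is illustrative user-space C, not the in-kernel alps.c code; the struct and function names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical decoder for the 6-byte v5 single-touch packet:
 * byte 1/2 carry x6..x0 and y6..y0, byte 4 carries the high bits
 * (y10..y7 in the top nibble, x10..x7 in the bottom nibble),
 * byte 5 carries pressure, byte 3 the button bits (0 M R L 1 m r l). */
struct v5_report { int x, y, z; int left, right, middle; };

static struct v5_report v5_decode(const uint8_t p[6])
{
	struct v5_report r;

	r.x = (p[1] & 0x7f) | ((p[4] & 0x0f) << 7);
	r.y = (p[2] & 0x7f) | ((p[4] >> 4) << 7);
	r.z = p[5] & 0x7f;
	r.left   = p[3] & 0x01;        /* l */
	r.right  = (p[3] >> 1) & 0x01; /* r */
	r.middle = (p[3] >> 2) & 0x01; /* m */
	return r;
}
```

For example, a packet {0xc8, 0x12, 0x34, 0x09, 0x5a, 0x40} decodes to x = 1298, y = 692, z = 64 with the left button down.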
+77
Documentation/networking/tuntap.txt
··· 105 105 Proto [2 bytes] 106 106 Raw protocol(IP, IPv6, etc) frame. 107 107 108 + 3.3 Multiqueue tuntap interface: 109 + 110 + From version 3.8, Linux supports multiqueue tuntap, which can use multiple 111 + file descriptors (queues) to parallelize packet sending and receiving. The 112 + device allocation is the same as before, and if the user wants to create 113 + multiple queues, TUNSETIFF with the same device name must be called multiple 114 + times with the IFF_MULTI_QUEUE flag. 115 + 116 + char *dev should be the name of the device, queues is the number of queues to 117 + be created, and fds is used to store and return the file descriptors (queues) 118 + created to the caller. Each file descriptor serves as the interface to a 119 + queue which can be accessed by userspace. 120 + 121 + #include <linux/if.h> 122 + #include <linux/if_tun.h> 123 + 124 + int tun_alloc_mq(char *dev, int queues, int *fds) 125 + { 126 + struct ifreq ifr; 127 + int fd, err, i; 128 + 129 + if (!dev) 130 + return -1; 131 + 132 + memset(&ifr, 0, sizeof(ifr)); 133 + /* Flags: IFF_TUN - TUN device (no Ethernet headers) 134 + * IFF_TAP - TAP device 135 + * 136 + * IFF_NO_PI - Do not provide packet information 137 + * IFF_MULTI_QUEUE - Create a queue of multiqueue device 138 + */ 139 + ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE; 140 + strcpy(ifr.ifr_name, dev); 141 + 142 + for (i = 0; i < queues; i++) { 143 + if ((fd = open("/dev/net/tun", O_RDWR)) < 0) { 144 + err = fd; goto err; } 145 + err = ioctl(fd, TUNSETIFF, (void *)&ifr); 146 + if (err) { 147 + close(fd); 148 + goto err; 149 + } 150 + fds[i] = fd; 151 + } 152 + 153 + return 0; 154 + err: 155 + for (--i; i >= 0; i--) 156 + close(fds[i]); 157 + return err; 158 + } 159 + 160 + A new ioctl, TUNSETQUEUE, was introduced to enable or disable a queue. When 161 + calling it with the IFF_DETACH_QUEUE flag, the queue is disabled; when 162 + calling it with the IFF_ATTACH_QUEUE flag, the queue is enabled. A queue is 163 + enabled by default after it is created through TUNSETIFF. 164 + 165 + fd is the file descriptor (queue) that we want to enable or disable; when 166 + enable is true we enable it, otherwise we disable it. 167 + 168 + #include <linux/if.h> 169 + #include <linux/if_tun.h> 170 + 171 + int tun_set_queue(int fd, int enable) 172 + { 173 + struct ifreq ifr; 174 + 175 + memset(&ifr, 0, sizeof(ifr)); 176 + 177 + if (enable) 178 + ifr.ifr_flags = IFF_ATTACH_QUEUE; 179 + else 180 + ifr.ifr_flags = IFF_DETACH_QUEUE; 181 + 182 + return ioctl(fd, TUNSETQUEUE, (void *)&ifr); 183 + } 184 + 108 185 Universal TUN/TAP device driver Frequently Asked Question. 109 186 110 187 1. What platforms are supported by TUN/TAP driver ?
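A user-space caller typically spreads traffic across the descriptors filled in by tun_alloc_mq(). A minimal round-robin picker, a hypothetical helper rather than part of the driver API, might look like:

```c
#include <assert.h>

/* Hypothetical user-space helper: cycle through the queue fds that
 * tun_alloc_mq() returned, selecting one per packet. */
static int next_queue_fd(const int *fds, int nqueues, int *cursor)
{
	int fd = fds[*cursor];

	*cursor = (*cursor + 1) % nqueues;
	return fd;
}
```

Each selected fd can then be written to or read from independently, which is what lets multiple threads send and receive in parallel.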
+1 -1
Documentation/trace/ftrace.txt
··· 1873 1873 1874 1874 status\input | 0 | 1 | else | 1875 1875 --------------+------------+------------+------------+ 1876 - not allocated |(do nothing)| alloc+swap | EINVAL | 1876 + not allocated |(do nothing)| alloc+swap |(do nothing)| 1877 1877 --------------+------------+------------+------------+ 1878 1878 allocated | free | swap | clear | 1879 1879 --------------+------------+------------+------------+
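The corrected state table above can be read as a small decision function (an illustrative sketch of the documented semantics, not ftrace code):

```c
#include <assert.h>

/* Actions taken on a write to the snapshot file, per the table above:
 * rows are the buffer's allocation status, columns the value written. */
enum snap_action { DO_NOTHING, ALLOC_SWAP, FREE, SWAP, CLEAR };

static enum snap_action snapshot_action(int allocated, int input)
{
	if (!allocated)
		return input == 1 ? ALLOC_SWAP : DO_NOTHING;
	if (input == 0)
		return FREE;
	return input == 1 ? SWAP : CLEAR;
}
```

The change in this hunk is exactly the top-right cell: writing a value other than 0 or 1 to an unallocated snapshot now does nothing instead of returning EINVAL.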
+27
MAINTAINERS
··· 4005 4005 S: Maintained 4006 4006 F: drivers/usb/atm/ueagle-atm.c 4007 4007 4008 + INA209 HARDWARE MONITOR DRIVER 4009 + M: Guenter Roeck <linux@roeck-us.net> 4010 + L: lm-sensors@lm-sensors.org 4011 + S: Maintained 4012 + F: Documentation/hwmon/ina209 4013 + F: Documentation/devicetree/bindings/i2c/ina209.txt 4014 + F: drivers/hwmon/ina209.c 4015 + 4016 + INA2XX HARDWARE MONITOR DRIVER 4017 + M: Guenter Roeck <linux@roeck-us.net> 4018 + L: lm-sensors@lm-sensors.org 4019 + S: Maintained 4020 + F: Documentation/hwmon/ina2xx 4021 + F: drivers/hwmon/ina2xx.c 4022 + F: include/linux/platform_data/ina2xx.h 4023 + 4008 4024 INDUSTRY PACK SUBSYSTEM (IPACK) 4009 4025 M: Samuel Iglesias Gonsalvez <siglesias@igalia.com> 4010 4026 M: Jens Taprogge <jens.taprogge@taprogge.org> ··· 5113 5097 S: Maintained 5114 5098 F: Documentation/hwmon/max6650 5115 5099 F: drivers/hwmon/max6650.c 5100 + 5101 + MAX6697 HARDWARE MONITOR DRIVER 5102 + M: Guenter Roeck <linux@roeck-us.net> 5103 + L: lm-sensors@lm-sensors.org 5104 + S: Maintained 5105 + F: Documentation/hwmon/max6697 5106 + F: Documentation/devicetree/bindings/i2c/max6697.txt 5107 + F: drivers/hwmon/max6697.c 5108 + F: include/linux/platform_data/max6697.h 5116 5109 5117 5110 MAXIRADIO FM RADIO RECEIVER DRIVER 5118 5111 M: Hans Verkuil <hverkuil@xs4all.nl> ··· 6437 6412 F: drivers/net/ethernet/qlogic/qla3xxx.* 6438 6413 6439 6414 QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER 6415 + M: Rajesh Borundia <rajesh.borundia@qlogic.com> 6416 + M: Shahed Shaikh <shahed.shaikh@qlogic.com> 6440 6417 M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> 6441 6418 M: Sony Chacko <sony.chacko@qlogic.com> 6442 6419 M: linux-driver@qlogic.com
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 9 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc3 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION*
-7
arch/Kconfig
··· 319 319 select ARCH_WANT_COMPAT_IPC_PARSE_VERSION 320 320 bool 321 321 322 - config HAVE_VIRT_TO_BUS 323 - bool 324 - help 325 - An architecture should select this if it implements the 326 - deprecated interface virt_to_bus(). All new architectures 327 - should probably not select this. 328 - 329 322 config HAVE_ARCH_SECCOMP_FILTER 330 323 bool 331 324 help
+1 -1
arch/alpha/Kconfig
··· 9 9 select HAVE_PERF_EVENTS 10 10 select HAVE_DMA_ATTRS 11 11 select HAVE_GENERIC_HARDIRQS 12 - select HAVE_VIRT_TO_BUS 12 + select VIRT_TO_BUS 13 13 select GENERIC_IRQ_PROBE 14 14 select AUTO_IRQ_AFFINITY if SMP 15 15 select GENERIC_IRQ_SHOW
+8 -5
arch/arm/Kconfig
··· 49 49 select HAVE_REGS_AND_STACK_ACCESS_API 50 50 select HAVE_SYSCALL_TRACEPOINTS 51 51 select HAVE_UID16 52 - select HAVE_VIRT_TO_BUS 52 + select VIRT_TO_BUS 53 53 select KTIME_SCALAR 54 54 select PERF_USE_VMALLOC 55 55 select RTC_LIB ··· 556 556 config ARCH_DOVE 557 557 bool "Marvell Dove" 558 558 select ARCH_REQUIRE_GPIOLIB 559 - select COMMON_CLK_DOVE 560 559 select CPU_V7 561 560 select GENERIC_CLOCKEVENTS 562 561 select MIGHT_HAVE_PCI ··· 1656 1657 accounting to be spread across the timer interval, preventing a 1657 1658 "thundering herd" at every timer tick. 1658 1659 1660 + # The GPIO number here must be sorted by descending number. In case of 1661 + # a multiplatform kernel, we just want the highest value required by the 1662 + # selected platforms. 1659 1663 config ARCH_NR_GPIO 1660 1664 int 1661 1665 default 1024 if ARCH_SHMOBILE || ARCH_TEGRA 1662 - default 355 if ARCH_U8500 1663 - default 264 if MACH_H4700 1664 1666 default 512 if SOC_OMAP5 1667 + default 355 if ARCH_U8500 1665 1668 default 288 if ARCH_VT8500 || ARCH_SUNXI 1669 + default 264 if MACH_H4700 1666 1670 default 0 1667 1671 help 1668 1672 Maximum number of GPIOs in the system. ··· 1889 1887 1890 1888 config XEN 1891 1889 bool "Xen guest support on ARM (EXPERIMENTAL)" 1892 - depends on ARM && OF 1890 + depends on ARM && AEABI && OF 1893 1891 depends on CPU_V7 && !CPU_V6 1892 + depends on !GENERIC_ATOMIC64 1894 1893 help 1895 1894 Say Y if you want to run Linux in a Virtual Machine on Xen on ARM. 1896 1895
+1 -1
arch/arm/Kconfig.debug
··· 492 492 DEBUG_IMX31_UART || \ 493 493 DEBUG_IMX35_UART || \ 494 494 DEBUG_IMX51_UART || \ 495 - DEBUG_IMX50_IMX53_UART || \ 495 + DEBUG_IMX53_UART || \ 496 496 DEBUG_IMX6Q_UART 497 497 default 1 498 498 help
+1 -1
arch/arm/boot/Makefile
··· 115 115 $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \ 116 116 $(obj)/Image System.map "$(INSTALL_PATH)" 117 117 118 - subdir- := bootp compressed 118 + subdir- := bootp compressed dts
+8
arch/arm/boot/dts/armada-370-rd.dts
··· 64 64 status = "okay"; 65 65 /* No CD or WP GPIOs */ 66 66 }; 67 + 68 + usb@d0050000 { 69 + status = "okay"; 70 + }; 71 + 72 + usb@d0051000 { 73 + status = "okay"; 74 + }; 67 75 }; 68 76 };
+2 -3
arch/arm/boot/dts/armada-370-xp.dtsi
··· 31 31 mpic: interrupt-controller@d0020000 { 32 32 compatible = "marvell,mpic"; 33 33 #interrupt-cells = <1>; 34 - #address-cells = <1>; 35 34 #size-cells = <1>; 36 35 interrupt-controller; 37 36 }; ··· 53 54 reg = <0xd0012000 0x100>; 54 55 reg-shift = <2>; 55 56 interrupts = <41>; 56 - reg-io-width = <4>; 57 + reg-io-width = <1>; 57 58 status = "disabled"; 58 59 }; 59 60 serial@d0012100 { ··· 61 62 reg = <0xd0012100 0x100>; 62 63 reg-shift = <2>; 63 64 interrupts = <42>; 64 - reg-io-width = <4>; 65 + reg-io-width = <1>; 65 66 status = "disabled"; 66 67 }; 67 68
+2 -2
arch/arm/boot/dts/armada-xp.dtsi
··· 46 46 reg = <0xd0012200 0x100>; 47 47 reg-shift = <2>; 48 48 interrupts = <43>; 49 - reg-io-width = <4>; 49 + reg-io-width = <1>; 50 50 status = "disabled"; 51 51 }; 52 52 serial@d0012300 { ··· 54 54 reg = <0xd0012300 0x100>; 55 55 reg-shift = <2>; 56 56 interrupts = <44>; 57 - reg-io-width = <4>; 57 + reg-io-width = <1>; 58 58 status = "disabled"; 59 59 }; 60 60
+1 -1
arch/arm/boot/dts/bcm2835.dtsi
··· 105 105 compatible = "fixed-clock"; 106 106 reg = <1>; 107 107 #clock-cells = <0>; 108 - clock-frequency = <150000000>; 108 + clock-frequency = <250000000>; 109 109 }; 110 110 }; 111 111 };
+1 -2
arch/arm/boot/dts/dbx5x0.dtsi
··· 319 319 }; 320 320 }; 321 321 322 - ab8500@5 { 322 + ab8500 { 323 323 compatible = "stericsson,ab8500"; 324 - reg = <5>; /* mailbox 5 is i2c */ 325 324 interrupt-parent = <&intc>; 326 325 interrupts = <0 40 0x4>; 327 326 interrupt-controller;
+5
arch/arm/boot/dts/dove.dtsi
··· 197 197 status = "disabled"; 198 198 }; 199 199 200 + rtc@d8500 { 201 + compatible = "marvell,orion-rtc"; 202 + reg = <0xd8500 0x20>; 203 + }; 204 + 200 205 crypto: crypto@30000 { 201 206 compatible = "marvell,orion-crypto"; 202 207 reg = <0x30000 0x10000>,
+1 -1
arch/arm/boot/dts/href.dtsi
··· 221 221 }; 222 222 }; 223 223 224 - ab8500@5 { 224 + ab8500 { 225 225 ab8500-regulators { 226 226 ab8500_ldo_aux1_reg: ab8500_ldo_aux1 { 227 227 regulator-name = "V-DISPLAY";
+1 -1
arch/arm/boot/dts/hrefv60plus.dts
··· 158 158 }; 159 159 }; 160 160 161 - ab8500@5 { 161 + ab8500 { 162 162 ab8500-regulators { 163 163 ab8500_ldo_aux1_reg: ab8500_ldo_aux1 { 164 164 regulator-name = "V-DISPLAY";
+1 -2
arch/arm/boot/dts/imx53-mba53.dts
··· 42 42 fsl,pins = <689 0x10000 /* DISP1_DRDY */ 43 43 482 0x10000 /* DISP1_HSYNC */ 44 44 489 0x10000 /* DISP1_VSYNC */ 45 - 684 0x10000 /* DISP1_DAT_0 */ 46 45 515 0x10000 /* DISP1_DAT_22 */ 47 46 523 0x10000 /* DISP1_DAT_23 */ 48 - 543 0x10000 /* DISP1_DAT_21 */ 47 + 545 0x10000 /* DISP1_DAT_21 */ 49 48 553 0x10000 /* DISP1_DAT_20 */ 50 49 558 0x10000 /* DISP1_DAT_19 */ 51 50 564 0x10000 /* DISP1_DAT_18 */
-2
arch/arm/boot/dts/kirkwood-dns320.dts
··· 42 42 43 43 ocp@f1000000 { 44 44 serial@12000 { 45 - clock-frequency = <166666667>; 46 45 status = "okay"; 47 46 }; 48 47 49 48 serial@12100 { 50 - clock-frequency = <166666667>; 51 49 status = "okay"; 52 50 }; 53 51 };
-1
arch/arm/boot/dts/kirkwood-dns325.dts
··· 50 50 }; 51 51 }; 52 52 serial@12000 { 53 - clock-frequency = <200000000>; 54 53 status = "okay"; 55 54 }; 56 55 };
-1
arch/arm/boot/dts/kirkwood-dockstar.dts
··· 37 37 }; 38 38 }; 39 39 serial@12000 { 40 - clock-frequency = <200000000>; 41 40 status = "ok"; 42 41 }; 43 42
-1
arch/arm/boot/dts/kirkwood-dreamplug.dts
··· 38 38 }; 39 39 }; 40 40 serial@12000 { 41 - clock-frequency = <200000000>; 42 41 status = "ok"; 43 42 }; 44 43
-1
arch/arm/boot/dts/kirkwood-goflexnet.dts
··· 73 73 }; 74 74 }; 75 75 serial@12000 { 76 - clock-frequency = <200000000>; 77 76 status = "ok"; 78 77 }; 79 78
-1
arch/arm/boot/dts/kirkwood-ib62x0.dts
··· 51 51 }; 52 52 }; 53 53 serial@12000 { 54 - clock-frequency = <200000000>; 55 54 status = "okay"; 56 55 }; 57 56
-1
arch/arm/boot/dts/kirkwood-iconnect.dts
··· 78 78 }; 79 79 }; 80 80 serial@12000 { 81 - clock-frequency = <200000000>; 82 81 status = "ok"; 83 82 }; 84 83
-1
arch/arm/boot/dts/kirkwood-iomega_ix2_200.dts
··· 115 115 }; 116 116 117 117 serial@12000 { 118 - clock-frequency = <200000000>; 119 118 status = "ok"; 120 119 }; 121 120
-1
arch/arm/boot/dts/kirkwood-km_kirkwood.dts
··· 34 34 }; 35 35 36 36 serial@12000 { 37 - clock-frequency = <200000000>; 38 37 status = "ok"; 39 38 }; 40 39
-1
arch/arm/boot/dts/kirkwood-lschlv2.dts
··· 13 13 14 14 ocp@f1000000 { 15 15 serial@12000 { 16 - clock-frequency = <166666667>; 17 16 status = "okay"; 18 17 }; 19 18 };
-1
arch/arm/boot/dts/kirkwood-lsxhl.dts
··· 13 13 14 14 ocp@f1000000 { 15 15 serial@12000 { 16 - clock-frequency = <200000000>; 17 16 status = "okay"; 18 17 }; 19 18 };
-1
arch/arm/boot/dts/kirkwood-mplcec4.dts
··· 90 90 }; 91 91 92 92 serial@12000 { 93 - clock-frequency = <200000000>; 94 93 status = "ok"; 95 94 }; 96 95
-1
arch/arm/boot/dts/kirkwood-ns2-common.dtsi
··· 23 23 }; 24 24 25 25 serial@12000 { 26 - clock-frequency = <166666667>; 27 26 status = "okay"; 28 27 }; 29 28
-1
arch/arm/boot/dts/kirkwood-nsa310.dts
··· 117 117 }; 118 118 119 119 serial@12000 { 120 - clock-frequency = <200000000>; 121 120 status = "ok"; 122 121 }; 123 122
-2
arch/arm/boot/dts/kirkwood-openblocks_a6.dts
··· 18 18 19 19 ocp@f1000000 { 20 20 serial@12000 { 21 - clock-frequency = <200000000>; 22 21 status = "ok"; 23 22 }; 24 23 25 24 serial@12100 { 26 - clock-frequency = <200000000>; 27 25 status = "ok"; 28 26 }; 29 27
-1
arch/arm/boot/dts/kirkwood-topkick.dts
··· 108 108 }; 109 109 110 110 serial@12000 { 111 - clock-frequency = <200000000>; 112 111 status = "ok"; 113 112 }; 114 113
+3 -2
arch/arm/boot/dts/kirkwood.dtsi
··· 38 38 interrupt-controller; 39 39 #interrupt-cells = <2>; 40 40 interrupts = <35>, <36>, <37>, <38>; 41 + clocks = <&gate_clk 7>; 41 42 }; 42 43 43 44 gpio1: gpio@10140 { ··· 50 49 interrupt-controller; 51 50 #interrupt-cells = <2>; 52 51 interrupts = <39>, <40>, <41>; 52 + clocks = <&gate_clk 7>; 53 53 }; 54 54 55 55 serial@12000 { ··· 59 57 reg-shift = <2>; 60 58 interrupts = <33>; 61 59 clocks = <&gate_clk 7>; 62 - /* set clock-frequency in board dts */ 63 60 status = "disabled"; 64 61 }; 65 62 ··· 68 67 reg-shift = <2>; 69 68 interrupts = <34>; 70 69 clocks = <&gate_clk 7>; 71 - /* set clock-frequency in board dts */ 72 70 status = "disabled"; 73 71 }; 74 72 ··· 75 75 compatible = "marvell,kirkwood-rtc", "marvell,orion-rtc"; 76 76 reg = <0x10300 0x20>; 77 77 interrupts = <53>; 78 + clocks = <&gate_clk 7>; 78 79 }; 79 80 80 81 spi@10600 {
+1 -1
arch/arm/boot/dts/orion5x-lacie-ethernet-disk-mini-v2.dts
··· 11 11 12 12 / { 13 13 model = "LaCie Ethernet Disk mini V2"; 14 - compatible = "lacie,ethernet-disk-mini-v2", "marvell-orion5x-88f5182", "marvell,orion5x"; 14 + compatible = "lacie,ethernet-disk-mini-v2", "marvell,orion5x-88f5182", "marvell,orion5x"; 15 15 16 16 memory { 17 17 reg = <0x00000000 0x4000000>; /* 64 MB */
+1 -1
arch/arm/boot/dts/snowball.dts
··· 298 298 }; 299 299 }; 300 300 301 - ab8500@5 { 301 + ab8500 { 302 302 ab8500-regulators { 303 303 ab8500_ldo_aux1_reg: ab8500_ldo_aux1 { 304 304 regulator-name = "V-DISPLAY";
+3
arch/arm/boot/dts/socfpga.dtsi
··· 75 75 compatible = "arm,pl330", "arm,primecell"; 76 76 reg = <0xffe01000 0x1000>; 77 77 interrupts = <0 180 4>; 78 + #dma-cells = <1>; 79 + #dma-channels = <8>; 80 + #dma-requests = <32>; 78 81 }; 79 82 }; 80 83
+1
arch/arm/boot/dts/tegra20.dtsi
··· 118 118 compatible = "arm,cortex-a9-twd-timer"; 119 119 reg = <0x50040600 0x20>; 120 120 interrupts = <1 13 0x304>; 121 + clocks = <&tegra_car 132>; 121 122 }; 122 123 123 124 intc: interrupt-controller {
+1
arch/arm/boot/dts/tegra30.dtsi
··· 119 119 compatible = "arm,cortex-a9-twd-timer"; 120 120 reg = <0x50040600 0x20>; 121 121 interrupts = <1 13 0xf04>; 122 + clocks = <&tegra_car 214>; 122 123 }; 123 124 124 125 intc: interrupt-controller {
+1
arch/arm/configs/mxs_defconfig
··· 116 116 CONFIG_SND_MXS_SOC=y 117 117 CONFIG_SND_SOC_MXS_SGTL5000=y 118 118 CONFIG_USB=y 119 + CONFIG_USB_EHCI_HCD=y 119 120 CONFIG_USB_CHIPIDEA=y 120 121 CONFIG_USB_CHIPIDEA_HOST=y 121 122 CONFIG_USB_STORAGE=y
+2
arch/arm/configs/omap2plus_defconfig
··· 126 126 CONFIG_INPUT_TWL4030_PWRBUTTON=y 127 127 CONFIG_VT_HW_CONSOLE_BINDING=y 128 128 # CONFIG_LEGACY_PTYS is not set 129 + CONFIG_SERIAL_8250=y 130 + CONFIG_SERIAL_8250_CONSOLE=y 129 131 CONFIG_SERIAL_8250_NR_UARTS=32 130 132 CONFIG_SERIAL_8250_EXTENDED=y 131 133 CONFIG_SERIAL_8250_MANY_PORTS=y
+4 -21
arch/arm/include/asm/xen/events.h
··· 2 2 #define _ASM_ARM_XEN_EVENTS_H 3 3 4 4 #include <asm/ptrace.h> 5 + #include <asm/atomic.h> 5 6 6 7 enum ipi_vector { 7 8 XEN_PLACEHOLDER_VECTOR, ··· 16 15 return raw_irqs_disabled_flags(regs->ARM_cpsr); 17 16 } 18 17 19 - /* 20 - * We cannot use xchg because it does not support 8-byte 21 - * values. However it is safe to use {ldr,dtd}exd directly because all 22 - * platforms which Xen can run on support those instructions. 23 - */ 24 - static inline xen_ulong_t xchg_xen_ulong(xen_ulong_t *ptr, xen_ulong_t val) 25 - { 26 - xen_ulong_t oldval; 27 - unsigned int tmp; 28 - 29 - wmb(); 30 - asm volatile("@ xchg_xen_ulong\n" 31 - "1: ldrexd %0, %H0, [%3]\n" 32 - " strexd %1, %2, %H2, [%3]\n" 33 - " teq %1, #0\n" 34 - " bne 1b" 35 - : "=&r" (oldval), "=&r" (tmp) 36 - : "r" (val), "r" (ptr) 37 - : "memory", "cc"); 38 - return oldval; 39 - } 18 + #define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((ptr), \ 19 + atomic64_t, \ 20 + counter), (val)) 40 21 41 22 #endif /* _ASM_ARM_XEN_EVENTS_H */
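The container_of() pattern used here, recovering the enclosing atomic64_t from a pointer to its counter field so that atomic64_xchg() can be applied, can be sketched outside the kernel as follows (a minimal stand-in, not the kernel's actual headers):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of in the spirit of the kernel macro: step back
 * from a member pointer to the start of the enclosing struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-in for the kernel's atomic64_t. */
struct my_atomic64 { long long counter; };
```

Given a `long long *` that is known to point at the `counter` member, `container_of(p, struct my_atomic64, counter)` yields the address of the whole struct, which is why the hunk can hand a plain `xen_ulong_t *` to the atomic64 API.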
+1
arch/arm/mach-at91/board-foxg20.c
··· 176 176 /* If you choose to use a pin other than PB16 it needs to be 3.3V */ 177 177 .pin = AT91_PIN_PB16, 178 178 .is_open_drain = 1, 179 + .ext_pullup_enable_pin = -EINVAL, 179 180 }; 180 181 181 182 static struct platform_device w1_device = {
+1
arch/arm/mach-at91/board-stamp9g20.c
··· 188 188 static struct w1_gpio_platform_data w1_gpio_pdata = { 189 189 .pin = AT91_PIN_PA29, 190 190 .is_open_drain = 1, 191 + .ext_pullup_enable_pin = -EINVAL, 191 192 }; 192 193 193 194 static struct platform_device w1_device = {
+1 -1
arch/arm/mach-imx/clk-imx6q.c
··· 172 172 static struct clk_onecell_data clk_data; 173 173 174 174 static enum mx6q_clks const clks_init_on[] __initconst = { 175 - mmdc_ch0_axi, rom, 175 + mmdc_ch0_axi, rom, pll1_sys, 176 176 }; 177 177 178 178 static struct clk_div_table clk_enet_ref_table[] = {
+9 -9
arch/arm/mach-imx/headsmp.S
··· 26 26 27 27 #ifdef CONFIG_PM 28 28 /* 29 - * The following code is located into the .data section. This is to 30 - * allow phys_l2x0_saved_regs to be accessed with a relative load 31 - * as we are running on physical address here. 29 + * The following code must assume it is running from physical address 30 + * where absolute virtual addresses to the data section have to be 31 + * turned into relative ones. 32 32 */ 33 - .data 34 - .align 35 33 36 34 #ifdef CONFIG_CACHE_L2X0 37 35 .macro pl310_resume 38 - ldr r2, phys_l2x0_saved_regs 36 + adr r0, l2x0_saved_regs_offset 37 + ldr r2, [r0] 38 + add r2, r2, r0 39 39 ldr r0, [r2, #L2X0_R_PHY_BASE] @ get physical base of l2x0 40 40 ldr r1, [r2, #L2X0_R_AUX_CTRL] @ get aux_ctrl value 41 41 str r1, [r0, #L2X0_AUX_CTRL] @ restore aux_ctrl ··· 43 43 str r1, [r0, #L2X0_CTRL] @ re-enable L2 44 44 .endm 45 45 46 - .globl phys_l2x0_saved_regs 47 - phys_l2x0_saved_regs: 48 - .long 0 46 + l2x0_saved_regs_offset: 47 + .word l2x0_saved_regs - . 48 + 49 49 #else 50 50 .macro pl310_resume 51 51 .endm
-15
arch/arm/mach-imx/pm-imx6q.c
··· 22 22 #include "common.h" 23 23 #include "hardware.h" 24 24 25 - extern unsigned long phys_l2x0_saved_regs; 26 - 27 25 static int imx6q_suspend_finish(unsigned long val) 28 26 { 29 27 cpu_do_idle(); ··· 55 57 56 58 void __init imx6q_pm_init(void) 57 59 { 58 - /* 59 - * The l2x0 core code provides an infrastucture to save and restore 60 - * l2x0 registers across suspend/resume cycle. But because imx6q 61 - * retains L2 content during suspend and needs to resume L2 before 62 - * MMU is enabled, it can only utilize register saving support and 63 - * have to take care of restoring on its own. So we save physical 64 - * address of the data structure used by l2x0 core to save registers, 65 - * and later restore the necessary ones in imx6q resume entry. 66 - */ 67 - #ifdef CONFIG_CACHE_L2X0 68 - phys_l2x0_saved_regs = __pa(&l2x0_saved_regs); 69 - #endif 70 - 71 60 suspend_set_ops(&imx6q_pm_ops); 72 61 }
+1
arch/arm/mach-ixp4xx/vulcan-setup.c
··· 163 163 164 164 static struct w1_gpio_platform_data vulcan_w1_gpio_pdata = { 165 165 .pin = 14, 166 + .ext_pullup_enable_pin = -EINVAL, 166 167 }; 167 168 168 169 static struct platform_device vulcan_w1_gpio = {
+18 -7
arch/arm/mach-kirkwood/board-dt.c
··· 41 41 42 42 struct device_node *np = of_find_compatible_node( 43 43 NULL, NULL, "marvell,kirkwood-gating-clock"); 44 - 45 44 struct of_phandle_args clkspec; 45 + struct clk *clk; 46 46 47 47 clkspec.np = np; 48 48 clkspec.args_count = 1; 49 - 50 - clkspec.args[0] = CGC_BIT_GE0; 51 - orion_clkdev_add(NULL, "mv643xx_eth_port.0", 52 - of_clk_get_from_provider(&clkspec)); 53 49 54 50 clkspec.args[0] = CGC_BIT_PEX0; 55 51 orion_clkdev_add("0", "pcie", ··· 55 59 orion_clkdev_add("1", "pcie", 56 60 of_clk_get_from_provider(&clkspec)); 57 61 58 - clkspec.args[0] = CGC_BIT_GE1; 59 - orion_clkdev_add(NULL, "mv643xx_eth_port.1", 62 + clkspec.args[0] = CGC_BIT_SDIO; 63 + orion_clkdev_add(NULL, "mvsdio", 60 64 of_clk_get_from_provider(&clkspec)); 65 + 66 + /* 67 + * The ethernet interfaces forget the MAC address assigned by 68 + * u-boot if the clocks are turned off. Until proper DT support 69 + * is available we always enable them for now. 70 + */ 71 + clkspec.args[0] = CGC_BIT_GE0; 72 + clk = of_clk_get_from_provider(&clkspec); 73 + orion_clkdev_add(NULL, "mv643xx_eth_port.0", clk); 74 + clk_prepare_enable(clk); 75 + 76 + clkspec.args[0] = CGC_BIT_GE1; 77 + clk = of_clk_get_from_provider(&clkspec); 78 + orion_clkdev_add(NULL, "mv643xx_eth_port.1", clk); 79 + clk_prepare_enable(clk); 61 80 } 62 81 63 82 static void __init kirkwood_of_clk_init(void)
+1 -1
arch/arm/mach-mxs/icoll.c
··· 100 100 .xlate = irq_domain_xlate_onecell, 101 101 }; 102 102 103 - void __init icoll_of_init(struct device_node *np, 103 + static void __init icoll_of_init(struct device_node *np, 104 104 struct device_node *interrupt_parent) 105 105 { 106 106 /*
+5 -5
arch/arm/mach-mxs/mach-mxs.c
··· 402 402 { 403 403 enable_clk_enet_out(); 404 404 update_fec_mac_prop(OUI_CRYSTALFONTZ); 405 + 406 + mxsfb_pdata.mode_list = cfa10049_video_modes; 407 + mxsfb_pdata.mode_count = ARRAY_SIZE(cfa10049_video_modes); 408 + mxsfb_pdata.default_bpp = 32; 409 + mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT; 405 410 } 406 411 407 412 static void __init cfa10037_init(void) 408 413 { 409 414 enable_clk_enet_out(); 410 415 update_fec_mac_prop(OUI_CRYSTALFONTZ); 411 - 412 - mxsfb_pdata.mode_list = cfa10049_video_modes; 413 - mxsfb_pdata.mode_count = ARRAY_SIZE(cfa10049_video_modes); 414 - mxsfb_pdata.default_bpp = 32; 415 - mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT; 416 416 } 417 417 418 418 static void __init apf28_init(void)
+1
arch/arm/mach-mxs/mm.c
··· 18 18 19 19 #include <mach/mx23.h> 20 20 #include <mach/mx28.h> 21 + #include <mach/common.h> 21 22 22 23 /* 23 24 * Define the MX23 memory map.
+1
arch/arm/mach-mxs/ocotp.c
··· 19 19 #include <asm/processor.h> /* for cpu_relax() */ 20 20 21 21 #include <mach/mxs.h> 22 + #include <mach/common.h> 22 23 23 24 #define OCOTP_WORD_OFFSET 0x20 24 25 #define OCOTP_WORD_COUNT 0x20
+2
arch/arm/mach-omap1/common.h
··· 31 31 32 32 #include <plat/i2c.h> 33 33 34 + #include <mach/irqs.h> 35 + 34 36 #if defined(CONFIG_ARCH_OMAP730) || defined(CONFIG_ARCH_OMAP850) 35 37 void omap7xx_map_io(void); 36 38 #else
-6
arch/arm/mach-omap2/Kconfig
··· 311 311 default y 312 312 select OMAP_PACKAGE_CBB 313 313 select REGULATOR_FIXED_VOLTAGE if REGULATOR 314 - select SERIAL_8250 315 - select SERIAL_8250_CONSOLE 316 - select SERIAL_CORE_CONSOLE 317 314 318 315 config MACH_OMAP_ZOOM3 319 316 bool "OMAP3630 Zoom3 board" ··· 318 321 default y 319 322 select OMAP_PACKAGE_CBP 320 323 select REGULATOR_FIXED_VOLTAGE if REGULATOR 321 - select SERIAL_8250 322 - select SERIAL_8250_CONSOLE 323 - select SERIAL_CORE_CONSOLE 324 324 325 325 config MACH_CM_T35 326 326 bool "CompuLab CM-T35/CM-T3730 modules"
+2
arch/arm/mach-omap2/board-generic.c
··· 102 102 .init_irq = omap_intc_of_init, 103 103 .handle_irq = omap3_intc_handle_irq, 104 104 .init_machine = omap_generic_init, 105 + .init_late = omap3_init_late, 105 106 .init_time = omap3_sync32k_timer_init, 106 107 .dt_compat = omap3_boards_compat, 107 108 .restart = omap3xxx_restart, ··· 120 119 .init_irq = omap_intc_of_init, 121 120 .handle_irq = omap3_intc_handle_irq, 122 121 .init_machine = omap_generic_init, 122 + .init_late = omap3_init_late, 123 123 .init_time = omap3_secure_sync32k_timer_init, 124 124 .dt_compat = omap3_gp_boards_compat, 125 125 .restart = omap3xxx_restart,
+2
arch/arm/mach-omap2/board-rx51.c
··· 17 17 #include <linux/io.h> 18 18 #include <linux/gpio.h> 19 19 #include <linux/leds.h> 20 + #include <linux/usb/phy.h> 20 21 #include <linux/usb/musb.h> 21 22 #include <linux/platform_data/spi-omap2-mcspi.h> 22 23 ··· 99 98 sdrc_params = nokia_get_sdram_timings(); 100 99 omap_sdrc_init(sdrc_params, sdrc_params); 101 100 101 + usb_bind_phy("musb-hdrc.0.auto", 0, "twl4030_usb"); 102 102 usb_musb_init(&musb_board_data); 103 103 rx51_peripherals_init(); 104 104
-1
arch/arm/mach-omap2/common.h
··· 108 108 void omap3630_init_late(void); 109 109 void am35xx_init_late(void); 110 110 void ti81xx_init_late(void); 111 - void omap4430_init_late(void); 112 111 int omap2_common_pm_late_init(void); 113 112 114 113 #if defined(CONFIG_SOC_OMAP2420) || defined(CONFIG_SOC_OMAP2430)
+3 -3
arch/arm/mach-omap2/gpmc.c
··· 1122 1122 /* TODO: remove, see function definition */ 1123 1123 gpmc_convert_ps_to_ns(gpmc_t); 1124 1124 1125 - /* Now the GPMC is initialised, unreserve the chip-selects */ 1126 - gpmc_cs_map = 0; 1127 - 1128 1125 return 0; 1129 1126 } 1130 1127 ··· 1379 1382 1380 1383 if (IS_ERR_VALUE(gpmc_setup_irq())) 1381 1384 dev_warn(gpmc_dev, "gpmc_setup_irq failed\n"); 1385 + 1386 + /* Now the GPMC is initialised, unreserve the chip-selects */ 1387 + gpmc_cs_map = 0; 1382 1388 1383 1389 rc = gpmc_probe_dt(pdev); 1384 1390 if (rc < 0) {
+5 -4
arch/arm/mach-omap2/mux.c
··· 211 211 return -EINVAL; 212 212 } 213 213 214 - pr_err("%s: Could not find signal %s\n", __func__, muxname); 215 - 216 214 return -ENODEV; 217 215 } 218 216 ··· 231 233 232 234 return mux_mode; 233 235 } 236 + 237 + pr_err("%s: Could not find signal %s\n", __func__, muxname); 234 238 235 239 return -ENODEV; 236 240 } ··· 739 739 list_for_each_entry(e, &partition->muxmodes, node) { 740 740 struct omap_mux *m = &e->mux; 741 741 742 - (void)debugfs_create_file(m->muxnames[0], S_IWUSR, mux_dbg_dir, 743 - m, &omap_mux_dbg_signal_fops); 742 + (void)debugfs_create_file(m->muxnames[0], S_IWUSR | S_IRUGO, 743 + mux_dbg_dir, m, 744 + &omap_mux_dbg_signal_fops); 744 745 } 745 746 } 746 747
+1
arch/arm/mach-pxa/raumfeld.c
··· 505 505 .pin = GPIO_ONE_WIRE, 506 506 .is_open_drain = 0, 507 507 .enable_external_pullup = w1_enable_external_pullup, 508 + .ext_pullup_enable_pin = -EINVAL, 508 509 }; 509 510 510 511 struct platform_device raumfeld_w1_gpio_device = {
+1 -1
arch/arm/mach-spear3xx/spear3xx.c
··· 14 14 #define pr_fmt(fmt) "SPEAr3xx: " fmt 15 15 16 16 #include <linux/amba/pl022.h> 17 - #include <linux/amba/pl08x.h> 17 + #include <linux/amba/pl080.h> 18 18 #include <linux/io.h> 19 19 #include <plat/pl080.h> 20 20 #include <mach/generic.h>
+3 -2
arch/arm/mm/dma-mapping.c
··· 342 342 { 343 343 struct dma_pool *pool = &atomic_pool; 344 344 pgprot_t prot = pgprot_dmacoherent(pgprot_kernel); 345 + gfp_t gfp = GFP_KERNEL | GFP_DMA; 345 346 unsigned long nr_pages = pool->size >> PAGE_SHIFT; 346 347 unsigned long *bitmap; 347 348 struct page *page; ··· 362 361 ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page, 363 362 atomic_pool_init); 364 363 else 365 - ptr = __alloc_remap_buffer(NULL, pool->size, GFP_KERNEL, prot, 366 - &page, atomic_pool_init); 364 + ptr = __alloc_remap_buffer(NULL, pool->size, gfp, prot, &page, 365 + atomic_pool_init); 367 366 if (ptr) { 368 367 int i; 369 368
+5 -2
arch/arm/plat-orion/addr-map.c
··· 157 157 u32 size = readl(ddr_window_cpu_base + DDR_SIZE_CS_OFF(i)); 158 158 159 159 /* 160 - * Chip select enabled? 160 + * We only take care of entries for which the chip 161 + * select is enabled, and that don't have high base 162 + * address bits set (devices can only access the first 163 + * 32 bits of the memory). 161 164 */ 162 - if (size & 1) { 165 + if ((size & 1) && !(base & 0xF)) { 163 166 struct mbus_dram_window *w; 164 167 165 168 w = &orion_mbus_dram_info.cs[cs++];
+1 -1
arch/arm/plat-spear/Kconfig
··· 10 10 11 11 config ARCH_SPEAR13XX 12 12 bool "ST SPEAr13xx with Device Tree" 13 - select ARCH_HAVE_CPUFREQ 13 + select ARCH_HAS_CPUFREQ 14 14 select ARM_GIC 15 15 select CPU_V7 16 16 select GPIO_SPEAR_SPICS
+1 -1
arch/avr32/Kconfig
··· 7 7 select HAVE_OPROFILE 8 8 select HAVE_KPROBES 9 9 select HAVE_GENERIC_HARDIRQS 10 - select HAVE_VIRT_TO_BUS 10 + select VIRT_TO_BUS 11 11 select GENERIC_IRQ_PROBE 12 12 select GENERIC_ATOMIC64 13 13 select HARDIRQS_SW_RESEND
+1 -1
arch/blackfin/Kconfig
··· 33 33 select ARCH_HAVE_CUSTOM_GPIO_H 34 34 select ARCH_WANT_OPTIONAL_GPIOLIB 35 35 select HAVE_UID16 36 - select HAVE_VIRT_TO_BUS 36 + select VIRT_TO_BUS 37 37 select ARCH_WANT_IPC_PARSE_VERSION 38 38 select HAVE_GENERIC_HARDIRQS 39 39 select GENERIC_ATOMIC64
+1 -1
arch/cris/Kconfig
··· 43 43 select GENERIC_ATOMIC64 44 44 select HAVE_GENERIC_HARDIRQS 45 45 select HAVE_UID16 46 - select HAVE_VIRT_TO_BUS 46 + select VIRT_TO_BUS 47 47 select ARCH_WANT_IPC_PARSE_VERSION 48 48 select GENERIC_IRQ_SHOW 49 49 select GENERIC_IOMAP
+1 -1
arch/frv/Kconfig
··· 6 6 select HAVE_PERF_EVENTS 7 7 select HAVE_UID16 8 8 select HAVE_GENERIC_HARDIRQS 9 - select HAVE_VIRT_TO_BUS 9 + select VIRT_TO_BUS 10 10 select GENERIC_IRQ_SHOW 11 11 select HAVE_DEBUG_BUGVERBOSE 12 12 select ARCH_HAVE_NMI_SAFE_CMPXCHG
+1 -1
arch/h8300/Kconfig
··· 5 5 select HAVE_GENERIC_HARDIRQS 6 6 select GENERIC_ATOMIC64 7 7 select HAVE_UID16 8 - select HAVE_VIRT_TO_BUS 8 + select VIRT_TO_BUS 9 9 select ARCH_WANT_IPC_PARSE_VERSION 10 10 select GENERIC_IRQ_SHOW 11 11 select GENERIC_CPU_DEVICES
+1 -1
arch/ia64/Kconfig
··· 26 26 select HAVE_MEMBLOCK 27 27 select HAVE_MEMBLOCK_NODE_MAP 28 28 select HAVE_VIRT_CPU_ACCOUNTING 29 - select HAVE_VIRT_TO_BUS 29 + select VIRT_TO_BUS 30 30 select ARCH_DISCARD_MEMBLOCK 31 31 select GENERIC_IRQ_PROBE 32 32 select GENERIC_PENDING_IRQ if SMP
+1 -1
arch/m32r/Kconfig
··· 10 10 select ARCH_WANT_IPC_PARSE_VERSION 11 11 select HAVE_DEBUG_BUGVERBOSE 12 12 select HAVE_GENERIC_HARDIRQS 13 - select HAVE_VIRT_TO_BUS 13 + select VIRT_TO_BUS 14 14 select GENERIC_IRQ_PROBE 15 15 select GENERIC_IRQ_SHOW 16 16 select GENERIC_ATOMIC64
+2 -2
arch/m32r/include/uapi/asm/stat.h
··· 63 63 long long st_size; 64 64 unsigned long st_blksize; 65 65 66 - #if defined(__BIG_ENDIAN) 66 + #if defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN) 67 67 unsigned long __pad4; /* future possible st_blocks high bits */ 68 68 unsigned long st_blocks; /* Number 512-byte blocks allocated. */ 69 - #elif defined(__LITTLE_ENDIAN) 69 + #elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN) 70 70 unsigned long st_blocks; /* Number 512-byte blocks allocated. */ 71 71 unsigned long __pad4; /* future possible st_blocks high bits */ 72 72 #else
+1 -1
arch/m68k/Kconfig
··· 8 8 select GENERIC_IRQ_SHOW 9 9 select GENERIC_ATOMIC64 10 10 select HAVE_UID16 11 - select HAVE_VIRT_TO_BUS 11 + select VIRT_TO_BUS 12 12 select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS 13 13 select GENERIC_CPU_DEVICES 14 14 select GENERIC_STRNCPY_FROM_USER if MMU
-1
arch/m68k/Kconfig.machine
··· 310 310 config SOM5282EM 311 311 bool "EMAC.Inc SOM5282EM board support" 312 312 depends on M528x 313 - select EMAC_INC 314 313 help 315 314 Support for the EMAC.Inc SOM5282EM module. 316 315
+5 -5
arch/m68k/include/asm/MC68328.h
··· 293 293 /* 294 294 * Here go the bitmasks themselves 295 295 */ 296 - #define IMR_MSPIM (1 << SPIM _IRQ_NUM) /* Mask SPI Master interrupt */ 296 + #define IMR_MSPIM (1 << SPIM_IRQ_NUM) /* Mask SPI Master interrupt */ 297 297 #define IMR_MTMR2 (1 << TMR2_IRQ_NUM) /* Mask Timer 2 interrupt */ 298 298 #define IMR_MUART (1 << UART_IRQ_NUM) /* Mask UART interrupt */ 299 299 #define IMR_MWDT (1 << WDT_IRQ_NUM) /* Mask Watchdog Timer interrupt */ ··· 327 327 #define IWR_ADDR 0xfffff308 328 328 #define IWR LONG_REF(IWR_ADDR) 329 329 330 - #define IWR_SPIM (1 << SPIM _IRQ_NUM) /* SPI Master interrupt */ 330 + #define IWR_SPIM (1 << SPIM_IRQ_NUM) /* SPI Master interrupt */ 331 331 #define IWR_TMR2 (1 << TMR2_IRQ_NUM) /* Timer 2 interrupt */ 332 332 #define IWR_UART (1 << UART_IRQ_NUM) /* UART interrupt */ 333 333 #define IWR_WDT (1 << WDT_IRQ_NUM) /* Watchdog Timer interrupt */ ··· 357 357 #define ISR_ADDR 0xfffff30c 358 358 #define ISR LONG_REF(ISR_ADDR) 359 359 360 - #define ISR_SPIM (1 << SPIM _IRQ_NUM) /* SPI Master interrupt */ 360 + #define ISR_SPIM (1 << SPIM_IRQ_NUM) /* SPI Master interrupt */ 361 361 #define ISR_TMR2 (1 << TMR2_IRQ_NUM) /* Timer 2 interrupt */ 362 362 #define ISR_UART (1 << UART_IRQ_NUM) /* UART interrupt */ 363 363 #define ISR_WDT (1 << WDT_IRQ_NUM) /* Watchdog Timer interrupt */ ··· 391 391 #define IPR_ADDR 0xfffff310 392 392 #define IPR LONG_REF(IPR_ADDR) 393 393 394 - #define IPR_SPIM (1 << SPIM _IRQ_NUM) /* SPI Master interrupt */ 394 + #define IPR_SPIM (1 << SPIM_IRQ_NUM) /* SPI Master interrupt */ 395 395 #define IPR_TMR2 (1 << TMR2_IRQ_NUM) /* Timer 2 interrupt */ 396 396 #define IPR_UART (1 << UART_IRQ_NUM) /* UART interrupt */ 397 397 #define IPR_WDT (1 << WDT_IRQ_NUM) /* Watchdog Timer interrupt */ ··· 757 757 758 758 /* 'EZ328-compatible definitions */ 759 759 #define TCN_ADDR TCN1_ADDR 760 - #define TCN TCN 760 + #define TCN TCN1 761 761 762 762 /* 763 763 * Timer Unit 1 and 2 Status Registers
+3
arch/m68k/kernel/setup_no.c
··· 57 57 void (*mach_halt)(void); 58 58 void (*mach_power_off)(void); 59 59 60 + #ifdef CONFIG_M68000 61 + #define CPU_NAME "MC68000" 62 + #endif 60 63 #ifdef CONFIG_M68328 61 64 #define CPU_NAME "MC68328" 62 65 #endif
+1 -1
arch/m68k/mm/init.c
··· 188 188 } 189 189 } 190 190 191 - #if !defined(CONFIG_SUN3) && !defined(CONFIG_COLDFIRE) 191 + #if defined(CONFIG_MMU) && !defined(CONFIG_SUN3) && !defined(CONFIG_COLDFIRE) 192 192 /* insert pointer tables allocated so far into the tablelist */ 193 193 init_pointer_table((unsigned long)kernel_pg_dir); 194 194 for (i = 0; i < PTRS_PER_PGD; i++) {
+1 -1
arch/m68k/platform/coldfire/m528x.c
··· 69 69 u8 port; 70 70 71 71 /* make sure PUAPAR is set for UART0 and UART1 */ 72 - port = readb(MCF5282_GPIO_PUAPAR); 72 + port = readb(MCFGPIO_PUAPAR); 73 73 port |= 0x03 | (0x03 << 2); 74 74 writeb(port, MCFGPIO_PUAPAR); 75 75 }
+1 -1
arch/microblaze/Kconfig
··· 19 19 select HAVE_DEBUG_KMEMLEAK 20 20 select IRQ_DOMAIN 21 21 select HAVE_GENERIC_HARDIRQS 22 - select HAVE_VIRT_TO_BUS 22 + select VIRT_TO_BUS 23 23 select GENERIC_IRQ_PROBE 24 24 select GENERIC_IRQ_SHOW 25 25 select GENERIC_PCI_IOMAP
+1 -1
arch/mips/Kconfig
··· 38 38 select GENERIC_CLOCKEVENTS 39 39 select GENERIC_CMOS_UPDATE 40 40 select HAVE_MOD_ARCH_SPECIFIC 41 - select HAVE_VIRT_TO_BUS 41 + select VIRT_TO_BUS 42 42 select MODULES_USE_ELF_REL if MODULES 43 43 select MODULES_USE_ELF_RELA if MODULES && 64BIT 44 44 select CLONE_BACKWARDS
+1 -1
arch/mn10300/Kconfig
··· 8 8 select HAVE_ARCH_KGDB 9 9 select GENERIC_ATOMIC64 10 10 select HAVE_NMI_WATCHDOG if MN10300_WD_TIMER 11 - select HAVE_VIRT_TO_BUS 11 + select VIRT_TO_BUS 12 12 select GENERIC_CLOCKEVENTS 13 13 select MODULES_USE_ELF_RELA 14 14 select OLD_SIGSUSPEND3
+1 -2
arch/openrisc/Kconfig
··· 9 9 select OF_EARLY_FLATTREE 10 10 select IRQ_DOMAIN 11 11 select HAVE_MEMBLOCK 12 - select ARCH_WANT_OPTIONAL_GPIOLIB 12 + select ARCH_REQUIRE_GPIOLIB 13 13 select HAVE_ARCH_TRACEHOOK 14 14 select HAVE_GENERIC_HARDIRQS 15 - select HAVE_VIRT_TO_BUS 16 15 select GENERIC_IRQ_CHIP 17 16 select GENERIC_IRQ_PROBE 18 17 select GENERIC_IRQ_SHOW
+1 -1
arch/parisc/Kconfig
··· 21 21 select GENERIC_STRNCPY_FROM_USER 22 22 select SYSCTL_ARCH_UNALIGN_ALLOW 23 23 select HAVE_MOD_ARCH_SPECIFIC 24 - select HAVE_VIRT_TO_BUS 24 + select VIRT_TO_BUS 25 25 select MODULES_USE_ELF_RELA 26 26 select CLONE_BACKWARDS 27 27 select TTY # Needed for pdc_cons.c
+1 -1
arch/powerpc/Kconfig
··· 98 98 select HAVE_FUNCTION_GRAPH_TRACER 99 99 select SYSCTL_EXCEPTION_TRACE 100 100 select ARCH_WANT_OPTIONAL_GPIOLIB 101 - select HAVE_VIRT_TO_BUS if !PPC64 101 + select VIRT_TO_BUS if !PPC64 102 102 select HAVE_IDE 103 103 select HAVE_IOREMAP_PROT 104 104 select HAVE_EFFICIENT_UNALIGNED_ACCESS
+1 -1
arch/s390/Kconfig
··· 134 134 select HAVE_SYSCALL_WRAPPERS 135 135 select HAVE_UID16 if 32BIT 136 136 select HAVE_VIRT_CPU_ACCOUNTING 137 - select HAVE_VIRT_TO_BUS 137 + select VIRT_TO_BUS 138 138 select INIT_ALL_POSSIBLE 139 139 select KTIME_SCALAR if 32BIT 140 140 select MODULES_USE_ELF_RELA
+1
arch/s390/include/asm/cpu_mf.h
··· 12 12 #ifndef _ASM_S390_CPU_MF_H 13 13 #define _ASM_S390_CPU_MF_H 14 14 15 + #include <linux/errno.h> 15 16 #include <asm/facility.h> 16 17 17 18 #define CPU_MF_INT_SF_IAE (1 << 31) /* invalid entry address */
+1 -1
arch/score/Kconfig
··· 12 12 select GENERIC_CPU_DEVICES 13 13 select GENERIC_CLOCKEVENTS 14 14 select HAVE_MOD_ARCH_SPECIFIC 15 - select HAVE_VIRT_TO_BUS 15 + select VIRT_TO_BUS 16 16 select MODULES_USE_ELF_REL 17 17 select CLONE_BACKWARDS 18 18
+1 -1
arch/tile/Kconfig
··· 17 17 select GENERIC_IRQ_SHOW 18 18 select HAVE_DEBUG_BUGVERBOSE 19 19 select HAVE_SYSCALL_WRAPPERS if TILEGX 20 - select HAVE_VIRT_TO_BUS 20 + select VIRT_TO_BUS 21 21 select SYS_HYPERVISOR 22 22 select ARCH_HAVE_NMI_SAFE_CMPXCHG 23 23 select GENERIC_CLOCKEVENTS
+1 -1
arch/um/drivers/chan.h
··· 37 37 extern int console_open_chan(struct line *line, struct console *co); 38 38 extern void deactivate_chan(struct chan *chan, int irq); 39 39 extern void reactivate_chan(struct chan *chan, int irq); 40 - extern void chan_enable_winch(struct chan *chan, struct tty_struct *tty); 40 + extern void chan_enable_winch(struct chan *chan, struct tty_port *port); 41 41 extern int enable_chan(struct line *line); 42 42 extern void close_chan(struct line *line); 43 43 extern int chan_window_size(struct line *line,
+2 -2
arch/um/drivers/chan_kern.c
··· 122 122 return err; 123 123 } 124 124 125 - void chan_enable_winch(struct chan *chan, struct tty_struct *tty) 125 + void chan_enable_winch(struct chan *chan, struct tty_port *port) 126 126 { 127 127 if (chan && chan->primary && chan->ops->winch) 128 - register_winch(chan->fd, tty); 128 + register_winch(chan->fd, port); 129 129 } 130 130 131 131 static void line_timer_cb(struct work_struct *work)
+6 -6
arch/um/drivers/chan_user.c
··· 216 216 } 217 217 } 218 218 219 - static int winch_tramp(int fd, struct tty_struct *tty, int *fd_out, 219 + static int winch_tramp(int fd, struct tty_port *port, int *fd_out, 220 220 unsigned long *stack_out) 221 221 { 222 222 struct winch_data data; ··· 271 271 return err; 272 272 } 273 273 274 - void register_winch(int fd, struct tty_struct *tty) 274 + void register_winch(int fd, struct tty_port *port) 275 275 { 276 276 unsigned long stack; 277 277 int pid, thread, count, thread_fd = -1; ··· 281 281 return; 282 282 283 283 pid = tcgetpgrp(fd); 284 - if (is_skas_winch(pid, fd, tty)) { 285 - register_winch_irq(-1, fd, -1, tty, 0); 284 + if (is_skas_winch(pid, fd, port)) { 285 + register_winch_irq(-1, fd, -1, port, 0); 286 286 return; 287 287 } 288 288 289 289 if (pid == -1) { 290 - thread = winch_tramp(fd, tty, &thread_fd, &stack); 290 + thread = winch_tramp(fd, port, &thread_fd, &stack); 291 291 if (thread < 0) 292 292 return; 293 293 294 - register_winch_irq(thread_fd, fd, thread, tty, stack); 294 + register_winch_irq(thread_fd, fd, thread, port, stack); 295 295 296 296 count = write(thread_fd, &c, sizeof(c)); 297 297 if (count != sizeof(c))
+3 -3
arch/um/drivers/chan_user.h
··· 38 38 unsigned short *cols_out); 39 39 extern void generic_free(void *data); 40 40 41 - struct tty_struct; 42 - extern void register_winch(int fd, struct tty_struct *tty); 41 + struct tty_port; 42 + extern void register_winch(int fd, struct tty_port *port); 43 43 extern void register_winch_irq(int fd, int tty_fd, int pid, 44 - struct tty_struct *tty, unsigned long stack); 44 + struct tty_port *port, unsigned long stack); 45 45 46 46 #define __channel_help(fn, prefix) \ 47 47 __uml_help(fn, prefix "[0-9]*=<channel description>\n" \
+24 -18
arch/um/drivers/line.c
··· 299 299 return ret; 300 300 301 301 if (!line->sigio) { 302 - chan_enable_winch(line->chan_out, tty); 302 + chan_enable_winch(line->chan_out, port); 303 303 line->sigio = 1; 304 304 } 305 305 ··· 309 309 return 0; 310 310 } 311 311 312 + static void unregister_winch(struct tty_struct *tty); 313 + 314 + static void line_destruct(struct tty_port *port) 315 + { 316 + struct tty_struct *tty = tty_port_tty_get(port); 317 + struct line *line = tty->driver_data; 318 + 319 + if (line->sigio) { 320 + unregister_winch(tty); 321 + line->sigio = 0; 322 + } 323 + } 324 + 312 325 static const struct tty_port_operations line_port_ops = { 313 326 .activate = line_activate, 327 + .destruct = line_destruct, 314 328 }; 315 329 316 330 int line_open(struct tty_struct *tty, struct file *filp) ··· 346 332 tty->driver_data = line; 347 333 348 334 return 0; 349 - } 350 - 351 - static void unregister_winch(struct tty_struct *tty); 352 - 353 - void line_cleanup(struct tty_struct *tty) 354 - { 355 - struct line *line = tty->driver_data; 356 - 357 - if (line->sigio) { 358 - unregister_winch(tty); 359 - line->sigio = 0; 360 - } 361 335 } 362 336 363 337 void line_close(struct tty_struct *tty, struct file * filp) ··· 597 595 int fd; 598 596 int tty_fd; 599 597 int pid; 600 - struct tty_struct *tty; 598 + struct tty_port *port; 601 599 unsigned long stack; 602 600 struct work_struct work; 603 601 }; ··· 651 649 goto out; 652 650 } 653 651 } 654 - tty = winch->tty; 652 + tty = tty_port_tty_get(winch->port); 655 653 if (tty != NULL) { 656 654 line = tty->driver_data; 657 655 if (line != NULL) { ··· 659 657 &tty->winsize.ws_col); 660 658 kill_pgrp(tty->pgrp, SIGWINCH, 1); 661 659 } 660 + tty_kref_put(tty); 662 661 } 663 662 out: 664 663 if (winch->fd != -1) ··· 667 664 return IRQ_HANDLED; 668 665 } 669 666 670 - void register_winch_irq(int fd, int tty_fd, int pid, struct tty_struct *tty, 667 + void register_winch_irq(int fd, int tty_fd, int pid, struct tty_port *port, 671 668 unsigned long stack) 672 669 { 673 670 struct winch *winch; ··· 682 679 .fd = fd, 683 680 .tty_fd = tty_fd, 684 681 .pid = pid, 685 - .tty = tty, 682 + .port = port, 686 683 .stack = stack }); 687 684 688 685 if (um_request_irq(WINCH_IRQ, fd, IRQ_READ, winch_interrupt, ··· 711 708 { 712 709 struct list_head *ele, *next; 713 710 struct winch *winch; 711 + struct tty_struct *wtty; 714 712 715 713 spin_lock(&winch_handler_lock); 716 714 717 715 list_for_each_safe(ele, next, &winch_handlers) { 718 716 winch = list_entry(ele, struct winch, list); 719 - if (winch->tty == tty) { 717 + wtty = tty_port_tty_get(winch->port); 718 + if (wtty == tty) { 720 719 free_winch(winch); 721 720 break; 722 721 } 722 + tty_kref_put(wtty); 723 723 } 724 724 spin_unlock(&winch_handler_lock); 725 725 }
+2
arch/um/drivers/net_kern.c
··· 218 218 spin_lock_irqsave(&lp->lock, flags); 219 219 220 220 len = (*lp->write)(lp->fd, skb, lp); 221 + skb_tx_timestamp(skb); 221 222 222 223 if (len == skb->len) { 223 224 dev->stats.tx_packets++; ··· 282 281 static const struct ethtool_ops uml_net_ethtool_ops = { 283 282 .get_drvinfo = uml_net_get_drvinfo, 284 283 .get_link = ethtool_op_get_link, 284 + .get_ts_info = ethtool_op_get_ts_info, 285 285 }; 286 286 287 287 static void uml_net_user_timer_expire(unsigned long _conn)
-1
arch/um/drivers/ssl.c
··· 105 105 .throttle = line_throttle, 106 106 .unthrottle = line_unthrottle, 107 107 .install = ssl_install, 108 - .cleanup = line_cleanup, 109 108 .hangup = line_hangup, 110 109 }; 111 110
-1
arch/um/drivers/stdio_console.c
··· 110 110 .set_termios = line_set_termios, 111 111 .throttle = line_throttle, 112 112 .unthrottle = line_unthrottle, 113 - .cleanup = line_cleanup, 114 113 .hangup = line_hangup, 115 114 }; 116 115
+1 -1
arch/um/os-Linux/signal.c
··· 15 15 #include <sysdep/mcontext.h> 16 16 #include "internal.h" 17 17 18 - void (*sig_info[NSIG])(int, siginfo_t *, struct uml_pt_regs *) = { 18 + void (*sig_info[NSIG])(int, struct siginfo *, struct uml_pt_regs *) = { 19 19 [SIGTRAP] = relay_signal, 20 20 [SIGFPE] = relay_signal, 21 21 [SIGILL] = relay_signal,
+2
arch/um/os-Linux/start_up.c
··· 15 15 #include <sys/mman.h> 16 16 #include <sys/stat.h> 17 17 #include <sys/wait.h> 18 + #include <sys/time.h> 19 + #include <sys/resource.h> 18 20 #include <asm/unistd.h> 19 21 #include <init.h> 20 22 #include <os.h>
+1 -1
arch/unicore32/Kconfig
··· 9 9 select GENERIC_ATOMIC64 10 10 select HAVE_KERNEL_LZO 11 11 select HAVE_KERNEL_LZMA 12 - select HAVE_VIRT_TO_BUS 12 + select VIRT_TO_BUS 13 13 select ARCH_HAVE_CUSTOM_GPIO_H 14 14 select GENERIC_FIND_FIRST_BIT 15 15 select GENERIC_IRQ_PROBE
+1 -1
arch/x86/Kconfig
··· 112 112 select GENERIC_STRNLEN_USER 113 113 select HAVE_CONTEXT_TRACKING if X86_64 114 114 select HAVE_IRQ_TIME_ACCOUNTING 115 - select HAVE_VIRT_TO_BUS 115 + select VIRT_TO_BUS 116 116 select MODULES_USE_ELF_REL if X86_32 117 117 select MODULES_USE_ELF_RELA if X86_64 118 118 select CLONE_BACKWARDS if X86_32
+10
arch/x86/kernel/cpu/perf_event_intel_ds.c
··· 729 729 } 730 730 } 731 731 } 732 + 733 + void perf_restore_debug_store(void) 734 + { 735 + struct debug_store *ds = __this_cpu_read(cpu_hw_events.ds); 736 + 737 + if (!x86_pmu.bts && !x86_pmu.pebs) 738 + return; 739 + 740 + wrmsrl(MSR_IA32_DS_AREA, (unsigned long)ds); 741 + }
+2
arch/x86/power/cpu.c
··· 11 11 #include <linux/suspend.h> 12 12 #include <linux/export.h> 13 13 #include <linux/smp.h> 14 + #include <linux/perf_event.h> 14 15 15 16 #include <asm/pgtable.h> 16 17 #include <asm/proto.h> ··· 229 228 do_fpu_end(); 230 229 x86_platform.restore_sched_clock_state(); 231 230 mtrr_bp_restore(); 231 + perf_restore_debug_store(); 232 232 } 233 233 234 234 /* Needed by apm.c */
+1 -1
arch/xtensa/Kconfig
··· 9 9 select HAVE_IDE 10 10 select GENERIC_ATOMIC64 11 11 select HAVE_GENERIC_HARDIRQS 12 - select HAVE_VIRT_TO_BUS 12 + select VIRT_TO_BUS 13 13 select GENERIC_IRQ_SHOW 14 14 select GENERIC_CPU_DEVICES 15 15 select MODULES_USE_ELF_RELA
+2 -2
drivers/acpi/processor_perflib.c
··· 465 465 return result; 466 466 } 467 467 468 - static int acpi_processor_get_performance_info(struct acpi_processor *pr) 468 + int acpi_processor_get_performance_info(struct acpi_processor *pr) 469 469 { 470 470 int result = 0; 471 471 acpi_status status = AE_OK; ··· 509 509 #endif 510 510 return result; 511 511 } 512 - 512 + EXPORT_SYMBOL_GPL(acpi_processor_get_performance_info); 513 513 int acpi_processor_notify_smm(struct module *calling_module) 514 514 { 515 515 acpi_status status;
+11 -2
drivers/char/hw_random/virtio-rng.c
··· 92 92 { 93 93 int err; 94 94 95 + if (vq) { 96 + /* We only support one device for now */ 97 + return -EBUSY; 98 + } 95 99 /* We expect a single virtqueue. */ 96 100 vq = virtio_find_single_vq(vdev, random_recv_done, "input"); 97 - if (IS_ERR(vq)) 98 - return PTR_ERR(vq); 101 + if (IS_ERR(vq)) { 102 + err = PTR_ERR(vq); 103 + vq = NULL; 104 + return err; 105 + } 99 106 100 107 err = hwrng_register(&virtio_hwrng); 101 108 if (err) { 102 109 vdev->config->del_vqs(vdev); 110 + vq = NULL; 103 111 return err; 104 112 } 105 113 ··· 120 112 busy = false; 121 113 hwrng_unregister(&virtio_hwrng); 122 114 vdev->config->del_vqs(vdev); 115 + vq = NULL; 123 116 } 124 117 125 118 static int virtrng_probe(struct virtio_device *vdev)
-1
drivers/clk/tegra/clk-tegra20.c
··· 1292 1292 TEGRA_CLK_DUPLICATE(usbd, "tegra-ehci.0", NULL), 1293 1293 TEGRA_CLK_DUPLICATE(usbd, "tegra-otg", NULL), 1294 1294 TEGRA_CLK_DUPLICATE(cclk, NULL, "cpu"), 1295 - TEGRA_CLK_DUPLICATE(twd, "smp_twd", NULL), 1296 1295 TEGRA_CLK_DUPLICATE(clk_max, NULL, NULL), /* Must be the last entry */ 1297 1296 }; 1298 1297
-1
drivers/clk/tegra/clk-tegra30.c
··· 1931 1931 TEGRA_CLK_DUPLICATE(cml1, "tegra_sata_cml", NULL), 1932 1932 TEGRA_CLK_DUPLICATE(cml0, "tegra_pcie", "cml"), 1933 1933 TEGRA_CLK_DUPLICATE(pciex, "tegra_pcie", "pciex"), 1934 - TEGRA_CLK_DUPLICATE(twd, "smp_twd", NULL), 1935 1934 TEGRA_CLK_DUPLICATE(vcp, "nvavp", "vcp"), 1936 1935 TEGRA_CLK_DUPLICATE(clk_max, NULL, NULL), /* MUST be the last entry */ 1937 1936 };
+7
drivers/gpio/gpio-mvebu.c
··· 42 42 #include <linux/io.h> 43 43 #include <linux/of_irq.h> 44 44 #include <linux/of_device.h> 45 + #include <linux/clk.h> 45 46 #include <linux/pinctrl/consumer.h> 46 47 47 48 /* ··· 497 496 struct resource *res; 498 497 struct irq_chip_generic *gc; 499 498 struct irq_chip_type *ct; 499 + struct clk *clk; 500 500 unsigned int ngpios; 501 501 int soc_variant; 502 502 int i, cpu, id; ··· 530 528 dev_err(&pdev->dev, "Couldn't get OF id\n"); 531 529 return id; 532 530 } 531 + 532 + clk = devm_clk_get(&pdev->dev, NULL); 533 + /* Not all SoCs require a clock.*/ 534 + if (!IS_ERR(clk)) 535 + clk_prepare_enable(clk); 533 536 534 537 mvchip->soc_variant = soc_variant; 535 538 mvchip->chip.label = dev_name(&pdev->dev);
+2 -2
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
··· 544 544 static void 545 545 nv50_disp_base_vblank_enable(struct nouveau_event *event, int head) 546 546 { 547 - nv_mask(event->priv, 0x61002c, (1 << head), (1 << head)); 547 + nv_mask(event->priv, 0x61002c, (4 << head), (4 << head)); 548 548 } 549 549 550 550 static void 551 551 nv50_disp_base_vblank_disable(struct nouveau_event *event, int head) 552 552 { 553 - nv_mask(event->priv, 0x61002c, (1 << head), (0 << head)); 553 + nv_mask(event->priv, 0x61002c, (4 << head), 0); 554 554 } 555 555 556 556 static int
+5
drivers/gpu/drm/nouveau/nouveau_abi16.c
··· 116 116 { 117 117 struct nouveau_abi16_ntfy *ntfy, *temp; 118 118 119 + /* wait for all activity to stop before releasing notify object, which 120 + * may be still in use */ 121 + if (chan->chan && chan->ntfy) 122 + nouveau_channel_idle(chan->chan); 123 + 119 124 /* cleanup notifier state */ 120 125 list_for_each_entry_safe(ntfy, temp, &chan->notifiers, head) { 121 126 nouveau_abi16_ntfy_fini(chan, ntfy);
+2 -2
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 801 801 stride = 16 * 4; 802 802 height = amount / stride; 803 803 804 - if (new_mem->mem_type == TTM_PL_VRAM && 804 + if (old_mem->mem_type == TTM_PL_VRAM && 805 805 nouveau_bo_tile_layout(nvbo)) { 806 806 ret = RING_SPACE(chan, 8); 807 807 if (ret) ··· 823 823 BEGIN_NV04(chan, NvSubCopy, 0x0200, 1); 824 824 OUT_RING (chan, 1); 825 825 } 826 - if (old_mem->mem_type == TTM_PL_VRAM && 826 + if (new_mem->mem_type == TTM_PL_VRAM && 827 827 nouveau_bo_tile_layout(nvbo)) { 828 828 ret = RING_SPACE(chan, 8); 829 829 if (ret)
+1
drivers/gpu/drm/nouveau/nv50_display.c
··· 2276 2276 NV_WARN(drm, "failed to create encoder %d/%d/%d: %d\n", 2277 2277 dcbe->location, dcbe->type, 2278 2278 ffs(dcbe->or) - 1, ret); 2279 + ret = 0; 2279 2280 } 2280 2281 } 2281 2282
+2
drivers/hwmon/lineage-pem.c
··· 422 422 &sensor_dev_attr_in2_input.dev_attr.attr, 423 423 &sensor_dev_attr_curr1_input.dev_attr.attr, 424 424 &sensor_dev_attr_power1_input.dev_attr.attr, 425 + NULL 425 426 }; 426 427 427 428 static const struct attribute_group pem_input_group = { ··· 433 432 &sensor_dev_attr_fan1_input.dev_attr.attr, 434 433 &sensor_dev_attr_fan2_input.dev_attr.attr, 435 434 &sensor_dev_attr_fan3_input.dev_attr.attr, 435 + NULL 436 436 }; 437 437 438 438 static const struct attribute_group pem_fan_group = {
+8 -6
drivers/hwmon/pmbus/ltc2978.c
··· 59 59 struct ltc2978_data { 60 60 enum chips id; 61 61 int vin_min, vin_max; 62 - int temp_min, temp_max; 62 + int temp_min, temp_max[2]; 63 63 int vout_min[8], vout_max[8]; 64 64 int iout_max[2]; 65 65 int temp2_max; ··· 113 113 ret = pmbus_read_word_data(client, page, 114 114 LTC2978_MFR_TEMPERATURE_PEAK); 115 115 if (ret >= 0) { 116 - if (lin11_to_val(ret) > lin11_to_val(data->temp_max)) 117 - data->temp_max = ret; 118 - ret = data->temp_max; 116 + if (lin11_to_val(ret) 117 + > lin11_to_val(data->temp_max[page])) 118 + data->temp_max[page] = ret; 119 + ret = data->temp_max[page]; 119 120 } 120 121 break; 121 122 case PMBUS_VIRT_RESET_VOUT_HISTORY: ··· 267 266 break; 268 267 case PMBUS_VIRT_RESET_TEMP_HISTORY: 269 268 data->temp_min = 0x7bff; 270 - data->temp_max = 0x7c00; 269 + data->temp_max[page] = 0x7c00; 271 270 ret = ltc2978_clear_peaks(client, page, data->id); 272 271 break; 273 272 default: ··· 324 323 data->vin_min = 0x7bff; 325 324 data->vin_max = 0x7c00; 326 325 data->temp_min = 0x7bff; 327 - data->temp_max = 0x7c00; 326 + for (i = 0; i < ARRAY_SIZE(data->temp_max); i++) 327 + data->temp_max[i] = 0x7c00; 328 328 data->temp2_max = 0x7c00; 329 329 330 330 switch (data->id) {
+7 -5
drivers/hwmon/pmbus/pmbus_core.c
··· 766 766 static int pmbus_add_attribute(struct pmbus_data *data, struct attribute *attr) 767 767 { 768 768 if (data->num_attributes >= data->max_attributes - 1) { 769 - data->max_attributes += PMBUS_ATTR_ALLOC_SIZE; 770 - data->group.attrs = krealloc(data->group.attrs, 771 - sizeof(struct attribute *) * 772 - data->max_attributes, GFP_KERNEL); 773 - if (data->group.attrs == NULL) 769 + int new_max_attrs = data->max_attributes + PMBUS_ATTR_ALLOC_SIZE; 770 + void *new_attrs = krealloc(data->group.attrs, 771 + new_max_attrs * sizeof(void *), 772 + GFP_KERNEL); 773 + if (!new_attrs) 774 774 return -ENOMEM; 775 + data->group.attrs = new_attrs; 776 + data->max_attributes = new_max_attrs; 775 777 } 776 778 777 779 data->group.attrs[data->num_attributes++] = attr;
+4 -5
drivers/iio/common/st_sensors/st_sensors_core.c
··· 62 62 int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr) 63 63 { 64 64 int err; 65 - struct st_sensor_odr_avl odr_out; 65 + struct st_sensor_odr_avl odr_out = {0, 0}; 66 66 struct st_sensor_data *sdata = iio_priv(indio_dev); 67 67 68 68 err = st_sensors_match_odr(sdata->sensor, odr, &odr_out); ··· 114 114 115 115 static int st_sensors_set_fullscale(struct iio_dev *indio_dev, unsigned int fs) 116 116 { 117 - int err, i; 117 + int err, i = 0; 118 118 struct st_sensor_data *sdata = iio_priv(indio_dev); 119 119 120 120 err = st_sensors_match_fs(sdata->sensor, fs, &i); ··· 139 139 140 140 int st_sensors_set_enable(struct iio_dev *indio_dev, bool enable) 141 141 { 142 - bool found; 143 142 u8 tmp_value; 144 143 int err = -EINVAL; 145 - struct st_sensor_odr_avl odr_out; 144 + bool found = false; 145 + struct st_sensor_odr_avl odr_out = {0, 0}; 146 146 struct st_sensor_data *sdata = iio_priv(indio_dev); 147 147 148 148 if (enable) { 149 - found = false; 150 149 tmp_value = sdata->sensor->pw.value_on; 151 150 if ((sdata->sensor->odr.addr == sdata->sensor->pw.addr) && 152 151 (sdata->sensor->odr.mask == sdata->sensor->pw.mask)) {
+38 -26
drivers/iio/dac/ad5064.c
··· 27 27 #define AD5064_ADDR(x) ((x) << 20) 28 28 #define AD5064_CMD(x) ((x) << 24) 29 29 30 - #define AD5064_ADDR_DAC(chan) (chan) 31 30 #define AD5064_ADDR_ALL_DAC 0xF 32 31 33 32 #define AD5064_CMD_WRITE_INPUT_N 0x0 ··· 130 131 } 131 132 132 133 static int ad5064_sync_powerdown_mode(struct ad5064_state *st, 133 - unsigned int channel) 134 + const struct iio_chan_spec *chan) 134 135 { 135 136 unsigned int val; 136 137 int ret; 137 138 138 - val = (0x1 << channel); 139 + val = (0x1 << chan->address); 139 140 140 - if (st->pwr_down[channel]) 141 - val |= st->pwr_down_mode[channel] << 8; 141 + if (st->pwr_down[chan->channel]) 142 + val |= st->pwr_down_mode[chan->channel] << 8; 142 143 143 144 ret = ad5064_write(st, AD5064_CMD_POWERDOWN_DAC, 0, val, 0); 144 145 ··· 168 169 mutex_lock(&indio_dev->mlock); 169 170 st->pwr_down_mode[chan->channel] = mode + 1; 170 171 171 - ret = ad5064_sync_powerdown_mode(st, chan->channel); 172 + ret = ad5064_sync_powerdown_mode(st, chan); 172 173 mutex_unlock(&indio_dev->mlock); 173 174 174 175 return ret; ··· 204 205 mutex_lock(&indio_dev->mlock); 205 206 st->pwr_down[chan->channel] = pwr_down; 206 207 207 - ret = ad5064_sync_powerdown_mode(st, chan->channel); 208 + ret = ad5064_sync_powerdown_mode(st, chan); 208 209 mutex_unlock(&indio_dev->mlock); 209 210 return ret ? ret : len; 210 211 } ··· 257 258 258 259 switch (mask) { 259 260 case IIO_CHAN_INFO_RAW: 260 - if (val > (1 << chan->scan_type.realbits) || val < 0) 261 + if (val >= (1 << chan->scan_type.realbits) || val < 0) 261 262 return -EINVAL; 262 263 263 264 mutex_lock(&indio_dev->mlock); ··· 291 292 { }, 292 293 }; 293 294 294 - #define AD5064_CHANNEL(chan, bits) { \ 295 + #define AD5064_CHANNEL(chan, addr, bits) { \ 295 296 .type = IIO_VOLTAGE, \ 296 297 .indexed = 1, \ 297 298 .output = 1, \ 298 299 .channel = (chan), \ 299 300 .info_mask = IIO_CHAN_INFO_RAW_SEPARATE_BIT | \ 300 301 IIO_CHAN_INFO_SCALE_SEPARATE_BIT, \ 301 - .address = AD5064_ADDR_DAC(chan), \ 302 + .address = addr, \ 302 303 .scan_type = IIO_ST('u', (bits), 16, 20 - (bits)), \ 303 304 .ext_info = ad5064_ext_info, \ 304 305 } 305 306 306 307 #define DECLARE_AD5064_CHANNELS(name, bits) \ 307 308 const struct iio_chan_spec name[] = { \ 308 - AD5064_CHANNEL(0, bits), \ 309 - AD5064_CHANNEL(1, bits), \ 310 - AD5064_CHANNEL(2, bits), \ 311 - AD5064_CHANNEL(3, bits), \ 312 - AD5064_CHANNEL(4, bits), \ 313 - AD5064_CHANNEL(5, bits), \ 314 - AD5064_CHANNEL(6, bits), \ 315 - AD5064_CHANNEL(7, bits), \ 309 + AD5064_CHANNEL(0, 0, bits), \ 310 + AD5064_CHANNEL(1, 1, bits), \ 311 + AD5064_CHANNEL(2, 2, bits), \ 312 + AD5064_CHANNEL(3, 3, bits), \ 313 + AD5064_CHANNEL(4, 4, bits), \ 314 + AD5064_CHANNEL(5, 5, bits), \ 315 + AD5064_CHANNEL(6, 6, bits), \ 316 + AD5064_CHANNEL(7, 7, bits), \ 317 + } 318 + 319 + #define DECLARE_AD5065_CHANNELS(name, bits) \ 320 + const struct iio_chan_spec name[] = { \ 321 + AD5064_CHANNEL(0, 0, bits), \ 322 + AD5064_CHANNEL(1, 3, bits), \ 316 323 } 317 324 318 325 static DECLARE_AD5064_CHANNELS(ad5024_channels, 12); 319 326 static DECLARE_AD5064_CHANNELS(ad5044_channels, 14); 320 327 static DECLARE_AD5064_CHANNELS(ad5064_channels, 16); 328 + 329 + static DECLARE_AD5065_CHANNELS(ad5025_channels, 12); 330 + static DECLARE_AD5065_CHANNELS(ad5045_channels, 14); 331 + static DECLARE_AD5065_CHANNELS(ad5065_channels, 16); 321 332 322 333 static const struct ad5064_chip_info ad5064_chip_info_tbl[] = { 323 334 [ID_AD5024] = { ··· 337 328 }, 338 329 [ID_AD5025] = { 339 330 .shared_vref = false, 340 - .channels = ad5024_channels, 331 + .channels = ad5025_channels, 341 332 .num_channels = 2, 342 333 }, 343 334 [ID_AD5044] = { ··· 347 338 }, 348 339 [ID_AD5045] = { 349 340 .shared_vref = false, 350 - .channels = ad5044_channels, 341 + .channels = ad5045_channels, 351 342 .num_channels = 2, 352 343 }, 353 344 [ID_AD5064] = { ··· 362 353 }, 363 354 [ID_AD5065] = { 364 355 .shared_vref = false, 365 - .channels = ad5064_channels, 356 + .channels = ad5065_channels, 366 357 .num_channels = 2, 367 358 }, 368 359 [ID_AD5628_1] = { ··· 438 429 { 439 430 struct iio_dev *indio_dev; 440 431 struct ad5064_state *st; 432 + unsigned int midscale; 441 433 unsigned int i; 442 434 int ret; 443 435 ··· 475 465 goto error_free_reg; 476 466 } 477 467 478 - for (i = 0; i < st->chip_info->num_channels; ++i) { 479 - st->pwr_down_mode[i] = AD5064_LDAC_PWRDN_1K; 480 - st->dac_cache[i] = 0x8000; 481 - } 482 - 483 468 indio_dev->dev.parent = dev; 484 469 indio_dev->name = name; 485 470 indio_dev->info = &ad5064_info; 486 471 indio_dev->modes = INDIO_DIRECT_MODE; 487 472 indio_dev->channels = st->chip_info->channels; 488 473 indio_dev->num_channels = st->chip_info->num_channels; 474 + 475 + midscale = (1 << indio_dev->channels[0].scan_type.realbits) / 2; 476 + 477 + for (i = 0; i < st->chip_info->num_channels; ++i) { 478 + st->pwr_down_mode[i] = AD5064_LDAC_PWRDN_1K; 479 + st->dac_cache[i] = midscale; 480 + } 489 481 490 482 ret = iio_device_register(indio_dev); 491 483 if (ret)
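The ad5064 probe change above stops hard-coding `0x8000` as the power-on DAC cache value and instead derives mid-scale from the channel's resolution, so 12- and 14-bit parts get the correct default. The arithmetic, sketched standalone:

```c
#include <stdint.h>

/* Mid-scale output code for an n-bit DAC: half of full scale.
 * For the 12-, 14- and 16-bit parts in this family that is
 * 0x800, 0x2000 and 0x8000 respectively; only the 16-bit value
 * matched the old hard-coded constant. */
static uint32_t dac_midscale(unsigned int realbits)
{
	return (1u << realbits) / 2;
}
```

The related `write_raw` fix in the same diff follows from the same range: valid codes for an n-bit DAC are `0 .. (1 << n) - 1`, so the bounds check must reject `val >= (1 << realbits)`, not only `val > (1 << realbits)`.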
+1
drivers/iio/imu/inv_mpu6050/Kconfig
··· 5 5 config INV_MPU6050_IIO 6 6 tristate "Invensense MPU6050 devices" 7 7 depends on I2C && SYSFS 8 + select IIO_BUFFER 8 9 select IIO_TRIGGERED_BUFFER 9 10 help 10 11 This driver supports the Invensense MPU6050 devices.
-1
drivers/infiniband/hw/mlx4/cm.c
··· 362 362 INIT_LIST_HEAD(&dev->sriov.cm_list); 363 363 dev->sriov.sl_id_map = RB_ROOT; 364 364 idr_init(&dev->sriov.pv_id_table); 365 - idr_pre_get(&dev->sriov.pv_id_table, GFP_KERNEL); 366 365 } 367 366 368 367 /* slave = -1 ==> all slaves */
+4 -4
drivers/input/keyboard/tc3589x-keypad.c
··· 70 70 #define TC3589x_EVT_INT_CLR 0x2 71 71 #define TC3589x_KBD_INT_CLR 0x1 72 72 73 - #define TC3589x_KBD_KEYMAP_SIZE 64 74 - 75 73 /** 76 74 * struct tc_keypad - data structure used by keypad driver 77 75 * @tc3589x: pointer to tc35893 ··· 86 88 const struct tc3589x_keypad_platform_data *board; 87 89 unsigned int krow; 88 90 unsigned int kcol; 89 - unsigned short keymap[TC3589x_KBD_KEYMAP_SIZE]; 91 + unsigned short *keymap; 90 92 bool keypad_stopped; 91 93 }; 92 94 ··· 336 338 337 339 error = matrix_keypad_build_keymap(plat->keymap_data, NULL, 338 340 TC3589x_MAX_KPROW, TC3589x_MAX_KPCOL, 339 - keypad->keymap, input); 341 + NULL, input); 340 342 if (error) { 341 343 dev_err(&pdev->dev, "Failed to build keymap\n"); 342 344 goto err_free_mem; 343 345 } 346 + 347 + keypad->keymap = input->keycode; 344 348 345 349 input_set_capability(input, EV_MSC, MSC_SCAN); 346 350 if (!plat->no_autorepeat)
+72 -13
drivers/input/mouse/alps.c
··· 490 490 f->y_map |= (p[5] & 0x20) << 6; 491 491 } 492 492 493 + static void alps_decode_dolphin(struct alps_fields *f, unsigned char *p) 494 + { 495 + f->first_mp = !!(p[0] & 0x02); 496 + f->is_mp = !!(p[0] & 0x20); 497 + 498 + f->fingers = ((p[0] & 0x6) >> 1 | 499 + (p[0] & 0x10) >> 2); 500 + f->x_map = ((p[2] & 0x60) >> 5) | 501 + ((p[4] & 0x7f) << 2) | 502 + ((p[5] & 0x7f) << 9) | 503 + ((p[3] & 0x07) << 16) | 504 + ((p[3] & 0x70) << 15) | 505 + ((p[0] & 0x01) << 22); 506 + f->y_map = (p[1] & 0x7f) | 507 + ((p[2] & 0x1f) << 7); 508 + 509 + f->x = ((p[1] & 0x7f) | ((p[4] & 0x0f) << 7)); 510 + f->y = ((p[2] & 0x7f) | ((p[4] & 0xf0) << 3)); 511 + f->z = (p[0] & 4) ? 0 : p[5] & 0x7f; 512 + 513 + alps_decode_buttons_v3(f, p); 514 + } 515 + 493 516 static void alps_process_touchpad_packet_v3(struct psmouse *psmouse) 494 517 { 495 518 struct alps_data *priv = psmouse->private; ··· 897 874 } 898 875 899 876 /* Bytes 2 - pktsize should have 0 in the highest bit */ 900 - if (psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize && 877 + if (priv->proto_version != ALPS_PROTO_V5 && 878 + psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize && 901 879 (psmouse->packet[psmouse->pktcnt - 1] & 0x80)) { 902 880 psmouse_dbg(psmouse, "refusing packet[%i] = %x\n", 903 881 psmouse->pktcnt - 1, ··· 1018 994 return 0; 1019 995 } 1020 996 1021 - static int alps_enter_command_mode(struct psmouse *psmouse, 1022 - unsigned char *resp) 997 + static int alps_enter_command_mode(struct psmouse *psmouse) 1023 998 { 1024 999 unsigned char param[4]; 1025 1000 ··· 1027 1004 return -1; 1028 1005 } 1029 1006 1030 - if (param[0] != 0x88 || (param[1] != 0x07 && param[1] != 0x08)) { 1007 + if ((param[0] != 0x88 || (param[1] != 0x07 && param[1] != 0x08)) && 1008 + param[0] != 0x73) { 1031 1009 psmouse_dbg(psmouse, 1032 1010 "unknown response while entering command mode\n"); 1033 1011 return -1; 1034 1012 } 1035 - 1036 - if (resp) 1037 - *resp = param[2]; 1038 1013 return 0; 1039 1014 } 
1040 1015 ··· 1197 1176 { 1198 1177 int reg_val, ret = -1; 1199 1178 1200 - if (alps_enter_command_mode(psmouse, NULL)) 1179 + if (alps_enter_command_mode(psmouse)) 1201 1180 return -1; 1202 1181 1203 1182 reg_val = alps_command_mode_read_reg(psmouse, reg_base + 0x0008); ··· 1237 1216 { 1238 1217 int ret = -EIO, reg_val; 1239 1218 1240 - if (alps_enter_command_mode(psmouse, NULL)) 1219 + if (alps_enter_command_mode(psmouse)) 1241 1220 goto error; 1242 1221 1243 1222 reg_val = alps_command_mode_read_reg(psmouse, reg_base + 0x08); ··· 1300 1279 * supported by this driver. If bit 1 isn't set the packet 1301 1280 * format is different. 1302 1281 */ 1303 - if (alps_enter_command_mode(psmouse, NULL) || 1282 + if (alps_enter_command_mode(psmouse) || 1304 1283 alps_command_mode_write_reg(psmouse, 1305 1284 reg_base + 0x08, 0x82) || 1306 1285 alps_exit_command_mode(psmouse)) ··· 1327 1306 alps_setup_trackstick_v3(psmouse, ALPS_REG_BASE_PINNACLE) == -EIO) 1328 1307 goto error; 1329 1308 1330 - if (alps_enter_command_mode(psmouse, NULL) || 1309 + if (alps_enter_command_mode(psmouse) || 1331 1310 alps_absolute_mode_v3(psmouse)) { 1332 1311 psmouse_err(psmouse, "Failed to enter absolute mode\n"); 1333 1312 goto error; ··· 1402 1381 priv->flags &= ~ALPS_DUALPOINT; 1403 1382 } 1404 1383 1405 - if (alps_enter_command_mode(psmouse, NULL) || 1384 + if (alps_enter_command_mode(psmouse) || 1406 1385 alps_command_mode_read_reg(psmouse, 0xc2d9) == -1 || 1407 1386 alps_command_mode_write_reg(psmouse, 0xc2cb, 0x00)) 1408 1387 goto error; ··· 1452 1431 struct ps2dev *ps2dev = &psmouse->ps2dev; 1453 1432 unsigned char param[4]; 1454 1433 1455 - if (alps_enter_command_mode(psmouse, NULL)) 1434 + if (alps_enter_command_mode(psmouse)) 1456 1435 goto error; 1457 1436 1458 1437 if (alps_absolute_mode_v4(psmouse)) { ··· 1520 1499 return -1; 1521 1500 } 1522 1501 1502 + static int alps_hw_init_dolphin_v1(struct psmouse *psmouse) 1503 + { 1504 + struct ps2dev *ps2dev = &psmouse->ps2dev; 1505 + 
unsigned char param[2]; 1506 + 1507 + /* This is dolphin "v1" as empirically defined by florin9doi */ 1508 + param[0] = 0x64; 1509 + param[1] = 0x28; 1510 + 1511 + if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSTREAM) || 1512 + ps2_command(ps2dev, &param[0], PSMOUSE_CMD_SETRATE) || 1513 + ps2_command(ps2dev, &param[1], PSMOUSE_CMD_SETRATE)) 1514 + return -1; 1515 + 1516 + return 0; 1517 + } 1518 + 1523 1519 static void alps_set_defaults(struct alps_data *priv) 1524 1520 { 1525 1521 priv->byte0 = 0x8f; ··· 1569 1531 priv->set_abs_params = alps_set_abs_params_mt; 1570 1532 priv->nibble_commands = alps_v4_nibble_commands; 1571 1533 priv->addr_command = PSMOUSE_CMD_DISABLE; 1534 + break; 1535 + case ALPS_PROTO_V5: 1536 + priv->hw_init = alps_hw_init_dolphin_v1; 1537 + priv->process_packet = alps_process_packet_v3; 1538 + priv->decode_fields = alps_decode_dolphin; 1539 + priv->set_abs_params = alps_set_abs_params_mt; 1540 + priv->nibble_commands = alps_v3_nibble_commands; 1541 + priv->addr_command = PSMOUSE_CMD_RESET_WRAP; 1542 + priv->byte0 = 0xc8; 1543 + priv->mask0 = 0xc8; 1544 + priv->flags = 0; 1545 + priv->x_max = 1360; 1546 + priv->y_max = 660; 1547 + priv->x_bits = 23; 1548 + priv->y_bits = 12; 1572 1549 break; 1573 1550 } 1574 1551 } ··· 1644 1591 return -EIO; 1645 1592 1646 1593 if (alps_match_table(psmouse, priv, e7, ec) == 0) { 1594 + return 0; 1595 + } else if (e7[0] == 0x73 && e7[1] == 0x03 && e7[2] == 0x50 && 1596 + ec[0] == 0x73 && ec[1] == 0x01) { 1597 + priv->proto_version = ALPS_PROTO_V5; 1598 + alps_set_defaults(priv); 1599 + 1647 1600 return 0; 1648 1601 } else if (ec[0] == 0x88 && ec[1] == 0x08) { 1649 1602 priv->proto_version = ALPS_PROTO_V3;
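The new alps_decode_dolphin() reassembles each value from bits scattered across the packet. Its x-coordinate line, pulled out standalone so the bit-stitching is easy to see: 7 low bits come from p[1], 4 high bits from p[4].

```c
#include <assert.h>

/* The f->x expression from alps_decode_dolphin() above, standalone:
 * (p[1] & 0x7f) supplies bits 0..6, (p[4] & 0x0f) supplies bits 7..10. */
static int dolphin_decode_x(const unsigned char *p)
{
	return (p[1] & 0x7f) | ((p[4] & 0x0f) << 7);
}
```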
+1
drivers/input/mouse/alps.h
··· 16 16 #define ALPS_PROTO_V2 2 17 17 #define ALPS_PROTO_V3 3 18 18 #define ALPS_PROTO_V4 4 19 + #define ALPS_PROTO_V5 5 19 20 20 21 /** 21 22 * struct alps_model_info - touchpad ID table
+13 -6
drivers/input/mouse/cypress_ps2.c
··· 236 236 cytp->fw_version = param[2] & FW_VERSION_MASX; 237 237 cytp->tp_metrics_supported = (param[2] & TP_METRICS_MASK) ? 1 : 0; 238 238 239 + /* 240 + * Trackpad fw_version 11 (in Dell XPS12) yields a bogus response to 241 + * CYTP_CMD_READ_TP_METRICS so do not try to use it. LP: #1103594. 242 + */ 243 + if (cytp->fw_version >= 11) 244 + cytp->tp_metrics_supported = 0; 245 + 239 246 psmouse_dbg(psmouse, "cytp->fw_version = %d\n", cytp->fw_version); 240 247 psmouse_dbg(psmouse, "cytp->tp_metrics_supported = %d\n", 241 248 cytp->tp_metrics_supported); ··· 264 257 cytp->tp_max_pressure = CYTP_MAX_PRESSURE; 265 258 cytp->tp_res_x = cytp->tp_max_abs_x / cytp->tp_width; 266 259 cytp->tp_res_y = cytp->tp_max_abs_y / cytp->tp_high; 260 + 261 + if (!cytp->tp_metrics_supported) 262 + return 0; 267 263 268 264 memset(param, 0, sizeof(param)); 269 265 if (cypress_send_ext_cmd(psmouse, CYTP_CMD_READ_TP_METRICS, param) == 0) { ··· 325 315 326 316 static int cypress_query_hardware(struct psmouse *psmouse) 327 317 { 328 - struct cytp_data *cytp = psmouse->private; 329 318 int ret; 330 319 331 320 ret = cypress_read_fw_version(psmouse); 332 321 if (ret) 333 322 return ret; 334 323 335 - if (cytp->tp_metrics_supported) { 336 - ret = cypress_read_tp_metrics(psmouse); 337 - if (ret) 338 - return ret; 339 - } 324 + ret = cypress_read_tp_metrics(psmouse); 325 + if (ret) 326 + return ret; 340 327 341 328 return 0; 342 329 }
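The cypress_ps2 change derives a capability flag from the probe response but then vetoes it for firmware versions known to answer CYTP_CMD_READ_TP_METRICS with garbage. A condensed sketch of that gate (the TP_METRICS_MASK value here is illustrative, not necessarily the driver's):

```c
#include <assert.h>

#define TP_METRICS_MASK	0x80	/* illustrative bit, not from the driver */

/* Shape of the cypress fix: a capability claimed by the hardware is
 * still disabled for firmware versions (>= 11, e.g. Dell XPS12) that
 * are known to mishandle the follow-up command. */
static int tp_metrics_supported(unsigned char resp, int fw_version)
{
	int supported = (resp & TP_METRICS_MASK) ? 1 : 0;

	if (fw_version >= 11)	/* bogus READ_TP_METRICS reply, LP: #1103594 */
		supported = 0;
	return supported;
}
```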
+4
drivers/input/tablet/wacom_wac.c
··· 2017 2017 static const struct wacom_features wacom_features_0x101 = 2018 2018 { "Wacom ISDv4 101", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2019 2019 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2020 + static const struct wacom_features wacom_features_0x10D = 2021 + { "Wacom ISDv4 10D", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2022 + 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2020 2023 static const struct wacom_features wacom_features_0x4001 = 2021 2024 { "Wacom ISDv4 4001", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2022 2025 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; ··· 2204 2201 { USB_DEVICE_WACOM(0xEF) }, 2205 2202 { USB_DEVICE_WACOM(0x100) }, 2206 2203 { USB_DEVICE_WACOM(0x101) }, 2204 + { USB_DEVICE_WACOM(0x10D) }, 2207 2205 { USB_DEVICE_WACOM(0x4001) }, 2208 2206 { USB_DEVICE_WACOM(0x47) }, 2209 2207 { USB_DEVICE_WACOM(0xF4) },
+6 -1
drivers/input/touchscreen/ads7846.c
··· 236 236 /* Must be called with ts->lock held */ 237 237 static void __ads7846_enable(struct ads7846 *ts) 238 238 { 239 - regulator_enable(ts->reg); 239 + int error; 240 + 241 + error = regulator_enable(ts->reg); 242 + if (error != 0) 243 + dev_err(&ts->spi->dev, "Failed to enable supply: %d\n", error); 244 + 240 245 ads7846_restart(ts); 241 246 } 242 247
+25 -9
drivers/input/touchscreen/mms114.c
··· 314 314 struct i2c_client *client = data->client; 315 315 int error; 316 316 317 - if (data->core_reg) 318 - regulator_enable(data->core_reg); 319 - if (data->io_reg) 320 - regulator_enable(data->io_reg); 317 + error = regulator_enable(data->core_reg); 318 + if (error) { 319 + dev_err(&client->dev, "Failed to enable avdd: %d\n", error); 320 + return error; 321 + } 322 + 323 + error = regulator_enable(data->io_reg); 324 + if (error) { 325 + dev_err(&client->dev, "Failed to enable vdd: %d\n", error); 326 + regulator_disable(data->core_reg); 327 + return error; 328 + } 329 + 321 330 mdelay(MMS114_POWERON_DELAY); 322 331 323 332 error = mms114_setup_regs(data); 324 - if (error < 0) 333 + if (error < 0) { 334 + regulator_disable(data->io_reg); 335 + regulator_disable(data->core_reg); 325 336 return error; 337 + } 326 338 327 339 if (data->pdata->cfg_pin) 328 340 data->pdata->cfg_pin(true); ··· 347 335 static void mms114_stop(struct mms114_data *data) 348 336 { 349 337 struct i2c_client *client = data->client; 338 + int error; 350 339 351 340 disable_irq(client->irq); 352 341 353 342 if (data->pdata->cfg_pin) 354 343 data->pdata->cfg_pin(false); 355 344 356 - if (data->io_reg) 357 - regulator_disable(data->io_reg); 358 - if (data->core_reg) 359 - regulator_disable(data->core_reg); 345 + error = regulator_disable(data->io_reg); 346 + if (error) 347 + dev_warn(&client->dev, "Failed to disable vdd: %d\n", error); 348 + 349 + error = regulator_disable(data->core_reg); 350 + if (error) 351 + dev_warn(&client->dev, "Failed to disable avdd: %d\n", error); 360 352 } 361 353 362 354 static int mms114_input_open(struct input_dev *dev)
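The mms114 hunk is a textbook unwind-on-failure conversion: every supply enabled so far must be disabled again when a later step fails. A self-contained sketch of that shape, with int return codes standing in for `regulator_enable()` results:

```c
#include <assert.h>

/* core_on tracks whether the first supply was left enabled, standing
 * in for the regulator framework's internal enable count. */
static int core_on;

/* Mirrors mms114_start(): on a partial failure, already-enabled
 * supplies are rolled back before the error is propagated. */
static int power_up(int core_err, int io_err)
{
	int error;

	error = core_err;		/* regulator_enable(core_reg) */
	if (error)
		return error;
	core_on = 1;

	error = io_err;			/* regulator_enable(io_reg) */
	if (error) {
		core_on = 0;		/* regulator_disable(core_reg) */
		return error;
	}
	return 0;
}
```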
+1 -1
drivers/irqchip/irq-gic.c
··· 648 648 649 649 /* Convert our logical CPU mask into a physical one. */ 650 650 for_each_cpu(cpu, mask) 651 - map |= 1 << cpu_logical_map(cpu); 651 + map |= gic_cpu_map[cpu]; 652 652 653 653 /* 654 654 * Ensure that stores to Normal memory are visible to the
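The one-line GIC fix drops the assumption that CPU interface n maps to `1 << cpu_logical_map(cpu)`; instead it ORs in `gic_cpu_map[]`, the per-CPU target masks read back from the distributor at boot. A sketch of that computation with the map passed in explicitly:

```c
#include <assert.h>

/* Sketch of the fixed affinity computation: gic_cpu_map[] holds one
 * hardware-reported mask per CPU, so a non-identity mapping between
 * logical CPUs and GIC CPU interfaces is handled correctly. */
static unsigned char build_target_map(const unsigned char *gic_cpu_map,
				      unsigned int cpumask, unsigned int ncpus)
{
	unsigned char map = 0;
	unsigned int cpu;

	for (cpu = 0; cpu < ncpus; cpu++)	/* stand-in for for_each_cpu() */
		if (cpumask & (1u << cpu))
			map |= gic_cpu_map[cpu];
	return map;
}
```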
+3 -1
drivers/isdn/i4l/isdn_tty.c
··· 902 902 int j; 903 903 int l; 904 904 905 - l = strlen(msg); 905 + l = min(strlen(msg), sizeof(cmd.parm) - sizeof(cmd.parm.cmsg) 906 + + sizeof(cmd.parm.cmsg.para) - 2); 907 + 906 908 if (!l) { 907 909 isdn_tty_modem_result(RESULT_ERROR, info); 908 910 return;
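The isdn_tty fix is the classic bounded-length idiom: never trust `strlen()` of an externally supplied string when the destination is fixed-size. In isolation:

```c
#include <assert.h>
#include <string.h>

/* General form of the isdn_tty fix above: clamp the length of a
 * caller-supplied string to the space actually available before any
 * copy.  "cap" stands in for the sizeof() arithmetic on cmd.parm. */
static size_t bounded_len(const char *msg, size_t cap)
{
	size_t l = strlen(msg);

	return l < cap ? l : cap;	/* min(), as in the patch */
}
```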
+1
drivers/mfd/Kconfig
··· 858 858 config AB8500_CORE 859 859 bool "ST-Ericsson AB8500 Mixed Signal Power Management chip" 860 860 depends on GENERIC_HARDIRQS && ABX500_CORE && MFD_DB8500_PRCMU 861 + select POWER_SUPPLY 861 862 select MFD_CORE 862 863 select IRQ_DOMAIN 863 864 help
+13 -4
drivers/mfd/ab8500-gpadc.c
··· 594 594 static int ab8500_gpadc_runtime_resume(struct device *dev) 595 595 { 596 596 struct ab8500_gpadc *gpadc = dev_get_drvdata(dev); 597 + int ret; 597 598 598 - regulator_enable(gpadc->regu); 599 - return 0; 599 + ret = regulator_enable(gpadc->regu); 600 + if (ret) 601 + dev_err(dev, "Failed to enable vtvout LDO: %d\n", ret); 602 + return ret; 600 603 } 601 604 602 605 static int ab8500_gpadc_runtime_idle(struct device *dev) ··· 646 643 } 647 644 648 645 /* VTVout LDO used to power up ab8500-GPADC */ 649 - gpadc->regu = regulator_get(&pdev->dev, "vddadc"); 646 + gpadc->regu = devm_regulator_get(&pdev->dev, "vddadc"); 650 647 if (IS_ERR(gpadc->regu)) { 651 648 ret = PTR_ERR(gpadc->regu); 652 649 dev_err(gpadc->dev, "failed to get vtvout LDO\n"); ··· 655 652 656 653 platform_set_drvdata(pdev, gpadc); 657 654 658 - regulator_enable(gpadc->regu); 655 + ret = regulator_enable(gpadc->regu); 656 + if (ret) { 657 + dev_err(gpadc->dev, "Failed to enable vtvout LDO: %d\n", ret); 658 + goto fail_enable; 659 + } 659 660 660 661 pm_runtime_set_autosuspend_delay(gpadc->dev, GPADC_AUDOSUSPEND_DELAY); 661 662 pm_runtime_use_autosuspend(gpadc->dev); ··· 670 663 list_add_tail(&gpadc->node, &ab8500_gpadc_list); 671 664 dev_dbg(gpadc->dev, "probe success\n"); 672 665 return 0; 666 + 667 + fail_enable: 673 668 fail_irq: 674 669 free_irq(gpadc->irq, gpadc); 675 670 fail:
+3 -3
drivers/mfd/omap-usb-host.c
··· 460 460 461 461 switch (omap->usbhs_rev) { 462 462 case OMAP_USBHS_REV1: 463 - omap_usbhs_rev1_hostconfig(omap, reg); 463 + reg = omap_usbhs_rev1_hostconfig(omap, reg); 464 464 break; 465 465 466 466 case OMAP_USBHS_REV2: 467 - omap_usbhs_rev2_hostconfig(omap, reg); 467 + reg = omap_usbhs_rev2_hostconfig(omap, reg); 468 468 break; 469 469 470 470 default: /* newer revisions */ 471 - omap_usbhs_rev2_hostconfig(omap, reg); 471 + reg = omap_usbhs_rev2_hostconfig(omap, reg); 472 472 break; 473 473 } 474 474
+33 -3
drivers/mfd/palmas.c
··· 257 257 PALMAS_INT1_MASK), 258 258 }; 259 259 260 - static void palmas_dt_to_pdata(struct device_node *node, 260 + static int palmas_set_pdata_irq_flag(struct i2c_client *i2c, 261 261 struct palmas_platform_data *pdata) 262 262 { 263 + struct irq_data *irq_data = irq_get_irq_data(i2c->irq); 264 + if (!irq_data) { 265 + dev_err(&i2c->dev, "Invalid IRQ: %d\n", i2c->irq); 266 + return -EINVAL; 267 + } 268 + 269 + pdata->irq_flags = irqd_get_trigger_type(irq_data); 270 + dev_info(&i2c->dev, "Irq flag is 0x%08x\n", pdata->irq_flags); 271 + return 0; 272 + } 273 + 274 + static void palmas_dt_to_pdata(struct i2c_client *i2c, 275 + struct palmas_platform_data *pdata) 276 + { 277 + struct device_node *node = i2c->dev.of_node; 263 278 int ret; 264 279 u32 prop; 265 280 ··· 298 283 pdata->power_ctrl = PALMAS_POWER_CTRL_NSLEEP_MASK | 299 284 PALMAS_POWER_CTRL_ENABLE1_MASK | 300 285 PALMAS_POWER_CTRL_ENABLE2_MASK; 286 + if (i2c->irq) 287 + palmas_set_pdata_irq_flag(i2c, pdata); 301 288 } 302 289 303 290 static int palmas_i2c_probe(struct i2c_client *i2c, ··· 321 304 if (!pdata) 322 305 return -ENOMEM; 323 306 324 - palmas_dt_to_pdata(node, pdata); 307 + palmas_dt_to_pdata(i2c, pdata); 325 308 } 326 309 327 310 if (!pdata) ··· 361 344 } 362 345 } 363 346 347 + /* Change interrupt line output polarity */ 348 + if (pdata->irq_flags & IRQ_TYPE_LEVEL_HIGH) 349 + reg = PALMAS_POLARITY_CTRL_INT_POLARITY; 350 + else 351 + reg = 0; 352 + ret = palmas_update_bits(palmas, PALMAS_PU_PD_OD_BASE, 353 + PALMAS_POLARITY_CTRL, PALMAS_POLARITY_CTRL_INT_POLARITY, 354 + reg); 355 + if (ret < 0) { 356 + dev_err(palmas->dev, "POLARITY_CTRL updat failed: %d\n", ret); 357 + goto err; 358 + } 359 + 364 360 /* Change IRQ into clear on read mode for efficiency */ 365 361 slave = PALMAS_BASE_TO_SLAVE(PALMAS_INTERRUPT_BASE); 366 362 addr = PALMAS_BASE_TO_REG(PALMAS_INTERRUPT_BASE, PALMAS_INT_CTRL); ··· 382 352 regmap_write(palmas->regmap[slave], addr, reg); 383 353 384 354 ret = 
regmap_add_irq_chip(palmas->regmap[slave], palmas->irq, 385 - IRQF_ONESHOT | IRQF_TRIGGER_LOW, 0, &palmas_irq_chip, 355 + IRQF_ONESHOT | pdata->irq_flags, 0, &palmas_irq_chip, 386 356 &palmas->irq_data); 387 357 if (ret < 0) 388 358 goto err;
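The palmas change reads the board's IRQ trigger type from the DT-provided irq_data and programs the chip's interrupt output polarity to match. The selection logic in isolation (the polarity bit value here is a stand-in for PALMAS_POLARITY_CTRL_INT_POLARITY):

```c
#include <assert.h>

#define IRQ_TYPE_LEVEL_HIGH	0x00000004	/* as in linux/irq.h */
#define POLARITY_INT_HIGH	0x01		/* stand-in for the POLARITY_CTRL bit */

/* Mirrors the polarity selection added to palmas_i2c_probe(): the
 * chip's interrupt output polarity must match how the board wired
 * the IRQ line, as reported by the trigger type from DT. */
static unsigned int palmas_polarity_bits(unsigned int irq_flags)
{
	return (irq_flags & IRQ_TYPE_LEVEL_HIGH) ? POLARITY_INT_HIGH : 0;
}
```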
+1
drivers/mfd/tps65912-core.c
··· 169 169 void tps65912_device_exit(struct tps65912 *tps65912) 170 170 { 171 171 mfd_remove_devices(tps65912->dev); 172 + tps65912_irq_exit(tps65912); 172 173 kfree(tps65912); 173 174 } 174 175
+1 -1
drivers/mfd/twl4030-audio.c
··· 118 118 * Disable the resource. 119 119 * The function returns with error or the content of the register 120 120 */ 121 - int twl4030_audio_disable_resource(unsigned id) 121 + int twl4030_audio_disable_resource(enum twl4030_audio_res id) 122 122 { 123 123 struct twl4030_audio *audio = platform_get_drvdata(twl4030_audio_dev); 124 124 int val;
+1 -1
drivers/mfd/twl4030-madc.c
··· 800 800 801 801 static struct platform_driver twl4030_madc_driver = { 802 802 .probe = twl4030_madc_probe, 803 - .remove = __exit_p(twl4030_madc_remove), 803 + .remove = twl4030_madc_remove, 804 804 .driver = { 805 805 .name = "twl4030_madc", 806 806 .owner = THIS_MODULE,
+3 -2
drivers/net/bonding/bond_main.c
··· 1964 1964 } 1965 1965 1966 1966 block_netpoll_tx(); 1967 - call_netdevice_notifiers(NETDEV_RELEASE, bond_dev); 1968 1967 write_lock_bh(&bond->lock); 1969 1968 1970 1969 slave = bond_get_slave_by_dev(bond, slave_dev); ··· 2065 2066 write_unlock_bh(&bond->lock); 2066 2067 unblock_netpoll_tx(); 2067 2068 2068 - if (bond->slave_cnt == 0) 2069 + if (bond->slave_cnt == 0) { 2069 2070 call_netdevice_notifiers(NETDEV_CHANGEADDR, bond->dev); 2071 + call_netdevice_notifiers(NETDEV_RELEASE, bond->dev); 2072 + } 2070 2073 2071 2074 bond_compute_features(bond); 2072 2075 if (!(bond_dev->features & NETIF_F_VLAN_CHALLENGED) &&
+16 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
··· 8647 8647 MDIO_WC_DEVAD, 8648 8648 MDIO_WC_REG_DIGITAL5_MISC6, 8649 8649 &rx_tx_in_reset); 8650 - if (!rx_tx_in_reset) { 8650 + if ((!rx_tx_in_reset) && 8651 + (params->link_flags & 8652 + PHY_INITIALIZED)) { 8651 8653 bnx2x_warpcore_reset_lane(bp, phy, 1); 8652 8654 bnx2x_warpcore_config_sfi(phy, params); 8653 8655 bnx2x_warpcore_reset_lane(bp, phy, 0); ··· 12529 12527 vars->flow_ctrl = BNX2X_FLOW_CTRL_NONE; 12530 12528 vars->mac_type = MAC_TYPE_NONE; 12531 12529 vars->phy_flags = 0; 12530 + vars->check_kr2_recovery_cnt = 0; 12531 + params->link_flags = PHY_INITIALIZED; 12532 12532 /* Driver opens NIG-BRB filters */ 12533 12533 bnx2x_set_rx_filter(params, 1); 12534 12534 /* Check if link flap can be avoided */ ··· 12695 12691 struct bnx2x *bp = params->bp; 12696 12692 vars->link_up = 0; 12697 12693 vars->phy_flags = 0; 12694 + params->link_flags &= ~PHY_INITIALIZED; 12698 12695 if (!params->lfa_base) 12699 12696 return bnx2x_link_reset(params, vars, 1); 12700 12697 /* ··· 13416 13411 vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE; 13417 13412 bnx2x_update_link_attr(params, vars->link_attr_sync); 13418 13413 13414 + vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT; 13419 13415 /* Restart AN on leading lane */ 13420 13416 bnx2x_warpcore_restart_AN_KR(phy, params); 13421 13417 } ··· 13445 13439 return; 13446 13440 } 13447 13441 13442 + /* Once KR2 was disabled, wait 5 seconds before checking KR2 recovery 13443 + * since some switches tend to reinit the AN process and clear the 13444 + * advertised BP/NP after ~2 seconds causing the KR2 to be disabled 13445 + * and recovered many times 13446 + */ 13447 + if (vars->check_kr2_recovery_cnt > 0) { 13448 + vars->check_kr2_recovery_cnt--; 13449 + return; 13450 + } 13448 13451 lane = bnx2x_get_warpcore_lane(phy, params); 13449 13452 CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_AER_BLOCK, 13450 13453 MDIO_AER_BLOCK_AER_REG, lane);
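The bnx2x comment explains the KR2 hold-off: after KR2 is disabled, the counter is armed and each periodic check merely decrements it, so recovery is not attempted again for roughly five check intervals. The counter's observable behaviour, sketched as a simulation:

```c
#include <assert.h>

#define CHECK_KR2_RECOVERY_CNT	5	/* from bnx2x_link.h above */

/* Simulate "ticks" periodic KR2 checks starting from a freshly armed
 * hold-off counter.  Returns the 1-based tick on which recovery first
 * ran, or 0 if it never did -- recovery runs on tick (armed + 1). */
static int first_recovery_tick(int armed, int ticks)
{
	int cnt = armed, t;

	for (t = 1; t <= ticks; t++) {
		if (cnt > 0) {
			cnt--;		/* vars->check_kr2_recovery_cnt-- */
			continue;	/* skip the recovery check */
		}
		return t;
	}
	return 0;
}
```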
+3 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
··· 309 309 req_flow_ctrl is set to AUTO */ 310 310 u16 link_flags; 311 311 #define LINK_FLAGS_INT_DISABLED (1<<0) 312 + #define PHY_INITIALIZED (1<<1) 312 313 u32 lfa_base; 313 314 }; 314 315 ··· 343 342 u32 link_status; 344 343 u32 eee_status; 345 344 u8 fault_detected; 346 - u8 rsrv1; 345 + u8 check_kr2_recovery_cnt; 346 + #define CHECK_KR2_RECOVERY_CNT 5 347 347 u16 periodic_flags; 348 348 #define PERIODIC_FLAGS_LINK_EVENT 0x0001 349 349
+5 -9
drivers/net/ethernet/broadcom/tg3.c
··· 1869 1869 1870 1870 tg3_ump_link_report(tp); 1871 1871 } 1872 + 1873 + tp->link_up = netif_carrier_ok(tp->dev); 1872 1874 } 1873 1875 1874 1876 static u16 tg3_advert_flowctrl_1000X(u8 flow_ctrl) ··· 2524 2522 return err; 2525 2523 } 2526 2524 2527 - static void tg3_carrier_on(struct tg3 *tp) 2528 - { 2529 - netif_carrier_on(tp->dev); 2530 - tp->link_up = true; 2531 - } 2532 - 2533 2525 static void tg3_carrier_off(struct tg3 *tp) 2534 2526 { 2535 2527 netif_carrier_off(tp->dev); ··· 2549 2553 return -EBUSY; 2550 2554 2551 2555 if (netif_running(tp->dev) && tp->link_up) { 2552 - tg3_carrier_off(tp); 2556 + netif_carrier_off(tp->dev); 2553 2557 tg3_link_report(tp); 2554 2558 } 2555 2559 ··· 4258 4262 { 4259 4263 if (curr_link_up != tp->link_up) { 4260 4264 if (curr_link_up) { 4261 - tg3_carrier_on(tp); 4265 + netif_carrier_on(tp->dev); 4262 4266 } else { 4263 - tg3_carrier_off(tp); 4267 + netif_carrier_off(tp->dev); 4264 4268 if (tp->phy_flags & TG3_PHYFLG_MII_SERDES) 4265 4269 tp->phy_flags &= ~TG3_PHYFLG_PARALLEL_DETECT; 4266 4270 }
+1
drivers/net/ethernet/emulex/benet/be.h
··· 349 349 struct pci_dev *pdev; 350 350 struct net_device *netdev; 351 351 352 + u8 __iomem *csr; /* CSR BAR used only for BE2/3 */ 352 353 u8 __iomem *db; /* Door Bell */ 353 354 354 355 struct mutex mbox_lock; /* For serializing mbox cmds to BE card */
+16 -20
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 473 473 return 0; 474 474 } 475 475 476 - static int be_POST_stage_get(struct be_adapter *adapter, u16 *stage) 476 + static u16 be_POST_stage_get(struct be_adapter *adapter) 477 477 { 478 478 u32 sem; 479 - u32 reg = skyhawk_chip(adapter) ? SLIPORT_SEMAPHORE_OFFSET_SH : 480 - SLIPORT_SEMAPHORE_OFFSET_BE; 481 479 482 - pci_read_config_dword(adapter->pdev, reg, &sem); 483 - *stage = sem & POST_STAGE_MASK; 484 - 485 - if ((sem >> POST_ERR_SHIFT) & POST_ERR_MASK) 486 - return -1; 480 + if (BEx_chip(adapter)) 481 + sem = ioread32(adapter->csr + SLIPORT_SEMAPHORE_OFFSET_BEx); 487 482 else 488 - return 0; 483 + pci_read_config_dword(adapter->pdev, 484 + SLIPORT_SEMAPHORE_OFFSET_SH, &sem); 485 + 486 + return sem & POST_STAGE_MASK; 489 487 } 490 488 491 489 int lancer_wait_ready(struct be_adapter *adapter) ··· 577 579 } 578 580 579 581 do { 580 - status = be_POST_stage_get(adapter, &stage); 581 - if (status) { 582 - dev_err(dev, "POST error; stage=0x%x\n", stage); 583 - return -1; 584 - } else if (stage != POST_STAGE_ARMFW_RDY) { 585 - if (msleep_interruptible(2000)) { 586 - dev_err(dev, "Waiting for POST aborted\n"); 587 - return -EINTR; 588 - } 589 - timeout += 2; 590 - } else { 582 + stage = be_POST_stage_get(adapter); 583 + if (stage == POST_STAGE_ARMFW_RDY) 591 584 return 0; 585 + 586 + dev_info(dev, "Waiting for POST, %ds elapsed\n", 587 + timeout); 588 + if (msleep_interruptible(2000)) { 589 + dev_err(dev, "Waiting for POST aborted\n"); 590 + return -EINTR; 592 591 } 592 + timeout += 2; 593 593 } while (timeout < 60); 594 594 595 595 dev_err(dev, "POST timeout; stage=0x%x\n", stage);
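The reworked benet POST loop polls a stage value until the firmware reports ready or a deadline passes, sleeping 2 s per iteration. A condensed, testable sketch of that shape, with a scripted stage sequence standing in for be_POST_stage_get() (the ready value here is illustrative):

```c
#include <assert.h>

#define POST_STAGE_ARMFW_RDY	0xc000	/* illustrative stage value */

/* Condensed shape of the patched be_fw_wait_ready() loop: each pass
 * reads the next scripted stage; the real driver instead sleeps 2 s
 * between hardware reads, modelled here as timeout += 2. */
static int wait_fw_ready(const unsigned short *stage_seq, int max_s)
{
	int timeout = 0;

	do {
		if (*stage_seq++ == POST_STAGE_ARMFW_RDY)
			return 0;
		timeout += 2;
	} while (timeout < max_s);

	return -1;		/* POST timeout */
}
```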
+2 -2
drivers/net/ethernet/emulex/benet/be_hw.h
··· 32 32 #define MPU_EP_CONTROL 0 33 33 34 34 /********** MPU semphore: used for SH & BE *************/ 35 - #define SLIPORT_SEMAPHORE_OFFSET_BE 0x7c 36 - #define SLIPORT_SEMAPHORE_OFFSET_SH 0x94 35 + #define SLIPORT_SEMAPHORE_OFFSET_BEx 0xac /* CSR BAR offset */ 36 + #define SLIPORT_SEMAPHORE_OFFSET_SH 0x94 /* PCI-CFG offset */ 37 37 #define POST_STAGE_MASK 0x0000FFFF 38 38 #define POST_ERR_MASK 0x1 39 39 #define POST_ERR_SHIFT 31
+10
drivers/net/ethernet/emulex/benet/be_main.c
··· 3688 3688 3689 3689 static void be_unmap_pci_bars(struct be_adapter *adapter) 3690 3690 { 3691 + if (adapter->csr) 3692 + pci_iounmap(adapter->pdev, adapter->csr); 3691 3693 if (adapter->db) 3692 3694 pci_iounmap(adapter->pdev, adapter->db); 3693 3695 } ··· 3722 3720 pci_read_config_dword(adapter->pdev, SLI_INTF_REG_OFFSET, &sli_intf); 3723 3721 adapter->if_type = (sli_intf & SLI_INTF_IF_TYPE_MASK) >> 3724 3722 SLI_INTF_IF_TYPE_SHIFT; 3723 + 3724 + if (BEx_chip(adapter) && be_physfn(adapter)) { 3725 + adapter->csr = pci_iomap(adapter->pdev, 2, 0); 3726 + if (adapter->csr == NULL) 3727 + return -ENOMEM; 3728 + } 3725 3729 3726 3730 addr = pci_iomap(adapter->pdev, db_bar(adapter), 0); 3727 3731 if (addr == NULL) ··· 4337 4329 pci_restore_state(pdev); 4338 4330 4339 4331 /* Check if card is ok and fw is ready */ 4332 + dev_info(&adapter->pdev->dev, 4333 + "Waiting for FW to be ready after EEH reset\n"); 4340 4334 status = be_fw_wait_ready(adapter); 4341 4335 if (status) 4342 4336 return PCI_ERS_RESULT_DISCONNECT;
+13
drivers/net/ethernet/intel/e1000e/ethtool.c
··· 36 36 #include <linux/delay.h> 37 37 #include <linux/vmalloc.h> 38 38 #include <linux/mdio.h> 39 + #include <linux/pm_runtime.h> 39 40 40 41 #include "e1000.h" 41 42 ··· 2230 2229 return 0; 2231 2230 } 2232 2231 2232 + static int e1000e_ethtool_begin(struct net_device *netdev) 2233 + { 2234 + return pm_runtime_get_sync(netdev->dev.parent); 2235 + } 2236 + 2237 + static void e1000e_ethtool_complete(struct net_device *netdev) 2238 + { 2239 + pm_runtime_put_sync(netdev->dev.parent); 2240 + } 2241 + 2233 2242 static const struct ethtool_ops e1000_ethtool_ops = { 2243 + .begin = e1000e_ethtool_begin, 2244 + .complete = e1000e_ethtool_complete, 2234 2245 .get_settings = e1000_get_settings, 2235 2246 .set_settings = e1000_set_settings, 2236 2247 .get_drvinfo = e1000_get_drvinfo,
+70 -1
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 782 782 } 783 783 784 784 /** 785 + * e1000_k1_workaround_lpt_lp - K1 workaround on Lynxpoint-LP 786 + * @hw: pointer to the HW structure 787 + * @link: link up bool flag 788 + * 789 + * When K1 is enabled for 1Gbps, the MAC can miss 2 DMA completion indications 790 + * preventing further DMA write requests. Workaround the issue by disabling 791 + * the de-assertion of the clock request when in 1Gpbs mode. 792 + **/ 793 + static s32 e1000_k1_workaround_lpt_lp(struct e1000_hw *hw, bool link) 794 + { 795 + u32 fextnvm6 = er32(FEXTNVM6); 796 + s32 ret_val = 0; 797 + 798 + if (link && (er32(STATUS) & E1000_STATUS_SPEED_1000)) { 799 + u16 kmrn_reg; 800 + 801 + ret_val = hw->phy.ops.acquire(hw); 802 + if (ret_val) 803 + return ret_val; 804 + 805 + ret_val = 806 + e1000e_read_kmrn_reg_locked(hw, E1000_KMRNCTRLSTA_K1_CONFIG, 807 + &kmrn_reg); 808 + if (ret_val) 809 + goto release; 810 + 811 + ret_val = 812 + e1000e_write_kmrn_reg_locked(hw, 813 + E1000_KMRNCTRLSTA_K1_CONFIG, 814 + kmrn_reg & 815 + ~E1000_KMRNCTRLSTA_K1_ENABLE); 816 + if (ret_val) 817 + goto release; 818 + 819 + usleep_range(10, 20); 820 + 821 + ew32(FEXTNVM6, fextnvm6 | E1000_FEXTNVM6_REQ_PLL_CLK); 822 + 823 + ret_val = 824 + e1000e_write_kmrn_reg_locked(hw, 825 + E1000_KMRNCTRLSTA_K1_CONFIG, 826 + kmrn_reg); 827 + release: 828 + hw->phy.ops.release(hw); 829 + } else { 830 + /* clear FEXTNVM6 bit 8 on link down or 10/100 */ 831 + ew32(FEXTNVM6, fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK); 832 + } 833 + 834 + return ret_val; 835 + } 836 + 837 + /** 785 838 * e1000_check_for_copper_link_ich8lan - Check for link (Copper) 786 839 * @hw: pointer to the HW structure 787 840 * ··· 867 814 868 815 if (hw->mac.type == e1000_pchlan) { 869 816 ret_val = e1000_k1_gig_workaround_hv(hw, link); 817 + if (ret_val) 818 + return ret_val; 819 + } 820 + 821 + /* Work-around I218 hang issue */ 822 + if ((hw->adapter->pdev->device == E1000_DEV_ID_PCH_LPTLP_I218_LM) || 823 + (hw->adapter->pdev->device == 
E1000_DEV_ID_PCH_LPTLP_I218_V)) { 824 + ret_val = e1000_k1_workaround_lpt_lp(hw, link); 870 825 if (ret_val) 871 826 return ret_val; 872 827 } ··· 4015 3954 4016 3955 phy_ctrl = er32(PHY_CTRL); 4017 3956 phy_ctrl |= E1000_PHY_CTRL_GBE_DISABLE; 3957 + 4018 3958 if (hw->phy.type == e1000_phy_i217) { 4019 - u16 phy_reg; 3959 + u16 phy_reg, device_id = hw->adapter->pdev->device; 3960 + 3961 + if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) || 3962 + (device_id == E1000_DEV_ID_PCH_LPTLP_I218_V)) { 3963 + u32 fextnvm6 = er32(FEXTNVM6); 3964 + 3965 + ew32(FEXTNVM6, fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK); 3966 + } 4020 3967 4021 3968 ret_val = hw->phy.ops.acquire(hw); 4022 3969 if (ret_val)
+2
drivers/net/ethernet/intel/e1000e/ich8lan.h
··· 92 92 #define E1000_FEXTNVM4_BEACON_DURATION_8USEC 0x7 93 93 #define E1000_FEXTNVM4_BEACON_DURATION_16USEC 0x3 94 94 95 + #define E1000_FEXTNVM6_REQ_PLL_CLK 0x00000100 96 + 95 97 #define PCIE_ICH8_SNOOP_ALL PCIE_NO_SNOOP_ALL 96 98 97 99 #define E1000_ICH_RAR_ENTRIES 7
+21 -61
drivers/net/ethernet/intel/e1000e/netdev.c
··· 4303 4303 netif_start_queue(netdev); 4304 4304 4305 4305 adapter->idle_check = true; 4306 + hw->mac.get_link_status = true; 4306 4307 pm_runtime_put(&pdev->dev); 4307 4308 4308 4309 /* fire a link status change interrupt to start the watchdog */ ··· 4663 4662 (adapter->hw.phy.media_type == e1000_media_type_copper)) { 4664 4663 int ret_val; 4665 4664 4665 + pm_runtime_get_sync(&adapter->pdev->dev); 4666 4666 ret_val = e1e_rphy(hw, MII_BMCR, &phy->bmcr); 4667 4667 ret_val |= e1e_rphy(hw, MII_BMSR, &phy->bmsr); 4668 4668 ret_val |= e1e_rphy(hw, MII_ADVERTISE, &phy->advertise); ··· 4674 4672 ret_val |= e1e_rphy(hw, MII_ESTATUS, &phy->estatus); 4675 4673 if (ret_val) 4676 4674 e_warn("Error reading PHY register\n"); 4675 + pm_runtime_put_sync(&adapter->pdev->dev); 4677 4676 } else { 4678 4677 /* Do not read PHY registers if link is not up 4679 4678 * Set values to typical power-on defaults ··· 5890 5887 return retval; 5891 5888 } 5892 5889 5893 - static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake, 5894 - bool runtime) 5890 + static int __e1000_shutdown(struct pci_dev *pdev, bool runtime) 5895 5891 { 5896 5892 struct net_device *netdev = pci_get_drvdata(pdev); 5897 5893 struct e1000_adapter *adapter = netdev_priv(netdev); ··· 5913 5911 e1000_free_irq(adapter); 5914 5912 } 5915 5913 e1000e_reset_interrupt_capability(adapter); 5916 - 5917 - retval = pci_save_state(pdev); 5918 - if (retval) 5919 - return retval; 5920 5914 5921 5915 status = er32(STATUS); 5922 5916 if (status & E1000_STATUS_LU) ··· 5969 5971 ew32(WUFC, 0); 5970 5972 } 5971 5973 5972 - *enable_wake = !!wufc; 5973 - 5974 - /* make sure adapter isn't asleep if manageability is enabled */ 5975 - if ((adapter->flags & FLAG_MNG_PT_ENABLED) || 5976 - (hw->mac.ops.check_mng_mode(hw))) 5977 - *enable_wake = true; 5978 - 5979 5974 if (adapter->hw.phy.type == e1000_phy_igp_3) 5980 5975 e1000e_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw); 5981 5976 ··· 5977 5986 */ 5978 5987 
e1000e_release_hw_control(adapter); 5979 5988 5980 - pci_disable_device(pdev); 5981 - 5982 - return 0; 5983 - } 5984 - 5985 - static void e1000_power_off(struct pci_dev *pdev, bool sleep, bool wake) 5986 - { 5987 - if (sleep && wake) { 5988 - pci_prepare_to_sleep(pdev); 5989 - return; 5990 - } 5991 - 5992 - pci_wake_from_d3(pdev, wake); 5993 - pci_set_power_state(pdev, PCI_D3hot); 5994 - } 5995 - 5996 - static void e1000_complete_shutdown(struct pci_dev *pdev, bool sleep, 5997 - bool wake) 5998 - { 5999 - struct net_device *netdev = pci_get_drvdata(pdev); 6000 - struct e1000_adapter *adapter = netdev_priv(netdev); 5989 + pci_clear_master(pdev); 6001 5990 6002 5991 /* The pci-e switch on some quad port adapters will report a 6003 5992 * correctable error when the MAC transitions from D0 to D3. To ··· 5992 6021 pcie_capability_write_word(us_dev, PCI_EXP_DEVCTL, 5993 6022 (devctl & ~PCI_EXP_DEVCTL_CERE)); 5994 6023 5995 - e1000_power_off(pdev, sleep, wake); 6024 + pci_save_state(pdev); 6025 + pci_prepare_to_sleep(pdev); 5996 6026 5997 6027 pcie_capability_write_word(us_dev, PCI_EXP_DEVCTL, devctl); 5998 - } else { 5999 - e1000_power_off(pdev, sleep, wake); 6000 6028 } 6029 + 6030 + return 0; 6001 6031 } 6002 6032 6003 6033 #ifdef CONFIG_PCIEASPM ··· 6056 6084 if (aspm_disable_flag) 6057 6085 e1000e_disable_aspm(pdev, aspm_disable_flag); 6058 6086 6059 - pci_set_power_state(pdev, PCI_D0); 6060 - pci_restore_state(pdev); 6061 - pci_save_state(pdev); 6087 + pci_set_master(pdev); 6062 6088 6063 6089 e1000e_set_interrupt_capability(adapter); 6064 6090 if (netif_running(netdev)) { ··· 6122 6152 static int e1000_suspend(struct device *dev) 6123 6153 { 6124 6154 struct pci_dev *pdev = to_pci_dev(dev); 6125 - int retval; 6126 - bool wake; 6127 6155 6128 - retval = __e1000_shutdown(pdev, &wake, false); 6129 - if (!retval) 6130 - e1000_complete_shutdown(pdev, true, wake); 6131 - 6132 - return retval; 6156 + return __e1000_shutdown(pdev, false); 6133 6157 } 6134 6158 6135 6159 
static int e1000_resume(struct device *dev) ··· 6146 6182 struct net_device *netdev = pci_get_drvdata(pdev); 6147 6183 struct e1000_adapter *adapter = netdev_priv(netdev); 6148 6184 6149 - if (e1000e_pm_ready(adapter)) { 6150 - bool wake; 6185 + if (!e1000e_pm_ready(adapter)) 6186 + return 0; 6151 6187 6152 - __e1000_shutdown(pdev, &wake, true); 6153 - } 6154 - 6155 - return 0; 6188 + return __e1000_shutdown(pdev, true); 6156 6189 } 6157 6190 6158 6191 static int e1000_idle(struct device *dev) ··· 6187 6226 6188 6227 static void e1000_shutdown(struct pci_dev *pdev) 6189 6228 { 6190 - bool wake = false; 6191 - 6192 - __e1000_shutdown(pdev, &wake, false); 6193 - 6194 - if (system_state == SYSTEM_POWER_OFF) 6195 - e1000_complete_shutdown(pdev, false, wake); 6229 + __e1000_shutdown(pdev, false); 6196 6230 } 6197 6231 6198 6232 #ifdef CONFIG_NET_POLL_CONTROLLER ··· 6308 6352 "Cannot re-enable PCI device after reset.\n"); 6309 6353 result = PCI_ERS_RESULT_DISCONNECT; 6310 6354 } else { 6311 - pci_set_master(pdev); 6312 6355 pdev->state_saved = true; 6313 6356 pci_restore_state(pdev); 6357 + pci_set_master(pdev); 6314 6358 6315 6359 pci_enable_wake(pdev, PCI_D3hot, 0); 6316 6360 pci_enable_wake(pdev, PCI_D3cold, 0); ··· 6739 6783 6740 6784 /* initialize the wol settings based on the eeprom settings */ 6741 6785 adapter->wol = adapter->eeprom_wol; 6742 - device_set_wakeup_enable(&adapter->pdev->dev, adapter->wol); 6786 + 6787 + /* make sure adapter isn't asleep if manageability is enabled */ 6788 + if (adapter->wol || (adapter->flags & FLAG_MNG_PT_ENABLED) || 6789 + (hw->mac.ops.check_mng_mode(hw))) 6790 + device_wakeup_enable(&pdev->dev); 6743 6791 6744 6792 /* save off EEPROM version number */ 6745 6793 e1000_read_nvm(&adapter->hw, 5, 1, &adapter->eeprom_vers);
+1
drivers/net/ethernet/intel/e1000e/regs.h
··· 42 42 #define E1000_FEXTNVM 0x00028 /* Future Extended NVM - RW */ 43 43 #define E1000_FEXTNVM3 0x0003C /* Future Extended NVM 3 - RW */ 44 44 #define E1000_FEXTNVM4 0x00024 /* Future Extended NVM 4 - RW */ 45 + #define E1000_FEXTNVM6 0x00010 /* Future Extended NVM 6 - RW */ 45 46 #define E1000_FEXTNVM7 0x000E4 /* Future Extended NVM 7 - RW */ 46 47 #define E1000_FCT 0x00030 /* Flow Control Type - RW */ 47 48 #define E1000_VET 0x00038 /* VLAN Ether Type - RW */
+8 -3
drivers/net/ethernet/intel/igb/e1000_82575.c
··· 1361 1361 switch (hw->phy.type) { 1362 1362 case e1000_phy_i210: 1363 1363 case e1000_phy_m88: 1364 - if (hw->phy.id == I347AT4_E_PHY_ID || 1365 - hw->phy.id == M88E1112_E_PHY_ID) 1364 + switch (hw->phy.id) { 1365 + case I347AT4_E_PHY_ID: 1366 + case M88E1112_E_PHY_ID: 1367 + case I210_I_PHY_ID: 1366 1368 ret_val = igb_copper_link_setup_m88_gen2(hw); 1367 - else 1369 + break; 1370 + default: 1368 1371 ret_val = igb_copper_link_setup_m88(hw); 1372 + break; 1373 + } 1369 1374 break; 1370 1375 case e1000_phy_igp_3: 1371 1376 ret_val = igb_copper_link_setup_igp(hw);
+1 -1
drivers/net/ethernet/intel/igb/igb.h
··· 447 447 #endif 448 448 struct i2c_algo_bit_data i2c_algo; 449 449 struct i2c_adapter i2c_adap; 450 - struct igb_i2c_client_list *i2c_clients; 450 + struct i2c_client *i2c_client; 451 451 }; 452 452 453 453 #define IGB_FLAG_HAS_MSI (1 << 0)
+14
drivers/net/ethernet/intel/igb/igb_hwmon.c
··· 39 39 #include <linux/pci.h> 40 40 41 41 #ifdef CONFIG_IGB_HWMON 42 + struct i2c_board_info i350_sensor_info = { 43 + I2C_BOARD_INFO("i350bb", (0Xf8 >> 1)), 44 + }; 45 + 42 46 /* hwmon callback functions */ 43 47 static ssize_t igb_hwmon_show_location(struct device *dev, 44 48 struct device_attribute *attr, ··· 192 188 unsigned int i; 193 189 int n_attrs; 194 190 int rc = 0; 191 + struct i2c_client *client = NULL; 195 192 196 193 /* If this method isn't defined we don't support thermals */ 197 194 if (adapter->hw.mac.ops.init_thermal_sensor_thresh == NULL) ··· 202 197 rc = (adapter->hw.mac.ops.init_thermal_sensor_thresh(&adapter->hw)); 203 198 if (rc) 204 199 goto exit; 200 + 201 + /* init i2c_client */ 202 + client = i2c_new_device(&adapter->i2c_adap, &i350_sensor_info); 203 + if (client == NULL) { 204 + dev_info(&adapter->pdev->dev, 205 + "Failed to create new i2c device..\n"); 206 + goto exit; 207 + } 208 + adapter->i2c_client = client; 205 209 206 210 /* Allocation space for max attributes 207 211 * max num sensors * values (loc, temp, max, caution)
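The `(0Xf8 >> 1)` in the `i350_sensor_info` hunk above follows the convention the removed igb_main.c comment spelled out: the hardware side hands the driver a left-shifted (8-bit style) address, while `i2c_new_device()` expects the flush-right 7-bit form. A minimal sketch of that conversion (helper names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* The hardware-facing code carries the address left-shifted by one bit;
 * the I2C core wants the plain 7-bit address. These two helpers are a
 * hypothetical illustration of the ">> 1" in the hunk above. */
static inline uint8_t i2c_addr_7bit(uint8_t shifted) { return shifted >> 1; }
static inline uint8_t i2c_addr_8bit(uint8_t addr7)   { return (uint8_t)(addr7 << 1); }
```

So the i350 thermal sensor at shifted address `0xf8` is registered with the core as 7-bit address `0x7c`.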
+2 -74
drivers/net/ethernet/intel/igb/igb_main.c
··· 1923 1923 return; 1924 1924 } 1925 1925 1926 - static const struct i2c_board_info i350_sensor_info = { 1927 - I2C_BOARD_INFO("i350bb", 0Xf8), 1928 - }; 1929 - 1930 1926 /* igb_init_i2c - Init I2C interface 1931 1927 * @adapter: pointer to adapter structure 1932 1928 * ··· 6223 6227 /* If we spanned a buffer we have a huge mess so test for it */ 6224 6228 BUG_ON(unlikely(!igb_test_staterr(rx_desc, E1000_RXD_STAT_EOP))); 6225 6229 6226 - /* Guarantee this function can be used by verifying buffer sizes */ 6227 - BUILD_BUG_ON(SKB_WITH_OVERHEAD(IGB_RX_BUFSZ) < (NET_SKB_PAD + 6228 - NET_IP_ALIGN + 6229 - IGB_TS_HDR_LEN + 6230 - ETH_FRAME_LEN + 6231 - ETH_FCS_LEN)); 6232 - 6233 6230 rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; 6234 6231 page = rx_buffer->page; 6235 6232 prefetchw(page); ··· 7713 7724 } 7714 7725 } 7715 7726 7716 - static DEFINE_SPINLOCK(i2c_clients_lock); 7717 - 7718 - /* igb_get_i2c_client - returns matching client 7719 - * in adapters's client list. 7720 - * @adapter: adapter struct 7721 - * @dev_addr: device address of i2c needed. 
7722 - */ 7723 - static struct i2c_client * 7724 - igb_get_i2c_client(struct igb_adapter *adapter, u8 dev_addr) 7725 - { 7726 - ulong flags; 7727 - struct igb_i2c_client_list *client_list; 7728 - struct i2c_client *client = NULL; 7729 - struct i2c_board_info client_info = { 7730 - I2C_BOARD_INFO("igb", 0x00), 7731 - }; 7732 - 7733 - spin_lock_irqsave(&i2c_clients_lock, flags); 7734 - client_list = adapter->i2c_clients; 7735 - 7736 - /* See if we already have an i2c_client */ 7737 - while (client_list) { 7738 - if (client_list->client->addr == (dev_addr >> 1)) { 7739 - client = client_list->client; 7740 - goto exit; 7741 - } else { 7742 - client_list = client_list->next; 7743 - } 7744 - } 7745 - 7746 - /* no client_list found, create a new one */ 7747 - client_list = kzalloc(sizeof(*client_list), GFP_ATOMIC); 7748 - if (client_list == NULL) 7749 - goto exit; 7750 - 7751 - /* dev_addr passed to us is left-shifted by 1 bit 7752 - * i2c_new_device call expects it to be flush to the right. 7753 - */ 7754 - client_info.addr = dev_addr >> 1; 7755 - client_info.platform_data = adapter; 7756 - client_list->client = i2c_new_device(&adapter->i2c_adap, &client_info); 7757 - if (client_list->client == NULL) { 7758 - dev_info(&adapter->pdev->dev, 7759 - "Failed to create new i2c device..\n"); 7760 - goto err_no_client; 7761 - } 7762 - 7763 - /* insert new client at head of list */ 7764 - client_list->next = adapter->i2c_clients; 7765 - adapter->i2c_clients = client_list; 7766 - 7767 - client = client_list->client; 7768 - goto exit; 7769 - 7770 - err_no_client: 7771 - kfree(client_list); 7772 - exit: 7773 - spin_unlock_irqrestore(&i2c_clients_lock, flags); 7774 - return client; 7775 - } 7776 - 7777 7727 /* igb_read_i2c_byte - Reads 8 bit word over I2C 7778 7728 * @hw: pointer to hardware structure 7779 7729 * @byte_offset: byte offset to read ··· 7726 7798 u8 dev_addr, u8 *data) 7727 7799 { 7728 7800 struct igb_adapter *adapter = container_of(hw, struct igb_adapter, hw); 7729 - 
struct i2c_client *this_client = igb_get_i2c_client(adapter, dev_addr); 7801 + struct i2c_client *this_client = adapter->i2c_client; 7730 7802 s32 status; 7731 7803 u16 swfw_mask = 0; 7732 7804 ··· 7763 7835 u8 dev_addr, u8 data) 7764 7836 { 7765 7837 struct igb_adapter *adapter = container_of(hw, struct igb_adapter, hw); 7766 - struct i2c_client *this_client = igb_get_i2c_client(adapter, dev_addr); 7838 + struct i2c_client *this_client = adapter->i2c_client; 7767 7839 s32 status; 7768 7840 u16 swfw_mask = E1000_SWFW_PHY0_SM; 7769 7841
+51 -4
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 1081 1081 1082 1082 1083 1083 /* mii management interface *************************************************/ 1084 + static void mv643xx_adjust_pscr(struct mv643xx_eth_private *mp) 1085 + { 1086 + u32 pscr = rdlp(mp, PORT_SERIAL_CONTROL); 1087 + u32 autoneg_disable = FORCE_LINK_PASS | 1088 + DISABLE_AUTO_NEG_SPEED_GMII | 1089 + DISABLE_AUTO_NEG_FOR_FLOW_CTRL | 1090 + DISABLE_AUTO_NEG_FOR_DUPLEX; 1091 + 1092 + if (mp->phy->autoneg == AUTONEG_ENABLE) { 1093 + /* enable auto negotiation */ 1094 + pscr &= ~autoneg_disable; 1095 + goto out_write; 1096 + } 1097 + 1098 + pscr |= autoneg_disable; 1099 + 1100 + if (mp->phy->speed == SPEED_1000) { 1101 + /* force gigabit, half duplex not supported */ 1102 + pscr |= SET_GMII_SPEED_TO_1000; 1103 + pscr |= SET_FULL_DUPLEX_MODE; 1104 + goto out_write; 1105 + } 1106 + 1107 + pscr &= ~SET_GMII_SPEED_TO_1000; 1108 + 1109 + if (mp->phy->speed == SPEED_100) 1110 + pscr |= SET_MII_SPEED_TO_100; 1111 + else 1112 + pscr &= ~SET_MII_SPEED_TO_100; 1113 + 1114 + if (mp->phy->duplex == DUPLEX_FULL) 1115 + pscr |= SET_FULL_DUPLEX_MODE; 1116 + else 1117 + pscr &= ~SET_FULL_DUPLEX_MODE; 1118 + 1119 + out_write: 1120 + wrlp(mp, PORT_SERIAL_CONTROL, pscr); 1121 + } 1122 + 1084 1123 static irqreturn_t mv643xx_eth_err_irq(int irq, void *dev_id) 1085 1124 { 1086 1125 struct mv643xx_eth_shared_private *msp = dev_id; ··· 1538 1499 mv643xx_eth_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) 1539 1500 { 1540 1501 struct mv643xx_eth_private *mp = netdev_priv(dev); 1502 + int ret; 1541 1503 1542 1504 if (mp->phy == NULL) 1543 1505 return -EINVAL; ··· 1548 1508 */ 1549 1509 cmd->advertising &= ~ADVERTISED_1000baseT_Half; 1550 1510 1551 - return phy_ethtool_sset(mp->phy, cmd); 1511 + ret = phy_ethtool_sset(mp->phy, cmd); 1512 + if (!ret) 1513 + mv643xx_adjust_pscr(mp); 1514 + return ret; 1552 1515 } 1553 1516 1554 1517 static void mv643xx_eth_get_drvinfo(struct net_device *dev, ··· 2485 2442 static int mv643xx_eth_ioctl(struct net_device 
*dev, struct ifreq *ifr, int cmd) 2486 2443 { 2487 2444 struct mv643xx_eth_private *mp = netdev_priv(dev); 2445 + int ret; 2488 2446 2489 - if (mp->phy != NULL) 2490 - return phy_mii_ioctl(mp->phy, ifr, cmd); 2447 + if (mp->phy == NULL) 2448 + return -ENOTSUPP; 2491 2449 2492 - return -EOPNOTSUPP; 2450 + ret = phy_mii_ioctl(mp->phy, ifr, cmd); 2451 + if (!ret) 2452 + mv643xx_adjust_pscr(mp); 2453 + return ret; 2493 2454 } 2494 2455 2495 2456 static int mv643xx_eth_change_mtu(struct net_device *dev, int new_mtu)
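The mv643xx_eth changes above follow one pattern in both call sites: apply the new settings to the PHY first, and only mirror them into the MAC's port serial control register (`mv643xx_adjust_pscr`) when the PHY update returned success. A reduced sketch of that pattern, with a fake register model standing in for the real driver state (all names and bit values here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit definitions, not the driver's real register layout. */
#define FORCE_LINK_PASS        (1u << 0)
#define DISABLE_AUTO_NEG_SPEED (1u << 1)
#define SET_FULL_DUPLEX_MODE   (1u << 2)

struct fake_port {
    uint32_t pscr;        /* stand-in for PORT_SERIAL_CONTROL */
    int phy_autoneg;      /* 1 = autonegotiation enabled */
    int phy_duplex_full;  /* 1 = full duplex forced */
};

/* Mirror the PHY configuration into the MAC register, as
 * mv643xx_adjust_pscr() does for the real hardware. */
static void adjust_pscr(struct fake_port *p)
{
    uint32_t pscr = p->pscr;
    uint32_t autoneg_disable = FORCE_LINK_PASS | DISABLE_AUTO_NEG_SPEED;

    if (p->phy_autoneg) {
        pscr &= ~autoneg_disable;
    } else {
        pscr |= autoneg_disable;
        if (p->phy_duplex_full)
            pscr |= SET_FULL_DUPLEX_MODE;
        else
            pscr &= ~SET_FULL_DUPLEX_MODE;
    }
    p->pscr = pscr;
}

/* Returns 0 on success, like phy_ethtool_sset(); the MAC register is
 * only re-synchronized on the success path. */
static int set_settings(struct fake_port *p, int autoneg, int duplex_full)
{
    int ret = 0; /* pretend the PHY accepted the new settings */

    p->phy_autoneg = autoneg;
    p->phy_duplex_full = duplex_full;
    if (!ret)
        adjust_pscr(p);
    return ret;
}
```

The same "update on `!ret` only" shape appears in both `mv643xx_eth_set_settings` and `mv643xx_eth_ioctl` above, so a failed PHY write never leaves the MAC half-reconfigured.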
+1 -1
drivers/net/ethernet/mellanox/mlx4/cq.c
··· 226 226 227 227 static void mlx4_cq_free_icm(struct mlx4_dev *dev, int cqn) 228 228 { 229 - u64 in_param; 229 + u64 in_param = 0; 230 230 int err; 231 231 232 232 if (mlx4_is_mfunc(dev)) {
+46 -40
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 565 565 struct mlx4_en_dev *mdev = priv->mdev; 566 566 struct mlx4_dev *dev = mdev->dev; 567 567 int qpn = priv->base_qpn; 568 - u64 mac = mlx4_en_mac_to_u64(priv->dev->dev_addr); 568 + u64 mac; 569 569 570 - en_dbg(DRV, priv, "Registering MAC: %pM for deleting\n", 571 - priv->dev->dev_addr); 572 - mlx4_unregister_mac(dev, priv->port, mac); 573 - 574 - if (dev->caps.steering_mode != MLX4_STEERING_MODE_A0) { 570 + if (dev->caps.steering_mode == MLX4_STEERING_MODE_A0) { 571 + mac = mlx4_en_mac_to_u64(priv->dev->dev_addr); 572 + en_dbg(DRV, priv, "Registering MAC: %pM for deleting\n", 573 + priv->dev->dev_addr); 574 + mlx4_unregister_mac(dev, priv->port, mac); 575 + } else { 575 576 struct mlx4_mac_entry *entry; 576 577 struct hlist_node *tmp; 577 578 struct hlist_head *bucket; 578 - unsigned int mac_hash; 579 + unsigned int i; 579 580 580 - mac_hash = priv->dev->dev_addr[MLX4_EN_MAC_HASH_IDX]; 581 - bucket = &priv->mac_hash[mac_hash]; 582 - hlist_for_each_entry_safe(entry, tmp, bucket, hlist) { 583 - if (ether_addr_equal_64bits(entry->mac, 584 - priv->dev->dev_addr)) { 585 - en_dbg(DRV, priv, "Releasing qp: port %d, MAC %pM, qpn %d\n", 586 - priv->port, priv->dev->dev_addr, qpn); 581 + for (i = 0; i < MLX4_EN_MAC_HASH_SIZE; ++i) { 582 + bucket = &priv->mac_hash[i]; 583 + hlist_for_each_entry_safe(entry, tmp, bucket, hlist) { 584 + mac = mlx4_en_mac_to_u64(entry->mac); 585 + en_dbg(DRV, priv, "Registering MAC: %pM for deleting\n", 586 + entry->mac); 587 587 mlx4_en_uc_steer_release(priv, entry->mac, 588 588 qpn, entry->reg_id); 589 - mlx4_qp_release_range(dev, qpn, 1); 590 589 590 + mlx4_unregister_mac(dev, priv->port, mac); 591 591 hlist_del_rcu(&entry->hlist); 592 592 kfree_rcu(entry, rcu); 593 - break; 594 593 } 595 594 } 595 + 596 + en_dbg(DRV, priv, "Releasing qp: port %d, qpn %d\n", 597 + priv->port, qpn); 598 + mlx4_qp_release_range(dev, qpn, 1); 599 + priv->flags &= ~MLX4_EN_FLAG_FORCE_PROMISC; 596 600 } 597 601 } 598 602 ··· 654 650 return mac; 655 651 } 
656 652 657 - static int mlx4_en_set_mac(struct net_device *dev, void *addr) 653 + static int mlx4_en_do_set_mac(struct mlx4_en_priv *priv) 658 654 { 659 - struct mlx4_en_priv *priv = netdev_priv(dev); 660 - struct mlx4_en_dev *mdev = priv->mdev; 661 - struct sockaddr *saddr = addr; 662 - 663 - if (!is_valid_ether_addr(saddr->sa_data)) 664 - return -EADDRNOTAVAIL; 665 - 666 - memcpy(dev->dev_addr, saddr->sa_data, ETH_ALEN); 667 - queue_work(mdev->workqueue, &priv->mac_task); 668 - return 0; 669 - } 670 - 671 - static void mlx4_en_do_set_mac(struct work_struct *work) 672 - { 673 - struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv, 674 - mac_task); 675 - struct mlx4_en_dev *mdev = priv->mdev; 676 655 int err = 0; 677 656 678 - mutex_lock(&mdev->state_lock); 679 657 if (priv->port_up) { 680 658 /* Remove old MAC and insert the new one */ 681 659 err = mlx4_en_replace_mac(priv, priv->base_qpn, ··· 669 683 } else 670 684 en_dbg(HW, priv, "Port is down while registering mac, exiting...\n"); 671 685 686 + return err; 687 + } 688 + 689 + static int mlx4_en_set_mac(struct net_device *dev, void *addr) 690 + { 691 + struct mlx4_en_priv *priv = netdev_priv(dev); 692 + struct mlx4_en_dev *mdev = priv->mdev; 693 + struct sockaddr *saddr = addr; 694 + int err; 695 + 696 + if (!is_valid_ether_addr(saddr->sa_data)) 697 + return -EADDRNOTAVAIL; 698 + 699 + memcpy(dev->dev_addr, saddr->sa_data, ETH_ALEN); 700 + 701 + mutex_lock(&mdev->state_lock); 702 + err = mlx4_en_do_set_mac(priv); 672 703 mutex_unlock(&mdev->state_lock); 704 + 705 + return err; 673 706 } 674 707 675 708 static void mlx4_en_clear_list(struct net_device *dev) ··· 1353 1348 queue_delayed_work(mdev->workqueue, &priv->stats_task, STATS_DELAY); 1354 1349 } 1355 1350 if (mdev->mac_removed[MLX4_MAX_PORTS + 1 - priv->port]) { 1356 - queue_work(mdev->workqueue, &priv->mac_task); 1351 + mlx4_en_do_set_mac(priv); 1357 1352 mdev->mac_removed[MLX4_MAX_PORTS + 1 - priv->port] = 0; 1358 1353 } 1359 1354 
mutex_unlock(&mdev->state_lock); ··· 1833 1828 } 1834 1829 1835 1830 #ifdef CONFIG_RFS_ACCEL 1836 - priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->mdev->dev->caps.comp_pool); 1837 - if (!priv->dev->rx_cpu_rmap) 1838 - goto err; 1831 + if (priv->mdev->dev->caps.comp_pool) { 1832 + priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->mdev->dev->caps.comp_pool); 1833 + if (!priv->dev->rx_cpu_rmap) 1834 + goto err; 1835 + } 1839 1836 #endif 1840 1837 1841 1838 return 0; ··· 2085 2078 priv->msg_enable = MLX4_EN_MSG_LEVEL; 2086 2079 spin_lock_init(&priv->stats_lock); 2087 2080 INIT_WORK(&priv->rx_mode_task, mlx4_en_do_set_rx_mode); 2088 - INIT_WORK(&priv->mac_task, mlx4_en_do_set_mac); 2089 2081 INIT_WORK(&priv->watchdog_task, mlx4_en_restart); 2090 2082 INIT_WORK(&priv->linkstate_task, mlx4_en_linkstate); 2091 2083 INIT_DELAYED_WORK(&priv->stats_task, mlx4_en_do_get_stats);
+8
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 787 787 bmme_flags &= ~MLX4_BMME_FLAG_TYPE_2_WIN; 788 788 MLX4_PUT(outbox->buf, bmme_flags, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); 789 789 790 + /* turn off device-managed steering capability if not enabled */ 791 + if (dev->caps.steering_mode != MLX4_STEERING_MODE_DEVICE_MANAGED) { 792 + MLX4_GET(field, outbox->buf, 793 + QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET); 794 + field &= 0x7f; 795 + MLX4_PUT(outbox->buf, field, 796 + QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET); 797 + } 790 798 return 0; 791 799 } 792 800
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 1555 1555 1556 1556 void mlx4_counter_free(struct mlx4_dev *dev, u32 idx) 1557 1557 { 1558 - u64 in_param; 1558 + u64 in_param = 0; 1559 1559 1560 1560 if (mlx4_is_mfunc(dev)) { 1561 1561 set_param_l(&in_param, idx);
+1 -1
drivers/net/ethernet/mellanox/mlx4/mlx4.h
··· 1235 1235 1236 1236 static inline void set_param_l(u64 *arg, u32 val) 1237 1237 { 1238 - *((u32 *)arg) = val; 1238 + *arg = (*arg & 0xffffffff00000000ULL) | (u64) val; 1239 1239 } 1240 1240 1241 1241 static inline void set_param_h(u64 *arg, u32 val)
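The one-line `set_param_l` change above is the key to all the `u64 in_param = 0;` hunks in this series: the old `*((u32 *)arg) = val;` stored `val` into whichever half of the `u64` sits at the lower address (the *high* half on big-endian machines) and left the other half holding whatever garbage was on the stack. The new version is endianness-safe but, because it preserves the upper 32 bits, callers must now start from a zeroed value. A standalone sketch (the `set_param_h` body is assumed here, mirroring the lower-half helper, since the diff only shows its signature):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;
typedef uint32_t u32;

/* Endianness-safe: replace only the low 32 bits, keep the high 32. */
static inline void set_param_l(u64 *arg, u32 val)
{
    *arg = (*arg & 0xffffffff00000000ULL) | (u64)val;
}

/* Assumed counterpart: replace only the high 32 bits. */
static inline void set_param_h(u64 *arg, u32 val)
{
    *arg = (*arg & 0x00000000ffffffffULL) | ((u64)val << 32);
}
```

Since the helper now deliberately preserves the other half, an uninitialized `u64 in_param;` would pass stack garbage through in the upper bits — hence the sweep of `= 0` initializations in cq.c, main.c, mr.c, pd.c, port.c, qp.c and srq.c.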
-1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 509 509 struct mlx4_en_cq rx_cq[MAX_RX_RINGS]; 510 510 struct mlx4_qp drop_qp; 511 511 struct work_struct rx_mode_task; 512 - struct work_struct mac_task; 513 512 struct work_struct watchdog_task; 514 513 struct work_struct linkstate_task; 515 514 struct delayed_work stats_task;
+5 -5
drivers/net/ethernet/mellanox/mlx4/mr.c
··· 183 183 184 184 static u32 mlx4_alloc_mtt_range(struct mlx4_dev *dev, int order) 185 185 { 186 - u64 in_param; 186 + u64 in_param = 0; 187 187 u64 out_param; 188 188 int err; 189 189 ··· 240 240 241 241 static void mlx4_free_mtt_range(struct mlx4_dev *dev, u32 offset, int order) 242 242 { 243 - u64 in_param; 243 + u64 in_param = 0; 244 244 int err; 245 245 246 246 if (mlx4_is_mfunc(dev)) { ··· 351 351 352 352 static void mlx4_mpt_release(struct mlx4_dev *dev, u32 index) 353 353 { 354 - u64 in_param; 354 + u64 in_param = 0; 355 355 356 356 if (mlx4_is_mfunc(dev)) { 357 357 set_param_l(&in_param, index); ··· 374 374 375 375 static int mlx4_mpt_alloc_icm(struct mlx4_dev *dev, u32 index) 376 376 { 377 - u64 param; 377 + u64 param = 0; 378 378 379 379 if (mlx4_is_mfunc(dev)) { 380 380 set_param_l(&param, index); ··· 395 395 396 396 static void mlx4_mpt_free_icm(struct mlx4_dev *dev, u32 index) 397 397 { 398 - u64 in_param; 398 + u64 in_param = 0; 399 399 400 400 if (mlx4_is_mfunc(dev)) { 401 401 set_param_l(&in_param, index);
+1 -1
drivers/net/ethernet/mellanox/mlx4/pd.c
··· 101 101 102 102 void mlx4_xrcd_free(struct mlx4_dev *dev, u32 xrcdn) 103 103 { 104 - u64 in_param; 104 + u64 in_param = 0; 105 105 int err; 106 106 107 107 if (mlx4_is_mfunc(dev)) {
+4 -4
drivers/net/ethernet/mellanox/mlx4/port.c
··· 175 175 176 176 int mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac) 177 177 { 178 - u64 out_param; 178 + u64 out_param = 0; 179 179 int err; 180 180 181 181 if (mlx4_is_mfunc(dev)) { ··· 222 222 223 223 void mlx4_unregister_mac(struct mlx4_dev *dev, u8 port, u64 mac) 224 224 { 225 - u64 out_param; 225 + u64 out_param = 0; 226 226 227 227 if (mlx4_is_mfunc(dev)) { 228 228 set_param_l(&out_param, port); ··· 361 361 362 362 int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index) 363 363 { 364 - u64 out_param; 364 + u64 out_param = 0; 365 365 int err; 366 366 367 367 if (mlx4_is_mfunc(dev)) { ··· 406 406 407 407 void mlx4_unregister_vlan(struct mlx4_dev *dev, u8 port, int index) 408 408 { 409 - u64 in_param; 409 + u64 in_param = 0; 410 410 int err; 411 411 412 412 if (mlx4_is_mfunc(dev)) {
+4 -4
drivers/net/ethernet/mellanox/mlx4/qp.c
··· 222 222 223 223 int mlx4_qp_reserve_range(struct mlx4_dev *dev, int cnt, int align, int *base) 224 224 { 225 - u64 in_param; 225 + u64 in_param = 0; 226 226 u64 out_param; 227 227 int err; 228 228 ··· 255 255 256 256 void mlx4_qp_release_range(struct mlx4_dev *dev, int base_qpn, int cnt) 257 257 { 258 - u64 in_param; 258 + u64 in_param = 0; 259 259 int err; 260 260 261 261 if (mlx4_is_mfunc(dev)) { ··· 319 319 320 320 static int mlx4_qp_alloc_icm(struct mlx4_dev *dev, int qpn) 321 321 { 322 - u64 param; 322 + u64 param = 0; 323 323 324 324 if (mlx4_is_mfunc(dev)) { 325 325 set_param_l(&param, qpn); ··· 344 344 345 345 static void mlx4_qp_free_icm(struct mlx4_dev *dev, int qpn) 346 346 { 347 - u64 in_param; 347 + u64 in_param = 0; 348 348 349 349 if (mlx4_is_mfunc(dev)) { 350 350 set_param_l(&in_param, qpn);
+3
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 2990 2990 u8 steer_type_mask = 2; 2991 2991 enum mlx4_steer_type type = (gid[7] & steer_type_mask) >> 1; 2992 2992 2993 + if (dev->caps.steering_mode != MLX4_STEERING_MODE_B0) 2994 + return -EINVAL; 2995 + 2993 2996 qpn = vhcr->in_modifier & 0xffffff; 2994 2997 err = get_res(dev, slave, qpn, RES_QP, &rqp); 2995 2998 if (err)
+1 -1
drivers/net/ethernet/mellanox/mlx4/srq.c
··· 149 149 150 150 static void mlx4_srq_free_icm(struct mlx4_dev *dev, int srqn) 151 151 { 152 - u64 in_param; 152 + u64 in_param = 0; 153 153 154 154 if (mlx4_is_mfunc(dev)) { 155 155 set_param_l(&in_param, srqn);
+2 -2
drivers/net/ethernet/sfc/efx.h
··· 171 171 * TX scheduler is stopped when we're done and before 172 172 * netif_device_present() becomes false. 173 173 */ 174 - netif_tx_lock(dev); 174 + netif_tx_lock_bh(dev); 175 175 netif_device_detach(dev); 176 - netif_tx_unlock(dev); 176 + netif_tx_unlock_bh(dev); 177 177 } 178 178 179 179 #endif /* EFX_EFX_H */
+1 -1
drivers/net/ethernet/sfc/rx.c
··· 215 215 rx_buf = efx_rx_buffer(rx_queue, index); 216 216 rx_buf->dma_addr = dma_addr + EFX_PAGE_IP_ALIGN; 217 217 rx_buf->u.page = page; 218 - rx_buf->page_offset = page_offset; 218 + rx_buf->page_offset = page_offset + EFX_PAGE_IP_ALIGN; 219 219 rx_buf->len = efx->rx_buffer_len - EFX_PAGE_IP_ALIGN; 220 220 rx_buf->flags = EFX_RX_BUF_PAGE; 221 221 ++rx_queue->added_count;
+3
drivers/net/hippi/rrunner.c
··· 202 202 return 0; 203 203 204 204 out: 205 + if (rrpriv->evt_ring) 206 + pci_free_consistent(pdev, EVT_RING_SIZE, rrpriv->evt_ring, 207 + rrpriv->evt_ring_dma); 205 208 if (rrpriv->rx_ring) 206 209 pci_free_consistent(pdev, RX_TOTAL_SIZE, rrpriv->rx_ring, 207 210 rrpriv->rx_ring_dma);
+1
drivers/net/macvlan.c
··· 660 660 ether_setup(dev); 661 661 662 662 dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | IFF_TX_SKB_SHARING); 663 + dev->priv_flags |= IFF_UNICAST_FLT; 663 664 dev->netdev_ops = &macvlan_netdev_ops; 664 665 dev->destructor = free_netdev; 665 666 dev->header_ops = &macvlan_hard_header_ops,
+2
drivers/net/team/team.c
··· 1138 1138 netdev_upper_dev_unlink(port_dev, dev); 1139 1139 team_port_disable_netpoll(port); 1140 1140 vlan_vids_del_by_dev(port_dev, dev); 1141 + dev_uc_unsync(port_dev, dev); 1142 + dev_mc_unsync(port_dev, dev); 1141 1143 dev_close(port_dev); 1142 1144 team_port_leave(team, port); 1143 1145
+2
drivers/net/tun.c
··· 747 747 goto drop; 748 748 skb_orphan(skb); 749 749 750 + nf_reset(skb); 751 + 750 752 /* Enqueue packet */ 751 753 skb_queue_tail(&tfile->socket.sk->sk_receive_queue, skb); 752 754
+1
drivers/net/vmxnet3/vmxnet3_drv.c
··· 2958 2958 2959 2959 adapter->num_rx_queues = num_rx_queues; 2960 2960 adapter->num_tx_queues = num_tx_queues; 2961 + adapter->rx_buf_per_pkt = 1; 2961 2962 2962 2963 size = sizeof(struct Vmxnet3_TxQueueDesc) * adapter->num_tx_queues; 2963 2964 size += sizeof(struct Vmxnet3_RxQueueDesc) * adapter->num_rx_queues;
+6
drivers/net/vmxnet3/vmxnet3_ethtool.c
··· 472 472 VMXNET3_RX_RING_MAX_SIZE) 473 473 return -EINVAL; 474 474 475 + /* if adapter not yet initialized, do nothing */ 476 + if (adapter->rx_buf_per_pkt == 0) { 477 + netdev_err(netdev, "adapter not completely initialized, " 478 + "ring size cannot be changed yet\n"); 479 + return -EOPNOTSUPP; 480 + } 475 481 476 482 /* round it up to a multiple of VMXNET3_RING_SIZE_ALIGN */ 477 483 new_tx_ring_size = (param->tx_pending + VMXNET3_RING_SIZE_MASK) &
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
··· 70 70 /* 71 71 * Version numbers 72 72 */ 73 - #define VMXNET3_DRIVER_VERSION_STRING "1.1.29.0-k" 73 + #define VMXNET3_DRIVER_VERSION_STRING "1.1.30.0-k" 74 74 75 75 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */ 76 - #define VMXNET3_DRIVER_VERSION_NUM 0x01011D00 76 + #define VMXNET3_DRIVER_VERSION_NUM 0x01011E00 77 77 78 78 #if defined(CONFIG_PCI_MSI) 79 79 /* RSS only makes sense if MSI-X is supported. */
+10
drivers/net/vxlan.c
··· 961 961 iph->ttl = ttl ? : ip4_dst_hoplimit(&rt->dst); 962 962 tunnel_ip_select_ident(skb, old_iph, &rt->dst); 963 963 964 + nf_reset(skb); 965 + 964 966 vxlan_set_owner(dev, skb); 965 967 966 968 /* See iptunnel_xmit() */ ··· 1506 1504 static __net_exit void vxlan_exit_net(struct net *net) 1507 1505 { 1508 1506 struct vxlan_net *vn = net_generic(net, vxlan_net_id); 1507 + struct vxlan_dev *vxlan; 1508 + unsigned h; 1509 + 1510 + rtnl_lock(); 1511 + for (h = 0; h < VNI_HASH_SIZE; ++h) 1512 + hlist_for_each_entry(vxlan, &vn->vni_list[h], hlist) 1513 + dev_close(vxlan->dev); 1514 + rtnl_unlock(); 1509 1515 1510 1516 if (vn->sock) { 1511 1517 sk_release_kernel(vn->sock->sk);
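The new `vxlan_exit_net` teardown above walks every bucket of the fixed-size VNI hash table under `rtnl_lock()` and closes each device before the shared kernel socket is released. The traversal itself is the ordinary "for each bucket, for each entry" idiom; a self-contained sketch with a fake singly-linked table standing in for the `hlist` machinery (all structures here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define VNI_HASH_SIZE 4  /* tiny stand-in for the real table size */

struct fake_dev {
    int open;                /* 1 while the device is up */
    struct fake_dev *next;   /* next entry in the same bucket */
};

struct fake_net {
    struct fake_dev *vni_list[VNI_HASH_SIZE];
};

/* Visit every bucket and close every device found, returning how many
 * were actually transitioned from open to closed. */
static int close_all(struct fake_net *vn)
{
    int closed = 0;

    for (unsigned h = 0; h < VNI_HASH_SIZE; ++h)
        for (struct fake_dev *d = vn->vni_list[h]; d; d = d->next)
            if (d->open) {
                d->open = 0;
                ++closed;
            }
    return closed;
}
```

In the real hunk the per-device action is `dev_close(vxlan->dev)`, and the whole walk sits inside `rtnl_lock()`/`rtnl_unlock()` so the device list cannot change underneath it.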
+1 -1
drivers/net/wireless/iwlwifi/dvm/sta.c
··· 151 151 sta_id, sta->sta.addr, flags & CMD_ASYNC ? "a" : ""); 152 152 153 153 if (!(flags & CMD_ASYNC)) { 154 - cmd.flags |= CMD_WANT_SKB | CMD_WANT_HCMD; 154 + cmd.flags |= CMD_WANT_SKB; 155 155 might_sleep(); 156 156 } 157 157
+1 -1
drivers/net/wireless/iwlwifi/iwl-devtrace.h
··· 363 363 __entry->flags = cmd->flags; 364 364 memcpy(__get_dynamic_array(hcmd), hdr, sizeof(*hdr)); 365 365 366 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 366 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 367 367 if (!cmd->len[i]) 368 368 continue; 369 369 memcpy((u8 *)__get_dynamic_array(hcmd) + offset,
+1 -2
drivers/net/wireless/iwlwifi/iwl-drv.c
··· 1102 1102 1103 1103 /* shared module parameters */ 1104 1104 struct iwl_mod_params iwlwifi_mod_params = { 1105 - .amsdu_size_8K = 1, 1106 1105 .restart_fw = 1, 1107 1106 .plcp_check = true, 1108 1107 .bt_coex_active = true, ··· 1206 1207 "disable 11n functionality, bitmap: 1: full, 2: agg TX, 4: agg RX"); 1207 1208 module_param_named(amsdu_size_8K, iwlwifi_mod_params.amsdu_size_8K, 1208 1209 int, S_IRUGO); 1209 - MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size"); 1210 + MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size (default 0)"); 1210 1211 module_param_named(fw_restart, iwlwifi_mod_params.restart_fw, int, S_IRUGO); 1211 1212 MODULE_PARM_DESC(fw_restart, "restart firmware in case of error"); 1212 1213
+1 -1
drivers/net/wireless/iwlwifi/iwl-modparams.h
··· 91 91 * @sw_crypto: using hardware encryption, default = 0 92 92 * @disable_11n: disable 11n capabilities, default = 0, 93 93 * use IWL_DISABLE_HT_* constants 94 - * @amsdu_size_8K: enable 8K amsdu size, default = 1 94 + * @amsdu_size_8K: enable 8K amsdu size, default = 0 95 95 * @restart_fw: restart firmware, default = 1 96 96 * @plcp_check: enable plcp health check, default = true 97 97 * @wd_disable: enable stuck queue check, default = 0
+9 -11
drivers/net/wireless/iwlwifi/iwl-trans.h
··· 186 186 * @CMD_ASYNC: Return right away and don't want for the response 187 187 * @CMD_WANT_SKB: valid only with CMD_SYNC. The caller needs the buffer of the 188 188 * response. The caller needs to call iwl_free_resp when done. 189 - * @CMD_WANT_HCMD: The caller needs to get the HCMD that was sent in the 190 - * response handler. Chunks flagged by %IWL_HCMD_DFL_NOCOPY won't be 191 - * copied. The pointer passed to the response handler is in the transport 192 - * ownership and don't need to be freed by the op_mode. This also means 193 - * that the pointer is invalidated after the op_mode's handler returns. 194 189 * @CMD_ON_DEMAND: This command is sent by the test mode pipe. 195 190 */ 196 191 enum CMD_MODE { 197 192 CMD_SYNC = 0, 198 193 CMD_ASYNC = BIT(0), 199 194 CMD_WANT_SKB = BIT(1), 200 - CMD_WANT_HCMD = BIT(2), 201 - CMD_ON_DEMAND = BIT(3), 195 + CMD_ON_DEMAND = BIT(2), 202 196 }; 203 197 204 198 #define DEF_CMD_PAYLOAD_SIZE 320 ··· 211 217 212 218 #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_device_cmd)) 213 219 214 - #define IWL_MAX_CMD_TFDS 2 220 + /* 221 + * number of transfer buffers (fragments) per transmit frame descriptor; 222 + * this is just the driver's idea, the hardware supports 20 223 + */ 224 + #define IWL_MAX_CMD_TBS_PER_TFD 2 215 225 216 226 /** 217 227 * struct iwl_hcmd_dataflag - flag for each one of the chunks of the command ··· 252 254 * @id: id of the host command 253 255 */ 254 256 struct iwl_host_cmd { 255 - const void *data[IWL_MAX_CMD_TFDS]; 257 + const void *data[IWL_MAX_CMD_TBS_PER_TFD]; 256 258 struct iwl_rx_packet *resp_pkt; 257 259 unsigned long _rx_page_addr; 258 260 u32 _rx_page_order; 259 261 int handler_status; 260 262 261 263 u32 flags; 262 - u16 len[IWL_MAX_CMD_TFDS]; 263 - u8 dataflags[IWL_MAX_CMD_TFDS]; 264 + u16 len[IWL_MAX_CMD_TBS_PER_TFD]; 265 + u8 dataflags[IWL_MAX_CMD_TBS_PER_TFD]; 264 266 u8 id; 265 267 }; 266 268
+10 -8
drivers/net/wireless/iwlwifi/mvm/fw-api.h
··· 762 762 #define IWL_RX_INFO_PHY_CNT 8 763 763 #define IWL_RX_INFO_AGC_IDX 1 764 764 #define IWL_RX_INFO_RSSI_AB_IDX 2 765 - #define IWL_RX_INFO_RSSI_C_IDX 3 766 - #define IWL_OFDM_AGC_DB_MSK 0xfe00 767 - #define IWL_OFDM_AGC_DB_POS 9 765 + #define IWL_OFDM_AGC_A_MSK 0x0000007f 766 + #define IWL_OFDM_AGC_A_POS 0 767 + #define IWL_OFDM_AGC_B_MSK 0x00003f80 768 + #define IWL_OFDM_AGC_B_POS 7 769 + #define IWL_OFDM_AGC_CODE_MSK 0x3fe00000 770 + #define IWL_OFDM_AGC_CODE_POS 20 768 771 #define IWL_OFDM_RSSI_INBAND_A_MSK 0x00ff 769 - #define IWL_OFDM_RSSI_ALLBAND_A_MSK 0xff00 770 772 #define IWL_OFDM_RSSI_A_POS 0 773 + #define IWL_OFDM_RSSI_ALLBAND_A_MSK 0xff00 774 + #define IWL_OFDM_RSSI_ALLBAND_A_POS 8 771 775 #define IWL_OFDM_RSSI_INBAND_B_MSK 0xff0000 772 - #define IWL_OFDM_RSSI_ALLBAND_B_MSK 0xff000000 773 776 #define IWL_OFDM_RSSI_B_POS 16 774 - #define IWL_OFDM_RSSI_INBAND_C_MSK 0x00ff 775 - #define IWL_OFDM_RSSI_ALLBAND_C_MSK 0xff00 776 - #define IWL_OFDM_RSSI_C_POS 0 777 + #define IWL_OFDM_RSSI_ALLBAND_B_MSK 0xff000000 778 + #define IWL_OFDM_RSSI_ALLBAND_B_POS 24 777 779 778 780 /** 779 781 * struct iwl_rx_phy_info - phy info
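The reworked fw-api.h defines above split the per-chain AGC and RSSI fields into explicit mask/position pairs. They are consumed with the usual `(value & MSK) >> POS` idiom; a minimal sketch using two of the new pairs (the `field_get` helper is illustrative, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

/* Mask/position pairs copied from the hunk above. */
#define IWL_OFDM_AGC_A_MSK 0x0000007f
#define IWL_OFDM_AGC_A_POS 0
#define IWL_OFDM_AGC_B_MSK 0x00003f80
#define IWL_OFDM_AGC_B_POS 7

/* Generic bitfield extraction: mask out the field, shift it down. */
static inline uint32_t field_get(uint32_t val, uint32_t msk, int pos)
{
    return (val & msk) >> pos;
}
```

With adjacent, non-overlapping masks like these, both chains' AGC values unpack independently from the same 32-bit phy-info word.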
+5 -128
drivers/net/wireless/iwlwifi/mvm/fw.c
··· 79 79 #define UCODE_VALID_OK cpu_to_le32(0x1) 80 80 81 81 /* Default calibration values for WkP - set to INIT image w/o running */ 82 - static const u8 wkp_calib_values_bb_filter[] = { 0xbf, 0x00, 0x5f, 0x00, 0x2f, 83 - 0x00, 0x18, 0x00 }; 84 - static const u8 wkp_calib_values_rx_dc[] = { 0x7f, 0x7f, 0x7f, 0x7f, 0x7f, 85 - 0x7f, 0x7f, 0x7f }; 86 - static const u8 wkp_calib_values_tx_lo[] = { 0x00, 0x00, 0x00, 0x00 }; 87 - static const u8 wkp_calib_values_tx_iq[] = { 0xff, 0x00, 0xff, 0x00, 0x00, 88 - 0x00 }; 89 - static const u8 wkp_calib_values_rx_iq[] = { 0xff, 0x00, 0x00, 0x00 }; 90 82 static const u8 wkp_calib_values_rx_iq_skew[] = { 0x00, 0x00, 0x01, 0x00 }; 91 83 static const u8 wkp_calib_values_tx_iq_skew[] = { 0x01, 0x00, 0x00, 0x00 }; 92 - static const u8 wkp_calib_values_xtal[] = { 0xd2, 0xd2 }; 93 84 94 85 struct iwl_calib_default_data { 95 86 u16 size; ··· 90 99 #define CALIB_SIZE_N_DATA(_buf) {.size = sizeof(_buf), .data = &_buf} 91 100 92 101 static const struct iwl_calib_default_data wkp_calib_default_data[12] = { 93 - [5] = CALIB_SIZE_N_DATA(wkp_calib_values_rx_dc), 94 - [6] = CALIB_SIZE_N_DATA(wkp_calib_values_bb_filter), 95 - [7] = CALIB_SIZE_N_DATA(wkp_calib_values_tx_lo), 96 - [8] = CALIB_SIZE_N_DATA(wkp_calib_values_tx_iq), 97 102 [9] = CALIB_SIZE_N_DATA(wkp_calib_values_tx_iq_skew), 98 - [10] = CALIB_SIZE_N_DATA(wkp_calib_values_rx_iq), 99 103 [11] = CALIB_SIZE_N_DATA(wkp_calib_values_rx_iq_skew), 100 104 }; 101 105 ··· 227 241 228 242 return 0; 229 243 } 230 - #define IWL_HW_REV_ID_RAINBOW 0x2 231 - #define IWL_PROJ_TYPE_LHP 0x5 232 - 233 - static u32 iwl_mvm_build_phy_cfg(struct iwl_mvm *mvm) 234 - { 235 - struct iwl_nvm_data *data = mvm->nvm_data; 236 - /* Temp calls to static definitions, will be changed to CSR calls */ 237 - u8 hw_rev_id = IWL_HW_REV_ID_RAINBOW; 238 - u8 project_type = IWL_PROJ_TYPE_LHP; 239 - 240 - return data->radio_cfg_dash | (data->radio_cfg_step << 2) | 241 - (hw_rev_id << 4) | ((project_type & 0x7f) << 6) | 242 
- (data->valid_tx_ant << 16) | (data->valid_rx_ant << 20); 243 - } 244 244 245 245 static int iwl_send_phy_cfg_cmd(struct iwl_mvm *mvm) 246 246 { ··· 234 262 enum iwl_ucode_type ucode_type = mvm->cur_ucode; 235 263 236 264 /* Set parameters */ 237 - phy_cfg_cmd.phy_cfg = cpu_to_le32(iwl_mvm_build_phy_cfg(mvm)); 265 + phy_cfg_cmd.phy_cfg = cpu_to_le32(mvm->fw->phy_config); 238 266 phy_cfg_cmd.calib_control.event_trigger = 239 267 mvm->fw->default_calib[ucode_type].event_trigger; 240 268 phy_cfg_cmd.calib_control.flow_trigger = ··· 245 273 246 274 return iwl_mvm_send_cmd_pdu(mvm, PHY_CONFIGURATION_CMD, CMD_SYNC, 247 275 sizeof(phy_cfg_cmd), &phy_cfg_cmd); 248 - } 249 - 250 - /* Starting with the new PHY DB implementation - New calibs are enabled */ 251 - /* Value - 0x405e7 */ 252 - #define IWL_CALIB_DEFAULT_FLOW_INIT (IWL_CALIB_CFG_XTAL_IDX |\ 253 - IWL_CALIB_CFG_TEMPERATURE_IDX |\ 254 - IWL_CALIB_CFG_VOLTAGE_READ_IDX |\ 255 - IWL_CALIB_CFG_DC_IDX |\ 256 - IWL_CALIB_CFG_BB_FILTER_IDX |\ 257 - IWL_CALIB_CFG_LO_LEAKAGE_IDX |\ 258 - IWL_CALIB_CFG_TX_IQ_IDX |\ 259 - IWL_CALIB_CFG_RX_IQ_IDX |\ 260 - IWL_CALIB_CFG_AGC_IDX) 261 - 262 - #define IWL_CALIB_DEFAULT_EVENT_INIT 0x0 263 - 264 - /* Value 0x41567 */ 265 - #define IWL_CALIB_DEFAULT_FLOW_RUN (IWL_CALIB_CFG_XTAL_IDX |\ 266 - IWL_CALIB_CFG_TEMPERATURE_IDX |\ 267 - IWL_CALIB_CFG_VOLTAGE_READ_IDX |\ 268 - IWL_CALIB_CFG_BB_FILTER_IDX |\ 269 - IWL_CALIB_CFG_DC_IDX |\ 270 - IWL_CALIB_CFG_TX_IQ_IDX |\ 271 - IWL_CALIB_CFG_RX_IQ_IDX |\ 272 - IWL_CALIB_CFG_SENSITIVITY_IDX |\ 273 - IWL_CALIB_CFG_AGC_IDX) 274 - 275 - #define IWL_CALIB_DEFAULT_EVENT_RUN (IWL_CALIB_CFG_XTAL_IDX |\ 276 - IWL_CALIB_CFG_TEMPERATURE_IDX |\ 277 - IWL_CALIB_CFG_VOLTAGE_READ_IDX |\ 278 - IWL_CALIB_CFG_TX_PWR_IDX |\ 279 - IWL_CALIB_CFG_DC_IDX |\ 280 - IWL_CALIB_CFG_TX_IQ_IDX |\ 281 - IWL_CALIB_CFG_SENSITIVITY_IDX) 282 - 283 - /* 284 - * Sets the calibrations trigger values that will be sent to the FW for runtime 285 - * and init calibrations. 
286 - * The ones given in the FW TLV are not correct. 287 - */ 288 - static void iwl_set_default_calib_trigger(struct iwl_mvm *mvm) 289 - { 290 - struct iwl_tlv_calib_ctrl default_calib; 291 - 292 - /* 293 - * WkP FW TLV calib bits are wrong, overwrite them. 294 - * This defines the dynamic calibrations which are implemented in the 295 - * uCode both for init(flow) calculation and event driven calibs. 296 - */ 297 - 298 - /* Init Image */ 299 - default_calib.event_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_EVENT_INIT); 300 - default_calib.flow_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_FLOW_INIT); 301 - 302 - if (default_calib.event_trigger != 303 - mvm->fw->default_calib[IWL_UCODE_INIT].event_trigger) 304 - IWL_ERR(mvm, 305 - "Updating the event calib for INIT image: 0x%x -> 0x%x\n", 306 - mvm->fw->default_calib[IWL_UCODE_INIT].event_trigger, 307 - default_calib.event_trigger); 308 - if (default_calib.flow_trigger != 309 - mvm->fw->default_calib[IWL_UCODE_INIT].flow_trigger) 310 - IWL_ERR(mvm, 311 - "Updating the flow calib for INIT image: 0x%x -> 0x%x\n", 312 - mvm->fw->default_calib[IWL_UCODE_INIT].flow_trigger, 313 - default_calib.flow_trigger); 314 - 315 - memcpy((void *)&mvm->fw->default_calib[IWL_UCODE_INIT], 316 - &default_calib, sizeof(struct iwl_tlv_calib_ctrl)); 317 - IWL_ERR(mvm, 318 - "Setting uCode init calibrations event 0x%x, trigger 0x%x\n", 319 - default_calib.event_trigger, 320 - default_calib.flow_trigger); 321 - 322 - /* Run time image */ 323 - default_calib.event_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_EVENT_RUN); 324 - default_calib.flow_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_FLOW_RUN); 325 - 326 - if (default_calib.event_trigger != 327 - mvm->fw->default_calib[IWL_UCODE_REGULAR].event_trigger) 328 - IWL_ERR(mvm, 329 - "Updating the event calib for RT image: 0x%x -> 0x%x\n", 330 - mvm->fw->default_calib[IWL_UCODE_REGULAR].event_trigger, 331 - default_calib.event_trigger); 332 - if (default_calib.flow_trigger != 333 - 
mvm->fw->default_calib[IWL_UCODE_REGULAR].flow_trigger) 334 - IWL_ERR(mvm, 335 - "Updating the flow calib for RT image: 0x%x -> 0x%x\n", 336 - mvm->fw->default_calib[IWL_UCODE_REGULAR].flow_trigger, 337 - default_calib.flow_trigger); 338 - 339 - memcpy((void *)&mvm->fw->default_calib[IWL_UCODE_REGULAR], 340 - &default_calib, sizeof(struct iwl_tlv_calib_ctrl)); 341 - IWL_ERR(mvm, 342 - "Setting uCode runtime calibs event 0x%x, trigger 0x%x\n", 343 - default_calib.event_trigger, 344 - default_calib.flow_trigger); 345 276 } 346 277 347 278 static int iwl_set_default_calibrations(struct iwl_mvm *mvm) ··· 321 446 ret = iwl_nvm_check_version(mvm->nvm_data, mvm->trans); 322 447 WARN_ON(ret); 323 448 324 - /* Override the calibrations from TLV and the const of fw */ 325 - iwl_set_default_calib_trigger(mvm); 449 + /* Send TX valid antennas before triggering calibrations */ 450 + ret = iwl_send_tx_ant_cfg(mvm, mvm->nvm_data->valid_tx_ant); 451 + if (ret) 452 + goto error; 326 453 327 454 /* WkP doesn't have all calibrations, need to set default values */ 328 455 if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
+2 -1
drivers/net/wireless/iwlwifi/mvm/mvm.h
··· 80 80 81 81 #define IWL_INVALID_MAC80211_QUEUE 0xff 82 82 #define IWL_MVM_MAX_ADDRESSES 2 83 - #define IWL_RSSI_OFFSET 44 83 + /* RSSI offset for WkP */ 84 + #define IWL_RSSI_OFFSET 50 84 85 85 86 enum iwl_mvm_tx_fifo { 86 87 IWL_MVM_TX_FIFO_BK = 0,
+13 -5
drivers/net/wireless/iwlwifi/mvm/ops.c
··· 624 624 ieee80211_free_txskb(mvm->hw, skb); 625 625 } 626 626 627 - static void iwl_mvm_nic_error(struct iwl_op_mode *op_mode) 627 + static void iwl_mvm_nic_restart(struct iwl_mvm *mvm) 628 628 { 629 - struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 630 - 631 - iwl_mvm_dump_nic_error_log(mvm); 632 - 633 629 iwl_abort_notification_waits(&mvm->notif_wait); 634 630 635 631 /* ··· 659 663 } 660 664 } 661 665 666 + static void iwl_mvm_nic_error(struct iwl_op_mode *op_mode) 667 + { 668 + struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 669 + 670 + iwl_mvm_dump_nic_error_log(mvm); 671 + 672 + iwl_mvm_nic_restart(mvm); 673 + } 674 + 662 675 static void iwl_mvm_cmd_queue_full(struct iwl_op_mode *op_mode) 663 676 { 677 + struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 678 + 664 679 WARN_ON(1); 680 + iwl_mvm_nic_restart(mvm); 665 681 } 666 682 667 683 static const struct iwl_op_mode_ops iwl_mvm_ops = {
+23 -14
drivers/net/wireless/iwlwifi/mvm/rx.c
··· 131 131 static int iwl_mvm_calc_rssi(struct iwl_mvm *mvm, 132 132 struct iwl_rx_phy_info *phy_info) 133 133 { 134 - u32 rssi_a, rssi_b, rssi_c, max_rssi, agc_db; 134 + int rssi_a, rssi_b, rssi_a_dbm, rssi_b_dbm, max_rssi_dbm; 135 + int rssi_all_band_a, rssi_all_band_b; 136 + u32 agc_a, agc_b, max_agc; 135 137 u32 val; 136 138 137 - /* Find max rssi among 3 possible receivers. 139 + /* Find max rssi among 2 possible receivers. 138 140 * These values are measured by the Digital Signal Processor (DSP). 139 141 * They should stay fairly constant even as the signal strength varies, 140 142 * if the radio's Automatic Gain Control (AGC) is working right. 141 143 * AGC value (see below) will provide the "interesting" info. 142 144 */ 145 + val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_AGC_IDX]); 146 + agc_a = (val & IWL_OFDM_AGC_A_MSK) >> IWL_OFDM_AGC_A_POS; 147 + agc_b = (val & IWL_OFDM_AGC_B_MSK) >> IWL_OFDM_AGC_B_POS; 148 + max_agc = max_t(u32, agc_a, agc_b); 149 + 143 150 val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_RSSI_AB_IDX]); 144 151 rssi_a = (val & IWL_OFDM_RSSI_INBAND_A_MSK) >> IWL_OFDM_RSSI_A_POS; 145 152 rssi_b = (val & IWL_OFDM_RSSI_INBAND_B_MSK) >> IWL_OFDM_RSSI_B_POS; 146 - val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_RSSI_C_IDX]); 147 - rssi_c = (val & IWL_OFDM_RSSI_INBAND_C_MSK) >> IWL_OFDM_RSSI_C_POS; 153 + rssi_all_band_a = (val & IWL_OFDM_RSSI_ALLBAND_A_MSK) >> 154 + IWL_OFDM_RSSI_ALLBAND_A_POS; 155 + rssi_all_band_b = (val & IWL_OFDM_RSSI_ALLBAND_B_MSK) >> 156 + IWL_OFDM_RSSI_ALLBAND_B_POS; 148 157 149 - val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_AGC_IDX]); 150 - agc_db = (val & IWL_OFDM_AGC_DB_MSK) >> IWL_OFDM_AGC_DB_POS; 158 + /* 159 + * dBm = rssi dB - agc dB - constant. 160 + * Higher AGC (higher radio gain) means lower signal. 
161 + */ 162 + rssi_a_dbm = rssi_a - IWL_RSSI_OFFSET - agc_a; 163 + rssi_b_dbm = rssi_b - IWL_RSSI_OFFSET - agc_b; 164 + max_rssi_dbm = max_t(int, rssi_a_dbm, rssi_b_dbm); 151 165 152 - max_rssi = max_t(u32, rssi_a, rssi_b); 153 - max_rssi = max_t(u32, max_rssi, rssi_c); 166 + IWL_DEBUG_STATS(mvm, "Rssi In A %d B %d Max %d AGCA %d AGCB %d\n", 167 + rssi_a_dbm, rssi_b_dbm, max_rssi_dbm, agc_a, agc_b); 154 168 155 - IWL_DEBUG_STATS(mvm, "Rssi In A %d B %d C %d Max %d AGC dB %d\n", 156 - rssi_a, rssi_b, rssi_c, max_rssi, agc_db); 157 - 158 - /* dBm = max_rssi dB - agc dB - constant. 159 - * Higher AGC (higher radio gain) means lower signal. */ 160 - return max_rssi - agc_db - IWL_RSSI_OFFSET; 169 + return max_rssi_dbm; 161 170 } 162 171 163 172 /*
+10
drivers/net/wireless/iwlwifi/mvm/sta.c
··· 770 770 u16 txq_id; 771 771 int err; 772 772 773 + 774 + /* 775 + * If mac80211 is cleaning its state, then say that we finished since 776 + * our state has been cleared anyway. 777 + */ 778 + if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) { 779 + ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid); 780 + return 0; 781 + } 782 + 773 783 spin_lock_bh(&mvmsta->lock); 774 784 775 785 txq_id = tid_data->txq_id;
+1 -5
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 607 607 608 608 /* Single frame failure in an AMPDU queue => send BAR */ 609 609 if (txq_id >= IWL_FIRST_AMPDU_QUEUE && 610 - !(info->flags & IEEE80211_TX_STAT_ACK)) { 611 - /* there must be only one skb in the skb_list */ 612 - WARN_ON_ONCE(skb_freed > 1 || 613 - !skb_queue_empty(&skbs)); 610 + !(info->flags & IEEE80211_TX_STAT_ACK)) 614 611 info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK; 615 - } 616 612 617 613 /* W/A FW bug: seq_ctl is wrong when the queue is flushed */ 618 614 if (status == TX_STATUS_FAIL_FIFO_FLUSHED) {
+25 -9
drivers/net/wireless/iwlwifi/pcie/internal.h
··· 137 137 struct iwl_cmd_meta { 138 138 /* only for SYNC commands, iff the reply skb is wanted */ 139 139 struct iwl_host_cmd *source; 140 - 141 - DEFINE_DMA_UNMAP_ADDR(mapping); 142 - DEFINE_DMA_UNMAP_LEN(len); 143 - 144 140 u32 flags; 145 141 }; 146 142 ··· 181 185 /* 182 186 * The FH will write back to the first TB only, so we need 183 187 * to copy some data into the buffer regardless of whether 184 - * it should be mapped or not. This indicates how much to 185 - * copy, even for HCMDs it must be big enough to fit the 186 - * DRAM scratch from the TX cmd, at least 16 bytes. 188 + * it should be mapped or not. This indicates how big the 189 + * first TB must be to include the scratch buffer. Since 190 + * the scratch is 4 bytes at offset 12, it's 16 now. If we 191 + * make it bigger then allocations will be bigger and copy 192 + * slower, so that's probably not useful. 187 193 */ 188 - #define IWL_HCMD_MIN_COPY_SIZE 16 194 + #define IWL_HCMD_SCRATCHBUF_SIZE 16 189 195 190 196 struct iwl_pcie_txq_entry { 191 197 struct iwl_device_cmd *cmd; 192 - struct iwl_device_cmd *copy_cmd; 193 198 struct sk_buff *skb; 194 199 /* buffer to free after command completes */ 195 200 const void *free_buf; 196 201 struct iwl_cmd_meta meta; 197 202 }; 198 203 204 + struct iwl_pcie_txq_scratch_buf { 205 + struct iwl_cmd_header hdr; 206 + u8 buf[8]; 207 + __le32 scratch; 208 + }; 209 + 199 210 /** 200 211 * struct iwl_txq - Tx Queue for DMA 201 212 * @q: generic Rx/Tx queue descriptor 202 213 * @tfds: transmit frame descriptors (DMA memory) 214 + * @scratchbufs: start of command headers, including scratch buffers, for 215 + * the writeback -- this is DMA memory and an array holding one buffer 216 + * for each command on the queue 217 + * @scratchbufs_dma: DMA address for the scratchbufs start 203 218 * @entries: transmit entries (driver state) 204 219 * @lock: queue lock 205 220 * @stuck_timer: timer that fires if queue gets stuck ··· 224 217 struct iwl_txq { 225 218 struct 
iwl_queue q; 226 219 struct iwl_tfd *tfds; 220 + struct iwl_pcie_txq_scratch_buf *scratchbufs; 221 + dma_addr_t scratchbufs_dma; 227 222 struct iwl_pcie_txq_entry *entries; 228 223 spinlock_t lock; 229 224 struct timer_list stuck_timer; ··· 233 224 u8 need_update; 234 225 u8 active; 235 226 }; 227 + 228 + static inline dma_addr_t 229 + iwl_pcie_get_scratchbuf_dma(struct iwl_txq *txq, int idx) 230 + { 231 + return txq->scratchbufs_dma + 232 + sizeof(struct iwl_pcie_txq_scratch_buf) * idx; 233 + } 236 234 237 235 /** 238 236 * struct iwl_trans_pcie - PCIe transport specific data
+3 -11
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 637 637 index = SEQ_TO_INDEX(sequence); 638 638 cmd_index = get_cmd_index(&txq->q, index); 639 639 640 - if (reclaim) { 641 - struct iwl_pcie_txq_entry *ent; 642 - ent = &txq->entries[cmd_index]; 643 - cmd = ent->copy_cmd; 644 - WARN_ON_ONCE(!cmd && ent->meta.flags & CMD_WANT_HCMD); 645 - } else { 640 + if (reclaim) 641 + cmd = txq->entries[cmd_index].cmd; 642 + else 646 643 cmd = NULL; 647 - } 648 644 649 645 err = iwl_op_mode_rx(trans->op_mode, &rxcb, cmd); 650 646 651 647 if (reclaim) { 652 - /* The original command isn't needed any more */ 653 - kfree(txq->entries[cmd_index].copy_cmd); 654 - txq->entries[cmd_index].copy_cmd = NULL; 655 - /* nor is the duplicated part of the command */ 656 648 kfree(txq->entries[cmd_index].free_buf); 657 649 txq->entries[cmd_index].free_buf = NULL; 658 650 }
+129 -147
drivers/net/wireless/iwlwifi/pcie/tx.c
··· 191 191 } 192 192 193 193 for (i = q->read_ptr; i != q->write_ptr; 194 - i = iwl_queue_inc_wrap(i, q->n_bd)) { 195 - struct iwl_tx_cmd *tx_cmd = 196 - (struct iwl_tx_cmd *)txq->entries[i].cmd->payload; 194 + i = iwl_queue_inc_wrap(i, q->n_bd)) 197 195 IWL_ERR(trans, "scratch %d = 0x%08x\n", i, 198 - get_unaligned_le32(&tx_cmd->scratch)); 199 - } 196 + le32_to_cpu(txq->scratchbufs[i].scratch)); 200 197 201 198 iwl_op_mode_nic_error(trans->op_mode); 202 199 } ··· 364 367 } 365 368 366 369 static void iwl_pcie_tfd_unmap(struct iwl_trans *trans, 367 - struct iwl_cmd_meta *meta, struct iwl_tfd *tfd, 368 - enum dma_data_direction dma_dir) 370 + struct iwl_cmd_meta *meta, 371 + struct iwl_tfd *tfd) 369 372 { 370 373 int i; 371 374 int num_tbs; ··· 379 382 return; 380 383 } 381 384 382 - /* Unmap tx_cmd */ 383 - if (num_tbs) 384 - dma_unmap_single(trans->dev, 385 - dma_unmap_addr(meta, mapping), 386 - dma_unmap_len(meta, len), 387 - DMA_BIDIRECTIONAL); 385 + /* first TB is never freed - it's the scratchbuf data */ 388 386 389 - /* Unmap chunks, if any. 
*/ 390 387 for (i = 1; i < num_tbs; i++) 391 388 dma_unmap_single(trans->dev, iwl_pcie_tfd_tb_get_addr(tfd, i), 392 - iwl_pcie_tfd_tb_get_len(tfd, i), dma_dir); 389 + iwl_pcie_tfd_tb_get_len(tfd, i), 390 + DMA_TO_DEVICE); 393 391 394 392 tfd->num_tbs = 0; 395 393 } ··· 398 406 * Does NOT advance any TFD circular buffer read/write indexes 399 407 * Does NOT free the TFD itself (which is within circular buffer) 400 408 */ 401 - static void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq, 402 - enum dma_data_direction dma_dir) 409 + static void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq) 403 410 { 404 411 struct iwl_tfd *tfd_tmp = txq->tfds; 405 412 ··· 409 418 lockdep_assert_held(&txq->lock); 410 419 411 420 /* We have only q->n_window txq->entries, but we use q->n_bd tfds */ 412 - iwl_pcie_tfd_unmap(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr], 413 - dma_dir); 421 + iwl_pcie_tfd_unmap(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr]); 414 422 415 423 /* free SKB */ 416 424 if (txq->entries) { ··· 469 479 { 470 480 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 471 481 size_t tfd_sz = sizeof(struct iwl_tfd) * TFD_QUEUE_SIZE_MAX; 482 + size_t scratchbuf_sz; 472 483 int i; 473 484 474 485 if (WARN_ON(txq->entries || txq->tfds)) ··· 505 514 IWL_ERR(trans, "dma_alloc_coherent(%zd) failed\n", tfd_sz); 506 515 goto error; 507 516 } 517 + 518 + BUILD_BUG_ON(IWL_HCMD_SCRATCHBUF_SIZE != sizeof(*txq->scratchbufs)); 519 + BUILD_BUG_ON(offsetof(struct iwl_pcie_txq_scratch_buf, scratch) != 520 + sizeof(struct iwl_cmd_header) + 521 + offsetof(struct iwl_tx_cmd, scratch)); 522 + 523 + scratchbuf_sz = sizeof(*txq->scratchbufs) * slots_num; 524 + 525 + txq->scratchbufs = dma_alloc_coherent(trans->dev, scratchbuf_sz, 526 + &txq->scratchbufs_dma, 527 + GFP_KERNEL); 528 + if (!txq->scratchbufs) 529 + goto err_free_tfds; 530 + 508 531 txq->q.id = txq_id; 509 532 510 533 return 0; 534 + err_free_tfds: 535 + 
dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->q.dma_addr); 511 536 error: 512 537 if (txq->entries && txq_id == trans_pcie->cmd_queue) 513 538 for (i = 0; i < slots_num; i++) ··· 572 565 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 573 566 struct iwl_txq *txq = &trans_pcie->txq[txq_id]; 574 567 struct iwl_queue *q = &txq->q; 575 - enum dma_data_direction dma_dir; 576 568 577 569 if (!q->n_bd) 578 570 return; 579 571 580 - /* In the command queue, all the TBs are mapped as BIDI 581 - * so unmap them as such. 582 - */ 583 - if (txq_id == trans_pcie->cmd_queue) 584 - dma_dir = DMA_BIDIRECTIONAL; 585 - else 586 - dma_dir = DMA_TO_DEVICE; 587 - 588 572 spin_lock_bh(&txq->lock); 589 573 while (q->write_ptr != q->read_ptr) { 590 - iwl_pcie_txq_free_tfd(trans, txq, dma_dir); 574 + iwl_pcie_txq_free_tfd(trans, txq); 591 575 q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd); 592 576 } 593 577 spin_unlock_bh(&txq->lock); ··· 608 610 if (txq_id == trans_pcie->cmd_queue) 609 611 for (i = 0; i < txq->q.n_window; i++) { 610 612 kfree(txq->entries[i].cmd); 611 - kfree(txq->entries[i].copy_cmd); 612 613 kfree(txq->entries[i].free_buf); 613 614 } 614 615 ··· 616 619 dma_free_coherent(dev, sizeof(struct iwl_tfd) * 617 620 txq->q.n_bd, txq->tfds, txq->q.dma_addr); 618 621 txq->q.dma_addr = 0; 622 + 623 + dma_free_coherent(dev, 624 + sizeof(*txq->scratchbufs) * txq->q.n_window, 625 + txq->scratchbufs, txq->scratchbufs_dma); 619 626 } 620 627 621 628 kfree(txq->entries); ··· 963 962 964 963 iwl_pcie_txq_inval_byte_cnt_tbl(trans, txq); 965 964 966 - iwl_pcie_txq_free_tfd(trans, txq, DMA_TO_DEVICE); 965 + iwl_pcie_txq_free_tfd(trans, txq); 967 966 } 968 967 969 968 iwl_pcie_txq_progress(trans_pcie, txq); ··· 1153 1152 void *dup_buf = NULL; 1154 1153 dma_addr_t phys_addr; 1155 1154 int idx; 1156 - u16 copy_size, cmd_size, dma_size; 1155 + u16 copy_size, cmd_size, scratch_size; 1157 1156 bool had_nocopy = false; 1158 1157 int i; 1159 1158 u32 cmd_pos; 
1160 - const u8 *cmddata[IWL_MAX_CMD_TFDS]; 1161 - u16 cmdlen[IWL_MAX_CMD_TFDS]; 1159 + const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD]; 1160 + u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD]; 1162 1161 1163 1162 copy_size = sizeof(out_cmd->hdr); 1164 1163 cmd_size = sizeof(out_cmd->hdr); 1165 1164 1166 1165 /* need one for the header if the first is NOCOPY */ 1167 - BUILD_BUG_ON(IWL_MAX_CMD_TFDS > IWL_NUM_OF_TBS - 1); 1166 + BUILD_BUG_ON(IWL_MAX_CMD_TBS_PER_TFD > IWL_NUM_OF_TBS - 1); 1168 1167 1169 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 1168 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 1170 1169 cmddata[i] = cmd->data[i]; 1171 1170 cmdlen[i] = cmd->len[i]; 1172 1171 1173 1172 if (!cmd->len[i]) 1174 1173 continue; 1175 1174 1176 - /* need at least IWL_HCMD_MIN_COPY_SIZE copied */ 1177 - if (copy_size < IWL_HCMD_MIN_COPY_SIZE) { 1178 - int copy = IWL_HCMD_MIN_COPY_SIZE - copy_size; 1175 + /* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */ 1176 + if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) { 1177 + int copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size; 1179 1178 1180 1179 if (copy > cmdlen[i]) 1181 1180 copy = cmdlen[i]; ··· 1261 1260 /* and copy the data that needs to be copied */ 1262 1261 cmd_pos = offsetof(struct iwl_device_cmd, payload); 1263 1262 copy_size = sizeof(out_cmd->hdr); 1264 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 1263 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 1265 1264 int copy = 0; 1266 1265 1267 1266 if (!cmd->len) 1268 1267 continue; 1269 1268 1270 - /* need at least IWL_HCMD_MIN_COPY_SIZE copied */ 1271 - if (copy_size < IWL_HCMD_MIN_COPY_SIZE) { 1272 - copy = IWL_HCMD_MIN_COPY_SIZE - copy_size; 1269 + /* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */ 1270 + if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) { 1271 + copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size; 1273 1272 1274 1273 if (copy > cmd->len[i]) 1275 1274 copy = cmd->len[i]; ··· 1287 1286 } 1288 1287 } 1289 1288 1290 - WARN_ON_ONCE(txq->entries[idx].copy_cmd); 1291 - 1292 - /* 1293 - * since 
out_cmd will be the source address of the FH, it will write 1294 - * the retry count there. So when the user needs to receivce the HCMD 1295 - * that corresponds to the response in the response handler, it needs 1296 - * to set CMD_WANT_HCMD. 1297 - */ 1298 - if (cmd->flags & CMD_WANT_HCMD) { 1299 - txq->entries[idx].copy_cmd = 1300 - kmemdup(out_cmd, cmd_pos, GFP_ATOMIC); 1301 - if (unlikely(!txq->entries[idx].copy_cmd)) { 1302 - idx = -ENOMEM; 1303 - goto out; 1304 - } 1305 - } 1306 - 1307 1289 IWL_DEBUG_HC(trans, 1308 1290 "Sending command %s (#%x), seq: 0x%04X, %d bytes at %d[%d]:%d\n", 1309 1291 get_cmd_string(trans_pcie, out_cmd->hdr.cmd), 1310 1292 out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence), 1311 1293 cmd_size, q->write_ptr, idx, trans_pcie->cmd_queue); 1312 1294 1313 - /* 1314 - * If the entire command is smaller than IWL_HCMD_MIN_COPY_SIZE, we must 1315 - * still map at least that many bytes for the hardware to write back to. 1316 - * We have enough space, so that's not a problem. 
1317 - */ 1318 - dma_size = max_t(u16, copy_size, IWL_HCMD_MIN_COPY_SIZE); 1295 + /* start the TFD with the scratchbuf */ 1296 + scratch_size = min_t(int, copy_size, IWL_HCMD_SCRATCHBUF_SIZE); 1297 + memcpy(&txq->scratchbufs[q->write_ptr], &out_cmd->hdr, scratch_size); 1298 + iwl_pcie_txq_build_tfd(trans, txq, 1299 + iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr), 1300 + scratch_size, 1); 1319 1301 1320 - phys_addr = dma_map_single(trans->dev, &out_cmd->hdr, dma_size, 1321 - DMA_BIDIRECTIONAL); 1322 - if (unlikely(dma_mapping_error(trans->dev, phys_addr))) { 1323 - idx = -ENOMEM; 1324 - goto out; 1302 + /* map first command fragment, if any remains */ 1303 + if (copy_size > scratch_size) { 1304 + phys_addr = dma_map_single(trans->dev, 1305 + ((u8 *)&out_cmd->hdr) + scratch_size, 1306 + copy_size - scratch_size, 1307 + DMA_TO_DEVICE); 1308 + if (dma_mapping_error(trans->dev, phys_addr)) { 1309 + iwl_pcie_tfd_unmap(trans, out_meta, 1310 + &txq->tfds[q->write_ptr]); 1311 + idx = -ENOMEM; 1312 + goto out; 1313 + } 1314 + 1315 + iwl_pcie_txq_build_tfd(trans, txq, phys_addr, 1316 + copy_size - scratch_size, 0); 1325 1317 } 1326 1318 1327 - dma_unmap_addr_set(out_meta, mapping, phys_addr); 1328 - dma_unmap_len_set(out_meta, len, dma_size); 1329 - 1330 - iwl_pcie_txq_build_tfd(trans, txq, phys_addr, copy_size, 1); 1331 - 1332 1319 /* map the remaining (adjusted) nocopy/dup fragments */ 1333 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 1320 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 1334 1321 const void *data = cmddata[i]; 1335 1322 1336 1323 if (!cmdlen[i]) ··· 1329 1340 if (cmd->dataflags[i] & IWL_HCMD_DFL_DUP) 1330 1341 data = dup_buf; 1331 1342 phys_addr = dma_map_single(trans->dev, (void *)data, 1332 - cmdlen[i], DMA_BIDIRECTIONAL); 1343 + cmdlen[i], DMA_TO_DEVICE); 1333 1344 if (dma_mapping_error(trans->dev, phys_addr)) { 1334 1345 iwl_pcie_tfd_unmap(trans, out_meta, 1335 - &txq->tfds[q->write_ptr], 1336 - DMA_BIDIRECTIONAL); 1346 + &txq->tfds[q->write_ptr]); 
1337 1347 idx = -ENOMEM; 1338 1348 goto out; 1339 1349 } ··· 1406 1418 cmd = txq->entries[cmd_index].cmd; 1407 1419 meta = &txq->entries[cmd_index].meta; 1408 1420 1409 - iwl_pcie_tfd_unmap(trans, meta, &txq->tfds[index], DMA_BIDIRECTIONAL); 1421 + iwl_pcie_tfd_unmap(trans, meta, &txq->tfds[index]); 1410 1422 1411 1423 /* Input error checking is done when commands are added to queue. */ 1412 1424 if (meta->flags & CMD_WANT_SKB) { ··· 1585 1597 struct iwl_cmd_meta *out_meta; 1586 1598 struct iwl_txq *txq; 1587 1599 struct iwl_queue *q; 1588 - dma_addr_t phys_addr = 0; 1589 - dma_addr_t txcmd_phys; 1590 - dma_addr_t scratch_phys; 1591 - u16 len, firstlen, secondlen; 1600 + dma_addr_t tb0_phys, tb1_phys, scratch_phys; 1601 + void *tb1_addr; 1602 + u16 len, tb1_len, tb2_len; 1592 1603 u8 wait_write_ptr = 0; 1593 1604 __le16 fc = hdr->frame_control; 1594 1605 u8 hdr_len = ieee80211_hdrlen(fc); ··· 1625 1638 cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) | 1626 1639 INDEX_TO_SEQ(q->write_ptr))); 1627 1640 1641 + tb0_phys = iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr); 1642 + scratch_phys = tb0_phys + sizeof(struct iwl_cmd_header) + 1643 + offsetof(struct iwl_tx_cmd, scratch); 1644 + 1645 + tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys); 1646 + tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys); 1647 + 1628 1648 /* Set up first empty entry in queue's array of Tx/cmd buffers */ 1629 1649 out_meta = &txq->entries[q->write_ptr].meta; 1630 1650 1631 1651 /* 1632 - * Use the first empty entry in this queue's command buffer array 1633 - * to contain the Tx command and MAC header concatenated together 1634 - * (payload data will be in another buffer). 1635 - * Size of this varies, due to varying MAC header length. 1636 - * If end is not dword aligned, we'll have 2 extra bytes at the end 1637 - * of the MAC header (device reads on dword boundaries). 1638 - * We'll tell device about this padding later. 
1652 + * The second TB (tb1) points to the remainder of the TX command 1653 + * and the 802.11 header - dword aligned size 1654 + * (This calculation modifies the TX command, so do it before the 1655 + * setup of the first TB) 1639 1656 */ 1640 - len = sizeof(struct iwl_tx_cmd) + 1641 - sizeof(struct iwl_cmd_header) + hdr_len; 1642 - firstlen = (len + 3) & ~3; 1657 + len = sizeof(struct iwl_tx_cmd) + sizeof(struct iwl_cmd_header) + 1658 + hdr_len - IWL_HCMD_SCRATCHBUF_SIZE; 1659 + tb1_len = (len + 3) & ~3; 1643 1660 1644 1661 /* Tell NIC about any 2-byte padding after MAC header */ 1645 - if (firstlen != len) 1662 + if (tb1_len != len) 1646 1663 tx_cmd->tx_flags |= TX_CMD_FLG_MH_PAD_MSK; 1647 1664 1648 - /* Physical address of this Tx command's header (not MAC header!), 1649 - * within command buffer array. */ 1650 - txcmd_phys = dma_map_single(trans->dev, 1651 - &dev_cmd->hdr, firstlen, 1652 - DMA_BIDIRECTIONAL); 1653 - if (unlikely(dma_mapping_error(trans->dev, txcmd_phys))) 1665 + /* The first TB points to the scratchbuf data - min_copy bytes */ 1666 + memcpy(&txq->scratchbufs[q->write_ptr], &dev_cmd->hdr, 1667 + IWL_HCMD_SCRATCHBUF_SIZE); 1668 + iwl_pcie_txq_build_tfd(trans, txq, tb0_phys, 1669 + IWL_HCMD_SCRATCHBUF_SIZE, 1); 1670 + 1671 + /* there must be data left over for TB1 or this code must be changed */ 1672 + BUILD_BUG_ON(sizeof(struct iwl_tx_cmd) < IWL_HCMD_SCRATCHBUF_SIZE); 1673 + 1674 + /* map the data for TB1 */ 1675 + tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_HCMD_SCRATCHBUF_SIZE; 1676 + tb1_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); 1677 + if (unlikely(dma_mapping_error(trans->dev, tb1_phys))) 1654 1678 goto out_err; 1655 - dma_unmap_addr_set(out_meta, mapping, txcmd_phys); 1656 - dma_unmap_len_set(out_meta, len, firstlen); 1679 + iwl_pcie_txq_build_tfd(trans, txq, tb1_phys, tb1_len, 0); 1680 + 1681 + /* 1682 + * Set up TFD's third entry to point directly to remainder 1683 + * of skb, if any (802.11 null frames have no 
payload). 1684 + */ 1685 + tb2_len = skb->len - hdr_len; 1686 + if (tb2_len > 0) { 1687 + dma_addr_t tb2_phys = dma_map_single(trans->dev, 1688 + skb->data + hdr_len, 1689 + tb2_len, DMA_TO_DEVICE); 1690 + if (unlikely(dma_mapping_error(trans->dev, tb2_phys))) { 1691 + iwl_pcie_tfd_unmap(trans, out_meta, 1692 + &txq->tfds[q->write_ptr]); 1693 + goto out_err; 1694 + } 1695 + iwl_pcie_txq_build_tfd(trans, txq, tb2_phys, tb2_len, 0); 1696 + } 1697 + 1698 + /* Set up entry for this TFD in Tx byte-count array */ 1699 + iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len)); 1700 + 1701 + trace_iwlwifi_dev_tx(trans->dev, skb, 1702 + &txq->tfds[txq->q.write_ptr], 1703 + sizeof(struct iwl_tfd), 1704 + &dev_cmd->hdr, IWL_HCMD_SCRATCHBUF_SIZE + tb1_len, 1705 + skb->data + hdr_len, tb2_len); 1706 + trace_iwlwifi_dev_tx_data(trans->dev, skb, 1707 + skb->data + hdr_len, tb2_len); 1657 1708 1658 1709 if (!ieee80211_has_morefrags(fc)) { 1659 1710 txq->need_update = 1; ··· 1699 1674 wait_write_ptr = 1; 1700 1675 txq->need_update = 0; 1701 1676 } 1702 - 1703 - /* Set up TFD's 2nd entry to point directly to remainder of skb, 1704 - * if any (802.11 null frames have no payload). 
*/ 1705 - secondlen = skb->len - hdr_len; 1706 - if (secondlen > 0) { 1707 - phys_addr = dma_map_single(trans->dev, skb->data + hdr_len, 1708 - secondlen, DMA_TO_DEVICE); 1709 - if (unlikely(dma_mapping_error(trans->dev, phys_addr))) { 1710 - dma_unmap_single(trans->dev, 1711 - dma_unmap_addr(out_meta, mapping), 1712 - dma_unmap_len(out_meta, len), 1713 - DMA_BIDIRECTIONAL); 1714 - goto out_err; 1715 - } 1716 - } 1717 - 1718 - /* Attach buffers to TFD */ 1719 - iwl_pcie_txq_build_tfd(trans, txq, txcmd_phys, firstlen, 1); 1720 - if (secondlen > 0) 1721 - iwl_pcie_txq_build_tfd(trans, txq, phys_addr, secondlen, 0); 1722 - 1723 - scratch_phys = txcmd_phys + sizeof(struct iwl_cmd_header) + 1724 - offsetof(struct iwl_tx_cmd, scratch); 1725 - 1726 - /* take back ownership of DMA buffer to enable update */ 1727 - dma_sync_single_for_cpu(trans->dev, txcmd_phys, firstlen, 1728 - DMA_BIDIRECTIONAL); 1729 - tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys); 1730 - tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys); 1731 - 1732 - /* Set up entry for this TFD in Tx byte-count array */ 1733 - iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len)); 1734 - 1735 - dma_sync_single_for_device(trans->dev, txcmd_phys, firstlen, 1736 - DMA_BIDIRECTIONAL); 1737 - 1738 - trace_iwlwifi_dev_tx(trans->dev, skb, 1739 - &txq->tfds[txq->q.write_ptr], 1740 - sizeof(struct iwl_tfd), 1741 - &dev_cmd->hdr, firstlen, 1742 - skb->data + hdr_len, secondlen); 1743 - trace_iwlwifi_dev_tx_data(trans->dev, skb, 1744 - skb->data + hdr_len, secondlen); 1745 1677 1746 1678 /* start timer if queue currently empty */ 1747 1679 if (txq->need_update && q->read_ptr == q->write_ptr &&
+24 -4
drivers/rtc/rtc-mv.c
··· 14 14 #include <linux/platform_device.h> 15 15 #include <linux/of.h> 16 16 #include <linux/delay.h> 17 + #include <linux/clk.h> 17 18 #include <linux/gfp.h> 18 19 #include <linux/module.h> 19 20 ··· 42 41 struct rtc_device *rtc; 43 42 void __iomem *ioaddr; 44 43 int irq; 44 + struct clk *clk; 45 45 }; 46 46 47 47 static int mv_rtc_set_time(struct device *dev, struct rtc_time *tm) ··· 223 221 struct rtc_plat_data *pdata; 224 222 resource_size_t size; 225 223 u32 rtc_time; 224 + int ret = 0; 226 225 227 226 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 228 227 if (!res) ··· 242 239 if (!pdata->ioaddr) 243 240 return -ENOMEM; 244 241 242 + pdata->clk = devm_clk_get(&pdev->dev, NULL); 243 + /* Not all SoCs require a clock.*/ 244 + if (!IS_ERR(pdata->clk)) 245 + clk_prepare_enable(pdata->clk); 246 + 245 247 /* make sure the 24 hours mode is enabled */ 246 248 rtc_time = readl(pdata->ioaddr + RTC_TIME_REG_OFFS); 247 249 if (rtc_time & RTC_HOURS_12H_MODE) { 248 250 dev_err(&pdev->dev, "24 Hours mode not supported.\n"); 249 - return -EINVAL; 251 + ret = -EINVAL; 252 + goto out; 250 253 } 251 254 252 255 /* make sure it is actually functional */ ··· 261 252 rtc_time = readl(pdata->ioaddr + RTC_TIME_REG_OFFS); 262 253 if (rtc_time == 0x01000000) { 263 254 dev_err(&pdev->dev, "internal RTC not ticking\n"); 264 - return -ENODEV; 255 + ret = -ENODEV; 256 + goto out; 265 257 } 266 258 } 267 259 ··· 278 268 } else 279 269 pdata->rtc = rtc_device_register(pdev->name, &pdev->dev, 280 270 &mv_rtc_ops, THIS_MODULE); 281 - if (IS_ERR(pdata->rtc)) 282 - return PTR_ERR(pdata->rtc); 271 + if (IS_ERR(pdata->rtc)) { 272 + ret = PTR_ERR(pdata->rtc); 273 + goto out; 274 + } 283 275 284 276 if (pdata->irq >= 0) { 285 277 writel(0, pdata->ioaddr + RTC_ALARM_INTERRUPT_MASK_REG_OFFS); ··· 294 282 } 295 283 296 284 return 0; 285 + out: 286 + if (!IS_ERR(pdata->clk)) 287 + clk_disable_unprepare(pdata->clk); 288 + 289 + return ret; 297 290 } 298 291 299 292 static int __exit 
mv_rtc_remove(struct platform_device *pdev) ··· 309 292 device_init_wakeup(&pdev->dev, 0); 310 293 311 294 rtc_device_unregister(pdata->rtc); 295 + if (!IS_ERR(pdata->clk)) 296 + clk_disable_unprepare(pdata->clk); 297 + 312 298 return 0; 313 299 } 314 300
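The rtc-mv hunks above treat the clock as optional: `devm_clk_get()` returns an error pointer on SoCs without a gated RTC clock, so every enable/disable is guarded by `!IS_ERR(...)`, and the new `out:` label unwinds the clock on each failure path. A hedged userspace sketch of that guard-and-unwind pattern, using toy stand-ins for the kernel's `ERR_PTR`/`IS_ERR` convention (not the real clk API):

```c
#include <assert.h>
#include <errno.h>

/* Minimal stand-ins for the kernel's ERR_PTR/IS_ERR encoding. */
#define ERR_PTR(err) ((void *)(long)(err))
#define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-4095)

struct clk { int enabled; };

/* Returns the clock if the platform has one, else an encoded error. */
static struct clk *clk_get_optional(struct clk *hw)
{
	return hw ? hw : ERR_PTR(-ENOENT);
}

/* Mirrors the probe flow: enable iff present, unwind on failure. */
static int probe(struct clk *hw, int device_ok)
{
	struct clk *clk = clk_get_optional(hw);
	int ret = 0;

	if (!IS_ERR(clk))
		clk->enabled = 1;

	if (!device_ok) {		/* e.g. "internal RTC not ticking" */
		ret = -ENODEV;
		goto out;
	}
	return 0;
out:
	if (!IS_ERR(clk))
		clk->enabled = 0;	/* clk_disable_unprepare() equivalent */
	return ret;
}
```

The point of the guard is that a missing clock is not an error: only the enable/disable calls are skipped, while the rest of probe proceeds normally.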
+10 -6
drivers/staging/comedi/drivers/dt9812.c
··· 947 947 unsigned int *data) 948 948 { 949 949 struct comedi_dt9812 *devpriv = dev->private; 950 + unsigned int channel = CR_CHAN(insn->chanspec); 950 951 int n; 951 952 u8 bits = 0; 952 953 953 954 dt9812_digital_in(devpriv->slot, &bits); 954 955 for (n = 0; n < insn->n; n++) 955 - data[n] = ((1 << insn->chanspec) & bits) != 0; 956 + data[n] = ((1 << channel) & bits) != 0; 956 957 return n; 957 958 } 958 959 ··· 962 961 unsigned int *data) 963 962 { 964 963 struct comedi_dt9812 *devpriv = dev->private; 964 + unsigned int channel = CR_CHAN(insn->chanspec); 965 965 int n; 966 966 u8 bits = 0; 967 967 968 968 dt9812_digital_out_shadow(devpriv->slot, &bits); 969 969 for (n = 0; n < insn->n; n++) { 970 - u8 mask = 1 << insn->chanspec; 970 + u8 mask = 1 << channel; 971 971 972 972 bits &= ~mask; 973 973 if (data[n]) ··· 983 981 unsigned int *data) 984 982 { 985 983 struct comedi_dt9812 *devpriv = dev->private; 984 + unsigned int channel = CR_CHAN(insn->chanspec); 986 985 int n; 987 986 988 987 for (n = 0; n < insn->n; n++) { 989 988 u16 value = 0; 990 989 991 - dt9812_analog_in(devpriv->slot, insn->chanspec, &value, 992 - DT9812_GAIN_1); 990 + dt9812_analog_in(devpriv->slot, channel, &value, DT9812_GAIN_1); 993 991 data[n] = value; 994 992 } 995 993 return n; ··· 1000 998 unsigned int *data) 1001 999 { 1002 1000 struct comedi_dt9812 *devpriv = dev->private; 1001 + unsigned int channel = CR_CHAN(insn->chanspec); 1003 1002 int n; 1004 1003 u16 value; 1005 1004 1006 1005 for (n = 0; n < insn->n; n++) { 1007 1006 value = 0; 1008 - dt9812_analog_out_shadow(devpriv->slot, insn->chanspec, &value); 1007 + dt9812_analog_out_shadow(devpriv->slot, channel, &value); 1009 1008 data[n] = value; 1010 1009 } 1011 1010 return n; ··· 1017 1014 unsigned int *data) 1018 1015 { 1019 1016 struct comedi_dt9812 *devpriv = dev->private; 1017 + unsigned int channel = CR_CHAN(insn->chanspec); 1020 1018 int n; 1021 1019 1022 1020 for (n = 0; n < insn->n; n++) 1023 - 
dt9812_analog_out(devpriv->slot, insn->chanspec, data[n]); 1021 + dt9812_analog_out(devpriv->slot, channel, data[n]); 1024 1022 return n; 1025 1023 } 1026 1024
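The dt9812 fix above replaces raw uses of `insn->chanspec` with `CR_CHAN(insn->chanspec)`: comedi packs channel, range, and aref into one word, so shifting by the packed value instead of the decoded channel indexes the wrong bit. A sketch with comedi-style packing (field layout assumed from comedi's `CR_PACK` convention: channel in bits 0-15, range in 16-23, aref in 24-25):

```c
/* comedi-style chanspec packing (layout per comedi's CR_PACK). */
#define CR_PACK(chan, rng, aref) \
	((((aref) & 0x3) << 24) | (((rng) & 0xff) << 16) | ((chan) & 0xffff))
#define CR_CHAN(a)  ((a) & 0xffff)
#define CR_RANGE(a) (((a) >> 16) & 0xff)
#define CR_AREF(a)  (((a) >> 24) & 0x3)

/* Returns the state of one input channel from a bitmask of lines. */
static int read_channel(unsigned int chanspec, unsigned char bits)
{
	/* The bug being fixed shifted by the whole packed chanspec;
	 * the channel number must be decoded first. */
	unsigned int channel = CR_CHAN(chanspec);

	return ((1u << channel) & bits) != 0;
}
```

With a nonzero range or aref, the packed word is far larger than the channel number, so the old shift either hit the wrong bit or was undefined behavior for shifts >= the word width.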
+19 -12
drivers/staging/comedi/drivers/usbdux.c
··· 730 730 static int usbduxsub_start(struct usbduxsub *usbduxsub) 731 731 { 732 732 int errcode = 0; 733 - uint8_t local_transfer_buffer[16]; 733 + uint8_t *local_transfer_buffer; 734 + 735 + local_transfer_buffer = kmalloc(1, GFP_KERNEL); 736 + if (!local_transfer_buffer) 737 + return -ENOMEM; 734 738 735 739 /* 7f92 to zero */ 736 - local_transfer_buffer[0] = 0; 740 + *local_transfer_buffer = 0; 737 741 errcode = usb_control_msg(usbduxsub->usbdev, 738 742 /* create a pipe for a control transfer */ 739 743 usb_sndctrlpipe(usbduxsub->usbdev, 0), ··· 755 751 1, 756 752 /* Timeout */ 757 753 BULK_TIMEOUT); 758 - if (errcode < 0) { 754 + if (errcode < 0) 759 755 dev_err(&usbduxsub->interface->dev, 760 756 "comedi_: control msg failed (start)\n"); 761 - return errcode; 762 - } 763 - return 0; 757 + 758 + kfree(local_transfer_buffer); 759 + return errcode; 764 760 } 765 761 766 762 static int usbduxsub_stop(struct usbduxsub *usbduxsub) 767 763 { 768 764 int errcode = 0; 765 + uint8_t *local_transfer_buffer; 769 766 770 - uint8_t local_transfer_buffer[16]; 767 + local_transfer_buffer = kmalloc(1, GFP_KERNEL); 768 + if (!local_transfer_buffer) 769 + return -ENOMEM; 771 770 772 771 /* 7f92 to one */ 773 - local_transfer_buffer[0] = 1; 772 + *local_transfer_buffer = 1; 774 773 errcode = usb_control_msg(usbduxsub->usbdev, 775 774 usb_sndctrlpipe(usbduxsub->usbdev, 0), 776 775 /* bRequest, "Firmware" */ ··· 788 781 1, 789 782 /* Timeout */ 790 783 BULK_TIMEOUT); 791 - if (errcode < 0) { 784 + if (errcode < 0) 792 785 dev_err(&usbduxsub->interface->dev, 793 786 "comedi_: control msg failed (stop)\n"); 794 - return errcode; 795 - } 796 - return 0; 787 + 788 + kfree(local_transfer_buffer); 789 + return errcode; 797 790 } 798 791 799 792 static int usbduxsub_upload(struct usbduxsub *usbduxsub,
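This usbdux change (and the matching usbduxfast and usbduxsigma hunks below) replaces the on-stack `local_transfer_buffer[16]` with a `kmalloc()`'d buffer: the USB core DMA-maps control-transfer buffers, and memory on the kernel stack is not guaranteed to be DMA-able. The restructuring also funnels success and failure through one `kfree()`. A hedged userspace sketch of that single-exit pattern, with `fake_control_msg` standing in for `usb_control_msg()`:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for usb_control_msg(): "sends" one byte, may fail. */
static int fake_control_msg(const unsigned char *buf, int fail)
{
	(void)buf;
	return fail ? -5 /* -EIO */ : 1; /* bytes transferred */
}

/* Mirrors usbduxsub_start(): heap buffer, one free on every path. */
static int start_device(int fail)
{
	unsigned char *buf = malloc(1);	/* kmalloc(1, GFP_KERNEL) */
	int ret;

	if (!buf)
		return -12;		/* -ENOMEM */

	*buf = 0;			/* "7f92 to zero" */
	ret = fake_control_msg(buf, fail);

	free(buf);			/* kfree() on success and error */
	return ret;
}
```

Note that, as in the hunk, the function now propagates the transfer's return value directly instead of normalizing success to 0, so callers must test `ret < 0` rather than `ret != 0`.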
+18 -12
drivers/staging/comedi/drivers/usbduxfast.c
··· 436 436 static int usbduxfastsub_start(struct usbduxfastsub_s *udfs) 437 437 { 438 438 int ret; 439 - unsigned char local_transfer_buffer[16]; 439 + unsigned char *local_transfer_buffer; 440 + 441 + local_transfer_buffer = kmalloc(1, GFP_KERNEL); 442 + if (!local_transfer_buffer) 443 + return -ENOMEM; 440 444 441 445 /* 7f92 to zero */ 442 - local_transfer_buffer[0] = 0; 446 + *local_transfer_buffer = 0; 443 447 /* bRequest, "Firmware" */ 444 448 ret = usb_control_msg(udfs->usbdev, usb_sndctrlpipe(udfs->usbdev, 0), 445 449 USBDUXFASTSUB_FIRMWARE, ··· 454 450 local_transfer_buffer, 455 451 1, /* Length */ 456 452 EZTIMEOUT); /* Timeout */ 457 - if (ret < 0) { 453 + if (ret < 0) 458 454 dev_err(&udfs->interface->dev, 459 455 "control msg failed (start)\n"); 460 - return ret; 461 - } 462 456 463 - return 0; 457 + kfree(local_transfer_buffer); 458 + return ret; 464 459 } 465 460 466 461 static int usbduxfastsub_stop(struct usbduxfastsub_s *udfs) 467 462 { 468 463 int ret; 469 - unsigned char local_transfer_buffer[16]; 464 + unsigned char *local_transfer_buffer; 465 + 466 + local_transfer_buffer = kmalloc(1, GFP_KERNEL); 467 + if (!local_transfer_buffer) 468 + return -ENOMEM; 470 469 471 470 /* 7f92 to one */ 472 - local_transfer_buffer[0] = 1; 471 + *local_transfer_buffer = 1; 473 472 /* bRequest, "Firmware" */ 474 473 ret = usb_control_msg(udfs->usbdev, usb_sndctrlpipe(udfs->usbdev, 0), 475 474 USBDUXFASTSUB_FIRMWARE, ··· 481 474 0x0000, /* Index */ 482 475 local_transfer_buffer, 1, /* Length */ 483 476 EZTIMEOUT); /* Timeout */ 484 - if (ret < 0) { 477 + if (ret < 0) 485 478 dev_err(&udfs->interface->dev, 486 479 "control msg failed (stop)\n"); 487 - return ret; 488 - } 489 480 490 - return 0; 481 + kfree(local_transfer_buffer); 482 + return ret; 491 483 } 492 484 493 485 static int usbduxfastsub_upload(struct usbduxfastsub_s *udfs,
+17 -10
drivers/staging/comedi/drivers/usbduxsigma.c
··· 681 681 static int usbduxsub_start(struct usbduxsub *usbduxsub) 682 682 { 683 683 int errcode = 0; 684 - uint8_t local_transfer_buffer[16]; 684 + uint8_t *local_transfer_buffer; 685 + 686 + local_transfer_buffer = kmalloc(16, GFP_KERNEL); 687 + if (!local_transfer_buffer) 688 + return -ENOMEM; 685 689 686 690 /* 7f92 to zero */ 687 691 local_transfer_buffer[0] = 0; ··· 706 702 1, 707 703 /* Timeout */ 708 704 BULK_TIMEOUT); 709 - if (errcode < 0) { 705 + if (errcode < 0) 710 706 dev_err(&usbduxsub->interface->dev, 711 707 "comedi_: control msg failed (start)\n"); 712 - return errcode; 713 - } 714 - return 0; 708 + 709 + kfree(local_transfer_buffer); 710 + return errcode; 715 711 } 716 712 717 713 static int usbduxsub_stop(struct usbduxsub *usbduxsub) 718 714 { 719 715 int errcode = 0; 716 + uint8_t *local_transfer_buffer; 720 717 721 - uint8_t local_transfer_buffer[16]; 718 + local_transfer_buffer = kmalloc(16, GFP_KERNEL); 719 + if (!local_transfer_buffer) 720 + return -ENOMEM; 722 721 723 722 /* 7f92 to one */ 724 723 local_transfer_buffer[0] = 1; ··· 739 732 1, 740 733 /* Timeout */ 741 734 BULK_TIMEOUT); 742 - if (errcode < 0) { 735 + if (errcode < 0) 743 736 dev_err(&usbduxsub->interface->dev, 744 737 "comedi_: control msg failed (stop)\n"); 745 - return errcode; 746 - } 747 - return 0; 738 + 739 + kfree(local_transfer_buffer); 740 + return errcode; 748 741 } 749 742 750 743 static int usbduxsub_upload(struct usbduxsub *usbduxsub,
+12 -11
drivers/staging/imx-drm/ipuv3-crtc.c
··· 483 483 goto err_out; 484 484 } 485 485 486 - ipu_crtc->irq = ipu_idmac_channel_irq(ipu, ipu_crtc->ipu_ch, 487 - IPU_IRQ_EOF); 488 - ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0, 489 - "imx_drm", ipu_crtc); 490 - if (ret < 0) { 491 - dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret); 492 - goto err_out; 493 - } 494 - 495 - disable_irq(ipu_crtc->irq); 496 - 497 486 return 0; 498 487 err_out: 499 488 ipu_put_resources(ipu_crtc); ··· 493 504 static int ipu_crtc_init(struct ipu_crtc *ipu_crtc, 494 505 struct ipu_client_platformdata *pdata) 495 506 { 507 + struct ipu_soc *ipu = dev_get_drvdata(ipu_crtc->dev->parent); 496 508 int ret; 497 509 498 510 ret = ipu_get_resources(ipu_crtc, pdata); ··· 511 521 dev_err(ipu_crtc->dev, "adding crtc failed with %d.\n", ret); 512 522 goto err_put_resources; 513 523 } 524 + 525 + ipu_crtc->irq = ipu_idmac_channel_irq(ipu, ipu_crtc->ipu_ch, 526 + IPU_IRQ_EOF); 527 + ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0, 528 + "imx_drm", ipu_crtc); 529 + if (ret < 0) { 530 + dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret); 531 + goto err_put_resources; 532 + } 533 + 534 + disable_irq(ipu_crtc->irq); 514 535 515 536 return 0; 516 537
+26 -44
drivers/staging/tidspbridge/rmgr/drv.c
··· 76 76 struct node_res_object **node_res_obj = 77 77 (struct node_res_object **)node_resource; 78 78 struct process_context *ctxt = (struct process_context *)process_ctxt; 79 - int status = 0; 80 79 int retval; 81 80 82 81 *node_res_obj = kzalloc(sizeof(struct node_res_object), GFP_KERNEL); 83 - if (!*node_res_obj) { 84 - status = -ENOMEM; 85 - goto func_end; 86 - } 82 + if (!*node_res_obj) 83 + return -ENOMEM; 87 84 88 85 (*node_res_obj)->node = hnode; 89 - retval = idr_get_new(ctxt->node_id, *node_res_obj, 90 - &(*node_res_obj)->id); 91 - if (retval == -EAGAIN) { 92 - if (!idr_pre_get(ctxt->node_id, GFP_KERNEL)) { 93 - pr_err("%s: OUT OF MEMORY\n", __func__); 94 - status = -ENOMEM; 95 - goto func_end; 96 - } 97 - 98 - retval = idr_get_new(ctxt->node_id, *node_res_obj, 99 - &(*node_res_obj)->id); 86 + retval = idr_alloc(ctxt->node_id, *node_res_obj, 0, 0, GFP_KERNEL); 87 + if (retval >= 0) { 88 + (*node_res_obj)->id = retval; 89 + return 0; 100 90 } 101 - if (retval) { 91 + 92 + kfree(*node_res_obj); 93 + 94 + if (retval == -ENOSPC) { 102 95 pr_err("%s: FAILED, IDR is FULL\n", __func__); 103 - status = -EFAULT; 96 + return -EFAULT; 97 + } else { 98 + pr_err("%s: OUT OF MEMORY\n", __func__); 99 + return -ENOMEM; 104 100 } 105 - func_end: 106 - if (status) 107 - kfree(*node_res_obj); 108 - 109 - return status; 110 101 } 111 102 112 103 /* Release all Node resources and its context ··· 192 201 struct strm_res_object **pstrm_res = 193 202 (struct strm_res_object **)strm_res; 194 203 struct process_context *ctxt = (struct process_context *)process_ctxt; 195 - int status = 0; 196 204 int retval; 197 205 198 206 *pstrm_res = kzalloc(sizeof(struct strm_res_object), GFP_KERNEL); 199 - if (*pstrm_res == NULL) { 200 - status = -EFAULT; 201 - goto func_end; 202 - } 207 + if (*pstrm_res == NULL) 208 + return -EFAULT; 203 209 204 210 (*pstrm_res)->stream = stream_obj; 205 - retval = idr_get_new(ctxt->stream_id, *pstrm_res, 206 - &(*pstrm_res)->id); 207 - if (retval == 
-EAGAIN) { 208 - if (!idr_pre_get(ctxt->stream_id, GFP_KERNEL)) { 209 - pr_err("%s: OUT OF MEMORY\n", __func__); 210 - status = -ENOMEM; 211 - goto func_end; 212 - } 213 - 214 - retval = idr_get_new(ctxt->stream_id, *pstrm_res, 215 - &(*pstrm_res)->id); 211 + retval = idr_alloc(ctxt->stream_id, *pstrm_res, 0, 0, GFP_KERNEL); 212 + if (retval >= 0) { 213 + (*pstrm_res)->id = retval; 214 + return 0; 216 215 } 217 - if (retval) { 216 + 217 + if (retval == -ENOSPC) { 218 218 pr_err("%s: FAILED, IDR is FULL\n", __func__); 219 - status = -EPERM; 219 + return -EPERM; 220 + } else { 221 + pr_err("%s: OUT OF MEMORY\n", __func__); 222 + return -ENOMEM; 220 223 } 221 - 222 - func_end: 223 - return status; 224 224 } 225 225 226 226 static int drv_proc_free_strm_res(int id, void *p, void *process_ctxt)
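Both tidspbridge hunks above (and the ramster/tcp.c one below) convert the old two-step `idr_pre_get()` + `idr_get_new()` retry loop to the then-new `idr_alloc()`, which returns the allocated ID on success or a negative errno: `-ENOSPC` when the requested range is full, `-ENOMEM` on allocation failure. A toy userspace model of that return convention (a fixed slot array, not the real radix-tree-backed idr):

```c
#include <assert.h>

/* Toy model of idr_alloc(): first free slot in [start, end), or errno. */
#define NSLOTS 4
static void *slots[NSLOTS];

static int toy_idr_alloc(void *ptr, int start, int end)
{
	int id;

	if (end <= 0 || end > NSLOTS)
		end = NSLOTS;		/* end == 0 means "no upper bound" */
	for (id = start; id < end; id++) {
		if (!slots[id]) {
			slots[id] = ptr;
			return id;	/* success: the ID itself, >= 0 */
		}
	}
	return -28;			/* -ENOSPC: range exhausted */
}

/* The caller pattern from the hunk: keep the ID on success. */
static int register_obj(void *obj, int *id_out)
{
	int ret = toy_idr_alloc(obj, 0, 0);

	if (ret >= 0) {
		*id_out = ret;
		return 0;
	}
	return ret;			/* -ENOSPC propagated */
}
```

The key simplification in the real conversion is that `idr_alloc()` takes the gfp mask itself, so the preload-and-retry dance around `-EAGAIN` disappears entirely.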
+1 -1
drivers/staging/vt6656/card.c
··· 790 790 if ((~uLowNextTBTT) < uLowRemain) 791 791 qwTSF = ((qwTSF >> 32) + 1) << 32; 792 792 793 - qwTSF = (qwTSF & 0xffffffff00000000UL) | 793 + qwTSF = (qwTSF & 0xffffffff00000000ULL) | 794 794 (u64)(uLowNextTBTT + uLowRemain); 795 795 796 796 return (qwTSF);
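The one-character vt6656 fix above is about 32-bit builds, where `unsigned long` is 32 bits wide. Under C99 an oversized `UL` hex constant is silently promoted to `unsigned long long` anyway, so the computed value was already correct, but the explicit `ULL` suffix documents the 64-bit intent and silences sparse/compiler warnings about a constant too wide for `unsigned long`. The operation itself splices a recomputed low half into a 64-bit TSF timestamp:

```c
#include <assert.h>
#include <stdint.h>

/* Splice a new low 32 bits into a 64-bit timestamp while keeping the
 * high half -- the operation the vt6656 hunk performs on qwTSF. */
static uint64_t splice_low32(uint64_t tsf, uint32_t low)
{
	return (tsf & 0xffffffff00000000ULL) | (uint64_t)low;
}
```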
-4
drivers/staging/vt6656/main_usb.c
··· 669 669 if (device->flags & DEVICE_FLAGS_OPENED) 670 670 device_close(device->dev); 671 671 672 - usb_put_dev(interface_to_usbdev(intf)); 673 - 674 672 return 0; 675 673 } 676 674 ··· 678 680 679 681 if (!device || !device->dev) 680 682 return -ENODEV; 681 - 682 - usb_get_dev(interface_to_usbdev(intf)); 683 683 684 684 if (!(device->flags & DEVICE_FLAGS_OPENED)) 685 685 device_open(device->dev);
+10 -15
drivers/staging/zcache/ramster/tcp.c
··· 300 300 301 301 static int r2net_prep_nsw(struct r2net_node *nn, struct r2net_status_wait *nsw) 302 302 { 303 - int ret = 0; 303 + int ret; 304 304 305 - do { 306 - if (!idr_pre_get(&nn->nn_status_idr, GFP_ATOMIC)) { 307 - ret = -EAGAIN; 308 - break; 309 - } 310 - spin_lock(&nn->nn_lock); 311 - ret = idr_get_new(&nn->nn_status_idr, nsw, &nsw->ns_id); 312 - if (ret == 0) 313 - list_add_tail(&nsw->ns_node_item, 314 - &nn->nn_status_list); 315 - spin_unlock(&nn->nn_lock); 316 - } while (ret == -EAGAIN); 305 + spin_lock(&nn->nn_lock); 306 + ret = idr_alloc(&nn->nn_status_idr, nsw, 0, 0, GFP_ATOMIC); 307 + if (ret >= 0) { 308 + nsw->ns_id = ret; 309 + list_add_tail(&nsw->ns_node_item, &nn->nn_status_list); 310 + } 311 + spin_unlock(&nn->nn_lock); 317 312 318 - if (ret == 0) { 313 + if (ret >= 0) { 319 314 init_waitqueue_head(&nsw->ns_wq); 320 315 nsw->ns_sys_status = R2NET_ERR_NONE; 321 316 nsw->ns_status = 0; 317 + return 0; 322 318 } 323 - 324 319 return ret; 325 320 } 326 321
+51 -1
drivers/tty/serial/8250/8250.c
··· 301 301 }, 302 302 [PORT_8250_CIR] = { 303 303 .name = "CIR port" 304 - } 304 + }, 305 + [PORT_ALTR_16550_F32] = { 306 + .name = "Altera 16550 FIFO32", 307 + .fifo_size = 32, 308 + .tx_loadsz = 32, 309 + .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10, 310 + .flags = UART_CAP_FIFO | UART_CAP_AFE, 311 + }, 312 + [PORT_ALTR_16550_F64] = { 313 + .name = "Altera 16550 FIFO64", 314 + .fifo_size = 64, 315 + .tx_loadsz = 64, 316 + .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10, 317 + .flags = UART_CAP_FIFO | UART_CAP_AFE, 318 + }, 319 + [PORT_ALTR_16550_F128] = { 320 + .name = "Altera 16550 FIFO128", 321 + .fifo_size = 128, 322 + .tx_loadsz = 128, 323 + .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10, 324 + .flags = UART_CAP_FIFO | UART_CAP_AFE, 325 + }, 305 326 }; 306 327 307 328 /* Uart divisor latch read */ ··· 3417 3396 MODULE_PARM_DESC(probe_rsa, "Probe I/O ports for RSA"); 3418 3397 #endif 3419 3398 MODULE_ALIAS_CHARDEV_MAJOR(TTY_MAJOR); 3399 + 3400 + #ifndef MODULE 3401 + /* This module was renamed to 8250_core in 3.7. Keep the old "8250" name 3402 + * working as well for the module options so we don't break people. We 3403 + * need to keep the names identical and the convenient macros will happily 3404 + * refuse to let us do that by failing the build with redefinition errors 3405 + * of global variables. So we stick them inside a dummy function to avoid 3406 + * those conflicts. The options still get parsed, and the redefined 3407 + * MODULE_PARAM_PREFIX lets us keep the "8250." syntax alive. 3408 + * 3409 + * This is hacky. I'm sorry. 3410 + */ 3411 + static void __used s8250_options(void) 3412 + { 3413 + #undef MODULE_PARAM_PREFIX 3414 + #define MODULE_PARAM_PREFIX "8250." 
3415 + 3416 + module_param_cb(share_irqs, &param_ops_uint, &share_irqs, 0644); 3417 + module_param_cb(nr_uarts, &param_ops_uint, &nr_uarts, 0644); 3418 + module_param_cb(skip_txen_test, &param_ops_uint, &skip_txen_test, 0644); 3419 + #ifdef CONFIG_SERIAL_8250_RSA 3420 + __module_param_call(MODULE_PARAM_PREFIX, probe_rsa, 3421 + &param_array_ops, .arr = &__param_arr_probe_rsa, 3422 + 0444, -1); 3423 + #endif 3424 + } 3425 + #else 3426 + MODULE_ALIAS("8250"); 3427 + #endif
+11 -10
drivers/tty/serial/8250/8250_pci.c
··· 1571 1571 1572 1572 /* Unknown vendors/cards - this should not be in linux/pci_ids.h */ 1573 1573 #define PCI_SUBDEVICE_ID_UNKNOWN_0x1584 0x1584 1574 + #define PCI_SUBDEVICE_ID_UNKNOWN_0x1588 0x1588 1574 1575 1575 1576 /* 1576 1577 * Master list of serial port init/setup/exit quirks. ··· 1847 1846 .device = PCI_DEVICE_ID_PLX_9050, 1848 1847 .subvendor = PCI_SUBVENDOR_ID_KEYSPAN, 1849 1848 .subdevice = PCI_SUBDEVICE_ID_KEYSPAN_SX2, 1850 - .init = pci_plx9050_init, 1851 - .setup = pci_default_setup, 1852 - .exit = pci_plx9050_exit, 1853 - }, 1854 - { 1855 - .vendor = PCI_VENDOR_ID_PLX, 1856 - .device = PCI_DEVICE_ID_PLX_9050, 1857 - .subvendor = PCI_VENDOR_ID_PLX, 1858 - .subdevice = PCI_SUBDEVICE_ID_UNKNOWN_0x1584, 1859 1849 .init = pci_plx9050_init, 1860 1850 .setup = pci_default_setup, 1861 1851 .exit = pci_plx9050_exit, ··· 3725 3733 { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, 3726 3734 PCI_VENDOR_ID_PLX, 3727 3735 PCI_SUBDEVICE_ID_UNKNOWN_0x1584, 0, 0, 3728 - pbn_b0_4_115200 }, 3736 + pbn_b2_4_115200 }, 3737 + /* Unknown card - subdevice 0x1588 */ 3738 + { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, 3739 + PCI_VENDOR_ID_PLX, 3740 + PCI_SUBDEVICE_ID_UNKNOWN_0x1588, 0, 0, 3741 + pbn_b2_8_115200 }, 3729 3742 { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, 3730 3743 PCI_SUBVENDOR_ID_KEYSPAN, 3731 3744 PCI_SUBDEVICE_ID_KEYSPAN_SX2, 0, 0, ··· 4786 4789 4787 4790 { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9835, 4788 4791 PCI_VENDOR_ID_IBM, 0x0299, 4792 + 0, 0, pbn_b0_bt_2_115200 }, 4793 + 4794 + { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9835, 4795 + 0x1000, 0x0012, 4789 4796 0, 0, pbn_b0_bt_2_115200 }, 4790 4797 4791 4798 { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9901,
+7 -5
drivers/tty/serial/8250/8250_pnp.c
··· 429 429 { 430 430 struct uart_8250_port uart; 431 431 int ret, line, flags = dev_id->driver_data; 432 + struct resource *res = NULL; 432 433 433 434 if (flags & UNKNOWN_DEV) { 434 435 ret = serial_pnp_guess_board(dev); ··· 440 439 memset(&uart, 0, sizeof(uart)); 441 440 if (pnp_irq_valid(dev, 0)) 442 441 uart.port.irq = pnp_irq(dev, 0); 443 - if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) { 444 - uart.port.iobase = pnp_port_start(dev, 2); 445 - uart.port.iotype = UPIO_PORT; 446 - } else if (pnp_port_valid(dev, 0)) { 447 - uart.port.iobase = pnp_port_start(dev, 0); 442 + if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) 443 + res = pnp_get_resource(dev, IORESOURCE_IO, 2); 444 + else if (pnp_port_valid(dev, 0)) 445 + res = pnp_get_resource(dev, IORESOURCE_IO, 0); 446 + if (pnp_resource_enabled(res)) { 447 + uart.port.iobase = res->start; 448 448 uart.port.iotype = UPIO_PORT; 449 449 } else if (pnp_mem_valid(dev, 0)) { 450 450 uart.port.mapbase = pnp_mem_start(dev, 0);
+2 -2
drivers/tty/serial/Kconfig
··· 211 211 config SERIAL_SAMSUNG_UARTS_4 212 212 bool 213 213 depends on PLAT_SAMSUNG 214 - default y if !(CPU_S3C2410 || SERIAL_S3C2412 || CPU_S3C2440 || CPU_S3C2442) 214 + default y if !(CPU_S3C2410 || CPU_S3C2412 || CPU_S3C2440 || CPU_S3C2442) 215 215 help 216 216 Internal node for the common case of 4 Samsung compatible UARTs 217 217 218 218 config SERIAL_SAMSUNG_UARTS 219 219 int 220 220 depends on PLAT_SAMSUNG 221 - default 6 if ARCH_S5P6450 221 + default 6 if CPU_S5P6450 222 222 default 4 if SERIAL_SAMSUNG_UARTS_4 || CPU_S3C2416 223 223 default 3 224 224 help
+4 -4
drivers/tty/serial/bcm63xx_uart.c
··· 235 235 */ 236 236 static void bcm_uart_do_rx(struct uart_port *port) 237 237 { 238 - struct tty_port *port = &port->state->port; 238 + struct tty_port *tty_port = &port->state->port; 239 239 unsigned int max_count; 240 240 241 241 /* limit number of char read in interrupt, should not be ··· 260 260 bcm_uart_writel(port, val, UART_CTL_REG); 261 261 262 262 port->icount.overrun++; 263 - tty_insert_flip_char(port, 0, TTY_OVERRUN); 263 + tty_insert_flip_char(tty_port, 0, TTY_OVERRUN); 264 264 } 265 265 266 266 if (!(iestat & UART_IR_STAT(UART_IR_RXNOTEMPTY))) ··· 299 299 300 300 301 301 if ((cstat & port->ignore_status_mask) == 0) 302 - tty_insert_flip_char(port, c, flag); 302 + tty_insert_flip_char(tty_port, c, flag); 303 303 304 304 } while (--max_count); 305 305 306 - tty_flip_buffer_push(port); 306 + tty_flip_buffer_push(tty_port); 307 307 } 308 308 309 309 /*
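The bcm63xx hunk renames the local to `tty_port` because the original declaration `struct tty_port *port = &port->state->port;` shadows the function parameter: by C scoping rules, inside its own initializer `port` already names the new (uninitialized) local, not the `uart_port` argument. A minimal reproduction of the rule with plain structs (no kernel types):

```c
#include <assert.h>

struct inner { int value; };
struct outer { struct inner in; };

/* Correct version: distinct name, so the initializer sees the parameter. */
static int read_value(struct outer *port)
{
	struct inner *inner_port = &port->in;	/* ok: 'port' is the param */
	/* The buggy original was the equivalent of:
	 *   struct inner *port = &port->in;
	 * where the initializer's 'port' is the new, uninitialized local. */
	return inner_port->value;
}
```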
+1 -1
drivers/tty/serial/mpc52xx_uart.c
··· 550 550 return 0; 551 551 552 552 psc_num = (port->mapbase & 0xf00) >> 8; 553 - snprintf(clk_name, sizeof(clk_name), "psc%d_clk", psc_num); 553 + snprintf(clk_name, sizeof(clk_name), "psc%d_mclk", psc_num); 554 554 psc_clk = clk_get(port->dev, clk_name); 555 555 if (IS_ERR(psc_clk)) { 556 556 dev_err(port->dev, "Failed to get PSC clock entry!\n");
+6
drivers/tty/serial/of_serial.c
··· 241 241 { .compatible = "ns16850", .data = (void *)PORT_16850, }, 242 242 { .compatible = "nvidia,tegra20-uart", .data = (void *)PORT_TEGRA, }, 243 243 { .compatible = "nxp,lpc3220-uart", .data = (void *)PORT_LPC3220, }, 244 + { .compatible = "altr,16550-FIFO32", 245 + .data = (void *)PORT_ALTR_16550_F32, }, 246 + { .compatible = "altr,16550-FIFO64", 247 + .data = (void *)PORT_ALTR_16550_F64, }, 248 + { .compatible = "altr,16550-FIFO128", 249 + .data = (void *)PORT_ALTR_16550_F128, }, 244 250 #ifdef CONFIG_SERIAL_OF_PLATFORM_NWPSERIAL 245 251 { .compatible = "ibm,qpace-nwp-serial", 246 252 .data = (void *)PORT_NWPSERIAL, },
+1 -8
drivers/tty/serial/vt8500_serial.c
··· 612 612 vt8500_port->uart.dev = &pdev->dev; 613 613 vt8500_port->uart.flags = UPF_IOREMAP | UPF_BOOT_AUTOCONF; 614 614 615 - vt8500_port->clk = of_clk_get(pdev->dev.of_node, 0); 616 - if (!IS_ERR(vt8500_port->clk)) { 617 - vt8500_port->uart.uartclk = clk_get_rate(vt8500_port->clk); 618 - } else { 619 - /* use the default of 24Mhz if not specified and warn */ 620 - pr_warn("%s: serial clock source not specified\n", __func__); 621 - vt8500_port->uart.uartclk = 24000000; 622 - } 615 + vt8500_port->uart.uartclk = clk_get_rate(vt8500_port->clk); 623 616 624 617 snprintf(vt8500_port->name, sizeof(vt8500_port->name), 625 618 "VT8500 UART%d", pdev->id);
+1 -1
drivers/tty/tty_buffer.c
··· 425 425 struct tty_ldisc *disc; 426 426 427 427 tty = port->itty; 428 - if (WARN_RATELIMIT(tty == NULL, "tty is NULL\n")) 428 + if (tty == NULL) 429 429 return; 430 430 431 431 disc = tty_ldisc_ref(tty);
+1 -1
drivers/usb/Makefile
··· 46 46 obj-$(CONFIG_USB_SERIAL) += serial/ 47 47 48 48 obj-$(CONFIG_USB) += misc/ 49 - obj-$(CONFIG_USB_COMMON) += phy/ 49 + obj-$(CONFIG_USB_OTG_UTILS) += phy/ 50 50 obj-$(CONFIG_EARLY_PRINTK_DBGP) += early/ 51 51 52 52 obj-$(CONFIG_USB_ATM) += atm/
+2 -2
drivers/usb/c67x00/c67x00-sched.c
··· 100 100 #define TD_PIDEP_OFFSET 0x04 101 101 #define TD_PIDEPMASK_PID 0xF0 102 102 #define TD_PIDEPMASK_EP 0x0F 103 - #define TD_PORTLENMASK_DL 0x02FF 103 + #define TD_PORTLENMASK_DL 0x03FF 104 104 #define TD_PORTLENMASK_PN 0xC000 105 105 106 106 #define TD_STATUS_OFFSET 0x07 ··· 590 590 { 591 591 struct c67x00_td *td; 592 592 struct c67x00_urb_priv *urbp = urb->hcpriv; 593 - const __u8 active_flag = 1, retry_cnt = 1; 593 + const __u8 active_flag = 1, retry_cnt = 3; 594 594 __u8 cmd = 0; 595 595 int tt = 0; 596 596
+3 -3
drivers/usb/chipidea/udc.c
··· 1767 1767 goto put_transceiver; 1768 1768 } 1769 1769 1770 - retval = dbg_create_files(&ci->gadget.dev); 1770 + retval = dbg_create_files(ci->dev); 1771 1771 if (retval) 1772 1772 goto unreg_device; 1773 1773 ··· 1796 1796 1797 1797 dev_err(dev, "error = %i\n", retval); 1798 1798 remove_dbg: 1799 - dbg_remove_files(&ci->gadget.dev); 1799 + dbg_remove_files(ci->dev); 1800 1800 unreg_device: 1801 1801 device_unregister(&ci->gadget.dev); 1802 1802 put_transceiver: ··· 1836 1836 if (ci->global_phy) 1837 1837 usb_put_phy(ci->transceiver); 1838 1838 } 1839 - dbg_remove_files(&ci->gadget.dev); 1839 + dbg_remove_files(ci->dev); 1840 1840 device_unregister(&ci->gadget.dev); 1841 1841 /* my kobject is dynamic, I swear! */ 1842 1842 memset(&ci->gadget, 0, sizeof(ci->gadget));
+20 -3
drivers/usb/class/cdc-wdm.c
··· 56 56 #define WDM_RESPONDING 7 57 57 #define WDM_SUSPENDING 8 58 58 #define WDM_RESETTING 9 59 + #define WDM_OVERFLOW 10 59 60 60 61 #define WDM_MAX 16 61 62 ··· 156 155 { 157 156 struct wdm_device *desc = urb->context; 158 157 int status = urb->status; 158 + int length = urb->actual_length; 159 159 160 160 spin_lock(&desc->iuspin); 161 161 clear_bit(WDM_RESPONDING, &desc->flags); ··· 187 185 } 188 186 189 187 desc->rerr = status; 190 - desc->reslength = urb->actual_length; 191 - memmove(desc->ubuf + desc->length, desc->inbuf, desc->reslength); 192 - desc->length += desc->reslength; 188 + if (length + desc->length > desc->wMaxCommand) { 189 + /* The buffer would overflow */ 190 + set_bit(WDM_OVERFLOW, &desc->flags); 191 + } else { 192 + /* we may already be in overflow */ 193 + if (!test_bit(WDM_OVERFLOW, &desc->flags)) { 194 + memmove(desc->ubuf + desc->length, desc->inbuf, length); 195 + desc->length += length; 196 + desc->reslength = length; 197 + } 198 + } 193 199 skip_error: 194 200 wake_up(&desc->wait); 195 201 ··· 445 435 rv = -ENODEV; 446 436 goto err; 447 437 } 438 + if (test_bit(WDM_OVERFLOW, &desc->flags)) { 439 + clear_bit(WDM_OVERFLOW, &desc->flags); 440 + rv = -ENOBUFS; 441 + goto err; 442 + } 448 443 i++; 449 444 if (file->f_flags & O_NONBLOCK) { 450 445 if (!test_bit(WDM_READ, &desc->flags)) { ··· 493 478 spin_unlock_irq(&desc->iuspin); 494 479 goto retry; 495 480 } 481 + 496 482 if (!desc->reslength) { /* zero length read */ 497 483 dev_dbg(&desc->intf->dev, "%s: zero length - clearing WDM_READ\n", __func__); 498 484 clear_bit(WDM_READ, &desc->flags); ··· 1020 1004 struct wdm_device *desc = wdm_find_device(intf); 1021 1005 int rv; 1022 1006 1007 + clear_bit(WDM_OVERFLOW, &desc->flags); 1023 1008 clear_bit(WDM_RESETTING, &desc->flags); 1024 1009 rv = recover_from_urb_loss(desc); 1025 1010 mutex_unlock(&desc->wlock);
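The cdc-wdm hunk above adds a sticky `WDM_OVERFLOW` flag: when a response fragment would overrun the `wMaxCommand`-sized buffer, the driver stops appending, latches the overflow, and a subsequent `read()` returns `-ENOBUFS` after clearing the flag. A hedged userspace model of that append-or-latch logic:

```c
#include <assert.h>
#include <string.h>

#define BUFCAP 8

struct resp_buf {
	unsigned char data[BUFCAP];
	size_t length;
	int overflow;			/* sticky WDM_OVERFLOW bit */
};

/* Append one fragment; latch overflow instead of writing past the end.
 * Once latched, later fragments are dropped too ("we may already be
 * in overflow"), matching the hunk. */
static void push_fragment(struct resp_buf *b, const void *frag, size_t len)
{
	if (len + b->length > BUFCAP)
		b->overflow = 1;
	else if (!b->overflow) {
		memcpy(b->data + b->length, frag, len);
		b->length += len;
	}
}

/* read() side: report and clear the latched overflow. */
static int drain(struct resp_buf *b)
{
	if (b->overflow) {
		b->overflow = 0;
		return -105;		/* -ENOBUFS */
	}
	return (int)b->length;
}
```

Compared with the old unconditional `memmove()`, the bounds check means a misbehaving device can no longer scribble past the user buffer; the reader simply sees one `-ENOBUFS` and the stream resumes.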
+1
drivers/usb/dwc3/core.c
··· 583 583 break; 584 584 } 585 585 586 + dwc3_free_event_buffers(dwc); 586 587 dwc3_core_exit(dwc); 587 588 588 589 return 0;
-2
drivers/usb/dwc3/dwc3-exynos.c
··· 23 23 #include <linux/usb/nop-usb-xceiv.h> 24 24 #include <linux/of.h> 25 25 26 - #include "core.h" 27 - 28 26 struct dwc3_exynos { 29 27 struct platform_device *dwc3; 30 28 struct platform_device *usb2_phy;
+3 -5
drivers/usb/dwc3/dwc3-omap.c
··· 54 54 #include <linux/usb/otg.h> 55 55 #include <linux/usb/nop-usb-xceiv.h> 56 56 57 - #include "core.h" 58 - 59 57 /* 60 58 * All these registers belong to OMAP's Wrapper around the 61 59 * DesignWare USB3 Core. ··· 463 465 return 0; 464 466 } 465 467 466 - static const struct of_device_id of_dwc3_matach[] = { 468 + static const struct of_device_id of_dwc3_match[] = { 467 469 { 468 470 "ti,dwc3", 469 471 }, 470 472 { }, 471 473 }; 472 - MODULE_DEVICE_TABLE(of, of_dwc3_matach); 474 + MODULE_DEVICE_TABLE(of, of_dwc3_match); 473 475 474 476 static struct platform_driver dwc3_omap_driver = { 475 477 .probe = dwc3_omap_probe, 476 478 .remove = dwc3_omap_remove, 477 479 .driver = { 478 480 .name = "omap-dwc3", 479 - .of_match_table = of_dwc3_matach, 481 + .of_match_table = of_dwc3_match, 480 482 }, 481 483 }; 482 484
-2
drivers/usb/dwc3/dwc3-pci.c
··· 45 45 #include <linux/usb/otg.h> 46 46 #include <linux/usb/nop-usb-xceiv.h> 47 47 48 - #include "core.h" 49 - 50 48 /* FIXME define these in <linux/pci_ids.h> */ 51 49 #define PCI_VENDOR_ID_SYNOPSYS 0x16c3 52 50 #define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3 0xabcd
+4 -3
drivers/usb/dwc3/ep0.c
··· 891 891 DWC3_TRBCTL_CONTROL_DATA); 892 892 } else if (!IS_ALIGNED(req->request.length, dep->endpoint.maxpacket) 893 893 && (dep->number == 0)) { 894 - u32 transfer_size; 894 + u32 transfer_size; 895 + u32 maxpacket; 895 896 896 897 ret = usb_gadget_map_request(&dwc->gadget, &req->request, 897 898 dep->number); ··· 903 902 904 903 WARN_ON(req->request.length > DWC3_EP0_BOUNCE_SIZE); 905 904 906 - transfer_size = roundup(req->request.length, 907 - (u32) dep->endpoint.maxpacket); 905 + maxpacket = dep->endpoint.maxpacket; 906 + transfer_size = roundup(req->request.length, maxpacket); 908 907 909 908 dwc->ep0_bounced = true; 910 909
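The ep0.c hunk just lifts the endpoint's max packet size into a named `maxpacket` variable and drops the cast; the sizing logic is unchanged. The kernel's `roundup(x, y)` rounds `x` up to the next multiple of `y`, so the ep0 bounce buffer always holds whole packets. A userspace copy of the macro:

```c
#include <assert.h>

/* Kernel-style roundup(): smallest multiple of y that is >= x
 * (integer arithmetic, y > 0). */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

/* Bounce-buffer sizing as in the hunk: whole packets only. */
static unsigned int transfer_size(unsigned int length, unsigned int maxpacket)
{
	return roundup(length, maxpacket);
}
```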
-3
drivers/usb/dwc3/gadget.c
··· 2159 2159 2160 2160 static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc) 2161 2161 { 2162 - struct dwc3_gadget_ep_cmd_params params; 2163 2162 struct dwc3_ep *dep; 2164 2163 int ret; 2165 2164 u32 reg; 2166 2165 u8 speed; 2167 2166 2168 2167 dev_vdbg(dwc->dev, "%s\n", __func__); 2169 - 2170 - memset(&params, 0x00, sizeof(params)); 2171 2168 2172 2169 reg = dwc3_readl(dwc->regs, DWC3_DSTS); 2173 2170 speed = reg & DWC3_DSTS_CONNECTSPD;
+6 -6
drivers/usb/gadget/Makefile
··· 35 35 obj-$(CONFIG_USB_FUSB300) += fusb300_udc.o 36 36 obj-$(CONFIG_USB_MV_U3D) += mv_u3d_core.o 37 37 38 + # USB Functions 39 + obj-$(CONFIG_USB_F_ACM) += f_acm.o 40 + f_ss_lb-y := f_loopback.o f_sourcesink.o 41 + obj-$(CONFIG_USB_F_SS_LB) += f_ss_lb.o 42 + obj-$(CONFIG_USB_U_SERIAL) += u_serial.o 43 + 38 44 # 39 45 # USB gadget drivers 40 46 # ··· 80 74 obj-$(CONFIG_USB_G_NCM) += g_ncm.o 81 75 obj-$(CONFIG_USB_G_ACM_MS) += g_acm_ms.o 82 76 obj-$(CONFIG_USB_GADGET_TARGET) += tcm_usb_gadget.o 83 - 84 - # USB Functions 85 - obj-$(CONFIG_USB_F_ACM) += f_acm.o 86 - f_ss_lb-y := f_loopback.o f_sourcesink.o 87 - obj-$(CONFIG_USB_F_SS_LB) += f_ss_lb.o 88 - obj-$(CONFIG_USB_U_SERIAL) += u_serial.o
+1 -4
drivers/usb/gadget/composite.c
··· 1757 1757 /** 1758 1758 * usb_composite_probe() - register a composite driver 1759 1759 * @driver: the driver to register 1760 - * @bind: the callback used to allocate resources that are shared across the 1761 - * whole device, such as string IDs, and add its configurations using 1762 - * @usb_add_config(). This may fail by returning a negative errno 1763 - * value; it should return zero on successful initialization. 1760 + * 1764 1761 * Context: single threaded during gadget setup 1765 1762 * 1766 1763 * This function is used to register drivers using the composite driver
+1
drivers/usb/gadget/f_uac1.c
··· 418 418 419 419 req->context = audio; 420 420 req->complete = f_audio_complete; 421 + len = min_t(size_t, sizeof(value), len); 421 422 memcpy(req->buf, &value, len); 422 423 423 424 return len;
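The one-line f_uac1.c fix clamps `len` to `sizeof(value)` before the `memcpy()`, so a host requesting more bytes than the control value holds can no longer make the gadget copy past the local variable. The kernel's `min_t(type, a, b)` is a type-casting minimum; a hedged sketch of the clamp (`copy_clamped` and the sample value are illustrative, not the driver's names):

```c
#include <assert.h>
#include <string.h>

/* Simplified min_t: real kernel version avoids double evaluation. */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Copy a device-side value into a response buffer, never more than
 * the value itself -- regardless of the host-requested length. */
static size_t copy_clamped(void *buf, size_t requested)
{
	unsigned short value = 0x8001;	/* e.g. a volume setting */
	size_t len = min_t(size_t, sizeof(value), requested);

	memcpy(buf, &value, len);
	return len;
}
```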
+8 -12
drivers/usb/gadget/imx_udc.c
··· 1334 1334 struct usb_gadget_driver *driver) 1335 1335 { 1336 1336 struct imx_udc_struct *imx_usb; 1337 - int retval; 1338 1337 1339 1338 imx_usb = container_of(gadget, struct imx_udc_struct, gadget); 1340 1339 /* first hook up the driver ... */ 1341 1340 imx_usb->driver = driver; 1342 1341 imx_usb->gadget.dev.driver = &driver->driver; 1343 - 1344 - retval = device_add(&imx_usb->gadget.dev); 1345 - if (retval) 1346 - goto fail; 1347 1342 1348 1343 D_INI(imx_usb->dev, "<%s> registered gadget driver '%s'\n", 1349 1344 __func__, driver->driver.name); ··· 1346 1351 imx_udc_enable(imx_usb); 1347 1352 1348 1353 return 0; 1349 - fail: 1350 - imx_usb->driver = NULL; 1351 - imx_usb->gadget.dev.driver = NULL; 1352 - return retval; 1353 1354 } 1354 1355 1355 1356 static int imx_udc_stop(struct usb_gadget *gadget, ··· 1360 1369 1361 1370 imx_usb->gadget.dev.driver = NULL; 1362 1371 imx_usb->driver = NULL; 1363 - 1364 - device_del(&imx_usb->gadget.dev); 1365 1372 1366 1373 D_INI(imx_usb->dev, "<%s> unregistered gadget driver '%s'\n", 1367 1374 __func__, driver->driver.name); ··· 1466 1477 imx_usb->gadget.dev.parent = &pdev->dev; 1467 1478 imx_usb->gadget.dev.dma_mask = pdev->dev.dma_mask; 1468 1479 1480 + ret = device_add(&imx_usb->gadget.dev); 1481 + if (retval) 1482 + goto fail4; 1483 + 1469 1484 platform_set_drvdata(pdev, imx_usb); 1470 1485 1471 1486 usb_init_data(imx_usb); ··· 1481 1488 1482 1489 ret = usb_add_gadget_udc(&pdev->dev, &imx_usb->gadget); 1483 1490 if (ret) 1484 - goto fail4; 1491 + goto fail5; 1485 1492 1486 1493 return 0; 1494 + fail5: 1495 + device_unregister(&imx_usb->gadget.dev); 1487 1496 fail4: 1488 1497 for (i = 0; i < IMX_USB_NB_EP + 1; i++) 1489 1498 free_irq(imx_usb->usbd_int[i], imx_usb); ··· 1509 1514 int i; 1510 1515 1511 1516 usb_del_gadget_udc(&imx_usb->gadget); 1517 + device_unregister(&imx_usb->gadget.dev); 1512 1518 imx_udc_disable(imx_usb); 1513 1519 del_timer(&imx_usb->timer); 1514 1520
+2 -1
drivers/usb/gadget/omap_udc.c
··· 62 62 #define DRIVER_VERSION "4 October 2004" 63 63 64 64 #define OMAP_DMA_USB_W2FC_TX0 29 65 + #define OMAP_DMA_USB_W2FC_RX0 26 65 66 66 67 /* 67 68 * The OMAP UDC needs _very_ early endpoint setup: before enabling the ··· 1311 1310 } 1312 1311 1313 1312 static int omap_udc_start(struct usb_gadget *g, 1314 - struct usb_gadget_driver *driver) 1313 + struct usb_gadget_driver *driver); 1315 1314 static int omap_udc_stop(struct usb_gadget *g, 1316 1315 struct usb_gadget_driver *driver); 1317 1316
+15 -9
drivers/usb/gadget/pxa25x_udc.c
··· 1266 1266 dev->gadget.dev.driver = &driver->driver; 1267 1267 dev->pullup = 1; 1268 1268 1269 - retval = device_add (&dev->gadget.dev); 1270 - if (retval) { 1271 - dev->driver = NULL; 1272 - dev->gadget.dev.driver = NULL; 1273 - return retval; 1274 - } 1275 - 1276 1269 /* ... then enable host detection and ep0; and we're ready 1277 1270 * for set_configuration as well as eventual disconnect. 1278 1271 */ ··· 1303 1310 } 1304 1311 del_timer_sync(&dev->timer); 1305 1312 1313 + /* report disconnect; the driver is already quiesced */ 1314 + if (driver) 1315 + driver->disconnect(&dev->gadget); 1316 + 1306 1317 /* re-init driver-visible data structures */ 1307 1318 udc_reinit(dev); 1308 1319 } ··· 1328 1331 dev->gadget.dev.driver = NULL; 1329 1332 dev->driver = NULL; 1330 1333 1331 - device_del (&dev->gadget.dev); 1332 1334 dump_state(dev); 1333 1335 1334 1336 return 0; ··· 2142 2146 dev->gadget.dev.parent = &pdev->dev; 2143 2147 dev->gadget.dev.dma_mask = pdev->dev.dma_mask; 2144 2148 2149 + retval = device_add(&dev->gadget.dev); 2150 + if (retval) { 2151 + dev->driver = NULL; 2152 + dev->gadget.dev.driver = NULL; 2153 + goto err_device_add; 2154 + } 2155 + 2145 2156 the_controller = dev; 2146 2157 platform_set_drvdata(pdev, dev); 2147 2158 ··· 2199 2196 free_irq(irq, dev); 2200 2197 #endif 2201 2198 err_irq1: 2199 + device_unregister(&dev->gadget.dev); 2200 + err_device_add: 2202 2201 if (gpio_is_valid(dev->mach->gpio_pullup)) 2203 2202 gpio_free(dev->mach->gpio_pullup); 2204 2203 err_gpio_pullup: ··· 2222 2217 { 2223 2218 struct pxa25x_udc *dev = platform_get_drvdata(pdev); 2224 2219 2225 - usb_del_gadget_udc(&dev->gadget); 2226 2220 if (dev->driver) 2227 2221 return -EBUSY; 2228 2222 2223 + usb_del_gadget_udc(&dev->gadget); 2224 + device_unregister(&dev->gadget.dev); 2229 2225 dev->pullup = 0; 2230 2226 pullup(dev); 2231 2227
+12 -6
drivers/usb/gadget/pxa27x_udc.c
··· 1814 1814 udc->gadget.dev.driver = &driver->driver; 1815 1815 dplus_pullup(udc, 1); 1816 1816 1817 - retval = device_add(&udc->gadget.dev); 1818 - if (retval) { 1819 - dev_err(udc->dev, "device_add error %d\n", retval); 1820 - goto fail; 1821 - } 1822 1817 if (!IS_ERR_OR_NULL(udc->transceiver)) { 1823 1818 retval = otg_set_peripheral(udc->transceiver->otg, 1824 1819 &udc->gadget); ··· 1871 1876 1872 1877 udc->driver = NULL; 1873 1878 1874 - device_del(&udc->gadget.dev); 1875 1879 1876 1880 if (!IS_ERR_OR_NULL(udc->transceiver)) 1877 1881 return otg_set_peripheral(udc->transceiver->otg, NULL); ··· 2474 2480 driver_name, udc->irq, retval); 2475 2481 goto err_irq; 2476 2482 } 2483 + 2484 + retval = device_add(&udc->gadget.dev); 2485 + if (retval) { 2486 + dev_err(udc->dev, "device_add error %d\n", retval); 2487 + goto err_dev_add; 2488 + } 2489 + 2477 2490 retval = usb_add_gadget_udc(&pdev->dev, &udc->gadget); 2478 2491 if (retval) 2479 2492 goto err_add_udc; 2480 2493 2481 2494 pxa_init_debugfs(udc); 2495 + 2482 2496 return 0; 2497 + 2483 2498 err_add_udc: 2499 + device_unregister(&udc->gadget.dev); 2500 + err_dev_add: 2484 2501 free_irq(udc->irq, udc); 2485 2502 err_irq: 2486 2503 iounmap(udc->regs); ··· 2512 2507 int gpio = udc->mach->gpio_pullup; 2513 2508 2514 2509 usb_del_gadget_udc(&udc->gadget); 2510 + device_del(&udc->gadget.dev); 2515 2511 usb_gadget_unregister_driver(udc->driver); 2516 2512 free_irq(udc->irq, udc); 2517 2513 pxa_cleanup_debugfs(udc);
+12 -16
drivers/usb/gadget/s3c2410_udc.c
··· 1668 1668 static int s3c2410_udc_start(struct usb_gadget *g, 1669 1669 struct usb_gadget_driver *driver) 1670 1670 { 1671 - struct s3c2410_udc *udc = to_s3c2410(g) 1672 - int retval; 1671 + struct s3c2410_udc *udc = to_s3c2410(g); 1673 1672 1674 1673 dprintk(DEBUG_NORMAL, "%s() '%s'\n", __func__, driver->driver.name); 1675 1674 ··· 1676 1677 udc->driver = driver; 1677 1678 udc->gadget.dev.driver = &driver->driver; 1678 1679 1679 - /* Bind the driver */ 1680 - retval = device_add(&udc->gadget.dev); 1681 - if (retval) { 1682 - dev_err(&udc->gadget.dev, "Error in device_add() : %d\n", retval); 1683 - goto register_error; 1684 - } 1685 - 1686 1680 /* Enable udc */ 1687 1681 s3c2410_udc_enable(udc); 1688 1682 1689 1683 return 0; 1690 - 1691 - register_error: 1692 - udc->driver = NULL; 1693 - udc->gadget.dev.driver = NULL; 1694 - return retval; 1695 1684 } 1696 1685 1697 1686 static int s3c2410_udc_stop(struct usb_gadget *g, ··· 1687 1700 { 1688 1701 struct s3c2410_udc *udc = to_s3c2410(g); 1689 1702 1690 - device_del(&udc->gadget.dev); 1691 1703 udc->driver = NULL; 1692 1704 1693 1705 /* Disable udc */ ··· 1828 1842 udc->gadget.dev.parent = &pdev->dev; 1829 1843 udc->gadget.dev.dma_mask = pdev->dev.dma_mask; 1830 1844 1845 + /* Bind the driver */ 1846 + retval = device_add(&udc->gadget.dev); 1847 + if (retval) { 1848 + dev_err(&udc->gadget.dev, "Error in device_add() : %d\n", retval); 1849 + goto err_device_add; 1850 + } 1851 + 1831 1852 the_controller = udc; 1832 1853 platform_set_drvdata(pdev, udc); 1833 1854 ··· 1923 1930 err_int: 1924 1931 free_irq(IRQ_USBD, udc); 1925 1932 err_map: 1933 + device_unregister(&udc->gadget.dev); 1934 + err_device_add: 1926 1935 iounmap(base_addr); 1927 1936 err_mem: 1928 1937 release_mem_region(rsrc_start, rsrc_len); ··· 1942 1947 1943 1948 dev_dbg(&pdev->dev, "%s()\n", __func__); 1944 1949 1945 - usb_del_gadget_udc(&udc->gadget); 1946 1950 if (udc->driver) 1947 1951 return -EBUSY; 1948 1952 1953 + usb_del_gadget_udc(&udc->gadget); 
1954 + device_unregister(&udc->gadget.dev); 1949 1955 debugfs_remove(udc->regs_info); 1950 1956 1951 1957 if (udc_info && !udc_info->udc_command &&
+3
drivers/usb/gadget/u_uac1.c
··· 240 240 snd = &card->playback; 241 241 snd->filp = filp_open(fn_play, O_WRONLY, 0); 242 242 if (IS_ERR(snd->filp)) { 243 + int ret = PTR_ERR(snd->filp); 244 + 243 245 ERROR(card, "No such PCM playback device: %s\n", fn_play); 244 246 snd->filp = NULL; 247 + return ret; 245 248 } 246 249 pcm_file = snd->filp->private_data; 247 250 snd->substream = pcm_file->substream;
+2 -4
drivers/usb/host/ehci-hcd.c
··· 748 748 /* guard against (alleged) silicon errata */ 749 749 if (cmd & CMD_IAAD) 750 750 ehci_dbg(ehci, "IAA with IAAD still set?\n"); 751 - if (ehci->async_iaa) { 751 + if (ehci->async_iaa) 752 752 COUNT(ehci->stats.iaa); 753 - end_unlink_async(ehci); 754 - } else 755 - ehci_dbg(ehci, "IAA with nothing unlinked?\n"); 753 + end_unlink_async(ehci); 756 754 } 757 755 758 756 /* remote wakeup [4.3.1] */
+27 -9
drivers/usb/host/ehci-q.c
··· 135 135 * qtd is updated in qh_completions(). Update the QH 136 136 * overlay here. 137 137 */ 138 - if (cpu_to_hc32(ehci, qtd->qtd_dma) == qh->hw->hw_current) { 138 + if (qh->hw->hw_token & ACTIVE_BIT(ehci)) { 139 139 qh->hw->hw_qtd_next = qtd->hw_next; 140 140 qtd = NULL; 141 141 } ··· 449 449 else if (last_status == -EINPROGRESS && !urb->unlinked) 450 450 continue; 451 451 452 - /* qh unlinked; token in overlay may be most current */ 453 - if (state == QH_STATE_IDLE 454 - && cpu_to_hc32(ehci, qtd->qtd_dma) 455 - == hw->hw_current) { 452 + /* 453 + * If this was the active qtd when the qh was unlinked 454 + * and the overlay's token is active, then the overlay 455 + * hasn't been written back to the qtd yet so use its 456 + * token instead of the qtd's. After the qtd is 457 + * processed and removed, the overlay won't be valid 458 + * any more. 459 + */ 460 + if (state == QH_STATE_IDLE && 461 + qh->qtd_list.next == &qtd->qtd_list && 462 + (hw->hw_token & ACTIVE_BIT(ehci))) { 456 463 token = hc32_to_cpu(ehci, hw->hw_token); 464 + hw->hw_token &= ~ACTIVE_BIT(ehci); 457 465 458 466 /* An unlink may leave an incomplete 459 467 * async transaction in the TT buffer. ··· 1178 1170 struct ehci_qh *prev; 1179 1171 1180 1172 /* Add to the end of the list of QHs waiting for the next IAAD */ 1181 - qh->qh_state = QH_STATE_UNLINK; 1173 + qh->qh_state = QH_STATE_UNLINK_WAIT; 1182 1174 if (ehci->async_unlink) 1183 1175 ehci->async_unlink_last->unlink_next = qh; 1184 1176 else ··· 1221 1213 1222 1214 /* Do only the first waiting QH (nVidia bug?) */ 1223 1215 qh = ehci->async_unlink; 1224 - ehci->async_iaa = qh; 1225 - ehci->async_unlink = qh->unlink_next; 1226 - qh->unlink_next = NULL; 1216 + 1217 + /* 1218 + * Intel (?) bug: The HC can write back the overlay region 1219 + * even after the IAA interrupt occurs. In self-defense, 1220 + * always go through two IAA cycles for each QH. 
1221 + */ 1222 + if (qh->qh_state == QH_STATE_UNLINK_WAIT) { 1223 + qh->qh_state = QH_STATE_UNLINK; 1224 + } else { 1225 + ehci->async_iaa = qh; 1226 + ehci->async_unlink = qh->unlink_next; 1227 + qh->unlink_next = NULL; 1228 + } 1227 1229 1228 1230 /* Make sure the unlinks are all visible to the hardware */ 1229 1231 wmb();
-5
drivers/usb/musb/Kconfig
··· 7 7 config USB_MUSB_HDRC 8 8 tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)' 9 9 depends on USB && USB_GADGET 10 - select NOP_USB_XCEIV if (ARCH_DAVINCI || MACH_OMAP3EVM || BLACKFIN) 11 - select NOP_USB_XCEIV if (SOC_TI81XX || SOC_AM33XX) 12 - select TWL4030_USB if MACH_OMAP_3430SDP 13 - select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA 14 - select OMAP_CONTROL_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA 15 10 select USB_OTG_UTILS 16 11 help 17 12 Say Y here if your system has a dual role high speed USB
-6
drivers/usb/musb/musb_core.c
··· 1624 1624 1625 1625 /*-------------------------------------------------------------------------*/ 1626 1626 1627 - #ifdef CONFIG_SYSFS 1628 - 1629 1627 static ssize_t 1630 1628 musb_mode_show(struct device *dev, struct device_attribute *attr, char *buf) 1631 1629 { ··· 1739 1741 static const struct attribute_group musb_attr_group = { 1740 1742 .attrs = musb_attributes, 1741 1743 }; 1742 - 1743 - #endif /* sysfs */ 1744 1744 1745 1745 /* Only used to provide driver mode change events */ 1746 1746 static void musb_irq_work(struct work_struct *data) ··· 1964 1968 if (status < 0) 1965 1969 goto fail4; 1966 1970 1967 - #ifdef CONFIG_SYSFS 1968 1971 status = sysfs_create_group(&musb->controller->kobj, &musb_attr_group); 1969 1972 if (status) 1970 1973 goto fail5; 1971 - #endif 1972 1974 1973 1975 pm_runtime_put(musb->controller); 1974 1976
+8 -4
drivers/usb/musb/omap2430.c
··· 51 51 }; 52 52 #define glue_to_musb(g) platform_get_drvdata(g->musb) 53 53 54 - struct omap2430_glue *_glue; 54 + static struct omap2430_glue *_glue; 55 55 56 56 static struct timer_list musb_idle_timer; 57 57 ··· 237 237 { 238 238 struct omap2430_glue *glue = _glue; 239 239 240 - if (glue && glue_to_musb(glue)) { 241 - glue->status = status; 242 - } else { 240 + if (!glue) { 241 + pr_err("%s: musb core is not yet initialized\n", __func__); 242 + return; 243 + } 244 + glue->status = status; 245 + 246 + if (!glue_to_musb(glue)) { 243 247 pr_err("%s: musb core is not yet ready\n", __func__); 244 248 return; 245 249 }
+7 -3
drivers/usb/otg/otg.c
··· 130 130 spin_lock_irqsave(&phy_lock, flags); 131 131 132 132 phy = __usb_find_phy(&phy_list, type); 133 - if (IS_ERR(phy)) { 133 + if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) { 134 134 pr_err("unable to find transceiver of type %s\n", 135 135 usb_phy_type_string(type)); 136 136 goto err0; ··· 228 228 spin_lock_irqsave(&phy_lock, flags); 229 229 230 230 phy = __usb_find_phy_dev(dev, &phy_bind_list, index); 231 - if (IS_ERR(phy)) { 231 + if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) { 232 232 pr_err("unable to find transceiver\n"); 233 233 goto err0; 234 234 } ··· 301 301 */ 302 302 void usb_put_phy(struct usb_phy *x) 303 303 { 304 - if (x) 304 + if (x) { 305 + struct module *owner = x->dev->driver->owner; 306 + 305 307 put_device(x->dev); 308 + module_put(owner); 309 + } 306 310 } 307 311 EXPORT_SYMBOL(usb_put_phy); 308 312
+9 -15
drivers/usb/phy/omap-control-usb.c
··· 219 219 220 220 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 221 221 "control_dev_conf"); 222 - control_usb->dev_conf = devm_request_and_ioremap(&pdev->dev, res); 223 - if (!control_usb->dev_conf) { 224 - dev_err(&pdev->dev, "Failed to obtain io memory\n"); 225 - return -EADDRNOTAVAIL; 226 - } 222 + control_usb->dev_conf = devm_ioremap_resource(&pdev->dev, res); 223 + if (IS_ERR(control_usb->dev_conf)) 224 + return PTR_ERR(control_usb->dev_conf); 227 225 228 226 if (control_usb->type == OMAP_CTRL_DEV_TYPE1) { 229 227 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 230 228 "otghs_control"); 231 - control_usb->otghs_control = devm_request_and_ioremap( 229 + control_usb->otghs_control = devm_ioremap_resource( 232 230 &pdev->dev, res); 233 - if (!control_usb->otghs_control) { 234 - dev_err(&pdev->dev, "Failed to obtain io memory\n"); 235 - return -EADDRNOTAVAIL; 236 - } 231 + if (IS_ERR(control_usb->otghs_control)) 232 + return PTR_ERR(control_usb->otghs_control); 237 233 } 238 234 239 235 if (control_usb->type == OMAP_CTRL_DEV_TYPE2) { 240 236 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 241 237 "phy_power_usb"); 242 - control_usb->phy_power = devm_request_and_ioremap( 238 + control_usb->phy_power = devm_ioremap_resource( 243 239 &pdev->dev, res); 244 - if (!control_usb->phy_power) { 245 - dev_dbg(&pdev->dev, "Failed to obtain io memory\n"); 246 - return -EADDRNOTAVAIL; 247 - } 240 + if (IS_ERR(control_usb->phy_power)) 241 + return PTR_ERR(control_usb->phy_power); 248 242 249 243 control_usb->sys_clk = devm_clk_get(control_usb->dev, 250 244 "sys_clkin");
+3 -5
drivers/usb/phy/omap-usb3.c
··· 212 212 } 213 213 214 214 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pll_ctrl"); 215 - phy->pll_ctrl_base = devm_request_and_ioremap(&pdev->dev, res); 216 - if (!phy->pll_ctrl_base) { 217 - dev_err(&pdev->dev, "ioremap of pll_ctrl failed\n"); 218 - return -ENOMEM; 219 - } 215 + phy->pll_ctrl_base = devm_ioremap_resource(&pdev->dev, res); 216 + if (IS_ERR(phy->pll_ctrl_base)) 217 + return PTR_ERR(phy->pll_ctrl_base); 220 218 221 219 phy->dev = &pdev->dev; 222 220
+3 -5
drivers/usb/phy/samsung-usbphy.c
··· 787 787 return -ENODEV; 788 788 } 789 789 790 - phy_base = devm_request_and_ioremap(dev, phy_mem); 791 - if (!phy_base) { 792 - dev_err(dev, "%s: register mapping failed\n", __func__); 793 - return -ENXIO; 794 - } 790 + phy_base = devm_ioremap_resource(dev, phy_mem); 791 + if (IS_ERR(phy_base)) 792 + return PTR_ERR(phy_base); 795 793 796 794 sphy = devm_kzalloc(dev, sizeof(*sphy), GFP_KERNEL); 797 795 if (!sphy)
+20
drivers/usb/serial/cp210x.c
··· 85 85 { USB_DEVICE(0x10C4, 0x813F) }, /* Tams Master Easy Control */ 86 86 { USB_DEVICE(0x10C4, 0x814A) }, /* West Mountain Radio RIGblaster P&P */ 87 87 { USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */ 88 + { USB_DEVICE(0x2405, 0x0003) }, /* West Mountain Radio RIGblaster Advantage */ 88 89 { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */ 89 90 { USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */ 90 91 { USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */ ··· 151 150 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 152 151 { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ 153 152 { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */ 153 + { USB_DEVICE(0x1FB9, 0x0100) }, /* Lake Shore Model 121 Current Source */ 154 + { USB_DEVICE(0x1FB9, 0x0200) }, /* Lake Shore Model 218A Temperature Monitor */ 155 + { USB_DEVICE(0x1FB9, 0x0201) }, /* Lake Shore Model 219 Temperature Monitor */ 156 + { USB_DEVICE(0x1FB9, 0x0202) }, /* Lake Shore Model 233 Temperature Transmitter */ 157 + { USB_DEVICE(0x1FB9, 0x0203) }, /* Lake Shore Model 235 Temperature Transmitter */ 158 + { USB_DEVICE(0x1FB9, 0x0300) }, /* Lake Shore Model 335 Temperature Controller */ 159 + { USB_DEVICE(0x1FB9, 0x0301) }, /* Lake Shore Model 336 Temperature Controller */ 160 + { USB_DEVICE(0x1FB9, 0x0302) }, /* Lake Shore Model 350 Temperature Controller */ 161 + { USB_DEVICE(0x1FB9, 0x0303) }, /* Lake Shore Model 371 AC Bridge */ 162 + { USB_DEVICE(0x1FB9, 0x0400) }, /* Lake Shore Model 411 Handheld Gaussmeter */ 163 + { USB_DEVICE(0x1FB9, 0x0401) }, /* Lake Shore Model 425 Gaussmeter */ 164 + { USB_DEVICE(0x1FB9, 0x0402) }, /* Lake Shore Model 455A Gaussmeter */ 165 + { USB_DEVICE(0x1FB9, 0x0403) }, /* Lake Shore Model 475A Gaussmeter */ 166 + { USB_DEVICE(0x1FB9, 0x0404) }, /* Lake Shore Model 465 Three Axis Gaussmeter */ 167 + { USB_DEVICE(0x1FB9, 0x0600) }, /* Lake Shore Model 625A Superconducting MPS */ 168 + { USB_DEVICE(0x1FB9, 0x0601) }, 
/* Lake Shore Model 642A Magnet Power Supply */ 169 + { USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */ 170 + { USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */ 171 + { USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */ 154 172 { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */ 155 173 { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */ 156 174 { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
+5
drivers/usb/serial/option.c
··· 341 341 #define CINTERION_PRODUCT_EU3_E 0x0051 342 342 #define CINTERION_PRODUCT_EU3_P 0x0052 343 343 #define CINTERION_PRODUCT_PH8 0x0053 344 + #define CINTERION_PRODUCT_AH6 0x0055 345 + #define CINTERION_PRODUCT_PLS8 0x0060 344 346 345 347 /* Olivetti products */ 346 348 #define OLIVETTI_VENDOR_ID 0x0b3c ··· 581 579 { USB_DEVICE(QUANTA_VENDOR_ID, 0xea42), 582 580 .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, 583 581 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c05, USB_CLASS_COMM, 0x02, 0xff) }, 582 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c1f, USB_CLASS_COMM, 0x02, 0xff) }, 584 583 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c23, USB_CLASS_COMM, 0x02, 0xff) }, 585 584 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173, 0xff, 0xff, 0xff), 586 585 .driver_info = (kernel_ulong_t) &net_intf1_blacklist }, ··· 1263 1260 { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_E) }, 1264 1261 { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_P) }, 1265 1262 { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8) }, 1263 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AH6) }, 1264 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PLS8) }, 1266 1265 { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, 1267 1266 { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, 1268 1267 { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDM) },
+1
drivers/usb/serial/qcaux.c
··· 69 69 { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xfd, 0xff) }, /* NMEA */ 70 70 { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xfe, 0xff) }, /* WMC */ 71 71 { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xff, 0xff) }, /* DIAG */ 72 + { USB_DEVICE_AND_INTERFACE_INFO(0x1fac, 0x0151, 0xff, 0xff, 0xff) }, 72 73 { }, 73 74 }; 74 75 MODULE_DEVICE_TABLE(usb, id_table);
+5 -2
drivers/usb/serial/qcserial.c
··· 197 197 198 198 if (is_gobi1k) { 199 199 /* Gobi 1K USB layout: 200 - * 0: serial port (doesn't respond) 200 + * 0: DM/DIAG (use libqcdm from ModemManager for communication) 201 201 * 1: serial port (doesn't respond) 202 202 * 2: AT-capable modem port 203 203 * 3: QMI/net 204 204 */ 205 - if (ifnum == 2) 205 + if (ifnum == 0) { 206 + dev_dbg(dev, "Gobi 1K DM/DIAG interface found\n"); 207 + altsetting = 1; 208 + } else if (ifnum == 2) 206 209 dev_dbg(dev, "Modem port found\n"); 207 210 else 208 211 altsetting = -1;
+5 -2
drivers/usb/serial/quatech2.c
··· 657 657 __func__); 658 658 break; 659 659 } 660 - tty_flip_buffer_push(&port->port); 660 + 661 + if (port_priv->is_open) 662 + tty_flip_buffer_push(&port->port); 661 663 662 664 newport = *(ch + 3); 663 665 ··· 702 700 tty_insert_flip_string(&port->port, ch, 1); 703 701 } 704 702 705 - tty_flip_buffer_push(&port->port); 703 + if (port_priv->is_open) 704 + tty_flip_buffer_push(&port->port); 706 705 } 707 706 708 707 static void qt2_write_bulk_callback(struct urb *urb)
+2 -74
drivers/usb/storage/initializers.c
··· 92 92 return 0; 93 93 } 94 94 95 - /* This places the HUAWEI usb dongles in multi-port mode */ 96 - static int usb_stor_huawei_feature_init(struct us_data *us) 95 + /* This places the HUAWEI E220 devices in multi-port mode */ 96 + int usb_stor_huawei_e220_init(struct us_data *us) 97 97 { 98 98 int result; 99 99 ··· 103 103 0x01, 0x0, NULL, 0x0, 1000); 104 104 US_DEBUGP("Huawei mode set result is %d\n", result); 105 105 return 0; 106 - } 107 - 108 - /* 109 - * It will send a scsi switch command called rewind' to huawei dongle. 110 - * When the dongle receives this command at the first time, 111 - * it will reboot immediately. After rebooted, it will ignore this command. 112 - * So it is unnecessary to read its response. 113 - */ 114 - static int usb_stor_huawei_scsi_init(struct us_data *us) 115 - { 116 - int result = 0; 117 - int act_len = 0; 118 - struct bulk_cb_wrap *bcbw = (struct bulk_cb_wrap *) us->iobuf; 119 - char rewind_cmd[] = {0x11, 0x06, 0x20, 0x00, 0x00, 0x01, 0x01, 0x00, 120 - 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; 121 - 122 - bcbw->Signature = cpu_to_le32(US_BULK_CB_SIGN); 123 - bcbw->Tag = 0; 124 - bcbw->DataTransferLength = 0; 125 - bcbw->Flags = bcbw->Lun = 0; 126 - bcbw->Length = sizeof(rewind_cmd); 127 - memset(bcbw->CDB, 0, sizeof(bcbw->CDB)); 128 - memcpy(bcbw->CDB, rewind_cmd, sizeof(rewind_cmd)); 129 - 130 - result = usb_stor_bulk_transfer_buf(us, us->send_bulk_pipe, bcbw, 131 - US_BULK_CB_WRAP_LEN, &act_len); 132 - US_DEBUGP("transfer actual length=%d, result=%d\n", act_len, result); 133 - return result; 134 - } 135 - 136 - /* 137 - * It tries to find the supported Huawei USB dongles. 138 - * In Huawei, they assign the following product IDs 139 - * for all of their mobile broadband dongles, 140 - * including the new dongles in the future. 141 - * So if the product ID is not included in this list, 142 - * it means it is not Huawei's mobile broadband dongles. 
143 - */ 144 - static int usb_stor_huawei_dongles_pid(struct us_data *us) 145 - { 146 - struct usb_interface_descriptor *idesc; 147 - int idProduct; 148 - 149 - idesc = &us->pusb_intf->cur_altsetting->desc; 150 - idProduct = le16_to_cpu(us->pusb_dev->descriptor.idProduct); 151 - /* The first port is CDROM, 152 - * means the dongle in the single port mode, 153 - * and a switch command is required to be sent. */ 154 - if (idesc && idesc->bInterfaceNumber == 0) { 155 - if ((idProduct == 0x1001) 156 - || (idProduct == 0x1003) 157 - || (idProduct == 0x1004) 158 - || (idProduct >= 0x1401 && idProduct <= 0x1500) 159 - || (idProduct >= 0x1505 && idProduct <= 0x1600) 160 - || (idProduct >= 0x1c02 && idProduct <= 0x2202)) { 161 - return 1; 162 - } 163 - } 164 - return 0; 165 - } 166 - 167 - int usb_stor_huawei_init(struct us_data *us) 168 - { 169 - int result = 0; 170 - 171 - if (usb_stor_huawei_dongles_pid(us)) { 172 - if (le16_to_cpu(us->pusb_dev->descriptor.idProduct) >= 0x1446) 173 - result = usb_stor_huawei_scsi_init(us); 174 - else 175 - result = usb_stor_huawei_feature_init(us); 176 - } 177 - return result; 178 106 }
+2 -2
drivers/usb/storage/initializers.h
··· 46 46 * flash reader */ 47 47 int usb_stor_ucr61s2b_init(struct us_data *us); 48 48 49 - /* This places the HUAWEI usb dongles in multi-port mode */ 50 - int usb_stor_huawei_init(struct us_data *us); 49 + /* This places the HUAWEI E220 devices in multi-port mode */ 50 + int usb_stor_huawei_e220_init(struct us_data *us);
+335 -2
drivers/usb/storage/unusual_devs.h
··· 53 53 * as opposed to devices that do something strangely or wrongly. 54 54 */ 55 55 56 + /* In-kernel mode switching is deprecated. Do not add new devices to 57 + * this list for the sole purpose of switching them to a different 58 + * mode. Existing userspace solutions are superior. 59 + * 60 + * New mode switching devices should instead be added to the database 61 + * maintained at http://www.draisberghof.de/usb_modeswitch/ 62 + */ 63 + 56 64 #if !defined(CONFIG_USB_STORAGE_SDDR09) && \ 57 65 !defined(CONFIG_USB_STORAGE_SDDR09_MODULE) 58 66 #define NO_SDDR09 ··· 1535 1527 /* Reported by fangxiaozhi <huananhu@huawei.com> 1536 1528 * This brings the HUAWEI data card devices into multi-port mode 1537 1529 */ 1538 - UNUSUAL_VENDOR_INTF(0x12d1, 0x08, 0x06, 0x50, 1530 + UNUSUAL_DEV( 0x12d1, 0x1001, 0x0000, 0x0000, 1539 1531 "HUAWEI MOBILE", 1540 1532 "Mass Storage", 1541 - USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_init, 1533 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1534 + 0), 1535 + UNUSUAL_DEV( 0x12d1, 0x1003, 0x0000, 0x0000, 1536 + "HUAWEI MOBILE", 1537 + "Mass Storage", 1538 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1539 + 0), 1540 + UNUSUAL_DEV( 0x12d1, 0x1004, 0x0000, 0x0000, 1541 + "HUAWEI MOBILE", 1542 + "Mass Storage", 1543 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1544 + 0), 1545 + UNUSUAL_DEV( 0x12d1, 0x1401, 0x0000, 0x0000, 1546 + "HUAWEI MOBILE", 1547 + "Mass Storage", 1548 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1549 + 0), 1550 + UNUSUAL_DEV( 0x12d1, 0x1402, 0x0000, 0x0000, 1551 + "HUAWEI MOBILE", 1552 + "Mass Storage", 1553 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1554 + 0), 1555 + UNUSUAL_DEV( 0x12d1, 0x1403, 0x0000, 0x0000, 1556 + "HUAWEI MOBILE", 1557 + "Mass Storage", 1558 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1559 + 0), 1560 + UNUSUAL_DEV( 0x12d1, 0x1404, 0x0000, 0x0000, 1561 + "HUAWEI MOBILE", 1562 + "Mass Storage", 1563 + 
USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1564 + 0), 1565 + UNUSUAL_DEV( 0x12d1, 0x1405, 0x0000, 0x0000, 1566 + "HUAWEI MOBILE", 1567 + "Mass Storage", 1568 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1569 + 0), 1570 + UNUSUAL_DEV( 0x12d1, 0x1406, 0x0000, 0x0000, 1571 + "HUAWEI MOBILE", 1572 + "Mass Storage", 1573 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1574 + 0), 1575 + UNUSUAL_DEV( 0x12d1, 0x1407, 0x0000, 0x0000, 1576 + "HUAWEI MOBILE", 1577 + "Mass Storage", 1578 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1579 + 0), 1580 + UNUSUAL_DEV( 0x12d1, 0x1408, 0x0000, 0x0000, 1581 + "HUAWEI MOBILE", 1582 + "Mass Storage", 1583 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1584 + 0), 1585 + UNUSUAL_DEV( 0x12d1, 0x1409, 0x0000, 0x0000, 1586 + "HUAWEI MOBILE", 1587 + "Mass Storage", 1588 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1589 + 0), 1590 + UNUSUAL_DEV( 0x12d1, 0x140A, 0x0000, 0x0000, 1591 + "HUAWEI MOBILE", 1592 + "Mass Storage", 1593 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1594 + 0), 1595 + UNUSUAL_DEV( 0x12d1, 0x140B, 0x0000, 0x0000, 1596 + "HUAWEI MOBILE", 1597 + "Mass Storage", 1598 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1599 + 0), 1600 + UNUSUAL_DEV( 0x12d1, 0x140C, 0x0000, 0x0000, 1601 + "HUAWEI MOBILE", 1602 + "Mass Storage", 1603 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1604 + 0), 1605 + UNUSUAL_DEV( 0x12d1, 0x140D, 0x0000, 0x0000, 1606 + "HUAWEI MOBILE", 1607 + "Mass Storage", 1608 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1609 + 0), 1610 + UNUSUAL_DEV( 0x12d1, 0x140E, 0x0000, 0x0000, 1611 + "HUAWEI MOBILE", 1612 + "Mass Storage", 1613 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1614 + 0), 1615 + UNUSUAL_DEV( 0x12d1, 0x140F, 0x0000, 0x0000, 1616 + "HUAWEI MOBILE", 1617 + "Mass Storage", 1618 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1619 + 0), 1620 + 
UNUSUAL_DEV( 0x12d1, 0x1410, 0x0000, 0x0000, 1621 + "HUAWEI MOBILE", 1622 + "Mass Storage", 1623 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1624 + 0), 1625 + UNUSUAL_DEV( 0x12d1, 0x1411, 0x0000, 0x0000, 1626 + "HUAWEI MOBILE", 1627 + "Mass Storage", 1628 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1629 + 0), 1630 + UNUSUAL_DEV( 0x12d1, 0x1412, 0x0000, 0x0000, 1631 + "HUAWEI MOBILE", 1632 + "Mass Storage", 1633 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1634 + 0), 1635 + UNUSUAL_DEV( 0x12d1, 0x1413, 0x0000, 0x0000, 1636 + "HUAWEI MOBILE", 1637 + "Mass Storage", 1638 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1639 + 0), 1640 + UNUSUAL_DEV( 0x12d1, 0x1414, 0x0000, 0x0000, 1641 + "HUAWEI MOBILE", 1642 + "Mass Storage", 1643 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1644 + 0), 1645 + UNUSUAL_DEV( 0x12d1, 0x1415, 0x0000, 0x0000, 1646 + "HUAWEI MOBILE", 1647 + "Mass Storage", 1648 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1649 + 0), 1650 + UNUSUAL_DEV( 0x12d1, 0x1416, 0x0000, 0x0000, 1651 + "HUAWEI MOBILE", 1652 + "Mass Storage", 1653 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1654 + 0), 1655 + UNUSUAL_DEV( 0x12d1, 0x1417, 0x0000, 0x0000, 1656 + "HUAWEI MOBILE", 1657 + "Mass Storage", 1658 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1659 + 0), 1660 + UNUSUAL_DEV( 0x12d1, 0x1418, 0x0000, 0x0000, 1661 + "HUAWEI MOBILE", 1662 + "Mass Storage", 1663 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1664 + 0), 1665 + UNUSUAL_DEV( 0x12d1, 0x1419, 0x0000, 0x0000, 1666 + "HUAWEI MOBILE", 1667 + "Mass Storage", 1668 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1669 + 0), 1670 + UNUSUAL_DEV( 0x12d1, 0x141A, 0x0000, 0x0000, 1671 + "HUAWEI MOBILE", 1672 + "Mass Storage", 1673 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init, 1674 + 0), 1675 + UNUSUAL_DEV( 0x12d1, 0x141B, 0x0000, 0x0000, 1676 + "HUAWEI MOBILE", 1677 + "Mass 
Storage",
1678 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1679 + 0),
1680 + UNUSUAL_DEV( 0x12d1, 0x141C, 0x0000, 0x0000,
1681 + "HUAWEI MOBILE",
1682 + "Mass Storage",
1683 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1684 + 0),
1685 + UNUSUAL_DEV( 0x12d1, 0x141D, 0x0000, 0x0000,
1686 + "HUAWEI MOBILE",
1687 + "Mass Storage",
1688 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1689 + 0),
1690 + UNUSUAL_DEV( 0x12d1, 0x141E, 0x0000, 0x0000,
1691 + "HUAWEI MOBILE",
1692 + "Mass Storage",
1693 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1694 + 0),
1695 + UNUSUAL_DEV( 0x12d1, 0x141F, 0x0000, 0x0000,
1696 + "HUAWEI MOBILE",
1697 + "Mass Storage",
1698 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1699 + 0),
1700 + UNUSUAL_DEV( 0x12d1, 0x1420, 0x0000, 0x0000,
1701 + "HUAWEI MOBILE",
1702 + "Mass Storage",
1703 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1704 + 0),
1705 + UNUSUAL_DEV( 0x12d1, 0x1421, 0x0000, 0x0000,
1706 + "HUAWEI MOBILE",
1707 + "Mass Storage",
1708 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1709 + 0),
1710 + UNUSUAL_DEV( 0x12d1, 0x1422, 0x0000, 0x0000,
1711 + "HUAWEI MOBILE",
1712 + "Mass Storage",
1713 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1714 + 0),
1715 + UNUSUAL_DEV( 0x12d1, 0x1423, 0x0000, 0x0000,
1716 + "HUAWEI MOBILE",
1717 + "Mass Storage",
1718 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1719 + 0),
1720 + UNUSUAL_DEV( 0x12d1, 0x1424, 0x0000, 0x0000,
1721 + "HUAWEI MOBILE",
1722 + "Mass Storage",
1723 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1724 + 0),
1725 + UNUSUAL_DEV( 0x12d1, 0x1425, 0x0000, 0x0000,
1726 + "HUAWEI MOBILE",
1727 + "Mass Storage",
1728 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1729 + 0),
1730 + UNUSUAL_DEV( 0x12d1, 0x1426, 0x0000, 0x0000,
1731 + "HUAWEI MOBILE",
1732 + "Mass Storage",
1733 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1734 + 0),
1735 + UNUSUAL_DEV( 0x12d1, 0x1427, 0x0000, 0x0000,
1736 + "HUAWEI MOBILE",
1737 + "Mass Storage",
1738 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1739 + 0),
1740 + UNUSUAL_DEV( 0x12d1, 0x1428, 0x0000, 0x0000,
1741 + "HUAWEI MOBILE",
1742 + "Mass Storage",
1743 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1744 + 0),
1745 + UNUSUAL_DEV( 0x12d1, 0x1429, 0x0000, 0x0000,
1746 + "HUAWEI MOBILE",
1747 + "Mass Storage",
1748 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1749 + 0),
1750 + UNUSUAL_DEV( 0x12d1, 0x142A, 0x0000, 0x0000,
1751 + "HUAWEI MOBILE",
1752 + "Mass Storage",
1753 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1754 + 0),
1755 + UNUSUAL_DEV( 0x12d1, 0x142B, 0x0000, 0x0000,
1756 + "HUAWEI MOBILE",
1757 + "Mass Storage",
1758 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1759 + 0),
1760 + UNUSUAL_DEV( 0x12d1, 0x142C, 0x0000, 0x0000,
1761 + "HUAWEI MOBILE",
1762 + "Mass Storage",
1763 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1764 + 0),
1765 + UNUSUAL_DEV( 0x12d1, 0x142D, 0x0000, 0x0000,
1766 + "HUAWEI MOBILE",
1767 + "Mass Storage",
1768 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1769 + 0),
1770 + UNUSUAL_DEV( 0x12d1, 0x142E, 0x0000, 0x0000,
1771 + "HUAWEI MOBILE",
1772 + "Mass Storage",
1773 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1774 + 0),
1775 + UNUSUAL_DEV( 0x12d1, 0x142F, 0x0000, 0x0000,
1776 + "HUAWEI MOBILE",
1777 + "Mass Storage",
1778 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1779 + 0),
1780 + UNUSUAL_DEV( 0x12d1, 0x1430, 0x0000, 0x0000,
1781 + "HUAWEI MOBILE",
1782 + "Mass Storage",
1783 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1784 + 0),
1785 + UNUSUAL_DEV( 0x12d1, 0x1431, 0x0000, 0x0000,
1786 + "HUAWEI MOBILE",
1787 + "Mass Storage",
1788 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1789 + 0),
1790 + UNUSUAL_DEV( 0x12d1, 0x1432, 0x0000, 0x0000,
1791 + "HUAWEI MOBILE",
1792 + "Mass Storage",
1793 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1794 + 0),
1795 + UNUSUAL_DEV( 0x12d1, 0x1433, 0x0000, 0x0000,
1796 + "HUAWEI MOBILE",
1797 + "Mass Storage",
1798 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1799 + 0),
1800 + UNUSUAL_DEV( 0x12d1, 0x1434, 0x0000, 0x0000,
1801 + "HUAWEI MOBILE",
1802 + "Mass Storage",
1803 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1804 + 0),
1805 + UNUSUAL_DEV( 0x12d1, 0x1435, 0x0000, 0x0000,
1806 + "HUAWEI MOBILE",
1807 + "Mass Storage",
1808 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1809 + 0),
1810 + UNUSUAL_DEV( 0x12d1, 0x1436, 0x0000, 0x0000,
1811 + "HUAWEI MOBILE",
1812 + "Mass Storage",
1813 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1814 + 0),
1815 + UNUSUAL_DEV( 0x12d1, 0x1437, 0x0000, 0x0000,
1816 + "HUAWEI MOBILE",
1817 + "Mass Storage",
1818 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1819 + 0),
1820 + UNUSUAL_DEV( 0x12d1, 0x1438, 0x0000, 0x0000,
1821 + "HUAWEI MOBILE",
1822 + "Mass Storage",
1823 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1824 + 0),
1825 + UNUSUAL_DEV( 0x12d1, 0x1439, 0x0000, 0x0000,
1826 + "HUAWEI MOBILE",
1827 + "Mass Storage",
1828 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1829 + 0),
1830 + UNUSUAL_DEV( 0x12d1, 0x143A, 0x0000, 0x0000,
1831 + "HUAWEI MOBILE",
1832 + "Mass Storage",
1833 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1834 + 0),
1835 + UNUSUAL_DEV( 0x12d1, 0x143B, 0x0000, 0x0000,
1836 + "HUAWEI MOBILE",
1837 + "Mass Storage",
1838 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1839 + 0),
1840 + UNUSUAL_DEV( 0x12d1, 0x143C, 0x0000, 0x0000,
1841 + "HUAWEI MOBILE",
1842 + "Mass Storage",
1843 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1844 + 0),
1845 + UNUSUAL_DEV( 0x12d1, 0x143D, 0x0000, 0x0000,
1846 + "HUAWEI MOBILE",
1847 + "Mass Storage",
1848 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1849 + 0),
1850 + UNUSUAL_DEV( 0x12d1, 0x143E, 0x0000, 0x0000,
1851 + "HUAWEI MOBILE",
1852 + "Mass Storage",
1853 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1854 + 0),
1855 + UNUSUAL_DEV( 0x12d1, 0x143F, 0x0000, 0x0000,
1856 + "HUAWEI MOBILE",
1857 + "Mass Storage",
1858 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
1542 1859 0),
1543 1860
1544 1861 /* Reported by Vilius Bilinkevicius <vilisas AT xxx DOT lt) */
+1
drivers/video/omap/lcd_ams_delta.c
··· 27 27 #include <linux/lcd.h>
28 28 #include <linux/gpio.h>
29 29
30 + #include <mach/hardware.h>
30 31 #include <mach/board-ams-delta.h>
31 32
32 33 #include "omapfb.h"
+3
drivers/video/omap/lcd_osk.c
··· 24 24 #include <linux/platform_device.h>
25 25
26 26 #include <asm/gpio.h>
27 +
28 + #include <mach/hardware.h>
27 29 #include <mach/mux.h>
30 +
28 31 #include "omapfb.h"
29 32
30 33 static int osk_panel_init(struct lcd_panel *panel, struct omapfb_device *fbdev)
+4 -2
drivers/w1/masters/w1-gpio.c
··· 47 47 return gpio_get_value(pdata->pin) ? 1 : 0;
48 48 }
49 49
50 + #if defined(CONFIG_OF)
50 51 static struct of_device_id w1_gpio_dt_ids[] = {
51 52 { .compatible = "w1-gpio" },
52 53 {}
53 54 };
54 55 MODULE_DEVICE_TABLE(of, w1_gpio_dt_ids);
56 + #endif
55 57
56 58 static int w1_gpio_probe_dt(struct platform_device *pdev)
57 59 {
··· 160 158 return err;
161 159 }
162 160
163 - static int __exit w1_gpio_remove(struct platform_device *pdev)
161 + static int w1_gpio_remove(struct platform_device *pdev)
164 162 {
165 163 struct w1_bus_master *master = platform_get_drvdata(pdev);
166 164 struct w1_gpio_platform_data *pdata = pdev->dev.platform_data;
··· 212 210 .of_match_table = of_match_ptr(w1_gpio_dt_ids),
213 211 },
214 212 .probe = w1_gpio_probe,
215 - .remove = __exit_p(w1_gpio_remove),
213 + .remove = w1_gpio_remove,
216 214 .suspend = w1_gpio_suspend,
217 215 .resume = w1_gpio_resume,
218 216 };
+2 -1
drivers/w1/w1.c
··· 924 924 tmp64 = (triplet_ret >> 2);
925 925 rn |= (tmp64 << i);
926 926
927 - if (kthread_should_stop()) {
927 + /* ensure we're called from kthread and not by netlink callback */
928 + if (!dev->priv && kthread_should_stop()) {
928 929 mutex_unlock(&dev->bus_mutex);
929 930 dev_dbg(&dev->dev, "Abort w1_search\n");
930 931 return;
+4 -4
drivers/xen/xen-acpi-processor.c
··· 500 500 (void)acpi_processor_preregister_performance(acpi_perf_data);
501 501
502 502 for_each_possible_cpu(i) {
503 + struct acpi_processor *pr;
503 504 struct acpi_processor_performance *perf;
504 505
506 + pr = per_cpu(processors, i);
505 507 perf = per_cpu_ptr(acpi_perf_data, i);
506 - rc = acpi_processor_register_performance(perf, i);
508 + pr->performance = perf;
509 + rc = acpi_processor_get_performance_info(pr);
507 510 if (rc)
508 511 goto err_out;
509 512 }
510 - rc = acpi_processor_notify_smm(THIS_MODULE);
511 - if (rc)
512 - goto err_unregister;
513 513
514 514 for_each_possible_cpu(i) {
515 515 struct acpi_processor *_pr;
+2 -1
drivers/xen/xen-pciback/pciback_ops.c
··· 113 113 if (dev->msi_enabled)
114 114 pci_disable_msi(dev);
115 115 #endif
116 - pci_disable_device(dev);
116 + if (pci_is_enabled(dev))
117 + pci_disable_device(dev);
117 118
118 119 pci_write_config_word(dev, PCI_COMMAND, 0);
119 120
-1
drivers/xen/xen-stub.c
··· 25 25 #include <linux/export.h>
26 26 #include <linux/types.h>
27 27 #include <linux/acpi.h>
28 - #include <acpi/acpi_drivers.h>
29 28 #include <xen/acpi.h>
30 29
31 30 #ifdef CONFIG_ACPI
+4 -1
fs/btrfs/extent-tree.c
··· 1467 1467 if (ret && !insert) {
1468 1468 err = -ENOENT;
1469 1469 goto out;
1470 + } else if (ret) {
1471 + err = -EIO;
1472 + WARN_ON(1);
1473 + goto out;
1470 1474 }
1471 - BUG_ON(ret); /* Corruption */
1472 1475
1473 1476 leaf = path->nodes[0];
1474 1477 item_size = btrfs_item_size_nr(leaf, path->slots[0]);
+1
fs/btrfs/file.c
··· 591 591 }
592 592 compressed = test_bit(EXTENT_FLAG_COMPRESSED, &em->flags);
593 593 clear_bit(EXTENT_FLAG_PINNED, &em->flags);
594 + clear_bit(EXTENT_FLAG_LOGGING, &flags);
594 595 remove_extent_mapping(em_tree, em);
595 596 if (no_splits)
596 597 goto next;
+3
fs/btrfs/inode.c
··· 2312 2312 key.type = BTRFS_EXTENT_DATA_KEY;
2313 2313 key.offset = start;
2314 2314
2315 + path->leave_spinning = 1;
2315 2316 if (merge) {
2316 2317 struct btrfs_file_extent_item *fi;
2317 2318 u64 extent_len;
··· 2369 2368
2370 2369 btrfs_mark_buffer_dirty(leaf);
2371 2370 inode_add_bytes(inode, len);
2371 + btrfs_release_path(path);
2372 2372
2373 2373 ret = btrfs_inc_extent_ref(trans, root, new->bytenr,
2374 2374 new->disk_len, 0,
··· 2383 2381 ret = 1;
2384 2382 out_free_path:
2385 2383 btrfs_release_path(path);
2384 + path->leave_spinning = 0;
2386 2385 btrfs_end_transaction(trans, root);
2387 2386 out_unlock:
2388 2387 unlock_extent_cached(&BTRFS_I(inode)->io_tree, lock_start, lock_end,
-1
fs/btrfs/locking.h
··· 26 26
27 27 void btrfs_tree_lock(struct extent_buffer *eb);
28 28 void btrfs_tree_unlock(struct extent_buffer *eb);
29 - int btrfs_try_spin_lock(struct extent_buffer *eb);
30 29
31 30 void btrfs_tree_read_lock(struct extent_buffer *eb);
32 31 void btrfs_tree_read_unlock(struct extent_buffer *eb);
+6 -4
fs/btrfs/qgroup.c
··· 1525 1525
1526 1526 if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_RFER) &&
1527 1527 qg->reserved + qg->rfer + num_bytes >
1528 - qg->max_rfer)
1528 + qg->max_rfer) {
1529 1529 ret = -EDQUOT;
1530 + goto out;
1531 + }
1530 1532
1531 1533 if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) &&
1532 1534 qg->reserved + qg->excl + num_bytes >
1533 - qg->max_excl)
1535 + qg->max_excl) {
1534 1536 ret = -EDQUOT;
1537 + goto out;
1538 + }
1535 1539
1536 1540 list_for_each_entry(glist, &qg->groups, next_group) {
1537 1541 ulist_add(ulist, glist->group->qgroupid,
1538 1542 (uintptr_t)glist->group, GFP_ATOMIC);
1539 1543 }
1540 1544 }
1541 - if (ret)
1542 - goto out;
1543 1545
1544 1546 /*
1545 1547 * no limits exceeded, now record the reservation into all qgroups
+5 -6
fs/btrfs/transaction.c
··· 625 625
626 626 btrfs_trans_release_metadata(trans, root);
627 627 trans->block_rsv = NULL;
628 - /*
629 - * the same root has to be passed to start_transaction and
630 - * end_transaction. Subvolume quota depends on this.
631 - */
632 - WARN_ON(trans->root != root);
633 628
634 629 if (trans->qgroup_reserved) {
635 - btrfs_qgroup_free(root, trans->qgroup_reserved);
630 + /*
631 + * the same root has to be passed here between start_transaction
632 + * and end_transaction. Subvolume quota depends on this.
633 + */
634 + btrfs_qgroup_free(trans->root, trans->qgroup_reserved);
636 635 trans->qgroup_reserved = 0;
637 636 }
638 637
+6
fs/btrfs/volumes.c
··· 684 684 __btrfs_close_devices(fs_devices);
685 685 free_fs_devices(fs_devices);
686 686 }
687 + /*
688 + * Wait for rcu kworkers under __btrfs_close_devices
689 + * to finish all blkdev_puts so device is really
690 + * free when umount is done.
691 + */
692 + rcu_barrier();
687 693 return ret;
688 694 }
689 695
+1
fs/cifs/cifsfs.c
··· 777 777 .kill_sb = cifs_kill_sb,
778 778 /* .fs_flags */
779 779 };
780 + MODULE_ALIAS_FS("cifs");
780 781 const struct inode_operations cifs_dir_inode_ops = {
781 782 .create = cifs_create,
782 783 .atomic_open = cifs_atomic_open,
+7 -8
fs/compat.c
··· 558 558 }
559 559 *ret_pointer = iov;
560 560
561 + ret = -EFAULT;
562 + if (!access_ok(VERIFY_READ, uvector, nr_segs*sizeof(*uvector)))
563 + goto out;
564 +
561 565 /*
562 566 * Single unix specification:
563 567 * We should -EINVAL if an element length is not >= 0 and fitting an
··· 1084 1080 if (!file->f_op)
1085 1081 goto out;
1086 1082
1087 - ret = -EFAULT;
1088 - if (!access_ok(VERIFY_READ, uvector, nr_segs*sizeof(*uvector)))
1089 - goto out;
1090 -
1091 - tot_len = compat_rw_copy_check_uvector(type, uvector, nr_segs,
1083 + ret = compat_rw_copy_check_uvector(type, uvector, nr_segs,
1092 1084 UIO_FASTIOV, iovstack, &iov);
1093 - if (tot_len == 0) {
1094 - ret = 0;
1085 + if (ret <= 0)
1095 1086 goto out;
1096 - }
1097 1087
1088 + tot_len = ret;
1098 1089 ret = rw_verify_area(type, file, pos, tot_len);
1099 1090 if (ret < 0)
1100 1091 goto out;
-1
fs/ext2/ialloc.c
··· 118 118 * as writing the quota to disk may need the lock as well.
119 119 */
120 120 /* Quota is already initialized in iput() */
121 - ext2_xattr_delete_inode(inode);
122 121 dquot_free_inode(inode);
123 122 dquot_drop(inode);
124 123
+2
fs/ext2/inode.c
··· 34 34 #include "ext2.h"
35 35 #include "acl.h"
36 36 #include "xip.h"
37 + #include "xattr.h"
37 38
38 39 static int __ext2_write_inode(struct inode *inode, int do_sync);
39 40
··· 89 88 inode->i_size = 0;
90 89 if (inode->i_blocks)
91 90 ext2_truncate_blocks(inode, 0);
91 + ext2_xattr_delete_inode(inode);
92 92 }
93 93
94 94 invalidate_inode_buffers(inode);
+2 -2
fs/ext3/super.c
··· 353 353 return bdev;
354 354
355 355 fail:
356 - ext3_msg(sb, "error: failed to open journal device %s: %ld",
356 + ext3_msg(sb, KERN_ERR, "error: failed to open journal device %s: %ld",
357 357 __bdevname(dev, b), PTR_ERR(bdev));
358 358
359 359 return NULL;
··· 887 887 /*todo: use simple_strtoll with >32bit ext3 */
888 888 sb_block = simple_strtoul(options, &options, 0);
889 889 if (*options && *options != ',') {
890 - ext3_msg(sb, "error: invalid sb specification: %s",
890 + ext3_msg(sb, KERN_ERR, "error: invalid sb specification: %s",
891 891 (char *) *data);
892 892 return 1;
893 893 }
+2
fs/ext4/super.c
··· 91 91 .fs_flags = FS_REQUIRES_DEV,
92 92 };
93 93 MODULE_ALIAS_FS("ext2");
94 + MODULE_ALIAS("ext2");
94 95 #define IS_EXT2_SB(sb) ((sb)->s_bdev->bd_holder == &ext2_fs_type)
95 96 #else
96 97 #define IS_EXT2_SB(sb) (0)
··· 107 106 .fs_flags = FS_REQUIRES_DEV,
108 107 };
109 108 MODULE_ALIAS_FS("ext3");
109 + MODULE_ALIAS("ext3");
110 110 #define IS_EXT3_SB(sb) ((sb)->s_bdev->bd_holder == &ext3_fs_type)
111 111 #else
112 112 #define IS_EXT3_SB(sb) (0)
+1
fs/freevxfs/vxfs_super.c
··· 258 258 .fs_flags = FS_REQUIRES_DEV,
259 259 };
260 260 MODULE_ALIAS_FS("vxfs"); /* makes mount -t vxfs autoload the module */
261 + MODULE_ALIAS("vxfs");
261 262
262 263 static int __init
263 264 vxfs_init(void)
+2 -8
fs/hostfs/hostfs_kern.c
··· 845 845 return err;
846 846
847 847 if ((attr->ia_valid & ATTR_SIZE) &&
848 - attr->ia_size != i_size_read(inode)) {
849 - int error;
850 -
851 - error = inode_newsize_ok(inode, attr->ia_size);
852 - if (error)
853 - return error;
854 -
848 + attr->ia_size != i_size_read(inode))
855 849 truncate_setsize(inode, attr->ia_size);
856 - }
857 850
858 851 setattr_copy(inode, attr);
859 852 mark_inode_dirty(inode);
··· 986 993 .kill_sb = hostfs_kill_sb,
987 994 .fs_flags = 0,
988 995 };
996 + MODULE_ALIAS_FS("hostfs");
989 997
990 998 static int __init init_hostfs(void)
991 999 {
+1
fs/hpfs/super.c
··· 688 688 .kill_sb = kill_block_super,
689 689 .fs_flags = FS_REQUIRES_DEV,
690 690 };
691 + MODULE_ALIAS_FS("hpfs");
691 692
692 693 static int __init init_hpfs_fs(void)
693 694 {
+1
fs/isofs/inode.c
··· 1557 1557 .fs_flags = FS_REQUIRES_DEV,
1558 1558 };
1559 1559 MODULE_ALIAS_FS("iso9660");
1560 + MODULE_ALIAS("iso9660");
1560 1561
1561 1562 static int __init init_iso9660_fs(void)
1562 1563 {
+1
fs/nfs/super.c
··· 335 335 .fs_flags = FS_RENAME_DOES_D_MOVE|FS_BINARY_MOUNTDATA,
336 336 };
337 337 MODULE_ALIAS_FS("nfs4");
338 + MODULE_ALIAS("nfs4");
338 339 EXPORT_SYMBOL_GPL(nfs4_fs_type);
339 340
340 341 static int __init register_nfs4_fs(void)
+2 -34
fs/nfsd/nfs4state.c
··· 230 230 __nfs4_file_put_access(fp, oflag);
231 231 }
232 232
233 - static inline int get_new_stid(struct nfs4_stid *stid)
234 - {
235 - static int min_stateid = 0;
236 - struct idr *stateids = &stid->sc_client->cl_stateids;
237 - int new_stid;
238 - int error;
239 -
240 - error = idr_get_new_above(stateids, stid, min_stateid, &new_stid);
241 - /*
242 - * Note: the necessary preallocation was done in
243 - * nfs4_alloc_stateid(). The idr code caps the number of
244 - * preallocations that can exist at a time, but the state lock
245 - * prevents anyone from using ours before we get here:
246 - */
247 - WARN_ON_ONCE(error);
248 - /*
249 - * It shouldn't be a problem to reuse an opaque stateid value.
250 - * I don't think it is for 4.1. But with 4.0 I worry that, for
251 - * example, a stray write retransmission could be accepted by
252 - * the server when it should have been rejected. Therefore,
253 - * adopt a trick from the sctp code to attempt to maximize the
254 - * amount of time until an id is reused, by ensuring they always
255 - * "increase" (mod INT_MAX):
256 - */
257 -
258 - min_stateid = new_stid+1;
259 - if (min_stateid == INT_MAX)
260 - min_stateid = 0;
261 - return new_stid;
262 - }
263 -
264 233 static struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl, struct
265 234 kmem_cache *slab)
266 235 {
··· 242 273 if (!stid)
243 274 return NULL;
244 275
245 - if (!idr_pre_get(stateids, GFP_KERNEL))
246 - goto out_free;
247 - if (idr_get_new_above(stateids, stid, min_stateid, &new_id))
276 + new_id = idr_alloc(stateids, stid, min_stateid, 0, GFP_KERNEL);
277 + if (new_id < 0)
248 278 goto out_free;
249 279 stid->sc_client = cl;
250 280 stid->sc_type = 0;
+3
fs/pipe.c
··· 863 863 {
864 864 int ret = -ENOENT;
865 865
866 + if (!(filp->f_mode & (FMODE_READ|FMODE_WRITE)))
867 + return -EINVAL;
868 +
866 869 mutex_lock(&inode->i_mutex);
867 870
868 871 if (inode->i_pipe) {
+4 -1
fs/quota/dquot.c
··· 1439 1439 * did a write before quota was turned on
1440 1440 */
1441 1441 rsv = inode_get_rsv_space(inode);
1442 - if (unlikely(rsv))
1442 + if (unlikely(rsv)) {
1443 + spin_lock(&dq_data_lock);
1443 1444 dquot_resv_space(inode->i_dquot[cnt], rsv);
1445 + spin_unlock(&dq_data_lock);
1446 + }
1444 1447 }
1445 1448 }
1446 1449 out_err:
+1 -3
fs/reiserfs/super.c
··· 1147 1147 "on filesystem root.");
1148 1148 return 0;
1149 1149 }
1150 - qf_names[qtype] =
1151 - kmalloc(strlen(arg) + 1, GFP_KERNEL);
1150 + qf_names[qtype] = kstrdup(arg, GFP_KERNEL);
1152 1151 if (!qf_names[qtype]) {
1153 1152 reiserfs_warning(s, "reiserfs-2502",
1154 1153 "not enough memory "
··· 1155 1156 "quotafile name.");
1156 1157 return 0;
1157 1158 }
1158 - strcpy(qf_names[qtype], arg);
1159 1159 if (qtype == USRQUOTA)
1160 1160 *mount_options |= 1 << REISERFS_USRQUOTA;
1161 1161 else
+1
fs/squashfs/super.c
··· 489 489 .kill_sb = kill_block_super,
490 490 .fs_flags = FS_REQUIRES_DEV
491 491 };
492 + MODULE_ALIAS_FS("squashfs");
492 493
493 494 static const struct super_operations squashfs_super_ops = {
494 495 .alloc_inode = squashfs_alloc_inode,
+1
fs/sysv/super.c
··· 555 555 .fs_flags = FS_REQUIRES_DEV,
556 556 };
557 557 MODULE_ALIAS_FS("v7");
558 + MODULE_ALIAS("v7");
558 559
559 560 static int __init init_sysv_fs(void)
560 561 {
+1
fs/udf/super.c
··· 118 118 .kill_sb = kill_block_super,
119 119 .fs_flags = FS_REQUIRES_DEV,
120 120 };
121 + MODULE_ALIAS_FS("udf");
121 122
122 123 static struct kmem_cache *udf_inode_cachep;
123 124
+3
include/acpi/processor.h
··· 235 235 if a _PPC object exists, rmmod is disallowed then */
236 236 int acpi_processor_notify_smm(struct module *calling_module);
237 237
238 + /* parsing the _P* objects. */
239 + extern int acpi_processor_get_performance_info(struct acpi_processor *pr);
240 +
238 241 /* for communication between multiple parts of the processor kernel module */
239 242 DECLARE_PER_CPU(struct acpi_processor *, processors);
240 243 extern struct acpi_processor_errata errata;
-6
include/asm-generic/atomic.h
··· 136 136 #define atomic_xchg(ptr, v) (xchg(&(ptr)->counter, (v)))
137 137 #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), (old), (new)))
138 138
139 - #define cmpxchg_local(ptr, o, n) \
140 - ((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
141 - (unsigned long)(n), sizeof(*(ptr))))
142 -
143 - #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
144 -
145 139 static inline int __atomic_add_unless(atomic_t *v, int a, int u)
146 140 {
147 141 int c, old;
+10
include/asm-generic/cmpxchg.h
··· 92 92 */
93 93 #include <asm-generic/cmpxchg-local.h>
94 94
95 + #ifndef cmpxchg_local
96 + #define cmpxchg_local(ptr, o, n) \
97 + ((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
98 + (unsigned long)(n), sizeof(*(ptr))))
99 + #endif
100 +
101 + #ifndef cmpxchg64_local
102 + #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
103 + #endif
104 +
95 105 #define cmpxchg(ptr, o, n) cmpxchg_local((ptr), (o), (n))
96 106 #define cmpxchg64(ptr, o, n) cmpxchg64_local((ptr), (o), (n))
97 107
+51 -17
include/linux/idr.h
··· 73 73 */
74 74
75 75 void *idr_find_slowpath(struct idr *idp, int id);
76 - int idr_pre_get(struct idr *idp, gfp_t gfp_mask);
77 - int idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id);
78 76 void idr_preload(gfp_t gfp_mask);
79 77 int idr_alloc(struct idr *idp, void *ptr, int start, int end, gfp_t gfp_mask);
80 78 int idr_for_each(struct idr *idp,
··· 97 99
98 100 /**
99 101 * idr_find - return pointer for given id
100 - * @idp: idr handle
102 + * @idr: idr handle
101 103 * @id: lookup key
102 104 *
103 105 * Return the pointer given the id it has been registered with. A %NULL
··· 118 120 }
119 121
120 122 /**
121 - * idr_get_new - allocate new idr entry
122 - * @idp: idr handle
123 - * @ptr: pointer you want associated with the id
124 - * @id: pointer to the allocated handle
125 - *
126 - * Simple wrapper around idr_get_new_above() w/ @starting_id of zero.
127 - */
128 - static inline int idr_get_new(struct idr *idp, void *ptr, int *id)
129 - {
130 - return idr_get_new_above(idp, ptr, 0, id);
131 - }
132 -
133 - /**
134 123 * idr_for_each_entry - iterate over an idr's elements of a given type
135 124 * @idp: idr handle
136 125 * @entry: the type * to use as cursor
··· 128 143 entry != NULL; \
129 144 ++id, entry = (typeof(entry))idr_get_next((idp), &(id)))
130 145
131 - void __idr_remove_all(struct idr *idp); /* don't use */
146 + /*
147 + * Don't use the following functions. These exist only to suppress
148 + * deprecated warnings on EXPORT_SYMBOL()s.
149 + */
150 + int __idr_pre_get(struct idr *idp, gfp_t gfp_mask);
151 + int __idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id);
152 + void __idr_remove_all(struct idr *idp);
153 +
154 + /**
155 + * idr_pre_get - reserve resources for idr allocation
156 + * @idp: idr handle
157 + * @gfp_mask: memory allocation flags
158 + *
159 + * Part of old alloc interface. This is going away. Use
160 + * idr_preload[_end]() and idr_alloc() instead.
161 + */
162 + static inline int __deprecated idr_pre_get(struct idr *idp, gfp_t gfp_mask)
163 + {
164 + return __idr_pre_get(idp, gfp_mask);
165 + }
166 +
167 + /**
168 + * idr_get_new_above - allocate new idr entry above or equal to a start id
169 + * @idp: idr handle
170 + * @ptr: pointer you want associated with the id
171 + * @starting_id: id to start search at
172 + * @id: pointer to the allocated handle
173 + *
174 + * Part of old alloc interface. This is going away. Use
175 + * idr_preload[_end]() and idr_alloc() instead.
176 + */
177 + static inline int __deprecated idr_get_new_above(struct idr *idp, void *ptr,
178 + int starting_id, int *id)
179 + {
180 + return __idr_get_new_above(idp, ptr, starting_id, id);
181 + }
182 +
183 + /**
184 + * idr_get_new - allocate new idr entry
185 + * @idp: idr handle
186 + * @ptr: pointer you want associated with the id
187 + * @id: pointer to the allocated handle
188 + *
189 + * Part of old alloc interface. This is going away. Use
190 + * idr_preload[_end]() and idr_alloc() instead.
191 + */
192 + static inline int __deprecated idr_get_new(struct idr *idp, void *ptr, int *id)
193 + {
194 + return __idr_get_new_above(idp, ptr, 0, id);
195 + }
132 196
133 197 /**
134 198 * idr_remove_all - remove all ids from the given idr tree
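The deprecation notes in the idr.h hunk above describe the migration from the two-step idr_pre_get()/idr_get_new_above() interface to idr_alloc(), and the nfs4state.c hunk shows one such conversion. A minimal in-kernel sketch of the new pattern (not compilable outside a kernel tree; `my_lock`, `my_idr`, `ptr` and `start` are hypothetical names):

```
/* Hedged sketch of the idr_alloc() calling convention.
 * idr_preload() pins preallocated nodes so idr_alloc() can run
 * under a spinlock with GFP_NOWAIT; end == 0 means no upper bound.
 */
int id;

idr_preload(GFP_KERNEL);
spin_lock(&my_lock);
id = idr_alloc(&my_idr, ptr, start, 0, GFP_NOWAIT);
spin_unlock(&my_lock);
idr_preload_end();
if (id < 0)
	return id;	/* -ENOMEM or -ENOSPC instead of the old 0/-errno split */
```

Unlike the old interface, the allocated ID is the return value itself, which is why the nfs4_alloc_stid() conversion above collapses two calls into a single `new_id < 0` check.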
+6 -3
include/linux/iio/common/st_sensors.h
··· 227 227 };
228 228
229 229 #ifdef CONFIG_IIO_BUFFER
230 + irqreturn_t st_sensors_trigger_handler(int irq, void *p);
231 +
232 + int st_sensors_get_buffer_element(struct iio_dev *indio_dev, u8 *buf);
233 + #endif
234 +
235 + #ifdef CONFIG_IIO_TRIGGER
230 236 int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
231 237 const struct iio_trigger_ops *trigger_ops);
232 238
233 239 void st_sensors_deallocate_trigger(struct iio_dev *indio_dev);
234 240
235 - irqreturn_t st_sensors_trigger_handler(int irq, void *p);
236 -
237 - int st_sensors_get_buffer_element(struct iio_dev *indio_dev, u8 *buf);
238 241 #else
239 242 static inline int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
240 243 const struct iio_trigger_ops *trigger_ops)
+3 -1
include/linux/list.h
··· 667 667 pos = n)
668 668
669 669 #define hlist_entry_safe(ptr, type, member) \
670 - (ptr) ? hlist_entry(ptr, type, member) : NULL
670 + ({ typeof(ptr) ____ptr = (ptr); \
671 + ____ptr ? hlist_entry(____ptr, type, member) : NULL; \
672 + })
671 673
672 674 /**
673 675 * hlist_for_each_entry - iterate over list of given type
+1
include/linux/mfd/palmas.h
··· 221 221 };
222 222
223 223 struct palmas_platform_data {
224 + int irq_flags;
224 225 int gpio_base;
225 226
226 227 /* bit value to be loaded to the POWER_CTRL register */
+1
include/linux/mfd/tps65912.h
··· 323 323 void tps65912_device_exit(struct tps65912 *tps65912);
324 324 int tps65912_irq_init(struct tps65912 *tps65912, int irq,
325 325 struct tps65912_platform_data *pdata);
326 + int tps65912_irq_exit(struct tps65912 *tps65912);
326 327
327 328 #endif /* __LINUX_MFD_TPS65912_H */
+2
include/linux/mfd/wm831x/auxadc.h
··· 15 15 #ifndef __MFD_WM831X_AUXADC_H__
16 16 #define __MFD_WM831X_AUXADC_H__
17 17
18 + struct wm831x;
19 +
18 20 /*
19 21 * R16429 (0x402D) - AuxADC Data
20 22 */
+1 -1
include/linux/mfd/wm831x/core.h
··· 20 20 #include <linux/irqdomain.h>
21 21 #include <linux/list.h>
22 22 #include <linux/regmap.h>
23 + #include <linux/mfd/wm831x/auxadc.h>
23 24
24 25 /*
25 26 * Register values.
··· 356 355 };
357 356
358 357 struct wm831x;
359 - enum wm831x_auxadc;
360 358
361 359 typedef int (*wm831x_auxadc_read_fn)(struct wm831x *wm831x,
362 360 enum wm831x_auxadc input);
+6
include/linux/perf_event.h
··· 799 799 static inline void perf_event_task_tick(void) { }
800 800 #endif
801 801
802 + #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
803 + extern void perf_restore_debug_store(void);
804 + #else
805 + static inline void perf_restore_debug_store(void) { }
806 + #endif
807 +
802 808 #define perf_output_put(handle, x) perf_output_copy((handle), &(x), sizeof(x))
803 809
804 810 /*
+1
include/linux/res_counter.h
··· 14 14 */
15 15
16 16 #include <linux/cgroup.h>
17 + #include <linux/errno.h>
17 18
18 19 /*
19 20 * The core object. the cgroup that wishes to account for some
+2 -1
include/linux/usb/composite.h
··· 60 60 * @name: For diagnostics, identifies the function.
61 61 * @strings: tables of strings, keyed by identifiers assigned during bind()
62 62 * and by language IDs provided in control requests
63 - * @descriptors: Table of full (or low) speed descriptors, using interface and
63 + * @fs_descriptors: Table of full (or low) speed descriptors, using interface and
64 64 * string identifiers assigned during @bind(). If this pointer is null,
65 65 * the function will not be available at full speed (or at low speed).
66 66 * @hs_descriptors: Table of high speed descriptors, using interface and
··· 290 290 * after function notifications
291 291 * @resume: Notifies configuration when the host restarts USB traffic,
292 292 * before function notifications
293 + * @gadget_driver: Gadget driver controlling this driver
293 294 *
294 295 * Devices default to reporting self powered operation. Devices which rely
295 296 * on bus powered operation should report this in their @bind method.
+4 -2
include/uapi/linux/acct.h
··· 107 107 #define ACORE 0x08 /* ... dumped core */
108 108 #define AXSIG 0x10 /* ... was killed by a signal */
109 109
110 - #ifdef __BIG_ENDIAN
110 + #if defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
111 111 #define ACCT_BYTEORDER 0x80 /* accounting file is big endian */
112 - #else
112 + #elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
113 113 #define ACCT_BYTEORDER 0x00 /* accounting file is little endian */
114 + #else
115 + #error unspecified endianness
114 116 #endif
115 117
116 118 #ifndef __KERNEL__
+2 -2
include/uapi/linux/aio_abi.h
··· 62 62 __s64 res2; /* secondary result */
63 63 };
64 64
65 - #if defined(__LITTLE_ENDIAN)
65 + #if defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
66 66 #define PADDED(x,y) x, y
67 - #elif defined(__BIG_ENDIAN)
67 + #elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
68 68 #define PADDED(x,y) y, x
69 69 #else
70 70 #error edit for your odd byteorder.
+4 -2
include/uapi/linux/raid/md_p.h
··· 145 145 __u32 failed_disks; /* 4 Number of failed disks */
146 146 __u32 spare_disks; /* 5 Number of spare disks */
147 147 __u32 sb_csum; /* 6 checksum of the whole superblock */
148 - #ifdef __BIG_ENDIAN
148 + #if defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
149 149 __u32 events_hi; /* 7 high-order of superblock update count */
150 150 __u32 events_lo; /* 8 low-order of superblock update count */
151 151 __u32 cp_events_hi; /* 9 high-order of checkpoint update count */
152 152 __u32 cp_events_lo; /* 10 low-order of checkpoint update count */
153 - #else
153 + #elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
154 154 __u32 events_lo; /* 7 low-order of superblock update count */
155 155 __u32 events_hi; /* 8 high-order of superblock update count */
156 156 __u32 cp_events_lo; /* 9 low-order of checkpoint update count */
157 157 __u32 cp_events_hi; /* 10 high-order of checkpoint update count */
158 + #else
159 + #error unspecified endianness
158 160 #endif
159 161 __u32 recovery_cp; /* 11 recovery checkpoint sector count */
160 162 /* There are only valid for minor_version > 90 */
+4 -1
include/uapi/linux/serial_core.h
··· 51 51 #define PORT_8250_CIR 23 /* CIR infrared port, has its own driver */
52 52 #define PORT_XR17V35X 24 /* Exar XR17V35x UARTs */
53 53 #define PORT_BRCM_TRUMANAGE 25
54 - #define PORT_MAX_8250 25 /* max port ID */
54 + #define PORT_ALTR_16550_F32 26 /* Altera 16550 UART with 32 FIFOs */
55 + #define PORT_ALTR_16550_F64 27 /* Altera 16550 UART with 64 FIFOs */
56 + #define PORT_ALTR_16550_F128 28 /* Altera 16550 UART with 128 FIFOs */
57 + #define PORT_MAX_8250 28 /* max port ID */
55 58
56 59 /*
57 60 * ARM specific type numbers. These are not currently guaranteed
-4
init/Kconfig
··· 28 28
29 29 menu "General setup"
30 30
31 - config EXPERIMENTAL
32 - bool
33 - default y
34 -
35 31 config BROKEN
36 32 bool
37 33
+4 -1
kernel/fork.c
··· 1141 1141 if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS))
1142 1142 return ERR_PTR(-EINVAL);
1143 1143
1144 + if ((clone_flags & (CLONE_NEWUSER|CLONE_FS)) == (CLONE_NEWUSER|CLONE_FS))
1145 + return ERR_PTR(-EINVAL);
1146 +
1144 1147 /*
1145 1148 * Thread groups must share signals as well, and detached threads
1146 1149 * can only be started up within the thread group.
··· 1810 1807 * If unsharing a user namespace must also unshare the thread.
1811 1808 */
1812 1809 if (unshare_flags & CLONE_NEWUSER)
1813 - unshare_flags |= CLONE_THREAD;
1810 + unshare_flags |= CLONE_THREAD | CLONE_FS;
1814 1811 /*
1815 1812 * If unsharing a pid namespace must also unshare the thread.
1816 1813 */
+23 -23
kernel/futex.c
··· 223 223 * @rw: mapping needs to be read/write (values: VERIFY_READ,
224 224 * VERIFY_WRITE)
225 225 *
226 - * Returns a negative error code or 0
226 + * Return: a negative error code or 0
227 + *
227 228 * The key words are stored in *key on success.
228 229 *
229 230 * For shared mappings, it's (page->index, file_inode(vma->vm_file),
··· 706 705 * be "current" except in the case of requeue pi.
707 706 * @set_waiters: force setting the FUTEX_WAITERS bit (1) or not (0)
708 707 *
709 - * Returns:
710 - * 0 - ready to wait
711 - * 1 - acquired the lock
708 + * Return:
709 + * 0 - ready to wait;
710 + * 1 - acquired the lock;
712 711 * <0 - error
713 712 *
714 713 * The hb->lock and futex_key refs shall be held by the caller.
··· 1192 1191 * then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit.
1193 1192 * hb1 and hb2 must be held by the caller.
1194 1193 *
1195 - * Returns:
1196 - * 0 - failed to acquire the lock atomicly
1197 - * 1 - acquired the lock
1194 + * Return:
1195 + * 0 - failed to acquire the lock atomically;
1196 + * 1 - acquired the lock;
1198 1197 * <0 - error
1199 1198 */
1200 1199 static int futex_proxy_trylock_atomic(u32 __user *pifutex,
··· 1255 1254 * Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire
1256 1255 * uaddr2 atomically on behalf of the top waiter.
1257 1256 *
1258 - * Returns:
1259 - * >=0 - on success, the number of tasks requeued or woken
1257 + * Return:
1258 + * >=0 - on success, the number of tasks requeued or woken;
1260 1259 * <0 - on error
1261 1260 */
1262 1261 static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
··· 1537 1536 * The q->lock_ptr must not be held by the caller. A call to unqueue_me() must
1538 1537 * be paired with exactly one earlier call to queue_me().
1539 1538 *
1540 - * Returns:
1541 - * 1 - if the futex_q was still queued (and we removed unqueued it)
1539 + * Return:
1540 + * 1 - if the futex_q was still queued (and we removed unqueued it);
1542 1541 * 0 - if the futex_q was already removed by the waking thread
1543 1542 */
1544 1543 static int unqueue_me(struct futex_q *q)
··· 1708 1707 * the pi_state owner as well as handle race conditions that may allow us to
1709 1708 * acquire the lock. Must be called with the hb lock held.
1710 1709 *
1711 - * Returns:
1712 - * 1 - success, lock taken
1713 - * 0 - success, lock not taken
1710 + * Return:
1711 + * 1 - success, lock taken;
1712 + * 0 - success, lock not taken;
1714 1713 * <0 - on error (-EFAULT)
1715 1714 */
1716 1715 static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
··· 1825 1824 * Return with the hb lock held and a q.key reference on success, and unlocked
1826 1825 * with no q.key reference on failure.
1827 1826 *
1828 - * Returns:
1829 - * 0 - uaddr contains val and hb has been locked
1827 + * Return:
1828 + * 0 - uaddr contains val and hb has been locked;
1830 1829 * <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
1831 1830 */
1832 1831 static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
··· 2204 2203 * the wakeup and return the appropriate error code to the caller. Must be
2205 2204 * called with the hb lock held.
2206 2205 *
2207 - * Returns
2208 - * 0 - no early wakeup detected
2209 - * <0 - -ETIMEDOUT or -ERESTARTNOINTR
2206 + * Return:
2207 + * 0 = no early wakeup detected;
2208 + * <0 = -ETIMEDOUT or -ERESTARTNOINTR
2210 2209 */
2211 2210 static inline
2212 2211 int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb,
··· 2248 2247 * @val: the expected value of uaddr
2249 2248 * @abs_time: absolute timeout
2250 2249 * @bitset: 32 bit wakeup bitset set by userspace, defaults to all
2251 - * @clockrt: whether to use CLOCK_REALTIME (1) or CLOCK_MONOTONIC (0)
2252 2250 * @uaddr2: the pi futex we will take prior to returning to user-space
2253 2251 *
2254 2252 * The caller will wait on uaddr and will be requeued by futex_requeue() to
··· 2258 2258 * there was a need to.
2259 2259 *
2260 2260 * We call schedule in futex_wait_queue_me() when we enqueue and return there
2261 - * via the following:
2261 + * via the following--
2262 2262 * 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue()
2263 2263 * 2) wakeup on uaddr2 after a requeue
2264 2264 * 3) signal
··· 2276 2276 *
2277 2277 * If 4 or 7, we cleanup and return with -ETIMEDOUT.
2278 2278 *
2279 - * Returns:
2280 - * 0 - On success
2279 + * Return:
2280 + * 0 - On success;
2281 2281 * <0 - On error
2282 2282 */
2283 2283 static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+4 -1
kernel/signal.c
··· 485 485 if (force_default || ka->sa.sa_handler != SIG_IGN) 486 486 ka->sa.sa_handler = SIG_DFL; 487 487 ka->sa.sa_flags = 0; 488 + #ifdef __ARCH_HAS_SA_RESTORER 489 + ka->sa.sa_restorer = NULL; 490 + #endif 488 491 sigemptyset(&ka->sa.sa_mask); 489 492 ka++; 490 493 } ··· 2685 2682 /** 2686 2683 * sys_rt_sigpending - examine a pending signal that has been raised 2687 2684 * while blocked 2688 - * @set: stores pending signals 2685 + * @uset: stores pending signals 2689 2686 * @sigsetsize: size of sigset_t type or larger 2690 2687 */ 2691 2688 SYSCALL_DEFINE2(rt_sigpending, sigset_t __user *, uset, size_t, sigsetsize)
+14 -10
kernel/trace/Kconfig
··· 414 414 def_bool n 415 415 416 416 config DYNAMIC_FTRACE 417 - bool "enable/disable ftrace tracepoints dynamically" 417 + bool "enable/disable function tracing dynamically" 418 418 depends on FUNCTION_TRACER 419 419 depends on HAVE_DYNAMIC_FTRACE 420 420 default y 421 421 help 422 - This option will modify all the calls to ftrace dynamically 423 - (will patch them out of the binary image and replace them 424 - with a No-Op instruction) as they are called. A table is 425 - created to dynamically enable them again. 422 + This option will modify all the calls to function tracing 423 + dynamically (will patch them out of the binary image and 424 + replace them with a No-Op instruction) on boot up. During 425 + compile time, a table is made of all the locations that ftrace 426 + can function trace, and this table is linked into the kernel 427 + image. When this is enabled, functions can be individually 428 + enabled, and the functions not enabled will not affect 429 + performance of the system. 430 + 431 + See the files in /sys/kernel/debug/tracing: 432 + available_filter_functions 433 + set_ftrace_filter 434 + set_ftrace_notrace 426 435 427 436 This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but 428 437 otherwise has native performance as long as no tracing is active. 429 - 430 - The changes to the code are done by a kernel thread that 431 - wakes up once a second and checks to see if any ftrace calls 432 - were made. If so, it runs stop_machine (stops all CPUS) 433 - and modifies the code to jump over the call to ftrace. 434 438 435 439 config DYNAMIC_FTRACE_WITH_REGS 436 440 def_bool y
+24 -3
kernel/trace/trace.c
··· 2400 2400 seq_printf(m, "# MAY BE MISSING FUNCTION EVENTS\n"); 2401 2401 } 2402 2402 2403 + #ifdef CONFIG_TRACER_MAX_TRACE 2404 + static void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter) 2405 + { 2406 + if (iter->trace->allocated_snapshot) 2407 + seq_printf(m, "#\n# * Snapshot is allocated *\n#\n"); 2408 + else 2409 + seq_printf(m, "#\n# * Snapshot is freed *\n#\n"); 2410 + 2411 + seq_printf(m, "# Snapshot commands:\n"); 2412 + seq_printf(m, "# echo 0 > snapshot : Clears and frees snapshot buffer\n"); 2413 + seq_printf(m, "# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.\n"); 2414 + seq_printf(m, "# Takes a snapshot of the main buffer.\n"); 2415 + seq_printf(m, "# echo 2 > snapshot : Clears snapshot buffer (but does not allocate)\n"); 2416 + seq_printf(m, "# (Doesn't have to be '2' works with any number that\n"); 2417 + seq_printf(m, "# is not a '0' or '1')\n"); 2418 + } 2419 + #else 2420 + /* Should never be called */ 2421 + static inline void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter) { } 2422 + #endif 2423 + 2403 2424 static int s_show(struct seq_file *m, void *v) 2404 2425 { 2405 2426 struct trace_iterator *iter = v; ··· 2432 2411 seq_puts(m, "#\n"); 2433 2412 test_ftrace_alive(m); 2434 2413 } 2435 - if (iter->trace && iter->trace->print_header) 2414 + if (iter->snapshot && trace_empty(iter)) 2415 + print_snapshot_help(m, iter); 2416 + else if (iter->trace && iter->trace->print_header) 2436 2417 iter->trace->print_header(m); 2437 2418 else 2438 2419 trace_default_header(m); ··· 4167 4144 default: 4168 4145 if (current_trace->allocated_snapshot) 4169 4146 tracing_reset_online_cpus(&max_tr); 4170 - else 4171 - ret = -EINVAL; 4172 4147 break; 4173 4148 } 4174 4149
+4
kernel/user_namespace.c
··· 21 21 #include <linux/uaccess.h> 22 22 #include <linux/ctype.h> 23 23 #include <linux/projid.h> 24 + #include <linux/fs_struct.h> 24 25 25 26 static struct kmem_cache *user_ns_cachep __read_mostly; 26 27 ··· 836 835 837 836 /* Threaded processes may not enter a different user namespace */ 838 837 if (atomic_read(&current->mm->mm_users) > 1) 838 + return -EINVAL; 839 + 840 + if (current->fs->users != 1) 839 841 return -EINVAL; 840 842 841 843 if (!ns_capable(user_ns, CAP_SYS_ADMIN))
+4 -3
kernel/workqueue.c
··· 457 457 int ret; 458 458 459 459 mutex_lock(&worker_pool_idr_mutex); 460 - idr_pre_get(&worker_pool_idr, GFP_KERNEL); 461 - ret = idr_get_new(&worker_pool_idr, pool, &pool->id); 460 + ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_KERNEL); 461 + if (ret >= 0) 462 + pool->id = ret; 462 463 mutex_unlock(&worker_pool_idr_mutex); 463 464 464 - return ret; 465 + return ret < 0 ? ret : 0; 465 466 } 466 467 467 468 /*
+30 -50
lib/idr.c
··· 106 106 if (layer_idr) 107 107 return get_from_free_list(layer_idr); 108 108 109 - /* try to allocate directly from kmem_cache */ 110 - new = kmem_cache_zalloc(idr_layer_cache, gfp_mask); 109 + /* 110 + * Try to allocate directly from kmem_cache. We want to try this 111 + * before preload buffer; otherwise, non-preloading idr_alloc() 112 + * users will end up taking advantage of preloading ones. As the 113 + * following is allowed to fail for preloaded cases, suppress 114 + * warning this time. 115 + */ 116 + new = kmem_cache_zalloc(idr_layer_cache, gfp_mask | __GFP_NOWARN); 111 117 if (new) 112 118 return new; 113 119 ··· 121 115 * Try to fetch one from the per-cpu preload buffer if in process 122 116 * context. See idr_preload() for details. 123 117 */ 124 - if (in_interrupt()) 125 - return NULL; 126 - 127 - preempt_disable(); 128 - new = __this_cpu_read(idr_preload_head); 129 - if (new) { 130 - __this_cpu_write(idr_preload_head, new->ary[0]); 131 - __this_cpu_dec(idr_preload_cnt); 132 - new->ary[0] = NULL; 118 + if (!in_interrupt()) { 119 + preempt_disable(); 120 + new = __this_cpu_read(idr_preload_head); 121 + if (new) { 122 + __this_cpu_write(idr_preload_head, new->ary[0]); 123 + __this_cpu_dec(idr_preload_cnt); 124 + new->ary[0] = NULL; 125 + } 126 + preempt_enable(); 127 + if (new) 128 + return new; 133 129 } 134 - preempt_enable(); 135 - return new; 130 + 131 + /* 132 + * Both failed. Try kmem_cache again w/o adding __GFP_NOWARN so 133 + * that memory allocation failure warning is printed as intended. 134 + */ 135 + return kmem_cache_zalloc(idr_layer_cache, gfp_mask); 136 136 } 137 137 138 138 static void idr_layer_rcu_free(struct rcu_head *head) ··· 196 184 } 197 185 } 198 186 199 - /** 200 - * idr_pre_get - reserve resources for idr allocation 201 - * @idp: idr handle 202 - * @gfp_mask: memory allocation flags 203 - * 204 - * This function should be called prior to calling the idr_get_new* functions. 
205 - * It preallocates enough memory to satisfy the worst possible allocation. The 206 - * caller should pass in GFP_KERNEL if possible. This of course requires that 207 - * no spinning locks be held. 208 - * 209 - * If the system is REALLY out of memory this function returns %0, 210 - * otherwise %1. 211 - */ 212 - int idr_pre_get(struct idr *idp, gfp_t gfp_mask) 187 + int __idr_pre_get(struct idr *idp, gfp_t gfp_mask) 213 188 { 214 189 while (idp->id_free_cnt < MAX_IDR_FREE) { 215 190 struct idr_layer *new; ··· 207 208 } 208 209 return 1; 209 210 } 210 - EXPORT_SYMBOL(idr_pre_get); 211 + EXPORT_SYMBOL(__idr_pre_get); 211 212 212 213 /** 213 214 * sub_alloc - try to allocate an id without growing the tree depth 214 215 * @idp: idr handle 215 216 * @starting_id: id to start search at 216 - * @id: pointer to the allocated handle 217 217 * @pa: idr_layer[MAX_IDR_LEVEL] used as backtrack buffer 218 218 * @gfp_mask: allocation mask for idr_layer_alloc() 219 219 * @layer_idr: optional idr passed to idr_layer_alloc() ··· 374 376 idr_mark_full(pa, id); 375 377 } 376 378 377 - /** 378 - * idr_get_new_above - allocate new idr entry above or equal to a start id 379 - * @idp: idr handle 380 - * @ptr: pointer you want associated with the id 381 - * @starting_id: id to start search at 382 - * @id: pointer to the allocated handle 383 - * 384 - * This is the allocate id function. It should be called with any 385 - * required locks. 386 - * 387 - * If allocation from IDR's private freelist fails, idr_get_new_above() will 388 - * return %-EAGAIN. The caller should retry the idr_pre_get() call to refill 389 - * IDR's preallocation and then retry the idr_get_new_above() call. 390 - * 391 - * If the idr is full idr_get_new_above() will return %-ENOSPC. 392 - * 393 - * @id returns a value in the range @starting_id ... 
%0x7fffffff 394 - */ 395 - int idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id) 379 + int __idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id) 396 380 { 397 381 struct idr_layer *pa[MAX_IDR_LEVEL + 1]; 398 382 int rv; ··· 387 407 *id = rv; 388 408 return 0; 389 409 } 390 - EXPORT_SYMBOL(idr_get_new_above); 410 + EXPORT_SYMBOL(__idr_get_new_above); 391 411 392 412 /** 393 413 * idr_preload - preload for idr_alloc() ··· 888 908 int ida_pre_get(struct ida *ida, gfp_t gfp_mask) 889 909 { 890 910 /* allocate idr_layers */ 891 - if (!idr_pre_get(&ida->idr, gfp_mask)) 911 + if (!__idr_pre_get(&ida->idr, gfp_mask)) 892 912 return 0; 893 913 894 914 /* allocate free_bitmap */
+1 -1
lib/xz/Kconfig
··· 15 15 16 16 config XZ_DEC_POWERPC 17 17 bool "PowerPC BCJ filter decoder" 18 - default y if POWERPC 18 + default y if PPC 19 19 select XZ_DEC_BCJ 20 20 21 21 config XZ_DEC_IA64
+6 -2
mm/Kconfig
··· 286 286 default "1" 287 287 288 288 config VIRT_TO_BUS 289 - def_bool y 290 - depends on HAVE_VIRT_TO_BUS 289 + bool 290 + help 291 + An architecture should select this if it implements the 292 + deprecated interface virt_to_bus(). All new architectures 293 + should probably not select this. 294 + 291 295 292 296 config MMU_NOTIFIER 293 297 bool
+3 -2
mm/fremap.c
··· 129 129 struct vm_area_struct *vma; 130 130 int err = -EINVAL; 131 131 int has_write_lock = 0; 132 - vm_flags_t vm_flags; 132 + vm_flags_t vm_flags = 0; 133 133 134 134 if (prot) 135 135 return err; ··· 254 254 */ 255 255 256 256 out: 257 - vm_flags = vma->vm_flags; 257 + if (vma) 258 + vm_flags = vma->vm_flags; 258 259 if (likely(!has_write_lock)) 259 260 up_read(&mm->mmap_sem); 260 261 else
+1 -1
mm/memory_hotplug.c
··· 1801 1801 int retry = 1; 1802 1802 1803 1803 start_pfn = PFN_DOWN(start); 1804 - end_pfn = start_pfn + PFN_DOWN(size); 1804 + end_pfn = PFN_UP(start + size - 1); 1805 1805 1806 1806 /* 1807 1807 * When CONFIG_MEMCG is on, one memory block may be used by other
-8
mm/process_vm_access.c
··· 429 429 if (flags != 0) 430 430 return -EINVAL; 431 431 432 - if (!access_ok(VERIFY_READ, lvec, liovcnt * sizeof(*lvec))) 433 - goto out; 434 - 435 - if (!access_ok(VERIFY_READ, rvec, riovcnt * sizeof(*rvec))) 436 - goto out; 437 - 438 432 if (vm_write) 439 433 rc = compat_rw_copy_check_uvector(WRITE, lvec, liovcnt, 440 434 UIO_FASTIOV, iovstack_l, ··· 453 459 kfree(iov_r); 454 460 if (iov_l != iovstack_l) 455 461 kfree(iov_l); 456 - 457 - out: 458 462 return rc; 459 463 } 460 464
+1 -1
net/bridge/br_device.c
··· 66 66 goto out; 67 67 } 68 68 69 - mdst = br_mdb_get(br, skb); 69 + mdst = br_mdb_get(br, skb, vid); 70 70 if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) 71 71 br_multicast_deliver(mdst, skb); 72 72 else
+1 -1
net/bridge/br_input.c
··· 97 97 if (is_broadcast_ether_addr(dest)) 98 98 skb2 = skb; 99 99 else if (is_multicast_ether_addr(dest)) { 100 - mdst = br_mdb_get(br, skb); 100 + mdst = br_mdb_get(br, skb, vid); 101 101 if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) { 102 102 if ((mdst && mdst->mglist) || 103 103 br_multicast_is_router(br))
+4
net/bridge/br_mdb.c
··· 80 80 port = p->port; 81 81 if (port) { 82 82 struct br_mdb_entry e; 83 + memset(&e, 0, sizeof(e)); 83 84 e.ifindex = port->dev->ifindex; 84 85 e.state = p->state; 85 86 if (p->addr.proto == htons(ETH_P_IP)) ··· 137 136 break; 138 137 139 138 bpm = nlmsg_data(nlh); 139 + memset(bpm, 0, sizeof(*bpm)); 140 140 bpm->ifindex = dev->ifindex; 141 141 if (br_mdb_fill_info(skb, cb, dev) < 0) 142 142 goto out; ··· 173 171 return -EMSGSIZE; 174 172 175 173 bpm = nlmsg_data(nlh); 174 + memset(bpm, 0, sizeof(*bpm)); 176 175 bpm->family = AF_BRIDGE; 177 176 bpm->ifindex = dev->ifindex; 178 177 nest = nla_nest_start(skb, MDBA_MDB); ··· 231 228 { 232 229 struct br_mdb_entry entry; 233 230 231 + memset(&entry, 0, sizeof(entry)); 234 232 entry.ifindex = port->dev->ifindex; 235 233 entry.addr.proto = group->proto; 236 234 entry.addr.u.ip4 = group->u.ip4;
+2 -1
net/bridge/br_multicast.c
··· 132 132 #endif 133 133 134 134 struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 135 - struct sk_buff *skb) 135 + struct sk_buff *skb, u16 vid) 136 136 { 137 137 struct net_bridge_mdb_htable *mdb = rcu_dereference(br->mdb); 138 138 struct br_ip ip; ··· 144 144 return NULL; 145 145 146 146 ip.proto = skb->protocol; 147 + ip.vid = vid; 147 148 148 149 switch (skb->protocol) { 149 150 case htons(ETH_P_IP):
+2 -2
net/bridge/br_private.h
··· 442 442 struct net_bridge_port *port, 443 443 struct sk_buff *skb); 444 444 extern struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 445 - struct sk_buff *skb); 445 + struct sk_buff *skb, u16 vid); 446 446 extern void br_multicast_add_port(struct net_bridge_port *port); 447 447 extern void br_multicast_del_port(struct net_bridge_port *port); 448 448 extern void br_multicast_enable_port(struct net_bridge_port *port); ··· 504 504 } 505 505 506 506 static inline struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 507 - struct sk_buff *skb) 507 + struct sk_buff *skb, u16 vid) 508 508 { 509 509 return NULL; 510 510 }
+29 -13
net/ceph/osdmap.c
··· 654 654 return 0; 655 655 } 656 656 657 + static int __decode_pgid(void **p, void *end, struct ceph_pg *pg) 658 + { 659 + u8 v; 660 + 661 + ceph_decode_need(p, end, 1+8+4+4, bad); 662 + v = ceph_decode_8(p); 663 + if (v != 1) 664 + goto bad; 665 + pg->pool = ceph_decode_64(p); 666 + pg->seed = ceph_decode_32(p); 667 + *p += 4; /* skip preferred */ 668 + return 0; 669 + 670 + bad: 671 + dout("error decoding pgid\n"); 672 + return -EINVAL; 673 + } 674 + 657 675 /* 658 676 * decode a full map. 659 677 */ ··· 763 745 for (i = 0; i < len; i++) { 764 746 int n, j; 765 747 struct ceph_pg pgid; 766 - struct ceph_pg_v1 pgid_v1; 767 748 struct ceph_pg_mapping *pg; 768 749 769 - ceph_decode_need(p, end, sizeof(u32) + sizeof(u64), bad); 770 - ceph_decode_copy(p, &pgid_v1, sizeof(pgid_v1)); 771 - pgid.pool = le32_to_cpu(pgid_v1.pool); 772 - pgid.seed = le16_to_cpu(pgid_v1.ps); 750 + err = __decode_pgid(p, end, &pgid); 751 + if (err) 752 + goto bad; 753 + ceph_decode_need(p, end, sizeof(u32), bad); 773 754 n = ceph_decode_32(p); 774 755 err = -EINVAL; 775 756 if (n > (UINT_MAX - sizeof(*pg)) / sizeof(u32)) ··· 835 818 u16 version; 836 819 837 820 ceph_decode_16_safe(p, end, version, bad); 838 - if (version > 6) { 839 - pr_warning("got unknown v %d > %d of inc osdmap\n", version, 6); 821 + if (version != 6) { 822 + pr_warning("got unknown v %d != 6 of inc osdmap\n", version); 840 823 goto bad; 841 824 } 842 825 ··· 980 963 while (len--) { 981 964 struct ceph_pg_mapping *pg; 982 965 int j; 983 - struct ceph_pg_v1 pgid_v1; 984 966 struct ceph_pg pgid; 985 967 u32 pglen; 986 - ceph_decode_need(p, end, sizeof(u64) + sizeof(u32), bad); 987 - ceph_decode_copy(p, &pgid_v1, sizeof(pgid_v1)); 988 - pgid.pool = le32_to_cpu(pgid_v1.pool); 989 - pgid.seed = le16_to_cpu(pgid_v1.ps); 990 - pglen = ceph_decode_32(p); 991 968 969 + err = __decode_pgid(p, end, &pgid); 970 + if (err) 971 + goto bad; 972 + ceph_decode_need(p, end, sizeof(u32), bad); 973 + pglen = ceph_decode_32(p); 992 974 if 
(pglen) { 993 975 ceph_decode_need(p, end, pglen*sizeof(u32), bad); 994 976
+3 -2
net/core/dev.c
··· 3444 3444 } 3445 3445 switch (rx_handler(&skb)) { 3446 3446 case RX_HANDLER_CONSUMED: 3447 + ret = NET_RX_SUCCESS; 3447 3448 goto unlock; 3448 3449 case RX_HANDLER_ANOTHER: 3449 3450 goto another_round; ··· 4104 4103 * Allow this to run for 2 jiffies since which will allow 4105 4104 * an average latency of 1.5/HZ. 4106 4105 */ 4107 - if (unlikely(budget <= 0 || time_after(jiffies, time_limit))) 4106 + if (unlikely(budget <= 0 || time_after_eq(jiffies, time_limit))) 4108 4107 goto softnet_break; 4109 4108 4110 4109 local_irq_enable(); ··· 4781 4780 /** 4782 4781 * dev_change_carrier - Change device carrier 4783 4782 * @dev: device 4784 - * @new_carries: new value 4783 + * @new_carrier: new value 4785 4784 * 4786 4785 * Change device carrier 4787 4786 */
+1
net/core/rtnetlink.c
··· 979 979 * report anything. 980 980 */ 981 981 ivi.spoofchk = -1; 982 + memset(ivi.mac, 0, sizeof(ivi.mac)); 982 983 if (dev->netdev_ops->ndo_get_vf_config(dev, i, &ivi)) 983 984 break; 984 985 vf_mac.vf =
+8
net/dcb/dcbnl.c
··· 284 284 if (!netdev->dcbnl_ops->getpermhwaddr) 285 285 return -EOPNOTSUPP; 286 286 287 + memset(perm_addr, 0, sizeof(perm_addr)); 287 288 netdev->dcbnl_ops->getpermhwaddr(netdev, perm_addr); 288 289 289 290 return nla_put(skb, DCB_ATTR_PERM_HWADDR, sizeof(perm_addr), perm_addr); ··· 1043 1042 1044 1043 if (ops->ieee_getets) { 1045 1044 struct ieee_ets ets; 1045 + memset(&ets, 0, sizeof(ets)); 1046 1046 err = ops->ieee_getets(netdev, &ets); 1047 1047 if (!err && 1048 1048 nla_put(skb, DCB_ATTR_IEEE_ETS, sizeof(ets), &ets)) ··· 1052 1050 1053 1051 if (ops->ieee_getmaxrate) { 1054 1052 struct ieee_maxrate maxrate; 1053 + memset(&maxrate, 0, sizeof(maxrate)); 1055 1054 err = ops->ieee_getmaxrate(netdev, &maxrate); 1056 1055 if (!err) { 1057 1056 err = nla_put(skb, DCB_ATTR_IEEE_MAXRATE, ··· 1064 1061 1065 1062 if (ops->ieee_getpfc) { 1066 1063 struct ieee_pfc pfc; 1064 + memset(&pfc, 0, sizeof(pfc)); 1067 1065 err = ops->ieee_getpfc(netdev, &pfc); 1068 1066 if (!err && 1069 1067 nla_put(skb, DCB_ATTR_IEEE_PFC, sizeof(pfc), &pfc)) ··· 1098 1094 /* get peer info if available */ 1099 1095 if (ops->ieee_peer_getets) { 1100 1096 struct ieee_ets ets; 1097 + memset(&ets, 0, sizeof(ets)); 1101 1098 err = ops->ieee_peer_getets(netdev, &ets); 1102 1099 if (!err && 1103 1100 nla_put(skb, DCB_ATTR_IEEE_PEER_ETS, sizeof(ets), &ets)) ··· 1107 1102 1108 1103 if (ops->ieee_peer_getpfc) { 1109 1104 struct ieee_pfc pfc; 1105 + memset(&pfc, 0, sizeof(pfc)); 1110 1106 err = ops->ieee_peer_getpfc(netdev, &pfc); 1111 1107 if (!err && 1112 1108 nla_put(skb, DCB_ATTR_IEEE_PEER_PFC, sizeof(pfc), &pfc)) ··· 1286 1280 /* peer info if available */ 1287 1281 if (ops->cee_peer_getpg) { 1288 1282 struct cee_pg pg; 1283 + memset(&pg, 0, sizeof(pg)); 1289 1284 err = ops->cee_peer_getpg(netdev, &pg); 1290 1285 if (!err && 1291 1286 nla_put(skb, DCB_ATTR_CEE_PEER_PG, sizeof(pg), &pg)) ··· 1295 1288 1296 1289 if (ops->cee_peer_getpfc) { 1297 1290 struct cee_pfc pfc; 1291 + memset(&pfc, 0, 
sizeof(pfc)); 1298 1292 err = ops->cee_peer_getpfc(netdev, &pfc); 1299 1293 if (!err && 1300 1294 nla_put(skb, DCB_ATTR_CEE_PEER_PFC, sizeof(pfc), &pfc))
+1 -1
net/ieee802154/6lowpan.h
··· 84 84 (memcmp(addr1, addr2, length >> 3) == 0) 85 85 86 86 /* local link, i.e. FE80::/10 */ 87 - #define is_addr_link_local(a) (((a)->s6_addr16[0]) == 0x80FE) 87 + #define is_addr_link_local(a) (((a)->s6_addr16[0]) == htons(0xFE80)) 88 88 89 89 /* 90 90 * check whether we can compress the IID to 16 bits,
+1
net/ipv4/inet_connection_sock.c
··· 735 735 * tcp/dccp_create_openreq_child(). 736 736 */ 737 737 void inet_csk_prepare_forced_close(struct sock *sk) 738 + __releases(&sk->sk_lock.slock) 738 739 { 739 740 /* sk_clone_lock locked the socket and set refcnt to 2 */ 740 741 bh_unlock_sock(sk);
+1 -1
net/ipv4/ip_options.c
··· 423 423 put_unaligned_be32(midtime, timeptr); 424 424 opt->is_changed = 1; 425 425 } 426 - } else { 426 + } else if ((optptr[3]&0xF) != IPOPT_TS_PRESPEC) { 427 427 unsigned int overflow = optptr[3]>>4; 428 428 if (overflow == 15) { 429 429 pp_ptr = optptr + 3;
+2 -1
net/ipv6/ip6_input.c
··· 281 281 * IPv6 multicast router mode is now supported ;) 282 282 */ 283 283 if (dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding && 284 - !(ipv6_addr_type(&hdr->daddr) & IPV6_ADDR_LINKLOCAL) && 284 + !(ipv6_addr_type(&hdr->daddr) & 285 + (IPV6_ADDR_LOOPBACK|IPV6_ADDR_LINKLOCAL)) && 285 286 likely(!(IP6CB(skb)->flags & IP6SKB_FORWARDED))) { 286 287 /* 287 288 * Okay, we try to forward - split and duplicate
+16 -13
net/irda/ircomm/ircomm_tty.c
··· 280 280 struct tty_port *port = &self->port; 281 281 DECLARE_WAITQUEUE(wait, current); 282 282 int retval; 283 - int do_clocal = 0, extra_count = 0; 283 + int do_clocal = 0; 284 284 unsigned long flags; 285 285 286 286 IRDA_DEBUG(2, "%s()\n", __func__ ); ··· 289 289 * If non-blocking mode is set, or the port is not enabled, 290 290 * then make the check up front and then exit. 291 291 */ 292 - if (filp->f_flags & O_NONBLOCK || tty->flags & (1 << TTY_IO_ERROR)){ 293 - /* nonblock mode is set or port is not enabled */ 292 + if (test_bit(TTY_IO_ERROR, &tty->flags)) { 293 + port->flags |= ASYNC_NORMAL_ACTIVE; 294 + return 0; 295 + } 296 + 297 + if (filp->f_flags & O_NONBLOCK) { 298 + /* nonblock mode is set */ 299 + if (tty->termios.c_cflag & CBAUD) 300 + tty_port_raise_dtr_rts(port); 294 301 port->flags |= ASYNC_NORMAL_ACTIVE; 295 302 IRDA_DEBUG(1, "%s(), O_NONBLOCK requested!\n", __func__ ); 296 303 return 0; ··· 322 315 __FILE__, __LINE__, tty->driver->name, port->count); 323 316 324 317 spin_lock_irqsave(&port->lock, flags); 325 - if (!tty_hung_up_p(filp)) { 326 - extra_count = 1; 318 + if (!tty_hung_up_p(filp)) 327 319 port->count--; 328 - } 329 - spin_unlock_irqrestore(&port->lock, flags); 330 320 port->blocked_open++; 321 + spin_unlock_irqrestore(&port->lock, flags); 331 322 332 323 while (1) { 333 324 if (tty->termios.c_cflag & CBAUD) 334 325 tty_port_raise_dtr_rts(port); 335 326 336 - current->state = TASK_INTERRUPTIBLE; 327 + set_current_state(TASK_INTERRUPTIBLE); 337 328 338 329 if (tty_hung_up_p(filp) || 339 330 !test_bit(ASYNCB_INITIALIZED, &port->flags)) { ··· 366 361 __set_current_state(TASK_RUNNING); 367 362 remove_wait_queue(&port->open_wait, &wait); 368 363 369 - if (extra_count) { 370 - /* ++ is not atomic, so this should be protected - Jean II */ 371 - spin_lock_irqsave(&port->lock, flags); 364 + spin_lock_irqsave(&port->lock, flags); 365 + if (!tty_hung_up_p(filp)) 372 366 port->count++; 373 - spin_unlock_irqrestore(&port->lock, flags); 374 - } 
375 367 port->blocked_open--; 368 + spin_unlock_irqrestore(&port->lock, flags); 376 369 377 370 IRDA_DEBUG(1, "%s(%d):block_til_ready after blocking on %s open_count=%d\n", 378 371 __FILE__, __LINE__, tty->driver->name, port->count);
+4 -4
net/key/af_key.c
··· 2201 2201 XFRM_POLICY_BLOCK : XFRM_POLICY_ALLOW); 2202 2202 xp->priority = pol->sadb_x_policy_priority; 2203 2203 2204 - sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1], 2204 + sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1]; 2205 2205 xp->family = pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.saddr); 2206 2206 if (!xp->family) { 2207 2207 err = -EINVAL; ··· 2214 2214 if (xp->selector.sport) 2215 2215 xp->selector.sport_mask = htons(0xffff); 2216 2216 2217 - sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1], 2217 + sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1]; 2218 2218 pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.daddr); 2219 2219 xp->selector.prefixlen_d = sa->sadb_address_prefixlen; 2220 2220 ··· 2315 2315 2316 2316 memset(&sel, 0, sizeof(sel)); 2317 2317 2318 - sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1], 2318 + sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1]; 2319 2319 sel.family = pfkey_sadb_addr2xfrm_addr(sa, &sel.saddr); 2320 2320 sel.prefixlen_s = sa->sadb_address_prefixlen; 2321 2321 sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto); ··· 2323 2323 if (sel.sport) 2324 2324 sel.sport_mask = htons(0xffff); 2325 2325 2326 - sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1], 2326 + sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1]; 2327 2327 pfkey_sadb_addr2xfrm_addr(sa, &sel.daddr); 2328 2328 sel.prefixlen_d = sa->sadb_address_prefixlen; 2329 2329 sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+13 -8
net/mac80211/cfg.c
··· 3290 3290 int ret = -ENODATA; 3291 3291 3292 3292 rcu_read_lock(); 3293 - if (local->use_chanctx) { 3294 - chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf); 3295 - if (chanctx_conf) { 3296 - *chandef = chanctx_conf->def; 3297 - ret = 0; 3298 - } 3299 - } else if (local->open_count == local->monitors) { 3300 - *chandef = local->monitor_chandef; 3293 + chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf); 3294 + if (chanctx_conf) { 3295 + *chandef = chanctx_conf->def; 3296 + ret = 0; 3297 + } else if (local->open_count > 0 && 3298 + local->open_count == local->monitors && 3299 + sdata->vif.type == NL80211_IFTYPE_MONITOR) { 3300 + if (local->use_chanctx) 3301 + *chandef = local->monitor_chandef; 3302 + else 3303 + cfg80211_chandef_create(chandef, 3304 + local->_oper_channel, 3305 + local->_oper_channel_type); 3301 3306 ret = 0; 3302 3307 } 3303 3308 rcu_read_unlock();
+6
net/mac80211/iface.c
··· 541 541 542 542 ieee80211_adjust_monitor_flags(sdata, 1); 543 543 ieee80211_configure_filter(local); 544 + mutex_lock(&local->mtx); 545 + ieee80211_recalc_idle(local); 546 + mutex_unlock(&local->mtx); 544 547 545 548 netif_carrier_on(dev); 546 549 break; ··· 815 812 816 813 ieee80211_adjust_monitor_flags(sdata, -1); 817 814 ieee80211_configure_filter(local); 815 + mutex_lock(&local->mtx); 816 + ieee80211_recalc_idle(local); 817 + mutex_unlock(&local->mtx); 818 818 break; 819 819 case NL80211_IFTYPE_P2P_DEVICE: 820 820 /* relies on synchronize_rcu() below */
+23 -5
net/mac80211/mlme.c
··· 647 647 our_mcs = (le16_to_cpu(vht_cap.vht_mcs.rx_mcs_map) & 648 648 mask) >> shift; 649 649 650 + if (our_mcs == IEEE80211_VHT_MCS_NOT_SUPPORTED) 651 + continue; 652 + 650 653 switch (ap_mcs) { 651 654 default: 652 655 if (our_mcs <= ap_mcs) ··· 3506 3503 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 3507 3504 3508 3505 /* 3506 + * Stop timers before deleting work items, as timers 3507 + * could race and re-add the work-items. They will be 3508 + * re-established on connection. 3509 + */ 3510 + del_timer_sync(&ifmgd->conn_mon_timer); 3511 + del_timer_sync(&ifmgd->bcn_mon_timer); 3512 + 3513 + /* 3509 3514 * we need to use atomic bitops for the running bits 3510 3515 * only because both timers might fire at the same 3511 3516 * time -- the code here is properly synchronised. ··· 3527 3516 if (del_timer_sync(&ifmgd->timer)) 3528 3517 set_bit(TMR_RUNNING_TIMER, &ifmgd->timers_running); 3529 3518 3530 - cancel_work_sync(&ifmgd->chswitch_work); 3531 3519 if (del_timer_sync(&ifmgd->chswitch_timer)) 3532 3520 set_bit(TMR_RUNNING_CHANSW, &ifmgd->timers_running); 3533 - 3534 - /* these will just be re-established on connection */ 3535 - del_timer_sync(&ifmgd->conn_mon_timer); 3536 - del_timer_sync(&ifmgd->bcn_mon_timer); 3521 + cancel_work_sync(&ifmgd->chswitch_work); 3537 3522 } 3538 3523 3539 3524 void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata) ··· 4321 4314 void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata) 4322 4315 { 4323 4316 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 4317 + 4318 + /* 4319 + * Make sure some work items will not run after this, 4320 + * they will not do anything but might not have been 4321 + * cancelled when disconnecting. 
4322 + */ 4323 + cancel_work_sync(&ifmgd->monitor_work); 4324 + cancel_work_sync(&ifmgd->beacon_connection_loss_work); 4325 + cancel_work_sync(&ifmgd->request_smps_work); 4326 + cancel_work_sync(&ifmgd->csa_connection_drop_work); 4327 + cancel_work_sync(&ifmgd->chswitch_work); 4324 4328 4325 4329 mutex_lock(&ifmgd->mtx); 4326 4330 if (ifmgd->assoc_data)
+2 -1
net/mac80211/tx.c
··· 2745 2745 cpu_to_le16(IEEE80211_FCTL_MOREDATA); 2746 2746 } 2747 2747 2748 - sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev); 2748 + if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) 2749 + sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev); 2749 2750 if (!ieee80211_tx_prepare(sdata, &tx, skb)) 2750 2751 break; 2751 2752 dev_kfree_skb_any(skb);
+10 -1
net/netfilter/nf_conntrack_helper.c
··· 339 339 { 340 340 const struct nf_conn_help *help; 341 341 const struct nf_conntrack_helper *helper; 342 + struct va_format vaf; 343 + va_list args; 344 + 345 + va_start(args, fmt); 346 + 347 + vaf.fmt = fmt; 348 + vaf.va = &args; 342 349 343 350 /* Called from the helper function, this call never fails */ 344 351 help = nfct_help(ct); ··· 354 347 helper = rcu_dereference(help->helper); 355 348 356 349 nf_log_packet(nf_ct_l3num(ct), 0, skb, NULL, NULL, NULL, 357 - "nf_ct_%s: dropping packet: %s ", helper->name, fmt); 350 + "nf_ct_%s: dropping packet: %pV ", helper->name, &vaf); 351 + 352 + va_end(args); 358 353 } 359 354 EXPORT_SYMBOL_GPL(nf_ct_helper_log); 360 355
+1 -6
net/netfilter/nfnetlink.c
··· 62 62 } 63 63 EXPORT_SYMBOL_GPL(nfnl_unlock); 64 64 65 - static struct mutex *nfnl_get_lock(__u8 subsys_id) 66 - { 67 - return &table[subsys_id].mutex; 68 - } 69 - 70 65 int nfnetlink_subsys_register(const struct nfnetlink_subsystem *n) 71 66 { 72 67 nfnl_lock(n->subsys_id); ··· 194 199 rcu_read_unlock(); 195 200 nfnl_lock(subsys_id); 196 201 if (rcu_dereference_protected(table[subsys_id].subsys, 197 - lockdep_is_held(nfnl_get_lock(subsys_id))) != ss || 202 + lockdep_is_held(&table[subsys_id].mutex)) != ss || 198 203 nfnetlink_find_client(type, ss) != nc) 199 204 err = -EAGAIN; 200 205 else if (nc->call)
+3
net/netfilter/xt_AUDIT.c
··· 124 124 const struct xt_audit_info *info = par->targinfo; 125 125 struct audit_buffer *ab; 126 126 127 + if (audit_enabled == 0) 128 + goto errout; 129 + 127 130 ab = audit_log_start(NULL, GFP_ATOMIC, AUDIT_NETFILTER_PKT); 128 131 if (ab == NULL) 129 132 goto errout;
+11 -16
net/netlabel/netlabel_unlabeled.c
··· 1189 1189 struct netlbl_unlhsh_walk_arg cb_arg; 1190 1190 u32 skip_bkt = cb->args[0]; 1191 1191 u32 skip_chain = cb->args[1]; 1192 - u32 skip_addr4 = cb->args[2]; 1193 - u32 skip_addr6 = cb->args[3]; 1194 1192 u32 iter_bkt; 1195 1193 u32 iter_chain = 0, iter_addr4 = 0, iter_addr6 = 0; 1196 1194 struct netlbl_unlhsh_iface *iface; ··· 1213 1215 continue; 1214 1216 netlbl_af4list_foreach_rcu(addr4, 1215 1217 &iface->addr4_list) { 1216 - if (iter_addr4++ < skip_addr4) 1218 + if (iter_addr4++ < cb->args[2]) 1217 1219 continue; 1218 1220 if (netlbl_unlabel_staticlist_gen( 1219 1221 NLBL_UNLABEL_C_STATICLIST, ··· 1229 1231 #if IS_ENABLED(CONFIG_IPV6) 1230 1232 netlbl_af6list_foreach_rcu(addr6, 1231 1233 &iface->addr6_list) { 1232 - if (iter_addr6++ < skip_addr6) 1234 + if (iter_addr6++ < cb->args[3]) 1233 1235 continue; 1234 1236 if (netlbl_unlabel_staticlist_gen( 1235 1237 NLBL_UNLABEL_C_STATICLIST, ··· 1248 1250 1249 1251 unlabel_staticlist_return: 1250 1252 rcu_read_unlock(); 1251 - cb->args[0] = skip_bkt; 1252 - cb->args[1] = skip_chain; 1253 - cb->args[2] = skip_addr4; 1254 - cb->args[3] = skip_addr6; 1253 + cb->args[0] = iter_bkt; 1254 + cb->args[1] = iter_chain; 1255 + cb->args[2] = iter_addr4; 1256 + cb->args[3] = iter_addr6; 1255 1257 return skb->len; 1256 1258 } 1257 1259 ··· 1271 1273 { 1272 1274 struct netlbl_unlhsh_walk_arg cb_arg; 1273 1275 struct netlbl_unlhsh_iface *iface; 1274 - u32 skip_addr4 = cb->args[0]; 1275 - u32 skip_addr6 = cb->args[1]; 1276 - u32 iter_addr4 = 0; 1276 + u32 iter_addr4 = 0, iter_addr6 = 0; 1277 1277 struct netlbl_af4list *addr4; 1278 1278 #if IS_ENABLED(CONFIG_IPV6) 1279 - u32 iter_addr6 = 0; 1280 1279 struct netlbl_af6list *addr6; 1281 1280 #endif 1282 1281 ··· 1287 1292 goto unlabel_staticlistdef_return; 1288 1293 1289 1294 netlbl_af4list_foreach_rcu(addr4, &iface->addr4_list) { 1290 - if (iter_addr4++ < skip_addr4) 1295 + if (iter_addr4++ < cb->args[0]) 1291 1296 continue; 1292 1297
if (netlbl_unlabel_staticlist_gen(NLBL_UNLABEL_C_STATICLISTDEF, 1293 1298 iface, ··· 1300 1305 } 1301 1306 #if IS_ENABLED(CONFIG_IPV6) 1302 1307 netlbl_af6list_foreach_rcu(addr6, &iface->addr6_list) { 1303 - if (iter_addr6++ < skip_addr6) 1308 + if (iter_addr6++ < cb->args[1]) 1304 1309 continue; 1305 1310 if (netlbl_unlabel_staticlist_gen(NLBL_UNLABEL_C_STATICLISTDEF, 1306 1311 iface, ··· 1315 1320 1316 1321 unlabel_staticlistdef_return: 1317 1322 rcu_read_unlock(); 1318 - cb->args[0] = skip_addr4; 1319 - cb->args[1] = skip_addr6; 1323 + cb->args[0] = iter_addr4; 1324 + cb->args[1] = iter_addr6; 1320 1325 return skb->len; 1321 1326 } 1322 1327
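The netlabel hunk above changes the dump callbacks to write the *live* iterator counts back into cb->args[] instead of re-saving the stale skip_* values, so a resumed dump continues after the last emitted entry rather than repeating it. A minimal userspace sketch of that cursor-save pattern (hypothetical names; a fixed "budget" stands in for the skb filling up):

```c
#include <assert.h>

/* Emit up to 'budget' items per call, resuming from args[0], in the
 * style of a netlink dump callback: skip entries already delivered,
 * stop when the buffer is full, and save the iterator back into
 * args[0] for the next invocation (decrementing it for the entry we
 * could not emit, exactly as the fixed kernel code does). */
static int dump_items(const int *items, int n, unsigned long *args,
                      int *out, int budget)
{
    int iter = 0, emitted = 0;

    for (int i = 0; i < n; i++) {
        if (iter++ < (int)args[0])  /* already delivered earlier */
            continue;
        if (emitted == budget) {    /* "skb full": undo and stop */
            iter--;
            break;
        }
        out[emitted++] = items[i];
    }
    args[0] = iter;  /* save live iterator, not the stale skip value */
    return emitted;
}
```

Calling it repeatedly with the same args walks the whole list without duplicates, which is the invariant the kernel change restores.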
+1
net/rds/stats.c
··· 87 87 for (i = 0; i < nr; i++) { 88 88 BUG_ON(strlen(names[i]) >= sizeof(ctr.name)); 89 89 strncpy(ctr.name, names[i], sizeof(ctr.name) - 1); 90 + ctr.name[sizeof(ctr.name) - 1] = '\0'; 90 91 ctr.value = values[i]; 91 92 92 93 rds_info_copy(iter, &ctr, sizeof(ctr));
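The net/rds/stats.c hunk guards against a well-known strncpy() pitfall: when the source string is at least as long as the destination, strncpy() leaves the buffer without a NUL terminator. A minimal sketch of the two-line pattern the fix applies (the 8-byte field size is hypothetical, standing in for sizeof(ctr.name)):

```c
#include <assert.h>
#include <string.h>

#define NAME_LEN 8

/* Copy a counter name into a fixed-size field.  strncpy() does not
 * NUL-terminate when the source fills the buffer, so the terminator
 * is written explicitly -- the same pattern as the rds_stats fix. */
static void copy_name(char dst[NAME_LEN], const char *src)
{
    strncpy(dst, src, NAME_LEN - 1);
    dst[NAME_LEN - 1] = '\0';
}
```

Without the explicit terminator, a long name would make the later rds_info_copy() of the struct ship a non-terminated string to userspace.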
+45 -21
net/sched/sch_qfq.c
··· 298 298 new_num_classes == q->max_agg_classes - 1) /* agg no more full */ 299 299 hlist_add_head(&agg->nonfull_next, &q->nonfull_aggs); 300 300 301 + /* The next assignment may let 302 + * agg->initial_budget > agg->budgetmax 303 + * hold, we will take it into account in charge_actual_service(). 304 + */ 301 305 agg->budgetmax = new_num_classes * agg->lmax; 302 306 new_agg_weight = agg->class_weight * new_num_classes; 303 307 agg->inv_w = ONE_FP/new_agg_weight; ··· 821 817 unsigned long old_vslot = q->oldV >> q->min_slot_shift; 822 818 823 819 if (vslot != old_vslot) { 824 - unsigned long mask = (1UL << fls(vslot ^ old_vslot)) - 1; 820 + unsigned long mask = (1ULL << fls(vslot ^ old_vslot)) - 1; 825 821 qfq_move_groups(q, mask, IR, ER); 826 822 qfq_move_groups(q, mask, IB, EB); 827 823 } ··· 992 988 /* Update F according to the actual service received by the aggregate. */ 993 989 static inline void charge_actual_service(struct qfq_aggregate *agg) 994 990 { 995 - /* compute the service received by the aggregate */ 996 - u32 service_received = agg->initial_budget - agg->budget; 991 + /* Compute the service received by the aggregate, taking into 992 + * account that, after decreasing the number of classes in 993 + * agg, it may happen that 994 + * agg->initial_budget - agg->budget > agg->budgetmax 995 + */ 996 + u32 service_received = min(agg->budgetmax, 997 + agg->initial_budget - agg->budget); 997 998 998 999 agg->F = agg->S + (u64)service_received * agg->inv_w; 999 1000 } 1001 + 1002 + static inline void qfq_update_agg_ts(struct qfq_sched *q, 1003 + struct qfq_aggregate *agg, 1004 + enum update_reason reason); 1005 + 1006 + static void qfq_schedule_agg(struct qfq_sched *q, struct qfq_aggregate *agg); 1000 1007 1001 1008 static struct sk_buff *qfq_dequeue(struct Qdisc *sch) 1002 1009 { ··· 1036 1021 in_serv_agg->initial_budget = in_serv_agg->budget = 1037 1022 in_serv_agg->budgetmax; 1038 1023 1039 - if (!list_empty(&in_serv_agg->active)) 1024
if (!list_empty(&in_serv_agg->active)) { 1040 1025 /* 1041 1026 * Still active: reschedule for 1042 1027 * service. Possible optimization: if no other ··· 1047 1032 * handle it, we would need to maintain an 1048 1033 * extra num_active_aggs field. 1049 1034 */ 1050 - qfq_activate_agg(q, in_serv_agg, requeue); 1051 - else if (sch->q.qlen == 0) { /* no aggregate to serve */ 1035 + qfq_update_agg_ts(q, in_serv_agg, requeue); 1036 + qfq_schedule_agg(q, in_serv_agg); 1037 + } else if (sch->q.qlen == 0) { /* no aggregate to serve */ 1052 1038 q->in_serv_agg = NULL; 1053 1039 return NULL; 1054 1040 } ··· 1068 1052 qdisc_bstats_update(sch, skb); 1069 1053 1070 1054 agg_dequeue(in_serv_agg, cl, len); 1071 - in_serv_agg->budget -= len; 1055 + /* If lmax is lowered, through qfq_change_class, for a class 1056 + * owning pending packets with larger size than the new value 1057 + * of lmax, then the following condition may hold. 1058 + */ 1059 + if (unlikely(in_serv_agg->budget < len)) 1060 + in_serv_agg->budget = 0; 1061 + else 1062 + in_serv_agg->budget -= len; 1063 + 1072 1064 q->V += (u64)len * IWSUM; 1073 1065 pr_debug("qfq dequeue: len %u F %lld now %lld\n", 1074 1066 len, (unsigned long long) in_serv_agg->F, ··· 1241 1217 cl->deficit = agg->lmax; 1242 1218 list_add_tail(&cl->alist, &agg->active); 1243 1219 1244 - if (list_first_entry(&agg->active, struct qfq_class, alist) != cl) 1245 - return err; /* aggregate was not empty, nothing else to do */ 1220 + if (list_first_entry(&agg->active, struct qfq_class, alist) != cl || 1221 + q->in_serv_agg == agg) 1222 + return err; /* non-empty or in service, nothing else to do */ 1246 1223 1247 - /* recharge budget */ 1248 - agg->initial_budget = agg->budget = agg->budgetmax; 1249 - 1250 - qfq_update_agg_ts(q, agg, enqueue); 1251 - if (q->in_serv_agg == NULL) 1252 - q->in_serv_agg = agg; 1253 - else if (agg != q->in_serv_agg) 1254 - qfq_schedule_agg(q, agg); 1224 + qfq_activate_agg(q, agg, enqueue); 1255 1225 1256 1226 return err; 1257 1227 }
··· 1279 1261 /* group was surely ineligible, remove */ 1280 1262 __clear_bit(grp->index, &q->bitmaps[IR]); 1281 1263 __clear_bit(grp->index, &q->bitmaps[IB]); 1282 - } else if (!q->bitmaps[ER] && qfq_gt(roundedS, q->V)) 1264 + } else if (!q->bitmaps[ER] && qfq_gt(roundedS, q->V) && 1265 + q->in_serv_agg == NULL) 1283 1266 q->V = roundedS; 1284 1267 1285 1268 grp->S = roundedS; ··· 1303 1284 static void qfq_activate_agg(struct qfq_sched *q, struct qfq_aggregate *agg, 1304 1285 enum update_reason reason) 1305 1286 { 1287 + agg->initial_budget = agg->budget = agg->budgetmax; /* recharge budg. */ 1288 + 1306 1289 qfq_update_agg_ts(q, agg, reason); 1307 - qfq_schedule_agg(q, agg); 1290 + if (q->in_serv_agg == NULL) { /* no aggr. in service or scheduled */ 1291 + q->in_serv_agg = agg; /* start serving this aggregate */ 1292 + /* update V: to be in service, agg must be eligible */ 1293 + q->oldV = q->V = agg->S; 1294 + } else if (agg != q->in_serv_agg) 1295 + qfq_schedule_agg(q, agg); 1308 1296 } 1309 1297 1310 1298 static void qfq_slot_remove(struct qfq_sched *q, struct qfq_group *grp, ··· 1383 1357 __set_bit(grp->index, &q->bitmaps[s]); 1384 1358 } 1385 1359 } 1386 - 1387 - qfq_update_eligible(q); 1388 1360 } 1389 1361 1390 1362 static void qfq_qlen_notify(struct Qdisc *sch, unsigned long arg)
+8 -4
net/sunrpc/auth_gss/svcauth_gss.c
··· 447 447 else { 448 448 int N, i; 449 449 450 + /* 451 + * NOTE: we skip uid_valid()/gid_valid() checks here: 452 + * instead, -1 id's are later mapped to the 453 + * (export-specific) anonymous id by nfsd_setuser. 454 + * 455 + * (But supplementary gid's get no such special 456 + * treatment so are checked for validity here.) 457 + */ 450 458 /* uid */ 451 459 rsci.cred.cr_uid = make_kuid(&init_user_ns, id); 452 - if (!uid_valid(rsci.cred.cr_uid)) 453 - goto out; 454 460 455 461 /* gid */ 456 462 if (get_int(&mesg, &id)) 457 463 goto out; 458 464 rsci.cred.cr_gid = make_kgid(&init_user_ns, id); 459 - if (!gid_valid(rsci.cred.cr_gid)) 460 - goto out; 461 465 462 466 /* number of additional gid's */ 463 467 if (get_int(&mesg, &N))
+1
net/sunrpc/rpc_pipe.c
··· 1175 1175 .kill_sb = rpc_kill_sb, 1176 1176 }; 1177 1177 MODULE_ALIAS_FS("rpc_pipefs"); 1178 + MODULE_ALIAS("rpc_pipefs"); 1178 1179 1179 1180 static void 1180 1181 init_once(void *foo)
+10 -5
net/sunrpc/xprtsock.c
··· 849 849 xs_tcp_shutdown(xprt); 850 850 } 851 851 852 + static void xs_local_destroy(struct rpc_xprt *xprt) 853 + { 854 + xs_close(xprt); 855 + xs_free_peer_addresses(xprt); 856 + xprt_free(xprt); 857 + module_put(THIS_MODULE); 858 + } 859 + 852 860 /** 853 861 * xs_destroy - prepare to shutdown a transport 854 862 * @xprt: doomed transport ··· 870 862 871 863 cancel_delayed_work_sync(&transport->connect_worker); 872 864 873 - xs_close(xprt); 874 - xs_free_peer_addresses(xprt); 875 - xprt_free(xprt); 876 - module_put(THIS_MODULE); 865 + xs_local_destroy(xprt); 877 866 } 878 867 879 868 static inline struct rpc_xprt *xprt_from_sock(struct sock *sk) ··· 2487 2482 .send_request = xs_local_send_request, 2488 2483 .set_retrans_timeout = xprt_set_retrans_timeout_def, 2489 2484 .close = xs_close, 2490 - .destroy = xs_destroy, 2485 + .destroy = xs_local_destroy, 2491 2486 .print_stats = xs_local_print_stats, 2492 2487 }; 2493 2488
+1 -2
net/wireless/core.c
··· 367 367 rdev->wiphy.rts_threshold = (u32) -1; 368 368 rdev->wiphy.coverage_class = 0; 369 369 370 - rdev->wiphy.features = NL80211_FEATURE_SCAN_FLUSH | 371 - NL80211_FEATURE_ADVERTISE_CHAN_LIMITS; 370 + rdev->wiphy.features = NL80211_FEATURE_SCAN_FLUSH; 372 371 373 372 return &rdev->wiphy; 374 373 }
+25 -26
net/wireless/nl80211.c
··· 557 557 if ((chan->flags & IEEE80211_CHAN_RADAR) && 558 558 nla_put_flag(msg, NL80211_FREQUENCY_ATTR_RADAR)) 559 559 goto nla_put_failure; 560 - if ((chan->flags & IEEE80211_CHAN_NO_HT40MINUS) && 561 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_HT40_MINUS)) 562 - goto nla_put_failure; 563 - if ((chan->flags & IEEE80211_CHAN_NO_HT40PLUS) && 564 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_HT40_PLUS)) 565 - goto nla_put_failure; 566 - if ((chan->flags & IEEE80211_CHAN_NO_80MHZ) && 567 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_80MHZ)) 568 - goto nla_put_failure; 569 - if ((chan->flags & IEEE80211_CHAN_NO_160MHZ) && 570 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_160MHZ)) 571 - goto nla_put_failure; 572 560 573 561 if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_MAX_TX_POWER, 574 562 DBM_TO_MBM(chan->max_power))) ··· 1298 1310 dev->wiphy.max_acl_mac_addrs)) 1299 1311 goto nla_put_failure; 1300 1312 1301 - if (dev->wiphy.extended_capabilities && 1302 - (nla_put(msg, NL80211_ATTR_EXT_CAPA, 1303 - dev->wiphy.extended_capabilities_len, 1304 - dev->wiphy.extended_capabilities) || 1305 - nla_put(msg, NL80211_ATTR_EXT_CAPA_MASK, 1306 - dev->wiphy.extended_capabilities_len, 1307 - dev->wiphy.extended_capabilities_mask))) 1308 - goto nla_put_failure; 1309 - 1310 1313 return genlmsg_end(msg, hdr); 1311 1314 1312 1315 nla_put_failure: ··· 1307 1328 1308 1329 static int nl80211_dump_wiphy(struct sk_buff *skb, struct netlink_callback *cb) 1309 1330 { 1310 - int idx = 0; 1331 + int idx = 0, ret; 1311 1332 int start = cb->args[0]; 1312 1333 struct cfg80211_registered_device *dev; 1313 1334 ··· 1317 1338 continue; 1318 1339 if (++idx <= start) 1319 1340 continue; 1320 - if (nl80211_send_wiphy(skb, NETLINK_CB(cb->skb).portid, 1321 - cb->nlh->nlmsg_seq, NLM_F_MULTI, 1322 - dev) < 0) { 1341 + ret = nl80211_send_wiphy(skb, NETLINK_CB(cb->skb).portid, 1342 + cb->nlh->nlmsg_seq, NLM_F_MULTI, 1343 + dev); 1344 + if (ret < 0) { 1345 + /* 1346 + * If sending the wiphy data didn't fit (ENOBUFS or
1347 + * EMSGSIZE returned), this SKB is still empty (so 1348 + * it's not too big because another wiphy dataset is 1349 + * already in the skb) and we've not tried to adjust 1350 + * the dump allocation yet ... then adjust the alloc 1351 + * size to be bigger, and return 1 but with the empty 1352 + * skb. This results in an empty message being RX'ed 1353 + * in userspace, but that is ignored. 1354 + * 1355 + * We can then retry with the larger buffer. 1356 + */ 1357 + if ((ret == -ENOBUFS || ret == -EMSGSIZE) && 1358 + !skb->len && 1359 + cb->min_dump_alloc < 4096) { 1360 + cb->min_dump_alloc = 4096; 1361 + mutex_unlock(&cfg80211_mutex); 1362 + return 1; 1363 + } 1323 1364 idx--; 1324 1365 break; 1325 1366 } ··· 1356 1357 struct sk_buff *msg; 1357 1358 struct cfg80211_registered_device *dev = info->user_ptr[0]; 1358 1359 1359 - msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 1360 + msg = nlmsg_new(4096, GFP_KERNEL); 1360 1361 if (!msg) 1361 1362 return -ENOMEM; 1362 1363
+6 -5
scripts/Makefile.headersinst
··· 14 14 include $(kbuild-file) 15 15 16 16 # called may set destination dir (when installing to asm/) 17 - _dst := $(or $(destination-y),$(dst),$(obj)) 17 + _dst := $(if $(destination-y),$(destination-y),$(if $(dst),$(dst),$(obj))) 18 18 19 19 old-kbuild-file := $(srctree)/$(subst uapi/,,$(obj))/Kbuild 20 20 ifneq ($(wildcard $(old-kbuild-file)),) ··· 48 48 output-files := $(addprefix $(installdir)/, $(all-files)) 49 49 50 50 input-files := $(foreach hdr, $(header-y), \ 51 - $(or \ 51 + $(if $(wildcard $(srcdir)/$(hdr)), \ 52 52 $(wildcard $(srcdir)/$(hdr)), \ 53 - $(wildcard $(oldsrcdir)/$(hdr)), \ 54 - $(error Missing UAPI file $(srcdir)/$(hdr)) \ 53 + $(if $(wildcard $(oldsrcdir)/$(hdr)), \ 54 + $(wildcard $(oldsrcdir)/$(hdr)), \ 55 + $(error Missing UAPI file $(srcdir)/$(hdr))) \ 55 56 )) \ 56 57 $(foreach hdr, $(genhdr-y), \ 57 - $(or \ 58 + $(if $(wildcard $(gendir)/$(hdr)), \ 58 59 $(wildcard $(gendir)/$(hdr)), \ 59 60 $(error Missing generated UAPI file $(gendir)/$(hdr)) \ 60 61 ))
+2 -2
security/keys/compat.c
··· 40 40 ARRAY_SIZE(iovstack), 41 41 iovstack, &iov); 42 42 if (ret < 0) 43 - return ret; 43 + goto err; 44 44 if (ret == 0) 45 45 goto no_payload_free; 46 46 47 47 ret = keyctl_instantiate_key_common(id, iov, ioc, ret, ringid); 48 - 48 + err: 49 49 if (iov != iovstack) 50 50 kfree(iov); 51 51 return ret;
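The security/keys/compat.c hunk reroutes an early `return ret` through the existing cleanup, so an iov vector that was heap-allocated (when it didn't fit in iovstack) is no longer leaked on the error path. A generic userspace sketch of that single-exit cleanup pattern (hypothetical function and sizes, standing in for the keyctl compat path):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define STACK_SLOTS 4

/* Sum 'n' values via a scratch buffer, spilling to the heap when the
 * on-stack buffer is too small.  Any failure after the allocation must
 * leave through the shared cleanup label -- the pattern restored by
 * the compat fix, where an early 'return ret' leaked the heap copy. */
static int sum_values(const int *vals, int n, int *sum)
{
    int stackbuf[STACK_SLOTS];
    int *buf = stackbuf;
    int ret = 0, i;

    if (n > STACK_SLOTS) {
        buf = malloc(n * sizeof(*buf));
        if (!buf)
            return -1;      /* nothing allocated yet: plain return is safe */
    }
    if (!sum) {             /* error found after allocating: a bare
                             * 'return -1' here would leak 'buf' */
        ret = -1;
        goto err;
    }
    memcpy(buf, vals, n * sizeof(*buf));
    *sum = 0;
    for (i = 0; i < n; i++)
        *sum += buf[i];
err:
    if (buf != stackbuf)    /* shared cleanup frees only the heap copy */
        free(buf);
    return ret;
}
```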
+1 -1
security/keys/process_keys.c
··· 57 57 58 58 kenter("%p{%u}", user, uid); 59 59 60 - if (user->uid_keyring) { 60 + if (user->uid_keyring && user->session_keyring) { 61 61 kleave(" = 0 [exist]"); 62 62 return 0; 63 63 }
+4 -4
sound/core/seq/seq_timer.c
··· 290 290 tid.device = SNDRV_TIMER_GLOBAL_SYSTEM; 291 291 err = snd_timer_open(&t, str, &tid, q->queue); 292 292 } 293 - if (err < 0) { 294 - snd_printk(KERN_ERR "seq fatal error: cannot create timer (%i)\n", err); 295 - return err; 296 - } 293 + } 294 + if (err < 0) { 295 + snd_printk(KERN_ERR "seq fatal error: cannot create timer (%i)\n", err); 296 + return err; 297 297 } 298 298 t->callback = snd_seq_timer_interrupt; 299 299 t->callback_data = q;
+6
sound/oss/sequencer.c
··· 545 545 case MIDI_PGM_CHANGE: 546 546 if (seq_mode == SEQ_2) 547 547 { 548 + if (chn > 15) 549 + break; 550 + 548 551 synth_devs[dev]->chn_info[chn].pgm_num = p1; 549 552 if ((int) dev >= num_synths) 550 553 synth_devs[dev]->set_instr(dev, chn, p1); ··· 599 596 case MIDI_PITCH_BEND: 600 597 if (seq_mode == SEQ_2) 601 598 { 599 + if (chn > 15) 600 + break; 601 + 602 602 synth_devs[dev]->chn_info[chn].bender_value = w14; 603 603 604 604 if ((int) dev < num_synths)
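The OSS sequencer hunk rejects channel numbers above 15 before they are used to index the 16-entry chn_info array. A reduced sketch of that bounds check on untrusted input (hypothetical struct and function names):

```c
#include <assert.h>

#define NUM_CHANNELS 16

struct chn_info { int pgm_num; };

static struct chn_info chn_info[NUM_CHANNELS];

/* Store a program change, refusing out-of-range channels instead of
 * writing past the 16-slot array -- the bug the hunk fixes.  'chn'
 * comes from an event byte, so it must be validated, not trusted. */
static int set_program(unsigned int chn, int pgm)
{
    if (chn > 15)               /* same guard as the kernel patch */
        return -1;
    chn_info[chn].pgm_num = pgm;
    return 0;
}
```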
+2 -1
sound/pci/asihpi/asihpi.c
··· 2549 2549 2550 2550 static int snd_card_asihpi_mixer_new(struct snd_card_asihpi *asihpi) 2551 2551 { 2552 - struct snd_card *card = asihpi->card; 2552 + struct snd_card *card; 2553 2553 unsigned int idx = 0; 2554 2554 unsigned int subindex = 0; 2555 2555 int err; ··· 2557 2557 2558 2558 if (snd_BUG_ON(!asihpi)) 2559 2559 return -EINVAL; 2560 + card = asihpi->card; 2560 2561 strcpy(card->mixername, "Asihpi Mixer"); 2561 2562 2562 2563 err =
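The asihpi hunk fixes a classic "dereference before NULL check": `asihpi->card` was read in the declaration's initializer, before `snd_BUG_ON(!asihpi)` had a chance to reject a NULL pointer. A reduced sketch of the corrected ordering (hypothetical structs standing in for the ALSA types):

```c
#include <assert.h>
#include <stddef.h>

struct card { const char *name; };
struct ctx  { struct card *card; };

/* Fetch the card name, validating the context pointer *before*
 * touching any of its members -- the ordering the fix restores by
 * moving the 'card = asihpi->card' load below the NULL check. */
static const char *card_name(const struct ctx *ctx)
{
    struct card *card;          /* declared, but not yet loaded */

    if (!ctx)                   /* check first ... */
        return NULL;
    card = ctx->card;           /* ... dereference second */
    return card ? card->name : NULL;
}
```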
+15 -11
sound/pci/hda/hda_codec.c
··· 494 494 495 495 int snd_hda_get_num_raw_conns(struct hda_codec *codec, hda_nid_t nid) 496 496 { 497 - return get_num_conns(codec, nid) & AC_CLIST_LENGTH; 497 + return snd_hda_get_raw_connections(codec, nid, NULL, 0); 498 498 } 499 499 500 500 /** ··· 516 516 unsigned int shift, num_elems, mask; 517 517 hda_nid_t prev_nid; 518 518 int null_count = 0; 519 - 520 - if (snd_BUG_ON(!conn_list || max_conns <= 0)) 521 - return -EINVAL; 522 519 523 520 parm = get_num_conns(codec, nid); 524 521 if (!parm) ··· 542 545 AC_VERB_GET_CONNECT_LIST, 0); 543 546 if (parm == -1 && codec->bus->rirb_error) 544 547 return -EIO; 545 - conn_list[0] = parm & mask; 548 + if (conn_list) 549 + conn_list[0] = parm & mask; 546 550 return 1; 547 551 } 548 552 ··· 578 580 continue; 579 581 } 580 582 for (n = prev_nid + 1; n <= val; n++) { 581 - if (conns >= max_conns) 582 - return -ENOSPC; 583 - conn_list[conns++] = n; 583 + if (conn_list) { 584 + if (conns >= max_conns) 585 + return -ENOSPC; 586 + conn_list[conns] = n; 587 + } 588 + conns++; 584 589 } 585 590 } else { 586 - if (conns >= max_conns) 587 - return -ENOSPC; 588 - conn_list[conns++] = val; 591 + if (conn_list) { 592 + if (conns >= max_conns) 593 + return -ENOSPC; 594 + conn_list[conns] = val; 595 + } 596 + conns++; 589 597 } 590 598 prev_nid = val; 591 599 }
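The hda_codec.c change teaches snd_hda_get_raw_connections to accept a NULL list so callers can ask "how many entries?" without supplying a buffer, the same convention as `snprintf(NULL, 0, ...)`. A sketch of that pattern over a range-encoded list (hypothetical data layout, loosely modeled on the connection-list ranges in the hunk):

```c
#include <assert.h>
#include <stddef.h>

/* Expand a range-encoded list (pairs of {first, last}) into 'out'.
 * With out == NULL the function only counts entries, mirroring the
 * "pass NULL to query the required size" convention the hunk adds:
 * the counter advances whether or not a value is stored. */
static int expand_ranges(const int (*ranges)[2], int nranges,
                         int *out, int max)
{
    int conns = 0;

    for (int i = 0; i < nranges; i++) {
        for (int v = ranges[i][0]; v <= ranges[i][1]; v++) {
            if (out) {
                if (conns >= max)
                    return -1;  /* caller's buffer too small */
                out[conns] = v;
            }
            conns++;            /* counted in both modes */
        }
    }
    return conns;
}
```

A caller first probes with `expand_ranges(r, n, NULL, 0)`, allocates that many slots, then calls again with the real buffer.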
+15 -13
sound/pci/hda/patch_ca0132.c
··· 3239 3239 struct ca0132_spec *spec = codec->spec; 3240 3240 unsigned int tmp; 3241 3241 3242 - if (!dspload_is_loaded(codec)) 3242 + if (spec->dsp_state != DSP_DOWNLOADED) 3243 3243 return 0; 3244 3244 3245 3245 /* if CrystalVoice if off, vipsource should be 0 */ ··· 4267 4267 */ 4268 4268 static void ca0132_setup_defaults(struct hda_codec *codec) 4269 4269 { 4270 + struct ca0132_spec *spec = codec->spec; 4270 4271 unsigned int tmp; 4271 4272 int num_fx; 4272 4273 int idx, i; 4273 4274 4274 - if (!dspload_is_loaded(codec)) 4275 + if (spec->dsp_state != DSP_DOWNLOADED) 4275 4276 return; 4276 4277 4277 4278 /* out, in effects + voicefx */ ··· 4352 4351 return false; 4353 4352 4354 4353 dsp_os_image = (struct dsp_image_seg *)(fw_entry->data); 4355 - dspload_image(codec, dsp_os_image, 0, 0, true, 0); 4354 + if (dspload_image(codec, dsp_os_image, 0, 0, true, 0)) { 4355 + pr_err("ca0132 dspload_image failed.\n"); 4356 + goto exit_download; 4357 + } 4358 + 4356 4359 dsp_loaded = dspload_wait_loaded(codec); 4357 4360 4361 + exit_download: 4358 4362 release_firmware(fw_entry); 4359 - 4360 4363 4361 4364 return dsp_loaded; 4362 4365 } ··· 4372 4367 #ifndef CONFIG_SND_HDA_CODEC_CA0132_DSP 4373 4368 return; /* NOP */ 4374 4369 #endif 4375 - spec->dsp_state = DSP_DOWNLOAD_INIT; 4376 4370 4377 - if (spec->dsp_state == DSP_DOWNLOAD_INIT) { 4378 - chipio_enable_clocks(codec); 4379 - spec->dsp_state = DSP_DOWNLOADING; 4380 - if (!ca0132_download_dsp_images(codec)) 4381 - spec->dsp_state = DSP_DOWNLOAD_FAILED; 4382 - else 4383 - spec->dsp_state = DSP_DOWNLOADED; 4384 - } 4371 + chipio_enable_clocks(codec); 4372 + spec->dsp_state = DSP_DOWNLOADING; 4373 + if (!ca0132_download_dsp_images(codec)) 4374 + spec->dsp_state = DSP_DOWNLOAD_FAILED; 4375 + else 4376 + spec->dsp_state = DSP_DOWNLOADED; 4385 4377 4386 4378 if (spec->dsp_state == DSP_DOWNLOADED) 4387 4379 ca0132_set_dsp_msr(codec, true);
+4
sound/pci/hda/patch_cirrus.c
··· 506 506 if (!spec) 507 507 return -ENOMEM; 508 508 509 + spec->gen.automute_hook = cs_automute; 510 + 509 511 snd_hda_pick_fixup(codec, cs420x_models, cs420x_fixup_tbl, 510 512 cs420x_fixups); 511 513 snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE); ··· 894 892 spec = cs_alloc_spec(codec, CS4210_VENDOR_NID); 895 893 if (!spec) 896 894 return -ENOMEM; 895 + 896 + spec->gen.automute_hook = cs_automute; 897 897 898 898 snd_hda_pick_fixup(codec, cs421x_models, cs421x_fixup_tbl, 899 899 cs421x_fixups);
+29
sound/pci/hda/patch_sigmatel.c
··· 815 815 return 0; 816 816 } 817 817 818 + /* check whether a built-in speaker is included in parsed pins */ 819 + static bool has_builtin_speaker(struct hda_codec *codec) 820 + { 821 + struct sigmatel_spec *spec = codec->spec; 822 + hda_nid_t *nid_pin; 823 + int nids, i; 824 + 825 + if (spec->gen.autocfg.line_out_type == AUTO_PIN_SPEAKER_OUT) { 826 + nid_pin = spec->gen.autocfg.line_out_pins; 827 + nids = spec->gen.autocfg.line_outs; 828 + } else { 829 + nid_pin = spec->gen.autocfg.speaker_pins; 830 + nids = spec->gen.autocfg.speaker_outs; 831 + } 832 + 833 + for (i = 0; i < nids; i++) { 834 + unsigned int def_conf = snd_hda_codec_get_pincfg(codec, nid_pin[i]); 835 + if (snd_hda_get_input_pin_attr(def_conf) == INPUT_PIN_ATTR_INT) 836 + return true; 837 + } 838 + return false; 839 + } 840 + 818 841 /* 819 842 * PC beep controls 820 843 */ ··· 3912 3889 stac_free(codec); 3913 3890 return err; 3914 3891 } 3892 + 3893 + /* Don't GPIO-mute speakers if there are no internal speakers, because 3894 + * the GPIO might be necessary for Headphone 3895 + */ 3896 + if (spec->eapd_switch && !has_builtin_speaker(codec)) 3897 + spec->eapd_switch = 0; 3915 3898 3916 3899 codec->proc_widget_hook = stac92hd7x_proc_hook; 3917 3900
+15
sound/usb/card.c
··· 244 244 usb_ifnum_to_if(dev, ctrlif)->intf_assoc; 245 245 246 246 if (!assoc) { 247 + /* 248 + * Firmware writers cannot count to three. So to find 249 + * the IAD on the NuForce UDH-100, also check the next 250 + * interface. 251 + */ 252 + struct usb_interface *iface = 253 + usb_ifnum_to_if(dev, ctrlif + 1); 254 + if (iface && 255 + iface->intf_assoc && 256 + iface->intf_assoc->bFunctionClass == USB_CLASS_AUDIO && 257 + iface->intf_assoc->bFunctionProtocol == UAC_VERSION_2) 258 + assoc = iface->intf_assoc; 259 + } 260 + 261 + if (!assoc) { 247 262 snd_printk(KERN_ERR "Audio class v2 interfaces need an interface association\n"); 248 263 return -EINVAL; 249 264 }
+1 -1
tools/usb/ffs-test.c
··· 38 38 #include <unistd.h> 39 39 #include <tools/le_byteshift.h> 40 40 41 - #include "../../include/linux/usb/functionfs.h" 41 + #include "../../include/uapi/linux/usb/functionfs.h" 42 42 43 43 44 44 /******************** Little Endian Handling ********************************/